31 December, 2016

12cR1 RAC Posts -- 2 : Convert AdminManaged DB to PolicyManaged

I have an AdminManaged Database in my (2-node) RAC Cluster.

How do I convert it to PolicyManaged ?

(Yes, let me admit : it makes no sense to have a PolicyManaged database on a 2-node Cluster.  But since I can't create an 8 or 16 node Cluster (with multiple databases to boot ?!), let me demonstrate with a 2-node Cluster.  The principle remains the same).

First, I show the configuration of the database in the Cluster :
[oracle@collabn1 ~]$ srvctl status database -d RAC
Instance RAC1 is running on node collabn1
Instance RAC2 is running on node collabn2
[oracle@collabn1 ~]$ srvctl config database -d RAC
Database unique name: RAC
Database name: RAC
Oracle home: /u01/app/oracle/product/12.1.0/dbhome_1
Oracle user: oracle
Spfile: +DATA/spfileRAC1.ora
Password file: +DATA/RAC/PASSWORD/pwdrac.277.931824933
Domain: racattack
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools:
Disk Groups: FRA,DATA
Mount point paths:
Services:
Type: RAC
Start concurrency:
Stop concurrency:
OSDBA group: dba
OSOPER group:
Database instances: RAC1,RAC2
Configured nodes: collabn1,collabn2
Database is administrator managed
[oracle@collabn1 ~]$


This is the current definition of server pool(s) :
[oracle@collabn1 ~]$ srvctl config srvpool
Server pool name: Free
Importance: 0, Min: 0, Max: -1
Category:
Candidate server names:
Server pool name: Generic
Importance: 0, Min: 0, Max: -1
Category:
Candidate server names: collabn1,collabn2
[oracle@collabn1 ~]$


So, we see that I don't have any custom Server Pools defined. Only the default FREE and GENERIC (used by AdminManaged database(s)) pools exist.
I now proceed to remove the database from the configuration.
[oracle@collabn1 ~]$ srvctl stop database -d RAC
[oracle@collabn1 ~]$ srvctl remove database -d RAC
Remove the database RAC? (y/[n]) y
[oracle@collabn1 ~]$


I now create a new (custom) Server Pool (called "MyPool").
[oracle@collabn1 ~]$ srvctl add srvpool -serverpool MyPool -importance 100 -min 1 -max 2 \
> -servers "collabn1,collabn2" -verbose
[oracle@collabn1 ~]$ srvctl config srvpool -serverpool MyPool
Server pool name: MyPool
Importance: 100, Min: 1, Max: 2
Category:
Candidate server names: collabn1,collabn2
[oracle@collabn1 ~]$


So, now that I have an "up to 2 nodes" Server Pool, I add my database to it.
[oracle@collabn1 ~]$ srvctl add database -d RAC -oraclehome /u01/app/oracle/product/12.1.0/dbhome_1 \
> -serverpool MyPool -verbose
[oracle@collabn1 ~]$ srvctl config database -d RAC
Database unique name: RAC
Database name:
Oracle home: /u01/app/oracle/product/12.1.0/dbhome_1
Oracle user: oracle
Spfile:
Password file:
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: MyPool
Disk Groups:
Mount point paths:
Services:
Type: RAC
Start concurrency:
Stop concurrency:
OSDBA group: dba
OSOPER group:
Database instances:
Configured nodes:
Database is policy managed
[oracle@collabn1 ~]$


This shows that RAC is now a PolicyManaged database in the "MyPool" Server Pool !
Can I now start the database and check on the instance(s) ?
[oracle@collabn1 ~]$ srvctl start database -d RAC
PRCR-1079 : Failed to start resource ora.rac.db
CRS-5017: The resource action "ora.rac.db start" encountered the following error:
ORA-01078: failure in processing system parameters
LRM-00109: could not open parameter file '/u01/app/oracle/product/12.1.0/dbhome_1/dbs/initRAC_2.ora'
. For details refer to "(:CLSN00107:)" in "/u01/app/oracle/diag/crs/collabn2/crs/trace/crsd_oraagent_oracle.trc".

CRS-2674: Start of 'ora.rac.db' on 'collabn2' failed
CRS-5017: The resource action "ora.rac.db start" encountered the following error:
ORA-01078: failure in processing system parameters
LRM-00109: could not open parameter file '/u01/app/oracle/product/12.1.0/dbhome_1/dbs/initRAC_1.ora'
. For details refer to "(:CLSN00107:)" in "/u01/app/oracle/diag/crs/collabn1/crs/trace/crsd_oraagent_oracle.trc".

CRS-2674: Start of 'ora.rac.db' on 'collabn1' failed
CRS-2632: There are no more servers to try to place resource 'ora.rac.db' on that would satisfy its placement policy
[oracle@collabn1 ~]$


Yes, of course. I need to create initRAC_1.ora and initRAC_2.ora.
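(I haven't reproduced the exact files here.  A minimal sketch, assuming each PFILE simply points to the existing SPFILE in the ASM DiskGroup shown in the original configuration (+DATA/spfileRAC1.ora), is a one-line PFILE in $ORACLE_HOME/dbs on each node :)

# initRAC_1.ora on collabn1 / initRAC_2.ora on collabn2 (one line each)
SPFILE='+DATA/spfileRAC1.ora'

(Alternatively, the SPFILE location could have been registered in the cluster configuration with srvctl modify database -d RAC -spfile '+DATA/spfileRAC1.ora'.)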
After creating the new parameter files (pointing to the SPFILE in the ASM Diskgroup), I try again.
[oracle@collabn1 ~]$ srvctl start database -d RAC
[oracle@collabn1 ~]$ srvctl status database -d RAC
Instance RAC_1 is running on node collabn1
Instance RAC_2 is running on node collabn2
[oracle@collabn1 ~]$ ps -ef |grep smon
oracle    3447     1  0 11:37 ?        00:00:00 asm_smon_+ASM1
root      3605     1  0 11:37 ?        00:00:11 /u01/app/12.1.0/grid/bin/osysmond.bin
oracle    4203     1  0 11:38 ?        00:00:00 mdb_smon_-MGMTDB
oracle   22882     1  0 12:08 ?        00:00:00 ora_smon_RAC_1
oracle   23422 12657  0 12:10 pts/0    00:00:00 grep smon
[oracle@collabn1 ~]$
[oracle@collabn2 ~]$ ps -ef |grep smon
oracle    3495     1  0 11:41 ?        00:00:00 asm_smon_+ASM2
root      3593     1  0 11:41 ?        00:00:09 /u01/app/12.1.0/grid/bin/osysmond.bin
oracle   15973     1  0 12:08 ?        00:00:00 ora_smon_RAC_2
oracle   16647  4582  0 12:10 pts/0    00:00:00 grep smon
[oracle@collabn2 ~]$
[oracle@collabn1 ~]$ srvctl config database -d RAC
Database unique name: RAC
Database name:
Oracle home: /u01/app/oracle/product/12.1.0/dbhome_1
Oracle user: oracle
Spfile:
Password file:
Domain: racattack
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: MyPool
Disk Groups: DATA,FRA
Mount point paths:
Services:
Type: RAC
Start concurrency:
Stop concurrency:
OSDBA group: dba
OSOPER group:
Database instances:
Configured nodes:
Database is policy managed
[oracle@collabn1 ~]$


Yes, I now have instances RAC_1 and RAC_2 (instead of RAC1 and RAC2) running. If I had more (4 or more ?!) nodes (and a Server Pool configuration to match), there would be no guarantee that RAC_1 starts on collabn1 (and RAC_2 on collabn2).  These are "floating" instances that can start on any node in the Cluster.
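(A sketch, not run here : shrinking the Server Pool should cause Grid Infrastructure to stop one of the floating instances, since the instance count follows the pool definition rather than a fixed node list.)

srvctl modify srvpool -serverpool MyPool -min 1 -max 1
srvctl status database -d RAC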


(UPDATE : It seems that when I shut down a node, a PolicyManaged Instance gets a SHUTDOWN ABORT, unlike an AdminManaged Instance which gets a SHUTDOWN NORMAL ?)

.
.
.

24 December, 2016

12cR1 RAC Posts -- 1 : Grid Infrastructure Install completed (first cycle)

Just as I had posted 11gR2 RAC Posts in 2014  (listed here), I plan to post some 12cR1 RAC (GI, ASM) posts over the next few weeks.

Here's my Grid Infrastructure up and running.  (Yes, I used racattack for this first 12cR1 RAC setup.)

[root@collabn1 ~]# crsctl status resource -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.SHARED.advm
               ONLINE  ONLINE       collabn1                 Volume device /dev/a
                                                             sm/shared-141 is onl
                                                             ine,STABLE
               ONLINE  ONLINE       collabn2                 Volume device /dev/a
                                                             sm/shared-141 is onl
                                                             ine,STABLE
ora.DATA.dg
               ONLINE  ONLINE       collabn1                 STABLE
               ONLINE  ONLINE       collabn2                 STABLE
ora.FRA.dg
               ONLINE  ONLINE       collabn1                 STABLE
               ONLINE  ONLINE       collabn2                 STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       collabn1                 STABLE
               ONLINE  ONLINE       collabn2                 STABLE
ora.asm
               ONLINE  ONLINE       collabn1                 Started,STABLE
               ONLINE  ONLINE       collabn2                 Started,STABLE
ora.data.shared.acfs
               ONLINE  ONLINE       collabn1                 mounted on /shared,S
                                                             TABLE
               ONLINE  ONLINE       collabn2                 mounted on /shared,S
                                                             TABLE
ora.net1.network
               ONLINE  ONLINE       collabn1                 STABLE
               ONLINE  ONLINE       collabn2                 STABLE
ora.ons
               ONLINE  ONLINE       collabn1                 STABLE
               ONLINE  ONLINE       collabn2                 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       collabn2                 STABLE
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       collabn1                 STABLE
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       collabn1                 STABLE
ora.MGMTLSNR
      1        ONLINE  ONLINE       collabn1                 169.254.3.70 172.16.
                                                             100.51,STABLE
ora.collabn1.vip
      1        ONLINE  ONLINE       collabn1                 STABLE
ora.collabn2.vip
      1        ONLINE  ONLINE       collabn2                 STABLE
ora.cvu
      1        ONLINE  ONLINE       collabn1                 STABLE
ora.mgmtdb
      1        ONLINE  ONLINE       collabn1                 Open,STABLE
ora.oc4j
      1        ONLINE  ONLINE       collabn1                 STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       collabn2                 STABLE
ora.scan2.vip
      1        ONLINE  ONLINE       collabn1                 STABLE
ora.scan3.vip
      1        ONLINE  ONLINE       collabn1                 STABLE
--------------------------------------------------------------------------------
[root@collabn1 ~]#
[root@collabn1 ~]# crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   96fbcb40bfeb4ff7bf18881adcfef149 (/dev/asm-disk1) [DATA]
Located 1 voting disk(s).
[root@collabn1 ~]#
[root@collabn1 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          4
         Total space (kbytes)     :     409568
         Used space (kbytes)      :       1632
         Available space (kbytes) :     407936
         ID                       :  827167720
         Device/File Name         :      +DATA
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

         Cluster registry integrity check succeeded

         Logical corruption check succeeded

[root@collabn1 ~]#
[root@collabn1 ~]# nslookup collabn-cluster-scan
Server:         192.168.78.51
Address:        192.168.78.51#53

Name:   collabn-cluster-scan.racattack
Address: 192.168.78.252
Name:   collabn-cluster-scan.racattack
Address: 192.168.78.253
Name:   collabn-cluster-scan.racattack
Address: 192.168.78.251

[root@collabn1 ~]#


I hope to run a few cycles of setups, switching to different node names, IPs, DiskGroup names etc., over the next few weeks.
.
.
.

06 December, 2016

12.2 New Features -- 5 : Memory Parameters for Pluggable Database

12.2 allows Instance Memory parameters to be configured at the PDB level.

[timestamp in UTC]

[oracle@HKCORCL ~]$ sqlplus '/ as sysdba'

SQL*Plus: Release 12.2.0.1.0 Production on Tue Dec 6 13:56:28 2016

Copyright (c) 1982, 2016, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> show parameter sga

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
allow_group_access_to_sga            boolean     FALSE
lock_sga                             boolean     FALSE
pre_page_sga                         boolean     TRUE
sga_max_size                         big integer 2544M
sga_min_size                         big integer 0
sga_target                           big integer 2544M
unified_audit_sga_queue_size         integer     1048576
SQL> show parameter db_cach

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_cache_advice                      string      ON
db_cache_size                        big integer 0
SQL>


Those are parameters set at the CDB level. Let's see the PDB.

SQL> alter session set container = PDB1;

Session altered.

SQL> show parameter sga

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
allow_group_access_to_sga            boolean     FALSE
lock_sga                             boolean     FALSE
pre_page_sga                         boolean     TRUE
sga_max_size                         big integer 2544M
sga_min_size                         big integer 0
sga_target                           big integer 0
unified_audit_sga_queue_size         integer     1048576
SQL> show parameter db_cache

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_cache_advice                      string      ON
db_cache_size                        big integer 0
SQL> alter system set db_cache_size=400M;

System altered.

SQL>
SQL> alter system set sga_target=512M;
alter system set sga_target=512M
*
ERROR at line 1:
ORA-02097: parameter cannot be modified because specified value is invalid
ORA-56750: invalid value 536870912 for parameter sga_target; must be larger
than 200% of parameter db_cache_size


SQL> alter system set sga_target=810M;

System altered.

SQL> alter system set shared_pool_size=256M;

System altered.

SQL> show parameter db_cache

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_cache_advice                      string      ON
db_cache_size                        big integer 400M
SQL> show parameter sga_target

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
sga_target                           big integer 810M
SQL> show parameter shared_pool

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
shared_pool_reserved_size            big integer 26004684
shared_pool_size                     big integer 256M
SQL>
SQL> alter system set pga_aggregate_target=128M;

System altered.

SQL>


Returning to the CDB ...

SQL> connect / as sysdba
Connected.
SQL> show parameter db_cache

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_cache_advice                      string      ON
db_cache_size                        big integer 0
SQL> show parameter sga_target

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
sga_target                           big integer 2544M
SQL> show parameter shared_pool

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
shared_pool_reserved_size            big integer 26004684
shared_pool_size                     big integer 0
SQL> show parameter pga_aggergate_target
SQL> show parameter pga_aggregate_target

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
pga_aggregate_target                 big integer 1775294400
SQL>


Thus, multiple PDBs can each have their own memory targets and limits (even an SGA_MIN_SIZE), all carved out of the single instance in which they co-exist.
Note : The requirement is that MEMORY_TARGET must not be set.
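(A sketch only, with illustrative values that are not from this environment : a PDB can also be guaranteed a floor of SGA via SGA_MIN_SIZE while being capped by its own SGA_TARGET.)

alter session set container = PDB1;
-- guarantee a minimum SGA for this PDB (within the CDB-level SGA_TARGET)
alter system set sga_min_size = 256M;
-- cap the SGA this PDB may use
alter system set sga_target = 810M;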
.
.
.

01 December, 2016

12.2 New Features -- 4 : AWR for Pluggable Database

12.2 now allows AWR Snapshots and Reports to be created at the PDB level.

Here I demonstrate a Manual Snapshot.  Although Automatic PDB AWR Snapshots are possible (with the AWR_PDB_AUTOFLUSH_ENABLED parameter), they are disabled by default and Oracle recommends Manual Snapshots.
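(If you did want Automatic PDB Snapshots, a minimal sketch, run inside the PDB with illustrative interval and retention values, would be :)

alter system set awr_pdb_autoflush_enabled = TRUE;
-- interval and retention are specified in minutes
exec dbms_workload_repository.modify_snapshot_settings(interval => 60, retention => 10080);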

[timestamps in UTC]

SQL> connect / as sysdba
Connected.
SQL> alter session set container=PDB1;

Session altered.

SQL> exec dbms_workload_repository.create_snapshot();

PL/SQL procedure successfully completed.

SQL>


I then proceed to create an AWR Report, still in the PDB1 container.

SQL> @?/rdbms/admin/awrrpt

Specify the Report Type
~~~~~~~~~~~~~~~~~~~~~~~
AWR reports can be generated in the following formats.  Please enter the
name of the format at the prompt.  Default value is 'html'.

'html'          HTML format (default)
'text'          Text format
'active-html'   Includes Performance Hub active report

Enter value for report_type: text

Type Specified:  text

Specify the location of AWR Data
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
AWR_ROOT - Use AWR data from root (default)
AWR_PDB - Use AWR data from PDB
Enter value for awr_location: AWR_PDB




Current Instance
~~~~~~~~~~~~~~~~
DB Id          DB Name        Inst Num       Instance       Container Name
-------------- -------------- -------------- -------------- --------------
 3774315809     HKCORCL                     1 HKCORCL        PDB1


Root DB Id      Container DB Id AWR DB Id
--------------- --------------- ---------------
    947935822      3774315809      3774315809









Instances in this Workload Repository schema
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  DB Id      Inst Num   DB Name      Instance     Host
------------ ---------- ---------    ----------   ------
  3774315809     1      HKCORCL      HKCORCL      HKCORCL.comp

Using 3774315809 for database Id
Using          1 for instance number


Specify the number of days of snapshots to choose from
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Entering the number of days (n) will result in the most recent
(n) days of snapshots being listed.  Pressing <return> without
specifying a number lists all completed snapshots.


Enter value for num_days: 1

Listing the last day's Completed Snapshots
Instance     DB Name      Snap Id       Snap Started    Snap Level
------------ ------------ ---------- ------------------ ----------

HKCORCL      HKCORCL              1  01 Dec 2016 08:48    1
                                  2  01 Dec 2016 08:49    1
                                  3  01 Dec 2016 08:52    1


Specify the Begin and End Snapshot Ids
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Enter value for begin_snap: 1
Begin Snapshot Id specified: 1

Enter value for end_snap: 3
End   Snapshot Id specified: 3




Specify the Report Name
~~~~~~~~~~~~~~~~~~~~~~~
The default report file name is awrrpt_1_1_3.txt.  To use this name,
press <return> to continue, otherwise enter an alternative.

Enter value for report_name:

Using the report name awrrpt_1_1_3.txt


Here's a look at the header of the AWR report.

WORKLOAD REPOSITORY PDB report (PDB snapshots)

DB Name         DB Id    Unique Name DB Role          Edition Release    RAC CDB
------------ ----------- ----------- ---------------- ------- ---------- --- ---
HKCORCL       3774315809 HKCORCL     PRIMARY          EE      12.2.0.1.0 NO  NO

Instance     Inst Num Startup Time
------------ -------- ---------------
HKCORCL             1 16-Nov-16 06:13

    PDB Name PDB Id PDB DB Id        Open Time
------------ ------ ---------- ---------------
PDB1               3  3774315809 25-Nov-16 14:11

Host Name        Platform                         CPUs Cores Sockets Memory(GB)
---------------- -------------------------------- ---- ----- ------- ----------
HKCORCL.compute- Linux x86 64-bit                    2     2       1       7.05

              Snap Id      Snap Time      Sessions Curs/Sess
            --------- ------------------- -------- ---------
Begin Snap:         1 01-Dec-16 08:48:46         0       4.0
  End Snap:         3 01-Dec-16 08:52:08         1      12.0
   Elapsed:                3.36 (mins)
   DB Time:                0.29 (mins)

Load Profile                    Per Second   Per Transaction  Per Exec  Per Call
~~~~~~~~~~~~~~~            ---------------   --------------- --------- ---------
             DB Time(s):               0.1               4.3      0.00      0.06
              DB CPU(s):               0.1               2.9      0.00      0.04
      Background CPU(s):               0.0               0.0      0.00      0.00
      Redo size (bytes):         138,443.8       6,976,599.0
  Logical read (blocks):           1,798.4          90,625.0
          Block changes:             282.3          14,224.3
 Physical read (blocks):              21.0           1,055.8
Physical write (blocks):               0.7              34.5
       Read IO requests:              20.9           1,051.8
      Write IO requests:               0.3              12.5
           Read IO (MB):               0.2               8.3
          Write IO (MB):               0.0               0.3
           IM scan rows:               0.0               0.0
Session Logical Read IM:               0.0               0.0
             User calls:               1.5              77.5
           Parses (SQL):              17.9             904.0
      Hard parses (SQL):               3.2             161.5
     SQL Work Area (MB):               2.5             123.5
                 Logons:               0.0               1.0
         Executes (SQL):              45.7           2,302.3
              Rollbacks:               0.0               0.0
           Transactions:               0.0

Top 10 Foreground Events by Total Wait Time
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
                                           Total Wait       Avg   % DB Wait
Event                                Waits Time (sec)      Wait   time Class
------------------------------ ----------- ---------- --------- ------ --------
DB CPU                                           11.5             66.9
db file sequential read              3,758        2.5  667.74us   14.6 User I/O
direct path write                       23         .7   29.08ms    3.9 User I/O
flashback log file sync                 36         .6   16.98ms    3.6 User I/O
local write wait                        12         .4   37.17ms    2.6 User I/O
acknowledge over PGA limit               9         .1    9.50ms     .5 Schedule
control file sequential read           189         .1  293.42us     .3 System I
PGA memory operation                 3,687          0   11.02us     .2 Other
db file scattered read                   4          0    8.32ms     .2 User I/O
log file sync                            3          0    9.35ms     .2 Commit


The Header identifies the PDB being reported on. Note that Snapshots 1 to 3 are local to the PDB and are not in the Root.  PDB Snapshots can be maintained (created or dropped) in the same manner as CDB snapshots.  (Note : PDB AWR Snapshots are in the view AWR_PDB_SNAPSHOT, not DBA_HIST_SNAPSHOT).
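(For example, here is a sketch of purging a range of PDB-local snapshots and checking what remains, using illustrative Snap IDs, while connected to the PDB :)

exec dbms_workload_repository.drop_snapshot_range(low_snap_id => 1, high_snap_id => 2);
select snap_id, end_interval_time from awr_pdb_snapshot order by snap_id;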




In contrast, below is the Header for a CDB report, where Automatic Snapshots mean the Snap IDs are already at 379 and 380.  Thus, the CDB snapshots are distinct from the PDB snapshots.

WORKLOAD REPOSITORY report for

DB Name         DB Id    Unique Name DB Role          Edition Release    RAC CDB
------------ ----------- ----------- ---------------- ------- ---------- --- ---
HKCORCL        947935822 HKCORCL     PRIMARY          EE      12.2.0.1.0 NO  YES

Instance     Inst Num Startup Time
------------ -------- ---------------
HKCORCL             1 16-Nov-16 06:13

Host Name        Platform                         CPUs Cores Sockets Memory(GB)
---------------- -------------------------------- ---- ----- ------- ----------
HKCORCL.compute- Linux x86 64-bit                    2     2       1       7.05

              Snap Id      Snap Time      Sessions Curs/Sess  PDBs
            --------- ------------------- -------- --------- -----
Begin Snap:       379 01-Dec-16 08:00:47        51        .6     2
  End Snap:       380 01-Dec-16 09:00:09        62        .9     2
   Elapsed:               59.36 (mins)
   DB Time:                1.14 (mins)



Note how it doesn't identify a PDB.

You need to be explicitly connected to a PDB before awrrpt shows you the option to generate a PDB-level AWR report.
.
.
.

25 November, 2016

12.2 New Features -- 3 : Flashback Pluggable Database

12.1 allows Point In Time Recovery of a Pluggable Database but not Flashback of an individual PDB.

12.2 now allows Flashback of an individual PDB.   This is easier with a Local Undo Tablespace instead of a Shared Undo Tablespace.

Here is a quick demo  :

[timestamps in UTC]

[oracle@HKCORCL ~]$ sqlplus system/Oracle_4U@PDB1

SQL*Plus: Release 12.2.0.1.0 Production on Fri Nov 25 14:19:06 2016

Copyright (c) 1982, 2016, Oracle.  All rights reserved.

Last Successful login time: Thu Nov 24 2016 01:03:52 +00:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> select count(*) from hr.employees_part;

  COUNT(*)
----------
       107

SQL> drop table hr.employees_part purge;

Table dropped.

SQL> connect / as sysdba
Connected.
SQL> alter system switch logfile;

System altered.

SQL> alter pluggable database pdb1 close;

Pluggable database altered.

SQL>
SQL> select sysdate, sysdate-oldest_flashback_time
  2  from v$flashback_database_log;

SYSDATE   SYSDATE-OLDEST_FLASHBACK_TIME
--------- -----------------------------
25-NOV-16                    2.36273148

SQL>
SQL> flashback pluggable database pdb1
  2  to timestamp sysdate-2/24;

Flashback complete.

SQL> alter pluggable database pdb1 open;
alter pluggable database pdb1 open
*
ERROR at line 1:
ORA-01113: file 17 needs media recovery
ORA-01110: data file 17:
'/u02/app/oracle/oradata/HKCORCL/4157E08302CC2021E053B2D4100AABA3/datafile/o1_mf
_pdb1undo_d3dgxhbh_.dbf'


SQL> alter pluggable database pdb1 open resetlogs;

Pluggable database altered.

SQL>
SQL> connect system/Oracle_4U@PDB1
Connected.
SQL> select count(*) from hr.employees_part;

  COUNT(*)
----------
       107

SQL>


(Note : The 12.2 documentation shows a TO TIME clause; it is actually TO TIMESTAMP).
I have flashed back PDB1 to a point 2 hours ago, when the table HR.EMPLOYEES_PART still existed.
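(An alternative that I haven't shown here is to flash back to a named Restore Point created for the PDB. A sketch, with an illustrative Restore Point name :)

create restore point before_change for pluggable database pdb1;
-- ... changes happen here ...
alter pluggable database pdb1 close;
flashback pluggable database pdb1 to restore point before_change;
alter pluggable database pdb1 open resetlogs;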

Let's look for messages in the alert log.

2016-11-25T14:19:52.992589+00:00
Archived Log entry 11 added for T-1.S-11 ID 0x38800462 LAD:1
2016-11-25T14:19:57.621705+00:00
alter pluggable database pdb1 close
2016-11-25T14:19:57.640353+00:00
PDB1(3):JIT: pid 7920 requesting stop
2016-11-25T14:19:58.885892+00:00
Pluggable database PDB1 closed
Completed: alter pluggable database pdb1 close
2016-11-25T14:26:10.205824+00:00
flashback pluggable database pdb1
to timestamp sysdate-2/24
2016-11-25T14:26:10.627900+00:00
Flashback Restore Start
2016-11-25T14:26:11.513882+00:00
Restore Flashback Pluggable Database PDB1 (3) until change 3536013
Flashback Restore Complete
2016-11-25T14:26:11.707236+00:00
Flashback Media Recovery Start
2016-11-25T14:26:11.718480+00:00
Serial Media Recovery started
2016-11-25T14:26:12.006472+00:00
Recovery of Online Redo Log: Thread 1 Group 2 Seq 11 Reading mem 0
  Mem# 0: /u04/app/oracle/redo/redo02.log
2016-11-25T14:26:12.283587+00:00
Incomplete Recovery applied until change 3536477 time 11/25/2016 12:26:56
Flashback Media Recovery Complete
Flashback Pluggable Database PDB1 (3) recovered until change 3536477, at 11/25/2016 12:26:56
Completed: flashback pluggable database pdb1
to timestamp sysdate-2/24
2016-11-25T14:26:21.451523+00:00
alter pluggable database pdb1 open
PDB1(3):Autotune of undo retention is turned on.
2016-11-25T14:26:21.659109+00:00
Pdb PDB1 hit error 1113 during open read write (1) and will be closed.
2016-11-25T14:26:21.659410+00:00
Errors in file /u01/app/oracle/diag/rdbms/hkcorcl/HKCORCL/trace/HKCORCL_ora_7920.trc:
ORA-01113: file 17 needs media recovery
ORA-01110: data file 17: '/u02/app/oracle/oradata/HKCORCL/4157E08302CC2021E053B2D4100AABA3/datafile/o1_mf_pdb1undo_d3dgxhbh_.dbf'
PDB1(3):JIT: pid 7920 requesting stop
2016-11-25T14:26:21.804780+00:00
Errors in file /u01/app/oracle/diag/rdbms/hkcorcl/HKCORCL/trace/HKCORCL_m000_9995.trc:
ORA-01110: data file 9: '/u02/app/oracle/oradata/HKCORCL/PDB1/system01.dbf'
ORA-1113 signalled during: alter pluggable database pdb1 open...
2016-11-25T14:26:22.086212+00:00
Errors in file /u01/app/oracle/diag/rdbms/hkcorcl/HKCORCL/trace/HKCORCL_m000_9995.trc:
ORA-01110: data file 10: '/u02/app/oracle/oradata/HKCORCL/PDB1/sysaux01.dbf'
2016-11-25T14:26:22.175778+00:00
Errors in file /u01/app/oracle/diag/rdbms/hkcorcl/HKCORCL/trace/HKCORCL_m000_9995.trc:
ORA-01110: data file 12: '/u02/app/oracle/oradata/HKCORCL/PDB1/users01.dbf'
2016-11-25T14:26:22.270876+00:00
Errors in file /u01/app/oracle/diag/rdbms/hkcorcl/HKCORCL/trace/HKCORCL_m000_9995.trc:
ORA-01110: data file 17: '/u02/app/oracle/oradata/HKCORCL/4157E08302CC2021E053B2D4100AABA3/datafile/o1_mf_pdb1undo_d3dgxhbh_.dbf'
Checker run found 4 new persistent data failures
2016-11-25T14:26:39.804216+00:00
alter pluggable database pdb1 open resetlogs
2016-11-25T14:26:40.377390+00:00
Online datafile 17
Online datafile 12
Online datafile 10
Online datafile 9
2016-11-25T14:26:40.881181+00:00
PDB1(3):Autotune of undo retention is turned on.
PDB1(3):Endian type of dictionary set to little
PDB1(3):[7920] Successfully onlined Undo Tablespace 7.
PDB1(3):Undo initialization finished serial:0 start:868281239 end:868281333 diff:94 ms (0.1 seconds)
PDB1(3):Database Characterset for PDB1 is AL32UTF8
PDB1(3):JIT: pid 7920 requesting stop
2016-11-25T14:26:42.441388+00:00
PDB1(3):Autotune of undo retention is turned on.
2016-11-25T14:26:42.827673+00:00
PDB1(3):Endian type of dictionary set to little
PDB1(3):[7920] Successfully onlined Undo Tablespace 7.
PDB1(3):Undo initialization finished serial:0 start:868283079 end:868283168 diff:89 ms (0.1 seconds)
PDB1(3):Pluggable database PDB1 dictionary check beginning
2016-11-25T14:26:43.706672+00:00
PDB1(3):Pluggable Database PDB1 Dictionary check complete
PDB1(3):Database Characterset for PDB1 is AL32UTF8
2016-11-25T14:26:44.083617+00:00
PDB1(3):Opatch validation is skipped for PDB PDB1 (con_id=0)
PDB1(3):Opening pdb with no Resource Manager plan active
2016-11-25T14:26:45.205147+00:00
Starting control autobackup

Deleted Oracle managed file /u03/app/oracle/fast_recovery_area/HKCORCL/415864F430FE5FFEE053B2D4100A149C/backupset/2016_11_16/o1_mf_nnndf_TAG20161116T024856_d2qldlnv_.bkp
2016-11-25T14:26:46.523130+00:00
Deleted Oracle managed file /u03/app/oracle/fast_recovery_area/HKCORCL/3E09703FB0AF1A7EE053DE4BC40A6C1D/backupset/2016_11_16/o1_mf_nnndf_TAG20161116T024856_d2qlfzqg_.bkp
Control autobackup written to DISK device

handle '/u03/app/oracle/fast_recovery_area/HKCORCL/autobackup/2016_11_25/o1_mf_s_928852005_d3jlk651_.bkp'

Pluggable database PDB1 closed
Completed: alter pluggable database pdb1 open resetlogs


The set of ORA-01113 and ORA-01110 errors is from when I tried to open PDB1 without a RESETLOGS.
The OPEN RESETLOGS issued at 2016-11-25T14:26:39.804216+00:00 was successful.
(Note : The ALTER SYSTEM SWITCH LOGFILE wasn't required, but I like to archive out the CURRENT redo whenever I take a significant action against the database).

.
.
.

24 November, 2016

12.2 New Features -- 2 : Partitioning an Existing Table

A non-partitioned table can now be partitioned online, without having to use DBMS_REDEFINITION.

SQL> connect hr/Oracle_4U@PDB1
Connected.
SQL> select count(*) from employees;

  COUNT(*)
----------
       107

SQL> create table employees_part as select * from employees;

Table created.

SQL> select table_name from user_part_tables;

no rows selected

SQL> alter table employees_part
  2  modify
  3  partition by range (last_name)
  4  (partition p_N values less than ('O'),
  5   partition p_Q values less than ('R'),
  6   partition p_LAST values less than (MAXVALUE))
  7  online;

Table altered.

SQL>
SQL> select partition_name, high_value
  2  from user_tab_partitions
  3  where table_name = 'EMPLOYEES_PART'
  4  order by partition_position
  5  /

PARTITION_NAME
--------------------------------------------------------------------------------
HIGH_VALUE
------------
P_N
'O'

P_Q
'R'

P_LAST
MAXVALUE


SQL>
SQL> select table_name, partitioning_type, partition_count
  2  from user_part_tables
  3  where table_name = 'EMPLOYEES_PART'
  4  /

TABLE_NAME
--------------------------------------------------------------------------------
PARTITION PARTITION_COUNT
--------- ---------------
EMPLOYEES_PART
RANGE                   3


SQL>
SQL> select partition_name, num_rows
  2  from user_tab_partitions
  3  where table_name = 'EMPLOYEES_PART'
  4  order by partition_position
  5  /

PARTITION_NAME
--------------------------------------------------------------------------------
  NUM_ROWS
----------
P_N
        71

P_Q
        10

P_LAST
        26


SQL>


I was able to convert a Non-Partitioned Table to a Range-Partitioned Table online.
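(If the table had existing indexes, the same statement can convert them in one pass with an UPDATE INDEXES clause. A sketch, with an illustrative index name :)

alter table employees_part
modify
partition by range (last_name)
(partition p_N values less than ('O'),
 partition p_Q values less than ('R'),
 partition p_LAST values less than (MAXVALUE))
online
update indexes (employees_part_i1 local);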
.
.
.
There was a question in the Comments about the Object_ID.  The table actually now consists of *multiple* objects (the Partitions), each with its own Object_ID.

SQL> REM This test in 19c 19.3.  I expect the same behaviour in 12.2.0.1
SQL> create table employees_part as select * from employees;

Table created.

SQL> select object_id, data_object_id from user_objects
  2  where object_name = 'EMPLOYEES_PART';

 OBJECT_ID DATA_OBJECT_ID
---------- --------------
     73228          73228

SQL> alter table employees_part
  2  modify
  3  partition by range (last_name)
  4  (partition p_N values less than ('O'),
  5   partition p_Q values less than ('R'),
  6   partition p_LAST values less than (MAXVALUE));

Table altered.

SQL> select object_id, data_object_id, subobject_name
  2  from user_objects
  3  where object_name = 'EMPLOYEES_PART'
  4  order by 1,2
  5  /

 OBJECT_ID DATA_OBJECT_ID SUBOBJECT_NAME
---------- -------------- ------------------
     73228
     73233          73233 P_N
     73234          73234 P_Q
     73235          73235 P_LAST

SQL>


(You can search this blog for "DATA_OBJECT_ID" being different from OBJECT_ID.  See this earlier post.)

12.2 New Features -- 1 : Separate Undo Tablespace for each PDB

Unlike 12.1 MultiTenant (where all PDBs share the CDB's Undo Tablespace), 12.2 introduces a separate Undo Tablespace for each PDB.

SQL> l
  1  select c.con_id, c.name  con_name, t.tablespace_name, t.contents, t.status
  2  from v$containers c, cdb_tablespaces t
  3  where c.con_id=t.con_id
  4  and t.tablespace_name like '%UNDO%'
  5* order by 1,2
SQL> /

    CON_ID CON_NAME         TABLESPACE_NAME  CONTENTS              STATUS
---------- ---------------- ---------------- --------------------- ---------
         1 CDB$ROOT         UNDOTBS1         UNDO                  ONLINE
         3 PDB1             UNDOTBS1         UNDO                  ONLINE
         5 PDB2             UNDOTBS1         UNDO                  ONLINE

SQL>


I have two PDBs and each PDB has an Undo Tablespace.

Let me create a new Undo Tablespace.

SQL> connect system/Oracle_4U@PDB1
Connected.
SQL> create undo tablespace PDB1UNDO ;

Tablespace created.

SQL> show parameter undo

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
temp_undo_enabled                    boolean     FALSE
undo_management                      string      AUTO
undo_retention                       integer     900
undo_tablespace                      string      UNDOTBS1
SQL> alter system set undo_tablespace='PDB1UNDO';

System altered.

SQL> drop tablespace undotbs1;

Tablespace dropped.

SQL> connect / as sysdba
Connected.
SQL> alter pluggable database pdb1 close;

Pluggable database altered.

SQL> alter pluggable database pdb1 open;

Pluggable database altered.

SQL>
SQL> select c.con_id, c.name  con_name, t.tablespace_name, t.contents, t.status
  2  from v$containers c, cdb_tablespaces t
  3  where c.con_id=t.con_id
  4  and t.tablespace_name like '%UNDO%'
  5  order by 1,2
  6  /

    CON_ID CON_NAME         TABLESPACE_NAME  CONTENTS              STATUS
---------- ---------------- ---------------- --------------------- ---------
         1 CDB$ROOT         UNDOTBS1         UNDO                  ONLINE
         3 PDB1             PDB1UNDO         UNDO                  ONLINE
         5 PDB2             UNDOTBS1         UNDO                  ONLINE

SQL>
SQL> connect system/Oracle_4U@PDB1
Connected.
SQL> show parameter undo

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
temp_undo_enabled                    boolean     FALSE
undo_management                      string      AUTO
undo_retention                       integer     900
undo_tablespace                      string      PDB1UNDO
SQL> select tablespace_name, contents, status
  2  from dba_tablespaces
  3  where tablespace_name like '%UNDO%'
  4  /

TABLESPACE_NAME  CONTENTS              STATUS
---------------- --------------------- ---------
PDB1UNDO         UNDO                  ONLINE

SQL>


I was able to switch PDB1 to a new Undo Tablespace (and drop the old Undo Tablespace).
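(Per-PDB Undo like this relies on the CDB running in Local Undo mode. A quick sketch of how to verify that :)

select property_name, property_value
from database_properties
where property_name = 'LOCAL_UNDO_ENABLED';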
.
.
.

20 November, 2016

Flashback Database -- 3 : Purging (older) Flashback Logs

As demonstrated earlier, Oracle may maintain Flashback Logs for a duration that is longer than the Flashback Retention Target.  This can happen when the db_recovery_file_dest_size is large enough to support them (along with ArchiveLogs, Backups etc.).

For example, in my play database I have reset the retention target to 1 day (1440 minutes), but the Flashback Logs span more than 4 days :

SQL> show parameter flashback_ret

NAME         TYPE  VALUE
------------------------------------ ----------- ------------------------------
db_flashback_retention_target      integer  1440
SQL> select sysdate-oldest_flashback_time from v$flashback_database_log;

SYSDATE-OLDEST_FLASHBACK_TIME
-----------------------------
     4.21686343



The DBA should not manually delete Flashback Logs.

The only way I've found to purge older Flashback Logs is to reset db_recovery_file_dest_size to a lower value such that current FRA usage exceeds the dest_size.  This prompts Oracle to purge older Flashback Logs.
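(A sketch, with illustrative sizes : first check what is consuming the FRA, then shrink the destination so that current usage exceeds the new limit.)

select file_type, percent_space_used, percent_space_reclaimable
from v$flash_recovery_area_usage;
-- shrink the FRA so that existing usage exceeds it; Oracle then purges the oldest Flashback Logs
alter system set db_recovery_file_dest_size = 10G;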

However, if ArchiveLogs are generated frequently and consume significant space in the FRA, you do run the risk of

ARCH: Archival stopped, error occurred. Will continue retrying
ORACLE Instance orcl - Archival Error
ORA-16038: log 1 sequence# nnn cannot be archived
ORA-19809: limit exceeded for recovery files

and/or

ORACLE Instance orcl- Cannot allocate log, archival required
Thread 1 cannot allocate new log, sequence nnn
All online logs need archiving
Examine archive trace files for archiving errors


errors.
So, be careful to monitor your FRA usage and the Flashback Logs.  Use the V$FLASHBACK_DATABASE_LOG, V$FLASHBACK_DATABASE_LOGFILE, V$FLASHBACK_DATABASE_STAT and V$FLASH_RECOVERY_AREA_USAGE views.
(see my earlier post that also points to an Oracle Support Doc about the first two views).
.
.
.


14 November, 2016

Flashback Database -- 2 : Flashback Requires Redo (ArchiveLog)

Although Flashback Logs support the ability to execute a FLASHBACK DATABASE command, the actual Flashback also requires Redo to be applied.  This is because the Flashback resets the images of blocks but doesn't guarantee that all transactions are reset to the same point in time (any one block can contain one or more active, uncommitted transactions, and there can be multiple blocks with active transactions at any point in time).  Therefore, since Oracle must revert the database to a consistent image, it needs to be able to apply redo as well (just as it would do for a roll-forward recovery from a backup).
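(A sketch of a quick sanity check : confirm that the Archived Logs covering the flashback window are still present on disk before relying on a Flashback.  Rows with DELETED = 'YES', or no rows at all, would indicate that the corresponding redo is no longer available.)

select l.sequence#, l.name, l.deleted
from v$archived_log l
where l.next_change# >= (select oldest_flashback_scn from v$flashback_database_log)
order by l.sequence#;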

Here's a quick demo of what happens if the redo is not available.

SQL> alter session set nls_date_format='DD-MON-RR HH24:MI:SS';

Session altered.

SQL> select sysdate, l.oldest_flashback_scn, l.oldest_flashback_time
  2  from v$flashback_database_log l;

SYSDATE            OLDEST_FLASHBACK_SCN OLDEST_FLASHBACK_T
------------------ -------------------- ------------------
14-NOV-16 22:51:37              7246633 14-NOV-16 22:39:43

SQL> 

sh-4.1$ pwd
/u02/FRA/ORCL/archivelog/2016_11_14
sh-4.1$ date
Mon Nov 14 22:52:29 SGT 2016
sh-4.1$ rm *
sh-4.1$ 

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount;
ORACLE instance started.

Total System Global Area 1068937216 bytes
Fixed Size                  2260088 bytes
Variable Size             750781320 bytes
Database Buffers          310378496 bytes
Redo Buffers                5517312 bytes
Database mounted.

SQL> flashback database to timestamp to_date('14-NOV-16 22:45:00','DD-MON-RR HH24:MI:SS');
flashback database to timestamp to_date('14-NOV-16 22:45:00','DD-MON-RR HH24:MI:SS')
*
ERROR at line 1:
ORA-38754: FLASHBACK DATABASE not started; required redo log is not available
ORA-38762: redo logs needed for SCN 7246634 to SCN 7269074
ORA-38761: redo log sequence 70 in thread 1, incarnation 5 could not be
accessed


SQL> 
SQL> l
  1  select sequence#, first_change#, first_time
  2  from v$archived_log
  3  where resetlogs_time=(select resetlogs_time from v$database)
  4  and sequence# between 60 and 81
  5* order by 1
SQL> /

 SEQUENCE# FIRST_CHANGE# FIRST_TIME
---------- ------------- ------------------
        60       7245238 14-NOV-16 22:27:35
        61       7248965 14-NOV-16 22:40:46
        62       7250433 14-NOV-16 22:40:52
        63       7251817 14-NOV-16 22:41:04
        64       7253189 14-NOV-16 22:41:20
        65       7254583 14-NOV-16 22:41:31
        66       7255942 14-NOV-16 22:41:44
        67       7257317 14-NOV-16 22:41:59
        68       7258689 14-NOV-16 22:42:10
        69       7260094 14-NOV-16 22:42:15
        70       7261397 14-NOV-16 22:42:22
        71       7262843 14-NOV-16 22:42:28
        72       7264269 14-NOV-16 22:42:32
        73       7265697 14-NOV-16 22:42:37
        74       7267121 14-NOV-16 22:42:43
        75       7269075 14-NOV-16 22:48:05
        76       7270476 14-NOV-16 22:48:11
        77       7271926 14-NOV-16 22:48:17
        78       7273370 14-NOV-16 22:48:23
        79       7274759 14-NOV-16 22:48:32
        80       7276159 14-NOV-16 22:48:39
        81       7277470 14-NOV-16 22:48:43

22 rows selected.

SQL> 



Note how the error message states that Redo(Archive)Log Sequence#70 is required but provides a range of SCNs that spans Sequence#60 to Sequence#74 !

Bottom Line : Flashback Logs alone aren't adequate to Flashback database.  You also need the corresponding Redo.

Just to confirm that I can continue with the current (non-Flashbacked Database) state (in spite of the failed Flashback)  :

SQL> shutdown;
ORA-01109: database not open


Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Total System Global Area 1068937216 bytes
Fixed Size                  2260088 bytes
Variable Size             750781320 bytes
Database Buffers          310378496 bytes
Redo Buffers                5517312 bytes
Database mounted.
Database opened.
SQL> archive log list
Database log mode             Archive Mode
Automatic archival            Enabled
Archive destination           USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence    89
Next log sequence to archive  90
Current log sequence          90
SQL> select current_scn from v$database; 

CURRENT_SCN
-----------
    7289329

SQL> 


Bottom Line : *Before* you attempt a FLASHBACK DATABASE to the OLDEST_FLASHBACK_TIME (or SCN) from V$FLASHBACK_DATABASE_LOG, ensure that you *do* have the "nearby" Archive/Redo Logs !
.
.
.

07 November, 2016

Flashback Database -- 1 : Introduction to Operations

Continuing on my previous post,  ....

In 11gR2, both ALTER DATABASE FLASHBACK ON and ALTER DATABASE FLASHBACK OFF can be executed while the database is OPEN.  Setting FLASHBACK OFF results in the deletion of all Flashback Files.

Here is some information that I have pulled from my test database environment :

SQL> alter session set nls_date_format='DD-MON-RR HH24:MI:SS';

Session altered.

SQL> 
SQL> select oldest_flashback_scn, oldest_flashback_time,
  2  retention_target, flashback_size
  3  from v$flashback_database_log;

OLDEST_FLASHBACK_SCN OLDEST_FLASHBACK_T RETENTION_TARGET FLASHBACK_SIZE
-------------------- ------------------ ---------------- --------------
             7140652 07-NOV-16 10:53:30              180      314572800

SQL> select sysdate from dual;

SYSDATE
------------------
07-NOV-16 17:46:54

SQL> 
SQL> select begin_time, end_time, flashback_data, estimated_flashback_size
  2  from v$flashback_database_stat
  3  order by begin_time;

BEGIN_TIME         END_TIME           FLASHBACK_DATA ESTIMATED_FLASHBACK_SIZE
------------------ ------------------ -------------- ------------------------
06-NOV-16 18:56:28 06-NOV-16 21:20:55      202129408                251873280
06-NOV-16 21:20:55 07-NOV-16 09:53:26      107102208                 62054400
07-NOV-16 09:53:26 07-NOV-16 10:53:30       51609600                 67866624
07-NOV-16 10:53:30 07-NOV-16 13:14:45       10682368                 60887040
07-NOV-16 13:14:45 07-NOV-16 14:14:51       66002944                 67986432
07-NOV-16 14:14:51 07-NOV-16 15:14:57       10018816                 66112512
07-NOV-16 15:14:57 07-NOV-16 16:15:01       10190848                 64441344
07-NOV-16 16:15:01 07-NOV-16 17:15:05       53559296                 68751360
07-NOV-16 17:15:05 07-NOV-16 17:47:57       52862976                        0

9 rows selected.

SQL> 
SQL> select log#, sequence#, bytes/1048576 Size_MB, first_time
  2  from v$flashback_database_logfile
  3  order by sequence#;

      LOG#  SEQUENCE#    SIZE_MB FIRST_TIME
---------- ---------- ---------- ------------------
         6          6         50 07-NOV-16 09:00:46
         1          7         50 07-NOV-16 10:36:01
         2          8         50 07-NOV-16 13:13:22
         3          9         50 07-NOV-16 13:43:28
         4         10         50 07-NOV-16 16:43:49
         5         11         50 07-NOV-16 17:44:42

6 rows selected.

SQL> 


Firstly, we note (as in my previous blog post) that the available flashback window, from 10:53am to 5:46pm (almost 7 hours), exceeds the Flashback Retention Target of 3 hours (180 minutes).  Apparently, Flashback Logfiles with Sequence# 1 to 5 have already been purged (but I find no entries for the deletions in the alert log).

Note how the "earliest time" does not match across all three views.  The OLDEST_FLASHBACK_TIME is 10:53am, although V$FLASHBACK_DATABASE_STAT reports statistics from the previous day (I had enabled Flashback in the database at 18:56:27 on 06-Nov), while V$FLASHBACK_DATABASE_LOGFILE shows an existing logfile covering 09:00am to 10:36am.

Let me do a Flashback.  I must rely on the V$FLASHBACK_DATABASE_LOG view to know that I cannot flash back beyond 10:53am.

SQL> select open_mode from v$database;

OPEN_MODE
--------------------
READ WRITE

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount;
ORACLE instance started.

Total System Global Area 1068937216 bytes
Fixed Size                  2260088 bytes
Variable Size             750781320 bytes
Database Buffers          310378496 bytes
Redo Buffers                5517312 bytes
Database mounted.
SQL> 
SQL> flashback database to timestamp trunc(sysdate)+11/24;

Flashback complete.

SQL> 
SQL> alter database open read only;  --- to verify data if necessary

Database altered.

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup;     
ORACLE instance started.

Total System Global Area 1068937216 bytes
Fixed Size                  2260088 bytes
Variable Size             750781320 bytes
Database Buffers          310378496 bytes
Redo Buffers                5517312 bytes
Database mounted.
ORA-01589: must use RESETLOGS or NORESETLOGS option for database open


SQL> alter database open resetlogs;

Database altered.

SQL> 


A FLASHBACK DATABASE requires an OPEN RESETLOGS to open READ WRITE.

Let's look at the alert log for messages about the Flashback operation itself :

Mon Nov 07 17:56:36 2016
flashback database to timestamp trunc(sysdate)+11/24
Flashback Restore Start
Flashback Restore Complete
Flashback Media Recovery Start
 started logmerger process
Parallel Media Recovery started with 2 slaves
Flashback Media Recovery Log /u02/FRA/ORCL/archivelog/2016_11_07/o1_mf_1_81_d2052ofj_.arc
Mon Nov 07 17:56:43 2016
Incomplete Recovery applied until change 7141255 time 11/07/2016 11:00:01
Flashback Media Recovery Complete
Completed: flashback database to timestamp trunc(sysdate)+11/24
Mon Nov 07 17:57:08 2016
alter database open read only


What happens if I disable and re-enable Flashback ?

SQL> select open_mode from v$database;

OPEN_MODE
--------------------
READ WRITE

SQL> alter database flashback off;

Database altered.

SQL> 

From the alert log :
Mon Nov 07 18:03:02 2016
alter database flashback off
Stopping background process RVWR
Deleted Oracle managed file /u02/FRA/ORCL/flashback/o1_mf_d1y32vjv_.flb
Deleted Oracle managed file /u02/FRA/ORCL/flashback/o1_mf_d1y32xq0_.flb
Deleted Oracle managed file /u02/FRA/ORCL/flashback/o1_mf_d1y3bhkx_.flb
Deleted Oracle managed file /u02/FRA/ORCL/flashback/o1_mf_d1y3dd8r_.flb
Deleted Oracle managed file /u02/FRA/ORCL/flashback/o1_mf_d1y6r6bf_.flb
Deleted Oracle managed file /u02/FRA/ORCL/flashback/o1_mf_d1ycky3v_.flb
Flashback Database Disabled
Completed: alter database flashback off

SQL> select open_mode from v$database;

OPEN_MODE
--------------------
READ WRITE

SQL> alter database flashback on;

Database altered.

SQL> 

From the alert log :
Mon Nov 07 18:04:21 2016
alter database flashback on
Starting background process RVWR
Mon Nov 07 18:04:21 2016
RVWR started with pid=30, OS id=12621
Flashback Database Enabled at SCN 7142426
Completed: alter database flashback on

From the FRA :
[oracle@ora11204 flashback]$ pwd
/u02/FRA/ORCL/flashback
[oracle@ora11204 flashback]$ ls -ltr
total 102416
-rw-rw----. 1 oracle oracle 52436992 Nov  7 18:04 o1_mf_d20nf7wc_.flb
-rw-rw----. 1 oracle oracle 52436992 Nov  7 18:05 o1_mf_d20nf5nz_.flb
[oracle@ora11204 flashback]$ 

SQL> alter session set nls_date_Format='DD-MON-RR HH24:MI:SS';

Session altered.

SQL> select log#, sequence#, bytes/1048576 Size_MB, first_time
  2  from v$flashback_database_logfile
  3  order by sequence#;

      LOG#  SEQUENCE#    SIZE_MB FIRST_TIME
---------- ---------- ---------- ------------------
         2          1         50
         1          1         50 07-NOV-16 18:04:22

SQL> 
 


So, I can set FLASHBACK OFF and ON while the database is OPEN.  (But I can't execute a FLASHBACK DATABASE TO ... with the database OPEN; that requires the database to be MOUNTed.)
.
.
.

29 October, 2016

Flashback Database Logs can exceed the Retention Target

The documentation on the Flashback Retention Target in 11.2 and 12.1 states that this parameter specifies an upper limit on how far the database may be flashed back.
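(For reference, the retention target itself is a dynamic parameter specified in minutes; a sketch :)

alter system set db_flashback_retention_target = 4320;  -- 3 days, expressed in minutes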

However, if the FRA (db_recovery_file_dest_size) is actually large enough, Oracle may retain flashback logs for a much longer duration.

SQL> alter session set nls_date_format='DD-MON-RR HH24:MI:SS';

Session altered.

SQL> select sysdate, l.* from v$flashback_database_log l;

SYSDATE            OLDEST_FLASHBACK_SCN OLDEST_FLASHBACK_T RETENTION_TARGET
------------------ -------------------- ------------------ ----------------
FLASHBACK_SIZE ESTIMATED_FLASHBACK_SIZE
-------------- ------------------------
29-OCT-16 07:42:44              6968261 28-OCT-16 22:35:50              180
     157286400                 86467584


SQL> 
SQL> show parameter flashback

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_flashback_retention_target        integer                                180
SQL> 
SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount
ORACLE instance started.

Total System Global Area 1068937216 bytes
Fixed Size           2260088 bytes
Variable Size      750781320 bytes
Database Buffers   310378496 bytes
Redo Buffers         5517312 bytes
Database mounted.
SQL> flashback database to timestamp trunc(sysdate);

Flashback complete.

SQL>


Thus, it is useful to check the V$FLASHBACK_DATABASE_LOG, V$FLASHBACK_DATABASE_LOGFILE, V$FLASHBACK_DATABASE_STAT and V$FLASH_RECOVERY_AREA_USAGE views from time to time.

See Oracle Support Doc# 1194013.1 for a discrepancy between the first two views.

Note : If you have a Standby database configured, also see John Hallas's blog post.
.
.
.

11 October, 2016

OTN Appreciation Day : Undo and Redo

On OTN Appreciation Day, let me say I like the Undo and Redo features of Oracle.  I name them together as they work together.

Undo also supports MultiVersionReadConsistency -- a great advantage of Oracle.

Redo, with Archive Logging, also supports Online Backups -- an absolute necessity.

These features have been around for almost 30 years now.

Here are some Quick and Rough Notes on Undo and Redo.
.
.
.

Undo and Redo

Quick and Rough Notes :


Undo and Redo


Undo is where Oracle logs how to reverse a transaction (one or more DMLs in a transaction)

Redo is where Oracle logs how to replay a transaction

Undo and Redo are written to as the transaction proceeds, not merely at the end of the transaction
(imagine a transaction that consists of 1million single-row inserts,  each distinct insert is written to undo and redo)
Undo segments
Oracle dynamically creates and drops Undo segments depending on transaction volume
An undo segment consists of multiple extents.  As a transaction grows beyond the current extent, a new extent may be allocated
One undo segment can support multiple transactions but a transaction cannot span multiple undo segments
After COMMIT the undo information is retained for undo_retention or autotuned_undo_retention.  
At the end of the retention period, the undo is discarded, the extent is expired 

Undo retention
Oracle may autotune the undo retention
If the datafile(s) for the active undo tablespace are set to autoextend OFF, Oracle automatically uses the datafile to the fullest and ignores undo_retention
If the datafile(s) are set to autoextend ON, Oracle autotunes undo_retention to match query lengths
Check V$undostat for this information
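For example, a quick sketch of such a check :

select begin_time, end_time, maxquerylen, tuned_undoretention
from v$undostat
order by begin_time;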

Undo and Read Consistency
Oracle's implementation of MultiVersionReadConsistency relies on a user session being able to read the undo generated by another session
A session may need to read the prior image of data because the data has been modified (and may even have been committed) by another session
It clones the current version of the block it is reading and applies the undo for that block to create its read consistent version
Flashback Query is supported by reading from Undo

Isolation levels (READ COMMITTED, SERIALIZABLE, READ ONLY) 
Read Consistency with READ COMMITTED is at *statement* level by default
A session running multiple queries may each read a different version by default because Read Committed is enforced for each statement
(This also means that if you have a PLSQL block running the same SQL multiple times, each execution can see a different version of the data-- if the data is modified by another session between executions of the SQL !)
A session can choose to set its ISOLATION LEVEL to SERIALIZABLE which means that every query sees the same version of data
This works only for short running queries and with few changes to the data or read only data.
SERIALIZABLE can update data provided that the same data hasn't been updated and committed by another session after the start (else you get ORA-08177)
READ ONLY does not allow the session to make changes 

Transactions
When a transaction is in progress, it is identified by the Transaction Address, Undo segment, slot and sequence
The ITL slot in the block header contains the reference (address) to the Undo
The SCN is assigned at commit time (therefore a transaction doesn't begin with an SCN)

Temp Undo
12c also allows temporary undo
Normally, changes to a GTT (Global Temporary Table) generate undo which needs to be written to undo segments
With 12c temp undo, those undo entries are also, like the actual changes, temporary and can be discarded when the commit is issued
Thus, the undo doesn't need to be persisted in the undo tablespace or protected by redo (remember data in a GTT is not visible to another session, so there is no need to persist the undo)
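
A sketch of enabling it at the session level (the GTT here is created only for illustration; the parameter must be set before the first DML against a GTT in the session) :

alter session set temp_undo_enabled = TRUE;

create global temporary table my_gtt (id number) on commit preserve rows;

-- the undo for this insert is kept in the temporary tablespace and generates no redo
insert into my_gtt select level from dual connect by level <= 10000;
commit;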
Redo also captures Undo
One transaction (or multiple concurrent transactions) may have updated multiple database blocks
So, DBWR may have written some of the modified buffers to disk even before the transaction COMMIT has been issued
This means that some of the blocks on disk may have uncommitted changes
What happens if the instance were to fail (e.g. a bug takes down a background process, or the server crashes due to an OS bug or a CPU failure) ?
On instance recovery, Oracle must identify the uncommitted transactions and roll them back
But if the undo for such a transaction was only in memory and was lost on instance/server failure, how can Oracle roll back the uncommitted transaction ?
Oracle knows that it must "undo" modified blocks
This is done by protecting the undo through the redo as well
Before a modified buffer is written to disk by DBWR, LGWR writes the redo for it
That redo also captures the undo
This ensures that, when Instance Recovery or Media Recovery is needed, the undo is also available
The Rollforward phase writes the undo to the undo segments
This allows Oracle to roll back the uncommitted transaction because the undo is now on disk (and not lost from memory)

Redo Strands
Redo consists of multiple strands
Since 10g, Oracle has introduced private strands for single-instance databases
This allows a process to manage its private strand of redo until it decides to commit
At commit time, the private strand is written into the public redo area and this allows LGWR to flush the redo to disk

IMU
Similarly, Oracle also manages undo "in memory" (using IMU pools)
This means that, for a short period or for small transactions, Undo is managed in memory rather than through undo segments
Therefore, Oracle doesn't have to track undo segment changes in the redo
This also allows bundling the undo for multiple changes into a single redo record, instead of separate redo records

RAC
In RAC, every instance has (a) a separate Redo Thread (b) a separate Undo Tablespace
However, the redo thread must be readable by every other instance -- as instance recovery by another (surviving) instance needs to read the redo
Similarly, the undo tablespace is read by any other instance because queries in instance 2 may need to read undo of instance 1 for read-consistency
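
In a RAC database, the per-instance Redo Threads and Undo Tablespaces can be seen with queries like these (a minimal sketch) :

select thread#, group#, sequence#, status
from v$log
order by thread#, group#
/

select inst_id, value as undo_tablespace
from gv$parameter
where name = 'undo_tablespace'
/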

03 October, 2016

Obj# and DataObj# in 12c AWR

The 12c AWR Report section on Segment Statistics now reports both Obj# (OBJECT_ID) and DataObj# (DATA_OBJECT_ID).  This is useful information when you have (table) objects undergoing TRUNCATEs during the workload.
A TRUNCATE (or a MOVE {or a REBUILD for an Index}) causes a reallocation of the data segment and a change of the DATA_OBJECT_ID.

Thus :

SQL> show user
USER is "HEMANT"
SQL> create table hkc_t_1 (id_col number);

Table created.

SQL> select object_id, data_object_id     
  2  from user_objects
  3  where object_name = 'HKC_T_1'
  4  and object_type = 'TABLE'
  5  /

 OBJECT_ID DATA_OBJECT_ID
---------- --------------
     94422          94422

SQL> insert into hkc_t_1 values (1);

1 row created.

SQL> truncate table hkc_t_1;

Table truncated.

SQL> select object_id, data_object_id
  2  from user_objects
  3  where object_name = 'HKC_T_1'
  4  and object_type = 'TABLE'
  5  /

 OBJECT_ID DATA_OBJECT_ID
---------- --------------
     94422          94423

SQL> 


The 12c AWR report shows both the Obj# and the DataObj#.

In my lab, I created a workload where two tables were TRUNCATED 100 times in a loop that would insert fresh rows after each TRUNCATE.  (Those familiar with Peoplesoft Batch Jobs would know this behaviour)
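
The loop was roughly of this shape.  (This is only a hypothetical sketch -- the table names LIST_TB_1, LIST_TB_2 and SOURCE_TB_1 are taken from the AWR extracts below; the real workload differed in detail.)

begin
  for i in 1 .. 100 loop
    execute immediate 'truncate table list_tb_1';
    execute immediate 'truncate table list_tb_2';
    insert into list_tb_1 select * from source_tb_1;
    insert into list_tb_2 select * from source_tb_1;
    commit;
  end loop;
end;
/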

Some extracts from the AWR show :

Segments by Physical Writes               DB/Inst: NONCDB/NONCDB  Snaps: 53-54
-> Total Physical Writes:         340,305
-> Captured Segments account for    0.7% of Total
-> When ** MISSING ** occurs, some of the object attributes may not be available

                     Tablespace
Owner                   Name
-------------------- ----------
                     Subobject  Obj.                            Physical
Object Name            Name     Type        Obj#   Dataobj#       Writes  %Total
-------------------- ---------- ----- ---------- ---------- ------------ -------
HEMANT               HEMANT
SOURCE_TB_1                     TABLE      94220      94220        1,575     .46
HEMANT               USERS
LIST_TB_2                       TABLE      94219      94231          263     .08
HEMANT               USERS
WORKLOAD_LOG                    TABLE      94221      94221           71     .02
SYS                  SYSTEM
COL$                            TABLE         21          2           46     .01
SYS                  SYSTEM
SEG$                            TABLE         14          8           45     .01
                          ------------------------------------------------------

Segments by Physical Write Requests       DB/Inst: NONCDB/NONCDB  Snaps: 53-54
-> Total Physical Write Requests:         175,206
-> Captured Segments account for   22.3% of Total
-> When ** MISSING ** occurs, some of the object attributes may not be available

                     Tablespace
Owner                   Name
-------------------- ----------
                     Subobject  Obj.                          Phys Write
Object Name            Name     Type        Obj#   Dataobj#     Requests  %Total
-------------------- ---------- ----- ---------- ---------- ------------ -------
HEMANT               USERS
LIST_TB_1                       TABLE      94218      94370        1,086     .62
HEMANT               USERS
LIST_TB_1                       TABLE      94218      94234          983     .56
HEMANT               USERS
LIST_TB_1                       TABLE      94218      94228          981     .56
HEMANT               USERS
LIST_TB_1                       TABLE      94218      94232          971     .55
HEMANT               USERS
LIST_TB_1                       TABLE      94218      94218          964     .55
                          ------------------------------------------------------

Segments by Table Scans                   DB/Inst: NONCDB/NONCDB  Snaps: 53-54
-> Total Table Scans:             243
-> Captured Segments account for   18.9% of Total
-> When ** MISSING ** occurs, some of the object attributes may not be available

                     Tablespace
Owner                   Name
-------------------- ----------
                     Subobject  Obj.                               Table
Object Name            Name     Type        Obj#   Dataobj#        Scans  %Total
-------------------- ---------- ----- ---------- ---------- ------------ -------
HEMANT               USERS
LIST_TB_1                       TABLE      94218      94240            1     .41
HEMANT               USERS
LIST_TB_1                       TABLE      94218      94248            1     .41
HEMANT               USERS
LIST_TB_1                       TABLE      94218      94388            1     .41
HEMANT               USERS
LIST_TB_1                       TABLE      94218      94224            1     .41
HEMANT               USERS
LIST_TB_1                       TABLE      94218      94234            1     .41
                          ------------------------------------------------------

Segments by DB Blocks Changes             DB/Inst: NONCDB/NONCDB  Snaps: 53-54
-> % of Capture shows % of DB Block Changes for each top segment compared
-> with total DB Block Changes for all segments captured by the Snapshot
-> When ** MISSING ** occurs, some of the object attributes may not be available

                     Tablespace
Owner                   Name
-------------------- ----------
                     Subobject  Obj.                            DB Block    % of
Object Name            Name     Type        Obj#   Dataobj#      Changes Capture
-------------------- ---------- ----- ---------- ---------- ------------ -------
** MISSING **        TEMP
** MISSING: -4001635 MISSING ** UNDEF 4.2550E+09    4218752       10,032    2.89
HEMANT               USERS
LIST_TB_2                       TABLE      94219      94235        7,616    2.20
HEMANT               USERS
LIST_TB_2                       TABLE      94219      94231        7,488    2.16
HEMANT               USERS
LIST_TB_2                       TABLE      94219      94403        7,392    2.13
HEMANT               USERS
LIST_TB_1                       TABLE      94218      94314        7,360    2.12
                          ------------------------------------------------------


These are the two target tables (with the current DATA_OBJECT_ID after 100 TRUNCATEs each) :

SQL> select object_name, object_id, data_object_id
  2  from user_objects
  3  where object_name like 'LIST_TB_%'
  4  and object_type = 'TABLE'
  5  order by 1
  6  /

OBJECT_NAME                     OBJECT_ID DATA_OBJECT_ID
------------------------------ ---------- --------------
LIST_TB_1                           94218          94418
LIST_TB_2                           94219          94419

SQL> 


The fact that the AWR report shows a Dataobj# different from the Obj# indicates that a TRUNCATE may have occurred sometime in the past (possibly even before the beginning of the AWR report window !).  Multiple entries for the same table indicate that multiple TRUNCATEs occurred within the AWR report window.
Note : Although both tables underwent 100 TRUNCATEs in the workload, each AWR section prints only the Top 5 segments, so only 5 occurrences of activity are visible.
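
To see every Dataobj# that AWR captured for these tables (not just the Top 5 rows printed in each section), the underlying AWR views can be queried directly.  A sketch (requires the Diagnostics Pack licence; snap 54 is the end snapshot of this report window) :

select o.object_name, s.obj#, s.dataobj#,
       sum(s.db_block_changes_delta) as block_changes
from dba_hist_seg_stat s
join dba_hist_seg_stat_obj o
  on o.dbid = s.dbid
 and o.ts# = s.ts#
 and o.obj# = s.obj#
 and o.dataobj# = s.dataobj#
where s.snap_id = 54
and o.object_name like 'LIST_TB%'
group by o.object_name, s.obj#, s.dataobj#
order by 1, 3
/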
.
.
.

26 September, 2016

SQLLoader DIRECT option and Unique Index

The DIRECT parameter on the SQL*Loader command line enables a Direct Path Load, which uses the Direct Path API instead of regular INSERT statements to load data into the target table.

However, one needs to know how it handles a Unique Index on the target table.
It actually leaves the Index UNUSABLE if duplicate values are loaded.  The Index rebuild fails, but the duplicate values remain in the table.

Here is a quick demo (this is in 12.1.0.2 MultiTenant).  I first set up the target table with a Unique Index.

[oracle@ora12102 Desktop]$ sqlplus hemant/hemant@PDB1

SQL*Plus: Release 12.1.0.2.0 Production on Mon Sep 26 22:36:51 2016

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Last Successful login time: Mon Sep 26 2016 22:26:16 +08:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options

SQL> show con_id

CON_ID
------------------------------
3
SQL> create table test_sqlldr_direct
  2  (id_column number,
  3  data_column varchar2(15))
  4  /

Table created.

SQL> create unique index test_sqlldr_direct_u1 on test_sqlldr_direct(id_column);

Index created.

SQL> insert into test_sqlldr_direct values (1, 'First Row');

1 row created.

SQL> commit;

Commit complete.

SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options
[oracle@ora12102 Desktop]$ 

Next, I set up the datafile (with a duplicate record) and the controlfile.

[oracle@ora12102 Desktop]$ ls -l
total 8
-rw-r--r-- 1 oracle oinstall   40 Sep 26 22:40 load_data.dat
-rw-r--r-- 1 oracle oinstall  165 Sep 26 22:45 load_control.ctl
[oracle@ora12102 Desktop]$ cat load_data.dat
2,'Second Row'
3,'Third Row'
3,'Oops !'
[oracle@ora12102 Desktop]$ cat load_control.ctl
LOAD DATA
INFILE load_data.dat
APPEND  INTO TABLE TEST_SQLLDR_DIRECT
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
TRAILING NULLCOLS
(
id_column,
data_column)
[oracle@ora12102 Desktop]$ 

I am now ready to run a Direct Path Load.

[oracle@ora12102 Desktop]$ sqlldr hemant/hemant@pdb1 control=load_control.ctl direct=TRUE

SQL*Loader: Release 12.1.0.2.0 - Production on Mon Sep 26 22:47:09 2016

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

Path used:      Direct

Load completed - logical record count 3.

Table TEST_SQLLDR_DIRECT:
  3 Rows successfully loaded.

Check the log file:
  load_control.log
for more information about the load.
[oracle@ora12102 Desktop]$ 

What is that ? 3 rows loaded successfully ?  So, the duplicate row also did get loaded ?  Let's check the log file.

[oracle@ora12102 Desktop]$ ls -ltr
total 12
-rw-r--r-- 1 oracle oinstall   40 Sep 26 22:40 load_data.dat
-rw-r--r-- 1 oracle oinstall  165 Sep 26 22:45 load_control.ctl
-rw-r--r-- 1 oracle oinstall 1833 Sep 26 22:47 load_control.log
[oracle@ora12102 Desktop]$ cat load_control.log

SQL*Loader: Release 12.1.0.2.0 - Production on Mon Sep 26 22:47:09 2016

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

Control File:   load_control.ctl
Data File:      load_data.dat
  Bad File:     load_data.bad
  Discard File:  none specified
 
 (Allow all discards)

Number to load: ALL
Number to skip: 0
Errors allowed: 50
Continuation:    none specified
Path used:      Direct

Table TEST_SQLLDR_DIRECT, loaded from every logical record.
Insert option in effect for this table: APPEND
TRAILING NULLCOLS option in effect

   Column Name                  Position   Len  Term Encl Datatype
------------------------------ ---------- ----- ---- ---- ---------------------
ID_COLUMN                           FIRST     *   ,  O(") CHARACTER            
DATA_COLUMN                          NEXT     *   ,  O(") CHARACTER            

The following index(es) on table TEST_SQLLDR_DIRECT were processed:
index HEMANT.TEST_SQLLDR_DIRECT_U1 was made unusable due to:
ORA-01452: cannot CREATE UNIQUE INDEX; duplicate keys found

Table TEST_SQLLDR_DIRECT:
  3 Rows successfully loaded.
  0 Rows not loaded due to data errors.
  0 Rows not loaded because all WHEN clauses were failed.
  0 Rows not loaded because all fields were null.

Bind array size not used in direct path.
Column array  rows :    5000
Stream buffer bytes:  256000
Read   buffer bytes: 1048576

Total logical records skipped:          0
Total logical records read:             3
Total logical records rejected:         0
Total logical records discarded:        0
Total stream buffers loaded by SQL*Loader main thread:        1
Total stream buffers loaded by SQL*Loader load thread:        0

Run began on Mon Sep 26 22:47:09 2016
Run ended on Mon Sep 26 22:47:11 2016

Elapsed time was:     00:00:01.88
CPU time was:         00:00:00.01
[oracle@ora12102 Desktop]$ 

Did you notice the section in the log file that says :
The following index(es) on table TEST_SQLLDR_DIRECT were processed:
index HEMANT.TEST_SQLLDR_DIRECT_U1 was made unusable due to:
ORA-01452: cannot CREATE UNIQUE INDEX; duplicate keys found

Apparently, the Index is left UNUSABLE.

[oracle@ora12102 Desktop]$ sqlplus hemant/hemant@pdb1

SQL*Plus: Release 12.1.0.2.0 Production on Mon Sep 26 22:50:51 2016

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Last Successful login time: Mon Sep 26 2016 22:47:09 +08:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options

SQL> select status from user_indexes     
  2  where index_name = 'TEST_SQLLDR_DIRECT_U1'        
  3  /

STATUS
--------
UNUSABLE

SQL> select * from test_sqlldr_direct order by 1;

 ID_COLUMN DATA_COLUMN
---------- ---------------
  1 First Row
  2 'Second Row'
  3 'Third Row'
  3 'Oops !'

SQL> alter index test_sqlldr_direct_u1 rebuild;
alter index test_sqlldr_direct_u1 rebuild
*
ERROR at line 1:
ORA-01452: cannot CREATE UNIQUE INDEX; duplicate keys found


SQL> 

We can see the duplicated row for ID_COLUMN=3 and find that the Index cannot be rebuilt.  Oracle has allowed duplicate rows to load and left the Index UNUSABLE.

So, if you are planning to use DIRECT=TRUE and have a Unique Index, make sure you check the status of the Index and/or check the Log file before you proceed with processing the data.
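
A post-load check can be as simple as this (a minimal sketch, using the table and index from this demo) :

select index_name, status
from user_indexes
where table_name = 'TEST_SQLLDR_DIRECT'
/

-- if the index is UNUSABLE, locate the duplicate keys before deciding how to clean up
select id_column, count(*)
from test_sqlldr_direct
group by id_column
having count(*) > 1
/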


Conversely, here is how the data is handled without DIRECT=TRUE (a Conventional Path load) :

SQL> truncate table test_sqlldr_direct;

Table truncated.

SQL> insert into test_sqlldr_direct values (1,'First Row');

1 row created.

SQL> select status from user_indexes
  2  where index_name = 'TEST_SQLLDR_DIRECT_U1'
  3  /

STATUS
--------
VALID

SQL> 

[oracle@ora12102 Desktop]$ rm load_control.log
[oracle@ora12102 Desktop]$ sqlldr hemant/hemant@PDB1 control=load_control.ctl

SQL*Loader: Release 12.1.0.2.0 - Production on Mon Sep 26 22:59:58 2016

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

Path used:      Conventional
Commit point reached - logical record count 3

Table TEST_SQLLDR_DIRECT:
  2 Rows successfully loaded.

Check the log file:
  load_control.log
for more information about the load.
[oracle@ora12102 Desktop]$ 
[oracle@ora12102 Desktop]$ ls -ltr
total 16
-rw-r--r-- 1 oracle oinstall   40 Sep 26 22:40 load_data.dat
-rw-r--r-- 1 oracle oinstall  165 Sep 26 22:45 load_control.ctl
-rw-r--r-- 1 oracle oinstall   11 Sep 26 22:59 load_data.bad
-rw-r--r-- 1 oracle oinstall 1668 Sep 26 22:59 load_control.log
[oracle@ora12102 Desktop]$ cat load_data.bad
3,'Oops !'
[oracle@ora12102 Desktop]$ cat load_control.log

SQL*Loader: Release 12.1.0.2.0 - Production on Mon Sep 26 22:59:58 2016

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

Control File:   load_control.ctl
Data File:      load_data.dat
  Bad File:     load_data.bad
  Discard File:  none specified
 
 (Allow all discards)

Number to load: ALL
Number to skip: 0
Errors allowed: 50
Bind array:     64 rows, maximum of 256000 bytes
Continuation:    none specified
Path used:      Conventional

Table TEST_SQLLDR_DIRECT, loaded from every logical record.
Insert option in effect for this table: APPEND
TRAILING NULLCOLS option in effect

   Column Name                  Position   Len  Term Encl Datatype
------------------------------ ---------- ----- ---- ---- ---------------------
ID_COLUMN                           FIRST     *   ,  O(") CHARACTER            
DATA_COLUMN                          NEXT     *   ,  O(") CHARACTER            

Record 3: Rejected - Error on table TEST_SQLLDR_DIRECT.
ORA-00001: unique constraint (HEMANT.TEST_SQLLDR_DIRECT_U1) violated


Table TEST_SQLLDR_DIRECT:
  2 Rows successfully loaded.
  1 Row not loaded due to data errors.
  0 Rows not loaded because all WHEN clauses were failed.
  0 Rows not loaded because all fields were null.


Space allocated for bind array:                  33024 bytes(64 rows)
Read   buffer bytes: 1048576

Total logical records skipped:          0
Total logical records read:             3
Total logical records rejected:         1
Total logical records discarded:        0

Run began on Mon Sep 26 22:59:58 2016
Run ended on Mon Sep 26 22:59:58 2016

Elapsed time was:     00:00:00.07
CPU time was:         00:00:00.00
[oracle@ora12102 Desktop]$ 

SQL> select * from test_sqlldr_direct 
  2  order by id_column
  3  /

 ID_COLUMN DATA_COLUMN
---------- ---------------
  1 First Row
  2 'Second Row'
  3 'Third Row'

SQL> 
SQL> select status from user_indexes
  2  where index_name = 'TEST_SQLLDR_DIRECT_U1'
  3  /

STATUS
--------
VALID

SQL> 

The duplicate row was REJECTED and went to the BAD file.
.
.
.
So, the next time you choose to use SQLLDR DIRECT=TRUE for its performance benefits, make sure you validate the status of any Unique Indexes and check the log file !  Else, you might allow duplicates into the database if the incoming data has not been cleansed before loading.
.
.
.
If you are running 11g, the behaviour is the same in 11g.
.
.
.
.

18 September, 2016

SQL*Net Message Waits

Here are some extracts from an 11.2.0.4 AWR Report.  I ran a simulated workload on this server for about 40 minutes and generated this report.  I understand that some DBAs may be misinterpreting SQL*Net message wait times.

Host Name        Platform                         CPUs Cores Sockets Memory(GB)
---------------- -------------------------------- ---- ----- ------- ----------
ora11204         Linux x86 64-bit                    2     2       1       3.04

              Snap Id      Snap Time      Sessions Curs/Sess
            --------- ------------------- -------- ---------
Begin Snap:       158 18-Sep-16 21:42:34        36       1.0
  End Snap:       159 18-Sep-16 22:23:01        33       1.0
   Elapsed:               40.45 (mins)
   DB Time:               42.04 (mins)

Load Profile                    Per Second   Per Transaction  Per Exec  Per Call
~~~~~~~~~~~~~~~            ---------------   --------------- --------- ---------
             DB Time(s):               1.0              10.2      0.12      0.07
              DB CPU(s):               0.0               0.1      0.00      0.00
      Redo size (bytes):           2,523.6          24,793.8
  Logical read (blocks):              41.7             409.6


Top 10 Foreground Events by Total Wait Time
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
                                            Tota    Wait   % DB
Event                                 Waits Time Avg(ms)   time Wait Class
------------------------------ ------------ ---- ------- ------ ----------
log file sync                         1,095 2508    2291   99.5 Commit
db file sequential read                 162 37.5     231    1.5 User I/O
DB CPU                                      19.1             .8
SQL*Net message to client            31,579   .1       0     .0 Network
Disk file operations I/O                103    0       0     .0 User I/O


Wait Classes by Total Wait Time
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
                                                        Avg             Avg
                                        Total Wait     Wait   % DB   Active
Wait Class                  Waits       Time (sec)     (ms)   time Sessions
---------------- ---------------- ---------------- -------- ------ --------
Commit                      1,095            2,509     2291   99.5      1.0
System I/O                 12,899            2,210      171   87.6      0.9
User I/O                    1,866               38       20    1.5      0.0
DB CPU                                          19              .8      0.0
Network                    33,651                9        0     .4      0.0


Foreground Wait Class                       DB/Inst: ORCL/orcl  Snaps: 158-159
-> s  - second, ms - millisecond -    1000th of a second
-> ordered by wait time desc, waits desc
-> %Timeouts: value of 0 indicates value was < .5%.  Value of null is truly 0
-> Captured Time accounts for        101.7%  of Total DB time       2,522.36 (s)
-> Total FG Wait Time:             2,546.18 (s)  DB CPU time:          19.14 (s)


                                                                  Avg
                                      %Time       Total Wait     wait
Wait Class                      Waits -outs         Time (s)     (ms)  %DB time
-------------------- ---------------- ----- ---------------- -------- ---------
Commit                          1,095     0            2,509     2291      99.5
User I/O                          276     0               38      136       1.5
DB CPU                                                    19                0.8
Network                        31,579     0                0        0       0.0
Concurrency                        21     0                0        0       0.0

Foreground Wait Events                     DB/Inst: ORCL/orcl  Snaps: 158-159
-> s  - second, ms - millisecond -    1000th of a second
-> Only events with Total Wait Time (s) >= .001 are shown
-> ordered by wait time desc, waits desc (idle events last)
-> %Timeouts: value of 0 indicates value was < .5%.  Value of null is truly 0


                                                             Avg
                                        %Time Total Wait    wait    Waits   % DB
Event                             Waits -outs   Time (s)    (ms)     /txn   time
-------------------------- ------------ ----- ---------- ------- -------- ------
log file sync                     1,095     0      2,509    2291      4.4   99.5
db file sequential read             162     0         37     231      0.7    1.5
SQL*Net message to client        31,579     0          0       0    127.9     .0
Disk file operations I/O            103     0          0       0      0.4     .0
latch: shared pool                    2     0          0       4      0.0     .0
direct path sync                      2     0          0       2      0.0     .0
db file scattered read                6     0          0       0      0.0     .0
jobq slave wait                   5,522   100      2,770     502     22.4
SQL*Net message from clien       31,577     0      2,404      76    127.8


Wait Event Histogram                       DB/Inst: ORCL/orcl  Snaps: 158-159
-> Units for Total Waits column: K is 1000, M is 1000000, G is 1000000000
-> % of Waits: value of .0 indicates value was <.05%; value of null is truly 0
-> % of Waits: column heading of <=1s is truly <1024ms, >1s is truly >=1024ms
-> Ordered by Event (idle events last)

                                                    % of Waits
                                 -----------------------------------------------
                           Total
Event                      Waits  <1ms  <2ms  <4ms  <8ms <16ms <32ms  <=1s   >1s
-------------------------- ----- ----- ----- ----- ----- ----- ----- ----- -----
ARCH wait on ATTACH           37                                      97.3   2.7
ARCH wait on DETACH           37 100.0
Disk file operations I/O     920  99.9                            .1
LGWR wait for redo copy       54 100.0
Parameter File I/O           640 100.0
SQL*Net break/reset to cli     6 100.0
SQL*Net message to client  33.6K 100.0
...
SQL*Net message from clien 34.2K  82.7   9.5   2.7   1.6    .7   2.4    .3    .0


Other Instance Activity Stats              DB/Inst: ORCL/orcl  Snaps: 158-159
-> Ordered by statistic name

Statistic                                     Total     per Second     per Trans
-------------------------------- ------------------ -------------- -------------
SQL*Net roundtrips to/from clien             31,579           13.0         127.9



Should I be worried about the SQL*Net message waits (to client, from client) ?  How should I interpret them ?  I have more than 30K SQL*Net messages and roundtrips.  Apparently, messages from the client are very slow -- at 76ms/message.  Apparently, round-trips are very infrequent -- at only 13 round-trips per second.  Here's a hint :  In this workload that I created, all the client sessions were on the database server itself -- so you shouldn't be expecting such large wait times !

SQL*Net message to client waits are not really measured in terms of the time it took for the message to reach the client.  Oracle doesn't know how long the transmission took.  It only knows the time taken to put the message onto the TCP stack on the database server itself.  Subsequent network transmission time is unknown. That is why SQL*Net message to client will always be an extremely low figure in terms of time -- because it isn't a true measure of elapsed time sending a message to a client.

SQL*Net message from client is not just the time spent on the network.  It is the time from the last message sent to the client up to the next message received from the client.  Thus, it also includes client overheads (like "think time", CPU time and the network stack on the client) besides transmission over the network.  In most cases, the major component of SQL*Net message from client is client "think time" -- how long the client spent before formatting and sending the next SQL call to the database server.  This also means that it is not always true that SQL*Net message from client is an idle event.

If I have a single client that is doing some or all of :
a. Loading data in sets of rows (e.g. an ETL client)
b. Extracting data in sets of rows
c. Retrieving results and formatting the results for presentation to a user
d. Running a batch job that consists mainly of SQL calls, not PLSQL
then the SQL*Net message from client time is mostly time spent by the client (unless you really have a high-latency network).  In most such cases, when tracing the individual session, this wait event is NOT an idle event.  However, when reading an AWR report, you cannot isolate such a session from the other sessions that are doing a mix of activity -- some with real interactive end-users, some sending periodic "heart-beat" messages, some completely idle waiting for a request from a user.  In this AWR report, there is a mix of clients with different "think times", some completely idle for 40 minutes.  We can't identify them individually in the AWR report.

Can you use the SQL*Net roundtrips to/from client figure from the AWR ?  Not if you have a mix of different clients doing different forms of activity.  Idle clients will have very few roundtrips in the 40 minutes, while clients extracting data row-by-row (not in PLSQL, but in SQL or some other client like Java) would have a high number of roundtrips.  So, you can't separate the two types of behaviour in an AWR report.

If you are really concerned about identifying SQL*Net overheads and/or round-trips, you should *trace* the specific individual session of interest and extract figures from the trace file.
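
For example, something like this (a minimal sketch -- the SID and SERIAL# are placeholders for the session you have identified) :

-- enable SQL trace, including wait events, for the target session
exec dbms_monitor.session_trace_enable(session_id => 123, serial_num => 45, waits => TRUE, binds => FALSE);

-- ... let the session run its workload ...

exec dbms_monitor.session_trace_disable(session_id => 123, serial_num => 45);

-- then summarise the resulting trace file on the server, e.g.
-- tkprof <tracefile>.trc <report>.prf sys=no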
.
.
.


05 September, 2016

CODE : View My Source Code -- a Function

If you need to view the source code of a stored program you need to either :
a.  Be the owner of the program
b.  Have EXECUTE privilege on the program
c.  Have EXECUTE ANY ... privilege or the DBA role
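
With one of those privileges, the source is visible through the data dictionary.  For example (a minimal sketch -- it assumes the user has been granted EXECUTE on HR.ADD_JOB_HISTORY) :

select text
from all_source
where owner = 'HR'
and name = 'ADD_JOB_HISTORY'
and type = 'PROCEDURE'
order by line
/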

If you are not the owner of the program, the owner can grant you the ability to view, but not modify, the program.

Here's the code for a PL/SQL Function that allows this.  (I wrote this as a Function -- and not as a PL/SQL Procedure -- to be similar to the DBMS_METADATA.GET_DDL Function).

Imagine  that HR is the Application Schema and the owner of Tables and Programs.  Imagine that HEMANT has not been granted the DBA role or an EXECUTE privilege but needs access to view the source code.

Here's a quick method.  ("Quick" meaning that this code can be improved further -- e.g. by adding an Audit Trail and by better Error Handling).

SQL> connect hr/oracle
Connected.
SQL>
SQL> drop table authorized_view_source purge;

Table dropped.

SQL>
SQL> create table authorized_view_source
  2     (username  varchar2(30),
  3      object_type varchar2(23),
  4      object_name varchar2(30))
  5  /

Table created.

SQL>
SQL> create or replace function view_my_source(object_type_in in varchar2, object_name_in in varchar2)
  2  return clob
  3  as
  4  return_clob clob;
  5  line_out varchar2(4000);
  6  line_count pls_integer;
  7  line_no pls_integer;
  8  verify_count pls_integer;
  9  return_source  clob;
 10
 11  begin
 12   select count(*) into verify_count from authorized_view_source
 13  --  check if any of these three predicates fail
 14   where username = user
 15   and object_type = object_type_in
 16   and object_name = object_name_in;
 17
 18   if verify_count = 0 then
 19  -- don't tell if the object exists or not
 20   raise_application_error(-20001,'You are not authorized to view the source code of this object');
 21   return('FAILURE');
 22
 23   else
 24
 25   select count(*) into line_count from user_source
 26   where 1=1
 27   and type = object_type_in
 28   and name = object_name_in;
 29
 30   return_clob := ' ';
 31
 32   for line_no in 1..line_count
 33   loop
 34   return_clob := return_clob || line_out;
 35   select text into line_out from user_source
 36   where 1=1
 37   and type = object_type_in
 38   and name = object_name_in
 39   and line = line_no;
 40   end loop;
 41   return_clob := return_clob || line_out;
 42
 43   return return_clob;
 44   end if;
 45
 46  end view_my_source;
 47  /

Function created.
                                                                                                    
SQL>
SQL> show errors
No errors.
SQL> grant execute on view_my_source to hemant;

Grant succeeded.

SQL>
SQL> -- list all code objects
SQL> col object_name format a30
SQL> select object_type, object_name
  2  from user_objects
  3  where object_type not in ('TABLE','INDEX','VIEW')
  4  order by object_type, object_name
  5  /

OBJECT_TYPE             OBJECT_NAME
----------------------- ------------------------------
FUNCTION                VIEW_MY_SOURCE
PACKAGE                 ANOTHER_PKG
PACKAGE                 DEFINE_MY_VARIABLES
PACKAGE                 DUMMY_PKG
PACKAGE BODY            ANOTHER_PKG
PACKAGE BODY            DUMMY_PKG
PROCEDURE               ADD_JOB_HISTORY
PROCEDURE               SECURE_DML
SEQUENCE                DEPARTMENTS_SEQ
SEQUENCE                EMPLOYEES_SEQ
SEQUENCE                LOCATIONS_SEQ
TRIGGER                 SECURE_EMPLOYEES
TRIGGER                 UPDATE_JOB_HISTORY

13 rows selected.

SQL>
SQL> -- store list of authorzed access
SQL> -- e.g. HEMANT can't view the source for
SQL> --    "ANOTHER_PKG" and "VIEW_MY_SOURCE"
SQL> insert into authorized_view_source
  2  select 'HEMANT', object_type, object_name
  3  from user_objects
  4  where object_type not in ('TABLE','INDEX','VIEW')
  5  and object_name not in ('ANOTHER_PKG','VIEW_MY_SOURCE')
  6  /

10 rows created.

SQL> commit;

Commit complete.

SQL>
SQL> select count(*) from authorized_view_source
  2  where username = 'HEMANT'
  3  /

  COUNT(*)
----------
        10

SQL>
SQL>


So, HR has created a Table to list the Users that are authorized to view a certain list of stored programs.  This table, AUTHORIZED_VIEW_SOURCE, excludes "ANOTHER_PKG" and "VIEW_MY_SOURCE" from the authorized list for HEMANT.
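
(If HR later decides to allow HEMANT to view, say, ANOTHER_PKG as well, only a row needs to be added -- a minimal sketch :)

insert into authorized_view_source
values ('HEMANT', 'PACKAGE', 'ANOTHER_PKG');
commit;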

Let's see what HEMANT can do :

SQL>
SQL> connect hemant/hemant
Connected.
SQL> -- the return type is a CLOB, so we SET LONG
SQL> set long 1000000
SQL>
SQL> select hr.view_my_source('PACKAGE','DEFINE_MY_VARIABLES') from dual ;

HR.VIEW_MY_SOURCE('PACKAGE','DEFINE_MY_VARIABLES')
--------------------------------------------------------------------------------
 package
define_my_variables
authid definer
is
  my_application varchar2(25) := 'Human Resources';                             
  my_base_schema varchar2(25) := 'HR';                                          
end;                                                                            


SQL>
SQL> select hr.view_my_source('PACKAGE BODY','DUMMY_PKG') from dual ;

HR.VIEW_MY_SOURCE('PACKAGEBODY','DUMMY_PKG')
--------------------------------------------------------------------------------
 package body dummy_pkg as
procedure dummy_proc is
 begin
  raise_application_error (-20001,'Dummy Procedure');                           
  null;                                                                         
 end;                                                                           
end;                                                                            


SQL>
SQL> select hr.view_my_source('TRIGGER','SECURE_EMPLOYEES') from dual;

HR.VIEW_MY_SOURCE('TRIGGER','SECURE_EMPLOYEES')
--------------------------------------------------------------------------------
 TRIGGER secure_employees
  BEFORE INSERT OR UPDATE OR DELETE ON employees
BEGIN
  secure_dml;                                                                   
END secure_employees;                                                           


SQL>
SQL> -- these two should raise an error
SQL> select hr.view_my_source('PACKAGE','ANOTHER_PKG') from dual;
ERROR:
ORA-20001: You are not authorized to view the source code of this object
ORA-06512: at "HR.VIEW_MY_SOURCE", line 20



no rows selected

SQL> select hr.view_my_source('FUNCTION','VIEW_MY_SOURCE') from dual;
ERROR:
ORA-20001: You are not authorized to view the source code of this object
ORA-06512: at "HR.VIEW_MY_SOURCE", line 20



no rows selected

SQL>
SQL> select hr.view_my_source('PACKAGE','NONEXISTENT') from dual;
ERROR:
ORA-20001: You are not authorized to view the source code of this object
ORA-06512: at "HR.VIEW_MY_SOURCE", line 20



no rows selected

SQL> 


This shows that HEMANT can view the source code of programs listed in the AUTHORIZED_VIEW_SOURCE table and not any others.   The way I have defined the "You are not authorized ..." message, it doesn't even reveal whether the requested program exists or not.
.
.
.