Wednesday, 30 December 2015

Error while starting 12C OMS


Today I got the following error while trying to start the 12c OMS. The error message was found in the emctl log file at <12c_install_home>/gc_inst/em/EMGC_OMS1/sysman/log/emctl.log. Below was the error message :-

2015-12-30 14:41:24,136 [Thread-1] INFO wls.OMSController run.1744 - <OUT>Jython scans all the jar files it can find at first startup. Depending on the system, this process may take a few minutes to complete, and WLST may not return a prompt right away.
2015-12-30 14:41:24,136 [Thread-1] INFO wls.OMSController run.1744 - <OUT>
2015-12-30 14:41:24,771 [Thread-2] INFO wls.OMSController run.1744 - <ERR>*sys-package-mgr*: can't create package cache dir, '/tmp/WLSTTemporacle/packages'
2015-12-30 14:41:45,764 [Thread-2] INFO wls.OMSController run.1744 - <ERR>java.io.IOException: Permission denied


# Interestingly, the solution was simply to correct the permissions on the /tmp directory (I set them to 755) and make sure /tmp is readable and writable. Sometimes the permissions on the tmp directory change, and a 'Permission denied' error is raised when WLST tries to create the 'packages' directory for the package cache. In this case the directory is '/tmp/WLSTTemporacle', but it could also be /tmp/wlstTemp or /var/tmp/wlstTemp depending on the platform. Regardless of the platform, the solution is the same as mentioned above.
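
A minimal sketch of that fix, demonstrated on a scratch directory so it is safe to run anywhere. On the real host the target is /tmp itself; note that the conventional mode for /tmp is 1777 (world-writable with the sticky bit), which is what the sketch applies:

```shell
# Stand-in for /tmp so the demo does not touch the real system directory.
DEMO_TMP=$(mktemp -d)

# Restore the conventional /tmp mode: rwxrwxrwt (world-writable, sticky bit).
chmod 1777 "$DEMO_TMP"

# Confirm the mode actually took effect (GNU stat, i.e. Linux).
MODE=$(stat -c '%a' "$DEMO_TMP")
echo "mode is now $MODE"

rmdir "$DEMO_TMP"
```

On the real host you would run the chmod against /tmp as root, then retry starting the OMS.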

Troubleshooting the 'Pending' status for a Listener Target in Enterprise Manager 12c and 13c Cloud Control


What Does the 'Pending' Status of a Listener Mean?

# Sometimes the target has just been added and its status cannot yet be determined; in other cases the OMS has, for some reason, not yet received the status of the target. The status 'Pending' actually means that the OMS has not yet received the 'availability' status of the target from the agent. The 'Pending' status (clock icon) for a listener target should only be temporary (for example, when a new target has just been added, or when a target has just come out of a blackout period). A listener target that remains in 'Pending' status for a long time should be investigated. From 12.1.0.4 onwards, additional status types were introduced, for example 'Status Pending (Target Addition in Progress)'.

If a listener is showing as status 'pending' in Cloud Control, it means that the 'Response' metric has not been able to determine whether the listener is down or up. Sometimes even though a listener is shown as status 'pending' in the console, the agent has evaluated the status of the listener correctly but this status has not been received by the OMS.  Sometimes however, the agent is not able to evaluate the status correctly (eg. in the case of target being marked as 'broken', or target not existing at agent end).

To obtain the status of the listener target, the agent runs the 'Response' metric every 5 minutes. The Response metric runs an OS script called lsnrresp.pl, found in <install home of agent>/plugins/oracle.sysman.db.agent.plugin_<pluginversion>/scripts. The lsnrresp.pl script performs a tnsping to the listener machine/port and confirms that the listener was started with the 'correct' listener.ora file (the listener.ora file shown as the 'listener.ora location' on the listener home page). The listener target in Cloud Control is tied to a particular listener.ora file and directory (unlike in previous versions of Enterprise Manager). This means that, for example, a listener in $TNS_ADMIN and a listener in $ORACLE_HOME/network/admin cannot be swapped for one another.
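
The reachability half of that check can be approximated with a plain TCP connect to the listener's host and port (tnsping additionally performs a TNS-level handshake, which this sketch does not attempt). The host/port values below are hypothetical, and a throwaway local socket stands in for the listener so the sketch is safe to run anywhere:

```shell
LSNR_HOST=127.0.0.1   # hypothetical -- use your listener's host
LSNR_PORT=15210       # hypothetical -- a real listener is usually on 1521

# Throwaway local "listener" so the probe below has something to hit.
python3 - <<'EOF' &
import socket
srv = socket.socket()
srv.bind(("127.0.0.1", 15210))
srv.listen(1)
srv.settimeout(15)
try:
    srv.accept()          # return (and exit) once the probe connects
except socket.timeout:
    pass
EOF

# Probe the port, retrying a few times while the stand-in listener starts.
STATUS=down
for _ in 1 2 3 4 5; do
    if python3 -c "import socket,sys; s=socket.socket(); s.settimeout(2); sys.exit(s.connect_ex(('$LSNR_HOST', $LSNR_PORT)))"; then
        STATUS=up
        break
    fi
    sleep 1
done
echo "listener port is $STATUS"
```

On a real host you would instead run tnsping against the listener's address (or lsnrctl status from the listener's Oracle home) to check the same thing at the TNS level.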


There is a very useful My Oracle Support note I found on this: 'Troubleshooting the 'Pending' status for a Listener Target in Enterprise Manager 12c and 13c Cloud Control' (Doc ID 1546576.1).

Sunday, 29 November 2015

Working with Fast Start Failover (FSFO) in DATA GUARD


Fast-start failover is a feature that allows a primary database to fail over automatically to a designated target standby database when certain error conditions occur.

Prerequisites of FSFO

# First we will check whether flashback is enabled on both the primary and the standby

# The FastStartFailoverTarget broker property should be set to the target standby database

# Enable fast_start failover

# Start the observer
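
Steps 2-4 above can be collected into a small DGMGRL command file (step 1, the flashback check, is plain SQL and is shown below). This sketch only writes the file; the database names prod and sby1 are the ones used later in this post, and in practice the commands would be run in a DGMGRL session connected as sys@prod, with the observer usually started from a separate host:

```shell
# Write the broker commands for the FSFO prerequisites to a scratch file.
CMDFILE=$(mktemp)
cat > "$CMDFILE" <<'EOF'
edit database prod set property 'FastStartFailoverTarget' = 'sby1';
enable fast_start failover;
start observer
EOF
echo "wrote $(wc -l < "$CMDFILE") DGMGRL commands to $CMDFILE"
```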

__________________________________________________________________________

Let's check on the primary database

SQL> select flashback_on from v$database;

FLASHBACK_ON
------------------
YES

SQL> @role

DATABASE_ROLE    OPEN_MODE
---------------- --------------------
PRIMARY          READ WRITE

_____________________________________________

Let's check on the standby database

SQL> @role

DATABASE_ROLE    OPEN_MODE
---------------- --------------------
PHYSICAL STANDBY MOUNTED

SQL> select flashback_on from v$database;

FLASHBACK_ON
------------------
YES

DGMGRL> connect sys@prod
Password:
Connected.
DGMGRL> enable database sby1;
Enabled.
DGMGRL>
DGMGRL>
DGMGRL> show configuration

Configuration - dgconf1

  Protection Mode: MaxPerformance
  Databases:
    prod - Primary database
    sby1 - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

DGMGRL>
DGMGRL> show fast_start failover

Fast-Start Failover: DISABLED

  Threshold:        30 seconds
  Target:           (none)
  Observer:         (none)
  Lag Limit:        30 seconds
  Shutdown Primary: TRUE
  Auto-reinstate:   TRUE

Configurable Failover Conditions
  Health Conditions:
    Corrupted Controlfile          YES
    Corrupted Dictionary           YES
    Inaccessible Logfile            NO
    Stuck Archiver                  NO
    Datafile Offline               YES

  Oracle Error Conditions:
    (none)

DGMGRL> enable fast_start failover condition '1578';
Succeeded.
DGMGRL>
DGMGRL> show fast_start failover

Fast-Start Failover: DISABLED

  Threshold:        30 seconds
  Target:           (none)
  Observer:         (none)
  Lag Limit:        30 seconds
  Shutdown Primary: TRUE
  Auto-reinstate:   TRUE

Configurable Failover Conditions
  Health Conditions:
    Corrupted Controlfile          YES
    Corrupted Dictionary           YES
    Inaccessible Logfile            NO
    Stuck Archiver                  NO
    Datafile Offline               YES

  Oracle Error Conditions:
    ORA-01578: ORACLE data block corrupted (file # %s, block # %s)


## We will simulate an insufficient privilege error and invoke fast start failover


DGMGRL> enable fast_start failover condition '1031';
Succeeded.
DGMGRL> show fast_start failover

Fast-Start Failover: DISABLED

  Threshold:        30 seconds
  Target:           (none)
  Observer:         (none)
  Lag Limit:        30 seconds
  Shutdown Primary: TRUE
  Auto-reinstate:   TRUE

Configurable Failover Conditions
  Health Conditions:
    Corrupted Controlfile          YES
    Corrupted Dictionary           YES
    Inaccessible Logfile            NO
    Stuck Archiver                  NO
    Datafile Offline               YES

  Oracle Error Conditions:
    ORA-01578: ORACLE data block corrupted (file # %s, block # %s)
    ORA-01031: insufficient privileges


DGMGRL> show database verbose prod

Database - prod

  Role:            PRIMARY
  Intended State:  TRANSPORT-ON
  Instance(s):
    prod

  Properties:
    DGConnectIdentifier             = 'prod'
    ObserverConnectIdentifier       = ''
    LogXptMode                      = 'ASYNC'
    DelayMins                       = '0'
    Binding                         = 'optional'
    MaxFailure                      = '0'
    MaxConnections                  = '1'
    ReopenSecs                      = '300'
    NetTimeout                      = '30'
    RedoCompression                 = 'DISABLE'
    LogShipping                     = 'ON'
    PreferredApplyInstance          = ''
    ApplyInstanceTimeout            = '0'
    ApplyParallel                   = 'AUTO'
    StandbyFileManagement           = 'auto'
    ArchiveLagTarget                = '0'
    LogArchiveMaxProcesses          = '4'
    LogArchiveMinSucceedDest        = '1'
    DbFileNameConvert               = ''
    LogFileNameConvert              = ''
    FastStartFailoverTarget         = ''
    StatusReport                    = '(monitor)'
    InconsistentProperties          = '(monitor)'
    InconsistentLogXptProps         = '(monitor)'
    SendQEntries                    = '(monitor)'
    LogXptStatus                    = '(monitor)'
    RecvQEntries                    = '(monitor)'
    HostName                        = 'node4.oracle.com'
    SidName                         = 'prod'
    StaticConnectIdentifier         = '(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=node4.oracle.com)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=prod_DGMGRL)(INSTANCE_NAME=prod)(SERVER=DEDICATED)))'
    StandbyArchiveLocation          = 'USE_DB_RECOVERY_FILE_DEST'
    AlternateLocation               = ''
    LogArchiveTrace                 = '0'
    LogArchiveFormat                = '%t_%s_%r.dbf'
    TopWaitEvents                   = '(monitor)'

Database Status:
SUCCESS

DGMGRL>
DGMGRL> edit database prod set property 'FastStartFailoverTarget' = 'sby1';
Property "FastStartFailoverTarget" updated

DGMGRL> enable fast_start failover ;
Enabled.

Now we will simulate the failure and invoke error number 1031 (insufficient privileges)

SQL> conn scott/s
Connected.
SQL> alter system set log_checkpoints_to_alert=true;
alter system set log_checkpoints_to_alert=true
*
ERROR at line 1:
ORA-01031: insufficient privileges


SQL> /
alter system set log_checkpoints_to_alert=true
*
ERROR at line 1:
ORA-01031: insufficient privileges


SQL> /
alter system set log_checkpoints_to_alert=true
*
ERROR at line 1:
ORA-01031: insufficient privileges


DGMGRL>
DGMGRL> start observer
Observer started

14:08:19.81  Sunday, November 29, 2015
Initiating Fast-Start Failover to database "sby1"...
Performing failover NOW, please wait...
Failover succeeded, new primary is "sby1"
14:08:21.11  Sunday, November 29, 2015


DGMGRL> connect sys@sby1
Password:
Connected.
DGMGRL> stop observer
Error: ORA-16636: fast-start failover target standby in error state, cannot stop observer

Failed.

DGMGRL> disable fast_start failover;
Error: ORA-16652: fast-start failover target standby database is disabled

Failed.
DGMGRL> disable fast_start failover force;
Disabled.
DGMGRL> stop observer
Done.
DGMGRL>
DGMGRL> reinstate database prod;
Reinstating database "prod", please wait...
Operation requires shutdown of instance "prod" on database "prod"
Shutting down instance "prod"...
ORA-01109: database not open

Database dismounted.
ORACLE instance shut down.
Operation requires startup of instance "prod" on database "prod"
Starting instance "prod"...
ORACLE instance started.
Database mounted.
Continuing to reinstate database "prod" ...
Reinstatement of database "prod" succeeded


DGMGRL> show fast_start failover

Fast-Start Failover: DISABLED

  Threshold:        30 seconds
  Target:           (none)
  Observer:         (none)
  Lag Limit:        30 seconds
  Shutdown Primary: TRUE
  Auto-reinstate:   TRUE

Configurable Failover Conditions
  Health Conditions:
    Corrupted Controlfile          YES
    Corrupted Dictionary           YES
    Inaccessible Logfile            NO
    Stuck Archiver                 YES
    Datafile Offline               YES

  Oracle Error Conditions:
    ORA-01578: ORACLE data block corrupted (file # %s, block # %s)
    ORA-01031: insufficient privileges


You can specify any Oracle error number as a fast-start failover condition, except ORA-600 and ORA-7445.

How to do a switchover in Data Guard with the Data Guard Broker


Below you can see I have a primary database prod and a standby database sby1. In this demo we will perform a switchover, so that the standby (sby1) becomes the primary and the primary (prod) becomes the standby.

DGMGRL>  show configuration

Configuration - dgconf1

  Protection Mode: MaxPerformance
  Databases:
    prod - Primary database
    sby1 - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

DGMGRL> switchover to sby1;
Performing switchover NOW, please wait...
New primary database "sby1" is opening...
Operation requires shutdown of instance "prod" on database "prod"
Shutting down instance "prod"...
ORA-01109: database not open

Database dismounted.
ORACLE instance shut down.
Operation requires startup of instance "prod" on database "prod"
Starting instance "prod"...
ORACLE instance started.
Database mounted.
Switchover succeeded, new primary is "sby1"

DGMGRL>
DGMGRL>   show configuration

Configuration - dgconf1

  Protection Mode: MaxPerformance
  Databases:
    sby1 - Primary database
    prod - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS



How to enable flashback in Standby database


Below is the command to start the MRP process

SQL>  recover managed standby database disconnect;
Media recovery complete.

Below we get an error because MRP is still active and applying the changes

SQL>  alter database flashback on;
 alter database flashback on
*
ERROR at line 1:
ORA-01153: an incompatible media recovery is active

Below is the command to stop the MRP process

SQL>  recover managed standby database cancel;
Media recovery complete.

So now we will be able to enable flashback

SQL>  alter database flashback on;

Database altered.

Below is the command to start the MRP process

SQL>  recover managed standby database disconnect;
Media recovery complete.



Changing the DelayMins property of a standby through the broker


Sometimes we don't want the MRP process to apply redo changes to the standby database immediately; in such cases we can use the DelayMins property. Let us see how we can work with it :-

My standby database is sby1

DGMGRL> edit database sby1 set property 'DelayMins' = '2';

At primary

SQL> @role

DATABASE_ROLE    OPEN_MODE
---------------- --------------------
PRIMARY          READ WRITE

SQL> @logswitch

System altered.

### On standby

SQL> select archived,sequence#,applied,to_Char(completion_Time,'dd-mon-yyyy hh24:mi:ss') from v$archived_log
...
....
......

ARC  SEQUENCE# APPLIED   TO_CHAR(COMPLETION_T
--- ---------- --------- --------------------
YES         70 YES       29-nov-2015 10:02:19
YES         71 NO        29-nov-2015 10:02:36

## As we can see Applied is NO

## Now, after we set DelayMins to 0, MRP will immediately apply the changes to the standby

DGMGRL> edit database sby1 set property 'DelayMins' = '0';
Property "DelayMins" updated

## And now we will see that the APPLIED column shows YES

SQL> @arch
..
.....
........
ARC  SEQUENCE# APPLIED   TO_CHAR(COMPLETION_T
--- ---------- --------- --------------------
YES         70 YES       29-nov-2015 10:02:19
YES         71 YES       29-nov-2015 10:02:36
YES         72 YES       29-nov-2015 10:02:55
YES         73 YES       29-nov-2015 10:06:06

## Finally we confirm DelayMins, along with all the other properties, for our standby database sby1

DGMGRL> show database verbose sby1

Database - sby1

  Role:            PHYSICAL STANDBY
  Intended State:  APPLY-ON
  Transport Lag:   0 seconds
  Apply Lag:       0 seconds
  Real Time Query: OFF
  Instance(s):
    sby1

  Properties:
    DGConnectIdentifier             = 'sby1'
    ObserverConnectIdentifier       = ''
    LogXptMode                      = 'ASYNC'
    DelayMins                       = '0'
    Binding                         = 'OPTIONAL'
    MaxFailure                      = '0'
    MaxConnections                  = '1'
    ReopenSecs                      = '300'
    NetTimeout                      = '30'
    RedoCompression                 = 'DISABLE'
    LogShipping                     = 'ON'
    PreferredApplyInstance          = ''
    ApplyInstanceTimeout            = '0'
    ApplyParallel                   = 'AUTO'
    StandbyFileManagement           = 'auto'
    ArchiveLagTarget                = '0'
    LogArchiveMaxProcesses          = '4'
    LogArchiveMinSucceedDest        = '1'
    DbFileNameConvert               = 'prod, sby1'
    LogFileNameConvert              = 'prod, sby1'
    FastStartFailoverTarget         = ''
    StatusReport                    = '(monitor)'
    InconsistentProperties          = '(monitor)'
    InconsistentLogXptProps         = '(monitor)'
    SendQEntries                    = '(monitor)'
    LogXptStatus                    = '(monitor)'
    RecvQEntries                    = '(monitor)'
    HostName                        = 'node4.example.com'
    SidName                         = 'sby1'
    StaticConnectIdentifier         = '(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=node4.example.com)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=sby1_DGMGRL)(INSTANCE_NAME=sby1)(SERVER=DEDICATED)))'
    StandbyArchiveLocation          = 'USE_DB_RECOVERY_FILE_DEST'
    AlternateLocation               = ''
    LogArchiveTrace                 = '0'
    LogArchiveFormat                = '%t_%s_%r.dbf'
    TopWaitEvents                   = '(monitor)'

Database Status:
SUCCESS

Saturday, 28 November 2015

_Optimizer_ignore_hints can be changed at session level

I was recently asked by one of my clients whether we can change the value of the initialization parameter _optimizer_ignore_hints at the session level. The answer is yes, we can change the value of this parameter at the session level. Here is a test of it on one of my instances, running database version 11.2.0.1.0.

SQL*Plus: Release 11.2.0.1.0 Production on Sat Nov 28 22:38:30 2015

Copyright (c) 1982, 2009, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL>  alter session set "_optimizer_ignore_hints" = TRUE ;

Session altered.

How to configure a physical standby in 11g

My primary database name will be PROD

Standby database name will be SBY1

Below are the contents of the primary pfile :-

[oracle@node4 u01]$ cat /u01/initprod.ora
prod.__db_cache_size=322961408
prod.__java_pool_size=4194304
prod.__large_pool_size=4194304
prod.__oracle_base='/u01/app/oracle'#ORACLE_BASE set from environment
prod.__pga_aggregate_target=314572800
prod.__sga_target=524288000
prod.__shared_io_pool_size=0
prod.__shared_pool_size=176160768
prod.__streams_pool_size=8388608
*.archive_lag_target=0
*.audit_file_dest='/u01/app/oracle/admin/prod/adump'
*.audit_trail='db'
*.compatible='11.2.0.0.0'
*.control_files='/u01/app/oracle/oradata/prod/control01.ctl'
*.db_block_size=8192
*.db_domain=''
*.db_name='prod'
*.db_recovery_file_dest_size=10737418240
*.db_recovery_file_dest='/u01/app/oracle/flash_recovery_area'
*.db_unique_name='prod'
*.dg_broker_start=TRUE
*.diagnostic_dest='/u01/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=prodXDB)'
*.log_archive_config='dg_config=(prod,sby1)'
*.log_archive_dest_1='location=USE_DB_RECOVERY_FILE_DEST','valid_for=(ALL_LOGFILES, ALL_ROLES)'
*.log_archive_dest_2='service=sby1 valid_for=(primary_role,online_logfiles) db_unique_name=sby1'
*.log_archive_dest_state_2='ENABLE'
prod.log_archive_format='%t_%s_%r.dbf'
*.log_archive_max_processes=4
*.log_archive_min_succeed_dest=1
prod.log_archive_trace=0
*.open_cursors=300
*.pga_aggregate_target=314572800
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.service_names='prod'
*.sga_target=524288000
*.standby_file_management='auto'
*.undo_retention=3600
*.undo_tablespace='UNDOTBS1'


Below are the contents of the standby pfile :-

[oracle@node4 u01]$ cat initsby1.ora
*.audit_file_dest='/u01/app/oracle/admin/sby1/adump'
*.audit_trail='db'
*.compatible='11.2.0.0.0'
*.control_files='/u01/app/oracle/oradata/sby1/control01.ctl'#Restore Controlfile
*.db_block_size=8192
*.db_domain=''
*.db_name='prod'
*.db_recovery_file_dest_size=10737418240
*.db_recovery_file_dest='/u01/app/oracle/flash_recovery_area'
*.diagnostic_dest='/u01/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=sby1XDB)'
*.open_cursors=300
*.pga_aggregate_target=314572800
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=524288000
*.undo_tablespace='UNDOTBS1'

log_archive_Config='dg_config=(prod,sby1)'
log_archive_Dest_2='service=prod valid_for=(primary_role,online_logfiles) db_unique_name=prod'

db_unique_name=sby1
service_names=sby1
standby_File_management=auto
db_file_name_convert='prod','sby1'
log_file_name_convert='prod','sby1'
_____________________________________

Make sure you create the appropriate directories mentioned below :-

/u01/app/oracle/admin/sby1/adump
/u01/app/oracle/oradata/sby1
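
A minimal sketch of that step; on the real host the base directory would be /u01/app/oracle, but the demo below uses a scratch directory so it is safe to run anywhere:

```shell
# Stand-in for /u01/app/oracle on the real host.
BASE=$(mktemp -d)

# Create the audit destination and datafile directory the standby pfile expects.
mkdir -p "$BASE/admin/sby1/adump" "$BASE/oradata/sby1"

# Confirm both directories now exist.
ls -d "$BASE/admin/sby1/adump" "$BASE/oradata/sby1"
```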

Start the standby instance in nomount mode

SQL> create spfile from pfile='/u01/initsby1.ora';

File created.

SQL> startup force nomount
ORACLE instance started.

Total System Global Area  521936896 bytes
Fixed Size                  2214936 bytes
Variable Size             159384552 bytes
Database Buffers          356515840 bytes
Redo Buffers                3821568 bytes

Below are the network file entries for TNS and the listener :-

[oracle@node4 ~]$ cat /u01/app/oracle/product/11.2.0/dbhome_1/network/admin/tnsnames.ora
# tnsnames.ora Network Configuration File: /u01/app/oracle/product/11.2.0/dbhome_1/network/admin/tnsnames.ora
# Generated by Oracle configuration tools.

PROD =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = node4.example.com)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = prod)
    )
  )

CAT =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = node4.example.com)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = cat)
    )
  )

SBY1 =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = node4.example.com)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = sby1)
    )
  )

[oracle@node4 ~]$ cat /u01/app/oracle/product/11.2.0/dbhome_1/network/admin/listener.ora
# listener.ora Network Configuration File: /u01/app/oracle/product/11.2.0/dbhome_1/network/admin/listener.ora
# Generated by Oracle configuration tools.

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = sby1)
      (ORACLE_HOME = /u01/app/oracle/product/11.2.0/dbhome_1)
      (SID_NAME = sby1)
    )
    (SID_DESC =
      (GLOBAL_DBNAME = prod)
      (ORACLE_HOME = /u01/app/oracle/product/11.2.0/dbhome_1)
      (SID_NAME = prod)
    )
    (SID_DESC =
      (GLOBAL_DBNAME = sby1_DGMGRL)
      (ORACLE_HOME = /u01/app/oracle/product/11.2.0/dbhome_1)
      (SID_NAME = sby1)
    )
    (SID_DESC =
      (GLOBAL_DBNAME = prod_DGMGRL)
      (ORACLE_HOME = /u01/app/oracle/product/11.2.0/dbhome_1)
      (SID_NAME = prod)
    )
  )

LISTENER =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = node4.example.com)(PORT = 1521))
  )

ADR_BASE_LISTENER = /u01/app/oracle


Do the following at the RMAN prompt on the primary database :-

[oracle@node4 ~]$ rman target/

Recovery Manager: Release 11.2.0.1.0 - Production on Sat Nov 28 22:14:23 2015

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

connected to target database: PROD (DBID=308370143)

RMAN> connect auxiliary sys@sby1

auxiliary database Password:
connected to auxiliary database: PROD (not mounted)

RMAN>  duplicate target database for standby  from active database;

Starting Duplicate Db at 28-NOV-15
using target database control file instead of recovery catalog
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=20 device type=DISK

contents of Memory Script:
{
   backup as copy reuse
   targetfile  '/u01/app/oracle/product/11.2.0/dbhome_1/dbs/orapwprod' auxiliary format
 '/u01/app/oracle/product/11.2.0/dbhome_1/dbs/orapwsby1'   ;
}
executing Memory Script

Starting backup at 28-NOV-15
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=51 device type=DISK
Finished backup at 28-NOV-15

contents of Memory Script:
{
   backup as copy current controlfile for standby auxiliary format  '/u01/app/oracle/oradata/sby1/control01.ctl';
}
executing Memory Script

Starting backup at 28-NOV-15
using channel ORA_DISK_1
channel ORA_DISK_1: starting datafile copy
copying standby control file
output file name=/u01/app/oracle/product/11.2.0/dbhome_1/dbs/snapcf_prod.f tag=TAG20151128T221444 RECID=4 STAMP=896998484
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:01
Finished backup at 28-NOV-15

contents of Memory Script:
{
   sql clone 'alter database mount standby database';
}
executing Memory Script

sql statement: alter database mount standby database

contents of Memory Script:
{
   set newname for tempfile  1 to
 "/u01/app/oracle/oradata/sby1/temo02.dbf";
   switch clone tempfile all;
   set newname for datafile  1 to
 "/u01/app/oracle/oradata/sby1/system01.dbf";
   set newname for datafile  2 to
 "/u01/app/oracle/oradata/sby1/sysaux01.dbf";
   set newname for datafile  3 to
 "/u01/app/oracle/oradata/sby1/undotbs01.dbf";
   set newname for datafile  4 to
 "/u01/app/oracle/oradata/sby1/users01.dbf";
   set newname for datafile  5 to
 "/u01/app/oracle/oradata/sby1/example01.dbf";
   set newname for datafile  6 to
 "/u01/app/oracle/oradata/sby1/tbs1a.dbf";
   set newname for datafile  7 to
 "/u01/app/oracle/oradata/sby1/undo2.dbf";
   backup as copy reuse
   datafile  1 auxiliary format
 "/u01/app/oracle/oradata/sby1/system01.dbf"   datafile
 2 auxiliary format
 "/u01/app/oracle/oradata/sby1/sysaux01.dbf"   datafile
 3 auxiliary format
 "/u01/app/oracle/oradata/sby1/undotbs01.dbf"   datafile
 4 auxiliary format
 "/u01/app/oracle/oradata/sby1/users01.dbf"   datafile
 5 auxiliary format
 "/u01/app/oracle/oradata/sby1/example01.dbf"   datafile
 6 auxiliary format
 "/u01/app/oracle/oradata/sby1/tbs1a.dbf"   datafile
 7 auxiliary format
 "/u01/app/oracle/oradata/sby1/undo2.dbf"   ;
   sql 'alter system archive log current';
}
executing Memory Script

executing command: SET NEWNAME

renamed tempfile 1 to /u01/app/oracle/oradata/sby1/temo02.dbf in control file

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

Starting backup at 28-NOV-15
using channel ORA_DISK_1
channel ORA_DISK_1: starting datafile copy
input datafile file number=00001 name=/u01/app/oracle/oradata/prod/system01.dbf
output file name=/u01/app/oracle/oradata/sby1/system01.dbf tag=TAG20151128T221450
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:25
channel ORA_DISK_1: starting datafile copy
input datafile file number=00002 name=/u01/app/oracle/oradata/prod/sysaux01.dbf
output file name=/u01/app/oracle/oradata/sby1/sysaux01.dbf tag=TAG20151128T221450
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:15
channel ORA_DISK_1: starting datafile copy
input datafile file number=00003 name=/u01/app/oracle/oradata/prod/undotbs01.dbf
output file name=/u01/app/oracle/oradata/sby1/undotbs01.dbf tag=TAG20151128T221450
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:03
channel ORA_DISK_1: starting datafile copy
input datafile file number=00005 name=/u01/app/oracle/oradata/prod/example01.dbf
output file name=/u01/app/oracle/oradata/sby1/example01.dbf tag=TAG20151128T221450
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:03
channel ORA_DISK_1: starting datafile copy
input datafile file number=00006 name=/u01/app/oracle/oradata/prod/tbs1a.dbf
output file name=/u01/app/oracle/oradata/sby1/tbs1a.dbf tag=TAG20151128T221450
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting datafile copy
input datafile file number=00007 name=/u01/app/oracle/oradata/prod/undo2.dbf
output file name=/u01/app/oracle/oradata/sby1/undo2.dbf tag=TAG20151128T221450
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting datafile copy
input datafile file number=00004 name=/u01/app/oracle/oradata/prod/users01.dbf
output file name=/u01/app/oracle/oradata/sby1/users01.dbf tag=TAG20151128T221450
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:01
Finished backup at 28-NOV-15

sql statement: alter system archive log current

contents of Memory Script:
{
   switch clone datafile all;
}
executing Memory Script

datafile 1 switched to datafile copy
input datafile copy RECID=4 STAMP=896998540 file name=/u01/app/oracle/oradata/sby1/system01.dbf
datafile 2 switched to datafile copy
input datafile copy RECID=5 STAMP=896998540 file name=/u01/app/oracle/oradata/sby1/sysaux01.dbf
datafile 3 switched to datafile copy
input datafile copy RECID=6 STAMP=896998540 file name=/u01/app/oracle/oradata/sby1/undotbs01.dbf
datafile 4 switched to datafile copy
input datafile copy RECID=7 STAMP=896998540 file name=/u01/app/oracle/oradata/sby1/users01.dbf
datafile 5 switched to datafile copy
input datafile copy RECID=8 STAMP=896998540 file name=/u01/app/oracle/oradata/sby1/example01.dbf
datafile 6 switched to datafile copy
input datafile copy RECID=9 STAMP=896998540 file name=/u01/app/oracle/oradata/sby1/tbs1a.dbf
datafile 7 switched to datafile copy
input datafile copy RECID=10 STAMP=896998540 file name=/u01/app/oracle/oradata/sby1/undo2.dbf
Finished Duplicate Db at 28-NOV-15

Add standby logfiles with the command below (the SQL*Plus substitution variable &a supplies the group number and is reused in the file name) :-

 alter database add standby logfile group &a ('/u01/app/oracle/oradata/sby1/&a') size 50m;

Make sure you add one extra log group on the standby for the standby logfiles (one more standby logfile group than the number of online logfile groups).
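
A quick way to remember the sizing rule behind the "one extra group" advice: standby redo log groups per thread = online redo log groups per thread + 1, each sized the same as the online logs (50m in this setup). ONLINE_GROUPS below is a hypothetical value; query v$log on the primary for the real count:

```shell
# Hypothetical count of online redo log groups -- check v$log for the real number.
ONLINE_GROUPS=3

# Standby redo log groups needed: one more than the online groups.
STANDBY_GROUPS=$((ONLINE_GROUPS + 1))

echo "add $STANDBY_GROUPS standby logfile groups, each sized 50m to match the online logs"
```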

#Finally start the MRP Process

SQL> recover managed standby database disconnect;
Media recovery complete.

SQL> select archived,sequence#,applied,to_Char(completion_Time,'dd-mon-yyyy hh24:mi:ss') from v$archived_log

ARC  SEQUENCE# APPLIED   TO_CHAR(COMPLETION_T
--- ---------- --------- --------------------
YES         57 YES       28-nov-2015 22:17:27
YES         58 YES       28-nov-2015 22:17:28

How to configure a logical standby


The first step is to configure a physical standby; below is a link to my blog post where you can see how to configure one :-

Configure a physical standby

After configuring the physical standby, follow the simple steps below to configure a logical standby :-

Do the following steps on the standby database :-

SQL> select archived,sequence#,applied,to_Char(completion_Time,'dd-mon-yyyy hh24:mi:ss') from v$archived_log;
...
....
......

ARC  SEQUENCE# APPLIED   TO_CHAR(COMPLETION_T
--- ---------- --------- --------------------
YES         43 YES       28-nov-2015 21:15:26
YES         44 YES       28-nov-2015 21:15:28
YES         45 YES       28-nov-2015 21:36:11

Stop the MRP Process

SQL> recover managed standby database cancel;
Media recovery complete.

Do the below step on the primary database

Build a Dictionary in the Redo Data

A LogMiner dictionary must be built into the redo data so that the LogMiner component of SQL Apply can properly interpret changes it sees in the redo. As part of building the LogMiner dictionary, supplemental logging is automatically set up to log primary key and unique-constraint/index columns. The supplemental logging information ensures each update contains enough information to logically identify each row that is modified by the statement.

To build the LogMiner dictionary, issue the following statement:

SQL> EXECUTE DBMS_LOGSTDBY.BUILD;

The DBMS_LOGSTDBY.BUILD procedure waits for all existing transactions to complete. Long-running transactions executed on the primary database will affect the timeliness of this command.

Do the below steps on the standby database

SQL> alter database recover to logical standby sby1;

Database altered.

Shut down the standby instance and bring it to the mount state

SQL> shu immediate
ORA-01507: database not mounted


ORACLE instance shut down.
SQL> startup mount
ORACLE instance started.

Total System Global Area  521936896 bytes
Fixed Size                  2214936 bytes
Variable Size             159384552 bytes
Database Buffers          356515840 bytes
Redo Buffers                3821568 bytes
Database mounted.

SQL> alter database open resetlogs;

Database altered.

SQL> alter database start logical standby apply immediate;

Database altered.

SQL> desc v$logstdby_progress
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 APPLIED_SCN                                        NUMBER
 APPLIED_TIME                                       DATE
 RESTART_SCN                                        NUMBER
 RESTART_TIME                                       DATE
 LATEST_SCN                                         NUMBER
 LATEST_TIME                                        DATE
 MINING_SCN                                         NUMBER
 MINING_TIME                                        DATE
 RESETLOGS_ID                                       NUMBER

SQL> select APPLIED_SCN,LATEST_SCN from v$logstdby_progress;

APPLIED_SCN LATEST_SCN
----------- ----------
    1294315    1294373



Friday, 27 November 2015

My session at Sangam 15

Hello everyone, Sangam 15 was a great event with more than 800 participants at Hyderabad, There were some great sessions from Tim hall, Francisco Munoz Alvarez, Satyendra Kumar, Sai Ram, Kamran Agayev. My session at Sangam 15 was on SQL Plan Management and what is new in 12C. Below are some pictures which i am sharing from the event :-





Monday, 16 November 2015

Sangam 15

http://www.sangam15.com/

Sangam, the largest independent Oracle user’s group conference will feature over 90+ in-depth technical sessions presented by expert speakers from inside and outside of Oracle. Over 800+ database administrators, developers and architects professionals are expected at the conference from November 21st to 22nd 2015 at HICC in Hyderabad. This is our moment of glory and time for you to join us!
After the first year of registering as a user group and getting to understand the interests of Oracle user community in India, through technology days (TechDay) and technology nights (TechNight) in various cities around the country, we launched a user conference calling speakers from around the world to share their knowledge and experience with users in India. Our first conference was Sangam09 held in September 2009 in Bangalore. We invited Tom Kyte to provide three, half day seminar series. Along with the seminar by Tom Kyte, we also invited key speakers from inside and outside Oracle to do sessions for the remaining period of the day. This was the first event conducted by us at such a large scale. We had a lot to learn from the interests of attendees and how to reach out to those who could not attend. Our second conference Sangam10 was held in September 2010 in Hyderabad. We invited Jonathan Lewis to provide two, half day seminar series. Along with the seminar by Jonathan Lewis, we also invited key speakers from inside and outside Oracle to do sessions for the remaining period of the day. Based on a survey we conducted at Sangam09 and Sangam10, we discovered that Oracle users in India preferred to have this event towards the end of the week/weekend. So they could use their personal time to learn and educate themselves. Sangam11 which was hosted on Friday and Saturday was an outcome of this survey and request from our users. 

Our previous key speakers
Sangam14 - Tom Kyte, Maria Colgan, Jim Viscusi
Sangam13 – Arup Nanda
Sangam12 – Tom Kyte, Andrew Holdsworth, Graham Wood
Sangam11 – Arup Nanda
Sangam10 – Jonathan Lewis
Sangam09 – Tom Kyte 

Please have a look at our previous Sangam events:
Sangam14 – Day1Day2Day3
Sangam13 – Day1Day2
Sangam12 – Day1Day2
Sangam11 – Day1Day2Video1video2
Sangam10 – Day1Day2
Sangam09 – Day1Day2Day3

Agenda
As part of the survey in Sangam15, we asked attendees who they would like to listen to and learn from in 2014. Apart from key sessions, we had breakout sessions from various key speakers from around the world. We had a great response from several Oracle experts from around the world. To ensure that the papers selected match our user’s interests, we requested attendees from previous Sangam conference to help us with the selection process. Through a ranking system with 1 being the low and 10 being the high we requested our volunteers to rank the various abstracts and thus the papers with the highest scores were selected.

Tuesday, 3 November 2015

ORA-65093: multitenant container database not set up properly

Today when i was trying to mount my Far Sync Instance, i got the following error :-

SQL> startup mount
ORACLE instance started.

Total System Global Area  218103808 bytes
Fixed Size                  2922712 bytes
Variable Size             159385384 bytes
Database Buffers           50331648 bytes
Redo Buffers                5464064 bytes

ORA-65093: multitenant container database not set up properly

## This error is applicable for multitenant 12C Database option

## Lets see what this indicates

[oracle@node4 u01]$ oerr ora 65093
65093, 00000, "multitenant container database not set up properly"
// *Cause:  An attempt was made to open a multitenant container database without
//          the correct parameter set for a multitenant container database in
//          the initialization parameter file.
// *Action: Set the 'enable_pluggable_database=true' parameter for the
//          multitenant container database in the initialization parameter file
//          and restart the database.
//


## To resolve this we need to make sure that initialization parameter enable_pluggable_database   is set to true

SQL> show parameter enable_pluggable_database

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
enable_pluggable_database            boolean     TRUE

## Then i was able to mount the Far Sync Instance

SQL>  startup mount
ORACLE instance started.

Total System Global Area  272629760 bytes
Fixed Size                  2923336 bytes
Variable Size             213910712 bytes
Database Buffers           50331648 bytes
Redo Buffers                5464064 bytes
Database mounted.













Friday, 30 October 2015

Data Guard Redo Transport Encryption

Data Guard Redo Transport Encryption

There are cases when redo encryption is enabled and encryption must be enabled between primary and far sync instance as well as the far sync and other standby terminal databases, This is due to the redo being unencrypted when taken off the wire at the Far Sync instance.

Advanced Security Option network encryption has been available since Oracle version 7,  For example, enabling Advanced Encryption Standard (AES) encryption algorithm requires only a few parameter changes in sqlnet.ora file. No certificate or directory setup is required and only requires restart of the database.  With Oracle Database Version 11g Release 2, network encryption (native network encryption and SSL/TLS) and strong authentication services (Kerberos, PKI, and RADIUS) are no longer part of the Oracle Advanced Security Option and are available in all licensed editions of all supported releases of the Oracle database. See My Oracle Support Note 749947.1 for details on enabling transport encryption.

In order to enable encryption of the redo information, follow these requirements:

# Install Oracle Advanced Security option at both the primary and standby database.

# Set up the appropriate sqlnet.ora parameters as documented in the Oracle Advanced Security manual to allow Oracle Net to encrypt and integrity checksum the redo traffic shipped to the standby.

Oracle Advanced Security Transparent Data Encryption

ENCRYPTION_WALLET_LOCATION = (SOURCE =
                                  (METHOD = FILE)
                                  (METHOD_DATA =
                                  (DIRECTORY =
                                   /oracle/dbsid/admin/pdcs11/wallet)))

Oracle Advanced Security Network Encryption

#ASO Encryption

sqlnet.encryption_server=accepted
sqlnet.encryption_client=requested
sqlnet.encryption_types_server=(RC4_40)
sqlnet.encryption_types_client=(RC4_40)

Oracle Advanced Security Network Data Integrity

#ASO Checksum

sqlnet.crypto_checksum_server=requested
sqlnet.crypto_checksum_client=requested
sqlnet.crypto_checksum_types_server = (MD5)
sqlnet.crypto_checksum_types_client = (MD5)

SSL

#SSL

WALLET_LOCATION = (SOURCE=
                          (METHOD = FILE)
                          (METHOD_DATA =
                           DIRECTORY=/wallet)

SSL_CIPHER_SUITES=(SSL_DH_anon_WITH_RC4_128_MD5)
SSL_VERSION= 3
SSL_CLIENT_AUTHENTICATION=FALSE 

12C Data guard Broker ORA-01017: invalid username/password; logon denied


## While doing switchover and failover with 12C Broker i kept on investigating for a long time why invalid username password error is coming even though the password and password file is same for primary and standby, i later realized that the password was not complex and that was the reason i was getting this error. Let us see now for a switchover and failover example.

DGMGRL> switchover to prd ;
Performing switchover NOW, please wait...
Operation requires a connection to instance "prd" on database "prd"
Connecting to instance "prd"...
ORA-01017: invalid username/password; logon denied

Warning: You are no longer connected to ORACLE.

        connect to instance "prd" of database "prd"

DGMGRL> switchover to prd ;
not logged on
DGMGRL> connect sys@drs
Password:
ORA-01017: invalid username/password; logon denied

## Now i made sure my password is complex

DGMGRL> connect sys@drs
Password:
Connected as SYSDBA.
DGMGRL> switchover to prd ;
Performing switchover NOW, please wait...
Operation requires a connection to instance "prd" on database "prd"
Connecting to instance "prd"...
Connected as SYSDBA.
New primary database "prd" is opening...
Operation requires startup of instance "drs" on database "drs"
Starting instance "drs"...
ORACLE instance started.
Database mounted.
Switchover succeeded, new primary is "prd"
DGMGRL> show configuration

Configuration - myconfig1

  Protection Mode: MaxPerformance
  Databases:
  prd - Primary database
    drs - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

Thursday, 29 October 2015

Playing with DBMS_COMPARISON package in 11G

One of a good feature in 11G is the ability to compare and converge the data with the DBMS_COMPARISON package, The DBMS_COMPARISON package provides interfaces to compare and converge database objects at different databases. The best part is we can compare tables with different names and columns also provided the data type is same, also it is possible to compare a table with a materialzied view.

Some prerequisites are there in order to use this package:

1. The source database must be an Oracle 11g Release 1 or later.

2. The destination database must be an Oracle database.

3. The character set must be the same.

  1  BEGIN
  2    DBMS_COMPARISON.CREATE_COMPARISON
  3    ( comparison_name => 'compare_t1_and_t22'
  4    , schema_name     => 'scott'
  5    , object_name     => 'T1'
  6    , dblink_name     => null
  7    , remote_schema_name=> 'SCOTT'
  8    , remote_object_name=>'T2'
  9    );
 10* END;
SQL> /
BEGIN
*
ERROR at line 1:
ORA-23626: No eligible index on table SCOTT.T1
ORA-06512: at "SYS.DBMS_COMPARISON", line 4793
ORA-06512: at "SYS.DBMS_COMPARISON", line 448
ORA-06512: at line 2

## Lets see what error explains :-

[oracle@node4 ~]$ oerr ora 23626
23626, 00000, "No eligible index on table %s.%s"
// *Cause:  Table did not have an eligible index to use for comparison.
// *Action: For a table to be compared using DBMS_COMPARISON, the table must
//          have an appropriate index on a column of certain data types.
//          Check the documentation for supported data types.


## So we need an index on the column but it should be the primary key one not the non unique one

  1  BEGIN
  2    DBMS_COMPARISON.CREATE_COMPARISON
  3    ( comparison_name => 'compare_t1_and_t22'
  4    , schema_name     => 'scott'
  5    , object_name     => 'T1'
  6    , dblink_name     => null
  7    , remote_schema_name=> 'SCOTT'
  8    , remote_object_name=>'T2'
  9    );
 10* END;
SQL> /

PL/SQL procedure successfully completed.

## Let us now compare both the objects


  1  DECLARE
  2    consistent   BOOLEAN;
  3    scan_info    DBMS_COMPARISON.COMPARISON_TYPE;
  4  BEGIN
  5    consistent := DBMS_COMPARISON.COMPARE
  6                  ( comparison_name =>'compare_t1_and_t22'
  7                  , scan_info       => scan_info , perform_row_dif => TRUE);
  8    DBMS_OUTPUT.PUT_LINE('Scan ID: '||scan_info.scan_id);
  9    IF consistent=TRUE THEN
 10      DBMS_OUTPUT.PUT_LINE('No differences were found.');
 11    ELSE
 12      DBMS_OUTPUT.PUT_LINE('Differences were found.');
 13    END IF;
 14* END;
SQL> /
Scan ID: 1
Differences were found.

PL/SQL procedure successfully completed.

## So there were differences found, let us see them in below query


  1  SELECT s.scan_id
  2  ,      c.COMPARISON_NAME
  3  ,      c.SCHEMA_NAME
  4  ,      c.OBJECT_NAME
  5  ,      s.CURRENT_DIF_COUNT
  6  FROM   USER_COMPARISON c
  7  ,      USER_COMPARISON_SCAN_SUMMARY s
  8  WHERE  c.COMPARISON_NAME = s.COMPARISON_NAME
  9  AND    s.scan_id =
 10* 1
SQL> /

   SCAN_ID COMPARISON_NAME                SCHEMA_NAME                    OBJECT_NAME                    CURRENT_DIF_COUNT
---------- ------------------------------ ------------------------------ ------------------------------ -----------------
         1 COMPARE_T1_AND_T22             SCOTT                          T1                                             2

## And lets see the values


  1  SELECT c.COLUMN_NAME
  2  ,      r.INDEX_VALUE
  3  ,      case
  4         when r.LOCAL_ROWID is null
  5         then 'No'
  6         else 'Yes'
  7         end  LOCAL_ROWID
  8  ,      case
  9         when r.REMOTE_ROWID is null
 10         then 'No'
 11         else 'Yes'
 12         end  REMOTE_ROWID
 13  FROM   USER_COMPARISON_COLUMNS c
 14  ,      USER_COMPARISON_ROW_DIF r
 15  ,      USER_COMPARISON_SCAN s
 16  WHERE  c.COMPARISON_NAME = 'COMPARE_T1_AND_T22'
 17  AND    r.SCAN_ID         = s.SCAN_ID
 18  AND    r.STATUS          = 'DIF'
 19  --AND    c.INDEX_COLUMN    = 'Y'
 20  AND    c.COMPARISON_NAME = r.COMPARISON_NAME
 21  ORDER
 22* BY     r.INDEX_VALUE
SQL> /

COLUMN_NAME                    INDEX_VALUE                    LOC REM
------------------------------ ------------------------------ --- ---
C2                             1                              Yes No
ID                             1                              Yes No
C2                             2                              No  Yes
ID                             2                              No  Yes


## Lets now do the converging part by making sure local wins


  1  DECLARE
  2    scan_info    DBMS_COMPARISON.COMPARISON_TYPE;
  3  BEGIN
  4    DBMS_COMPARISON.CONVERGE
  5    ( comparison_name  => 'compare_t1_and_t22'
  6    , scan_id          => 1
  7    , scan_info        => scan_info
  8    , converge_options => DBMS_COMPARISON.CMP_CONVERGE_LOCAL_WINS
  9    );
 10    DBMS_OUTPUT.PUT_LINE('Local Rows Merged: '||scan_info.loc_rows_merged);
 11    DBMS_OUTPUT.PUT_LINE('Remote Rows Merged: '||scan_info.rmt_rows_merged);
 12    DBMS_OUTPUT.PUT_LINE('Local Rows Deleted: '||scan_info.loc_rows_deleted);
 13    DBMS_OUTPUT.PUT_LINE('Remote Rows Deleted: '||scan_info.rmt_rows_deleted);
 14* END;
 15  /
Local Rows Merged: 0
Remote Rows Merged: 1
Local Rows Deleted: 0
Remote Rows Deleted: 1

PL/SQL procedure successfully completed.





Wednesday, 30 September 2015

Integrated Replicat in Goldengate

Integrated Replicat or also known as Inetgrated Deliver has been introduced from 11.2.0.4 onwards that means it is also available in 12C, In integrated mode, the Replicat process leverages the apply processing functionality that is available within the Oracle database. In this mode, Replicat operates as follows:

# Reads the Oracle GoldenGate trail.

# Performs data filtering, mapping, and conversion.

# Constructs logical change records (LCR) that represent source database DML transactions (in committed order). DDL is applied directly by Replicat.

# Attaches to a background process in the target database known as a database inbound server by means of a lightweight streaming interface.

# Transmits the LCRs to the inbound server, which applies the data to the target database.


Below is an image which lists the example of an integrated apply with 2 Apply Servers






Errors in Goldengate Scenario 4

Scenario 4


ERROR   OGG-00396  Command 'MAP' not terminated by semi-colon.

ERROR   OGG-01668  PROCESS ABENDING.

## This error message clearly specifies MAP command not finished with semi- colon, so below is how we fixed it :-

## Now you will see that semicolon is clearly mentioned :-

GGSCI (node4.example.com as ogguser@euro) 61> edit params rep1

GGSCI (node4.example.com as ogguser@euro) 62> view params rep1

REPLICAT REP1
ASSUMETARGETDEFS
USERID ogguser, PASSWORD o
MAP scott.emp4, TARGET scott.emp4;


GGSCI (node4.example.com as ogguser@euro) 63> start replicat rep1

Sending START request to MANAGER ...
REPLICAT REP1 starting


GGSCI (node4.example.com as ogguser@euro) 64> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING                                        
REPLICAT    RUNNING     REP1        00:00:00      00:05:43


__________________

Errors in Goldengate Scenario 3

Scenario 3

Error Message :- ERROR   OGG-01044  The trail '/u01/app/oracle/product/ogg_trg/dirdat/rg' is not assigned to extract 'EXT1'. Assign the trail to the extract with the command "ADD EXTTRAIL/RMTTRAIL /u01/app/oracle/product/ogg_trg/dirdat/rg, EXTRACT EXT1".

2015-09-26 17:34:57  ERROR   OGG-01668  PROCESS ABENDING.


GGSCI (node4.example.com as ogguser@amer) 49> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING                                        
EXTRACT     ABENDED     EXT1        00:00:00      00:07:30

## After checking we found that remote trail file name was not correct so we fixed it solve it as per below details :-

## Let check what exact remote should the extract send to, so its the rt file not rg which was observed in the error log

GGSCI (node4.example.com as ogguser@amer) 51> info extract ext1, detail

EXTRACT    EXT1      Last Started 2015-09-26 17:34   Status ABENDED
Checkpoint Lag       00:00:00 (updated 00:09:14 ago)
Log Read Checkpoint  Oracle Redo Logs
                     2015-09-26 17:30:07  Seqno 20, RBA 25548800
                     SCN 0.2082218 (2082218)

  Target Extract Trails:

  Trail Name                                       Seqno        RBA     Max MB Trail Type

  /u01/app/oracle/product/ogg_trg/dirdat/rt            0       1964        100 RMTTRAIL


## We can also check from below command :-


GGSCI (node4.example.com as ogguser@amer) 52> info rmttrail *

       Extract Trail: /u01/app/oracle/product/ogg_trg/dirdat/rt
             Extract: EXT1
               Seqno: 0
                 RBA: 1964
           File Size: 100M

## Below are details of how we have fixed it :-

GGSCI (node4.example.com as ogguser@amer) 54> edit params ext1



GGSCI (node4.example.com as ogguser@amer) 55> view params ext1

EXTRACT ext1
userid ogguser, password o
RMTHOST node4.example.com , mgrport 7909
rmttrail /u01/app/oracle/product/ogg_trg/dirdat/rt
TABLE SCOTT.EMP4;



GGSCI (node4.example.com as ogguser@amer) 56> start extract ext1

Sending START request to MANAGER ...
EXTRACT EXT1 starting


GGSCI (node4.example.com as ogguser@amer) 57> info all    

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING                                        
EXTRACT     RUNNING     EXT1        00:10:54      00:00:05

Errors in Goldengate Scenario 2

# Scenario 2

WARNING OGG-01223  TCP/IP error 111 (Connection refused), endpoint: node4.example.com:7907.

INFO    OGG-01971  The previous message, 'WARNING OGG-01223', repeated 2 times.


Whenever there is problem with extract always check whether it is able to capture the records or not, this will be done by stats extract command from GGSCI prompt

GGSCI (node4.example.com as ogguser@amer) 42> stats extract ext1

Sending STATS request to EXTRACT EXT1 ...

No active extraction maps.

## this means that extract is not able to capture the records as of now

We check the extract report file in location

/u01/app/oracle/product/ogg_src/dirrpt

and the report file name is EXT1.rpt

Below is the error details :-

2015-09-26 17:31:20  WARNING OGG-01223  TCP/IP error 111 (Connection refused), endpoint: node4.example.com:7907.

2015-09-26 17:31:50  INFO    OGG-01971  The previous message, 'WARNING OGG-01223', repeated 2 times.

## After reading the above error message, we came to know that the MGR port number was not correct in extract parameter file, so after correcting it we are able to solve it, lets check how

GGSCI (node4.example.com as ogguser@amer) 37> view params ext1


EXTRACT ext1
userid ogguser, password o
RMTHOST node4.example.com , mgrport 7907
rmttrail /u01/app/oracle/product/ogg_trg/dirdat/rt
TABLE SCOTT.EMP4;

## After correcting the port number to 7909


GGSCI (node4.example.com as ogguser@amer) 47> start extract ext1

Sending START request to MANAGER ...
EXTRACT EXT1 starting



GGSCI (node4.example.com as ogguser@amer) 44> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING                                        
EXTRACT     RUNNING     EXT1        00:00:00      00:02:10

Errors in Goldengate Scenario 1

 Scenario 1

ERROR   OGG-00446  Missing filename opening checkpoint file.

ERROR   OGG-01668  PROCESS ABENDING.

___

Checking the replicat status

GGSCI (node4.example.com as ogguser@euro) 45> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING                                        
REPLICAT    STOPPED     REP1        00:00:00      00:00:06


Below is the error :-

2015-09-26 17:26:03  ERROR   OGG-00446  Missing filename opening checkpoint file.

2015-09-26 17:26:03  ERROR   OGG-01668  PROCESS ABENDING.


## this means that in replicat file keyword REPLICAT rep_group name is not written, lets fix this :-

edit params rep1

GGSCI (node4.example.com as ogguser@euro) 49> view params rep1

REPLICAT REP1
ASSUMETARGETDEFS
USERID ogguser, PASSWORD o
MAP scott.emp4, TARGET scott.emp4;


# Now we have updated REPLICAT REP1


GGSCI (node4.example.com as ogguser@euro) 52> start replicat rep1

Sending START request to MANAGER ...
REPLICAT REP1 starting


GGSCI (node4.example.com as ogguser@euro) 53> info replicat rep1

REPLICAT   REP1      Last Started 2015-09-26 17:28   Status RUNNING
Checkpoint Lag       00:00:00 (updated 00:00:08 ago)
Process ID           9086
Log Read Checkpoint  File /u01/app/oracle/product/ogg_trg/dirdat/rt000000
                     2015-09-26 16:49:45.000288  RBA 1964

Monday, 31 August 2015

Important points related to Multitenant Architecture Oracle 12C

Important points related to Multitenant Architecture Oracle 12C

# Undo tablespace is common for all containers.

# A common temporary tablespace is required for all containers however each PDB can have its own temporay tablespace for its application users

# Redo log files is common for all containers.

# Each Datafile is specific to a single container represented by a con_id

# Control files are common for all containers.

# Regarding archivelog mode or non archivemode, all the PDBs in a CDB share the archivelog mode of the CDB.

# Now each container has its own dictionary that means each container has its own system tablespace now and a sysaux tablespace also.

# There is a limit of 253 PDBs in a CDB including the seed

# Limit of 512 services in a CDB

Storing application data in Pluggable databases.

Storing application data in Pluggable databases.

First we will connect to pluggable database pdb1

SQL> alter session set container = pdb1;

Session altered.

SQL> show con_name

CON_NAME
------------------------------
PDB1

Now we will create tablespace pdb1_tbs1 in pluggable database pdb1

SQL> create tablespace pdb1_tbs1 datafile '/u01/app/oracle/oradata/cdb1/pdb1/pdb1_tbs1a.dbf' size 100m;

Tablespace created.

Creating the table in above tablespace in pdb1 pluggable database

SQL> create table sales (id number) tablespace pdb1_tbs1;

Table created.

## Connecting to root now

SQL> alter session set container = cdb$root ;

Session altered.

SQL> create table sales (id number) ;                  

Table created.

## Confirming the table_name, tablespace_name, con_id and owner

SQL> select table_name,CON_ID,owner from cdb_Tables where table_name='SALES'

SQL> select table_name,CON_ID,tablespacE_name,owner from cdb_Tables where table_name='SALES';

TABLE_NAME     CON_ID TABLESPACE_NAME                OWNER
---------- ---------- ------------------------------ ------------------------------
SALES               3 PDB1_TBS1                      SYS
SALES               1 SYSTEM                         SYS










Querying about CDB or a NON CDB

To determine whether a database is a CDB or a non CDB, execute the below query :-

SQL> SELECT CDB FROM V$DATABASE;

CDB
---
YES

## Let us check about each Container in a CDB

We will query the V$CONTAINERS view.

SQL> COLUMN NAME FORMAT A8

SQL> SELECT NAME, CON_ID, DBID, CON_UID, GUID FROM V$CONTAINERS ORDER BY CON_ID;

NAME         CON_ID       DBID    CON_UID GUID
-------- ---------- ---------- ---------- --------------------------------
CDB$ROOT          1  862823670          1 FD9AC20F64D344D7E043B6A9E80A2F2F
PDB$SEED          2 3526827491 3526827491 1E8418ECE0501751E05372C909C0C35E
PDB1              3 1620003564 1620003564 1E84263113671CB0E05372C909C0893F
PDB2              4 2018651655 2018651655 1E84287383001DBEE05372C909C012F3
PDB2_11           5  938823506  938823506 1E851D182997251BE05372C909C034B6
PDB_22            6 2820822967 2820822967 1E85456E7EFC246EE05372C909C08EA3

6 rows selected.

Lets check container ID, Name, and Status of Each PDB

SQL> COLUMN PDB_NAME FORMAT A15

SQL> SELECT PDB_ID, PDB_NAME, STATUS FROM DBA_PDBS ORDER BY PDB_ID;

    PDB_ID PDB_NAME        STATUS
---------- --------------- ---------
         2 PDB$SEED        NORMAL
         3 PDB1            NORMAL
         4 PDB2            NORMAL
         5 PDB2_11         NORMAL
         6 PDB_22          NEW



Seed PDB in Oracle 12C


A Seed PDB is actually the PDB$SEED which is an oracle supplied template that the CDB can use to create new PDBs. We are not allowed to add or modify objects in seed PDB.

You can use the CREATE PLUGGABLE DATABASE statement to create a PDB by copying the files from PDB$SEED, which is a template for creating PDBs.

The following SQL statement creates a PDB named crmpdb from the seed using Oracle Managed Files:

CREATE PLUGGABLE DATABASE crmpdb
 ADMIN USER karan IDENTIFIED BY mypassword

We can see below that PDB seed may not be dropped or altered

SQL> alter pluggable database PDB$SEED open;
alter pluggable database PDB$SEED open
                         *
ERROR at line 1:
ORA-65017: seed pluggable database may not be dropped or altered

However any other pdb definately can be dropped or altered.

SQL> alter pluggable database PDB1 open ;

Pluggable database altered.

Some important Views in Oracle Multitenant Environment.

Some important Views in Oracle Multitenant Environment.

    DBA_CDB_RSRC_PLANS: This view contains information about all the CDB resource plan directives  
 
    DBA_CDB_RSRC_PLANS: This view contains information about all the CDB resource plans.  
         
    DBA_PDBS:  This view contains PDBs belonging to a given CDB. When queried from the root,  it will describe all PDBs belonging to a given CDB. When queried from a PDB, it will always return no rows.
     
     DBA_PDB_HISTORY: This view contains the lineage of the PDB to which it belongs.
     
    V$PDBS: This view contains information about PDBs associated with the current instance.
     
    V$PDB_INCARNATION: This view contains information about all PDB incarnations.  Oracle creates a new PDB incarnation whenever a PDB is opened with the RESETLOGS option.

Friday, 31 July 2015

Setting up Consumer Groups and Categories with PL/SQL in the Database

How to set up Consumer Groups and Categories with PL/SQL in the Database

BEGIN

  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();

  DBMS_RESOURCE_MANAGER.CREATE_CATEGORY(
     CATEGORY => 'dss',
     COMMENT => 'DSS consumer groups');

  DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP(
     CONSUMER_GROUP => 'critical_dss',
     CATEGORY => 'dss',
     COMMENT => 'performance-critical DSS queries');

  DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP(
     CONSUMER_GROUP => 'normal_dss',
     CATEGORY => 'dss',
     COMMENT => 'non performance-critical DSS queries');

  DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP(
     CONSUMER_GROUP => 'etl',
     CATEGORY => 'maintenance',
     COMMENT => 'data import operations');

  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/

In addition to the consumer groups that you set up, the database contains predefined consumer groups. The DBA_RSRC_CONSUMER_GROUPS view displays information about consumer groups, and the DBA_RSRC_CATEGORIES view displays information about categories in the database.

Consumer Groups and Categories in an Oracle Database

SQL> SELECT consumer_group, category FROM DBA_RSRC_CONSUMER_GROUPS where
     consumer_group not like 'ORA%' ORDER BY category;

CONSUMER_GROUP                                   CATEGORY
---------------------------------------------  ---------------------------------------------
SYS_GROUP                                                   ADMINISTRATIVE
ETL_GROUP                                                   BATCH
BATCH_GROUP                                             BATCH
DSS_GROUP                                                   BATCH
CRITICAL_DSS                                              DSS
NORMAL_DSS                                               DSS
DSS_CRITICAL_GROUP                              INTERACTIVE
INTERACTIVE_GROUP                               INTERACTIVE
ETL                                                                  MAINTENANCE
LOW_GROUP                                                OTHER
OTHER_GROUPS                                          OTHER
AUTO_TASK_CONSUMER_GROUP          OTHER
DEFAULT_CONSUMER_GROUP               OTHER

13 rows selected


SQL Plan Management (Part 1) Tutorial

Just posted a video tutorial on YOUTUBE for SQL PLAN MANAGEMENT (Part 1) Click here to see it. The rest parts will be posted soon, For the moment enjoy the part 1 by clicking here

See SQL Plan Management (Part 1) Tutorial  

Thursday, 30 July 2015

Managing Users and Authentication

Managing Users and Authentication

When it comes to security of users and authentication often Linux Admins can overlook and invoke a big security hole. Check the system for unused and unlocked user accounts on a regular basis, and set passwords on any accounts that aren't protected. Make sure that no non-root user accounts have the user ID of 0.

When you install software like oracle or any other software that creates a default user account and password, be sure to change the vendor's default password immediately. A centralized user authentication method (such as OpenLDAP or other LDAP implementations) can help to simplify user authentication and management tasks, which might help to lower the risk of unused accounts or accounts with null passwords.

To tell exactly who has performed a privileged administrative action, set up the system so it is not possible to log in directly as root. Instead, all administrators should log in to the system first as a named user and then use the su or sudo commands to perform tasks as root. To prevent users from logging in as root directly, edit the /etc/passwd file, changing the shell from /bin/bash to /sbin/nologin. Modify the /etc/sudoers file using visudo to grant specific users authority to perform administrative tasks.

Oracle Linux supports PAM, which makes it easier to enforce strong user authentication and password policies, including password complexity, length, age, expiration rules. PAM also prevents the use of previous passwords. It can be configured to block user access after too many failed login attempts, after normal working hours, or if too many concurrent sessions are opened. PAM is highly customizable by adding different modules, and you can add external password integrity checkers to test password strength.