Tuesday 31 May 2016

LOGDUMP Utility in GoldenGate

To view the record header with the data:

Logdump 1> GHDR ON

The record header contains information about the transaction.

To add column information:

Logdump 2> DETAIL ON

Column information includes the column number and length (shown in decimal and hex).

To add hex and ASCII data values to the column information:

Logdump 3> DETAIL DATA

To view user tokens:

Logdump 4> USERTOKEN ON

User tokens are custom user-defined information that is specified in a TABLE or FILE mapping statement and stored in the trail file for specific purposes.
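
For illustration only (this is not taken from the trail shown later in this post), a user token is defined with the TOKENS clause of the TABLE statement in the Extract parameter file; TK-HOST here is just an example token name:

TABLE scott.t1, TOKENS (TK-HOST = @GETENV ('GGENVIRONMENT', 'HOSTNAME'));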

To view automatically generated tokens:

Logdump 5> GGSTOKEN ON

Oracle GoldenGate automatically generated tokens include the transaction ID (XID), the row ID for DML operations, the fetching status (if applicable), and the tag value.

To control how much record data is displayed:

Logdump 6> RECLEN length
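
For example, to show only the first 64 bytes of each record's data (64 is just an illustrative value), you could use:

Logdump 7> RECLEN 64

This only limits what Logdump displays; it does not change the records in the trail itself.
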
______________________________


I will now insert three rows into the T1 table:

SQL> insert into t1 values(3,3,3);

1 row created.

SQL> commit;

Commit complete.

SQL>  insert into t1 values(4,4,4);

1 row created.

SQL> commit;

Commit complete.

SQL>  insert into t1 values(5,5,5);

1 row created.

SQL> commit;

Commit complete.

## Now let's see how we can read the data using the above explanations :-

[oracle@edvmr1p0 gg_$$$]$ ./logdump

Oracle GoldenGate Log File Dump Utility for Oracle
Version 12.1.2.1.0 OGGCORE_12.1.2.1.0_PLATFORMS_140727.2135.1

Copyright (C) 1995, 2014, Oracle and/or its affiliates. All rights reserved.



Logdump 25 >DETAIL ON
Logdump 26 >GHDR ON
Logdump 27 >DETAIL DATA
Logdump 28 >USERTOEN ON
sh: USERTOEN: command not found

Logdump 29 >USERTOKEN ON
Logdump 30 >n
Error: Logtrail not opened
Logdump 31 >open dirdat/rt000000
Current LogTrail is /u01/app/oracle/product/gg_$$$/dirdat/rt000000
Logdump 32 >n

2016/05/31 19:42:54.395.268 FileHeader           Len  1414 RBA 0
Name: *FileHeader*
 3000 0326 3000 0008 4747 0d0a 544c 0a0d 3100 0002 | 0..&0...GG..TL..1...
 0004 3200 0004 2000 0000 3300 0008 02f2 5a57 6cd5 | ..2... ...3.....ZWl.
 1d84 3400 0033 0031 7572 693a 6564 766d 7231 7030 | ..4..3.1uri:edvmr1p0
 3a3a 7530 313a 6170 703a 6f72 6163 6c65 3a70 726f | ::u01:app:oracle:pro
 6475 6374 3a67 675f 616d 6572 3a45 5854 3136 0000 | duct:gg_amer:EXT16..
 3100 2f2f 7530 312f 6170 702f 6f72 6163 6c65 2f70 | 1.//u01/app/oracle/p
 726f 6475 6374 2f67 675f 6575 726f 2f64 6972 6461 | roduct/gg_$$$/dirda

Logdump 33 >n
___________________________________________________________________
Hdr-Ind    :     E  (x45)     Partition  :     .  (x04)
UndoFlag   :     .  (x00)     BeforeAfter:     A  (x41)
RecLength  :    27  (x001b)   IO Time    : 2016/05/31 19:43:01.000.232
IOType     :     5  (x05)     OrigNode   :   255  (xff)
TransInd   :     .  (x03)     FormatType :     R  (x52)
SyskeyLen  :     0  (x00)     Incomplete :     .  (x00)
AuditRBA   :         23       AuditPos   : 40069136
Continued  :     N  (x00)     RecCount   :     1  (x01)

2016/05/31 19:43:01.000.232 Insert               Len    27 RBA 1422
Name: SCOTT.T1
After  Image:                                             Partition 4   G  s
 0000 0005 0000 0001 3300 0100 0500 0000 0133 0002 | ........3........3..
 0005 0000 0001 33                                 | ......3
Column     0 (x0000), Len     5 (x0005)
 0000 0001 33                                      | ....3
Column     1 (x0001), Len     5 (x0005)
 0000 0001 33                                      | ....3
Column     2 (x0002), Len     5 (x0005)
 0000 0001 33                                      | ....3

### Here we see values 3,3,3

Logdump 34 >n
___________________________________________________________________
Hdr-Ind    :     E  (x45)     Partition  :     .  (x04)
UndoFlag   :     .  (x00)     BeforeAfter:     A  (x41)
RecLength  :    27  (x001b)   IO Time    : 2016/05/31 19:49:12.000.372
IOType     :     5  (x05)     OrigNode   :   255  (xff)
TransInd   :     .  (x03)     FormatType :     R  (x52)
SyskeyLen  :     0  (x00)     Incomplete :     .  (x00)
AuditRBA   :         23       AuditPos   : 40925712
Continued  :     N  (x00)     RecCount   :     1  (x01)

2016/05/31 19:49:12.000.372 Insert               Len    27 RBA 1563
Name: SCOTT.T1
After  Image:                                             Partition 4   G  s
 0000 0005 0000 0001 3400 0100 0500 0000 0134 0002 | ........4........4..
 0005 0000 0001 34                                 | ......4
Column     0 (x0000), Len     5 (x0005)
 0000 0001 34                                      | ....4
Column     1 (x0001), Len     5 (x0005)
 0000 0001 34                                      | ....4
Column     2 (x0002), Len     5 (x0005)
 0000 0001 34                                      | ....4

### Here we see values 4,4,4

Logdump 35 >n
___________________________________________________________________
Hdr-Ind    :     E  (x45)     Partition  :     .  (x04)
UndoFlag   :     .  (x00)     BeforeAfter:     A  (x41)
RecLength  :    27  (x001b)   IO Time    : 2016/05/31 19:59:19.000.300
IOType     :     5  (x05)     OrigNode   :   255  (xff)
TransInd   :     .  (x03)     FormatType :     R  (x52)
SyskeyLen  :     0  (x00)     Incomplete :     .  (x00)
AuditRBA   :         23       AuditPos   : 41101328
Continued  :     N  (x00)     RecCount   :     1  (x01)

2016/05/31 19:59:19.000.300 Insert               Len    27 RBA 1700
Name: SCOTT.T1
After  Image:                                             Partition 4   G  s
 0000 0005 0000 0001 3500 0100 0500 0000 0135 0002 | ........5........5..
 0005 0000 0001 35                                 | ......5
Column     0 (x0000), Len     5 (x0005)
 0000 0001 35                                      | ....5
Column     1 (x0001), Len     5 (x0005)
 0000 0001 35                                      | ....5
Column     2 (x0002), Len     5 (x0005)
 0000 0001 35                                      | ....5

### Here we see values 5,5,5

LAG REPLICAT Command

The LAG REPLICAT command is used to determine the true lag time between Replicat and the trail. LAG REPLICAT estimates the lag time more precisely than INFO REPLICAT because it communicates with Replicat directly rather than reading a checkpoint position.

For Replicat, lag is the difference, in seconds, between the time that the last record was processed by Replicat (based on the system clock) and the timestamp of the record in the trail.

An example :-

GGSCI (node07.example.com) 2> lag replicat rep1

Sending GETLAG request to REPLICAT REP1 ...
Last record lag 5 seconds.
At EOF, no more records to process.

LAG EXTRACT Command

The LAG EXTRACT command is used to determine the true lag time between Extract and the data source. LAG EXTRACT calculates the lag time more precisely than INFO EXTRACT because it communicates with Extract directly, rather than reading a checkpoint position in the trail.

For Extract, lag is the difference, in seconds, between the time that a record was processed by Extract (based on the system clock) and the timestamp of that record in the data source.

An example :-

GGSCI (node07.example.com) 2> lag extract ext1

Sending GETLAG request to EXTRACT EXT1 ...
Last record lag 2 seconds.
At EOF, no more records to process.

Bounded Recovery in Oracle GoldenGate


Bounded Recovery

Bounded Recovery is a component of the general Extract checkpointing facility. It guarantees an efficient recovery after Extract stops for any reason, planned or unplanned, no matter how many open (uncommitted) transactions there were at the time that Extract stopped, nor how old they were. Bounded Recovery sets an upper boundary for the maximum amount of time that it would take for Extract to recover to the point where it stopped and then resume normal processing.

Caution: Before changing this parameter from its default settings, contact Oracle Support for guidance. Most production environments will not require changes to this parameter. You can, however, specify the directory for the Bounded Recovery checkpoint files without assistance.

How Extract Recovers Open Transactions

When Extract encounters the start of a transaction in the redo log (in Oracle, this is the first executable SQL statement) it starts caching to memory all of the data that is specified to be captured for that transaction. Extract must cache a transaction even if it contains no captured data, because future operations of that transaction might contain data that is to be captured.

When Extract encounters a commit record for a transaction, it writes the entire cached transaction to the trail and clears it from memory. When Extract encounters a rollback record for a transaction, it discards the entire transaction from memory. Until Extract processes a commit or rollback, the transaction is considered open and its information continues to be collected.

If Extract stops before it encounters a commit or rollback record for a transaction, all of the cached information must be recovered when Extract starts again. This applies to all transactions that were open at the time that Extract stopped.

Extract performs this recovery as follows:

If there were no open transactions when Extract stopped, the recovery begins at the current Extract read checkpoint. This is a normal recovery.

If there were open transactions whose start points in the log were very close in time to the time when Extract stopped, Extract begins recovery by re-reading the logs from the beginning of the oldest open transaction. This requires Extract to do redundant work for transactions that were already written to the trail or discarded before Extract stopped, but that work is an acceptable cost given the relatively small amount of data to process. This also is considered a normal recovery.

If there were one or more transactions that Extract qualified as long-running open transactions, Extract begins its recovery with a Bounded Recovery.

How Bounded Recovery Works

A transaction qualifies as long-running if it has been open longer than one Bounded Recovery interval, which is specified with the BRINTERVAL option of the BR parameter. For example, if the Bounded Recovery interval is four hours, a long-running open transaction is any transaction that started more than four hours ago.
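
As a sketch of how this is configured (the directory name and the four-hour interval below are only illustrative values), the Bounded Recovery checkpoint directory and the interval are set with the BR parameter in the Extract parameter file:

BR BRDIR ./BR, BRINTERVAL 4H

As per the caution above, apart from choosing the BRDIR directory, the other BR options should normally be left at their defaults unless Oracle Support advises otherwise.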

At each Bounded Recovery interval, Extract makes a Bounded Recovery checkpoint, which persists the current state and data of Extract to disk, including the state and data (if any) of long-running transactions. If Extract stops after a Bounded Recovery checkpoint, it will recover from a position within the previous Bounded Recovery interval or at the last Bounded Recovery checkpoint, instead of processing from the log position where the oldest open long-running transaction first appeared.

The maximum Bounded Recovery time (maximum time for Extract to recover to where it stopped) is never more than twice the current Bounded Recovery checkpoint interval. The actual recovery time will be a factor of the following:

# the time from the last valid Bounded Recovery interval to when Extract stopped.

# the utilization of Extract in that period.

# the percent of utilization for transactions that were previously written to the trail. Bounded Recovery processes these transactions much faster (by discarding them) than Extract did when it first had to perform the disk writes. This constitutes most of the reprocessing that occurs for transactional data.

When Extract recovers, it restores the persisted data and state that were saved at the last Bounded Recovery checkpoint (including that of any long running transactions).

For example, suppose a transaction has been open for 24 hours, and suppose the Bounded Recovery interval is four hours. In this case, the maximum recovery time will be no longer than eight hours worth of Extract processing time, and is likely to be less. It depends on when Extract stopped relative to the last valid Bounded Recovery checkpoint, as well as Extract activity during that time.

Advantages of Bounded Recovery

The use of disk persistence to store and then recover long-running transactions enables Extract to manage a situation that rarely arises but would otherwise significantly (adversely) affect performance if it occurred. The beginning of a long-running transaction is often very far back in time from the place in the log where Extract was processing when it stopped. A long-running transaction can span numerous old logs, some of which might no longer reside on accessible storage or might even have been deleted. Not only would it take an unacceptable amount of time to read the logs again from the start of a long-running transaction but, since long-running transactions are rare, most of that work would be the redundant capture of other transactions that were already written to the trail or discarded. Being able to restore the state and data of persisted long-running transactions eliminates that work.

Friday 27 May 2016

ERROR OGG-01201 Error reported by MGR : Access denied

Today I faced an issue in GoldenGate:

2016-05-27 15:55:42  ERROR   OGG-01201  Error reported by MGR : Access denied.

2016-05-27 15:55:42  ERROR   OGG-01668  PROCESS ABENDING.


In Oracle GoldenGate 12.2, the Manager and its related Extract/Replicat processes cannot be started or stopped remotely by default. So when a direct load is started on the source server, it tries to start the Replicat remotely on the target server, and this leads to ERROR OGG-01201 Error reported by MGR : Access denied, which is reported in the report file of the process.

The solution I found is to add the following line to the remote (target) Manager parameter file:

ACCESSRULE, PROG *, IPADDR 192.168.1.161, ALLOW

This will allow the source server to make the connection and start the Replicat process remotely.
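
For reference, a minimal Manager parameter file (dirprm/mgr.prm) on the target could look like the sketch below; the port number is only an example, and the IP address is the source server from the scenario above:

PORT 7809
ACCESSRULE, PROG *, IPADDR 192.168.1.161, ALLOW

The Manager process on the target must be stopped and started again (STOP MGR and then START MGR in GGSCI) for the new access rule to take effect.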

OTN Yathra 2016

OTNYathra 2015 was a great event, and I had the chance to attend and speak at it. It was a great privilege to attend the session by Riyaz, who is a globally renowned expert. I have some memories of the OTNYathra 2015 event to share, along with some pictures.

Saturday 7 May 2016

How to load a SQL with its plan hash value into a SQL Plan Baseline ?



Recently I was asked how to load a SQL statement with a specific plan hash value into a SQL Plan Baseline. This becomes important when you have multiple child cursors and want to load just one child cursor's plan, because one or two of the child cursors might not have a good plan. So the best strategy is to load only the best plan. Here is how to do it:


declare
  a pls_integer;
begin
  a := dbms_spm.load_plans_from_cursor_cache(
         sql_id          => '1026nxs7ff5c8',
         plan_hash_value => 2949544139);
end;
/

PL/SQL procedure successfully completed.
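
The function returns the number of plans it loaded (held in the variable a above). To confirm that the plan was loaded and accepted, you can query DBA_SQL_PLAN_BASELINES afterwards; the filter on the creation time below is just one convenient way to spot the newly created baseline:

select sql_handle, plan_name, enabled, accepted, origin
from   dba_sql_plan_baselines
where  created > sysdate - 1/24;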

_________________________________________________________________________________