Wednesday 14 December 2016

Oracle Checkpoint, A Facebook Page

Visit my Facebook Page Oracle Checkpoint by clicking on the link

Sangam 16 Event, Bangalore, India

Sangam 16

Hello Everyone, Sangam 16 was a big success, with over 800 people attending the event. I am posting a few pictures as memories of the Sangam 16 event. Below are some moments captured from Sangam 16.

Tuesday 25 October 2016

SANGAM 16

SANGAM16

SANGAM is the Largest Independent Oracle Users Group Conference in India, organised annually in the month of November.

This year's Sangam (Sangam16, the 8th Annual Oracle Users Group Conference) is at Crowne Plaza Bengaluru Electronics City on Friday 11th and Saturday 12th November 2016. Check out the schedule from the link below.

Sangam 16 Schedule https://sangam16.sched.org/

Hello Everyone, as you all know, the Sangam 16 event is coming up on 11 and 12 November 2016. Please check the schedule from the link above.

The List of Speakers is here

Saturday 8 October 2016

Protecting Schemas in Oracle 12C Vault

Recently I made a video tutorial on Protecting Schemas in Oracle 12C Vault; here is the Link

Recovering datafiles in Oracle 12C Multitenant Database

Recently I made a video tutorial on Recovering datafiles in Oracle 12C Multitenant Database; here is the Link

Creating a database with Vault Option in Oracle 12C

Recently I made a video tutorial on Creating a database with Vault Option in Oracle 12C; here is the Link

Adding a database as a target to 12C Cloud Control

Recently I made a video tutorial on Adding a database as a target to 12C Cloud Control; here is the Link

Friday 9 September 2016

Exadata X6-2 enhancements in the hardware

While there are many great features in the Exadata Database Machine software, in this post we will see what is inside the Exadata X6-2 when it comes to the hardware.

Database Server

# Each database server in the Exadata X6-2 Machine now comes with 2x 22-core Xeon E5-2699 v4 processors
# Memory defaults to 256 GB RAM, expandable to 768 GB (max)
# Disks: 4x 600 GB 10,000 RPM disks (hot-swappable), expandable to 8


For each High Capacity (HC) Storage Server

CPU :- 2x 10-core Xeon E5-2630 v4 processors (the same CPU is used in the Extreme Flash Storage Server)
Memory :- 128 GB
Disks :- 12 x 8 TB 7,200 RPM disks
Flash :- 4 x 3.2 TB NVMe PCIe 3.0 flash cards


For each Extreme Flash Storage Server

CPU :- 2x 10-core Xeon E5-2630 v4 processors
Memory :- 128 GB
Flash Capacity :- 8x 3.2 TB NVMe PCIe 3.0 flash drives


That means if you have a full rack Exadata Database Machine and go with Extreme Flash storage, you get 358.4 TB of raw flash capacity, 8 database servers with 352 cores, and 14 storage servers with 280 cores for SQL offload. Truly extreme performance.
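
The arithmetic behind those numbers, using the per-server specs above:

14 storage servers x 8 flash drives x 3.2 TB = 358.4 TB raw flash
 8 database servers x 2 sockets x 22 cores  = 352 database cores
14 storage servers x 2 sockets x 10 cores   = 280 offload cores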

Wednesday 20 July 2016

Oracle Hash Scan

The Oracle Optimizer considers a hash scan when a query accesses a table in a hash cluster.

In a hash cluster, all rows with the same hash value are stored in the same data block. To perform a hash scan, Oracle Database first obtains the hash value by applying a hash function to a cluster key value specified by the statement. Oracle Database then scans the data blocks containing rows with that hash value. Now in this example, in order for Oracle to do a hash scan, Oracle Database first obtains the hash value by applying a hash function to the key value 30, and then uses this hash value to scan the data blocks and retrieve the rows.

Let's see how we can do it :-

CREATE CLUSTER employees_departments_cluster
   (deptno NUMBER(2)) SIZE 8192 HASHKEYS 100;

CREATE TABLE employees2
   CLUSTER employees_departments_cluster (deptno)
   AS SELECT * FROM emp;

CREATE TABLE departments2
   CLUSTER employees_departments_cluster (deptno)
   AS SELECT * FROM dept;

You query the employees in department 30 as follows:

SQL> SELECT * FROM   employees2 WHERE  deptno = 30;

     EMPNO ENAME      JOB              MGR HIREDATE         SAL       COMM     DEPTNO
---------- ---------- --------- ---------- --------- ---------- ---------- ----------
      7499 ALLEN      SALESMAN        7698 20-FEB-81       1600        300         30
      7521 WARD       SALESMAN        7698 22-FEB-81       1251        500         30
      7654 MARTIN     SALESMAN        7698 28-SEP-81       1250       1400         30
      7698 BLAKE      MANAGER         7839 01-MAY-81       2850                    30
      7844 TURNER     SALESMAN        7698 08-SEP-81       1500          0         30
      7900 JAMES      CLERK           7698 03-DEC-81        950                    30

6 rows selected.


Execution Plan
----------------------------------------------------------
Plan hash value: 1423052330

--------------------------------------------------------------------------------
| Id  | Operation         | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |            |     1 |    87 |     1   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS HASH| EMPLOYEES2 |     1 |    87 |     1   (0)| 00:00:01 |
--------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("DEPTNO"=30)

Note
-----
   - dynamic statistics used: dynamic sampling (level=2)


Statistics
----------------------------------------------------------
         15  recursive calls
          0  db block gets
         77  consistent gets
         64  physical reads
          0  redo size
       1279  bytes sent via SQL*Net to client
        551  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          6  rows processed

Wednesday 1 June 2016

How to know if my replicat is Integrated or not ?

In the report file of the Replicat process shown below, you see the message "Integrated replicat successfully attached to inbound server...". That means your Replicat is an integrated Replicat, not the classic one.

GGSCI (node7.example.com as oggadmin@orcl) 62> view report rep4


***********************************************************************
                 Oracle GoldenGate Delivery for Oracle
   Version 12.1.2.1.0 OGGCORE_12.1.2.1.0_PLATFORMS_140727.2135.1_FBO
   Linux, x64, 64bit (optimized), Oracle 12c on Aug  7 2014 10:47:47
............................
..................................
.........................................

Output truncated...
.........................

2016-06-02 00:59:07  INFO    OGG-02530  Integrated replicat successfully attached to inbound server OGG$rep4.

Is my Extract Classic or Integrated ?


Recently I was asked how to know whether an Extract is a classic Extract or an integrated Extract; here is how we can find out.

If the Log Read Checkpoint in the output shows Oracle Integrated Redo Logs, it is an integrated Extract:

GGSCI (node7.example.com as gguser@orcl) 78> info extract extcap

EXTRACT    extcap   Last Started 2016-06-01 23:39   Status STOPPED
Checkpoint Lag       00:00:04 (updated 00:05:43 ago)
Log Read Checkpoint  Oracle Integrated Redo Logs
                     2016-06-02 00:47:48
                     SCN 0.2037575 (2037575)

If the Log Read Checkpoint shows Oracle Redo Logs, it is a classic Extract:

GGSCI (node7.example.com as gguser@orcl) 79> info ext2

EXTRACT    EXT2     Last Started 2016-06-02 00:50   Status RUNNING
Checkpoint Lag       00:00:00 (updated 00:00:05 ago)
Process ID           14091
Log Read Checkpoint  Oracle Redo Logs
                     2016-06-02 00:53:10  Seqno 35, RBA 10232832
                     SCN 0.2038468 (2038468)
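
Another way to cross-check from the database side is to query the DBA views (a quick sketch, assuming your user has the privileges to see them). An integrated Extract registers a capture process (named like OGG$CAP_EXTCAP) and an integrated Replicat registers an inbound/apply server (named like OGG$rep4, as seen in the report above); a classic process has no such row:

SQL> SELECT capture_name, status FROM dba_capture;

SQL> SELECT apply_name, status FROM dba_apply;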

Tuesday 31 May 2016

LOGDUMP Utility in Goldengate

To view the record header with the data:

Logdump 1> GHDR ON

The record header contains information about the transaction.

To add column information:

Logdump 2> DETAIL ON

Column information includes the number and length in hex and ASCII.

To add hex and ASCII data values to the column information:

Logdump 3> DETAIL DATA

To view user tokens:

Logdump 4> USERTOKEN ON

User tokens are custom user-defined information that is specified in a TABLE or FILE mapping statement and stored in the trail file for specific purposes.

To view automatically generated tokens:

Logdump 4> GGSTOKEN ON

Oracle GoldenGate automatically generated tokens include the transaction ID (XID), the row ID for DML operations, the fetch status (if applicable), and the tag value.

To control how much record data is displayed:

Logdump 5> RECLEN length
______________________________


I will now insert 3 rows into the T1 table:

SQL> insert into t1 values(3,3,3);

1 row created.

SQL> commit;

Commit complete.

SQL>  insert into t1 values(4,4,4);

1 row created.

SQL> commit;

Commit complete.

SQL>  insert into t1 values(5,5,5);

1 row created.

SQL> commit;

Commit complete.

## Now let's see how we can read the data using the explanations above :-

[oracle@edvmr1p0 gg_$$$]$ ./logdump

Oracle GoldenGate Log File Dump Utility for Oracle
Version 12.1.2.1.0 OGGCORE_12.1.2.1.0_PLATFORMS_140727.2135.1

Copyright (C) 1995, 2014, Oracle and/or its affiliates. All rights reserved.



Logdump 25 >DETAIL ON
Logdump 26 >GHDR ON
Logdump 27 >DETAIL DATA
Logdump 28 >USERTOEN ON
sh: USERTOEN: command not found

Logdump 29 >USERTOKEN ON
Logdump 30 >n
Error: Logtrail not opened
Logdump 31 >open dirdat/rt000000
Current LogTrail is /u01/app/oracle/product/gg_$$$/dirdat/rt000000
Logdump 32 >n

2016/05/31 19:42:54.395.268 FileHeader           Len  1414 RBA 0
Name: *FileHeader*
 3000 0326 3000 0008 4747 0d0a 544c 0a0d 3100 0002 | 0..&0...GG..TL..1...
 0004 3200 0004 2000 0000 3300 0008 02f2 5a57 6cd5 | ..2... ...3.....ZWl.
 1d84 3400 0033 0031 7572 693a 6564 766d 7231 7030 | ..4..3.1uri:edvmr1p0
 3a3a 7530 313a 6170 703a 6f72 6163 6c65 3a70 726f | ::u01:app:oracle:pro
 6475 6374 3a67 675f 616d 6572 3a45 5854 3136 0000 | duct:gg_amer:EXT16..
 3100 2f2f 7530 312f 6170 702f 6f72 6163 6c65 2f70 | 1.//u01/app/oracle/p
 726f 6475 6374 2f67 675f 6575 726f 2f64 6972 6461 | roduct/gg_$$$/dirda

Logdump 33 >n
___________________________________________________________________
Hdr-Ind    :     E  (x45)     Partition  :     .  (x04)
UndoFlag   :     .  (x00)     BeforeAfter:     A  (x41)
RecLength  :    27  (x001b)   IO Time    : 2016/05/31 19:43:01.000.232
IOType     :     5  (x05)     OrigNode   :   255  (xff)
TransInd   :     .  (x03)     FormatType :     R  (x52)
SyskeyLen  :     0  (x00)     Incomplete :     .  (x00)
AuditRBA   :         23       AuditPos   : 40069136
Continued  :     N  (x00)     RecCount   :     1  (x01)

2016/05/31 19:43:01.000.232 Insert               Len    27 RBA 1422
Name: SCOTT.T1
After  Image:                                             Partition 4   G  s
 0000 0005 0000 0001 3300 0100 0500 0000 0133 0002 | ........3........3..
 0005 0000 0001 33                                 | ......3
Column     0 (x0000), Len     5 (x0005)
 0000 0001 33                                      | ....3
Column     1 (x0001), Len     5 (x0005)
 0000 0001 33                                      | ....3
Column     2 (x0002), Len     5 (x0005)
 0000 0001 33                                      | ....3

### Here we see values 3,3,3

Logdump 34 >n
___________________________________________________________________
Hdr-Ind    :     E  (x45)     Partition  :     .  (x04)
UndoFlag   :     .  (x00)     BeforeAfter:     A  (x41)
RecLength  :    27  (x001b)   IO Time    : 2016/05/31 19:49:12.000.372
IOType     :     5  (x05)     OrigNode   :   255  (xff)
TransInd   :     .  (x03)     FormatType :     R  (x52)
SyskeyLen  :     0  (x00)     Incomplete :     .  (x00)
AuditRBA   :         23       AuditPos   : 40925712
Continued  :     N  (x00)     RecCount   :     1  (x01)

2016/05/31 19:49:12.000.372 Insert               Len    27 RBA 1563
Name: SCOTT.T1
After  Image:                                             Partition 4   G  s
 0000 0005 0000 0001 3400 0100 0500 0000 0134 0002 | ........4........4..
 0005 0000 0001 34                                 | ......4
Column     0 (x0000), Len     5 (x0005)
 0000 0001 34                                      | ....4
Column     1 (x0001), Len     5 (x0005)
 0000 0001 34                                      | ....4
Column     2 (x0002), Len     5 (x0005)
 0000 0001 34                                      | ....4

### Here we see values 4,4,4

Logdump 35 >n
___________________________________________________________________
Hdr-Ind    :     E  (x45)     Partition  :     .  (x04)
UndoFlag   :     .  (x00)     BeforeAfter:     A  (x41)
RecLength  :    27  (x001b)   IO Time    : 2016/05/31 19:59:19.000.300
IOType     :     5  (x05)     OrigNode   :   255  (xff)
TransInd   :     .  (x03)     FormatType :     R  (x52)
SyskeyLen  :     0  (x00)     Incomplete :     .  (x00)
AuditRBA   :         23       AuditPos   : 41101328
Continued  :     N  (x00)     RecCount   :     1  (x01)

2016/05/31 19:59:19.000.300 Insert               Len    27 RBA 1700
Name: SCOTT.T1
After  Image:                                             Partition 4   G  s
 0000 0005 0000 0001 3500 0100 0500 0000 0135 0002 | ........5........5..
 0005 0000 0001 35                                 | ......5
Column     0 (x0000), Len     5 (x0005)
 0000 0001 35                                      | ....5
Column     1 (x0001), Len     5 (x0005)
 0000 0001 35                                      | ....5
Column     2 (x0002), Len     5 (x0005)
 0000 0001 35                                      | ....5

### Here we see values 5,5,5

LAG REPLICAT Command

The LAG REPLICAT command determines the true lag time between Replicat and the trail. LAG REPLICAT estimates the lag time more precisely than INFO REPLICAT because it communicates with Replicat directly rather than reading a checkpoint position.

For Replicat, lag is the difference, in seconds, between the time that the last record was processed by Replicat (based on the system clock) and the timestamp of the record in the trail.

An example :-

GGSCI (node07.example.com) 2> lag replicat rep1

Sending GETLAG request to REPLICAT REP1 ...
Last record lag 5 seconds.
At EOF, no more records to process.

LAG EXTRACT Command

The LAG EXTRACT command determines the true lag time between Extract and the data source. LAG EXTRACT calculates the lag time more precisely than INFO EXTRACT because it communicates with Extract directly, rather than reading a checkpoint position in the trail.

For Extract, lag is the difference, in seconds, between the time that a record was processed by Extract (based on the system clock) and the timestamp of that record in the data source.

An example :-

GGSCI (node07.example.com) 2> lag extract ext1

Sending GETLAG request to EXTRACT EXT1 ...
Last record lag 2 seconds.
At EOF, no more records to process.

Bounded Recovery in Oracle Goldengate


Bounded Recovery

Bounded Recovery is a component of the general Extract checkpointing facility. It guarantees an efficient recovery after Extract stops for any reason, planned or unplanned, no matter how many open (uncommitted) transactions there were at the time that Extract stopped, nor how old they were. Bounded Recovery sets an upper boundary for the maximum amount of time that it would take for Extract to recover to the point where it stopped and then resume normal processing.

Caution: Before changing this parameter from its default settings, contact Oracle Support for guidance. Most production environments will not require changes to this parameter. You can, however, specify the directory for the Bounded Recovery checkpoint files without assistance.

How Extract Recovers Open Transactions

When Extract encounters the start of a transaction in the redo log (in Oracle, this is the first executable SQL statement) it starts caching to memory all of the data that is specified to be captured for that transaction. Extract must cache a transaction even if it contains no captured data, because future operations of that transaction might contain data that is to be captured.

When Extract encounters a commit record for a transaction, it writes the entire cached transaction to the trail and clears it from memory. When Extract encounters a rollback record for a transaction, it discards the entire transaction from memory. Until Extract processes a commit or rollback, the transaction is considered open and its information continues to be collected.

If Extract stops before it encounters a commit or rollback record for a transaction, all of the cached information must be recovered when Extract starts again. This applies to all transactions that were open at the time that Extract stopped.

Extract performs this recovery as follows:

If there were no open transactions when Extract stopped, the recovery begins at the current Extract read checkpoint. This is a normal recovery.

If there were open transactions whose start points in the log were very close in time to the time when Extract stopped, Extract begins recovery by re-reading the logs from the beginning of the oldest open transaction. This requires Extract to do redundant work for transactions that were already written to the trail or discarded before Extract stopped, but that work is an acceptable cost given the relatively small amount of data to process. This also is considered a normal recovery.

If there were one or more transactions that Extract qualified as long-running open transactions, Extract begins its recovery with a Bounded Recovery.

How Bounded Recovery Works

A transaction qualifies as long-running if it has been open longer than one Bounded Recovery interval, which is specified with the BRINTERVAL option of the BR parameter. For example, if the Bounded Recovery interval is four hours, a long-running open transaction is any transaction that started more than four hours ago.
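
For illustration, a four-hour interval could be set in the Extract parameter file like this (a minimal sketch; the BRDIR path is hypothetical, and per the caution above, check with Oracle Support before changing BR from its defaults):

BR BRDIR /u01/app/oracle/br, BRINTERVAL 4H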

At each Bounded Recovery interval, Extract makes a Bounded Recovery checkpoint, which persists the current state and data of Extract to disk, including the state and data (if any) of long-running transactions. If Extract stops after a Bounded Recovery checkpoint, it will recover from a position within the previous Bounded Recovery interval or at the last Bounded Recovery checkpoint, instead of processing from the log position where the oldest open long-running transaction first appeared.

The maximum Bounded Recovery time (maximum time for Extract to recover to where it stopped) is never more than twice the current Bounded Recovery checkpoint interval. The actual recovery time will be a factor of the following:

# the time from the last valid Bounded Recovery interval to when Extract stopped.

# the utilization of Extract in that period.

# the percent of utilization for transactions that were previously written to the trail. Bounded Recovery processes these transactions much faster (by discarding them) than Extract did when it first had to perform the disk writes. This constitutes most of the reprocessing that occurs for transactional data.

When Extract recovers, it restores the persisted data and state that were saved at the last Bounded Recovery checkpoint (including that of any long running transactions).

For example, suppose a transaction has been open for 24 hours, and suppose the Bounded Recovery interval is four hours. In this case, the maximum recovery time will be no longer than eight hours worth of Extract processing time, and is likely to be less. It depends on when Extract stopped relative to the last valid Bounded Recovery checkpoint, as well as Extract activity during that time.

Advantages of Bounded Recovery

The use of disk persistence to store and then recover long-running transactions enables Extract to manage a situation that rarely arises but would otherwise significantly (adversely) affect performance if it occurred. The beginning of a long-running transaction is often very far back in time from the place in the log where Extract was processing when it stopped. A long-running transaction can span numerous old logs, some of which might no longer reside on accessible storage or might even have been deleted. Not only would it take an unacceptable amount of time to read the logs again from the start of a long-running transaction but, since long-running transactions are rare, most of that work would be the redundant capture of other transactions that were already written to the trail or discarded. Being able to restore the state and data of persisted long-running transactions eliminates that work.

Friday 27 May 2016

ERROR OGG-01201 Error reported by MGR : Access denied

Today I faced an issue in GoldenGate:

2016-05-27 15:55:42  ERROR   OGG-01201  Error reported by MGR : Access denied.

2016-05-27 15:55:42  ERROR   OGG-01668  PROCESS ABENDING.


In Oracle GoldenGate 12.2, MANAGER and the related EXTRACT/REPLICAT processes cannot be started or stopped remotely by default. So when a direct load is started on the source server, it tries to start the Replicat remotely on the target server, and this leads to ERROR OGG-01201 Error reported by MGR : Access denied. The error is reported in the report file of the process.

The solution and fix I found is to add the following line to the remote Manager parameter file:

ACCESSRULE, PROG *, IPADDR 192.168.1.161, ALLOW

This will allow the source server to make the connection and start the Replicat process remotely.
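
For context, the target-side mgr.prm then looks something like this (a minimal sketch; PORT 7809 is just the common default, and the IP address is the source server in my case). The Manager process needs to be restarted for the new rule to take effect:

PORT 7809
ACCESSRULE, PROG *, IPADDR 192.168.1.161, ALLOW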

OTN Yathra 2016

OTNYathra 2016 was a great event, and I had a chance to attend and speak at the event. I had the great privilege of attending the session by Riyaz, who is a world-renowned global expert. I have some memories of the OTNYathra 2016 event to share, with some pictures.

Saturday 7 May 2016

How to load a SQL with its plan hash value in SQL Plan Baseline ?



Recently I was asked how to load a SQL statement with a specific plan hash value into a SQL Plan Baseline. Sometimes this becomes important when you have multiple child cursors and you would like to load the plan of just one of them, because 1 or 2 of the child cursors might not have a good plan. So the best strategy is to load only the best plan. Here is how to do it:


declare
  a pls_integer;
begin
  a := dbms_spm.load_plans_from_cursor_cache(
    sql_id => '1026nxs7ff5c8', plan_hash_value => 2949544139);
end;
/

PL/SQL procedure successfully completed.
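
You can then verify that the plan landed in the baseline (a quick check, assuming access to the DBA views):

SQL> SELECT sql_handle, plan_name, enabled, accepted FROM dba_sql_plan_baselines;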

_________________________________________________________________________________

Monday 18 April 2016

OTNYathra 2016 http://otnyathra.info


I am speaking at OTN Yathra 2016. Please click here to find more details http://otnyathra.info

OTNYathra 2016

After the great success of OTNYathra 2013, 2014, and 2015, the Oracle ACE Directors will once again be organizing an evangelist event called ‘OTNYathra 2016’ during April 2016. Please click here to register and find details of the entire event.

Wednesday 30 March 2016

CPU_MTH in Oracle Resource Manager


The CREATE_CONSUMER_GROUP procedure of the DBMS_RESOURCE_MANAGER package has an argument CPU_MTH, which can be used to specify either ROUND-ROBIN (the default) or RUN-TO-COMPLETION. These are resource allocation methods for distributing CPU among sessions in the consumer group. The default, ROUND-ROBIN, uses a round-robin scheduler to ensure sessions are executed fairly. RUN-TO-COMPLETION specifies that sessions with the largest active time are scheduled ahead of other sessions.
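
As a quick sketch of the syntax (the group name and comment here are made up for illustration), creating a consumer group with the non-default method looks like this:

BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP(
    consumer_group => 'BATCH_GROUP',
    comment        => 'long-running batch sessions',
    cpu_mth        => 'RUN-TO-COMPLETION');
  DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/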

OTNYathra 2016

After the great success of OTNYathra 2013, 2014, and 2015, the Oracle ACE Directors will once again be organizing an evangelist event called ‘OTNYathra 2016’ during April 2016. Check here: http://otnyathra.info/

OTNYathra, the Indian OTN Tour, is about to start

The OTN Tour of India is about to start; OTNYathra 2016 is 30 days away. Limited seats, so hurry up and register soon! Check here on OTNYATHRA: http://otnyathra.info/

Monday 29 February 2016

PARALLEL_DEGREE_POLICY in 12C

The PARALLEL_DEGREE_POLICY initialization parameter has a new possible value in 12C, i.e. ADAPTIVE. It has the same functionality as AUTO; in addition, Oracle may re-evaluate the statement in order to provide a better degree of parallelism for subsequent executions, based on feedback gathered during statement execution.
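
Enabling it is a one-liner (shown system-wide here; the parameter can also be set at session level):

SQL> ALTER SYSTEM SET parallel_degree_policy = ADAPTIVE;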

Thursday 28 January 2016

SQL ordered by Physical Reads (UnOptimized) or (Optimized) ??


Interestingly, from 11.2 onwards there is a "SQL ordered by Physical Reads (UnOptimized)" section in AWR reports, which is relevant for Database Smart Flash Cache. Optimized read requests are read requests that are satisfied from the Smart Flash Cache (or from the Smart Flash Cache in Oracle Exadata V2). A very important point is that the concept and use of 'Smart Flash Cache' in Exadata V2 is different from 'Smart Flash Cache' in Database Smart Flash Cache. Also note that the 'Physical Read Reqs' column in the 'SQL ordered by Physical Reads (UnOptimized)' section is the number of I/O requests, not the number of blocks returned. Be careful not to confuse these with the Physical Reads statistics from the AWR section 'SQL ordered by Reads', which counts database blocks read from disk, not actual I/Os (a single I/O operation may return many blocks from disk).

In this context, optimized reads are the ones served by the Smart Flash Cache.
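
Put as a formula, the way I read the AWR column definitions (treat this as a rule of thumb):

UnOptimized Read Reqs = Physical Read Reqs - Optimized Read Reqs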

Database Checkpoints (SQL Server)

For performance reasons, the Database Engine performs modifications to database pages in memory—in the buffer cache—and does not write these pages to disk after every change. Rather, the Database Engine periodically issues a checkpoint on each database. A checkpoint writes the current in-memory modified pages (known as dirty pages) and transaction log information from memory to disk and, also, records information about the transaction log.

Types of Checkpoints :-

Indirect Checkpoint :- Issued in the background to meet a user-specified target recovery time for a given database. The default is 0, which indicates that the database will use automatic checkpoints, whose frequency depends on the recovery interval setting of the server instance.

Automatic Checkpoints :- Issued automatically in the background to meet the upper time limit suggested by the recovery interval server configuration option. Automatic checkpoints run to completion. Automatic checkpoints are throttled based on the number of outstanding writes and whether the Database Engine detects an increase in write latency above 20 milliseconds.  

Manual Checkpoint :- Issued when you execute a Transact-SQL CHECKPOINT command. The manual checkpoint occurs in the current database for your connection. By default, manual checkpoints run to completion. Throttling works the same way as for automatic checkpoints. Optionally, the checkpoint_duration parameter specifies a requested amount of time, in seconds, for the checkpoint to complete.

Internal Checkpoint :- Issued by various server operations such as backup and database-snapshot creation to guarantee that disk images match the current state of the log.
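
To make the indirect and manual variants concrete, here is roughly what each looks like in Transact-SQL (a sketch; the database name and the values are made up):

-- Indirect checkpoint: set a per-database target recovery time
ALTER DATABASE SalesDB SET TARGET_RECOVERY_TIME = 60 SECONDS;

-- Manual checkpoint in the current database, with a requested
-- checkpoint_duration of 10 seconds
CHECKPOINT 10;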