Tuesday, 16 November 2021

Sangam 21 at All India Oracle User Group (AIOUG)

Sangam is a meeting of minds: great Oracle minds meet at this conference, the largest independent Oracle user group conference, and like last year it is a virtual event this time. Join us for deep technical sessions, hands-on labs, fireside chats, keynotes, and of course open your mind and expand your network at Sangam21. Book your seats before the slots fill up at https://www.aioug.org

I am speaking at Sangam 21; my session is on Automation with Ansible for Oracle DBAs. Check here for more details: https://www.aioug.org/sangam21#speakers

Saturday, 14 August 2021

Oracle Data Pump Tracing

Ensure that the user that connects to the database with the Export Data Pump or Import Data Pump utility is a privileged user (i.e. a user who has the DBA role, or the EXP_FULL_DATABASE or IMP_FULL_DATABASE role respectively), e.g.:

-- run this Data Pump job with TRACE as a privileged user:

% expdp <LOGIN_USER>/<PASSWORD> DIRECTORY=<DIRECTORY_NAME> DUMPFILE=<DUMP_NAME>.dmp \
  LOGFILE=<LOG_NAME>.log TABLES=<SCHEMA_NAME>.<TABLE_NAME> TRACE=480300

-- or:

-- make the login user a privileged user:

CONNECT / AS SYSDBA

GRANT exp_full_database TO <LOGIN_USER>;

% expdp <LOGIN_USER>/<PASSWORD> DIRECTORY=<DIRECTORY_NAME> DUMPFILE=<DUMP_NAME>.dmp \
  LOGFILE=<LOG_NAME>.log TABLES=<TABLE_NAME> TRACE=480300


The majority of errors that occur during a Data Pump job can be diagnosed by creating a trace file for the Master Control Process (MCP) and the Worker process(es) only, e.g.:

-- To run a Data Pump job with standard tracing:

-- This results in two trace files in BACKGROUND_DUMP_DEST: 

--    Master Process trace file: <SID>_dm<number>_<process_id>.trc  

--    Worker Process trace file: <SID>_dw<number>_<process_id>.trc  


% expdp <LOGIN>/<PASSWORD> DIRECTORY=<DIRECTORY_NAME> DUMPFILE=<DUMP_NAME>.dmp \
  LOGFILE=<LOG_NAME>.log SCHEMAS=<SCHEMA_NAME> TRACE=480300


Each Data Pump component can be specified explicitly in order to obtain tracing details of that component:

-- Summary of Data Pump trace levels:

-- ==================================


  Trace   DM   DW  ORA  Lines
  level  trc  trc  trc     in
  (hex) file file file  trace                                         Purpose
------- ---- ---- ---- ------ -----------------------------------------------
  10300    x    x    x  SHDW: To trace the Shadow process (API) (expdp/impdp)
  20300    x    x    x  KUPV: To trace Fixed table
  40300    x    x    x  'div' To trace Process services
  80300    x            KUPM: To trace Master Control Process (MCP)      (DM)
 100300    x    x       KUPF: To trace File Manager
 200300    x    x    x  KUPC: To trace Queue services
 400300         x       KUPW: To trace Worker process(es)                (DW)
 800300         x       KUPD: To trace Data Package
1000300         x       META: To trace Metadata Package
------- 'Bit AND'
1FF0300    x    x    x  'all' To trace all components          (full tracing)


There is a very useful note on this topic: Note 351598.1 "Export/Import DataPump: The Minimum Requirements to Use Export DataPump and Import DataPump (System Privileges)".


Combinations of tracing Data Pump components are possible, e.g.:


-- Example of combination (last 4 digits are usually 0300): 


 40300 to trace Process services
 80300 to trace Master Control Process (MCP)
400300 to trace Worker process(es)
------- 'Bit AND'
4C0300 to trace Process services and Master Control and Worker processes
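Putting that combination to work, the resulting expdp command would look like the sketch below (placeholders exactly as in the earlier examples; the only change is the combined TRACE value):

% expdp <LOGIN_USER>/<PASSWORD> DIRECTORY=<DIRECTORY_NAME> DUMPFILE=<DUMP_NAME>.dmp \
  LOGFILE=<LOG_NAME>.log SCHEMAS=<SCHEMA_NAME> TRACE=4C0300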

In order to trace all Data Pump components, level 1FF0300 can be specified:


-- Run a Data Pump job with full tracing:

-- This results in two trace files in BACKGROUND_DUMP_DEST:   

--    Master Process trace file: <SID>_dm<number>_<process_id>.trc    

--    Worker Process trace file: <SID>_dw<number>_<process_id>.trc    

-- And one trace file in USER_DUMP_DEST:   

--    Shadow Process trace file: <SID>_ora_<process_id>.trc    


% impdp <LOGIN>/<PASSWORD> DIRECTORY=<DIRECTORY_NAME> DUMPFILE=<DUMP_NAME>.dmp \
  LOGFILE=<LOG_NAME>.log FULL=y TRACE=1ff0300
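On recent database releases these dump destinations live inside the Automatic Diagnostic Repository, so a quick way to confirm where the DM/DW/ora trace files will actually be written is a query like the sketch below (V$DIAG_INFO is a standard view; nothing else is assumed):

CONNECT / AS SYSDBA

SELECT name, value
FROM   v$diag_info
WHERE  name IN ('Diag Trace', 'Default Trace File');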


Also ensure that the initialization parameter MAX_DUMP_FILE_SIZE (in the init.ora/spfile) is large enough to capture all the trace information (or set it to UNLIMITED, which is the default value), e.g.:


-- Ensure enough trace data can be written to the trace files:


CONNECT / as sysdba

SHOW PARAMETER max_dump


NAME                                 TYPE        VALUE

------------------------------------ ----------- ------------------------------

max_dump_file_size                   string      10M


ALTER SYSTEM SET max_dump_file_size = unlimited SCOPE = both; 


SHOW PARAMETER max_dump 


NAME                                 TYPE        VALUE

------------------------------------ ----------- ------------------------------

max_dump_file_size                   string      UNLIMITED


Special thanks to Mike Dietrich for sharing this doc ID in his session.

Exadata Information Centre


Recently I was asked a question about the important note on the Exadata Information Center and how to stay updated with the latest updates and patches. Here are the details:

Wednesday, 24 February 2021

Server Time Synchronization with Chrony in Exadata System Software release 19.1.0

Server Time Synchronization Uses Chrony

Starting with Oracle Exadata System Software release 19.1.0, Exadata database servers and storage servers running Oracle Linux 7 no longer use ntpd. Instead, chrony is used to synchronize the system clock on the servers with NTP servers. chrony can usually synchronize the system clock faster and with better time accuracy compared to ntpd.

When you upgrade from Oracle Linux 6 to Oracle Linux 7, the NTP server settings are migrated to chrony. All Oracle Grid Infrastructure and Oracle Database releases certified with Oracle Linux 7 also support chrony.

Minimum requirements:

Oracle Exadata System Software release 19.1.0
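For illustration only, a minimal /etc/chrony.conf could look like the sketch below; the NTP server names are placeholders and the file actually shipped and managed on Exadata may well differ:

# /etc/chrony.conf -- minimal sketch, server names are placeholders
server ntp1.example.com iburst
server ntp2.example.com iburst

# record the rate at which the system clock gains/loses time
driftfile /var/lib/chrony/drift

# allow the clock to be stepped during the first three updates if the offset is larger than 1 second
makestep 1.0 3

# keep the hardware (real-time) clock in sync with the system clock
rtcsync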

chrony vs ntp

Things chrony can do better than ntp:

chrony can perform usefully in an environment where access to the time reference is intermittent. ntp needs regular polling of the reference to work well.

chrony can usually synchronise the clock faster and with better time accuracy.

chrony quickly adapts to sudden changes in the rate of the clock (e.g. due to changes in the temperature of the crystal oscillator). ntp may need a long time to settle down again.

chrony can perform well even when the network is congested for longer periods of time.

chrony in the default configuration never steps the time to not upset other running programs. ntp can be configured to never step the time too, but in that case it has to use a different means of adjusting the clock (daemon loop instead of kernel discipline), which may have a negative effect on accuracy of the clock.

chrony can adjust the rate of the clock in a larger range, which allows it to operate even on machines with broken or unstable clock (e.g. in some virtual machines).

chrony is smaller, it uses less memory and it wakes up the CPU only when necessary, which is better for power saving.

Things chrony can do that ntp can’t:

chrony supports the Network Time Security (NTS) authentication mechanism.

chrony supports hardware timestamping on Linux, which allows an extremely stable and accurate synchronisation in local network.

chrony provides support for isolated networks where the only method of time correction is manual entry (e.g. by the administrator looking at a clock). chrony can look at the errors corrected at different updates to work out the rate at which the computer gains or loses time, and use this estimate to trim the computer clock subsequently.

Chrony provides support to work out the gain or loss rate of the real-time clock, i.e. the clock that maintains the time when the computer is turned off. It can use this data when the system boots to set the system time from a corrected version of the real-time clock. These real-time clock facilities are only available on Linux, so far.

Things ntp can do that chrony can’t:

  • ntp supports all operating modes from RFC 5905, including broadcast, multicast, and manycast server/client. However, the broadcast and multicast modes are inherently less accurate and less secure (even with authentication) than the ordinary server/client mode, and should generally be avoided.
  • ntp supports the Autokey protocol (RFC 5906) to authenticate servers with public-key cryptography. Note that the protocol has been shown to be insecure and has been obsoleted by NTS (RFC 8915).
  • ntp has been ported to more operating systems.
  • ntp includes a large number of drivers for various hardware reference clocks. chrony requires other programs (e.g. gpsd or ntp-refclock) to provide reference time via the SHM or SOCK interface.

chronyc is a very useful command-line utility; let's see how we can invoke it:

root@192.9.201.159's password:
Last login: Wed Feb 24 22:56:01 2021
[root@rhel7 ~]# chronyc
chrony version 1.29.1
Copyright (C) 1997-2003, 2007, 2009-2013 Richard P. Curnow and others
chrony comes with ABSOLUTELY NO WARRANTY.  This is free software, and
you are welcome to redistribute it under certain conditions.  See the
GNU General Public License version 2 for details.

chronyc> exit
[root@rhel7 ~]# chronyc sources
210 Number of sources = 0
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
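A few other chronyc sub-commands that come in handy (the exact output of course depends on your configuration):

[root@rhel7 ~]# chronyc tracking       # current offset, stratum and frequency of the local clock
[root@rhel7 ~]# chronyc sourcestats    # estimated drift rate and offset per time source
[root@rhel7 ~]# chronyc sources -v     # same source list as above, with a legend explaining each column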

Also, with systemctl you can manage the chronyd service:

[root@rhel7 ~]# systemctl restart chronyd
[root@rhel7 ~]# systemctl status chronyd
chronyd.service - NTP client/server
   Loaded: loaded (/usr/lib/systemd/system/chronyd.service; enabled)
   Active: active (running) since Wed 2021-02-24 23:20:15 IST; 7s ago
  Process: 3793 ExecStartPost=/usr/libexec/chrony-helper add-dhclient-servers (code=exited, status=0/SUCCESS)
  Process: 3790 ExecStart=/usr/sbin/chronyd -u chrony $OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 3792 (chronyd)
   CGroup: /system.slice/chronyd.service
           └─3792 /usr/sbin/chronyd -u chrony

Feb 24 23:20:15 rhel7.example.com systemd[1]: Starting NTP client/server...
Feb 24 23:20:15 rhel7.example.com chronyd[3792]: chronyd version 1.29.1 starting
Feb 24 23:20:15 rhel7.example.com chronyd[3792]: Linux kernel major=3 minor=10 patch=0
Feb 24 23:20:15 rhel7.example.com chronyd[3792]: hz=100 shift_hz=7 freq_scale=1.00000000 nominal_tick=10000 slew_delta_tick=833 max_tick_bias=1000 shift_pll=2
Feb 24 23:20:15 rhel7.example.com systemd[1]: Started NTP client/server.

After my client computers have synchronized with the NTP server, I like to set the system hardware clock from the system (OS) time by using the following command:

/sbin/hwclock --systohc

For more details refer to https://chrony.tuxfamily.org/comparison.html

Saturday, 30 January 2021

Immunizing Code from SQL Injection Attacks in Oracle (Part 1)

There are various ways PL/SQL coders write their code to avoid SQL injection attacks, and here we will see some methods and tips for how we can really protect the code. The tests below are on Oracle version 19.3.

Native dynamic SQL code is, in my view, a little easier to read and write than code that uses the DBMS_SQL package, and it runs much faster, especially when it can be optimized by the compiler. But that's not the point of our discussion; the point is that SQL injection is a pretty famous attack, and we can avoid it with strict adherence to some basic coding practices.

Assuming you know the concepts of dynamic SQL and bind arguments: when you must use dynamic SQL, using bind arguments protects us against SQL injection attacks. Using bind arguments also enables cursor sharing, and thus improves application performance.

If an interface is not available to an attacker, it is clearly not available to be abused. Thus the first, and arguably most important, line of defense is to reduce the exposed interfaces to only those absolutely required.

Be aware of the following rights:

Use definer's rights: when you want to provide users unrestricted access to a table or tables via a subprogram, create the subprogram with definer's rights.

Use invoker's rights: when the purpose of the subprogram is to perform a parameterized, but powerful, operation using the privileges of the user that invokes it, create the subprogram with invoker's rights. Using invoker's rights helps limit the privileges, and thereby minimize the security exposure; however, it is not sufficient as the sole measure for eliminating SQL injection vulnerabilities.

For the following procedure (note it is created with invoker's rights), if you grant the execute privilege to, let's say, the HR user, HR would be able to change the password of any user, including SYS, only if HR itself has the ALTER USER system privilege.

create or replace procedure change_password (p_username varchar2 default null,p_new_password varchar2 default null)
authid current_user   -- invoker's rights: runs with the privileges of the calling user
is
v_sql_stmt varchar2(500);
begin
v_sql_stmt := 'ALTER USER '||p_username||' IDENTIFIED BY '||p_new_password;
execute immediate v_sql_stmt;
end change_password;
/
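A hedged illustration of how this plays out, assuming the procedure lives in the admin schema we are connected to above and that HR does not hold the ALTER USER system privilege (SCOTT is just an example target user):

CONNECT / AS SYSDBA
GRANT EXECUTE ON change_password TO hr;

CONNECT hr/<PASSWORD>
EXEC change_password('SCOTT','Newpass_1')
-- with invoker's rights this fails with ORA-01031: insufficient privileges,
-- unless HR itself has been granted ALTER USER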

How should bind arguments be used? Basically, code fully protected from SQL injection attacks can be achieved only by eliminating input string concatenation in dynamic SQL, so one should avoid input string concatenation and use bind arguments, whether automatically through static SQL or explicitly through dynamic SQL statements. A basic example of how bind arguments are used is below:

create or replace procedure sum_it_up
    ( a number, b number, c number)
    is
    begin
    dbms_output.put_line(a+b+c);
    end;
   /


  DECLARE
   x NUMBER := 2;
   y NUMBER := 5;
   plsql_block VARCHAR2(100);
  BEGIN
   -- in an anonymous block, a repeated placeholder name (:g) is bound only once,
   -- so USING x, y effectively calls sum_it_up(x, x, y)
   plsql_block := 'BEGIN sum_it_up(:g, :g, :h); END;';
   EXECUTE IMMEDIATE plsql_block USING x, y;
 END;
 /

How do we counter the actual attack? Below we will first see how the code is not immune, and then how it is made immune, across the two versions of the procedure GET_CODE. Below is a table of citizens, where each citizen has a social security number, which is highly private and must be hidden or protected:

SQL> create table citizens (fname varchar2(20),lname varchar2(20), s_security_num number);

Table created.

SQL> insert into citizens values ('karan','dodwal',10034007);

1 row created.

SQL> insert into citizens values ('ram','singh',10059991);

1 row created.

SQL> commit;

Commit complete.

--FIRST ORDER ATTACK

Now look at how vulnerable it is in the following code:

create or replace procedure get_code (p_fname varchar2 default null)
is
type c is ref cursor;
cv c;
vcode citizens.s_security_num%type;
v_stmt varchar2(300);
begin
v_stmt := 'select s_security_num from citizens where fname='''||p_fname||'''';
dbms_output.put_line('sql query : '||v_stmt);
open cv for v_stmt;
loop
fetch cv into vcode;
exit when cv%notfound;
dbms_output.put_line('code is '||vcode);
end loop;
close cv;
exception when others then
dbms_output.put_line(sqlerrm);
dbms_output.put_line('sql query '||v_stmt);
end;
/

Only Ram's social security number is printed in the following output:

SQL> exec get_code('ram');
sql query : select s_security_num from citizens where fname='ram'
code is 10059991

PL/SQL procedure successfully completed.

But look below how the social security number of KARAN is also printed, without even knowing the names RAM or KARAN, just by passing a dummy 'x' with an injected UNION:

SQL>  exec get_code('x'' union select s_security_num from citizens where ''x''=''x');
sql query : select s_security_num from citizens where fname='x' union select
s_security_num from citizens where 'x'='x'
code is 10034007
code is 10059991

ISN'T IT REALLY BAD...

--PROTECTION

--Now see how it isn't vulnerable in the following code, because here we avoid the use of dynamic SQL with concatenated input values

create or replace procedure get_code (p_fname varchar2)
is
begin
for i in (select s_security_num from citizens where fname=p_fname)
 loop
dbms_output.put_line(i.s_security_num);
end loop;
end;
/

Again, only Ram's social security number is printed in the following output:

SQL> exec get_code('ram');
10059991

PL/SQL procedure successfully completed.

But look below: neither of the social security numbers is printed, and the attacker is clearly defeated.

SQL>  exec get_code('x'' union select s_security_num from citizens where ''x''=''x');

PL/SQL procedure successfully completed.
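If you do want to keep the query dynamic, the same protection is obtained by binding the input instead of concatenating it; here is a sketch of GET_CODE written that way (same CITIZENS table as above, the placeholder name :b_fname is my own):

create or replace procedure get_code (p_fname varchar2)
is
type c is ref cursor;
cv c;
vcode citizens.s_security_num%type;
begin
-- the user input is bound, never concatenated into the statement text
open cv for 'select s_security_num from citizens where fname = :b_fname' using p_fname;
loop
fetch cv into vcode;
exit when cv%notfound;
dbms_output.put_line('code is '||vcode);
end loop;
close cv;
end;
/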

In my subsequent posts I will share other protection mechanisms with you, but for the time being I wish you all a happy new year. Also do note the following practices to observe when you secure the Oracle database:

1) Encrypt sensitive data so that it cannot be viewed.
 
2) Evaluate all PUBLIC privileges and revoke them where possible.

3) Do not widely grant EXECUTE ANY PROCEDURE.

4) Avoid granting privileges WITH ADMIN option.

5) Ensure that application users are granted minimum privileges by default. Make privileges configurable if necessary.

6) Do not allow wide access to any standard Oracle packages that can operate on the operating system. These packages include:
        UTL_HTTP, UTL_SMTP, UTL_TCP, DBMS_PIPE, UTL_MAIL, and UTL_FTP

7) Certain Oracle packages such as UTL_FILE and DBMS_LOB are governed by the privilege model of the Oracle DIRECTORY object. Protect Oracle DIRECTORY objects.

8) Lock the database default accounts and expire the default passwords.

9) Remove example scripts and programs from the Oracle directory.

10) Run the database listener as a nonprivileged user.

11) Ensure that password management is active.

12) Enforce password management. Apply basic password management rules, such as password length, history, and complexity, to all user passwords. Mandate that all the users change their passwords regularly.

13) Lock and expire the default user accounts and change the default user passwords (a minimal sketch covering points 8, 12 and 13 follows below).
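As a minimal sketch of points 8, 12 and 13 above (the user names, profile name and limit values are examples only, to be adapted to your own policy):

CONNECT / AS SYSDBA

-- lock and expire an unused default account
ALTER USER scott ACCOUNT LOCK PASSWORD EXPIRE;

-- basic password management rules enforced through a profile
CREATE PROFILE secure_users LIMIT
  PASSWORD_LIFE_TIME    90
  PASSWORD_REUSE_TIME   365
  PASSWORD_REUSE_MAX    10
  FAILED_LOGIN_ATTEMPTS 5
  PASSWORD_LOCK_TIME    1;

ALTER USER app_user PROFILE secure_users;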

Tuesday, 8 December 2020

ORA-01194 after a Controlfile Restore in Oracle when Attempting Complete Recovery

I got a complaint from one of my customers that they were not able to recover the database after a controlfile restore, so let's have a look at what happened.

My Oracle version is:

Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production

Version 19.3.0.0.0


Firstly, the controlfile is lost, so let's see what happens when I restart my database.

Let's restart the database with STARTUP FORCE (shutdown abort & startup):


SQL> startup force

ORACLE instance started.


Total System Global Area 3321886736 bytes

Fixed Size                  9140240 bytes

Variable Size             704643072 bytes

Database Buffers         2600468480 bytes

Redo Buffers                7634944 bytes

ORA-00205: error in identifying control file, check alert log for more info


* So I have to restore the controlfile now


[oracle@rhel7 PROD]$ rman target/


Recovery Manager: Release 19.0.0.0.0 - Production on Wed Dec 9 00:09:27 2020

Version 19.3.0.0.0


Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.


connected to target database: PROD (not mounted)


RMAN> restore controlfile from '/u01/app/oracle/fast_recovery_area/PROD/autobackup/2020_12_08/o1_mf_s_1058658290_hwzjnw2h_.bkp';


Starting restore at 09-DEC-20

using channel ORA_DISK_1


channel ORA_DISK_1: restoring control file

channel ORA_DISK_1: restore complete, elapsed time: 00:00:01

output file name=/u01/app/oracle/oradata/PROD/control01.ctl

output file name=/u01/app/oracle/fast_recovery_area/PROD/control02.ctl

Finished restore at 09-DEC-20



SQL> alter database mount;


Database altered.


SQL> alter database  open ;

alter database  open

*

ERROR at line 1:

ORA-01589: must use RESETLOGS or NORESETLOGS option for database open



SQL> alter database  open resetlogs;

alter database  open resetlogs

*

ERROR at line 1:

ORA-01194: file 1 needs more recovery to be consistent

ORA-01110: data file 1: '/u01/app/oracle/oradata/PROD/system01.dbf'


* That's it, this is the error. Now let's try to recover:


SQL> recover database;

ORA-00283: recovery session canceled due to errors

ORA-01610: recovery using the BACKUP CONTROLFILE option must be done



SQL> recover database using backup controlfile;

ORA-00279: change 2290523 generated at 12/08/2020 23:41:34 needed for thread 1

ORA-00289: suggestion :

/u01/app/oracle/fast_recovery_area/PROD/archivelog/2020_12_09/o1_mf_1_9_%u_.arc

ORA-00280: change 2290523 for thread 1 is in sequence #9



Specify log: {<RET>=suggested | filename | AUTO | CANCEL}

auto

ORA-00308: cannot open archived log

'/u01/app/oracle/fast_recovery_area/PROD/archivelog/2020_12_09/o1_mf_1_9_%u_.arc

'

ORA-27037: unable to obtain file status

Linux-x86_64 Error: 2: No such file or directory

Additional information: 7



ORA-00308: cannot open archived log

'/u01/app/oracle/fast_recovery_area/PROD/archivelog/2020_12_09/o1_mf_1_9_%u_.arc

'

ORA-27037: unable to obtain file status

Linux-x86_64 Error: 2: No such file or directory

Additional information: 7


* So the problem here is that recovery is asking for sequence 9, which has not been archived yet: the required redo is still sitting in an online redo log, so answering AUTO (which only looks for archived logs) cannot complete the recovery.

When we do the recovery from RMAN, it is intelligent enough to work this out on its own and applies the online redo log file redo03.log that needs to be applied.

RMAN> recover database;


Starting recover at 09-DEC-20

released channel: ORA_DISK_1

Starting implicit crosscheck backup at 09-DEC-20

allocated channel: ORA_DISK_1

channel ORA_DISK_1: SID=39 device type=DISK

Crosschecked 3 objects

Finished implicit crosscheck backup at 09-DEC-20


Starting implicit crosscheck copy at 09-DEC-20

using channel ORA_DISK_1

Finished implicit crosscheck copy at 09-DEC-20


searching for all files in the recovery area

cataloging files...

cataloging done


List of Cataloged Files

=======================

File Name: /u01/app/oracle/fast_recovery_area/PROD/autobackup/2020_12_08/o1_mf_s_1058658290_hwzjnw2h_.bkp


using channel ORA_DISK_1


starting media recovery


archived log for thread 1 with sequence 9 is already on disk as file /u01/app/oracle/oradata/PROD/redo03.log

archived log file name=/u01/app/oracle/oradata/PROD/redo03.log thread=1 sequence=9

media recovery complete, elapsed time: 00:00:00

Finished recover at 09-DEC-20

** Remember ** Although RMAN does the work for us, if you wanted to use SQL*Plus itself you would have to supply the right online redo log at the prompt yourself (see the query sketch after this example for how to identify it); the run below assumes redo01.log held the required redo, because it was sequence 1 that was missing:

SQL>  recover database using backup controlfile;

ORA-00279: change 2294948 generated at 12/09/2020 00:18:15 needed for thread 1

ORA-00289: suggestion :

/u01/app/oracle/fast_recovery_area/PROD/archivelog/2020_12_09/o1_mf_1_1_%u_.arc

ORA-00280: change 2294948 for thread 1 is in sequence #1


Specify log: {<RET>=suggested | filename | AUTO | CANCEL}

'/u01/app/oracle/oradata/PROD/redo01.log'

Log applied.

Media recovery complete.
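If you are ever unsure which online redo log file holds the sequence the recovery is asking for, a quick check such as the sketch below (standard V$LOG and V$LOGFILE columns, nothing else assumed) tells you exactly what to supply at the "Specify log" prompt:

SQL> select l.group#, l.sequence#, l.status, f.member
     from v$log l, v$logfile f
     where f.group# = l.group#
     order by l.sequence#;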


** With that redo applied, that's it: we are able to open the database now. Happy days!

SQL> alter database open resetlogs;

Database altered.






Friday, 2 October 2020

Loading plans in the baselines with a Linux Shell script.

It's a Monday morning and Smith, the junior DBA, is muddled: some developers are chasing him to load a good current plan into the baseline to avoid plan regression. Smith, being relatively new and junior, is not quite familiar with how he should use PL/SQL for this and is hesitant to do so, since he thinks the PL/SQL code might load everything from the shared pool into the baselines and cause issues. So what Smith does is call Martin, the senior DBA, and ask if he can help. Martin replies that he already has a shell script which does exactly that; all Smith would need to do is pass the SQL ID as the first argument ($1) to the shell script, and that would do the job. Smith couldn't be happier, and that makes his day.

The shell script is given below in this article. As we are aware, $1 is the first argument when any shell script is invoked ($2, $3 and so on could be used for as many arguments as you want), but here we just need one argument for the job. We use $1 to pass the SQL ID into the PL/SQL block inside the shell script; please note we use the shell's $1, not the SQL*Plus substitution variable &a. Make sure you give this script executable permission to make it work: chmod u+x spm.sh


[oracle@node2 dk]$ cat spm.sh
sqlplus / as sysdba << EOF
set echo off
set feedback off
set pages 0
set lin 100
set sqlnumber off
set serveroutput on
declare
a pls_integer;
begin
a :=  DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(sql_id => '$1');
dbms_output.put_line(a|| ' plans loaded in the baseline');
end;
/
exit;
EOF


Let's run the code:


[oracle@node2 dk]$ ./spm.sh 1uw84jcq6802a

Connected to:

Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production

Version 19.3.0.0.0


SQL> 

1 plans loaded in the baseline

SQL> Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production

Version 19.3.0.0.0


# So there you go: '1 plans loaded in the baseline' is successfully returned as per our dbms_output.


Can we validate our PL/SQL block to make sure it works as expected without the shell wrapper? Let's see:


SQL> declare

a pls_integer;

begin

a :=  DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(sql_id => '&a');

dbms_output.put_line(a|| ' plans loaded in the baseline');

end;

/

  2    3    4    5    6    7  Enter value for a:

old   4: a :=  DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(sql_id => '&a');

new   4: a :=  DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(sql_id => '');


0 plans loaded in the baseline


PL/SQL procedure successfully completed.


So above we have zero plans loaded because we passed a null value.


SQL> /

Enter value for a: 1uw84jcq6802a

old   4: a :=  DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(sql_id => '&a');

new   4: a :=  DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(sql_id => '1uw84jcq6802a');


1 plans loaded in the baseline


PL/SQL procedure successfully completed.



So above we give a valid SQL ID and the plan is loaded into the baseline.


SQL>  select count(*) from dba_sql_plan_baselines;


  COUNT(*)

----------

         1


So yes, it does work. Happy shell scripting!
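And if you want a little more detail than a bare count, a quick look at the baseline itself (standard DBA_SQL_PLAN_BASELINES columns) shows how the plan was loaded and whether it is enabled and accepted:

SQL> select sql_handle, plan_name, origin, enabled, accepted
     from dba_sql_plan_baselines;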