Thursday 7 May 2015

Monitoring Index Usage


It is always good to drop your unused indexes, meaning those that are never selected from even though they may be suffering from heavy DML. Since they are not used by SELECT queries, it hurts performance to maintain them only for DML. Let us see how to monitor the indexes and get a usage report :-

To alter an index, your schema must contain the index or you must have the ALTER ANY INDEX system privilege. With the ALTER INDEX statement, you can:

# Rebuild or coalesce an existing index
# Deallocate unused space or allocate a new extent
# Specify parallel execution (or not) and alter the degree of parallelism
# Alter storage parameters or physical attributes
# Specify LOGGING or NOLOGGING
# Enable or disable key compression
# Mark the index unusable
# Make the index invisible
# Rename the index
# Start or stop the monitoring of index usage

The script below will generate all the commands you need to monitor the indexes of an entire schema :-

SELECT 'ALTER INDEX '||index_name||' MONITORING USAGE;' FROM user_indexes;

Index monitoring is started and stopped using the ALTER INDEX syntax shown below.

ALTER INDEX my_index_i MONITORING USAGE;
ALTER INDEX my_index_i NOMONITORING USAGE;

Information about the index usage can be displayed using the V$OBJECT_USAGE view.

While running these commands you may also see the following error :-

ERROR at line 1: ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired

This means you cannot alter the index at the moment: changes are already in progress against it, so an exclusive lock cannot be obtained. Interestingly, you can usually ignore this for now, because the lock conflict itself shows that the index is currently being modified; simply enable monitoring for these indexes later.
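If you would rather wait for the lock than skip the index, one option (available from 11g onwards) is to set DDL_LOCK_TIMEOUT at the session level, so the ALTER INDEX waits for the lock instead of failing immediately. A minimal sketch (my_index_i is just an example name):

```sql
-- Wait up to 30 seconds for the exclusive lock instead of
-- failing immediately with ORA-00054
ALTER SESSION SET ddl_lock_timeout = 30;
ALTER INDEX my_index_i MONITORING USAGE;
```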


SELECT index_name,
       table_name,
       monitoring,
       used,
       start_monitoring,
       end_monitoring
FROM   v$object_usage
WHERE  index_name = 'IDX1';

The V$OBJECT_USAGE view does not contain an OWNER column, so you must log on as the object owner to see the usage data.
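Once monitoring has been on for a while, a quick way to list the drop candidates in the current schema is to query V$OBJECT_USAGE directly; a sketch:

```sql
-- Indexes that are being monitored but were never used by a SELECT
SELECT index_name, table_name, used, start_monitoring, end_monitoring
FROM   v$object_usage
WHERE  monitoring = 'YES'
AND    used = 'NO';
```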

Installing Oracle Goldengate Software

Below we will learn how to install the Oracle GoldenGate software :-

Let us run ls -ltr and check the zip file of the software :-

[oracle@node4 gg_orcl]$ ls -ltr
total 87192
-rwxrw-rw- 1 oracle oinstall 89186858 Mar  4 16:07 ogg112101_fbo_ggs_Linux_x64_ora11g_64bit.zip
[oracle@node4 gg_orcl]$
[oracle@node4 gg_orcl]$

Let's unzip the file :-

[oracle@node4 gg_orcl]$ unzip ogg112101_fbo_ggs_Linux_x64_ora11g_64bit.zip
Archive:  ogg112101_fbo_ggs_Linux_x64_ora11g_64bit.zip
  inflating: fbo_ggs_Linux_x64_ora11g_64bit.tar
  inflating: OGG_WinUnix_Rel_Notes_11.2.1.0.1.pdf
  inflating: Oracle GoldenGate 11.2.1.0.1 README.txt
  inflating: Oracle GoldenGate 11.2.1.0.1 README.doc

[oracle@node4 gg_orcl]$ tar -xvof *.tar
UserExitExamples/
UserExitExamples/ExitDemo_more_recs/
UserExitExamples/ExitDemo_more_recs/Makefile_more_recs.HPUX
UserExitExamples/ExitDemo_more_recs/Makefile_more_recs.SOLARIS
UserExitExamples/ExitDemo_more_recs/Makefile_more_recs.LINUX
UserExitExamples/ExitDemo_more_recs/Makefile_more_recs.AIX
UserExitExamples/ExitDemo_more_recs/exitdemo_more_recs.vcproj
UserExitExamples/ExitDemo_more_recs/exitdemo_more_recs.c
UserExitExamples/ExitDemo_more_recs/readme.txt
UserExitExamples/ExitDemo_passthru/
UserExitExamples/ExitDemo_passthru/exitdemo_passthru.c
UserExitExamples/ExitDemo_passthru/exitdemopassthru.vcproj
UserExitExamples/ExitDemo_passthru/Makefile_passthru.HPUX
UserExitExamples/ExitDemo_passthru/Makefile_passthru.AIX
UserExitExamples/ExitDemo_passthru/Makefile_passthru.HP_OSS
UserExitExamples/ExitDemo_passthru/Makefile_passthru.LINUX
UserExitExamples/ExitDemo_passthru/readme.txt
UserExitExamples/ExitDemo_passthru/Makefile_passthru.SOLARIS
UserExitExamples/ExitDemo_lobs/
UserExitExamples/ExitDemo_lobs/exitdemo_lob.c
UserExitExamples/ExitDemo_lobs/Makefile_lob.HPUX
UserExitExamples/ExitDemo_lobs/Makefile_lob.SOLARIS
UserExitExamples/ExitDemo_lobs/Makefile_lob.AIX
UserExitExamples/ExitDemo_lobs/exitdemo_lob.vcproj
UserExitExamples/ExitDemo_lobs/Makefile_lob.LINUX
UserExitExamples/ExitDemo_lobs/readme.txt
UserExitExamples/ExitDemo_pk_befores/
UserExitExamples/ExitDemo_pk_befores/Makefile_pk_befores.AIX
UserExitExamples/ExitDemo_pk_befores/Makefile_pk_befores.LINUX
UserExitExamples/ExitDemo_pk_befores/exitdemo_pk_befores.c
UserExitExamples/ExitDemo_pk_befores/Makefile_pk_befores.HPUX
UserExitExamples/ExitDemo_pk_befores/exitdemo_pk_befores.vcproj
UserExitExamples/ExitDemo_pk_befores/Makefile_pk_befores.SOLARIS
UserExitExamples/ExitDemo_pk_befores/readme.txt
UserExitExamples/ExitDemo/
UserExitExamples/ExitDemo/exitdemo.vcproj
UserExitExamples/ExitDemo/Makefile_exit_demo.SOLARIS
UserExitExamples/ExitDemo/Makefile_exit_demo.HP_OSS
UserExitExamples/ExitDemo/exitdemo.c
UserExitExamples/ExitDemo/Makefile_exit_demo.LINUX
UserExitExamples/ExitDemo/exitdemo_utf16.c
UserExitExamples/ExitDemo/Makefile_exit_demo.HPUX
UserExitExamples/ExitDemo/Makefile_exit_demo.AIX
UserExitExamples/ExitDemo/readme.txt
bcpfmt.tpl
bcrypt.txt
cfg/
cfg/password.properties
cfg/MPMetadataSchema.xsd
cfg/jps-config-jse.xml
cfg/ProfileConfig.xml
cfg/mpmetadata.xml
cfg/Config.properties
chkpt_ora_create.sql
cobgen
convchk
db2cntl.tpl
ddl_cleartrace.sql
ddl_ddl2file.sql
ddl_disable.sql
ddl_enable.sql
ddl_filter.sql
ddl_nopurgeRecyclebin.sql
ddl_ora10.sql
ddl_ora10upCommon.sql
ddl_ora11.sql
ddl_ora9.sql
ddl_pin.sql
ddl_purgeRecyclebin.sql
ddl_remove.sql
ddl_session.sql
ddl_session1.sql
ddl_setup.sql
ddl_status.sql
ddl_staymetadata_off.sql
ddl_staymetadata_on.sql
ddl_trace_off.sql
ddl_trace_on.sql
ddl_tracelevel.sql
ddlcob
defgen
demo_more_ora_create.sql
demo_more_ora_insert.sql
demo_ora_create.sql
demo_ora_insert.sql
demo_ora_lob_create.sql
demo_ora_misc.sql
demo_ora_pk_befores_create.sql
demo_ora_pk_befores_insert.sql
demo_ora_pk_befores_updates.sql
dirjar/
dirjar/xmlparserv2.jar
dirjar/fmw_audit.jar
dirjar/jps-internal.jar
dirjar/org.springframework.jdbc-3.0.0.RELEASE.jar
dirjar/org.springframework.context-3.0.0.RELEASE.jar
dirjar/jps-upgrade.jar
dirjar/oraclepki.jar
dirjar/org.springframework.transaction-3.0.0.RELEASE.jar
dirjar/xstream-1.3.jar
dirjar/jsr250-api-1.0.jar
dirjar/org.springframework.beans-3.0.0.RELEASE.jar
dirjar/ldapjclnt11.jar
dirjar/spring-security-cas-client-3.0.1.RELEASE.jar
dirjar/jps-manifest.jar
dirjar/org.springframework.aspects-3.0.0.RELEASE.jar
dirjar/identityutils.jar
dirjar/org.springframework.aop-3.0.0.RELEASE.jar
dirjar/jacc-spi.jar
dirjar/jmxremote_optional-1.0-b02.jar
dirjar/slf4j-log4j12-1.4.3.jar
dirjar/jps-api.jar
dirjar/slf4j-api-1.4.3.jar
dirjar/identitystore.jar
dirjar/jps-unsupported-api.jar
dirjar/osdt_xmlsec.jar
dirjar/org.springframework.orm-3.0.0.RELEASE.jar
dirjar/jagent.jar
dirjar/commons-codec-1.3.jar
dirjar/jps-ee.jar
dirjar/spring-security-taglibs-3.0.1.RELEASE.jar
dirjar/log4j-1.2.15.jar
dirjar/osdt_core.jar
dirjar/spring-security-acl-3.0.1.RELEASE.jar
dirjar/xpp3_min-1.1.4c.jar
dirjar/spring-security-web-3.0.1.RELEASE.jar
dirjar/spring-security-core-3.0.1.RELEASE.jar
dirjar/spring-security-config-3.0.1.RELEASE.jar
dirjar/jps-mbeans.jar
dirjar/org.springframework.test-3.0.0.RELEASE.jar
dirjar/jdmkrt-1.0-b02.jar
dirjar/jps-common.jar
dirjar/org.springframework.web-3.0.0.RELEASE.jar
dirjar/jps-patching.jar
dirjar/jps-wls.jar
dirjar/commons-logging-1.0.4.jar
dirjar/org.springframework.expression-3.0.0.RELEASE.jar
dirjar/org.springframework.instrument-3.0.0.RELEASE.jar
dirjar/monitor-common.jar
dirjar/osdt_cert.jar
dirjar/org.springframework.asm-3.0.0.RELEASE.jar
dirjar/org.springframework.context.support-3.0.0.RELEASE.jar
dirjar/org.springframework.core-3.0.0.RELEASE.jar
dirprm/
dirprm/jagent.prm
emsclnt
extract
freeBSD.txt
ggMessage.dat
ggcmd
ggsci
help.txt
jagent.sh
keygen
libantlr3c.so
libdb-5.2.so
libgglog.so
libggrepo.so
libicudata.so.38
libicui18n.so.38
libicuuc.so.38
libxerces-c.so.28
libxml2.txt
logdump
marker_remove.sql
marker_setup.sql
marker_status.sql
mgr
notices.txt
oggerr
params.sql
prvtclkm.plb
pw_agent_util.sh
remove_seq.sql
replicat
retrace
reverse
role_setup.sql
sequence.sql
server
sqlldr.tpl
tcperrs
ucharset.h
ulg.sql
usrdecs.h
zlib.txt

[oracle@node4 gg_orcl]$ ./ggsci

Oracle GoldenGate Command Interpreter for Oracle
Version 11.2.1.0.1 OGGCORE_11.2.1.0.1_PLATFORMS_120423.0230_FBO
Linux, x64, 64bit (optimized), Oracle 11g on Apr 23 2012 08:32:14

Copyright (C) 1995, 2012, Oracle and/or its affiliates. All rights reserved.



GGSCI (node4.oracle.com) 1> Create Subdirs

Creating subdirectories under current directory /u01/app/oracle/gg_orcl

Parameter files                /u01/app/oracle/gg_orcl/dirprm: already exists
Report files                   /u01/app/oracle/gg_orcl/dirrpt: created
Checkpoint files               /u01/app/oracle/gg_orcl/dirchk: created
Process status files           /u01/app/oracle/gg_orcl/dirpcs: created
SQL script files               /u01/app/oracle/gg_orcl/dirsql: created
Database definitions files     /u01/app/oracle/gg_orcl/dirdef: created
Extract data files             /u01/app/oracle/gg_orcl/dirdat: created
Temporary files                /u01/app/oracle/gg_orcl/dirtmp: created
Stdout files                   /u01/app/oracle/gg_orcl/dirout: created


GGSCI (node4.oracle.com) 2> dblogin userid krishna, password some_password
Successfully logged into database.


FAR SYNC in Oracle 12C Database

In this article we will see how to set up a far sync configuration in your Data Guard environment. We will also see how to integrate it with the broker, but before the practical setup we will understand the concept of a FAR SYNC INSTANCE.


In releases before 12c, the problem with a SYNC configuration from primary to standby was that if the geographical distance was very large, commit latency on the primary suffered, because SYNC means a transaction is not complete until the standby acknowledges the redo change vectors. Wouldn't it be great if the primary database could ship the redo to a lightweight instance geographically close to it, which could therefore work in SYNC mode with very good commit latency, and have that instance transfer the redo changes to the other standby databases in ASYNC mode without affecting transaction commit latency on the primary? Well, that is exactly what happens in Oracle Database 12c with something known as a FAR SYNC INSTANCE.


FAR SYNC is a new enhancement in Oracle 12c Data Guard. A Data Guard far sync instance is a destination geographically close to the primary that accepts redo from the primary database and then ships it to the other standby destinations. The far sync instance is a lightweight instance that has just a control file and standby redo logs, which it archives to local archived redo logs. This type of instance has no user datafiles, so it cannot be opened for read access, cannot run redo apply, can never function in the primary role, and cannot be converted to any type of standby. Its function is to receive redo quickly from the primary in SYNC mode so that transactions on the primary commit quickly; even if the geographical distance between the far sync instance and the standby destinations is large, it can keep up with those destinations in ASYNC mode without affecting the primary database's commit latency.


Setting up a far sync configuration in your DR environment; we will also see how to integrate it with the broker.

## In our configuration we have a primary database (PROD1) and one physical standby (DR1), and we will see how to add a far sync instance PRODFS to this configuration

## In a previous article on this blog I have shown the Data Guard setup for primary PROD1 and standby DR1

## Let's create a far sync instance named prodfs

## create far sync controlfile from primary database prod1

SQL> ALTER DATABASE CREATE FAR SYNC INSTANCE CONTROLFILE AS '/u01/app/oracle/oradata/prodfs/control01.ctl';

## Making changes in spfile of Primary database PROD1

Make the following changes in spfile

LOG_ARCHIVE_CONFIG='DG_CONFIG=(prod1,prodfs,dr1)'

LOG_ARCHIVE_DEST_2='SERVICE=prodfs SYNC AFFIRM VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=prodfs'

## Making changes in spfile of FAR SYNC (PRODFS)

Make the following changes in spfile

DB_UNIQUE_NAME=prodfs

CONTROL_FILES='/u01/app/oracle/oradata/prodfs/control01.ctl'

DB_FILE_NAME_CONVERT='/u01/app/oracle/oradata/prod1','/u01/app/oracle/oradata/prodfs'

LOG_FILE_NAME_CONVERT='/u01/app/oracle/oradata/prod1','/u01/app/oracle/oradata/prodfs'

FAL_SERVER=prod1

LOG_ARCHIVE_CONFIG='DG_CONFIG=(prod1,prodfs,dr1)'

LOG_ARCHIVE_DEST_1='LOCATION=USE_DB_RECOVERY_FILE_DEST VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=prodfs'

LOG_ARCHIVE_DEST_2='SERVICE=dr1 ASYNC VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE) DB_UNIQUE_NAME=dr1'

## Making changes in spfile of PHYSICAL STANDBY (DR1)

Make the following changes in spfile

## Below are the commands shown practically this time

SQL> alter system set LOG_ARCHIVE_CONFIG='DG_CONFIG=(prod1,prodfs,dr1)' ;

System altered.

SQL> alter system set LOG_ARCHIVE_DEST_2='SERVICE=prod1 SYNC AFFIRM VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=prod1' ;

System altered.

SQL> alter system set fal_server = 'prodfs','prod1' ;

System altered.


## Lets login to far sync instance

[oracle@node4 ~]$ export ORACLE_SID=prodfs
[oracle@node4 ~]$
[oracle@node4 ~]$
[oracle@node4 ~]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.1.0 Production on Thu Oct 2 17:25:55 2014

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to an idle instance.


SQL> create spfile from pfile='/u01/initprodfs.ora' ;

File created.

SQL> startup mount
ORACLE instance started.

Total System Global Area  617975808 bytes
Fixed Size                  2291376 bytes
Variable Size             276826448 bytes
Database Buffers          335544320 bytes
Redo Buffers                3313664 bytes
Database mounted.

## We are almost done
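Before moving on, you can confirm that the instance really mounted as a far sync instance; in 12c, V$DATABASE should report the FAR SYNC role:

```sql
-- On the prodfs instance: DATABASE_ROLE should show FAR SYNC
SELECT db_unique_name, database_role FROM v$database;
```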

_____________________________________________________________

Broker setup for far sync standby

export ORACLE_SID=prod1

[oracle@node4 ~]$ dgmgrl
DGMGRL for Linux: Version 12.1.0.1.0 - 64bit Production

Copyright (c) 2000, 2012, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
DGMGRL> connect /
Connected as SYSDG.
DGMGRL>
DGMGRL> create configuration conf1 as primary database is prod1 connect identifier is prod1 ;
Error: ORA-16698: LOG_ARCHIVE_DEST_n parameter set for object to be added

Failed.

Fixing it on primary database

SQL> alter system set log_archive_dest_2='' scope=both;

System altered.


DGMGRL> create configuration conf1 as primary database is 'prod1' connect identifier is prod1 ;
Configuration "conf1" created with primary database "prod1"
DGMGRL>
DGMGRL>


DGMGRL> add database 'dr1' as connect identifier is dr1  maintained as physical ;
Error: ORA-16698: LOG_ARCHIVE_DEST_n parameter set for object to be added

Failed.


## Fixing this on our standby terminal

SQL>  alter system set log_archive_dest_2='' scope=both;

System altered.


DGMGRL>  add database 'dr1' as connect identifier is dr1  maintained as physical ;
Database "dr1" added

DGMGRL> enable configuration ;
Enabled.
DGMGRL>
DGMGRL> ADD FAR_SYNC 'prodfs' AS CONNECT IDENTIFIER IS prodfs;
Error: ORA-16698: LOG_ARCHIVE_DEST_n parameter set for object to be added

Failed.

## fixing this on far sync instance

SQL>  alter system set log_archive_dest_2='' scope=both;

System altered.


DGMGRL> ADD FAR_SYNC 'prodfs' AS CONNECT IDENTIFIER IS prodfs;
far sync instance "prodfs" added

DGMGRL> ENABLE FAR_SYNC 'prodfs';
Enabled.

DGMGRL> SHOW CONFIGURATION;

Configuration - conf1

  Protection Mode: MaxPerformance
  Databases:
  prod1  - Primary database
    dr1    - Physical standby database
    prodfs - Far Sync (inactive)

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

## Far sync still shows inactive, to enable this we need to do the following

DGMGRL> EDIT DATABASE 'prod1' SET PROPERTY RedoRoutes='(LOCAL : PRODFS SYNC)';
Property "redoroutes" updated

DGMGRL>  EDIT FAR_SYNC 'prodfs' SET PROPERTY RedoRoutes='(PROD1 : DR1)' ;
Property "redoroutes" updated
DGMGRL> SHOW CONFIGURATION;

Configuration - conf1

  Protection Mode: MaxPerformance
  Databases:
  prod1  - Primary database
    prodfs - Far Sync
      dr1    - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS


DGMGRL> EDIT CONFIGURATION SET PROTECTION MODE AS MaxAvailability;
Succeeded.

DGMGRL>  show configuration

Configuration - conf1

  Protection Mode: MaxAvailability
  Databases:
  prod1  - Primary database
    prodfs - Far Sync
      dr1    - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

Creating a Scheduler job with a specific country Time Zone

Let us see how to create a Scheduler job whose start time uses a specific country's time zone. Say I want to create a scheduler job in the Oracle database whose start time should be in the time zone of Peru: if you specify SYSTIMESTAMP, the job will take the database time zone, which could be, let's say, India. A job that starts at the local time of Peru can be created as below :-


BEGIN
  SYS.DBMS_SCHEDULER.CREATE_JOB
    (
       job_name        => 'MY_STATS_JOB'
      ,start_date      => TO_TIMESTAMP_TZ('2015/05/05 06:00:00.000000  America/Lima','yyyy/mm/dd hh24:mi:ss.ff tzr')
      ,repeat_interval => 'FREQ=DAILY;BYHOUR=06;'
      ,end_date        => NULL
      ,job_class       => 'SOME_JOB_CLASS'
      ,job_type        => 'PLSQL_BLOCK'
      ,job_action      => 'begin DBMS_STATS.gather_schema_stats(ownname=>''SCHEMA_NAME'',cascade => TRUE,degree=>4); end;'
      ,comments        => NULL
    );
end;
/

Please note that we used America/Lima because it is the time zone of Lima, the capital of Peru.
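If you are unsure of the exact region name Oracle expects for a country, the valid names can be looked up in V$TIMEZONE_NAMES; for example:

```sql
-- Find the Oracle time zone region name for Lima
SELECT DISTINCT tzname
FROM   v$timezone_names
WHERE  tzname LIKE 'America/Li%';
```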

begin
  SYS.DBMS_SCHEDULER.SET_ATTRIBUTE
    ( name      => 'MY_STATS_JOB'
     ,attribute => 'RESTARTABLE'
     ,value     => FALSE);
  SYS.DBMS_SCHEDULER.SET_ATTRIBUTE
    ( name      => 'MY_STATS_JOB'
     ,attribute => 'LOGGING_LEVEL'
     ,value     => SYS.DBMS_SCHEDULER.LOGGING_RUNS);
  SYS.DBMS_SCHEDULER.SET_ATTRIBUTE_NULL
    ( name      => 'MY_STATS_JOB'
     ,attribute => 'MAX_FAILURES');
  SYS.DBMS_SCHEDULER.SET_ATTRIBUTE_NULL
    ( name      => 'MY_STATS_JOB'
     ,attribute => 'MAX_RUNS');
  BEGIN
    SYS.DBMS_SCHEDULER.SET_ATTRIBUTE
      ( name      => 'MY_STATS_JOB'
       ,attribute => 'STOP_ON_WINDOW_CLOSE'
       ,value     => FALSE);
  EXCEPTION
    -- could fail if program is of type EXECUTABLE...
    WHEN OTHERS THEN
      NULL;
  END;
  SYS.DBMS_SCHEDULER.SET_ATTRIBUTE
    ( name      => 'MY_STATS_JOB'
     ,attribute => 'JOB_PRIORITY'
     ,value     => 3);
  SYS.DBMS_SCHEDULER.SET_ATTRIBUTE_NULL
    ( name      => 'MY_STATS_JOB'
     ,attribute => 'SCHEDULE_LIMIT');
  SYS.DBMS_SCHEDULER.SET_ATTRIBUTE
    ( name      => 'MY_STATS_JOB'
     ,attribute => 'AUTO_DROP'
     ,value     => FALSE);

  SYS.DBMS_SCHEDULER.ENABLE
    (name                  => 'MY_STATS_JOB');
END;
/

Understanding TOP Command



Looking at the snapshot of the TOP command above, let us understand the whole output and how to interpret it in the notes below :-

TOP ROW

This first line indicates in order:

current time (15:00:52)
uptime of the machine (up 41 min)
users sessions logged in (6 users)
average load on the system (load average: 1.20, 0.33, 0.12); the three values refer to the last one, five, and fifteen minutes
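As a small side illustration (not part of top itself), the load-average triple is easy to pull out of a captured top header line programmatically; here is a Python sketch:

```python
import re

def parse_load_average(header_line):
    """Extract the 1-, 5- and 15-minute load averages from a top header line."""
    match = re.search(r"load average:\s*([\d.]+),\s*([\d.]+),\s*([\d.]+)", header_line)
    if match is None:
        raise ValueError("no load average found in line")
    return tuple(float(v) for v in match.groups())

header = "top - 15:00:52 up 41 min,  6 users,  load average: 1.20, 0.33, 0.12"
print(parse_load_average(header))  # (1.2, 0.33, 0.12)
```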

TASKS ROW

The second row gives the following information:

Processes running in totals (178 total)
Processes running (4 running)
Processes sleeping (174 sleeping)
Processes stopped (0 stopped)
Zombie processes waiting to be reaped by the parent process (0 zombie)

CPU ROW

The third line indicates how the CPU is used. If you sum up all the percentages the total will be 100% of the CPU. Let's see what these values indicate, in order:

Percentage of the CPU for user processes (90.3%us)
Percentage of the CPU for system processes (9.7%sy)
Percentage of the CPU for processes with a modified ("nice") priority (0.0%ni)
Percentage of the CPU sitting idle (0.0%id)
Percentage of the CPU waiting for I/O operations (0.0%wa)
Percentage of the CPU serving hardware interrupts (0.0%hi, hardware IRQs)
Percentage of the CPU serving software interrupts (0.0%si, software interrupts)
The amount of CPU 'stolen' from this virtual machine by the hypervisor for other tasks, such as running another virtual machine; this will be 0 on desktops and servers without virtual machines (0.0%st, steal time)

Now the memory usage

The fourth and fifth rows respectively indicate the use of physical memory (RAM) and swap, in this order: total memory, used, free, and buffers/cached.

Now the Processes list

And as the last thing, ordered by CPU usage by default, there is the list of current processes. Let's see what information we can get from the different columns:

PID – the ID of the process (5016)
USER – The user that is the owner of the process (oracle)
PR – priority of the process (20)
NI – The “NICE” value of the process (0)
VIRT – virtual memory used by the process (287m)
RES – physical memory used by the process (21m)
SHR – shared memory of the process (8888)
S – indicates the status of the process: S=sleep R=running Z=zombie (S)
%CPU – This is the percentage of CPU used by this process (38.9)
%MEM – This is the percentage of RAM used by the process (0.5)
TIME+ – This is the total time of activity of this process (0:52.84)
COMMAND – And this is the name of the process (gnome-terminal)

Parallel rebuild of an Index findings

In a recent finding I saw that an index had been rebuilt with the below command :-

SQL>  select degree from user_indexes where index_name='IDX';

DEGREE
----------------------------------------
1

SQL> Alter index idx rebuild online parallel 4 ;

SQL>  select degree from user_indexes where index_name='IDX';

DEGREE
----------------------------------------
4

Now the problem is that this parallelism sticks to the index forever: reads from the index will also be performed in parallel, resulting in many parallel reads and excessive CPU usage. This is what we saw recently, as it was degrading our OLTP performance, especially during peak hours.

So it is always better to revert this setting back to normal after every rebuild, so that the index behaves and works in serial mode rather than parallel mode :-

Alter index idx noparallel ;

SQL>  select degree from user_indexes where index_name='IDX';

DEGREE
----------------------------------------
1

ORA-10635: Invalid segment or tablespace type problem resolution

Normally, if you try to shrink a compressed table you get the error below; the trick to resolve it follows.

This applies both to partitioned tables and to normal tables :-

alter table table_name modify partition  part_name shrink space ;
*
ERROR at line 1:
ORA-10635: Invalid segment or tablespace type

SQL> alter table <table_name> enable row movement;
Table altered.

SQL> alter table <table_name> shrink space compact;
alter table <table_name> shrink space compact
*
ERROR at line 1:
ORA-10635: Invalid segment or tablespace type

There are two possibilities:

(a) The SHRINK clause is not valid on a compressed table, and you will get ORA-10635. In the case of a compressed table, you can resolve the issue like this:

sql> conn / as sysdba
sql> alter table <table_name> nocompress;
sql> alter table <table_name> enable row movement;
sql> alter table <table_name> shrink space compact;
sql> exit

(b) Automatic segment space management (ASSM) is not set for the tablespace. Shrink operations can be performed only on segments in locally managed tablespaces with ASSM. Within an ASSM tablespace, all segment types are eligible for online segment shrink except these:
* IOT mapping tables
* Tables with rowid-based materialized views
* Tables with function-based indexes

If automatic segment space management is not set for the tablespace, please enable it. The way to "convert" to ASSM is to export all the objects from the MSSM tablespace, then create a new ASSM tablespace, and finally import all the objects into the new one.
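Whether a tablespace already uses ASSM can be checked from DBA_TABLESPACES; for example (USERS is just a sample tablespace name):

```sql
-- SEGMENT_SPACE_MANAGEMENT is AUTO for ASSM, MANUAL for MSSM
SELECT tablespace_name, segment_space_management
FROM   dba_tablespaces
WHERE  tablespace_name = 'USERS';
```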

Tuesday 5 May 2015

AIOUG Tech - Focus Days


It's always good to check AIOUG Events to see which events are coming up and going on. Recently the Oracle RAC Day was a great AIOUG event where people learned a lot; it started in Chennai, then Chandigarh, and is continuing on to more cities in India. Stay tuned to the AIOUG Events page for recent updates. Happy learning.

Monday 4 May 2015

Difference between SPM and SQL Profiles




Below are some key points on the differences between the two :-

SPM baselines:

# Enforce specific plans and Guarantee plan stability
# Contain multiple plans
# Can record new plans for potential improvement
# Only match a whole SQL statement in its entirety
# Only accepted plans are used

A SQL Profile is generated by SQL Tuning Advisor and:

# Does not enforce a specific plan
# Does not guarantee plan stability
# Does not contain a plan at all, whereas an SPM baseline can contain multiple plans
# Is a collection of directives (generated by STA based on empirical information) which can be applied across many SQL statements to influence the optimizer to hopefully produce a better plan.
# Because no plans are recorded, if there is a change that results in a new, better, plan, then that plan would be lost if the change was backed out.
# Can be configured to match parts of multiple SQL statements (i.e. it can apply to SQL strings that contain all the same characters)
# If there is already a SQL plan baseline in place when a SQL Profile is created, the new plan is added to the baseline
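For completeness, an SPM baseline is typically captured with the DBMS_SPM package; a hedged sketch (the sql_id is a placeholder you would first look up in V$SQL):

```sql
-- Load the plan of one statement from the cursor cache into a baseline
DECLARE
  l_plans PLS_INTEGER;
BEGIN
  l_plans := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(sql_id => '&sql_id');
  DBMS_OUTPUT.PUT_LINE('Plans loaded: ' || l_plans);
END;
/
```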