Vishal Desai’s Oracle Blog

October 3, 2014

Compare Oracle SQL Plan in Excel

Filed under: Performance Tuning, Tools — vishaldesai @ 2:36 pm

 

Input Screen:

image

 

 

Output Screen:

image

Download the xlsm file, enter the source and target details, double-click the list box to populate plan hash values, select one item, and click the Run button to compare the plans. The output screen highlights the first line (only the first) where it finds a difference in operation.

Download

Compare Oracle Database Parameters in Excel

Filed under: Performance Tuning, Tools — vishaldesai @ 2:13 pm

Input Screen:

image

Output Screen:

image

It will highlight differences in red. I have added logic to mark values blue instead when they differ for the parameters instance_number, thread, undo_tablespace and local_listener. You can add more parameters to that logic for further customization.

Download the xlsm file, add the TNS aliases to the tnsnames.ora file on your Windows client, and start comparing parameters.

Download

May 15, 2014

IO Waits, Average Wait Time and Histogram Heat maps using SQL Developer

Filed under: Tools, Troubleshooting — vishaldesai @ 11:00 pm

Jeff Smith pointed out David Mann’s blog in a recent session on SQL Developer Reporting. Using David’s base code I have developed SQL Developer heat-map reports to visualize, troubleshoot and compare IO-related waits.

 image

 

image

From the above we can clearly see that on 11-MAY-2013, between 2:00 and 3:00 pm, the average wait time for the log file parallel write event was 27.3 ms. The number of waits during the same time slot was 10,801, equivalent to about 3 waits per second, which is not significant. If intermittent slowness is reported by the application team and it is related to the commit wait class, DBAs need to engage the SA/SAN team to investigate further.
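The average wait time shown in the heat map is just delta time waited over delta wait count between AWR snapshots. A minimal sketch of that calculation against dba_hist_system_event (assuming a single instance; the actual report adds DBID/instance filters and pivots by day and hour):

```sql
-- Average wait (ms) per AWR snapshot for one event, from the cumulative counters
select s.begin_interval_time,
       e.total_waits - lag(e.total_waits) over (order by e.snap_id) waits,
       round((e.time_waited_micro - lag(e.time_waited_micro) over (order by e.snap_id))
             / 1000
             / nullif(e.total_waits - lag(e.total_waits) over (order by e.snap_id), 0), 1) avg_ms
from   dba_hist_system_event e
join   dba_hist_snapshot s
  on   s.snap_id         = e.snap_id
 and   s.dbid            = e.dbid
 and   s.instance_number = e.instance_number
where  e.event_name = 'log file parallel write'
order  by e.snap_id;
```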

Below are IO wait histogram reports for the log file parallel write event on the same database instance. For an OLTP workload you want to see the bright red boxes for log file parallel write in the 1 ms to 16 ms buckets, for both wait counts and average wait times. Such heat maps can also be used for relative performance comparison by Day or Day/Hour.
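The histogram heat maps are driven by dba_hist_event_histogram. A sketch of the per-bucket source data behind them (wait_count in this view is cumulative, so a real report diffs consecutive snapshots; this just shows the shape of the data):

```sql
-- Wait counts per latency bucket for one event (cumulative values)
select s.begin_interval_time,
       h.wait_time_milli,   -- bucket upper bound: <1ms, <2ms, <4ms, ...
       h.wait_count
from   dba_hist_event_histogram h
join   dba_hist_snapshot s
  on   s.snap_id         = h.snap_id
 and   s.dbid            = h.dbid
 and   s.instance_number = h.instance_number
where  h.event_name = 'log file parallel write'
order  by s.begin_interval_time, h.wait_time_milli;
```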

image

 

image

 

Download (I will upload SQL developer report definition files early next week.)

Another example of vertical stacked bar chart.

IO Wait Histogram by Count – db file sequential read

image

 

IO Wait Histogram by Avg Wait Time(s) – db file sequential read

image

April 18, 2014

Database Time Viewer

Filed under: Tools — vishaldesai @ 2:31 pm

DBTimeViewer is a great standalone tool created by Dominic Giles to monitor Oracle Database Time. It requires a simple configuration file update with database connection information such as username, password, hostname, port and service. In a large enterprise environment with hundreds or thousands of databases it becomes a little difficult to update the databases.xml file every time you have to connect to a different database. I have created a simple combination of batch and SQL scripts that will automatically create databases.xml.

Scripts:

Create SQL and Batch scripts with proper path settings:

Script dbtimemonitor.sql

set head off
set feed off
set linesize 200
set pages 9999
set feedback off
set verify off
set pagesize 0


set term off

spool C:\<dbtime path>\databases.xml

select '<?xml version = ' || '''' || '1.0' || '''' || ' encoding = ' || '''' || 'UTF-8' || '''' || '?>'  from dual
union all
select '<WaitMonitor Title="Monitored Databases" xmlns="http://www.dominicgiles.com/waitmonitor">' from dual
union all
select  '<MonitoredDatabase><ConnectString>//' || a.host_name || '<domain>:<port>/' || b.name || '</ConnectString><Comment></Comment><Username>' || '&1' || '</Username><Password>' || '&2' || '</Password></MonitoredDatabase>'  from gv$instance a, v$database b
union all
select '</WaitMonitor>' from dual;

spool off
set term on
exit


Script dbtime.bat

@ECHO off
SET DBUser=%1
SET DBPass=%2
SET DBTNS=%3

sqlplus -s "%DBUser%/%DBPass%@%DBTNS%" @C:\<path>\dbtimemonitor.sql %1 %2

cd C:\<path>
dbtimemonitor.bat

Output:

C:\<path>>dbtime.bat <dbuser> <dbpassword> <dbtns>

image

January 24, 2014

Build Bind Variables from v$sql_monitor binds_xml column

Filed under: SQL, Troubleshooting — vishaldesai @ 7:41 pm

One of my clients had an interesting SQL performance issue. Whenever the client executed a query from Ab Initio it took forever, while the same SQL from SQL*Plus ran in less than a minute. The two tools were producing different plan_hash_values. I looked at v$ses_optimizer_env to ensure there were no session-level parameters set for the Ab Initio session.

I wanted to run the exact SQL from SQL*Plus and create a SQL plan baseline so that when the SQL is submitted from Ab Initio it can use the good plan. Ab Initio was generating the SQL dynamically, so I decided to get the exact SQL text, bind variables and values from v$sql_monitor. I defined the bind variables, assigned values, ran the SQL from SQL*Plus and created a profile which was eventually used by Ab Initio. To automate constructing the bind variables and SQL text from v$sql_monitor, I wrote a script called build_bind_sqlmonitor.sql.

The script will prompt for Key, SID and SQL_ID. Enter a value for at least one prompt, then enter the key value to construct the bind variables and SQL text as shown below:

SQL> @build_bind_sqlmonitor.sql
Please enter the value for Key if known       :
Please enter the value for Sid if known       : 986
Please enter the value for sql_id if known    :

             KEY STATUS              USERNAME   MODULE                      SID SQL_ID        FIRST_REFRESH_TIME   PROGRAM         SQL_TEXT
---------------- ------------------- ---------- -------------------- ---------- ------------- -------------------- --------------- --------------------
   6996503228544 DONE (ALL ROWS)     SYS        sqlplus.exe                 986 8wy5mjmh53vxf 01/24/14 10:41:56    sqlplus.exe     select /*+ monitor f
   8362302828663 DONE (ERROR)        SYS        sqlplus.exe                 986 4bkxr36kasm7s 01/24/14 10:40:51    sqlplus.exe     select /*+ monitor f
  16471201083509 DONE (ERROR)        SYS        sqlplus.exe                 986 azb329svj1ajd 01/24/14 10:39:36    sqlplus.exe     select /*+ monitor f
   7082402574450 DONE (ERROR)        SYS        sqlplus.exe                 986 3036w6mx13rq5 01/24/14 10:38:36    sqlplus.exe     select /*+ monitor f
   5845451993201 DONE (ERROR)        SYS        sqlplus.exe                 986 20fmy44010wvw 01/24/14 10:38:00    sqlplus.exe     select /*+ monitor f
  11643657842205 DONE (ALL ROWS)                                            986 bfjf698jtp6n1 01/24/14 03:10:14
   7421704990227 DONE                                                       986 2vn8ztuzaq3k4 01/24/14 03:10:11
  14856293379585 DONE (ALL ROWS)                                            986 53afdbcv9637w 01/24/14 03:10:10
.....
.....
Please enter Key from above       : 6996503228544

variable SYS_B_0 VARCHAR2(32);
variable SYS_B_1 VARCHAR2(32);
variable SYS_B_2 NUMBER;

exec :SYS_B_0 := '31-OCT-12 00.00.00';
exec :SYS_B_1 := 'DD-MON-YY HH24:MI:SS';
exec :SYS_B_2 := 30205578;

select /*+ monitor full(ism) */ *
from ism.table_name ism
where col1=to_date(:"SYS_B_0",:"SYS_B_1")
and   col2=:"SYS_B_2";

SQL> variable SYS_B_0 VARCHAR2(32);
SQL> variable SYS_B_1 VARCHAR2(32);
SQL> variable SYS_B_2 NUMBER;
SQL>
SQL> exec :SYS_B_0 := '31-OCT-12 00.00.00';

PL/SQL procedure successfully completed.

SQL> exec :SYS_B_1 := 'DD-MON-YY HH24:MI:SS';

PL/SQL procedure successfully completed.

SQL> exec :SYS_B_2 := 30205578;

PL/SQL procedure successfully completed.

SQL> select /*+ monitor full(ism) */ *
  2  from ism.table_name ism
  3  where col1=to_date(:"SYS_B_0",:"SYS_B_1")
  4  and   col2=:"SYS_B_2";

.....

1 row selected.

Download Script

If any data type conversion is involved, the script may not produce reliable output.
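The core idea of the script is to shred the binds_xml column with XMLTABLE. A minimal sketch follows; the attribute names match the usual `<binds><bind name=... dtystr=...>value</bind></binds>` shape of that column, but treat them as assumptions and verify against your version:

```sql
-- Extract bind name, declared type and captured value from v$sql_monitor.binds_xml
select x.bind_name, x.bind_type, x.bind_value
from   v$sql_monitor m,
       xmltable('/binds/bind'
                passing xmltype(m.binds_xml)
                columns bind_name  varchar2(30)   path '@name',
                        bind_type  varchar2(30)   path '@dtystr',
                        bind_value varchar2(4000) path '.') x
where  m.key = :key
and    m.binds_xml is not null;
```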

January 22, 2014

Yet Another Case of multiple child cursors, BIND_MISMATCH and OPTIMIZER_MISMATCH

Filed under: Performance Tuning, Troubleshooting — vishaldesai @ 9:08 pm

One of my clients had unpredictable response times for a bunch of SQL statements. I did not have any SQL_ID or SQL code to start with, so I began by running Kerry’s unstable_plans.sql. As per awr_plan_stats.sql, 3799146274 was the good plan hash value and 2775678588 was the bad one.

Output from unstable_plans.sql

SQL_ID        SUM(EXECS)   MIN_ETIME   MAX_ETIME   NORM_STDDEV
------------- ---------- ----------- ----------- -------------
....
djtwm74rrch17       1709         .00       23.69    7,395.6254
..

@awr_plan_stats
Enter value for sql_id: djtwm74rrch17

SQL_ID        PLAN_HASH_VALUE        EXECS    AVG_ETIME AVG_PX_SERVERS AVG_CPU_TIME            AVG_LIO          AVG_PIO
------------- --------------- ------------ ------------ -------------- ------------ ------------------ ----------------
djtwm74rrch17      3799146274          462        1.042     .004329004         .000                7.0               .1
djtwm74rrch17      2775678588        1,250   29,602.465          .0016        2.161          157,655.3        157,725.4

SQL_TEXT for djtwm74rrch17

SELECT ...
FROM TAB1 PAL,
  TAB2 COMPANY
WHERE PAL.CPY_ID      = COMPANY.CPYID
AND ( PAL.LOAN_NUMBER = :"SYS_B_0" )
ORDER BY ...

We clearly had multiple child cursors for the given SQL_ID and two different plan hash values. Child 0 was the first child; child 1 was created due to a bind mismatch; child 2 was created due to an optimizer mismatch, marking child 0 as non-shareable; and child 3 was created due to both optimizer and bind mismatches, marking child 1 as non-shareable.

SQL> select sql_id,plan_hash_value,child_number,is_shareable from gv$sql where sql_id='djtwm74rrch17' and inst_id=2;

SQL_ID        PLAN_HASH_VALUE CHILD_NUMBER I
------------- --------------- ------------ -
djtwm74rrch17      3799146274            0 N
djtwm74rrch17      2775678588            1 N
djtwm74rrch17      3799146274            2 Y
djtwm74rrch17      2775678588            3 Y

select * from GV$SQL_SHARED_CURSOR where sql_id='djtwm74rrch17'
and inst_id=2;

INST_ID SQL_ID        ADDRESS          CHILD_ADDRESS    CHILD_NUMBER OPTIMIZER_MISMATCH BIND_MISMATCH
------- ------------- ---------------- ---------------- ------------ ------------------ -------------
      2 djtwm74rrch17 000000025C7B1F90 000000025C8296D0            0 N                  N
      2 djtwm74rrch17 000000025C7B1F90 000000025C6E2DF0            1 N                  Y
      2 djtwm74rrch17 000000025C7B1F90 000000025C6E4210            2 Y                  N
      2 djtwm74rrch17 000000025C7B1F90 000000025C7B90E0            3 Y                  Y

Child cursors 2 and 3 were created due to an optimizer mismatch, and the parameter that caused it was _pga_max_size. We never set or changed _pga_max_size on our databases, but the databases were configured with AMM. _pga_max_size is derived from pga_aggregate_target, so whenever the PGA was resized it changed _pga_max_size, making the existing cursors non-shareable.

Based on output from gv$sql_optimizer_env:

Parameter       Child 0    Child 1    Child 2    Child 3
_pga_max_size   229360 KB  229360 KB  239200 KB  239200 KB

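The per-child _pga_max_size values can be pulled by self-joining gv$sql_optimizer_env on parameter name (a sketch; substitute your own SQL_ID, instance and child numbers):

```sql
-- Optimizer env parameters that differ between two child cursors
select a.name, a.value value_child_0, b.value value_child_2
from   gv$sql_optimizer_env a,
       gv$sql_optimizer_env b
where  a.sql_id       = 'djtwm74rrch17'
and    b.sql_id       = a.sql_id
and    a.inst_id      = 2
and    b.inst_id      = a.inst_id
and    a.child_number = 0
and    b.child_number = 2
and    b.name         = a.name
and    b.value       <> a.value;
```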
SQL> show parameter memory%target

NAME                                 TYPE                             VALUE
------------------------------------ -------------------------------- ------------------------------
memory_max_target                    big integer                      8G
memory_target                        big integer                      8G

And the reason for the bind mismatch was the use of different data types for the literal values, as shown below:

SQL> select sql_id,child_number,name,position,datatype,datatype_string,max_length,last_captured,value_string from GV$SQL_BIND_CAPTURE where inst_id=2 and sql_id='djtwm74rrch17'; 

SQL_ID        CHILD_NUMBER NAME                             POSITION   DATATYPE DATATYPE_STRING MAX_LENGTH LAST_CAPTURED     VALUE_STRI
------------- ------------ ------------------------------ ---------- ---------- --------------- ---------- ----------------- ----------
djtwm74rrch17            3 :SYS_B_0                                1          2 NUMBER                  22 01/22/14 08:14:14 477026652
djtwm74rrch17            2 :SYS_B_0                                1          1 VARCHAR2(32)            32 01/22/14 10:14:44 0476735873
djtwm74rrch17            1 :SYS_B_0                                1          2 NUMBER                  22 01/21/14 10:49:20 476783261
djtwm74rrch17            0 :SYS_B_0                                1          1 VARCHAR2(32)            32 01/21/14 15:45:13 0476636964 

The predicate section of the plan shows the to_number function being applied for hash 2775678588. The same information can be derived from v$sql_plan_statistics_all.

SQL_ID  djtwm74rrch17, child number 2
-------------------------------------
Plan hash value: 3799146274

-----------------------------------------------------------------------------------------------------
| Id  | Operation                      | Name               | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT               |                    |       |       |     6 (100)|          |
|   1 |  SORT ORDER BY                 |                    |     1 |   313 |     6  (17)| 00:00:01 |
|   2 |   NESTED LOOPS                 |                    |       |       |            |          |
|   3 |    NESTED LOOPS                |                    |     1 |   313 |     5   (0)| 00:00:01 |
|   4 |     TABLE ACCESS BY INDEX ROWID| TAB1               |     1 |   279 |     4   (0)| 00:00:01 |
|*  5 |      INDEX RANGE SCAN          | PKTAB1             |     1 |       |     3   (0)| 00:00:01 |
|*  6 |     INDEX UNIQUE SCAN          | TAB1_CPYID         |     1 |       |     0   (0)|          |
|   7 |    TABLE ACCESS BY INDEX ROWID | TAB2               |     1 |    34 |     1   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   5 - access("PAL"."LOAN_NUMBER"=:SYS_B_0)
   6 - access("PAL"."CPY_ID"="COMPANY"."CPYID")
   
   
SQL_ID  djtwm74rrch17, child number 3
-------------------------------------
Plan hash value: 2775678588

----------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name               | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |                    |       |       | 44394 (100)|          |
|   1 |  SORT ORDER BY                |                    |     1 |   313 | 44394   (1)| 00:08:53 |
|   2 |   NESTED LOOPS                |                    |       |       |            |          |
|   3 |    NESTED LOOPS               |                    |     1 |   313 | 44393   (1)| 00:08:53 |
|*  4 |     TABLE ACCESS FULL         | TAB1               |     1 |   279 | 44392   (1)| 00:08:53 |
|*  5 |     INDEX UNIQUE SCAN         | TAB1_CPYID         |     1 |       |     0   (0)|          |
|   6 |    TABLE ACCESS BY INDEX ROWID| TAB2               |     1 |    34 |     1   (0)| 00:00:01 |
----------------------------------------------------------------------------------------------------


Predicate Information (identified by operation id):
---------------------------------------------------

   4 - filter(TO_NUMBER("PAL"."LOAN_NUMBER")=:SYS_B_0)
   5 - access("PAL"."CPY_ID"="COMPANY"."CPYID")
   
   
SQL> l
  1   select distinct sql_id,PLAN_HASH_VALUE,id,PARENT_ID,ACCESS_PREDICATES,FILTER_PREDICATES FROM gv$sql_plan_statistics_all
  2   WHERE sql_id = 'djtwm74rrch17' and inst_id=2
  3   and (ACCESS_PREDICATES is not null or FILTER_PREDICATES is not null)
  4*  order by PLAN_HASH_VALUE
SQL> /

SQL_ID        PLAN_HASH_VALUE         ID  PARENT_ID ACCESS_PREDICATES                        FILTER_PREDICATES
------------- --------------- ---------- ---------- ---------------------------------------- ----------------------------------------
djtwm74rrch17      2775678588          5          3 "PAL"."CPY_ID"="COMPANY"."CPYID"
djtwm74rrch17      2775678588          4          3                                          TO_NUMBER("PAL"."LOAN_NUMBER")=:SYS_B_0
djtwm74rrch17      3799146274          6          3 "PAL"."CPY_ID"="COMPANY"."CPYID"
djtwm74rrch17      3799146274          5          4 "PAL"."LOAN_NUMBER"=:SYS_B_0

Unfortunately, AWR does not store access_predicates and filter_predicates, so if the cursors have aged out it won’t be possible to troubleshoot such random behavior.
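You can confirm this on your own system: dba_hist_sql_plan has the predicate columns, but AWR leaves them unpopulated (a quick check, reusing the SQL_ID from this post as a placeholder):

```sql
-- AWR keeps the plan lines, but the predicate columns come back NULL
select plan_hash_value, id, operation, access_predicates, filter_predicates
from   dba_hist_sql_plan
where  sql_id = 'djtwm74rrch17'
order  by plan_hash_value, id;
```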

January 10, 2014

Golden Gate conditional processing using UPDATEDELETES, TOKENS and FILTER

Filed under: Golden Gate, Replication — vishaldesai @ 4:01 pm

The client had two databases (source and target) and wanted to replicate data as follows:

All transactions generated by the OLTP user need to be replicated as-is, in near real time, to the target database. The ARCHIVER user runs batch operations and deletes data from the source database, but on the target database those ARCHIVER deletes should be converted into updates, with a column indicator on the target to identify records that were deleted by the ARCHIVER user on the source. Eventually these records will be deleted from the target database once they are consumed by a queue processor.

Test Case:

For demo purposes my target database is the same as my source database.

Source Database                                                  Target Database
APPSCHEMA – stores application tables                            APPSCHEMA – stores application tables
OLTP – web user
ARCHIVER – user for archiving old data using batch operations

Scripts to setup demo:

Database schema and table setup:
================================

DROP TABLE appschema.test;
DROP TABLE appschema.test1;
DROP USER oltp CASCADE;
DROP USER archiver CASCADE;
DROP USER appschema CASCADE;

CREATE USER oltp IDENTIFIED BY oltp DEFAULT TABLESPACE users TEMPORARY TABLESPACE temp;
CREATE USER archiver IDENTIFIED BY archiver DEFAULT TABLESPACE users TEMPORARY TABLESPACE temp;
CREATE USER appschema IDENTIFIED BY appschema DEFAULT TABLESPACE users TEMPORARY TABLESPACE temp;
GRANT CREATE session, RESOURCE, DBA TO oltp,archiver,appschema;
  
CREATE TABLE appschema.test
(
   id      NUMBER PRIMARY KEY,
   name    VARCHAR2(20),
   company VARCHAR2(20)
);

  
CREATE TABLE appschema.test1
(
   id          NUMBER PRIMARY KEY,
   name        VARCHAR2(20),
   company     VARCHAR2(20),
   deleted_row VARCHAR2(20)
);

Source:
=======

--Add Extract
dblogin  userid ggadmin@DBTNS, password pass1234
ADD EXTRACT e_test, TRANLOG, BEGIN NOW
ADD EXTTRAIL ./dirdat/et, EXTRACT e_test

--Extract parameter file
edit params e_test

EXTRACT e_test
SETENV (NLS_LANG="AMERICAN_AMERICA.AL32UTF8")
SETENV (ORACLE_HOME = "/opt/oracle/app/11.2.0.3")
SETENV (ORACLE_SID = "DBTNS")
DBOPTIONS
USERID ggadmin, PASSWORD pass1234
EXTTRAIL ./dirdat/et
TABLE appschema.test,TOKEN(TK_DBUSER = @GETENV ("TRANSACTION" , "USERNAME"));

--Generate definition file
edit params defgen1

defsfile /opt/oracle/app/gghome/dirdef/test.def
USERID ggadmin@DBTNS, PASSWORD pass1234
table appschema.test;

defgen paramfile /opt/oracle/app/gghome/dirprm/defgen1.prm

Copy /opt/oracle/app/gghome/dirdef/test.def to target server.

Target:
=======

--Add Replicat
dblogin  userid ggadmin@DBTNS, password pass1234
ADD CHECKPOINTTABLE ggadmin.CHKPTAB
ADD REPLICAT r_test, EXTTRAIL ./dirdat/et CHECKPOINTTABLE ggadmin.CHKPTAB

--Replicat parameter file
edit params r_test

REPLICAT r_test
SETENV (NLS_LANG="AMERICAN_AMERICA.AL32UTF8")
SETENV (ORACLE_HOME = "/opt/oracle/app/11.2.0.3")
SETENV (ORACLE_SID = "DBTNS")
SOURCEDEFS /opt/oracle/app/gghome/dirdef/test.def
USERID ggadmin, PASSWORD pass1234
ALLOWDUPTARGETMAP
GETINSERTS
GETUPDATES
GETDELETES
MAP appschema.test, TARGET appschema.test1, COLMAP (USEDEFAULTS, deleted_row = " "), FILTER ( @STRFIND(@TOKEN("TK_DBUSER"),"OLTP") > 0);
IGNOREINSERTS
IGNOREUPDATES
GETDELETES
UPDATEDELETES
MAP appschema.test, TARGET appschema.test1, COLMAP (USEDEFAULTS, deleted_row = "ARCHIVER"), FILTER ( @STRFIND(@TOKEN("TK_DBUSER"),"ARCHIVER") > 0);

Basically we are telling GoldenGate to replicate transactions as-is when the token value contains OLTP, and to convert deletes into updates when the token value contains ARCHIVER.

Testing Transactions:

-- Populate Data and verify INSERTS are replicated to Target

SQL> conn oltp/oltp
Connected.
SQL> insert into appschema.test values (20,'VISHAL','ORA');

1 row created.

SQL> insert into appschema.test values (21,'ERIC','ORA');

1 row created.

SQL> insert into appschema.test values (22,'PHIL','ORA');

1 row created.

SQL> insert into appschema.test values (23,'GARY','ORA');

1 row created.

SQL> commit;

Commit complete.

SQL> select * from appschema.test;

        ID NAME                 COMPANY
---------- -------------------- --------------------
        20 VISHAL               ORA
        21 ERIC                 ORA
        22 PHIL                 ORA
        23 GARY                 ORA

SQL> select * from appschema.test1;

        ID NAME                 COMPANY              DELETED_ROW
---------- -------------------- -------------------- --------------------
        20 VISHAL               ORA
        21 ERIC                 ORA
        22 PHIL                 ORA
        23 GARY                 ORA

-- Run DELETE as OLTP user

SQL> conn oltp/oltp
Connected.
SQL> delete from appschema.test where id=20;

1 row deleted.

SQL> commit;

Commit complete.

SQL> select * from appschema.test  order by id;

        ID NAME                 COMPANY
---------- -------------------- --------------------
        21 ERIC                 ORA
        22 PHIL                 ORA
        23 GARY                 ORA

SQL> select * from appschema.test1 order by id;

        ID NAME                 COMPANY              DELETED_ROW
---------- -------------------- -------------------- --------------------
        21 ERIC                 ORA
        22 PHIL                 ORA
        23 GARY                 ORA

-- Run DELETE as ARCHIVER user

SQL> conn archiver/archiver
Connected.
SQL> delete from appschema.test where id=22;

1 row deleted.

SQL> commit;

Commit complete.

SQL> select * from appschema.test order by id;

        ID NAME                 COMPANY
---------- -------------------- --------------------
        21 ERIC                 ORA
        23 GARY                 ORA

SQL> select * from appschema.test1 order by id;

        ID NAME                 COMPANY              DELETED_ROW
---------- -------------------- -------------------- --------------------
        21 ERIC                 ORA
        22 PHIL                 ORA                  ARCHIVER
        23 GARY                 ORA

That concludes the demo.

P.S. FILTER (@TOKEN("TK_DBUSER")="OLTP") did not work for some reason; probably the token was storing blank padding. I will update this blog when I have time to verify that. :)

December 11, 2013

Bloom Filter and function in join condition

Filed under: Performance Tuning, SQL — vishaldesai @ 4:26 pm

One of my data warehouse users wanted to compare data between two quarters. The original SQL is slightly complicated, so I have created a test case to demonstrate the issue.

DROP TABLE t1;

CREATE TABLE t1
  (
    id       NUMBER NOT NULL,
    datecol  DATE NOT NULL,
    comments VARCHAR2(500)
  )
  PARTITION BY RANGE
  (
    datecol
  )
  (
    PARTITION q3 VALUES LESS THAN (TO_DATE('10/10/2013', 'DD/MM/YYYY')) TABLESPACE users,
    PARTITION q4 VALUES LESS THAN (TO_DATE('01/01/2043', 'DD/MM/YYYY')) TABLESPACE users
  );
  
INSERT INTO t1
SELECT
  rownum,
  to_date('31/12/2013','DD/MM/YYYY'),
  dbms_random.string('X', '500')
FROM dual a
  CONNECT BY level <= 5000;
  
INSERT INTO t1
SELECT
  rownum,
  to_date('30/09/2013','DD/MM/YYYY'),
  dbms_random.string('X', '500')
FROM dual a
  CONNECT BY level <= 5000;

For the sake of simplicity I have added only two partitions here, but the original fact table had 10 years of data. Below is a subset of the SQL statement that the user was running to compare data across quarters.

select /*+ leading(xa,xb) parallel(xa,4) parallel(xb,4) */ sum(xa.id),count( distinct xb.comments)
from t1 xa ,t1 xb
where xa.datecol = add_months(xb.datecol,-3)
and   xa.datecol = '30-SEP-13';
------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                        | Name     | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |    TQ  |IN-OUT| PQ Distrib |
------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                 |          |     1 |   265 |   367  (74)| 00:00:01 |       |       |        |      |            |
|   1 |  SORT AGGREGATE                  |          |     1 |   265 |            |          |       |       |        |      |            |
|   2 |   PX COORDINATOR                 |          |       |       |            |          |       |       |        |      |            |
|   3 |    PX SEND QC (RANDOM)           | :TQ10001 |     1 |   265 |            |          |       |       |  Q1,01 | P->S | QC (RAND)  |
|   4 |     SORT AGGREGATE               |          |     1 |   265 |            |          |       |       |  Q1,01 | PCWP |            |
|   5 |      VIEW                        | VW_DAG_0 |    34M|  8774M|   367  (74)| 00:00:01 |       |       |  Q1,01 | PCWP |            |
|   6 |       HASH GROUP BY              |          |    34M|  9370M|   367  (74)| 00:00:01 |       |       |  Q1,01 | PCWP |            |
|   7 |        PX RECEIVE                |          |    34M|  9370M|   367  (74)| 00:00:01 |       |       |  Q1,01 | PCWP |            |
|   8 |         PX SEND HASH             | :TQ10000 |    34M|  9370M|   367  (74)| 00:00:01 |       |       |  Q1,00 | P->P | HASH       |
|   9 |          HASH GROUP BY           |          |    34M|  9370M|   367  (74)| 00:00:01 |       |       |  Q1,00 | PCWP |            |
|* 10 |           HASH JOIN              |          |    34M|  9370M|   122  (20)| 00:00:01 |       |       |  Q1,00 | PCWP |            |
|  11 |            PARTITION RANGE SINGLE|          |  5501 |   118K|    33   (0)| 00:00:01 |   KEY |   KEY |  Q1,00 | PCWC |            |
|* 12 |             TABLE ACCESS FULL    | T1       |  5501 |   118K|    33   (0)| 00:00:01 |   KEY |   KEY |  Q1,00 | PCWP |            |
|  13 |            PX BLOCK ITERATOR     |          |  6311 |  1608K|    65   (0)| 00:00:01 |     1 |     2 |  Q1,00 | PCWC |            |
|* 14 |             TABLE ACCESS FULL    | T1       |  6311 |  1608K|    65   (0)| 00:00:01 |     1 |     2 |  Q1,00 | PCWP |            |
------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

  10 - access("XA"."DATECOL"=ADD_MONTHS(INTERNAL_FUNCTION("XB"."DATECOL"),-3))
  12 - filter("XA"."DATECOL"='30-SEP-13')
  14 - filter(ADD_MONTHS(INTERNAL_FUNCTION("XB"."DATECOL"),-3)='30-SEP-13')

Note
-----
   - dynamic statistics used: dynamic sampling (level=2)

As you can see from plan line 14, table T1 (XB) was doing a full table scan and was accessing both partitions (Pstart 1 and Pstop 2). In the original SQL it was accessing all partitions for the last 10 years.

I asked the user to add an additional filter for T1 (XB) so that it would scan only the required partition, as shown below. But the user was not satisfied, as it would require calculating dates manually every time they run this SQL, and sometimes they run it for multiple quarters. Another option was to pass the filter as xb.datecol = (select add_months('30-SEP-13',3) from dual), but I was curious to find out more about bloom filters.

select /*+ leading(xa,xb) parallel(xa,4) parallel(xb,4) */ sum(xa.id),count( distinct xb.comments)
from t1 xa ,t1 xb
where xa.datecol = add_months(xb.datecol,-3)
and   xa.datecol = '30-SEP-13'
and   xb.datecol = '31-DEC-13';

I was wondering whether Oracle would create a bloom filter after scanning T1 (XA) and apply it to T1 (XB), doing a single-partition scan on T1 (XB) instead of scanning all partitions, but the problem was the add_months function in the join condition.

Original Plan:

image

Recommended SQL and plan (bloom filter in action):

select /*+ leading(xa,xb) parallel(xa,4) parallel(xb,4) */ sum(xa.id),count( distinct xb.comments)
from t1 xa ,t1 xb
where add_months(xa.datecol,+3) = xb.datecol
and   xa.datecol = '30-SEP-13';
-------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                         | Name     | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |    TQ  |IN-OUT| PQ Distrib |
-------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                  |          |     1 |   265 |   315  (69)| 00:00:01 |       |       |        |      |            |
|   1 |  SORT AGGREGATE                   |          |     1 |   265 |            |          |       |       |        |      |            |
|   2 |   PX COORDINATOR                  |          |       |       |            |          |       |       |        |      |            |
|   3 |    PX SEND QC (RANDOM)            | :TQ10001 |     1 |   265 |            |          |       |       |  Q1,01 | P->S | QC (RAND)  |
|   4 |     SORT AGGREGATE                |          |     1 |   265 |            |          |       |       |  Q1,01 | PCWP |            |
|   5 |      VIEW                         | VW_DAG_0 |    28M|  7145M|   315  (69)| 00:00:01 |       |       |  Q1,01 | PCWP |            |
|   6 |       HASH GROUP BY               |          |    28M|  7630M|   315  (69)| 00:00:01 |       |       |  Q1,01 | PCWP |            |
|   7 |        PX RECEIVE                 |          |    28M|  7630M|   315  (69)| 00:00:01 |       |       |  Q1,01 | PCWP |            |
|   8 |         PX SEND HASH              | :TQ10000 |    28M|  7630M|   315  (69)| 00:00:01 |       |       |  Q1,00 | P->P | HASH       |
|   9 |          HASH GROUP BY            |          |    28M|  7630M|   315  (69)| 00:00:01 |       |       |  Q1,00 | PCWP |            |
|* 10 |           HASH JOIN               |          |    28M|  7630M|   118  (17)| 00:00:01 |       |       |  Q1,00 | PCWP |            |
|  11 |            PART JOIN FILTER CREATE| :BF0000  |  5501 |   118K|    33   (0)| 00:00:01 |       |       |  Q1,00 | PCWP |            |
|  12 |             PARTITION RANGE SINGLE|          |  5501 |   118K|    33   (0)| 00:00:01 |   KEY |   KEY |  Q1,00 | PCWC |            |
|* 13 |              TABLE ACCESS FULL    | T1       |  5501 |   118K|    33   (0)| 00:00:01 |   KEY |   KEY |  Q1,00 | PCWP |            |
|  14 |            PX BLOCK ITERATOR      |          | 10278 |  2619K|    65   (0)| 00:00:01 |:BF0000|:BF0000|  Q1,00 | PCWC |            |
|  15 |             TABLE ACCESS FULL     | T1       | 10278 |  2619K|    65   (0)| 00:00:01 |:BF0000|:BF0000|  Q1,00 | PCWP |            |
-------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

  10 - access("XB"."DATECOL"=ADD_MONTHS(INTERNAL_FUNCTION("XA"."DATECOL"),3))
  13 - filter("XA"."DATECOL"='30-SEP-13')

image

As you can see from the above, just by flipping the add_months function Oracle now creates and uses a bloom filter to scan only the required partition of T1 (XB). From this behavior it looks like bloom filters cannot be used when the join column is wrapped in a function. It is very similar to indexes: with an index on column x, scanning the table with function(x)=y will not use the index on x. Maybe in the future Oracle will add function-based bloom filters.
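The index analogy is easy to demonstrate on the same t1 test table (a sketch; the index name is made up, and actual plans vary by version and statistics):

```sql
-- Index on the bare column
CREATE INDEX t1_datecol_ix ON t1(datecol) LOCAL;

-- Can use the index: predicate on the bare column
SELECT COUNT(*) FROM t1
WHERE  datecol = TO_DATE('30/09/2013','DD/MM/YYYY');

-- Cannot use the index: the indexed column is wrapped in a function
SELECT COUNT(*) FROM t1
WHERE  ADD_MONTHS(datecol, 3) = TO_DATE('31/12/2013','DD/MM/YYYY');
```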

November 26, 2013

Compare IO outliers

Filed under: Performance Tuning, Tools, Troubleshooting — vishaldesai @ 9:47 pm

Oracle SQL Developer has a nice reporting capability to plot charts. To compare IO outliers between two periods using dba_hist_event_histogram, I have created a couple of reports for quick comparison.

When you click on a report, it will prompt you to select the database and input the event, period one start and end snaps, and period two start and end snaps.
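The underlying query shape is straightforward (a sketch; wait_count in dba_hist_event_histogram is cumulative, so the real reports diff consecutive snapshots, and the bind names below are just stand-ins for the report prompts):

```sql
-- Wait counts per latency bucket, side by side for two snapshot ranges
select wait_time_milli,
       sum(case when snap_id between :p1_begin and :p1_end then wait_count end) period_1,
       sum(case when snap_id between :p2_begin and :p2_end then wait_count end) period_2
from   dba_hist_event_histogram
where  event_name = :event_name
group  by wait_time_milli
order  by wait_time_milli;
```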

Compare IO wait count outliers by %

image

Compare IO wait count outliers by Value

image

Compare IO wait time outlier by %

image

IO latency for all snaps by Day

image

IO latency for last 24 hours

image

Download link for report definition file.

P.S. The current version of SQL Developer supports only 5 different colors for pie charts, so there is some overlap of colors in the first 3 charts.

October 7, 2013

SQL Developer Child Reports and Drill Down

Filed under: Tools, Troubleshooting — vishaldesai @ 10:36 pm

SQL Developer has some cool reporting capabilities. Using these features and some of my favorite scripts, I can now quickly troubleshoot Oracle Database related issues.

Example: For given database session, I can run Tanel’s asqlmon, look at historical SQL performance using Kerry’s AWR scripts, and drill down into IO latency with just point and click.

 


The Silver is the New Black Theme. Blog at WordPress.com.
