Vishal Desai’s Oracle Blog

January 24, 2014

Build Bind Variables from v$sql_monitor binds_xml column

Filed under: SQL, Troubleshooting — vishaldesai @ 7:41 pm

One of my clients had an interesting SQL performance issue. Whenever the client executed a query from Ab Initio it took forever, while the same SQL ran from SQL*Plus in less than a minute. The two tools were producing different plan_hash_values. I looked at v$ses_optimizer_env to confirm there were no session-level parameters set for the Ab Initio session.

I wanted to run the exact SQL from SQL*Plus and create a SQL plan baseline so that when the SQL was submitted from Ab Initio it would use the good plan. Ab Initio was generating the SQL dynamically, so I decided to get the exact SQL text, bind variables and values from v$sql_monitor. I defined the bind variables, assigned their values, ran the SQL from SQL*Plus and created a profile, which was eventually used by Ab Initio. To automate constructing the bind variables and SQL text from v$sql_monitor, I wrote a script called build_bind_sqlmonitor.sql.

The script prompts for Key, SID and SQL_ID. Enter a value for at least one prompt, then enter the key value to construct the bind variables and SQL text as shown below:

SQL> @build_bind_sqlmonitor.sql
Please enter the value for Key if known       :
Please enter the value for Sid if known       : 986
Please enter the value for sql_id if known    :

             KEY STATUS              USERNAME   MODULE                      SID SQL_ID        FIRST_REFRESH_TIME   PROGRAM         SQL_TEXT
---------------- ------------------- ---------- -------------------- ---------- ------------- -------------------- --------------- --------------------
   6996503228544 DONE (ALL ROWS)     SYS        sqlplus.exe                 986 8wy5mjmh53vxf 01/24/14 10:41:56    sqlplus.exe     select /*+ monitor f
   8362302828663 DONE (ERROR)        SYS        sqlplus.exe                 986 4bkxr36kasm7s 01/24/14 10:40:51    sqlplus.exe     select /*+ monitor f
  16471201083509 DONE (ERROR)        SYS        sqlplus.exe                 986 azb329svj1ajd 01/24/14 10:39:36    sqlplus.exe     select /*+ monitor f
   7082402574450 DONE (ERROR)        SYS        sqlplus.exe                 986 3036w6mx13rq5 01/24/14 10:38:36    sqlplus.exe     select /*+ monitor f
   5845451993201 DONE (ERROR)        SYS        sqlplus.exe                 986 20fmy44010wvw 01/24/14 10:38:00    sqlplus.exe     select /*+ monitor f
  11643657842205 DONE (ALL ROWS)                                            986 bfjf698jtp6n1 01/24/14 03:10:14
   7421704990227 DONE                                                       986 2vn8ztuzaq3k4 01/24/14 03:10:11
  14856293379585 DONE (ALL ROWS)                                            986 53afdbcv9637w 01/24/14 03:10:10
.....
.....
Please enter Key from above       : 6996503228544

variable SYS_B_0 VARCHAR2(32);
variable SYS_B_1 VARCHAR2(32);
variable SYS_B_2 NUMBER;

exec :SYS_B_0 := '31-OCT-12 00.00.00';
exec :SYS_B_1 := 'DD-MON-YY HH24:MI:SS';
exec :SYS_B_2 := 30205578;

select /*+ monitor full(ism) */ *
from ism.table_name ism
where col1=to_date(:"SYS_B_0",:"SYS_B_1")
and   col2=:"SYS_B_2";

SQL> variable SYS_B_0 VARCHAR2(32);
SQL> variable SYS_B_1 VARCHAR2(32);
SQL> variable SYS_B_2 NUMBER;
SQL>
SQL> exec :SYS_B_0 := '31-OCT-12 00.00.00';

PL/SQL procedure successfully completed.

SQL> exec :SYS_B_1 := 'DD-MON-YY HH24:MI:SS';

PL/SQL procedure successfully completed.

SQL> exec :SYS_B_2 := 30205578;

PL/SQL procedure successfully completed.

SQL> select /*+ monitor full(ism) */ *
  2  from ism.table_name ism
  3  where col1=to_date(:"SYS_B_0",:"SYS_B_1")
  4  and   col2=:"SYS_B_2";

.....

1 row selected.
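
At this point the good plan is in the cursor cache and can be locked in. Below is a minimal sketch of loading it as a SQL plan baseline with DBMS_SPM (in this case a SQL profile was what was eventually created; &sql_id is a placeholder for the sql_id of the statement just executed):

-- Sketch only: load the cursor-cache plan(s) for the given sql_id as accepted
-- SQL plan baselines so the same plan is used when the SQL arrives from Ab Initio.
set serveroutput on
declare
  l_plans pls_integer;
begin
  l_plans := dbms_spm.load_plans_from_cursor_cache(sql_id => '&sql_id');
  dbms_output.put_line(l_plans || ' plan(s) loaded');
end;
/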

Download Script

If there is any data type conversion involved, the script may not produce reliable output.
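
For reference, the heart of such a script is essentially an XMLTABLE over the binds_xml column. Below is a minimal sketch (not the exact script above), assuming the usual <binds>/<bind> structure of binds_xml:

-- Sketch: pull bind name, declared data type and captured value out of binds_xml
-- for the v$sql_monitor row identified by the chosen key.
select x.bind_name, x.bind_datatype, x.bind_value
from   v$sql_monitor m,
       xmltable('/binds/bind'
                passing xmltype(m.binds_xml)
                columns bind_name     varchar2(30)   path '@name',
                        bind_datatype varchar2(30)   path '@dtystr',
                        bind_value    varchar2(4000) path '.') x
where  m.key = &key
and    m.binds_xml is not null;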

January 22, 2014

Yet Another Case of multiple child cursors, BIND_MISMATCH and OPTIMIZER_MISMATCH

Filed under: Performance Tuning, Troubleshooting — vishaldesai @ 9:08 pm

One of my clients had unpredictable response times for a bunch of SQL statements. I did not have any sql_id or SQL code to start with, so I started by running Kerry’s unstable_plans.sql. As per awr_plan_stats.sql, 3799146274 was the good plan hash value and 2775678588 was the bad one.

Output from unstable_plans.sql

SQL_ID        SUM(EXECS)   MIN_ETIME   MAX_ETIME   NORM_STDDEV
------------- ---------- ----------- ----------- -------------
....
djtwm74rrch17       1709         .00       23.69    7,395.6254
..

@awr_plan_stats
Enter value for sql_id: djtwm74rrch17

SQL_ID        PLAN_HASH_VALUE        EXECS    AVG_ETIME AVG_PX_SERVERS AVG_CPU_TIME            AVG_LIO          AVG_PIO
------------- --------------- ------------ ------------ -------------- ------------ ------------------ ----------------
djtwm74rrch17      3799146274          462        1.042     .004329004         .000                7.0               .1
djtwm74rrch17      2775678588        1,250   29,602.465          .0016        2.161          157,655.3        157,725.4

SQL_TEXT for djtwm74rrch17

SELECT ...
FROM TAB1 PAL,
  TAB2 COMPANY
WHERE PAL.CPY_ID      = COMPANY.CPYID
AND ( PAL.LOAN_NUMBER = :"SYS_B_0" )
ORDER BY ...

We clearly had multiple child cursors for the given sql_id, with two different plan hash values. Child 0 was the first child; child 1 was created due to BIND_MISMATCH; child 2 was created due to OPTIMIZER_MISMATCH, marking child 0 as non-shareable; and child 3 was created due to both optimizer and bind mismatch, marking child 1 as non-shareable.

SQL> select sql_id,plan_hash_value,child_number,is_shareable from gv$sql where sql_id='djtwm74rrch17' and inst_id=2;

SQL_ID        PLAN_HASH_VALUE CHILD_NUMBER I
------------- --------------- ------------ -
djtwm74rrch17      3799146274            0 N
djtwm74rrch17      2775678588            1 N
djtwm74rrch17      3799146274            2 Y
djtwm74rrch17      2775678588            3 Y

select * from GV$SQL_SHARED_CURSOR where sql_id='djtwm74rrch17'
and inst_id=2;

   INST_ID SQL_ID        ADDRESS          CHILD_ADDRESS    CHILD_NUMBER OPTIMIZER_MISMATCH BIND_MISMATCH
---------- ------------- ---------------- ---------------- ------------ ------------------ -------------
         2 djtwm74rrch17 000000025C7B1F90 000000025C8296D0            0 N                  N
         2 djtwm74rrch17 000000025C7B1F90 000000025C6E2DF0            1 N                  Y
         2 djtwm74rrch17 000000025C7B1F90 000000025C6E4210            2 Y                  N
         2 djtwm74rrch17 000000025C7B1F90 000000025C7B90E0            3 Y                  Y

Child cursors 2 and 3 were created due to OPTIMIZER_MISMATCH, and the parameter that caused the mismatch was _pga_max_size. We never set or change _pga_max_size on our databases, but the databases were configured with AMM. _pga_max_size is derived from pga_aggregate_target, so whenever the PGA component was resized it changed _pga_max_size, making the existing cursors non-shareable.

Based on the output from gv$sql_optimizer_env:

Parameter            Child 0      Child 1      Child 2      Child 3
-------------      ---------    ---------    ---------    ---------
_pga_max_size      229360 KB    229360 KB    239200 KB    239200 KB

SQL> show parameter memory%target

NAME                                 TYPE                             VALUE
------------------------------------ -------------------------------- ------------------------------
memory_max_target                    big integer                      8G
memory_target                        big integer                      8G
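
The differing parameter is easy to spot by comparing gv$sql_optimizer_env across the child cursors. A minimal sketch of such a query (standard view and column names, nothing specific to this case):

-- Sketch: list optimizer environment parameters whose value differs between
-- child cursors of the same sql_id.
select name, child_number, value
from   gv$sql_optimizer_env
where  sql_id  = 'djtwm74rrch17'
and    inst_id = 2
and    name in (select name
                from   gv$sql_optimizer_env
                where  sql_id  = 'djtwm74rrch17'
                and    inst_id = 2
                group  by name
                having count(distinct value) > 1)
order  by name, child_number;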

And the reason for the bind mismatch was the use of different data types for the literal values, as shown below:

SQL> select sql_id,child_number,name,position,datatype,datatype_string,max_length,last_captured,value_string from GV$SQL_BIND_CAPTURE where inst_id=2 and sql_id='djtwm74rrch17'; 

SQL_ID        CHILD_NUMBER NAME                             POSITION   DATATYPE DATATYPE_STRING MAX_LENGTH LAST_CAPTURED     VALUE_STRI
------------- ------------ ------------------------------ ---------- ---------- --------------- ---------- ----------------- ----------
djtwm74rrch17            3 :SYS_B_0                                1          2 NUMBER                  22 01/22/14 08:14:14 477026652
djtwm74rrch17            2 :SYS_B_0                                1          1 VARCHAR2(32)            32 01/22/14 10:14:44 0476735873
djtwm74rrch17            1 :SYS_B_0                                1          2 NUMBER                  22 01/21/14 10:49:20 476783261
djtwm74rrch17            0 :SYS_B_0                                1          1 VARCHAR2(32)            32 01/21/14 15:45:13 0476636964 

The predicate section of the plan shows the TO_NUMBER function being applied for plan hash value 2775678588. The same information can be derived from v$sql_plan_statistics_all.

SQL_ID  djtwm74rrch17, child number 2
-------------------------------------
Plan hash value: 3799146274

-----------------------------------------------------------------------------------------------------
| Id  | Operation                      | Name               | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT               |                    |       |       |     6 (100)|          |
|   1 |  SORT ORDER BY                 |                    |     1 |   313 |     6  (17)| 00:00:01 |
|   2 |   NESTED LOOPS                 |                    |       |       |            |          |
|   3 |    NESTED LOOPS                |                    |     1 |   313 |     5   (0)| 00:00:01 |
|   4 |     TABLE ACCESS BY INDEX ROWID| TAB1               |     1 |   279 |     4   (0)| 00:00:01 |
|*  5 |      INDEX RANGE SCAN          | PKTAB1             |     1 |       |     3   (0)| 00:00:01 |
|*  6 |     INDEX UNIQUE SCAN          | TAB1_CPYID         |     1 |       |     0   (0)|          |
|   7 |    TABLE ACCESS BY INDEX ROWID | TAB2               |     1 |    34 |     1   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   5 - access("PAL"."LOAN_NUMBER"=:SYS_B_0)
   6 - access("PAL"."CPY_ID"="COMPANY"."CPYID")
   
   
SQL_ID  djtwm74rrch17, child number 3
-------------------------------------
Plan hash value: 2775678588

----------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name               | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |                    |       |       | 44394 (100)|          |
|   1 |  SORT ORDER BY                |                    |     1 |   313 | 44394   (1)| 00:08:53 |
|   2 |   NESTED LOOPS                |                    |       |       |            |          |
|   3 |    NESTED LOOPS               |                    |     1 |   313 | 44393   (1)| 00:08:53 |
|*  4 |     TABLE ACCESS FULL         | TAB1               |     1 |   279 | 44392   (1)| 00:08:53 |
|*  5 |     INDEX UNIQUE SCAN         | TAB1_CPYID         |     1 |       |     0   (0)|          |
|   6 |    TABLE ACCESS BY INDEX ROWID| TAB2               |     1 |    34 |     1   (0)| 00:00:01 |
----------------------------------------------------------------------------------------------------


   4 - filter(TO_NUMBER("PAL"."LOAN_NUMBER")=:SYS_B_0)
   5 - access("PAL"."CPY_ID"="COMPANY"."CPYID")
   
   
SQL> l
  1   select distinct sql_id,PLAN_HASH_VALUE,id,PARENT_ID,ACCESS_PREDICATES,FILTER_PREDICATES FROM gv$sql_plan_statistics_all
  2   WHERE sql_id = 'djtwm74rrch17' and inst_id=2
  3   and (ACCESS_PREDICATES is not null or FILTER_PREDICATES is not null)
  4*  order by PLAN_HASH_VALUE
SQL> /

SQL_ID        PLAN_HASH_VALUE         ID  PARENT_ID ACCESS_PREDICATES                        FILTER_PREDICATES
------------- --------------- ---------- ---------- ---------------------------------------- ----------------------------------------
djtwm74rrch17      2775678588          5          3 "PAL"."CPY_ID"="COMPANY"."CPYID"
djtwm74rrch17      2775678588          4          3                                          TO_NUMBER("PAL"."LOAN_NUMBER")=:SYS_B_0
djtwm74rrch17      3799146274          6          3 "PAL"."CPY_ID"="COMPANY"."CPYID"
djtwm74rrch17      3799146274          5          4 "PAL"."LOAN_NUMBER"=:SYS_B_0

Unfortunately AWR does not store access_predicates and filter_predicates, so once the cursors have aged out of the shared pool it is not possible to troubleshoot such random behavior.
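
(The ACCESS_PREDICATES and FILTER_PREDICATES columns do exist in DBA_HIST_SQL_PLAN, but they are not populated, so a check like the one below comes back with NULLs once the cursor is gone from the cursor cache.)

-- The columns exist in AWR but are not captured, so this returns NULL predicates:
select plan_hash_value, id, access_predicates, filter_predicates
from   dba_hist_sql_plan
where  sql_id = 'djtwm74rrch17'
order  by plan_hash_value, id;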

January 10, 2014

Golden Gate conditional processing using UPDATEDELETES, TOKENS and FILTER

Filed under: Golden Gate, Replication — vishaldesai @ 4:01 pm

The client had two databases (source and target) and wanted to replicate data as follows:

All transactions generated by the OLTP user need to be replicated as-is, in near real time, to the target database. The ARCHIVER user runs batch operations and deletes data from the source database, but on the target database those deletes should be converted into updates, and there should be a column indicator on the target to identify records that were deleted by the ARCHIVER user on the source. Eventually these records will be deleted from the target database once they are consumed by a queue processor.

Test Case:

For demo purposes my target database is the same as the source database.

Source database:
  APPSCHEMA – stores application tables
  OLTP – web user
  ARCHIVER – user for archiving old data using batch operations

Target database:
  APPSCHEMA – stores application tables

Scripts to setup demo:

Database schema and table setup:
================================

DROP USER oltp cascade;
DROP USER archiver cascade;
DROP USER appschema cascade;
DROP TABLE appschema.test;
DROP TABLE appschema.test1;

CREATE USER oltp IDENTIFIED BY oltp DEFAULT TABLESPACE users TEMPORARY TABLESPACE temp;
CREATE USER archiver IDENTIFIED BY archiver DEFAULT TABLESPACE users TEMPORARY TABLESPACE temp;
CREATE USER appschema IDENTIFIED BY appschema DEFAULT TABLESPACE users TEMPORARY TABLESPACE temp;
GRANT CREATE session, RESOURCE, DBA TO oltp,archiver,appschema;
  
CREATE TABLE appschema.test
(
   id      NUMBER PRIMARY KEY,
   name    VARCHAR2(20),
   company VARCHAR2(20)
);

  
CREATE TABLE appschema.test1
(
   id          NUMBER PRIMARY KEY,
   name        VARCHAR2(20),
   company     VARCHAR2(20),
   deleted_row VARCHAR(20)
);

Source:
=======

--Add Extract
dblogin  userid ggadmin@DBTNS, password pass1234
ADD EXTRACT e_test, TRANLOG, BEGIN NOW
ADD EXTTRAIL ./dirdat/et, EXTRACT e_test

--Extract parameter file
edit params e_test

EXTRACT e_test
SETENV (NLS_LANG="AMERICAN_AMERICA.AL32UTF8")
SETENV (ORACLE_HOME = "/opt/oracle/app/11.2.0.3")
SETENV (ORACLE_SID = "DBTNS")
DBOPTIONS
USERID ggadmin, PASSWORD pass1234
EXTTRAIL ./dirdat/et
TABLE appschema.test,TOKEN(TK_DBUSER = @GETENV ("TRANSACTION" , "USERNAME"));

--Generate definition file
edit params defgen1

defsfile /opt/oracle/app/gghome/dirdef/test.def
USERID ggadmin@DBTNS, PASSWORD pass1234
table appschema.test;

defgen paramfile /opt/oracle/app/gghome/dirprm/defgen1.prm

Copy /opt/oracle/app/gghome/dirdef/test.def to the target server.

Target:
=======

--Add Replicat
dblogin  userid ggadmin@DBTNS, password pass1234
ADD CHECKPOINTTABLE ggadmin.CHKPTAB
ADD REPLICAT r_test, EXTTRAIL ./dirdat/et CHECKPOINTTABLE ggadmin.CHKPTAB

--Replicat parameter file
edit params r_test

REPLICAT r_test
SETENV (NLS_LANG="AMERICAN_AMERICA.AL32UTF8")
SETENV (ORACLE_HOME = "/opt/oracle/app/11.2.0.3")
SETENV (ORACLE_SID = "DBTNS")
SOURCEDEFS /opt/oracle/app/gghome/dirdef/test.def
USERID ggadmin, PASSWORD pass1234
ALLOWDUPTARGETMAP
GETINSERTS
GETUPDATES
GETDELETES
MAP appschema.test, TARGET appschema.test1, COLMAP (USEDEFAULTS, deleted_row = " "), FILTER ( @STRFIND(@TOKEN("TK_DBUSER"),"OLTP") > 0);
IGNOREINSERTS
IGNOREUPDATES
GETDELETES
UPDATEDELETES
MAP appschema.test, TARGET appschema.test1, COLMAP (USEDEFAULTS, deleted_row = "ARCHIVER"), FILTER ( @STRFIND(@TOKEN("TK_DBUSER"),"ARCHIVER") > 0);

Basically we are telling GoldenGate to replicate transactions as-is when the token value contains OLTP, and to convert deletes into updates when the token value contains ARCHIVER.

Testing Transactions:

-- Populate Data and verify INSERTS are replicated to Target

SQL> conn oltp/oltp
Connected.
SQL> insert into appschema.test values (20,'VISHAL','ORA');

1 row created.

SQL> insert into appschema.test values (21,'ERIC','ORA');

1 row created.

SQL> insert into appschema.test values (22,'PHIL','ORA');

1 row created.

SQL> insert into appschema.test values (23,'GARY','ORA');

1 row created.

SQL> commit;

Commit complete.

SQL> select * from appschema.test;

        ID NAME                 COMPANY
---------- -------------------- --------------------
        20 VISHAL               ORA
        21 ERIC                 ORA
        22 PHIL                 ORA
        23 GARY                 ORA

SQL> select * from appschema.test1;

        ID NAME                 COMPANY              DELETED_ROW
---------- -------------------- -------------------- --------------------
        20 VISHAL               ORA
        21 ERIC                 ORA
        22 PHIL                 ORA
        23 GARY                 ORA

-- Run DELETE as OLTP user

SQL> conn oltp/oltp
Connected.
SQL> delete from appschema.test where id=20;

1 row deleted.

SQL> commit;

Commit complete.

SQL> select * from appschema.test  order by id;

        ID NAME                 COMPANY
---------- -------------------- --------------------
        21 ERIC                 ORA
        22 PHIL                 ORA
        23 GARY                 ORA

SQL> select * from appschema.test1 order by id;

        ID NAME                 COMPANY              DELETED_ROW
---------- -------------------- -------------------- --------------------
        21 ERIC                 ORA
        22 PHIL                 ORA
        23 GARY                 ORA

-- Run DELETE as ARCHIVER user

SQL> conn archiver/archiver
Connected.
SQL> delete from appschema.test where id=22;

1 row deleted.

SQL> commit;

Commit complete.

SQL> select * from appschema.test order by id;

        ID NAME                 COMPANY
---------- -------------------- --------------------
        21 ERIC                 ORA
        23 GARY                 ORA

SQL> select * from appschema.test1 order by id;

        ID NAME                 COMPANY              DELETED_ROW
---------- -------------------- -------------------- --------------------
        21 ERIC                 ORA
        22 PHIL                 ORA                  ARCHIVER
        23 GARY                 ORA
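
As described at the top of the post, once the queue processor has consumed these flagged rows they can simply be purged from the target. A hypothetical clean-up statement using the demo's table and indicator column:

-- Hypothetical: purge rows already consumed by the queue processor.
DELETE FROM appschema.test1 WHERE deleted_row = 'ARCHIVER';
COMMIT;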

That concludes the demo.

P.S. FILTER (@TOKEN("TK_DBUSER")="OLTP") did not work for some reason; probably the token value was padded with trailing blanks. I will update this blog when I have time to verify that. :)

December 11, 2013

Bloom Filter and function in join condition

Filed under: Performance Tuning, SQL — vishaldesai @ 4:26 pm

One of my data warehouse users wanted to compare data between two quarters. The original SQL is slightly complicated, so I have created a test case to demonstrate the issue.

DROP TABLE t1;

CREATE TABLE t1
  (
    id       NUMBER NOT NULL,
    datecol  DATE NOT NULL,
    comments VARCHAR2(500)
  )
  PARTITION BY RANGE
  (
    datecol
  )
  (
    PARTITION q3 VALUES LESS THAN (TO_DATE('10/10/2013', 'DD/MM/YYYY')) TABLESPACE users,
    PARTITION q4 VALUES LESS THAN (TO_DATE('01/01/2043', 'DD/MM/YYYY')) TABLESPACE users
  );
  
INSERT INTO t1
SELECT
  rownum,
  to_date('31/12/2013','DD/MM/YYYY'),
  dbms_random.string('X', '500')
FROM dual a
  CONNECT BY level <= 5000;
  
INSERT INTO t1
SELECT
  rownum,
  to_date('30/09/2013','DD/MM/YYYY'),
  dbms_random.string('X', '500')
FROM dual a
  CONNECT BY level <= 5000;

For the sake of simplicity I have added only two partitions here, but the original fact table had 10 years of data. Below is a subset of the SQL statement that the user was running to compare data across quarters.

select /*+ leading(xa,xb) parallel(xa,4) parallel(xb,4) */ sum(xa.id),count( distinct xb.comments)
from t1 xa ,t1 xb
where xa.datecol = add_months(xb.datecol,-3)
and   xa.datecol = '30-SEP-13';
------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                        | Name     | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |    TQ  |IN-OUT| PQ Distrib |
------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                 |          |     1 |   265 |   367  (74)| 00:00:01 |       |       |        |      |            |
|   1 |  SORT AGGREGATE                  |          |     1 |   265 |            |          |       |       |        |      |            |
|   2 |   PX COORDINATOR                 |          |       |       |            |          |       |       |        |      |            |
|   3 |    PX SEND QC (RANDOM)           | :TQ10001 |     1 |   265 |            |          |       |       |  Q1,01 | P->S | QC (RAND)  |
|   4 |     SORT AGGREGATE               |          |     1 |   265 |            |          |       |       |  Q1,01 | PCWP |            |
|   5 |      VIEW                        | VW_DAG_0 |    34M|  8774M|   367  (74)| 00:00:01 |       |       |  Q1,01 | PCWP |            |
|   6 |       HASH GROUP BY              |          |    34M|  9370M|   367  (74)| 00:00:01 |       |       |  Q1,01 | PCWP |            |
|   7 |        PX RECEIVE                |          |    34M|  9370M|   367  (74)| 00:00:01 |       |       |  Q1,01 | PCWP |            |
|   8 |         PX SEND HASH             | :TQ10000 |    34M|  9370M|   367  (74)| 00:00:01 |       |       |  Q1,00 | P->P | HASH       |
|   9 |          HASH GROUP BY           |          |    34M|  9370M|   367  (74)| 00:00:01 |       |       |  Q1,00 | PCWP |            |
|* 10 |           HASH JOIN              |          |    34M|  9370M|   122  (20)| 00:00:01 |       |       |  Q1,00 | PCWP |            |
|  11 |            PARTITION RANGE SINGLE|          |  5501 |   118K|    33   (0)| 00:00:01 |   KEY |   KEY |  Q1,00 | PCWC |            |
|* 12 |             TABLE ACCESS FULL    | T1       |  5501 |   118K|    33   (0)| 00:00:01 |   KEY |   KEY |  Q1,00 | PCWP |            |
|  13 |            PX BLOCK ITERATOR     |          |  6311 |  1608K|    65   (0)| 00:00:01 |     1 |     2 |  Q1,00 | PCWC |            |
|* 14 |             TABLE ACCESS FULL    | T1       |  6311 |  1608K|    65   (0)| 00:00:01 |     1 |     2 |  Q1,00 | PCWP |            |
------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

  10 - access("XA"."DATECOL"=ADD_MONTHS(INTERNAL_FUNCTION("XB"."DATECOL"),-3))
  12 - filter("XA"."DATECOL"='30-SEP-13')
  14 - filter(ADD_MONTHS(INTERNAL_FUNCTION("XB"."DATECOL"),-3)='30-SEP-13')

Note
-----
   - dynamic statistics used: dynamic sampling (level=2)

As you can see at line 14 of the plan, table T1 (XB) was doing a full table scan and accessing both partitions (Pstart 1, Pstop 2). In the original SQL it was accessing all partitions for the last 10 years.

I asked the user to add an additional filter on T1 (XB) so that it would scan only the required partition, as shown below. But the user was not satisfied, as it would require calculating dates manually every time they ran this SQL, and sometimes they run it for multiple quarters. Another option was to pass the filter as xb.datecol = (select add_months('30-SEP-13',3) from dual), but I was curious to find out more about bloom filters.

select /*+ leading(xa,xb) parallel(xa,4) parallel(xb,4) */ sum(xa.id),count( distinct xb.comments)
from t1 xa ,t1 xb
where xa.datecol = add_months(xb.datecol,-3)
and   xa.datecol = '30-SEP-13'
and   xb.datecol = '31-DEC-13';
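
For completeness, the subquery variant mentioned above would look something like the following (a sketch only; I did not capture its plan for this test case):

-- Sketch: derive the matching quarter-end date in a scalar subquery
-- instead of hard-coding the second literal.
select /*+ leading(xa,xb) parallel(xa,4) parallel(xb,4) */ sum(xa.id),count( distinct xb.comments)
from t1 xa ,t1 xb
where xa.datecol = add_months(xb.datecol,-3)
and   xa.datecol = '30-SEP-13'
and   xb.datecol = (select add_months(to_date('30-SEP-13','DD-MON-RR'),3) from dual);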

I was wondering whether Oracle would create a bloom filter after scanning T1 (XA) and apply it to T1 (XB), doing a single-partition scan on T1 (XB) instead of scanning all partitions, but the problem was the add_months function in the join condition.

Original Plan:

[image]

Recommended SQL and plan (bloom filter in action):

select /*+ leading(xa,xb) parallel(xa,4) parallel(xb,4) */ sum(xa.id),count( distinct xb.comments)
from t1 xa ,t1 xb
where add_months(xa.datecol,+3) = xb.datecol
and   xa.datecol = '30-SEP-13';
-------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                         | Name     | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |    TQ  |IN-OUT| PQ Distrib |
-------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                  |          |     1 |   265 |   315  (69)| 00:00:01 |       |       |        |      |            |
|   1 |  SORT AGGREGATE                   |          |     1 |   265 |            |          |       |       |        |      |            |
|   2 |   PX COORDINATOR                  |          |       |       |            |          |       |       |        |      |            |
|   3 |    PX SEND QC (RANDOM)            | :TQ10001 |     1 |   265 |            |          |       |       |  Q1,01 | P->S | QC (RAND)  |
|   4 |     SORT AGGREGATE                |          |     1 |   265 |            |          |       |       |  Q1,01 | PCWP |            |
|   5 |      VIEW                         | VW_DAG_0 |    28M|  7145M|   315  (69)| 00:00:01 |       |       |  Q1,01 | PCWP |            |
|   6 |       HASH GROUP BY               |          |    28M|  7630M|   315  (69)| 00:00:01 |       |       |  Q1,01 | PCWP |            |
|   7 |        PX RECEIVE                 |          |    28M|  7630M|   315  (69)| 00:00:01 |       |       |  Q1,01 | PCWP |            |
|   8 |         PX SEND HASH              | :TQ10000 |    28M|  7630M|   315  (69)| 00:00:01 |       |       |  Q1,00 | P->P | HASH       |
|   9 |          HASH GROUP BY            |          |    28M|  7630M|   315  (69)| 00:00:01 |       |       |  Q1,00 | PCWP |            |
|* 10 |           HASH JOIN               |          |    28M|  7630M|   118  (17)| 00:00:01 |       |       |  Q1,00 | PCWP |            |
|  11 |            PART JOIN FILTER CREATE| :BF0000  |  5501 |   118K|    33   (0)| 00:00:01 |       |       |  Q1,00 | PCWP |            |
|  12 |             PARTITION RANGE SINGLE|          |  5501 |   118K|    33   (0)| 00:00:01 |   KEY |   KEY |  Q1,00 | PCWC |            |
|* 13 |              TABLE ACCESS FULL    | T1       |  5501 |   118K|    33   (0)| 00:00:01 |   KEY |   KEY |  Q1,00 | PCWP |            |
|  14 |            PX BLOCK ITERATOR      |          | 10278 |  2619K|    65   (0)| 00:00:01 |:BF0000|:BF0000|  Q1,00 | PCWC |            |
|  15 |             TABLE ACCESS FULL     | T1       | 10278 |  2619K|    65   (0)| 00:00:01 |:BF0000|:BF0000|  Q1,00 | PCWP |            |
-------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

  10 - access("XB"."DATECOL"=ADD_MONTHS(INTERNAL_FUNCTION("XA"."DATECOL"),3))
  13 - filter("XA"."DATECOL"='30-SEP-13')


As you can see above, just by flipping the add_months function Oracle now creates and uses a bloom (partition join) filter to scan only the required partition of T1 (XB). So from this behavior it looks like bloom filters cannot be used when the join column is wrapped in a function. It is very similar to having an index on column x: scanning the table with function(x) = y will not use the index on column x. Maybe in the future Oracle will add function-based bloom filters.

November 26, 2013

Compare IO outliers

Filed under: Performance Tuning, Tools, Troubleshooting — vishaldesai @ 9:47 pm

Oracle SQL Developer has a nice reporting capability to plot charts. To compare IO outliers using dba_hist_event_histogram between two periods, I have created a couple of reports for quick comparison.

When you click on a report, it will prompt you to select the database and to input the event, period one start and end snaps, and period two start and end snaps.

Compare IO wait count outliers by %
Compare IO wait count outliers by Value
Compare IO wait time outlier by %
IO latency for all snaps by Day
IO latency for last 24 hours

Download link for report definition file.

P.S. The current version of SQL Developer supports only 5 different colors for pie charts, so there is some overlap of colors in the first 3 charts.

October 7, 2013

SQL Developer Child Reports and Drill Down

Filed under: Tools, Troubleshooting — vishaldesai @ 10:36 pm

SQL Developer has some cool reporting capabilities. Using these features and some of my favorite scripts, I can now quickly troubleshoot Oracle Database related issues.

Example: for a given database session, I can run Tanel’s asqlmon, look at historical SQL performance using Kerry’s AWR scripts, and drill down into IO latency with just point and click.

 

September 18, 2013

Learning Bloom Filters

Filed under: Exadata, Performance Tuning — vishaldesai @ 9:35 pm

Thanks to Jonathan Lewis for his quiz on bloom filters. I learned more about bloom filters from the quiz and from trying to answer the questions below:

1) How many bloom filters are created?

2) How many bloom filters are used and where exactly they are used?

The scripts to create the test case are as follows:

drop table t1 purge;
drop table t2 purge;
drop table t3 purge;
drop table t4 purge;

create table t1 tablespace ism_ts
as
select
    rownum        id,
    to_char(rownum)    small_vc,
    rpad('x',100)    padding
from
    all_objects
where
    rownum <= 100
;

alter table t1 
    add constraint t1_pk primary key(id)
;

create table t2 tablespace ism_ts
as
select
    rownum        id,
    to_char(rownum)    small_vc,
    rpad('x',100)    padding
from
    all_objects
where
    rownum <= 100
;

alter table t2
    add constraint t2_pk primary key(id)
;

create table t3 tablespace ism_ts
as
select
    rownum        id,
    to_char(rownum)    small_vc,
    rpad('x',100)    padding
from
    all_objects
where
    rownum <= 100
;

alter table t3
    add constraint t3_pk primary key(id)
;

create table t4 tablespace ism_ts
nologging
as
select
    t1.id            id1,
    t2.id            id2,
    t3.id            id3,
    rpad(rownum,10)        small_vc,
    rpad('x',100)        padding
from
    t1, t2, t3
;


begin
    dbms_stats.gather_table_stats(
        user,
        't1',
        cascade => true,
        estimate_percent => null,
        method_opt => 'for all columns size 1'
    );
end;
/

begin
    dbms_stats.gather_table_stats(
        user,
        't2',
        cascade => true,
        estimate_percent => null,
        method_opt => 'for all columns size 1'
    );
end;
/

begin
    dbms_stats.gather_table_stats(
        user,
        't3',
        cascade => true,
        estimate_percent => null,
        method_opt => 'for all columns size 1'
    );
end;
/

begin
    dbms_stats.gather_table_stats(
        user,
        't4',
        cascade => true,
        estimate_percent => null,
        method_opt => 'for all columns size 1'
    );
end;
/

Case 1: Broadcast method

SQL, Output and plan are as follows:

select /*+
             monitor
             parallel(t1,4)
             parallel(t2,4)
             parallel(t3,4)
             parallel(t4,4)
             full(t1)
             full(t2)
             full(t3)
             full(t4)
     */
      count(t1.small_vc),
      count(t2.small_vc),
      count(t3.small_vc),
      count(t4.small_vc)
  from
      t3,
      t2,
      t1,
      t4
  where
      t1.id = t4.id1
  and t2.id = t4.id2
  and t3.id = t4.id3
  and t1.small_vc in (1,2,3)
  and t2.small_vc in (1,2,3)
  and t3.small_vc in (1,2,3)
;

COUNT(T1.SMALL_VC) COUNT(T2.SMALL_VC) COUNT(T3.SMALL_VC) COUNT(T4.SMALL_VC)
------------------ ------------------ ------------------ ------------------
                27                 27                 27                 27
                
---------------------------------------------------------------------------------------------------------------------
| Id  | Operation                   | Name     | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
---------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |          |       |       |   833 (100)|          |        |      |            |
|   1 |  SORT AGGREGATE             |          |     1 |    38 |            |          |        |      |            |
|   2 |   PX COORDINATOR            |          |       |       |            |          |        |      |            |
|   3 |    PX SEND QC (RANDOM)      | :TQ10003 |     1 |    38 |            |          |  Q1,03 | P->S | QC (RAND)  |
|   4 |     SORT AGGREGATE          |          |     1 |    38 |            |          |  Q1,03 | PCWP |            |
|*  5 |      HASH JOIN              |          |    26 |   988 |   833   (1)| 00:00:01 |  Q1,03 | PCWP |            |
|   6 |       PX RECEIVE            |          |     3 |    18 |     2   (0)| 00:00:01 |  Q1,03 | PCWP |            |
|   7 |        PX SEND BROADCAST    | :TQ10000 |     3 |    18 |     2   (0)| 00:00:01 |  Q1,00 | P->P | BROADCAST  |
|   8 |         PX BLOCK ITERATOR   |          |     3 |    18 |     2   (0)| 00:00:01 |  Q1,00 | PCWC |            |
|*  9 |          TABLE ACCESS FULL  | T1       |     3 |    18 |     2   (0)| 00:00:01 |  Q1,00 | PCWP |            |
|* 10 |       HASH JOIN             |          |   882 | 28224 |   830   (1)| 00:00:01 |  Q1,03 | PCWP |            |
|  11 |        PX RECEIVE           |          |     3 |    18 |     2   (0)| 00:00:01 |  Q1,03 | PCWP |            |
|  12 |         PX SEND BROADCAST   | :TQ10001 |     3 |    18 |     2   (0)| 00:00:01 |  Q1,01 | P->P | BROADCAST  |
|  13 |          PX BLOCK ITERATOR  |          |     3 |    18 |     2   (0)| 00:00:01 |  Q1,01 | PCWC |            |
|* 14 |           TABLE ACCESS FULL | T2       |     3 |    18 |     2   (0)| 00:00:01 |  Q1,01 | PCWP |            |
|* 15 |        HASH JOIN            |          | 29701 |   754K|   828   (1)| 00:00:01 |  Q1,03 | PCWP |            |
|  16 |         PX RECEIVE          |          |     3 |    18 |     2   (0)| 00:00:01 |  Q1,03 | PCWP |            |
|  17 |          PX SEND BROADCAST  | :TQ10002 |     3 |    18 |     2   (0)| 00:00:01 |  Q1,02 | P->P | BROADCAST  |
|  18 |           PX BLOCK ITERATOR |          |     3 |    18 |     2   (0)| 00:00:01 |  Q1,02 | PCWC |            |
|* 19 |            TABLE ACCESS FULL| T3       |     3 |    18 |     2   (0)| 00:00:01 |  Q1,02 | PCWP |            |
|  20 |         PX BLOCK ITERATOR   |          |  1000K|    19M|   824   (1)| 00:00:01 |  Q1,03 | PCWC |            |
|* 21 |          TABLE ACCESS FULL  | T4       |  1000K|    19M|   824   (1)| 00:00:01 |  Q1,03 | PCWP |            |
---------------------------------------------------------------------------------------------------------------------
  
Predicate Information (identified by operation id):
---------------------------------------------------

   5 - access("T1"."ID"="T4"."ID1")
   9 - access(:Z>=:Z AND :Z<=:Z)
       filter((TO_NUMBER("T1"."SMALL_VC")=:SYS_B_0 OR TO_NUMBER("T1"."SMALL_VC")=:SYS_B_1 OR
              TO_NUMBER("T1"."SMALL_VC")=:SYS_B_2))
  10 - access("T2"."ID"="T4"."ID2")
  14 - access(:Z>=:Z AND :Z<=:Z)
       filter((TO_NUMBER("T2"."SMALL_VC")=:SYS_B_3 OR TO_NUMBER("T2"."SMALL_VC")=:SYS_B_4 OR
              TO_NUMBER("T2"."SMALL_VC")=:SYS_B_5))
  15 - access("T3"."ID"="T4"."ID3")
  19 - access(:Z>=:Z AND :Z<=:Z)
       filter((TO_NUMBER("T3"."SMALL_VC")=:SYS_B_6 OR TO_NUMBER("T3"."SMALL_VC")=:SYS_B_7 OR
              TO_NUMBER("T3"."SMALL_VC")=:SYS_B_8))
  21 - access(:Z>=:Z AND :Z<=:Z)
       filter(SYS_OP_BLOOM_FILTER_LIST(SYS_OP_BLOOM_FILTER(:BF0000,"T4"."ID3"),SYS_OP_BLOOM_FILTER(:BF0000,"T
              4"."ID2"),SYS_OP_BLOOM_FILTER(:BF0000,"T4"."ID1")))

 

1) How many bloom filters are created?

The bloom filter (or filters) was applied at line 21. Based on a comment from Sayan on Jonathan’s blog, it looks like only one filter (:BF0000) was created after accessing table T1, and the same bloom filter was appended to after accessing tables T2 and T3, then applied at line 21. So the answer to this question is probably 1.

2) How many bloom filters are used and where exactly they are used?

If the answer to the above question is 1, then obviously Oracle used only one bloom filter, at line 21. Did it really convert all the joins into one bloom filter?

SQL> select distinct id from t1,t4 where t1.id = t4.id1 and t1.small_vc in (1,2,3);

        ID
----------
         1
         2
         3 <= output 1

SQL> select distinct id from t2,t4 where t2.id = t4.id2 and t2.small_vc in (1,2,3);

        ID
----------
         1
         2
         3 <= output 2

SQL> select distinct id from t3,t4 where t3.id = t4.id3 and t3.small_vc in (1,2,3);

        ID
----------
         1
         2
         3 <= output 3
         
SQL> select count(1) from t4 where id1 in (1,2,3) AND id2 in (1,2,3) AND id3 in (1,2,3);

  COUNT(1)
----------
        27 <= output 4

Bloom filters basically convert joins into filters, and that is why they are a big deal on Exadata, where they can be offloaded to the cell servers. As you can see above, the bloom filter that was created got applied at line 21, and as per real-time SQL monitoring the actual number of rows returned was 27, which matches output 4.

So one bloom filter (appended with multiple filters) is created, and it is used on a single plan line.

Case 2: HASH method

select /*+
          monitor
             parallel(t1,4)
             parallel(t2,4)
             parallel(t3,4)
             parallel(t4,4)
             full(t1)
             full(t2)
             full(t3)
             full(t4)
             PQ_DISTRIBUTE(@"SEL$1" "T4"@"SEL$1" HASH HASH)
             PQ_DISTRIBUTE(@"SEL$1" "T2"@"SEL$1" HASH HASH)
             PQ_DISTRIBUTE(@"SEL$1" "T1"@"SEL$1" HASH HASH)
     */
      count(t1.small_vc),
      count(t2.small_vc),
      count(t3.small_vc),
      count(t4.small_vc)
  from
      t3,
      t2,
      t1,
      t4
  where
      t1.id = t4.id1
  and t2.id = t4.id2
  and t3.id = t4.id3
  and t1.small_vc in (1,2,3)
  and t2.small_vc in (1,2,3)
  and t3.small_vc in (1,2,3)
;

----------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                          | Name     | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
----------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                   |          |       |       |   833 (100)|          |        |      |            |
|   1 |  SORT AGGREGATE                    |          |     1 |    38 |            |          |        |      |            |
|   2 |   PX COORDINATOR                   |          |       |       |            |          |        |      |            |
|   3 |    PX SEND QC (RANDOM)             | :TQ10006 |     1 |    38 |            |          |  Q1,06 | P->S | QC (RAND)  |
|   4 |     SORT AGGREGATE                 |          |     1 |    38 |            |          |  Q1,06 | PCWP |            |
|*  5 |      HASH JOIN                     |          |    26 |   988 |   833   (1)| 00:00:01 |  Q1,06 | PCWP |            |
|   6 |       JOIN FILTER CREATE           | :BF0000  |     3 |    18 |     2   (0)| 00:00:01 |  Q1,06 | PCWP |            |
|   7 |        PX RECEIVE                  |          |     3 |    18 |     2   (0)| 00:00:01 |  Q1,06 | PCWP |            |
|   8 |         PX SEND HASH               | :TQ10004 |     3 |    18 |     2   (0)| 00:00:01 |  Q1,04 | P->P | HASH       |
|   9 |          PX BLOCK ITERATOR         |          |     3 |    18 |     2   (0)| 00:00:01 |  Q1,04 | PCWC |            |
|* 10 |           TABLE ACCESS FULL        | T1       |     3 |    18 |     2   (0)| 00:00:01 |  Q1,04 | PCWP |            |
|  11 |       PX RECEIVE                   |          |   882 | 28224 |   830   (1)| 00:00:01 |  Q1,06 | PCWP |            |
|  12 |        PX SEND HASH                | :TQ10005 |   882 | 28224 |   830   (1)| 00:00:01 |  Q1,05 | P->P | HASH       |
|  13 |         JOIN FILTER USE            | :BF0000  |   882 | 28224 |   830   (1)| 00:00:01 |  Q1,05 | PCWP |            |
|* 14 |          HASH JOIN BUFFERED        |          |   882 | 28224 |   830   (1)| 00:00:01 |  Q1,05 | PCWP |            |
|  15 |           JOIN FILTER CREATE       | :BF0001  |     3 |    18 |     2   (0)| 00:00:01 |  Q1,05 | PCWP |            |
|  16 |            PX RECEIVE              |          |     3 |    18 |     2   (0)| 00:00:01 |  Q1,05 | PCWP |            |
|  17 |             PX SEND HASH           | :TQ10002 |     3 |    18 |     2   (0)| 00:00:01 |  Q1,02 | P->P | HASH       |
|  18 |              PX BLOCK ITERATOR     |          |     3 |    18 |     2   (0)| 00:00:01 |  Q1,02 | PCWC |            |
|* 19 |               TABLE ACCESS FULL    | T2       |     3 |    18 |     2   (0)| 00:00:01 |  Q1,02 | PCWP |            |
|  20 |           PX RECEIVE               |          | 29701 |   754K|   828   (1)| 00:00:01 |  Q1,05 | PCWP |            |
|  21 |            PX SEND HASH            | :TQ10003 | 29701 |   754K|   828   (1)| 00:00:01 |  Q1,03 | P->P | HASH       |
|  22 |             JOIN FILTER USE        | :BF0001  | 29701 |   754K|   828   (1)| 00:00:01 |  Q1,03 | PCWP |            |
|* 23 |              HASH JOIN BUFFERED    |          | 29701 |   754K|   828   (1)| 00:00:01 |  Q1,03 | PCWP |            |
|  24 |               JOIN FILTER CREATE   | :BF0002  |     3 |    18 |     2   (0)| 00:00:01 |  Q1,03 | PCWP |            |
|  25 |                PX RECEIVE          |          |     3 |    18 |     2   (0)| 00:00:01 |  Q1,03 | PCWP |            |
|  26 |                 PX SEND HASH       | :TQ10000 |     3 |    18 |     2   (0)| 00:00:01 |  Q1,00 | P->P | HASH       |
|  27 |                  PX BLOCK ITERATOR |          |     3 |    18 |     2   (0)| 00:00:01 |  Q1,00 | PCWC |            |
|* 28 |                   TABLE ACCESS FULL| T3       |     3 |    18 |     2   (0)| 00:00:01 |  Q1,00 | PCWP |            |
|  29 |               PX RECEIVE           |          |  1000K|    19M|   824   (1)| 00:00:01 |  Q1,03 | PCWP |            |
|  30 |                PX SEND HASH        | :TQ10001 |  1000K|    19M|   824   (1)| 00:00:01 |  Q1,01 | P->P | HASH       |
|  31 |                 JOIN FILTER USE    | :BF0002  |  1000K|    19M|   824   (1)| 00:00:01 |  Q1,01 | PCWP |            |
|  32 |                  PX BLOCK ITERATOR |          |  1000K|    19M|   824   (1)| 00:00:01 |  Q1,01 | PCWC |            |
|* 33 |                   TABLE ACCESS FULL| T4       |  1000K|    19M|   824   (1)| 00:00:01 |  Q1,01 | PCWP |            |
----------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   5 - access("T1"."ID"="T4"."ID1")
  10 - access(:Z>=:Z AND :Z<=:Z)
       filter((TO_NUMBER("T1"."SMALL_VC")=:SYS_B_0 OR TO_NUMBER("T1"."SMALL_VC")=:SYS_B_1 OR
              TO_NUMBER("T1"."SMALL_VC")=:SYS_B_2))
  14 - access("T2"."ID"="T4"."ID2")
  19 - access(:Z>=:Z AND :Z<=:Z)
       filter((TO_NUMBER("T2"."SMALL_VC")=:SYS_B_3 OR TO_NUMBER("T2"."SMALL_VC")=:SYS_B_4 OR
              TO_NUMBER("T2"."SMALL_VC")=:SYS_B_5))
  23 - access("T3"."ID"="T4"."ID3")
  28 - access(:Z>=:Z AND :Z<=:Z)
       filter((TO_NUMBER("T3"."SMALL_VC")=:SYS_B_6 OR TO_NUMBER("T3"."SMALL_VC")=:SYS_B_7 OR
              TO_NUMBER("T3"."SMALL_VC")=:SYS_B_8))
  33 - access(:Z>=:Z AND :Z<=:Z)
       filter(SYS_OP_BLOOM_FILTER(:BF0000,"T4"."ID3"))


1) How many bloom filters are created?

If we look at the SQL predicate information, it shows only one bloom filter being used (:BF0000), at line 33. According to the plan, however, three bloom filters are created (:BF0000, :BF0001, :BF0002). So the answer is that 3 bloom filters are created.

2) How many bloom filters are used and where exactly they are used?

SQL> select count(1) from t3,t4 where t3.id = t4.id3 and t3.small_vc in (1,2,3);

  COUNT(1)
----------
     30000 <= output 5

Based on real-time SQL monitoring, the actual number of rows returned from line 33 is 30,000, which matches output 5. :BF0000 was created at line 6 (for T1, t1.id = t4.id1), yet the filter predicate shows :BF0000 being applied to T4.ID3, while line 31 says :BF0002 was used there, which is confusing.

Going back to the 3 bloom filters, where are :BF0000 and :BF0001 used?

I think they are used exactly where they are shown in the plan. Notice that the number of rows returned drops from 30k to 906 and from 900 to 33.

[image]

With "_bloom_filter_enabled" set to FALSE plan looks like following:

[image]

Memory consumption for the HASH JOIN BUFFERED operations also went up from 1 MB to 3 MB.

So three bloom filters (each carrying a single filter) are created, and all three are used on different plan lines. One filter was used while accessing T4, and the other two were applied to the result sets of the hash joins.
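
While such a parallel query is running, bloom filter effectiveness can also be observed in v$sql_join_filter (a sketch; the view is only populated while the join filters are active):

-- Sketch: rows probed vs. rows filtered per bloom filter during execution.
select qc_session_id, bloom#, active, probed, filtered
from   v$sql_join_filter
order  by bloom#;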

September 3, 2013

Oracle IO latency monitoring

Filed under: Performance Tuning, Tools — vishaldesai @ 6:13 pm

I have converted Kyle Hailey’s oramon to SQL format so that it can be run from a client SQL*Plus window.

Sample Screenshot:

[image]

Usage:

oraiomon.sql <interval> <samples>
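
For example, to take six samples at a 10-second interval (a hypothetical invocation following the usage above, assuming the interval is in seconds):

SQL> @oraiomon.sql 10 6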

Download

July 5, 2013

v$event_histogram – buckets of time in Nth/Snap interval

Filed under: Performance Tuning, Tools, Troubleshooting — vishaldesai @ 6:22 pm

v$event_histogram stores cumulative data since instance startup. To monitor events in real time and see which latency buckets they fall into, I have written a script called wait_histogram_wc_pct.sql. The script displays the percentage of wait counts that falls into each latency bucket.

Example: as shown in the last line below, 25% of the direct path read events waited between 32 ms and 64 ms (307 out of 1,228 events).

SQL> @wait_histogram_wc_pct.sql 'direct path read' 10 10
---------------------------------------------------------------Percent Wait Count--------------------------------------------------------------------------|
Event               |Time                |<1    |<2    |<4    |<8    |<16   |<32   |<64   |<128  |<256  |<512  |<1024 |<2048 |<4096 |<8192 |Tot Wait Count |
-----------------------------------------------------------------------------------------------------------------------------------------------------------|
direct path read    |07/05/13 11:49:51   |     0|     0|     0|     0|     0|     0|     0|     0|     0|     0|     0|     0|     0|     0|              1|
direct path read    |07/05/13 11:50:01   |     0|     0|     0|     0|     0|     0|     0|     0|     0|     0|     0|     0|     0|     0|              1|
direct path read    |07/05/13 11:50:11   |     0|     0|     0|     0|     0|     0|     0|     0|     0|     0|     0|     0|     0|     0|              1|
direct path read    |07/05/13 11:50:21   |     0|     0|     0|     0|     0|     0|     0|     0|     0|     0|     0|     0|     0|     0|              1|
direct path read    |07/05/13 11:50:31   |     1|     2|     3|     7|    17|    38|    21|     7|     3|     0|     0|     0|     0|     0|           1212|
direct path read    |07/05/13 11:50:41   |     2|     2|     6|    11|    20|    33|    20|     5|     1|     0|     0|     0|     0|     0|           2696|
direct path read    |07/05/13 11:50:51   |     4|     3|     6|    11|    20|    29|    19|     7|     1|     0|     0|     0|     0|     0|           2492|
direct path read    |07/05/13 11:51:01   |     4|     3|     6|    11|    18|    24|    25|     9|     1|     0|     0|     0|     0|     0|           2164|
direct path read    |07/05/13 11:51:12   |     2|     2|     4|     9|    16|    23|    25|    16|     3|     0|     0|     0|     0|     0|           1588|
direct path read    |07/05/13 11:51:22   |     1|     2|     3|     7|    13|    19|    27|    21|     6|     0|     0|     0|     0|     0|           1228|

Total wait counts may not be 100% accurate if v$event_histogram gets updated between the first and last samples.

To monitor historical events from AWR in a similar fashion, you can use awr_wait_hist_wc_pct.sql:

SQL> @awr_wait_hist_wc_pct.sql
Please enter start_date(mm/dd/yy)    :07/05/13
Please enter end_date  (mm/dd/yy)    :07/05/13


     14757 05-JUL-13 09.00.57.689 AM
     14758 05-JUL-13 09.10.58.136 AM
     14759 05-JUL-13 09.20.58.580 AM
     14760 05-JUL-13 09.30.59.208 AM
     14761 05-JUL-13 09.40.59.867 AM
     14762 05-JUL-13 09.50.00.963 AM
     14763 05-JUL-13 10.00.02.200 AM
     14764 05-JUL-13 10.10.03.637 AM
     14765 05-JUL-13 10.20.04.821 AM
     14766 05-JUL-13 10.30.05.240 AM
     14767 05-JUL-13 10.40.05.723 AM
     14768 05-JUL-13 10.50.06.094 AM
     14769 05-JUL-13 11.00.06.516 AM
     14770 05-JUL-13 11.10.07.016 AM
     14771 05-JUL-13 11.20.07.402 AM
     14772 05-JUL-13 11.30.07.838 AM
     14773 05-JUL-13 11.40.08.306 AM
Enter value for start snap_id   :14757
Enter value for end snap_id     :14773
Enter wait event                :direct path read
Enter instance number for RAC   :1
---------------------------------------------------------------Percent Wait Count--------------------------------------------------------------------------|
Event               |Time                |<1    |<2    |<4    |<8    |<16   |<32   |<64   |<128  |<256  |<512  |<1024 |<2048 |<4096 |<8192 |Tot Wait Count |
-----------------------------------------------------------------------------------------------------------------------------------------------------------|
direct path read    |07/05/13 09:00      |     0|     0|    33|    33|    33|     0|     0|     0|     0|     0|     0|     0|     0|     0|              3|
direct path read    |07/05/13 09:10      |    50|     0|    50|     0|     0|     0|     0|     0|     0|     0|     0|     0|     0|     0|              2|
direct path read    |07/05/13 09:20      |     2|     3|     6|    11|    20|    36|    15|     5|     1|     0|     0|     0|     0|     0|           6002|
direct path read    |07/05/13 09:30      |     3|     3|     6|    10|    19|    33|    18|     6|     1|     0|     0|     0|     0|     0|         158867|
direct path read    |07/05/13 09:40      |     3|     3|     6|    11|    20|    30|    20|     7|     1|     0|     0|     0|     0|     0|          82122|
direct path read    |07/05/13 09:50      |     2|     3|     5|     9|    15|    22|    25|    15|     5|     0|     0|     0|     0|     0|          27698|
direct path read    |07/05/13 10:00      |     2|     2|     5|     9|    15|    24|    24|    15|     4|     0|     0|     0|     0|     0|          59854|
direct path read    |07/05/13 10:10      |     3|     3|     5|    10|    16|    24|    25|    14|     2|     0|     0|     0|     0|     0|          51421|
direct path read    |07/05/13 10:20      |     0|     0|     0|     0|     0|     0|     0|     0|     0|     0|     0|     0|     0|     0|              0|
direct path read    |07/05/13 10:30      |    20|    20|     0|    40|    20|     0|     0|     0|     0|     0|     0|     0|     0|     0|              5|
direct path read    |07/05/13 10:40      |     0|     0|     0|     0|     0|     0|     0|     0|     0|     0|     0|     0|     0|     0|              0|
direct path read    |07/05/13 10:50      |    60|     0|     0|     0|     0|    40|     0|     0|     0|     0|     0|     0|     0|     0|              5|
direct path read    |07/05/13 11:00      |    33|    33|     0|    33|     0|     0|     0|     0|     0|     0|     0|     0|     0|     0|              3|
direct path read    |07/05/13 11:10      |     0|     0|     0|     0|     0|     0|     0|     0|     0|     0|     0|     0|     0|     0|              0|
direct path read    |07/05/13 11:20      |    50|     0|     0|    50|     0|     0|     0|     0|     0|     0|     0|     0|     0|     0|              2|
direct path read    |07/05/13 11:30      |    50|     0|     0|    50|     0|     0|     0|     0|     0|     0|     0|     0|     0|     0|              2|
direct path read    |07/05/13 11:40      |   100|     0|     0|     0|     0|     0|     0|     0|     0|     0|     0|     0|     0|     0|              1|
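
Under the hood, the AWR variant only needs per-snapshot deltas from dba_hist_event_histogram (the percentage calculation is then the same). A minimal sketch of that delta step, using standard DBA_HIST views rather than the exact script:

-- Sketch: per-snapshot wait_count delta per latency bucket for one event.
select s.begin_interval_time,
       h.wait_time_milli,
       h.wait_count
         - lag(h.wait_count, 1, 0) over (partition by h.wait_time_milli
                                         order by h.snap_id) delta_wait_count
from   dba_hist_event_histogram h,
       dba_hist_snapshot        s
where  h.snap_id         = s.snap_id
and    h.dbid            = s.dbid
and    h.instance_number = s.instance_number
and    h.instance_number = 1
and    h.event_name      = 'direct path read'
order  by s.begin_interval_time, h.wait_time_milli;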

If you make any enhancements, find any bugs with the scripts, or have a better way to monitor this, please let me know.

Download Scripts

July 3, 2013

v$event_histogram

Filed under: Performance Tuning, Troubleshooting — vishaldesai @ 8:48 pm

Luca has written an awesome script to visualize Oracle latency using heat maps. Many times management appreciates numbers and percentages, so my script attempts to show those numbers since instance startup.
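
The core of such a report is little more than a ratio_to_report over the cumulative counts; a minimal sketch (not the exact downloadable script):

-- Sketch: percentage of wait counts per latency bucket, cumulative since startup.
select event,
       wait_time_milli,
       wait_count,
       round(100 * ratio_to_report(wait_count) over (partition by event), 1) pct_wait_count
from   v$event_histogram
where  event = 'direct path read'
order  by wait_time_milli;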

SQL> @wait_histogram direct%path%read

                                                                                % Wait Roll % Wait   % Wait Roll % Wait
EVENT                               WAIT_TIME_MILLI_RANGE          WAIT_COUNT    Count       Count     Time        Time LAST_UPDATE_TIME
----------------------------------- ------------------------------ ---------- -------- ----------- -------- ----------- ---------------------------------------------------
direct path read                    4096-8192                               1        0           0     0              0 02-JUL-13 03.15.05.925862 PM -05:00
direct path read                    2048-4096                              25        0           0     0              0 03-JUL-13 09.05.04.647574 AM -05:00
direct path read                    1024-2048                              54        0           0     0              0 03-JUL-13 12.40.05.158016 PM -05:00
direct path read                    512-1024                              798        0           0     1              1 03-JUL-13 01.27.50.565441 PM -05:00
direct path read                    256-512                              6775        0           0     5              6 03-JUL-13 02.35.25.224134 PM -05:00
direct path read                    128-256                             53429        3           3    18             24 03-JUL-13 02.35.30.723528 PM -05:00
direct path read                    64-128                             158033        9          12    26             50 03-JUL-13 02.35.31.271218 PM -05:00
direct path read                    32-64                              295652       17          29    25             75 03-JUL-13 02.35.31.168757 PM -05:00
direct path read                    16-32                              371819       21          50    15             90 03-JUL-13 03.17.51.706468 PM -05:00
direct path read                    8-16                               310634       18          68     6             96 03-JUL-13 03.17.36.320652 PM -05:00
direct path read                    4-8                                228440       13          81     2             98 03-JUL-13 03.17.36.327927 PM -05:00
direct path read                    2-4                                153154        9          90     1             99 03-JUL-13 03.17.06.015746 PM -05:00
direct path read                    1-2                                 89236        5          95     0             99 03-JUL-13 03.17.51.710201 PM -05:00
direct path read                    0-1                                 93204        5         100     0             99 03-JUL-13 03.35.50.715609 PM -05:00

Avg wait time ms
----------------
      32.7397445

Download
