Using 10053 Trace Events and Getting the Outline
When it comes to performance tuning, we can spend time on one or both ends of the problem. On the "before there is a problem" end, anyone who writes SQL has the opportunity to write good or efficient SQL. The end result is a statement which is optimal, or nearly so, and for the most part, we've done our due diligence in terms of avoiding introducing a problem child SQL statement into the fray.
On the other end is where we get to practice our "Oracle CSI" skills and choose a tool that helps us answer the "why is this statement so bad?" question. Common to both ends is the Cost Based Optimizer (CBO). The execution plan generated by the CBO is used to confirm several aspects of our statement of interest. Access methods, or how Oracle intends to go about getting rows, are one key element within a plan. For example, if you were expecting an index to be used but the access method against that table reflects a full table scan, you know that you have some work to do regarding the index and perhaps some initialization parameters.
Another key area has to do with join methods. Aside from hints, we don't generally change what Oracle (the CBO) does. There's nothing we're coding that has much to do with which join method (the typical methods being nested loops, hash, and sort merge) will be used. What influences the join method is the size of the row sets, and the CBO gets that from statistics. Looked at another way, Oracle naturally considers indexes once we put them in place. Oracle internally comes up with join methods, and what we code only indirectly influences how two result sets are joined.
So, behind the scenes, what is the CBO doing when it comes up with an execution plan? This is where the 10053 trace event comes into play. Other tools or settings show us WHAT the CBO comes up with; the 10053 event tells us HOW the CBO came to its decision (the final execution plan).
For a relatively simple query, you might be amazed at all the work Oracle performs via the CBO. Let's run a 10053 trace event and examine the contents of the trace file.
There are a few ways to start this trace event. A simple ALTER SESSION command turns it on (and off).
ALTER SESSION SET EVENTS '10053 trace name context forever, level 1';
...your statement here...
ALTER SESSION SET EVENTS '10053 trace name context off';
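To make the resulting trace file easier to spot in the trace directory, you can optionally tag its name before turning the event on; the tag value below is arbitrary:
-- optional: the tag appears in the generated trace file name
ALTER SESSION SET tracefile_identifier = 'my_10053';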
Another method involves ORADEBUG, which has the advantage of outputting the name of the trace file, but has the disadvantage of requiring a connection as SYS. The ALTER SESSION option is more flexible, as pretty much any user can do it.
conn / as sysdba
oradebug setmypid
oradebug unlimit
oradebug event 10053 trace name context forever, level 1
...your statement here...
oradebug event 10053 trace name context off
oradebug tracefile_name
Yet another method uses DBMS_SYSTEM and its SET_EV subprogram. This built-in doesn't have the best documentation in the world (or public support, for that matter).
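For completeness, a hedged sketch of what the SET_EV call might look like for this event; the SID and serial# values below are placeholders for the target session:
-- placeholder SID/serial#; event 10053 at level 1 for another session
EXEC DBMS_SYSTEM.set_ev(si => 123, se => 1234, ev => 10053, le => 1, nm => ' ');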
You have a choice of two levels with the 10053 trace event; level 1 is more comprehensive than level 2. What is collected in the trace file includes:
1. Parameters used by the optimizer (level 1 only)
2. Index statistics (level 1 only)
3. Column statistics
4. Single access paths
5. Join costs
6. Table joins considered
7. Join methods considered (NL/MS/HA)
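If the slimmer trace is all you need, only the level number changes; for example:
ALTER SESSION SET EVENTS '10053 trace name context forever, level 2';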
Unlike "normal" tracing using
SQL_TRACE, the trace file generated here is already
formatted. Having the PLAN_TABLE in your schema
(assuming this is done via ALTER SESSION by someone
other than SYS) is needed as that is where, as always,
the execution plan output is stored and 10053 tracing
includes the output of the execution plan for your
statement.
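If your schema does not already have a PLAN_TABLE (recent releases ship a global temporary one owned by SYS with a public synonym, so this step is often unnecessary), the standard script creates a private copy; a sketch from SQL*Plus:
-- "?" expands to ORACLE_HOME in SQL*Plus
@?/rdbms/admin/utlxplan.sql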
Let's take a fairly simple statement (seen in various places throughout Oracle's documentation) and examine its 10053 output.
SELECT ch.channel_class,
c.cust_city,
t.calendar_quarter_desc,
SUM(s.amount_sold) sales_amount
FROM sh.sales s,
sh.times t,
sh.customers c,
sh.channels ch
WHERE s.time_id = t.time_id
AND s.cust_id = c.cust_id
AND s.channel_id = ch.channel_id
AND c.cust_state_province = 'CA'
AND ch.channel_desc in ('Internet','Catalog')
AND t.calendar_quarter_desc IN ('1999-01','1999-02')
GROUP BY ch.channel_class, c.cust_city, t.calendar_quarter_desc
ORDER by 1,2,3,4;
Using Oracle 11g, the output file can be found in the trace folder under this path (ORACLE_BASE for me is c:\apporacle):
$ORACLE_BASE\diag\rdbms\ora11g\ora11g\trace
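Rather than assembling the path by hand, an 11g session can also ask the database where its trace file will be written; a small sketch using V$DIAG_INFO:
-- 'Diag Trace' is the trace directory, 'Default Trace File' is this session's file
SELECT name, value
  FROM v$diag_info
 WHERE name IN ('Diag Trace', 'Default Trace File');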
This location corresponds to your USER_DUMP_DEST parameter (under 11g's ADR it is the same trace directory that BACKGROUND_DUMP_DEST reports). The trace file for the above statement has nearly 6,000 lines in it, and much of the output consists of similar blocks. You can clearly see how the optimizer tries different join orders, which reinforces the point that the order in which you list tables in the FROM clause is not necessarily the join order seen in the execution plan.
For this example, the CBO tries 21 permutations. Note that these are permutations (where order is important), not combinations (where order is not).
Join order[1]: CHANNELS[CH]#0 TIMES[T]#1 CUSTOMERS[C]#2 SALES[S]#3
***********************
Best so far: Table#: 0 cost: 3.0015 card: 2.0000 bytes: 42
Table#: 1 cost: 37.2306 card: 362.0000 bytes: 13394
Table#: 2 cost: 146316.6073 card: 1233849.4565 bytes: 77732487
Table#: 3 cost: 152643.9004 card: 27500.9348 bytes: 2310084
***********************
Join order[2]: CHANNELS[CH]#0 TIMES[T]#1 SALES[S]#3 CUSTOMERS[C]#2
***********************
Best so far: Table#: 0 cost: 3.0015 card: 2.0000 bytes: 42
Table#: 1 cost: 37.2306 card: 362.0000 bytes: 13394
Table#: 3 cost: 536.1840 card: 56955.6791 bytes: 3303448
Table#: 2 cost: 946.4226 card: 27500.9348 bytes: 2310084
***********************
Join order[3]: CHANNELS[CH]#0 CUSTOMERS[C]#2 TIMES[T]#1 SALES[S]#3
Join order aborted: cost > best plan cost (--shown here once)
***********************
Join order[4]: CHANNELS[CH]#0 CUSTOMERS[C]#2 SALES[S]#3 TIMES[T]#1
Join order[5]: CHANNELS[CH]#0 SALES[S]#3 TIMES[T]#1 CUSTOMERS[C]#2
***********************
Best so far: Table#: 0 cost: 3.0015 card: 2.0000 bytes: 42
Table#: 3 cost: 501.9521 card: 459421.5000 bytes: 19295724
Table#: 1 cost: 522.8420 card: 56955.6791 bytes: 3303448
Table#: 2 cost: 933.0806 card: 27500.9348 bytes: 2310084
***********************
Join order[6]: CHANNELS[CH]#0 SALES[S]#3 CUSTOMERS[C]#2 TIMES[T]#1
Join order[7]: TIMES[T]#1 CHANNELS[CH]#0 CUSTOMERS[C]#2 SALES[S]#3
Join order[8]: TIMES[T]#1 CHANNELS[CH]#0 SALES[S]#3 CUSTOMERS[C]#2
Join order[9]: TIMES[T]#1 CUSTOMERS[C]#2 CHANNELS[CH]#0 SALES[S]#3
Join order[10]: TIMES[T]#1 SALES[S]#3 CHANNELS[CH]#0 CUSTOMERS[C]#2
***********************
Best so far: Table#: 1 cost: 18.1146 card: 181.0000 bytes: 2896
Table#: 3 cost: 517.0666 card: 113911.3582 bytes: 4214707
Table#: 0 cost: 521.1319 card: 56955.6791 bytes: 3303448
Table#: 2 cost: 931.3705 card: 27500.9348 bytes: 2310084
***********************
Join order[11]: TIMES[T]#1 SALES[S]#3 CUSTOMERS[C]#2 CHANNELS[CH]#0
***********************
Best so far: Table#: 1 cost: 18.1146 card: 181.0000 bytes: 2896
Table#: 3 cost: 517.0666 card: 113911.3582 bytes: 4214707
Table#: 2 cost: 923.7783 card: 55001.8696 bytes: 3465126
Table#: 0 cost: 931.3608 card: 27500.9348 bytes: 2310084
***********************
Join order[12]: CUSTOMERS[C]#2 CHANNELS[CH]#0 TIMES[T]#1 SALES[S]#3
Join order[13]: CUSTOMERS[C]#2 TIMES[T]#1 CHANNELS[CH]#0 SALES[S]#3
Join order[14]: CUSTOMERS[C]#2 SALES[S]#3 CHANNELS[CH]#0 TIMES[T]#1
Join order[15]: CUSTOMERS[C]#2 SALES[S]#3 TIMES[T]#1 CHANNELS[CH]#0
Join order[16]: SALES[S]#3 CHANNELS[CH]#0 TIMES[T]#1 CUSTOMERS[C]#2
Join order[17]: SALES[S]#3 CHANNELS[CH]#0 CUSTOMERS[C]#2 TIMES[T]#1
Join order[18]: SALES[S]#3 TIMES[T]#1 CHANNELS[CH]#0 CUSTOMERS[C]#2
Join order[19]: SALES[S]#3 TIMES[T]#1 CUSTOMERS[C]#2 CHANNELS[CH]#0
Join order[20]: SALES[S]#3 CUSTOMERS[C]#2 CHANNELS[CH]#0 TIMES[T]#1
Join order[21]: SALES[S]#3 CUSTOMERS[C]#2 TIMES[T]#1 CHANNELS[CH]#0
The best (lowest cost) permutation is join order #11, which starts with TIMES, joins it to SALES, then to CUSTOMERS, and finally to CHANNELS. What does the execution plan look like?
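One way to produce the plan is to explain the statement and format it with DBMS_XPLAN; a sketch (the 'BASIC +ROWS' format string is my choice here, picked to roughly match the columns shown below):
EXPLAIN PLAN FOR
SELECT ch.channel_class,
       c.cust_city,
       t.calendar_quarter_desc,
       SUM(s.amount_sold) sales_amount
  FROM sh.sales s, sh.times t, sh.customers c, sh.channels ch
 WHERE s.time_id = t.time_id
   AND s.cust_id = c.cust_id
   AND s.channel_id = ch.channel_id
   AND c.cust_state_province = 'CA'
   AND ch.channel_desc IN ('Internet', 'Catalog')
   AND t.calendar_quarter_desc IN ('1999-01', '1999-02')
 GROUP BY ch.channel_class, c.cust_city, t.calendar_quarter_desc
 ORDER BY 1, 2, 3, 4;

-- display the plan for the statement just explained
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY(format => 'BASIC +ROWS'));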
PLAN_TABLE_OUTPUT
----------------------------------------------------------------
| Id  | Operation                          | Name      | Rows  |
----------------------------------------------------------------
|   0 | SELECT STATEMENT                   |           |  1237 |
|   1 |  SORT ORDER BY                     |           |  1237 |
|   2 |   HASH GROUP BY                    |           |  1237 |
|*  3 |    HASH JOIN                       |           | 27501 |
|*  4 |     TABLE ACCESS FULL              | CHANNELS  |     2 |
|*  5 |     HASH JOIN                      |           | 55002 |
|*  6 |      TABLE ACCESS FULL             | CUSTOMERS |  3408 |
|*  7 |      HASH JOIN                     |           |  113K |
|   8 |       PART JOIN FILTER CREATE      | :BF0000   |   181 |
|*  9 |        TABLE ACCESS FULL           | TIMES     |   181 |
|  10 |       PARTITION RANGE JOIN-FILTER  |           |  918K |
|  11 |        TABLE ACCESS FULL           | SALES     |  918K |
----------------------------------------------------------------
Reading the lines (the RSOs, or row source operations), we see exactly that progression: TIMES, SALES, CUSTOMERS, CHANNELS.
Another line in the 10053 trace file shows that 21 permutations were tried and that up to 2,000 would have been considered. Since four tables yield 24 permutations (4! = 4*3*2*1), why were three skipped? In my example, table permutations 1-2-3-0, 2-0-3-1, and 2-1-3-0 were not considered, and there is nothing in the trace file to show why they were omitted (there is no obvious message about, say, 2-0-3-1 being pruned because anything starting with 2-0 is guaranteed to cost more than what 2-0-1-3 already cost).
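The 2,000-permutation ceiling is governed by the hidden parameter commonly reported as _optimizer_max_permutations. If you are curious and connected as SYS, a query along these lines (a sketch against the X$ fixed tables) shows its current value:
-- join the parameter-name and current-value fixed tables
SELECT n.ksppinm  AS parameter_name,
       v.ksppstvl AS parameter_value
  FROM x$ksppi n, x$ksppcv v
 WHERE n.indx = v.indx
   AND n.ksppinm = '_optimizer_max_permutations';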
The trace file includes an extensive listing of all default, hidden, and unhidden parameters in place, plus a long list of bug fixes (enabled or not). Techniques such as join elimination, query transformations, common subexpression elimination, self-join conversion, predicate move-around, and a slew of other approaches are detailed in the file.
A side benefit of running a trace is the output showing all of the statistics for a table and its associated indexes. The "BASE STATISTICAL INFORMATION" section (its title in the file) for the CUSTOMERS table shows the following:
Table Stats::
Table: CUSTOMERS Alias: C
#Rows: 55500 #Blks: 1486 AvgRowLen: 181.00
Index Stats::
Index: CUSTOMERS_GENDER_BIX Col#: 4
LVLS: 1 #LB: 3 #DK: 2 LB/K: 1.00 DB/K: 2.00 CLUF: 5.00
Index: CUSTOMERS_MARITAL_BIX Col#: 6
LVLS: 1 #LB: 5 #DK: 11 LB/K: 1.00 DB/K: 1.00 CLUF: 18.00
Index: CUSTOMERS_PK Col#: 1
LVLS: 1 #LB: 115 #DK: 55500 LB/K: 1.00 DB/K: 1.00 CLUF: 54405.00
Index: CUSTOMERS_YOB_BIX Col#: 5
LVLS: 1 #LB: 19 #DK: 75 LB/K: 1.00 DB/K: 1.00 CLUF: 75.00
The output here can be correlated against DBA_TABLES and DBA_INDEXES if need be (DB/K, for example, corresponds to average data blocks per key).
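A quick dictionary cross-check of the index statistics above might look like this (a sketch; run as a user with access to the DBA views):
-- the aliases mirror the abbreviations used in the 10053 trace
SELECT index_name,
       blevel                  AS lvls,
       leaf_blocks             AS lb,
       distinct_keys           AS dk,
       avg_leaf_blocks_per_key AS lb_per_k,
       avg_data_blocks_per_key AS db_per_k,
       clustering_factor       AS cluf
  FROM dba_indexes
 WHERE table_owner = 'SH'
   AND table_name  = 'CUSTOMERS'
 ORDER BY index_name;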
Yet another benefit of running this trace event is that you can see what Oracle comes up with for a stored outline.
Outline Data:
/*+
BEGIN_OUTLINE_DATA
IGNORE_OPTIM_EMBEDDED_HINTS
OPTIMIZER_FEATURES_ENABLE('11.2.0.1')
DB_VERSION('11.2.0.1')
ALL_ROWS
OUTLINE_LEAF(@"SEL$1")
FULL(@"SEL$1" "T"@"SEL$1")
FULL(@"SEL$1" "S"@"SEL$1")
FULL(@"SEL$1" "C"@"SEL$1")
FULL(@"SEL$1" "CH"@"SEL$1")
LEADING(@"SEL$1" "T"@"SEL$1" "S"@"SEL$1" "C"@"SEL$1" "CH"@"SEL$1")
USE_HASH(@"SEL$1" "S"@"SEL$1")
USE_HASH(@"SEL$1" "C"@"SEL$1")
USE_HASH(@"SEL$1" "CH"@"SEL$1")
PX_JOIN_FILTER(@"SEL$1" "S"@"SEL$1")
SWAP_JOIN_INPUTS(@"SEL$1" "C"@"SEL$1")
SWAP_JOIN_INPUTS(@"SEL$1" "CH"@"SEL$1")
USE_HASH_AGGREGATION(@"SEL$1")
END_OUTLINE_DATA
*/
In a SQL*Plus session, create a stored outline for the statement and query USER_OUTLINE_HINTS. You should expect to see the same set of hints.
CREATE OUTLINE star_query
FOR CATEGORY test_outlines ON
-- the full four-table SELECT statement shown earlier goes here
SELECT ... ;
SELECT join_pos, hint
FROM user_outline_hints
WHERE name = 'STAR_QUERY';
JOIN_POS HINT
---------- ------------------------------------------------------------
0 USE_HASH_AGGREGATION(@"SEL$1")
0 SWAP_JOIN_INPUTS(@"SEL$1" "CH"@"SEL$1")
0 SWAP_JOIN_INPUTS(@"SEL$1" "C"@"SEL$1")
0 PX_JOIN_FILTER(@"SEL$1" "S"@"SEL$1")
0 USE_HASH(@"SEL$1" "CH"@"SEL$1")
0 USE_HASH(@"SEL$1" "C"@"SEL$1")
0 USE_HASH(@"SEL$1" "S"@"SEL$1")
0 LEADING(@"SEL$1" "T"@"SEL$1" "S"@"SEL$1" "C"@"SEL$1" "CH"@"SEL$1")
4 FULL(@"SEL$1" "CH"@"SEL$1")
3 FULL(@"SEL$1" "C"@"SEL$1")
2 FULL(@"SEL$1" "S"@"SEL$1")
1 FULL(@"SEL$1" "T"@"SEL$1")
0 OUTLINE_LEAF(@"SEL$1")
0 ALL_ROWS
0 DB_VERSION('11.2.0.1')
0 OPTIMIZER_FEATURES_ENABLE('11.2.0.1')
0 IGNORE_OPTIM_EMBEDDED_HINTS
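When you are done experimenting, you can confirm the outline was created and then clean it up; a small sketch:
-- confirm the outline exists, then remove it when finished
SELECT name, category, used
  FROM user_outlines
 WHERE name = 'STAR_QUERY';

DROP OUTLINE star_query;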
In Closing
Exploring the dump file from the 10053 trace event will give you a better understanding of what the optimizer is doing while it comes up with the best plan it can (based on the number of permutations it evaluates). The plan you see is probably the best possible plan, given certain conditions. One is good statistics. Another, not quite as obvious, is what session or system parameters are set to. If you noticed the name of the outline (STAR_QUERY), does the execution plan look like the result of a star query? The answer is no. Oracle can elect to use a "normal" plan if its cost is best, but in this case we know for certain that a star query transformation was not even considered, because there are no star-related lines in the outline. One such line would have been this:
OPT_PARAM('star_transformation_enabled' 'true')
Another tell-tale sign would have been the use of bitmap indexes in the execution plan. The overall point is that the CBO returned the best plan given what it had to work with. That may still not be the best plan overall, because not all of the prerequisites for a better plan were enabled.
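If you wanted the star transformation to at least be considered on a re-test, the usual starting point (along with having the supporting bitmap indexes in place) is to enable the parameter for the session; a sketch:
ALTER SESSION SET star_transformation_enabled = TRUE;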
In addition, the following text is an excerpt from the bestselling book "Oracle PL/SQL Tuning: Expert Secrets for High Performance Programming" by Dr. Tim Hall, Oracle ACE of the Year, 2006.
Dumping trace data whenever a SQL statement is parsed and executed can be enabled at many levels:
ALTER SESSION SET sql_trace=TRUE;
EXEC DBMS_SESSION.set_sql_trace(sql_trace => TRUE);
ALTER SESSION SET EVENTS '10046 trace name context forever, level 8';
EXEC DBMS_SYSTEM.set_sql_trace_in_session(sid=>123, serial#=>1234, sql_trace=>TRUE);
EXEC DBMS_SYSTEM.set_ev(si=>123, se=>1234, ev=>10046, le=>8, nm=>' ');
Oracle 10g introduced new ways to enable SQL tracing:
SQL> EXEC DBMS_MONITOR.session_trace_enable;
SQL> EXEC DBMS_MONITOR.session_trace_enable(waits=>TRUE, binds=>FALSE);
SQL> EXEC DBMS_MONITOR.session_trace_enable(session_id=>1234, serial_num=>1234);
SQL> EXEC DBMS_MONITOR.session_trace_enable(session_id=>1234, serial_num=>1234, waits=>TRUE, binds=>FALSE);
SQL> EXEC DBMS_MONITOR.client_id_trace_enable(client_id=>'tim_hall');
SQL> EXEC DBMS_MONITOR.client_id_trace_enable(client_id=>'tim_hall', waits=>TRUE, binds=>FALSE);
SQL> EXEC DBMS_MONITOR.serv_mod_act_trace_enable(service_name=>'db10g', module_name=>'test_api', action_name=>'running');
SQL> EXEC DBMS_MONITOR.serv_mod_act_trace_enable(service_name=>'db10g', module_name=>'test_api', action_name=>'running', waits=>TRUE, binds=>FALSE);
SQL> EXEC DBMS_MONITOR.serv_mod_act_trace_disable(service_name=>'db10g', module_name=>'test_api', action_name=>'running');
Note that 10053 trace event output files can get huge, especially for SQL statements that touch thousands of rows. Here are step-by-step details for using the 10053 SQL trace event: 10053 trace files - Dr. Hall
Alternatives to SQL trace with 10053
Oracle has a more detailed script alternative to the 10053 trace called SQLTXPLAIN.SQL (Enhanced Explain Plan and related diagnostic info for one SQL statement).