SQL 性能分析器(SPA)工具概览

这篇文章将提供一个关于 SQL 性能分析器(SQL Performance Analyzer,SPA)工具的简要概览,SPA 是 Oracle Real Application Testing 选件的组成特性之一。这是此系列的第一部分;第二部分将于下个月继续讲述数据库捕获和重演(Database Replay)。关于 SPA 的详细信息可参考:

数据库测试指南

DBA 的一个重要工作,是确保在一次计划内的变更之后,当前生产环境的负载和 SQL 执行计划仍然可以平稳运行。变更可能包含数据库升级、增加一个新索引或者修改某个数据库参数。SPA 工具作为 Oracle Real Application Testing 选件的一部分提供,允许把生产环境负载中的 SQL 拿到测试环境的目标数据库上运行,通过比对变更前后的结果识别性能退化的 SQL,并在迁移、升级或特定系统变更之前加以修复。如果您计划使用数据库重演(Database Replay)特性,Oracle 建议的最佳实践是在运行重演之前先使用 SPA:目标是在数据库重演之前识别并修复所有的 SQL 性能退化,这样重演阶段就可以只关注并发和吞吐量。SQL 性能分析器使用 SQL 调优集(STS)作为输入。STS 已经存在了很长时间,它允许 DBA 提取现有生产环境的 SQL 工作负载,并在节省时间和资源的前提下轻松比对一组变更前和变更后的执行结果。一个 SQL 调优集(STS)是一个数据库对象,包含从工作负载中得来的一组 SQL 语句及其执行上下文信息(例如用户和绑定变量、执行统计信息和执行计划)。关于 STS 的更多信息,请参考:Managing SQL Tuning Sets
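
下面给出一个最小的查询示例(假设当前用户可以访问 DBA_* 字典视图,STS 名称 MY_STS 仅为示意),用来查看一个 STS 以及其中语句所带的执行上下文信息:

-- 查看 STS 本身的基本信息
SELECT name, owner, statement_count, created
  FROM dba_sqlset
 WHERE name = 'MY_STS';

-- 查看 STS 中每条语句的执行统计和计划哈希值
SELECT sql_id, parsing_schema_name, plan_hash_value,
       executions, elapsed_time, buffer_gets
  FROM dba_sqlset_statements
 WHERE sqlset_name = 'MY_STS'
 ORDER BY elapsed_time DESC;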

注:SQL 性能分析器需要 Oracle Real Application Testing 许可。更多信息请参考:Oracle Database Licensing Information。

如下清单提供了一些 DBA 考虑使用 SPA 工具的常见场景。

使用场景


1.    数据库升级 – 一个新版本数据库意味着一个新版本的优化器。DBA 能在升级生产系统之前主动发现任何 SQL 性能退化。
2.    部署一个补丁 – 您可能会部署一个与性能或优化器相关的特定修复的补丁。使用 SPA 来检查您的生产环境 SQL 负载能帮助您验证这个补丁不会引起任何 SQL 性能退化。
3.    数据库初始化参数变更 - 有各种各样的数据库参数可能影响性能,所以这是 SPA 用处的一个很好的场景。
4.    Schema 变更例如增加索引 – schema 变更和修改,如增加索引会直接影响优化器的决定和计划。SPA 可用来测试这些变更并确保不会引入负面影响。
5.    改变或刷新优化器统计信息 – 优化器统计信息直接关系到优化器的决策和执行计划的生成,您可以使用 SPA 来测试新的统计信息和设置来确保它们不会引起 SQL 性能退化。

使用 SPA 需要执行以下工作流步骤。SPA 工具完全集成在 Oracle 12c Cloud Control 中,Oracle 也提供了名为 DBMS_SQLPA 的 PL/SQL 包,允许 DBA 用 PL/SQL 实施这些步骤。这个工作流是一个迭代的过程:执行、对比和分析、修复退化,如此反复。对于 SPA 发现的变坏或退化的 SQL 语句,DBA 可以使用 SQL 计划基线(SQL Plan Baseline)或 SQL 调优顾问(SQL Tuning Advisor)等工具/特性来修复。

SPA 工作流
1.    捕捉您想要分析的生产系统的 SQL 工作负载,并将其保存为一个 SQL 调优集。
2.    设置目标测试系统(这应该尽可能多地和生产系统一致)。
3.    在测试系统创建一个 SPA 任务。

4.    构建变更前 SPA 任务。
5.    进行系统变更。
6.    构建变更后 SPA 任务。                                                              
7.    对比和分析变更前后的性能数据。
8.    调优或修复任何退化的 SQL 语句。
9.    重复第 6~8 步,直到 SQL 性能在测试系统上可接受。

出于本文的目的,我们将通过“给表增加一个索引”这个简单的 schema 变更实例来介绍。

•    源数据库版本 12.1.0.2.0
•    目标测试系统 12.1.0.2.0
•    系统变更是给 t1 表增加一个索引(t1 的示例建表语句见下面的补充示例)
•    性能报告将生成 HTML 格式的详细信息
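
正文 1a 步骤会在 SCOTT 用户下反复查询 t1 表,但原文没有给出建表语句;下面是一个假设的最小建表和造数示例(列名 c1 与正文一致,数据量仅供演示,非原文内容):

-- 以 SCOTT 用户执行
CREATE TABLE t1 (c1 NUMBER, c2 VARCHAR2(100));

INSERT INTO t1
  SELECT level, RPAD('x', 100, 'x')
    FROM dual
  CONNECT BY level <= 100000;
COMMIT;

-- 收集统计信息,让优化器掌握准确的数据分布
EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'T1');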

SPA – 使用 PL/SQL API 的简单介绍

注:DBMS_SQLPA 包及其用法的更多信息,请参考:  Using DBMS_SQLPA

1.    捕获 SQL 工作负载到一个 SQL 调优集

创建和填充 STS

BEGIN 
  DBMS_SQLTUNE.DROP_SQLSET (sqlset_name  => 'MYSIMPLESTSUSINGAPI'); 
END;
/

BEGIN 
  DBMS_SQLTUNE.CREATE_SQLSET (sqlset_name  => 'MYSIMPLESTSUSINGAPI', description  => 'My Simple STS Using the API' );
END;
/

1a. 使用 SCOTT 用户运行以下 PL/SQL 代码来执行 SQL 语句。(这段 PL/SQL 用来模拟使用绑定变量的 SQL 工作负载)
var b1 number;

declare
  v_num number;
begin
  for i in 1..10000 loop
    :b1 := i;
    select c1 into v_num from t1 where c1 = :b1;
  end loop;
end;
/

1b. 从游标缓存中找到使用 parsing schema=SCOTT 的语句来填充 STS

DECLARE
  c_sqlarea_cursor DBMS_SQLTUNE.SQLSET_CURSOR;
BEGIN
 OPEN c_sqlarea_cursor FOR SELECT VALUE(p) FROM TABLE(DBMS_SQLTUNE.SELECT_CURSOR_CACHE('parsing_schema_name = ''SCOTT''', NULL, NULL, NULL, NULL, 1, NULL,'ALL')) p;
 DBMS_SQLTUNE.LOAD_SQLSET (sqlset_name => 'MYSIMPLESTSUSINGAPI', populate_cursor => c_sqlarea_cursor);
END;
/

1c. 检查 STS 中捕获了多少 SQL 语句

COLUMN NAME FORMAT a20
COLUMN COUNT FORMAT 99999
COLUMN DESCRIPTION FORMAT a30

SELECT NAME, STATEMENT_COUNT AS "SQLCNT", DESCRIPTION FROM   USER_SQLSET;

Results:

NAME                     SQLCNT DESCRIPTION
-------------------- ---------- ------------------------------
MYSIMPLESTSUSINGAPI          12 My Simple STS Using the API

1d. 显示 STS 的内容

COLUMN SQL_TEXT FORMAT a30   
COLUMN SCH FORMAT a3
COLUMN ELAPSED FORMAT 999999999

SELECT SQL_ID, PARSING_SCHEMA_NAME AS "SCOTT", SQL_TEXT,   ELAPSED_TIME AS "ELAPSED", BUFFER_GETS FROM   TABLE( DBMS_SQLTUNE.SELECT_SQLSET( 'MYSIMPLESTSUSINGAPI' ) );

Results: (partial)

SQL_ID        SCOTT  SQL_TEXT                          ELAPSED  BUFFER_GETS
------------- ------ ------------------------------ ---------- -----------
0af4p26041xkv SCOTT  SELECT C1 FROM T1 WHERE C1 = :  169909252    18185689

2.  设置目标系统

为了演示目的,这里直接把捕获 STS 的源库同时用作目标测试系统。
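
在真实场景里,目标测试系统通常是另一套数据库,需要先把 STS 打包搬运过去再解包。下面是一个简要示意(中转表名 MY_STS_STGTAB、schema SCOTT 均为假设,导出导入工具可用 exp/imp 或 expdp/impdp),与本文后面 sample 中的做法一致:

-- 源库:创建中转表,并把 STS 打包进去
EXEC DBMS_SQLTUNE.CREATE_STGTAB_SQLSET(table_name => 'MY_STS_STGTAB', schema_name => 'SCOTT');
EXEC DBMS_SQLTUNE.PACK_STGTAB_SQLSET(sqlset_name => 'MYSIMPLESTSUSINGAPI', staging_table_name => 'MY_STS_STGTAB', staging_schema_owner => 'SCOTT');

-- 用 exp/expdp 导出中转表,传到测试库后用 imp/impdp 导入(略)

-- 测试库:从中转表解包出 STS
EXEC DBMS_SQLTUNE.UNPACK_STGTAB_SQLSET(sqlset_name => 'MYSIMPLESTSUSINGAPI', replace => TRUE, staging_table_name => 'MY_STS_STGTAB', staging_schema_owner => 'SCOTT');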

3.  创建 SPA 任务

VARIABLE t_name VARCHAR2(100);
EXEC :t_name := DBMS_SQLPA.CREATE_ANALYSIS_TASK(sqlset_name => 'MYSIMPLESTSUSINGAPI', task_name => 'MYSPATASKUSINGAPI');
print t_name

Results:

T_NAME
-----------------
MYSPATASKUSINGAPI

4.  创建和执行变更前的 SPA 任务

EXEC DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(task_name => 'MYSPATASKUSINGAPI', execution_type => 'TEST EXECUTE', execution_name => 'MY_BEFORE_CHANGE');
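
当 STS 中语句较多时,TEST EXECUTE 可能要运行很长时间。下面是一个简单的进度查询示例(对变更前、变更后的执行同样适用),只用到标准的 advisor 字典视图:

SELECT execution_name, execution_type, status,
       execution_start, execution_end
  FROM dba_advisor_executions
 WHERE task_name = 'MYSPATASKUSINGAPI'
 ORDER BY execution_start;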

5.  做出系统变更

CREATE INDEX t1_idx ON t1 (c1);

6.  创建和执行变更后的 SPA 任务

EXEC DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(task_name => 'MYSPATASKUSINGAPI', execution_type => 'TEST EXECUTE', execution_name => 'MY_AFTER_CHANGE');

7.  对比和分析变更前和变更后的性能

EXEC DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(task_name => 'MYSPATASKUSINGAPI', execution_type => 'COMPARE PERFORMANCE', execution_name => 'MY_EXEC_COMPARE', execution_params => dbms_advisor.arglist('comparison_metric', 'elapsed_time'));

-- Generate the Report

set long 100000000 longchunksize 100000000 linesize 200 head off feedback off echo off TRIMSPOOL ON TRIMOUT ON
VAR rep   CLOB;
EXEC :rep := DBMS_SQLPA.REPORT_ANALYSIS_TASK('MYSPATASKUSINGAPI', 'html', 'typical', 'all');
SPOOL C:\mydir\SPA_detailed.html
PRINT :rep
SPOOL off

HTML 格式的报告示例:

下面是 SPA 报告的部分截屏。报告由三个部分组成:第一部分是变更前和变更后任务的信息,包括范围、状态、执行起始时间、错误个数和比较的度量标准;第二部分是总结,给出了变更带来的整体负载影响;第三部分是详细的 SQL 信息,比如 SQL_ID 以及所比较的度量,包括对负载的影响、执行频率,以及本例中变更前后的执行时间。例如对于 SQL ID 0af4p26041xkv,负载影响是 97%:我们实施的变更对性能有正面影响,把执行时间从 12766 降低到 29,并且可以看到执行计划在变更后发生了变化。这些信息可以帮助 DBA 进一步聚焦在特定的问题或性能退化上;对于当前这个例子,影响是性能得到了提升。
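
如果不方便查看 HTML 报告,也可以直接查询字典视图得到类似的对比数据。下面是一个简化示例(执行名沿用上文的 MY_BEFORE_CHANGE 和 MY_AFTER_CHANGE,列名以实际版本的 DBA_ADVISOR_SQLSTATS 定义为准),按 SQL 粒度比较变更前后的 elapsed_time 和执行计划:

SELECT bf.sql_id,
       bf.elapsed_time    AS elapsed_before,
       af.elapsed_time    AS elapsed_after,
       bf.plan_hash_value AS plan_before,
       af.plan_hash_value AS plan_after
  FROM dba_advisor_sqlstats bf, dba_advisor_sqlstats af
 WHERE bf.task_name = 'MYSPATASKUSINGAPI'
   AND af.task_name = 'MYSPATASKUSINGAPI'
   AND bf.execution_name = 'MY_BEFORE_CHANGE'
   AND af.execution_name = 'MY_AFTER_CHANGE'
   AND bf.sql_id = af.sql_id
 ORDER BY bf.elapsed_time DESC;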

下面的截屏显示了增加索引后执行计划的变化。这部分信息可以让 DBA 进一步关注变更前后某条具体 SQL 的执行计划。对任何退化的 SQL 来说,DBA 可以据此清楚了解执行计划是如何变化的,并进一步采取行动,比如使用 SQL Tuning Advisor 或者创建 SPM 基线。
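
举例来说,如果某条退化的 SQL 在库里仍然能跑出变更前的好计划,可以用 SQL 计划基线把好计划固定下来。下面是一个简要示意(sql_id 和 plan_hash_value 是占位值,需要换成实际退化语句及其好计划的值):

DECLARE
  l_plans PLS_INTEGER;
BEGIN
  -- 从游标缓存中把指定 SQL 的指定计划加载为 SQL 计划基线
  l_plans := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(
               sql_id          => '0af4p26041xkv',   -- 占位:退化 SQL 的 sql_id
               plan_hash_value => 1234567890);       -- 占位:变更前的好计划
  DBMS_OUTPUT.PUT_LINE('loaded plans: ' || l_plans);
END;
/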

关于 Real Application Testing 的推荐资源清单:

•    Oracle Real Application Testing Product Information
•    Master Note for Real Application Testing Option (Doc ID 1464274.1)
•    Database Testing: Best Practices (Doc ID 1535885.1)
•    Mandatory Patches for Database Testing Functionality for Current and Earlier Releases (Doc ID 560977.1)

##### sample 0

注意事项:

SPA 对比要求两个库的数据完全一致;如果生产库和测试库数据不一致,则比较结果没有意义。
同时测试端需要准备 2 套环境:一套是类生产库(与生产同版本),一套是类新库(升级后的新版本)。

section 1:

1. 恢复平台:搭建 10g 性能测试环境。在恢复平台恢复出 10g 的生产库作为性能测试环境,参数保持和生产库一致。
2. 11g 性能测试库:搭建 11g 性能测试环境。在 Linux 主机上进行跨平台迁移,搭建 11g 性能测试环境。
3. 生产环境:抓取 sqlset。在生产环境执行附件 SQL,对生产性能影响不大;抓取持续一周,如果有异常,可以立即停掉。

---------------------------------------------------
--Step1: 创建名称为STS_SQLSET的SQL_SET.
---------------------------------------------------

BEGIN
DBMS_SQLTUNE.DROP_SQLSET(
sqlset_name => 'STS_SQLSET'
);
END;
/

BEGIN
DBMS_SQLTUNE.CREATE_SQLSET(SQLSET_NAME => 'STS_SQLSET',
DESCRIPTION => 'COMPLETE APPLICATION WORKLOAD',
SQLSET_OWNER =>'DBMGR');
END;
/

---------------------------------------------------
--Step2: 初始加载当前数据库中的SQL.
---------------------------------------------------

DECLARE
STSCUR DBMS_SQLTUNE.SQLSET_CURSOR;
BEGIN
OPEN STSCUR FOR
SELECT VALUE(P)
FROM TABLE(DBMS_SQLTUNE.SELECT_CURSOR_CACHE('PARSING_SCHEMA_NAME NOT IN (''SYS'', ''SYSTEM'', ''OUTLN'', ''DBSNMP'', ''WMSYS'', ''CTXSYS'', ''XDB'', ''MDSYS'',
''ORDPLUGINS'', ''ORDSYS'', ''OLAPSYS'', ''HR'', ''OE'', ''SCOTT'', ''QS_CB'', ''QS_CBADM'', ''QS_ES'', ''QS_OS'', ''QS_CS'', ''QS'', ''QS_ADM'', ''QS_WS'', ''DIP'', ''TSMSYS'', ''EXFSYS'',
''LBACSYS'', ''TRACESVR'', ''AURORA$JIS$UTILITY$'', ''OSE$HTTP$ADMIN'', ''DBMGR'', ''OVSEE'', ''DBMONOPR'')
AND PLAN_HASH_VALUE <> 0 AND UPPER(SQL_TEXT) NOT LIKE ''%TABLE_NAME%''',
NULL, NULL, NULL, NULL, 1, NULL,
'ALL')) P;
-- POPULATE THE SQLSET
DBMS_SQLTUNE.LOAD_SQLSET(SQLSET_NAME => 'STS_SQLSET',
POPULATE_CURSOR => STSCUR,
COMMIT_ROWS => 100,
SQLSET_OWNER => 'DBMGR');
CLOSE STSCUR;
COMMIT;
EXCEPTION
WHEN OTHERS THEN
RAISE;
END;

/

select owner,name,STATEMENT_COUNT from dba_sqlset;

--观察一下初始化收集的数据,如果数据量太大,可以停掉进程
---------------------------------------------------
--Step3: 增量抓取数据库中的SQL,每小时抓取一次,会话会一直持续到 TIME_LIMIT 到期(下面脚本中为 345600 秒,即 4 天;如需抓取 7 天,相应调大 TIME_LIMIT)。这一步用 shell 脚本在后台执行,或者在 sqlplus 里执行(前提是会话不会被 kill 掉)。

BEGIN
DBMS_SQLTUNE.CAPTURE_CURSOR_CACHE_SQLSET(SQLSET_NAME => 'STS_SQLSET',
TIME_LIMIT => 345600,
REPEAT_INTERVAL => 3600,
CAPTURE_OPTION => 'MERGE',
CAPTURE_MODE => DBMS_SQLTUNE.MODE_ACCUMULATE_STATS,
BASIC_FILTER => 'PARSING_SCHEMA_NAME NOT IN (''SYS'', ''SYSTEM'', ''OUTLN'', ''DBSNMP'', ''WMSYS'', ''CTXSYS'', ''XDB'', ''MDSYS'',
''ORDPLUGINS'', ''ORDSYS'', ''OLAPSYS'', ''HR'', ''OE'', ''SCOTT'', ''QS_CB'', ''QS_CBADM'', ''QS_ES'', ''QS_OS'', ''QS_CS'', ''QS'', ''QS_ADM'', ''QS_WS'', ''DIP'', ''TSMSYS'', ''EXFSYS'',''LBACSYS'', ''TRACESVR'', ''AURORA$JIS$UTILITY$'', ''OSE$HTTP$ADMIN'', ''DBMGR'', ''OVSEE'', ''DBMONOPR'')
AND PLAN_HASH_VALUE <> 0 AND UPPER(SQL_TEXT) NOT LIKE ''%TABLE_NAME%''',
SQLSET_OWNER => 'DBMGR');
END;
/

--for script running in background, collect from 8/2 to 8/6, 4 days total

cd /db/cps/app/opcps/dba

vi collect_spq.sh

sqlplus / as sysdba <<eof
select instance_name from v\$instance;
BEGIN
DBMS_SQLTUNE.CAPTURE_CURSOR_CACHE_SQLSET(SQLSET_NAME => 'STS_SQLSET',
TIME_LIMIT => 345600,
REPEAT_INTERVAL => 3600,
CAPTURE_OPTION => 'MERGE',
CAPTURE_MODE => DBMS_SQLTUNE.MODE_ACCUMULATE_STATS,
BASIC_FILTER => 'PARSING_SCHEMA_NAME NOT IN (''SYS'', ''SYSTEM'', ''OUTLN'', ''DBSNMP'', ''WMSYS'', ''CTXSYS'', ''XDB'', ''MDSYS'',
''ORDPLUGINS'', ''ORDSYS'', ''OLAPSYS'', ''HR'', ''OE'', ''SCOTT'', ''QS_CB'', ''QS_CBADM'', ''QS_ES'', ''QS_OS'', ''QS_CS'', ''QS'', ''QS_ADM'', ''QS_WS'', ''DIP'', ''TSMSYS'', ''EXFSYS'',''LBACSYS'', ''TRACESVR'', ''AURORA$JIS$UTILITY$'', ''OSE$HTTP$ADMIN'', ''DBMGR'', ''OVSEE'', ''DBMONOPR'')
AND PLAN_HASH_VALUE <> 0 AND UPPER(SQL_TEXT) NOT LIKE ''%TABLE_NAME%''',
SQLSET_OWNER => 'DBMGR');
END;
/
eof

4 生产环境 "第三步完成后,
创建中间表,将sqlset打包到中间表" "exec dbms_sqltune.create_stgtab_sqlset(table_name => 'STS_STBTAB' ,schema_name => 'DBMGR');
exec dbms_sqltune.pack_stgtab_sqlset(sqlset_name =>'STS_SQLSET' ,sqlset_owner =>'DBMGR' ,staging_table_name =>'STS_STBTAB' ,staging_schema_owner => 'DBMGR' );"

方法1:

(

转换成中转表之后,我们可以再做一次去除重复的操作。当然,你也可以根据module来删除一些不必要的游标。

delete from SPA.SQLSET_TAB a where rowid !=(select max(rowid) from SQLSET_TAB b where a.FORCE_MATCHING_SIGNATURE=b.FORCE_MATCHING_SIGNATURE and a.FORCE_MATCHING_SIGNATURE<>0);

delete from SPA.SQLSET_TAB where MODULE='PL/SQL Developer';

)

方法2:

(

exec dbms_sqltune.create_stgtab_sqlset(table_name => 'STS_STBTAB_08' ,schema_name => 'DBMGR');
exec dbms_sqltune.pack_stgtab_sqlset(sqlset_name =>'STS_SQLSET' ,sqlset_owner =>'DBMGR' ,staging_table_name =>'STS_STBTAB_08' ,staging_schema_owner => 'DBMGR' );

create table sts_b as select distinct(to_char(s.force_matching_signature)) b from DBMGR.STS_STBTAB_08 s;

create index sts_b_1_force on STS_STBTAB_08(to_char(force_matching_signature));

DECLARE
T VARCHAR2(50);
cursor STSCUR IS select b from sts_b;
STSCUR_1 STS_STBTAB_08%ROWTYPE;
BEGIN
OPEN STSCUR;
LOOP
fetch STSCUR into T;
EXIT WHEN STSCUR%NOTFOUND;
-- DBMS_OUTPUT.PUT_LINE(T);
select * into STSCUR_1 from ( (select * from STS_STBTAB_08 st where (to_char(st.force_matching_signature)) =T ) order by cpu_time desc) where rownum < 2;
delete STS_STBTAB_08 where (to_char(force_matching_signature)) =T;
insert into STS_STBTAB_08 values STSCUR_1;
commit;
-- DBMS_OUTPUT.PUT_LINE(STSCUR_1.SQL_ID);
END LOOP;
CLOSE STSCUR;
END;
/

)

5 生产环境 导出sqlset "expdp导出DBMGR.STS_STBTAB

select count(*) from DBMGR.STS_STBTAB;

exp dbmgr/db1234DBA tables=STS_STBTAB file=/db/cps/archivelog/exp_SQLSET_TAB.dmp log=/db/cps/archivelog/exp_SQLSET_TAB.log FEEDBACK=1000 BUFFER=5000000
"

section 2:

将sqlset导入11g性能测试库

6 11g性能测试库 导入到11g性能测试库 impdp导入DBMGR.STS_STBTAB",

因为有 70 万条数据,直接跑耗时非常长,考虑先过滤再跑。主要过滤方法是:把仅文字常量不同(force_matching_signature 相同)的 SQL 视为同一条,只保留其中资源消耗最大的一条记录(下面脚本按 cpu_time 排序取第一条),筛选出的 sql_id 最后放在 sts_b_2 表里。最终记录在 2 万条左右,方法如下:

drop table sts_b;
drop table sts_b_1;
drop table sts_b_2;
drop table sts_b_3;

create table sts_b as select distinct(to_char(s.force_matching_signature)) b from dba_sqlset_statements s;
create table sts_b_1 as select * from dba_sqlset_statements;
create index sts_b_1_force on sts_b_1(to_char(force_matching_signature));
create table sts_b_2 (sql_id varchar2(50));
create table sts_b_3 as select * from dba_sqlset_statements;

DECLARE
T VARCHAR2(50);
cursor STSCUR IS select b from sts_b;
STSCUR_1 dba_sqlset_statements%ROWTYPE;
BEGIN
OPEN STSCUR;
LOOP
fetch STSCUR into T;
EXIT WHEN STSCUR%NOTFOUND;
-- DBMS_OUTPUT.PUT_LINE(T);
select * into STSCUR_1 from ( (select * from sts_b_1 st where (to_char(st.force_matching_signature)) =T ) order by cpu_time desc) where rownum < 2;
delete sts_b_1 where (to_char(force_matching_signature)) =T;
insert into sts_b_2 values(STSCUR_1.SQL_ID);
commit;
-- DBMS_OUTPUT.PUT_LINE(STSCUR_1.SQL_ID);
END LOOP;
CLOSE STSCUR;
END;
/

--select * into STSCUR_1 from ( (select * from dba_sqlset_statements st where (to_char(st.force_matching_signature)) =6018318222786944325 ) order by cpu_time ---desc) where rownum < 2;

7 11g性能测试库 sqlset解包 exec dbms_sqltune.unpack_stgtab_sqlset(sqlset_name => 'STS_SQLSET',sqlset_owner => 'DBMGR' ,replace => True,staging_table_name =>'STS_STBTAB' ,staging_schema_owner => 'DBMGR');

section 3:
将sqlset导入10g测试库

8 10g测试环境 授权 "grant all on spa_sqlpa to dbmgr;
grant all on dbms_sqlpa to dbmgr;
"
9 10g测试环境 导出导入 "impdp导入DBMGR.STS_STBTAB

imp dbmgr/dbmgr fromuser=dbmgr touser=dbmgr file=/oraclelv/exp_SQLSET_TAB.dmp feedback=1000 log=/oraclelv/imp_SQLSET_TAB.log BUFFER=5000000

imp dbmgr/db1234DBA fromuser=dbmgr touser=dbmgr file=/datalv03/afa/exp_SQLSET_TAB.dmp feedback=1000 log=/datalv03/afa/imp_SQLSET_TAB.log BUFFER=5000000
"
10 10g测试环境 解包sqlset exec dbms_sqltune.unpack_stgtab_sqlset(sqlset_name => 'STS_SQLSET',sqlset_owner => 'DBMGR' ,replace => True,staging_table_name =>'STS_STBTAB' ,staging_schema_owner => 'DBMGR');
11 11g性能测试库 创建DBLINK "create public database link to_10g connect to dbmgr identified by xxxxx using 'xxxxxxx'

create public database link to_10g connect to dbmgr identified by db1234DBA
using ' (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST =25.10.0.199)(PORT = 1539)) (CONNECT_DATA =(sid = afa)))';"

section 4:
第一次执行spa回放,获得10g性能测试环境的数据

12 11性能测试库 创建task "declare
mytask varchar2(100);
begin
mytask := dbms_sqlpa.create_analysis_task(sqlset_name => 'STS_SQLSET',sqlset_owner => 'DBMGR',task_name => 'TASK_10G');
END;
/
"
13 11g性能测试库 生成第一次执行task的脚本,通过DATABASE_LINK to_10g 远程执行"
vi spa_10g.sh
sqlplus -S /nolog <<SQLEnd
conn / as sysdba
exec DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(task_name=>'TASK_10G',execution_type=>'test execute',execution_name=>'spa10g',execution_params =>dbms_advisor.argList('DATABASE_LINK','TO_10G','EXECUTE_COUNT',5) ,execution_desc => 'before_change');
exit
SQLEnd" "可以看到SPA_DIR下有BEF_TASK_SQLSET_NO_i.sh脚本,
"
14 11g性能测试库 "执行task,
执行第一次spa回放" "nohup sh spa_10g.sh > spa_10g.log &
" 发起目标库执行第一次spa回放
15 11g性能测试库 查询回放进度 "SELECT owner,task_name,execution_name,a.execution_type,execution_start,execution_last_modified,execution_end,status,b.SOFAR,b.START_TIME,b.LAST_UPDATE_TIME
from dba_advisor_executions a, v$advisor_progress b where a.execution_name like 'spa10g%';" 计算 2 万笔数据需要多久完成。
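
如果想进一步估算回放的剩余时间,可以在上面查询的基础上使用 V$ADVISOR_PROGRESS 中的进度字段,例如下面这个示意查询(字段含义以实际版本的视图定义为准):

SELECT b.sofar, b.totalwork,
       ROUND(b.sofar / NULLIF(b.totalwork, 0) * 100, 2) AS pct_done,
       b.time_remaining, b.elapsed_seconds
  FROM dba_advisor_executions a, v$advisor_progress b
 WHERE a.task_id = b.task_id
   AND a.execution_name = 'spa10g';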

section 5:
第二次执行spa回放,获得11g性能测试环境的数据

16 11g性能测试库 创建task "declare
mytask varchar2(100);
begin
mytask := dbms_sqlpa.create_analysis_task(sqlset_name => 'STS_SQLSET',sqlset_owner => 'DBMGR',task_name => 'TASK_11G');
END;
/
"
17 11g性能测试库 生成第二次执行task的脚本 "vi spa_11g.sh
sqlplus -S /nolog <<SQLEnd
conn / as sysdba
exec DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(task_name=>'TASK_11G',execution_type=>'test execute',execution_name=>'spa11g_1',execution_params =>dbms_advisor.argList('EXECUTE_COUNT',5) ,execution_desc => 'after_change');
exit
SQLEnd" 可以看到SPA_DIR下有AFT_TASK_SQLSET_NO_i.sh脚本
18 11g性能测试库 "执行task,
执行第二次spa回放" "nohup sh spa_11g.sh > spa_11g.log &
" 发起目标库执行第二次spa回放,5个小时左右
19 11g性能测试库 查询回放进度 "SELECT owner,task_name,execution_name,a.execution_type,execution_start,execution_last_modified,execution_end,status,b.SOFAR,b.START_TIME,b.LAST_UPDATE_TIME
from dba_advisor_executions a, v$advisor_progress b where a.execution_name like 'spa11g%';"

section 6:
20 11g性能测试库 取出buffer gets变大的sql进行分析

20.1.分析SQL 如下
Select *
From (Select b.*, dbms_lob.substr(st.sql_text, 3400) sql_text
From SYS.Wrh$_Sqltext st,
(Select task_name,
sql_id,
executions,
detal_buffer_gets,
bf_buffer_gets,
af_buffer_gets,
bf_plan_hash_value,
bf_rows_processed,
af_plan_hash_value,
af_ROWS_PROCESSED
From (Select task_name,
sql_id,
bf_executions executions,
round(af_buffer_gets / af_EXECUTIONS) -
round(bf_buffer_gets / bf_executions) detal_buffer_gets,
round(bf_buffer_gets / bf_executions) bf_buffer_gets,
round(af_buffer_gets / af_EXECUTIONS) af_buffer_gets,
bf_plan_hash_value,
bf_rows_processed / bf_executions bf_rows_processed,
af_plan_hash_value,
af_ROWS_PROCESSED / af_EXECUTIONS af_ROWS_PROCESSED
From (Select bf.task_name,
bf.sql_id,
bf.executions bf_executions,
bf.plan_hash_value bf_plan_hash_value,
bf.buffer_gets bf_buffer_gets,
bf.rows_processed bf_rows_processed,
af.plan_hash_value af_plan_hash_value,
af.buffer_gets af_buffer_gets,
af.executions af_executions,
af.rows_processed af_rows_processed
From dba_advisor_sqlstats af,
dba_advisor_sqlstats bf
Where af.execution_name='SPA11G'
And af.task_name like '%TASK_11G%'
And bf.task_name like '%TASK_10G%'
And bf.execution_name='SPA10G'
And bf.sql_id = af.sql_id))
Where detal_buffer_gets > 0) b
Where st.sql_id = b.sql_id
Order By detal_buffer_gets Desc)
Where sql_text Not Like '%Analyze(%'
And sql_text Not Like '%SELECT /* DS_SVC */%'
And sql_text Not Like '%/* OPT_DYN_SAMP */%';

Select *
From (Select b.*, dbms_lob.substr(st.sql_text, 3400) sql_text
From SYS.Wrh$_Sqltext st,
(Select task_name,
sql_id,
executions,
detal_buffer_gets,
bf_buffer_gets,
af_buffer_gets,
bf_plan_hash_value,
bf_rows_processed,
af_plan_hash_value,
af_ROWS_PROCESSED
From (Select task_name,
sql_id,
bf_executions executions,
round(af_buffer_gets / af_EXECUTIONS) -
round(bf_buffer_gets / bf_executions) detal_buffer_gets,
round(bf_buffer_gets / bf_executions) bf_buffer_gets,
round(af_buffer_gets / af_EXECUTIONS) af_buffer_gets,
bf_plan_hash_value,
bf_rows_processed / bf_executions bf_rows_processed,
af_plan_hash_value,
af_ROWS_PROCESSED / af_EXECUTIONS af_ROWS_PROCESSED
From (Select bf.task_name,
bf.sql_id,
bf.executions bf_executions,
bf.plan_hash_value bf_plan_hash_value,
bf.buffer_gets bf_buffer_gets,
bf.rows_processed bf_rows_processed,
af.plan_hash_value af_plan_hash_value,
af.buffer_gets af_buffer_gets,
af.executions af_executions,
af.rows_processed af_rows_processed
From dba_advisor_sqlstats af,
dba_advisor_sqlstats bf
Where af.execution_name='spa11g_1'
And af.task_name like '%TASK_11G%'
And bf.task_name like '%TASK_10G%'
And bf.execution_name='spa10g'
And bf.sql_id = af.sql_id))
Where detal_buffer_gets > 0) b
Where st.sql_id = b.sql_id
Order By detal_buffer_gets Desc)
Where sql_text Not Like '%Analyze(%'
And sql_text Not Like '%SELECT /* DS_SVC */%'
And sql_text Not Like '%/* OPT_DYN_SAMP */%';

20.2." 1.最后检查的结果SQL大于10%的有将近400多条。在过滤一遍buffer_get 大于1000以上的,只有30多条,因此重点分析这30条sql数据。
2.把这30条SQL依次放在PL/SQL developer里格式化后,在放入同一个文件,按照sql_id 编好号。"
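
作为补充,下面给出一个示意查询(假设已按 20.3 的方法创建了临时表 a),在变化量大于 10% 的基础上再筛出平均 buffer gets 增量大于 1000 的语句,对应上面描述的过滤思路:

SELECT sql_id,
       bf_buffer_gets,
       af_buffer_gets,
       detal_buffer_gets,
       ROUND(detal_buffer_gets / bf_buffer_gets * 100, 2) AS change_pct
  FROM a
 WHERE detal_buffer_gets > 1000
   AND detal_buffer_gets / bf_buffer_gets * 100 > 10
 ORDER BY detal_buffer_gets DESC;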

20.3."1.如果spa数据过多,每次都要查询很久的话 ,可以考虑创建临时表 a,加快查询速度 。
create table a as
(
Select *
From (Select b.*, dbms_lob.substr(st.sql_text, 3400) sql_text
From SYS.Wrh$_Sqltext st,
(Select task_name,
sql_id,
executions,
detal_buffer_gets,
bf_buffer_gets,
af_buffer_gets,
bf_plan_hash_value,
bf_rows_processed,
af_plan_hash_value,
af_ROWS_PROCESSED
From (Select task_name,
sql_id,
bf_executions executions,
round(af_buffer_gets / af_EXECUTIONS) -
round(bf_buffer_gets / bf_executions) detal_buffer_gets,
round(bf_buffer_gets / bf_executions) bf_buffer_gets,
round(af_buffer_gets / af_EXECUTIONS) af_buffer_gets,
bf_plan_hash_value,
bf_rows_processed / bf_executions bf_rows_processed,
af_plan_hash_value,
af_ROWS_PROCESSED / af_EXECUTIONS af_ROWS_PROCESSED
From (Select bf.task_name,
bf.sql_id,
bf.executions bf_executions,
bf.plan_hash_value bf_plan_hash_value,
bf.buffer_gets bf_buffer_gets,
bf.rows_processed bf_rows_processed,
af.plan_hash_value af_plan_hash_value,
af.buffer_gets af_buffer_gets,
af.executions af_executions,
af.rows_processed af_rows_processed
From dba_advisor_sqlstats af,
dba_advisor_sqlstats bf
Where af.execution_name='spa11g_1'
And af.task_name like '%TASK_11G%'
And bf.task_name like '%TASK_10G%'
And bf.execution_name='spa10g'
And bf.sql_id = af.sql_id))
Where detal_buffer_gets > 0) b
Where st.sql_id = b.sql_id
Order By detal_buffer_gets Desc)
Where sql_text Not Like '%Analyze(%'
And sql_text Not Like '%SELECT /* DS_SVC */%'
And sql_text Not Like '%/* OPT_DYN_SAMP */%');" "

2.如下是从a表中找到升级前后变化量大于10%的SQL,detal_buffer_gets这个变量就是变化量.

select sql_id,sql_text,bf_buffer_gets,af_buffer_gets,(detal_buffer_gets/bf_buffer_gets *100) change from a where detal_buffer_gets/bf_buffer_gets *100 > 10;
"

21 11g性能测试库 plan改变的sql

Select st.sql_id,
sst.executions,
dbms_lob.substr(st.sql_text, 3000) sql_text
From sys.wrh$_sqltext st,
dba_sqlset_statements sst,
(Select Distinct sql_id
From (Select sql_id,
operation,
options,
object_name,
object_alias,
object_type,
Id,
parent_id,
depth
From dba_advisor_sqlplans af
Where af.task_name = 'TASK_11G' And af.execution_name = 'SPA11G'
Minus
Select sql_id,
operation,
options,
object_name,
object_alias,
object_type,
Id,
parent_id,
depth
From dba_advisor_sqlplans bf
Where bf.task_name = 'TASK_10G' And bf.execution_name = 'SPA10G'
)) cp
Where st.sql_id = cp.sql_id
And sst.sql_id = cp.sql_id
And st.sql_text Not Like '%Analyze(%'
And st.sql_text Not Like '%SELECT /* DS_SVC */%'
And st.sql_text Not Like '%/* OPT_DYN_SAMP */%'
And sst.sqlset_name Like 'STS_SQLSETNO%'
;

Select st.sql_id,
sst.executions,
dbms_lob.substr(st.sql_text, 3000) sql_text
From sys.wrh$_sqltext st,
dba_sqlset_statements sst,
(Select Distinct sql_id
From (Select sql_id,
operation,
options,
object_name,
object_alias,
object_type,
Id,
parent_id,
depth
From dba_advisor_sqlplans af
Where af.task_name = 'TASK_11G' And af.execution_name = 'spa11g_1'
Minus
Select sql_id,
operation,
options,
object_name,
object_alias,
object_type,
Id,
parent_id,
depth
From dba_advisor_sqlplans bf
Where bf.task_name = 'TASK_10G' And bf.execution_name = 'spa10g'
)) cp
Where st.sql_id = cp.sql_id
And sst.sql_id = cp.sql_id
And st.sql_text Not Like '%Analyze(%'
And st.sql_text Not Like '%SELECT /* DS_SVC */%'
And st.sql_text Not Like '%/* OPT_DYN_SAMP */%'
And sst.sqlset_name Like '%STS_SQLSET%'

22 11g性能测试库 "生成任务task_11g的报告,
生成任务task_10g的报告" "conn / as sysdba
1.SQL> SET LONG 999999 longchunksize 100000 linesize 200 head off feedback off echo off
SQL> spool task01_before_change.html
SQL> SELECT DBMS_SQLPA.REPORT_ANALYSIS_TASK('TASK_11G', 'HTML', 'ALL', 'ALL') FROM dual;
SQL>spoo off

SQL>alter session set events '31156 trace name context forever,level 0x400';
SQL> spool task01_after_change.html
SQL> SELECT DBMS_SQLPA.REPORT_ANALYSIS_TASK('TASK_10G', 'HTML', 'ALL', 'ALL') FROM dual;
SQL>spoo off"

23 11g性能测试库 分析性能变差的 SQL 发生退化的原因,并对相应 SQL 进行优化。分析性能突然变差的 SQL,排查周期比较长,主要可以通过 SQLT 报告来分析;这里主要做的是针对性能变化 SQL 的调优。新生成的 SQL_PROFILE 可以通过导出/导入的方式迁移到新库,从而达到优化 SQL 的目的(打包/解包示意见下文)。
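
SQL_PROFILE 的导出导入可以用 DBMS_SQLTUNE 自带的打包/解包过程完成,下面是一个简要示意(中转表名 PROF_STGTAB 为假设,schema 沿用 DBMGR):

-- 源库(11g 性能测试库):创建 SQL Profile 中转表并打包
exec dbms_sqltune.create_stgtab_sqlprof(table_name => 'PROF_STGTAB', schema_name => 'DBMGR');
exec dbms_sqltune.pack_stgtab_sqlprof(profile_name => '%', staging_table_name => 'PROF_STGTAB', staging_schema_owner => 'DBMGR');

-- 用 exp/imp 或 expdp/impdp 搬运 DBMGR.PROF_STGTAB 到新库(略)

-- 新库:解包,恢复 SQL Profile
exec dbms_sqltune.unpack_stgtab_sqlprof(replace => TRUE, staging_table_name => 'PROF_STGTAB', staging_schema_owner => 'DBMGR');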

--手工调优SQL方法,在11g 数据库里,根据SQL_ID,然后根据SQL_ID绑定profile

DECLARE
my_task_name VARCHAR2(30);
my_sqltext CLOB;
BEGIN
select dbms_lob.substr(sql_fulltext,4000) into my_sqltext from v$sqlarea where sql_id='cybxr1trru31n';
my_task_name := DBMS_SQLTUNE.CREATE_TUNING_TASK(
sql_text=> my_sqltext,
user_name => 'AFA',
scope => 'COMPREHENSIVE',
time_limit => 60,
task_name => 'my_sql_tuning_task_test1',
description => 'Task to tune a query on a specified table');
END;
/

exec DBMS_SQLTUNE.EXECUTE_TUNING_TASK( task_name => 'my_sql_tuning_task_test1');

set long 2000
SELECT DBMS_SQLTUNE.REPORT_TUNING_TASK( 'my_sql_tuning_task_test1') from DUAL;

--自动调优方法:思路如下,手工执行SQL,得到SQL_ID,再调用自动分析方法DBMS_SQLTUNE分析,最后得到分析结果以及建议,最后将结果建议实施。

## 自动调优开始begin
rm t.log
sqlplus afa/afa <<eof
spool t.log

SELECT *
  FROM (SELECT t.*, ROWNUM RN
          FROM (SELECT agentserialno
                  FROM v_beps_returnticketinfo
                 WHERE brno = '756045'
                   AND workdate >= '20180701'
                 ORDER BY workdate desc, agentserialno desc) t)
 WHERE RN <= 15
   AND RN BETWEEN 1 AND 15
/

select * from table(dbms_xplan.display_cursor());

spool off
eof

sql_id=`grep SQL_ID t.log|awk '{print $2}'|awk -F, '{print $1}'`

sqlplus / as sysdba <<eof1
set pagesize 0 linesize 300
select * from dual;
exec DBMS_SQLTUNE.DROP_TUNING_TASK(task_name => 'my_sql_tuning_task_test1');
select * from dual;
DECLARE
my_task_name VARCHAR2(30);
my_sqltext CLOB;
BEGIN
select dbms_lob.substr(sql_fulltext,4000) sql_text into my_sqltext from v\$sqlarea where sql_id='$sql_id';
my_task_name := DBMS_SQLTUNE.CREATE_TUNING_TASK(
sql_text=> my_sqltext,
user_name => 'AFA',
scope => 'COMPREHENSIVE',
time_limit => 1600,
task_name => 'my_sql_tuning_task_test1',
description => 'Task to tune a query on a specified table');
END;
/
exec DBMS_SQLTUNE.EXECUTE_TUNING_TASK( task_name => 'my_sql_tuning_task_test1');
/
set long 200000
SELECT DBMS_SQLTUNE.REPORT_TUNING_TASK( 'my_sql_tuning_task_test1') from DUAL;
execute dbms_sqltune.accept_sql_profile(task_name => 'my_sql_tuning_task_test1', task_owner => 'SYS', replace => TRUE,force_match=>true);
/
eof1

####自动调优结束

附录:分析sql变慢的原因,需要使用SQLT 中xplore, 方法如下:

Method 3)如何找到是优化器的哪个变化导致SQL性能变化 ,如果数据库升级,比如从10g升级到11g,sql 运行缓慢,可以使用如下方法

单独安装sqlt->utl->xplore目录下执行install.sql
生成create_xplore_script.sql并执行
修改sql语句,增加hint /* ^^unique_id */保存到文件
选择XECUTE模式执行
选择CBO Parameters Y
选择EXADATA Parameters N
选择Fix Control Y
选择SQL Monitor N
执行@xplore_script_1.sql带入变量用户、密码、sql文件名
分析生成的html格式报告

##sample
上传SQLT,解压缩SQLT, 不需要安装SQLT ,直接按照如下方法实施

cd sqlt/utl/xplore

Install:
~~~~~~~
1. Connect as SYS and execute install script:

# sqlplus / as sysdba
SQL> START install.sql

Installation completed.
You are now connected as afa.

1. Set CBO env if needed
2. Execute @create_xplore_script.sql
--

2. Generate the xplore_script in the same session within which you executed step one.

cd /home/oracle/sqlt/sqlt/utl/xplore

改写sql create_xplore_script.sql ,思路如下,去掉ACC这一行,增加define 这一行。

--#ACC xplore_method PROMPT 'Enter "XPLORE Method" [XECUTE]: ';
define xplore_method="XECUTE"
PRO Parameter 2:
PRO Include CBO Parameters: Y (default) or N
--ACC include_cbo_parameters PROMPT 'Enter "CBO Parameters" [Y]: ';
define include_cbo_parameters="Y"
PRO
PRO Parameter 3:
PRO Include Exadata Parameters: Y (default) or N
--ACC include_exadata_parameters PROMPT 'Enter "EXADATA Parameters" [Y]: ';
define include_exadata_parameters="N"
PRO
PRO Parameter 4:
PRO Include Fix Control: Y (default) or N
--ACC include_fix_control PROMPT 'Enter "Fix Control" [Y]: ';
define include_fix_control="Y"
PRO
PRO Parameter 5:
PRO Generate SQL Monitor Reports: N (default) or Y
PRO Only applicable when XPLORE Method is XECUTE
--ACC generate_sql_monitor_reports PROMPT 'Enter "SQL Monitor" [N]: ';
define generate_sql_monitor_reports="Y"

SQL> conn app/passwd
sqlplus afa/afa <<eof
@create_xplore_script.sql
eof

3. Execute generated xplore_script. It will ask for two parameters:
修改sql语句,增加hint /* ^^unique_id */保存到文件

conn app/passwd

@xplore_script_1.sql带入变量用户、密码、sql文件名

P1. Name of the script to be executed. 一定要加入/* ^^unique_id */ 这个关键字,即可,最后结果会生成zip文件

Notes:
Example:
SELECT /* ^^unique_id */ t1.col1, etc.

P2. Password for <user>

4. After you are done using XPLORE you may want to bounce the
database since it executed some ALTER SYSTEM commands:
(when you hit the error "ORA-01422: exact fetch returns more than requested number of rows", you need to restart the db)

# sqlplus / as sysdba
SQL> shutdown immediate
SQL> startup

Uninstall:
~~~~~~~~~
1. Connect as SYS and execute uninstall script:

# sqlplus <user>
SQL> START uninstall.sql

Note:
You will be asked for the test case user.

##### sample 1


出于本文的目的,我们将通过一个从 10.2.0.4 升级到 11.2.0.4 的简单实例来介绍。

• 源数据库版本 10.2.0.4(主机 25.10.0.199)
• 目标测试系统 11.2.0.4(主机 25.10.0.31)
• 性能报告将生成 HTML 格式的详细信息

SPA – 使用 PL/SQL API 的简单介绍

注:DBMS_SQLPA 包及其用法的更多信息,请参考: Using DBMS_SQLPA

1. 捕获 SQL 工作负载到一个 SQL 调优集

创建和填充 STS

BEGIN
DBMS_SQLTUNE.DROP_SQLSET (sqlset_name => 'MYSIMPLESTSUSINGAPI');
END;
/

BEGIN
DBMS_SQLTUNE.CREATE_SQLSET (sqlset_name => 'MYSIMPLESTSUSINGAPI', description => 'My Simple STS Using the API' );
END;
/
PL/SQL procedure successfully completed.

2,检查SYSAUX空间是否足够

09:26:38 sys@LUNAR>@ts

SELECT a.tablespace_name ,b.maxbytes/1024/1024/1024 "maxbyes_GB",total/1024/1024/1024 "bytes_GB",free/1024/1024/1024 "free_GB",(total-free) /1024/1024/1024 "use_GB",
ROUND((total-free)/total,4)*100 "use_%",ROUND((total-free)/b.maxbytes,4)*100 "maxuse_%"
FROM
(SELECT tablespace_name,SUM(bytes) free FROM DBA_FREE_SPACE
GROUP BY tablespace_name
) a,
(SELECT tablespace_name,sum(case autoextensible when 'YES' then maxbytes else bytes end) maxbytes,SUM(bytes) total FROM DBA_DATA_FILES
GROUP BY tablespace_name
) b
WHERE a.tablespace_name=b.tablespace_name
order by "maxuse_%" desc;

8 rows selected.

1a. 使用 SCOTT 用户运行以下 PL/SQL 代码执行 SQL 语句。(PL/SQL 用来模拟使用绑定变量的 SQL 工作负载)本例实际使用 swingbench 模拟压力,因此下面这段代码被注释掉。
##var b1 number;

##declare
##v_num number;
## begin
## for i in 1..10000 loop
## :b1 := i;
## select c1 into v_num from t1 where c1 = :b1;
## end loop;
##end;
##/

1b. 从游标缓存中找到 parsing schema = SOE(swingbench 压测用户)的语句来填充 STS

DECLARE
c_sqlarea_cursor DBMS_SQLTUNE.SQLSET_CURSOR;
BEGIN
OPEN c_sqlarea_cursor FOR SELECT VALUE(p) FROM TABLE(DBMS_SQLTUNE.SELECT_CURSOR_CACHE('parsing_schema_name = ''SOE''', NULL, NULL, NULL, NULL, 1, NULL,'ALL')) p;
DBMS_SQLTUNE.LOAD_SQLSET (sqlset_name => 'MYSIMPLESTSUSINGAPI', populate_cursor => c_sqlarea_cursor);
END;
/

上述过程一般执行时间比较长,因此,通常放到后台执行。
这里我们看到加载的SQL明显增加了很多:

1c. 检查 STS 中捕获了多少 SQL 语句

COLUMN NAME FORMAT a20
COLUMN COUNT FORMAT 99999
COLUMN DESCRIPTION FORMAT a30

SELECT NAME, STATEMENT_COUNT AS "SQLCNT", DESCRIPTION FROM USER_SQLSET;

Results:

NAME SQLCNT DESCRIPTION
-------------------- ---------- ------------------------------
MYSIMPLESTSUSINGAPI 12 My Simple STS Using the API

1d. 显示 STS 的内容

COLUMN SQL_TEXT FORMAT a30
COLUMN SCH FORMAT a3
COLUMN ELAPSED FORMAT 999999999

SELECT SQL_ID, PARSING_SCHEMA_NAME AS "SCOTT", SQL_TEXT, ELAPSED_TIME AS "ELAPSED", BUFFER_GETS FROM TABLE( DBMS_SQLTUNE.SELECT_SQLSET( 'MYSIMPLESTSUSINGAPI' ) );

Results: (partial)

SQL_ID SCOTT SQL_TEXT ELAPSED BUFFER_GETS
------------- ------------------------------ ------------------------------ ---------- -----------
0af4p26041xkv SCOTT SELECT C1 FROM T1 WHERE C1 = : 169909252 18185689

在源库上执行打包SQL TUNING SET的操作,然后exp/imp到新库上

exec dbms_sqltune.create_stgtab_sqlset(table_name => 'STS_STBTAB' ,schema_name => 'DBMGR');

exec dbms_sqltune.pack_stgtab_sqlset(sqlset_name =>'MYSIMPLESTSUSINGAPI' ,sqlset_owner =>'SYS' ,staging_table_name =>'STS_STBTAB' ,staging_schema_owner => 'DBMGR' );

select count(*) from DBMGR.STS_STBTAB;

exp dbmgr/db1234DBA tables=STS_STBTAB file=/home/oracle/xtts/bak/exp_SQLSET_TAB.dmp log=/home/oracle/xtts/bak/exp_SQLSET_TAB.log FEEDBACK=1000 BUFFER=5000000

####
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Export done in ZHS16GBK character set and UTF8 NCHAR character set

About to export specified tables via Conventional Path ...
. . exporting table STS_STBTAB
61 rows exported
. . exporting table STS_STBTAB_CBINDS
0 rows exported
. . exporting table STS_STBTAB_CPLANS
260 rows exported
#####

2. 设置目标系统

###为了演示目的,这里将使用 STS 的捕获源作为同样的目标测试系统。

imp dbmgr/dbmgr fromuser=dbmgr touser=sys file=/home/oracle/xtts/bak/exp_SQLSET_TAB.dmp feedback=1000 log=/home/oracle/xtts/bak/imp_SQLSET_TAB.log BUFFER=5000000

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Tes

Export file created by EXPORT:V10.02.01 via conventional path
import done in ZHS16GBK character set and AL16UTF16 NCHAR character set
export server uses UTF8 NCHAR character set (possible ncharset conversion)
. importing DBMGR's objects into DBMGR

. . importing table "STS_STBTAB"
61 rows imported
. . importing table "STS_STBTAB_CBINDS"
0 rows imported
. . importing table "STS_STBTAB_CPLANS"
260 rows imported

exec dbms_sqltune.unpack_stgtab_sqlset(sqlset_name => 'MYSIMPLESTSUSINGAPI',sqlset_owner => 'SYS' ,replace => True,staging_table_name =>'STS_STBTAB' ,staging_schema_owner => 'DBMGR');

conn / as sysdba
SELECT NAME, STATEMENT_COUNT AS "SQLCNT", DESCRIPTION FROM USER_SQLSET;
MYSIMPLESTSUSINGAPI

3. 创建 SPA 任务

VARIABLE t_name VARCHAR2(100);
EXEC :t_name := DBMS_SQLPA.CREATE_ANALYSIS_TASK(sqlset_name => 'MYSIMPLESTSUSINGAPI', task_name => 'MYSPATASKUSINGAPI');
print t_name

Results:

T_NAME
-----------------
MYSPATASKUSINGAPI

4. 创建和执行变更前的 SPA 任务:execution_type 为 CONVERT SQLSET,采用 STS 转化方式直接生成变更前(10g)的 SPA Trial,不需要重新执行 SQL。

EXEC DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(task_name => 'MYSPATASKUSINGAPI', execution_type => 'CONVERT SQLSET', execution_name => 'MY_BEFORE_CHANGE');

###5. 做出系统变更
###CREATE INDEX t1_idx ON t1 (c1);

6. 创建和执行变更后的 SPA 任务:在 11.2.0.4 上 execution_type 为 TEST EXECUTE,实际试执行 STS 中的 SQL,生成变更后的 SPA Trial。

EXEC DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(task_name => 'MYSPATASKUSINGAPI', execution_type => 'TEST EXECUTE', execution_name => 'MY_AFTER_CHANGE');

7. 对比和分析变更前和变更后的性能

EXEC DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(task_name => 'MYSPATASKUSINGAPI', execution_type => 'COMPARE PERFORMANCE', execution_name => 'MY_EXEC_COMPARE_CPU', execution_params => dbms_advisor.arglist('comparison_metric', 'elapsed_time'));

EXEC DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(task_name => 'MYSPATASKUSINGAPI', execution_type => 'COMPARE PERFORMANCE', execution_name => 'MY_EXEC_COMPARE_BF', execution_params => dbms_advisor.arglist('comparison_metric', 'BUFFER_GETS'));

-- Generate the Report of cpu

set long 100000000 longchunksize 100000000 linesize 200 head off feedback off echo off TRIMSPOOL ON TRIMOUT ON
VAR rep CLOB;
EXEC :rep := DBMS_SQLPA.REPORT_ANALYSIS_TASK('MYSPATASKUSINGAPI', 'html', 'typical', 'all',execution_name=>'MY_EXEC_COMPARE_CPU');
##EXEC :rep := DBMS_SQLPA.REPORT_ANALYSIS_TASK('MYSPATASKUSINGAPI', 'html', 'typical', 'all');
##SPOOL C:\mydir\SPA_detailed.html
spool /tmp/dba/cpu.html
PRINT :rep
SPOOL off

-- Generate the Report of buffer

set long 100000000 longchunksize 100000000 linesize 200 head off feedback off echo off TRIMSPOOL ON TRIMOUT ON
VAR rep CLOB;
EXEC :rep := DBMS_SQLPA.REPORT_ANALYSIS_TASK('MYSPATASKUSINGAPI', 'html', 'typical', 'all',execution_name=>'MY_EXEC_COMPARE_BF');
spool /tmp/dba/bf.html
PRINT :rep
SPOOL off

HTML 格式的报告示例:

下面是 SPA 报告的一部分截屏。报告由3个部分组成,一个部分涉及到变更前和变更后的任务包括 范围,状态,执行起始时间,错误个数和比较的标准;第二个部分总结部分包括了变更带来的负载影响;第三个部分包括了详细的 SQL 信息,比如 SQLID 以及需要比较的度量,比如对负载的影响,执行的频率,以及在这个例子里在变更前后的执行时间这个度量。比如对于 SQL ID 0af4p26041xkv 来说,负载影响是97%。我们要实施的变更对于性能有好的影响,可以把执行时间从12766降低到29。我们还可以看到执行计划在变更后发生了变化。这些信息可以帮助 DBA 进一步关注在特定的问题或者性能退化上,对于当前的这个例子,影响是对性能有提升。

Image

下面的截屏显示了在增加了索引后执行计划的变化。这部分信息可以让 DBA 进一步关注在变更前后某个具体 SQL 的执行计划上。对任何 SQL 退化来说,这可以让 DBA 来清楚了解执行计划是如何变化的,并且可以进一步采取计划,比如使用 SQL Tuning Advisor 或者创建 SPM 基线。

Image

关于 Real Application Testing 的推荐资源清单:

• Oracle Real Application Testing Product Information
• Master Note for Real Application Testing Option (Doc ID 1464274.1)
• Database Testing: Best Practices (Doc ID 1535885.1)
• Mandatory Patches for Database Testing Functionality for Current and Earlier Releases (Doc ID 560977.1)

##### sample 2

1 恢复平台 搭建10g性能测试环境 在恢复平台恢复出10g的生产库,作为性能测试环境,参数保持和生产库一致
2 11g性能测试库 搭建11g性能测试环境 在linux主机进行跨平台迁移,搭建11g性能测试环境
3 生产环境 抓取sqlset "在生产环境执行附件sql,对生产性能影响不大。

抓取持续一周。如果有异常,可以立即停掉。"
4 生产环境 "第三步完成后,
创建中间表,将sqlset打包到中间表" "exec dbms_sqltune.create_stgtab_sqlset(table_name => 'STS_STBTAB' ,schema_name => 'DBMGR');
exec dbms_sqltune.pack_stgtab_sqlset(sqlset_name =>'STS_SQLSET' ,sqlset_owner =>'DBMGR' ,staging_table_name =>'STS_STBTAB' ,staging_schema_owner => 'DBMGR' );" 792485

5 生产环境 导出sqlset "expdp导出DBMGR.STS_STBTAB

select count(*) from DBMGR.STS_STBTAB;

exp dbmgr/db1234DBA tables=STS_STBTAB file=/db/cps/archivelog/exp_SQLSET_TAB.dmp log=/db/cps/archivelog/exp_SQLSET_TAB.log FEEDBACK=1000 BUFFER=5000000
" 如果cow库有sqlset,则直接在cow库按以下拆分步骤操作,sqlset完后拆分后再导入已拆分的sqlset到11g性能测试库。
将sqlset导入11g性能测试库

6 11g性能测试库 导入到11g性能测试库 impdp导入DBMGR.STS_STBTAB

########-------------将优化集打包到stgtab表里面 中转表过滤 ref    http://ju.outofmemory.cn/entry/77139

方法1:

转换成中转表之后,我们可以再做一次去除重复的操作。当然,你也可以根据module来删除一些不必要的游标。

  1. delete from SPA.SQLSET_TAB a where rowid !=(select max(rowid) from SQLSET_TAB b where a.FORCE_MATCHING_SIGNATURE=b.FORCE_MATCHING_SIGNATURE and a.FORCE_MATCHING_SIGNATURE<>0);
  2. delete from SPA.SQLSET_TAB where MODULE='PL/SQL Developer';

方法2:

create table sts_b as select distinct(to_char(s.force_matching_signature)) b from DBMGR.STS_STBTAB_08 s;

create index sts_b_1_force on STS_STBTAB_08(to_char(force_matching_signature));

DECLARE
T VARCHAR2(50);
cursor STSCUR IS select b from sts_b;
STSCUR_1 STS_STBTAB_08%ROWTYPE;
BEGIN
OPEN STSCUR;
LOOP
fetch STSCUR into T;
EXIT WHEN STSCUR%NOTFOUND;
-- DBMS_OUTPUT.PUT_LINE(T);
select * into STSCUR_1 from ( (select * from STS_STBTAB_08 st where (to_char(st.force_matching_signature)) =T ) order by cpu_time desc) where rownum < 2;
delete STS_STBTAB_08 where (to_char(force_matching_signature)) =T;
insert into STS_STBTAB_08 values STSCUR_1;
commit;
-- DBMS_OUTPUT.PUT_LINE(STSCUR_1.SQL_ID);
END LOOP;
CLOSE STSCUR;
END;

7 11g性能测试库 sqlset解包 exec dbms_sqltune.unpack_stgtab_sqlset(sqlset_name => 'STS_SQLSET',sqlset_owner => 'DBMGR' ,replace => True,staging_table_name =>'STS_STBTAB' ,staging_schema_owner => 'DBMGR');
将sqlset导入10g测试库

8 10g测试环境 授权 ##grant all on spa_sqlpa to dbmgr;

grant all on dbms_sqlpa to dbmgr;
9 10g测试环境 导出导入 "impdp导入DBMGR.STS_STBTAB

imp dbmgr/dbmgr fromuser=dbmgr touser=dbmgr file=/oraclelv/exp_SQLSET_TAB.dmp feedback=1000 log=/oraclelv/imp_SQLSET_TAB.log BUFFER=5000000
"
10 10g测试环境 解包sqlset exec dbms_sqltune.unpack_stgtab_sqlset(sqlset_name => 'STS_SQLSET',sqlset_owner => 'DBMGR' ,replace => True,staging_table_name =>'STS_STBTAB' ,staging_schema_owner => 'DBMGR');

11 11g性能测试库 创建DBLINK "create public database link to_10g connect to dbmgr identified by xxxxx using 'xxxxxxx'

create public database link to_10g connect to dbmgr identified by db1234DBA
using ' (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST =25.10.0.199)(PORT = 1539)) (CONNECT_DATA =(sid = db)))';" 创建到10g测试库的dblink

第一次执行spa回放,获得10g性能测试环境的数据
12 11性能测试库 创建task "declare
mytask varchar2(100);
begin
mytask := dbms_sqlpa.create_analysis_task(sqlset_name => 'STS_SQLSET',sqlset_owner => 'DBMGR',task_name => 'TASK_10G');
END;
/
"
13 11g性能测试库 生成第一次执行task的脚本 (通过db_link 方式)

"vi spa_10g.sh
sqlplus -S /nolog <<SQLEnd
conn / as sysdba
exec DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(task_name=>'TASK_10G',execution_type=>'test execute',execution_name=>'spa10g',execution_params =>dbms_advisor.argList('DATABASE_LINK','TO_10G','EXECUTE_COUNT',5) ,execution_desc => 'before_change');
exit
SQLEnd" 可以看到SPA_DIR下有BEF_TASK_SQLSET_NO_i.sh脚本

14 11g性能测试库 "执行task,
执行第一次spa回放" "nohup sh spa_10g.sh > spa_10g.log &
" 发起目标库执行第一次spa回放

15 11g性能测试库 查询回放进度 "SELECT owner,task_name,execution_name,a.execution_type,execution_start,execution_last_modified,execution_end,status,b.SOFAR,b.START_TIME,b.LAST_UPDATE_TIME
from dba_advisor_executions a, v$advisor_progress b where a.execution_name like 'spa10g%';" 计算 792485 笔数据需要多久完成

第二次执行spa回放,获得11g性能测试环境的数据

16 11g性能测试库 创建task "declare
mytask varchar2(100);
begin
mytask := dbms_sqlpa.create_analysis_task(sqlset_name => 'STS_SQLSET',sqlset_owner => 'DBMGR',task_name => 'TASK_11G');
END;
/
"

17 11g性能测试库 生成第二次执行task的脚本 "

vi spa_11g.sh
sqlplus -S /nolog <<SQLEnd
conn / as sysdba
exec DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(task_name=>'TASK_11G',execution_type=>'test execute',execution_name=>'spa11g_1',execution_params =>dbms_advisor.argList('EXECUTE_COUNT',5) ,execution_desc => 'after_change');
exit
SQLEnd" 可以看到SPA_DIR下有AFT_TASK_SQLSET_NO_i.sh脚本

18 11g性能测试库 "执行task,
执行第二次spa回放" "nohup sh spa_11g.sh > spa_11g.log &
" 发起目标库执行第二次spa回放

19 11g性能测试库 查询回放进度

"SELECT owner,task_name,execution_name,a.execution_type,execution_start,execution_last_modified,execution_end,status,b.SOFAR,b.START_TIME,b.LAST_UPDATE_TIME
from dba_advisor_executions a, v$advisor_progress b where a.execution_name like 'spa11g%';"
分析回放结果

20 11g性能测试库 取出buffer gets变大的sql进行分析

21 11g性能测试库 plan改变的sql

附录:


20.

Select *
From (Select b.*, dbms_lob.substr(st.sql_text, 3000) sql_text
From SYS.Wrh$_Sqltext st,
(Select task_name,
sql_id,
executions,
detal_buffer_gets,
bf_buffer_gets,
af_buffer_gets,
bf_plan_hash_value,
bf_rows_processed,
af_plan_hash_value,
af_ROWS_PROCESSED
From (Select task_name,
sql_id,
bf_executions executions,
round(af_buffer_gets / af_EXECUTIONS) -
round(bf_buffer_gets / bf_executions) detal_buffer_gets,
round(bf_buffer_gets / bf_executions) bf_buffer_gets,
round(af_buffer_gets / af_EXECUTIONS) af_buffer_gets,
bf_plan_hash_value,
bf_rows_processed / bf_executions bf_rows_processed,
af_plan_hash_value,
af_ROWS_PROCESSED / af_EXECUTIONS af_ROWS_PROCESSED
From (Select bf.task_name,
bf.sql_id,
bf.executions bf_executions,
bf.plan_hash_value bf_plan_hash_value,
bf.buffer_gets bf_buffer_gets,
bf.rows_processed bf_rows_processed,
af.plan_hash_value af_plan_hash_value,
af.buffer_gets af_buffer_gets,
af.executions af_executions,
af.rows_processed af_rows_processed
From dba_advisor_sqlstats af,
dba_advisor_sqlstats bf
Where af.execution_name='SPA11G'
And af.task_name like '%TASK_11G%'
And bf.task_name like '%TASK_10G%'
And bf.execution_name='SPA10G'
And bf.sql_id = af.sql_id))
Where detal_buffer_gets > 0

and af_plan_hash_value != bf_plan_hash_value) b
Where st.sql_id = b.sql_id
Order By detal_buffer_gets Desc)
Where sql_text Not Like '%Analyze(%'
And sql_text Not Like '%SELECT /* DS_SVC */%'
And sql_text Not Like '%/* OPT_DYN_SAMP */%';

21.

Select st.sql_id,
sst.executions,
dbms_lob.substr(st.sql_text, 3000) sql_text
From sys.wrh$_sqltext st,
dba_sqlset_statements sst,
(Select Distinct sql_id
From (Select sql_id,
operation,
options,
object_name,
object_alias,
object_type,
Id,
parent_id,
depth
From dba_advisor_sqlplans af
Where af.task_name = 'TASK_11G' And af.execution_name = 'SPA11G'
Minus
Select sql_id,
operation,
options,
object_name,
object_alias,
object_type,
Id,
parent_id,
depth
From dba_advisor_sqlplans bf
Where bf.task_name = 'TASK_10G' And bf.execution_name = 'SPA10G'
)) cp
Where st.sql_id = cp.sql_id
And sst.sql_id = cp.sql_id
And st.sql_text Not Like '%Analyze(%'
And st.sql_text Not Like '%SELECT /* DS_SVC */%'
And st.sql_text Not Like '%/* OPT_DYN_SAMP */%'
And sst.sqlset_name Like 'STS_SQLSETNO%'
;

###refer 1

http://www.cnblogs.com/jyzhao/p/9210517.html

生产端:Windows 2008 + Oracle 10.2.0.5
测试端:RHEL 6.5 + Oracle 11.2.0.4
需求:因为Oracle跨越大版本,优化器、新特性变动较多,需要进行SPA测试比对前后期性能差异。
说明:本文是根据DBA Travel的SPA参考规范文档(在此致谢Travel同学),结合实际某客户需求整理的整个测试过程。为了更真实的反映整个过程,在生产端使用swingbench压力测试软件持续运行了一段时间,模拟真实的业务压力。

1.SPA测试流程

为了尽可能的减小对正式生产库的性能影响,本次SPA测试只是从AWR资料库中的SQL数据转化而来的SQL Tuning Set进行整体的SQL性能测试。

本次SPA测试主要分为以下几个步骤:
在生产库端:

  1. 环境准备:创建SPA测试专用用户
  2. 采集数据: a) 在生产库转化AWR中SQL为SQL Tuning Set b) 在生产库从现有SQL Tuning Set提取SQL
  3. 导出数据:打包(pack)转化后的SQL Tuning Set,并导出传输到测试服务器

在测试库端:

  1. 环境准备:创建SPA测试专用用户
  2. 测试准备:导入SQL Tuning Set表,并解包(unpack),创建SPA分析任务
  3. 前期性能:从SQL Tuning Set中转化得出10g的性能Trail
  4. 后期性能:在11g测试数据库中执行SQL Tuning Set中SQL,生成11g性能Trail
  5. 对比分析:执行对比分析任务,分别按执行时间,CPU时间和逻辑读三个维度进行
  6. 汇总报告:取出对比报告,对每个维度分别取出All,Unsupport,Error 3类报告

总结报告:

  1. 总结报告:分析汇总报告,优化其中的性能下降SQL,编写SPA测试报告

2.SPA操作流程

2.1 本文使用的命名规划

类型            规划
SQLSET          ORCL_SQLSET_201806
Analysis Task   SPA_TASK_201806
STGTAB          ORCL_STSTAB_201806
Dmpfile         ORCL_STSTAB_201806.dmp

2.2 生产端:环境准备

conn / as sysdba
CREATE USER SPA IDENTIFIED BY SPA DEFAULT TABLESPACE SYSAUX;
GRANT DBA TO SPA;
GRANT ADVISOR TO SPA;
GRANT SELECT ANY DICTIONARY TO SPA;
GRANT ADMINISTER SQL TUNING SET TO SPA;

2.3 生产端:采集数据
1). 获取AWR快照的边界ID

SET LINES 188 PAGES 1000
COL SNAP_TIME FOR A22
COL MIN_ID NEW_VALUE MINID
COL MAX_ID NEW_VALUE MAXID
SELECT MIN(SNAP_ID) MIN_ID, MAX(SNAP_ID) MAX_ID
FROM DBA_HIST_SNAPSHOT
WHERE END_INTERVAL_TIME > trunc(sysdate)-10
ORDER BY 1;

2). 创建SQL Set

--连接用户
conn SPA/SPA

--如果之前有这个SQLSET的名字,可以这样删除
EXEC DBMS_SQLTUNE.DROP_SQLSET (SQLSET_NAME => 'ORCL_SQLSET_201806', SQLSET_OWNER => 'SPA');

--新建SQLSET:ORCL_SQLSET_201806
EXEC DBMS_SQLTUNE.CREATE_SQLSET ( -
SQLSET_NAME => 'ORCL_SQLSET_201806', -
DESCRIPTION => 'SQL Set Create at : '||TO_CHAR(SYSDATE, 'YYYY-MM-DD HH24:MI:SS'), -
SQLSET_OWNER => 'SPA');

3). 转化AWR数据中的SQL数据,将其中的SQL载入到SQL Set中

DECLARE
SQLSET_CUR DBMS_SQLTUNE.SQLSET_CURSOR;
BEGIN
OPEN SQLSET_CUR FOR
SELECT VALUE(P) FROM TABLE(
DBMS_SQLTUNE.SELECT_WORKLOAD_REPOSITORY( 16, 24,
'PARSING_SCHEMA_NAME NOT IN (''SYS'', ''SYSTEM'')',
NULL, NULL, NULL, NULL, 1, NULL, 'ALL')) P;
DBMS_SQLTUNE.LOAD_SQLSET(
SQLSET_NAME => 'ORCL_SQLSET_201806',
SQLSET_OWNER => 'SPA',
POPULATE_CURSOR => SQLSET_CUR,
LOAD_OPTION => 'MERGE',
UPDATE_OPTION => 'ACCUMULATE');
CLOSE SQLSET_CUR;
END;
/

4). 打包SQL Set

DROP TABLE SPA.ORCL_STSTAB_201806;
EXEC DBMS_SQLTUNE.CREATE_STGTAB_SQLSET ('ORCL_STSTAB_201806', 'SPA', 'SYSAUX');
EXEC DBMS_SQLTUNE.PACK_STGTAB_SQLSET ( -
SQLSET_NAME => 'ORCL_SQLSET_201806', -
SQLSET_OWNER => 'SPA', -
STAGING_TABLE_NAME => 'ORCL_STSTAB_201806', -
STAGING_SCHEMA_OWNER => 'SPA');

2.4 生产端:导出数据
1). 在操作系统中,导出打包后的SQL Set数据

cat > ./export_sqlset_201806.par <<EOF
USERID='SPA/SPA'
FILE=ORCL_STSTAB_201806.dmp
LOG=exp_spa_sqlset_201806.log
TABLES=ORCL_STSTAB_201806
DIRECT=N
BUFFER=10240000
STATISTICS=NONE
EOF

注意:这里DIRECT=Y参数在遇到问题后尝试改为了DIRECT=N,默认也是N。

set NLS_LANG=AMERICAN_AMERICA.US7ASCII
exp PARFILE=export_sqlset_201806.par

注意:NLS_LANG变量是Oracle的变量,设置字符集和数据库字符集一致,避免发生错误转换。

2). 将导出后的Dump文件传输到测试服务器
将 ORCL_STSTAB_201806.dmp 传输到 目标服务器 /orabak/spa下。

2.5 测试端:环境准备

conn / as sysdba
CREATE USER SPA IDENTIFIED BY SPA DEFAULT TABLESPACE SYSAUX;
GRANT DBA TO SPA;
GRANT ADVISOR TO SPA;
GRANT SELECT ANY DICTIONARY TO SPA;
GRANT ADMINISTER SQL TUNING SET TO SPA;

2.6 测试端:测试准备
在进行SPA测试前需要准备测试环境,包括导入生产库中的SQL Set,对其进行解包(unpack)操作,并创建SPA分析任务。
1). 在操作系统中,执行导入命令,导入SQL Set表

cat > ./import_sqlset_201806.par <<EOF
USERID='SPA/SPA'
FILE=ORCL_STSTAB_201806.dmp
LOG=imp_spa_sqlset_201806.log
FULL=Y
EOF

export NLS_LANG=AMERICAN_AMERICA.US7ASCII
imp PARFILE=import_sqlset_201806.par

2). 解包(unpack)SQL Set

conn SPA/SPA
EXEC DBMS_SQLTUNE.UNPACK_STGTAB_SQLSET (-
SQLSET_NAME => 'ORCL_SQLSET_201806', -
SQLSET_OWNER => 'SPA', -
REPLACE => TRUE, -
STAGING_TABLE_NAME => 'ORCL_STSTAB_201806', -
STAGING_SCHEMA_OWNER => 'SPA');

3). 创建SPA分析任务

VARIABLE SPA_TASK  VARCHAR2(64);
EXEC :SPA_TASK := DBMS_SQLPA.CREATE_ANALYSIS_TASK( -
TASK_NAME => 'SPA_TASK_201806', -
DESCRIPTION => 'SPA Analysis task at : '||TO_CHAR(SYSDATE, 'YYYY-MM-DD HH24:MI:SS'), -
SQLSET_NAME => 'ORCL_SQLSET_201806', -
SQLSET_OWNER => 'SPA');

2.7 测试端:前期性能
在测试服务器中,可以直接从SQL Tuning Set中转化得到所有SQL在10g数据库中的执行效率,得到10g中的SQL Trail。

EXEC DBMS_SQLPA.EXECUTE_ANALYSIS_TASK( -
TASK_NAME => 'SPA_TASK_201806', -
EXECUTION_NAME => 'EXEC_10G_201806', -
EXECUTION_TYPE => 'CONVERT SQLSET', -
EXECUTION_DESC => 'Convert 10g SQLSET for SPA Task at : '||TO_CHAR(SYSDATE, 'YYYY-MM-DD HH24:MI:SS'));

2.8 测试端:后期性能
在测试服务器(运行11g数据库)中,需要在本地数据库(11g)测试运行SQL Tuning Set中的SQL语句,分析所有语句在11g环境中的执行效率,得到11g中的SQL Trail。

vi spa2.sh

echo "WARNING: SPA2 Start @`date`"
sqlplus SPA/SPA << EOF!
EXEC DBMS_SQLPA.EXECUTE_ANALYSIS_TASK( -
TASK_NAME => 'SPA_TASK_201806', -
EXECUTION_NAME => 'EXEC_11G_201806', -
EXECUTION_TYPE => 'TEST EXECUTE', -
EXECUTION_DESC => 'Execute SQL in 11g for SPA Task at : '||TO_CHAR(SYSDATE, 'YYYY-MM-DD HH24:MI:SS'));
exit
EOF!
echo "WARNING:SPA2 OK @`date`" nohup sh spa2.sh &

2.9 测试端:性能对比 
得到两次SQL Trail之后,可以对比两次Trial之间的SQL执行性能,可以从不同的维度对两次Trail中的所有SQL进行对比分析,主要关注的维度有:SQL执行时间,SQL执行的CPU时间,SQL执行的逻辑读。

1). 对比两次Trail中的SQL执行时间

conn SPA/SPA
EXEC DBMS_SQLPA.EXECUTE_ANALYSIS_TASK( -
TASK_NAME => 'SPA_TASK_201806', -
EXECUTION_NAME => 'COMPARE_ET_201806', -
EXECUTION_TYPE => 'COMPARE PERFORMANCE', -
EXECUTION_PARAMS => DBMS_ADVISOR.ARGLIST( -
'COMPARISON_METRIC', 'ELAPSED_TIME', -
'EXECUTE_FULLDML', 'TRUE', -
'EXECUTION_NAME1','EXEC_10G_201806', -
'EXECUTION_NAME2','EXEC_11G_201806'), -
EXECUTION_DESC => 'Compare SQLs between 10g and 11g at :'||TO_CHAR(SYSDATE, 'YYYY-MM-DD HH24:MI:SS'));

2). 对比两次Trail中的SQL执行的CPU时间

EXEC DBMS_SQLPA.EXECUTE_ANALYSIS_TASK( -
TASK_NAME => 'SPA_TASK_201806', -
EXECUTION_NAME => 'COMPARE_CT_201806', -
EXECUTION_TYPE => 'COMPARE PERFORMANCE', -
EXECUTION_PARAMS => DBMS_ADVISOR.ARGLIST( -
'COMPARISON_METRIC', 'CPU_TIME', -
'EXECUTION_NAME1','EXEC_10G_201806', -
'EXECUTION_NAME2','EXEC_11G_201806'), -
EXECUTION_DESC => 'Compare SQLs between 10g and 11g at :'||TO_CHAR(SYSDATE, 'YYYY-MM-DD HH24:MI:SS'));

3). 对比两次Trail中的SQL执行的逻辑读

EXEC DBMS_SQLPA.EXECUTE_ANALYSIS_TASK( -
TASK_NAME => 'SPA_TASK_201806', -
EXECUTION_NAME => 'COMPARE_BG_201806', -
EXECUTION_TYPE => 'COMPARE PERFORMANCE', -
EXECUTION_PARAMS => DBMS_ADVISOR.ARGLIST( -
'COMPARISON_METRIC', 'BUFFER_GETS', -
'EXECUTION_NAME1','EXEC_10G_201806', -
'EXECUTION_NAME2','EXEC_11G_201806'), -
EXECUTION_DESC => 'Compare SQLs between 10g and 11g at :'||TO_CHAR(SYSDATE, 'YYYY-MM-DD HH24:MI:SS'));

2.10 测试端:汇总报告
执行对比分析任务之后,就可以取出对应的对比分析任务的结果报告,主要关注的报告类型有:汇总SQL报告,错误SQL报告以及不支持SQL报告。

a) 获取执行时间全部报告

conn SPA/SPA
ALTER SESSION SET EVENTS='31156 TRACE NAME CONTEXT FOREVER, LEVEL 0X400';
SET LINES 1111 PAGES 50000 LONG 1999999999 TRIM ON TRIMS ON SERVEROUTPUT ON SIZE UNLIMITED
SPOOL elapsed_all.html
SELECT XMLTYPE(DBMS_SQLPA.REPORT_ANALYSIS_TASK('SPA_TASK_201806','HTML','ALL','ALL',NULL,1000,'COMPARE_ET_201806')).GETCLOBVAL(0,0) FROM DUAL;
spool off

b) 获取执行时间下降报告

ALTER SESSION SET EVENTS='31156 TRACE NAME CONTEXT FOREVER, LEVEL 0X400';
SET LINES 1111 PAGES 50000 LONG 1999999999 TRIM ON TRIMS ON SERVEROUTPUT ON SIZE UNLIMITED
SPOOL elapsed_regressed.html
SELECT XMLTYPE(DBMS_SQLPA.REPORT_ANALYSIS_TASK('SPA_TASK_201806','HTML','REGRESSED','ALL',NULL,1000,'COMPARE_ET_201806')).GETCLOBVAL(0,0) FROM DUAL;
spool off

c) 获取逻辑读全部报告

ALTER SESSION SET EVENTS='31156 TRACE NAME CONTEXT FOREVER, LEVEL 0X400';
SET LINES 1111 PAGES 50000 LONG 1999999999 TRIM ON TRIMS ON SERVEROUTPUT ON SIZE UNLIMITED
SPOOL buffer_all.html
SELECT XMLTYPE(DBMS_SQLPA.REPORT_ANALYSIS_TASK('SPA_TASK_201806','HTML','ALL','ALL',NULL,1000,'COMPARE_BG_201806')).GETCLOBVAL(0,0) FROM DUAL;
spool off

d) 获取逻辑读下降报告

ALTER SESSION SET EVENTS='31156 TRACE NAME CONTEXT FOREVER, LEVEL 0X400';
SET LINES 1111 PAGES 50000 LONG 1999999999 TRIM ON TRIMS ON SERVEROUTPUT ON SIZE UNLIMITED
SPOOL buffer_regressed.html
SELECT XMLTYPE(DBMS_SQLPA.REPORT_ANALYSIS_TASK('SPA_TASK_201806','HTML','REGRESSED','ALL',NULL,1000,'COMPARE_BG_201806')).GETCLOBVAL(0,0) FROM DUAL;
spool off

e) 获取错误报告

ALTER SESSION SET EVENTS='31156 TRACE NAME CONTEXT FOREVER, LEVEL 0X400';
SET LINES 1111 PAGES 50000 LONG 1999999999 TRIM ON TRIMS ON SERVEROUTPUT ON SIZE UNLIMITED
SPOOL error.html
SELECT XMLTYPE(DBMS_SQLPA.REPORT_ANALYSIS_TASK('SPA_TASK_201806','HTML','ERRORS','ALL',NULL,1000,'COMPARE_ET_201806')).GETCLOBVAL(0,0) FROM DUAL;
spool off

f) 获取不支持报告

ALTER SESSION SET EVENTS='31156 TRACE NAME CONTEXT FOREVER, LEVEL 0X400';
SET LINES 1111 PAGES 50000 LONG 1999999999 TRIM ON TRIMS ON SERVEROUTPUT ON SIZE UNLIMITED
SPOOL unsupported.html
SELECT XMLTYPE(DBMS_SQLPA.REPORT_ANALYSIS_TASK('SPA_TASK_201806','HTML','UNSUPPORTED','ALL',NULL,1000,'COMPARE_ET_201806')).GETCLOBVAL(0,0) FROM DUAL;
spool off

g) 获取执行计划变化报告

ALTER SESSION SET EVENTS='31156 TRACE NAME CONTEXT FOREVER, LEVEL 0X400';
SET LINES 1111 PAGES 50000 LONG 1999999999 TRIM ON TRIMS ON SERVEROUTPUT ON SIZE UNLIMITED
SPOOL changed_plans.html
SELECT XMLTYPE(DBMS_SQLPA.REPORT_ANALYSIS_TASK('SPA_TASK_201806','HTML','CHANGED_PLANS','ALL',NULL,1000,'COMPARE_ET_201806')).GETCLOBVAL(0,0) FROM DUAL;
spool off

3.SPA环境清理

3.1 查看SQLSET

conn SPA/SPA
select owner,name,STATEMENT_COUNT from dba_sqlset;

3.2 查看分析任务

select owner,task_id,task_name,created,LAST_MODIFIED,STATUS from DBA_ADVISOR_TASKS  where task_name like upper('%&task_name%') order by 2;
SPA_TASK_201806

3.3 删除ANALYSIS_TASK

exec dbms_sqlpa.DROP_ANALYSIS_TASK('SPA_TASK_201806');

3.4 删除sqlset

exec dbms_sqltune.DROP_SQLSET('ORCL_SQLSET_201806');

如果删除时出现异常情况"ORA-13757",提示STS是活动的,可以尝试使用下面SQL修改后再进行删除。

delete from wri$_sqlset_references
where sqlset_id in (select id
from wri$_sqlset_definitions
where name in ('ORCL_SQLSET_201806','ORCL_SQLSET_201806'));
commit;

3.5 删除用户
删除SPA用户(两端)

drop user spa cascade;
AlfredZhao©版权所有「从Oracle起航,领略精彩的IT技术。」

####refer

http://www.lunar2013.com/2015/05/spasql%E6%80%A7%E8%83%BD%E5%88%86%E6%9E%90%E5%99%A8%E7%9A%84%E4%BD%BF%E7%94%A8-1-%E6%94%B6%E9%9B%86%E5%92%8C%E8%BF%81%E7%A7%BBsql-tuning-set.html

### section 1
SPA(SQL Performance Analyzer , SQL 性能分析器),是11g引入的新功能,主要用于预测潜在的更改对 SQL 查询工作量的性能影响。
一般有几种情况下,我们会建议做SPA:
1,OS版本发生变化
2,硬件发生变化
3,数据库版本的升级
4,实施某些优化建议
5, 收集统计信息
6,更改数据库参数
等等
.
SPA的主要实施步骤如下:
1, 在生产系统上捕捉SQL负载,并生成SQL Tuning Set;
2, 创建一个中转表,将SQL Tuning Set导入到中转表,导出中转表并传输到测试库;
3, 导入中转表,并解压中转表的数据到SQL Tuning Set;
4, 创建SPA任务,先生成10g的trail,然后在11g中再生成11g的trail;
5, 执行比较任务,再生成SPA报告;
6, 分析性能退化的SQL语句;
.
我这里的例子是,将一个数据库从 10.2.0.1 升级到 11.2.0.4。
1,在源库创建spa用户:

create user LUNAR identified by LUNAR;
grant connect,resource,dba to LUNAR;

10:38:37 lunar@LUNAR>select username,default_tablespace,temporary_tablespace
10:41:41 2 from dba_users
10:41:41 3 where username in ('LUNAR','SPA')
10:41:41 4 order by 1,2;

USERNAME DEFAULT_TABLESPACE TEMPORARY_TABLESPACE
------------------------------ ------------------------------ ------------------------------
LUNAR USERS TEMP

Elapsed: 00:00:00.27
10:41:41 lunar@LUNAR>
2. Check that SYSAUX has enough free space

09:26:38 sys@LUNAR>@ts

Name TS Type All Size Max Size Free Size Max Free Pct. Free Max Free%
------------------------------ ------------ ---------- ---------- ---------- ---------- --------- ---------
UNDOTBS1 UNDO 148,433 221,521 19,467 92,555 13 42
LUNAR_IDX PERMANENT 352,256 352,256 84,272 84,272 24 24
LUNAR_DAT PERMANENT 1,048,576 1,048,576 258,728 258,728 25 25
LUNAR_TESTS PERMANENT 251,904 251,904 139,424 139,424 55 55
LUNAR_TESTS_IDX PERMANENT 329,728 329,728 196,351 196,351 60 60
USERS PERMANENT 4,096 32,768 2,582 31,254 63 95
SYSAUX PERMANENT 4,096 32,768 2,786 31,458 68 96
SYSTEM PERMANENT 4,096 32,768 2,882 31,554 70 96

8 rows selected.

Elapsed: 00:00:00.07
09:26:40 sys@LUNAR>
3. Create the SQL Tuning Set:

conn LUNAR/LUNAR
10:33:30 lunar@LUNAR>exec dbms_sqltune.create_sqlset('Lunar_11201STS_LUNAR');

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.11
10:34:25 lunar@LUNAR>
4. Load statements into the SQL Tuning Set

1). Load from AWR snapshots: a) find the available snapshot range
11:31:55 lunar@LUNAR>select INSTANCE_NUMBER ,min(snap_id),max(snap_id) from dba_hist_snapshot group by INSTANCE_NUMBER;

INSTANCE_NUMBER MIN(SNAP_ID) MAX(SNAP_ID)
--------------- ------------ ------------
1 19355 19555

Elapsed: 00:00:00.01
11:32:12 lunar@LUNAR>
b) Load all queries captured between the two snapshots (this step took about 4 minutes):

11:33:12 lunar@LUNAR>declare
11:33:14 2 own VARCHAR2(30) := 'LUNAR';
11:33:14 3 bid NUMBER := '&begin_snap';
11:33:14 4 eid NUMBER := '&end_snap';
11:33:14 5 stsname VARCHAR2(30) :='Lunar_11201STS_LUNAR';
11:33:14 6 sts_cur dbms_sqltune.sqlset_cursor;
11:33:14 7 begin
11:33:14 8 open sts_cur for
11:33:14 9 select value(P) from table(dbms_sqltune.select_workload_repository(bid,eid, null, null, null, null, null, 1, null, 'ALL')) P;
11:33:14 10 dbms_sqltune.load_sqlset(sqlset_name => stsname,populate_cursor => sts_cur,load_option => 'MERGE');
11:33:14 11 end;
11:33:14 12 /
Enter value for begin_snap: 19355
old 3: bid NUMBER := '&begin_snap';
new 3: bid NUMBER := '19355';
Enter value for end_snap: 19555
old 4: eid NUMBER := '&end_snap';
new 4: eid NUMBER := '19555';

PL/SQL procedure successfully completed.

Elapsed: 00:03:07.05
11:36:29 lunar@LUNAR>
c) Verify the SQL Tuning Set:

10:52:58 lunar@LUNAR>select NAME,OWNER,CREATED,STATEMENT_COUNT, LAST_MODIFIED FROM DBA_SQLSET;

NAME OWNER CREATED STATEMENT_COUNT LAST_MODIFIED
------------------------------ ------------------------------ ------------------- --------------- -------------------
Lunar_11201STS_LUNAR LUNAR 2015-04-18 10:34:25 921 2015-04-18 10:38:27

Elapsed: 00:00:00.06
10:53:03 lunar@LUNAR>
2). If needed, you can also load only statements with a specific sql_id and plan_hash_value from the AWR snapshots; below we simply confirm that two such statements are present in the STS (a hedged sketch of a targeted load follows this output):

12:06:31 lunar@LUNAR>SELECT sql_id, substr(sql_text, 1, 50) sql
12:06:32 2 FROM TABLE( DBMS_SQLTUNE.select_sqlset ('Lunar_11201STS_LUNAR'))
12:06:32 3 where sql_id in ('34xbj7bv7suyk','gxsfh4gm276d3');

SQL_ID SQL
------------- --------------------------------------------------
34xbj7bv7suyk UPDATE "LUNAR_PRD".MDRT_1472A$ set info= :1 where ro
gxsfh4gm276d3 update LUNARINFO t set TIME=:1, LUNARMARK=:2, LO

Elapsed: 00:00:01.14
12:06:34 lunar@LUNAR>
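
If you really want to restrict the load to one statement and one plan, a targeted load can be done through the basic_filter argument of SELECT_WORKLOAD_REPOSITORY. This is only a sketch: the filter expression and the plan_hash_value below are illustrative, and the snapshot range is the one used above.

DECLARE
  sts_cur dbms_sqltune.sqlset_cursor;
BEGIN
  OPEN sts_cur FOR
    SELECT VALUE(p)
    FROM TABLE(dbms_sqltune.select_workload_repository(
                 19355, 19555,
                 'sql_id = ''34xbj7bv7suyk'' and plan_hash_value = 1234567890',  -- illustrative filter
                 NULL, NULL, NULL, NULL, 1, NULL, 'ALL')) p;
  dbms_sqltune.load_sqlset(sqlset_name => 'Lunar_11201STS_LUNAR', populate_cursor => sts_cur, load_option => 'MERGE');
END;
/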
3). Load from the current cursor cache

DECLARE
cur sys_refcursor;
BEGIN
OPEN cur FOR
SELECT value(P)
FROM TABLE(dbms_sqltune.select_cursor_cache('parsing_schema_name <> ''SYS''',NULL,NULL,NULL,NULL,1,NULL,'ALL')) p;
dbms_sqltune.load_sqlset('Lunar_11201STS_LUNAR', cur);
CLOSE cur;
END;
/
The load above usually takes quite a while, so it is typically run in the background (one way to do that is sketched below).
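
One way to push the load into the background, sketched here with DBMS_SCHEDULER (the job name is arbitrary and the inner block is the same load as above):

BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name   => 'LOAD_STS_JOB',
    job_type   => 'PLSQL_BLOCK',
    job_action => q'[DECLARE
                       cur sys_refcursor;
                     BEGIN
                       OPEN cur FOR
                         SELECT value(P)
                         FROM TABLE(dbms_sqltune.select_cursor_cache('parsing_schema_name <> ''SYS''', NULL, NULL, NULL, NULL, 1, NULL, 'ALL')) p;
                       dbms_sqltune.load_sqlset('Lunar_11201STS_LUNAR', cur);
                       CLOSE cur;
                     END;]',
    enabled    => TRUE,
    auto_drop  => TRUE);
END;
/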
Here we can see that the number of SQL statements loaded into the STS has grown a lot:

12:55:02 sys@LUNAR>select NAME,OWNER,CREATED,STATEMENT_COUNT FROM DBA_SQLSET;

NAME OWNER CREATED STATEMENT_COUNT
------------------------------ ------------------------------ ------------------- ---------------
Lunar_11201STS_LUNAR LUNAR 2015-04-18 11:31:55 41928

12:57:11 sys@LUNAR>

Once all of the above is done, we can migrate this SQL Tuning Set to the new environment and analyze it there. The detailed procedure is as follows:
1. Create the STS owner user on the new database

create user LUNAR identified by LUNAR;
grant connect,resource,dba to LUNAR;
2. Check that SYSAUX has enough free space

3. On the source database, create the staging table for the SQL Tuning Set, then exp/imp it to the new database

[oracle@lunardb tmp]$ ss

SQL*Plus: Release 11.2.0.1.0 Production on Sat Apr 18 23:22:26 2015

Copyright (c) 1982, 2009, Oracle. All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

23:22:26 sys@GPS>conn LUNAR/LUNAR
Connected.
23:22:28 lunar@GPS>BEGIN
23:22:33 2 DBMS_SQLTUNE.create_stgtab_sqlset(table_name => 'SQLSET_TAB_LUNAR',
23:22:34 3 schema_name => 'LUNAR',
23:22:34 4 tablespace_name => 'USERS');
23:22:34 5 END;
23:22:34 6 /

PL/SQL procedure successfully completed.

Elapsed: 00:00:01.32
23:22:36 lunar@GPS>
Pack the SQL Tuning Set into the staging table, then exp/imp it to the new database:

conn LUNAR/LUNAR

BEGIN
DBMS_SQLTUNE.pack_stgtab_sqlset(sqlset_name => 'Lunar_11201STS_LUNAR',
sqlset_owner => 'LUNAR',
staging_table_name => 'SQLSET_TAB_LUNAR',
staging_schema_owner => 'LUNAR');
END;
/
While the pack is running, we can monitor it:

[oracle@lunardb tmp]$ ss

SQL*Plus: Release 11.2.0.1.0 Production on Sat Apr 18 23:26:18 2015

Copyright (c) 1982, 2009, Oracle. All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

23:26:18 sys@GPS>select count(*) from LUNAR.SQLSET_TAB_LUNAR;

COUNT(*)
----------
496641

Elapsed: 00:00:00.57
23:28:04 sys@GPS>
exp LUNAR/LUNAR tables=SQLSET_TAB_LUNAR file=/u01/oradata/tmp/exp_SQLSET_TAB_LUNAR.dmp log=/u01/oradata/tmp/exp_SQLSET_TAB_LUNAR.log FEEDBACK=1000 BUFFER=5000000

4. Import the staging table (LUNAR.SQLSET_TAB_LUNAR) into the new database
imp LUNAR/LUNAR fromuser=LUNAR touser=LUNAR file=/u01/oradata/tmp/exp_SQLSET_TAB_LUNAR.dmp feedback=1000 log=/u01/oradata/tmp/imp_SQLSET_TAB_LUNAR.log BUFFER=5000000

### section 2

1. Check how many SQL statements are in the imported staging table:

09:52:42 LUNAR@ lunardb> select count(*) from LUNAR.SQLSET_TAB_LUNAR;

COUNT(*)
----------
496641

Elapsed: 00:00:00.24
09:53:13 LUNAR@ lunardb>
Delete the statements we do not need (DBA accounts and client tools):

LUNAR@ lunardb> delete from LUNAR.SQLSET_TAB_LUNAR
where (PARSING_SCHEMA_NAME in ('LUNAR', 'GGUSR','EXFSYS','SYS') )
or ( module in ('PL/SQL Developer','SQL*Plus','sqlplus.exe','plsqldev.exe','DBMS_SCHEDULER') );

701 rows deleted.

Elapsed: 00:00:00.96
10:07:34 LUNAR@ lunardb> commit;

Commit complete.

Elapsed: 00:00:00.00
10:07:38 LUNAR@ lunardb>

2. On the new database, create the LUNARSPA user and the target SQL Tuning Set

create user LUNARSPA identified by LUNARSPA;
grant connect,resource,dba to LUNARSPA;

10:24:41 LUNAR@ lunardb> select username,default_tablespace,temporary_tablespace
10:24:41 2 from dba_users
10:24:41 3 where username in ('LUNAR','LUNARSPA')
10:24:41 4 order by 1,2;

USERNAME DEFAULT_TABLESPACE TEMPORARY_TABLESPACE
------------------------------ ------------------------------ ------------------------------
LUNAR USERS TEMP
LUNARSPA USERS TEMP

Elapsed: 00:00:00.03
10:24:42 LUNAR@ lunardb>
(1) Using the LUNARSPA user, create the STS Lunar_11204STS_LUNAR

10:24:42 LUNAR@ lunardb> conn LUNARSPA/LUNARSPA
Connected.
10:25:33 LUNARSPA@ lunardb> exec DBMS_SQLTUNE.create_sqlset(sqlset_name => 'Lunar_11204STS_LUNAR');

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.05
10:25:40 LUNARSPA@ lunardb>
(2) Using the LUNARSPA user, remap the staging-table entries from the source LUNAR.Lunar_11201STS_LUNAR to LUNARSPA.Lunar_11204STS_LUNAR

10:28:24 LUNARSPA@ lunardb> select NAME,OWNER,CREATED,STATEMENT_COUNT FROM DBA_SQLSET;

NAME OWNER CREATED STATEMENT_COUNT
------------------------------ ------------------------------ ------------------- ---------------
Lunar_11204STS_LUNAR LUNARSPA 2015-04-19 10:25:40 0

Elapsed: 00:00:00.00

10:40:44 LUNARSPA@ lunardb> exec dbms_sqltune.remap_stgtab_sqlset(old_sqlset_name =>'Lunar_11201STS_LUNAR',old_sqlset_owner => 'LUNAR', new_sqlset_name => 'Lunar_11204STS_LUNAR',new_sqlset_owner => 'LUNARSPA', staging_table_name => 'SQLSET_TAB_LUNAR',staging_schema_owner => 'LUNAR');

PL/SQL procedure successfully completed.

Elapsed: 00:00:09.39
10:41:06 LUNARSPA@ lunardb>
Then, still as the LUNARSPA user, unpack the staging table into the new STS:

BEGIN
DBMS_SQLTUNE.unpack_stgtab_sqlset(
sqlset_name => 'Lunar_11204STS_LUNAR',
sqlset_owner => 'LUNARSPA',
replace => TRUE,
staging_table_name => 'SQLSET_TAB_LUNAR',
staging_schema_owner => 'LUNAR');
END;
/

11:21:16 LUNARSPA@ lunardb> select NAME,OWNER,CREATED,STATEMENT_COUNT FROM DBA_SQLSET;

NAME OWNER CREATED STATEMENT_COUNT
------------------------------ ------------------------------ ------------------- ---------------
Lunar_11204STS_LUNAR LUNARSPA 2015-04-19 11:19:04 6005

Elapsed: 00:00:00.01
11:21:19 LUNARSPA@ lunardb>
At this point the SPA input data is ready on the new database and we can start generating SPA reports. The usual steps are:
1) Create the SPA task

11:33:10 LUNARSPA@ lunardb> var sname varchar2(64)
11:33:10 LUNARSPA@ lunardb> var tname varchar2(64)
11:33:10 LUNARSPA@ lunardb> exec :sname := 'Lunar_11204STS_LUNAR';

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.01
11:33:10 LUNARSPA@ lunardb> exec :tname := 'SPA_LUNARTEST1';

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.00
11:33:10 LUNARSPA@ lunardb> exec :tname := DBMS_SQLPA.CREATE_ANALYSIS_TASK(sqlset_name => :sname, task_name => :tname);

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.19
11:33:10 LUNARSPA@ lunardb>
2) Build the 11.2.0.1 (pre-change) SPA trial by converting the statistics already stored in the STS

begin
DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(
task_name => 'SPA_LUNARTEST1',
execution_type => 'CONVERT SQLSET',
execution_name => 'CONVERT_11204G');
end;
/

3) Test execute the statements on 11.2.0.4 to build the post-change SPA trial from fresh performance data
begin
DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(
task_name => 'SPA_LUNARTEST1',
execution_type => 'TEST EXECUTE',
execution_name => 'EXEC_11204G');
end;
/
4) Run the comparison tasks (typically on metrics such as elapsed time, CPU time and buffer gets)

begin
DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(
task_name => 'SPA_LUNARTEST1',
execution_type => 'COMPARE PERFORMANCE',
execution_name => 'Compare_elapsed_time',
execution_params => dbms_advisor.arglist('execution_name1', 'CONVERT_11204G', 'execution_name2', 'EXEC_11204G', 'comparison_metric', 'elapsed_time') );
end;
/

begin
DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(
task_name => 'SPA_LUNARTEST1',
execution_type => 'COMPARE PERFORMANCE',
execution_name => 'Compare_CPU_time',
execution_params => dbms_advisor.arglist('execution_name1', 'CONVERT_11204G', 'execution_name2', 'EXEC_11204G', 'comparison_metric', 'CPU_TIME') );
end;
/

begin
DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(
task_name => 'SPA_LUNARTEST1',
execution_type => 'COMPARE PERFORMANCE',
execution_name => 'Compare_BUFFER_GETS_time',
execution_params => dbms_advisor.arglist('execution_name1', 'CONVERT_11204G', 'execution_name2', 'EXEC_11204G', 'comparison_metric', 'BUFFER_GETS') );
end;
/
5) Generate the SPA reports

spool spa_report_elapsed_time.html
SELECT dbms_sqlpa.report_analysis_task('SPA_LUNARTEST1', 'HTML', 'ALL','ALL', execution_name=>'Compare_elapsed_time',top_sql=>500) FROM dual;
spool off;

spool spa_report_CPU_time.html
SELECT dbms_sqlpa.report_analysis_task('SPA_LUNARTEST1', 'HTML', 'ALL','ALL', execution_name=>'Compare_CPU_time',top_sql=>500) FROM dual;
spool off;

spool spa_report_buffer_time.html
SELECT dbms_sqlpa.report_analysis_task('SPA_LUNARTEST1','HTML','ALL','ALL', execution_name=>'Compare_BUFFER_GETS_time',top_sql=>500) FROM dual;
spool off;

spool spa_report_elapsed_time_regressed.html
SELECT dbms_sqlpa.report_analysis_task('SPA_LUNARTEST1', 'HTML', 'REGRESSED','ALL', execution_name=>'Compare_elapsed_time',top_sql=>500) FROM dual;
spool off;

spool spa_report_CPU_time_regressed.html
SELECT dbms_sqlpa.report_analysis_task('SPA_LUNARTEST1', 'HTML', 'REGRESSED','ALL', execution_name=>'Compare_CPU_time',top_sql=>500) FROM dual;
spool off;

spool spa_report_buffer_time_regressed.html
SELECT dbms_sqlpa.report_analysis_task('SPA_LUNARTEST1','HTML','REGRESSED','ALL', execution_name=>'Compare_BUFFER_GETS_time',top_sql=>500) FROM dual;
spool off;

spool spa_report_errors.html
SELECT dbms_sqlpa.report_analysis_task('SPA_LUNARTEST1', 'HTML', 'errors','summary') FROM dual;
spool off;

spool spa_report_unsupport.html
SELECT dbms_sqlpa.report_analysis_task('SPA_LUNARTEST1', 'HTML', 'unsupported','all') FROM dual;
spool off;
The generated reports typically look like the listing below (a sketch of a wrapper script that could produce them follows the listing):

-rwxrwxrwx 1 oracle oracle 1850 Apr 19 21:33 report_spa.sh
-rw-rw-r-- 1 oracle oracle 8498134 Apr 19 21:37 spa_report_elapsed_time.html
-rw-rw-r-- 1 oracle oracle 8954773 Apr 19 21:41 spa_report_CPU_time.html
-rw-rw-r-- 1 oracle oracle 7941640 Apr 19 21:44 spa_report_buffer_time.html
-rw-rw-r-- 1 oracle oracle 38933 Apr 19 21:44 spa_report_elapsed_time_regressed.html
-rw-rw-r-- 1 oracle oracle 61982 Apr 19 21:44 spa_report_CPU_time_regressed.html
-rw-rw-r-- 1 oracle oracle 28886 Apr 19 21:44 spa_report_buffer_time_regressed.html
-rw-rw-r-- 1 oracle oracle 15537 Apr 19 21:44 spa_report_errors.html
-rw-rw-r-- 1 oracle oracle 58703 Apr 19 21:44 spa_report_unsupport.html
-rw-rw-r-- 1 oracle oracle 18608938 Apr 19 21:44 report_spa.log
[oracle@lunardb tmp]$
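
The report_spa.sh file seen in the listing above is presumably just a thin wrapper that feeds a spool script like the statements shown earlier into SQL*Plus. A minimal sketch of such a script (file names follow the listing; the SET options are assumptions):

-- report_spa.sql (sketch): run as the task owner, e.g. sqlplus LUNARSPA @report_spa.sql
SET LONG 999999999 LONGCHUNKSIZE 100000 LINESIZE 1000 PAGESIZE 0 TRIMSPOOL ON HEADING OFF FEEDBACK OFF ECHO OFF

spool spa_report_elapsed_time.html
SELECT dbms_sqlpa.report_analysis_task('SPA_LUNARTEST1', 'HTML', 'ALL', 'ALL', execution_name=>'Compare_elapsed_time', top_sql=>500) FROM dual;
spool off

spool spa_report_elapsed_time_regressed.html
SELECT dbms_sqlpa.report_analysis_task('SPA_LUNARTEST1', 'HTML', 'REGRESSED', 'ALL', execution_name=>'Compare_elapsed_time', top_sql=>500) FROM dual;
spool off

-- ...repeat for the CPU time, buffer gets, errors and unsupported reports shown above...
exit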

############ ref 2

https://www.databasejournal.com/img/2008/02/jsc_Oracle_11g_SQL_Plan_Management_Listing2.html#List0201

/*
|| Oracle 11g SQL Plan Management Listing 2
||
|| Demonstrates Oracle 11g SQL Plan Management (SPM) advanced techniques,
|| including:
|| - Capturing SQL Plan Baselines via manual methods with DBMS_SPM
|| - Transferring captured SQL Plan Baselines between Oracle 10g and 11g databases
|| to "pre-seed" the SQL Management Baseline (SMB) with the most optimal execution
|| plans before an upgrade of an Oracle 10g database to Oracle 11g
|| - Transferring captured SQL Plan Baselines between test and production environments
|| to "pre-seed" the SQL Management Baseline (SMB) with the most typical execution
|| plans prior to deployment of a brand-new application
|| - Dropping existing SQL Plan Baselines from the SMB via manual methods
||
|| Author: Jim Czuprynski
||
|| Usage Notes:
|| These examples are provided to demonstrate various features of Oracle 11g
|| SQL Plan Management features, and they should be carefully proofread
|| before executing them against any existing Oracle database(s) to avoid
|| potential damage!
*/

/*
|| Listing 2.1:
|| Create and prepare to populate a SQL Tuning Set (STS)
|| for selected SQL statements. Note that this STS will capture
|| all SQL statements which are executed by the LDGN user account
|| within a 5-minute period, and Oracle will check every 5 seconds
|| for any new statements
*/
BEGIN
DBMS_SQLTUNE.DROP_SQLSET(
sqlset_name => 'STS_SPM_200'
);
END;
/
@SPM_2_1.sql;
BEGIN
DBMS_SQLTUNE.CREATE_SQLSET(
sqlset_name => 'STS_SPM_200'
);
DBMS_SQLTUNE.CAPTURE_CURSOR_CACHE_SQLSET(
sqlset_name => 'STS_SPM_200'
,basic_filter=> q'#sql_text LIKE '%SPM_2_1%' AND parsing_schema_name = 'LDGN'#'
,time_limit => 300
,repeat_interval => 5
);
END;
/

/*
|| Listing 2.2:
|| "Packing up" and exporting the Oracle 10gR2 SQL Tuning Set prior to
|| its transport to Oracle 11g
*/
-----
-- Create a staging table to hold the SQL Tuning Set statements just created,
-- and then "pack up" (i.e. populate) the staging table
-----
DROP TABLE ldgn.sts_staging PURGE;
BEGIN
DBMS_SQLTUNE.CREATE_STGTAB_SQLSET(
table_name => 'STS_STAGING'
,schema_name => 'LDGN'
,tablespace_name => 'USERS'
);
DBMS_SQLTUNE.PACK_STGTAB_SQLSET(
sqlset_name => 'STS_SPM_200'
,sqlset_owner => 'SYS'
,staging_table_name => 'STS_STAGING'
,staging_schema_owner => 'LDGN'
);
END;
/
-----
-- Invoke DataPump Export to export the table that contains the staged
-- SQL Tuning Set statements
-----
rm -f /u01/app/oracle/product/10.2.0/db_1/rdbms/log/*.log
rm -f /u01/app/oracle/product/10.2.0/db_1/rdbms/log/*.dmp
expdp system/oracle PARFILE=DumpStagingTable.dpectl

#####
# Contents of DumpStagingTable.dpectl parameter file:
#####
JOB_NAME=DumpStagingTable
DIRECTORY=DATA_PUMP_DIR
DUMPFILE=LDGN_STS_Staging.dmp
SCHEMAS=LDGN

>>> Results:

Export: Release 10.2.0.1.0 - Production on Monday, 18 February, 2008 19:03:57
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
Starting "SYSTEM"."DUMPSTAGINGTABLE": system/******** PARFILE=DumpStagingTable.dpectl
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 576 KB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
. . exported "LDGN"."STS_STAGING" 22.67 KB 8 rows
. . exported "LDGN"."STS_STAGING_CPLANS" 35.35 KB 25 rows
. . exported "LDGN"."STS_STAGING_CBINDS" 9.476 KB 0 rows
. . exported "LDGN"."PLAN_TABLE" 0 KB 0 rows
Master table "SYSTEM"."DUMPSTAGINGTABLE" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.DUMPSTAGINGTABLE is:
/u01/app/oracle/product/10.2.0/db_1/rdbms/log/LDGN_STS_Staging.dmp
Job "SYSTEM"."DUMPSTAGINGTABLE" successfully completed at 19:05:21

/*
|| Listing 2.3:
|| Transporting, importing, and "unpacking" the staged Oracle 10gR2 SQL Tuning
|| Set on the target Oracle 11g database
*/
-----
-- Invoke DataPump Import to import the table that contains the staged
-- SQL Tuning Set statements. Note that the default action of SKIPping
-- a table if it already exists has been overridden by supplying a value
-- of REPLACE for parameter TABLE_EXISTS_ACTION.
-----
impdp system/oracle PARFILE=LoadStagingTable.dpictl

#####
# Contents of LoadStagingTable.dpictl parameter file:
#####
JOB_NAME=LoadStagingTable
DIRECTORY=DATA_PUMP_DIR
DUMPFILE=LDGN_STS_Staging.dmp
TABLE_EXISTS_ACTION=REPLACE

>>> Results of DataPump Import operation:

Import: Release 11.1.0.6.0 - Production on Monday, 18 February, 2008 19:09:29
Copyright (c) 2003, 2007, Oracle. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "SYSTEM"."LOADSTAGINGTABLE" successfully loaded/unloaded
Starting "SYSTEM"."LOADSTAGINGTABLE": system/******** PARFILE=LoadStagingTable.dpictl
Processing object type SCHEMA_EXPORT/USER
ORA-31684: Object type USER:"LDGN" already exists
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "LDGN"."STS_STAGING" 22.67 KB 8 rows
. . imported "LDGN"."STS_STAGING_CPLANS" 35.35 KB 25 rows
. . imported "LDGN"."STS_STAGING_CBINDS" 9.476 KB 0 rows
. . imported "LDGN"."PLAN_TABLE" 0 KB 0 rows
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Job "SYSTEM"."LOADSTAGINGTABLE" completed with 1 error(s) at 19:11:07

-----
-- Accept the SQL Tuning Set statements from the imported staging table
-- into the Oracle 11gR1 database
-----
BEGIN
DBMS_SQLTUNE.DROP_SQLSET(
sqlset_name => 'STS_SPM_200'
);
END;
/
BEGIN
DBMS_SQLTUNE.UNPACK_STGTAB_SQLSET(
sqlset_name => 'STS_SPM_200'
,sqlset_owner => 'SYS'
,replace => TRUE
,staging_table_name => 'STS_STAGING'
,staging_schema_owner => 'LDGN'
);
END;
/
-----
-- Listing 2.4:
-- Prove that the SQL Plan Baselines loaded into the SMB via manual methods are
-- actually being utilized by executing EXPLAIN PLANs against each statement from the
-- target Oracle 11g database
-----

SQL> EXPLAIN PLAN FOR
2 SELECT /*SPM_2_1.1*/
3 CTY.country_total_id
4 ,PR.promo_total_id
5 ,COUNT(S.amount_sold)
6 ,SUM(S.amount_sold)
7 ,SUM(S.quantity_sold)
8 FROM
9 sh.sales S
10 ,sh.customers C
11 ,sh.countries CTY
12 ,sh.promotions PR
13 WHERE S.cust_id = C.cust_id
14 AND C.country_id = CTY.country_id
15 AND S.promo_id = PR.promo_id
16 GROUP BY
17 CTY.country_total_id
18 ,PR.promo_total_id
19 ;

Explained.

SQL> SELECT *
2 FROM TABLE(DBMS_XPLAN.DISPLAY('PLAN_TABLE',NULL,'+NOTE'))
3 ;

Plan hash value: 491136032

--------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time | Pstart| Pstop |
--------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 44 | | 2325 (5)| 00:00:28 | | |
| 1 | HASH GROUP BY | | 1 | 44 | | 2325 (5)| 00:00:28 | | |
|* 2 | HASH JOIN | | 918K| 38M| | 2270 (3)| 00:00:28 | | |
| 3 | TABLE ACCESS FULL | PROMOTIONS | 503 | 3521 | | 17 (0)| 00:00:01 | | |
|* 4 | HASH JOIN | | 918K| 32M| | 2246 (2)| 00:00:27 | | |
| 5 | TABLE ACCESS FULL | COUNTRIES | 23 | 230 | | 3 (0)| 00:00:01 | | |
|* 6 | HASH JOIN | | 918K| 23M| 1200K| 2236 (2)| 00:00:27 | | |
| 7 | TABLE ACCESS FULL | CUSTOMERS | 55500 | 541K| | 406 (1)| 00:00:05 | | |
| 8 | PARTITION RANGE ALL| | 918K| 14M| | 498 (4)| 00:00:06 | 1 | 28 |
| 9 | TABLE ACCESS FULL | SALES | 918K| 14M| | 498 (4)| 00:00:06 | 1 | 28 |
--------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("S"."PROMO_ID"="PR"."PROMO_ID")
4 - access("C"."COUNTRY_ID"="CTY"."COUNTRY_ID")
6 - access("S"."CUST_ID"="C"."CUST_ID")

Note
-----
- SQL plan baseline "SYS_SQL_PLAN_587c0594825d2e47" used for this statement

27 rows selected.

SQL>
SQL> EXPLAIN PLAN FOR
2 SELECT /*SPM_2_1.2*/
3 CTY.country_id
4 ,CTY.country_subregion_id
5 ,CTY.country_region_id
6 ,CTY.country_total_id
7 ,PR.promo_total_id
8 ,COUNT(S.amount_sold)
9 ,SUM(S.amount_sold)
10 ,SUM(S.quantity_sold)
11 FROM
12 sh.sales S
13 ,sh.customers C
14 ,sh.countries CTY
15 ,sh.promotions PR
16 WHERE S.cust_id = C.cust_id
17 AND C.country_id = CTY.country_id
18 AND S.promo_id = PR.promo_id
19 GROUP BY
20 CTY.country_id
21 ,CTY.country_subregion_id
22 ,CTY.country_region_id
23 ,CTY.country_total_id
24 ,PR.promo_total_id
25 ;

Explained.

SQL> SELECT *
2 FROM TABLE(DBMS_XPLAN.DISPLAY('PLAN_TABLE',NULL,'+NOTE'))
3 ;

Plan hash value: 491136032

--------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time | Pstart| Pstop |
--------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 228 | 12312 | | 2325 (5)| 00:00:28 | | |
| 1 | HASH GROUP BY | | 228 | 12312 | | 2325 (5)| 00:00:28 | | |
|* 2 | HASH JOIN | | 918K| 47M| | 2270 (3)| 00:00:28 | | |
| 3 | TABLE ACCESS FULL | PROMOTIONS | 503 | 3521 | | 17 (0)| 00:00:01 | | |
|* 4 | HASH JOIN | | 918K| 41M| | 2246 (2)| 00:00:27 | | |
| 5 | TABLE ACCESS FULL | COUNTRIES | 23 | 460 | | 3 (0)| 00:00:01 | | |
|* 6 | HASH JOIN | | 918K| 23M| 1200K| 2236 (2)| 00:00:27 | | |
| 7 | TABLE ACCESS FULL | CUSTOMERS | 55500 | 541K| | 406 (1)| 00:00:05 | | |
| 8 | PARTITION RANGE ALL| | 918K| 14M| | 498 (4)| 00:00:06 | 1 | 28 |
| 9 | TABLE ACCESS FULL | SALES | 918K| 14M| | 498 (4)| 00:00:06 | 1 | 28 |
--------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("S"."PROMO_ID"="PR"."PROMO_ID")
4 - access("C"."COUNTRY_ID"="CTY"."COUNTRY_ID")
6 - access("S"."CUST_ID"="C"."CUST_ID")

Note
-----
- SQL plan baseline "SYS_SQL_PLAN_54f64750825d2e47" used for this statement

27 rows selected.

SQL>
SQL> EXPLAIN PLAN FOR
2 SELECT /*SPM_2_1.3*/
3 CTY.country_total_id
4 ,P.prod_id
5 ,P.prod_subcategory_id
6 ,P.prod_category_id
7 ,P.prod_total_id
8 ,CH.channel_id
9 ,CH.channel_class_id
10 ,CH.channel_total_id
11 ,PR.promo_total_id
12 ,COUNT(S.amount_sold)
13 ,SUM(S.amount_sold)
14 ,SUM(S.quantity_sold)
15 FROM
16 sh.sales S
17 ,sh.customers C
18 ,sh.countries CTY
19 ,sh.products P
20 ,sh.channels CH
21 ,sh.promotions PR
22 WHERE S.cust_id = C.cust_id
23 AND C.country_id = CTY.country_id
24 AND S.prod_id = P.prod_id
25 AND S.channel_id = CH.channel_id
26 AND S.promo_id = PR.promo_id
27 GROUP BY
28 CTY.country_total_id
29 ,P.prod_id
30 ,P.prod_subcategory_id
31 ,P.prod_category_id
32 ,P.prod_total_id
33 ,CH.channel_id
34 ,CH.channel_class_id
35 ,CH.channel_total_id
36 ,PR.promo_total_id
37 ;

Explained.

SQL> SELECT *
2 FROM TABLE(DBMS_XPLAN.DISPLAY('PLAN_TABLE',NULL,'+NOTE'))
3 ;

Plan hash value: 2634317694

----------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time | Pstart| Pstop |
----------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 5940 | 435K| | 8393 (2)| 00:01:41 | | |
| 1 | HASH GROUP BY | | 5940 | 435K| 74M| 8393 (2)| 00:01:41 | | |
|* 2 | HASH JOIN | | 918K| 65M| | 2593 (3)| 00:00:32 | | |
| 3 | TABLE ACCESS FULL | PROMOTIONS | 503 | 3521 | | 17 (0)| 00:00:01 | | |
|* 4 | HASH JOIN | | 918K| 59M| | 2569 (3)| 00:00:31 | | |
| 5 | TABLE ACCESS FULL | PRODUCTS | 72 | 1080 | | 3 (0)| 00:00:01 | | |
|* 6 | HASH JOIN | | 918K| 46M| | 2560 (2)| 00:00:31 | | |
| 7 | TABLE ACCESS FULL | COUNTRIES | 23 | 230 | | 3 (0)| 00:00:01 | | |
|* 8 | HASH JOIN | | 918K| 37M| | 2550 (2)| 00:00:31 | | |
| 9 | TABLE ACCESS FULL | CHANNELS | 5 | 45 | | 3 (0)| 00:00:01 | | |
|* 10 | HASH JOIN | | 918K| 29M| 1200K| 2541 (2)| 00:00:31 | | |
| 11 | TABLE ACCESS FULL | CUSTOMERS | 55500 | 541K| | 406 (1)| 00:00:05 | | |
| 12 | PARTITION RANGE ALL| | 918K| 21M| | 498 (4)| 00:00:06 | 1 | 28 |
| 13 | TABLE ACCESS FULL | SALES | 918K| 21M| | 498 (4)| 00:00:06 | 1 | 28 |
----------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("S"."PROMO_ID"="PR"."PROMO_ID")
4 - access("S"."PROD_ID"="P"."PROD_ID")
6 - access("C"."COUNTRY_ID"="CTY"."COUNTRY_ID")
8 - access("S"."CHANNEL_ID"="CH"."CHANNEL_ID")
10 - access("S"."CUST_ID"="C"."CUST_ID")

Note
-----
- SQL plan baseline "SYS_SQL_PLAN_8ec1a5862d9d97db" used for this statement

33 rows selected.

SQL>
SQL> EXPLAIN PLAN FOR
2 SELECT /*SPM_2_1.4*/
3 CTY.country_total_id
4 ,P.prod_category_id
5 ,P.prod_total_id
6 ,CH.channel_id
7 ,CH.channel_class_id
8 ,CH.channel_total_id
9 ,PR.promo_total_id
10 ,COUNT(S.amount_sold)
11 ,SUM(S.amount_sold)
12 ,SUM(S.quantity_sold)
13 FROM
14 sh.sales S
15 ,sh.customers C
16 ,sh.countries CTY
17 ,sh.products P
18 ,sh.channels CH
19 ,sh.promotions PR
20 WHERE S.cust_id = C.cust_id
21 AND C.country_id = CTY.country_id
22 AND S.prod_id = P.prod_id
23 AND S.channel_id = CH.channel_id
24 AND S.promo_id = PR.promo_id
25 GROUP BY
26 CTY.country_total_id
27 ,P.prod_category_id
28 ,P.prod_total_id
29 ,CH.channel_id
30 ,CH.channel_class_id
31 ,CH.channel_total_id
32 ,PR.promo_total_id
33 ;

Explained.

SQL> SELECT *
2 FROM TABLE(DBMS_XPLAN.DISPLAY('PLAN_TABLE',NULL,'+NOTE'))
3 ;

Plan hash value: 2634317694

----------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time | Pstart| Pstop |
----------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 8 | 568 | | 2648 (5)| 00:00:32 | | |
| 1 | HASH GROUP BY | | 8 | 568 | | 2648 (5)| 00:00:32 | | |
|* 2 | HASH JOIN | | 918K| 62M| | 2593 (3)| 00:00:32 | | |
| 3 | TABLE ACCESS FULL | PROMOTIONS | 503 | 3521 | | 17 (0)| 00:00:01 | | |
|* 4 | HASH JOIN | | 918K| 56M| | 2569 (3)| 00:00:31 | | |
| 5 | TABLE ACCESS FULL | PRODUCTS | 72 | 792 | | 3 (0)| 00:00:01 | | |
|* 6 | HASH JOIN | | 918K| 46M| | 2560 (2)| 00:00:31 | | |
| 7 | TABLE ACCESS FULL | COUNTRIES | 23 | 230 | | 3 (0)| 00:00:01 | | |
|* 8 | HASH JOIN | | 918K| 37M| | 2550 (2)| 00:00:31 | | |
| 9 | TABLE ACCESS FULL | CHANNELS | 5 | 45 | | 3 (0)| 00:00:01 | | |
|* 10 | HASH JOIN | | 918K| 29M| 1200K| 2541 (2)| 00:00:31 | | |
| 11 | TABLE ACCESS FULL | CUSTOMERS | 55500 | 541K| | 406 (1)| 00:00:05 | | |
| 12 | PARTITION RANGE ALL| | 918K| 21M| | 498 (4)| 00:00:06 | 1 | 28 |
| 13 | TABLE ACCESS FULL | SALES | 918K| 21M| | 498 (4)| 00:00:06 | 1 | 28 |
----------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("S"."PROMO_ID"="PR"."PROMO_ID")
4 - access("S"."PROD_ID"="P"."PROD_ID")
6 - access("C"."COUNTRY_ID"="CTY"."COUNTRY_ID")
8 - access("S"."CHANNEL_ID"="CH"."CHANNEL_ID")
10 - access("S"."CUST_ID"="C"."CUST_ID")

Note
-----
- SQL plan baseline "SYS_SQL_PLAN_96f761da2d9d97db" used for this statement

33 rows selected.

SQL>
SQL> EXPLAIN PLAN FOR
2 SELECT /*SPM_2_1.5*/
3 CTY.country_id
4 ,CTY.country_subregion_id
5 ,CTY.country_region_id
6 ,CTY.country_total_id
7 ,P.prod_id
8 ,P.prod_subcategory_id
9 ,P.prod_category_id
10 ,P.prod_total_id
11 ,CH.channel_id
12 ,CH.channel_class_id
13 ,CH.channel_total_id
14 ,PR.promo_total_id
15 ,COUNT(S.amount_sold)
16 ,SUM(S.amount_sold)
17 ,SUM(S.quantity_sold)
18 FROM
19 sh.sales S
20 ,sh.customers C
21 ,sh.countries CTY
22 ,sh.products P
23 ,sh.channels CH
24 ,sh.promotions PR
25 WHERE S.cust_id = C.cust_id
26 AND C.country_id = CTY.country_id
27 AND S.prod_id = P.prod_id
28 AND S.channel_id = CH.channel_id
29 AND S.promo_id = PR.promo_id
30 GROUP BY
31 CTY.country_id
32 ,CTY.country_subregion_id
33 ,CTY.country_region_id
34 ,CTY.country_total_id
35 ,P.prod_id
36 ,P.prod_subcategory_id
37 ,P.prod_category_id
38 ,P.prod_total_id
39 ,CH.channel_id
40 ,CH.channel_class_id
41 ,CH.channel_total_id
42 ,PR.promo_total_id
43 ;

Explained.

SQL> SELECT *
2 FROM TABLE(DBMS_XPLAN.DISPLAY('PLAN_TABLE',NULL,'+NOTE'))
3 ;

Plan hash value: 2634317694

----------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time | Pstart| Pstop |
----------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 918K| 74M| | 20707 (1)| 00:04:09 | | |
| 1 | HASH GROUP BY | | 918K| 74M| 168M| 20707 (1)| 00:04:09 | | |
|* 2 | HASH JOIN | | 918K| 74M| | 2593 (3)| 00:00:32 | | |
| 3 | TABLE ACCESS FULL | PROMOTIONS | 503 | 3521 | | 17 (0)| 00:00:01 | | |
|* 4 | HASH JOIN | | 918K| 68M| | 2569 (3)| 00:00:31 | | |
| 5 | TABLE ACCESS FULL | PRODUCTS | 72 | 1080 | | 3 (0)| 00:00:01 | | |
|* 6 | HASH JOIN | | 918K| 55M| | 2560 (2)| 00:00:31 | | |
| 7 | TABLE ACCESS FULL | COUNTRIES | 23 | 460 | | 3 (0)| 00:00:01 | | |
|* 8 | HASH JOIN | | 918K| 37M| | 2550 (2)| 00:00:31 | | |
| 9 | TABLE ACCESS FULL | CHANNELS | 5 | 45 | | 3 (0)| 00:00:01 | | |
|* 10 | HASH JOIN | | 918K| 29M| 1200K| 2541 (2)| 00:00:31 | | |
| 11 | TABLE ACCESS FULL | CUSTOMERS | 55500 | 541K| | 406 (1)| 00:00:05 | | |
| 12 | PARTITION RANGE ALL| | 918K| 21M| | 498 (4)| 00:00:06 | 1 | 28 |
| 13 | TABLE ACCESS FULL | SALES | 918K| 21M| | 498 (4)| 00:00:06 | 1 | 28 |
----------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("S"."PROMO_ID"="PR"."PROMO_ID")
4 - access("S"."PROD_ID"="P"."PROD_ID")
6 - access("C"."COUNTRY_ID"="CTY"."COUNTRY_ID")
8 - access("S"."CHANNEL_ID"="CH"."CHANNEL_ID")
10 - access("S"."CUST_ID"="C"."CUST_ID")

Note
-----
- SQL plan baseline "SYS_SQL_PLAN_816fca3a2d9d97db" used for this statement

33 rows selected.

/*
|| Listing 2.5:
|| Prepare to deploy a simulated new application to the current Oracle 11g database.
|| Note that all SQL Plan Baselines that are currently tagged as SPM_2 statements
|| will first be purged from the SMB.
*/
-----
-- Clear all SQL Plan Baselines whose SQL text contains the tag "SPM_2"
-----
SET SERVEROUTPUT ON
VARIABLE nRtnCode NUMBER;
BEGIN
:nRtnCode := 0;
FOR r_SPMB IN (
SELECT sql_handle, plan_name
FROM dba_sql_plan_baselines
WHERE sql_text LIKE '%SPM_2%'
)
LOOP
:nRtnCode :=
DBMS_SPM.DROP_SQL_PLAN_BASELINE(r_SPMB.sql_handle, r_SPMB.plan_name);
DBMS_OUTPUT.PUT_LINE('Drop of SPBs for Handle ' || r_SPMB.sql_handle
|| ' and Plan ' || r_SPMB.plan_name
|| ' completed: RC = ' || :nRtnCode);
END LOOP;
EXCEPTION
WHEN OTHERS THEN
DBMS_OUTPUT.PUT_LINE('Fatal error during cleanup of SQL Plan Baselines!');
ROLLBACK;
END;
/
-----
-- Run DDL commands to create the Sales Force Administration (SFA) schema
-- and all related objects
-----
@SFA_Setup.sql;

/*
|| Listing 2.6:
|| Generate a SQL workload against the new application objects using six
|| queries tagged with a comment of SPM_2_2, and then capture the SQL Plan
|| Baselines into the SMB using DBMS_SPM.LOAD_PLANS_FROM CURSOR_CACHE
*/
ALTER SYSTEM FLUSH SHARED_POOL;
ALTER SYSTEM FLUSH BUFFER_CACHE;

@SPM_2_2.sql;

SET SERVEROUTPUT ON
VARIABLE plans_cached NUMBER;
BEGIN
:plans_cached :=
DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(
attribute_name => 'SQL_TEXT'
,attribute_value => '%SPM_2_2%'
,fixed => 'NO'
,enabled => 'YES'
);
DBMS_OUTPUT.PUT_LINE('>>> ' || :plans_cached || ' SQL statement(s) loaded from the cursor cache.');
END;
/

/*
|| Listing 2.7:
|| Staging, Packing, and Exporting SQL Plan Baselines
*/
-----
-- Create a SQL Plan Baseline staging table and then "pack" those SQL
-- Plan Baselines into a staging table
-----
BEGIN
DBMS_SPM.CREATE_STGTAB_BASELINE (
table_name => 'SPM_STAGING'
,table_owner => 'SFA'
,tablespace_name => 'EXAMPLE'
);
END;
/

SET SERVEROUTPUT ON
VARIABLE plans_staged NUMBER;
BEGIN
:plans_staged :=
DBMS_SPM.PACK_STGTAB_BASELINE (
table_name => 'SPM_STAGING'
,table_owner => 'SFA'
,creator => 'SYS'
);
DBMS_OUTPUT.PUT_LINE('Total SQL Plan Baselines Staged: ' || :plans_staged);
END;
/
-----
-- Export SPM staging table via DataPump Export
-----
rm -f /u01/app/oracle/admin/orcl/dpdump/*.log
rm -f /u01/app/oracle/admin/orcl/dpdump/*.dmp
expdp system/oracle PARFILE=DumpStagedSPMs.dpectl

#####
# Contents of DumpStagedSPMs.dpectl parameter file:
#####
JOB_NAME=DumpStagedSPMs
DIRECTORY=DATA_PUMP_DIR
DUMPFILE=SFA_SPM_Staging.dmp
TABLES=SFA.SPM_STAGING

>>> Results:

Export: Release 11.1.0.6.0 - Production on Tuesday, 19 February, 2008 9:28:34
Copyright (c) 2003, 2007, Oracle. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SYSTEM"."DUMPSTAGEDSPMS": system/******** PARFILE=DumpStagedSPMs.dpectl
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 192 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
. . exported "SFA"."SPM_STAGING" 46.49 KB 6 rows
Master table "SYSTEM"."DUMPSTAGEDSPMS" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.DUMPSTAGEDSPMS is:
/u01/app/oracle/admin/orcl/dpdump/SFA_SPM_Staging.dmp
Job "SYSTEM"."DUMPSTAGEDSPMS" successfully completed at 09:29:50

/*
|| Listing 2.8:
|| Importing and "unpacking" the staged Oracle 11g SQL Plan Baselines into
|| the target Oracle 11g database
*/
-----
-- Invoke DataPump Import to import the table that contains the staged
-- SQL Tuning Set statements. Note that the default action of SKIPping
-- a table if it already exists has been overridden by supplying a value
-- of REPLACE for parameter TABLE_EXISTS_ACTION.
-----
impdp system/oracle PARFILE=LoadStagedSPMs.dpictl

#####
# Contents of LoadStagedSPMs.dpictl parameter file:
#####
JOB_NAME=LoadStagedSPMs
DIRECTORY=DATA_PUMP_DIR
DUMPFILE=SFA_SPM_Staging.dmp
TABLE_EXISTS_ACTION=REPLACE

>>> Results of DataPump Import operation:

Import: Release 11.1.0.6.0 - Production on Tuesday, 19 February, 2008 9:31:41
Copyright (c) 2003, 2007, Oracle. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "SYSTEM"."LOADSTAGEDSPMS" successfully loaded/unloaded
Starting "SYSTEM"."LOADSTAGEDSPMS": system/******** PARFILE=LoadStagedSPMs.dpictl
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "SFA"."SPM_STAGING" 46.49 KB 6 rows
Job "SYSTEM"."LOADSTAGEDSPMS" successfully completed at 09:31:52

-----
-- Clear all SQL Plan Baselines whose SQL text contains the tag "SPM_2"
-----
SET SERVEROUTPUT ON
VARIABLE nRtnCode NUMBER;
BEGIN
:nRtnCode := 0;
FOR r_SPMB IN (
SELECT sql_handle, plan_name
FROM dba_sql_plan_baselines
WHERE sql_text LIKE '%SPM_2%'
)
LOOP
:nRtnCode :=
DBMS_SPM.DROP_SQL_PLAN_BASELINE(r_SPMB.sql_handle, r_SPMB.plan_name);
DBMS_OUTPUT.PUT_LINE('Drop of SPBs for Handle ' || r_SPMB.sql_handle
|| ' and Plan ' || r_SPMB.plan_name
|| ' completed: RC = ' || :nRtnCode);
END LOOP;
EXCEPTION
WHEN OTHERS THEN
DBMS_OUTPUT.PUT_LINE('Fatal error during cleanup of SQL Plan Baselines!');
ROLLBACK;
END;
/
-----
-- Now, "unpack" the SQL Plan Baselines staging table directly into the SMB
-----
SET SERVEROUTPUT ON
VARIABLE plans_loaded NUMBER;
BEGIN
:plans_loaded :=
DBMS_SPM.UNPACK_STGTAB_BASELINE (
table_name => 'SPM_STAGING'
,table_owner => 'SFA'
,creator => 'SYS'
);
DBMS_OUTPUT.PUT_LINE('Total SQL Plan Baselines Loaded: ' || :plans_loaded);
END;
/

/*
|| Listing 2.9:
|| Show the current contents of the SQL Management Base
*/

Tue Feb 19                                                                              page 1
                              Current SQL Plan Baselines
                            (From DBA_SQL_PLAN_BASELINES)

Creator  SQL Handle  Plan Name  SQL Text (first line)    Origin        CBO Cost  Enabled  Accepted  Fixed  Auto Purge  Created On           Last Executed
-------- ----------- ---------- ------------------------ ------------- --------- -------- --------- ------ ----------- -------------------- --------------------
LDGN     68516a84    07e0351f   SELECT /*SPM_1.1*/ ...   AUTO-CAPTURE        757 YES      YES       NO     YES         2008-01-20 10:47:14  2008-01-20 10:47:31
LDGN     68516a84    ddc1fcd0   SELECT /*SPM_1.1*/ ...   AUTO-CAPTURE       2388 YES      NO        NO     YES         2008-01-20 11:04:03
SYS      0047dfb5    e86f00e7   SELECT /*SPM_2_2.5*/ ... MANUAL-LOAD          13 YES      YES       NO     YES         2008-02-19 09:32:42
SYS      1e72d0bd    dd777d18   SELECT /*SPM_2_2.4*/ ... MANUAL-LOAD          13 YES      YES       NO     YES         2008-02-19 09:32:42
SYS      7f161ead    bb24e20c   SELECT /*SPM_2.2.2*/ ... MANUAL-LOAD         415 YES      YES       NO     YES         2008-02-19 09:32:42
SYS      831c508c    3519879f   SELECT /*SPM_2_2.3*/ ... MANUAL-LOAD          71 YES      YES       NO     YES         2008-02-19 09:32:42
SYS      9c7bbbfb    9d1c7b8e   SELECT /*SPM_2.2.1*/ ... MANUAL-LOAD         921 YES      YES       NO     YES         2008-02-19 09:32:42
SYS      f6743c1d    b197d40d   SELECT /*SPM_2_2.6*/ ... MANUAL-LOAD          60 YES      YES       NO     YES         2008-02-19 09:32:42

8 rows selected.

##############ref 3

https://blog.yannickjaquier.com/oracle/sql-performance-analyzer.html

SQL Performance Analyzer

Preamble

How can you test and predict the impact of a system change on your application? That is a difficult question for a developer or DBA, and short of comparing SQL statements one by one there was no dedicated tool for it. With 11g, Oracle released a tool called SQL Performance Analyzer.

Before going further it is worth mentioning that SQL Performance Analyzer (SPA) is part of Oracle Real Application Testing (RAT), a paid Enterprise Edition option.

By system changes Oracle means (non-exhaustively):

  • Database upgrades.
  • Tuning changes.
  • Schema changes.
  • Statistics gathering.
  • Initialization parameter change
  • OS or hardware changes.

A database upgrade is exactly what we will test in this blog post, by simulating the execution of a SQL statement under 9iR2 and then under 11gR2 optimizer settings. Note that the same strategy can be applied to evaluate any initialization parameter change or statistics change (using pending statistics).

Test database of this blog post is Oracle Enterprise Edition 11.2.0.2.0 running on Red Hat Enterprise Linux Server release 5.6 (Tikanga).

SQL Performance Analyzer testing

Just to show one limitation of SPA, but not of SQL Tuning Sets (STS), I am choosing a sql_id that has multiple plans, using the following query:

SQL> SELECT * FROM (SELECT sql_id,COUNT(DISTINCT plan_hash_value) FROM v$sql a
WHERE EXISTS (SELECT sql_id, COUNT(*) FROM v$sql b WHERE a.sql_id=b.sql_id GROUP BY sql_id HAVING COUNT(*)>1)
GROUP BY sql_id ORDER BY 2 DESC) WHERE rownum<=10;
 
SQL_ID COUNT(DISTINCTPLAN_HASH_VALUE)
------------- ------------------------------
94rn6s4ba24wn 5
9j8p0n3104sdg 4
gv9varx8zfkq4 4
9wbvj5pud8t2f 4
20pm94kcsc31s 3
afrmyr507wu03 3
0tnssv00b0nyr 2
1ds1kuqzkr7kn 2
18hzyzu9945g4 2
1290sa2814wt2 2
 
10 ROWS selected.

Let's choose the first one, create an STS, and load it with the five sql_id, plan_hash_value pairs:

EXEC DBMS_SQLTUNE.CREATE_SQLSET(sqlset_name => 'STS01', description => 'STS FOR sql_id 94rn6s4ba24wn');

DECLARE
cursor1 DBMS_SQLTUNE.SQLSET_CURSOR;
BEGIN
OPEN cursor1 FOR SELECT VALUE(p)
FROM TABLE(DBMS_SQLTUNE.SELECT_CURSOR_CACHE('sql_id = ''94rn6s4ba24wn''')) p;
 
DBMS_SQLTUNE.LOAD_SQLSET(sqlset_name => 'STS01', populate_cursor => cursor1);
END;
/
 
PL/SQL PROCEDURE successfully completed.

We can easily see that only one SQL statement has been added to our STS, even though the sql_id has five distinct execution plans:

SQL> SET lines 200
SQL> col description FOR a30
SQL> SELECT * FROM dba_sqlset;
 
ID NAME OWNER DESCRIPTION CREATED LAST_MODI STATEMENT_COUNT
---------- ------------------------------ ------------------------------ ------------------------------ --------- --------- ---------------
1 STS01 SYS STS FOR sql_id 94rn6s4ba24wn 17-NOV-11 17-NOV-11 1

Now let’s create a SPA task and associate the STS with it:

DECLARE
task_name VARCHAR2(64);
sts_task VARCHAR2(64);
BEGIN
task_name := 'Task01';
 
sts_task:=DBMS_SQLPA.CREATE_ANALYSIS_TASK(sqlset_name => 'STS01', task_name => task_name, description => 'Task for sql_id 94rn6s4ba24wn');
END;
/
 
PL/SQL PROCEDURE successfully completed.

When executing the task you have to decide, with the execution_type parameter, which kind of execution you want to perform. A standard SPA run is made of the following steps:

  • Execute the task in TEST EXECUTE mode and generate a before change task report.
  • Change what you want on your database (upgrade, optimizer parameters, statistics, …), execute the task in TEST EXECUTE mode and generate an after change task report.
  • Execute the task in COMPARE PERFORMANCE mode and generate a compare performance task report.

Just to show one limitation of SPA I’ll first use the CONVERT SQLSET mode and generate the report:

SQL> EXEC DBMS_SQLPA.RESET_ANALYSIS_TASK('Task01');
 
PL/SQL PROCEDURE successfully completed.
 
SQL> EXEC DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(task_name => 'Task01', execution_type => 'CONVERT SQLSET', execution_name => 'convert_sqlset');
 
PL/SQL PROCEDURE successfully completed.

You can check that the task completed successfully with:

SQL> SELECT execution_name, execution_type, TO_CHAR(execution_start,'dd-mon-yyyy hh24:mi:ss') AS execution_start,
TO_CHAR(execution_end,'dd-mon-yyyy hh24:mi:ss') AS execution_end, status
FROM dba_advisor_executions
WHERE task_name='Task01';
 
EXECUTION_NAME EXECUTION_TYPE EXECUTION_START EXECUTION_END STATUS
-------------------- ------------------------------ ----------------------------- ----------------------------- -----------
convert_sqlset CONVERT SQLSET 29-nov-2011 15:01:26 29-nov-2011 15:01:27 COMPLETED

Generate the report with a SQL statement like this:

SQL> SET LONG 999999 longchunksize 100000 linesize 200 head off feedback off echo off
SQL> spool task01_convert_sqlset.html
SQL> SELECT DBMS_SQLPA.REPORT_ANALYSIS_TASK('Task01', 'HTML', 'ALL', 'ALL') FROM dual;
SQL> spool off

We can see in the task01_convert_sqlset.html result file that, with this particular execution, all the different plans are displayed, while for the compare performance objective SPA takes only one plan (the one it currently parses with the information currently available). In any case, comparing all plans of the same sql_id would produce very complex reports that would probably not be usable…

The EXPLAIN PLAN execution mode does not provide much added value here, as it only generates the explain plan for every SQL statement of the STS, and those plans are also displayed in the TEST EXECUTE reports.

For my test case, to simulate a database upgrade from 9iR2 to 11gR2, I first set the optimizer features back to the 9iR2 optimizer, execute the task in TEST EXECUTE mode and generate a report; I then set the optimizer parameter back to its default value, execute the task in TEST EXECUTE mode again and generate a report; finally I execute the task in COMPARE PERFORMANCE mode and generate the final comparison report (the most interesting one).

We set the SQL*Plus environment for report generation, check the optimizer value before changing it, and reset the task before starting:

SQL> SET LONG 999999 longchunksize 100000 linesize 200 head off feedback off echo off
SQL> show parameter optimizer_features_enable
 
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
optimizer_features_enable string 11.2.0.2
 
SQL> EXEC DBMS_SQLPA.RESET_ANALYSIS_TASK('Task01');
 
PL/SQL PROCEDURE successfully completed.

We set optimizer to 9.2.0 to simulate a database upgrade situation and execute the task:

SQL> ALTER SESSION SET optimizer_features_enable='9.2.0';
 
SESSION altered.
 
SQL> EXEC DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(task_name => 'Task01', execution_type => 'TEST EXECUTE', execution_name => 'before_change');
 
PL/SQL PROCEDURE successfully completed.
 
SQL> spool task01_before_change.html
SQL> SELECT DBMS_SQLPA.REPORT_ANALYSIS_TASK('Task01', 'HTML', 'ALL', 'ALL') FROM dual;
SQL> spool off

We set the optimizer back to its default value and execute the task again (we can see that the plan chosen is one of the five initial plans associated with the query):

SQL> ALTER SESSION SET optimizer_features_enable='11.2.0.2';
 
SESSION altered.
 
SQL> EXEC DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(task_name => 'Task01', execution_type => 'TEST EXECUTE', execution_name => 'after_change');
 
PL/SQL PROCEDURE successfully completed.
 
SQL> spool task01_after_change.html
SQL> SELECT DBMS_SQLPA.REPORT_ANALYSIS_TASK('Task01', 'HTML', 'ALL', 'ALL') FROM dual;
SQL> spool off

We can now execute the task a third time and generate the compare performance report based on the two previous TEST EXECUTE runs:

SQL> EXEC DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(task_name => 'Task01', execution_type => 'COMPARE PERFORMANCE', execution_name => 'compare_performance');
 
PL/SQL PROCEDURE successfully completed.
 
SQL> spool task01_compare_performance.html
SQL> SELECT DBMS_SQLPA.REPORT_ANALYSIS_TASK('Task01', 'HTML', 'ALL', 'ALL') FROM dual;
SQL> spool off

SQL Performance Analyzer result

First let's check that everything executed successfully:

SQL> SELECT execution_name, execution_type, TO_CHAR(execution_start,'dd-mon-yyyy hh24:mi:ss') AS execution_start,
TO_CHAR(execution_end,'dd-mon-yyyy hh24:mi:ss') AS execution_end, advisor_name, status
FROM dba_advisor_executions
WHERE task_name='Task01';
 
EXECUTION_NAME EXECUTION_TYPE EXECUTION_START EXECUTION_END STATUS
------------------------------ ------------------------------ ----------------------------- ----------------------------- -----------
after_change TEST EXECUTE 18-nov-2011 16:18:01 18-nov-2011 16:18:18 COMPLETED
before_change TEST EXECUTE 18-nov-2011 16:16:39 18-nov-2011 16:17:11 COMPLETED
compare_performance COMPARE PERFORMANCE 18-nov-2011 16:18:54 18-nov-2011 16:18:57 COMPLETED
 
SQL> SELECT last_execution,execution_type,TO_CHAR(execution_start,'dd-mon-yyyy hh24:mi:ss') AS execution_start,
TO_CHAR(execution_end,'dd-mon-yyyy hh24:mi:ss') AS execution_end,status
FROM dba_advisor_tasks
WHERE task_name='Task01';
 
LAST_EXECUTION EXECUTION_TYPE EXECUTION_START EXECUTION_END STATUS
------------------------------ ------------------------------ ----------------------------- ----------------------------- -----------
compare_performance COMPARE PERFORMANCE 18-nov-2011 16:18:54 18-nov-2011 16:18:57 COMPLETED

We can get a quick overview of the plan comparison with:

SQL> col EXECUTION_NAME FOR a15
SQL> SELECT execution_name, plan_hash_value, parse_time, elapsed_time, cpu_time,user_io_time,buffer_gets,disk_reads,direct_writes,
physical_read_bytes,physical_write_bytes,rows_processed
FROM dba_advisor_sqlstats
WHERE task_name='Task01';
 
EXECUTION_NAME PLAN_HASH_VALUE PARSE_TIME ELAPSED_TIME CPU_TIME USER_IO_TIME BUFFER_GETS DISK_READS DIRECT_WRITES PHYSICAL_READ_BYTES PHYSICAL_WRITE_BYTES ROWS_PROCESSED
--------------- --------------- ---------- ------------ ---------- ------------ ----------- ---------- ------------- ------------------- -------------------- --------------
before_change 1328242299 40664 8630688 1831720 808827 135782 117208 0 960167936 0 60
after_change 2949292326 167884 1808470 988850 340845 57450 38114 0 312229888 0 60

We can also generate a text version of the two execution plans, but again the HTML version is much more readable:

SQL> col PLAN FOR a140
SQL> SET pages 500
SQL> SELECT p.plan_id, RPAD('(' || p.ID || ' ' || NVL(p.parent_id,'0') || ')',8) || '|' ||
RPAD(LPAD (' ', 2*p.DEPTH) || p.operation || ' ' || p.options,40,'.') ||
NVL2(p.object_owner||p.object_name, '(' || p.object_owner|| '.' || p.object_name || ') ', '') ||
'Cost:' || p.COST || ' ' || NVL2(p.bytes||p.CARDINALITY,'(' || p.bytes || ' bytes, ' || p.CARDINALITY || ' rows)','') || ' ' ||
NVL2(p.partition_id || p.partition_start || p.partition_stop,'PId:' || p.partition_id || ' PStart:' ||
p.partition_start || ' PStop:' || p.partition_stop,'') ||
'io cost=' || p.io_cost || ',cpu_cost=' || p.cpu_cost AS PLAN
FROM dba_advisor_sqlplans p
WHERE task_name='Task01'
ORDER BY p.plan_id, p.id, p.parent_id;
 
PLAN_ID PLAN
---------- --------------------------------------------------------------------------------------------------------------------------------------------
89713 (0 0) |SELECT STATEMENT .......................COST:11331 (207480 bytes, 1064 ROWS) io COST=11331,cpu_cost=
89713 (1 0) | SORT GROUP BY.........................COST:11331 (207480 bytes, 1064 ROWS) io COST=11331,cpu_cost=
89713 (2 1) | FILTER .............................COST: io COST=,cpu_cost=
89713 (3 2) | HASH JOIN ........................COST:11294 (207480 bytes, 1064 ROWS) io COST=11294,cpu_cost=
89713 (4 3) | TABLE ACCESS BY INDEX ROWID.....(GSNX.OM_SHIPMENT_LINE) COST:3 (130 bytes, 5 ROWS) io COST=3,cpu_cost=
89713 (5 4) | NESTED LOOPS .................COST:905 (166423 bytes, 1021 ROWS) io COST=905,cpu_cost=
89713 (6 5) | HASH JOIN ..................COST:275 (28770 bytes, 210 ROWS) io COST=275,cpu_cost=
89713 (7 6) | TABLE ACCESS FULL.........(GSNX.CORE_PARTY) COST:3 (4932 bytes, 274 ROWS) io COST=3,cpu_cost=
89713 (8 6) | HASH JOIN ................COST:271 (24990 bytes, 210 ROWS) io COST=271,cpu_cost=
89713 (9 8) | TABLE ACCESS FULL.......(GSNX.CORE_PARTY) COST:3 (4932 bytes, 274 ROWS) io COST=3,cpu_cost=
89713 (10 8) | TABLE ACCESS FULL.......(GSNX.OM_SHIPMENT) COST:267 (21210 bytes, 210 ROWS) io COST=267,cpu_cost=
89713 (11 5) | INDEX RANGE SCAN............(GSNX.OM_SHIPMENT_LINE_N1) COST:2 ( bytes, 6 ROWS) io COST=2,cpu_cost=
89713 (12 3) | VIEW ...........................(SYS.VW_NSO_1) COST:10385 (637184 bytes, 19912 ROWS) io COST=10385,cpu_cost=
89713 (13 12) | SORT UNIQUE...................COST:10385 (423284 bytes, 19912 ROWS) io COST=8900,cpu_cost=
89713 (14 13) | UNION-ALL ..................COST: io COST=,cpu_cost=
89713 (15 14) | SORT UNIQUE...............COST:8900 (190 bytes, 2 ROWS) io COST=8900,cpu_cost=
89713 (16 15) | FILTER .................COST: io COST=,cpu_cost=
89713 (17 16) | SORT GROUP BY.........COST:8900 (190 bytes, 2 ROWS) io COST=8900,cpu_cost=
89713 (18 17) | NESTED LOOPS .......COST:8892 (2755 bytes, 29 ROWS) io COST=8892,cpu_cost=
89713 (19 18) | HASH JOIN ........COST:8842 (1975 bytes, 25 ROWS) io COST=8842,cpu_cost=
89713 (20 19) | TABLE ACCESS FUL(GSNX.MFG_WIP) COST:8808 (166191 bytes, 5361 ROWS) io COST=8808,cpu_cost=
89713 (21 19) | INDEX FAST FULL (GSNX.OM_SHIPMENT_N2) COST:27 (1008432 bytes, 21009 ROWS) io COST=27,cpu_cost=
89713 (22 18) | INDEX RANGE SCAN..(GSNX.MFG_WIP_LOT_QTY_N1) COST:2 (16 bytes, 1 ROWS) io COST=2,cpu_cost=
89713 (23 16) | SORT AGGREGATE........COST: (9 bytes, 1 ROWS) io COST=,cpu_cost=
89713 (24 23) | INDEX RANGE SCAN....(GSNX.OM_SHIPMENT_LINE_N1) COST:3 (360 bytes, 40 ROWS) io COST=3,cpu_cost=
89713 (25 14) | MINUS ....................COST: io COST=,cpu_cost=
89713 (26 25) | SORT UNIQUE.............COST: (219010 bytes, 19910 ROWS) io COST=,cpu_cost=
89713 (27 26) | INDEX FAST FULL SCAN..(GSNX.OM_SHIPMENT_UK1) COST:19 (231099 bytes, 21009 ROWS) io COST=19,cpu_cost=
89713 (28 25) | SORT UNIQUE.............COST: (204084 bytes, 22676 ROWS) io COST=,cpu_cost=
89713 (29 28) | INDEX FAST FULL SCAN..(GSNX.MFG_WIP_N5) COST:1296 (518760 bytes, 57640 ROWS) io COST=1296,cpu_cost=
89713 (30 2) | FILTER ...........................COST: io COST=,cpu_cost=
89713 (31 30) | TABLE ACCESS BY INDEX ROWID.....(GSNX.MFG_WIP) COST:4 (19 bytes, 1 ROWS) io COST=4,cpu_cost=
89713 (32 31) | INDEX RANGE SCAN..............(GSNX.MFG_WIP_N5) COST:3 ( bytes, 1 ROWS) io COST=3,cpu_cost=
89714 (0 0) |SELECT STATEMENT .......................COST:19324 (252720 bytes, 1296 ROWS) io COST=19260,cpu_cost=663547063
89714 (1 0) | SORT GROUP BY.........................COST:19324 (252720 bytes, 1296 ROWS) io COST=19260,cpu_cost=663547063
89714 (2 1) | FILTER .............................COST: io COST=,cpu_cost=
89714 (3 2) | HASH JOIN ........................COST:16730 (252720 bytes, 1296 ROWS) io COST=16668,cpu_cost=633246741
89714 (4 3) | NESTED LOOPS ...................COST: io COST=,cpu_cost=
89714 (5 4) | NESTED LOOPS .................COST:1248 (166423 bytes, 1021 ROWS) io COST=1240,cpu_cost=81128743
89714 (6 5) | HASH JOIN ..................COST:617 (28770 bytes, 210 ROWS) io COST=610,cpu_cost=74949007
89714 (7 6) | VIEW .....................(GSNX.INDEX$_join$_004) COST:3 (4932 bytes, 274 ROWS) io COST=2,cpu_cost=5340263
89714 (8 7) | HASH JOIN ..............COST: io COST=,cpu_cost=
89714 (9 8) | INDEX FAST FULL SCAN..(GSNX.CORE_PARTY_PK) COST:1 (4932 bytes, 274 ROWS) io COST=1,cpu_cost=77402
89714 (10 8) | INDEX FAST FULL SCAN..(GSNX.CORE_PARTY_UK2) COST:1 (4932 bytes, 274 ROWS) io COST=1,cpu_cost=77402
89714 (11 6) | HASH JOIN ................COST:614 (24990 bytes, 210 ROWS) io COST=608,cpu_cost=64398724
89714 (12 11) | VIEW ...................(GSNX.INDEX$_join$_003) COST:3 (4932 bytes, 274 ROWS) io COST=2,cpu_cost=5340263
89714 (13 12) | HASH JOIN ............COST: io COST=,cpu_cost=
89714 (14 13) | INDEX FAST FULL SCAN(GSNX.CORE_PARTY_PK) COST:1 (4932 bytes, 274 ROWS) io COST=1,cpu_cost=77402
89714 (15 13) | INDEX FAST FULL SCAN(GSNX.CORE_PARTY_UK2) COST:1 (4932 bytes, 274 ROWS) io COST=1,cpu_cost=77402
89714 (16 11) | TABLE ACCESS FULL.......(GSNX.OM_SHIPMENT) COST:611 (21210 bytes, 210 ROWS) io COST=606,cpu_cost=53848440
89714 (17 5) | INDEX RANGE SCAN............(GSNX.OM_SHIPMENT_LINE_N1) COST:2 ( bytes, 6 ROWS) io COST=2,cpu_cost=16293
89714 (18 4) | TABLE ACCESS BY INDEX ROWID...(GSNX.OM_SHIPMENT_LINE) COST:3 (130 bytes, 5 ROWS) io COST=3,cpu_cost=29445
89714 (19 3) | VIEW ...........................(SYS.VW_NSO_1) COST:15481 (808672 bytes, 25271 ROWS) io COST=15428,cpu_cost=544289828
89714 (20 19) | HASH UNIQUE...................COST:15481 (685783 bytes, 25271 ROWS) io COST=12215,cpu_cost=193587837
89714 (21 20) | UNION-ALL ..................COST: io COST=,cpu_cost=
89714 (22 21) | HASH UNIQUE...............COST:12234 (262689 bytes, 5361 ROWS) io COST=12215,cpu_cost=193587837
89714 (23 22) | FILTER .................COST: io COST=,cpu_cost=
89714 (24 23) | HASH JOIN ............COST:12230 (262689 bytes, 5361 ROWS) io COST=12212,cpu_cost=180277196
89714 (25 24) | TABLE ACCESS BY INDE(GSNX.MFG_WIP) COST:12169 (128664 bytes, 5361 ROWS) io COST=12152,cpu_cost=167815964
89714 (26 25) | INDEX SKIP SCAN...(GSNX.MFG_WIP_N2) COST:647 ( bytes, 57640 ROWS) io COST=645,cpu_cost=16123961
89714 (27 24) | INDEX FAST FULL SCAN(GSNX.OM_SHIPMENT_N2) COST:60 (525225 bytes, 21009 ROWS) io COST=60,cpu_cost=4408262
89714 (28 23) | SORT AGGREGATE........COST: (9 bytes, 1 ROWS) io COST=,cpu_cost=
89714 (29 28) | INDEX RANGE SCAN....(GSNX.OM_SHIPMENT_LINE_N1) COST:3 (54 bytes, 6 ROWS) io COST=3,cpu_cost=22564
89714 (30 23) | SORT AGGREGATE........COST: (10 bytes, 1 ROWS) io COST=,cpu_cost=
89714 (31 30) | INDEX RANGE SCAN....(GSNX.MFG_WIP_LOT_QTY_N1) COST:3 (10 bytes, 1 ROWS) io COST=3,cpu_cost=21764
89714 (32 21) | MINUS ....................COST: io COST=,cpu_cost=
89714 (33 32) | SORT UNIQUE.............COST: (219010 bytes, 19910 ROWS) io COST=,cpu_cost=
89714 (34 33) | INDEX FAST FULL SCAN..(GSNX.OM_SHIPMENT_UK1) COST:42 (231099 bytes, 21009 ROWS) io COST=42,cpu_cost=3824304
89714 (35 32) | SORT UNIQUE.............COST: (204084 bytes, 22676 ROWS) io COST=,cpu_cost=
89714 (36 35) | INDEX FAST FULL SCAN..(GSNX.MFG_WIP_N5) COST:2972 (518760 bytes, 57640 ROWS) io COST=2946,cpu_cost=267746021
89714 (37 2) | FILTER ...........................COST: io COST=,cpu_cost=
89714 (38 37) | TABLE ACCESS BY INDEX ROWID.....(GSNX.MFG_WIP) COST:4 (19 bytes, 1 ROWS) io COST=4,cpu_cost=29946
89714 (39 38) | INDEX RANGE SCAN..............(GSNX.MFG_WIP_N5) COST:3 ( bytes, 1 ROWS) io COST=3,cpu_cost=21614
 
73 ROWS selected.

The view below shows the potential improvement (or regression; the colors in the report are self-explanatory), 79% in our case. So clearly, moving this database from 9iR2 to 11gR2 would give this particular query a large gain with no effort. Obviously this single query is not representative of anything, and you would need to add many more statements to your STS to be in a better position before an upgrade. Of course, RAT and workload capture can be a great help for such a task:

SQL> col message FOR a80
SQL> col FINDING_NAME FOR a30
SQL> col EXECUTION_NAME FOR a20
SQL> SELECT execution_name,finding_name,TYPE,impact,message FROM dba_advisor_findings WHERE task_name='Task01';
 
EXECUTION_NAME FINDING_NAME TYPE IMPACT MESSAGE
-------------------- ------------------------------ ----------- ---------- --------------------------------------------------------------------------------
compare_performance normal, SUCCESSFUL completion INFORMATION 0 The structure OF the SQL PLAN IN execution 'before_change' IS different than its
corresponding PLAN which IS stored IN the SQL Tuning SET.
 
compare_performance normal, SUCCESSFUL completion SYMPTOM 0 The structure OF the SQL execution PLAN has changed.
compare_performance normal, SUCCESSFUL completion INFORMATION 79.0460506 The performance OF this SQL has improved.

Finally, all the reports are generated.

References

  • Using SQL Performance Analyzer to Test SQL Performance Impact of 9i to 10gR2 Upgrade [ID 562899.1]
  • SQL PERFORMANCE ANALYZER EXAMPLE [ID 455889.1]

##############ref 4

http://czmmiao.iteye.com/blog/1914603

The execution_type parameter of the EXECUTE_ANALYSIS_TASK procedure can take one of the following three values:
TEST EXECUTE: Executes all SQL statements in the captured SQL workload. The database executes only the query portion of DML statements, in order to avoid adversely impacting user data or the database itself. It generates both execution plans and execution statistics (for example, disk reads and buffer gets).
COMPARE PERFORMANCE: Compares the performance between two executions of the workload analysis.
EXPLAIN PLAN: Generates SQL execution plans only, without actually executing the statements.

When the workload is test-executed, EXECUTE_ANALYSIS_TASK runs only the query portion of DML statements and ignores DDL statements, to avoid unduly affecting the test data.
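As a minimal sketch of the plans-only mode (assuming the SPA task created earlier is still held in the :v_task bind variable; the execution name plans_only is purely illustrative):

BEGIN
  DBMS_SQLPA.execute_analysis_task(
    task_name       => :v_task,
    execution_type  => 'explain plan',     -- generate execution plans only, nothing is executed
    execution_name  => 'plans_only');
END;
/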
Now that we have the "before" performance information, we need to make a change so we can test the "after" performance. For this example we will simply add an index to the test table on the OBJECT_ID column. In a new SQL*Plus session, create the index using the following statements.
CONN spa_test_user/spa_test_user
CREATE INDEX my_objects_index_01 ON my_objects(object_id);
EXEC DBMS_STATS.gather_table_stats(USER, 'MY_OBJECTS', cascade => TRUE);
Now, we can return to our original session and test the performance after the database change. Once again use the EXECUTE_ANALYSIS_TASK procedure, naming the analysis task "after_change".
BEGIN
  DBMS_SQLPA.execute_analysis_task(
    task_name       => :v_task,
    execution_type  => 'test execute',
    execution_name  => 'after_change');
END;
/
Once the before and after analysis tasks are complete, we must run a comparison analysis task. The following code explicitly names the analysis tasks to compare using name-value pairs in the EXECUTION_PARAMS parameter. If this is omitted, the latest two analysis runs are compared.
BEGIN
  DBMS_SQLPA.execute_analysis_task(
    task_name        => :v_task,
    execution_type   => 'compare performance',
    execution_params => dbms_advisor.arglist(
                          'execution_name1',
                          'before_change',
                          'execution_name2',
                          'after_change')
    );
END;
/
With this final analysis run complete, we can check out the comparison report using the REPORT_ANALYSIS_TASK function. The function returns a CLOB containing the report in 'TEXT', 'XML' or 'HTML' format. Its usage is shown below.
Note. Oracle 11gR2 also includes an 'ACTIVE' format that looks more like the Enterprise Manager output.
SET PAGESIZE 0
SET LINESIZE 1000
SET LONG 1000000
SET LONGCHUNKSIZE 1000000
SET TRIMSPOOL ON
SET TRIM ON
SPOOL /tmp/execute_comparison_report.htm
SELECT DBMS_SQLPA.report_analysis_task(:v_task, 'HTML', 'ALL')
FROM   dual;
SPOOL OFF
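For the 'ACTIVE' format mentioned in the note above, the same call can be reused with a different format argument; a small sketch (the spool file name is arbitrary):

SET PAGESIZE 0
SET LINESIZE 1000
SET LONG 1000000
SET LONGCHUNKSIZE 1000000
SET TRIMSPOOL ON
SPOOL /tmp/execute_comparison_report_active.html
-- 'ACTIVE' produces an interactive HTML report similar to the Enterprise Manager pages
SELECT DBMS_SQLPA.report_analysis_task(:v_task, 'ACTIVE', 'ALL')
FROM   dual;
SPOOL OFF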

#####

I'm more the command-line type of person. Once I've understood what's going on behind the curtains, I'll certainly switch to the GUI click-click tools. But in the case of Real Application Testing - even though the support via the OEM GUI is excellent - sometimes I prefer to run my procedures from the command line and check my reports in the browser.

Recently Thomas, a colleague from Oracle ACS Support, and I were asking ourselves about the different comparison metrics available for SQL Performance Analyzer reporting. We scanned the documentation but found only examples, not a complete list. Then we asked a colleague, but thanks to OEM we got an incomplete list as well.

Finally Thomas dug it out - it's stored in the dictionary, in the view V$SQLPA_METRIC:

SQL> SELECT metric_name FROM v$sqlpa_metric;

METRIC_NAME   
-------------------------
PARSE_TIME               
ELAPSED_TIME             
CPU_TIME                 
USER_IO_TIME             
BUFFER_GETS              
DISK_READS               
DIRECT_WRITES            
OPTIMIZER_COST           
IO_INTERCONNECT_BYTES

9 rows selected.

What do you do with these metrics now?

You can use them like this:

set timing on

begin
dbms_sqlpa.execute_analysis_task(
   task_name=>'SPA_TASK_MR07PLP_11107_12102',
   execution_name=>'Compare workload Elapsed',
   execution_type=>'compare performance',
   execution_params=>dbms_advisor.arglist(
                     'comparison_metric','elapsed_time',
                     'execution_name1','EXEC_SPA_TASK_MR07PLP_11107',
                     'execution_name2','TEST 11107 workload'),
   execution_desc=>'Compare 11107 Workload on 12102 Elapsed');
end;
/

You can replace elapsed_time in my example with any of the other comparison metrics listed in V$SQLPA_METRIC, as sketched below.
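For instance, a comparison on buffer gets rather than elapsed time could look like this (a sketch only; the task and execution names are carried over from the example above and would need to match your own environment):

begin
dbms_sqlpa.execute_analysis_task(
   task_name=>'SPA_TASK_MR07PLP_11107_12102',
   execution_name=>'Compare workload Buffer Gets',
   execution_type=>'compare performance',
   execution_params=>dbms_advisor.arglist(
                     'comparison_metric','buffer_gets',
                     'execution_name1','EXEC_SPA_TASK_MR07PLP_11107',
                     'execution_name2','TEST 11107 workload'),
   execution_desc=>'Compare 11107 Workload on 12102 Buffer Gets');
end;
/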

--Mike

###############5

How to Load Queries into a SQL Tuning Set (STS)

SOLUTION

NOTE: The example provided below works successfully on 11g but may not work on 10g. In that case, just list the values positionally, without any parameter names, similar to the following:

select value(p) from table(dbms_sqltune.select_cursor_cache('sql_id =''fgtq4z4vb0xx5''',NULL,NULL,NULL,NULL,1,NULL,'ALL')) p;

Create a SQL Tuning Set:

EXEC dbms_sqltune.create_sqlset('mysts');

Load SQL into the STS

1. From Cursor Cache

1) To load a query with a specific sql_id

DECLARE 
cur sys_refcursor; 
BEGIN 
open cur for 
select value(p) from table(dbms_sqltune.select_cursor_cache('sql_id = ''fgtq4z4vb0xx5''')) p; 
dbms_sqltune.load_sqlset('mysts', cur); 
close cur;
END; 
/

2) To load queries with a specific query string and more than 1,000 buffer_gets

DECLARE 
cur sys_refcursor;
BEGIN 
open cur for 
select value(p) from table(dbms_sqltune.select_cursor_cache('sql_text like ''%querystring%'' and buffer_gets > 1000')) p; 
dbms_sqltune.load_sqlset('mysts', cur); 
close cur;
END; 
/


2. From AWR Snapshots

1) Find the two snapshots you want

select snap_id, begin_interval_time, end_interval_time from dba_hist_snapshot order by 1;

2) To load all the queries between two snapshots

DECLARE 
cur sys_refcursor; 
BEGIN 
open cur for 
select value(p) from table(dbms_sqltune.select_workload_repository(begin_snap => 2245, end_snap => 2248)) p; 
dbms_sqltune.load_sqlset('mysts', cur); 
close cur;
END; 
/

3) To load a query with a specific sql_id and plan_hash_value

DECLARE 
cur sys_refcursor; 
BEGIN 
open cur for 
select value(p) from table(dbms_sqltune.select_workload_repository(begin_snap => 2245, end_snap => 2248, basic_filter => 'sql_id = ''fgtq4z4vb0xx5'' and plan_hash_value = 431456802')) p; 
dbms_sqltune.load_sqlset('mysts', cur); 
close cur;
END; 
/
NOTE: "basic_filter" is the SQL predicate to filter the SQL from the cursor cache defined on attributes of the SQLSET_ROW.  If basic_filter is not set by the caller, the subprogram captures only statements of the type CREATE TABLE, INSERT, SELECT, UPDATE, DELETE, and MERGE.

CREATE TYPE sqlset_row AS object (
sql_id VARCHAR(13),
force_matching_signature NUMBER,
sql_text CLOB,
object_list sql_objects,
bind_data RAW(2000),
parsing_schema_name VARCHAR2(30),
module VARCHAR2(48),
action VARCHAR2(32),
elapsed_time NUMBER,
cpu_time NUMBER,
buffer_gets NUMBER,
disk_reads NUMBER,
direct_writes NUMBER,
rows_processed NUMBER,
fetches NUMBER,
executions NUMBER,
end_of_fetch_count NUMBER,
optimizer_cost NUMBER,
optimizer_env RAW(2000),
priority NUMBER,
command_type NUMBER,
first_load_time VARCHAR2(19),
stat_period NUMBER,
active_stat_period NUMBER,
other CLOB,
plan_hash_value NUMBER,
sql_plan sql_plan_table_type,
bind_list sql_binds) ;
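As a small illustration of filtering on these SQLSET_ROW attributes, the sketch below loads from the cursor cache only the statements parsed by a given schema with more than one second of total elapsed time (the schema name and threshold are arbitrary):

DECLARE 
cur sys_refcursor; 
BEGIN 
open cur for 
-- the filter is a predicate on SQLSET_ROW attributes such as parsing_schema_name and elapsed_time (microseconds)
select value(p) from table(dbms_sqltune.select_cursor_cache('parsing_schema_name = ''HR'' and elapsed_time > 1000000')) p; 
dbms_sqltune.load_sqlset('mysts', cur); 
close cur;
END; 
/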

3. From an AWR Baseline

1) Find the baseline you want to load

select baseline_name, start_snap_id, end_snap_id from dba_hist_baseline;

2) Load queries from the baseline

DECLARE 
cur sys_refcursor; 
BEGIN 
open cur for 
select value(p) from table(dbms_sqltune.select_workload_repository('MY_BASELINE')) p; 
dbms_sqltune.load_sqlset('mysts', cur); 
close cur;
END; 
/

4. From another SQL Tuning Set

1) Find the SQL Tuning Set you want to load

select name, owner, statement_count from dba_sqlset;

2) Load queries from the SQL Tuning Set

DECLARE 
cur sys_refcursor; 
BEGIN 
open cur for 
select value(p) from table(dbms_sqltune.select_sqlset(sqlset_name => 'HR_STS', sqlset_owner => 'HR', basic_filter => 'sql_text like ''%querystring%''')) p; 
dbms_sqltune.load_sqlset('mysts', cur); 
close cur;
END; 
/

5. From 10046 trace files (11g+)

1) Loading into a SQL Tuning Set in the same database that it originated from

i. Create a directory object for the directory where the trace files are.

create directory my_dir as '/home/oracle/trace';

ii. Load the queries

DECLARE 
cur sys_refcursor; 
BEGIN 
open cur for 
select value(p) from table(dbms_sqltune.select_sql_trace(directory=>'MY_DIR', file_name=>'%.trc')) p;
dbms_sqltune.load_sqlset('mysts', cur); 
close cur;
END; 
/

2) Loading into a SQL Tuning Set in a different database

i. Create a mapping table from the database where the trace files were captured.

create table mapping as
select object_id id, owner, substr(object_name, 1, 30) name
from dba_objects
union all
select user_id id, username owner, null name
from dba_users;

ii. Copy the trace files into a directory on the target server and create a directory object for that directory. Then import the mapping table into the target database (one possible way is sketched below).

create directory my_dir as '/home/oracle/trace';
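One possible way to move the mapping table is Data Pump (a sketch only; the HR credentials, the MY_DIR directory object and the file names are illustrative and must match your environment):

On the source server, export the mapping table:
expdp hr/hr tables=MAPPING directory=MY_DIR dumpfile=mapping.dmp logfile=mapping_exp.log

On the target server, import it:
impdp hr/hr tables=MAPPING directory=MY_DIR dumpfile=mapping.dmp logfile=mapping_imp.log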

iii. Specify the mapping table when loading the queries.

DECLARE 
cur sys_refcursor; 
BEGIN 
open cur for 
select value(p) from table(dbms_sqltune.select_sql_trace(directory=>'MY_DIR', file_name=>'%.trc', mapping_table_name=> 'MAPPING', mapping_table_owner=> 'HR')) p;
dbms_sqltune.load_sqlset('mysts', cur); 
close cur;
END; 
/

-----spa

ORA-13757: "SQL Tuning Set" "OCMHU_STS" owned by user "SYS" is active.

Workaround:

http://blog.itpub.net/14359/viewspace-1253599/

STEP 1:
select owner, description, created, last_modified, task_name
from DBA_ADVISOR_TASKS where owner = 'DBMGR';

STEP 2:

You can drop the SQL tuning set by issuing the following command:
SQL>BEGIN
DBMS_SQLTUNE.DROP_SQLSET(
sqlset_name => 'STS_SQLSET'
);
END;
/
PL/SQL procedure successfully completed.

STEP 3:

Check the number of dependent tasks using the query below.
SQL> SELECT count(*)
FROM wri$_sqlset_definitions d, wri$_sqlset_references r
WHERE d.name = 'STS_SQLSET'
AND r.sqlset_id = d.id;
COUNT(*)
----------
0
In the normal case the output of this query should be "0". In this case it still reflects the advisor task "SPA01", which has not been deleted from the database.
You then need to manually remove the entry that references the SQL tuning task from the underlying dictionary table:
conn / as sysdba
delete from wri$_sqlset_references
where sqlset_id in (select id
from wri$_sqlset_definitions
where name in ('STS_SQLSET'));   -- the SQL tuning set name, not the advisor task name
commit;
This deletes the dictionary entry and removes the dependency from the SQL tuning set.

STEP 4:

To check all the dependent advisor tasks attached to the SQL tuning set, issue the following command:
SQL> SET WRAP OFF
SQL> SET LINE 140
SQL> COL NAME FOR A15
SQL> COL DESCRIPTION FOR A50 WRAPPED
SQL>
SQL> select description, created, owner
from DBA_SQLSET_REFERENCES
where sqlset_name ='STS_SQLSET';
DESCRIPTION CREATED OWNER
-------------------------------------------------- ------------------- ----------
created by: SQL Performance Analyzer - task: SPA01 2014-08-05 16:22:27 SYS
From the above output we can see that task "SPA01" depends on the SQL tuning set "OCMHU_STS". So, if you want to drop the SQL tuning set "OCMHU_STS", you first need to drop task "SPA01".
NOTE: Think before dropping a SQL tuning task: if you drop it, all the information about SQL profiles, statistics and indexes related to this advisor task will be deleted.
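Before dropping it, you can review what the task has recorded by reusing the DBA_ADVISOR_FINDINGS query shown earlier (task name SPA01 as in this example):

select finding_name, type, impact, message
from dba_advisor_findings
where task_name = 'SPA01';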
To check the details of the SQL Performance Analyzer task "SPA01", you can issue the following command:
a. Normal output:
SQL> SET WRAP OFF
SQL> SET LINE 140
SQL> COL NAME FOR A15
SQL> COL OWNER FOR A10
SQL> COL DESCRIPTION FOR A50 WRAPPED
SQL>
SQL> select owner,description, created,last_modified
from DBA_ADVISOR_TASKS
where task_name = 'TASK_10G';
OWNER DESCRIPTION CREATED LAST_MODIFIED
---------- -------------------------------------------------- ------------------- -------------------
SYS 2014-08-05 16:22:26 2014-08-05 17:25:12
If you get output like the above, you can solve the issue by dropping the above-mentioned SQL advisor task. The command for dropping a SQL advisor task is shown below.

SQL> execute dbms_sqltune.drop_tuning_task('TASK_10G');
PL/SQL procedure successfully completed.
b. Issue-based output:
