[Repost] Diagnosing Oracle Data Pump Failures with the Hidden Trace Parameter
http://blog.itpub.net/17203031/viewspace-772718/
Data Pump is the data export/import component Oracle introduced in 10g to replace the traditional exp/imp utilities. After several releases of refinement it has become very mature, and more and more DBAs and operations staff have adopted it.
Compared with traditional exp/imp, Data Pump offers many advantages, and it is also more complex. Its most distinctive characteristic is that it runs server-side. Exp/imp are small client-side tools: convenient to use, but you must handle version-compatibility issues among four components, the server and the client on both the source and the target. That is why so many people online struggle with exp/imp version mismatches. Moreover, because exp/imp run on the client, they are very sensitive to the network; if an operation runs long and the network is unstable, it may simply end in failure. Exp/imp also fall short in performance, stability, and feature support.
Data Pump runs on the server, which by itself reduces the chance of version problems; even when they do occur, the VERSION parameter provides effective control. In addition, because the operation runs as an independent job, unexpected interruptions can be avoided.
Even so, we still regularly run into Data Pump failures and problems, and often the error messages alone are not enough for a complete diagnosis. In such cases we can use Data Pump's hidden Trace parameter to generate trace files and track the error down step by step.
1. How Data Pump Works, and Preparing the Environment
Data Pump's operation has two notable characteristics: job scheduling and multi-process cooperation. Oracle handles each Data Pump operation as a dedicated job, which can be started, stopped, and paused; more importantly, the job runs independently of the invoking user. Unlike with exp/imp, you neither have to sit watching the screen nor wrap the command with nohup ... & to push it into the background: the operation runs in the background automatically.
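For instance, you can reattach to a running job from any client session and control it interactively. A minimal sketch; the job name below is whatever name Data Pump reports in its log, here the SYS_EXPORT_SCHEMA_01 used later in this article:
[oracle@SimpleLinux ~]$ expdp \"/ as sysdba\" attach=SYS_EXPORT_SCHEMA_01
Export> STATUS
Export> STOP_JOB=IMMEDIATE
-- later, reattach with the same ATTACH option and resume:
Export> START_JOB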
At run time, Data Pump is a cooperation of multiple processes. As the job log shows, every Data Pump job creates a job (master) table when it starts, in which the progress of the operation is recorded. Two kinds of processes do the work: the master control process (MCP), which coordinates the overall run, manages the worker process pool, and assigns tasks; and the worker processes, which actually perform the export or import. If the PARALLEL parameter is set, several worker processes handle the data concurrently.
Diagnosing Data Pump is, in essence, tracing the behavior of these processes, and Oracle provides a hidden Trace parameter to help us do exactly that.
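Before any tracing, a quick look at the dictionary confirms which Data Pump jobs exist and their state (a minimal query against the standard DBA_DATAPUMP_JOBS view):
SQL> select owner_name, job_name, operation, job_mode, state, degree from dba_datapump_jobs;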
First, let's prepare the Data Pump working environment, beginning with the Directory object.
[root@SimpleLinux /]# ls -l | grep dumpdata
drwxr-xr-x 2 root root 4096 Sep 11 09:01 dumpdata
[root@SimpleLinux /]# chown -R oracle:oinstall dumpdata/
[root@SimpleLinux /]# ls -l | grep dumpdata
drwxr-xr-x 2 oracle oinstall 4096 Sep 11 09:01 dumpdata
-- Create the directory object
SQL> select * from v$version where rownum<2;
BANNER
-----------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - Production
SQL> create directory dumpdir as '/dumpdata';
Directory created
2. The Hidden Trace Parameter
Trace is a parameter that Data Pump uses internally and does not document. It is specified in the same way as any other Data Pump parameter, but its value deserves some attention. Below is the Trace command used in our experiment.
[oracle@SimpleLinux dumpdata]$ expdp \"/ as sysdba\" directory=dumpdir schemas=scott dumpfile=scott_dump.dmp parallel=2 trace=480300
Export: Release 11.2.0.3.0 - Production on Wed Sep 11 09:45:07 2013
Unlike most tracing facilities, Trace is not a y/n switch that turns tracing on or off. The Data Pump Trace parameter is a string of seven hexadecimal digits, and different values trace different components in different ways. The seven digits fall into two parts: the first three select the specific Data Pump component(s) to trace, and for the last four digits 0300 is sufficient.
According to the material Oracle provides on MOS, the Trace value follows these rules:
- Do not enter more than 7 characters;
- Do not prefix the value with 0x to mark it as hexadecimal;
- Do not convert the hexadecimal value to its decimal equivalent;
- If the 7-digit value starts with 0, the leading 0 may be omitted;
- The value is case-insensitive.
Each component is represented by its own three-digit hexadecimal value, as shown in the following excerpt:
-- Summary of Data Pump trace levels:
-- ==================================
Trace DM DW ORA Lines
level trc trc trc in
(hex) file file file trace Purpose
------- ---- ---- ---- ------ -----------------------------------------------
10300 x x x SHDW: To trace the Shadow process (API) (expdp/impdp)
20300 x x x KUPV: To trace Fixed table
40300 x x x 'div' To trace Process services
80300 x KUPM: To trace Master Control Process (MCP) (DM)
100300 x x KUPF: To trace File Manager
200300 x x x KUPC: To trace Queue services
400300 x KUPW: To trace Worker process(es) (DW)
800300 x KUPD: To trace Data Package
1000300 x META. To trace Metadata Package
--- +
1FF0300 x x x 'all' To trace all components (full tracing)
To trace several components at once, add up the target components' hex values; the trailing four digits stay 0300.
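For example, the 480300 used throughout this article is the sum of the MCP and Worker components: 8 + 40 = 48 in hex, with the fixed 0300 suffix. Adding the shadow-process component (10300) would give 490300. An illustrative invocation, differing from the earlier command only in the trace value:
-- 80300 (MCP) + 400300 (Worker)            -> trace=480300
-- 10300 (Shadow) + 80300 + 400300          -> trace=490300
expdp \"/ as sysdba\" directory=dumpdir schemas=scott dumpfile=scott_dump.dmp trace=490300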
The trace files a Data Pump job generates are not fundamentally different from any other trace files; by default they are written to the BACKGROUND_DUMP_DEST directory. Note, however, that one Data Pump trace run produces several trace files, and locating them requires knowing the process IDs of the dm and dw processes.
One approach the author suggests: if the system is not particularly busy, temporarily move the existing .trc and .trm files in that directory somewhere else before running the traced job; the newly generated files are then obvious.
As for the value to use, Oracle suggests 480300 for most situations: it traces the job's Master Control Process (MCP) and its Worker processes. As an initial tracing pass, 480300 is generally all you need.
3. Tracing Expdp
Let's look at Trace from the export (expdp) side first, with a worked example. Start by clearing the trace file directory.
[oracle@SimpleLinux trace]$ rm *.trc
[oracle@SimpleLinux trace]$ rm *.trm
[oracle@SimpleLinux trace]$ ls -l
total 92
-rw-r----- 1 oracle oinstall 86384 Sep 11 09:37 alert_ora11g.log
Invoke the command, exporting with a parallel degree of two.
[oracle@SimpleLinux dumpdata]$ expdp \"/ as sysdba\" directory=dumpdir schemas=scott dumpfile=scott_dump.dmp parallel=2 trace=480300
Export: Release 11.2.0.3.0 - Production on Wed Sep 11 09:45:07 2013
Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SYS"."SYS_EXPORT_SCHEMA_01": "/******** AS SYSDBA" directory=dumpdir schemas=scott dumpfile=scott_dump.dmp parallel=2 trace=480300
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 32.18 MB
Processing object type SCHEMA_EXPORT/USER
. . exported "SCOTT"."T_MASTER":"P1" 42.43 KB 982 rows
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
(some output omitted for brevity ...)
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
. . exported "SCOTT"."T_MASTER":"P2" 88.69 KB 1859 rows
. . exported "SCOTT"."T_SLAVE":"P1" 412.2 KB 11268 rows
. . exported "SCOTT"."T_SLAVE":"P2" 975.7 KB 21120 rows
. . exported "SCOTT"."DEPT" 5.929 KB 4 rows
. . exported "SCOTT"."EMP" 8.562 KB 14 rows
. . exported "SCOTT"."SALGRADE" 5.859 KB 5 rows
. . exported "SCOTT"."BONUS" 0 KB 0 rows
Master table "SYS"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYS.SYS_EXPORT_SCHEMA_01 is:
/dumpdata/scott_dump.dmp
Job "SYS"."SYS_EXPORT_SCHEMA_01" successfully completed at 09:45:36
The log shows one small difference PARALLEL makes: the partition T_MASTER.P1 was exported ahead of the other objects.
The newly generated trace files:
[oracle@SimpleLinux trace]$ ls -l
total 260
-rw-r----- 1 oracle oinstall 87421 Sep 11 09:45 alert_ora11g.log
-rw-r----- 1 oracle oinstall 40784 Sep 11 09:45 ora11g_dm00_3894.trc
-rw-r----- 1 oracle oinstall 1948 Sep 11 09:45 ora11g_dm00_3894.trm
-rw-r----- 1 oracle oinstall 73971 Sep 11 09:45 ora11g_dw00_3896.trc
-rw-r----- 1 oracle oinstall 1986 Sep 11 09:45 ora11g_dw00_3896.trm
-rw-r----- 1 oracle oinstall 27366 Sep 11 09:45 ora11g_dw01_3898.trc
-rw-r----- 1 oracle oinstall 982 Sep 11 09:45 ora11g_dw01_3898.trm
-rw-r----- 1 oracle oinstall 3016 Sep 11 09:45 ora11g_ora_3890.trc
-rw-r----- 1 oracle oinstall 209 Sep 11 09:45 ora11g_ora_3890.trm
The files tagged dm and dw are the trace files of the MCP and the Worker processes; because of the PARALLEL setting there are two workers, dw00 and dw01.
During the export we can also see the two workers' session information.
SQL> select * from dba_datapump_sessions;
OWNER_NAME JOB_NAME INST_ID SADDR SESSION_TYPE
------------------------------ ------------------------------ ---------- -------- --------------
SYS SYS_EXPORT_SCHEMA_01 1 35EB0580 DBMS_DATAPUMP
SYS SYS_EXPORT_SCHEMA_01 1 35E95280 MASTER
SYS SYS_EXPORT_SCHEMA_01 1 35E8A480 WORKER
SYS SYS_EXPORT_SCHEMA_01 1 35E84D80 WORKER
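To find the matching dm/dw trace files, map these sessions to their OS process IDs; the PID is embedded in the trace file name (e.g. ora11g_dm00_<spid>.trc). A minimal query against the standard views:
select d.session_type, s.sid, p.spid
from dba_datapump_sessions d, v$session s, v$process p
where d.saddr = s.saddr and s.paddr = p.addr;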
At this point the trace files show us details of how Data Pump works. For example, in the MCP trace file we can see a series of dispatch calls, as in the following fragments:
-- Initialize the export and set up the file system
KUPM:09:45:08.720: ****IN DISPATCH at 35108, request type=1001
KUPM:09:45:08.721: Current user is: SYS
KUPM:09:45:08.721: hand := DBMS_DATAPUMP.OPEN ('EXPORT', 'SCHEMA', '', 'SYS_EXPORT_SCHEMA_01', '', '2');
KUPM:09:45:08.791: Resumable enabled
KUPM:09:45:08.799: Entered state: DEFINING
KUPM:09:45:08.799: initing file system
*** 2013-09-11 09:45:08.893
KUPM:09:45:08.893: ****OUT DISPATCH, request type=1001, response type =2041
-- Write a log message
KUPM:09:45:12.135: ****IN DISPATCH at 35112, request type=3031
KUPM:09:45:12.135: Current user is: SYS
KUPM:09:45:12.136: Log message received from worker DG,KUPC$C_1_20130911094507,KUPC$A_1_094510040559000,MCP,3,Y
KUPM:09:45:12.136: Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
kwqberlst rqan->lascn_kwqiia > 0 block
kwqberlst rqan->lascn_kwqiia 4
kwqberlst ascn 986758 lascn 0
KUPM:09:45:12.137: ****OUT DISPATCH, request type=3031, response type =2041
In the Worker process traces, fragments like the following show the data being exported.
KUPW:09:45:12.153: 1:
KUPW:09:45:12.153: 1:
KUPW:09:45:12.153: 1: TABLE
KUPW:09:45:12.153: 1: SCOTT
KUPW:09:45:12.153: 1: DEPT
KUPW:09:45:12.154: 1: In procedure LOCATE_DATA_FILTERS
KUPW:09:45:12.154: 1: In function NEXT_PO_NUMBER
KUPW:09:45:12.161: 1: In procedure DETERMINE_METHOD_PARALLEL
KUPW:09:45:12.161: 1: flags mask: 0
KUPW:09:45:12.161: 1: dapi_possible_meth: 1
KUPW:09:45:12.161: 1: data_size: 65536
KUPW:09:45:12.161: 1: et_parallel: TRUE
KUPW:09:45:12.161: 1: object: TABLE_DATA:"SCOTT"."DEPT"
KUPW:09:45:12.164: 1: l_dapi_bit_mask: 7
KUPW:09:45:12.164: 1: l_client_bit_mask: 7
KUPW:09:45:12.164: 1: TABLE_DATA:"SCOTT"."DEPT" either, parallel: 1
KUPW:09:45:12.164: 1: In function GATHER_PARSE_ITEMS
KUPW:09:45:12.165: 1: In function CHECK_FOR_REMAP_NETWORK
KUPW:09:45:12.165: 1: Nothing to remap
KUPW:09:45:12.165: 1: In procedure BUILD_OBJECT_STRINGS
KUPW:09:45:12.165: 1: In DETERMINE_BASE_OBJECT_INFO
KUPW:09:45:12.165: 1: TABLE_DATA
KUPW:09:45:12.165: 1: SCOTT
KUPW:09:45:12.165: 1: EMP
4. Tracing Impdp
During a Data Pump trace we can also add SQL tracing, just as with a 10046 trace. Data Pump's work ultimately boils down to a series of SQL statements, and performance problems are very often rooted in SQL.
Switching to SQL-trace mode is simple: as a rule, set the final digit of the Trace value to 1. We will demonstrate with an import.
-- Before the import
[root@SimpleLinux trace]# ls -l
total 4
-rw-r----- 1 oracle oinstall 552 Sep 11 10:49 alert_ora11g.log
[oracle@SimpleLinux dumpdata]$ impdp \"/ as sysdba\" directory=dumpdir dumpfile=scott_dump.dmp remap_schema=scott:test trace=480301 parallel=2
Import: Release 11.2.0.3.0 - Production on Wed Sep 11 10:50:14 2013
Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "SYS"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "SYS"."SYS_IMPORT_FULL_01": "/******** AS SYSDBA" directory=dumpdir dumpfile=scott_dump.dmp remap_schema=scott:test trace=480301 parallel=2
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "TEST"."T_MASTER":"P1" 42.43 KB 982 rows
. . imported "TEST"."T_MASTER":"P2" 88.69 KB 1859 rows
. . imported "TEST"."T_SLAVE":"P1" 412.2 KB 11268 rows
. . imported "TEST"."T_SLAVE":"P2" 975.7 KB 21120 rows
. . imported "TEST"."DEPT" 5.929 KB 4 rows
. . imported "TEST"."EMP" 8.562 KB 14 rows
. . imported "TEST"."SALGRADE" 5.859 KB 5 rows
. . imported "TEST"."BONUS" 0 KB 0 rows
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Job "SYS"."SYS_IMPORT_FULL_01" successfully completed at 10:50:24
Check the trace directory.
[root@SimpleLinux trace]# ls -l
total 7588
-rw-r----- 1 oracle oinstall 739 Sep 11 10:50 alert_ora11g.log
-rw-r----- 1 oracle oinstall 1916394 Sep 11 10:50 ora11g_dm00_4422.trc
-rw-r----- 1 oracle oinstall 9446 Sep 11 10:50 ora11g_dm00_4422.trm
-rw-r----- 1 oracle oinstall 2706475 Sep 11 10:50 ora11g_dw00_4424.trc
-rw-r----- 1 oracle oinstall 15560 Sep 11 10:50 ora11g_dw00_4424.trm
-rw-r----- 1 oracle oinstall 2977812 Sep 11 10:50 ora11g_ora_4420.trc
-rw-r----- 1 oracle oinstall 12266 Sep 11 10:50 ora11g_ora_4420.trm
-rw-r----- 1 oracle oinstall 29795 Sep 11 10:50 ora11g_p000_4426.trc
-rw-r----- 1 oracle oinstall 526 Sep 11 10:50 ora11g_p000_4426.trm
-rw-r----- 1 oracle oinstall 30109 Sep 11 10:50 ora11g_p001_4428.trc
-rw-r----- 1 oracle oinstall 524 Sep 11 10:50 ora11g_p001_4428.trm
-rw-r----- 1 oracle oinstall 8430 Sep 11 10:50 ora11g_p002_4430.trc
-rw-r----- 1 oracle oinstall 184 Sep 11 10:50 ora11g_p002_4430.trm
-rw-r----- 1 oracle oinstall 8432 Sep 11 10:50 ora11g_p003_4432.trc
-rw-r----- 1 oracle oinstall 204 Sep 11 10:50 ora11g_p003_4432.trm
The trace files generated in the directory are raw 10046-format files. An excerpt:
=====================
PARSING IN CURSOR #13035136 len=51 dep=2 uid=0 ct=3 lid=0 tim=1378867817703043 hv=1523794037 ad='360b079c' sqlid='b1wc53ddd6h3p'
select audit$,options from procedure$ where obj#=:1
END OF STMT
PARSE #13035136:c=0,e=96,p=0,cr=0,cu=0,mis=0,r=0,dep=2,og=4,plh=1637390370,tim=1378867817703039
EXEC #13035136:c=0,e=79,p=0,cr=0,cu=0,mis=0,r=0,dep=2,og=4,plh=1637390370,tim=1378867817703178
FETCH #13035136:c=0,e=53,p=0,cr=3,cu=0,mis=0,r=1,dep=2,og=4,plh=1637390370,tim=1378867817703248
STAT #13035136 id=1 cnt=1 pid=0 pos=1 obj=221 op='TABLE ACCESS BY INDEX ROWID PROCEDURE$ (cr=3 pr=0 pw=0 time=53 us cost=2 size=47 card=1)'
STAT #13035136 id=2 cnt=1 pid=1 pos=1 obj=231 op='INDEX UNIQUE SCAN I_PROCEDURE1 (cr=2 pr=0 pw=0 time=24 us cost=1 size=0 card=1)'
CLOSE #13035136:c=0,e=7,dep=2,type=1,tim=1378867817703387
=====================
5. Conclusion
Oracle Data Pump is by now very mature and ever more widely adopted. The Trace parameter has its own historical background, and the occasions that call for it will likely keep shrinking; as a way to study Data Pump's internals, however, it remains quite useful.
##### Sample 0: importing a 20 GB+ table with CLOB columns over NETWORK_LINK is very slow
-- First, size the LOB segments (and their LOB indexes) of the table in question:
select a.owner,
a.table_name,
a.column_name,
b.segment_name,
ROUND(b.BYTES / 1024 / 1024)
from dba_lobs a, dba_segments b
where a.segment_name = b.segment_name
and a.owner = 'XXX'
and a.table_name = 'YYYY'
union all
select a.owner,
a.table_name,
a.column_name,
b.segment_name,
ROUND(b.BYTES / 1024 / 1024)
from dba_lobs a, dba_segments b
where a.index_name = b.segment_name
and a.owner = 'XXX'
and a.table_name = 'YYYY'
;
Performance Problems When Transferring LOBs Using IMPDP With NETWORK_LINK (Doc ID 1488229.1)
APPLIES TO:
Oracle Database - Enterprise Edition - Version 10.1.0.3 and later
SYMPTOMS
A severe performance impact is experienced when using IMPDP with the NETWORK_LINK command line option to transfer a table which has 2 CLOB columns (900,000 rows, average row length ~5 kB).
CAUSE
This is expected behavior when dealing with LOBs and the use of the NETWORK_LINK functionality. IMPDP with NETWORK_LINK ultimately uses SQL of the form:
INSERT INTO local_tab_name SELECT ... FROM remote_tab_name@network_link;
Underneath this, the number of network round trips varies significantly for CLOB versus VARCHAR2, by necessity. For a table with VARCHAR2 columns, the remote fetches can pull back several rows in one go in a single packet. For a table with CLOB columns, the remote fetches also pull back several rows in one go, but each CLOB column returns a LOB locator, which is like a handle to the LOB itself. Each of these LOBs then has to be requested and read individually, resulting in many more network round trips, and these add significantly to the time taken. Example: in some situations we get 8 rows back in each fetch, so for VARCHAR2 we send one fetch request and get back a large packet with 8 rows of data for all columns. In the CLOB case we send a fetch request and get back a packet with 8 rows of data that includes 3 LOB locators per row; we then have to send a LOB READ request for each of those LOBs to the remote site and get back that LOB data: 8 * 3 = 24 extra round trips for that data.
SOLUTION
As this is the way LOB access is implemented, the only workaround available is to avoid network access to remote LOBs by using a dump file instead of the NETWORK_LINK functionality.
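A sketch of that dump-file workaround (XXX/YYYY are the same placeholders used in the sizing query above; directory and file names are illustrative):
-- on the source database:
expdp \"/ as sysdba\" directory=dumpdir tables=XXX.YYYY dumpfile=yyyy.dmp logfile=yyyy_exp.log
-- copy yyyy.dmp to the target server, then:
impdp \"/ as sysdba\" directory=dumpdir dumpfile=yyyy.dmp logfile=yyyy_imp.log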
############## Sample 2: importing several 20 GB tables (no CLOB columns) over NETWORK_LINK fills up the TEMP tablespace, raising ORA-1652: unable to extend temp segment (ORA-30036 may also appear)
----
DataPump Network Mode Import Consumes Lots Of Temporary Segments In TEMP Tablespace
Oracle Database - Enterprise Edition - Version 10.1.0.2 to 11.2.0.4 [Release 10.1 to 11.2]
Information in this document applies to any platform.
***Checked for relevance on 27-May-2014***
SYMPTOMS
You try to import a huge table with DataPump import (IMPDP) using a network link. During this procedure, lots of temporary segments are allocated in the TEMP tablespace, and the import job may fail with errors like:
Processing object type SCHEMA_EXPORT/TABLE/TABLE
ORA-39171: Job is experiencing a resumable wait.
ORA-1652: unable to extend temp segment by 128 in tablespace TEMP
CAUSE
The issue was investigated in
Bug 10396489 - SUSPECT BEHAVIOR AT DATA PUMP NETWORK IMPORT OF A HUGE PARTITIONED TABLE
closed with status 'Not a Bug' (expected behavior).
This is happening because the import is using the APPEND hint for an insert statement in network mode import to load the data fast.
Each parallel execution server allocates a new temporary segment and inserts data into that temporary segment. When a COMMIT runs (at the end of table/partition), the parallel execution coordinator merges the new temporary segments into the primary table segment, where it is visible to users.
SOLUTION
1. Increase the TEMP tablespace size
- OR -
2. Generate export dumps using expdp on source database and then import the dump files on target database using impdp instead using network mode import.
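While a network-mode import of this kind runs, TEMP usage can be watched, and the tablespace extended if it is about to fill. A minimal sketch; the tempfile path below is an assumption, adjust it to your environment:
select tablespace_name, total_blocks, used_blocks, free_blocks
from v$sort_segment;
alter tablespace temp add tempfile '/u01/oradata/ora11g/temp02.dbf' size 8g;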
######### Sample 3: importing several 20 GB tables (no CLOB columns) with impdp fills up the UNDO tablespace, raising ORA-30036
Run Out Of Space On UNDO Tablespace Using DataPump Import/Export (Doc ID 735366.1)
GOAL
With the old import utility (imp) there is the option of using the parameters BUFFER and COMMIT=Y.
That way, there are lower chances of running into issues with the UNDO tablespace. Is there anything similar in Import DataPump, or is it necessary to increase the UNDO tablespace?
A typical case where this issue appears is using DataPump to re-organize tables.
SOLUTION
Unlike the traditional Export and Import utilities, which used the BUFFER, COMMIT, COMPRESS, CONSISTENT, DIRECT and RECORDLENGTH parameters, DataPump needs no tuning to achieve maximum performance.
DataPump chooses the best method to ensure that data and metadata are exported and imported in the most efficient manner. Initialization parameters should be sufficient upon installation.
However, you can receive the error:
ORA-30036: unable to extend segment by 8 in undo tablespace 'UNDOTBS1'
during the Import (impdp) if indexes are present in some cases.
Impdp maintains indexes during import by default and does not use direct_path if tables and indexes are already created. However, if there is no index to enforce constraints and you specify:
ACCESS_METHOD=DIRECT_PATH
with the DataPump import command line, DataPump can use direct path method to do the import.
To get around potential issues with the UNDO tablespace in this case:
- load data by direct path by disabling primary key constraint (using ALTER TABLE ... MODIFY CONSTRAINT ... DISABLE NOVALIDATE) and using access_method=direct_path.
- after loading data, enable primary key constraint (using ALTER TABLE ... MODIFY CONSTRAINT ... ENABLE VALIDATE)
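An illustrative sequence, assuming the table has a primary key whose name is known (both names here are hypothetical):
alter table test.t_master modify constraint pk_t_master disable novalidate;
-- impdp ... ACCESS_METHOD=DIRECT_PATH ...  (direct path load, minimal undo)
alter table test.t_master modify constraint pk_t_master enable validate;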
############
Error ORA-30036 DataPump Import (IMPDP) Exhausts Undo Tablespace (Doc ID 727894.1)
APPLIES TO:
Oracle Database - Enterprise Edition - Version 10.1.0.2 and later
Information in this document applies to any platform.
SYMPTOMS
The import DataPump session completes with the following errors:
ORA-31693: Table data object "[schema]"."[table-name]" failed to load/unload and is being skipped due to error:
ORA-30032: the suspended (resumable) statement has timed out
ORA-30036: unable to extend segment by 8 in undo tablespace 'UNDOTBS1'
Job "[user]"."SYS_IMPORT_TABLE_01" completed with 141 error(s) at 01:15:34
This indicates that ROLLBACK was being performed during the time in which no progress was made. It appears there is excessive UNDO being generated.
CAUSE
Excess undo generation can occur when there is a Primary Key (PK) constraint present on the system. Import DataPump will perform index maintenance and this can increase undo usage especially if there is other DML occurring on the database.
SOLUTION
Disable constraints for Primary Keys (PK) on the database during import datapump load. This will reduce undo as index maintenance will not be performed.
REFERENCES
NOTE:735366.1 - Run Out Of Space On UNDO Tablespace Using DataPump Import/Export
NOTE:1670349.1 - Import DataPump - How To Limit The Amount Of UNDO Generation of an IMPDP job ?
########## Sample 4: debug SQL for gathering information on IMPDP performance issues
REM srdc_impdp_performance.sql - Gather Information for IMPDP Performance Issues
define SRDCNAME='IMPDP_PERFORMANCE'
SET MARKUP HTML ON PREFORMAT ON
set TERMOUT off FEEDBACK off verify off TRIMSPOOL on HEADING off
set lines 132 pages 10000
COLUMN SRDCSPOOLNAME NOPRINT NEW_VALUE SRDCSPOOLNAME
select 'SRDC_'||upper('&&SRDCNAME')||'_'||upper(instance_name)||'_'||to_char(sysdate,'YYYYMMDD_HH24MISS') SRDCSPOOLNAME from v$instance;
set TERMOUT on MARKUP html preformat on
REM
spool &&SRDCSPOOLNAME..htm
select '+----------------------------------------------------+' from dual
union all
select '| Diagnostic-Name: '||'&&SRDCNAME' from dual
union all
select '| Timestamp: '||to_char(systimestamp,'YYYY-MM-DD HH24:MI:SS TZH:TZM') from dual
union all
select '| Machine: '||host_name from v$instance
union all
select '| Version: '||version from v$instance
union all
select '| DBName: '||name from v$database
union all
select '| Instance: '||instance_name from v$instance
union all
select '+----------------------------------------------------+' from dual
/
set HEADING on MARKUP html preformat off
REM === -- end of standard header -- ===
set concat "#"
SET PAGESIZE 9999
SET LINESIZE 256
SET TRIMOUT ON
SET TRIMSPOOL ON
Column sid format 99999 heading "SESS|ID"
Column serial# format 9999999 heading "SESS|SER|#"
Column session_id format 99999 heading "SESS|ID"
Column session_serial# format 9999999 heading "SESS|SER|#"
Column event format a40
Column total_waits format 9,999,999,999 heading "TOTAL|TIME|WAITED|MICRO"
Column pga_used_mem format 9,999,999,999
Column pga_alloc_mem format 9,999,999,999
Column status heading 'Status' format a20
Column timeout heading 'Timeout' format 999999
Column error_number heading 'Error Number' format 999999
Column error_msg heading 'Message' format a44
Column sql_text heading 'Current SQL statement' format a44
Column Number_of_objects format 99999999
Column object_type format a35
ALTER SESSION SET nls_date_format='DD-MON-YYYY HH24:MI:SS';
SET MARKUP HTML ON PREFORMAT ON
--====================Retrieve sid, serial# information for the active DataPump process(es)===========================
SET HEADING OFF
SELECT '=================================================================================================================================' FROM dual
UNION ALL
SELECT 'Determine sid, serial# details for the active DataPump process(es):' FROM dual
UNION ALL
SELECT '=================================================================================================================================' FROM dual;
SET HEADING ON
set feedback on
col program for a38
col username for a10
col spid for a7
select to_char(sysdate,'YYYY-MM-DD HH24:MI:SS') "DATE", s.program, s.sid,
s.status, s.username, d.job_name, p.spid, s.serial#, p.pid
from v$session s, v$process p, dba_datapump_sessions d
where p.addr=s.paddr and s.saddr=d.saddr and
(UPPER (s.program) LIKE '%DM0%' or UPPER (s.program) LIKE '%DW0%');
set feedback off
--====================Retrieve sid, serial#, PGA details for the active DataPump process(es)===========================
SET HEADING OFF
SELECT '=================================================================================================================================' FROM dual
UNION ALL
SELECT 'Determine PGA details for the active DataPump process(es):' FROM dual
UNION ALL
SELECT '=================================================================================================================================' FROM dual;
SET HEADING ON
set feedback on
SELECT sid, s.serial#, p.PGA_USED_MEM,p.PGA_ALLOC_MEM
FROM v$process p, v$session s
WHERE p.addr = s.paddr and
(UPPER (s.program) LIKE '%DM0%' or UPPER (s.program) LIKE '%DW0%');
set feedback off
--====================Retrieve all wait events and time in wait for the running DataPump process(es)====================
SET HEADING OFF
SELECT '=================================================================================================================================' FROM dual
UNION ALL
SELECT 'All wait events and time in wait for the active DataPump process(es):' FROM dual
UNION ALL
SELECT '=================================================================================================================================' FROM dual;
SET HEADING ON
select session_id, session_serial#, Event, sum(time_waited) total_waits
from v$active_session_history
where sample_time > sysdate - 1 and
(UPPER (program) LIKE '%DM0%' or UPPER (program) LIKE '%DW0%') and
session_id in (select sid from v$session where UPPER (program) LIKE '%DM0%' or UPPER (program) LIKE '%DW0%') and
session_state = 'WAITING' And time_waited > 0
group by session_id, session_serial#, Event
order by session_id, session_serial#, total_waits desc;
--====================DataPump progress - retrieve current sql id and statement====================
SET HEADING OFF
SELECT '=================================================================================================================================' FROM dual
UNION ALL
SELECT 'DataPump progress - retrieve current SQL id and statement:' FROM dual
UNION ALL
SELECT '=================================================================================================================================' FROM dual;
SET HEADING ON
select sysdate, a.sid, a.sql_id, a.event, b.sql_text
from v$session a, v$sql b
where a.sql_id=b.sql_id and
(UPPER (a.program) LIKE '%DM0%' or UPPER (a.program) LIKE '%DW0%')
order by a.sid desc;
SET HEADING OFF MARKUP HTML OFF
SET SERVEROUTPUT ON FORMAT WRAP
declare
v_ksppinm varchar2(30);
CURSOR c_fix IS select v.KSPPSTVL value FROM x$ksppi n, x$ksppsv v WHERE n.indx = v.indx and n.ksppinm = v_ksppinm;
CURSOR c_count is select count(*) from DBA_OPTSTAT_OPERATIONS where operation in ('gather_dictionary_stats','gather_fixed_objects_stats');
CURSOR c_stats is select operation, START_TIME, END_TIME from DBA_OPTSTAT_OPERATIONS
where operation in ('gather_dictionary_stats','gather_fixed_objects_stats') order by 2 desc;
v_long_op_flag number := 0 ;
v_target varchar2(100);
v_sid number;
v_totalwork number;
v_opname varchar2(200);
v_sofar number;
v_time_remain number;
stmt varchar2(2000);
v_fix c_fix%ROWTYPE;
v_count number;
begin
stmt:='select count(*) from v$session_longops where sid in (select sid from v$session where UPPER (program) LIKE '||
'''%DM0%'''||' or UPPER (program) LIKE '||'''%DW0%'')'||' and totalwork <> sofar';
DBMS_OUTPUT.PUT_LINE ('<pre>');
dbms_output.put_line ('=================================================================================================================================');
dbms_output.put_line ('Check v$session_longops - DataPump pending work');
dbms_output.put_line ('=================================================================================================================================');
execute immediate stmt into v_long_op_flag;
if (v_long_op_flag > 0) then
dbms_output.put_line ('The number of long running DataPump processes is: '|| v_long_op_flag);
dbms_output.put_line (chr (10));
for longop in (select sid,target,opname, sum(totalwork) totwork, sum(sofar) sofar, sum(totalwork-sofar) blk_remain, Round(sum(time_remaining/60),2) time_remain
from v$session_longops where sid in (select sid from v$session where UPPER (program) LIKE '%DM0%' or UPPER (program) LIKE '%DW0%') and
opname NOT LIKE '%aggregate%' and totalwork <> sofar group by sid,target,opname) loop
dbms_output.put_line (Rpad ('DataPump SID', 40, ' ')||chr (9)||':'||chr (9)||longop.sid);
dbms_output.put_line (Rpad ('Object being read', 40, ' ')||chr (9)||':'||chr (9)||longop.target);
dbms_output.put_line (Rpad ('Operation being executed', 40, ' ')||chr (9)||':'||chr (9)||longop.opname);
dbms_output.put_line (Rpad ('Total blocks to be read', 40, ' ')||chr (9)||':'||chr (9)||longop.totwork);
dbms_output.put_line (Rpad ('Total blocks already read', 40, ' ')||chr (9)||':'||chr (9)||longop.sofar);
dbms_output.put_line (Rpad ('Remaining blocks to be read', 40, ' ')||chr (9)||':'||chr (9)||longop.blk_remain);
dbms_output.put_line (Rpad ('Estimated time remaining for the process', 40, ' ')||chr (9)||':'||chr (9)||longop.time_remain|| ' Minutes');
dbms_output.put_line (chr (10));
end Loop;
else
DBMS_OUTPUT.PUT_LINE ('No DataPump session is found in v$session_longops');
dbms_output.put_line (chr (10));
end If;
DBMS_OUTPUT.PUT_LINE ('=================================Have Dictionary and Fixed Objects statistics been gathered?====================================');
open c_count;
fetch c_count into v_count;
if v_count>0 then
BEGIN
DBMS_OUTPUT.PUT_LINE (rpad ('OPERATION', 30)||' '||rpad ('START_TIME', 32)||' '||rpad ('END_TIME', 32));
DBMS_OUTPUT.PUT_LINE (rpad ('--------------------------', 30)||' '||rpad ('-----------------------------', 32)||' '||rpad ('-----------------------------', 32));
FOR v_stats IN c_stats LOOP
DBMS_OUTPUT.PUT_LINE (rpad (v_stats.operation, 30)||' '||rpad (v_stats.start_time, 32)||' '||rpad (v_stats.end_time, 32));
END LOOP;
end;
else
DBMS_OUTPUT.PUT_LINE ('Dictionary and fixed objects statistics have not been gathered for this database.');
dbms_output.put_line (chr (10));
END IF;
dbms_output.put_line ('=================================================================================================================================');
dbms_output.put_line (chr (10));
for i in 1..6 loop
if i = 1 then
v_ksppinm := 'fixed_date';
elsif i = 2 then
v_ksppinm := 'aq_tm_processes';
elsif i = 3 then
v_ksppinm := 'compatible';
elsif i = 4 then
v_ksppinm := 'optimizer_features_enable';
elsif i = 5 then
v_ksppinm := 'optimizer_index_caching';
elsif i = 6 then
v_ksppinm := 'optimizer_index_cost_adj';
end if;
dbms_output.put_line ('=================================================================================================================================');
DBMS_OUTPUT.PUT_LINE ('Is the '||upper (v_ksppinm)||' parameter set?');
dbms_output.put_line ('=================================================================================================================================');
open c_fix;
fetch c_fix into v_fix;
close c_fix;
if nvl (to_char (v_fix.value), '1') = to_char ('1') then
DBMS_OUTPUT.PUT_LINE ('No value is found for '||upper (v_ksppinm)||' parameter.');
else
DBMS_OUTPUT.PUT_LINE ('The '||upper (v_ksppinm)||' parameter is set for this database and the value is: '||v_fix.value);
end if;
dbms_output.put_line('=================================================================================================================================');
dbms_output.put_line (chr (10));
end loop;
end;
/
set feedback off
begin
dbms_output.put_line(chr(10));
DBMS_OUTPUT.PUT_LINE ('=================================================Encountering space issues?======================================================');
end;
/
begin
dbms_output.put_line(chr(10));
DBMS_OUTPUT.PUT_LINE ('Look at view DBA_RESUMABLE:');
end;
/
set feedback on
SET HEADING on
set linesize 120
set pagesize 120
column name heading 'Name' format a20
column status heading 'Status' format a20
column timeout heading 'Timeout' format 999999
column error_number heading 'Error Number' format 999999
column error_msg heading 'Message' format a44
select NAME,STATUS, TIMEOUT, ERROR_NUMBER, ERROR_MSG from DBA_RESUMABLE;
set feedback off
SET HEADING OFF
begin
dbms_output.put_line(chr(10));
DBMS_OUTPUT.PUT_LINE ('Look at view DBA_OUTSTANDING_ALERTS:');
end;
/
set feedback on
SET HEADING on
column object_name heading 'Object Name' format a14
column object_type heading 'Object Type' format a14
column reason heading 'Reason' format a40
column suggested_action heading 'Suggested action' format a40
select OBJECT_NAME,OBJECT_TYPE,REASON,SUGGESTED_ACTION from DBA_OUTSTANDING_ALERTS;
set feedback off
SET HEADING OFF
SET LINESIZE 256
begin
dbms_output.put_line ('=================================================================================================================================');
DBMS_OUTPUT.PUT_LINE ('</pre>');
end;
/
spool off
PROMPT
PROMPT
PROMPT REPORT GENERATED : &SRDCSPOOLNAME..htm
exit
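To run the script above, save it as srdc_impdp_performance.sql and execute it as a DBA while the Data Pump job is active, for example: sqlplus / as sysdba @srdc_impdp_performance.sql. It spools an HTML report whose name embeds the instance name and a timestamp.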
############## Sample 5
Export or Import of Table with LOB Columns (like CLOB and BLOB) has Slow Performance (Doc ID 281461.1)
APPLIES TO:
Oracle Database - Personal Edition - Version 8.1.7.0 to 11.2.0.4 [Release 8.1.7 to 11.2]
SYMPTOMS
An export or import of a table with a Large Object (LOB) column has slower performance than an export or import of a table without LOB columns. Tests were done with a table with a CLOB column and a table without; both contained 500,000 rows of data. (The note's timing table comparing the "No CLOB" and "With CLOB" runs did not survive the original paste.)
NOTE: The performance results should not be considered a benchmark of the performance of different Oracle versions, as the test databases were located on different machines with different hardware and different parameter configurations. Their main objective is to give an indication of the difference in the time needed to export a table with a LOB column versus a table without one.
CHANGES
You recently created tables that have Large Object (LOB) columns.
CAUSE
This is expected behavior. The rows of a table with a LOB column are fetched one row at a time. Also note that rows in tables containing objects and LOBs are exported using the conventional path even if direct path was specified, and that during import the rows of tables containing LOB columns are inserted individually.
SOLUTION
Although the performance of the export cannot be improved directly, possible alternative solutions are: (the list was truncated in the original paste)