Datapump tips
Exporting tables from multiple schemas at once
$ expdp system/manager dumpfile=test.dmp logfile=test.log directory=TESTDIR schemas=user1,user2 include=table:\"in \(\'TEST1\',\'TEST2\',\'TEST\'\)\"

Export: Release 10.2.0.3.0 - 64bit Production on Thursday, 11 July, 2013 15:23:12
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options
Starting "SYSTEM"."SYS_EXPORT_SCHEMA_01": system/******** dumpfile=test.dmp logfile=test.log directory=TESTDIR schemas=user1,user2 include=table:"in ('TEST1','TEST2','TEST')"
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 128 KB
Processing object type SCHEMA_EXPORT/TABLE/TABLE
. . exported "USER1"."TEST1" 4.914 KB 1 rows
. . exported "USER2"."TEST2" 4.914 KB 1 rows
. . exported "USER1"."TEST" 0 KB 0 rows
. . exported "USER2"."TEST" 0 KB 0 rows
Master table "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_01 is:
/app/oracle/test.dmp
Job "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully completed at 15:23:31
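The same multi-schema export can be driven from a parameter file, which avoids the shell escaping needed in the command above. A sketch (the parfile name is illustrative):

```text
# export_tables.par -- illustrative parameter file name
userid=system/manager
directory=TESTDIR
dumpfile=test.dmp
logfile=test.log
schemas=user1,user2
include=table:"IN ('TEST1','TEST2','TEST')"
```

Then run: $ expdp parfile=export_tables.par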
Datapump trace
Data Pump tracing is enabled with the TRACE parameter, which takes a seven-digit hexadecimal value. The value must follow these rules:
- The first three digits enable tracing for a specific Data Pump component.
- The remaining four digits are usually 0300.
- Specifying more than seven hexadecimal digits is not allowed; doing so results in:
UDE-00014: invalid value for parameter, 'trace'.
- A leading 0x (hexadecimal prefix) is not allowed.
- The value must be specified in hexadecimal; decimal is not accepted.
- Leading zeros can be omitted, so the value may be shorter than seven digits.
- Values are not case sensitive.
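As an illustration (this is not an Oracle utility), the rules above can be encoded in a short Python checker; the function name is hypothetical, and the error text is only modeled on the message quoted above:

```python
import re

def parse_trace(value: str) -> int:
    """Validate a Data Pump TRACE value against the stated rules.

    Returns the numeric trace mask, or raises ValueError.
    """
    # Plain hex digits only: no leading 0x, case-insensitive.
    if not re.fullmatch(r"[0-9a-fA-F]+", value):
        raise ValueError("value must be plain hexadecimal digits (no 0x prefix)")
    # More than seven hex digits triggers UDE-00014.
    if len(value) > 7:
        raise ValueError("UDE-00014: invalid value for parameter, 'trace'.")
    return int(value, 16)

# Leading zeros may be omitted and case does not matter:
assert parse_trace("480300") == parse_trace("0480300") == 0x480300
assert parse_trace("1ff0300") == parse_trace("1FF0300") == 0x1FF0300
```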
The majority of errors that occur during a Data Pump job can be diagnosed by creating trace files for the Master Control Process (MCP) and the Worker process(es) only.
With standard tracing, the trace files are generated in BACKGROUND_DUMP_DEST:
- Master Control Process trace file: <SID>_dm<number>_<process_id>.trc
- Worker Process trace file: <SID>_dw<number>_<process_id>.trc
With full tracing, the same two trace files are generated in BACKGROUND_DUMP_DEST, and one additional trace file is generated in USER_DUMP_DEST:
- Shadow Process trace file: <SID>_ora_<process_id>.trc
The Data Pump trace levels are listed below.

  Trace   DM   DW   ORA  Lines
  level  trc  trc  trc   in
  (hex)  file file file  trace  Purpose
------- ---- ---- ---- ------ -----------------------------------------------
  10300    x    x    x   SHDW:  To trace the Shadow process (API) (expdp/impdp)
  20300    x    x    x   KUPV:  To trace Fixed table
  40300    x    x    x   'div'  To trace Process services
  80300    x               KUPM:  To trace Master Control Process (MCP) (DM)
 100300    x    x         KUPF:  To trace File Manager
 200300    x    x    x   KUPC:  To trace Queue services
 400300         x         KUPW:  To trace Worker process(es) (DW)
 800300         x         KUPD:  To trace Data Package
1000300         x         META:  To trace Metadata Package
------- ---- ---- ---- ------ + ---------------------------------------------
1FF0300    x    x    x   'all'  To trace all components (full tracing)
Individual trace level values are shown in hexadecimal, except for the last entry in the list. You can use an individual value or a combination of values; summing all the individual values gives 1FF0300, which is full tracing.
To run a Data Pump export with full tracing:
expdp DUMPFILE=expdp.dmp LOGFILE=expdp.log TRACE=1FF0300
To run a Data Pump import with full tracing:
impdp DUMPFILE=expdp.dmp LOGFILE=expdp.log TRACE=1FF0300
However, full tracing is not required in most cases. Trace 400300 traces the Worker process(es) and trace 80300 traces the Master Control Process (MCP); combining them gives trace 480300, which traces both the MCP and the Worker process(es). This serves the purpose for most diagnoses.
So, to diagnose a Data Pump export problem:
expdp DUMPFILE=expdp.dmp LOGFILE=expdp.log TRACE=480300
To diagnose a Data Pump import problem:
impdp DUMPFILE=expdp.dmp LOGFILE=expdp.log TRACE=480300
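The arithmetic of combining trace levels can be sketched in Python (illustrative only; the level table mirrors the one above). The component flags live in the top three hex digits, while the 0300 suffix is common to every level, so it is OR-ed back in once:

```python
# Individual component trace levels from the table above.
COMPONENT_LEVELS = {
    "SHDW": 0x0010300,
    "KUPV": 0x0020300,
    "div":  0x0040300,
    "KUPM": 0x0080300,
    "KUPF": 0x0100300,
    "KUPC": 0x0200300,
    "KUPW": 0x0400300,
    "KUPD": 0x0800300,
    "META": 0x1000300,
}

def combine(*levels: int) -> int:
    """Combine individual trace levels into one TRACE value."""
    mask = 0
    for level in levels:
        mask |= level & 0xFFF0000  # keep only the component bits
    return mask | 0x0300           # restore the common suffix

# MCP (KUPM) + Worker (KUPW) tracing, as recommended above:
assert combine(0x80300, 0x400300) == 0x480300
# Combining every component gives full tracing:
assert combine(*COMPONENT_LEVELS.values()) == 0x1FF0300
print(format(combine(0x80300, 0x400300), "X"))  # 480300
```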
Cleaning up stopped impdp/expdp jobs
A stopped impdp/expdp job leaves a row in dba_datapump_jobs with a state of NOT RUNNING.
There are two cases when cleaning up a stopped job:
1) The job can be attached to
If the job can be attached to, attach to it and then kill it, e.g.:
expdp system/**** attach=SYS_EXPORT_TABLE_01
kill_job
2) The job cannot be attached to
If the job cannot be attached to, drop the master table owned by the user that ran the Data Pump job, e.g.:
conn system/*****
drop table SYS_EXPORT_TABLE_01 (the master table name is usually the same as the job name)
Both the user name and the job name can be found in dba_datapump_jobs.
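A query like the following can list the orphaned jobs before you decide which case applies (a sketch against the dba_datapump_jobs view; the commented DROP is a template, not a literal statement):

```sql
-- List Data Pump jobs that are defined but not running.
SELECT owner_name, job_name, operation, job_mode, state, attached_sessions
  FROM dba_datapump_jobs
 WHERE state = 'NOT RUNNING'
 ORDER BY owner_name, job_name;

-- If attach fails, drop the leftover master table
-- (its name usually matches the job name):
-- DROP TABLE <owner_name>.<job_name>;
```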
Excluding table partitions on export
There are two ways to exclude table partitions on export: the exclude=table_data parameter, and the Data Pump API.
(1) The exclude=table_data parameter
The test table has two partitions:

SQL> select partition_name,table_name from user_tab_partitions;

PARTITION_NAME                 TABLE_NAME
------------------------------ ------------------------------
DATA_PART1                     TEST
DATA_PART2                     TEST

Exclude partition DATA_PART2 in the parfile:
$ more parfile.par
userid='/ as sysdba'
directory=DATA_PUMP_DIR
dumpfile=test.dmp
logfile=test.log
tables=jyu.test
exclude=table_data:"in ('DATA_PART2')"

$ expdp parfile=parfile.par

Export: Release 10.2.0.3.0 - 64bit Production on Friday, 30 May, 2014 15:44:28
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options
Starting "SYS"."SYS_EXPORT_TABLE_01": parfile=parfile.par
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
. . exported "JYU"."TEST":"DATA_PART1" 5.218 KB 1 rows
Master table "SYS"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYS.SYS_EXPORT_TABLE_01 is:
/app/oracle/product/server_ee/10.2.0.2/rdbms/log/test.dmp
Job "SYS"."SYS_EXPORT_TABLE_01" successfully completed at 15:44:42
Only the data in partition DATA_PART1 was exported; the data in partition DATA_PART2 was not.
(2) The Data Pump API
Run the following script in SQL*Plus:
declare
  rvalue number;
begin
  rvalue := dbms_datapump.open (operation => 'EXPORT',
                                job_mode  => 'TABLE');
  dbms_datapump.add_file (handle    => rvalue,
                          filename  => 'TEST.DMP',
                          directory => 'DATA_PUMP_DIR',
                          filetype  => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);
  dbms_datapump.add_file (handle    => rvalue,
                          filename  => 'TEST.LOG',
                          directory => 'DATA_PUMP_DIR',
                          filetype  => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);
  dbms_datapump.metadata_filter (handle => rvalue,
                                 name   => 'SCHEMA_EXPR',
                                 value  => 'IN (''JYU'')');
  dbms_datapump.metadata_filter (handle => rvalue,
                                 name   => 'NAME_EXPR',
                                 value  => 'IN (''TEST'')');
  dbms_datapump.data_filter (handle      => rvalue,
                             name        => 'PARTITION_LIST',
                             value       => '''DATA_PART1''',
                             table_name  => 'TEST',
                             schema_name => 'JYU');
  dbms_datapump.start_job (handle => rvalue);
  dbms_datapump.detach (handle => rvalue);
end;
/

Check the export log; only partition DATA_PART1 was exported:
$ more TEST.LOG
Starting "SYS"."SYS_EXPORT_TABLE_01":
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
. . exported "JYU"."TEST":"DATA_PART1" 5.218 KB 1 rows
Master table "SYS"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYS.SYS_EXPORT_TABLE_01 is:
/app/oracle/product/server_ee/10.2.0.2/rdbms/log/TEST.DMP
Job "SYS"."SYS_EXPORT_TABLE_01" successfully completed at 16:26:58
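The script above detaches immediately and lets the job run in the background. If you prefer the PL/SQL block to wait until the export finishes, dbms_datapump.wait_for_job can replace the detach call. A minimal sketch (the dump file name is illustrative, and the partition filter is omitted for brevity):

```sql
declare
  rvalue    number;
  job_state varchar2(30);
begin
  rvalue := dbms_datapump.open (operation => 'EXPORT', job_mode => 'TABLE');
  dbms_datapump.add_file (handle    => rvalue,
                          filename  => 'TEST2.DMP',
                          directory => 'DATA_PUMP_DIR',
                          filetype  => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);
  dbms_datapump.metadata_filter (handle => rvalue,
                                 name   => 'SCHEMA_EXPR',
                                 value  => 'IN (''JYU'')');
  dbms_datapump.metadata_filter (handle => rvalue,
                                 name   => 'NAME_EXPR',
                                 value  => 'IN (''TEST'')');
  dbms_datapump.start_job (handle => rvalue);
  -- Block until the job completes, then report its final state.
  dbms_datapump.wait_for_job (handle => rvalue, job_state => job_state);
  dbms_output.put_line ('Job finished with state: ' || job_state);
end;
/
```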