############1   Several Methods for Migrating a Database

III. Related Technologies

1  Export and import
   Advantages: no requirement on database version or OS platform.
   Drawbacks: no parallelism, slow; long downtime.

2  Oracle Data Pump
   Advantages: multi-process parallelism; suited to medium-sized databases.
   Drawbacks: requires 10g or later; does not support XML and spatial data types.

3  Transportable tablespace (TTS)
   Advantages: fast, suited to large databases; can cross platforms, provided the endianness is the same.
   Drawbacks: requires 10g or later; requires the same RDBMS version; long downtime.

4  Cross-platform transportable tablespace (XTTS)
   Advantages: fast, suited to large databases; no restriction on database platform or operating system.
   Drawbacks: requires 10g or later; restrictions for some applications, see MOS 454574.1 for details.

5  Oracle GoldenGate
   Advantages: no requirement on platform or database type; fast replication that can be adjusted flexibly to the business; shorter downtime.
   Drawbacks: limited DDL support.

6  Oracle physical standby
   Advantages: a more flexible solution; shorter downtime.
   Drawbacks: cannot cross platforms or database versions.

###############2   TTS

You can use the Transportable Tablespaces feature to copy a set of tablespaces from one Oracle Database to another.

-- Beginning with Oracle 11gR1, you must use Data Pump for TTS. The only case in which the exp/imp utilities can still be used is migrating XMLType data from releases earlier than 10gR2.

Beginning with Oracle Database 10g Release 2, you can transport tablespaces that contain XMLTypes. Beginning with Oracle Database 11g Release 1, you must use only Data Pump to export and import the tablespace metadata for tablespaces that contain XMLTypes.

http://blog.csdn.net/tianlesoftware/article/details/7267582

##########

Oracle Database - Enterprise Edition - Version 10.1.0.2 to 12.1.0.1 [Release 10.1 to 12.1]
Information in this document applies to any platform.
******************* WARNING *************

Document 1334152.1 Corrupt IOT when using Transportable Tablespace to HP from different OS
Document 13001379.8 Bug 13001379 - Datapump transport_tablespaces produces wrong dictionary metadata for some tables

GOAL

Starting with Oracle Database 10g, you can transport tablespaces across platforms. This document provides step-by-step instructions for transporting tablespaces whose datafiles reside in ASM as well as on an OS file system.

If your goal is to migrate a database to a platform with a different endianness, the following steps outline how to migrate a database to a new platform using transportable tablespaces:

1.- Create a new, empty database on the destination platform.
2.- Import the objects required by the transport operation from the source database into the destination database.
3.- Export the transportable metadata for all user tablespaces from the source database.
4.- Transfer the datafiles of the user tablespaces to the destination system.
5.- Use RMAN to convert the datafiles to the endian format of the destination system.
6.- Import the transportable metadata for all user tablespaces into the destination database.
7.- Import the remaining database objects and metadata (not moved by the transport operation) from the source database into the destination database.

You can also convert the datafiles on the source platform and transfer them to the destination platform after the conversion.

For the MAA white paper "Platform Migration Using Transportable Tablespaces", see:

http://www.oracle.com/technetwork/database/features/availability/maa-wp-11g-platformmigrationtts-129269.pdf

For 11.2.0.4 and 12c onward, if you are converting to Linux x86-64, refer to the following document:

Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup [1389592.1]

SOLUTION

Supported Platforms

Query V$TRANSPORTABLE_PLATFORM to see which platforms are supported and to determine the endianness of each platform.

SQL> COLUMN PLATFORM_NAME FORMAT A32
SQL> SELECT * FROM V$TRANSPORTABLE_PLATFORM;

PLATFORM_ID PLATFORM_NAME                    ENDIAN_FORMAT
----------- -------------------------------- --------------
          1 Solaris[tm] OE (32-bit)          Big
          2 Solaris[tm] OE (64-bit)          Big
          7 Microsoft Windows IA (32-bit)    Little
         10 Linux IA (32-bit)                Little
          6 AIX-Based Systems (64-bit)       Big
          3 HP-UX (64-bit)                   Big
          5 HP Tru64 UNIX                    Little
          4 HP-UX IA (64-bit)                Big
         11 Linux IA (64-bit)                Little
         15 HP Open VMS                      Little
          8 Microsoft Windows IA (64-bit)    Little
          9 IBM zSeries Based Linux          Big
         13 Linux 64-bit for AMD             Little
         16 Apple Mac OS                     Big
         12 Microsoft Windows 64-bit for AMD Little
         17 Solaris Operating System (x86)   Little

If the source and destination platforms have different endianness, an additional step must be performed on either the source or the destination platform to convert the transported tablespaces to the destination format. If they have the same endianness, no conversion is necessary and the tablespaces can be transported as if they were on the same platform.

Transporting Tablespaces

  1. Preparation before transporting the tablespaces

    • Check that the tablespace set is self-contained:

      SQL> execute sys.dbms_tts.transport_set_check('TBS1,TBS2', true);
      SQL> select * from sys.transport_set_violations;
      Note: any violations reported must be resolved before the tablespaces can be transported.
    • For the transportable tablespace export to succeed, the tablespaces must be in READ ONLY mode:
      SQL> ALTER TABLESPACE TBS1 READ ONLY;
      SQL> ALTER TABLESPACE TBS2 READ ONLY;
  2. Export the metadata
    • Using the original export utility:

      exp userid=\'sys/sys as sysdba\' file=tbs_exp.dmp log=tba_exp.log transport_tablespace=y tablespaces=TBS1,TBS2
    • Using Data Pump export:
      First create the directory object used by Data Pump, for example:
      CREATE OR REPLACE DIRECTORY dpump_dir AS '/tmp/subdir' ;
      GRANT READ,WRITE ON DIRECTORY dpump_dir TO system;

      Then start the Data Pump export:

      expdp system/password DUMPFILE=expdat.dmp DIRECTORY=dpump_dir TRANSPORT_TABLESPACES = TBS1,TBS2

      If you want a strict self-containment check to be performed as part of the transportable tablespace operation, use the TRANSPORT_FULL_CHECK parameter.

      expdp system/password DUMPFILE=expdat.dmp DIRECTORY = dpump_dir TRANSPORT_TABLESPACES= TBS1,TBS2 TRANSPORT_FULL_CHECK=Y

      If the set of tablespaces being transported is not self-contained, the export will fail.

  3. Use V$TRANSPORTABLE_PLATFORM to determine the endianness of each platform. You can run the following query on each platform's instance:
    SELECT tp.platform_id,substr(d.PLATFORM_NAME,1,30), ENDIAN_FORMAT
    FROM V$TRANSPORTABLE_PLATFORM tp, V$DATABASE d
    WHERE tp.PLATFORM_NAME = d.PLATFORM_NAME;

    If the endian formats are different, the tablespace set must be converted when it is transported:

    RMAN> convert tablespace TBS1 to platform="Linux IA (32-bit)" FORMAT '/tmp/%U';

    RMAN> convert tablespace TBS2 to platform="Linux IA (32-bit)" FORMAT '/tmp/%U';

    Then copy the datafiles and the export dump file to the destination environment.

  4. Import the transportable tablespaces
    • Using the original import utility:

      imp userid=\'sys/sys as sysdba\' file=tbs_exp.dmp log=tba_imp.log transport_tablespace=y datafiles='/tmp/....','/tmp/...'
    • Using Data Pump:
      CREATE OR REPLACE DIRECTORY dpump_dir AS '/tmp/subdir';
      GRANT READ,WRITE ON DIRECTORY dpump_dir TO system;

      Then run:

      impdp system/password DUMPFILE=expdat.dmp DIRECTORY=dpump_dir TRANSPORT_DATAFILES='/tmp/....','/tmp/...' REMAP_SCHEMA=(source:target) REMAP_SCHEMA=(source_sch2:target_schema_sch2)

      If you want to change the owner of the transported database objects, use REMAP_SCHEMA.

  5. Put the tablespaces back in read/write mode:
    SQL> ALTER TABLESPACE TBS1 READ WRITE;
    SQL> ALTER TABLESPACE TBS2 READ WRITE;

Using DBMS_FILE_TRANSFER

You can also use DBMS_FILE_TRANSFER to copy the datafiles to another host.

Starting with 12c and 11.2.0.4, DBMS_FILE_TRANSFER converts the files by default. With DBMS_FILE_TRANSFER, when the destination database receives a file from a platform with a different endianness, it converts each block as it arrives. After the datafiles have been moved to the destination as part of the transportable operation, they can be plugged in without an RMAN conversion.
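
For illustration, a minimal sketch of pulling a datafile from the source over a database link with DBMS_FILE_TRANSFER.GET_FILE; the directory objects SRC_DIR/DST_DIR and the database link SRC_LINK are assumptions, not names used elsewhere in this note:

-- Run on the destination database. SRC_DIR must exist in the source database,
-- DST_DIR in the destination database, and SRC_LINK must point to the source.
BEGIN
   DBMS_FILE_TRANSFER.GET_FILE(
      source_directory_object      => 'SRC_DIR',
      source_file_name             => 'tbs1_01.dbf',
      source_database              => 'SRC_LINK',
      destination_directory_object => 'DST_DIR',
      destination_file_name        => 'tbs1_01.dbf');
END;
/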

On releases earlier than 11.2.0.4, the same steps apply to ASM files as well, but if the endian formats differ you must convert the files with RMAN after transferring them. Files cannot be copied directly between two ASM instances on different platforms.

Here is a usage example:

RMAN> CONVERT DATAFILE
      '/hq/finance/work/tru/tbs_31.f',
      '/hq/finance/work/tru/tbs_32.f',
      '/hq/finance/work/tru/tbs_41.f'
      TO PLATFORM="Solaris[tm] OE (32-bit)"
      FROM PLATFORM="HP TRu64 UNIX"
      DB_FILE_NAME_CONVERT= "/hq/finance/work/tru/", "/hq/finance/dbs/tru"
      PARALLELISM=5;

The same example, but here the destination is an ASM disk group:

RMAN> CONVERT DATAFILE
      '/hq/finance/work/tru/tbs_31.f',
      '/hq/finance/work/tru/tbs_32.f',
      '/hq/finance/work/tru/tbs_41.f'
      TO PLATFORM="Solaris[tm] OE (32-bit)"
      FROM PLATFORM="HP TRu64 UNIX"
      DB_FILE_NAME_CONVERT="/hq/finance/work/tru/", "+diskgroup"
      PARALLELISM=5;
*** WARNING ***

  • When using transportable tablespaces (TTS) to migrate from Solaris, Linux, or AIX to HP-UX, index-organized tables (IOTs) can become corrupt.
    This is a limitation caused by Bug 9816640.
    There is currently no patch for this problem; the index-organized tables (IOTs) must be rebuilt after the TTS.

    See Document 1334152.1 Corrupt IOT when using Transportable Tablespace to HP from different OS

  • Tables with dropped columns may hit Bug 13001379 - Datapump transport_tablespaces produces wrong dictionary metadata for some tables. Document 1440203.1 gives the details of this warning.
Known Issues when Using DBMS_FILE_TRANSFER

=> Unpublished Bug 13636964 - ORA-19563 from RMAN convert on datafile copy transferred with DBMS_FILE_TRANSFER (Doc ID 13636964.8)
 Versions confirmed as being affected
    11.2.0.3 
 Fixed in the following versions
    12.1.0.1 (Base Release)
    11.2.0.4 (Future Patch Set) 
    
Description

An RMAN convert operation fails on a file that was transferred with DBMS_FILE_TRANSFER.
    For example:
     RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
     RMAN-00571: ===========================================================
     RMAN-03002: failure of conversion at target command at 01/24/2012 16:22:23
     ORA-19563: cross-platform datafile header validation failed for file +RECO/soets_9.tf 
     
    Rediscovery Notes:
     If RMAN fails to convert a file that was transferred with DBMS_FILE_TRANSFER, this bug may be the cause.
     
    Workaround:
     Transfer the files with OS utilities instead.

=> Dbms_file_transfer Corrupts Dbf File When Copying between endians (Doc ID 1262965.1)

Restrictions on Using Transportable Tablespaces

  1. The source and destination databases must use the same character set and national character set.
  2. A tablespace cannot be transported if a tablespace with the same name already exists in the destination database. However, before the transport you can rename either the tablespace to be transported or the one in the destination database (see the sketch after this list).
  3. Objects with underlying objects (such as materialized views) or contained objects (such as partitioned tables) cannot be transported unless all of the underlying or contained objects are also in the tablespace set.
    • See the table "Objects Exported and Imported in Each Mode" in the Oracle Database Utilities documentation; several object types are not exported in tablespace mode.
  4. If the owners of the tablespace objects do not exist in the destination database, the user names must be created manually before starting the transportable tablespace import.
    • If you use spatial indexes, note that:

      • In 10gR1 and 10gR2, TTS operations across platforms with different endianness are not supported for spatial indexes. This restriction was lifted in 11g.
      • Specific spatial packages must be run before the export and after the transport; see the Oracle Spatial documentation.
  5. Starting with Oracle Database 11gR1, Data Pump must be used to export and import the tablespace metadata of tablespaces that contain XMLTypes.

    The following query returns the list of tablespaces that contain XMLTypes:

    select distinct p.tablespace_name
    from dba_tablespaces p, dba_xml_tables x, dba_users u, all_all_tables t
    where t.table_name=x.table_name and
          t.tablespace_name=p.tablespace_name and
          x.owner=u.username;

    Transporting tablespaces with XMLTypes has the following restrictions:

    1. The destination database must have XML DB installed.
    2. The schemas referenced by the XMLType tables cannot be the XML DB standard schemas.
    3. The schemas referenced by the XMLType tables cannot have cyclic dependencies.
    4. Any row-level security on XMLType tables is lost on import.
    5. If the schema of a transported XMLType table is not present in the destination database, it is imported and registered. If the schema already exists in the destination database, an error is returned unless the ignore=y option is used.
  6. Transportable tablespaces do not support 8.0-compatible Advanced Queues with multiple recipients.
  7. You cannot transport the SYSTEM tablespace or objects owned by the user SYS.
  8. Opaque types (such as RAW, BFILE, and the AnyTypes) can be transported, but they are not converted as part of the cross-platform transport operation. Their actual structure is known only to the application, so the application must handle any endianness issues after these types are moved to the new platform.
  9. Floating-point numbers of type BINARY_FLOAT and BINARY_DOUBLE are transportable with Data Pump but not with the original export utility (EXP).
  10. For further restrictions and requirements, see: Document 1454872.1 - Transportable Tablespace (TTS) Restrictions and Limitations: Details, Reference, and Version Where Applicable
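
As noted in restriction 2 above, a name clash can be avoided by renaming one of the tablespaces before the transport; a minimal sketch (the new name TBS1_OLD is hypothetical):

    -- On the destination database, move the existing tablespace out of the way
    -- before plugging in the transported one (tablespace rename requires 10g or later).
    SQL> ALTER TABLESPACE TBS1 RENAME TO TBS1_OLD;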

Transportable Tablespace Export/Import of ASM Files

    • Using RMAN CONVERT

      There is no direct way to export/import ASM files as transportable tablespaces; however, this can be accomplished through RMAN.

      Be sure to follow these steps:

      1. Preparation before exporting the tablespaces.

        • Check that the tablespace set is self-contained:

          SQL>execute sys.dbms_tts.transport_set_check('TBS1,TBS2', true);
          SQL> select * from sys.transport_set_violations;
          Note: any violations reported must be resolved before the tablespaces are transported.
        • For the transportable tablespace export to succeed, the tablespaces must be in READ ONLY mode.
          SQL> ALTER TABLESPACE TBS1 READ ONLY;
          SQL> ALTER TABLESPACE TBS2 READ ONLY;
      2. Export the metadata.
        • Using the original export utility:

          exp userid=\'sys/sys as sysdba\' file=tbs_exp.dmp log=tba_exp.log transport_tablespace=y tablespaces=TBS1,TBS2
        • Using Data Pump export:
          CREATE OR REPLACE DIRECTORY dpump_dir AS '/tmp/subdir';
          GRANT READ,WRITE ON DIRECTORY dpump_dir TO system;

          Then run:

          expdp system/password DUMPFILE=expdat.dmp DIRECTORY=dpump_dir TRANSPORT_TABLESPACES = TBS1,TBS2

          If you want a strict self-containment check to be performed as part of the transportable tablespace operation, use the TRANSPORT_FULL_CHECK parameter:

          expdp system/password DUMPFILE=expdat.dmp DIRECTORY = dpump_dir TRANSPORT_TABLESPACES= TBS1,TBS2 TRANSPORT_FULL_CHECK=Y

        If the tablespace set being transported is not self-contained, the export will fail.

      3. Use V$TRANSPORTABLE_PLATFORM to find the exact platform name of the destination database. You can run the following query on the destination platform's instance.
        SELECT tp.platform_id,substr(d.PLATFORM_NAME,1,30), ENDIAN_FORMAT
        FROM V$TRANSPORTABLE_PLATFORM tp, V$DATABASE d
        WHERE tp.PLATFORM_NAME = d.PLATFORM_NAME;
      4. Generate OS files from the ASM files in the format of the destination platform.
        RMAN> CONVERT TABLESPACE TBS1
              TO PLATFORM 'HP-UX (64-bit)' FORMAT '/tmp/%U';
        RMAN> CONVERT TABLESPACE TBS2
              TO PLATFORM 'HP-UX (64-bit)' FORMAT '/tmp/%U';
      5. Copy the generated files to the destination server (if it is not the same machine as the source).
      6. Import the transportable tablespaces.
        • Using the original import utility:

          imp userid=\'sys/sys as sysdba\' file=tbs_exp.dmp log=tba_imp.log transport_tablespace=y datafiles='/tmp/....','/tmp/...'
        • Using Data Pump import:
          CREATE OR REPLACE DIRECTORY dpump_dir AS '/tmp/subdir';
          GRANT READ,WRITE ON DIRECTORY dpump_dir TO system;

          Then run:

          impdp system/password DUMPFILE=expdat.dmp DIRECTORY=dpump_dir TRANSPORT_DATAFILES='/tmp/....','/tmp/...' REMAP_SCHEMA=(source:target) REMAP_SCHEMA=(source_sch2:target_schema_sch2)

          If you want to change the owner of the transported database objects, use the REMAP_SCHEMA parameter.

      7. Put the tablespaces in read/write mode.
        SQL> ALTER TABLESPACE TBS1 READ WRITE;
        SQL> ALTER TABLESPACE TBS2 READ WRITE;

        If you are transporting the datafiles from an ASM environment to a file system, the procedure ends here. If you are transporting tablespaces between two ASM environments, continue with the steps below.

      8. Use RMAN to copy the file '/tmp/....dbf' into the ASM environment.
        rman nocatalog target /
        RMAN> backup as copy datafile '/tmp/....dbf' format '+DGROUPA';

        Here +DGROUPA is the name of the ASM disk group.

      9. Switch the datafile to the copy.
        On a 10g database, first take the datafile offline:
        SQL> alter database datafile '/tmp/....dbf' offline;

        Switch the file to the copy:

        rman nocatalog target /
        RMAN> switch datafile '/tmp/....dbf' to copy;

        Note the name of the copy created in the +DGROUPA disk group, for example '+DGROUPA/s101/datafile/tts.270.5'.

      10. To bring the file back online, it must first be recovered.
        SQL> recover datafile '+DGROUPA/s101/datafile/tts.270.5';
        SQL> alter database datafile '+DGROUPA/s101/datafile/tts.270.5' online;
      11. Verify that the datafile is now part of the ASM environment and online.
        SQL> select name, status from v$datafile;

        The output should be:

        +DGROUPA/s101/datafile/tts.270.5 ONLINE
    • Using DBMS_FILE_TRANSFER

      You can also use DBMS_FILE_TRANSFER to copy datafiles from one ASM disk group to another, even on a different host. Starting with 10gR2 you can also use DBMS_FILE_TRANSFER to copy datafiles from ASM to a file system and from a file system to ASM.

      The PUT_FILE procedure reads a local file (or ASM file) and contacts a remote database to create a copy of the file on the remote file system. The file being copied is the source file, and the new file created by the copy is the destination file. The destination file is not closed until the procedure completes successfully.

      Syntax:

      DBMS_FILE_TRANSFER.PUT_FILE(
         source_directory_object       IN  VARCHAR2,
         source_file_name              IN  VARCHAR2,
         destination_directory_object  IN  VARCHAR2,
         destination_file_name         IN  VARCHAR2,
         destination_database          IN  VARCHAR2);

      Where:

      • source_directory_object: the directory object on the local source system where the file to be copied resides. This directory object must exist on the source.
      • source_file_name: the name of the file to copy from the local file system. The file must exist in the directory specified by source_directory_object on the local file system.
      • destination_directory_object: the directory object on the destination where the file will be placed. This directory object must exist on the remote file system.
      • destination_file_name: the name of the file created on the remote file system. A file with the same name must not already exist in the destination directory on the remote file system.
      • destination_database: the name of the database link to the remote database to which the file is copied.

      If we want to use DBMS_FILE_TRANSFER.PUT_FILE to transfer the files from the source to the destination host, steps 3, 4, and 5 above change as follows:

      1. On the destination database host, create a directory object and grant privileges on it to the user. This is the directory object into which the files will be placed on the destination; it must exist on the remote file system.

        CREATE OR REPLACE DIRECTORY target_dir AS '+DGROUPA';
        GRANT WRITE ON DIRECTORY target_dir TO "USER";
      2. On the source database host, create a directory object. This is the directory object where the files to be copied currently reside on the local source system; it must exist on the source.
        CREATE OR REPLACE DIRECTORY source_dir AS '+DGROUPS/subdir';
        GRANT READ,WRITE ON DIRECTORY source_dir TO "USER";
        CREATE OR REPLACE DIRECTORY source_dir_1 AS '+DGROUPS/subdir/subdir_2';
      3. Create a database link to the destination database host:
        CREATE DATABASE LINK DBS2 CONNECT TO user IDENTIFIED BY password USING 'target_connect';

        Here target_connect is the connect string to the destination database and USER is the user that will be used to transfer the datafiles.

      4. Connect to the source instance. The following items are used:
        • dbs1: connect string to the source database
        • dbs2: database link to the destination database
        • a1.dat: file name on the source database
        • a4.dat: file name on the destination database
        CONNECT user/password@dbs1

        -- - put a1.dat to a4.dat (using dbs2 dblink)
        -- - level 2 sub dir to parent dir
        -- - user has read privs on source_dir_1 at dbs1 and write on target_dir 
        -- - in dbs2
        BEGIN
            DBMS_FILE_TRANSFER.PUT_FILE('source_dir_1', 'a1.dat',
                                        'target_dir', 'a4.dat', 'dbs2' );
        END;
        /

#########XTTS

http://blog.itpub.net/21754115/viewspace-2085482/

For an XTTS migration from file-system tablespaces to file-system tablespaces, see http://blog.itpub.net/28539951/viewspace-1978401/
Test environment:
OS: source: CentOS 6.6; destination: CentOS 6.6
DB: source: 11.2.0.4, file system, single instance; destination: 11.2.0.4, ASM, RAC
Host: source: ct6604 192.108.56.120; destination: ct66rac01 192.108.56.101
Source instance: ctdb; destination instance: rac11g1

1.##ct66rac01
## On the destination instance, create a database link to the source and a directory for the datafiles.
    # This step is needed to import the datafiles into the destination later via impdp over the dblink; if you plan to use a local import instead, the dblink is not required.
    [oracle@ct66rac01 ~]$ cd /u01/app/oracle/product/11.2.0/db_1/network/admin/
    [oracle@ct66rac01 admin]$ vi tnsnames.ora
    CTDB =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = 192.108.56.120)(PORT = 1521))
        (CONNECT_DATA =
          (SERVER = DEDICATED)
          (SERVICE_NAME = ctdb)
        )
      )
    [oracle@ct66rac01 dbs]$ ORACLE_SID=rac11g1
    [oracle@ct66rac01 ~]$ sqlplus / as sysdba
    SQL> create directory dump_oradata as '+DATA';
    SQL> create public database link lnk_ctdb connect to system identified by system using 'ctdb';
    SQL> select * from dual@lnk_ctdb;
    /*
    DUMMY
    X
    */
    SQL> exit

2.##ct66rac01
## Configure the NFS service on the destination.
    # Throughout the XTTS procedure, the datafiles, incremental backups, and driver scripts generated on the source must end up on the destination. Testing showed that with NFS the transfer is done the moment the files are generated, which simplifies the operation and reduces errors. Without NFS the files can also be transferred manually.
    [oracle@ct66rac01 ~]$ mkdir /home/oracle/xtts

[oracle@ct66rac01 ~]$ su -
    [root@ct66rac01 oracle]# service nfs status
    [root@ct66rac01 ~]# cat /etc/exports
    /home/oracle/xtts *(rw,sync,no_root_squash,insecure,anonuid=500,anongid=500)
    [root@ct66rac01 oracle]# service nfs start

3.##ct6604
## On the source, create the test users, tablespaces, tables, and privileges.
    # The privileges and tables created here are used for verification after the migration.
    [oracle@ct6604 ~]$ ORACLE_SID=ctdb
    [oracle@ct6604 ~]$ sqlplus / as sysdba
    SQL> create tablespace tbs01 datafile '/u02/oradata/ctdb/tbs01.dbf' size 10m autoextend on next 2m maxsize 4g;
    SQL> create tablespace tbs02 datafile '/u02/oradata/ctdb/tbs02.dbf' size 10m autoextend on next 2m maxsize 4g;

SQL> create user test01 identified by test01 default tablespace tbs01;
    SQL> create user test02 identified by test02 default tablespace tbs02;
    SQL> grant connect,resource to test01;
    SQL> grant connect,resource to test02;
    SQL> grant execute on dbms_crypto to test02;

SQL> create table test01.tb01 as select * from dba_objects;
    SQL> create table test02.tb01 as select * from dba_objects;
    SQL> grant select on test01.tb01 to test02;
    SQL> exit

4.##ct6604
## On the source, mount the destination's NFS export at /home/oracle/xtts.
    [oracle@ct6604 ~]$ mkdir /home/oracle/xtts
    [oracle@ct6604 ~]$ su -
    [root@ct6604 ~]# showmount -e 192.108.56.101
    Export list for 192.108.56.101:
    /home/oracle/xtts *
    [root@ct6604 ~]# mount -t nfs 192.108.56.101:/home/oracle/xtts /home/oracle/xtts

5.##ct6604
## On the source, extract the rman-xttconvert scripts and configure the XTTS parameter file.
    # All of this is done under /home/oracle/xtts, which is also the directory exported by the destination over NFS, so nothing needs to be configured again on the destination.
    # Parameters in the configuration file:
    #   tablespaces        - tablespaces to be transported
    #   platformid         - platform ID of the source, from V$DATABASE.PLATFORM_ID
    #   srcdir,dstdir,srclink - parameters for transfers via dbms_file_transfer; this test uses RMAN, so they are not used
    #   dfcopydir          - directory on the source where the datafile copies are generated
    #   backupformat       - directory on the source where the incremental backups are generated
    #   stageondest        - directory on the destination holding the source datafile copies and incremental backups
    #   storageondest      - directory on the destination holding the destination datafiles
    #   backupondest       - directory on the destination used to convert incremental backups when the destination uses ASM; with a file-system destination it is best set the same as stageondest. Testing showed that even with an ASM destination it can be set the same as stageondest, because the incremental backups can be rolled forward without conversion.
    #   parallel,rollparallel,getfileparallel - degrees of parallelism; defaults are used here
    #   asm_home,asm_sid   - ORACLE_HOME and SID of the ASM instance when the destination uses ASM
    #   Not used in this test: cnvinst_home,cnvinst_sid - ORACLE_HOME and SID of an auxiliary conversion instance on the destination, required if a separate 11.2.0.4 home has been installed there
                            
    [root@ct6604 xtts]# su - oracle
    [oracle@ct6604 ~]# cd /home/oracle/xtts
    [oracle@ct6604 xtts]$ mkdir  backup script
    [oracle@ct6604 xtts]$ cp /home/oracle/rman-xttconvert_2.0.zip /home/oracle/xtts/
    [oracle@ct6604 xtts]$ unzip rman-xttconvert_2.0.zip
    [oracle@ct6604 xtts]$ mv xtt.properties xtt.properties.bak
    [oracle@ct6604 xtts]$ cat xtt.properties.bak|grep -v ^#|grep -v ^$ >xtt.properties
    [oracle@ct6604 xtts]$ vi xtt.properties
    [oracle@ct6604 xtts]$ cat xtt.properties
    tablespaces=TBS01,TBS02
    platformid=13
    #srcdir=SOURCEDIR1,SOURCEDIR2
    #dstdir=DESTDIR1,DESTDIR2
    #srclink=TTSLINK
    dfcopydir=/home/oracle/xtts/backup
    backupformat=/home/oracle/xtts/backup
    stageondest=/home/oracle/xtts/backup
    storageondest=+DATA
    backupondest=/home/oracle/xtts/backup
    asm_home=/u01/app/11.2.0/grid
    asm_sid=+ASM1
    parallel=3
    rollparallel=2
    getfileparallel=4

6.##ct6604
## Run the prepare step on the source.
    # This generates the datafile copies and the conversion script.
    [oracle@ct6604 xtts]$ ORACLE_SID=ctdb
    [oracle@ct6604 xtts]$ TMPDIR=/home/oracle/xtts/script
    [oracle@ct6604 xtts]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl -p

7.##ct66rac01
## Run the convert step on the destination.
    # Because NFS is used, the files generated on the source do not need to be copied over before the conversion; the step can be run directly.
    [root@ct66rac01 ~]# su - oracle
    [oracle@ct66rac01 ~]$ cd /home/oracle/xtts
    [oracle@ct66rac01 xtts]$ ORACLE_SID=rac11g1
    [oracle@ct66rac01 xtts]$ TMPDIR=/home/oracle/xtts/script
    [oracle@ct66rac01 xtts]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl -c

8.##ct6604
## Simulate new data being generated on the source.
    [oracle@ct6604 xtts]$ ORACLE_SID=ctdb
    [oracle@ct6604 xtts]$ sqlplus / as sysdba
    SQL> insert into test01.tb01 select * from test01.tb01;
    SQL> insert into test02.tb01 select * from test02.tb01;
    SQL> commit;
    SQL> exit

9.##ct6604
## Take an incremental backup on the source.
    [oracle@ct6604 xtts]$ ORACLE_SID=ctdb
    [oracle@ct6604 xtts]$ TMPDIR=/home/oracle/xtts/script
    [oracle@ct6604 xtts]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl -i

10.##ct66rac01
## Apply the incremental backup (roll forward) on the destination.
    # Because NFS is used, the files generated on the source do not need to be copied over; the step can be run directly.
    # The roll forward applies the incremental backup to the converted datafiles.
    [oracle@ct66rac01 xtts]$ ORACLE_SID=rac11g1
    [oracle@ct66rac01 xtts]$ TMPDIR=/home/oracle/xtts/script
    [oracle@ct66rac01 xtts]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl -r

11.##ct6604
## Simulate more new data on the source, then set the tablespaces to be transported to READ ONLY.
    # The downtime window only starts from this point.
    [oracle@ct6604 xtts]$ ORACLE_SID=ctdb
    [oracle@ct6604 xtts]$ sqlplus / as sysdba

SQL> insert into test01.tb01 select * from test01.tb01;
    SQL> insert into test02.tb01 select * from test02.tb01;
    SQL> commit;

SQL> alter tablespace tbs01 read only;
    SQL> alter tablespace tbs02 read only;

SQL> exit

12.##ct6604
## Take the final incremental backup on the source.
    [oracle@ct6604 xtts]$ ORACLE_SID=ctdb
    [oracle@ct6604 xtts]$ TMPDIR=/home/oracle/xtts/script
    [oracle@ct6604 xtts]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl -i

13.##ct66rac01
## Apply the final incremental backup (roll forward) on the destination.
    # Because NFS is used, the files generated on the source do not need to be copied over; the step can be run directly.
    [oracle@ct66rac01 ~]$ cd /home/oracle/xtts
    [oracle@ct66rac01 xtts]$ ORACLE_SID=rac11g1
    [oracle@ct66rac01 xtts]$ TMPDIR=/home/oracle/xtts/script
    [oracle@ct66rac01 xtts]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl -r

14.##ct66rac01
## Generate the import script on the destination.
    # Because the dstdir and srclink parameters were not set earlier, the dblink and directory names must be added to the generated import script manually.
    [oracle@ct66rac01 xtts]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl -e

15.##ct66rac01
## Create the users on the destination and import the transported tablespaces.
    [oracle@ct66rac01 ~]$ ORACLE_SID=rac11g1
    [oracle@ct66rac01 xtts]$ sqlplus / as sysdba
    SQL> create user test01 identified by test01 ;
    SQL> create user test02 identified by test02 ;
    SQL> grant connect,resource to test01;
    SQL> grant connect,resource to test02;
    SQL> exit

[oracle@ct66rac01 ~]$ ORACLE_SID=rac11g1
    /home/oracle/xtts/script/xttplugin.txt
    [oracle@ct66rac01 ~]$ impdp directory=dump_oradata nologfile=y network_link=lnk_ctdb transport_full_check=no transport_tablespaces=TBS01,TBS02 transport_datafiles='+DATA/tbs01_5.xtf','+DATA/tbs02_6.xtf'

Import: Release 11.2.0.4.0 - Production on Fri Jan 15 17:18:14 2016

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Username: system
    Password:

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
    With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
    Data Mining and Real Application Testing options
    Starting "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01":  system/******** directory=dump_oradata nologfile=y network_link=lnk_ctdb transport_full_check=no transport_tablespaces=TBS01,TBS02 transport_datafiles=+DATA/tbs01_5.xtf,+DATA/tbs02_6.xtf
    Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
    Processing object type TRANSPORTABLE_EXPORT/TABLE
    Processing object type TRANSPORTABLE_EXPORT/GRANT/OWNER_GRANT/OBJECT_GRANT
    Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
    Job "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01" successfully completed at Fri Jan 15 17:19:07 2016 elapsed 0 00:00:48

16.##ct66rac01
## On the destination, verify that the imported data and privileges match the source.
    # Here it turned out that the execute on dbms_crypto privilege granted to test02 on the source was not imported; this is inherent to impdp. Sort out such privileges before the XTTS to keep downtime short (see the sketch after this step).
    [oracle@ct66rac01 xtts]$ sqlplus / as sysdba
    SQL> alter tablespace tbs01 read write;
    SQL> alter tablespace tbs02 read write;
    SQL> alter user test01 default tablespace tbs01;
    SQL> alter user test02 default tablespace tbs02;

SQL> select count(1) from test01.tb01;
    /*
    COUNT(1)
    345732
    */

SQL> select * from dba_tab_privs where grantee='TEST02';
    /*
    GRANTEE    OWNER    TABLE_NAME    GRANTOR    PRIVILEGE    GRANTABLE    HIERARCHY
    TEST02    TEST01    TB01    TEST01    SELECT    NO    NO
    */
    #select * from dba_tab_privs where owner ='SYS' and grantee='TEST02';
    SQL> grant execute on dbms_crypto to test02;
    SQL> exit
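
Because object grants on SYS-owned packages (such as execute on dbms_crypto above) are not carried over by the transportable import, it helps to capture them on the source before the migration; a minimal sketch that spools re-grant statements for the test users used here (adjust the user list for your own schemas):

    SQL> set pagesize 0 linesize 200 feedback off
    SQL> spool regrants.sql
    SQL> select 'grant '||privilege||' on '||owner||'.'||table_name||' to '||grantee||';'
         from dba_tab_privs
         where owner = 'SYS' and grantee in ('TEST01','TEST02');
    SQL> spool off

    Run the spooled regrants.sql on the destination after the import.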

Minor issues encountered during the test:
1. Error: Cant find xttplan.txt, TMPDIR undefined at xttdriver.pl line 1185.
   Make sure the environment variable TMPDIR=/home/oracle/xtts/script is set.
2. Unable to fetch platform name
   ORACLE_SID was not set before running xttdriver.pl.
3. Some failure occurred. Check /home/oracle/xtts/script/FAILED for more details
      If you have fixed the issue, please delete /home/oracle/xtts/script/FAILED and run it
      again OR run xttdriver.pl with -L option
   After xttdriver.pl reports an error, delete the FAILED file before running it again.
4. Can't locate strict.pm in @INC
   Use $ORACLE_HOME/perl/bin/perl rather than the system perl.

Notes:
The test is complete and is fairly straightforward: do the preparation, run $ORACLE_HOME/perl/bin/perl xttdriver.pl a few times on the source and destination, then run impdp and it is done. Using NFS in this test removes the manual file transfers and makes the whole operation much cleaner.
GoldenGate is also a good option for reducing migration downtime. For a whole-database migration where the platforms differ, or are the same, but the endianness is the same, consider Data Guard first; see Data Guard Support for Heterogeneous Primary and Physical Standbys in Same Data Guard Configuration (Doc ID 413484.1).

#############XTTS 2  11G

When using Cross Platform Transportable Tablespaces (XTTS) to migrate data between systems that have different endian formats, the amount of downtime required can be substantial because it is directly proportional to the size of the data set being moved.  However, combining XTTS with Cross Platform Incremental Backup can significantly reduce the amount of downtime required to move data between platforms.

Traditional Cross Platform Transportable Tablespaces

The high-level steps in a typical XTTS scenario are the following:

  1. Make tablespaces in source database READ ONLY
  2. Transfer datafiles to destination system
  3. Convert datafiles to destination system endian format
  4. Export metadata of objects in the tablespaces from source database using Data Pump
  5. Import metadata of objects in the tablespaces into destination database using Data Pump
  6. Make tablespaces in destination database READ WRITE

Because the data transported must be made read only at the very start of the procedure, the application that owns the data is effectively unavailable to users for the entire duration of the procedure.  Due to the serial nature of the steps, the downtime required for this procedure is proportional to the amount of data.  If data size is large, datafile transfer and convert times can be long, thus downtime can be long.

Reduce Downtime using Cross Platform Incremental Backup

To reduce the amount of downtime required for XTTS, Oracle has enhanced RMAN's ability to roll forward datafile copies using incremental backups, to work in a cross-platform scenario.  By using a series of incremental backups, each smaller than the last, the data at the destination system can be brought almost current with the source system, before any downtime is required.  The downtime required for datafile transfer and convert when combining XTTS with Cross Platform Incremental Backup is now proportional to the rate of data block changes in the source system.

The Cross Platform Incremental Backup feature does not affect the amount of time it takes to perform other actions for XTTS, such as metadata export and import.  Hence, databases that have very large amounts of metadata (DDL) will see limited benefit from Cross Platform Incremental Backup since migration time is typically dominated by metadata operations, not datafile transfer and conversion.
Only those database objects that are physically located in the tablespaces being transported will be copied to the destination system. If other objects located in different tablespaces also need to be transported (for example, PL/SQL objects, sequences, etc. that are located in the SYSTEM tablespace), you can use Data Pump to copy those objects to the destination system.
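
For example, a minimal Data Pump sketch for carrying over code objects and sequences that live in the SYSTEM tablespace and therefore are not moved by the tablespace transport; the schema name APP, the directory dpump_dir, and the dump file name are assumptions:

expdp system/password DIRECTORY=dpump_dir DUMPFILE=app_code.dmp \
      SCHEMAS=APP CONTENT=METADATA_ONLY INCLUDE=SEQUENCE,PROCEDURE,FUNCTION,PACKAGE,VIEW

impdp system/password DIRECTORY=dpump_dir DUMPFILE=app_code.dmp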

The high-level steps using the cross platform incremental backup capability are the following:

1.  Prepare phase (source data remains online)

    1. Transfer datafiles to destination system
    2. Convert datafiles, if necessary, to destination system endian format

2.  Roll Forward phase (source data remains online - Repeat this phase as many times as necessary to catch destination datafile copies up to source database)

    1. Create incremental backup on source system
    2. Transfer incremental backup to destination system
    3. Convert incremental backup to destination system endian format and apply the backup to the destination datafile copies
NOTE: In Version 3, if a datafile is added to the tablespace OR a new tablespace name is added to the xtt.properties file, a warning is raised and additional instructions must be followed.

3.  Transport phase (source data is READ ONLY)

    1. Make tablespaces in source database READ ONLY
    2. Repeat the Roll Forward phase one final time
      • This step makes destination datafile copies consistent with source database.
      • Time for this step is significantly shorter than traditional XTTS method when dealing with large data because the incremental backup size is smaller.
    3. Export metadata of objects in the tablespaces from source database using Data Pump
    4. Import metadata of objects in the tablespaces into destination database using Data Pump
    5. Make tablespaces in destination database READ WRITE

The purpose of this document is to provide an example of how to use this enhanced RMAN cross platform incremental backup capability to reduce downtime when transporting tablespaces across platforms.

SCOPE

The source system may be any platform provided the prerequisites referenced and listed below for both platform and database are met.

If you are migrating from a little endian platform to Oracle Linux, then the migration method that should receive first consideration is Data Guard.  See Note 413484.1 for details about heterogeneous platform support for Data Guard between your current little endian platform and Oracle Linux.

This method can also be used with 12c databases, however, for an alternative method for 12c see:

Note 2005729.1 12C - Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup.

NOTE:  Neither method supports 12c multitenant databases.  Enhancement bug 22570430 addresses this limitation.  

DETAILS

Overview

This document provides a procedural example of transporting two tablespaces called TS1 and TS2 from an Oracle Solaris SPARC system to an Oracle Exadata Database Machine running Oracle Linux, incorporating Oracle's Cross Platform Incremental Backup capability to reduce downtime.

After performing the Initial Setup phase, moving the data is performed in the following three phases:

Prepare phase
During the Prepare phase, datafile copies of the tablespaces to be transported are transferred to the destination system and converted.  The application being migrated is fully accessible during the Prepare phase.  The Prepare phase can be performed using RMAN backups or dbms_file_transfer.  Refer to the Selecting the Prepare Phase Method section for details about choosing the Prepare phase method.

Roll Forward phase
During the Roll Forward phase, the datafile copies that were converted during the Prepare phase are rolled forward using incremental backups taken from the source database.  By performing this phase multiple times, each successive incremental backup becomes smaller and faster to apply, allowing the data at the destination system to be brought almost current with the source system.  The application being migrated is fully accessible during the Roll Forward phase.

Transport phase
During the Transport phase, the tablespaces being transported are put into READ ONLY mode, and a final incremental backup is taken from the source database and applied to the datafile copies on the destination system, making the destination datafile copies consistent with source database.  Once the datafiles are consistent, the tablespaces are TTS-exported from the source database and TTS-imported into the destination database.  Finally, the tablespaces are made READ WRITE for full access on the destination database. The application being migrated cannot receive any updates during the Transport phase.

Cross Platform Incremental Backup Supporting Scripts

The Cross Platform Incremental Backup core functionality is delivered in Oracle Database 11.2.0.4 and later.  See the Requirements and Recommendations section for details.  In addition, a set of supporting scripts in the file rman-xttconvert_2.0.zip are attached to this document that are used to manage the procedure required to perform XTTS with Cross Platform Incremental Backup.  The two primary supporting scripts files are the following:

  • Perl script xttdriver.pl - the script that is run to perform the main steps of the XTTS with Cross Platform Incremental Backup procedure.
  • Parameter file xtt.properties - the file that contains your site-specific configuration.

Requirements and Recommendations

This section contains the following subsections:

  • Prerequisites
  • Selecting the Prepare Phase Method
  • Destination Database 11.2.0.3 or Earlier Requires a Separate Incremental Convert Home and Instance

Prerequisites

The following prerequisites must be met before starting this procedure:

  • The limitations and considerations for transportable tablespaces must still be followed.  They are defined in the following manuals:
  • In addition to the limitations and considerations for transportable tablespaces, the following conditions must be met:
    • The current version does NOT support Windows.
    • The source database must be running 10.2.0.3 or higher.
    • The source database must have its COMPATIBLE parameter set to 10.2.0 or higher.
    • The source database's COMPATIBLE parameter must not be greater than the destination database's COMPATIBLE parameter.
    • The source database must be in ARCHIVELOG mode.
    • The destination database must be running 11.2.0.4 or higher.
    • Although the preferred destination system is Linux (either 64-bit Oracle Linux or a certified version of Red Hat Linux), this procedure can be used with other UNIX-based operating systems.
    • The Oracle software version of the source must be lower than or equal to that of the destination.
    • RMAN's default device type should be configured to DISK.
    • RMAN on the source system must not have DEVICE TYPE DISK configured with COMPRESSED; otherwise the procedure may return: ORA-19994: cross-platform backup of compressed backups different endianess (see the sketch after this list).
    • The set of tablespaces being moved must all be online, and contain no offline data files.  Tablespaces must be READ WRITE.  Tablespaces that are READ ONLY may be moved with the normal XTTS method.  There is no need to incorporate Cross Platform Incremental Backups to move tablespaces that are always READ ONLY.
  • All steps in this procedure are run as the oracle user that is a member of the OSDBA group. OS authentication is used to connect to both the source and destination databases.
  • If the Prepare Phase method selected is dbms_file_transfer, then the destination database must be 11.2.0.4.  See the Selecting the Prepare Phase Method section for details.
  • If the Prepare Phase method selected is RMAN backup, then staging areas are required on both the source and destination systems.  See the Selecting the Prepare Phase Method section for details.
  • It is not supported to execute this procedure against standby or snapshot standby databases.
  • If the destination database version is 11.2.0.3 or lower, then a separate database home containing 11.2.0.4 running an 11.2.0.4 instance on the destination system is required to perform the incremental backup conversion.  See the Destination Database 11.2.0.3 and Earlier Requires a Separate Incremental Convert Home and Instance section for details. If using ASM for 11.2.0.4 Convert Home, then ASM needs to be on 11.2.0.4, else error ORA-15295 (e.g. ORA-15295: ASM instance software version 11.2.0.3.0 less than client version 11.2.0.4.0) is raised.
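
Regarding the DEVICE TYPE DISK prerequisites above, the current RMAN configuration on the source can be checked and, if necessary, reset as follows; a minimal sketch:

RMAN> SHOW DEVICE TYPE;
RMAN> SHOW DEFAULT DEVICE TYPE;
# If DISK is configured with COMPRESSED BACKUPSET, reset it to uncompressed backup sets:
RMAN> CONFIGURE DEVICE TYPE DISK BACKUP TYPE TO BACKUPSET;
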
Whole Database Migration

If Cross Platform Incremental Backups will be used to reduce downtime for a whole database migration, then the steps in this document can be combined with the XTTS guidance provided in the MAA paper Platform Migration Using Transportable Tablespaces: Oracle Database 11g.

This method can also be used with 12c databases, however, for an alternative method for 12c see:
Note 2005729.1 12C - Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup.

Selecting the Prepare Phase Method

During the Prepare phase, datafiles of the tablespaces to be transported are transferred to the destination system and converted by the xttdriver.pl script.  There are two possible methods:

  1. Using dbms_file_transfer (DFT) transfer (using xttdriver.pl -S and -G options)
  2. Using Recovery Manager (RMAN) RMAN backup (using xttdriver.pl -p and -c options)

The dbms_file_transfer method uses the dbms_file_transfer.get_file() subprogram to transfer the datafiles from the source system to the target system over a database link.  The dbms_file_transfer method has the following advantages over the RMAN method: 1) it does not require staging area space on either the source or destination system; 2) datafile conversion occurs automatically during transfer - there is not a separate conversion step.  The dbms_file_transfer method requires the following:

  • A destination database running 11.2.0.4.  Note that an incremental convert home or instance do not participate in dbms_file_transfer file transfers.
  • A database directory object in the source database from where the datafiles are copied.
  • A database directory object in the destination database to where the datafiles are placed.
  • A database link in the destination database referencing the source database.

The RMAN backup method runs RMAN on the source system to create backups on the source system of the datafiles to be transported.  The backups files must then be manually transferred over the network to the destination system.  On the destination system the datafiles are converted by RMAN, if necessary.  The output of the RMAN conversion places the datafiles in their final location where they will be used by the destination database.  In the original version of xttdriver.pl, this was the only method supported.  The RMAN backup method requires the following:

  • Staging areas are required on both the source and destination systems for the datafile copies created by RMAN.  The staging areas are referenced in the xtt.properties file using the parameters dfcopydir and stageondest.  The final destination where converted datafiles are placed is referenced in the xtt.properties file using the parameter storageondest.  Refer to the Description of Parameters in Configuration File xtt.properties section for details and sizing guidelines.

Details of using each of these methods are provided in the instructions below.  The recommended method is the dbms_file_transfer method.

Destination Database 11.2.0.3 or Earlier Requires a Separate Incremental Convert Home and Instance

The Cross Platform Incremental Backup core functionality (i.e. incremental backup conversion) is delivered in Oracle Database 11.2.0.4 and later.  If the destination database version is 11.2.0.4 or later, then the destination database can perform this function.  However, if the destination database version is 11.2.0.3 or earlier, then, for the purposes of performing incremental backup conversion, a separate 11.2.0.4 software home, called the incremental convert home, must be installed, and an instance, called the incremental convert instance, must be started in NOMOUNT state using that home.  The incremental convert home and incremental convert instance are temporary and are used only during the migration.

Note that because the dbms_file_transfer Prepare Phase method requires destination database 11.2.0.4, which can be used to perform the incremental backup conversions function (as stated above), an incremental convert home and incremental convert instance are usually only applicable when the Prepare Phase method is RMAN backup.

For details about setting up a temporary incremental convert instance, see instructions in Phase 1.

Troubleshooting

To enable debug mode, either run xttdriver.pl with the -d flag, or set environment variable XTTDEBUG=1 before running xttdriver.pl.  Debug mode enables additional screen output and causes all RMAN executions to be performed with the debug command line option.
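
For example, a minimal sketch of enabling debug mode for the prepare step (either form works; the -p flag here is simply the step being run):

[oracle@source]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl -p -d

[oracle@source]$ export XTTDEBUG=1
[oracle@source]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl -p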

Known Issues

  1. If the source database contains nested IOTs with key compression, then the fix for Bug 14835322 must be installed in the destination database home (where the tablespace plug operation occurs).
  2. If you wish to utilize block change tracking on the source database when incremental backups are created, then the fix for Bug 16850197 must be installed in the source database home.
  3. If using ASM in both source and destination, see XTTS Creates Alias on Destination when Source and Destination use ASM (Note 2351123.1)
  4. If the roll forward phase (xttdriver.pl -r) fails with the following errors, then verify RMAN DEVICE TYPE DISK is not configured COMPRESSED.

    Entering RollForward
    After applySetDataFile
    Done: applyDataFileTo
    Done: RestoreSetPiece
    DECLARE
    *
    ERROR at line 1:
    ORA-19624: operation failed, retry possible
    ORA-19870: error while restoring backup piece
    /dbfs_direct/FS1/xtts/incrementals/xtts_incr_backup
    ORA-19608: /dbfs_direct/FS1/xtts/incrementals/xtts_incr_backup is not a backup
    piece
    ORA-19837: invalid blocksize 0 in backup piece header
    ORA-06512: at "SYS.X$DBMS_BACKUP_RESTORE", line 2338
    ORA-06512: at line 40

  5. Note 17866999.8 can also be consulted for known issues. In addition, if the source contains cluster objects, then after XTTS has completed run "analyze cluster &cluster_name validate structure cascade" in the target database; if it reports an ORA-1499, open the trace file and check whether it contains entries like:

    kdcchk: index points to block 0x01c034f2 slot 0x1 chain length is 256
    kdcchk: chain count wrong 0x01c034f2.1 chain is 1 index says 256
    last entry 0x01c034f2.1 blockcount = 1
    kdavls: kdcchk returns 3 when checking cluster dba 0x01c034a1 objn 90376

    Then to repair this inconsistency either:

    1. rebuild the cluster index.
    or
    2. Install fix bug 17866999 and run dbms_repair.repair_cluster_index_keycount

    If after repairing the inconsistency the "analyze cluster &cluster_name validate structure cascade" still reports issues then recreate the affected cluster which involves recreating its tables.

    Note that the fix for bug 17866999 is a workaround fix to repair the index cluster; it does not prevent the problem. Oracle did not find a valid fix for this situation, so it can affect any RDBMS version.


Transport Tablespaces with Reduced Downtime using Cross Platform Incremental Backup

The XTTS with Cross Platform Incremental Backups procedure is divided into the following four phases:

  • Phase 1 - Initial Setup phase
  • Phase 2 - Prepare phase
  • Phase 3 - Roll Forward phase
  • Phase 4 - Transport phase

Conventions Used in This Document

  • All command examples use bash shell syntax.
  • Commands prefaced by the shell prompt string [oracle@source]$ indicate commands run as the oracle user on the source system.
  • Commands prefaced by the shell prompt string [oracle@dest]$ indicate commands run as the oracle user on the destination system.

Phase 1 - Initial Setup

Perform the following steps to configure the environment to use Cross Platform Incremental Backups:

Step 1.1 - Install the Destination Database Software and Create the Destination Database

Install the desired Oracle Database software on the destination system that will run the destination database.  It is highly recommended to use Oracle Database 11.2.0.4 or later.  Note that the dbms_file_transfer Prepare Phase method requires the destination database to be 11.2.0.4.

Identify (or create) a database on the destination system to transport the tablespace(s) into and create the schema users required for the tablespace transport.

Per generic TTS requirement, ensure that the schema users required for the tablespace transport exist in the destination database.
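
A minimal sketch of pre-creating a schema owner in the destination database; the user name APP_OWNER, its password, and the granted privileges are assumptions to be adapted:

SQL@dest> create user app_owner identified by "change_me";
SQL@dest> grant create session, create table to app_owner;
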
Step 1.2 - If necessary, Configure the Incremental Convert Home and Instance

See the Destination Database 11.2.0.3 and Earlier Requires a Separate Incremental Convert Home and Instance section for details.

Skip this step if the destination database software version is 11.2.0.4 or later.  Note that the dbms_file_transfer Prepare Phase method requires the destination database to be 11.2.0.4.

If the destination database is 11.2.0.3 or earlier, then you must configure a separate incremental convert instance by performing the following steps:

    • Install a new 11.2.0.4 database home on the destination system.  This is the incremental convert home.
    • Using the incremental convert home, start up an instance in the NOMOUNT state.  This is the incremental convert instance.  A database does not need to be created for the incremental convert instance.  Only a running instance is required.

The following steps may be used to create an incremental convert instance named xtt running out of incremental convert home /u01/app/oracle/product/11.2.0.4/xtt_home:

[oracle@dest]$ export ORACLE_HOME=/u01/app/oracle/product/11.2.0.4/xtt_home

[oracle@dest]$ export ORACLE_SID=xtt

[oracle@dest]$ cat << EOF > $ORACLE_HOME/dbs/init$ORACLE_SID.ora
db_name=xtt
compatible=11.2.0.4.0
EOF

[oracle@dest]$ sqlplus / as sysdba
SQL> startup nomount

If ASM storage is used for the xtt.properties parameter backupondest (described below), then the COMPATIBLE initialization parameter setting for this instance must be equal to or higher than the rdbms.compatible setting for the ASM disk group used.
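
A quick way to check the disk group compatibility attributes is to query V$ASM_DISKGROUP from the ASM (or any connected) instance; a minimal sketch:

SQL> select name, compatibility, database_compatibility from v$asm_diskgroup;
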
Step 1.3 - Identify Tablespaces to be Transported

Identify the tablespace(s) in the source database that will be transported. Tablespaces TS1 and TS2 will be used in the examples in this document.  As indicated above, the limitations and considerations for transportable tablespaces must still be followed.

Step 1.4 - If Using dbms_file_transfer Prepare Phase Method, then Configure Directory Objects and Database Links

Note that the dbms_file_transfer Prepare Phase method requires the destination database to be 11.2.0.4.

If using dbms_file_transfer as the Prepare Phase method, then three database objects must be created:

    1. A database directory object in the source database from where the datafiles are copied
    2. A database directory object in the destination database to where the datafiles are placed
    3. A database link in the destination database referencing the source database

The source database directory object references the location where the datafiles in the source database currently reside.  For example, to create directory object sourcedir that references datafiles in ASM location +DATA/prod/datafile, connect to the source database and run the following SQL command:

SQL@source> create directory sourcedir as '+DATA/prod/datafile';

The destination database directory object references the location where the datafiles will be placed on the destination system.  This should be the final location where the datafiles will reside when in use by the destination database.  For example, to create directory object destdir that will place transferred datafiles in ASM location +DATA/prod/datafile, connect to the destination database and run the following SQL command:

SQL@dest> create directory destdir as '+DATA/prod/datafile';

The database link is created in the destination database, referencing the source database.  For example, to create a database link named ttslink, run the following SQL command:

SQL@dest> create public database link ttslink connect to system identified by <password> using '<tns_to_source>';

Verify the database link can properly access the source system:

SQL@dest> select * from dual@ttslink;
Step 1.5 - Create Staging Areas

Create the staging areas on the source and destinations systems as defined by the following xtt.properties parameters: backupformat, backupondest.

Also, if using RMAN backups in the Prepare phase, create the staging areas on the source and destinations systems as defined by the following xtt.properties parameters: dfcopydir, stageondest.
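
For example, if the staging areas are plain file system directories (the paths below are assumptions matching the scp example used later in this note):

[oracle@source]$ mkdir -p /stage_source     # dfcopydir and backupformat
[oracle@dest]$ mkdir -p /stage_dest         # stageondest and backupondest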

Step 1.6 - Install xttconvert Scripts on the Source System

On the source system, as the oracle software owner, download and extract the supporting scripts attached to this document as rman-xttconvert_2.0.zip.

[oracle@source]$ pwd
/home/oracle/xtt

[oracle@source]$ unzip rman_xttconvert_v3.zip
Archive: rman_xttconvert_v3.zip
inflating: xtt.properties
inflating: xttcnvrtbkupdest.sql
inflating: xttdbopen.sql
inflating: xttdriver.pl
inflating: xttprep.tmpl
extracting: xttstartupnomount.sql

Step 1.7 - Configure xtt.properties on the Source System

Edit the xtt.properties file on the source system with your site-specific configuration.   For more information about the parameters in the xtt.properties file, refer to the Description of Parameters in Configuration File xtt.properties section in the Appendix below.
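
A minimal sketch of an xtt.properties for the TS1/TS2 example in this document; the platform id, paths, directory object names, and parallel settings are assumptions to be adapted to your site:

tablespaces=TS1,TS2
platformid=2
srcdir=SOURCEDIR
dstdir=DESTDIR
srclink=TTSLINK
dfcopydir=/stage_source
backupformat=/stage_source
stageondest=/stage_dest
storageondest=+DATA/prod/datafile
backupondest=+DATA/prod/datafile
parallel=3
rollparallel=2
getfileparallel=4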

Step 1.8 - Copy xttconvert Scripts and xtt.properties to the Destination System

As the oracle software owner copy all xttconvert scripts and the modified xtt.properties file to the destination system.

[oracle@source]$ scp -r /home/oracle/xtt dest:/home/oracle/xtt
Step 1.9 - Set TMPDIR

In the shell environment on both source and destination systems, set environment variable TMPDIR to the location where the supporting scripts exist.  Use this shell to run the Perl script xttdriver.pl as shown in the steps below.  If TMPDIR is not set, output files are created in and input files are expected to be in /tmp.

[oracle@source]$ export TMPDIR=/home/oracle/xtt

[oracle@dest]$ export TMPDIR=/home/oracle/xtt

 

Phase 2 - Prepare Phase

During the Prepare phase, datafiles of the tablespaces to be transported are transferred to the destination system and converted by the xttdriver.pl script.  There are two possible methods:

  1. Phase 2A - dbms_file_transfer Method
  2. Phase 2B - RMAN Backup Method

Select and use one of these methods based upon the information provided in the Requirements and Recommendations section above.

NOTE:  For a large number of files, dbms_file_transfer has been found to be the fastest method for transferring datafiles to the destination.

Phase 2A - Prepare Phase for dbms_file_transfer Method

Only use the steps in Phase 2A if the Prepare Phase method chosen is dbms_file_transfer and the setup instructions have been completed, particularly those in Step 1.4.

During this phase datafiles of the tablespaces to be transported are transferred directly from source system and placed on the destination system in their final location to be used by the destination database.  If conversion is required, it is performed automatically during transfer.  No separate conversion step is required.  The steps in this phase are run only once.  The data being transported is fully accessible in the source database during this phase.

Step 2A.1 - Run the Prepare Step on the Source System

On the source system, logged in as the oracle user with the environment (ORACLE_HOME and ORACLE_SID environment variables) pointing to the source database, run the prepare step as follows:

[oracle@source]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl -S

The prepare step performs the following actions on the source system:

  • Verifies the tablespaces are online, in READ WRITE mode, and do not contain offline datafiles.
  • Creates the following files used later in this procedure:
    • xttnewdatafiles.txt
    • getfile.sql

The set of tablespaces being transported must all be online, contain no offline data files, and must be READ WRITE.  The Prepare step will signal an error if one or more datafiles or tablespaces in your source database are offline or READ ONLY.  If a tablespace is READ ONLY and will remain so throughout the procedure, then simply transport those tablespaces using the traditional cross platform transportable tablespace process.  No incremental apply is needed for those files.
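
A quick pre-check on the source before running the prepare step; a minimal sketch assuming tablespaces TS1 and TS2:

SQL@source> select tablespace_name, status from dba_tablespaces
            where tablespace_name in ('TS1','TS2');
SQL@source> select file_name, online_status from dba_data_files
            where tablespace_name in ('TS1','TS2');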

Step 2A.2 - Transfer the Datafiles to the Destination System

On the destination system, log in as the oracle user and set the environment (ORACLE_HOME and ORACLE_SID environment variables) to the destination database (it is invalid to attempt to use an incremental convert instance). Copy the xttnewdatafiles.txt and getfile.sql files created in step 2A.1 from the source system and run the -G get_file step as follows:

NOTE: This step copies all datafiles being transported from the source system to the destination system.  The length of time for this step to complete depends on datafile size and may be substantial.  Use the getfileparallel option for parallelism.
[oracle@dest]$ scp oracle@source:/home/oracle/xtt/xttnewdatafiles.txt /home/oracle/xtt
[oracle@dest]$ scp oracle@source:/home/oracle/xtt/getfile.sql /home/oracle/xtt

# MUST set environment to destination database
[oracle@dest]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl -G

When this step is complete, the datafiles being transported will reside in the final location where they will be used by the destination database.  Note that endian conversion, if required, is performed automatically during this step.

Proceed to Phase 3 to create and apply incremental backups to the datafiles.

Phase 2B - Prepare Phase for RMAN Backup Method

Only use the steps in Phase 2B if the Prepare Phase method chosen is RMAN backup and the setup instructions have been completed, particularly those in Step 1.5.

During this phase datafile copies of the tablespaces to be transported are created on the source system, transferred to the destination system, converted, and placed in their final location to be used by the destination database.  The steps in this phase are run only once.  The data being transported is fully accessible in the source database during this phase.

Step 2B.1 - Run the Prepare Step on the Source System

On the source system, logged in as the oracle user with the environment (ORACLE_HOME and ORACLE_SID environment variables) pointing to the source database, run the prepare step as follows:

[oracle@source]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl -p

The prepare step performs the following actions on the source system:

  • Creates datafile copies of the tablespaces that will be transported in the location specified by the xtt.properties parameter dfcopydir.
  • Verifies the tablespaces are online, in READ WRITE mode, and do not contain offline datafiles.
  • Creates the following files used later in this procedure:
    • xttplan.txt
    • rmanconvert.cmd

The set of tablespaces being transported must all be online, contain no offline data files, and must be READ WRITE.  The Prepare step will signal an error if one or more datafiles or tablespaces in your source database are offline or READ ONLY.  If a tablespace is READ ONLY and will remain so throughout the procedure, then simply transport those tablespaces using the traditional cross platform transportable tablespace process.  No incremental apply is needed for those files.

Step 2B.2 - Transfer Datafile Copies to the Destination System

On the destination system, logged in as the oracle user, transfer the datafile copies created in the previous step from the source system.  Datafile copies on the source system are created in the location defined in xtt.properties parameter dfcopydir.  The datafile copies must be placed in the location defined by xtt.properties parameter stageondest.

Any method of transferring the datafile copies from the source system to the destination system that results in a bit-for-bit copy is supported.

If the dfcopydir location on the source system and the stageondest location on the destination system refer to the same NFS storage location, then this step can be skipped since the datafile copies are already available in the expected location on the destination system.

In the example below, scp is used to transfer the copies created by the previous step from the source system to the destination system.

[oracle@dest]$ scp oracle@source:/stage_source/* /stage_dest
Note that due to current limitations with cross-endian support in DBMS_FILE_TRANSFER and ASMCMD, you must use OS-level commands, such as scp or ftp, to transfer the copies from the source system to the destination system.
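If you want to confirm that the transfer really was bit-for-bit, one optional check, assuming md5sum is available on both systems, is to compare checksums of the copies:

[oracle@source]$ md5sum /stage_source/*
[oracle@dest]$ md5sum /stage_dest/*

The checksum reported for each datafile copy should be identical on both systems.
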
Step 2B.3 - Convert the Datafile Copies on the Destination System

On the destination system, logged in as the oracle user with the environment (ORACLE_HOME and ORACLE_SID environment variables) pointing to the destination database, copy the rmanconvert.cmd file created in step 2B.1 from the source system and run the convert datafiles step as follows:

[oracle@dest]$ scp oracle@source:/home/oracle/xtt/rmanconvert.cmd /home/oracle/xtt

[oracle@dest]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl -c

The convert datafiles step converts the datafiles copies in the stageondest location to the endian format of the destination system.  The converted datafile copies are written in the location specified by the xtt.properties parameter storageondest.  This is the final location where datafiles will be accessed when they are used by the destination database.

When this step is complete, the datafile copies in stageondest location are no longer needed and may be removed.

Phase 3 - Roll Forward Phase

During this phase an incremental backup is created from the source database, transferred to the destination system, converted to the destination system endian format, then applied to the converted destination datafile copies to roll them forward.  This phase may be run multiple times. Each successive incremental backup should take less time than the prior incremental backup, and will bring the destination datafile copies more current with the source database.  The data being transported is fully accessible during this phase.

Step 3.1 - Create an Incremental Backup of the Tablespaces being Transported on the Source System

On the source system, logged in as the oracle user with the environment (ORACLE_HOME and ORACLE_SID environment variables) pointing to the source database, run the create incremental step as follows:

[oracle@source]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl -i

The create incremental step executes RMAN commands to generate incremental backups for all tablespaces listed in xtt.properties.  It creates the following files used later in this procedure:

  • tsbkupmap.txt
  • incrbackups.txt
Step 3.2 - Transfer Incremental Backup to the Destination System

Transfer the incremental backup(s) created during the previous step to the stageondest location on the destination system.  The list of incremental backup files to copy are found in the incrbackups.txt file on the source system.

[oracle@source]$ scp `cat incrbackups.txt` oracle@dest:/stage_dest
If the backupformat location on the source system and the stageondest location on the destination system refer to the same NFS storage location, then this step can be skipped since the incremental backups are already available in the expected location on the destination system.
Step 3.3 - Convert the Incremental Backup and Apply to the Datafile Copies on the Destination System

On the destination system, logged in as the oracle user with the environment (ORACLE_HOME and ORACLE_SID environment variables) pointing to the destination database, copy the xttplan.txt and tsbkupmap.txt files from the source system and run the rollforward datafiles step as follows:

[oracle@dest]$ scp oracle@source:/home/oracle/xtt/xttplan.txt /home/oracle/xtt
[oracle@dest]$ scp oracle@source:/home/oracle/xtt/tsbkupmap.txt /home/oracle/xtt

[oracle@dest]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl -r

The rollforward datafiles step connects to the incremental convert instance as SYS, converts the incremental backups, then connects to the destination database and applies the incremental backups for each tablespace being transported.

Note:
1.  You must copy the xttplan.txt and tsbkupmap.txt files each time that this step is executed, because their content is different each iteration.   
2.  Do NOT change, copy or make any changes to the xttplan.txt.new generated by the script.  
3.  The destination instance will be shutdown and restarted by this process.
Step 3.4 - Determine the FROM_SCN for the Next Incremental Backup

On the source system, logged in as the oracle user with the environment (ORACLE_HOME and ORACLE_SID environment variables) pointing to the source database, run the determine new FROM_SCN step as follows:

[oracle@source]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl -s

The determine new FROM_SCN step calculates the next FROM_SCN, records it in the file xttplan.txt, then uses that SCN when the next incremental backup is created in step 3.1.

Step 3.5 - Repeat the Roll Forward Phase (Phase 3) or Move to the Transport Phase (Phase 4)

At this point there are two choices:

  1. If you need to bring the files at the destination database closer in sync with the production system, then repeat the Roll Forward phase, starting with step 3.1.
  2. If the files at the destination database are as close as desired to the source database, then proceed to the Transport phase.

NOTE: If a datafile has been added to a tablespace since the last incremental backup, and/or a new tablespace name has been added to xtt.properties, the following will appear:

Error:
------
The incremental backup was not taken as a datafile has been added to the tablespace:

Please Do the following:
--------------------------
1. Copy fixnewdf.txt from source to destination temp dir

2. Copy backups:
<backup list>
from <source location> to the <stage_dest> in destination

3. On Destination, run $ORACLE_HOME/perl/bin/perl xttdriver.pl --fixnewdf

4. Re-execute the incremental backup in source:
$ORACLE_HOME/perl/bin/perl xttdriver.pl --bkpincr

NOTE: Before running incremental backup, delete FAILED in source temp dir or
run xttdriver.pl with -L option:

$ORACLE_HOME/perl/bin/perl xttdriver.pl -L --bkpincr

These instructions must be followed exactly as listed. The next incremental backup will include the new datafile.

Phase 4 - Transport Phase

During this phase the source data is made READ ONLY and the destination datafiles are made consistent with the source database by creating and applying a final incremental backup. After the destination datafiles are made consistent, the normal transportable tablespace steps are performed to export object metadata from the source database and import it into the destination database.  The data being transported is accessible only in READ ONLY mode until the end of this phase.

Step 4.1 - Make Source Tablespaces READ ONLY in the Source Database

On the source system, logged in as the oracle user with the environment (ORACLE_HOME and ORACLE_SID environment variables) pointing to the source database, make the tablespaces being transported READ ONLY.

system@source/prod SQL> alter tablespace TS1 read only;

Tablespace altered.

system@source/prod SQL> alter tablespace TS2 read only;

Tablespace altered.

Step 4.2 - Create the Final Incremental Backup, Transfer, Convert, and Apply It to the Destination Datafiles

Repeat steps 3.1 through 3.3 one last time to create, transfer, convert, and apply the final incremental backup to the destination datafiles.

[oracle@source]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl -i

[oracle@source]$ scp `cat incrbackups.txt` oracle@dest:/stage_dest

[oracle@source]$ scp xttplan.txt oracle@dest:/home/oracle/xtt
[oracle@source]$ scp tsbkupmap.txt oracle@dest:/home/oracle/xtt

[oracle@dest]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl -r

Step 4.3 - Import Object Metadata into Destination Database

On the destination system, logged in as the oracle user with the environment (ORACLE_HOME and ORACLE_SID environment variables) pointing to the destination database, run the generate Data Pump TTS command step as follows:

[oracle@dest]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl -e

The generate Data Pump TTS command step creates a sample Data Pump network_link transportable import command in the file xttplugin.txt with the transportable tablespaces parameters TRANSPORT_TABLESPACES and TRANSPORT_DATAFILES correctly set.  Note that network_link mode initiates an import over a database link that refers to the source database.  A separate export or dump file is not required.  If you choose to perform the tablespace transport with this command, then you must edit the import command to replace import parameters DIRECTORY, LOGFILE, and NETWORK_LINK with site-specific values.

The following is an example network mode transportable import command:

[oracle@dest]$ impdp directory=DATA_PUMP_DIR logfile=tts_imp.log network_link=ttslink \ 
transport_full_check=no \ 
transport_tablespaces=TS1,TS2 \ 
transport_datafiles='+DATA/prod/datafile/ts1.285.771686721', \ 
'+DATA/prod/datafile/ts2.286.771686723', \ 
'+DATA/prod/datafile/ts2.287.771686743'

After the object metadata being transported has been extracted from the source database, the tablespaces in the source database may be made READ WRITE again, if desired.

Database users that own objects being transported must exist in the destination database before performing the transportable import.
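One simple way to spot owners that are missing on the destination, assuming the ttslink database link used in the example above, is to compare DBA_USERS across the link:

SQL> select username from dba_users@ttslink
     minus
     select username from dba_users;

Any username returned must be created in the destination database before the transportable import.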

If you do not use a network_link import, then perform the tablespace transport by running a transportable mode Data Pump export on the source database to export the object metadata being transported into a dump file, transferring the dump file to the destination system, and then running a transportable mode Data Pump import to import the object metadata into the destination database.  Refer to the Oracle Database Administrator's Guide and Oracle Database Utilities for details.
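As a rough illustration only (the directory, dump file, and datafile names below are placeholders, not values generated by this procedure), the file-based flow might look like:

[oracle@source]$ expdp system directory=DATA_PUMP_DIR dumpfile=tts_meta.dmp logfile=tts_exp.log \
transport_tablespaces=TS1,TS2 transport_full_check=no

(transfer tts_meta.dmp to a Data Pump directory on the destination system, then)

[oracle@dest]$ impdp system directory=DATA_PUMP_DIR dumpfile=tts_meta.dmp logfile=tts_imp.log \
transport_datafiles='/oradata/prod/ts1.dbf','/oradata/prod/ts2.dbf'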

Step 4.4 - Make the Tablespace(s) READ WRITE in the Destination Database

The final step is to make the destination tablespace(s) READ WRITE in the destination database.

system@dest/prod SQL> alter tablespace TS1 read write;

Tablespace altered.

system@dest/prod SQL> alter tablespace TS2 read write;

Tablespace altered.

Step 4.5 - Validate the Transported Data

If this validation is performed before step 4.4, the transported data is still READ ONLY in the destination database.  Perform application specific validation to verify the transported data.

Also, run RMAN to check for physical and logical block corruption by running VALIDATE TABLESPACE as follows:

RMAN> validate tablespace TS1, TS2 check logical;
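If VALIDATE reports corrupt blocks, they are recorded in the block corruption view and can be reviewed, for example, with:

SQL> select * from v$database_block_corruption;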

Phase 5 - Cleanup

If a separate incremental convert home and instance were created for the migration, then the instance may be shutdown and the software removed.

Files created by this process are no longer required and may now be removed.  They include the following:

  • dfcopydir location on the source system
  • backupformat location on the source system
  • stageondest location on the destination system
  • backupondest location on the destination system
  • $TMPDIR location in both destination and source systems

Appendix

Description of Perl Script xttdriver.pl Options

The following table describes the options available for the main supporting script xttdriver.pl.

Option Description
-S prepare source for transfer

-S option is used only when Prepare phase method is dbms_file_transfer.

Prepare step is run once on the source system during Phase 2A with the environment (ORACLE_HOME and ORACLE_SID) set to the source database.  This step creates files xttnewdatafiles.txt and getfile.sql.

-G get datafiles from source

-G option is used only when Prepare phase method is dbms_file_transfer.

Get datafiles step is run once on the destination system during Phase 2A with the environment (ORACLE_HOME and ORACLE_SID) set to the destination database.  The -S option must be run beforehand and files xttnewdatafiles.txt and getfile.sql transferred to the destination system.

This option connects to the destination database and runs script getfile.sql.  getfile.sql invokes dbms_file_transfer.get_file() subprogram for each datafile to transfer it from the source database directory object (defined by parameter srcdir) to the destination database directory object (defined by parameter dstdir) over a database link (defined by parameter srclink).
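For illustration only (getfile.sql itself is generated by the prepare step; the directory objects, database link, and file name below are placeholders), each transfer boils down to a call such as:

BEGIN
  dbms_file_transfer.get_file(
    source_directory_object      => 'SRC1',     -- srcdir directory object
    source_file_name             => 'ts1.dbf',  -- placeholder datafile name
    source_database              => 'TTSLINK',  -- srclink database link
    destination_directory_object => 'DST1',     -- dstdir directory object
    destination_file_name        => 'ts1.dbf');
END;
/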

-p prepare source for backup

-p option is used only when Prepare phase method is RMAN backup.

Prepare step is run once on the source system during Phase 2B with the environment (ORACLE_HOME and ORACLE_SID) set to the source database.

This step connects to the source database and runs the xttpreparesrc.sql script once for each tablespace to be transported, as configured in xtt.properties.  xttpreparesrc.sql does the following:

  1. Verifies the tablespace is online, in READ WRITE mode, and contains no offline datafiles.
  2. Identifies the SCN that will be used for the first iteration of the incremental backup step and writes it into file $TMPDIR/xttplan.txt.
  3. Creates the initial datafile copies on the destination system in the location specified by the parameter dfcopydir set in xtt.properties.  These datafile copies must be transferred manually to the destination system.
  4. Creates RMAN script $TMPDIR/rmanconvert.cmd that will be used to convert the datafile copies to the required endian format on the destination system.
-c convert datafiles

-c option is used only when Prepare phase method is RMAN backup.

Convert datafiles step is run once on the destination system during Phase 2B with the environment (ORACLE_HOME and ORACLE_SID) set to the destination database.

This step uses the rmanconvert.cmd file created in the Prepare step to convert the datafile copies to the proper endian format.  Converted datafile copies are written on the destination system to the location specified by the parameter storageondest set in xtt.properties.

-i create incremental Create incremental step is run one or more times on the source system with the environment (ORACLE_HOME and ORACLE_SID) set to the source database.

This step reads the SCNs listed in $TMPDIR/xttplan.txt and generates an incremental backup that will be used to roll forward the datafile copies on the destination system.

-r rollforward datafiles Rollforward datafiles step is run once for every incremental backup created with the environment (ORACLE_HOME and ORACLE_SID) set to the destination database.

This step connects to the incremental convert instance using the parameters cnvinst_home and cnvinst_sid, converts the incremental backup pieces created by the Create Incremental step, then connects to the destination database and rolls forward the datafile copies by applying the incremental for each tablespace being transported.

-s determine new FROM_SCN Determine new FROM_SCN step is run one or more times with the environment (ORACLE_HOME and ORACLE_SID) set to the source database.
This step calculates the next FROM_SCN, records it in the file xttplan.txt, then uses that SCN when the next incremental backup is created in step 3.1. It reports the mapping of the new FROM_SCN to wall clock time to indicate how far behind the changes in the next incremental backup will be.
-e generate Data Pump TTS command Generate Data Pump TTS command step is run once on the destination system with the environment (ORACLE_HOME and ORACLE_SID) set to the destination database.

This step creates the template of a Data Pump Import command that uses a network_link to import metadata of objects that are in the tablespaces being transported.

-d debug -d option enables debug mode for xttdriver.pl and RMAN commands it executes.  Debug mode can also be enabled by setting environment variable XTTDEBUG=1.
   

Description of Parameters in Configuration File xtt.properties

The following table describes the parameters defined in the xtt.properties file that is used by xttdriver.pl.

Parameter Description Example Setting
tablespaces Comma-separated list of tablespaces to transport from source database to destination database. Must be a single line, any subsequent lines will not be read. tablespaces=TS1,TS2
platformid Source database platform id, obtained from V$DATABASE.PLATFORM_ID. platformid=2
srcdir

Directory object in the source database that defines where the source datafiles currently reside. Multiple locations can be used separated by ",". The srcdir to dstdir mapping can either be N:1 or N:N. i.e. there can be multiple source directories and the files will be written to a single destination directory, or files from a particular source directory can be written to a particular destination directory.

This parameter is used only when Prepare phase method is dbms_file_transfer.

srcdir=SOURCEDIR

srcdir=SRC1,SRC2

dstdir

Directory object in the destination database that defines where the destination datafiles will be created.  If multiple source directories are used (srcdir), then multiple destinations can be defined so a particular source directory is written to a particular destination directory.

This parameter is used only when Prepare phase method is dbms_file_transfer.

dstdir=DESTDIR

dstdir=DST1,DST2

srclink

Database link in the destination database that refers to the source database.  Datafiles will be transferred over this database link using dbms_file_transfer.

This parameter is used only when Prepare phase method is dbms_file_transfer.

srclink=TTSLINK
dfcopydir

Location on the source system where datafile copies are created during the "-p prepare" step.

This location must have sufficient free space to hold copies of all datafiles being transported.

This location may be an NFS-mounted filesystem that is shared with the destination system, in which case it should reference the same NFS location as the stageondest parameter for the destination system.  See Note 359515.1 for mount option guidelines.

This parameter is used only when Prepare phase method is RMAN backup.

dfcopydir=/stage_source
backupformat Location on the source system where incremental backups are created.

This location must have sufficient free space to hold the incremental backups created for one iteration through the process documented above.

This location may be an NFS-mounted filesystem that is shared with the destination system, in which case it should reference the same NFS location as the stageondest parameter for the destination system.

backupformat=/stage_source
stageondest Location on the destination system where datafile copies are placed by the user when they are transferred manually from the source system.

This location must have sufficient free space to hold copies of all datafiles being transported.

This is also the location from where datafiles copies and incremental backups are read when they are converted in the "-c conversion of datafiles" and "-r roll forward datafiles" steps.

This location may be a DBFS-mounted filesystem.

This location may be an NFS-mounted filesystem that is shared with the source system, in which case it should reference the same NFS location as the dfcopydir and backupformat parameters for the source system.  See Note 359515.1 for mount option guidelines.

stageondest=/stage_dest
storageondest

Location on the destination system where the converted datafile copies will be written during the "-c conversion of datafiles" step.

This location must have sufficient free space to permanently hold the datafiles that are transported.

This is the final location of the datafiles where they will be used by the destination database.

This parameter is used only when Prepare phase method is RMAN backup.

storageondest=+DATA
- or -
storageondest=/oradata/prod/%U
backupondest Location on the destination system where the converted incremental backups will be written during the "-r roll forward datafiles" step.

This location must have sufficient free space to hold the incremental backups created for one iteration through the process documented above.

NOTE: If this is set to an ASM location then define properties asm_home and asm_sid below. If this is set to a file system location, then comment out asm_home and asm_sid parameters below.

backupondest=+RECO
cnvinst_home

Only set this parameter if a separate incremental convert home is in use.

ORACLE_HOME of the incremental convert instance that runs on the destination system.

cnvinst_home=/u01/app/oracle/product/11.2.0.4/xtt_home
cnvinst_sid

Only set this parameter if a separate incremental convert home is in use.

ORACLE_SID of the incremental convert instance that runs on the destination system.

cnvinst_sid=xtt
asm_home ORACLE_HOME for the ASM instance that runs on the destination system.

NOTE: If backupondest is set to a file system location, then comment out both asm_home and asm_sid.

asm_home=/u01/app/11.2.0.4/grid
asm_sid ORACLE_SID for the ASM instance that runs on the destination system. asm_sid=+ASM1
parallel

Defines the degree of parallelism set in the RMAN CONVERT command file rmanconvert.cmd. This file is created during the prepare step and used by RMAN in the convert datafiles step to convert the datafile copies on the destination system.  If this parameter is unset, xttdriver.pl uses parallel=8.

NOTE: RMAN parallelism used for the datafile copies created in the RMAN Backup prepare phase and the incremental backup created in the rollforward phase is controlled by the RMAN configuration on the source system. It is not controlled by this parameter.

parallel=3
rollparallel

Defines the level of parallelism for the -r roll forward operation.

rollparallel=2
getfileparallel

Defines the level of parallelism for the -G operation

Default value is 1. Maximum supported value is 8.

getfileparallel=4

Known Issue

Known Issues for Cross Platform Transportable Tablespaces XTTS Document 2311677.1

Change History

Change Date

2017-Jun-06  rman_xttconvert_v3.zip released - adds support for added datafiles

2015-Apr-20  rman-xttconvert_2.0.zip released - adds support for multiple source and destination directories

2014-Nov-14  rman-xttconvert_1.4.2.zip released - adds parallelism support for the -G get file from source operation

2014-Feb-21  rman-xttconvert_1.4.zip released - removes the staging area requirement, adds parallel rollforward, eliminates the conversion instance requirement when using 11.2.0.4

2013-Apr-10  rman-xttconvert_1.3.zip released - improves handling of large databases with a large number of datafiles

###XTTS 12c

This section covers the steps for migrating data between systems with different endian formats on 12c and later, using Cross-Platform Transportable Tablespaces (XTTS) together with RMAN incremental backups, with minimal application downtime.

The first step is to copy a full backup from the source to the destination system. A series of incremental backups (each smaller than the previous one) is then applied, so that before the outage the data on the destination is "almost" in sync with the source. The only steps requiring downtime are the final incremental backup and the metadata export/import.

This document describes the cross-platform incremental backup procedure for 12c; for the 11g procedure, refer to Note 1389592.1.

The cross-platform incremental backup feature does not reduce the time spent on the other XTTS steps, such as the metadata export/import. Therefore, for databases with a large amount of metadata (DDL), such as Oracle E-Business Suite and other packaged applications, it provides little benefit: in such environments most of the migration time is spent processing metadata, not converting and transferring datafiles.
Only database objects physically stored in the migrated tablespaces are copied to the destination system; for objects stored in other tablespaces (for example PL/SQL objects, sequences, and so on stored in the SYSTEM tablespace), use Data Pump to copy them to the destination system.

The main steps of cross-platform incremental backup are:

    1. Initial setup
    2. Prepare phase (source data remains online)
      1. Take a level 0 backup of the tablespaces to be transported
      2. Transfer the backup and the other required files to the destination system
      3. Restore the datafiles on the destination system in the destination endian format
    3. Roll forward phase (source data remains online – repeat this phase as many times as needed so that the destination datafile copies are as close as possible to the source database)
      1. Create an incremental backup on the source database
      2. Transfer the incremental backup and the other required files to the destination system
      3. Convert the incremental backup to the destination endian format and apply it to the destination datafile copies
      4. Determine the next_scn for the next incremental backup
      5. Repeat these steps until you are ready to perform the tablespace transport
NOTE:  In version 3, if a datafile is added to a tablespace, or a new tablespace name is added to the xtt.properties file, a warning is raised and additional handling is required.
  4. Transport phase (the source data must be placed in READ ONLY mode at this point)
    1. Make the tablespaces READ ONLY on the source database
    2. Run the roll forward phase steps one final time
      • This step makes the destination datafile copies identical to the source datafiles and generates the required export files.
      • For very large data volumes this step takes significantly less time than traditional XTTS, because the incremental backup is small.
    3. Import the tablespace metadata into the destination database using Data Pump
    4. Make the corresponding tablespaces READ WRITE in the destination database

Scope

The source database can be on any of the platforms listed below that meet the prerequisites.

If you are migrating from a little-endian platform to Oracle Linux, the best option to consider is Data Guard. For details on using heterogeneous-platform Data Guard to migrate a little-endian platform to Oracle Linux, see Note 413484.1.

Details

Overview

This document provides a test case with detailed steps for transporting two tablespaces, TS1 and TS2, from an Oracle Solaris SPARC system to Oracle Linux with reduced downtime, using Oracle cross-platform incremental backup.

After the initial setup is complete, the following steps are performed to move the data:

Prepare:
In the prepare phase, a level 0 backup of the tablespace datafiles is taken on the source database. The backup is transferred to the destination system, where the datafiles are restored and converted to the destination endian format.

Roll forward:
In the roll forward phase, the datafiles restored in the previous step are rolled forward using incremental backups taken on the source database. This step is performed multiple times; each incremental backup becomes smaller and takes less time to apply, bringing the destination system "almost" in sync with the source database. The application is not affected at any point during this process.

Transport:
In the transport phase, the tablespaces to be migrated are placed in READ ONLY mode on the source database and one final incremental backup is taken on the source. This backup is transferred to the destination system and applied to the destination datafiles. At that point the destination datafile copies are consistent with the source database, and the application being migrated must not make any further changes. The tablespaces are then imported into the destination database using transportable tablespace technology. Finally, the tablespaces on the destination system are placed in READ WRITE mode and full access is restored.

Cross-Platform Incremental Backup Supporting Scripts

The core cross-platform incremental backup functionality is provided in Oracle Database 11.2.0.4 and later; for 11g, follow the procedure in Note 1389592.1. The steps in this document apply to Oracle 12c, release 12.1 and later; see the Requirements and Recommendations section. The attachment rman_xttconvert_ver2.zip contains the supporting scripts used when implementing cross-platform incremental backup and XTTS.

The main supporting scripts are:

  • The Perl script xttdriver.pl, which runs the main XTTS and cross-platform incremental backup steps.
  • The parameter file xtt.properties, which holds the site-specific configuration.

Prerequisites

The following prerequisites must be met before starting:

  • The transportable tablespace limitations and considerations defined in the following online documentation must be taken into account:
    • Oracle Database Administrator's Guide
    • Oracle Database Utilities
  • In addition to the transportable tablespace limitations and considerations, note the following conditions (quick checks for several of them are sketched after this list):

    • The current version does not support Windows as either the source or the destination.
    • The COMPATIBLE parameter of the source database must be set to 12.1.0 or higher.
    • The COMPATIBLE value of the source database must not be greater than the COMPATIBLE value of the destination database.
    • The source database must be in ARCHIVELOG mode.
    • The source database RMAN configuration for DEVICE TYPE DISK must not be set to COMPRESSED.
    • The COMPATIBLE parameter of the destination database must be set to 12.1.0 or higher.
    • The datafiles of the tablespaces to be migrated must all be online, and the tablespaces must not contain offline datafiles. The tablespaces must be READ WRITE. READ ONLY tablespaces can be migrated with the normal XTTS procedure; there is no need to use cross-platform incremental backup for them.
    • Although the preferred destination system is Linux (a certified version of 64-bit Oracle Linux or RedHat Linux), this procedure can be used with other Unix-based operating systems. However, any non-Linux operating system must be running 12.1.0.1 or later on both the source and the destination.
    • The Oracle version of the source database must be lower than or equal to that of the destination database.
  • All steps must be executed as the oracle user in the OSDBA group, using OS authentication to connect to the source and destination databases.
  • This procedure is not supported against standby or snapshot standby databases.
  • This procedure does not support multitenant databases. Enhancement bug 22570430 tracks this limitation.
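A few quick checks against these prerequisites might be (run in SQL*Plus on the source and destination as appropriate):

-- on both source and destination: COMPATIBLE must be 12.1.0 or higher
SQL> show parameter compatible

-- on the source: LOG_MODE must be ARCHIVELOG; PLATFORM_ID is also needed for xtt.properties
SQL> select log_mode, platform_id from v$database;

-- the tablespaces being moved must be ONLINE (READ WRITE)
SQL> select tablespace_name, status from dba_tablespaces where tablespace_name in ('TS1','TS2');

The RMAN compression check is covered under Known Issues below.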

Troubleshooting

Debug mode prints additional output to the screen and enables RMAN debug mode. To enable it, either run xttdriver.pl with the -d option or set the environment variable XTTDEBUG=1 before running xttdriver.pl. The option accepts three levels, -d [1|2|3]; level 3 prints the most information.
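For example, debug can be enabled either through the environment variable or on the command line (--bkpincr is simply the step being debugged here):

[oracle@source]$ export XTTDEBUG=1
[oracle@source]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl --bkpincr

or

[oracle@source]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl -d 3 --bkpincr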

Known Issues

  1. If the roll forward phase (xttdriver.pl -r) fails with the error below, check whether RMAN DEVICE TYPE DISK has been configured as COMPRESSED (a quick way to check and reset this is shown after the error listing):

Entering RollForward
After applySetDataFile
Done: applyDataFileTo
Done: RestoreSetPiece
DECLARE
*
ERROR at line 1:
ORA-19624: operation failed, retry possible
ORA-19870: error while restoring backup piece
/dbfs_direct/FS1/xtts/incrementals/xtts_incr_backup
ORA-19608: /dbfs_direct/FS1/xtts/incrementals/xtts_incr_backup is not a backup
piece
ORA-19837: invalid blocksize 0 in backup piece header
ORA-06512: at "SYS.X$DBMS_BACKUP_RESTORE", line 2338
ORA-06512: at line 40
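One way to check for and clear this configuration on the source, using standard RMAN commands, is:

RMAN> SHOW DEVICE TYPE;
RMAN> CONFIGURE DEVICE TYPE DISK BACKUP TYPE TO BACKUPSET;

If SHOW DEVICE TYPE reports COMPRESSED BACKUPSET for DISK, the CONFIGURE command above resets it to an uncompressed backupset before the incremental backup is retried.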

  2. If ASM is used on both the source and the destination, see the note on aliases created on the destination when XTTS uses ASM on both source and destination (Note 2351123.1).

  3. The presence of a glogin.sql on either the source or the destination can cause syntax errors.

  Also see Known Issues for Cross Platform Transportable Tablespaces XTTS (Document 2311677.1) for other known issues.

Note: We recommend running this procedure against the primary database, open in read write mode. However, if it is mandatory, the procedure can be performed against a standby database with version 3 of the scripts:

1. In the xtt.properties file, uncomment:

allowstandby = 1

2. All steps remain the same, except in Phase 4, the Transport phase: Data Pump must be run against the primary database, because it cannot run against a read-only (standby) database.

3. The changes required in Phase 4 (Transport phase) are as follows:

a. Step 4.1 – Make the source tablespaces READ ONLY in the PRIMARY database:

SQL> alter tablespace test1 read only;
SQL> alter system archive log current;

Note: Make sure the standby has received the redo. If the datafiles differ, the tablespace plug-in returns an error such as:
ORA-39123: Data Pump transportable tablespace job aborted
ORA-19722: datafile /u01/oradata/convert/TEST1_5.xtf is an incorrect version

b. Step 4.2 – Create the final incremental backup, transfer, convert, and apply it to the destination datafiles:

<与此note相同的步骤 – same steps as in this note>

c. Step 4.3 – In the destination database, create a database link that connects to the PRIMARY:

SQL> create public database link primarylink connect to system identified by manager using '<连接字符串 (connect string)>';

Test the link:

SQL>select db_name, database_role from v$database@primarylink;

The role returned should be PRIMARY.

d. Step 4.4 – Import the object metadata into the destination database:

d1. Run:
[oracle@dest]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl -e

d2. Modify the command created in xttplugin.txt to include the link to the primary. For example, in my test the command used
was:

impdp directory=DATA_PUMP_DIR logfile=tts_imp.log \
network_link=primarylink transport_full_check=no \
transport_tablespaces=TEST1 \
transport_datafiles='/u01/oradata/convert/TEST1_5.xtf'

Note: In my test the standby was in READ ONLY WITH APPLY mode. Between each incremental backup, data was changed on the primary. The standby was checked to make sure it had received the redo from the primary.



Reducing Downtime Using Cross-Platform Incremental Backup and Transportable Tablespaces

The XTTS with cross-platform incremental backup procedure is divided into the following phases:

  • Phase 1 – Initial setup
  • Phase 2 – Prepare phase
  • Phase 3 – Roll forward phase
  • Phase 4 – Final incremental backup
  • Phase 5 – Transport phase: import metadata
  • Phase 6 – Validate the data
  • Phase 7 – Cleanup

Conventions

  • All commands use bash syntax.
  • A command shown with the prompt [oracle@source]$ is run on the source system as the oracle user.
  • A command shown with the prompt [oracle@dest]$ is run on the destination system as the oracle user.

Phase 1 – Initial Setup

Perform the following steps to configure the environment for cross-platform incremental backup:

Step 1.1 – Install the destination database software and create the destination database

Install the Oracle database software on the destination system; it must be Oracle 12c.

On the destination system, identify (or create) a database into which the tablespaces will be imported, and create the users required by the transported tablespaces.

As with normal TTS, make sure the users that own objects in the transported tablespaces already exist in the destination database.
Step 1.2 – Identify the tablespaces to be transported

Identify the tablespaces to be migrated on the source database. In this example we use tablespaces TS1 and TS2. As mentioned earlier, the transportable tablespace prerequisites and considerations must be verified carefully.
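A simple sizing query, using the example tablespaces TS1 and TS2, helps estimate transfer times and the space needed for the staging areas:

SQL> select tablespace_name, round(sum(bytes)/1024/1024/1024,1) gb
     from dba_data_files
     where tablespace_name in ('TS1','TS2')
     group by tablespace_name;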

Step 1.3 – Install the xttconvert scripts on the source system

On the source system, as the oracle software owner, download and unzip the supporting scripts attached to this document (the example below uses rman_xttconvert_v3.zip):

[oracle@source]$ pwd
/home/oracle/xtt

[oracle@source]$ unzip rman_xttconvert_v3.zip
Archive: rman_xttconvert_v3.zip
inflating: xtt.properties
inflating: xttcnvrtbkupdest.sql
inflating: xttdbopen.sql
inflating: xttdriver.pl
inflating: xttprep.tmpl
extracting: xttstartupnomount.sql

Step 1.4 – Create the necessary directories
  1. On the source system:

    • A directory to hold the backups, defined by the backupformat parameter in xtt.properties.
  2. On the destination system:
    • A staging area, defined by the stageondest parameter in xtt.properties.
    • The datafile location, defined by the storageondest parameter in xtt.properties.
Step 1.5 – Configure xtt.properties on the source system

Edit the xtt.properties file on the source system to match your configuration. For more information about the parameters in this file, see "Description of Parameters in Configuration File xtt.properties" in the appendix of this document. Only the following parameters are required for this procedure; the rest are optional or exist for backward compatibility and can be ignored:

    • tablespaces
    • platformid
    • backupformat
    • stageondest
    • storageondest
Step 1.6 – Copy the xttconvert scripts and xtt.properties to the destination system

As the oracle software owner, copy all the xttconvert scripts and the edited xtt.properties file to the destination system:

[oracle@source]$ scp -r /home/oracle/xtt oracle@dest:/home/oracle/xtt
Step 1.7 – Set the TMPDIR environment variable

In the shell environment on both the source and destination systems, set the TMPDIR environment variable to the directory containing the supporting scripts, and run the Perl script xttdriver.pl from that same shell. If TMPDIR is not set, output files are written to /tmp, and input files are also expected in /tmp.

[oracle@source]$ export TMPDIR=/home/oracle/xtt

[oracle@dest]$ export TMPDIR=/home/oracle/xtt

Phase 2 – Prepare Phase

In the prepare phase, all datafiles of the tablespaces to be migrated are backed up on the source database; the backups are transferred to the destination system and restored using the xttdriver.pl script.

NOTE:  For a large number of files, using dbms_file_transfer (see Phase 2 in Note 1389592.1) has been found to be the fastest method for transferring datafiles to the destination.  The method outlined in the following article also applies to 12c databases:
11G - Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup (Doc ID 1389592.1). 
Step 2.1 – Take the backup on the source database

On the source system, as the oracle user with the environment (ORACLE_HOME and ORACLE_SID environment variables) pointing to the source database, take the backup as follows:

[oracle@source]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl --backup
Step 2.2 – Transfer the following files to the destination system
  • The backups created in the backupformat directory on the source system must be transferred to the stageondest directory on the destination system.

In the example below, scp is used to transfer the level 0 backups from the source system to the destination system:

[oracle@source]$ scp /backupformat/* oracle@dest:/stageondest
  • The following files must be copied from $TMPDIR on the source system to $TMPDIR on the destination system:
    • tsbkupmap.txt
    • xttnewdatafiles.txt
Step 2.3 – Restore the datafiles on the destination system

On the destination system, as the oracle user with the environment (ORACLE_HOME and ORACLE_SID environment variables) pointing to the destination database, restore the datafiles as follows:

[oracle@dest]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl --restore

The datafiles are placed in the directory defined by storageondest on the destination system.

Phase 3 – Roll Forward Phase

In this phase, an incremental backup is created on the source database, transferred to the destination system, converted to the destination endian format, and applied to the destination datafile copies to roll them forward. This phase may be run many times. Each incremental backup takes less time than the previous one and brings the destination datafile copies closer to the source database. The source data remains fully accessible during this phase.

Step 3.1 – Create an incremental backup of the tablespaces being transported on the source database

On the source system, as the oracle user with the environment (ORACLE_HOME and ORACLE_SID environment variables) pointing to the source database, create the incremental backup as follows:

[oracle@source]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl --bkpincr

This step generates incremental backups for all tablespaces listed in xtt.properties. It also creates the following files, which must be transferred to the destination system together with the backups:

  • xttplan.txt
  • tsbkupmap.txt
  • incrbackups.txt
Step 3.2 – Transfer the incremental backups to the destination system

Transfer the incremental backups created in the previous step, and the other required files, to the stageondest location on the destination system. The list of backup files to copy is found in the incrbackups.txt file on the source system:

[oracle@source]$ scp `cat incrbackups.txt` oracle@dest:/stageondest

[oracle@source]$ scp xttplan.txt oracle@dest:/home/oracle/xtt
[oracle@source]$ scp tsbkupmap.txt oracle@dest:/home/oracle/xtt
[oracle@source]$ scp incrbackups.txt oracle@dest:/home/oracle/xtt

If the backupformat location on the source system and the stageondest location on the destination system refer to the same NFS storage, the backup files do not need to be copied again, because they are already in the expected location on the destination system.

The other files (xttplan.txt, tsbkupmap.txt, incrbackups.txt) must still be copied for every incremental backup, because their contents change each time step 3.4 is executed.

Step 3.3 – Apply the incremental backups to the datafile copies on the destination system

On the destination system, as the oracle user with the environment (ORACLE_HOME and ORACLE_SID environment variables) pointing to the destination database, roll the datafiles forward as follows:

[oracle@dest]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl --recover

The roll forward step connects to the destination database and applies the incremental backups to the datafiles of the tablespaces being transported.

Note: xttplan.txt and tsbkupmap.txt must be copied each time this step is executed, because their contents change with every iteration.
Step 3.4 – Determine the FROM_SCN for the next incremental backup

For the next incremental backup, on the source system, as the oracle user with the environment (ORACLE_HOME and ORACLE_SID environment variables) pointing to the source database, determine the new FROM_SCN as follows:

[oracle@source]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl -s

This calculates the next FROM_SCN, records it in the xttplan.txt file, and uses that SCN when the next incremental backup is created.

Step 3.5 – Either repeat the roll forward phase (steps 3.1 – 3.4) or proceed to Phase 4 – Final incremental backup

At this point there are two choices:

  1. If the data in the destination database needs to be brought closer in sync with the production database, repeat the roll forward phase, starting with step 3.1.
  2. If the destination system is already close enough to the source database, proceed to the transport phase.

Phase 4 – Final Incremental Backup

In this phase, the source data is made READ ONLY and the destination is brought fully in sync with the source database by one final incremental backup. The normal transportable tablespace procedure is then used to export the object metadata from the source database and import it into the destination database. The data remains in READ ONLY mode until the end of this phase.

Step 4.1 – Make the source tablespaces READ ONLY in the source database

On the source system, as the oracle user with the environment (ORACLE_HOME and ORACLE_SID environment variables) pointing to the source database, make the tablespaces being transported READ ONLY.

system@source/prod SQL> alter tablespace TS1 read only;

Tablespace altered.

system@source/prod SQL> alter tablespace TS2 read only;

Tablespace altered.

Step 4.2 – Create the final incremental backup and transfer the related files to the destination system

The final incremental backup is created with the "--bkpexport" option; the resulting files are then transferred to the destination system:

[oracle@source]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl --bkpexport
[oracle@source]$ scp `cat incrbackups.txt` oracle@dest:/stageondest
[oracle@source]$ scp xttplan.txt oracle@dest:/home/oracle/xtt
[oracle@source]$ scp tsbkupmap.txt oracle@dest:/home/oracle/xtt
[oracle@source]$ scp incrbackups.txt oracle@dest:/home/oracle/xtt

Step 4.3 – Apply the final incremental backup on the destination system

The final incremental backup must be applied to the destination datafiles using "--resincrdmp":

[oracle@dest]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl --resincrdmp 

This step applies the final incremental backup to the destination datafiles and also generates a dump file and a script file, xttplugin.txt, for Phase 5A.

After the object metadata has been exported from the source database, the source tablespaces can be made READ WRITE again if desired (note that this may cause the destination and source data to diverge).

Phase 5 – Transport Phase: Import the Object Metadata into the Destination Database

In this phase, the tablespaces are plugged into the destination database. There are two options: the first is to import using the dump file created in step 4.3; the second is to import over a network link between the two databases.

Step 5A – Import using the existing dump file
Step 5A.1 – Create the Data Pump directory and grant privileges

Data Pump looks for and generates dump files in a specific directory object. Either copy the '.dmp' file into an existing Data Pump directory, or create a new directory object pointing to the directory where the '.dmp' file currently resides.

SYS@DESTDB> create directory dpump_tts as '/home/oracle/destination/convert';

The relevant privileges on this directory must be granted to the user performing the import:

SYS@DESTDB> GRANT READ, WRITE ON DIRECTORY dpump_tts TO system;
Step 5A.2 – Modify and run the impdp command:

To perform the tablespace transport, edit the import command file xttplugin.txt (generated in step 4.3) and replace the DIRECTORY import parameter with a value appropriate for your environment.

The following is an example import:

[oracle@dest]$ impdp system/manager directory=dpump_tts \
> logfile=tts_imp.log \
> dumpfile=impdp3925_641.dmp \
> transport_datafiles='/u01/oradata/DESTDB/o1_mf_ts1_bngv18vm_.dbf','/u01/oradata/DESTDB/o1_mf_ts2_bngv229g_.dbf'
Step 5B – Import over a database link

Alternatively, you can import the metadata into the destination database over a database link, using the following steps.

Step 5B.1 – Generate a new xttplugin.txt for the network import

On the destination system, as the oracle user with the environment (ORACLE_HOME and ORACLE_SID environment variables) pointing to the destination database, run the following to generate the Data Pump TTS command:

[oracle@dest]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl -e

This generates a sample Data Pump network_link import command with the TRANSPORT_TABLESPACES and TRANSPORT_DATAFILES parameters correctly set. A Data Pump export file is also created.

Note: this command overwrites the xttplugin.txt file that is also needed in Step 5A.
Step 5B.2 – Create a database link on the destination system

Connect to the destination database and create a database link to the source database, for example:

SQL@dest> create public database link ttslink connect to system identified by <password> using '<tns_to_source>';

Verify that the database link can connect to the source database:

SQL@dest> select name from v$database@ttslink;
Step 5B.3 – Modify and run the impdp command

No new dump file needs to be generated. To perform the tablespace transport, edit the import command file xttplugin.txt (generated in step 5B.1) and replace the DIRECTORY, LOGFILE, and NETWORK_LINK import parameters with values appropriate for your environment.

The following is an example network-mode import command:

[oracle@dest]$ impdp directory=DATA_PUMP_DIR logfile=tts_imp.log network_link=ttslink \ 
transport_full_check=no \ 
transport_tablespaces=TS1,TS2 \ 
transport_datafiles='+DATA/prod/datafile/ts1.285.771686721', \ 
'+DATA/prod/datafile/ts2.286.771686723', \ 
'+DATA/prod/datafile/ts2.287.771686743'
As with normal TTS, the users that own the objects being imported must exist in the destination database before the import is run.

Resources:

Phase 6 – Validate the Data

Step 6.1 – Check the tablespaces for corruption

At this step, the transported data is still READ ONLY in the destination database; use the application to verify that the data is correct.

In addition, RMAN VALIDATE TABLESPACE can be used to check for logical/physical corruption:

RMAN> validate tablespace TS1, TS2 check logical;
Step 6.2 – Make the tablespaces READ WRITE on the destination system

The final step is to make the tablespaces READ WRITE in the destination database:

system@dest/prod SQL> alter tablespace TS1 read write;

Tablespace altered.

system@dest/prod SQL> alter tablespace TS2 read write;

Tablespace altered.

Phase 7 – Cleanup

If a separate temporary instance was created during the migration to convert the incremental backups, it can now be shut down and removed.

Files created during the process that are no longer needed can also be removed, including:

  • the backupformat location on the source system
  • the stageondest location on the destination system
  • the $TMPDIR locations on the source and destination systems (note that /tmp itself must not be removed)

Appendix

Description of Perl Script xttdriver.pl Options

The following table describes the options of the main supporting script xttdriver.pl.

Option Description
--backup 

Takes a level 0 backup of the datafiles belonging to the selected tablespaces. The backups are written to the directory defined by the "backupformat" parameter in xtt.properties and must be copied to the directory on the destination system defined by the "stageondest" parameter. The two other files generated, tsbkupmap.txt and xttnewdatafiles.txt, must also be copied to the temporary directory on the destination system defined by TMPDIR.

--restore 

Restores and converts the datafiles from the backups staged in the "stageondest" directory on the destination system.

--bkpincr

Creates an incremental backup and places it in the directory defined by the "backupformat" parameter on the source system. It also generates the "incrbackups.txt" file, which lists the backups created. This file, together with the "tsbkupmap.txt" file, must be copied to the directory defined by "stageondest" on the destination system.

--recover 

Applies the incremental backups to the datafiles already created on the destination system.

-s  Run on the source system (with ORACLE_HOME and ORACLE_SID set to the source database), one or more times, to determine the new FROM_SCN. This option calculates the next FROM_SCN and records it in the xttplan.txt file; that SCN is used when the next incremental backup is created in step 3.1. It also maps the new FROM_SCN to wall clock time to show how far behind the next incremental backup will be.
--bkpexport

Creates the final incremental backup and the dump file needed to import the datafiles. The incremental backup is placed in the directory defined by "backupformat", and its path can be found in the "incrbackups.txt" file. The "tsbkupmap.txt" file is also created; these files must all be copied to the destination system.

--resincrdmp

Restores and applies the final incremental backup, and restores the dump file into the temporary directory defined by "TMPDIR" (which can be given on the command line or set as an environment variable); this dump file is used during the import.

-e  Run once on the destination system (with ORACLE_HOME and ORACLE_SID set to the destination database) to generate the Data Pump command.

This step creates the template of the Data Pump command that uses a database link to import the metadata of the objects in the tablespaces being transported.

-d debug Enables debug mode for the xttdriver.pl script and the RMAN commands it runs. Debug mode can also be enabled by setting the environment variable XTTDEBUG=1. Three debug levels are supported (1, 2, 3), for example xttdriver.pl -d 3.

Description of Parameters in Configuration File xtt.properties

The parameters listed in the table below must be defined in the xtt.properties file for the 12c procedure; other parameters may be present for backward compatibility:

Parameter Description Example Setting
tablespaces Comma-separated list of tablespaces to transport. Must be on a single line; multiple lines are not supported. tablespaces=TS1,TS2
platformid Source database platform id, obtained from V$DATABASE.PLATFORM_ID. platformid=2
storageondest

Directory object in the destination database that defines where the destination datafiles will be created.

storageondest=DESTDIR
backupformat Location on the source system where the backups are placed. This directory must have enough free space to hold the level 0 backup and all subsequent incremental backups. It may be an NFS filesystem shared with the destination system, in which case the destination's stageondest parameter should point to the same NFS location. backupformat=/stage_source
stageondest Location on the destination system where the backups transferred from the source system are placed. It must have enough free space to hold the level 0 backup and all subsequent incremental backups. It may be an NFS filesystem shared with the source system, in which case it should point to the same NFS location; see Note 359515.1 for NFS mount options. stageondest=/stage_dest
asm_home ORACLE_HOME of the ASM instance running on the destination system. Note: if backupondest is set to a filesystem location (rather than ASM), comment out both asm_home and asm_sid. asm_home=/u01/app/11.2.0.4/grid
asm_sid ORACLE_SID of the ASM instance running on the destination system. asm_sid=+ASM1
parallel

Defines the degree of parallelism used for the backups on the source system.

parallel=3
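Putting the required parameters from Step 1.5 together, a minimal xtt.properties for this flow might look like the sketch below (the values are simply the example settings from the table above and must be adjusted for your site):

tablespaces=TS1,TS2
platformid=2
backupformat=/stage_source
stageondest=/stage_dest
storageondest=DESTDIR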

Change History

Change Date

rman-xttconvert_2.0.zip released – adds support for multiple source and destination directories.

rman-xttconvert_v3.zip released –

2015-May-20

##############XTTS

OVERVIEW

We are all aware of the issues surrounding cross-platform database migrations.  The endian difference between RISC Unix platforms (big endian) and x86 platforms (little endian) means the data must be converted before it is usable by Oracle.  There are three methodologies possible when converting the endian-ness of a database.

  • Traditional export using exp/imp or Data Pump
  • Cross platform transportable tablespaces (XTTS)
  • Logical replication with Streams or Golden Gate

Each method has its roadblocks.  Exporting the data is fairly simple, but requires a lot of downtime for large databases.  XTTS too requires a lot of downtime for large databases, although usually less than exporting and importing.  Logical replication offers less downtime, but Golden Gate is a separately licensed product that comes with a hefty price tag.

As of 11.2.0.4 there is an additional method, which is based on traditional XTTS.  This is XTTS along with cross platform incremental backups.

TRADITIONAL XTTS

The steps involved with traditional XTTS are as follows:

  • Make the source datafiles read only (downtime begins)
  • Transfer datafiles to the destination
  • Convert datafiles to new endian format
  • Export metadata from the source
  • Import metadata on the destination
  • Make tablespaces on destination read/write

The problem is that the source datafiles must be made read only before the copy to the target system.  This copy can take a very long time for a large, multi-terabyte database.

XTTS WITH CROSS PLATFORM INCREMENTAL BACKUP

The main difference with this procedure is that the initial copy of the datafiles occurs while the source database remains online.  Then, incremental backups are taken of the source database, transferred to the destination, converted to the new endian-ness, and applied to the destination.  Here are the steps.

  • Transfer source datafiles to the destination (source database remains online)
  • Convert datafiles to new endian format
  • Create an incremental backup of the source tablespaces
  • Transfer the incremental backup to the destination
  • Convert the incremental backup to new endian format
  • Apply the incremental backup to the destination database
  • Repeat the incremental backup steps as needed
  • Place the source database into read only mode
  • Repeat the incremental backup steps
  • Export metadata from the source
  • Import metadata on the destination
  • Make tablespaces on destination read/write

Cross Platform Incremental Caveat
The one caveat to this process is that the functionality of converting an incremental backup is new in 11.2.0.4.  This means that an 11.2.0.4 Oracle home must exist on the destination system to perform the conversion of the incremental backup.  That does not mean that the destination database must be 11.2.0.4.  The 11.2.0.4 home can be used for the conversion even if the destination database is a lower version.

CONCLUSION

In this post I described, in more detail, one of the three methodologies possible when converting the endian-ness of a database – cross platform transportable tablespaces (XTTS). Hopefully you now have a better understanding of the pros and cons of this method and whether or not it’s a good fit for your Oracle environment.

OVERVIEW

In part one of this post, we described the high level concept of using Oracle’s new cross platform incremental backup along with transportable tablespaces. These tools allow a DBA to perform cross platform transportable tablespace operations with the source database online, and then later to apply one or more incremental backups of the source database to roll the destination database forward. This can substantially reduce the downtime of a cross platform transportable tablespace operation.

In part two of this post, we will outline the specific steps required to perform this migration using the new cross platform incremental backup functionality.

COMPLETE MIGRATION STEPS

The following are the high level steps necessary to complete a cross platform transportable tablespace migration with cross platform incremental backup using the Oracle scripts outlined in MOS note 1389592.1 (MOS note 2005729.1 for 12c):

  • Install 11.2.0.4 Oracle home, if not already installed
  • Initial configuration of the Oracle perl scripts
  • Turn on block change tracking, if not already configured
  • Transfer and convert datafiles from source to target
  • Perform incremental backup and apply to target
  • Repeat incremental backup and apply
  • Put tablespaces into read only mode on the source
  • Perform final incremental backup and apply to target
  • Perform TTS Data Pump import on the target
  • Perform metadata only export on the source
  • Perform metadata only import on the target
  • Audit objects between source and target to ensure everything came over
  • Set the tablespaces to read-write on the target

Configure Oracle Perl Scripts
The first step in using this methodology is to download the zip file containing the scripts from MOS note 1389592.1. The current version of the scripts is located in file rman_xttconvert_2.0.zip. Unzip the file on both the source and target systems and then configure the xtt.properties file on both nodes. This is the parameter file that controls the operations. The comments in the file describe how to modify the entries.

Transfer and Convert the Datafiles
Run the Oracle supplied perl script to copy the datafiles to the target system.

nohup perl xttdriver.pl -p > prepare.log 2>&1 &
nohup perl xttdriver.pl -c > convert.log 2>&1 &

Perform the Incremental Backup
Run the script to take the incremental backup on the source.

nohup perl xttdriver.pl -i > incr_bkup.log 2>&1 &

Copy the following files to the target and apply the incremental backup.

tsbkupmap.txt & xttplan.txt
nohup perl xttdriver.pl -r > incr_apply.log 2>&1 &

Determine the starting SCN for the next incremental backup.

nohup perl xttdriver.pl -s > next_scn.log 2>&1 &

Repeat the incremental backup as many times as necessary.

Transport the Tablespaces
Complete the remaining steps to finish the migration.

1. Place the tablespaces into read only mode on the source.

alter tablespace APP_DATA read only;
alter tablespace APP_IDX read only;
alter tablespace APP_DATA2 read only;

2. Repeat the incremental backup and incremental apply steps from above.

3. Run a transportable tablespace Data Pump import over the database link on the target (the metadata is pulled from the source via NETWORK_LINK, so no dump file is needed).

nohup impdp \"/ as sysdba\" parfile=migrate_tts.par > migrate_tts.log 2>&1 &
## migrate_tts.par
DIRECTORY=MIG_DIR
LOGFILE=MIG_TTS.log
NETWORK_LINK=ttslink
TRANSPORT_FULL_CHECK=no
TRANSPORT_TABLESPACES=APP_DATA,APP_IDX,APP_DATA2
TRANSPORT_DATAFILES='/oradata/APP/APP_DATA_01.dbf','/oradata/APP/APP_IDX_01.dbf',…

4. Run a metadata only Data Pump export from the source.

nohup expdp \"/ as sysdba\" parfile=migrate_meta.par > migrate_meta.log 2>&1 &
## migrate_meta.par
DIRECTORY = MIG_DIR
DUMPFILE = MIGRATE_META.dmp
LOGFILE = MIGRATE_META.log
FULL = Y
PARALLEL = 8
CONTENT = METADATA_ONLY
JOB_NAME = MIGRATE_META
EXCLUDE = STATISTICS,USER,ROLE,TABLESPACE,DIRECTORY,TRIGGERS,INDEXES,TABLES,CONSTRAINTS
EXCLUDE = SCHEMA:"IN ('SYSTEM','ANONYMOUS','DBSNMP','DIP','EXFSYS','MDSYS','MGMT_VIEW','ORACLE_OCM','ORDPLUGINS',
'ORDSYS','OUTLN','SI_INFORMTN_SCHEMA','SYSMAN','TSMSYS','WMSYS','XDB','PERFSTAT',
'OLAPSYS','APEX_030200','APEX_PUBLIC_USER','APPQOSSYS','FLOWS_FILES','CTXSYS','XS$NULL')"

5. Run the metadata only Data Pump import on the target.

nohup impdp \"/ as sysdba\" parfile=migrate_meta_imp.par > migrate_meta_imp.log 2>&1 &
## migrate_meta_imp.par
DIRECTORY = MIG_DIR
DUMPFILE = MIGRATE_META.dmp
LOGFILE = MIGRATE_META_IMP.log
FULL = Y
PARALLEL = 8
JOB_NAME = MIGRATE_META_IMP

6. Reconcile the source and target databases to ensure that all objects came over successfully.

set lines 132 pages 500 trimspool on echo off verify off feedback off

col object_name format a30

select owner, object_type, object_name, status
from dba_objects
where owner not in ('SYS', 'SYSTEM', 'TOAD', 'SCOTT', 'OUTLN', 'MSDB1', 'DBSNMP',
'PUBLIC', 'XDB', 'WMSYS', 'WKSYS', 'ORDSYS', 'OLAPSYS', 'ORDPLUGINS', 'ODM', 'ODM_MTR', 'MDSYS', 'CTXSYS')
minus
select owner, object_type, object_name, status
from dba_objects@ttslink
where owner not in ('SYS', 'SYSTEM', 'TOAD', 'SCOTT', 'OUTLN', 'MSDB1', 'DBSNMP',
'PUBLIC', 'XDB', 'WMSYS', 'WKSYS', 'ORDSYS', 'OLAPSYS', 'ORDPLUGINS', 'ODM', 'ODM_MTR', 'MDSYS', 'CTXSYS')
order by 1,2,3;

set echo on verify on feedback on

7. Set the tablespaces to read write on the target.

alter tablespace APP_DATA read write;
alter tablespace APP_IDX read write;
alter tablespace APP_DATA2 read write;

CONCLUSION

In this blog, we have demonstrated the steps for using cross platform incremental backup to reduce downtime for large dataset platform migrations without the need for additional licensed products.

###refer http://blog.itpub.net/26015009/viewspace-2150312/

Migrating a database from AIX to Linux; the Oracle version is 11.2.0.4.
The following can be used to create an incremental convert instance named xtt, with the incremental convert home /u01/app/oracle/product/11.2.0/db:

[oracle@jyrac1 dbs]$ export ORACLE_HOME=/u01/app/oracle/product/11.2.0/db/
[oracle@jyrac1 dbs]$ export ORACLE_SID=xtt
[oracle@jyrac1 dbs]$ cat << EOF > $ORACLE_HOME/dbs/init$ORACLE_SID.ora
> db_name=xtt
> compatible=11.2.0.4.0
> EOF
[oracle@jyrac1 dbs]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Fri Aug 18 10:15:02 2017
Copyright (c) 1982, 2013, Oracle. All rights reserved.
Connected to an idle instance.

SQL> startup nomount
ORACLE instance started.

Total System Global Area  296493056 bytes
Fixed Size                  2252584 bytes
Variable Size             239075544 bytes
Database Buffers           50331648 bytes
Redo Buffers                4833280 bytes

The source database directory object refers to the directory where the source datafiles currently reside. For example, the following creates a directory object pointing to the datafile directory /oracle11/oradata/jycs/jycs/; connect to the source database and run:

Connected to Oracle Database 11g Enterprise Edition Release 11.2.0.4.0
Connected as ldjc@129_2 SQL> select name from v$datafile;
NAME
--------------------------------------------------------------------------------
/oracle11/oradata/jycs/jycs/system01.dbf
/oracle11/oradata/jycs/jycs/sysaux01.dbf
/oracle11/oradata/jycs/jycs/undotbs01.dbf
/oracle11/oradata/jycs/jycs/users01.dbf
/oracle11/oradata/jycs/jycs/example01.dbf
/oracle11/oradata/jycs/jycs/cdzj01
/oracle11/oradata/jycs/jycs/ldjc01
7 rows selected SQL> create directory sourcedir as '/oracle11/oradata/jycs/jycs';
Directory created SQL> select platform_id from v$database;
PLATFORM_ID
-----------
6

The destination database directory object refers to the directory where the destination datafiles will be stored. This is the final datafile location for the destination database, +DATADG/jyrac/datafile/; connect to the destination database and run:

Connected to Oracle Database 11g Enterprise Edition Release 11.2.0.4.0
Connected as sys@jyrac AS SYSDBA SQL> select name from v$datafile;
NAME
--------------------------------------------------------------------------------
+DATADG/jyrac/datafile/system.259.930413057
+DATADG/jyrac/datafile/sysaux.258.930413055
+DATADG/jyrac/datafile/undotbs1.262.930413057
+DATADG/jyrac/datafile/users.263.930413057
+DATADG/jyrac/datafile/example.260.930413057
+DATADG/jyrac/datafile/undotbs2.261.930413057
+DATADG/jyrac/datafile/test01.dbf
+DATADG/jyrac/datafile/sales_test_01.dbf
+DATADG/jyrac/datafile/emp_test_01.dbf
+DATADG/jyrac/datafile/orders_test_01.dbf
10 rows selected SQL> create directory destdir as '+DATADG/jyrac/datafile';
Directory created

In the destination database, create a database link to the source database. For example, to create a database link named ttslink, run:

SQL> create public database link ttslink
2 connect to system identified by "xxzx7817600"
3 using '(DESCRIPTION =
4 (ADDRESS_LIST =
5 (ADDRESS = (PROTOCOL = TCP)(HOST =10.138.129.2)(PORT = 1521))
6 )
7 (CONNECT_DATA =
8 (SERVER = DEDICATED)
9 (SERVICE_NAME =jycs)
10 )
11 )'; Database link created.

After creating the database link, verify that the source database can be reached through it:

SQL> select * from dual@ttslink;

D
-
X

Create the staging directories on the source and destination systems; they will be used as the values of the backupformat parameter (the directory on the source system that holds the incremental backups) and the backupondest parameter (the directory on the destination system that holds the converted incremental backups) in xtt.properties. If the RMAN backup method is used, directories are also needed for the dfcopydir parameter (the directory on the source system that holds the datafile copies; RMAN backup method only) and the stageondest parameter (the directory on the destination system that holds the datafile copies and incremental backups transferred from the source; RMAN backup method only).

On the source system, run the following commands to create the backupformat directory (/oracle11/backup) and the dfcopydir directory (/oracle11/dfcopydir):

IBMP740-2:/oracle11$mkdir backup
IBMP740-2:/oracle11$mkdir dfcopydir

On the destination system, run the following commands to create the backupondest directory (+DATADG/backup) and the stageondest directory (/u01/xtts):

ASMCMD [+datadg] > mkdir backup

If ASM is used for the location given by the backupondest parameter in xtt.properties, the instance's compatible parameter value must be equal to or greater than the compatible.rdbms value of the ASM diskgroup.

[grid@jyrac1 ~]$ asmcmd lsattr -G DATADG -l
Name Value
access_control.enabled false
access_control.umask 026
au_size 1048576
cell.smart_scan_capable FALSE
compatible.asm 11.2.0.0.0
compatible.rdbms 11.2.0.0.0
disk_repair_time 4.5 H
sector_size 512
[root@jyrac1 u01]# mkdir xtts
[root@jyrac1 u01]# chown -R oracle:oinstall xtts
[root@jyrac1 u01]# chmod 777 xtts

Install the xttconvert scripts on the source system.
On the source system, as the Oracle software owner, download and unzip the scripts:

IBMP740-2:/oracle11/xtts_script$unzip rman_xttconvert_v3.zip
Archive: rman_xttconvert_v3.zip
inflating: xtt.properties
inflating: xttcnvrtbkupdest.sql
inflating: xttdbopen.sql
inflating: xttdriver.pl
inflating: xttprep.tmpl
extracting: xttstartupnomount.sql IBMP740-2:/oracle11/xtts_script$ls -lrt
total 416
-rw-r--r-- 1 oracle11 oinstall 1390 May 24 16:57 xttcnvrtbkupdest.sql
-rw-r--r-- 1 oracle11 oinstall 52 May 24 16:57 xttstartupnomount.sql
-rw-r--r-- 1 oracle11 oinstall 11710 May 24 16:57 xttprep.tmpl
-rw-r--r-- 1 oracle11 oinstall 139331 May 24 16:57 xttdriver.pl
-rw-r--r-- 1 oracle11 oinstall 71 May 24 16:57 xttdbopen.sql
-rw-r--r-- 1 oracle11 oinstall 7969 Jun 05 08:47 xtt.properties
-rw-r----- 1 oracle11 oinstall 33949 Aug 18 09:26 rman_xttconvert_v3.zip

Configure the xtt.properties file on the source system:

IBMP740-2:/oracle11/xtts_script$vi xtt.properties
tablespaces=CDZJ,LDJC
platformid=6
srcdir=SOURCEDIR
dstdir=DESTDIR
srclink=ttslink
#dfcopydir=/oracle11/dfcopydir
backupformat=/oracle11/backup
stageondest=/u01/xtts
backupondest=+DATADG/backup
#storageondest=+DATADG/jyrac/datafile/
cnvinst_home=/oracle11/app/oracle/product/11.2.0/db
cnvinst_sid=xtt
asm_home=/u01/app/product/11.2.0/crs
asm_sid=+ASM1

Copy the conversion scripts and the xtt.properties file from the source system to the destination system:

[oracle@jyrac1 xtts_script]$ ftp 10.138.129.2
Connected to 10.138.129.2.
220 IBMP740-2 FTP server (Version 4.2 Mon Nov 28 14:12:02 CST 2011) ready.
502 authentication type cannot be set to GSSAPI
502 authentication type cannot be set to KERBEROS_V4
KERBEROS_V4 rejected as an authentication type
Name (10.138.129.2:oracle): oracle
331 Password required for oracle.
Password:
230-Last unsuccessful login: Wed Dec 3 10:20:09 BEIST 2014 on /dev/pts/0 from 10.138.130.31
230-Last login: Mon Aug 14 08:39:17 BEIST 2017 on /dev/pts/0 from 10.138.130.242
230 User oracle logged in.
Remote system type is UNIX.
Using binary mode to transfer files.
ftp> cd /oracle11/xtts_script
250 CWD command successful.
ftp> ls -lrt
227 Entering Passive Mode (10,138,129,2,37,50)
150 Opening data connection for /bin/ls.
total 424
-rw-r--r-- 1 oracle11 oinstall 1390 May 24 16:57 xttcnvrtbkupdest.sql
-rw-r--r-- 1 oracle11 oinstall 52 May 24 16:57 xttstartupnomount.sql
-rw-r--r-- 1 oracle11 oinstall 11710 May 24 16:57 xttprep.tmpl
-rw-r--r-- 1 oracle11 oinstall 139331 May 24 16:57 xttdriver.pl
-rw-r--r-- 1 oracle11 oinstall 71 May 24 16:57 xttdbopen.sql
-rw-r--r-- 1 oracle11 oinstall 7969 Jun 05 08:47 xtt.properties.jy
-rw-r----- 1 oracle11 oinstall 33949 Aug 18 09:26 rman_xttconvert_v3.zip
-rw-r--r-- 1 oracle11 oinstall 352 Aug 18 10:15 xtt.properties
226 Transfer complete.
ftp> lcd /u01/xtts_script
Local directory now /u01/xtts_script
ftp> bin
200 Type set to I.
ftp> get xttcnvrtbkupdest.sql
local: xttcnvrtbkupdest.sql remote: xttcnvrtbkupdest.sql
227 Entering Passive Mode (10,138,129,2,37,63)
150 Opening data connection for xttcnvrtbkupdest.sql (1390 bytes).
226 Transfer complete.
1390 bytes received in 4.8e-05 seconds (2.8e+04 Kbytes/s)
ftp> get xttstartupnomount.sql
local: xttstartupnomount.sql remote: xttstartupnomount.sql
227 Entering Passive Mode (10,138,129,2,37,66)
150 Opening data connection for xttstartupnomount.sql (52 bytes).
226 Transfer complete.
52 bytes received in 3.7e-05 seconds (1.4e+03 Kbytes/s)
ftp> get xttprep.tmpl
local: xttprep.tmpl remote: xttprep.tmpl
227 Entering Passive Mode (10,138,129,2,37,69)
150 Opening data connection for xttprep.tmpl (11710 bytes).
226 Transfer complete.
11710 bytes received in 0.00065 seconds (1.7e+04 Kbytes/s)
ftp> get xttdriver.pl
local: xttdriver.pl remote: xttdriver.pl
227 Entering Passive Mode (10,138,129,2,37,72)
150 Opening data connection for xttdriver.pl (139331 bytes).
226 Transfer complete.
139331 bytes received in 0.0026 seconds (5.3e+04 Kbytes/s)
ftp> get xttdbopen.sql
local: xttdbopen.sql remote: xttdbopen.sql
227 Entering Passive Mode (10,138,129,2,37,77)
150 Opening data connection for xttdbopen.sql (71 bytes).
226 Transfer complete.
71 bytes received in 3.9e-05 seconds (1.8e+03 Kbytes/s)
ftp> get xtt.properties
local: xtt.properties remote: xtt.properties
227 Entering Passive Mode (10,138,129,2,37,84)
150 Opening data connection for xtt.properties (352 bytes).
226 Transfer complete.
352 bytes received in 4.2e-05 seconds (8.2e+03 Kbytes/s) [oracle@jyrac1 xtts_script]$ ls -lrt
total 172
-rw-r--r-- 1 oracle oinstall 1390 Aug 18 10:38 xttcnvrtbkupdest.sql
-rw-r--r-- 1 oracle oinstall 52 Aug 18 10:38 xttstartupnomount.sql
-rw-r--r-- 1 oracle oinstall 11710 Aug 18 10:38 xttprep.tmpl
-rw-r--r-- 1 oracle oinstall 139331 Aug 18 10:38 xttdriver.pl
-rw-r--r-- 1 oracle oinstall 71 Aug 18 10:38 xttdbopen.sql
-rw-r--r-- 1 oracle oinstall 352 Aug 18 10:38 xtt.properties

On both the source and destination systems, set the environment variable TMPDIR to the directory that contains the conversion scripts; the Perl script xttdriver.pl is then run with this setting, as shown below. If TMPDIR is not set, the output files generated by the script are placed in /tmp.

IBMP740-2:/oracle11$export TMPDIR=/oracle11/xtts_script
[oracle@jyrac1 xtts_script]$ export TMPDIR=/u01/xtts_script

2. Preparation phase
In the preparation phase, the datafiles of the tablespaces being transported are transferred to the destination system and converted by running the xttdriver.pl script. Either of the following two methods can be used (a sketch of both command sequences is shown below):
1. The dbms_file_transfer method
2. The RMAN backup method

When there are many datafiles, the dbms_file_transfer method is faster than transferring the datafiles to the destination system manually.
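For orientation, a minimal sketch of the xttdriver.pl options each method uses follows (the options are described in the reference at the end of this document); this is only the skeleton of each sequence, not a complete procedure:

# dbms_file_transfer method: the datafiles are pulled over a database link and
# converted automatically on the destination
$ORACLE_HOME/perl/bin/perl xttdriver.pl -S    # on the source: writes xttnewdatafiles.txt and getfile.sql
$ORACLE_HOME/perl/bin/perl xttdriver.pl -G    # on the destination: runs getfile.sql to fetch the datafiles

# RMAN backup method: datafile copies are created on the source and converted
# explicitly on the destination
$ORACLE_HOME/perl/bin/perl xttdriver.pl -p    # on the source: creates datafile copies and rmanconvert.cmd
$ORACLE_HOME/perl/bin/perl xttdriver.pl -c    # on the destination: converts the copies to the destination endian format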

2a. Using the dbms_file_transfer method
2a.1 Run the prepare step on the source system
On the source system, log in as the Oracle software owner, set the environment variables (ORACLE_HOME and ORACLE_SID) to point to the source database, and run the following command:

IBMP740-2:/oracle11/xtts_script$export ORACLE_HOME=/oracle11/app/oracle/product/11.2.0/db
IBMP740-2:/oracle11/xtts_script$export ORACLE_SID=jycs
IBMP740-2:/oracle11/xtts_script$$ORACLE_HOME/perl/bin/perl xttdriver.pl -S
============================================================
trace file is /oracle11/xtts_script/setupgetfile_Aug18_Fri_10_21_17_169//Aug18_Fri_10_21_17_169_.log
============================================================= --------------------------------------------------------------------
Parsing properties
-------------------------------------------------------------------- --------------------------------------------------------------------
Done parsing properties
-------------------------------------------------------------------- --------------------------------------------------------------------
Checking properties
-------------------------------------------------------------------- --------------------------------------------------------------------
Done checking properties
-------------------------------------------------------------------- --------------------------------------------------------------------
Starting prepare phase
-------------------------------------------------------------------- Prepare source for Tablespaces:
'CDZJ' /u01/xtts
xttpreparesrc.sql for 'CDZJ' started at Fri Aug 18 10:21:17 2017
xttpreparesrc.sql for ended at Fri Aug 18 10:21:18 2017
Prepare source for Tablespaces:
'LDJC' /u01/xtts
xttpreparesrc.sql for 'LDJC' started at Fri Aug 18 10:21:18 2017
xttpreparesrc.sql for ended at Fri Aug 18 10:21:18 2017
Prepare source for Tablespaces:
'''' /u01/xtts
xttpreparesrc.sql for '''' started at Fri Aug 18 10:21:18 2017
xttpreparesrc.sql for ended at Fri Aug 18 10:21:18 2017
Prepare source for Tablespaces:
'''' /u01/xtts
xttpreparesrc.sql for '''' started at Fri Aug 18 10:21:18 2017
xttpreparesrc.sql for ended at Fri Aug 18 10:21:18 2017
Prepare source for Tablespaces:
'''' /u01/xtts
xttpreparesrc.sql for '''' started at Fri Aug 18 10:21:18 2017
xttpreparesrc.sql for ended at Fri Aug 18 10:21:18 2017 --------------------------------------------------------------------
Done with prepare phase
--------------------------------------------------------------------

The prepare step performs the following operations on the source system:
.Verifies that the tablespaces are online, in read write mode, and contain no offline datafiles
.Creates the following files, which are used in later steps:
xttnewdatafiles.txt
getfile.sql

IBMP740-2:/oracle11/xtts_script$cat xttnewdatafiles.txt
::CDZJ
6,DESTDIR:/cdzj01
::LDJC
7,DESTDIR:/ldjc01
IBMP740-2:/oracle11/xtts_script$cat getfile.sql
0,SOURCEDIR,cdzj01,DESTDIR,cdzj01
1,SOURCEDIR,ldjc01,DESTDIR,ldjc01

The set of tablespaces to be transported must be online, in read write mode, and must not contain offline datafiles. If one or more datafiles of the transported tablespaces are offline or read only in the source database, an error is raised. If a tablespace will remain read only for the entire duration of the migration, use conventional cross-platform transportable tablespaces instead of the cross-platform incremental backup procedure.
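Before running the prepare step, these prerequisites can be confirmed with a quick query against the source data dictionary; a minimal check (CDZJ and LDJC are the tablespace names used in this walkthrough) might look like this:

sqlplus -s / as sysdba <<'EOF'
-- the tablespaces must be ONLINE and the datafiles must not be OFFLINE or RECOVER
select tablespace_name, status from dba_tablespaces
 where tablespace_name in ('CDZJ','LDJC');
select file_name, status, online_status from dba_data_files
 where tablespace_name in ('CDZJ','LDJC');
EOF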

2a.2 Transfer the datafiles to the destination system
On the destination system, log in as the Oracle software owner, set the environment variables (ORACLE_HOME and ORACLE_SID) to point to the destination database, copy the xttnewdatafiles.txt and getfile.sql files generated in the previous step over from the source system, and then run the get-datafiles step:

[oracle@jyrac1 xtts_script]$ ftp 10.138.129.2
Connected to 10.138.129.2.
220 IBMP740-2 FTP server (Version 4.2 Mon Nov 28 14:12:02 CST 2011) ready.
502 authentication type cannot be set to GSSAPI
502 authentication type cannot be set to KERBEROS_V4
KERBEROS_V4 rejected as an authentication type
Name (10.138.129.2:oracle): oracle
331 Password required for oracle.
Password:
230-Last unsuccessful login: Wed Dec 3 10:20:09 BEIST 2014 on /dev/pts/0 from 10.138.130.31
230-Last login: Fri Aug 18 10:16:01 BEIST 2017 on ftp from ::ffff:10.138.130.151
230 User oracle logged in.
Remote system type is UNIX.
Using binary mode to transfer files.
ftp> cd /oracle11/xtts_script
250 CWD command successful.
ftp> ls -lrt
227 Entering Passive Mode (10,138,129,2,38,79)
150 Opening data connection for /bin/ls.
total 456
-rw-r--r-- 1 oracle11 oinstall 1390 May 24 16:57 xttcnvrtbkupdest.sql
-rw-r--r-- 1 oracle11 oinstall 52 May 24 16:57 xttstartupnomount.sql
-rw-r--r-- 1 oracle11 oinstall 11710 May 24 16:57 xttprep.tmpl
-rw-r--r-- 1 oracle11 oinstall 139331 May 24 16:57 xttdriver.pl
-rw-r--r-- 1 oracle11 oinstall 71 May 24 16:57 xttdbopen.sql
-rw-r--r-- 1 oracle11 oinstall 7969 Jun 05 08:47 xtt.properties.jy
-rw-r----- 1 oracle11 oinstall 33949 Aug 18 09:26 rman_xttconvert_v3.zip
-rw-r--r-- 1 oracle11 oinstall 352 Aug 18 10:15 xtt.properties
-rw-r--r-- 1 oracle11 oinstall 50 Aug 18 10:21 xttplan.txt
-rw-r--r-- 1 oracle11 oinstall 106 Aug 18 10:21 xttnewdatafiles.txt_temp
-rw-r--r-- 1 oracle11 oinstall 50 Aug 18 10:21 xttnewdatafiles.txt
drwxr-xr-x 2 oracle11 oinstall 256 Aug 18 10:21 setupgetfile_Aug18_Fri_10_21_17_169
-rw-r--r-- 1 oracle11 oinstall 68 Aug 18 10:21 getfile.sql
226 Transfer complete.
ftp> lcd /u01/xtts_script
Local directory now /u01/xtts_script
ftp> bin
200 Type set to I.
ftp> get xttnewdatafiles.txt
local: xttnewdatafiles.txt remote: xttnewdatafiles.txt
227 Entering Passive Mode (10,138,129,2,38,112)
150 Opening data connection for xttnewdatafiles.txt (50 bytes).
226 Transfer complete.
50 bytes received in 6.2e-05 seconds (7.9e+02 Kbytes/s)
ftp> get getfile.sql
local: getfile.sql remote: getfile.sql
227 Entering Passive Mode (10,138,129,2,38,115)
150 Opening data connection for getfile.sql (68 bytes).
226 Transfer complete.
68 bytes received in 4.9e-05 seconds (1.4e+03 Kbytes/s)

# MUST set environment to destination database
[oracle@jyrac1 xtts_script]$ export ORACLE_HOME=/u01/app/oracle/product/11.2.0/db
[oracle@jyrac1 xtts_script]$ export ORACLE_SID=jyrac1
[oracle@jyrac1 xtts_script]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl -G
============================================================
trace file is /u01/xtts_script/getfile_Aug18_Fri_11_03_48_564//Aug18_Fri_11_03_48_564_.log
============================================================= --------------------------------------------------------------------
Parsing properties
-------------------------------------------------------------------- --------------------------------------------------------------------
Done parsing properties
-------------------------------------------------------------------- --------------------------------------------------------------------
Checking properties
-------------------------------------------------------------------- --------------------------------------------------------------------
Done checking properties
-------------------------------------------------------------------- --------------------------------------------------------------------
Getting datafiles from source
-------------------------------------------------------------------- --------------------------------------------------------------------
Executing getfile for /u01/xtts_script/getfile_Aug18_Fri_11_03_48_564//getfile_sourcedir_cdzj01_0.sql
-------------------------------------------------------------------- --------------------------------------------------------------------
Executing getfile for /u01/xtts_script/getfile_Aug18_Fri_11_03_48_564//getfile_sourcedir_ldjc01_1.sql
-------------------------------------------------------------------- --------------------------------------------------------------------
Completed getting datafiles from source
--------------------------------------------------------------------
ASMCMD [+datadg/jyrac/datafile] > ls -lt
Type Redund Striped Time Sys Name
N ldjc01 => +DATADG/JYRAC/DATAFILE/FILE_TRANSFER.271.952340629
N cdzj01 => +DATADG/JYRAC/DATAFILE/FILE_TRANSFER.272.952340629
DATAFILE MIRROR COARSE AUG 18 11:00:00 Y FILE_TRANSFER.272.952340629
DATAFILE MIRROR COARSE AUG 18 11:00:00 Y FILE_TRANSFER.271.952340629

When this step completes, the datafiles to be transported reside in the directory on the destination system where they will ultimately live, and the endian conversion has been performed automatically. The next step is the roll forward phase.

3. Roll forward phase
First, generate some incremental data in the source database:

SQL> insert into ldjc.jy_test values(7);
1 row inserted
SQL> insert into cdzj.jy_test values(7);
1 row inserted
SQL> commit;
Commit complete
SQL> select * from ldjc.jy_test;
USER_ID
---------------------
7
1
2
3
4
5
6
7 rows selected
SQL> select * from cdzj.jy_test;
USER_ID
---------------------
7
1
2
3
4
5
6
7 rows selected

In this phase, an incremental backup is taken against the source database on the source system, the resulting backup pieces are transferred to the destination system and converted to the destination endian format, and the converted incremental backups are then applied to the previously converted datafiles to roll them forward. This phase can be repeated as many times as needed; each successful incremental backup should take less time than the previous one and brings the datafiles on the destination system closer to the contents of the source database. Throughout this phase the data being transported remains fully accessible in the source database.
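Each roll-forward iteration is the same short command sequence repeated; a condensed sketch of one iteration (the actual file transfers in this walkthrough are done over FTP) is:

# on the source system: take an incremental backup of the tablespaces in xtt.properties
$ORACLE_HOME/perl/bin/perl xttdriver.pl -i

# copy the backup pieces listed in incrbackups.txt to the stageondest directory on the
# destination, and xttplan.txt / tsbkupmap.txt to the destination script directory

# on the destination system: convert the incremental backups and apply them (roll forward)
$ORACLE_HOME/perl/bin/perl xttdriver.pl -r

# on the source system: record the FROM SCN to be used by the next incremental backup
$ORACLE_HOME/perl/bin/perl xttdriver.pl -s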

3.1 Take an incremental backup of the transported tablespaces LDJC and CDZJ on the source system
On the source system, log in as the Oracle software owner, set the environment variables (ORACLE_HOME and ORACLE_SID) to point to the source database, and run the following command to take the incremental backup:

IBMP740-2:/oracle11/xtts_script$$ORACLE_HOME/perl/bin/perl xttdriver.pl -i
============================================================
trace file is /oracle11/xtts_script/incremental_Aug18_Fri_10_56_44_606//Aug18_Fri_10_56_44_606_.log
============================================================= --------------------------------------------------------------------
Parsing properties
-------------------------------------------------------------------- --------------------------------------------------------------------
Done parsing properties
-------------------------------------------------------------------- --------------------------------------------------------------------
Checking properties
-------------------------------------------------------------------- --------------------------------------------------------------------
Done checking properties
-------------------------------------------------------------------- --------------------------------------------------------------------
Backup incremental
-------------------------------------------------------------------- Prepare source for Tablespaces:
'CDZJ' /u01/xtts
xttpreparesrc.sql for 'CDZJ' started at Fri Aug 18 10:56:44 2017
xttpreparesrc.sql for ended at Fri Aug 18 10:56:44 2017
Prepare source for Tablespaces:
'LDJC' /u01/xtts
xttpreparesrc.sql for 'LDJC' started at Fri Aug 18 10:56:44 2017
xttpreparesrc.sql for ended at Fri Aug 18 10:56:44 2017
Prepare source for Tablespaces:
'''' /u01/xtts
xttpreparesrc.sql for '''' started at Fri Aug 18 10:56:44 2017
xttpreparesrc.sql for ended at Fri Aug 18 10:56:44 2017
Prepare source for Tablespaces:
'''' /u01/xtts
xttpreparesrc.sql for '''' started at Fri Aug 18 10:56:44 2017
xttpreparesrc.sql for ended at Fri Aug 18 10:56:44 2017
Prepare source for Tablespaces:
'''' /u01/xtts
xttpreparesrc.sql for '''' started at Fri Aug 18 10:56:44 2017
xttpreparesrc.sql for ended at Fri Aug 18 10:56:44 2017
============================================================
No new datafiles added
=============================================================
Prepare newscn for Tablespaces: 'CDZJ'
Prepare newscn for Tablespaces: 'LDJC'
Prepare newscn for Tablespaces: '''''''''''' --------------------------------------------------------------------
Starting incremental backup
-------------------------------------------------------------------- --------------------------------------------------------------------
Done backing up incrementals
--------------------------------------------------------------------

This step runs RMAN to create incremental backup files for all tablespaces specified in the xtt.properties file. It also creates the following files, which are used in later steps:
.tsbkupmap.txt
.incrbackups.txt

The contents of tsbkupmap.txt:

IBMP740-2:/oracle11/xtts_script$cat tsbkupmap.txt
LDJC::7:::1=07sc73ng_1_1
CDZJ::6:::1=06sc73nf_1_1

This file records the mapping between each tablespace and its incremental backup piece.

The contents of incrbackups.txt:

IBMP740-2:/oracle11/xtts_script$cat incrbackups.txt
/oracle11/backup/07sc73ng_1_1
/oracle11/backup/06sc73nf_1_1

This file lists the incremental backup pieces that were generated.

IBMP740-2:/oracle11/backup$ls -lrt
total 624
-rw-r----- 1 oracle11 oinstall 65536 Aug 18 10:56 06sc73nf_1_1
-rw-r----- 1 oracle11 oinstall 253952 Aug 18 10:56 07sc73ng_1_1

3.2 Transfer the incremental backups to the destination system
Transfer the incremental backups generated in the previous step to the stageondest directory specified in the xtt.properties file on the destination system (/u01/xtts).
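The walkthrough below does this copy with an interactive FTP session. If ssh access is available between the two systems, the same transfer could be done non-interactively; for example (scp is an assumption here, not part of the documented procedure):

# on the destination system: pull every backup piece listed in incrbackups.txt
# on the source (10.138.129.2) into the stageondest directory /u01/xtts
for f in $(ssh oracle@10.138.129.2 cat /oracle11/xtts_script/incrbackups.txt); do
    scp oracle@10.138.129.2:"$f" /u01/xtts/
done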

[oracle@jyrac1 xtts]$ ftp 10.138.129.2
Connected to 10.138.129.2.
220 IBMP740-2 FTP server (Version 4.2 Mon Nov 28 14:12:02 CST 2011) ready.
502 authentication type cannot be set to GSSAPI
502 authentication type cannot be set to KERBEROS_V4
KERBEROS_V4 rejected as an authentication type
Name (10.138.129.2:oracle): oracle
331 Password required for oracle.
Password:
230-Last unsuccessful login: Wed Dec 3 10:20:09 BEIST 2014 on /dev/pts/0 from 10.138.130.31
230-Last login: Fri Aug 18 10:24:32 BEIST 2017 on ftp from ::ffff:10.138.130.151
230 User oracle logged in.
Remote system type is UNIX.
Using binary mode to transfer files.
ftp> cd /oracle11/backup
250 CWD command successful.
ftp> ls -lrt
227 Entering Passive Mode (10,138,129,2,43,121)
150 Opening data connection for /bin/ls.
total 624
-rw-r----- 1 oracle11 oinstall 65536 Aug 18 10:56 06sc73nf_1_1
-rw-r----- 1 oracle11 oinstall 253952 Aug 18 10:56 07sc73ng_1_1
226 Transfer complete.
ftp> lcd /u01/xtts
Local directory now /u01/xtts
ftp> bin
200 Type set to I.
ftp> get 06sc73nf_1_1
local: 06sc73nf_1_1 remote: 06sc73nf_1_1
227 Entering Passive Mode (10,138,129,2,43,130)
150 Opening data connection for 06sc73nf_1_1 (65536 bytes).
226 Transfer complete.
65536 bytes received in 0.0018 seconds (3.5e+04 Kbytes/s)
ftp> get 07sc73ng_1_1
local: 07sc73ng_1_1 remote: 07sc73ng_1_1
227 Entering Passive Mode (10,138,129,2,43,134)
150 Opening data connection for 07sc73ng_1_1 (253952 bytes).
226 Transfer complete.
253952 bytes received in 0.0038 seconds (6.5e+04 Kbytes/s)
[oracle@jyrac1 xtts]$ ls -lrt
total 320
-rw-r--r-- 1 oracle oinstall 65536 Aug 18 11:22 06sc73nf_1_1
-rw-r--r-- 1 oracle oinstall 253952 Aug 18 11:22 07sc73ng_1_1

3.3 Convert the incremental backups on the destination system and apply them to the datafile copies
On the destination system, log in as the Oracle software owner, set the environment variables (ORACLE_HOME and ORACLE_SID) to point to the destination database, and copy the xttplan.txt and tsbkupmap.txt files generated in the previous step over from the source system.

[oracle@jyrac1 xtts_script]$ ftp 10.138.129.2
Connected to 10.138.129.2.
220 IBMP740-2 FTP server (Version 4.2 Mon Nov 28 14:12:02 CST 2011) ready.
502 authentication type cannot be set to GSSAPI
502 authentication type cannot be set to KERBEROS_V4
KERBEROS_V4 rejected as an authentication type
Name (10.138.129.2:oracle): oracle
331 Password required for oracle.
Password:
230-Last unsuccessful login: Wed Dec 3 10:20:09 BEIST 2014 on /dev/pts/0 from 10.138.130.31
230-Last login: Fri Aug 18 11:00:11 BEIST 2017 on ftp from ::ffff:10.138.130.151
230 User oracle logged in.
Remote system type is UNIX.
Using binary mode to transfer files.
ftp> cd /oracle11/xtts_script
250 CWD command successful.
ftp> ls -lrt
227 Entering Passive Mode (10,138,129,2,43,196)
150 Opening data connection for /bin/ls.
total 520
-rw-r--r-- 1 oracle11 oinstall 1390 May 24 16:57 xttcnvrtbkupdest.sql
-rw-r--r-- 1 oracle11 oinstall 52 May 24 16:57 xttstartupnomount.sql
-rw-r--r-- 1 oracle11 oinstall 11710 May 24 16:57 xttprep.tmpl
-rw-r--r-- 1 oracle11 oinstall 139331 May 24 16:57 xttdriver.pl
-rw-r--r-- 1 oracle11 oinstall 71 May 24 16:57 xttdbopen.sql
-rw-r--r-- 1 oracle11 oinstall 7969 Jun 05 08:47 xtt.properties.jy
-rw-r----- 1 oracle11 oinstall 33949 Aug 18 09:26 rman_xttconvert_v3.zip
-rw-r--r-- 1 oracle11 oinstall 352 Aug 18 10:15 xtt.properties
-rw-r--r-- 1 oracle11 oinstall 50 Aug 18 10:21 xttplan.txt
-rw-r--r-- 1 oracle11 oinstall 106 Aug 18 10:21 xttnewdatafiles.txt_temp
-rw-r--r-- 1 oracle11 oinstall 50 Aug 18 10:21 xttnewdatafiles.txt
drwxr-xr-x 2 oracle11 oinstall 256 Aug 18 10:21 setupgetfile_Aug18_Fri_10_21_17_169
-rw-r--r-- 1 oracle11 oinstall 68 Aug 18 10:21 getfile.sql
-rw-r--r-- 1 oracle11 oinstall 50 Aug 18 10:56 xttplan.txt_tmp
-rw-r--r-- 1 oracle11 oinstall 106 Aug 18 10:56 xttnewdatafiles.txt.added_temp
-rw-r--r-- 1 oracle11 oinstall 50 Aug 18 10:56 xttnewdatafiles.txt.added
-rw-r--r-- 1 oracle11 oinstall 68 Aug 18 10:56 getfile.sql.added
-rw-r--r-- 1 oracle11 oinstall 54 Aug 18 10:56 xttplan.txt.new
-rw-r--r-- 1 oracle11 oinstall 50 Aug 18 10:56 tsbkupmap.txt
drwxr-xr-x 2 oracle11 oinstall 4096 Aug 18 10:56 incremental_Aug18_Fri_10_56_44_606
-rw-r--r-- 1 oracle11 oinstall 60 Aug 18 10:56 incrbackups.txt
226 Transfer complete.
ftp> lcd /u01/xtts_script
Local directory now /u01/xtts_script
ftp> get tsbkupmap.txt
local: tsbkupmap.txt remote: tsbkupmap.txt
227 Entering Passive Mode (10,138,129,2,43,208)
150 Opening data connection for tsbkupmap.txt (50 bytes).
226 Transfer complete.
50 bytes received in 4.1e-05 seconds (1.2e+03 Kbytes/s)
ftp> get xttplan.txt
local: xttplan.txt remote: xttplan.txt
227 Entering Passive Mode (10,138,129,2,43,213)
150 Opening data connection for xttplan.txt (50 bytes).
226 Transfer complete.
50 bytes received in 4.8e-05 seconds (1e+03 Kbytes/s)
[oracle@jyrac1 xtts_script]$ cat tsbkupmap.txt
LDJC::7:::1=07sc73ng_1_1
CDZJ::6:::1=06sc73nf_1_1
[oracle@jyrac1 xtts_script]$ cat xttplan.txt
CDZJ::::14690270660591
6
LDJC::::14690270660591
7
[oracle@jyrac1 xtts_script]$ export XTTDEBUG=1
[oracle@jyrac1 xtts_script]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl -r
============================================================
trace file is /u01/xtts_script/rollforward_Aug18_Fri_11_34_08_253//Aug18_Fri_11_34_08_253_.log
============================================================= --------------------------------------------------------------------
Parsing properties
-------------------------------------------------------------------- Key: backupondest
Values: +DATADG/backup
Key: platformid
Values: 6
Key: backupformat
Values: /oracle11/backup
Key: srclink
Values: ttslink
Key: asm_sid
Values: +ASM1
Key: dstdir
Values: DESTDIR
Key: cnvinst_home
Values: /u01/app/oracle/product/11.2.0/db
Key: cnvinst_sid
Values: xtt
Key: srcdir
Values: SOURCEDIR
Key: stageondest
Values: /u01/xtts
Key: tablespaces
Values: CDZJ,LDJC
Key: asm_home
Values: /u01/app/product/11.2.0/crs --------------------------------------------------------------------
Done parsing properties
-------------------------------------------------------------------- --------------------------------------------------------------------
Checking properties
-------------------------------------------------------------------- ARGUMENT tablespaces
ARGUMENT platformid
ARGUMENT backupformat
ARGUMENT stageondest
ARGUMENT backupondest --------------------------------------------------------------------
Done checking properties
-------------------------------------------------------------------- ORACLE_SID : jyrac1
ORACLE_HOME : /u01/app/oracle/product/11.2.0/db --------------------------------------------------------------------
Start rollforward
-------------------------------------------------------------------- convert instance: /u01/app/oracle/product/11.2.0/db convert instance: xtt ORACLE instance started. Total System Global Area 2505338880 bytes
Fixed Size 2255832 bytes
Variable Size 687866920 bytes
Database Buffers 1795162112 bytes
Redo Buffers 20054016 bytes
rdfno 6 BEFORE ROLLPLAN datafile number : 6 datafile name : +DATADG/jyrac/datafile/cdzj01 AFTER ROLLPLAN CONVERTED BACKUP PIECE+DATADG/backup/xib_06sc73nf_1_1_6 PL/SQL procedure successfully completed.
Entering RollForward
After applySetDataFile
Done: applyDataFileTo
Done: applyDataFileTo
Done: RestoreSetPiece
Done: RestoreBackupPiece PL/SQL procedure successfully completed.
asmcmd rm +DATADG/backup/xib_06sc73nf_1_1_6 /u01/app/product/11.2.0/crs .. +ASM1

--The output here indicates that the incremental backup piece could not be removed after the roll forward; this error can be ignored.

Can't locate strict.pm in @INC (@INC contains: /u01/app/product/11.2.0/crs/perl/lib/5.10.0/x86_64-linux-thread-multi /u01/app/product/11.2.0/crs/perl/lib/5.10.0 /u01/app/product/11.2.0/crs/perl/lib/site_perl/5.10.0/x86_64-linux-thread-multi /u01/app/product/11.2.0/crs/perl/lib/site_perl/5.10.0 /u01/app/product/11.2.0/crs/lib /u01/app/product/11.2.0/crs/lib/asmcmd /u01/app/product/11.2.0/crs/rdbms/lib/asmcmd /u01/app/product/11.2.0/crs/perl/lib/5.10.0/x86_64-linux-thread-multi /u01/app/product/11.2.0/crs/perl/lib/5.10.0 /u01/app/product/11.2.0/crs/perl/lib/site_perl/5.10.0/x86_64-linux-thread-multi /u01/app/product/11.2.0/crs/perl/lib/site_perl/5.10.0 /u01/app/product/11.2.0/crs/perl/lib/5.10.0/x86_64-linux-thread-multi /u01/app/product/11.2.0/crs/perl/lib/5.10.0/x86_64-linux-thread-multi /u01/app/product/11.2.0/crs/perl/lib/5.10.0 /u01/app/product/11.2.0/crs/perl/lib/site_perl/5.10.0/x86_64-linux-thread-multi /u01/app/product/11.2.0/crs/perl/lib/site_perl/5.10.0 /u01/app/product/11.2.0/crs/perl/lib/site_perl .) at /u01/app/product/11.2.0/crs/bin/asmcmdcore line 143.
BEGIN failed--compilation aborted at /u01/app/product/11.2.0/crs/bin/asmcmdcore line 143.
ASMCMD: rdfno 7 BEFORE ROLLPLAN datafile number : 7 datafile name : +DATADG/jyrac/datafile/ldjc01 AFTER ROLLPLAN CONVERTED BACKUP PIECE+DATADG/backup/xib_07sc73ng_1_1_7 PL/SQL procedure successfully completed.
Entering RollForward
After applySetDataFile
Done: applyDataFileTo
Done: applyDataFileTo
Done: RestoreSetPiece
Done: RestoreBackupPiece PL/SQL procedure successfully completed.
asmcmd rm +DATADG/backup/xib_07sc73ng_1_1_7 /u01/app/product/11.2.0/crs .. +ASM1

--The output here indicates that the incremental backup piece could not be removed after the roll forward; this error can be ignored.

Can't locate strict.pm in @INC (@INC contains: /u01/app/product/11.2.0/crs/perl/lib/5.10.0/x86_64-linux-thread-multi /u01/app/product/11.2.0/crs/perl/lib/5.10.0 /u01/app/product/11.2.0/crs/perl/lib/site_perl/5.10.0/x86_64-linux-thread-multi /u01/app/product/11.2.0/crs/perl/lib/site_perl/5.10.0 /u01/app/product/11.2.0/crs/lib /u01/app/product/11.2.0/crs/lib/asmcmd /u01/app/product/11.2.0/crs/rdbms/lib/asmcmd /u01/app/product/11.2.0/crs/perl/lib/5.10.0/x86_64-linux-thread-multi /u01/app/product/11.2.0/crs/perl/lib/5.10.0 /u01/app/product/11.2.0/crs/perl/lib/site_perl/5.10.0/x86_64-linux-thread-multi /u01/app/product/11.2.0/crs/perl/lib/site_perl/5.10.0 /u01/app/product/11.2.0/crs/perl/lib/5.10.0/x86_64-linux-thread-multi /u01/app/product/11.2.0/crs/perl/lib/5.10.0/x86_64-linux-thread-multi /u01/app/product/11.2.0/crs/perl/lib/5.10.0 /u01/app/product/11.2.0/crs/perl/lib/site_perl/5.10.0/x86_64-linux-thread-multi /u01/app/product/11.2.0/crs/perl/lib/site_perl/5.10.0 /u01/app/product/11.2.0/crs/perl/lib/site_perl .) at /u01/app/product/11.2.0/crs/bin/asmcmdcore line 143.
BEGIN failed--compilation aborted at /u01/app/product/11.2.0/crs/bin/asmcmdcore line 143.
ASMCMD: --------------------------------------------------------------------
End of rollforward phase
--------------------------------------------------------------------

This roll-forward step connects to the incremental convert instance as the SYS user, converts the incremental backups, and then connects to the destination database and applies the incremental backup to each tablespace. Note: xttplan.txt and tsbkupmap.txt must be copied over again for every incremental backup, and the xttplan.txt.new file generated by the script must not be modified, copied, or changed in any way. The destination instance is restarted while this step runs.

3.4 Determine the from_scn for the next incremental backup
Generate some more incremental data:

SQL> insert into ldjc.jy_test values(8);
1 row inserted
SQL> insert into cdzj.jy_test values(8);
1 row inserted
SQL> commit;
Commit complete
SQL> select * from ldjc.jy_test;
USER_ID
---------------------
7
8
8
1
2
3
4
5
6
9 rows selected
SQL> select * from cdzj.jy_test;
USER_ID
---------------------
7
8
1
2
3
4
5
6
8 rows selected

On the source system, log in as the Oracle software owner, set the environment variables (ORACLE_HOME and ORACLE_SID) to point to the source database, and run the following command to determine the from_scn:

IBMP740-2:/oracle11/xtts_script$$ORACLE_HOME/perl/bin/perl xttdriver.pl -s
============================================================
trace file is /oracle11/xtts_script/determinescn_Aug18_Fri_11_21_56_544//Aug18_Fri_11_21_56_544_.log
============================================================= --------------------------------------------------------------------
Parsing properties
-------------------------------------------------------------------- --------------------------------------------------------------------
Done parsing properties
-------------------------------------------------------------------- --------------------------------------------------------------------
Checking properties
-------------------------------------------------------------------- --------------------------------------------------------------------
Done checking properties
-------------------------------------------------------------------- Prepare newscn for Tablespaces: 'CDZJ'
Prepare newscn for Tablespaces: 'LDJC'
Prepare newscn for Tablespaces: ''''
Prepare newscn for Tablespaces: ''''
Prepare newscn for Tablespaces: ''''
New /oracle11/xtts_script/xttplan.txt with FROM SCN's generated

This step computes the next from_scn and records it in the xttplan.txt file; the next incremental backup will start from this SCN.

IBMP740-2:/oracle11/xtts_script$cat xttplan.txt
CDZJ::::14690270749458
6
LDJC::::14690270749458
7

3.5 Repeat the roll forward phase or proceed to the transport phase
There are two options at this point:
1. If you want to bring the datafiles in the destination database even closer to the current contents of the source database, repeat the roll forward phase.
2. If the datafiles in the destination database are already as close to the source database as desired, proceed to the transport phase.

Note: if a datafile has been added to one of the transported tablespaces since the last incremental backup, or a new tablespace name has been added to the xtt.properties file, the following error will appear:

Error:
------
The incremental backup was not taken as a datafile has been added to the tablespace: Please Do the following:
--------------------------
1. Copy fixnewdf.txt from source to destination temp dir
2. Copy backups: from to the in destination
3. On Destination, run $ORACLE_HOME/perl/bin/perl xttdriver.pl --fixnewdf
4. Re-execute the incremental backup in source:
   $ORACLE_HOME/perl/bin/perl xttdriver.pl --bkpincr
NOTE: Before running incremental backup, delete FAILED in source temp dir or
run xttdriver.pl with -L option: $ORACLE_HOME/perl/bin/perl xttdriver.pl -L --bkpincr
These instructions must be followed exactly as listed. The next incremental backup will include the new datafile.
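Condensed into commands, the recovery steps listed in that error message amount to the following (the exact backup pieces to copy are reported by the error itself; paths are the ones used in this walkthrough):

# 1. copy fixnewdf.txt from the source TMPDIR to the destination TMPDIR, and copy the
#    reported backups into the stageondest directory (/u01/xtts) on the destination
# 2. on the destination system
$ORACLE_HOME/perl/bin/perl xttdriver.pl --fixnewdf
# 3. on the source system, re-run the incremental backup; delete the FAILED file in the
#    source TMPDIR first, or use the -L option to do it for you
$ORACLE_HOME/perl/bin/perl xttdriver.pl -L --bkpincr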

Here I repeat the roll forward phase once more.
On the source system, log in as the Oracle software owner, set the environment variables (ORACLE_HOME and ORACLE_SID) to point to the source database, and run the following command to take the incremental backup:

IBMP740-2:/oracle11/xtts_script$$ORACLE_HOME/perl/bin/perl xttdriver.pl -i
============================================================
trace file is /oracle11/xtts_script/incremental_Aug18_Fri_11_23_16_532//Aug18_Fri_11_23_16_532_.log
============================================================= --------------------------------------------------------------------
Parsing properties
-------------------------------------------------------------------- --------------------------------------------------------------------
Done parsing properties
-------------------------------------------------------------------- --------------------------------------------------------------------
Checking properties
-------------------------------------------------------------------- --------------------------------------------------------------------
Done checking properties
-------------------------------------------------------------------- --------------------------------------------------------------------
Backup incremental
-------------------------------------------------------------------- Prepare source for Tablespaces:
'CDZJ' /u01/xtts
xttpreparesrc.sql for 'CDZJ' started at Fri Aug 18 11:23:16 2017
xttpreparesrc.sql for ended at Fri Aug 18 11:23:16 2017
Prepare source for Tablespaces:
'LDJC' /u01/xtts
xttpreparesrc.sql for 'LDJC' started at Fri Aug 18 11:23:16 2017
xttpreparesrc.sql for ended at Fri Aug 18 11:23:16 2017
Prepare source for Tablespaces:
'''' /u01/xtts
xttpreparesrc.sql for '''' started at Fri Aug 18 11:23:16 2017
xttpreparesrc.sql for ended at Fri Aug 18 11:23:17 2017
Prepare source for Tablespaces:
'''' /u01/xtts
xttpreparesrc.sql for '''' started at Fri Aug 18 11:23:17 2017
xttpreparesrc.sql for ended at Fri Aug 18 11:23:17 2017
Prepare source for Tablespaces:
'''' /u01/xtts
xttpreparesrc.sql for '''' started at Fri Aug 18 11:23:17 2017
xttpreparesrc.sql for ended at Fri Aug 18 11:23:17 2017
============================================================
No new datafiles added
=============================================================
Prepare newscn for Tablespaces: 'CDZJ'
Prepare newscn for Tablespaces: 'LDJC'
Prepare newscn for Tablespaces: '''''''''''' --------------------------------------------------------------------
Starting incremental backup
-------------------------------------------------------------------- --------------------------------------------------------------------
Done backing up incrementals
--------------------------------------------------------------------

This step runs RMAN to create incremental backup files for all tablespaces specified in the xtt.properties file. It also creates the following files, which are used in later steps:
.tsbkupmap.txt
.incrbackups.txt
The contents of tsbkupmap.txt:
IBMP740-2:/oracle11/xtts_script$cat tsbkupmap.txt
LDJC::7:::1=09sc7598_1_1
CDZJ::6:::1=08sc7597_1_1
This file records the mapping between each tablespace and its incremental backup piece.
The contents of incrbackups.txt:

IBMP740-2:/oracle11/xtts_script$cat incrbackups.txt
/oracle11/backup/09sc7598_1_1
/oracle11/backup/08sc7597_1_1

This file lists the incremental backup pieces that were generated.

IBMP740-2:/oracle11/backup$ls -lrt
-rw-r----- 1 oracle11 oinstall 49152 Aug 18 11:23 08sc7597_1_1
-rw-r----- 1 oracle11 oinstall 204800 Aug 18 11:23 09sc7598_1_1

Transfer the incremental backups to the destination system
Transfer the incremental backups generated in the previous step to the stageondest directory specified in the xtt.properties file on the destination system (/u01/xtts).

[oracle@jyrac1 xtts_script]$ ftp 10.138.129.2
Connected to 10.138.129.2.
220 IBMP740-2 FTP server (Version 4.2 Mon Nov 28 14:12:02 CST 2011) ready.
502 authentication type cannot be set to GSSAPI
502 authentication type cannot be set to KERBEROS_V4
KERBEROS_V4 rejected as an authentication type
Name (10.138.129.2:oracle): oracle
331 Password required for oracle.
Password:
230-Last unsuccessful login: Wed Dec 3 10:20:09 BEIST 2014 on /dev/pts/0 from 10.138.130.31
230-Last login: Fri Aug 18 11:02:13 BEIST 2017 on ftp from ::ffff:10.138.130.151
230 User oracle logged in.
Remote system type is UNIX.
Using binary mode to transfer files.
ftp> cd /oracle11/backup
250 CWD command successful.
ftp> ls -lrt
227 Entering Passive Mode (10,138,129,2,46,249)
150 Opening data connection for /bin/ls.
total 1120
-rw-r----- 1 oracle11 oinstall 65536 Aug 18 10:56 06sc73nf_1_1
-rw-r----- 1 oracle11 oinstall 253952 Aug 18 10:56 07sc73ng_1_1
-rw-r----- 1 oracle11 oinstall 49152 Aug 18 11:23 08sc7597_1_1
-rw-r----- 1 oracle11 oinstall 204800 Aug 18 11:23 09sc7598_1_1
226 Transfer complete.
ftp> lcd /u01/xtts
Local directory now /u01/xtts
ftp> bin
200 Type set to I.
ftp> get 08sc7597_1_1
local: 08sc7597_1_1 remote: 08sc7597_1_1
227 Entering Passive Mode (10,138,129,2,47,4)
150 Opening data connection for 08sc7597_1_1 (49152 bytes).
226 Transfer complete.
49152 bytes received in 0.0013 seconds (3.7e+04 Kbytes/s)
ftp> get 09sc7598_1_1
local: 09sc7598_1_1 remote: 09sc7598_1_1
227 Entering Passive Mode (10,138,129,2,47,9)
150 Opening data connection for 09sc7598_1_1 (204800 bytes).
226 Transfer complete.
204800 bytes received in 0.0029 seconds (7e+04 Kbytes/s)

Convert the incremental backups on the destination system and apply them to the datafile copies
On the destination system, log in as the Oracle software owner, set the environment variables (ORACLE_HOME and ORACLE_SID) to point to the destination database, and copy the xttplan.txt and tsbkupmap.txt files generated in the previous step over from the source system.

ftp> cd /oracle11/xtts_script
250 CWD command successful.
ftp> lcd /u01/xtts_script
Local directory now /u01/xtts_script
ftp> bin
200 Type set to I.
ftp> get xttplan.txt
local: xttplan.txt remote: xttplan.txt
227 Entering Passive Mode (10,138,129,2,47,32)
150 Opening data connection for xttplan.txt (54 bytes).
226 Transfer complete.
54 bytes received in 2.7e-05 seconds (2e+03 Kbytes/s)
ftp> get tsbkupmap.txt
local: tsbkupmap.txt remote: tsbkupmap.txt
227 Entering Passive Mode (10,138,129,2,47,39)
150 Opening data connection for tsbkupmap.txt (50 bytes).
226 Transfer complete.
50 bytes received in 3.2e-05 seconds (1.5e+03 Kbytes/s)
[oracle@jyrac1 xtts_script]$ cat xttplan.txt
CDZJ::::14690270749458
6
LDJC::::14690270749458
7
[oracle@jyrac1 xtts_script]$ cat tsbkupmap.txt
LDJC::7:::1=09sc7598_1_1
CDZJ::6:::1=08sc7597_1_1
[oracle@jyrac1 xtts_script]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl -r
============================================================
trace file is /u01/xtts_script/rollforward_Aug18_Fri_11_50_48_600//Aug18_Fri_11_50_48_600_.log
============================================================= --------------------------------------------------------------------
Parsing properties
-------------------------------------------------------------------- Key: backupondest
Values: +DATADG/backup
Key: platformid
Values: 6
Key: backupformat
Values: /oracle11/backup
Key: srclink
Values: ttslink
Key: asm_sid
Values: +ASM1
Key: dstdir
Values: DESTDIR
Key: cnvinst_home
Values: /u01/app/oracle/product/11.2.0/db
Key: cnvinst_sid
Values: xtt
Key: srcdir
Values: SOURCEDIR
Key: stageondest
Values: /u01/xtts
Key: tablespaces
Values: CDZJ,LDJC
Key: asm_home
Values: /u01/app/product/11.2.0/crs --------------------------------------------------------------------
Done parsing properties
-------------------------------------------------------------------- --------------------------------------------------------------------
Checking properties
-------------------------------------------------------------------- ARGUMENT tablespaces
ARGUMENT platformid
ARGUMENT backupformat
ARGUMENT stageondest
ARGUMENT backupondest --------------------------------------------------------------------
Done checking properties
-------------------------------------------------------------------- ORACLE_SID : jyrac1
ORACLE_HOME : /u01/app/oracle/product/11.2.0/db --------------------------------------------------------------------
Start rollforward
-------------------------------------------------------------------- convert instance: /u01/app/oracle/product/11.2.0/db convert instance: xtt ORACLE instance started. Total System Global Area 2505338880 bytes
Fixed Size 2255832 bytes
Variable Size 687866920 bytes
Database Buffers 1795162112 bytes
Redo Buffers 20054016 bytes
rdfno 6 BEFORE ROLLPLAN datafile number : 6 datafile name : +DATADG/jyrac/datafile/cdzj01 AFTER ROLLPLAN CONVERTED BACKUP PIECE+DATADG/backup/xib_08sc7597_1_1_6 PL/SQL procedure successfully completed.
Entering RollForward
After applySetDataFile
Done: applyDataFileTo
Done: applyDataFileTo
Done: RestoreSetPiece
Done: RestoreBackupPiece PL/SQL procedure successfully completed.
asmcmd rm +DATADG/backup/xib_08sc7597_1_1_6 /u01/app/product/11.2.0/crs .. +ASM1 Can't locate strict.pm in @INC (@INC contains: /u01/app/product/11.2.0/crs/perl/lib/5.10.0/x86_64-linux-thread-multi /u01/app/product/11.2.0/crs/perl/lib/5.10.0 /u01/app/product/11.2.0/crs/perl/lib/site_perl/5.10.0/x86_64-linux-thread-multi /u01/app/product/11.2.0/crs/perl/lib/site_perl/5.10.0 /u01/app/product/11.2.0/crs/lib /u01/app/product/11.2.0/crs/lib/asmcmd /u01/app/product/11.2.0/crs/rdbms/lib/asmcmd /u01/app/product/11.2.0/crs/perl/lib/5.10.0/x86_64-linux-thread-multi /u01/app/product/11.2.0/crs/perl/lib/5.10.0 /u01/app/product/11.2.0/crs/perl/lib/site_perl/5.10.0/x86_64-linux-thread-multi /u01/app/product/11.2.0/crs/perl/lib/site_perl/5.10.0 /u01/app/product/11.2.0/crs/perl/lib/5.10.0/x86_64-linux-thread-multi /u01/app/product/11.2.0/crs/perl/lib/5.10.0/x86_64-linux-thread-multi /u01/app/product/11.2.0/crs/perl/lib/5.10.0 /u01/app/product/11.2.0/crs/perl/lib/site_perl/5.10.0/x86_64-linux-thread-multi /u01/app/product/11.2.0/crs/perl/lib/site_perl/5.10.0 /u01/app/product/11.2.0/crs/perl/lib/site_perl .) at /u01/app/product/11.2.0/crs/bin/asmcmdcore line 143.
BEGIN failed--compilation aborted at /u01/app/product/11.2.0/crs/bin/asmcmdcore line 143.
ASMCMD: rdfno 7 BEFORE ROLLPLAN datafile number : 7 datafile name : +DATADG/jyrac/datafile/ldjc01 AFTER ROLLPLAN CONVERTED BACKUP PIECE+DATADG/backup/xib_09sc7598_1_1_7 PL/SQL procedure successfully completed.
Entering RollForward
After applySetDataFile
Done: applyDataFileTo
Done: applyDataFileTo
Done: RestoreSetPiece
Done: RestoreBackupPiece PL/SQL procedure successfully completed.
asmcmd rm +DATADG/backup/xib_09sc7598_1_1_7 /u01/app/product/11.2.0/crs .. +ASM1 Can't locate strict.pm in @INC (@INC contains: /u01/app/product/11.2.0/crs/perl/lib/5.10.0/x86_64-linux-thread-multi /u01/app/product/11.2.0/crs/perl/lib/5.10.0 /u01/app/product/11.2.0/crs/perl/lib/site_perl/5.10.0/x86_64-linux-thread-multi /u01/app/product/11.2.0/crs/perl/lib/site_perl/5.10.0 /u01/app/product/11.2.0/crs/lib /u01/app/product/11.2.0/crs/lib/asmcmd /u01/app/product/11.2.0/crs/rdbms/lib/asmcmd /u01/app/product/11.2.0/crs/perl/lib/5.10.0/x86_64-linux-thread-multi /u01/app/product/11.2.0/crs/perl/lib/5.10.0 /u01/app/product/11.2.0/crs/perl/lib/site_perl/5.10.0/x86_64-linux-thread-multi /u01/app/product/11.2.0/crs/perl/lib/site_perl/5.10.0 /u01/app/product/11.2.0/crs/perl/lib/5.10.0/x86_64-linux-thread-multi /u01/app/product/11.2.0/crs/perl/lib/5.10.0/x86_64-linux-thread-multi /u01/app/product/11.2.0/crs/perl/lib/5.10.0 /u01/app/product/11.2.0/crs/perl/lib/site_perl/5.10.0/x86_64-linux-thread-multi /u01/app/product/11.2.0/crs/perl/lib/site_perl/5.10.0 /u01/app/product/11.2.0/crs/perl/lib/site_perl .) at /u01/app/product/11.2.0/crs/bin/asmcmdcore line 143.
BEGIN failed--compilation aborted at /u01/app/product/11.2.0/crs/bin/asmcmdcore line 143.
ASMCMD: --------------------------------------------------------------------
End of rollforward phase
--------------------------------------------------------------------

This roll-forward step connects to the incremental convert instance as the SYS user, converts the incremental backups, and then connects to the destination database and applies the incremental backup to each tablespace. Note: xttplan.txt and tsbkupmap.txt must be copied over again for every incremental backup, and the xttplan.txt.new file generated by the script must not be modified, copied, or changed in any way. The destination instance is restarted while this step runs.

Determine the from_scn for the next incremental backup
Generate some more incremental data:

SQL> insert into ldjc.jy_test values(9);
1 row inserted
SQL> insert into cdzj.jy_test values(9);
1 row inserted
SQL> commit;
Commit complete
SQL> select * from ldjc.jy_test;
USER_ID
---------------------
7
8
8
9
1
2
3
4
5
6
10 rows selected
SQL> select * from cdzj.jy_test;
USER_ID
---------------------
7
8
9
1
2
3
4
5
6
9 rows selected

On the source system, log in as the Oracle software owner, set the environment variables (ORACLE_HOME and ORACLE_SID) to point to the source database, and run the following command to determine the from_scn:

IBMP740-2:/oracle11/xtts_script$$ORACLE_HOME/perl/bin/perl xttdriver.pl -s
============================================================
trace file is /oracle11/xtts_script/determinescn_Aug18_Fri_11_31_22_441//Aug18_Fri_11_31_22_441_.log
============================================================= --------------------------------------------------------------------
Parsing properties
-------------------------------------------------------------------- --------------------------------------------------------------------
Done parsing properties
-------------------------------------------------------------------- --------------------------------------------------------------------
Checking properties
-------------------------------------------------------------------- --------------------------------------------------------------------
Done checking properties
-------------------------------------------------------------------- Prepare newscn for Tablespaces: 'CDZJ'
Prepare newscn for Tablespaces: 'LDJC'
Prepare newscn for Tablespaces: ''''
Prepare newscn for Tablespaces: ''''
Prepare newscn for Tablespaces: ''''
New /oracle11/xtts_script/xttplan.txt with FROM SCN's generated
IBMP740-2:/oracle11/xtts_script$cat xttplan.txt
CDZJ::::14690270749827
6
LDJC::::14690270749845

4. Transport phase
During the transport phase, the tablespaces being transported are set to read only in the source database, and a final incremental backup is taken and applied so that the datafiles in the destination database match the contents of the datafiles in the source database. Once they match, a normal transportable tablespace export of the metadata is performed on the source system and the metadata is imported into the destination database. From the moment the tablespaces are set to read only until the transport phase completes, the data being transported can be accessed only in read only mode.
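Put together, the transport phase is the following command sequence; these are the same commands run in the rest of this section, collected here for reference:

# on the source system: stop changes to the tablespaces being transported
sqlplus / as sysdba <<'EOF'
alter tablespace cdzj read only;
alter tablespace ldjc read only;
EOF

# take the final incremental backup on the source, copy it (plus xttplan.txt and
# tsbkupmap.txt) to the destination, and convert and apply it there
$ORACLE_HOME/perl/bin/perl xttdriver.pl -i    # on the source
$ORACLE_HOME/perl/bin/perl xttdriver.pl -r    # on the destination

# on the destination system: generate the Data Pump TTS command (-e) and run the import
$ORACLE_HOME/perl/bin/perl xttdriver.pl -e
impdp system directory=dump_dir logfile=tts_imp.log network_link=ttslink \
      transport_full_check=no transport_tablespaces=CDZJ,LDJC \
      transport_datafiles='+DATADG/jyrac/datafile/cdzj01','+DATADG/jyrac/datafile/ldjc01'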

4.1 Set the transported tablespaces to read only in the source database
On the source system, log in as the Oracle software owner, set the environment variables (ORACLE_HOME and ORACLE_SID) to point to the source database, and run the following commands to set the tablespaces to read only:

SQL> alter tablespace ldjc read only;
Tablespace altered
SQL> alter tablespace cdzj read only;
Tablespace altered
SQL> select tablespace_name,status from dba_tablespaces;
TABLESPACE_NAME STATUS
------------------------------ ---------
SYSTEM ONLINE
SYSAUX ONLINE
UNDOTBS1 ONLINE
TEMP ONLINE
USERS ONLINE
EXAMPLE ONLINE
CDZJ READ ONLY
LDJC READ ONLY
8 rows selected

4.2 Take the final incremental backup, transfer it to the destination system, convert it, and apply it to the destination datafiles
On the source system, log in as the Oracle software owner, set the environment variables (ORACLE_HOME and ORACLE_SID) to point to the source database, and run the following command to take the incremental backup:

IBMP740-2:/oracle11/xtts_script$$ORACLE_HOME/perl/bin/perl xttdriver.pl -i
============================================================
trace file is /oracle11/xtts_script/incremental_Aug18_Fri_11_33_18_477//Aug18_Fri_11_33_18_477_.log
============================================================= --------------------------------------------------------------------
Parsing properties
-------------------------------------------------------------------- --------------------------------------------------------------------
Done parsing properties
-------------------------------------------------------------------- --------------------------------------------------------------------
Checking properties
-------------------------------------------------------------------- --------------------------------------------------------------------
Done checking properties
-------------------------------------------------------------------- --------------------------------------------------------------------
Backup incremental
-------------------------------------------------------------------- Prepare source for Tablespaces:
'CDZJ' /u01/xtts
xttpreparesrc.sql for 'CDZJ' started at Fri Aug 18 11:33:18 2017
xttpreparesrc.sql for ended at Fri Aug 18 11:33:18 2017
Prepare source for Tablespaces:
'LDJC' /u01/xtts
xttpreparesrc.sql for 'LDJC' started at Fri Aug 18 11:33:18 2017
xttpreparesrc.sql for ended at Fri Aug 18 11:33:18 2017
Prepare source for Tablespaces:
'''' /u01/xtts
xttpreparesrc.sql for '''' started at Fri Aug 18 11:33:18 2017
xttpreparesrc.sql for ended at Fri Aug 18 11:33:18 2017
Prepare source for Tablespaces:
'''' /u01/xtts
xttpreparesrc.sql for '''' started at Fri Aug 18 11:33:18 2017
xttpreparesrc.sql for ended at Fri Aug 18 11:33:18 2017
Prepare source for Tablespaces:
'''' /u01/xtts
xttpreparesrc.sql for '''' started at Fri Aug 18 11:33:18 2017
xttpreparesrc.sql for ended at Fri Aug 18 11:33:18 2017
============================================================
No new datafiles added
=============================================================
Prepare newscn for Tablespaces: 'CDZJ'
Prepare newscn for Tablespaces: 'LDJC'
Prepare newscn for Tablespaces: '''''''''''' --------------------------------------------------------------------
Starting incremental backup
-------------------------------------------------------------------- --------------------------------------------------------------------
Done backing up incrementals
--------------------------------------------------------------------

This step runs RMAN to create incremental backup files for all tablespaces specified in the xtt.properties file. It also creates the following files, which are used in later steps:
.tsbkupmap.txt
.incrbackups.txt

The contents of tsbkupmap.txt:

IBMP740-2:/oracle11/xtts_script$cat tsbkupmap.txt
LDJC::7:::1=0bsc75s2_1_1
CDZJ::6:::1=0asc75s0_1_1

This file records the mapping between each tablespace and its incremental backup piece.

The contents of incrbackups.txt:

IBMP740-2:/oracle11/xtts_script$cat incrbackups.txt
/oracle11/backup/0bsc75s2_1_1
/oracle11/backup/0asc75s0_1_1

Transfer the incremental backups to the destination system
Transfer the incremental backups generated in the previous step to the stageondest directory specified in the xtt.properties file on the destination system (/u01/xtts).

[oracle@jyrac1 xtts_script]$ ftp 10.138.129.2
Connected to 10.138.129.2.
220 IBMP740-2 FTP server (Version 4.2 Mon Nov 28 14:12:02 CST 2011) ready.
502 authentication type cannot be set to GSSAPI
502 authentication type cannot be set to KERBEROS_V4
KERBEROS_V4 rejected as an authentication type
Name (10.138.129.2:oracle): oracle
331 Password required for oracle.
Password:
230-Last unsuccessful login: Wed Dec 3 10:20:09 BEIST 2014 on /dev/pts/0 from 10.138.130.31
230-Last login: Fri Aug 18 11:26:03 BEIST 2017 on ftp from ::ffff:10.138.130.151
230 User oracle logged in.
Remote system type is UNIX.
Using binary mode to transfer files.
ftp> cd /oracle11/backup
250 CWD command successful.
ftp> ls -lrt
227 Entering Passive Mode (10,138,129,2,48,62)
150 Opening data connection for /bin/ls.
total 1632
-rw-r----- 1 oracle11 oinstall 65536 Aug 18 10:56 06sc73nf_1_1
-rw-r----- 1 oracle11 oinstall 253952 Aug 18 10:56 07sc73ng_1_1
-rw-r----- 1 oracle11 oinstall 49152 Aug 18 11:23 08sc7597_1_1
-rw-r----- 1 oracle11 oinstall 204800 Aug 18 11:23 09sc7598_1_1
-rw-r----- 1 oracle11 oinstall 49152 Aug 18 11:33 0asc75s0_1_1
-rw-r----- 1 oracle11 oinstall 212992 Aug 18 11:33 0bsc75s2_1_1
226 Transfer complete.
ftp> lcd /u01/xtts
Local directory now /u01/xtts
ftp> get 0asc75s0_1_1
local: 0asc75s0_1_1 remote: 0asc75s0_1_1
227 Entering Passive Mode (10,138,129,2,48,73)
150 Opening data connection for 0asc75s0_1_1 (49152 bytes).
226 Transfer complete.
49152 bytes received in 0.0015 seconds (3.3e+04 Kbytes/s)
ftp> get 0bsc75s2_1_1
local: 0bsc75s2_1_1 remote: 0bsc75s2_1_1
227 Entering Passive Mode (10,138,129,2,48,76)
150 Opening data connection for 0bsc75s2_1_1 (212992 bytes).
226 Transfer complete.
212992 bytes received in 0.0032 seconds (6.6e+04 Kbytes/s)

Convert the incremental backups on the destination system and apply them to the datafile copies
On the destination system, log in as the Oracle software owner, set the environment variables (ORACLE_HOME and ORACLE_SID) to point to the destination database, and copy the xttplan.txt and tsbkupmap.txt files generated in the previous step over from the source system.

ftp> cd /oracle11/xtts_script
250 CWD command successful.
ftp> lcd /u01/xtts_script
Local directory now /u01/xtts_script
ftp> bin
200 Type set to I.
ftp> get xttplan.txt
local: xttplan.txt remote: xttplan.txt
227 Entering Passive Mode (10,138,129,2,48,100)
150 Opening data connection for xttplan.txt (54 bytes).
226 Transfer complete.
54 bytes received in 3.4e-05 seconds (1.6e+03 Kbytes/s)
ftp> get tsbkupmap.txt
local: tsbkupmap.txt remote: tsbkupmap.txt
227 Entering Passive Mode (10,138,129,2,48,107)
150 Opening data connection for tsbkupmap.txt (50 bytes).
226 Transfer complete.
50 bytes received in 6.4e-05 seconds (7.6e+02 Kbytes/s)
[oracle@jyrac1 xtts_script]$ cat xttplan.txt
CDZJ::::14690270749827
6
LDJC::::14690270749845
7
[oracle@jyrac1 xtts_script]$ cat tsbkupmap.txt
LDJC::7:::1=0bsc75s2_1_1
CDZJ::6:::1=0asc75s0_1_1
[oracle@jyrac1 xtts_script]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl -r
============================================================
trace file is /u01/xtts_script/rollforward_Aug18_Fri_12_00_02_120//Aug18_Fri_12_00_02_120_.log
============================================================= --------------------------------------------------------------------
Parsing properties
-------------------------------------------------------------------- Key: backupondest
Values: +DATADG/backup
Key: platformid
Values: 6
Key: backupformat
Values: /oracle11/backup
Key: srclink
Values: ttslink
Key: asm_sid
Values: +ASM1
Key: dstdir
Values: DESTDIR
Key: cnvinst_home
Values: /u01/app/oracle/product/11.2.0/db
Key: cnvinst_sid
Values: xtt
Key: srcdir
Values: SOURCEDIR
Key: stageondest
Values: /u01/xtts
Key: tablespaces
Values: CDZJ,LDJC
Key: asm_home
Values: /u01/app/product/11.2.0/crs --------------------------------------------------------------------
Done parsing properties
-------------------------------------------------------------------- --------------------------------------------------------------------
Checking properties
-------------------------------------------------------------------- ARGUMENT tablespaces
ARGUMENT platformid
ARGUMENT backupformat
ARGUMENT stageondest
ARGUMENT backupondest --------------------------------------------------------------------
Done checking properties
-------------------------------------------------------------------- ORACLE_SID : jyrac1
ORACLE_HOME : /u01/app/oracle/product/11.2.0/db --------------------------------------------------------------------
Start rollforward
-------------------------------------------------------------------- convert instance: /u01/app/oracle/product/11.2.0/db convert instance: xtt ORACLE instance started. Total System Global Area 2505338880 bytes
Fixed Size 2255832 bytes
Variable Size 687866920 bytes
Database Buffers 1795162112 bytes
Redo Buffers 20054016 bytes
rdfno 6 BEFORE ROLLPLAN datafile number : 6 datafile name : +DATADG/jyrac/datafile/cdzj01 AFTER ROLLPLAN CONVERTED BACKUP PIECE+DATADG/backup/xib_0asc75s0_1_1_6 PL/SQL procedure successfully completed.
Entering RollForward
After applySetDataFile
Done: applyDataFileTo
Done: applyDataFileTo
Done: RestoreSetPiece
Done: RestoreBackupPiece PL/SQL procedure successfully completed.
asmcmd rm +DATADG/backup/xib_0asc75s0_1_1_6 /u01/app/product/11.2.0/crs .. +ASM1 Can't locate strict.pm in @INC (@INC contains: /u01/app/product/11.2.0/crs/perl/lib/5.10.0/x86_64-linux-thread-multi /u01/app/product/11.2.0/crs/perl/lib/5.10.0 /u01/app/product/11.2.0/crs/perl/lib/site_perl/5.10.0/x86_64-linux-thread-multi /u01/app/product/11.2.0/crs/perl/lib/site_perl/5.10.0 /u01/app/product/11.2.0/crs/lib /u01/app/product/11.2.0/crs/lib/asmcmd /u01/app/product/11.2.0/crs/rdbms/lib/asmcmd /u01/app/product/11.2.0/crs/perl/lib/5.10.0/x86_64-linux-thread-multi /u01/app/product/11.2.0/crs/perl/lib/5.10.0 /u01/app/product/11.2.0/crs/perl/lib/site_perl/5.10.0/x86_64-linux-thread-multi /u01/app/product/11.2.0/crs/perl/lib/site_perl/5.10.0 /u01/app/product/11.2.0/crs/perl/lib/5.10.0/x86_64-linux-thread-multi /u01/app/product/11.2.0/crs/perl/lib/5.10.0/x86_64-linux-thread-multi /u01/app/product/11.2.0/crs/perl/lib/5.10.0 /u01/app/product/11.2.0/crs/perl/lib/site_perl/5.10.0/x86_64-linux-thread-multi /u01/app/product/11.2.0/crs/perl/lib/site_perl/5.10.0 /u01/app/product/11.2.0/crs/perl/lib/site_perl .) at /u01/app/product/11.2.0/crs/bin/asmcmdcore line 143.
BEGIN failed--compilation aborted at /u01/app/product/11.2.0/crs/bin/asmcmdcore line 143.
ASMCMD: rdfno 7 BEFORE ROLLPLAN datafile number : 7 datafile name : +DATADG/jyrac/datafile/ldjc01 AFTER ROLLPLAN CONVERTED BACKUP PIECE+DATADG/backup/xib_0bsc75s2_1_1_7 PL/SQL procedure successfully completed.
Entering RollForward
After applySetDataFile
Done: applyDataFileTo
Done: applyDataFileTo
Done: RestoreSetPiece
Done: RestoreBackupPiece PL/SQL procedure successfully completed.
asmcmd rm +DATADG/backup/xib_0bsc75s2_1_1_7 /u01/app/product/11.2.0/crs .. +ASM1 Can't locate strict.pm in @INC (@INC contains: /u01/app/product/11.2.0/crs/perl/lib/5.10.0/x86_64-linux-thread-multi /u01/app/product/11.2.0/crs/perl/lib/5.10.0 /u01/app/product/11.2.0/crs/perl/lib/site_perl/5.10.0/x86_64-linux-thread-multi /u01/app/product/11.2.0/crs/perl/lib/site_perl/5.10.0 /u01/app/product/11.2.0/crs/lib /u01/app/product/11.2.0/crs/lib/asmcmd /u01/app/product/11.2.0/crs/rdbms/lib/asmcmd /u01/app/product/11.2.0/crs/perl/lib/5.10.0/x86_64-linux-thread-multi /u01/app/product/11.2.0/crs/perl/lib/5.10.0 /u01/app/product/11.2.0/crs/perl/lib/site_perl/5.10.0/x86_64-linux-thread-multi /u01/app/product/11.2.0/crs/perl/lib/site_perl/5.10.0 /u01/app/product/11.2.0/crs/perl/lib/5.10.0/x86_64-linux-thread-multi /u01/app/product/11.2.0/crs/perl/lib/5.10.0/x86_64-linux-thread-multi /u01/app/product/11.2.0/crs/perl/lib/5.10.0 /u01/app/product/11.2.0/crs/perl/lib/site_perl/5.10.0/x86_64-linux-thread-multi /u01/app/product/11.2.0/crs/perl/lib/site_perl/5.10.0 /u01/app/product/11.2.0/crs/perl/lib/site_perl .) at /u01/app/product/11.2.0/crs/bin/asmcmdcore line 143.
BEGIN failed--compilation aborted at /u01/app/product/11.2.0/crs/bin/asmcmdcore line 143.
ASMCMD: --------------------------------------------------------------------
End of rollforward phase
--------------------------------------------------------------------

4.3 Import the metadata into the destination database
On the destination system, log in as the Oracle software owner, set the environment variables (ORACLE_HOME and ORACLE_SID) to point to the destination database, and run the following command to generate the Data Pump TTS command:

[oracle@jyrac1 xtts_script]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl -e
============================================================
trace file is /u01/xtts_script/generate_Aug18_Fri_12_01_00_366//Aug18_Fri_12_01_00_366_.log
============================================================= --------------------------------------------------------------------
Parsing properties
-------------------------------------------------------------------- Key: backupondest
Values: +DATADG/backup
Key: platformid
Values: 6
Key: backupformat
Values: /oracle11/backup
Key: srclink
Values: ttslink
Key: asm_sid
Values: +ASM1
Key: dstdir
Values: DESTDIR
Key: cnvinst_home
Values: /u01/app/oracle/product/11.2.0/db
Key: cnvinst_sid
Values: xtt
Key: srcdir
Values: SOURCEDIR
Key: stageondest
Values: /u01/xtts
Key: tablespaces
Values: CDZJ,LDJC
Key: asm_home
Values: /u01/app/product/11.2.0/crs --------------------------------------------------------------------
Done parsing properties
-------------------------------------------------------------------- --------------------------------------------------------------------
Checking properties
-------------------------------------------------------------------- ARGUMENT tablespaces
ARGUMENT platformid
ARGUMENT backupformat
ARGUMENT stageondest --------------------------------------------------------------------
Done checking properties
-------------------------------------------------------------------- ORACLE_SID : jyrac1
ORACLE_HOME : /u01/app/oracle/product/11.2.0/db --------------------------------------------------------------------
Generating plugin
-------------------------------------------------------------------- --------------------------------------------------------------------
Done generating plugin file /u01/xtts_script/xttplugin.txt
--------------------------------------------------------------------
[oracle@jyrac1 xtts_script]$ cat xttplugin.txt
impdp directory= logfile= \
network_link= transport_full_check=no \
transport_tablespaces=CDZJ,LDJC \
transport_datafiles='+DATADG/jyrac/datafile/cdzj01','+DATADG/jyrac/datafile/ldjc01'

The command above generates a file named xttplugin.txt, which contains a template for a transportable tablespace metadata import that uses the network_link parameter. The transport_tablespaces and transport_datafiles parameters in the command are already filled in correctly. Note that network_link mode performs the import over a database link, so no export or dump file is needed. If you choose to complete the tablespace transport with this command, you must fill in the directory, logfile, and network_link parameters.

SQL> create directory dump_dir as '/u01/xtts_script';

Directory created.
SQL> grant read,write on directory dump_dir to public;

Grant succeeded.

Create the LDJC and CDZJ schemas in the destination database:

SQL> create user ldjc identified by "ldjc";

User created.

SQL> grant dba,connect,resource to ldjc;

Grant succeeded.

SQL> create user cdzj identified by "cdzj";

User created.

SQL> grant dba,connect,resource to cdzj;

Grant succeeded.

[oracle@jyrac1 xtts_script]$ impdp system/abcd directory=dump_dir logfile=tts_imp.log network_link=ttslink transport_full_check=no transport_tablespaces=CDZJ,LDJC transport_datafiles='+DATADG/jyrac/datafile/cdzj01','+DATADG/jyrac/datafile/ldjc01'

Import: Release 11.2.0.4.0 - Production on Fri Aug 18 12:05:05 2017

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
Starting "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_03": system/******** directory=dump_dir logfile=tts_imp.log network_link=ttslink transport_full_check=no transport_tablespaces=CDZJ,LDJC transport_datafiles=+DATADG/jyrac/datafile/cdzj01,+DATADG/jyrac/datafile/ldjc01
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/INDEX/INDEX
Processing object type TRANSPORTABLE_EXPORT/CONSTRAINT/CONSTRAINT
Processing object type TRANSPORTABLE_EXPORT/INDEX_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/COMMENT
Processing object type TRANSPORTABLE_EXPORT/CONSTRAINT/REF_CONSTRAINT
Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Job "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_03" successfully completed at Fri Aug 18 12:07:05 2017 elapsed 0 00:01:52 [oracle@jyrac1 xtts_script]$ impdp system/abcd directory=dump_dir logfile=ysj.log schemas=ldjc,cdzj content=metadata_only exclude=table,index network_link=ttslink Import: Release 11.2.0.4.0 - Production on Fri Aug 18 12:09:15 2017 Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved. Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
Starting "SYSTEM"."SYS_IMPORT_SCHEMA_01": system/******** directory=dump_dir logfile=ysj.log schemas=ldjc,cdzj content=metadata_only exclude=table,index network_link=ttslink
Processing object type SCHEMA_EXPORT/USER
ORA-31684: Object type USER:"LDJC" already exists
ORA-31684: Object type USER:"CDZJ" already exists
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/SYNONYM/SYNONYM
Processing object type SCHEMA_EXPORT/TYPE/TYPE_SPEC
Processing object type SCHEMA_EXPORT/DB_LINK
Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE
Processing object type SCHEMA_EXPORT/PACKAGE/PACKAGE_SPEC
Processing object type SCHEMA_EXPORT/FUNCTION/FUNCTION
Processing object type SCHEMA_EXPORT/PACKAGE/COMPILE_PACKAGE/PACKAGE_SPEC/ALTER_PACKAGE_SPEC
Processing object type SCHEMA_EXPORT/FUNCTION/ALTER_FUNCTION
Processing object type SCHEMA_EXPORT/VIEW/VIEW
ORA-39082: Object type VIEW:"LDJC"."TEMP_AAB002" created with compilation warnings
Processing object type SCHEMA_EXPORT/PACKAGE/PACKAGE_BODY
ORA-39082: Object type PACKAGE_BODY:"LDJC"."QUEST_SOO_PKG" created with compilation warnings
ORA-39082: Object type PACKAGE_BODY:"LDJC"."QUEST_SOO_SQLTRACE" created with compilation warnings
Processing object type SCHEMA_EXPORT/JOB
Processing object type SCHEMA_EXPORT/POST_SCHEMA/PROCOBJ
Job "SYSTEM"."SYS_IMPORT_SCHEMA_01" completed with 5 error(s) at Fri Aug 18 12:09:46 2017 elapsed 0 00:00:30 SQL> select * from ldjc.jy_test;
USER_ID
---------------------
7
8
8
9
1
2
3
4
5
6
10 rows selected
SQL> select * from cdzj.jy_test;
USER_ID
---------------------
7
8
9
1
2
3
4
5
6
9 rows selected

After the metadata import completes, the ldjc and cdzj tablespaces in the source database can be set back to read write:

SQL> alter tablespace ldjc read write;

Tablespace altered.

SQL>  alter tablespace cdzj read write;

Tablespace altered.

If you do not perform the import over a network_link, you can instead run a transportable-tablespace-mode Data Pump export of the metadata on the source, copy the dump file to the target database, and run the import there.

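A rough sketch of that dump-file variant, assuming a dump_dir directory object exists on both databases; the dump file name tts_meta.dmp and the password placeholders are made up, while the tablespace names and ASM file names are the ones from the example above:

# on the source, with the tablespaces already READ ONLY
expdp system/******** directory=dump_dir dumpfile=tts_meta.dmp logfile=tts_exp.log transport_tablespaces=CDZJ,LDJC transport_full_check=no

# copy tts_meta.dmp to the target's dump_dir, move/convert the data files as before, then on the target
impdp system/******** directory=dump_dir dumpfile=tts_meta.dmp logfile=tts_imp.log transport_datafiles='+DATADG/jyrac/datafile/cdzj01','+DATADG/jyrac/datafile/ldjc01'
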
4.4 Switch the tablespaces ldjc and cdzj in the target database to READ WRITE

SQL> select tablespace_name,status from dba_tablespaces;

TABLESPACE_NAME                STATUS
------------------------------ ---------
SYSTEM ONLINE
SYSAUX ONLINE
UNDOTBS1 ONLINE
TEMP ONLINE
USERS ONLINE
EXAMPLE ONLINE
CDZJ READ ONLY
LDJC READ ONLY

8 rows selected.

SQL> alter tablespace ldjc read write;

Tablespace altered.

SQL> alter tablespace cdzj read write;

Tablespace altered.

SQL> select tablespace_name,status from dba_tablespaces;

TABLESPACE_NAME                STATUS
------------------------------ ---------
SYSTEM ONLINE
SYSAUX ONLINE
UNDOTBS1 ONLINE
TEMP ONLINE
USERS ONLINE
EXAMPLE ONLINE
CDZJ ONLINE
LDJC ONLINE

8 rows selected.

4.5 Validate the transported data
In this step the transported tablespaces in the target database, while still READ ONLY, are checked by running the application against them; RMAN can also be used to check for physical and logical block corruption.

[oracle@jyrac1 dbs]$ export ORACLE_SID=jyrac1
[oracle@jyrac1 dbs]$ rman target /

Recovery Manager: Release 11.2.0.4.0 - Production on Fri Aug 18 12:13:13 2017

Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.

connected to target database: JYRAC (DBID=2655496871)

RMAN> validate tablespace LDJC,CDZJ check logical;

Starting validate at 18-AUG-17
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=139 instance=jyrac1 device type=DISK
channel ORA_DISK_1: starting validation of datafile
channel ORA_DISK_1: specifying datafile(s) for validation
input datafile file number=00012 name=+DATADG/jyrac/datafile/ldjc01
input datafile file number=00011 name=+DATADG/jyrac/datafile/cdzj01
channel ORA_DISK_1: validation complete, elapsed time: 00:01:05
List of Datafiles
=================
File Status Marked Corrupt Empty Blocks Blocks Examined High SCN
---- ------ -------------- ------------ --------------- ----------
11 OK 0 255625 262144 14690270752496
File Name: +DATADG/jyrac/datafile/cdzj01
Block Type Blocks Failing Blocks Processed
---------- -------------- ----------------
Data 0 6239
Index 0 0
Other 0 280

File Status Marked Corrupt Empty Blocks Blocks Examined High SCN
---- ------ -------------- ------------ --------------- ----------
12 OK 0 3746 655360 14690292001658
File Name: +DATADG/jyrac/datafile/ldjc01
Block Type Blocks Failing Blocks Processed
---------- -------------- ----------------
Data 0 361625
Index 0 286299
Other 0 3690

Finished validate at 18-AUG-17

5. Cleanup phase
If a separate conversion home and instance were created for the migration, the instance can be shut down and the software removed once the transportable tablespace operation is complete. The files and directories created for the cross-platform incremental backup can also be deleted, for example:
. the dfcopydir directory on the source system
. the backupformat directory on the source system
. the stageondest directory on the destination system
. the backupondest directory on the destination system
. the $TMPDIR environment variable (and its working files) on both the source and destination systems

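A minimal cleanup sketch, assuming the example file-system staging paths used elsewhere in this note (/stage_source on the source, /stage_dest on the destination); adjust to your own xtt.properties:

# source system: remove the data file copies and incremental backups (dfcopydir / backupformat)
rm -rf /stage_source/*

# destination system: remove the staged copies and backups (stageondest); ASM locations such as +RECO are cleaned with asmcmd rm instead
rm -rf /stage_dest/*

# both systems: clear the TMPDIR working files and drop the variable
rm -rf "$TMPDIR"/*
unset TMPDIR
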
xttdriver.pl Perl script options
-S prepare the source: the -S option is used only when the data files are transferred with the dbms_file_transfer method. This preparation step is run once, on the source system, against the source database. It creates the files xttnewdatafiles.txt and getfile.sql.

-G get data files from the source: the -G option is used only when the data files are transferred with the dbms_file_transfer method. The get-datafiles step is run once, on the destination system, against the destination database. -S must have been run first, and the xttnewdatafiles.txt and getfile.sql files it generates must have been transferred to the destination system. -G connects to the destination database and runs the getfile.sql script, which calls the dbms_file_transfer.get_file() procedure to pull the data files to be transported from the source database directory object (srcdir), over the database link (srclink), into the destination database directory object (dstdir).

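For reference, the call that getfile.sql issues for each data file is essentially dbms_file_transfer.get_file(); a minimal hand-written equivalent (the directory objects SRCDIR/DSTDIR, the database link SRCLINK and the file name are illustrative assumptions, not values produced by the script) would be:

BEGIN
  -- pull one data file from the source directory object to the destination directory object over the dblink
  DBMS_FILE_TRANSFER.GET_FILE(
    source_directory_object      => 'SRCDIR',
    source_file_name             => 'tbs01.dbf',
    source_database              => 'SRCLINK',
    destination_directory_object => 'DSTDIR',
    destination_file_name        => 'tbs01.dbf');
END;
/
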
-p prepare the source database for backup: the -p option is used only when RMAN backups are used to create the data file copies. This step is run once, on the source system, against the source database. It connects to the source database and runs the xttpreparesrc.sql script once for each tablespace being transported. xttpreparesrc.sql does the following:
1. verifies that the tablespace is online, in read write mode, and contains no offline data files
2. identifies the SCN that the first incremental backup will be based on and writes it to the xttplan.txt file in the $TMPDIR directory
3. creates the initial data file copies on the source system in the directory specified by the dfcopydir parameter of xtt.properties; these copies must be transferred manually to the destination system
4. creates the RMAN script $TMPDIR/rmanconvert.cmd, which will be used on the destination system to convert the data file copies to the destination system's endian format

-c convert data files: the -c option is used only when RMAN backups are used to create the initial data file copies. The conversion of the data file copies is run once, on the destination system. This step uses the rmanconvert.cmd file to convert the copies to the endian format of the destination system. The converted copies are written to the directory specified by the storageondest parameter of xtt.properties, which is where the destination database will ultimately keep its data files.

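What rmanconvert.cmd runs on the destination is, in essence, an RMAN CONVERT DATAFILE; a hand-written sketch for a single file (the staging path, the source platform name and the +DATA disk group are taken from examples elsewhere in this document, not from a generated script):

RMAN> convert datafile '/stage_dest/tbs01.dbf'
      from platform 'HP-UX IA (64-bit)'
      format '+DATA';
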
-i create incremental backup: creating incremental backups may be run one or more times against the source database. This step reads the SCNs recorded in $TMPDIR/xttplan.txt and generates the incremental backup files that will be used to roll forward the data file copies on the destination system.

-r roll forward data files: each incremental backup that was created is applied to roll the data file copies on the destination forward. This step connects to the incremental-conversion instance defined by cnvinst_home and cnvinst_sid, converts the incremental backups, and then connects to the destination database and applies them to the data file copies.

-s determine the new from_scn: determining a new from_scn may be run one or more times against the source database. This step computes the from_scn for the next incremental backup and records it in xttplan.txt, where it is picked up the next time an incremental backup is created.

-e generate the Data Pump TTS command: run once, on the destination system, against the destination database. This step creates a Data Pump Import command that imports the metadata over a database link.

-d debug: the -d option runs xttdriver.pl and the RMAN commands in debug mode. Debug mode can also be enabled by setting the environment variable XTTDEBUG=1.

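Putting the options together, the typical call sequence for the RMAN-copy method (the one used in the test below) looks roughly like this; the SIDs and paths are the ones from that test, and the -i/-s/-r cycle can be repeated as often as needed before the final incremental backup:

# source system (once): prepare, creating the data file copies and rmanconvert.cmd
export ORACLE_SID=ctdb
export TMPDIR=/home/oracle/xtts/script
$ORACLE_HOME/perl/bin/perl xttdriver.pl -p

# destination system (once): convert the copies to the destination endian format
export ORACLE_SID=rac11g1
export TMPDIR=/home/oracle/xtts/script
$ORACLE_HOME/perl/bin/perl xttdriver.pl -c

# repeat while the source stays open
$ORACLE_HOME/perl/bin/perl xttdriver.pl -i      # source: incremental backup from the recorded SCN
$ORACLE_HOME/perl/bin/perl xttdriver.pl -s      # source: record the from_scn for the next increment
$ORACLE_HOME/perl/bin/perl xttdriver.pl -r      # destination: roll the copies forward

# after the final increment, with the tablespaces READ ONLY on the source
$ORACLE_HOME/perl/bin/perl xttdriver.pl -e      # destination: generate the Data Pump TTS import command
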
xtt.properties file parameters
tablespaces: comma-separated list of the tablespaces to be transported from the source database to the destination database, e.g. tablespaces=TS1,TS2

platformid: the platform id of the source database, taken from v$database.platform_id, e.g. platformid=13

srcdir: directory object(s) in the source database pointing to the directories where the source data files are stored. Multiple directories can be separated by commas. The srcdir-to-dstdir mapping can be N:1 or N:N, i.e. files from several source directories can all be written to a single destination directory, or files from a specific source directory can be written to a specific destination directory. This parameter is used only when data files are transferred with dbms_file_transfer, e.g. srcdir=SOURCEDIR or srcdir=SRC1,SRC2

dstdir: directory object(s) in the destination database pointing to the directories where the destination data files will be stored. If multiple source directories (srcdir) are used, multiple destination directories can be defined so that files from a specific source directory are written to a specific destination directory. This parameter is used only when data files are transferred with dbms_file_transfer, e.g. dstdir=DESTDIR or dstdir=DST1,DST2

srclink: database link in the destination database pointing back to the source database; it is used by dbms_file_transfer to pull the data files. This parameter is used only when data files are transferred with dbms_file_transfer, e.g. srclink=ttslink

dfcopydir: directory on the source system where the data file copies generated by xttdriver.pl -p are stored. It must have enough space for copies of all the data files of the tablespaces being transported. It may be a directory on the destination system that is NFS-mounted on the source system, in which case the stageondest parameter on the destination system should reference the same NFS directory. See Note 359515.1 for mount option guidelines. This parameter is used only when RMAN backups are used to create the data file copies, e.g. dfcopydir=/stage_source

backupformat: directory on the source system where the incremental backup files are stored. It must have enough space for all the incremental backups created. It may be a directory on the destination system that is NFS-mounted on the source system, in which case the stageondest parameter on the destination system should reference the same NFS directory. E.g. backupformat=/stage_source

stageondest: directory on the destination system where the data file copies transferred manually from the source system are placed. It must have enough space for the data file copies, and it also holds the incremental backup files transferred from the source system. The xttdriver.pl -c convert step and the xttdriver.pl -r roll-forward step on the destination system read the data file copies and incremental backups from this directory. It may be a DBFS-mounted file system, or the same NFS directory referenced by the backupformat and dfcopydir parameters on the source system. See Note 359515.1 for mount option guidelines. E.g. stageondest=/stage_dest

storageondest: directory on the destination system where the data file copies produced by the xttdriver.pl -c convert step are written, i.e. the directory where the destination database will ultimately keep its data files. It must have enough space to hold the data files permanently. This parameter is used only when RMAN backups are used to create the initial data file copies, e.g.
storageondest=+DATA or storageondest=/oradata/test

backupondest: directory on the destination system where the incremental backup files converted during the xttdriver.pl -r roll-forward step are written. It must have enough space for the converted incremental backups. Note: if this parameter points to an ASM disk group, the asm_home and asm_sid parameters must be defined in xtt.properties; if it points to a file system directory, remove asm_home and asm_sid from xtt.properties. E.g. backupondest=+RECO

cnvinst_home: used only when a separate incremental-conversion home is needed. It is the ORACLE_HOME of the incremental-conversion instance on the destination system, e.g. cnvinst_home=/u01/app/oracle/product/11.2.0.4/xtt_home

cnvinst_sid: used only when a separate incremental-conversion home is needed. It is the ORACLE_SID of the incremental-conversion instance on the destination system, e.g. cnvinst_sid=cnvinst_xtt

asm_home: ORACLE_HOME of the ASM instance on the destination system. Note: if backupondest is set to a file system directory, remove the asm_home and asm_sid parameters. E.g. asm_home=/u01/app/11.2.0.4/grid

asm_sid: ORACLE_SID of the ASM instance on the destination system, e.g. asm_sid=+ASM1

parallel: degree of parallelism of the rman convert commands in the rmanconvert.cmd command file. If not set, xttdriver.pl uses a default of parallel=8. E.g. parallel=3

rollparallel: degree of parallelism of the xttdriver.pl -r roll-forward step, e.g. rollparallel=2

getfileparallel: degree of parallelism of the xttdriver.pl -G get-datafiles step; the default is 1 and the maximum is 8, e.g. getfileparallel=4

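For contrast with the RMAN-copy configurations shown in the test below, a minimal xtt.properties sketch for the dbms_file_transfer variant might look like this; every value is illustrative (taken from the parameter examples above), and the SRCDIR/DSTDIR directory objects and the ttslink database link must already exist:

tablespaces=TBS01,TBS02
platformid=6
srcdir=SRCDIR
dstdir=DSTDIR
srclink=ttslink
backupformat=/stage_source
stageondest=/stage_dest
backupondest=+RECO
asm_home=/u01/app/11.2.0.4/grid
asm_sid=+ASM1
parallel=3
rollparallel=2
getfileparallel=4
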
###sample   from 10.2.0.4 HP to 11.2.0.4 Linux

For migrating file-system tablespaces to file-system tablespaces with XTTS, see "oracle小知识点14--xtts传输表空间": http://blog.itpub.net/28539951/viewspace-1978401/

Test setup:
OS: source: CentOS 6.6; target: CentOS 6.6
DB: source: 10.2.0.4, file system, single instance; target: 11.2.0.4, ASM, RAC
Hosts: source: nbutest2 25.10.0.100; target: rac01 25.10.0.31
Instances: source: afa; target: afa

1.##ct66rac01
##On the target instance, create a database link back to the source and a directory object for the data files.
#The database link is only needed because the metadata will later be imported with impdp over the link; if you plan to import locally instead, it is not required.
[oracle@ct66rac01 ~]$ cd /u01/app/oracle/product/11.2.0/db_1/network/admin/
[oracle@ct66rac01 admin]$ vi tnsnames.ora
CTDB =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = 192.108.56.120)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = ctdb)
)
)
[oracle@ct66rac01 dbs]$ ORACLE_SID=ctdb
[oracle@ct66rac01 ~]$ sqlplus / as sysdba
SQL> create directory dump_oradata as '+DATA';
SQL> grant read,write on directory dump_oradata to public;
# SQL> create public database link lnk_ctdb connect to system identified by system using 'ctdb';
create public database link lnk_afa_hp connect to dbmgr identified by db1234DBA using 'afa_hp';
SQL> select * from dual@lnk_afa_hp;
/*
DUMMY
X
*/
SQL> exit

2.##ct66rac01
##Configure the NFS service on the target.
#During the whole XTTS procedure, the data files, incremental backups and scripts produced on the source all have to reach the target. Testing showed that with NFS the transfer is done the moment the files are generated, which is convenient and less error-prone; without NFS the files can also be copied manually.
[oracle@ct66rac01 ~]$ mkdir /home/oracle/xtts

[oracle@ct66rac01 ~]$ su -
[root@ct66rac01 oracle]# service nfs status
[root@ct66rac01 ~]# cat /etc/exports
/home/oracle/xtts *(rw,sync,no_root_squash,insecure,anonuid=500,anongid=500)
[root@ct66rac01 oracle]# service nfs start

3.##ct6604
##On the source, create the test users, tablespaces, tables and grants.
#The grants and tables created here are used to verify the migration afterwards.
[oracle@ct6604 ~]$ ORACLE_SID=ctdb
[oracle@ct6604 ~]$ sqlplus / as sysdba
SQL> create tablespace tbs01 datafile '/u02/oradata/ctdb/tbs01.dbf' size 10m autoextend on next 2m maxsize 4g;
SQL> create tablespace tbs02 datafile '/u02/oradata/ctdb/tbs02.dbf' size 10m autoextend on next 2m maxsize 4g;

SQL> create user test01 identified by test01 default tablespace tbs01;
SQL> create user test02 identified by test02 default tablespace tbs02;
SQL> grant connect,resource to test01;
SQL> grant connect,resource to test02;
SQL> grant execute on dbms_crypto to test02;

SQL> create table test01.tb01 as select * from dba_objects;
SQL> create table test02.tb01 as select * from dba_objects;
SQL> grant select on test01.tb01 to test02;
SQL> exit

4.##ct6604
##On the source, mount the target's NFS export locally.
[oracle@ct6604 ~]$ mkdir /oracle2/xtts
[oracle@ct6604 ~]$ su -
[root@ct6604 ~]# showmount -e 25.10.0.31
Export list for 192.108.56.101:
/home/oracle/xtts *
[root@ct6604 ~]# mount -t nfs 25.10.0.31:/home/oracle/xtts /oracle2/xtts

mount -F nfs 25.10.0.31:/home/oracle/xtts /home/oracle/xtts

chmod -R 777 /home/oracle/xtts

5.##ct6604
##On the source, unzip the rman-xttconvert scripts and set up the xtts parameter file.
#Everything here is done under /home/oracle/xtts, which is also the directory shared with the target over NFS, so nothing has to be configured again on the target side.
#Parameter notes:
tablespaces: the tablespaces to be transported
platformid: the platform ID of the source, from V$DATABASE.PLATFORM_ID
srcdir, dstdir, srclink: parameters for transfer via dbms_file_transfer; this test uses RMAN, so they are not used
dfcopydir: directory on the source where the data file copies are generated
backupformat: directory on the source where the incremental backups are generated
stageondest: directory on the target holding the source data file copies and the incremental backups
storageondest: directory on the target where the destination data files are written
backupondest: directory on the target where incremental backups are converted when the target uses ASM; with a file-system target it is best to set it to the same value as stageondest, and testing showed that even with an ASM target it can be set to the same directory as stageondest, because the increments can be rolled forward without converting them first
parallel, rollparallel, getfileparallel: degrees of parallelism, set as shown below
asm_home, asm_sid: ORACLE_HOME and SID of the ASM instance when the target uses ASM.
Not used in this test: cnvinst_home, cnvinst_sid, the ORACLE_HOME and SID of an auxiliary conversion instance on the target, needed only when a separately installed 11.2.0.4 home is used on the target

[root@ct6604 xtts]# su - oracle
[oracle@ct6604 ~]# cd /oracle2/xtts
[oracle@ct6604 xtts]$ mkdir backup script
[oracle@ct6604 xtts]$ cp /home/oracle/rman-xttconvert_2.0.zip /home/oracle/xtts/
[oracle@ct6604 xtts]$ unzip rman-xttconvert_2.0.zip
[oracle@ct6604 xtts]$ mv xtt.properties xtt.properties.bak
[oracle@ct6604 xtts]$ cat xtt.properties.bak|grep -v ^#|grep -v ^$ >xtt.properties
[oracle@ct6604 xtts]$ vi xtt.properties
[oracle@ct6604 xtts]$ cat xtt.properties
tablespaces=TBS01,TBS02
platformid=4
#srcdir=SOURCEDIR1,SOURCEDIR2
#dstdir=DESTDIR1,DESTDIR2
#srclink=TTSLINK
dfcopydir=/home/oracle/xtts/backup
backupformat=/home/oracle/xtts/backup
stageondest=/home/oracle/xtts/backup
storageondest=+DATA
backupondest=/home/oracle/xtts/backup
asm_home=/u01/app/11.2.0/grid
asm_sid=+ASM1
parallel=3
rollparallel=2
getfileparallel=4
5 (variant).##ct6604
##The same step-5 preparation repeated with the layout that the later runs actually use: the scripts sit under /home/oracle/xtts, and xtt.properties additionally defines a separate incremental-conversion instance (cnvinst_home, cnvinst_sid).

[root@ct6604 xtts]# su - oracle
[oracle@ct6604 ~]# cd /home/oracle/xtts
[oracle@ct6604 xtts]$ mkdir backup script
[oracle@ct6604 xtts]$ cp /home/oracle/rman-xttconvert_2.0.zip /home/oracle/xtts/
[oracle@ct6604 xtts]$ unzip rman-xttconvert_2.0.zip
#unzip rman_xttconvert_v3.zip
[oracle@ct6604 xtts]$ mv xtt.properties xtt.properties.bak
[oracle@ct6604 xtts]$ cat xtt.properties.bak|grep -v ^#|grep -v ^$ >xtt.properties
[oracle@ct6604 xtts]$ vi xtt.properties
[oracle@ct6604 xtts]$ cat xtt.properties
tablespaces=TBS01,TBS02
platformid=4
#srcdir=SOURCEDIR1,SOURCEDIR2
#dstdir=DESTDIR1,DESTDIR2
#srclink=TTSLINK
dfcopydir=/home/oracle/xtts/backup
backupformat=/home/oracle/xtts/backup
stageondest=/home/oracle/xtts/backup
storageondest=+DATA
backupondest=/home/oracle/xtts/backup
cnvinst_home=/db/ebank/app/oracle/product/11.2.0/db_1
cnvinst_sid=afa1
asm_home=/db/ebank/app/11.2.0/grid
asm_sid=+ASM1
parallel=3
rollparallel=2
getfileparallel=4

6.##ct6604
##On the source, run the prepare step.
#This generates the data file copies and the conversion script.
[oracle@ct6604 xtts]$ ORACLE_SID=ctdb
[oracle@ct6604 xtts]$ export TMPDIR=/home/oracle/xtts/script
## [oracle@ct6604 xtts]$ /oracle2/10g/perl/bin/perl xttdriver.pl -p
perl xttdriver.pl -p

oracle2@nbutest2:/home/oracle/xtts]$ perl xttdriver.pl -p
============================================================
trace file is /home/oracle/xtts/script/prepare_Jul26_Thu_17_42_49_102//Jul26_Thu_17_42_49_102_.log
=============================================================

--------------------------------------------------------------------
Parsing properties
--------------------------------------------------------------------

--------------------------------------------------------------------
Done parsing properties
--------------------------------------------------------------------

--------------------------------------------------------------------
Checking properties
--------------------------------------------------------------------

--------------------------------------------------------------------
Done checking properties
--------------------------------------------------------------------

--------------------------------------------------------------------
Starting prepare phase
--------------------------------------------------------------------

Prepare source for Tablespaces:
'TBS01' /home/oracle/xtts/backup
xttpreparesrc.sql for 'TBS01' started at Thu Jul 26 17:42:49 2018
xttpreparesrc.sql for ended at Thu Jul 26 17:42:50 2018

Prepare source for Tablespaces:
'TBS02' /home/oracle/xtts/backup
xttpreparesrc.sql for 'TBS02' started at Thu Jul 26 17:43:02 2018
xttpreparesrc.sql for ended at Thu Jul 26 17:43:02 2018

--------------------------------------------------------------------
Done with prepare phase
--------------------------------------------------------------------

--------------------------------------------------------------------
Find list of datafiles in system
--------------------------------------------------------------------

--------------------------------------------------------------------
Done finding list of datafiles in system
--------------------------------------------------------------------

7.##ct66rac01
##On the target, run the convert step.
#Because NFS is used, the files produced on the source do not need to be copied over before converting; just run the step directly.
[root@ct66rac01 ~]# su - oracle
[oracle@ct66rac01 ~]$ cd /home/oracle/xtts
[oracle@ct66rac01 xtts]$ ORACLE_SID=rac11g1
[oracle@ct66rac01 xtts]$ export TMPDIR=/home/oracle/xtts/script
[oracle@ct66rac01 xtts]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl -c

oracle@rac1 xtts]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl -c

--------------------------------------------------------------------
Parsing properties
--------------------------------------------------------------------

--------------------------------------------------------------------
Done parsing properties
--------------------------------------------------------------------

--------------------------------------------------------------------
Checking properties
--------------------------------------------------------------------

--------------------------------------------------------------------
Done checking properties
--------------------------------------------------------------------

--------------------------------------------------------------------
Performing convert
--------------------------------------------------------------------

--------------------------------------------------------------------
Converted datafiles listed in: /home/oracle/xtts/script/xttnewdatafiles.txt

8.##ct6604
##On the source, simulate newly generated data.
[oracle@ct6604 xtts]$ ORACLE_SID=ctdb
[oracle@ct6604 xtts]$ sqlplus / as sysdba
SQL> insert into test01.tb01 select * from test01.tb01;
SQL> insert into test02.tb01 select * from test02.tb01;
SQL> commit;
SQL> exit

9.##ct6604
##On the source, take an incremental backup.
[oracle@ct6604 xtts]$ ORACLE_SID=ctdb
[oracle@ct6604 xtts]$ TMPDIR=/home/oracle/xtts/script
## [oracle@ct6604 xtts]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl -i
perl xttdriver.pl -i

--------------------------------------------------------------------
Parsing properties
--------------------------------------------------------------------

--------------------------------------------------------------------
Done parsing properties
--------------------------------------------------------------------

--------------------------------------------------------------------
Checking properties
--------------------------------------------------------------------

--------------------------------------------------------------------
Done checking properties
--------------------------------------------------------------------

--------------------------------------------------------------------
Backup incremental
--------------------------------------------------------------------
Prepare newscn for Tablespaces: 'TBS01'
Prepare newscn for Tablespaces: 'TBS02'
Prepare newscn for Tablespaces: ''
rman target / cmdfile /home/oracle/xtts/script/rmanincr.cmd

Recovery Manager: Release 10.2.0.4.0 - Production on Thu Jul 26 18:18:03 2018

Copyright (c) 1982, 2007, Oracle. All rights reserved.

connected to target database: AFA (DBID=1231362390)

RMAN> set nocfau;
2> host 'echo ts::TBS01';
3> backup incremental from scn 3168157425
4> tag tts_incr_update tablespace 'TBS01' format
5> '/home/oracle/xtts/backup/%U';
6> set nocfau;
7> host 'echo ts::TBS02';
8> backup incremental from scn 3168157441
9> tag tts_incr_update tablespace 'TBS02' format
10> '/home/oracle/xtts/backup/%U';
11>
executing command: SET NOCFAU
using target database control file instead of recovery catalog

ts::TBS01
host command complete

Starting backup at 26-JUL-18
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=168 devtype=DISK
channel ORA_DISK_1: starting full datafile backupset
channel ORA_DISK_1: specifying datafile(s) in backupset
input datafile fno=00025 name=/datalv03/afa/tbs01.dbf
channel ORA_DISK_1: starting piece 1 at 26-JUL-18
channel ORA_DISK_1: finished piece 1 at 26-JUL-18
piece handle=/home/oracle/xtts/backup/7et904et_1_1 tag=TTS_INCR_UPDATE comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:07
Finished backup at 26-JUL-18

executing command: SET NOCFAU

ts::TBS02
host command complete

Starting backup at 26-JUL-18
using channel ORA_DISK_1
channel ORA_DISK_1: starting full datafile backupset
channel ORA_DISK_1: specifying datafile(s) in backupset
input datafile fno=00026 name=/datalv03/afa/tbs02.dbf
channel ORA_DISK_1: starting piece 1 at 26-JUL-18
channel ORA_DISK_1: finished piece 1 at 26-JUL-18
piece handle=/home/oracle/xtts/backup/7ft904f5_1_1 tag=TTS_INCR_UPDATE comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:35
Finished backup at 26-JUL-18

Recovery Manager complete.

--------------------------------------------------------------------
Done backing up incrementals
--------------------------------------------------------------------

10.##ct66rac01
##On the target, roll the data files forward with the increment.
#Because NFS is used, the files produced on the source do not need to be copied over first; just run the step directly.
#The roll forward is applied to the converted data files.
[oracle@ct66rac01 xtts]$ ORACLE_SID=rac11g1
[oracle@ct66rac01 xtts]$ export TMPDIR=/home/oracle/xtts/script
cd /home/oracle/xtts/
export XTTDEBUG=1
[oracle@ct66rac01 xtts]$

[oracle@rac1 xtts]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl -r

--------------------------------------------------------------------
Parsing properties
--------------------------------------------------------------------
Key: backupondest
Values: /home/oracle/xtts/backup
Key: platformid
Values: 4
Key: backupformat
Values: /home/oracle/xtts/backup
Key: parallel
Values: 3
Key: storageondest
Values: +DATA
Key: dfcopydir
Values: /home/oracle/xtts/backup
Key: asm_sid
Values: +ASM1
Key: cnvinst_home
Values: /db/ebank/app/oracle/product/11.2.0/db_1
Key: cnvinst_sid
Values: afa1
Key: rollparallel
Values: 2
Key: stageondest
Values: /home/oracle/xtts/backup
Key: tablespaces
Values: TBS01,TBS02
Key: getfileparallel
Values: 4
Key: asm_home
Values: /db/ebank/app/11.2.0/grid

--------------------------------------------------------------------
Done parsing properties
--------------------------------------------------------------------

--------------------------------------------------------------------
Checking properties
--------------------------------------------------------------------
ARGUMENT tablespaces
ARGUMENT platformid
ARGUMENT backupformat
ARGUMENT stageondest
ARGUMENT backupondest

--------------------------------------------------------------------
Done checking properties
--------------------------------------------------------------------
ORACLE_SID : afa1
ORACLE_HOME : /db/ebank/app/oracle/product/11.2.0/db_1

--------------------------------------------------------------------
Start rollforward
--------------------------------------------------------------------
convert instance: /db/ebank/app/oracle/product/11.2.0/db_1

convert instance: afa1

ORACLE instance started.

Total System Global Area 1.3429E+10 bytes
Fixed Size 2265944 bytes
Variable Size 7180651688 bytes
Database Buffers 6241124352 bytes
Redo Buffers 4612096 bytes
rdfno 25

BEFORE ROLLPLAN

datafile number : 25

datafile name : +DATA/tbs01_25.xtf

AFTER ROLLPLAN

rdfno 26

BEFORE ROLLPLAN

datafile number : 26

datafile name : +DATA/tbs02_26.xtf

AFTER ROLLPLAN

CONVERTED BACKUP PIECE/home/oracle/xtts/backup/xib_7et904et_1_1_25

PL/SQL procedure successfully completed.
CONVERTED BACKUP PIECE/home/oracle/xtts/backup/xib_7ft904f5_1_1_26

PL/SQL procedure successfully completed.
Entering RollForward
After applySetDataFile
Done: applyDataFileTo
Done: applyDataFileTo
Done: RestoreSetPiece
Done: RestoreBackupPiece

PL/SQL procedure successfully completed.
asmcmd rm /home/oracle/xtts/backup/xib_7ft904f5_1_1_26 /db/ebank/app/11.2.0/grid .. +ASM1

Entering RollForward
After applySetDataFile
Done: applyDataFileTo
Done: applyDataFileTo
Done: RestoreSetPiece
Done: RestoreBackupPiece

PL/SQL procedure successfully completed.
asmcmd rm /home/oracle/xtts/backup/xib_7et904et_1_1_25 /db/ebank/app/11.2.0/grid .. +ASM1
--The error below can be ignored; it only means that asmcmd could not delete the backup piece (which is on a file system).

Can't locate Exporter/Heavy.pm in @INC (@INC contains: /db/ebank/app/11.2.0/grid/perl/lib/5.10.0/x86_64-linux-thread-multi /db/ebank/app/11.2.0/grid/perl/lib/5.10.0 /db/ebank/app/11.2.0/grid/perl/lib/site_perl/5.10.0/x86_64-linux-thread-multi /db/ebank/app/11.2.0/grid/perl/lib/site_perl/5.10.0 /db/ebank/app/11.2.0/grid/lib /db/ebank/app/11.2.0/grid/lib/asmcmd /db/ebank/app/11.2.0/grid/rdbms/lib/asmcmd /db/ebank/app/11.2.0/grid/perl/lib/5.10.0/x86_64-linux-thread-multi /db/ebank/app/11.2.0/grid/perl/lib/5.10.0 /db/ebank/app/11.2.0/grid/perl/lib/site_perl/5.10.0/x86_64-linux-thread-multi /db/ebank/app/11.2.0/grid/perl/lib/site_perl/5.10.0 /db/ebank/app/11.2.0/grid/perl/lib/5.10.0/x86_64-linux-thread-multi /db/ebank/app/11.2.0/grid/perl/lib/5.10.0/x86_64-linux-thread-multi /db/ebank/app/11.2.0/grid/perl/lib/5.10.0 /db/ebank/app/11.2.0/grid/perl/lib/site_perl/5.10.0/x86_64-linux-thread-multi /db/ebank/app/11.2.0/grid/perl/lib/site_perl/5.10.0 /db/ebank/app/11.2.0/grid/perl/lib/site_perl .) at /db/ebank/app/11.2.0/grid/perl/lib/5.10.0/Exporter.pm line 18.
BEGIN failed--compilation aborted at /db/ebank/app/11.2.0/grid/bin/asmcmdcore line 146.
ASMCMD:

Can't locate Exporter/Heavy.pm in @INC (@INC contains: /db/ebank/app/11.2.0/grid/perl/lib/5.10.0/x86_64-linux-thread-multi /db/ebank/app/11.2.0/grid/perl/lib/5.10.0 /db/ebank/app/11.2.0/grid/perl/lib/site_perl/5.10.0/x86_64-linux-thread-multi /db/ebank/app/11.2.0/grid/perl/lib/site_perl/5.10.0 /db/ebank/app/11.2.0/grid/lib /db/ebank/app/11.2.0/grid/lib/asmcmd /db/ebank/app/11.2.0/grid/rdbms/lib/asmcmd /db/ebank/app/11.2.0/grid/perl/lib/5.10.0/x86_64-linux-thread-multi /db/ebank/app/11.2.0/grid/perl/lib/5.10.0 /db/ebank/app/11.2.0/grid/perl/lib/site_perl/5.10.0/x86_64-linux-thread-multi /db/ebank/app/11.2.0/grid/perl/lib/site_perl/5.10.0 /db/ebank/app/11.2.0/grid/perl/lib/5.10.0/x86_64-linux-thread-multi /db/ebank/app/11.2.0/grid/perl/lib/5.10.0/x86_64-linux-thread-multi /db/ebank/app/11.2.0/grid/perl/lib/5.10.0 /db/ebank/app/11.2.0/grid/perl/lib/site_perl/5.10.0/x86_64-linux-thread-multi /db/ebank/app/11.2.0/grid/perl/lib/site_perl/5.10.0 /db/ebank/app/11.2.0/grid/perl/lib/site_perl .) at /db/ebank/app/11.2.0/grid/perl/lib/5.10.0/Exporter.pm line 18.
BEGIN failed--compilation aborted at /db/ebank/app/11.2.0/grid/bin/asmcmdcore line 146.
ASMCMD:

--

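If the asmcmd cleanup keeps failing like this, the already-applied converted backup pieces can simply be removed from the staging directory by hand (a sketch, using the staging path of this test):

# remove the leftover converted incremental pieces (xib_*) from the file-system stage directory
rm -f /home/oracle/xtts/backup/xib_*
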
11.##ct6604
##On the source, simulate more new data, then set the tablespaces being transported to READ ONLY.
#Downtime is counted from this point on.
[oracle@ct6604 xtts]$ ORACLE_SID=ctdb
[oracle@ct6604 xtts]$ sqlplus / as sysdba

SQL> insert into test01.tb01 select * from test01.tb01;
SQL> insert into test02.tb01 select * from test02.tb01;
SQL> commit;

SQL> alter tablespace tbs01 read only;
SQL> alter tablespace tbs02 read only;

SQL> exit

12.##ct6604
##On the source, take the final incremental backup.
[oracle@ct6604 xtts]$ ORACLE_SID=ctdb
[oracle@ct6604 xtts]$ TMPDIR=/home/oracle/xtts/script
export XTTDEBUG=1
### [oracle@ct6604 xtts]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl -i
perl xttdriver.pl -i

[oracle2@nbutest2:/home/oracle/xtts]$ perl xttdriver.pl -i

--------------------------------------------------------------------
Parsing properties
--------------------------------------------------------------------
Key: backupondest
Values: /home/oracle/xtts/backup
Key: platformid
Values: 4
Key: backupformat
Values: /home/oracle/xtts/backup
Key: parallel
Values: 3
Key: storageondest
Values: +DATA
Key: dfcopydir
Values: /home/oracle/xtts/backup
Key: asm_sid
Values: +ASM1
Key: cnvinst_home
Values: /db/ebank/app/oracle/product/11.2.0/db_1
Key: cnvinst_sid
Values: afa1
Key: rollparallel
Values: 2
Key: stageondest
Values: /home/oracle/xtts/backup
Key: tablespaces
Values: TBS01,TBS02
Key: getfileparallel
Values: 4
Key: asm_home
Values: /db/ebank/app/11.2.0/grid

--------------------------------------------------------------------
Done parsing properties
--------------------------------------------------------------------

--------------------------------------------------------------------
Checking properties
--------------------------------------------------------------------
ARGUMENT tablespaces
ARGUMENT platformid
ARGUMENT backupformat
ARGUMENT stageondest

--------------------------------------------------------------------
Done checking properties
--------------------------------------------------------------------
ORACLE_SID : afa
ORACLE_HOME : /oracle2/10g

--------------------------------------------------------------------
Backup incremental
--------------------------------------------------------------------
TABLESPACE STRING :'TBS01'
Prepare newscn for Tablespaces: 'TBS01'
TBS01::::3168199234
25
TABLESPACE STRING :'TBS02'
Prepare newscn for Tablespaces: 'TBS02'
TBS02::::3168199245
26
TABLESPACE STRING :''
Prepare newscn for Tablespaces: ''

Start backup incremental
Crossed mv
Crossed mv /home/oracle/xtts/backup
Generate /home/oracle/xtts/script/rmanincr.cmd
rman target / debug trace /home/oracle/xtts/script/rmantrc_13948_345_incrbackup.trc cmdfile /home/oracle/xtts/script/rmanincr.cmd

Recovery Manager: Release 10.2.0.4.0 - Production on Fri Jul 27 10:52:12 2018

Copyright (c) 1982, 2007, Oracle. All rights reserved.

RMAN-06005: connected to target database: AFA (DBID=1231362390)

RMAN> set nocfau;
2> host 'echo ts::TBS01';
3> backup incremental from scn 3168157425
4> tag tts_incr_update tablespace 'TBS01' format
5> '/home/oracle/xtts/backup/%U';
6> set nocfau;
7> host 'echo ts::TBS02';
8> backup incremental from scn 3168157441
9> tag tts_incr_update tablespace 'TBS02' format
10> '/home/oracle/xtts/backup/%U';
11>
RMAN-03023: executing command: SET NOCFAU
RMAN-06009: using target database control file instead of recovery catalog

ts::TBS01
RMAN-06134: host command complete

RMAN-03090: Starting backup at 27-JUL-18
RMAN-08030: allocated channel: ORA_DISK_1
RMAN-08500: channel ORA_DISK_1: sid=56 devtype=DISK
RMAN-08008: channel ORA_DISK_1: starting full datafile backupset
RMAN-08010: channel ORA_DISK_1: specifying datafile(s) in backupset
RMAN-08522: input datafile fno=00025 name=/datalv03/afa/tbs01.dbf
RMAN-08038: channel ORA_DISK_1: starting piece 1 at 27-JUL-18
RMAN-08044: channel ORA_DISK_1: finished piece 1 at 27-JUL-18
RMAN-08530: piece handle=/home/oracle/xtts/backup/7gt91un0_1_1 tag=TTS_INCR_UPDATE comment=NONE
RMAN-08540: channel ORA_DISK_1: backup set complete, elapsed time: 00:00:15
RMAN-03091: Finished backup at 27-JUL-18

RMAN-03023: executing command: SET NOCFAU

ts::TBS02
RMAN-06134: host command complete

RMAN-03090: Starting backup at 27-JUL-18
RMAN-12016: using channel ORA_DISK_1
RMAN-08008: channel ORA_DISK_1: starting full datafile backupset
RMAN-08010: channel ORA_DISK_1: specifying datafile(s) in backupset
RMAN-08522: input datafile fno=00026 name=/datalv03/afa/tbs02.dbf
RMAN-08038: channel ORA_DISK_1: starting piece 1 at 27-JUL-18
RMAN-08044: channel ORA_DISK_1: finished piece 1 at 27-JUL-18
RMAN-08530: piece handle=/home/oracle/xtts/backup/7ht91ung_1_1 tag=TTS_INCR_UPDATE comment=NONE
RMAN-08540: channel ORA_DISK_1: backup set complete, elapsed time: 00:01:05
RMAN-03091: Finished backup at 27-JUL-18

Recovery Manager complete.

TSNAME:TBS01
TSNAME:TBS02

--------------------------------------------------------------------
Done backing up incrementals
--------------------------------------------------------------------

13.##ct66rac01
##On the target, roll forward with the final increment.
#Because NFS is used, the files produced on the source do not need to be copied over first; just run the step directly.
[oracle@ct66rac01 ~]$ cd /home/oracle/xtts
## [oracle@ct66rac01 xtts]$ ORACLE_SID=rac11g1
[oracle@ct66rac01 xtts]$ export TMPDIR=/home/oracle/xtts/script
export XTTDEBUG=1
[oracle@ct66rac01 xtts]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl -r

[oracle@rac1 xtts]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl -r

--------------------------------------------------------------------
Parsing properties
--------------------------------------------------------------------
Key: backupondest
Values: /home/oracle/xtts/backup
Key: platformid
Values: 4
Key: backupformat
Values: /home/oracle/xtts/backup
Key: parallel
Values: 3
Key: storageondest
Values: +DATA
Key: dfcopydir
Values: /home/oracle/xtts/backup
Key: asm_sid
Values: +ASM1
Key: cnvinst_home
Values: /db/ebank/app/oracle/product/11.2.0/db_1
Key: cnvinst_sid
Values: afa1
Key: rollparallel
Values: 2
Key: stageondest
Values: /home/oracle/xtts/backup
Key: tablespaces
Values: TBS01,TBS02
Key: getfileparallel
Values: 4
Key: asm_home
Values: /db/ebank/app/11.2.0/grid

--------------------------------------------------------------------
Done parsing properties
--------------------------------------------------------------------

--------------------------------------------------------------------
Checking properties
--------------------------------------------------------------------
ARGUMENT tablespaces
ARGUMENT platformid
ARGUMENT backupformat
ARGUMENT stageondest
ARGUMENT backupondest

--------------------------------------------------------------------
Done checking properties
--------------------------------------------------------------------
ORACLE_SID : afa1
ORACLE_HOME : /db/ebank/app/oracle/product/11.2.0/db_1

--------------------------------------------------------------------
Start rollforward
--------------------------------------------------------------------
convert instance: /db/ebank/app/oracle/product/11.2.0/db_1

convert instance: afa1

ORACLE instance started.

Total System Global Area 1.3429E+10 bytes
Fixed Size 2265944 bytes
Variable Size 7180651688 bytes
Database Buffers 6241124352 bytes
Redo Buffers 4612096 bytes
rdfno 25

BEFORE ROLLPLAN

datafile number : 25

datafile name : +DATA/tbs01_25.xtf

AFTER ROLLPLAN

rdfno 26

BEFORE ROLLPLAN

datafile number : 26

datafile name : +DATA/tbs02_26.xtf

AFTER ROLLPLAN

CONVERTED BACKUP PIECE/home/oracle/xtts/backup/xib_7gt91un0_1_1_25

PL/SQL procedure successfully completed.
Entering RollForward
After applySetDataFile
Done: applyDataFileTo
Done: applyDataFileTo
Done: RestoreSetPiece
Done: RestoreBackupPiece

PL/SQL procedure successfully completed.
asmcmd rm /home/oracle/xtts/backup/xib_7gt91un0_1_1_25 /db/ebank/app/11.2.0/grid .. +ASM1

Can't locate Exporter/Heavy.pm in @INC (@INC contains: /db/ebank/app/11.2.0/grid/perl/lib/5.10.0/x86_64-linux-thread-multi /db/ebank/app/11.2.0/grid/perl/lib/5.10.0 /db/ebank/app/11.2.0/grid/perl/lib/site_perl/5.10.0/x86_64-linux-thread-multi /db/ebank/app/11.2.0/grid/perl/lib/site_perl/5.10.0 /db/ebank/app/11.2.0/grid/lib /db/ebank/app/11.2.0/grid/lib/asmcmd /db/ebank/app/11.2.0/grid/rdbms/lib/asmcmd /db/ebank/app/11.2.0/grid/perl/lib/5.10.0/x86_64-linux-thread-multi /db/ebank/app/11.2.0/grid/perl/lib/5.10.0 /db/ebank/app/11.2.0/grid/perl/lib/site_perl/5.10.0/x86_64-linux-thread-multi /db/ebank/app/11.2.0/grid/perl/lib/site_perl/5.10.0 /db/ebank/app/11.2.0/grid/perl/lib/5.10.0/x86_64-linux-thread-multi /db/ebank/app/11.2.0/grid/perl/lib/5.10.0/x86_64-linux-thread-multi /db/ebank/app/11.2.0/grid/perl/lib/5.10.0 /db/ebank/app/11.2.0/grid/perl/lib/site_perl/5.10.0/x86_64-linux-thread-multi /db/ebank/app/11.2.0/grid/perl/lib/site_perl/5.10.0 /db/ebank/app/11.2.0/grid/perl/lib/site_perl .) at /db/ebank/app/11.2.0/grid/perl/lib/5.10.0/Exporter.pm line 18.
BEGIN failed--compilation aborted at /db/ebank/app/11.2.0/grid/bin/asmcmdcore line 146.
ASMCMD:

CONVERTED BACKUP PIECE/home/oracle/xtts/backup/xib_7ht91ung_1_1_26

PL/SQL procedure successfully completed.
Entering RollForward
After applySetDataFile
Done: applyDataFileTo
Done: applyDataFileTo
Done: RestoreSetPiece
Done: RestoreBackupPiece

PL/SQL procedure successfully completed.
asmcmd rm /home/oracle/xtts/backup/xib_7ht91ung_1_1_26 /db/ebank/app/11.2.0/grid .. +ASM1

Can't locate Exporter/Heavy.pm in @INC (@INC contains: /db/ebank/app/11.2.0/grid/perl/lib/5.10.0/x86_64-linux-thread-multi /db/ebank/app/11.2.0/grid/perl/lib/5.10.0 /db/ebank/app/11.2.0/grid/perl/lib/site_perl/5.10.0/x86_64-linux-thread-multi /db/ebank/app/11.2.0/grid/perl/lib/site_perl/5.10.0 /db/ebank/app/11.2.0/grid/lib /db/ebank/app/11.2.0/grid/lib/asmcmd /db/ebank/app/11.2.0/grid/rdbms/lib/asmcmd /db/ebank/app/11.2.0/grid/perl/lib/5.10.0/x86_64-linux-thread-multi /db/ebank/app/11.2.0/grid/perl/lib/5.10.0 /db/ebank/app/11.2.0/grid/perl/lib/site_perl/5.10.0/x86_64-linux-thread-multi /db/ebank/app/11.2.0/grid/perl/lib/site_perl/5.10.0 /db/ebank/app/11.2.0/grid/perl/lib/5.10.0/x86_64-linux-thread-multi /db/ebank/app/11.2.0/grid/perl/lib/5.10.0/x86_64-linux-thread-multi /db/ebank/app/11.2.0/grid/perl/lib/5.10.0 /db/ebank/app/11.2.0/grid/perl/lib/site_perl/5.10.0/x86_64-linux-thread-multi /db/ebank/app/11.2.0/grid/perl/lib/site_perl/5.10.0 /db/ebank/app/11.2.0/grid/perl/lib/site_perl .) at /db/ebank/app/11.2.0/grid/perl/lib/5.10.0/Exporter.pm line 18.
BEGIN failed--compilation aborted at /db/ebank/app/11.2.0/grid/bin/asmcmdcore line 146.
ASMCMD:

--------------------------------------------------------------------
End of rollforward phase
--------------------------------------------------------------------

14.##ct66rac01
##On the target, generate the import script.
#Because dstdir and srclink were not set earlier, the generated import script needs the database link and directory names added by hand, plus nologfile=y.
[oracle@ct66rac01 xtts]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl -e

[oracle@rac1 xtts]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl -e

--------------------------------------------------------------------
Parsing properties
--------------------------------------------------------------------
--------------------------------------------------------------------
Done parsing properties
--------------------------------------------------------------------

--------------------------------------------------------------------
Checking properties
--------------------------------------------------------------------
ARGUMENT tablespaces
ARGUMENT platformid
ARGUMENT backupformat
ARGUMENT stageondest

--------------------------------------------------------------------
Done checking properties
--------------------------------------------------------------------
ORACLE_SID : afa1
ORACLE_HOME : /db/ebank/app/oracle/product/11.2.0/db_1

--------------------------------------------------------------------
Generating plugin
--------------------------------------------------------------------

--------------------------------------------------------------------
Done generating plugin file /home/oracle/xtts/script/xttplugin.txt
--------------------------------------------------------------------

15.##ct66rac01
##On the target, create the users and import the transported tablespaces.
[oracle@ct66rac01 ~]$ ORACLE_SID=rac11g1
[oracle@ct66rac01 xtts]$ sqlplus / as sysdba
SQL> create user test01 identified by test01 ;
SQL> create user test02 identified by test02 ;
SQL> grant connect,resource to test01;
SQL> grant connect,resource to test02;
SQL> exit

[oracle@ct66rac01 ~]$ ORACLE_SID=rac11g1
/home/oracle/xtts/script/xttplugin.txt
[oracle@ct66rac01 ~]$ impdp directory=dump_oradata nologfile=y network_link=lnk_afa_hp transport_full_check=no transport_tablespaces=TBS01,TBS02 transport_datafiles='+DATA/tbs01_5.xtf','+DATA/tbs02_6.xtf'

impdp directory=dump_oradata nologfile=y \
network_link=lnk_afa_hp transport_full_check=no \
transport_tablespaces=TBS01,TBS02 \
transport_datafiles='+DATA/tbs01_25.xtf','+DATA/tbs02_26.xtf'

Import: Release 11.2.0.4.0 - Production on Fri Jan 15 17:18:14 2016

Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.

Username: system
Password:

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
Starting "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01": system/******** directory=dump_oradata nologfile=y network_link=lnk_ctdb transport_full_check=no transport_tablespaces=TBS01,TBS02 transport_datafiles=+DATA/tbs01_5.xtf,+DATA/tbs02_6.xtf
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Job "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01" successfully completed at Fri Jan 15 17:19:07 2016 elapsed 0 00:00:48

16.##ct66rac01
##On the target, verify that the imported data and privileges match the source.
#Here we find that the execute on dbms_crypto grant given to test02 on the source was not imported; this is an inherent limitation of impdp. Sort out such privileges before running XTTS so they do not add to the downtime.
[oracle@ct66rac01 xtts]$ sqlplus / as sysdba
SQL> alter tablespace tbs01 read write;
SQL> alter tablespace tbs02 read write;
SQL> alter user test01 default tablespace tbs01;
SQL> alter user test02 default tablespace tbs02;

SQL> select count(1) from test01.tb01;
/*
COUNT(1)
345732
*/
SQL> select count(1) from test02.tb01;
SQL> select * from dba_tab_privs where grantee='TEST02';
/*
GRANTEE OWNER TABLE_NAME GRANTOR PRIVILEGE GRANTABLE HIERARCHY
TEST02 TEST01 TB01 TEST01 SELECT NO NO
*/
#select * from dba_tab_privs where owner ='SYS' and grantee='TEST02';
SQL> grant execute on dbms_crypto to test02;
SQL> exit

Privilege-comparison SQL:

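A minimal privilege-comparison sketch, run on the target and assuming the lnk_afa_hp database link and the TEST01/TEST02 users from this test:

-- system privileges granted on the source but missing on the target
select grantee, privilege from dba_sys_privs@lnk_afa_hp where grantee in ('TEST01','TEST02')
minus
select grantee, privilege from dba_sys_privs where grantee in ('TEST01','TEST02');

-- object privileges (e.g. execute on SYS.DBMS_CRYPTO) granted on the source but missing on the target
select grantee, owner, table_name, privilege from dba_tab_privs@lnk_afa_hp where grantee in ('TEST01','TEST02')
minus
select grantee, owner, table_name, privilege from dba_tab_privs where grantee in ('TEST01','TEST02');

-- roles granted on the source but missing on the target
select grantee, granted_role from dba_role_privs@lnk_afa_hp where grantee in ('TEST01','TEST02')
minus
select grantee, granted_role from dba_role_privs where grantee in ('TEST01','TEST02');
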
A few small problems hit during testing:
1. Cant find xttplan.txt, TMPDIR undefined at xttdriver.pl line 1185.
Make sure the environment variable TMPDIR=/home/oracle/xtts/script is set.
2. Unable to fetch platform name
ORACLE_SID was not set before running xttdriver.pl.
3. Some failure occurred. Check /home/oracle/xtts/script/FAILED for more details
If you have fixed the issue, please delete /home/oracle/xtts/script/FAILED and run it
again OR run xttdriver.pl with -L option
After an xttdriver.pl run fails, delete the FAILED file before running it again.
4. Can't locate strict.pm in @INC
Use $ORACLE_HOME/perl/bin/perl instead of the system perl.

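A short pre-flight sketch that avoids problems 1, 2 and 4 above (the SID and paths are the ones from this test; substitute your own):

export ORACLE_SID=ctdb                        # problem 2: set the SID before running xttdriver.pl
export TMPDIR=/home/oracle/xtts/script        # problem 1: xttplan.txt, rmanincr.cmd etc. live here
rm -f $TMPDIR/FAILED                          # problem 3: only after the reported failure has been fixed
$ORACLE_HOME/perl/bin/perl xttdriver.pl -i    # problem 4: use the database home's perl, not the system perl
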
Notes:
The test is complete, and it is fairly straightforward: with the preparation in place, it comes down to running $ORACLE_HOME/perl/bin/perl xttdriver.pl a few times on the source and target and then running impdp. Using NFS in this test removed the manual file transfers and made the whole procedure much cleaner.
GoldenGate is another good way to cut migration downtime. For a whole-database migration, whether or not the platforms differ, as long as the endian format is the same you can also consider Data Guard first; see Data Guard Support for Heterogeneous Primary and Physical Standbys in Same Data Guard Configuration (Doc ID 413484.1).
