Step by Step Process of Migrating non-CDBs and PDBs Using ASM for File Storage (Doc ID 1576755.1)
APPLIES TO:
Oracle Database - Enterprise Edition - Version 11.2.0.3 and later
Information in this document applies to any platform.
PURPOSE
To assist with migrating non-CDB databases, 12c container databases (CDB) and 12c pluggable databases (PDB) to a new container database when ASM is being used for storage.
SCOPE
This note will cover:
- Migrating a 12c non-CDB to a PDB whose container has a physical standby, using 12c RMAN Active Database Duplication to copy the source files
This section should be used when you have a non-CDB that has either been created with or upgraded to Oracle Database 12c and you wish to convert this 12c non-CDB into a pluggable database.
- Migrating a 12c PDB into another CDB whose container has a physical standby, using 12c RMAN Active Database Duplication to copy the source files
This section should be used when you have a PDB that you wish to unplug from one container database and plug into another container database.
- Migrating an 11.2.0.3 non-CDB using dbms_file_transfer scripts from note 1389592.1, performing full transportable export/import
This section should be used when you want to migrate a pre-12c non-CDB directly into a PDB.
Each of these methods uses incremental roll forward of datafile copies, minimizing the downtime required for datafile movement.
For environments running 12.1.0.2, patch 20464614 is required to be applied to the 12c Oracle Database homes.
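As a quick way to confirm whether a given home already includes the patch, the OPatch inventory can be checked (a minimal sketch; adjust the home path to your environment):

$ $ORACLE_HOME/OPatch/opatch lsinventory | grep 20464614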
All of the examples below assume the following environment:
- Source environment
- Database is a Real Application Clusters database
- Database can be part of a Data Guard configuration; this has no impact
- Database files are stored in ASM
- Destination environment
- Part of a Data Guard configuration with an Active Data Guard physical standby database in place
- Both the primary database and standby database are Real Application Clusters databases
- Database files are stored in ASM
This process assumes that files from the source database will need to be moved to a new location, either due to file management preferences (e.g. wanting the files kept under the directory structure for a particular database) or because the source and destination environments do not share storage. If the source and destination environments share storage (primary databases only), it is possible to perform plugin operations without needing to move files. The operations noted below in Handling of Standby Databases during plugin will still need to be considered. It is possible that you may not need to move files on the primary site but will need to do so in order to maintain a standby database. The steps described here can be performed on both the primary and standby as the examples show, on the standby only, or on the primary only (if no standby exists).
If you do not wish to migrate files and your source and destination are on shared storage, you can maintain the standby database as well. Please see the following notes:
For non-CDB Migration: Reusing the Source Standby Database Files When Plugging a non-CDB as a PDB into the Primary Database of a Data Guard Configuration Document 2273304.1
For PDB Migration: Reusing the Source Standby Database Files When Plugging a PDB into the Primary Database of a Data Guard Configuration Document 2273829.1
DETAILS
Goal
Provide step by step examples of migrating non-Container Database (non-CDB) and Pluggable Database (PDB) data into a different CDB on ASM.
Solution
These examples were created with 12.1.0.1 with additional patches. Prior to using any of the examples in this document, you must ensure merge patch 17759598 is applied. All of the required patches are included in 12.1.0.1 Patch Set Update 2 or later.
Compared with performing a full transportable export/import, migrating via the plugin method:
- Provides access to more tools to assist with the migration
- Simplifies the migration by allowing the database to be treated as a complete entity for migration
- Simplifies the migration process with easier, less error-prone steps
- Generally results in shorter downtime than performing full transportable export/import
Also, prior to performing any plugin operations, it is highly recommended to execute the DBMS_PDB.CHECK_PLUG_COMPATIBILITY() function and resolve any issues that would prevent the plugin from succeeding.
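For example, after creating the manifest with DBMS_PDB.DESCRIBE (shown later in this note), a minimal check run in the destination CDB root might look like the following; the manifest path and PDB name are illustrative:

SQL> set serveroutput on
SQL> declare
  2    compatible boolean;
  3  begin
  4    compatible := dbms_pdb.check_plug_compatibility(
  5      pdb_descr_file => '/home/oracle/demo/noncdb_unplug.xml',
  6      pdb_name       => 'MY_PDB');
  7    dbms_output.put_line(case when compatible then 'Compatible' else 'NOT compatible' end);
  8  end;
  9  /
SQL> -- reasons for any incompatibility are recorded in PDB_PLUG_IN_VIOLATIONS
SQL> select name, cause, message from pdb_plug_in_violations where status <> 'RESOLVED';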
Multitenant Plugin with OMF/ASM
Oracle Managed Files (OMF) is a feature where the RDBMS is given control of the naming and location of files. Files are named using system-generated names and are placed in the location specified in the DB_CREATE_FILE_DEST initialization parameter. When you use Oracle Automatic Storage Management (ASM) to manage and store your files, OMF is also used to simplify file management. If a Data Guard physical standby database is also in use, there are initialization parameters (*_FILE_NAME_CONVERT, STANDBY_FILE_MANAGEMENT) to assist with converting names of existing and newly created datafiles when OMF is in use.
Oracle Multitenant plugin and Data Pump import for full database operations use these same initialization parameters when the operation is performed. Because these operations are very specific about file naming (the file names in the multitenant manifest or in the full transportable database import command must match all files exactly for the operation to complete), there are some difficulties when OMF/ASM is used, files are copied, and a physical standby database is in place. There have been improvements made to the multitenant plugin operation on both the primary and standby environments; however, at this time additional work must still be done on the standby database when a full transportable database import operation is performed.
The ASM directory structure and naming convention is as follows. A traditional file name in ASM consists of 3 parts: <tablespace_name>.<ASM file number>.<incarnation>. You are not allowed to directly create files with this naming structure. The directory structure for datafiles traditionally consists of <DB_CREATE_FILE_DEST diskgroup_name>/<db_unique_name>/DATAFILE/<files>. With Oracle Multitenant, the directory structure changes for each pluggable database (PDB) that exists in a container database (CDB). Each database (non-CDB, CDB, PDB) as of 12gR1 has a globally unique identifier (GUID) associated with it. For pre-12c non-CDB databases, the GUID will be created when the database is upgraded to 12c. This GUID will not change throughout the life of the container/non-CDB and is one of the main reasons a PDB should be cloned (cloning generates a new GUID) rather than copying the same PDB to multiple locations and plugging it into multiple CDBs. The GUID for a particular container/non-CDB can be found by querying V$CONTAINERS in any 12c database. To assist with identifying which files belong to which PDB, an ASM directory structure of <DB_CREATE_FILE_DEST diskgroup_name>/<db_unique_name>/<GUID>/DATAFILE/<files> is used for PDBs.
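For example, to display the GUID of the current container (the output shown is illustrative, using the source non-CDB from the examples below):

SQL> select name, guid from v$containers;

NAME      GUID
--------- --------------------------------
NONCDB    E57CA3F121DFD48AE043F646F40A61EA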
RMAN has been enhanced so that, when copying files between databases, it recognizes the GUID and acts accordingly when writing the files:
- If the clone/auxiliary instance being connected to for clone operations is a CDB root, the GUID of the RMAN target database is used to determine the directory structure in which to write the datafiles. Connect to the CDB root as the RMAN clone/auxiliary instance when the source database is a 12c non-CDB or PDB that is going to be migrated and plugged into a remote CDB as a brand new PDB. This will ensure that the files copied by RMAN are written to the GUID directory of the source database for the migration.
- If the clone/auxiliary instance being connected to for clone operations is a PDB, the GUID of the auxiliary PDB will be used to determine the directory structure in which to write the datafiles. Connect to the destination PDB as the RMAN clone/auxiliary instance when the source database is a 12c non-CDB or PDB that requires a cross-platform full transportable database import and the data and files will be imported into an existing PDB. This will ensure the files copied by RMAN are written to the GUID directory of the PDB target database for the migration.
The enhancements for multitenant plugin operations with OMF simplify the process extensively. The manifest generated on the source non-CDB/PDB contains all of the filenames and characteristics of each file. Normally, the plugin operation would use the filenames in the manifest and look for those exact filenames, or for partially converted ones (using the SOURCE_FILE_NAME_CONVERT clause on the CREATE PLUGGABLE DATABASE ... USING ... statement). Since all filenames will be different when the files are copied to a new location with OMF in use, you would need to specify full directory and filename convert pairs for EACH file being plugged in. By using the SOURCE_FILE_DIRECTORY clause on the CREATE PLUGGABLE DATABASE ... USING ... statement in the plugin operation, the filename in the manifest is ignored and the plugin looks in the SOURCE_FILE_DIRECTORY location for a file matching the other characteristics about the file stored in the manifest.
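As an illustration of the difference (file names taken from the examples later in this note), the convert-pair approach needs an entry for every datafile, whereas SOURCE_FILE_DIRECTORY needs only the directory:

-- without SOURCE_FILE_DIRECTORY: one pair per file (quickly unmanageable with OMF)
create pluggable database my_pdb using '/home/oracle/demo/noncdb_unplug.xml'
  source_file_name_convert = ('+DATAD1/NONCDB/DATAFILE/system.455.825151099',
                              '+DATA/CONT3/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/system.387.825251193')
  nocopy;

-- with SOURCE_FILE_DIRECTORY: one clause covers all files
create pluggable database my_pdb using '/home/oracle/demo/noncdb_unplug.xml'
  source_file_directory = '+DATA/CONT3/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE'
  nocopy;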
Handling of Standby Databases during plugin
Standby databases do not use the filename conversion clauses specified on the plugin statement. Media recovery will use the DB_FILE_NAME_CONVERT initialization parameter specified in the standby database's spfile to try to convert the filename/directory from the redo record and find the file in the standby environment.
On standby databases, when performing a multitenant plugin with OMF in use, media recovery will search the default directory noted above (in the case of ASM, <DB_CREATE_FILE_DEST diskgroup_name>/<db_unique_name>/<GUID>/DATAFILE) and use the same file characteristics to find a file match. If a match is found, media recovery adds the file to the controlfile and continues to apply redo without requiring any additional intervention.
If a suitable file match is not found, media recovery stops and returns an error; manual steps must then be taken to resolve the error and restart media recovery.
Oracle Database 12c version 12.2.0.1 provides additional options for the STANDBYS= clause, allowing you to specify subsets of standbys that you wish to include in or exclude from the full creation of PDBs. Please note that this option does not remove the requirement for files to be pre-staged at the standbys during the plugin operation for standbys that are included in the subset.
Oracle Database 12c version 12.2.0.1 also offers the ability to perform hot clones and PDB relocation. At this time it is not possible to maintain any standbys at the destination directly while using either of these options. You should always create PDBs with the STANDBYS=NONE clause when using hot cloning or relocation, then ENABLE RECOVERY at some future point.
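A sketch of the 12.2.0.1 syntax (the database link, source PDB name and standby name are illustrative):

-- defer recovery on all standbys, e.g. for a hot clone:
create pluggable database my_pdb from src_pdb@source_link standbys=none;
-- or include only a named subset of standbys in a plugin:
create pluggable database my_pdb using '/home/oracle/demo/noncdb_unplug.xml' standbys=('cont3stby') nocopy;
-- later, once current copies of the files are staged on a deferred standby (run on the standby):
alter pluggable database my_pdb enable recovery;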
Full Transportable and Export/Import implications
Data Pump full transportable or export/import requires exact filenames to be passed to Data Pump on import, so the files must be in the specified location for the import operation to work successfully on the primary. As noted above, when OMF is used, filenames will not match between primary databases and standby databases (even after applying DB_FILE_NAME_CONVERT), and with ASM the directory structures will not match (DB_UNIQUE_NAME settings will be different). At this time with OMF, as the redo for each tablespace in the import is applied at the standby site, media recovery will fail because it cannot find the file(s). You must manually rename the file at the standby site to the actual filename and restart recovery. This must be done tablespace by tablespace until all of the tablespaces have been processed.
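A sketch of the manual steps for one such file at the standby (file names are illustrative; the failing name is reported by media recovery in the standby alert log):

SQL> alter database recover managed standby database cancel;
SQL> alter system set standby_file_management=manual;
SQL> alter database rename file
  2    '<filename recorded in the redo from the primary>'
  3    to '+DATAD1/CONT3STBY/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/users.510.825244043';
SQL> alter system set standby_file_management=auto;
SQL> alter database recover managed standby database disconnect from session;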
12c Migrate non-CDB and Convert to a PDB using 12c RMAN Active Database Duplication
The RMAN Block Change Tracking feature can be used to improve the performance of the incremental roll forward steps.
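For example, on the source non-CDB (with OMF in place the tracking file is created automatically):

SQL> alter database enable block change tracking;
SQL> select status, filename from v$block_change_tracking;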
The following steps were executed using the bash shell.
This method takes advantage of a new feature of RMAN in 12c that allows you to clone database files and apply incrementals to those files from an active database without requiring staging areas. In addition, source non-CDB application activity can continue while the cloning and incremental applies are being performed. When ready to perform the actual migration, the source non-CDB must be put into READ ONLY mode. At this point, application activity should stop and a final incremental apply is performed.
To get the datafiles into the correct ASM directory for the destination PDB, you perform an RMAN BACKUP AS COPY from the source non-CDB while also connected to the destination CDB in the same RMAN session. When using OMF or ASM on the destination, this ensures that the datafile copies are written to the correct location. The incremental applies are also performed across the network and applied to the datafiles in their destination location.
This operation requires a temporary instance to be used for the incremental applies. When the datafile copies are created, they are not known to any database in the new environment; by using the temporary instance, we create a copy of the source non-CDB controlfile on the destination environment and catalog the newly created datafile copies in the temporary instance. We then have an instance that is aware of the files and can perform recovery on them. The temporary instance is used only for this purpose and can be discarded once the migration process is complete. The temporary instance should only be started in nomount or mount mode; it should never be opened.
If you have an Active Data Guard physical standby in place on the destination, perform the RMAN BACKUP AS COPY and incremental apply process on both the primary and standby destination environments. This will place the files in the correct location on the destination standby environment so that media recovery of the plugin operation can continue without stopping. You must run the process separately for each destination environment; e.g. you will run an RMAN process to copy the files to your destination primary site and also run an RMAN process to copy the files to your destination standby site, you will create a temporary instance on your destination primary site and your destination standby site, etc. Once the final incremental apply is completed on both the destination primary and standby sites, you can plug in the new PDB on the primary with no further action needing to be taken on the standby site.
The example below documents the process with ASM in use for file storage and with an Active Data Guard physical standby database in place.
- On both the destination primary and standby database environments, create a TNS entry to point to the source non-CDB to be migrated. This will be used by RMAN to connect to the source, create the datafile copies on the destination environments, and perform incremental applies across the network with no need for a staging area. This connection can use the SCAN host.
NONCDB =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = source-noncdb-scan)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = noncdb)
)
)
- Ensure that there are TNS entries on both the destination primary and standby environments that point to the respective destination instances. In other words, on the destination primary server, there must be a TNS entry pointing to the destination primary database and on the destination standby server, there must be a TNS entry pointing to the destination standby database. These TNS entries can use SCAN hosts.
- Destination Primary
CONT3 =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = target-prim-scan)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = cont3)
)
)
- Destination Standby
CONT3STBY =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = target-stby-scan)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = cont3stby)
)
)
- Copy the TNS entries pointing to the destination primary and standby environments to the source non-CDB environment. These are required to allow the source database to connect to the destinations as part of the BACKUP AS COPY process. These TNS entries can use SCAN hosts. If you used SCAN hosts for the TNS entry pointing to the non-CDB source in the destination environment (first step above) and the source non-CDB is a Real Application Cluster database, these TNS entries pointing to the destination should be on ALL the nodes where the non-CDB database has instances running.
- Backup as copy the source non-CDB to the destination primary and standby environments connecting to the destination primary and standby database instances. This will create datafile copies of the source non-CDB in the correct directories of ASM, these file copies will not be known to the databases at this point. A TNS connection is needed to both the RMAN target (source 12c non-CDB) and destination (12c CDB). In the RMAN session, connect to the ROOT of the container database you will be migrating to as the clone instance. The source non-CDB can continue application activity while the copies are occurring.
Create a command file similar to the following. The target is the source non-CDB to be migrated, the clone is the destination CDB in which the new PDB will be created.
- Primary
connect target sys@noncdb
connect auxiliary sys@cont3
backup as copy database auxiliary format '+data';
- Standby
connect target sys@noncdb
connect auxiliary sys@cont3stby
backup as copy database auxiliary format '+datad1';

Ensure backup compression on the disk device type is not enabled on the primary database. Backup as copy is not allowed with compression enabled.

RMAN> show device type;
RMAN configuration parameters for database with db_unique_name NONCDB are:
CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO BACKUPSET; # default

Run the code blocks just created on both the destination primary site and the destination standby site to copy the files from the source non-CDB to the destination environments. Note the directory structure for the output files on both the primary and the standby environments (lines starting with "output file name"). These directories will be needed in a later step.
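If SHOW DEVICE TYPE instead reports BACKUP TYPE TO COMPRESSED BACKUPSET, it can be returned to the uncompressed default shown above before running the copy, for example:

RMAN> configure device type disk parallelism 1 backup type to backupset;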
- Primary
$ rman

Recovery Manager: Release 12.1.0.1.0 - Production on Wed Sep 4 12:23:17 2013
Copyright (c) 1982, 2013, Oracle and/or its affiliates. All rights reserved.
RMAN> @backup.cmd
RMAN> connect target *
target database Password:
connected to target database: NONCDB (DBID=1643045589)

RMAN> connect auxiliary *
auxiliary database Password:
connected to auxiliary database: CONT3 (DBID=1783201232)

RMAN> backup as copy database auxiliary format '+data';
Starting backup at 04-SEP-13
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=319 device type=DISK
<snip>
channel ORA_DISK_1: starting datafile copy
input datafile file number=00003 name=+DATAD1/NONCDB/DATAFILE/undotbs1.441.825151103
channel ORA_DISK_2: starting datafile copy
input datafile file number=00005 name=+DATAD1/NONCDB/DATAFILE/soets.438.825151965
channel ORA_DISK_3: starting datafile copy
input datafile file number=00006 name=+DATAD1/NONCDB/DATAFILE/soets2.436.825151971
channel ORA_DISK_4: starting datafile copy
input datafile file number=00007 name=+DATAD1/NONCDB/DATAFILE/soets3.451.825151981
<snip>
input datafile file number=00001 name=+DATAD1/NONCDB/DATAFILE/system.455.825151099
output file name=+DATA/CONT3/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/system.387.825251193 tag=TAG20130904T102613 # Save this directory name to be used later
channel ORA_DISK_8: datafile copy complete, elapsed time: 00:01:05
channel ORA_DISK_8: starting datafile copy
input datafile file number=00004 name=+DATAD1/NONCDB/DATAFILE/users.288.825151109
output file name=+DATA/CONT3/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/users.386.825251261 tag=TAG20130904T102613
channel ORA_DISK_8: datafile copy complete, elapsed time: 00:00:05
<snip>
output file name=+DATA/CONT3/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/undotbs1.272.825251181 tag=TAG20130904T102613
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:08:30
Finished backup at 04-SEP-13

RMAN> **end-of-file**
- Standby
$ rman

Recovery Manager: Release 12.1.0.1.0 - Production on Wed Sep 4 10:26:51 2013
Copyright (c) 1982, 2013, Oracle and/or its affiliates. All rights reserved.
RMAN> @backup.cmd
RMAN> connect target *
target database Password:
connected to target database: NONCDB (DBID=1643045589)

RMAN> connect auxiliary *
auxiliary database Password:
connected to auxiliary database: CONT3 (DBID=1783201232)

RMAN> backup as copy database auxiliary format '+datad1';
Starting backup at 04-SEP-13
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=556 device type=DISK
<snip>
input datafile file number=00001 name=+DATAD1/NONCDB/DATAFILE/system.455.825151099
output file name=+DATAD1/CONT3STBY/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/sysaux.528.825244029 tag=TAG20130904T102706 # Save this directory name to be used later
channel ORA_DISK_7: datafile copy complete, elapsed time: 00:00:15
channel ORA_DISK_7: starting datafile copy
input datafile file number=00004 name=+DATAD1/NONCDB/DATAFILE/users.288.825151109
output file name=+DATAD1/CONT3STBY/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/system.517.825244029 tag=TAG20130904T102706
channel ORA_DISK_8: datafile copy complete, elapsed time: 00:00:16
output file name=+DATAD1/CONT3STBY/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/users.510.825244043 tag=TAG20130904T102706
channel ORA_DISK_7: datafile copy complete, elapsed time: 00:00:01
output file name=+DATAD1/CONT3STBY/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/undotbs1.516.825244027 tag=TAG20130904T102706
<snip>
output file name=+DATAD1/CONT3STBY/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/soets5.515.825244029 tag=TAG20130904T102706
channel ORA_DISK_6: datafile copy complete, elapsed time: 00:00:51
Finished backup at 04-SEP-13

RMAN> **end-of-file**
- Copy the password file from the source non-CDB to the destination sites (both primary and standby) for the temporary instances. The following example shows the password file stored in ASM for the source non-CDB. The password file is extracted from ASM on the source and written to a local filesystem, then remote copied to the destination environments (both primary and standby). For ease of use, the files should be copied to the $ORACLE_HOME/dbs directory (default location) used by the temporary instance on the destination environments and should be given a default name of orapw<$ORACLE_SID of the temporary instance>.
[oracle@source]$ asmcmd -p
ASMCMD [+] > cd datad1/noncdb
ASMCMD [+datad1/noncdb] > cp orapwnoncdb /home/oracle/temp/orapwtemp
copying +datad1/noncdb/orapwnoncdb -> /home/oracle/temp/orapwtemp

[oracle@source]$ scp /home/oracle/temp/orapwtemp oracle@destination-prim:<ORACLE_HOME>/dbs
[oracle@source]$ scp /home/oracle/temp/orapwtemp oracle@destination-stby:<ORACLE_HOME>/dbs/orapwtempstby
- Create a small pfile for the temporary instances. Although not required to be different for the destination primary and standby environments, for clarity they are different in this example with each temporary instance getting a different DB_UNIQUE_NAME. Note the comments for each initialization parameter setting.
- Primary
db_name=noncdb # MUST match db_name of source CDB/non-CDB
db_unique_name=junk # ONLY specify if the source db is on the same system as the TEMP instance
control_files='/home/oracle/temp/control1.ctl','/home/oracle/temp/control2.ctl' # Just use local files
compatible='12.1.0.0.0' # required if compatible is not set to default value
#Since the source is a non-CDB, we do not use the enable_pluggable_database parameter
- Standby
db_name=noncdb # MUST match db_name of source CDB/non-CDB
db_unique_name=junkstby # ONLY specify if the source db is on the same system as the TEMP instance
control_files='/home/oracle/temp/control1.ctl','/home/oracle/temp/control2.ctl' # Just use local files
compatible='12.1.0.0.0' # required if compatible is not set to default value
#Since the source is a non-CDB, we do not use the enable_pluggable_database parameter
- Create a TNS alias for the destination temporary instances. This should be done on the environment where you will be running RMAN (in this example, the destination primary and standby environments). If you needed to specify DB_UNIQUE_NAME, the same value must also be specified for the SERVICE_NAME in the TNS address. Do not use the SCAN address for the host, as the temporary instances will not be registered with the SCAN listener. (UR = A) is required to allow you to connect to the service while the database is in mount/nomount mode.
- Primary
TEMP =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = primhostdb01.us.oracle.com)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = junk) (UR = A)
)
)
- Standby
TEMPSTBY =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = stbyhostdb01.us.oracle.com)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = junkstby) (UR = A)
)
)
- Startup nomount the temporary instances on the destination primary and standby environments using the pfiles created above
- Primary
$ export ORACLE_SID=temp
$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.1.0 Production on Wed Sep 4 12:51:52 2013
Copyright (c) 1982, 2013, Oracle. All rights reserved.
Connected to an idle instance.

SQL> startup nomount pfile='/home/oracle/demo/pfile.ora'
ORACLE instance started.

Total System Global Area 384200704 bytes
Fixed Size 2288536 bytes
Variable Size 322962536 bytes
Database Buffers 50331648 bytes
Redo Buffers 8617984 bytes
- Standby
$ export ORACLE_SID=tempstby
$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.1.0 Production on Wed Sep 4 10:53:06 2013
Copyright (c) 1982, 2013, Oracle. All rights reserved.
Connected to an idle instance.

SQL> startup nomount pfile='/home/oracle/demo/pfile.ora'
ORACLE instance started.

Total System Global Area 634732544 bytes
Fixed Size 2291424 bytes
Variable Size 524290336 bytes
Database Buffers 100663296 bytes
Redo Buffers 7487488 bytes
- Restore the controlfile from the source non-CDB to both the destination primary and standby environment temporary instances. Connect to the temporary instances using TNS (e.g. rman target sys@temp).
The FROM SERVICE clause in the RESTORE command provides the connection to the source non-CDB. The clause in the example uses the TNS alias for the source non-CDB created above. This is the only connection used and required for the source non-CDB in these commands.
- Primary
$ rman
Recovery Manager: Release 12.1.0.1.0 - Production on Wed Sep 4 13:32:03 2013
Copyright (c) 1982, 2013, Oracle and/or its affiliates. All rights reserved.
RMAN> connect target sys@temp
target database Password:
connected to target database: NONCDB (not mounted)

RMAN> restore controlfile from service noncdb;
Starting restore at 04-SEP-13
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=790 device type=DISK
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: using network backup set from service noncdb
channel ORA_DISK_1: restoring control file
channel ORA_DISK_1: restore complete, elapsed time: 00:00:07
output file name=/home/oracle/demo/control1.ctl
output file name=/home/oracle/demo/control2.ctl
Finished restore at 04-SEP-13
- Standby
$ rman
Recovery Manager: Release 12.1.0.1.0 - Production on Wed Sep 4 11:33:33 2013
Copyright (c) 1982, 2013, Oracle and/or its affiliates. All rights reserved.
RMAN> connect target sys@tempstby
target database Password:
connected to target database: NONCDB (not mounted)

RMAN> restore controlfile from service noncdb;
Starting restore at 04-SEP-13
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=514 device type=DISK
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: using network backup set from service noncdb
channel ORA_DISK_1: restoring control file
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
output file name=/home/oracle/demo/control1.ctl
output file name=/home/oracle/demo/control2.ctl
Finished restore at 04-SEP-13
- Mount the temporary instances on the destination primary and standby environments to allow for cataloging the cloned database files.
RMAN> alter database mount;
Statement processed
- Catalog the datafiles in the temporary instances on the destination primary and standby environments using "START WITH" to locate the datafile copies restored above. The directory location can be found in the BACKUP AS COPY screen output in lines starting with "output file name".
- Primary
RMAN> catalog start with '+DATA/CONT3/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE';
searching for all files that match the pattern +DATA/CONT3/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE
List of Files Unknown to the Database
=====================================
File Name: +DATA/CONT3/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/UNDOTBS1.283.825173903
File Name: +DATA/CONT3/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/UNDOTBS1.388.825182125
File Name: +DATA/CONT3/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/UNDOTBS1.259.825200151
File Name: +DATA/CONT3/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/UNDOTBS1.272.825251181
File Name: +DATA/CONT3/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/SOETS.384.825251183
File Name: +DATA/CONT3/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/SOETS2.390.825251185
File Name: +DATA/CONT3/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/SOETS3.374.825251187
File Name: +DATA/CONT3/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/SOETS4.389.825251189
File Name: +DATA/CONT3/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/SOETS5.373.825251191
File Name: +DATA/CONT3/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/SYSAUX.385.825251191
File Name: +DATA/CONT3/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/SYSTEM.387.825251193
File Name: +DATA/CONT3/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/USERS.386.825251261

Do you really want to catalog the above files (enter YES or NO)? yes
cataloging files...
cataloging done

List of Cataloged Files
=======================
File Name: +DATA/CONT3/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/UNDOTBS1.283.825173903
File Name: +DATA/CONT3/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/UNDOTBS1.388.825182125
File Name: +DATA/CONT3/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/UNDOTBS1.259.825200151
File Name: +DATA/CONT3/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/UNDOTBS1.272.825251181
File Name: +DATA/CONT3/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/SOETS.384.825251183
File Name: +DATA/CONT3/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/SOETS2.390.825251185
File Name: +DATA/CONT3/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/SOETS3.374.825251187
File Name: +DATA/CONT3/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/SOETS4.389.825251189
File Name: +DATA/CONT3/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/SOETS5.373.825251191
File Name: +DATA/CONT3/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/SYSAUX.385.825251191
File Name: +DATA/CONT3/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/SYSTEM.387.825251193
File Name: +DATA/CONT3/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/USERS.386.825251261
- Standby
RMAN> catalog start with '+DATAD1/CONT3STBY/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE';
searching for all files that match the pattern +DATAD1/CONT3STBY/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE
List of Files Unknown to the Database
=====================================
File Name: +DATAD1/CONT3STBY/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/UNDOTBS1.516.825244027
File Name: +DATAD1/CONT3STBY/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/SOETS.511.825244027
File Name: +DATAD1/CONT3STBY/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/SOETS2.513.825244029
File Name: +DATAD1/CONT3STBY/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/SOETS3.529.825244029
File Name: +DATAD1/CONT3STBY/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/SOETS4.514.825244029
File Name: +DATAD1/CONT3STBY/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/SYSAUX.528.825244029
File Name: +DATAD1/CONT3STBY/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/SOETS5.515.825244029
File Name: +DATAD1/CONT3STBY/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/SYSTEM.517.825244029
File Name: +DATAD1/CONT3STBY/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/USERS.510.825244043

Do you really want to catalog the above files (enter YES or NO)? yes
cataloging files...
cataloging done

List of Cataloged Files
=======================
File Name: +DATAD1/CONT3STBY/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/UNDOTBS1.516.825244027
File Name: +DATAD1/CONT3STBY/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/SOETS.511.825244027
File Name: +DATAD1/CONT3STBY/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/SOETS2.513.825244029
File Name: +DATAD1/CONT3STBY/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/SOETS3.529.825244029
File Name: +DATAD1/CONT3STBY/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/SOETS4.514.825244029
File Name: +DATAD1/CONT3STBY/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/SYSAUX.528.825244029
File Name: +DATAD1/CONT3STBY/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/SOETS5.515.825244029
File Name: +DATAD1/CONT3STBY/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/SYSTEM.517.825244029
File Name: +DATAD1/CONT3STBY/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/USERS.510.825244043
- Switch the files in the temporary instances on the destination primary and standby environments to the datafile copies just cataloged
- Primary
RMAN> switch database to copy;
datafile 1 switched to datafile copy "+DATA/CONT3/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/system.387.825251193"
datafile 2 switched to datafile copy "+DATA/CONT3/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/sysaux.385.825251191"
datafile 3 switched to datafile copy "+DATA/CONT3/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/undotbs1.272.825251181"
datafile 4 switched to datafile copy "+DATA/CONT3/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/users.386.825251261"
datafile 5 switched to datafile copy "+DATA/CONT3/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/soets.384.825251183"
datafile 6 switched to datafile copy "+DATA/CONT3/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/soets2.390.825251185"
datafile 7 switched to datafile copy "+DATA/CONT3/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/soets3.374.825251187"
datafile 8 switched to datafile copy "+DATA/CONT3/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/soets4.389.825251189"
datafile 9 switched to datafile copy "+DATA/CONT3/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/soets5.373.825251191"
- Standby
RMAN> switch database to copy;
datafile 1 switched to datafile copy "+DATAD1/CONT3STBY/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/system.517.825244029"
datafile 2 switched to datafile copy "+DATAD1/CONT3STBY/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/sysaux.528.825244029"
datafile 3 switched to datafile copy "+DATAD1/CONT3STBY/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/undotbs1.516.825244027"
datafile 4 switched to datafile copy "+DATAD1/CONT3STBY/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/users.510.825244043"
datafile 5 switched to datafile copy "+DATAD1/CONT3STBY/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/soets.511.825244027"
datafile 6 switched to datafile copy "+DATAD1/CONT3STBY/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/soets2.513.825244029"
datafile 7 switched to datafile copy "+DATAD1/CONT3STBY/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/soets3.529.825244029"
datafile 8 switched to datafile copy "+DATAD1/CONT3STBY/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/soets4.514.825244029"
datafile 9 switched to datafile copy "+DATAD1/CONT3STBY/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/soets5.515.825244029"
- Set RMAN parallelism in the temporary instances on both destination primary and standby environments
RMAN> configure device type disk parallelism 8;
old RMAN configuration parameters:
CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO BACKUPSET;
new RMAN configuration parameters:
CONFIGURE DEVICE TYPE DISK PARALLELISM 8 BACKUP TYPE TO BACKUPSET;
new RMAN configuration parameters are successfully stored
- At periodic intervals, perform incremental applies from the source non-CDB to the temporary instances on the destination primary and standby environments to keep the destination datafiles reasonably current. This can be done as many times as necessary until you are ready to perform the migration. The incremental process compares the files in the temporary instance with the files in the source non-CDB and determines which files need recovery applied; thus the recovery work done can differ between the primary environment temporary instance and the standby temporary instance, depending on the states of their respective files. The source non-CDB can continue application activity while the incremental applies are occurring.
The FROM SERVICE clause in the RECOVER command provides the connection to the source non-CDB. The clause in the example uses the TNS alias for the source non-CDB created above. This is the only connection used and required for the source non-CDB in these commands.
- Primary
RMAN> connect target sys@temp
target database Password:
connected to target database: NONCDB (mounted)

RMAN> recover database noredo from service noncdb;
Starting recover at 04-SEP-13
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=136 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=529 device type=DISK
allocated channel: ORA_DISK_3
channel ORA_DISK_3: SID=660 device type=DISK
allocated channel: ORA_DISK_4
channel ORA_DISK_4: SID=791 device type=DISK
allocated channel: ORA_DISK_5
channel ORA_DISK_5: SID=922 device type=DISK
allocated channel: ORA_DISK_6
channel ORA_DISK_6: SID=6 device type=DISK
allocated channel: ORA_DISK_7
channel ORA_DISK_7: SID=137 device type=DISK
allocated channel: ORA_DISK_8
channel ORA_DISK_8: SID=268 device type=DISK
channel ORA_DISK_1: starting incremental datafile backup set restore
channel ORA_DISK_1: using network backup set from service noncdb
destination for restore of datafile 00001: +DATA/CONT3/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/system.387.825251193
channel ORA_DISK_2: starting incremental datafile backup set restore
channel ORA_DISK_2: using network backup set from service noncdb
destination for restore of datafile 00002: +DATA/CONT3/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/sysaux.385.825251191
channel ORA_DISK_3: starting incremental datafile backup set restore
channel ORA_DISK_3: using network backup set from service noncdb
destination for restore of datafile 00003: +DATA/CONT3/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/undotbs1.272.825251181
channel ORA_DISK_4: starting incremental datafile backup set restore
channel ORA_DISK_4: using network backup set from service noncdb
destination for restore of datafile 00004: +DATA/CONT3/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/users.386.825251261
channel ORA_DISK_5: starting incremental datafile backup set restore
channel ORA_DISK_5: using network backup set from service noncdb
destination for restore of datafile 00005: +DATA/CONT3/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/soets.384.825251183
channel ORA_DISK_6: starting incremental datafile backup set restore
channel ORA_DISK_6: using network backup set from service noncdb
destination for restore of datafile 00006: +DATA/CONT3/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/soets2.390.825251185
channel ORA_DISK_7: starting incremental datafile backup set restore
channel ORA_DISK_7: using network backup set from service noncdb
destination for restore of datafile 00007: +DATA/CONT3/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/soets3.374.825251187
channel ORA_DISK_8: starting incremental datafile backup set restore
channel ORA_DISK_8: using network backup set from service noncdb
destination for restore of datafile 00008: +DATA/CONT3/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/soets4.389.825251189
channel ORA_DISK_1: restore complete, elapsed time: 00:00:08
channel ORA_DISK_1: starting incremental datafile backup set restore
channel ORA_DISK_1: using network backup set from service noncdb
destination for restore of datafile 00009: +DATA/CONT3/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/soets5.373.825251191
channel ORA_DISK_4: restore complete, elapsed time: 00:00:08
channel ORA_DISK_6: restore complete, elapsed time: 00:00:15
channel ORA_DISK_7: restore complete, elapsed time: 00:00:15
channel ORA_DISK_8: restore complete, elapsed time: 00:00:14
channel ORA_DISK_1: restore complete, elapsed time: 00:00:15
channel ORA_DISK_2: restore complete, elapsed time: 00:00:33
channel ORA_DISK_5: restore complete, elapsed time: 00:01:43
channel ORA_DISK_3: restore complete, elapsed time: 00:02:23

Finished recover at 04-SEP-13
- Standby
RMAN> connect target sys@tempstby
target database Password:
connected to target database: NONCDB (mounted)

RMAN> recover database noredo from service noncdb;
Starting recover at 04-SEP-13
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=770 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=834 device type=DISK
allocated channel: ORA_DISK_3
channel ORA_DISK_3: SID=898 device type=DISK
allocated channel: ORA_DISK_4
channel ORA_DISK_4: SID=961 device type=DISK
allocated channel: ORA_DISK_5
channel ORA_DISK_5: SID=1026 device type=DISK
allocated channel: ORA_DISK_6
channel ORA_DISK_6: SID=1090 device type=DISK
allocated channel: ORA_DISK_7
channel ORA_DISK_7: SID=1154 device type=DISK
allocated channel: ORA_DISK_8
channel ORA_DISK_8: SID=1218 device type=DISK
skipping datafile 1; already restored to SCN 7688784
skipping datafile 2; already restored to SCN 7688782
skipping datafile 3; already restored to SCN 7688777
skipping datafile 5; already restored to SCN 7688778
skipping datafile 6; already restored to SCN 7688779
skipping datafile 7; already restored to SCN 7688780
skipping datafile 8; already restored to SCN 7688781
skipping datafile 9; already restored to SCN 7688783
channel ORA_DISK_1: starting incremental datafile backup set restore
channel ORA_DISK_1: using network backup set from service noncdb
destination for restore of datafile 00004: +DATAD1/CONT3STBY/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/users.510.825244043
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01

Finished recover at 04-SEP-13
- Build the plugin statement to be executed on the destination primary database. Use the SOURCE_FILE_DIRECTORY clause to point to the location the files were restored to on the destination CDB primary database environment. Note that the plugin process will search the SOURCE_FILE_DIRECTORY for files that match the information contained in the manifest XML file created by the DBMS_PDB.DESCRIBE execution. Use the NOCOPY clause, as the files are already in their intended location. This statement can be pre-created outside of the outage window.
create pluggable database my_pdb using '/home/oracle/noncdb/noncdb_unplug.xml'
source_file_directory= '+DATA/CONT3/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE'
nocopy;
- Shutdown the source non-CDB and restart it read only. The shutdown must not be done with ABORT. At this time, application activity should cease.
SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount
ORACLE instance started.

Total System Global Area 3206836224 bytes
Fixed Size 3079928 bytes
Variable Size 922747144 bytes
Database Buffers 2264924160 bytes
Redo Buffers 16084992 bytes
Database mounted.
SQL> alter database open read only;

Database altered.
- On the source non-CDB, using DBMS_PDB.DESCRIBE(), create the xml manifest for the source non-CDB
SQL> exec dbms_pdb.describe('/home/oracle/demo/noncdb_unplug.xml');
PL/SQL procedure successfully completed.
- Leave the source non-CDB running and perform a final incremental apply through the temporary instances on the destination primary and standby environments
- Primary
recover database noredo from service noncdb;
- Standby
recover database noredo from service noncdb;
- Shutdown the temporary instances
- Copy the xml manifest file to the destination primary host
- If your diskgroup names are different between your destination primary and standby, ensure that you have db_file_name_convert set on the standby specifying only the diskgroup names.
- Standby
SQL> show parameter db_file_name_convert
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_file_name_convert string +DATA, +DATAC1
- Log in to the destination CDB primary and plug in the non-CDB as a PDB using the plugin statement created above. As long as the files already exist on the standby prior to the plugin, are in the correct directory (<dgname>/<dbuniquename>/<guid>/DATAFILE for ASM), and are current on the destination standby environment, no additional steps are needed for the standby. Media recovery will find the files and continue processing.
SQL> set echo on
SQL> @plugin_mypdb
SQL> create pluggable database my_pdb using '/home/oracle/demo/noncdb_unplug.xml'
2 source_file_directory= '+DATA/CONT3/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE'
3 nocopy;

Pluggable database created.
If you encounter ORA-922 on the source_file_directory clause, this indicates you have not applied the prerequisite patches. To use these steps, fixes 16347068, 16822629, 16874123 and 16888264 are required to be applied on the 12c Oracle Database homes. These fixes are included in 12.1.0.1 Patch Set Update 2 or later and are available for 12.1.0.1 base as merge patch 17759598.

Here is sample output from the alert log on the primary for finding the files:
Scanning source datafile directory - +DATA/CONT3/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE for file originally created as +DATAD1/NONCDB/DATAFILE/system.455.825151099 with afn -1
Using file-+DATA/CONT3/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/SYSTEM.384.825258561 for original file-+DATAD1/NONCDB/DATAFILE/system.455.825151099 with afn-1
Scanning source datafile directory - +DATA/CONT3/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE for file originally created as +DATAD1/NONCDB/DATAFILE/sysaux.317.825151101 with afn -2
Using file-+DATA/CONT3/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/SYSAUX.390.825258559 for original file-+DATAD1/NONCDB/DATAFILE/sysaux.317.825151101 with afn-2
Creating new file-+DATA/CONT3/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/temp.444.825151103 for original file-+DATAD1/NONCDB/TEMPFILE/temp.444.825151103
Scanning source datafile directory - +DATA/CONT3/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE for file originally created as +DATAD1/NONCDB/DATAFILE/users.288.825151109 with afn -4
Using file-+DATA/CONT3/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/USERS.272.825258627 for original file-+DATAD1/NONCDB/DATAFILE/users.288.825151109 with afn-4
Scanning source datafile directory - +DATA/CONT3/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE for file originally created as +DATAD1/NONCDB/DATAFILE/soets.438.825151965 with afn -5
Using file-+DATA/CONT3/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/SOETS.387.825258551 for original file-+DATAD1/NONCDB/DATAFILE/soets.438.825151965 with afn-5
Scanning source datafile directory - +DATA/CONT3/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE for file originally created as +DATAD1/NONCDB/DATAFILE/soets2.436.825151971 with afn -6
Using file-+DATA/CONT3/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/SOETS2.385.825258553 for original file-+DATAD1/NONCDB/DATAFILE/soets2.436.825151971 with afn-6
Scanning source datafile directory - +DATA/CONT3/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE for file originally created as +DATAD1/NONCDB/DATAFILE/soets3.451.825151981 with afn -7
Using file-+DATA/CONT3/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/SOETS3.373.825258555 for original file-+DATAD1/NONCDB/DATAFILE/soets3.451.825151981 with afn-7
Scanning source datafile directory - +DATA/CONT3/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE for file originally created as +DATAD1/NONCDB/DATAFILE/soets4.452.825151987 with afn -8
Using file-+DATA/CONT3/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/SOETS4.389.825258557 for original file-+DATAD1/NONCDB/DATAFILE/soets4.452.825151987 with afn-8
Scanning source datafile directory - +DATA/CONT3/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE for file originally created as +DATAD1/NONCDB/DATAFILE/soets5.443.825151993 with afn -9
Using file-+DATA/CONT3/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/SOETS5.374.825258557 for original file-+DATAD1/NONCDB/DATAFILE/soets5.443.825151993 with afn-9

Here is sample output from the alert log on the standby for finding the files:
Wed Sep 04 12:43:07 2013
Recovery scanning directory +DATAD1/CONT3STBY/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE for any matching files
Deleted Oracle managed file +DATAD1/cont3/e57ca3f121dfd48ae043f646f40a61ea/datafile/system.384.825258561
Successfully added datafile 37 to media recovery
Datafile #37: '+DATAD1/CONT3STBY/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/system.511.825251253'
Deleted Oracle managed file +DATAD1/cont3/e57ca3f121dfd48ae043f646f40a61ea/datafile/sysaux.390.825258559
Successfully added datafile 38 to media recovery
Datafile #38: '+DATAD1/CONT3STBY/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/sysaux.528.825251253'
Deleted Oracle managed file +DATAD1/cont3/e57ca3f121dfd48ae043f646f40a61ea/datafile/users.272.825258627
Successfully added datafile 39 to media recovery
Datafile #39: '+DATAD1/CONT3STBY/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/users.516.825251265'
Deleted Oracle managed file +DATAD1/cont3/e57ca3f121dfd48ae043f646f40a61ea/datafile/soets.387.825258551
Successfully added datafile 40 to media recovery
Datafile #40: '+DATAD1/CONT3STBY/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/soets.517.825251253'
Deleted Oracle managed file +DATAD1/cont3/e57ca3f121dfd48ae043f646f40a61ea/datafile/soets2.385.825258553
Successfully added datafile 41 to media recovery
Datafile #41: '+DATAD1/CONT3STBY/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/soets2.515.825251253'
Deleted Oracle managed file +DATAD1/cont3/e57ca3f121dfd48ae043f646f40a61ea/datafile/soets3.373.825258555
Successfully added datafile 42 to media recovery
Datafile #42: '+DATAD1/CONT3STBY/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/soets3.514.825251253'
Deleted Oracle managed file +DATAD1/cont3/e57ca3f121dfd48ae043f646f40a61ea/datafile/soets4.389.825258557
Successfully added datafile 43 to media recovery
Datafile #43: '+DATAD1/CONT3STBY/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/soets4.529.825251253'
Deleted Oracle managed file +DATAD1/cont3/e57ca3f121dfd48ae043f646f40a61ea/datafile/soets5.374.825258557
Successfully added datafile 44 to media recovery
Datafile #44: '+DATAD1/CONT3STBY/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/soets5.513.825251253'
- In the same SQL*Plus session in the destination primary CDB root used by the plugin operation, complete the conversion of the non-CDB to a PDB using the noncdb_to_pdb.sql script. Connect to the newly created PDB, which should be in mount mode (not open), and execute the script while connected to the PDB.
SQL> alter session set container=my_pdb;
Session altered.
SQL> @?/rdbms/admin/noncdb_to_pdb
SQL> Rem
SQL> Rem $Header: rdbms/admin/noncdb_to_pdb.sql /st_rdbms_12.1.0.1/9 2013/04/14 20:05:50 talliu Exp $
SQL> Rem
SQL> Rem noncdb_to_pdb.sql
SQL> Rem
SQL> Rem Copyright (c) 2011, 2013, Oracle and/or its affiliates.
SQL> Rem All rights reserved.
SQL> Rem
SQL> Rem NAME
SQL> Rem noncdb_to_pdb.sql - Convert PDB
SQL> Rem
SQL> Rem DESCRIPTION
SQL> Rem Converts DB to PDB.
SQL> Rem
SQL> Rem NOTES
SQL> Rem Given a DB with proper obj$ common bits set, we convert it to a proper
SQL> Rem PDB by deleting unnecessary metadata.
SQL> Rem
SQL> Rem MODIFIED (MM/DD/YY)
SQL> Rem sankejai 04/11/13 - 16530655: do not update status in container$
SQL> Rem pyam 04/03/13 - rename temp cdb$* views, to not interfere when
SQL> Rem this is run in multiple PDBs simultaneously
SQL> Rem pyam 02/06/13 - error out for non-CDB
SQL> Rem pyam 01/21/13 - stop exiting on sqlerror at end

<snip>
TIMESTAMP
--------------------------------------------------------------------------------
COMP_TIMESTAMP UTLRP_BGN 2013-09-04 14:46:41

1 row selected.
SQL>
SQL> DOC
DOC> The following PL/SQL block invokes UTL_RECOMP to recompile invalid
DOC> objects in the database. Recompilation time is proportional to the
DOC> number of invalid objects in the database, so this command may take
DOC> a long time to execute on a database with a large number of invalid
DOC> objects.
DOC>
DOC> Use the following queries to track recompilation progress:

<snip>
SQL> -- leave the PDB in the same state it was when we started
SQL> BEGIN
2 execute immediate '&open_sql &restricted_state';
3 EXCEPTION
4 WHEN OTHERS THEN
5 BEGIN
6 IF (sqlcode <> -900) THEN
7 RAISE;
8 END IF;
9 END;
10 END;
11 /

PL/SQL procedure successfully completed.
SQL>
SQL> WHENEVER SQLERROR CONTINUE;
SQL>

- Once the script completes successfully, open the PDB on the primary database
The PDB will have a STATUS of NEW on both the primary and standby until the PDB is opened for the first time on the primary, at which time the status will change to NORMAL. This initial open must be performed on the primary. A small amount of redo is generated by this initial open, which will be transported to the standby database and applied via media recovery. Until this redo is applied on the standby to change the status from NEW to NORMAL, the PDB cannot be opened on the standby database.

SQL> alter session set container=cdb$root;
Session altered.
SQL> alter pluggable database my_pdb open instances=all;
Pluggable database altered.
- On the standby, ensure the new PDB has a NORMAL status and open the PDB on standby. If you did not specify the TEMPFILE REUSE clause on the CREATE PLUGGABLE DATABASE statement, add a file to the TEMP tablespace on the standby.
SQL> select pdb_name, status from cdb_pdbs;
PDB_NAME STATUS
-------------------- -------------
PDB$SEED NORMAL
STARK NORMAL
MY_PDB               NORMAL

SQL> alter pluggable database my_pdb open instances=all;
Pluggable database altered.
SQL> alter session set container=my_pdb;
Session altered.
SQL> alter tablespace temp add tempfile '+DATAD1' size 10g;
Tablespace altered.
- Backup the PDB. It cannot be recovered on the new container database without a backup
RMAN> backup pluggable database my_pdb;
Starting backup at 06-SEP-13
using channel ORA_DISK_1
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00040 name=+DATA/CONT3/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/soets.387.825258551
input datafile file number=00041 name=+DATA/CONT3/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/soets2.385.825258553
input datafile file number=00042 name=+DATA/CONT3/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/soets3.373.825258555
input datafile file number=00043 name=+DATA/CONT3/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/soets4.389.825258557
input datafile file number=00044 name=+DATA/CONT3/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/soets5.374.825258557
input datafile file number=00038 name=+DATA/CONT3/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/sysaux.390.825258559
input datafile file number=00037 name=+DATA/CONT3/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/system.384.825258561
input datafile file number=00039 name=+DATA/CONT3/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/users.272.825258627
channel ORA_DISK_1: starting piece 1 at 06-SEP-13
channel ORA_DISK_1: finished piece 1 at 06-SEP-13
piece handle=+RECO/CONT3/E57CA3F121DFD48AE043F646F40A61EA/BACKUPSET/2013_09_06/nnndf0_tag20130906t152558_0.381.825434759 tag=TAG20130906T152558 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:01:16
Finished backup at 06-SEP-13

Starting Control File and SPFILE Autobackup at 06-SEP-13
piece handle=+RECO/CONT3/AUTOBACKUP/2013_09_06/s_825434834.358.825434835 comment=NONE
Finished Control File and SPFILE Autobackup at 06-SEP-13
12c Unplug/Plug PDB to new CDB using 12c RMAN Active Database Duplication
The RMAN Block Change Tracking feature can be used to improve the performance of the incremental roll forward steps.
The following steps were executed using the bash shell.
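If block change tracking is not already enabled on the source database, it can be enabled with a single statement. A minimal sketch (the '+DATA' destination is an assumption; any accessible location, or the OMF default, can be used):

SQL> alter database enable block change tracking using file '+DATA';

With change tracking enabled, RMAN reads only the changed blocks when building the incremental backup sets used by the roll forward steps below.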
This method takes advantage of a new RMAN feature in 12c that allows you to clone database files and apply incrementals to those files from an active database, without requiring staging areas. In addition, application activity in the source PDB can continue while the cloning and incremental applies are being performed. To get the datafiles into the correct ASM directory for the destination PDB, you perform an RMAN BACKUP AS COPY from the source PDB while also connected to the destination CDB. When using OMF or ASM, this ensures that the datafile copies are written to the correct location. The incremental apply is also performed across the network and applied to the datafiles in their destination location.
This operation requires a temporary instance to be used for the incremental apply. When the datafile copies are created, they are not known to any database in the new environment; using the temporary instance, we restore a copy of the source CDB controlfile to the destination environment and catalog the newly created datafile copies in the temporary instance. We then have an instance that is aware of the files and can perform recovery on them. The temporary instance is used only for this purpose and can be discarded once the migration process is complete. It should only be started in NOMOUNT or MOUNT mode; it should never be opened.
If you have an Active Data Guard physical standby in place, perform the RMAN BACKUP AS COPY and incremental apply process on both the destination primary and standby environments. This places the files in the correct location on the destination standby environment so that media recovery of the plugin operation can continue without stopping. You must run the process separately for each destination environment: run one RMAN process to copy the files to your destination primary site and another to copy the files to your destination standby site, create a temporary instance on both the destination primary site and the destination standby site, and so on. Once the final incremental apply is completed on both the destination primary and standby sites, you can create the new PDB on the primary; no further action needs to be taken on the standby site.
The example below documents the process with ASM in use for file storage and with an Active Data Guard physical standby database in place.
- On the destination primary and standby database environments create a TNS entry to point to the CDB containing the PDB you wish to move. This will be used by RMAN to create the datafile copies on the destination environments and to perform incremental applies across the network with no need for a staging area.
CONT4 =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = pdb-scan.us.oracle.com)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = cont4)
)
)

- Ensure that there are TNS entries on both the destination primary and standby environments that point to the respective destination instances. In other words, on the destination primary server, there must be a TNS entry pointing to the destination primary database and on the destination standby server, there must be a TNS entry pointing to the destination standby database. These TNS entries can use SCAN hosts.
- Destination Primary
CONT3 =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = target-prim-scan)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = cont3)
)
)

- Destination Standby
CONT3STBY =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = target-stby-scan)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = cont3stby)
)
)
- Copy the TNS entries pointing to the destination primary and standby environments to the source CDB environment. These are required to allow the source database to connect to the destinations as part of the BACKUP AS COPY process. These TNS entries can use SCAN hosts. If you used SCAN hosts for the TNS entry pointing to the CDB source in the destination environment (first step above) and the source CDB is a Real Application Cluster database, these TNS entries pointing to the destination should be on ALL the nodes where the CDB database has instances running.
- Using RMAN, perform a BACKUP AS COPY of the source PDB to the destination primary and standby environments using the destination primary and standby database instances. This creates datafile copies of the source PDB in the correct ASM directories on the destination sites; these file copies will not be known to the databases at this point. A TNS connection is needed to both the RMAN target (source 12c CDB) and destination (12c CDB). In the RMAN session, connect to the root of both the source container database and the destination container database you will be migrating to as the clone.
Create a command file similar to the following. The target is the source CDB containing the PDB to be migrated; the clone (auxiliary) is the destination CDB in which the new PDB will be created.
- Primary
connect target sys@cont4
connect auxiliary sys@cont3
backup as copy pluggable database barney auxiliary format '+data';
- Standby
connect target sys@cont4
connect auxiliary sys@cont3stby
backup as copy pluggable database barney auxiliary format '+datad1';
Ensure backup compression on the disk device type is not enabled on the primary database; BACKUP AS COPY is not allowed with compression enabled. If a compressed backupset configuration is shown, it can be cleared as in the sketch below.
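One way to return the disk device type to its default (uncompressed backupset) is to clear the configuration; a minimal sketch:

RMAN> configure device type disk clear;

Then re-run SHOW DEVICE TYPE to confirm the default shown below.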
RMAN> show device type;
RMAN configuration parameters for database with db_unique_name NONCDB are:
CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO BACKUPSET; # default

Run the command files just created on both the destination primary site and the destination standby site to copy the files from the source PDB to the destination environments. Note the directory structure for the output files on both the primary and the standby environments (lines starting with "output file name"). These directories will be needed in a later step.
- Primary
RMAN> @backup_pdb.cmd

RMAN> connect target *
target database Password:
connected to target database: CONT4 (DBID=1644992445)

RMAN> connect auxiliary *
auxiliary database Password:
connected to auxiliary database: CONT3 (DBID=1783201232)

RMAN> backup as copy pluggable database barney auxiliary format '+data';
Starting backup at 05-SEP-13
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=204 instance=cont42 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=239 instance=cont42 device type=DISK
allocated channel: ORA_DISK_3
channel ORA_DISK_3: SID=371 instance=cont41 device type=DISK
allocated channel: ORA_DISK_4
channel ORA_DISK_4: SID=271 instance=cont42 device type=DISK
allocated channel: ORA_DISK_5
channel ORA_DISK_5: SID=402 instance=cont41 device type=DISK
allocated channel: ORA_DISK_6
channel ORA_DISK_6: SID=306 instance=cont42 device type=DISK
allocated channel: ORA_DISK_7
channel ORA_DISK_7: SID=438 instance=cont41 device type=DISK
allocated channel: ORA_DISK_8
channel ORA_DISK_8: SID=343 instance=cont42 device type=DISK
channel ORA_DISK_1: starting datafile copy
input datafile file number=00045 name=+DATAD1/CONT4/E5AA35C304D68CBBE043F646F40AEE75/DATAFILE/soets2.531.825347311
channel ORA_DISK_2: starting datafile copy
input datafile file number=00043 name=+DATAD1/CONT4/E5AA35C304D68CBBE043F646F40AEE75/DATAFILE/sysaux.284.825346821
channel ORA_DISK_3: starting datafile copy
input datafile file number=00042 name=+DATAD1/CONT4/E5AA35C304D68CBBE043F646F40AEE75/DATAFILE/system.530.825346821
output file name=+DATA/CONT3/E5AA35C304D68CBBE043F646F40AEE75/DATAFILE/system.394.825355197 tag=TAG20130905T151946 # Save this directory name to be used later
channel ORA_DISK_3: datafile copy complete, elapsed time: 00:00:49
output file name=+DATA/CONT3/E5AA35C304D68CBBE043F646F40AEE75/DATAFILE/sysaux.395.825355195 tag=TAG20130905T151946
channel ORA_DISK_2: datafile copy complete, elapsed time: 00:01:23
output file name=+DATA/CONT3/E5AA35C304D68CBBE043F646F40AEE75/DATAFILE/soets2.396.825355193 tag=TAG20130905T151946
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:06:32
Finished backup at 05-SEP-13

RMAN> **end-of-file**
- Standby
RMAN> @backup_pdb.cmd

RMAN> connect target *
target database Password:
connected to target database: CONT4 (DBID=1644992445)

RMAN> connect auxiliary *
auxiliary database Password:
connected to auxiliary database: CONT3 (DBID=1783201232)

RMAN> backup as copy pluggable database barney auxiliary format '+datad1';
Starting backup at 05-SEP-13
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=105 instance=cont42 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=207 instance=cont41 device type=DISK
allocated channel: ORA_DISK_3
channel ORA_DISK_3: SID=141 instance=cont42 device type=DISK
allocated channel: ORA_DISK_4
channel ORA_DISK_4: SID=244 instance=cont41 device type=DISK
allocated channel: ORA_DISK_5
channel ORA_DISK_5: SID=275 instance=cont41 device type=DISK
allocated channel: ORA_DISK_6
channel ORA_DISK_6: SID=307 instance=cont41 device type=DISK
allocated channel: ORA_DISK_7
channel ORA_DISK_7: SID=173 instance=cont42 device type=DISK
allocated channel: ORA_DISK_8
channel ORA_DISK_8: SID=346 instance=cont41 device type=DISK
channel ORA_DISK_1: starting datafile copy
input datafile file number=00045 name=+DATAD1/CONT4/E5AA35C304D68CBBE043F646F40AEE75/DATAFILE/soets2.531.825347311
channel ORA_DISK_2: starting datafile copy
input datafile file number=00043 name=+DATAD1/CONT4/E5AA35C304D68CBBE043F646F40AEE75/DATAFILE/sysaux.284.825346821
channel ORA_DISK_3: starting datafile copy
input datafile file number=00042 name=+DATAD1/CONT4/E5AA35C304D68CBBE043F646F40AEE75/DATAFILE/system.530.825346821
output file name=+DATAD1/CONT3STBY/E5AA35C304D68CBBE043F646F40AEE75/DATAFILE/sysaux.518.825347897 tag=TAG20130905T151815 # Save this directory name to be used later
channel ORA_DISK_2: datafile copy complete, elapsed time: 00:00:07
output file name=+DATAD1/CONT3STBY/E5AA35C304D68CBBE043F646F40AEE75/DATAFILE/system.499.825347897 tag=TAG20130905T151815
channel ORA_DISK_3: datafile copy complete, elapsed time: 00:00:07
output file name=+DATAD1/CONT3STBY/E5AA35C304D68CBBE043F646F40AEE75/DATAFILE/soets2.498.825347897 tag=TAG20130905T151815
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:35
Finished backup at 05-SEP-13

RMAN>
RMAN> **end-of-file**
- Copy the password file from the source CDB to the destination sites (both primary and standby) for the temporary instances. The following example shows the password file stored in ASM for the source CDB. The password file is extracted from ASM on the source and written to a local filesystem, then remotely copied to the destination environments (both primary and standby). For ease of use, the files should be copied to the $ORACLE_HOME/dbs directory (default location) used by the temporary instance on the destination environments and should be given the default name of orapw<$ORACLE_SID of the temporary instance>.
[oracle@source]$ asmcmd -p
ASMCMD [+] > cd datad1/cont4
ASMCMD [+datad1/cont4] > cp orapwcont4 /home/oracle/temp/orapwtemp
copying +datad1/cont4/orapwcont4 -> /home/oracle/temp/orapwtemp

[oracle@source]$ scp /home/oracle/temp/orapwtemp oracle@destination-prim:<ORACLE_HOME>/dbs
[oracle@source]$ scp /home/oracle/temp/orapwtemp oracle@destination-stby:<ORACLE_HOME>/dbs

- Create a small pfile for the temporary instances. Although the pfiles are not required to be different for the destination primary and standby environments, for clarity they are different in this example, with each temporary instance getting a different DB_UNIQUE_NAME. Note the comments for each initialization parameter setting.
- Primary
db_name=cont4 # MUST match db_name of source CDB/non-CDB
db_unique_name=junk # ONLY required if the source db is on the same system as the TEMP instance
control_files='/home/oracle/demo/control1.ctl','/home/oracle/demo/control2.ctl' # Just use local files
compatible='12.1.0.0.0' # Required if compatible is not set to default value
enable_pluggable_database=TRUE # Since the source is a CDB, we must specify the enable_pluggable_database parameter

- Standby
db_name=cont4 # MUST match db_name of source CDB/non-CDB
db_unique_name=junkstby # ONLY required if the source db is on the same system as the TEMP instance
control_files='/home/oracle/demo/control1.ctl','/home/oracle/demo/control2.ctl' # Just use local files
compatible='12.1.0.0.0' # required if compatible is not set to default value
enable_pluggable_database=TRUE # Since the source is a CDB, we must specify the enable_pluggable_database parameter
- Create a TNS alias for the destination temporary instances. This should be done on the environments where you will be running RMAN (in this example, the destination primary and standby environments). If you needed to specify DB_UNIQUE_NAME, the same value must also be specified for the SERVICE_NAME in the TNS address. Do not use the SCAN address for the host, as the temporary instances will not be registered with the SCAN listener. (UR = A) is required to allow you to connect to the service while the database is in mount/nomount mode.
- Primary
TEMP =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = primdb01.us.oracle.com)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = junk) (UR = A)
)
)

- Standby
TEMPSTBY =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = stbyadm04.us.oracle.com)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = junkstby) (UR = A)
)
)
- Startup nomount the temporary instances on the destination primary and standby environments
- Primary
$ export ORACLE_SID=temp
$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.1.0 Production on Wed Sep 4 12:51:52 2013
Copyright (c) 1982, 2013, Oracle. All rights reserved.
Connected to an idle instance.
SQL> startup nomount pfile='/home/oracle/demo/pfile.ora'
ORACLE instance started.

Total System Global Area 384200704 bytes
Fixed Size 2288536 bytes
Variable Size 322962536 bytes
Database Buffers 50331648 bytes
Redo Buffers 8617984 bytes

- Standby
$ export ORACLE_SID=tempstby
$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.1.0 Production on Wed Sep 4 10:53:06 2013
Copyright (c) 1982, 2013, Oracle. All rights reserved.
Connected to an idle instance.
SQL> startup nomount pfile='/home/oracle/demo/pfile.ora'
ORACLE instance started.

Total System Global Area 634732544 bytes
Fixed Size 2291424 bytes
Variable Size 524290336 bytes
Database Buffers 100663296 bytes
Redo Buffers 7487488 bytes
- Restore the controlfile from the source CDB to both the destination primary and standby environment temporary instances. Connect to the temporary instance using TNS (e.g. rman target sys@temp).
The FROM SERVICE clause in the RESTORE command provides the connection to the source CDB. The clause in the example uses the TNS alias for the source CDB created above. This is the only connection used and required for the source CDB in these commands.
- Primary
$ rman
Recovery Manager: Release 12.1.0.1.0 - Production on Thu Sep 5 15:02:34 2013
Copyright (c) 1982, 2013, Oracle and/or its affiliates. All rights reserved.
RMAN> connect target sys@temp
target database Password:
connected to target database: CONT4 (not mounted)

RMAN> restore controlfile from service cont4;
Starting restore at 05-SEP-13
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=790 device type=DISK

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: using network backup set from service cont4
channel ORA_DISK_1: restoring control file
channel ORA_DISK_1: restore complete, elapsed time: 00:00:07
output file name=/home/oracle/demo/control1.ctl
output file name=/home/oracle/demo/control2.ctl
Finished restore at 05-SEP-13

- Standby
$ rman
Recovery Manager: Release 12.1.0.1.0 - Production on Thu Sep 5 13:03:04 2013
Copyright (c) 1982, 2013, Oracle and/or its affiliates. All rights reserved.
RMAN> connect target sys@tempstby
target database Password:
connected to target database: CONT4 (not mounted)

RMAN> restore controlfile from service cont4;
Starting restore at 05-SEP-13
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=514 device type=DISK

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: using network backup set from service cont4
channel ORA_DISK_1: restoring control file
channel ORA_DISK_1: restore complete, elapsed time: 00:00:03
output file name=/home/oracle/demo/control1.ctl
output file name=/home/oracle/demo/control2.ctl
Finished restore at 05-SEP-13
- Mount the temporary instances on the destination primary and standby environments to allow for cataloging the cloned database files.
RMAN> alter database mount;
Statement processed
released channel: ORA_DISK_1

- Catalog the datafiles in the temporary instances on the destination primary and standby environments using "START WITH" to locate the datafile copies restored above. The directory location can be found in the BACKUP AS COPY screen output starting with "output file name".
- Primary
RMAN> catalog start with '+DATA/CONT3/E5AA35C304D68CBBE043F646F40AEE75/DATAFILE';
searching for all files that match the pattern +DATA/CONT3/E5AA35C304D68CBBE043F646F40AEE75/DATAFILE
List of Files Unknown to the Database
=====================================
File Name: +DATA/CONT3/E5AA35C304D68CBBE043F646F40AEE75/DATAFILE/SOETS2.396.825355193
File Name: +DATA/CONT3/E5AA35C304D68CBBE043F646F40AEE75/DATAFILE/SYSAUX.395.825355195
File Name: +DATA/CONT3/E5AA35C304D68CBBE043F646F40AEE75/DATAFILE/SYSTEM.394.825355197

Do you really want to catalog the above files (enter YES or NO)? yes
cataloging files...
cataloging done

List of Cataloged Files
=======================
File Name: +DATA/CONT3/E5AA35C304D68CBBE043F646F40AEE75/DATAFILE/SOETS2.396.825355193
File Name: +DATA/CONT3/E5AA35C304D68CBBE043F646F40AEE75/DATAFILE/SYSAUX.395.825355195
File Name: +DATA/CONT3/E5AA35C304D68CBBE043F646F40AEE75/DATAFILE/SYSTEM.394.825355197

- Standby
RMAN> catalog start with '+DATAD1/CONT3STBY/E5AA35C304D68CBBE043F646F40AEE75/DATAFILE';
searching for all files that match the pattern +DATAD1/CONT3STBY/E5AA35C304D68CBBE043F646F40AEE75/DATAFILE
List of Files Unknown to the Database
=====================================
File Name: +DATAD1/CONT3STBY/E5AA35C304D68CBBE043F646F40AEE75/DATAFILE/SYSAUX.518.825347897
File Name: +DATAD1/CONT3STBY/E5AA35C304D68CBBE043F646F40AEE75/DATAFILE/SOETS2.498.825347897
File Name: +DATAD1/CONT3STBY/E5AA35C304D68CBBE043F646F40AEE75/DATAFILE/SYSTEM.499.825347897

Do you really want to catalog the above files (enter YES or NO)? yes
cataloging files...
cataloging done

List of Cataloged Files
=======================
File Name: +DATAD1/CONT3STBY/E5AA35C304D68CBBE043F646F40AEE75/DATAFILE/SYSAUX.518.825347897
File Name: +DATAD1/CONT3STBY/E5AA35C304D68CBBE043F646F40AEE75/DATAFILE/SOETS2.498.825347897
File Name: +DATAD1/CONT3STBY/E5AA35C304D68CBBE043F646F40AEE75/DATAFILE/SYSTEM.499.825347897
- In the temporary instances on the destination primary and standby environments, switch the pluggable database to the datafile copies just cataloged.
- Primary
RMAN> switch pluggable database barney to copy;
datafile 42 switched to datafile copy "+DATA/CONT3/E5AA35C304D68CBBE043F646F40AEE75/DATAFILE/system.394.825355197"
datafile 43 switched to datafile copy "+DATA/CONT3/E5AA35C304D68CBBE043F646F40AEE75/DATAFILE/sysaux.395.825355195"
datafile 45 switched to datafile copy "+DATA/CONT3/E5AA35C304D68CBBE043F646F40AEE75/DATAFILE/soets2.396.825355193"

- Standby
RMAN> switch pluggable database barney to copy;
datafile 42 switched to datafile copy "+DATAD1/CONT3STBY/E5AA35C304D68CBBE043F646F40AEE75/DATAFILE/system.499.825347897"
datafile 43 switched to datafile copy "+DATAD1/CONT3STBY/E5AA35C304D68CBBE043F646F40AEE75/DATAFILE/sysaux.518.825347897"
datafile 45 switched to datafile copy "+DATAD1/CONT3STBY/E5AA35C304D68CBBE043F646F40AEE75/DATAFILE/soets2.498.825347897"
- Set parallelism on both destination primary and standby temporary instances
RMAN> configure device type disk parallelism 8;
old RMAN configuration parameters:
CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO BACKUPSET;
new RMAN configuration parameters:
CONFIGURE DEVICE TYPE DISK PARALLELISM 8 BACKUP TYPE TO BACKUPSET;
new RMAN configuration parameters are successfully stored - At periodic intervals, perform incremental applies from the source PDB to the temporary instances on the destination primary and standby environments to keep the destination datafiles somewhat current. This can be done as many times as necessary until you are ready to perform the migration. The incremental process will compare the files in the temporary instances with the files in the source PDB and determine which files need recovery applied, thus the recovery work that is done can be different in each of the primary environment temporary instance and the standby temporary instance, depending on the states of their respective files. The source non-CDB can continue application activity while the incremental applies are occurring.
The FROM SERVICE clause in the RECOVER command provides the connection to the source CDB (we do not need to connect to the source PDB). The clause in the example uses the TNS alias for the source CDB created above. This is the only connection used and required for the source PDB in these commands.
- Primary
RMAN> connect target sys@temp
target database Password:
connected to target database: CONT4 (mounted)

RMAN> recover pluggable database barney noredo from service cont4;
Starting recover at 05-SEP-13
using channel ORA_DISK_1
using channel ORA_DISK_2
using channel ORA_DISK_3
using channel ORA_DISK_4
using channel ORA_DISK_5
using channel ORA_DISK_6
using channel ORA_DISK_7
using channel ORA_DISK_8
channel ORA_DISK_1: starting incremental datafile backup set restore
channel ORA_DISK_1: using network backup set from service cont4
destination for restore of datafile 00042: +DATA/CONT3/E5AA35C304D68CBBE043F646F40AEE75/DATAFILE/system.394.825355197
channel ORA_DISK_2: starting incremental datafile backup set restore
channel ORA_DISK_2: using network backup set from service cont4
destination for restore of datafile 00043: +DATA/CONT3/E5AA35C304D68CBBE043F646F40AEE75/DATAFILE/sysaux.395.825355195
channel ORA_DISK_3: starting incremental datafile backup set restore
channel ORA_DISK_3: using network backup set from service cont4
destination for restore of datafile 00045: +DATA/CONT3/E5AA35C304D68CBBE043F646F40AEE75/DATAFILE/soets2.396.825355193
channel ORA_DISK_1: restore complete, elapsed time: 00:00:08
channel ORA_DISK_2: restore complete, elapsed time: 00:00:08
channel ORA_DISK_3: restore complete, elapsed time: 00:00:45
Finished recover at 05-SEP-13

- Standby
RMAN> connect target sys@tempstby
target database Password:
connected to target database: CONT4 (mounted)

RMAN> recover pluggable database barney noredo from service cont4;
Starting recover at 05-SEP-13
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=770 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=834 device type=DISK
allocated channel: ORA_DISK_3
channel ORA_DISK_3: SID=898 device type=DISK
allocated channel: ORA_DISK_4
channel ORA_DISK_4: SID=961 device type=DISK
allocated channel: ORA_DISK_5
channel ORA_DISK_5: SID=1026 device type=DISK
allocated channel: ORA_DISK_6
channel ORA_DISK_6: SID=1090 device type=DISK
allocated channel: ORA_DISK_7
channel ORA_DISK_7: SID=1154 device type=DISK
allocated channel: ORA_DISK_8
channel ORA_DISK_8: SID=1218 device type=DISK
channel ORA_DISK_1: starting incremental datafile backup set restore
channel ORA_DISK_1: using network backup set from service cont4
destination for restore of datafile 00042: +DATAD1/CONT3STBY/E5AA35C304D68CBBE043F646F40AEE75/DATAFILE/system.499.825347897
channel ORA_DISK_2: starting incremental datafile backup set restore
channel ORA_DISK_2: using network backup set from service cont4
destination for restore of datafile 00043: +DATAD1/CONT3STBY/E5AA35C304D68CBBE043F646F40AEE75/DATAFILE/sysaux.518.825347897
channel ORA_DISK_3: starting incremental datafile backup set restore
channel ORA_DISK_3: using network backup set from service cont4
destination for restore of datafile 00045: +DATAD1/CONT3STBY/E5AA35C304D68CBBE043F646F40AEE75/DATAFILE/soets2.498.825347897
channel ORA_DISK_1: restore complete, elapsed time: 00:00:03
channel ORA_DISK_2: restore complete, elapsed time: 00:00:03
channel ORA_DISK_3: restore complete, elapsed time: 00:00:15
Finished recover at 05-SEP-13
- Build the plugin statement to be executed on the destination primary database. Use the SOURCE_FILE_DIRECTORY clause to point to the location where the files were restored to on the destination CDB primary database environment. Note that the plugin process will search for files in the SOURCE_FILE_DIRECTORY that match the information contained in the XML file created by the unplug operation. Use the NOCOPY clause as the files are already in their intended location. The statement can be pre-created outside of the outage window.
create pluggable database barney using '/home/oracle/demo/barney_unplug.xml'
source_file_directory= '+DATA/CONT3/E5AA35C304D68CBBE043F646F40AEE75/DATAFILE'
nocopy;

If you encounter ORA-00922 on the source_file_directory clause, it indicates that the prerequisite patches have not been applied. To use these steps, fixes 16347068, 16822629, 16874123 and 16888264 must be applied to the 12c Oracle Database homes. These fixes are included in 12.1.0.1 Patch Set Update 2 and later, and are available for the 12.1.0.1 base release as merge patch 17759598.

- Shutdown the source PDB
SQL> alter pluggable database barney close instances=all;
Pluggable database altered.
- Unplug the source PDB and create the manifest XML file
SQL> alter pluggable database barney unplug into '/home/oracle/cont4/barney_unplug.xml';
Pluggable database altered.
- Perform a final incremental apply through the temporary instances on the destination primary and standby environments. The source CDB must be open during this process (the source PDB remains closed).
- Primary
RMAN> connect target sys@temp
target database Password:
connected to target database: CONT4 (DBID=1644992445, not open)

RMAN> recover pluggable database barney noredo from service cont4;
Starting recover at 05-SEP-13
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=267 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=398 device type=DISK
allocated channel: ORA_DISK_3
channel ORA_DISK_3: SID=529 device type=DISK
allocated channel: ORA_DISK_4
channel ORA_DISK_4: SID=660 device type=DISK
allocated channel: ORA_DISK_5
channel ORA_DISK_5: SID=791 device type=DISK
allocated channel: ORA_DISK_6
channel ORA_DISK_6: SID=922 device type=DISK
allocated channel: ORA_DISK_7
channel ORA_DISK_7: SID=6 device type=DISK
allocated channel: ORA_DISK_8
channel ORA_DISK_8: SID=137 device type=DISK
channel ORA_DISK_1: starting incremental datafile backup set restore
channel ORA_DISK_1: using network backup set from service cont4
destination for restore of datafile 00042: +DATA/CONT3/E5AA35C304D68CBBE043F646F40AEE75/DATAFILE/system.394.825355197
channel ORA_DISK_2: starting incremental datafile backup set restore
channel ORA_DISK_2: using network backup set from service cont4
destination for restore of datafile 00043: +DATA/CONT3/E5AA35C304D68CBBE043F646F40AEE75/DATAFILE/sysaux.395.825355195
channel ORA_DISK_3: starting incremental datafile backup set restore
channel ORA_DISK_3: using network backup set from service cont4
destination for restore of datafile 00045: +DATA/CONT3/E5AA35C304D68CBBE043F646F40AEE75/DATAFILE/soets2.396.825355193
channel ORA_DISK_1: restore complete, elapsed time: 00:00:08
channel ORA_DISK_2: restore complete, elapsed time: 00:00:08
channel ORA_DISK_3: restore complete, elapsed time: 00:00:45
Finished recover at 05-SEP-13

- Standby
RMAN> connect target sys@tempstby
target database Password:
connected to target database: CONT4 (DBID=1644992445, not open)

RMAN> recover pluggable database barney noredo from service cont4;
Starting recover at 05-SEP-13
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=770 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=834 device type=DISK
allocated channel: ORA_DISK_3
channel ORA_DISK_3: SID=898 device type=DISK
allocated channel: ORA_DISK_4
channel ORA_DISK_4: SID=961 device type=DISK
allocated channel: ORA_DISK_5
channel ORA_DISK_5: SID=1026 device type=DISK
allocated channel: ORA_DISK_6
channel ORA_DISK_6: SID=1090 device type=DISK
allocated channel: ORA_DISK_7
channel ORA_DISK_7: SID=1154 device type=DISK
allocated channel: ORA_DISK_8
channel ORA_DISK_8: SID=1218 device type=DISK
channel ORA_DISK_1: starting incremental datafile backup set restore
channel ORA_DISK_1: using network backup set from service cont4
destination for restore of datafile 00042: +DATAD1/CONT3STBY/E5AA35C304D68CBBE043F646F40AEE75/DATAFILE/system.499.825347897
channel ORA_DISK_2: starting incremental datafile backup set restore
channel ORA_DISK_2: using network backup set from service cont4
destination for restore of datafile 00043: +DATAD1/CONT3STBY/E5AA35C304D68CBBE043F646F40AEE75/DATAFILE/sysaux.518.825347897
channel ORA_DISK_3: starting incremental datafile backup set restore
channel ORA_DISK_3: using network backup set from service cont4
destination for restore of datafile 00045: +DATAD1/CONT3STBY/E5AA35C304D68CBBE043F646F40AEE75/DATAFILE/soets2.498.825347897
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
channel ORA_DISK_2: restore complete, elapsed time: 00:00:01
channel ORA_DISK_3: restore complete, elapsed time: 00:00:07
Finished recover at 05-SEP-13
- Shutdown the temporary instances
- Copy the manifest file to the destination primary host
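As a sketch of these two steps (the manifest paths are the ones used in this example; the source host name is a placeholder):

$ export ORACLE_SID=temp
$ sqlplus / as sysdba
SQL> shutdown immediate
SQL> exit
$ scp oracle@source-host:/home/oracle/cont4/barney_unplug.xml /home/oracle/demo/barney_unplug.xml

Repeat the shutdown for the standby temporary instance (ORACLE_SID=tempstby).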
- If your diskgroup names are different between your destination primary and standby, ensure that you have db_file_name_convert set on the standby specifying only the diskgroup names.
SQL> show parameter db_file_name_convert

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_file_name_convert                 string      +DATA, +DATAD1
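If the parameter is not yet set on the standby, it can be added to the spfile; DB_FILE_NAME_CONVERT is not dynamically modifiable, so the standby instances must be restarted for it to take effect. A minimal sketch using this example's diskgroup names:

SQL> alter system set db_file_name_convert='+DATA','+DATAD1' scope=spfile sid='*';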
- Plug the PDB into the new CDB using the manifest file created above. As long as the files already exist on the standby prior to the plugin, are in the correct directory (<dgname>/<dbuniqname>/<guid>/DATAFILE) and are current on the destination standby environment, no additional steps are needed for the standby. Media recovery will find the files and continue processing.
SQL> set echo on
SQL> @plugin_barney
SQL> create pluggable database barney using '/home/oracle/demo/barney_unplug.xml'
2 source_file_directory= '+DATA/CONT3/E5AA35C304D68CBBE043F646F40AEE75/DATAFILE'
3 nocopy;

Pluggable database created.
- Since the source was already a PDB, all we need to do is sync the new PDB
SQL> alter pluggable database barney open restricted;
Pluggable database altered.
SQL> alter session set container=barney;
Session altered.
SQL> exec dbms_pdb.sync_pdb();
PL/SQL procedure successfully completed.
- Open the PDB on the destination primary
The PDB will have a STATUS of NEW until the PDB is opened for the first time on the primary database, at which time the status will change to NORMAL. This initial open must be performed on the primary. A small amount of redo is generated by this initial open, which will be transported to the standby database and applied via media recovery. Until this redo is applied on the standby to change the status from NEW to NORMAL, the PDB cannot be opened on the standby database.

SQL> alter session set container=cdb$root;
Session altered.
SQL> alter pluggable database barney close;
Pluggable database altered.
SQL> alter pluggable database barney open instances=all;
Pluggable database altered.
- Open the PDB on destination standby. Ensure the status is NORMAL first. If you did not specify the TEMPFILE REUSE clause on the CREATE PLUGGABLE DATABASE statement, add a file to the TEMP tablespace on the standby.
SQL> col pdb_name format a30
SQL> select pdb_name, status from cdb_pdbs;

PDB_NAME                       STATUS
------------------------------ -------------
PDB$SEED NORMAL
STARK NORMAL
BARNEY NORMAL
MY_PDB                         NORMAL

SQL> show pdbs
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
3 STARK MOUNTED
4 MY_PDB READ ONLY NO
5 BARNEY MOUNTED
SQL> alter pluggable database barney open instances=all;

Pluggable database altered.
SQL> alter session set container=barney;
Session altered.
SQL> alter tablespace temp add tempfile '+DATAD1' size 10g;
Tablespace altered.
- Backup the PDB. It cannot be recovered on the new container database without a backup
RMAN> backup pluggable database barney;
Starting backup at 06-SEP-13
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=122 instance=cont31 device type=DISK
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00047 name=+DATA/CONT3/E5AA35C304D68CBBE043F646F40AEE75/DATAFILE/soets2.396.825355193
input datafile file number=00046 name=+DATA/CONT3/E5AA35C304D68CBBE043F646F40AEE75/DATAFILE/sysaux.395.825355195
input datafile file number=00045 name=+DATA/CONT3/E5AA35C304D68CBBE043F646F40AEE75/DATAFILE/system.394.825355197
channel ORA_DISK_1: starting piece 1 at 06-SEP-13
channel ORA_DISK_1: finished piece 1 at 06-SEP-13
piece handle=+RECO/CONT3/E5AA35C304D68CBBE043F646F40AEE75/BACKUPSET/2013_09_06/nnndf0_tag20130906t152412_0.525.825434653 tag=TAG20130906T152412 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:15
Finished backup at 06-SEP-13

Starting Control File and SPFILE Autobackup at 06-SEP-13
piece handle=+RECO/CONT3/AUTOBACKUP/2013_09_06/s_825434668.769.825434669 comment=NONE
Finished Control File and SPFILE Autobackup at 06-SEP-13
Data Pump Full Transportable Using Incremental Backups to Migrate a pre-12c non-CDB to 12c PDB
This method allows you to migrate an entire database from an 11.2.0.3 or 11.2.0.4 non-CDB directly to a 12c pluggable database. It is highly recommended that you upgrade to one of these releases to take advantage of full transportable export/import for the following reasons:
- It simplifies metadata movement significantly. With full transportable export/import, you only need a single export/import step (or network import, as we will show in the example) to move the metadata (e.g. users and tablespace definitions) along with the data to the destination database.
- Extended data for Oracle RDBMS options like Spatial and Multimedia are exported/imported as part of the process.
- The number of error messages is significantly reduced as the process recognizes Oracle objects that don't need to be exported or applied. Going forward in later releases, the messages should be reduced even more.
- Full transportable export/import supports moving encrypted tablespaces.
The time required to perform full transportable export/import is highly dependent on the number of objects to be imported into the destination environment and can be time-consuming. It is recommended to upgrade your database to 12c and use the first method documented in this note. However, full transportable export/import can be used for smaller departmental-type databases to avoid performing the upgrade process. The method requires a staging area on both the source and destination systems for the incremental backups that are taken to help reduce downtime. The staging area can be a local file system, shared storage (e.g. NFS, CFS) or ASM. A PDB must be pre-created in the 12c container database to accept the Data Pump import of the database metadata.
For direct migration of pre-12c non-CDBs, full transportable export/import is available for source databases running Oracle RDBMS 11.2.0.3 or later. If your source database is running Oracle 12c and you wish to do full transportable export/import, we recommend using RMAN to perform the file movement. See the Transporting Data Across Platforms chapter of the Oracle Database Backup and Recovery User's Guide 12c Release 1 for more information; the section of reference is Cross-Platform Transport of Tablespaces using Inconsistent Backups.
The process for moving the datafiles of the full transportable export/import set will be done using scripts provided in Note 1389592.1. Although the scripts and MOS document discuss cross-platform migration, they can be used on the same platform to provide minimal-downtime file movement. These scripts support active file copy without a staging area to prepare the destination environments for the incremental apply/roll forward. Using this process, the files can be moved without downtime to the application, except for the final incremental and the full transportable export/import steps. When performing full transportable export/import, all user/application tablespaces should be included in the transportable set. Do not include SYSTEM, SYSAUX, TEMP or any UNDO tablespace.
This process also takes advantage of the NETWORK_LINK command line option for Data Pump. This removes the need to perform a Data Pump export as the Data Pump import will connect directly to the source database to retrieve the metadata necessary for the import process.
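For reference, the eventual import combines the NETWORK_LINK, FULL and TRANSPORTABLE parameters. A hedged sketch of such a command, using names created in the steps below (the transport_datafiles value is a placeholder; the actual list must name the datafile copies produced by the roll forward process, and VERSION=12 is required for an 11.2.0.3/11.2.0.4 source):

$ impdp system@pdb11203serv network_link=sourcedb full=y transportable=always version=12 \
    metrics=y directory=mydir logfile=fullimp.log \
    transport_datafiles='+DATA/CONT3/E909A2CB24B15CBFE0437628890A510D/DATAFILE/<datafile name>'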
The environment for the following example consists of an 11.2.0.3 RAC database (fred) as the source and a 12.1.0.1 RAC CDB (cont3) with a RAC Active Data Guard standby database (cont3stby) as the destination.
Oracle RDBMS 12.1.0.2 introduced the capability to defer recovery of PDBs at the standby site. A new clause, STANDBYS=NONE, can be used when creating or plugging a PDB into a container. This clause allows you to defer migration of datafiles to destination standby CDBs. The standby will create the basic structure of the PDB, but all files will be marked as OFFLINE/RECOVER. At some future point, you can copy the files to the standby site and ENABLE RECOVERY of the PDB to add the files to the standby and begin providing Data Guard protection. For the process of migrating directly from a pre-12c non-CDB, Oracle recommends that you use the STANDBYS=NONE clause when creating the empty PDB in step 1 to simplify the migration process. If you use the deferred recovery method, you can skip the steps dealing with the physical standby and, after running the Data Pump import, handle migration of the files to the standby database. See document 1916648.1 for more information on usage of STANDBYS=NONE; a brief sketch follows.
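A minimal sketch of the deferred recovery approach, under the assumption that the syntax described in document 1916648.1 applies to your patch level. On the primary:

SQL> create pluggable database pdb11203 admin user pdbadmin identified by <admin password> standbys=none;

Later, once the files have been copied into place on the standby and the PDB is closed there, run on the standby:

SQL> alter pluggable database pdb11203 enable recovery;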
- Create an empty PDB (pdb11203) in the destination CDB (cont3).
If your disk group names are different on your primary database and standby database environments, ensure you have specified DB_FILE_NAME_CONVERT in your spfile. It is sufficient to include just the disk group name and the database unique name in the parameter values. In the following example, the primary database uses a diskgroup called +DATA and the standby uses +DATAD1. The DB_FILE_NAME_CONVERT setting for the standby database would be:
DB_FILE_NAME_CONVERT='+DATA/CONT3/','+DATAD1/CONT3STBY/'
and for the primary database it would be:
DB_FILE_NAME_CONVERT='+DATAD1/CONT3STBY/','+DATA/CONT3/'
SQL-DESTPRIM> create pluggable database pdb11203 admin user pdbadmin identified by <admin password> roles=(connect,resource);

Pluggable database created.

SQL-DESTPRIM> alter pluggable database pdb11203 open instances=all;

Pluggable database altered.
- This PDB will automatically be created on cont3stby when the "create pluggable database..." redo is applied via media recovery.
SQL-DESTPRIM> show pdbs
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
3 STARK MOUNTED
4 PDB11203                       MOUNTED

SQL-DESTPRIM> alter pluggable database pdb11203 open instances=all;
Pluggable database altered.
- Get the GUID for the new PDB to be used in the file transfer process (used for the DESTDIR and DESTDIRSTBY directory objects created later in this process)
SQL-DESTPRIM> select name, guid from v$pdbs where name='PDB11203';

NAME                          GUID
----------------------------- --------------------------------
PDB11203                      E909A2CB24B15CBFE0437628890A510D

- Create the following directory object in the SOURCE database. This will tell the transport scripts where to find the source files for copy. Grant access to the "system" user for this directory.
In some situations when files are stored in ASM, the directory structure will be listed in mixed case (e.g. +DATAD1/fred/datafile) in the database. Before creating the SRCDIR directory, query v$datafile in the SOURCE database and ensure the path specified in the CREATE DIRECTORY SRCDIR statement matches the format of the directory structure of the existing datafiles. For example, if v$datafile shows the location of all of the datafiles as +DATAD1/fred/datafile, the required CREATE DIRECTORY statement would be:

SQL-SOURCE> create directory SRCDIR as '+DATAD1/fred/datafile';

In this example the datafile paths are listed in uppercase, so the directory is created as:

SQL-SOURCE> create directory SRCDIR as '+DATAD1/FRED/DATAFILE';

Directory created.

SQL-SOURCE> grant read, write on directory SRCDIR to system;

Grant succeeded.
- Create the following directory objects in the DESTINATION primary root. Note they MUST be created in the destination primary ROOT as the transport scripts cannot connect directly to the PDB. Grant access to the "system" user for these directories. The DESTDIR ASM directory is where the files will be transported to on the destination primary environment, and DESTDIRSTBY is the ASM directory where the files will be transported to on the destination standby environment. These directories should use the GUID of the destination PDB (pdb11203).

SQL-DESTPRIM> create directory DESTDIR as '+DATA/CONT3/E909A2CB24B15CBFE0437628890A510D/DATAFILE';

Directory created.

SQL-DESTPRIM> create directory DESTDIRSTBY as '+DATAD1/CONT3STBY/E909A2CB24B15CBFE0437628890A510D/DATAFILE';

Directory created.

SQL-DESTPRIM> grant read, write on directory DESTDIR to system;

Grant succeeded.

SQL-DESTPRIM> grant read, write on directory DESTDIRSTBY to system;

Grant succeeded.
- In the DESTINATION PDB on the primary, create a directory to be used for the Data Pump import log files. These must be created in the destination PDB as the Data Pump import process, which connects to the PDB, will use this directory.
SQL-DESTPRIM-PDB> alter session set container=pdb11203;
Session altered.
SQL-DESTPRIM-PDB> create directory mydir as '/u01/app/oracle/bkup/pdb';
Directory created.
SQL-DESTPRIM-PDB> grant read, write on directory mydir to system;
Grant succeeded.
- Create a TNS alias on the DESTINATION primary and standby sites pointing back to the source non-CDB database. This alias should connect directly to the source host as opposed to using the SCAN address.
SOURCEDB =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = source1)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = fred)
)
)

- On the DESTINATION primary, create the following database link pointing back to the source database. Create the link in the root for the incremental transport process and create the same link in the PDB for the Data Pump import.
SQL-DESTPRIM> create public database link sourcedb connect to system identified by <password> using 'sourcedb';
Database link created.
SQL-DESTPRIM> select * from dual@sourcedb;
D
-
X

SQL-DESTPRIM> alter session set container=pdb11203;
Session altered.
SQL-DESTPRIM-PDB> create public database link sourcedb connect to system identified by <password> using 'sourcedb';
Database link created.
SQL-DESTPRIM-PDB> select * from dual@sourcedb;
D
-
X

- On the DESTINATION primary environment, create a service for the new PDB just created above and start it
[oracle@destprim1 transport]$ srvctl add service -d cont3 -s pdb11203serv -pdb pdb11203 -r cont31,cont32
[oracle@destprim1 transport]$ srvctl start service -d cont3 -s pdb11203serv

- On the SOURCE environment, create a TNS alias connecting to the PDB service just created. This should connect directly to the destination primary host as opposed to using the SCAN address.
PDB11203SERV =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = destprim1)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = pdb11203serv)
)
)

- On the SOURCE database, create a database link to the destination primary using the TNS alias created above.
SQL-SOURCE> create public database link pdb11203serv connect to system identified by <password> using 'pdb11203serv';
Database link created.
SQL-SOURCE> select * from dual@pdb11203serv;
D
-
X

- On the DESTINATION primary, create a TNS alias to connect to the PDB service just created. This alias should connect directly to the destination primary host as opposed to using the SCAN address.
PDB11203SERV =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = destprim1)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = pdb11203serv)
)
)

- On the DESTINATION primary, create a database link in the destination PDB using the TNS alias just created pointing to the destination PDB. This will be used by the Data Pump import process to connect directly to the destination PDB.
SQL-DESTPRIM> alter session set container=pdb11203;
Session altered.
SQL-DESTPRIM> create public database link pdb11203serv connect to system identified by <password> using 'pdb11203serv';
Database link created.
SQL-DESTPRIM> select * from dual@pdb11203serv;
D
-
X - The XTTS script zip contains a file named xtt.properties that will be used by the scripts to specify different characteristics of the environment. Starting on the SOURCE environment, modify the following entries in the xtt.properties file to be used when
running the script on the source:- Set tablespaces= to the list of tablespaces you will be migrating. For full transportable export/import, all tablespaces except SYSTEM, SYSAUX, TEMP and any UNDO should be listed.
tablespaces=USERS,SOETS,SOETS2,SOETS3,SOETS4,SOETS5
- Set platformid= to the platform_id you are running on, e.g. 13 for Linux 64-bit. This value can be obtained from V$DATABASE.PLATFORM_ID
platformid=13
- Set dfcopydir= to the directory object created on the DESTINATION in Step 5 to hold the datafile copies for the restore. Note this value will be different from that specified on the SOURCE due to changes in the diskgroup name (maybe) and db_unique_name (definitely). In this example this is DESTDIR, which is defined as +DATA/CONT3/E909A2CB24B15CBFE0437628890A510D/DATAFILE, where E909A2CB24B15CBFE0437628890A510D is the GUID of the PDB11203 PDB that was created. This MUST be set when using the -G option of the XTTS scripts so that dbms_file_transfer.put_file knows where to write the files on the DESTINATION side. It is enough to set this for the primary destination at this time; we will manually modify the files generated by the script execution for the destination standby.
dfcopydir=DESTDIR
- Set backupformat= to the ASM directory where incremental backups will be stored on the SOURCE. Before running the xttdriver.pl script, ensure this directory exists in ASM on the source.
backupformat=+RECOD1/fred/TRANSPORT
- Set stageondest= to the ASM directory where the incremental backups will be stored and applied from on the DESTINATION PRIMARY environment. This just needs to be set to some accessible location on the DESTINATION PRIMARY; it will not be used directly by the SOURCE.
stageondest=+RECO
- Set storageondest= to the ASM directory where the datafiles in the full transportable export/import will be written on the DESTINATION PRIMARY environment. This just needs to be set to some accessible location on the DESTINATION PRIMARY; it will not be used directly by the SOURCE.
storageondest=+DATA
- Set backupondest= to the location where the incremental backups and roll forward tracking files will be written on the DESTINATION PRIMARY environment. On the SOURCE this just needs to be set to some accessible location.
backupondest=+RECO
- Set asm_home= to the Grid Infrastructure home on the SOURCE system
asm_home=/u01/app/12.1.0.2/grid
- Set asm_sid= to the ORACLE_SID of the SOURCE local ASM instance
asm_sid=+ASM1
- Set srcdir= to the name of the directory created on the SOURCE in Step 4 to tell the process where to find the SOURCE files
srcdir=SRCDIR
- Set dstdir= to the directory object created above on the DESTINATION PRIMARY in Step 5 to show where the files in the full transportable export/import will be written to for the DESTINATION PRIMARY. The xttdriver.pl script allows multiple SRCDIR and DSTDIR specifications and the driver script validates settings of SRCDIR locations and DSTDIR locations specified when the script is run. Adding this entry now avoids an error when the driver script is executed.
dstdir=DESTDIR
- Set srclink= to the database link on the SOURCE in Step 8 pointing back to the SOURCE database. This will be used to build statements to pull data from the source
srclink=sourcedb
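Putting the entries above together, the SOURCE copy of xtt.properties would look similar to the following (all values are the ones used in this example):

tablespaces=USERS,SOETS,SOETS2,SOETS3,SOETS4,SOETS5
platformid=13
dfcopydir=DESTDIR
backupformat=+RECOD1/fred/TRANSPORT
stageondest=+RECO
storageondest=+DATA
backupondest=+RECO
asm_home=/u01/app/12.1.0.2/grid
asm_sid=+ASM1
srcdir=SRCDIR
dstdir=DESTDIR
srclink=sourcedb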
- Save the xtt.properties file and copy it from the SOURCE environment to the DESTINATION PRIMARY and STANDBY hosts where the zip file was extracted.
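As a sketch of the copy (destprim1 is the destination primary host used in this example's prompts; the standby host name and the extract directory /home/oracle/transport are assumptions):

$ scp xtt.properties oracle@destprim1:/home/oracle/transport/
$ scp xtt.properties oracle@deststby1:/home/oracle/transport/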
- Modify the following entries in the xtt.properties file on the DESTINATION PRIMARY host
- Set stageondest= to the ASM directory where the incremental backups will be stored on the DESTINATION PRIMARY environment. This directory must be precreated in the ASM environment on the DESTINATION PRIMARY
stageondest=+RECO/CONT3/TRANSPORT
- Set backupondest= to some local filesystem that the process can write to on the DESTINATION. The rollforward writes a file to this location to keep track of progress. This should not be on ASM and the directory should be precreated before running the driver script.
backupondest=/u01/app/oracle/bkup/transport
- Set asm_home= to the Grid Infrastructure home on the DESTINATION primary system
asm_home=/u01/app/12.1.0.2/grid
- Set asm_sid= to the ORACLE_SID of the DESTINATION PRIMARY local ASM instance
asm_sid=+ASM1
- Modify the following entries in the xtt.properties file on the DESTINATION STANDBY host
- Set dfcopydir= to the directory object created on the DESTINATION PRIMARY in Step 5 to hold the datafile copies for the restore to the DESTINATION STANDBY. Note this value will be different from that specified on the primary due to changes in the diskgroup name (maybe) and db_unique_name (definitely). In this example this is DESTDIRSTBY, which is defined as +DATAD1/CONT3STBY/E909A2CB24B15CBFE0437628890A510D/DATAFILE, where E909A2CB24B15CBFE0437628890A510D is the GUID of the pdb11203 PDB that was created. This MUST be done on the DESTINATION STANDBY copy of the xtt.properties file.
dfcopydir=DESTDIRSTBY
- Set stageondest= to the ASM directory where the incremental backups will be stored on the DESTINATION STANDBY environment. This directory must be precreated in the ASM environment on the DESTINATION STANDBY
stageondest=+RECOD1/CONT3STBY/TRANSPORT
- Set backupondest= to some local filesystem that the process can write to on the DESTINATION STANDBY environment. The rollforward writes a file to this location to keep track of progress. This should not be on ASM
backupondest=/u01/app/oracle/bkup/transport
- Set asm_home= to the Grid Infrastructure home on the DESTINATION STANDBY system
asm_home=/u01/app/12.1.0.2/grid
- Set asm_sid= to the ORACLE_SID of the DESTINATION STANDBY local ASM instance
asm_sid=+ASM2
- Set dstdir= to the directory object created above on the DESTINATION PRIMARY in Step 5 to show where the files included in the full transportable export/import will be written to for the DESTINATION STANDBY
dstdir=DESTDIRSTBY
- Verify that there are no Time Zone issues between the SOURCE database objects and the DESTINATION environment. If the DB Time Zones do not match, tables with date/timestamp columns will not be imported. Compare the DBTIMEZONE values in the SOURCE non-CDB and the new DESTINATION PRIMARY PDB. If they do not match, change the setting in the DESTINATION PRIMARY PDB to match that in the SOURCE non-CDB, then bounce the DESTINATION PRIMARY PDB.
- Source
SQL-SOURCE> select dbtimezone from dual;
DBTIME
------
-06:00
- Destination PDB
SQL-DESTPRIM-PDB> select dbtimezone from dual;
DBTIME
------
-04:00
- Modify the PDB11203 timezone to match the SOURCE
SQL-DESTPRIM-PDB> alter database set time_zone='-06:00';
Database altered.
SQL-DESTPRIM-PDB> alter session set container=cdb$root;
Session altered.
SQL-DESTPRIM-PDB> alter pluggable database PDB11203 close instances=all;
Pluggable database altered.
SQL-DESTPRIM-PDB> c/close/open
1* alter pluggable database pdb11203 open instances=all
SQL-DESTPRIM-PDB> /
Pluggable database altered.
SQL-DESTPRIM-PDB> alter session set container=pdb11203;
Session altered.
SQL-DESTPRIM-PDB> select dbtimezone from dual;
DBTIME
------
-06:00
- Use the XTTS scripts to create incremental backups on the SOURCE and roll the cloned files forward on the DESTINATION environments.
The tablespaces to be retrieved from the SOURCE non-CDB MUST be in READ WRITE mode for the initial copy; they cannot be open read only. The script will perform an active copy of the files over the network from the SOURCE non-CDB to the DESTINATION environment with DBMS_FILE_TRANSFER.GET_FILE, driven by the settings in the xtt.properties files. The clone of the files must be performed to both the DESTINATION PRIMARY and the DESTINATION STANDBY site. This process requires staging areas for the incremental backups on all environments. It is possible to use shared storage such as NFS to reduce space utilization or avoid the need to copy files from host to host. The staging area locations are determined by the settings in the xtt.properties file on each host. The incremental creation and roll forward can be done as many times as necessary until you are ready to perform the import. These incrementals can be performed with all of the tablespaces in READ WRITE, so applications may continue processing while the incrementals are created and the roll forward process is performed; however, the final incremental apply prior to the Data Pump import MUST be done with the SOURCE tablespaces in READ ONLY mode. These steps are also documented in Note 1389592.1 as part of Phase 3.
- Perform initial setup on one node each of the SOURCE, DESTINATION PRIMARY and DESTINATION STANDBY environments by setting the TMPDIR environment variable to a working directory
$ export TMPDIR=/home/oracle/transport/tmp
- Ensure ALL tablespaces on the SOURCE that are included in the xtt.properties files are open READ WRITE
SQL-SOURCE> select name, enabled from v$datafile where enabled != 'READ WRITE';
no rows selected
- On the SOURCE, run xttdriver.pl -S to create the files to be pushed to the DESTINATION sites for their local script execution.
[oracle@source1 transport]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl -S
Prepare source for Tablespaces:
'USERS','SOETS' +RECO
xttpreparesrc.sql for 'USERS','SOETS' started at Fri Oct 18 14:42:56 2013
xttpreparesrc.sql for ended at Fri Oct 18 14:42:56 2013
Prepare source for Tablespaces:
'SOETS2','SOETS3' +RECO
xttpreparesrc.sql for 'SOETS2','SOETS3' started at Fri Oct 18 14:42:56 2013
xttpreparesrc.sql for ended at Fri Oct 18 14:42:56 2013
Prepare source for Tablespaces:
'SOETS4','SOETS5' +RECO
xttpreparesrc.sql for 'SOETS4','SOETS5' started at Fri Oct 18 14:42:56 2013
xttpreparesrc.sql for ended at Fri Oct 18 14:42:56 2013
- Copy the getfile.sql and xttnewdatafiles.txt from the $TMPDIR location on the SOURCE to the $TMPDIR location on both the DESTINATION PRIMARY and the DESTINATION STANDBY
[oracle@source1 transport]$ cat scp.sh
scp $TMPDIR/getfile.sql destprim1:~/transport/tmp
scp $TMPDIR/getfile.sql deststby1:~/transport/tmp
scp $TMPDIR/xttnewdatafiles.txt deststby1:~/transport/tmp
scp $TMPDIR/xttnewdatafiles.txt destprim1:~/transport/tmp
[oracle@source1 transport]$ ./scp.sh
getfile.sql 100% 2797 2.7KB/s 00:00
getfile.sql 100% 2797 2.7KB/s 00:00
xttnewdatafiles.txt 100% 243 0.2KB/s 00:00
xttnewdatafiles.txt 100% 243 0.2KB/s 00:00
- On the DESTINATION STANDBY, modify each entry in the getfile.sql file and the xttnewdatafiles.txt to use the DESTDIRSTBY directory object created above. This will ensure the files will be located in the correct ASM diskgroup/directory structure for the DESTINATION STANDBY
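One way to apply this substitution in bulk (a sketch, assuming the files as copied reference the DESTDIR object and contain no other occurrences of that string):
[oracle@deststby1 transport]$ sed -i 's/DESTDIR/DESTDIRSTBY/g' $TMPDIR/getfile.sql $TMPDIR/xttnewdatafiles.txt
After editing, the entries should look like the following: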
getfile.sql:
0,SRCDIR,soets1.319.878309815,DESTDIRSTBY,soets1_319_878309815
xttnewdatafiles.txt
::USERS
5,DESTDIRSTBY/users_324_878299557
- On the DESTINATION PRIMARY and DESTINATION STANDBY, run $ORACLE_HOME/perl/bin/perl xttdriver.pl -G to get the files from the source. This can take some time depending on file size, number of files and distance between SOURCE and DESTINATION sites. Files will be retrieved across the network and stored in the DESTDIR/DESTDIRSTBY directory created above. After executing the script, ensure that the xttnewdatafiles.txt file has the correct path inserted into the file name for each file; this should match the path for either DESTDIR or DESTDIRSTBY as appropriate (a verification sketch follows the transcripts below). If the values are incorrect, the files may have been put in an incorrect location and later steps will fail. This step requires that the destination database be open so it can query ALL_DIRECTORIES to match the directory object name with the directory object path created in Step 5.
- Primary - Note the files are being written to the diskgroup, DB_UNIQUE_NAME and PDB GUID of the DESTINATION PRIMARY
[oracle@destprim1 transport]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl -G
--------------------------------------------------------------------
Parsing properties
----------------------------------------------------------------------------------------------------------------------------------------
Done parsing properties
----------------------------------------------------------------------------------------------------------------------------------------
Checking properties
----------------------------------------------------------------------------------------------------------------------------------------
Done checking properties
----------------------------------------------------------------------------------------------------------------------------------------
Getting datafiles from source
----------------------------------------------------------------------------------------------------------------------------------------
Executing getfile for getfile_srcdir_soets1.319.878309815_0.sql
----------------------------------------------------------------------------------------------------------------------------------------
Executing getfile for getfile_srcdir_soets2.268.878309843_1.sql
----------------------------------------------------------------------------------------------------------------------------------------
Executing getfile for getfile_srcdir_soets3.356.878309865_2.sql
----------------------------------------------------------------------------------------------------------------------------------------
Executing getfile for getfile_srcdir_soets4.354.878309885_3.sql
----------------------------------------------------------------------------------------------------------------------------------------
Executing getfile for getfile_srcdir_soets5.384.878309905_4.sql
----------------------------------------------------------------------------------------------------------------------------------------
Executing getfile for getfile_srcdir_users.324.878299557_5.sql
----------------------------------------------------------------------------------------------------------------------------------------
Completed getting datafiles from source
--------------------------------------------------------------------
- Standby - Note the files are being written to the diskgroup, DB_UNIQUE_NAME and PDB GUID of the DESTINATION STANDBY
[oracle@deststby1 transport]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl -G
--------------------------------------------------------------------
Parsing properties
----------------------------------------------------------------------------------------------------------------------------------------
Done parsing properties
----------------------------------------------------------------------------------------------------------------------------------------
Checking properties
----------------------------------------------------------------------------------------------------------------------------------------
Done checking properties
----------------------------------------------------------------------------------------------------------------------------------------
Getting datafiles from source
----------------------------------------------------------------------------------------------------------------------------------------
Executing getfile for getfile_srcdir_soets1.319.878309815_0.sql
----------------------------------------------------------------------------------------------------------------------------------------
Executing getfile for getfile_srcdir_soets2.268.878309843_1.sql
----------------------------------------------------------------------------------------------------------------------------------------
Executing getfile for getfile_srcdir_soets3.356.878309865_2.sql
----------------------------------------------------------------------------------------------------------------------------------------
Executing getfile for getfile_srcdir_soets4.354.878309885_3.sql
----------------------------------------------------------------------------------------------------------------------------------------
Executing getfile for getfile_srcdir_soets5.384.878309905_4.sql
----------------------------------------------------------------------------------------------------------------------------------------
Executing getfile for getfile_srcdir_users.324.878299557_5.sql
----------------------------------------------------------------------------------------------------------------------------------------
Completed getting datafiles from source
--------------------------------------------------------------------
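As a quick verification (a sketch, assuming DESTDIR resolves to the +DATA/CONT3/<GUID>/DATAFILE path used later in this example), the retrieved copies can be listed directly in ASM on the DESTINATION PRIMARY and compared against the paths recorded in $TMPDIR/xttnewdatafiles.txt:
ASMCMD> ls +DATA/CONT3/E909A2CB24B15CBFE0437628890A510D/DATAFILE/
The equivalent check on the DESTINATION STANDBY would use the DESTDIRSTBY path.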
- Repeat the following as many times as necessary until ready to complete. These steps are located in note 1389592.1 (Phase 3)
- Run incremental create on the SOURCE
[oracle@source1 transport]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl -i
--------------------------------------------------------------------
Parsing properties
----------------------------------------------------------------------------------------------------------------------------------------
Done parsing properties
----------------------------------------------------------------------------------------------------------------------------------------
Checking properties
----------------------------------------------------------------------------------------------------------------------------------------
Done checking properties
----------------------------------------------------------------------------------------------------------------------------------------
Backup incremental
--------------------------------------------------------------------
Prepare newscn for Tablespaces: 'USERS','SOETS'
Prepare newscn for Tablespaces: 'SOETS2','SOETS3'
Prepare newscn for Tablespaces: 'SOETS4','SOETS5'
rman target / cmdfile /home/oracle/transport/tmp/rmanincr.cmd
Recovery Manager: Release 11.2.0.3.0 - Production on Fri Oct 18 15:33:34 2013
Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.
connected to target database: FRED (DBID=1053879784)
RMAN> set nocfau;
2> host 'echo ts::USERS';
3> backup incremental from scn 34888639
4> tag tts_incr_update tablespace 'USERS' format
5> '+RECOD1/fred/TRANSPORT/%U';
6> set nocfau;
7> host 'echo ts::SOETS';
8> backup incremental from scn 34888639
9> tag tts_incr_update tablespace 'SOETS' format
10> '+RECOD1/fred/TRANSPORT/%U';
11> set nocfau;
12> host 'echo ts::SOETS2';
13> backup incremental from scn 34888639
14> tag tts_incr_update tablespace 'SOETS2' format
15> '+RECOD1/fred/TRANSPORT/%U';
16> set nocfau;
17> host 'echo ts::SOETS3';
18> backup incremental from scn 34888639
19> tag tts_incr_update tablespace 'SOETS3' format
20> '+RECOD1/fred/TRANSPORT/%U';
21> set nocfau;
22> host 'echo ts::SOETS4';
23> backup incremental from scn 34888639
24> tag tts_incr_update tablespace 'SOETS4' format
25> '+RECOD1/fred/TRANSPORT/%U';
26> set nocfau;
27> host 'echo ts::SOETS5';
28> backup incremental from scn 34888639
29> tag tts_incr_update tablespace 'SOETS5' format
30> '+RECOD1/fred/TRANSPORT/%U';
31>
executing command: SET NOCFAU
using target database control file instead of recovery catalog
ts::USERS
host command complete
Starting backup at 18-OCT-13
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=563 instance=fred1 device type=DISK
backup will be obsolete on date 25-OCT-13
archived logs will not be kept or backed up
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00005 name=+DATAD1/fred/datafile/users.409.820573579
channel ORA_DISK_1: starting piece 1 at 18-OCT-13
channel ORA_DISK_1: finished piece 1 at 18-OCT-13
piece handle=+RECOD1/fred/transport/57omnl6f_1_1 tag=TTS_INCR_UPDATE comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
using channel ORA_DISK_1
backup will be obsolete on date 25-OCT-13
archived logs will not be kept or backed up
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
including current control file in backup set
channel ORA_DISK_1: starting piece 1 at 18-OCT-13
channel ORA_DISK_1: finished piece 1 at 18-OCT-13
piece handle=+RECOD1/fred/transport/58omnl6h_1_1 tag=TTS_INCR_UPDATE comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 18-OCT-13
executing command: SET NOCFAU
ts::SOETS
host command complete
Starting backup at 18-OCT-13
using channel ORA_DISK_1
backup will be obsolete on date 25-OCT-13
archived logs will not be kept or backed up
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00006 name=+DATAD1/fred/datafile/soets.411.820575221
channel ORA_DISK_1: starting piece 1 at 18-OCT-13
channel ORA_DISK_1: finished piece 1 at 18-OCT-13
piece handle=+RECOD1/fred/transport/59omnl6k_1_1 tag=TTS_INCR_UPDATE comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:07
using channel ORA_DISK_1
backup will be obsolete on date 25-OCT-13
archived logs will not be kept or backed up
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
including current control file in backup set
channel ORA_DISK_1: starting piece 1 at 18-OCT-13
channel ORA_DISK_1: finished piece 1 at 18-OCT-13
piece handle=+RECOD1/fred/transport/5aomnl6r_1_1 tag=TTS_INCR_UPDATE comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 18-OCT-13
executing command: SET NOCFAU
ts::SOETS2
host command complete
<SNIP>
ts::SOETS5
host command complete
Starting backup at 18-OCT-13
using channel ORA_DISK_1
backup will be obsolete on date 25-OCT-13
archived logs will not be kept or backed up
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00010 name=+DATAD1/fred/datafile/soets5.536.825789101
channel ORA_DISK_1: starting piece 1 at 18-OCT-13
channel ORA_DISK_1: finished piece 1 at 18-OCT-13
piece handle=+RECOD1/fred/transport/5homnl8b_1_1 tag=TTS_INCR_UPDATE comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:15
using channel ORA_DISK_1
backup will be obsolete on date 25-OCT-13
archived logs will not be kept or backed up
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
including current control file in backup set
channel ORA_DISK_1: starting piece 1 at 18-OCT-13
channel ORA_DISK_1: finished piece 1 at 18-OCT-13
piece handle=+RECOD1/fred/transport/5iomnl8q_1_1 tag=TTS_INCR_UPDATE comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 18-OCT-13
Recovery Manager complete.
- Copy the xttplan.txt and tsbkupmap.txt files from the $TMPDIR on the SOURCE to the $TMPDIR on the DESTINATION PRIMARY and DESTINATION STANDBY
[oracle@source1 transport]$ cat scp2.sh
scp $TMPDIR/xttplan.txt destprim1:~/transport/tmp
scp $TMPDIR/xttplan.txt deststby1:~/transport/tmp
scp $TMPDIR/tsbkupmap.txt deststby1:~/transport/tmp
scp $TMPDIR/tsbkupmap.txt destprim1:~/transport/tmp
[oracle@source1 transport]$ ./scp2.sh
xttplan.txt 100% 125 0.1KB/s 00:00
xttplan.txt 100% 125 0.1KB/s 00:00
tsbkupmap.txt 100% 137 0.1KB/s 00:00
tsbkupmap.txt 100% 137 0.1KB/s 00:00
- Copy the incremental backupsets from the SOURCE to the DESTINATION PRIMARY and DESTINATION STANDBY. The following example will take all of the incremental backup pieces listed in incrbackups.txt and push them across. Note that it will ONLY work if the ASM versions are the same; otherwise you will have to copy the files to a filesystem and push them across (a sketch of that alternative follows the example output below). The following is an example of code that can be used to copy from ASM instance to ASM instance. The code was put into shell scripts with the appropriate destination host names and directory structure for the primary and standby environments. Run the scripts from your source system with the environment set to point to the GRID HOME for the following example to work.
for i in `cat ./tmp/incrbackups.txt`; do
asmcmd cp $i sys/<password>@destprim1.+ASM1:+RECO/CONT3/TRANSPORT/
done
[oracle@sourcehost1 transport]$ ./cp_standby_asm.sh
copying +RECOD1/fred/TRANSPORT/5fomnl7p_1_1 -> +RECOD1/CONT3STBY/TRANSPORT/5fomnl7p_1_1
copying +RECOD1/fred/TRANSPORT/57omnl6f_1_1 -> +RECOD1/CONT3STBY/TRANSPORT/57omnl6f_1_1
copying +RECOD1/fred/TRANSPORT/5homnl8b_1_1 -> +RECOD1/CONT3STBY/TRANSPORT/5homnl8b_1_1
copying +RECOD1/fred/TRANSPORT/5domnl78_1_1 -> +RECOD1/CONT3STBY/TRANSPORT/5domnl78_1_1
copying +RECOD1/fred/TRANSPORT/5bomnl6u_1_1 -> +RECOD1/CONT3STBY/TRANSPORT/5bomnl6u_1_1
copying +RECOD1/fred/TRANSPORT/59omnl6k_1_1 -> +RECOD1/CONT3STBY/TRANSPORT/59omnl6k_1_1
[oracle@sourcehost1 transport]$ ./cp_primary_asm.sh
copying +RECOD1/fred/TRANSPORT/5fomnl7p_1_1 -> nshexdak4db01:+RECO/CONT3/TRANSPORT/5fomnl7p_1_1
copying +RECOD1/fred/TRANSPORT/57omnl6f_1_1 -> nshexdak4db01:+RECO/CONT3/TRANSPORT/57omnl6f_1_1
copying +RECOD1/fred/TRANSPORT/5homnl8b_1_1 -> nshexdak4db01:+RECO/CONT3/TRANSPORT/5homnl8b_1_1
copying +RECOD1/fred/TRANSPORT/5domnl78_1_1 -> nshexdak4db01:+RECO/CONT3/TRANSPORT/5domnl78_1_1
copying +RECOD1/fred/TRANSPORT/5bomnl6u_1_1 -> nshexdak4db01:+RECO/CONT3/TRANSPORT/5bomnl6u_1_1
copying +RECOD1/fred/TRANSPORT/59omnl6k_1_1 -> nshexdak4db01:+RECO/CONT3/TRANSPORT/59omnl6k_1_1
Once the files have been copied to the DESTINATION sites, they can be deleted from the SOURCE site to save space.
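If the ASM versions differ, the backup pieces can be staged through a local filesystem instead. A sketch, assuming a /home/oracle/transport/stage directory exists on the source and destination hosts and the environment on each host is set so asmcmd can reach the local ASM instance:
[oracle@sourcehost1 transport]$ for i in `cat ./tmp/incrbackups.txt`; do asmcmd cp $i /home/oracle/transport/stage/; done
[oracle@sourcehost1 transport]$ scp /home/oracle/transport/stage/* destprim1:/home/oracle/transport/stage/
[oracle@destprim1 transport]$ for f in /home/oracle/transport/stage/*; do asmcmd cp $f +RECO/CONT3/TRANSPORT/; done
Repeat the scp and the final copy into ASM for the DESTINATION STANDBY staging area.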
- Perform a rollforward to apply the changes to the copies on the DESTINATION sites. The environment should be set to point to the destination database $ORACLE_HOME.
The rollforward process requires that the instance being used be started in NOMOUNT mode. You can use your destination instance or, to avoid downtime for other PDBs and their applications, use a temporary instance. In the following example a temporary instance is created on the destination primary and standby environments and used to perform the rollforward.
The ASMCMD-8001 errors below occur because there are no conversion files being created (source and destination platform are the same), so there are no files to be removed. These errors will appear for each datafile that is included in the migration set. These errors can be ignored.
- DESTINATION PRIMARY
[oracle@destprim1 transport]$ export ORACLE_SID=xtt
[oracle@destprim1 transport]$ cat << EOF > $ORACLE_HOME/dbs/init$ORACLE_SID.ora
db_name=xtt
compatible=12.1.0.2.0
EOF
[oracle@destprim1 transport]$ sqlplus / as sysdba
SQL> startup nomount
SQL> exit
[oracle@destprim1 transport]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl -r -L
--------------------------------------------------------------------
Parsing properties
----------------------------------------------------------------------------------------------------------------------------------------
Done parsing properties
----------------------------------------------------------------------------------------------------------------------------------------
Checking properties
----------------------------------------------------------------------------------------------------------------------------------------
Done checking properties
----------------------------------------------------------------------------------------------------------------------------------------
Start rollforward
--------------------------------------------------------------------
ASMCMD-8001: diskgroup 'u01' does not exist or is not mounted
ASMCMD-8001: diskgroup 'u01' does not exist or is not mounted
ASMCMD-8001: diskgroup 'u01' does not exist or is not mounted
ASMCMD-8001: diskgroup 'u01' does not exist or is not mounted
ASMCMD-8001: diskgroup 'u01' does not exist or is not mounted
ASMCMD-8001: diskgroup 'u01' does not exist or is not mounted
--------------------------------------------------------------------
End of rollforward phase
--------------------------------------------------------------------
- DESTINATION STANDBY
[oracle@deststby1 transport]$ export ORACLE_SID=xtt
[oracle@deststby1 transport]$ cat << EOF > $ORACLE_HOME/dbs/init$ORACLE_SID.ora
db_name=xtt
compatible=12.1.0.2.0
EOF
[oracle@deststby1 transport]$ sqlplus / as sysdba
SQL> startup nomount
SQL> exit
[oracle@deststby1 transport]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl -r -L
--------------------------------------------------------------------
Parsing properties
----------------------------------------------------------------------------------------------------------------------------------------
Done parsing properties
----------------------------------------------------------------------------------------------------------------------------------------
Checking properties
----------------------------------------------------------------------------------------------------------------------------------------
Done checking properties
----------------------------------------------------------------------------------------------------------------------------------------
Start rollforward
--------------------------------------------------------------------
ASMCMD-8001: diskgroup 'u01' does not exist or is not mounted
ASMCMD-8001: diskgroup 'u01' does not exist or is not mounted
ASMCMD-8001: diskgroup 'u01' does not exist or is not mounted
ASMCMD-8001: diskgroup 'u01' does not exist or is not mounted
ASMCMD-8001: diskgroup 'u01' does not exist or is not mounted
ASMCMD-8001: diskgroup 'u01' does not exist or is not mounted
--------------------------------------------------------------------
End of rollforward phase
--------------------------------------------------------------------
- On the SOURCE system run the script to determine the new FROM_SCN for the next incremental backup execution.
[oracle@sourcehost1 transport]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl -s
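The refreshed FROM_SCN values are recorded in xttplan.txt under $TMPDIR; as a sanity check before re-copying that file to the DESTINATION sites (a sketch, assuming the default file locations):
[oracle@sourcehost1 transport]$ cat $TMPDIR/xttplan.txt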
- When ready to complete the process do the following:
- Perform the final incremental creation on the SOURCE and apply the incrementals to the DESTINATION environments.
- Put all tablespaces on the SOURCE non-CDB to be transported into READ ONLY mode
SQL-SOURCE> alter tablespace soets read only;
Tablespace altered.
SQL-SOURCE> alter tablespace soets2 read only;
Tablespace altered.
SQL-SOURCE> alter tablespace soets3 read only;
Tablespace altered.
SQL-SOURCE> alter tablespace soets4 read only;
Tablespace altered.
SQL-SOURCE> alter tablespace soets5 read only;
Tablespace altered.
SQL-SOURCE> alter tablespace users read only;
Tablespace altered.
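Before taking the final incremental, confirm that no transported tablespace remains READ WRITE (a sketch; the filter mirrors the tablespaces= setting by excluding SYSTEM, SYSAUX, TEMP and UNDO):
SQL-SOURCE> select tablespace_name, status from dba_tablespaces where contents = 'PERMANENT' and tablespace_name not in ('SYSTEM','SYSAUX') and status != 'READ ONLY';
no rows selected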
- Perform final incremental and roll forward using SAME STEPS AS ABOVE
- Create the incremental backups on the SOURCE
- Copy the xttplan.txt and tsbkupmap.txt from the SOURCE to the DESTINATION sites
- Copy the incremental backups to the DESTINATION sites (if not using shared storage)
- Run the rollforward on the DESTINATION sites. There is no need to run the step to determine a new FROM_SCN.
- Shut down the temporary instances
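For the temporary instances used in this example (a sketch; run on both the DESTINATION PRIMARY and DESTINATION STANDBY hosts):
[oracle@destprim1 transport]$ export ORACLE_SID=xtt
[oracle@destprim1 transport]$ sqlplus / as sysdba
SQL> shutdown abort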
- Perform the full transportable export/import on the DESTINATION, connecting to the PDB using the link created in Step 8 that points to the SOURCE as the NETWORK_LINK= and the directory created in the DESTINATION PDB in Step 6 for the Data Pump logfile. Specify all the datafiles that have been copied and rolled forward in the TRANSPORT_DATAFILES clause; these files MUST be the file names on the DESTINATION PRIMARY site. Use FULL=Y to perform the full transportable export/import. VERSION=12 is only required when the source is 11.2.0.3 or 11.2.0.4; for 12c source databases it can be omitted.
[oracle@destprim1 transport]$ impdp \"sys/welcome1@pdb11203serv as sysdba\" network_link=sourcedb version=12 full=y transportable=always metrics=y exclude=statistics logfile=mydir:xttspdb.log transport_datafiles='+DATA/CONT3/E909A2CB24B15CBFE0437628890A510D/DATAFILE/users_409_820573579','+DATA/CONT3/E909A2CB24B15CBFE0437628890A510D/DATAFILE/soets_411_820575221','+DATA/CONT3/E909A2CB24B15CBFE0437628890A510D/DATAFILE/soets2_539_825789091','+DATA/CONT3/E909A2CB24B15CBFE0437628890A510D/DATAFILE/soets3_538_825789093','+DATA/CONT3/E909A2CB24B15CBFE0437628890A510D/DATAFILE/soets4_537_825789097','+DATA/CONT3/E909A2CB24B15CBFE0437628890A510D/DATAFILE/soets5_536_825789101'
Some errors will be seen (e.g. a failed attempt to create the undo tablespace, some failures for APEX object creation, errors for ORDDATA objects already existing) but these can be ignored.
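With a command line this long, shell quoting is error prone; the same options can instead be supplied through a Data Pump parameter file (a sketch; xtts.par is a hypothetical file name):
[oracle@destprim1 transport]$ cat xtts.par
# hypothetical parameter file; the remaining transport_datafiles entries follow the same pattern
network_link=sourcedb
version=12
full=y
transportable=always
metrics=y
exclude=statistics
logfile=mydir:xttspdb.log
transport_datafiles='+DATA/CONT3/E909A2CB24B15CBFE0437628890A510D/DATAFILE/users_409_820573579'
transport_datafiles='+DATA/CONT3/E909A2CB24B15CBFE0437628890A510D/DATAFILE/soets_411_820575221'
[oracle@destprim1 transport]$ impdp \"sys/welcome1@pdb11203serv as sysdba\" parfile=xtts.par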
- Maintain the DESTINATION STANDBY by renaming the datafiles as they are encountered by media recovery.
As discussed earlier, the file names and paths on the DESTINATION STANDBY will not match those of the DESTINATION PRIMARY. As the redo for the files being added is applied by media recovery on the DESTINATION STANDBY, the media recovery process (MRP) will terminate with a file-not-found condition.
As each tablespace is encountered, you must manually rename the files in the DESTINATION STANDBY controlfile and restart the MRP process. You must continue this rename/restart-MRP loop until all tablespaces have been processed. The file names that media recovery attempted to assign can be found by querying v$datafile. If multiple files are being added for a single tablespace, ensure you rename them correctly.
SQL-DESTSTBY> select name from v$datafile;
NAME
--------------------------------------------------------------------------------
+DATAD1/CONT3STBY/DATAFILE/system.338.819017389
+DATAD1/CONT3STBY/DFEF7419D8C7245AE0437628890A96C3/DATAFILE/system.335.819017397
+DATAD1/CONT3STBY/DATAFILE/sysaux.379.819017389
+DATAD1/CONT3STBY/DFEF7419D8C7245AE0437628890A96C3/DATAFILE/sysaux.395.819017397
+DATAD1/CONT3STBY/DATAFILE/undotbs1.337.819017389
+DATAD1/CONT3STBY/DATAFILE/undotbs2.389.819017389
+DATAD1/CONT3STBY/DATAFILE/users.392.819017389
+DATAD1/CONT3STBY/DFFBB8557B625421E0437628890A461D/DATAFILE/system.427.819029243
+DATAD1/CONT3STBY/DFFBB8557B625421E0437628890A461D/DATAFILE/sysaux.426.819029179
+DATAD1/CONT3STBY/DFFBB8557B625421E0437628890A461D/DATAFILE/soets1.425.819028813
+DATAD1/CONT3STBY/E7C9ABBF31FC0FF3E0437628890A5753/DATAFILE/system.511.827767375
+DATAD1/CONT3STBY/E7C9ABBF31FC0FF3E0437628890A5753/DATAFILE/sysaux.528.827767377
+DATAD1/cont3/E7C9ABBF31FC0FF3E0437628890A5753/DATAFILE/soets_411_820575221
/u01/app/oracle/product/12.1.0/fkhome_1/dbs/UNNAMED00059
To be able to rename the datafiles, you must set standby_file_management to manual.
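Files created as UNNAMED placeholders can be listed at any point during this loop (a sketch):
SQL-DESTSTBY> select file#, name from v$datafile where name like '%UNNAMED%';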
SQL-DESTSTBY> alter system set standby_file_management=manual;
System altered.
SQL-DESTSTBY> alter database rename file '+DATAD1/cont3/e909a2cb24b15cbfe0437628890a510d/datafile/soets_411_820575221' to '+DATAD1/CONT3STBY/E909A2CB24B15CBFE0437628890A510D/DATAFILE/soets_411_820575221';
Database altered.
SQL-DESTSTBY> recover managed standby database using current logfile disconnect;
Media recovery complete.
SQL-DESTSTBY> alter database rename file '/u01/app/oracle/product/12.1.0/fkhome_1/dbs/UNNAMED00067' to '+DATAD1/CONT3STBY/e909a2cb24b15cbfe0437628890a510d/DATAFILE/soets2_539_825789091';
Database altered.
SQL-DESTSTBY> recover managed standby database using current logfile disconnect;
Media recovery complete.
SQL-DESTSTBY> alter database rename file '/u01/app/oracle/product/12.1.0/fkhome_1/dbs/UNNAMED00068' to '+DATAD1/CONT3STBY/e909a2cb24b15cbfe0437628890a510d/DATAFILE/soets3_538_825789093';
Database altered.
SQL-DESTSTBY> recover managed standby database using current logfile disconnect;
Media recovery complete.
SQL-DESTSTBY> alter database rename file '/u01/app/oracle/product/12.1.0/fkhome_1/dbs/UNNAMED00069' to '+DATAD1/CONT3STBY/e909a2cb24b15cbfe0437628890a510d/DATAFILE/soets4_537_825789097';
Database altered.
SQL-DESTSTBY> recover managed standby database using current logfile disconnect;
Media recovery complete.
SQL-DESTSTBY> alter database rename file '/u01/app/oracle/product/12.1.0/fkhome_1/dbs/UNNAMED00070' to '+DATAD1/CONT3STBY/e909a2cb24b15cbfe0437628890a510d/DATAFILE/soets5_536_825789101';
Database altered.
SQL-DESTSTBY> recover managed standby database using current logfile disconnect;
Media recovery complete.
SQL-DESTSTBY> alter database rename file '/u01/app/oracle/product/12.1.0/fkhome_1/dbs/UNNAMED00071' to '+DATAD1/CONT3STBY/e909a2cb24b15cbfe0437628890a510d/DATAFILE/users_409_820573579';
Database altered.
SQL-DESTSTBY> alter system set standby_file_management=auto;
System altered.
SQL-DESTSTBY> recover managed standby database using current logfile disconnect;
Media recovery complete.
- Perform a logical check of the application.
- Back up the PDB.
[oracle@destprim1 transport]$ rman target /
Recovery Manager: Release 12.1.0.1.0 - Production on Mon Oct 21 13:09:37 2013
Copyright (c) 1982, 2013, Oracle and/or its affiliates. All rights reserved.
connected to target database: CONT3 (DBID=1783201232)
RMAN> backup as compressed backupset pluggable database pdb11203;
Starting backup at 21-OCT-13
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=211 instance=cont31 device type=DISK
channel ORA_DISK_1: starting compressed full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00066 name=+DATA/CONT3/E909A2CB24B15CBFE0437628890A510D/DATAFILE/soets_411_820575221
input datafile file number=00067 name=+DATA/CONT3/E909A2CB24B15CBFE0437628890A510D/DATAFILE/soets2_539_825789091
input datafile file number=00068 name=+DATA/CONT3/E909A2CB24B15CBFE0437628890A510D/DATAFILE/soets3_538_825789093
input datafile file number=00069 name=+DATA/CONT3/E909A2CB24B15CBFE0437628890A510D/DATAFILE/soets4_537_825789097
input datafile file number=00070 name=+DATA/CONT3/E909A2CB24B15CBFE0437628890A510D/DATAFILE/soets5_536_825789101
input datafile file number=00065 name=+DATA/CONT3/E909A2CB24B15CBFE0437628890A510D/DATAFILE/sysaux.354.829148807
input datafile file number=00064 name=+DATA/CONT3/E909A2CB24B15CBFE0437628890A510D/DATAFILE/system.391.829148807
input datafile file number=00071 name=+DATA/CONT3/E909A2CB24B15CBFE0437628890A510D/DATAFILE/users_409_820573579
channel ORA_DISK_1: starting piece 1 at 21-OCT-13
1021T131147 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:08:45
Finished backup at 21-OCT-13
Starting Control File and SPFILE Autobackup at 21-OCT-13
piece handle=+RECO/CONT3/AUTOBACKUP/2013_10_21/s_829401633.479.829401633 comment=NONE
Finished Control File and SPFILE Autobackup at 21-OCT-13