OS:

Oracle Linux Server release 5.7

DB:

Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production

1. Log in as the oracle user and launch dbca

[root@rac ~]# su - oracle

[oracle@rac ~]$ dbca


dbca deletes the database from every node in the cluster, one node at a time.
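If you prefer a non-interactive run, dbca also has a silent mode for deleting a database. A minimal sketch, assuming a database named racdb and a SYS password of oracle (both are placeholders, not values from this environment; check dbca -help on your 11.2 install for the exact options):

[oracle@rac ~]$ dbca -silent -deleteDatabase -sourceDB racdb -sysDBAUserName sys -sysDBAPassword oracle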

2. As the oracle user, change to the $ORACLE_HOME/deinstall directory and run the deinstall script

[root@rac ~]# su - oracle

[oracle@rac ~]$ cd $ORACLE_HOME/deinstall

[oracle@rac deinstall]$ ./deinstall

Checking for required files and bootstrapping ...
Please wait ...
Location of logs /u01/app/oraInventory/logs/

############ ORACLE DEINSTALL & DECONFIG TOOL START ############

######################### CHECK OPERATION START #########################
## [START] Install check configuration ##

Checking for existence of the Oracle home location /u01/app/oracle/11.2.0/db_1
Oracle Home type selected for deinstall is: Oracle Real Application Cluster Database
Oracle Base selected for deinstall is: /u01/app/oracle
Checking for existence of central inventory location /u01/app/oraInventory
Checking for existence of the Oracle Grid Infrastructure home /u01/app/grid/11.2.0
The following nodes are part of this cluster: rac,rac1,rac2
Checking for sufficient temp space availability on node(s) : 'rac,rac1,rac2'

## [END] Install check configuration ##

Network Configuration check config START

Network de-configuration trace file location: /u01/app/oraInventory/logs/netdc_check2013-08-14_02-22-03-AM.log

Network Configuration check config END

Database Check Configuration START

Database de-configuration trace file location: /u01/app/oraInventory/logs/databasedc_check2013-08-14_02-22-09-AM.log

Use comma as separator when specifying list of values as input

Specify the list of database names that are configured in this Oracle home []:
Database Check Configuration END

Enterprise Manager Configuration Assistant START

EMCA de-configuration trace file location: /u01/app/oraInventory/logs/emcadc_check2013-08-14_02-31-07-AM.log

Enterprise Manager Configuration Assistant END
Oracle Configuration Manager check START
OCM check log file location : /u01/app/oraInventory/logs//ocm_check1556.log
Oracle Configuration Manager check END

######################### CHECK OPERATION END #########################

####################### CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is: /u01/app/grid/11.2.0
The cluster node(s) on which the Oracle home deinstallation will be performed are:rac,rac1,rac2
Oracle Home selected for deinstall is: /u01/app/oracle/11.2.0/db_1
Inventory Location where the Oracle home registered is: /u01/app/oraInventory
No Enterprise Manager configuration to be updated for any database(s)
No Enterprise Manager ASM targets to update
No Enterprise Manager listener targets to migrate
Checking the config status for CCR
rac : Oracle Home exists with CCR directory, but CCR is not configured
rac1 : Oracle Home exists with CCR directory, but CCR is not configured
rac2 : Oracle Home exists with CCR directory, but CCR is not configured
CCR check is finished
Do you want to continue (y - yes, n - no)? [n]: y
A log of this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2013-08-14_02-21-45-AM.out'
Any error messages from this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2013-08-14_02-21-45-AM.err'

######################## CLEAN OPERATION START ########################

Enterprise Manager Configuration Assistant START

EMCA de-configuration trace file location: /u01/app/oraInventory/logs/emcadc_clean2013-08-14_02-31-07-AM.log

Updating Enterprise Manager ASM targets (if any)
Updating Enterprise Manager listener targets (if any)
Enterprise Manager Configuration Assistant END
Database de-configuration trace file location: /u01/app/oraInventory/logs/databasedc_clean2013-08-14_02-31-42-AM.log

Network Configuration clean config START

Network de-configuration trace file location: /u01/app/oraInventory/logs/netdc_clean2013-08-14_02-31-42-AM.log

De-configuring Listener configuration file on all nodes...
Listener configuration file de-configured successfully.

De-configuring Naming Methods configuration file on all nodes...
Naming Methods configuration file de-configured successfully.

De-configuring Local Net Service Names configuration file on all nodes...
Local Net Service Names configuration file de-configured successfully.

De-configuring Directory Usage configuration file on all nodes...
Directory Usage configuration file de-configured successfully.

De-configuring backup files on all nodes...
Backup files de-configured successfully.

The network configuration has been cleaned up successfully.

Network Configuration clean config END

Oracle Configuration Manager clean START
OCM clean log file location : /u01/app/oraInventory/logs//ocm_clean1556.log
Oracle Configuration Manager clean END
Setting the force flag to false
Setting the force flag to cleanup the Oracle Base
Oracle Universal Installer clean START

Detach Oracle home '/u01/app/oracle/11.2.0/db_1' from the central inventory on the local node : Done

Delete directory '/u01/app/oracle/11.2.0/db_1' on the local node : Done

The Oracle Base directory '/u01/app/oracle' will not be removed on local node. The directory is in use by Oracle Home '/u01/app/grid/11.2.0'.

Detach Oracle home '/u01/app/oracle/11.2.0/db_1' from the central inventory on the remote nodes 'rac2,rac1' : Done

Delete directory '/u01/app/oracle/11.2.0/db_1' on the remote nodes 'rac1,rac2' : Done

The Oracle Base directory '/u01/app/oracle' will not be removed on node 'rac2'. The directory is in use by Oracle Home '/u01/app/grid/11.2.0'.

The Oracle Base directory '/u01/app/oracle' will not be removed on node 'rac1'. The directory is in use by Oracle Home '/u01/app/grid/11.2.0'.

Oracle Universal Installer cleanup was successful.

Oracle Universal Installer clean END

## [START] Oracle install clean ##

Clean install operation removing temporary directory '/tmp/deinstall2013-08-14_02-15-33AM' on node 'rac'
Clean install operation removing temporary directory '/tmp/deinstall2013-08-14_02-15-33AM' on node 'rac1,rac2'

## [END] Oracle install clean ##

######################### CLEAN OPERATION END #########################

####################### CLEAN OPERATION SUMMARY #######################
Cleaning the config for CCR
As CCR is not configured, so skipping the cleaning of CCR configuration
CCR clean is finished
Successfully detached Oracle home '/u01/app/oracle/11.2.0/db_1' from the central inventory on the local node.
Successfully deleted directory '/u01/app/oracle/11.2.0/db_1' on the local node.
Successfully detached Oracle home '/u01/app/oracle/11.2.0/db_1' from the central inventory on the remote nodes 'rac2,rac1'.
Successfully deleted directory '/u01/app/oracle/11.2.0/db_1' on the remote nodes 'rac1,rac2'.
Oracle Universal Installer cleanup was successful.

Run 'rm -rf /opt/ORCLfmap' as root on node(s) 'rac2' at the end of the session.
Oracle deinstall tool successfully cleaned up temporary directories.
#######################################################################

############# ORACLE DEINSTALL & DECONFIG TOOL END #############

3. As root, log in to each node and run /u01/app/grid/11.2.0/crs/install/rootcrs.pl -verbose -deconfig

Note: with a 3-node cluster, run this on the first two nodes only and skip the last node; it is handled in step 4. To confirm the node list first, see the check below.
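To confirm which nodes the cluster contains before deciding where to run the deconfig, olsnodes can list them; a quick check as the grid user, using the same GI home path as the rest of this walkthrough:

[grid@rac ~]$ /u01/app/grid/11.2.0/bin/olsnodes -n

It prints each node name with its node number; here it lists rac, rac1 and rac2.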

[root@rac ~]# cd /u01/app/grid/11.2.0/crs/install

[root@rac install]#  ./rootcrs.pl -verbose -deconfig

Using configuration parameter file: ./crsconfig_params
Network exists: 1/192.168.12.0/255.255.255.0/eth0, type static
VIP exists: /rac-vip/192.168.12.4/192.168.12.0/255.255.255.0/eth0, hosting node rac
VIP exists: /rac1-vip/192.168.12.5/192.168.12.0/255.255.255.0/eth0, hosting node rac1
VIP exists: /rac2-vip/192.168.12.9/192.168.12.0/255.255.255.0/eth0, hosting node rac2
GSD exists
ONS exists: Local port 6100, remote port 6200, EM port 2016
PRCR-1065 : Failed to stop resource ora.rac.vip
CRS-2529: Unable to act on 'ora.rac.vip' because that would require stopping or relocating 'ora.LISTENER.lsnr', but the force option was not specified
PRCR-1014 : Failed to stop resource ora.net1.network
PRCR-1065 : Failed to stop resource ora.net1.network
CRS-2529: Unable to act on 'ora.net1.network' because that would require stopping or relocating 'ora.scan1.vip', but the force option was not specified

PRKO-2380 : VIP rac is still running on node: rac
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac'
CRS-2673: Attempting to stop 'ora.crsd' on 'rac'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'rac'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'rac'
CRS-2673: Attempting to stop 'ora.CRM.dg' on 'rac'
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'rac'
CRS-2673: Attempting to stop 'ora.FLUSH.dg' on 'rac'
CRS-2673: Attempting to stop 'ora.oc4j' on 'rac'
CRS-2673: Attempting to stop 'ora.cvu' on 'rac'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on 'rac'
CRS-2677: Stop of 'ora.cvu' on 'rac' succeeded
CRS-2672: Attempting to start 'ora.cvu' on 'rac1'
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'rac' succeeded
CRS-2673: Attempting to stop 'ora.rac.vip' on 'rac'
CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on 'rac' succeeded
CRS-2673: Attempting to stop 'ora.scan1.vip' on 'rac'
CRS-2676: Start of 'ora.cvu' on 'rac1' succeeded
CRS-2677: Stop of 'ora.rac.vip' on 'rac' succeeded
CRS-2672: Attempting to start 'ora.rac.vip' on 'rac1'
CRS-2677: Stop of 'ora.scan1.vip' on 'rac' succeeded
CRS-2672: Attempting to start 'ora.scan1.vip' on 'rac2'
CRS-2676: Start of 'ora.rac.vip' on 'rac1' succeeded
CRS-2676: Start of 'ora.scan1.vip' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN1.lsnr' on 'rac2'
CRS-2677: Stop of 'ora.FLUSH.dg' on 'rac' succeeded
CRS-2677: Stop of 'ora.DATA.dg' on 'rac' succeeded
CRS-2676: Start of 'ora.LISTENER_SCAN1.lsnr' on 'rac2' succeeded
CRS-2677: Stop of 'ora.oc4j' on 'rac' succeeded
CRS-2672: Attempting to start 'ora.oc4j' on 'rac1'
CRS-2677: Stop of 'ora.CRM.dg' on 'rac' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'rac'
CRS-2677: Stop of 'ora.asm' on 'rac' succeeded
CRS-2676: Start of 'ora.oc4j' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'rac'
CRS-2677: Stop of 'ora.net1.network' on 'rac' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'rac' has completed
CRS-2677: Stop of 'ora.crsd' on 'rac' succeeded
CRS-2673: Attempting to stop 'ora.crf' on 'rac'
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac'
CRS-2673: Attempting to stop 'ora.evmd' on 'rac'
CRS-2673: Attempting to stop 'ora.asm' on 'rac'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac'
CRS-2677: Stop of 'ora.mdnsd' on 'rac' succeeded
CRS-2677: Stop of 'ora.crf' on 'rac' succeeded
CRS-2677: Stop of 'ora.evmd' on 'rac' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'rac' succeeded
CRS-2677: Stop of 'ora.asm' on 'rac' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac'
CRS-2677: Stop of 'ora.cssd' on 'rac' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac'
CRS-2677: Stop of 'ora.gipcd' on 'rac' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac'
CRS-2677: Stop of 'ora.gpnpd' on 'rac' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac' has completed
CRS-4133: Oracle High Availability Services has been stopped.
Successfully deconfigured Oracle clusterware stack on this node

4. On the last node, run "/u01/app/grid/11.2.0/crs/install/rootcrs.pl -verbose -deconfig -force -lastnode" as root. This command wipes the OCR and the voting disks.
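Because -lastnode is destructive, it is worth recording where the OCR and voting disks currently live before running it; a quick check as root, again assuming the GI home used throughout this article:

[root@rac2 ~]# /u01/app/grid/11.2.0/bin/ocrcheck
[root@rac2 ~]# /u01/app/grid/11.2.0/bin/crsctl query css votedisk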

[root@rac2 install]# ./rootcrs.pl -verbose -deconfig -force -lastnode

Using configuration parameter file: ./crsconfig_params
CRS resources for listeners are still configured
Network exists: 1/192.168.12.0/255.255.255.0/eth0, type static
VIP exists: /rac-vip/192.168.12.4/192.168.12.0/255.255.255.0/eth0, hosting node rac
VIP exists: /rac1-vip/192.168.12.5/192.168.12.0/255.255.255.0/eth0, hosting node rac1
VIP exists: /rac2-vip/192.168.12.9/192.168.12.0/255.255.255.0/eth0, hosting node rac2
GSD exists
ONS exists: Local port 6100, remote port 6200, EM port 2016
CRS-2673: Attempting to stop 'ora.crsd' on 'rac2'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'rac2'
CRS-2673: Attempting to stop 'ora.CRM.dg' on 'rac2'
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'rac2'
CRS-2673: Attempting to stop 'ora.FLUSH.dg' on 'rac2'
CRS-2677: Stop of 'ora.DATA.dg' on 'rac2' succeeded
CRS-2677: Stop of 'ora.FLUSH.dg' on 'rac2' succeeded
CRS-2677: Stop of 'ora.CRM.dg' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'rac2'
CRS-2677: Stop of 'ora.asm' on 'rac2' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'rac2' has completed
CRS-2677: Stop of 'ora.crsd' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac2'
CRS-2673: Attempting to stop 'ora.evmd' on 'rac2'
CRS-2673: Attempting to stop 'ora.asm' on 'rac2'
CRS-2677: Stop of 'ora.evmd' on 'rac2' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'rac2' succeeded
CRS-2677: Stop of 'ora.asm' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac2'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac2'
CRS-2677: Stop of 'ora.cssd' on 'rac2' succeeded
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node rac1, number 2, and is terminating
Unable to communicate with the Cluster Synchronization Services daemon.
CRS-4000: Command Delete failed, or completed with errors.
crsctl delete for vds in CRM ... failed
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac2'
CRS-2673: Attempting to stop 'ora.crf' on 'rac2'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac2'
CRS-2677: Stop of 'ora.crf' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac2'
CRS-2677: Stop of 'ora.mdnsd' on 'rac2' succeeded
CRS-2677: Stop of 'ora.gipcd' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac2'
CRS-2677: Stop of 'ora.gpnpd' on 'rac2' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac2' has completed
CRS-4133: Oracle High Availability Services has been stopped.
Successfully deconfigured Oracle clusterware stack on this node

5. On any node, run the deinstall script as the Grid Infrastructure owner (the grid user).
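Before launching the deinstall, you can verify that the clusterware stack from steps 3 and 4 is really down on each node; crsctl check crs reports the state of the local stack (run it as root on every node):

[root@rac ~]# /u01/app/grid/11.2.0/bin/crsctl check crs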

[root@rac ~]# su - grid

[grid@rac ~]$ cd /u01/app/grid/11.2.0/deinstall

[grid@rac deinstall]$ ./deinstall

Checking for required files and bootstrapping ...
Please wait ...
Location of logs /tmp/deinstall2013-08-14_03-42-20AM/logs/

############ ORACLE DEINSTALL & DECONFIG TOOL START ############

######################### CHECK OPERATION START #########################
## [START] Install check configuration ##

Checking for existence of the Oracle home location /u01/app/grid/11.2.0
Oracle Home type selected for deinstall is: Oracle Grid Infrastructure for a Cluster
Oracle Base selected for deinstall is: /u01/app/oracle
Checking for existence of central inventory location /u01/app/oraInventory
Checking for existence of the Oracle Grid Infrastructure home
The following nodes are part of this cluster: rac,rac1,rac2
Checking for sufficient temp space availability on node(s) : 'rac,rac1,rac2'

## [END] Install check configuration ##

Traces log file: /tmp/deinstall2013-08-14_03-42-20AM/logs//crsdc.log
Enter an address or the name of the virtual IP used on node "rac"[rac-vip]
 >

The following information can be collected by running "/sbin/ifconfig -a" on node "rac"
Enter the IP netmask of Virtual IP "192.168.12.4" on node "rac"[255.255.255.0]
 >

Enter the network interface name on which the virtual IP address "192.168.12.4" is active
 >

Enter an address or the name of the virtual IP used on node "rac1"[rac1-vip]
 >
The following information can be collected by running "/sbin/ifconfig -a" on node "rac1"
Enter the IP netmask of Virtual IP "192.168.12.5" on node "rac1"[255.255.255.0]
 >

Enter the IP netmask of Virtual IP "192.168.12.5" on node "rac1"[255.255.255.0]
 >

Enter the network interface name on which the virtual IP address "192.168.12.5" is active[rac-vip]
 >

Enter an address or the name of the virtual IP used on node "rac2"[rac2-vip]
 >

The following information can be collected by running "/sbin/ifconfig -a" on node "rac2"
Enter the IP netmask of Virtual IP "192.168.12.9" on node "rac2"[255.255.255.0]
 >

Enter the network interface name on which the virtual IP address "192.168.12.9" is active[rac-vip]
 >

Enter an address or the name of the virtual IP[]
 >

Network Configuration check config START

Network de-configuration trace file location: /tmp/deinstall2013-08-14_03-42-20AM/logs/netdc_check2013-08-14_04-31-25-AM.log

Specify all RAC listeners (do not include SCAN listener) that are to be de-configured [LISTENER,LISTENER_SCAN1]:

Network Configuration check config END

Asm Check Configuration START

ASM de-configuration trace file location: /tmp/deinstall2013-08-14_03-42-20AM/logs/asmcadc_check2013-08-14_04-31-31-AM.log

ASM configuration was not detected in this Oracle home. Was ASM configured in this Oracle home (y|n) [n]: y
Is OCR/Voting Disk placed in ASM y|n [n]: y

Enter the OCR/Voting Disk diskgroup name []:
Specify the ASM Diagnostic Destination [ ]:
Specify the diskstring []:
Specify the diskgroups that are managed by this ASM instance []:

######################### CHECK OPERATION END #########################

####################### CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is:
The cluster node(s) on which the Oracle home deinstallation will be performed are:rac,rac1,rac2
Oracle Home selected for deinstall is: /u01/app/grid/11.2.0
Inventory Location where the Oracle home registered is: /u01/app/oraInventory
Following RAC listener(s) will be de-configured: LISTENER,LISTENER_SCAN1
ASM instance will be de-configured from this Oracle home
Do you want to continue (y - yes, n - no)? [n]: y
A log of this session will be written to: '/tmp/deinstall2013-08-14_03-42-20AM/logs/deinstall_deconfig2013-08-14_03-43-56-AM.out'
Any error messages from this session will be written to: '/tmp/deinstall2013-08-14_03-42-20AM/logs/deinstall_deconfig2013-08-14_03-43-56-AM.err'

######################## CLEAN OPERATION START ########################
ASM de-configuration trace file location: /tmp/deinstall2013-08-14_03-42-20AM/logs/asmcadc_clean2013-08-14_04-32-20-AM.log
ASM Clean Configuration START
ASM Clean Configuration END

Network Configuration clean config START

Network de-configuration trace file location: /tmp/deinstall2013-08-14_03-42-20AM/logs/netdc_clean2013-08-14_04-32-26-AM.log

De-configuring RAC listener(s): LISTENER,LISTENER_SCAN1

De-configuring listener: LISTENER
    Stopping listener: LISTENER
    Listener stopped successfully.
Listener de-configured successfully.

De-configuring listener: LISTENER_SCAN1
    Stopping listener: LISTENER_SCAN1
    Warning: Failed to stop listener. Listener may not be running.
Listener de-configured successfully.

De-configuring Naming Methods configuration file on all nodes...
Naming Methods configuration file de-configured successfully.

De-configuring Local Net Service Names configuration file on all nodes...
Local Net Service Names configuration file de-configured successfully.

De-configuring Directory Usage configuration file on all nodes...
Directory Usage configuration file de-configured successfully.

De-configuring backup files on all nodes...
Backup files de-configured successfully.

The network configuration has been cleaned up successfully.

Network Configuration clean config END

---------------------------------------->

The deconfig command below can be executed in parallel on all the remote nodes. Execute the command on  the local node after the execution completes on all the remote nodes.

Run the following command as the root user or the administrator on node "rac1".

/tmp/deinstall2013-08-14_03-42-20AM/perl/bin/perl -I/tmp/deinstall2013-08-14_03-42-20AM/perl/lib -I/tmp/deinstall2013-08-14_03-42-20AM/crs/install /tmp/deinstall2013-08-14_03-42-20AM/crs/install/rootcrs.pl -force  -deconfig -paramfile "/tmp/deinstall2013-08-14_03-42-20AM/response/deinstall_Ora11g_gridinfrahome1.rsp"

Run the following command as the root user or the administrator on node "rac2".

/tmp/deinstall2013-08-14_03-42-20AM/perl/bin/perl -I/tmp/deinstall2013-08-14_03-42-20AM/perl/lib -I/tmp/deinstall2013-08-14_03-42-20AM/crs/install /tmp/deinstall2013-08-14_03-42-20AM/crs/install/rootcrs.pl -force  -deconfig -paramfile "/tmp/deinstall2013-08-14_03-42-20AM/response/deinstall_Ora11g_gridinfrahome1.rsp"

Run the following command as the root user or the administrator on node "rac".

/tmp/deinstall2013-08-14_03-42-20AM/perl/bin/perl -I/tmp/deinstall2013-08-14_03-42-20AM/perl/lib -I/tmp/deinstall2013-08-14_03-42-20AM/crs/install /tmp/deinstall2013-08-14_03-42-20AM/crs/install/rootcrs.pl -force  -deconfig -paramfile "/tmp/deinstall2013-08-14_03-42-20AM/response/deinstall_Ora11g_gridinfrahome1.rsp" -lastnode

Press Enter after you finish running the above commands

<----------------------------------------

Run the commands shown above as root: the two remote-node commands can run in parallel, and the command on the local node (with -lastnode) runs last. Then press Enter to let the tool continue.

## [START] Oracle install clean ##

Clean install operation removing temporary directory '/tmp/deinstall2013-08-14_03-42-20AM' on node 'rac'
Clean install operation removing temporary directory '/tmp/deinstall2013-08-14_03-42-20AM' on node 'rac1,rac2'

## [END] Oracle install clean ##

######################### CLEAN OPERATION END #########################

####################### CLEAN OPERATION SUMMARY #######################
ASM instance was de-configured successfully from the Oracle home
Following RAC listener(s) were de-configured successfully: LISTENER,LISTENER_SCAN1
Oracle Clusterware is stopped and successfully de-configured on node "rac1"
Oracle Clusterware is stopped and successfully de-configured on node "rac"
Oracle Clusterware is stopped and successfully de-configured on node "rac2"
Oracle Clusterware is stopped and de-configured successfully.
Successfully detached Oracle home '/u01/app/grid/11.2.0' from the central inventory on the local node.
Successfully deleted directory '/u01/app/grid/11.2.0' on the local node.
Successfully deleted directory '/u01/app/oraInventory' on the local node.
Successfully detached Oracle home '/u01/app/grid/11.2.0' from the central inventory on the remote nodes 'rac2,rac1'.
Successfully deleted directory '/u01/app/grid/11.2.0' on the remote nodes 'rac1,rac2'.
Successfully deleted directory '/u01/app/oraInventory' on the remote nodes 'rac2'.
Successfully deleted directory '/u01/app/oraInventory' on the remote nodes 'rac1'.
Failed to delete directory '/u01/app/oracle' on the remote nodes 'rac1'.
Failed to delete directory '/u01/app/oracle' on the remote nodes 'rac2'.
Oracle Universal Installer cleanup completed with errors.

Run 'rm -rf /etc/oraInst.loc' as root on node(s) 'rac,rac2,rac1' at the end of the session.

Run 'rm -rf /opt/ORCLfmap' as root on node(s) 'rac,rac1,rac2' at the end of the session.
Oracle deinstall tool successfully cleaned up temporary directories.
#######################################################################

############# ORACLE DEINSTALL & DECONFIG TOOL END #############
Once deinstall finishes, it reminds you to run 'rm -rf /etc/oraInst.loc' and 'rm -rf /opt/ORCLfmap' as root on the nodes it lists.
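Besides those two commands, the clean summary above shows that '/u01/app/oracle' could not be removed on rac1 and rac2, so finish with one last pass as root on each node where directories remain; a sketch using only the paths reported by the tool (make sure nothing else still uses them before deleting):

[root@rac1 ~]# rm -rf /etc/oraInst.loc
[root@rac1 ~]# rm -rf /opt/ORCLfmap
[root@rac1 ~]# rm -rf /u01/app/oracle

Repeat on the other nodes named in the summary; on rac only the first two removals apply.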
