Steps to Uninstall CRS from Oracle 11g RAC
Starting with Oracle 11g, Oracle ships deinstall scripts for both the Grid Infrastructure and the database software. They clean up fairly thoroughly, so there is no need to remove CRS by hand.
########## Before uninstalling RAC, delete the database with DBCA first, then perform the steps below ##########
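For reference, the database can also be dropped non-interactively with DBCA in silent mode. This is only a sketch: the database name and SYS credentials below are placeholders, not values from this environment. Run it as the database software owner on one node:

dbca -silent -deleteDatabase -sourceDB racdb -sysDBAUserName sys -sysDBAPassword your_sys_password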
1. As root, go into the Grid user's ORACLE_HOME (run this on every node except the last one; for a two-node RAC, run it only on the first node).
Note (if the deinstall tool is not already on the server, it can be downloaded from Oracle's website):
1. The 11gR2 media set consists of 7 download packages, and deinstall is in the 7th one; for 11.2.0.2, for example, that package is p10098816_112020_AIX64-5L_7of7.zip.
2. Unzip it to a local directory and run deinstall, supplying the $ORACLE_HOME to be removed:
bash-3.00$ ./deinstall -home /oracle/grid/product/11.2.0/grid
Location of logs /ora11g/server_install/deinstall/./logs/
On the first node, run as root:
[root@testdb11a ~]# cd /u01/app/11.2.0/grid/crs/install/    ---- the Grid user's ORACLE_HOME/crs/install directory
[root@testdb11a install]# ./rootcrs.pl -verbose -deconfig -force
Using configuration parameter file: ./crsconfig_params
Network exists: /192.168.1.0/255.255.255.0/eth0, type static
VIP exists: /testdb11a-vip/192.168.1.22/192.168.1.0/255.255.255.0/eth0, hosting node testdb11a
GSD exists
ONS exists: Local port , remote port , EM port
CRS-: Attempting to stop 'ora.registry.acfs' on 'testdb11a'
CRS-: Stop of 'ora.registry.acfs' on 'testdb11a' succeeded
CRS-: Starting shutdown of Oracle High Availability Services-managed resources on 'testdb11a'
CRS-: Attempting to stop 'ora.crsd' on 'testdb11a'
CRS-: Starting shutdown of Cluster Ready Services-managed resources on 'testdb11a'
CRS-: Attempting to stop 'ora.oc4j' on 'testdb11a'
CRS-: Attempting to stop 'ora.DATA.dg' on 'testdb11a'
CRS-: Stop of 'ora.oc4j' on 'testdb11a' succeeded
CRS-: Stop of 'ora.DATA.dg' on 'testdb11a' succeeded
CRS-: Attempting to stop 'ora.asm' on 'testdb11a'
CRS-: Stop of 'ora.asm' on 'testdb11a' succeeded
CRS-: Shutdown of Cluster Ready Services-managed resources on 'testdb11a' has completed
CRS-: Stop of 'ora.crsd' on 'testdb11a' succeeded
CRS-: Attempting to stop 'ora.mdnsd' on 'testdb11a'
CRS-: Attempting to stop 'ora.drivers.acfs' on 'testdb11a'
CRS-: Attempting to stop 'ora.crf' on 'testdb11a'
CRS-: Attempting to stop 'ora.ctssd' on 'testdb11a'
CRS-: Attempting to stop 'ora.evmd' on 'testdb11a'
CRS-: Attempting to stop 'ora.asm' on 'testdb11a'
CRS-: Stop of 'ora.crf' on 'testdb11a' succeeded
CRS-: Stop of 'ora.evmd' on 'testdb11a' succeeded
CRS-: Stop of 'ora.mdnsd' on 'testdb11a' succeeded
CRS-: Stop of 'ora.ctssd' on 'testdb11a' succeeded
CRS-: Stop of 'ora.drivers.acfs' on 'testdb11a' succeeded
CRS-: Stop of 'ora.asm' on 'testdb11a' succeeded
CRS-: Attempting to stop 'ora.cluster_interconnect.haip' on 'testdb11a'
CRS-: Stop of 'ora.cluster_interconnect.haip' on 'testdb11a' succeeded
CRS-: Attempting to stop 'ora.cssd' on 'testdb11a'
CRS-: Stop of 'ora.cssd' on 'testdb11a' succeeded
CRS-: Attempting to stop 'ora.gipcd' on 'testdb11a'
CRS-: Stop of 'ora.gipcd' on 'testdb11a' succeeded
CRS-: Attempting to stop 'ora.gpnpd' on 'testdb11a'
CRS-: Stop of 'ora.gpnpd' on 'testdb11a' succeeded
CRS-: Shutdown of Oracle High Availability Services-managed resources on 'testdb11a' has completed
CRS-: Oracle High Availability Services has been stopped.
Successfully deconfigured Oracle clusterware stack on this node
On the last node, run the following (this command wipes the OCR configuration and the voting disks):
[root@testdb11b install]# ./rootcrs.pl -verbose -deconfig -force -lastnode
Using configuration parameter file: ./crsconfig_params
Network exists: /192.168.1.0/255.255.255.0/eth0, type static
VIP exists: /testdb11b-vip/192.168.1.23/192.168.1.0/255.255.255.0/eth0, hosting node testdb11b
GSD exists
ONS exists: Local port , remote port , EM port
CRS-: Attempting to stop 'ora.registry.acfs' on 'testdb11b'
CRS-: Stop of 'ora.registry.acfs' on 'testdb11b' succeeded
CRS-: Attempting to stop 'ora.crsd' on 'testdb11b'
CRS-: Starting shutdown of Cluster Ready Services-managed resources on 'testdb11b'
CRS-: Attempting to stop 'ora.DATA.dg' on 'testdb11b'
CRS-: Attempting to stop 'ora.oc4j' on 'testdb11b'
CRS-: Stop of 'ora.oc4j' on 'testdb11b' succeeded
CRS-: Stop of 'ora.DATA.dg' on 'testdb11b' succeeded
CRS-: Attempting to stop 'ora.asm' on 'testdb11b'
CRS-: Stop of 'ora.asm' on 'testdb11b' succeeded
CRS-: Shutdown of Cluster Ready Services-managed resources on 'testdb11b' has completed
CRS-: Stop of 'ora.crsd' on 'testdb11b' succeeded
CRS-: Attempting to stop 'ora.ctssd' on 'testdb11b'
CRS-: Attempting to stop 'ora.evmd' on 'testdb11b'
CRS-: Attempting to stop 'ora.asm' on 'testdb11b'
CRS-: Stop of 'ora.evmd' on 'testdb11b' succeeded
CRS-: Stop of 'ora.ctssd' on 'testdb11b' succeeded
CRS-: Stop of 'ora.asm' on 'testdb11b' succeeded
CRS-: Attempting to stop 'ora.cluster_interconnect.haip' on 'testdb11b'
CRS-: Stop of 'ora.cluster_interconnect.haip' on 'testdb11b' succeeded
CRS-: Attempting to stop 'ora.cssd' on 'testdb11b'
CRS-: Stop of 'ora.cssd' on 'testdb11b' succeeded
CRS-: Attempting to start 'ora.cssdmonitor' on 'testdb11b'
CRS-: Start of 'ora.cssdmonitor' on 'testdb11b' succeeded
CRS-: Attempting to start 'ora.cssd' on 'testdb11b'
CRS-: Attempting to start 'ora.diskmon' on 'testdb11b'
CRS-: Start of 'ora.diskmon' on 'testdb11b' succeeded
CRS-: Start of 'ora.cssd' on 'testdb11b' succeeded
CRS-: Successful deletion of voting disk +DATA.
ASM de-configuration trace file location: /tmp/asmcadc_clean2014--17_10---AM.log
ASM Clean Configuration START
ASM Clean Configuration END
ASM with SID +ASM1 deleted successfully. Check /tmp/asmcadc_clean2014--17_10---AM.log for details.
CRS-: Starting shutdown of Oracle High Availability Services-managed resources on 'testdb11b'
CRS-: Attempting to stop 'ora.mdnsd' on 'testdb11b'
CRS-: Attempting to stop 'ora.ctssd' on 'testdb11b'
CRS-: Attempting to stop 'ora.cluster_interconnect.haip' on 'testdb11b'
CRS-: Stop of 'ora.mdnsd' on 'testdb11b' succeeded
CRS-: Stop of 'ora.cluster_interconnect.haip' on 'testdb11b' succeeded
CRS-: Stop of 'ora.ctssd' on 'testdb11b' succeeded
CRS-: Attempting to stop 'ora.cssd' on 'testdb11b'
CRS-: Stop of 'ora.cssd' on 'testdb11b' succeeded
CRS-: Attempting to stop 'ora.crf' on 'testdb11b'
CRS-: Stop of 'ora.crf' on 'testdb11b' succeeded
CRS-: Attempting to stop 'ora.gipcd' on 'testdb11b'
CRS-: Stop of 'ora.gipcd' on 'testdb11b' succeeded
CRS-: Attempting to stop 'ora.gpnpd' on 'testdb11b'
CRS-: Stop of 'ora.gpnpd' on 'testdb11b' succeeded
CRS-: Shutdown of Oracle High Availability Services-managed resources on 'testdb11b' has completed
CRS-: Oracle High Availability Services has been stopped.
Successfully deconfigured Oracle clusterware stack on this node
Once the two commands above have completed, the clusterware configuration has been deconfigured on all nodes, and the root.sh script can be executed again (for example, to retry a failed Grid installation).
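Before rerunning root.sh, it can be worth confirming that the stack is really down; a minimal check, assuming the Grid home used elsewhere in this post:

/u01/app/11.2.0/grid/bin/crsctl check crs
ps -ef | grep -E 'ohasd|crsd|ocssd' | grep -v grep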
If ASM disks were used, they must be cleaned first: after an installation attempt, the ASM disks are marked as used and no longer appear as candidate disks. To reuse them, perform the following steps.
a. Use dd to overwrite the header of each partition
dd if=/dev/zero of=/dev/sdb1 bs= count=
dd if=/dev/zero of=/dev/sdc1 bs= count=
dd if=/dev/zero of=/dev/sdd1 bs= count=
dd if=/dev/zero of=/dev/sde1 bs= count=
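The bs and count values were lost above. A commonly used combination (an assumption, not the author's original values) is to zero out roughly the first 10 MB of each partition, for example:

dd if=/dev/zero of=/dev/sdb1 bs=1M count=10

and likewise for /dev/sdc1, /dev/sdd1, and /dev/sde1.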
b. Delete and recreate the ASM disks
/etc/init.d/oracleasm deletedisk DISK1 /dev/sdb1
/etc/init.d/oracleasm createdisk DISK1 /dev/sdb1
/etc/init.d/oracleasm deletedisk DISK2 /dev/sdc1
/etc/init.d/oracleasm createdisk DISK2 /dev/sdc1
/etc/init.d/oracleasm deletedisk DISK3 /dev/sdd1
/etc/init.d/oracleasm createdisk DISK3 /dev/sdd1
/etc/init.d/oracleasm deletedisk DISK4 /dev/sde1
/etc/init.d/oracleasm createdisk DISK4 /dev/sde1
After running the commands above, the ASM disks can be used again.
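To verify that the recreated disks are visible on every node, the standard oracleasm checks can be used (a quick sanity check, not part of the original procedure), run as root:

/etc/init.d/oracleasm scandisks
/etc/init.d/oracleasm listdisks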
To completely remove the Grid Infrastructure software, run the following.
2. As the grid user, run deinstall to uninstall (it covers both nodes).
Output from the first node:
+ASM1@testdb11a /u01/app/11.2.0/grid/deinstall$ ./deinstall
Checking for required files and bootstrapping ...
Please wait ...
Location of logs /tmp/deinstall2014--16_01--52PM/logs/
############ ORACLE DEINSTALL & DECONFIG TOOL START ############
######################### CHECK OPERATION START #########################
## [START] Install check configuration ##
Checking for existence of the Oracle home location /u01/app/11.2.0/grid
Oracle Home type selected for deinstall is: Oracle Grid Infrastructure for a Cluster
Oracle Base selected for deinstall is: /u01/app/grid
Checking for existence of central inventory location /u01/app/oraInventory
Checking for existence of the Oracle Grid Infrastructure home
The following nodes are part of this cluster: testdb11a,testdb11b
Checking for sufficient temp space availability on node(s) : 'testdb11a,testdb11b'
## [END] Install check configuration ##
Traces log file: /tmp/deinstall2014--16_01--52PM/logs//crsdc.log
Enter an address or the name of the virtual IP used on node "testdb11a"[testdb11a-vip]
>
The following information can be collected by running "/sbin/ifconfig -a" on node "testdb11a"
Enter the IP netmask of Virtual IP "192.168.1.22" on node "testdb11a"[255.255.255.0]
>
Enter the network interface name on which the virtual IP address "192.168.1.22" is active
>
Enter an address or the name of the virtual IP used on node "testdb11b"[testdb11b-vip]
>
The following information can be collected by running "/sbin/ifconfig -a" on node "testdb11b"
Enter the IP netmask of Virtual IP "192.168.1.23" on node "testdb11b"[255.255.255.0]
>
Enter the network interface name on which the virtual IP address "192.168.1.23" is active
>
Enter an address or the name of the virtual IP[]
>
Network Configuration check config START
Network de-configuration trace file location: /tmp/deinstall2014--16_01--52PM/logs/netdc_check2014--16_01---PM.log
Specify all RAC listeners (do not include SCAN listener) that are to be de-configured [LISTENER_SCAN1]:
Network Configuration check config END
Asm Check Configuration START
ASM de-configuration trace file location: /tmp/deinstall2014--16_01--52PM/logs/asmcadc_check2014--16_01---PM.log
ASM configuration was not detected in this Oracle home. Was ASM configured in this Oracle home (y|n) [n]: y
Is OCR/Voting Disk placed in ASM y|n [n]: y
Enter the OCR/Voting Disk diskgroup name []:
Specify the ASM Diagnostic Destination [ ]:
Specify the diskstring []:
Specify the diskgroups that are managed by this ASM instance []:
######################### CHECK OPERATION END #########################
####################### CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is:
The cluster node(s) on which the Oracle home deinstallation will be performed are:testdb11a,testdb11b
Oracle Home selected for deinstall is: /u01/app/11.2.0/grid
Inventory Location where the Oracle home registered is: /u01/app/oraInventory
Following RAC listener(s) will be de-configured: LISTENER_SCAN1
ASM instance will be de-configured from this Oracle home
Do you want to continue (y - yes, n - no)? [n]: y
A log of this session will be written to: '/tmp/deinstall2014-09-16_01-45-52PM/logs/deinstall_deconfig2014-09-16_01-46-31-PM.out'
Any error messages from this session will be written to: '/tmp/deinstall2014-09-16_01-45-52PM/logs/deinstall_deconfig2014-09-16_01-46-31-PM.err'
######################## CLEAN OPERATION START ########################
ASM de-configuration trace file location: /tmp/deinstall2014--16_01--52PM/logs/asmcadc_clean2014--16_01---PM.log
ASM Clean Configuration START
ASM Clean Configuration END
Network Configuration clean config START
Network de-configuration trace file location: /tmp/deinstall2014--16_01--52PM/logs/netdc_clean2014--16_01---PM.log
De-configuring RAC listener(s): LISTENER_SCAN1
De-configuring listener: LISTENER_SCAN1
Stopping listener: LISTENER_SCAN1
Warning: Failed to stop listener. Listener may not be running.
Listener de-configured successfully.
De-configuring Naming Methods configuration file on all nodes...
Naming Methods configuration file de-configured successfully.
De-configuring Local Net Service Names configuration file on all nodes...
Local Net Service Names configuration file de-configured successfully.
De-configuring Directory Usage configuration file on all nodes...
Directory Usage configuration file de-configured successfully.
De-configuring backup files on all nodes...
Backup files de-configured successfully.
The network configuration has been cleaned up successfully.
Network Configuration clean config END
---------------------------------------->
The deconfig command below can be executed in parallel on all the remote nodes. Execute the command on the local node after the execution completes on all the remote nodes.
Run the following command as the root user or the administrator on node "testdb11b".
/tmp/deinstall2014-09-16_01-45-52PM/perl/bin/perl -I/tmp/deinstall2014-09-16_01-45-52PM/perl/lib -I/tmp/deinstall2014-09-16_01-45-52PM/crs/install /tmp/deinstall2014-09-16_01-45-52PM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2014-09-16_01-45-52PM/response/deinstall_Ora11g_gridinfrahome1.rsp"
Run the following command as the root user or the administrator on node "testdb11a".
/tmp/deinstall2014-09-16_01-45-52PM/perl/bin/perl -I/tmp/deinstall2014-09-16_01-45-52PM/perl/lib -I/tmp/deinstall2014-09-16_01-45-52PM/crs/install /tmp/deinstall2014-09-16_01-45-52PM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2014-09-16_01-45-52PM/response/deinstall_Ora11g_gridinfrahome1.rsp" -lastnode
Press Enter after you finish running the above commands
<----------------------------------------
Open another terminal window and run the two commands above as root on node 1 and node 2.
Node 1:
[root@testdb11a ~]# /tmp/deinstall2014-09-16_01-45-52PM/perl/bin/perl -I/tmp/deinstall2014-09-16_01-45-52PM/perl/lib -I/tmp/deinstall2014-09-16_01-45-52PM/crs/install /tmp/deinstall2014-09-16_01-45-52PM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2014-09-16_01-45-52PM/response/deinstall_Ora11g_gridinfrahome1.rsp" -lastnode
Using configuration parameter file: /tmp/deinstall2014--16_01--52PM/response/deinstall_Ora11g_gridinfrahome1.rsp
Adding Clusterware entries to inittab
/crs/install/inittab does not exist.
****Unable to retrieve Oracle Clusterware home.
Start Oracle Clusterware stack and try again.
****Unable to retrieve Oracle Clusterware home.
Start Oracle Clusterware stack and try again.
****Unable to retrieve Oracle Clusterware home.
Start Oracle Clusterware stack and try again.
****Unable to retrieve Oracle Clusterware home.
Start Oracle Clusterware stack and try again.
****Unable to retrieve Oracle Clusterware home.
Start Oracle Clusterware stack and try again.
CRS-: No Oracle Clusterware components configured.
CRS-: Command Stop failed, or completed with errors.
CRS-: No Oracle Clusterware components configured.
CRS-: Command Delete failed, or completed with errors.
CRS-: No Oracle Clusterware components configured.
CRS-: Command Stop failed, or completed with errors.
CRS-: No Oracle Clusterware components configured.
CRS-: Command Modify failed, or completed with errors.
Adding Clusterware entries to inittab
/crs/install/inittab does not exist.
CRS-: No Oracle Clusterware components configured.
CRS-: Command Delete failed, or completed with errors.
crsctl delete for vds in DATA ... failed
CRS-: No Oracle Clusterware components configured.
CRS-: Command Delete failed, or completed with errors.
CRS-: No Oracle Clusterware components configured.
CRS-: Command Stop failed, or completed with errors.
ACFS-: No ADVM/ACFS installation detected.
Either /etc/oracle/olr.loc does not exist or is not readable
Make sure the file exists and it has read and execute access
Either /etc/oracle/olr.loc does not exist or is not readable
Make sure the file exists and it has read and execute access
Failure in execution (rc=-, , No such file or directory) for command /etc/init.d/ohasd deinstall
error: package cvuqdisk is not installed
Successfully deconfigured Oracle clusterware stack on this node
Node 2:
[root@testdb11b ~]# /tmp/deinstall2014-09-16_01-45-52PM/perl/bin/perl -I/tmp/deinstall2014-09-16_01-45-52PM/perl/lib -I/tmp/deinstall2014-09-16_01-45-52PM/crs/install /tmp/deinstall2014-09-16_01-45-52PM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2014-09-16_01-45-52PM/response/deinstall_Ora11g_gridinfrahome1.rsp"
Using configuration parameter file: /tmp/deinstall2014--16_01--52PM/response/deinstall_Ora11g_gridinfrahome1.rsp
Network exists: /192.168.1.0/255.255.255.0/eth0, type static
VIP exists: /testdb11b-vip/192.168.1.23/192.168.1.0/255.255.255.0/eth0, hosting node testdb11b
GSD exists
ONS exists: Local port , remote port , EM port
CRS-: Starting shutdown of Oracle High Availability Services-managed resources on 'testdb11b'
CRS-: Attempting to stop 'ora.crsd' on 'testdb11b'
CRS-: Starting shutdown of Cluster Ready Services-managed resources on 'testdb11b'
CRS-: Attempting to stop 'ora.asm' on 'testdb11b'
CRS-: Stop of 'ora.asm' on 'testdb11b' succeeded
CRS-: Shutdown of Cluster Ready Services-managed resources on 'testdb11b' has completed
CRS-: Stop of 'ora.crsd' on 'testdb11b' succeeded
CRS-: Attempting to stop 'ora.mdnsd' on 'testdb11b'
CRS-: Attempting to stop 'ora.drivers.acfs' on 'testdb11b'
CRS-: Attempting to stop 'ora.ctssd' on 'testdb11b'
CRS-: Attempting to stop 'ora.evmd' on 'testdb11b'
CRS-: Attempting to stop 'ora.asm' on 'testdb11b'
CRS-: Stop of 'ora.evmd' on 'testdb11b' succeeded
CRS-: Stop of 'ora.mdnsd' on 'testdb11b' succeeded
CRS-: Stop of 'ora.ctssd' on 'testdb11b' succeeded
CRS-: Stop of 'ora.drivers.acfs' on 'testdb11b' succeeded
CRS-: Stop of 'ora.asm' on 'testdb11b' succeeded
CRS-: Attempting to stop 'ora.cluster_interconnect.haip' on 'testdb11b'
CRS-: Stop of 'ora.cluster_interconnect.haip' on 'testdb11b' succeeded
CRS-: Attempting to stop 'ora.cssd' on 'testdb11b'
CRS-: Stop of 'ora.cssd' on 'testdb11b' succeeded
CRS-: Attempting to stop 'ora.crf' on 'testdb11b'
CRS-: Stop of 'ora.crf' on 'testdb11b' succeeded
CRS-: Attempting to stop 'ora.gipcd' on 'testdb11b'
CRS-: Stop of 'ora.gipcd' on 'testdb11b' succeeded
CRS-: Attempting to stop 'ora.gpnpd' on 'testdb11b'
CRS-: Stop of 'ora.gpnpd' on 'testdb11b' succeeded
CRS-: Shutdown of Oracle High Availability Services-managed resources on 'testdb11b' has completed
CRS-: Oracle High Availability Services has been stopped.
Successfully deconfigured Oracle clusterware stack on this node
Once the commands have finished, return to the original window and press Enter to continue.
Setting the force flag to false
Setting the force flag to cleanup the Oracle Base
Oracle Universal Installer clean START
Detach Oracle home '/u01/app/11.2.0/grid' from the central inventory on the local node : Done
Delete directory '/u01/app/11.2.0/grid' on the local node : Done
Delete directory '/u01/app/oraInventory' on the local node : Done
Delete directory '/u01/app/grid' on the local node : Done
Detach Oracle home '/u01/app/11.2.0/grid' from the central inventory on the remote nodes 'testdb11b' : Done
Delete directory '/u01/app/11.2.0/grid' on the remote nodes 'testdb11b' : Done
Delete directory '/u01/app/oraInventory' on the remote nodes 'testdb11b' : Done
Delete directory '/u01/app/grid' on the remote nodes 'testdb11b' : Done
Oracle Universal Installer cleanup was successful.
Oracle Universal Installer clean END
## [START] Oracle install clean ##
Clean install operation removing temporary directory '/tmp/deinstall2014-09-16_01-45-52PM' on node 'testdb11a'
Clean install operation removing temporary directory '/tmp/deinstall2014-09-16_01-45-52PM' on node 'testdb11b'
## [END] Oracle install clean ##
######################### CLEAN OPERATION END #########################
####################### CLEAN OPERATION SUMMARY #######################
ASM instance was de-configured successfully from the Oracle home
Following RAC listener(s) were de-configured successfully: LISTENER_SCAN1
Oracle Clusterware is stopped and successfully de-configured on node "testdb11b"
Oracle Clusterware is stopped and successfully de-configured on node "testdb11a"
Oracle Clusterware is stopped and de-configured successfully.
Successfully detached Oracle home '/u01/app/11.2.0/grid' from the central inventory on the local node.
Successfully deleted directory '/u01/app/11.2.0/grid' on the local node.
Successfully deleted directory '/u01/app/oraInventory' on the local node.
Successfully deleted directory '/u01/app/grid' on the local node.
Successfully detached Oracle home '/u01/app/11.2.0/grid' from the central inventory on the remote nodes 'testdb11b'.
Successfully deleted directory '/u01/app/11.2.0/grid' on the remote nodes 'testdb11b'.
Successfully deleted directory '/u01/app/oraInventory' on the remote nodes 'testdb11b'.
Successfully deleted directory '/u01/app/grid' on the remote nodes 'testdb11b'.
Oracle Universal Installer cleanup was successful.
Run 'rm -rf /etc/oraInst.loc' as root on node(s) 'testdb11a,testdb11b' at the end of the session.
Run 'rm -rf /opt/ORCLfmap' as root on node(s) 'testdb11a,testdb11b' at the end of the session.
Oracle deinstall tool successfully cleaned up temporary directories.
#######################################################################
############# ORACLE DEINSTALL & DECONFIG TOOL END #############
Delete the leftover files as instructed above:
[root@testdb11a deinstall]# rm -rf /etc/oraInst.loc
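The cleanup summary above also asks for /opt/ORCLfmap to be removed, and for both paths to be removed on the second node as well; following those instructions:

[root@testdb11a deinstall]# rm -rf /opt/ORCLfmap
[root@testdb11b ~]# rm -rf /etc/oraInst.loc
[root@testdb11b ~]# rm -rf /opt/ORCLfmap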
The tool reports success: all CRS services have been stopped and all related files have been removed.
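As a final, optional check, confirm that no clusterware daemons remain and that the Grid directories are gone (the paths assume the layout used throughout this post):

ps -ef | grep -E 'ohasd|crsd|cssd|evmd' | grep -v grep
ls -ld /u01/app/11.2.0/grid /u01/app/grid /u01/app/oraInventory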