[RAC] Steps to Remove a Node from an Oracle 11g R2 RAC Environment
1. Remove the database instance
If any services are running on the node, delete those services first.
You can delete the instance through the DBCA graphical interface:
select Real Application Clusters -> Instance Management -> Delete Instance in turn, then accept the alert windows to delete the instance.
Alternatively, delete the instance in silent mode (run the command on one of the remaining nodes):
dbca -silent -deleteInstance [-nodeList node_name] -gdbName gdb_name
-instanceName instance_name -sysDBAUserName sysdba -sysDBAPassword password
node_name is the name of the node being removed
gdb_name is the global database name
instance_name is the name of the instance being deleted
sysdba is the name of an Oracle user with the SYSDBA privilege
password is the password of that SYSDBA user
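As a sketch for this cluster, assuming the node being removed is rac3 and the instance it hosts is named orcl3 (the instance name and password below are assumptions, not taken from the output that follows), the call would look like:
$ dbca -silent -deleteInstance -nodeList rac3 -gdbName orcl -instanceName orcl3 -sysDBAUserName sys -sysDBAPassword <sys_password>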
Confirm that the redo thread of the deleted instance is disabled; if it is not, disable it:
ALTER DATABASE DISABLE THREAD 3;
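To check the thread state first, you can query the standard v$thread view; the thread number 3 here matches the example above:
SELECT thread#, status, enabled FROM v$thread;
The thread belonging to the removed instance should show ENABLED = 'DISABLED' after the ALTER DATABASE command.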
Confirm that the instance has been removed from the OCR:
srvctl config database -d db_unique_name
[root@rac1 ~]# su - grid
rac1-> srvctl config database -d orcl
Database unique name: orcl
Database name: orcl
Oracle home: /oracle/app/oracle/product/11.2.0/db_1
Oracle user: oracle
Spfile: +DATADG/orcl/spfileorcl.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: orcl
Database instances: orcl1,orcl2
Disk Groups: DATADG
Mount point paths:
Services:
Type: RAC
Database is administrator managed
rac1->
2. Remove the database software from the node
Step 1: disable and stop the listener
$ srvctl disable listener -l listener_name -n name_of_node_to_delete
$ srvctl stop listener -l listener_name -n name_of_node_to_delete
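Assuming the default listener name LISTENER and rac3 as the node being removed (both are assumptions for this cluster), the calls would be:
$ srvctl disable listener -l LISTENER -n rac3
$ srvctl stop listener -l LISTENER -n rac3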
Step 2: update the inventory
Run the following in the $ORACLE_HOME/oui/bin directory on the node being removed:
$ ./runInstaller -updateNodeList ORACLE_HOME=Oracle_home_location
"CLUSTER_NODES={name_of_node_to_delete}" -local
rac3-> ./runInstaller -updateNodeList ORACLE_HOME=/oracle/app/oracle/product/11.2.0/db_1 "CLUSTER_NODES={rac3}" -local
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 2998 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /oracle/app/oraInventory
'UpdateNodeList' was successful.
Step 3: remove the node's database software
Run the following in the $ORACLE_HOME/deinstall directory on the node being removed:
./deinstall -local
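A sketch of this step on rac3, using the database Oracle home path shown in the srvctl output above:
rac3-> cd /oracle/app/oracle/product/11.2.0/db_1/deinstall
rac3-> ./deinstall -local
The tool prompts interactively, much like the Grid Infrastructure deinstall run shown in step 6 of the next section.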
Step 4: update the inventory on the other nodes
Run the following in the $ORACLE_HOME/oui/bin directory on each remaining cluster node:
$ ./runInstaller -updateNodeList ORACLE_HOME=Oracle_home_location
"CLUSTER_NODES={remaining_node_list}"
rac1-> ./runInstaller -updateNodeList ORACLE_HOME=/oracle/app/oracle/product/11.2.0/db_1 "CLUSTER_NODES={rac1,rac2}"
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 2999 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /oracle/app/oraInventory
'UpdateNodeList' was successful.
rac1->
3. Remove the clusterware software from the node
Step 1: make sure the $GRID_HOME environment variable is set correctly on all nodes.
Step 2: check whether the node to be deleted is pinned (run as root or as the grid user):
$ olsnodes -s -t
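A hypothetical output on this cluster (the olsnodes -s -t column layout is standard; the states shown are assumed):
rac1    Active  Unpinned
rac2    Active  Unpinned
rac3    Active  Unpinned
Only a node reported as Pinned needs the unpin command below.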
If the node is pinned, run the following command (as root):
# crsctl unpin css -n node_to_be_deleted
Step 3: run the rootcrs.pl script from the Grid_home/crs/install directory on the node being removed (as root):
#./rootcrs.pl -deconfig -force
[root@rac3 install]# ./rootcrs.pl -deconfig -force
Using configuration parameter file: ./crsconfig_params
Network exists: 1/192.168.56.0/255.255.255.0/eth0, type static
VIP exists: /rac1-vip/192.168.56.2/192.168.56.0/255.255.255.0/eth0, hosting node rac1
VIP exists: /rac2-vip/192.168.56.3/192.168.56.0/255.255.255.0/eth0, hosting node rac2
VIP exists: /192.168.56.4/192.168.56.4/192.168.56.0/255.255.255.0/eth0, hosting node rac3
GSD exists
ONS exists: Local port 6100, remote port 6200, EM port 2016
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac3'
CRS-2673: Attempting to stop 'ora.crsd' on 'rac3'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'rac3'
CRS-2673: Attempting to stop 'ora.oc4j' on 'rac3'
CRS-2673: Attempting to stop 'ora.DATADG.dg' on 'rac3'
CRS-2673: Attempting to stop 'ora.GRIDDATA.dg' on 'rac3'
CRS-2677: Stop of 'ora.DATADG.dg' on 'rac3' succeeded
CRS-2677: Stop of 'ora.oc4j' on 'rac3' succeeded
CRS-2672: Attempting to start 'ora.oc4j' on 'rac1'
CRS-2677: Stop of 'ora.GRIDDATA.dg' on 'rac3' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'rac3'
CRS-2677: Stop of 'ora.asm' on 'rac3' succeeded
CRS-2676: Start of 'ora.oc4j' on 'rac1' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'rac3' has completed
CRS-2677: Stop of 'ora.crsd' on 'rac3' succeeded
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac3'
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac3'
CRS-2673: Attempting to stop 'ora.evmd' on 'rac3'
CRS-2673: Attempting to stop 'ora.asm' on 'rac3'
CRS-2677: Stop of 'ora.evmd' on 'rac3' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'rac3' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'rac3' succeeded
CRS-2677: Stop of 'ora.asm' on 'rac3' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac3'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac3' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac3'
CRS-2677: Stop of 'ora.cssd' on 'rac3' succeeded
CRS-2673: Attempting to stop 'ora.crf' on 'rac3'
CRS-2677: Stop of 'ora.crf' on 'rac3' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac3'
CRS-2677: Stop of 'ora.gipcd' on 'rac3' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac3'
CRS-2677: Stop of 'ora.gpnpd' on 'rac3' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac3' has completed
CRS-4133: Oracle High Availability Services has been stopped.
Removing Trace File Analyzer
Successfully deconfigured Oracle clusterware stack on this node
Step 4: run the following command from the Grid_home/bin directory on a remaining node to delete the node from the cluster (as root):
# crsctl delete node -n node_to_be_deleted
[root@rac1 bin]# ./crsctl delete node -n rac3
CRS-4661: Node rac3 successfully deleted.
Step 5: run the following script in the Grid_home/oui/bin directory on the node being removed (as the grid user):
$ ./runInstaller -updateNodeList ORACLE_HOME=Grid_home "CLUSTER_NODES=
{node_to_be_deleted}" CRS=TRUE -local
rac3->./runInstaller -updateNodeList ORACLE_HOME=/oracle/app/11.2.0/grid "CLUSTER_NODES={rac3}" CRS=TRUE -local
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 2999 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /oracle/app/oraInventory
'UpdateNodeList' was successful.
Step 6: on the node being removed, run the deinstall script in the Grid_home/deinstall directory to remove the clusterware software:
$ ./deinstall -local
rac3-> export LANG=C
rac3-> ./deinstall -local
Checking for required files and bootstrapping ...
Please wait ...
Location of logs /tmp/deinstall2015-02-03_03-07-18PM/logs/
############ ORACLE DEINSTALL & DECONFIG TOOL START ############
######################### CHECK OPERATION START #########################
## [START] Install check configuration ##
Checking for existence of the Oracle home location /oracle/app/11.2.0/grid
Oracle Home type selected for deinstall is: Oracle Grid Infrastructure for a Cluster
Oracle Base selected for deinstall is: /oracle/app/grid
Checking for existence of central inventory location /oracle/app/oraInventory
Checking for existence of the Oracle Grid Infrastructure home
The following nodes are part of this cluster: rac3
Checking for sufficient temp space availability on node(s) : 'rac3'
## [END] Install check configuration ##
Traces log file: /tmp/deinstall2015-02-03_03-07-18PM/logs//crsdc.log
Enter an address or the name of the virtual IP used on node "rac3"[rac3-vip]
>
The following information can be collected by running "/sbin/ifconfig -a" on node "rac3"
Enter the IP netmask of Virtual IP "192.168.56.4" on node "rac3"[255.255.255.0]
>
Enter the network interface name on which the virtual IP address "192.168.56.4" is active
>
Enter an address or the name of the virtual IP[]
>
Network Configuration check config START
Network de-configuration trace file location: /tmp/deinstall2015-02-03_03-07-18PM/logs/netdc_check2015-02-03_03-08-29-PM.log
Specify all RAC listeners (do not include SCAN listener) that are to be de-configured [LISTENER,LISTENER_SCAN1]:
Network Configuration check config END
Asm Check Configuration START
ASM de-configuration trace file location: /tmp/deinstall2015-02-03_03-07-18PM/logs/asmcadc_check2015-02-03_03-08-53-PM.log
ASM configuration was not detected in this Oracle home. Was ASM configured in this Oracle home (y|n) [n]: y
Is OCR/Voting Disk placed in ASM y|n [n]: y
Enter the OCR/Voting Disk diskgroup name []:
Specify the ASM Diagnostic Destination [ ]:
Specify the diskstring []:
Specify the diskgroups that are managed by this ASM instance []:
######################### CHECK OPERATION END #########################
####################### CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is:
The cluster node(s) on which the Oracle home deinstallation will be performed are:rac3
Oracle Home selected for deinstall is: /oracle/app/11.2.0/grid
Inventory Location where the Oracle home registered is: /oracle/app/oraInventory
Following RAC listener(s) will be de-configured: LISTENER,LISTENER_SCAN1
ASM instance will be de-configured from this Oracle home
Do you want to continue (y - yes, n - no)? [n]: y
A log of this session will be written to: '/tmp/deinstall2015-02-03_03-07-18PM/logs/deinstall_deconfig2015-02-03_03-07-25-PM.out'
Any error messages from this session will be written to: '/tmp/deinstall2015-02-03_03-07-18PM/logs/deinstall_deconfig2015-02-03_03-07-25-PM.err'
######################## CLEAN OPERATION START ########################
ASM de-configuration trace file location: /tmp/deinstall2015-02-03_03-07-18PM/logs/asmcadc_clean2015-02-03_03-09-46-PM.log
ASM Clean Configuration START
ASM Clean Configuration END
Network Configuration clean config START
Network de-configuration trace file location: /tmp/deinstall2015-02-03_03-07-18PM/logs/netdc_clean2015-02-03_03-09-49-PM.log
De-configuring RAC listener(s): LISTENER,LISTENER_SCAN1
De-configuring listener: LISTENER
Stopping listener: LISTENER
Warning: Failed to stop listener. Listener may not be running.
Listener de-configured successfully.
De-configuring listener: LISTENER_SCAN1
Stopping listener: LISTENER_SCAN1
Warning: Failed to stop listener. Listener may not be running.
Listener de-configured successfully.
De-configuring backup files...
Backup files de-configured successfully.
The network configuration has been cleaned up successfully.
Network Configuration clean config END
---------------------------------------->
The deconfig command below can be executed in parallel on all the remote nodes. Execute the command on the local node after the execution completes on all the remote nodes.
Run the following command as the root user or the administrator on node "rac3".
/tmp/deinstall2015-02-03_03-07-18PM/perl/bin/perl -I/tmp/deinstall2015-02-03_03-07-18PM/perl/lib -I/tmp/deinstall2015-02-03_03-07-18PM/crs/install /tmp/deinstall2015-02-03_03-07-18PM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2015-02-03_03-07-18PM/response/deinstall_Ora11g_gridinfrahome1.rsp" -lastnode
Press Enter after you finish running the above commands
<----------------------------------------
Remove the directory: /tmp/deinstall2015-02-03_03-07-18PM on node:
Setting the force flag to false
Setting the force flag to cleanup the Oracle Base
Oracle Universal Installer clean START
Detach Oracle home '/oracle/app/11.2.0/grid' from the central inventory on the local node : Done
Delete directory '/oracle/app/11.2.0/grid' on the local node : Done
Delete directory '/oracle/app/oraInventory' on the local node : Done
Delete directory '/oracle/app/grid' on the local node : Done
Oracle Universal Installer cleanup was successful.
Oracle Universal Installer clean END
## [START] Oracle install clean ##
Clean install operation removing temporary directory '/tmp/deinstall2015-02-03_03-07-18PM' on node 'rac3'
## [END] Oracle install clean ##
######################### CLEAN OPERATION END #########################
####################### CLEAN OPERATION SUMMARY #######################
ASM instance was de-configured successfully from the Oracle home
Following RAC listener(s) were de-configured successfully: LISTENER,LISTENER_SCAN1
Oracle Clusterware is stopped and successfully de-configured on node "rac3"
Oracle Clusterware is stopped and de-configured successfully.
Successfully detached Oracle home '/oracle/app/11.2.0/grid' from the central inventory on the local node.
Successfully deleted directory '/oracle/app/11.2.0/grid' on the local node.
Successfully deleted directory '/oracle/app/oraInventory' on the local node.
Successfully deleted directory '/oracle/app/grid' on the local node.
Oracle Universal Installer cleanup was successful.
Run 'rm -rf /etc/oraInst.loc' as root on node(s) 'rac3' at the end of the session.
Run 'rm -rf /opt/ORCLfmap' as root on node(s) 'rac3' at the end of the session.
Run 'rm -rf /etc/oratab' as root on node(s) 'rac3' at the end of the session.
Oracle deinstall tool successfully cleaned up temporary directories.
#######################################################################
############# ORACLE DEINSTALL & DECONFIG TOOL END #############
rac3->
Step 7: run runInstaller in the Grid_home/oui/bin directory on the remaining nodes to update the inventory:
$ ./runInstaller -updateNodeList ORACLE_HOME=Grid_home
"CLUSTER_NODES={remaining_nodes_list}" CRS=TRUE
rac1-> ./runInstaller -updateNodeList ORACLE_HOME=/oracle/app/11.2.0/grid "CLUSTER_NODES={rac1,rac2}" CRS=TRUE
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 2996 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /oracle/app/oraInventory
'UpdateNodeList' was successful.
Step 8: on a remaining node, verify that the node was deleted successfully:
$ cluvfy stage -post nodedel -n node_list [-verbose]
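For this cluster the check would look like the following, rac3 being the node just removed:
$ cluvfy stage -post nodedel -n rac3 -verbose
The run should end by reporting that the post-check for node removal was successful.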
Source: http://blog.csdn.net/weiwangsisoftstone/article/details/43450959