Reposted from: http://www.askmaclean.com/archives/add-node-to-11-2-0-2-grid-infrastructure.html

In a previous article I described the steps for adding a node to a 10g RAC cluster. In 11gR2, Oracle CRS was upgraded to Grid Infrastructure (GI). GI makes it more convenient to manage CRS resources such as VIPs and ASM, and as a result, adding a node to an 11.2 GI cluster differs considerably from the 10gR2 procedure.

Here is a brief outline of the key points of an ADD NODE operation for GI in 11.2:

I. Preparation

The preparation work must not be skipped. The prerequisites I listed for adding a node to a 10g RAC cluster still apply to 11.2 GI, but note the following points:

1. Configure user equivalence not only for the oracle user but also for the grid user (the GI installation owner) — unless you install both GI and the RDBMS as oracle, which is not recommended.
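
As a quick sanity check, user equivalence for the grid user can be probed with a short loop over all nodes. This is a minimal sketch, not an Oracle utility: the helper name check_equiv is ours, the node names vrh1-vrh3 are assumptions matching this example cluster, and BatchMode makes ssh fail instead of prompting for a password.

```shell
# Probe passwordless SSH (user equivalence) from the current node to each
# cluster node; run this as the grid user on an existing node.
check_equiv() {
  for node in "$@"; do
    if ssh -o BatchMode=yes -o ConnectTimeout=5 "$node" true 2>/dev/null; then
      echo "$node: user equivalence OK"
    else
      echo "$node: user equivalence FAILED"
    fi
  done
}

# Example invocation for this cluster (node names are assumptions):
# check_equiv vrh1 vrh2 vrh3
```

Every node, including the one you run from, should report OK before proceeding.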

2. 11.2 GI introduces octssd (the Oracle Cluster Time Synchronization Service daemon) for time synchronization. If you plan to rely on octssd, it is recommended to disable the ntpd time service, as follows:

# service ntpd stop
Shutting down ntpd: [ OK ]
# chkconfig ntpd off
# mv /etc/ntp.conf /etc/ntp.conf.orig
# rm /var/run/ntpd.pid

3. Use the Cluster Verification Utility (cluvfy) to verify that the new node meets the cluster's requirements:

cluvfy stage -pre nodeadd -n <NEW NODE>

For example:

su - grid

[grid@vrh1 ~]$ cluvfy stage -pre nodeadd -n vrh3

Performing pre-checks for node addition 

Checking node reachability...
Node reachability check passed from node "vrh1"
Checking user equivalence...
User equivalence check passed for user "grid"
Checking node connectivity...
Checking hosts config file...
Verification of the hosts config file successful
Check: Node connectivity for interface "eth0"
Node connectivity passed for interface "eth0"
Node connectivity check passed
Checking CRS integrity...
CRS integrity check passed
Checking shared resources...
Checking CRS home location...
The location "/g01/11.2.0/grid" is not shared but is present/creatable on all nodes
Shared resources check for node addition passed
Checking node connectivity...
Checking hosts config file...
Verification of the hosts config file successful
Check: Node connectivity for interface "eth0"
Node connectivity passed for interface "eth0"
Check: Node connectivity for interface "eth1"
Node connectivity passed for interface "eth1"
Node connectivity check passed
Total memory check passed
Available memory check passed
Swap space check passed
Free disk space check passed for "vrh3:/tmp"
Free disk space check passed for "vrh1:/tmp"
Check for multiple users with UID value 54322 passed
User existence check passed for "grid"
Run level check passed
Hard limits check failed for "maximum open file descriptors"
Check failed on nodes:
vrh3
Soft limits check passed for "maximum open file descriptors"
Hard limits check passed for "maximum user processes"
Soft limits check passed for "maximum user processes"
System architecture check passed
Kernel version check passed
Kernel parameter check passed for "semmsl"
Kernel parameter check passed for "semmns"
Kernel parameter check passed for "semopm"
Kernel parameter check passed for "semmni"
Kernel parameter check passed for "shmmax"
Kernel parameter check passed for "shmmni"
Kernel parameter check passed for "shmall"
Kernel parameter check passed for "file-max"
Kernel parameter check passed for "ip_local_port_range"
Kernel parameter check passed for "rmem_default"
Kernel parameter check passed for "rmem_max"
Kernel parameter check passed for "wmem_default"
Kernel parameter check passed for "wmem_max"
Kernel parameter check passed for "aio-max-nr"
Package existence check passed for "make-3.81( x86_64)"
Package existence check passed for "binutils-2.17.50.0.6( x86_64)"
Package existence check passed for "gcc-4.1.2 (x86_64)( x86_64)"
Package existence check passed for "libaio-0.3.106 (x86_64)( x86_64)"
Package existence check passed for "glibc-2.5-24 (x86_64)( x86_64)"
Package existence check passed for "compat-libstdc++-33-3.2.3 (x86_64)( x86_64)"
Package existence check passed for "elfutils-libelf-0.125 (x86_64)( x86_64)"
Package existence check passed for "elfutils-libelf-devel-0.125( x86_64)"
Package existence check passed for "glibc-common-2.5( x86_64)"
Package existence check passed for "glibc-devel-2.5 (x86_64)( x86_64)"
Package existence check passed for "glibc-headers-2.5( x86_64)"
Package existence check passed for "gcc-c++-4.1.2 (x86_64)( x86_64)"
Package existence check passed for "libaio-devel-0.3.106 (x86_64)( x86_64)"
Package existence check passed for "libgcc-4.1.2 (x86_64)( x86_64)"
Package existence check passed for "libstdc++-4.1.2 (x86_64)( x86_64)"
Package existence check passed for "libstdc++-devel-4.1.2 (x86_64)( x86_64)"
Package existence check passed for "sysstat-7.0.2( x86_64)"
Package existence check passed for "ksh-20060214( x86_64)"
Check for multiple users with UID value 0 passed
Current group ID check passed
Checking OCR integrity...
OCR integrity check passed
Checking Oracle Cluster Voting Disk configuration...
Oracle Cluster Voting Disk configuration check passed
Time zone consistency check passed
Starting Clock synchronization checks using Network Time Protocol(NTP)...
NTP Configuration file check started...
No NTP Daemons or Services were found to be running
Clock synchronization check using Network Time Protocol(NTP) passed
User "grid" is not part of "root" group. Check passed
Checking consistency of file "/etc/resolv.conf" across nodes
File "/etc/resolv.conf" does not have both domain and search entries defined
domain entry in file "/etc/resolv.conf" is consistent across nodes
search entry in file "/etc/resolv.conf" is consistent across nodes
All nodes have one search entry defined in file "/etc/resolv.conf"
PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on following nodes: vrh3
File "/etc/resolv.conf" is not consistent across nodes
Pre-check for node addition was unsuccessful on all the nodes.

Generally speaking, if we are not using DNS for name resolution, the resolv.conf inconsistency can be ignored; in silent-install mode, however, it may prevent the operation from completing, as discussed below.
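
To see at a glance what cluvfy compares here, the relevant entries can be extracted from each node's /etc/resolv.conf and diffed by hand. A minimal sketch, assuming the resolver(5) keyword layout; the helper name resolv_summary is ours, not an Oracle tool:

```shell
# Print only the domain/search entries of a resolv.conf, sorted, so the
# output from two nodes can be compared directly with diff.
resolv_summary() {
  awk '$1 == "domain" || $1 == "search"' "${1:-/etc/resolv.conf}" | sort
}

# Example: compare the local file against a remote node's copy
# (node name vrh3 is an assumption for this cluster):
# diff <(resolv_summary) \
#      <(ssh vrh3 cat /etc/resolv.conf | awk '$1=="domain" || $1=="search"' | sort)
```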

II. Adding the new node to GI

Note that addNode.sh, the key script for adding a node in 11.2.0.2 GI, may be affected by a bug. According to the official documentation, to add a node in Interactive Mode you only need to run addNode.sh and the OUI will start; in practice this is not the case:

documentation said:

Go to CRS_home/oui/bin and run the addNode.sh script on one of the existing nodes.
Oracle Universal Installer runs in add node mode and the Welcome page displays.
Click Next and the Specify Cluster Nodes for Node Addition page displays.

what we did: addNode.sh must be run as the GI owner (usually the grid user), on an existing node that is already running GI:

[grid@vrh1 ~]$ cd $ORA_CRS_HOME/oui/bin
[grid@vrh1 bin]$ ./addNode.sh
ERROR:
Value for CLUSTER_NEW_NODES not specified.

USAGE:
/g01/11.2.0/grid/cv/cvutl/check_nodeadd.pl {-pre|-post}
/g01/11.2.0/grid/cv/cvutl/check_nodeadd.pl -pre [-silent] CLUSTER_NEW_NODES={}
/g01/11.2.0/grid/cv/cvutl/check_nodeadd.pl -pre [-silent] CLUSTER_NEW_NODES={} CLUSTER_NEW_VIRTUAL_HOSTNAMES={}
/g01/11.2.0/grid/cv/cvutl/check_nodeadd.pl -pre [-silent] -responseFile
/g01/11.2.0/grid/cv/cvutl/check_nodeadd.pl -post [-silent]

Our intent was to add the node through the interactive graphical OUI (runInstaller -addnode), yet addNode.sh instead demanded a set of parameters, and the check_nodeadd.pl script it invokes runs in silent mode.

A search on MOS and Google shows that virtually every document recommends adding nodes in silent mode, so we had no choice but to switch to a silent addition. Silent mode actually requires only a few parameters, which is probably one reason it is so widely recommended — but here we ran into another problem:

SYNTAX:

./addNode.sh -silent
"CLUSTER_NEW_NODES={node2}"
"CLUSTER_NEW_PRIVATE_NODE_NAMES={node2-priv}"
"CLUSTER_NEW_VIRTUAL_HOSTNAMES={node2-vip}"

In our case the exact command was:

./addNode.sh -silent
"CLUSTER_NEW_NODES={vrh3}"
"CLUSTER_NEW_VIRTUAL_HOSTNAMES={vrh3-vip}"
"CLUSTER_NEW_PRIVATE_NODE_NAMES={vrh3-priv}"

Because of the -silent mode the command above produces no window output (it actually writes to the /tmp/silentInstall.log log file). Removing the -silent parameter:

./addNode.sh "CLUSTER_NEW_NODES={vrh3}"
"CLUSTER_NEW_VIRTUAL_HOSTNAMES={vrh3-vip}" "CLUSTER_NEW_PRIVATE_NODE_NAMES={vrh3-priv}"

Performing pre-checks for node addition

Checking node reachability...
Node reachability check passed from node "vrh1"
Checking user equivalence...
User equivalence check passed for user "grid"
Checking node connectivity...
Checking hosts config file...
Verification of the hosts config file successful
Check: Node connectivity for interface "eth0"
Node connectivity passed for interface "eth0"
Node connectivity check passed
Checking CRS integrity...
CRS integrity check passed
Checking shared resources...
Checking CRS home location...
The location "/g01/11.2.0/grid" is not shared but is present/creatable on all nodes
Shared resources check for node addition passed
Checking node connectivity...
Checking hosts config file...
Verification of the hosts config file successful
Check: Node connectivity for interface "eth0"
Node connectivity passed for interface "eth0"
Check: Node connectivity for interface "eth1"
Node connectivity passed for interface "eth1"
Node connectivity check passed
Total memory check passed
Available memory check passed
Swap space check passed
Free disk space check passed for "vrh3:/tmp"
Free disk space check passed for "vrh1:/tmp"
Check for multiple users with UID value 54322 passed
User existence check passed for "grid"
Run level check passed
Hard limits check failed for "maximum open file descriptors"
Check failed on nodes:
vrh3
Soft limits check passed for "maximum open file descriptors"
Hard limits check passed for "maximum user processes"
Soft limits check passed for "maximum user processes"
System architecture check passed
Kernel version check passed
Kernel parameter check passed for "semmsl"
Kernel parameter check passed for "semmns"
Kernel parameter check passed for "semopm"
Kernel parameter check passed for "semmni"
Kernel parameter check passed for "shmmax"
Kernel parameter check passed for "shmmni"
Kernel parameter check passed for "shmall"
Kernel parameter check passed for "file-max"
Kernel parameter check passed for "ip_local_port_range"
Kernel parameter check passed for "rmem_default"
Kernel parameter check passed for "rmem_max"
Kernel parameter check passed for "wmem_default"
Kernel parameter check passed for "wmem_max"
Kernel parameter check passed for "aio-max-nr"
Package existence check passed for "make-3.81( x86_64)"
Package existence check passed for "binutils-2.17.50.0.6( x86_64)"
Package existence check passed for "gcc-4.1.2 (x86_64)( x86_64)"
Package existence check passed for "libaio-0.3.106 (x86_64)( x86_64)"
Package existence check passed for "glibc-2.5-24 (x86_64)( x86_64)"
Package existence check passed for "compat-libstdc++-33-3.2.3 (x86_64)( x86_64)"
Package existence check passed for "elfutils-libelf-0.125 (x86_64)( x86_64)"
Package existence check passed for "elfutils-libelf-devel-0.125( x86_64)"
Package existence check passed for "glibc-common-2.5( x86_64)"
Package existence check passed for "glibc-devel-2.5 (x86_64)( x86_64)"
Package existence check passed for "glibc-headers-2.5( x86_64)"
Package existence check passed for "gcc-c++-4.1.2 (x86_64)( x86_64)"
Package existence check passed for "libaio-devel-0.3.106 (x86_64)( x86_64)"
Package existence check passed for "libgcc-4.1.2 (x86_64)( x86_64)"
Package existence check passed for "libstdc++-4.1.2 (x86_64)( x86_64)"
Package existence check passed for "libstdc++-devel-4.1.2 (x86_64)( x86_64)"
Package existence check passed for "sysstat-7.0.2( x86_64)"
Package existence check passed for "ksh-20060214( x86_64)"
Check for multiple users with UID value 0 passed
Current group ID check passed
Checking OCR integrity...
OCR integrity check passed
Checking Oracle Cluster Voting Disk configuration...
Oracle Cluster Voting Disk configuration check passed
Time zone consistency check passed
Starting Clock synchronization checks using Network Time Protocol(NTP)...
NTP Configuration file check started...
No NTP Daemons or Services were found to be running
Clock synchronization check using Network Time Protocol(NTP) passed
User "grid" is not part of "root" group. Check passed
Checking consistency of file "/etc/resolv.conf" across nodes
File "/etc/resolv.conf" does not have both domain and search entries defined
domain entry in file "/etc/resolv.conf" is consistent across nodes
search entry in file "/etc/resolv.conf" is consistent across nodes
All nodes have one search entry defined in file "/etc/resolv.conf"
PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on following nodes: vrh3
File "/etc/resolv.conf" is not consistent across nodes
Checking VIP configuration.
Checking VIP Subnet configuration.
Check for VIP Subnet configuration passed.
Checking VIP reachability
Check for VIP reachability passed.
Pre-check for node addition was unsuccessful on all the nodes.

Before actually adding the node, addNode.sh itself invokes the cluvfy utility to verify that the new node qualifies, and refuses to continue if it does not. Since we have already verified the new node's readiness, we can safely skip addNode.sh's own check. Let's look at the content of the addNode.sh script:

[grid@vrh1 bin]$ cat addNode.sh 

#!/bin/sh
OHOME=/g01/11.2.0/grid
INVPTRLOC=$OHOME/oraInst.loc
ADDNODE="$OHOME/oui/bin/runInstaller -addNode -invPtrLoc $INVPTRLOC ORACLE_HOME=$OHOME $*"
if [ "$IGNORE_PREADDNODE_CHECKS" = "Y" -o ! -f "$OHOME/cv/cvutl/check_nodeadd.pl" ]
then
    $ADDNODE
else
    CHECK_NODEADD="$OHOME/perl/bin/perl $OHOME/cv/cvutl/check_nodeadd.pl -pre $*"
    $CHECK_NODEADD
    if [ $? -eq 0 ]
    then
        $ADDNODE
    fi
fi

As you can see, the IGNORE_PREADDNODE_CHECKS environment variable controls whether the pre-add-node checks are performed. We set this variable manually and then run addNode.sh again:

export IGNORE_PREADDNODE_CHECKS=Y

./addNode.sh "CLUSTER_NEW_NODES={vrh3}" \
"CLUSTER_NEW_VIRTUAL_HOSTNAMES={vrh3-vip}" \
"CLUSTER_NEW_PRIVATE_NODE_NAMES={vrh3-priv}" \
> add_node.log 2>&1

In another window you can monitor the progress of the node addition from the log:

tail -f add_node.log

Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB. Actual 5951 MB Passed
Checking monitor: must be configured to display at least 256 colors. Actual 16777216 Passed

Oracle Universal Installer, Version 11.2.0.2.0 Production
Copyright (C) 1999, 2010, Oracle. All rights reserved.

Performing tests to see whether nodes vrh2,vrh3 are available
............................................................... 100% Done. .
-----------------------------------------------------------------------------
Cluster Node Addition Summary
Global Settings
Source: /g01/11.2.0/grid
New Nodes
Space Requirements
New Nodes
vrh3
/: Required 6.66GB : Available 32.40GB
Installed Products
Product Names
Oracle Grid Infrastructure 11.2.0.2.0
Sun JDK 1.5.0.24.08
Installer SDK Component 11.2.0.2.0
Oracle One-Off Patch Installer 11.2.0.0.2
Oracle Universal Installer 11.2.0.2.0
Oracle USM Deconfiguration 11.2.0.2.0
Oracle Configuration Manager Deconfiguration 10.3.1.0.0
Enterprise Manager Common Core Files 10.2.0.4.3
Oracle DBCA Deconfiguration 11.2.0.2.0
Oracle RAC Deconfiguration 11.2.0.2.0
Oracle Quality of Service Management (Server) 11.2.0.2.0
Installation Plugin Files 11.2.0.2.0
Universal Storage Manager Files 11.2.0.2.0
Oracle Text Required Support Files 11.2.0.2.0
Automatic Storage Management Assistant 11.2.0.2.0
Oracle Database 11g Multimedia Files 11.2.0.2.0
Oracle Multimedia Java Advanced Imaging 11.2.0.2.0
Oracle Globalization Support 11.2.0.2.0
Oracle Multimedia Locator RDBMS Files 11.2.0.2.0
Oracle Core Required Support Files 11.2.0.2.0
Bali Share 1.1.18.0.0
Oracle Database Deconfiguration 11.2.0.2.0
Oracle Quality of Service Management (Client) 11.2.0.2.0
Expat libraries 2.0.1.0.1
Oracle Containers for Java 11.2.0.2.0
Perl Modules 5.10.0.0.1
Secure Socket Layer 11.2.0.2.0
Oracle JDBC/OCI Instant Client 11.2.0.2.0
Oracle Multimedia Client Option 11.2.0.2.0
LDAP Required Support Files 11.2.0.2.0
Character Set Migration Utility 11.2.0.2.0
Perl Interpreter 5.10.0.0.1
PL/SQL Embedded Gateway 11.2.0.2.0
OLAP SQL Scripts 11.2.0.2.0
Database SQL Scripts 11.2.0.2.0
Oracle Extended Windowing Toolkit 3.4.47.0.0
SSL Required Support Files for InstantClient 11.2.0.2.0
SQL*Plus Files for Instant Client 11.2.0.2.0
Oracle Net Required Support Files 11.2.0.2.0
Oracle Database User Interface 2.2.13.0.0
RDBMS Required Support Files for Instant Client 11.2.0.2.0
RDBMS Required Support Files Runtime 11.2.0.2.0
XML Parser for Java 11.2.0.2.0
Oracle Security Developer Tools 11.2.0.2.0
Oracle Wallet Manager 11.2.0.2.0
Enterprise Manager plugin Common Files 11.2.0.2.0
Platform Required Support Files 11.2.0.2.0
Oracle JFC Extended Windowing Toolkit 4.2.36.0.0
RDBMS Required Support Files 11.2.0.2.0
Oracle Ice Browser 5.2.3.6.0
Oracle Help For Java 4.2.9.0.0
Enterprise Manager Common Files 10.2.0.4.3
Deinstallation Tool 11.2.0.2.0
Oracle Java Client 11.2.0.2.0
Cluster Verification Utility Files 11.2.0.2.0
Oracle Notification Service (eONS) 11.2.0.2.0
Oracle LDAP administration 11.2.0.2.0
Cluster Verification Utility Common Files 11.2.0.2.0
Oracle Clusterware RDBMS Files 11.2.0.2.0
Oracle Locale Builder 11.2.0.2.0
Oracle Globalization Support 11.2.0.2.0
Buildtools Common Files 11.2.0.2.0
Oracle RAC Required Support Files-HAS 11.2.0.2.0
SQL*Plus Required Support Files 11.2.0.2.0
XDK Required Support Files 11.2.0.2.0
Agent Required Support Files 10.2.0.4.3
Parser Generator Required Support Files 11.2.0.2.0
Precompiler Required Support Files 11.2.0.2.0
Installation Common Files 11.2.0.2.0
Required Support Files 11.2.0.2.0
Oracle JDBC/THIN Interfaces 11.2.0.2.0
Oracle Multimedia Locator 11.2.0.2.0
Oracle Multimedia 11.2.0.2.0
HAS Common Files 11.2.0.2.0
Assistant Common Files 11.2.0.2.0
PL/SQL 11.2.0.2.0
HAS Files for DB 11.2.0.2.0
Oracle Recovery Manager 11.2.0.2.0
Oracle Database Utilities 11.2.0.2.0
Oracle Notification Service 11.2.0.2.0
SQL*Plus 11.2.0.2.0
Oracle Netca Client 11.2.0.2.0
Oracle Net 11.2.0.2.0
Oracle JVM 11.2.0.2.0
Oracle Internet Directory Client 11.2.0.2.0
Oracle Net Listener 11.2.0.2.0
Cluster Ready Services Files 11.2.0.2.0
Oracle Database 11g 11.2.0.2.0
-----------------------------------------------------------------------------

Instantiating scripts for add node (Monday, August 15, 2011 10:15:35 PM CST)
. 1% Done.
Instantiation of add node scripts complete
Copying to remote nodes (Monday, August 15, 2011 10:15:38 PM CST)
............................................................................................... 96% Done.
Home copied to new nodes
Saving inventory on nodes (Monday, August 15, 2011 10:21:02 PM CST)
. 100% Done.
Save inventory complete
WARNING:A new inventory has been created on one or more nodes in this session.
However, it has not yet been registered as the central inventory of this system.
To register the new inventory please run the script at '/g01/oraInventory/orainstRoot.sh'
with root privileges on nodes 'vrh3'.
If you do not register the inventory, you may not be able to update or
patch the products you installed.
The following configuration scripts need to be executed as the "root" user in each cluster node.
/g01/oraInventory/orainstRoot.sh #On nodes vrh3
/g01/11.2.0/grid/root.sh #On nodes vrh3
To execute the configuration scripts:
1. Open a terminal window
2. Log in as "root"
3. Run the scripts in each cluster node

The Cluster Node Addition of /g01/11.2.0/grid was successful.
Please check '/tmp/silentInstall.log' for more details.

The GI software installation above succeeded. Next we still need to run two key scripts on the newly added node — do not forget this step!

The orainstRoot.sh and root.sh scripts must be run as root:

su - root

[root@vrh3]# cat /etc/oraInst.loc
inventory_loc=/g01/oraInventory    -- this is the location of the oraInventory
inst_group=asmadmin

[root@vrh3 ~]# cd /g01/oraInventory
[root@vrh3 oraInventory]# ./orainstRoot.sh
Creating the Oracle inventory pointer file (/etc/oraInst.loc)
Changing permissions of /g01/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /g01/oraInventory to asmadmin.
The execution of the script is complete.

Then run the root.sh script under the CRS_HOME; it may emit warnings, which are harmless:

[root@vrh3 ~]# cd $ORA_CRS_HOME
[root@vrh3 g01]# /g01/11.2.0/grid/root.sh
Running Oracle 11g root script...

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /g01/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...

Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /g01/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
LOCAL ADD MODE
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
OLR initialization - successful
Adding daemon to inittab
ACFS-9200: Supported
ACFS-9300: ADVM/ACFS distribution files found.
ACFS-9307: Installing requested ADVM/ACFS software.
ACFS-9308: Loading installed ADVM/ACFS drivers.
ACFS-9321: Creating udev for ADVM/ACFS.
ACFS-9323: Creating module dependencies - this may take some time.
ACFS-9327: Verifying ADVM/ACFS devices.
ACFS-9309: ADVM/ACFS installation correctness verified.
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node vrh1, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 11g Release 2.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
/g01/11.2.0/grid/bin/srvctl start listener -n vrh3 ... failed
Failed to perform new node configuration at /g01/11.2.0/grid/crs/install/crsconfig_lib.pm line 8255.
/g01/11.2.0/grid/perl/bin/perl -I/g01/11.2.0/grid/perl/lib -I/g01/11.2.0/grid/crs/install 
/g01/11.2.0/grid/crs/install/rootcrs.pl execution failed

Two minor errors appear above:

1. The failure to start the LISTENER on the new node can be ignored; it occurs because the RDBMS_HOME has not been installed yet, while CRS nonetheless tries to start the associated listener:

[root@vrh3 g01]# /g01/11.2.0/grid/bin/srvctl start listener -n vrh3
PRCR-1013 : Failed to start resource ora.CRS_LISTENER.lsnr
PRCR-1064 : Failed to start resource ora.CRS_LISTENER.lsnr on node vrh3
CRS-5010: Update of configuration file "/s01/orabase/product/11.2.0/dbhome_1/network/admin/listener.ora" failed: details at "(:CLSN00014:)" in "/g01/11.2.0/grid/log/vrh3/agent/crsd/oraagent_oracle/oraagent_oracle.log"
CRS-5013: Agent "/g01/11.2.0/grid/bin/oraagent.bin" failed to start process "/s01/orabase/product/11.2.0/dbhome_1/bin/lsnrctl" for action "check": details at "(:CLSN00008:)" in "/g01/11.2.0/grid/log/vrh3/agent/crsd/oraagent_oracle/oraagent_oracle.log"
CRS-2674: Start of 'ora.CRS_LISTENER.lsnr' on 'vrh3' failed
CRS-5013: Agent "/g01/11.2.0/grid/bin/oraagent.bin" failed to start process "/s01/orabase/product/11.2.0/dbhome_1/bin/lsnrctl" for action "clean": details at "(:CLSN00008:)" in "/g01/11.2.0/grid/log/vrh3/agent/crsd/oraagent_oracle/oraagent_oracle.log"
CRS-5013: Agent "/g01/11.2.0/grid/bin/oraagent.bin" failed to start process "/s01/orabase/product/11.2.0/dbhome_1/bin/lsnrctl" for action "check": details at "(:CLSN00008:)" in "/g01/11.2.0/grid/log/vrh3/agent/crsd/oraagent_oracle/oraagent_oracle.log"
CRS-2678: 'ora.CRS_LISTENER.lsnr' on 'vrh3' has experienced an unrecoverable failure
CRS-0267: Human intervention required to resume its availability.
PRCC-1015 : LISTENER was already running on vrh3
PRCR-1004 : Resource ora.LISTENER.lsnr is already running
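
Once the RDBMS home exists on vrh3, the listener resource should come online, and its state can be read from `crsctl stat res` output. A minimal parsing sketch — the helper name res_state is ours, and the key=value output format (NAME=/TARGET=/STATE=) is assumed to match 11.2 crsctl output as seen in these logs:

```shell
# Read `crsctl stat res <resource>` key=value output on stdin and print
# only the STATE value, e.g. "ONLINE on vrh3".
res_state() {
  awk -F= '$1 == "STATE" { print $2 }'
}

# Example (run on any cluster node):
# crsctl stat res ora.CRS_LISTENER.lsnr | res_state
```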

2. If the rootcrs.pl script fails, rerunning it once is usually enough:

[root@vrh3 bin]# /g01/11.2.0/grid/perl/bin/perl -I/g01/11.2.0/grid/perl/lib \
-I/g01/11.2.0/grid/crs/install /g01/11.2.0/grid/crs/install/rootcrs.pl

Using configuration parameter file: /g01/11.2.0/grid/crs/install/crsconfig_params
PRKO-2190 : VIP exists for node vrh3, VIP name vrh3-vip
PRKO-2420 : VIP is already started on node(s): vrh3
Preparing packages for installation...
cvuqdisk-1.0.9-1
Configure Oracle Grid Infrastructure for a Cluster ... succeeded

3. It is recommended to restart CRS on the new node and run cluvfy to verify that the node addition completed successfully:

[root@vrh3 ~]# crsctl stop crs

[root@vrh3 ~]# crsctl start crs

[root@vrh3 ~]# su - grid

[grid@vrh3 ~]$ cluvfy stage -post nodeadd -n vrh1,vrh2,vrh3

Performing post-checks for node addition 

Checking node reachability...
Node reachability check passed from node "vrh1"
Checking user equivalence...
User equivalence check passed for user "grid"
Checking node connectivity...
Checking hosts config file...
Verification of the hosts config file successful
Check: Node connectivity for interface "eth0"
Node connectivity passed for interface "eth0"
Node connectivity check passed
Checking cluster integrity...
Cluster integrity check passed
Checking CRS integrity...
CRS integrity check passed
Checking shared resources...
Checking CRS home location...
The location "/g01/11.2.0/grid" is not shared but is present/creatable on all nodes
Shared resources check for node addition passed
Checking node connectivity...
Checking hosts config file...
Verification of the hosts config file successful
Check: Node connectivity for interface "eth0"
Node connectivity passed for interface "eth0"
Check: Node connectivity for interface "eth1"
Node connectivity passed for interface "eth1"
Node connectivity check passed
Checking node application existence...
Checking existence of VIP node application (required)
VIP node application check passed
Checking existence of NETWORK node application (required)
NETWORK node application check passed
Checking existence of GSD node application (optional)
GSD node application is offline on nodes "vrh3,vrh2,vrh1"
Checking existence of ONS node application (optional)
ONS node application check passed
Checking Single Client Access Name (SCAN)...
Checking TCP connectivity to SCAN Listeners...
TCP connectivity to SCAN Listeners exists on all cluster nodes
Checking name resolution setup for "vrh.cluster.oracle.com"...
ERROR:
PRVF-4664 : Found inconsistent name resolution entries for SCAN name "vrh.cluster.oracle.com"
ERROR:
PRVF-4657 : Name resolution setup check for "vrh.cluster.oracle.com" (IP address: 192.168.1.190) failed
ERROR:
PRVF-4664 : Found inconsistent name resolution entries for SCAN name "vrh.cluster.oracle.com"
Verification of SCAN VIP and Listener setup failed
User "grid" is not part of "root" group. Check passed
Checking if Clusterware is installed on all nodes...
Check of Clusterware install passed
Checking if CTSS Resource is running on all nodes...
CTSS resource check passed
Querying CTSS for time offset on all nodes...
Query of CTSS for time offset passed
Check CTSS state started...
CTSS is in Active state. Proceeding with check of clock time offsets on all nodes...
Check of clock time offsets passed
Oracle Cluster Time Synchronization Services check passed
Post-check for node addition was successful.
