Steps and Methods for Adding a New Node to an Oracle RAC Cluster (Official Procedure Plus My Own Testing)
Extending the Oracle Grid Infrastructure Home to the New Node
Now that the new node has been configured to support Oracle Clusterware, you use Oracle Universal Installer (OUI) to
add a Grid home to the node being added to your cluster.
This section assumes that you are adding a node named racnode3 and that you have successfully installed Oracle Clusterware on racnode1 in a nonshared home,
where Grid_home represents the successfully installed Oracle Clusterware home.
To extend the Oracle Grid Infrastructure for a cluster home to include the new node:
Verify the new node has been properly prepared for an Oracle Clusterware installation by running the following CLUVFY command on the racnode1 node:
cluvfy stage -pre nodeadd -n racnode3 -verbose
In this test environment, the equivalent command run as the grid user (node names here are racdb1/racdb3) is: ./runcluvfy.sh stage -pre nodeadd -n racdb3 -verbose
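If any pre-check fails, 11.2 cluvfy can generate a fixup script for the OS settings it knows how to correct. A minimal sketch, assuming the standard -fixup option of this release (run the generated runfixup.sh as root on the affected node when prompted):
./runcluvfy.sh stage -pre nodeadd -n racdb3 -fixup -verbose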
=========================================================================================================================================
Test output:
[grid@racdb1 grid]$ ./runcluvfy.sh stage -pre nodeadd -n racdb3 -verbose
Performing pre-checks for node addition
Checking node reachability...
Check: Node reachability from node "racdb1"
Destination Node Reachable?
------------------------------------ ------------------------
racdb3 yes
Result: Node reachability check passed from node "racdb1"
Checking user equivalence...
Check: User equivalence for user "grid"
Node Name Status
------------------------------------ ------------------------
racdb3 passed
Result: User equivalence check passed for user "grid"
Checking CRS integrity...
Clusterware version consistency passed
The Oracle Clusterware is healthy on node "racdb1"
The Oracle Clusterware is healthy on node "racdb2"
CRS integrity check passed
Checking shared resources...
Checking CRS home location...
"/u01/app/11.2.0/grid" is shared
Result: Shared resources check for node addition passed
Checking node connectivity...
Checking hosts config file...
Node Name Status
------------------------------------ ------------------------
racdb1 passed
racdb2 passed
racdb3 passed
Verification of the hosts config file successful
Interface information for node "racdb1"
Name IP Address Subnet Gateway Def. Gateway HW Address MTU
------ --------------- --------------- --------------- --------------- ----------------- ------
eth0 192.168.16.45 192.168.16.0 0.0.0.0 192.168.16.2 00:0C:29:E6:B7:6C 1500
eth0 192.168.16.55 192.168.16.0 0.0.0.0 192.168.16.2 00:0C:29:E6:B7:6C 1500
eth1 10.10.16.45 10.10.16.0 0.0.0.0 192.168.16.2 00:0C:29:E6:B7:76 1500
eth1 169.254.98.233 169.254.0.0 0.0.0.0 192.168.16.2 00:0C:29:E6:B7:76 1500
Interface information for node "racdb2"
Name IP Address Subnet Gateway Def. Gateway HW Address MTU
------ --------------- --------------- --------------- --------------- ----------------- ------
eth0 192.168.16.46 192.168.16.0 0.0.0.0 192.168.17.2 00:0C:29:DD:87:AE 1500
eth0 192.168.16.65 192.168.16.0 0.0.0.0 192.168.17.2 00:0C:29:DD:87:AE 1500
eth0 192.168.16.56 192.168.16.0 0.0.0.0 192.168.17.2 00:0C:29:DD:87:AE 1500
eth1 10.10.16.46 10.10.16.0 0.0.0.0 192.168.17.2 00:0C:29:DD:87:B8 1500
eth1 169.254.184.88 169.254.0.0 0.0.0.0 192.168.17.2 00:0C:29:DD:87:B8 1500
eth2 192.168.17.46 192.168.17.0 0.0.0.0 192.168.17.2 00:50:56:29:1D:1E 1500
Interface information for node "racdb3"
Name IP Address Subnet Gateway Def. Gateway HW Address MTU
------ --------------- --------------- --------------- --------------- ----------------- ------
eth0 192.168.16.47 192.168.16.0 0.0.0.0 192.168.16.2 00:50:56:3A:7D:78 1500
eth1 10.10.16.47 10.10.16.0 0.0.0.0 192.168.16.2 00:0C:29:15:F0:06 1500
Check: Node connectivity for interface "eth0"
Source Destination Connected?
------------------------------ ------------------------------ ----------------
racdb1[192.168.16.45] racdb1[192.168.16.55] yes
racdb1[192.168.16.45] racdb2[192.168.16.46] yes
racdb1[192.168.16.45] racdb2[192.168.16.65] yes
racdb1[192.168.16.45] racdb2[192.168.16.56] yes
racdb1[192.168.16.45] racdb3[192.168.16.47] yes
racdb1[192.168.16.55] racdb2[192.168.16.46] yes
racdb1[192.168.16.55] racdb2[192.168.16.65] yes
racdb1[192.168.16.55] racdb2[192.168.16.56] yes
racdb1[192.168.16.55] racdb3[192.168.16.47] yes
racdb2[192.168.16.46] racdb2[192.168.16.65] yes
racdb2[192.168.16.46] racdb2[192.168.16.56] yes
racdb2[192.168.16.46] racdb3[192.168.16.47] yes
racdb2[192.168.16.65] racdb2[192.168.16.56] yes
racdb2[192.168.16.65] racdb3[192.168.16.47] yes
racdb2[192.168.16.56] racdb3[192.168.16.47] yes
Result: Node connectivity passed for interface "eth0"
Check: TCP connectivity of subnet "192.168.16.0"
Source Destination Connected?
------------------------------ ------------------------------ ----------------
racdb1:192.168.16.45 racdb1:192.168.16.55 passed
racdb1:192.168.16.45 racdb2:192.168.16.46 passed
racdb1:192.168.16.45 racdb2:192.168.16.65 passed
racdb1:192.168.16.45 racdb2:192.168.16.56 passed
racdb1:192.168.16.45 racdb3:192.168.16.47 passed
Result: TCP connectivity check passed for subnet "192.168.16.0"
Check: Node connectivity for interface "eth1"
Source Destination Connected?
------------------------------ ------------------------------ ----------------
racdb1[10.10.16.45] racdb2[10.10.16.46] yes
racdb1[10.10.16.45] racdb3[10.10.16.47] yes
racdb2[10.10.16.46] racdb3[10.10.16.47] yes
Result: Node connectivity passed for interface "eth1"
Check: TCP connectivity of subnet "10.10.16.0"
Source Destination Connected?
------------------------------ ------------------------------ ----------------
racdb1:10.10.16.45 racdb2:10.10.16.46 passed
racdb1:10.10.16.45 racdb3:10.10.16.47 passed
Result: TCP connectivity check passed for subnet "10.10.16.0"
Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.168.16.0".
Subnet mask consistency check passed for subnet "10.10.16.0".
Subnet mask consistency check passed.
Result: Node connectivity check passed
Checking multicast communication...
Checking subnet "192.168.16.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "192.168.16.0" for multicast communication with multicast group "230.0.1.0" passed.
Checking subnet "10.10.16.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "10.10.16.0" for multicast communication with multicast group "230.0.1.0" passed.
Check of multicast communication passed.
Check: Total memory
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
racdb3 1.8179GB (1906252.0KB) 1.5GB (1572864.0KB) passed
racdb1 1.8179GB (1906252.0KB) 1.5GB (1572864.0KB) passed
Result: Total memory check passed
Check: Available memory
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
racdb3 1.6644GB (1745300.0KB) 50MB (51200.0KB) passed
racdb1 710.5117MB (727564.0KB) 50MB (51200.0KB) passed
Result: Available memory check passed
Check: Swap space
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
racdb3 3GB (3145724.0KB) 2.7269GB (2859378.0KB) passed
racdb1 3GB (3145724.0KB) 2.7269GB (2859378.0KB) passed
Result: Swap space check passed
Check: Free disk space for "racdb3:/u01/app/11.2.0/grid,racdb3:/tmp"
Path Node Name Mount point Available Required Status
---------------- ------------ ------------ ------------ ------------ ------------
/u01/app/11.2.0/grid racdb3 / 54.6299GB 7.5GB passed
/tmp racdb3 / 54.6299GB 7.5GB passed
Result: Free disk space check passed for "racdb3:/u01/app/11.2.0/grid,racdb3:/tmp"
Check: Free disk space for "racdb1:/u01/app/11.2.0/grid,racdb1:/tmp"
Path Node Name Mount point Available Required Status
---------------- ------------ ------------ ------------ ------------ ------------
/u01/app/11.2.0/grid racdb1 / 19.2575GB 7.5GB passed
/tmp racdb1 / 19.2575GB 7.5GB passed
Result: Free disk space check passed for "racdb1:/u01/app/11.2.0/grid,racdb1:/tmp"
Check: User existence for "grid"
Node Name Status Comment
------------ ------------------------ ------------------------
racdb3 passed exists(501)
racdb1 passed exists(501)
Checking for multiple users with UID value 501
Result: Check for multiple users with UID value 501 passed
Result: User existence check passed for "grid"
Check: Run level
Node Name run level Required Status
------------ ------------------------ ------------------------ ----------
racdb3 3 3,5 passed
racdb1 3 3,5 passed
Result: Run level check passed
Check: Hard limits for "maximum open file descriptors"
Node Name Type Available Required Status
---------------- ------------ ------------ ------------ ----------------
racdb1 hard 65536 65536 passed
racdb3 hard 65536 65536 passed
Result: Hard limits check passed for "maximum open file descriptors"
Check: Soft limits for "maximum open file descriptors"
Node Name Type Available Required Status
---------------- ------------ ------------ ------------ ----------------
racdb1 soft 1024 1024 passed
racdb3 soft 1024 1024 passed
Result: Soft limits check passed for "maximum open file descriptors"
Check: Hard limits for "maximum user processes"
Node Name Type Available Required Status
---------------- ------------ ------------ ------------ ----------------
racdb1 hard 16384 16384 passed
racdb3 hard 16384 16384 passed
Result: Hard limits check passed for "maximum user processes"
Check: Soft limits for "maximum user processes"
Node Name Type Available Required Status
---------------- ------------ ------------ ------------ ----------------
racdb1 soft 2047 2047 passed
racdb3 soft 2047 2047 passed
Result: Soft limits check passed for "maximum user processes"
Check: System architecture
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
racdb3 x86_64 x86_64 passed
racdb1 x86_64 x86_64 passed
Result: System architecture check passed
Check: Kernel version
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
racdb3 2.6.32-696.el6.x86_64 2.6.9 passed
racdb1 2.6.32-696.el6.x86_64 2.6.9 passed
Result: Kernel version check passed
Check: Kernel parameter for "semmsl"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
racdb1 250 250 250 passed
racdb3 250 250 250 passed
Result: Kernel parameter check passed for "semmsl"
Check: Kernel parameter for "semmns"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
racdb1 32000 32000 32000 passed
racdb3 32000 32000 32000 passed
Result: Kernel parameter check passed for "semmns"
Check: Kernel parameter for "semopm"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
racdb1 100 100 100 passed
racdb3 100 100 100 passed
Result: Kernel parameter check passed for "semopm"
Check: Kernel parameter for "semmni"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
racdb1 128 128 128 passed
racdb3 128 128 128 passed
Result: Kernel parameter check passed for "semmni"
Check: Kernel parameter for "shmmax"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
racdb1 68719476736 68719476736 976001024 passed
racdb3 68719476736 68719476736 976001024 passed
Result: Kernel parameter check passed for "shmmax"
Check: Kernel parameter for "shmmni"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
racdb1 4096 4096 4096 passed
racdb3 4096 4096 4096 passed
Result: Kernel parameter check passed for "shmmni"
Check: Kernel parameter for "shmall"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
racdb1 2147483648 2147483648 2097152 passed
racdb3 2147483648 2147483648 2097152 passed
Result: Kernel parameter check passed for "shmall"
Check: Kernel parameter for "file-max"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
racdb1 6815744 6815744 6815744 passed
racdb3 6815744 6815744 6815744 passed
Result: Kernel parameter check passed for "file-max"
Check: Kernel parameter for "ip_local_port_range"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
racdb1 between 9000.0 & 65500.0 between 9000.0 & 65500.0 between 9000.0 & 65500.0 passed
racdb3 between 9000.0 & 65500.0 between 9000.0 & 65500.0 between 9000.0 & 65500.0 passed
Result: Kernel parameter check passed for "ip_local_port_range"
Check: Kernel parameter for "rmem_default"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
racdb1 262144 262144 262144 passed
racdb3 262144 262144 262144 passed
Result: Kernel parameter check passed for "rmem_default"
Check: Kernel parameter for "rmem_max"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
racdb1 4194304 4194304 4194304 passed
racdb3 4194304 4194304 4194304 passed
Result: Kernel parameter check passed for "rmem_max"
Check: Kernel parameter for "wmem_default"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
racdb1 262144 262144 262144 passed
racdb3 262144 262144 262144 passed
Result: Kernel parameter check passed for "wmem_default"
Check: Kernel parameter for "wmem_max"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
racdb1 1048586 1048586 1048576 passed
racdb3 1048586 1048586 1048576 passed
Result: Kernel parameter check passed for "wmem_max"
Check: Kernel parameter for "aio-max-nr"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
racdb1 1048576 1048576 1048576 passed
racdb3 1048576 1048576 1048576 passed
Result: Kernel parameter check passed for "aio-max-nr"
Check: Package existence for "make"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
racdb3 make-3.81-23.el6 make-3.80 passed
racdb1 make-3.81-23.el6 make-3.80 passed
Result: Package existence check passed for "make"
Check: Package existence for "binutils"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
racdb3 binutils-2.20.51.0.2-5.46.el6 binutils-2.15.92.0.2 passed
racdb1 binutils-2.20.51.0.2-5.46.el6 binutils-2.15.92.0.2 passed
Result: Package existence check passed for "binutils"
Check: Package existence for "gcc(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
racdb3 gcc(x86_64)-4.4.7-18.el6 gcc(x86_64)-3.4.6 passed
racdb1 gcc(x86_64)-4.4.7-18.el6 gcc(x86_64)-3.4.6 passed
Result: Package existence check passed for "gcc(x86_64)"
Check: Package existence for "libaio(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
racdb3 libaio(x86_64)-0.3.107-10.el6 libaio(x86_64)-0.3.105 passed
racdb1 libaio(x86_64)-0.3.107-10.el6 libaio(x86_64)-0.3.105 passed
Result: Package existence check passed for "libaio(x86_64)"
Check: Package existence for "glibc(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
racdb3 glibc(x86_64)-2.12-1.209.el6 glibc(x86_64)-2.3.4-2.41 passed
racdb1 glibc(x86_64)-2.12-1.209.el6 glibc(x86_64)-2.3.4-2.41 passed
Result: Package existence check passed for "glibc(x86_64)"
Check: Package existence for "compat-libstdc++-33(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
racdb3 compat-libstdc++-33(x86_64)-3.2.3-69.el6 compat-libstdc++-33(x86_64)-3.2.3 passed
racdb1 compat-libstdc++-33(x86_64)-3.2.3-69.el6 compat-libstdc++-33(x86_64)-3.2.3 passed
Result: Package existence check passed for "compat-libstdc++-33(x86_64)"
Check: Package existence for "elfutils-libelf(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
racdb3 elfutils-libelf(x86_64)-0.164-2.el6 elfutils-libelf(x86_64)-0.97 passed
racdb1 elfutils-libelf(x86_64)-0.164-2.el6 elfutils-libelf(x86_64)-0.97 passed
Result: Package existence check passed for "elfutils-libelf(x86_64)"
Check: Package existence for "elfutils-libelf-devel"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
racdb3 elfutils-libelf-devel-0.164-2.el6 elfutils-libelf-devel-0.97 passed
racdb1 elfutils-libelf-devel-0.164-2.el6 elfutils-libelf-devel-0.97 passed
Result: Package existence check passed for "elfutils-libelf-devel"
Check: Package existence for "glibc-common"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
racdb3 glibc-common-2.12-1.209.el6 glibc-common-2.3.4 passed
racdb1 glibc-common-2.12-1.209.el6 glibc-common-2.3.4 passed
Result: Package existence check passed for "glibc-common"
Check: Package existence for "glibc-devel(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
racdb3 glibc-devel(x86_64)-2.12-1.209.el6 glibc-devel(x86_64)-2.3.4 passed
racdb1 glibc-devel(x86_64)-2.12-1.209.el6 glibc-devel(x86_64)-2.3.4 passed
Result: Package existence check passed for "glibc-devel(x86_64)"
Check: Package existence for "glibc-headers"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
racdb3 glibc-headers-2.12-1.209.el6 glibc-headers-2.3.4 passed
racdb1 glibc-headers-2.12-1.209.el6 glibc-headers-2.3.4 passed
Result: Package existence check passed for "glibc-headers"
Check: Package existence for "gcc-c++(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
racdb3 gcc-c++(x86_64)-4.4.7-18.el6 gcc-c++(x86_64)-3.4.6 passed
racdb1 gcc-c++(x86_64)-4.4.7-18.el6 gcc-c++(x86_64)-3.4.6 passed
Result: Package existence check passed for "gcc-c++(x86_64)"
Check: Package existence for "libaio-devel(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
racdb3 libaio-devel(x86_64)-0.3.107-10.el6 libaio-devel(x86_64)-0.3.105 passed
racdb1 libaio-devel(x86_64)-0.3.107-10.el6 libaio-devel(x86_64)-0.3.105 passed
Result: Package existence check passed for "libaio-devel(x86_64)"
Check: Package existence for "libgcc(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
racdb3 libgcc(x86_64)-4.4.7-18.el6 libgcc(x86_64)-3.4.6 passed
racdb1 libgcc(x86_64)-4.4.7-18.el6 libgcc(x86_64)-3.4.6 passed
Result: Package existence check passed for "libgcc(x86_64)"
Check: Package existence for "libstdc++(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
racdb3 libstdc++(x86_64)-4.4.7-18.el6 libstdc++(x86_64)-3.4.6 passed
racdb1 libstdc++(x86_64)-4.4.7-18.el6 libstdc++(x86_64)-3.4.6 passed
Result: Package existence check passed for "libstdc++(x86_64)"
Check: Package existence for "libstdc++-devel(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
racdb3 libstdc++-devel(x86_64)-4.4.7-18.el6 libstdc++-devel(x86_64)-3.4.6 passed
racdb1 libstdc++-devel(x86_64)-4.4.7-18.el6 libstdc++-devel(x86_64)-3.4.6 passed
Result: Package existence check passed for "libstdc++-devel(x86_64)"
Check: Package existence for "sysstat"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
racdb3 sysstat-9.0.4-33.el6 sysstat-5.0.5 passed
racdb1 sysstat-9.0.4-33.el6 sysstat-5.0.5 passed
Result: Package existence check passed for "sysstat"
Check: Package existence for "pdksh"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
racdb3 pdksh-5.2.14-37.el5_8.1 pdksh-5.2.14 passed
racdb1 pdksh-5.2.14-37.el5_8.1 pdksh-5.2.14 passed
Result: Package existence check passed for "pdksh"
Check: Package existence for "expat(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
racdb3 expat(x86_64)-2.0.1-13.el6_8 expat(x86_64)-1.95.7 passed
racdb1 expat(x86_64)-2.0.1-13.el6_8 expat(x86_64)-1.95.7 passed
Result: Package existence check passed for "expat(x86_64)"
Checking for multiple users with UID value 0
Result: Check for multiple users with UID value 0 passed
Check: Current group ID
Result: Current group ID check passed
Starting check for consistency of primary group of root user
Node Name Status
------------------------------------ ------------------------
racdb3 passed
racdb1 passed
Check for consistency of root user's primary group passed
Checking OCR integrity...
OCR integrity check passed
Checking Oracle Cluster Voting Disk configuration...
Oracle Cluster Voting Disk configuration check passed
Check: Time zone consistency
Result: Time zone consistency check passed
Starting Clock synchronization checks using Network Time Protocol(NTP)...
NTP Configuration file check started...
Network Time Protocol(NTP) configuration file not found on any of the nodes. Oracle Cluster Time Synchronization Service(CTSS) can be used instead of NTP for time synchronization on the cluster nodes
No NTP Daemons or Services were found to be running
Result: Clock synchronization check using Network Time Protocol(NTP) passed
Checking to make sure user "grid" is not in "root" group
Node Name Status Comment
------------ ------------------------ ------------------------
racdb3 passed does not exist
racdb1 passed does not exist
Result: User "grid" is not part of "root" group. Check passed
Checking consistency of file "/etc/resolv.conf" across nodes
Checking the file "/etc/resolv.conf" to make sure only one of domain and search entries is defined
File "/etc/resolv.conf" does not have both domain and search entries defined
Checking if domain entry in file "/etc/resolv.conf" is consistent across the nodes...
domain entry in file "/etc/resolv.conf" is consistent across nodes
Checking if search entry in file "/etc/resolv.conf" is consistent across the nodes...
search entry in file "/etc/resolv.conf" is consistent across nodes
Checking DNS response time for an unreachable node
Node Name Status
------------------------------------ ------------------------
racdb1 passed
racdb3 passed
The DNS response time for an unreachable node is within acceptable limit on all nodes
File "/etc/resolv.conf" is consistent across nodes
Checking integrity of name service switch configuration file "/etc/nsswitch.conf" ...
Checking if "hosts" entry in file "/etc/nsswitch.conf" is consistent across nodes...
Checking file "/etc/nsswitch.conf" to make sure that only one "hosts" entry is defined
More than one "hosts" entry does not exist in any "/etc/nsswitch.conf" file
Check for integrity of name service switch configuration file "/etc/nsswitch.conf" passed
Pre-check for node addition was successful.
[grid@racdb1 grid]$
=========================================================================================================================================
As the software owner of the Oracle Grid Infrastructure for a cluster installation (the grid user in this test) on racnode1,
go to Grid_home/oui/bin and run the addNode.sh script in silent mode:
If you are using Grid Naming Service (GNS):
./addNode.sh -silent "CLUSTER_NEW_NODES={racnode3}"
If you are not using Grid Naming Service (GNS):
On node 1, as the grid user, run the following:
cd /u01/app/11.2.0/grid/oui/bin
./addNode.sh -silent "CLUSTER_NEW_NODES={racdb3}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={racdb3-vip}"
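If the pre-checks embedded in addNode.sh fail on conditions you have already verified and accepted (common in test environments), 11.2 honors an environment variable that skips them. A hedged sketch; it was not needed in this run because all checks passed:
export IGNORE_PREADDNODE_CHECKS=Y
./addNode.sh -silent "CLUSTER_NEW_NODES={racdb3}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={racdb3-vip}"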
======================================================================================================================================
Test output:
[grid@racdb1 bin]$ pwd
/u01/app/11.2.0/grid/oui/bin
[grid@racdb1 bin]$ ls
addLangs.sh addNode.sh attachHome.sh detachHome.sh filesList.bat filesList.properties filesList.sh lsnodes resource runConfig.sh runInstaller runInstaller.sh runSSHSetup.sh
[grid@racdb1 bin]$ /u01/app/11.2.0/grid/oui/bin/addNode.sh -silent "CLUSTER_NEW_NODES={racdb3}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={racdb3-vip}"
Performing pre-checks for node addition
Checking node reachability...
Node reachability check passed from node "racdb1"
Checking user equivalence...
User equivalence check passed for user "grid"
Checking CRS integrity...
Clusterware version consistency passed
CRS integrity check passed
Checking shared resources...
Checking CRS home location...
"/u01/app/11.2.0/grid" is shared
Shared resources check for node addition passed
Checking node connectivity...
Checking hosts config file...
Verification of the hosts config file successful
Check: Node connectivity for interface "eth0"
Node connectivity passed for interface "eth0"
TCP connectivity check passed for subnet "192.168.16.0"
Check: Node connectivity for interface "eth1"
Node connectivity passed for interface "eth1"
TCP connectivity check passed for subnet "10.10.16.0"
Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.168.16.0".
Subnet mask consistency check passed for subnet "10.10.16.0".
Subnet mask consistency check passed.
Node connectivity check passed
Checking multicast communication...
Checking subnet "192.168.16.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "192.168.16.0" for multicast communication with multicast group "230.0.1.0" passed.
Checking subnet "10.10.16.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "10.10.16.0" for multicast communication with multicast group "230.0.1.0" passed.
Check of multicast communication passed.
Total memory check passed
Available memory check passed
Swap space check passed
Free disk space check passed for "racdb3:/u01/app/11.2.0/grid,racdb3:/tmp"
Free disk space check passed for "racdb1:/u01/app/11.2.0/grid,racdb1:/tmp"
Check for multiple users with UID value 501 passed
User existence check passed for "grid"
Run level check passed
Hard limits check passed for "maximum open file descriptors"
Soft limits check passed for "maximum open file descriptors"
Hard limits check passed for "maximum user processes"
Soft limits check passed for "maximum user processes"
System architecture check passed
Kernel version check passed
Kernel parameter check passed for "semmsl"
Kernel parameter check passed for "semmns"
Kernel parameter check passed for "semopm"
Kernel parameter check passed for "semmni"
Kernel parameter check passed for "shmmax"
Kernel parameter check passed for "shmmni"
Kernel parameter check passed for "shmall"
Kernel parameter check passed for "file-max"
Kernel parameter check passed for "ip_local_port_range"
Kernel parameter check passed for "rmem_default"
Kernel parameter check passed for "rmem_max"
Kernel parameter check passed for "wmem_default"
Kernel parameter check passed for "wmem_max"
Kernel parameter check passed for "aio-max-nr"
Package existence check passed for "make"
Package existence check passed for "binutils"
Package existence check passed for "gcc(x86_64)"
Package existence check passed for "libaio(x86_64)"
Package existence check passed for "glibc(x86_64)"
Package existence check passed for "compat-libstdc++-33(x86_64)"
Package existence check passed for "elfutils-libelf(x86_64)"
Package existence check passed for "elfutils-libelf-devel"
Package existence check passed for "glibc-common"
Package existence check passed for "glibc-devel(x86_64)"
Package existence check passed for "glibc-headers"
Package existence check passed for "gcc-c++(x86_64)"
Package existence check passed for "libaio-devel(x86_64)"
Package existence check passed for "libgcc(x86_64)"
Package existence check passed for "libstdc++(x86_64)"
Package existence check passed for "libstdc++-devel(x86_64)"
Package existence check passed for "sysstat"
Package existence check passed for "pdksh"
Package existence check passed for "expat(x86_64)"
Check for multiple users with UID value 0 passed
Current group ID check passed
Starting check for consistency of primary group of root user
Check for consistency of root user's primary group passed
Checking OCR integrity...
OCR integrity check passed
Checking Oracle Cluster Voting Disk configuration...
Oracle Cluster Voting Disk configuration check passed
Time zone consistency check passed
Starting Clock synchronization checks using Network Time Protocol(NTP)...
NTP Configuration file check started...
No NTP Daemons or Services were found to be running
Clock synchronization check using Network Time Protocol(NTP) passed
User "grid" is not part of "root" group. Check passed
Checking consistency of file "/etc/resolv.conf" across nodes
File "/etc/resolv.conf" does not have both domain and search entries defined
domain entry in file "/etc/resolv.conf" is consistent across nodes
search entry in file "/etc/resolv.conf" is consistent across nodes
The DNS response time for an unreachable node is within acceptable limit on all nodes
File "/etc/resolv.conf" is consistent across nodes
Checking integrity of name service switch configuration file "/etc/nsswitch.conf" ...
Check for integrity of name service switch configuration file "/etc/nsswitch.conf" passed
Checking VIP configuration.
Checking VIP Subnet configuration.
Check for VIP Subnet configuration passed.
Checking VIP reachability
Check for VIP reachability passed.
Pre-check for node addition was successful.
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 2372 MB Passed
Oracle Universal Installer, Version 11.2.0.4.0 Production
Copyright (C) 1999, 2013, Oracle. All rights reserved.
Performing tests to see whether nodes racdb2,racdb3 are available
............................................................... 100% Done.
.
-----------------------------------------------------------------------------
Cluster Node Addition Summary
Global Settings
Source: /u01/app/11.2.0/grid
New Nodes
Space Requirements
New Nodes
racdb3
/: Required 8.76GB : Available 50.87GB
Installed Products
Product Names
Oracle Grid Infrastructure 11g 11.2.0.4.0
Java Development Kit 1.5.0.51.10
Installer SDK Component 11.2.0.4.0
Oracle One-Off Patch Installer 11.2.0.3.4
Oracle Universal Installer 11.2.0.4.0
Oracle RAC Required Support Files-HAS 11.2.0.4.0
Oracle USM Deconfiguration 11.2.0.4.0
Oracle Configuration Manager Deconfiguration 10.3.1.0.0
Enterprise Manager Common Core Files 10.2.0.4.5
Oracle DBCA Deconfiguration 11.2.0.4.0
Oracle RAC Deconfiguration 11.2.0.4.0
Oracle Quality of Service Management (Server) 11.2.0.4.0
Installation Plugin Files 11.2.0.4.0
Universal Storage Manager Files 11.2.0.4.0
Oracle Text Required Support Files 11.2.0.4.0
Automatic Storage Management Assistant 11.2.0.4.0
Oracle Database 11g Multimedia Files 11.2.0.4.0
Oracle Multimedia Java Advanced Imaging 11.2.0.4.0
Oracle Globalization Support 11.2.0.4.0
Oracle Multimedia Locator RDBMS Files 11.2.0.4.0
Oracle Core Required Support Files 11.2.0.4.0
Bali Share 1.1.18.0.0
Oracle Database Deconfiguration 11.2.0.4.0
Oracle Quality of Service Management (Client) 11.2.0.4.0
Expat libraries 2.0.1.0.1
Oracle Containers for Java 11.2.0.4.0
Perl Modules 5.10.0.0.1
Secure Socket Layer 11.2.0.4.0
Oracle JDBC/OCI Instant Client 11.2.0.4.0
Oracle Multimedia Client Option 11.2.0.4.0
LDAP Required Support Files 11.2.0.4.0
Character Set Migration Utility 11.2.0.4.0
Perl Interpreter 5.10.0.0.2
PL/SQL Embedded Gateway 11.2.0.4.0
OLAP SQL Scripts 11.2.0.4.0
Database SQL Scripts 11.2.0.4.0
Oracle Extended Windowing Toolkit 3.4.47.0.0
SSL Required Support Files for InstantClient 11.2.0.4.0
SQL*Plus Files for Instant Client 11.2.0.4.0
Oracle Net Required Support Files 11.2.0.4.0
Oracle Database User Interface 2.2.13.0.0
RDBMS Required Support Files for Instant Client 11.2.0.4.0
RDBMS Required Support Files Runtime 11.2.0.4.0
XML Parser for Java 11.2.0.4.0
Oracle Security Developer Tools 11.2.0.4.0
Oracle Wallet Manager 11.2.0.4.0
Enterprise Manager plugin Common Files 11.2.0.4.0
Platform Required Support Files 11.2.0.4.0
Oracle JFC Extended Windowing Toolkit 4.2.36.0.0
RDBMS Required Support Files 11.2.0.4.0
Oracle Ice Browser 5.2.3.6.0
Oracle Help For Java 4.2.9.0.0
Enterprise Manager Common Files 10.2.0.4.5
Deinstallation Tool 11.2.0.4.0
Oracle Java Client 11.2.0.4.0
Cluster Verification Utility Files 11.2.0.4.0
Oracle Notification Service (eONS) 11.2.0.4.0
Oracle LDAP administration 11.2.0.4.0
Cluster Verification Utility Common Files 11.2.0.4.0
Oracle Clusterware RDBMS Files 11.2.0.4.0
Oracle Locale Builder 11.2.0.4.0
Oracle Globalization Support 11.2.0.4.0
Buildtools Common Files 11.2.0.4.0
HAS Common Files 11.2.0.4.0
SQL*Plus Required Support Files 11.2.0.4.0
XDK Required Support Files 11.2.0.4.0
Agent Required Support Files 10.2.0.4.5
Parser Generator Required Support Files 11.2.0.4.0
Precompiler Required Support Files 11.2.0.4.0
Installation Common Files 11.2.0.4.0
Required Support Files 11.2.0.4.0
Oracle JDBC/THIN Interfaces 11.2.0.4.0
Oracle Multimedia Locator 11.2.0.4.0
Oracle Multimedia 11.2.0.4.0
Assistant Common Files 11.2.0.4.0
Oracle Net 11.2.0.4.0
PL/SQL 11.2.0.4.0
HAS Files for DB 11.2.0.4.0
Oracle Recovery Manager 11.2.0.4.0
Oracle Database Utilities 11.2.0.4.0
Oracle Notification Service 11.2.0.3.0
SQL*Plus 11.2.0.4.0
Oracle Netca Client 11.2.0.4.0
Oracle Advanced Security 11.2.0.4.0
Oracle JVM 11.2.0.4.0
Oracle Internet Directory Client 11.2.0.4.0
Oracle Net Listener 11.2.0.4.0
Cluster Ready Services Files 11.2.0.4.0
Oracle Database 11g 11.2.0.4.0
-----------------------------------------------------------------------------
Instantiating scripts for add node (Sunday, December 30, 2018 4:42:29 AM CST)
. 1% Done.
Instantiation of add node scripts complete
Copying to remote nodes (Sunday, December 30, 2018 4:42:32 AM CST)
............................................................................................... 96% Done.
Home copied to new nodes
Saving inventory on nodes (Sunday, December 30, 2018 4:46:32 AM CST)
. 100% Done.
Save inventory complete
WARNING:A new inventory has been created on one or more nodes in this session. However, it has not yet been registered as the central inventory of this system.
To register the new inventory please run the script at '/u01/app/oraInventory/orainstRoot.sh' with root privileges on nodes 'racdb3'.
If you do not register the inventory, you may not be able to update or patch the products you installed.
The following configuration scripts need to be executed as the "root" user in each new cluster node. Each script in the list below is followed by a list of nodes.
/u01/app/oraInventory/orainstRoot.sh #On nodes racdb3
/u01/app/11.2.0/grid/root.sh #On nodes racdb3
To execute the configuration scripts:
1. Open a terminal window
2. Log in as "root"
3. Run the scripts in each cluster node
The Cluster Node Addition of /u01/app/11.2.0/grid was successful.
Please check '/tmp/silentInstall.log' for more details.
[grid@racdb1 bin]$
=======================================================================================================
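Per the installer warning above, register the new inventory first by running orainstRoot.sh as root on racdb3 (its output is omitted here):
[root@racdb3 ~]# /u01/app/oraInventory/orainstRoot.sh
Then run root.sh on the new node, which is where the test below initially ran into trouble: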
[root@racdb3 lib64]# /u01/app/11.2.0/grid/root.sh
Performing root user operation for Oracle 11g
The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /u01/app/11.2.0/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
User ignored Prerequisites during installation
Installing Trace File Analyzer
Start of resource "ora.ctssd" failed
CRS-2672: Attempting to start 'ora.ctssd' on 'racdb3'
CRS-2674: Start of 'ora.ctssd' on 'racdb3' failed
CRS-4000: Command Start failed, or completed with errors.
Failed to start Oracle Grid Infrastructure stack
Failed to start Cluster Time Synchronisation Service - CTSS at /u01/app/11.2.0/grid/crs/install/crsconfig_lib.pm line 1295.
/u01/app/11.2.0/grid/perl/bin/perl -I/u01/app/11.2.0/grid/perl/lib -I/u01/app/11.2.0/grid/crs/install /u01/app/11.2.0/grid/crs/install/rootcrs.pl execution failed
[root@racdb3 lib64]# hwclock -w
[root@racdb3 lib64]# /u01/app/11.2.0/grid/root.sh
Performing root user operation for Oracle 11g
The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /u01/app/11.2.0/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
User ignored Prerequisites during installation
Installing Trace File Analyzer
Start of resource "ora.ctssd" failed
CRS-2672: Attempting to start 'ora.ctssd' on 'racdb3'
CRS-2674: Start of 'ora.ctssd' on 'racdb3' failed
CRS-4000: Command Start failed, or completed with errors.
Failed to start Oracle Grid Infrastructure stack
Failed to start Cluster Time Synchronisation Service - CTSS at /u01/app/11.2.0/grid/crs/install/crsconfig_lib.pm line 1295.
/u01/app/11.2.0/grid/perl/bin/perl -I/u01/app/11.2.0/grid/perl/lib -I/u01/app/11.2.0/grid/crs/install /u01/app/11.2.0/grid/crs/install/rootcrs.pl execution failed
========================================================================================================
Note: the errors above are caused by the system time on node 3 being out of sync with nodes 1 and 2. Synchronizing node 3's clock with nodes 1 and 2 resolves it, as follows:
[root@rac3 src]# date -s '2018-12-30 05:26:40'
Sat Mar 4 14:48:34 CST 2017
[root@rac3 src]# hwclock -w
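Before re-running root.sh, it is worth confirming that cluster time is now consistent. A hedged sketch using standard 11.2 tools (the crsctl check only becomes meaningful once the clusterware stack is running on the node being checked):
[grid@racdb1 ~]$ cluvfy comp clocksync -n all -verbose
[root@racdb3 ~]# /u01/app/11.2.0/grid/bin/crsctl check ctss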
=============================================================================================================================
[root@racdb3 lib64]# /u01/app/11.2.0/grid/crs/install/roothas.pl -deconfig -force -verbose
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4000: Command Stop failed, or completed with errors.
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4000: Command Delete failed, or completed with errors.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'racdb3'
CRS-2673: Attempting to stop 'ora.cssd' on 'racdb3'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'racdb3'
CRS-2677: Stop of 'ora.cssd' on 'racdb3' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'racdb3'
CRS-2677: Stop of 'ora.mdnsd' on 'racdb3' succeeded
CRS-2677: Stop of 'ora.gipcd' on 'racdb3' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'racdb3'
CRS-2677: Stop of 'ora.gpnpd' on 'racdb3' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'racdb3' has completed
CRS-4133: Oracle High Availability Services has been stopped.
Successfully deconfigured Oracle Restart stack
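Note that roothas.pl is the Oracle Restart (single-instance HA) deconfiguration script, which is why the output above reports "Oracle Restart stack"; it sufficed here because root.sh had failed before the node fully joined the cluster. The usual 11.2 cluster-side equivalent would be:
[root@racdb3 ~]# /u01/app/11.2.0/grid/crs/install/rootcrs.pl -deconfig -force -verbose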
========================================================================================================================
[root@racdb3 dev]# /u01/app/11.2.0/grid/root.sh
Performing root user operation for Oracle 11g
The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /u01/app/11.2.0/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
User ignored Prerequisites during installation
Installing Trace File Analyzer
OLR initialization - successful
Adding Clusterware entries to oracle-ohasd.conf
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node racdb1, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 11g Release 2.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Preparing packages for installation...
ls: cannot access /usr/sbin/smartctl: No such file or directory
/usr/sbin/smartctl not found.
error: %pre(cvuqdisk-1.0.9-1.x86_64) scriptlet failed, exit status 1
error: install: %pre scriptlet failed (2), skipping cvuqdisk-1.0.9-1
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
[root@racdb3 dev]#
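The cvuqdisk errors above come from the missing /usr/sbin/smartctl binary; the clusterware configuration itself still succeeded. A hedged cleanup, assuming smartctl is provided by the smartmontools package on this OS (the cvuqdisk RPM ships under Grid_home/cv/rpm):
[root@racdb3 ~]# yum install -y smartmontools
[root@racdb3 ~]# CVUQDISK_GRP=oinstall; export CVUQDISK_GRP
[root@racdb3 ~]# rpm -ivh /u01/app/11.2.0/grid/cv/rpm/cvuqdisk-1.0.9-1.rpm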
=========================================================================================
[root@racdb1 ~]# /u01/app/11.2.0/grid/bin/crsctl status res -t
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
ONLINE ONLINE racdb1
ONLINE ONLINE racdb2
ONLINE ONLINE racdb3
ora.LISTENER.lsnr
ONLINE ONLINE racdb1
ONLINE ONLINE racdb2
ONLINE ONLINE racdb3
ora.OCR.dg
ONLINE ONLINE racdb1
ONLINE ONLINE racdb2
ONLINE ONLINE racdb3
ora.asm
ONLINE ONLINE racdb1 Started
ONLINE ONLINE racdb2 Started
ONLINE ONLINE racdb3 Started
ora.gsd
OFFLINE OFFLINE racdb1
OFFLINE OFFLINE racdb2
OFFLINE OFFLINE racdb3
ora.net1.network
ONLINE ONLINE racdb1
ONLINE ONLINE racdb2
ONLINE ONLINE racdb3
ora.ons
ONLINE ONLINE racdb1
ONLINE ONLINE racdb2
ONLINE ONLINE racdb3
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE racdb2
ora.cvu
1 ONLINE ONLINE racdb2
ora.oc4j
1 ONLINE ONLINE racdb2
ora.prod.db
1 ONLINE ONLINE racdb1 Open
2 ONLINE ONLINE racdb2 Open
ora.prod.prodsrv.svc
1 ONLINE ONLINE racdb2
ora.prod.prodsrv2.svc
1 ONLINE ONLINE racdb2
ora.racdb.db
1 ONLINE ONLINE racdb1 Open
2 ONLINE ONLINE racdb2 Open
ora.racdb.service1.svc
1 ONLINE ONLINE racdb2
ora.racdb.test_srv.svc
1 ONLINE ONLINE racdb2
ora.racdb1.vip
1 ONLINE ONLINE racdb1
ora.racdb2.vip
1 ONLINE ONLINE racdb2
ora.racdb3.vip
1 ONLINE ONLINE racdb3
ora.scan1.vip
1 ONLINE ONLINE racdb2
[root@racdb1 ~]#
=======================================================================================================================================
When running this command, the curly braces ( { } ) are not optional and must be included or the command returns an error.
You can alternatively use a response file instead of placing all the arguments in the command line.
See Oracle Clusterware Administration and Deployment Guide for more information on using response files.
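A hedged sketch of the response-file form, assuming the standard OUI -responseFile mechanism applies to addNode.sh in this release (variable names as in the command-line form):
./addNode.sh -silent -responseFile /home/grid/addnode.rsp
where /home/grid/addnode.rsp is a hypothetical file containing the CLUSTER_NEW_NODES and CLUSTER_NEW_VIRTUAL_HOSTNAMES assignments.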
When the script finishes, run the root.sh script as the root user on the new node, racnode3, from the Oracle home directory on that node.
If you are not using Oracle Grid Naming Service (GNS), then you must add the name and address for racnode3 to DNS.
You should now have Oracle Clusterware running on the new node. To verify the installation of Oracle Clusterware on the new node,
you can run the following command on the newly configured node, racnode3:
$ cd /u01/app/11.2.0/grid/bin
$ ./cluvfy stage -post nodeadd -n racnode3 -verbose
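Two quick sanity checks from any node, using standard clusterware commands, should now show racdb3 as an active member with CRS, CSS, and EVM online:
$ olsnodes -n -s
$ crsctl check cluster -all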
Note:
Avoid changing host names after you complete the Oracle Clusterware installation, including adding or deleting domain qualifications.
Nodes with changed host names must be deleted from the cluster and added back with the new name.
See Also:
"Completing the Oracle Clusterware Configuration"
Oracle Real Application Clusters Administration and Deployment Guide for more information about adding and removing nodes from your cluster database
==========================================================================================================================================================
Next comes extending the Oracle home (the database software owned by the oracle user) to the new node.
Extending the Oracle RAC Home Directory
Now that you have extended the Grid home to the new node, you must extend the Oracle home on racnode1 to racnode3.
The following steps assume that you have completed the tasks described in the previous sections,
"Preparing the New Node" and "Extending the Oracle Grid Infrastructure Home to the New Node",
and that racnode3 is a member node of the cluster to which racnode1 belongs.
The procedure for adding an Oracle home to the new node is very similar to the procedure you just completed for extending the Grid home to the new node.
To extend the Oracle RAC installation to include the new node:
Ensure that you have successfully installed the Oracle RAC software on at least one node in your cluster environment. To use these procedures as shown,
replace Oracle_home with the location of your installed Oracle home directory.
Go to the Oracle_home/oui/bin directory on racnode1 and run the addNode.sh script in silent mode as shown in the following example:
On node 1, run the following commands as the oracle user:
$ cd /u01/app/oracle/product/11.2.0/dbhome_1/oui/bin
$ ./addNode.sh -silent "CLUSTER_NEW_NODES={racdb3}"
=================================================================================================
Test output:
[root@racdb1 ~]# su - oracle
[oracle@racdb1 ~]$ cd /u01/app/oracle/product/11.2.0/dbhome_1/oui/bin
[oracle@racdb1 bin]$ ls
addLangs.sh attachHome.sh filesList.bat filesList.sh resource runInstaller runSSHSetup.sh
addNode.sh detachHome.sh filesList.properties lsnodes runConfig.sh runInstaller.sh
[oracle@racdb1 bin]$ ./addNode.sh -silent "CLUSTER_NEW_NODES={racdb3}"
Performing pre-checks for node addition
Checking node reachability...
Node reachability check passed from node "racdb1"
Checking user equivalence...
User equivalence check passed for user "oracle"
WARNING:
Node "racdb3" already appears to be part of cluster
Pre-check for node addition was successful.
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 2555 MB Passed
Oracle Universal Installer, Version 11.2.0.4.0 Production
Copyright (C) 1999, 2013, Oracle. All rights reserved.
Performing tests to see whether nodes racdb2,racdb3 are available
............................................................... 100% Done.
..
-----------------------------------------------------------------------------
Cluster Node Addition Summary
Global Settings
Source: /u01/app/oracle/product/11.2.0/dbhome_1
New Nodes
Space Requirements
New Nodes
racdb3
/: Required 5.03GB : Available 44.30GB
Installed Products
Product Names
Oracle Database 11g 11.2.0.4.0
Java Development Kit 1.5.0.51.10
Installer SDK Component 11.2.0.4.0
Oracle One-Off Patch Installer 11.2.0.3.4
Oracle Universal Installer 11.2.0.4.0
Oracle USM Deconfiguration 11.2.0.4.0
Oracle Configuration Manager Deconfiguration 10.3.1.0.0
Oracle DBCA Deconfiguration 11.2.0.4.0
Oracle RAC Deconfiguration 11.2.0.4.0
Oracle Database Deconfiguration 11.2.0.4.0
Oracle Configuration Manager Client 10.3.2.1.0
Oracle Configuration Manager 10.3.8.1.0
Oracle ODBC Driverfor Instant Client 11.2.0.4.0
LDAP Required Support Files 11.2.0.4.0
SSL Required Support Files for InstantClient 11.2.0.4.0
Bali Share 1.1.18.0.0
Oracle Extended Windowing Toolkit 3.4.47.0.0
Oracle JFC Extended Windowing Toolkit 4.2.36.0.0
Oracle Real Application Testing 11.2.0.4.0
Oracle Database Vault J2EE Application 11.2.0.4.0
Oracle Label Security 11.2.0.4.0
Oracle Data Mining RDBMS Files 11.2.0.4.0
Oracle OLAP RDBMS Files 11.2.0.4.0
Oracle OLAP API 11.2.0.4.0
Platform Required Support Files 11.2.0.4.0
Oracle Database Vault option 11.2.0.4.0
Oracle RAC Required Support Files-HAS 11.2.0.4.0
SQL*Plus Required Support Files 11.2.0.4.0
Oracle Display Fonts 9.0.2.0.0
Oracle Ice Browser 5.2.3.6.0
Oracle JDBC Server Support Package 11.2.0.4.0
Oracle SQL Developer 11.2.0.4.0
Oracle Application Express 11.2.0.4.0
XDK Required Support Files 11.2.0.4.0
RDBMS Required Support Files for Instant Client 11.2.0.4.0
SQLJ Runtime 11.2.0.4.0
Database Workspace Manager 11.2.0.4.0
RDBMS Required Support Files Runtime 11.2.0.4.0
Oracle Globalization Support 11.2.0.4.0
Exadata Storage Server 11.2.0.1.0
Provisioning Advisor Framework 10.2.0.4.3
Enterprise Manager Database Plugin -- Repository Support 11.2.0.4.0
Enterprise Manager Repository Core Files 10.2.0.4.5
Enterprise Manager Database Plugin -- Agent Support 11.2.0.4.0
Enterprise Manager Grid Control Core Files 10.2.0.4.5
Enterprise Manager Common Core Files 10.2.0.4.5
Enterprise Manager Agent Core Files 10.2.0.4.5
RDBMS Required Support Files 11.2.0.4.0
regexp 2.1.9.0.0
Agent Required Support Files 10.2.0.4.5
Oracle 11g Warehouse Builder Required Files 11.2.0.4.0
Oracle Notification Service (eONS) 11.2.0.4.0
Oracle Text Required Support Files 11.2.0.4.0
Parser Generator Required Support Files 11.2.0.4.0
Oracle Database 11g Multimedia Files 11.2.0.4.0
Oracle Multimedia Java Advanced Imaging 11.2.0.4.0
Oracle Multimedia Annotator 11.2.0.4.0
Oracle JDBC/OCI Instant Client 11.2.0.4.0
Oracle Multimedia Locator RDBMS Files 11.2.0.4.0
Precompiler Required Support Files 11.2.0.4.0
Oracle Core Required Support Files 11.2.0.4.0
Sample Schema Data 11.2.0.4.0
Oracle Starter Database 11.2.0.4.0
Oracle Message Gateway Common Files 11.2.0.4.0
Oracle XML Query 11.2.0.4.0
XML Parser for Oracle JVM 11.2.0.4.0
Oracle Help For Java 4.2.9.0.0
Installation Plugin Files 11.2.0.4.0
Enterprise Manager Common Files 10.2.0.4.5
Expat libraries 2.0.1.0.1
Deinstallation Tool 11.2.0.4.0
Oracle Quality of Service Management (Client) 11.2.0.4.0
Perl Modules 5.10.0.0.1
JAccelerator (COMPANION) 11.2.0.4.0
Oracle Containers for Java 11.2.0.4.0
Perl Interpreter 5.10.0.0.2
Oracle Net Required Support Files 11.2.0.4.0
Secure Socket Layer 11.2.0.4.0
Oracle Universal Connection Pool 11.2.0.4.0
Oracle JDBC/THIN Interfaces 11.2.0.4.0
Oracle Multimedia Client Option 11.2.0.4.0
Oracle Java Client 11.2.0.4.0
Character Set Migration Utility 11.2.0.4.0
Oracle Code Editor 1.2.1.0.0I
PL/SQL Embedded Gateway 11.2.0.4.0
OLAP SQL Scripts 11.2.0.4.0
Database SQL Scripts 11.2.0.4.0
Oracle Locale Builder 11.2.0.4.0
Oracle Globalization Support 11.2.0.4.0
SQL*Plus Files for Instant Client 11.2.0.4.0
Required Support Files 11.2.0.4.0
Oracle Database User Interface 2.2.13.0.0
Oracle ODBC Driver 11.2.0.4.0
Oracle Notification Service 11.2.0.3.0
XML Parser for Java 11.2.0.4.0
Oracle Security Developer Tools 11.2.0.4.0
Oracle Wallet Manager 11.2.0.4.0
Cluster Verification Utility Common Files 11.2.0.4.0
Oracle Clusterware RDBMS Files 11.2.0.4.0
Oracle UIX 2.2.24.6.0
Enterprise Manager plugin Common Files 11.2.0.4.0
HAS Common Files 11.2.0.4.0
Precompiler Common Files 11.2.0.4.0
Installation Common Files 11.2.0.4.0
Oracle Help for the Web 2.0.14.0.0
Oracle LDAP administration 11.2.0.4.0
Buildtools Common Files 11.2.0.4.0
Assistant Common Files 11.2.0.4.0
Oracle Recovery Manager 11.2.0.4.0
PL/SQL 11.2.0.4.0
Generic Connectivity Common Files 11.2.0.4.0
Oracle Database Gateway for ODBC 11.2.0.4.0
Oracle Programmer 11.2.0.4.0
Oracle Database Utilities 11.2.0.4.0
Enterprise Manager Agent 10.2.0.4.5
SQL*Plus 11.2.0.4.0
Oracle Netca Client 11.2.0.4.0
Oracle Multimedia Locator 11.2.0.4.0
Oracle Call Interface (OCI) 11.2.0.4.0
Oracle Multimedia 11.2.0.4.0
Oracle Net 11.2.0.4.0
Oracle XML Development Kit 11.2.0.4.0
Oracle Internet Directory Client 11.2.0.4.0
Database Configuration and Upgrade Assistants 11.2.0.4.0
Oracle JVM 11.2.0.4.0
Oracle Advanced Security 11.2.0.4.0
Oracle Net Listener 11.2.0.4.0
Oracle Enterprise Manager Console DB 11.2.0.4.0
HAS Files for DB 11.2.0.4.0
Oracle Text 11.2.0.4.0
Oracle Net Services 11.2.0.4.0
Oracle Database 11g 11.2.0.4.0
Oracle OLAP 11.2.0.4.0
Oracle Spatial 11.2.0.4.0
Oracle Partitioning 11.2.0.4.0
Enterprise Edition Options 11.2.0.4.0
-----------------------------------------------------------------------------
Instantiating scripts for add node (Sunday, December 30, 2018 6:18:59 AM CST)
. 1% Done.
Instantiation of add node scripts complete
Copying to remote nodes (Sunday, December 30, 2018 6:19:09 AM CST)
............................................................................................... 96% Done.
Home copied to new nodes
Saving inventory on nodes (Sunday, December 30, 2018 6:24:18 AM CST)
. 100% Done.
Save inventory complete
WARNING:
The following configuration scripts need to be executed as the "root" user in each new cluster node. Each script in the list below is followed by a list of nodes.
/u01/app/oracle/product/11.2.0/dbhome_1/root.sh #On nodes racdb3
To execute the configuration scripts:
1. Open a terminal window
2. Log in as "root"
3. Run the scripts in each cluster node
The Cluster Node Addition of /u01/app/oracle/product/11.2.0/dbhome_1 was successful.
Please check '/tmp/silentInstall.log' for more details.
================================================================================================
When the addNode.sh script finishes, run the root.sh script as the root user on the new node, racdb3, from the Oracle home directory on that node.
================================================================================================
[root@racdb3 bin]# /u01/app/oracle/product/11.2.0/dbhome_1/root.sh
Performing root user operation for Oracle 11g
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/oracle/product/11.2.0/dbhome_1
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Finished product-specific root actions.
[root@racdb3 bin]#
================================================================================================================================
For policy-managed databases with Oracle Managed Files (OMF) enabled, no further actions are needed.
For a policy-managed database, when you add a new node to the cluster, it is placed in the Free pool by default.
If you increase the cardinality of the database server pool, then an Oracle RAC instance is added to the new node, racnode3,
and it is moved to the database server pool. No further action is necessary.
Add shared storage for the undo tablespace and redo log files.
If OMF is not enabled for your database, then you must manually add an undo tablespace and redo logs.
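For that manual (non-OMF) case, a hedged sketch of the usual statements; the tablespace name, thread number, group numbers, sizes, and the +DATA disk group are illustrative assumptions, not values from this test system:
SQL> CREATE UNDO TABLESPACE undotbs3 DATAFILE '+DATA' SIZE 500M AUTOEXTEND ON;
SQL> ALTER DATABASE ADD LOGFILE THREAD 3 GROUP 5 ('+DATA') SIZE 50M, GROUP 6 ('+DATA') SIZE 50M;
SQL> ALTER DATABASE ENABLE PUBLIC THREAD 3;
SQL> ALTER SYSTEM SET undo_tablespace='UNDOTBS3' SID='racdb3';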
If you have an administrator-managed database, then add a new instance on the new node as described in "Creating an Instance on the New Node".
If you followed the installation instructions in this guide, then your cluster database is an administrator-managed database
and stores the database files on Oracle Automatic Storage Management (Oracle ASM) with OMF enabled.
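Besides the DBCA flow referenced above, the new instance can be registered with Oracle Clusterware using srvctl once its undo, redo, and instance parameters exist. A hedged sketch for this test cluster, reusing the ora.racdb.db resource name seen earlier (the instance name racdb3 is an assumption):
$ srvctl add instance -d racdb -i racdb3 -n racdb3
$ srvctl start instance -d racdb -i racdb3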
After completing these steps, you should have an installed Oracle home on the new node.
See Also:
"Verifying Your Oracle RAC Database Installation"
Oracle Real Application Clusters Administration and Deployment Guide for more information about adding and removing nodes from your cluster database
Adding the New Node to the Cluster using Enterprise Manager