Installing 11gR2 RAC on AIX

 

1.1  BLOG Document Structure Diagram

 

 

 

1.2  Preface

 

1.2.1  Introduction and Notes

Dear technical enthusiasts, after reading this article you will be able to master the following skills, and you may also pick up some other knowledge you did not know before, ~O(∩_∩)O~:

① Installing RAC on AIX (the focus of this article)

② Installing the RAC software in silent (non-GUI) mode

③ Creating the RAC database silently with DBCA

 

  Tips:

       ① If the code formatting in this article appears garbled, it is recommended to view it in the QQ, Sogou or 360 browser, or to download the PDF version of the document. PDF download address: http://yunpan.cn/cdEQedhCs2kFz (access code: ed9b)

       ② In the command output in this BLOG, the parts that deserve special attention are marked with a grey background and pink font. For example, in the sample below, the maximum archived log sequence of thread 1 being 33 and that of thread 2 being 43 are the points to note. Commands themselves are generally marked with a yellow background and red font, and comments on code or output are generally shown in blue font.

 

  List of Archived Logs in backup set 11

  Thrd Seq     Low SCN    Low Time            Next SCN   Next Time

  ---- ------- ---------- ------------------- ---------- ---------

  1    32      1621589    2015-05-29 11:09:52 1625242    2015-05-29 11:15:48

  1    33      1625242    2015-05-29 11:15:48 1625293    2015-05-29 11:15:58

  2    42      1613951    2015-05-29 10:41:18 1625245    2015-05-29 11:15:49

  2    43      1625245    2015-05-29 11:15:49 1625253    2015-05-29 11:15:53

 

 

 

 

[ZFXXDB1:root]:/>lsvg -o

T_XDESK_APP1_vg

rootvg

[ZFXXDB1:root]:/>

00:27:22 SQL> alter tablespace idxtbs read write;

 

 

====> 2097152*512/1024/1024/1024 = 1G

 

 

 

 

 

 

 

 

If there are any errors or omissions in this article, please point them out, either by leaving a comment on ITPUB or via QQ; your feedback is the greatest motivation for my writing.

 

 

1.2.2  Links to Related Articles

Building RAC in a Linux environment:

Step by step: Oracle 11gR2 RAC + DG -- prequel (1)  http://blog.itpub.net/26736162/viewspace-1290405/

Step by step: Oracle 11gR2 RAC + DG -- environment preparation (2)  http://blog.itpub.net/26736162/viewspace-1290416/

Step by step: Oracle 11gR2 RAC + DG -- shared disk setup (3)  http://blog.itpub.net/26736162/viewspace-1291144/

Step by step: Oracle 11gR2 RAC + DG -- grid installation (4)  http://blog.itpub.net/26736162/viewspace-1297101/

Step by step: Oracle 11gR2 RAC + DG -- database installation (5)  http://blog.itpub.net/26736162/viewspace-1297113/

Step by step: 11gR2 RAC + DG -- resolving problems during RAC installation (6)  http://blog.itpub.net/26736162/viewspace-1297128/

Step by step: 11gR2 RAC + DG -- DG host configuration (7)  http://blog.itpub.net/26736162/viewspace-1298733/

Step by step: 11gR2 RAC + DG -- configuring a single-instance DG (8)  http://blog.itpub.net/26736162/viewspace-1298735/

Step by step: 11gR2 RAC + DG -- DG SWITCHOVER (9)  http://blog.itpub.net/26736162/viewspace-1328050/

Step by step: 11gR2 RAC + DG -- conclusion (10)  http://blog.itpub.net/26736162/viewspace-1328156/

[RAC] How to make the Oracle RAC crs_stat command show complete output  http://blog.itpub.net/26736162/viewspace-1610957/

How to create ASM disks  http://blog.itpub.net/26736162/viewspace-1401193/

Uninstalling RAC on Linux: http://blog.itpub.net/26736162/viewspace-1630145/

 

 

[RAC] RAC for W2K8R2 installation -- overall planning (1): http://blog.itpub.net/26736162/viewspace-1721232/

[RAC] RAC for W2K8R2 installation -- OS environment configuration (2): http://blog.itpub.net/26736162/viewspace-1721253/

[RAC] RAC for W2K8R2 installation -- shared disk configuration (3): http://blog.itpub.net/26736162/viewspace-1721270/

[RAC] RAC for W2K8R2 installation -- grid installation (4): http://blog.itpub.net/26736162/viewspace-1721281/

[RAC] RAC for W2K8R2 installation -- RDBMS software installation (5): http://blog.itpub.net/26736162/viewspace-1721304/

[RAC] RAC for W2K8R2 installation -- creating ASM disk groups (6): http://blog.itpub.net/26736162/viewspace-1721314/

[RAC] RAC for W2K8R2 installation -- creating the database with DBCA (7): http://blog.itpub.net/26736162/viewspace-1721324/

[RAC] RAC for W2K8R2 installation -- uninstallation (8): http://blog.itpub.net/26736162/viewspace-1721331/

[RAC] RAC for W2K8R2 installation -- problems encountered during installation (9): http://blog.itpub.net/26736162/viewspace-1721373/

[RAC] RAC for W2K8R2 installation -- conclusion (10): http://blog.itpub.net/26736162/viewspace-1721378/

[Recommended] [DBCA -SILENT] Silent creation of a RAC database  http://blog.itpub.net/26736162/viewspace-1586352/

 

1.2.3  About This Article

Although I have installed RAC many times before, it was always on Linux or Windows; I had never done it on AIX. Recently I found some time to learn how to install RAC on AIX. Since I am already very familiar with the RAC installation process, I skipped the graphical installer completely and did the whole installation from the command line.

In addition, the scripts used in this article can be downloaded from: http://yunpan.cn/cdEQedhCs2kFz (access code: ed9b)

 

---------------------------------------------------------------------------------------------------------------------

 

Chapter 2  Installation Preparation

2.1  Software Environment

Database software:

p10404530_112030_AIX64-5L_1of7.zip、

p10404530_112030_AIX64-5L_2of7.zip

Cluster software (the 11g equivalent of Clusterware, i.e. Grid Infrastructure):

            p10404530_112030_AIX64-5L_3of7.zip

   Operating system:

7100-03-03-1415

 

Note: when unzipping, p10404530_112030_AIX64-5L_1of7.zip and p10404530_112030_AIX64-5L_2of7.zip must both be extracted into the same directory, while p10404530_112030_AIX64-5L_3of7.zip must be extracted into a separate directory.
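A minimal sketch of this step, assuming the zip files have been uploaded to /softtmp and using a hypothetical /softtmp/db directory as the staging area for the database software (Chapter 3 below shows the grid package being unzipped in place under /softtmp, which creates /softtmp/grid):

cd /softtmp
# database software: 1of7 and 2of7 must land in the same directory (here /softtmp/db)
mkdir -p /softtmp/db
unzip -q -d /softtmp/db p10404530_112030_AIX64-5L_1of7.zip
unzip -q -d /softtmp/db p10404530_112030_AIX64-5L_2of7.zip
# grid infrastructure: 3of7 goes to a different directory (unzipped in place, creating /softtmp/grid)
unzip -q p10404530_112030_AIX64-5L_3of7.zip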

 

2.2  Network Planning and /etc/hosts

vi /etc/hosts

22.188.187.148   ZFFR4CB1101

222.188.187.148  ZFFR4CB1101-priv

22.188.187.149   ZFFR4CB1101-vip

 

22.188.187.158   ZFFR4CB2101

222.188.187.158  ZFFR4CB2101-priv

22.188.187.150   ZFFR4CB2101-vip

 

22.188.187.160   ZFFR4CB2101-scan

 

Configure the private network (the one-liner below looks up the node's public IP with the host command, prefixes it with a "2" to form the private address, and assigns it to en1):

HOST=`hostname`;IP=`host $HOST | awk '{print "2"$NF}'`;chdev -l 'en1' -a netaddr=$IP -a netmask='255.255.255.0' -a state='up'

[ZFPRMDB2:root]:/>smitty tcpip

     

      Minimum Configuration & Startup

 

* Internet ADDRESS (dotted decimal)                 [222.188.187.148]

  Network MASK (dotted decimal)                      [255.255.255.0]

 

Node 1:

[ZFFR4CB1101:root]/]> ifconfig -a

en0: flags=1e084863,480<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD(ACTIVE),CHAIN>

        inet 22.188.187.148 netmask 0xffffff00 broadcast 22.188.187.255

         tcp_sendspace 262144 tcp_recvspace 262144 rfc1323 1

en1: flags=1e084863,480<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD(ACTIVE),CHAIN>

        inet 222.188.187.148 netmask 0xffffff00 broadcast 222.188.187.255

         tcp_sendspace 262144 tcp_recvspace 262144 rfc1323 1

lo0: flags=e08084b,c0<UP,BROADCAST,LOOPBACK,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,LARGESEND,CHAIN>

        inet 127.0.0.1 netmask 0xff000000 broadcast 127.255.255.255

        inet6 ::1%1/0

         tcp_sendspace 131072 tcp_recvspace 131072 rfc1323 1

[ZFFR4CB1101:root]/]>

[ZFFR4CB1101:root]/]>

 

Node 2:

[ZFFR4CB2101:root]/]> ifconfig -a

en0: flags=1e084863,480<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD(ACTIVE),CHAIN>

        inet 22.188.187.158 netmask 0xffffff00 broadcast 22.188.187.255

         tcp_sendspace 262144 tcp_recvspace 262144 rfc1323 1

en1: flags=1e084863,480<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD(ACTIVE),CHAIN>

        inet 222.188.187.158 netmask 0xffffff00 broadcast 222.188.187.255

         tcp_sendspace 262144 tcp_recvspace 262144 rfc1323 1

lo0: flags=e08084b,c0<UP,BROADCAST,LOOPBACK,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,LARGESEND,CHAIN>

        inet 127.0.0.1 netmask 0xff000000 broadcast 127.255.255.255

        inet6 ::1%1/0

         tcp_sendspace 131072 tcp_recvspace 131072 rfc1323 1

[ZFFR4CB2101:root]/]>

[ZFFR4CB2101:root]/]>

 

At this point the 4 public and private IPs (two per node) should answer pings, while the other 3 addresses (the two VIPs and the SCAN) should NOT be pingable yet; that is the expected state before Grid Infrastructure is up.
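A quick check, as a minimal sketch (the host names are taken from the /etc/hosts entries above):

# the 4 public/private addresses should answer; the 2 VIPs and the SCAN should not answer yet
for h in ZFFR4CB1101 ZFFR4CB1101-priv ZFFR4CB2101 ZFFR4CB2101-priv \
         ZFFR4CB1101-vip ZFFR4CB2101-vip ZFFR4CB2101-scan
do
  ping -c 2 $h > /dev/null 2>&1 && echo "$h : reachable" || echo "$h : not reachable"
done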

 

2.3  Hardware Environment Check

Taking ZFFR4CB2101 as an example:

 

[ZFFR4CB2101:root]/]> getconf REAL_MEMORY

4194304

[ZFFR4CB2101:root]/]> /usr/sbin/lsattr -E -l sys0 -a realmem

realmem 4194304 Amount of usable physical memory in Kbytes False

[ZFFR4CB2101:root]/]> lsps -a

Page Space      Physical Volume   Volume Group    Size %Used Active  Auto  Type Chksum

hd6             hdisk0            rootvg        8192MB     0   yes   yes    lv     0

[ZFFR4CB2101:root]/]> getconf HARDWARE_BITMODE

64

[ZFFR4CB2101:root]/]> bootinfo -K

64

[ZFFR4CB2101:root]/]>

[ZFFR4CB2101:root]/]> df -g

Filesystem    GB blocks      Free %Used    Iused %Iused Mounted on

/dev/hd4           4.25      4.00    6%    12709     2% /

/dev/hd2          10.00      4.57   55%   118820    11% /usr

/dev/hd9var        4.50      4.24    6%     1178     1% /var

/dev/hd3           4.25      4.23    1%      172     1% /tmp

/dev/hd1           1.00      1.00    1%       77     1% /home

/dev/hd11admin      0.25      0.25    1%        7     1% /admin

/proc                 -         -    -         -     -  /proc

/dev/hd10opt       4.50      4.37    3%     2567     1% /opt

/dev/livedump      1.00      1.00    1%        6     1% /var/adm/ras/livedump

/dev/Plv_install      1.00      1.00    1%        4     1% /install

/dev/Plv_mtool      1.00      1.00    1%        4     1% /mtool

/dev/Plv_audit      2.00      1.99    1%        5     1% /audit

/dev/Plv_ftplog      1.00      1.00    1%        5     1% /ftplog

/dev/Tlv_bocnet     50.00     49.99    1%        4     1% /bocnet

/dev/Tlv_WebSphere     10.00      5.71   43%    45590     4% /WebSphere

/dev/TLV_TEST_DATA    100.00     99.98    1%        7     1% /lhr

/dev/tlv_softtmp     30.00     20.30   33%     5639     1% /softtmp

ZTDNETAP3:/nfs   1240.00     14.39   99%   513017    14% /nfs

/dev/tlv_u01      50.00     32.90   35%    51714     1% /u01

[ZFFR4CB2101:root]/]> cat /etc/.init.state

2

[ZFFR4CB2101:root]/]> oslevel -s

7100-03-03-1415

[ZFFR4CB2101:root]/]> lslpp -l bos.adt.base bos.adt.lib bos.adt.libm bos.perf.perfstat bos.perf.libperfstat bos.perf.proctools

  Fileset                      Level  State      Description        

  ----------------------------------------------------------------------------

Path: /usr/lib/objrepos

  bos.adt.base              7.1.3.15  COMMITTED  Base Application Development

                                                 Toolkit

  bos.adt.lib               7.1.2.15  COMMITTED  Base Application Development

                                                 Libraries

  bos.adt.libm               7.1.3.0  COMMITTED  Base Application Development

                                                 Math Library

  bos.perf.libperfstat      7.1.3.15  COMMITTED  Performance Statistics Library

                                                 Interface

  bos.perf.perfstat         7.1.3.15  COMMITTED  Performance Statistics

                                                 Interface

 

Path: /etc/objrepos

  bos.adt.base              7.1.3.15  COMMITTED  Base Application Development

                                                 Toolkit

  bos.perf.libperfstat      7.1.3.15  COMMITTED  Performance Statistics Library

                                                 Interface

  bos.perf.perfstat         7.1.3.15  COMMITTED  Performance Statistics

                                                 Interface

lslpp: 0504-132  Fileset bos.perf.proctools  not installed.

 

 

2.4  Operating System Parameter Tuning

Shell script (a quick verification of the applied values is sketched at the end of this section):

vi os_pre_lhr.sh

_chlimit(){

  [ -f /etc/security/limits.org ] || { cp -p /etc/security/limits /etc/security/limits.org; }

  cat /etc/security/limits.org |egrep -vp "root|oracle|grid" > /etc/security/limits

  echo "root:

        core = -1

        cpu = -1

        data = -1

        fsize = -1

        nofiles = -1

        rss = -1

        stack = -1

        core_hard = -1

        cpu_hard = -1

        data_hard = -1

        fsize_hard = -1

        nofiles_hard = -1

        rss_hard = -1

        stack_hard = -1

 

oracle:

        core = -1

        cpu = -1

        data = -1

        fsize = -1

        nofiles = -1

        rss = -1

        stack = -1

        cpu_hard = -1

        core_hard = -1

        data_hard = -1

        fsize_hard = -1

        nofiles_hard = -1

        rss_hard = -1

        stack_hard = -1

 

grid:

        core = -1

        cpu = -1

        data = -1

        fsize = -1

        nofiles = -1

        rss = -1

        stack = -1

        core_hard = -1

        cpu_hard = -1

        data_hard = -1

        fsize_hard = -1

        nofiles_hard = -1

        rss_hard = -1

        stack_hard = -1" >> /etc/security/limits

}

 

_chospara(){

  vmo -p -o minperm%=3

  echo "yes"|vmo -p -o maxperm%=90

  echo "yes" |vmo -p -o maxclient%=90

  echo "yes"|vmo -p -o lru_file_repage=0

  echo "yes"|vmo -p -o strict_maxclient=1

  echo "yes" |vmo -p -o strict_maxperm=0

  echo "yes\nno" |vmo -r -o page_steal_method=1;

  ioo -a|egrep -w "aio_maxreqs|aio_maxservers|aio_minservers"

  /usr/sbin/chdev -l sys0 -a maxuproc=16384 -a ncargs=256 -a minpout=4096 -a maxpout=8193 -a fullcore=true

  echo "check sys0 16384 256"

  lsattr -El sys0 |egrep "maxuproc|ncargs|pout|fullcore" |awk '{print $1,$2}'

 

  /usr/sbin/no -p -o sb_max=41943040

  /usr/sbin/no -p -o udp_sendspace=2097152

  /usr/sbin/no -p -o udp_recvspace=20971520

  /usr/sbin/no -p -o tcp_sendspace=1048576

  /usr/sbin/no -p -o tcp_recvspace=1048576

  /usr/sbin/no -p -o rfc1323=1

  /usr/sbin/no -r -o ipqmaxlen=512

  /usr/sbin/no -p -o clean_partial_conns=1

 

  cp -p /etc/environment /etc/environment.`date '+%Y%m%d'`

  cat /etc/environment.`date '+%Y%m%d'` |awk '/^TZ=/{print "TZ=BEIST-8"} !/^TZ=/{print}' >/etc/environment

  _chlimit

 

}

 

_chlimit

_chospara

 

stopsrc -s xntpd

startsrc -s xntpd -a "-x"

 

sh os_pre_lhr.sh
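After the script has run, the applied values can be spot-checked on each node; a minimal sketch (adjust the parameter lists to taste; -F also lists restricted tunables):

# VMM tunables set by _chospara
vmo -F -a | egrep "minperm%|maxperm%|maxclient%|lru_file_repage|strict_maxclient|strict_maxperm|page_steal_method"
# network tunables
no -F -a | egrep "sb_max|udp_sendspace|udp_recvspace|tcp_sendspace|tcp_recvspace|rfc1323|ipqmaxlen|clean_partial_conns"
# sys0 attributes and the user limits written to /etc/security/limits
lsattr -El sys0 | egrep "maxuproc|ncargs|minpout|maxpout|fullcore"
ulimit -a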

2.5  Creating File Systems

 

/usr/lpp/EMC/Symmetrix/bin/inq.aix64_51 -showvol -sid

lspv

mkvg -S -y t_u01_vg -s 128   hdisk22

 

mklv -t jfs2 -y tlv_u01 -x 1024 t_u01_vg 400

crfs -v jfs2 -d tlv_u01 -m /u01 -A yes

mount /u01

 

mklv -t jfs2 -y tlv_softtmp -x 1024 t_u01_vg 240

crfs -v jfs2 -d tlv_softtmp -m /softtmp -A yes

mount /softtmp

 

Taking ZFFR4CB2101 as an example:

[ZFFR4CB2101:root]/]> /usr/lpp/EMC/Symmetrix/bin/inq.aix64_51 -showvol -sid

Inquiry utility, Version V7.3-1214 (Rev 0.1)      (SIL Version V7.3.0.1 (Edit Level 1214)

Copyright (C) by EMC Corporation, all rights reserved.

For help type inq -h.

 

.........................

 

------------------------------------------------------------------------------------------------

DEVICE        :VEND    :PROD            :REV   :SER NUM    :Volume  :CAP(kb)        :SYMM ID   

------------------------------------------------------------------------------------------------

/dev/rhdisk0  :AIX     :VDASD           :0001  :hdisk5     :   00000:   134246400  :N/A        

/dev/rhdisk1  :EMC     :SYMMETRIX       :5876  :640250a000 :   0250A:        2880  :000492600664

/dev/rhdisk2  :EMC     :SYMMETRIX       :5876  :640250b000 :   0250B:        2880  :000492600664

/dev/rhdisk3  :EMC     :SYMMETRIX       :5876  :640250c000 :   0250C:        2880  :000492600664

/dev/rhdisk4  :EMC     :SYMMETRIX       :5876  :640250d000 :   0250D:        2880  :000492600664

/dev/rhdisk5  :EMC     :SYMMETRIX       :5876  :64026f6000 :   026F6:   134246400  :000492600664

/dev/rhdisk6  :EMC     :SYMMETRIX       :5876  :64026fe000 :   026FE:   134246400  :000492600664

/dev/rhdisk7  :EMC     :SYMMETRIX       :5876  :6402706000 :   02706:   134246400  :000492600664

/dev/rhdisk8  :EMC     :SYMMETRIX       :5876  :640270e000 :   0270E:   134246400  :000492600664

/dev/rhdisk9  :EMC     :SYMMETRIX       :5876  :6402716000 :   02716:   134246400  :000492600664

/dev/rhdisk10 :EMC     :SYMMETRIX       :5876  :640271e000 :   0271E:   134246400  :000492600664

/dev/rhdisk11 :EMC     :SYMMETRIX       :5876  :6402726000 :   02726:   134246400  :000492600664

/dev/rhdisk12 :EMC     :SYMMETRIX       :5876  :640272e000 :   0272E:   134246400  :000492600664

/dev/rhdisk13 :EMC     :SYMMETRIX       :5876  :6402736000 :   02736:   134246400  :000492600664

/dev/rhdisk14 :EMC     :SYMMETRIX       :5876  :640273e000 :   0273E:   134246400  :000492600664

/dev/rhdisk15 :EMC     :SYMMETRIX       :5876  :6402746000 :   02746:   134246400  :000492600664

/dev/rhdisk16 :EMC     :SYMMETRIX       :5876  :640274e000 :   0274E:   134246400  :000492600664

/dev/rhdisk17 :EMC     :SYMMETRIX       :5876  :6402756000 :   02756:   134246400  :000492600664

/dev/rhdisk18 :EMC     :SYMMETRIX       :5876  :640275e000 :   0275E:   134246400  :000492600664

/dev/rhdisk19 :EMC     :SYMMETRIX       :5876  :6402766000 :   02766:   134246400  :000492600664

/dev/rhdisk20 :EMC     :SYMMETRIX       :5876  :640276e000 :   0276E:   134246400  :000492600664

/dev/rhdisk21 :EMC     :SYMMETRIX       :5876  :6402776000 :   02776:   134246400  :000492600664

/dev/rhdisk22 :EMC     :SYMMETRIX       :5876  :640277e000 :   0277E:   134246400  :000492600664

/dev/rhdisk23 :EMC     :SYMMETRIX       :5876  :6402786000 :   02786:   134246400  :000492600664

/dev/rhdisk24 :EMC     :SYMMETRIX       :5876  :640278e000 :   0278E:   134246400  :000492600664

[ZFFR4CB2101:root]/]> lspv

hdisk0          00c49fc434da2434                    rootvg          active     

hdisk1          00c49fc461fc76b2                    None                       

hdisk2          00c49fc461fc76f5                    None                       

hdisk3          00c49fc461fc7739                    None                       

hdisk4          00c49fc461fc777a                    None                       

hdisk5          00c49fc461fc77bd                    None                       

hdisk6          00c49fc461fc77fe                    None                       

hdisk7          00c49fc461fc783f                    None                       

hdisk8          00c49fc461fc7880                    None                       

hdisk9          00c49fc461fc78c5                    None                       

hdisk10         00c49fc461fc7908                    None                       

hdisk11         00c49fc461fc7958                    None                       

hdisk12         00c49fc461fc79a0                    None                       

hdisk13         00c49fc461fc79ea                    None                       

hdisk14         00c49fc461fc7a2f                    None                       

hdisk15         00c49fc461fc7a71                    None                       

hdisk16         00c49fc461fc7ab1                    None                       

hdisk17         00c49fb4e3a8fc12                    None                       

hdisk18         00c49fc461fc7b3b                    T_NET_APP_vg    active     

hdisk19         00c49fc461fc7b7d                    None                       

hdisk20         00c49fc461fc7bbe                    None                       

hdisk21         00c49fc461fc7bff                    None                       

hdisk22         00c49fc461fc7c40                    None                       

hdisk23         00c49fc461fc7c88                    T_TEST_LHR_VG   active     

hdisk24         00c49fc461fc7cca                    T_TEST_LHR_VG   active

 

 

[ZFFR4CB2101:root]/]> df -g

Filesystem    GB blocks      Free %Used    Iused %Iused Mounted on

/dev/hd4           4.25      4.00    6%    12643     2% /

/dev/hd2          10.00      4.58   55%   118785    10% /usr

/dev/hd9var        4.50      4.08   10%     1175     1% /var

/dev/hd3           4.25      3.75   12%     1717     1% /tmp

/dev/hd1           1.00      1.00    1%       17     1% /home

/dev/hd11admin      0.25      0.25    1%        7     1% /admin

/proc                 -         -    -         -     -  /proc

/dev/hd10opt       4.50      4.37    3%     2559     1% /opt

/dev/livedump      1.00      1.00    1%        6     1% /var/adm/ras/livedump

/dev/Plv_install      1.00      1.00    1%        4     1% /install

/dev/Plv_mtool      1.00      1.00    1%        4     1% /mtool

/dev/Plv_audit      2.00      1.99    1%        5     1% /audit

/dev/Plv_ftplog      1.00      1.00    1%        5     1% /ftplog

/dev/Tlv_bocnet     50.00     49.99    1%        4     1% /bocnet

/dev/Tlv_WebSphere     10.00      5.71   43%    45590     4% /WebSphere

/dev/TLV_TEST_DATA    100.00     99.98    1%        7     1% /lhr

ZTDNETAP3:/nfs   1240.00     14.39   99%   512924    14% /nfs

ZTINIMSERVER:/sharebkup   5500.00   1258.99   78%  2495764     1% /sharebkup

 

 

[ZFFR4CB2101:root]/]> mklv -t jfs2 -y tlv_u01 -x 1024 t_u01_vg 400

tlv_u01

[ZFFR4CB2101:root]/]> crfs -v jfs2 -d tlv_u01 -m /u01 -A yes

File system created successfully.

52426996 kilobytes total disk space.

New File System size is 104857600

[ZFFR4CB2101:root]/]> mount /u01

[ZFFR4CB2101:root]/]> df -g

Filesystem    GB blocks      Free %Used    Iused %Iused Mounted on

/dev/hd4           4.25      4.00    6%    12648     2% /

/dev/hd2          10.00      4.58   55%   118785    10% /usr

/dev/hd9var        4.50      4.08   10%     1176     1% /var

/dev/hd3           4.25      3.75   12%     1717     1% /tmp

/dev/hd1           1.00      1.00    1%       17     1% /home

/dev/hd11admin      0.25      0.25    1%        7     1% /admin

/proc                 -         -    -         -     -  /proc

/dev/hd10opt       4.50      4.37    3%     2559     1% /opt

/dev/livedump      1.00      1.00    1%        6     1% /var/adm/ras/livedump

/dev/Plv_install      1.00      1.00    1%        4     1% /install

/dev/Plv_mtool      1.00      1.00    1%        4     1% /mtool

/dev/Plv_audit      2.00      1.99    1%        5     1% /audit

/dev/Plv_ftplog      1.00      1.00    1%        5     1% /ftplog

/dev/Tlv_bocnet     50.00     49.99    1%        4     1% /bocnet

/dev/Tlv_WebSphere     10.00      5.71   43%    45590     4% /WebSphere

/dev/TLV_TEST_DATA    100.00     99.98    1%        7     1% /lhr

ZTDNETAP3:/nfs   1240.00     14.39   99%   512924    14% /nfs

ZTINIMSERVER:/sharebkup   5500.00   1258.99   78%  2495764     1% /sharebkup

/dev/tlv_u01      50.00     49.99    1%        4     1% /u01

[ZFFR4CB2101:root]/]>

 

 

[ZFFR4CB2101:root]/]> mklv -t jfs2 -y tlv_softtmp -x 1024 t_u01_vg 240

tlv_softtmp

[ZFFR4CB2101:root]/]> crfs -v jfs2 -d tlv_softtmp -m /softtmp -A yes

File system created successfully.

31456116 kilobytes total disk space.

New File System size is 62914560

[ZFFR4CB2101:root]/]> mount /softtmp

[ZFFR4CB2101:root]/]> df -g

Filesystem    GB blocks      Free %Used    Iused %Iused Mounted on

/dev/hd4           4.25      4.00    6%    12650     2% /

/dev/hd2          10.00      4.58   55%   118785    10% /usr

/dev/hd9var        4.50      4.08   10%     1177     1% /var

/dev/hd3           4.25      3.75   12%     1717     1% /tmp

/dev/hd1           1.00      1.00    1%       17     1% /home

/dev/hd11admin      0.25      0.25    1%        7     1% /admin

/proc                 -         -    -         -     -  /proc

/dev/hd10opt       4.50      4.37    3%     2559     1% /opt

/dev/livedump      1.00      1.00    1%        6     1% /var/adm/ras/livedump

/dev/Plv_install      1.00      1.00    1%        4     1% /install

/dev/Plv_mtool      1.00      1.00    1%        4     1% /mtool

/dev/Plv_audit      2.00      1.99    1%        5     1% /audit

/dev/Plv_ftplog      1.00      1.00    1%        5     1% /ftplog

/dev/Tlv_bocnet     50.00     49.99    1%        4     1% /bocnet

/dev/Tlv_WebSphere     10.00      5.71   43%    45590     4% /WebSphere

/dev/TLV_TEST_DATA    100.00     99.98    1%        7     1% /lhr

ZTDNETAP3:/nfs   1240.00     14.39   99%   512924    14% /nfs

ZTINIMSERVER:/sharebkup   5500.00   1258.99   78%  2495764     1% /sharebkup

/dev/tlv_u01      50.00     49.99    1%        4     1% /u01

/dev/tlv_softtmp     30.00     30.00    1%        4     1% /softtmp

[ZFFR4CB2101:root]/]>

 

When creating the volume group, make sure you pick the right disks and do not "step on" a disk that is already in use elsewhere; AIX administrators will know what I mean.
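A minimal sanity check before running mkvg, assuming hdisk22 is the intended disk (run it on BOTH nodes, and compare the serial numbers reported by inq against the storage allocation as well):

# the disk must show "None" in the volume-group column on both nodes (i.e. it belongs to no VG)
lspv | grep -w hdisk22
# dump the first sectors to confirm there is no leftover LVM/ASM metadata before reusing the disk
lquerypv -h /dev/rhdisk22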

 

2.6  Creating the Installation Directories

Copy and paste the following to execute directly:

mkdir -p  /u01/app/11.2.0/grid

chmod -R 755 /u01/app/11.2.0/grid

mkdir -p /u01/app/grid

chmod -R 755 /u01/app/grid

mkdir -p  /u01/app/oracle

chmod -R 755 /u01/app/oracle

 

[ZFFR4CB2101:root]/]>  mkdir -p  /u01/app/11.2.0/grid                                                      

[ZFFR4CB2101:root]/]>  chmod -R 755 /u01/app/11.2.0/grid                                                         

[ZFFR4CB2101:root]/]>  mkdir -p /u01/app/grid                                                                    

[ZFFR4CB2101:root]/]>  chmod -R 755 /u01/app/grid                                                                

[ZFFR4CB2101:root]/]>  mkdir -p  /u01/app/oracle                                                                 

[ZFFR4CB2101:root]/]>  chmod -R 755 /u01/app/oracle

[ZFFR4CB2101:root]/]>

[ZFFR4CB2101:root]/]> cd /u01/app

[ZFFR4CB2101:root]/u01/app]> l

total 0

drwxr-xr-x    3 root     system          256 Mar 08 16:11 11.2.0

drwxr-xr-x    2 root     system          256 Mar 08 16:11 grid

drwxr-xr-x    2 root     system          256 Mar 08 16:11 oracle

[ZFFR4CB2101:root]/u01/app]>

 

 

 

2.7  Creating Users and Groups

Copy and paste the following to execute directly:

mkgroup -A id=1024 dba

mkgroup -A id=1025 asmadmin

mkgroup -A id=1026 asmdba

mkgroup -A id=1027 asmoper

mkgroup -A id=1028 oinstall

 

 

mkuser -a id=1025 pgrp=oinstall groups=dba,asmadmin,asmdba,asmoper,oinstall home=/home/grid fsize=-1 cpu=-1 data=-1 core=-1 rss=-1 stack=-1 stack_hard=-1  capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE  grid

echo "grid:grid" |chpasswd

pwdadm -c grid

 

mkuser -a id=1024 pgrp=dba groups=dba,asmadmin,asmdba,asmoper,oinstall  home=/home/oracle fsize=-1 cpu=-1 data=-1 core=-1 rss=-1 stack=-1 stack_hard=-1  capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE  oracle

echo "oracle:oracle" |chpasswd

pwdadm -c oracle

 

 

chown -R grid:dba  /u01/app/11.2.0

chown grid:dba  /u01/app

chown grid:dba  /u01/app/grid

chown -R oracle:dba  /u01/app/oracle

chown oracle:dba  /u01

 

/usr/sbin/lsuser  -a  capabilities grid

/usr/sbin/lsuser  -a  capabilities oracle  

 

 

 

 

 

[ZFFR4CB2101:root]/u01/app]> mkgroup -A id=1024 dba  

[ZFFR4CB2101:root]/u01/app]> mkuser -a id=1025 pgrp=dba groups=dba home=/home/grid fsize=-1 cpu=-1 data=-1 core=-1 rss=-1 stack=-1 stack_hard=-1  capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE  grid                                                    

[ZFFR4CB2101:root]/u01/app]> passwd  grid

Changing password for "grid"

grid's New password:

Enter the new password again:

[ZFFR4CB2101:root]/u01/app]>

[ZFFR4CB2101:root]/u01/app]> mkuser -a id=1024 pgrp=dba groups=dba home=/home/oracle fsize=-1 cpu=-1 data=-1 core=-1 rss=-1 stack=-1 stack_hard=-1  capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE  oracle                                                    

[ZFFR4CB2101:root]/u01/app]> passwd  oracle

Changing password for "oracle"

oracle's New password:

Enter the new password again:

[ZFFR4CB2101:root]/u01/app]>    chown -R grid:dba  /u01/app/11.2.0                                       

[ZFFR4CB2101:root]/u01/app]>    chown grid:dba  /u01/app                                                                     

[ZFFR4CB2101:root]/u01/app]>    chown grid:dba  /u01/app/grid                                                               

[ZFFR4CB2101:root]/u01/app]>    chown -R oracle:dba  /u01/app/oracle                                                         

[ZFFR4CB2101:root]/u01/app]>    chown oracle:dba  /u01

[ZFFR4CB2101:root]/u01/app]> /usr/sbin/lsuser  -a  capabilities grid

grid capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE

[ZFFR4CB2101:root]/u01/app]> /usr/sbin/lsuser  -a  capabilities oracle

oracle capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE

[ZFFR4CB2101:root]/u01/app]>

 

 

Verify on both nodes:

[ZFFR4CB1101:root]/]> id grid

uid=1025(grid) gid=1028(oinstall) groups=1024(dba),1025(asmadmin),1026(asmdba),1027(asmoper)

[ZFFR4CB1101:root]/]> id oracle

uid=1024(oracle) gid=1024(dba) groups=1025(asmadmin),1026(asmdba),1027(asmoper),1028(oinstall)

[ZFFR4CB1101:root]/]>

 

2.8  Configuring the .profile of grid and oracle

--------- Configure each of the two nodes separately, and remember to change the value of ORACLE_SID to +ASM1 on node 1 and +ASM2 on node 2 (see the per-node sketch after the profile listings below).

su - grid

vi .profile

 

umask 022  

export ORACLE_BASE=/u01/app/grid

export ORACLE_HOME=/u01/app/11.2.0/grid

export ORACLE_SID=+ASM

export ORACLE_TERM=vt100

export ORACLE_OWNER=grid

export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/u01/app/oracle/product/11.2.0/dbhome_1/lib32

export LIBPATH=$LIBPATH:/u01/app/oracle/product/11.2.0/dbhome_1/lib

export NLS_LANG=AMERICAN_AMERICA.ZHS16GBK

export PATH=$PATH:/bin:/usr/ccs/bin:/usr/bin/X11:$ORACLE_HOME/bin 

export NLS_DATE_FORMAT='YYYY-MM-DD HH24:MI:SS'

 

set -o vi

export EDITOR=vi 

alias l='ls -l'

export PS1='[$LOGNAME@'`hostname`:'$PWD'']$ '

export AIXTHREAD_SCOPE=S

export ORACLE_TERM=vt100

export TMP=/tmp

export TMPDIR=/tmp

export LANG=en_US

export PS1='[$LOGNAME@'`hostname`:'$PWD'']$ '

export DISPLAY=22.188.216.97:0.0

 

 

su - oracle

vi .profile

umask 022

export ORACLE_SID=ora11g

export ORACLE_BASE=/u01/app/oracle

export GRID_HOME=/u01/app/11.2.0/grid

export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/dbhome_1

export PATH=$ORACLE_HOME/bin:$GRID_HOME/bin:$PATH:$ORACLE_HOME/OPatch

export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$ORACLE_HOME/rdbms/lib:/lib:/usr/lib

export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib

export NLS_LANG=AMERICAN_AMERICA.ZHS16GBK

export NLS_DATE_FORMAT='YYYY-MM-DD HH24:MI:SS'

export ORACLE_OWNER=oracle

 

 

set -o vi

export EDITOR=vi 

alias l='ls -l'

export AIXTHREAD_SCOPE=S

export ORACLE_TERM=vt100

export TMP=/tmp

export TMPDIR=/tmp

export LANG=en_US

export PS1='[$LOGNAME@'`hostname`:'$PWD'']$ '

export DISPLAY=22.188.216.97:0.0

 

 

 

. ~/.profile     -- source the profile so that the environment variables take effect in the current session

 

[ZFFR4CB1101:root]/]> . ~/.profile
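As noted at the start of this section, ORACLE_SID must differ per node (+ASM1 / +ASM2 for grid, and typically ora11g1 / ora11g2 for oracle). A minimal sketch of a per-node setting that could be used in the .profile instead of a hard-coded value; the hostname-to-SID mapping is an assumption based on this environment:

case `hostname` in
  ZFFR4CB1101) export ORACLE_SID=+ASM1 ;;   # node 1
  ZFFR4CB2101) export ORACLE_SID=+ASM2 ;;   # node 2
esac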

 

 

2.9  Preparing the ASM Disks

  Run on both nodes. The ownership, permissions and device attributes of the ASM disks must be changed, otherwise running root.sh will fail with errors such as the following (a loop version of these commands is sketched at the end of this section):

Disk Group OCR creation failed with the following message:

ORA-15018: diskgroup cannot be created

ORA-15031: disk specification '/dev/rhdisk10' matches no disks

ORA-15025: could not open disk "/dev/rhdisk10"

ORA-15056: additional error message

 

 

chown grid.asmadmin /dev/rhdisk10

chown grid.asmadmin /dev/rhdisk11

chmod 660  /dev/rhdisk10

chmod 660  /dev/rhdisk11

 

lquerypv -h /dev/hdisk10

 

chdev -l hdisk10 -a reserve_policy=no_reserve -a algorithm=round_robin -a queue_depth=32 -a pv=yes

chdev -l hdisk11 -a reserve_policy=no_reserve -a algorithm=round_robin -a queue_depth=32 -a pv=yes

 

lsattr -El hdisk10

 

 

[ZFFR4CB2101:root]/]> lsattr -El hdisk10

PCM             PCM/friend/MSYMM_VRAID           Path Control Module              True

PR_key_value    none                             Persistant Reserve Key Value     True

algorithm       fail_over                        Algorithm                        True

clr_q           yes                              Device CLEARS its Queue on error True

dist_err_pcnt   0                                Distributed Error Percentage     True

dist_tw_width   50                               Distributed Error Sample Time    True

hcheck_cmd      inquiry                          Health Check Command             True

hcheck_interval 60                               Health Check Interval            True

hcheck_mode     nonactive                        Health Check Mode                True

location                                         Location Label                   True

lun_id          0x9000000000000                  Logical Unit Number ID           False

lun_reset_spt   yes                              FC Forced Open LUN               True

max_coalesce    0x100000                         Maximum Coalesce Size            True

max_retries     5                                Maximum Number of Retries        True

max_transfer    0x100000                         Maximum TRANSFER Size            True

node_name       0x50000978080a6000               FC Node Name                     False

pvid            00c49fc461fc79080000000000000000 Physical volume identifier       False

q_err           no                               Use QERR bit                     True

q_type          simple                           Queue TYPE                       True

queue_depth     32                               Queue DEPTH                      True

reserve_policy  single_path                      Reserve Policy                   True

rw_timeout      40                               READ/WRITE time out value        True

scsi_id         0xce0040                         SCSI ID                          False

start_timeout   180                              START UNIT time out value        True

timeout_policy  retry_path                       Timeout Policy                   True

ww_name         0x50000978080a61d1               FC World Wide Name               False

[ZFFR4CB2101:root]/]> chdev -l hdisk10 -a reserve_policy=no_reserve -a algorithm=round_robin -a queue_depth=32 -a pv=yes

hdisk10 changed

[ZFFR4CB2101:root]/]> chdev -l hdisk11 -a reserve_policy=no_reserve -a algorithm=round_robin -a queue_depth=32 -a pv=yes

hdisk11 changed

[ZFFR4CB2101:root]/]> lsattr -El hdisk11

PCM             PCM/friend/MSYMM_VRAID           Path Control Module              True

PR_key_value    none                             Persistant Reserve Key Value     True

algorithm       round_robin                      Algorithm                        True

clr_q           yes                              Device CLEARS its Queue on error True

dist_err_pcnt   0                                Distributed Error Percentage     True

dist_tw_width   50                               Distributed Error Sample Time    True

hcheck_cmd      inquiry                          Health Check Command             True

hcheck_interval 60                               Health Check Interval            True

hcheck_mode     nonactive                        Health Check Mode                True

location                                         Location Label                   True

lun_id          0xa000000000000                  Logical Unit Number ID           False

lun_reset_spt   yes                              FC Forced Open LUN               True

max_coalesce    0x100000                         Maximum Coalesce Size            True

max_retries     5                                Maximum Number of Retries        True

max_transfer    0x100000                         Maximum TRANSFER Size            True

node_name       0x50000978080a6000               FC Node Name                     False

pvid            00c49fc461fc79580000000000000000 Physical volume identifier       False

q_err           no                               Use QERR bit                     True

q_type          simple                           Queue TYPE                       True

queue_depth     32                               Queue DEPTH                      True

reserve_policy  no_reserve                       Reserve Policy                   True

rw_timeout      40                               READ/WRITE time out value        True

scsi_id         0xce0040                         SCSI ID                          False

start_timeout   180                              START UNIT time out value        True

timeout_policy  retry_path                       Timeout Policy                   True

ww_name         0x50000978080a61d1               FC World Wide Name               False

[ZFFR4CB2101:root]/]>

[ZFFR4CB2101:root]/]> lquerypv -h  /dev/rhdisk10

00000000   00000000 00000000 00000000 00000000  |................|

00000010   00000000 00000000 00000000 00000000  |................|

00000020   00000000 00000000 00000000 00000000  |................|

00000030   00000000 00000000 00000000 00000000  |................|

00000040   00000000 00000000 00000000 00000000  |................|

00000050   00000000 00000000 00000000 00000000  |................|

00000060   00000000 00000000 00000000 00000000  |................|

00000070   00000000 00000000 00000000 00000000  |................|

00000080   00000000 00000000 00000000 00000000  |................|

00000090   00000000 00000000 00000000 00000000  |................|

000000A0   00000000 00000000 00000000 00000000  |................|

000000B0   00000000 00000000 00000000 00000000  |................|

000000C0   00000000 00000000 00000000 00000000  |................|

000000D0   00000000 00000000 00000000 00000000  |................|

000000E0   00000000 00000000 00000000 00000000  |................|

000000F0   00000000 00000000 00000000 00000000  |................|
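If more disks are handed over to ASM later, the same ownership, permission and attribute changes can be applied in a loop; a minimal sketch (the disk list is only an example and must match your own candidate disks):

for d in hdisk10 hdisk11
do
  chdev -l $d -a reserve_policy=no_reserve -a algorithm=round_robin -a queue_depth=32 -a pv=yes
  chown grid:asmadmin /dev/r$d
  chmod 660 /dev/r$d
  lsattr -El $d | egrep "reserve_policy|algorithm|queue_depth"
done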

 

 

 

 

 

2.10  Configuring SSH Connectivity

This can be done either with a shell script or manually; the shell-script method is recommended.

2.10.1  Shell script (run on both nodes)

Note: modify the highlighted values to match your environment -- oth is the hostname of the other node. Run cfgssh.sh to do the configuration and testssh.sh to verify SSH connectivity; the scripts work on both AIX and Linux. If you only configure a single node, set oth to the same value as hn:

 

vi cfgssh.sh

echo "config ssh..."

grep "^LoginGraceTime 0" /etc/ssh/sshd_config

[ $? -ne 0 ] && { cp -p /etc/ssh/sshd_config /etc/ssh/sshd_config.org; echo "LoginGraceTime 0" >>/etc/ssh/sshd_config; }

 

export hn=`hostname`

export oth=ZFFR4CB2101

export p_pwd=`pwd`

su - grid -c "$p_pwd/sshUserSetup.sh -user grid -hosts $oth -noPromptPassphrase"

su - grid -c "ssh $hn hostname"

su - grid -c "ssh $oth hostname"

 

su - oracle -c "$p_pwd/sshUserSetup.sh -user oracle -hosts $oth -noPromptPassphrase"

su - oracle -c "ssh $hn hostname"

su - oracle -c "ssh $oth hostname"

 

vi sshUserSetup.sh     (paste in the sshUserSetup.sh script shipped with the grid software under grid/sshsetup/; its content is omitted here)

 

 

vi testssh.sh

export hn=`hostname`

export oth=ZFFR4CB2101

su - grid -c "ssh $hn pwd"

su - grid -c "ssh $oth pwd"

su - oracle -c "ssh $hn pwd"

su - oracle -c "ssh $oth pwd"

 

chmod 777 *.sh

sh cfgssh.sh

 

 

2.10.2  Manual configuration

Configure SSH equivalence for the grid and oracle users separately.

----------------------------------------------------------------------------------

[root@node1 : /]# su - oracle

[oracle@node1 ~]$ mkdir ~/.ssh

[oracle@node1 ~]$ chmod 700 ~/.ssh

[oracle@node1 ~]$ ssh-keygen -t rsa   -> press Enter at each prompt (three times)

[oracle@node1 ~]$ ssh-keygen -t dsa   -> press Enter at each prompt (three times)

 

-----------------------------------------------------------------------------------

[root@node2 : /]# su - oracle

[oracle@node2 ~]$ mkdir ~/.ssh

[oracle@node2 ~]$ chmod 700 ~/.ssh

[oracle@node2 ~]$ ssh-keygen -t rsa   -> press Enter at each prompt (three times)

[oracle@node2 ~]$ ssh-keygen -t dsa   -> press Enter at each prompt (three times)

 

-----------------------------------------------------------------------------------

 

[oracle@node1 ~]$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

[oracle@node1 ~]$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys

[oracle@node1 ~]$ ssh ZFFR4CB2101 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys  -> enter the node2 password

[oracle@node1 ~]$ ssh ZFFR4CB2101 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys  -> enter the node2 password

[oracle@node1 ~]$ scp ~/.ssh/authorized_keys ZFFR4CB2101:~/.ssh/authorized_keys    -> enter the node2 password

 

-----------------------------------------------------------------------------------

Test connectivity between the two nodes:

 

[oracle@node1 ~]$ ssh ZFFR4CB1101 date

[oracle@node1 ~]$ ssh ZFFR4CB2101 date

[oracle@node1 ~]$ ssh ZFFR4CB1101-priv date

[oracle@node1 ~]$ ssh ZFFR4CB2101-priv date

 

[oracle@node2 ~]$ ssh ZFFR4CB1101 date

[oracle@node2 ~]$ ssh ZFFR4CB2101 date

[oracle@node2 ~]$ ssh ZFFR4CB1101-priv date

[oracle@node2 ~]$ ssh ZFFR4CB2101-priv date

 

 

 

Chapter 3  Grid Installation

3.1  Preparing the Installation Source

Upload the installation files to the /softtmp directory:

 

 

 

[ZFFR4CB2101:root]/softtmp]> l

total 9644872

drwxr-xr-x    2 root     system          256 Mar 08 16:10 lost+found

-rw-r-----    1 root     system   1766307597 Mar 02 04:05 p10404530_112030_AIX64-5L_1of7.zip

-rw-r-----    1 root     system   1135393912 Mar 02 04:03 p10404530_112030_AIX64-5L_2of7.zip

-rw-r-----    1 root     system   2036455635 Mar 02 04:06 p10404530_112030_AIX64-5L_3of7.zip

[ZFFR4CB2101:root]/softtmp]> unzip p10404530_112030_AIX64-5L_3of7.zip

Archive:  p10404530_112030_AIX64-5L_3of7.zip

   creating: grid/

   creating: grid/stage/

  inflating: grid/stage/shiphomeproperties.xml 

   creating: grid/stage/Components/

   creating: grid/stage/Components/oracle.crs/

   creating: grid/stage/Components/oracle.crs/11.2.0.3.0/

   creating: grid/stage/Components/oracle.crs/11.2.0.3.0/1/

   creating: grid/stage/Components/oracle.crs/11.2.0.3.0/1/DataFiles/

  inflating: grid/stage/Components/oracle.crs/11.2.0.3.0/1/DataFiles/filegroup5.jar 

  inflating: grid/stage/Components/oracle.crs/11.2.0.3.0/1/DataFiles/filegroup4.jar 

  inflating: grid/stage/Components/oracle.crs/11.2.0.3.0/1/DataFiles/filegroup3.jar 

  inflating: grid/stage/Components/oracle.crs/11.2.0.3.0/1/DataFiles/filegroup2.jar 

  inflating: grid/stage/Components/oracle.crs/11.2.0.3.0/1/DataFiles/filegroup1.jar 

   creating: grid/stage/Components/oracle.has.crs/

<<<<........ output omitted here for brevity ........>>>>

  inflating: grid/doc/server.11203/E18951-02.mobi 

  inflating: grid/welcome.html      

   creating: grid/sshsetup/

  inflating: grid/sshsetup/sshUserSetup.sh 

  inflating: grid/readme.html       

[ZFFR4CB2101:root]/softtmp]>

[ZFFR4CB2101:root]/softtmp]> l

total 9644880

drwxr-xr-x    9 root     system         4096 Oct 28 2011  grid

drwxr-xr-x    2 root     system          256 Mar 08 16:10 lost+found

-rw-r-----    1 root     system   1766307597 Mar 02 04:05 p10404530_112030_AIX64-5L_1of7.zip

-rw-r-----    1 root     system   1135393912 Mar 02 04:03 p10404530_112030_AIX64-5L_2of7.zip

-rw-r-----    1 root     system   2036455635 Mar 02 04:06 p10404530_112030_AIX64-5L_3of7.zip

[ZFFR4CB2101:root]/softtmp]> cd grid

[ZFFR4CB2101:root]/softtmp/grid]> l

total 168

drwxr-xr-x    9 root     system         4096 Oct 10 2011  doc

drwxr-xr-x    4 root     system         4096 Oct 21 2011  install

-rwxr-xr-x    1 root     system        28122 Oct 28 2011  readme.html

drwxrwxr-x    2 root     system          256 Oct 21 2011  response

drwxrwxr-x    3 root     system          256 Oct 21 2011  rootpre

-rwxr-xr-x    1 root     system        13369 Sep 22 2010  rootpre.sh

drwxrwxr-x    2 root     system          256 Oct 21 2011  rpm

-rwxr-xr-x    1 root     system        10006 Oct 21 2011  runInstaller

-rwxrwxr-x    1 root     system         4878 May 14 2011  runcluvfy.sh

drwxrwxr-x    2 root     system          256 Oct 21 2011  sshsetup

drwxr-xr-x   14 root     system         4096 Oct 21 2011  stage

-rw-r--r--    1 root     system         4561 Oct 10 2011  welcome.html

 

3.2  Pre-installation Checks with runcluvfy.sh

[grid@ZFFR4CB2101:/softtmp/grid]$ /softtmp/grid/runcluvfy.sh stage -pre crsinst -n  ZFFR4CB2101,ZFFR4CB1101 -verbose -fixup

 

Performing pre-checks for cluster services setup

 

Checking node reachability...

 

Check: Node reachability from node "ZFFR4CB2101"

  Destination Node                      Reachable?             

  ------------------------------------  ------------------------

  ZFFR4CB2101                           yes                    

  ZFFR4CB1101                           yes                    

Result: Node reachability check passed from node "ZFFR4CB2101"

 

 

Checking user equivalence...

 

Check: User equivalence for user "grid"

  Node Name                             Status                 

  ------------------------------------  ------------------------

  ZFFR4CB2101                           passed                 

  ZFFR4CB1101                           passed                 

Result: User equivalence check passed for user "grid"

 

Checking node connectivity...

 

Checking hosts config file...

  Node Name                             Status                 

  ------------------------------------  ------------------------

  ZFFR4CB2101                           passed                 

  ZFFR4CB1101                           passed                 

 

Verification of the hosts config file successful

 

 

Interface information for node "ZFFR4CB2101"

Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU  

------ --------------- --------------- --------------- --------------- ----------------- ------

en0    22.188.187.158  22.188.187.0    22.188.187.158  22.188.187.1    C6:03:AE:03:97:83 1500 

en1    222.188.187.158 222.188.187.0   222.188.187.158 22.188.187.1    C6:03:A7:3E:FE:01 1500 

 

 

Interface information for node "ZFFR4CB1101"

Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU  

------ --------------- --------------- --------------- --------------- ----------------- ------

en0    22.188.187.148  22.188.187.0    22.188.187.148  UNKNOWN         FE:B6:72:EF:12:83 1500 

en1    222.188.187.148 222.188.187.0   222.188.187.148 UNKNOWN         FE:B6:7D:9F:6C:01 1500 

 

 

Check: Node connectivity of subnet "22.188.187.0"

  Source                          Destination                     Connected?     

  ------------------------------  ------------------------------  ----------------

  ZFFR4CB2101[22.188.187.158]     ZFFR4CB1101[22.188.187.148]     yes            

Result: Node connectivity passed for subnet "22.188.187.0" with node(s) ZFFR4CB2101,ZFFR4CB1101

 

 

Check: TCP connectivity of subnet "22.188.187.0"

  Source                          Destination                     Connected?     

  ------------------------------  ------------------------------  ----------------

  ZFFR4CB2101:22.188.187.158      ZFFR4CB1101:22.188.187.148      passed         

Result: TCP connectivity check passed for subnet "22.188.187.0"

 

 

Check: Node connectivity of subnet "222.188.187.0"

  Source                          Destination                     Connected?     

  ------------------------------  ------------------------------  ----------------

  ZFFR4CB2101[222.188.187.158]    ZFFR4CB1101[222.188.187.148]    yes            

Result: Node connectivity passed for subnet "222.188.187.0" with node(s) ZFFR4CB2101,ZFFR4CB1101

 

 

Check: TCP connectivity of subnet "222.188.187.0"

  Source                          Destination                     Connected?     

  ------------------------------  ------------------------------  ----------------

  ZFFR4CB2101:222.188.187.158     ZFFR4CB1101:222.188.187.148     passed         

Result: TCP connectivity check passed for subnet "222.188.187.0"

 

 

Interfaces found on subnet "22.188.187.0" that are likely candidates for VIP are:

ZFFR4CB2101 en0:22.188.187.158

ZFFR4CB1101 en0:22.188.187.148

 

Interfaces found on subnet "222.188.187.0" that are likely candidates for VIP are:

ZFFR4CB2101 en1:222.188.187.158

ZFFR4CB1101 en1:222.188.187.148

 

WARNING:

Could not find a suitable set of interfaces for the private interconnect

Checking subnet mask consistency...

Subnet mask consistency check passed for subnet "22.188.187.0".

Subnet mask consistency check passed for subnet "222.188.187.0".

Subnet mask consistency check passed.

 

Result: Node connectivity check passed

 

Checking multicast communication...

 

Checking subnet "22.188.187.0" for multicast communication with multicast group "230.0.1.0"...

Check of subnet "22.188.187.0" for multicast communication with multicast group "230.0.1.0" passed.

 

Checking subnet "222.188.187.0" for multicast communication with multicast group "230.0.1.0"...

Check of subnet "222.188.187.0" for multicast communication with multicast group "230.0.1.0" passed.

 

Check of multicast communication passed.

 

Check: Total memory

  Node Name     Available                 Required                  Status   

  ------------  ------------------------  ------------------------  ----------

  ZFFR4CB2101   4GB (4194304.0KB)         2GB (2097152.0KB)         passed   

  ZFFR4CB1101   48GB (5.0331648E7KB)      2GB (2097152.0KB)         passed   

Result: Total memory check passed

 

Check: Available memory

  Node Name     Available                 Required                  Status   

  ------------  ------------------------  ------------------------  ----------

  ZFFR4CB2101   2.3528GB (2467056.0KB)    50MB (51200.0KB)          passed   

  ZFFR4CB1101   43.8485GB (4.5978476E7KB)  50MB (51200.0KB)          passed   

Result: Available memory check passed

 

Check: Swap space

  Node Name     Available                 Required                  Status   

  ------------  ------------------------  ------------------------  ----------

  ZFFR4CB2101   8GB (8388608.0KB)         4GB (4194304.0KB)         passed   

  ZFFR4CB1101   8GB (8388608.0KB)         16GB (1.6777216E7KB)      failed   

Result: Swap space check failed

 

Check: Free disk space for "ZFFR4CB2101:/tmp"

  Path              Node Name     Mount point   Available     Required      Status     

  ----------------  ------------  ------------  ------------  ------------  ------------

  /tmp              ZFFR4CB2101   /tmp          3.5657GB      1GB           passed     

Result: Free disk space check passed for "ZFFR4CB2101:/tmp"

 

Check: Free disk space for "ZFFR4CB1101:/tmp"

  Path              Node Name     Mount point   Available     Required      Status     

  ----------------  ------------  ------------  ------------  ------------  ------------

  /tmp              ZFFR4CB1101   /tmp          18.4434GB     1GB           passed     

Result: Free disk space check passed for "ZFFR4CB1101:/tmp"

 

Check: User existence for "grid"

  Node Name     Status                    Comment                

  ------------  ------------------------  ------------------------

  ZFFR4CB2101   passed                    exists(1025)           

  ZFFR4CB1101   passed                    exists(1025)           

 

Checking for multiple users with UID value 1025

Result: Check for multiple users with UID value 1025 passed

Result: User existence check passed for "grid"

 

Check: Group existence for "oinstall"

  Node Name     Status                    Comment                

  ------------  ------------------------  ------------------------

  ZFFR4CB2101   passed                    exists                 

  ZFFR4CB1101   passed                    exists                 

Result: Group existence check passed for "oinstall"

 

Check: Group existence for "dba"

  Node Name     Status                    Comment                

  ------------  ------------------------  ------------------------

  ZFFR4CB2101   passed                    exists                 

  ZFFR4CB1101   passed                    exists                 

Result: Group existence check passed for "dba"

 

Check: Membership of user "grid" in group "oinstall" [as Primary]

  Node Name         User Exists   Group Exists  User in Group  Primary       Status     

  ----------------  ------------  ------------  ------------  ------------  ------------

  ZFFR4CB2101       yes           yes           yes           yes           passed     

  ZFFR4CB1101       yes           yes           yes           yes           passed     

Result: Membership check for user "grid" in group "oinstall" [as Primary] passed

 

Check: Membership of user "grid" in group "dba"

  Node Name         User Exists   Group Exists  User in Group  Status         

  ----------------  ------------  ------------  ------------  ----------------

  ZFFR4CB2101       yes           yes           yes           passed         

  ZFFR4CB1101       yes           yes           yes           passed         

Result: Membership check for user "grid" in group "dba" passed

 

Check: Run level

  Node Name     run level                 Required                  Status   

  ------------  ------------------------  ------------------------  ----------

  ZFFR4CB2101   2                         2                         passed   

  ZFFR4CB1101   2                         2                         passed   

Result: Run level check passed

 

Check: Hard limits for "maximum open file descriptors"

  Node Name         Type          Available     Required      Status         

  ----------------  ------------  ------------  ------------  ----------------

  ZFFR4CB2101       hard          9223372036854776000  65536         passed         

  ZFFR4CB1101       hard          9223372036854776000  65536         passed         

Result: Hard limits check passed for "maximum open file descriptors"

 

Check: Soft limits for "maximum open file descriptors"

  Node Name         Type          Available     Required      Status         

  ----------------  ------------  ------------  ------------  ----------------

  ZFFR4CB2101       soft          9223372036854776000  1024          passed         

  ZFFR4CB1101       soft          9223372036854776000  1024          passed         

Result: Soft limits check passed for "maximum open file descriptors"

 

Check: Hard limits for "maximum user processes"

  Node Name         Type          Available     Required      Status         

  ----------------  ------------  ------------  ------------  ----------------

  ZFFR4CB2101       hard          16384         16384         passed         

  ZFFR4CB1101       hard          16384         16384         passed         

Result: Hard limits check passed for "maximum user processes"

 

Check: Soft limits for "maximum user processes"

  Node Name         Type          Available     Required      Status         

  ----------------  ------------  ------------  ------------  ----------------

  ZFFR4CB2101       soft          16384         2047          passed         

  ZFFR4CB1101       soft          16384         2047          passed         

Result: Soft limits check passed for "maximum user processes"

 

Check: System architecture

  Node Name     Available                 Required                  Status   

  ------------  ------------------------  ------------------------  ----------

  ZFFR4CB2101   powerpc                   powerpc                   passed   

  ZFFR4CB1101   powerpc                   powerpc                   passed   

Result: System architecture check passed

 

Check: Kernel version

  Node Name     Available                 Required                  Status   

  ------------  ------------------------  ------------------------  ----------

  ZFFR4CB2101   7.1-7100.03.03.1415       7.1-7100.00.01.1037       passed   

  ZFFR4CB1101   7.1-7100.02.05.1415       7.1-7100.00.01.1037       passed   

 

WARNING:

PRVF-7524 : Kernel version is not consistent across all the nodes.

Kernel version = "7.1-7100.02.05.1415" found on nodes: ZFFR4CB1101.

Kernel version = "7.1-7100.03.03.1415" found on nodes: ZFFR4CB2101.

Result: Kernel version check passed

 

Check: Kernel parameter for "ncargs"

  Node Name     Current                   Required                  Status   

  ------------  ------------------------  ------------------------  ----------

  ZFFR4CB2101   256                       128                       passed   

  ZFFR4CB1101   256                       128                       passed   

Result: Kernel parameter check passed for "ncargs"

 

Check: Kernel parameter for "maxuproc"

  Node Name     Current                   Required                  Status   

  ------------  ------------------------  ------------------------  ----------

  ZFFR4CB2101   16384                     2048                      passed   

  ZFFR4CB1101   16384                     2048                      passed   

Result: Kernel parameter check passed for "maxuproc"

 

Check: Kernel parameter for "tcp_ephemeral_low"

  Node Name     Current                   Required                  Status   

  ------------  ------------------------  ------------------------  ----------

  ZFFR4CB2101   32768                     9000                      failed (ignorable)

  ZFFR4CB1101   32768                     9000                      failed (ignorable)

Result: Kernel parameter check passed for "tcp_ephemeral_low"

 

Check: Kernel parameter for "tcp_ephemeral_high"

  Node Name     Current                   Required                  Status   

  ------------  ------------------------  ------------------------  ----------

  ZFFR4CB2101   65535                     65500                     failed (ignorable)

  ZFFR4CB1101   65535                     65500                     failed (ignorable)

Result: Kernel parameter check passed for "tcp_ephemeral_high"

 

Check: Kernel parameter for "udp_ephemeral_low"

  Node Name     Current                   Required                  Status   

  ------------  ------------------------  ------------------------  ----------

  ZFFR4CB2101   32768                     9000                      failed (ignorable)

  ZFFR4CB1101   32768                     9000                      failed (ignorable)

Result: Kernel parameter check passed for "udp_ephemeral_low"

 

Check: Kernel parameter for "udp_ephemeral_high"

  Node Name     Current                   Required                  Status   

  ------------  ------------------------  ------------------------  ----------

  ZFFR4CB2101   65535                     65500                     failed (ignorable)

  ZFFR4CB1101   65535                     65500                     failed (ignorable)

Result: Kernel parameter check passed for "udp_ephemeral_high"

 

Check: Package existence for "bos.adt.base"

  Node Name     Available                 Required                  Status   

  ------------  ------------------------  ------------------------  ----------

  ZFFR4CB2101   bos.adt.base-7.1.3.15-0   bos.adt.base-...          passed   

  ZFFR4CB1101   bos.adt.base-7.1.3.15-0   bos.adt.base-...          passed   

Result: Package existence check passed for "bos.adt.base"

 

Check: Package existence for "bos.adt.lib"

  Node Name     Available                 Required                  Status   

  ------------  ------------------------  ------------------------  ----------

  ZFFR4CB2101   bos.adt.lib-7.1.2.15-0    bos.adt.lib-...           passed   

  ZFFR4CB1101   bos.adt.lib-7.1.2.15-0    bos.adt.lib-...           passed   

Result: Package existence check passed for "bos.adt.lib"

 

Check: Package existence for "bos.adt.libm"

  Node Name     Available                 Required                  Status   

  ------------  ------------------------  ------------------------  ----------

  ZFFR4CB2101   bos.adt.libm-7.1.3.0-0    bos.adt.libm-...          passed   

  ZFFR4CB1101   bos.adt.libm-7.1.3.0-0    bos.adt.libm-...          passed   

Result: Package existence check passed for "bos.adt.libm"

 

Check: Package existence for "bos.perf.libperfstat"

  Node Name     Available                 Required                  Status   

  ------------  ------------------------  ------------------------  ----------

  ZFFR4CB2101   bos.perf.libperfstat-7.1.3.15-0  bos.perf.libperfstat-...  passed   

  ZFFR4CB1101   bos.perf.libperfstat-7.1.3.15-0  bos.perf.libperfstat-...  passed   

Result: Package existence check passed for "bos.perf.libperfstat"

 

Check: Package existence for "bos.perf.perfstat"

  Node Name     Available                 Required                  Status   

  ------------  ------------------------  ------------------------  ----------

  ZFFR4CB2101   bos.perf.perfstat-7.1.3.15-0  bos.perf.perfstat-...     passed   

  ZFFR4CB1101   bos.perf.perfstat-7.1.3.15-0  bos.perf.perfstat-...     passed   

Result: Package existence check passed for "bos.perf.perfstat"

 

Check: Package existence for "bos.perf.proctools"

  Node Name     Available                 Required                  Status   

  ------------  ------------------------  ------------------------  ----------

  ZFFR4CB2101   bos.perf.proctools-7.1.3.15-0  bos.perf.proctools-...    passed   

  ZFFR4CB1101   bos.perf.proctools-7.1.3.15-0  bos.perf.proctools-...    passed   

Result: Package existence check passed for "bos.perf.proctools"

 

Check: Package existence for "xlC.aix61.rte"

  Node Name     Available                 Required                  Status   

  ------------  ------------------------  ------------------------  ----------

  ZFFR4CB2101   xlC.aix61.rte-12.1.0.1-0  xlC.aix61.rte-10.1.0.0    passed   

  ZFFR4CB1101   xlC.aix61.rte-12.1.0.1-0  xlC.aix61.rte-10.1.0.0    passed   

Result: Package existence check passed for "xlC.aix61.rte"

 

Check: Package existence for "xlC.rte"

  Node Name     Available                 Required                  Status   

  ------------  ------------------------  ------------------------  ----------

  ZFFR4CB2101   xlC.rte-12.1.0.1-0        xlC.rte-10.1.0.0          passed   

  ZFFR4CB1101   xlC.rte-12.1.0.1-0        xlC.rte-10.1.0.0          passed   

Result: Package existence check passed for "xlC.rte"

 

Check: Operating system patch for "Patch IZ87216"

  Node Name     Applied                   Required                  Comment  

  ------------  ------------------------  ------------------------  ----------

  ZFFR4CB2101   Patch IZ87216:devices.common.IBM.mpio.rte  Patch IZ87216             passed   

  ZFFR4CB1101   Patch IZ87216:devices.common.IBM.mpio.rte  Patch IZ87216             passed   

Result: Operating system patch check passed for "Patch IZ87216"

 

Check: Operating system patch for "Patch IZ87564"

  Node Name     Applied                   Required                  Comment  

  ------------  ------------------------  ------------------------  ----------

  ZFFR4CB2101   Patch IZ87564:bos.adt.libmIZ87564:bos.adt.prof  Patch IZ87564             passed   

  ZFFR4CB1101   Patch IZ87564:bos.adt.libmIZ87564:bos.adt.prof  Patch IZ87564             passed   

Result: Operating system patch check passed for "Patch IZ87564"

 

Check: Operating system patch for "Patch IZ89165"

  Node Name     Applied                   Required                  Comment  

  ------------  ------------------------  ------------------------  ----------

  ZFFR4CB2101   Patch IZ89165:bos.rte.bind_cmds  Patch IZ89165             passed   

  ZFFR4CB1101   Patch IZ89165:bos.rte.bind_cmds  Patch IZ89165             passed   

Result: Operating system patch check passed for "Patch IZ89165"

 

Check: Operating system patch for "Patch IZ97035"

  Node Name     Applied                   Required                  Comment  

  ------------  ------------------------  ------------------------  ----------

  ZFFR4CB2101   Patch IZ97035:devices.vdevice.IBM.l-lan.rte  Patch IZ97035             passed   

  ZFFR4CB1101   Patch IZ97035:devices.vdevice.IBM.l-lan.rte  Patch IZ97035             passed   

Result: Operating system patch check passed for "Patch IZ97035"

 

Checking for multiple users with UID value 0

Result: Check for multiple users with UID value 0 passed

 

Check: Current group ID

Result: Current group ID check passed

 

Starting check for consistency of primary group of root user

  Node Name                             Status                 

  ------------------------------------  ------------------------

  ZFFR4CB2101                           passed                 

  ZFFR4CB1101                           passed                 

 

Check for consistency of root user's primary group passed

 

Starting Clock synchronization checks using Network Time Protocol(NTP)...

 

NTP Configuration file check started...

The NTP configuration file "/etc/ntp.conf" is available on all nodes

NTP Configuration file check passed

 

Checking daemon liveness...

 

Check: Liveness for "xntpd"

  Node Name                             Running?               

  ------------------------------------  ------------------------

  ZFFR4CB2101                           yes                    

  ZFFR4CB1101                           yes                    

Result: Liveness check passed for "xntpd"

Check for NTP daemon or service alive passed on all nodes

 

Checking NTP daemon command line for slewing option "-x"

Check: NTP daemon command line

  Node Name                             Slewing Option Set?    

  ------------------------------------  ------------------------

  ZFFR4CB2101                           yes                    

  ZFFR4CB1101                           no                     

Result:

NTP daemon slewing option check failed on some nodes

PRVF-5436 : The NTP daemon running on one or more nodes lacks the slewing option "-x"

Result: Clock synchronization check using Network Time Protocol(NTP) failed

 

Checking Core file name pattern consistency...

Core file name pattern consistency check passed.

 

Checking to make sure user "grid" is not in "system" group

  Node Name     Status                    Comment                

  ------------  ------------------------  ------------------------

  ZFFR4CB2101   passed                    does not exist         

  ZFFR4CB1101   passed                    does not exist         

Result: User "grid" is not part of "system" group. Check passed

 

Check default user file creation mask

  Node Name     Available                 Required                  Comment  

  ------------  ------------------------  ------------------------  ----------

  ZFFR4CB2101   022                       0022                      passed   

  ZFFR4CB1101   022                       0022                      passed   

Result: Default user file creation mask check passed

Checking consistency of file "/etc/resolv.conf" across nodes

 

File "/etc/resolv.conf" does not exist on any node of the cluster. Skipping further checks

 

File "/etc/resolv.conf" is consistent across nodes

 

Check: Time zone consistency

Result: Time zone consistency check passed

Result: User ID < 65535 check passed

 

Result: Kernel 64-bit mode check passed

 

[grid@ZFFR4CB2101:/softtmp/grid]$
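The only item above that genuinely needs attention is PRVF-5436: xntpd on ZFFR4CB1101 is not started with the slewing option "-x". A minimal fix sketch on AIX (run as root on the affected node; also add "-x" to the xntpd line in /etc/rc.tcpip by hand so it survives a reboot), plus the ephemeral port ranges that were flagged as "failed (ignorable)", could look like this:

# Restart xntpd with the slewing option on ZFFR4CB1101
stopsrc -s xntpd
startsrc -s xntpd -a "-x"
# Optional: bring the ephemeral port ranges in line with the values cluvfy asks for
/usr/sbin/no -p -o tcp_ephemeral_low=9000 -o tcp_ephemeral_high=65500
/usr/sbin/no -p -o udp_ephemeral_low=9000 -o udp_ephemeral_high=65500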

 

三.2.1  Silent installation of the Grid software

First, run the following as root:

/softtmp/grid/rootpre.sh

 

[ZFFR4CB2101:root]/]> /softtmp/grid/rootpre.sh

/softtmp/grid/rootpre.sh output will be logged in /tmp/rootpre.out_16-03-09.09:47:33

 

Checking if group services should be configured....

Nothing to configure.

[ZFFR4CB2101:root]/]>
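The installer will later ask whether rootpre.sh has been run by root on all nodes, so the same script also has to be executed on the second node before answering 'y'. A sketch, assuming the staging area is reachable from ZFFR4CB1101 as well (otherwise copy rootpre.sh over first):

# On ZFFR4CB1101, as root:
/softtmp/grid/rootpre.sh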

 

./runInstaller -silent  -force -noconfig -IgnoreSysPreReqs -ignorePrereq  -showProgress \

INVENTORY_LOCATION=/u01/app/oraInventory \

SELECTED_LANGUAGES=en \

ORACLE_BASE=/u01/app/grid \

ORACLE_HOME=/u01/app/11.2.0/grid \

oracle.install.asm.OSDBA=asmdba \

oracle.install.asm.OSOPER=asmoper \

oracle.install.asm.OSASM=asmadmin \

oracle.install.crs.config.storageOption=ASM_STORAGE \

oracle.install.crs.config.sharedFileSystemStorage.votingDiskRedundancy=EXTERNAL \

oracle.install.crs.config.sharedFileSystemStorage.ocrRedundancy=EXTERNAL \

oracle.install.crs.config.useIPMI=false \

oracle.install.asm.diskGroup.name=OCR \

oracle.install.asm.diskGroup.redundancy=EXTERNAL \

oracle.installer.autoupdates.option=SKIP_UPDATES \

oracle.install.crs.config.gpnp.scanPort=1521 \

oracle.install.crs.config.gpnp.configureGNS=false \

oracle.install.option=CRS_CONFIG \

oracle.install.asm.SYSASMPassword=lhr \

oracle.install.asm.monitorPassword=lhr \

oracle.install.asm.diskGroup.diskDiscoveryString=/dev/rhdisk* \

oracle.install.asm.diskGroup.disks=/dev/rhdisk10 \

oracle.install.crs.config.gpnp.scanName=ZFFR4CB2101-scan \

oracle.install.crs.config.clusterName=ZFFR4CB-cluster \

oracle.install.crs.config.autoConfigureClusterNodeVIP=false \

oracle.install.crs.config.clusterNodes=ZFFR4CB2101:ZFFR4CB2101-vip,ZFFR4CB1101:ZFFR4CB1101-vip \

oracle.install.crs.config.networkInterfaceList=en0:22.188.187.0:1,en1:222.188.187.0:2 \

ORACLE_HOSTNAME=ZFFR4CB2101
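As an alternative to passing every variable on the command line, the same name=value pairs can be kept in a response file and referenced with -responseFile. A minimal sketch (the file name and path are assumptions; the variables are exactly the ones listed above):

# /home/grid/grid_install.rsp would contain the same name=value pairs, e.g.:
#   oracle.install.option=CRS_CONFIG
#   ORACLE_BASE=/u01/app/grid
#   ORACLE_HOME=/u01/app/11.2.0/grid
#   INVENTORY_LOCATION=/u01/app/oraInventory
#   ... (remaining oracle.install.* variables as listed above)
./runInstaller -silent -force -noconfig -ignorePrereq -showProgress \
  -responseFile /home/grid/grid_install.rsp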

 

Run the silent install from the command line. When copying the script, make sure no extra carriage return is appended at the end, and do not run anything else in the current window. The install is a bit slow to start. The places that need to be changed for your environment are marked with a yellow background:

[grid@ZFFR4CB2101:/softtmp/grid]$ ./runInstaller -silent  -force -noconfig -IgnoreSysPreReqs -ignorePrereq  -showProgress \

> INVENTORY_LOCATION=/u01/app/oraInventory \

> SELECTED_LANGUAGES=en \

> ORACLE_BASE=/u01/app/grid \

> ORACLE_HOME=/u01/app/11.2.0/grid \

> oracle.install.asm.OSDBA=asmdba \

> oracle.install.asm.OSOPER=asmoper \

> oracle.install.asm.OSASM=asmadmin \

> oracle.install.crs.config.storageOption=ASM_STORAGE \

> oracle.install.crs.config.sharedFileSystemStorage.votingDiskRedundancy=EXTERNAL \

> oracle.install.crs.config.sharedFileSystemStorage.ocrRedundancy=EXTERNAL \

> oracle.install.crs.config.useIPMI=false \

> oracle.install.asm.diskGroup.name=OCR \

> oracle.install.asm.diskGroup.redundancy=EXTERNAL \

> oracle.installer.autoupdates.option=SKIP_UPDATES \

> oracle.install.crs.config.gpnp.scanPort=1521 \

> oracle.install.crs.config.gpnp.configureGNS=false \

> oracle.install.option=CRS_CONFIG \

> oracle.install.asm.SYSASMPassword=lhr \

> oracle.install.asm.monitorPassword=lhr \

> oracle.install.asm.diskGroup.diskDiscoveryString=/dev/rhdisk* \

> oracle.install.asm.diskGroup.disks=/dev/rhdisk10 \

> oracle.install.crs.config.gpnp.scanName=ZFFR4CB2101-scan \

> oracle.install.crs.config.clusterName=ZFFR4CB-cluster \

> oracle.install.crs.config.autoConfigureClusterNodeVIP=false \

> oracle.install.crs.config.clusterNodes=ZFFR4CB2101:ZFFR4CB2101-vip,ZFFR4CB1101:ZFFR4CB1101-vip \

> oracle.install.crs.config.networkInterfaceList=en0:22.188.187.0:1,en1:222.188.187.0:2 \

> ORACLE_HOSTNAME=ZFFR4CB2101

********************************************************************************

 

Your platform requires the root user to perform certain pre-installation

OS preparation.  The root user should run the shell script 'rootpre.sh' before

you proceed with Oracle installation.  rootpre.sh can be found at the top level

of the CD or the stage area.

 

Answer 'y' if root has run 'rootpre.sh' so you can proceed with Oracle

installation.

Answer 'n' to abort installation and then ask root to run 'rootpre.sh'.

 

********************************************************************************

 

Has 'rootpre.sh' been run by root on all nodes? [y/n] (n)

y

 

Starting Oracle Universal Installer...

 

Checking Temp space: must be greater than 190 MB.   Actual 4330 MB    Passed

Checking swap space: must be greater than 150 MB.   Actual 8192 MB    Passed

Preparing to launch Oracle Universal Installer from /tmp/OraInstall2016-03-10_04-54-07PM. Please wait ...[grid@ZFFR4CB2101:/softtmp/grid]$

[grid@ZFFR4CB2101:/softtmp/grid]$

[grid@ZFFR4CB2101:/softtmp/grid]$

[grid@ZFFR4CB2101:/softtmp/grid]$

[grid@ZFFR4CB2101:/softtmp/grid]$ [WARNING] [INS-30011] The SYS password entered does not conform to the Oracle recommended standards.

   CAUSE: Oracle recommends that the password entered should be at least 8 characters in length, contain at least 1 uppercase character, 1 lower case character and 1 digit [0-9].

   ACTION: Provide a password that conforms to the Oracle recommended standards.

[WARNING] [INS-30011] The ASMSNMP password entered does not conform to the Oracle recommended standards.

   CAUSE: Oracle recommends that the password entered should be at least 8 characters in length, contain at least 1 uppercase character, 1 lower case character and 1 digit [0-9].

   ACTION: Provide a password that conforms to the Oracle recommended standards.

You can find the log of this install session at:

/u01/app/oraInventory/logs/installActions2016-03-10_04-54-07PM.log

 

Prepare in progress.

..................................................   5% Done.

 

Prepare successful.

 

Copy files in progress.

..................................................   10% Done.

..................................................   15% Done.

........................................

Copy files successful.

..................................................   27% Done.

 

Link binaries in progress.

 

Link binaries successful.

..................................................   34% Done.

 

Setup files in progress.

 

Setup files successful.

..................................................   41% Done.

 

Perform remote operations in progress.

..................................................   48% Done.

 

Perform remote operations successful.

The installation of Oracle Grid Infrastructure was successful.

Please check '/u01/app/oraInventory/logs/silentInstall2016-03-10_04-54-07PM.log' for more details.

..................................................   97% Done.

 

Execute Root Scripts in progress.

 

As a root user, execute the following script(s):

        1. /u01/app/oraInventory/orainstRoot.sh

        2. /u01/app/11.2.0/grid/root.sh

 

Execute /u01/app/oraInventory/orainstRoot.sh on the following nodes:

[ZFFR4CB2101, ZFFR4CB1101]

Execute /u01/app/11.2.0/grid/root.sh on the following nodes:

[ZFFR4CB2101, ZFFR4CB1101]

 

..................................................   100% Done.

 

Execute Root Scripts successful.

As install user, execute the following script to complete the configuration.

        1. /u01/app/11.2.0/grid/cfgtoollogs/configToolAllCommands

 

        Note:

        1. This script must be run on the same system from where installer was run.

        2. This script needs a small password properties file for configuration assistants that require passwords (refer to install guide documentation).

 

 

Successfully Setup Software.

 

[grid@ZFFR4CB2101:/softtmp/grid]$
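The installer output above notes that configToolAllCommands needs a small password properties file. A minimal sketch of that file and its invocation (the file location is an assumption; the passwords must match the SYSASM/ASMSNMP passwords passed to runInstaller), to be run as the grid user only after root.sh has completed on all nodes:

# /home/grid/cfgrsp.properties would contain:
#   oracle.assistants.asm|S_ASMPASSWORD=lhr
#   oracle.assistants.asm|S_ASMMONITORPASSWORD=lhr
/u01/app/11.2.0/grid/cfgtoollogs/configToolAllCommands RESPONSE_FILE=/home/grid/cfgrsp.properties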

 

 

 

 

On the node where the installer was run, the grid home grows as the copy proceeds:

[ZFFR4CB2101:root]/u01/app]> du -sg /u01/app/11.2.0/grid

6.80    /u01/app/11.2.0/grid

[ZFFR4CB2101:root]/u01/app]> du -sg /u01/app/11.2.0/grid

7.41    /u01/app/11.2.0/grid

[ZFFR4CB2101:root]/u01/app]> du -sg /u01/app/11.2.0/grid

8.03    /u01/app/11.2.0/grid

[ZFFR4CB2101:root]/u01/app]> du -sg /u01/app/11.2.0/grid

8.61    /u01/app/11.2.0/grid

[ZFFR4CB2101:root]/u01/app]> du -sg /u01/app/11.2.0/grid

9.80    /u01/app/11.2.0/grid

[ZFFR4CB2101:root]/u01/app]> du -sg /u01/app/11.2.0/grid

9.80    /u01/app/11.2.0/grid

 

 

When the install reaches "Perform remote operations in progress.", you can check the size of the grid home on the other node to tell whether the copy has stalled:

[ZFFR4CB1101:root]/u01/app/11.2.0/grid/bin]> du -sg .

1.78    .

[ZFFR4CB1101:root]/u01/app/11.2.0/grid/bin]> cd

[ZFFR4CB1101:root]/]> du -sg /u01/app/11.2.0/grid

2.90    /u01/app/11.2.0/grid

[ZFFR4CB1101:root]/]> du -sg /u01/app/11.2.0/grid

3.41    /u01/app/11.2.0/grid

[ZFFR4CB1101:root]/]> du -sg /u01/app/11.2.0/grid

7.25    /u01/app/11.2.0/grid

[ZFFR4CB1101:root]/]> du -sg /u01/app/11.2.0/grid

8.76    /u01/app/11.2.0/grid

[ZFFR4CB1101:root]/]> du -sg /u01/app/11.2.0/grid

9.81    /u01/app/11.2.0/grid

[ZFFR4CB1101:root]/]>
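Instead of re-running du by hand, a small loop can keep printing the size of the grid home (a trivial sketch):

# Print the size of the grid home every 60 seconds until interrupted
while true; do du -sg /u01/app/11.2.0/grid; sleep 60; done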

 

 

三.2.1.1  Run root.sh

As a root user, execute the following script(s):

        1. /u01/app/oraInventory/orainstRoot.sh

        2. /u01/app/11.2.0/grid/root.sh

 

 

[ZFFR4CB2101:root]/]> /u01/app/oraInventory/orainstRoot.sh

Changing permissions of /u01/app/oraInventory.

Adding read,write permissions for group.

Removing read,write,execute permissions for world.

 

Changing groupname of /u01/app/oraInventory to oinstall.

The execution of the script is complete.

[ZFFR4CB2101:root]/]> /u01/app/11.2.0/grid/root.sh

Check /u01/app/11.2.0/grid/install/root_ZFFR4CB2101_2016-03-10_17-08-45.log for the output of root script

 

After pressing Enter the script just sits there; it is only finished when the prompt returns by itself. Open a separate window to follow the log:

[ZFFR4CB2101:root]/softtmp]>  tail -2000f /u01/app/11.2.0/grid/install/root_ZFFR4CB2101_2016-03-10_17-08-45.log

 

Performing root user operation for Oracle 11g

 

The following environment variables are set as:

    ORACLE_OWNER= grid

    ORACLE_HOME=  /u01/app/11.2.0/grid

 

Creating /etc/oratab file...

Entries will be added to the /etc/oratab file as needed by

Database Configuration Assistant when a database is created

Finished running generic part of root script.

Now product-specific root actions will be performed.

Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params

Creating trace directory

User ignored Prerequisites during installation

User grid has the required capabilities to run CSSD in realtime mode

OLR initialization - successful

  root wallet

  root wallet cert

  root cert export

  peer wallet

  profile reader wallet

  pa wallet

  peer wallet keys

  pa wallet keys

  peer cert request

  pa cert request

  peer cert

  pa cert

  peer root cert TP

  profile reader root cert TP

  pa root cert TP

  peer pa cert TP

  pa peer cert TP

  profile reader pa cert TP

  profile reader peer cert TP

  peer user cert

  pa user cert

Adding Clusterware entries to inittab

CRS-2672: Attempting to start 'ora.mdnsd' on 'zffr4cb2101'

CRS-2676: Start of 'ora.mdnsd' on 'zffr4cb2101' succeeded

CRS-2672: Attempting to start 'ora.gpnpd' on 'zffr4cb2101'

CRS-2676: Start of 'ora.gpnpd' on 'zffr4cb2101' succeeded

CRS-2672: Attempting to start 'ora.cssdmonitor' on 'zffr4cb2101'

CRS-2672: Attempting to start 'ora.gipcd' on 'zffr4cb2101'

CRS-2676: Start of 'ora.gipcd' on 'zffr4cb2101' succeeded

CRS-2676: Start of 'ora.cssdmonitor' on 'zffr4cb2101' succeeded

CRS-2672: Attempting to start 'ora.cssd' on 'zffr4cb2101'

CRS-2672: Attempting to start 'ora.diskmon' on 'zffr4cb2101'

CRS-2676: Start of 'ora.diskmon' on 'zffr4cb2101' succeeded

CRS-2676: Start of 'ora.cssd' on 'zffr4cb2101' succeeded

 

ASM created and started successfully.

 

Disk Group OCR created successfully.

 

clscfg: -install mode specified

Successfully accumulated necessary OCR keys.

Creating OCR keys for user 'root', privgrp 'system'..

Operation successful.

CRS-4256: Updating the profile

Successful addition of voting disk 04bd1fe1816f4f55bfc976416720128d.

Successfully replaced voting disk group with +OCR.

CRS-4256: Updating the profile

CRS-4266: Voting file(s) successfully replaced

##  STATE    File Universal Id                File Name Disk group

--  -----    -----------------                --------- ---------

1. ONLINE   04bd1fe1816f4f55bfc976416720128d (/dev/rhdisk10) [OCR]

Located 1 voting disk(s).

 

CRS-2672: Attempting to start 'ora.asm' on 'zffr4cb2101'

CRS-2676: Start of 'ora.asm' on 'zffr4cb2101' succeeded

CRS-2672: Attempting to start 'ora.OCR.dg' on 'zffr4cb2101'

CRS-2676: Start of 'ora.OCR.dg' on 'zffr4cb2101' succeeded

CRS-2672: Attempting to start 'ora.registry.acfs' on 'zffr4cb2101'

CRS-2676: Start of 'ora.registry.acfs' on 'zffr4cb2101' succeeded

Configure Oracle Grid Infrastructure for a Cluster ... succeeded

 

 

 

[ZFFR4CB2101:root]/]> ps -ef|grep d.bin

    root  6815752        1   0 17:16:23      -  0:01 /u01/app/11.2.0/grid/bin/orarootagent.bin

    root  6881442        1   2 17:15:26      -  0:04 /u01/app/11.2.0/grid/bin/crsd.bin reboot

    root  7209048        1   2 17:15:04      -  0:06 /u01/app/11.2.0/grid/bin/osysmond.bin

    root  8061058  6488154   0 17:19:26  pts/1  0:00 grep d.bin

    grid  8126564        1   0 17:16:29      -  0:00 /u01/app/11.2.0/grid/bin/tnslsnr LISTENER_SCAN1 -inherit

    grid  8192252 13631536   0 17:15:29      -  0:00 /u01/app/11.2.0/grid/bin/evmlogger.bin -o /u01/app/11.2.0/grid/evm/log/evmlogger.info -l /u01/app/11.2.0/grid/evm/log/evmlogger.log

    root 10420390        1   0 17:14:13      -  0:00 /u01/app/11.2.0/grid/bin/cssdmonitor

    root 10551502        1   0 17:14:14      -  0:00 /u01/app/11.2.0/grid/bin/cssdagent

    grid 11731188        1   0 17:16:31      -  0:00 /u01/app/11.2.0/grid/bin/scriptagent.bin

    grid 12845094        1   0 17:14:09      -  0:01 /u01/app/11.2.0/grid/bin/oraagent.bin

    root 12976196        1   0 17:14:14      -  0:00 /bin/sh /u01/app/11.2.0/grid/bin/ocssd

    grid 13631536        1   0 17:15:27      -  0:02 /u01/app/11.2.0/grid/bin/evmd.bin

    grid 14221350        1   0 17:14:09      -  0:00 /u01/app/11.2.0/grid/bin/mdnsd.bin

    grid 15007882        1   1 17:14:13      -  0:02 /u01/app/11.2.0/grid/bin/gipcd.bin

    grid 15859816        1   0 17:16:11      -  0:00 /u01/app/11.2.0/grid/bin/oraagent.bin

    root 16056384        1   0 17:15:02      -  0:02 /u01/app/11.2.0/grid/bin/octssd.bin

    grid 16122020 12976196   1 17:14:14      -  0:04 /u01/app/11.2.0/grid/bin/ocssd.bin

    root 16515114        1   3 17:11:26      -  0:07 /u01/app/11.2.0/grid/bin/ohasd.bin reboot

    root 16711732        1   1 17:12:38      -  0:01 /u01/app/11.2.0/grid/bin/orarootagent.bin

    grid 16777306        1   0 17:14:11      -  0:00 /u01/app/11.2.0/grid/bin/gpnpd.bin

[ZFFR4CB2101:root]/]> crs_stat -t

Name           Type           Target    State     Host       

------------------------------------------------------------

ora....N1.lsnr ora....er.type ONLINE    ONLINE    zffr4cb2101

ora.OCR.dg     ora....up.type ONLINE    ONLINE    zffr4cb2101

ora.asm        ora.asm.type   ONLINE    ONLINE    zffr4cb2101

ora.cvu        ora.cvu.type   ONLINE    ONLINE    zffr4cb2101

ora.gsd        ora.gsd.type   OFFLINE   OFFLINE              

ora....network ora....rk.type ONLINE    ONLINE    zffr4cb2101

ora.oc4j       ora.oc4j.type  ONLINE    ONLINE    zffr4cb2101

ora.ons        ora.ons.type   ONLINE    ONLINE    zffr4cb2101

ora....ry.acfs ora....fs.type ONLINE    ONLINE    zffr4cb2101

ora.scan1.vip  ora....ip.type ONLINE    ONLINE    zffr4cb2101

ora....SM1.asm application    ONLINE    ONLINE    zffr4cb2101

ora....101.gsd application    OFFLINE   OFFLINE              

ora....101.ons application    ONLINE    ONLINE    zffr4cb2101

ora....101.vip ora....t1.type ONLINE    ONLINE    zffr4cb2101

[ZFFR4CB2101:root]/]> crsctl stat res -t

--------------------------------------------------------------------------------

NAME           TARGET  STATE        SERVER                   STATE_DETAILS      

--------------------------------------------------------------------------------

Local Resources

--------------------------------------------------------------------------------

ora.OCR.dg

               ONLINE  ONLINE       zffr4cb2101                                 

ora.asm

               ONLINE  ONLINE       zffr4cb2101              Started            

ora.gsd

               OFFLINE OFFLINE      zffr4cb2101                                 

ora.net1.network

               ONLINE  ONLINE       zffr4cb2101                                 

ora.ons

               ONLINE  ONLINE       zffr4cb2101                                 

ora.registry.acfs

               ONLINE  ONLINE       zffr4cb2101                                 

--------------------------------------------------------------------------------

Cluster Resources

--------------------------------------------------------------------------------

ora.LISTENER_SCAN1.lsnr

      1        ONLINE  ONLINE       zffr4cb2101                                 

ora.cvu

      1        ONLINE  ONLINE       zffr4cb2101                                 

ora.oc4j

      1        ONLINE  ONLINE       zffr4cb2101                                 

ora.scan1.vip

      1        ONLINE  ONLINE       zffr4cb2101                                 

ora.zffr4cb2101.vip

      1        ONLINE  ONLINE       zffr4cb2101                                 

[ZFFR4CB2101:root]/]>

[ZFFR4CB2101:root]/]> ps -ef|grep asm

    grid  4391000        1   0 17:15:17      -  0:00 asm_dbw0_+ASM1

    grid  8519868        1   0 17:15:17      -  0:00 asm_lmhb_+ASM1

    grid  8650940        1   0 17:15:17      -  0:00 asm_mmon_+ASM1

    grid  8847532        1   0 17:15:17      -  0:00 asm_mman_+ASM1

    grid 10289152        1   0 17:15:17      -  0:00 asm_diag_+ASM1

    grid 10354890        1   0 17:15:17      -  0:00 asm_lms0_+ASM1

    grid 10682428        1   0 17:15:17      -  0:00 asm_lmd0_+ASM1

    grid 11010164        1   0 17:15:17      -  0:00 asm_mmnl_+ASM1

    root 11796632  6488154   0 17:22:17  pts/1  0:00 grep asm

    grid 12714016        1   0 17:15:17      -  0:00 asm_dia0_+ASM1

    grid 12910704        1   0 17:15:17      -  0:00 asm_rbal_+ASM1

    grid 13303898        1   0 17:15:27      -  0:00 asm_asmb_+ASM1

    grid 13435084        1   0 17:15:17      -  0:00 asm_lmon_+ASM1

    grid 13697226        1   0 17:15:18      -  0:00 asm_lck0_+ASM1

    grid 13828112        1   0 17:15:17      -  0:00 asm_ckpt_+ASM1

    grid 14155956        1   0 17:15:17      -  0:00 asm_gen0_+ASM1

    grid 14418088        1   0 17:15:17      -  0:00 asm_vktm_+ASM1

    grid 14680284        1   0 17:15:17      -  0:00 asm_ping_+ASM1

    grid 15073388        1   0 17:15:27      -  0:00 oracle+ASM1_asmb_+asm1 (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))

    grid 15400976        1   0 17:15:17      -  0:00 asm_smon_+ASM1

    grid 15990812        1   0 17:15:17      -  0:00 asm_gmon_+ASM1

    grid 16187420        1   0 17:15:17      -  0:00 asm_lgwr_+ASM1

    grid 16449694        1   0 17:15:16      -  0:00 asm_pmon_+ASM1

    grid 16580744        1   0 17:15:16      -  0:00 asm_psp0_+ASM1

[ZFFR4CB2101:root]/]>

[ZFFR4CB2101:root]/]> lquerypv -h /dev/rhdisk10

00000000   00820101 00000000 80000000 B6FE0F29  |...............)|

00000010   00000000 00000000 00000000 00000000  |................|

00000020   4F52434C 4449534B 00000000 00000000  |ORCLDISK........|

00000030   00000000 00000000 00000000 00000000  |................|

00000040   0B200000 00000103 4F43525F 30303030  |. ......OCR_0000|

00000050   00000000 00000000 00000000 00000000  |................|

00000060   00000000 00000000 4F435200 00000000  |........OCR.....|

00000070   00000000 00000000 00000000 00000000  |................|

00000080   00000000 00000000 4F43525F 30303030  |........OCR_0000|

00000090   00000000 00000000 00000000 00000000  |................|

000000A0   00000000 00000000 00000000 00000000  |................|

000000B0   00000000 00000000 00000000 00000000  |................|

000000C0   00000000 00000000 01F80D69 66A0E000  |...........if...|

000000D0   01F80D69 70C48800 02001000 00100000  |...ip...........|

000000E0   0001BC80 0002001C 00000003 00000001  |................|

000000F0   00000002 00000002 00000000 00000000  |................|

[ZFFR4CB2101:root]/]>
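Besides dumping the disk header with lquerypv, the clusterware itself can confirm where the OCR and the voting disk live (run as root from the grid home):

/u01/app/11.2.0/grid/bin/ocrcheck
/u01/app/11.2.0/grid/bin/crsctl query css votedisk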

 

 

三.2.1.2  Run the root scripts on the other node

As a root user, execute the following script(s):

        1. /u01/app/oraInventory/orainstRoot.sh

        2. /u01/app/11.2.0/grid/root.sh

 

 

[ZFFR4CB1101:root]/]> /u01/app/oraInventory/orainstRoot.sh

Changing permissions of /u01/app/oraInventory.

Adding read,write permissions for group.

Removing read,write,execute permissions for world.

 

Changing groupname of /u01/app/oraInventory to oinstall.

The execution of the script is complete.

[ZFFR4CB1101:root]/]> $ORACLE_HOME/root.sh

Check /u01/app/11.2.0/grid/install/root_ZFFR4CB1101_2016-03-11_09-54-09.log for the output of root script

[ZFFR4CB1101:root]/]>

 

After pressing Enter the script just sits there; it is only finished when the prompt returns by itself. Open a separate window to follow the log:

 

[ZFFR4CB1101:root]/]> tail -200f /u01/app/11.2.0/grid/install/root_ZFFR4CB1101_2016-03-11_09-54-09.log

 

Performing root user operation for Oracle 11g

 

The following environment variables are set as:

    ORACLE_OWNER= grid

    ORACLE_HOME=  /u01/app/11.2.0/grid

Entries will be added to the /etc/oratab file as needed by

Database Configuration Assistant when a database is created

Finished running generic part of root script.

Now product-specific root actions will be performed.

Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params

User ignored Prerequisites during installation

User grid has the required capabilities to run CSSD in realtime mode

OLR initialization - successful

Adding Clusterware entries to inittab

CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node zffr4cb2101, number 1, and is terminating

An active cluster was found during exclusive startup, restarting to join the cluster

Configure Oracle Grid Infrastructure for a Cluster ... succeeded

 

[ZFFR4CB1101:root]/]> ps -ef|grep asm

    grid  9961498        1   0 09:57:39      -  0:00 asm_gmon_+ASM2

    grid 10813654        1   0 09:57:39      -  0:00 asm_mmon_+ASM2

    root 11599892  4587988   0 10:00:26  pts/0  0:00 grep asm

    grid 11862082        1   0 09:57:39      -  0:00 asm_diag_+ASM2

    grid 12124202        1   0 09:57:41      -  0:00 asm_lck0_+ASM2

    grid 12320918        1   0 09:57:39      -  0:00 asm_lmhb_+ASM2

    grid 12386418        1   1 09:57:39      -  0:00 asm_vktm_+ASM2

    grid 12517574        1   0 09:57:39      -  0:00 asm_lms0_+ASM2

    grid 12648524        1   0 09:57:46      -  0:00 asm_o000_+ASM2

    grid 12845130        1   1 09:57:39      -  0:00 asm_dia0_+ASM2

    grid 14221316        1   0 09:57:46      -  0:00 oracle+ASM2_asmb_+asm2 (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))

    grid 14942382        1   0 09:57:39      -  0:00 asm_mmnl_+ASM2

    grid 15270102        1   0 09:57:39      -  0:00 asm_ping_+ASM2

    grid 15597756        1   0 09:57:39      -  0:00 asm_lgwr_+ASM2

    grid  2359724        1   0 09:57:38      -  0:00 asm_psp0_+ASM2

    grid  3014926        1   0 09:57:39      -  0:00 asm_ckpt_+ASM2

    grid  3080676        1   0 09:57:39      -  0:00 asm_dbw0_+ASM2

    grid  3211710        1   0 09:57:39      -  0:00 asm_mman_+ASM2

    grid  3539244        1   0 09:57:37      -  0:00 asm_pmon_+ASM2

    grid  3670514        1   1 09:57:39      -  0:00 asm_lmon_+ASM2

    grid  4129072        1   0 09:57:46      -  0:00 oracle+ASM2_o000_+asm2 (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))

    grid  4522356        1   0 09:57:45      -  0:00 asm_asmb_+ASM2

    grid  4784516        1   0 09:57:39      -  0:00 asm_smon_+ASM2

    grid  5112192        1   0 09:57:39      -  0:00 asm_rbal_+ASM2

    grid  5243238        1   1 09:57:39      -  0:00 asm_lmd0_+ASM2

    grid  5702040        1   0 09:57:39      -  0:00 asm_gen0_+ASM2

[ZFFR4CB1101:root]/]> crsctl stat res -t

--------------------------------------------------------------------------------

NAME           TARGET  STATE        SERVER                   STATE_DETAILS      

--------------------------------------------------------------------------------

Local Resources

--------------------------------------------------------------------------------

ora.OCR.dg

               ONLINE  ONLINE       zffr4cb1101                                 

               ONLINE  ONLINE       zffr4cb2101                                 

ora.asm

               ONLINE  ONLINE       zffr4cb1101              Started            

               ONLINE  ONLINE       zffr4cb2101              Started            

ora.gsd

               OFFLINE OFFLINE      zffr4cb1101                                 

               OFFLINE OFFLINE      zffr4cb2101                                 

ora.net1.network

               ONLINE  ONLINE       zffr4cb1101                                 

               ONLINE  ONLINE       zffr4cb2101                                 

ora.ons

               ONLINE  ONLINE       zffr4cb1101                                 

               ONLINE  ONLINE       zffr4cb2101                                 

ora.registry.acfs

               ONLINE  ONLINE       zffr4cb1101                                 

               ONLINE  ONLINE       zffr4cb2101                                 

--------------------------------------------------------------------------------

Cluster Resources

--------------------------------------------------------------------------------

ora.LISTENER_SCAN1.lsnr

      1        ONLINE  ONLINE       zffr4cb2101                                 

ora.cvu

      1        ONLINE  ONLINE       zffr4cb2101                                 

ora.oc4j

      1        ONLINE  ONLINE       zffr4cb2101                                 

ora.scan1.vip

      1        ONLINE  ONLINE       zffr4cb2101                                 

ora.zffr4cb1101.vip

      1        ONLINE  ONLINE       zffr4cb1101                                 

ora.zffr4cb2101.vip

      1        ONLINE  ONLINE       zffr4cb2101       
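With the clusterware up on both nodes, the grid installation can be validated end to end before moving on to the database software, using the same cluvfy tool with the post-crsinst stage:

# As the grid user, from either node
/softtmp/grid/runcluvfy.sh stage -post crsinst -n ZFFR4CB2101,ZFFR4CB1101 -verbose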

 

 

 

 

 

 

 

第四章  DB installation

四.1  Prepare the installation media

unzip  p10404530_112030_AIX64-5L_1of7.zip && unzip p10404530_112030_AIX64-5L_2of7.zip

 

[ZFFR4CB2101:root]/]> cd /soft*

[ZFFR4CB2101:root]/softtmp]> l

total 9644880

drwxr-xr-x    9 root     system         4096 Oct 28 2011  grid

drwxr-xr-x    2 root     system          256 Mar 08 16:10 lost+found

-rw-r-----    1 root     system   1766307597 Mar 02 04:05 p10404530_112030_AIX64-5L_1of7.zip

-rw-r-----    1 root     system   1135393912 Mar 02 04:03 p10404530_112030_AIX64-5L_2of7.zip

-rw-r-----    1 root     system   2036455635 Mar 02 04:06 p10404530_112030_AIX64-5L_3of7.zip

[ZFFR4CB2101:root]/softtmp]> unzip  p10404530_112030_AIX64-5L_1of7.zip && unzip p10404530_112030_AIX64-5L_2of7.zip

Archive:  p10404530_112030_AIX64-5L_1of7.zip

   creating: database/

   creating: database/stage/

  inflating: database/stage/shiphomeproperties.xml 

   creating: database/stage/Components/

<<<< ...... output omitted for brevity ...... >>>>

  inflating: database/doc/server.11203/E22487-03.mobi 

  inflating: database/doc/server.11203/e22487.pdf 

  inflating: database/welcome.html  

   creating: database/sshsetup/

  inflating: database/sshsetup/sshUserSetup.sh 

  inflating: database/readme.html   

Archive:  p10404530_112030_AIX64-5L_2of7.zip

   creating: database/stage/Components/oracle.ctx/

   creating: database/stage/Components/oracle.ctx/11.2.0.3.0/

   creating: database/stage/Components/oracle.ctx/11.2.0.3.0/1/

   creating: database/stage/Components/oracle.ctx/11.2.0.3.0/1/DataFiles/

<<<< ...... output omitted for brevity ...... >>>>

   creating: database/stage/Components/oracle.javavm.containers/

   creating: database/stage/Components/oracle.javavm.containers/11.2.0.3.0/

   creating: database/stage/Components/oracle.javavm.containers/11.2.0.3.0/1/

   creating: database/stage/Components/oracle.javavm.containers/11.2.0.3.0/1/DataFiles/

  inflating: database/stage/Components/oracle.javavm.containers/11.2.0.3.0/1/DataFiles/filegroup4.jar 

  inflating: database/stage/Components/oracle.javavm.containers/11.2.0.3.0/1/DataFiles/filegroup3.jar 

  inflating: database/stage/Components/oracle.javavm.containers/11.2.0.3.0/1/DataFiles/filegroup2.jar 

  inflating: database/stage/Components/oracle.javavm.containers/11.2.0.3.0/1/DataFiles/filegroup1.jar 

[ZFFR4CB2101:root]/softtmp]>

[ZFFR4CB2101:root]/softtmp]> l

total 9644888

drwxr-xr-x    9 root     system         4096 Oct 28 2011  database

drwxr-xr-x    9 root     system         4096 Oct 28 2011  grid

drwxr-xr-x    2 root     system          256 Mar 08 16:10 lost+found

-rw-r-----    1 root     system   1766307597 Mar 02 04:05 p10404530_112030_AIX64-5L_1of7.zip

-rw-r-----    1 root     system   1135393912 Mar 02 04:03 p10404530_112030_AIX64-5L_2of7.zip

-rw-r-----    1 root     system   2036455635 Mar 02 04:06 p10404530_112030_AIX64-5L_3of7.zip

[ZFFR4CB2101:root]/softtmp]>
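Since the media was unzipped as root, make sure the database software owner can read the staging directory before launching the installer. A hedged sketch, assuming the owner is oracle:oinstall (adjust to your own user and group):

chown -R oracle:oinstall /softtmp/database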

 

四.2  Run the runcluvfy.sh pre-installation checks

[grid@ZFFR4CB2101:/home/grid]$ /softtmp/grid/runcluvfy.sh stage -pre dbinst -n  ZFFR4CB2101,ZFFR4CB1101 -verbose -fixup

 

Performing pre-checks for database installation

 

Checking node reachability...

 

Check: Node reachability from node "ZFFR4CB2101"

  Destination Node                      Reachable?             

  ------------------------------------  ------------------------

  ZFFR4CB2101                           yes                    

  ZFFR4CB1101                           yes                    

Result: Node reachability check passed from node "ZFFR4CB2101"

 

 

Checking user equivalence...

 

Check: User equivalence for user "grid"

  Node Name                             Status                 

  ------------------------------------  ------------------------

  ZFFR4CB2101                           passed                 

  ZFFR4CB1101                           passed                 

Result: User equivalence check passed for user "grid"

 

Checking node connectivity...

 

Checking hosts config file...

  Node Name                             Status                 

  ------------------------------------  ------------------------

  ZFFR4CB2101                           passed                 

  ZFFR4CB1101                           passed                 

 

Verification of the hosts config file successful

 

 

Interface information for node "ZFFR4CB2101"

Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU  

------ --------------- --------------- --------------- --------------- ----------------- ------

en0    22.188.187.158  22.188.187.0    22.188.187.158  22.188.187.1    C6:03:AE:03:97:83 1500 

en0    22.188.187.158  22.188.187.0    22.188.187.158  22.188.187.1    C6:03:AE:03:97:83 1500 

en0    22.188.187.158  22.188.187.0    22.188.187.158  22.188.187.1    C6:03:AE:03:97:83 1500 

en1    222.188.187.158 222.188.187.0   222.188.187.158 22.188.187.1    C6:03:A7:3E:FE:01 1500 

en1    222.188.187.158 222.188.187.0   222.188.187.158 22.188.187.1    C6:03:A7:3E:FE:01 1500 

 

 

Interface information for node "ZFFR4CB1101"

Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU  

------ --------------- --------------- --------------- --------------- ----------------- ------

en0    22.188.187.148  22.188.187.0    22.188.187.148  UNKNOWN         FE:B6:72:EF:12:83 1500 

en0    22.188.187.148  22.188.187.0    22.188.187.148  UNKNOWN         FE:B6:72:EF:12:83 1500 

en1    222.188.187.148 222.188.187.0   222.188.187.148 UNKNOWN         FE:B6:7D:9F:6C:01 1500 

en1    222.188.187.148 222.188.187.0   222.188.187.148 UNKNOWN         FE:B6:7D:9F:6C:01 1500 

 

 

Check: Node connectivity for interface "en0"

  Source                          Destination                     Connected?     

  ------------------------------  ------------------------------  ----------------

  ZFFR4CB2101[22.188.187.158]     ZFFR4CB2101[22.188.187.158]     yes            

  ZFFR4CB2101[22.188.187.158]     ZFFR4CB2101[22.188.187.158]     yes            

  ZFFR4CB2101[22.188.187.158]     ZFFR4CB1101[22.188.187.148]     yes            

  ZFFR4CB2101[22.188.187.158]     ZFFR4CB1101[22.188.187.148]     yes            

  ZFFR4CB2101[22.188.187.158]     ZFFR4CB2101[22.188.187.158]     yes            

  ZFFR4CB2101[22.188.187.158]     ZFFR4CB1101[22.188.187.148]     yes            

  ZFFR4CB2101[22.188.187.158]     ZFFR4CB1101[22.188.187.148]     yes            

  ZFFR4CB2101[22.188.187.158]     ZFFR4CB1101[22.188.187.148]     yes            

  ZFFR4CB2101[22.188.187.158]     ZFFR4CB1101[22.188.187.148]     yes            

  ZFFR4CB1101[22.188.187.148]     ZFFR4CB1101[22.188.187.148]     yes            

Result: Node connectivity passed for interface "en0"

 

 

Check: TCP connectivity of subnet "22.188.187.0"

  Source                          Destination                     Connected?     

  ------------------------------  ------------------------------  ----------------

  ZFFR4CB2101:22.188.187.158      ZFFR4CB1101:22.188.187.148      passed         

Result: TCP connectivity check passed for subnet "22.188.187.0"

 

 

Check: Node connectivity for interface "en1"

  Source                          Destination                     Connected?     

  ------------------------------  ------------------------------  ----------------

  ZFFR4CB2101[222.188.187.158]    ZFFR4CB2101[222.188.187.158]    yes            

  ZFFR4CB2101[222.188.187.158]    ZFFR4CB1101[222.188.187.148]    yes            

  ZFFR4CB2101[222.188.187.158]    ZFFR4CB1101[222.188.187.148]    yes            

  ZFFR4CB2101[222.188.187.158]    ZFFR4CB1101[222.188.187.148]    yes            

  ZFFR4CB2101[222.188.187.158]    ZFFR4CB1101[222.188.187.148]    yes            

  ZFFR4CB1101[222.188.187.148]    ZFFR4CB1101[222.188.187.148]    yes            

Result: Node connectivity passed for interface "en1"

 

 

Check: TCP connectivity of subnet "222.188.187.0"

  Source                          Destination                     Connected?     

  ------------------------------  ------------------------------  ----------------

  ZFFR4CB2101:222.188.187.158     ZFFR4CB1101:222.188.187.148     passed         

Result: TCP connectivity check passed for subnet "222.188.187.0"

 

Checking subnet mask consistency...

Subnet mask consistency check passed for subnet "22.188.187.0".

Subnet mask consistency check passed for subnet "222.188.187.0".

Subnet mask consistency check passed.

 

Result: Node connectivity check passed

 

Checking multicast communication...

 

Checking subnet "22.188.187.0" for multicast communication with multicast group "230.0.1.0"...

Check of subnet "22.188.187.0" for multicast communication with multicast group "230.0.1.0" passed.

 

Checking subnet "222.188.187.0" for multicast communication with multicast group "230.0.1.0"...

Check of subnet "222.188.187.0" for multicast communication with multicast group "230.0.1.0" passed.

 

Check of multicast communication passed.

 

Check: Total memory

  Node Name     Available                 Required                  Status   

  ------------  ------------------------  ------------------------  ----------

  ZFFR4CB2101   4GB (4194304.0KB)         1GB (1048576.0KB)         passed   

  ZFFR4CB1101   48GB (5.0331648E7KB)      1GB (1048576.0KB)         passed   

Result: Total memory check passed

 

Check: Available memory

  Node Name     Available                 Required                  Status   

  ------------  ------------------------  ------------------------  ----------

  ZFFR4CB2101   224.293MB (229676.0KB)    50MB (51200.0KB)          passed   

  ZFFR4CB1101   41.4106GB (4.3422168E7KB)  50MB (51200.0KB)          passed   

Result: Available memory check passed

 

Check: Swap space

  Node Name     Available                 Required                  Status   

  ------------  ------------------------  ------------------------  ----------

  ZFFR4CB2101   8GB (8388608.0KB)         4GB (4194304.0KB)         passed   

  ZFFR4CB1101   8GB (8388608.0KB)         16GB (1.6777216E7KB)      failed   

Result: Swap space check failed

 

Check: Free disk space for "ZFFR4CB2101:/tmp"

  Path              Node Name     Mount point   Available     Required      Status     

  ----------------  ------------  ------------  ------------  ------------  ------------

  /tmp              ZFFR4CB2101   /tmp          3.899GB       1GB           passed     

Result: Free disk space check passed for "ZFFR4CB2101:/tmp"

 

Check: Free disk space for "ZFFR4CB1101:/tmp"

  Path              Node Name     Mount point   Available     Required      Status     

  ----------------  ------------  ------------  ------------  ------------  ------------

  /tmp              ZFFR4CB1101   /tmp          18.1031GB     1GB           passed     

Result: Free disk space check passed for "ZFFR4CB1101:/tmp"

 

Check: User existence for "grid"

  Node Name     Status                    Comment                

  ------------  ------------------------  ------------------------

  ZFFR4CB2101   passed                    exists(1025)           

  ZFFR4CB1101   passed                    exists(1025)           

 

Checking for multiple users with UID value 1025

Result: Check for multiple users with UID value 1025 passed

Result: User existence check passed for "grid"

 

Check: Group existence for "oinstall"

  Node Name     Status                    Comment                

  ------------  ------------------------  ------------------------

  ZFFR4CB2101   passed                    exists                 

  ZFFR4CB1101   passed                    exists                 

Result: Group existence check passed for "oinstall"

 

Check: Group existence for "dba"

  Node Name     Status                    Comment                

  ------------  ------------------------  ------------------------

  ZFFR4CB2101   passed                    exists                 

  ZFFR4CB1101   passed                    exists                 

Result: Group existence check passed for "dba"

 

Check: Membership of user "grid" in group "oinstall" [as Primary]

  Node Name         User Exists   Group Exists  User in Group  Primary       Status     

  ----------------  ------------  ------------  ------------  ------------  ------------

  ZFFR4CB2101       yes           yes           yes           yes           passed     

  ZFFR4CB1101       yes           yes           yes           yes           passed     

Result: Membership check for user "grid" in group "oinstall" [as Primary] passed

 

Check: Membership of user "grid" in group "dba"

  Node Name         User Exists   Group Exists  User in Group  Status         

  ----------------  ------------  ------------  ------------  ----------------

  ZFFR4CB2101       yes           yes           yes           passed         

  ZFFR4CB1101       yes           yes           yes           passed         

Result: Membership check for user "grid" in group "dba" passed

 

Check: Run level

  Node Name     run level                 Required                  Status   

  ------------  ------------------------  ------------------------  ----------

  ZFFR4CB2101   2                         2                         passed   

  ZFFR4CB1101   2                         2                         passed   

Result: Run level check passed

 

Check: Hard limits for "maximum open file descriptors"

  Node Name         Type          Available     Required      Status         

  ----------------  ------------  ------------  ------------  ----------------

  ZFFR4CB2101       hard          9223372036854776000  65536         passed         

  ZFFR4CB1101       hard          9223372036854776000  65536         passed         

Result: Hard limits check passed for "maximum open file descriptors"

 

Check: Soft limits for "maximum open file descriptors"

  Node Name         Type          Available     Required      Status         

  ----------------  ------------  ------------  ------------  ----------------

  ZFFR4CB2101       soft          9223372036854776000  1024          passed         

  ZFFR4CB1101       soft          9223372036854776000  1024          passed         

Result: Soft limits check passed for "maximum open file descriptors"

 

Check: Hard limits for "maximum user processes"

  Node Name         Type          Available     Required      Status         

  ----------------  ------------  ------------  ------------  ----------------

  ZFFR4CB2101       hard          16384         16384         passed         

  ZFFR4CB1101       hard          16384         16384         passed         

Result: Hard limits check passed for "maximum user processes"

 

Check: Soft limits for "maximum user processes"

  Node Name         Type          Available     Required      Status         

  ----------------  ------------  ------------  ------------  ----------------

  ZFFR4CB2101       soft          16384         2047          passed         

  ZFFR4CB1101       soft          16384         2047          passed         

Result: Soft limits check passed for "maximum user processes"

 

Check: System architecture

  Node Name     Available                 Required                  Status   

  ------------  ------------------------  ------------------------  ----------

  ZFFR4CB2101   powerpc                   powerpc                   passed   

  ZFFR4CB1101   powerpc                   powerpc                   passed   

Result: System architecture check passed

 

Check: Kernel version

  Node Name     Available                 Required                  Status   

  ------------  ------------------------  ------------------------  ----------

  ZFFR4CB2101   7.1-7100.03.03.1415       7.1-7100.00.01.1037       passed   

  ZFFR4CB1101   7.1-7100.02.05.1415       7.1-7100.00.01.1037       passed   

 

WARNING:

PRVF-7524 : Kernel version is not consistent across all the nodes.

Kernel version = "7.1-7100.02.05.1415" found on nodes: ZFFR4CB1101.

Kernel version = "7.1-7100.03.03.1415" found on nodes: ZFFR4CB2101.

Result: Kernel version check passed

 

Check: Kernel parameter for "ncargs"

  Node Name     Current                   Required                  Status   

  ------------  ------------------------  ------------------------  ----------

  ZFFR4CB2101   256                       128                       passed   

  ZFFR4CB1101   256                       128                       passed   

Result: Kernel parameter check passed for "ncargs"

 

Check: Kernel parameter for "maxuproc"

  Node Name     Current                   Required                  Status   

  ------------  ------------------------  ------------------------  ----------

  ZFFR4CB2101   16384                     2048                      passed   

  ZFFR4CB1101   16384                     2048                      passed   

Result: Kernel parameter check passed for "maxuproc"

 

Check: Kernel parameter for "tcp_ephemeral_low"

  Node Name     Current                   Required                  Status   

  ------------  ------------------------  ------------------------  ----------

  ZFFR4CB2101   32768                     9000                      failed (ignorable)

  ZFFR4CB1101   32768                     9000                      failed (ignorable)

Result: Kernel parameter check passed for "tcp_ephemeral_low"

 

Check: Kernel parameter for "tcp_ephemeral_high"

  Node Name     Current                   Required                  Status   

  ------------  ------------------------  ------------------------  ----------

  ZFFR4CB2101   65535                     65500                     failed (ignorable)

  ZFFR4CB1101   65535                     65500                     failed (ignorable)

Result: Kernel parameter check passed for "tcp_ephemeral_high"

 

Check: Kernel parameter for "udp_ephemeral_low"

  Node Name     Current                   Required                  Status   

  ------------  ------------------------  ------------------------  ----------

  ZFFR4CB2101   32768                     9000                      failed (ignorable)

  ZFFR4CB1101   32768                     9000                      failed (ignorable)

Result: Kernel parameter check passed for "udp_ephemeral_low"

 

Check: Kernel parameter for "udp_ephemeral_high"

  Node Name     Current                   Required                  Status   

  ------------  ------------------------  ------------------------  ----------

  ZFFR4CB2101   65535                     65500                     failed (ignorable)

  ZFFR4CB1101   65535                     65500                     failed (ignorable)

Result: Kernel parameter check passed for "udp_ephemeral_high"

 

Check: Package existence for "bos.adt.base"

  Node Name     Available                 Required                  Status   

  ------------  ------------------------  ------------------------  ----------

  ZFFR4CB2101   bos.adt.base-7.1.3.15-0   bos.adt.base-...          passed   

  ZFFR4CB1101   bos.adt.base-7.1.3.15-0   bos.adt.base-...          passed   

Result: Package existence check passed for "bos.adt.base"

 

Check: Package existence for "bos.adt.lib"

  Node Name     Available                 Required                  Status   

  ------------  ------------------------  ------------------------  ----------

  ZFFR4CB2101   bos.adt.lib-7.1.2.15-0    bos.adt.lib-...           passed   

  ZFFR4CB1101   bos.adt.lib-7.1.2.15-0    bos.adt.lib-...           passed   

Result: Package existence check passed for "bos.adt.lib"

 

Check: Package existence for "bos.adt.libm"

  Node Name     Available                 Required                  Status   

  ------------  ------------------------  ------------------------  ----------

  ZFFR4CB2101   bos.adt.libm-7.1.3.0-0    bos.adt.libm-...          passed   

  ZFFR4CB1101   bos.adt.libm-7.1.3.0-0    bos.adt.libm-...          passed   

Result: Package existence check passed for "bos.adt.libm"

 

Check: Package existence for "bos.perf.libperfstat"

  Node Name     Available                 Required                  Status   

  ------------  ------------------------  ------------------------  ----------

  ZFFR4CB2101   bos.perf.libperfstat-7.1.3.15-0  bos.perf.libperfstat-...  passed   

  ZFFR4CB1101   bos.perf.libperfstat-7.1.3.15-0  bos.perf.libperfstat-...  passed   

Result: Package existence check passed for "bos.perf.libperfstat"

 

Check: Package existence for "bos.perf.perfstat"

  Node Name     Available                 Required                  Status   

  ------------  ------------------------  ------------------------  ----------

  ZFFR4CB2101   bos.perf.perfstat-7.1.3.15-0  bos.perf.perfstat-...     passed   

  ZFFR4CB1101   bos.perf.perfstat-7.1.3.15-0  bos.perf.perfstat-...     passed   

Result: Package existence check passed for "bos.perf.perfstat"

 

Check: Package existence for "bos.perf.proctools"

  Node Name     Available                 Required                  Status   

  ------------  ------------------------  ------------------------  ----------

  ZFFR4CB2101   bos.perf.proctools-7.1.3.15-0  bos.perf.proctools-...    passed   

  ZFFR4CB1101   bos.perf.proctools-7.1.3.15-0  bos.perf.proctools-...    passed   

Result: Package existence check passed for "bos.perf.proctools"

 

Check: Package existence for "xlC.aix61.rte"

  Node Name     Available                 Required                  Status   

  ------------  ------------------------  ------------------------  ----------

  ZFFR4CB2101   xlC.aix61.rte-12.1.0.1-0  xlC.aix61.rte-10.1.0.0    passed   

  ZFFR4CB1101   xlC.aix61.rte-12.1.0.1-0  xlC.aix61.rte-10.1.0.0    passed   

Result: Package existence check passed for "xlC.aix61.rte"

 

Check: Package existence for "xlC.rte"

  Node Name     Available                 Required                  Status   

  ------------  ------------------------  ------------------------  ----------

  ZFFR4CB2101   xlC.rte-12.1.0.1-0        xlC.rte-10.1.0.0          passed   

  ZFFR4CB1101   xlC.rte-12.1.0.1-0        xlC.rte-10.1.0.0          passed   

Result: Package existence check passed for "xlC.rte"

 

Check: Operating system patch for "Patch IZ87216"

  Node Name     Applied                   Required                  Comment  

  ------------  ------------------------  ------------------------  ----------

  ZFFR4CB2101   Patch IZ87216:devices.common.IBM.mpio.rte  Patch IZ87216             passed   

  ZFFR4CB1101   Patch IZ87216:devices.common.IBM.mpio.rte  Patch IZ87216             passed   

Result: Operating system patch check passed for "Patch IZ87216"

 

Check: Operating system patch for "Patch IZ87564"

  Node Name     Applied                   Required                  Comment  

  ------------  ------------------------  ------------------------  ----------

  ZFFR4CB2101   Patch IZ87564:bos.adt.libmIZ87564:bos.adt.prof  Patch IZ87564             passed   

  ZFFR4CB1101   Patch IZ87564:bos.adt.libmIZ87564:bos.adt.prof  Patch IZ87564             passed   

Result: Operating system patch check passed for "Patch IZ87564"

 

Check: Operating system patch for "Patch IZ89165"

  Node Name     Applied                   Required                  Comment  

  ------------  ------------------------  ------------------------  ----------

  ZFFR4CB2101   Patch IZ89165:bos.rte.bind_cmds  Patch IZ89165             passed   

  ZFFR4CB1101   Patch IZ89165:bos.rte.bind_cmds  Patch IZ89165             passed   

Result: Operating system patch check passed for "Patch IZ89165"

 

Check: Operating system patch for "Patch IZ97035"

  Node Name     Applied                   Required                  Comment  

  ------------  ------------------------  ------------------------  ----------

  ZFFR4CB2101   Patch IZ97035:devices.vdevice.IBM.l-lan.rte  Patch IZ97035             passed   

  ZFFR4CB1101   Patch IZ97035:devices.vdevice.IBM.l-lan.rte  Patch IZ97035             passed   

Result: Operating system patch check passed for "Patch IZ97035"

 

Checking for multiple users with UID value 0

Result: Check for multiple users with UID value 0 passed

 

Check: Current group ID

Result: Current group ID check passed

 

Starting check for consistency of primary group of root user

  Node Name                             Status                 

  ------------------------------------  ------------------------

  ZFFR4CB2101                           passed                 

  ZFFR4CB1101                           passed                 

 

Check for consistency of root user's primary group passed

 

Check default user file creation mask

  Node Name     Available                 Required                  Comment  

  ------------  ------------------------  ------------------------  ----------

  ZFFR4CB2101   022                       0022                      passed   

  ZFFR4CB1101   022                       0022                      passed   

Result: Default user file creation mask check passed

 

Checking CRS integrity...

 

Clusterware version consistency passed

The Oracle Clusterware is healthy on node "ZFFR4CB2101"

The Oracle Clusterware is healthy on node "ZFFR4CB1101"

 

CRS integrity check passed

 

Checking Cluster manager integrity...

 

 

Checking CSS daemon...

 

  Node Name                             Status                 

  ------------------------------------  ------------------------

  ZFFR4CB2101                           running                

  ZFFR4CB1101                           running                

 

Oracle Cluster Synchronization Services appear to be online.

 

Cluster manager integrity check passed

 

 

Checking node application existence...

 

Checking existence of VIP node application (required)

  Node Name     Required                  Running?                  Comment  

  ------------  ------------------------  ------------------------  ----------

  ZFFR4CB2101   yes                       yes                       passed   

  ZFFR4CB1101   yes                       yes                       passed   

VIP node application check passed

 

Checking existence of NETWORK node application (required)

  Node Name     Required                  Running?                  Comment  

  ------------  ------------------------  ------------------------  ----------

  ZFFR4CB2101   yes                       yes                       passed   

  ZFFR4CB1101   yes                       yes                       passed   

NETWORK node application check passed

 

Checking existence of GSD node application (optional)

  Node Name     Required                  Running?                  Comment  

  ------------  ------------------------  ------------------------  ----------

  ZFFR4CB2101   no                        no                        exists   

  ZFFR4CB1101   no                        no                        exists   

GSD node application is offline on nodes "ZFFR4CB2101,ZFFR4CB1101"

 

Checking existence of ONS node application (optional)

  Node Name     Required                  Running?                  Comment  

  ------------  ------------------------  ------------------------  ----------

  ZFFR4CB2101   no                        yes                       passed   

  ZFFR4CB1101   no                        yes                       passed   

ONS node application check passed

 

 

Checking if Clusterware is installed on all nodes...

Check of Clusterware install passed

 

Checking if CTSS Resource is running on all nodes...

Check: CTSS Resource running on all nodes

  Node Name                             Status                 

  ------------------------------------  ------------------------

  ZFFR4CB2101                           passed                 

  ZFFR4CB1101                           passed                 

Result: CTSS resource check passed

 

 

Querying CTSS for time offset on all nodes...

Result: Query of CTSS for time offset passed

 

Check CTSS state started...

Check: CTSS state

  Node Name                             State                  

  ------------------------------------  ------------------------

  ZFFR4CB2101                           Observer               

  ZFFR4CB1101                           Observer               

CTSS is in Observer state. Switching over to clock synchronization checks using NTP

 

 

Starting Clock synchronization checks using Network Time Protocol(NTP)...

 

NTP Configuration file check started...

The NTP configuration file "/etc/ntp.conf" is available on all nodes

NTP Configuration file check passed

 

Checking daemon liveness...

 

Check: Liveness for "xntpd"

  Node Name                             Running?               

  ------------------------------------  ------------------------

  ZFFR4CB2101                           yes                    

  ZFFR4CB1101                           yes                    

Result: Liveness check passed for "xntpd"

Check for NTP daemon or service alive passed on all nodes

 

Checking NTP daemon command line for slewing option "-x"

Check: NTP daemon command line

  Node Name                             Slewing Option Set?    

  ------------------------------------  ------------------------

  ZFFR4CB2101                           yes                    

  ZFFR4CB1101                           no                     

Result:

NTP daemon slewing option check failed on some nodes

PRVF-5436 : The NTP daemon running on one or more nodes lacks the slewing option "-x"

Result: Clock synchronization check using Network Time Protocol(NTP) failed

 

 

PRVF-9652 : Cluster Time Synchronization Services check failed
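
The PRVF-5436 message above means that xntpd on one node (ZFFR4CB1101 in this run) was started without the slewing option "-x" that Oracle expects when CTSS is in Observer mode. A minimal sketch of the usual AIX-side fix, assuming xntpd is managed by the SRC and started from /etc/rc.tcpip (adjust to your environment):

# edit /etc/rc.tcpip on the affected node and add "-x" to the xntpd stanza, e.g.
#   start /usr/sbin/xntpd "$src_running" "-x"
# then restart the daemon so the running process picks up the flag:
stopsrc -s xntpd
startsrc -s xntpd -a "-x"
ps -ef | grep xntpd | grep -v grep        # confirm the -x option is now present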

Checking consistency of file "/etc/resolv.conf" across nodes

 

File "/etc/resolv.conf" does not exist on any node of the cluster. Skipping further checks

 

File "/etc/resolv.conf" is consistent across nodes

 

Check: Time zone consistency

Result: Time zone consistency check passed

 

Checking Single Client Access Name (SCAN)...

  SCAN Name         Node          Running?      ListenerName  Port          Running?   

  ----------------  ------------  ------------  ------------  ------------  ------------

  ZFFR4CB2101-scan  zffr4cb2101   true          LISTENER_SCAN1  1521          true       

 

Checking TCP connectivity to SCAN Listeners...

  Node          ListenerName              TCP connectivity?      

  ------------  ------------------------  ------------------------

  ZFFR4CB2101   LISTENER_SCAN1            yes                    

TCP connectivity to SCAN Listeners exists on all cluster nodes

 

Checking name resolution setup for "ZFFR4CB2101-scan"...

 

ERROR:

PRVG-1101 : SCAN name "ZFFR4CB2101-scan" failed to resolve

  SCAN Name     IP Address                Status                    Comment  

  ------------  ------------------------  ------------------------  ----------

  ZFFR4CB2101-scan  22.188.187.160            failed                    NIS Entry

 

ERROR:

PRVF-4657 : Name resolution setup check for "ZFFR4CB2101-scan" (IP address: 22.188.187.160) failed

 

ERROR:

PRVF-4663 : Found configuration issue with the 'hosts' entry in the /etc/nsswitch.conf file

 

Verification of SCAN VIP and Listener setup failed
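
PRVG-1101, PRVF-4657 and PRVF-4663 above are typical when the SCAN name resolves only through /etc/hosts (a single SCAN IP) instead of DNS/GNS. A hedged quick check, using the host name and IP reported by CVU:

grep -i zffr4cb2101-scan /etc/hosts      # is the SCAN defined locally?
nslookup ZFFR4CB2101-scan                # does DNS know about it at all?
cat /etc/netsvc.conf                     # AIX name-resolution order (hosts = local, bind, ...)

With a single SCAN IP kept in /etc/hosts this check normally stays failed and is generally ignorable on a test system; for production Oracle recommends three SCAN IPs registered in DNS.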

Checking VIP configuration.

Checking VIP Subnet configuration.

Check for VIP Subnet configuration passed.

Checking VIP reachability

 

Checking Database and Clusterware version compatibility

 

 

Checking ASM and CRS version compatibility

ASM and CRS versions are compatible

Database version "11.2.0.3.0" is compatible with the Clusterware version "11.2.0.3.0".

Database Clusterware version compatibility passed

Result: User ID < 65535 check passed

 

Result: Kernel 64-bit mode check passed

 

Fixup information has been generated for following node(s):

ZFFR4CB1101,ZFFR4CB2101

Please run the following script on each node as "root" user to execute the fixups:

'/tmp/CVU_11.2.0.3.0_grid/runfixup.sh'

 

Pre-check for database installation was unsuccessful on all the nodes.

[grid@ZFFR4CB2101:/home/grid]$
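
Because CVU generated a fixup script, one reasonable follow-up (a sketch, not taken verbatim from this installation) is to run it as root on both nodes and then repeat the pre-check before starting the DB software installation:

# as root, on each node:
/tmp/CVU_11.2.0.3.0_grid/runfixup.sh
# as the grid or oracle user, repeat the database pre-installation check:
cluvfy stage -pre dbinst -n ZFFR4CB2101,ZFFR4CB1101 -verbose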

 

4.3  Silent installation of the DB software

 

[ZFFR4CB2101:root]/]> /softtmp/database/rootpre.sh

/softtmp/database/rootpre.sh output will be logged in /tmp/rootpre.out_16-03-11.10:02:47

 

Checking if group services should be configured....

Nothing to configure.

[ZFFR4CB2101:root]/]>
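
rootpre.sh only prepares the node it is run on, and runInstaller will later ask whether it has been executed on all nodes, so it also has to be run on the second node. A sketch, assuming the same staging directory /softtmp/database exists on ZFFR4CB1101:

# on node 2, as root:
/softtmp/database/rootpre.sh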

 

./runInstaller -silent  -force -noconfig -IgnoreSysPreReqs -ignorePrereq  -showProgress \

oracle.install.option=INSTALL_DB_SWONLY \

DECLINE_SECURITY_UPDATES=true \

UNIX_GROUP_NAME=oinstall \

INVENTORY_LOCATION=/u01/app/oraInventory \

SELECTED_LANGUAGES=en \

oracle.install.db.InstallEdition=EE \

oracle.install.db.isCustomInstall=false \

oracle.install.db.EEOptionsSelection=false \

oracle.install.db.DBA_GROUP=dba \

oracle.install.db.OPER_GROUP=asmoper \

oracle.install.db.isRACOneInstall=false \

oracle.install.db.config.starterdb.type=GENERAL_PURPOSE \

SECURITY_UPDATES_VIA_MYORACLESUPPORT=false \

oracle.installer.autoupdates.option=SKIP_UPDATES \

ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1 \

ORACLE_BASE=/u01/app/oracle \

ORACLE_HOSTNAME=ZFFR4CB2101 \

oracle.install.db.CLUSTER_NODES=zffr4cb2101,zffr4cb1101 \

oracle.install.db.isRACOneInstall=false

 

As before, be careful not to add a trailing carriage return when copying the script; the values that need to be changed for your environment are the ones I have marked with a yellow background:

 

[oracle@ZFFR4CB2101:/softtmp/database]$ ./runInstaller -silent  -force -noconfig -IgnoreSysPreReqs -ignorePrereq  -showProgress \

> oracle.install.option=INSTALL_DB_SWONLY \

> DECLINE_SECURITY_UPDATES=true \

> UNIX_GROUP_NAME=oinstall \

> INVENTORY_LOCATION=/u01/app/oraInventory \

> SELECTED_LANGUAGES=en \

> oracle.install.db.InstallEdition=EE \

> oracle.install.db.isCustomInstall=false \

> oracle.install.db.EEOptionsSelection=false \

> oracle.install.db.DBA_GROUP=dba \

> oracle.install.db.OPER_GROUP=asmoper \

> oracle.install.db.isRACOneInstall=false \

> oracle.install.db.config.starterdb.type=GENERAL_PURPOSE \

> SECURITY_UPDATES_VIA_MYORACLESUPPORT=false \

> oracle.installer.autoupdates.option=SKIP_UPDATES \

> ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1 \

> ORACLE_BASE=/u01/app/oracle \

> ORACLE_HOSTNAME=ZFFR4CB2101 \

> oracle.install.db.CLUSTER_NODES=zffr4cb2101,zffr4cb1101 \

> oracle.install.db.isRACOneInstall=false

********************************************************************************

 

Your platform requires the root user to perform certain pre-installation

OS preparation.  The root user should run the shell script 'rootpre.sh' before

you proceed with Oracle installation.  rootpre.sh can be found at the top level

of the CD or the stage area.

 

Answer 'y' if root has run 'rootpre.sh' so you can proceed with Oracle

installation.

Answer 'n' to abort installation and then ask root to run 'rootpre.sh'.

 

********************************************************************************

 

Has 'rootpre.sh' been run by root on all nodes? [y/n] (n)

y

 

Starting Oracle Universal Installer...

 

Checking Temp space: must be greater than 190 MB.   Actual 4328 MB    Passed

Checking swap space: must be greater than 150 MB.   Actual 8192 MB    Passed

Preparing to launch Oracle Universal Installer from /tmp/OraInstall2016-03-11_11-21-26AM. Please wait ...[oracle@ZFFR4CB2101:/softtmp/database]$ You can find the log of this install session at:

/u01/app/oraInventory/logs/installActions2016-03-11_11-21-26AM.log

 

Prepare in progress.

..................................................   9% Done.

 

Prepare successful.

 

Copy files in progress.

..................................................   14% Done.

..................................................   19% Done.

..................................................   24% Done.

..................................................   29% Done.

..................................................   34% Done.

..................................................   39% Done.

..................................................   44% Done.

........................................

Copy files successful.

..................................................   60% Done.

 

Link binaries in progress.

 

Link binaries successful.

..................................................   77% Done.

 

Setup files in progress.

..................................................   94% Done.

 

Setup files successful.

 

The installation of Oracle Database 11g was successful.

Please check '/u01/app/oraInventory/logs/silentInstall2016-03-11_11-21-26AM.log' for more details.

 

Execute Root Scripts in progress.

 

As a root user, execute the following script(s):

        1. /u01/app/oracle/product/11.2.0/dbhome_1/root.sh

 

Execute /u01/app/oracle/product/11.2.0/dbhome_1/root.sh on the following nodes:

[zffr4cb2101, zffr4cb1101]

 

..................................................   100% Done.

 

Execute Root Scripts successful.

Successfully Setup Software.

 

 

 

 

[ZFFR4CB1101:root]/]> du -sg /u01/app/oracle/product/11.2.0/dbhome_1

0.00    /u01/app/oracle/product/11.2.0/dbhome_1

[ZFFR4CB1101:root]/]> du -sg /u01/app/oracle/product/11.2.0/dbhome_1

0.00    /u01/app/oracle/product/11.2.0/dbhome_1

[ZFFR4CB1101:root]/]> du -sg /u01/app/oracle/product/11.2.0/dbhome_1

0.00    /u01/app/oracle/product/11.2.0/dbhome_1

[ZFFR4CB1101:root]/]> du -sg /u01/app/oracle/product/11.2.0/dbhome_1

3.08    /u01/app/oracle/product/11.2.0/dbhome_1

[ZFFR4CB1101:root]/]> du -sg /u01/app/oracle/product/11.2.0/dbhome_1

3.33    /u01/app/oracle/product/11.2.0/dbhome_1

[ZFFR4CB1101:root]/]> du -sg /u01/app/oracle/product/11.2.0/dbhome_1

3.44    /u01/app/oracle/product/11.2.0/dbhome_1

[ZFFR4CB1101:root]/]> du -sg /u01/app/oracle/product/11.2.0/dbhome_1

3.50    /u01/app/oracle/product/11.2.0/dbhome_1

[ZFFR4CB1101:root]/]> du -sg /u01/app/oracle/product/11.2.0/dbhome_1

3.54    /u01/app/oracle/product/11.2.0/dbhome_1

[ZFFR4CB1101:root]/]> du -sg /u01/app/oracle/product/11.2.0/dbhome_1

3.64    /u01/app/oracle/product/11.2.0/dbhome_1

[ZFFR4CB1101:root]/]> du -sg /u01/app/oracle/product/11.2.0/dbhome_1

3.76    /u01/app/oracle/product/11.2.0/dbhome_1

[ZFFR4CB1101:root]/]> du -sg /u01/app/oracle/product/11.2.0/dbhome_1

6.17    /u01/app/oracle/product/11.2.0/dbhome_1

[ZFFR4CB1101:root]/]> du -sg /u01/app/oracle/product/11.2.0/dbhome_1

7.06    /u01/app/oracle/product/11.2.0/dbhome_1

[ZFFR4CB1101:root]/]> du -sg /u01/app/oracle/product/11.2.0/dbhome_1

7.06    /u01/app/oracle/product/11.2.0/dbhome_1

[ZFFR4CB1101:root]/]>
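
Before running root.sh, the freshly copied home can be sanity-checked on either node; a minimal sketch (paths taken from the installation above, OPatch as shipped with 11.2.0.3.0):

export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
$ORACLE_HOME/OPatch/opatch lsinventory    # confirms the home is registered in the central inventory
$ORACLE_HOME/bin/sqlplus -V               # prints the installed SQL*Plus release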

 

 

 

Run root.sh as root on both nodes (node 1 is shown below; a sketch for node 2 follows the log):

[ZFFR4CB2101:root]/]> /u01/app/oracle/product/11.2.0/dbhome_1/root.sh

Check /u01/app/oracle/product/11.2.0/dbhome_1/install/root_ZFFR4CB2101_2016-03-11_11-43-02.log for the output of root script

 

[ZFFR4CB2101:root]/]>

[ZFFR4CB2101:root]/]> more /u01/app/oracle/product/11.2.0/dbhome_1/install/root_ZFFR4CB2101_2016-03-11_11-43-02.log

 

Performing root user operation for Oracle 11g

 

The following environment variables are set as:

    ORACLE_OWNER= oracle

    ORACLE_HOME=  /u01/app/oracle/product/11.2.0/dbhome_1

Entries will be added to the /etc/oratab file as needed by

Database Configuration Assistant when a database is created

Finished running generic part of root script.

Now product-specific root actions will be performed.

Finished product-specific root actions.
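
The same script must also be executed as root on the second node; a sketch, assuming the identical home path on ZFFR4CB1101:

# on node 2, as root:
/u01/app/oracle/product/11.2.0/dbhome_1/root.sh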

 

 

 

 

 

4.4  Silent listener configuration

[grid@ZFFR4CB2101:/home/grid]$ netca -silent -responsefile $ORACLE_HOME/assistants/netca/netca.rsp

 

Parsing command line arguments:

    Parameter "silent" = true

    Parameter "responsefile" = /u01/app/11.2.0/grid/assistants/netca/netca.rsp

Done parsing command line arguments.

Oracle Net Services Configuration:

Profile configuration complete.

Oracle Net Listener Startup:

    Listener started successfully.

Listener configuration complete.

Oracle Net Services configuration successful. The exit code is 0

[grid@ZFFR4CB2101:/home/grid]$
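
Besides crsctl, the listener resources can also be verified with srvctl from the grid environment; a hedged sketch:

srvctl status listener           # node listeners on both nodes
srvctl config listener
srvctl status scan_listener      # where the SCAN listener is currently running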

[grid@ZFFR4CB2101:/home/grid]$ crsctl stat res -t

--------------------------------------------------------------------------------

NAME           TARGET  STATE        SERVER                   STATE_DETAILS      

--------------------------------------------------------------------------------

Local Resources

--------------------------------------------------------------------------------

ora.LISTENER.lsnr

               ONLINE  ONLINE       zffr4cb1101                                 

               ONLINE  ONLINE       zffr4cb2101                                 

ora.OCR.dg

               ONLINE  ONLINE       zffr4cb1101                                 

               ONLINE  ONLINE       zffr4cb2101                                 

ora.asm

               ONLINE  ONLINE       zffr4cb1101              Started            

               ONLINE  ONLINE       zffr4cb2101              Started            

ora.gsd

               OFFLINE OFFLINE      zffr4cb1101                                 

               OFFLINE OFFLINE      zffr4cb2101                                 

ora.net1.network

               ONLINE  ONLINE       zffr4cb1101                                 

               ONLINE  ONLINE       zffr4cb2101                                 

ora.ons

               ONLINE  ONLINE       zffr4cb1101                                 

               ONLINE  ONLINE       zffr4cb2101                                 

ora.registry.acfs

               ONLINE  ONLINE       zffr4cb1101                                 

               ONLINE  ONLINE       zffr4cb2101                                 

--------------------------------------------------------------------------------

Cluster Resources

--------------------------------------------------------------------------------

ora.LISTENER_SCAN1.lsnr

      1        ONLINE  ONLINE       zffr4cb2101                                 

ora.cvu

      1        ONLINE  ONLINE       zffr4cb2101                                 

ora.oc4j

      1        ONLINE  ONLINE       zffr4cb2101                                 

ora.scan1.vip

      1        ONLINE  ONLINE       zffr4cb2101                                 

ora.zffr4cb1101.vip

      1        ONLINE  ONLINE       zffr4cb1101                                 

ora.zffr4cb2101.vip

      1        ONLINE  ONLINE       zffr4cb2101                                 

[grid@ZFFR4CB2101:/home/grid]$

 

vi crsstat_lhr.sh

awk  'BEGIN {printf "%-26s %-26s %-10s %-10s %-10s \n","Name                                    ","Type                      ","Target    ","State     ","Host      "; printf "%-30s %-26s %-10s %-10s %-10s\n","----------------------------------------","--------------------------","----------", "---------","----------";}'

crs_stat | awk 'BEGIN { FS="=| ";state = 0;}  $1~/NAME/ {appname = $2; state=1};  state == 0 {next;}  $1~/TYPE/ && state == 1 {apptype = $2; state=2;} $1~/TARGET/ && state == 2 {apptarget = $2; state=3;} $1~/STATE/ && state == 3 {appstate = $2; apphost = $4; state=4;} state == 4 {printf "%-40s %-26s %-10s %-10s %-10s\n", appname,apptype,apptarget,appstate,apphost; state=0;}'

[ZFFR4CB2101:root]/]> chmod +x crsstat_lhr.sh

[ZFFR4CB2101:root]/]> ./crsstat_lhr.sh

Name                                     Type                       Target     State      Host      

---------------------------------------- -------------------------- ---------- ---------  ----------

ora.LISTENER.lsnr                        ora.listener.type          ONLINE     ONLINE     zffr4cb1101

ora.LISTENER_SCAN1.lsnr                  ora.scan_listener.type     ONLINE     ONLINE     zffr4cb2101

ora.OCR.dg                               ora.diskgroup.type         ONLINE     ONLINE     zffr4cb1101

ora.asm                                  ora.asm.type               ONLINE     ONLINE     zffr4cb1101

ora.cvu                                  ora.cvu.type               ONLINE     ONLINE     zffr4cb2101

ora.gsd                                  ora.gsd.type               OFFLINE    OFFLINE             

ora.net1.network                         ora.network.type           ONLINE     ONLINE     zffr4cb1101

ora.oc4j                                 ora.oc4j.type              ONLINE     ONLINE     zffr4cb2101

ora.ons                                  ora.ons.type               ONLINE     ONLINE     zffr4cb1101

ora.registry.acfs                        ora.registry.acfs.type     ONLINE     ONLINE     zffr4cb1101

ora.scan1.vip                            ora.scan_vip.type          ONLINE     ONLINE     zffr4cb2101

ora.zffr4cb1101.ASM2.asm                 application                ONLINE     ONLINE     zffr4cb1101

ora.zffr4cb1101.LISTENER_ZFFR4CB1101.lsnr application                ONLINE     ONLINE     zffr4cb1101

ora.zffr4cb1101.gsd                      application                OFFLINE    OFFLINE             

ora.zffr4cb1101.ons                      application                ONLINE     ONLINE     zffr4cb1101

ora.zffr4cb1101.vip                      ora.cluster_vip_net1.type  ONLINE     ONLINE     zffr4cb1101

ora.zffr4cb2101.ASM1.asm                 application                ONLINE     ONLINE     zffr4cb2101

ora.zffr4cb2101.LISTENER_ZFFR4CB2101.lsnr application                ONLINE     ONLINE     zffr4cb2101

ora.zffr4cb2101.gsd                      application                OFFLINE    OFFLINE             

ora.zffr4cb2101.ons                      application                ONLINE     ONLINE     zffr4cb2101

ora.zffr4cb2101.vip                      ora.cluster_vip_net1.type  ONLINE     ONLINE     zffr4cb2101

[ZFFR4CB2101:root]/]>

 

Chapter 5  Creating the database silently with DBCA

[grid@ZFFR4CB2101:/home/grid]$ ORACLE_SID=+ASM1

[grid@ZFFR4CB2101:/home/grid]$ sqlplus / as sysasm

 

SQL*Plus: Release 11.2.0.3.0 Production on Fri Mar 11 12:33:18 2016

 

Copyright (c) 1982, 2011, Oracle.  All rights reserved.

 

 

Connected to:

Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production

With the Real Application Clusters and Automatic Storage Management options

 

SQL> CREATE DISKGROUP DATA external redundancy DISK '/dev/rhdisk11';

 

Diskgroup created.

 

SQL> exit

Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production

With the Real Application Clusters and Automatic Storage Management options
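
CREATE DISKGROUP only mounts the new DATA disk group on the ASM instance where the statement is issued; mounting it on the remaining node first avoids the "DiskGroup DATA resources are not running" warning that shows up in the DBCA log below. A sketch using 11.2 srvctl syntax, run as the grid user (adjust the node name to whichever node reports the group as not mounted):

srvctl status diskgroup -g DATA
srvctl start diskgroup -g DATA -n ZFFR4CB1101
# or, from SQL*Plus on the other ASM instance:
#   ALTER DISKGROUP DATA MOUNT;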

 

[oracle@ZFFR4CB2101:/home/oracle]$dbca -silent -createDatabase -templateName General_Purpose.dbc -gdbname orclasm  -sid orclasm -sysPassword lhr -systemPassword lhr -datafileDestination 'DATA/' -redoLogFileSize 50 -recoveryAreaDestination 'DATA/' -storageType ASM -asmsnmpPassword lhr  -diskGroupName 'DATA' -responseFile NO_VALUE -characterset ZHS16GBK -nationalCharacterSet AL16UTF16 -sampleSchema true -automaticMemoryManagement true -totalMemory 9048 -databaseType OLTP -emConfiguration NONE   -nodeinfo ZFFR4CB2101,ZFFR4CB1101

Copying database files

Cleaning up failed steps

4% complete

Copying database files

5% complete

6% complete

7% complete

13% complete

19% complete

24% complete

33% complete

Creating and starting Oracle instance

35% complete

39% complete

43% complete

47% complete

48% complete

50% complete

52% complete

Creating cluster database views

54% complete

71% complete

Completing Database Creation

74% complete

77% complete

85% complete

94% complete

100% complete

Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/orclasm/orclasm0.log" for further details.

[oracle@ZFFR4CB1101:/home/oracle]$

[oracle@ZFFR4CB1101:/home/oracle]$

[oracle@ZFFR4CB1101:/home/oracle]$ more /u01/app/oracle/cfgtoollogs/dbca/orclasm/orclasm0.log

DiskGroup "DATA" resources are not running on nodes "[ZFFR4CB2101]". Database instances may not come up on these nodes.

 

Do you want to continue?

Cleaning up failed steps

DBCA_PROGRESS : 4%

Copying database files

DBCA_PROGRESS : 5%

DBCA_PROGRESS : 6%

DBCA_PROGRESS : 7%

DBCA_PROGRESS : 13%

DBCA_PROGRESS : 19%

DBCA_PROGRESS : 24%

DBCA_PROGRESS : 33%

Creating and starting Oracle instance

DBCA_PROGRESS : 35%

DBCA_PROGRESS : 39%

DBCA_PROGRESS : 43%

DBCA_PROGRESS : 47%

DBCA_PROGRESS : 48%

DBCA_PROGRESS : 50%

DBCA_PROGRESS : 52%

Creating cluster database views

DBCA_PROGRESS : 54%

DBCA_PROGRESS : 71%

Completing Database Creation

DBCA_PROGRESS : 74%

DBCA_PROGRESS : 77%

DBCA_PROGRESS : 85%

DBCA_PROGRESS : 94%

DBCA_PROGRESS : 100%

Database creation complete. For details check the logfiles at:

/u01/app/oracle/cfgtoollogs/dbca/orclasm.

Database Information:

Global Database Name:orclasm

System Identifier(SID) Prefix:orclasm

 

[oracle@ZFFR4CB2101:/home/oracle]$

[oracle@ZFFR4CB1101:/home/oracle]$ crsctl stat res -t                   

--------------------------------------------------------------------------------

NAME           TARGET  STATE        SERVER                   STATE_DETAILS      

--------------------------------------------------------------------------------

Local Resources

--------------------------------------------------------------------------------

ora.DATA.dg

               ONLINE  ONLINE       zffr4cb1101                                 

               ONLINE  ONLINE       zffr4cb2101                                 

ora.LISTENER.lsnr

               ONLINE  ONLINE       zffr4cb1101                                 

               ONLINE  ONLINE       zffr4cb2101                                 

ora.OCR.dg

               ONLINE  ONLINE       zffr4cb1101                                 

               ONLINE  ONLINE       zffr4cb2101                                 

ora.asm

               ONLINE  ONLINE       zffr4cb1101              Started            

               ONLINE  ONLINE       zffr4cb2101              Started            

ora.gsd

               OFFLINE OFFLINE      zffr4cb1101                                 

               OFFLINE OFFLINE      zffr4cb2101                                 

ora.net1.network

               ONLINE  ONLINE       zffr4cb1101                                 

               ONLINE  ONLINE       zffr4cb2101                                 

ora.ons

               ONLINE  ONLINE       zffr4cb1101                                 

               ONLINE  ONLINE       zffr4cb2101                                 

ora.registry.acfs

               ONLINE  ONLINE       zffr4cb1101                                 

               ONLINE  ONLINE       zffr4cb2101                                 

--------------------------------------------------------------------------------

Cluster Resources

--------------------------------------------------------------------------------

ora.LISTENER_SCAN1.lsnr

      1        ONLINE  ONLINE       zffr4cb1101                                 

ora.cvu

      1        ONLINE  ONLINE       zffr4cb1101                                 

ora.oc4j

      1        ONLINE  ONLINE       zffr4cb1101                                 

ora.orclasm.db

      1        ONLINE  ONLINE       zffr4cb1101              Open               

      2        ONLINE  ONLINE       zffr4cb2101              Open               

ora.scan1.vip

      1        ONLINE  ONLINE       zffr4cb1101                                 

ora.zffr4cb1101.vip

      1        ONLINE  ONLINE       zffr4cb1101                                 

ora.zffr4cb2101.vip

      1        ONLINE  ONLINE       zffr4cb2101                                 

[oracle@ZFFR4CB1101:/home/oracle]$

 

[oracle@ZFFR4CB1101:/home/oracle]$ ORACLE_SID=orclasm2

[oracle@ZFFR4CB1101:/home/oracle]$ sqlplus / as sysdba

 

SQL*Plus: Release 11.2.0.3.0 Production on Fri Mar 11 14:47:19 2016

 

Copyright (c) 1982, 2011, Oracle.  All rights reserved.

 

 

Connected to:

Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production

With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,

Data Mining and Real Application Testing options

 

SQL> select INST_ID,name , open_mode, log_mode,force_logging from gv$database;

 

   INST_ID NAME      OPEN_MODE            LOG_MODE     FOR

---------- --------- -------------------- ------------ ---

         2 ORCLASM   READ WRITE           NOARCHIVELOG NO

         1 ORCLASM   READ WRITE           NOARCHIVELOG NO

 

SQL>
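
As a final cross-check that both instances are registered with the clusterware, srvctl can be queried from either node; a minimal sketch:

srvctl config database -d orclasm    # shows the ORACLE_HOME, spfile and instance list
srvctl status database -d orclasm    # both orclasm1 and orclasm2 should be reported as running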

Chapter 6  Uninstallation

crsctl stop has -f

To uninstall the GRID software, run as the grid user: $ORACLE_HOME/deinstall/deinstall

To uninstall the ORACLE software, run as the oracle user: $ORACLE_HOME/deinstall/deinstall

 

 

--dd if=/dev/zero of=/dev/rhdiskN bs=1024k count=1024

--lquerypv -h  /dev/rhdisk5

 

 

dbca -silent -deleteDatabase -sourceDB ora11g -sysDBAUserName sys -sysDBAPassword lhr

$ORACLE_HOME/bin/crsctl stop cluster -f

 

rmuser -p grid

rmuser -p oracle

rmgroup dba

rmgroup asmadmin

rmgroup asmdba 

rmgroup asmoper

rmgroup oinstall

 

rm -rf /tmp/.oracle

rm -rf /tmp/oraclone_RAC

rm -rf /tmp/oraclone


rm -rf /var/tmp/.oracle

rm -rf /opt/ORCLfmap

rm -rf /etc/ora*

rm -rf /etc/ohasd

rm -rf /etc/rc.d/rc2.d/K19ohasd

rm -rf /etc/rc.d/rc2.d/S96ohasd

rm -rf /etc/init.ohasd

rm -rf /etc/inittab.crs

 

fuser -kuxc /u01

umount -f /u01

rmfs -r /u01

 

dd if=/dev/zero of=/dev/rhdiskN bs=1024k count=1024

lquerypv -h  /dev/rhdiskN

 

Chapter 7  Additional notes

7.1  Re-running root.sh

If running root.sh fails during the GRID installation, it can be re-run as described below.

 

--------------------------------------- Re-running root.sh -----------------------------------------

 

--- $ORACLE_HOME here refers to the GRID_HOME path

-------------- (1) Script method

--- If the execution fails, re-run the root.sh script as follows (see the note after these steps about repeating the deconfig on every node):

$ORACLE_HOME/crs/install/rootcrs.pl -deconfig -force -verbose

dd if=/dev/zero of=/dev/rhdiskN bs=1024k count=1024

lquerypv -h  /dev/rhdisk5

$ORACLE_HOME/root.sh
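
If the clusterware configuration had already partly completed on both nodes before root.sh failed, the deconfig step normally has to be repeated on every node, keeping -lastnode for the final one so the OCR/voting-disk contents are cleaned up as well; a sketch (assuming $ORACLE_HOME points to the grid home as noted above):

# on every node except the last, as root:
$ORACLE_HOME/crs/install/rootcrs.pl -deconfig -force -verbose
# on the last node:
$ORACLE_HOME/crs/install/rootcrs.pl -deconfig -force -verbose -lastnode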

 

-------------- (2) GUI method

--------------- Delete DATA1 and the disk from crsconfig_params on both nodes, then reconfigure through the GUI

$ORACLE_HOME/crs/install/crsconfig_params

ASM_DISK_GROUP=DATA1

ASM_DISKS=/dev/rhdisk5

--root

$ORACLE_HOME/crs/install/rootcrs.pl -deconfig -force -verbose

-- GRID

export DISPLAY=22.188.216.132:0.0

$ORACLE_HOME/crs/config/config.sh

------------------------------------------------------------------------

 

 

7.2  [INS-35354] The system on which you are attempting to install Oracle RAC is not part of a valid cluster

 

-------- PRKC-1137 or PRVF-5434 reported when installing the database software or adding a node

Symptoms:

[INS-35354] The system on which you are attempting to install Oracle RAC is not part of a valid cluster

PRKC-1094 : Failed to retrieve the active version of crs: {0}

PRVF-5300 : Failed to retrieve active version for CRS on this node

PRKC-1093 : Failed to retrieve the version of crs software on node

 

Solution:

In /u01/app/oraInventory/ContentsXML/inventory.xml, change <HOME NAME="Ora11g_gridinfrahome1" LOC="/g01/11.2.0/grid" TYPE="O" IDX="1"> to <HOME NAME="Ora11g_gridinfrahome1" LOC="/g01/11.2.0/grid" TYPE="O" IDX="1" CRS="true">, i.e. add CRS="true".

 

[grid@vrh1 ContentsXML]$ cat /u01/app/oraInventory/ContentsXML/inventory.xml  | grep NAME

<HOME NAME="Ora11g_gridinfrahome1" LOC="/g01/11.2.0/grid" TYPE="O" IDX="1" CRS="true">

 

 

 

 

 

---------------------------------------------------------------------------------------------------------------------

 

 

 

About Me

...........................................................................................................................................................................................

Author: 小麦苗 (xiaomaimiao), focused exclusively on database technology, and even more on putting that technology to practical use

ITPUB BLOG: http://blog.itpub.net/26736162

Link to this article: http://blog.itpub.net/26736162/viewspace-2057270/

PDF version of this article: http://yunpan.cn/cdEQedhCs2kFz (access code: ed9b)

QQ: 642808185 (if you add me on QQ, please mention the title of the article you are reading)

Completed at 中行 (Bank of China) between 2016-03-07 10:00 and 2016-03-11 19:00

<All rights reserved. This article may be reposted, but the source URL must be cited as a link; otherwise legal action will be pursued!>

...........................................................................................................................................................................................

 

 

 
