I. DRBD Overview

Distributed Replicated Block Device (DRBD) is a software-based, shared-nothing storage replication solution that mirrors the contents of block devices between servers.
 
You can think of it as a network RAID 1: if one of the two servers loses power or crashes, the data remains intact on the other. Automatic hot failover is handled by the Heartbeat layer described later, with no manual intervention required.
 
II. Environment
OS: CentOS 6.6 x86_64 (kernel 2.6.32-504.16.2.el6.x86_64)
DRBD version: DRBD-8.4.3

node1 (primary)   IP: 192.168.0.191   hostname: drbd1.corp.com
node2 (secondary) IP: 192.168.0.192   hostname: drbd2.corp.com

(node1): steps to run on the primary node only
(node2): steps to run on the secondary node only
(node1,node2): steps to run on both nodes
 
III. Pre-installation Preparation (node1,node2)
1. Disable iptables and SELinux to avoid errors during installation.

  # service iptables stop
  # chkconfig iptables off
  # setenforce 0
  # vi /etc/selinux/config
  ---------------
  SELINUX=disabled
  ---------------
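
To confirm both are really off before continuing, an optional quick check:

  # service iptables status        # should report that the firewall is not running
  # getenforce                     # should print Permissive (or Disabled after a reboot)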

2. Configure the hosts file on both nodes:

  # vi /etc/hosts
  192.168.0.191  drbd1.corp.com
  192.168.0.192  drbd2.corp.com
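
A quick sanity check: each node should be able to reach the other by hostname.

  # ping -c 2 drbd1.corp.com
  # ping -c 2 drbd2.corp.com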

3. On each virtual machine, add a 10 GB disk and create a single partition, /dev/sdb1, to serve as the DRBD backing device. Also create the /store directory in the local filesystem, but do not mount anything on it yet.

  # fdisk /dev/sdb
  ----------------
  n - p - 1 - 1 - +10G - w
  ----------------
  # mkdir /store
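
Verify the new partition on both nodes before continuing:

  # fdisk -l /dev/sdb              # should list /dev/sdb1 at roughly 10 GB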

4. Synchronize the clocks:

  # ntpdate -u asia.pool.ntp.org
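
ntpdate only sets the clock once; to keep the two nodes from drifting apart, a simple cron entry works (a minimal sketch, assuming ntpdate is at /usr/sbin/ntpdate):

  # crontab -e
  */30 * * * * /usr/sbin/ntpdate -u asia.pool.ntp.org >/dev/null 2>&1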

IV. Installing and Configuring DRBD
1. Install the build dependencies: (node1,node2)

  # yum install gcc gcc-c++ make glibc flex kernel-devel kernel-headers

2. Build and install DRBD: (node1,node2)

  # wget http://oss.linbit.com/drbd/8.4/drbd-8.4.3.tar.gz
  # tar zxvf drbd-8.4.3.tar.gz
  # cd drbd-8.4.3
  # ./configure --prefix=/usr/local/drbd --with-km
  # make KDIR=/usr/src/kernels/2.6.32-504.16.2.el6.x86_64/
  # make install
  # mkdir -p /usr/local/drbd/var/run/drbd
  # cp /usr/local/drbd/etc/rc.d/init.d/drbd /etc/rc.d/init.d
  # chkconfig --add drbd
  # chkconfig drbd on
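
Because the tools were built with --prefix=/usr/local/drbd, drbdadm and drbdsetup may not be on the default PATH. A quick check, with a possible workaround (the /usr/local/drbd/sbin path is an assumption based on the configure prefix above):

  # which drbdadm || export PATH=$PATH:/usr/local/drbd/sbin
  # drbdadm --version              # should report 8.4.3 once the binary is found

To make the PATH change permanent, the export line can go into /etc/profile.d/drbd.sh.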

3. Load the DRBD kernel module: (node1,node2)

  # modprobe drbd

Check that the module is loaded into the kernel:

  # lsmod | grep drbd
  drbd                  310172  4
  libcrc32c               1246  1 drbd
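
The modprobe above does not persist across reboots by itself (although the drbd init service enabled earlier also loads the module when it starts). On CentOS 6, one common way to load it at boot is a script under /etc/sysconfig/modules/, for example:

  # vi /etc/sysconfig/modules/drbd.modules
  ----------------
  #!/bin/sh
  /sbin/modprobe drbd >/dev/null 2>&1
  ----------------
  # chmod +x /etc/sysconfig/modules/drbd.modules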

4. Configure DRBD: (node1,node2)

  # vi /usr/local/drbd/etc/drbd.conf

Clear the file and add the following configuration:

  resource r0 {
      protocol C;

      startup { wfc-timeout 0; degr-wfc-timeout 120; }
      disk    { on-io-error detach; }
      net {
          timeout        60;
          connect-int    10;
          ping-int       10;
          max-buffers    2048;
          max-epoch-size 2048;
      }
      syncer { rate 200M; }

      on drbd1.corp.com {
          device    /dev/drbd0;
          disk      /dev/sdb1;
          address   192.168.0.191:7788;
          meta-disk internal;
      }
      on drbd2.corp.com {
          device    /dev/drbd0;
          disk      /dev/sdb1;
          address   192.168.0.192:7788;
          meta-disk internal;
      }
  }

Note: adjust the hostnames, IP addresses, and disk in the configuration above to match your own environment.
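
Before creating any metadata, drbdadm can parse the configuration and print it back, which catches most syntax mistakes early (run on both nodes):

  # drbdadm dump r0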
 
5. Create the DRBD device and activate the r0 resource: (node1,node2)

  # mknod /dev/drbd0 b 147 0
  # drbdadm create-md r0

After a moment, output ending in "success" means the DRBD metadata was created:

  Writing meta data...
  initializing activity log
  NOT initializing bitmap
  New drbd meta data block successfully created.

  --== Creating metadata ==--
  As with nodes, we count the total number of devices mirrored by DRBD
  at http://usage.drbd.org.

  The counter works anonymously. It creates a random number to identify
  the device and sends that random number, along with the kernel and
  DRBD version, to usage.drbd.org.

  http://usage.drbd.org/cgi-bin/insert_usage.pl?

  nu=716310175600466686&ru=15741444353112217792&rs=1085704704

  * If you wish to opt out entirely, simply enter 'no'.
  * To continue, just press [RETURN]

  success

Run the command a second time:

  # drbdadm create-md r0
  [need to type 'yes' to confirm] yes

  Writing meta data...
  initializing activity log
  NOT initializing bitmap
  New drbd meta data block successfully created.

The r0 resource is now activated.

6. Start the DRBD service: (node1,node2)

  # service drbd start

Note: the service must be started on both nodes before it takes effect.
 
7. Check the status: (node1,node2)

  # service drbd status
  drbd driver loaded OK; device status:
  version: 8.4.3 (api:1/proto:86-101)
  GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@drbd1.corp.com, 2015-05-12 21:05:41
  m:res  cs         ro                   ds                         p  mounted  fstype
  0:r0   Connected  Secondary/Secondary  Inconsistent/Inconsistent  C

Here ro: Secondary/Secondary means both hosts are currently in the secondary role, and ds (the disk state) shows Inconsistent/Inconsistent because DRBD cannot yet tell which side is the primary, i.e. whose disk data should be treated as authoritative.
 
8. Configure drbd1.corp.com as the primary node: (node1)

  # drbdsetup /dev/drbd0 primary --force
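
This starts the initial full synchronization from node1 to node2. Its progress can be watched on either node:

  # watch -n1 cat /proc/drbd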

Check the DRBD status on each node:
(node1)

  # service drbd status
  drbd driver loaded OK; device status:
  version: 8.4.3 (api:1/proto:86-101)
  GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@drbd1.corp.com, 2015-05-12 21:05:41
  m:res  cs         ro                 ds                 p  mounted  fstype
  0:r0   Connected  Primary/Secondary  UpToDate/UpToDate  C

(node2)

  # service drbd status
  drbd driver loaded OK; device status:
  version: 8.4.3 (api:1/proto:86-101)
  GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@drbd2.corp.com, 2015-05-12 21:05:46
  m:res  cs         ro                 ds                 p  mounted  fstype
  0:r0   Connected  Secondary/Primary  UpToDate/UpToDate  C

ro now shows Primary/Secondary on the primary and Secondary/Primary on the secondary,
and ds shows UpToDate/UpToDate,
which means the primary/secondary configuration succeeded.
 
9. Create a filesystem and mount the DRBD device: (node1)
The status above shows empty mounted and fstype columns, so the next step is to format /dev/drbd0 and mount it on /store:

  # mkfs.ext4 /dev/drbd0
  # mount /dev/drbd0 /store

Note: no operations are allowed on the DRBD device on the Secondary node, not even mounting it; all reads and writes must go through the Primary node. Only when the Primary node fails can the Secondary be promoted to Primary, at which point (with Heartbeat, configured below) the DRBD device is mounted automatically and service continues.
 
DRBD status after a successful mount: (node1)

  # service drbd status
  drbd driver loaded OK; device status:
  version: 8.4.3 (api:1/proto:86-101)
  GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@drbd1.corp.com, 2015-05-12 21:05:41
  m:res  cs         ro                 ds                 p  mounted  fstype
  0:r0   Connected  Primary/Secondary  UpToDate/UpToDate  C  /store   ext4
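
For reference, the roles can also be switched by hand when Heartbeat is not yet managing the resources, using only commands already introduced. On node1 (the current primary), release the device:

  # umount /store
  # drbdadm secondary r0

Then on node2, take over and mount:

  # drbdadm primary r0
  # mount /dev/drbd0 /store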

V. Heartbeat + NFS Configuration

1. Install Heartbeat: (node1,node2)

  # yum install epel-release -y
  # yum --enablerepo=epel install heartbeat -y

2. Set up the Heartbeat configuration file
(node1)
Edit ha.cf and add the following:

  # vi /etc/ha.d/ha.cf
  logfile /var/log/ha-log
  logfacility local0
  keepalive 2
  deadtime 5
  ucast eth0 192.168.0.192    # NIC and IP address of the peer node
  auto_failback off
  node drbd1.corp.com drbd2.corp.com

(node2)
Edit ha.cf and add the following:

  # vi /etc/ha.d/ha.cf
  logfile /var/log/ha-log
  logfacility local0
  keepalive 2
  deadtime 5
  ucast eth0 192.168.0.191    # NIC and IP address of the peer node
  auto_failback off
  node drbd1.corp.com drbd2.corp.com

3. Edit the node authentication file authkeys and add the following: (node1,node2)

  # vi /etc/ha.d/authkeys
  auth 1
  1 crc
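
Note that crc is an integrity check only and provides no real authentication. If the heartbeat link is not a dedicated, trusted network, the sha1 method with a shared secret is the usual alternative (the secret below is only a placeholder):

  auth 2
  2 sha1 SomeSharedSecret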

Restrict the file to mode 600:

  # chmod 600 /etc/ha.d/authkeys

4. Edit the cluster resource file: (node1,node2)

  # vi /etc/ha.d/haresources
  drbd1.corp.com IPaddr::192.168.0.190/24/eth0 drbddisk::r0 Filesystem::/dev/drbd0::/store::ext4 killnfsd

Note: the IPaddr, Filesystem, and other scripts referenced in this file live in /etc/ha.d/resource.d/. You can also place your own service start scripts there (for example mysql or www) and append the script name to the line in /etc/ha.d/haresources, so that Heartbeat starts that service along with the other resources.
 
IPaddr::192.168.0.190/24/eth0: the IPaddr script configures the floating virtual IP used by clients
drbddisk::r0: the drbddisk script promotes and demotes the DRBD resource on the primary and secondary nodes
Filesystem::/dev/drbd0::/store::ext4: the Filesystem script mounts and unmounts the DRBD device
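
In haresources (R1) mode, Heartbeat calls each listed script with the given arguments plus start when it takes the resource group over, and with stop, in reverse order, when it releases it. Roughly what runs on takeover (illustrative only; these are normally invoked by Heartbeat, not by hand):

  # /etc/ha.d/resource.d/IPaddr 192.168.0.190/24/eth0 start
  # /etc/ha.d/resource.d/drbddisk r0 start
  # /etc/ha.d/resource.d/Filesystem /dev/drbd0 /store ext4 start
  # /etc/ha.d/resource.d/killnfsd start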
 
5. Create the killnfsd script, which restarts the NFS service: (node1,node2)

  # vi /etc/ha.d/resource.d/killnfsd
  killall -9 nfsd; /etc/init.d/nfs restart; exit 0

Make it executable (mode 755):

  # chmod 755 /etc/ha.d/resource.d/killnfsd

VI. Creating the DRBD Resource Script drbddisk: (node1,node2)
 
Edit drbddisk and add the following script:

  # vi /etc/ha.d/resource.d/drbddisk
  #!/bin/bash
  #
  # This script is intended to be used as resource script by heartbeat
  #
  # Copyright 2003-2008 LINBIT Information Technologies
  # Philipp Reisner, Lars Ellenberg
  #
  ###

  DEFAULTFILE="/etc/default/drbd"
  DRBDADM="/sbin/drbdadm"

  if [ -f $DEFAULTFILE ]; then
      . $DEFAULTFILE
  fi

  if [ "$#" -eq 2 ]; then
      RES="$1"
      CMD="$2"
  else
      RES="all"
      CMD="$1"
  fi

  ## EXIT CODES
  # since this is a "legacy heartbeat R1 resource agent" script,
  # exit codes actually do not matter that much as long as we conform to
  # http://wiki.linux-ha.org/HeartbeatResourceAgent
  # but it does not hurt to conform to lsb init-script exit codes,
  # where we can.
  # http://refspecs.linux-foundation.org/LSB_3.1.0/
  # LSB-Core-generic/LSB-Core-generic/iniscrptact.html
  ####

  drbd_set_role_from_proc_drbd()
  {
      local out
      if ! test -e /proc/drbd; then
          ROLE="Unconfigured"
          return
      fi

      dev=$( $DRBDADM sh-dev $RES )
      minor=${dev#/dev/drbd}
      if [[ $minor = *[!0-9]* ]] ; then
          # sh-minor is only supported since drbd 8.3.1
          minor=$( $DRBDADM sh-minor $RES )
      fi
      if [[ -z $minor ]] || [[ $minor = *[!0-9]* ]] ; then
          ROLE=Unknown
          return
      fi

      if out=$(sed -ne "/^ *$minor: cs:/ { s/:/ /g; p; q; }" /proc/drbd); then
          set -- $out
          ROLE=${5%/**}
          : ${ROLE:=Unconfigured} # if it does not show up
      else
          ROLE=Unknown
      fi
  }

  case "$CMD" in
      start)
          # try several times, in case heartbeat deadtime
          # was smaller than drbd ping time
          try=6
          while true; do
              $DRBDADM primary $RES && break
              let "--try" || exit 1 # LSB generic error
              sleep 1
          done
          ;;
      stop)
          # heartbeat (haresources mode) will retry failed stop
          # for a number of times in addition to this internal retry.
          try=3
          while true; do
              $DRBDADM secondary $RES && break
              # We used to lie here, and pretend success for anything != 11,
              # to avoid the reboot on failed stop recovery for "simple
              # config errors" and such. But that is incorrect.
              # Don't lie to your cluster manager.
              # And don't do config errors...
              let --try || exit 1 # LSB generic error
              sleep 1
          done
          ;;
      status)
          if [ "$RES" = "all" ]; then
              echo "A resource name is required for status inquiries."
              exit 10
          fi
          ST=$( $DRBDADM role $RES )
          ROLE=${ST%/**}
          case $ROLE in
              Primary|Secondary|Unconfigured)
                  # expected
                  ;;
              *)
                  # unexpected. whatever...
                  # If we are unsure about the state of a resource, we need to
                  # report it as possibly running, so heartbeat can, after failed
                  # stop, do a recovery by reboot.
                  # drbdsetup may fail for obscure reasons, e.g. if /var/lock/ is
                  # suddenly readonly. So we retry by parsing /proc/drbd.
                  drbd_set_role_from_proc_drbd
          esac
          case $ROLE in
              Primary)
                  echo "running (Primary)"
                  exit 0 # LSB status "service is OK"
                  ;;
              Secondary|Unconfigured)
                  echo "stopped ($ROLE)"
                  exit 3 # LSB status "service is not running"
                  ;;
              *)
                  # NOTE the "running" in below message.
                  # this is a "heartbeat" resource script,
                  # the exit code is _ignored_.
                  echo "cannot determine status, may be running ($ROLE)"
                  exit 4 # LSB status "service status is unknown"
                  ;;
          esac
          ;;
      *)
          echo "Usage: drbddisk [resource] {start|stop|status}"
          exit 1
          ;;
  esac

  exit 0

Make it executable (mode 755):

  # chmod 755 /etc/ha.d/resource.d/drbddisk
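
The script can be exercised by hand before Heartbeat uses it; with r0 currently Primary on node1 it should report:

  # /etc/ha.d/resource.d/drbddisk r0 status
  running (Primary)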

VII. Starting the Heartbeat Service
 
Start the Heartbeat service on both nodes, starting with node1: (node1,node2)

  # service heartbeat start
  # chkconfig heartbeat on

If the virtual IP 192.168.0.190 is now reachable (for example by ping) from other machines, the configuration is working.
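
Two further optional checks: the floating IP should appear on the active node's interface, and the Heartbeat log shows the resource takeover.

  # ip addr show eth0 | grep 192.168.0.190
  # tail -f /var/log/ha-log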
 
VIII. Configuring NFS: (node1,node2)
 
Edit the exports file and add the following:

  # vi /etc/exports
  /store *(rw,no_root_squash)

Restart the NFS services:

  # service rpcbind restart
  # service nfs restart
  # chkconfig rpcbind on
  # chkconfig nfs off

Note: NFS is deliberately not set to start at boot, because the /etc/ha.d/resource.d/killnfsd script controls when NFS is (re)started.
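
From any client, the export can be checked against the virtual IP once Heartbeat holds it (requires the standard NFS client utilities on the client):

  # showmount -e 192.168.0.190
  Export list for 192.168.0.190:
  /store *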
 
IX. Testing High Availability
 
1. Normal failover
Mount the NFS share on a client:

  # mount -t nfs 192.168.0.190:/store /tmp

To simulate a failover, stop the heartbeat service on the primary node node1; the standby node node2 takes over immediately and seamlessly, and reads and writes on the NFS share mounted on the client continue to work.
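
One simple way to watch the switchover is to keep writing through the NFS mount from the client while stopping Heartbeat on node1; expect only a pause of a few seconds during takeover (a rough sketch, the file name is arbitrary):

  # while true; do date >> /tmp/failover-test.txt; sleep 1; done

Then, on node1:

  # service heartbeat stop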
 
DRBD status on the standby node node2 after the takeover:

  # service drbd status
  drbd driver loaded OK; device status:
  version: 8.4.3 (api:1/proto:86-101)
  GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@drbd2.corp.com, 2015-05-12 21:05:41
  m:res  cs         ro                 ds                 p  mounted  fstype
  0:r0   Connected  Primary/Secondary  UpToDate/UpToDate  C  /store   ext4

2. Failover after a crash
Force a hard failure by powering off node1 directly.
 
node2 takes over immediately here as well, and reads and writes on the client-mounted NFS share continue to work.
 
DRBD status on node2 at this point:

  # service drbd status
  drbd driver loaded OK; device status:
  version: 8.4.3 (api:1/proto:86-101)
  GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@drbd2.corp.com, 2015-05-12 21:05:41
  m:res  cs            ro               ds                 p  mounted  fstype
  0:r0   WFConnection  Primary/Unknown  UpToDate/DUnknown  C  /store   ext4

X. Common DRBD Configuration Errors

Problem 1: 'ha' ignored, since this host (node2.centos.bz) is not mentioned with an 'on' keyword

Error:
Running drbdadm create-md ha produces:
'ha' ignored, since this host (node2.centos.bz) is not mentioned with an 'on' keyword.

Solution:
The on sections in drbd.conf only used the short names node1 and node2, which do not match the hosts' actual hostnames. Change them to node1.centos.bz and node2.centos.bz respectively.
 
Problem 2: drbdadm create-md ha: exited with code 20

Error:
Running drbdadm create-md ha produces:

  open(/dev/hdb1) failed: No such file or directory
  Command 'drbdmeta 0 v08 /dev/hdb1 internal create-md' terminated with exit code 20
  drbdadm create-md ha: exited with code 20

Solution:
This happens when fdisk /dev/hdb was never run, so the partition does not exist. Create the partition on /dev/hdb as shown below and the command then runs normally.

  # fdisk /dev/hdb                 // create a partition on hdb
  The number of cylinders for this disk is set to 20805.
  There is nothing wrong with that, but this is larger than 1024,
  and could in certain setups cause problems with:
  1) software that runs at boot time (e.g., old versions of LILO)
  2) booting and partitioning software from other OSs
  (e.g., DOS FDISK, OS/2 FDISK)
  Command (m for help): n          // n: create a new partition
  Command action
  e extended
  p primary partition (1-4)
  p                                // p: primary partition
  Partition number (1-4): 1        // partition number 1
  First cylinder (1-20805, default 1):          // first cylinder, press Enter for the default
  Using default value 1
  Last cylinder or +size or +sizeM or +sizeK (1-20805, default 20805):   // last cylinder, press Enter for the default
  Using default value 20805
  Command (m for help): w          // w: write the partition table and exit
  The partition table has been altered!
  Calling ioctl() to re-read partition table.
  Syncing disks.

  # partprobe                      // make the kernel re-read the new partition table
 
Problem 3: drbdadm create-md ha: exited with code 40

Error:
Running drbdadm create-md ha produces:

  Device size would be truncated, which
  would corrupt data and result in
  'access beyond end of device' errors.
  You need to either
  * use external meta data (recommended)
  * shrink that filesystem first
  * zero out the device (destroy the filesystem)
  Operation refused.
  Command 'drbdmeta 0 v08 /dev/hdb1 internal create-md' terminated with exit code 40
  drbdadm create-md ha: exited with code 40

Solution:
The device already contains filesystem data. Zero out the beginning of /dev/hdb1 with dd, after which drbdadm create-md ha runs normally:

  # dd if=/dev/zero of=/dev/hdb1 bs=1M count=100
 
Problem 4: DRBD status stays at Secondary/Unknown

Error:
After starting DRBD on Node1 and Node2, the status remains Secondary/Unknown:

  # service drbd status
  drbd driver loaded OK; device status:
  version: 8.3.8 (api:88/proto:86-94)
  GIT-hash: d78846e52224fd00562f7c225bcc25b2d422321d build by mockbuild@builder10.centos.org, 2010-06-04 08:04:16
  m:res  cs            ro                 ds                     p  mounted  fstype
  0:ha   WFConnection  Secondary/Unknown  Inconsistent/DUnknown  C

Solutions:
1. The DRBD port is not open on Node1 or Node2; open the port or simply stop the iptables service.
2. A split brain may have occurred, typically during an HA switchover. To recover, on the node whose data should be discarded run:

  # drbdadm secondary <resource>
  # drbdadm connect --discard-my-data <resource>

and on the other node run:

  # drbdadm connect <resource>
 
Problem 5: 1: Failure: (104) Can not open backing device

Error:
Running drbdadm up r0 produces:

  1: Failure: (104) Can not open backing device.
  Command 'drbdsetup attach 1 /dev/sdb1 /dev/sdb1 internal' terminated with exit code 10

Solution:
The backing device /dev/sdb1 is probably still mounted; unmount it with umount /dev/sdb1 and retry.
