System environment: RHEL6 x86_64, iptables and SELinux disabled

Hosts: 192.168.122.119 server19.example.com

192.168.122.25 server25.example.com (note: the clocks on the two servers must be kept in sync)

192.168.122.1 desktop36.example.com

Required package: drbd-8.4.3.tar.gz

yum repository configuration:

[rhel-source]
name=Red Hat Enterprise Linux $releasever - $basearch - Source
baseurl=ftp://192.168.122.1/pub/yum
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

[HighAvailability]
name=Instructor Server Repository
baseurl=ftp://192.168.122.1/pub/yum/HighAvailability
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
enabled=1

[LoadBalancer]
name=Instructor Server Repository
baseurl=ftp://192.168.122.1/pub/yum/LoadBalancer
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
enabled=1

[ResilientStorage]
name=Instructor Server Repository
baseurl=ftp://192.168.122.1/pub/yum/ResilientStorage
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
enabled=1

[ScalableFileSystem]
name=Instructor Server Repository
baseurl=ftp://192.168.122.1/pub/yum/ScalableFileSystem
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
enabled=1

# Configure pacemaker

Perform the following step on both server19 and server25:

[root@server19 ~]# yum install corosync pacemaker -y

Perform the following steps on server19 (the key and config are copied to server25 afterwards):

[root@server19 ~]# cd /etc/corosync/

[root@server19 corosync]# corosync-keygen (generating the key needs entropy, so keep typing on the keyboard until it finishes)

[root@server19 corosync]# cp corosync.conf.example corosync.conf

[root@server19 corosync]# vim corosync.conf

# Please read the corosync.conf.5 manual page
compatibility: whitetank

totem {
        version: 2
        secauth: off
        threads: 0
        interface {
                ringnumber: 0
                bindnetaddr: 192.168.122.0
                mcastaddr: 226.94.1.1
                mcastport: 5405
                ttl: 1
        }
}

logging {
        fileline: off
        to_stderr: yes
        to_logfile: yes
        to_syslog: yes
        logfile: /var/log/cluster/corosync.log
        debug: off
        timestamp: on
        logger_subsys {
                subsys: AMF
                debug: off
        }
}

amf {
        mode: disabled
}

service {
        ver: 0
        name: pacemaker
        use_mgmtd: yes
}

[root@server19 corosync]# scp corosync.conf authkey root@192.168.122.25:/etc/corosync/

Perform the following step on both server19 and server25:

[root@server19 corosync]# /etc/init.d/corosync start

If you now watch the log with tail -f /var/log/cluster/corosync.log, you will see an error like this:

Jul 27 02:31:31 [1461] server19.example.com pengine: notice: process_pe_message: Configuration ERRORs found during PE processing. Please run "crm_verify -L" to identify issues.

Fix it as follows:

[root@server19 corosync]# crm (note: from RHEL 6.4 on, the crm command is no longer bundled with the pacemaker package; install crmsh separately)

crm(live)# configure

crm(live)configure# property stonith-enabled=false

crm(live)configure# commit

crm(live)configure# quit

[root@server19 corosync]# crm_verify -L (checks the configuration for errors)

Now run crm_mon to enter the monitoring view; if both hosts show as Online, the configuration works.

The remaining configuration only needs to be done on one node; all changes are automatically synchronized to the other node.
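
As a sanity check at this point, you can confirm the ring status and dump the CIB from either node; both nodes should report the same configuration (corosync-cfgtool and crm configure show are the standard tools for this):

[root@server19 corosync]# corosync-cfgtool -s

[root@server19 corosync]# crm configure show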

# Add a virtual IP

[root@server19 corosync]# crm

crm(live)# configure

crm(live)configure# primitive vip ocf:heartbeat:IPaddr2 params ip=192.168.122.178 cidr_netmask=32 op monitor interval=30s

crm(live)configure# commit

crm(live)configure# quit
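
To confirm the address actually came up, check the interfaces on whichever node crm_mon reports as running vip (which interface carries it depends on your network setup):

[root@server19 corosync]# ip addr show | grep 192.168.122.178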

# Ignore quorum checks

[root@server19 corosync]# crm

crm(live)# configure

crm(live)configure# property no-quorum-policy=ignore

crm(live)configure# commit

crm(live)configure# quit

# Add the apache service

1. Perform the following steps on both server19 and server25:

[root@server19 corosync]# vim /etc/httpd/conf/httpd.conf

<Location /server-status>
    SetHandler server-status
    Order deny,allow
    Deny from all
    Allow from 127.0.0.1
</Location>

[root@server19 corosync]# echo `hostname` > /var/www/html/index.html
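
One precaution worth taking on both nodes: pacemaker will start and stop httpd itself, so httpd should not also be started by init at boot. The standard RHEL6 commands:

[root@server19 corosync]# chkconfig httpd off

[root@server19 corosync]# /etc/init.d/httpd stop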

2. Perform the following steps on server19 and server25:

[root@server19 corosync]# crm

crm(live)# configure

crm(live)configure# primitive apache ocf:heartbeat:apache params configfile=/etc/httpd/conf/httpd.conf op monitor interval=1min

crm(live)configure# commit

crm(live)configure# quit

At this point crm_mon may show vip and apache running on different nodes:

Fix (bind vip and apache together):

[root@server19 corosync]# crm

crm(live)# configure

crm(live)configure# colocation apache-with-vip inf: apache vip

crm(live)configure# commit

crm(live)configure# quit

Accessing 192.168.122.178 now returns the page served from server19.
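
A quick failover test: put the active node into standby, watch crm_mon move vip and apache to the other node, then bring it back online (crm node standby/online are standard crmsh subcommands):

[root@server19 corosync]# crm node standby server19.example.com

[root@server19 corosync]# crm node online server19.example.com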

# Configure the master/standby preference

[root@server19 corosync]# crm

crm(live)# configure

crm(live)configure# location master-node apache 10: server19.example.com

crm(live)configure# commit

crm(live)configure# quit
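
The score of 10 makes server19 the preferred node, so resources fail back to it once it recovers. If that extra failback is unwanted, one option (an addition to the original setup, not required by it) is a default resource stickiness larger than the location score:

crm(live)configure# rsc_defaults resource-stickiness=100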

# Configure fencing

Perform the following steps on desktop36:

[root@desktop36 ~]# yum list fence*

[root@desktop36 ~]# yum install fence-virtd.x86_64 fence-virtd-libvirt.x86_64 fence-virtd-multicast.x86_64 fence-virt-0.2.3-9.el6.x86_64 -y

[root@desktop36 ~]# fence_virtd -c

Module search path [/usr/lib64/fence-virt]:

Available backends:
    libvirt 0.1
Available listeners:
    multicast 1.1

Listener modules are responsible for accepting requests
from fencing clients.

Listener module [multicast]:

The multicast listener module is designed for use environments
where the guests and hosts may communicate over a network using
multicast.

The multicast address is the address that a client will use to
send fencing requests to fence_virtd.

Multicast IP Address [225.0.0.12]:

Using ipv4 as family.

Multicast IP Port [1229]:

Setting a preferred interface causes fence_virtd to listen only
on that interface.  Normally, it listens on the default network
interface.  In environments where the virtual machines are
using the host machine as a gateway, this *must* be set
(typically to virbr0).

Set to 'none' for no interface.

Interface [none]: virbr0

The key file is the shared key information which is used to
authenticate fencing requests.  The contents of this file must
be distributed to each physical host and virtual machine within
a cluster.

Key File [/etc/cluster/fence_xvm.key]:

Backend modules are responsible for routing requests to
the appropriate hypervisor or management layer.

Backend module [checkpoint]: libvirt

The libvirt backend module is designed for single desktops or
servers.  Do not use in environments where virtual machines
may be migrated between hosts.

Libvirt URI [qemu:///system]:

Configuration complete.

=== Begin Configuration ===
backends {
	libvirt {
		uri = "qemu:///system";
	}

}

listeners {
	multicast {
		interface = "virbr0";
		port = "1229";
		family = "ipv4";
		address = "225.0.0.12";
		key_file = "/etc/cluster/fence_xvm.key";
	}

}

fence_virtd {
	module_path = "/usr/lib64/fence-virt";
	backend = "libvirt";
	listener = "multicast";
}

=== End Configuration ===

Replace /etc/fence_virt.conf with the above [y/N]? y

Note: except for "Interface", where you enter the bridge the guests use to reach the host (virbr0 here), and "Backend module", where you enter libvirt, every prompt above can be accepted at its default by pressing Enter.

[root@desktop36 ~]# mkdir /etc/cluster

[root@desktop36 ~]# dd if=/dev/urandom of=/etc/cluster/fence_xvm.key bs=128 count=1

Perform the following steps on both server19 and server25:

[root@server19 corosync]# mkdir /etc/cluster

[root@server19 corosync]# yum install fence-virt-0.2.3-9.el6.x86_64 -y

Perform the following steps on desktop36:

[root@desktop36 ~]# scp /etc/cluster/fence_xvm.key root@192.168.122.119:/etc/cluster/

[root@desktop36 ~]# scp /etc/cluster/fence_xvm.key root@192.168.122.25:/etc/cluster/

[root@desktop36 ~]# /etc/init.d/fence_virtd start

[root@desktop36 ~]# netstat -anuple | grep fence

udp    0    0 0.0.0.0:1229    0.0.0.0:*    0    823705    6320/fence_virtd

The listening port shows that fence_virtd started successfully.
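
Before wiring fencing into pacemaker, it is worth checking that fence_virtd answers requests over multicast. From a cluster node that has the key, fence_xvm can list the domains known to the backend:

[root@server19 corosync]# fence_xvm -o list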

Perform the following steps on server19 and server25:

[root@server19 corosync]# crm

crm(live)# configure

crm(live)configure# cib new stonith

crm(stonith)configure# quit

[root@server19 corosync]# crm

crm(live)# configure

crm(live)configure# primitive vmfence stonith:fence_xvm params pcmk_host_map="server19.example.com:vm1 server25.example.com:vm2" op monitor interval=30s

crm(live)configure# property stonith-enabled=true

crm(live)configure# commit

crm(live)configure# quit

Test: disconnect server19 from the network, or run echo c > /proc/sysrq-trigger to simulate a kernel crash, then check that the services are taken over and that server19 is power-cycled by the fence device.
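
The fence path can also be exercised by hand from the surviving node; -H takes the domain name as used in pcmk_host_map above (the default action reboots the domain):

[root@server25 corosync]# fence_xvm -H vm1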

# Configure drbd

Add a virtual disk of the same size to each of server19 and server25.

Perform the following steps on both server19 and server25:

[root@server19 kernel]# yum install kernel-devel make -y

[root@server19 kernel]# tar zxf drbd-8.4.3.tar.gz

[root@server19 kernel]# cd drbd-8.4.3

[root@server19 drbd-8.4.3]# ./configure --enable-spec --with-km

At this point configure may report the following problems:

(1)configure: error: no acceptable C compiler found in $PATH

(2)configure: error: Cannot build utils without flex, either install flex or pass the --without-utils option.

(3)configure: WARNING: No rpmbuild found, building RPM packages is disabled.

(4)configure: WARNING: Cannot build man pages without xsltproc. You may safely ignore this warning when building from a tarball.

(5)configure: WARNING: Cannot update buildtag without git. You may safely ignore this warning when building from a tarball.

Fix them as follows:

(1) [root@server19 drbd-8.4.3]# yum install gcc -y

(2) [root@server19 drbd-8.4.3]# yum install flex -y

(3) [root@server19 drbd-8.4.3]# yum install rpm-build -y

(4) [root@server19 drbd-8.4.3]# yum install libxslt -y

(5) [root@server19 drbd-8.4.3]# yum install git -y

[root@server19 kernel]# mkdir -p ~/rpmbuild/SOURCES

[root@server19 kernel]# cp drbd-8.4.3.tar.gz ~/rpmbuild/SOURCES/

[root@server19 drbd-8.4.3]# rpmbuild -bb drbd.spec

[root@server19 drbd-8.4.3]# rpmbuild -bb drbd-km.spec

[root@server19 drbd-8.4.3]# cd ~/rpmbuild/RPMS/x86_64/

[root@server19 x86_64]# rpm -ivh *

[root@server19 x86_64]# scp ~/rpmbuild/RPMS/x86_64/* root@192.168.122.25:/root/kernel/

Perform the following step on server25:

[root@server25 kernel]# rpm -ivh *

Perform the following steps on both server19 and server25:

[root@server19 ~]# fdisk -cu /dev/vda

Create a partition (usually just one) with its type set to Linux LVM.

[root@server19 ~]# pvcreate /dev/vda1

[root@server19 ~]# vgcreate koenvg /dev/vda1

[root@server19 ~]# lvcreate -L 1G -n koenlv koenvg

Perform the following steps on both server19 and server25:

[root@server19 ~]# cd /etc/drbd.d/

[root@server19 drbd.d]# vim drbd.res

resource koen {
        meta-disk internal;
        device /dev/drbd1;
        syncer {
                verify-alg sha1;
        }
        net {
                allow-two-primaries;
        }
        on server19.example.com {
                disk /dev/mapper/koenvg-koenlv;
                address 192.168.122.119:7789;
        }
        on server25.example.com {
                disk /dev/mapper/koenvg-koenlv;
                address 192.168.122.25:7789;
        }
}

[root@server19 drbd.d]# scp drbd.res root@192.168.122.25:/etc/drbd.d/

Perform the following steps on both server19 and server25:

[root@server19 drbd.d]# drbdadm create-md koen

[root@server19 drbd.d]# /etc/init.d/drbd start

Perform the following step on server19:

[root@server19 drbd.d]# drbdsetup /dev/drbd1 primary --force

(This makes server19 the primary node and starts the initial sync.)

You can watch the sync with watch cat /proc/drbd; when both disk states show ds:UpToDate/UpToDate the initial sync is complete, and you can go on to create the filesystem.

[root@server19 drbd.d]# mkfs.ext4 /dev/drbd1

[root@server19 drbd.d]# mount /dev/drbd1 /var/www/html/

Note: /dev/drbd1 must never be mounted on both hosts at the same time. Only the node whose state is primary can mount it; the other node's state is secondary at that point.

Test: on server19 mount /dev/drbd1 on /var/www/html/, create or edit some files under /var/www/html/, then unmount it (umount /var/www/html/) and run drbdadm secondary koen. On server25 run drbdadm primary koen to make it the primary node, mount /dev/drbd1 there, and check that the contents of /var/www/html/ have been replicated.
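
A minimal sketch of that test sequence (the file name test.html is arbitrary):

[root@server19 ~]# mount /dev/drbd1 /var/www/html/

[root@server19 ~]# echo drbd-test > /var/www/html/test.html

[root@server19 ~]# umount /var/www/html/

[root@server19 ~]# drbdadm secondary koen

[root@server25 ~]# drbdadm primary koen

[root@server25 ~]# mount /dev/drbd1 /var/www/html/

[root@server25 ~]# cat /var/www/html/test.html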

Appendix: growing the device

Perform the following step on both server19 and server25:

[root@server19 ~]# lvextend -L +1000M /dev/mapper/koenvg-koenlv

Perform the following step on both server19 and server25:

[root@server19 ~]# drbdadm resize koen

Perform the following steps on the primary node:

[root@server25 ~]# mount /dev/drbd1 /var/www/html/

[root@server25 ~]# resize2fs /dev/drbd1
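
df should now show the filesystem at its new size:

[root@server25 ~]# df -h /var/www/html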

# Integrate pacemaker with drbd

Perform the following steps on server19 and server25:

[root@server19 ~]# crm

crm(live)# configure

crm(live)configure# primitive webdata ocf:linbit:drbd params drbd_resource=koen op monitor interval=60s

crm(live)configure# ms webdataclone webdata meta master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true

crm(live)configure# primitive webfs ocf:heartbeat:Filesystem params device="/dev/drbd1" directory="/var/www/html" fstype=ext4

crm(live)configure# group webgroup vip apache webfs

crm(live)configure# colocation apache-on-webdata inf: webgroup webdataclone:Master

crm(live)configure# order apache-after-webdata inf: webdataclone:promote webgroup:start

crm(live)configure# commit

crm(live)configure# quit
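
After the commit, crm_mon should show webdataclone with one Master and one Slave, and the webgroup resources (vip, apache, webfs) running together on the Master node; a one-shot status dump is enough to check:

[root@server19 ~]# crm_mon -1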

Appendix: using iSCSI storage

Perform the following steps on desktop36:

[root@desktop36 ~]# yum install scsi-target-utils.x86_64 -y

[root@desktop36 ~]# vim /etc/tgt/targets.conf

<target iqn.2013-07.com.example:server.target1>
    backing-store /dev/vg_desktop36/iscsi-test
    initiator-address 192.168.122.119
    initiator-address 192.168.122.25
</target>

[root@desktop36 ~]# /etc/init.d/tgtd start

Perform the following steps on both server19 and server25:

[root@server19 ~]# iscsiadm -m discovery -t st -p 192.168.122.1

[root@server19 ~]# iscsiadm -m node -l

Partition the iSCSI device with fdisk -cu and create a filesystem on it, as in the sketch below.

Note: this only needs to be done on one node; the other node sees the same LUN and picks up the changes automatically.
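
A minimal sketch of that step, assuming the iSCSI LUN appears as /dev/sda on the node (the name depends on existing disks; check fdisk -l first):

[root@server19 ~]# fdisk -cu /dev/sda

[root@server19 ~]# mkfs.ext4 /dev/sda1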

以下步骤在server19server25上实施:

[root@server19
~]# crm

crm(live)#
configure

crm(live)configure#
primitive iscsi ocf:heartbeat:Filesystem params device=/dev/sda1
directory=/var/www/html fstype=ext4 op monitor
interval=30s

crm(live)configure#
colocation apache-with-iscsi inf: apache iscsi

crm(live)configure#
commit

crm(live)configure#
quit
