Keepalived High Availability
Introduction to Keepalived
Keepalived is a high-availability solution built on the VRRP protocol. It is a software implementation of VRRP, originally designed to provide high availability for IPVS services.
1. The VRRP protocol
VRRP is a fault-tolerance protocol. It groups several routing devices into a single virtual router and provides a mechanism to switch traffic to another device promptly when a host's next-hop device fails, preserving the continuity and reliability of communication.
- Common VRRP terms:
- Virtual router: Virtual Router
- Virtual router identifier: VRID (0-255)
- Physical routers: master (active device), backup (standby device), priority
- VIP: Virtual IP
- VMAC: Virtual MAC (00-00-5e-00-01-{VRID})
- How a virtual router works:
- The Master is elected by comparing priorities; the device with the highest priority becomes Master.
- If two routers compete for Master with the same priority, their interface IP addresses are compared and the higher address wins.
- Backup routers continuously monitor the Master's state.
- While the master router is working normally, it sends a VRRP multicast advertisement every Advertisement_Interval to tell the backups in the group that it is healthy.
- If a backup router receives no advertisement from the master within Master_Down_Interval, it promotes itself to master.
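The election rules above can be sketched as a small shell function. This is a simplified illustration, not keepalived's actual code; the function names and the numeric IP comparison are assumptions made for the example.

```shell
#!/bin/bash
# Pick the Master among "priority ip" pairs, mimicking the VRRP rules:
# higher priority wins; on a tie, the higher interface IP address wins.

ip_to_int() {                      # convert a dotted quad to an integer for comparison
    local IFS=. ; set -- $1
    echo $(( ($1<<24) + ($2<<16) + ($3<<8) + $4 ))
}

elect_master() {
    local best_prio=-1 best_ip=0 best="" prio ip n
    while read -r prio ip; do
        n=$(ip_to_int "$ip")
        if (( prio > best_prio )) || { (( prio == best_prio )) && (( n > best_ip )); }; then
            best_prio=$prio; best_ip=$n; best=$ip
        fi
    done
    echo "$best"
}

# Example: equal priorities, so the higher address wins
printf '100 10.1.6.11\n100 10.1.6.12\n' | elect_master   # prints: 10.1.6.12
```

With unequal priorities the address is ignored: feeding `98 10.1.6.12` and `100 10.1.6.11` elects 10.1.6.11.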
2. HA Cluster configuration
2.1 Prerequisites for an HA cluster
- The clocks on all nodes must be synchronized (ntp, chrony).
- Make sure iptables and SELinux do not get in the way.
- Nodes should be able to reach each other by hostname (not strictly required by keepalived); the simplest approach is the /etc/hosts file.
- root on each node can reach the others over key-authenticated ssh (optional).
- The NIC used by keepalived must support multicast and have it enabled (ip link set dev enoxxxxx multicast on|off).
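A quick way to verify the multicast prerequisite is to look for the MULTICAST flag in the `ip link` output. The following is a small sketch; the `has_multicast` helper name is mine, and the sample line is taken from the interface dumps shown later in this article.

```shell
#!/bin/bash
# Succeed if an "ip link show <dev>" header line carries the MULTICAST flag.
has_multicast() {
    grep -q '<[^>]*MULTICAST[^>]*>' <<< "$1"
}

# Sample header line as printed by "ip link show eth0"
line='2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000'
if has_multicast "$line"; then echo "multicast enabled"; else echo "multicast disabled"; fi
# prints: multicast enabled
```

On a live host you would feed it real output, e.g. `has_multicast "$(ip link show eth0 | head -1)"`.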
2.2 Virtual router configuration for the HA cluster
Environment: two Linux hosts configured as a virtual router group, using addresses in the 10.1.0.0/16 network.
- Synchronize time
Install the ntp package:
[root@ _8_ ~]# yum -y install ntp
Edit the ntp configuration so that this host serves as the NTP time server: comment out the existing server lines and add server 127.127.1.0 (the local clock driver).
Restart the ntp service:
[root@ _9_ ~]# service ntpd restart
On host 2, synchronize against host 1:
[root@ _9_ ~]# ntpdate 10.1.6.11
1 Nov 18:38:03 ntpdate[46881]: adjust time server 10.1.6.11 offset -0.000035 sec
- Keepalived single-master model (VIP 10.1.7.19)
Install keepalived:
[root@ _14_ ~]# yum -y install keepalived
On host 1, edit the keepalived configuration file; comment out the virtual_server section for now (it is configured later):
[root@ _15_ ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
notification_email {
root@localhost #administrator mailbox
}
notification_email_from keepalived@localhost #sender address
smtp_server 127.0.0.1 #mail server
smtp_connect_timeout 30 #mail sending timeout
router_id node1 #physical identifier of this router
vrrp_mcast_group4 224.0.200.158 #multicast group (multicast is enabled by default); must match the other hosts forming this virtual router
}
vrrp_instance VI_1 { #vrrp instance; the name VI_1 is arbitrary but must be unique
state MASTER #initial state of this node in the virtual router; exactly one node is MASTER, the rest are BACKUP
interface eth0 #physical interface bound to this virtual router
virtual_router_id 16 #unique identifier of this virtual router (0-255)
priority 100 #priority of this host within the virtual router
advert_int 1 #vrrp advertisement interval
authentication {
auth_type PASS #authentication type: PASS is simple authentication, AH is stronger; PASS is recommended
auth_pass RrpIoZU7 #authentication string
}
virtual_ipaddress {
10.1.7.19/16 dev eth0 #virtual IP configured on the interface
}
}
Note: multicast can be toggled on a NIC with: ip link set dev <iface> multicast on|off
Copy the file to host 2 with scp, then adjust its parameters:
[root@ _15_ ~]# scp /etc/keepalived/keepalived.conf root@10.1.6.12:/etc/keepalived/keepalived.conf
On host 2 change:
state MASTER to state BACKUP
priority 100 to priority 98 (the backup node's priority must be lower than the master's)
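Deriving the backup configuration from the master's can be automated. The following is a minimal sketch of my own (the sed invocation is an illustration, not part of the article's procedure):

```shell
#!/bin/bash
# Turn the master's keepalived settings into the backup variant:
# flip the state and lower the priority below the master's.
master_conf='state MASTER
priority 100'

backup_conf=$(sed -e 's/state MASTER/state BACKUP/' \
                  -e 's/priority 100/priority 98/' <<< "$master_conf")
echo "$backup_conf"
# prints:
# state BACKUP
# priority 98
```

In practice you would run the sed against /etc/keepalived/keepalived.conf on host 2 after the scp copy.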
- Testing
Start the master node; the log shows it becoming MASTER and adding the 10.1.7.19 address:
[root@ _1_ ~]# service keepalived start
Starting keepalived: [ OK ]
[root@ _1_ ~]# tail /var/log/messages
Nov 1 20:13:44 localhost Keepalived_healthcheckers[36312]: Opening file '/etc/keepalived/keepalived.conf'.
Nov 1 20:13:44 localhost Keepalived_healthcheckers[36312]: Configuration is using : 7552 Bytes
Nov 1 20:13:44 localhost kernel: IPVS: Registered protocols (TCP, UDP, SCTP, AH, ESP)
Nov 1 20:13:44 localhost kernel: IPVS: Connection hash table configured (size=4096, memory=64Kbytes)
Nov 1 20:13:44 localhost kernel: IPVS: ipvs loaded.
Nov 1 20:13:44 localhost Keepalived_healthcheckers[36312]: Using LinkWatch kernel netlink reflector...
Nov 1 20:13:44 localhost Keepalived_vrrp[36313]: VRRP_Instance(VI_1) Transition to MASTER STATE
Nov 1 20:13:45 localhost Keepalived_vrrp[36313]: VRRP_Instance(VI_1) Entering MASTER STATE
Nov 1 20:13:45 localhost Keepalived_vrrp[36313]: VRRP_Instance(VI_1) setting protocol VIPs.
Nov 1 20:13:45 localhost Keepalived_healthcheckers[36312]: Netlink reflector reports IP 10.1.7.19 added
Nov 1 20:13:45 localhost Keepalived_vrrp[36313]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 10.1.7.19
Nov 1 20:13:47 localhost ntpd[2238]: Listen normally on 8 eth0 10.1.7.19 UDP 123
Nov 1 20:13:50 localhost Keepalived_vrrp[36313]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 10.1.7.19
[root@ _2_ ~]# ip a
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:9c:14:7c brd ff:ff:ff:ff:ff:ff
inet 10.1.6.11/16 brd 10.1.255.255 scope global eth0
inet 10.1.7.19/16 scope global eth0
inet6 fe80::20c:29ff:fe9c:147c/64 scope link
valid_lft forever preferred_lft forever
Start the backup node; since the master is running normally, the backup does not take over the virtual IP:
[root@ _3_ ~]# service keepalived start
Starting keepalived: [ OK ]
[root@ _1_ ~]# tail /var/log/messages
Nov 1 20:21:44 localhost Keepalived_healthcheckers[2229]: Opening file '/etc/keepalived/keepalived.conf'.
Nov 1 20:21:44 localhost Keepalived_healthcheckers[2229]: Configuration is using : 7556 Bytes
Nov 1 20:21:44 localhost Keepalived_healthcheckers[2229]: Using LinkWatch kernel netlink reflector...
[root@ _4_ ~]# ip a
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:af:fd:ec brd ff:ff:ff:ff:ff:ff
inet 10.1.6.12/16 brd 10.1.255.255 scope global eth0
inet6 fe80::20c:29ff:feaf:fdec/64 scope link
valid_lft forever preferred_lft forever
Stop the keepalived service on the master node:
[root@ _6_ ~]# service keepalived stop
Stopping keepalived: [ OK ]
Master node log: the keepalived service stops and the VRRP IP is removed.
Nov 1 20:28:17 localhost Keepalived[36349]: Stopping Keepalived v1.2.13 (03/19,2015)
Nov 1 20:28:17 localhost Keepalived_vrrp[36352]: VRRP_Instance(VI_1) sending 0 priority
Nov 1 20:28:17 localhost Keepalived_vrrp[36352]: VRRP_Instance(VI_1) removing protocol VIPs.
Nov 1 20:28:17 localhost Keepalived_healthcheckers[36351]: Netlink reflector reports IP 10.1.7.19 removed
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:9c:14:7c brd ff:ff:ff:ff:ff:ff
inet 10.1.6.11/16 brd 10.1.255.255 scope global eth0
inet6 fe80::20c:29ff:fe9c:147c/64 scope link
valid_lft forever preferred_lft forever
Backup node log: it transitions to the MASTER role and configures the VRRP IP 10.1.7.19.
Nov 1 20:28:18 localhost Keepalived_vrrp[2231]: VRRP_Instance(VI_1) Transition to MASTER STATE
Nov 1 20:28:19 localhost Keepalived_vrrp[2231]: VRRP_Instance(VI_1) Entering MASTER STATE
Nov 1 20:28:19 localhost Keepalived_vrrp[2231]: VRRP_Instance(VI_1) setting protocol VIPs.
Nov 1 20:28:19 localhost Keepalived_healthcheckers[2229]: Netlink reflector reports IP 10.1.7.19 added
Nov 1 20:28:19 localhost Keepalived_vrrp[2231]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 10.1.7.19
Nov 1 20:28:24 localhost Keepalived_vrrp[2231]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 10.1.7.19
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:af:fd:ec brd ff:ff:ff:ff:ff:ff
inet 10.1.6.12/16 brd 10.1.255.255 scope global eth0
inet 10.1.7.19/16 scope global secondary eth0
inet6 fe80::20c:29ff:feaf:fdec/64 scope link
valid_lft forever preferred_lft forever
Recover the master node by starting its keepalived service again:
[root@ _8_ ~]# service keepalived start
Starting keepalived: [ OK ]
Master node log: it transitions back to the MASTER role and preempts the 10.1.7.19 IP.
Nov 1 20:34:20 localhost Keepalived_vrrp[36431]: VRRP_Instance(VI_1) Entering MASTER STATE
Nov 1 20:34:20 localhost Keepalived_vrrp[36431]: VRRP_Instance(VI_1) setting protocol VIPs.
Nov 1 20:34:20 localhost Keepalived_healthcheckers[36430]: Netlink reflector reports IP 10.1.7.19 added
Nov 1 20:34:20 localhost Keepalived_vrrp[36431]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 10.1.7.19
Nov 1 20:34:22 localhost ntpd[2238]: Listen normally on 10 eth0 10.1.7.19 UDP 123
Nov 1 20:34:25 localhost Keepalived_vrrp[36431]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 10.1.7.19
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:9c:14:7c brd ff:ff:ff:ff:ff:ff
inet 10.1.6.11/16 brd 10.1.255.255 scope global eth0
inet 10.1.7.19/16 scope global secondary eth0
inet6 fe80::20c:29ff:fe9c:147c/64 scope link
valid_lft forever preferred_lft forever
Backup node log: it returns to the BACKUP role and the IP 10.1.7.19 is removed.
Nov 1 20:34:19 localhost Keepalived_vrrp[2231]: VRRP_Instance(VI_1) Received higher prio advert
Nov 1 20:34:19 localhost Keepalived_vrrp[2231]: VRRP_Instance(VI_1) Entering BACKUP STATE
Nov 1 20:34:19 localhost Keepalived_vrrp[2231]: VRRP_Instance(VI_1) removing protocol VIPs.
Nov 1 20:34:19 localhost Keepalived_healthcheckers[2229]: Netlink reflector reports IP 10.1.7.19 removed
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:af:fd:ec brd ff:ff:ff:ff:ff:ff
inet 10.1.6.12/16 brd 10.1.255.255 scope global eth0
inet6 fe80::20c:29ff:feaf:fdec/64 scope link
valid_lft forever preferred_lft forever
- Keepalived dual-master model (10.1.7.19, 10.1.7.20)
Building on the single-master setup, add a second vrrp_instance block on host 1. The fields that change are:
vrrp_instance VI_2 { #the instance name must differ from every other instance
state BACKUP #initial state; this host is MASTER in the previous instance, so here it is BACKUP
interface eth0
virtual_router_id 17
priority 98 #must be lower than the priority of the MASTER (host 2) for this instance
advert_int 1
authentication {
auth_type PASS
auth_pass 2a6561b9 #use a different authentication string
}
virtual_ipaddress {
10.1.7.20/16 dev eth0 #the second virtual IP
}
}
The corresponding block added on host 2 is:
vrrp_instance VI_2 {
state MASTER
interface eth0
virtual_router_id 17
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass 2a6561b9
}
virtual_ipaddress {
10.1.7.20/16 dev eth0
}
}
- Testing
Stop keepalived on both hosts, then start host 1 first.
Host 1 log: VI_1 starts as MASTER and configures 10.1.7.19; VI_2 also starts as MASTER and configures 10.1.7.20 (host 2 is not up yet).
Nov 1 20:57:42 localhost Keepalived_vrrp[36523]: VRRP_Instance(VI_1) Transition to MASTER STATE
Nov 1 20:57:43 localhost Keepalived_vrrp[36523]: VRRP_Instance(VI_1) Entering MASTER STATE
Nov 1 20:57:43 localhost Keepalived_vrrp[36523]: VRRP_Instance(VI_1) setting protocol VIPs.
Nov 1 20:57:43 localhost Keepalived_healthcheckers[36522]: Netlink reflector reports IP 10.1.7.19 added
Nov 1 20:57:43 localhost Keepalived_vrrp[36523]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 10.1.7.19
Nov 1 20:57:45 localhost Keepalived_vrrp[36523]: VRRP_Instance(VI_2) Transition to MASTER STATE
Nov 1 20:57:45 localhost ntpd[2238]: Listen normally on 11 eth0 10.1.7.19 UDP 123
Nov 1 20:57:46 localhost Keepalived_vrrp[36523]: VRRP_Instance(VI_2) Entering MASTER STATE
Nov 1 20:57:46 localhost Keepalived_vrrp[36523]: VRRP_Instance(VI_2) setting protocol VIPs.
Nov 1 20:57:46 localhost Keepalived_vrrp[36523]: VRRP_Instance(VI_2) Sending gratuitous ARPs on eth0 for 10.1.7.20
Nov 1 20:57:46 localhost Keepalived_healthcheckers[36522]: Netlink reflector reports IP 10.1.7.20 added
Nov 1 20:57:47 localhost ntpd[2238]: Listen normally on 12 eth0 10.1.7.20 UDP 123
Nov 1 20:57:48 localhost Keepalived_vrrp[36523]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 10.1.7.19
Nov 1 20:57:51 localhost Keepalived_vrrp[36523]: VRRP_Instance(VI_2) Sending gratuitous ARPs on eth0 for 10.1.7.20
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:9c:14:7c brd ff:ff:ff:ff:ff:ff
inet 10.1.6.11/16 brd 10.1.255.255 scope global eth0
inet 10.1.7.19/16 scope global secondary eth0
inet 10.1.7.20/16 scope global secondary eth0
inet6 fe80::20c:29ff:fe9c:147c/64 scope link
valid_lft forever preferred_lft forever
Start host 2.
Host 1 log: VI_2 transitions to the BACKUP role and the 10.1.7.20 IP is removed:
Nov 1 21:03:36 localhost Keepalived_vrrp[36523]: VRRP_Instance(VI_2) Received higher prio advert
Nov 1 21:03:36 localhost Keepalived_vrrp[36523]: VRRP_Instance(VI_2) Entering BACKUP STATE
Nov 1 21:03:36 localhost Keepalived_vrrp[36523]: VRRP_Instance(VI_2) removing protocol VIPs.
Nov 1 21:03:36 localhost Keepalived_healthcheckers[36522]: Netlink reflector reports IP 10.1.7.20 removed
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:9c:14:7c brd ff:ff:ff:ff:ff:ff
inet 10.1.6.11/16 brd 10.1.255.255 scope global eth0
inet 10.1.7.19/16 scope global secondary eth0
inet6 fe80::20c:29ff:fe9c:147c/64 scope link
valid_lft forever preferred_lft forever
Host 2 log: VI_2 transitions to the MASTER role and configures the 10.1.7.20 IP.
Nov 1 21:03:36 localhost Keepalived_vrrp[2380]: VRRP_Instance(VI_2) Transition to MASTER STATE
Nov 1 21:03:36 localhost Keepalived_vrrp[2380]: VRRP_Instance(VI_2) Received lower prio advert, forcing new election
Nov 1 21:03:37 localhost Keepalived_vrrp[2380]: VRRP_Instance(VI_2) Entering MASTER STATE
Nov 1 21:03:37 localhost Keepalived_vrrp[2380]: VRRP_Instance(VI_2) setting protocol VIPs.
Nov 1 21:03:37 localhost Keepalived_healthcheckers[2378]: Netlink reflector reports IP 10.1.7.20 added
Nov 1 21:03:37 localhost Keepalived_vrrp[2380]: VRRP_Instance(VI_2) Sending gratuitous ARPs on eth0 for 10.1.7.20
Nov 1 21:03:42 localhost Keepalived_vrrp[2380]: VRRP_Instance(VI_2) Sending gratuitous ARPs on eth0 for 10.1.7.20
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:af:fd:ec brd ff:ff:ff:ff:ff:ff
inet 10.1.6.12/16 brd 10.1.255.255 scope global eth0
inet 10.1.7.20/16 scope global secondary eth0
inet6 fe80::20c:29ff:feaf:fdec/64 scope link
valid_lft forever preferred_lft forever
Stop the keepalived service on host 1.
Host 2 log: VI_1 transitions to the MASTER role and configures the 10.1.7.19 IP.
Nov 1 21:07:47 localhost Keepalived_vrrp[2380]: VRRP_Instance(VI_1) Transition to MASTER STATE
Nov 1 21:07:48 localhost Keepalived_vrrp[2380]: VRRP_Instance(VI_1) Entering MASTER STATE
Nov 1 21:07:48 localhost Keepalived_vrrp[2380]: VRRP_Instance(VI_1) setting protocol VIPs.
Nov 1 21:07:48 localhost Keepalived_healthcheckers[2378]: Netlink reflector reports IP 10.1.7.19 added
Nov 1 21:07:48 localhost Keepalived_vrrp[2380]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 10.1.7.19
Nov 1 21:07:53 localhost Keepalived_vrrp[2380]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 10.1.7.19
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:af:fd:ec brd ff:ff:ff:ff:ff:ff
inet 10.1.6.12/16 brd 10.1.255.255 scope global eth0
inet 10.1.7.20/16 scope global secondary eth0
inet 10.1.7.19/16 scope global secondary eth0
inet6 fe80::20c:29ff:feaf:fdec/64 scope link
valid_lft forever preferred_lft forever
3. Keepalived cluster + IPVS (DR) cluster
Topology
10.1.7.11 and 10.1.7.12 are the two real servers providing the web service.
The two servers on the left are the directors: master 10.1.6.11 and backup 10.1.6.12.
The master and backup nodes form a keepalived high-availability pair; the virtual IP is 10.1.8.88.
Install httpd on both real servers, create a test page, start the httpd service, and request the test pages from the master or backup node:
[root@ _2_ ~]# yum -y install httpd
[root@ _2_ ~]# cat /var/www/html/index.html
<h1>Server 1</h1>
[root@ _2_ ~]# yum -y install httpd
[root@ _2_ ~]# cat /var/www/html/index.html
<h1>Server 2</h1>
[root@ _3_ ~]# curl http://10.1.7.11
<h1>Server 1</h1>
[root@ _4_ ~]# curl http://10.1.7.12
<h1>Server 2</h1>
Write the DR-model initialization script on the real servers and run it on both:
#!/bin/bash
vip='10.1.8.88'
vport='80'
netmask='255.255.255.255'
iface='lo:0'

case $1 in
start)
    echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
    echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
    ifconfig $iface $vip netmask $netmask broadcast $vip up
    route add -host $vip dev $iface
    ;;
stop)
    ifconfig $iface down
    echo 0 > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo 0 > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo 0 > /proc/sys/net/ipv4/conf/all/arp_announce
    echo 0 > /proc/sys/net/ipv4/conf/lo/arp_announce
    ;;
*)
    echo "Usage: $(basename $0) start|stop"
    exit 1
    ;;
esac

Check the configuration:
lo:0: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 10.1.8.88 netmask 255.255.255.255
loop txqueuelen 0 (Local Loopback)
Configure keepalived on the master and backup nodes.
The master node's configuration is below; the backup node must change state to BACKUP (and use a lower priority).
! Configuration File for keepalived
global_defs {
notification_email {
root@localhost
}
notification_email_from Keepalived@localhost
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id node1
vrrp_mcast_group4 224.0.200.158
}
vrrp_instance VI_1 {
state MASTER
interface eth0
virtual_router_id 16
priority 98
advert_int 1
authentication {
auth_type PASS
auth_pass 2a6561b8
}
virtual_ipaddress {
10.1.8.88/16 dev eth0
}
}
Verify that the virtual IP fails over back and forth when the master and backup nodes go down in turn.
Install ipvsadm on the master and backup nodes and test scheduling to the back-end real servers to make sure it works.
Master node:
[root@ _8_ ~]# yum -y install ipvsadm
[root@ _8_ ~]# ipvsadm -A -t 10.1.8.88:80 -s rr
[root@ _9_ ~]# ipvsadm -a -t 10.1.8.88:80 -r 10.1.7.11 -g -w 1
[root@ _10_ ~]# ipvsadm -a -t 10.1.8.88:80 -r 10.1.7.12 -g -w 1
[root@ _11_ ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.1.8.88:80 rr
-> 10.1.7.11:80 Route 1 0 0
-> 10.1.7.12:80 Route 1 0 0
[root@ _13_ ~]# for i in {1..10};do curl http://10.1.8.88 ;done
<h1>Server 2</h1>
<h1>Server 1</h1>
<h1>Server 2</h1>
<h1>Server 1</h1>
<h1>Server 2</h1>
<h1>Server 1</h1>
<h1>Server 2</h1>
<h1>Server 1</h1>
<h1>Server 2</h1>
<h1>Server 1</h1>
Scheduling works; clear the rules:
[root@ _25_ ~]# ipvsadm -C
Test the backup node the same way.
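To check the round-robin distribution less by eye, the curl responses can be tallied. This is a small sketch of my own (the tally helper is not part of the article's procedure); it is fed the ten responses shown above:

```shell
#!/bin/bash
# Count how many responses each real server produced; with rr scheduling
# and 10 requests, the two counts should come out equal.
tally() {
    sort | uniq -c | sed 's/^ *//'
}

responses='<h1>Server 2</h1>
<h1>Server 1</h1>
<h1>Server 2</h1>
<h1>Server 1</h1>
<h1>Server 2</h1>
<h1>Server 1</h1>
<h1>Server 2</h1>
<h1>Server 1</h1>
<h1>Server 2</h1>
<h1>Server 1</h1>'

tally <<< "$responses"
# prints:
# 5 <h1>Server 1</h1>
# 5 <h1>Server 2</h1>
```

Against a live director: for i in {1..10}; do curl -s http://10.1.8.88; done | tally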
Define the virtual server in the keepalived configuration.
Add a virtual_server section to the keepalived configuration on both the master and backup nodes:
virtual_server 10.1.8.88 80 { #virtual server IP and port
delay_loop 3 #health-check polling interval
lb_algo rr #scheduling algorithm
lb_kind DR #LVS forwarding type
protocol TCP #service protocol (TCP here)
real_server 10.1.7.11 80 { #real server IP and port
weight 1 #weight
HTTP_GET { #health-check method
url {
path / #URL to monitor
status_code 200 #a 200 response code counts as healthy
}
connect_timeout 1 #connection timeout
nb_get_retry 3 #number of retries
delay_before_retry 1 #delay before each retry
}
}
real_server 10.1.7.12 80 {
weight 1
HTTP_GET {
url {
path /
status_code 200
}
connect_timeout 1
nb_get_retry 3
delay_before_retry 1
}
}
}
Start the keepalived service on the master and backup nodes, then check the IPs and the ipvs rules.
Master node:
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:9c:14:7c brd ff:ff:ff:ff:ff:ff
inet 10.1.6.11/16 brd 10.1.255.255 scope global eth0
inet 10.1.8.88/16 scope global secondary eth0
inet6 fe80::20c:29ff:fe9c:147c/64 scope link
valid_lft forever preferred_lft forever
[root@ _33_ ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.1.8.88:80 rr
-> 10.1.7.11:80 Route 1 0 0
-> 10.1.7.12:80 Route 1 0 0
Backup node:
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:af:fd:ec brd ff:ff:ff:ff:ff:ff
inet 10.1.6.12/16 brd 10.1.255.255 scope global eth0
inet6 fe80::20c:29ff:feaf:fdec/64 scope link
valid_lft forever preferred_lft forever
[root@ _28_ ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.1.8.88:80 rr
-> 10.1.7.11:80 Route 1 0 0
-> 10.1.7.12:80 Route 1 0 0
Test access from a client.
Everything works:
[root@ _16_ ~]# for i in {1..10};do curl http://10.1.8.88 ;done
<h1>Server 2</h1>
<h1>Server 1</h1>
<h1>Server 2</h1>
<h1>Server 1</h1>
<h1>Server 2</h1>
<h1>Server 1</h1>
<h1>Server 2</h1>
<h1>Server 1</h1>
<h1>Server 2</h1>
<h1>Server 1</h1>
Fail one of the real servers and check access.
Stop the httpd service on real server 2:
[root@ _5_ ~]# systemctl stop httpd
Check the ipvs rules on the master node; real server 2 has been taken offline:
[root@ _38_ ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.1.8.88:80 rr
-> 10.1.7.11:80 Route 1 0 10
The client requests still succeed:
[root@ _17_ ~]# for i in {1..10};do curl http://10.1.8.88 ;done
<h1>Server 1</h1>
<h1>Server 1</h1>
<h1>Server 1</h1>
<h1>Server 1</h1>
<h1>Server 1</h1>
<h1>Server 1</h1>
<h1>Server 1</h1>
<h1>Server 1</h1>
<h1>Server 1</h1>
<h1>Server 1</h1>
Access works normally.
Recover the failed real server and check access again:
[root@ _6_ ~]# systemctl start httpd
Check the ipvs rules on the master node; the real server has rejoined:
[root@ _39_ ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.1.8.88:80 rr
-> 10.1.7.11:80 Route 1 0 0
-> 10.1.7.12:80 Route 1 0 0
Client access test:
[root@ _18_ ~]# for i in {1..10};do curl http://10.1.8.88 ;done
<h1>Server 2</h1>
<h1>Server 1</h1>
<h1>Server 2</h1>
<h1>Server 1</h1>
<h1>Server 2</h1>
<h1>Server 1</h1>
<h1>Server 2</h1>
<h1>Server 1</h1>
<h1>Server 2</h1>
<h1>Server 1</h1>
Scheduling works normally.
4. Configuring a sorry server on the keepalived master and backup nodes
Install httpd on both the master and backup nodes and create the pages; it is best to stop the keepalived service on both first.
[root@ _41_ ~]# yum -y install httpd
Create the page on the master node:
[root@ _38_ ~]# cat /var/www/html/index.html
<h1>LB Cluster Fault,this is Sorry Server 1</h1>
Create the page on the backup node:
[root@ _38_ ~]# cat /var/www/html/index.html
<h1>LB Cluster Fault,this is Sorry Server 2</h1>
Edit the keepalived configuration and add a sorry_server entry inside the virtual_server section on both the master and backup nodes:
virtual_server 10.1.8.88 80 {
delay_loop 3
lb_algo rr
lb_kind DR
protocol TCP
sorry_server 127.0.0.1 80
real_server 10.1.7.11 80 {
weight 1
...
Start httpd and keepalived on both the master and backup nodes, and stop httpd on both real servers:
[root@ _48_ ~]# service httpd start
[root@ _44_ ~]# service keepalived start
Starting keepalived: [ OK ]
[root@ _12_ ~]# systemctl stop httpd
A client request now gets the sorry server's response:
[root@ _22_ ~]# curl http://10.1.8.88
<h1>LB Cluster Fault,this is Sorry Server 1</h1>
Start the httpd service on one real server and test from the client; the response is normal again:
[root@ _23_ ~]# curl http://10.1.8.88
<h1>Server 2</h1>
5. Calling external scripts from keepalived to adjust priority from the result in real time
How scripts are defined and called:
(1) Define the script first:
vrrp_script <SCRIPT_NAME> {
script ""
interval INT
weight -INT
}
(2) Then reference it:
track_script {
SCRIPT_NAME_1
SCRIPT_NAME_2
...
}
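The check script used below, `[[ -f /etc/keepalived/down ]] && exit 1 || exit 0`, simply maps a file's existence to an exit status, which keepalived interprets as failed/passed. Its behaviour can be checked in isolation; this throwaway sketch uses a temp file instead of /etc/keepalived/down:

```shell
#!/bin/bash
# keepalived treats exit 0 as "check passed" and non-zero as "check failed".
flagfile=$(mktemp -u)            # a path that does not exist yet

check() { [[ -f "$flagfile" ]] && exit 1 || exit 0; }

( check ); echo "without flag file: exit $?"   # prints: without flag file: exit 0
touch "$flagfile"
( check ); echo "with flag file: exit $?"      # prints: with flag file: exit 1
rm -f "$flagfile"
```

The `( check )` subshell keeps the `exit` from terminating the surrounding shell, which is also why keepalived runs the script in its own process.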
Add the script section to the keepalived configuration on both master and backup nodes; the script returns failure whenever /etc/keepalived/down exists.
Master node:
! Configuration File for keepalived
global_defs {
notification_email {
root@localhost
}
notification_email_from Keepalived@localhost
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id node1
vrrp_mcast_group4 224.0.200.158
}
vrrp_script chk_down { #script definition
script "[[ -f /etc/keepalived/down ]] && exit 1 || exit 0" #check for the down file; fail if it exists
interval 1 #how often the script runs
weight -5 #on failure subtract 5 from the priority, enough to drop below the backup's priority
}
vrrp_instance VI_1 {
state MASTER
interface eth0
virtual_router_id 16
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass 2a6561b8
}
virtual_ipaddress {
10.1.8.88/16 dev eth0
}
track_script { #scripts to track
chk_down #name of the script to call
}
}
On the backup node, change state to BACKUP and priority to 98.
Start the keepalived service on both nodes and check the IPs:
[root@ _72_ /etc/keepalived]# service keepalived start
Starting keepalived: [ OK ]
Master node:
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:9c:14:7c brd ff:ff:ff:ff:ff:ff
inet 10.1.6.11/16 brd 10.1.255.255 scope global eth0
inet 10.1.8.88/16 scope global secondary eth0
Backup node:
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:af:fd:ec brd ff:ff:ff:ff:ff:ff
inet 10.1.6.12/16 brd 10.1.255.255 scope global eth0
Create the /etc/keepalived/down file and watch the IP move:
[root@ _161_ /etc/keepalived]# touch down
Master node:
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:9c:14:7c brd ff:ff:ff:ff:ff:ff
inet 10.1.6.11/16 brd 10.1.255.255 scope global eth0
inet6 fe80::20c:29ff:fe9c:147c/64 scope link
Backup node:
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:af:fd:ec brd ff:ff:ff:ff:ff:ff
inet 10.1.6.12/16 brd 10.1.255.255 scope global eth0
inet 10.1.8.88/16 scope global secondary eth0
inet6 fe80::20c:29ff:feaf:fdec/64 scope link
Master node log: the check script fails, the weight drops by 5, the node switches to the BACKUP role, and the IP 10.1.8.88 is removed:
Nov 3 08:24:02 localhost Keepalived_vrrp[4853]: VRRP_Script(chk_down) failed
Nov 3 08:24:03 localhost Keepalived_vrrp[4853]: VRRP_Instance(VI_1) Received higher prio advert
Nov 3 08:24:03 localhost Keepalived_vrrp[4853]: VRRP_Instance(VI_1) Entering BACKUP STATE
Nov 3 08:24:03 localhost Keepalived_vrrp[4853]: VRRP_Instance(VI_1) removing protocol VIPs.
Nov 3 08:24:03 localhost Keepalived_healthcheckers[4852]: Netlink reflector reports IP 10.1.8.88 removed
Delete the /etc/keepalived/down file on the master node and watch the IP move back:
[root@ _163_ /etc/keepalived]# rm -rf down
Master node: the IP has been reclaimed:
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:9c:14:7c brd ff:ff:ff:ff:ff:ff
inet 10.1.6.11/16 brd 10.1.255.255 scope global eth0
inet 10.1.8.88/16 scope global secondary eth0
inet6 fe80::20c:29ff:fe9c:147c/64 scope link 日志
Nov 3 08:32:01 localhost Keepalived_healthcheckers[4852]: Netlink reflector reports IP 10.1.8.88 added
Nov 3 08:32:01 localhost Keepalived_vrrp[4853]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 10.1.8.88
Nov 3 08:32:03 localhost ntpd[4558]: Listen normally on 11 eth0 10.1.8.88 UDP 123
6. Keepalived with nginx scheduling (using a helper script to monitor the nginx service)
On the real servers, undo the lo:0 interface and ARP kernel-parameter settings made in the previous example by running the stop branch of the DR initialization script from section 3, and stop the keepalived service:
[root@ _16_ ~]# bash set_dr stop
Stop the httpd service that was started for the sorry server on the master and backup nodes:
[root@ _50_ ~]# service httpd stop
Stopping httpd: [ OK ]
Install nginx on the master and backup nodes:
[root@ _173_ /etc/keepalived]# yum -y install nginx
Edit the nginx configuration to set up the reverse proxy.
In the http context of /etc/nginx/nginx.conf add:
upstream websrvs {
server 10.1.7.11;
server 10.1.7.12;
}
In the location context of /etc/nginx/conf.d/default.conf add proxy_pass http://websrvs; for example:
location / {
root /usr/share/nginx/html;
proxy_pass http://websrvs;
index index.html index.htm;
}
Start the nginx and keepalived services on the master and backup nodes, then test access:
[root@ _18_ /etc]# curl http://10.1.8.88
<h1>Server 1</h1>
[root@ _19_ /etc]# curl http://10.1.8.88
<h1>Server 2</h1>
[root@ _20_ /etc]# curl http://10.1.8.88
<h1>Server 1</h1>
[root@ _21_ /etc]# curl http://10.1.8.88
<h1>Server 2</h1>
Access works normally.
Add a vrrp_script on the master and backup nodes that fails when nginx is not running:
vrrp_script chk_nginx {
script "killall -0 nginx && exit 0 || exit 1"
interval 1
weight -5
}
chk_nginx must also be added to the track_script section:
track_script {
chk_down
chk_nginx
}
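`killall -0 nginx` sends no signal at all; it only reports via its exit status whether a process named nginx exists. The same signal-0 probe works against any PID; here is a quick sketch using the current shell's own PID, which is guaranteed to exist:

```shell
#!/bin/bash
# Signal 0 performs the existence/permission check without delivering a signal.
is_alive() {
    kill -0 "$1" 2>/dev/null
}

if is_alive "$$"; then echo "process $$ is alive"; fi   # the current shell: always alive
is_alive 99999999 || echo "no such process"             # a PID far above the usual pid_max
```

This is why the chk_nginx script is cheap enough to run every second: no process is touched, only the process table is consulted.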
Restart keepalived on the master node, then restart it on the backup node.
The virtual IP 10.1.8.88 now sits on the master node:
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:9c:14:7c brd ff:ff:ff:ff:ff:ff
inet 10.1.6.11/16 brd 10.1.255.255 scope global eth0
inet 10.1.8.88/16 scope global secondary eth0
inet6 fe80::20c:29ff:fe9c:147c/64 scope link
valid_lft forever preferred_lft forever
Client access is normal:
[root@ _22_ /etc]# curl http://10.1.8.88
<h1>Server 1</h1>
[root@ _23_ /etc]# curl http://10.1.8.88
<h1>Server 2</h1>
[root@ _24_ /etc]# curl http://10.1.8.88
<h1>Server 1</h1>
Stop the nginx service on the master node:
[root@ _12_ ~]# service nginx stop
Stopping nginx: [ OK ]
The IP has been removed from the master node:
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:9c:14:7c brd ff:ff:ff:ff:ff:ff
inet 10.1.6.11/16 brd 10.1.255.255 scope global eth0
inet6 fe80::20c:29ff:fe9c:147c/64 scope link
valid_lft forever preferred_lft forever
Master node log: the vrrp_script check fails, the node enters the FAULT state, and the IP is removed:
Nov 3 18:00:25 localhost Keepalived_vrrp[75164]: VRRP_Script(chk_nginx) failed
Nov 3 18:00:25 localhost Keepalived_vrrp[75164]: VRRP_Instance(VI_1) Entering FAULT STATE
Nov 3 18:00:25 localhost Keepalived_vrrp[75164]: VRRP_Instance(VI_1) removing protocol VIPs.
Nov 3 18:00:25 localhost Keepalived_vrrp[75164]: VRRP_Instance(VI_1) Now in FAULT state
Nov 3 18:00:25 localhost Keepalived_healthcheckers[75163]: Netlink reflector reports IP 10.1.8.88 removed
Backup node IP and log: it has acquired the 10.1.8.88 address and switched to the MASTER role:
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:af:fd:ec brd ff:ff:ff:ff:ff:ff
inet 10.1.6.12/16 brd 10.1.255.255 scope global eth0
inet 10.1.8.88/16 scope global secondary eth0
inet6 fe80::20c:29ff:feaf:fdec/64 scope link
Nov 3 18:00:26 localhost Keepalived_vrrp[75084]: VRRP_Instance(VI_1) Transition to MASTER STATE
Nov 3 18:00:27 localhost Keepalived_vrrp[75084]: VRRP_Instance(VI_1) Entering MASTER STATE
Nov 3 18:00:27 localhost Keepalived_vrrp[75084]: VRRP_Instance(VI_1) setting protocol VIPs.
Nov 3 18:00:27 localhost Keepalived_healthcheckers[75083]: Netlink reflector reports IP 10.1.8.88 added
Client access test: scheduling is normal.
[root@ _25_ /etc]# curl http://10.1.8.88
<h1>Server 2</h1>
[root@ _26_ /etc]# curl http://10.1.8.88
<h1>Server 1</h1>
[root@ _27_ /etc]# curl http://10.1.8.88
<h1>Server 2</h1>
[root@ _28_ /etc]# curl http://10.1.8.88
<h1>Server 1</h1>
Appendix: running a script when keepalived changes role
On the backup node, write the script /etc/keepalived/notify.sh; whenever the role changes, it sends mail to root:
#!/bin/bash
#
contact='root@localhost'

notify() {
mailsubject="$(hostname) to be $1, vip floating."
mailbody="$(date +'%F %T'): vrrp transition, $(hostname) changed to be $1"
echo "$mailbody" | mail -s "$mailsubject" $contact
}

case $1 in
master)
notify master
;;
backup)
notify backup
;;
fault)
notify fault
;;
*)
echo "Usage: $(basename $0) {master|backup|fault}"
exit 1
;;
esac
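The subject/body construction in notify() can be exercised without actually sending mail. This standalone sketch of mine replaces the external `mail` command with a shell-function stub, so the message is printed instead of delivered:

```shell
#!/bin/bash
# notify() from the script above, with `mail` stubbed out for inspection.
contact='root@localhost'

mail() {                       # stub; arguments arrive as: -s <subject> <recipient>
    echo "SUBJECT: $2"
    echo -n "BODY: "; cat
}

notify() {
    mailsubject="$(hostname) to be $1, vip floating."
    mailbody="$(date +'%F %T'): vrrp transition, $(hostname) changed to be $1"
    echo "$mailbody" | mail -s "$mailsubject" $contact
}

notify master
```

Because shell functions shadow external commands, the real `mail` binary is never invoked; dropping the stub restores normal delivery.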
Call the script in the vrrp_instance section, then restart the keepalived service:
notify_master "/etc/keepalived/notify.sh master"
notify_backup "/etc/keepalived/notify.sh backup"
notify_fault "/etc/keepalived/notify.sh fault"
Stop the keepalived service on the master node and check root's mail.
Backup node: a mail announcing the switch to the master role has arrived:
>N 1 root Thu Nov 3 18:41 18/731 "localhost.localdomain to be master, vip floating."
& 1
Message 1:
From root@localhost.localdomain Thu Nov 3 18:41:46 2016
Return-Path: <root@localhost.localdomain>
X-Original-To: root@localhost
Delivered-To: root@localhost.localdomain
Date: Thu, 03 Nov 2016 18:41:46 +0800
To: root@localhost.localdomain
Subject: localhost.localdomain to be master, vip floating.
User-Agent: Heirloom mailx 12.4 7/29/08
Content-Type: text/plain; charset=us-ascii
From: root@localhost.localdomain (root)
Status: R
2016-11-03 18:41:46: vrrp transition, localhost.localdomain changed to be master
The IP 10.1.8.88 has been added:
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:af:fd:ec brd ff:ff:ff:ff:ff:ff
inet 10.1.6.12/16 brd 10.1.255.255 scope global eth0
inet 10.1.8.88/16 scope global secondary eth0