LVS Load Balancing (7) -- High Availability with LVS + Keepalived
1. High Availability with LVS + Keepalived
LVS provides load balancing, but it has no health-check mechanism of its own: if an RS (real server) node fails, LVS will still schedule requests to the failed node. Keepalived solves this in two ways:
1. Keepalived adds health checking to LVS: a failed RS node is automatically removed from the cluster, and it is added back automatically once it recovers.
2. Keepalived removes the LVS single point of failure, making the LVS layer itself highly available.
1.1 Lab Environment
The lab topology is as follows, using the LVS DR model:
- Client: hostname xuzhichao; address eth1: 192.168.20.17;
- Router: hostname router; addresses eth1: 192.168.20.50, eth2: 192.168.50.50;
- LVS load balancers:
  - hostname lvs-01; address eth2: 192.168.50.31;
  - hostname lvs-02; address eth2: 192.168.50.32;
  - VIP addresses: 192.168.50.100 and 192.168.50.101 (only 192.168.50.100 is used below);
- Web servers, running nginx 1.20.1:
  - hostname nginx02; address eth2: 192.168.50.22;
  - hostname nginx03; address eth2: 192.168.50.23;
1.2 Router Configuration
IP addresses and routes on the router:
[root@router ~]# ip add
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:4f:a9:ca brd ff:ff:ff:ff:ff:ff
inet 192.168.20.50/24 brd 192.168.20.255 scope global noprefixroute eth1
valid_lft forever preferred_lft forever
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:4f:a9:d4 brd ff:ff:ff:ff:ff:ff
inet 192.168.50.50/24 brd 192.168.50.255 scope global noprefixroute eth2
valid_lft forever preferred_lft forever

#No additional static routes are needed in this scenario; both networks are directly connected:
[root@router ~]# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
192.168.20.0 0.0.0.0 255.255.255.0 U 101 0 0 eth1
192.168.50.0 0.0.0.0 255.255.255.0 U 104 0 0 eth2
Enable the ip_forward function on the router:
[root@router ~]# echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
[root@router ~]# sysctl -p
net.ipv4.ip_forward = 1
Map ports 80 and 443 of the router's external address to the LVS VIP; a full address mapping can be used instead:
#Port mapping:
[root@router ~]# iptables -t nat -A PREROUTING -d 192.168.20.50 -p tcp --dport 80 -j DNAT --to 192.168.50.100:80
[root@router ~]# iptables -t nat -A PREROUTING -d 192.168.20.50 -p tcp --dport 443 -j DNAT --to 192.168.50.100:443

#Address mapping (alternative):
[root@router ~]# iptables -t nat -A PREROUTING -d 192.168.20.50 -j DNAT --to 192.168.50.100

#Source NAT so that internal hosts can reach the outside network:
[root@router ~]# iptables -t nat -A POSTROUTING -s 192.168.50.0/24 -j SNAT --to 192.168.20.50

#View the NAT configuration:
[root@router ~]# iptables -t nat -vnL
Chain PREROUTING (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
0 0 DNAT tcp -- * * 0.0.0.0/0 192.168.20.50 tcp dpt:80 to:192.168.50.100:80
0 0 DNAT tcp -- * * 0.0.0.0/0 192.168.20.50 tcp dpt:443 to:192.168.50.100:443

Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination

Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination

Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
0 0 SNAT all -- * * 192.168.50.0/24 0.0.0.0/0 to:192.168.20.50
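Note that these iptables rules live only in memory and are lost after a reboot. A minimal sketch for persisting them on CentOS 7, assuming the iptables-services package is acceptable in place of firewalld:
#Save the current rules so they are restored at boot:
[root@router ~]# yum install iptables-services -y
[root@router ~]# iptables-save > /etc/sysconfig/iptables
[root@router ~]# systemctl enable iptables.service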
1.3 Web Server (nginx) Configuration
Network configuration of the nginx02 host:
#1.Configure the VIP on the lo interface:
[root@nginx02 ~]# cat /etc/sysconfig/network-scripts/ifcfg-lo:0
DEVICE=lo:0
BOOTPROTO=none
IPADDR=192.168.50.100
NETMASK=255.255.255.255   <==Note: the mask here must not be the same as the RIP's mask, otherwise other hosts cannot learn the RIP's ARP entry and the RIP's connected route is affected; it also must not be so broad that the VIP and CIP compute into the same subnet. A 32-bit mask is recommended.
ONBOOT=yes
NAME=loopback

#2.Restart the interface for the change to take effect:
[root@nginx02 ~]# ifdown lo:0 && ifup lo:0
[root@nginx02 ~]# ifconfig lo:0
lo:0: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 192.168.50.100 netmask 255.255.255.255
loop txqueuelen 1000 (Local Loopback)

#3.The eth2 interface address:
[root@nginx02 ~]# ip add
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:d9:f9:7d brd ff:ff:ff:ff:ff:ff
inet 192.168.50.22/24 brd 192.168.50.255 scope global noprefixroute eth2
valid_lft forever preferred_lft forever

#4.Routing: the default gateway points to the router, 192.168.50.50
[root@nginx02 ~]# ip route add default via 192.168.50.50 dev eth2   <==The default route must specify both the next hop and the outgoing interface; otherwise traffic may leave through lo:0 and fail.

[root@nginx02 ~]# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 192.168.50.50 0.0.0.0 UG 0 0 0 eth2
192.168.50.0 0.0.0.0 255.255.255.0 U 103 0 0 eth2
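A route added with ip route add is not persistent across reboots. A small sketch of making the gateway permanent on CentOS 7 (the same applies to nginx03 and both LVS nodes), assuming the standard network-scripts configuration is in use:
#Persist the default gateway for eth2:
[root@nginx02 ~]# echo "GATEWAY=192.168.50.50" >> /etc/sysconfig/network-scripts/ifcfg-eth2
[root@nginx02 ~]# systemctl restart network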
Configure the ARP kernel parameters so that the host neither announces the VIP nor answers ARP requests from other nodes for the VIP:
[root@nginx02 ~]# echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
[root@nginx02 ~]# echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
[root@nginx02 ~]# echo 1 > /proc/sys/net/ipv4/conf/default/arp_ignore
[root@nginx02 ~]# echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
[root@nginx02 ~]# echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
[root@nginx02 ~]# echo 2 > /proc/sys/net/ipv4/conf/default/arp_announce
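The /proc settings above are also lost at reboot. A minimal sketch of persisting them through sysctl.conf (do the same on nginx03):
#Persist the ARP kernel parameters:
[root@nginx02 ~]# cat >> /etc/sysctl.conf <<EOF
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.default.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.lo.arp_announce = 2
net.ipv4.conf.default.arp_announce = 2
EOF
[root@nginx02 ~]# sysctl -p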
Network configuration of the nginx03 host:
#1.Configure the VIP on the lo interface:
[root@nginx03 ~]# cat /etc/sysconfig/network-scripts/ifcfg-lo:0
DEVICE=lo:0
BOOTPROTO=none
IPADDR=192.168.50.100
NETMASK=255.255.255.255   <==Note: the mask here must not be the same as the RIP's mask, otherwise other hosts cannot learn the RIP's ARP entry and the RIP's connected route is affected; it also must not be so broad that the VIP and CIP compute into the same subnet. A 32-bit mask is recommended.
ONBOOT=yes
NAME=loopback

#2.Restart the interface for the change to take effect:
[root@nginx03 ~]# ifdown lo:0 && ifup lo:0
[root@nginx03 ~]# ifconfig lo:0
lo:0: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 192.168.50.100 netmask 255.255.255.255
loop txqueuelen 1000 (Local Loopback)

#3.The eth2 interface address:
[root@nginx03 ~]# ip add show eth2
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:0a:bf:63 brd ff:ff:ff:ff:ff:ff
inet 192.168.50.23/24 brd 192.168.50.255 scope global noprefixroute eth2
valid_lft forever preferred_lft forever

#4.Routing: the default gateway points to the router, 192.168.50.50
[root@nginx03 ~]# ip route add default via 192.168.50.50 dev eth2   <==The default route must specify both the next hop and the outgoing interface; otherwise traffic may leave through lo:0 and fail.

[root@nginx03 ~]# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 192.168.50.50 0.0.0.0 UG 0 0 0 eth2
192.168.50.0 0.0.0.0 255.255.255.0 U 103 0 0 eth2
Configure the ARP kernel parameters so that the host neither announces the VIP nor answers ARP requests from other nodes for the VIP:
[root@nginx03 ~]# echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
[root@nginx03 ~]# echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
[root@nginx03 ~]# echo 1 > /proc/sys/net/ipv4/conf/default/arp_ignore
[root@nginx03 ~]# echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
[root@nginx03 ~]# echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
[root@nginx03 ~]# echo 2 > /proc/sys/net/ipv4/conf/default/arp_announce
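Once keepalived is running and the VIP is active (section 1.4), it is worth confirming that only the LVS node answers ARP for the VIP. A quick hedged check from the router, assuming the arping utility is installed:
#Only the MAC of the active LVS node should answer for 192.168.50.100:
[root@router ~]# arping -I eth2 -c 3 192.168.50.100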
The nginx configuration file is identical on both web servers:
[root@nginx03 ~]# cat /etc/nginx/conf.d/xuzhichao.conf
server {
    listen 80 default_server;
    listen 443 ssl;
    server_name www.xuzhichao.com;
    access_log /var/log/nginx/access_xuzhichao.log access_json;
    charset utf-8,gbk;

    #SSL configuration
    ssl_certificate_key /apps/nginx/certs/www.xuzhichao.com.key;
    ssl_certificate /apps/nginx/certs/www.xuzhichao.com.crt;
    ssl_session_cache shared:ssl_cache:20m;
    ssl_session_timeout 10m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    keepalive_timeout 65;

    #Anti-hotlinking
    valid_referers none blocked server_names *.b.com b.* ~\.baidu\. ~\.google\.;
    if ( $invalid_referer ) {
        return 403;
    }

    client_max_body_size 10m;

    #Browser favicon
    location = /favicon.ico {
        root /data/nginx/xuzhichao;
    }

    location / {
        root /data/nginx/xuzhichao;
        index index.html index.php;

        #Automatically redirect http to https
        if ($scheme = http) {
            rewrite ^/(.*)$ https://www.xuzhichao.com/$1;
        }
    }
}

#Check the configuration and reload the nginx service:
[root@nginx03 ~]# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
[root@nginx03 ~]# systemctl reload nginx.service
Home page file on the nginx02 host:
[root@nginx02 certs]# cat /data/nginx/xuzhichao/index.html
node1.xuzhichao.com page
Home page file on the nginx03 host:
[root@nginx03 ~]# cat /data/nginx/xuzhichao/index.html
node2.xuzhichao.com page
Test access:
[root@lvs-01 ~]# curl -Hhost:www.xuzhichao.com -k https://192.168.50.23
node2.xuzhichao.com page
[root@lvs-01 ~]# curl -Hhost:www.xuzhichao.com -k https://192.168.50.22
node1.xuzhichao.com page
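The http-to-https redirect can also be spot-checked against a single RS before traffic goes through LVS; a small sketch (the rewrite above returns a 302 by default):
#Expect a 302 response pointing to https://www.xuzhichao.com/:
[root@lvs-01 ~]# curl -I -Hhost:www.xuzhichao.com http://192.168.50.22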
1.4 LVS + Keepalived Configuration
1.4.1 Keepalived syntax for checking backend server health
Virtual server:
Configuration structure:
virtual_server IP port |
virtual_server fwmark int
{
    ...
    real_server {
        ...
    }
    ...
}
Common parameters:
delay_loop <INT>: interval between health-check polls;
lb_algo rr|wrr|lc|wlc|lblc|sh|dh: scheduling algorithm;
lb_kind NAT|DR|TUN: cluster forwarding type;
persistence_timeout <INT>: persistent-connection timeout;
protocol TCP: service protocol;
sorry_server <IPADDR> <PORT>: backup server used when all RS are down;
real_server <IPADDR> <PORT>
{
    weight <INT>  weight of this RS
    notify_up <STRING>|<QUOTED-STRING>  script to run when the RS comes online
    notify_down <STRING>|<QUOTED-STRING>  script to run when the RS goes down or fails
    HTTP_GET|SSL_GET|TCP_CHECK|SMTP_CHECK|MISC_CHECK { ... }: health-check method for this RS;
}
HTTP_GET|SSL_GET: application-layer checks
HTTP_GET|SSL_GET {
    url {
        path <URL_PATH>: URL to monitor;
        status_code <INT>: response code that is considered healthy;
        digest <STRING>: checksum of the response body that is considered healthy;
    }
    nb_get_retry <INT>: number of retries;
    delay_before_retry <INT>: delay before each retry;
    connect_ip <IP ADDRESS>: IP address on the RS to probe; defaults to the real_server address
    connect_port <PORT>: port on the RS to probe; defaults to the real_server port
    bindto <IP ADDRESS>: source address used for the probe; defaults to the outgoing interface address
    bind_port <PORT>: source port used for the probe;
    connect_timeout <INTEGER>: connection timeout;
}
Transport-layer check:
TCP_CHECK {
    connect_ip <IP ADDRESS>: IP address on the RS to probe
    connect_port <PORT>: port on the RS to probe
    bindto <IP ADDRESS>: source address used for the probe;
    bind_port <PORT>: source port used for the probe;
    connect_timeout <INTEGER>: connection timeout;
}
1.4.2 Keepalived configuration example
Install the keepalived package:
[root@lvs-01 ~]# yum install keepalived -y
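Two small preparations that the transcripts below assume, shown as a hedged sketch: the ipvsadm tool is installed for inspecting the ipvs table, and keepalived logs go to a dedicated file (by default they land in /var/log/messages via syslog on CentOS 7). The rsyslog facility chosen here is an assumption:
#Install the ipvsadm management tool on both LVS nodes:
[root@lvs-01 ~]# yum install ipvsadm -y
#Send keepalived logs to facility local0 and write that facility to /var/log/keepalived.log:
[root@lvs-01 ~]# echo 'KEEPALIVED_OPTIONS="-D -S 0"' > /etc/sysconfig/keepalived
[root@lvs-01 ~]# echo 'local0.* /var/log/keepalived.log' >> /etc/rsyslog.conf
[root@lvs-01 ~]# systemctl restart rsyslog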
Keepalived configuration on the lvs-01 node:
#1.The keepalived configuration file:
[root@lvs-01 ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   notification_email {
     root@localhost
   }
   notification_email_from keepalived@localhost
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id LVS01
   script_user root
   enable_script_security
}

vrrp_instance VI_1 {
    state MASTER
    interface eth2
    virtual_router_id 51
    priority 120
    advert_int 3
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.50.100/32 dev eth2
    }
    track_interface {
        eth2
    }
    notify_master "/etc/keepalived/notify.sh master"
    notify_backup "/etc/keepalived/notify.sh backup"
    notify_fault "/etc/keepalived/notify.sh fault"
}

virtual_server 192.168.50.100 443 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    protocol TCP

    sorry_server 192.168.20.24 443

    real_server 192.168.50.22 443 {
        weight 1
        SSL_GET {
            url {
                path /index.html
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }

    real_server 192.168.50.23 443 {
        weight 1
        SSL_GET {
            url {
                path /index.html
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

virtual_server 192.168.50.100 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    protocol TCP

    real_server 192.168.50.22 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }

    real_server 192.168.50.23 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

#2.The keepalived notify.sh script:
[root@lvs-01 keepalived]# cat notify.sh
#!/bin/bash

contact='root@localhost'
notify() {
local mailsubject="$(hostname) to be $1, vip floating"
local mailbody="$(date +'%F %T'): vrrp transition, $(hostname) changed to be $1"
echo "$mailbody" | mail -s "$mailsubject" $contact
}

case $1 in
master)
notify master
;;
backup)
notify backup
;;
fault)
notify fault
;;
*)
echo "Usage: $(basename $0) {master|backup|fault}"
exit 1
;;
esac

#Make the script executable:
[root@lvs-01 keepalived]# chmod +x notify.sh
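The script relies on the mail command from the mailx package. A quick hedged manual check that notifications actually go out, assuming mailx is installed and local delivery works:
#Trigger a test notification and read the local mailbox:
[root@lvs-01 keepalived]# ./notify.sh master
[root@lvs-01 keepalived]# mail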
#3.Add a default route pointing to the router's gateway:
[root@lvs-01 ~]# ip route add default via 192.168.50.50 dev eth2

[root@lvs-01 ~]# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 192.168.50.50 0.0.0.0 UG 0 0 0 eth2
192.168.50.0 0.0.0.0 255.255.255.0 U 102 0 0 eth2

#4.Start the keepalived service:
[root@lvs-01 ~]# systemctl start keepalived.service

#5.Check the ipvs rules generated automatically by keepalived:
[root@lvs-01 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 192.168.50.100:80 rr
-> 192.168.50.22:80 Route 1 0 0
-> 192.168.50.23:80 Route 1 0 0
TCP 192.168.50.100:443 rr
-> 192.168.50.22:443 Route 1 0 0
-> 192.168.50.23:443 Route 1 0 0

#6.Check which node currently holds the VIP:
[root@lvs-01 ~]# ip add
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:21:84:9d brd ff:ff:ff:ff:ff:ff
inet 192.168.50.31/24 brd 192.168.50.255 scope global noprefixroute eth2
valid_lft forever preferred_lft forever
inet 192.168.50.100/32 scope global eth2
valid_lft forever preferred_lft forever
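To confirm that lvs-01 is actually sending VRRP advertisements for virtual_router_id 51 on eth2, a quick hedged check with tcpdump on either node:
#Adverts from 192.168.50.31 with vrid 51, prio 120 should appear roughly every 3 seconds:
[root@lvs-01 ~]# tcpdump -nn -i eth2 'ip proto vrrp'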
Keepalived configuration on the lvs-02 node:
#1.The keepalived configuration file:
[root@lvs-02 ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   notification_email {
     root@localhost
   }
   notification_email_from keepalived@localhost
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id LVS02
   script_user root
   enable_script_security
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth2
    virtual_router_id 51
    priority 100
    advert_int 3
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.50.100/32 dev eth2
    }
    track_interface {
        eth2
    }
    notify_master "/etc/keepalived/notify.sh master"
    notify_backup "/etc/keepalived/notify.sh backup"
    notify_fault "/etc/keepalived/notify.sh fault"
}

virtual_server 192.168.50.100 443 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    protocol TCP

    sorry_server 192.168.20.24 443

    real_server 192.168.50.22 443 {
        weight 1
        SSL_GET {
            url {
                path /index.html
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }

    real_server 192.168.50.23 443 {
        weight 1
        SSL_GET {
            url {
                path /index.html
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

virtual_server 192.168.50.100 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    protocol TCP

    real_server 192.168.50.22 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }

    real_server 192.168.50.23 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

#2.The keepalived notify.sh script:
[root@lvs-02 keepalived]# cat notify.sh
#!/bin/bash

contact='root@localhost'
notify() {
local mailsubject="$(hostname) to be $1, vip floating"
local mailbody="$(date +'%F %T'): vrrp transition, $(hostname) changed to be $1"
echo "$mailbody" | mail -s "$mailsubject" $contact
}

case $1 in
master)
notify master
;;
backup)
notify backup
;;
fault)
notify fault
;;
*)
echo "Usage: $(basename $0) {master|backup|fault}"
exit 1
;;
esac

#Make the script executable:
[root@lvs-02 keepalived]# chmod +x notify.sh

#3.Add a default route pointing to the router's gateway:
[root@lvs-02 ~]# ip route add default via 192.168.50.50 dev eth2

[root@lvs-02 ~]# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 192.168.50.50 0.0.0.0 UG 0 0 0 eth2
192.168.50.0 0.0.0.0 255.255.255.0 U 102 0 0 eth2

#4.Start the keepalived service:
[root@lvs-02 ~]# systemctl start keepalived.service

#5.Check the ipvs rules generated automatically by keepalived:
[root@lvs-02 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 192.168.50.100:80 rr
-> 192.168.50.22:80 Route 1 0 0
-> 192.168.50.23:80 Route 1 0 0
TCP 192.168.50.100:443 rr
-> 192.168.50.22:443 Route 1 0 0
-> 192.168.50.23:443 Route 1 0 0

#6.Check the VIP; it is not on this node:
[root@lvs-02 ~]# ip add
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:e4:cf:17 brd ff:ff:ff:ff:ff:ff
inet 192.168.50.32/24 brd 192.168.50.255 scope global noprefixroute eth2
valid_lft forever preferred_lft forever
Test from the client
The client network configuration:
[root@xuzhichao ~]# ip add
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:2f:d0:da brd ff:ff:ff:ff:ff:ff
inet 192.168.20.17/24 brd 192.168.20.255 scope global noprefixroute eth1
valid_lft forever preferred_lft forever

[root@xuzhichao ~]# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
192.168.20.0 0.0.0.0 255.255.255.0 U 101 0 0 eth1
Test access:
#1.Access over http; requests are redirected to https:
[root@xuzhichao ~]# for i in {1..10} ;do curl -k -L -Hhost:www.xuzhichao.com http://192.168.20.50; done
node2.xuzhichao.com page
node1.xuzhichao.com page
node2.xuzhichao.com page
node1.xuzhichao.com page
node2.xuzhichao.com page
node1.xuzhichao.com page
node2.xuzhichao.com page
node1.xuzhichao.com page
node2.xuzhichao.com page
node1.xuzhichao.com page

#2.Access directly over https:
[root@xuzhichao ~]# for i in {1..10} ;do curl -k -Hhost:www.xuzhichao.com https://192.168.20.50; done
node2.xuzhichao.com page
node1.xuzhichao.com page
node2.xuzhichao.com page
node1.xuzhichao.com page
node2.xuzhichao.com page
node1.xuzhichao.com page
node2.xuzhichao.com page
node1.xuzhichao.com page
node2.xuzhichao.com page
node1.xuzhichao.com page
1.5 RS Failure Scenario Test
Stop the nginx service on the nginx02 node:
[root@nginx02 ~]# systemctl stop nginx.service
Check the log and ipvs rule changes on the two LVS nodes:
#1.The log shows the health check against the backend failing and the RS being removed from the cluster:
[root@lvs-01 ~]# tail -f /var/log/keepalived.log
Jul 13 20:00:57 lvs-01 Keepalived_healthcheckers[13466]: TCP connection to [192.168.50.22]:80 failed.
Jul 13 20:00:59 lvs-01 Keepalived_healthcheckers[13466]: Error connecting server [192.168.50.22]:443.
Jul 13 20:01:00 lvs-01 Keepalived_healthcheckers[13466]: TCP connection to [192.168.50.22]:80 failed.
Jul 13 20:01:00 lvs-01 Keepalived_healthcheckers[13466]: Check on service [192.168.50.22]:80 failed after 1 retry.
Jul 13 20:01:00 lvs-01 Keepalived_healthcheckers[13466]: Removing service [192.168.50.22]:80 from VS [192.168.50.100]:80
Jul 13 20:01:00 lvs-01 Keepalived_healthcheckers[13466]: Remote SMTP server [127.0.0.1]:25 connected.
Jul 13 20:01:00 lvs-01 Keepalived_healthcheckers[13466]: SMTP alert successfully sent.
Jul 13 20:01:02 lvs-01 Keepalived_healthcheckers[13466]: Error connecting server [192.168.50.22]:443.
Jul 13 20:01:05 lvs-01 Keepalived_healthcheckers[13466]: Error connecting server [192.168.50.22]:443.
Jul 13 20:01:08 lvs-01 Keepalived_healthcheckers[13466]: Error connecting server [192.168.50.22]:443.
Jul 13 20:01:08 lvs-01 Keepalived_healthcheckers[13466]: Check on service [192.168.50.22]:443 failed after 3 retry.
Jul 13 20:01:08 lvs-01 Keepalived_healthcheckers[13466]: Removing service [192.168.50.22]:443 from VS [192.168.50.100]:443
Jul 13 20:01:08 lvs-01 Keepalived_healthcheckers[13466]: Remote SMTP server [127.0.0.1]:25 connected.
Jul 13 20:01:08 lvs-01 Keepalived_healthcheckers[13466]: SMTP alert successfully sent.

#2.Check the ipvs rules; host 192.168.50.22 has been removed from the cluster:
[root@lvs-01 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 192.168.50.100:80 rr
-> 192.168.50.23:80 Route 1 0 0
TCP 192.168.50.100:443 rr
-> 192.168.50.23:443 Route 1 0 0
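While a check is failing, it can help to watch the virtual service and its counters update in real time; a small hedged sketch:
#Refresh the ipvs table with traffic statistics every second:
[root@lvs-01 ~]# watch -n1 'ipvsadm -Ln --stats'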
Client test: all requests now go to the nginx03 node:
[root@xuzhichao ~]# for i in {1..10} ;do curl -L -k -Hhost:www.xuzhichao.com http://192.168.20.50 ;done
node2.xuzhichao.com page
node2.xuzhichao.com page
node2.xuzhichao.com page
node2.xuzhichao.com page
node2.xuzhichao.com page
node2.xuzhichao.com page
node2.xuzhichao.com page
node2.xuzhichao.com page
node2.xuzhichao.com page
node2.xuzhichao.com page
Recover the nginx02 node and check the logs and ipvs rules on the two LVS nodes:
#1.Start the nginx service on nginx02:
[root@nginx02 ~]# systemctl start nginx.service

#2.The keepalived log on lvs-01 shows the check against nginx02 succeeding and the node being added back as a backend:
[root@lvs-01 ~]# tail -f /var/log/keepalived.log
Jul 13 20:06:44 lvs-01 Keepalived_healthcheckers[13466]: HTTP status code success to [192.168.50.22]:443 url(1).
Jul 13 20:06:44 lvs-01 Keepalived_healthcheckers[13466]: Remote Web server [192.168.50.22]:443 succeed on service.
Jul 13 20:06:44 lvs-01 Keepalived_healthcheckers[13466]: Adding service [192.168.50.22]:443 to VS [192.168.50.100]:443
Jul 13 20:06:44 lvs-01 Keepalived_healthcheckers[13466]: Remote SMTP server [127.0.0.1]:25 connected.
Jul 13 20:06:44 lvs-01 Keepalived_healthcheckers[13466]: SMTP alert successfully sent.
Jul 13 20:06:49 lvs-01 Keepalived_healthcheckers[13466]: TCP connection to [192.168.50.22]:80 success.
Jul 13 20:06:49 lvs-01 Keepalived_healthcheckers[13466]: Adding service [192.168.50.22]:80 to VS [192.168.50.100]:80
Jul 13 20:06:49 lvs-01 Keepalived_healthcheckers[13466]: Remote SMTP server [127.0.0.1]:25 connected.
Jul 13 20:06:49 lvs-01 Keepalived_healthcheckers[13466]: SMTP alert successfully sent.

#3.Check the ipvs rules:
[root@lvs-01 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 192.168.50.100:80 rr
-> 192.168.50.22:80 Route 1 0 0
-> 192.168.50.23:80 Route 1 0 0
TCP 192.168.50.100:443 rr
-> 192.168.50.22:443 Route 1 0 0
-> 192.168.50.23:443 Route 1 0 0
Testing from the client again, both nginx nodes are serving requests normally:
[root@xuzhichao ~]# for i in {1..10} ;do curl -L -k -Hhost:www.xuzhichao.com http://192.168.20.50 ;done
node1.xuzhichao.com page
node2.xuzhichao.com page
node1.xuzhichao.com page
node2.xuzhichao.com page
node1.xuzhichao.com page
node2.xuzhichao.com page
node1.xuzhichao.com page
node2.xuzhichao.com page
node1.xuzhichao.com page
node2.xuzhichao.com page
1.6 LVS Node Failure Scenario Test
Stop the keepalived service on lvs-01 to simulate a failure of the lvs-01 node, and observe the load-balancing cluster:
#1.Stop the keepalived service on lvs-01:
[root@lvs-01 ~]# systemctl stop keepalived.service

#2.Check the keepalived logs:
[root@lvs-01 ~]# tail -f /var/log/keepalived.log
Jul 13 20:11:08 lvs-01 Keepalived[13465]: Stopping
Jul 13 20:11:08 lvs-01 Keepalived_vrrp[13467]: VRRP_Instance(VI_1) sent 0 priority
Jul 13 20:11:08 lvs-01 Keepalived_vrrp[13467]: VRRP_Instance(VI_1) removing protocol VIPs.
Jul 13 20:11:08 lvs-01 Keepalived_healthcheckers[13466]: Removing service [192.168.50.22]:80 from VS [192.168.50.100]:80
Jul 13 20:11:08 lvs-01 Keepalived_healthcheckers[13466]: Removing service [192.168.50.23]:80 from VS [192.168.50.100]:80
Jul 13 20:11:08 lvs-01 Keepalived_healthcheckers[13466]: Stopped
Jul 13 20:11:09 lvs-01 Keepalived_vrrp[13467]: Stopped
Jul 13 20:11:09 lvs-01 Keepalived[13465]: Stopped Keepalived v1.3.5 (03/19,2017), git commit v1.3.5-6-g6fa32f2

[root@lvs-02 ~]# tail -f /var/log/keepalived.log
Jul 13 20:11:09 lvs-02 Keepalived_vrrp[2247]: VRRP_Instance(VI_1) Transition to MASTER STATE
Jul 13 20:11:12 lvs-02 Keepalived_vrrp[2247]: VRRP_Instance(VI_1) Entering MASTER STATE
Jul 13 20:11:12 lvs-02 Keepalived_vrrp[2247]: VRRP_Instance(VI_1) setting protocol VIPs.
Jul 13 20:11:12 lvs-02 Keepalived_vrrp[2247]: Sending gratuitous ARP on eth2 for 192.168.50.100
Jul 13 20:11:12 lvs-02 Keepalived_vrrp[2247]: VRRP_Instance(VI_1) Sending/queueing gratuitous ARPs on eth2 for 192.168.50.100
Jul 13 20:11:12 lvs-02 Keepalived_vrrp[2247]: Sending gratuitous ARP on eth2 for 192.168.50.100
Jul 13 20:11:12 lvs-02 Keepalived_vrrp[2247]: Sending gratuitous ARP on eth2 for 192.168.50.100

#3.Check the VIP; it has moved to the lvs-02 node:
[root@lvs-02 ~]# ip add
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:e4:cf:17 brd ff:ff:ff:ff:ff:ff
inet 192.168.50.32/24 brd 192.168.50.255 scope global noprefixroute eth2
valid_lft forever preferred_lft forever
inet 192.168.50.100/32 scope global eth2
valid_lft forever preferred_lft forever

[root@lvs-01 ~]# ip add
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:21:84:9d brd ff:ff:ff:ff:ff:ff
inet 192.168.50.31/24 brd 192.168.50.255 scope global noprefixroute eth2
valid_lft forever preferred_lft forever

#4.Client access still works:
[root@xuzhichao ~]# for i in {1..10} ;do curl -L -k -Hhost:www.xuzhichao.com http://192.168.20.50 ;done
node1.xuzhichao.com page
node2.xuzhichao.com page
node1.xuzhichao.com page
node2.xuzhichao.com page
node1.xuzhichao.com page
node2.xuzhichao.com page
node1.xuzhichao.com page
node2.xuzhichao.com page
node1.xuzhichao.com page
node2.xuzhichao.com page
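The failover depends on the gratuitous ARPs shown above updating the router's ARP entry for the VIP; this can be confirmed from the router with a quick hedged check:
#The MAC shown should now be lvs-02's eth2 address (00:0c:29:e4:cf:17):
[root@router ~]# ip neigh show 192.168.50.100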
Recover the lvs-01 node and observe the load-balancing cluster:
#1.Start the keepalived service on lvs-01:
[root@lvs-01 ~]# systemctl start keepalived.service

#2.Check the keepalived logs:
[root@lvs-01 ~]# tail -f /var/log/keepalived.log
Jul 13 20:15:36 lvs-01 Keepalived_vrrp[13724]: VRRP_Instance(VI_1) Transition to MASTER STATE
Jul 13 20:15:39 lvs-01 Keepalived_vrrp[13724]: VRRP_Instance(VI_1) Entering MASTER STATE
Jul 13 20:15:39 lvs-01 Keepalived_vrrp[13724]: VRRP_Instance(VI_1) setting protocol VIPs.
Jul 13 20:15:39 lvs-01 Keepalived_vrrp[13724]: Sending gratuitous ARP on eth2 for 192.168.50.100
Jul 13 20:15:39 lvs-01 Keepalived_vrrp[13724]: VRRP_Instance(VI_1) Sending/queueing gratuitous ARPs on eth2 for 192.168.50.100
Jul 13 20:15:39 lvs-01 Keepalived_vrrp[13724]: Sending gratuitous ARP on eth2 for 192.168.50.100

[root@lvs-02 ~]# tail -f /var/log/keepalived.log
Jul 13 20:15:36 lvs-02 Keepalived_vrrp[2247]: VRRP_Instance(VI_1) Received advert with higher priority 120, ours 100
Jul 13 20:15:36 lvs-02 Keepalived_vrrp[2247]: VRRP_Instance(VI_1) Entering BACKUP STATE
Jul 13 20:15:36 lvs-02 Keepalived_vrrp[2247]: VRRP_Instance(VI_1) removing protocol VIPs.
Jul 13 20:15:36 lvs-02 Keepalived_vrrp[2247]: Opening script file /etc/keepalived/notify.sh

#3.Check the VIP; it has moved back to the lvs-01 node:
[root@lvs-01 ~]# ip add
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:21:84:9d brd ff:ff:ff:ff:ff:ff
inet 192.168.50.31/24 brd 192.168.50.255 scope global noprefixroute eth2
valid_lft forever preferred_lft forever
inet 192.168.50.100/32 scope global eth2
valid_lft forever preferred_lft forever

[root@lvs-02 ~]# ip add
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:e4:cf:0d brd ff:ff:ff:ff:ff:ff
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:e4:cf:17 brd ff:ff:ff:ff:ff:ff
inet 192.168.50.32/24 brd 192.168.50.255 scope global noprefixroute eth2
valid_lft forever preferred_lft forever

#4.Client access works normally:
[root@xuzhichao ~]# for i in {1..10} ;do curl -L -k -Hhost:www.xuzhichao.com http://192.168.20.50 ;done
node2.xuzhichao.com page
node1.xuzhichao.com page
node2.xuzhichao.com page
node1.xuzhichao.com page
node2.xuzhichao.com page
node1.xuzhichao.com page
node2.xuzhichao.com page
node1.xuzhichao.com page
node2.xuzhichao.com page
node1.xuzhichao.com page
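Note that with this configuration the VIP preempts back to lvs-01 as soon as it recovers, which causes one extra switchover. If that is undesirable, keepalived supports non-preemptive mode; a hedged sketch (nopreempt is only honoured when the instance is configured with state BACKUP on both nodes, keeping the different priorities):
vrrp_instance VI_1 {
    state BACKUP
    nopreempt
    priority 120
    ...
}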