I. Introduction

  1. For the LVS-DR fundamentals, see the theory article:

    LVS Load Balancing Principles

  2. For the keepalived fundamentals, see the theory article:

    A Brief Introduction to KeepAlived for High Availability

  3. The failover architecture based on LVS-DR + keepalived is shown below:

  

II. Deployment

  1. Environment

lvs + keepalived + sorry-server + health checking

web1    lvs + keepalived    192.168.216.51
web2    lvs + keepalived    192.168.216.52
web3    web                 192.168.216.53
web4    web                 192.168.216.54
client  physical host

  

  Note: make sure the firewall and SELinux are disabled on every machine and that the clocks are synchronized.
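  A minimal sketch of those preparation steps on each node (the NTP server name below is only an example; use whatever time source your environment provides):

    systemctl stop firewalld && systemctl disable firewalld
    setenforce 0                      # temporary; set SELINUX=disabled in /etc/selinux/config to make it permanent
    yum install ntpdate -y
    ntpdate ntp.aliyun.com            # example NTP server, replace with your own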

  2. Prepare the web service on the RS nodes; httpd is installed here

    web3/web4

    yum install httpd -y

    web3

    echo "welcome to web3"  >/var/www/html/index.html

    systemctl start httpd

    systemctl enable httpd

    web4

    echo "welcome to web4"  >/var/www/html/index.html

    systemctl start httpd

    systemctl enable httpd

    Have the RS access each other, and also access them from the client browser:

 [root@web3 ~]# curl 192.168.216.54
welcome to web4
[root@web3 ~]#
[root@web4 ~]# curl 192.168.216.54
welcome to web4
[root@web4 ~]#

    

  

  3. ARP suppression on the RS: adjusting the ARP reply and announce levels

      

       Setting arp_ignore to 1 means the host only answers an ARP request if the target IP is configured on the interface the request arrived on, so the VIP bound to lo is never advertised.

       Setting arp_announce to 2 means the host always uses the sending interface's own address in ARP announcements and never announces addresses from other interfaces or subnets (such as the VIP on lo).

 

    Script implementation: run it on both web3 and web4.

 [root@web3 ~]# cd /arp
[root@web3 arp]# ll
total ...
-rwxr-xr-x. 1 root root ... arp.sh
[root@web3 arp]# cat arp.sh
#!/bin/bash
# enable (start) or disable (stop) ARP suppression for the VIP on this RS
case $1 in
start)
    echo 1 >/proc/sys/net/ipv4/conf/all/arp_ignore
    echo 1 >/proc/sys/net/ipv4/conf/lo/arp_ignore
    echo 2 >/proc/sys/net/ipv4/conf/all/arp_announce
    echo 2 >/proc/sys/net/ipv4/conf/lo/arp_announce
;;
stop)
    echo 0 >/proc/sys/net/ipv4/conf/all/arp_ignore
    echo 0 >/proc/sys/net/ipv4/conf/lo/arp_ignore
    echo 0 >/proc/sys/net/ipv4/conf/all/arp_announce
    echo 0 >/proc/sys/net/ipv4/conf/lo/arp_announce
;;
esac
[root@web3 arp]# chmod +x arp.sh
[root@web3 arp]# ./arp.sh start
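
    These /proc writes do not survive a reboot. A minimal sketch of making them persistent on CentOS 7 (the file name lvs-rs-arp.conf is only an example):

 cat >/etc/sysctl.d/lvs-rs-arp.conf <<'EOF'
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.lo.arp_announce = 2
EOF
sysctl --system      # reload every sysctl configuration file, including the new one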

  4. Configure the VIP interface on the RS

    Configure web3 and web4 identically.

    First, a few points worth explaining:

      Why bind the VIP to the lo interface?

        Since the RS must process IP packets whose destination is the VIP, it first has to accept them; binding the VIP on lo lets the RS accept those packets and return the response directly to the client.

        If the VIP were bound to another NIC, the RS would answer ARP requests for the VIP and pollute the requesters' ARP tables, which would break the load balancing.

      Why is the RS netmask 255.255.255.255?

        The VIP on the RS is not used to communicate with the outside; it only has to match the destination address of incoming packets. A 32-bit mask keeps it as a host-only address on lo, so no extra network route is created and the RS does not interfere with the rest of the VIP's subnet.

      

    

 ifconfig lo:0 192.168.216.200 netmask 255.255.255.255 broadcast 192.168.216.200 up
route add -host 192.168.216.200 dev lo:0
[root@web3 arp]# route -n
Kernel IP routing table
Destination      Gateway          Genmask          Flags Metric Ref Use Iface
0.0.0.0          192.168.216.2    0.0.0.0          UG    ...         ens33
192.168.122.0    0.0.0.0          255.255.255.0    U     ...         virbr0
192.168.216.0    0.0.0.0          255.255.255.0    U     ...         ens33
192.168.216.200  0.0.0.0          255.255.255.255  UH    ...         lo
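
    These commands are also lost on reboot. One way to persist the RS-side VIP on CentOS 7 (a sketch following the usual network-scripts convention; adjust to your setup):

 cat >/etc/sysconfig/network-scripts/ifcfg-lo:0 <<'EOF'
DEVICE=lo:0
IPADDR=192.168.216.200
NETMASK=255.255.255.255
ONBOOT=yes
EOF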

  5. Prepare ipvsadm on the directors

    web1/web2

   yum install ipvsadm -y

 [root@web2 keepalived]# ipvsadm -C
[root@web2 keepalived]# ipvsadm -A -t 192.168.216.200:80 -s rr
[root@web2 keepalived]# ipvsadm -a -t 192.168.216.200:80 -r 192.168.216.53 -g -w 1
[root@web2 keepalived]# ipvsadm -a -t 192.168.216.200:80 -r 192.168.216.54 -g -w 1
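
    These manual rules mainly confirm that ipvsadm works; once keepalived starts, its virtual_server block recreates and manages the IPVS rules itself. If you want to hand-test the rules beforehand, a sketch (the VIP has to be bound somewhere on the director for DR forwarding, and should be removed again before keepalived takes over):

 ip addr add 192.168.216.200/32 dev ens33     # on the director: temporary VIP, for manual testing only
curl http://192.168.216.200                   # on the client: should return a web3/web4 page
ip addr del 192.168.216.200/32 dev ens33      # on the director: clean up so keepalived can manage the VIP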

  6. Configure the sorry-server

    web1/web2: install the web software

     yum install nginx -y

    web1

       echo "sorry, under maintenance #####web1" >/usr/share/nginx/html/index.html

    web2 

       echo "sorry, under maintenance #####web2" >/usr/share/nginx/html/index.html

    web1/web2

      systemctl start nginx

      systemctl enable nginx

    Check from the client that the web page on each director responds normally.

      Later, sorry_server 127.0.0.1 80 will be added to the virtual_server block of the keepalived configuration file.
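
      For reference, a minimal sketch of where that directive sits (the full configuration follows in the next step):

 virtual_server 192.168.216.200 80 {
    lb_algo wrr
    lb_kind DR
    protocol TCP
    sorry_server 127.0.0.1 80    # used only while every real_server has failed its health check
    real_server 192.168.216.53 80 {
        weight 1
    }
}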

  7. Configure keepalived and the HTTP_GET-based health checks

    web1/web2: install the software

      yum install keepalived -y 

    web1 (MASTER) configuration

 [root@web1 keepalived]# cat keepalived.conf
! Configuration File for keepalived

global_defs {
#   notification_email {
#       acassen@firewall.loc
#       failover@firewall.loc
#       sysadmin@firewall.loc
#   }
#   notification_email_from Alexandre.Cassen@firewall.loc
#   smtp_server 192.168.200.1
#   smtp_connect_timeout 30
    router_id LVS_DEVEL
#   vrrp_skip_check_adv_addr
#   vrrp_strict
#   vrrp_garp_interval 0
#   vrrp_gna_interval 0
}

vrrp_script chk_maintanance {
    script "/etc/keepalived/chkdown.sh"
    interval 1
    weight -20
}

#vrrp_script chk_nginx {
#    script "/etc/keepalived/chknginx.sh"
#    interval 1
#    weight -20
#}

#VIP1
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 50
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.216.200
    }
    track_script {
        chk_maintanance
    }
#    track_script {
#        chk_nginx
#    }
}

#VIP2
#vrrp_instance VI_2 {
#    state BACKUP
#    interface ens33
#    virtual_router_id 51
#    priority 90
#    advert_int 1
#    authentication {
#        auth_type PASS
#        auth_pass 1111
#    }
#    virtual_ipaddress {
#        192.168.216.210
#    }
#    track_script {
#        chk_maintanance
#    }
#    track_script {
#        chk_nginx
#    }
#}

virtual_server 192.168.216.200 80 {
    delay_loop 6
    lb_algo wrr
    lb_kind DR
    nat_mask 255.255.0.0
    protocol TCP
    sorry_server 127.0.0.1 80            # sorry-server added per step 6

    real_server 192.168.216.53 80 {
        weight 1
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }

    real_server 192.168.216.54 80 {
        weight 2
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

    web2 (BACKUP) configuration

   

 [root@web2 keepalived]# cat keepalived.conf
! Configuration File for keepalived

global_defs {
#   notification_email {
#       acassen@firewall.loc
#       failover@firewall.loc
#       sysadmin@firewall.loc
#   }
#   notification_email_from Alexandre.Cassen@firewall.loc
#   smtp_server 192.168.200.1
#   smtp_connect_timeout 30
    router_id LVS_DEVEL1
#   vrrp_skip_check_adv_addr
#   vrrp_strict
#   vrrp_garp_interval 0
#   vrrp_gna_interval 0
}

vrrp_script chk_maintanance {            # the script-based dynamic failover is covered in the "CentOS7 + nginx + keepalived cluster and dual-master architecture" article
    script "/etc/keepalived/chkdown.sh"
    interval 1
    weight -20
}

vrrp_script chk_nginx {
    script "/etc/keepalived/chknginx.sh"
    interval 1
    weight -20
}

#VIP1
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 50                 # must match the MASTER
    priority 90                          # must be lower than the MASTER's 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.216.200
    }
    track_script {
        chk_maintanance
    }
#    track_script {
#        chk_nginx
#    }
}

#VIP2
#vrrp_instance VI_2 {
#    state MASTER
#    interface ens33
#    virtual_router_id 51
#    priority 100
#    advert_int 1
#    authentication {
#        auth_type PASS
#        auth_pass 1111
#    }
#    virtual_ipaddress {
#        192.168.216.210
#    }
#    track_script {
#        chk_maintanance
#    }
#    track_script {
#        chk_nginx
#    }
#}

virtual_server 192.168.216.200 80 {      # VIP section
    delay_loop 6                         # health-check polling interval
    lb_algo wrr                          # scheduling algorithm
    lb_kind DR                           # forwarding method
    nat_mask 255.255.0.0
    protocol TCP                         # protocol of the monitored service
    sorry_server 127.0.0.1 80            # sorry-server

    real_server 192.168.216.53 80 {      # real server
        weight 1                         # weight
        HTTP_GET {                       # health-check type: HTTP_GET|SSL_GET|TCP_CHECK|SMTP_CHECK|MISC_CHECK; HTTP_GET is used here and is said to be more efficient than TCP_CHECK
            url {
                path /                   # path requested on the RS
                status_code 200          # expected status code
            }
            connect_timeout 3            # connection timeout
            nb_get_retry 3               # number of retries
            delay_before_retry 3         # delay before the next retry
        }
    }

    real_server 192.168.216.54 80 {
        weight 2
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

    Add the keepalived down-check script referenced by chk_maintanance:

[root@web1 keepalived]# cat chkdown.sh
#!/bin/bash
# exit non-zero while the "down" flag file exists, so keepalived applies the -20 weight
[[ -f /etc/keepalived/down ]] && exit 1 || exit 0
[root@web1 keepalived]#
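
    The configuration also references /etc/keepalived/chknginx.sh (tracked only in the commented-out chk_nginx block). The original article does not show it; a minimal sketch of such a check might look like this (an assumption, not the author's script):

 #!/bin/bash
# exit non-zero when no nginx process is running, so keepalived would
# subtract the configured weight from this node's priority
killall -0 nginx &>/dev/null && exit 0 || exit 1

    If it were tracked, a failing check would lower the MASTER's priority below the BACKUP's and the VIP would move.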

 

  8. Enable logging

    vim /etc/sysconfig/keepalived

    Change KEEPALIVED_OPTIONS="-D" to KEEPALIVED_OPTIONS="-D -d -S 0"

 [root@web1 keepalived]# cat /etc/sysconfig/keepalived
# Options for keepalived. See `keepalived --help' output and keepalived(8) and
# keepalived.conf(5) man pages for a list of all options. Here are the most
# common ones :
#
# --vrrp               -P    Only run with VRRP subsystem.
# --check              -C    Only run with Health-checker subsystem.
# --dont-release-vrrp  -V    Dont remove VRRP VIPs & VROUTEs on daemon stop.
# --dont-release-ipvs  -I    Dont remove IPVS topology on daemon stop.
# --dump-conf          -d    Dump the configuration data.
# --log-detail         -D    Detailed log messages.
# --log-facility       -S    0-7 Set local syslog facility (default=LOG_DAEMON)
#

KEEPALIVED_OPTIONS="-D -d -S 0"

    Enable rsyslog

     vim /etc/rsyslog.conf

      #keepalived -S 0

      local0.*                                                /var/log/keepalived.log

    Restart the services

      systemctl restart keepalived

      systemctl start rsyslog

      systemctl enable rsyslog
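
      To confirm that messages are now arriving in the dedicated log file, you can watch it for a moment (sketch):

        tail -f /var/log/keepalived.log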

    

III. Verification

  1. Verify keepalived

    On web1 (in /etc/keepalived):

      touch down

      ip a     # check: the VIP disappears

      rm -rf down

      ip a     # the VIP automatically comes back

 [root@web1 keepalived]# touch down
[root@web1 keepalived]# ll
total ...
-rwxr-xr-x 1 root root ... chkdown.sh
-rwxr-xr-x 1 root root ... chkmysql.sh
-rwxr-xr-x 1 root root ... chknginx.sh
-rw-r--r-- 1 root root 0 Apr 24 17:31 down
-rw-r--r-- 1 root root ... keepalived.conf
-rw-r--r-- 1 root root ... notify.sh
[root@web1 keepalived]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether ... brd ff:ff:ff:ff:ff:ff
    inet 192.168.216.51/24 brd 192.168.216.255 scope global ens33          # the VIP is gone
       valid_lft forever preferred_lft forever
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
[root@web1 keepalived]# rm -rf down
[root@web1 keepalived]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether ... brd ff:ff:ff:ff:ff:ff
    inet 192.168.216.51/24 brd 192.168.216.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet 192.168.216.200/32 scope global ens33                             # the VIP has come back
       valid_lft forever preferred_lft forever
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
[root@web1 keepalived]#
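
    During the window when the down file existed on web1, the VIP should have been held by web2. A quick way to confirm it on web2 (a sketch; the second line is the expected output):

 [root@web2 keepalived]# ip addr show ens33 | grep 192.168.216.200
    inet 192.168.216.200/32 scope global ens33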

  2. Verify the health checks

    1) First check the ipvsadm rules and access the service

 [root@web1 keepalived]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.216.200:80 wrr
  -> 192.168.216.53:80            Route   1      0          0
  -> 192.168.216.54:80            Route   2      0          0
[root@web1 keepalived]#

        This is the normal state.
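
        From the client, repeated requests to the VIP should be served by both RS, with web4 answering roughly twice as often as web3 given the wrr weights 1:2 (a sketch):

 for i in $(seq 1 6); do curl -s http://192.168.216.200; done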

  

  2) Stop httpd on web3 to test the health check

    systemctl stop httpd

    Check on web1: the IPVS rules no longer include web3, and the log shows "Removing service".

 [root@web1 keepalived]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.216.200:80 wrr
  -> 192.168.216.54:80            Route   2      0          0
 [root@web1 keepalived]# cat /var/log/keepalived.log | tail -10
Apr 24 ... web1 Keepalived_vrrp[...]: Sending gratuitous ARP on ens33 for 192.168.216.200
Apr 24 ... web1 Keepalived_vrrp[...]: Sending gratuitous ARP on ens33 for 192.168.216.200
Apr 24 ... web1 Keepalived_vrrp[...]: Sending gratuitous ARP on ens33 for 192.168.216.200
Apr 24 ... web1 Keepalived_vrrp[...]: Sending gratuitous ARP on ens33 for 192.168.216.200
Apr 24 ... web1 Keepalived_healthcheckers[50390]: Error connecting server [192.168.216.53]:80.
Apr 24 ... web1 Keepalived_healthcheckers[50390]: Error connecting server [192.168.216.53]:80.
Apr 24 ... web1 Keepalived_healthcheckers[50390]: Error connecting server [192.168.216.53]:80.
Apr 24 ... web1 Keepalived_healthcheckers[50390]: Error connecting server [192.168.216.53]:80.
Apr 24 17:40:43 web1 Keepalived_healthcheckers[50390]: Check on service [192.168.216.53]:80 failed after 3 retry.
Apr 24 17:40:43 web1 Keepalived_healthcheckers[50390]: Removing service [192.168.216.53]:80 from VS [192.168.216.200]:80
[root@web1 keepalived]#

      Restore httpd on web3

        systemctl start httpd

      Check on web1: web3 has been added back to the load-balancing pool; the log shows "HTTP status code success" and "Adding service ... to VS".

 [root@web1 keepalived]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.216.200:80 wrr
  -> 192.168.216.53:80            Route   1      0          0
  -> 192.168.216.54:80            Route   2      0          0

[root@web1 keepalived]# cat /var/log/keepalived.log | tail -10
Apr 24 ... web1 Keepalived_vrrp[...]: Sending gratuitous ARP on ens33 for 192.168.216.200
Apr 24 ... web1 Keepalived_healthcheckers[50390]: Error connecting server [192.168.216.53]:80.
Apr 24 ... web1 Keepalived_healthcheckers[50390]: Error connecting server [192.168.216.53]:80.
Apr 24 ... web1 Keepalived_healthcheckers[50390]: Error connecting server [192.168.216.53]:80.
Apr 24 ... web1 Keepalived_healthcheckers[50390]: Error connecting server [192.168.216.53]:80.
Apr 24 ... web1 Keepalived_healthcheckers[50390]: Check on service [192.168.216.53]:80 failed after 3 retry.
Apr 24 ... web1 Keepalived_healthcheckers[50390]: Removing service [192.168.216.53]:80 from VS [192.168.216.200]:80
Apr 24 17:44:37 web1 Keepalived_healthcheckers[50390]: HTTP status code success to [192.168.216.53]:80 url(1).
Apr 24 17:44:37 web1 Keepalived_healthcheckers[50390]: Remote Web server [192.168.216.53]:80 succeed on service.
Apr 24 17:44:37 web1 Keepalived_healthcheckers[50390]: Adding service [192.168.216.53]:80 to VS [192.168.216.200]:80

[root@web1 keepalived]#

    

  3. Verify the sorry-server

    web3/web4

      systemctl stop httpd

    Check on web1:

     

 [root@web1 keepalived]# cat /var/log/keepalived.log | tail -15
Apr 24 ... web1 Keepalived_healthcheckers[50390]: Error connecting server [192.168.216.53]:80.
Apr 24 ... web1 Keepalived_healthcheckers[50390]: Error connecting server [192.168.216.53]:80.
Apr 24 ... web1 Keepalived_healthcheckers[50390]: Error connecting server [192.168.216.53]:80.
Apr 24 ... web1 Keepalived_healthcheckers[50390]: Error connecting server [192.168.216.53]:80.
Apr 24 ... web1 Keepalived_healthcheckers[50390]: Check on service [192.168.216.53]:80 failed after 3 retry.
Apr 24 ... web1 Keepalived_healthcheckers[50390]: Removing service [192.168.216.53]:80 from VS [192.168.216.200]:80
Apr 24 ... web1 Keepalived_healthcheckers[50390]: Error connecting server [192.168.216.54]:80.
Apr 24 ... web1 Keepalived_healthcheckers[50390]: Error connecting server [192.168.216.54]:80.
Apr 24 ... web1 Keepalived_healthcheckers[50390]: Error connecting server [192.168.216.54]:80.
Apr 24 ... web1 Keepalived_healthcheckers[50390]: Error connecting server [192.168.216.54]:80.
Apr 24 ... web1 Keepalived_healthcheckers[50390]: Check on service [192.168.216.54]:80 failed after 3 retry.
Apr 24 ... web1 Keepalived_healthcheckers[50390]: Removing service [192.168.216.54]:80 from VS [192.168.216.200]:80
Apr 24 ... web1 Keepalived_healthcheckers[50390]: Lost quorum 1-0=1 > 0 for VS [192.168.216.200]:80
Apr 24 17:47:46 web1 Keepalived_healthcheckers[50390]: Adding sorry server [127.0.0.1]:80 to VS [192.168.216.200]:80
Apr 24 17:47:46 web1 Keepalived_healthcheckers[50390]: Removing alive servers from the pool for VS [192.168.216.200]:80
[root@web1 keepalived]#

    The log shows "Adding sorry server".
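
    At this point a request from the client to the VIP should be answered by the sorry page served by the local nginx on the active director (a sketch; which page you see depends on which director currently holds the VIP):

 curl http://192.168.216.200
sorry, under maintenance #####web1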

Please credit the source when republishing: https://www.cnblogs.com/zhangxingeng/p/10743501.html
