Building a load balancer with LVS + keepalived
LB: 192.168.2.158 (VIP: 192.168.2.188)
real-server1: 192.168.2.187
real-server2: 192.168.2.189
Key point: in the LVS + keepalived HA scheme, everything is driven by a single file, keepalived.conf. keepalived relies on the VRRP protocol, explained below:
VRRP (Virtual Router Redundancy Protocol) is a fault-tolerance protocol that dynamically assigns responsibility for a virtual router to one of the VRRP routers on a LAN. The VRRP router that controls the virtual router's IP addresses is called the master, and it forwards packets sent to those virtual IP addresses. If the master becomes unavailable, the election process provides dynamic failover, which allows the virtual router's IP address to be used as the default first-hop router by end hosts. The benefit of VRRP is higher availability of the default path without having to configure dynamic routing or a router discovery protocol on every end host. VRRP packets are encapsulated in IP packets.
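Once keepalived is running later on, you can actually watch these VRRP advertisements on the wire, which is handy for debugging; a hedged example (eth0 is an assumption, and VRRP is IP protocol 112 sent to the multicast group 224.0.0.18):
tcpdump -i eth0 -n vrrp
tcpdump -i eth0 -n 'ip proto 112'   # equivalent, filtering by protocol number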
Now let's start the installation:
1. Install ipvsadm on the VIP machine (the director)
wget http://www.linuxvirtualserver.org/software/kernel-2.6/ipvsadm-1.24.tar.gz
Before compiling, create a symlink, otherwise the build fails:
ln -s /usr/src/kernels/2.6.9-42.EL-i686/ /usr/src/linux   # must match the currently running kernel
tar -zxvf ipvsadm-1.24.tar.gz
cd ipvsadm-1.24
make && make install
At this point ipvsadm is installed.
Verify the installation:
1. Run the ipvsadm command once (this loads the IPVS kernel modules)
2.lsmod |grep ip_vs
ip_vs_rr 5953 1
ip_vs 83137 3 ip_vs_rr
Verification done; ipvsadm on the VIP machine is working.
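If you want to double-check that the kernel will actually accept IPVS rules, a short manual smoke test works too (a hedged sketch; it reuses this article's VIP and real-server addresses and clears everything at the end so it does not clash with keepalived later):
ipvsadm -A -t 192.168.2.188:80 -s rr                      # add a virtual service, round-robin
ipvsadm -a -t 192.168.2.188:80 -r 192.168.2.187:80 -g     # add a real server in DR mode (-g)
ipvsadm -Ln                                               # list the table, numeric output
ipvsadm -C                                                # clear the rules again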
2. Next comes the important part: installing keepalived:
wget http://www.keepalived.org/software/keepalived-1.1.17.tar.gz
tar -zxvf keepalived-1.1.17.tar.gz
cd keepalived-1.1.17
./configure --prefix=/ --mandir=/usr/local/share/man/ --with-kernel-dir=/usr/src/kernels/2.6.9-42.EL-smp-i686/
If configure succeeds, it prints:
Keepalived configuration
------------------------
Keepalived version : 1.1.15
Compiler : gcc
Compiler flags : -g -O2
Extra Lib : -lpopt -lssl -lcrypto
Use IPVS Framework : Yes # LVS support
IPVS sync daemon support : Yes
Use VRRP Framework : Yes
Use LinkWatch : No
Use Debug flags : No
make && make install
At this point the LVS + keepalived installation is complete.
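A quick sanity check after the install (hedged: with --prefix=/ the sample configuration should land under /etc/keepalived, and the -v flag printing the version is an assumption about this build):
keepalived -v          # print the version that was just built
ls /etc/keepalived/    # the sample keepalived.conf should be here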
3. Next, configure keepalived.conf:
vi /etc/keepalived/keepalived.conf
Here is my configuration:
! Configuration File for keepalived
# Global configuration:
global_defs {
notification_email {
admin@xx.com # notification address; the local machine must run an SMTP service
}
notification_email_from root@localhost
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id LVS_DEVEL # identifier of this load balancer; it should be unique within the LAN
}
# VRRP configuration:
vrrp_sync_group VGM {
group {
VI_1
}
}
# VRRP instance configuration
vrrp_instance VI_1 { # define an instance
state MASTER # this node is the MASTER
interface eth0
virtual_router_id 51 # must be identical on the master and the backup
priority 100 # larger on the master, smaller on the backup
advert_int 5 # VRRP multicast advertisement interval in seconds (the check interval)
authentication {
auth_type PASS # VRRP authentication method
auth_pass 1111 # VRRP password
}
virtual_ipaddress {
192.168.2.188
............. # (for more VIPs, add one per line)
}
}
# LVS configuration:
virtual_server 192.168.2.188 80 {
delay_loop 6 # query real server status every 6 seconds
lb_algo rr # load-balancing scheduling algorithm; wlc and rr are the common ones
lb_kind DR # forwarding method; the usual choices are DR, NAT and TUN
persistence_timeout 50 # session persistence: connections from the same IP go to the same real server for 50 seconds
protocol TCP # use TCP to check real server health
sorry_server 127.0.0.1 80 # if every real server fails, the VIP falls back to port 80 on this machine
real_server 192.168.2.187 80 {
weight 3 # weight
TCP_CHECK { # judge the real server's health with a TCP check
nb_get_retry 3 # number of retries
delay_before_retry 3 # interval between retries
connect_port 80 # health-check port
connect_timeout 3 # connect timeout
}
}
real_server 192.168.2.189 80 {
weight 1
TCP_CHECK {
nb_get_retry 3
delay_before_retry 3
connect_port 80
connect_timeout 3
}
}
}
That completes the keepalived configuration.
Start it with /etc/init.d/keepalived start
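After starting, it is worth confirming that the VIP has been bound and that the LVS rules from keepalived.conf are in place (a minimal hedged check; eth0 is an assumption):
ip addr show eth0 | grep 192.168.2.188   # the VIP should appear on eth0
ipvsadm -Ln                              # should list 192.168.2.188:80 with both real servers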
4. Next, configure the real servers. Run the same script on both; here it is:
[root@test1 ~]# more /usr/local/bin/lvs_real
#!/bin/sh
VIP=192.168.2.188 # in DR mode the VIP must be in the same subnet as the IP the servers use to serve requests
. /etc/rc.d/init.d/functions
case "$1" in
start)
echo " start tunl port"
ifconfig lo:0 $VIP netmask 255.255.255.255 broadcast $VIP up
# for multiple VIPs, add further lo aliases (lo:1, lo:2, ...) here
echo "2">/proc/sys/net/ipv4/conf/all/arp_announce
echo "1">/proc/sys/net/ipv4/conf/all/arp_ignore
echo "2">/proc/sys/net/ipv4/conf/lo/arp_announce
echo "1">/proc/sys/net/ipv4/conf/lo/arp_ignore
;;
stop)
echo " stop tunl port"
ifconfig lo:0 down
echo "0">/proc/sys/net/ipv4/conf/all/arp_announce
echo "0">/proc/sys/net/ipv4/conf/all/arp_ignore
echo "0">/proc/sys/net/ipv4/conf/lo/arp_announce
echo "0">/proc/sys/net/ipv4/conf/lo/arp_ignore
;;
*)
echo "Usage: $0 {start|stop}"
exit 1
esac
Below are Teacher Tian's notes on this script:
1. VIP (virtual IP). In direct-routing mode the VIP must be in the same subnet as the IP addresses the servers use to provide service, and the LVS director and all servers providing the same service share this VIP.
2. The VIP is bound to the loopback alias lo:0 with itself as the broadcast address and a 255.255.255.255 netmask, which is quite different from a normal address setup. This variable-length mask carves the network down to a single host address, which avoids IP address conflicts.
3. The echo "1" and echo "2" lines suppress ARP. Without this suppression, a crowd of machines would keep announcing to everyone else "Hey! I'm Obama, I'm over here!" for the VIP, and things would descend into chaos.
Explanation:
1 - allows multiple network interfaces to sit on the same subnet segment; each interface answers an ARP query only if the kernel would route that packet out through it (the decision is made when the route is chosen for the source address). In other words, it lets you restrict ARP replies to a particular NIC (usually the first one). For load balancing you could get by with
echo 1 > /proc/sys/net/ipv4/conf/all/arp_filter
but it is better to use the following two commands together,
echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
since arp_announce and arp_ignore appear to be a finer-grained implementation of what arp_filter controls.
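To verify that the ARP suppression actually works, you can probe the VIP from another machine on the same subnet; only the director should answer (a hedged example using iputils arping; eth0 on the probing host is an assumption):
arping -I eth0 -c 3 192.168.2.188
# every reply should carry the MAC address of the director's eth0, never that of a real server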
Make the script executable (chmod +x /usr/local/bin/lvs_real) and use /usr/local/bin/lvs_real start|stop to bring the VIP up and down.
After starting:
[root@test1 ~]# ip add
1: lo: <LOOPBACK,UP> mtu 16436 qdisc noqueue
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 brd 127.255.255.255 scope host lo
inet 192.168.2.188/32 brd 192.168.2.188 scope global lo:0
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast qlen 1000
link/ether 00:0c:29:72:73:b5 brd ff:ff:ff:ff:ff:ff
inet 192.168.2.187/24 brd 192.168.2.255 scope global eth0
inet6 fe80::20c:29ff:fe72:73b5/64 scope link
valid_lft forever preferred_lft forever
3: sit0: <NOARP> mtu 1480 qdisc noop
link/sit 0.0.0.0 brd 0.0.0.0
After stopping:
[root@test1 ~]# ip add
1: lo: <LOOPBACK,UP> mtu 16436 qdisc noqueue
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 brd 127.255.255.255 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast qlen 1000
link/ether 00:0c:29:72:73:b5 brd ff:ff:ff:ff:ff:ff
inet 192.168.2.187/24 brd 192.168.2.255 scope global eth0
inet6 fe80::20c:29ff:fe72:73b5/64 scope link
valid_lft forever preferred_lft forever
3: sit0: <NOARP> mtu 1480 qdisc noop
link/sit 0.0.0.0 brd 0.0.0.0
5. Now verify LVS + keepalived
On the VIP machine: /etc/init.d/keepalived start
On both real servers run lvs_real start as described above
Check on the VIP machine:
[root@YuHao-linux ipvsadm-1.24]# ipvsadm
IP Virtual Server version 1.2.0 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 192.168.2.188:http rr persistent 50
-> 192.168.2.187:http Route 3 2 0
-> 192.168.2.189:http Route 1 0 0
Access to the web service works normally at this point. ActiveConn is the number of active connections (ESTABLISHED state); InActConn is the number of inactive connections (states other than ESTABLISHED, such as SYN_RECV, TIME_WAIT, FIN_WAIT1).
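To look behind these counters, ipvsadm can also dump the individual connection entries, which is a good way to see the 50-second persistence at work (hedged; -c lists the connection table and -n keeps it numeric):
ipvsadm -Lcn
# each client IP gets one persistence template entry (state NONE) pinning it to a real server,
# plus one entry per TCP connection in states such as ESTABLISHED or TIME_WAIT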
Then I stop Apache on one real server and check again:
[root@YuHao-linux ipvsadm-1.24]# ipvsadm
IP Virtual Server version 1.2.0 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 192.168.2.188:http rr persistent 50
-> 192.168.2.189:http Route 1 0 0
keepalived has detected that one web server is down and removed it from the load-balancing pool.
6. Some logs for reference:
keepalived startup log:
Oct 22 17:07:39 YuHao-linux Keepalived: Starting Keepalived v1.1.17 (10/22,2009)
Oct 22 17:07:39 YuHao-linux Keepalived: Remove a zombie pid file /var/run/vrrp.pid
Oct 22 17:07:39 YuHao-linux Keepalived: Remove a zombie pid file /var/run/checkers.pid
Oct 22 17:07:39 YuHao-linux Keepalived_healthcheckers: Using MII-BMSR NIC polling thread...
Oct 22 17:07:39 YuHao-linux Keepalived_healthcheckers: Netlink reflector reports IP 192.168.2.158 added
Oct 22 17:07:39 YuHao-linux Keepalived_healthcheckers: Netlink reflector reports IP 192.168.2.188 added
Oct 22 17:07:39 YuHao-linux Keepalived_healthcheckers: Registering Kernel netlink reflector
Oct 22 17:07:39 YuHao-linux Keepalived_healthcheckers: Registering Kernel netlink command channel
Oct 22 17:07:39 YuHao-linux Keepalived: Starting Healthcheck child process, pid=5972
Oct 22 17:07:39 YuHao-linux Keepalived_vrrp: Using MII-BMSR NIC polling thread...
Oct 22 17:07:39 YuHao-linux Keepalived_vrrp: Netlink reflector reports IP 192.168.2.158 added
Oct 22 17:07:39 YuHao-linux Keepalived_vrrp: Netlink reflector reports IP 192.168.2.188 added
Oct 22 17:07:39 YuHao-linux Keepalived: Starting VRRP child process, pid=5973
Oct 22 17:07:39 YuHao-linux keepalived: keepalived startup succeeded
Oct 22 17:07:39 YuHao-linux Keepalived_healthcheckers: Opening file '/etc/keepalived/keepalived.conf'.
Oct 22 17:07:39 YuHao-linux Keepalived_vrrp: Registering Kernel netlink reflector
Oct 22 17:07:40 YuHao-linux Keepalived_healthcheckers: Configuration is using : 7482 Bytes
Oct 22 17:07:40 YuHao-linux Keepalived_vrrp: Registering Kernel netlink command channel
Oct 22 17:07:40 YuHao-linux Keepalived_vrrp: Registering gratutious ARP shared channel
Oct 22 17:07:40 YuHao-linux Keepalived_vrrp: Opening file '/etc/keepalived/keepalived.conf'.
Oct 22 17:07:40 YuHao-linux Keepalived_vrrp: Configuration is using : 37230 Bytes
Oct 22 17:07:40 YuHao-linux Keepalived_vrrp: VRRP sockpool: [ifindex(2), proto(112), fd(10,11)]
Oct 22 17:07:45 YuHao-linux Keepalived_vrrp: VRRP_Instance(VI_1) Transition to MASTER STATE
Oct 22 17:07:50 YuHao-linux Keepalived_vrrp: VRRP_Instance(VI_1) Entering MASTER STATE
Oct 22 17:07:50 YuHao-linux Keepalived_vrrp: VRRP_Instance(VI_1) setting protocol VIPs.
Oct 22 17:07:50 YuHao-linux Keepalived_vrrp: Netlink: error: File exists, type=(20), seq=1256202461, pid=0
Oct 22 17:07:50 YuHao-linux Keepalived_vrrp: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.2.188
Oct 22 17:07:50 YuHao-linux Keepalived_vrrp: VRRP_Group(VGM) Syncing instances to MASTER state
Oct 22 17:07:55 YuHao-linux Keepalived_vrrp: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.2.188
After one real server is stopped:
Oct 22 17:12:57 YuHao-linux Keepalived_healthcheckers: TCP connection to [192.168.2.189:80] failed !!!
Oct 22 17:12:57 YuHao-linux Keepalived_healthcheckers: Removing service [192.168.2.189:80] from VS [192.168.2.188:80]
After the stopped real server is brought back up:
Oct 22 17:16:01 YuHao-linux Keepalived_healthcheckers: TCP connection to [192.168.2.189:80] success.
Oct 22 17:16:01 YuHao-linux Keepalived_healthcheckers: Adding service [192.168.2.189:80] to VS [192.168.2.188:80]
Oct 22 17:16:01 YuHao-linux Keepalived_healthcheckers: Gained quorum 1+0=1 <= 4 for VS [192.168.2.188:80]
Since I am new to LVS I did not set up a master/backup pair this time. My sincere thanks to Teacher Tian Yi for his guidance; everything above was learned from his blog and the PDF he gave me. If you repost, please credit the original author sery, Teacher Tian!
His blog: http://sery.blog.51cto.com/all/10037/page/1
Last time I built the LVS + keepalived load balancer and it worked well, but without a master/backup pair; this time I am adding one:
Environment: CentOS 4.4
Four machines in total:
Load balancers:
Master: 192.168.2.158 # LVS + keepalived installed
Backup: 192.168.2.159 # LVS + keepalived installed
VIP: 192.168.2.188
real-server1: 192.168.2.187 # only runs the script
real-server2: 192.168.2.189 # only runs the script
Installation:
Install ipvsadm and keepalived on 192.168.2.159 the same way as before; the part that matters is the keepalived configuration:
global_defs {
notification_email {
admin@xx.com # notification address; the local machine must run an SMTP service
}
notification_email_from root@localhost
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id LVS_DEVEL2 # identifier of this load balancer; it should be unique within the LAN
}
vrrp_sync_group VGB {
group {
VI_1
}
}
vrrp_instance VI_1 {
state BACKUP # set to BACKUP; priority decides which node is promoted to master
interface eth0
virtual_router_id 51
priority 70
advert_int 5
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
192.168.2.188
}
}
virtual_server 192.168.2.188 80 {
delay_loop 6
lb_algo rr
lb_kind DR
persistence_timeout 50
protocol TCP
real_server 192.168.2.187 80 {
weight 3
TCP_CHECK {
nb_get_retry 3
delay_before_retry 3
connect_port 80
connect_timeout 3
}
}
real_server 192.168.2.189 80 {
weight 1
TCP_CHECK {
nb_get_retry 3
delay_before_retry 3
connect_port 80
connect_timeout 3
}
}
}
Compared with the master's configuration, the parts to change are the router_id, the sync group name, state BACKUP and the priority.
keepalived startup log on the backup node:
Oct 22 23:40:35 test1 Keepalived_healthcheckers: Using MII-BMSR NIC polling thread...
Oct 22 23:40:35 test1 Keepalived_healthcheckers: Netlink reflector reports IP 192.168.2.187 added
Oct 22 23:40:35 test1 Keepalived_healthcheckers: Registering Kernel netlink reflector
Oct 22 23:40:35 test1 Keepalived_healthcheckers: Registering Kernel netlink command channel
Oct 22 23:40:35 test1 keepalived: keepalived startup succeeded
Oct 22 23:40:35 test1 Keepalived_healthcheckers: Opening file '/etc/keepalived/keepalived.conf'.
Oct 22 23:40:35 test1 Keepalived_healthcheckers: Configuration is using : 10675 Bytes
Oct 22 23:40:35 test1 Keepalived_healthcheckers: Activating healtchecker for service [192.168.2.187:80]
Oct 22 23:40:35 test1 Keepalived_healthcheckers: Activating healtchecker for service [192.168.2.189:80]
Oct 22 23:40:35 test1 Keepalived: Starting Healthcheck child process, pid=5387
Oct 22 23:40:35 test1 Keepalived_vrrp: Using MII-BMSR NIC polling thread...
Oct 22 23:40:35 test1 Keepalived_vrrp: Netlink reflector reports IP 192.168.2.187 added
Oct 22 23:40:35 test1 Keepalived_vrrp: Registering Kernel netlink reflector
Oct 22 23:40:35 test1 Keepalived: Starting VRRP child process, pid=5389
Oct 22 23:40:35 test1 Keepalived_vrrp: Registering Kernel netlink command channel
Oct 22 23:40:35 test1 Keepalived_vrrp: Registering gratutious ARP shared channel
Oct 22 23:40:35 test1 Keepalived_vrrp: Opening file '/etc/keepalived/keepalived.conf'.
Oct 22 23:40:35 test1 Keepalived_vrrp: Configuration is using : 37155 Bytes
Oct 22 23:40:35 test1 Keepalived_vrrp: VRRP_Instance(VI_1) Entering BACKUP STATE
Oct 22 23:40:35 test1 Keepalived_vrrp: VRRP sockpool: [ifindex(2), proto(112), fd(10,11)]
Oct 22 23:42:18 test1 Keepalived_vrrp: VRRP_Group(VGB) Syncing instances to MASTER state
Oct 22 23:42:23 test1 Keepalived_vrrp: VRRP_Instance(VI_1) Entering MASTER STATE
Oct 22 23:42:23 test1 Keepalived_vrrp: VRRP_Instance(VI_1) setting protocol VIPs.
Oct 22 23:42:23 test1 Keepalived_vrrp: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.2.188
Oct 22 23:42:23 test1 Keepalived_vrrp: Netlink reflector reports IP 192.168.2.188 added
Oct 22 23:42:23 test1 Keepalived_healthcheckers: Netlink reflector reports IP 192.168.2.188 added
Oct 22 23:42:28 test1 Keepalived_vrrp: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.2.188
When the master load balancer is restarted, the backup's log shows:
Oct 22 23:43:18 test1 Keepalived_vrrp: VRRP_Instance(VI_1) Received higher prio advert
Oct 22 23:43:18 test1 Keepalived_vrrp: VRRP_Instance(VI_1) Entering BACKUP STATE
Oct 22 23:43:18 test1 Keepalived_vrrp: VRRP_Instance(VI_1) removing protocol VIPs.
Oct 22 23:43:18 test1 Keepalived_vrrp: VRRP_Group(VGB) Syncing instances to BACKUP state
Oct 22 23:43:18 test1 Keepalived_vrrp: Netlink reflector reports IP 192.168.2.188 removed
Oct 22 23:43:18 test1 Keepalived_healthcheckers: Netlink reflector reports IP 192.168.2.188 removed
That completes the hot-standby (master/backup) LVS + keepalived load balancer.
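A simple way to confirm the hot standby really works is to stop keepalived on the master and watch the VIP move (a hedged test procedure; the hosts to run each command on are noted in the comments, and curl on the client is an assumption):
# on the master (192.168.2.158): take keepalived down
/etc/init.d/keepalived stop
# on the backup (192.168.2.159): the VIP should show up within a few advert_int periods
ip addr show eth0 | grep 192.168.2.188
ipvsadm -Ln
# from a client: the web service should keep answering on the VIP throughout
curl -s http://192.168.2.188/ > /dev/null && echo OK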
The script to run on the real servers:
[root@test1 ~]# more /usr/local/bin/lvs_real
#!/bin/sh
VIP=192.168.2.188 # in DR mode the VIP must be in the same subnet as the IP the servers use to serve requests
. /etc/rc.d/init.d/functions
case "$1" in
start)
echo " start tunl port"
ifconfig lo:0 $VIP netmask 255.255.255.255 broadcast $VIP up
# for multiple VIPs, add further lo aliases (lo:1, lo:2, ...) here
echo "2">/proc/sys/net/ipv4/conf/all/arp_announce
echo "1">/proc/sys/net/ipv4/conf/all/arp_ignore
echo "2">/proc/sys/net/ipv4/conf/lo/arp_announce
echo "1">/proc/sys/net/ipv4/conf/lo/arp_ignore
;;
stop)
echo " stop tunl port"
ifconfig lo:0 down
echo "0">/proc/sys/net/ipv4/conf/all/arp_announce
echo "0">/proc/sys/net/ipv4/conf/all/arp_ignore
echo "0">/proc/sys/net/ipv4/conf/lo/arp_announce
echo "0">/proc/sys/net/ipv4/conf/lo/arp_ignore
;;
*)
echo "Usage: $0 {start|stop}"
exit 1
esac
Summary: only the load-balancer machines need ipvsadm and keepalived installed and keepalived started; the real servers only need the loopback VIP set up (with the script) and no extra software.
Lesson learned: never put the load balancer and the web server on the same machine. Even though none of the ports conflict, you will run into inexplicable problems.
[root@linux keepalived]# ifconfig lo:0
lo:0 Link encap:Local Loopback
UP LOOPBACK RUNNING MTU:16436 Metric:1
start tunl port
[root@YuHao-linux keepalived]# ifconfig lo:0
lo:0 Link encap:Local Loopback
inet addr:192.168.2.188 Mask:255.255.255.255
UP LOOPBACK RUNNING MTU:16436 Metric:1
Hence:
One more thing to watch out for here is the MAC address issue (from netseek):
If two directors back each other up, the network may become unreachable when one director takes over the LVS service, because the router's MAC cache is not refreshed in time and still maps the VIP to the MAC of the director that was replaced. There are two fixes: change the new director's MAC address, or use the send_arp / arping command.
Taking arping as an example:
/sbin/arping -I eth0 -c 3 -s ${vip} ${gateway_ip} > /dev/null 2>&1
Eg:
/sbin/arping -I eth0 -c 3 -s 192.168.1.6 192.168.1.1
Piranha/keepalived sends the send_arp command automatically during a switchover, and in testing UltraMonkey does so as well. With heartbeat you have to write a send_arp or arping script yourself and have heartbeat run it as a resource when it switches the service over.
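For the heartbeat case such a script can be very small; here is a minimal sketch using iputils arping (-U sends unsolicited/gratuitous ARP; the VIP and interface name below are this article's values and are assumptions for any other setup):
#!/bin/sh
# refresh the upstream router's ARP cache after taking over the VIP
VIP=192.168.2.188
DEV=eth0
/sbin/arping -U -I $DEV -c 3 $VIP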
A related LVS + keepalived configuration write-up:
http://bbs.linuxtone.org/thread-1077-1-1.html