Nginx: Keepalived + Nginx for a High-Availability Cluster
Keepalived + Nginx High-Availability Cluster (Master/Backup Mode)
Cluster architecture diagram:
Note: the Keepalived machines double as the nginx load balancers.
1) Prepare the lab environment (all nodes run CentOS 7)
# cat /etc/redhat-release
CentOS Linux release 7.4.1708 (Core)
Run the following on every node:
# systemctl stop firewalld //stop the firewall
# sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/sysconfig/selinux //disable SELinux permanently, effective after reboot
# setenforce 0 //disable SELinux immediately, effective for the current boot only
# ntpdate 0.centos.pool.ntp.org //synchronize the clock
# yum install nginx -y //install nginx
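Two caveats worth noting: systemctl stop firewalld does not persist across a reboot, and on a stock CentOS 7 install the nginx package comes from the EPEL repository. A hedged addition to the prep steps (assuming the nodes have internet access):
# systemctl disable firewalld //keep the firewall off after reboots
# yum install epel-release -y //nginx lives in EPEL on CentOS 7, so enable it first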
2) Configure the backend web servers (both are configured identically)
# echo "`hostname` `ifconfig ens33 |sed -n 's#.*inet \(.*\)netmask.*#\1#p'`" > /usr/share/nginx/html/index.html //create a test page: write the hostname and IP into index.html
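On web01, for example, the generated page should contain the hostname followed by the address (content reconstructed from the curl tests further below):
# cat /usr/share/nginx/html/index.html
web01 192.168.1.33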
# vim /etc/nginx/nginx.conf //edit the configuration file
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
include /usr/share/nginx/modules/*.conf;
events {
    worker_connections 1024;
}
http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    include /etc/nginx/conf.d/*.conf;
    server {
        listen 80;
        server_name www.mtian.org;
        location / {
            root /usr/share/nginx/html;
        }
        access_log /var/log/nginx/access.log main;
    }
}
# systemctl start nginx //start nginx
# systemctl enable nginx //start nginx at boot
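Before wiring up the load balancers, a local sanity check on each web server is worthwhile (the Host header matches the server_name above; output shown for web01):
# curl -H "Host: www.mtian.org" http://127.0.0.1/
web01 192.168.1.33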
3) Configure the LB servers (both are configured identically)
# vim /etc/nginx/nginx.conf
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
include /usr/share/nginx/modules/*.conf;
events {
    worker_connections 1024;
}
http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    include /etc/nginx/conf.d/*.conf;
    upstream backend {
        server 192.168.1.33:80 weight=1 max_fails=3 fail_timeout=20s;
        server 192.168.1.34:80 weight=1 max_fails=3 fail_timeout=20s;
    }
    server {
        listen 80;
        server_name www.mtian.org;
        location / {
            proxy_pass http://backend;
            proxy_set_header Host $host:$proxy_port;
            proxy_set_header X-Forwarded-For $remote_addr;
        }
    }
}
# systemctl start nginx //start nginx
# systemctl enable nginx //start nginx at boot
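Both upstream servers carry weight=1, so nginx uses its default round-robin scheduling; that is what produces the alternating responses in the next step. Validating the file before starting is cheap insurance:
# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful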
4) On the test machine (192.168.1.35), add hosts entries and verify that the LB cluster works. (Any test machine will do, as long as it can reach the LB nodes.)
[root@node01 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.32 www.mtian.org
192.168.1.31 www.mtian.org
// While testing, shut down the lb1 and lb2 nodes in turn; if the site stays reachable and the round-robin effect is still visible, the nginx LB cluster is working.
[root@node01 ~]# curl www.mtian.org
web01 192.168.1.33
[root@node01 ~]# curl www.mtian.org
web02 192.168.1.34
[root@node01 ~]# curl www.mtian.org
web01 192.168.1.33
[root@node01 ~]# curl www.mtian.org
web02 192.168.1.34
[root@node01 ~]# curl www.mtian.org
web01 192.168.1.33
[root@node01 ~]# curl www.mtian.org
web02 192.168.1.34
5) Once the steps above succeed, move on to keepalived. Install keepalived on both LB nodes (a source build works too; here we simply use yum):
# yum install keepalived -y
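A quick way to confirm the install (the exact version string is illustrative; CentOS 7.4 shipped keepalived 1.3.5 around the time of writing):
# keepalived -v
Keepalived v1.3.5 (03/19,2017)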
6) Configure the LB-01 node
[root@LB-01 ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    notification_email {
        @qq.com
    }
    smtp_server 192.168.200.1
    smtp_connect_timeout 30
    router_id LVS_DEVEL
}
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51
    priority 150
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.1.110/24 dev ens33 label ens33:1
    }
}
[root@LB-01 ~]# systemctl start keepalived //start keepalived
[root@LB-01 ~]# systemctl enable keepalived //start keepalived at boot
[root@LB-01 ~]# ip a //check the IPs: the VIP 192.168.1.110 has appeared
......
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    inet 192.168.1.31/24 brd 192.168.1.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet 192.168.1.110/24 scope global secondary ens33:1
       valid_lft forever preferred_lft forever
......
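One caveat before continuing: with this configuration, keepalived only fails over when the node or the keepalived process itself dies; if nginx crashes while the master stays up, requests to the VIP black-hole. A common remedy is a vrrp_script health check. A minimal sketch (the script, interval, and weight are illustrative choices, not part of the original setup):
vrrp_script chk_nginx {
    script "/usr/sbin/pidof nginx"   ! exits non-zero when nginx is not running
    interval 2                       ! probe every 2 seconds
    weight -60                       ! on failure, drop priority from 150 to below the backup's 100
}
Then reference it inside vrrp_instance VI_1:
    track_script {
        chk_nginx
    }
With the weight applied, the failed master's effective priority drops below the backup's, so the VIP moves even though the machine itself is still up.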
7) Configure the LB-02 node
[root@LB-02 ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    notification_email {
        @qq.com
    }
    smtp_server 192.168.200.1
    smtp_connect_timeout 30
    router_id LVS_DEVEL
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.1.110/24 dev ens33 label ens33:1
    }
}
[root@LB-02 ~]# systemctl start keepalived //start keepalived
[root@LB-02 ~]# systemctl enable keepalived //start keepalived at boot
[root@LB-02 ~]# ifconfig //check the IPs: the backup holds no VIP (the VIP only floats over when the master dies)
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
        inet 192.168.1.32 netmask 255.255.255.0 broadcast 192.168.1.255
......
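To see why the backup stays quiet, you can watch the VRRP advertisements the master multicasts once per advert_int (a diagnostic sketch; run it on either LB node, output abridged and based on the values above):
# tcpdump -i ens33 -nn vrrp
IP 192.168.1.31 > 224.0.0.18: VRRPv2, Advertisement, vrid 51, prio 150, authtype simple, intvl 1s, length 20
The backup only promotes itself when these advertisements stop arriving.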
8) From the test machine, access the VIP 192.168.1.110 configured in Keepalived
[root@node01 ~]# curl 192.168.1.110
web01 192.168.1.33
[root@node01 ~]# curl 192.168.1.110
web02 192.168.1.34
[root@node01 ~]# curl 192.168.1.110
web01 192.168.1.33
[root@node01 ~]# curl 192.168.1.110
web02 192.168.1.34
//Stop keepalived on the LB-01 master node, then access the VIP again
[root@LB-01 ~]# systemctl stop keepalived
[root@node01 ~]#
[root@node01 ~]# curl 192.168.1.110
web01 192.168.1.33
[root@node01 ~]# curl 192.168.1.110
web02 192.168.1.34
[root@node01 ~]# curl 192.168.1.110
web01 192.168.1.33
[root@node01 ~]# curl 192.168.1.110
web02 192.168.1.34
//Now check the IPs on the LB-01 master node: the VIP is gone
[root@LB-01 ~]# ifconfig
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
        inet 192.168.1.31 netmask 255.255.255.0 broadcast 192.168.1.255
...
//Check the IPs on the LB-02 backup node: the VIP has floated over
[root@LB-02 ~]# ifconfig
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
        inet 192.168.1.32 netmask 255.255.255.0 broadcast 192.168.1.255
        ......
ens33:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
        inet 192.168.1.110 netmask 255.255.255.0 broadcast 0.0.0.0
...
With that, the Keepalived + Nginx high-availability cluster (master/backup mode) is complete.
Keepalived + Nginx High-Availability Cluster (Dual-Master Mode)
Turning keepalived into a dual-master setup is straightforward: each node gets a second vrrp_instance rule. The master gains a backup instance and the backup gains a master instance, so each node is master for one VIP and backup for the other.
Cluster architecture diagram:
Note: we keep the environment from above and only change the keepalived configuration on the LB nodes. LB-01 is now both a Keepalived master and a backup, and so is LB-02. LB-01 is the default master for VIP 192.168.1.110, and LB-02 is the default master for VIP 192.168.1.210.
1) Configure the LB-01 node
[root@LB-01 ~]# vim /etc/keepalived/keepalived.conf //edit the config file and add a second vrrp_instance rule
! Configuration File for keepalived
global_defs {
    notification_email {
        @qq.com
    }
    smtp_server 192.168.200.1
    smtp_connect_timeout 30
    router_id LVS_DEVEL
}
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51
    priority 150
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.1.110/24 dev ens33 label ens33:1
    }
}
vrrp_instance VI_2 {
    state BACKUP
    interface ens33
    virtual_router_id 52
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 2222
    }
    virtual_ipaddress {
        192.168.1.210/24 dev ens33 label ens33:2
    }
}
[root@LB-01 ~]# systemctl restart keepalived //restart keepalived
// Check LB-01's IPs: VIP 192.168.1.110 is still held here by default
[root@LB-01 ~]# ip a
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    inet 192.168.1.31/24 brd 192.168.1.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet 192.168.1.110/24 scope global secondary ens33:1
       valid_lft forever preferred_lft forever
2) Configure the LB-02 node
[root@LB-02 ~]# vim /etc/keepalived/keepalived.conf //edit the config file and add a second vrrp_instance rule
! Configuration File for keepalived
global_defs {
    notification_email {
        @qq.com
    }
    smtp_server 192.168.200.1
    smtp_connect_timeout 30
    router_id LVS_DEVEL
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.1.110/24 dev ens33 label ens33:1
    }
}
vrrp_instance VI_2 {
    state MASTER
    interface ens33
    virtual_router_id 52
    priority 150
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 2222
    }
    virtual_ipaddress {
        192.168.1.210/24 dev ens33 label ens33:2
    }
}
[root@LB-02 ~]# systemctl restart keepalived //restart keepalived
// Check LB-02's IPs: a second VIP (192.168.1.210) has appeared, so this node is now a master as well
[root@LB-02 ~]# ip a
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    inet 192.168.1.32/24 brd 192.168.1.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet 192.168.1.210/24 scope global secondary ens33:2
       valid_lft forever preferred_lft forever
3) Test
[root@node01 ~]# curl 192.168.1.110
web01 192.168.1.33
[root@node01 ~]# curl 192.168.1.110
web02 192.168.1.34
[root@node01 ~]# curl 192.168.1.210
web01 192.168.1.33
[root@node01 ~]# curl 192.168.1.210
web02 192.168.1.34
// Stop keepalived on the LB-01 node and test again
[root@LB-01 ~]# systemctl stop keepalived
[root@node01 ~]# curl 192.168.1.110
web01 192.168.1.33
[root@node01 ~]# curl 192.168.1.110
web02 192.168.1.34
[root@node01 ~]# curl 192.168.1.210
web01 192.168.1.33
[root@node01 ~]# curl 192.168.1.210
web02 192.168.1.34
The tests show that both VIPs configured in keepalived schedule requests normally, and access continues even when either keepalived node is stopped. With that, the keepalived + nginx high-availability cluster (dual-master mode) is complete.
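To actually spread client traffic across both masters, the two VIPs are typically published as two A records for the same hostname (a sketch of the zone entries, using this article's example domain; adapt to your own DNS setup):
www.mtian.org.    IN    A    192.168.1.110
www.mtian.org.    IN    A    192.168.1.210
DNS round-robin then splits clients across the two VIPs, and if either LB node fails, its VIP floats to the survivor, so neither record ever points at a dead address.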