Automating an nginx + keepalived highly available load balancer with Ansible
This post documents an automated deployment of a highly available nginx load balancer using Ansible. The front-end proxy layer is nginx + keepalived, and the backend web tier is three nginx web servers so the load-balancing effect is easy to see (topology diagram omitted).
Preparation
Host plan:
- Ansible : 192.168.214.144
- Keepalived-node-1 : 192.168.214.148
- Keepalived-node-2 : 192.168.214.143
- web1 : 192.168.214.133
- web2 : 192.168.214.135
- web3 : 192.168.214.139
Set up key-based SSH authentication from the Ansible host to the managed hosts
#!/bin/bash
# Distribute the Ansible host's public key to every host in /home/iplist.txt.
keypath=/root/.ssh
[ -d ${keypath} ] || mkdir -p ${keypath}
# expect is needed to answer the interactive ssh-copy-id prompts
rpm -q expect &> /dev/null || yum install expect -y
# generate a key pair only if one does not exist yet
[ -f ${keypath}/id_rsa ] || ssh-keygen -t rsa -f ${keypath}/id_rsa -P ""
password=fsz...
while read ip;do
expect <<EOF
set timeout 5
spawn ssh-copy-id $ip
expect {
    "yes/no" { send "yes\n";exp_continue }
    "password" { send "$password\n" }
}
expect eof
EOF
done < /home/iplist.txt
Contents of iplist.txt:
192.168.214.148
192.168.214.143
192.168.214.133
192.168.214.135
192.168.214.139
192.168.214.134
Run the script:
[root@Ansible script]# ./autokey.sh
Verify:
[root@Ansible script]# ssh 192.168.214.148 'date'
Address 192.168.214.148 maps to localhost, but this does not map back to the address - POSSIBLE BREAK-IN ATTEMPT!
Sat Jul 14 11:35:21 CST 2018
Set up /etc/hosts on the Ansible host so each remote host can be managed by hostname:
vim /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.214.148 node-1
192.168.214.143 node-2
192.168.214.133 web-1
192.168.214.135 web-2
192.168.214.139 web-3
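When rerunning the setup, it helps to append these entries idempotently so repeated runs do not duplicate lines. A minimal sketch; it operates on a scratch copy here, so point `hosts_file` at /etc/hosts for real use:

```shell
# Append each name mapping only if that exact line is not already present.
hosts_file=$(mktemp)            # scratch copy for demonstration
cp /etc/hosts "$hosts_file"     # set hosts_file=/etc/hosts for real use

for entry in "192.168.214.148 node-1" "192.168.214.143 node-2" \
             "192.168.214.133 web-1"  "192.168.214.135 web-2" \
             "192.168.214.139 web-3"; do
  grep -qxF "$entry" "$hosts_file" || echo "$entry" >> "$hosts_file"
done
```

Running the loop a second time changes nothing, because `grep -qxF` matches each whole line exactly.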
Install and configure Ansible
# install Ansible
[root@Ansible ~]# yum install ansible -y
# set up the Ansible inventory
[root@Ansible ~]# vim /etc/ansible/hosts
[all]
192.168.214.148
192.168.214.143
192.168.214.133
192.168.214.135
192.168.214.139
[node]
192.168.214.148
192.168.214.143
[web]
192.168.214.133
192.168.214.135
192.168.214.139
# run a connectivity test from Ansible with the ping module
[root@Ansible ~]# ansible all -m ping
Writing the web role
First, the directory layout of the web role:
[root@Ansible ~]# tree /opt/roles/web
/opt/roles/web
.
├── tasks
│ ├── install_nginx.yml
│ ├── main.yml
│ ├── start.yml
│ ├── temps.yml
│ └── user.yml
└── templates
├── index.html.j2
└── nginx.conf.j2
2 directories, 7 files
Write the task files in the order the role runs them.
user.yml:
- name: create group nginx
  group: name=nginx

- name: create user nginx
  user: name=nginx group=nginx system=yes shell=/sbin/nologin
install_nginx.yml:
- name: install nginx webserver
  yum: name=nginx
Create the template for the nginx configuration file
Since this is a test setup, the backend nginx.conf stays close to the defaults; only the worker process count is adapted to each host, using the ansible_processor_vcpus fact from Ansible's setup module to pick up the remote host's CPU count.
# turn the configuration file into a template
[root@Ansible conf]# cp nginx.conf /opt/roles/web/templates/nginx.conf.j2
# the only change made:
worker_processes {{ ansible_processor_vcpus }};
# also put a test page into the templates directory:
vim index.html.j2
{{ ansible_hostname }} test page.
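To preview what the template will render on a given host, `nproc` reports essentially the same number as the ansible_processor_vcpus fact (a local sanity check, not part of the role):

```shell
# nproc prints the number of processing units available on this host,
# which is what ansible_processor_vcpus reports for a managed node.
vcpus=$(nproc)
echo "worker_processes ${vcpus};"
```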
temps.yml:
- name: cp nginx.conf.j2 to nginx web server rename nginx.conf
  template: src=/opt/roles/web/templates/nginx.conf.j2 dest=/etc/nginx/nginx.conf

- name: cp index test page to nginx server
  template: src=/opt/roles/web/templates/index.html.j2 dest=/usr/share/nginx/html/index.html
start.yml:
- name: start nginx
  service: name=nginx state=started
main.yml:
- import_tasks: user.yml
- import_tasks: install_nginx.yml
- import_tasks: temps.yml
- import_tasks: start.yml
Write the top-level playbook web_install.yml. The playbook must not sit inside the web role directory itself; it normally lives in the roles directory.
[root@Ansible ~]# vim /opt/roles/web_install.yml
---
- hosts: web
remote_user: root
roles:
- web
Dry-run first (-C is check mode):
[root@Ansible ~]# ansible-playbook -C /opt/roles/web_install.yml
If the dry run is clean, run the installation:
[root@Ansible ~]# ansible-playbook /opt/roles/web_install.yml
Test access:
[root@Ansible ~]# ansible web -m shell -a 'iptables -F'
192.168.214.139 | SUCCESS | rc=0 >>
192.168.214.135 | SUCCESS | rc=0 >>
192.168.214.133 | SUCCESS | rc=0 >>
[root@Ansible ~]# curl 192.168.214.133
web-1 test page.
Writing the nginx + keepalived role
For a highly available cluster, pay attention to time on every node, including the backend hosts: make sure all clocks are in sync.
[root@Ansible ~]# ansible all -m shell -a 'yum install ntpdate -y'
[root@Ansible ~]# ansible all -m shell -a 'ntpdate gudaoyufu.com'
Now the role files.
user.yml:
- name: create nginx group
  group: name=nginx

- name: create nginx user
  user: name=nginx group=nginx system=yes shell=/sbin/nologin
install_server.yml:
- name: install nginx and keepalived
  yum: name={{ item }} state=latest
  with_items:
    - nginx
    - keepalived
temps.yml:
- name: copy nginx proxy conf and rename
  template: src=/opt/roles/ha_proxy/templates/nginx.conf.j2 dest=/etc/nginx/nginx.conf

- name: copy master_keepalived.conf.j2 to MASTER node
  template: src=/opt/roles/ha_proxy/templates/master_keepalived.conf.j2 dest=/etc/keepalived/keepalived.conf
  when: ansible_hostname == "node-1"

- name: copy backup_keepalived.conf.j2 to BACKUP node
  template: src=/opt/roles/ha_proxy/templates/backup_keepalived.conf.j2 dest=/etc/keepalived/keepalived.conf
  when: ansible_hostname == "node-2"
Create the nginx proxy configuration template (it lives in the ha_proxy role's templates directory):
[root@Ansible ~]# cp /opt/conf/nginx.conf /opt/roles/ha_proxy/templates/nginx.conf.j2
[root@Ansible ~]# vim /opt/roles/ha_proxy/templates/nginx.conf.j2
user nginx;
worker_processes {{ ansible_processor_vcpus }};
error_log /var/log/nginx/error.log;
pid /var/run/nginx.pid;

# Load dynamic modules. See /usr/share/nginx/README.dynamic.

events {
    worker_connections 1024;
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    include /etc/nginx/conf.d/*.conf;

    upstream web {
        server 192.168.214.133:80 max_fails=3 fail_timeout=30s;
        server 192.168.214.135:80 max_fails=3 fail_timeout=30s;
        server 192.168.214.139:80 max_fails=3 fail_timeout=30s;
    }

    server {
        listen 80 default_server;
        server_name {{ ansible_hostname }};
        root /usr/share/nginx/html;
        index index.html index.php;

        location / {
            proxy_pass http://web;
        }

        error_page 404 /404.html;
    }
}
Create the keepalived configuration templates
[root@Ansible ~]# cp /opt/conf/keepalived.conf /opt/roles/ha_proxy/templates/master_keepalived.conf.j2
[root@Ansible templates]# vim master_keepalived.conf.j2
! Configuration File for keepalived

global_defs {
    notification_email {
        acassen@firewall.loc
        failover@firewall.loc
        sysadmin@firewall.loc
    }
    notification_email_from Alexandre.Cassen@firewall.loc
    smtp_server 192.168.214.1
    smtp_connect_timeout 30
    router_id LVS_DEVEL
    vrrp_skip_check_adv_addr
    vrrp_strict
    vrrp_garp_interval 0
    vrrp_gna_interval 0
    vrrp_iptables
    vrrp_mcast_group4 224.17.17.17
}

vrrp_script chk_nginx {
    script "killall -0 nginx"
    interval 1
    weight -20
    fall 2
    rise 1
}

vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 55
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 12345678
    }
    virtual_ipaddress {
        192.168.214.100
    }
    track_script {
        chk_nginx
    }
}
Likewise, save a copy of master_keepalived.conf.j2 as backup_keepalived.conf.j2 and change only the role (state BACKUP) and the priority. Note: choose the vrrp_script weight in master_keepalived.conf.j2 so that after the failure-triggered reduction, the MASTER's priority ends up lower than the BACKUP's priority.
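That constraint can be checked with simple arithmetic. With the values in the template above (MASTER priority 100, chk_nginx weight -20) and a BACKUP priority of 90 (an assumption here, though it matches the prio 90 advertisements seen in the tcpdump output later in this post), a failed health check drops the MASTER below the BACKUP:

```shell
# Failover sanity check: after the vrrp_script penalty is applied,
# the degraded MASTER priority must be lower than the BACKUP priority.
master_prio=100   # priority in master_keepalived.conf.j2
weight=-20        # chk_nginx weight
backup_prio=90    # assumed priority for backup_keepalived.conf.j2
degraded=$((master_prio + weight))
if [ "$degraded" -lt "$backup_prio" ]; then
  echo "failover works: $degraded < $backup_prio"
else
  echo "failover broken: $degraded >= $backup_prio"
fi
# → failover works: 80 < 90
```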
start.yml:
- name: start nginx proxy server
  service: name=nginx state=started
main.yml:
- import_tasks: user.yml
- import_tasks: install_server.yml
- import_tasks: temps.yml
- import_tasks: start.yml
Write the top-level playbook:
[root@Ansible ~]# vim /opt/roles/ha_proxy_install.yml
---
- hosts: node
remote_user: root
roles:
- ha_proxy
Dry-run the role first:
[root@Ansible ~]# ansible-playbook -C /opt/roles/ha_proxy_install.yml
If the dry run is clean, run the deployment. The run looks like this:
[root@Ansible ~]# ansible-playbook /opt/roles/ha_proxy_install.yml
PLAY [node] **********************************************************************************************************************
TASK [Gathering Facts] ***********************************************************************************************************
ok: [192.168.214.148]
ok: [192.168.214.143]
TASK [ha_proxy : create nginx group] *********************************************************************************************
changed: [192.168.214.148]
ok: [192.168.214.143]
TASK [ha_proxy : create nginx user] **********************************************************************************************
changed: [192.168.214.148]
ok: [192.168.214.143]
TASK [ha_proxy : install nginx and keepalived] ***********************************************************************************
changed: [192.168.214.143] => (item=[u'nginx', u'keepalived'])
changed: [192.168.214.148] => (item=[u'nginx', u'keepalived'])
TASK [ha_proxy : copy nginx proxy conf and rename] *******************************************************************************
changed: [192.168.214.148]
changed: [192.168.214.143]
TASK [ha_proxy : copy master_keepalived.conf.j2 to MASTER node] ******************************************************************
skipping: [192.168.214.143]
changed: [192.168.214.148]
TASK [ha_proxy : copy backup_keepalived.conf.j2 to BACKUP node] ******************************************************************
skipping: [192.168.214.148]
changed: [192.168.214.143]
TASK [ha_proxy : start nginx proxy server] ***************************************************************************************
changed: [192.168.214.143]
changed: [192.168.214.148]
PLAY RECAP ***********************************************************************************************************************
192.168.214.143 : ok=7 changed=4 unreachable=0 failed=0
192.168.214.148 : ok=7 changed=6 unreachable=0 failed=0
That completes the automated deployment of the nginx + keepalived highly available load balancer.
Finally, the layout of the roles directory:
[root@Ansible ~]# tree /opt/roles/
/opt/roles/
├── ha_proxy
│ ├── tasks
│ │ ├── install_server.yml
│ │ ├── main.yml
│ │ ├── start.yml
│ │ ├── temps.yml
│ │ └── user.yml
│ └── templates
│ ├── backup_keepalived.conf.j2
│ ├── master_keepalived.conf.j2
│ └── nginx.conf.j2
├── ha_proxy_install.retry
├── ha_proxy_install.yml
├── web
│ ├── tasks
│ │ ├── install_nginx.yml
│ │ ├── main.yml
│ │ ├── start.yml
│ │ ├── temps.yml
│ │ └── user.yml
│ └── templates
│ ├── index.html.j2
│ └── nginx.conf.j2
├── web_install.retry
└── web_install.yml
6 directories, 19 files
Now test the services. keepalived is not set to start automatically in the Ansible play; start it on the keepalived nodes by hand.
Test the proxy nodes
[root@Ansible ~]# for i in {1..10};do curl 192.168.214.148;done
web-3 test page.
web-1 test page.
web-2 test page.
web-3 test page.
web-1 test page.
web-2 test page.
web-3 test page.
web-1 test page.
web-2 test page.
web-3 test page.
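Rather than eyeballing the rotation, the responses can be piped through a small counter. `count_hits` is a hypothetical helper defined here, not part of the deployment; 192.168.214.100 is the VIP from the keepalived template:

```shell
# count_hits: tally identical response lines, most frequent first.
count_hits() { sort | uniq -c | sort -rn; }

# Live use against the VIP (needs the cluster running):
# for i in $(seq 1 30); do curl -s http://192.168.214.100; done | count_hits

# Demonstration on canned responses:
printf 'web-1 test page.\nweb-2 test page.\nweb-1 test page.\n' | count_hits
```

With round-robin balancing and equal weights, the counts should come out nearly even across web-1, web-2, and web-3.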
Stop the MASTER's nginx on node-1 to test failover, and watch node-2's state change at the same time.
Run: nginx -s stop
Watching the VRRP advertisements shows a clean MASTER/BACKUP switchover:
[root@node-2 ~]# tcpdump -i ens33 -nn host 224.17.17.17
listening on ens33, link-type EN10MB (Ethernet), capture size 262144 bytes
16:55:20.804327 IP 192.168.214.148 > 224.17.17.17: VRRPv2, Advertisement, vrid 55, prio 100, authtype simple, intvl 1s, length 20
16:55:25.476397 IP 192.168.214.148 > 224.17.17.17: VRRPv2, Advertisement, vrid 55, prio 0, authtype simple, intvl 1s, length 20
16:55:26.128474 IP 192.168.214.143 > 224.17.17.17: VRRPv2, Advertisement, vrid 55, prio 90, authtype simple, intvl 1s, length 20
16:55:27.133349 IP 192.168.214.143 > 224.17.17.17: VRRPv2, Advertisement, vrid 55, prio 90, authtype simple, intvl 1s, length 20
Test access again:
[root@Ansible ~]# for i in {1..10};do curl 192.168.214.148;done
web-1 test page.
web-2 test page.
web-3 test page.
web-1 test page.
web-2 test page.
web-3 test page.
web-1 test page.
web-2 test page.
web-3 test page.
web-1 test page.
node-1 recovers and takes the MASTER role back
Start nginx again on node-1, and the VIP floats back to node-1. Test access:
[root@Ansible ~]# for i in {1..10};do curl 192.168.214.148;done
web-1 test page.
web-2 test page.
web-3 test page.
web-1 test page.
web-2 test page.
web-3 test page.
web-1 test page.
web-2 test page.
web-3 test page.
web-1 test page.
Other notes
The deployment above can still be improved. For example, many of the keepalived configuration parameters could be defined as variables in the role and then referenced from the template files.
One more caveat: the keepalived configuration uses the killall command to check the local nginx status, and if the check returns non-zero, the priority reduction defined in vrrp_script kicks in. Make sure the command actually exists on the system; it is sometimes not installed, and if it is missing, no failover will happen even when the MASTER's nginx fails.
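A quick guard for that caveat (on CentOS, killall ships in the psmisc package; the install command is left as a comment so the check itself has no side effects):

```shell
# Verify the health-check command used by vrrp_script is available.
if command -v killall >/dev/null 2>&1; then
  status="present"
else
  status="missing"   # on CentOS: yum install -y psmisc
fi
echo "killall is ${status}"
```

This check could also be pushed to both keepalived nodes as an Ansible ad-hoc command before starting keepalived.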