OS: CentOS Linux release 7.6.1810 (Core)

node1: 192.168.216.130  master

node2: 192.168.216.132  slave

node3: 192.168.216.136  haproxy

This is a test deployment only, so just one master and one replica are set up, which is fine for a test environment. For production, PostgreSQL should run with at least 1 primary and 2 replicas, etcd with 3 nodes, and the front end with 2 HAProxy + Keepalived nodes.

I. Install PostgreSQL on both nodes (PostgreSQL 9.5.19 is used as the example below)

  1. Add the PGDG RPM repository:
     yum install https://download.postgresql.org/pub/repos/yum/9.5/redhat/rhel-7-x86_64/pgdg-centos95-9.5-3.noarch.rpm
  2. Install PostgreSQL 9.5:
     yum install postgresql95-server postgresql95-contrib

  Note: for this walkthrough only steps 1 and 2 are needed; Patroni will initialize the database for us. Steps 3-6 are listed for reference only.

  3. Initialize the database:
     /usr/pgsql-9.5/bin/postgresql95-setup initdb
  4. Enable the service at boot:
     systemctl enable postgresql-9.5.service
  5. Start the service:
     systemctl start postgresql-9.5.service
  6. Check the installed version:
     psql --version

II. Install the etcd service

1. Here etcd is installed only on node1, as a single node for this experiment; it is not a distributed deployment. For a clustered setup, refer to a dedicated etcd cluster deployment guide.

  yum install etcd -y
  cp /etc/etcd/etcd.conf /etc/etcd/etcd.conf.bak
  cd /etc/etcd/

Edit etcd.conf so that the effective (non-comment) settings are:

  [root@localhost etcd]# egrep ^[A-Z] ./etcd.conf
  ETCD_DATA_DIR="/var/lib/etcd/node1.etcd"
  ETCD_LISTEN_PEER_URLS="http://192.168.216.130:2380"
  ETCD_LISTEN_CLIENT_URLS="http://192.168.216.130:2379,http://127.0.0.1:2379"
  ETCD_NAME="node1"
  ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.216.130:2380"
  ETCD_ADVERTISE_CLIENT_URLS="http://192.168.216.130:2379"
  ETCD_INITIAL_CLUSTER="node1=http://192.168.216.130:2380"
  ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
  ETCD_INITIAL_CLUSTER_STATE="new"

2. Save the file, then restart the etcd service:

  systemctl restart etcd

3. Check that the etcd service is running normally.
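A quick way to confirm etcd is healthy is with the v2 etcdctl tool that ships with the CentOS 7 etcd package (adjust the endpoint to your node1 address):

```shell
# Check overall cluster health via the client endpoint
etcdctl --endpoints http://192.168.216.130:2379 cluster-health

# List the cluster members
etcdctl --endpoints http://192.168.216.130:2379 member list

# The HTTP API should also respond with the etcd version
curl http://192.168.216.130:2379/version
```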

III. Install Patroni, on both node1 and node2

1. Install the dependencies Patroni needs; here Patroni is installed via pip:

  yum install gcc
  yum install python-devel.x86_64
  curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
  python get-pip.py
  pip install psycopg2-binary
  pip install --upgrade setuptools
  pip install patroni[etcd,consul]

2. Verify that Patroni was installed successfully.
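One way to verify the installation, assuming pip placed the entry points on the PATH:

```shell
# Both commands should print a version string if the install succeeded
patroni --version
patronictl version
```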

3. Configure Patroni; perform the following on node1.

  mkdir /data/patroni/conf -p
  cd /data/patroni/conf
  yum install git
  git clone https://github.com/zalando/patroni.git
  # git clone creates the directory /data/patroni/conf/patroni
  cd /data/patroni/conf/patroni
  # copy the sample config up into /data/patroni/conf
  cp postgres0.yml /data/patroni/conf/

4. Edit the postgres0.yml file on node1:

scope: batman
#namespace: /service/
name: postgresql0

restapi:
  listen: 192.168.216.130:8008
  connect_address: 192.168.216.130:8008
#  certfile: /etc/ssl/certs/ssl-cert-snakeoil.pem
#  keyfile: /etc/ssl/private/ssl-cert-snakeoil.key
#  authentication:
#    username: username
#    password: password

#ctl:
#  insecure: false  # Allow connections to SSL sites without certs
#  certfile: /etc/ssl/certs/ssl-cert-snakeoil.pem
#  cacert: /etc/ssl/certs/ssl-cacert-snakeoil.pem

etcd:
  host: 192.168.216.130:2379

bootstrap:
  # this section will be written into Etcd:/<namespace>/<scope>/config after initializing new cluster
  # and all other cluster members will use it as a `global configuration`
  dcs:
    ttl: 30
    loop_wait: 10
    retry_timeout: 10
    maximum_lag_on_failover: 1048576
#    master_start_timeout: 300
    synchronous_mode: false
    #standby_cluster:
    #  host: 127.0.0.1
    #  port: 1111
    #  primary_slot_name: patroni
    postgresql:
      use_pg_rewind: true
      use_slots: true
      parameters:
        wal_level: logical
        hot_standby: "on"
        wal_keep_segments: 1000
        max_wal_senders: 10
        max_replication_slots: 10
        wal_log_hints: "on"
        archive_mode: "on"
        archive_timeout: 1800s
        archive_command: mkdir -p ../wal_archive && test ! -f ../wal_archive/%f && cp %p ../wal_archive/%f
      recovery_conf:
        restore_command: cp ../wal_archive/%f %p

  # some desired options for 'initdb'
  initdb:  # Note: It needs to be a list (some options need values, others are switches)
  - encoding: UTF8
  - data-checksums

  pg_hba:  # Add following lines to pg_hba.conf after running 'initdb'
  # For kerberos gss based connectivity (discard @.*$)
  #- host replication replicator 127.0.0.1/32 gss include_realm=0
  #- host all all 0.0.0.0/0 gss include_realm=0
  - host replication replicator 0.0.0.0/0 md5
  - host all admin 0.0.0.0/0 md5
  - host all all 0.0.0.0/0 md5

  # Additional script to be launched after initial cluster creation (will be passed the connection URL as parameter)
#  post_init: /usr/local/bin/setup_cluster.sh

  # Some additional users which need to be created after initializing new cluster
  users:
    admin:
      password: postgres
      options:
        - createrole
        - createdb
    replicator:
      password: replicator
      options:
        - replication

postgresql:
  listen: 0.0.0.0:5432
  connect_address: 192.168.216.130:5432
  data_dir: /data/postgres
  bin_dir: /usr/pgsql-9.5/bin/
#  config_dir:
#  pgpass: /tmp/pgpass0
  authentication:
    replication:
      username: replicator
      password: replicator
    superuser:
      username: admin
      password: postgres
#    rewind:  # Has no effect on postgres 10 and lower
#      username: rewind_user
#      password: rewind_password
  # Server side kerberos spn
#  krbsrvname: postgres
  parameters:
    # Fully qualified kerberos ticket file for the running user
    # same as KRB5CCNAME used by the GSS
#    krb_server_keyfile: /var/spool/keytabs/postgres
    unix_socket_directories: '.'

#watchdog:
#  mode: automatic  # Allowed values: off, automatic, required
#  device: /dev/watchdog
#  safety_margin: 5

tags:
    nofailover: false
    noloadbalance: false
    clonefrom: false
    nosync: false

5. Configure Patroni; perform the following on node2.

  mkdir /data/patroni/conf -p
  cd /data/patroni/conf
  yum install git
  git clone https://github.com/zalando/patroni.git
  # git clone creates the directory /data/patroni/conf/patroni
  cd /data/patroni/conf/patroni
  # copy the sample config up into /data/patroni/conf
  cp postgres1.yml /data/patroni/conf/

6. Edit the postgres1.yml file on node2:

scope: batman
#namespace: /service/
name: postgresql1

restapi:
  listen: 192.168.216.132:8008
  connect_address: 192.168.216.132:8008
#  certfile: /etc/ssl/certs/ssl-cert-snakeoil.pem
#  keyfile: /etc/ssl/private/ssl-cert-snakeoil.key
#  authentication:
#    username: username
#    password: password

#ctl:
#  insecure: false  # Allow connections to SSL sites without certs
#  certfile: /etc/ssl/certs/ssl-cert-snakeoil.pem
#  cacert: /etc/ssl/certs/ssl-cacert-snakeoil.pem

etcd:
  host: 192.168.216.130:2379

bootstrap:
  # this section will be written into Etcd:/<namespace>/<scope>/config after initializing new cluster
  # and all other cluster members will use it as a `global configuration`
  dcs:
    ttl: 30
    loop_wait: 10
    retry_timeout: 10
    maximum_lag_on_failover: 1048576
#    master_start_timeout: 300
    synchronous_mode: false
    #standby_cluster:
    #  host: 127.0.0.1
    #  port: 1111
    #  primary_slot_name: patroni
    postgresql:
      use_pg_rewind: true
      use_slots: true
      parameters:
        wal_level: logical
        hot_standby: "on"
        wal_keep_segments: 1000
        max_wal_senders: 10
        max_replication_slots: 10
        wal_log_hints: "on"
        archive_mode: "on"
        archive_timeout: 1800s
        archive_command: mkdir -p ../wal_archive && test ! -f ../wal_archive/%f && cp %p ../wal_archive/%f
      recovery_conf:
        restore_command: cp ../wal_archive/%f %p

  # some desired options for 'initdb'
  initdb:  # Note: It needs to be a list (some options need values, others are switches)
  - encoding: UTF8
  - data-checksums

  pg_hba:  # Add following lines to pg_hba.conf after running 'initdb'
  # For kerberos gss based connectivity (discard @.*$)
  #- host replication replicator 127.0.0.1/32 gss include_realm=0
  #- host all all 0.0.0.0/0 gss include_realm=0
  - host replication replicator 0.0.0.0/0 md5
  - host all admin 0.0.0.0/0 md5
  - host all all 0.0.0.0/0 md5

  # Additional script to be launched after initial cluster creation (will be passed the connection URL as parameter)
#  post_init: /usr/local/bin/setup_cluster.sh

  # Some additional users which need to be created after initializing new cluster
  users:
    admin:
      password: postgres
      options:
        - createrole
        - createdb
    replicator:
      password: replicator
      options:
        - replication

postgresql:
  listen: 0.0.0.0:5432
  connect_address: 192.168.216.132:5432
  data_dir: /data/postgres
  bin_dir: /usr/pgsql-9.5/bin/
#  config_dir:
#  pgpass: /tmp/pgpass0
  authentication:
    replication:
      username: replicator
      password: replicator
    superuser:
      username: admin
      password: postgres
#    rewind:  # Has no effect on postgres 10 and lower
#      username: rewind_user
#      password: rewind_password
  # Server side kerberos spn
#  krbsrvname: postgres
  parameters:
    # Fully qualified kerberos ticket file for the running user
    # same as KRB5CCNAME used by the GSS
#    krb_server_keyfile: /var/spool/keytabs/postgres
    unix_socket_directories: '.'

#watchdog:
#  mode: automatic  # Allowed values: off, automatic, required
#  device: /dev/watchdog
#  safety_margin: 5

tags:
    nofailover: false
    noloadbalance: false
    clonefrom: false
    nosync: false

7. Note the value of data_dir in the yml files above. The postgres user must have write permission on that directory; if it does not exist, create it. Run the following on both node1 and node2:

  mkdir /data/postgres -p
  chown -Rf postgres:postgres /data/postgres
  chmod 700 /data/postgres

8. On node1, switch to the postgres user and start the Patroni service. Patroni will initialize the database for us and create the corresponding roles automatically.

  chown -Rf postgres:postgres /data/patroni/conf
  su - postgres
  # start the patroni service
  patroni /data/patroni/conf/postgres0.yml

If the service starts correctly, Patroni logs its startup progress to the terminal, including bootstrapping the database and acquiring the leader lock.

Because the service is running in the foreground, open a second terminal, switch to the postgres user, and run psql -h 127.0.0.1 -U admin postgres to connect to the database and verify that Patroni is managing PostgreSQL correctly.

9. On node2, switch to the postgres user and start the Patroni service, same as on node1:

  chown -Rf postgres:postgres /data/patroni/conf
  su - postgres
  # start the patroni service
  patroni /data/patroni/conf/postgres1.yml

If the service starts correctly, the log shows node2 bootstrapping from the leader and joining the cluster as a replica.

10. Check the cluster status: patronictl -c /data/patroni/conf/postgres0.yml list

11. Manually switch over the master: patronictl -c /data/patroni/conf/postgres0.yml switchover

12. To keep the Patroni service running without an attached terminal, you can start it in the background, or configure it as a systemd service so it starts automatically on boot.

node1:

  nohup patroni /data/patroni/conf/postgres0.yml > /data/patroni/patroni_log 2>&1 &

node2:

  nohup patroni /data/patroni/conf/postgres1.yml > /data/patroni/patroni_log 2>&1 &
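As an alternative to nohup, a minimal systemd unit could look like the following sketch. This unit is an assumption, not something shipped by the patroni pip package; the paths match this walkthrough, and on node2 you would substitute postgres1.yml (check the actual patroni path with `which patroni`):

```ini
# /etc/systemd/system/patroni.service  (hypothetical unit for this setup)
[Unit]
Description=Patroni PostgreSQL HA manager
After=network.target etcd.service

[Service]
Type=simple
User=postgres
Group=postgres
ExecStart=/usr/bin/patroni /data/patroni/conf/postgres0.yml
KillMode=process
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

After creating the file, run `systemctl daemon-reload` followed by `systemctl enable --now patroni`.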

IV. Install HAProxy on node3

  yum install -y haproxy
  cp -r /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg_bak

Edit the haproxy.cfg configuration file:

# vi /etc/haproxy/haproxy.cfg

#---------------------------------------------------------------------
# Global definitions
global
    # log syntax: log <address> <facility> [max_level_1]
    # Global log configuration: send logs to the syslog service on 127.0.0.1,
    # using the local0 device at level info
    # log 127.0.0.1 local0 info
    log         127.0.0.1 local1 notice
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid

    # Maximum number of connections per haproxy process. Since each connection
    # involves both a client side and a server side, the maximum number of TCP
    # sessions for a single process will be twice this value.
    maxconn     4096

    # user and group
    user        haproxy
    group       haproxy

    # run as a daemon
    daemon

    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats

#---------------------------------------------------------------------
# Defaults section
defaults
    # mode syntax: mode {http|tcp|health}. http is layer-7 mode, tcp is
    # layer-4 mode, health is a health check that simply returns OK
    mode        tcp
    # log errors to the local3 device of the syslog service on 127.0.0.1
    log         127.0.0.1 local3 err

    # if you set mode to http, then you must change tcplog into httplog
    option      tcplog

    # Do not log null connections. A "null connection" is the periodic probe an
    # upstream load balancer or monitoring system makes just to check whether
    # the service is alive or a port is listening. The documentation notes that
    # if there is no other load balancer in front of this service, it is better
    # not to enable this option, since malicious scans from the internet would
    # then go unlogged.
    option      dontlognull

    # Number of retries when connecting to a backend server fails; once this is
    # exceeded, the backend server is marked unavailable
    retries     3

    # When cookies are used, haproxy inserts the serverID of the backend that
    # handled the request into the cookie to keep the session sticky. If that
    # backend goes down, the client's cookie is not refreshed; with this option
    # set, the client's request is redirected to another backend server so that
    # service continues normally.
    option      redispatch

    # Maximum queueing time. When a server's maxconn is reached, connections are
    # left pending in a queue which may be server-specific or global to the backend.
    timeout queue 1m

    # Maximum time to wait for a connection to a server to succeed
    timeout connect 10s

    # Client-side inactivity timeout. The inactivity timeout applies when the
    # client is expected to acknowledge or send data.
    timeout client 1m

    # Set the maximum inactivity time on the server side. The inactivity timeout
    # applies when the server is expected to acknowledge or send data.
    timeout server 1m
    timeout check 5s
    maxconn 5120

#---------------------------------------------------------------------
# haproxy web monitoring / statistics page
listen status
    bind 0.0.0.0:1080
    mode http
    log global

    stats enable
    # refresh interval of the statistics page
    stats refresh 30s
    stats uri /haproxy-stats
    # realm string shown when the statistics page asks for authentication
    stats realm Private\ lands
    # user and password for the statistics page; to add more, put one per line
    stats auth admin:passw0rd
    # hide the haproxy version on the statistics page
    # stats hide-version

#---------------------------------------------------------------------
listen master
    bind *:5000
    mode tcp
    option tcplog
    balance roundrobin
    option httpchk OPTIONS /master
    http-check expect status 200
    default-server inter 3s fall 3 rise 2 on-marked-down shutdown-sessions
    server node1 192.168.216.130:5432 maxconn 1000 check port 8008 inter 5000 rise 2 fall 2
    server node2 192.168.216.132:5432 maxconn 1000 check port 8008 inter 5000 rise 2 fall 2

listen replicas
    bind *:5001
    mode tcp
    option tcplog
    balance roundrobin
    option httpchk OPTIONS /replica
    http-check expect status 200
    default-server inter 3s fall 3 rise 2 on-marked-down shutdown-sessions
    server node1 192.168.216.130:5432 maxconn 1000 check port 8008 inter 5000 rise 2 fall 2
    server node2 192.168.216.132:5432 maxconn 1000 check port 8008 inter 5000 rise 2 fall 2
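Before starting the service, the configuration syntax can be validated with HAProxy's built-in check mode:

```shell
# -c checks the configuration file and exits; -f names the file to check
haproxy -c -f /etc/haproxy/haproxy.cfg
```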

Start the haproxy service:

  systemctl start haproxy
  systemctl status haproxy

Open http://192.168.216.136:1080/haproxy-stats in a browser and log in with user admin and password passw0rd.

Port 5000 provides the write service (it always routes to the current master) and port 5001 provides the read service. An application that writes to the database only needs to be given 192.168.216.136:5000. You can simulate a primary failure by shutting down the current master node and verifying that an automatic failover takes place.
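A simple failover drill might look like the following sketch, assuming the layout above and a psql client available on the HAProxy host:

```shell
# Writes go through port 5000, which should land on the master
# (pg_is_in_recovery() returns f on a primary)
psql -h 192.168.216.136 -p 5000 -U admin postgres -c 'SELECT pg_is_in_recovery();'

# Reads go through port 5001, which should land on a replica
# (pg_is_in_recovery() returns t on a standby)
psql -h 192.168.216.136 -p 5001 -U admin postgres -c 'SELECT pg_is_in_recovery();'

# Now stop patroni on the current master (or power the node off),
# then watch the roles change:
patronictl -c /data/patroni/conf/postgres0.yml list
```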

References:

https://www.linode.com/docs/databases/postgresql/create-a-highly-available-postgresql-cluster-using-patroni-and-haproxy/#configure-etcd

https://www.opsdash.com/blog/postgres-getting-started-patroni.html
