Architecture Design and Required Configuration

Host Environment

IP                 Hostname       Role
192.168.192.128    node_master    MySQL-Master | MHA-Node
192.168.192.129    node_slave     MySQL-Slave  | MHA-Node (candidate master)
192.168.192.130    manager_slave  MySQL-Slave  | MHA-Manager

To save machines, the read-only slave 192.168.192.129 (this slave does not serve reads to applications) doubles as the candidate master, i.e. the node promoted when the master fails; it could also be dedicated to backups.
Likewise, to save machines, the slave 192.168.192.130 doubles as the manager server (in production, with enough machines available, a dedicated machine is normally used as the MHA Manager server).

Tip:
Quickly change the hostnames (mine needed to be changed):
[root@master1 ~]# hostnamectl set-hostname node_master
[root@node1 ~]# hostnamectl set-hostname node_slave
[root@node2 ~]# hostnamectl set-hostname manager_slave

Required Configuration

1. Add entries to each machine's hosts file so the hosts can reach each other by hostname (do this on all 3 machines; strictly speaking this step is optional)
[root@node_master ~]# vim /etc/hosts
..............
192.168.192.128 node_master
192.168.192.129 node_slave
192.168.192.130 manager_slave
Alternatively, edit the file on one machine and copy it to the others with scp (provided the other machines have no hosts entries of their own)
[root@node_master ~]# scp /etc/hosts root@192.168.192.129:/etc/hosts
[root@node_master ~]# scp /etc/hosts root@192.168.192.130:/etc/hosts

2. Set up passwordless SSH between the servers (do this on all 3 machines; this step is mandatory and critically important)
[root@node_master ~]# ssh-keygen -t rsa -P "" -f /root/.ssh/id_rsa
[root@node_master ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub "root@192.168.192.128"
[root@node_master ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub "root@192.168.192.129"
[root@node_master ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub "root@192.168.192.130"

Test passwordless login (normally you get a shell straight away, without being asked for a password)
[root@node_master ~]# ssh 192.168.192.128
Last login: Tue Dec 18 15:56:41 2018 from 192.168.192.1
[root@node_master ~]#

[root@node_master ~]# ssh 192.168.192.129
Last login: Tue Dec 18 16:03:56 2018 from 192.168.192.128
[root@node_slave ~]#

[root@node_master ~]# ssh 192.168.192.130
Last login: Tue Dec 18 16:04:02 2018 from 192.168.192.128
[root@manager_slave ~]#
To be absolutely safe, run the same passwordless-login test from the other two machines as well (the sketch below checks all hops in one pass).
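
A minimal verification sketch, assuming the three IPs above and root-to-root SSH (the script name and loop are illustrative, not part of MHA). BatchMode makes ssh fail instead of prompting, so any hop that still wants a password shows up as FAIL:

#!/bin/bash
# check_ssh_trust.sh - run on each of the three nodes in turn
for host in 192.168.192.128 192.168.192.129 192.168.192.130; do
    # BatchMode=yes: never prompt for a password, fail instead
    if ssh -o BatchMode=yes -o ConnectTimeout=5 root@$host hostname >/dev/null 2>&1; then
        echo "OK:   root@$host"
    else
        echo "FAIL: root@$host (password still required?)"
    fi
done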

Installing MySQL and Setting Up Replication

Replication Environment Design and Installation (MySQL 5.7)

Environment design (one master, two slaves):
192.168.192.128 MySQL-Master (master)
192.168.192.129 MySQL-Slave  (slave)
192.168.192.130 MySQL-Slave  (slave)
Installation guide: https://www.cnblogs.com/brianzhu/p/8575243.html (installed via yum)

Configuring Replication

1. Master and slave my.cnf settings
Change the master and slave configuration files (for a yum-installed MySQL the configuration file is /etc/my.cnf) as follows

------ Master (192.168.192.128) configuration ------
server-id=1 #unique server ID; master and slave IDs must never collide
log-bin=mysql-bin #enable the binary log and set its directory and file-name prefix
binlog-ignore-db=mysql #do not log changes to the mysql system database. For several ignored databases, repeat this line per database, or list them on one line separated by commas
sync_binlog=1 #make sure the binlog is synced to disk after it is written
binlog_checksum=crc32 #checksum for binlog events; from MySQL 5.6.5 onwards the default is binlog_checksum=crc32, while older versions use binlog_checksum=none
binlog_format=mixed #binary log format; MIXED helps prevent duplicate primary keys
validate_password_policy=0 #relax the password policy
validate_password = off #disable password validation

Save the configuration and restart the MySQL service
[root@node_master ~]# systemctl restart mysqld

------ Slave 1 (192.168.192.129) configuration ------
server-id=2 #unique server ID; master and slave IDs must never collide
log-bin=mysql-bin #enable the binary log and set its directory and file-name prefix
binlog-ignore-db=mysql #do not log changes to the mysql system database (important: the replication filter settings must be identical on master and slaves, otherwise masterha_check_repl will fail later!)
slave-skip-errors=all #skip all replication errors and keep replicating
validate_password_policy=0 #relax the password policy
validate_password = off #disable password validation

Save the configuration and restart the MySQL service
[root@node_slave ~]# systemctl restart mysqld

------ Slave 2 (192.168.192.130) configuration ------
server-id=3 #unique server ID; master and slave IDs must never collide
log-bin=mysql-bin #enable the binary log and set its directory and file-name prefix
binlog-ignore-db=mysql #do not log changes to the mysql system database (important: the replication filter settings must be identical on master and slaves, otherwise masterha_check_repl will fail later!)
slave-skip-errors=all #skip all replication errors and keep replicating
validate_password_policy=0 #relax the password policy
validate_password = off #disable password validation

Save the configuration and restart the MySQL service
[root@manager_slave ~]# systemctl restart mysqld

Note:
If the binlog-ignore-db or replicate-ignore-db filter rules are set, they must be the same on master and slaves: either every node uses binlog-ignore-db, or every node uses replicate-ignore-db. Never let master and slaves use different filter settings, because MHA checks the filter rules at startup, and if they differ it refuses to start monitoring and failover.

2. Create the MHA management account (run on all three nodes)
mysql> grant super,reload,replication client,select on *.* to manager@'192.168.192.%' identified by 'Manager_1234';
Query OK, 0 rows affected, 1 warning (0.00 sec)

mysql> grant create,insert,update,delete,drop on *.* to manager@'192.168.192.%';
Query OK, 0 rows affected (0.00 sec)

3. Create the replication account (run on all three nodes)
mysql> grant reload,super,replication slave on *.* to 'slave'@'192.168.192.%' identified by 'Slave_1234';
Query OK, 0 rows affected, 1 warning (0.00 sec)

mysql> flush privileges;
Query OK, 0 rows affected (0.01 sec)

4. Configure replication
On the master (192.168.192.128), run
mysql> show master status;
+------------------+----------+--------------+------------------+-------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+------------------+----------+--------------+------------------+-------------------+
| mysql-bin.000001 |     1169 |              | mysql            |                   |
+------------------+----------+--------------+------------------+-------------------+
1 row in set (0.00 sec)

On both slaves (192.168.192.129 and 192.168.192.130), run the following
Stop the slave threads before configuring
mysql> stop slave;
Query OK, 0 rows affected, 1 warning (0.00 sec)

Point the slave at the master
mysql> change master to master_host='192.168.192.128',master_port=3306,master_user='slave',master_password='Slave_1234',master_log_file='mysql-bin.000001',master_log_pos=1169;
Query OK, 0 rows affected, 2 warnings (0.01 sec)

Start replication
mysql> start slave;
Query OK, 0 rows affected (0.00 sec)

Check the replication status (when both Slave_IO_Running and Slave_SQL_Running are Yes, replication is configured correctly)
mysql> show slave status\G;
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.192.128
Master_User: slave
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000001
Read_Master_Log_Pos: 1169
Relay_Log_File: node_slave-relay-bin.000002
Relay_Log_Pos: 320
Relay_Master_Log_File: mysql-bin.000001
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
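
To check both slaves at once, here is a small loop (a sketch; it assumes the manager account created in step 2 is reachable from this host):
# run from any of the three nodes
for host in 192.168.192.129 192.168.192.130; do
    echo "== $host =="
    mysql -h $host -umanager -pManager_1234 -e "show slave status\G" \
        | grep -E 'Slave_IO_Running|Slave_SQL_Running|Seconds_Behind_Master'
done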

For the underlying replication principles, see: https://www.cnblogs.com/brianzhu/p/10153802.html

For the detailed replication deployment guide, see: https://www.cnblogs.com/brianzhu/p/10154446.html

Installing and Configuring MHA

MHA Download Links

----------------------------------------------MHA download links----------------------------------------------
MHA consists of a manager node and data nodes, where:
the data nodes are the hosts of the existing MySQL replication topology, at least 3 of them (one master, two slaves), so that the master/slave structure survives a master failover; they only need the node package.
the manager server runs the monitoring scripts and is responsible for monitoring and auto-failover; it needs both the node package and the manager package.

Download links:
MHA Node: https://coding.net/u/brian_zhu/p/mha4/git/raw/master/mha4mysql-node-0.58.tar.gz
MHA Manager: https://coding.net/u/brian_zhu/p/mha4/git/raw/master/mha4mysql-manager-0.58.tar.gz

Installing MHA

1. Install MHA Node
----------------------------------------------Installing MHA Node----------------------------------------------

Install MHA Node on all data nodes (all three machines need MHA Node)

First install the required perl modules
[root@node_master ~]# yum -y install perl perl-DBD-MySQL perl-ExtUtils-CBuilder perl-ExtUtils-MakeMaker perl-CPAN

Download and unpack
[root@node_master ~]# wget https://coding.net/u/brian_zhu/p/mha4/git/raw/master/mha4mysql-node-0.58.tar.gz
[root@node_master ~]# tar zxf mha4mysql-node-0.58.tar.gz
[root@node_master ~]# cd mha4mysql-node-0.58

Build and install
[root@node_master mha4mysql-node-0.58]# perl Makefile.PL
[root@node_master mha4mysql-node-0.58]# make && make install

2. Install MHA Manager
----------------------------------------------Installing MHA Manager----------------------------------------------

Install MHA Manager on the manager node (192.168.192.130) (note that the manager node also needs MHA Node installed)

Install epel-release
[root@manager_slave ~]# yum -y install epel-release

Install the perl/mysql dependencies
[root@manager_slave ~]# yum install -y perl-DBD-MySQL perl-Config-Tiny perl-Log-Dispatch perl-Parallel-ForkManager perl-Config-IniFiles perl-Time-HiRes

Download and unpack
[root@manager_slave ~]# wget https://coding.net/u/brian_zhu/p/mha4/git/raw/master/mha4mysql-manager-0.58.tar.gz
[root@manager_slave ~]# tar zxf mha4mysql-manager-0.58.tar.gz

Build and install
[root@manager_slave ~]# cd mha4mysql-manager-0.58/
[root@manager_slave mha4mysql-manager-0.58]# perl Makefile.PL
[root@manager_slave mha4mysql-manager-0.58]# make && make install

After MHA Manager is installed, the following scripts appear in /usr/local/bin
[root@manager_slave mha4mysql-manager-0.58]# ll /usr/local/bin/
total 39060
-r-xr-xr-x 1 root root 17639 Dec 18 16:56 apply_diff_relay_logs
-rwxr-xr-x 1 root root 11739376 Oct 18 09:41 docker-compose
-rwxr-xr-x 1 root root 28160480 Oct 23 16:42 docker-machine
-r-xr-xr-x 1 root root 4807 Dec 18 16:56 filter_mysqlbinlog
-r-xr-xr-x 1 root root 1995 Dec 18 17:00 masterha_check_repl
-r-xr-xr-x 1 root root 1779 Dec 18 17:00 masterha_check_ssh
-r-xr-xr-x 1 root root 1865 Dec 18 17:00 masterha_check_status
-r-xr-xr-x 1 root root 3201 Dec 18 17:00 masterha_conf_host
-r-xr-xr-x 1 root root 2517 Dec 18 17:00 masterha_manager
-r-xr-xr-x 1 root root 2165 Dec 18 17:00 masterha_master_monitor
-r-xr-xr-x 1 root root 2373 Dec 18 17:00 masterha_master_switch
-r-xr-xr-x 1 root root 5172 Dec 18 17:00 masterha_secondary_check
-r-xr-xr-x 1 root root 1739 Dec 18 17:00 masterha_stop
-r-xr-xr-x 1 root root 8337 Dec 18 16:56 purge_relay_logs
-r-xr-xr-x 1 root root 7525 Dec 18 16:56 save_binary_logs

Where:
masterha_check_repl        checks the MySQL replication status
masterha_check_ssh         checks MHA's SSH configuration
masterha_check_status      checks the current MHA running state
masterha_conf_host         adds or removes configured server entries
masterha_manager           starts MHA
masterha_stop              stops MHA
masterha_master_monitor    detects whether the master is down
masterha_master_switch     controls failover (automatic or manual)
masterha_secondary_check   checks master liveness over multiple routes

In addition:
../mha4mysql-manager-0.58/samples/scripts/ contains the following scripts, which need to be copied to /usr/local/bin

[root@manager_slave mha4mysql-manager-0.58]# ll ../mha4mysql-manager-0.58/samples/scripts/
total 32
-rwxr-xr-x 1 1000 1000 3648 Mar 23 2018 master_ip_failover #VIP management script for automatic failover; not required. With keepalived you can manage the vip yourself, e.g. monitor mysql and stop keepalived when mysql fails, so the vip floats away automatically
-rwxr-xr-x 1 1000 1000 9870 Mar 23 2018 master_ip_online_change #VIP script for online switchover; not required, a simple shell script can do the same
-rwxr-xr-x 1 1000 1000 11867 Mar 23 2018 power_manager #script that powers off the master after a failure; not required
-rwxr-xr-x 1 1000 1000 1360 Mar 23 2018 send_report #script that sends an alert after a failover; not required, a simple shell script can do the same

[root@manager_slave mha4mysql-manager-0.58]# cp ../mha4mysql-manager-0.58/samples/scripts/* /usr/local/bin/
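
As a quick sanity check that the node and manager tools landed on the PATH (paths assume the default make install prefix used above):
[root@manager_slave ~]# which save_binary_logs apply_diff_relay_logs masterha_manager masterha_check_repl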

Configuring MHA

1. MHA Manager configuration (the MHA configuration file)
----------------------------------------------MHA Manager configuration----------------------------------------------
On the management node (192.168.192.130), do the following
[root@manager_slave mha4mysql-manager-0.58]# mkdir -p /etc/masterha
[root@manager_slave mha4mysql-manager-0.58]# cp samples/conf/app1.cnf /etc/masterha/
[root@manager_slave mha4mysql-manager-0.58]# vim /etc/masterha/app1.cnf
[server default]
manager_workdir=/var/log/masterha/app1 #the manager's working directory
manager_log=/var/log/masterha/app1/manager.log #the manager's log file

ssh_user=root #account used for passwordless SSH logins
user=manager #the manager user
password=Manager_1234 #password of the manager user
repl_user=slave #mysql replication account, used to sync binary logs between master and slaves
repl_password=Slave_1234 #password of the replication account created earlier
ping_interval=1 #interval in seconds between ping probes of the master to check whether it is alive; the default is 3 seconds, and failover is triggered automatically once three probes in a row get no response
master_ip_failover_script= /usr/local/bin/master_ip_failover #switchover script used for automatic failover
master_ip_online_change_script= /usr/local/bin/master_ip_online_change #switchover script used for manual (online) switchover

[server1]
hostname=192.168.192.128
port=3306
master_binlog_dir=/var/lib/mysql/ #where the master keeps its binlogs, so MHA can find them; here this is simply the mysql data directory

[server2]
hostname=192.168.192.129
port=3306
candidate_master=1 #mark this host as the candidate master: when the master dies, it is promoted first, even if it is not the slave with the newest events in the cluster
check_repl_delay=0 #by default MHA will not choose a slave as the new master if it is more than 100MB of relay logs behind, because recovering it would take a long time; with check_repl_delay=0 MHA ignores replication delay when choosing the new master. This parameter is very useful together with candidate_master=1, since the candidate must become the new master during a switchover
master_binlog_dir=/var/lib/mysql/

[server3]
hostname=192.168.192.130
port=3306
#candidate_master=1
master_binlog_dir=/var/lib/mysql/

#[server4]
#hostname=host4
#no_master=1

2. Set the relay log purge policy (on both slave nodes)
Tip:
During a switchover, MHA relies on the relay logs when recovering the other slaves, so automatic relay log purging must be turned OFF and the relay logs purged manually instead.
By default, a slave deletes its relay logs once the SQL thread has executed them. In an MHA environment, however, those relay logs may be needed to recover other slaves, so the automatic deletion must be disabled. Purging relay logs periodically then has to take replication delay into account: on an ext3 filesystem, deleting a large file takes considerable time and can cause serious replication delay. To avoid that delay, a hard link to the relay log is created first, because on Linux deleting a large file through a hard link is fast. (The same hard-link trick is commonly used when dropping large tables in MySQL.)

The MHA node package ships the purge_relay_logs tool. It creates hard links for the relay logs, executes SET GLOBAL relay_log_purge=1, waits a few seconds so the SQL thread can switch to a new relay log, and then executes SET GLOBAL relay_log_purge=0.

Parameters of the purge_relay_logs script:
--user                     mysql user name
--password                 mysql password
--port                     port number
--workdir                  where to create the hard links of the relay logs; the default is /var/tmp. Creating a hard link across filesystems fails, so point this at a location on the same partition as the relay logs. After the script succeeds, the hard-linked relay log files are deleted
--disable_relay_log_purge  by default the script does nothing and exits if relay_log_purge=1; with this flag it sets relay_log_purge to 0, purges the relay logs, and finally sets the parameter back to OFF

Disable automatic relay log purging (on both slave nodes)
[root@node_slave ~]# mysql -uroot -p12345 -e 'set global relay_log_purge=0'
[root@manager_slave ~]# mysql -uroot -p12345 -e 'set global relay_log_purge=0'

To purge the relay logs regularly, set up a periodic purge script (on both slave nodes)
[root@node_slave ~]# vim /root/purge_relay_log.sh
#!/bin/bash
user=root
passwd=12345
port=3306
host=localhost
log_dir='/data/masterha/log'
work_dir='/data'
purge='/usr/local/bin/purge_relay_logs'

if [ ! -d $log_dir ]
then
    mkdir -p $log_dir
fi

$purge --user=$user --password=$passwd --host=$host --disable_relay_log_purge --port=$port --workdir=$work_dir >> $log_dir/purge_relay_logs.log 2>&1

Make the script executable
[root@node_slave ~]# chmod 755 /root/purge_relay_log.sh

Add it to crontab for periodic execution
[root@node_slave ~]# crontab -e
0 5 * * * /bin/bash /root/purge_relay_log.sh

Test the script
The purge_relay_logs script deletes relay logs without blocking the SQL thread
Let's run purge_relay_logs by hand and look at the details
[root@node_slave ~]# /usr/local/bin/purge_relay_logs --user=root --host=localhost --password=12345 --disable_relay_log_purge --port=3306 --workdir=/data
2018-12-18 17:48:25: purge_relay_logs script started.
Found relay_log.info: /var/lib/mysql/relay-log.info
Opening /var/lib/mysql/node_slave-relay-bin.000001 ..
Opening /var/lib/mysql/node_slave-relay-bin.000002 ..
Executing SET GLOBAL relay_log_purge=1; FLUSH LOGS; sleeping a few seconds so that SQL thread can delete older relay log files (if it keeps up); SET GLOBAL relay_log_purge=0; .. ok.
2018-12-18 17:48:28: All relay log purging operations succeeded.

Now run the wrapper script
[root@node_slave ~]# sh purge_relay_log.sh

It produced a log file
[root@node_slave ~]# ll /data/masterha/log/
total 4
-rw-r--r-- 1 root root 234 Dec 18 17:49 purge_relay_logs.log
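
To confirm the setting stuck, a quick check on each slave (run locally, credentials as above); the value should come back OFF:
[root@node_slave ~]# mysql -uroot -p12345 -e "show variables like 'relay_log_purge'"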

Testing MHA

Check each part of the MHA cluster
1. Check SSH within the MHA cluster
------------------------------Checking the SSH configuration------------------------------
Check the SSH configuration (run on the manager node, i.e. 192.168.192.130)
Check SSH connectivity from the MHA Manager to all MHA Nodes:
[root@manager_slave ~]# masterha_check_ssh --conf=/etc/masterha/app1.cnf
Tue Dec 18 17:53:34 2018 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Tue Dec 18 17:53:34 2018 - [info] Reading application default configuration from /etc/masterha/app1.cnf..
Tue Dec 18 17:53:34 2018 - [info] Reading server configuration from /etc/masterha/app1.cnf..
Tue Dec 18 17:53:34 2018 - [info] Starting SSH connection tests..
Tue Dec 18 17:53:35 2018 - [debug]
Tue Dec 18 17:53:34 2018 - [debug] Connecting via SSH from root@192.168.192.128(192.168.192.128:22) to root@192.168.192.129(192.168.192.129:22)..
Tue Dec 18 17:53:34 2018 - [debug] ok.
Tue Dec 18 17:53:34 2018 - [debug] Connecting via SSH from root@192.168.192.128(192.168.192.128:22) to root@192.168.192.130(192.168.192.130:22)..
Tue Dec 18 17:53:35 2018 - [debug] ok.
Tue Dec 18 17:53:36 2018 - [debug]
Tue Dec 18 17:53:35 2018 - [debug] Connecting via SSH from root@192.168.192.129(192.168.192.129:22) to root@192.168.192.128(192.168.192.128:22)..
Tue Dec 18 17:53:35 2018 - [debug] ok.
Tue Dec 18 17:53:35 2018 - [debug] Connecting via SSH from root@192.168.192.129(192.168.192.129:22) to root@192.168.192.130(192.168.192.130:22)..
Tue Dec 18 17:53:35 2018 - [debug] ok.
Tue Dec 18 17:53:36 2018 - [debug]
Tue Dec 18 17:53:35 2018 - [debug] Connecting via SSH from root@192.168.192.130(192.168.192.130:22) to root@192.168.192.128(192.168.192.128:22)..
Tue Dec 18 17:53:35 2018 - [debug] ok.
Tue Dec 18 17:53:35 2018 - [debug] Connecting via SSH from root@192.168.192.130(192.168.192.130:22) to root@192.168.192.129(192.168.192.129:22)..
Tue Dec 18 17:53:36 2018 - [debug] ok.
Tue Dec 18 17:53:36 2018 - [info] All SSH connection tests passed successfully.
Output like the above means the SSH configuration is correct

2. Check the MySQL replication status
------------------------------Checking the MySQL replication status------------------------------
Check the replication environment with the mha check tool (run on the manager node, i.e. 192.168.192.130)
[root@manager_slave ~]# masterha_check_repl --conf=/etc/masterha/app1.cnf

A small hiccup occurred here
Running the check produced the following error
......................
Bareword "FIXME_xxx" not allowed while "strict subs" in use at /usr/local/bin/master_ip_failover line 93.
Execution of /usr/local/bin/master_ip_failover aborted due to compilation errors.
Tue Dec 18 18:28:28 2018 - [error][/usr/local/share/perl5/MHA/MasterMonitor.pm, ln229] Failed to get master_ip_failover_script status with return code 255:0.
Tue Dec 18 18:28:28 2018 - [error][/usr/local/share/perl5/MHA/MasterMonitor.pm, ln427] Error happened on checking configurations. at /usr/local/bin/masterha_check_repl line 48.
Tue Dec 18 18:28:28 2018 - [error][/usr/local/share/perl5/MHA/MasterMonitor.pm, ln525] Error happened on monitoring servers.
Tue Dec 18 18:28:28 2018 - [info] Got exit code 1 (Not master dead).

The reason for the error:
Failover can be handled in two ways: with a virtual IP address or with a global configuration file. MHA does not impose either one; the user chooses. The virtual IP approach involves extra software, typically keepalived, and also requires editing the master_ip_failover script.

Workaround:
For now, comment out the master_ip_failover_script= /usr/local/bin/master_ip_failover option in /etc/masterha/app1.cnf on the management node.
It will be re-enabled later, once keepalived is introduced and the script has been modified.
[root@manager_slave /]# cat /etc/masterha/app1.cnf | grep master_ip_failover_script
#master_ip_failover_script= /usr/local/bin/master_ip_failover

Finally, check the replication status of the whole mysql cluster with masterha_check_repl
[root@manager_slave /]# masterha_check_repl --conf=/etc/masterha/app1.cnf
Tue Dec 18 18:32:43 2018 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Tue Dec 18 18:32:43 2018 - [info] Reading application default configuration from /etc/masterha/app1.cnf..
Tue Dec 18 18:32:43 2018 - [info] Reading server configuration from /etc/masterha/app1.cnf..
Tue Dec 18 18:32:43 2018 - [info] MHA::MasterMonitor version 0.58.
Tue Dec 18 18:32:45 2018 - [info] GTID failover mode = 0
Tue Dec 18 18:32:45 2018 - [info] Dead Servers:
Tue Dec 18 18:32:45 2018 - [info] Alive Servers:
Tue Dec 18 18:32:45 2018 - [info] 192.168.192.128(192.168.192.128:3306)
Tue Dec 18 18:32:45 2018 - [info] 192.168.192.129(192.168.192.129:3306)
Tue Dec 18 18:32:45 2018 - [info] 192.168.192.130(192.168.192.130:3306)
......................
Tue Dec 18 18:32:48 2018 - [info] Checking replication health on 192.168.192.129..
Tue Dec 18 18:32:48 2018 - [info] ok.
Tue Dec 18 18:32:48 2018 - [info] Checking replication health on 192.168.192.130..
Tue Dec 18 18:32:48 2018 - [info] ok.
Tue Dec 18 18:32:48 2018 - [warning] master_ip_failover_script is not defined.
Tue Dec 18 18:32:48 2018 - [warning] shutdown_script is not defined.
Tue Dec 18 18:32:48 2018 - [info] Got exit code 0 (Not master dead).

MySQL Replication Health is OK.
At this point the whole replication environment checks out OK!

3. Check the MHA Manager status
------------------------------Checking the MHA Manager status------------------------------
[root@manager_slave /]# masterha_check_status --conf=/etc/masterha/app1.cnf
app1 is stopped(2:NOT_RUNNING).

Note: when the manager is running this prints "PING_OK"; otherwise it prints "NOT_RUNNING", meaning MHA monitoring is not enabled

Start MHA Manager monitoring
Use the following command to start it in the background
[root@manager_slave /]# nohup masterha_manager --conf=/etc/masterha/app1.cnf --remove_dead_master_conf --ignore_last_failover < /dev/null > /var/log/masterha/app1/manager.log 2>&1 &
Startup parameters:
--remove_dead_master_conf   after a master switchover, remove the old master's ip from the configuration file.
--manager_log               where the log is written
--ignore_last_failover      by default, if MHA detects consecutive failures less than 8 hours apart, it refuses to fail over again, to avoid ping-pong flapping. After a switchover MHA writes a file app1.failover.complete into its working directory (/var/log/masterha/app1 here); as long as that file exists, the next switchover is refused unless the file is deleted after the first switch. For convenience, --ignore_last_failover is set here so that file is ignored.
Check again whether MHA Manager monitoring is up:
[root@manager_slave /]# masterha_check_status --conf=/etc/masterha/app1.cnf
app1 (pid:8162) is running(0:PING_OK), master:192.168.192.128

Monitoring is active, and the master is 192.168.192.128

View the startup log
[root@manager_slave /]# tail -n20 /var/log/masterha/app1/manager.log
Relay log found at /var/lib/mysql, up to manager_slave-relay-bin.000002
Temporary relay log file is /var/lib/mysql/manager_slave-relay-bin.000002
Checking if super_read_only is defined and turned on.. not present or turned off, ignoring.
Testing mysql connection and privileges..
mysql: [Warning] Using a password on the command line interface can be insecure.
done.
Testing mysqlbinlog output.. done.
Cleaning up test file(s).. done.
Tue Dec 18 18:35:11 2018 - [info] Slaves settings check done.
Tue Dec 18 18:35:11 2018 - [info]
192.168.192.128(192.168.192.128:3306) (current master)
+--192.168.192.129(192.168.192.129:3306)
+--192.168.192.130(192.168.192.130:3306)

Tue Dec 18 18:35:11 2018 - [warning] master_ip_failover_script is not defined.
Tue Dec 18 18:35:11 2018 - [warning] shutdown_script is not defined.
Tue Dec 18 18:35:11 2018 - [info] Set master ping interval 1 seconds.
Tue Dec 18 18:35:11 2018 - [warning] secondary_check_script is not defined. It is highly recommended setting it to check master reachability from two or more routes.
Tue Dec 18 18:35:11 2018 - [info] Starting ping health check on 192.168.192.128(192.168.192.128:3306)..
Tue Dec 18 18:35:11 2018 - [info] Ping(SELECT) succeeded, waiting until MySQL doesn't respond..
The line "Ping(SELECT) succeeded, waiting until MySQL doesn't respond.." shows that the whole system is now being monitored.

4. Stop MHA Manager monitoring
------------------------------Stopping MHA Manager monitoring------------------------------
Stopping is simple: just run the masterha_stop command
[root@manager_slave ~]# masterha_stop --conf=/etc/masterha/app1.cnf
MHA Manager is not running on app1(2:NOT_RUNNING).
Check MHA Manager monitoring again; it is stopped
[root@manager_slave ~]# masterha_check_status --conf=/etc/masterha/app1.cnf
app1 is stopped(2:NOT_RUNNING).
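
Since these start/stop/status commands recur throughout the rest of this article, a small wrapper can save typing. A sketch (the script name and layout are mine, not part of MHA), using the paths configured above:

#!/bin/bash
# /root/mha_monitor.sh start|stop|status - wrap the three manager commands
CONF=/etc/masterha/app1.cnf
LOG=/var/log/masterha/app1/manager.log

case "$1" in
    start)
        nohup masterha_manager --conf=$CONF --remove_dead_master_conf \
            --ignore_last_failover < /dev/null > $LOG 2>&1 &
        ;;
    stop)
        masterha_stop --conf=$CONF
        ;;
    status)
        masterha_check_status --conf=$CONF
        ;;
    *)
        echo "Usage: $0 start|stop|status"
        exit 1
        ;;
esac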

Configuring the VIP

Configuring keepalived to provide the VIP

The vip can be managed in two ways: one uses keepalived to manage the floating virtual ip; the other brings the virtual ip up with a script (i.e. without keepalived, heartbeat, or similar software)

Managing the VIP with keepalived

---------------------------------------------------------Option 1: manage the vip with keepalived---------------------------------------------------------
Download and install the software (on both masters; to be precise, one is the master (192.168.192.128) and the other is the candidate master (192.168.192.129), which acts as a slave until a switchover happens)
[root@node_master ~]# yum -y install openssl-devel
[root@node_master ~]# wget http://www.keepalived.org/software/keepalived-2.0.10.tar.gz
[root@node_master ~]# tar zxf keepalived-2.0.10.tar.gz
[root@node_master ~]# cd keepalived-2.0.10/
[root@node_master keepalived-2.0.10]# ./configure --prefix=/usr/local/keepalived
[root@node_master keepalived-2.0.10]# make && make install
[root@node_master keepalived-2.0.10]# cp keepalived/etc/init.d/keepalived /etc/init.d/
[root@node_master keepalived-2.0.10]# cp /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/
[root@node_master keepalived-2.0.10]# mkdir /etc/keepalived
[root@node_master keepalived-2.0.10]# cp /usr/local/keepalived/etc/keepalived/keepalived.conf /etc/keepalived/
[root@node_master keepalived-2.0.10]# cp /usr/local/keepalived/sbin/keepalived /usr/sbin/

keepalived configuration
------------Configuration on the master (node 192.168.192.128)------------------
[root@node_master keepalived-2.0.10]# cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak
[root@node_master keepalived-2.0.10]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
    notification_email {
        1024331014@qq.com
    }
    notification_email_from 1024331014@qq.com
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id MySQL-HA
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens37
    virtual_router_id 51
    priority 150
    advert_int 1
    nopreempt

    authentication {
        auth_type PASS
        auth_pass 1111
    }

    virtual_ipaddress {
        192.168.192.131
    }
}
Here router_id MySQL-HA names the keepalived group, the virtual ip 192.168.192.131 is bound to this host's ens37 interface, the state is set to BACKUP mode,
keepalived runs in non-preemptive mode (nopreempt), and priority 150 sets the priority to 150

------------Configuration on the candidate master (node 192.168.192.129)------------------
[root@node_slave keepalived-2.0.10]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
    notification_email {
        1024331014@qq.com
    }
    notification_email_from 1024331014@qq.com
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id MySQL-HA
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens37
    virtual_router_id 51
    priority 120
    advert_int 1
    nopreempt

    authentication {
        auth_type PASS
        auth_pass 1111
    }

    virtual_ipaddress {
        192.168.192.131
    }
}

Start the keepalived service
--------------Start it on the master (node 192.168.192.128) and check the log------------------------------
[root@node_master keepalived-2.0.10]# /etc/init.d/keepalived start
Starting keepalived (via systemctl): [ OK ]

Check whether the VIP was configured successfully (note the 192.168.192.131 address below)
[root@node_master keepalived-2.0.10]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:19:14:db brd ff:ff:ff:ff:ff:ff
    inet 192.168.52.129/24 brd 192.168.52.255 scope global noprefixroute dynamic ens33
       valid_lft 1250sec preferred_lft 1250sec
3: ens37: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:19:14:e5 brd ff:ff:ff:ff:ff:ff
    inet 192.168.192.128/24 brd 192.168.192.255 scope global noprefixroute ens37
       valid_lft forever preferred_lft forever
    inet 192.168.192.131/32 scope global ens37
       valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:4b:9c:ed:04 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever

[root@node_master keepalived-2.0.10]# tail -10 /var/log/messages
Dec 19 10:43:05 node_master Keepalived_vrrp[6856]: Sending gratuitous ARP on ens37 for 192.168.192.131
Dec 19 10:43:05 node_master Keepalived_vrrp[6856]: (VI_1) Sending/queueing gratuitous ARPs on ens37 for 192.168.192.131
Dec 19 10:43:05 node_master Keepalived_vrrp[6856]: Sending gratuitous ARP on ens37 for 192.168.192.131
Dec 19 10:43:05 node_master Keepalived_vrrp[6856]: Sending gratuitous ARP on ens37 for 192.168.192.131
Dec 19 10:43:05 node_master Keepalived_vrrp[6856]: Sending gratuitous ARP on ens37 for 192.168.192.131
Dec 19 10:43:05 node_master Keepalived_vrrp[6856]: Sending gratuitous ARP on ens37 for 192.168.192.131
Dec 19 10:43:07 node_master crond: sendmail: fatal: parameter inet_interfaces: no local interface found for ::1
Dec 19 10:44:02 node_master systemd: Started Session 90 of user root.
Dec 19 10:44:02 node_master systemd: Starting Session 90 of user root.
Dec 19 10:44:08 node_master crond: sendmail: fatal: parameter inet_interfaces: no local interface found for ::1

The vip resource is now bound to the master node 192.168.192.128

--------------Start it on the candidate master (node 192.168.192.129)----------------------------
[root@node_slave keepalived-2.0.10]# /etc/init.d/keepalived start
Starting keepalived (via systemctl): [ OK ]

[root@node_slave keepalived-2.0.10]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:f8:41:0f brd ff:ff:ff:ff:ff:ff
    inet 192.168.52.130/24 brd 192.168.52.255 scope global noprefixroute dynamic ens33
       valid_lft 1083sec preferred_lft 1083sec
3: ens37: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:f8:41:19 brd ff:ff:ff:ff:ff:ff
    inet 192.168.192.129/24 brd 192.168.192.255 scope global noprefixroute ens37
       valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:b0:14:23:57 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever

The output above shows that keepalived is configured successfully.

Note:
Both servers' keepalived is set to BACKUP mode. keepalived has two modes, master->backup and backup->backup, and they behave very differently.
In master->backup mode, once the master fails the virtual ip floats to the backup automatically; but when the failed master is repaired and keepalived restarts, it grabs the virtual ip back, even if non-preemptive mode (nopreempt) is configured.
In backup->backup mode, when the master fails the virtual ip floats to the backup automatically, and when the original master and its keepalived come back, they do not take the virtual ip from the new master, even if configured with a higher priority.
To reduce the number of ip moves, the repaired master is therefore usually brought back as the new standby.

Integrating keepalived with MHA

Integrating keepalived with MHA (so that MHA stops keepalived when the MySQL process dies)
To bring the keepalived service under MHA's control, we only need to modify the master_ip_failover script that is triggered on switchover, adding the keepalived handling for when the master goes down.
Edit /usr/local/bin/master_ip_failover; after the change it looks as follows:
[root@manager_slave ~]# cp /usr/local/bin/master_ip_failover /usr/local/bin/master_ip_failover.bak
[root@manager_slave ~]# vim /usr/local/bin/master_ip_failover #note that the vip address in the script must be adjusted to your own
#!/usr/bin/env perl

use strict;
use warnings FATAL => 'all';

use Getopt::Long;

my (
    $command, $ssh_user, $orig_master_host, $orig_master_ip,
    $orig_master_port, $new_master_host, $new_master_ip, $new_master_port
);

my $vip = '192.168.192.131';
my $ssh_start_vip = "/etc/init.d/keepalived start";
my $ssh_stop_vip = "/etc/init.d/keepalived stop";

GetOptions(
    'command=s'          => \$command,
    'ssh_user=s'         => \$ssh_user,
    'orig_master_host=s' => \$orig_master_host,
    'orig_master_ip=s'   => \$orig_master_ip,
    'orig_master_port=i' => \$orig_master_port,
    'new_master_host=s'  => \$new_master_host,
    'new_master_ip=s'    => \$new_master_ip,
    'new_master_port=i'  => \$new_master_port,
);

exit &main();

sub main {

    print "\n\nIN SCRIPT TEST====$ssh_stop_vip==$ssh_start_vip===\n\n";

    if ( $command eq "stop" || $command eq "stopssh" ) {
        my $exit_code = 1;
        eval {
            print "Disabling the VIP on old master: $orig_master_host \n";
            &stop_vip();
            $exit_code = 0;
        };
        if ($@) {
            warn "Got Error: $@\n";
            exit $exit_code;
        }
        exit $exit_code;
    }
    elsif ( $command eq "start" ) {
        my $exit_code = 10;
        eval {
            print "Enabling the VIP - $vip on the new master - $new_master_host \n";
            &start_vip();
            $exit_code = 0;
        };
        if ($@) {
            warn $@;
            exit $exit_code;
        }
        exit $exit_code;
    }
    elsif ( $command eq "status" ) {
        print "Checking the Status of the script.. OK \n";
        #`ssh $ssh_user\@cluster1 \" $ssh_start_vip \"`;
        exit 0;
    }
    else {
        &usage();
        exit 1;
    }
}

# A simple system call that enables the VIP on the new master
sub start_vip() {
    `ssh $ssh_user\@$new_master_host \" $ssh_start_vip \"`;
}

# A simple system call that disables the VIP on the old master
sub stop_vip() {
    return 0 unless ($ssh_user);
    `ssh $ssh_user\@$orig_master_host \" $ssh_stop_vip \"`;
}

sub usage {
    print
    "Usage: master_ip_failover --command=start|stop|stopssh|status --orig_master_host=host --orig_master_ip=ip --orig_master_port=port --new_master_host=host --new_master_ip=ip --new_master_port=port\n";
}

With the script in place, uncomment the master_ip_failover_script option in /etc/masterha/app1.cnf and check the cluster status again to see whether the earlier error is gone
[root@manager_slave ~]# grep 'master_ip_failover_script' /etc/masterha/app1.cnf
master_ip_failover_script= /usr/local/bin/master_ip_failover

[root@manager_slave ~]# masterha_check_repl --conf=/etc/masterha/app1.cnf
....................
Checking the Status of the script.. OK
Wed Dec 19 10:55:37 2018 - [info] OK.
Wed Dec 19 10:55:37 2018 - [warning] shutdown_script is not defined.
Wed Dec 19 10:55:37 2018 - [info] Got exit code 0 (Not master dead).

MySQL Replication Health is OK.
Replication checks out fine!
What the additions to /usr/local/bin/master_ip_failover do: when the master database fails, MHA triggers the switchover, the MHA Manager stops keepalived on the old master,
and the virtual ip floats to the candidate slave, completing the switch. You can also hook a script into keepalived itself that monitors whether mysql is running properly and kills the keepalived process when it is not (see the sketch below).
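
A sketch of that keepalived-side hook (the script-block name and thresholds here are illustrative, not from the setup above): a vrrp_script probes mysqld every 2 seconds, and after 3 consecutive failures keepalived enters FAULT state and releases the VIP, with no MHA involvement needed for this path.

vrrp_script check_mysql {
    script "/usr/bin/systemctl is-active mysqld"   # exits non-zero when mysqld is down
    interval 2   # probe every 2 seconds
    fall 3       # 3 consecutive failures put the instance into FAULT and release the VIP
}

vrrp_instance VI_1 {
    # ...the settings shown earlier stay unchanged...
    track_script {
        check_mysql
    }
}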

Using a Script to Manage the VIP

Managing the VIP with a script

---------------------------------------------------------Option 2: manage the VIP with a script---------------------------------------------------------
To test this second approach, I stopped keepalived first
This time /usr/local/bin/master_ip_failover is modified as shown below. The vip also has to be bound manually on the master once

First bind the vip on the master node (192.168.192.128)
[root@node_master ~]# ifconfig ens37:0 192.168.192.131/24 #mind the interface name and the address here
[root@node_master ~]# ifconfig
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
        inet 192.168.52.129 netmask 255.255.255.0 broadcast 192.168.52.255
        ether 00:0c:29:19:14:db txqueuelen 1000 (Ethernet)
        RX packets 7850 bytes 8852518 (8.4 MiB)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 2478 bytes 176378 (172.2 KiB)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

ens37: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
        inet 192.168.192.128 netmask 255.255.255.0 broadcast 192.168.192.255
        ether 00:0c:29:19:14:e5 txqueuelen 1000 (Ethernet)
        RX packets 4200 bytes 448286 (437.7 KiB)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 5620 bytes 2671664 (2.5 MiB)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

ens37:0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
        inet 192.168.192.131 netmask 255.255.255.0 broadcast 192.168.192.255
        ether 00:0c:29:19:14:e5 txqueuelen 1000 (Ethernet)

lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
        inet 127.0.0.1 netmask 255.0.0.0
        loop txqueuelen 1000 (Local Loopback)
        RX packets 0 bytes 0 (0.0 B)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 0 bytes 0 (0.0 B)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

On the manager node (192.168.192.130), modify /usr/local/bin/master_ip_failover
[root@manager_slave ~]# cp /usr/local/bin/master_ip_failover /usr/local/bin/master_ip_failover.bak.keep
[root@manager_slave ~]# vim /usr/local/bin/master_ip_failover #note that the vip address, interface name, and alias number in the script must match your environment (the manual binding above used ens37:0)
#!/usr/bin/env perl

use strict;
use warnings FATAL => 'all';

use Getopt::Long;

my (
    $command, $ssh_user, $orig_master_host, $orig_master_ip,
    $orig_master_port, $new_master_host, $new_master_ip, $new_master_port
);

my $vip = '192.168.192.131/24';
my $key = '0';
my $ssh_start_vip = "/sbin/ifconfig ens37:$key $vip";
my $ssh_stop_vip = "/sbin/ifconfig ens37:$key down";

GetOptions(
    'command=s'          => \$command,
    'ssh_user=s'         => \$ssh_user,
    'orig_master_host=s' => \$orig_master_host,
    'orig_master_ip=s'   => \$orig_master_ip,
    'orig_master_port=i' => \$orig_master_port,
    'new_master_host=s'  => \$new_master_host,
    'new_master_ip=s'    => \$new_master_ip,
    'new_master_port=i'  => \$new_master_port,
);

exit &main();

sub main {

    print "\n\nIN SCRIPT TEST====$ssh_stop_vip==$ssh_start_vip===\n\n";

    if ( $command eq "stop" || $command eq "stopssh" ) {
        my $exit_code = 1;
        eval {
            print "Disabling the VIP on old master: $orig_master_host \n";
            &stop_vip();
            $exit_code = 0;
        };
        if ($@) {
            warn "Got Error: $@\n";
            exit $exit_code;
        }
        exit $exit_code;
    }
    elsif ( $command eq "start" ) {
        my $exit_code = 10;
        eval {
            print "Enabling the VIP - $vip on the new master - $new_master_host \n";
            &start_vip();
            $exit_code = 0;
        };
        if ($@) {
            warn $@;
            exit $exit_code;
        }
        exit $exit_code;
    }
    elsif ( $command eq "status" ) {
        print "Checking the Status of the script.. OK \n";
        exit 0;
    }
    else {
        &usage();
        exit 1;
    }
}

# Bring the VIP up on the new master
sub start_vip() {
    `ssh $ssh_user\@$new_master_host \" $ssh_start_vip \"`;
}

# Take the VIP down on the old master
sub stop_vip() {
    return 0 unless ($ssh_user);
    `ssh $ssh_user\@$orig_master_host \" $ssh_stop_vip \"`;
}

sub usage {
    print
    "Usage: master_ip_failover --command=start|stop|stopssh|status --orig_master_host=host --orig_master_ip=ip --orig_master_port=port --new_master_host=host --new_master_ip=ip --new_master_port=port\n";
}

Check that MHA replication is still healthy
[root@manager_slave ~]# masterha_check_repl --conf=/etc/masterha/app1.cnf
Checking the Status of the script.. OK
Wed Dec 19 11:06:36 2018 - [info] OK.
Wed Dec 19 11:06:36 2018 - [warning] shutdown_script is not defined.
Wed Dec 19 11:06:36 2018 - [info] Got exit code 0 (Not master dead).

MySQL Replication Health is OK.

Note:
The master_ip_failover_script option in /etc/masterha/app1.cnf must be uncommented
To avoid split-brain, managing the virtual ip with a script rather than with keepalived is recommended for production. With that, the basic MHA cluster is fully configured.
Next comes the actual testing: a few experiments to see how MHA really behaves.
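
A quick way to see which node currently holds the vip (interface name as above):
[root@node_master ~]# ip addr show ens37 | grep 192.168.192.131
[root@node_master ~]# ping -c 2 192.168.192.131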

Failover

Automatic switchover

Automatic switchover (MHA Manager must be started first, otherwise automatic switchover cannot happen; manual switchover, of course, does not require the MHA Manager monitor to be running)

Start MHA Manager monitoring (on 192.168.192.130; skip this if it is already running)
Use the following command to start it in the background
[root@manager_slave /]# nohup masterha_manager --conf=/etc/masterha/app1.cnf --remove_dead_master_conf --ignore_last_failover < /dev/null > /var/log/masterha/app1/manager.log 2>&1 &

Stop the mysql service on the master (192.168.192.128) to simulate a master failure and trigger an automatic failover
[root@node_master ~]# systemctl stop mysqld

Read the MHA switchover log on the manager node (192.168.192.130) to follow the whole switchover
[root@manager_slave ~]# cat /var/log/masterha/app1/manager.log
................
................
----- Failover Report -----

app1: MySQL Master failover 192.168.192.128(192.168.192.128:3306) to 192.168.192.129(192.168.192.129:3306) succeeded

Master 192.168.192.128(192.168.192.128:3306) is down!

Check MHA Manager logs at manager_slave:/var/log/masterha/app1/manager.log for details.

Started automated(non-interactive) failover.
Invalidated master IP address on 192.168.192.128(192.168.192.128:3306)
The latest slave 192.168.192.129(192.168.192.129:3306) has all relay logs for recovery.
Selected 192.168.192.129(192.168.192.129:3306) as a new master.
192.168.192.129(192.168.192.129:3306): OK: Applying all logs succeeded.
192.168.192.129(192.168.192.129:3306): OK: Activated master IP address.
192.168.192.130(192.168.192.130:3306): This host has the latest relay log events.
Generating relay diff files from the latest slave succeeded.
192.168.192.130(192.168.192.130:3306): OK: Applying all logs succeeded. Slave started, replicating from 192.168.192.129(192.168.192.129:3306)
192.168.192.129(192.168.192.129:3306): Resetting slave info succeeded.
Master failover to 192.168.192.129(192.168.192.129:3306) completed successfully.
The final line, "Master failover to 192.168.192.129(192.168.192.129:3306) completed successfully.", shows that the candidate master has taken over
The output above shows the steps of a complete MHA switchover:
1) Configuration check phase: validate the configuration of the whole cluster
2) Handle the dead master: remove the virtual ip and, optionally, power the host off (not implemented here yet; still to be investigated)
3) Copy the relay log difference between the dead master and the newest slave and save it under the MHA Manager's directory
4) Identify the slave containing the newest updates
5) Apply the binlog events saved from the master
6) Promote one slave to be the new master
7) Point the other slaves at the new master for replication

Finally, start the MHA Manager monitor again and check who is now the master of the cluster
[root@manager_slave ~]# nohup masterha_manager --conf=/etc/masterha/app1.cnf --remove_dead_master_conf --ignore_last_failover < /dev/null > /var/log/masterha/app1/manager.log 2>&1 &
[root@manager_slave ~]# masterha_check_status --conf=/etc/masterha/app1.cnf
app1 (pid:7036) is running(0:PING_OK), master:192.168.192.129
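
After the failover the old master sits outside the cluster. A common follow-up (a sketch; the binlog file and position below are placeholders that must be read from your own manager.log, from the "CHANGE MASTER TO ..." statement MHA prints during recovery) is to repair 192.168.192.128 and reattach it as a slave of the new master:
[root@node_master ~]# systemctl start mysqld
mysql> change master to master_host='192.168.192.129',master_port=3306,master_user='slave',master_password='Slave_1234',master_log_file='<file from manager.log>',master_log_pos=<pos from manager.log>;
mysql> start slave;
Also remember that --remove_dead_master_conf removed the [server1] section from /etc/masterha/app1.cnf, so it has to be added back before the old master can be monitored again.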

Manual switchover

Manual failover (MHA Manager must not be running)
A manual failover means the service does not use MHA's automatic switchover; when the master fails, an operator invokes MHA by hand to perform the failover, as follows:

Make sure the mha manager is stopped
[root@manager_slave ~]# masterha_stop --conf=/etc/masterha/app1.cnf

Note: if the MHA manager detects no dead server, it reports an error and aborts the failover
[root@manager_slave ~]# masterha_master_switch --master_state=dead --conf=/etc/masterha/app1.cnf --dead_master_host=192.168.192.128 --dead_master_port=3306 --new_master_host=192.168.192.129 --new_master_port=3306 --ignore_last_failover
The output asks whether to perform the switch (yes/NO): type yes
.............
.............
----- Failover Report -----

app1: MySQL Master failover 192.168.192.128(192.168.192.128:3306) to 192.168.192.129(192.168.192.129:3306) succeeded

Master 192.168.192.128(192.168.192.128:3306) is down!

Check MHA Manager logs at manager_slave for details.

Started manual(interactive) failover.
Invalidated master IP address on 192.168.192.128(192.168.192.128:3306)
The latest slave 192.168.192.129(192.168.192.129:3306) has all relay logs for recovery.
Selected 192.168.192.129(192.168.192.129:3306) as a new master.
192.168.192.129(192.168.192.129:3306): OK: Applying all logs succeeded.
192.168.192.129(192.168.192.129:3306): OK: Activated master IP address.
192.168.192.130(192.168.192.130:3306): This host has the latest relay log events.
Generating relay diff files from the latest slave succeeded.
192.168.192.130(192.168.192.130:3306): OK: Applying all logs succeeded. Slave started, replicating from 192.168.192.129(192.168.192.129:3306)
192.168.192.129(192.168.192.129:3306): Resetting slave info succeeded.
Master failover to 192.168.192.129(192.168.192.129:3306) completed successfully.

The output shows the switch succeeded: this simulates manually promoting 192.168.192.129 to master while the master (192.168.192.128) is down.

Online switchover

In many situations it is necessary to migrate an existing master to another machine: the master's hardware is failing, the RAID controller needs to be rebuilt, the master is being moved to better-performing hardware, and so on. Maintenance on the master degrades performance and means downtime during which no data can be written. Moreover, blocking or killing currently running sessions can cause data inconsistency between the old and new master.
MHA provides fast switchover with graceful blocking of writes; the switch itself takes only 0.5-2 seconds, during which data cannot be written. In many cases a 0.5-2 second write block is acceptable, so the master can be switched without scheduling a maintenance window.

The rough steps of an MHA online switchover:
1) Verify the replication setup and identify the current master
2) Determine the new master
3) Block writes to the current master
4) Wait for all slaves to catch up with replication
5) Grant writes on the new master
6) Repoint the slaves

Note that for online switching, the application architecture needs to consider two issues:
1) Automatically telling master from slaves (the master machine can change). With a vip this problem is essentially solved.
2) Load balancing (an approximate read/write ratio and a per-machine load share can be defined; when a machine leaves the cluster, this has to be accounted for)

To guarantee full data consistency and complete the switch in the shortest possible time, MHA's online switchover succeeds only if all of the following hold; otherwise it fails:
1) The IO thread is running on every slave
2) The SQL thread is running on every slave
3) Seconds_Behind_Master in the show slave status output is at most running_updates_limit seconds on every slave; if running_updates_limit is not given during the switch, it defaults to 1 second
4) On the master, no update statement shown by show processlist has been running longer than running_updates_limit seconds
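
A pre-flight sketch of those conditions (assuming the manager account created earlier is reachable from this host):
# thread state and lag on both slaves
for host in 192.168.192.129 192.168.192.130; do
    echo "== $host =="
    mysql -h $host -umanager -pManager_1234 -e "show slave status\G" \
        | grep -E 'Slave_IO_Running|Slave_SQL_Running|Seconds_Behind_Master'
done
# long-running statements on the current master
# (seeing other sessions' statements requires the PROCESS privilege)
mysql -h 192.168.192.128 -umanager -pManager_1234 -e "show processlist"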

The online switchover steps are as follows:

First, stop MHA monitoring on the manager node:
[root@manager_slave ~]# masterha_stop --conf=/etc/masterha/app1.cnf

Then perform the online switchover (simulating an online master switch: the original master 192.168.192.128 becomes a slave, and 192.168.192.129 is promoted to be the new master)
[root@manager_slave ~]# masterha_master_switch --conf=/etc/masterha/app1.cnf --master_state=alive --new_master_host=192.168.192.129 --new_master_port=3306 --orig_master_is_new_slave --running_updates_limit=10000
Running it produced the following error
..........
Starting master switch from 192.168.192.128(192.168.192.128:3306) to 192.168.192.129(192.168.192.129:3306)? (yes/NO): yes
Thu Dec 20 12:04:47 2018 - [info] Checking whether 192.168.192.129(192.168.192.129:3306) is ok for the new master..
Thu Dec 20 12:04:47 2018 - [info] ok.
Thu Dec 20 12:04:47 2018 - [info] 192.168.192.128(192.168.192.128:3306): SHOW SLAVE STATUS returned empty result. To check replication filtering rules, temporarily executing CHANGE MASTER to a dummy host.
Thu Dec 20 12:04:47 2018 - [info] 192.168.192.128(192.168.192.128:3306): Resetting slave pointing to the dummy host.
Thu Dec 20 12:04:47 2018 - [info] ** Phase 1: Configuration Check Phase completed.
Thu Dec 20 12:04:47 2018 - [info]
Thu Dec 20 12:04:47 2018 - [info] * Phase 2: Rejecting updates Phase..
Thu Dec 20 12:04:47 2018 - [info]
Thu Dec 20 12:04:47 2018 - [info] Executing master ip online change script to disable write on the current master:
Thu Dec 20 12:04:47 2018 - [info] /usr/local/bin/master_ip_online_change --command=stop --orig_master_host=192.168.192.128 --orig_master_ip=192.168.192.128 --orig_master_port=3306 --orig_master_user='manager' --new_master_host=192.168.192.129 --new_master_ip=192.168.192.129 --new_master_port=3306 --new_master_user='manager' --orig_master_ssh_user=root --new_master_ssh_user=root --orig_master_is_new_slave --orig_master_password=xxx --new_master_password=xxx
Thu Dec 20 12:04:47 2018 811566 Set read_only on the new master.. ok.
Thu Dec 20 12:04:47 2018 815337 Drpping app user on the orig master..
Got Error: Undefined subroutine &main::FIXME_xxx_drop_app_user called at /usr/local/bin/master_ip_online_change line 152.

Thu Dec 20 12:04:47 2018 - [error][/usr/local/share/perl5/MHA/ManagerUtil.pm, ln177] Got ERROR: at /usr/local/bin/masterha_master_switch line 53.

The cause of the error: the subroutine FIXME_xxx_drop_app_user is not defined. Since I don't know perl well, I simply commented out the drop-user/FIXME_xxx lines; this does not affect the rest of the process
Fix:
[root@manager_slave ~]# cp /usr/local/bin/master_ip_online_change /usr/local/bin/master_ip_online_change.bak #back up the file before modifying it
[root@manager_slave ~]# vim /usr/local/bin/master_ip_online_change #edit the file
Find the following two lines and comment them out with #
FIXME_xxx_drop_app_user($orig_master_handler);
FIXME_xxx_create_app_user($new_master_handler);

Re-run the online switchover
[root@manager_slave ~]# masterha_master_switch --conf=/etc/masterha/app1.cnf --master_state=alive --new_master_host=192.168.192.129 --new_master_port=3306 --orig_master_is_new_slave --running_updates_limit=10000
........
Thu Dec 20 12:11:06 2018 - [info] -- Slave switch on host 192.168.192.130(192.168.192.130:3306) succeeded.
Thu Dec 20 12:11:06 2018 - [info] Unlocking all tables on the orig master:
Thu Dec 20 12:11:06 2018 - [info] Executing UNLOCK TABLES..
Thu Dec 20 12:11:06 2018 - [info] ok.
Thu Dec 20 12:11:06 2018 - [info] Starting orig master as a new slave..
Thu Dec 20 12:11:06 2018 - [info] Resetting slave 192.168.192.128(192.168.192.128:3306) and starting replication from the new master 192.168.192.129(192.168.192.129:3306)..
Thu Dec 20 12:11:06 2018 - [info] Executed CHANGE MASTER.
Thu Dec 20 12:11:06 2018 - [info] Slave started.
Thu Dec 20 12:11:06 2018 - [info] All new slave servers switched successfully.
Thu Dec 20 12:11:06 2018 - [info]
Thu Dec 20 12:11:06 2018 - [info] * Phase 5: New master cleanup phase..
Thu Dec 20 12:11:06 2018 - [info]
Thu Dec 20 12:11:06 2018 - [info] 192.168.192.129: Resetting slave info succeeded.
Thu Dec 20 12:11:06 2018 - [info] Switching master to 192.168.192.129(192.168.192.129:3306) completed successfully.

The output above shows the online switchover succeeded
What the parameters mean:
--orig_master_is_new_slave     with this flag the original master is turned into a slave of the new master; without it, the original master is not started as part of the cluster
--running_updates_limit=10000  during a switchover, mha cannot switch if the candidate master lags; with this flag the switch is allowed as long as the delay is within this many seconds. How long the switch takes, however, is determined by the size of the relay logs to recover
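
To confirm the original master really came back as a slave of the new master (manager account as created earlier):
[root@manager_slave ~]# mysql -h 192.168.192.128 -umanager -pManager_1234 -e "show slave status\G" | grep -E 'Master_Host|Slave_IO_Running|Slave_SQL_Running'
Master_Host should now report 192.168.192.129, with both threads running.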

That completes the deployment of the MySQL high-availability solution MHA, including failover!
