I. Theory overview

During a MySQL failover, MHA can complete the switchover automatically within 0 to 30 seconds, and while doing so it preserves data consistency as far as possible, which is what makes it genuinely highly available.

Advantages:
  • An open-source tool written in Perl
  • Supports GTID-based replication
  • One monitoring node can manage multiple clusters
  • Less prone to data loss during failover

Drawbacks:
  • A custom script or third-party tool is needed to handle the VIP
  • Once started, MHA only monitors the master
  • Relies on passwordless SSH between nodes, which is a security concern
  • Provides no read load balancing for the slaves

Using GTIDs greatly simplifies replication: a GTID is tied to a transaction, so once a transaction commits on the master, the slaves are guaranteed to execute that same transaction.
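
If you do want to run the cluster on GTIDs instead of the binlog file/position coordinates used later in this article, the change is small. A minimal sketch, assuming MySQL 5.6 or later and the replication account, password and master address used in this walkthrough:

    # add to /etc/my.cnf under [mysqld] on every node, then restart mysqld
    #   gtid_mode=ON
    #   enforce_gtid_consistency=ON
    # on each slave, point replication at the master with auto-positioning
    mysql -uroot -p123456 -e "CHANGE MASTER TO MASTER_HOST='192.168.111.3', MASTER_USER='myslave', MASTER_PASSWORD='123456', MASTER_AUTO_POSITION=1; START SLAVE;"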

MHA consists of two parts: the MHA manager node and the MHA node component. The manager can run on a dedicated machine and manage several clusters, or on one of the slave nodes and manage only the cluster it belongs to. The MHA node component runs on every mysql server as well as on the manager server. The MHA manager probes the master periodically; when the master fails, it automatically promotes the slave holding the most recent data to master.

When MHA switches masters it tries to save the binary logs from the crashed master so as to lose as little data as possible, but some loss is still possible: if the master has failed at the hardware level or cannot be reached over ssh, MHA cannot save the binary logs and only performs the failover, losing the most recent data. Semi-synchronous replication, available since MySQL 5.5, lowers that risk, and MHA can be combined with it: as long as even one slave has received the latest binary log events, MHA can apply them to all the other slaves, so every node ends up consistent.

Deployment plan for this case

The environment here is a single cluster, so the manager node is deployed on one of the slaves and only manages the current cluster.

To save resources this case uses one master, one standby master (the master also serves reads when idle), plus one additional slave.

Because it is awkward to pin a specific version with yum, MySQL is installed from the binary tarball here, LVS is used to schedule reads across the slaves, and keepalived provides high availability for LVS.

II. Environment

The firewall and SELinux are disabled for the duration of the test.

Hostname        IP address        MHA role                    MySQL role
master          192.168.111.3     primary                     master
node1           192.168.111.4     MHA manager node, slave     slave
node2           192.168.111.5     standby primary             slave
lvs1            192.168.111.6     lvs, keepalived
lvs2            192.168.111.7     lvs, keepalived
MHA VIP         192.168.111.100
keepalived VIP  192.168.111.200

III. Deployment

Deploying MHA

  • Base environment
  1. [root@localhost ~]# vim /etc/hosts
  2. 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
  3. ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
  4. 192.168.111.3 master
  5. 192.168.111.4 node1
  6. 192.168.111.5 node2
  7. 192.168.111.6 lvs1
  8. 192.168.111.7 lvs2
  9. [root@localhost ~]# scp /etc/hosts root@node1:/etc/
  10. [root@localhost ~]# scp /etc/hosts root@node2:/etc/
  11. [root@localhost ~]# scp /etc/hosts root@lvs1:/etc/
  12. [root@localhost ~]# scp /etc/hosts root@lvs2:/etc/
  13. [root@localhost ~]# hostname master
  14. [root@localhost ~]# bash
  15. [root@master ~]# uname -n
  16. master
  17. [root@localhost ~]# hostname node1
  18. [root@localhost ~]# bash
  19. [root@node1 ~]# uname -n
  20. node1
  21. [root@localhost ~]# hostname node2
  22. [root@localhost ~]# bash
  23. [root@node2 ~]# uname -n
  24. node2
  25. [root@localhost ~]# hostname lvs1
  26. [root@localhost ~]# bash
  27. [root@lvs1 ~]# uname -n
  28. lvs1
  29. [root@localhost ~]# hostname lvs2
  30. [root@localhost ~]# bash
  31. [root@lvs2 ~]# uname -n
  32. lvs2
  • Download and install MHA (same steps on every machine)
  1. http://downloads.mariadb.com/MHA/
  2. #download MHA-manager and MHA-node from the site above
  3. The versions used here are mha4mysql-manager-0.56.tar.gz and mha4mysql-node-0.56.tar.gz
  4. Set up the epel repository yourself
  5. [root@master ~]# yum install -y perl-DBD-MySQL.x86_64 perl-DBI.x86_64 perl-CPAN perl-ExtUtils-CBuilder perl-ExtUtils-MakeMaker
  6. #dependency packages
  7. [root@master ~]# rpm -q perl-DBD-MySQL.x86_64 perl-DBI.x86_64 perl-CPAN perl-ExtUtils-CBuilder perl-ExtUtils-MakeMaker
  8. #verify - every one of these must be installed
  9. #install the node package on all nodes
  10. [root@master ~]# tar xf mha4mysql-node-0.56.tar.gz
  11. [root@master ~]# cd mha4mysql-node-0.56/
  12. [root@master mha4mysql-node-0.56]# perl Makefile.PL
  13. [root@master mha4mysql-node-0.56]# make && make install
  14. [root@master ~]# ls -l /usr/local/bin/
  15. total 40
  16. -r-xr-xr-x 1 root root 16346 May 19 16:21 apply_diff_relay_logs
  17. -r-xr-xr-x 1 root root 4807 May 19 16:21 filter_mysqlbinlog
  18. -r-xr-xr-x 1 root root 7401 May 19 16:21 purge_relay_logs
  19. -r-xr-xr-x 1 root root 7395 May 19 16:21 save_binary_logs
  20. #the generated executables
  21. Installing the manager (on node1)
  22. Part of the dependencies were installed above; install whatever is still missing
  23. [root@node1 mha4mysql-node-0.56]# yum -y install perl perl-Log-Dispatch perl-Parallel-ForkManager perl-DBD-MySQL perl-DBI perl-Time-HiRes perl-Config-Tiny
  24. [root@node1 mha4mysql-node-0.56]# rpm -q perl perl-Log-Dispatch perl-Parallel-ForkManager perl-DBD-MySQL perl-DBI perl-Time-HiRes perl-Config-Tiny
  25. perl-5.16.3-294.el7_6.x86_64
  26. perl-Log-Dispatch-2.41-1.el7.1.noarch
  27. perl-Parallel-ForkManager-1.18-2.el7.noarch
  28. perl-DBD-MySQL-4.023-6.el7.x86_64
  29. perl-DBI-1.627-4.el7.x86_64
  30. perl-Time-HiRes-1.9725-3.el7.x86_64
  31. perl-Config-Tiny-2.14-7.el7.noarch
  32. [root@node1 ~]# tar xf mha4mysql-manager-0.56.tar.gz
  33. [root@node1 ~]# cd mha4mysql-manager-0.56/
  34. [root@node1 mha4mysql-manager-0.56]# perl Makefile.PL && make && make install
  35. [root@node1 mha4mysql-manager-0.56]# ls /usr/local/bin/
  36. apply_diff_relay_logs masterha_check_ssh masterha_manager masterha_secondary_check save_binary_logs
  37. filter_mysqlbinlog masterha_check_status masterha_master_monitor masterha_stop
  38. masterha_check_repl masterha_conf_host masterha_master_switch purge_relay_logs
  • Configure passwordless SSH
  1. On the manager:
  2. [root@node1 ~]# ssh-keygen -t rsa
  3. Generating public/private rsa key pair.
  4. Enter file in which to save the key (/root/.ssh/id_rsa):
  5. Created directory '/root/.ssh'.
  6. Enter passphrase (empty for no passphrase):
  7. Enter same passphrase again:
  8. Your identification has been saved in /root/.ssh/id_rsa.
  9. Your public key has been saved in /root/.ssh/id_rsa.pub.
  10. The key fingerprint is:
  11. SHA256:qvOacPMWP+iO4BcxPtHJkVDdJXE4HNklCxlPkWShdMM root@node1
  12. The key's randomart image is:
  13. +---[RSA 2048]----+
  14. | .o.o oB%%=. |
  15. | o ..BXE+ |
  16. | o o ..o |
  17. | + + |
  18. | . + S |
  19. | + .. |
  20. | o oo.+ |
  21. | . +o*o o |
  22. | ..=B= . |
  23. +----[SHA256]-----+
  24. #everything above was accepted with Enter (the defaults); you can also choose your own options
  25. [root@node1 ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub root@master
  26. [root@node1 ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub root@node2
  27. [root@node1 ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub root@node1
  28. #node1 is this host itself, so strictly nothing changes, but for consistency node1 receives the public key too; if you are curious, look into how ssh key authentication works
  29. [root@node1 ~]# ssh node1
  30. Last login: Sat Apr 27 13:33:49 2019 from 192.168.111.1
  31. [root@node1 ~]# exit
  32. logout
  33. Connection to node1 closed.
  34. [root@node1 ~]# ssh node2
  35. Last login: Thu Apr 18 22:55:10 2019 from 192.168.111.1
  36. [root@node2 ~]# exit
  37. logout
  38. Connection to node2 closed.
  39. [root@node1 ~]# ssh master
  40. Last login: Sun May 19 16:00:20 2019 from 192.168.111.1
  41. #to be safe, every connection was tried once
  42. Every node also has to hand out its public key. Normally only the node servers need this, but in this case the manager is not on a dedicated server; it sits on one of the nodes, so it gets a copy as well (a scripted version of this distribution is sketched after this listing)
  43. [root@master ~]# ssh-keygen -t rsa
  44. [root@master ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub root@node1
  45. [root@master ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub root@node2
  46. [root@node2 ~]# ssh-keygen -t rsa
  47. [root@node2 ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub root@master
  48. [root@node2 ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub root@node1
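
With only a handful of hosts the ssh-copy-id commands above are quick to type; if the list grows, the distribution can be scripted. A minimal sketch, run on each host after its ssh-keygen (hostnames taken from the /etc/hosts file above):

    # push this host's public key to every MySQL node in the cluster
    for h in master node1 node2; do
        ssh-copy-id -i /root/.ssh/id_rsa.pub root@$h
    done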

Installing MySQL from the binary tarball and setting up master-slave replication

  1. Install MySQL from the binary tarball on all three nodes
  2. yum -y install libaio
  3. wget http://mirrors.sohu.com/mysql/MySQL-5.7/mysql-5.7.24-linux-glibc2.12-x86_64.tar.gz
  4. useradd -M -s /sbin/nologin mysql
  5. tar zxf mysql-5.7.24-linux-glibc2.12-x86_64.tar.gz
  6. mv mysql-5.7.24-linux-glibc2.12-x86_64 /usr/local/mysql
  7. chown -R mysql:mysql /usr/local/mysql
  8. ln -s /usr/local/mysql/bin/* /usr/local/bin/
  9. /usr/local/mysql/bin/mysqld --user=mysql --basedir=/usr/local/mysql --datadir=/usr/local/mysql/data --initialize
  10. cp /usr/local/mysql/support-files/mysql.server /etc/init.d/mysqld
  11. Write down the randomly generated root password
  12. 2019-05-18T08:43:11.094845Z 1 [Note] A temporary password is generated for root@localhost: 2Gk75Zvp&!-y
  13. #after starting the service, the password can be changed with mysqladmin -u root -p'oldpassword' password 'newpassword'
  14. master:
  15. vim /etc/my.cnf
  16. [mysqld]
  17. datadir=/usr/local/mysql/data
  18. socket=/usr/local/mysql/data/mysql.sock
  19. server-id=1
  20. log-bin=mysql-binlog
  21. log-slave-updates=true
  22. symbolic-links=0
  23. [mysqld_safe]
  24. log-error=/usr/local/mysql/data/mysql.log
  25. pid-file=/usr/local/mysql/data/mysql.pid
  26. node1:
  27. [root@node1 ~]# vim /etc/my.cnf
  28. [mysqld]
  29. datadir=/usr/local/mysql/data
  30. socket=/tmp/mysql.sock
  31. server-id=2
  32. relay-log=relay-log-bin
  33. relay-log-index=slave-relay-bin.index
  34. symbolic-links=0
  35. [mysqld_safe]
  36. log-error=/usr/local/mysql/data/mysql.log
  37. pid-file=/usr/local/mysql/data/mysql.pid
  38. node2:
  39. [root@node2 mysql]# vim /etc/my.cnf
  40. [mysqld]
  41. datadir=/usr/local/mysql/data
  42. socket=/tmp/mysql.sock
  43. server-id=3
  44. relay-log=relay-log-bin
  45. relay-log-index=slave-relay-bin.index
  46. symbolic-links=0
  47. [mysqld_safe]
  48. log-error=/usr/local/mysql/data/mysql.log
  49. pid-file=/usr/local/mysql/data/mysql.pid
  50. #on the master
  51. mysql> grant replication slave on *.* to 'myslave'@'192.168.111.%' identified by '123456';
  52. Query OK, 0 rows affected, 1 warning (0.00 sec)
  53. mysql> flush privileges;
  54. Query OK, 0 rows affected (0.10 sec)
  55. mysql> show master status;
  56. +---------------------+----------+--------------+------------------+-------------------+
  57. | File | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
  58. +---------------------+----------+--------------+------------------+-------------------+
  59. | mysql-binlog.000002 | 864 | | | |
  60. +---------------------+----------+--------------+------------------+-------------------+
  61. This is a test environment with no pre-existing data, so no backup of existing data is taken here.
  62. node1,node2:
  63. mysql> change master to
  64. -> master_host='192.168.111.3',
  65. -> master_user='myslave',
  66. -> master_password='123456',
  67. -> master_log_file='mysql-binlog.000002',
  68. -> master_log_pos=864;
  69. Query OK, 0 rows affected, 2 warnings (0.01 sec)
  70. mysql> start slave;
  71. Query OK, 0 rows affected (0.00 sec)
  72. mysql> show slave status\G;
  73. Slave_IO_Running: Yes
  74. Slave_SQL_Running: Yes
  75. Then create a database and some tables on the master and check that they appear on the slaves; that test is not shown step by step here, but a quick smoke test is sketched below.
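
A quick replication smoke test along those lines; the database name repltest is just a throwaway example, and the root password is the one set earlier:

    # on the master: create a test database and table
    mysql -uroot -p123456 -e "CREATE DATABASE repltest; CREATE TABLE repltest.t1 (id INT PRIMARY KEY);"
    # on node1 and node2: both objects should show up almost immediately
    mysql -uroot -p123456 -e "SHOW DATABASES LIKE 'repltest'; SHOW TABLES FROM repltest;"
    # clean up on the master; the DROP replicates as well
    mysql -uroot -p123456 -e "DROP DATABASE repltest;"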

Deploying semi-synchronous replication

  1. [root@master plugin]# ll -lh semisync_*
  2. -rwxr-xr-x 1 mysql mysql 692K Oct 4 2018 semisync_master.so
  3. -rwxr-xr-x 1 mysql mysql 149K Oct 4 2018 semisync_slave.so
  4. #these are the semi-synchronous replication plugins
  5. [root@master plugin]# mysql -u root -p123456
  6. #install them everywhere as follows
  7. mysql> install plugin rpl_semi_sync_master soname 'semisync_master.so';
  8. mysql> INSTALL PLUGIN rpl_semi_sync_slave SONAME 'semisync_slave.so';
  9. #install the plugins
  10. mysql> SELECT PLUGIN_NAME, PLUGIN_STATUS FROM INFORMATION_SCHEMA.PLUGINS WHERE PLUGIN_NAME LIKE '%semi%';
  11. +----------------------+---------------+
  12. | PLUGIN_NAME | PLUGIN_STATUS |
  13. +----------------------+---------------+
  14. | rpl_semi_sync_slave | ACTIVE |
  15. | rpl_semi_sync_master | ACTIVE |
  16. +----------------------+---------------+
  17. set global rpl_semi_sync_master_enabled=on ;
  18. #enable the master plugin on the master
  19. set global rpl_semi_sync_slave_enabled=on ;
  20. #enable the slave plugin on node1
  21. mysql> set global rpl_semi_sync_slave_enabled=on ;
  22. mysql> set global rpl_semi_sync_master_enabled=on ;
  23. #enable both on node2, since it is also the standby master

The settings above are lost when mysql restarts; add them to the configuration file to make them persistent.

  1. Master:
  2. vim /etc/my.cnf
  3. plugin-load = "rpl_semi_sync_master=semisync_master.so;rpl_semi_sync_slave=semisync_slave.so"
  4. #a single plugin-load line; listing plugin-load twice would make the second entry override the first
  5. rpl_semi_sync_master_enabled=on
  6. node1
  7. vim /etc/my.cnf
  8. plugin-load = "rpl_semi_sync_master=semisync_master.so;rpl_semi_sync_slave=semisync_slave.so"
  9. #merged into one plugin-load line for the same reason as on the master
  10. rpl_semi_sync_slave_enabled=on
  11. node2:
  12. vim /etc/my.cnf
  13. plugin-load = "rpl_semi_sync_master=semisync_master.so;rpl_semi_sync_slave=semisync_slave.so"
  14. #merged plugin-load line
  15. rpl_semi_sync_slave_enabled=on
  16. rpl_semi_sync_master_enabled=on
  • Testing semi-synchronous replication
  1. mysql> create database qiao;
  2. Query OK, 1 row affected (0.50 sec)
  3. mysql> SHOW GLOBAL STATUS LIKE '%semi%';
  4. +--------------------------------------------+--------+
  5. | Variable_name | Value |
  6. +--------------------------------------------+--------+
  7. | Rpl_semi_sync_master_yes_tx | 1 |
  8. #run some test writes and this counter grows; then check on the slaves that the changes arrived (a fuller check is sketched below)
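
A slightly fuller check of the semi-sync counters, run on the master (the status variable names come from the semisync plugins loaded above):

    # is the master-side plugin currently active?
    mysql -uroot -p123456 -e "SHOW GLOBAL STATUS LIKE 'Rpl_semi_sync_master_status';"
    # do a throwaway write, then watch the acknowledged-transaction counter grow
    mysql -uroot -p123456 -e "CREATE DATABASE IF NOT EXISTS semitest; DROP DATABASE semitest;"
    mysql -uroot -p123456 -e "SHOW GLOBAL STATUS LIKE 'Rpl_semi_sync_master_yes_tx';"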

Configuring MHA

  • Set both slaves to read-only. This is done at runtime rather than in the configuration file because node2 is the standby master and may be promoted at any moment; for node1 either way would do.
  1. node2 and node1, identically:
  2. mysql> set global read_only=1;
  3. #this only restricts ordinary users, not root or other accounts with the SUPER privilege; to shut out everyone you could use "flush tables with read lock", after which nobody can write at all (a quick check is sketched below)
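
A quick way to see the difference, sketched here with a hypothetical non-privileged account named app (not created anywhere in this article):

    # confirm the flag on the slave
    mysql -uroot -p123456 -e "SELECT @@global.read_only;"
    # root still writes fine because it has SUPER; an ordinary user is rejected,
    # roughly: ERROR 1290 ... running with the --read-only option ...
    mysql -uapp -papppass -e "CREATE DATABASE ro_test;"
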
  • Grant the new monitoring account
  1. Master:
  2. mysql> grant all privileges on *.* to 'root'@'192.168.111.%' identified by '123456';
  3. mysql> flush privileges;
  4. On the other nodes it now shows up as follows:
  5. mysql> show grants for root@'192.168.111.%';
  6. +-------------------------------------------------------+
  7. | Grants for root@192.168.111.% |
  8. +-------------------------------------------------------+
  9. | GRANT ALL PRIVILEGES ON *.* TO 'root'@'192.168.111.%' |
  10. +-------------------------------------------------------+
  11. #the grant itself modified data in the mysql database, so the fact that it is visible here confirms replication carried it over
  • Configuration on the manager
  1. [root@node1 ~]# vim /etc/masterha/app1.cnf
  2. [server default]
  3. manager_workdir=/var/log/masterha/app1
  4. #absolute path where the mha manager keeps its state files; defaults to /var/tmp if unset
  5. manager_log=/var/log/masterha/app1/manager.log
  6. #absolute path of the mha manager log; if unset, the mha manager prints to stdout/stderr, e.g. the messages produced while it performs a failover
  7. master_binlog_dir=/usr/local/mysql/data
  8. #absolute path of the master's binlog directory. It is used when the master's mysqld is dead but the host is still reachable over ssh, so the binlog events needed for recovery can be read and copied from this path. The parameter is required because once mysqld is down there is no way to detect the binlog directory automatically. The default is /var/lib/mysql,/var/log/mysql (/var/lib/mysql is the default binlog directory of most MySQL builds, /var/log/mysql is the Ubuntu package default); several values may be listed, separated by commas
  9. master_ip_failover_script=/usr/local/bin/master_ip_failover
  10. #a script you write yourself to transparently repoint applications to the new master, used here to move the VIP
  11. password=123456
  12. user=root
  13. #administrative account (and password) on the managed mysql instances, preferably root, since the management statements (stop slave, change master, reset slave and so on) need it; defaults to root
  14. ping_interval=1
  15. #how often, in seconds, the mha manager pings the master (it executes a select as the ping); after three consecutive lost pings the manager declares the master dead, so a failure is detected within at most four ping intervals; the default interval is 3 seconds
  16. remote_workdir=/usr/local/mysql/data
  17. #working directory on each MHA node (the mysql servers) where it writes its log files; an absolute path, created automatically if it does not exist; MHA aborts if the path is not accessible, and the filesystem holding it needs enough free space; defaults to /var/tmp
  18. repl_password=123456
  19. repl_user=myslave
  20. #replication user and password used when running change master on all slaves; this user should hold the REPLICATION SLAVE privilege on the master
  21. [server1]
  22. hostname=master
  23. #hostname or ip address
  24. port=3306
  25. #mysql port
  26. [server2]
  27. hostname=node2
  28. candidate_master=1
  29. #raises this slave's priority when a new master is chosen from the slaves (for example a RAID 10 slave is more reliable than a RAID 0 one); add candidate_master=1 under a slave's section to make it the preferred new master. The slave still has to have binary logging enabled and no significant replication lag; if those conditions are not met it will not be promoted when the master dies, so this parameter only raises the priority, it does not guarantee promotion
  30. port=3306
  31. check_repl_delay=0
  32. #by default MHA will not pick a slave that is more than 100MB of relay logs behind the master as the new master, because that lengthens recovery; setting this to 0 makes MHA ignore replication delay when choosing, and it is meant to be used together with candidate_master=1 when you explicitly want that slave promoted.
  33. [server3]
  34. hostname=node1
  35. port=3306
  36. On node2:
  37. [root@node2 ~]# vim /etc/my.cnf
  38. log-bin=mysql-binlog
  39. #enable binary logging, then restart the service
  • Create the failover script on the manager
  1. [root@node1 ~]# vim /usr/local/bin/masterha_ip_failover
  2. #!/usr/bin/env perl
  3. use strict;
  4. use warnings FATAL => 'all';
  5. use Getopt::Long;
  6. my (
  7. $command, $ssh_user, $orig_master_host, $orig_master_ip,
  8. $orig_master_port, $new_master_host, $new_master_ip, $new_master_port,
  9. );
  10. my $vip = '192.168.111.100';
  11. my $key = "1";
  12. my $ssh_start_vip = "/sbin/ifconfig ens32:$key $vip";
  13. my $ssh_stop_vip = "/sbin/ifconfig ens32:$key down";
  14. $ssh_user = "root";
  15. GetOptions(
  16. 'command=s' => \$command,
  17. 'ssh_user=s' => \$ssh_user,
  18. 'orig_master_host=s' => \$orig_master_host,
  19. 'orig_master_ip=s' => \$orig_master_ip,
  20. 'orig_master_port=i' => \$orig_master_port,
  21. 'new_master_host=s' => \$new_master_host,
  22. 'new_master_ip=s' => \$new_master_ip,
  23. 'new_master_port=i' => \$new_master_port,
  24. );
  25. exit &main();
  26. sub main {
  27. print "\n\nIN SCRIPT TEST====$ssh_stop_vip==$ssh_start_vip===\n\n";
  28. if ( $command eq "stop" || $command eq "stopssh" ) {
  29. # $orig_master_host, $orig_master_ip, $orig_master_port are passed.
  30. # If you manage master ip address at global catalog database,
  31. # invalidate orig_master_ip here.
  32. my $exit_code = 1;
  33. #eval {
  34. # print "Disabling the VIP on old master: $orig_master_host \n";
  35. # &stop_vip();
  36. # $exit_code = 0;
  37. #};
  38. eval {
  39. print "Disabling the VIP on old master: $orig_master_host \n";
  40. #my $ping=`ping -c 1 10.0.0.13 | grep "packet loss" | awk -F',' '{print $3}' | awk '{print $1}'`;
  41. #if ( $ping le "90.0%"&& $ping gt "0.0%" ){ #$exit_code = 0;
  42. #}
  43. #else {
  44. &stop_vip();
  45. # updating global catalog, etc
  46. $exit_code = 0;
  47. #}
  48. };
  49. if ($@) {
  50. warn "Got Error: $@\n";
  51. exit $exit_code;
  52. }
  53. exit $exit_code;
  54. }
  55. elsif ( $command eq "start" ) {
  56. # all arguments are passed.
  57. # If you manage master ip address at global catalog database,
  58. # activate new_master_ip here.
  59. # You can also grant write access (create user, set read_only=0, etc) here.
  60. my $exit_code = 10;
  61. eval {
  62. print "Enabling the VIP - $vip on the new master - $new_master_host \n";
  63. &start_vip();
  64. $exit_code = 0;
  65. };
  66. if ($@) {
  67. warn $@;
  68. exit $exit_code;
  69. }
  70. exit $exit_code;
  71. }
  72. elsif ( $command eq "status" ) {
  73. print "Checking the Status of the script.. OK \n";
  74. `ssh $ssh_user\@$orig_master_ip \" $ssh_start_vip \"`;
  75. exit 0;
  76. }
  77. else {
  78. &usage();
  79. exit 1;
  80. }
  81. }
  82. # A simple system call that enable the VIP on the new master
  83. sub start_vip() {
  84. `ssh $ssh_user\@$new_master_host \" $ssh_start_vip \"`;
  85. }
  86. # A simple system call that disable the VIP on the old_master
  87. sub stop_vip() {
  88. `ssh $ssh_user\@$orig_master_host \" $ssh_stop_vip \"`;
  89. }
  90. sub usage {
  91. print "Usage: master_ip_failover --command=start|stop|stopssh|status --orig_master_host=host --orig_master_ip=ip --orig_master_port=port --new_master_host=host --new_master_ip=ip --new_master_port=port\n";
  92. }
  93. #only the ip addresses and the interface name in the script need to be adapted to your environment (a quick manual test of the script follows this listing)
  94. [root@node1 ~]# chmod +x /usr/local/bin/masterha_ip_failover
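
Before MHA ever calls it, the script can be exercised by hand. With the status branch above it prints a check message, then sshes to the given host and brings the VIP up there. A minimal sketch using the hosts from this environment (the file still carries the name it was saved with at this point):

    /usr/local/bin/masterha_ip_failover --command=status --ssh_user=root \
        --orig_master_host=master --orig_master_ip=192.168.111.3 --orig_master_port=3306
    # the VIP should now be bound on the master
    ssh root@master 'ip a | grep 192.168.111.100'
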
  • Relay log handling

By default in a MySQL replication setup a slave's relay logs are deleted automatically once the SQL thread has applied them. In the MHA scenario of this case, however, recovering a lagging slave may depend on the relay logs of another slave, so automatic purging is disabled and a scheduled job cleans the logs up periodically instead.

The scheduled cleanup below works through hard links. In the filesystem, a file without extra hard links has a single name pointing at its inode, and deleting it releases all of its data blocks; with hard links several names point at the same inode, and a delete only removes one pointer to the inode. The same trick is often used when removing very large files or big database tables.
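
The behaviour is easy to reproduce with a throwaway file: both names point at the same inode, and removing one name does not free the data blocks as long as another name remains.

    cd /tmp
    echo demo > original.log
    ln original.log hardlink.log        # second directory entry, same inode
    ls -li original.log hardlink.log    # identical inode numbers, link count 2
    rm original.log                     # removes only one directory entry
    cat hardlink.log                    # the data is still reachable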

Disabling automatic relay log purging also has drawbacks; see the separate article on that topic.

For details on the purge_relay_logs tool used in the script below, see its documentation.

  1. mysql> set global relay_log_purge=0;
  2. #same on both slave nodes
  3. Create the script and add a cron job (same on both slaves)
  4. [root@node1 ~]# vim /opt/purge_relay_log.sh
  5. #!/bin/bash
  6. user=root
  7. passwd=123456
  8. #the monitoring account and password created earlier
  9. port=3306
  10. #port
  11. log_dir='/usr/local/mysql/data'
  12. #directory holding the relay logs
  13. work_dir='/tmp'
  14. #where the hard links to the relay logs are created; the default is /var/tmp, and since hard links cannot span filesystems it is best to set this explicitly; once the script finishes successfully the hard-linked relay log files are removed
  15. purge='/usr/local/bin/purge_relay_logs'
  16. if [ ! -d $log_dir ]
  17. then
  18. mkdir $log_dir -p
  19. fi
  20. $purge --user=$user --password=$passwd --disable_relay_log_purge --port=$port --workdir=$work_dir >> $log_dir/purge_relay_logs.log 2>&1
  21. [root@node1 ~]# chmod +x /opt/purge_relay_log.sh
  22. [root@node1 ~]# crontab -e
  23. 0 4 * * * /bin/bash /opt/purge_relay_log.sh
  24. Run it once by hand
  25. [root@node1 data]# purge_relay_logs --user=root --password=123456 --disable_relay_log_purge --port=3306 --workdir=/tmp
  26. 2019-04-27 23:50:55: purge_relay_logs script started.
  27. Found relay_log.info: /usr/local/mysql/data/relay-log.info
  28. Removing hard linked relay log files relay-log-bin* under /tmp.. done.
  29. Current relay log file: /usr/local/mysql/data/relay-log-bin.000007
  30. Archiving unused relay log files (up to /usr/local/mysql/data/relay-log-bin.000006) ...
  31. Creating hard link for /usr/local/mysql/data/relay-log-bin.000006 under /tmp/relay-log-bin.000006 .. ok.
  32. Creating hard links for unused relay log files completed.
  33. Executing SET GLOBAL relay_log_purge=1; FLUSH LOGS; sleeping a few seconds so that SQL thread can delete older relay log files (if it keeps up); SET GLOBAL relay_log_purge=0; .. ok.
  35. Removing hard linked relay log files relay-log-bin* under /tmp.. done.
  36. 2019-04-27 23:50:58: All relay log purging operations succeeded.
  • Test MHA's SSH connectivity from the manager; the output below is what a healthy run looks like
  1. [root@node1 data]# masterha_check_ssh --conf=/etc/masterha/app1.cnf
  2. Sat Apr 27 23:55:13 2019 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
  3. Sat Apr 27 23:55:13 2019 - [info] Reading application default configurations from /etc/masterha/app1.cnf..
  4. Sat Apr 27 23:55:13 2019 - [info] Reading server configurations from /etc/masterha/app1.cnf..
  5. Sat Apr 27 23:55:13 2019 - [info] Starting SSH connection tests..
  6. Sat Apr 27 23:55:15 2019 - [debug]
  7. Sat Apr 27 23:55:14 2019 - [debug] Connecting via SSH from root@node1(192.168.111.4:22) to root@master(192.168.111.3:22)..
  8. Sat Apr 27 23:55:15 2019 - [debug] ok.
  9. Sat Apr 27 23:55:15 2019 - [debug] Connecting via SSH from root@node1(192.168.111.4:22) to root@node2(192.168.111.5:22)..
  10. Sat Apr 27 23:55:15 2019 - [debug] ok.
  11. Sat Apr 27 23:55:15 2019 - [debug]
  12. Sat Apr 27 23:55:13 2019 - [debug] Connecting via SSH from root@node2(192.168.111.5:22) to root@master(192.168.111.3:22)..
  13. Sat Apr 27 23:55:14 2019 - [debug] ok.
  14. Sat Apr 27 23:55:14 2019 - [debug] Connecting via SSH from root@node2(192.168.111.5:22) to root@node1(192.168.111.4:22)..
  15. Sat Apr 27 23:55:14 2019 - [debug] ok.
  16. Sat Apr 27 23:55:15 2019 - [debug]
  17. Sat Apr 27 23:55:13 2019 - [debug] Connecting via SSH from root@master(192.168.111.3:22) to root@node2(192.168.111.5:22)..
  18. Sat Apr 27 23:55:13 2019 - [debug] ok.
  19. Sat Apr 27 23:55:13 2019 - [debug] Connecting via SSH from root@master(192.168.111.3:22) to root@node1(192.168.111.4:22)..
  20. Sat Apr 27 23:55:14 2019 - [debug] ok.
  21. Sat Apr 27 23:55:15 2019 - [info] All SSH connection tests passed successfully.
  • Check the health of the whole cluster
  1. [root@node1 data]# masterha_check_repl --conf=/etc/masterha/app1.cnf
  2. Sat Apr 27 23:57:21 2019 - [error][/usr/local/share/perl5/MHA/Server.pm, ln383] node2(192.168.111.5:3306): User myslave does not exist or does not have REPLICATION SLAVE privilege! Other slaves can not start replication from this host.
  3. Sat Apr 27 23:57:21 2019 - [error][/usr/local/share/perl5/MHA/MasterMonitor.pm, ln401] Error happend on checking configurations. at /usr/local/share/perl5/MHA/ServerManager.pm line 1354.Sat Apr 27 23:57:21 2019 - [error][/usr/local/share/perl5/MHA/MasterMonitor.pm, ln500] Error happened on monitoring servers.
  4. #the healthy lines are skipped; look at the errors
  5. #the first message points at the replication account, so check on 192.168.111.5
  6. mysql> show grants for myslave@'192.168.111.%';
  7. +-------------------------------------------------------------+
  8. | Grants for myslave@192.168.111.% |
  9. +-------------------------------------------------------------+
  10. | GRANT REPLICATION SLAVE ON *.* TO 'myslave'@'192.168.111.%' |
  11. +-------------------------------------------------------------+
  12. #the master has the grant but the slaves do not, because the grant was issued on the master before binary logging was enabled; replication ("show master status") only started after that point
  13. mysql> grant replication slave on *.* to myslave@'192.168.111.%' identified by '123456';
  14. Query OK, 0 rows affected, 1 warning (0.00 sec)
  15. mysql> flush privileges;
  16. Query OK, 0 rows affected (0.01 sec)
  17. #grant it again on the master; replication is running now, so the other two slaves receive it and show it as well
  18. mysql> show grants for myslave@'192.168.111.%';
  19. +-------------------------------------------------------------+
  20. | Grants for myslave@192.168.111.% |
  21. +-------------------------------------------------------------+
  22. | GRANT REPLICATION SLAVE ON *.* TO 'myslave'@'192.168.111.%' |
  23. +-------------------------------------------------------------+
  24. Run the check again
  25. [root@node1 data]# masterha_check_repl --conf=/etc/masterha/app1.cnf
  26. Still an error; keep going
  27. [error][/usr/local/share/perl5/MHA/MasterMonitor.pm, ln401] Error happend on checking configurations. Can't exec "/usr/local/bin/master_ip_failover": No such file or directory at /usr/local/share/perl5/MHA/ManagerUtil.pm line 68.
  28. #the failover script does not seem to be where it should be
  29. [root@node1 data]# ll /usr/local/bin/masterha_ip_failover
  30. #the file name differs from the one referenced in the configuration, so rename it
  31. [root@node1 data]# mv /usr/local/bin/masterha_ip_failover /usr/local/bin/master_ip_failover
  32. Try again
  33. [root@node1 data]# masterha_check_repl --conf=/etc/masterha/app1.cnf
  34. MySQL Replication Health is OK
  35. The line above only means things look fine for now; carry on
  • VIP configuration and management

There are two approaches: manage the VIP with keepalived or heartbeat, or move it with commands (the failover script).

This case uses the command approach.

  1. [root@node1 data]# masterha_check_status --conf=/etc/masterha/app1.cnf
  2. app1 is stopped(2:NOT_RUNNING).
  3. #if the manager were running this would show "PING_OK"; "NOT_RUNNING" means MHA monitoring is not started. Everything so far has been manual pre-checks; the monitor itself has not been switched on yet
  4. [root@node1 data]# nohup masterha_manager --conf=/etc/masterha/app1.cnf --remove_dead_master_conf --ignore_last_failover > /var/log/masterha/app1/manager.log 2>&1 &
  5. [1] 67827
  6. #--remove_dead_master_conf: with this option, after a successful failover the MHA manager automatically removes the dead master's section from the configuration file.
  7. #--ignore_last_failover: by default, if MHA detects that the previous failover happened less than 8 hours ago it refuses to fail over again, to avoid ping-pong switching. After a failover the manager writes an app1.failover.complete file into its working directory (the /var/log/masterha/app1 configured above), and while that file exists the next switchover is blocked unless the file is deleted first; --ignore_last_failover tells MHA to ignore that marker, which is convenient here.
  8. [root@node1 data]# masterha_check_status --conf=/etc/masterha/app1.cnf
  9. app1 (pid:67827) is running(0:PING_OK), master:master
  10. #check the status again (some useful follow-up commands are sketched below)
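
A few follow-up commands that are handy once the monitor is running; the paths and configuration file are the ones defined above, and masterha_stop is one of the manager tools installed earlier:

    # watch what the manager is doing
    tail -f /var/log/masterha/app1/manager.log
    # stop monitoring cleanly, e.g. before planned maintenance
    masterha_stop --conf=/etc/masterha/app1.cnf
    # if a previous failover left its marker behind and you are not using
    # --ignore_last_failover, remove it before starting the monitor again
    rm -f /var/log/masterha/app1/app1.failover.complete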

Testing MHA

The VIP is currently visible on master. To test the switchover, stop mysql on master with /etc/init.d/mysqld stop.

node2 is the standby master; check whether the VIP has moved over to it.

  1. [root@node2 ~]# ip a | grep ens32
  2. 2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
  3. inet 192.168.111.5/24 brd 192.168.111.255 scope global noprefixroute ens32
  4. inet 192.168.111.100/24 brd 192.168.111.255 scope global secondary ens32:1
  5. #the VIP has moved

node1 is the remaining slave; check its replication status.

  1. [root@node1 ~]# mysql -uroot -p123456
  2. mysql> show slave status\G;
  3. *************************** 1. row ***************************
  4. Slave_IO_State: Waiting for master to send event
  5. Master_Host: 192.168.111.5
  6. Master_User: myslave
  7. Master_Port: 3306
  8. Connect_Retry: 60
  9. Master_Log_File: mysql-binlog.000004
  10. Read_Master_Log_Pos: 154
  11. Relay_Log_File: relay-log-bin.000002
  12. Relay_Log_Pos: 323
  13. Relay_Master_Log_File: mysql-binlog.000004
  14. Slave_IO_Running: Yes
  15. Slave_SQL_Running: Yes
  16. #the configured master has changed to 192.168.111.5
  17. [root@node1 ~]# jobs -l
  18. [2]+ 73464 Stopped  vim /usr/local/bin/master_ip_failover
  19. #after the switchover the manager process has done its job and exited; it can be started again
  20. [root@node1 ~]# vim /etc/masterha/app1.cnf
  21. [server default]
  22. manager_log=/var/log/masterha/app1/manager.log
  23. manager_workdir=/var/log/masterha/app1
  24. master_binlog_dir=/usr/local/mysql/data
  25. master_ip_failover_script=/usr/local/bin/master_ip_failover
  26. password=123456
  27. ping_interval=1
  28. remote_workdir=/usr/local/mysql/data
  29. repl_password=123456
  30. repl_user=myslave
  31. user=root
  32. [server2]
  33. candidate_master=1
  34. check_repl_delay=0
  35. hostname=node2
  36. port=3306
  37. [server3]
  38. hostname=node1
  39. port=3306
  40. #server1 has been removed from the configuration file because of the failure
  • Switching the VIP back
  1. [root@master ~]# /etc/init.d/mysqld start
  2. #once master is repaired, start mysql again
  3. [root@master ~]# mysql -uroot -p123456
  4. mysql> stop slave;
  5. mysql> change master to
  6. -> master_host='192.168.111.5',
  7. -> master_user='myslave',
  8. -> master_password='123456';
  9. #only the new master's ip is given; no binlog file name or position parameters.
  10. mysql> start slave;
  11. mysql> show slave status\G;
  12. *************************** 1. row ***************************
  13. Slave_IO_Running: Yes
  14. Slave_SQL_Running: Yes
  15. On the manager:
  16. [root@node1 ~]# vim /etc/masterha/app1.cnf
  17. #add the following back
  18. [server1]
  19. hostname=master
  20. port=3306
  21. [root@node1 ~]# masterha_check_repl --conf=/etc/masterha/app1.cnf
  22. #check the cluster state again
  23. MySQL Replication Health is OK.
  24. [root@node1 ~]# nohup masterha_manager --conf=/etc/masterha/app1.cnf --remove_dead_master_conf --ignore_last_failover > /var/log/masterha/app1/manager.log 2>&1 &
  [3] 75013
  25. [root@node1 ~]# jobs -l
  26. [3]- 75013 Running  nohup masterha_manager --conf=/etc/masterha/app1.cnf --remove_dead_master_conf --ignore_last_failover > /var/log/masterha/app1/manager.log 2>&1 &
  28. On node2: stop the service and check that the VIP switches back automatically
  29. [root@node2 ~]# /etc/init.d/mysqld stop
  30. Shutting down MySQL............ SUCCESS!
  31. [root@node2 ~]# ip a | grep ens32
  32. 2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
  33. inet 192.168.111.5/24 brd 192.168.111.255 scope global noprefixroute ens32
  34. On master: check
  35. [root@master ~]# ip a| grep ens32
  36. 2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
  37. inet 192.168.111.3/24 brd 192.168.111.255 scope global noprefixroute ens32
  38. inet 192.168.111.100/24 brd 192.168.111.255 scope global secondary ens32:1
  39. On node1: check the slave's replication status; the master ip has switched back automatically
  40. mysql> show slave status\G;
  41. *************************** 1. row ***************************
  42. Slave_IO_State: Waiting for master to send event
  43. Master_Host: 192.168.111.3
  44. Master_User: myslave
  45. Master_Port: 3306
  46. Connect_Retry: 60
  47. Master_Log_File: mysql-binlog.000004
  48. Read_Master_Log_Pos: 398
  49. Relay_Log_File: relay-log-bin.000002
  50. Relay_Log_Pos: 323
  51. Relay_Master_Log_File: mysql-binlog.000004
  52. Slave_IO_Running: Yes
  53. Slave_SQL_Running: Yes

Then, following the same steps and configuration, add node2 back into MHA.

  1. If an error like the following appears:
  2. mysql> stop slave;
  3. Query OK, 0 rows affected (0.01 sec)
  4. mysql> change master to master_host='192.168.111.3', master_user='myslave', master_password='123456';
  5. Query OK, 0 rows affected, 2 warnings (0.11 sec)
  6. mysql> start slave;
  7. Query OK, 0 rows affected (0.00 sec)
  8. mysql> show slave status\G;
  9. *************************** 1. row ***************************
  10. Slave_IO_State: Waiting for master to send event
  11. Master_Host: 192.168.111.3
  12. Master_User: myslave
  13. Master_Port: 3306
  14. Connect_Retry: 60
  15. Master_Log_File: mysql-binlog.000004
  16. Read_Master_Log_Pos: 842
  17. Relay_Log_File: relay-log-bin.000003
  18. Relay_Log_Pos: 1392
  19. Relay_Master_Log_File: mysql-binlog.000002
  20. Slave_IO_Running: Yes
  21. Slave_SQL_Running: No
  22. Replicate_Do_DB:
  23. Replicate_Ignore_DB:
  24. Replicate_Do_Table:
  25. Replicate_Ignore_Table:
  26. Replicate_Wild_Do_Table:
  27. Replicate_Wild_Ignore_Table:
  28. Last_Errno: 1007
  29. Last_Error: Error 'Can't create database 'qiao'; database exists' on query. Default database: 'qiao'. Query: 'create database qiao'
  30. Skip_Counter: 0
  31. Exec_Master_Log_Pos: 1173
  32. Relay_Log_Space: 4470
  33. Until_Condition: None
  34. Until_Log_File:
  35. Until_Log_Pos: 0
  36. Master_SSL_Allowed: No
  37. Master_SSL_CA_File:
  38. Master_SSL_CA_Path:
  39. Master_SSL_Cert:
  40. Master_SSL_Cipher:
  41. Master_SSL_Key:
  42. Seconds_Behind_Master: NULL
  43. Master_SSL_Verify_Server_Cert: No
  44. Last_IO_Errno: 0
  45. Last_IO_Error:
  46. Last_SQL_Errno: 1007
  47. Last_SQL_Error: Error 'Can't create database 'qiao'; database exists' on query. Default database: 'qiao'. Query: 'create database qiao'
  49. Solution:
  50. mysql> stop slave;
  51. Query OK, 0 rows affected (0.00 sec)
  52. mysql> set global sql_slave_skip_counter=1;
  53. Query OK, 0 rows affected (0.00 sec)
  54. mysql> start slave;
  55. Query OK, 0 rows affected (0.00 sec)
  56. mysql> show slave status\G;
  57. *************************** 1. row ***************************
  58. Slave_IO_State: Waiting for master to send event
  59. Master_Host: 192.168.111.3
  60. Master_User: myslave
  61. Master_Port: 3306
  62. Connect_Retry: 60
  63. Master_Log_File: mysql-binlog.000004
  64. Read_Master_Log_Pos: 842
  65. Relay_Log_File: relay-log-bin.000006
  66. Relay_Log_Pos: 323
  67. Relay_Master_Log_File: mysql-binlog.000004
  68. Slave_IO_Running: Yes
  69. Slave_SQL_Running: Yes
  70. Replicate_Do_DB:
  71. Replicate_Ignore_DB:
  72. Replicate_Do_Table:
  73. Replicate_Ignore_Table:
  74. Replicate_Wild_Do_Table:
  75. Replicate_Wild_Ignore_Table:
  76. Last_Errno: 0
  77. Last_Error:
  78. Skip_Counter: 0
  79. Exec_Master_Log_Pos: 842
  80. Relay_Log_Space: 1191
  81. Until_Condition: None
  82. Until_Log_File:
  83. Until_Log_Pos: 0
  84. Master_SSL_Allowed: No
  85. Master_SSL_CA_File:
  86. Master_SSL_CA_Path:
  87. Master_SSL_Cert:
  88. Master_SSL_Cipher:
  89. Master_SSL_Key:
  90. Seconds_Behind_Master: 0
  91. Master_SSL_Verify_Server_Cert: No
  92. Last_IO_Errno: 0
  93. Last_IO_Error:
  94. Last_SQL_Errno: 0
  95. Last_SQL_Error:
  96. Replicate_Ignore_Server_Ids:
  97. Master_Server_Id: 1
  98. Master_UUID: 9da60612-7a17-11e9-b288-000c2935c4a6
  99. Master_Info_File: /usr/local/mysql/data/master.info
  100. SQL_Delay: 0
  101. SQL_Remaining_Delay: NULL
  102. Slave_SQL_Running_State: Slave has read all relay log; waiting for more updates
  103. Master_Retry_Count: 86400
  104. Master_Bind:
  105. Last_IO_Error_Timestamp:
  106. Last_SQL_Error_Timestamp:
  107. Master_SSL_Crl:
  108. Master_SSL_Crlpath:
  109. Retrieved_Gtid_Set:
  110. Executed_Gtid_Set:
  111. Auto_Position: 0
  112. Replicate_Rewrite_DB:
  113. Channel_Name:
  114. Master_TLS_Version:
  115. 1 row in set (0.00 sec)

Deploying lvs + keepalived (lvs1, lvs2)

  1. Install the packages on both machines first
  2. [root@lvs2 ~]# yum -y install ipvsadm kernel-devel openssl-devel keepalived
  3. [root@lvs2 ~]# vim /etc/keepalived/keepalived.conf
  4. #the keepalived/lvs configuration file is not explained line by line here; other references cover it
  5. ! Configuration File for keepalived
  6. global_defs {
  7. notification_email {
  8. acassen@firewall.loc
  9. failover@firewall.loc
  10. sysadmin@firewall.loc
  11. }
  12. notification_email_from Alexandre.Cassen@firewall.loc
  13. smtp_server 192.168.200.1
  14. smtp_connect_timeout 30
  15. router_id LVS_DEVEL
  16. vrrp_skip_check_adv_addr
  17. vrrp_garp_interval 0
  18. vrrp_gna_interval 0
  19. }
  20. vrrp_instance VI_1 {
  21. state MASTER
  22. interface ens32
  23. virtual_router_id 51
  24. priority 100
  25. advert_int 1
  26. authentication {
  27. auth_type PASS
  28. auth_pass 1111
  29. }
  30. virtual_ipaddress {
  31. 192.168.111.200
  32. }
  33. }
  34. virtual_server 192.168.111.200 3306 {
  35. delay_loop 6
  36. lb_algo rr
  37. lb_kind DR
  38. protocol TCP
  39. real_server 192.168.111.4 3306 {
  40. weight 1
  41. TCP_CHECK {
  42. connect_timeout 10
  43. nb_get_retry 3
  44. delay_before_retry 3
  45. connect_port 3306
  46. }
  47. }
  48. real_server 192.168.111.5 3306 {
  49. weight 1
  50. TCP_CHECK {
  51. connect_timeout 10
  52. nb_get_retry 3
  53. delay_before_retry 3
  54. connect_port 3306
  55. }
  56. }
  57. }
  59. [root@lvs2 ~]# scp /etc/keepalived/keepalived.conf root@lvs1:/etc/keepalived/
  60. #copy it to the other machine
  61. On lvs1, change the following
  62. [root@lvs1 ~]# vim /etc/keepalived/keepalived.conf
  63. 12 router_id LVS_DEVEL1
  64. 20 state BACKUP
  65. priority 90
  66. [root@lvs2 ~]# systemctl start keepalived.service
  67. #start the service on both machines
  68. [root@lvs2 ~]# ip a | grep ens32
  69. 2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
  70. inet 192.168.111.7/24 brd 192.168.111.255 scope global noprefixroute ens32
  71. inet 192.168.111.200/32 scope global ens32
  72. [root@lvs2 ~]# ipvsadm -ln
  73. #check the LVS state (a quick VIP failover test follows this listing)
  74. IP Virtual Server version 1.2.1 (size=4096)
  75. Prot LocalAddress:Port Scheduler Flags
  76. -> RemoteAddress:Port Forward Weight ActiveConn InActConn
  77. TCP 192.168.111.200:3306 rr
  78. -> 192.168.111.4:3306 Route 1 0 0
  79. -> 192.168.111.5:3306 Route 1 0 0
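
A quick failover test of the director VIP while we are here (lvs2 currently holds 192.168.111.200 as shown above):

    # on lvs2: take the MASTER instance down
    systemctl stop keepalived.service
    # on lvs1: the VIP should appear here within a second or two
    ip a | grep 192.168.111.200
    ipvsadm -ln
    # on lvs2: bring it back; with priority 100 it preempts and reclaims the VIP
    systemctl start keepalived.service
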
  • In DR mode every real server also needs the VIP bound locally
  1. node1:
  2. [root@node1 ~]# vim /opt/realserver.sh
  3. #!/bin/bash
  4. SNS_VIP=192.168.111.200
  5. ifconfig lo:0 $SNS_VIP netmask 255.255.255.255 broadcast $SNS_VIP
  6. /sbin/route add -host $SNS_VIP dev lo:0
  7. echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore
  8. echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce
  9. echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore
  10. echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce
  11. sysctl -p >/dev/null 2>&1
  12. node2:
  13. [root@node2 ~]# vim /opt/realserver.sh
  14. #!/bin/bash
  15. SNS_VIP=192.168.111.200
  16. ifconfig lo:0 $SNS_VIP netmask 255.255.255.255 broadcast $SNS_VIP
  17. /sbin/route add -host $SNS_VIP dev lo:0
  18. echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore
  19. echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce
  20. echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore
  21. echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce
  22. sysctl -p >/dev/null 2>&1
  23. [root@node1 ~]# sh /opt/realserver.sh
  24. #connect to the VIP from the manager/node1 machine to test (a round-robin check follows this listing)
  25. [root@node1 ~]# mysql -uroot -p123456 -h192.168.111.200
  26. mysql: [Warning] Using a password on the command line interface can be insecure.
  27. Welcome to the MySQL monitor. Commands end with ; or \g.
  28. Your MySQL connection id is 749
  29. Server version: 5.7.24 MySQL Community Server (GPL)
  30. Copyright (c) 2000, 2018, Oracle and/or its affiliates. All rights reserved.
  31. Oracle is a registered trademark of Oracle Corporation and/or its
  32. affiliates. Other names may be trademarks of their respective
  33. owners.
  34. Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
  35. mysql>
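
To see the round-robin scheduling at work, repeat a query that reveals which backend answered; server-id 2 is node1 and 3 is node2 according to the my.cnf files above:

    # run from the manager or any client host; the answers should alternate
    for i in 1 2 3 4; do
        mysql -uroot -p123456 -h192.168.111.200 -e "SELECT @@server_id, @@hostname;"
    done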

IV. Summary

  1. Personally I find that an MHA cluster still relies on a lot of manual configuration, which makes mistakes easy
  2. For a large architecture, weigh it carefully before committing
  3. There is plenty of room to extend this setup, for example read/write splitting with amoeba, which is not covered in this case; apply it flexibly
