In the previous article, "Implementing Failover and Read/Write Splitting for MySQL Group Replication with ProxySQL (Part 1)", we finished building the MGR + ProxySQL cluster and tested ProxySQL-based failover for the application layer. Next, we continue with read/write splitting.

Master Wang Guowei described three realms of life:

  1. The first realm: "Last night the west wind withered the green trees; alone I climbed the high tower and gazed to the end of the road at the horizon."
  2. The second realm: "My clothes hang ever looser, yet I have no regrets; for her I am willing to waste away."
  3. The third realm: "I searched for her in the crowd a thousand times; then, turning back suddenly, I found her standing where the lantern light was dim."

As a stubborn, mediocre programmer I still have not figured those out, but I do have some understanding of the three realms of database read/write splitting, so let's look at the three realms of MySQL read/write splitting:

  1. The first realm: manual read/write splitting. Reads and writes are separated by IP and port; the application layer manually identifies read and write statements and sends them to different hosts.
  2. The second realm: regex-based read/write splitting. A routing middleware inspects the SQL, matches it against regular expressions, and dispatches it to different hosts according to the match.
  3. The third realm: identify the TOP SQL and dispatch the high-load SQL to different hosts.

(I) The first realm: manual read/write splitting

Reads and writes are separated by IP and port: the application layer manually identifies read and write statements and then uses different database connection settings to send them to different hosts. In ProxySQL we implement this split by port. The steps are as follows:

STEP 1: Configure ProxySQL to listen on two ports, then restart ProxySQL

  mysql -uadmin -padmin -h127.0.0.1 -P6032
  mysql> SET mysql-interfaces='0.0.0.0:6401;0.0.0.0:6402';
  -- save it on disk and restart proxysql
  mysql> SAVE MYSQL VARIABLES TO DISK;
  mysql> PROXYSQL RESTART;

STEP 2: Configure routing rules that dispatch requests to different hostgroups based on the incoming port

  mysql> INSERT INTO mysql_query_rules (rule_id,active,proxy_port,destination_hostgroup,apply) VALUES (1,1,6401,1,1), (3,1,6402,3,1);
  mysql> LOAD MYSQL QUERY RULES TO RUNTIME;
  mysql> SAVE MYSQL QUERY RULES TO DISK;

With these rules, requests arriving on port 6401 are forwarded to hostgroup 1 (the writer group) and requests arriving on port 6402 are forwarded to hostgroup 3 (the reader group), which gives us read/write splitting. Whether an application connects through port 6401 or 6402 depends on the developers manually identifying whether the SQL is a read or a write, as sketched below.
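
As a minimal client-side sketch (assuming the ProxySQL host 192.168.10.10, the usera account, and the testdb.test01 table used later in this article; the inserted row is made up for illustration), write sessions connect to the 6401 listener and read-only sessions to the 6402 listener:

  -- write traffic: connect to the 6401 listener (routed to hostgroup 1)
  mysql -uusera -p123456 -h192.168.10.10 -P6401
  mysql> insert into testdb.test01 values(4,'d');

  -- read traffic: connect to the 6402 listener (routed to hostgroup 3)
  mysql -uusera -p123456 -h192.168.10.10 -P6402
  mysql> select * from testdb.test01;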

(II) The second realm: regex-based read/write splitting

A routing middleware inspects the SQL, matches it against regular expressions, and dispatches it to different hosts according to the match. The procedure is as follows.

STEP 1: To avoid interfering with the test, delete the rules defined earlier

  DELETE FROM mysql_query_rules;

STEP 2: Define the new read/write splitting rules

  INSERT INTO mysql_query_rules (rule_id,active,match_digest,destination_hostgroup,apply) VALUES(1,1,'^SELECT.*FOR UPDATE$',1,1);
  INSERT INTO mysql_query_rules (rule_id,active,match_digest,destination_hostgroup,apply) VALUES(2,1,'^SELECT',3,1);

  LOAD MYSQL QUERY RULES TO RUNTIME;
  SAVE MYSQL QUERY RULES TO DISK;

The ProxySQL routing rules are now:

  • SELECT ... FOR UPDATE statements are routed to hostgroup 1 (the writer group);
  • all other SELECT statements are routed to hostgroup 3 (the reader group);
  • everything else is routed to the default hostgroup, i.e. hostgroup 1 (a quick way to double-check the loaded rules is shown right after this list).
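
To confirm what is actually configured and loaded (a simple check, not part of the original procedure; the column list is just a readable subset), the rules can be inspected from the ProxySQL admin interface:

  -- rules as stored in the admin configuration
  mysql> SELECT rule_id, active, match_digest, destination_hostgroup, apply FROM mysql_query_rules;
  -- rules currently in effect
  mysql> SELECT rule_id, active, match_digest, destination_hostgroup, apply FROM runtime_mysql_query_rules;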

Let's now test the regex-based approach. The whole process is as follows:

(1) Adjust the reader/writer hostgroup configuration before testing

  -- According to the hostgroup rules there is at most 1 writer node, and the remaining write-capable nodes
  -- are placed in the backup writer group. Currently 192.168.10.13 is the writer, the other 2 nodes are
  -- backup writers, and there are no reader nodes.
  mysql> select * from mysql_group_replication_hostgroups;
  +------------------+-------------------------+------------------+-------------------+--------+-------------+-----------------------+-------------------------+---------+
  | writer_hostgroup | backup_writer_hostgroup | reader_hostgroup | offline_hostgroup | active | max_writers | writer_is_also_reader | max_transactions_behind | comment |
  +------------------+-------------------------+------------------+-------------------+--------+-------------+-----------------------+-------------------------+---------+
  | 1 | 2 | 3 | 4 | 1 | 1 | 0 | 100 | NULL |
  +------------------+-------------------------+------------------+-------------------+--------+-------------+-----------------------+-------------------------+---------+
  1 row in set (0.00 sec)

  mysql> select * from runtime_mysql_servers;
  +--------------+---------------+------+-----------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
  | hostgroup_id | hostname | port | gtid_port | status | weight | compression | max_connections | max_replication_lag | use_ssl | max_latency_ms | comment |
  +--------------+---------------+------+-----------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
  | 1 | 192.168.10.13 | 3306 | 0 | ONLINE | 1 | 0 | 1000 | 0 | 0 | 0 | |
  | 2 | 192.168.10.12 | 3306 | 0 | ONLINE | 1 | 0 | 1000 | 0 | 0 | 0 | |
  | 2 | 192.168.10.11 | 3306 | 0 | ONLINE | 1 | 0 | 1000 | 0 | 0 | 0 | |
  +--------------+---------------+------+-----------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
  3 rows in set (0.01 sec)

  -- To split reads from writes we need reader nodes, so we change the writer_is_also_reader parameter so that
  -- the nodes in backup_writer_hostgroup serve both as backup writers and as readers
  mysql> update mysql_group_replication_hostgroups set writer_is_also_reader = 2;
  Query OK, 1 row affected (0.00 sec)

  mysql> select * from mysql_group_replication_hostgroups;
  +------------------+-------------------------+------------------+-------------------+--------+-------------+-----------------------+-------------------------+---------+
  | writer_hostgroup | backup_writer_hostgroup | reader_hostgroup | offline_hostgroup | active | max_writers | writer_is_also_reader | max_transactions_behind | comment |
  +------------------+-------------------------+------------------+-------------------+--------+-------------+-----------------------+-------------------------+---------+
  | 1 | 2 | 3 | 4 | 1 | 1 | 2 | 100 | NULL |
  +------------------+-------------------------+------------------+-------------------+--------+-------------+-----------------------+-------------------------+---------+
  1 row in set (0.00 sec)

  mysql> select * from runtime_mysql_group_replication_hostgroups;
  +------------------+-------------------------+------------------+-------------------+--------+-------------+-----------------------+-------------------------+---------+
  | writer_hostgroup | backup_writer_hostgroup | reader_hostgroup | offline_hostgroup | active | max_writers | writer_is_also_reader | max_transactions_behind | comment |
  +------------------+-------------------------+------------------+-------------------+--------+-------------+-----------------------+-------------------------+---------+
  | 1 | 2 | 3 | 4 | 1 | 1 | 0 | 100 | NULL |
  +------------------+-------------------------+------------------+-------------------+--------+-------------+-----------------------+-------------------------+---------+
  1 row in set (0.00 sec)

  -- the mysql servers configuration still has to be activated and persisted
  mysql> load mysql servers to runtime;
  Query OK, 0 rows affected (0.01 sec)

  mysql> save mysql servers to disk;
  Query OK, 0 rows affected (0.03 sec)

  mysql> select * from runtime_mysql_group_replication_hostgroups;
  +------------------+-------------------------+------------------+-------------------+--------+-------------+-----------------------+-------------------------+---------+
  | writer_hostgroup | backup_writer_hostgroup | reader_hostgroup | offline_hostgroup | active | max_writers | writer_is_also_reader | max_transactions_behind | comment |
  +------------------+-------------------------+------------------+-------------------+--------+-------------+-----------------------+-------------------------+---------+
  | 1 | 2 | 3 | 4 | 1 | 1 | 2 | 100 | NULL |
  +------------------+-------------------------+------------------+-------------------+--------+-------------+-----------------------+-------------------------+---------+
  1 row in set (0.01 sec)

  -- the final hostgroup layout of the mysql servers is as follows
  mysql> select * from runtime_mysql_servers;
  +--------------+---------------+------+-----------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
  | hostgroup_id | hostname | port | gtid_port | status | weight | compression | max_connections | max_replication_lag | use_ssl | max_latency_ms | comment |
  +--------------+---------------+------+-----------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
  | 1 | 192.168.10.13 | 3306 | 0 | ONLINE | 1 | 0 | 1000 | 0 | 0 | 0 | |
  | 3 | 192.168.10.12 | 3306 | 0 | ONLINE | 1 | 0 | 1000 | 0 | 0 | 0 | |
  | 3 | 192.168.10.11 | 3306 | 0 | ONLINE | 1 | 0 | 1000 | 0 | 0 | 0 | |
  | 2 | 192.168.10.11 | 3306 | 0 | ONLINE | 1 | 0 | 1000 | 0 | 0 | 0 | |
  | 2 | 192.168.10.12 | 3306 | 0 | ONLINE | 1 | 0 | 1000 | 0 | 0 | 0 | |
  +--------------+---------------+------+-----------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
  5 rows in set (0.00 sec)

(2) Load the rules

  -- To avoid interfering with the test, delete the previous rules first
  DELETE FROM mysql_query_rules;

  -- Load the rules
  INSERT INTO mysql_query_rules (rule_id,active,match_digest,destination_hostgroup,apply) VALUES(1,1,'^SELECT.*FOR UPDATE$',1,1);
  INSERT INTO mysql_query_rules (rule_id,active,match_digest,destination_hostgroup,apply) VALUES(2,1,'^SELECT',3,1);

  -- Activate and persist the rules
  LOAD MYSQL QUERY RULES TO RUNTIME;
  SAVE MYSQL QUERY RULES TO DISK;

(3) Verify that the rules take effect

Test SQL statements:

  mysql -uusera -p123456 -h192.168.10.10 -P6033

  -- write test
  insert into testdb.test01 values(3,'c');

  -- read test
  SELECT * from testdb.test01;

  -- regex case-sensitivity test
  select * from testdb.test01;

  -- select ... for update test
  SELECT * from testdb.test01 FOR UPDATE;
  select * from testdb.test01 FOR UPDATE;

  exit;

To see which host ProxySQL dispatched each statement to, query the statistics views stats_mysql_query_digest and stats_mysql_query_digest_reset. The two tables have the same structure and content, but querying stats_mysql_query_digest_reset automatically resets the internal statistics to zero; that is, after a query against stats_mysql_query_digest_reset, the data in both tables is cleared. Here we query stats_mysql_query_digest_reset directly for the tests above:

  mysql> select hostgroup,schemaname,username,digest_text,count_star from stats_mysql_query_digest_reset;
  +-----------+--------------------+----------+----------------------------------------+------------+
  | hostgroup | schemaname | username | digest_text | count_star |
  +-----------+--------------------+----------+----------------------------------------+------------+
  | 1 | information_schema | usera | SELECT * from testdb.test01 FOR UPDATE | 1 |
  | 3 | information_schema | usera | select * from testdb.test01 | 1 |
  | 3 | information_schema | usera | SELECT * from testdb.test01 | 1 |
  | 1 | information_schema | usera | select * from testdb.test01 FOR UPDATE | 1 |
  | 1 | information_schema | usera | insert into testdb.test01 values(?,?) | 1 |
  | 1 | information_schema | usera | select @@version_comment limit ? | 1 |
  +-----------+--------------------+----------+----------------------------------------+------------+
  6 rows in set (0.00 sec)

As you can see, the regular-expression rules are matched case-insensitively, and each SQL statement has been dispatched to the corresponding host according to the matching rules; a note on changing this behaviour follows.
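
The case-insensitive matching comes from the rule's re_modifiers column, which defaults to 'CASELESS' (see the mysql_query_rules definition in Appendix 1). As a hedged sketch only: clearing that modifier on a rule should make its regex case-sensitive.

  -- assumption: an empty re_modifiers disables CASELESS matching for rule 2
  UPDATE mysql_query_rules SET re_modifiers='' WHERE rule_id=2;
  LOAD MYSQL QUERY RULES TO RUNTIME;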

Personally I think routing SQL to different hosts with regular expressions is already quite smart, but ProxySQL officially advises against doing it this way: we cannot accurately know the cost of each type of SQL statement, so this approach can lead to an uneven traffic distribution.

Next, let's look at the method ProxySQL recommends: read/write splitting based on regular expressions and digests.

(III) The third realm: read/write splitting with regular expressions and digests

The configuration process recommended by ProxySQL for setting up read/write splitting effectively is:

(1) Configure ProxySQL to send all traffic, writes and reads alike, to a single MySQL master node;

(2) Check stats_mysql_query_digest to find the most expensive SELECT statements;

(3) Decide which of those expensive statements should be moved to the reader nodes;

(4) Configure mysql_query_rules (create rules) so that only those expensive SELECT statements are sent to the readers.

In short, the idea is very simple: send only the SQL you choose to the reader nodes, rather than sending every SELECT statement.

Let's walk through the whole process:

STEP 1: Remove the rules so that all SQL statements run on the default hostgroup

  mysql> delete from mysql_query_rules;
  Query OK, 2 rows affected (0.00 sec)

  mysql> LOAD MYSQL QUERY RULES TO RUNTIME;
  Query OK, 0 rows affected (0.00 sec)

  mysql> SAVE MYSQL QUERY RULES TO DISK;
  Query OK, 0 rows affected (0.01 sec)

STEP 2: Find the most expensive SQL

Assuming that all reads and writes have been running on the same machine long enough that the read/write mix is representative, we can use stats_mysql_query_digest to find the most expensive SQL from several angles.

(1) The 5 SELECT statements with the highest total execution time

  mysql> SELECT digest,SUBSTR(digest_text,0,25),count_star,sum_time FROM stats_mysql_query_digest WHERE digest_text LIKE 'SELECT%' ORDER BY sum_time DESC LIMIT 5;
  +--------------------+--------------------------+------------+----------+
  | digest | SUBSTR(digest_text,0,25) | count_star | sum_time |
  +--------------------+--------------------------+------------+----------+
  | 0xBF001A0C13781C1D | SELECT c FROM sbtest1 WH | 9594 | 9837782 |
  | 0xC4771449056AB3AC | SELECT c FROM sbtest14 W | 9984 | 9756595 |
  | 0xD84E4E04982951C1 | SELECT c FROM sbtest9 WH | 9504 | 9596185 |
  | 0x9B090963F41AD781 | SELECT c FROM sbtest10 W | 9664 | 9530433 |
  | 0x9AF59B998A3688ED | SELECT c FROM sbtest2 WH | 9744 | 9513180 |
  +--------------------+--------------------------+------------+----------+
  5 rows in set (0.00 sec)

(2) The 5 SELECT statements executed most frequently

  mysql> SELECT digest,SUBSTR(digest_text,0,25),count_star,sum_time FROM stats_mysql_query_digest WHERE digest_text LIKE 'SELECT%' ORDER BY count_star DESC LIMIT 5;
  +--------------------+--------------------------+------------+----------+
  | digest | SUBSTR(digest_text,0,25) | count_star | sum_time |
  +--------------------+--------------------------+------------+----------+
  | 0xC4771449056AB3AC | SELECT c FROM sbtest14 W | 9984 | 9756595 |
  | 0x9AF59B998A3688ED | SELECT c FROM sbtest2 WH | 9744 | 9513180 |
  | 0x9B090963F41AD781 | SELECT c FROM sbtest10 W | 9664 | 9530433 |
  | 0x03744DC190BC72C7 | SELECT c FROM sbtest5 WH | 9604 | 9343514 |
  | 0x1E7B7AC5611F30C2 | SELECT c FROM sbtest6 WH | 9594 | 9245838 |
  +--------------------+--------------------------+------------+----------+

(3) The 5 SELECT statements with the highest average execution time

  mysql> SELECT digest,SUBSTR(digest_text,0,25),count_star,sum_time, sum_time/count_star as avg_time FROM stats_mysql_query_digest WHERE digest_text LIKE 'SELECT%' ORDER BY avg_time DESC LIMIT 5;
  +--------------------+--------------------------+------------+----------+----------+
  | digest | SUBSTR(digest_text,0,25) | count_star | sum_time | avg_time |
  +--------------------+--------------------------+------------+----------+----------+
  | 0x0DCAF47B4A363A7A | SELECT * from testdb.tes | 1 | 11400 | 11400 |
  | 0x2050E81DB9C7038E | select * from testdb.tes | 1 | 10817 | 10817 |
  | 0xF340A73F6EDA5B20 | SELECT c FROM sbtest11 W | 964 | 1726994 | 1791 |
  | 0xC867A28C90150A81 | SELECT DISTINCT c FROM s | 929 | 1282699 | 1380 |
  | 0x283AA9863F85EFC8 | SELECT DISTINCT c FROM s | 963 | 1318362 | 1369 |
  +--------------------+--------------------------+------------+----------+----------+

(4) The 5 SELECT statements with the highest average execution time, restricted to an average above 1 second, together with each statement's share of the total execution time of all SQL

  SELECT digest,SUBSTR(digest_text,0,25),count_star,sum_time,sum_time/count_star as avg_time,round(sum_time/1000000*100/(SELECT sum(sum_time/1000000) FROM stats_mysql_query_digest ),3) as pct
  FROM stats_mysql_query_digest
  WHERE digest_text LIKE 'SELECT%'
  AND sum_time/count_star > 1000000
  ORDER BY avg_time DESC LIMIT 5;

Note: this statement was run against data generated by sysbench. One SQL statement had an extremely large sum_time, which caused sum(sum_time) to return NULL, so as a workaround sum_time is divided by 1,000,000 (converted to seconds) before the sums and the percentage are computed.

STEP 3: Route with a combination of digests and regular expressions

First let's look at the traffic distribution before any routing rules are in place. As you can see, all traffic goes to hostgroup 1:

  mysql> select hostgroup,schemaname,username,digest_text,count_star from stats_mysql_query_digest_reset;
  +-----------+--------------------+----------+---------------------------------------------------------------------+------------+
  | hostgroup | schemaname | username | digest_text | count_star |
  +-----------+--------------------+----------+---------------------------------------------------------------------+------------+
  | 1 | information_schema | usera | SET PROFILING = ? | 1 |
  | 1 | information_schema | usera | SHOW DATABASES | 3 |
  | 1 | information_schema | usera | SHOW VARIABLES LIKE ?; | 2 |
  | 1 | information_schema | usera | SET NAMES utf8mb4 | 3 |
  | 1 | tssysbench | usera | INSERT INTO sbtest15 (id, k, c, pad) VALUES (?, ?, ?, ?) | 1285 |
  | 1 | tssysbench | usera | INSERT INTO sbtest14 (id, k, c, pad) VALUES (?, ?, ?, ?) | 1309 |
  | 1 | tssysbench | usera | INSERT INTO sbtest13 (id, k, c, pad) VALUES (?, ?, ?, ?) | 1303 |
  | 1 | tssysbench | usera | INSERT INTO sbtest12 (id, k, c, pad) VALUES (?, ?, ?, ?) | 1240 |
  | 1 | tssysbench | usera | UPDATE sbtest3 SET k=k+? WHERE id=? | 1280 |
  | 1 | tssysbench | usera | UPDATE sbtest2 SET k=k+? WHERE id=? | 1280 |
  | 1 | tssysbench | usera | UPDATE sbtest1 SET k=k+? WHERE id=? | 1219 |
  | 1 | tssysbench | usera | SELECT DISTINCT c FROM sbtest15 WHERE id BETWEEN ? AND ? ORDER BY c | 1207 |
  | 1 | tssysbench | usera | SELECT DISTINCT c FROM sbtest14 WHERE id BETWEEN ? AND ? ORDER BY c | 1262 |
  | 1 | tssysbench | usera | SELECT DISTINCT c FROM sbtest11 WHERE id BETWEEN ? AND ? ORDER BY c | 1227 |

Insert the routing rules:

  -- digest-based rules that match specific SQL statements
  INSERT INTO mysql_query_rules (rule_id,active,digest,destination_hostgroup,apply) VALUES(1,1,'0x0DCAF47B4A363A7A',3,1);
  INSERT INTO mysql_query_rules (rule_id,active,digest,destination_hostgroup,apply) VALUES(2,1,'0x63F9BD89D906209B',3,1);
  INSERT INTO mysql_query_rules (rule_id,active,digest,destination_hostgroup,apply) VALUES(3,1,'0x10D8D9CC551E199B',3,1);
  INSERT INTO mysql_query_rules (rule_id,active,digest,destination_hostgroup,apply) VALUES(4,1,'0xC867A28C90150A81',3,1);
  INSERT INTO mysql_query_rules (rule_id,active,digest,destination_hostgroup,apply) VALUES(5,1,'0x283AA9863F85EFC8',3,1);
  INSERT INTO mysql_query_rules (rule_id,active,digest,destination_hostgroup,apply) VALUES(6,1,'0x16BD798E66615299',3,1);

  -- regex-based rule that matches statements starting with SELECT COUNT(*)
  INSERT INTO mysql_query_rules (rule_id,active,match_digest,destination_hostgroup,apply) VALUES(7,1,'^SELECT COUNT\(\*\)',3,1);

  -- activate and persist the rules
  LOAD MYSQL QUERY RULES TO RUNTIME;
  SAVE MYSQL QUERY RULES TO DISK;

STEP 4: Run the sysbench queries again and check the traffic distribution once more. As you can see, the SQL statements that match the routing rules are now executed on hostgroup 3.

  mysql> select hostgroup,schemaname,username,digest_text,count_star from stats_mysql_query_digest_reset;
  +-----------+------------+----------+---------------------------------------------------------------------+------------+
  | hostgroup | schemaname | username | digest_text | count_star |
  +-----------+------------+----------+---------------------------------------------------------------------+------------+
  | 1 | tssysbench | usera | UPDATE sbtest3 SET k=k+? WHERE id=? | 863 |
  | 1 | tssysbench | usera | SELECT DISTINCT c FROM sbtest14 WHERE id BETWEEN ? AND ? ORDER BY c | 841 |
  | 3 | tssysbench | usera | SELECT DISTINCT c FROM sbtest13 WHERE id BETWEEN ? AND ? ORDER BY c | 765 |
  | 1 | tssysbench | usera | SELECT DISTINCT c FROM sbtest12 WHERE id BETWEEN ? AND ? ORDER BY c | 837 |
  | 3 | tssysbench | usera | SELECT DISTINCT c FROM sbtest11 WHERE id BETWEEN ? AND ? ORDER BY c | 813 |
  | 1 | tssysbench | usera | SELECT DISTINCT c FROM sbtest10 WHERE id BETWEEN ? AND ? ORDER BY c | 861 |
  | 1 | tssysbench | usera | SELECT DISTINCT c FROM sbtest9 WHERE id BETWEEN ? AND ? ORDER BY c | 835 |
  | 3 | tssysbench | usera | SELECT DISTINCT c FROM sbtest8 WHERE id BETWEEN ? AND ? ORDER BY c | 823 |
  | 1 | tssysbench | usera | SELECT DISTINCT c FROM sbtest6 WHERE id BETWEEN ? AND ? ORDER BY c | 834 |
  | 1 | tssysbench | usera | UPDATE sbtest5 SET c=? WHERE id=? | 870 |
  | 3 | tssysbench | usera | SELECT DISTINCT c FROM sbtest4 WHERE id BETWEEN ? AND ? ORDER BY c | 802 |
  | 1 | tssysbench | usera | UPDATE sbtest1 SET c=? WHERE id=? | 835 |
  | 1 | tssysbench | usera | SELECT DISTINCT c FROM sbtest3 WHERE id BETWEEN ? AND ? ORDER BY c | 838 |
  | 1 | tssysbench | usera | SELECT DISTINCT c FROM sbtest2 WHERE id BETWEEN ? AND ? ORDER BY c | 885

At this point, we have achieved traffic distribution based on load; a quick way to confirm the per-hostgroup traffic is sketched below.
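
As an additional check (a sketch based on ProxySQL's standard statistics tables, not something run in the original test), the number of queries served per hostgroup and backend can also be read from stats_mysql_connection_pool:

  -- Queries shows how many queries each backend has served, broken down by hostgroup
  mysql> SELECT hostgroup, srv_host, srv_port, status, Queries FROM stats_mysql_connection_pool ORDER BY hostgroup;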

================================================================================================================

Appendix 1: The read/write splitting routing rules table explained

The routing rules for read/write splitting are stored in the mysql_query_rules table, whose definition is as follows:

  CREATE TABLE mysql_query_rules (
      rule_id INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL,
      active INT CHECK (active IN (0,1)) NOT NULL DEFAULT 0,
      username VARCHAR,
      schemaname VARCHAR,
      flagIN INT CHECK (flagIN >= 0) NOT NULL DEFAULT 0,
      client_addr VARCHAR,
      proxy_addr VARCHAR,
      proxy_port INT,
      digest VARCHAR,
      match_digest VARCHAR,
      match_pattern VARCHAR,
      negate_match_pattern INT CHECK (negate_match_pattern IN (0,1)) NOT NULL DEFAULT 0,
      re_modifiers VARCHAR DEFAULT 'CASELESS',
      flagOUT INT CHECK (flagOUT >= 0),
      replace_pattern VARCHAR CHECK(CASE WHEN replace_pattern IS NULL THEN 1 WHEN replace_pattern IS NOT NULL AND match_pattern IS NOT NULL THEN 1 ELSE 0 END),
      destination_hostgroup INT DEFAULT NULL,
      cache_ttl INT CHECK(cache_ttl > 0),
      cache_empty_result INT CHECK (cache_empty_result IN (0,1)) DEFAULT NULL,
      reconnect INT CHECK (reconnect IN (0,1)) DEFAULT NULL,
      timeout INT UNSIGNED,
      retries INT CHECK (retries>=0 AND retries <=1000),
      delay INT UNSIGNED,
      next_query_flagIN INT UNSIGNED,
      mirror_flagOUT INT UNSIGNED,
      mirror_hostgroup INT UNSIGNED,
      error_msg VARCHAR,
      OK_msg VARCHAR,
      sticky_conn INT CHECK (sticky_conn IN (0,1)),
      multiplex INT CHECK (multiplex IN (0,1,2)),
      gtid_from_hostgroup INT UNSIGNED,
      log INT CHECK (log IN (0,1)),
      apply INT CHECK(apply IN (0,1)) NOT NULL DEFAULT 0,
      comment VARCHAR)

The most important columns are explained below (a small example rule that combines several of them follows the list):

  • rule_id: the rule's id and the table's primary key (unique, non-null); rules are matched in ascending rule_id order;
  • active: whether the rule is enabled; 1 means enabled;
  • username: matches traffic from a specific user;
  • client_addr: matches traffic from a specific client address;
  • proxy_addr: matches incoming traffic on a specific local IP;
  • proxy_port: matches incoming traffic on a specific local port, as used in the port-based read/write splitting approach above;
  • digest: matches queries with a specific digest; each distinct SQL text generates a unique digest (similar to Oracle's sql_id), and matching is done on that value;
  • match_digest: matches the query digest text against a regular expression;
  • match_pattern: matches the query text against a regular expression;
  • destination_hostgroup: routes matching queries to this hostgroup, unless there is an active transaction and the logged-in user has the transaction_persistent flag set to 1 (see the mysql_users table);
  • cache_ttl: how long the query result is kept in the query cache (in milliseconds);
  • timeout: the maximum timeout, in milliseconds, for a matched or rewritten query; if a query runs longer than this threshold it is automatically killed. If no timeout is specified, the global variable mysql-default_query_timeout applies;
  • retries: the number of times a query is re-executed when a failure is detected during its execution;
  • apply: if set to 1, no further query rules are evaluated after this rule matches.
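
As a hedged illustration of how several of these columns combine (the rule id, schema, and TTL below are made up for the example, not taken from the tests above), a single rule could restrict routing to one user and one schema and cache the matching results:

  -- hypothetical example: route usera's SELECTs on testdb to hostgroup 3 and cache results for 5000 ms
  INSERT INTO mysql_query_rules (rule_id, active, username, schemaname, match_digest, destination_hostgroup, cache_ttl, apply)
  VALUES (10, 1, 'usera', 'testdb', '^SELECT', 3, 5000, 1);
  LOAD MYSQL QUERY RULES TO RUNTIME;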

Appendix 2: The sysbench commands used in this test

  -- prepare phase
  sysbench /usr/share/sysbench/oltp_read_write.lua --mysql-host=192.168.10.10 --mysql-port=6033 --mysql-user=usera --mysql-password='123456' --mysql-db=tssysbench --db-driver=mysql --tables=15 --table-size=50000 --report-interval=10 --threads=4 --time=120 prepare

  -- run phase
  sysbench /usr/share/sysbench/oltp_read_write.lua --mysql-host=192.168.10.10 --mysql-port=6033 --mysql-user=usera --mysql-password='123456' --mysql-db=tssysbench --db-driver=mysql --tables=15 --table-size=500000 --report-interval=10 --threads=4 --time=120 run

  -- cleanup phase
  sysbench /usr/share/sysbench/oltp_read_write.lua --mysql-host=192.168.10.10 --mysql-port=6033 --mysql-user=usera --mysql-password='123456' --mysql-db=tssysbench --db-driver=mysql --tables=15 --table-size=500000 --report-interval=10 --threads=4 --time=120 cleanup

[End]
