Setting up and troubleshooting a mysql-group-replication test environment
MySQL Group Replication is the strongly consistent, high-availability cluster solution introduced in MySQL 5.7.17.
1. Environment planning
Host IP          Hostname
172.16.192.201   balm001
172.16.192.202   balm002
172.16.192.203   balm003
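Besides MySQL's own port 3306, group replication needs the group-communication port configured below (24901 in this article) to be reachable between all members, so it is worth verifying connectivity up front. A minimal sketch, assuming a netcat variant that supports -z is installed:

# run on each host; both ports must be reachable on every peer
for h in 172.16.192.201 172.16.192.202 172.16.192.203; do
    for p in 3306 24901; do
        nc -z -w 2 "$h" "$p" && echo "$h:$p ok" || echo "$h:$p unreachable"
    done
done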
2. MySQL configuration files
Configuration for balm001:
[mysql]
auto-rehash

[mysqld]
####: for global
user =mysql # mysql
basedir =/usr/local/mysql # /usr/local/mysql/
datadir =/usr/local/mysql_datas/3306 # /usr/local/mysql/data
server_id =1 #
port =3306 #
character_set_server =utf8 # latin1
log_timestamps =system # utc
socket =/tmp/mysql.sock # /tmp/mysql.sock
read_only =1 # off
skip-slave-start =1 #
auto_increment_increment =1 #
auto_increment_offset =1 #
lower_case_table_names =1 #
secure_file_priv = # null

####: for binlog
binlog_format =row # row
log_bin =mysql-bin # off
binlog_rows_query_log_events =on # off
log_slave_updates =on # off
expire_logs_days =4 #
binlog_cache_size =32768 # 32768(32k)
binlog_checksum =none # CRC32
sync_binlog =1 #

####: for error-log
log_error =error.log # /usr/local/mysql/data/localhost.localdomain.err

####: for slow query log

####: for gtid
gtid_executed_compression_period =1000 #
gtid_mode =on # off
enforce_gtid_consistency =on # off

####: for replication
master_info_repository =table # file
relay_log_info_repository =table # file

####: for group replication
transaction_write_set_extraction =XXHASH64 # off
loose-group_replication_group_name ="aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa" #
loose-group_replication_start_on_boot =off # off
loose-group_replication_local_address ="172.16.192.201:24901" #
loose-group_replication_group_seeds ="172.16.192.201:24901,172.16.192.202:24901,172.16.192.203:24901"
loose-group_replication_bootstrap_group =off # off

####: for innodb
default_storage_engine =innodb # innodb
default_tmp_storage_engine =innodb # innodb
innodb_data_file_path =ibdata1:12M:autoextend # ibdata1:12M:autoextend
innodb_temp_data_file_path =ibtmp1:12M:autoextend # ibtmp1:12M:autoextend
innodb_buffer_pool_filename =ib_buffer_pool # ib_buffer_pool
innodb_log_group_home_dir =./ # ./
innodb_log_files_in_group =2 #
innodb_log_file_size =48M # 50331648(48M)
innodb_file_format =Barracuda # Barracuda
innodb_file_per_table =on # on
innodb_page_size =16k # 16384(16k)
innodb_thread_concurrency =0 #
innodb_read_io_threads =4 #
innodb_write_io_threads =4 #
innodb_purge_threads =4 #
innodb_print_all_deadlocks =on # off
innodb_deadlock_detect =on # on
innodb_lock_wait_timeout =50 #
innodb_spin_wait_delay =6 #
innodb_autoinc_lock_mode =2 #
innodb_stats_persistent =on # on
innodb_stats_persistent_sample_pages =20 #
innodb_buffer_pool_instances =1 #
innodb_adaptive_hash_index =on # on
innodb_change_buffering =all # all
innodb_change_buffer_max_size =25 #
innodb_flush_neighbors =1 #
innodb_flush_method =O_DIRECT #
innodb_doublewrite =on # on
innodb_log_buffer_size =16M # 16777216(16M)
innodb_flush_log_at_timeout =1 #
innodb_flush_log_at_trx_commit =1 #
innodb_buffer_pool_size =134217728 # 134217728(128M)
autocommit =1 #

####: for performance_schema
performance_schema =on # on
performance_schema_consumer_events_stages_current =on # off
performance_schema_consumer_events_stages_history =on # off
performance_schema_consumer_events_stages_history_long =off # off
performance_schema_consumer_statements_digest =on # on
performance_schema_consumer_events_statements_current =on # on
performance_schema_consumer_events_statements_history =on # on
performance_schema_consumer_events_statements_history_long =off # off
performance_schema_consumer_events_waits_current =on # off
performance_schema_consumer_events_waits_history =on # off
performance_schema_consumer_events_waits_history_long =off # off
performance_schema_consumer_global_instrumentation =on # on
performance_schema_consumer_thread_instrumentation =on # on
Configuration for balm002:
[mysql]
auto-rehash

[mysqld]
####: for global
user =mysql # mysql
basedir =/usr/local/mysql # /usr/local/mysql/
datadir =/usr/local/mysql_datas/3306 # /usr/local/mysql/data
server_id =2 #
port =3306 #
character_set_server =utf8 # latin1
log_timestamps =system # utc
socket =/tmp/mysql.sock # /tmp/mysql.sock
read_only =1 # off
skip-slave-start =1 #
auto_increment_increment =1 #
auto_increment_offset =1 #
lower_case_table_names =1 #
secure_file_priv = # null

####: for binlog
binlog_format =row # row
log_bin =mysql-bin # off
binlog_rows_query_log_events =on # off
log_slave_updates =on # off
expire_logs_days =4 #
binlog_cache_size =32768 # 32768(32k)
binlog_checksum =none # CRC32
sync_binlog =1 #

####: for error-log
log_error =error.log # /usr/local/mysql/data/localhost.localdomain.err

####: for slow query log

####: for gtid
gtid_executed_compression_period =1000 #
gtid_mode =on # off
enforce_gtid_consistency =on # off

####: for replication
master_info_repository =table # file
relay_log_info_repository =table # file

####: for group replication
transaction_write_set_extraction =XXHASH64 # off
loose-group_replication_group_name ="aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa" #
loose-group_replication_start_on_boot =off # off
loose-group_replication_local_address ="172.16.192.202:24901" #
loose-group_replication_group_seeds ="172.16.192.201:24901,172.16.192.202:24901,172.16.192.203:24901"
loose-group_replication_bootstrap_group =off # off

####: for innodb
default_storage_engine =innodb # innodb
default_tmp_storage_engine =innodb # innodb
innodb_data_file_path =ibdata1:12M:autoextend # ibdata1:12M:autoextend
innodb_temp_data_file_path =ibtmp1:12M:autoextend # ibtmp1:12M:autoextend
innodb_buffer_pool_filename =ib_buffer_pool # ib_buffer_pool
innodb_log_group_home_dir =./ # ./
innodb_log_files_in_group =2 #
innodb_log_file_size =48M # 50331648(48M)
innodb_file_format =Barracuda # Barracuda
innodb_file_per_table =on # on
innodb_page_size =16k # 16384(16k)
innodb_thread_concurrency =0 #
innodb_read_io_threads =4 #
innodb_write_io_threads =4 #
innodb_purge_threads =4 #
innodb_print_all_deadlocks =on # off
innodb_deadlock_detect =on # on
innodb_lock_wait_timeout =50 #
innodb_spin_wait_delay =6 #
innodb_autoinc_lock_mode =2 #
innodb_stats_persistent =on # on
innodb_stats_persistent_sample_pages =20 #
innodb_buffer_pool_instances =1 #
innodb_adaptive_hash_index =on # on
innodb_change_buffering =all # all
innodb_change_buffer_max_size =25 #
innodb_flush_neighbors =1 #
innodb_flush_method =O_DIRECT #
innodb_doublewrite =on # on
innodb_log_buffer_size =16M # 16777216(16M)
innodb_flush_log_at_timeout =1 #
innodb_flush_log_at_trx_commit =1 #
innodb_buffer_pool_size =134217728 # 134217728(128M)
autocommit =1 #

####: for performance_schema
performance_schema =on # on
performance_schema_consumer_events_stages_current =on # off
performance_schema_consumer_events_stages_history =on # off
performance_schema_consumer_events_stages_history_long =off # off
performance_schema_consumer_statements_digest =on # on
performance_schema_consumer_events_statements_current =on # on
performance_schema_consumer_events_statements_history =on # on
performance_schema_consumer_events_statements_history_long =off # off
performance_schema_consumer_events_waits_current =on # off
performance_schema_consumer_events_waits_history =on # off
performance_schema_consumer_events_waits_history_long =off # off
performance_schema_consumer_global_instrumentation =on # on
performance_schema_consumer_thread_instrumentation =on # on
Configuration for balm003:
[mysql]
auto-rehash

[mysqld]
####: for global
user =mysql # mysql
basedir =/usr/local/mysql # /usr/local/mysql/
datadir =/usr/local/mysql_datas/3306 # /usr/local/mysql/data
server_id =3 #
port =3306 #
character_set_server =utf8 # latin1
log_timestamps =system # utc
socket =/tmp/mysql.sock # /tmp/mysql.sock
read_only =1 # off
skip-slave-start =1 #
auto_increment_increment =1 #
auto_increment_offset =1 #
lower_case_table_names =1 #
secure_file_priv = # null

####: for binlog
binlog_format =row # row
log_bin =mysql-bin # off
binlog_rows_query_log_events =on # off
log_slave_updates =on # off
expire_logs_days =4 #
binlog_cache_size =32768 # 32768(32k)
binlog_checksum =none # CRC32
sync_binlog =1 #

####: for error-log
log_error =error.log # /usr/local/mysql/data/localhost.localdomain.err

####: for slow query log

####: for gtid
gtid_executed_compression_period =1000 #
gtid_mode =on # off
enforce_gtid_consistency =on # off

####: for replication
master_info_repository =table # file
relay_log_info_repository =table # file

####: for group replication
transaction_write_set_extraction =XXHASH64 # off
loose-group_replication_group_name ="aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa" #
loose-group_replication_start_on_boot =off # off
loose-group_replication_local_address ="172.16.192.203:24901" #
loose-group_replication_group_seeds ="172.16.192.201:24901,172.16.192.202:24901,172.16.192.203:24901"
loose-group_replication_bootstrap_group =off # off

####: for innodb
default_storage_engine =innodb # innodb
default_tmp_storage_engine =innodb # innodb
innodb_data_file_path =ibdata1:12M:autoextend # ibdata1:12M:autoextend
innodb_temp_data_file_path =ibtmp1:12M:autoextend # ibtmp1:12M:autoextend
innodb_buffer_pool_filename =ib_buffer_pool # ib_buffer_pool
innodb_log_group_home_dir =./ # ./
innodb_log_files_in_group =2 #
innodb_log_file_size =48M # 50331648(48M)
innodb_file_format =Barracuda # Barracuda
innodb_file_per_table =on # on
innodb_page_size =16k # 16384(16k)
innodb_thread_concurrency =0 #
innodb_read_io_threads =4 #
innodb_write_io_threads =4 #
innodb_purge_threads =4 #
innodb_print_all_deadlocks =on # off
innodb_deadlock_detect =on # on
innodb_lock_wait_timeout =50 #
innodb_spin_wait_delay =6 #
innodb_autoinc_lock_mode =2 #
innodb_stats_persistent =on # on
innodb_stats_persistent_sample_pages =20 #
innodb_buffer_pool_instances =1 #
innodb_adaptive_hash_index =on # on
innodb_change_buffering =all # all
innodb_change_buffer_max_size =25 #
innodb_flush_neighbors =1 #
innodb_flush_method =O_DIRECT #
innodb_doublewrite =on # on
innodb_log_buffer_size =16M # 16777216(16M)
innodb_flush_log_at_timeout =1 #
innodb_flush_log_at_trx_commit =1 #
innodb_buffer_pool_size =134217728 # 134217728(128M)
autocommit =1 #

####: for performance_schema
performance_schema =on # on
performance_schema_consumer_events_stages_current =on # off
performance_schema_consumer_events_stages_history =on # off
performance_schema_consumer_events_stages_history_long =off # off
performance_schema_consumer_statements_digest =on # on
performance_schema_consumer_events_statements_current =on # on
performance_schema_consumer_events_statements_history =on # on
performance_schema_consumer_events_statements_history_long =off # off
performance_schema_consumer_events_waits_current =on # off
performance_schema_consumer_events_waits_history =on # off
performance_schema_consumer_events_waits_history_long =off # off
performance_schema_consumer_global_instrumentation =on # on
performance_schema_consumer_thread_instrumentation =on # on
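The three nodes use identical settings except for server_id and loose-group_replication_local_address. After restarting mysqld with the new my.cnf, it is worth confirming on every node that the replication prerequisites really took effect. Below is a minimal sketch; note that the group_replication_* options themselves only become visible once the plugin has been installed, which is exactly why they carry the loose- prefix:

-- run on each node after the restart; values should match the my.cnf above
show variables where variable_name in
    ('server_id', 'gtid_mode', 'enforce_gtid_consistency', 'binlog_format',
     'log_slave_updates', 'master_info_repository', 'relay_log_info_repository',
     'transaction_write_set_extraction', 'binlog_checksum');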
3. Configuring group replication on balm001 (balm001 will serve as the cluster's seed node)
set sql_log_bin=0;
create user rpl_user@'%' identified by '';
grant replication slave,replication client on *.* to rpl_user@'%';
create user rpl_user@'127.0.0.1' identified by '';
grant replication slave,replication client on *.* to rpl_user@'127.0.0.1';
create user rpl_user@'localhost' identified by '';
grant replication slave,replication client on *.* to rpl_user@'localhost';
set sql_log_bin=1;

change master to
    master_user='rpl_user',
    master_password=''
    for channel 'group_replication_recovery';

install plugin group_replication soname 'group_replication.so';

set global group_replication_bootstrap_group=on;
start group_replication;
set global group_replication_bootstrap_group=off;
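At this point the group should contain only the seed node. A quick sanity check before joining the other members (the member_id will of course differ in every environment):

-- on balm001: expect exactly one row, with MEMBER_STATE = ONLINE
select member_id, member_host, member_state
  from performance_schema.replication_group_members;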
4. Configuring balm002 & balm003
set sql_log_bin=0;
create user rpl_user@'%' identified by '';
grant replication slave,replication client on *.* to rpl_user@'%';
create user rpl_user@'127.0.0.1' identified by '';
grant replication slave,replication client on *.* to rpl_user@'127.0.0.1';
create user rpl_user@'localhost' identified by '';
grant replication slave,replication client on *.* to rpl_user@'localhost';
set sql_log_bin=1;

change master to
    master_user='rpl_user',
    master_password=''
    for channel 'group_replication_recovery';

install plugin group_replication soname 'group_replication.so';

# non-seed nodes do not bootstrap the group; they simply run start group_replication
start group_replication;
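If a joining node hangs in RECOVERING instead of reaching ONLINE, the distributed-recovery channel can be inspected like any ordinary replication channel. A minimal troubleshooting sketch (run on the joining node):

-- is the member stuck in RECOVERING?
select member_host, member_state
  from performance_schema.replication_group_members;

-- inspect the recovery channel; Last_IO_Error / Last_SQL_Error usually point at the cause
show slave status for channel 'group_replication_recovery'\G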
5. Checking whether MySQL group replication is set up correctly
select * from performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| CHANNEL_NAME | MEMBER_ID | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| group_replication_applier | 2263e491-05ad-11e7-b3ef-000c29c965ef | balm001 | 3306 | ONLINE |
| group_replication_applier | 441db987-0653-11e7-9d42-000c2922addb | balm003 | 3306 | ONLINE |
| group_replication_applier | 49ed2458-05b0-11e7-91af-000c29cac83b | balm002 | 3306 | ONLINE |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
When the MEMBER_STATE column shows ONLINE for every member, the cluster is healthy.
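Since the group runs in the default single-primary mode, only one member accepts writes. In MySQL 5.7 the current primary can be looked up via a status variable; a small sketch (in MySQL 8.0 the MEMBER_ROLE column of replication_group_members replaces it):

select variable_value as primary_member_id
  from performance_schema.global_status
 where variable_name = 'group_replication_primary_member';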
6. Error-log output produced during configuration (the excerpt below is from balm002)
2017-03-30T21:48:57.781612+08:00 4 [Note] Plugin group_replication reported: 'Group communication SSL configuration: group_replication_ssl_mode: "DISABLED"'
2017-03-30T21:48:57.781849+08:00 4 [Note] Plugin group_replication reported: '[GCS] Added automatically IP ranges 127.0.0.1/8,172.16.192.202/16 to the whitelist'
2017-03-30T21:48:57.782666+08:00 4 [Note] Plugin group_replication reported: '[GCS] SSL was not enabled'
2017-03-30T21:48:57.782703+08:00 4 [Note] Plugin group_replication reported: 'Initialized group communication with configuration: group_replication_group_name: "aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa"; group_replication_local_address: "172.16.192.202:24901"; group_replication_group_seeds: "172.16.192.201:24901,172.16.192.202:24901,172.16.192.203:24901"; group_replication_bootstrap_group: false; group_replication_poll_spin_loops: 0; group_replication_compression_threshold: 1000000; group_replication_ip_whitelist: "AUTOMATIC"'
2017-03-30T21:48:57.783601+08:00 6 [Note] 'CHANGE MASTER TO FOR CHANNEL 'group_replication_applier' executed'. Previous state master_host='<NULL>', master_port= 0, master_log_file='', master_log_pos= 4, master_bind=''. New state master_host='<NULL>', master_port= 0, master_log_file='', master_log_pos= 4, master_bind=''.
2017-03-30T21:48:57.807438+08:00 4 [Note] Plugin group_replication reported: 'Group Replication applier module successfully initialized!'
2017-03-30T21:48:57.807513+08:00 4 [Note] Plugin group_replication reported: 'auto_increment_increment is set to 7'
2017-03-30T21:48:57.807522+08:00 4 [Note] Plugin group_replication reported: 'auto_increment_offset is set to 2'
2017-03-30T21:48:57.807902+08:00 9 [Note] Slave SQL thread for channel 'group_replication_applier' initialized, starting replication in log 'FIRST' at position 0, relay log './balm002-relay-bin-group_replication_applier.000001' position: 4
2017-03-30T21:48:57.812977+08:00 0 [Note] Plugin group_replication reported: 'state 0 action xa_init'
2017-03-30T21:48:57.835709+08:00 0 [Note] Plugin group_replication reported: 'Successfully bound to 0.0.0.0:24901 (socket=46).'
2017-03-30T21:48:57.835753+08:00 0 [Note] Plugin group_replication reported: 'Successfully set listen backlog to 32 (socket=46)!'
2017-03-30T21:48:57.835759+08:00 0 [Note] Plugin group_replication reported: 'Successfully unblocked socket (socket=46)!'
2017-03-30T21:48:57.835792+08:00 0 [Note] Plugin group_replication reported: 'connecting to 172.16.192.202 24901'
2017-03-30T21:48:57.835918+08:00 0 [Note] Plugin group_replication reported: 'client connected to 172.16.192.202 24901 fd 47'
2017-03-30T21:48:57.835962+08:00 0 [Note] Plugin group_replication reported: 'Ready to accept incoming connections on 0.0.0.0:24901 (socket=46)!'
2017-03-30T21:48:57.836055+08:00 0 [Note] Plugin group_replication reported: 'connecting to 172.16.192.202 24901'
2017-03-30T21:48:57.836090+08:00 0 [Note] Plugin group_replication reported: 'client connected to 172.16.192.202 24901 fd 63'
2017-03-30T21:48:57.836159+08:00 0 [Note] Plugin group_replication reported: 'connecting to 172.16.192.202 24901'
2017-03-30T21:48:57.836192+08:00 0 [Note] Plugin group_replication reported: 'client connected to 172.16.192.202 24901 fd 65'
2017-03-30T21:48:57.836255+08:00 0 [Note] Plugin group_replication reported: 'connecting to 172.16.192.202 24901'
2017-03-30T21:48:57.836285+08:00 0 [Note] Plugin group_replication reported: 'client connected to 172.16.192.202 24901 fd 67'
2017-03-30T21:48:57.836350+08:00 0 [Note] Plugin group_replication reported: 'connecting to 172.16.192.202 24901'
2017-03-30T21:48:57.836382+08:00 0 [Note] Plugin group_replication reported: 'client connected to 172.16.192.202 24901 fd 69'
2017-03-30T21:48:57.836460+08:00 0 [Note] Plugin group_replication reported: 'connecting to 172.16.192.202 24901'
2017-03-30T21:48:57.836548+08:00 0 [Note] Plugin group_replication reported: 'client connected to 172.16.192.202 24901 fd 71'
2017-03-30T21:48:57.836621+08:00 0 [Note] Plugin group_replication reported: 'connecting to 172.16.192.201 24901'
2017-03-30T21:48:57.836983+08:00 0 [Note] Plugin group_replication reported: 'client connected to 172.16.192.201 24901 fd 73'
2017-03-30T21:48:59.097399+08:00 0 [Note] Plugin group_replication reported: 'state 4257 action xa_snapshot'
2017-03-30T21:48:59.097702+08:00 0 [Note] Plugin group_replication reported: 'new state x_recover'
2017-03-30T21:48:59.097720+08:00 0 [Note] Plugin group_replication reported: 'state 4277 action xa_complete'
2017-03-30T21:48:59.097829+08:00 0 [Note] Plugin group_replication reported: 'new state x_run'
2017-03-30T21:49:00.103807+08:00 0 [Note] Plugin group_replication reported: 'Starting group replication recovery with view_id 14908590232397539:7'
2017-03-30T21:49:00.105031+08:00 12 [Note] Plugin group_replication reported: 'Establishing group recovery connection with a possible donor. Attempt 1/10'
2017-03-30T21:49:00.125578+08:00 12 [Note] 'CHANGE MASTER TO FOR CHANNEL 'group_replication_recovery' executed'. Previous state master_host='<NULL>', master_port= 0, master_log_file='', master_log_pos= 4, master_bind=''. New state master_host='balm001', master_port= 3306, master_log_file='', master_log_pos= 4, master_bind=''.
2017-03-30T21:49:00.130977+08:00 12 [Note] Plugin group_replication reported: 'Establishing connection to a group replication recovery donor 2263e491-05ad-11e7-b3ef-000c29c965ef at balm001 port: 3306.'
2017-03-30T21:49:00.131254+08:00 14 [Warning] Storing MySQL user name or password information in the master info repository is not secure and is therefore not recommended. Please consider using the USER and PASSWORD connection options for START SLAVE; see the 'START SLAVE Syntax' in the MySQL Manual for more information.
2017-03-30T21:49:00.136503+08:00 14 [Note] Slave I/O thread for channel 'group_replication_recovery': connected to master 'rpl_user@balm001:3306',replication started in log 'FIRST' at position 4
2017-03-30T21:49:00.148622+08:00 15 [Note] Slave SQL thread for channel 'group_replication_recovery' initialized, starting replication in log 'FIRST' at position 0, relay log './balm002-relay-bin-group_replication_recovery.000001' position: 4
2017-03-30T21:49:00.176671+08:00 12 [Note] Plugin group_replication reported: 'Terminating existing group replication donor connection and purging the corresponding logs.'
2017-03-30T21:49:00.176723+08:00 15 [Note] Slave SQL thread for channel 'group_replication_recovery' exiting, replication stopped in log 'mysql-bin.000003' at position 1563
2017-03-30T21:49:00.177801+08:00 14 [Note] Slave I/O thread killed while reading event for channel 'group_replication_recovery'
2017-03-30T21:49:00.177833+08:00 14 [Note] Slave I/O thread exiting for channel 'group_replication_recovery', read up to log 'mysql-bin.000003', position 1887
2017-03-30T21:49:00.182909+08:00 12 [Note] 'CHANGE MASTER TO FOR CHANNEL 'group_replication_recovery' executed'. Previous state master_host='balm001', master_port= 3306, master_log_file='', master_log_pos= 4, master_bind=''. New state master_host='<NULL>', master_port= 0, master_log_file='', master_log_pos= 4, master_bind=''.
2017-03-30T21:49:00.189188+08:00 0 [Note] Plugin group_replication reported: 'This server was declared online within the replication group'
7. Pitfalls encountered during configuration
The START GROUP_REPLICATION command failed as there was an error when initializ ... ...
It was eventually confirmed that this error was caused by missing DNS configuration; fixing /etc/hosts accordingly is enough. Every host in the cluster must be able to resolve the hostnames of all the other members, for example:
[root@balm003 Desktop]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
172.16.192.22 studio
172.16.192.201 balm001
172.16.192.202 balm002
172.16.192.203 balm003
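A quick way to confirm that name resolution is consistent across the cluster is to resolve every member's hostname on every node; a minimal sketch, assuming getent is available (as it is on most Linux distributions):

# run on each of balm001 / balm002 / balm003; every name must resolve to the planned IP
for h in balm001 balm002 balm003; do
    getent hosts "$h"
done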