Preface

    We examined the mechanism of MGR yesterday; today, let's configure an environment and run some tests. MGR is installed as a plugin, just like semisynchronous replication.
 
Node information
 
ID  IP             Hostname  Database      Port  Seed Port  Server ID
1   192.168.1.101  zlm2      MySQL 5.7.21  3306  33061      1013306
2   192.168.1.102  zlm3      MySQL 5.7.21  3306  33062      1023306
3   192.168.1.103  zlm4      MySQL 5.7.21  3306  33063      1033306
Configuration 
 
## Check the /etc/hosts file on every server and make sure the IP-to-hostname mappings are correct.
[root@zlm2 ~]
#cat /etc/hosts
127.0.0.1 zlm2 zlm2
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
#::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.100 zlm1 zlm1
192.168.1.101 zlm2 zlm2
192.168.1.102 zlm3 zlm3
192.168.1.103 zlm4 zlm4
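Each member must be able to reach every other member's group-communication port, so a quick reachability test can save debugging time later (as the firewall problem below will show). A minimal sketch, assuming nc (ncat/netcat) is installed on the hosts:

[root@zlm2 ~]
#for peer in zlm2:33061 zlm3:33062 zlm4:33063; do
   # ${peer%:*} is the host part, ${peer#*:} is the port part
   nc -z -w 2 "${peer%:*}" "${peer#*:}" && echo "$peer reachable" || echo "$peer NOT reachable"
 done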
## Check the Group Replication parameters in my.cnf on server zlm2.
[root@zlm2 ~]
#vim /data/mysql/mysql3306/my.cnf
... -- Other parameters omitted.
#group replication -- The parameters below are required by Group Replication.
server_id=1013306
gtid_mode=ON -- Group Replication relies on GTID, so this must be ON.
enforce_gtid_consistency=ON
master_info_repository=TABLE
relay_log_info_repository=TABLE
binlog_checksum=NONE
log_slave_updates=ON -- Ensures GTID information is written to the binary logs, not only to the mysql.gtid_executed table.
log_bin=binlog
binlog_format=ROW
transaction_write_set_extraction=XXHASH64
loose-group_replication_group_name="ed142e35-6ed1-11e8-86c6-080027de0e0e" -- A UUID, which can be generated with SELECT UUID(); every member must use the same value.
loose-group_replication_start_on_boot=off -- Set this to "on" only after the Group Replication configuration is complete.
loose-group_replication_local_address="zlm2:33061"
loose-group_replication_group_seeds="zlm2:33061,zlm3:33062,zlm4:33063" -- Seed members of the group; these group-communication ports are different from the mysqld port.
loose-group_replication_bootstrap_group=off -- Note: this may be set to "on" only on the member that creates the group and starts first.

## Restart mysqld and add the replication user for Group Replication.
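After the restart, it is worth confirming that the prerequisites above actually took effect before going further; a quick check using standard system variables:

(root@localhost mysql3306.sock)[(none)]>SELECT @@server_id, @@gtid_mode, @@enforce_gtid_consistency,
    ->        @@binlog_format, @@log_slave_updates, @@transaction_write_set_extraction\G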
(root@localhost mysql3306.sock)[(none)]>SET SQL_LOG_BIN=0;
Query OK, 0 rows affected (0.00 sec)

(root@localhost mysql3306.sock)[(none)]>CREATE USER rpl_mgr@'%' IDENTIFIED BY 'rpl4mgr';
Query OK, 0 rows affected (0.00 sec)

(root@localhost mysql3306.sock)[(none)]>GRANT REPLICATION SLAVE ON *.* TO rpl_mgr@'%';
Query OK, 0 rows affected (0.00 sec)

(root@localhost mysql3306.sock)[(none)]>FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.00 sec)

(root@localhost mysql3306.sock)[(none)]>SET SQL_LOG_BIN=1;
Query OK, 0 rows affected (0.00 sec)

(root@localhost mysql3306.sock)[(none)]>CHANGE MASTER TO MASTER_USER='rpl_mgr', MASTER_PASSWORD='rpl4mgr' FOR CHANNEL 'group_replication_recovery'; -- The channel name is fixed and cannot be changed.
Query OK, 0 rows affected, 2 warnings (0.03 sec)
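To confirm the credentials landed on the right channel, the recovery channel configuration can be inspected through performance_schema (a standard table in 5.7); a read-only check:

(root@localhost mysql3306.sock)[(none)]>SELECT CHANNEL_NAME, USER, HOST, PORT
    -> FROM performance_schema.replication_connection_configuration
    -> WHERE CHANNEL_NAME='group_replication_recovery';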
## Install the Group Replication plugin.
(root@localhost mysql3306.sock)[(none)]>INSTALL PLUGIN group_replication SONAME 'group_replication.so';
Query OK, 0 rows affected (0.03 sec)

(root@localhost mysql3306.sock)[(none)]>show plugins;
+----------------------------+----------+--------------------+----------------------+---------+
| Name | Status | Type | Library | License |
+----------------------------+----------+--------------------+----------------------+---------+
| binlog | ACTIVE | STORAGE ENGINE | NULL | GPL |
| mysql_native_password | ACTIVE | AUTHENTICATION | NULL | GPL |
| sha256_password | ACTIVE | AUTHENTICATION | NULL | GPL |
| PERFORMANCE_SCHEMA | ACTIVE | STORAGE ENGINE | NULL | GPL |
| MRG_MYISAM | ACTIVE | STORAGE ENGINE | NULL | GPL |
| MEMORY | ACTIVE | STORAGE ENGINE | NULL | GPL |
| InnoDB | ACTIVE | STORAGE ENGINE | NULL | GPL |
| INNODB_TRX | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_LOCKS | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_LOCK_WAITS | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_CMP | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_CMP_RESET | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_CMPMEM | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_CMPMEM_RESET | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_CMP_PER_INDEX | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_CMP_PER_INDEX_RESET | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_BUFFER_PAGE | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_BUFFER_PAGE_LRU | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_BUFFER_POOL_STATS | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_TEMP_TABLE_INFO | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_METRICS | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_FT_DEFAULT_STOPWORD | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_FT_DELETED | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_FT_BEING_DELETED | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_FT_CONFIG | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_FT_INDEX_CACHE | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_FT_INDEX_TABLE | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_SYS_TABLES | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_SYS_TABLESTATS | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_SYS_INDEXES | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_SYS_COLUMNS | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_SYS_FIELDS | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_SYS_FOREIGN | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_SYS_FOREIGN_COLS | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_SYS_TABLESPACES | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_SYS_DATAFILES | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_SYS_VIRTUAL | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| CSV | ACTIVE | STORAGE ENGINE | NULL | GPL |
| MyISAM | ACTIVE | STORAGE ENGINE | NULL | GPL |
| ARCHIVE | ACTIVE | STORAGE ENGINE | NULL | GPL |
| partition | ACTIVE | STORAGE ENGINE | NULL | GPL |
| BLACKHOLE | ACTIVE | STORAGE ENGINE | NULL | GPL |
| FEDERATED | DISABLED | STORAGE ENGINE | NULL | GPL |
| ngram | ACTIVE | FTPARSER | NULL | GPL |
| group_replication | ACTIVE | GROUP REPLICATION | group_replication.so | GPL |
+----------------------------+----------+--------------------+----------------------+---------+
45 rows in set (0.00 sec)

(root@localhost mysql3306.sock)[(none)]>select * from performance_schema.replication_group_members;
+---------------------------+-----------+-------------+-------------+--------------+
| CHANNEL_NAME | MEMBER_ID | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
+---------------------------+-----------+-------------+-------------+--------------+
| group_replication_applier | | | NULL | OFFLINE | -- A row appears here as soon as the plugin is installed.
+---------------------------+-----------+-------------+-------------+--------------+
1 row in set (0.00 sec)

## Make server zlm2 the seed member of the group, then start Group Replication.
(root@localhost mysql3306.sock)[(none)]>SET GLOBAL group_replication_bootstrap_group=ON; -- Bootstrapping must be done on one member only, and only once.
Query OK, 0 rows affected (0.00 sec)

(root@localhost mysql3306.sock)[(none)]>START GROUP_REPLICATION;
Query OK, 0 rows affected (2.05 sec)

(root@localhost mysql3306.sock)[(none)]>SET GLOBAL group_replication_bootstrap_group=OFF; -- Disable it again after starting.
Query OK, 0 rows affected (0.00 sec)

(root@localhost mysql3306.sock)[(none)]>select * from performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| CHANNEL_NAME | MEMBER_ID | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| group_replication_applier | 1b7181ee-6eaf-11e8-998e-080027de0e0e | zlm2 | 3306 | ONLINE | -- The member is now ONLINE.
+---------------------------+--------------------------------------+-------------+-------------+--------------+
1 row in set (0.00 sec)
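Since the group runs in single-primary mode by default (the error log further below also reports single-primary mode: "true"), it can be handy to check which member currently acts as the primary; in 5.7 this is exposed as a status variable, empty outside single-primary mode:

(root@localhost mysql3306.sock)[(none)]>SHOW GLOBAL STATUS LIKE 'group_replication_primary_member';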
## Let's run a few test statements on server zlm2.
(root@localhost mysql3306.sock)[(none)]>show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| performance_schema |
| sys |
+--------------------+
4 rows in set (0.00 sec)

(root@localhost mysql3306.sock)[(none)]>create database zlm;
Query OK, 1 row affected (0.00 sec)

(root@localhost mysql3306.sock)[(none)]>use zlm;
Database changed

(root@localhost mysql3306.sock)[zlm]>create table test_mgr (id int primary key, name char() not null);
Query OK, 0 rows affected (0.02 sec)

(root@localhost mysql3306.sock)[zlm]>insert into test_mgr VALUES (, 'aaron8219');
Query OK, 1 row affected (0.01 sec)

(root@localhost mysql3306.sock)[zlm]>select * from test_mgr;
+----+-----------+
| id | name |
+----+-----------+
| | aaron8219 |
+----+-----------+
1 row in set (0.00 sec)

(root@localhost mysql3306.sock)[zlm]>show binlog events;
+---------------+------+----------------+-----------+-------------+-------------------------------------------------------------------------------+
| Log_name | Pos | Event_type | Server_id | End_log_pos | Info |
+---------------+------+----------------+-----------+-------------+-------------------------------------------------------------------------------+
| binlog. | | Format_desc | | | Server ver: 5.7.21-log, Binlog ver: 4 |
| binlog. | | Previous_gtids | | | |
| binlog. | | Gtid | | | SET @@SESSION.GTID_NEXT= 'ed142e35-6ed1-11e8-86c6-080027de0e0e:1' |
| binlog. | | Query | | | BEGIN |
| binlog. | | View_change | | | view_id=: |
| binlog. | | Query | | | COMMIT |
| binlog. | | Gtid | | | SET @@SESSION.GTID_NEXT= 'ed142e35-6ed1-11e8-86c6-080027de0e0e:2' |
| binlog. | | Query | | | create database zlm |
| binlog. | | Gtid | | | SET @@SESSION.GTID_NEXT= 'ed142e35-6ed1-11e8-86c6-080027de0e0e:3' |
| binlog. | | Query | | | use `zlm`; create table test_mgr (id int primary key, name char() not null) |
| binlog. | | Gtid | | | SET @@SESSION.GTID_NEXT= 'ed142e35-6ed1-11e8-86c6-080027de0e0e:4' |
| binlog. | | Query | | | BEGIN |
| binlog. | | Table_map | | | table_id: (zlm.test_mgr) |
| binlog. | | Write_rows | | | table_id: flags: STMT_END_F |
| binlog. | | Xid | | | COMMIT /* xid=59 */ |
+---------------+------+----------------+-----------+-------------+-------------------------------------------------------------------------------+
15 rows in set (0.00 sec)
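The binlog shows each transaction tagged with a GTID under the group name UUID (ed142e35-...:1 through :4). The same information is available more compactly from the executed GTID set:

(root@localhost mysql3306.sock)[zlm]>SELECT @@global.gtid_executed;
-- expected to show something like ed142e35-6ed1-11e8-86c6-080027de0e0e:1-4 at this point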
## Configure the other two servers (zlm3 and zlm4) the same way as server zlm2:
-- Omitted.

## Start Group Replication on server zlm3.
(root@localhost mysql3306.sock)[(none)]>START GROUP_REPLICATION;
ERROR 3092 (HY000): The server is not configured properly to be an active member of the group. Please see more details on error log.

(root@localhost mysql3306.sock)[(none)]>select * from performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| CHANNEL_NAME | MEMBER_ID | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| group_replication_applier | 5c77c31b-4add-11e8-81e2-080027de0e0e | zlm3 | 3306 | OFFLINE |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
1 row in set (0.00 sec)

## Something went wrong when executing "START GROUP_REPLICATION;": server zlm3 failed to join the group created by server zlm2.
The error log shows the following:
--13T07::.249829Z [Note] mysqld (mysqld 5.7.21-log) starting as process ...
--13T07::.256669Z [Note] InnoDB: PUNCH HOLE support available
--13T07::.256701Z [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
--13T07::.256705Z [Note] InnoDB: Uses event mutexes
--13T07::.256708Z [Note] InnoDB: GCC builtin __sync_synchronize() is used for memory barrier
--13T07::.256708Z [Note] InnoDB: Compressed tables use zlib 1.2.
--13T07::.256708Z [Note] InnoDB: Using Linux native AIO
--13T07::.256708Z [Note] InnoDB: Number of pools:
--13T07::.256718Z [Note] InnoDB: Using CPU crc32 instructions
--13T07::.258124Z [Note] InnoDB: Initializing buffer pool, total size = 100M, instances = , chunk size = 100M
--13T07::.263012Z [Note] InnoDB: Completed initialization of buffer pool
--13T07::.264222Z [Note] InnoDB: If the mysqld execution user is authorized, page cleaner thread priority can be changed. See the man page of setpriority().
--13T07::.289331Z [Note] InnoDB: Highest supported file format is Barracuda.
--13T07::.475746Z [Note] InnoDB: Creating shared tablespace for temporary tables
--13T07::.475831Z [Note] InnoDB: Setting file './ibtmp1' size to MB. Physically writing the file full; Please wait ...
--13T07::.781737Z [Note] InnoDB: File './ibtmp1' size is now MB.
--13T07::.782469Z [Note] InnoDB: redo rollback segment(s) found. redo rollback segment(s) are active.
--13T07::.782482Z [Note] InnoDB: non-redo rollback segment(s) are active.
--13T07::.783403Z [Note] InnoDB: Waiting for purge to start
--13T07::.960368Z [Note] InnoDB: 5.7. started; log sequence number
--13T07::.960713Z [Note] Plugin 'FEDERATED' is disabled.
--13T07::.964346Z [Note] InnoDB: Loading buffer pool(s) from /data/mysql/mysql3306/data/ib_buffer_pool
--13T07::.968486Z [Warning] unknown variable 'loose_tokudb_cache_size=100M'
--13T07::.968509Z [Warning] unknown variable 'loose_tokudb_directio=ON'
--13T07::.968511Z [Warning] unknown variable 'loose_tokudb_fsync_log_period=1000'
--13T07::.968513Z [Warning] unknown variable 'loose_tokudb_commit_sync=0'
--13T07::.968515Z [Warning] unknown variable 'loose-group_replication_group_name=a5e7836a-6edc-11e8-a20d-080027de0e0e'
--13T07::.968516Z [Warning] unknown variable 'loose-group_replication_start_on_boot=off'
--13T07::.968518Z [Warning] unknown variable 'loose-group_replication_local_address=zlm3:33062'
--13T07::.968520Z [Warning] unknown variable 'loose-group_replication_group_seeds=zlm2:33061,zlm3:33062,zlm4:33063'
--13T07::.968521Z [Warning] unknown variable 'loose-group_replication_bootstrap_group=off'
--13T07::.983518Z [Warning] Failed to set up SSL because of the following SSL library error: SSL context is not usable without certificate and private key
--13T07::.983631Z [Note] Server hostname (bind-address): '*'; port:
--13T07::.983667Z [Note] IPv6 is available.
--13T07::.983673Z [Note] - '::' resolves to '::';
--13T07::.983690Z [Note] Server socket created on IP: '::'.
--13T07::.036682Z [Note] Event Scheduler: Loaded events
--13T07::.037391Z [Note] mysqld: ready for connections.
Version: '5.7.21-log' socket: '/tmp/mysql3306.sock' port: MySQL Community Server (GPL)
--13T07::.083468Z [Note] InnoDB: Buffer pool(s) load completed at ::
--13T08::.631676Z [Note] Aborted connection to db: 'unconnected' user: 'root' host: 'localhost' (Got timeout reading communication packets)
--13T08::.693094Z [Note] Aborted connection to db: 'unconnected' user: 'root' host: 'localhost' (Got timeout reading communication packets)
--13T08::.529090Z [Note] Plugin group_replication reported: 'Group communication SSL configuration: group_replication_ssl_mode: "DISABLED"'
--13T08::.529197Z [Note] Plugin group_replication reported: '[GCS] Added automatically IP ranges 10.0.2.15/24,127.0.0.1/8,192.168.1.102/24 to the whitelist'
--13T08::.529394Z [Note] Plugin group_replication reported: '[GCS] Translated 'zlm3' to 192.168.1.102'
--13T08::.529486Z [Warning] Plugin group_replication reported: '[GCS] Automatically adding IPv4 localhost address to the whitelist. It is mandatory that it is added.'
--13T08::.531296Z [Note] Plugin group_replication reported: '[GCS] SSL was not enabled'
--13T08::.531336Z [Note] Plugin group_replication reported: 'Initialized group communication with configuration: group_replication_group_name: "a5e7836a-6edc-11e8-a20d-080027de0e0e"; group_replication_local_address: "zlm3:33062"; group_replication_group_seeds: "zlm2:33061,zlm3:33062,zlm4:33063"; group_replication_bootstrap_group: false; group_replication_poll_spin_loops: 0; group_replication_compression_threshold: 1000000; group_replication_ip_whitelist: "AUTOMATIC"'
--13T08::.531375Z [Note] Plugin group_replication reported: 'Member configuration: member_id: 1023306; member_uuid: "5c77c31b-4add-11e8-81e2-080027de0e0e"; single-primary mode: "true"; group_replication_auto_increment_increment: 7; '
--13T08::.549240Z [Note] 'CHANGE MASTER TO FOR CHANNEL 'group_replication_applier' executed'. Previous state master_host='', master_port= , master_log_file='', master_log_pos= , master_bind=''. New state master_host='<NULL>', master_port= , master_log_file='', master_log_pos= , master_bind=''.
--13T08::.568485Z [Note] Slave SQL thread for channel 'group_replication_applier' initialized, starting replication in log 'FIRST' at position , relay log './relay-bin-group_replication_applier.000001' position:
--13T08::.569516Z [Note] Plugin group_replication reported: 'Group Replication applier module successfully initialized!'
--13T08::.569528Z [Note] Plugin group_replication reported: 'auto_increment_increment is set to 7'
--13T08::.569531Z [Note] Plugin group_replication reported: 'auto_increment_offset is set to 1023306'
--13T08::.569631Z [Note] Plugin group_replication reported: 'state 0 action xa_init'
--13T08::.589865Z [Note] Plugin group_replication reported: 'Successfully bound to 0.0.0.0:33062 (socket=62).'
--13T08::.589970Z [Note] Plugin group_replication reported: 'Successfully set listen backlog to 32 (socket=62)!'
--13T08::.590011Z [Note] Plugin group_replication reported: 'Successfully unblocked socket (socket=62)!'
--13T08::.590098Z [Note] Plugin group_replication reported: 'Ready to accept incoming connections on 0.0.0.0:33062 (socket=62)!'
--13T08::.590549Z [Note] Plugin group_replication reported: 'connecting to zlm3 33062'
--13T08::.590788Z [Note] Plugin group_replication reported: 'client connected to zlm3 33062 fd 63'
--13T08::.593734Z [Note] Plugin group_replication reported: 'connecting to zlm3 33062'
--13T08::.593853Z [Note] Plugin group_replication reported: 'client connected to zlm3 33062 fd 65'
--13T08::.593966Z [Note] Plugin group_replication reported: 'connecting to zlm3 33062'
--13T08::.594016Z [Note] Plugin group_replication reported: 'client connected to zlm3 33062 fd 67'
--13T08::.595449Z [Note] Plugin group_replication reported: 'connecting to zlm3 33062'
--13T08::.595554Z [Note] Plugin group_replication reported: 'client connected to zlm3 33062 fd 60'
--13T08::.595792Z [Note] Plugin group_replication reported: 'connecting to zlm3 33062'
--13T08::.595887Z [Note] Plugin group_replication reported: 'client connected to zlm3 33062 fd 70'
--13T08::.596009Z [Note] Plugin group_replication reported: 'connecting to zlm3 33062'
--13T08::.596069Z [Note] Plugin group_replication reported: 'client connected to zlm3 33062 fd 72'
--13T08::.596168Z [Note] Plugin group_replication reported: 'connecting to zlm2 33061'
--13T08::.596594Z [Note] Plugin group_replication reported: 'Getting the peer name failed while connecting to server zlm2 with error 113 -No route to host.'
--13T08::.596622Z [ERROR] Plugin group_replication reported: '[GCS] Error on opening a connection to zlm2:33061 on local port: 33062.'
--13T08::.596629Z [Note] Plugin group_replication reported: 'connecting to zlm4 33063'
--13T08::.596947Z [Note] Plugin group_replication reported: 'Getting the peer name failed while connecting to server zlm4 with error 111 -Connection refused.'
--13T08::.596965Z [ERROR] Plugin group_replication reported: '[GCS] Error on opening a connection to zlm4:33063 on local port: 33062.'
-- (the same pair of connection attempts to zlm2:33061 and zlm4:33063 is retried repeatedly with identical "No route to host" / "Connection refused" errors; the repeated log lines are omitted here)
--13T08::.609086Z [ERROR] Plugin group_replication reported: '[GCS] Error connecting to all peers. Member join failed. Local port: 33062'
--13T08::.609134Z [Note] Plugin group_replication reported: 'state 4338 action xa_terminate'
--13T08::.609141Z [Note] Plugin group_replication reported: 'new state x_start'
--13T08::.609143Z [Note] Plugin group_replication reported: 'state 4338 action xa_exit'
--13T08::.609182Z [Note] Plugin group_replication reported: 'Exiting xcom thread'
--13T08::.609186Z [Note] Plugin group_replication reported: 'new state x_start'
--13T08::.618446Z [Warning] Plugin group_replication reported: 'read failed'
--13T08::.618546Z [ERROR] Plugin group_replication reported: '[GCS] The member was unable to join the group. Local port: 33062'
--13T08::.570227Z [ERROR] Plugin group_replication reported: 'Timeout on wait for view after joining group'
--13T08::.570326Z [Note] Plugin group_replication reported: 'Requesting to leave the group despite of not being a member'
--13T08::.570364Z [ERROR] Plugin group_replication reported: '[GCS] The member is leaving a group without being on one.'
--13T08::.570551Z [Note] Plugin group_replication reported: 'auto_increment_increment is reset to 1'
--13T08::.570559Z [Note] Plugin group_replication reported: 'auto_increment_offset is reset to 1'
--13T08::.570655Z [Note] Error reading relay log event for channel 'group_replication_applier': slave SQL thread was killed
--13T08::.570836Z [Note] Plugin group_replication reported: 'The group replication applier thread was killed'

## Finally, I found that the firewall was still enabled on server zlm2.
[root@zlm2 ~]
#systemctl status firewalld
firewalld.service - firewalld - dynamic firewall daemon
Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled)
Active: active (running) since Wed -- :: CEST; 7h ago
Main PID: (firewalld)
CGroup: /system.slice/firewalld.service
└─ /usr/bin/python -Es /usr/sbin/firewalld --nofork --nopid

Jun :: localhost.localdomain systemd[]: Started firewalld - dynamic firewall daemon.

[root@zlm2 ~]
#systemctl stop firewalld

[root@zlm2 ~]
#systemctl disable firewalld
rm '/etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service'
rm '/etc/systemd/system/basic.target.wants/firewalld.service'

[root@zlm2 ~]
#systemctl status firewalld
firewalld.service - firewalld - dynamic firewall daemon
Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled)
Active: inactive (dead)

Jun :: localhost.localdomain systemd[]: Starting firewalld - dynamic firewall daemon...
Jun :: localhost.localdomain systemd[]: Started firewalld - dynamic firewall daemon.
Jun :: zlm2 systemd[]: Stopping firewalld - dynamic firewall daemon...
Jun :: zlm2 systemd[]: Stopped firewalld - dynamic firewall daemon.
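Disabling firewalld entirely is fine in this test environment; on a shared host, the less invasive alternative would be to open only the ports MGR needs. A sketch, assuming firewalld stays running:

[root@zlm2 ~]
#firewall-cmd --permanent --add-port=33061/tcp   # group-communication port of this member
#firewall-cmd --permanent --add-port=3306/tcp    # mysqld port, used by the recovery channel
#firewall-cmd --reload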
## Start Group Replication again.
(root@localhost mysql3306.sock)[(none)]>START GROUP_REPLICATION;
ERROR 3092 (HY000): The server is not configured properly to be an active member of the group. Please see more details on error log.
The error log on zlm3 now shows:
--13T08::.361028Z [ERROR] Plugin group_replication reported: '[GCS] Timeout while waiting for the group communication engine to be ready!'
--13T08::.361070Z [ERROR] Plugin group_replication reported: '[GCS] The group communication engine is not ready for the member to join. Local port: 33062'
--13T08::.361171Z [Note] Plugin group_replication reported: 'state 4338 action xa_terminate'
--13T08::.361185Z [Note] Plugin group_replication reported: 'new state x_start'
--13T08::.361188Z [Note] Plugin group_replication reported: 'state 4338 action xa_exit'
--13T08::.361254Z [Note] Plugin group_replication reported: 'Exiting xcom thread'
--13T08::.361258Z [Note] Plugin group_replication reported: 'new state x_start'
--13T08::.371810Z [Warning] Plugin group_replication reported: 'read failed'
--13T08::.387635Z [ERROR] Plugin group_replication reported: '[GCS] The member was unable to join the group. Local port: 33062'
--13T08::.349695Z [ERROR] Plugin group_replication reported: 'Timeout on wait for view after joining group'
--13T08::.349732Z [Note] Plugin group_replication reported: 'Requesting to leave the group despite of not being a member'
--13T08::.349745Z [ERROR] Plugin group_replication reported: '[GCS] The member is leaving a group without being on one.'
--13T08::.349969Z [Note] Plugin group_replication reported: 'auto_increment_increment is reset to 1'
--13T08::.349975Z [Note] Plugin group_replication reported: 'auto_increment_offset is reset to 1'
--13T08::.350079Z [Note] Error reading relay log event for channel 'group_replication_applier': slave SQL thread was killed
--13T08::.350240Z [Note] Plugin group_replication reported: 'The group replication applier thread was killed'
    The other two servers (zlm3 and zlm4) still cannot join the group created by zlm2, and I haven't figured out why yet; I'll test it again later. One detail worth re-checking next time: the error log on zlm3 reports group_replication_group_name "a5e7836a-6edc-11e8-a20d-080027de0e0e", while zlm2's my.cnf uses "ed142e35-6ed1-11e8-86c6-080027de0e0e"; all members must share the same group name, or the join will be refused.
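For the next round of testing, a quick way to verify the effective group name is identical on every member (the variable exists once the plugin is installed):

(root@localhost mysql3306.sock)[(none)]>SELECT @@group_replication_group_name;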
 
