Preface

    We learned the mechanism of MGR yesterday; today, let's configure an environment and run some tests. Like semisynchronous replication, MGR can be installed as a plugin.
 
Node information
 
ID  IP             Hostname  Database      Port  Seed Port  Server ID
1   192.168.1.101  zlm2      MySQL 5.7.21  3306  33061      1013306
2   192.168.1.102  zlm3      MySQL 5.7.21  3306  33062      1023306
3   192.168.1.103  zlm4      MySQL 5.7.21  3306  33063      1033306

Configuration 
 
##Check the "/etc/hosts" file on all servers and make sure the IP-to-hostname mappings are correct.
[root@zlm2 :: ~]
#cat /etc/hosts
127.0.0.1 zlm2 zlm2
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
#::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.100 zlm1 zlm1
192.168.1.101 zlm2 zlm2
192.168.1.102 zlm3 zlm3
192.168.1.103 zlm4 zlm4

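Since the group seeds are addressed by hostname, a wrong /etc/hosts entry is enough to break the join later on. A minimal sketch of an automated check (the `check_hosts` helper is my own; the IP/hostname pairs come from the node table above):

```shell
# Sketch: verify that a hosts file maps each member hostname to the expected IP.
check_hosts() {
  file=$1
  status=0
  for pair in "192.168.1.101 zlm2" "192.168.1.102 zlm3" "192.168.1.103 zlm4"; do
    ip=${pair%% *}
    host=${pair##* }
    # Take the address of the first non-comment line whose alias list contains the hostname.
    found=$(awk -v h="$host" '$1 !~ /^#/ { for (i = 2; i <= NF; i++) if ($i == h) { print $1; exit } }' "$file")
    if [ "$found" = "$ip" ]; then
      echo "$host -> $found OK"
    else
      echo "$host MISMATCH: expected $ip, got ${found:-nothing}"
      status=1
    fi
  done
  return $status
}

# Usage: check_hosts /etc/hosts
```

Run it on every node; a MISMATCH line means the node will later fail to reach that peer by name.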
##Check the parameters in my.cnf that Group Replication needs on server zlm2.
[root@zlm2 :: ~]
#vim /data/mysql/mysql3306/my.cnf
... -- The other parameters are omitted.
#group replication -- The parameters below are required by Group Replication.
server_id=1013306 -- Must be unique on each node; see the node table above.
gtid_mode=ON -- Group Replication relies on GTID, so it must be set to "ON".
enforce_gtid_consistency=ON
master_info_repository=TABLE
relay_log_info_repository=TABLE
binlog_checksum=NONE
log_slave_updates=ON -- Makes sure the GTID information is written into the binary logs instead of only the mysql.gtid_executed table.
log_bin=binlog
binlog_format=ROW
transaction_write_set_extraction=XXHASH64
loose-group_replication_group_name="ed142e35-6ed1-11e8-86c6-080027de0e0e" -- A UUID, which can be generated by SELECT UUID();
loose-group_replication_start_on_boot=off -- Set it to "on" only after the Group Replication configuration is finished.
loose-group_replication_local_address= "zlm2:33061"
loose-group_replication_group_seeds= "zlm2:33061,zlm3:33062,zlm4:33063" -- Candidate members of the group; the port can differ from the mysqld port.
loose-group_replication_bootstrap_group=off -- Notice: it should be set to "on" only on the member that creates the group and starts first.

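After restarting mysqld, it's worth confirming that these settings actually took effect, because the loose- prefix makes the server silently ignore the group_replication options while the plugin isn't loaded yet. A quick sanity check (standard system-variable syntax; nothing here is specific to this setup):

```sql
-- Check the GTID and binlog prerequisites in one shot.
SELECT @@server_id, @@gtid_mode, @@enforce_gtid_consistency,
       @@log_bin, @@binlog_format, @@transaction_write_set_extraction;

-- Once the plugin is installed, its variables become visible too.
SHOW VARIABLES LIKE 'group_replication%';
```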
##Restart mysqld and create the replication user for Group Replication.
(root@localhost mysql3306.sock)[(none)]::>SET SQL_LOG_BIN=0; -- Keep the user-creation statements out of the binary log.
Query OK, rows affected (0.00 sec)

(root@localhost mysql3306.sock)[(none)]::>CREATE USER rpl_mgr@'%' IDENTIFIED BY 'rpl4mgr';
Query OK, rows affected (0.00 sec)

(root@localhost mysql3306.sock)[(none)]::>GRANT REPLICATION SLAVE ON *.* TO rpl_mgr@'%';
Query OK, rows affected (0.00 sec)

(root@localhost mysql3306.sock)[(none)]::>FLUSH PRIVILEGES;
Query OK, rows affected (0.00 sec)

(root@localhost mysql3306.sock)[(none)]::>SET SQL_LOG_BIN=1;
Query OK, rows affected (0.00 sec)

(root@localhost mysql3306.sock)[(none)]::>CHANGE MASTER TO MASTER_USER='rpl_mgr', MASTER_PASSWORD='rpl4mgr' FOR CHANNEL 'group_replication_recovery'; -- The channel name is fixed and cannot be changed.
Query OK, rows affected, warnings (0.03 sec)

##Install the Group Replication plugin.
(root@localhost mysql3306.sock)[(none)]::>INSTALL PLUGIN group_replication SONAME 'group_replication.so';
Query OK, rows affected (0.03 sec)

(root@localhost mysql3306.sock)[(none)]::>show plugins;
+----------------------------+----------+--------------------+----------------------+---------+
| Name | Status | Type | Library | License |
+----------------------------+----------+--------------------+----------------------+---------+
| binlog | ACTIVE | STORAGE ENGINE | NULL | GPL |
| mysql_native_password | ACTIVE | AUTHENTICATION | NULL | GPL |
| sha256_password | ACTIVE | AUTHENTICATION | NULL | GPL |
| PERFORMANCE_SCHEMA | ACTIVE | STORAGE ENGINE | NULL | GPL |
| MRG_MYISAM | ACTIVE | STORAGE ENGINE | NULL | GPL |
| MEMORY | ACTIVE | STORAGE ENGINE | NULL | GPL |
| InnoDB | ACTIVE | STORAGE ENGINE | NULL | GPL |
| INNODB_TRX | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_LOCKS | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_LOCK_WAITS | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_CMP | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_CMP_RESET | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_CMPMEM | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_CMPMEM_RESET | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_CMP_PER_INDEX | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_CMP_PER_INDEX_RESET | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_BUFFER_PAGE | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_BUFFER_PAGE_LRU | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_BUFFER_POOL_STATS | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_TEMP_TABLE_INFO | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_METRICS | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_FT_DEFAULT_STOPWORD | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_FT_DELETED | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_FT_BEING_DELETED | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_FT_CONFIG | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_FT_INDEX_CACHE | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_FT_INDEX_TABLE | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_SYS_TABLES | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_SYS_TABLESTATS | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_SYS_INDEXES | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_SYS_COLUMNS | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_SYS_FIELDS | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_SYS_FOREIGN | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_SYS_FOREIGN_COLS | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_SYS_TABLESPACES | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_SYS_DATAFILES | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| INNODB_SYS_VIRTUAL | ACTIVE | INFORMATION SCHEMA | NULL | GPL |
| CSV | ACTIVE | STORAGE ENGINE | NULL | GPL |
| MyISAM | ACTIVE | STORAGE ENGINE | NULL | GPL |
| ARCHIVE | ACTIVE | STORAGE ENGINE | NULL | GPL |
| partition | ACTIVE | STORAGE ENGINE | NULL | GPL |
| BLACKHOLE | ACTIVE | STORAGE ENGINE | NULL | GPL |
| FEDERATED | DISABLED | STORAGE ENGINE | NULL | GPL |
| ngram | ACTIVE | FTPARSER | NULL | GPL |
| group_replication | ACTIVE | GROUP REPLICATION | group_replication.so | GPL |
+----------------------------+----------+--------------------+----------------------+---------+
rows in set (0.00 sec)

(root@localhost mysql3306.sock)[(none)]::>select * from performance_schema.replication_group_members;
+---------------------------+-----------+-------------+-------------+--------------+
| CHANNEL_NAME | MEMBER_ID | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
+---------------------------+-----------+-------------+-------------+--------------+
| group_replication_applier | | | NULL | OFFLINE | -- There's already a record here after installing the plugin.
+---------------------------+-----------+-------------+-------------+--------------+
row in set (0.00 sec)

##Make server zlm2 the seed member of the group, then start Group Replication.
(root@localhost mysql3306.sock)[(none)]::>SET GLOBAL group_replication_bootstrap_group=ON; -- This should be set to "ON" only once, on the member that bootstraps the group.
Query OK, rows affected (0.00 sec)

(root@localhost mysql3306.sock)[(none)]::>START GROUP_REPLICATION;
Query OK, rows affected (2.05 sec)

(root@localhost mysql3306.sock)[(none)]::>SET GLOBAL group_replication_bootstrap_group=OFF; -- Disable it after starting.
Query OK, rows affected (0.00 sec)

(root@localhost mysql3306.sock)[(none)]::>select * from performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| CHANNEL_NAME | MEMBER_ID | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| group_replication_applier | 1b7181ee-6eaf-11e8-998e-080027de0e0e | zlm2 | | ONLINE | -- There's now one member in the group.
+---------------------------+--------------------------------------+-------------+-------------+--------------+
row in set (0.00 sec)

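MGR runs in single-primary mode by default, so it can be handy to know which member currently holds the primary role. In 5.7 the plugin exposes this as a status variable (standard plugin interface, not specific to this setup):

```sql
-- Empty in multi-primary mode; otherwise the primary's member UUID.
SHOW STATUS LIKE 'group_replication_primary_member';

-- Map that UUID back to a host via the membership table.
SELECT MEMBER_ID, MEMBER_HOST, MEMBER_STATE
  FROM performance_schema.replication_group_members;
```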
##Let's do some operations on server zlm2.
(root@localhost mysql3306.sock)[(none)]::>show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| performance_schema |
| sys |
+--------------------+
rows in set (0.00 sec)

(root@localhost mysql3306.sock)[(none)]::>create database zlm;
Query OK, row affected (0.00 sec)

(root@localhost mysql3306.sock)[(none)]::>use zlm;
Database changed
(root@localhost mysql3306.sock)[zlm]::>create table test_mgr (id int primary key, name char() not null);
Query OK, rows affected (0.02 sec)

(root@localhost mysql3306.sock)[zlm]::>insert into test_mgr VALUES (, 'aaron8219');
Query OK, row affected (0.01 sec)

(root@localhost mysql3306.sock)[zlm]::>select * from test_mgr;
+----+-----------+
| id | name |
+----+-----------+
| | aaron8219 |
+----+-----------+
row in set (0.00 sec)

(root@localhost mysql3306.sock)[zlm]::>show binlog events;
+---------------+------+----------------+-----------+-------------+-------------------------------------------------------------------------------+
| Log_name | Pos | Event_type | Server_id | End_log_pos | Info |
+---------------+------+----------------+-----------+-------------+-------------------------------------------------------------------------------+
| binlog. | | Format_desc | | | Server ver: 5.7.-log, Binlog ver: |
| binlog. | | Previous_gtids | | | |
| binlog. | | Gtid | | | SET @@SESSION.GTID_NEXT= 'ed142e35-6ed1-11e8-86c6-080027de0e0e:1' |
| binlog. | | Query | | | BEGIN |
| binlog. | | View_change | | | view_id=: |
| binlog. | | Query | | | COMMIT |
| binlog. | | Gtid | | | SET @@SESSION.GTID_NEXT= 'ed142e35-6ed1-11e8-86c6-080027de0e0e:2' |
| binlog. | | Query | | | create database zlm |
| binlog. | | Gtid | | | SET @@SESSION.GTID_NEXT= 'ed142e35-6ed1-11e8-86c6-080027de0e0e:3' |
| binlog. | | Query | | | use `zlm`; create table test_mgr (id int primary key, name char() not null) |
| binlog. | | Gtid | | | SET @@SESSION.GTID_NEXT= 'ed142e35-6ed1-11e8-86c6-080027de0e0e:4' |
| binlog. | | Query | | | BEGIN |
| binlog. | | Table_map | | | table_id: (zlm.test_mgr) |
| binlog. | | Write_rows | | | table_id: flags: STMT_END_F |
| binlog. | | Xid | | | COMMIT /* xid=59 */ |
+---------------+------+----------------+-----------+-------------+-------------------------------------------------------------------------------+
rows in set (0.00 sec)

(root@localhost mysql3306.sock)[zlm]::>

##Configure the other two servers the same way as server zlm2:
-- Omitted.

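The per-node my.cnf files differ only in the identity settings. Based on the node table above, the zlm3 and zlm4 variants would look roughly like this (a sketch, since the actual files are omitted):

```ini
# zlm3 (/data/mysql/mysql3306/my.cnf)
server_id=1023306
loose-group_replication_local_address= "zlm3:33062"

# zlm4 (/data/mysql/mysql3306/my.cnf)
server_id=1033306
loose-group_replication_local_address= "zlm4:33063"

# group_name, group_seeds and the remaining parameters stay identical on all three nodes.
```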
##Start Group Replication on server zlm3.
(root@localhost mysql3306.sock)[(none)]::>START GROUP_REPLICATION;
ERROR (HY000): The server is not configured properly to be an active member of the group. Please see more details on error log.
(root@localhost mysql3306.sock)[(none)]::>select * from performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| CHANNEL_NAME | MEMBER_ID | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| group_replication_applier | 5c77c31b-4add-11e8-81e2-080027de0e0e | zlm3 | | OFFLINE |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
row in set (0.00 sec)

##Something went wrong when executing "START GROUP_REPLICATION;": server zlm3 did not join the group created by server zlm2.
The error log shows the following:
--13T07::.249829Z [Note] mysqld (mysqld 5.7.-log) starting as process ...
--13T07::.256669Z [Note] InnoDB: PUNCH HOLE support available
--13T07::.256701Z [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
--13T07::.256705Z [Note] InnoDB: Uses event mutexes
--13T07::.256708Z [Note] InnoDB: GCC builtin __sync_synchronize() is used for memory barrier
--13T07::.256708Z [Note] InnoDB: Compressed tables use zlib 1.2.
--13T07::.256708Z [Note] InnoDB: Using Linux native AIO
--13T07::.256708Z [Note] InnoDB: Number of pools:
--13T07::.256718Z [Note] InnoDB: Using CPU crc32 instructions
--13T07::.258124Z [Note] InnoDB: Initializing buffer pool, total size = 100M, instances = , chunk size = 100M
--13T07::.263012Z [Note] InnoDB: Completed initialization of buffer pool
--13T07::.264222Z [Note] InnoDB: If the mysqld execution user is authorized, page cleaner thread priority can be changed. See the man page of setpriority().
--13T07::.289331Z [Note] InnoDB: Highest supported file format is Barracuda.
--13T07::.475746Z [Note] InnoDB: Creating shared tablespace for temporary tables
--13T07::.475831Z [Note] InnoDB: Setting file './ibtmp1' size to MB. Physically writing the file full; Please wait ...
--13T07::.781737Z [Note] InnoDB: File './ibtmp1' size is now MB.
--13T07::.782469Z [Note] InnoDB: redo rollback segment(s) found. redo rollback segment(s) are active.
--13T07::.782482Z [Note] InnoDB: non-redo rollback segment(s) are active.
--13T07::.783403Z [Note] InnoDB: Waiting for purge to start
--13T07::.960368Z [Note] InnoDB: 5.7. started; log sequence number
--13T07::.960713Z [Note] Plugin 'FEDERATED' is disabled.
--13T07::.964346Z [Note] InnoDB: Loading buffer pool(s) from /data/mysql/mysql3306/data/ib_buffer_pool
--13T07::.968486Z [Warning] unknown variable 'loose_tokudb_cache_size=100M'
--13T07::.968509Z [Warning] unknown variable 'loose_tokudb_directio=ON'
--13T07::.968511Z [Warning] unknown variable 'loose_tokudb_fsync_log_period=1000'
--13T07::.968513Z [Warning] unknown variable 'loose_tokudb_commit_sync=0'
--13T07::.968515Z [Warning] unknown variable 'loose-group_replication_group_name=a5e7836a-6edc-11e8-a20d-080027de0e0e'
--13T07::.968516Z [Warning] unknown variable 'loose-group_replication_start_on_boot=off'
--13T07::.968518Z [Warning] unknown variable 'loose-group_replication_local_address=zlm3:33062'
--13T07::.968520Z [Warning] unknown variable 'loose-group_replication_group_seeds=zlm2:33061,zlm3:33062,zlm4:33063'
--13T07::.968521Z [Warning] unknown variable 'loose-group_replication_bootstrap_group=off'
--13T07::.983518Z [Warning] Failed to set up SSL because of the following SSL library error: SSL context is not usable without certificate and private key
--13T07::.983631Z [Note] Server hostname (bind-address): '*'; port:
--13T07::.983667Z [Note] IPv6 is available.
--13T07::.983673Z [Note] - '::' resolves to '::';
--13T07::.983690Z [Note] Server socket created on IP: '::'.
--13T07::.036682Z [Note] Event Scheduler: Loaded events
--13T07::.037391Z [Note] mysqld: ready for connections.
Version: '5.7.21-log' socket: '/tmp/mysql3306.sock' port: MySQL Community Server (GPL)
--13T07::.083468Z [Note] InnoDB: Buffer pool(s) load completed at ::
--13T08::.631676Z [Note] Aborted connection to db: 'unconnected' user: 'root' host: 'localhost' (Got timeout reading communication packets)
--13T08::.693094Z [Note] Aborted connection to db: 'unconnected' user: 'root' host: 'localhost' (Got timeout reading communication packets)
--13T08::.529090Z [Note] Plugin group_replication reported: 'Group communication SSL configuration: group_replication_ssl_mode: "DISABLED"'
--13T08::.529197Z [Note] Plugin group_replication reported: '[GCS] Added automatically IP ranges 10.0.2.15/24,127.0.0.1/8,192.168.1.102/24 to the whitelist'
--13T08::.529394Z [Note] Plugin group_replication reported: '[GCS] Translated 'zlm3' to 192.168.1.102'
--13T08::.529486Z [Warning] Plugin group_replication reported: '[GCS] Automatically adding IPv4 localhost address to the whitelist. It is mandatory that it is added.'
--13T08::.531296Z [Note] Plugin group_replication reported: '[GCS] SSL was not enabled'
--13T08::.531336Z [Note] Plugin group_replication reported: 'Initialized group communication with configuration: group_replication_group_name: "a5e7836a-6edc-11e8-a20d-080027de0e0e"; group_replication_local_address: "zlm3:33062"; group_replication_group_seeds: "zlm2:33061,zlm3:33062,zlm4:33063"; group_replication_bootstrap_group: false; group_replication_poll_spin_loops: 0; group_replication_compression_threshold: 1000000; group_replication_ip_whitelist: "AUTOMATIC"'
--13T08::.531375Z [Note] Plugin group_replication reported: 'Member configuration: member_id: 1023306; member_uuid: "5c77c31b-4add-11e8-81e2-080027de0e0e"; single-primary mode: "true"; group_replication_auto_increment_increment: 7; '
--13T08::.549240Z [Note] 'CHANGE MASTER TO FOR CHANNEL 'group_replication_applier' executed'. Previous state master_host='', master_port= , master_log_file='', master_log_pos= , master_bind=''. New state master_host='<NULL>', master_port= , master_log_file='', master_log_pos= , master_bind=''.
--13T08::.568485Z [Note] Slave SQL thread for channel 'group_replication_applier' initialized, starting replication in log 'FIRST' at position , relay log './relay-bin-group_replication_applier.000001' position:
--13T08::.569516Z [Note] Plugin group_replication reported: 'Group Replication applier module successfully initialized!'
--13T08::.569528Z [Note] Plugin group_replication reported: 'auto_increment_increment is set to 7'
--13T08::.569531Z [Note] Plugin group_replication reported: 'auto_increment_offset is set to 1023306'
--13T08::.569631Z [Note] Plugin group_replication reported: 'state 0 action xa_init'
--13T08::.589865Z [Note] Plugin group_replication reported: 'Successfully bound to 0.0.0.0:33062 (socket=62).'
--13T08::.589970Z [Note] Plugin group_replication reported: 'Successfully set listen backlog to 32 (socket=62)!'
--13T08::.590011Z [Note] Plugin group_replication reported: 'Successfully unblocked socket (socket=62)!'
--13T08::.590098Z [Note] Plugin group_replication reported: 'Ready to accept incoming connections on 0.0.0.0:33062 (socket=62)!'
--13T08::.590549Z [Note] Plugin group_replication reported: 'connecting to zlm3 33062'
--13T08::.590788Z [Note] Plugin group_replication reported: 'client connected to zlm3 33062 fd 63'
--13T08::.593734Z [Note] Plugin group_replication reported: 'connecting to zlm3 33062'
--13T08::.593853Z [Note] Plugin group_replication reported: 'client connected to zlm3 33062 fd 65'
--13T08::.593966Z [Note] Plugin group_replication reported: 'connecting to zlm3 33062'
--13T08::.594016Z [Note] Plugin group_replication reported: 'client connected to zlm3 33062 fd 67'
--13T08::.595449Z [Note] Plugin group_replication reported: 'connecting to zlm3 33062'
--13T08::.595554Z [Note] Plugin group_replication reported: 'client connected to zlm3 33062 fd 60'
--13T08::.595792Z [Note] Plugin group_replication reported: 'connecting to zlm3 33062'
--13T08::.595887Z [Note] Plugin group_replication reported: 'client connected to zlm3 33062 fd 70'
--13T08::.596009Z [Note] Plugin group_replication reported: 'connecting to zlm3 33062'
--13T08::.596069Z [Note] Plugin group_replication reported: 'client connected to zlm3 33062 fd 72'
--13T08::.596168Z [Note] Plugin group_replication reported: 'connecting to zlm2 33061'
--13T08::.596594Z [Note] Plugin group_replication reported: 'Getting the peer name failed while connecting to server zlm2 with error 113 -No route to host.'
--13T08::.596622Z [ERROR] Plugin group_replication reported: '[GCS] Error on opening a connection to zlm2:33061 on local port: 33062.'
--13T08::.596629Z [Note] Plugin group_replication reported: 'connecting to zlm4 33063'
--13T08::.596947Z [Note] Plugin group_replication reported: 'Getting the peer name failed while connecting to server zlm4 with error 111 -Connection refused.'
--13T08::.596965Z [ERROR] Plugin group_replication reported: '[GCS] Error on opening a connection to zlm4:33063 on local port: 33062.'
--13T08::.596971Z [Note] Plugin group_replication reported: 'connecting to zlm2 33061'
--13T08::.597300Z [Note] Plugin group_replication reported: 'Getting the peer name failed while connecting to server zlm2 with error 113 -No route to host.'
--13T08::.597314Z [ERROR] Plugin group_replication reported: '[GCS] Error on opening a connection to zlm2:33061 on local port: 33062.'
--13T08::.597320Z [Note] Plugin group_replication reported: 'connecting to zlm4 33063'
--13T08::.597547Z [Note] Plugin group_replication reported: 'Getting the peer name failed while connecting to server zlm4 with error 111 -Connection refused.'
--13T08::.597568Z [ERROR] Plugin group_replication reported: '[GCS] Error on opening a connection to zlm4:33063 on local port: 33062.'
--13T08::.597582Z [Note] Plugin group_replication reported: 'connecting to zlm2 33061'
--13T08::.597931Z [Note] Plugin group_replication reported: 'Getting the peer name failed while connecting to server zlm2 with error 113 -No route to host.'
--13T08::.597960Z [ERROR] Plugin group_replication reported: '[GCS] Error on opening a connection to zlm2:33061 on local port: 33062.'
--13T08::.597966Z [Note] Plugin group_replication reported: 'connecting to zlm4 33063'
--13T08::.598270Z [Note] Plugin group_replication reported: 'Getting the peer name failed while connecting to server zlm4 with error 111 -Connection refused.'
--13T08::.598297Z [ERROR] Plugin group_replication reported: '[GCS] Error on opening a connection to zlm4:33063 on local port: 33062.'
--13T08::.598303Z [Note] Plugin group_replication reported: 'connecting to zlm2 33061'
--13T08::.598561Z [Note] Plugin group_replication reported: 'Getting the peer name failed while connecting to server zlm2 with error 113 -No route to host.'
--13T08::.598583Z [ERROR] Plugin group_replication reported: '[GCS] Error on opening a connection to zlm2:33061 on local port: 33062.'
--13T08::.598590Z [Note] Plugin group_replication reported: 'connecting to zlm4 33063'
--13T08::.598849Z [Note] Plugin group_replication reported: 'Getting the peer name failed while connecting to server zlm4 with error 111 -Connection refused.'
--13T08::.598876Z [ERROR] Plugin group_replication reported: '[GCS] Error on opening a connection to zlm4:33063 on local port: 33062.'
--13T08::.598882Z [Note] Plugin group_replication reported: 'connecting to zlm2 33061'
--13T08::.599181Z [Note] Plugin group_replication reported: 'Getting the peer name failed while connecting to server zlm2 with error 113 -No route to host.'
--13T08::.599199Z [ERROR] Plugin group_replication reported: '[GCS] Error on opening a connection to zlm2:33061 on local port: 33062.'
--13T08::.599205Z [Note] Plugin group_replication reported: 'connecting to zlm4 33063'
--13T08::.599519Z [Note] Plugin group_replication reported: 'Getting the peer name failed while connecting to server zlm4 with error 111 -Connection refused.'
--13T08::.599549Z [ERROR] Plugin group_replication reported: '[GCS] Error on opening a connection to zlm4:33063 on local port: 33062.'
--13T08::.599572Z [Note] Plugin group_replication reported: 'connecting to zlm2 33061'
--13T08::.599884Z [Note] Plugin group_replication reported: 'Getting the peer name failed while connecting to server zlm2 with error 113 -No route to host.'
--13T08::.599896Z [ERROR] Plugin group_replication reported: '[GCS] Error on opening a connection to zlm2:33061 on local port: 33062.'
--13T08::.599901Z [Note] Plugin group_replication reported: 'connecting to zlm4 33063'
--13T08::.600125Z [Note] Plugin group_replication reported: 'Getting the peer name failed while connecting to server zlm4 with error 111 -Connection refused.'
--13T08::.600139Z [ERROR] Plugin group_replication reported: '[GCS] Error on opening a connection to zlm4:33063 on local port: 33062.'
--13T08::.600145Z [Note] Plugin group_replication reported: 'connecting to zlm2 33061'
--13T08::.600879Z [Note] Plugin group_replication reported: 'Getting the peer name failed while connecting to server zlm2 with error 113 -No route to host.'
--13T08::.600930Z [ERROR] Plugin group_replication reported: '[GCS] Error on opening a connection to zlm2:33061 on local port: 33062.'
--13T08::.600938Z [Note] Plugin group_replication reported: 'connecting to zlm4 33063'
--13T08::.601423Z [Note] Plugin group_replication reported: 'Getting the peer name failed while connecting to server zlm4 with error 111 -Connection refused.'
--13T08::.601449Z [ERROR] Plugin group_replication reported: '[GCS] Error on opening a connection to zlm4:33063 on local port: 33062.'
--13T08::.601464Z [Note] Plugin group_replication reported: 'connecting to zlm2 33061'
--13T08::.604719Z [Note] Plugin group_replication reported: 'Getting the peer name failed while connecting to server zlm2 with error 113 -No route to host.'
--13T08::.604760Z [ERROR] Plugin group_replication reported: '[GCS] Error on opening a connection to zlm2:33061 on local port: 33062.'
--13T08::.604768Z [Note] Plugin group_replication reported: 'connecting to zlm4 33063'
--13T08::.605086Z [Note] Plugin group_replication reported: 'Getting the peer name failed while connecting to server zlm4 with error 111 -Connection refused.'
--13T08::.605103Z [ERROR] Plugin group_replication reported: '[GCS] Error on opening a connection to zlm4:33063 on local port: 33062.'
--13T08::.605110Z [Note] Plugin group_replication reported: 'connecting to zlm2 33061'
--13T08::.606780Z [Note] Plugin group_replication reported: 'Getting the peer name failed while connecting to server zlm2 with error 113 -No route to host.'
--13T08::.606820Z [ERROR] Plugin group_replication reported: '[GCS] Error on opening a connection to zlm2:33061 on local port: 33062.'
--13T08::.606828Z [Note] Plugin group_replication reported: 'connecting to zlm4 33063'
--13T08::.607219Z [Note] Plugin group_replication reported: 'Getting the peer name failed while connecting to server zlm4 with error 111 -Connection refused.'
--13T08::.607232Z [ERROR] Plugin group_replication reported: '[GCS] Error on opening a connection to zlm4:33063 on local port: 33062.'
--13T08::.607237Z [Note] Plugin group_replication reported: 'connecting to zlm2 33061'
--13T08::.608667Z [Note] Plugin group_replication reported: 'Getting the peer name failed while connecting to server zlm2 with error 113 -No route to host.'
--13T08::.608702Z [ERROR] Plugin group_replication reported: '[GCS] Error on opening a connection to zlm2:33061 on local port: 33062.'
--13T08::.608710Z [Note] Plugin group_replication reported: 'connecting to zlm4 33063'
--13T08::.609062Z [Note] Plugin group_replication reported: 'Getting the peer name failed while connecting to server zlm4 with error 111 -Connection refused.'
--13T08::.609080Z [ERROR] Plugin group_replication reported: '[GCS] Error on opening a connection to zlm4:33063 on local port: 33062.'
--13T08::.609086Z [ERROR] Plugin group_replication reported: '[GCS] Error connecting to all peers. Member join failed. Local port: 33062'
--13T08::.609134Z [Note] Plugin group_replication reported: 'state 4338 action xa_terminate'
--13T08::.609141Z [Note] Plugin group_replication reported: 'new state x_start'
--13T08::.609143Z [Note] Plugin group_replication reported: 'state 4338 action xa_exit'
--13T08::.609182Z [Note] Plugin group_replication reported: 'Exiting xcom thread'
--13T08::.609186Z [Note] Plugin group_replication reported: 'new state x_start'
--13T08::.618446Z [Warning] Plugin group_replication reported: 'read failed'
--13T08::.618546Z [ERROR] Plugin group_replication reported: '[GCS] The member was unable to join the group. Local port: 33062'
--13T08::.570227Z [ERROR] Plugin group_replication reported: 'Timeout on wait for view after joining group'
--13T08::.570326Z [Note] Plugin group_replication reported: 'Requesting to leave the group despite of not being a member'
--13T08::.570364Z [ERROR] Plugin group_replication reported: '[GCS] The member is leaving a group without being on one.'
--13T08::.570551Z [Note] Plugin group_replication reported: 'auto_increment_increment is reset to 1'
--13T08::.570559Z [Note] Plugin group_replication reported: 'auto_increment_offset is reset to 1'
--13T08::.570655Z [Note] Error reading relay log event for channel 'group_replication_applier': slave SQL thread was killed
--13T08::.570836Z [Note] Plugin group_replication reported: 'The group replication applier thread was killed'

##Finally, I found out that the firewall was still enabled on server zlm2.
[root@zlm2 :: ~]
#systemctl status firewalld
firewalld.service - firewalld - dynamic firewall daemon
Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled)
Active: active (running) since Wed -- :: CEST; 7h ago
Main PID: (firewalld)
CGroup: /system.slice/firewalld.service
└─ /usr/bin/python -Es /usr/sbin/firewalld --nofork --nopid

Jun :: localhost.localdomain systemd[]: Started firewalld - dynamic firewall daemon.

[root@zlm2 :: ~]
#systemctl stop firewalld

[root@zlm2 :: ~]
#systemctl disable firewalld
rm '/etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service'
rm '/etc/systemd/system/basic.target.wants/firewalld.service'

[root@zlm2 :: ~]
#systemctl status firewalld
firewalld.service - firewalld - dynamic firewall daemon
Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled)
Active: inactive (dead)

Jun :: localhost.localdomain systemd[]: Starting firewalld - dynamic firewall daemon...
Jun :: localhost.localdomain systemd[]: Started firewalld - dynamic firewall daemon.
Jun :: zlm2 systemd[]: Stopping firewalld - dynamic firewall daemon...
Jun :: zlm2 systemd[]: Stopped firewalld - dynamic firewall daemon.

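Disabling firewalld outright is fine for a test environment; an alternative that keeps the firewall up is to open just the ports MGR actually uses. A sketch using standard firewall-cmd options (port numbers taken from the node table above; run on each node for its own ports):

```
firewall-cmd --permanent --add-port=3306/tcp
firewall-cmd --permanent --add-port=33061/tcp
firewall-cmd --permanent --add-port=33062/tcp
firewall-cmd --permanent --add-port=33063/tcp
firewall-cmd --reload
```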
##Start Group Replication again.
(root@localhost mysql3306.sock)[(none)]::>START GROUP_REPLICATION;
ERROR (HY000): The server is not configured properly to be an active member of the group. Please see more details on error log.
(root@localhost mysql3306.sock)[(none)]::>

--13T08::.361028Z [ERROR] Plugin group_replication reported: '[GCS] Timeout while waiting for the group communication engine to be ready!'
--13T08::.361070Z [ERROR] Plugin group_replication reported: '[GCS] The group communication engine is not ready for the member to join. Local port: 33062'
--13T08::.361171Z [Note] Plugin group_replication reported: 'state 4338 action xa_terminate'
--13T08::.361185Z [Note] Plugin group_replication reported: 'new state x_start'
--13T08::.361188Z [Note] Plugin group_replication reported: 'state 4338 action xa_exit'
--13T08::.361254Z [Note] Plugin group_replication reported: 'Exiting xcom thread'
--13T08::.361258Z [Note] Plugin group_replication reported: 'new state x_start'
--13T08::.371810Z [Warning] Plugin group_replication reported: 'read failed'
--13T08::.387635Z [ERROR] Plugin group_replication reported: '[GCS] The member was unable to join the group. Local port: 33062'
--13T08::.349695Z [ERROR] Plugin group_replication reported: 'Timeout on wait for view after joining group'
--13T08::.349732Z [Note] Plugin group_replication reported: 'Requesting to leave the group despite of not being a member'
--13T08::.349745Z [ERROR] Plugin group_replication reported: '[GCS] The member is leaving a group without being on one.'
--13T08::.349969Z [Note] Plugin group_replication reported: 'auto_increment_increment is reset to 1'
--13T08::.349975Z [Note] Plugin group_replication reported: 'auto_increment_offset is reset to 1'
--13T08::.350079Z [Note] Error reading relay log event for channel 'group_replication_applier': slave SQL thread was killed
--13T08::.350240Z [Note] Plugin group_replication reported: 'The group replication applier thread was killed'
The other two servers (zlm3 and zlm4) still cannot join the group created by zlm2. I haven't figured out what's wrong with it yet; I'll test it again later.
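The "[GCS] Timeout while waiting for the group communication engine" errors above usually mean the seed ports are not reachable between the nodes. Before re-testing, a quick sketch like the following can confirm TCP reachability of each seed address (hostnames and ports taken from the node table above):

```python
import socket

def port_reachable(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers connection refused, timeouts, and DNS resolution failures.
        return False

# Seed addresses of the three MGR members.
seeds = [("zlm2", 33061), ("zlm3", 33062), ("zlm4", 33063)]
for host, port in seeds:
    state = "open" if port_reachable(host, port) else "unreachable"
    print(f"{host}:{port} -> {state}")
```

Run it from each member; every seed port should report "open" before START GROUP_REPLICATION is retried.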
 
