1. MongoDB Sharding + Replica Sets

A robust cluster design

Multiple config servers, multiple mongos servers, every shard deployed as a replica set, and a correctly set write concern w (see the sketch after the notes below)

Architecture diagram

Notes:

1. This lab runs everything on a single machine, starting separate mongod instances with different ports and dbpath directories.

2. Nine mongod instances in total, arranged into three replica sets (shard1, shard2, shard3), each with one primary and two secondaries.

3. The number of mongos processes is not limited. It is recommended to run a mongos on each application server, so that every application server talks to its own local mongos; if one server goes down, it does not affect the other application servers talking to their own mongos instances.

4. This lab simulates two application servers (two mongos services).

5. In production every shard should be a replica set, so that the failure of a single server does not take the whole shard offline.
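On "correctly set w": w is the write concern, the number of replica set members that must acknowledge a write before it is reported successful. A minimal sketch in the mongo shell (the collection name is just the one reused for test data later in this post):

// Wait for a majority of the shard's replica set to acknowledge the write,
// giving up with an error after 5 seconds rather than blocking forever.
db.yujx.insert(
    { "id" : 1 },
    { writeConcern : { w : "majority", wtimeout : 5000 } }
)

With three data-bearing members per shard, w:"majority" means two members, which keeps a write durable across the loss of any single server.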

Deployment environment

Create the directories

[root@Master cluster2]# mkdir -p shard{1,2,3}/node{1,2,3}
[root@Master cluster2]# mkdir -p shard{1,2,3}/logs
[root@Master cluster2]# ls shard*
shard1:
logs  node1  node2  node3

shard2:
logs  node1  node2  node3

shard3:
logs  node1  node2  node3
[root@Master cluster2]# mkdir -p config/logs
[root@Master cluster2]# mkdir -p config/node{1,2,3}
[root@Master cluster2]# ls config/
logs  node1  node2  node3

[root@Master cluster2]# mkdir -p mongos/logs

Start the config servers

Config server plan:

dbpath                        logpath                                port
/data/mongodb/config/node1    /data/mongodb/config/logs/node1.log    10000
/data/mongodb/config/node2    /data/mongodb/config/logs/node2.log    20000
/data/mongodb/config/node3    /data/mongodb/config/logs/node3.log    30000

# Start the three config servers as planned: the same as starting a single config server, just repeated three times

[root@Master cluster2]# mongod --dbpath config/node1 --logpath config/logs/node1.log --logappend --fork --port 10000
[root@Master cluster2]# mongod --dbpath config/node2 --logpath config/logs/node2.log --logappend --fork --port 20000
[root@Master cluster2]# mongod --dbpath config/node3 --logpath config/logs/node3.log --logappend --fork --port 30000
[root@Master cluster2]# ps -ef|grep mongod|grep -v grep
mongod   ...  /usr/bin/mongod -f /etc/mongod.conf
root     ...  mongod --dbpath config/node1 --logpath config/logs/node1.log --logappend --fork --port 10000
root     ...  mongod --dbpath config/node2 --logpath config/logs/node2.log --logappend --fork --port 20000
root     ...  mongod --dbpath config/node3 --logpath config/logs/node3.log --logappend --fork --port 30000
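A quick way to confirm each config server came up is to connect and ping it; a minimal check, using the ports from the plan above:

// e.g. mongo --port 10000, then:
db.adminCommand({ ping : 1 })    // a healthy server replies { "ok" : 1 }

Repeat for ports 20000 and 30000.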

Start the routing service

Mongos server plan (a mongos holds no data, so it needs no dbpath):

logpath                                port
/data/mongodb/mongos/logs/node1.log    40000
/data/mongodb/mongos/logs/node2.log    50000

# The number of mongos instances is not limited; usually each application server runs one mongos

[root@Master cluster2]# mongos --port 40000 --configdb localhost:10000,localhost:20000,localhost:30000 --logpath mongos/logs/mongos1.log --logappend --fork
[root@Master cluster2]# mongos --port 50000 --configdb localhost:10000,localhost:20000,localhost:30000 --logpath mongos/logs/mongos2.log --logappend --fork
[root@Master cluster2]# ps -ef|grep mongos|grep -v grep
root     ...  mongos --port 40000 --configdb localhost:10000,localhost:20000,localhost:30000 --logpath mongos/logs/mongos1.log --logappend --fork
root     ...  mongos --port 50000 --configdb localhost:10000,localhost:20000,localhost:30000 --logpath mongos/logs/mongos2.log --logappend --fork
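Either router can be sanity-checked by connecting to it (e.g. mongo --port 40000) and printing the sharding status; since no shards have been added yet, the shard list will still be empty:

// Summarizes the cluster metadata the mongos reads from the config servers
sh.status()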

Configure the replica sets

As planned, configure and start the three replica sets shard1, shard2 and shard3

# shard1 is used below to illustrate the procedure

# Start three mongod processes

[root@Master cluster2]# mongod --replSet shard1 --dbpath shard1/node1 --logpath shard1/logs/node1.log --logappend --fork --port 10001
[root@Master cluster2]# mongod --replSet shard1 --dbpath shard1/node2 --logpath shard1/logs/node2.log --logappend --fork --port 10002
[root@Master cluster2]# mongod --replSet shard1 --dbpath shard1/node3 --logpath shard1/logs/node3.log --logappend --fork --port 10003

# Initialize the replica set shard1

[root@Master cluster2]# mongo --port 10001
MongoDB shell version: 3.0.
connecting to: 127.0.0.1:10001/test
> use admin
switched to db admin
> rsconf={"_id" : "shard1","members" : [{"_id" : 0, "host" : "localhost:10001"}]}
{
    "_id" : "shard1",
    "members" : [
        {
            "_id" : 0,
            "host" : "localhost:10001"
        }
    ]
}
> rs.initiate(rsconf)
{ "ok" : 1 }
shard1:OTHER> rs.add("localhost:10002")
{ "ok" : 1 }
shard1:PRIMARY> rs.add("localhost:10003")
{ "ok" : 1 }
shard1:PRIMARY> rs.conf()
{
    "_id" : "shard1",
    "version" : 3,
    "members" : [
        {
            "_id" : 0,
            "host" : "localhost:10001",
            "arbiterOnly" : false,
            "buildIndexes" : true,
            "hidden" : false,
            "priority" : 1,
            "tags" : {

            },
            "slaveDelay" : 0,
            "votes" : 1
        },
        {
            "_id" : 1,
            "host" : "localhost:10002",
            "arbiterOnly" : false,
            "buildIndexes" : true,
            "hidden" : false,
            "priority" : 1,
            "tags" : {

            },
            "slaveDelay" : 0,
            "votes" : 1
        },
        {
            "_id" : 2,
            "host" : "localhost:10003",
            "arbiterOnly" : false,
            "buildIndexes" : true,
            "hidden" : false,
            "priority" : 1,
            "tags" : {

            },
            "slaveDelay" : 0,
            "votes" : 1
        }
    ],
    "settings" : {
        "chainingAllowed" : true,
        "heartbeatTimeoutSecs" : 10,
        "getLastErrorModes" : {

        },
        "getLastErrorDefaults" : {
            "w" : 1,
            "wtimeout" : 0
        }
    }
}
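As a side note, the same result can be had in one step by passing the full member list to rs.initiate() instead of adding members one at a time; a sketch equivalent to the shard1 setup above:

// Run against any one of the three shard1 mongods, e.g. mongo --port 10001
rs.initiate({
    "_id" : "shard1",
    "members" : [
        { "_id" : 0, "host" : "localhost:10001" },
        { "_id" : 1, "host" : "localhost:10002" },
        { "_id" : 2, "host" : "localhost:10003" }
    ]
})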

Configure shard2 and shard3 as replica sets the same way as shard1

[root@Master cluster2]# mongod --replSet shard2 --dbpath shard2/node1 --logpath shard2/logs/node1.log --logappend --fork --port 20001
[root@Master cluster2]# mongod --replSet shard2 --dbpath shard2/node2 --logpath shard2/logs/node2.log --logappend --fork --port 20002
[root@Master cluster2]# mongod --replSet shard2 --dbpath shard2/node3 --logpath shard2/logs/node3.log --logappend --fork --port 20003
[root@Master cluster2]# mongo --port 20001
> use admin
> rsconf={"_id" : "shard2","members" : [{"_id" : 0, "host" : "localhost:20001"}]}
> rs.initiate(rsconf)
shard2:OTHER> rs.add("localhost:20002")
shard2:PRIMARY> rs.add("localhost:20003")
shard2:PRIMARY> rs.conf()
{
    "_id" : "shard2",
    "version" : 3,
    "members" : [
        { "_id" : 0, "host" : "localhost:20001", ... },
        { "_id" : 1, "host" : "localhost:20002", ... },
        { "_id" : 2, "host" : "localhost:20003", ... }
    ],
    "settings" : { ... }
}

  

[root@Master cluster2]# mongod --replSet shard3 --dbpath shard3/node1 --logpath shard3/logs/node1.log --logappend --fork --port 30001
[root@Master cluster2]# mongod --replSet shard3 --dbpath shard3/node2 --logpath shard3/logs/node2.log --logappend --fork --port 30002
[root@Master cluster2]# mongod --replSet shard3 --dbpath shard3/node3 --logpath shard3/logs/node3.log --logappend --fork --port 30003
[root@Master cluster2]# mongo --port 30001
connecting to: 127.0.0.1:30001/test
> use admin
> rsconf={"_id" : "shard3","members" : [{"_id" : 0, "host" : "localhost:30001"}]}
> rs.initiate(rsconf)
shard3:OTHER> rs.add("localhost:30002")
shard3:PRIMARY> rs.add("localhost:30003")
shard3:PRIMARY> rs.conf()
{
    "_id" : "shard3",
    "version" : 3,
    "members" : [
        { "_id" : 0, "host" : "localhost:30001", ... },
        { "_id" : 1, "host" : "localhost:30002", ... },
        { "_id" : 2, "host" : "localhost:30003", ... }
    ],
    "settings" : { ... }
}

Add the (replica-set) shards

# Connect to a mongos and switch to admin; here you must connect to a router node

[root@Master cluster2]# mongo --port 40000
MongoDB shell version: 3.0.
connecting to: 127.0.0.1:40000/test
mongos> use admin
switched to db admin
mongos> db.runCommand({"addShard":"shard1/localhost:10001"})
{ "shardAdded" : "shard1", "ok" : 1 }
mongos> db.runCommand({"addShard":"shard2/localhost:20001"})
{ "shardAdded" : "shard2", "ok" : 1 }
mongos> db.runCommand({"addShard":"shard3/localhost:30001"})
{ "shardAdded" : "shard3", "ok" : 1 }
mongos> db.runCommand({listshards:1})
{
    "shards" : [
        {
            "_id" : "shard1",
            "host" : "shard1/localhost:10001,localhost:10002,localhost:10003"
        },
        {
            "_id" : "shard2",
            "host" : "shard2/localhost:20001,localhost:20002,localhost:20003"
        },
        {
            "_id" : "shard3",
            "host" : "shard3/localhost:30001,localhost:30002,localhost:30003"
        }
    ],
    "ok" : 1
}
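The sh.addShard() shell helper wraps the same addShard command; naming one member of each replica set is enough, as mongos discovers the remaining members on its own:

// Equivalent to the db.runCommand({"addShard": ...}) calls above
sh.addShard("shard1/localhost:10001")
sh.addShard("shard2/localhost:20001")
sh.addShard("shard3/localhost:30001")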

Enable sharding for databases and collections

Enable sharding on a database with the command

> db.runCommand( { enablesharding : "<database name>" } );

Running this command lets the database span shards; without it, the database stays on a single shard.

Once sharding is enabled on a database, its collections can be placed on different shards, but each individual collection still lives entirely on one shard. To distribute a single collection across shards, the collection needs some extra steps of its own.

# Example: enable sharding for the test database; connect to a mongos process

[root@Master cluster2]# mongo --port 40000
MongoDB shell version: 3.0.
connecting to: 127.0.0.1:40000/test
mongos> use admin
switched to db admin
mongos> db.runCommand({"enablesharding":"test"})
{ "ok" : 1 }

To shard an individual collection, give it a shard key via the following command:

> db.runCommand( { shardcollection : "<collection namespace>", key : { "<field name>" : 1 } });

Note: a. The system automatically creates an index on the shard key of a sharded collection (the user may also create it in advance).

b. A sharded collection may have only one unique index, and it must be on the shard key; other unique indexes are not allowed.
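Per note (a), the shard-key index can be built ahead of time. A sketch, using a hypothetical shard key on the "id" field rather than the _id key used below:

// Hypothetical: shard test.yujx on "id" instead of "_id".
// Build the shard-key index up front (otherwise shardcollection builds it).
db.yujx.createIndex({ "id" : 1 })
// sh.shardCollection() is the helper form of the shardcollection command
sh.shardCollection("test.yujx", { "id" : 1 })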

# Shard the collection test.yujx

mongos> db.runCommand({"shardcollection":"test.yujx","key":{"_id":1}})
{ "collectionsharded" : "test.yujx", "ok" : 1 }

Generate test data

mongos> use test
switched to db test
mongos> for(var i=1;i<=…;i++) db.yujx.save({"id":i,"a":…,"b":…,"c":…})
WriteResult({ "nInserted" : 1 })
mongos> db.yujx.count()

Check how the collection is sharded

mongos> db.yujx.stats()
{
    "sharded" : true,
    "paddingFactorNote" : "paddingFactor is unused and unmaintained in 3.0. It remains hard coded to 1.0 for compatibility only.",
    "userFlags" : …,
    "capped" : false,
    "ns" : "test.yujx",
    "count" : …,
    "numExtents" : …,
    "size" : …,
    "storageSize" : …,
    "totalIndexSize" : …,
    "indexSizes" : {
        "_id_" : …
    },
    "avgObjSize" : …,
    "nindexes" : 1,
    "nchunks" : 3,
    "shards" : {
        "shard1" : {
            "ns" : "test.yujx",
            "count" : …,
            "size" : …,
            ...
            "$gleStats" : {
                "lastOpTime" : Timestamp(…, …),
                "electionId" : ObjectId("55d15366716d7504d5d74c4c")
            }
        },
        "shard2" : {
            "ns" : "test.yujx",
            "count" : …,
            "size" : …,
            ...
            "$gleStats" : {
                "lastOpTime" : Timestamp(…, …),
                "electionId" : ObjectId("55d1543eabed7d6d4a71d25e")
            }
        },
        "shard3" : {
            "ns" : "test.yujx",
            "count" : …,
            "size" : …,
            ...
            "$gleStats" : {
                "lastOpTime" : Timestamp(…, …),
                "electionId" : ObjectId("55d155346f36550e3c5f062c")
            }
        }
    },
    "ok" : 1
}

Check how the database is sharded

mongos> db.printShardingStatus()
--- Sharding Status ---
  sharding version: {
    "_id" : 1,
    "minCompatibleVersion" : 5,
    "currentVersion" : 6,
    "clusterId" : ObjectId("55d152a35348652fbc726a10")
}
  shards:
    { "_id" : "shard1", "host" : "shard1/localhost:10001,localhost:10002,localhost:10003" }
    { "_id" : "shard2", "host" : "shard2/localhost:20001,localhost:20002,localhost:20003" }
    { "_id" : "shard3", "host" : "shard3/localhost:30001,localhost:30002,localhost:30003" }
  balancer:
    Currently enabled: yes
    Currently running: yes
    Balancer lock taken at Sun Aug … (PDT) by Master.Hadoop:…:Balancer
    Failed balancer rounds in last 5 attempts: …
    Migration Results for the last 24 hours:
      … : Success
      … : Failed with error 'could not acquire collection lock for test.yujx to migrate chunk [{ : MinKey },{ : MaxKey }) :: caused by :: Lock for migrating chunk [{ : MinKey }, { : MaxKey }) in test.yujx is taken.', from shard1 to shard2
  databases:
    { "_id" : "admin", "partitioned" : false, "primary" : "config" }
    { "_id" : "test", "partitioned" : true, "primary" : "shard1" }
      test.yujx
        shard key: { "_id" : 1 }
        chunks:
          shard1  1
          shard2  1
          shard3  1
        { "_id" : { "$minKey" : 1 } } -->> { "_id" : ObjectId("55d157cca0c90140e33a9342") } on : shard3 Timestamp(…, …)
        { "_id" : ObjectId("55d157cca0c90140e33a9342") } -->> { "_id" : ObjectId("55d157cca0c90140e33a934a") } on : shard1 Timestamp(…, …)
        { "_id" : ObjectId("55d157cca0c90140e33a934a") } -->> { "_id" : { "$maxKey" : 1 } } on : shard2 Timestamp(…, …)

# Alternatively, query the config database through mongos

mongos> use config
switched to db config
mongos> db.shards.find()
{ "_id" : "shard1", "host" : "shard1/localhost:10001,localhost:10002,localhost:10003" }
{ "_id" : "shard2", "host" : "shard2/localhost:20001,localhost:20002,localhost:20003" }
{ "_id" : "shard3", "host" : "shard3/localhost:30001,localhost:30002,localhost:30003" }
mongos> db.databases.find()
{ "_id" : "admin", "partitioned" : false, "primary" : "config" }
{ "_id" : "test", "partitioned" : true, "primary" : "shard1" }
mongos> db.chunks.find()
{ "_id" : "test.yujx-_id_MinKey", "lastmod" : Timestamp(…, …), "lastmodEpoch" : ObjectId("55d15738679c4d5f9108eba0"), "ns" : "test.yujx", "min" : { "_id" : { "$minKey" : 1 } }, "max" : { "_id" : ObjectId("55d157cca0c90140e33a9342") }, "shard" : "shard3" }
{ "_id" : "test.yujx-_id_ObjectId('55d157cca0c90140e33a9342')", "lastmod" : Timestamp(…, …), "lastmodEpoch" : ObjectId("55d15738679c4d5f9108eba0"), "ns" : "test.yujx", "min" : { "_id" : ObjectId("55d157cca0c90140e33a9342") }, "max" : { "_id" : ObjectId("55d157cca0c90140e33a934a") }, "shard" : "shard1" }
{ "_id" : "test.yujx-_id_ObjectId('55d157cca0c90140e33a934a')", "lastmod" : Timestamp(…, …), "lastmodEpoch" : ObjectId("55d15738679c4d5f9108eba0"), "ns" : "test.yujx", "min" : { "_id" : ObjectId("55d157cca0c90140e33a934a") }, "max" : { "_id" : { "$maxKey" : 1 } }, "shard" : "shard2" }
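The chunk metadata can also be summarized per shard with a small aggregation; a sketch:

// Count test.yujx chunks per owning shard
use config
db.chunks.aggregate([
    { $match : { "ns" : "test.yujx" } },
    { $group : { "_id" : "$shard", "nChunks" : { $sum : 1 } } }
])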

Hashed sharding

MongoDB 2.4 and later support hash-based sharding.

mongos> use admin
mongos> db.runCommand({"enablesharding":"mydb"})
mongos> db.runCommand({"shardcollection":"mydb.mycollection","key":{"_id":"hashed"}})
mongos> use mydb
switched to db mydb
mongos> for(i=…;i<…;i++){ db.mycollection.insert({"Uid":i,"Name":"zhanjindong2","Age":…,"Date":new Date()}); }
WriteResult({ "nInserted" : 1 })
mongos> db.mycollection.stats()
{
    "sharded" : true,
    "paddingFactorNote" : "paddingFactor is unused and unmaintained in 3.0. It remains hard coded to 1.0 for compatibility only.",
    "userFlags" : …,
    "capped" : false,
    "ns" : "mydb.mycollection",
    "count" : …,
    "numExtents" : …,
    "size" : …,
    "storageSize" : …,
    "totalIndexSize" : …,
    "indexSizes" : {
        "_id_" : …,
        "_id_hashed" : …
    },
    "avgObjSize" : …,
    "nindexes" : 2,
    "nchunks" : …,
    "shards" : {
        "shard1" : {
            "ns" : "mydb.mycollection",
            "count" : …,
            "size" : …,
            ...
            "indexSizes" : {
                "_id_" : …,
                "_id_hashed" : …
            },
            "$gleStats" : {
                "lastOpTime" : Timestamp(…, …),
                "electionId" : ObjectId("55d15366716d7504d5d74c4c")
            }
        },
        "shard2" : {
            "ns" : "mydb.mycollection",
            "count" : …,
            "size" : …,
            ...
            "$gleStats" : {
                "lastOpTime" : Timestamp(…, …),
                "electionId" : ObjectId("55d1543eabed7d6d4a71d25e")
            }
        },
        "shard3" : {
            "ns" : "mydb.mycollection",
            "count" : …,
            "size" : …,
            ...
            "$gleStats" : {
                "lastOpTime" : Timestamp(…, …),
                "electionId" : ObjectId("55d155346f36550e3c5f062c")
            }
        }
    },
    "ok" : 1
}
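For hashed shard keys, the shardcollection command also accepts a numInitialChunks option, so the collection can be pre-split across the shards at creation time instead of waiting for the balancer. A sketch (mydb.mycollection2 is a made-up namespace for illustration):

// Pre-split a new hashed collection into 6 chunks, 2 per shard here
db.adminCommand({
    "shardcollection" : "mydb.mycollection2",
    "key" : { "_id" : "hashed" },
    "numInitialChunks" : 6
})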

Single point of failure analysis

Since this experiment was done as an introduction to MongoDB, and simulating every failure is time-consuming, the scenarios are not walked through one by one here. For a failure-scenario analysis, see:
http://blog.itpub.net/27000195/viewspace-1404402/
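That said, the most basic drill is cheap to run in this setup: ask a shard's primary to step down and watch the replica set elect a new primary. A sketch, assuming shard1's primary is currently the mongod on port 10001:

// On the current primary (e.g. mongo --port 10001):
rs.stepDown(60)    // the primary resigns and will not seek re-election for 60s
// After reconnecting, check which member took over:
rs.status().members.forEach(function (m) {
    print(m.name + " : " + m.stateStr)
})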
