Lesson 36: Non-Relational (NoSQL) Databases - MongoDB

Table of Contents

24. Introduction to MongoDB

25. Installing MongoDB

26. Connecting to MongoDB

27. MongoDB user management

28. Creating collections and managing data

29. The PHP mongodb extension

30. The PHP mongo extension

31. Introduction to MongoDB replica sets

32. Building a MongoDB replica set

33. Testing the MongoDB replica set

34. Introduction to MongoDB sharding

35. Building a MongoDB sharded cluster

36. Testing MongoDB sharding

37. MongoDB backup and restore

38. Further topics

24. Introduction to MongoDB

Official site: www.mongodb.com. As of August 26, 2018, the latest release is 4.0.1.

Written in C++, distributed by design, and a member of the NoSQL family.

Among NoSQL databases, it is the one that most closely resembles a relational database.

MongoDB stores data as documents, whose structure is made up of key-value (key => value) pairs. MongoDB documents are similar to JSON objects. Field values may contain other documents, arrays, and arrays of documents.

About JSON: http://www.w3school.com.cn/json/index.asp
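
To make the document model concrete, here is a minimal, made-up example in the mongo shell (the collection name users and every field value below are hypothetical): one document combining plain key-value pairs, an embedded document, an array, and an array of documents.

  > db.users.insertOne({
      name: "alice",                                          // plain key-value pair
      address: { city: "Shanghai", zip: "200000" },           // embedded document
      tags: [ "admin", "ops" ],                               // array
      logins: [ { at: ISODate("2018-08-26T00:00:00Z"), ip: "192.168.1.10" } ]  // array of documents
    })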

Because it is distributed, it is easy to scale out.

Comparison between MongoDB and relational databases

Relational database structure: database -> table -> row -> column

MongoDB structure: database -> collection -> document -> field

25. Installing MongoDB

Official installation guide: https://docs.mongodb.com/manual/tutorial/install-mongodb-on-red-hat/

1. Create the yum repository

  1. [root@mangodbserver1 ~]# vim /etc/yum.repos.d/mangodb-org-4.0.repo
  2. [mongodb-org-4.0]
  3. name=MongoDB Repository
  4. baseurl=https://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/4.0/x86_64/
  5. gpgcheck=1
  6. enabled=1
  7. gpgkey=https://www.mongodb.org/static/pgp/server-4.0.asc

2. List the available packages

  1. [root@mangodbserver1 ~]# yum list |grep mongodb
  2. collectd-write_mongodb.x86_64 5.8.0-4.el7 epel
  3. mongodb.x86_64 2.6.12-6.el7 epel
  4. mongodb-org.x86_64 4.0.1-1.el7 mongodb-org-4.0
  5. mongodb-org-mongos.x86_64 4.0.1-1.el7 mongodb-org-4.0
  6. mongodb-org-server.x86_64 4.0.1-1.el7 mongodb-org-4.0
  7. mongodb-org-shell.x86_64 4.0.1-1.el7 mongodb-org-4.0
  8. mongodb-org-tools.x86_64 4.0.1-1.el7 mongodb-org-4.0
  9. mongodb-server.x86_64 2.6.12-6.el7 epel
  10. mongodb-test.x86_64 2.6.12-6.el7 epel
  11. nodejs-mongodb.noarch 1.4.7-1.el7 epel
  12. php-mongodb.noarch 1.0.4-1.el7 epel
  13. php-pecl-mongodb.x86_64 1.1.10-1.el7 epel
  14. poco-mongodb.x86_64 1.6.1-3.el7 epel
  15. syslog-ng-mongodb.x86_64 3.5.6-3.el7 epel

3. Install mongodb-org

  1. [root@mangodbserver1 ~]# yum -y install mongodb-org
  2. Loaded plugins: fastestmirror
  3. Loading mirror speeds from cached hostfile
  4. * base: mirrors.aliyun.com
  5. * epel: mirrors.tongji.edu.cn
  6. * extras: mirrors.aliyun.com
  7. * updates: mirrors.163.com
  8. Resolving Dependencies
  9. ...output omitted...
  10. Installed:
  11. mongodb-org.x86_64 0:4.0.1-1.el7
  12. Dependency Installed:
  13. mongodb-org-mongos.x86_64 0:4.0.1-1.el7 mongodb-org-server.x86_64 0:4.0.1-1.el7 mongodb-org-shell.x86_64 0:4.0.1-1.el7 mongodb-org-tools.x86_64 0:4.0.1-1.el7
  14. Complete!

26. Connecting to MongoDB

1. Start mongod

  1. [root@mangodbserver1 ~]# systemctl start mongod.service
  2. [root@mangodbserver1 ~]# systemctl enable mongod.service
  3. [root@mangodbserver1 ~]# lsof -i :27017
  4. COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
  5. mongod 1782 mongod 11u IPv4 22968 0t0 TCP localhost:27017 (LISTEN)

2. Connect

  1. # On the local machine, just run the mongo command to enter the mongodb shell
  2. -bash: mango: command not found
  3. [root@mangodbserver1 ~]# mongo
  4. MongoDB shell version v4.0.1
  5. connecting to: mongodb://127.0.0.1:27017
  6. MongoDB server version: 4.0.1
  7. Welcome to the MongoDB shell.
  8. For interactive help, type "help".
  9. For more comprehensive documentation, see
  10. http://docs.mongodb.org/
  11. Questions? Try the support group
  12. http://groups.google.com/group/mongodb-user
  13. Server has startup warnings:
  14. 2018-08-26T14:01:51.415+0800 I CONTROL [initandlisten]
  15. 2018-08-26T14:01:51.415+0800 I CONTROL [initandlisten] ** WARNING: Access control is not enabled for the database.
  16. 2018-08-26T14:01:51.415+0800 I CONTROL [initandlisten] ** Read and write access to data and configuration is unrestricted.
  17. 2018-08-26T14:01:51.415+0800 I CONTROL [initandlisten]
  18. ---
  19. Enable MongoDB's free cloud-based monitoring service, which will then receive and display
  20. metrics about your deployment (disk utilization, CPU, operation statistics, etc).
  21. The monitoring data will be available on a MongoDB website with a unique URL accessible to you
  22. and anyone you share the URL with. MongoDB may use this information to make product
  23. improvements and to suggest MongoDB products and deployment options to you.
  24. To enable free monitoring, run the following command: db.enableFreeMonitoring()
  25. To permanently disable this reminder, run the following command: db.disableFreeMonitoring()
  26. ---
  27. >
  28. # If mongodb is not listening on the default port 27017, add the --port option when connecting, e.g.
  29. # mongo --port 27018
  30. # To connect to a remote mongodb, add --host, e.g.
  31. # mongo --host 192.168.1.47
  32. # If authentication is enabled, supply a username and password when connecting
  33. # mongo -uusername -ppasswd --authenticationDatabase db   (similar to MySQL)

27. MongoDB user management

Common operations

  1. # First switch to the admin database
  2. > use admin
  3. switched to db admin
  4. # Create the user admin: user is the username, customData is an optional description field, pwd is the password, roles lists the user's roles, db is the database the role applies to
  5. > db.createUser( { user: "admin", customData: {description: "superuser"}, pwd: "admin122", roles: [ { role: "root", db: "admin" } ] } )
  6. Successfully added user: {
  7. "user" : "admin",
  8. "customData" : {
  9. "description" : "superuser"
  10. },
  11. "roles" : [
  12. {
  13. "role" : "root",
  14. "db" : "admin"
  15. }
  16. ]
  17. }
  18. # List all users (run from the admin database)
  19. > db.system.users.find()
  20. { "_id" : "admin.admin", "user" : "admin", "db" : "admin", "credentials" : { "SCRAM-SHA-1" : { "iterationCount" : 10000, "salt" : "t3C0r5eRm8qrPrdyQwPGRw==", "storedKey" : "2FNCyDURiJKU6LnC8QvRVtUcO00=", "serverKey" : "1eyFpJkirCnUwvFTj7sLeucNZ5Q=" }, "SCRAM-SHA-256" : { "iterationCount" : 15000, "salt" : "h58+0R+lhBUZFMMRqjwRYcnOPyoCl62xb0gg5g==", "storedKey" : "G9gV0/k0nQ+KjBE/12qvtjhGNiFPBy6RRSolPZmVkNo=", "serverKey" : "/Vh31wMqLZkuxPh3zNL6QQLTfGlUcxqZx8fk1GRRugY=" } }, "customData" : { "description" : "superuser" }, "roles" : [ { "role" : "root", "db" : "admin" } ] }
  21. # Show all users of the current database
  22. > show users
  23. {
  24. "_id" : "admin.admin",
  25. "user" : "admin",
  26. "db" : "admin",
  27. "customData" : {
  28. "description" : "superuser"
  29. },
  30. "roles" : [
  31. {
  32. "role" : "root",
  33. "db" : "admin"
  34. }
  35. ],
  36. "mechanisms" : [
  37. "SCRAM-SHA-1",
  38. "SCRAM-SHA-256"
  39. ]
  40. }
  41. # Create the user kennminn
  42. > db.createUser({user: "kennminn", pwd: "123456", roles:[{role: "read", db: "test"}]})
  43. Successfully added user: {
  44. "user" : "kennminn",
  45. "roles" : [
  46. {
  47. "role" : "read",
  48. "db" : "test"
  49. }
  50. ]
  51. }
  52. # Delete the user kennminn
  53. > db.dropUser('kennminn')
  54. true
  55. # For authentication to take effect, edit the unit file (vim /usr/lib/systemd/system/mongod.service) and add --auth after OPTIONS=
  56. [root@mangodbserver1 ~]# vim /usr/lib/systemd/system/mongod.service
  57. # Change this line to enable authentication
  58. Environment="OPTIONS=--auth -f /etc/mongod.conf"
  59. [root@mangodbserver1 ~]# systemctl restart mongod.service
  60. Warning: mongod.service changed on disk. Run 'systemctl daemon-reload' to reload units.
  61. [root@mangodbserver1 ~]# systemctl daemon-reload
  62. [root@mangodbserver1 ~]# systemctl restart mongod.service
  63. # Without authenticating, the query fails
  64. [root@mangodbserver1 ~]# mongo
  65. MongoDB shell version v4.0.1
  66. connecting to: mongodb://127.0.0.1:27017
  67. MongoDB server version: 4.0.1
  68. > use admin
  69. switched to db admin
  70. > show users
  71. 2018-08-26T16:52:37.729+0800 E QUERY [js] Error: command usersInfo requires authentication :
  72. _getErrorWithCode@src/mongo/shell/utils.js:25:13
  73. DB.prototype.getUsers@src/mongo/shell/db.js:1757:1
  74. shellHelper.show@src/mongo/shell/utils.js:859:9
  75. shellHelper@src/mongo/shell/utils.js:766:15
  76. @(shellhelp2):1:1
  77. # Authenticate
  78. [root@mangodbserver1 ~]# mongo -u "admin" -p "admin122" --authenticationDatabase "admin"
  79. MongoDB shell version v4.0.1
  80. connecting to: mongodb://127.0.0.1:27017
  81. MongoDB server version: 4.0.1
  82. ---
  83. Enable MongoDB's free cloud-based monitoring service, which will then receive and display
  84. metrics about your deployment (disk utilization, CPU, operation statistics, etc).
  85. The monitoring data will be available on a MongoDB website with a unique URL accessible to you
  86. and anyone you share the URL with. MongoDB may use this information to make product
  87. improvements and to suggest MongoDB products and deployment options to you.
  88. To enable free monitoring, run the following command: db.enableFreeMonitoring()
  89. To permanently disable this reminder, run the following command: db.disableFreeMonitoring()
  90. ---
  91. > use admin
  92. switched to db admin
  93. > show users
  94. {
  95. "_id" : "admin.admin",
  96. "user" : "admin",
  97. "db" : "admin",
  98. "customData" : {
  99. "description" : "superuser"
  100. },
  101. "roles" : [
  102. {
  103. "role" : "root",
  104. "db" : "admin"
  105. }
  106. ],
  107. "mechanisms" : [
  108. "SCRAM-SHA-1",
  109. "SCRAM-SHA-256"
  110. ]
  111. }
  112. # In the db1 database, create user test1 with read/write access to db1 and read-only access to db2.
  113. > use db1
  114. switched to db db1
  115. > db.createUser( { user: "test1", pwd: "123aaa", roles: [ { role: "readWrite", db: "db1" }, {role: "read", db: "db2" } ] } )
  116. Successfully added user: {
  117. "user" : "test1",
  118. "roles" : [
  119. {
  120. "role" : "readWrite",
  121. "db" : "db1"
  122. },
  123. {
  124. "role" : "read",
  125. "db" : "db2"
  126. }
  127. ]
  128. }
  129. > show users
  130. {
  131. "_id" : "db1.test1",
  132. "user" : "test1",
  133. "db" : "db1",
  134. "roles" : [
  135. {
  136. "role" : "readWrite",
  137. "db" : "db1"
  138. },
  139. {
  140. "role" : "read",
  141. "db" : "db2"
  142. }
  143. ],
  144. "mechanisms" : [
  145. "SCRAM-SHA-1",
  146. "SCRAM-SHA-256"
  147. ]
  148. }
  149. > use db2
  150. switched to db db2
  151. > show users
  152. > db.auth("test1", "123aaa")
  153. Error: Authentication failed.
  154. 0
  155. > use db1
  156. switched to db db1
  157. > db.auth("test1", "123aaa")

MongoDB user roles

read: allows the user to read the specified database

readWrite: allows the user to read and write the specified database

dbAdmin: allows the user to perform administrative functions in the specified database, such as creating and dropping indexes, viewing statistics, or accessing system.profile

userAdmin: allows the user to write to the system.users collection and to create, delete, and manage users in the specified database

clusterAdmin: only available in the admin database; grants the user administrative rights over all sharding- and replica-set-related functions

readAnyDatabase: only available in the admin database; grants read access to all databases

readWriteAnyDatabase: only available in the admin database; grants read and write access to all databases

userAdminAnyDatabase: only available in the admin database; grants userAdmin privileges on all databases

dbAdminAnyDatabase: only available in the admin database; grants dbAdmin privileges on all databases

root: only available in the admin database; the superuser role with full privileges
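
As a quick illustration of these roles (the user name and password here are hypothetical), a dedicated user administrator could be created in the admin database with the userAdminAnyDatabase role:

  > use admin
  > db.createUser({ user: "useradmin", pwd: "Str0ngPass", roles: [ { role: "userAdminAnyDatabase", db: "admin" } ] })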

MongoDB database administration

  1. # Check the version
  2. > db.version()
  3. 4.0.1
  4. # use switches to the database if it exists and creates it if it does not
  5. > use db1
  6. switched to db db1
  7. # List databases; the new db1 is still empty (no collection yet), so it does not show up
  8. > show dbs
  9. admin 0.000GB
  10. config 0.000GB
  11. local 0.000GB
  12. # Create a collection
  13. > db.createCollection('clo1')
  14. { "ok" : 1 }
  15. > show dbs
  16. admin 0.000GB
  17. config 0.000GB
  18. db1 0.000GB
  19. local 0.000GB
  20. # Drop the current database; to drop a database you must first switch to it
  21. > use userdb
  22. switched to db userdb
  23. > db.createCollection('clo1')
  24. { "ok" : 1 }
  25. > db.dropDatabase()
  26. { "dropped" : "userdb", "ok" : 1 }
  27. # Check the status of the mongodb server
  28. > db.serverStatus()
  29. {
  30. "host" : "mangodbserver1",
  31. "version" : "4.0.1",
  32. "process" : "mongod",
  33. "pid" : NumberLong(2968),
  34. "uptime" : 2383,
  35. "uptimeMillis" : NumberLong(2382444),
  36. "uptimeEstimate" : NumberLong(2382),
  37. "localTime" : ISODate("2018-08-26T09:39:29.055Z"),
  38. "asserts" : {
  39. ...output omitted...
  40. },
  41. "ttl" : {
  42. "deletedDocuments" : NumberLong(0),
  43. "passes" : NumberLong(39)
  44. }
  45. },
  46. "ok" : 1
  47. }

28. Creating collections and managing data

1. Create a collection

  1. # Syntax: db.createCollection(name, options)
  2. # name is the collection name; options is optional and configures the collection with the parameters below
  3. # capped true/false (optional): if true, creates a capped collection - a fixed-size collection that automatically overwrites its oldest entries once it reaches its maximum size; when true, the size parameter must also be given
  4. # autoIndexId true/false (optional): whether to automatically build an index on the _id field
  5. # size (optional): maximum size of the capped collection in bytes; required when capped is true
  6. # max (optional): maximum number of documents allowed in the capped collection
  7. > db.createCollection("mycol", { capped : true, size : 6142800, max : 10000 } )
  8. { "ok" : 1 }

2. Manage data

  1. # List collections (show tables does the same)
  2. > show collections
  3. clo1
  4. mycol
  5. > show tables
  6. clo1
  7. mycol
  8. # If the collection does not exist, inserting data creates it automatically
  9. > db.Account.insert({AccountID:1,UserName:"123",password:"123456"})
  10. WriteResult({ "nInserted" : 1 })
  11. # Update
  12. > db.Account.update({AccountID:1},{"$set":{"Age":20}})
  13. WriteResult({ "nMatched" : 1, "nUpserted" : 0, "nModified" : 1 })
  14. # List all documents
  15. > db.Account.find()
  16. { "_id" : ObjectId("5b828f3d727dfa33d5561c62"), "AccountID" : 1, "UserName" : "123", "password" : "123456", "Age" : 20 }
  17. # Query by condition
  18. > db.Account.find({AccountID:1})
  19. { "_id" : ObjectId("5b828f3d727dfa33d5561c62"), "AccountID" : 1, "UserName" : "123", "password" : "123456", "Age" : 20 }
  20. # Delete by condition
  21. > db.Account.remove({AccountID:1})
  22. WriteResult({ "nRemoved" : 1 })
  23. > db.Account.find()
  24. # Drop the collection, removing all of its documents
  25. > db.Account.drop()
  26. true
  27. # First switch to the target database
  28. > use db1
  29. switched to db db1
  30. # Then view the collection statistics
  31. > db.printCollectionStats()
  32. clo1
  33. {
  34. "ns" : "db1.clo1",
  35. "size" : 0,
  36. "count" : 0,
  37. "storageSize" : 4096,
  38. "capped" : false,
  39. "wiredTiger" : {
  40. "metadata" : {
  41. "formatVersion" : 1
  42. ...output omitted...
  43. "nindexes" : 1,
  44. "totalIndexSize" : 4096,
  45. "indexSizes" : {
  46. "_id_" : 4096
  47. },
  48. "ok" : 1
  49. }
  50. ---

29. The PHP mongodb extension

Installation steps

  1. [root@mangodbserver1 ~]# cd /usr/local/src/
  2. [root@mangodbserver1 src]# git clone https://github.com/mongodb/mongo-php-driver
  3. Cloning into 'mongo-php-driver'...
  4. remote: Counting objects: 22561, done.
  5. remote: Compressing objects: 100% (2/2), done.
  6. remote: Total 22561 (delta 0), reused 2 (delta 0), pack-reused 22559
  7. Receiving objects: 100% (22561/22561), 6.64 MiB | 2.12 MiB/s, done.
  8. Resolving deltas: 100% (17804/17804), done.
  9. [root@mangodbserver1 src]# cd mongo-php-driver
  10. [root@mangodbserver1 mongo-php-driver]# git submodule update --init
  11. Submodule 'src/libmongoc' (https://github.com/mongodb/mongo-c-driver.git) registered for path 'src/libmongoc'
  12. Cloning into 'src/libmongoc'...
  13. remote: Counting objects: 104584, done.
  14. remote: Compressing objects: 100% (524/524), done.
  15. remote: Total 104584 (delta 266), reused 240 (delta 138), pack-reused 103918
  16. Receiving objects: 100% (104584/104584), 51.46 MiB | 4.84 MiB/s, done.
  17. Resolving deltas: 100% (91157/91157), done.
  18. Submodule path 'src/libmongoc': checked out 'a690091bae086f267791bd2227400f2035de99e8'
  19. [root@mangodbserver1 mongo-php-driver]# /usr/local/php-fpm/bin/php
  20. php php-cgi php-config phpize
  21. [root@mangodbserver1 mongo-php-driver]# /usr/local/php-fpm/bin/phpize
  22. Configuring for:
  23. PHP Api Version: 20131106
  24. Zend Module Api No: 20131226
  25. Zend Extension Api No: 220131226
  26. [root@mangodbserver1 mongo-php-driver]# ./configure --with-php-config=/usr/local/php-fpm/bin/php-config
  27. checking for grep that handles long lines and -e... /usr/bin/grep
  28. checking for egrep... /usr/bin/grep -E
  29. checking for a sed that does not truncate output... /usr/bin/sed
  30. ...output omitted...
  31. config.status: creating /usr/local/src/mongo-php-driver/src/libmongoc/src/libbson/src/bson/bson-config.h
  32. config.status: creating /usr/local/src/mongo-php-driver/src/libmongoc/src/libbson/src/bson/bson-version.h
  33. config.status: creating /usr/local/src/mongo-php-driver/src/libmongoc/src/libmongoc/src/mongoc/mongoc-config.h
  34. config.status: creating /usr/local/src/mongo-php-driver/src/libmongoc/src/libmongoc/src/mongoc/mongoc-version.h
  35. config.status: creating config.h
  36. [root@mangodbserver1 mongo-php-driver]# make && make install
  37. /bin/sh /usr/local/src/mongo-php-driver/libtool --mode=compile cc -DBSON_COMPILATION -DMONGOC_COMPILATION -pthread
  38. ...output omitted...
  39. Build complete.
  40. Don't forget to run 'make test'.
  41. Installing shared extensions: /usr/local/php-fpm/lib/php/extensions/no-debug-non-zts-20131226/
  42. [root@mangodbserver1 mongo-php-driver]# vim /usr/local/php-fpm/etc/php.ini
  43. # Add the following line
  44. extension = mongodb.so
  45. [root@mangodbserver1 mongo-php-driver]# /usr/local/php-fpm/sbin/php-fpm -m | grep mongo
  46. mongodb
  47. [root@mangodbserver1 mongo-php-driver]# /etc/init.d/php-fpm restart
  48. Gracefully shutting down php-fpm . done
  49. Starting php-fpm done

30. The PHP mongo extension

  1. [root@mangodbserver1 mongo-php-driver]# cd /usr/local/src/
  2. [root@mangodbserver1 src]# wget https://pecl.php.net/get/mongo-1.6.16.tgz
  3. --2018-08-26 20:58:55-- https://pecl.php.net/get/mongo-1.6.16.tgz
  4. Resolving pecl.php.net (pecl.php.net)... 104.236.228.160
  5. ...output omitted...
  6. 2018-08-26 20:58:59 (140 KB/s) - mongo-1.6.16.tgz saved [210341/210341]
  7. [root@mangodbserver1 src]# tar -zxvf mongo-1.6.16.tgz
  8. [root@mangodbserver1 src]# cd mongo-1.6.16/
  9. [root@mangodbserver1 mongo-1.6.16]# /usr/local/php-fpm/bin/phpize
  10. Configuring for:
  11. PHP Api Version: 20131106
  12. Zend Module Api No: 20131226
  13. Zend Extension Api No: 220131226
  14. [root@mangodbserver1 mongo-1.6.16]# ./configure --with-php-config=/usr/local/php-fpm/bin/php-config
  15. checking for grep that handles long lines and -e... /usr/bin/grep
  16. checking for egrep... /usr/bin/grep -E
  17. checking for a sed that does not truncate output... /usr/bin/sed
  18. ...output omitted...
  19. creating libtool
  20. appending configuration tag "CXX" to libtool
  21. configure: creating ./config.status
  22. config.status: creating config.h
  23. [root@mangodbserver1 mongo-1.6.16]# make && make install
  24. /bin/sh /usr/local/src/mongo-1.6.16/libtool --mode=compile cc -I./util -I. -I/usr/local/src/mongo-1.6.16 -DPHP_ATOM_INC -I/usr/local/src/mongo-1.6.16/include -I/usr/local/src/mongo-1.6.16/main -I/usr/local/src/mongo-1.6.16 -I/usr/local/php-fpm/include/php -I/usr/local/php-fpm/include/php/main -I/usr/local/php-fpm/include/php/TSRM -I/usr/local/php-fpm/include/php/Zend -I/usr/local/php-fpm/include/php/ext -I/usr/local/php-fpm/include/php/ext/date/lib -I/usr/local/src/mongo-1.6.16/api -I/usr/local/src/mongo-1.6.16/util -I/usr/local/src/mongo-1.6.16/exceptions -I/usr/local/src/mongo-1.6.16/gridfs -I/usr/local/src/mongo-1.6.16/types -I/usr/local/src/mongo-1.6.16/batch -I/usr/local/src/mongo-1.6.16/contrib -I/usr/local/src/mongo-1.6.16/mcon -I/usr/local/src/mongo-1.6.16/mcon/contrib -DHAVE_CONFIG_H -g -O2 -c /usr/local/src/mongo-1.6.16/php_mongo.c -o php_mongo.lo
  25. ...output omitted...
  26. Build complete.
  27. Don't forget to run 'make test'.
  28. Installing shared extensions: /usr/local/php-fpm/lib/php/extensions/no-debug-non-zts-20131226/
  29. [root@mangodbserver1 mongo-1.6.16]# vim /usr/local/php-fpm/etc/php.ini
  30. # Add extension = mongo.so
  31. extension = mongo.so
  32. [root@mangodbserver1 mongo-1.6.16]# /usr/local/php-fpm/sbin/php-fpm -m | grep mon
  33. mongo
  34. mongodb
  35. [root@mangodbserver1 mongo-1.6.16]# /etc/init.d/php-fpm restart
  36. Gracefully shutting down php-fpm . done
  37. Starting php-fpm done

Testing the mongo extension

Reference: https://docs.mongodb.com/ecosystem/drivers/php/

http://www.runoob.com/mongodb/mongodb-php.html

  1. # Create a test page
  2. [root@mangodbserver1 mongo-1.6.16]# vim /usr/local/nginx/html/1.php
  3. <?php
  4. $m = new MongoClient();
  5. $db = $m->test;
  6. $collection = $db->createCollection("runoob");
  7. echo "集合创建成功";
  8. ?>
  9. [root@mangodbserver1 mongo-1.6.16]# curl localhost/1.php
  10. Collection created successfully[root@mangodbserver1 mongo-1.6.16]#

31. Introduction to MongoDB replica sets

Early versions used master-slave replication, one master and one slave, much like MySQL; but in that architecture the slave is read-only, and when the master goes down the slave cannot automatically take over as master.

The master-slave mode is now obsolete and has been replaced by replica sets. In this mode there is one primary and multiple read-only secondaries. Members can be given weights (priorities); when the primary goes down, the secondary with the highest priority is promoted to primary.

This architecture can also include an arbiter role, which only takes part in elections and stores no data.

In this architecture all reads and writes go to the primary; to spread read load you have to direct reads to a target server yourself.
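
For example, a client connected to a secondary can allow reads on that connection from the mongo shell; a minimal sketch (drivers expose the same idea through read preferences):

  rs0:SECONDARY> rs.slaveOk()                                      // allow reads on this secondary connection
  rs0:SECONDARY> db.getMongo().setReadPref("secondaryPreferred")   // or set an explicit read preference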

32. Building a MongoDB replica set

Environment:

Three machines running CentOS Linux release 7.5.1804 (Core)

mongodbserver1: 192.168.1.47

mongodbserver2: 192.168.1.48

mongodbserver3: 192.168.1.49

1. Edit the configuration file on each of the three servers

  1. net:
  2. port: 27017
  3. # Add the machine's own IP to bindIp
  4. bindIp: 127.0.0.1,192.168.1.49
  5. # Uncomment replication and add the following two lines
  6. replication:
  7. oplogSizeMB: 20
  8. replSetName: rs0
  9. [root@mongodbserver3 ~]# systemctl start mongod.service
  10. [root@mongodbserver3 ~]# netstat -nltup | grep mongo
  11. tcp 0 0 192.168.1.49:27017 0.0.0.0:* LISTEN 2073/mongod
  12. tcp 0 0 127.0.0.1:27017 0.0.0.0:* LISTEN 2073/mongod

2. Initialize the replica set

  1. [root@mongodbserver1 ~]# mongo
  2. > config={_id:"rs0",members:[{_id:0,host:"192.168.1.47:27017"},{_id:1,host:"192.168.1.48:27017"},{_id:2,host:"192.168.1.49:27017"}]}
  3. {
  4. "_id" : "rs0",
  5. "members" : [
  6. {
  7. "_id" : 0,
  8. "host" : "192.168.1.47:27017"
  9. },
  10. {
  11. "_id" : 1,
  12. "host" : "192.168.1.48:27017"
  13. },
  14. {
  15. "_id" : 2,
  16. "host" : "192.168.1.49:27017"
  17. }
  18. ]
  19. }
  20. > rs.initiate(config)
  21. {
  22. "ok" : 1,
  23. "operationTime" : Timestamp(1535293700, 1),
  24. "$clusterTime" : {
  25. "clusterTime" : Timestamp(1535293700, 1),
  26. "signature" : {
  27. "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
  28. "keyId" : NumberLong(0)
  29. }
  30. }
  31. }
  32. rs0:PRIMARY> rs.status()
  33. {
  34. "set" : "rs0",
  35. "date" : ISODate("2018-08-26T14:31:21.930Z"),
  36. "myState" : 1,
  37. "term" : NumberLong(1),
  38. "syncingTo" : "",
  39. "syncSourceHost" : "",
  40. "syncSourceId" : -1,
  41. "heartbeatIntervalMillis" : NumberLong(2000),
  42. "optimes" : {
  43. "lastCommittedOpTime" : {
  44. "ts" : Timestamp(1535293872, 1),
  45. "t" : NumberLong(1)
  46. },
  47. "readConcernMajorityOpTime" : {
  48. "ts" : Timestamp(1535293872, 1),
  49. "t" : NumberLong(1)
  50. },
  51. "appliedOpTime" : {
  52. "ts" : Timestamp(1535293872, 1),
  53. "t" : NumberLong(1)
  54. },
  55. "durableOpTime" : {
  56. "ts" : Timestamp(1535293872, 1),
  57. "t" : NumberLong(1)
  58. }
  59. },
  60. "lastStableCheckpointTimestamp" : Timestamp(1535293832, 1),
  61. "members" : [
  62. {
  63. "_id" : 0,
  64. "name" : "192.168.1.47:27017",
  65. "health" : 1,
  66. "state" : 1,
  67. "stateStr" : "PRIMARY",
  68. "uptime" : 1166,
  69. "optime" : {
  70. "ts" : Timestamp(1535293872, 1),
  71. "t" : NumberLong(1)
  72. },
  73. "optimeDate" : ISODate("2018-08-26T14:31:12Z"),
  74. "syncingTo" : "",
  75. "syncSourceHost" : "",
  76. "syncSourceId" : -1,
  77. "infoMessage" : "",
  78. "electionTime" : Timestamp(1535293711, 1),
  79. "electionDate" : ISODate("2018-08-26T14:28:31Z"),
  80. "configVersion" : 1,
  81. "self" : true,
  82. "lastHeartbeatMessage" : ""
  83. },
  84. {
  85. "_id" : 1,
  86. "name" : "192.168.1.48:27017",
  87. "health" : 1,
  88. "state" : 2,
  89. "stateStr" : "SECONDARY",
  90. "uptime" : 181,
  91. "optime" : {
  92. "ts" : Timestamp(1535293872, 1),
  93. "t" : NumberLong(1)
  94. },
  95. "optimeDurable" : {
  96. "ts" : Timestamp(1535293872, 1),
  97. "t" : NumberLong(1)
  98. },
  99. "optimeDate" : ISODate("2018-08-26T14:31:12Z"),
  100. "optimeDurableDate" : ISODate("2018-08-26T14:31:12Z"),
  101. "lastHeartbeat" : ISODate("2018-08-26T14:31:21.381Z"),
  102. "lastHeartbeatRecv" : ISODate("2018-08-26T14:31:21.417Z"),
  103. "pingMs" : NumberLong(0),
  104. "lastHeartbeatMessage" : "",
  105. "syncingTo" : "192.168.1.47:27017",
  106. "syncSourceHost" : "192.168.1.47:27017",
  107. "syncSourceId" : 0,
  108. "infoMessage" : "",
  109. "configVersion" : 1
  110. },
  111. {
  112. "_id" : 2,
  113. "name" : "192.168.1.49:27017",
  114. "health" : 1,
  115. "state" : 2,
  116. "stateStr" : "SECONDARY",
  117. "uptime" : 181,
  118. "optime" : {
  119. "ts" : Timestamp(1535293872, 1),
  120. "t" : NumberLong(1)
  121. },
  122. "optimeDurable" : {
  123. "ts" : Timestamp(1535293872, 1),
  124. "t" : NumberLong(1)
  125. },
  126. "optimeDate" : ISODate("2018-08-26T14:31:12Z"),
  127. "optimeDurableDate" : ISODate("2018-08-26T14:31:12Z"),
  128. "lastHeartbeat" : ISODate("2018-08-26T14:31:21.381Z"),
  129. "lastHeartbeatRecv" : ISODate("2018-08-26T14:31:21.548Z"),
  130. "pingMs" : NumberLong(0),
  131. "lastHeartbeatMessage" : "",
  132. "syncingTo" : "192.168.1.47:27017",
  133. "syncSourceHost" : "192.168.1.47:27017",
  134. "syncSourceId" : 0,
  135. "infoMessage" : "",
  136. "configVersion" : 1
  137. }
  138. ],
  139. "ok" : 1,
  140. "operationTime" : Timestamp(1535293872, 1),
  141. "$clusterTime" : {
  142. "clusterTime" : Timestamp(1535293872, 1),
  143. "signature" : {
  144. "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
  145. "keyId" : NumberLong(0)
  146. }
  147. }
  148. }
  149. rs0:PRIMARY>

33. Testing the MongoDB replica set

  1. # On the primary
  2. rs0:PRIMARY> use mydb
  3. rs0:PRIMARY> use mydb
  4. switched to db mydb
  5. rs0:PRIMARY> db.acc.insert({AccountID:1,UserName:"123",password:"123456"})
  6. WriteResult({ "nInserted" : 1 })
  7. rs0:PRIMARY> show dbs
  8. admin 0.000GB
  9. config 0.000GB
  10. db1 0.000GB
  11. local 0.000GB
  12. mydb 0.000GB
  13. # On a secondary
  14. [root@mongodbserver2 ~]# mongo
  15. rs0:SECONDARY> show dbs
  16. 2018-08-26T22:45:06.594+0800 E QUERY [js] Error: listDatabases failed:{
  17. "operationTime" : Timestamp(1535294702, 1),
  18. "ok" : 0,
  19. "errmsg" : "not master and slaveOk=false",
  20. "code" : 13435,
  21. "codeName" : "NotMasterNoSlaveOk",
  22. "$clusterTime" : {
  23. "clusterTime" : Timestamp(1535294702, 1),
  24. "signature" : {
  25. "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
  26. "keyId" : NumberLong(0)
  27. }
  28. }
  29. } :
  30. _getErrorWithCode@src/mongo/shell/utils.js:25:13
  31. Mongo.prototype.getDBs@src/mongo/shell/mongo.js:67:1
  32. shellHelper.show@src/mongo/shell/utils.js:876:19
  33. shellHelper@src/mongo/shell/utils.js:766:15
  34. @(shellhelp2):1:1
  35. rs0:SECONDARY> rs.slaveOk()
  36. rs0:SECONDARY> show dbs
  37. admin 0.000GB
  38. config 0.000GB
  39. db1 0.000GB
  40. local 0.000GB
  41. mydb 0.000GB
  42. rs0:SECONDARY> use mydb
  43. switched to db mydb
  44. rs0:SECONDARY> show tables
  45. acc

Changing replica set priorities and simulating a primary failure

  1. # Check the current priorities of the three hosts; they all start out at 1
  2. rs0:PRIMARY> rs.conf()
  3. {
  4. "_id" : "rs0",
  5. "version" : 1,
  6. "protocolVersion" : NumberLong(1),
  7. "writeConcernMajorityJournalDefault" : true,
  8. "members" : [
  9. {
  10. "_id" : 0,
  11. "host" : "192.168.1.47:27017",
  12. "arbiterOnly" : false,
  13. "buildIndexes" : true,
  14. "hidden" : false,
  15. "priority" : 1,
  16. "tags" : {
  17. },
  18. "slaveDelay" : NumberLong(0),
  19. "votes" : 1
  20. },
  21. {
  22. "_id" : 1,
  23. "host" : "192.168.1.48:27017",
  24. "arbiterOnly" : false,
  25. "buildIndexes" : true,
  26. "hidden" : false,
  27. "priority" : 1,
  28. "tags" : {
  29. },
  30. "slaveDelay" : NumberLong(0),
  31. "votes" : 1
  32. },
  33. {
  34. "_id" : 2,
  35. "host" : "192.168.1.49:27017",
  36. "arbiterOnly" : false,
  37. "buildIndexes" : true,
  38. "hidden" : false,
  39. "priority" : 1,
  40. "tags" : {
  41. },
  42. "slaveDelay" : NumberLong(0),
  43. "votes" : 1
  44. }
  45. ],
  46. "settings" : {
  47. "chainingAllowed" : true,
  48. "heartbeatIntervalMillis" : 2000,
  49. "heartbeatTimeoutSecs" : 10,
  50. "electionTimeoutMillis" : 10000,
  51. "catchUpTimeoutMillis" : -1,
  52. "catchUpTakeoverDelayMillis" : 30000,
  53. "getLastErrorModes" : {
  54. },
  55. "getLastErrorDefaults" : {
  56. "w" : 1,
  57. "wtimeout" : 0
  58. },
  59. "replicaSetId" : ObjectId("5b82b904244cfb9393d33a63")
  60. }
  61. }
  62. # Set the priorities (first load the current config into a variable with cfg = rs.conf(), a step missing from this transcript)
  63. rs0:PRIMARY> cfg.members[0].priority = 3
  64. 3
  65. rs0:PRIMARY> cfg.members[1].priority = 2
  66. 2
  67. rs0:PRIMARY> cfg.members[2].priority = 1
  68. 1
  69. rs0:PRIMARY> rs.reconfig(cfg)
  70. {
  71. "ok" : 1,
  72. "operationTime" : Timestamp(1535296028, 1),
  73. "$clusterTime" : {
  74. "clusterTime" : Timestamp(1535296028, 1),
  75. "signature" : {
  76. "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
  77. "keyId" : NumberLong(0)
  78. }
  79. }
  80. }
  81. # View the new priorities; the second node is now the preferred candidate if the current primary fails
  82. rs0:PRIMARY> rs.conf()
  83. {
  84. "_id" : "rs0",
  85. "version" : 2,
  86. "protocolVersion" : NumberLong(1),
  87. "writeConcernMajorityJournalDefault" : true,
  88. "members" : [
  89. {
  90. "_id" : 0,
  91. "host" : "192.168.1.47:27017",
  92. "arbiterOnly" : false,
  93. "buildIndexes" : true,
  94. "hidden" : false,
  95. "priority" : 3,
  96. "tags" : {
  97. },
  98. "slaveDelay" : NumberLong(0),
  99. "votes" : 1
  100. },
  101. {
  102. "_id" : 1,
  103. "host" : "192.168.1.48:27017",
  104. "arbiterOnly" : false,
  105. "buildIndexes" : true,
  106. "hidden" : false,
  107. "priority" : 2,
  108. "tags" : {
  109. },
  110. "slaveDelay" : NumberLong(0),
  111. "votes" : 1
  112. },
  113. {
  114. "_id" : 2,
  115. "host" : "192.168.1.49:27017",
  116. "arbiterOnly" : false,
  117. "buildIndexes" : true,
  118. "hidden" : false,
  119. "priority" : 1,
  120. "tags" : {
  121. },
  122. "slaveDelay" : NumberLong(0),
  123. "votes" : 1
  124. }
  125. ],
  126. "settings" : {
  127. "chainingAllowed" : true,
  128. "heartbeatIntervalMillis" : 2000,
  129. "heartbeatTimeoutSecs" : 10,
  130. "electionTimeoutMillis" : 10000,
  131. "catchUpTimeoutMillis" : -1,
  132. "catchUpTakeoverDelayMillis" : 30000,
  133. "getLastErrorModes" : {
  134. },
  135. "getLastErrorDefaults" : {
  136. "w" : 1,
  137. "wtimeout" : 0
  138. },
  139. "replicaSetId" : ObjectId("5b82b904244cfb9393d33a63")
  140. }
  141. }
  142. rs0:PRIMARY>
  143. # Simulate a primary failure by disconnecting the primary's network interface
  144. # Checking on 192.168.1.48: the former primary 192.168.1.47 now shows "stateStr" : "(not reachable/healthy)",
  145. # and 192.168.1.48 has become the primary.
  146. rs0:PRIMARY> rs.status()
  147. {
  148. "set" : "rs0",
  149. "date" : ISODate("2018-08-26T15:11:07.305Z"),
  150. "myState" : 1,
  151. "term" : NumberLong(2),
  152. "syncingTo" : "",
  153. "syncSourceHost" : "",
  154. "syncSourceId" : -1,
  155. "heartbeatIntervalMillis" : NumberLong(2000),
  156. "optimes" : {
  157. "lastCommittedOpTime" : {
  158. "ts" : Timestamp(1535296259, 1),
  159. "t" : NumberLong(2)
  160. },
  161. "readConcernMajorityOpTime" : {
  162. "ts" : Timestamp(1535296259, 1),
  163. "t" : NumberLong(2)
  164. },
  165. "appliedOpTime" : {
  166. "ts" : Timestamp(1535296259, 1),
  167. "t" : NumberLong(2)
  168. },
  169. "durableOpTime" : {
  170. "ts" : Timestamp(1535296259, 1),
  171. "t" : NumberLong(2)
  172. }
  173. },
  174. "lastStableCheckpointTimestamp" : Timestamp(1535296229, 1),
  175. "members" : [
  176. {
  177. "_id" : 0,
  178. "name" : "192.168.1.47:27017",
  179. "health" : 0,
  180. "state" : 8,
  181. "stateStr" : "(not reachable/healthy)",
  182. "uptime" : 0,
  183. "optime" : {
  184. "ts" : Timestamp(0, 0),
  185. "t" : NumberLong(-1)
  186. },
  187. "optimeDurable" : {
  188. "ts" : Timestamp(0, 0),
  189. "t" : NumberLong(-1)
  190. },
  191. "optimeDate" : ISODate("1970-01-01T00:00:00Z"),
  192. "optimeDurableDate" : ISODate("1970-01-01T00:00:00Z"),
  193. "lastHeartbeat" : ISODate("2018-08-26T15:11:04.624Z"),
  194. "lastHeartbeatRecv" : ISODate("2018-08-26T15:09:36.925Z"),
  195. "pingMs" : NumberLong(0),
  196. "lastHeartbeatMessage" : "Error connecting to 192.168.1.47:27017 :: caused by :: No route to host",
  197. "syncingTo" : "",
  198. "syncSourceHost" : "",
  199. "syncSourceId" : -1,
  200. "infoMessage" : "",
  201. "configVersion" : -1
  202. },
  203. {
  204. "_id" : 1,
  205. "name" : "192.168.1.48:27017",
  206. "health" : 1,
  207. "state" : 1,
  208. "stateStr" : "PRIMARY",
  209. "uptime" : 3505,
  210. "optime" : {
  211. "ts" : Timestamp(1535296259, 1),
  212. "t" : NumberLong(2)
  213. },
  214. "optimeDate" : ISODate("2018-08-26T15:10:59Z"),
  215. "syncingTo" : "",
  216. "syncSourceHost" : "",
  217. "syncSourceId" : -1,
  218. "infoMessage" : "",
  219. "electionTime" : Timestamp(1535296187, 1),
  220. "electionDate" : ISODate("2018-08-26T15:09:47Z"),
  221. "configVersion" : 2,
  222. "self" : true,
  223. "lastHeartbeatMessage" : ""
  224. },
  225. {
  226. "_id" : 2,
  227. "name" : "192.168.1.49:27017",
  228. "health" : 1,
  229. "state" : 2,
  230. "stateStr" : "SECONDARY",
  231. "uptime" : 2565,
  232. "optime" : {
  233. "ts" : Timestamp(1535296259, 1),
  234. "t" : NumberLong(2)
  235. },
  236. "optimeDurable" : {
  237. "ts" : Timestamp(1535296259, 1),
  238. "t" : NumberLong(2)
  239. },
  240. "optimeDate" : ISODate("2018-08-26T15:10:59Z"),
  241. "optimeDurableDate" : ISODate("2018-08-26T15:10:59Z"),
  242. "lastHeartbeat" : ISODate("2018-08-26T15:11:05.393Z"),
  243. "lastHeartbeatRecv" : ISODate("2018-08-26T15:11:07.196Z"),
  244. "pingMs" : NumberLong(0),
  245. "lastHeartbeatMessage" : "",
  246. "syncingTo" : "192.168.1.48:27017",
  247. "syncSourceHost" : "192.168.1.48:27017",
  248. "syncSourceId" : 1,
  249. "infoMessage" : "",
  250. "configVersion" : 2
  251. }
  252. ],
  253. "ok" : 1,
  254. "operationTime" : Timestamp(1535296259, 1),
  255. "$clusterTime" : {
  256. "clusterTime" : Timestamp(1535296259, 1),
  257. "signature" : {
  258. "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
  259. "keyId" : NumberLong(0)
  260. }
  261. }
  262. }
  263. rs0:PRIMARY>
  264. # Reconnect 192.168.1.47 to the network and check the status again: 192.168.1.47, having the highest priority, becomes the primary once more
  265. rs0:PRIMARY> rs.status()
  266. {
  267. "set" : "rs0",
  268. "date" : ISODate("2018-08-26T15:16:21.215Z"),
  269. "myState" : 1,
  270. "term" : NumberLong(3),
  271. "syncingTo" : "",
  272. "syncSourceHost" : "",
  273. "syncSourceId" : -1,
  274. "heartbeatIntervalMillis" : NumberLong(2000),
  275. "optimes" : {
  276. "lastCommittedOpTime" : {
  277. "ts" : Timestamp(1535296574, 1),
  278. "t" : NumberLong(3)
  279. },
  280. "readConcernMajorityOpTime" : {
  281. "ts" : Timestamp(1535296574, 1),
  282. "t" : NumberLong(3)
  283. },
  284. "appliedOpTime" : {
  285. "ts" : Timestamp(1535296574, 1),
  286. "t" : NumberLong(3)
  287. },
  288. "durableOpTime" : {
  289. "ts" : Timestamp(1535296574, 1),
  290. "t" : NumberLong(3)
  291. }
  292. },
  293. "lastStableCheckpointTimestamp" : Timestamp(1535296534, 1),
  294. "members" : [
  295. {
  296. "_id" : 0,
  297. "name" : "192.168.1.47:27017",
  298. "health" : 1,
  299. "state" : 1,
  300. "stateStr" : "PRIMARY",
  301. "uptime" : 3866,
  302. "optime" : {
  303. "ts" : Timestamp(1535296574, 1),
  304. "t" : NumberLong(3)
  305. },
  306. "optimeDate" : ISODate("2018-08-26T15:16:14Z"),
  307. "syncingTo" : "",
  308. "syncSourceHost" : "",
  309. "syncSourceId" : -1,
  310. "infoMessage" : "",
  311. "electionTime" : Timestamp(1535296432, 1),
  312. "electionDate" : ISODate("2018-08-26T15:13:52Z"),
  313. "configVersion" : 2,
  314. "self" : true,
  315. "lastHeartbeatMessage" : ""
  316. },
  317. {
  318. "_id" : 1,
  319. "name" : "192.168.1.48:27017",
  320. "health" : 1,
  321. "state" : 2,
  322. "stateStr" : "SECONDARY",
  323. "uptime" : 159,
  324. "optime" : {
  325. "ts" : Timestamp(1535296574, 1),
  326. "t" : NumberLong(3)
  327. },
  328. "optimeDurable" : {
  329. "ts" : Timestamp(1535296574, 1),
  330. "t" : NumberLong(3)
  331. },
  332. "optimeDate" : ISODate("2018-08-26T15:16:14Z"),
  333. "optimeDurableDate" : ISODate("2018-08-26T15:16:14Z"),
  334. "lastHeartbeat" : ISODate("2018-08-26T15:16:20.414Z"),
  335. "lastHeartbeatRecv" : ISODate("2018-08-26T15:16:19.382Z"),
  336. "pingMs" : NumberLong(0),
  337. "lastHeartbeatMessage" : "",
  338. "syncingTo" : "192.168.1.47:27017",
  339. "syncSourceHost" : "192.168.1.47:27017",
  340. "syncSourceId" : 0,
  341. "infoMessage" : "",
  342. "configVersion" : 2
  343. },
  344. {
  345. "_id" : 2,
  346. "name" : "192.168.1.49:27017",
  347. "health" : 1,
  348. "state" : 2,
  349. "stateStr" : "SECONDARY",
  350. "uptime" : 159,
  351. "optime" : {
  352. "ts" : Timestamp(1535296574, 1),
  353. "t" : NumberLong(3)
  354. },
  355. "optimeDurable" : {
  356. "ts" : Timestamp(1535296574, 1),
  357. "t" : NumberLong(3)
  358. },
  359. "optimeDate" : ISODate("2018-08-26T15:16:14Z"),
  360. "optimeDurableDate" : ISODate("2018-08-26T15:16:14Z"),
  361. "lastHeartbeat" : ISODate("2018-08-26T15:16:20.414Z"),
  362. "lastHeartbeatRecv" : ISODate("2018-08-26T15:16:21.156Z"),
  363. "pingMs" : NumberLong(0),
  364. "lastHeartbeatMessage" : "",
  365. "syncingTo" : "192.168.1.47:27017",
  366. "syncSourceHost" : "192.168.1.47:27017",
  367. "syncSourceId" : 0,
  368. "infoMessage" : "",
  369. "configVersion" : 2
  370. }
  371. ],
  372. "ok" : 1,
  373. "operationTime" : Timestamp(1535296574, 1),
  374. "$clusterTime" : {
  375. "clusterTime" : Timestamp(1535296574, 1),
  376. "signature" : {
  377. "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
  378. "keyId" : NumberLong(0)
  379. }
  380. }
  381. }
  382. rs0:PRIMARY>
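
To recap, the priority change above boils down to the following sequence in the mongo shell (note that the cfg variable has to be loaded from rs.conf() first, a step that is easy to miss in the transcript):

  rs0:PRIMARY> cfg = rs.conf()
  rs0:PRIMARY> cfg.members[0].priority = 3
  rs0:PRIMARY> cfg.members[1].priority = 2
  rs0:PRIMARY> cfg.members[2].priority = 1
  rs0:PRIMARY> rs.reconfig(cfg)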

34. Introduction to MongoDB sharding

Sharding splits a database up, distributing large collections across different servers. For example, 100 GB of data can be split into 10 pieces stored on 10 servers, so that each machine holds only 10 GB.

Storage and access of the sharded data go through a mongos process (the router). In other words, mongos is the core of the sharding architecture: clients have no idea whether sharding is in place and simply hand their reads and writes to mongos.

Although sharding spreads data across many servers, every node still needs a standby counterpart so that the data stays highly available.

When the system needs more space or resources, sharding lets us scale out on demand: just add more machines running mongodb to the sharded cluster.

MongoDB sharding architecture

mongos: the entry point for all requests to the cluster. Every request is coordinated through mongos, so no routing layer has to be added to the application; mongos itself is the request dispatcher and forwards each data request to the appropriate shard server. In production there are usually several mongos instances serving as entry points, so that the failure of one does not leave the cluster unreachable.

config server: the configuration servers store all of the database metadata (routing and shard configuration). mongos does not physically store shard or routing information itself, it only caches it in memory; the config servers hold the actual data. When mongos starts for the first time, or is restarted, it loads its configuration from the config servers, and whenever that configuration changes the config servers notify every mongos to update its state, so routing stays accurate. In production there are usually multiple config servers, because they hold the sharding metadata and must not be lost.

shard: a MongoDB instance that stores part of a collection's data. Each shard is a standalone mongod service or a replica set; in production, every shard should be a replica set.
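
Once the cluster is assembled (the sections below walk through the actual build), sharding a collection comes down to two commands issued against mongos. A minimal sketch, using a hypothetical database mydb and a hashed shard key on uid (the test in section 36 uses the equivalent db.runCommand form with a ranged key instead):

  mongos> sh.enableSharding("mydb")
  mongos> sh.shardCollection("mydb.orders", { uid: "hashed" })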

35. Building a MongoDB sharded cluster

1. Cluster build - server planning

  1. Three machines
  2. mongodbserver1: 192.168.1.47  mongos, config server, replica set 1 primary, replica set 2 arbiter, replica set 3 secondary
  3. mongodbserver2: 192.168.1.48  mongos, config server, replica set 1 secondary, replica set 2 primary, replica set 3 arbiter
  4. mongodbserver3: 192.168.1.49  mongos, config server, replica set 1 arbiter, replica set 2 secondary, replica set 3 primary
  5. Port assignment: mongos 20000, config 21000, replica set 1 27001, replica set 2 27002, replica set 3 27003
  6. On all three machines, disable firewalld and selinux, or add rules for the corresponding ports

2. Cluster build - create the directories

  1. # On each of the three machines, create the directories needed by each role
  2. # mkdir -p /data/mongodb/mongos/log
  3. # mkdir -p /data/mongodb/config/{data,log}
  4. # mkdir -p /data/mongodb/shard1/{data,log}
  5. # mkdir -p /data/mongodb/shard2/{data,log}
  6. # mkdir -p /data/mongodb/shard3/{data,log}
  7. [root@mongodbserver1 ~]# mkdir -p /data/mongodb/mongos/log
  8. [root@mongodbserver1 ~]# ls -ld !$
  9. ls -ld /data/mongodb/mongos/log
  10. drwxr-xr-x 2 root root 6 Aug 27 00:04 /data/mongodb/mongos/log
  11. [root@mongodbserver1 ~]# mkdir -p /data/mongodb/config/{data,log}
  12. [root@mongodbserver1 ~]#
  13. [root@mongodbserver1 ~]# ls -l /data/mongodb/config
  14. total 0
  15. drwxr-xr-x 2 root root 6 Aug 27 00:04 data
  16. drwxr-xr-x 2 root root 6 Aug 27 00:04 log
  17. [root@mongodbserver1 ~]# mkdir -p /data/mongodb/shard1/{data,log}
  18. [root@mongodbserver1 ~]# ls -l /data/mongodb/shard1
  19. total 0
  20. drwxr-xr-x 2 root root 6 Aug 27 00:05 data
  21. drwxr-xr-x 2 root root 6 Aug 27 00:05 log
  22. [root@mongodbserver1 ~]# mkdir -p /data/mongodb/shard2/{data,log}
  23. [root@mongodbserver1 ~]# mkdir -p /data/mongodb/shard3/{data,log}
  24. [root@mongodbserver1 ~]# ls -l /data/mongodb/shard2
  25. total 0
  26. drwxr-xr-x 2 root root 6 Aug 27 00:05 data
  27. drwxr-xr-x 2 root root 6 Aug 27 00:05 log
  28. [root@mongodbserver1 ~]# ls -l /data/mongodb/shard3
  29. total 0
  30. drwxr-xr-x 2 root root 6 Aug 27 00:06 data
  31. drwxr-xr-x 2 root root 6 Aug 27 00:06 log

3. Configure the config servers

  1. # Since mongodb 3.4, the config servers must be deployed as a replica set
  2. # Add the configuration file (do this on all three machines)
  3. [root@mongodbserver3 ~]# mkdir /etc/mongod/
  4. [root@mongodbserver3 ~]#
  5. [root@mongodbserver3 ~]# vim /etc/mongod/config.conf
  6. # Contents as follows; bind_ip may listen on all addresses, or, for better security, only on the local addresses
  7. pidfilepath = /var/run/mongodb/configsrv.pid
  8. dbpath = /data/mongodb/config/data
  9. logpath = /data/mongodb/config/log/congigsrv.log
  10. logappend = true
  11. bind_ip = 0.0.0.0
  12. port = 21000
  13. fork = true
  14. configsvr = true #declare this is a config db of a cluster;
  15. replSet=configs #replica set name
  16. maxConns=20000 #maximum number of connections
  17. # Start the config server
  18. [root@mongodbserver1 ~]# mongod -f /etc/mongod/config.conf
  19. 2018-08-27T00:13:47.546+0800 I CONTROL [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
  20. about to fork child process, waiting until server is ready for connections.
  21. forked process: 41218
  22. child process started successfully, parent exiting
  23. # Log in to port 21000 on any one of the machines and initialize the replica set
  24. [root@mongodbserver1 ~]# mongo --port 21000
  25. > config = { _id: "configs", members: [ {_id : 0, host : "192.168.1.47:21000"},{_id : 1, host : "192.168.1.48:21000"},{_id : 2, host : "192.168.1.49:21000"}] }
  26. {
  27. "_id" : "configs",
  28. "members" : [
  29. {
  30. "_id" : 0,
  31. "host" : "192.168.1.47:21000"
  32. },
  33. {
  34. "_id" : 1,
  35. "host" : "192.168.1.48:21000"
  36. },
  37. {
  38. "_id" : 2,
  39. "host" : "192.168.1.49:21000"
  40. }
  41. ]
  42. }
  43. > rs.initiate(config)
  44. {
  45. "ok" : 1,
  46. "operationTime" : Timestamp(1535300396, 1),
  47. "$gleStats" : {
  48. "lastOpTime" : Timestamp(1535300396, 1),
  49. "electionId" : ObjectId("000000000000000000000000")
  50. },
  51. "lastCommittedOpTime" : Timestamp(0, 0),
  52. "$clusterTime" : {
  53. "clusterTime" : Timestamp(1535300396, 1),
  54. "signature" : {
  55. "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
  56. "keyId" : NumberLong(0)
  57. }
  58. }
  59. }
  60. configs:PRIMARY>

4. Configure the shards

  1. # Create the following configuration files on mongodbserver1, mongodbserver2, and mongodbserver3
  2. [root@mongodbserver1 mongod]# cat /etc/mongod/shard1.conf
  3. pidfilepath = /var/run/mongodb/shard1.pid
  4. dbpath = /data/mongodb/shard1/data
  5. logpath = /data/mongodb/shard1/log/shard1.log
  6. logappend = true
  7. bind_ip = 0.0.0.0
  8. port = 27001
  9. fork = true
  10. oplogSize = 4096
  11. journal = true
  12. quiet = true
  13. replSet=shard1 #replica set name
  14. shardsvr = true #declare this is a shard db of a cluster;
  15. maxConns=20000 #maximum number of connections
  16. [root@mongodbserver1 mongod]# cat /etc/mongod/shard2.conf
  17. pidfilepath = /var/run/mongodb/shard2.pid
  18. dbpath = /data/mongodb/shard2/data
  19. logpath = /data/mongodb/shard2/log/shard2.log
  20. logappend = true
  21. bind_ip = 0.0.0.0
  22. port = 27002
  23. fork = true
  24. oplogSize = 4096
  25. journal = true
  26. quiet = true
  27. replSet=shard2 #replica set name
  28. shardsvr = true #declare this is a shard db of a cluster;
  29. maxConns=20000 #maximum number of connections
  30. [root@mongodbserver1 mongod]# cat /etc/mongod/shard3.conf
  31. pidfilepath = /var/run/mongodb/shard3.pid
  32. dbpath = /data/mongodb/shard3/data
  33. logpath = /data/mongodb/shard3/log/shard3.log
  34. logappend = true
  35. bind_ip = 0.0.0.0
  36. port = 27003
  37. fork = true
  38. oplogSize = 4096
  39. journal = true
  40. quiet = true
  41. replSet=shard3 #replica set name
  42. shardsvr = true #declare this is a shard db of a cluster;
  43. maxConns=20000 #maximum number of connections
  44. [root@mongodbserver1 ~]# mongod -f /etc/mongod/shard1.conf
  45. 2018-08-27T09:45:20.879+0800 I CONTROL [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
  46. about to fork child process, waiting until server is ready for connections.
  47. forked process: 46045
  48. child process started successfully, parent exiting
  49. [root@mongodbserver1 ~]# mongod -f /etc/mongod/shard2.conf
  50. 2018-08-27T09:45:20.879+0800 I CONTROL [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
  51. about to fork child process, waiting until server is ready for connections.
  52. forked process: 46045
  53. child process started successfully, parent exiting
  54. [root@mongodbserver1 mongod]# mongod -f /etc/mongod/shard3.conf
  55. 2018-08-27T09:50:15.220+0800 I CONTROL [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
  56. about to fork child process, waiting until server is ready for connections.
  57. forked process: 46162
  58. child process started successfully, parent exiting
  59. [root@mongodbserver1 mongod]# netstat -nltup | grep :27003
  60. tcp 0 0 0.0.0.0:27003 0.0.0.0:* LISTEN 46162/mongod
  61. [root@mongodbserver1 mongod]# netstat -nltup | grep :27001
  62. tcp 0 0 0.0.0.0:27001 0.0.0.0:* LISTEN 45946/mongod
  63. [root@mongodbserver1 mongod]# netstat -nltup | grep :27002
  64. tcp 0 0 0.0.0.0:27002 0.0.0.0:* LISTEN 46045/mongod

5. Initialize the shard replica sets

  1. # Initialize the shard1 replica set
  2. # Log in to port 27001 on either 192.168.1.47 or 192.168.1.48 to initialize the replica set; not 192.168.1.49, because in shard1 that machine's port 27001 is the arbiter
  3. [root@mongodbserver1 mongod]# mongo --port 27001
  4. MongoDB shell version v4.0.1
  5. connecting to: mongodb://127.0.0.1:27001/
  6. MongoDB server version: 4.0.1
  7. Server has startup warnings:
  8. 2018-08-27T09:37:06.874+0800 I CONTROL [initandlisten]
  9. 2018-08-27T09:37:06.874+0800 I CONTROL [initandlisten] ** WARNING: Access control is not enabled for the database.
  10. 2018-08-27T09:37:06.874+0800 I CONTROL [initandlisten] ** Read and write access to data and configuration is unrestricted.
  11. 2018-08-27T09:37:06.874+0800 I CONTROL [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
  12. 2018-08-27T09:37:06.874+0800 I CONTROL [initandlisten]
  13. ---
  14. Enable MongoDB's free cloud-based monitoring service, which will then receive and display
  15. metrics about your deployment (disk utilization, CPU, operation statistics, etc).
  16. The monitoring data will be available on a MongoDB website with a unique URL accessible to you
  17. and anyone you share the URL with. MongoDB may use this information to make product
  18. improvements and to suggest MongoDB products and deployment options to you.
  19. To enable free monitoring, run the following command: db.enableFreeMonitoring()
  20. To permanently disable this reminder, run the following command: db.disableFreeMonitoring()
  21. ---
  22. > use admin
  23. switched to db admin
  24. > config = { _id: "shard1", members: [ {_id : 0, host : "192.168.1.47:27001"}, {_id: 1,host : "192.168.1.48:27001"},{_id : 2, host : "192.168.1.49:27001",arbiterOnly:true}] }
  25. {
  26. "_id" : "shard1",
  27. "members" : [
  28. {
  29. "_id" : 0,
  30. "host" : "192.168.1.47:27001"
  31. },
  32. {
  33. "_id" : 1,
  34. "host" : "192.168.1.48:27001"
  35. },
  36. {
  37. "_id" : 2,
  38. "host" : "192.168.1.49:27001",
  39. "arbiterOnly" : true
  40. }
  41. ]
  42. }
  43. > rs.initiate(config)
  44. {
  45. "ok" : 1,
  46. "operationTime" : Timestamp(1535335745, 1),
  47. "$clusterTime" : {
  48. "clusterTime" : Timestamp(1535335745, 1),
  49. "signature" : {
  50. "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
  51. "keyId" : NumberLong(0)
  52. }
  53. }
  54. }
  55. shard1:SECONDARY> rs.status()
  56. {
  57. "set" : "shard1",
  58. "date" : ISODate("2018-08-27T02:09:26.775Z"),
  59. "myState" : 1,
  60. "term" : NumberLong(1),
  61. "syncingTo" : "",
  62. "syncSourceHost" : "",
  63. "syncSourceId" : -1,
  64. "heartbeatIntervalMillis" : NumberLong(2000),
  65. "optimes" : {
  66. "lastCommittedOpTime" : {
  67. "ts" : Timestamp(1535335758, 2),
  68. "t" : NumberLong(1)
  69. },
  70. "readConcernMajorityOpTime" : {
  71. "ts" : Timestamp(1535335758, 2),
  72. "t" : NumberLong(1)
  73. },
  74. "appliedOpTime" : {
  75. "ts" : Timestamp(1535335758, 2),
  76. "t" : NumberLong(1)
  77. },
  78. "durableOpTime" : {
  79. "ts" : Timestamp(1535335758, 2),
  80. "t" : NumberLong(1)
  81. }
  82. },
  83. "lastStableCheckpointTimestamp" : Timestamp(1535335758, 1),
  84. "members" : [
  85. {
  86. "_id" : 0,
  87. "name" : "192.168.1.47:27001",
  88. "health" : 1,
  89. "state" : 1,
  90. "stateStr" : "PRIMARY",
  91. "uptime" : 1940,
  92. "optime" : {
  93. "ts" : Timestamp(1535335758, 2),
  94. "t" : NumberLong(1)
  95. },
  96. "optimeDate" : ISODate("2018-08-27T02:09:18Z"),
  97. "syncingTo" : "",
  98. "syncSourceHost" : "",
  99. "syncSourceId" : -1,
  100. "infoMessage" : "could not find member to sync from",
  101. "electionTime" : Timestamp(1535335756, 1),
  102. "electionDate" : ISODate("2018-08-27T02:09:16Z"),
  103. "configVersion" : 1,
  104. "self" : true,
  105. "lastHeartbeatMessage" : ""
  106. },
  107. {
  108. "_id" : 1,
  109. "name" : "192.168.1.48:27001",
  110. "health" : 1,
  111. "state" : 2,
  112. "stateStr" : "SECONDARY",
  113. "uptime" : 20,
  114. "optime" : {
  115. "ts" : Timestamp(1535335758, 2),
  116. "t" : NumberLong(1)
  117. },
  118. "optimeDurable" : {
  119. "ts" : Timestamp(1535335758, 2),
  120. "t" : NumberLong(1)
  121. },
  122. "optimeDate" : ISODate("2018-08-27T02:09:18Z"),
  123. "optimeDurableDate" : ISODate("2018-08-27T02:09:18Z"),
  124. "lastHeartbeat" : ISODate("2018-08-27T02:09:26.532Z"),
  125. "lastHeartbeatRecv" : ISODate("2018-08-27T02:09:25.106Z"),
  126. "pingMs" : NumberLong(0),
  127. "lastHeartbeatMessage" : "",
  128. "syncingTo" : "192.168.1.47:27001",
  129. "syncSourceHost" : "192.168.1.47:27001",
  130. "syncSourceId" : 0,
  131. "infoMessage" : "",
  132. "configVersion" : 1
  133. },
  134. {
  135. "_id" : 2,
  136. "name" : "192.168.1.49:27001",
  137. "health" : 1,
  138. "state" : 7,
  139. "stateStr" : "ARBITER",
  140. "uptime" : 20,
  141. "lastHeartbeat" : ISODate("2018-08-27T02:09:26.531Z"),
  142. "lastHeartbeatRecv" : ISODate("2018-08-27T02:09:25.865Z"),
  143. "pingMs" : NumberLong(0),
  144. "lastHeartbeatMessage" : "",
  145. "syncingTo" : "",
  146. "syncSourceHost" : "",
  147. "syncSourceId" : -1,
  148. "infoMessage" : "",
  149. "configVersion" : 1
  150. }
  151. ],
  152. "ok" : 1,
  153. "operationTime" : Timestamp(1535335758, 2),
  154. "$clusterTime" : {
  155. "clusterTime" : Timestamp(1535335758, 2),
  156. "signature" : {
  157. "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
  158. "keyId" : NumberLong(0)
  159. }
  160. }
  161. }
  162. shard1:PRIMARY>
  163. # Initialize the shard2 replica set
  164. # Log in to port 27002 on either 192.168.1.48 or 192.168.1.49 to initialize the replica set; not 192.168.1.47, because in shard2 that machine's port 27002 is the arbiter
  165. # rs.remove("192.168.1.49:27002"); removes a member
  166. # rs.add({_id: 2, host: "192.168.1.49:27002"})
  167. # rs.add({_id: 2, host: "192.168.1.47:27002",arbiterOnly:true})
  168. [root@mongodbserver2 ~]# mongo --port 27002
  169. >config = { _id: "shard2", members: [ {_id : 0, host : "192.168.1.47:27002" ,arbiterOnly:true},{_id : 1, host : "192.168.1.48:27002"},{_id : 2, host : "192.168.1.49:27002"}] }
  170. >rs.initiate(config)
  171. # Initialize the shard3 replica set
  172. # Log in to port 27003 on either 192.168.1.47 or 192.168.1.49 to initialize the replica set; not 192.168.1.48, because in shard3 that machine's port 27003 is the arbiter
  173. [root@mongodbserver3 ~]# mongo --port 27003
  174. > use admin
  175. switched to db admin
  176. > config = { _id: "shard3", members: [ {_id : 0, host : "192.168.1.47:27003"}, {_id : 1, host : "192.168.1.48:27003", arbiterOnly:true}, {_id : 2, host : "192.168.1.49:27003"}] }
  177. {
  178. "_id" : "shard3",
  179. "members" : [
  180. {
  181. "_id" : 0,
  182. "host" : "192.168.1.47:27003"
  183. },
  184. {
  185. "_id" : 1,
  186. "host" : "192.168.1.48:27003",
  187. "arbiterOnly" : true
  188. },
  189. {
  190. "_id" : 2,
  191. "host" : "192.168.1.49:27003"
  192. }
  193. ]
  194. }
  195. > rs.initiate(config)
  196. {
  197. "ok" : 1,
  198. "operationTime" : Timestamp(1535338725, 1),
  199. "$clusterTime" : {
  200. "clusterTime" : Timestamp(1535338725, 1),
  201. "signature" : {
  202. "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
  203. "keyId" : NumberLong(0)
  204. }
  205. }
  206. }
  207. shard3:PRIMARY> rs.status()
  208. {
  209. "set" : "shard3",
  210. "date" : ISODate("2018-08-27T02:59:59.805Z"),
  211. "myState" : 1,
  212. "term" : NumberLong(1),
  213. "syncingTo" : "",
  214. "syncSourceHost" : "",
  215. "syncSourceId" : -1,
  216. "heartbeatIntervalMillis" : NumberLong(2000),
  217. "optimes" : {
  218. "lastCommittedOpTime" : {
  219. "ts" : Timestamp(1535338797, 1),
  220. "t" : NumberLong(1)
  221. },
  222. "readConcernMajorityOpTime" : {
  223. "ts" : Timestamp(1535338797, 1),
  224. "t" : NumberLong(1)
  225. },
  226. "appliedOpTime" : {
  227. "ts" : Timestamp(1535338797, 1),
  228. "t" : NumberLong(1)
  229. },
  230. "durableOpTime" : {
  231. "ts" : Timestamp(1535338797, 1),
  232. "t" : NumberLong(1)
  233. }
  234. },
  235. "lastStableCheckpointTimestamp" : Timestamp(1535338797, 1),
  236. "members" : [
  237. {
  238. "_id" : 0,
  239. "name" : "192.168.1.47:27003",
  240. "health" : 1,
  241. "state" : 2,
  242. "stateStr" : "SECONDARY",
  243. "uptime" : 74,
  244. "optime" : {
  245. "ts" : Timestamp(1535338797, 1),
  246. "t" : NumberLong(1)
  247. },
  248. "optimeDurable" : {
  249. "ts" : Timestamp(1535338797, 1),
  250. "t" : NumberLong(1)
  251. },
  252. "optimeDate" : ISODate("2018-08-27T02:59:57Z"),
  253. "optimeDurableDate" : ISODate("2018-08-27T02:59:57Z"),
  254. "lastHeartbeat" : ISODate("2018-08-27T02:59:57.821Z"),
  255. "lastHeartbeatRecv" : ISODate("2018-08-27T02:59:58.287Z"),
  256. "pingMs" : NumberLong(0),
  257. "lastHeartbeatMessage" : "",
  258. "syncingTo" : "192.168.1.49:27003",
  259. "syncSourceHost" : "192.168.1.49:27003",
  260. "syncSourceId" : 2,
  261. "infoMessage" : "",
  262. "configVersion" : 1
  263. },
  264. {
  265. "_id" : 1,
  266. "name" : "192.168.1.48:27003",
  267. "health" : 1,
  268. "state" : 7,
  269. "stateStr" : "ARBITER",
  270. "uptime" : 74,
  271. "lastHeartbeat" : ISODate("2018-08-27T02:59:57.821Z"),
  272. "lastHeartbeatRecv" : ISODate("2018-08-27T02:59:59.384Z"),
  273. "pingMs" : NumberLong(0),
  274. "lastHeartbeatMessage" : "",
  275. "syncingTo" : "",
  276. "syncSourceHost" : "",
  277. "syncSourceId" : -1,
  278. "infoMessage" : "",
  279. "configVersion" : 1
  280. },
  281. {
  282. "_id" : 2,
  283. "name" : "192.168.1.49:27003",
  284. "health" : 1,
  285. "state" : 1,
  286. "stateStr" : "PRIMARY",
  287. "uptime" : 4185,
  288. "optime" : {
  289. "ts" : Timestamp(1535338797, 1),
  290. "t" : NumberLong(1)
  291. },
  292. "optimeDate" : ISODate("2018-08-27T02:59:57Z"),
  293. "syncingTo" : "",
  294. "syncSourceHost" : "",
  295. "syncSourceId" : -1,
  296. "infoMessage" : "could not find member to sync from",
  297. "electionTime" : Timestamp(1535338735, 1),
  298. "electionDate" : ISODate("2018-08-27T02:58:55Z"),
  299. "configVersion" : 1,
  300. "self" : true,
  301. "lastHeartbeatMessage" : ""
  302. }
  303. ],
  304. "ok" : 1,
  305. "operationTime" : Timestamp(1535338797, 1),
  306. "$clusterTime" : {
  307. "clusterTime" : Timestamp(1535338797, 1),
  308. "signature" : {
  309. "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
  310. "keyId" : NumberLong(0)
  311. }
  312. }
  313. }

6. Configure the router (mongos)

  1. bye
  2. [root@mongodbserver2 ~]# vim /etc/mongod/mongos.conf
  3. pidfilepath = /var/run/mongodb/mongos.pid
  4. logpath = /data/mongodb/mongos/log/mongos.log
  5. logappend = true
  6. bind_ip = 0.0.0.0
  7. port = 20000
  8. fork = true
  9. configdb = configs/192.168.1.47:21000,192.168.1.48:21000,192.168.1.49:21000 # the config server replica set (named configs); only 1 or 3 config servers are allowed
  10. # note: the stray spaces after the commas in the original line are what caused the getaddrinfo warnings in the output below - write the list without spaces
  11. maxConns=20000 # maximum number of connections
  12. "/etc/mongod/mongos.conf" [New] 8L, 360C written
  13. [root@mongodbserver2 ~]# mongos -f /etc/mongod/mongos.conf
  14. 2018-08-27T11:05:04.093+0800 I CONTROL [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
  15. 2018-08-27T11:05:04.254+0800 I NETWORK [main] getaddrinfo(" 192.168.1.48") failed: Name or service not known
  16. 2018-08-27T11:05:04.305+0800 I NETWORK [main] getaddrinfo(" 192.168.1.49") failed: Name or service not known
  17. about to fork child process, waiting until server is ready for connections.
  18. forked process: 4238
  19. child process started successfully, parent exiting
  20. [root@mongodbserver2 ~]#

7. Enable sharding

  1. mongos> sh.addShard("shard1/192.168.1.47:27001,192.168.1.48:27001,192.168.1.49:27001")
  2. {
  3. "shardAdded" : "shard1",
  4. "ok" : 1,
  5. "operationTime" : Timestamp(1535339410, 3),
  6. "$clusterTime" : {
  7. "clusterTime" : Timestamp(1535339410, 3),
  8. "signature" : {
  9. "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
  10. "keyId" : NumberLong(0)
  11. }
  12. }
  13. }
  14. # 192.168.1.49 was given the wrong role when shard2 was initialized, so it has been removed from the set
  15. mongos> sh.addShard("shard2/192.168.1.47:27002,192.168.1.48:27002")
  16. {
  17. "shardAdded" : "shard2",
  18. "ok" : 1,
  19. "operationTime" : Timestamp(1535339580, 9),
  20. "$clusterTime" : {
  21. "clusterTime" : Timestamp(1535339580, 9),
  22. "signature" : {
  23. "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
  24. "keyId" : NumberLong(0)
  25. }
  26. }
  27. }
  28. mongos> sh.addShard("shard3/192.168.1.47:27003,192.168.1.48:27003,192.168.1.49:27003")
  29. {
  30. "shardAdded" : "shard3",
  31. "ok" : 1,
  32. "operationTime" : Timestamp(1535339462, 5),
  33. "$clusterTime" : {
  34. "clusterTime" : Timestamp(1535339462, 5),
  35. "signature" : {
  36. "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
  37. "keyId" : NumberLong(0)
  38. }
  39. }
  40. }
  41. mongos> sh.status()
  42. --- Sharding Status ---
  43. sharding version: {
  44. "_id" : 1,
  45. "minCompatibleVersion" : 5,
  46. "currentVersion" : 6,
  47. "clusterId" : ObjectId("5b82d3380925dce6faaca18a")
  48. }
  49. shards:
  50. { "_id" : "shard1", "host" : "shard1/192.168.1.47:27001,192.168.1.48:27001", "state" : 1 }
  51. { "_id" : "shard2", "host" : "shard2/192.168.1.48:27002", "state" : 1 }
  52. { "_id" : "shard3", "host" : "shard3/192.168.1.47:27003,192.168.1.49:27003", "state" : 1 }
  53. active mongoses:
  54. "4.0.1" : 3
  55. autosplit:
  56. Currently enabled: yes
  57. balancer:
  58. Currently enabled: yes
  59. Currently running: no
  60. Failed balancer rounds in last 5 attempts: 0
  61. Migration Results for the last 24 hours:
  62. No recent migrations
  63. databases:
  64. { "_id" : "config", "primary" : "config", "partitioned" : true }
  65. config.system.sessions
  66. shard key: { "_id" : 1 }
  67. unique: false
  68. balancing: true
  69. chunks:
  70. shard1 1
  71. { "_id" : { "$minKey" : 1 } } -->> { "_id" : { "$maxKey" : 1 } } on : shard1 Timestamp(1, 0)
  72. mongos>
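
In the sh.status() output, shard2 and shard3 list fewer members than were passed to sh.addShard(), most likely because the registered host string reflects the replica set's actual current membership rather than the seed list. To double-check what mongos registered, the config database can be queried through the mongos; a sketch run from the shell:

# list the registered shards as stored in the config database (through mongos on port 20000)
mongo --port 20000 --eval 'db.getSiblingDB("config").shards.find().forEach(printjson)'
# the listShards admin command returns the same information
mongo --port 20000 --eval 'printjson(db.adminCommand({ listShards: 1 }))'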

36 MongoDB sharding test

  1. [root@mongodbserver1 mongod]# mongo --port 20000
  2. MongoDB shell version v4.0.1
  3. connecting to: mongodb://127.0.0.1:20000/
  4. MongoDB server version: 4.0.1
  5. Server has startup warnings:
  6. 2018-08-27T11:05:03.697+0800 I CONTROL [main]
  7. 2018-08-27T11:05:03.697+0800 I CONTROL [main] ** WARNING: Access control is not enabled for the database.
  8. 2018-08-27T11:05:03.697+0800 I CONTROL [main] ** Read and write access to data and configuration is unrestricted.
  9. 2018-08-27T11:05:03.697+0800 I CONTROL [main] ** WARNING: You are running this process as the root user, which is not recommended.
  10. 2018-08-27T11:05:03.697+0800 I CONTROL [main]
  11. mongos> use admin
  12. switched to db admin
  13. mongos> db.runCommand({ enablesharding : "testdb"})
  14. {
  15. "ok" : 1,
  16. "operationTime" : Timestamp(1535339870, 5),
  17. "$clusterTime" : {
  18. "clusterTime" : Timestamp(1535339870, 5),
  19. "signature" : {
  20. "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
  21. "keyId" : NumberLong(0)
  22. }
  23. }
  24. }
  25. mongos> db.runCommand( { shardcollection : "testdb.table1",key : {id: 1} } )
  26. {
  27. "collectionsharded" : "testdb.table1",
  28. "collectionUUID" : UUID("733c1dad-cb4e-4ed3-a3ca-c3cfbec2e30e"),
  29. "ok" : 1,
  30. "operationTime" : Timestamp(1535339897, 16),
  31. "$clusterTime" : {
  32. "clusterTime" : Timestamp(1535339897, 16),
  33. "signature" : {
  34. "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
  35. "keyId" : NumberLong(0)
  36. }
  37. }
  38. }
  39. mongos> use testdb
  40. switched to db testdb
  41. mongos> for (var i = 1; i <= 10000; i++) db.table1.save({id:i,"test1":"testval1"})
  42. WriteResult({ "nInserted" : 1 })
  43. mongos> db.table1.stats()
  44. {
  45. "sharded" : true,
  46. "capped" : false,
  47. "wiredTiger" : {
  48. "metadata" : {
  49. "formatVersion" : 1
  50. },
  51. "creationString" : "access_pattern_hint=none,allocation_size=4KB,app_metadata=(formatVersion=1),assert=(commit_timestamp=none,read_timestamp=none),block_allocation=best,block_compressor=snappy,cache_resident=false,checksum=on,colgroups=,collator=,columns=,dictionary=0,encryption=(keyid=,name=),exclusive=false,extractor=,format=btree,huffman_key=,huffman_value=,ignore_in_memory_cache_size=false,immutable=false,internal_item_max=0,internal_key_max=0,internal_key_truncate=true,internal_page_max=4KB,key_format=q,key_gap=10,leaf_item_max=0,leaf_key_max=0,leaf_page_max=32KB,leaf_value_max=64MB,log=(enabled=false),lsm=(auto_throttle=true,bloom=true,bloom_bit_count=16,bloom_config=,bloom_hash_count=8,bloom_oldest=false,chunk_count_limit=0,chunk_max=5GB,chunk_size=10MB,merge_custom=(prefix=,start_generation=0,suffix=),merge_max=15,merge_min=0),memory_page_max=10m,os_cache_dirty_max=0,os_cache_max=0,prefix_compression=false,prefix_compression_min=4,source=,split_deepen_min_child=0,split_deepen_per_child=0,split_pct=90,type=file,value_format=u",
  52. "type" : "file",
  53. "uri" : "statistics:table:collection-24--8506913186853910022",
  54. "LSM" : {
  55. "bloom filter false positives" : 0,
  56. "bloom filter hits" : 0,
  57. "bloom filter misses" : 0,
  58. "bloom filter pages evicted from cache" : 0,
  59. "bloom filter pages read into cache" : 0,
  60. "bloom filters in the LSM tree" : 0,
  61. "chunks in the LSM tree" : 0,
  62. "highest merge generation in the LSM tree" : 0,
  63. "queries that could have benefited from a Bloom filter that did not exist" : 0,
  64. "sleep for LSM checkpoint throttle" : 0,
  65. "sleep for LSM merge throttle" : 0,
  66. "total size of bloom filters" : 0
  67. },
  68. "block-manager" : {
  69. "allocations requiring file extension" : 21,
  70. "blocks allocated" : 21,
  71. "blocks freed" : 0,
  72. "checkpoint size" : 155648,
  73. "file allocation unit size" : 4096,
  74. "file bytes available for reuse" : 0,
  75. "file magic number" : 120897,
  76. "file major version number" : 1,
  77. "file size in bytes" : 167936,
  78. "minor version number" : 0
  79. },
  80. "btree" : {
  81. "btree checkpoint generation" : 91,
  82. "column-store fixed-size leaf pages" : 0,
  83. "column-store internal pages" : 0,
  84. "column-store variable-size RLE encoded values" : 0,
  85. "column-store variable-size deleted values" : 0,
  86. "column-store variable-size leaf pages" : 0,
  87. "fixed-record size" : 0,
  88. "maximum internal page key size" : 368,
  89. "maximum internal page size" : 4096,
  90. "maximum leaf page key size" : 2867,
  91. "maximum leaf page size" : 32768,
  92. "maximum leaf page value size" : 67108864,
  93. "maximum tree depth" : 3,
  94. "number of key/value pairs" : 0,
  95. "overflow pages" : 0,
  96. "pages rewritten by compaction" : 0,
  97. "row-store internal pages" : 0,
  98. "row-store leaf pages" : 0
  99. },
  100. "cache" : {
  101. "bytes currently in the cache" : 1349553,
  102. "bytes read into cache" : 0,
  103. "bytes written from cache" : 530298,
  104. "checkpoint blocked page eviction" : 0,
  105. "data source pages selected for eviction unable to be evicted" : 0,
  106. "eviction walk passes of a file" : 0,
  107. "eviction walk target pages histogram - 0-9" : 0,
  108. "eviction walk target pages histogram - 10-31" : 0,
  109. "eviction walk target pages histogram - 128 and higher" : 0,
  110. "eviction walk target pages histogram - 32-63" : 0,
  111. "eviction walk target pages histogram - 64-128" : 0,
  112. "eviction walks abandoned" : 0,
  113. "eviction walks gave up because they restarted their walk twice" : 0,
  114. "eviction walks gave up because they saw too many pages and found no candidates" : 0,
  115. "eviction walks gave up because they saw too many pages and found too few candidates" : 0,
  116. "eviction walks reached end of tree" : 0,
  117. "eviction walks started from root of tree" : 0,
  118. "eviction walks started from saved location in tree" : 0,
  119. "hazard pointer blocked page eviction" : 0,
  120. "in-memory page passed criteria to be split" : 0,
  121. "in-memory page splits" : 0,
  122. "internal pages evicted" : 0,
  123. "internal pages split during eviction" : 0,
  124. "leaf pages split during eviction" : 0,
  125. "modified pages evicted" : 0,
  126. "overflow pages read into cache" : 0,
  127. "page split during eviction deepened the tree" : 0,
  128. "page written requiring lookaside records" : 0,
  129. "pages read into cache" : 0,
  130. "pages read into cache after truncate" : 1,
  131. "pages read into cache after truncate in prepare state" : 0,
  132. "pages read into cache requiring lookaside entries" : 0,
  133. "pages requested from the cache" : 10000,
  134. "pages seen by eviction walk" : 0,
  135. "pages written from cache" : 20,
  136. "pages written requiring in-memory restoration" : 0,
  137. "tracked dirty bytes in the cache" : 1349094,
  138. "unmodified pages evicted" : 0
  139. },
  140. "cache_walk" : {
  141. "Average difference between current eviction generation when the page was last considered" : 0,
  142. "Average on-disk page image size seen" : 0,
  143. "Average time in cache for pages that have been visited by the eviction server" : 0,
  144. "Average time in cache for pages that have not been visited by the eviction server" : 0,
  145. "Clean pages currently in cache" : 0,
  146. "Current eviction generation" : 0,
  147. "Dirty pages currently in cache" : 0,
  148. "Entries in the root page" : 0,
  149. "Internal pages currently in cache" : 0,
  150. "Leaf pages currently in cache" : 0,
  151. "Maximum difference between current eviction generation when the page was last considered" : 0,
  152. "Maximum page size seen" : 0,
  153. "Minimum on-disk page image size seen" : 0,
  154. "Number of pages never visited by eviction server" : 0,
  155. "On-disk page image sizes smaller than a single allocation unit" : 0,
  156. "Pages created in memory and never written" : 0,
  157. "Pages currently queued for eviction" : 0,
  158. "Pages that could not be queued for eviction" : 0,
  159. "Refs skipped during cache traversal" : 0,
  160. "Size of the root page" : 0,
  161. "Total number of pages currently in cache" : 0
  162. },
  163. "compression" : {
  164. "compressed pages read" : 0,
  165. "compressed pages written" : 19,
  166. "page written failed to compress" : 0,
  167. "page written was too small to compress" : 1,
  168. "raw compression call failed, additional data available" : 0,
  169. "raw compression call failed, no additional data available" : 0,
  170. "raw compression call succeeded" : 0
  171. },
  172. "cursor" : {
  173. "bulk-loaded cursor-insert calls" : 0,
  174. "create calls" : 5,
  175. "cursor operation restarted" : 0,
  176. "cursor-insert key and value bytes inserted" : 561426,
  177. "cursor-remove key bytes removed" : 0,
  178. "cursor-update value bytes updated" : 0,
  179. "cursors cached on close" : 0,
  180. "cursors reused from cache" : 9996,
  181. "insert calls" : 10000,
  182. "modify calls" : 0,
  183. "next calls" : 1,
  184. "prev calls" : 1,
  185. "remove calls" : 0,
  186. "reserve calls" : 0,
  187. "reset calls" : 20003,
  188. "search calls" : 0,
  189. "search near calls" : 0,
  190. "truncate calls" : 0,
  191. "update calls" : 0
  192. },
  193. "reconciliation" : {
  194. "dictionary matches" : 0,
  195. "fast-path pages deleted" : 0,
  196. "internal page key bytes discarded using suffix compression" : 36,
  197. "internal page multi-block writes" : 0,
  198. "internal-page overflow keys" : 0,
  199. "leaf page key bytes discarded using prefix compression" : 0,
  200. "leaf page multi-block writes" : 1,
  201. "leaf-page overflow keys" : 0,
  202. "maximum blocks required for a page" : 1,
  203. "overflow values written" : 0,
  204. "page checksum matches" : 0,
  205. "page reconciliation calls" : 2,
  206. "page reconciliation calls for eviction" : 0,
  207. "pages deleted" : 0
  208. },
  209. "session" : {
  210. "cached cursor count" : 5,
  211. "object compaction" : 0,
  212. "open cursor count" : 0
  213. },
  214. "transaction" : {
  215. "update conflicts" : 0
  216. }
  217. },
  218. "ns" : "testdb.table1",
  219. "count" : 10000,
  220. "size" : 540000,
  221. "storageSize" : 167936,
  222. "totalIndexSize" : 208896,
  223. "indexSizes" : {
  224. "_id_" : 94208,
  225. "id_1" : 114688
  226. },
  227. "avgObjSize" : 54,
  228. "maxSize" : NumberLong(0),
  229. "nindexes" : 2,
  230. "nchunks" : 1,
  231. "shards" : {
  232. "shard3" : {
  233. "ns" : "testdb.table1",
  234. "size" : 540000,
  235. "count" : 10000,
  236. "avgObjSize" : 54,
  237. "storageSize" : 167936,
  238. "capped" : false,
  239. "wiredTiger" : {
  240. "metadata" : {
  241. "formatVersion" : 1
  242. },
  243. "creationString" : "access_pattern_hint=none,allocation_size=4KB,app_metadata=(formatVersion=1),assert=(commit_timestamp=none,read_timestamp=none),block_allocation=best,block_compressor=snappy,cache_resident=false,checksum=on,colgroups=,collator=,columns=,dictionary=0,encryption=(keyid=,name=),exclusive=false,extractor=,format=btree,huffman_key=,huffman_value=,ignore_in_memory_cache_size=false,immutable=false,internal_item_max=0,internal_key_max=0,internal_key_truncate=true,internal_page_max=4KB,key_format=q,key_gap=10,leaf_item_max=0,leaf_key_max=0,leaf_page_max=32KB,leaf_value_max=64MB,log=(enabled=false),lsm=(auto_throttle=true,bloom=true,bloom_bit_count=16,bloom_config=,bloom_hash_count=8,bloom_oldest=false,chunk_count_limit=0,chunk_max=5GB,chunk_size=10MB,merge_custom=(prefix=,start_generation=0,suffix=),merge_max=15,merge_min=0),memory_page_max=10m,os_cache_dirty_max=0,os_cache_max=0,prefix_compression=false,prefix_compression_min=4,source=,split_deepen_min_child=0,split_deepen_per_child=0,split_pct=90,type=file,value_format=u",
  244. "type" : "file",
  245. "uri" : "statistics:table:collection-24--8506913186853910022",
  246. "LSM" : {
  247. "bloom filter false positives" : 0,
  248. "bloom filter hits" : 0,
  249. "bloom filter misses" : 0,
  250. "bloom filter pages evicted from cache" : 0,
  251. "bloom filter pages read into cache" : 0,
  252. "bloom filters in the LSM tree" : 0,
  253. "chunks in the LSM tree" : 0,
  254. "highest merge generation in the LSM tree" : 0,
  255. "queries that could have benefited from a Bloom filter that did not exist" : 0,
  256. "sleep for LSM checkpoint throttle" : 0,
  257. "sleep for LSM merge throttle" : 0,
  258. "total size of bloom filters" : 0
  259. },
  260. "block-manager" : {
  261. "allocations requiring file extension" : 21,
  262. "blocks allocated" : 21,
  263. "blocks freed" : 0,
  264. "checkpoint size" : 155648,
  265. "file allocation unit size" : 4096,
  266. "file bytes available for reuse" : 0,
  267. "file magic number" : 120897,
  268. "file major version number" : 1,
  269. "file size in bytes" : 167936,
  270. "minor version number" : 0
  271. },
  272. "btree" : {
  273. "btree checkpoint generation" : 91,
  274. "column-store fixed-size leaf pages" : 0,
  275. "column-store internal pages" : 0,
  276. "column-store variable-size RLE encoded values" : 0,
  277. "column-store variable-size deleted values" : 0,
  278. "column-store variable-size leaf pages" : 0,
  279. "fixed-record size" : 0,
  280. "maximum internal page key size" : 368,
  281. "maximum internal page size" : 4096,
  282. "maximum leaf page key size" : 2867,
  283. "maximum leaf page size" : 32768,
  284. "maximum leaf page value size" : 67108864,
  285. "maximum tree depth" : 3,
  286. "number of key/value pairs" : 0,
  287. "overflow pages" : 0,
  288. "pages rewritten by compaction" : 0,
  289. "row-store internal pages" : 0,
  290. "row-store leaf pages" : 0
  291. },
  292. "cache" : {
  293. "bytes currently in the cache" : 1349553,
  294. "bytes read into cache" : 0,
  295. "bytes written from cache" : 530298,
  296. "checkpoint blocked page eviction" : 0,
  297. "data source pages selected for eviction unable to be evicted" : 0,
  298. "eviction walk passes of a file" : 0,
  299. "eviction walk target pages histogram - 0-9" : 0,
  300. "eviction walk target pages histogram - 10-31" : 0,
  301. "eviction walk target pages histogram - 128 and higher" : 0,
  302. "eviction walk target pages histogram - 32-63" : 0,
  303. "eviction walk target pages histogram - 64-128" : 0,
  304. "eviction walks abandoned" : 0,
  305. "eviction walks gave up because they restarted their walk twice" : 0,
  306. "eviction walks gave up because they saw too many pages and found no candidates" : 0,
  307. "eviction walks gave up because they saw too many pages and found too few candidates" : 0,
  308. "eviction walks reached end of tree" : 0,
  309. "eviction walks started from root of tree" : 0,
  310. "eviction walks started from saved location in tree" : 0,
  311. "hazard pointer blocked page eviction" : 0,
  312. "in-memory page passed criteria to be split" : 0,
  313. "in-memory page splits" : 0,
  314. "internal pages evicted" : 0,
  315. "internal pages split during eviction" : 0,
  316. "leaf pages split during eviction" : 0,
  317. "modified pages evicted" : 0,
  318. "overflow pages read into cache" : 0,
  319. "page split during eviction deepened the tree" : 0,
  320. "page written requiring lookaside records" : 0,
  321. "pages read into cache" : 0,
  322. "pages read into cache after truncate" : 1,
  323. "pages read into cache after truncate in prepare state" : 0,
  324. "pages read into cache requiring lookaside entries" : 0,
  325. "pages requested from the cache" : 10000,
  326. "pages seen by eviction walk" : 0,
  327. "pages written from cache" : 20,
  328. "pages written requiring in-memory restoration" : 0,
  329. "tracked dirty bytes in the cache" : 1349094,
  330. "unmodified pages evicted" : 0
  331. },
  332. "cache_walk" : {
  333. "Average difference between current eviction generation when the page was last considered" : 0,
  334. "Average on-disk page image size seen" : 0,
  335. "Average time in cache for pages that have been visited by the eviction server" : 0,
  336. "Average time in cache for pages that have not been visited by the eviction server" : 0,
  337. "Clean pages currently in cache" : 0,
  338. "Current eviction generation" : 0,
  339. "Dirty pages currently in cache" : 0,
  340. "Entries in the root page" : 0,
  341. "Internal pages currently in cache" : 0,
  342. "Leaf pages currently in cache" : 0,
  343. "Maximum difference between current eviction generation when the page was last considered" : 0,
  344. "Maximum page size seen" : 0,
  345. "Minimum on-disk page image size seen" : 0,
  346. "Number of pages never visited by eviction server" : 0,
  347. "On-disk page image sizes smaller than a single allocation unit" : 0,
  348. "Pages created in memory and never written" : 0,
  349. "Pages currently queued for eviction" : 0,
  350. "Pages that could not be queued for eviction" : 0,
  351. "Refs skipped during cache traversal" : 0,
  352. "Size of the root page" : 0,
  353. "Total number of pages currently in cache" : 0
  354. },
  355. "compression" : {
  356. "compressed pages read" : 0,
  357. "compressed pages written" : 19,
  358. "page written failed to compress" : 0,
  359. "page written was too small to compress" : 1,
  360. "raw compression call failed, additional data available" : 0,
  361. "raw compression call failed, no additional data available" : 0,
  362. "raw compression call succeeded" : 0
  363. },
  364. "cursor" : {
  365. "bulk-loaded cursor-insert calls" : 0,
  366. "create calls" : 5,
  367. "cursor operation restarted" : 0,
  368. "cursor-insert key and value bytes inserted" : 561426,
  369. "cursor-remove key bytes removed" : 0,
  370. "cursor-update value bytes updated" : 0,
  371. "cursors cached on close" : 0,
  372. "cursors reused from cache" : 9996,
  373. "insert calls" : 10000,
  374. "modify calls" : 0,
  375. "next calls" : 1,
  376. "prev calls" : 1,
  377. "remove calls" : 0,
  378. "reserve calls" : 0,
  379. "reset calls" : 20003,
  380. "search calls" : 0,
  381. "search near calls" : 0,
  382. "truncate calls" : 0,
  383. "update calls" : 0
  384. },
  385. "reconciliation" : {
  386. "dictionary matches" : 0,
  387. "fast-path pages deleted" : 0,
  388. "internal page key bytes discarded using suffix compression" : 36,
  389. "internal page multi-block writes" : 0,
  390. "internal-page overflow keys" : 0,
  391. "leaf page key bytes discarded using prefix compression" : 0,
  392. "leaf page multi-block writes" : 1,
  393. "leaf-page overflow keys" : 0,
  394. "maximum blocks required for a page" : 1,
  395. "overflow values written" : 0,
  396. "page checksum matches" : 0,
  397. "page reconciliation calls" : 2,
  398. "page reconciliation calls for eviction" : 0,
  399. "pages deleted" : 0
  400. },
  401. "session" : {
  402. "cached cursor count" : 5,
  403. "object compaction" : 0,
  404. "open cursor count" : 0
  405. },
  406. "transaction" : {
  407. "update conflicts" : 0
  408. }
  409. },
  410. "nindexes" : 2,
  411. "totalIndexSize" : 208896,
  412. "indexSizes" : {
  413. "_id_" : 94208,
  414. "id_1" : 114688
  415. },
  416. "ok" : 1,
  417. "operationTime" : Timestamp(1535339957, 1),
  418. "$gleStats" : {
  419. "lastOpTime" : {
  420. "ts" : Timestamp(1535339939, 572),
  421. "t" : NumberLong(1)
  422. },
  423. "electionId" : ObjectId("7fffffff0000000000000001")
  424. },
  425. "lastCommittedOpTime" : Timestamp(1535339957, 1),
  426. "$configServerState" : {
  427. "opTime" : {
  428. "ts" : Timestamp(1535339956, 1),
  429. "t" : NumberLong(1)
  430. }
  431. },
  432. "$clusterTime" : {
  433. "clusterTime" : Timestamp(1535339957, 1),
  434. "signature" : {
  435. "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
  436. "keyId" : NumberLong(0)
  437. }
  438. }
  439. }
  440. },
  441. "ok" : 1,
  442. "operationTime" : Timestamp(1535339957, 1),
  443. "$clusterTime" : {
  444. "clusterTime" : Timestamp(1535339957, 1),
  445. "signature" : {
  446. "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
  447. "keyId" : NumberLong(0)
  448. }
  449. }
  450. }
  451. mongos>
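
The stats above show all 10000 documents on shard3: with such a small collection there is only one chunk ("nchunks" : 1), so nothing has been migrated yet. To inspect the distribution and force more chunks for testing, something along these lines can be run in the mongos shell (a sketch; the split point is arbitrary):

# in the mongos shell (mongo --port 20000)
use testdb
db.table1.getShardDistribution()           # per-shard document count and size
sh.splitAt("testdb.table1", { id: 5000 })  # split the single chunk in two
sh.status()                                # the balancer may now move a chunk to another shard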

37 MongoDB backup and restore

1. Back up a specified database

[root@mongodbserver1 mongod]# mongodump --host 127.0.0.1 --port 20000  -d testdb -o /tmp/mongobak
2018-08-27T11:22:43.430+0800 writing testdb.table1 to
2018-08-27T11:22:43.541+0800 done dumping testdb.table1 (10000 documents)
[root@mongodbserver1 mongod]# ls -lh /tmp/mongobak/testdb/
total 532K
-rw-r--r-- 1 root root 528K Aug 27 11:22 table1.bson
-rw-r--r-- 1 root root 187 Aug 27 11:22 table1.metadata.json
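
For routine backups it helps to compress the dump and keep dated copies. A minimal sketch, assuming backups go under /data/backup, the cluster is reached through the mongos on port 20000, and the installed mongodump supports --gzip (available in recent versions):

# dated, compressed dump of testdb taken through mongos (sketch)
DAY=$(date +%F)
mkdir -p /data/backup/"$DAY"
mongodump --host 127.0.0.1 --port 20000 -d testdb --gzip -o /data/backup/"$DAY"
# drop backup directories older than 7 days
find /data/backup -maxdepth 1 -mindepth 1 -type d -mtime +7 -exec rm -rf {} \;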

2. Back up all databases

[root@mongodbserver1 mongod]# mongodump --host 127.0.0.1 --port 20000 -o /tmp/mongobak/alldatabase
2018-08-27T11:24:40.955+0800 writing admin.system.version to
2018-08-27T11:24:41.049+0800 done dumping admin.system.version (1 document)
2018-08-27T11:24:41.143+0800 writing config.locks to
2018-08-27T11:24:41.144+0800 writing config.changelog to
2018-08-27T11:24:41.144+0800 writing testdb.table1 to
2018-08-27T11:24:41.144+0800 writing config.lockpings to
2018-08-27T11:24:41.149+0800 done dumping config.changelog (7 documents)
2018-08-27T11:24:41.149+0800 writing config.shards to
2018-08-27T11:24:41.149+0800 done dumping config.locks (4 documents)
2018-08-27T11:24:41.149+0800 writing config.mongos to
2018-08-27T11:24:41.149+0800 done dumping config.lockpings (9 documents)
2018-08-27T11:24:41.149+0800 writing config.chunks to
2018-08-27T11:24:41.151+0800 done dumping config.mongos (3 documents)
2018-08-27T11:24:41.151+0800 writing config.collections to
2018-08-27T11:24:41.152+0800 done dumping config.shards (3 documents)
2018-08-27T11:24:41.152+0800 writing config.databases to
2018-08-27T11:24:41.153+0800 done dumping config.chunks (2 documents)
2018-08-27T11:24:41.153+0800 writing config.version to
2018-08-27T11:24:41.154+0800 done dumping config.databases (1 document)
2018-08-27T11:24:41.154+0800 writing config.tags to
2018-08-27T11:24:41.155+0800 done dumping config.collections (2 documents)
2018-08-27T11:24:41.155+0800 writing config.migrations to
2018-08-27T11:24:41.157+0800 done dumping config.version (1 document)
2018-08-27T11:24:41.170+0800 done dumping config.tags (0 documents)
2018-08-27T11:24:41.179+0800 done dumping config.migrations (0 documents)
2018-08-27T11:24:41.301+0800 done dumping testdb.table1 (10000 documents)
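
mongodump can also write everything into a single archive file, which is easier to copy between machines; a sketch, assuming the installed tools support --archive and --gzip (MongoDB 3.2 and later):

# dump all databases into one compressed archive file (sketch)
mongodump --host 127.0.0.1 --port 20000 --gzip --archive=/tmp/mongobak/all-$(date +%F).archive
# restore it later with the matching options:
# mongorestore --host 127.0.0.1 --port 20000 --gzip --archive=/tmp/mongobak/all-2018-08-27.archive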

3. Back up a specified collection

# This still creates a testdb directory under the output path, with the collection's two files inside it
[root@mongodbserver1 mongod]# mongodump --host 127.0.0.1 --port 20000 -d testdb -c table1 -o /tmp/mongobak/
2018-08-27T11:28:31.578+0800 writing testdb.table1 to
2018-08-27T11:28:31.636+0800 done dumping testdb.table1 (10000 documents)

4. Export a collection to a JSON file

[root@mongodbserver1 mongod]# mongoexport --host 127.0.0.1 --port 20000 -d testdb -c table1 -o /tmp/mydb2/testdb.json
2018-08-27T11:31:07.827+0800 connected to: 127.0.0.1:20000
2018-08-27T11:31:07.974+0800 exported 10000 records
[root@mongodbserver1 mongod]#
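
mongoexport can also filter what it exports, or write CSV instead of JSON. A sketch using the id and test1 fields of the test data above:

# export only the documents with id <= 100
mongoexport --host 127.0.0.1 --port 20000 -d testdb -c table1 -q '{"id": {"$lte": 100}}' -o /tmp/mydb2/testdb_first100.json
# export as CSV; --fields is required when --type=csv is used
mongoexport --host 127.0.0.1 --port 20000 -d testdb -c table1 --type=csv --fields=id,test1 -o /tmp/mydb2/testdb.csv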

5. MongoDB restore

1. Restore all databases

# alldatabase here is the directory that holds the dump of all databases; --drop is optional and deletes the existing data before restoring, which is generally not recommended
[root@mongodbserver1 mongod]# mongorestore -h 127.0.0.1 --port 20000 --drop /tmp/mongobak/alldatabase
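
A dump taken through mongos also contains the config database, which describes the cluster itself, so it is usually safer to leave it out when restoring. A sketch using --nsExclude, assuming mongorestore 3.4 or later:

# restore everything except the config database, through mongos (sketch)
mongorestore --host 127.0.0.1 --port 20000 --nsExclude="config.*" /tmp/mongobak/alldatabase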

2. Restore a specified database

# -d takes the name of the database to restore; the directory is the one holding that database's dump files (here /tmp/mongobak/testdb). In this sharded setup, connect through the mongos as well, otherwise the default port 27017 is used
[root@mongodbserver1 mongod]# mongorestore --host 127.0.0.1 --port 20000 -d testdb /tmp/mongobak/testdb

3. Restore a collection

# -c takes the name of the collection to restore; the argument is the path to that collection's .bson file from the dump
[root@mongodbserver1 mongod]# mongorestore --host 127.0.0.1 --port 20000 -d testdb -c table1 /tmp/mongobak/testdb/table1.bson

4. Import a collection

[root@mongodbserver1 mongod]# mongoimport -d testdb -c table1 --file /tmp/mydb2/testdb.json
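
Like the restore commands, mongoimport connects to localhost:27017 unless told otherwise; to go through the mongos and then spot-check the result, something like this can be used (a sketch):

# import through mongos and verify the document count afterwards
mongoimport --host 127.0.0.1 --port 20000 -d testdb -c table1 --file /tmp/mydb2/testdb.json
mongo --port 20000 testdb --eval 'print("documents in table1: " + db.table1.count())'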

38 Extras

Further reading

MongoDB security settings: http://www.mongoing.com/archives/631

Running JavaScript scripts in MongoDB: http://www.jianshu.com/p/6bd8934bd1ca
