Contents

1. Logical Structure

MongoDB logical structure          MySQL logical structure
database                           database
collection                         table
document                           row

2. Installation and Deployment

2.1 System Preparation

(1) RedHat or CentOS 6.2 or later
(2) Full set of system development packages installed
(3) IP address and hosts-file resolution working
(4) iptables firewall & SELinux disabled
(5) Transparent huge pages disabled
########################################################################
As root, append the following to /etc/rc.local:

if test -f /sys/kernel/mm/transparent_hugepage/enabled; then
  echo never > /sys/kernel/mm/transparent_hugepage/enabled
fi
if test -f /sys/kernel/mm/transparent_hugepage/defrag; then
  echo never > /sys/kernel/mm/transparent_hugepage/defrag
fi

Verify:
cat /sys/kernel/mm/transparent_hugepage/enabled
cat /sys/kernel/mm/transparent_hugepage/defrag

For other systems, see the official docs: https://docs.mongodb.com/manual/tutorial/transparent-huge-pages/
---------------
Why disable it?
Transparent Huge Pages (THP) is a Linux memory management system
that reduces the overhead of Translation Lookaside Buffer (TLB)
lookups on machines with large amounts of memory by using larger memory pages.
However, database workloads often perform poorly with THP,
because they tend to have sparse rather than contiguous memory access patterns.
You should disable THP on Linux machines to ensure best performance with MongoDB.
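The rc.local snippet above can also be wrapped in a small function. A minimal sketch, assuming a `THP_ROOT` override added here purely so the logic can be exercised against a scratch directory instead of the real sysfs path:

```shell
# Disable THP by writing "never" into the sysfs control files, as in the
# rc.local snippet above. THP_ROOT is a hypothetical override for testing;
# it defaults to the real sysfs location (writing there requires root).
disable_thp() {
  root="${THP_ROOT:-/sys/kernel/mm/transparent_hugepage}"
  for f in enabled defrag; do
    if test -f "$root/$f"; then
      echo never > "$root/$f"
    fi
  done
}
```

After running it as root, `cat` on both files should show `never` as the active setting.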

2.2 Installing MongoDB

2.2.1 Create the Required User and Group

useradd mongod
passwd mongod

2.2.2 Create the Directory Structure MongoDB Needs

mkdir -p /mongodb/conf
mkdir -p /mongodb/log
mkdir -p /mongodb/data

2.2.3 Upload and Unpack the Software into Place

[root@db01 data]# cd   /data
[root@db01 data]# tar xf mongodb-linux-x86_64-rhel70-3.6.12.tgz
[root@db01 data]# cp -r /data/mongodb-linux-x86_64-rhel70-3.6.12/bin/ /mongodb

2.2.4 Set Directory Ownership

chown -R mongod:mongod /mongodb

2.2.5 Set the User's Environment Variables

su - mongod
vi .bash_profile
export PATH=/mongodb/bin:$PATH
source .bash_profile

2.2.6 Start MongoDB

mongod --dbpath=/mongodb/data --logpath=/mongodb/log/mongodb.log --port=27017 --logappend --fork 

[mongod@db01 ~]$ mongod --version
db version v3.6.12
git version: c2b9acad0248ca06b14ef1640734b5d0595b55f1
OpenSSL version: OpenSSL 1.0.1e-fips 11 Feb 2013
allocator: tcmalloc
modules: none
build environment:
distmod: rhel70
distarch: x86_64
target_arch: x86_64

2.2.7 Log In to MongoDB

[mongod@db01 ~]$ mongod --dbpath=/mongodb/data --logpath=/mongodb/log/mongodb.log --port=27017 --logappend --fork
about to fork child process, waiting until server is ready for connections.
forked process: 2233
child process started successfully, parent exiting

[mongod@db01 ~]$ lsof -i:27017
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
mongod 2233 mongod 11u IPv4 25808 0t0 TCP localhost:27017 (LISTEN)

2.2.8 Using a Configuration File

YAML format

NOTE:
YAML does not support tab characters for indentation: use spaces instead.

-- System log settings
systemLog:
   destination: file
   path: "/mongodb/log/mongodb.log"    -- log file location
   logAppend: true                     -- append to the log instead of overwriting it

-- Storage settings
storage:
   journal:
      enabled: true
   dbPath: "/mongodb/data"             -- data directory location

-- Process control
processManagement:
   fork: true                          -- run as a background daemon
   pidFilePath: <string>               -- pid file location; usually left unset (remove this line and the pid file is generated under the data directory automatically)

-- Network settings
net:
   bindIp: <ip>                        -- listen address
   port: <port>                        -- port number; defaults to 27017 if unset

-- Security settings
security:
   authorization: enabled              -- enable username/password authentication

------------------ The following concern replica sets and sharded clusters ----------------------
replication:
   oplogSizeMB: <NUM>
   replSetName: "<REPSETNAME>"
   secondaryIndexPrefetch: "all"
sharding:
   clusterRole: <string>
   archiveMovedChunks: <boolean>

--- for mongos only
replication:
   localPingThresholdMs: <int>
sharding:
   configDB: <string>
---
++++++++++++++++++++++
YAML example:
cat > /mongodb/conf/mongo.conf <<EOF
systemLog:
   destination: file
   path: "/mongodb/log/mongodb.log"
   logAppend: true
storage:
   journal:
      enabled: true
   dbPath: "/mongodb/data/"
processManagement:
   fork: true
net:
   port: 27017
   bindIp: 10.0.0.51,127.0.0.1
EOF

mongod -f /mongodb/conf/mongo.conf --shutdown
mongod -f /mongodb/conf/mongo.conf
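Since YAML rejects tab indentation, it can be worth sanity-checking a generated config before starting mongod. A minimal sketch (the `check_no_tabs` helper and its file-path parameter are illustrative additions, not part of the original procedure):

```shell
# Fail if the given config file contains tab characters, which YAML
# does not allow for indentation.
check_no_tabs() {
  if grep -q "$(printf '\t')" "$1"; then
    echo "ERROR: tabs found in $1" >&2
    return 1
  fi
  echo "OK: $1 uses spaces only"
}
# Example: check_no_tabs /mongodb/conf/mongo.conf
```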

2.2.9 Shutting Down MongoDB

[mongod@db01 ~]$ mongod -f /mongodb/conf/mongo.conf --shutdown
killing process with pid: 2233
[mongod@db01 ~]$ mongod -f /mongodb/conf/mongo.conf
about to fork child process, waiting until server is ready for connections.
forked process: 2286
child process started successfully, parent exiting

2.3 Managing MongoDB with systemd

[root@db01 ~]# cat > /etc/systemd/system/mongod.service <<EOF
[Unit]
Description=mongodb
After=network.target remote-fs.target nss-lookup.target
[Service]
User=mongod
Type=forking
ExecStart=/mongodb/bin/mongod --config /mongodb/conf/mongo.conf
ExecReload=/bin/kill -s HUP $MAINPID
ExecStop=/mongodb/bin/mongod --config /mongodb/conf/mongo.conf --shutdown
PrivateTmp=true
[Install]
WantedBy=multi-user.target
EOF

[root@db01 ~]# systemctl restart mongod
[root@db01 ~]# systemctl stop mongod
[root@db01 ~]# systemctl start mongod

3. Common Basic MongoDB Operations

3.0 Databases Present by Default

test: the database selected by default at login
System databases for managing MongoDB:
admin: reserved system database for MongoDB administration
local: reserved local database; stores critical logs (including the oplog)
config: stores MongoDB configuration information

show databases / show dbs
show tables / show collections
use admin
db / select database()

3.1 Command Categories

3.1.1 db: Database Object Commands

db.[TAB][TAB]
db.help()
db.cheng.[TAB][TAB]
db.cheng.help()

3.1.2 rs: Replica Set Commands (replication set):

rs.[TAB][TAB]
rs.help()

3.1.3 sh: Sharded Cluster Commands (sharding cluster)

sh.[TAB][TAB]
sh.help()

4. MongoDB Object Operations

mongo          mysql
database -----> database
collection -----> table
document -----> row

4.1 Database Operations

> use test
>db.dropDatabase()
{ "dropped" : "test", "ok" : 1 }

4.2 Collection Operations

Method 1: create a collection explicitly
app> db.createCollection('a')
{ "ok" : 1 }
app> db.createCollection('b')

Method 2: a collection is created automatically when the first document is inserted into it.
use cheng
db.test.insert({name:"zhangsan"})
db.stu.insert({id:101,name:"zhangsan",age:20,gender:"m"})
show tables;
db.stu.insert({id:102,name:"lisi"})
db.stu.insert({a:"b",c:"d"})
db.stu.insert({a:1,c:2})

4.3 Document Operations

Insert data:
> for(i=0;i<10000;i++){db.log.insert({"uid":i,"name":"mongodb","age":6,"date":new
... Date()})}
WriteResult({ "nInserted" : 1 })

Count the documents:
> db.log.count()
10000

Full-collection query:
> db.log.find()

Show 50 records per page:
> DBQuery.shellBatchSize=50;

Query by condition:
> db.log.find({uid:999})
{ "_id" : ObjectId("5e17d817ea0f8e1be4269a78"), "uid" : 999, "name" : "mongodb", "age" : 6, "date" : ISODate("2020-01-10T01:49:11.311Z") }

Pretty-print as formatted JSON:
> db.log.find({uid:999}).pretty()
{
"_id" : ObjectId("5e17d817ea0f8e1be4269a78"),
"uid" : 999,
"name" : "mongodb",
"age" : 6,
"date" : ISODate("2020-01-10T01:49:11.311Z")
}

Remove all documents in the collection:
> db.log.remove({})

Remove by condition:
> db.log.remove({uid:999})
WriteResult({ "nRemoved" : 1 })
> db.log.find({uid:999})

4.3.1 View Collection Storage Info

> db.log.totalSize()
696320    // size of the collection's data plus indexes after compression

5. Users and Permission Management

5.1 Notes

Authentication database: the database you `use` when creating a user. When logging in as that user, you must specify the authentication database.

Administrative users must be created under admin.
1. The database you `use` when creating a user becomes that user's authentication database.
2. When logging in, you must explicitly specify the authentication database.
3. Typically, an administrator's authentication database is admin, while an application user's authentication database is the database it manages.
4. If you log in to the database directly without a `use`, the default authentication database is test, which is not what we recommend in production.
5. Starting with version 3.6, remote logins are refused unless the bindIp parameter is set; only local administrator logins are allowed.

5.2 User Creation Syntax

use admin
db.createUser(
{
user: "<name>",
pwd: "<cleartext password>",
roles: [
{ role: "<role>",
db: "<database>" } | "<role>",
...
]
}
)

Syntax notes:
user: username
pwd: password
roles:
role: role name
db: database the role applies to
Common roles: root, readWrite, read

Log in against the authentication database:
mongo -u cheng -p 123 10.0.0.51/cheng
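The createUser call can also be run non-interactively by feeding a JS file to the mongo shell (e.g. `mongo 10.0.0.51/admin create_user.js`). A sketch with placeholder credentials, following the syntax above:

```shell
# Generate a createUser call as a JS script; the user/pwd/role values
# here are placeholders, not accounts from this document.
cat > /tmp/create_user.js <<'EOF'
db = db.getSiblingDB("admin");
db.createUser({
  user: "app01",
  pwd: "app01pass",
  roles: [ { role: "readWrite", db: "cheng" } ]
});
EOF
```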

5.3 User Management Examples

Create a superuser that manages all databases (you must `use admin` before creating it):
$ mongo
use admin
> db.createUser(
... {
... user: "root",
... pwd: "root123",
... roles: [ { role: "root", db: "admin" } ]
... }
... )
Successfully added user: {
"user" : "root",
"roles" : [
{
"role" : "root",
"db" : "admin"
}
]
}

5.4 Authenticate the User

> db.auth('root','root123')
1

5.5 Add the Following to the Configuration File

[mongod@db01 ~]$ vim /mongodb/conf/mongo.conf    # append at the end
security:
   authorization: enabled

5.6 Restart MongoDB

[mongod@db01 ~]$ mongod -f /mongodb/conf/mongo.conf --shutdown
killing process with pid: 2717
[mongod@db01 ~]$ mongod -f /mongodb/conf/mongo.conf
about to fork child process, waiting until server is ready for connections.
forked process: 2868
child process started successfully, parent exiting

5.7 Log In and Authenticate

mongo -uroot -proot123  admin
mongo -uroot -proot123 10.0.0.51/admin
or:
mongo
use admin
db.auth('root','root123')

5.8 List Users:

use admin
db.system.users.find().pretty()

5.9 Create an Application User

[mongod@db01 ~]$ mongo -uroot -proot123 --host=10.0.0.51 --port=27017 admin
use cheng
> db.createUser(
... {
... user: "app01",
... pwd: "app01",
... roles: [ { role: "readWrite" , db: "cheng" } ]
... }
... )
Successfully added user: {
"user" : "app01",
"roles" : [
{
"role" : "readWrite",
"db" : "cheng"
}
]
}

Remote login:
[mongod@db01 ~]$ mongo -uapp01 -papp01 10.0.0.51/cheng
MongoDB shell version v3.6.12
connecting to: mongodb://10.0.0.51:27017/cheng?gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("0459de17-a9f3-4691-a1b5-351ecd4e37e3") }
MongoDB server version: 3.6.12
>

5.10 Query User Information in MongoDB

[mongod@db01 ~]$ mongo -uroot -proot123
> use admin
switched to db admin
> db.system.users.find().pretty()
{
"_id" : "admin.root",
"user" : "root",
"db" : "admin",
"credentials" : {
"SCRAM-SHA-1" : {
"iterationCount" : 10000,
"salt" : "RYTp8T4pDQdkmx9OeN9o4Q==",
"storedKey" : "KsWW+xILkuXru2UKvQNTwGRhl2c=",
"serverKey" : "4is9P5N8x7dw6yqmOwfzI3xDHmo="
}
},
"roles" : [
{
"role" : "root",
"db" : "admin"
}
]
}
{
"_id" : "cheng.app01",
"user" : "app01",
"db" : "cheng",
"credentials" : {
"SCRAM-SHA-1" : {
"iterationCount" : 10000,
"salt" : "SjUWkgInUxxyba+Wv8Q5Yw==",
"storedKey" : "3p2XWCzZiiadeLPnN1czQsKo1ok=",
"serverKey" : "3M20fakCqSgUUi7FJY+LQwecsgc="
}
},
"roles" : [
{
"role" : "readWrite",
"db" : "cheng"
}
]
}
>

5.11 Delete a User (log in as root and `use` the user's authentication database)

First create a user to delete:
db.createUser({user: "app02",pwd: "app02",roles: [ { role: "readWrite" , db: "cheng1" } ]})
Then delete it:
mongo -uroot -proot123 10.0.0.51/admin
use cheng1
db.dropUser("app02")

5.12 User Management Notes

1. A user is created against an authentication database: admin for administrators, the managed database for application users.
2. When logging in, pay attention to the authentication database:
mongo -uapp01 -papp01 10.0.0.51:27017/cheng
3. Key parameters:
[mongod@db01 ~]$ vim /mongodb/conf/mongo.conf
net:
   port: 27017
   bindIp: 10.0.0.51,127.0.0.1
security:
   authorization: enabled

6. MongoDB Replica Sets (RS, Replication Set)

6.1 Basic Principles

The basic topology is one primary and two secondaries, with built-in mutual monitoring and a voting mechanism (MongoDB uses Raft; MySQL MGR uses a Paxos variant).
If the primary fails, the members of the replica set hold an election and promote a new primary to serve clients in place of the old one. The replica set also automatically notifies
client programs that the primary has changed, so applications connect to the new primary.

6.2 Replica Set Configuration Walkthrough

6.2.1 Planning

Three or more mongod nodes (or multiple instances)

6.2.2 Environment Preparation

1. Multiple ports:

28017, 28018, 28019, 28020

2. Multiple directory trees:

su - mongod
mkdir -p /mongodb/28017/conf /mongodb/28017/data /mongodb/28017/log
mkdir -p /mongodb/28018/conf /mongodb/28018/data /mongodb/28018/log
mkdir -p /mongodb/28019/conf /mongodb/28019/data /mongodb/28019/log
mkdir -p /mongodb/28020/conf /mongodb/28020/data /mongodb/28020/log

3. Multiple configuration files

/mongodb/28017/conf/mongod.conf
/mongodb/28018/conf/mongod.conf
/mongodb/28019/conf/mongod.conf
/mongodb/28020/conf/mongod.conf

4. Configuration file contents

cat > /mongodb/28017/conf/mongod.conf <<EOF
systemLog:
   destination: file
   path: /mongodb/28017/log/mongodb.log
   logAppend: true
storage:
   journal:
      enabled: true
   dbPath: /mongodb/28017/data
   directoryPerDB: true
   #engine: wiredTiger          # storage engine
   wiredTiger:
      engineConfig:
         cacheSizeGB: 1         # cache size
         directoryForIndexes: true
      collectionConfig:
         blockCompressor: zlib
      indexConfig:
         prefixCompression: true
processManagement:
   fork: true
net:
   bindIp: 10.0.0.51,127.0.0.1
   port: 28017
replication:
   oplogSizeMB: 2048            # maximum oplog size
   replSetName: my_repl
EOF

\cp /mongodb/28017/conf/mongod.conf /mongodb/28018/conf/
\cp /mongodb/28017/conf/mongod.conf /mongodb/28019/conf/
\cp /mongodb/28017/conf/mongod.conf /mongodb/28020/conf/

sed 's#28017#28018#g' /mongodb/28018/conf/mongod.conf -i
sed 's#28017#28019#g' /mongodb/28019/conf/mongod.conf -i
sed 's#28017#28020#g' /mongodb/28020/conf/mongod.conf -i
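The copy-and-sed steps above generalize to any list of ports. A sketch (the `clone_conf` helper and its base-directory parameter are illustrative additions, not part of the original procedure):

```shell
# Clone a base mongod.conf to additional instances, rewriting every
# occurrence of the base port (which appears in path, dbPath and port).
clone_conf() {
  base_dir="$1"; base_port="$2"; shift 2
  for port in "$@"; do
    mkdir -p "$base_dir/$port/conf" "$base_dir/$port/data" "$base_dir/$port/log"
    sed "s#$base_port#$port#g" "$base_dir/$base_port/conf/mongod.conf" \
      > "$base_dir/$port/conf/mongod.conf"
  done
}
# Example: clone_conf /mongodb 28017 28018 28019 28020
```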

5. Start all the instances

[mongod@db01 ~]$ mongod -f /mongodb/28017/conf/mongod.conf
about to fork child process, waiting until server is ready for connections.
forked process: 3034
child process started successfully, parent exiting
[mongod@db01 ~]$ mongod -f /mongodb/28018/conf/mongod.conf
about to fork child process, waiting until server is ready for connections.
forked process: 3063
child process started successfully, parent exiting
[mongod@db01 ~]$ mongod -f /mongodb/28019/conf/mongod.conf
about to fork child process, waiting until server is ready for connections.
forked process: 3092
child process started successfully, parent exiting
[mongod@db01 ~]$ mongod -f /mongodb/28020/conf/mongod.conf
about to fork child process, waiting until server is ready for connections.
forked process: 3121
child process started successfully, parent exiting

[mongod@db01 ~]$ netstat -lnp|grep 280
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
tcp 0 0 127.0.0.1:28017 0.0.0.0:* LISTEN 3034/mongod
tcp 0 0 10.0.0.51:28017 0.0.0.0:* LISTEN 3034/mongod
tcp 0 0 127.0.0.1:28018 0.0.0.0:* LISTEN 3063/mongod
tcp 0 0 10.0.0.51:28018 0.0.0.0:* LISTEN 3063/mongod
tcp 0 0 127.0.0.1:28019 0.0.0.0:* LISTEN 3092/mongod
tcp 0 0 10.0.0.51:28019 0.0.0.0:* LISTEN 3092/mongod
tcp 0 0 127.0.0.1:28020 0.0.0.0:* LISTEN 3121/mongod
tcp 0 0 10.0.0.51:28020 0.0.0.0:* LISTEN 3121/mongod
unix 2 [ ACC ] STREAM LISTENING 42006 3034/mongod /tmp/mongodb-28017.sock
unix 2 [ ACC ] STREAM LISTENING 42017 3063/mongod /tmp/mongodb-28018.sock
unix 2 [ ACC ] STREAM LISTENING 42031 3092/mongod /tmp/mongodb-28019.sock
unix 2 [ ACC ] STREAM LISTENING 42048 3121/mongod /tmp/mongodb-28020.sock

6.3 Configure a Standard Replica Set:

One primary and two normal secondaries:
[mongod@db01 ~]$ mongo --port 28017 admin
> config = {_id: 'my_repl', members: [
... {_id: 0, host: '10.0.0.51:28017'},
... {_id: 1, host: '10.0.0.51:28018'},
... {_id: 2, host: '10.0.0.51:28019'}]
... }
{
"_id" : "my_repl",
"members" : [
{
"_id" : 0,
"host" : "10.0.0.51:28017"
},
{
"_id" : 1,
"host" : "10.0.0.51:28018"
},
{
"_id" : 2,
"host" : "10.0.0.51:28019"
}
]
}
> rs.initiate(config)
{
"ok" : 1,
"operationTime" : Timestamp(1578626170, 1),
"$clusterTime" : {
"clusterTime" : Timestamp(1578626170, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
}
my_repl:SECONDARY>
my_repl:PRIMARY>
my_repl:PRIMARY>
Check the replica set status:
my_repl:PRIMARY> rs.status();
{
"set" : "my_repl",
"date" : ISODate("2020-01-10T03:17:27.739Z"),
"myState" : 1,
"term" : NumberLong(1),
"syncingTo" : "",
"syncSourceHost" : "",
"syncSourceId" : -1,
"heartbeatIntervalMillis" : NumberLong(2000),
"optimes" : {
"lastCommittedOpTime" : {
"ts" : Timestamp(1578626243, 1),
"t" : NumberLong(1)
},
"readConcernMajorityOpTime" : {
"ts" : Timestamp(1578626243, 1),
"t" : NumberLong(1)
},
"appliedOpTime" : {
"ts" : Timestamp(1578626243, 1),
"t" : NumberLong(1)
},
"durableOpTime" : {
"ts" : Timestamp(1578626243, 1),
"t" : NumberLong(1)
}
},
"members" : [
{
"_id" : 0,
"name" : "10.0.0.51:28017",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 174,
"optime" : {
"ts" : Timestamp(1578626243, 1),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2020-01-10T03:17:23Z"),
"syncingTo" : "",
"syncSourceHost" : "",
"syncSourceId" : -1,
"infoMessage" : "could not find member to sync from",
"electionTime" : Timestamp(1578626182, 1),
"electionDate" : ISODate("2020-01-10T03:16:22Z"),
"configVersion" : 1,
"self" : true,
"lastHeartbeatMessage" : ""
},
{
"_id" : 1,
"name" : "10.0.0.51:28018",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 77,
"optime" : {
"ts" : Timestamp(1578626243, 1),
"t" : NumberLong(1)
},
"optimeDurable" : {
"ts" : Timestamp(1578626243, 1),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2020-01-10T03:17:23Z"),
"optimeDurableDate" : ISODate("2020-01-10T03:17:23Z"),
"lastHeartbeat" : ISODate("2020-01-10T03:17:26.070Z"),
"lastHeartbeatRecv" : ISODate("2020-01-10T03:17:26.844Z"),
"pingMs" : NumberLong(0),
"lastHeartbeatMessage" : "",
"syncingTo" : "10.0.0.51:28017",
"syncSourceHost" : "10.0.0.51:28017",
"syncSourceId" : 0,
"infoMessage" : "",
"configVersion" : 1
},
{
"_id" : 2,
"name" : "10.0.0.51:28019",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 77,
"optime" : {
"ts" : Timestamp(1578626243, 1),
"t" : NumberLong(1)
},
"optimeDurable" : {
"ts" : Timestamp(1578626243, 1),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2020-01-10T03:17:23Z"),
"optimeDurableDate" : ISODate("2020-01-10T03:17:23Z"),
"lastHeartbeat" : ISODate("2020-01-10T03:17:26.070Z"),
"lastHeartbeatRecv" : ISODate("2020-01-10T03:17:26.836Z"),
"pingMs" : NumberLong(0),
"lastHeartbeatMessage" : "",
"syncingTo" : "10.0.0.51:28017",
"syncSourceHost" : "10.0.0.51:28017",
"syncSourceId" : 0,
"infoMessage" : "",
"configVersion" : 1
}
],
"ok" : 1,
"operationTime" : Timestamp(1578626243, 1),
"$clusterTime" : {
"clusterTime" : Timestamp(1578626243, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
}
my_repl:PRIMARY>

6.4 One Primary, One Secondary, One Arbiter

mongo --port 28017 admin
config = {_id: 'my_repl', members: [
{_id: 0, host: '10.0.0.51:28017'},
{_id: 1, host: '10.0.0.51:28018'},
{_id: 2, host: '10.0.0.51:28019',"arbiterOnly":true}]
}
rs.initiate(config)

6.5 Replica Set Management

6.5.1 Check Replica Set Status

1. rs.status();    // overall replica set status
2. rs.isMaster();  // is the current node the primary?
3. rs.conf();      // view the replica set configuration
my_repl:PRIMARY> rs.conf()
{
"_id" : "my_repl",
"version" : 1,
"protocolVersion" : NumberLong(1),
"members" : [
{
"_id" : 0,
"host" : "10.0.0.51:28017",
"arbiterOnly" : false,
"buildIndexes" : true,
"hidden" : false,
"priority" : 1,
"tags" : { },
"slaveDelay" : NumberLong(0), #类似于延时从库
"votes" : 1
},
{
"_id" : 1,
"host" : "10.0.0.51:28018",
"arbiterOnly" : false,
"buildIndexes" : true,
"hidden" : false,
"priority" : 1,
"tags" : { },
"slaveDelay" : NumberLong(0),
"votes" : 1
},
{
"_id" : 2,
"host" : "10.0.0.51:28019",
"arbiterOnly" : false,
"buildIndexes" : true,
"hidden" : false,
"priority" : 1,
"tags" : { },
"slaveDelay" : NumberLong(0),
"votes" : 1
}
],
"settings" : {
"chainingAllowed" : true,
"heartbeatIntervalMillis" : 2000,
"heartbeatTimeoutSecs" : 10,
"electionTimeoutMillis" : 10000,
"catchUpTimeoutMillis" : -1,
"catchUpTakeoverDelayMillis" : 30000,
"getLastErrorModes" : { },
"getLastErrorDefaults" : {
"w" : 1,
"wtimeout" : 0
},
"replicaSetId" : ObjectId("5e17ec7a5506d8f4ad899583")
}
}

6.5.2 Adding and Removing Nodes

rs.remove("ip:port");   // remove a node
rs.add("ip:port");      // add a secondary node
rs.addArb("ip:port");   // add an arbiter node

Example: adding an arbiter node
1. Connect to the primary
[mongod@db01 ~]$ mongo --port 28018 admin
2. Add the arbiter node
my_repl:PRIMARY> rs.addArb("10.0.0.51:28020")
{
"ok" : 1,
"operationTime" : Timestamp(1578626917, 1),
"$clusterTime" : {
"clusterTime" : Timestamp(1578626917, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
}

3. Check node status
my_repl:PRIMARY> rs.isMaster()
{
"hosts" : [
"10.0.0.51:28017",
"10.0.0.51:28018",
"10.0.0.51:28019"
],
"arbiters" : [
"10.0.0.51:28020"
],
"setName" : "my_repl",
"setVersion" : 2,
"ismaster" : true,
"secondary" : false,
"primary" : "10.0.0.51:28017",
"me" : "10.0.0.51:28017",
"electionId" : ObjectId("7fffffff0000000000000001"),
"lastWrite" : {
"opTime" : {
"ts" : Timestamp(1578626917, 1),
"t" : NumberLong(1)
},
"lastWriteDate" : ISODate("2020-01-10T03:28:37Z"),
"majorityOpTime" : {
"ts" : Timestamp(1578626917, 1),
"t" : NumberLong(1)
},
"majorityWriteDate" : ISODate("2020-01-10T03:28:37Z")
},
"maxBsonObjectSize" : 16777216,
"maxMessageSizeBytes" : 48000000,
"maxWriteBatchSize" : 100000,
"localTime" : ISODate("2020-01-10T03:28:46.949Z"),
"logicalSessionTimeoutMinutes" : 30,
"minWireVersion" : 0,
"maxWireVersion" : 6,
"readOnly" : false,
"ok" : 1,
"operationTime" : Timestamp(1578626917, 1),
"$clusterTime" : {
"clusterTime" : Timestamp(1578626917, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
} rs.remove("ip:port"); // 删除一个节点
例子:
my_repl:PRIMARY> rs.remove("10.0.0.51:28019");
{
"ok" : 1,
"operationTime" : Timestamp(1578627065, 1),
"$clusterTime" : {
"clusterTime" : Timestamp(1578627065, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
}

my_repl:PRIMARY> rs.isMaster()    # verify
{
"hosts" : [
"10.0.0.51:28017",
"10.0.0.51:28018"
],
...

rs.add("ip:port");   // add a secondary node
Example:
my_repl:PRIMARY> rs.add("10.0.0.51:28019")
{ "ok" : 1 }
my_repl:PRIMARY> rs.isMaster()

6.5.3 Special Secondary Nodes

1. Introduction:

arbiter node: votes in primary elections, but stores no data and serves no traffic
hidden node: hidden from applications; cannot become primary and does not serve client traffic
delay node: its data lags the primary by a set interval; because the data is stale it should neither serve traffic nor become primary, so it is usually combined with hidden
In practice, delay and hidden are configured together.

2. Configure a Delayed Node (usually also hidden)

cfg=rs.conf()
cfg.members[3].priority=0       # set member 3's priority to 0
cfg.members[3].hidden=true      # hide it
cfg.members[3].slaveDelay=120   # delay in seconds
my_repl:PRIMARY> rs.reconfig(cfg)
{
"ok" : 1,
"operationTime" : Timestamp(1578628124, 1),
"$clusterTime" : {
"clusterTime" : Timestamp(1578628124, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
}

my_repl:PRIMARY> rs.conf();    # view the modified configuration
{
"_id" : 4,
"host" : "10.0.0.51:28019",
"arbiterOnly" : false,
"buildIndexes" : true,
"hidden" : true,
"priority" : 0,
"tags" : { },
"slaveDelay" : NumberLong(120),
"votes" : 1
}

Note: member indexes are positional, so count through the members array carefully to find the member you want to modify.

To revert the configuration above:
cfg=rs.conf()
cfg.members[2].priority=1
cfg.members[2].hidden=false
cfg.members[2].slaveDelay=0
rs.reconfig(cfg)
After a successful reconfig, check the resulting settings with:
rs.conf();

6.5.4 Other Replica Set Commands

View the replica set configuration:
admin> rs.conf()
View the status of each member:
admin> rs.status()
++++++++++++++++++++++++++++++++++++++++++++++++
-- Step down the primary (do not trigger this casually)
admin> rs.stepDown()
Note:
admin> rs.freeze(300)    // freeze a secondary so it cannot become primary
Both freeze() and stepDown take seconds.
+++++++++++++++++++++++++++++++++++++++++++++
Allow reads on a secondary (run on the secondary):
admin> rs.slaveOk()
e.g.:
admin> use app
switched to db app
app> db.createCollection('a')
{ "ok" : 0, "errmsg" : "not master", "code" : 10107 }

Check secondaries (monitor replication lag):
admin> rs.printSlaveReplicationInfo()
source: 192.168.1.22:27017
syncedTo: Thu May 26 2016 10:28:56 GMT+0800 (CST)
0 secs (0 hrs) behind the primary

7. MongoDB Sharded Cluster (MSC)

7.1 Planning

10 instances: ports 38017-38026
(1) config server: 38018-38020
    a 3-node replica set (1 primary, 2 secondaries; arbiters not supported), replica set name configReplSet
(2) shard nodes:
    sh1: 38021-38023 (1 primary, 1 secondary, 1 arbiter; replica set name sh1)
    sh2: 38024-38026 (1 primary, 1 secondary, 1 arbiter; replica set name sh2)
(3) mongos:
    38017

7.2 Shard Node Configuration

7.2.1 Create Directories:

mkdir -p /mongodb/38021/conf  /mongodb/38021/log  /mongodb/38021/data
mkdir -p /mongodb/38022/conf /mongodb/38022/log /mongodb/38022/data
mkdir -p /mongodb/38023/conf /mongodb/38023/log /mongodb/38023/data
mkdir -p /mongodb/38024/conf /mongodb/38024/log /mongodb/38024/data
mkdir -p /mongodb/38025/conf /mongodb/38025/log /mongodb/38025/data
mkdir -p /mongodb/38026/conf /mongodb/38026/log /mongodb/38026/data
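The six mkdir commands above can equally be written as a loop. A sketch (the `make_shard_dirs` helper and its base-directory parameter are parameterized here purely for illustration):

```shell
# Create conf/log/data directories for each shard instance port.
make_shard_dirs() {
  base_dir="$1"; shift
  for port in "$@"; do
    mkdir -p "$base_dir/$port/conf" "$base_dir/$port/log" "$base_dir/$port/data"
  done
}
# Example: make_shard_dirs /mongodb 38021 38022 38023 38024 38025 38026
```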

7.2.2 Edit the Configuration Files:

First replica set: 38021-38023 (1 primary, 1 secondary, 1 arbiter)

cat > /mongodb/38021/conf/mongodb.conf <<EOF
systemLog:
   destination: file
   path: /mongodb/38021/log/mongodb.log
   logAppend: true
storage:
   journal:
      enabled: true
   dbPath: /mongodb/38021/data
   directoryPerDB: true
   #engine: wiredTiger
   wiredTiger:
      engineConfig:
         cacheSizeGB: 1
         directoryForIndexes: true
      collectionConfig:
         blockCompressor: zlib
      indexConfig:
         prefixCompression: true
net:
   bindIp: 10.0.0.51,127.0.0.1
   port: 38021
replication:
   oplogSizeMB: 2048
   replSetName: sh1          # replica set name
sharding:
   clusterRole: shardsvr     # fixed cluster role name
processManagement:
   fork: true
EOF

\cp /mongodb/38021/conf/mongodb.conf /mongodb/38022/conf/
\cp /mongodb/38021/conf/mongodb.conf /mongodb/38023/conf/

sed 's#38021#38022#g' /mongodb/38022/conf/mongodb.conf -i
sed 's#38021#38023#g' /mongodb/38023/conf/mongodb.conf -i

Second replica set: 38024-38026 (1 primary, 1 secondary, 1 arbiter)

cat > /mongodb/38024/conf/mongodb.conf <<EOF
systemLog:
   destination: file
   path: /mongodb/38024/log/mongodb.log
   logAppend: true
storage:
   journal:
      enabled: true
   dbPath: /mongodb/38024/data
   directoryPerDB: true
   wiredTiger:
      engineConfig:
         cacheSizeGB: 1
         directoryForIndexes: true
      collectionConfig:
         blockCompressor: zlib
      indexConfig:
         prefixCompression: true
net:
   bindIp: 10.0.0.51,127.0.0.1
   port: 38024
replication:
   oplogSizeMB: 2048
   replSetName: sh2
sharding:
   clusterRole: shardsvr
processManagement:
   fork: true
EOF

\cp /mongodb/38024/conf/mongodb.conf /mongodb/38025/conf/
\cp /mongodb/38024/conf/mongodb.conf /mongodb/38026/conf/

sed 's#38024#38025#g' /mongodb/38025/conf/mongodb.conf -i
sed 's#38024#38026#g' /mongodb/38026/conf/mongodb.conf -i

7.2.3 Start All Nodes and Build the Replica Sets

mongod -f  /mongodb/38021/conf/mongodb.conf
mongod -f /mongodb/38022/conf/mongodb.conf
mongod -f /mongodb/38023/conf/mongodb.conf
mongod -f /mongodb/38024/conf/mongodb.conf
mongod -f /mongodb/38025/conf/mongodb.conf
mongod -f /mongodb/38026/conf/mongodb.conf
ps -ef |grep mongod

[mongod@db01 ~]$ mongo --port 38021
use admin
config = {_id: 'sh1', members: [
{_id: 0, host: '10.0.0.51:38021'},
{_id: 1, host: '10.0.0.51:38022'},
{_id: 2, host: '10.0.0.51:38023',"arbiterOnly":true}]
}
==Output:================================================
{
"_id" : "sh1",
"members" : [
{
"_id" : 0,
"host" : "10.0.0.51:38021"
},
{
"_id" : 1,
"host" : "10.0.0.51:38022"
},
{
"_id" : 2,
"host" : "10.0.0.51:38023",
"arbiterOnly" : true
}
]
}
> rs.initiate(config)
{ "ok" : 1 }

mongo --port 38024
use admin
config = {_id: 'sh2', members: [
{_id: 0, host: '10.0.0.51:38024'},
{_id: 1, host: '10.0.0.51:38025'},
{_id: 2, host: '10.0.0.51:38026',"arbiterOnly":true}]
}
==Output:================================================
{
"_id" : "sh2",
"members" : [
{
"_id" : 0,
"host" : "10.0.0.51:38024"
},
{
"_id" : 1,
"host" : "10.0.0.51:38025"
},
{
"_id" : 2,
"host" : "10.0.0.51:38026",
"arbiterOnly" : true
}
]
}
> rs.initiate(config)
{ "ok" : 1 }

7.3 Config Server Configuration

7.3.1 Create Directories

mkdir -p /mongodb/38018/conf  /mongodb/38018/log  /mongodb/38018/data

mkdir -p /mongodb/38019/conf  /mongodb/38019/log  /mongodb/38019/data

mkdir -p /mongodb/38020/conf  /mongodb/38020/log  /mongodb/38020/data

7.3.2 Edit the Configuration Files:

cat > /mongodb/38018/conf/mongodb.conf <<EOF
systemLog:
   destination: file
   path: /mongodb/38018/log/mongodb.log
   logAppend: true
storage:
   journal:
      enabled: true
   dbPath: /mongodb/38018/data
   directoryPerDB: true
   #engine: wiredTiger
   wiredTiger:
      engineConfig:
         cacheSizeGB: 1
         directoryForIndexes: true
      collectionConfig:
         blockCompressor: zlib
      indexConfig:
         prefixCompression: true
net:
   bindIp: 10.0.0.51,127.0.0.1
   port: 38018
replication:
   oplogSizeMB: 2048
   replSetName: configReplSet
sharding:
   clusterRole: configsvr    # cluster role for config server nodes
processManagement:
   fork: true
EOF

\cp /mongodb/38018/conf/mongodb.conf /mongodb/38019/conf/
\cp /mongodb/38018/conf/mongodb.conf /mongodb/38020/conf/

sed 's#38018#38019#g' /mongodb/38019/conf/mongodb.conf -i
sed 's#38018#38020#g' /mongodb/38020/conf/mongodb.conf -i

7.3.3 Start the Nodes and Configure the Replica Set

mongod -f /mongodb/38018/conf/mongodb.conf
mongod -f /mongodb/38019/conf/mongodb.conf
mongod -f /mongodb/38020/conf/mongodb.conf

mongo --port 38018
use admin
config = {_id: 'configReplSet', members: [
{_id: 0, host: '10.0.0.51:38018'},
{_id: 1, host: '10.0.0.51:38019'},
{_id: 2, host: '10.0.0.51:38020'}]
}
==Output (a replica set with 1 primary and 2 secondaries):================================================
{
"_id" : "configReplSet",
"members" : [
{
"_id" : 0,
"host" : "10.0.0.51:38018"
},
{
"_id" : 1,
"host" : "10.0.0.51:38019"
},
{
"_id" : 2,
"host" : "10.0.0.51:38020"
}
]
}
> rs.initiate(config)
==Output:================================================
{
"ok" : 1,
"operationTime" : Timestamp(1578630303, 1),
"$gleStats" : {
"lastOpTime" : Timestamp(1578630303, 1),
"electionId" : ObjectId("000000000000000000000000")
},
"$clusterTime" : {
"clusterTime" : Timestamp(1578630303, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
}

Note: the config server can be a single node, but the official recommendation is a replica set. The config server cannot have an arbiter.
In newer versions, a replica set is required.
Note: since MongoDB 3.4 the config server must be a replica set, but it does not support an arbiter.

7.4 mongos Node Configuration:

7.4.1 Create Directories:

mkdir -p /mongodb/38017/conf  /mongodb/38017/log

7.4.2 Configuration File:

cat > /mongodb/38017/conf/mongos.conf <<EOF
systemLog:
   destination: file
   path: /mongodb/38017/log/mongos.log
   logAppend: true
net:
   bindIp: 10.0.0.51,127.0.0.1
   port: 38017
sharding:
   configDB: configReplSet/10.0.0.51:38018,10.0.0.51:38019,10.0.0.51:38020   # config server replica set
processManagement:
   fork: true
EOF

7.4.3 Start mongos

[mongod@db01 ~]$ mongos -f /mongodb/38017/conf/mongos.conf
about to fork child process, waiting until server is ready for connections.
forked process: 4272
child process started successfully, parent exiting

7.5 Add Shards to the Cluster

Connect to one of the mongos instances (10.0.0.51) and do the following:
(1) Connect to the mongos admin database
# su - mongod
[mongod@db01 ~]$ mongo 10.0.0.51:38017/admin

(2) Add the shards
mongos> db.runCommand( { addshard : "sh1/10.0.0.51:38021,10.0.0.51:38022,10.0.0.51:38023",name:"shard1"} )
==Output:================================================
{
"shardAdded" : "shard1",
"ok" : 1,
"operationTime" : Timestamp(1578630429, 7),
"$clusterTime" : {
"clusterTime" : Timestamp(1578630429, 7),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
}

mongos> db.runCommand( { addshard : "sh2/10.0.0.51:38024,10.0.0.51:38025,10.0.0.51:38026",name:"shard2"} )
==Output:================================================
{
"shardAdded" : "shard2",
"ok" : 1,
"operationTime" : Timestamp(1578630450, 6),
"$clusterTime" : {
"clusterTime" : Timestamp(1578630450, 6),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
}

(3) List the shards
mongos> db.runCommand( { listshards : 1 } )
==Output:================================================
{
"shards" : [
{
"_id" : "shard1",
"host" : "sh1/10.0.0.51:38021,10.0.0.51:38022",
"state" : 1
},
{
"_id" : "shard2",
"host" : "sh2/10.0.0.51:38024,10.0.0.51:38025",
"state" : 1
}
],
"ok" : 1,
"operationTime" : Timestamp(1578630472, 1),
"$clusterTime" : {
"clusterTime" : Timestamp(1578630472, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
}

(4) View the overall status
mongos> sh.status();
==Output:================================================
--- Sharding Status ---
sharding version: {
"_id" : 1,
"minCompatibleVersion" : 5,
"currentVersion" : 6,
"clusterId" : ObjectId("5e17fcac2da5fd62a2420873")
}
shards:
{ "_id" : "shard1", "host" : "sh1/10.0.0.51:38021,10.0.0.51:38022", "state" : 1 }
{ "_id" : "shard2", "host" : "sh2/10.0.0.51:38024,10.0.0.51:38025", "state" : 1 }
active mongoses:
"3.6.12" : 1
autosplit:
Currently enabled: yes
balancer:
Currently enabled: yes
Currently running: no
Failed balancer rounds in last 5 attempts: 0
Migration Results for the last 24 hours:
No recent migrations
databases:
{ "_id" : "config", "primary" : "config", "partitioned" : true }

7.6 Using the Sharded Cluster

7.6.1 RANGE Sharding Configuration and Testing

1. Enable Sharding on the Database

1. Log in with mongo
mongo --port 38017 admin
use admin
admin> db.runCommand( { enablesharding : "<database name>" } )
e.g.:
mongos> db.runCommand( { enablesharding : "test" } )
{
"ok" : 1,
"operationTime" : Timestamp(1578640496, 8),
"$clusterTime" : {
"clusterTime" : Timestamp(1578640496, 8),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
}

2. Shard the Collection on a Shard Key

1. Create an index on the shard key
use test
mongos> db.vast.ensureIndex( { id: 1 } )
{
"raw" : {
"sh2/10.0.0.51:38024,10.0.0.51:38025" : {
"createdCollectionAutomatically" : true,
"numIndexesBefore" : 1,
"numIndexesAfter" : 2,
"ok" : 1
}
},
"ok" : 1,
"operationTime" : Timestamp(1578640541, 3),
"$clusterTime" : {
"clusterTime" : Timestamp(1578640541, 3),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
}

2. Enable sharding on the collection
mongos> use admin
switched to db admin
mongos> db.runCommand( { shardcollection : "test.vast",key : {id: 1} } )
{
"collectionsharded" : "test.vast",
"collectionUUID" : UUID("fdd6b123-248f-4e89-a4fa-cdf476333053"),
"ok" : 1,
"operationTime" : Timestamp(1578640569, 10),
"$clusterTime" : {
"clusterTime" : Timestamp(1578640569, 10),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
}

3. Verify Collection Sharding

mongos> use test
switched to db test
mongos> for(i=1;i<1000000;i++){ db.vast.insert({"id":i,"name":"shenzheng","age":70,"date":new Date()}); }

test> db.vast.stats()

4. Check the Sharding Results

shard1:
mongo --port 38021
db.vast.count();

shard2:
mongo --port 38024
db.vast.count();

7.6.2 Hash Sharding in Practice:

Hash-shard the large vast collection in the cheng database.
Create a hashed index:
(1) Enable sharding for cheng
[mongod@db01 ~]$ mongo --port 38017 admin
use admin
Enable the sharding policy:
mongos> db.runCommand( { enablesharding : "cheng" } )
{
"ok" : 1,
"operationTime" : Timestamp(1578640812, 474),
"$clusterTime" : {
"clusterTime" : Timestamp(1578640812, 474),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
}

(2) Create a hashed index on the vast collection in the cheng database
use cheng
mongos> db.vast.ensureIndex( { id: "hashed" } )
{
"raw" : {
"sh1/10.0.0.51:38021,10.0.0.51:38022" : {
"createdCollectionAutomatically" : true,
"numIndexesBefore" : 1,
"numIndexesAfter" : 2,
"ok" : 1
}
},
"ok" : 1,
"operationTime" : Timestamp(1578640843, 454),
"$clusterTime" : {
"clusterTime" : Timestamp(1578640843, 486),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
}

(3) Enable sharding on the collection
use admin
mongos> sh.shardCollection( "cheng.vast", { id: "hashed" } )
{
"collectionsharded" : "cheng.vast",
"collectionUUID" : UUID("7f8e023e-fe67-40b0-8c2b-0bc08bfc7a6a"),
"ok" : 1,
"operationTime" : Timestamp(1578640865, 354),
"$clusterTime" : {
"clusterTime" : Timestamp(1578640865, 354),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
}
(4) Insert 100,000 rows of test data
use cheng
mongos> for(i=1;i<100000;i++){ db.vast.insert({"id":i,"name":"shenzheng","age":70,"date":new Date()}); }
WriteResult({ "nInserted" : 1 })
mongos> db.vast.count();
99999
(5) Check the hash sharding result
[mongod@db01 ~]$ mongo --port 38021
sh1:PRIMARY> use cheng
sh1:PRIMARY> db.vast.count();
[mongod@db01 ~]$ mongo --port 38024
sh2:PRIMARY> use cheng
sh2:PRIMARY> db.vast.count();
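Hashed sharding scatters even monotonically increasing ids across shards, because placement follows the hash of the key rather than the key itself. A rough illustration in Python, using an MD5-modulo placement purely as a stand-in (MongoDB actually uses its own 64-bit hash function over chunk ranges):

```python
import hashlib

def bucket(id_value, num_shards=2):
    # Illustrative only: hash the key and spread by modulo. MongoDB's
    # hashed sharding uses its own hash plus chunk ranges, not MD5.
    h = int(hashlib.md5(str(id_value).encode()).hexdigest(), 16)
    return h % num_shards

counts = [0, 0]
for i in range(1, 100000):
    counts[bucket(i)] += 1
print(counts)  # the two buckets end up roughly equal in size
```

This is why the two shards above each report roughly half of the 99999 documents, in contrast to range sharding where consecutive ids pile into one chunk.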

7.7 Querying and managing a sharded cluster

7.7.1 Check whether this is a sharded cluster

[mongod@db01 ~]$ mongo --port 38017 admin
admin> db.runCommand({ isdbgrid : 1})

7.7.2 List all shards

admin> db.runCommand({ listshards : 1})

7.7.3 List databases with sharding enabled

mongos> use config
config> db.databases.find( { "partitioned": true } )
Or:
config> db.databases.find()  // list the sharding status of all databases

7.7.4 View the shard key of each collection

mongos> use config
switched to db config
mongos> db.collections.find().pretty()
{
"_id" : "config.system.sessions",
"lastmodEpoch" : ObjectId("5e17fd7b2da5fd62a2420d1a"),
"lastmod" : ISODate("1970-02-19T17:02:47.296Z"),
"dropped" : false,
"key" : {
"_id" : 1
},
"unique" : false,
"uuid" : UUID("9f1ed165-b1e0-45f5-a0d4-b117624e1bf3")
}
{
"_id" : "test.vast",
"lastmodEpoch" : ObjectId("5e1824b92da5fd62a242eecb"),
"lastmod" : ISODate("1970-02-19T17:02:47.296Z"),
"dropped" : false,
"key" : {
"id" : 1
},
"unique" : false,
"uuid" : UUID("fdd6b123-248f-4e89-a4fa-cdf476333053")
}
{
"_id" : "cheng.vast",
"lastmodEpoch" : ObjectId("5e1825e12da5fd62a242f6f1"),
"lastmod" : ISODate("1970-02-19T17:02:47.297Z"),
"dropped" : false,
"key" : {
"id" : "hashed"
},
"unique" : false,
"uuid" : UUID("7f8e023e-fe67-40b0-8c2b-0bc08bfc7a6a")
}

7.7.5 View detailed sharding information

admin> sh.status()

7.7.6 Remove a shard node (use with caution)

(1) Check whether the balancer is running
sh.getBalancerState()
(2) Remove the shard2 node (use with caution)
mongos> db.runCommand( { removeShard: "shard2" } )
Note: the remove operation immediately triggers the balancer.

7.8 Balancer operations

7.8.1 Introduction

The balancer is an important mongos function: it automatically inspects the chunk distribution across all shard nodes and migrates chunks as needed.
When does it run?
1. It runs automatically, migrating chunks when the system is not busy.
2. When a shard is removed, it starts migrating immediately.
3. The balancer only runs within a preconfigured time window.
You can stop and start the balancer when needed (for example, during backups):
mongos> sh.stopBalancer()
mongos> sh.startBalancer()

7.8.2 Customize the balancing time window

Official documentation: https://docs.mongodb.com/manual/tutorial/manage-sharded-cluster-balancer/#schedule-the-balancing-window
// connect to mongos
[mongod@db01 ~]$ mongo --port 38017 admin
mongos> use config
switched to db config
mongos> sh.setBalancerState( true )
{
"ok" : 1,
"operationTime" : Timestamp(1578641744, 546),
"$clusterTime" : {
"clusterTime" : Timestamp(1578641744, 546),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
}
mongos> db.settings.update({ _id : "balancer" }, { $set : { activeWindow : { start : "3:00", stop : "5:00" } } }, true )
WriteResult({ "nMatched" : 1, "nUpserted" : 0, "nModified" : 1 })
View the window:
mongos> sh.getBalancerWindow()
{ "start" : "3:00", "stop" : "5:00" }
View the window via sh.status():
Balancer active window is set between 3:00 and 5:00 server local time
Per-collection balancing (good to know):
Disable balancing for one collection:
sh.disableBalancing("students.grades")
Enable balancing for one collection:
sh.enableBalancing("students.grades")
Check whether balancing is enabled or disabled for a collection:
db.getSiblingDB("config").collections.findOne({_id : "students.grades"}).noBalance;
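The activeWindow check itself is simple clock arithmetic. A small sketch of the logic, including windows that cross midnight (which the balancer window also allows):

```python
from datetime import time

def in_window(now, start="3:00", stop="5:00"):
    """Is `now` (a datetime.time) inside the balancer activeWindow?"""
    def parse(s):
        h, m = s.split(":")
        return time(int(h), int(m))
    start_t, stop_t = parse(start), parse(stop)
    if start_t <= stop_t:
        # normal window, e.g. 3:00-5:00
        return start_t <= now < stop_t
    # window crossing midnight, e.g. 23:00-6:00
    return now >= start_t or now < stop_t

print(in_window(time(4, 0)))    # True: 4:00 is inside 3:00-5:00
print(in_window(time(12, 0)))   # False
```

This mirrors why the window above is usually placed in the small hours: chunk migrations then only run while business traffic is low.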

8. Backup and recovery

8.1 Backup and recovery tools

(1)**   mongoexport/mongoimport
(2)***** mongodump/mongorestore (recommended)

8.2 Differences between the tools

Typical use cases:
mongoexport/mongoimport: the backup output is JSON or CSV
1. Cross-platform migration: mysql <---> mongodb
2. Same platform, across major versions: mongodb 2 ----> mongodb 3
mongodump/mongorestore:
Used for routine backup and recovery.

8.3 Export tool: mongoexport

mongoexport usage:
$ mongoexport --help
Options:
-h: database host IP
-u: database username
-p: database password
-d: database name
-c: collection name
-f: which fields to export
-o: output file name
-q: query filter for the export
--authenticationDatabase admin
1. Back up a single collection to JSON
[mongod@db01 ~]$ mongoexport -uroot -proot123 --port 27017 --authenticationDatabase admin -d cheng -c log -o /mongodb/log.json
2020-01-10T15:47:48.933+0800 connected to: localhost:27017
2020-01-10T15:47:49.827+0800 exported 9999 records
Note: the backup file name is up to you; the default output format is JSON.
2. Back up a single collection to CSV
To export CSV, use the --type=csv option:
[mongod@db01 ~]$ mongoexport -uroot -proot123 --port 27017 --authenticationDatabase admin -d cheng -c log --type=csv -f uid,name,age,date -o /mongodb/log.csv
2020-01-10T15:49:27.256+0800 connected to: localhost:27017
2020-01-10T15:49:27.331+0800 exported 9999 records
--type: output format; CSV is common
-f: which fields to export

8.4 Import tool: mongoimport

$ mongoimport --help
Options:
-h: database host IP
-u: database username
-p: database password
-d: database name
-c: collection name
-f: which fields to import
-j, --numInsertionWorkers=<number> number of insert operations to run concurrently (defaults to 1)
Restoring data:
1. Restore the JSON dump into the log1 collection
mongoimport -uroot -proot123 --port 27017 --authenticationDatabase admin -d cheng -c log1 /mongodb/log.json
2. Restore the CSV file into the log2 collection
The command above imports JSON; to import a CSV file, specify the format with the --type option:
Note:
(1) If the first line of the CSV file contains the column names:
[mongod@db01 ~]$ mongoimport -uroot -proot123 --port 27017 --authenticationDatabase admin -d cheng -c log2 --type=csv --headerline --file /mongodb/log.csv
(2) If the CSV file has no header line:
[mongod@db01 ~]$ mongoimport -uroot -proot123 --port 27017 --authenticationDatabase admin -d cheng -c log3 --type=csv -f id,name,age,date --file /mongodb/log.csv
-f: specify the column names
--headerline: the first line holds column names and is not imported.
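The difference between --headerline and -f can be illustrated with Python's csv module: either the first row of the file supplies the field names, or you supply them yourself for a headerless file. Both end up producing the same documents (field names here are made-up sample data):

```python
import csv, io

csv_text = "uid,name,age\n1,zhang,20\n2,wang,30\n"

# Like --headerline: the first row supplies the field names.
with_header = list(csv.DictReader(io.StringIO(csv_text)))

# Like -f uid,name,age on a file WITHOUT a header row:
# you supply the names yourself.
body_only = "1,zhang,20\n2,wang,30\n"
with_f = list(csv.DictReader(io.StringIO(body_only),
                             fieldnames=["uid", "name", "age"]))

print(with_header == with_f)  # True: both produce the same documents
```

This also shows why mixing the two up goes wrong: -f on a file that does have a header imports the header row as a bogus document, and --headerline on a headerless file silently eats the first data row.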

8.5 Cross-platform migration example

mysql   -----> mongodb
Export the city table from the world database and import it into MongoDB.
(1) Enable the MySQL secure file path
vim /etc/my.cnf ---> add the following setting:
secure-file-priv=/tmp
-- restart the database for it to take effect
/etc/init.d/mysqld restart
(2) Export the table data from MySQL
source /root/world.sql
select * from test.t100w into outfile '/tmp/city1.csv' fields terminated by ',';
(3) Prepare the backup file
vim /tmp/city.csv ----> add a first line with the column names:
ID,Name,CountryCode,District,Population
(4) Import the backup into MongoDB
[root@db01 ~]# su - mongod
Last login: Fri Jan 10 15:52:04 CST 2020 on pts/2
[mongod@db01 ~]$ mongoimport -uroot -proot123 --port 27017 --authenticationDatabase admin -d t100w -c city --type=csv -f ID,Name,CountryCode,District,Population --file /tmp/city1.csv
2020-01-10T16:31:00.365+0800 connected to: localhost:27017
2020-01-10T16:31:03.355+0800 [#######.................] t100w.city 11.8MB/39.5MB (30.0%)
2020-01-10T16:31:06.355+0800 [##############..........] t100w.city 23.7MB/39.5MB (60.1%)
2020-01-10T16:31:09.356+0800 [#####################...] t100w.city 36.1MB/39.5MB (91.4%)
2020-01-10T16:31:10.299+0800 [########################] t100w.city 39.5MB/39.5MB (100.0%)
2020-01-10T16:31:10.299+0800 imported 990001 documents
Using information_schema.columns to get the column list:
mysql> select group_concat(column_name) from information_schema.columns where table_schema='test' and table_name='t100w';
+---------------------------+
| group_concat(column_name) |
+---------------------------+
| id,num,k1,k2,dt |
+---------------------------+
1 row in set (0.11 sec)
View the imported documents:
> db.city.find()
-------------
The world database has 100 tables in total; to migrate them all to MongoDB:
select table_name ,group_concat(column_name) from information_schema.columns where table_schema='world' group by table_name;
select * from world.city into outfile '/tmp/world_city.csv' fields terminated by ',';
select concat("select * from ",table_schema,".",table_name ," into outfile '/tmp/",table_schema,"_",table_name,".csv' fields terminated by ',';")
from information_schema.tables where table_schema ='world';
For the import, combine information_schema.columns and information_schema.tables.
MySQL CSV export:
select * from test_info
into outfile '/tmp/test.csv'
fields terminated by ','     -- separate fields with commas
optionally enclosed by '"'   -- enclose fields in "
escaped by '"'               -- use " as the escape character
lines terminated by '\r\n';  -- end lines with \r\n
MySQL CSV import:
load data infile '/tmp/test.csv'
into table test_info
fields terminated by ','
optionally enclosed by '"'
escaped by '"'
lines terminated by '\r\n';
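The information_schema concat() trick above just generates one export statement per table. The same string templating can be sketched in Python (the table list here is hypothetical, standing in for a query against information_schema.tables):

```python
# Hypothetical table list standing in for a query against
# information_schema.tables where table_schema='world'.
tables = [("world", "city"), ("world", "country"), ("world", "countrylanguage")]

def outfile_sql(schema, table):
    # Mirrors the concat() trick from the text: one SELECT ... INTO
    # OUTFILE statement per table, CSV under /tmp.
    return ("select * from {0}.{1} into outfile "
            "'/tmp/{0}_{1}.csv' fields terminated by ',';").format(schema, table)

for s, t in tables:
    print(outfile_sql(s, t))
```

Pairing this with group_concat(column_name) from information_schema.columns gives you, per table, both the export statement and the -f field list for mongoimport.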

8.6 mongodump and mongorestore

8.6.1 Introduction

mongodump can back up MongoDB while it is running: it queries the running instance and writes every document it finds to disk.
The catch is that a mongodump backup is not necessarily a point-in-time snapshot of the database; if writes occur during the backup,
the dumped files may not exactly match the live data. The backup may also degrade performance for other clients.

8.6.2 mongodump usage

$ mongodump --help
Options:
-h: database host IP
-u: database username
-p: database password
-d: database name
-c: collection name
-o: output directory
-q: query filter for the dump
-j, --numParallelCollections= number of collections to dump in parallel (4 by default)
--oplog: also back up the oplog during the dump

8.6.3 Basic usage of mongodump and mongorestore

1. Back up all databases

mkdir /mongodb/backup
mongodump -uroot -proot123 --port 27017 --authenticationDatabase admin -o /mongodb/backup

2. Back up the world database

$ mongodump   -uroot -proot123 --port 27017 --authenticationDatabase admin -d t100w -o /mongodb/backup/

3. Back up the log collection in the cheng database

[mongod@db01 backup]$ mongodump   -uroot -proot123 --port 27017 --authenticationDatabase admin -d cheng -c log -o /mongodb/backup/

4. Compressed backup

Back up t100w with compression:
[mongod@db01 ~]$ mongodump -uroot -proot123 --port 27017 --authenticationDatabase admin -d t100w -o /mongodb/backup/ --gzip
Full backup with compression:
[mongod@db01 ~]$ mongodump -uroot -proot123 --port 27017 --authenticationDatabase admin -o /mongodb/backup/ --gzip

5. Restore into the test database

$ mongorestore   -uroot -proot123 --port 27017 --authenticationDatabase admin -d test  /mongodb/backup/world

6. Restore the t1 collection in the chengyinwu database

[mongod@db01 cheng]$ mongorestore   -uroot -proot123 --port 27017 --authenticationDatabase admin -d world -c t1  --gzip  /mongodb/backup.bak/cheng/log1.bson.gz

7. --drop: drop existing collections before restoring (dangerous)

$ mongorestore  -uroot -proot123 --port 27017 --authenticationDatabase admin -d cheng --drop  /mongodb/backup/cheng

8.7 Advanced enterprise use of mongodump and mongorestore (--oplog)

Note: this applies only to replica sets (or master/slave setups).
--oplog
use oplog for taking a point-in-time snapshot
Take the backup based on a point-in-time snapshot.

8.7.1 About the oplog

In a replica set the oplog is a capped collection. Its default size is 5% of disk space (configurable with --oplogSizeMB).

It lives in the local database as oplog.rs; it is worth looking at what it contains.
It records all changes (insert/update/delete) made to the mongod instance's databases over a period of time.
When the space is used up, new records automatically overwrite the oldest ones.
The range it covers is called the oplog time window. Because the oplog is a capped collection,
the window it can cover varies with how many updates happen per unit of time.
To estimate the current oplog time window, with the nodes running:
mongod -f /mongodb/28017/conf/mongod.conf
mongod -f /mongodb/28018/conf/mongod.conf
mongod -f /mongodb/28019/conf/mongod.conf
mongod -f /mongodb/28020/conf/mongod.conf
use local
db.oplog.rs.find().pretty()
"ts" : Timestamp(1551597844, 1), # timestamp; uniquely identifies the entry (like a GTID)
"op" : "n"                       # operation type:
                                 # "i": insert
                                 # "u": update
                                 # "d": delete
                                 # "c": db cmd
mongo --port 28017
test:PRIMARY> rs.printReplicationInfo()
configured oplog size: 1561.5615234375MB <-- collection size
log length start to end: 423849secs (117.74hrs) <-- estimated window coverage
oplog first event time: Wed Sep 09 2015 17:39:50 GMT+0800 (CST)
oplog last event time: Mon Sep 14 2015 15:23:59 GMT+0800 (CST)
now: Mon Sep 14 2015 16:37:30 GMT+0800 (CST)
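The numbers in this output are consistent with each other: the estimated window is just the span between the first and last oplog events. Recomputing it in Python:

```python
from datetime import datetime

# First and last oplog event times from the rs.printReplicationInfo()
# output above (local time; the timezone cancels out in the difference).
first = datetime(2015, 9, 9, 17, 39, 50)
last = datetime(2015, 9, 14, 15, 23, 59)

secs = int((last - first).total_seconds())
print(secs, round(secs / 3600, 2))  # 423849 secs = 117.74 hrs
```

If your full-backup schedule is longer than this window, point-in-time recovery with the oplog becomes impossible, so monitoring this value matters.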

8.7.2 Enterprise use of the oplog

(1) Take a hot backup with the --oplog option
Note: to demonstrate the effect, we insert data while the backup runs.
(2) Prepare test data
[mongod@db01 conf]$ mongo --port 28018
my_repl:SECONDARY> use cheng
switched to db cheng
my_repl:SECONDARY> for(var i = 1 ;i < 100; i++) {
... db.foo.insert({a:i});
... }
WriteResult({ "writeError" : { "code" : 10107, "errmsg" : "not master" } })
my_repl:SECONDARY> db.oplog.rs.find({"op":"i"}).pretty()
Error: error: {
"operationTime" : Timestamp(1578648315, 1),
"ok" : 0,
"errmsg" : "not master and slaveOk=false",
"code" : 13435,
"codeName" : "NotMasterNoSlaveOk",
"$clusterTime" : {
"clusterTime" : Timestamp(1578648315, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
}
Hot backup: mongodump with --oplog
mongodump --port 28018 --oplog -o /mongodb/backup
--oplog records the data changes made while the backup runs and saves them as oplog.bson.
Restore:
mongorestore --port 28018 --oplogReplay /mongodb/backup

8.8 Advanced oplog use

Background: a full backup runs every day at 00:00, and the oplog recovery window is 48 hours.
One day at 10:00 the test.t1 business collection is accidentally dropped.
Recovery plan:
0. Stop the application.
1. Find a test server.
2. Restore last night's full backup.
3. Extract the oplog from the time of the full backup up to just before the accidental drop, and replay it on the test server.
4. Export the dropped collection and restore it to the production database.
Recovery steps (simulating the failure):
1. Full backup
Create the original data:
mongo --port 28017
use test
for(var i = 1 ;i < 20; i++) {
db.ci.insert({a: i});
}
Full backup:
rm -rf /mongodb/backup/*
mongodump --port 28018 --oplog -o /mongodb/backup
--oplog also backs up the log generated while the backup is running;
the file is stored under /mongodb/backup and is automatically named oplog.bson.
Insert more data to simulate writes during the backup:
db.ci1.insert({id:1})
db.ci2.insert({id:2})
2. At 10:00, the ci collection is accidentally dropped:
db.ci.drop()
show tables;
3. Back up the current oplog.rs collection
mongodump --port 28018 -d local -c oplog.rs -o /mongodb/backup
[mongod@db01 ~]$ cd /mongodb/backup/local/
[mongod@db01 local]$ cp oplog.rs.bson ../oplog.bson
[mongod@db01 backup]$ rm -rf local/
4. Find the drop point in the oplog and restore up to just before it
A better approach: log in to the source database
[mongod@db01 local]$ mongo --port 28018
my_repl:PRIMARY> use local
db.oplog.rs.find({op:"c"}).pretty(); # look at the last entry
{
"ts" : Timestamp(1578649035, 1),
"t" : NumberLong(1),
"h" : NumberLong("1366252837263648141"),
"v" : 2,
"op" : "c",
"ns" : "test.$cmd",
"ui" : UUID("e65f4d27-02fd-48ac-aaad-3310deca9302"),
"wall" : ISODate("2020-01-10T09:37:15.385Z"),
"o" : {
"drop" : "ci"
}
}
The oplog position of the accidental drop:
"ts" : Timestamp(1578649035, 1)
5. Restore the full backup and replay the oplog up to that point
mongorestore --port 38021 --oplogReplay --oplogLimit "1578649035:1" --drop /mongodb/backup/
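The value passed to --oplogLimit is the oplog ts field rendered as "&lt;seconds&gt;:&lt;increment&gt;"; replay stops just before that entry. A small Python helper (purely illustrative) showing the conversion, and turning the seconds into a readable UTC time to sanity-check against the entry's wall field:

```python
from datetime import datetime, timezone

# The oplog "ts" value found above: (seconds-since-epoch, increment).
ts = (1578649035, 1)

oplog_limit = "{}:{}".format(*ts)
when = datetime.fromtimestamp(ts[0], tz=timezone.utc)

print(oplog_limit)                          # 1578649035:1
print(when.strftime("%Y-%m-%dT%H:%M:%SZ"))  # 2020-01-10T09:37:15Z
```

The computed UTC time matches the "wall" : ISODate("2020-01-10T09:37:15.385Z") field of the drop entry, confirming the cut point is the right one.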

8.9 Backup strategy for a sharded cluster

1. What do you need to back up?
The config server and each shard node, backed up separately.
2. What are the difficulties?
(1) Chunk migration
Schedule backups manually to avoid the migration time window.
(2) The shard nodes are not at the same point in time.
Back up when business traffic is low; Ops Manager (the enterprise tool) handles this properly.
