MongoDB backup and restore explained
Quick reference
MongoDB export and import
1: Import and export can target either a local MongoDB server or a remote one.
Both tools therefore share the following common options:
-h host       host
--port port   port
-u username   user name
-p passwd     password
2: mongoexport exports data as JSON files
Question: which database, which collection, which fields, and which rows should be exported?
-d   database name
-c   collection name
-f field1,field2...   field names (comma-separated)
-q   query condition
-o   output file name
--csv   export in CSV format (handy for exchanging data with traditional databases; deprecated in newer tools in favor of --type=csv)
Example:
[root@localhost mongodb]# ./bin/mongoexport -d test -c news -o test.json
connected to: 127.0.0.1
exported 3 records
[root@localhost mongodb]# ls
bin dump GNU-AGPL-3.0 README test.json THIRD-PARTY-NOTICES
[root@localhost mongodb]# more test.json
{ "_id" : { "$oid" : "51fc59c9fecc28d8316cfc03" }, "title" : "aaaa" }
{ "_id" : { "$oid" : "51fcaa3c5eed52c903a91837" }, "title" : "today is sataday" }
{ "_id" : { "$oid" : "51fcaa445eed52c903a91838" }, "title" : "ok now" }
Example 2: export only the goods_id and goods_name fields
./bin/mongoexport -d test -c goods -f goods_id,goods_name -o goods.json
Example 3: export only the rows matching a price condition (here shop_price below 200)
./bin/mongoexport -d test -c goods -f goods_id,goods_name,shop_price -q '{shop_price:{$lt:200}}' -o goods.json
Note: the _id field is always exported.
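All of the examples above talk to the local server; combined with the common options listed earlier, the same export can be run against a remote, authenticated instance. A minimal sketch (the host db01 and the backup user are hypothetical; --authenticationDatabase names the database where that user is defined):
./bin/mongoexport -h db01 --port 27017 -u backup -p 'secret' --authenticationDatabase admin -d test -c goods -o goods.json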
mongoimport: importing data
-d   target database
-c   target collection (created automatically if it does not exist)
--type   csv or json (json is the default)
--file   path to the backup file
Example 1: import JSON
./bin/mongoimport -d test -c goods --file ./goodsall.json
Example 2: import CSV
./bin/mongoimport -d test -c goods --type csv -f goods_id,goods_name --file ./goodsall.csv
./bin/mongoimport -d test -c goods --type csv --headerline --file ./goodsall.csv   (--headerline takes the field names from the first line of the CSV; in this tool version it cannot be combined with -f)
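mongoexport writes to standard output when -o is omitted, and mongoimport reads standard input when --file is omitted, so a collection can be copied through a pipe. A hedged sketch, assuming that behavior holds in your tool version (the target database test2 is made up for illustration):
./bin/mongoexport -d test -c goods | ./bin/mongoimport -d test2 -c goods --drop
Here --drop empties the target collection first, so repeated runs do not append duplicate documents.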
mongodump exports data as binary BSON together with the index information
-d   database name
-c   collection name
(mongodump dumps whole documents; unlike mongoexport there is no -f field selection)
Example:
mongodump -d test [-c collection]   # by default the output goes into a dump directory under the current working directory
Rules:
1: The exported files are placed in a directory named after the database.
2: Each collection produces two files: a BSON data file and a JSON file holding the index metadata.
3: If no collection is specified, every collection in the database is exported.
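mongodump also accepts an output directory and, when a collection is named, a query filter, which is handy for small, dated partial backups. A hedged sketch (the /backup path and the sn threshold are made up for illustration):
mongodump -d test -c stu -q '{sn:{$lte:100}}' -o /backup/$(date +%F)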
mongorestore: restores the binary dump
Example:
./bin/mongorestore -d test --directoryperdb dump/test/   (the directory produced by mongodump; note that 3.x tools reject this flag and use --dir instead, as shown in the walkthrough below)
A binary backup captures the indexes as well as the data,
and the backup files are comparatively small.
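By default mongorestore only inserts documents and leaves existing ones in place; for a clean restore each target collection can be dropped first. A minimal sketch, assuming your version supports the --drop and --dir flags shown in the walkthrough below:
./bin/mongorestore -d test --drop --dir dump/test/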
Detailed walkthrough
Checking the environment. The instance is currently running with --auth; below it is stopped with kill -2 (SIGINT, a clean shutdown) and restarted without --auth, so the rest of the walkthrough can connect without credentials.
[mongod@mcw01 ~]$ ps -ef|grep mongo
root 16595 16566 0 10:57 pts/0 00:00:00 su - mongod
mongod 16596 16595 0 10:57 pts/0 00:00:00 -bash
mongod 16683 1 1 12:06 ? 00:01:20 mongod --dbpath=/mongodb/data --logpath=/mongodb/log/mongodb.log --port=27017 --logappend --fork --auth
mongod 16758 16596 0 13:27 pts/0 00:00:00 ps -ef
mongod 16759 16596 0 13:27 pts/0 00:00:00 grep --color=auto mongo
[mongod@mcw01 ~]$ kill -2 16683
[mongod@mcw01 ~]$ ps -ef|grep mongo
root 16595 16566 0 10:57 pts/0 00:00:00 su - mongod
mongod 16596 16595 0 10:57 pts/0 00:00:00 -bash
mongod 16761 16596 0 13:27 pts/0 00:00:00 ps -ef
mongod 16762 16596 0 13:27 pts/0 00:00:00 grep --color=auto mongo
[mongod@mcw01 ~]$ mongod --dbpath=/mongodb/data --logpath=/mongodb/log/mongodb.log --port=27017 --logappend --fork
about to fork child process, waiting until server is ready for connections.
forked process: 16765
child process started successfully, parent exiting
[mongod@mcw01 ~]$ ps -ef|grep mongo
root 16595 16566 0 10:57 pts/0 00:00:00 su - mongod
mongod 16596 16595 0 10:57 pts/0 00:00:00 -bash
mongod 16765 1 11 13:28 ? 00:00:01 mongod --dbpath=/mongodb/data --logpath=/mongodb/log/mongodb.log --port=27017 --logappend --fork
mongod 16782 16596 0 13:28 pts/0 00:00:00 ps -ef
mongod 16783 16596 0 13:28 pts/0 00:00:00 grep --color=auto mongo
[mongod@mcw01 ~]$ mongo
MongoDB shell version: 3.2.8
connecting to: test
Server has startup warnings:
2022-03-04T13:28:03.832+0800 I CONTROL [initandlisten]
2022-03-04T13:28:03.832+0800 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2022-03-04T13:28:03.832+0800 I CONTROL [initandlisten] ** We suggest setting it to 'never'
2022-03-04T13:28:03.832+0800 I CONTROL [initandlisten]
2022-03-04T13:28:03.832+0800 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2022-03-04T13:28:03.832+0800 I CONTROL [initandlisten] ** We suggest setting it to 'never'
2022-03-04T13:28:03.832+0800 I CONTROL [initandlisten]
2022-03-04T13:28:03.832+0800 I CONTROL [initandlisten] ** WARNING: soft rlimits too low. rlimits set to 4096 processes, 65535 files. Number of processes should be at least 32767.5 : 0.5 times number of files.
2022-03-04T13:28:03.832+0800 I CONTROL [initandlisten]
> use test;
switched to db test
> show tables;
bar
foo
shop
stu
tea
> db.stu.find();
{ "_id" : ObjectId("6220ff62ae6f9ca7b52cc079"), "sn" : 1, "name" : "student1" }
{ "_id" : ObjectId("6220ff62ae6f9ca7b52cc07a"), "sn" : 2, "name" : "student2" }
{ "_id" : ObjectId("6220ff62ae6f9ca7b52cc07b"), "sn" : 3, "name" : "student3" }
{ "_id" : ObjectId("6220ff62ae6f9ca7b52cc07c"), "sn" : 4, "name" : "student4" }
{ "_id" : ObjectId("6220ff62ae6f9ca7b52cc07d"), "sn" : 5, "name" : "student5" }
{ "_id" : ObjectId("6220ff62ae6f9ca7b52cc07e"), "sn" : 6, "name" : "student6" }
{ "_id" : ObjectId("6220ff62ae6f9ca7b52cc07f"), "sn" : 7, "name" : "student7" }
{ "_id" : ObjectId("6220ff62ae6f9ca7b52cc080"), "sn" : 8, "name" : "student8" }
{ "_id" : ObjectId("6220ff62ae6f9ca7b52cc081"), "sn" : 9, "name" : "student9" }
{ "_id" : ObjectId("6220ff62ae6f9ca7b52cc082"), "sn" : 10, "name" : "student10" }
{ "_id" : ObjectId("6220ff62ae6f9ca7b52cc083"), "sn" : 11, "name" : "student11" }
{ "_id" : ObjectId("6220ff62ae6f9ca7b52cc084"), "sn" : 12, "name" : "student12" }
{ "_id" : ObjectId("6220ff62ae6f9ca7b52cc085"), "sn" : 13, "name" : "student13" }
{ "_id" : ObjectId("6220ff62ae6f9ca7b52cc086"), "sn" : 14, "name" : "student14" }
{ "_id" : ObjectId("6220ff62ae6f9ca7b52cc087"), "sn" : 15, "name" : "student15" }
{ "_id" : ObjectId("6220ff62ae6f9ca7b52cc088"), "sn" : 16, "name" : "student16" }
{ "_id" : ObjectId("6220ff62ae6f9ca7b52cc089"), "sn" : 17, "name" : "student17" }
{ "_id" : ObjectId("6220ff62ae6f9ca7b52cc08a"), "sn" : 18, "name" : "student18" }
{ "_id" : ObjectId("6220ff62ae6f9ca7b52cc08b"), "sn" : 19, "name" : "student19" }
{ "_id" : ObjectId("6220ff62ae6f9ca7b52cc08c"), "sn" : 20, "name" : "student20" }
Type "it" for more
> db.stu.find().count();
1000
>
Exporting to JSON
[mongod@mcw01 ~]$ ls /mongodb/bin/
bsondump mongod mongoexport mongoimport mongoperf mongos mongotop
mongo mongodump mongofiles mongooplog mongorestore mongostat
[mongod@mcw01 ~]$ # Export command: database test, collection stu, fields sn and name, a query condition (same syntax as in db.stu.find(<condition>)), and write the output to test.stu.json
[mongod@mcw01 ~]$ # whatever the query condition matches is exactly what gets exported
[mongod@mcw01 ~]$ mongoexport -d test -c stu -f sn,name -q '{sn:{$lte:1000}}' -o ./test.stu.json
2022-03-04T13:41:01.667+0800 connected to: localhost
2022-03-04T13:41:01.943+0800 exported 1000 records    # all 1000 matching documents were exported
[mongod@mcw01 ~]$ ls
test.stu.json
[mongod@mcw01 ~]$ tail test.stu.json    # the output is line-delimited JSON: one JSON document per line
{"_id":{"$oid":"6220ff64ae6f9ca7b52cc457"},"sn":991.0,"name":"student991"}
{"_id":{"$oid":"6220ff64ae6f9ca7b52cc458"},"sn":992.0,"name":"student992"}
{"_id":{"$oid":"6220ff64ae6f9ca7b52cc459"},"sn":993.0,"name":"student993"}
{"_id":{"$oid":"6220ff64ae6f9ca7b52cc45a"},"sn":994.0,"name":"student994"}
{"_id":{"$oid":"6220ff64ae6f9ca7b52cc45b"},"sn":995.0,"name":"student995"}
{"_id":{"$oid":"6220ff64ae6f9ca7b52cc45c"},"sn":996.0,"name":"student996"}
{"_id":{"$oid":"6220ff64ae6f9ca7b52cc45d"},"sn":997.0,"name":"student997"}
{"_id":{"$oid":"6220ff64ae6f9ca7b52cc45e"},"sn":998.0,"name":"student998"}
{"_id":{"$oid":"6220ff64ae6f9ca7b52cc45f"},"sn":999.0,"name":"student999"}
{"_id":{"$oid":"6220ff64ae6f9ca7b52cc460"},"sn":1000.0,"name":"student1000"}
[mongod@mcw01 ~]$
If the data needs to go into MySQL, export it as CSV instead; in CSV mode only the fields listed with -f are written, so _id is not exported by default.
[mongod@mcw01 ~]$ mongoexport -d test -c stu -f sn,name -q '{sn:{$lte:1000}}' --csv -o ./test.stu.csv
2022-03-04T13:59:47.086+0800 csv flag is deprecated; please use --type=csv instead
2022-03-04T13:59:47.088+0800 connected to: localhost
2022-03-04T13:59:47.252+0800 exported 1000 records
[mongod@mcw01 ~]$ ls
test.stu.csv test.stu.json
[mongod@mcw01 ~]$ tail test.stu.csv
991,student991
992,student992
993,student993
994,student994
995,student995
996,student996
997,student997
998,student998
999,student999
1000,student1000
[mongod@mcw01 ~]$
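Once the CSV exists it can be loaded into MySQL. A hedged sketch, assuming a MySQL table test.stu(sn INT, name VARCHAR(50)) already exists and local-infile loading is allowed; IGNORE 1 LINES skips the sn,name header that mongoexport writes:
mysql --local-infile=1 -u root -p -e "LOAD DATA LOCAL INFILE './test.stu.csv' INTO TABLE test.stu FIELDS TERMINATED BY ',' IGNORE 1 LINES (sn, name);"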
Importing JSON data
mongoimport -d test -c animal --type json --file ./test.stu.json
[mongod@mcw01 ~]$ ls
test.stu.csv test.stu.json
[mongod@mcw01 ~]$ mongo
MongoDB shell version: 3.2.8
connecting to: test
.......
> use test;
switched to db test
> show tables; # confirm there is no animal collection yet
bar
foo
shop
stu
tea
>
bye
[mongod@mcw01 ~]$ ls
test.stu.csv test.stu.json
[mongod@mcw01 ~]$ # Import into database test, collection animal (which does not exist yet); the data type is json and --file points at the JSON file to load
[mongod@mcw01 ~]$
[mongod@mcw01 ~]$ mongoimport -d test -c animal --type json --file ./test.stu.json
2022-03-04T14:09:02.530+0800 connected to: localhost
2022-03-04T14:09:02.914+0800 imported 1000 documents
[mongod@mcw01 ~]$ mongo
MongoDB shell version: 3.2.8
connecting to: test
......
> use test;
switched to db test
> show tables;
animal
bar
foo
shop
stu
tea
> db.animal.find(); # the import succeeded
{ "_id" : ObjectId("6220ff62ae6f9ca7b52cc079"), "sn" : 1, "name" : "student1" }
{ "_id" : ObjectId("6220ff62ae6f9ca7b52cc07a"), "sn" : 2, "name" : "student2" }
{ "_id" : ObjectId("6220ff62ae6f9ca7b52cc07b"), "sn" : 3, "name" : "student3" }
{ "_id" : ObjectId("6220ff62ae6f9ca7b52cc07c"), "sn" : 4, "name" : "student4" }
{ "_id" : ObjectId("6220ff62ae6f9ca7b52cc07d"), "sn" : 5, "name" : "student5" }
{ "_id" : ObjectId("6220ff62ae6f9ca7b52cc07e"), "sn" : 6, "name" : "student6" }
{ "_id" : ObjectId("6220ff62ae6f9ca7b52cc07f"), "sn" : 7, "name" : "student7" }
{ "_id" : ObjectId("6220ff62ae6f9ca7b52cc080"), "sn" : 8, "name" : "student8" }
{ "_id" : ObjectId("6220ff62ae6f9ca7b52cc081"), "sn" : 9, "name" : "student9" }
{ "_id" : ObjectId("6220ff62ae6f9ca7b52cc082"), "sn" : 10, "name" : "student10" }
{ "_id" : ObjectId("6220ff62ae6f9ca7b52cc083"), "sn" : 11, "name" : "student11" }
{ "_id" : ObjectId("6220ff62ae6f9ca7b52cc084"), "sn" : 12, "name" : "student12" }
{ "_id" : ObjectId("6220ff62ae6f9ca7b52cc085"), "sn" : 13, "name" : "student13" }
{ "_id" : ObjectId("6220ff62ae6f9ca7b52cc086"), "sn" : 14, "name" : "student14" }
{ "_id" : ObjectId("6220ff62ae6f9ca7b52cc087"), "sn" : 15, "name" : "student15" }
{ "_id" : ObjectId("6220ff62ae6f9ca7b52cc088"), "sn" : 16, "name" : "student16" }
{ "_id" : ObjectId("6220ff62ae6f9ca7b52cc089"), "sn" : 17, "name" : "student17" }
{ "_id" : ObjectId("6220ff62ae6f9ca7b52cc08a"), "sn" : 18, "name" : "student18" }
{ "_id" : ObjectId("6220ff62ae6f9ca7b52cc08b"), "sn" : 19, "name" : "student19" }
{ "_id" : ObjectId("6220ff62ae6f9ca7b52cc08c"), "sn" : 20, "name" : "student20" }
Type "it" for more
> db.animal.find().count(); # the document count matches
1000
>
Importing CSV data
A CSV export is row/column oriented, and the first line carries the field names.
[mongod@mcw01 ~]$ mongoexport -d test -c stu -f sn,name -q '{sn:{$lte:10}}' --csv -o ./test.stu.csv   # export just ten rows
2022-03-04T14:17:42.807+0800 csv flag is deprecated; please use --type=csv instead
2022-03-04T14:17:42.814+0800 connected to: localhost
2022-03-04T14:17:42.818+0800 exported 10 records
[mongod@mcw01 ~]$ ls
test.stu.csv test.stu.json
[mongod@mcw01 ~]$ more test.stu.csv    # inspect the ten rows
sn,name
1,student1
2,student2
3,student3
4,student4
5,student5
6,student6
7,student7
8,student8
9,student9
10,student10
[mongod@mcw01 ~]$
[mongod@mcw01 ~]$ mongoimport -d test -c bird --type csv --file ./test.stu.csv    # with --type csv but no field list, the import fails: fields or a header line must be specified
2022-03-04T14:23:28.638+0800 error validating settings: must specify --fields, --fieldFile or --headerline to import this file type
2022-03-04T14:23:28.638+0800 try 'mongoimport --help' for more information
[mongod@mcw01 ~]$
[mongod@mcw01 ~]$ mongoimport -d test -c bird --type csv -f sn,name --file ./test.stu.csv    # -f supplies the field names, but the imported count is wrong: 11 documents
2022-03-04T14:24:56.680+0800 connected to: localhost
2022-03-04T14:24:56.738+0800 imported 11 documents
[mongod@mcw01 ~]$
[mongod@mcw01 ~]$ mongo    # inspecting the collection shows the CSV header line was imported as a document, which we do not want
.......
> use test
switched to db test
> db.bird.find();
{ "_id" : ObjectId("6221b0b8310ae65086fee5e7"), "sn" : "sn", "name" : "name" }
{ "_id" : ObjectId("6221b0b8310ae65086fee5e8"), "sn" : 1, "name" : "student1" }
{ "_id" : ObjectId("6221b0b8310ae65086fee5e9"), "sn" : 2, "name" : "student2" }
{ "_id" : ObjectId("6221b0b8310ae65086fee5ea"), "sn" : 3, "name" : "student3" }
{ "_id" : ObjectId("6221b0b8310ae65086fee5eb"), "sn" : 4, "name" : "student4" }
{ "_id" : ObjectId("6221b0b8310ae65086fee5ec"), "sn" : 5, "name" : "student5" }
{ "_id" : ObjectId("6221b0b8310ae65086fee5ed"), "sn" : 6, "name" : "student6" }
{ "_id" : ObjectId("6221b0b8310ae65086fee5ee"), "sn" : 7, "name" : "student7" }
{ "_id" : ObjectId("6221b0b8310ae65086fee5ef"), "sn" : 8, "name" : "student8" }
{ "_id" : ObjectId("6221b0b8310ae65086fee5f0"), "sn" : 9, "name" : "student9" }
{ "_id" : ObjectId("6221b0b8310ae65086fee5f1"), "sn" : 10, "name" : "student10" }
>
> db.bird.drop() # drop the collection and import again
true
> db.bird.find();
>
[mongod@mcw01 ~]$ mongoimport -d test -c bird --type csv -f sn,name --headerline --file ./test.stu.csv
2022-03-04T14:30:52.953+0800 error validating settings: incompatible options: --fields and --headerline
2022-03-04T14:30:52.953+0800 try 'mongoimport --help' for more information
[mongod@mcw01 ~]$ mongoimport --help    # the behavior differs between versions: with --headerline you must not pass -f, because the first line itself supplies the field names
--headerline use first line in input source as the field list (CSV
and TSV only)
[mongod@mcw01 ~]$ mongoimport -d test -c bird --type csv --headerline --file ./test.stu.csv    # add --headerline instead of -f
2022-03-04T14:31:36.099+0800 connected to: localhost
2022-03-04T14:31:36.130+0800 imported 10 documents
[mongod@mcw01 ~]$ mongo
.........
> use test
switched to db test
> db.bird.find()
{ "_id" : ObjectId("6221b248310ae65086fee5f2"), "sn" : 1, "name" : "student1" }
{ "_id" : ObjectId("6221b248310ae65086fee5f3"), "sn" : 2, "name" : "student2" }
{ "_id" : ObjectId("6221b248310ae65086fee5f4"), "sn" : 3, "name" : "student3" }
{ "_id" : ObjectId("6221b248310ae65086fee5f5"), "sn" : 4, "name" : "student4" }
{ "_id" : ObjectId("6221b248310ae65086fee5f6"), "sn" : 5, "name" : "student5" }
{ "_id" : ObjectId("6221b248310ae65086fee5f7"), "sn" : 6, "name" : "student6" }
{ "_id" : ObjectId("6221b248310ae65086fee5f8"), "sn" : 7, "name" : "student7" }
{ "_id" : ObjectId("6221b248310ae65086fee5f9"), "sn" : 8, "name" : "student8" }
{ "_id" : ObjectId("6221b248310ae65086fee5fa"), "sn" : 9, "name" : "student9" }
{ "_id" : ObjectId("6221b248310ae65086fee5fb"), "sn" : 10, "name" : "student10" }
>
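Re-running the same import appends the documents a second time. A minimal sketch that drops the collection before loading, so repeated runs stay idempotent (assuming --drop is available in your mongoimport version):
mongoimport -d test -c bird --type csv --headerline --drop --file ./test.stu.csv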
Binary export (mongodump)
mongodump exports BSON data. It is faster and better suited to MongoDB backup and restore, but the output is not human-readable and is not meant for loading into other databases. It also exports the index definitions.
[mongod@mcw01 ~]$ ls
test.stu.csv test.stu.json
[mongod@mcw01 ~]$ mongodump -d test -c tea
2022-03-04T15:07:49.018+0800 writing test.tea to
2022-03-04T15:07:49.023+0800 done dumping test.tea (4 documents)
[mongod@mcw01 ~]$ ls
dump test.stu.csv test.stu.json
[mongod@mcw01 ~]$ ls dump/    # with no output path given, a dump directory is created under the current directory and the backup files go inside it
test
[mongod@mcw01 ~]$ ls dump/test/
tea.bson tea.metadata.json
[mongod@mcw01 ~]$
[mongod@mcw01 ~]$ tail dump/test/tea.metadata.json    # the index information is exported as well
{"options":{},"indexes":[{"v":1,"key":{"_id":1},"name":"_id_","ns":"test.tea"},{"v":1,"key":{"email":"hashed"},"name":"email_hashed","ns":"test.tea"}]}[mongod@mcw01 ~]$
[mongod@mcw01 ~]$ tail -2 dump/test/tea.bson    # binary data, not human-readable
b@163.com+_idb!¢˲K«½Ȃemail
c@163.com_idb!¦˲K«½[mongod@mcw01 ~]$
When no collection is specified, every collection in the current database is exported:
[mongod@mcw01 ~]$ ls
dump test.stu.csv test.stu.json
[mongod@mcw01 ~]$ rm -rf dump/
[mongod@mcw01 ~]$ mongodump -d test
2022-03-04T15:14:37.145+0800 writing test.bar to
2022-03-04T15:14:37.146+0800 writing test.stu to
2022-03-04T15:14:37.166+0800 writing test.animal to
2022-03-04T15:14:37.178+0800 writing test.bird to
2022-03-04T15:14:37.561+0800 done dumping test.bird (10 documents)
2022-03-04T15:14:37.562+0800 writing test.tea to
2022-03-04T15:14:37.621+0800 done dumping test.tea (4 documents)
2022-03-04T15:14:37.621+0800 writing test.foo to
2022-03-04T15:14:37.666+0800 done dumping test.foo (2 documents)
2022-03-04T15:14:37.667+0800 writing test.shop to
2022-03-04T15:14:37.674+0800 done dumping test.stu (1000 documents)
2022-03-04T15:14:37.682+0800 done dumping test.animal (1000 documents)
2022-03-04T15:14:37.701+0800 done dumping test.shop (2 documents)
2022-03-04T15:14:37.781+0800 done dumping test.bar (10000 documents)
[mongod@mcw01 ~]$ ls
dump test.stu.csv test.stu.json
[mongod@mcw01 ~]$ ls dump/
test
[mongod@mcw01 ~]$ ls dump/test/    # each collection has two files
animal.bson bar.metadata.json foo.bson shop.metadata.json tea.bson
animal.metadata.json bird.bson foo.metadata.json stu.bson tea.metadata.json
bar.bson bird.metadata.json shop.bson stu.metadata.json
[mongod@mcw01 ~]$
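The .bson data files are binary; to inspect one without restoring it, the bsondump tool seen in the bin listing earlier converts BSON to JSON. A minimal sketch (the output shape is indicative only: one JSON document per line, much like mongoexport):
bsondump dump/test/stu.bson | head -3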
Restoring an entire database (binary, mongorestore)
> use test
switched to db test
> db.dropDatabase() # drop the test database
{ "dropped" : "test", "ok" : 1 }
> show dbs;
admin 0.000GB
local 0.000GB
shop 0.000GB
>
bye
[mongod@mcw01 ~]$ # The --directoryperdb approach from the summary section does not work with this tool version:
[mongod@mcw01 ~]$ ls
dump test.stu.csv test.stu.json
[mongod@mcw01 ~]$ ls dump/
test
[mongod@mcw01 ~]$ ls dump/test/
animal.bson bar.metadata.json foo.bson shop.metadata.json tea.bson
animal.metadata.json bird.bson foo.metadata.json stu.bson tea.metadata.json
bar.bson bird.metadata.json shop.bson stu.metadata.json
[mongod@mcw01 ~]$ mongorestore -d test --directoryperdb dump/test
2022-03-04T15:19:45.238+0800 error parsing command line options: --dbpath and related flags are not supported in 3.0 tools.
See http://dochub.mongodb.org/core/tools-dbpath-deprecated for more information
2022-03-04T15:19:45.239+0800 try 'mongorestore --help' for more information
[mongod@mcw01 ~]$ # check what is going on
[mongod@mcw01 ~]$ mongorestore --help    # the option has changed in this version
      --dir=<directory-name>   input directory, use '-' for stdin
The correct way to restore:
[mongod@mcw01 ~]$ ls
dump test.stu.csv test.stu.json
[mongod@mcw01 ~]$ ls dump/
test
[mongod@mcw01 ~]$ ls dump/test/    # the backup files are all here
animal.bson bar.metadata.json foo.bson shop.metadata.json tea.bson
animal.metadata.json bird.bson foo.metadata.json stu.bson tea.metadata.json
bar.bson bird.metadata.json shop.bson stu.metadata.json
[mongod@mcw01 ~]$ mongorestore -d test --dir dump/test    # point --dir at the backup directory and restore
2022-03-04T15:22:10.488+0800 building a list of collections to restore from dump/test dir
2022-03-04T15:22:10.530+0800 reading metadata for test.bar from dump/test/bar.metadata.json
2022-03-04T15:22:10.582+0800 restoring test.bar from dump/test/bar.bson
2022-03-04T15:22:10.663+0800 reading metadata for test.animal from dump/test/animal.metadata.json
2022-03-04T15:22:10.663+0800 reading metadata for test.stu from dump/test/stu.metadata.json
2022-03-04T15:22:10.664+0800 reading metadata for test.bird from dump/test/bird.metadata.json
2022-03-04T15:22:10.774+0800 restoring test.animal from dump/test/animal.bson
2022-03-04T15:22:10.828+0800 restoring test.stu from dump/test/stu.bson
2022-03-04T15:22:10.949+0800 restoring test.bird from dump/test/bird.bson
2022-03-04T15:22:10.958+0800 restoring indexes for collection test.bird from metadata
2022-03-04T15:22:10.964+0800 finished restoring test.bird (10 documents)
2022-03-04T15:22:10.964+0800 reading metadata for test.shop from dump/test/shop.metadata.json
2022-03-04T15:22:11.044+0800 restoring indexes for collection test.stu from metadata
2022-03-04T15:22:11.045+0800 restoring test.shop from dump/test/shop.bson
2022-03-04T15:22:11.048+0800 restoring indexes for collection test.shop from metadata
2022-03-04T15:22:11.076+0800 finished restoring test.shop (2 documents)
2022-03-04T15:22:11.076+0800 reading metadata for test.tea from dump/test/tea.metadata.json
2022-03-04T15:22:11.114+0800 restoring indexes for collection test.animal from metadata
2022-03-04T15:22:11.123+0800 finished restoring test.stu (1000 documents)
2022-03-04T15:22:11.123+0800 reading metadata for test.foo from dump/test/foo.metadata.json
2022-03-04T15:22:11.165+0800 finished restoring test.animal (1000 documents)
2022-03-04T15:22:11.165+0800 restoring test.tea from dump/test/tea.bson
2022-03-04T15:22:11.196+0800 restoring indexes for collection test.tea from metadata
2022-03-04T15:22:11.196+0800 restoring test.foo from dump/test/foo.bson
2022-03-04T15:22:11.248+0800 restoring indexes for collection test.foo from metadata
2022-03-04T15:22:11.249+0800 finished restoring test.tea (4 documents)
2022-03-04T15:22:11.253+0800 finished restoring test.foo (2 documents)
2022-03-04T15:22:11.805+0800 restoring indexes for collection test.bar from metadata
2022-03-04T15:22:11.807+0800 finished restoring test.bar (10000 documents)
2022-03-04T15:22:11.807+0800 done
[mongod@mcw01 ~]$
[mongod@mcw01 ~]$ mongo    # verify the restored data
......
> show dbs;
admin 0.000GB
local 0.000GB
shop 0.000GB
test 0.001GB
> use test;
switched to db test
> show tables;
animal
bar
bird
foo
shop
stu
tea
> db.tea.find();
{ "_id" : ObjectId("622180a312caf24babbda9c8"), "email" : "a@163.com" }
{ "_id" : ObjectId("622180a912caf24babbda9c9"), "email" : "b@163.com" }
{ "_id" : ObjectId("622184a212caf24babbda9ca"), "email" : "c@163.com" }
{ "_id" : ObjectId("622185a612caf24babbda9cc") }
>
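A whole database is not the only unit of restore: a single collection can be restored from its .bson file. A minimal sketch, assuming this mongorestore version accepts a BSON file path together with -d and -c; the matching stu.metadata.json in the same directory should supply the index definitions:
mongorestore -d test -c stu --drop dump/test/stu.bson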