Reference: http://blog.51cto.com/kaliarch/2047358

1. Overview

1.1 Background

In a MongoDB replica set every node holds a full copy of the data, so under high concurrency and large data volumes the nodes come under heavy pressure. To relieve this, and to keep the cluster scalable as the data volume keeps growing, MongoDB introduces the sharding mechanism for handling massive data sets.

1.2 Sharding concept

Sharding is the process of splitting a database and spreading the pieces across multiple machines, so that more data can be stored and a heavier load handled without needing a single very powerful server. A collection is cut into chunks, the chunks are spread across several shards, and each shard carries only part of the total data. A routing process, mongos, knows which data lives on which shard and routes every operation accordingly.

1.3 Core components

Sharding relies on four components: mongos, config server, shard, and replica set.

mongos: the entry point for all requests to the cluster. Every request goes through mongos, so the application does not need to do any routing itself. mongos is a request dispatcher that forwards each external request to the appropriate shard server. Because it is the single entry point, mongos is normally deployed redundantly (HA) to avoid a single point of failure.

config server: stores all of the cluster metadata (shard and routing information). mongos has no persistent storage of its own; it caches this metadata in memory and loads it from the config servers when it first starts or is restarted. When the metadata changes, the config servers notify every mongos to refresh its state so requests are always routed correctly. In production, multiple config servers are deployed so the metadata cannot be lost to a single-node failure.

shard: with huge data volumes, a single server holding, say, 1 TB is under heavy pressure in terms of disk, network I/O, CPU, and memory. Splitting that 1 TB across several machines reduces each machine's share to a manageable size. Once the sharding rules are in place, operating the database through mongos automatically forwards each request to the right shard server.

replica set: each shard is a data node in the overall cluster; if a single machine went down, part of the cluster's data would be missing, which must not happen. Each shard is therefore deployed as a replica set to guarantee data reliability; in production this is typically two data-bearing members plus one arbiter.
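For orientation, here is a hedged sketch (not from the original post) of how these pieces fit together once the cluster built in section 2 is running: applications connect only to mongos, never to a shard directly, and the metadata held by the config servers can be inspected through it.

mongo 172.17.0.2:30000               # connect to a mongos, never to a shard directly
mongos> use config
mongos> db.shards.find()             # shard list kept on the config servers
mongos> db.chunks.find().limit(1)    # which chunk ranges live on which shard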

1.4 Architecture diagram

2. Installation and deployment

2.1 Environment

To save servers, multiple instances run on each machine: three mongos, three config servers, and on every server one mongod per shard. The shard roles are rotated across the servers so that the data ends up evenly distributed, and within each shard a replica set provides high availability. Hosts and ports are as follows:

Hostname | IP address  | config server | mongos     | shard
docker-1 | 172.17.0.2  | port 20000    | port 30000 | primary: 27017, secondary: 27018, arbiter: 27019
docker-2 | 172.17.0.3  | port 20000    | port 30000 | arbiter: 27017, primary: 27018, secondary: 27019
docker-3 | 172.17.0.4  | port 20000    | port 30000 | secondary: 27017, arbiter: 27018, primary: 27019

2.2 Installation and deployment

2.2.1 Software download and installation

wget -c https://fastdl.mongodb.org/linux/mongodb-linux-x86_64-3.6.3.tgz    # -c: resume an interrupted download

tar -zxvf mongodb-linux-x86_64-3.6.3.tgz

ln -sv mongodb-linux-x86_64-3.6.3 /usr/local/mongodb

-s: create a symbolic (soft) link

-v: verbose, show what is being done

echo "PATH=\$PATH:/usr/local/mongodb/bin" > /etc/profile.d/mongodb.sh

source /etc/profile.d/mongodb.sh
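A quick sanity check (not part of the original steps) that the binaries are now on the PATH:

mongod --version
mongos --version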

2.2.2 Create directories

Create the data and log directories on docker-1, docker-2, and docker-3, as shown below (an equivalent loop sketch follows the full list of commands):

mkdir -p /root/application/program/mongodb/data/server1/{logs,conf,data,socket}

mkdir -p /root/application/program/mongodb/data/server2/{logs,conf,data,socket}

mkdir -p /root/application/program/mongodb/data/server3/{logs,conf,data,socket}

mkdir -p /root/application/program/mongodb/data/server1/data/mongod-27017

mkdir -p /root/application/program/mongodb/data/server1/data/mongod-27018

mkdir -p /root/application/program/mongodb/data/server1/data/mongod-27019

mkdir -p /root/application/program/mongodb/data/server1/data/mongosvr-20000

mkdir -p /root/application/program/mongodb/data/server2/data/mongod-27017

mkdir -p /root/application/program/mongodb/data/server2/data/mongod-27018

mkdir -p /root/application/program/mongodb/data/server2/data/mongod-27019

mkdir -p /root/application/program/mongodb/data/server2/data/mongosvr-20000

mkdir -p /root/application/program/mongodb/data/server3/data/mongod-27017

mkdir -p /root/application/program/mongodb/data/server3/data/mongod-27018

mkdir -p /root/application/program/mongodb/data/server3/data/mongod-27019

mkdir -p /root/application/program/mongodb/data/server3/data/mongosvr-20000
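The same directories can be created with a short loop; this is an equivalent shorthand, not from the original post:

for srv in server1 server2 server3; do
  base=/root/application/program/mongodb/data/$srv
  mkdir -p $base/{logs,conf,data,socket}
  mkdir -p $base/data/{mongod-27017,mongod-27018,mongod-27019,mongosvr-20000}
done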

2.2.3 Start the Docker containers

cd /root/application/program/mongodb

docker run -d -v `pwd`/data/server1:/mongodb -p 27017:27017 docker.io/mongodb:3.6.3 /usr/sbin/init

docker run -d -v `pwd`/data/server2:/mongodb -p 27018:27017 docker.io/mongodb:3.6.3 /usr/sbin/init

docker run -d -v `pwd`/data/server3:/mongodb -p 27019:27017 docker.io/mongodb:3.6.3 /usr/sbin/init
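To confirm the containers are up and to get a shell inside one, something like the following works (the container ID is a placeholder; take it from docker ps):

docker ps                                    # note the container IDs
docker exec -it <container-id> /bin/bash     # enter a container to run the steps below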

2.2.4 Configure the config server replica set

MongoDB 3.4 and later require the config servers to be deployed as a replica set as well; here the replica set is named configdb.

Create the config server configuration file on all three servers, then start the service:

cat >> /mongodb/mongosvr-20000.conf <<ENDF
systemLog:
  destination: file
  ### log file location (config server log)
  path: /mongodb/logs/mongosvr-20000.log
  logAppend: true
storage:
  ## journal settings
  journal:
    enabled: true
  ## data file location
  dbPath: /mongodb/data/mongosvr-20000
  ## one directory per database
  directoryPerDB: true
  ## storage engine
  engine: wiredTiger
  ## WiredTiger engine settings
  wiredTiger:
    engineConfig:
      ## maximum WiredTiger cache size (tune to the server's RAM)
      cacheSizeGB: 10
      ## also store indexes in a per-database directory
      directoryForIndexes: true
    ## collection compression
    collectionConfig:
      blockCompressor: zlib
    ## index compression
    indexConfig:
      prefixCompression: true
processManagement:
  fork: true  # fork and run in background
  pidFilePath: /mongodb/socket/mongodsvr-20000.pid
## network settings
net:
  port: 20000
  bindIp: 172.17.0.2    # change to this server's own IP
sharding:
  clusterRole: configsvr
ENDF

mongod -f mongosvr-20000.conf -replSet configdb

[root@a35e154acb47 mongodb]# mongod -f mongosvr-20000.conf -replSet configdb

about to fork child process, waiting until server is ready for connections.

forked process: 204

child process started successfully, parent exiting

[root@a35e154acb47 mongodb]# ps -ef | grep mongo

root        204      0 22 06:57 ?        00:00:00 mongod -f mongosvr-20000.conf -replSet configdb

root        240    170  0 06:57 ?        00:00:00 grep --color=auto mongo

[root@a35e154acb47 mongodb]#
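Optionally (not in the original post), confirm each config server answers before initializing the replica set:

mongo 172.17.0.2:20000 --eval "printjson(db.adminCommand({ping:1}))"
mongo 172.17.0.3:20000 --eval "printjson(db.adminCommand({ping:1}))"
mongo 172.17.0.4:20000 --eval "printjson(db.adminCommand({ping:1}))"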

Log in to any one of the servers and initialize the config server replica set:

config = {_id:"configdb",members:[
{_id:0,host:"172.17.0.2:20000"},
{_id:1,host:"172.17.0.3:20000"},
{_id:2,host:"172.17.0.4:20000"},]
}

rs.initiate(config)

[root@a35e154acb47 mongodb]# mongo 172.17.0.2:20000

> use admin

switched to db admin

> config = {_id:"configdb",members:[

... {_id:0,host:"172.17.0.2:20000"},

... {_id:1,host:"172.17.0.3:20000"},

... {_id:2,host:"172.17.0.4:20000"},]

... }

{

"_id" : "configdb",

"members" : [

{

"_id" : 0,

"host" : "172.17.0.2:20000"

},

{

"_id" : 1,

"host" : "172.17.0.3:20000"

},

{

"_id" : 2,

"host" : "172.17.0.4:20000"

}

]

}

> rs.initiate(config)

{

"ok" : 1,

"operationTime" : Timestamp(1534231136, 1),

"$gleStats" : {

"lastOpTime" : Timestamp(1534231136, 1),

"electionId" : ObjectId("000000000000000000000000")

},

"$clusterTime" : {

"clusterTime" : Timestamp(1534231136, 1),

"signature" : {

"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),

"keyId" : NumberLong(0)

}

}

}

Check the replica set status:

configdb:SECONDARY> rs.status()

{

"set" : "configdb",

"date" : ISODate("2018-08-14T07:19:04.515Z"),

"myState" : 2,

"term" : NumberLong(0),

"configsvr" : true,

"heartbeatIntervalMillis" : NumberLong(2000),

"optimes" : {

"lastCommittedOpTime" : {

"ts" : Timestamp(0, 0),

"t" : NumberLong(-1)

},

"appliedOpTime" : {

"ts" : Timestamp(1534231136, 1),

"t" : NumberLong(-1)

},

"durableOpTime" : {

"ts" : Timestamp(1534231136, 1),

"t" : NumberLong(-1)

}

},

"members" : [

{

"_id" : 0,

"name" : "172.17.0.2:20000",

"health" : 1,

"state" : 2,

"stateStr" : "PRIMARY",

"uptime" : 1303,

"optime" : {

"ts" : Timestamp(1534231136, 1),

"t" : NumberLong(-1)

},

"optimeDate" : ISODate("2018-08-14T07:18:56Z"),

"infoMessage" : "could not find member to sync from",

"configVersion" : 1,

"self" : true

},

{

"_id" : 1,

"name" : "172.17.0.3:20000",

"health" : 1,

"state" : 2,

"stateStr" : "SECONDARY",

"uptime" : 8,

"optime" : {

"ts" : Timestamp(1534231136, 1),

"t" : NumberLong(-1)

},

"optimeDurable" : {

"ts" : Timestamp(1534231136, 1),

"t" : NumberLong(-1)

},

"optimeDate" : ISODate("2018-08-14T07:18:56Z"),

"optimeDurableDate" : ISODate("2018-08-14T07:18:56Z"),

"lastHeartbeat" : ISODate("2018-08-14T07:19:01.250Z"),

"lastHeartbeatRecv" : ISODate("2018-08-14T07:19:01.983Z"),

"pingMs" : NumberLong(0),

"configVersion" : 1

},

{

"_id" : 2,

"name" : "172.17.0.4:20000",

"health" : 1,

"state" : 2,

"stateStr" : "SECONDARY",

"uptime" : 8,

"optime" : {

"ts" : Timestamp(1534231136, 1),

"t" : NumberLong(-1)

},

"optimeDurable" : {

"ts" : Timestamp(1534231136, 1),

"t" : NumberLong(-1)

},

"optimeDate" : ISODate("2018-08-14T07:18:56Z"),

"optimeDurableDate" : ISODate("2018-08-14T07:18:56Z"),

"lastHeartbeat" : ISODate("2018-08-14T07:19:01.251Z"),

"lastHeartbeatRecv" : ISODate("2018-08-14T07:19:02.006Z"),

"pingMs" : NumberLong(1),

"configVersion" : 1

}

],

"ok" : 1,

"operationTime" : Timestamp(1534231136, 1),

"$gleStats" : {

"lastOpTime" : Timestamp(1534231136, 1),

"electionId" : ObjectId("000000000000000000000000")

},

"$clusterTime" : {

"clusterTime" : Timestamp(1534231136, 1),

"signature" : {

"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),

"keyId" : NumberLong(0)

}

}

}

The config server replica set is now configured: docker-1 is the primary and docker-2/docker-3 are secondaries.
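The current primary can also be confirmed from any member (a hedged extra check, not in the original post):

mongo 172.17.0.3:20000 --eval "printjson(rs.isMaster().primary)"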

2.2.5 Configure the shard replica sets

Configure the shards on all three servers.

shard1 configuration file

cat >> /mongodb/mongod-27017.conf <<ENDF
systemLog:
  destination: file
  ### log file location
  path: /mongodb/logs/mongod-27017.log
  logAppend: true
storage:
  ## journal settings
  journal:
    enabled: true
  ## data file location
  dbPath: /mongodb/data/mongod-27017
  ## one directory per database
  directoryPerDB: true
  ## storage engine
  engine: wiredTiger
  ## WiredTiger engine settings
  wiredTiger:
    engineConfig:
      ## maximum WiredTiger cache size (tune to the server's RAM)
      cacheSizeGB: 10
      ## also store indexes in a per-database directory
      directoryForIndexes: true
    ## collection compression
    collectionConfig:
      blockCompressor: zlib
    ## index compression
    indexConfig:
      prefixCompression: true
processManagement:
  fork: true  # fork and run in background
  pidFilePath: /mongodb/socket/mongod-27017.pid
## network settings
net:
  port: 27017
  bindIp: 172.17.0.2    # change to this server's own IP
ENDF

### -shardsvr starts mongod in sharding mode (as a shard server)

mongod -f mongod-27017.conf -replSet shard1 -shardsvr

[root@a35e154acb47 mongodb]# mongod -f mongod-27017.conf -replSet shard1 -shardsvr

about to fork child process, waiting until server is ready for connections.

forked process: 1020

child process started successfully, parent exiting

[root@a35e154acb47 mongodb]# ps -ef | grep mongo

root        204      0  1 06:57 ?        00:01:27 mongod -f mongosvr-20000.conf -replSet configdb

root       1020      0 15 08:47 ?        00:00:01 mongod -f mongod-27017.conf -replSet shard1 -shardsvr

root       1110    170  0 08:47 ?        00:00:00 grep --color=auto mongo

The service is now running and shard1 is listening on port 27017. Next, log in on docker-1 and initialize the shard1 replica set:

[root@a35e154acb47 mongodb]# mongo 172.17.0.2:27017

MongoDB shell version v3.6.3

connecting to: mongodb://172.17.0.2:27017/test

MongoDB server version: 3.6.3

Server has startup warnings:

2018-08-14T07:29:04.783+0000 I CONTROL  [initandlisten]

2018-08-14T07:29:04.783+0000 I CONTROL  [initandlisten] ** WARNING: Access control is not enabled for the database.

2018-08-14T07:29:04.783+0000 I CONTROL  [initandlisten] **          Read and write access to data and configuration is unrestricted.

2018-08-14T07:29:04.783+0000 I CONTROL  [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.

2018-08-14T07:29:04.783+0000 I CONTROL  [initandlisten]

2018-08-14T07:29:04.784+0000 I CONTROL  [initandlisten]

2018-08-14T07:29:04.784+0000 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.

2018-08-14T07:29:04.784+0000 I CONTROL  [initandlisten] **        We suggest setting it to 'never'

2018-08-14T07:29:04.784+0000 I CONTROL  [initandlisten]

2018-08-14T07:29:04.784+0000 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.

2018-08-14T07:29:04.784+0000 I CONTROL  [initandlisten] **        We suggest setting it to 'never'

2018-08-14T07:29:04.784+0000 I CONTROL  [initandlisten]

>

>

> use admin

switched to db admin

> config = {_id:"shard1",members:[

... {_id:0,host:"172.17.0.2:27017"},

... {_id:1,host:"172.17.0.3:27017",arbiterOnly:true},

... {_id:2,host:"172.17.0.4:27017"},]

... }

{

"_id" : "shard1",

"members" : [

{

"_id" : 0,

"host" : "172.17.0.2:27017"

},

{

"_id" : 1,

"host" : "172.17.0.3:27017",

"arbiterOnly" : true

},

{

"_id" : 2,

"host" : "172.17.0.4:27017"

}

]

}

> rs.initiate(config);

{

"ok" : 1,

"operationTime" : Timestamp(1534232200, 1),

"$clusterTime" : {

"clusterTime" : Timestamp(1534232200, 1),

"signature" : {

"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),

"keyId" : NumberLong(0)

}

}

}

Check the replica set status (only part of the output is shown):

shard1:SECONDARY> rs.status()

{

"set" : "shard1",

"date" : ISODate("2018-08-14T07:36:53.095Z"),

"myState" : 1,

"term" : NumberLong(1),

"heartbeatIntervalMillis" : NumberLong(2000),

"optimes" : {

"lastCommittedOpTime" : {

"ts" : Timestamp(1534232212, 5),

"t" : NumberLong(1)

},

"readConcernMajorityOpTime" : {

"ts" : Timestamp(1534232212, 5),

"t" : NumberLong(1)

},

"appliedOpTime" : {

"ts" : Timestamp(1534232212, 5),

"t" : NumberLong(1)

},

"durableOpTime" : {

"ts" : Timestamp(1534232212, 5),

"t" : NumberLong(1)

}

},

"members" : [

{

"_id" : 0,

"name" : "172.17.0.2:27017",

"health" : 1,

"state" : 1,

"stateStr" : "PRIMARY",

"uptime" : 69,

"optime" : {

"ts" : Timestamp(1534232212, 5),

"t" : NumberLong(1)

},

"optimeDate" : ISODate("2018-08-14T07:36:52Z"),

"infoMessage" : "could not find member to sync from",

"electionTime" : Timestamp(1534232211, 1),

"electionDate" : ISODate("2018-08-14T07:36:51Z"),

"configVersion" : 1,

"self" : true

},

{

"_id" : 1,

"name" : "172.17.0.3:27017",

"health" : 1,

"state" : 7,

"stateStr" : "ARBITER",

"uptime" : 12,

"lastHeartbeat" : ISODate("2018-08-14T07:36:53.062Z"),

"lastHeartbeatRecv" : ISODate("2018-08-14T07:36:52.233Z"),

"pingMs" : NumberLong(0),

"configVersion" : 1

},

{

"_id" : 2,

"name" : "172.17.0.4:27017",

"health" : 1,

"state" : 2,

"stateStr" : "SECONDARY",

"uptime" : 12,

"optime" : {

"ts" : Timestamp(1534232212, 5),

"t" : NumberLong(1)

},

"optimeDurable" : {

"ts" : Timestamp(1534232212, 5),

"t" : NumberLong(1)

},

"optimeDate" : ISODate("2018-08-14T07:36:52Z"),

"optimeDurableDate" : ISODate("2018-08-14T07:36:52Z"),

"lastHeartbeat" : ISODate("2018-08-14T07:36:53.062Z"),

"lastHeartbeatRecv" : ISODate("2018-08-14T07:36:52.345Z"),

"pingMs" : NumberLong(0),

"syncingTo" : "172.17.0.2:27017",

"configVersion" : 1

}

],

"ok" : 1,

"operationTime" : Timestamp(1534232212, 5),

"$clusterTime" : {

"clusterTime" : Timestamp(1534232212, 5),

"signature" : {

"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),

"keyId" : NumberLong(0)

}

}

}

The shard1 replica set is now configured: docker-1 is the primary, docker-2 the arbiter, and docker-3 a secondary.

Repeat the same steps for shard2 and shard3.

Note: initialize the shard2 replica set on docker-2, and the shard3 replica set on docker-3.

shard2 configuration file (shard2 members listen on port 27018, per the table in 2.1)

cat >> /mongodb/mongod-27018.conf <<ENDF
systemLog:
  destination: file
  ### log file location
  path: /mongodb/logs/mongod-27018.log
  logAppend: true
storage:
  ## journal settings
  journal:
    enabled: true
  ## data file location
  dbPath: /mongodb/data/mongod-27018
  ## one directory per database
  directoryPerDB: true
  ## storage engine
  engine: wiredTiger
  ## WiredTiger engine settings
  wiredTiger:
    engineConfig:
      ## maximum WiredTiger cache size (tune to the server's RAM)
      cacheSizeGB: 10
      ## also store indexes in a per-database directory
      directoryForIndexes: true
    ## collection compression
    collectionConfig:
      blockCompressor: zlib
    ## index compression
    indexConfig:
      prefixCompression: true
processManagement:
  fork: true  # fork and run in background
  pidFilePath: /mongodb/socket/mongod-27018.pid
## network settings
net:
  port: 27018
  bindIp: 172.17.0.3    # change to this server's own IP
ENDF

mongod -f mongod-27018.conf -replSet shard2 -shardsvr

shard3 configuration file (shard3 members listen on port 27019, per the table in 2.1)

cat >> /mongodb/mongod-27019.conf <<ENDF
systemLog:
  destination: file
  ### log file location
  path: /mongodb/logs/mongod-27019.log
  logAppend: true
storage:
  ## journal settings
  journal:
    enabled: true
  ## data file location
  dbPath: /mongodb/data/mongod-27019
  ## one directory per database
  directoryPerDB: true
  ## storage engine
  engine: wiredTiger
  ## WiredTiger engine settings
  wiredTiger:
    engineConfig:
      ## maximum WiredTiger cache size (tune to the server's RAM)
      cacheSizeGB: 10
      ## also store indexes in a per-database directory
      directoryForIndexes: true
    ## collection compression
    collectionConfig:
      blockCompressor: zlib
    ## index compression
    indexConfig:
      prefixCompression: true
processManagement:
  fork: true  # fork and run in background
  pidFilePath: /mongodb/socket/mongod-27019.pid
## network settings
net:
  port: 27019
  bindIp: 172.17.0.3    # change to this server's own IP
ENDF

mongod -f mongod-27019.conf -replSet shard3 -shardsvr
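Per the host/port table in section 2.1, every host ultimately runs one mongod per shard. A hedged summary of what ends up started on each of the three hosts (with each config file's bindIp set to that host's own IP):

mongod -f mongod-27017.conf -replSet shard1 -shardsvr
mongod -f mongod-27018.conf -replSet shard2 -shardsvr
mongod -f mongod-27019.conf -replSet shard3 -shardsvr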

Initialize the shard2 replica set on docker-2:

[root@a35e154acb47 mongodb]# mongo 172.17.0.2:27018

MongoDB shell version v3.6.3

connecting to: mongodb://172.17.0.2:27018/test

MongoDB server version: 3.6.3

Server has startup warnings:

2018-08-14T07:40:03.757+0000 I CONTROL  [initandlisten]

2018-08-14T07:40:03.757+0000 I CONTROL  [initandlisten] ** WARNING: Access control is not enabled for the database.

2018-08-14T07:40:03.757+0000 I CONTROL  [initandlisten] **          Read and write access to data and configuration is unrestricted.

2018-08-14T07:40:03.757+0000 I CONTROL  [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.

2018-08-14T07:40:03.757+0000 I CONTROL  [initandlisten]

2018-08-14T07:40:03.757+0000 I CONTROL  [initandlisten]

2018-08-14T07:40:03.757+0000 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.

2018-08-14T07:40:03.757+0000 I CONTROL  [initandlisten] **        We suggest setting it to 'never'

2018-08-14T07:40:03.757+0000 I CONTROL  [initandlisten]

2018-08-14T07:40:03.757+0000 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.

2018-08-14T07:40:03.757+0000 I CONTROL  [initandlisten] **        We suggest setting it to 'never'

2018-08-14T07:40:03.757+0000 I CONTROL  [initandlisten]

> use admin

switched to db admin

> config = {_id:"shard2",members:[

... {_id:0,host:"172.17.0.2:27018"},

... {_id:1,host:"172.17.0.3:27018"},

... {_id:2,host:"172.17.0.4:27018",arbiterOnly:true},]

... }

{

"_id" : "shard2",

"members" : [

{

"_id" : 0,

"host" : "172.17.0.2:27018"

},

{

"_id" : 1,

"host" : "172.17.0.3:27018"

},

{

"_id" : 2,

"host" : "172.17.0.4:27018",

"arbiterOnly" : true

}

]

}

> rs.initiate(config);

{

"ok" : 1,

"operationTime" : Timestamp(1534232500, 1),

"$clusterTime" : {

"clusterTime" : Timestamp(1534232500, 1),

"signature" : {

"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),

"keyId" : NumberLong(0)

}

}

}

shard2:SECONDARY>

Check the shard2 replica set status:

shard2:PRIMARY> rs.status()

{

"set" : "shard2",

"date" : ISODate("2018-08-14T07:44:32.972Z"),

"myState" : 1,

"term" : NumberLong(1),

"heartbeatIntervalMillis" : NumberLong(2000),

"optimes" : {

"lastCommittedOpTime" : {

"ts" : Timestamp(1534232663, 1),

"t" : NumberLong(1)

},

"readConcernMajorityOpTime" : {

"ts" : Timestamp(1534232663, 1),

"t" : NumberLong(1)

},

"appliedOpTime" : {

"ts" : Timestamp(1534232663, 1),

"t" : NumberLong(1)

},

"durableOpTime" : {

"ts" : Timestamp(1534232663, 1),

"t" : NumberLong(1)

}

},

"members" : [

{

"_id" : 0,

"name" : "172.17.0.2:27018",

"health" : 1,

"state" : 1,

"stateStr" : "PRIMARY",

"uptime" : 269,

"optime" : {

"ts" : Timestamp(1534232663, 1),

"t" : NumberLong(1)

},

"optimeDate" : ISODate("2018-08-14T07:44:23Z"),

"electionTime" : Timestamp(1534232512, 1),

"electionDate" : ISODate("2018-08-14T07:41:52Z"),

"configVersion" : 1,

"self" : true

},

{

"_id" : 1,

"name" : "172.17.0.3:27018",

"health" : 1,

"state" : 2,

"stateStr" : "SECONDARY",

"uptime" : 172,

"optime" : {

"ts" : Timestamp(1534232663, 1),

"t" : NumberLong(1)

},

"optimeDurable" : {

"ts" : Timestamp(1534232663, 1),

"t" : NumberLong(1)

},

"optimeDate" : ISODate("2018-08-14T07:44:23Z"),

"optimeDurableDate" : ISODate("2018-08-14T07:44:23Z"),

"lastHeartbeat" : ISODate("2018-08-14T07:44:32.246Z"),

"lastHeartbeatRecv" : ISODate("2018-08-14T07:44:30.986Z"),

"pingMs" : NumberLong(0),

"syncingTo" : "172.17.0.2:27018",

"configVersion" : 1

},

{

"_id" : 2,

"name" : "172.17.0.4:27018",

"health" : 1,

"state" : 7,

"stateStr" : "ARBITER",

"uptime" : 172,

"lastHeartbeat" : ISODate("2018-08-14T07:44:32.244Z"),

"lastHeartbeatRecv" : ISODate("2018-08-14T07:44:32.794Z"),

"pingMs" : NumberLong(0),

"configVersion" : 1

}

],

"ok" : 1,

"operationTime" : Timestamp(1534232663, 1),

"$clusterTime" : {

"clusterTime" : Timestamp(1534232663, 1),

"signature" : {

"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),

"keyId" : NumberLong(0)

}

}

}

Log in to docker-3 and initialize the shard3 replica set:

[root@a35e154acb47 mongodb]# mongo 172.17.0.3:27019

MongoDB shell version v3.6.3

connecting to: mongodb://172.17.0.3:27019/test

MongoDB server version: 3.6.3

Server has startup warnings:

2018-08-14T08:00:06.479+0000 I CONTROL  [initandlisten]

2018-08-14T08:00:06.479+0000 I CONTROL  [initandlisten] ** WARNING: Access control is not enabled for the database.

2018-08-14T08:00:06.479+0000 I CONTROL  [initandlisten] **          Read and write access to data and configuration is unrestricted.

2018-08-14T08:00:06.479+0000 I CONTROL  [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.

2018-08-14T08:00:06.479+0000 I CONTROL  [initandlisten]

2018-08-14T08:00:06.479+0000 I CONTROL  [initandlisten]

2018-08-14T08:00:06.479+0000 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.

2018-08-14T08:00:06.479+0000 I CONTROL  [initandlisten] **        We suggest setting it to 'never'

2018-08-14T08:00:06.479+0000 I CONTROL  [initandlisten]

2018-08-14T08:00:06.479+0000 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.

2018-08-14T08:00:06.479+0000 I CONTROL  [initandlisten] **        We suggest setting it to 'never'

2018-08-14T08:00:06.479+0000 I CONTROL  [initandlisten]

> use admin

switched to db admin

>

>

> config = {_id:"shard3",members:[

... {_id:0,host:"172.17.0.2:27019",arbiterOnly:true},

... {_id:1,host:"172.17.0.3:27019"},

... {_id:2,host:"172.17.0.4:27019"},]

... }

{

"_id" : "shard3",

"members" : [

{

"_id" : 0,

"host" : "172.17.0.2:27019",

"arbiterOnly" : true

},

{

"_id" : 1,

"host" : "172.17.0.3:27019"

},

{

"_id" : 2,

"host" : "172.17.0.4:27019"

}

]

}

> rs.initiate(config);

{

"ok" : 1,

"operationTime" : Timestamp(1534234156, 1),

"$clusterTime" : {

"clusterTime" : Timestamp(1534234156, 1),

"signature" : {

"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),

"keyId" : NumberLong(0)

}

}

}

shard3:SECONDARY> rs.status()

{

"set" : "shard3",

"date" : ISODate("2018-08-14T08:09:23.568Z"),

"myState" : 2,

"term" : NumberLong(0),

"heartbeatIntervalMillis" : NumberLong(2000),

"optimes" : {

"lastCommittedOpTime" : {

"ts" : Timestamp(0, 0),

"t" : NumberLong(-1)

},

"appliedOpTime" : {

"ts" : Timestamp(1534234156, 1),

"t" : NumberLong(-1)

},

"durableOpTime" : {

"ts" : Timestamp(1534234156, 1),

"t" : NumberLong(-1)

}

},

"members" : [

{

"_id" : 0,

"name" : "172.17.0.2:27019",

"health" : 1,

"state" : 7,

"stateStr" : "ARBITER",

"uptime" : 6,

"lastHeartbeat" : ISODate("2018-08-14T08:09:21.944Z"),

"lastHeartbeatRecv" : ISODate("2018-08-14T08:09:18.925Z"),

"pingMs" : NumberLong(0),

"configVersion" : 1

},

{

"_id" : 1,

"name" : "172.17.0.3:27019",

"health" : 1,

"state" : 2,

"stateStr" : "SECONDARY",

"uptime" : 558,

"optime" : {

"ts" : Timestamp(1534234156, 1),

"t" : NumberLong(-1)

},

"optimeDate" : ISODate("2018-08-14T08:09:16Z"),

"infoMessage" : "could not find member to sync from",

"configVersion" : 1,

"self" : true

},

{

"_id" : 2,

"name" : "172.17.0.4:27019",

"health" : 1,

"state" : 2,

"stateStr" : "SECONDARY",

"uptime" : 6,

"optime" : {

"ts" : Timestamp(1534234156, 1),

"t" : NumberLong(-1)

},

"optimeDurable" : {

"ts" : Timestamp(1534234156, 1),

"t" : NumberLong(-1)

},

"optimeDate" : ISODate("2018-08-14T08:09:16Z"),

"optimeDurableDate" : ISODate("2018-08-14T08:09:16Z"),

"lastHeartbeat" : ISODate("2018-08-14T08:09:21.944Z"),

"lastHeartbeatRecv" : ISODate("2018-08-14T08:09:19.060Z"),

"pingMs" : NumberLong(0),

"configVersion" : 1

}

],

"ok" : 1,

"operationTime" : Timestamp(1534234156, 1),

"$clusterTime" : {

"clusterTime" : Timestamp(1534234156, 1),

"signature" : {

"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),

"keyId" : NumberLong(0)

}

}

}

shard3:SECONDARY>

Check the shard3 replica set status:

shard3:SECONDARY> rs.status()

{

"set" : "shard3",

"date" : ISODate("2018-08-14T08:09:25.488Z"),

"myState" : 2,

"term" : NumberLong(0),

"heartbeatIntervalMillis" : NumberLong(2000),

"optimes" : {

"lastCommittedOpTime" : {

"ts" : Timestamp(0, 0),

"t" : NumberLong(-1)

},

"appliedOpTime" : {

"ts" : Timestamp(1534234156, 1),

"t" : NumberLong(-1)

},

"durableOpTime" : {

"ts" : Timestamp(1534234156, 1),

"t" : NumberLong(-1)

}

},

"members" : [

{

"_id" : 0,

"name" : "172.17.0.2:27019",

"health" : 1,

"state" : 7,

"stateStr" : "ARBITER",

"uptime" : 8,

"lastHeartbeat" : ISODate("2018-08-14T08:09:21.944Z"),

"lastHeartbeatRecv" : ISODate("2018-08-14T08:09:23.928Z"),

"pingMs" : NumberLong(0),

"configVersion" : 1

},

{

"_id" : 1,

"name" : "172.17.0.3:27019",

"health" : 1,

"state" : 2,

"stateStr" : "SECONDARY",

"uptime" : 560,

"optime" : {

"ts" : Timestamp(1534234156, 1),

"t" : NumberLong(-1)

},

"optimeDate" : ISODate("2018-08-14T08:09:16Z"),

"infoMessage" : "could not find member to sync from",

"configVersion" : 1,

"self" : true

},

{

"_id" : 2,

"name" : "172.17.0.4:27019",

"health" : 1,

"state" : 2,

"stateStr" : "SECONDARY",

"uptime" : 8,

"optime" : {

"ts" : Timestamp(1534234156, 1),

"t" : NumberLong(-1)

},

"optimeDurable" : {

"ts" : Timestamp(1534234156, 1),

"t" : NumberLong(-1)

},

"optimeDate" : ISODate("2018-08-14T08:09:16Z"),

"optimeDurableDate" : ISODate("2018-08-14T08:09:16Z"),

"lastHeartbeat" : ISODate("2018-08-14T08:09:21.944Z"),

"lastHeartbeatRecv" : ISODate("2018-08-14T08:09:24.061Z"),

"pingMs" : NumberLong(0),

"configVersion" : 1

}

],

"ok" : 1,

"operationTime" : Timestamp(1534234156, 1),

"$clusterTime" : {

"clusterTime" : Timestamp(1534234156, 1),

"signature" : {

"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),

"keyId" : NumberLong(0)

}

}

}

All shard replica sets are now configured.

2.2.6 Configure the mongos routers

The config servers and shard servers on all three machines are now running; next, configure the three mongos routers.

Because mongos loads its configuration from the config servers into memory, it has no data directory of its own; the configDB setting points it at the config server replica set.

cat >> /mongodb/mongos-30000.conf <<ENDF
systemLog:
  destination: file
  ### log file location
  path: /mongodb/logs/mongos-30000.log
  logAppend: true
processManagement:
  fork: true  # fork and run in background
  pidFilePath: /mongodb/socket/mongos-30000.pid
## network settings
net:
  port: 30000
  bindIp: 172.17.0.2    # change to this server's own IP
## point mongos at the config server replica set
sharding:
  configDB: configdb/172.17.0.2:20000,172.17.0.3:20000,172.17.0.4:20000
ENDF

mongos -f mongos-30000.conf
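A quick way to confirm each mongos is up (an extra check, not in the original post):

mongo 172.17.0.2:30000 --eval "printjson(db.adminCommand({ping:1}))"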

The config server replica set, the shard replica sets, and the mongos services are all running, but sharding has not yet been set up, so the sharding features cannot be used. Log in to mongos and add the shards.

Log in to any mongos:

[root@a35e154acb47 mongodb]# mongo 172.17.0.2:30000

MongoDB shell version v3.6.3

connecting to: mongodb://172.17.0.2:30000/test

MongoDB server version: 3.6.3

Server has startup warnings:

2018-08-14T07:21:29.694+0000 I CONTROL  [main]

2018-08-14T07:21:29.694+0000 I CONTROL  [main] ** WARNING: Access control is not enabled for the database.

2018-08-14T07:21:29.694+0000 I CONTROL  [main] **          Read and write access to data and configuration is unrestricted.

2018-08-14T07:21:29.694+0000 I CONTROL  [main] ** WARNING: You are running this process as the root user, which is not recommended.

2018-08-14T07:21:29.694+0000 I CONTROL  [main]

mongos> use admin

switched to db admin

mongos> db.runCommand({addshard:"shard1/172.17.0.2:27017,172.17.0.3:27017,172.17.0.4:27017"})
{

"shardAdded" : "shard1",

"ok" : 1,

"$clusterTime" : {

"clusterTime" : Timestamp(1534234937, 3),

"signature" : {

"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),

"keyId" : NumberLong(0)

}

},

"operationTime" : Timestamp(1534234937, 3)

}

mongos> db.runCommand({addshard:"shard2/172.17.0.2:27018,172.17.0.3:27018,172.17.0.4:27018"})

{

"shardAdded" : "shard2",

"ok" : 1,

"$clusterTime" : {

"clusterTime" : Timestamp(1534234937, 5),

"signature" : {

"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),

"keyId" : NumberLong(0)

}

},

"operationTime" : Timestamp(1534234937, 5)

}

mongos> db.runCommand({addshard:"shard3/172.17.0.2:27019,172.17.0.3:27019,172.17.0.4:27019"})

{

"shardAdded" : "shard3",

"ok" : 1,

"$clusterTime" : {

"clusterTime" : Timestamp(1534234938, 2),

"signature" : {

"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),

"keyId" : NumberLong(0)

}

},

"operationTime" : Timestamp(1534234938, 2)

}

mongos>

Check the cluster:

mongos> sh.status()

--- Sharding Status ---

sharding version: {

"_id" : 1,

"minCompatibleVersion" : 5,

"currentVersion" : 6,

"clusterId" : ObjectId("5b72826cf59ff6d759023045")

}

shards:

{  "_id" : "shard1",  "host" : "shard1/172.17.0.2:27017,172.17.0.4:27017",  "state" : 1 }

{  "_id" : "shard2",  "host" : "shard2/172.17.0.2:27018,172.17.0.3:27018",  "state" : 1 }

{  "_id" : "shard3",  "host" : "shard3/172.17.0.3:27019,172.17.0.4:27019",  "state" : 1 }

active mongoses:

"3.6.3" : 3

autosplit:

Currently enabled: yes

balancer:

Currently enabled:  yes

Currently running:  no

Failed balancer rounds in last 5 attempts:  0

Migration Results for the last 24 hours:

2 : Success

1 : Failed with error 'aborted', from shard3 to shard1

databases:

{  "_id" : "config",  "primary" : "config",  "partitioned" : true }

config.system.sessions

shard key: { "_id" : 1 }

unique: false

balancing: true

chunks:

shard1  1

{ "_id" : { "$minKey" : 1 } } -->> { "_id" : { "$maxKey" : 1 } } on : shard1 Timestamp(1, 0)

{  "_id" : "school",  "primary" : "shard3",  "partitioned" : true }

school.student

shard key: { "_id" : "hashed" }

unique: false

balancing: true

chunks:

shard1  1
shard2  1
shard3  1

{ "_id" : { "$minKey" : 1 } } -->> { "_id" : NumberLong("-3074457345618258602") } on : shard1 Timestamp(2, 0)

{ "_id" : NumberLong("-3074457345618258602") } -->> { "_id" : NumberLong("3074457345618258602") } on : shard2 Timestamp(3, 0)

{ "_id" : NumberLong("3074457345618258602") } -->> { "_id" : { "$maxKey" : 1 } } on : shard3 Timestamp(3, 1)

mongos>

3. Testing

The config service, routing service, shard service, and replica sets are now all wired together; data inserted at this point can be sharded automatically. Connect to mongos and enable sharding for the target database and collection.

Note: sharding must be enabled from the admin database.

use admin

db.runCommand( { enablesharding :"school"});    # enable sharding for the school database

db.runCommand( { shardcollection : "school.student",key : {_id:"hashed"} } )    # shard the school.student collection with a hashed _id shard key

This shards the student collection of the school database across shard1, shard2, and shard3 based on a hashed _id.

mongos> db.runCommand( { enablesharding :"school"});

{

"ok" : 1,

"$clusterTime" : {

"clusterTime" : Timestamp(1534235216, 7),

"signature" : {

"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),

"keyId" : NumberLong(0)

}

},

"operationTime" : Timestamp(1534235216, 7)

}

mongos> db.runCommand( { shardcollection : "school.student",key : {_id:"hashed"} } )

{

"collectionsharded" : "school.student",

"collectionUUID" : UUID("dbbcd092-a519-44be-8ebf-3cec16f866c5"),

"ok" : 1,

"$clusterTime" : {

"clusterTime" : Timestamp(1534235226, 22),

"signature" : {

"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),

"keyId" : NumberLong(0)

}

},

"operationTime" : Timestamp(1534235226, 22)

}

List the shards:

mongos> db.runCommand({listshards:1})

{

"shards" : [

{

"_id" : "shard1",

"host" : "shard1/172.17.0.2:27017,172.17.0.4:27017",

"state" : 1

},

{

"_id" : "shard2",

"host" : "shard2/172.17.0.2:27018,172.17.0.3:27018",

"state" : 1

},

{

"_id" : "shard3",

"host" : "shard3/172.17.0.3:27019,172.17.0.4:27019",

"state" : 1

}

],

"ok" : 1,

"$clusterTime" : {

"clusterTime" : Timestamp(1534235275, 2),

"signature" : {

"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),

"keyId" : NumberLong(0)

}

},

"operationTime" : Timestamp(1534235275, 2)

}

Insert test data:

mongos> use school

switched to db school

mongos> for (var i = 1; i <= 1; i++) db.student.save({_id:i,"test1":"testval1"});

WriteResult({ "nMatched" : 0, "nUpserted" : 1, "nModified" : 0, "_id" : 1 })

mongos> for (var i = 1; i <= 100000; i++) db.student.save({_id:i,"test1":"testval1"});

WriteResult({ "nMatched" : 0, "nUpserted" : 1, "nModified" : 0, "_id" : 100000 })

mongos>
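Another way to see how the documents were spread across the shards is the standard shell helper for sharded collections (an extra check, not in the original post):

mongos> use school
mongos> db.student.getShardDistribution()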

Check how the data was distributed (part of the output omitted):

db.student.stats()
{
"sharded" : true,
"capped" : false,
"ns" : "school.student",
"count" : 100000,        # total document count
"size" : 3800000,
"storageSize" : 1335296,
"totalIndexSize" : 4329472,
"indexSizes" : {
"_id_" : 1327104,
"_id_hashed" : 3002368
},
"avgObjSize" : 38,
"nindexes" : 2,
"nchunks" : 6,
"shards" : {
"shard1" : {
"ns" : "school.student",
"size" : 1282690,
"count" : 33755,        # documents on shard1
"avgObjSize" : 38,
"storageSize" : 450560,
"capped" : false,
......
"shard2" : {
"ns" : "school.student",
"size" : 1259434,
"count" : 33143,        # documents on shard2
"avgObjSize" : 38,
"storageSize" : 442368,
"capped" : false,
.......
"shard3" : {
"ns" : "school.student",
"size" : 1257876,
"count" : 33102,        # documents on shard3
"avgObjSize" : 38,
"storageSize" : 442368,
"capped" : false,
.......

At this point the mongos, config server, and shard tiers of the architecture are all deployed. In a real production environment the mongos layer in front should also be made highly available to improve the availability of the whole cluster.
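With three mongos routers, a simple form of client-side failover is to list all of them in the application's connection string; a hedged example using the addresses from this setup:

mongodb://172.17.0.2:30000,172.17.0.3:30000,172.17.0.4:30000/school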
