Environment

$ cat /etc/redhat-release
CentOS Linux release 7.0.1406 (Core)
$ uname -a
Linux zhaopin-2-201 3.10.0-123.el7.x86_64 #1 SMP Mon Jun 30 12:09:22 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
$ mongo --version
MongoDB shell version: 3.0.6

node1: 172.30.2.201
node2: 172.30.2.202
node3: 172.30.2.203

Configure the Shard Servers

  • Run on each of the 3 nodes:

Create directories

$ sudo mkdir -p /data/mongodb/{data/{sh0,sh1},backup/{sh0,sh1},log/{sh0,sh1},conf/{sh0,sh1}}
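The nested brace expansion above creates the whole tree in one command. To see what it expands to before touching /data, you can dry-run the same pattern under a throwaway prefix (the /tmp path below is purely illustrative):

```shell
# Create the same layout under a throwaway prefix (hypothetical path)
base=/tmp/mongodb-demo
mkdir -p "$base"/{data/{sh0,sh1},backup/{sh0,sh1},log/{sh0,sh1},conf/{sh0,sh1}}

# List the directories that were created
find "$base" -type d | sort
```

Note that brace expansion is a bash feature; under a plain POSIX sh the braces are taken literally.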

Prepare the configuration files

  • First shard:
$ sudo vim /data/mongodb/conf/sh0/mongodb.conf
# base
port = 27010
maxConns = 800
filePermissions = 0700
fork = true
noauth = true
directoryperdb = true
dbpath = /data/mongodb/data/sh0
pidfilepath = /data/mongodb/data/sh0/mongodb.pid
oplogSize = 10
journal = true
# security
nohttpinterface = true
rest = false
# log
logpath = /data/mongodb/log/sh0/mongodb.log
logRotate = rename
logappend = true
slowms = 50
replSet = sh0
shardsvr = true
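For reference, mongod has also accepted a YAML configuration format since 2.6, and the INI-style sh0 options above map onto it roughly as follows. This is a sketch of the mapping, not a drop-in replacement; verify the option names against the 3.0 documentation before switching formats (filePermissions, for instance, corresponds to net.unixDomainSocket.filePermissions and is omitted here):

```yaml
# Approximate YAML equivalent of the sh0 INI-style config (verify before use)
net:
  port: 27010
  maxIncomingConnections: 800
  http:
    enabled: false              # nohttpinterface = true
    RESTInterfaceEnabled: false # rest = false
processManagement:
  fork: true
  pidFilePath: /data/mongodb/data/sh0/mongodb.pid
security:
  authorization: disabled      # noauth = true
storage:
  dbPath: /data/mongodb/data/sh0
  directoryPerDB: true
  journal:
    enabled: true
systemLog:
  destination: file
  path: /data/mongodb/log/sh0/mongodb.log
  logAppend: true
  logRotate: rename
operationProfiling:
  slowOpThresholdMs: 50
replication:
  replSetName: sh0
  oplogSizeMB: 10
sharding:
  clusterRole: shardsvr
```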
  • Second shard:
$ sudo vim /data/mongodb/conf/sh1/mongodb.conf
# base
port = 27011
maxConns = 800
filePermissions = 0700
fork = true
noauth = true
directoryperdb = true
dbpath = /data/mongodb/data/sh1
pidfilepath = /data/mongodb/data/sh1/mongodb.pid
oplogSize = 10
journal = true
# security
nohttpinterface = true
rest = false
# log
logpath = /data/mongodb/log/sh1/mongodb.log
logRotate = rename
logappend = true
slowms = 50
replSet = sh1
shardsvr = true

Start the Shard Servers

$ sudo /opt/mongodb/bin/mongod --config /data/mongodb/conf/sh0/mongodb.conf
about to fork child process, waiting until server is ready for connections.
forked process: 41492
child process started successfully, parent exiting
$ sudo /opt/mongodb/bin/mongod --config /data/mongodb/conf/sh1/mongodb.conf
about to fork child process, waiting until server is ready for connections.
forked process: 41509
child process started successfully, parent exiting
$ ps aux | grep mongo | grep -v grep
root 41492 0.5 0.0 518016 54604 ? Sl 10:09 0:00 /opt/mongodb/bin/mongod --config /data/mongodb/conf/sh0/mongodb.conf
root 41509 0.5 0.0 516988 51824 ? Sl 10:09 0:00 /opt/mongodb/bin/mongod --config /data/mongodb/conf/sh1/mongodb.conf
$ mongo --port 27010
MongoDB shell version: 3.0.6
connecting to: 127.0.0.1:27010/test
>
bye
$ mongo --port 27011
MongoDB shell version: 3.0.6
connecting to: 127.0.0.1:27011/test
>
bye
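Beyond connecting interactively, a quick scripted liveness check can use bash's /dev/tcp pseudo-device. This is only a sketch: check_port is a hypothetical helper, and the ports match the two shard mongods started above:

```shell
# Report whether a TCP port on a host is accepting connections (bash-only /dev/tcp)
check_port() {
    (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null && echo "$1:$2 up" || echo "$1:$2 down"
}

# Check both shard ports on this node
for port in 27010 27011; do
    check_port 127.0.0.1 "$port"
done
```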

Configure the Config Servers

  • Run on each of the 3 nodes:

Create directories

$ sudo mkdir -p /data/mongodb/{data/cf0,backup/cf0,log/cf0,conf/cf0}

Prepare the configuration file

$ sudo vim /data/mongodb/conf/cf0/config.conf
# base
port = 27000
maxConns = 800
filePermissions = 0700
fork = true
noauth = true
directoryperdb = true
dbpath = /data/mongodb/data/cf0
pidfilepath = /data/mongodb/data/cf0/config.pid
oplogSize = 10
journal = true
# security
nohttpinterface = true
rest = false
# log
logpath = /data/mongodb/log/cf0/config.log
logRotate = rename
logappend = true
slowms = 50
configsvr = true

Start the Config Server

$ sudo /opt/mongodb/bin/mongod --config /data/mongodb/conf/cf0/config.conf
about to fork child process, waiting until server is ready for connections.
forked process: 41759
child process started successfully, parent exiting
$ ps aux | grep mongo | grep -v grep
root 41492 0.3 0.0 518016 54728 ? Sl 10:09 0:06 /opt/mongodb/bin/mongod --config /data/mongodb/conf/sh0/mongodb.conf
root 41509 0.3 0.0 518016 54760 ? Sl 10:09 0:06 /opt/mongodb/bin/mongod --config /data/mongodb/conf/sh1/mongodb.conf
root 41855 0.4 0.0 467828 51684 ? Sl 10:25 0:03 /opt/mongodb/bin/mongod --config /data/mongodb/conf/cf0/config.conf

Configure the Query Routers

  • Run on each of the 3 nodes:

Create directories

$ sudo mkdir -p /data/mongodb/{data/ms0,backup/ms0,log/ms0,conf/ms0}

Prepare the configuration file

$ sudo vim /data/mongodb/conf/ms0/mongos.conf
# base
port = 30000
maxConns = 800
filePermissions = 0700
fork = true
pidfilepath = /data/mongodb/data/ms0/mongos.pid
# log
logpath = /data/mongodb/log/ms0/mongos.log
logRotate = rename
logappend = true
configdb = 172.30.2.201:27000,172.30.2.202:27000,172.30.2.203:27000
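As with the mongod configs, the mongos settings above have an approximate YAML counterpart. A sketch only; confirm option names against the 3.0 documentation:

```yaml
# Approximate YAML equivalent of the mongos INI-style config (verify before use)
net:
  port: 30000
  maxIncomingConnections: 800
processManagement:
  fork: true
  pidFilePath: /data/mongodb/data/ms0/mongos.pid
systemLog:
  destination: file
  path: /data/mongodb/log/ms0/mongos.log
  logAppend: true
  logRotate: rename
sharding:
  configDB: 172.30.2.201:27000,172.30.2.202:27000,172.30.2.203:27000
```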

Start the Query Router

$ sudo /opt/mongodb/bin/mongos --config /data/mongodb/conf/ms0/mongos.conf
about to fork child process, waiting until server is ready for connections.
forked process: 42233
child process started successfully, parent exiting
$ ps aux | grep mongo | grep -v grep
root 41492 0.3 0.0 518016 54728 ? Sl 10:09 0:06 /opt/mongodb/bin/mongod --config /data/mongodb/conf/sh0/mongodb.conf
root 41509 0.3 0.0 518016 54760 ? Sl 10:09 0:07 /opt/mongodb/bin/mongod --config /data/mongodb/conf/sh1/mongodb.conf
root 41855 0.4 0.0 546724 37812 ? Sl 10:25 0:03 /opt/mongodb/bin/mongod --config /data/mongodb/conf/cf0/config.conf
root 41870 0.4 0.0 546724 38188 ? Sl 10:25 0:03 /opt/mongodb/bin/mongod --conf
root 42233 0.5 0.0 233536 10188 ? Sl 10:38 0:00 /opt/mongodb/bin/mongos --config /data/mongodb/conf/ms0/mongos.conf

Initialize the Replica Sets

Replica sets are configured for the sake of high availability. (Configuring single-node sets was my own shortcut to save time; adding members later works the same as any ordinary replica-set operation, and the sharding configuration needs no changes.) This can be run on any one node; here it is done on node1.

  • Shard one:
$ mongo --port 27010
MongoDB shell version: 3.0.6
connecting to: 127.0.0.1:27010/test
> use admin
switched to db admin
> cfg={_id:"sh0", members:[ {_id:0,host:"172.30.2.201:27010"}, {_id:1,host:"172.30.2.202:27010"}, {_id:2,host:"172.30.2.203:27010"} ] }
{
    "_id" : "sh0",
    "members" : [
        {
            "_id" : 0,
            "host" : "172.30.2.201:27010"
        },
        {
            "_id" : 1,
            "host" : "172.30.2.202:27010"
        },
        {
            "_id" : 2,
            "host" : "172.30.2.203:27010"
        }
    ]
}
> rs.initiate( cfg );
{ "ok" : 1 }
sh0:OTHER> rs.status()
{
    "set" : "sh0",
    "date" : ISODate("2015-10-23T05:33:31.920Z"),
    "myState" : 1,
    "members" : [
        {
            "_id" : 0,
            "name" : "172.30.2.201:27010",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 270,
            "optime" : Timestamp(1445578404, 1),
            "optimeDate" : ISODate("2015-10-23T05:33:24Z"),
            "electionTime" : Timestamp(1445578408, 1),
            "electionDate" : ISODate("2015-10-23T05:33:28Z"),
            "configVersion" : 1,
            "self" : true
        },
        {
            "_id" : 1,
            "name" : "172.30.2.202:27010",
            "health" : 1,
            "state" : 5,
            "stateStr" : "STARTUP2",
            "uptime" : 7,
            "optime" : Timestamp(0, 0),
            "optimeDate" : ISODate("1970-01-01T00:00:00Z"),
            "lastHeartbeat" : ISODate("2015-10-23T05:33:30.289Z"),
            "lastHeartbeatRecv" : ISODate("2015-10-23T05:33:30.295Z"),
            "pingMs" : 1,
            "configVersion" : 1
        },
        {
            "_id" : 2,
            "name" : "172.30.2.203:27010",
            "health" : 1,
            "state" : 5,
            "stateStr" : "STARTUP2",
            "uptime" : 7,
            "optime" : Timestamp(0, 0),
            "optimeDate" : ISODate("1970-01-01T00:00:00Z"),
            "lastHeartbeat" : ISODate("2015-10-23T05:33:30.289Z"),
            "lastHeartbeatRecv" : ISODate("2015-10-23T05:33:30.293Z"),
            "pingMs" : 1,
            "configVersion" : 1
        }
    ],
    "ok" : 1
}
sh0:PRIMARY>
bye
  • Shard two:
$ mongo --port 27011
MongoDB shell version: 3.0.6
connecting to: 127.0.0.1:27011/test
> use admin
switched to db admin
> cfg={_id:"sh1", members:[ {_id:0,host:"172.30.2.201:27011"}, {_id:1,host:"172.30.2.202:27011"}, {_id:2,host:"172.30.2.203:27011"} ] }
{
    "_id" : "sh1",
    "members" : [
        {
            "_id" : 0,
            "host" : "172.30.2.201:27011"
        },
        {
            "_id" : 1,
            "host" : "172.30.2.202:27011"
        },
        {
            "_id" : 2,
            "host" : "172.30.2.203:27011"
        }
    ]
}
> rs.initiate( cfg );
{ "ok" : 1 }
sh1:OTHER> rs.status();
{
    "set" : "sh1",
    "date" : ISODate("2015-10-23T05:36:02.365Z"),
    "myState" : 1,
    "members" : [
        {
            "_id" : 0,
            "name" : "172.30.2.201:27011",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 406,
            "optime" : Timestamp(1445578557, 1),
            "optimeDate" : ISODate("2015-10-23T05:35:57Z"),
            "electionTime" : Timestamp(1445578561, 1),
            "electionDate" : ISODate("2015-10-23T05:36:01Z"),
            "configVersion" : 1,
            "self" : true
        },
        {
            "_id" : 1,
            "name" : "172.30.2.202:27011",
            "health" : 1,
            "state" : 5,
            "stateStr" : "STARTUP2",
            "uptime" : 5,
            "optime" : Timestamp(0, 0),
            "optimeDate" : ISODate("1970-01-01T00:00:00Z"),
            "lastHeartbeat" : ISODate("2015-10-23T05:36:01.168Z"),
            "lastHeartbeatRecv" : ISODate("2015-10-23T05:36:01.175Z"),
            "pingMs" : 0,
            "configVersion" : 1
        },
        {
            "_id" : 2,
            "name" : "172.30.2.203:27011",
            "health" : 1,
            "state" : 5,
            "stateStr" : "STARTUP2",
            "uptime" : 5,
            "optime" : Timestamp(0, 0),
            "optimeDate" : ISODate("1970-01-01T00:00:00Z"),
            "lastHeartbeat" : ISODate("2015-10-23T05:36:01.167Z"),
            "lastHeartbeatRecv" : ISODate("2015-10-23T05:36:01.172Z"),
            "pingMs" : 0,
            "configVersion" : 1
        }
    ],
    "ok" : 1
}
sh1:PRIMARY>
bye
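Typing the member documents interactively is error-prone; the same initiation can be scripted by building the config document from the node list and passing it to mongo --eval. A sketch (build_initiate is a hypothetical helper; the final mongo invocation is shown as a comment since it needs a live mongod):

```shell
# Build an rs.initiate() document for a replica set from a host list
build_initiate() {
    local set_name=$1 port=$2; shift 2
    local members="" i=0
    for host in "$@"; do
        members+="{_id:$i,host:\"$host:$port\"},"
        i=$((i + 1))
    done
    # Strip the trailing comma and wrap in rs.initiate()
    echo "rs.initiate({_id:\"$set_name\", members:[${members%,}]})"
}

js=$(build_initiate sh1 27011 172.30.2.201 172.30.2.202 172.30.2.203)
echo "$js"
# Would be executed as: mongo --port 27011 --eval "$js"
```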

Configure Sharding

$ mongo --port 30000
MongoDB shell version: 3.0.6
connecting to: 127.0.0.1:30000/test
mongos> use admin;
switched to db admin
mongos> sh.addShard("sh0/172.30.2.201:27010,172.30.2.202:27010,172.30.2.203:27010");
{ "shardAdded" : "sh0", "ok" : 1 }
mongos> sh.addShard("sh1/172.30.2.201:27011,172.30.2.202:27011,172.30.2.203:27011");
{ "shardAdded" : "sh1", "ok" : 1 }
mongos> use mydb;
switched to db mydb
mongos> db.createCollection("test");
{
    "ok" : 1,
    "$gleStats" : {
        "lastOpTime" : Timestamp(1444358911, 1),
        "electionId" : ObjectId("56172a4bc03d9b1667f8e928")
    }
}
mongos> sh.enableSharding("mydb");
{ "ok" : 1 }
mongos> sh.shardCollection("mydb.test", {"_id":1});
{ "collectionsharded" : "mydb.test", "ok" : 1 }
mongos> sh.status();
--- Sharding Status ---
  sharding version: {
    "_id" : 1,
    "minCompatibleVersion" : 5,
    "currentVersion" : 6,
    "clusterId" : ObjectId("561728b4030ea038bcb57fa0")
  }
  shards:
    { "_id" : "sh0", "host" : "sh0/172.30.2.201:27010,172.30.2.202:27010,172.30.2.203:27010" }
    { "_id" : "sh1", "host" : "sh1/172.30.2.201:27011,172.30.2.202:27011,172.30.2.203:27011" }
  balancer:
    Currently enabled: yes
    Currently running: no
    Failed balancer rounds in last 5 attempts: 0
    Migration Results for the last 24 hours:
      No recent migrations
  databases:
    { "_id" : "admin", "partitioned" : false, "primary" : "config" }
    { "_id" : "mydb", "partitioned" : true, "primary" : "sh0" }
      mydb.test
        shard key: { "_id" : 1 }
        chunks:
          sh0  1
        { "_id" : { "$minKey" : 1 } } -->> { "_id" : { "$maxKey" : 1 } } on : sh0 Timestamp(1, 0)

As the status output shows, sharding is now configured.

Add startup entries

$ sudo vim /etc/rc.local
ulimit -SHn 65535
/opt/mongodb/bin/mongod --config /data/mongodb/conf/sh0/mongodb.conf
/opt/mongodb/bin/mongod --config /data/mongodb/conf/sh1/mongodb.conf
/opt/mongodb/bin/mongod --config /data/mongodb/conf/cf0/config.conf
/opt/mongodb/bin/mongos --config /data/mongodb/conf/ms0/mongos.conf
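One caveat on CentOS 7: systemd only runs rc.local when the file has the executable bit, and /etc/rc.d/rc.local lacks it by default, so a one-time chmod +x /etc/rc.d/rc.local is needed. A throwaway demonstration of the pattern (the /tmp path is hypothetical; on a real node operate on /etc/rc.d/rc.local with sudo):

```shell
# Hypothetical copy for illustration; on a real node use /etc/rc.d/rc.local
RC_LOCAL=/tmp/rc.local.demo
printf '%s\n' '#!/bin/sh' 'ulimit -SHn 65535' > "$RC_LOCAL"

# systemd's rc-local service only executes the file when it is executable
chmod +x "$RC_LOCAL"
test -x "$RC_LOCAL" && echo "rc.local is executable"
```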

Notes

Although this setup likewise uses only 3 machines, the advantage of sharding is that the primaries of the two shards can be placed on different nodes, spreading the load of any single node; with more machines available, the shards can of course be moved onto separate hosts entirely.

Per the recommendations on the MongoDB website, NUMA should be disabled on multi-processor machines, along with other OS-level settings; see the MongoDB production checklist.
