High Availability and PyMongo
************************************
PyMongo makes it easy to write highly available applications whether you use a single replica set or a large sharded cluster.
Connecting to a Replica Set
============================
PyMongo makes working with replica sets easy. Here we’ll launch a new replica set and show how to handle both initialization and normal connections with PyMongo.
Note
Replica sets require server version >= 1.6.0. Support for connecting to replica sets also requires PyMongo version >= 1.8.0.
See the general MongoDB documentation on replica sets: http://dochub.mongodb.org/core/rs
Starting a Replica Set
============================
The main replica set documentation contains extensive information about setting up a new replica set or migrating an existing MongoDB setup, be sure to check that out. Here, we’ll just do the bare minimum to get a three node replica set setup locally.
Warning
Replica sets should always use multiple nodes in production - putting all set members on the same physical node is only recommended for testing and development.
We start three mongod processes, each on a different port and with a different dbpath, but all using the same replica set name “foo”. In the example we use the hostname “morton.local”, so replace that with your hostname when running:
$ hostname
morton.local
$ mongod --replSet foo/morton.local:27018,morton.local:27019 --rest
$ mongod --port 27018 --dbpath /data/db1 --replSet foo/morton.local:27017 --rest
$ mongod --port 27019 --dbpath /data/db2 --replSet foo/morton.local:27017 --rest
Initializing the Set
============================
At this point all of our nodes are up and running, but the set has yet to be initialized. Until the set is initialized no node will become the primary, and things are essentially “offline”.
To initialize the set we need to connect to a single node and run the initiate command. Since we don’t have a primary yet, we’ll need to tell PyMongo that it’s okay to connect to a slave/secondary:
>>> from pymongo import MongoClient, ReadPreference
>>> c = MongoClient("morton.local:27017",
...                 read_preference=ReadPreference.SECONDARY)
Note
We could have connected to any of the other nodes instead, but only the node we initiate from is allowed to contain any initial data.
After connecting, we run the initiate command to get things started (here we just use an implicit configuration, for more advanced configuration options see the replica set documentation):
>>> c.admin.command("replSetInitiate")
{u'info': u'Config now saved locally. Should come online in about a minute.',
u'info2': u'no configuration explicitly specified -- making one', u'ok': 1.0}
The three mongod servers we started earlier will now coordinate and come online as a replica set.
Connecting to a Replica Set
============================
The initial connection as made above is a special case for an uninitialized replica set. Normally we’ll want to connect differently. A connection to a replica set can be made using the normal MongoClient() constructor, specifying one or more members of the set. For example, any of the following will create a connection to the set we just created:
>>> MongoClient("morton.local", replicaset='foo')
MongoClient([u'morton.local:27019', 'morton.local:27017', u'morton.local:27018'])
>>> MongoClient("morton.local:27018", replicaset='foo')
MongoClient([u'morton.local:27019', u'morton.local:27017', 'morton.local:27018'])
>>> MongoClient("morton.local", 27019, replicaset='foo')
MongoClient(['morton.local:27019', u'morton.local:27017', u'morton.local:27018'])
>>> MongoClient(["morton.local:27018", "morton.local:27019"])
MongoClient(['morton.local:27019', u'morton.local:27017', 'morton.local:27018'])
>>> MongoClient("mongodb://morton.local:27017,morton.local:27018,morton.local:27019")
MongoClient(['morton.local:27019', 'morton.local:27017', 'morton.local:27018'])
The nodes passed to MongoClient() are called the seeds. If only one host is specified the replicaset parameter must be used to indicate this isn’t a connection to a single node. As long as at least one of the seeds is online, the driver will be able to “discover” all of the nodes in the set and make a connection to the current primary.
Handling Failover
============================
When a failover occurs, PyMongo will automatically attempt to find the new primary node and perform subsequent operations on that node. This can’t happen completely transparently, however. Here we’ll perform an example failover to illustrate how everything behaves. First, we’ll connect to the replica set and perform a couple of basic operations:
>>> db = MongoClient("morton.local", replicaSet='foo').test
>>> db.test.save({"x": 1})
ObjectId('...')
>>> db.test.find_one()
{u'x': 1, u'_id': ObjectId('...')}
By checking the host and port, we can see that we’re connected to morton.local:27017, which is the current primary:
>>> db.connection.host
'morton.local'
>>> db.connection.port
27017
Now let’s bring down that node and see what happens when we run our query again:
>>> db.test.find_one()
Traceback (most recent call last):
pymongo.errors.AutoReconnect: ...
We get an AutoReconnect exception. This means that the driver was not able to connect to the old primary (which makes sense, as we killed the server), but that it will attempt to automatically reconnect on subsequent operations. When this exception is raised our application code needs to decide whether to retry the operation or to simply continue, accepting the fact that the operation might have failed.
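One common way to make that decision is a small retry wrapper around the operation. The sketch below is not part of PyMongo; the helper name and retry policy are assumptions, and in real code you would normally pass `retryable=(pymongo.errors.AutoReconnect,)`:

```python
import time

def retry_on_failure(operation, retryable=(Exception,), attempts=5, delay=1.0):
    """Call operation(), retrying on `retryable` exceptions.

    Re-raises the last exception if every attempt fails. Only safe for
    operations you are willing to run more than once.
    """
    for attempt in range(attempts):
        try:
            return operation()
        except retryable:
            if attempt == attempts - 1:
                raise
            time.sleep(delay)  # give the set time to elect a new primary

# Hypothetical usage, assuming `db` from the example above:
# doc = retry_on_failure(lambda: db.test.find_one(),
#                        retryable=(AutoReconnect,))
```

Whether retrying is safe depends on the operation: an idempotent read can always be retried, while a non-idempotent write might have already been applied before the failure.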
On subsequent attempts to run the query we might continue to see this exception. Eventually, however, the replica set will failover and elect a new primary (this should take a couple of seconds in general). At that point the driver will connect to the new primary and the operation will succeed:
>>> db.test.find_one()
{u'x': 1, u'_id': ObjectId('...')}
>>> db.connection.host
'morton.local'
>>> db.connection.port
27018
MongoReplicaSetClient
============================
Using a MongoReplicaSetClient instead of a simple MongoClient offers two key features: secondary reads and replica set health monitoring. To connect using MongoReplicaSetClient just provide a host:port pair and the name of the replica set:
>>> from pymongo import MongoReplicaSetClient
>>> MongoReplicaSetClient("morton.local:27017", replicaSet='foo')
MongoReplicaSetClient([u'morton.local:27019', u'morton.local:27017', u'morton.local:27018'])
Secondary Reads
------------------
By default an instance of MongoReplicaSetClient will only send queries to the primary member of the replica set. To use secondaries for queries we have to change the ReadPreference:
>>> db = MongoReplicaSetClient("morton.local:27017", replicaSet='foo').test
>>> from pymongo.read_preferences import ReadPreference
>>> db.read_preference = ReadPreference.SECONDARY_PREFERRED
Now all queries will be sent to the secondary members of the set. If there are no secondary members the primary will be used as a fallback. If you have queries you would prefer to never send to the primary you can specify that using the SECONDARY read preference:
>>> db.read_preference = ReadPreference.SECONDARY
Read preference can be set on a client, database, collection, or on a per-query basis, e.g.:
>>> db.collection.find_one(read_preference=ReadPreference.PRIMARY)
Reads are configured using three options: read_preference, tag_sets, and secondary_acceptable_latency_ms.
read_preference:
- - - - - - - - -
* PRIMARY:
Read from the primary. This is the default, and provides the strongest consistency. If no primary is available, raise AutoReconnect.
* PRIMARY_PREFERRED:
Read from the primary if available, or if there is none, read from a secondary matching your choice of tag_sets and secondary_acceptable_latency_ms.
* SECONDARY:
Read from a secondary matching your choice of tag_sets and secondary_acceptable_latency_ms. If no matching secondary is available, raise AutoReconnect.
* SECONDARY_PREFERRED:
Read from a secondary matching your choice of tag_sets and secondary_acceptable_latency_ms if available, otherwise from primary (regardless of the primary’s tags and latency).
* NEAREST:
Read from any member matching your choice of tag_sets and secondary_acceptable_latency_ms.
tag_sets:
- - - - - -
Replica-set members can be tagged according to any criteria you choose. By default, MongoReplicaSetClient ignores tags when choosing a member to read from, but it can be configured with the tag_sets parameter. tag_sets must be a list of dictionaries, each dict providing tag values that the replica set member must match. MongoReplicaSetClient tries each set of tags in turn until it finds a set of tags with at least one matching member. For example, to prefer reads from the New York data center, but fall back to the San Francisco data center, tag your replica set members according to their location and create a MongoReplicaSetClient like so:
>>> rsc = MongoReplicaSetClient(
... "morton.local:27017",
...     replicaSet='foo',
... read_preference=ReadPreference.SECONDARY,
... tag_sets=[{'dc': 'ny'}, {'dc': 'sf'}]
... )
MongoReplicaSetClient tries to find secondaries in New York, then San Francisco, and raises AutoReconnect if none are available. As an additional fallback, specify a final, empty tag set, {}, which means “read from any member that matches the mode, ignoring tags.”
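The matching rule can be modelled in a few lines of plain Python. This is only an illustration of the behaviour described above, not PyMongo's actual implementation, and the member dictionaries are hypothetical:

```python
def choose_by_tag_sets(members, tag_sets):
    """Return the members matching the first tag set that matches anyone.

    Each member is modelled as a dict with a 'tags' sub-dict. A member
    matches a tag set when every key/value in the tag set appears in the
    member's tags; the empty tag set {} therefore matches every member.
    """
    for tags in tag_sets:
        matched = [m for m in members
                   if all(m.get('tags', {}).get(k) == v
                          for k, v in tags.items())]
        if matched:
            return matched
    return []  # the driver would raise AutoReconnect at this point

members = [
    {'host': 'ny1.example.com', 'tags': {'dc': 'ny'}},
    {'host': 'sf1.example.com', 'tags': {'dc': 'sf'}},
]
# New York preferred, San Francisco next, {} as a final catch-all:
chosen = choose_by_tag_sets(members, [{'dc': 'ny'}, {'dc': 'sf'}, {}])
```

With both data centers up, only the New York member is returned; remove it and the search falls through to San Francisco.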
secondary_acceptable_latency_ms:
- - - - - - - - - - - - - - - - -
If multiple members match the mode and tag sets, MongoReplicaSetClient reads from among the nearest members, chosen according to ping time. By default, only members whose ping times are within 15 milliseconds of the nearest are used for queries. You can choose to distribute reads among members with higher latencies by setting secondary_acceptable_latency_ms to a larger number. In that case, MongoReplicaSetClient distributes reads among matching members within secondary_acceptable_latency_ms of the closest member’s ping time.
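The windowing rule amounts to: find the fastest matching member, then keep every member within the window of that ping time. The sketch below illustrates the rule with made-up ping times; it is not PyMongo's internal code:

```python
def within_latency_window(ping_ms, window_ms=15):
    """Return hosts whose ping time is within `window_ms` of the fastest.

    `ping_ms` maps host -> average ping time in milliseconds; the default
    window mirrors the 15 ms default of secondary_acceptable_latency_ms.
    """
    fastest = min(ping_ms.values())
    return sorted(h for h, p in ping_ms.items() if p <= fastest + window_ms)

pings = {'ny1:27017': 2, 'ny2:27017': 10, 'sf1:27017': 80}
near = within_latency_window(pings)       # ny1 and ny2 qualify
far = within_latency_window(pings, 100)   # a larger window admits sf1 too
```

The driver then distributes reads among the qualifying hosts, so raising the window trades read locality for more spread.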
Note
secondary_acceptable_latency_ms is ignored when talking to a replica set through a mongos. The equivalent is the localThreshold command line option.
Health Monitoring
------------------------
When MongoReplicaSetClient is initialized it launches a background task to monitor the replica set for changes in:
* Health: detect when a member goes down or comes up, or if a different member becomes primary
* Configuration: detect changes in tags
* Latency: track a moving average of each member’s ping time
Replica-set monitoring ensures queries are continually routed to the proper members as the state of the replica set changes.
It is critical to call close() to terminate the monitoring task before your process exits.
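One way to guarantee the call is `contextlib.closing`, which invokes `close()` even when an exception escapes the block. The client class below is a stand-in so the sketch is self-contained; with a real MongoReplicaSetClient the `with` line would construct the client instead:

```python
from contextlib import closing

class FakeReplicaSetClient:
    """Stand-in for MongoReplicaSetClient, only to show the pattern."""
    def __init__(self):
        self.monitor_running = True   # pretend the background task started
    def close(self):
        self.monitor_running = False  # pretend the monitor was stopped

client = FakeReplicaSetClient()
with closing(client):
    pass  # ... queries would go here ...
# On exit from the with-block, close() has run and the monitor is stopped.
```

The same pattern works unchanged with the real client, since `closing()` only requires a `close()` method.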
High Availability and mongos
============================
An instance of MongoClient can be configured to automatically connect to a different mongos if the instance it is currently connected to fails. If a failure occurs, PyMongo will attempt to find the nearest mongos to perform subsequent operations. As with a replica set this can’t happen completely transparently, Here we’ll perform an example failover to illustrate how everything behaves. First, we’ll connect to a sharded cluster, using a seed list, and perform a couple of basic operations:
>>> db = MongoClient('morton.local:30000,morton.local:30001,morton.local:30002').test
>>> db.test.save({"x": 1})
ObjectId('...')
>>> db.test.find_one()
{u'x': 1, u'_id': ObjectId('...')}
Each member of the seed list passed to MongoClient must be a mongos. By checking the host, port, and is_mongos attributes we can see that we’re connected to morton.local:30001, a mongos:
>>> db.connection.host
'morton.local'
>>> db.connection.port
30001
>>> db.connection.is_mongos
True
Now let’s shut down that mongos instance and see what happens when we run our query again:
>>> db.test.find_one()
Traceback (most recent call last):
pymongo.errors.AutoReconnect: ...
As in the replica set example earlier in this document, we get an AutoReconnect exception. This means that the driver was not able to connect to the original mongos at port 30001 (which makes sense, since we shut it down), but that it will attempt to connect to a new mongos on subsequent operations. When this exception is raised our application code needs to decide whether to retry the operation or to simply continue, accepting the fact that the operation might have failed.
As long as one of the seed list members is still available the next operation will succeed:
>>> db.test.find_one()
{u'x': 1, u'_id': ObjectId('...')}
>>> db.connection.host
'morton.local'
>>> db.connection.port
30002
>>> db.connection.is_mongos
True