Kombu Source Code Analysis (Part 1): Overview
Celery is the most popular asynchronous message-queue framework in Python. It supports RabbitMQ, Redis, ZooKeeper, and others as brokers, and its abstraction over all of these message queues is implemented through Kombu. Kombu provides a uniform interface over the AMQP transport as well as non-AMQP transports (Redis, Amazon SQS, ZooKeeper, etc.).
The familiar AMQP concepts (Message, Producer, Exchange, Queue, Consumer, Connection, Channel) all have corresponding implementations in Kombu. In addition, Kombu implements the Transport: the entity that actually stores and delivers messages, and the layer that distinguishes whether the underlying queue is AMQP, Redis, or something else.
- Message: the message, the unit of data that is sent and consumed
- Producer: the sender of messages
- Consumer: the receiver of messages
- Exchange: the exchange; producers publish messages to an Exchange, which is responsible for routing them to queues
- Queue: the message queue, holding messages until an application consumes them; the Exchange routes messages to Queues, and consumers receive messages from Queues
- Connection: an abstraction over the connection to the message queue
- Channel: similar to the AMQP concept; think of it as one of several lightweight connections multiplexed over a single Connection
- Transport: the real connection to the broker, distinguishing the underlying message-queue implementation
Support for the different transports is selected by URL scheme, as sketched below.
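For instance, the same Connection API covers several brokers just by changing the scheme (a quick sketch; which schemes actually work depends on the transport packages installed, e.g. redis-py for Redis):

```python
from kombu import Connection

# The URL scheme selects the Transport implementation.
Connection('amqp://guest:guest@localhost:5672//')  # py-amqp transport
Connection('redis://localhost:6379/0')             # Redis transport
Connection('sqs://')                               # Amazon SQS transport
Connection('memory://')                            # in-memory transport, handy for tests
```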
Code Example
Let's start with the example from the official documentation:
```python
from kombu import Connection, Exchange, Queue

media_exchange = Exchange('media', 'direct', durable=True)
video_queue = Queue('video', exchange=media_exchange, routing_key='video')

def process_media(body, message):
    print(body)
    message.ack()

# connections
with Connection('amqp://guest:guest@localhost//') as conn:

    # produce
    producer = conn.Producer(serializer='json')
    producer.publish({'name': '/tmp/lolcat1.avi', 'size': 1301013},
                     exchange=media_exchange, routing_key='video',
                     declare=[video_queue])
    # the declare above, makes sure the video queue is declared
    # so that the messages can be delivered.
    # It's a best practice in Kombu to have both publishers and
    # consumers declare the queue. You can also declare the
    # queue manually using:
    #     video_queue(conn).declare()

    # consume
    with conn.Consumer(video_queue, callbacks=[process_media]) as consumer:
        # Process messages and handle events on all channels
        while True:
            conn.drain_events()

# Consume from several queues on the same channel
# (a separate snippet; assumes an open `connection`):
video_queue = Queue('video', exchange=media_exchange, routing_key='video')
image_queue = Queue('image', exchange=media_exchange, routing_key='image')

with connection.Consumer([video_queue, image_queue],
                         callbacks=[process_media]) as consumer:
    while True:
        connection.drain_events()
```
That covers essentially all the players. Whatever role you use, everything starts with establishing a Connection.
Connection
Getting a connection is simple:

```python
>>> from kombu import Connection
>>> connection = Connection('amqp://guest:guest@localhost:5672//')
```
At this point no connection has actually been established; the real connection is only made on first use, and then cached:

```python
@property
def connection(self):
    """The underlying connection object.

    Warning:
        This instance is transport specific, so do not
        depend on the interface of this object.
    """
    if not self._closed:
        if not self.connected:
            self.declared_entities.clear()
            self._default_channel = None
            self._connection = self._establish_connection()
            self._closed = False
        return self._connection
```
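A quick way to observe this laziness, sketched with the in-memory transport so no broker is needed:

```python
>>> from kombu import Connection
>>> conn = Connection('memory://')
>>> conn.connected                # nothing has been established yet
False
>>> channel = conn.channel()      # first real use triggers _establish_connection()
>>> conn.connected
True
```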
也可以主动连接:
>>> connection.connect()
def connect(self):
"""Establish connection to server immediately."""
self._closed = False
return self.connection
The underlying connection is, of course, established by whichever Transport is in use:

```python
conn = self.transport.establish_connection()
```
Connections must be closed explicitly:

```python
>>> connection.release()
```
Since Connection implements the context manager protocol:

```python
def __enter__(self):
    return self

def __exit__(self, *args):
    self.release()
```
a with statement can be used so the connection cannot be left open by accident:

```python
with Connection() as connection:
    # work with connection
    ...
```
A Connection can create a Producer or a Consumer directly; these helpers simply call the respective classes:

```python
def Producer(self, channel=None, *args, **kwargs):
    """Create new :class:`kombu.Producer` instance."""
    from .messaging import Producer
    return Producer(channel or self, *args, **kwargs)

def Consumer(self, queues=None, channel=None, *args, **kwargs):
    """Create new :class:`kombu.Consumer` instance."""
    from .messaging import Consumer
    return Consumer(channel or self, queues, *args, **kwargs)
```
Producer
Once the connection has been created, it can be used to create a Producer:

```python
producer = conn.Producer(serializer='json')
```
A Producer can also be created directly from a Channel:

```python
with connection.channel() as channel:
    producer = Producer(channel, ...)
```
When a Producer instance is initialized, it inspects its first argument, channel:

```python
self.revive(self.channel)

# inside revive():
channel = self.channel = maybe_channel(channel)
```
maybe_channel checks whether channel is actually a Connection instance; if so, it is replaced with that Connection's default_channel attribute:

```python
def maybe_channel(channel):
    """Get channel from object.

    Return the default channel if argument is a connection instance,
    otherwise just return the channel given.
    """
    if is_connection(channel):
        return channel.default_channel
    return channel
```
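So the two forms below end up on the same channel underneath (a small sketch over the in-memory transport):

```python
from kombu import Connection, Producer

with Connection('memory://') as conn:
    p1 = Producer(conn)                   # maybe_channel swaps in conn.default_channel
    p2 = Producer(conn.default_channel)   # the same channel, passed explicitly
    assert p1.channel is p2.channel
```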
So one way or another, a Producer is always tied to a Channel.
The Producer sends a message:

```python
producer.publish({'name': '/tmp/lolcat1.avi', 'size': 1301013},
                 exchange=media_exchange, routing_key='video',
                 declare=[video_queue])
```
What publish does is mostly carried out by the Channel:
```python
def _publish(self, body, priority, content_type, content_encoding,
             headers, properties, routing_key, mandatory,
             immediate, exchange, declare):
    channel = self.channel
    message = channel.prepare_message(
        body, priority, content_type,
        content_encoding, headers, properties,
    )
    if declare:
        maybe_declare = self.maybe_declare
        [maybe_declare(entity) for entity in declare]

    # handle autogenerated queue names for reply_to
    reply_to = properties.get('reply_to')
    if isinstance(reply_to, Queue):
        properties['reply_to'] = reply_to.name
    return channel.basic_publish(
        message,
        exchange=exchange, routing_key=routing_key,
        mandatory=mandatory, immediate=immediate,
    )
```
The Channel assembles the message in prepare_message and sends it out in basic_publish.
The Channel, in turn, is created by the Transport:

```python
chan = self.transport.create_channel(self.connection)
```
Transport
When a Connection is created, it receives a hostname URL of the form:

```
amqp://guest:guest@localhost:5672//
```
The scheme is then extracted from the hostname, for example redis:

```python
transport = transport or urlparse(hostname).scheme
```

and this determines which type of Transport gets created.
The creation path (several fragments, abridged):

```python
self.transport_cls = transport

transport_cls = get_transport_cls(transport_cls)

def get_transport_cls(transport=None):
    """Get transport class by name.

    The transport string is the full path to a transport class, e.g.::

        "kombu.transport.pyamqp:Transport"

    If the name does not include `"."` (is not fully qualified),
    the alias table will be consulted.
    """
    if transport not in _transport_cache:
        _transport_cache[transport] = resolve_transport(transport)
    return _transport_cache[transport]

# resolve_transport consults the alias table:
transport = TRANSPORT_ALIASES[transport]

TRANSPORT_ALIASES = {
    ... 'redis': 'kombu.transport.redis:Transport', ...
}
```
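The scheme really is all that selects the class. A sketch of checking this interactively (get_transport_cls is the public helper in kombu.transport that wraps resolve_transport):

```python
>>> from urllib.parse import urlparse
>>> urlparse('redis://localhost:6379/0').scheme
'redis'
>>> from kombu.transport import get_transport_cls
>>> get_transport_cls('redis')
<class 'kombu.transport.redis.Transport'>
```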
Taking Redis as the example: its Transport class lives in kombu/transport/redis.py and inherits from the Transport class in kombu/transport/virtual/base.py.
It creates the Channel:

```python
channel = self.Channel(connection)
```
and then, as described above, the Channel assembles the message (prepare_message) and sends it (basic_publish).
Channel
A Channel instance has several attributes tying it to Consumers, Queues, and so on; from virtual.Channel:
```python
class Channel(AbstractChannel, base.StdChannel):

    def __init__(self, connection, **kwargs):
        self.connection = connection
        self._consumers = set()
        self._cycle = None
        self._tag_to_queue = {}
        self._active_queues = []
        ...
```
Here _consumers is the set of associated consumer tags, _active_queues is the list of associated Queues, and _tag_to_queue maps consumer tags to Queues:

```python
self._tag_to_queue[consumer_tag] = queue
self._consumers.add(consumer_tag)
self._active_queues.append(queue)
```
Channel, too, has a different implementation per underlying message queue. For Redis:

```python
class Channel(virtual.Channel):
    """Redis Channel."""
```

which inherits from virtual.Channel.
The message-assembly function, prepare_message:
```python
def prepare_message(self, body, priority=None, content_type=None,
                    content_encoding=None, headers=None, properties=None):
    """Prepare message data."""
    properties = properties or {}
    properties.setdefault('delivery_info', {})
    properties.setdefault('priority', priority or self.default_priority)
    return {'body': body,
            'content-encoding': content_encoding,
            'content-type': content_type,
            'headers': headers or {},
            'properties': properties or {}}
```
Essentially it wraps the body together with the various message attributes.
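For the virtual transports the prepared message is just a dict, roughly like the following (an illustrative sketch for the publish call from the example; exact fields and values depend on the serializer and transport defaults):

```python
{'body': '{"name": "/tmp/lolcat1.avi", "size": 1301013}',
 'content-encoding': 'utf-8',
 'content-type': 'application/json',
 'headers': {},
 'properties': {'delivery_info': {'exchange': 'media', 'routing_key': 'video'},
                'priority': 0}}
```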
The message-sending method basic_publish boils down to calling _put:
```python
def _put(self, queue, message, **kwargs):
    """Deliver message."""
    pri = self._get_message_priority(message, reverse=False)
    with self.conn_or_acquire() as client:
        client.lpush(self._q_for_pri(queue, pri), dumps(message))
```
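So after a publish, the serialized message simply sits in a Redis list keyed by the queue name (for the default priority, _q_for_pri returns the plain queue name). A sketch of peeking at it with redis-py, assuming a local Redis and the 'video' queue from the example:

```python
import redis

r = redis.StrictRedis()
for raw in r.lrange('video', 0, -1):
    print(raw)   # one JSON-serialized message per list element
```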
client here is a redis.StrictRedis connection:
```python
def _create_client(self, asynchronous=False):
    if asynchronous:
        return self.Client(connection_pool=self.async_pool)
    return self.Client(connection_pool=self.pool)

self.Client = self._get_client()

def _get_client(self):
    if redis.VERSION < (3, 2, 0):
        raise VersionMismatch(
            'Redis transport requires redis-py versions 3.2.0 or later. '
            'You have {0.__version__}'.format(redis))
    return redis.StrictRedis
```
So the Redis transport delivers a message by lpush-ing it onto a list, and it chooses a different connection_pool depending on whether the client is asynchronous.
Consumer
The message is now sitting in the queue; how does it get consumed? Initializing a Consumer takes a Channel (or a Connection), the list of queues to consume from, and the list of callbacks for processing messages:
```python
with Consumer(connection, queues, callbacks=[process_media], accept=['json']):
    connection.drain_events(timeout=1)
```
When a Consumer instance is used as a context manager, its consume method is called:
```python
def __enter__(self):
    self.consume()
    return self
```
The consume method:
```python
def consume(self, no_ack=None):
    """Start consuming messages.

    Can be called multiple times, but note that while it
    will consume from new queues added since the last call,
    it will not cancel consuming from removed queues (
    use :meth:`cancel_by_queue`).

    Arguments:
        no_ack (bool): See :attr:`no_ack`.
    """
    queues = list(values(self._queues))
    if queues:
        no_ack = self.no_ack if no_ack is None else no_ack

        H, T = queues[:-1], queues[-1]
        for queue in H:
            self._basic_consume(queue, no_ack=no_ack, nowait=True)
        self._basic_consume(T, no_ack=no_ack, nowait=False)
```
It runs _basic_consume for each queue in the list, setting nowait=False only for the last Queue so that the final call waits for the broker's reply.
The _basic_consume method:
```python
def _basic_consume(self, queue, consumer_tag=None,
                   no_ack=no_ack, nowait=True):
    tag = self._active_tags.get(queue.name)
    if tag is None:
        tag = self._add_tag(queue, consumer_tag)
        queue.consume(tag, self._receive_callback,
                      no_ack=no_ack, nowait=nowait)
    return tag
```
It hands the consumer tag and the receive callback to the Queue's consume method.
Queue.consume:
```python
def consume(self, consumer_tag='', callback=None,
            no_ack=None, nowait=False):
    """Start a queue consumer.

    Consumers last as long as the channel they were created on, or
    until the client cancels them.

    Arguments:
        consumer_tag (str): Unique identifier for the consumer.
            The consumer tag is local to a connection, so two clients
            can use the same consumer tags. If this field is empty
            the server will generate a unique tag.

        no_ack (bool): If enabled the broker will automatically
            ack messages.

        nowait (bool): Do not wait for a reply.

        callback (Callable): callback called for each delivered message.
    """
    if no_ack is None:
        no_ack = self.no_ack
    return self.channel.basic_consume(
        queue=self.name,
        no_ack=no_ack,
        consumer_tag=consumer_tag or '',
        callback=callback,
        nowait=nowait,
        arguments=self.consumer_arguments)
```
which brings us back to the Channel. The Channel's basic_consume:
```python
def basic_consume(self, queue, no_ack, callback, consumer_tag, **kwargs):
    """Consume from `queue`."""
    self._tag_to_queue[consumer_tag] = queue
    self._active_queues.append(queue)

    def _callback(raw_message):
        message = self.Message(raw_message, channel=self)
        if not no_ack:
            self.qos.append(message, message.delivery_tag)
        return callback(message)

    self.connection._callbacks[queue] = _callback
    self._consumers.add(consumer_tag)

    self._reset_cycle()
```
The Channel records the Consumer's tag, the queues the Consumer wants to consume from, and the tag-to-queue mapping, ready for the polling loop. It also records, via the Transport, the queue-to-callback mapping, so that once a message is pulled off a queue the matching callback can be executed.
The actual delivery is driven by this single line:

```python
connection.drain_events(timeout=1)
```
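drain_events raises socket.timeout when nothing arrives within the timeout, so the usual consuming loop looks like this (a sketch of the common idiom):

```python
import socket

try:
    while True:
        connection.drain_events(timeout=1)
except socket.timeout:
    pass  # the queues stayed empty for a full second; stop draining
```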
which leads to the Transport's drain_events method:
```python
def drain_events(self, connection, timeout=None):
    time_start = monotonic()
    get = self.cycle.get
    polling_interval = self.polling_interval
    if timeout and polling_interval and polling_interval > timeout:
        polling_interval = timeout
    while 1:
        try:
            get(self._deliver, timeout=timeout)
        except Empty:
            if timeout is not None and monotonic() - time_start >= timeout:
                raise socket.timeout()
            if polling_interval is not None:
                sleep(polling_interval)
        else:
            break
```
This looks like an endless loop around get(self._deliver, timeout=timeout). get is a method of self.cycle, which is a FairCycle instance:
```python
self.cycle = self.Cycle(self._drain_channel, self.channels, Empty)
```

```python
@python_2_unicode_compatible
class FairCycle(object):
    """Cycle between resources.

    Consume from a set of resources, where each resource gets
    an equal chance to be consumed from.

    Arguments:
        fun (Callable): Callback to call.
        resources (Sequence[Any]): List of resources.
        predicate (type): Exception predicate.
    """

    def __init__(self, fun, resources, predicate=Exception):
        self.fun = fun
        self.resources = resources
        self.predicate = predicate
        self.pos = 0

    def _next(self):
        while 1:
            try:
                resource = self.resources[self.pos]
                self.pos += 1
                return resource
            except IndexError:
                self.pos = 0
                if not self.resources:
                    raise self.predicate()

    def get(self, callback, **kwargs):
        """Get from next resource."""
        for tried in count(0):  # for infinity
            resource = self._next()
            try:
                return self.fun(resource, callback, **kwargs)
            except self.predicate:
                # reraise when retries exhausted.
                if tried >= len(self.resources) - 1:
                    raise
```
FairCycle takes a callable fun and a sequence of resources; every call to get advances to the next resource and feeds it to fun, so each resource gets an equal chance to be consumed from.
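A tiny self-contained sketch of that fairness, assuming FairCycle can be imported from kombu.utils.scheduling (its home in kombu 4.x):

```python
from kombu.utils.scheduling import FairCycle

resources = [['a1', 'a2'], ['b1', 'b2']]
consumed = []

def fetch(resource, callback):
    # "Drain" one item from this resource, or signal that it is empty.
    if not resource:
        raise IndexError()
    callback(resource.pop(0))

cycle = FairCycle(fetch, resources, predicate=IndexError)
for _ in range(4):
    cycle.get(consumed.append)

print(consumed)   # ['a1', 'b1', 'a2', 'b2'], the resources alternate fairly
```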
Here, fun is _drain_channel and resources is channels:

```python
def _drain_channel(self, channel, callback, timeout=None):
    return channel.drain_events(callback=callback, timeout=timeout)
```
Every channel associated with the Transport has its drain_events executed.
The Channel's drain_events:
```python
def drain_events(self, timeout=None, callback=None):
    callback = callback or self.connection._deliver
    if self._consumers and self.qos.can_consume():
        if hasattr(self, '_get_many'):
            return self._get_many(self._active_queues, timeout=timeout)
        return self._poll(self.cycle, callback, timeout=timeout)
    raise Empty()
```
The _poll code:
```python
def _poll(self, cycle, callback, timeout=None):
    """Poll a list of queues for available messages."""
    return cycle.get(callback)
```
Back to FairCycle again, this time the Channel's own FairCycle instance:
```python
def _reset_cycle(self):
    self._cycle = FairCycle(
        self._get_and_deliver, self._active_queues, Empty)
```
_get_and_deliver takes a message off the queue and then calls the _deliver method passed down from the Transport:
```python
def _get_and_deliver(self, queue, callback):
    message = self._get(queue)
    callback(message, queue)
```
The _deliver code:
```python
def _deliver(self, message, queue):
    if not queue:
        raise KeyError(
            'Received message without destination queue: {0}'.format(
                message))
    try:
        callback = self._callbacks[queue]
    except KeyError:
        logger.warning(W_NO_CONSUMERS, queue)
        self._reject_inbound_message(message)
    else:
        callback(message)
```
It looks up the callback registered for the queue and runs it on the message; if no consumer is registered for the queue, it logs a warning and rejects the message.
Recap
As you can see, Channel and Transport are central to Kombu. The Channel records the queue list, the consumer list, and the mapping between the two, while the Transport records the mapping between queues and callbacks. Kombu polls every queue it needs to watch (_active_queues) in turn until it has tried them all or found a usable Queue, then fetches a message and invokes the callback registered for that queue.
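To tie the whole path together, here is a minimal end-to-end round trip, sketched over the in-memory transport so it runs without a broker:

```python
from kombu import Connection, Exchange, Queue

received = []

def on_message(body, message):
    received.append(body)
    message.ack()

exchange = Exchange('demo', 'direct')
queue = Queue('demo', exchange=exchange, routing_key='demo')

with Connection('memory://') as conn:
    producer = conn.Producer(serializer='json')
    producer.publish({'hello': 'world'},
                     exchange=exchange, routing_key='demo',
                     declare=[queue])
    with conn.Consumer(queue, callbacks=[on_message], accept=['json']):
        conn.drain_events(timeout=1)

print(received)   # [{'hello': 'world'}]
```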