If you've worked with Django, at some point you've probably needed to do some background processing of long-running tasks. Chances are you've used some sort of task queue, and Celery is currently the most popular project for this sort of thing in the Python (and Django) world (but there are others).

While working on some projects that used Celery for a task queue I've gathered a number of best practices and decided to document them. Nevertheless, this is more of a rant about what I think should be the proper way to do things, and about some underused features that the Celery ecosystem offers.

No.1: Don't use the database as your AMQP Broker

Let me explain why I think this is wrong (aside from the limitations pointed out in the celery docs).

A database is not built for doing the things a proper AMQP broker like RabbitMQ is designed for. It will break down at some point, probably in production with not all that much traffic or user base.

I guess the most popular reason people decide to use a database is because, well, they already have one for their web app, so why not re-use it? Setting it up is a breeze and you don't need to worry about another component (like RabbitMQ).

Not so hypothetical scenario: let's say you have 4 background workers processing the tasks you've put in the database. This means that you get 4 processes polling the database for new tasks fairly often, not to mention that each of those 4 workers can have multiple concurrent threads of its own. At some point you notice that you are falling behind on your task processing and more tasks are coming in than are being completed, so naturally you increase the number of workers doing the task processing. Suddenly your database starts falling apart under the huge number of workers polling it for new tasks, your disk IO goes through the roof and your webapp starts being affected by the slowdown because the workers are basically DDoS-ing the database.

This does not happen when you have a proper AMQP broker like RabbitMQ because, for one thing, the queue resides in memory so you don't hammer your disk. The consumers (the workers) do not need to resort to polling as the queue has a way of pushing new tasks to the consumers, and if the broker does get overwhelmed for some other reason, at least it will not bring down the user-facing web app with it.

I would go as far as to say that you shouldn't use a database for a broker even in development, what with things like Docker and a ton of pre-built images that already give you RabbitMQ out of the box.
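For reference, pointing Celery at RabbitMQ is a one-line config change (a minimal sketch; adjust the credentials, host and vhost to your setup):

# celeryconfig.py -- RabbitMQ's default local setup
BROKER_URL = 'amqp://guest:guest@localhost:5672//'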

No.2: Use more Queues (i.e. not just the default one)

Celery is fairly simple to set up, and it comes with a default queue in which it puts all the tasks unless you tell it otherwise. The most common thing you'll see is something like this:

@app.task()
def my_taskA(a, b, c):
    print("doing something here...")

@app.task()
def my_taskB(x, y):
    print("doing something here...")

What happens here is that both tasks will end up in the same queue (if not specified otherwise in the celeryconfig.py file). I can definitely see the appeal of doing something like this, because with just one decorator you've got yourself some sweet background tasks. My concern here is that taskA and taskB might be doing totally different things, and perhaps one of them might even be much more important than the other, so why throw them both in the same basket? Even if you've got just one worker processing both tasks, suppose that at some point the unimportant taskB grows so massive in numbers that the more important taskA just can't get enough attention from the worker. At this point increasing the number of workers will probably not solve your problem, as all workers still need to process both tasks, and with taskB so great in numbers taskA still can't get the attention it deserves. Which brings us to the next point.

No.3: Use priority workers

The way to solve the issue above is to put taskA in one queue and taskB in another, and then assign x workers to process Q1 and all the other workers to process the more intensive Q2, since it has more tasks coming in. This way you can still make sure that taskB gets enough workers, all the while maintaining a few priority workers that only need to process taskA when one comes in, without making it wait too long on processing.

So, define your queues manually:

from kombu import Exchange, Queue

CELERY_QUEUES = (
    Queue('default', Exchange('default'), routing_key='default'),
    Queue('for_task_A', Exchange('for_task_A'), routing_key='for_task_A'),
    Queue('for_task_B', Exchange('for_task_B'), routing_key='for_task_B'),
)

And your routes that will decide which task goes where:

CELERY_ROUTES = {
    'my_taskA': {'queue': 'for_task_A', 'routing_key': 'for_task_A'},
    'my_taskB': {'queue': 'for_task_B', 'routing_key': 'for_task_B'},
}

Which will allow you to run workers for each task:

celery worker -E -l INFO -n workerA -Q for_task_A
celery worker -E -l INFO -n workerB -Q for_task_B
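If you ever need to route a one-off call somewhere other than what CELERY_ROUTES says, apply_async() also takes the queue explicitly at call time:

# overrides CELERY_ROUTES for this one invocation
my_taskA.apply_async(args=(1, 2, 3), queue='for_task_A', routing_key='for_task_A')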

No.4: Use Celery's error handling mechanisms

Most tasks I've seen in the wild don't have a notion of error handling at all. If a task fails, that's it, it failed. This might be fine for some use cases, however, most tasks I've seen talk to some kind of 3rd party API and fail because of a network error or some other kind of "resource availability" error. The simplest way to handle these kinds of errors is to just retry the task, because maybe the 3rd party API just had some server/network issues and will be back up shortly, so why not give it a go?

@app.task(bind=True, default_retry_delay=300, max_retries=5)
def my_task_A(self):
    try:
        print("doing stuff here...")
    except SomeNetworkException as e:
        print("maybe do some cleanup here....")
        self.retry(exc=e)

What I like to do is define per-task defaults for how long a task should wait before being retried and how many retries are enough before finally giving up (the default_retry_delay and max_retries parameters, respectively). This is the most basic form of error handling I can think of, and yet I almost never see it used. Of course, Celery offers more in terms of error handling, but I'll leave you with the Celery docs for that.
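If a fixed delay feels too rigid, retry() also accepts a per-call countdown, which makes a simple exponential backoff easy. A sketch along those lines; my_flaky_task and SomeNetworkException are illustrative names, not real APIs:

@app.task(bind=True, max_retries=5)
def my_flaky_task(self):
    try:
        print("talking to the flaky 3rd party API...")
    except SomeNetworkException as e:
        # wait 1, 2, 4, 8, 16 minutes between successive attempts
        self.retry(exc=e, countdown=60 * 2 ** self.request.retries)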

No.5: Use Flower

The Flower project is a wonderful tool for monitoring your celery tasks and workers. It's web based and lets you do things like see task progress and details, check worker status, bring up new workers, and so forth. Check out the full list of features in the provided link.
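Getting it running is a pip install and a single command (a sketch, assuming your Celery app lives in a module called proj):

pip install flower
celery -A proj flower --port=5555

Then point your browser at http://localhost:5555.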

No.6: Keep track of results only if you really need them

A task's status is the information about whether the task exited with success or failure. It can be useful for some kind of statistics later on. The big thing to note here is that the exit status is not the result of the job that the task was performing; that information is most likely some sort of side effect that gets written to the database (e.g. updating a user's friend list).

Most projects I've seen don't really care about keeping persistent track of a task's status after it has exited, yet most of them use either the default sqlite database to save this information, or, even better, they've taken the time to hook up their regular database (postgres or otherwise).

Why hammer your webapp's database for no reason? Use CELERY_IGNORE_RESULT = True in your celeryconfig.py and discard the results.
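That is, in celeryconfig.py:

# don't persist task exit status anywhere
CELERY_IGNORE_RESULT = True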

No.7: Don't pass Database/ORM objects to tasks

After giving this talk at a local Python meetup, a few people suggested I add this to the list. What's it all about? You shouldn't pass database objects (for instance your User model) to a background task, because the serialized object might contain stale data. What you want to do instead is feed the task the user's id and have the task ask the database for a fresh User object.
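Here's what that looks like in practice, as a minimal sketch assuming a Django User model; the refresh_friend_list() call is a hypothetical side effect, not a real API:

from django.contrib.auth.models import User

# Bad: the User serialized into the message can be stale
# by the time a worker picks the task up.
@app.task()
def update_friends(user):
    user.profile.refresh_friend_list()  # hypothetical side effect

# Good: pass the id and fetch a fresh row inside the task.
@app.task()
def update_friends_by_id(user_id):
    user = User.objects.get(pk=user_id)
    user.profile.refresh_friend_list()  # hypothetical side effect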
