21
votes

I am using Celery with RabbitMQ. Lately, I have noticed that a large number of temporary queues are getting made.

So, I experimented and found that when a task fails (that is, a task raises an Exception), a temporary queue with a random name (like c76861943b0a4f3aaa6a99a6db06952c) is created, and the queue remains afterwards.

Some properties of the temporary queue, as found in rabbitmqadmin, are as follows:

auto_delete: True
consumers: 0
durable: False
messages: 1
messages_ready: 1

And one such temporary queue is created every time a task fails (that is, raises an Exception). How can I avoid this situation? In my production environment a large number of such queues get formed.

5
That is an interesting observation! I, too, would like to know. – Elver Loho
Hi Elver. I was able to solve the problem. Please have a look at the answers (one of them by me as well). Hope it helps. – Siddharth

5 Answers

17
votes

It sounds like you're using amqp as the results backend. From the docs, here are the pitfalls of that particular setup:

  • Every new task creates a new queue on the server. With thousands of tasks the broker may be overloaded with queues, and this will affect performance in negative ways. If you're using RabbitMQ, then each queue will be a separate Erlang process, so if you're planning to keep many results simultaneously you may have to increase the Erlang process limit and the maximum number of file descriptors your OS allows.
  • Old results will not be cleaned up automatically, so you must make sure to consume the results or else the number of queues will eventually go out of control. If you're running RabbitMQ 2.1.1 or higher you can take advantage of the x-expires argument to queues, which will expire queues after a certain time limit after they are unused. The queue expiry can be set (in seconds) by the CELERY_AMQP_TASK_RESULT_EXPIRES setting (not enabled by default); see the sketch after this list.
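
If you do stay on the amqp backend, a minimal celeryconfig.py sketch of that expiry setting might look like the following (setting names follow the Celery 2.x docs quoted above; the one-hour value is an assumption for illustration):

# celeryconfig.py
CELERY_RESULT_BACKEND = "amqp"
# Expire unused result queues after one hour (requires RabbitMQ >= 2.1.1)
CELERY_AMQP_TASK_RESULT_EXPIRES = 3600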

From what I've read in the changelog, this is no longer the default backend in versions >=2.3.0 because users were getting bit in the rear end by this behavior. I'd suggest changing the results backend if this is not the functionality you need.
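
If you do want to switch, here is a sketch of pointing the results at a different backend, assuming a Celery version that accepts URL-style backend settings and that a Redis instance is available (the URL is purely illustrative):

# celeryconfig.py
# Store results in Redis instead of one AMQP queue per task
CELERY_RESULT_BACKEND = "redis://localhost:6379/0"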

11
votes

Well, Philip is right there. The following is a description of how I solved it, via configuration in celeryconfig.py.

I am still using CELERY_BACKEND = "amqp", as Philip said. But in addition to that, I am now using CELERY_IGNORE_RESULT = True. This setting ensures that the extra queues are not formed for every task.

I was already using this configuration, but the extra queue was still formed when a task failed. Then I noticed another configuration that needed to be removed: CELERY_STORE_ERRORS_EVEN_IF_IGNORED = True. What this setting did was store results only for errors (tasks which failed), even though results were otherwise ignored, and hence one extra queue was formed for each task that failed.
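
Putting it together, the working combination in celeryconfig.py looks like this (a sketch reconstructed from the description above):

# celeryconfig.py
CELERY_BACKEND = "amqp"
CELERY_IGNORE_RESULT = True
# CELERY_STORE_ERRORS_EVEN_IF_IGNORED = True  # removed: it created one result queue per failed task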

3
votes

The CELERY_TASK_RESULT_EXPIRES setting dictates the time-to-live of the temporary queues. The default is 1 day. You can modify this value.
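
For example, in celeryconfig.py (the one-hour value is just an illustration):

# Expire temporary result queues after one hour instead of the 1-day default
CELERY_TASK_RESULT_EXPIRES = 3600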

0
votes

The reason this is happening is that the Celery workers' remote control is enabled (it is enabled by default).

You can disable it by setting CELERY_ENABLE_REMOTE_CONTROL to False. However, note that you will lose the ability to do things like add_consumer, cancel_consumer, etc. using the celery command.
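
A sketch in celeryconfig.py:

# Disable the remote control command queues; `celery control`-style
# commands (add_consumer, cancel_consumer, ...) will stop working
CELERY_ENABLE_REMOTE_CONTROL = False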

0
votes

The amqp backend creates a new queue for each task. If you want to avoid that, you can use the rpc backend, which keeps results in a single queue.

In your config, set

CELERY_RESULT_BACKEND = 'rpc'
CELERY_RESULT_PERSISTENT = True
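
With the rpc backend, results are sent back as messages to a single reply queue per client instead of one queue per task; CELERY_RESULT_PERSISTENT = True makes those result messages survive a broker restart.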

You can read more about this in the Celery docs.