1 vote

I have a Django application and celery worker running with Python 2.7, Django 1.11 and Celery 4.3 (prefork) and RabbitMQ broker.

Occasionally, the application or a Celery worker appears to "forget" that it has been configured to put tasks on a specific queue via CELERY_TASK_DEFAULT_QUEUE, and puts tasks on the "celery" queue instead of the configured one.
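For reference, the setup described above amounts to something like the following in the Django settings (values are illustrative; with the CELERY_ namespace, CELERY_TASK_DEFAULT_QUEUE maps to Celery's task_default_queue setting):

```python
# settings.py -- illustrative sketch, assuming the app is configured with
# app.config_from_object('django.conf:settings', namespace='CELERY')
CELERY_TASK_DEFAULT_QUEUE = 'source'  # all tasks should land on "source"
CELERY_BROKER_URL = 'amqp://guest:guest@localhost:5672//'  # RabbitMQ broker
```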

Most instances of this happen when a Celery task puts another task on the queue via .delay().

I've not been able to reproduce this in a development environment, so I'm wondering what to try next to work out how tasks issued from other tasks end up on the celery queue rather than the configured task_default_queue.


Update:

Following the suggestion from @DejanLekic, I set up a handler for Celery's before_task_publish signal to report any task whose routing_key does not match the expected one.

import logging

from celery.signals import before_task_publish

logger = logging.getLogger(__name__)

@before_task_publish.connect
def task_sending_handler(sender=None, headers=None, body=None, exchange=None, routing_key=None, **kwargs):
    """Report tasks not sent to the source queue. Used to debug tasks not getting put on the source queue."""
    if routing_key != 'source':
        info = headers if 'task' in headers else body
        logger.info(
            'Sending task {info[id]} to exchange {} with routing key {} from sender {} that is not source'.format(
                exchange, routing_key, sender, info=info))

Now I have a confirmed case of a task submitted from another task ending up on the default celery queue instead of the supposedly configured source queue.

However, this is only one occurrence in about 5 hours of operation, during which that task was invoked many times. I may need to log more detail about the Celery app's state at publish time to see exactly how it is configured.

Could it be that the "internal" Celery tasks (like starmap, etc.) are running in the "celery" queue? You could in theory set up monitoring (look for Celery events), find out which task was sent to the "celery" queue, and then try to figure out why that happened - was it a Celery bug, or something else? - DejanLekic
Ty, I'll give that a go. I'm also thinking about setting up a task router for the Celery app, even though all the tasks should be going to the same queue. Usage of a router may hide the original problem, but as long as things are working, that's fine by me. I'll try the monitoring first, though. - Reuben

1 Answer

0 votes

It turns out the application also submits Celery tasks to other applications' queues.

By default, when a Celery instance is created, it will also become the current app.

If you create Celery instances for requests that will be handled by other applications, pass set_as_current=False when constructing them, so they don't interfere with task submission in your own application.