I have a Django application and a Celery worker running with Python 2.7, Django 1.11 and Celery 4.3 (prefork), with a RabbitMQ broker.
Occasionally the application or a Celery worker appears to "forget" that it has been configured to put tasks on a specific queue via CELERY_TASK_DEFAULT_QUEUE, and puts tasks on the "celery" queue instead of the configured queue.
Most instances of this happen when a Celery task queues another task via .delay().
I haven't been able to reproduce this in a development environment, so I'm wondering what to try next to work out how tasks issued from other tasks end up on the celery queue rather than the configured task_default_queue.
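For reference, the setup looks roughly like the sketch below; the project name, module paths and task names are placeholders, and the queue name 'source' is inferred from the routing-key check further down.

# settings.py -- the queue every task is supposed to land on
CELERY_TASK_DEFAULT_QUEUE = 'source'

# celery.py -- typical Django/Celery 4.x wiring (assumed)
from celery import Celery

app = Celery('myproject')
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()

# tasks.py -- the pattern that misbehaves: a task queuing another task
from celery import shared_task

@shared_task
def parent_task():
    child_task.delay()  # occasionally this lands on the "celery" queue

@shared_task
def child_task():
    pass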
Update:
Following the suggestion from @DejanLekic, I set up a handler for the before_task_publish Celery signal to report any task not matching the expected routing_key:
import logging

from celery.signals import before_task_publish

logger = logging.getLogger(__name__)

@before_task_publish.connect
def task_sending_handler(sender=None, headers=None, body=None, exchange=None, routing_key=None, **kwargs):
    """Reports on tasks not sent to the source queue. Used to debug tasks not getting put on the source queue."""
    if routing_key != 'source':
        # Task metadata is in the headers under task protocol 2, and in the body under protocol 1.
        info = headers if 'task' in headers else body
        logger.info(
            'Sending task {info[id]} to exchange {} with routing key {} from sender {} that is not source'.format(
                exchange, routing_key, sender, info=info))
Now I have a confirmed case of a task submitted from another task going off the rails onto the default celery queue instead of the supposedly configured source queue.
However, this is only one occurrence after about 5 hours of operation, during which that task would have been invoked many times. I think I need to log more about the publishing Celery app itself, to see exactly what it believes it is configured as at the moment the task is sent.
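One idea, sketched below under the assumption that celery.current_app at publish time points at whichever app instance actually sent the task, is a second handler that dumps that app's own view of its routing settings whenever a task is routed off the expected queue:

import logging

from celery import current_app
from celery.signals import before_task_publish

logger = logging.getLogger(__name__)

@before_task_publish.connect
def log_publisher_config(sender=None, routing_key=None, **kwargs):
    """Log what the publishing app thinks its routing settings are when a task goes off the expected queue."""
    if routing_key != 'source':
        conf = current_app.conf
        logger.info(
            'Publisher %r (main=%r): task_default_queue=%r, task_default_exchange=%r, task_routes=%r',
            current_app, current_app.main, conf.task_default_queue,
            conf.task_default_exchange, conf.task_routes)

If a misrouted publish logs a different app instance or a different task_default_queue than expected, that would suggest an unconfigured (default) app is being picked up by current_app in that process.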