1 vote

I have a Celery 4.1 worker configured to process tasks from a queue called "longjobs", using RabbitMQ as my messaging backend.

My Celery configuration and workers are managed through a Django 1.11 project.

Nothing throws any errors, but tasks launched from my Django application are never picked up by my worker.

My celery.py file looks like:

from __future__ import absolute_import
import os
import sys

from celery import Celery
from celery._state import _set_current_app
import django

app = Celery('myproject')
app.config_from_object('django.conf:settings', namespace='CELERY')
_set_current_app(app)

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myproject.settings.settings')
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '../myproject')))
django.setup()
from django.conf import settings
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)

My Django Celery settings are:

CELERY_IGNORE_RESULT = False
CELERY_TRACK_STARTED = True
CELERY_IMPORTS = (
    'myproject.myapp.tasks',
)
CELERY_RESULT_BACKEND = 'amqp'
CELERY_ACCEPT_CONTENT = ['pickle', 'json', 'msgpack', 'yaml']
CELERY_TASK_SERIALIZER = 'pickle'
CELERY_RESULT_SERIALIZER = 'pickle'
CELERY_RESULT_PERSISTENT = True
CELERY_ALWAYS_EAGER = False
CELERY_ROUTES = {
    'mytask': {'queue': 'longjobs'},
}
CELERY_WORKER_PREFETCH_MULTIPLIER = CELERYD_PREFETCH_MULTIPLIER = 1
CELERY_SEND_TASK_ERROR_EMAILS = True
CELERY_ACKS_LATE = True
CELERY_TASK_RESULT_EXPIRES = 360000

And I launch my worker with:

celery worker -A myproject -l info -n longjobs@%h -Q longjobs

and in its log file, I see:

[2017-11-09 16:51:03,218: INFO/MainProcess] Connected to amqp://guest:**@127.0.0.1:5672/myproject
[2017-11-09 16:51:03,655: INFO/MainProcess] mingle: searching for neighbors
[2017-11-09 16:51:05,441: INFO/MainProcess] mingle: all alone
[2017-11-09 16:51:06,162: INFO/MainProcess] longjobs@localhost ready.

indicating that the worker is successfully connecting to RabbitMQ with the correct virtual host and queue name.

I'm using Flower and the RabbitMQ admin interface to debug. Flower confirms that my worker is running, but says it never receives any tasks.

The RabbitMQ admin is a little stranger. It shows that the "longjobs" queue exists on the "myproject" virtual host, and it too has never received any tasks, but there are a ton of queues with UUIDs for names that have varying numbers of "ready" messages pending. One of these has 200+ messages.

Why isn't my Celery worker pulling tasks from RabbitMQ correctly? I'm not seeing any errors in any log files. How do I diagnose this?

Where are your tasks? How do you call one? – Nour Wolf
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myproject.settings') – Ykh

2 Answers

0 votes

Sorry, I can't comment yet, but where is your beat process? You can run the worker with the --beat option (not recommended for production), or you can run the beat process separately:

celery beat -A myproject -l info [--detach]
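
Beat is only involved if these are scheduled (periodic) tasks; tasks called directly from your Django code do not go through it. If they are scheduled, beat also needs a schedule to have anything to dispatch; a minimal sketch for the Django settings, assuming a task registered under the name 'mytask':

CELERY_BEAT_SCHEDULE = {
    'run-mytask-hourly': {
        'task': 'mytask',
        'schedule': 3600.0,  # interval in seconds
        'options': {'queue': 'longjobs'},  # route the scheduled runs to your queue
    },
}
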
0 votes

Try changing CELERY_ROUTES to CELERY_TASK_ROUTES (the setting was renamed in version 4.x).
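
With the CELERY_ namespace you configured in celery.py, the renamed setting would look like:

CELERY_TASK_ROUTES = {
    'mytask': {'queue': 'longjobs'},
}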

Or rather, I would change your route definition from:

CELERY_ROUTES = {
    'mytask': {'queue': 'longjobs'},
}

to:

CELERY_ROUTES = {
    'mytask': {
        'exchange': 'longjobs',
        'exchange_type': 'direct',  # an AMQP exchange type, not the exchange name
        'routing_key': 'longjobs'
    }
}
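
If you route through a dedicated exchange like this, it can also help to declare the queue and its binding explicitly, so the worker and your Django producers agree on them. A minimal sketch with kombu (the names simply mirror your settings; adjust as needed):

from kombu import Exchange, Queue

# Bind the 'longjobs' queue to a direct exchange of the same name.
CELERY_TASK_QUEUES = (
    Queue('longjobs', Exchange('longjobs', type='direct'), routing_key='longjobs'),
)

As a quick sanity check, you can also bypass routing when enqueueing, e.g. mytask.apply_async(queue='longjobs'), to confirm that the worker consumes from the queue at all.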