22
votes

My celery tasks stop getting executed after a while: my rabbitmq server goes down and I then need to restart it manually. The last time this happened (15-16 hours back), I fixed it manually as follows, and it started working again.

I reinstalled rabbitmq and then it started working again:

sudo apt-get --purge remove rabbitmq-server

sudo apt-get install rabbitmq-server

Now it is again showing:

Celery - errno 111 connection refused

Following is my config.

BROKER_URL = 'amqp://'
CELERY_RESULT_BACKEND = 'amqp://'

CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'
CELERY_ACCEPT_CONTENT=['json']
CELERY_TIMEZONE = 'Europe/Oslo'
CELERY_ENABLE_UTC = True

CELERY_CREATE_MISSING_QUEUES = True

Please let me know where I am going wrong.

How should I rectify it?
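To confirm it is the broker and not Celery, I can check connectivity directly. A minimal sketch (not part of my setup, assuming RabbitMQ on localhost with the default port and guest account; kombu ships with Celery):

from kombu import Connection

# Adjust the URL to match BROKER_URL; errno 111 here means nothing is
# listening on the AMQP port, i.e. rabbitmq-server itself is down.
try:
    with Connection('amqp://guest:guest@localhost:5672//') as conn:
        conn.ensure_connection(max_retries=3)
        print('broker is reachable')
except Exception as exc:
    print('broker unreachable: %r' % exc)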

Part 2

Also, I have multiple queues. I can run the workers from within the project directory, but when daemonizing, the workers don't pick up tasks and I still need to start them manually. How can I daemonize them?

Here is my celeryd config.

# Names of nodes to start (four nodes here)
CELERYD_NODES="w1 w2 w3 w4"


CELERY_BIN="/usr/local/bin/celery"

# Where to chdir at start.
CELERYD_CHDIR="/var/www/fractal/parser-quicklook/"

# Python interpreter from environment, if using virtualenv
#ENV_PYTHON="/somewhere/.virtualenvs/MyProject/bin/python"

# How to call "manage.py celeryd_multi"
#CELERYD_MULTI="/usr/local/bin/celeryd-multi"

# How to call "manage.py celeryctl"
#CELERYCTL="/usr/local/bin/celeryctl"

#CELERYBEAT="/usr/local/bin/celerybeat"

# Extra arguments to celeryd
CELERYD_OPTS="--time-limit=300 --concurrency=8  -Q BBC,BGR,FASTCOMPANY,Firstpost,Guardian,IBNLIVE,LIVEMINT,Mashable,NDTV,Pandodaily,Reuters,TNW,TheHindu,ZEENEWS "

# Name of the celery config module, don't change this.
CELERY_CONFIG_MODULE="celeryconfig"

# %n will be replaced with the nodename.
CELERYD_LOG_FILE="/var/log/celery/%n.log"
CELERYD_PID_FILE="/var/run/celery/%n.pid"

# Workers should run as an unprivileged user.
#CELERYD_USER="nobody"
#CELERYD_GROUP="nobody"

# Set any other env vars here too!
PROJET_ENV="PRODUCTION"

# If enabled, pid and log directories will be created if missing,
# and owned by the configured user/group.
CELERY_CREATE_DIRS=1

The celeryconfig is already provided in Part 1.

Here is my proj directory structure.

project
|-- main.py
|-- project
|   |-- celeryconfig.py
|   |-- __init__.py
|-- tasks.py

How can I daemonize with the queues? I have provided the queues in CELERYD_OPTS as well.

Is there a way to dynamically daemonize the number of queues in Celery? For example, we have CELERY_CREATE_MISSING_QUEUES = True for creating missing queues; is there something similar for daemonizing the celery queues?
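One thing I am considering (a sketch only, based on the per-node option syntax of celery multi, not something in my config above): pin queues to individual nodes in CELERYD_OPTS with -Q:nodename, so each daemonized worker only consumes its own queues:

# Hypothetical split of the queues across the nodes w1..w4
CELERYD_OPTS="--time-limit=300 --concurrency=8 \
    -Q:w1 BBC,BGR,FASTCOMPANY,Firstpost \
    -Q:w2 Guardian,IBNLIVE,LIVEMINT,Mashable \
    -Q:w3 NDTV,Pandodaily,Reuters \
    -Q:w4 TNW,TheHindu,ZEENEWS"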

2
Did you update the rabbitmq server database after you brought it up again? You need to add the user, add the vhost and set permissions before you can connect Celery with the same user: sudo rabbitmqctl add_user USERNAME PASSWORD; sudo rabbitmqctl add_vhost VHOST_NAME; sudo rabbitmqctl set_permissions -p VHOST_NAME USERNAME ".*" ".*" ".*" - cmidi
Also, what does your celery app's BROKER_URL configuration setting look like? - cmidi
I'm not able to create a user. Whenever I try to create one, it throws an error saying Error: unable to connect to node 'rabbit@li732-193': nodedown. sudo service rabbitmq-server status shows the same error. - PythonEnthusiast
I restarted the celery server, created the user and added the permissions, then restarted the rabbitmq-server. After doing all this, I checked the celery status and it still shows the same error: connection refused. - PythonEnthusiast
I was finally able to fix it with sudo apt-get --purge remove rabbitmq-server followed by sudo apt-get install rabbitmq-server. - PythonEnthusiast

2 Answers

1
votes

Not sure if you fixed this already, but from the look of it, it seems you have a bunch of problems.

First and foremost, check whether your RabbitMQ server has trouble staying up for some reason.

Also, be sure that your RabbitMQ server is configured with the correct credentials and allows access from your worker's location (for example, the default guest user is only allowed to connect over loopback). Here's what you need to do: https://www.rabbitmq.com/access-control.html

Then, check that you have configured your worker with the correct authentication credentials. The full URL should look similar to the following (the user must be granted access to the specific virtual host; this is quite easy to configure via the RabbitMQ management interface, https://www.rabbitmq.com/management.html):

BROKER_URL = 'amqp://user:pass@host:port/virtualhost'
CELERY_RESULT_BACKEND = 'amqp://user:pass@host:port/virtualhost'
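If you prefer the command line over the management UI, the same setup can be done with rabbitmqctl (a sketch; myuser, mypassword and myvhost are placeholders, substitute your own values):

sudo rabbitmqctl add_user myuser mypassword
sudo rabbitmqctl add_vhost myvhost
sudo rabbitmqctl set_permissions -p myvhost myuser ".*" ".*" ".*"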

And finally, try to get the full traceback of the exception in Python; that should hopefully give you some additional information about the error.
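For example, something along these lines (a minimal sketch, assuming a task defined in your tasks.py; the exact exception class differs between Celery versions, so it is caught broadly here):

import traceback
from tasks import some_task  # hypothetical task name, replace with one of yours

try:
    # .delay() has to talk to the broker, so a refused connection surfaces here
    some_task.delay()
except Exception:
    traceback.print_exc()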

hth

P.S. Regarding daemonizing your celery worker, @budulianin's answer is spot on!

0
votes

How can I daemonize it?

Usually, I use supervisord for this purpose.

Here is a config example:

[program:celery]
command=/home/my_project/.virtualenvs/my_project/bin/celery worker
    -A my_project.project.celery_app.celery_app
    -n worker-%(process_num)s
    --concurrency=4
    --statedb=/tmp/%(process_num)s.state
    -l INFO

environment=MY_SETTINGS='/etc/my_settings.py'
process_name=%(program_name)s_%(process_num)02d
numprocs_start=1
numprocs=4
user=user_name
directory=/home/my_project
stdout_logfile=/var/log/my_project/celery.log
stderr_logfile=/var/log/my_project/celery_err.log
autostart=true
autorestart=true
startsecs=10
stopwaitsecs = 600
killasgroup=true
priority=998
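Once this file is in place (the include directory varies by distro; /etc/supervisor/conf.d/celery.conf is a common location, so treat that path as an assumption), reload supervisor and the workers are started, and restarted on failure, for you:

sudo supervisorctl reread
sudo supervisorctl update
sudo supervisorctl status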

BTW, CELERY_CREATE_MISSING_QUEUES is enabled by default.