
There are a couple of questions similar to this one, but none of them has a proper solution or describes exactly the same problem.

Periodic tasks work fine with my config if I start celery by myself from the command line, like so:

celery --app=proj.mycelery worker -B

The problem appears when I try to daemonize celery. After following this tutorial, I start the service with:

sudo /etc/init.d/celerybeat start

and it seems to start fine, but the periodic task, which is set to execute every 5 seconds, is never executed.

These are my celery settings inside Django's settings.py:

BROKER_URL = 'amqp://guest:guest@localhost//'
CELERY_ACCEPT_CONTENT = ['json']
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'
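The question never shows how the 5-second task is actually scheduled. A hypothetical sketch of what that wiring might look like in settings.py, using the task name taken from the beat log further down (the schedule-entry key is an invented name):

```python
# Hypothetical CELERYBEAT_SCHEDULE entry for the 5-second task; the
# task path 'reports.tasks.test_periodic_task' comes from the beat log,
# the entry name 'test-every-5-seconds' is made up for illustration.
from datetime import timedelta

CELERYBEAT_SCHEDULE = {
    'test-every-5-seconds': {
        'task': 'reports.tasks.test_periodic_task',
        'schedule': timedelta(seconds=5),
    },
}
```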

This is my /etc/default/celerybeat configuration:

# Absolute or relative path to the 'celery' command:
CELERY_BIN="/home/burzum/.pyenv/versions/old_django/bin/celery"

# App instance to use
CELERY_APP="proj.mycelery"

# Where to chdir at start.
CELERYBEAT_CHDIR="/home/burzum/repos/proj/"

# Extra arguments to celerybeat
CELERYBEAT_OPTS="--schedule=/var/run/celery/celerybeat-schedule"

export DJANGO_SETTINGS_MODULE="proj.settings"

CELERYD_CHDIR="/home/burzum/repos/proj"

The /etc/init.d/celerybeat file is the same as taken from the tutorial (this one). I just added the following line at the beginning:

export PYTHONPATH='/home/burzum/repos'

The output of /var/log/celery/beat.log is:

[2015-11-23 09:15:18,304: INFO/MainProcess] beat: Starting...
[2015-11-23 09:15:23,307: INFO/MainProcess] Scheduler: Sending due task reports.tasks.test_periodic_task (reports.tasks.test_periodic_task)
[2015-11-23 09:15:28,310: INFO/MainProcess] Scheduler: Sending due task reports.tasks.test_periodic_task (reports.tasks.test_periodic_task)

So, it looks like the periodic task is being called, but nothing is happening.
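One reading of that log: beat is only the scheduler, so it *sends* the due task messages, but a separate worker daemon has to be running to consume and execute them. If only celerybeat was daemonized, nothing will pick the tasks up. A hypothetical /etc/default/celeryd for the matching worker daemon, reusing the paths from the beat configuration above (an assumption, since that file is not shown in the question):

```shell
# Hypothetical /etc/default/celeryd -- the worker daemon that must run
# alongside celerybeat to actually execute the dispatched tasks.
# Paths are copied from the /etc/default/celerybeat shown above.
CELERY_BIN="/home/burzum/.pyenv/versions/old_django/bin/celery"
CELERY_APP="proj.mycelery"
CELERYD_NODES="worker1"
CELERYD_CHDIR="/home/burzum/repos/proj"
export DJANGO_SETTINGS_MODULE="proj.settings"
```

It would be started with the celeryd init script from the same tutorial (`sudo /etc/init.d/celeryd start`), after which the worker log should show the periodic task being received and succeeding.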

Output of sudo /etc/init.d/celerybeat status is:

celery init v10.1.
Using configuration: , /etc/default/celerybeat
celerybeat (pid 11696) is up...

Output of starting the service with sudo sh -x /etc/init.d/celerybeat start is:

+ VERSION=10.1
+ export PYTHONPATH=/home/burzum/repos
+ echo celery init v10.1.
celery init v10.1.
+ id -u
+ [ 0 -ne 0 ]
+ [ -L /etc/init.d/celerybeat ]
+ SCRIPT_FILE=/etc/init.d/celerybeat
+ basename /etc/init.d/celerybeat
+ SCRIPT_NAME=celerybeat
+ scripts=
+ test -f /etc/default/celeryd
+ EXTRA_CONFIG=/etc/default/celerybeat
+ test -f /etc/default/celerybeat
+ scripts=, /etc/default/celerybeat
+ _config_sanity /etc/default/celerybeat
+ local path=/etc/default/celerybeat
+ ls -ld /etc/default/celerybeat
+ awk {print $3}
+ local owner=root
+ ls -ld /etc/default/celerybeat
+ cut -b 6
+ local iwgrp=-
+ ls -ld /etc/default/celerybeat
+ cut -b 9
+ local iwoth=-
+ id -u root
+ [ 0 != 0 ]
+ [ - != - ]
+ [ - != - ]
+ . /etc/default/celerybeat
+ CELERY_BIN=/home/burzum/.pyenv/versions/old_django/bin/celery
+ CELERY_APP=proj.mycelery
+ CELERYBEAT_CHDIR=/home/burzum/repos/proj/
+ CELERYBEAT_OPTS=--schedule=/var/run/celery/celerybeat-schedule
+ export DJANGO_SETTINGS_MODULE=proj.settings
+ CELERYD_CHDIR=/home/burzum/repos/proj
+ echo Using configuration: , /etc/default/celerybeat
Using configuration: , /etc/default/celerybeat
+ CELERY_BIN=/home/burzum/.pyenv/versions/old_django/bin/celery
+ DEFAULT_USER=celery
+ DEFAULT_PID_FILE=/var/run/celery/beat.pid
+ DEFAULT_LOG_FILE=/var/log/celery/beat.log
+ DEFAULT_LOG_LEVEL=INFO
+ DEFAULT_CELERYBEAT=/home/burzum/.pyenv/versions/old_django/bin/celery beat
+ CELERYBEAT=/home/burzum/.pyenv/versions/old_django/bin/celery beat
+ CELERYBEAT_LOG_LEVEL=INFO
+ CELERY_APP_ARG=
+ [ ! -z proj.mycelery ]
+ CELERY_APP_ARG=--app=proj.mycelery
+ CELERYBEAT_USER=celery
+ CELERY_CREATE_DIRS=0
+ CELERY_CREATE_RUNDIR=0
+ CELERY_CREATE_LOGDIR=0
+ [ -z  ]
+ CELERYBEAT_PID_FILE=/var/run/celery/beat.pid
+ CELERY_CREATE_RUNDIR=1
+ [ -z  ]
+ CELERYBEAT_LOG_FILE=/var/log/celery/beat.log
+ CELERY_CREATE_LOGDIR=1
+ export CELERY_LOADER
+ CELERYBEAT_OPTS=--schedule=/var/run/celery/celerybeat-schedule -f /var/log/celery/beat.log -l INFO
+ [ -n  ]
+ dirname /var/log/celery/beat.log
+ CELERYBEAT_LOG_DIR=/var/log/celery
+ dirname /var/run/celery/beat.pid
+ CELERYBEAT_PID_DIR=/var/run/celery
+ CELERYBEAT_CHDIR=/home/burzum/repos/proj/
+ [ -n /home/burzum/repos/proj/ ]
+ DAEMON_OPTS= --workdir=/home/burzum/repos/proj/
+ export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/sbin:/sbin
+ check_dev_null
+ [ ! -c /dev/null ]
+ check_paths
+ [ 1 -eq 1 ]
+ create_default_dir /var/log/celery
+ [ ! -d /var/log/celery ]
+ [ 1 -eq 1 ]
+ create_default_dir /var/run/celery
+ [ ! -d /var/run/celery ]
+ start_beat
+ echo Starting celerybeat...
Starting celerybeat...
+ _chuid --app=proj.mycelery --schedule=/var/run/celery/celerybeat-schedule -f /var/log/celery/beat.log -l INFO --workdir=/home/burzum/repos/proj/ --detach --pidfile=/var/run/celery/beat.pid
+ su celery -c /home/burzum/.pyenv/versions/old_django/bin/celery beat --app=proj.mycelery --schedule=/var/run/celery/celerybeat-schedule -f /var/log/celery/beat.log -l INFO --workdir=/home/burzum/repos/proj/ --detach --pidfile=/var/run/celery/beat.pid
+ exit 0
Did you check whether your celery instance actually listens to the correct broker queue and receives messages? Tasks will get loaded correctly even if the broker is incorrect. I would assume that your tasks are loaded but the messages don't make it through. More than one instance of celery on the same task queue could also be a cause. – Falk Schuetzenmeister

Do you have any suggestions on how to check whether the celery instance is listening to the correct broker? I'm not very experienced with this. – burzum

You could try rabbitmq.com/management.html, which has a nice UI. Another option is to run celery with the -l info option and check the logs for whether messages are received. You can also run a non-daemonized instance of celery that prints all its logging to stdout, i.e. the terminal where it is running. If your non-daemonized setup worked, you could use supervisor to daemonize it (this is described a little further down in the same tutorial). That way you can run it with exactly the same settings as in your dev environment; I think that would be the preferred setup. No solution, just ideas. – Falk Schuetzenmeister

I will give supervisord a try, thank you, Falk. – burzum

3 Answers

1 vote

This is a working example, finally.

  1. Create celery_service.conf
  2. Create celery.service

Run the service with CentOS's systemctl, for example. And that's it.

I've added useful scripts:

0 votes

I don't have a proper answer to this either, and I have read tons of tutorials that all copy-and-paste the same note:

Note -B is meant to be used for development purposes. For production environment, you need to start celery beat separately.

And nobody really shows a live example of how to achieve that. I thought the option

CELERYD_NODES="beat"

would do the trick, but it doesn't work. Only adding the -B or --beat option to your "celeryd" config brings this separate worker into a real beat state.

CELERYD_OPTS="--beat --scheduler=django_celery_beat.schedulers:DatabaseScheduler"

UPD: I found that this is a documentation issue, described here: https://github.com/celery/celery/issues/4304

Here you can see an example of proper implementation of beat via systemd: https://specialistoff.net/question/238

I've checked it just now: beat is up and working. It still needs some time for testing, anyway.

0 votes

I guess you have a Linux server with systemd. Use systemd, not the "generic init-scripts".

There are docs for systemd, but they appear below the old init.d instructions:

http://docs.celeryproject.org/en/master/userguide/daemonizing.html#usage-systemd

Quoting the docs:

This is an example systemd file:

/etc/systemd/system/celery.service:

[Unit]
Description=Celery Service
After=network.target

[Service]
Type=forking
User=celery
Group=celery
EnvironmentFile=/etc/conf.d/celery
WorkingDirectory=/opt/celery
ExecStart=/bin/sh -c '${CELERY_BIN} multi start ${CELERYD_NODES} \
  -A ${CELERY_APP} --pidfile=${CELERYD_PID_FILE} \
  --logfile=${CELERYD_LOG_FILE} --loglevel=${CELERYD_LOG_LEVEL} ${CELERYD_OPTS}'
ExecStop=/bin/sh -c '${CELERY_BIN} multi stopwait ${CELERYD_NODES} \
  --pidfile=${CELERYD_PID_FILE}'
ExecReload=/bin/sh -c '${CELERY_BIN} multi restart ${CELERYD_NODES} \
  -A ${CELERY_APP} --pidfile=${CELERYD_PID_FILE} \
  --logfile=${CELERYD_LOG_FILE} --loglevel=${CELERYD_LOG_LEVEL} ${CELERYD_OPTS}'

[Install]
WantedBy=multi-user.target
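The same docs page also provides a separate unit for beat itself, which is the piece this question is actually about. A sketch along those lines, where the user, group, and paths are assumptions to adapt to your setup:

```ini
# /etc/systemd/system/celerybeat.service -- sketch adapted from the
# same daemonizing docs; User, Group, EnvironmentFile, and
# WorkingDirectory are assumptions that must match your deployment.
[Unit]
Description=Celery Beat Service
After=network.target

[Service]
Type=simple
User=celery
Group=celery
EnvironmentFile=/etc/conf.d/celery
WorkingDirectory=/opt/celery
ExecStart=/bin/sh -c '${CELERY_BIN} beat -A ${CELERY_APP} \
  --pidfile=${CELERYBEAT_PID_FILE} \
  --logfile=${CELERYBEAT_LOG_FILE} --loglevel=${CELERYD_LOG_LEVEL}'

[Install]
WantedBy=multi-user.target
```

After placing the file, reload and enable it with `systemctl daemon-reload`, `systemctl enable celerybeat`, and `systemctl start celerybeat`.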