
I have an issue where django_celery_beat's DatabaseScheduler doesn't run periodic tasks. Or rather, celery beat doesn't find any tasks when the scheduler is DatabaseScheduler. If I use the standard scheduler, the tasks are executed regularly.

I set up Celery on Heroku using one dyno for the worker and one for beat (and one for web, obviously).
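The Procfile looks roughly like this (the project name and log level are placeholders, not the exact values we use):

web: gunicorn myproject.wsgi
worker: celery -A myproject worker -l info
beat: celery -A myproject beat -l info --scheduler django_celery_beat.schedulers:DatabaseScheduler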

I know that beat and the worker are connected to Redis, and to Postgres for task results.

Every periodic task that I run from the Django admin by selecting it and choosing "run selected task" gets executed.

However, I've spent about two days trying to figure out why there isn't a way for beat/the worker to notice that I scheduled a task to run every 10 seconds, or on a cron schedule (even restarting beat and the worker doesn't change anything).
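For reference, the 10-second schedule is created through django_celery_beat's models, roughly like this (the task path and name are placeholders, not our real ones):

from django_celery_beat.models import IntervalSchedule, PeriodicTask

# create (or reuse) a 10-second interval and attach a periodic task to it
schedule, _ = IntervalSchedule.objects.get_or_create(
    every=10,
    period=IntervalSchedule.SECONDS,
)
PeriodicTask.objects.create(
    interval=schedule,
    name="test every 10 seconds",
    task="myapp.tasks.test",
)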

I'm kind of desperate, and my next move would be to give RedBeat a try.

Any help on how to troubleshoot this particular problem would be greatly appreciated. I suspect the problem is in the is_due method. I am using UTC (in both Celery and Django), and all cron schedules are UTC based. All I see in the beat log is "writing entries.." every now and then.
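For reference, the timezone and scheduler settings look roughly like this (a sketch assuming Celery reads its configuration from the Django settings with the CELERY_ namespace; the exact values in our settings file may differ slightly):

# settings.py (relevant excerpt)
TIME_ZONE = "UTC"
USE_TZ = True

CELERY_TIMEZONE = "UTC"
CELERY_BEAT_SCHEDULER = "django_celery_beat.schedulers:DatabaseScheduler"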

I've tried changing the Celery version from 4.3 to 4.4, and django-celery-beat from 1.4.0 to 1.5.0 to 1.6.0.

Any help would be greatly appreciated.


1 Answer


In case it helps someone who's having, or will have, a similar problem to ours: to reproduce this issue, it is possible to create a task as simple as:

@app.task(bind=True)
def test(self, arg):
    # accepts a single positional argument only
    print(arg)

then, in the Django admin, edit the periodic task and put something in the keyword arguments field (an argument the signature above doesn't accept). Or, vice versa, the task could be

@app.task(bind=True)
def test(self, **kwargs):
    # accepts keyword arguments only
    print(kwargs.get("notification_id"))

And try to pass a positional argument. While this breaks locally, on Heroku's beat and worker dynos it somehow slips by unnoticed, and django_celery_beat stops processing any tasks whatsoever from then on. The scheduler is completely broken by a single "wrong" task.
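As a workaround, one could make the task signature tolerant of whatever arguments are stored in the admin, for example something along these lines (a sketch, not a confirmed fix for the underlying scheduler behaviour):

@app.task(bind=True)
def test(self, *args, **kwargs):
    # accept any combination of arguments, so a mismatch with the
    # arguments configured in the Django admin cannot raise a TypeError
    print(args, kwargs.get("notification_id"))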