
I have one Celery beat task that schedules other scraping tasks. When those tasks are not processed fast enough, the queue starts to grow.

I know Celery stores results in its backend DB, but that table only contains: id, task_id, status, result, date_done, traceback.

My first idea is to switch from Celery beat to tasks rescheduling themselves, but some tasks are independent or could get lost, so Celery beat remains useful in those cases.

My second idea is to add my own log table, where I can save the task id and task context, so that before enqueueing a new task I can check whether an identical one already exists.

Maybe you have a better approach? Thanks.
