12
votes

I am seeing tasks seemingly "disappear" in Celery, running with two nodes. It seems to happen randomly. The task gets created like this:

task = perform_advance.apply_async(...)
logger.info('Task created, id: {}'.format(task.task_id))

When this works, I will see something like:

[2016-04-21 01:13:02,470: INFO/Worker-8] foo.tasks.some_task[e52615da-de7a-49de-88d6-b3ca43a3383f]: Task created, id: eaaeb427-a167-4a78-ba39-4803e20cc753

[2016-04-29 21:18:40,667: DEBUG/MainProcess] Task accepted: foo.tasks.some_task[eaaeb427-a167-4a78-ba39-4803e20cc753] pid:1104

But when it fails, I never see the task being accepted, only created. There are no errors in the logs.

celery version: 3.1.23

rabbitmq version: 3.3.3
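One thing worth ruling out with "lost" tasks is early acknowledgement: by default a Celery worker acks a message as soon as it receives it, so if the worker process dies before the task runs, the message is gone with no error logged. A minimal sketch of the relevant Celery 3.1-era settings (the settings module itself is hypothetical, but `CELERY_ACKS_LATE` and `CELERYD_PREFETCH_MULTIPLIER` are real config keys in that version):

```python
# celeryconfig.py (hypothetical settings module)

# With acks_late off (the default), a message is acknowledged on receipt;
# a worker that dies before executing it loses the task silently.
# Acking after completion lets RabbitMQ redeliver unfinished tasks.
CELERY_ACKS_LATE = True

# Limit prefetching so one worker does not hold messages it may never run.
CELERYD_PREFETCH_MULTIPLIER = 1
```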

1
Check here to see if it helps: stackoverflow.com/questions/5336645/… - Haifeng Zhang
I use CELERY_ACKS_LATE = True in the Celery config with the RabbitMQ broker - silviud
This happened to me using Redis. It had to do with RAM on my VPS; I just set up more RAM and the problem was gone. (Used to process 2+ million async requests) - Eddwin Paz
Note that you are using a RabbitMQ version that is several years old - istepaniuk

1 Answer

0
votes

I worked on this as well, so I'll share the solution here.

It turned out to be the internal Amazon ELB load balancer in front of RabbitMQ that was messing us up. Connecting to RabbitMQ directly rather than through the ELB solved this.
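For anyone hitting the same thing, a sketch of what the fix looks like in a Celery 3.1-style config. The hostnames and credentials are placeholders; the idea is to bypass the load balancer (which can silently drop idle AMQP connections) and list the RabbitMQ nodes directly. `BROKER_URL` accepting a list of URLs for failover is a real Celery 3.1 feature:

```python
# Hypothetical broker settings; hosts and credentials are placeholders.
# Before (via the ELB):
#   BROKER_URL = 'amqp://user:password@internal-elb.example.com:5672//'

# After: connect to the RabbitMQ nodes directly. Giving a list of URLs
# enables broker failover, so Celery tries the next node if one is down.
BROKER_URL = [
    'amqp://user:password@rabbit-node-1.internal:5672//',
    'amqp://user:password@rabbit-node-2.internal:5672//',
]
```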