3 votes

I have two application servers (both running the Django application), and both have a Celery worker running. The RabbitMQ server is set up on a third, separate server.

When a test task is executed from either application server through the shell using delay(), it runs fine.

If the same task is triggered on server1 from the browser (through AJAX), it also works fine.

But on server2 (which has the same config and code as server1), triggering the same task from the browser gives an [Errno 111] Connection refused error.
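For reference, the wiring involved is roughly the following (a minimal sketch only; the module, task, and view names are placeholders, not the actual project code):

    # tasks.py -- placeholder task definition
    from celery import shared_task

    @shared_task
    def run_report(report_id):
        # real work would happen here
        return "done: %s" % report_id

    # views.py -- the AJAX endpoint that triggers the task
    from django.http import JsonResponse
    from .tasks import run_report

    def report_view(request):
        # .delay() publishes the task to the broker (RabbitMQ) and returns immediately
        result = run_report.delay(request.GET.get("id"))
        return JsonResponse({"task_id": result.id})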

Some of the installed packages on both server1 and server2 are:

celery 3.1.18
amqp 1.4.9
django 1.8.5

Can anybody help me out with this? Thanks!

The error trace is as follows:

File "../lib/python2.7/site-packages/celery/app/task.py" in delay
  453.         return self.apply_async(args, kwargs)
File "../lib/python2.7/site-packages/celery/app/task.py" in apply_async
  559.             **dict(self._get_exec_options(), **options)
File "../lib/python2.7/site-packages/celery/app/base.py" in send_task
  353.                 reply_to=reply_to or self.oid, **options
File "../lib/python2.7/site-packages/celery/app/amqp.py" in publish_task
  305.             **kwargs
File "../lib/python2.7/site-packages/kombu/messaging.py" in publish
  172.                        routing_key, mandatory, immediate, exchange, declare)
File "../lib/python2.7/site-packages/kombu/connection.py" in _ensured
  457.                                            interval_max)
File "../lib/python2.7/site-packages/kombu/connection.py" in ensure_connection
  369.                         interval_start, interval_step, interval_max, callback)
File "../lib/python2.7/site-packages/kombu/utils/__init__.py" in retry_over_time
  246.             return fun(*args, **kwargs)
File "../local/lib/python2.7/site-packages/kombu/connection.py" in connect
  237.         return self.connection
File "../lib/python2.7/site-packages/kombu/connection.py" in connection
  742.                 self._connection = self._establish_connection()
File "../lib/python2.7/site-packages/kombu/connection.py" in _establish_connection
  697.         conn = self.transport.establish_connection()
File "../lib/python2.7/site-packages/kombu/transport/pyamqp.py" in establish_connection
  116.         conn = self.Connection(**opts)
File "../lib/python2.7/site-packages/amqp/connection.py" in __init__
  165.         self.transport = self.Transport(host, connect_timeout, ssl)
File "../lib/python2.7/site-packages/amqp/connection.py" in Transport
  186.         return create_transport(host, connect_timeout, ssl)
File "../lib/python2.7/site-packages/amqp/transport.py" in create_transport
  299.         return TCPTransport(host, connect_timeout)
File "../lib/python2.7/site-packages/amqp/transport.py" in __init__
  95.             raise socket.error(last_err)
Your rabbitmq is not running or is not accessible. – Sardorbek Imomaliev
No, rabbitmq is running fine. Otherwise the task wouldn't have executed through the python shell either. – ndk
It looks like something is preventing a network connection from server2 to the rabbitmq service - you should try basic network diagnostic tools to figure out what is going on. For a very basic check, try e.g. telnet from server2 to the rabbitmq host on port 5672. – scytale
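The same check can be run from Python on server2 if telnet isn't available (a minimal sketch; rabbitmq.example.com and 5672 are placeholders for the real broker host and port):

    # Quick TCP connectivity check from server2 to the broker, equivalent to the
    # telnet suggestion above. Host and port are placeholders.
    import socket

    sock = socket.create_connection(("rabbitmq.example.com", 5672), timeout=5)
    print("TCP connection to the broker succeeded")
    sock.close()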

1 Answer

0 votes

I'd say start adding some extra logging calls before calling delay on server2, just to make sure your celery config is correct when running under the web server (as opposed to the manage.py shell instance). It sounds like some startup script for gunicorn / uwsgi / apache / magic isn't loading a variable needed to actually configure celery correctly, or that it's being overridden somehow in that context.
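Something along these lines, for example (a sketch only, assuming a plain Celery 3.x setup where the broker is configured via BROKER_URL; the view and task names are placeholders):

    # views.py -- log the broker the web process actually resolved before publishing
    import logging

    from celery import current_app
    from django.http import HttpResponse

    from .tasks import my_task  # placeholder task

    logger = logging.getLogger(__name__)

    def trigger_task_view(request):
        # Compare this value with what the manage.py shell reports on server2.
        logger.info("Broker seen by the web process: %s",
                    current_app.conf.BROKER_URL)
        my_task.delay()
        return HttpResponse("queued")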

A really horrible method is to run your web server on server2 as manage.py runserver, put a pdb breakpoint right before your call to .delay(), and poke around. Not exactly something you want open to the general internet while you're doing it, but when all else fails...
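If you do go that route, it's just a couple of lines dropped into the view (placeholder names again):

    def trigger_task_view(request):
        # Drops into an interactive debugger in the runserver console; inspect
        # current_app.conf.BROKER_URL and django.conf.settings from the prompt.
        import pdb; pdb.set_trace()
        my_task.delay()
        return HttpResponse("queued")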