
I'm using Supervisord with Celery on a Tornado server (note: not tcelery, since my server isn't using any async features yet) with three workers: w1, w2, and w3, each with a concurrency of 10. I set this up via Supervisor by adding the following to /etc/supervisord.conf:

[program:sendgrid_gateway_server]
command=sudo python main.py -o runserver
numprocs=1
directory=/home/ubuntu/sendgrid_gateway/sendgrid-gateway
stdout_logfile=/home/ubuntu/sendgrid_gateway/sendgrid-gateway/logs/server_log.txt
autostart=true
autorestart=true
user=root

[program:sendgrid_gateway_server_w1]
command=celery worker -A tasks --loglevel=INFO --concurrency=10 -n w1
numprocs=1
directory=/home/ubuntu/sendgrid_gateway/sendgrid-gateway
stdout_logfile=/home/ubuntu/sendgrid_gateway/sendgrid-gateway/logs/w1_log.txt
autostart=true
autorestart=true
user=root

[program:sendgrid_gateway_server_w2]
command=celery worker -A tasks --loglevel=INFO --concurrency=10 -n w2
numprocs=1
directory=/home/ubuntu/sendgrid_gateway/sendgrid-gateway
stdout_logfile=/home/ubuntu/sendgrid_gateway/sendgrid-gateway/logs/w2_log.txt
autostart=true
autorestart=true
user=root

[program:sendgrid_gateway_server_w3]
command=celery worker -A tasks --loglevel=INFO --concurrency=10 -n w3
numprocs=1
directory=/home/ubuntu/sendgrid_gateway/sendgrid-gateway
stdout_logfile=/home/ubuntu/sendgrid_gateway/sendgrid-gateway/logs/w3_log.txt
autostart=true

The first [program] block is for my main Python application, which runs Tornado. The next three are (obviously) my Celery workers. What worries me is that when I run "supervisorctl start all", all 30 processes show up in the list:

root 2547 0.0 0.0 40848 1672 ? S 13:40 0:00 sudo python main.py -o runserver
root 2548 0.2 1.9 176140 33020 ? Sl 13:40 0:04 /usr/bin/python /usr/local/bin/celery worker -A tasks --loglevel=INFO --concurrency=10 -n w3
root 2549 0.0 2.1 196848 35632 ? S 13:40 0:01 python main.py -o runserver
root 2560 0.2 1.9 176140 33016 ? Sl 13:40 0:03 /usr/bin/python /usr/local/bin/celery worker -A tasks --loglevel=INFO --concurrency=10 -n w2
root 2561 0.2 1.9 176140 33020 ? Sl 13:40 0:03 /usr/bin/python /usr/local/bin/celery worker -A tasks --loglevel=INFO --concurrency=10 -n w1
root 2581 0.0 1.6 175144 28616 ? S 13:40 0:00 /usr/bin/python /usr/local/bin/celery worker -A tasks --loglevel=INFO --concurrency=10 -n w3
root 2582 0.0 1.6 175144 28624 ? S 13:40 0:00 /usr/bin/python /usr/local/bin/celery worker -A tasks --loglevel=INFO --concurrency=10 -n w3
root 2583 0.0 1.6 175144 28628 ? S 13:40 0:00 /usr/bin/python /usr/local/bin/celery worker -A tasks --loglevel=INFO --concurrency=10 -n w3
root 2584 0.0 1.6 175144 28628 ? S 13:40 0:00 /usr/bin/python /usr/local/bin/celery worker -A tasks --loglevel=INFO --concurrency=10 -n w3
root 2585 0.0 1.6 175144 28628 ? S 13:40 0:00 /usr/bin/python /usr/local/bin/celery worker -A tasks --loglevel=INFO --concurrency=10 -n w3
root 2586 0.0 1.6 175144 28632 ? S 13:40 0:00 /usr/bin/python /usr/local/bin/celery worker -A tasks --loglevel=INFO --concurrency=10 -n w3
root 2587 0.0 1.6 175144 28632 ? S 13:40 0:00 /usr/bin/python /usr/local/bin/celery worker -A tasks --loglevel=INFO --concurrency=10 -n w3
root 2589 0.0 1.6 175144 28636 ? S 13:40 0:00 /usr/bin/python /usr/local/bin/celery worker -A tasks --loglevel=INFO --concurrency=10 -n w3
root 2590 0.0 1.6 175144 28644 ? S 13:40 0:00 /usr/bin/python /usr/local/bin/celery worker -A tasks --loglevel=INFO --concurrency=10 -n w3
root 2591 0.0 1.6 175144 28640 ? S 13:40 0:00 /usr/bin/python /usr/local/bin/celery worker -A tasks --loglevel=INFO --concurrency=10 -n w3
root 2595 0.0 1.6 175144 28612 ? S 13:40 0:00 /usr/bin/python /usr/local/bin/celery worker -A tasks --loglevel=INFO --concurrency=10 -n w2
root 2596 0.0 1.6 175144 28624 ? S 13:40 0:00 /usr/bin/python /usr/local/bin/celery worker -A tasks --loglevel=INFO --concurrency=10 -n w1
root 2597 0.0 1.6 175144 28632 ? S 13:40 0:00 /usr/bin/python /usr/local/bin/celery worker -A tasks --loglevel=INFO --concurrency=10 -n w1
root 2598 0.0 1.6 175144 28620 ? S 13:40 0:00 /usr/bin/python /usr/local/bin/celery worker -A tasks --loglevel=INFO --concurrency=10 -n w2
root 2599 0.0 1.6 175144 28620 ? S 13:40 0:00 /usr/bin/python /usr/local/bin/celery worker -A tasks --loglevel=INFO --concurrency=10 -n w2
root 2600 0.0 1.6 175144 28620 ? S 13:40 0:00 /usr/bin/python /usr/local/bin/celery worker -A tasks --loglevel=INFO --concurrency=10 -n w2
root 2601 0.0 1.6 175144 28624 ? S 13:40 0:00 /usr/bin/python /usr/local/bin/celery worker -A tasks --loglevel=INFO --concurrency=10 -n w2
root 2602 0.0 1.6 175144 28636 ? S 13:40 0:00 /usr/bin/python /usr/local/bin/celery worker -A tasks --loglevel=INFO --concurrency=10 -n w1
root 2603 0.0 1.6 175144 28628 ? S 13:40 0:00 /usr/bin/python /usr/local/bin/celery worker -A tasks --loglevel=INFO --concurrency=10 -n w2
root 2604 0.0 1.6 175144 28636 ? S 13:40 0:00 /usr/bin/python /usr/local/bin/celery worker -A tasks --loglevel=INFO --concurrency=10 -n w1
root 2605 0.0 1.6 175144 28632 ? S 13:40 0:00 /usr/bin/python /usr/local/bin/celery worker -A tasks --loglevel=INFO --concurrency=10 -n w1
root 2608 0.0 1.6 175144 28632 ? S 13:40 0:00 /usr/bin/python /usr/local/bin/celery worker -A tasks --loglevel=INFO --concurrency=10 -n w1
root 2609 0.0 1.6 175144 28628 ? S 13:40 0:00 /usr/bin/python /usr/local/bin/celery worker -A tasks --loglevel=INFO --concurrency=10 -n w2
root 2610 0.0 1.6 175144 28640 ? S 13:40 0:00 /usr/bin/python /usr/local/bin/celery worker -A tasks --loglevel=INFO --concurrency=10 -n w1
root 2611 0.0 1.6 175144 28640 ? S 13:40 0:00 /usr/bin/python /usr/local/bin/celery worker -A tasks --loglevel=INFO --concurrency=10 -n w1
root 2612 0.0 1.6 175144 28632 ? S 13:40 0:00 /usr/bin/python /usr/local/bin/celery worker -A tasks --loglevel=INFO --concurrency=10 -n w2
root 2613 0.0 1.6 175144 28648 ? S 13:40 0:00 /usr/bin/python /usr/local/bin/celery worker -A tasks --loglevel=INFO --concurrency=10 -n w1
root 2614 0.0 1.6 175144 28644 ? S 13:40 0:00 /usr/bin/python /usr/local/bin/celery worker -A tasks --loglevel=INFO --concurrency=10 -n w1
root 2616 0.0 1.6 175144 28640 ? S 13:40 0:00 /usr/bin/python /usr/local/bin/celery worker -A tasks --loglevel=INFO --concurrency=10 -n w2
root 2617 0.0 1.6 175144 28636 ? S 13:40 0:00 /usr/bin/python /usr/local/bin/celery worker -A tasks --loglevel=INFO --concurrency=10 -n w2

Those are the 30 Celery processes, plus a few extras (not quite sure why the extras are there...). I was under the impression that the unnecessary processes should terminate after a task finishes. Is that the case, or am I just loony?
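For what it's worth, a quick way to see where the "extra" processes come from is to list PIDs alongside their parent PIDs (a sketch, assuming a procps-style ps as on Ubuntu):

```shell
# Show each celery process with its parent PID. The three entries whose
# parent is supervisord are the worker master processes; each master
# forks --concurrency=10 children, so 3 masters + 30 children appear.
# The [c] trick keeps grep from matching its own command line.
ps axo pid,ppid,stat,command | grep '[c]elery worker'

# Count the celery processes (assumes the workers are running):
ps axo pid,command | grep -c '[c]elery worker'
```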

Thanks in advance.


1 Answer


Yes, they should all show up as processes. However, you may want to set the stopasgroup=true and killasgroup=true options in your [program] sections so that all of the child processes are stopped at once; otherwise they may keep running even after you run supervisorctl stop [programname].
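For example, one of your worker sections would become the following (shown for w1 only; the other worker sections get the same two lines). stopasgroup=true sends the stop signal to the worker's whole UNIX process group, so the forked pool children go down with the master; killasgroup=true does the same for the SIGKILL that supervisord sends if the process doesn't stop in time:

[program:sendgrid_gateway_server_w1]
command=celery worker -A tasks --loglevel=INFO --concurrency=10 -n w1
numprocs=1
directory=/home/ubuntu/sendgrid_gateway/sendgrid-gateway
stdout_logfile=/home/ubuntu/sendgrid_gateway/sendgrid-gateway/logs/w1_log.txt
autostart=true
autorestart=true
stopasgroup=true
killasgroup=true
user=root

Then run "supervisorctl reread" and "supervisorctl update" so supervisord picks up the changed configuration.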