Let's say I have the following processes declared in my Procfile:
web: newrelic-admin run-program python manage.py run_gunicorn -b 0.0.0.0:$PORT -w 9 -k gevent --max-requests 250 --preload --timeout 240
scheduler: python manage.py celery worker -B -E --maxtasksperchild=1000
worker: python manage.py celery worker -E --maxtasksperchild=1000
celerymon: python manage.py celerymon -B 0.0.0.0 -P $PORT
So I basically have to run a few dynos of my primary web process, plus a scheduler, a few workers, and a Celery monitor, while separately using a hosted AMQP broker.
I have tried the alternative of running multiple processes on a single dyno, but it doesn't seem to work reliably, and in any case it isn't something I would want to use in production.
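Roughly, what I tried looks like this (a sketch; bin/run-combined.sh is a hypothetical name, and the monitor's port is arbitrary since only the web process receives routed traffic anyway):

combined: bash bin/run-combined.sh

#!/usr/bin/env bash
# bin/run-combined.sh: start celerymon in the background, then exec the
# worker so it becomes the dyno's foreground process.
python manage.py celerymon -B 0.0.0.0 -P 5555 &
exec python manage.py celery worker -E --maxtasksperchild=1000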
I find the cost of running all this a bit prohibitive, especially when I think I could club together some processes on a single dyno, perhaps combining the scheduler with monitoring, or running the scheduler and a worker together.
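For example, since the -B flag already embeds the beat scheduler inside a worker (that is what my scheduler entry does), I imagine the scheduler and worker entries could collapse into one, with the caveat that only one such dyno may run or periodic tasks would fire more than once:

worker: python manage.py celery worker -B -E --maxtasksperchild=1000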
Added to this is the fact that Heroku only exposes ports 80 and 443, and there is no way to run services on multiple ports on the same dyno.
What would be a good strategy for optimizing process and dyno usage?
Alternatively, how does one monitor Celery tasks on Heroku if running celerycam adds another dyno to your cost?
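For instance, would ad-hoc monitoring from a one-off dyno be a workable substitute for a dedicated celerycam process? Something like:

heroku run python manage.py celery events

(celery events is the curses event monitor; since my workers already run with -E, they should be emitting the events it consumes.)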