I have a Flask app that uses dependency injection (via Flask-Injector) and Celery. The example below works, but my main app has to instantiate all of the modules required by Celery, in addition to creating a second Flask instance. Is there a better way to achieve this?

In particular:

Why should the main "frontend" app depend on the entire Celery "backend" stack just to configure the Celery client? I would like to decouple these subsystems, since the frontend only kicks off tasks.

main.py

from flask import Flask
from flask_injector import FlaskInjector

import tasks  # fixed: "import tasks.py" is invalid syntax

app = Flask(__name__)
FlaskInjector(app=app, modules=[A, B, C, D, E, F])  # A..F are my injector modules

celery.py

from celery import Celery
from flask import Flask
from flask_injector import FlaskInjector
from injector import Injector

app = Flask(__name__)
injector = Injector(modules=[A, B])
FlaskInjector(app=app, injector=injector)
celery = Celery(app.import_name, include=['tasks'])

tasks.py

from celery import celery, injector  # imports from the local celery.py above, not the celery package

@celery.task
def my_task():
    injector.get(A).foo()  # resolve A from the injector and do the work
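
For context, the frontend invokes the task roughly like this (a hypothetical route; the handler and URL are made up for illustration), and this import is what pulls the entire Celery stack into main.py:

# hypothetical route in main.py, shown only to illustrate the coupling
from tasks import my_task  # transitively imports celery.py and modules A, B

@app.route('/run')
def run():
    my_task.delay()  # enqueue the task on the broker
    return 'queued', 202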

I don't import app from main because I don't want Celery to depend on everything in the main app that has nothing to do with running tasks. Conversely, I don't want my main app to depend on all of the worker bootstrap just to configure the Celery client. That's fine for toy apps, but as a large system grows, managing these dependencies matters, and I don't see a clean separation between the Celery client config (needed to invoke tasks) and what is needed only by the worker.

I have a frontend Flask application and a backend Celery application. The Celery app contains business logic that I don't want Flask to depend on: it's large, complex, and changes often. I don't want to bloat the Flask app, redeploy it every time the Celery code changes, or expose my Flask developers to Celery internals. But as far as I know, the Flask invocation of tasks can't be decoupled from the Celery implementation of them.
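
Ideally, the frontend would only need the broker address and the task's name. A minimal sketch of the shape I'm after (frontend_client.py is a hypothetical module; send_task is Celery's dispatch-by-name API, and the broker URL here is an assumption):

frontend_client.py

# the only Celery-related code the frontend would ever see
from celery import Celery

client = Celery(broker='amqp://localhost')  # broker URL only; no modules A..F, no task code

def start_my_task():
    # dispatch by registered task name; tasks.py is never imported here
    client.send_task('tasks.my_task')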

1 Answer