I am running a FastAPI application on a single server, with Celery doing the heavy lifting. Every request to the FastAPI server kicks off a task on the same server that can run for hours.
Currently the API code and the Celery worker code reside and run on the same server. I launch the API server as:
uvicorn app.main:app --port <PORT> --host <HOST>
And the workers as:
celery -A app.worker worker --loglevel=info
The current directory structure is roughly as follows, with the tasks defined in tasks/tasks.py and discovered by worker.py. The API routes import the specific task function from tasks/tasks.py and call test_task.delay() on it (a minimal sketch of this wiring follows the tree below).
    app/
        models/
            __init__.py
            model1.py
            model2.py
        routers/
            __init__.py
            router1.py
            router2.py
        tasks/
            __init__.py
            tasks.py
        main.py
        worker.py
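
For concreteness, the current wiring looks roughly like this; the broker URL, the result backend, and the body of test_task are placeholders, not my real values:

    # app/worker.py -- the Celery app the workers run
    from celery import Celery

    celery_app = Celery(
        "app",
        broker="redis://localhost:6379/0",   # placeholder broker URL
        backend="redis://localhost:6379/1",  # placeholder result backend
        include=["app.tasks.tasks"],         # import the task module on worker start
    )

    # app/tasks/tasks.py -- the long-running task
    from app.worker import celery_app

    @celery_app.task
    def test_task(payload: dict) -> str:
        ...  # hours of heavy lifting
        return "done"

    # app/routers/router1.py -- an API route that enqueues the task
    from fastapi import APIRouter
    from app.tasks.tasks import test_task

    router = APIRouter()

    @router.post("/jobs")
    def create_job(payload: dict):
        result = test_task.delay(payload)  # enqueue on the broker
        return {"task_id": result.id}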
But I am expecting high load in the coming days, and I plan to scale out: one server handling only the API requests and multiple high-end servers running the Celery workers, all connected to the same broker instance.
I want the API code to remain a separate module and the workers to stay a separate module, so that they can be deployed independently while the tasks can still be called from the FastAPI application.
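To make the goal concrete, the split I have in mind looks something like this (the directory names are placeholders):

    api/        # deployed only on the API server
        main.py
        routers/
        ...
    worker/     # deployed only on the worker servers
        worker.py
        tasks/
        ...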
However, I cannot work out how to structure my project files so that the Celery servers don't need to load the FastAPI code, and vice versa, while keeping the system robust, extensible, and production-ready.
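The closest I have come is creating a bare Celery client on the API side and dispatching by name with send_task, so the API never imports the worker module. A minimal sketch, assuming the task is registered on the workers as "app.tasks.tasks.test_task" and a Redis broker (both placeholders):

    # API side only: no worker code is imported here.
    from celery import Celery

    # Same broker the workers listen on (placeholder URL).
    celery_client = Celery("api", broker="redis://localhost:6379/0")

    def enqueue_test_task(payload: dict) -> str:
        # send_task dispatches by the task's registered name, so the API
        # server never needs the task's implementation on disk.
        result = celery_client.send_task(
            "app.tasks.tasks.test_task", args=[payload]
        )
        return result.id

But I am not sure whether hard-coding task names like this is the right long-term approach, or whether there is a cleaner way to share just the task signatures between the two deployments.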
I am new to this, so kindly excuse me if this is a noob question.