6
votes

I'm probably not understanding the asynchronous concept correctly in FastAPI.

I'm accessing the root endpoint of the following app from two clients at the same time. I'd expect FastAPI to print Started twice in a row at the start of the execution:

from fastapi import FastAPI
import asyncio

app = FastAPI()

@app.get("/")
async def read_root():
    print('Started')
    await asyncio.sleep(5)
    print('Finished')
    return {"Hello": "World"}

Instead I get the following, which looks very much non-asynchronous:

Started
Finished
INFO: ('127.0.0.1', 49655) - "GET / HTTP/1.1" 200
Started
Finished
INFO: ('127.0.0.1', 49655) - "GET / HTTP/1.1" 200

What am I missing?

6
" I'd expect FastAPI to print Started twice". It is printed twice! - TomTom101
Good point, I edited the question to make it clearer - Florentin Hennecker

6 Answers

7
votes

How do you ensure there are multiple simultaneous requests?

Your code is fine - try the following command for testing:

for n in {1..5}; do curl http://localhost:8000/ & done

Your browser might be caching subsequent requests to the same URL.
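If you'd rather drive the same check from Python, here is a minimal standard-library sketch; a ThreadingHTTPServer with a one-second handler stands in for the FastAPI app (SlowHandler and the 1-second delay are invented for the demo):

```python
import threading
import time
import urllib.request
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

# Stand-in for the FastAPI app: every request takes 1 second.
class SlowHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        time.sleep(1)
        body = b'{"Hello": "World"}'
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the output quiet
        pass

server = ThreadingHTTPServer(("127.0.0.1", 0), SlowHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_address[1]}/"

def hit():
    # Each thread opens its own connection, so the requests really overlap.
    with urllib.request.urlopen(url) as resp:
        resp.read()

start = time.monotonic()
threads = [threading.Thread(target=hit) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.monotonic() - start
server.shutdown()
print(f"5 requests in {elapsed:.2f}s")  # ~1 s total, not ~5 s
```

If the five requests finish in about one second rather than five, the server side is concurrent and the problem is the client.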

2
votes

Well, from what the demos in this GitHub issue show, it's probably not due to FastAPI but to the client that sends the requests.

1
votes

I ran the same experiment with the Chrome browser and got the result originally reported: the requests from two separate Chrome windows were processed one after another, as if serially.

import asyncio
import datetime

@app.get("/test")
async def test():
    r = {"message": "Hello by /test api"}
    r['timestamp'] = datetime.datetime.utcnow()
    await asyncio.sleep(10)
    return r

The 2 requests took 20 seconds (10 seconds each) to finish, which is obviously not concurrent!

However, when I tried with curl as suggested in the answer above, the requests were processed in parallel (!)

I repeated the experiment with 2 Firefox windows, and the result was also parallel execution.

Finally, I found the clue in the FastAPI logs. With the 2 Chrome windows, the source of the requests (ip:port) was recorded as identical:

INFO:     10.10.62.106:54668 - "GET /test HTTP/1.1" 200 OK
INFO:     10.10.62.106:54668 - "GET /test HTTP/1.1" 200 OK

However, with Firefox, the sources were different:

INFO:     10.10.62.106:54746 - "GET /test HTTP/1.1" 200 OK
INFO:     10.10.62.106:54748 - "GET /test HTTP/1.1" 200 OK

From the logs above, I can conclude that FastAPI (or uvicorn in front of it) handles requests in parallel only when the source address differs.

Can someone comment on the above conclusion? Thanks.
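One plausible reading of the identical ip:port entries: both Chrome requests arrived over the same keep-alive TCP connection, and an HTTP/1.1 connection carries only one request at a time, so they were serialized before FastAPI ever saw them. The two cases can be sketched with the standard library (SlowHandler and the 1-second delay are invented stand-ins for the app):

```python
import http.client
import threading
import time
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

# Invented stand-in for the app: every request takes 1 second.
class SlowHandler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # keep-alive, like a browser connection

    def do_GET(self):
        time.sleep(1)
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the output quiet
        pass

server = ThreadingHTTPServer(("127.0.0.1", 0), SlowHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
host, port = server.server_address

# One reused keep-alive connection (same ip:port in the logs):
# the second request cannot start until the first response arrives.
conn = http.client.HTTPConnection(host, port)
start = time.monotonic()
for _ in range(2):
    conn.request("GET", "/")
    conn.getresponse().read()
serialized = time.monotonic() - start  # ~2 s
conn.close()

# Two separate connections (different ports in the logs): they overlap.
def hit():
    c = http.client.HTTPConnection(host, port)
    c.request("GET", "/")
    c.getresponse().read()
    c.close()

start = time.monotonic()
threads = [threading.Thread(target=hit) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
parallel = time.monotonic() - start  # ~1 s
server.shutdown()
print(f"one connection: {serialized:.2f}s, two connections: {parallel:.2f}s")
```

On this reading it is not the source address itself that matters, but whether the requests share one connection.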

0
votes

You're probably expecting

Started
INFO: ('127.0.0.1', 49655) - "GET / HTTP/1.1" 200
Finished
Started
INFO: ('127.0.0.1', 49655) - "GET / HTTP/1.1" 200
Finished

but that's not the case.

return {"Hello": "World"}

completes the execution of the coroutine (your API route), so by the time it returns, no statement in the function has been left incomplete; the access-log line is only printed after that.

Instead, if you place simultaneous requests, for example 2, both requests complete in about 5.01 to 5.10 seconds, whereas a synchronous version of your code would take slightly more than 10 seconds.
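The ~5-second total for two simultaneous requests can be sketched without any server at all, by awaiting the coroutine twice with asyncio.gather (the decorator is dropped and the sleep is shortened to 1 second here to keep the demo quick):

```python
import asyncio
import time

events = []  # record the print order so it can be inspected afterwards

async def read_root():
    events.append("Started")
    print("Started")
    await asyncio.sleep(1)  # shortened from 5 s for the demo
    events.append("Finished")
    print("Finished")
    return {"Hello": "World"}

async def main():
    start = time.monotonic()
    # Two "simultaneous requests" to the same handler
    await asyncio.gather(read_root(), read_root())
    return time.monotonic() - start

elapsed = asyncio.run(main())
print(f"both done in {elapsed:.2f}s")  # ~1 s total, not ~2 s
```

Both Started lines appear before either Finished, and the total time is roughly one sleep, not two.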

0
votes

If I understand correctly, you expected the prints from the two requests to interleave. You could implement print("Started") as a coroutine and use it this way:

async def started():
    print("Started")

@app.get("/")
async def read_root():
    await started()
    await asyncio.sleep(5)
    print('Finished')
    return {"Hello": "World"}

But it won't work as you want either. You will only see truly asynchronous behavior with heavier requests involving DB connections, authentication, and other computation, especially requests to third-party APIs.

Good luck )

0
votes

When you declare a path operation function with normal def instead of async def, it is run in an external threadpool that is then awaited, instead of being called directly (as it would block the server).

You can find more details in this article, where I explain where to use async in FastAPI.
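The threadpool behavior described above can be sketched with plain asyncio: handing a blocking def function to asyncio.to_thread is roughly analogous to what happens with def endpoints (a simplified stand-in, not FastAPI's actual implementation):

```python
import asyncio
import time

def blocking_handler():
    # A plain `def` endpoint: it blocks its thread for 1 second.
    time.sleep(1)
    return {"Hello": "World"}

async def main():
    start = time.monotonic()
    # Run each blocking call in a worker thread and await the results,
    # keeping the event loop free to serve other requests meanwhile.
    results = await asyncio.gather(
        asyncio.to_thread(blocking_handler),
        asyncio.to_thread(blocking_handler),
    )
    return results, time.monotonic() - start

results, elapsed = asyncio.run(main())
print(f"2 blocking calls in {elapsed:.2f}s")  # ~1 s, not ~2 s
```

Even though each handler blocks its thread, the two calls overlap, which is why plain def endpoints don't stall the server.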