Consider a simple workflow like this:
from dask.distributed import Client
import time

with Client() as client:
    futs = client.map(time.sleep, list(range(10)))
The above code submits the futures and then almost immediately cancels them, because the context manager closes as soon as the block ends. It is possible to keep the context manager open until the tasks complete by calling client.gather inside the block, but that blocks further execution in the current process.
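For reference, this is roughly what the blocking variant looks like; client.gather keeps the block open until every task has finished:

from dask.distributed import Client
import time

with Client() as client:
    futs = client.map(time.sleep, list(range(10)))
    results = client.gather(futs)  # blocks here until all tasks have finished
# the client and its cluster are only closed after gather returns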
I am interested in submitting tasks to multiple clusters (e.g. local and distributed) within the same process, ideally without blocking that process. It's straightforward to do by explicitly defining the different clients and clusters, but is it also possible with context managers (one for each unique client/cluster)?
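A rough sketch of the explicit (non-context-manager) version I have in mind is below; the remote scheduler address is only a placeholder:

from dask.distributed import Client, LocalCluster, wait
import time

# explicit local cluster/client for one set of tasks
local_cluster = LocalCluster(n_workers=2)
local_client = Client(local_cluster)

# client for a separate, already-running distributed cluster
# (the address below is a placeholder)
remote_client = Client("tcp://scheduler-address:8786")

local_futs = local_client.map(time.sleep, range(5))
remote_futs = remote_client.map(time.sleep, range(5))

# ... the current process is free to do other work here ...

# block only once the results are actually needed
wait(local_futs)
wait(remote_futs)

local_client.close()
remote_client.close()
local_cluster.close()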
It might sound like a bit of an anti-pattern, but maybe there is a way to close the cluster only after all submitted futures have completed. I tried fire_and_forget and also tried passing shutdown_on_close=False, but the latter doesn't seem to be implemented.
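For context, this is roughly how I used fire_and_forget; as far as I can tell it keeps tasks alive even when their futures are released, but it does not keep a LocalCluster alive once the context manager exits:

from dask.distributed import Client, fire_and_forget
import time

with Client() as client:
    futs = client.map(time.sleep, list(range(10)))
    # asks the scheduler to run the tasks even if the futures are released,
    # but the LocalCluster is still torn down when the with-block exits
    fire_and_forget(futs)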
with contexts? - mdurant
jobs (to launch dask workers) left hanging in the queue which need to be cleared manually... it's a minor nuisance, so I thought maybe it can be fixed with a context manager, but I understand that it's an antipattern now. - SultanOrazbayev
weakref.finalize or atexit? Also, I think you can attach a callback to your future, to have something happen when it's done. - mdurant
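A minimal sketch of the atexit idea from the comments, assuming it is acceptable to tear the client down only at interpreter exit:

import atexit
import time
from dask.distributed import Client, wait

client = Client()  # created without a context manager, so nothing closes it early
futs = client.map(time.sleep, list(range(10)))

def _cleanup():
    # wait for any outstanding futures, then close the client (and its LocalCluster)
    wait(futs)
    client.close()

# run the cleanup at interpreter exit instead of relying on a with-block
atexit.register(_cleanup)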