I have a task in an Airflow DAG that requires 100 GB of RAM to complete successfully. My Cloud Composer environment has 3 nodes with 50 GB of memory each, and 3 workers (one running on each node). The issue is that this task runs on only one of the workers (so the maximum memory it can use is 50 GB), and therefore it fails with out-of-memory errors.
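For context, the task looks roughly like this (a minimal sketch, assuming Airflow 2.x and a PythonOperator; the DAG/task names and the callable are illustrative, not my actual code):

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def build_large_dataset():
    # Illustrative placeholder: a single-process computation whose
    # peak memory usage is around 100 GB.
    ...


with DAG(
    dag_id="memory_heavy_dag",
    start_date=datetime(2023, 1, 1),
    schedule_interval=None,
    catchup=False,
) as dag:
    heavy_task = PythonOperator(
        task_id="memory_heavy_task",
        python_callable=build_large_dataset,
    )
```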
Is there a way to make this task use memory from all the nodes (150 GB)? (Assume we can't split the task into smaller steps.)
Also, in Cloud Composer, can we make a worker span multiple nodes? (If so, I could force one worker to run across all three nodes and use 150 GB of memory.)