I have a long-running pipeline (it runs for weeks) that loads some tables with Data Factory and processes them with Databricks.
I also have another pipeline that runs each day for a couple of hours. However, the Databricks cluster doesn't seem to be powerful enough to run both pipelines simultaneously: when both are active, it throws what looks like a memory error ("Spark driver has stopped unexpectedly").
The daily pipeline has the highest priority, though, so ideally I would like to pause the long-running pipeline for around 3 hours, execute the daily trigger, and then resume the long-running pipeline.
Is it possible to do that?
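
The closest workaround I've come up with is not a real pause but cancelling the current run of the long-running pipeline and starting it again afterwards via the azure-mgmt-datafactory SDK, roughly like the sketch below (untested, and all the resource names / run id are placeholders for my setup):

    from azure.identity import DefaultAzureCredential
    from azure.mgmt.datafactory import DataFactoryManagementClient

    # Placeholder values -- replace with real subscription / resource names.
    SUBSCRIPTION_ID = "<subscription-id>"
    RESOURCE_GROUP = "<resource-group>"
    FACTORY_NAME = "<data-factory-name>"
    LONG_RUNNING_PIPELINE = "<long-running-pipeline-name>"
    LONG_RUNNING_RUN_ID = "<current-run-id>"  # run id of the active long-running execution

    client = DataFactoryManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

    # "Pause": cancel the current run of the long-running pipeline.
    client.pipeline_runs.cancel(RESOURCE_GROUP, FACTORY_NAME, LONG_RUNNING_RUN_ID)

    # ... let the daily trigger / pipeline run here for ~3 hours ...

    # "Resume": start the long-running pipeline again (from scratch, unfortunately).
    new_run = client.pipelines.create_run(RESOURCE_GROUP, FACTORY_NAME, LONG_RUNNING_PIPELINE)
    print(new_run.run_id)

The problem with this is that the long-running pipeline would restart from the beginning instead of resuming where it left off, which is why I'm asking whether a proper pause/resume is possible.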
Thanks in advance!

