I'm running Zeppelin 0.8.1 configured to submit Spark jobs to a YARN 2.7.5 cluster, with interpreters in both cluster mode (i.e. the ApplicationMaster, and therefore the driver, runs on YARN rather than on the Zeppelin host) and client mode.
The YARN applications started in client mode are killed immediately when I stop the Zeppelin server. But the jobs started in cluster mode turn into zombies and keep hogging all the resources in the YARN cluster (dynamic resource allocation is disabled).
Is there a way to make Zeppelin kill those applications when it shuts down, or any other way to solve this problem?
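For context: right now the only cleanup I know of is killing each leftover application by hand with the stock YARN CLI (`yarn application -list` / `yarn application -kill`). A minimal sketch of scripting that, assuming the standard `application_<clusterTimestamp>_<sequence>` ID format (the sample listing below is made up, not from a real cluster):

```python
import re
import subprocess

def parse_app_ids(listing):
    """Extract YARN application IDs from `yarn application -list` output."""
    return re.findall(r"application_\d+_\d+", listing)

def kill_apps(app_ids):
    """Kill each application; requires the Hadoop `yarn` CLI on PATH."""
    for app_id in app_ids:
        subprocess.run(["yarn", "application", "-kill", app_id], check=True)

# Sample captured listing (illustrative only)
sample = """
application_1546500000000_0007  Zeppelin  SPARK  zeppelin  default  RUNNING
application_1546500000000_0009  Zeppelin  SPARK  zeppelin  default  RUNNING
"""
print(parse_app_ids(sample))
```

Obviously this is a blunt instrument run after the fact; I'd much rather have Zeppelin (or YARN) do this automatically on shutdown.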