From the official Spark documentation (http://spark.apache.org/docs/1.2.0/running-on-yarn.html):
In yarn-cluster mode, the Spark driver runs inside an application master process which is managed by YARN on the cluster, and the client can go away after initiating the application.
Is there a way for the client to reconnect to the driver at some later point and collect the results?
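To illustrate the workflow I have in mind, here is a rough sketch (the application name and output path are just placeholders for this example): the client submits in yarn-cluster mode, disconnects, and later polls YARN for the application's status. As far as I can tell, this only tells me whether the job finished, not how to talk to the driver itself:

```shell
# Submit the job in yarn-cluster mode; the driver runs inside the
# YARN ApplicationMaster, so this client process can exit afterwards.
spark-submit \
  --master yarn-cluster \
  --name my-detached-job \
  my_job.jar

# ...later, from the same or another machine with YARN client configs:

# Find the application ID by name.
yarn application -list | grep my-detached-job

# Check whether it has finished (FinalStatus: SUCCEEDED/FAILED).
yarn application -status <application_id>

# The only way I see to get results is if the job wrote them
# somewhere durable, e.g. HDFS, and I read them back myself:
hadoop fs -cat /user/me/my-job-output/part-*
```

Writing results to HDFS and reading them back works, but it is not the same as reconnecting to the live driver, which is what I am asking about.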