As per my understanding, Spark does not need to be installed on every node in a YARN cluster. A Spark installation is only required on the node (usually a gateway/edge node) from which the spark-submit script is run.
As per the Spark programming guide:
"To make Spark runtime jars accessible from YARN side, you can specify spark.yarn.archive or spark.yarn.jars."
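For context, my rough understanding of how those settings would be used is something like the sketch below (the archive name, HDFS path, application class, and jar name are only placeholders, not my actual setup):

    # Package the Spark runtime jars so they sit at the root of the archive
    cd spark-2.0.1-bin-hadoop2.6/jars
    zip -q ../spark-jars.zip *.jar

    # Upload the archive to HDFS so YARN can cache and localize it for containers
    hdfs dfs -put ../spark-jars.zip /user/spark/spark-jars.zip

    # Point spark.yarn.archive at it (this could also go in spark-defaults.conf)
    spark-submit \
      --master yarn \
      --deploy-mode cluster \
      --conf spark.yarn.archive=hdfs:///user/spark/spark-jars.zip \
      --class com.example.MyApp \
      my-app.jar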
How do the libraries containing Spark code (i.e. the Spark runtime jars available in ../spark-2.0.1-bin-hadoop2.6/jars) get distributed to the physical worker nodes (where the executors are launched) in a YARN cluster?
Thank you.