Hi, I am launching my Spark application with the spark-submit script as follows:
spark-submit --master spark://Maatari-xxxxxxx.local:7077 --class EstimatorApp /Users/sul.maatari/IdeaProjects/Workshit/target/scala-2.11/Workshit-assembly-1.0.jar --deploy-mode cluster --executor-memory 15G num-executors 2
I have a Spark standalone cluster deployed on two nodes (my two laptops). The cluster is running fine. By default it sets 15G for the workers and 8 cores for the executors. Now I am experiencing the following strange behavior: although I am explicitly setting the executor memory, and this can also be seen in the environment variables of the SparkConf UI, the cluster UI says that my application is limited to 1024 MB of executor memory. This looks like the default 1G value, and I wonder why that is.
My application indeed fails because of a memory issue; I know that it needs a lot of memory.
One last point of confusion is the driver program. Given that I am in cluster mode, why does spark-submit not return immediately? I thought that since the driver is executed on the cluster, the client (i.e., spark-submit) should return right away. This further suggests that something is wrong with my configuration and with how things are being executed.
Can anyone help diagnose this?
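For reference, spark-submit treats everything that comes after the application jar as arguments to the application's main class, so flags placed after the jar path (as in the command above) are never seen by spark-submit itself. A corrected invocation, keeping the same paths and values as in the question, would put all flags before the jar:

```shell
spark-submit \
  --master spark://Maatari-xxxxxxx.local:7077 \
  --deploy-mode cluster \
  --class EstimatorApp \
  --executor-memory 15G \
  --num-executors 2 \
  /Users/sul.maatari/IdeaProjects/Workshit/target/scala-2.11/Workshit-assembly-1.0.jar
```

This ordering would also explain the two observed symptoms: with the flags after the jar, the executors fall back to the default 1 GB, and --deploy-mode cluster is never applied, so the driver runs in client mode and spark-submit blocks until the application finishes.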
spark.executorEnv.* settings are not picked up by Spark. Try setting --driver-memory to other values and test. Also, set the values through the program, or call toDebugString on the conf and inspect the output to debug further. – moon
num-executors 2 should be --num-executors 2 – WestCoastProjects
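Following the suggestion in the comments, one way to verify what Spark actually sees is to set the values programmatically and print the effective configuration. A minimal sketch, assuming Spark is on the classpath (the app name and memory value are taken from the question):

```scala
import org.apache.spark.SparkConf

object ConfDebug {
  def main(args: Array[String]): Unit = {
    // Setting the memory programmatically bypasses any
    // argument-ordering problems with spark-submit.
    val conf = new SparkConf()
      .setAppName("EstimatorApp")
      .set("spark.executor.memory", "15g")

    // toDebugString lists every key/value pair the conf actually
    // contains, which makes it easy to spot settings that were
    // silently dropped on the command line.
    println(conf.toDebugString)
  }
}
```

If spark.executor.memory does not appear in the printed output with the expected value, the setting never reached the configuration.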