I am running spark-master and spark-worker in separate Docker containers.
I can see them running:
✗ ps -ef | grep spark
root 3477 3441 0 Jan05 ? 00:04:17 /usr/lib/jvm/java-1.8-openjdk/jre/bin/java -cp /usr/local/spark/conf/:/usr/local/spark/jars/* -Xmx1g org.apache.spark.deploy.master.Master --ip node-master --port 7077 --webui-port 10080
I'm not sure whether my workers are using 1g or 8g. I do set the memory options via SparkConf:
conf.set("spark.executor.memory", "8g")
conf.set("spark.driver.memory", "8g")
I can see 8g in the web UI.
Am I really using 8g? Is there a way to change the -Xmx1g part that shows up in the command line under ps?
** Edit
I'm running a standalone cluster (not YARN) and using pyspark. It's not possible to spark-submit Python files in standalone cluster mode:
Currently, the standalone mode does not support cluster mode for Python applications.
http://spark.apache.org/docs/latest/submitting-applications.html
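So the app has to be submitted in client deploy mode instead. A sketch of what that submission might look like (node-master and app.py are placeholders for my actual host and script):

spark-submit \
  --master spark://node-master:7077 \
  --deploy-mode client \
  --driver-memory 8g \
  --executor-memory 8g \
  app.py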
