I have set up a Spark (1.6) standalone cluster with 1 master, and I added 3 machines as workers in the conf/slaves file. Even though I have allocated 4 GB of memory to each of my workers in Spark, why does the application use only 1024 MB when it is running? I would like it to use all 4 GB allocated to it. Help me figure out where and what I am doing wrong.
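For reference, my conf/slaves file looks roughly like this (the hostnames below are placeholders, not my actual machine names):

worker-1
worker-2
worker-3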
Below is a screenshot of the Spark master page (taken while the application is running via spark-submit), where the Memory column shows 1024.0 MB used in brackets next to 4.0 GB.
I also tried setting the --executor-memory 4G option with spark-submit, as suggested in How to change memory per node for apache spark worker, but it does not work.
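This is roughly the command I am using; the master URL, main class, and jar name below are placeholders, not my actual values:

./bin/spark-submit \
  --master spark://master-host:7077 \
  --executor-memory 4G \
  --class com.example.MyApp \
  my-app.jar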
These are the options I have set in the spark-env.sh file:
export SPARK_WORKER_CORES=3
export SPARK_WORKER_MEMORY=4g
export SPARK_WORKER_INSTANCES=2
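I understand the executor memory can also be set in conf/spark-defaults.conf instead of on the command line; a minimal sketch of what I believe that would look like:

# conf/spark-defaults.conf
# default executor memory for applications submitted to this cluster
spark.executor.memory   4g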