0 votes

I'm running Spark on a single node.

My application (a Java web app) is using less memory than is available. I found this thread useful:

http://apache-spark-user-list.1001560.n3.nabble.com/Setting-spark-executor-memory-problem-td11429.html

From the link:

For local mode you only have one executor, and this executor is your driver, so you need to set the driver's memory instead. That said, in local mode, by the time you run spark-submit, a JVM has already been launched with the default memory settings, so setting "spark.driver.memory" in your conf won't actually do anything for you. Instead, you need to run spark-submit as follows:

bin/spark-submit --driver-memory 2g --class your.class.here app.jar

It suggests passing the --driver-memory flag to bin/spark-submit for a jar file.

But I'm running a Maven web application. Can I run it with spark-submit?

I set these in spark-env.sh and ran source spark-env.sh, but there is still no change:

SPARK_EXECUTOR_MEMORY=10g
SPARK_WORKER_MEMORY=10g
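
To check what the web app's JVM actually gets, a quick heap probe from inside the application can help (plain java.lang, no Spark involved):

// Prints the maximum heap of the JVM this code runs in; in local mode
// this is the ceiling for Spark as well.
long maxHeap = Runtime.getRuntime().maxMemory();
System.out.println("Max heap: " + (maxHeap >> 20) + " MiB");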

2 Answers

1 vote

You can just set these parameters in Spark's configuration file (spark/conf/spark-defaults.conf). This is better than configuring them in spark-shell each time, unless you actually want to change them on every run.
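
As a sketch, assuming the stock file spark/conf/spark-defaults.conf (copy it from spark-defaults.conf.template if it doesn't exist yet), the entries would look like:

# spark/conf/spark-defaults.conf -- read by spark-submit and spark-shell
spark.driver.memory     10g
spark.executor.memory   10g

Note that, as the quote in the question points out, spark.driver.memory from this file only takes effect for JVMs launched by the Spark scripts; an already-running web app JVM won't pick it up.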

0 votes

Using spark-env.sh won't work in your setup, because a web application runs in its own runtime environment that doesn't see those env vars. You can't use spark-submit either.

Given that you're using local mode, you need to tune the JVM the web app runs in and stop worrying about Spark's own settings. Spark will pick up its memory from that JVM, since the driver, the executor, and your application all share the one JVM.
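
For example, assuming a Tomcat-hosted app (hypothetical; use your container's equivalent knob), you would raise that JVM's heap before starting the container:

export CATALINA_OPTS="-Xms2g -Xmx10g"

and create the context in local mode inside the app, along these lines:

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class SparkHolder {
    // In local mode the driver and the single executor live in this JVM,
    // so the memory Spark can use is bounded by the container's -Xmx
    // above, not by SPARK_EXECUTOR_MEMORY or spark.executor.memory.
    public static final JavaSparkContext SC = new JavaSparkContext(
            new SparkConf()
                    .setAppName("my-web-app")   // hypothetical app name
                    .setMaster("local[*]"));
}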