24 votes

I'm trying to run the Spark examples from Eclipse and I'm getting this generic error: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources.

The version I have is spark-1.6.2-bin-hadoop2.6. I started Spark using the ./sbin/start-master.sh command from a shell, and set my SparkConf like this:

SparkConf conf = new SparkConf().setAppName("Simple Application");
conf.setMaster("spark://My-Mac-mini.local:7077");

I'm not including any other code here because this error pops up with any of the examples I run. The machine runs Mac OS X and I'm pretty sure it has enough resources for the simplest examples.

What am I missing?

Are you able to run the examples outside of Eclipse, using spark-submit? - Knight71
I'm able to do ./bin/run-example SparkPi 10 successfully. - Eddy
run-example uses local[*] instead of the Spark master you started. Are you able to see the Spark master UI and all the worker nodes in it? - Knight71
At localhost:8080 I can see the running and completed applications. The workers line is empty. The spark master at the top of the page is spark://My-Mac-mini.local:7077 - Eddy
You should also start your worker, by running start-slave.sh <master-url> - Knight71

6 Answers

11 votes

The error indicates that your cluster has insufficient resources for the current job. Since you have not started the slaves (i.e., the workers), the cluster has no resources to allocate to your job. Starting the slaves will fix it.

`start-slave.sh spark://<master-ip>:7077`
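
For the single-machine setup from the question, the full sequence would look something like this (a sketch; the master URL is the one shown in the question, and the commands are run from the Spark install directory):

./sbin/start-master.sh                                # master UI at localhost:8080
./sbin/start-slave.sh spark://My-Mac-mini.local:7077  # register a worker with that master
# The worker should now show up under "Workers" in the master UI.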
10 votes

I had the same problem, and it was because the workers could not communicate with the driver.

You need to set spark.driver.port (and open that port on your driver machine), spark.driver.host, and spark.driver.bindAddress in your spark-submit invocation on the driver.
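For example, a sketch of what that looks like as spark-submit options; the ports, driver IP, class name, and jar below are placeholders, not values from the question:

./bin/spark-submit \
  --master spark://My-Mac-mini.local:7077 \
  --conf spark.driver.port=51000 \
  --conf spark.driver.host=192.168.1.10 \
  --conf spark.driver.bindAddress=0.0.0.0 \
  --class SimpleApp simple-app.jar
# spark.driver.port must be open on the driver machine, and spark.driver.host
# must be the driver's address as seen from the workers.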

4 votes

Solution to your problem

Reason

  1. The Spark master doesn't have any resources allocated to execute the job, such as a worker (slave) node.

Fix

  1. You have to start the slave node and connect it to the master node, like this: $SPARK_HOME/sbin> ./start-slave.sh spark://localhost:7077 (if your master runs on your local node)

Conclusion

  1. Start your master node and also the slave node before running spark-submit, so that enough resources are allocated to execute the job.

Alternate way

  1. Alternatively, you can make the necessary changes in the spark-env.sh file, which is not recommended (see the sketch below).
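
For reference, those changes go in conf/spark-env.sh. The variable names below are standard Spark standalone settings; the values are purely illustrative:

# conf/spark-env.sh (illustrative values)
SPARK_WORKER_CORES=2      # cores each worker offers to executors
SPARK_WORKER_MEMORY=2g    # memory each worker offers to executors
SPARK_WORKER_INSTANCES=1  # worker processes per machine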
1 vote

I had a standalone cluster set up on my local Mac machine with 1 master and 1 worker. The worker was connected to the master and everything seemed to be OK. However, to save memory I thought I would start the worker with only 500M of memory, and I ran into this problem. I restarted the worker with 1G of memory and it worked.

./start-slave.sh spark://{master_host}:{master_port} -c 2 -m 1G
0 votes

If you run your application from an IDE, and you have free resources on your workers, you need to do this:

1) First of all, configure the master and worker Spark nodes.

2) Specify the driver (PC) configuration so that the workers can return results to it.

SparkConf conf = new SparkConf()
            .setAppName("Test spark")
            .setMaster("spark://ip of your master node:port of your master node")
            .set("spark.blockManager.port", "10025")
            .set("spark.driver.blockManager.port", "10026")
            .set("spark.driver.port", "10027") // make all communication ports static (not necessary if you disabled firewalls or if your nodes are on a local network; otherwise you must open these ports in your firewall settings)
            .set("spark.cores.max", "12")
            .set("spark.executor.memory", "2g")
            .set("spark.driver.host", "ip of your driver (PC)"); // (necessary)
-1 votes

Try using "spark://127.0.0.1:7077" as the master address instead of the *.local name. Sometimes Java is not able to resolve .local addresses - for reasons I don't understand.
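
A quick way to test this from a shell, using the hostname from the question; if the name doesn't resolve, point setMaster at the loopback address instead:

# Check whether the .local hostname resolves at all.
ping -c 1 My-Mac-mini.local
# If it fails, use the loopback address in your code:
#   conf.setMaster("spark://127.0.0.1:7077");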