2 votes

I installed a Spark cluster in standalone mode with 2 nodes: the Spark master runs on the first node and a Spark worker on the other. When I run spark-shell on the worker node with word count code it runs fine, but when I run spark-shell on the master node it gives the following output:

WARN scheduler.TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources

No executor is ever launched to run the job. Even though there is a worker available to the Spark master, it still gives this problem. Any help is appreciated, thanks.

I am running the following command: ./bin/spark-shell --master spark://masterIP:7077, without specifying the deploy mode. – Rishikesh Teke
What are the entries in the slaves file under spark/conf? – FaigB

1 Answer

2 votes

You are using client deploy mode, so the best bet is that the executor nodes cannot connect to the driver port on the local machine. It could be a firewall issue or a problem with the advertised IP / hostname. Please make sure that:

  • spark.driver.bindAddress
  • spark.driver.host
  • spark.driver.port

are set to the expected values. Please refer to the networking section of the Spark documentation.
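
For example, these properties can be passed with --conf when launching the shell. The address and port below are placeholders for illustration, not values from the question:

  # 192.168.1.10 and 36000 are placeholders; use the driver machine's
  # address that the workers can actually reach, and make sure that
  # port is open in the firewall.
  ./bin/spark-shell --master spark://masterIP:7077 \
    --conf spark.driver.bindAddress=0.0.0.0 \
    --conf spark.driver.host=192.168.1.10 \
    --conf spark.driver.port=36000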

Less likely, it is a lack of resources. Please check that you are not requesting more resources than the workers provide.
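
If that is the case, you can cap the request explicitly when launching the shell. The values below are only an illustration; pick numbers that fit within what the worker advertises in the cluster UI:

  # Ask for less than the worker offers, e.g. 512 MB per executor
  # and at most 1 core for the whole application.
  ./bin/spark-shell --master spark://masterIP:7077 \
    --conf spark.executor.memory=512m \
    --conf spark.cores.max=1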