I am trying to set up a Spark cluster on DigitalOcean and have created a master and two slave nodes there. I have been unable to connect to the master via pyspark's setMaster() method, even though there are unused executors and a lot of RAM still available.
The error I get is: "Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources."
My spark-env.sh file on the master looks like this:
export SPARK_MASTER_HOST='<MASTER IP ADDRESS>'
export JAVA_HOME='/usr/lib/jvm/java-8-oracle'
export SPARK_LOCAL_IP='<MASTER IP ADDRESS>'
The spark-env.sh file on each slave looks like this:
export SPARK_MASTER_HOST='<MASTER IP ADDRESS>'
export JAVA_HOME='/usr/lib/jvm/java-8-oracle'
export SPARK_LOCAL_IP='<SLAVE IP ADDRESS>'
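For completeness, this is roughly how I start the cluster (paths assume a standard install under $SPARK_HOME; 7077 is the default standalone master port):

```shell
# On the master node:
$SPARK_HOME/sbin/start-master.sh

# On each slave node, register the worker with the master's spark:// URL:
$SPARK_HOME/sbin/start-slave.sh spark://<MASTER IP ADDRESS>:7077
```

After this, the workers do show up as ALIVE in the master's web UI on port 8080.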
I tried using the private IP for SPARK_MASTER_HOST as well as for SPARK_LOCAL_IP, but the error refuses to go away. What am I doing wrong?
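A minimal sketch of the connection code I am using, with a placeholder IP (203.0.113.10 stands in for the master's address) and illustrative resource settings; the pyspark lines are commented out here since they need a live cluster:

```python
# Placeholder for <MASTER IP ADDRESS>; must match SPARK_MASTER_HOST above.
master_ip = "203.0.113.10"
# The standalone master listens on port 7077 by default, and setMaster()
# expects the spark://HOST:PORT form, not a bare IP.
master_url = f"spark://{master_ip}:7077"

# Actual connection attempt (requires a running cluster):
# from pyspark import SparkConf, SparkContext
# conf = (SparkConf()
#         .setMaster(master_url)
#         .setAppName("test-app")
#         .set("spark.executor.memory", "1g")  # keep below what a worker offers
#         .set("spark.cores.max", "2"))
# sc = SparkContext(conf=conf)

print(master_url)
```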