I've been using Spark for a couple of weeks now on a cluster set up on Digital Ocean, with one master and one slave, but I keep hitting the same error: "Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources". I have to ask because no answer here or elsewhere on the internet has solved it for me.
I'm running this command, both from my own computer and from the master:
./bin/pyspark --master spark://<MASTER-IP>:7077
The shell launches correctly, but if I test it with this example:
sc.parallelize(range(10)).count()
I get the error.
I'm sure it's not a resource problem, because I can launch the shell from both nodes and create RDDs without any issue, I have the memory and core variables set in spark-env.sh, and the master and slave can reach each other over SSH. I've read that the cause could be the slave being unable to communicate back to the driver, which in my case would be either my computer or the master.
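For reference, the memory and core settings in my spark-env.sh look roughly like this (the values below are placeholders, not my exact numbers):

# conf/spark-env.sh on the slave (placeholder values)
export SPARK_WORKER_CORES=2      # cores the worker advertises to the master
export SPARK_WORKER_MEMORY=2g    # memory the worker advertises to the master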
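If the problem really is the slave failing to reach back to the driver, one thing I'm considering (just a guess on my part, not something I've confirmed) is pinning the driver's address explicitly when launching the shell, for example:

# guess: make the driver advertise an address the worker can actually reach
./bin/pyspark --master spark://<MASTER-IP>:7077 \
  --conf spark.driver.host=<IP-THE-WORKER-CAN-REACH> \
  --conf spark.driver.port=7078

Is that the right direction, or is something else going on?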