I am running Apache Spark in cluster mode using Apache Mesos. However, when I start spark-shell and run a simple test command (sc.parallelize(0 to 10, 8).count), I receive the following warning message:
16/03/10 11:50:55 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
If I check the Mesos WebUI, I can see that spark-shell is listed as a framework and that one slave (my own machine) is registered. How can I troubleshoot this?
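For reference, this is roughly what I run inside the shell, including a quick check of the resource-related settings the scheduler sees. The config keys are standard Spark settings; the values printed simply reflect however the shell was launched, so they may be unset in my case:

    // Inspect the settings that control how much the Mesos scheduler can ask for.
    // Missing values mean Spark falls back to its defaults.
    sc.getConf.getOption("spark.master").foreach(m => println(s"master: $m"))
    sc.getConf.getOption("spark.executor.memory").foreach(m => println(s"executor memory: $m"))
    sc.getConf.getOption("spark.cores.max").foreach(c => println(s"max cores: $c"))

    // The simple test that triggers the warning:
    sc.parallelize(0 to 10, 8).count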