My question is similar to other posts reporting "Initial job has not accepted any resources". I have read their suggestions and am still unable to submit the job from Java. I am wondering whether somebody with more experience installing Spark sees an obvious mistake or knows how to troubleshoot this.
Spark: check your cluster UI to ensure that workers are registered.
My configuration is as follows: (Fedora VM) MASTER: version 2.0.2, prebuilt with Hadoop. WORKER: a single instance.
(Java app on the Windows host) Client is a sample Java application, configured with:
conf.set("spark.cores.max","1");
conf.set("spark.shuffle.service.enabled", "false");
conf.set("spark.dynamicAllocation.enabled", "false");
Attached is a snapshot of the Spark UI. As far as I can tell, my job is received, submitted, and running. It also appears that I am not over-utilizing CPU or RAM.
The Java (client) console reports:
12:15:47.816 DEBUG parentName: , name: TaskSet_0, runningTasks: 0
12:15:48.815 DEBUG parentName: , name: TaskSet_0, runningTasks: 0
12:15:49.806 WARN Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
12:15:49.816 DEBUG parentName: , name: TaskSet_0, runningTasks: 0
12:15:50.816 DEBUG parentName: , name: TaskSet_0, runningTasks: 0
The Spark worker log reports:
16/11/22 12:16:34 INFO Worker: Asked to launch executor app-20161122121634-0012/0 for Simple Application
16/11/22 12:16:34 INFO SecurityManager: Changing modify acls groups to:
16/11/22 12:16:34 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(john); groups with view permissions: Set(); users with modify permissions: Set(john); groups with modify permissions: Set()
16/11/22 12:16:34 INFO ExecutorRunner: Launch command: "/apps/jdk1.8.0_101/jre/bin/java" "-cp " "/apps/spark-2.0.2-bin-hadoop2.7/conf/:/apps/spark-2.0.2-bin-hadoop2.7/jars/*" "-Xmx1024M" "-Dspark.driver.port=29015" "org.apache.spark.executor.CoarseGrainedExecutorBackend" "--driver-url" "spark://[email protected]:29015" "--executor-id" "0" "--hostname" "192.168.56.103" "--cores" "1" "--app-id" "app-20161122121634-0012" "--worker-url" "spark://[email protected]:38701"
If you see "Initial job has not accepted any resources", go to the Spark UI and check how many applications are submitted. My hunch is that one application is waiting while another is executing and consuming all the resources! Try killing the running application and see what happens. – Shiv4nsh
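If that hunch is right, one way to test it from the Java side (besides killing the running application from the UI) is to submit a trivial job whose resource request is small enough to fit next to whatever is already running. This is only a sketch; the master URL, application name, and memory value are illustrative, not taken from the question:

import java.util.Arrays;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class ResourceCappedTest {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf()
                .setAppName("resource-capped test")         // hypothetical name
                .setMaster("spark://192.168.56.103:7077");  // assumed master URL

        conf.set("spark.cores.max", "1");          // leave cores free for other applications
        conf.set("spark.executor.memory", "256m"); // well below the worker's advertised memory

        try (JavaSparkContext sc = new JavaSparkContext(conf)) {
            // If the master grants resources, this count returns immediately;
            // if it still hangs with the same warning, resource contention is not the cause.
            long n = sc.parallelize(Arrays.asList(1, 2, 3)).count();
            System.out.println("count = " + n);
        }
    }
}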