
I want to submit a Spark job to a remote YARN cluster using the spark-submit command. My client is a Windows machine and the cluster consists of a master and 4 slaves. I copied the Hadoop config files, namely core-site.xml and yarn-site.xml, from the cluster to the client machine and set the HADOOP_CONF_DIR variable in spark-env.sh to point to them.
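For reference, the relevant line in spark-env.sh looks roughly like this (the exact path below is illustrative, not copied from my machine):

# spark-env.sh on the Windows client; adjust the path to wherever the
# copied core-site.xml and yarn-site.xml actually live
export HADOOP_CONF_DIR=C:/Users/kmansour/Documents/hadoop-2.7.4/etc/hadoop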

However, when I submit a job using the following command:

spark-submit --jars hdfs:///user/kmansour/elevation/geotrellis-1.2.1-assembly.jar \
 --class tutorial.CalculateFlowDirection hdfs:///user/kmansour/elevation/demo_2.11-0.2.0.jar hdfs:///user/kmansour/elevation/TIF/DTM_1m_19_E_17_108_*.tif \
 --deploy-mode cluster \
 --master yarn

I get stuck with:

INFO yarn.Client: Application report for application_1519070657292_0088 (state: ACCEPTED)

This repeats until I finally get:

 diagnostics: Application application_1519070657292_0088 failed 2 times due to AM Container for appattempt_1519070657292_0088_000002 exited with  exitCode: 10
    For more detailed output, check application tracking page:http://node1:8088/cluster/app/application_1519070657292_0088Then, click on links to logs of each attempt.
    Diagnostics: Exception from container-launch.
    Container id: container_1519070657292_0088_02_000001
    Exit code: 10
    Stack trace: ExitCodeException exitCode=10:
            at org.apache.hadoop.util.Shell.runCommand(Shell.java:585)
            at org.apache.hadoop.util.Shell.run(Shell.java:482)
            at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:776)
            at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:212)
            at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
            at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
            at java.util.concurrent.FutureTask.run(FutureTask.java:266)
            at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
            at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
            at java.lang.Thread.run(Thread.java:748)

When I check the application tracking page, this is what I get on stderr:

18/03/13 14:48:05 INFO util.SignalUtils: Registered signal handler for TERM
18/03/13 14:48:05 INFO util.SignalUtils: Registered signal handler for HUP
18/03/13 14:48:05 INFO util.SignalUtils: Registered signal handler for INT
18/03/13 14:48:06 INFO yarn.ApplicationMaster: Preparing Local resources
18/03/13 14:48:08 INFO yarn.ApplicationMaster: ApplicationAttemptId: appattempt_1519070657292_0088_000002
18/03/13 14:48:08 INFO spark.SecurityManager: Changing view acls to: kmansour
18/03/13 14:48:08 INFO spark.SecurityManager: Changing modify acls to: kmansour
18/03/13 14:48:08 INFO spark.SecurityManager: Changing view acls groups to: 
18/03/13 14:48:08 INFO spark.SecurityManager: Changing modify acls groups to: 
18/03/13 14:48:08 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(kmansour); groups with view permissions: Set(); users  with modify permissions: Set(kmansour); groups with modify permissions: Set()
18/03/13 14:48:08 INFO yarn.ApplicationMaster: Waiting for Spark driver to be reachable.
18/03/13 14:50:15 ERROR yarn.ApplicationMaster: Failed to connect to driver at 132.156.9.98:50687, retrying ...
18/03/13 14:50:15 ERROR yarn.ApplicationMaster: Uncaught exception: 
org.apache.spark.SparkException: Failed to connect to driver!
    at org.apache.spark.deploy.yarn.ApplicationMaster.waitForSparkDriver(ApplicationMaster.scala:577)
    at org.apache.spark.deploy.yarn.ApplicationMaster.runExecutorLauncher(ApplicationMaster.scala:433)
    at org.apache.spark.deploy.yarn.ApplicationMaster.run(ApplicationMaster.scala:256)
    at org.apache.spark.deploy.yarn.ApplicationMaster$$anonfun$main$1.apply$mcV$sp(ApplicationMaster.scala:764)
    at org.apache.spark.deploy.SparkHadoopUtil$$anon$2.run(SparkHadoopUtil.scala:67)
    at org.apache.spark.deploy.SparkHadoopUtil$$anon$2.run(SparkHadoopUtil.scala:66)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
    at org.apache.spark.deploy.SparkHadoopUtil.runAsSparkUser(SparkHadoopUtil.scala:66)
    at org.apache.spark.deploy.yarn.ApplicationMaster$.main(ApplicationMaster.scala:762)
    at org.apache.spark.deploy.yarn.ExecutorLauncher$.main(ApplicationMaster.scala:785)
    at org.apache.spark.deploy.yarn.ExecutorLauncher.main(ApplicationMaster.scala)
18/03/13 14:50:15 INFO yarn.ApplicationMaster: Final app status: FAILED, exitCode: 10, (reason: Uncaught exception: org.apache.spark.SparkException: Failed to connect to driver!)
18/03/13 14:50:16 INFO yarn.ApplicationMaster: Unregistering ApplicationMaster with FAILED (diag message: Uncaught exception: org.apache.spark.SparkException: Failed to connect to driver!)
18/03/13 14:50:16 INFO yarn.ApplicationMaster: Deleting staging directory hdfs://132.156.9.142:8020/user/kmansour/.sparkStaging/application_1519070657292_0088
18/03/13 14:50:16 INFO util.ShutdownHookManager: Shutdown hook called

The IP address of my master node is 132.156.9.142 and the IP address of my client is 132.156.9.98. The log shows that the application master is attempting to connect to the driver on the client, even though I explicitly passed --deploy-mode cluster.

Shouldn't the driver be on a node in the cluster?

These are the contents of my config files:

spark-defaults.conf:

spark.eventLog.enabled           true
spark.eventLog.dir               hdfs://132.156.9.142:8020/events
spark.history.fs.logDirectory    hdfs://132.156.9.142:8020/events
spark.serializer                 org.apache.spark.serializer.KryoSerializer
spark.driver.cores               2
spark.driver.memory              5g
spark.executor.extraJavaOptions  -XX:+PrintGCDetails -Dkey=value -Dnumbers="one two three"
spark.executor.instances         4
spark.executor.cores             2
spark.executor.memory            6g
spark.yarn.am.memory             2g
spark.yarn.jars                  hdfs://node1:8020/jars/*.jar

yarn-site.xml:

<configuration>
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>node1</value>
    </property>
    <property>
        <name>yarn.nodemanager.resource.memory-mb</name>
        <value>8192</value>
    </property>
    <property>
        <name>yarn.scheduler.minimum-allocation-mb</name>
        <value>1024</value>
    </property>
    <property>
        <name>yarn.scheduler.maximum-allocation-mb</name>
        <value>7168</value>
    </property>
    <property>
        <name>yarn.nodemanager.resource.cpu-vcores</name>
        <value>2</value>
    </property>
    <property>
        <name>yarn.nodemanager.pmem-check-enabled</name>
        <value>false</value>
    </property>
    <property>
        <name>yarn.nodemanager.vmem-check-enabled</name>
        <value>false</value>
    </property>
    <property>
        <name>yarn.nodemanager.vmem-pmem-ratio</name>
        <value>5</value>
    </property>
</configuration>

core-site.xml:

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://132.156.9.142:8020</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>C:\Users\kmansour\Documents\hadoop-2.7.4\tmp</value>
    </property>
</configuration>

I am very new to all of this and perhaps my reasoning is flawed; any input or suggestions would help.


1 Answer


You need to change the order of the parameters passed to spark-submit. In your command:

spark-submit --jars hdfs:///user/kmansour/elevation/geotrellis-1.2.1-assembly.jar \
 --class tutorial.CalculateFlowDirection hdfs:///user/kmansour/elevation/demo_2.11-0.2.0.jar hdfs:///user/kmansour/elevation/TIF/DTM_1m_19_E_17_108_*.tif \
 --deploy-mode cluster \
 --master yarn

Spark runs in its default mode (probably yarn-client), and your --deploy-mode and --master flags are passed as application arguments, because they come after the application JAR: spark-submit treats everything after the primary JAR as arguments to your main class. Change it to:

spark-submit --jars hdfs:///user/kmansour/elevation/geotrellis-1.2.1-assembly.jar \
 --deploy-mode cluster \
 --master yarn \
 --class tutorial.CalculateFlowDirection hdfs:///user/kmansour/elevation/demo_2.11-0.2.0.jar hdfs:///user/kmansour/elevation/TIF/DTM_1m_19_E_17_108_*.tif

and you will get a true yarn-cluster deployment.
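With the flags in front of the application JAR, spark-submit parses them itself: the driver is then launched inside the ApplicationMaster on a cluster node, so the AM no longer has to reach back to the Windows client at 132.156.9.98, which is exactly the connection that was timing out in your logs.

If you want to confirm this from inside the application, here is a minimal sketch (ArgCheck is a hypothetical stand-in, not your actual tutorial.CalculateFlowDirection class):

import org.apache.spark.sql.SparkSession

object ArgCheck {
  def main(args: Array[String]): Unit = {
    // With the original flag ordering, this would print the TIF path followed by
    // "--deploy-mode", "cluster", "--master", "yarn" -- showing that the flags
    // were swallowed as application arguments instead of being parsed by spark-submit.
    args.foreach(println)

    val spark = SparkSession.builder().getOrCreate()
    // Reports "client" or "cluster", depending on how spark-submit actually ran the job.
    println("deploy mode: " + spark.conf.get("spark.submit.deployMode", "unknown"))
    spark.stop()
  }
}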