I'm new to Bluemix. I created the Apache Spark service and tried to submit a simple hello-world jar through spark-submit, following this guide: https://console.ng.bluemix.net/docs/services/AnalyticsforApacheSpark/index-gentopic3.html#genTopProcId4
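For context, the command I ran was along the lines of the one in that guide; the class name, jar path, and vcap.json contents below are placeholders rather than my real values:

```
./spark-submit.sh \
  --vcap ./vcap.json \
  --deploy-mode cluster \
  --class HelloSpark \
  ./target/scala-2.11/hellospark_2.11-1.0.jar
```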
After submitting the jar, the log file says:
Submit job result:
{ "action" : "CreateSubmissionResponse",
  "message" : "Driver successfully submitted as driver-20170602xxxxxxxxxxx",
  "serverSparkVersion" : "2.0.2",
  "submissionId" : "driver-20170602xxxxxxxxxxx",
  "success" : true }
Submission ID: driver-20170602xxxxxxxxxxx
Job submitted : driver-driver-20170602xxxxxxxxxxx
Polling job status. Poll #1. Getting status
==== Failed Status output =====================================================
ERROR: Job failed.
spark-submit log file: spark-submit_1496403637550663675.log
View job's stdout log at stdout_1496403637550663675
View job's stderr log at stderr_1496403637550663675
What may be the problem in this case?
Also, I see people talking about using a notebook and Jupyter for accessing/running Spark jobs, but I don't see a notebook/Jupyter option on my dashboard.
Thank you for your input.
Running curl on the stdout log (https://spark.eu-gb.bluemix.net/tenant/data/workdir/driver-20170614074046xxxxxxxxx277e6a/stdout) returns: "no extra configuration defined"
But I found the error message below on stderr:
log4j:ERROR Could not find value for key log4j.appender.FILE
log4j:ERROR Could not instantiate appender named "FILE".
ERROR deploy.ego.EGOClusterDriverWrapper: Uncaught exception:
java.nio.file.NoSuchFileException: /gpfs/fs01/user/sd74-836f4292ca6442xxxxxxxx/data/e717e66fe44f5a1ea7eec81cbd/hellospark_2.11-1.0.jar
    at sun.nio.fs.UnixException.translateToIOException(UnixException.java:98)
    at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:114)
    at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:119)
    at sun.nio.fs.UnixCopyFile.copy(UnixCopyFile.java:538)
    at sun.nio.fs.UnixFileSystemProvider.copy(UnixFileSystemProvider.java:265)
    at java.nio.file.Files.copy(Files.java:1285)
    at org.apache.spark.util.Utils$.org$apache$spark$util$Utils$$copyRecursive(Utils.scala:629)
    at org.apache.spark.util.Utils$.copyFile(Utils.scala:600)
    at org.apache.spark.util.Utils$.doFetchFile(Utils.scala:685)
    at org.apache.spark.util.Utils$.fetchFile(Utils.scala:484)
    at org.apache.spark.deploy.ego.EGOClusterDriverWrapper$$anonfun$startUserClass$2.apply(EGOClusterDriverWrapper.scala:411)
    at org.apache.spark.deploy.ego.EGOClusterDriverWrapper$$anonfun$startUserClass$2.apply(EGOClusterDriverWrapper.scala:404)
    at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
    at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
    at org.apache.spark.deploy.ego.EGOClusterDriverWrapper.startUserClass(EGOClusterDriverWrapper.scala:404)
    at org.apache.spark.deploy.ego.EGOClusterDriverWrapper.runDriver(EGOClusterDriverWrapper.scala:295)
    at org.apache.spark.deploy.ego.EGOClusterDriverWrapper.run(EGOClusterDriverWrapper.scala:218)
    at org.apache.spark.deploy.ego.EGOClusterDriverWrapper$$anonfun$receive$1$$anon$1.run(EGOClusterDriverWrapper.scala:144)
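The NoSuchFileException says the driver could not find the application jar in its cluster working directory, which suggests the jar was never uploaded or the local path given to spark-submit.sh was wrong. As a pre-flight sanity check (my own sketch, not part of the Bluemix tooling), one can verify the local jar exists and is a valid archive before submitting:

```python
import os
import sys
import zipfile

def check_app_jar(path):
    """Return a list of problems with the jar path; empty list means it looks OK."""
    problems = []
    if not os.path.isfile(path):
        # The file is missing locally, so spark-submit has nothing to upload.
        problems.append("jar not found at %s" % path)
        return problems
    if not zipfile.is_zipfile(path):
        # Jars are zip archives; a truncated or corrupt file fails this check.
        problems.append("%s is not a valid jar/zip archive" % path)
    return problems

if __name__ == "__main__":
    jar = sys.argv[1] if len(sys.argv) > 1 else "hellospark_2.11-1.0.jar"
    issues = check_app_jar(jar)
    if issues:
        for msg in issues:
            print("ERROR:", msg)
        sys.exit(1)
    print("jar looks OK:", jar)
```

If this check passes locally but the driver still reports NoSuchFileException, the problem is on the upload side rather than the jar itself.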