
I have a Spark cluster deployed with bdutil on Google Cloud. I installed a GUI on my driver instance so that I can run IntelliJ from it and run my Spark jobs interactively.

The first issue I faced was that spark-env.sh and core-site.xml are not picked up at all when running from IntelliJ. I ended up setting the configuration manually in Scala by copying values from those files. Is there a way to avoid that?
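(I imagine something like the sketch below, loading the cluster's own XML files instead of copying values by hand, ought to be possible, but I haven't confirmed that it works from IntelliJ; the path is a guess based on my bdutil install.)

import org.apache.hadoop.fs.Path

// Hypothetical alternative to copying values: let Hadoop read the cluster's config file.
// The path is an assumption from a typical bdutil layout and may differ on your machine.
sc.hadoopConfiguration.addResource(new Path("/home/hadoop/hadoop-install/conf/core-site.xml"))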

The remaining problem is that even though the GCS connector seems to "see" the folder I set as the source, every time it tries to read the actual files in that folder I get a java.io.EOFException.

Here's the code for my test:

import org.apache.spark.{SparkConf, SparkContext}

object SparkBasicTest {

  def main(args: Array[String]) {

    val conf = new SparkConf().setAppName("Simple Application")
    conf.setMaster("spark://research-m:7077")

    conf.set("spark.akka.frameSize", "512")
    conf.set("spark.driver.maxResultSize", "1631m")
    conf.set("spark.yarn.executor.memoryOverhead", "384")
    conf.set("spark.driver.memory", "3263m")
    conf.set("spark.executor.memory", "10444m")
    conf.set("spark.driver.extraClassPath", ":/home/hadoop/hadoop-install/lib/gcs-connector-1.4.0-hadoop1.jar")

    val path = "STAGE/out/scored"

    val sc = new SparkContext(conf)

    // Values copied by hand from core-site.xml, since it is not picked up when running from IntelliJ
    sc.hadoopConfiguration.set("fs.gs.impl", "com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem")
    sc.hadoopConfiguration.set("fs.AbstractFileSystem.gs.impl", "com.google.cloud.hadoop.fs.gcs.GoogleHadoopFS")
    sc.hadoopConfiguration.set("fs.gs.project.id", "xxxxx")
    sc.hadoopConfiguration.set("fs.gs.system.bucket", "yyyyy")
    sc.hadoopConfiguration.set("fs.gs.metadata.cache.directory", "/hadoop_gcs_connector_metadata_cache")
    sc.hadoopConfiguration.set("fs.gs.metadata.cache.enable", "true")
    sc.hadoopConfiguration.set("fs.gs.metadata.cache.type", "FILESYSTEM_BACKED")
    sc.hadoopConfiguration.set("fs.gs.working.dir", "/")
    sc.hadoopConfiguration.set("fs.default.name", "gs://yyyyyy/")
    sc.hadoopConfiguration.set("fs.defaultFS", "gs://yyyyyy/")
    sc.hadoopConfiguration.set("hadoop.tmp.dir", "/hadoop/tmp")
    sc.hadoopConfiguration.set("dfs.datanode.data.dir.perm", "755")

    // Read the files from GCS and count the lines
    val lines = sc.textFile(path)
    val result = lines.count()
  }
}

And the output I get after running it:

Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
15/07/27 12:00:47 INFO SparkContext: Running Spark version 1.4.0
15/07/27 12:00:48 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/07/27 12:00:48 INFO SecurityManager: Changing view acls to: antvoice
15/07/27 12:00:48 INFO SecurityManager: Changing modify acls to: antvoice
15/07/27 12:00:48 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(antvoice); users with modify permissions: Set(antvoice)
15/07/27 12:00:49 INFO Slf4jLogger: Slf4jLogger started
15/07/27 12:00:49 INFO Remoting: Starting remoting
15/07/27 12:00:50 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://[email protected]:45952]
15/07/27 12:00:50 INFO Utils: Successfully started service 'sparkDriver' on port 45952.
15/07/27 12:00:50 INFO SparkEnv: Registering MapOutputTracker
15/07/27 12:00:50 INFO SparkEnv: Registering BlockManagerMaster
15/07/27 12:00:50 INFO DiskBlockManager: Created local directory at /mnt/pd1/hadoop/spark/tmp/spark-dbaf72cb-599b-40c9-a9f8-ad9ede2b0654/blockmgr-24fd090a-b9df-4754-8022-ccaf8800ca2a
15/07/27 12:00:50 INFO MemoryStore: MemoryStore started with capacity 1566.8 MB
15/07/27 12:00:50 INFO HttpFileServer: HTTP File server directory is /mnt/pd1/hadoop/spark/tmp/spark-dbaf72cb-599b-40c9-a9f8-ad9ede2b0654/httpd-27e69b24-ad3d-4019-9bf7-37649c2ebc8e
15/07/27 12:00:50 INFO HttpServer: Starting HTTP Server
15/07/27 12:00:50 INFO Utils: Successfully started service 'HTTP file server' on port 57505.
15/07/27 12:00:50 INFO SparkEnv: Registering OutputCommitCoordinator
15/07/27 12:00:56 INFO Utils: Successfully started service 'SparkUI' on port 4040.
15/07/27 12:00:56 INFO SparkUI: Started SparkUI at http://10.240.63.109:4040
15/07/27 12:00:56 INFO AppClient$ClientActor: Connecting to master akka.tcp://sparkMaster@research-m:7077/user/Master...
15/07/27 12:00:57 INFO SparkDeploySchedulerBackend: Connected to Spark cluster with app ID app-20150727120057-0000
15/07/27 12:00:57 INFO AppClient$ClientActor: Executor added: app-20150727120057-0000/0 on worker-20150727114108-10.240.205.199-50284 (10.240.205.199:50284) with 2 cores
15/07/27 12:00:57 INFO SparkDeploySchedulerBackend: Granted executor ID app-20150727120057-0000/0 on hostPort 10.240.205.199:50284 with 2 cores, 10.2 GB RAM
15/07/27 12:00:57 INFO AppClient$ClientActor: Executor updated: app-20150727120057-0000/0 is now RUNNING
15/07/27 12:00:57 INFO AppClient$ClientActor: Executor updated: app-20150727120057-0000/0 is now LOADING
15/07/27 12:00:57 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 38947.
15/07/27 12:00:57 INFO NettyBlockTransferService: Server created on 38947
15/07/27 12:00:57 INFO BlockManagerMaster: Trying to register BlockManager
15/07/27 12:00:57 INFO BlockManagerMasterEndpoint: Registering block manager 10.240.63.109:38947 with 1566.8 MB RAM, BlockManagerId(driver, 10.240.63.109, 38947)
15/07/27 12:00:57 INFO BlockManagerMaster: Registered BlockManager
15/07/27 12:00:57 INFO SparkDeploySchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.0
15/07/27 12:00:57 INFO deprecation: fs.default.name is deprecated. Instead, use fs.defaultFS
15/07/27 12:00:58 INFO MemoryStore: ensureFreeSpace(112832) called with curMem=0, maxMem=1642919362
15/07/27 12:00:58 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 110.2 KB, free 1566.7 MB)
15/07/27 12:00:58 INFO MemoryStore: ensureFreeSpace(10627) called with curMem=112832, maxMem=1642919362
15/07/27 12:00:58 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 10.4 KB, free 1566.7 MB)
15/07/27 12:00:58 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 10.240.63.109:38947 (size: 10.4 KB, free: 1566.8 MB)
15/07/27 12:00:58 INFO SparkContext: Created broadcast 0 from textFile at SparkBasicTest.scala:36
15/07/27 12:00:58 INFO GoogleHadoopFileSystemBase: GHFS version: 1.4.0-hadoop1
15/07/27 12:01:00 INFO SparkDeploySchedulerBackend: Registered executor: AkkaRpcEndpointRef(Actor[akka.tcp://[email protected]:54716/user/Executor#396919943]) with ID 0
15/07/27 12:01:00 INFO BlockManagerMasterEndpoint: Registering block manager 10.240.205.199:36835 with 5.3 GB RAM, BlockManagerId(0, 10.240.205.199, 36835)
15/07/27 12:01:02 INFO FileInputFormat: Total input paths to process : 47
15/07/27 12:01:02 INFO SparkContext: Starting job: count at SparkBasicTest.scala:37
15/07/27 12:01:02 INFO DAGScheduler: Got job 0 (count at SparkBasicTest.scala:37) with 47 output partitions (allowLocal=false)
15/07/27 12:01:02 INFO DAGScheduler: Final stage: ResultStage 0(count at SparkBasicTest.scala:37)
15/07/27 12:01:02 INFO DAGScheduler: Parents of final stage: List()
15/07/27 12:01:02 INFO DAGScheduler: Missing parents: List()
15/07/27 12:01:02 INFO DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[1] at textFile at SparkBasicTest.scala:36), which has no missing parents
15/07/27 12:01:02 INFO MemoryStore: ensureFreeSpace(2968) called with curMem=123459, maxMem=1642919362
15/07/27 12:01:02 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 2.9 KB, free 1566.7 MB)
15/07/27 12:01:02 INFO MemoryStore: ensureFreeSpace(1752) called with curMem=126427, maxMem=1642919362
15/07/27 12:01:02 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 1752.0 B, free 1566.7 MB)
15/07/27 12:01:02 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on 10.240.63.109:38947 (size: 1752.0 B, free: 1566.8 MB)
15/07/27 12:01:02 INFO SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:874
15/07/27 12:01:02 INFO DAGScheduler: Submitting 47 missing tasks from ResultStage 0 (MapPartitionsRDD[1] at textFile at SparkBasicTest.scala:36)
15/07/27 12:01:02 INFO TaskSchedulerImpl: Adding task set 0.0 with 47 tasks
15/07/27 12:01:03 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, 10.240.205.199, PROCESS_LOCAL, 1416 bytes)
15/07/27 12:01:03 INFO TaskSetManager: Starting task 1.0 in stage 0.0 (TID 1, 10.240.205.199, PROCESS_LOCAL, 1416 bytes)
15/07/27 12:01:03 INFO TaskSetManager: Starting task 2.0 in stage 0.0 (TID 2, 10.240.205.199, PROCESS_LOCAL, 1416 bytes)
15/07/27 12:01:03 WARN TaskSetManager: Lost task 1.0 in stage 0.0 (TID 1, 10.240.205.199): java.io.EOFException
    at java.io.ObjectInputStream$BlockDataInputStream.readFully(ObjectInputStream.java:2744)
    at java.io.ObjectInputStream.readFully(ObjectInputStream.java:1032)
    at org.apache.hadoop.io.DataOutputBuffer$Buffer.write(DataOutputBuffer.java:63)
    at org.apache.hadoop.io.DataOutputBuffer.write(DataOutputBuffer.java:101)
    at org.apache.hadoop.io.UTF8.readChars(UTF8.java:216)
    at org.apache.hadoop.io.UTF8.readString(UTF8.java:208)
    at org.apache.hadoop.mapred.FileSplit.readFields(FileSplit.java:87)
    at org.apache.hadoop.io.ObjectWritable.readObject(ObjectWritable.java:237)
    at org.apache.hadoop.io.ObjectWritable.readFields(ObjectWritable.java:66)
    at org.apache.spark.SerializableWritable$$anonfun$readObject$1.apply$mcV$sp(SerializableWritable.scala:45)
    at org.apache.spark.util.Utils$.tryOrIOException(Utils.scala:1239)
    at org.apache.spark.SerializableWritable.readObject(SerializableWritable.scala:41)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1017)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1893)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
    at java.io.ObjectInputStream.readObject(ObjectInputStream.java:370)
    at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:69)
    at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:95)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:194)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)

15/07/27 12:01:03 INFO TaskSetManager: Starting task 1.1 in stage 0.0 (TID 3, 10.240.205.199, PROCESS_LOCAL, 1416 bytes)
15/07/27 12:01:03 INFO TaskSetManager: Starting task 3.0 in stage 0.0 (TID 4, 10.240.205.199, PROCESS_LOCAL, 1416 bytes)
15/07/27 12:01:03 INFO TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0) on executor 10.240.205.199: java.io.EOFException (null) [duplicate 1]
15/07/27 12:01:03 INFO TaskSetManager: Lost task 2.0 in stage 0.0 (TID 2) on executor 10.240.205.199: java.io.EOFException (null) [duplicate 2]
15/07/27 12:01:03 INFO TaskSetManager: Starting task 2.1 in stage 0.0 (TID 5, 10.240.205.199, PROCESS_LOCAL, 1416 bytes)
15/07/27 12:01:03 INFO TaskSetManager: Starting task 0.1 in stage 0.0 (TID 6, 10.240.205.199, PROCESS_LOCAL, 1416 bytes)
15/07/27 12:01:03 INFO TaskSetManager: Lost task 3.0 in stage 0.0 (TID 4) on executor 10.240.205.199: java.io.EOFException (null) [duplicate 3]
15/07/27 12:01:03 INFO TaskSetManager: Lost task 1.1 in stage 0.0 (TID 3) on executor 10.240.205.199: java.io.EOFException (null) [duplicate 4]
15/07/27 12:01:03 INFO TaskSetManager: Starting task 1.2 in stage 0.0 (TID 7, 10.240.205.199, PROCESS_LOCAL, 1416 bytes)
15/07/27 12:01:03 INFO TaskSetManager: Lost task 2.1 in stage 0.0 (TID 5) on executor 10.240.205.199: java.io.EOFException (null) [duplicate 5]
15/07/27 12:01:03 INFO TaskSetManager: Starting task 2.2 in stage 0.0 (TID 8, 10.240.205.199, PROCESS_LOCAL, 1416 bytes)
15/07/27 12:01:03 INFO TaskSetManager: Lost task 0.1 in stage 0.0 (TID 6) on executor 10.240.205.199: java.io.EOFException (null) [duplicate 6]
15/07/27 12:01:03 INFO TaskSetManager: Starting task 0.2 in stage 0.0 (TID 9, 10.240.205.199, PROCESS_LOCAL, 1416 bytes)
15/07/27 12:01:03 INFO TaskSetManager: Lost task 1.2 in stage 0.0 (TID 7) on executor 10.240.205.199: java.io.EOFException (null) [duplicate 7]
15/07/27 12:01:03 INFO TaskSetManager: Starting task 1.3 in stage 0.0 (TID 10, 10.240.205.199, PROCESS_LOCAL, 1416 bytes)
15/07/27 12:01:03 INFO TaskSetManager: Lost task 2.2 in stage 0.0 (TID 8) on executor 10.240.205.199: java.io.EOFException (null) [duplicate 8]
15/07/27 12:01:03 INFO TaskSetManager: Starting task 2.3 in stage 0.0 (TID 11, 10.240.205.199, PROCESS_LOCAL, 1416 bytes)
15/07/27 12:01:03 INFO TaskSetManager: Lost task 0.2 in stage 0.0 (TID 9) on executor 10.240.205.199: java.io.EOFException (null) [duplicate 9]
15/07/27 12:01:03 INFO TaskSetManager: Lost task 1.3 in stage 0.0 (TID 10) on executor 10.240.205.199: java.io.EOFException (null) [duplicate 10]
15/07/27 12:01:03 ERROR TaskSetManager: Task 1 in stage 0.0 failed 4 times; aborting job
15/07/27 12:01:03 INFO TaskSetManager: Lost task 2.3 in stage 0.0 (TID 11) on executor 10.240.205.199: java.io.EOFException (null) [duplicate 11]
15/07/27 12:01:03 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool 
15/07/27 12:01:03 INFO TaskSchedulerImpl: Cancelling stage 0
15/07/27 12:01:03 INFO DAGScheduler: ResultStage 0 (count at SparkBasicTest.scala:37) failed in 0.319 s
15/07/27 12:01:03 INFO DAGScheduler: Job 0 failed: count at SparkBasicTest.scala:37, took 0.437413 s
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 0.0 failed 4 times, most recent failure: Lost task 1.3 in stage 0.0 (TID 10, 10.240.205.199): java.io.EOFException
    at java.io.ObjectInputStream$BlockDataInputStream.readFully(ObjectInputStream.java:2744)
    at java.io.ObjectInputStream.readFully(ObjectInputStream.java:1032)
    at org.apache.hadoop.io.DataOutputBuffer$Buffer.write(DataOutputBuffer.java:63)
    at org.apache.hadoop.io.DataOutputBuffer.write(DataOutputBuffer.java:101)
    at org.apache.hadoop.io.UTF8.readChars(UTF8.java:216)
    at org.apache.hadoop.io.UTF8.readString(UTF8.java:208)
    at org.apache.hadoop.mapred.FileSplit.readFields(FileSplit.java:87)
    at org.apache.hadoop.io.ObjectWritable.readObject(ObjectWritable.java:237)
    at org.apache.hadoop.io.ObjectWritable.readFields(ObjectWritable.java:66)
    at org.apache.spark.SerializableWritable$$anonfun$readObject$1.apply$mcV$sp(SerializableWritable.scala:45)
    at org.apache.spark.util.Utils$.tryOrIOException(Utils.scala:1239)
    at org.apache.spark.SerializableWritable.readObject(SerializableWritable.scala:41)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1017)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1893)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
    at java.io.ObjectInputStream.readObject(ObjectInputStream.java:370)
    at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:69)
    at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:95)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:194)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)

Driver stacktrace:
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1266)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1257)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1256)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1256)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:730)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:730)
    at scala.Option.foreach(Option.scala:236)
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:730)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1450)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1411)
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
15/07/27 12:01:03 INFO SparkContext: Invoking stop() from shutdown hook
15/07/27 12:01:03 INFO SparkUI: Stopped Spark web UI at http://10.240.63.109:4040
15/07/27 12:01:03 INFO DAGScheduler: Stopping DAGScheduler
15/07/27 12:01:03 INFO SparkDeploySchedulerBackend: Shutting down all executors
15/07/27 12:01:03 INFO SparkDeploySchedulerBackend: Asking each executor to shut down
15/07/27 12:01:03 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
15/07/27 12:01:03 INFO Utils: path = /mnt/pd1/hadoop/spark/tmp/spark-dbaf72cb-599b-40c9-a9f8-ad9ede2b0654/blockmgr-24fd090a-b9df-4754-8022-ccaf8800ca2a, already present as root for deletion.
15/07/27 12:01:03 INFO MemoryStore: MemoryStore cleared
15/07/27 12:01:03 INFO BlockManager: BlockManager stopped
15/07/27 12:01:03 INFO BlockManagerMaster: BlockManagerMaster stopped
15/07/27 12:01:03 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
15/07/27 12:01:03 INFO RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
15/07/27 12:01:03 INFO RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
15/07/27 12:01:03 INFO SparkContext: Successfully stopped SparkContext
15/07/27 12:01:03 INFO Utils: Shutdown hook called
15/07/27 12:01:03 INFO Utils: Deleting directory /mnt/pd1/hadoop/spark/tmp/spark-dbaf72cb-599b-40c9-a9f8-ad9ede2b0654

Process finished with exit code 1

What am I missing? Thanks in advance for any help!

Are you making sure to use a version of Spark compiled for Hadoop 1 in your case? Which Spark tarball are you installing from? – Dennis Huo

1 Answer


One possibility is that you somehow have a mismatch of Hadoop versions on your classpaths. In particular, if you use Spark's prebuilt tarball that was built for Hadoop 2 but run it against a cluster that has Hadoop 1 installed, you can hit exactly the error you encountered. Note that the stack trace shows the failure inside "readObject", i.e. while deserializing a class; if the class definition used to serialize an object (on the driver) differs from the one used to deserialize it (on the executor), this can happen.
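A quick sanity check (just a sketch, assuming you have a live SparkContext named sc) is to compare the Hadoop version on the driver's classpath with the one the executors actually load:

import org.apache.hadoop.util.VersionInfo

// Hadoop version on the driver's classpath
println("Driver Hadoop version: " + VersionInfo.getVersion)

// Hadoop version(s) on the executors' classpaths; they should all match the driver
val executorVersions = sc.parallelize(1 to sc.defaultParallelism)
  .map(_ => org.apache.hadoop.util.VersionInfo.getVersion)
  .collect()
  .distinct
println("Executor Hadoop version(s): " + executorVersions.mkString(", "))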

I set up a few different IntelliJ installations on different bdutil-deployed Spark clusters, and I reproduced the exact stack trace you saw when the IntelliJ library and driver came from spark-1.4.0-bin-hadoop2.6.tgz but the job was submitted to a cluster running spark-1.4.0-bin-hadoop1.tgz. Here's a related Stack Overflow question about the same problem on EC2, and here's another manifestation that wasn't a version mismatch but simply required adding the hadoop-client library to the classpath.
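If you build the IntelliJ project with sbt, one way to keep the versions aligned is to pin the Hadoop client dependency to the same Hadoop line your cluster runs. A minimal sketch, assuming sbt, Scala 2.10 and a cluster deployed from spark-1.4.0-bin-hadoop1.tgz (the 1.2.1 version below is an assumption; use whatever Hadoop 1.x version your cluster actually has):

// build.sbt
scalaVersion := "2.10.4"

libraryDependencies ++= Seq(
  // Spark version matching the cluster
  "org.apache.spark" %% "spark-core" % "1.4.0",
  // Hadoop client from the Hadoop 1.x line; overrides the Hadoop 2 client
  // that spark-core would otherwise pull in transitively
  "org.apache.hadoop" % "hadoop-client" % "1.2.1"
)

The point is simply that the Hadoop classes on the driver's classpath (IntelliJ) and on the executors (the bdutil cluster) must come from the same Hadoop major version, otherwise the serialized FileSplit objects can't be read back, which is exactly what the EOFException in readFields is showing.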