7 votes

I have written a Spark job which seems to work fine for almost an hour, after which executors start getting lost because of timeouts. I see the following log statement:

15/08/16 12:26:46 WARN spark.HeartbeatReceiver: Removing executor 10 with no recent heartbeats: 1051638 ms exceeds timeout 1000000 ms

I don't see any errors, just the above warning. Because of it the executor gets removed by YARN, and I then see an "Rpc client disassociated" error, an IOException (connection refused), and a FetchFailedException.

After an executor gets removed, I see it being added again and starting to work, and then some other executor fails again. My questions: is it normal for executors to get lost? What happens to the tasks the lost executors were working on? My Spark job keeps running since it is long, around 4-5 hours. I have a very good cluster with 1.2 TB of memory and a good number of CPU cores.

To solve the above timeout issue I tried increasing spark.akka.timeout to 1000 seconds, but no luck. I am using the following command to run my Spark job. I am new to Spark, and I am using Spark 1.4.1.

./spark-submit --class com.xyz.abc.MySparkJob --conf "spark.executor.extraJavaOptions=-XX:MaxPermSize=512M" --driver-java-options -XX:MaxPermSize=512m --driver-memory 4g --master yarn-client --executor-memory 25G --executor-cores 8 --num-executors 5 --jars /path/to/spark-job.jar
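
For reference, besides spark.akka.timeout, Spark 1.4 also has spark.network.timeout (which appears to be the setting the HeartbeatReceiver warning above checks against) and spark.executor.heartbeatInterval. A minimal sketch of setting them in the job itself, with illustrative values only:

import org.apache.spark.{SparkConf, SparkContext}

object TimeoutConfSketch {
  def main(args: Array[String]): Unit = {
    // Illustrative values only. In 1.4, spark.network.timeout accepts a time
    // string like "1000s", while spark.executor.heartbeatInterval is
    // documented in milliseconds.
    val conf = new SparkConf()
      .setAppName("MySparkJob")
      .set("spark.network.timeout", "1000s")            // heartbeat/RPC timeout
      .set("spark.executor.heartbeatInterval", "60000") // heartbeat period (ms)
    val sc = new SparkContext(conf)
    // ... job body ...
    sc.stop()
  }
}

The same settings can equivalently be passed as --conf key=value flags to spark-submit.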

A few questions: does your application complete, even with the executor losses? Can you turn on debugging in conf/log4j.properties and check whether you get more details about the executor losses? – Bacon
Quick answer to the "What happens to the tasks the lost executors were working on?" question: Spark is going to launch them again (with an ID that looks like task 14.1). – Bacon
Hi Bacon, thanks for the response. The application does not complete: once executors start getting lost, they are added again, but eventually they are lost again and never re-added, and in the end only the driver remains, which can't do anything, so the application never finishes. – unk1102

1 Answer

7 votes

What might happen is that the slaves can no longer launch executors, due to memory issues. Look for the following messages in the master logs:

15/07/13 13:46:50 INFO Master: Removing executor app-20150713133347-0000/5 because it is EXITED
15/07/13 13:46:50 INFO Master: Launching executor app-20150713133347-0000/9 on worker worker-20150713153302-192.168.122.229-59013
15/07/13 13:46:50 DEBUG Master: [actor] handled message (2.247517 ms) ExecutorStateChanged(app-20150713133347-0000,5,EXITED,Some(Command exited with code 1),Some(1)) from Actor[akka.tcp://[email protected]:59013/user/Worker#-83763597]

You might find detailed Java errors in the worker's log directory, and possibly a file of this type: work/app-id/executor-id/hs_err_pid11865.log.

See http://pastebin.com/B4FbXvHR

This issue might be resolved by your application's management of RDDs, not by increasing the size of the JVM's heap.
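
To illustrate what managing RDDs could mean in practice, here is a minimal sketch (the paths and names are made up, not from the question): it persists an intermediate RDD with a storage level that spills to disk, and unpersists it as soon as it is no longer needed, so a long-running job does not accumulate cached blocks on the executors.

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.storage.StorageLevel

object RddManagementSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("rdd-management-sketch"))

    val raw = sc.textFile("hdfs:///path/to/input") // made-up input path

    // MEMORY_AND_DISK spills partitions to disk instead of recomputing them,
    // and avoids pinning everything on the executor heap.
    val parsed = raw.map(_.split(',')).persist(StorageLevel.MEMORY_AND_DISK)

    val counts = parsed.map(fields => (fields(0), 1L)).reduceByKey(_ + _)
    counts.saveAsTextFile("hdfs:///path/to/output") // made-up output path

    // Release cached blocks as soon as they are no longer needed, so a
    // 4-5 hour job does not accumulate cached partitions on the executors.
    parsed.unpersist()

    sc.stop()
  }
}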