
I have two paragraphs in a Zeppelin Spark notebook:

val a = 5

and:

println(a)

If I run the second paragraph immediately after the first, everything is fine. But if I wait a few seconds, the interpreter shuts down and the second paragraph fails.

In the interpreter log, the interpreter can be seen shutting down after a few seconds:

INFO [2017-03-14 12:58:12,053] ({pool-1-thread-3} Logging.scala[logInfo]:54) - Stopped Spark web UI at http://10.0.0.2:4040
INFO [2017-03-14 12:58:12,065] ({pool-1-thread-3} Logging.scala[logInfo]:54) - Shutting down all executors
INFO [2017-03-14 12:58:12,066] ({dispatcher-event-loop-11} Logging.scala[logInfo]:54) - Asking each executor to shut down
INFO [2017-03-14 12:58:12,087] ({dispatcher-event-loop-25} Logging.scala[logInfo]:54) - MapOutputTrackerMasterEndpoint stopped!
INFO [2017-03-14 12:58:12,103] ({pool-1-thread-3} Logging.scala[logInfo]:54) - MemoryStore cleared
INFO [2017-03-14 12:58:12,104] ({pool-1-thread-3} Logging.scala[logInfo]:54) - BlockManager stopped
INFO [2017-03-14 12:58:12,115] ({pool-1-thread-3} Logging.scala[logInfo]:54) - BlockManagerMaster stopped
INFO [2017-03-14 12:58:12,120] ({dispatcher-event-loop-28} Logging.scala[logInfo]:54) - OutputCommitCoordinator stopped!
INFO [2017-03-14 12:58:12,124] ({pool-1-thread-3} Logging.scala[logInfo]:54) - Successfully stopped SparkContext
INFO [2017-03-14 12:58:12,141] ({pool-1-thread-3} InterpreterGroup.java[close]:145) - Close interpreter group 2CAQSK5DV::2CBFFWCNP
INFO [2017-03-14 12:58:14,247] ({Thread-3} Logging.scala[logInfo]:54) - Shutdown hook called
INFO [2017-03-14 12:58:14,249] ({Thread-3} Logging.scala[logInfo]:54) - Deleting directory /tmp/spark-b0cfee1f-e8ee-49dd-aae2-2d3446bfaaa1
INFO [2017-03-14 12:58:14,256] ({Thread-3} Logging.scala[logInfo]:54) - Deleting directory /tmp/spark-9c7c38d8-3e82-4833-8557-afb94f3c3cb7

I tried adding interpreter configuration such as:

  • zeppelin.interpreter.persistent = true
  • spark.qubole.idle.timeout = 900 (I'm not using Qubole, but it seemed worth trying ...)

... but nothing changes this behavior.
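
As a sanity check, the properties can at least be confirmed to have landed on the interpreter via Zeppelin's interpreter REST API. A minimal sketch in Python, assuming a local, unauthenticated Zeppelin server (the URL is an assumption; adjust to your deployment):

    import requests

    ZEPPELIN = "http://localhost:8080"  # assumption: adjust to your deployment

    # List all interpreter settings and dump their current properties,
    # to confirm whether an edit in the settings UI actually took effect.
    settings = requests.get(ZEPPELIN + "/api/interpreter/setting").json()["body"]
    for s in settings:
        print(s["name"], s["properties"])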

How can I handle this?

Edit: note that the same problem occurs with the %python interpreter too, so it seems not to be Spark-specific but global to all Zeppelin interpreters.


1 Answer


So it turned out, in the end, that a notebook sitting in the trash had its scheduler set to run every minute, and that was triggering the interpreter shutdown. After noticing on the 'jobs' page that this job was running repeatedly, I opened that notebook, saw the scheduler was on, and turned it off. The Zeppelin interpreter then stopped shutting itself down :-)
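
For anyone hitting the same thing: rather than clicking through every notebook, the runaway schedule can also be hunted down with Zeppelin's notebook REST API and its cron endpoints. A minimal sketch in Python, assuming a local, unauthenticated server (the URL and the '~Trash/' naming of trashed notes are assumptions):

    import requests

    ZEPPELIN = "http://localhost:8080"  # assumption: adjust to your deployment

    # List every note; trashed notes show up here too, typically with a
    # name starting with '~Trash/'.
    notes = requests.get(ZEPPELIN + "/api/notebook").json()["body"]

    for note in notes:
        # Ask each note for its cron schedule; an empty body means none is set.
        resp = requests.get(ZEPPELIN + "/api/notebook/cron/" + note["id"])
        cron = resp.json().get("body")
        if cron:
            print(note["name"], note["id"], "-> cron:", cron)
            # To turn the schedule off:
            # requests.delete(ZEPPELIN + "/api/notebook/cron/" + note["id"])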