1
votes

I have been trying to configure Apache Zeppelin with Spark 2.0. I managed to install both on a Linux OS, and I set Spark on port 8080 and the Zeppelin server on port 8082.

In Zeppelin's zeppelin-env.sh file I set the SPARK_HOME variable to the location of the Spark folder.
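For reference, the relevant lines in conf/zeppelin-env.sh look roughly like this (the Spark path below is a placeholder for my setup, not a required location):

```shell
# conf/zeppelin-env.sh
# Placeholder path; point it at your actual Spark installation.
export SPARK_HOME=/opt/spark-2.0.0-bin-hadoop2.7

# Zeppelin's web UI port (default 8080); moved to 8082 here
# to avoid clashing with the Spark master UI on 8080.
export ZEPPELIN_PORT=8082
```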

However, when I try to create a new note, nothing compiles properly. It seems I didn't configure the interpreters, since the Interpreter tab is missing from the home page.

Any help would be much appreciated.


EDIT: E.g., when I try to run the 'Load data into table' step of the Zeppelin tutorial, I receive the following error:

java.lang.ClassNotFoundException: org.apache.spark.repl.SparkCommandLine
    at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
    at org.apache.zeppelin.spark.SparkInterpreter.open(SparkInterpreter.java:400)
    at org.apache.zeppelin.interpreter.LazyOpenInterpreter.open(LazyOpenInterpreter.java:69)
    at org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:93)
    at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:341)
    at org.apache.zeppelin.scheduler.Job.run(Job.java:176)
    at org.apache.zeppelin.scheduler.FIFOScheduler$1.run(FIFOScheduler.java:139)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

1
It's not missing; it's at the top right, under "anonymous". Also, Spark 2.7 doesn't exist. – eliasah
Yeah, sorry, it is Spark 2.0 and Hadoop 2.7. I know you can access the Interpreter tab from under "anonymous", but my concern was that if it doesn't appear on the main home page, then something went wrong during the configuration process, since when I try to compile any code in Scala, Spark, Java, etc., it gives an error. – Marc Zaharescu

1 Answer

1
votes

I don't think it's possible to use Spark 2.0 without building Zeppelin from source, since some relatively big changes came with this release.

You can clone the Zeppelin git repo and build it using the Spark 2.0 profile, as mentioned in the README on GitHub: https://github.com/apache/zeppelin.
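For anyone else hitting this, the build roughly looks like the following. The profile names match the README from around the Zeppelin 0.6 era; verify them against the current README before running, since profiles change between releases:

```shell
# Clone the Zeppelin source and build against Spark 2.0.
git clone https://github.com/apache/zeppelin.git
cd zeppelin

# Profile names below follow the 0.6.x-era README (an assumption;
# check the current README for the supported profiles).
mvn clean package -DskipTests -Pspark-2.0 -Phadoop-2.7 -Pscala-2.11
```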

I've tried it and it works.