
I'm working with Zeppelin (0.7.1) on Spark (2.1.1) on my localhost, and trying to add some configuration values to the jobs I run.

Specifically, I'm trying to set the es.nodes value for elasticsearch-hadoop.

I tried adding the key and value to the interpreter configuration, but it didn't show up in sc.getConf. Adding "--conf mykey:myvalue" to the interpreter's "args" configuration key didn't register either. Isn't that what the Spark interpreter configuration is supposed to do?
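For context, a rough sketch of what I expect to be able to do from a paragraph once the setting is applied (the host and the index name "myindex/docs" are just placeholders, and the elasticsearch-spark dependency is assumed to be loaded):

import org.elasticsearch.spark._

// sc is the SparkContext provided by the Zeppelin Spark interpreter.
// I expect the value from the interpreter settings to show up here:
println(sc.getConf.getOption("es.nodes"))   // currently prints None

// ...so that elasticsearch-hadoop picks it up implicitly when reading:
val rdd = sc.esRDD("myindex/docs")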

Hi, did you try using sc.setConf? elastic.co/guide/en/elasticsearch/hadoop/current/… – 1ambda
AFAIK Zeppelin creates the SparkContext on its own; I can't recreate it within a paragraph, nor can I add configuration flags to it after the fact. If someone has this working, please let me know. – Oren
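If the SparkContext itself can't be changed, elasticsearch-hadoop also accepts its es.* settings per operation; a minimal sketch, with the host and the index name "myindex/docs" as placeholders:

import org.elasticsearch.spark._

// Pass es.nodes directly to the read, bypassing sc.getConf entirely.
val rdd = sc.esRDD("myindex/docs", Map("es.nodes" -> "localhost:9200"))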

1 Answer


Apparently this is an intentional change in Zeppelin, implemented fairly recently: only spark.* properties are delegated to the SparkConf. I have submitted a comment asking for this to be changed, as I believe it is problematic: https://github.com/apache/zeppelin/pull/1970
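If only spark.*-prefixed keys are forwarded, a quick check from a paragraph looks like the sketch below. The spark.es.nodes entry is an assumption: elasticsearch-hadoop's documentation suggests it also recognizes its options when prefixed with spark., so prefixing the key in the interpreter settings may be a workaround.

// Assuming the interpreter settings contain both "es.nodes" and "spark.es.nodes" entries:
sc.getConf.getOption("es.nodes")        // expected: None, the unprefixed key is not forwarded
sc.getConf.getOption("spark.es.nodes")  // expected: Some(...), spark.* keys are forwarded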