
I have deployed an HDInsight 3.6 Spark (2.3) cluster on Microsoft Azure with the standard configuration (Location = Central US; Head Nodes = D12 v2 (x2), 8 cores; Worker Nodes = D13 v2 (x4), 32 cores).

When I launch a Jupyter notebook and select the Spark kernel, I get a strange error that I cannot figure out.

Any help on this would be appreciated.

Jupyter Notebook Issue


1 Answer


Please try the steps below to resolve this issue.

  1. Connect to the headnode via SSH and edit the file /usr/bin/anaconda/lib/python2.7/site-packages/nbformat/_version.py, replacing the major version 5 with 4.


Change this to:

version_info = (4, 0, 3)


  2. Restart the Jupyter service via Ambari.
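The edit in step 1 can be sketched as a shell session. The SSH endpoint below follows the standard HDInsight naming pattern, but CLUSTERNAME and the user name are placeholders for your own values; to keep the sketch self-contained, the `sed` edit is demonstrated on a local stand-in copy of `_version.py` whose original contents are an assumption.

```shell
# Connect to the headnode (replace CLUSTERNAME with your cluster's name;
# this is the standard HDInsight SSH endpoint pattern):
#   ssh sshuser@CLUSTERNAME-ssh.azurehdinsight.net

# The file to patch on the headnode is:
#   /usr/bin/anaconda/lib/python2.7/site-packages/nbformat/_version.py
# For illustration we create and patch a local stand-in copy; the
# (5, 0, 3) contents are an assumption about the original file.
VFILE=_version.py
cat > "$VFILE" <<'EOF'
version_info = (5, 0, 3)
version = ".".join(map(str, version_info))
EOF

# Downgrade the reported nbformat major version from 5 to 4 in place.
sed -i 's/version_info = (5,/version_info = (4,/' "$VFILE"

cat "$VFILE"
```

After making the same change to the real file on the headnode, restart the Jupyter service from Ambari so it picks up the new version.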


Reference: HDInsight Create not create Jupyter notebook