I have deployed an HDInsight 3.6 Spark (2.3) cluster on Microsoft Azure with the standard configuration (Location = Central US; Head Nodes = D12 v2 (x2), 8 cores; Worker Nodes = D13 v2 (x4), 32 cores).
- When I launch Jupyter and select a Spark notebook, it gives an error that I am not able to figure out.
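For context, this is the kind of first cell I would expect to run in a PySpark notebook on this cluster (a hypothetical example, since the error appears as soon as the kernel starts; the `spark` variable is assumed to be the session that the HDInsight sparkmagic kernels normally create automatically):

```python
# Hypothetical first cell for a PySpark notebook on HDInsight.
# The Livy-backed sparkmagic kernels normally define `spark` and `sc`
# for you, so no SparkSession setup should be needed here.
df = spark.range(0, 100)   # simple DataFrame to confirm the session starts
print(df.count())          # expected output: 100
```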
Any help on this would be appreciated.