
I have been using Spark 2.0.1 all this while, but I tried to upgrade to a newer version (2.1.1) by downloading the tar file locally and changing my paths.

However, now when I try to run any program, it fails at initialization of the SparkContext, i.e.

    sc = SparkContext()

The entire sample code that I am trying to run is:

    import os
    os.environ['SPARK_HOME'] = "/opt/apps/spark-2.1.1-bin-hadoop2.7/"

    from pyspark import SparkContext
    from pyspark.sql import *

    sc = SparkContext()
    sqlContext = SQLContext(sc)

    df_tract_alpha = sqlContext.read.parquet("tract_alpha.parquet")
    print(df_tract_alpha.count())

The exception I get is at the very start, i.e.:


    Traceback (most recent call last):
      File "/home/vna/scripts/global_score_pipeline/test_code_here.py", line 47, in 
        sc = SparkContext()
      File "/opt/apps/spark-2.1.1-bin-hadoop2.7/python/pyspark/context.py", line 118, in __init__
        conf, jsc, profiler_cls)
      File "/opt/apps/spark-2.1.1-bin-hadoop2.7/python/pyspark/context.py", line 182, in _do_init
        self._jsc = jsc or self._initialize_context(self._conf._jconf)
      File "/opt/apps/spark-2.1.1-bin-hadoop2.7/python/pyspark/context.py", line 249, in _initialize_context
        return self._jvm.JavaSparkContext(jconf)
      File "/opt/apps/spark-2.1.1-bin-hadoop2.7/python/lib/py4j-0.10.4-src.zip/py4j/java_gateway.py", line 1401, in __call__
      File "/opt/apps/spark-2.1.1-bin-hadoop2.7/python/lib/py4j-0.10.4-src.zip/py4j/protocol.py", line 319, in get_return_value
    py4j.protocol.Py4JJavaError: An error occurred while calling None.org.apache.spark.api.java.JavaSparkContext.
    : java.lang.NumberFormatException: For input string: "Ubuntu"
        at java.base/java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)

I am not passing "Ubuntu" anywhere in my variables or in my environment variables either.

I have also tried sc = SparkContext(master='local'), but the issue is the same.

Please help in identifying this issue.

Edit: Content of spark-defaults.conf


    spark.master                     spark://master:7077
    # spark.eventLog.enabled           true
    # spark.eventLog.dir               hdfs://namenode:8021/directory
    spark.serializer                 org.apache.spark.serializer.KryoSerializer
    spark.driver.memory              8g
    spark.executor.extraJavaOptions  -XX:+PrintGCDetails -Dkey=value -Dnumbers="one two three"
    spark.driver.extraClassPath /opt/apps/spark-2.1.1-bin-hadoop2.7/jars/mysql-connector-java-5.1.35-bin.jar
    spark.executor.extraClassPath /opt/apps/spark-2.1.1-bin-hadoop2.7/jars/mysql-connector-java-5.1.35-bin.jar


1 Answer


Have you checked your configuration files (e.g. spark-defaults.conf)? This could be a parsing error on a field that expects an integer. For example, if you set spark.executor.cores Ubuntu, you would get exactly this NumberFormatException, because Spark tries to parse the value "Ubuntu" as a number.
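As a quick sanity check, a small script like the following can scan a spark-defaults.conf for non-integer values in integer-typed settings. Note this is just a sketch: the list of integer-typed keys here is illustrative, not Spark's complete set.

```python
import re

# Spark settings whose values must parse as integers.
# This list is an illustrative assumption, not Spark's full set of checks.
INT_SETTINGS = {
    "spark.executor.cores",
    "spark.driver.cores",
    "spark.executor.instances",
    "spark.default.parallelism",
}

def find_bad_int_settings(conf_text):
    """Return (key, value) pairs where an integer-typed setting has a
    non-integer value, e.g. 'spark.executor.cores Ubuntu'."""
    bad = []
    for line in conf_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        parts = line.split(None, 1)  # key, then everything after whitespace
        if len(parts) == 2 and parts[0] in INT_SETTINGS:
            key, value = parts[0], parts[1].strip()
            if not value.isdigit():
                bad.append((key, value))
    return bad

sample = """
spark.master            spark://master:7077
spark.executor.cores    Ubuntu
spark.driver.memory     8g
"""
print(find_bad_int_settings(sample))  # [('spark.executor.cores', 'Ubuntu')]
```

Running this over your actual spark-defaults.conf (and any spark-env.sh exports) should quickly surface where the string "Ubuntu" is sneaking in.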