2 votes

I'm trying to install Spark on my 64-bit Windows computer. I installed Python 3.8.2, and I have pip 20.0.2. I downloaded spark-2.4.5-bin-hadoop2.7, set the HADOOP_HOME and SPARK_HOME environment variables, and added pyspark to the Path variable.
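For reference, setting the variables from cmd looks roughly like this; the setx commands and paths below are illustrative of the setup just described, not an exact record:

setx SPARK_HOME "C:\Users\aa\Downloads\spark-2.4.5-bin-hadoop2.7\spark-2.4.5-bin-hadoop2.7"
setx HADOOP_HOME "C:\Users\aa\Downloads\spark-2.4.5-bin-hadoop2.7\spark-2.4.5-bin-hadoop2.7"

When I run pyspark from cmd I see the error given below: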

C:\Users\aa>pyspark
Python 3.8.2 (tags/v3.8.2:7b3ab59, Feb 25 2020, 23:03:10) [MSC v.1916 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
Traceback (most recent call last):
  File "C:\Users\aa\Downloads\spark-2.4.5-bin-hadoop2.7\spark-2.4.5-bin-hadoop2.7\python\pyspark\shell.py", line 31, in <module>
    from pyspark import SparkConf
  File "C:\Users\aa\Downloads\spark-2.4.5-bin-hadoop2.7\spark-2.4.5-bin-hadoop2.7\python\pyspark\__init__.py", line 51, in <module>
    from pyspark.context import SparkContext
  File "C:\Users\aa\Downloads\spark-2.4.5-bin-hadoop2.7\spark-2.4.5-bin-hadoop2.7\python\pyspark\context.py", line 31, in <module>
    from pyspark import accumulators
  File "C:\Users\aa\Downloads\spark-2.4.5-bin-hadoop2.7\spark-2.4.5-bin-hadoop2.7\python\pyspark\accumulators.py", line 97, in <module>
    from pyspark.serializers import read_int, PickleSerializer
  File "C:\Users\aa\Downloads\spark-2.4.5-bin-hadoop2.7\spark-2.4.5-bin-hadoop2.7\python\pyspark\serializers.py", line 72, in <module>
    from pyspark import cloudpickle
  File "C:\Users\aa\Downloads\spark-2.4.5-bin-hadoop2.7\spark-2.4.5-bin-hadoop2.7\python\pyspark\cloudpickle.py", line 145, in <module>
    _cell_set_template_code = _make_cell_set_template_code()
  File "C:\Users\aa\Downloads\spark-2.4.5-bin-hadoop2.7\spark-2.4.5-bin-hadoop2.7\python\pyspark\cloudpickle.py", line 126, in _make_cell_set_template_code
    return types.CodeType(
TypeError: an integer is required (got type bytes)
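What the traceback shows: cloudpickle builds a code object positionally with types.CodeType(...). Python 3.8 inserted a new posonlyargcount parameter as the constructor's second argument, so every later positional argument shifts by one and co_code, a bytes object, lands in a slot that expects an int; hence "an integer is required (got type bytes)". A minimal, self-contained illustration of the 3.8 change, written for this question rather than taken from Spark:

import sys

# Inspect the code object of a trivial function.
co = (lambda x: x).__code__

if sys.version_info >= (3, 8):
    # New in 3.8: a positional-only argument count on code objects...
    print(co.co_posonlyargcount)                  # 0 for this lambda
    # ...and CodeType.replace(), which rebuilds a code object by field
    # name and avoids the changed positional constructor entirely.
    print(co.replace(co_name="renamed").co_name)  # prints: renamed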

I want to import pyspark in my Python code in PyCharm, but after I run my code file I get the same TypeError: an integer is required (got type bytes). I uninstalled Python 3.8.2 and tried Python 2.7, but in that case I got a deprecation error instead. I got the error given below, so I updated the pip installer.

Could not find a version that satisfies the requirement pyspark (from versions: )
No matching distribution found for pyspark 

Then I ran python -m pip install --upgrade pip to update pip, but I hit the TypeError: an integer is required (got type bytes) problem again.

C:\Users\aa>python --version
Python 3.8.2

C:\Users\aa>pip --version
pip 20.0.2 from c:\users\aa\appdata\local\programs\python\python38\lib\site-packages\pip (python 3.8)

C:\Users\aa>java --version
java 14 2020-03-17
Java(TM) SE Runtime Environment (build 14+36-1461)
Java HotSpot(TM) 64-Bit Server VM (build 14+36-1461, mixed mode, sharing)

How can I fix and overcome this problem? Currently I have spark-2.4.5-bin-hadoop2.7 and Python 3.8.2. Thanks in advance!

Thank you @D Untouchable. I applied the steps: I installed Python 3.7.6 (a downgrade), the latest version of pip, Java 8, spark-2.4.4-bin-hadoop2.7 (a downgrade), and Hadoop 2.7's winutils.exe, and everything is all right now. But I did not manage to get it working with Python 3.8; I will look into that. Thanks again! - Denisa
Thanks for your help @ei-grad! I applied the steps in that link, but it did not work with Python 3.8; I had version incompatibility problems and got errors again, so I downgraded the Python version and the problem is solved! - Denisa

1 Answer

11 votes

It is a Python 3.8 and Spark version compatibility problem; see https://github.com/apache/spark/pull/26194.

To make it functional (to a certain extent), you need to bring the bundled python\pyspark\cloudpickle.py in line with that pull request, while keeping the print_exec helper that the serializers module relies on:

import sys, traceback

def print_exec(stream):
    # Print the currently handled exception's traceback to the stream.
    ei = sys.exc_info()
    traceback.print_exception(ei[0], ei[1], ei[2], None, stream)

You'll then be able to import pyspark.
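For context on what the patch changes: code objects gained a posonlyargcount field in Python 3.8, so any types.CodeType(...) call has to account for it. Below is a sketch of version-aware code-object rebuilding in the spirit of that pull request; it is illustrative, not a verbatim copy of the patch:

import sys
import types

def rebuild_code(co):
    # Copy a code object in a way that works before and after Python 3.8.
    if sys.version_info >= (3, 8):
        # replace() (new in 3.8) rebuilds by field name, sidestepping the
        # positional constructor whose signature changed.
        return co.replace()
    # Pre-3.8 positional signature; called on 3.8+, this is exactly the
    # kind of call that raises "an integer is required (got type bytes)".
    return types.CodeType(
        co.co_argcount, co.co_kwonlyargcount, co.co_nlocals, co.co_stacksize,
        co.co_flags, co.co_code, co.co_consts, co.co_names, co.co_varnames,
        co.co_filename, co.co_name, co.co_firstlineno, co.co_lnotab,
        co.co_freevars, co.co_cellvars)

print(rebuild_code((lambda: 0).__code__).co_name)  # prints: <lambda>

Alternatively, as the comments under the question confirm, downgrading the interpreter avoids patching Spark at all, since Spark 2.4.5 predates Python 3.8 (official support arrived with Spark 3.0). A sketch assuming Python 3.7 is also installed, with the Windows py launcher selecting it:

py -3.7 -m venv spark-venv
spark-venv\Scripts\activate
pip install pyspark==2.4.5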