The question I am trying to answer is:

Create an RDD: use map to create an RDD of the NumPy arrays specified by the columns. The name of the RDD should be Rows.
My code: Rows = df.select(col).rdd.map(make_array)
When I run this line, I get an error that basically says: Exception: Python in worker has different version 2.7 than that in driver 3.6, PySpark cannot run with different minor versions. Please check environment variables PYSPARK_PYTHON and PYSPARK_DRIVER_PYTHON are correctly set.
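Since the error points at PYSPARK_PYTHON and PYSPARK_DRIVER_PYTHON, one thing I considered is pinning both variables to the notebook's own interpreter before any SparkContext is created. This is just a sketch based on my reading of the error message, not something I have confirmed fixes it:

```python
import os
import sys

# sys.executable is the interpreter running this notebook kernel (Python 3.6 here).
# Setting both variables BEFORE the SparkContext starts should make the workers
# and the driver launch the same interpreter.
os.environ["PYSPARK_PYTHON"] = sys.executable
os.environ["PYSPARK_DRIVER_PYTHON"] = sys.executable
```

I assume this has no effect on a SparkContext that is already running, which may matter since my notebook had earlier cells.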

I know I am working in a Python 3.6 environment, so I am not sure whether this specific line of code is what triggers the error. What do you think?
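For what it's worth, this is how I checked the Python version on the driver side (the worker side uses whatever interpreter PYSPARK_PYTHON resolves to, which I have not been able to inspect directly):

```python
import sys

# Version of the interpreter running this notebook, i.e. the Spark driver.
driver_version = "{}.{}".format(*sys.version_info[:2])
print("driver Python:", driver_version)
```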
Just to note, this isn't the first line of code in this Jupyter notebook. If you need more information, please let me know and I will provide it. I can't understand why this is happening.