I am trying to load a CSV file, save it as a parquet file, and then load it into a Hive table. However, whenever I load it into the table, the values end up out of place and scrambled across columns. I am using PySpark/Hive.
Here is the content of my CSV file:

Here is my code to convert the CSV to parquet and write it to my HDFS location:
# This creates the SparkSession with Hive support
from pyspark.sql import SparkSession

spark = (SparkSession
         .builder
         .appName("S_POCC")
         .enableHiveSupport()
         .getOrCreate())

# Read the CSV (no header, schema inferred)
df = spark.read.load('/user/new_file.csv', format="csv", sep=",",
                     inferSchema="true", header="false")

# save() defaults to the parquet format, so this writes a parquet directory
df.write.save('hdfs://my_path/table/test1.parquet')
This successfully converts the file to parquet and writes it to the path.
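To double-check the conversion itself, the parquet directory can be read back in the same session (df_check is just a placeholder name I'm using here):

# Sanity check: read the parquet directory back and inspect schema and rows
df_check = spark.read.parquet('hdfs://my_path/table/test1.parquet')
df_check.printSchema()
df_check.show(5)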
However, when I load it using the following statements in Hive, it gives a weird, scrambled output.
Hive statements:
drop table sndbx_test.test99 purge;
create external table if not exists test99 (c0 string, c1 string, c2 string, c3 string, c4 string, c5 string, c6 string);
load data inpath 'hdfs://my_path/table/test1.parquet' into table test99;
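I suspect the problem may be that the table is not declared as parquet-backed, so LOAD DATA treats the parquet bytes as plain text, but I'm not sure. Here is a sketch of what I think the declaration might need, run through the same Spark session (the STORED AS PARQUET and LOCATION clauses are my guesses, not something I've confirmed):

# Guess: declare an external parquet table directly over the directory Spark
# wrote, instead of using LOAD DATA. Untested; stored-as/location are assumptions.
spark.sql("""
create external table if not exists test99 (
    c0 string, c1 string, c2 string, c3 string,
    c4 string, c5 string, c6 string)
stored as parquet
location 'hdfs://my_path/table/test1.parquet'
""")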
Any ideas/suggestions?
