I have a Hive table that is partitioned by the column inserttime.
I have a PySpark DataFrame with the same columns as the table, except for the partition column.
The following works well when the table is not partitioned:
df.insertInto('tablename',overwrite=True)
But I cannot figure out how to insert into a particular partition from PySpark.
I tried the following:
df.insertInto('tablename',overwrite=True,partition(inserttime='20170818-0831'))
but it failed with:
SyntaxError: non-keyword arg after keyword arg
(which makes sense, since partition(...) is being passed as a positional argument after the overwrite keyword argument).
I am using PySpark 1.6.
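For context, the only workaround I have come up with so far is to drop down to Hive SQL with a static PARTITION clause (a sketch below; tablename, the temp-table name, and the HiveContext variable sqlContext are placeholders for my actual setup). I would prefer to stay in the DataFrame API if possible:

```python
# Workaround sketch for Spark 1.6: register the DataFrame as a temp table,
# then issue an INSERT OVERWRITE ... PARTITION statement through HiveContext,
# since DataFrameWriter.insertInto() in 1.6 takes no partition argument.

partition_value = '20170818-0831'  # the target partition of inserttime

insert_sql = (
    "INSERT OVERWRITE TABLE tablename "
    "PARTITION (inserttime='{}') "
    "SELECT * FROM tmp_view".format(partition_value)
)

# df.registerTempTable('tmp_view')   # Spark 1.6 API for temp tables
# sqlContext.sql(insert_sql)         # sqlContext must be a HiveContext
```

Is there a cleaner way to target a single partition directly from the DataFrame write path in 1.6?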