I want to write a Spark DataFrame into an existing parquet Hive table. I am able to do it using `df.write.mode("append").insertInto("myexistinghivetable")`, but if I check the file system I can see that Spark's output files land with a `.c000` extension.
What do those files mean? And how do I write a DataFrame into a parquet Hive table?
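For context, the call mentioned above (with `insertInto` spelled out) can be sketched as follows. The table name `myexistinghivetable` comes from the question; the session setup and the source table `some_source_table` are assumptions for illustration:

```scala
import org.apache.spark.sql.SparkSession

// Assumed setup: a SparkSession with Hive support enabled.
val spark = SparkSession.builder()
  .appName("append-to-hive-table")
  .enableHiveSupport()
  .getOrCreate()

// Hypothetical source DataFrame; in practice this comes from your pipeline.
val df = spark.table("some_source_table")

// insertInto matches columns by position against the existing Hive
// table's schema and partition layout; mode("append") keeps existing data.
df.write.mode("append").insertInto("myexistinghivetable")
```

Note that `insertInto` resolves columns positionally, not by name, so the DataFrame's column order must match the target table's schema.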
2 votes
I don't know the Hive table location; it will be decided at run time based on the partition values. Again, can't we directly write a DataFrame into a Hive parquet table without a workaround?
- Rahul
2 Answers
3 votes