4
votes

I have a Spark DataFrame that I want to save as a Hive table with partitions. I tried the following two statements, but they don't work. I don't see any ORC files in the HDFS directory; it's empty. I can see that baseTable exists in the Hive console, but it's obviously empty because there are no files in HDFS.

The following two lines, saveAsTable() and insertInto(), do not work. The registerDataFrameAsTable() method works, but it creates an in-memory table and causes OOM in my use case, since I have thousands of Hive partitions to process. I am new to Spark.

dataFrame.write().mode(SaveMode.Append).partitionBy("entity","date").format("orc").saveAsTable("baseTable"); 

dataFrame.write().mode(SaveMode.Append).format("orc").partitionBy("entity","date").insertInto("baseTable");

// the following works, but it creates an in-memory table and seems to be the cause of the OOM in my case
    
hiveContext.registerDataFrameAsTable(dataFrame, "baseTable");
1 – Use this: dataFrame.write().mode(SaveMode.Append).partitionBy("entity","date").format("orc").save("baseTable"); and try to put the full path in save(), not the relative one. – TheMP

1 Answer

1
votes

Hope you have already got your answer, but posting this answer for others' reference: partitionBy() was only supported for Parquet up to Spark 1.4; support for ORC, JSON, text, and Avro was added in version 1.5+. Please refer to the doc below:

https://spark.apache.org/docs/1.6.1/api/java/org/apache/spark/sql/DataFrameWriter.html
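To make this concrete: on Spark 1.5+ the original saveAsTable() call should work as written. Below is a minimal sketch; the class and method names are illustrative, it assumes a running Spark cluster with Hive support and a DataFrame that contains the "entity" and "date" columns, and it is not runnable outside such an environment:

```java
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.SaveMode;
import org.apache.spark.sql.hive.HiveContext;

// Sketch only: requires Spark 1.5+ with Hive support on the classpath.
public class PartitionedWrite {

    public static void writePartitioned(HiveContext hiveContext, DataFrame dataFrame) {
        // On Spark 1.5+, partitionBy() is supported for the ORC format,
        // so this creates a partitioned Hive table with ORC files in HDFS:
        dataFrame.write()
                 .mode(SaveMode.Append)
                 .partitionBy("entity", "date")
                 .format("orc")
                 .saveAsTable("baseTable");

        // If you instead use insertInto() on an already-existing partitioned
        // table, enable Hive dynamic partitioning first. insertInto() uses
        // the table's own partition columns, so partitionBy() is not combined
        // with it:
        hiveContext.sql("SET hive.exec.dynamic.partition = true");
        hiveContext.sql("SET hive.exec.dynamic.partition.mode = nonstrict");
        dataFrame.write().mode(SaveMode.Append).insertInto("baseTable");
    }
}
```

Either path avoids registerDataFrameAsTable(), which only registers a temporary in-memory view and writes nothing to HDFS.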