6 votes

I have a data frame that takes ~11GB when saved in Parquet format. Reading it back into a dataframe and writing it out as JSON takes 5 minutes. When I add partitionBy("day"), it takes hours to finish. I understand that distributing the rows to partitions is the costly action. Is there a way to make it faster? Would sorting the files make it better?

Example:

Runs in 5 minutes:

df = spark.read.parquet(source_path)
df.write.json(output_path)

Runs for hours:

spark.read.parquet(source_path).createOrReplaceTempView("source_table")
sql="""
select cast(trunc(date,'yyyymmdd') as int) as day, a.*
from source_table a"""
spark.sql(sql).write.partitionBy("day").json(output_path)

2 Answers

14 votes

Try adding a repartition("day") before the write, like this:

(spark
  .sql(sql)
  .repartition("day")
  .write
  .partitionBy("day")
  .json(output_path))

It should speed up your query.
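
The reason this helps: without the repartition, each task typically contains rows for many different days, so every task opens a separate output file for each day it sees and the job ends up writing a very large number of small files. Repartitioning on the partition column first collects each day's rows into the same tasks, so every task writes into only one or a few day directories. If individual days are still very large, repartition also accepts a partition count together with the column; a minimal sketch reusing the sql and output_path from the question (the 100 is an arbitrary count to tune):

(spark
  .sql(sql)
  .repartition(100, "day")   # arbitrary example count combined with the partition column
  .write
  .partitionBy("day")
  .json(output_path))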

2 votes

Try adding repartition(n) with some starting number (the 200 in the snippet below is just an arbitrary example), then increase or decrease it depending on how long the write takes:

num_partitions = 200   # arbitrary starting value; raise or lower it based on the write time

(spark
  .sql(sql)
  .repartition(num_partitions)
  .write
  .partitionBy("day")
  .json(output_path))
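
To pick that starting number, one small sketch (again reusing the sql from the question) is to check how many partitions the query already produces and how many distinct days it contains, since partitionBy("day") creates one output directory per day:

df = spark.sql(sql)

# Baseline parallelism before any explicit repartition
print(df.rdd.getNumPartitions())

# Number of distinct output directories that partitionBy("day") will create
print(df.select("day").distinct().count())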