1 vote

According to many good resources, it is advisable to re-partition an RDD after a filter operation, since there is a possibility that most of the partitions are now empty. In the case of DataFrames, has this been handled in current versions, or do we still need to repartition after a filter operation?
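For concreteness, a minimal Scala sketch (the input path and column name are made up) of what I mean: a filter keeps the original partition count, even if most partitions end up nearly empty.

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions.col

    val spark = SparkSession.builder().appName("filter-partitions").getOrCreate()

    // Hypothetical input; any DataFrame behaves the same way.
    val df = spark.read.parquet("/data/events")

    // A highly selective filter keeps only a small fraction of the rows...
    val filtered = df.filter(col("status") === "ERROR")

    // ...but the partition count is unchanged, so many partitions may now
    // be empty or nearly empty.
    println(df.rdd.getNumPartitions)       // e.g. 200
    println(filtered.rdd.getNumPartitions) // still 200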


3 Answers

3 votes

In the case of DataFrames, has this been handled in current versions, or do we still need to repartition after a filter operation?

If you are asking whether Spark automatically repartitions data, the answer is negative (and I hope it won't change in the future).

According to many good resources, it is advisable to re-partition an RDD after a filter operation, since there is a possibility that most of the partitions are now empty.

This really depends on two factors:

  • How selective the filter is (what fraction of the records is expected to be preserved).
  • What the distribution of the data is, with respect to the predicate, prior to the filter.

Unless you expect the predicate to prune the majority of the data, or the prior distribution to leave a significant fraction of partitions empty, the cost of repartitioning usually outweighs the potential benefits. So the main reason to call repartition is to limit the number of output files.
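To illustrate that last point, a minimal sketch (the paths, column name, and partition count are made up):

    import org.apache.spark.sql.functions.col

    val filtered = spark.read.parquet("/data/events")
      .filter(col("status") === "ERROR")

    // coalesce before writing to limit the number of output files; it only
    // merges existing partitions (no full shuffle), so it can only decrease
    // the partition count.
    filtered
      .coalesce(8)
      .write
      .parquet("/output/errors")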

1 vote

Spark does not automatically repartition data. It can be a good idea to repartition the data after filtering if you need to run operations such as joins and aggregations afterwards. Based on your needs, you should use either repartition or coalesce. Typically coalesce is preferable, since it tries to group data together without shuffling and can therefore only decrease the number of partitions.

There isn't a huge performance boost if you don't do any heavy computation after your filtering operation. Keep in mind that repartition by itself can also be expensive; you must know your data to make that decision.
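As a hedged illustration of that trade-off (the DataFrame, column name, and partition counts are assumptions):

    import org.apache.spark.sql.functions.col

    // coalesce: narrow dependency, merges existing partitions without a
    // shuffle, so it can only reduce the partition count.
    val compacted = filteredDf.coalesce(16)

    // repartition: full shuffle, but redistributes rows evenly; can help
    // when a heavy join or aggregation follows the filter.
    val balanced = filteredDf.repartition(64, col("userId"))

    val perUser = balanced.groupBy("userId").count()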

0 votes

I am assuming that this is your question.

Shall I run a filter operation before repartition or after repartition?

Based on this assumption: a filter will always try to find records matching some condition, so the resultant DataFrame/RDD is always less than or equal in size to the previous one. In most cases, the resultant set is smaller than the previous one.

Repartition, on the other hand, is one of the most expensive operations because it performs a shuffle. Always remember: when performing a repartition, the less data there is in memory, the better the performance you can get out of it.

Without even getting into how Spark handles it internally, in general, filtering before repartitioning is good for performance!
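A minimal sketch of the two orderings (the path, predicate, and partition count are illustrative):

    import org.apache.spark.sql.functions.col

    // Expensive: shuffles the full dataset, then throws most of it away.
    val slow = spark.read.parquet("/data/events")
      .repartition(200)
      .filter(col("status") === "ERROR")

    // Cheaper: filter first, so the shuffle only moves the surviving rows.
    val fast = spark.read.parquet("/data/events")
      .filter(col("status") === "ERROR")
      .repartition(200)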

For example, the Catalyst optimizer itself moves filters earlier in the plan (e.g. before joins) to improve performance.

Blog Link:

For example, Spark knows how and when to do things like combine filters, or move filters before joins. Spark 2.0 even allows you to define, add, and test out your own additional optimization rules at runtime.
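To see this in practice, you can inspect the query plan; a minimal sketch (hypothetical path and column):

    import org.apache.spark.sql.functions.col

    spark.read.parquet("/data/events")
      .filter(col("status") === "ERROR")
      .explain(true)
    // For Parquet sources, the physical plan typically shows the predicate
    // pushed into the scan, e.g. a line like
    //   PushedFilters: [IsNotNull(status), EqualTo(status,ERROR)]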