I have been experimenting with partitions and repartitioning of PySpark RDDs.
When repartitioning a small sample RDD from 2 to 6 partitions, I noticed that a few empty partitions are simply added:
rdd = sc.parallelize([1,2,3,43,54,678], 2)
rdd.glom().collect()
>>> [[1, 2, 3], [43, 54, 678]]
rdd6 = rdd.repartition(6)
rdd6.glom().collect()
>>> [[], [1, 2, 3], [], [], [], [43, 54, 678]]
Now, I wonder if that also happens in my real data.
It seems I can't use glom() on larger data (a DataFrame with 192497 rows):
df.rdd.glom().collect()
When I try, nothing happens, which makes sense; the resulting print would be enormous...
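What I can do on the big DataFrame is count the elements per partition instead of collecting them, which should at least tell me whether any partitions are empty (a minimal sketch, assuming sc and df are my existing SparkContext and DataFrame):

# Count the elements per partition without pulling the rows themselves to the driver;
# glom() turns each partition into a list and len() gives its size.
sizes = df.rdd.glom().map(len).collect()
print(sizes)
print(sizes.count(0), "empty partitions out of", len(sizes))

But that only gives me the counts, not a peek at the contents.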
SO
I'd like to print each partition (or at least the top 20 elements of each partition) to check whether they are empty.
Any ideas?
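What I'm imagining is something along these lines, using mapPartitionsWithIndex to keep only the first 20 elements of each partition before collecting (just a sketch; first20 is a name I made up, and I'm not sure this is idiomatic):

import itertools

# Keep at most the first 20 elements of every partition, tagged with its index,
# so the collect() stays small even on the full DataFrame.
def first20(idx, it):
    yield idx, list(itertools.islice(it, 20))

for idx, head in df.rdd.mapPartitionsWithIndex(first20).collect():
    print("partition", idx, ":", head)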
PS: I found solutions for Spark, but I couldn't get them to work in PySpark...
How to print elements of particular RDD partition in Spark?
btw: if someone can explain why I get those empty partitions in the first place, I'd be all ears...
Or how I can know when to expect this to happen, and how to avoid it.
Or does having empty partitions in a dataset simply not affect performance?
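In case it's relevant: the workaround I'm considering is to repartition the DataFrame itself instead of its underlying RDD, since as far as I understand df.repartition(n) redistributes the rows and shouldn't leave partitions empty (again just a sketch with my df; I haven't verified this is the right approach):

# Repartition at the DataFrame level, then check the partition sizes again.
df6 = df.repartition(6)
print(df6.rdd.glom().map(len).collect())  # hoping for six non-empty counts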