
I am a beginner in Spark and I am looking for a solution to my issue. I'm trying to reorder a dataframe's columns according to the number of null values each column contains, in ascending order.

For example: data:

column1    Column2     Column3
a          d           h
b          null        null
null       e           i
null       f           h
null       null        k
c          g           l

After sorting, the column order should be:

Column3     Column2     Column1

All I could do is count each column's null values:

from pyspark.sql.functions import col, count, when

data.select([count(when(col(c).isNull(), c)).alias(c) for c in data.columns])

Now I have no idea how to continue. I hope you can help me.


1 Answer


My solution, which works as you want. Note that the null counts need to go into a separate variable; if you overwrite df with them, the final select would reorder the counts dataframe rather than your original data:

from pyspark.sql.functions import col, count, when

# Based on your code: count the null values in each column
null_counts = df.select([count(when(col(c).isNull(), c)).alias(c) for c in df.columns])

# Convert the single-row counts dataframe to a dictionary (Python 3.x)
counts = null_counts.collect()[0].asDict()

# Build a dictionary sorted by its values (the null counts), ascending
sorted_dict = {k: v for k, v in sorted(counts.items(), key=lambda item: item[1])}

# Create a sorted list of the column names
sorted_cols = list(sorted_dict.keys())

# With .select() we re-order the original dataframe
df.select(sorted_cols).show()
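The sorting step itself is plain Python and independent of Spark. A minimal sketch of just that logic, using made-up null counts that match the example data above (the counts dict here stands in for what .collect()[0].asDict() would return):

```python
# Hypothetical null counts per column, matching the example data:
# column1 has 3 nulls, Column2 has 2, Column3 has 1.
counts = {"column1": 3, "Column2": 2, "Column3": 1}

# Sort the column names by their null counts, ascending
sorted_cols = [k for k, _ in sorted(counts.items(), key=lambda item: item[1])]

print(sorted_cols)  # ['Column3', 'Column2', 'column1']
```

Passing this list to df.select(sorted_cols) then yields the columns in the desired order.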