23
votes

I have a PySpark Dataframe with two columns:

+---+----+
| Id|Rank|
+---+----+
|  a|   5|
|  b|   7|
|  c|   8|
|  d|   1|
+---+----+

For each row, I'm looking to replace the Id column with "other" if the Rank column is larger than 5.

In pseudocode:

for row in df:
    if row.Rank > 5:
        replace(row.Id, "other")

The result should look like this:

+-----+----+
|   Id|Rank|
+-----+----+
|    a|   5|
|other|   7|
|other|   8|
|    d|   1|
+-----+----+

Any clue how to achieve this? Thanks!


To create this Dataframe:

df = spark.createDataFrame([('a', 5), ('b', 7), ('c', 8), ('d', 1)], ['Id', 'Rank'])

2 Answers

56
votes

You can use when and otherwise like this:

from pyspark.sql.functions import when, col

df \
    .withColumn('Id_New', when(df.Rank <= 5, df.Id).otherwise('other')) \
    .drop(df.Id) \
    .select(col('Id_New').alias('Id'), col('Rank')) \
    .show()

This gives the following output:

+-----+----+
|   Id|Rank|
+-----+----+
|    a|   5|
|other|   7|
|other|   8|
|    d|   1|
+-----+----+
9
votes

Starting from @Pushkr's solution, couldn't you just use the following?

from pyspark.sql.functions import when

df.withColumn('Id', when(df.Rank <= 5, df.Id).otherwise('other')).show()
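For readers without a Spark runtime handy, here is a minimal plain-Python sketch of the same conditional replacement over a list of row dicts (this mirrors the when/otherwise semantics; it is not Spark code):

```python
# Rows matching the example DataFrame above.
rows = [{"Id": "a", "Rank": 5}, {"Id": "b", "Rank": 7},
        {"Id": "c", "Rank": 8}, {"Id": "d", "Rank": 1}]

# Equivalent of when(df.Rank <= 5, df.Id).otherwise('other'):
# keep Id when Rank <= 5, replace it with "other" otherwise.
result = [{"Id": r["Id"] if r["Rank"] <= 5 else "other", "Rank": r["Rank"]}
          for r in rows]
```

Note that in Spark, a `when(...)` with no `otherwise(...)` yields null for unmatched rows, so the `otherwise('other')` branch is what supplies the replacement value.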