
I tried searching for this, but the closest I could find was this, and it did not give me what I wanted. I want to drop all instances of duplicates in a dataframe. For example, if I have the data frame

   Col1   Col2   Col3
   Alice  Girl   April
   Jean   Boy    Aug
   Jean   Boy    Sept

I want to remove all duplicates based on Col1 and Col2 so that I get

  Col1   Col2  Col3
  Alice  Girl  April

Is there any way to do this?

Also if I have a large number of columns like so:

   Col1   Col2   Col3  .... Col n
   Alice  Girl   April .... Apple
   Jean   Boy    Aug   .... Orange
   Jean   Boy    Sept  .... Banana

How would I group by only Col1 and Col2 but still keep the remaining columns?

Thank You


1 Answer

from pyspark.sql import functions as F

# Sample dataframe
df = sqlContext.createDataFrame([
    ["Alice", "Girl", "April"],
    ["Jean", "Boy", "Aug"],
    ["Jean", "Boy", "Sept"]
],
    ["col1", "col2", "col3"])

# Group by the key columns and keep only the groups whose count is 1,
# i.e. rows whose (col1, col2) combination appears exactly once.
df2 = (df
       .groupBy(["col1", "col2"])
       .agg(
           F.count(F.lit(1)).alias("count"),
           F.max("col3").alias("col3"))
       .where("count = 1")
       .drop("count"))

df2.show(10, False)

Output:

+-----+----+-----+
|col1 |col2|col3 |
+-----+----+-----+
|Alice|Girl|April|
+-----+----+-----+
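
Note that F.max("col3") is only a placeholder aggregate here: after the count = 1 filter, each surviving group contains exactly one row, so max() simply returns that row's original col3 value.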

Response to the edited version

df = sqlContext.createDataFrame([
    ["Alice", "Girl", "April", "April"],
    ["Jean", "Boy", "Aug", "XYZ"],
    ["Jean", "Boy", "Sept", "IamBatman"]
],
    ["col1", "col2", "col3", "newcol"])

groupingcols = ["col1", "col2"]
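# Aggregate every non-grouping column with max(); groups that pass the
# count = 1 filter contain exactly one row, so max() returns that row's value.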
othercolumns = [F.max(col).alias(col) for col in df.columns if col not in groupingcols]

df2 = (df
       .groupBy(groupingcols)
       .agg(F.count(F.lit(1)).alias('count'), *othercolumns)
       .where("count = 1")
       .drop("count"))

df2.show(10, False)

Output:

+-----+----+-----+------+
|col1 |col2|col3 |newcol|
+-----+----+-----+------+
|Alice|Girl|April|April |
+-----+----+-----+------+
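
If listing an aggregate for every column gets unwieldy, a window-function count is another way to express the same filter. This is only a sketch of that alternative (not part of the answer above), assuming the same df as in the previous example:

from pyspark.sql import functions as F
from pyspark.sql.window import Window

# Count how many rows share each (col1, col2) pair without collapsing them.
w = Window.partitionBy("col1", "col2")

df3 = (df
       .withColumn("count", F.count(F.lit(1)).over(w))
       .where("count = 1")
       .drop("count"))

df3.show(10, False)

All original columns come through untouched, so nothing needs to be wrapped in max().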