
I have a DataFrame like this:

+--------+----------+-------------------+------+---------------+
|      ID|      DATE|               TIME|  COL1|           COL2|
+--------+----------+-------------------+------+---------------+
|21763789|2019-08-29|2019-08-29 17:08:06|  1   |            USA|
|29211238|2019-08-27|2019-08-27 11:04:42|  1   |          SPAIN|
| 1696884|2019-08-10|2019-08-10 21:07:57|  1   |         ITALIA|
|  211801|2019-08-06|2019-08-06 20:42:25|  1   |          SPAIN|
|20183201|2019-08-07|2019-08-07 16:59:09|  5001|          SPAIN|
|21763789|2019-08-27|2019-08-27 10:14:38|  1   |            USA|
|29211238|2019-08-14|2019-08-14 09:39:09|  1   |         ITALIA|
|20183201|2019-08-19|2019-08-19 21:30:29|  5001|            USA|
|29211238|2019-08-23|2019-08-23 19:00:25|  1   |            USA|
|  211801|2019-08-22|2019-08-22 05:22:28|  1   |            USA|
|  211801|2019-08-28|2019-08-28 11:58:33|  1   |         ITALIA|
|25648097|2019-08-30|2019-08-30 15:10:22|  2   |          SPAIN|
|29211238|2019-08-27|2019-08-27 11:04:44|  1   |          SPAIN|
|26295227|2019-08-25|2019-08-25 00:08:22|  1   |            USA|
|21763789|2019-08-20|2019-08-20 13:04:34|  1   |          SPAIN|
| 1696884|2019-08-23|2019-08-23 09:27:50|  1   |         ITALIA| 
| 6209818|2019-08-03|2019-08-03 14:52:25|  1   |         ITALIA|
|26295227|2019-08-21|2019-08-21 12:46:58|  1   |            USA|
|29211238|2019-08-22|2019-08-22 17:46:42|  1   |            USA|
|21763789|2019-08-07|2019-08-07 13:02:18|  1   |          SPAIN|
+--------+----------+-------------------+------+---------------+

I want to group this DataFrame by ID and DATE, and then keep only the row with the most recent TIME in each group:

df.groupBy(col("ID"), col("DATE")).agg(max(col("TIME")))

This might work, but I have a lot of other columns, so perhaps my aggregation will break them?

val onlyRecent = Window.partitionBy(col("ID"), col("DATE")).orderBy(col("TIME"))

I don't know whether this could be useful.

Do you have any ideas? Thanks.


1 Answer


You are on the right track with a window function. Basically, you want to "label" the record to keep in some way, and then filter by that label. Which labeling function you use depends entirely on what you want to do with duplicates in the TIME column. The following selects one of the "tied" records (effectively at random):

import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.row_number

val w = Window.partitionBy($"ID", $"DATE").orderBy($"TIME".desc)

df.withColumn("rank", row_number().over(w)).where($"rank" === 1).drop("rank")

If you want to keep all tied records, use rank() or dense_rank() in place of row_number(): those functions assign the same rank to equal TIME values, so every row tied for first place survives the rank === 1 filter.
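The difference between the two tie-handling behaviors can be sketched with plain Scala collections (no Spark needed); the toy rows here are hypothetical and include one deliberate tie on TIME. `maxBy` per group mirrors the row_number approach (exactly one survivor per group), while filtering on the group's maximum TIME mirrors the rank approach (all tied rows survive):

```scala
// Toy rows: the last two share the same (id, date, time) -- a deliberate tie.
case class Row(id: String, date: String, time: String, col2: String)

val rows = List(
  Row("29211238", "2019-08-27", "2019-08-27 11:04:42", "SPAIN"),
  Row("29211238", "2019-08-27", "2019-08-27 11:04:44", "SPAIN"),
  Row("21763789", "2019-08-29", "2019-08-29 17:08:06", "USA"),
  Row("21763789", "2019-08-29", "2019-08-29 17:08:06", "SPAIN")
)

// row_number-style: exactly one row per (id, date), the latest TIME.
// ISO-formatted timestamps sort correctly as strings.
val latest = rows
  .groupBy(r => (r.id, r.date))
  .map { case (_, group) => group.maxBy(_.time) } // picks one arbitrary row on a tie
  .toList

// rank-style: every row tied for the latest TIME in its group survives.
val latestWithTies = rows
  .groupBy(r => (r.id, r.date))
  .flatMap { case (_, group) =>
    val maxTime = group.map(_.time).max
    group.filter(_.time == maxTime)
  }
  .toList
```

With this data, `latest` has one row per group (2 rows total), while `latestWithTies` keeps both rows of the tied group (3 rows total).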