Context: I need to filter a dataframe based on whether its values are contained in another dataframe's column, using something like the isin function.
For Python users working with pandas, that would be: isin().
For R users, that would be: %in%.
So I have a simple spark dataframe with id and value columns:
l = [(1, 12), (1, 44), (1, 3), (2, 54), (3, 18), (3, 11), (4, 13), (5, 78)]
df = spark.createDataFrame(l, ['id', 'value'])
df.show()
+---+-----+
| id|value|
+---+-----+
| 1| 12|
| 1| 44|
| 1| 3|
| 2| 54|
| 3| 18|
| 3| 11|
| 4| 13|
| 5| 78|
+---+-----+
I want to get all ids that appear multiple times. Here's a dataframe of the ids that appear only once in df:
from pyspark.sql.functions import col
unique_ids = df.groupBy('id').count().where(col('count') < 2)
unique_ids.show()
+---+-----+
| id|count|
+---+-----+
| 5| 1|
| 2| 1|
| 4| 1|
+---+-----+
So the logical operation would be:
df = df[~df.id.isin(unique_ids.id)]
# This is the same as:
df = df[df.id.isin(unique_ids.id) == False]
However, I get an empty dataframe:
df.show()
+---+-----+
| id|value|
+---+-----+
+---+-----+
Strangely, this "error" also works in the opposite direction:
df[df.id.isin(unique_ids.id)]
returns all the rows of df.
isin is not the right tool here - use join. For example: df.join(unique_ids, on="id").show(). You can only use isin with literal values (ex: df.where(df["id"].isin([1, 2, 3]))), not with a column. – pault