I have a large list that includes duplicate values, and I wish to subset a data frame using the list values. Usually I would use the .isin method, but I want to keep duplicate rows, and .isin returns each matching row only once no matter how many times a value appears in the list. Here is some example code:
import pandas as pd

# A plain list of lists keeps columns a and b as integers
# (wrapping it in np.array would coerce everything to strings).
df = pd.DataFrame([[1, 2, 'car'], [4, 5, 'bike'], [1, 2, 'train'],
                   [1, 2, 'car'], [1, 2, 'train']],
                  columns=['a', 'b', 'c'])
lst = ['car', 'bike', 'car', 'car']
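For comparison, this is roughly what .isin gives me, and it illustrates the problem: each matching row comes back once, and the duplicates in lst are ignored.

df[df['c'].isin(lst)]  # returns rows 0, 1 and 3 only, once each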
So I want to return a data frame that includes the matching rows once for every occurrence of the value in the list.
On a small dataset such as the one above I can loop through the list and append the returned rows to a new data frame (a rough sketch of that loop is below), but on a large dataset this seems to take an extremely long time. Any suggestions?
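Something like this is what I mean by looping and appending (a rough sketch, not my exact code):

# Collect the matching rows for each entry in lst, then stitch them together.
parts = []
for item in lst:
    parts.append(df[df['c'] == item])
result = pd.concat(parts)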
EDIT: Chris' suggestion works and produces the expected output:
pd.concat([df[df['c'].eq(x)] for x in lst])
However, as with the loop, this is extremely slow compared to something like the .isin method when working with much larger data. I have added this edit so that the expected output can be reproduced.
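For reference, running that snippet on the example data above should give something like this (seven rows, one per occurrence in lst):

   a  b     c
0  1  2   car
3  1  2   car
1  4  5  bike
0  1  2   car
3  1  2   car
0  1  2   car
3  1  2   car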