6
votes

I have a pandas data frame with some categorical columns. Some of these contain non-integer values.

I currently want to apply several machine learning models to this data. With some models, it is necessary to do some normalization to get better results, for example converting categorical variables into dummy/indicator variables. Pandas has a function called get_dummies for that purpose. However, its result depends on the data it is given: if I call get_dummies on the training data and then again on the test data, the columns obtained in the two cases can be different, because a categorical column in the test data can contain just a subset of (or a different set from) the possible values in the training data.
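For illustration, a minimal sketch of the mismatch I mean, with made-up data:

import pandas as pd

train = pd.DataFrame({"color": ["red", "green", "blue"]})
test = pd.DataFrame({"color": ["green", "green"]})

print(pd.get_dummies(train).columns.tolist())  # ['color_blue', 'color_green', 'color_red']
print(pd.get_dummies(test).columns.tolist())   # ['color_green']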

Therefore, I am looking for other ways to do one-hot encoding.

What are possible ways to do one-hot encoding in Python (pandas/sklearn)?

3
You will always have that issue between training and testing data with categorical data that can change. – Alexander
Well, if you have different values in the test (and real) data than in the train data, your model won't be any good anyhow when encountering unknown data... Your test dataset must have at least one instance of everything that the final data will have. As far as I know, machine learning is not magic! – dartdog
If you have true class balance and check that it exists, you are peeking into the test set. – Merlin

3 Answers

9
votes

Scikit-learn provides an encoder sklearn.preprocessing.LabelBinarizer.

For encoding the training data you can use fit_transform, which will discover the category labels and create the appropriate dummy variables.

import sklearn.preprocessing

label_binarizer = sklearn.preprocessing.LabelBinarizer()
training_mat = label_binarizer.fit_transform(df.Label)

For the test data, you can apply the same set of categories with transform.

test_mat = label_binarizer.transform(test_df.Label)
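
A complete sketch, assuming small toy frames named train_df and test_df with a Label column:

import pandas as pd
from sklearn.preprocessing import LabelBinarizer

train_df = pd.DataFrame({"Label": ["a", "b", "c", "a"]})
test_df = pd.DataFrame({"Label": ["b", "c"]})

label_binarizer = LabelBinarizer()
training_mat = label_binarizer.fit_transform(train_df.Label)  # learns classes a, b, c
test_mat = label_binarizer.transform(test_df.Label)           # reuses the same classes

print(label_binarizer.classes_)  # ['a' 'b' 'c']
print(test_mat)                  # columns line up with training_mat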
7
votes

In the past, I've found the easiest way to deal with this problem is to use get_dummies and then enforce that the columns match up between test and train. For example, you might do something like:

import numpy as np
import pandas as pd

train = pd.get_dummies(train_df)
test = pd.get_dummies(test_df)

# get the columns in train that are not in test
col_to_add = np.setdiff1d(train.columns, test.columns)

# add these columns to test, setting them equal to zero
for c in col_to_add:
    test[c] = 0

# select and reorder the test columns using the train columns
test = test[train.columns]

This will discard information about labels that you haven't seen in the training set, but will enforce consistency. If you're doing cross validation using these splits, I'd recommend two things. First, do get_dummies on the whole dataset to get all of the columns (instead of just on the training set as in the code above). Second, use StratifiedKFold for cross validation so that your splits contain the relevant labels.
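
A rough sketch of that cross-validation setup, using a hypothetical frame df with a categorical feature and a label column:

import pandas as pd
from sklearn.model_selection import StratifiedKFold

df = pd.DataFrame({
    "color": ["red", "green", "blue", "red", "green", "blue"],
    "label": [0, 1, 0, 1, 0, 1],
})

# build dummies over the whole dataset so every fold sees the same columns
X = pd.get_dummies(df.drop(columns="label"))
y = df["label"]

skf = StratifiedKFold(n_splits=2)
for train_idx, test_idx in skf.split(X, y):
    X_train, X_test = X.iloc[train_idx], X.iloc[test_idx]
    y_train, y_test = y.iloc[train_idx], y.iloc[test_idx]
    # fit and evaluate a model here; both folds share identical dummy columns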

0
votes

Say I have a feature "A" with possible values "a", "b", "c", "d", but the training data set contains only the three values "a", "b", "c". If get_dummies is used at this stage, only three features are generated (A_a, A_b, A_c), whereas ideally there should also be a feature A_d that is all zeros. That can be achieved by declaring the column as categorical with an explicit list of categories:

import pandas as pd

data = pd.DataFrame({"A": ["a", "b", "c"]})
# declare the full set of categories, including "d", which is absent from the data
data["A"] = data["A"].astype(pd.CategoricalDtype(categories=["a", "b", "c", "d"]))
mod_data = pd.get_dummies(data[["A"]], dtype=float)
print(mod_data)

The output being

   A_a  A_b  A_c  A_d
0  1.0  0.0  0.0  0.0
1  0.0  1.0  0.0  0.0
2  0.0  0.0  1.0  0.0
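
As a follow-up sketch, continuing from the snippet above (so pandas is already imported): reusing the same explicit category list on hypothetical test data yields exactly the same columns, which is what the question needs.

test = pd.DataFrame({"A": ["d", "a"]})
test["A"] = test["A"].astype(pd.CategoricalDtype(categories=["a", "b", "c", "d"]))
print(pd.get_dummies(test[["A"]], dtype=float))  # columns A_a, A_b, A_c, A_d again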