
I want to get the F1 score for each of the classes (I have 4 classes) and for each of the cross-validation folds. clf is my trained model, X_test contains the features and y_test the labels of the test set. Since I am doing 5-fold cross-validation, I should get 4 F1 scores (one per class) for the first fold, 4 for the second, and so on, for a total of 20. Is there a simple way to do this in Python?

The following line gives me the F1 averaged over all classes, i.e. just 5 values, one per fold. I checked the options for the scoring parameter of cross_val_score (https://scikit-learn.org/stable/modules/model_evaluation.html) and it seems I cannot get the F1 score for each class in each fold (or maybe I am missing something).

from sklearn.model_selection import cross_val_score

scores = cross_val_score(clf, X_test, y_test, cv=5, scoring='f1_macro')

1 Answer


Ok, I found a solution. X is my dataframe of features and y the labels. f1_score(y_test, y_pred, average=None) gives the F1 scores for each class, without aggregation. So for each fold, we train the model and evaluate it on that fold's test set.

from sklearn.model_selection import KFold
from sklearn.metrics import f1_score

cv = KFold(n_splits=5, shuffle=False)
for train_index, test_index in cv.split(X):
    # split the data into this fold's train and test sets
    X_train, X_test = X.iloc[train_index], X.iloc[test_index]
    y_train, y_test = y.iloc[train_index], y.iloc[test_index]
    # fit on the training fold and predict on the held-out fold
    clf = clf.fit(X_train, y_train)
    y_pred = clf.predict(X_test)
    # average=None returns one F1 score per class
    print(f1_score(y_test, y_pred, average=None))

Then, the result will be:

[0.99320793 0.79749478 0.34782609 0.44243792]
[0.99352309 0.82583622 0.34615385 0.48873874]
[0.99294785 0.78794403 0.28571429 0.42403628]
[0.99324611 0.79236813 0.31654676 0.43778802]
[0.99327615 0.79136691 0.32704403 0.42410197]

where each line contains the F1 scores for one fold, and each value within a line is the F1 score of one class.

If there is a shorter & simpler solution to this, please, feel free to post it.
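As a possibly shorter alternative, here is a sketch using cross_validate with one scorer per class. The make_classification data and RandomForestClassifier are only placeholders for your own X, y and clf; the idea is that restricting labels to a single class while using average='macro' makes f1_score return that class's F1 as a scalar, which is the form a scorer must return.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import make_scorer, f1_score
from sklearn.model_selection import cross_validate

# placeholder data and classifier; substitute your own X, y and clf
X, y = make_classification(n_samples=500, n_classes=4, n_informative=8, random_state=0)
clf = RandomForestClassifier(random_state=0)

# one scorer per class: labels=[c] with average='macro' yields class c's F1 as a scalar
scoring = {f"f1_class_{c}": make_scorer(f1_score, labels=[c], average="macro")
           for c in np.unique(y)}

results = cross_validate(clf, X, y, cv=5, scoring=scoring)
for c in np.unique(y):
    print(c, results[f"test_f1_class_{c}"])  # 5 values per class, one per fold

Each printed row then holds the five per-fold F1 scores for one class, i.e. the transpose of the per-fold rows above.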