I want to use Scikit-Learn's GridSearchCV to run a bunch of experiments and then print out the recall, precision, and F1 score of each experiment. This article (https://scikit-learn.org/stable/auto_examples/model_selection/plot_grid_search_digits.html) suggests that I need to run .fit and .predict multiple times:
...
scores = ['precision', 'recall']
...
for score in scores:
    ...
    clf = GridSearchCV(
        SVC(), tuned_parameters, scoring='%s_macro' % score
    )
    clf.fit(X_train, y_train)  # running for each scoring metric
    ...
    for mean, std, params in zip(means, stds, clf.cv_results_['params']):
        print("%0.3f (+/-%0.03f) for %r"
              % (mean, std * 2, params))
    ...
    y_true, y_pred = y_test, clf.predict(X_test)  # running for each scoring metric
    print(classification_report(y_true, y_pred))
I would like to run .fit just once and log all of the recall, precision, and F1 metrics. So, for example, something along the lines of:
clf = GridSearchCV(
    SVC(), tuned_parameters, scoring=['recall', 'precision', 'f1']  # I don't think this syntax is even possible
)
clf.fit(X_train, y_train)
for metric in clf.something_that_i_cannot_find:
    ### does something like this exist?
    print(metric['precision'])
    print(metric['recall'])
    print(metric['f1'])
    ###:end does something like this exist?
Or maybe even:
...
for run in clf.something_that_i_cannot_find:
    ### does something like this exist?
    print(classification_report(run.y_true, run.y_pred))
    ###:end does something like this exist?
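The closest workaround I can come up with for this second idea is to loop over the parameter grid myself and call cross_val_predict for each candidate, which loses GridSearchCV's bookkeeping but does give me one classification_report per parameter setting. A minimal sketch (the grid values and iris data are just placeholders for illustration):

```python
from sklearn.datasets import load_iris
from sklearn.metrics import classification_report
from sklearn.model_selection import ParameterGrid, cross_val_predict
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# Placeholder grid; in my real code this would be tuned_parameters.
param_grid = {'C': [1, 10], 'kernel': ['linear', 'rbf']}

for params in ParameterGrid(param_grid):
    # Cross-validated predictions for this one candidate.
    y_pred = cross_val_predict(SVC(**params), X, y, cv=5)
    print(params)
    print(classification_report(y, y_pred))
```

But this reimplements the search loop by hand, which is what I was hoping GridSearchCV would spare me.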
This article (Scoring in Gridsearch CV) suggests that GridSearchCV can be made aware of multiple scorers, but I still can't figure out how to access each of those scores for all of the experiments.
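For reference, here is my best guess at what the multi-scorer version would look like, based on that article. I'm assuming that passing a list of scorer names works, that refit then has to name one of them, and that the per-metric results land in cv_results_ under keys like mean_test_<scorer> (that key pattern is my assumption):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Placeholder grid; in my real code this would be tuned_parameters.
param_grid = {'C': [1, 10], 'kernel': ['linear', 'rbf']}
scores = ['precision_macro', 'recall_macro', 'f1_macro']

clf = GridSearchCV(
    SVC(), param_grid,
    scoring=scores,       # multiple scorers at once?
    refit='f1_macro',     # my guess: one scorer must be chosen for refitting
)
clf.fit(X_train, y_train)  # a single fit

# Assumed key pattern: mean_test_<scorer> / std_test_<scorer>
for metric in scores:
    for mean, std, params in zip(
        clf.cv_results_['mean_test_%s' % metric],
        clf.cv_results_['std_test_%s' % metric],
        clf.cv_results_['params'],
    ):
        print("%s: %0.3f (+/-%0.03f) for %r" % (metric, mean, std * 2, params))
```

Is this roughly the intended usage, or am I misreading the docs?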
Is what I'm looking for not supported by GridSearchCV? Is the method used in the article (i.e. running .fit and .predict multiple times) the easiest way to accomplish something similar to what I'm asking for?
Thank you for your time!