
For example, I trained a model (Bayes, SVM, RandomForest, or something else) with the scores below:

Model:
             precision    recall  f1-score   support

         neg     0.0622    0.9267    0.1166       191
         pos     0.9986    0.7890    0.8815     12647

avg / total       0.98      0.79      0.87     12838

My boss tells me that the precision for neg is too low, and that a recall of 60% is acceptable; it doesn't need to be so high. So I need a way to get the best precision while limiting recall to 60%, but I didn't find a feature like this in sklearn.

Is there any way to train a model with the best possible precision while recall is limited to a specific value? (Or to reach 20% precision on neg, regardless of recall?)

Google "precision recall trade-off". - Joonatan Samuel

1 Answer


sklearn covers the precision-recall tradeoff here: http://scikit-learn.org/stable/auto_examples/model_selection/plot_precision_recall.html

One method is to call precision_recall_curve() on your classifier's predicted scores, then pick the point on the curve with the highest precision among those whose recall is at or above your target, and use its threshold when converting scores to labels.
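A minimal sketch of that idea, using a synthetic imbalanced dataset and a LogisticRegression stand-in for your model (the dataset, classifier, and 60% recall target are assumptions for illustration; in your case you would score the neg class, e.g. by treating it as the positive label):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_recall_curve
from sklearn.model_selection import train_test_split

# Imbalanced toy data: class 1 is the rare class (like "neg" in the question).
X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
scores = clf.predict_proba(X_te)[:, 1]  # probability of the rare class

# precision and recall have one more entry than thresholds; drop the last
# (recall = 0) point so all three arrays line up.
precision, recall, thresholds = precision_recall_curve(y_te, scores)

# Best precision subject to recall >= 60%: zero out disallowed points,
# then take the argmax of precision over the rest.
mask = recall[:-1] >= 0.60
best = np.argmax(precision[:-1] * mask)
threshold = thresholds[best]

# Predict with the chosen threshold instead of the default 0.5.
y_pred = (scores >= threshold).astype(int)
```

The key point is that you are not retraining the model; you are moving its decision threshold along the precision-recall curve until the recall constraint is just satisfied, which is where precision is typically highest.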