3
votes

There's always a tradeoff between precision and recall. I'm dealing with a multi-class problem, where for some classes I have perfect precision but really low recall.

Since for my problem false positives are less of an issue than missing true positives, I want to reduce precision in favor of increasing recall for some specific classes, while keeping everything else as stable as possible. What are some ways to trade precision for better recall?

1
It looks like your question belongs on Cross Validated, and it also doesn't appear to have anything to do with TF or Keras. Some suggestions would be to search for this tradeoff on CV, and especially to look up over/undersampling. – Jakub Bartczuk

1 Answer

3
votes

You can apply a threshold to the confidence scores of your classifier's output layer and plot precision and recall at different threshold values. You can also use different thresholds for different classes.
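A minimal sketch of the per-class threshold idea, using made-up scores and plain NumPy (the data, threshold values, and variable names here are illustrative assumptions, not from the question): instead of taking a plain argmax over the softmax scores, divide each class's score by a per-class threshold first, so classes given a lower threshold win the argmax more often, trading their precision for recall.

```python
import numpy as np

# Hypothetical softmax scores from a 3-class classifier (one row per sample).
scores = np.array([
    [0.50, 0.30, 0.20],
    [0.45, 0.35, 0.20],
    [0.30, 0.60, 0.10],
    [0.40, 0.25, 0.35],
])
y_true = np.array([0, 1, 1, 2])

# Plain argmax: every class implicitly uses the same decision threshold.
pred_argmax = scores.argmax(axis=1)

# Per-class thresholds: scaling scores by 1/threshold before the argmax
# favors classes with lower thresholds, boosting their recall at the
# cost of other classes' precision.
thresholds = np.array([1.0, 0.7, 1.0])  # be more permissive toward class 1
pred_thresh = (scores / thresholds).argmax(axis=1)

# Recall of class 1 before and after thresholding.
recall_before = np.mean(pred_argmax[y_true == 1] == 1)
recall_after = np.mean(pred_thresh[y_true == 1] == 1)
```

Sweeping `thresholds[c]` over a grid and recording precision/recall at each value gives you the per-class tradeoff curve mentioned above, from which you can pick an operating point per class.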

You can also take a look at TensorFlow's weighted cross entropy as a loss function. As stated in its documentation, it uses a weight to trade off recall and precision by up- or down-weighting the cost of a positive error relative to a negative error.