2
votes

I use the object detection library to train models on my own dataset with different hyper-parameters, pre-processing, etc. Then I want to evaluate those models to compare them.

I know that the library has an evaluation mechanism, but it seems to return only global metrics. However, when I use my model in practice I will be able to discard low-confidence detections to match my needs (I prefer false negatives over false positives).

So, is there a simple way to fetch metrics such as tp/fp/fn/precision/recall for each confidence threshold (instead of the AP), as well as for different IoU thresholds (0.1, 0.5, 0.75), so that I can draw plots similar to the COCO ones?

If there is no simple way, can you give me some advice on how to implement new Evaluator/Evaluation classes to achieve this?

Thanks, Alexis.


1 Answer

0
votes

I finally implemented my own object_detection.utils.object_detection_evaluation.DetectionEvaluator. I give it lists of: categories, IoU thresholds, score thresholds, maximum numbers of detections, and area ranges (for small, medium, and large detections).

Then it computes a confusion matrix for each combination of IoU threshold, score threshold, maximum number of detections, and area range, and aggregates them, returning the numbers of true/false positives and false negatives.
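For reference, here is a minimal sketch of what such an evaluator can look like. It assumes the DetectionEvaluator interface (add_single_ground_truth_image_info, add_single_detected_image_info, evaluate, clear) and the standard_fields and np_box_ops helpers from the library; the ConfusionMatrixEvaluator name and the metric key layout are my own, and for brevity it ignores class labels, area ranges, and the maximum number of detections:

```python
import numpy as np

from object_detection.core import standard_fields
from object_detection.utils import np_box_ops
from object_detection.utils import object_detection_evaluation


class ConfusionMatrixEvaluator(object_detection_evaluation.DetectionEvaluator):
    """Sketch: accumulates TP/FP/FN for each (IoU threshold, score threshold) pair."""

    def __init__(self, categories, iou_thresholds=(0.1, 0.5, 0.75),
                 score_thresholds=tuple(np.arange(0.05, 1.0, 0.05))):
        super(ConfusionMatrixEvaluator, self).__init__(categories)
        self._iou_thresholds = iou_thresholds
        self._score_thresholds = score_thresholds
        self._groundtruth = {}   # image_id -> groundtruth boxes [N, 4]
        self._detections = {}    # image_id -> (boxes [M, 4], scores [M])

    def add_single_ground_truth_image_info(self, image_id, groundtruth_dict):
        self._groundtruth[image_id] = groundtruth_dict[
            standard_fields.InputDataFields.groundtruth_boxes]

    def add_single_detected_image_info(self, image_id, detections_dict):
        self._detections[image_id] = (
            detections_dict[standard_fields.DetectionResultFields.detection_boxes],
            detections_dict[standard_fields.DetectionResultFields.detection_scores])

    def evaluate(self):
        metrics = {}
        for iou_thr in self._iou_thresholds:
            for score_thr in self._score_thresholds:
                tp = fp = fn = 0
                for image_id, gt_boxes in self._groundtruth.items():
                    boxes, scores = self._detections.get(
                        image_id, (np.zeros((0, 4)), np.zeros((0,))))
                    # Keep only detections above the current score threshold.
                    boxes = boxes[scores >= score_thr]
                    matched_gt = set()
                    if gt_boxes.shape[0] and boxes.shape[0]:
                        # Boxes are [y_min, x_min, y_max, x_max]; greedy matching
                        # of each detection to its best-overlapping ground truth.
                        ious = np_box_ops.iou(boxes, gt_boxes)
                        for det_idx in range(boxes.shape[0]):
                            gt_idx = int(np.argmax(ious[det_idx]))
                            if (ious[det_idx, gt_idx] >= iou_thr
                                    and gt_idx not in matched_gt):
                                matched_gt.add(gt_idx)
                                tp += 1
                            else:
                                fp += 1
                    else:
                        fp += boxes.shape[0]
                    fn += gt_boxes.shape[0] - len(matched_gt)
                prefix = 'ConfusionMatrix/IOU@%.2f/score@%.2f' % (iou_thr, score_thr)
                metrics[prefix + '/tp'] = tp
                metrics[prefix + '/fp'] = fp
                metrics[prefix + '/fn'] = fn
                metrics[prefix + '/precision'] = tp / max(tp + fp, 1)
                metrics[prefix + '/recall'] = tp / max(tp + fn, 1)
        return metrics

    def clear(self):
        self._groundtruth.clear()
        self._detections.clear()
```

From the returned counts you can then plot precision and recall as a function of the score threshold for each IoU value, which gives curves similar to the COCO ones.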

Kind regards.