I use the object detection library to train models on my own dataset with different hyper-parameters, pre-processing, etc. Then I want to evaluate those models to compare them.
I know the library has an evaluation mechanism, but it seems to return only global metrics. However, when I actually use my model, I will be able to discard low-confidence detections to match my needs (I prefer false negatives over false positives).
So, is there a simple way to fetch metrics like tp/fp/fn/precision/recall for each confidence threshold (instead of the AP), as well as for different IoU thresholds (0.1, 0.5, 0.75), in order to draw plots similar to the COCO ones?
If there is no simple way, can you give me some advice on how to implement new Evaluator/Evaluation classes to achieve this? The sketch below shows roughly what I have in mind.
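To make the request concrete, here is roughly the kind of computation I am after. This is a single-class sketch with my own greedy IoU matching, not code from the library; the `image_counts`/`sweep` names and the per-image `dataset` iterable are my own assumptions.

```python
import numpy as np

def iou(a, b):
    """IoU of two boxes in [x1, y1, x2, y2] format."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def image_counts(det_boxes, det_scores, gt_boxes, conf_thresh, iou_thresh):
    """Greedily match detections (highest score first) to unmatched ground
    truth boxes; return (tp, fp, fn) for one image."""
    order = np.argsort(det_scores)[::-1]
    matched = set()
    tp = fp = 0
    for i in order:
        if det_scores[i] < conf_thresh:
            break  # scores are sorted descending, the rest are below threshold
        best_iou, best_j = 0.0, -1
        for j, gt in enumerate(gt_boxes):
            if j in matched:
                continue
            o = iou(det_boxes[i], gt)
            if o > best_iou:
                best_iou, best_j = o, j
        if best_j >= 0 and best_iou >= iou_thresh:
            tp += 1
            matched.add(best_j)
        else:
            fp += 1
    fn = len(gt_boxes) - len(matched)  # unmatched ground truth = false negatives
    return tp, fp, fn

def sweep(dataset, iou_thresholds=(0.1, 0.5, 0.75)):
    """`dataset` is assumed to be a list of (det_boxes, det_scores, gt_boxes)
    tuples, one per image. Accumulate tp/fp/fn over the whole dataset for a
    grid of confidence thresholds and a few IoU thresholds."""
    results = {}
    for iou_t in iou_thresholds:
        for conf_t in np.arange(0.05, 1.0, 0.05):
            tp = fp = fn = 0
            for det_boxes, det_scores, gt_boxes in dataset:
                t, f, n = image_counts(det_boxes, det_scores, gt_boxes,
                                       conf_t, iou_t)
                tp, fp, fn = tp + t, fp + f, fn + n
            precision = tp / (tp + fp) if tp + fp else 0.0
            recall = tp / (tp + fn) if tp + fn else 0.0
            results[(iou_t, round(float(conf_t), 2))] = (tp, fp, fn,
                                                         precision, recall)
    return results
```

The idea is that `results` would then be enough to plot precision/recall against the confidence threshold for each IoU value, so I can pick the operating point that keeps false positives low.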
Thanks, Alexis.