- Which checkpoint from the ones stored in checkpoint_dir do the results correspond to?
In your train_dir you will find a file named checkpoint. If you open it, the first line points to your latest checkpoint, which is the one used for evaluation. You can change that first line so it points to whichever checkpoint you want to evaluate.
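Since the checkpoint file is plain text, the change can also be scripted. A minimal sketch, assuming the standard TensorFlow checkpoint file layout (the helper name `set_eval_checkpoint` is my own; the `model_checkpoint_path` line format is TensorFlow's):

```python
from pathlib import Path

def set_eval_checkpoint(train_dir, checkpoint_name):
    """Rewrite the first line of train_dir/checkpoint to select a checkpoint.

    The first line (model_checkpoint_path) is the one evaluation reads;
    the remaining all_model_checkpoint_paths lines just list what was saved.
    """
    ckpt_file = Path(train_dir) / "checkpoint"
    lines = ckpt_file.read_text().splitlines()
    lines[0] = f'model_checkpoint_path: "{checkpoint_name}"'
    ckpt_file.write_text("\n".join(lines) + "\n")
```

For example, `set_eval_checkpoint("train_dir", "model.ckpt-10000")` would make the evaluation pick up step 10000 instead of the latest checkpoint.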
- I get a value of -1.00 in some cases. How do I interpret that?
A value of -1 in a metric means that no result meets that metric's criteria. In your case, it means that your dataset does not contain any objects with small area, so that category is discarded. If such objects did exist but none of them were detected, the metric would be 0 instead of -1. The size categories are:
- small objects: area < 32^2 px
- medium objects: 32^2 px < area < 96^2 px
- large objects: area > 96^2 px
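These are the standard COCO size thresholds, and they can be written as a tiny helper (a sketch; the function name is mine, the thresholds are COCO's):

```python
def coco_size_bucket(area_px):
    """Return the COCO size category for a box area in square pixels.

    COCO evaluation reports -1 for a bucket's AP/AR when no ground-truth
    boxes fall into that bucket at all, and 0 when ground truth exists
    there but nothing was detected.
    """
    if area_px < 32 ** 2:      # area < 1024 px^2
        return "small"
    if area_px < 96 ** 2:      # 1024 <= area < 9216 px^2
        return "medium"
    return "large"
```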
- What is the difference between the eval.py and model_main.py scripts provided?
The eval.py script only evaluates the model and returns the metrics. The model_main.py script combines the training script with the evaluation one, letting you choose among the following:
- Train the model
- Evaluate the model
- Train and evaluate the model simultaneously
In the latter case, you should provide your validation data, not your test data.
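As a sketch, the modes map onto invocations like the following. The flag names match the TF1 Object Detection API's model_main.py as I recall them, and all paths are placeholders, so double-check both against your checkout:

```shell
# Evaluation only: read checkpoints from --checkpoint_dir and just evaluate.
python model_main.py \
  --pipeline_config_path=path/to/pipeline.config \
  --model_dir=path/to/eval_output \
  --checkpoint_dir=path/to/train_dir \
  --run_once=true

# Train and evaluate together: omit --checkpoint_dir, and model_main.py
# alternates training with evaluation on the eval input from the config.
python model_main.py \
  --pipeline_config_path=path/to/pipeline.config \
  --model_dir=path/to/train_dir
```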
- Any resources related to evaluation and inference for the Object Detection API that you can refer me to?
I think you are looking for this Jupyter notebook for off-the-shelf inference.