
I am building an object detector in TensorFlow to detect motorbike riders with and without helmets. I have 1000 images each for riders with helmets, riders without helmets, and pedestrians (3000 images in total). My last checkpoint was at 35,267 steps. I have tested on a traffic video, but I see unusually large bounding boxes with wrong results. Can someone please explain the reason for such detections? Do I need to wait for at least 50,000 steps, or do I need to add more data (images from the angle of traffic cameras)?

Model - SSD MobileNet COCO (custom object detection), Training platform - Google Colab

Please find the images attached: Video Snapshot 1

Video Snapshot 2

Day 2 - 10/30/2018

I have tested with still images today and got different results; they seem to be correct on the second day if I test with a single object per image. Please find the results: Single Object Image Test 1, Single Object Image Test 2

Tested checkpoint - 52,000 steps

But if I test with images containing multiple objects on a road, the detections are wrong and the bounding boxes are weirdly large. Is this because of the dataset, since I am training with one motorbike rider (with or without a helmet) per image?
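One thing worth ruling out first: oversized, wrong-looking boxes are often just low-confidence detections that get drawn anyway. Before blaming the dataset, it can help to filter the raw detector outputs by score and see what survives. A minimal sketch, assuming the usual TF Object Detection API output shape (normalized boxes, per-detection scores and class ids); the function name and toy values are my own:

```python
import numpy as np

def filter_detections(boxes, scores, classes, min_score=0.5):
    """Keep only detections whose confidence is at least min_score.

    boxes:   (N, 4) array of [ymin, xmin, ymax, xmax], normalized to [0, 1]
    scores:  (N,) confidence scores
    classes: (N,) integer class ids
    """
    keep = scores >= min_score
    return boxes[keep], scores[keep], classes[keep]

# Toy example: three detections; the huge box has low confidence
# and should disappear at a 0.5 threshold.
boxes = np.array([[0.10, 0.10, 0.40, 0.30],
                  [0.00, 0.00, 0.90, 0.95],   # near full-frame, low score
                  [0.50, 0.60, 0.80, 0.90]])
scores = np.array([0.92, 0.31, 0.77])
classes = np.array([1, 2, 1])

fb, fs, fc = filter_detections(boxes, scores, classes, min_score=0.5)
print(len(fb))  # 2 detections remain
```

If the huge boxes vanish at a higher threshold, the model is merely uncertain, not broken; if they persist at high confidence, the training data is the more likely culprit.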

Please find the wrong results

Multi Object Image Test Multi Object Image Test

I also tested with images where all the objects in the scene are motorbikes; in this case, I did not get any detections at all. Please find the images:

No Result Image No Result Image

The results are very confusing. Is there anything I am missing?

Please also share the .config file you are using for training, so we can provide better suggestions. - Janikan

1 Answer


There is no need to wait until 50,000 steps; you should get decent results by 35k, or even by 10k. I would suggest:

  1. Go through your dataset again and check all the bounding boxes (data cleaning)
  2. Check that your inference code matches the training configuration, for things like batch normalization behaving differently at inference time
  3. Add more data with different features, angles, and color complexities

I would check these points before going further.
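Point 1 can be partially automated by scanning the annotation files for boxes that fall outside the image or cover nearly the whole frame. A minimal sketch, assuming Pascal VOC-style XML annotations (as produced by tools like labelImg); the class name, filename, and size thresholds here are placeholders:

```python
import xml.etree.ElementTree as ET

# Hypothetical VOC annotation for one training image.
SAMPLE_XML = """<annotation>
  <filename>rider_0001.jpg</filename>
  <size><width>640</width><height>480</height><depth>3</depth></size>
  <object>
    <name>rider_with_helmet</name>
    <bndbox><xmin>120</xmin><ymin>80</ymin><xmax>260</xmax><ymax>400</ymax></bndbox>
  </object>
</annotation>"""

def check_annotation(xml_text):
    """Parse one VOC annotation and flag boxes that are out of bounds
    or suspiciously large (likely labeling errors)."""
    root = ET.fromstring(xml_text)
    w = int(root.find("size/width").text)
    h = int(root.find("size/height").text)
    problems = []
    for obj in root.findall("object"):
        name = obj.find("name").text
        b = obj.find("bndbox")
        xmin, ymin = int(b.find("xmin").text), int(b.find("ymin").text)
        xmax, ymax = int(b.find("xmax").text), int(b.find("ymax").text)
        if not (0 <= xmin < xmax <= w and 0 <= ymin < ymax <= h):
            problems.append((name, "box outside image bounds"))
        elif (xmax - xmin) * (ymax - ymin) > 0.9 * w * h:
            problems.append((name, "box covers almost the whole image"))
    return problems

print(check_annotation(SAMPLE_XML))  # [] -> this annotation looks sane
```

Running a check like this over all 3000 annotations before retraining is cheap, and a handful of bad boxes is often enough to produce the kind of oversized detections described above.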