4 votes

I have trained a Faster R-CNN model using the TensorFlow Object Detection API and am using this inference script with my frozen graph:

https://github.com/tensorflow/models/blob/master/research/object_detection/object_detection_tutorial.ipynb

I intend to use it for object tracking in videos, but inference with this script is very slow since it only processes one image at a time instead of a batch of images. Is there any way to run inference on a batch of images at once? The relevant inference function is below; I am wondering how to modify it to work with a stack of images.

import numpy as np
import tensorflow as tf
from object_detection.utils import ops as utils_ops

def run_inference_for_single_image(image, graph):
    with graph.as_default():
        with tf.Session() as sess:
            # Get handles to input and output tensors
            ops = tf.get_default_graph().get_operations()
            all_tensor_names = {output.name for op in ops for output in op.outputs}
            tensor_dict = {}
            for key in ['num_detections', 'detection_boxes', 'detection_scores', 'detection_classes', 'detection_masks']:
                tensor_name = key + ':0'
                if tensor_name in all_tensor_names:
                    tensor_dict[key] = tf.get_default_graph().get_tensor_by_name(tensor_name)
            if 'detection_masks' in tensor_dict:
                # The following processing is only for a single image
                detection_boxes = tf.squeeze(tensor_dict['detection_boxes'], [0])
                detection_masks = tf.squeeze(tensor_dict['detection_masks'], [0])
                # Reframing is required to translate masks from box coordinates to image coordinates and fit the image size.
                real_num_detection = tf.cast(tensor_dict['num_detections'][0], tf.int32)
                detection_boxes = tf.slice(detection_boxes, [0, 0], [real_num_detection, -1])
                detection_masks = tf.slice(detection_masks, [0, 0, 0], [real_num_detection, -1, -1])
                detection_masks_reframed = utils_ops.reframe_box_masks_to_image_masks(
                    detection_masks, detection_boxes, image.shape[0], image.shape[1])
                detection_masks_reframed = tf.cast(tf.greater(detection_masks_reframed, 0.5), tf.uint8)
                # Follow the convention by adding back the batch dimension
                tensor_dict['detection_masks'] = tf.expand_dims(detection_masks_reframed, 0)
            image_tensor = tf.get_default_graph().get_tensor_by_name('image_tensor:0')

            # Run inference
            output_dict = sess.run(tensor_dict, feed_dict={image_tensor: np.expand_dims(image, 0)})

            # All outputs are float32 numpy arrays, so convert types as appropriate
            output_dict['num_detections'] = int(output_dict['num_detections'][0])
            output_dict['detection_classes'] = output_dict['detection_classes'][0].astype(np.uint8)
            output_dict['detection_boxes'] = output_dict['detection_boxes'][0]
            output_dict['detection_scores'] = output_dict['detection_scores'][0]
            if 'detection_masks' in output_dict:
                output_dict['detection_masks'] = output_dict['detection_masks'][0]
    return output_dict

2 Answers

4 votes

Instead of passing just one numpy array of shape (1, image_height, image_width, 3), you can pass a numpy array containing your whole image batch, of shape (batch_size, image_height, image_width, 3), to the sess.run call:

output_dict = sess.run(tensor_dict, feed_dict={image_tensor: image_batch})

The output_dict will be slightly different than before; I still haven't figured out exactly how. Maybe someone can help further?

Edit

It seems that the output_dict gains an extra index which corresponds to the image number in your batch. So you'll find the boxes for a certain image in output_dict['detection_boxes'][image_counter].
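
For illustration, a minimal sketch of how this can look inside the session from the question. Here frames is assumed to be a list of same-sized uint8 numpy images (e.g. video frames); sess, tensor_dict and image_tensor come from the original function:

import numpy as np

# Stack same-sized frames into a single batch of shape (batch_size, height, width, 3)
image_batch = np.stack(frames, axis=0)

output_dict = sess.run(tensor_dict, feed_dict={image_tensor: image_batch})

# Each output keeps its leading batch axis, so index per image
for i in range(image_batch.shape[0]):
    num = int(output_dict['num_detections'][i])
    boxes = output_dict['detection_boxes'][i][:num]    # normalized [ymin, xmin, ymax, xmax]
    scores = output_dict['detection_scores'][i][:num]
    classes = output_dict['detection_classes'][i][:num].astype(np.uint8)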

Edit2

For some reason this won't work with Mask R-CNN... presumably because the mask post-processing block in the tutorial function squeezes out the batch dimension and only reframes masks for a single image.

0 votes

If you run export_inference_graph.py, you should be able to input batches of images by default as it sets the image_tensor shape to [None, None, None, 3].

python object_detection/export_inference_graph.py \
    --input_type image_tensor \
    --pipeline_config_path ${PIPELINE_CONFIG_PATH} \
    --trained_checkpoint_prefix ${TRAIN_PATH} \
    --output_directory output_inference_graph.pb
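
Assuming the export succeeds, the exporter typically writes a frozen_inference_graph.pb inside the output directory, which you can then load and feed a whole batch. A rough sketch along those lines (the graph path and image_batch are placeholders you'd adapt to your setup):

import numpy as np
import tensorflow as tf

# Load the exported frozen graph
detection_graph = tf.Graph()
with detection_graph.as_default():
    graph_def = tf.GraphDef()
    with tf.gfile.GFile('output_inference_graph.pb/frozen_inference_graph.pb', 'rb') as f:
        graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name='')

with tf.Session(graph=detection_graph) as sess:
    image_tensor = detection_graph.get_tensor_by_name('image_tensor:0')
    fetches = {key: detection_graph.get_tensor_by_name(key + ':0')
               for key in ['num_detections', 'detection_boxes',
                           'detection_scores', 'detection_classes']}

    # image_batch: uint8 numpy array of shape (batch_size, height, width, 3)
    output_dict = sess.run(fetches, feed_dict={image_tensor: image_batch})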