5 votes

I'm trying to figure out the easiest way to run object detection from a Tensorflow model (Inception or MobileNet) in an iOS app.

I have iOS Tensorflow image classification working in my own app and network, following this example,

and I have Tensorflow image classification and object detection working in Android for my own app and network, following this example,

but the iOS example covers only image classification, not object detection. How can I extend the iOS example code to support object detection, or is there a complete example for this in iOS (preferably Objective-C)?

I did find this and this, but they require recompiling Tensorflow from source, which seems complex.

I also found Tensorflow Lite, but again there is no object detection example.

I also found the option of converting a Tensorflow model to Apple Core ML and then using Core ML, but this seems very complex, and I could not find a complete example for object detection in Core ML.


2 Answers

1 vote

You need to train your own ML model. For iOS it will be easier to just use Core ML; Tensorflow models can also be converted to Core ML format. You can play with this sample and try different models: https://developer.apple.com/documentation/vision/recognizing_objects_in_live_capture
Or here:
https://github.com/ytakzk/CoreML-samples
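To give a sense of the shape of the code, here is a minimal Swift sketch of running an object-detection .mlmodel through the Vision framework, roughly what the Apple sample above does per frame. `YourDetector` and `detectObjects` are placeholder names, not part of any of the linked projects; `YourDetector` stands in for whatever class Xcode generates from the .mlmodel you add to the project:

```swift
import CoreML
import Vision
import CoreGraphics

// Minimal sketch: run an object-detection Core ML model via Vision.
// "YourDetector" is a placeholder for the class Xcode generates from
// your .mlmodel file (e.g. a converted Tensorflow detection model).
func detectObjects(in image: CGImage) {
    guard let detector = try? YourDetector(configuration: MLModelConfiguration()),
          let visionModel = try? VNCoreMLModel(for: detector.model) else {
        print("Failed to load the Core ML model")
        return
    }

    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        // Detection models return VNRecognizedObjectObservation, which carries
        // class labels and a normalized bounding box for each detected object.
        guard let results = request.results as? [VNRecognizedObjectObservation] else { return }
        for observation in results {
            let best = observation.labels.first // labels are sorted by confidence
            print("\(best?.identifier ?? "?") (\(best?.confidence ?? 0)) at \(observation.boundingBox)")
        }
    }
    request.imageCropAndScaleOption = .scaleFill

    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    do {
        try handler.perform([request])
    } catch {
        print("Vision request failed: \(error)")
    }
}
```

Note that Vision's boundingBox coordinates are normalized to [0, 1] with the origin in the lower left, so you need to convert them to view coordinates before drawing overlays.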

0 votes

So I ended up following this demo project:

https://github.com/csharpseattle/tensorflowiOS

It provides a working demo app/project, and it was easy to swap its Tensorflow .pb file for my own trained network file.

The instructions in the readme are pretty straightforward. You do need to check out and recompile Tensorflow, which takes several hours and about 10 GB of disk space. I did hit the thread issue; the gsed instructions worked around it. You also need to install Homebrew.

I have not looked at Core ML yet, but from what I have read, converting from Tensorflow to Core ML is complicated, and you may lose parts of your model.

It ran quite fast on an iPhone, even using an Inception model instead of MobileNet.