I am getting my feet wet with TensorFlow and Google Cloud ML Engine.
Based on faster_rcnn_resnet101_coco_11_06_2017, I trained a model on custom data using the Object Detection API.
The exported model's size is 190.5MB.
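For reference, the export was done with the Object Detection API's export_inference_graph.py script, roughly like this (paths and the checkpoint step are placeholders, and the flag values reflect my setup rather than anything definitive):

# Export sketch; flag names follow the Object Detection API's export docs.
# encoded_image_string_tensor lets the exported model accept base64 image bytes.
python object_detection/export_inference_graph.py \
    --input_type encoded_image_string_tensor \
    --pipeline_config_path path/to/faster_rcnn_resnet101.config \
    --trained_checkpoint_prefix path/to/model.ckpt-NNNN \
    --output_directory path/to/exported_model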
Local prediction works fine, but ML Engine gives me the following error when using gcloud:
"error": {
"code": 429,
"message": "Prediction server is out of memory, possibly because model size
is too big.",
"status": "RESOURCE_EXHAUSTED"
}
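For completeness, the prediction is requested roughly like this (model and version names are placeholders):

gcloud ml-engine predict \
    --model my_detector \
    --version v1 \
    --json-instances request.json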
And the following error when using the Node.js client library:
code: 429,
errors:
[ { message: 'Prediction server is out of memory, possibly because model size is too big.',
domain: 'global',
reason: 'rateLimitExceeded' } ] }
The images I am using for test predictions are PNGs of 700×525 px (365 KB) and 373×502 px (90 KB).
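Each request is built by base64-encoding the PNG into the JSON instances file, roughly like this (a minimal sketch assuming the exported model's serving signature takes encoded image bytes; the exact instance key may differ for other export settings):

import base64
import json

# Base64-encode the PNG so it can be sent in a JSON request;
# ML Engine expects binary data wrapped in a {"b64": ...} field.
with open("image.png", "rb") as f:
    encoded = base64.b64encode(f.read()).decode("utf-8")

# One JSON instance per line, as expected by --json-instances.
with open("request.json", "w") as f:
    f.write(json.dumps({"b64": encoded}) + "\n")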
I am not sure how to proceed.
Is it expected that object detection requires more memory than ML Engine offers?
Is the model's size really the issue here, and if so, how can I reduce it?
Thanks for your help and ideas!