
I trained a model with Google Cloud AutoML Vision, exported it as a TensorFlow.js model, and loaded it on application start. Looking at model.json, the model definitely expects a 224x224 image. I had to add the tensor.reshape call because the model rejected the tensor when I ran a prediction on a tensor of shape [224, 224, 3].

Base64 comes in from the camera. I believe I am preparing the image correctly, but I have no way of knowing for sure.

import { decodeJpeg } from '@tensorflow/tfjs-react-native'
import { decode as decodeBase64 } from 'base64-arraybuffer'

const imgBuffer = decodeBase64(base64)        // base64 string -> ArrayBuffer
const raw = new Uint8Array(imgBuffer)

const imageTensor = decodeJpeg(raw)           // [height, width, 3] int32 tensor
const resizedImageTensor = imageTensor.resizeBilinear([224, 224])
const reshapedImageTensor = resizedImageTensor.reshape([1, 224, 224, 3]) // add batch dimension

const res = model.predict(reshapedImageTensor)
console.log('response', res)
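One way to gain some confidence that the bytes really are a valid image before decoding is to check the JPEG magic number at the start of the buffer. A minimal sketch, assuming a Node-style Buffer is available; the helper name looksLikeJpeg is made up for illustration:

```javascript
// JPEG files always begin with the two-byte SOI marker 0xFF 0xD8.
function looksLikeJpeg(bytes) {
  return bytes.length > 2 && bytes[0] === 0xff && bytes[1] === 0xd8;
}

// Stand-in for the camera's base64 output (first bytes of a JPEG header).
const sampleBase64 = Buffer.from([0xff, 0xd8, 0xff, 0xe0]).toString('base64');

// Same decode path as above: base64 -> bytes.
const raw = new Uint8Array(Buffer.from(sampleBase64, 'base64'));

console.log(looksLikeJpeg(raw)); // true for a real JPEG payload
```

If this logs false, the problem is in the base64 capture or decoding, not in the tensor pipeline.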

But the response I get doesn't seem to have much...

{
   "dataId":{},
   "dtype":"float32",
   "id":293,
   "isDisposedInternal":false,
   "kept":false,
   "rankType":"2",
   "scopeId":5,
   "shape":[
      1,
      1087
   ],
   "size":1087,
   "strides":[
      1087
   ]
}

What does this type of response mean? Is there something I'm doing wrong?


1 Answer


model.predict returns a tf.Tensor, and what you logged is just that tensor's metadata: the shape [1, 1087] means one batch of 1087 output values, which still live on the backend. You need to call dataSync() to download the actual predictions of the model.

const res = model.predict(reshapedImageTensor);
const predictions = res.dataSync();
console.log('Predictions', predictions);
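dataSync() hands back a flat Float32Array of scores, so picking the top prediction is plain JavaScript from there. A sketch with made-up scores; in your case the array comes from res.dataSync() and has 1087 entries, one per label:

```javascript
// Return the index of the highest score in a flat prediction array.
function argMax(scores) {
  let best = 0;
  for (let i = 1; i < scores.length; i++) {
    if (scores[i] > scores[best]) best = i;
  }
  return best;
}

// Stand-in for res.dataSync(); the real model returns 1087 values here.
const predictions = new Float32Array([0.01, 0.87, 0.12]);

const topClass = argMax(predictions);
console.log('top class index:', topClass, 'score:', predictions[topClass]);
// -> top class index: 1 score: 0.87
```

The resulting index maps to a label in the dict.txt file that ships with the AutoML export.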