
I want to create a model that predicts age and gender and integrate it into an Android app.

I am on Ubuntu 16 and use Python 3.6, TensorFlow 1.13.1 and Keras 2.2.4.

First, I trained different models on the IMDB dataset: MobileNet V1 and V2 from Keras, and a VGG I coded myself. For the two MobileNets I initialized the models with ImageNet weights.

The accuracy is quite good, with more than 90% for gender.

Once training was over, I tried several ways to convert the models to TFLite:

  • three variants that convert directly from the .h5 file:
# 1) tf.lite converter
converter = tf.lite.TFLiteConverter.from_keras_model_file(model_to_convert)
tflite_model = converter.convert()
open(model_converted, "wb").write(tflite_model)

# 2) tf.contrib.lite converter
converter = tf.contrib.lite.TFLiteConverter.from_keras_model_file(model_to_convert)
tflite_model = converter.convert()
open(model_converted, "wb").write(tflite_model)

# 3) TOCO converter
converter = tf.contrib.lite.TocoConverter.from_keras_model_file(model_to_convert)
tflite_model = converter.convert()
open(model_converted, "wb").write(tflite_model)
  • one where I first convert the model to a TensorFlow frozen graph, as explained in this example

I also tried adding this line of code before the conversion:

tf.keras.backend.set_learning_phase(0)

Finally, I load the .tflite file in Android Studio:

private ByteBuffer convertBitmapToByteBuffer(Bitmap bitmap) {
        int SIZE_IMAGE = 96;
        ByteBuffer byteBuffer = ByteBuffer.allocateDirect(4 * 1 * SIZE_IMAGE * SIZE_IMAGE * 3);
        byteBuffer.order(ByteOrder.nativeOrder());
        int[] pixels = new int[SIZE_IMAGE * SIZE_IMAGE];
        bitmap.getPixels(pixels, 0, bitmap.getWidth(), 0, 0, bitmap.getWidth(), bitmap.getHeight());
        int pixel = 0;
        for (int i = 0; i < SIZE_IMAGE; i++) {
            for (int j = 0; j < SIZE_IMAGE; j++) {
                final int val = pixels[pixel++];
                byteBuffer.putFloat((float) (((val >> 16) & 0xFF) / 255));
                byteBuffer.putFloat((float) (((val >> 8) & 0xFF) / 255));
                byteBuffer.putFloat((float) ((val & 0xFF) / 255));
            }
        }
        return byteBuffer;
    }

public String recognizeImage(Bitmap bitmap) {
        ByteBuffer byteBuffer = convertBitmapToByteBuffer(bitmap);
        Map<Integer, Object> cnnOutputs = new HashMap<>();
        float[][] gender = new float[1][2];
        cnnOutputs.put(0, gender);
        float[][] age = new float[1][21];
        cnnOutputs.put(1, age);
        Object[] inputs = {byteBuffer};
        interpreter.runForMultipleInputsOutputs(inputs, cnnOutputs);
        String result = convertToResults(gender[0], age[0]);
        return result;
    }
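As a debugging aid, the byte-buffer conversion above can be reproduced off-device, so the exact tensor the interpreter receives can also be fed to the original Keras model. A NumPy sketch (the function name is hypothetical; it mirrors the Java loop exactly, including its integer division):

```python
import numpy as np

def bitmap_to_input(pixels_argb, size=96):
    """Reproduce convertBitmapToByteBuffer() in NumPy.

    pixels_argb: flat list of packed ARGB ints, as returned by
    Bitmap.getPixels(). Returns a (1, size, size, 3) float32 tensor.
    """
    vals = np.asarray(pixels_argb, dtype=np.int64)[: size * size]
    r = (vals >> 16) & 0xFF
    g = (vals >> 8) & 0xFF
    b = vals & 0xFF
    # Integer division, exactly as in the Java loop above.
    rgb = np.stack([r // 255, g // 255, b // 255], axis=-1)
    return rgb.reshape(1, size, size, 3).astype(np.float32)
```

Feeding the output of this function to the Keras model shows whether the on-device preprocessing matches what the model saw during training.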

During the final inference, the accuracy is very low regardless of the model used. Either the interpreter always predicts exactly the same result, or the predicted age changes a little but the predicted gender is always "woman".

What should I do?

Thanks in advance!


1 Answer


Try using your Keras model and the TFLite model to process the same input data and compare the inference results. The outputs probably don't match, and you can debug from there.
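A minimal version of this check: run one input through the Keras model and through the converted TFLite model, then compare. The tiny model below is a stand-in for the trained one (on TF 1.13, build the converter with tf.lite.TFLiteConverter.from_keras_model_file("model.h5") instead of from_keras_model):

```python
import numpy as np
import tensorflow as tf

# Stand-in model; replace with the trained age/gender model.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(96, 96, 3)),
    tf.keras.layers.Dense(2, activation="softmax"),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

# One input, preprocessed exactly the way the app preprocesses it.
x = np.random.rand(1, 96, 96, 3).astype(np.float32)
keras_out = model.predict(x)

interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
interpreter.set_tensor(interpreter.get_input_details()[0]["index"], x)
interpreter.invoke()
tflite_out = interpreter.get_tensor(interpreter.get_output_details()[0]["index"])

# If this comparison fails, the conversion is the problem; if it passes,
# look at the Android-side preprocessing instead.
print(np.allclose(keras_out, tflite_out, atol=1e-4))
```

If the outputs agree here but the app still misbehaves, the mismatch is almost certainly in how the app builds the input tensor rather than in the converted model.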