
As part of learning TensorFlow, I wrote code to distinguish pictures of me from pictures of someone else. However, it misidentifies 1 out of 3 pictures of someone else as me. How do you go about fixing this? I tried the variations below, but got the same result each time.

  • 2 Dense layers with relu and sigmoid activation function.
  • 2 Dense layers with relu and softmax activation function.
  • 3 layers with relu and sigmoid activation function.
  • binary_crossentropy, and categorical_crossentropy
  • 10 epochs and 15 epochs

All of the above are done with 3 layers of Conv2D, 3 layers of MaxPooling2D.

Last epoch training:

Epoch 15/15 8/8 [==============================] - 2s 272ms/step - loss: 4.9323e-07 - acc: 1.0000 - val_loss: 0.0326 - val_acc: 1.0000
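For reference, the setup described above roughly corresponds to the following Keras model. This is a sketch under assumptions: the filter counts, input size, and Dense layer width are guesses, since they aren't stated in the question; only the overall structure (3 Conv2D + 3 MaxPooling2D blocks, Dense layers, binary_crossentropy) comes from the text.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_model(input_shape=(150, 150, 3)):
    # 3 Conv2D + 3 MaxPooling2D blocks, as in the question;
    # filter counts and input shape are assumed values
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(128, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        # single sigmoid unit for the binary "me vs. not-me" case
        layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```

Note that a training loss of ~5e-07 with perfect training accuracy, as in the log above, is a strong sign the model has memorized the training set rather than learned a generalizable notion of identity.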


1 Answer


Fully connected layers alone are rarely enough for computer vision tasks these days.

You're attempting face recognition, which is a complicated task and will require state-of-the-art ConvNets to work properly.

  1. If you have many samples per subject, a simple CNN would be a good start. The downside is that this isn't practical in real-life situations, where you rarely have many photos of each person.

  2. You can design a siamese network with a triplet loss, where the model learns a face embedding for each subject. This is a better approach because it's a few-shot learning method and requires only a few examples per subject.
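The triplet loss mentioned in option 2 can be sketched as follows. It takes embeddings of an anchor image, a positive (same person), and a negative (different person), and pushes the positive pair closer together than the negative pair by at least a margin. The margin value here is an assumed hyperparameter:

```python
import tensorflow as tf

def triplet_loss(anchor, positive, negative, margin=0.2):
    # Squared L2 distances between embedding vectors
    pos_dist = tf.reduce_sum(tf.square(anchor - positive), axis=-1)
    neg_dist = tf.reduce_sum(tf.square(anchor - negative), axis=-1)
    # Loss is zero once the positive is closer than the
    # negative by at least `margin`; otherwise it penalizes
    return tf.maximum(pos_dist - neg_dist + margin, 0.0)
```

During training, triplets are typically mined from the dataset so that each batch contains hard examples (negatives that look similar to the anchor), which is what makes this approach effective with few samples per subject.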

SOTA face recognition papers with code:

https://paperswithcode.com/task/face-recognition
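Once an embedding model is trained, recognizing a face reduces to comparing embedding vectors with a distance threshold rather than running a classifier. A minimal sketch, where the threshold value is an assumption that would need tuning on a validation set:

```python
import numpy as np

def is_same_person(emb_a, emb_b, threshold=0.6):
    # Two faces are declared the same identity when their
    # embeddings lie within `threshold` L2 distance of each other
    return bool(np.linalg.norm(emb_a - emb_b) < threshold)
```

This is also why the embedding approach handles the "someone else" case better: an unseen person simply lands far from your embedding, instead of being forced into one of two classes.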