22 votes

I’m developing an Android application for face recognition, using JavaCV, an unofficial wrapper of OpenCV. After importing com.googlecode.javacv.cpp.opencv_contrib.FaceRecognizer, I apply and test the following well-known methods:

  • LBPH using createLBPHFaceRecognizer() method
  • FisherFace using createFisherFaceRecognizer() method
  • EigenFace using createEigenFaceRecognizer() method

Before recognizing the detected face, I correct its rotation and crop the proper zone, taking inspiration from this method.

In general, when I show the camera a face that already exists in the database, the recognition is correct. But not always: sometimes an unknown face (one not present in the database of trained samples) is recognized with a high confidence value, and when the database contains two or more faces with similar features (beard, mustache, glasses, ...), recognition can badly confuse them.

To predict the result using the test face image, I apply the following code:

public String predict(Mat m) {

        int n[] = new int[1];
        double p[] = new double[1];
        IplImage ipl = MatToIplImage(m, WIDTH, HEIGHT);

        faceRecognizer.predict(ipl, n, p);

        if (n[0] != -1) {
            mProb = (int) p[0];
            return labelsFile.get(n[0]);
        } else {
            mProb = -1;
            return "Unknown";
        }
    }

I can’t find a usable threshold on the value p, because:

  • A small value (p < 50) can correspond to a correct result.
  • A high value (p > 70) can correspond to a false result.
  • A middle value can go either way.

Also, I don’t understand why the predict() function sometimes gives a value greater than 100 when using LBPH, and very large values (> 2000) with Fisher and Eigen. Can someone help explain these strange values? Is there any suggestion to improve the robustness of recognition, especially when two different faces look similar?
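One practical workaround for the threshold problem above is to keep a separate cut-off per method, since the three recognizers return values on very different scales. A minimal pure-Java sketch (the class name PredictionGate and all the cut-off numbers are hypothetical placeholders that would have to be tuned on a held-out set of known and unknown faces):

```java
// Sketch: one rejection threshold per recognizer, since LBPH, Fisher and
// Eigen values live on very different scales. The cut-offs below are
// placeholders only; tune them on your own data.
public class PredictionGate {
    public static final int LBPH = 0, FISHER = 1, EIGEN = 2;

    // Hypothetical per-method cut-offs; a larger value means a worse match.
    private static final double[] MAX_DISTANCE = { 80.0, 500.0, 4000.0 };

    /** Accept the predicted label only when the value is small enough. */
    public static boolean accept(int recognizer, double distance) {
        return distance >= 0 && distance < MAX_DISTANCE[recognizer];
    }
}
```

In predict(), the return of labelsFile.get(n[0]) would then be guarded by accept(currentRecognizer, p[0]) instead of a single shared threshold.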

The following is the entire class using FaceRecognizer:

package org.opencv.javacv.facerecognition;

import static  com.googlecode.javacv.cpp.opencv_highgui.*;
import static  com.googlecode.javacv.cpp.opencv_core.*;

import static  com.googlecode.javacv.cpp.opencv_imgproc.*;
import static com.googlecode.javacv.cpp.opencv_contrib.*;

import java.io.File;
import java.io.FileOutputStream;
import java.io.FilenameFilter;
import java.util.ArrayList;

import org.opencv.android.Utils;
import org.opencv.core.Mat;

import com.googlecode.javacv.cpp.opencv_imgproc;
import com.googlecode.javacv.cpp.opencv_contrib.FaceRecognizer;
import com.googlecode.javacv.cpp.opencv_core.IplImage;
import com.googlecode.javacv.cpp.opencv_core.MatVector;

import android.graphics.Bitmap;
import android.os.Environment;
import android.util.Log;
import android.widget.Toast;

public  class PersonRecognizer {

    public final static int MAXIMG = 100;
    FaceRecognizer faceRecognizer;
    String mPath;
    int count=0;
    labels labelsFile;

    static final int WIDTH = 128;
    static final int HEIGHT = 128;
    private int mProb = 999;


    PersonRecognizer(String path) {
        faceRecognizer = com.googlecode.javacv.cpp.opencv_contrib.createLBPHFaceRecognizer(2, 8, 8, 8, 200);
        // path = Environment.getExternalStorageDirectory() + "/facerecog/faces/";
        mPath = path;
        labelsFile = new labels(mPath);
    }

    void changeRecognizer(int nRec)
    {
        switch(nRec) {
        case 0: faceRecognizer = com.googlecode.javacv.cpp.opencv_contrib.createLBPHFaceRecognizer(1,8,8,8,100);
                break;
        case 1: faceRecognizer = com.googlecode.javacv.cpp.opencv_contrib.createFisherFaceRecognizer();
                break;
        case 2: faceRecognizer = com.googlecode.javacv.cpp.opencv_contrib.createEigenFaceRecognizer();
                break;
        }
        train();

    }

    void add(Mat m, String description) {
        Bitmap bmp= Bitmap.createBitmap(m.width(), m.height(), Bitmap.Config.ARGB_8888);

        Utils.matToBitmap(m,bmp);
        bmp= Bitmap.createScaledBitmap(bmp, WIDTH, HEIGHT, false);

        FileOutputStream f;
        try {
            f = new FileOutputStream(mPath+description+"-"+count+".jpg",true);
            count++;
            bmp.compress(Bitmap.CompressFormat.JPEG, 100, f);
            f.close();

        } catch (Exception e) {
            Log.e("error",e.getCause()+" "+e.getMessage());
            e.printStackTrace();

        }
    }

    public boolean train() {

        File root = new File(mPath);
        Log.i("mPath",mPath);
        // the filter accepts .jpg files (the original name pngFilter was misleading)
        FilenameFilter jpgFilter = new FilenameFilter() {
            public boolean accept(File dir, String name) {
                return name.toLowerCase().endsWith(".jpg");
            }
        };

        File[] imageFiles = root.listFiles(jpgFilter);

        MatVector images = new MatVector(imageFiles.length);

        int[] labels = new int[imageFiles.length];

        int counter = 0;
        int label;

        IplImage img=null;
        IplImage grayImg;

        int i1=mPath.length();


        for (File image : imageFiles) {
            String p = image.getAbsolutePath();
            img = cvLoadImage(p);

            if (img == null)
                Log.e("Error", "cvLoadImage failed for " + p);
            Log.i("image", p);

            int i2=p.lastIndexOf("-");
            int i3=p.lastIndexOf(".");
            int icount=Integer.parseInt(p.substring(i2+1,i3)); 
            if (count<icount) count++;

            String description=p.substring(i1,i2);

            if (labelsFile.get(description)<0)
                labelsFile.add(description, labelsFile.max()+1);

            label = labelsFile.get(description);

            grayImg = IplImage.create(img.width(), img.height(), IPL_DEPTH_8U, 1);

            cvCvtColor(img, grayImg, CV_BGR2GRAY);

            images.put(counter, grayImg);

            labels[counter] = label;

            counter++;
        }
        if (counter > 0 && labelsFile.max() > 1)
            faceRecognizer.train(images, labels);
        labelsFile.Save();
        return true;
    }

    public boolean canPredict() {
        return labelsFile.max() > 1;
    }

    public String predict(Mat m) {
        if (!canPredict())
            return "";
        int n[] = new int[1];
        double p[] = new double[1];
        IplImage ipl = MatToIplImage(m, WIDTH, HEIGHT);

        faceRecognizer.predict(ipl, n, p);

        if (n[0] != -1) {
            mProb = (int) p[0];
            return labelsFile.get(n[0]);
        } else {
            mProb = -1;
            return "Unknown";
        }
    }




    IplImage MatToIplImage(Mat m, int width, int height) {
        Bitmap bmp = Bitmap.createBitmap(m.width(), m.height(), Bitmap.Config.ARGB_8888);
        Utils.matToBitmap(m, bmp);
        return BitmapToIplImage(bmp, width, height);
    }

    IplImage BitmapToIplImage(Bitmap bmp, int width, int height) {

        if ((width != -1) || (height != -1)) {
            Bitmap bmp2 = Bitmap.createScaledBitmap(bmp, width, height, false);
            bmp = bmp2;
        }

        IplImage image = IplImage.create(bmp.getWidth(), bmp.getHeight(),
                IPL_DEPTH_8U, 4);

        bmp.copyPixelsToBuffer(image.getByteBuffer());

        IplImage grayImg = IplImage.create(image.width(), image.height(),
                IPL_DEPTH_8U, 1);

        // the ARGB_8888 bitmap buffer holds 4 channels in RGBA order,
        // so the conversion must be RGBA2GRAY rather than BGR2GRAY
        cvCvtColor(image, grayImg, opencv_imgproc.CV_RGBA2GRAY);

        return grayImg;
    }



    protected void SaveBmp(Bitmap bmp, String path) {
        FileOutputStream file;
        try {
            file = new FileOutputStream(path, true);
            bmp.compress(Bitmap.CompressFormat.JPEG, 100, file);
            file.close();
        } catch (Exception e) {
            Log.e("SaveBmp", e.getMessage() + " " + e.getCause());
            e.printStackTrace();
        }
    }


    public void load() {
        train();
    }

    public int getProb() {
        return mProb;
    }


}
  • Yes, you'll need 3 different threshold values, one for each method, since the feature spaces all differ. Also, your 'p' value in the prediction is not a probability but the distance from your test image to the closest match found in the DB (so, roughly the inverse of a probability), so it is not at all in the [0..100] range. – berak
  • @berak We pass three parameters to predict: IplImage, int[], double[]. OK, we pass the IplImage and we get the distance in double[], but what is the int[]? What does it represent? I don't really understand it; I am getting different values in it like 1, 4, 8. – umerk44
  • That's the class label, the id of your person. Did you write that code? – berak
  • dervish, you will have to run tests on your data. Honestly, getting the code to run is the easy part. Optimizing the outcome is where the real work starts. – berak
  • I agree with berak, you need a proper amount of different images of the same person, so that the model can be trained with them. In the end the prediction is better. – Spindizzy

1 Answer

2 votes

I think you need to implement something to be more robust to illumination changes; see: Illumination normalization in OpenCV.
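A simple starting point for illumination normalization is histogram equalization; in the app itself one could call Imgproc.equalizeHist (OpenCV Java API) or cvEqualizeHist (JavaCV/IplImage API) on the grayscale face right before train() and predict(). As a self-contained sketch of what that operation does to raw 8-bit pixels (the class name Equalize is hypothetical):

```java
// Sketch of histogram equalization on an 8-bit grayscale image stored as an
// int[] with values in 0..255 -- the same operation OpenCV's equalizeHist
// performs, written out in plain Java for illustration.
public class Equalize {
    static int[] equalizeHist(int[] pixels) {
        int[] hist = new int[256];
        for (int p : pixels) hist[p]++;

        // Cumulative distribution of the gray levels.
        int[] cdf = new int[256];
        int running = 0;
        for (int i = 0; i < 256; i++) { running += hist[i]; cdf[i] = running; }

        // Smallest non-zero CDF value, used to stretch the output to 0..255.
        int cdfMin = 0;
        for (int i = 0; i < 256; i++) if (cdf[i] > 0) { cdfMin = cdf[i]; break; }

        int n = pixels.length;
        if (n == cdfMin) return pixels.clone(); // flat image: nothing to spread

        int[] out = new int[n];
        for (int i = 0; i < n; i++)
            out[i] = Math.round(255f * (cdf[pixels[i]] - cdfMin) / (n - cdfMin));
        return out;
    }
}
```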

Then, in order to manage similarity between images, maybe you can use something like Principal Component Analysis.
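A minimal sketch of the PCA idea in plain Java, using power iteration to find the dominant component (the class name PcaSketch and the toy data are hypothetical; a real implementation would run OpenCV's PCA on the pixel vectors of the training images and compare faces in the reduced space):

```java
import java.util.Arrays;

// Sketch: find the principal axis of a set of feature vectors by power
// iteration, then represent each face by its projection onto that axis.
public class PcaSketch {

    // Dominant eigenvector of the covariance of `data`, via power iteration.
    static double[] principalAxis(double[][] data, int iters) {
        int d = data[0].length;
        double[] mean = new double[d];
        for (double[] row : data)
            for (int j = 0; j < d; j++) mean[j] += row[j] / data.length;

        double[] v = new double[d];
        Arrays.fill(v, 1.0);
        for (int it = 0; it < iters; it++) {
            double[] next = new double[d];
            for (double[] row : data) {
                double dot = 0;
                for (int j = 0; j < d; j++) dot += (row[j] - mean[j]) * v[j];
                for (int j = 0; j < d; j++) next[j] += dot * (row[j] - mean[j]);
            }
            double norm = 0;
            for (double x : next) norm += x * x;
            norm = Math.sqrt(norm);
            for (int j = 0; j < d; j++) v[j] = next[j] / norm;
        }
        return v;
    }

    // Dot product of a sample with the principal axis (subtract the training
    // mean first for a true PCA code; omitted here to keep the sketch short).
    static double project(double[] sample, double[] axis) {
        double dot = 0;
        for (int j = 0; j < sample.length; j++) dot += sample[j] * axis[j];
        return dot;
    }
}
```

Two faces whose projections lie close together in this reduced space are similar along the directions of greatest variation, which is exactly what the Eigenfaces method exploits internally.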