0
votes

I'm quite new to this; I'm trying to classify textures as defective or non-defective. I've used a Gabor filter bank with Matlab, which outputs a column vector of the Gabor features of an image. I have a data set of non-defective images and defective images.

My question is, what can I now do with this (or these) feature vectors to classify the texture? I've read about many types of classification, but couldn't find any similar types of implementation to help me get an idea of what I'm doing. Many thanks.

2
This question isn't in an appropriate format for Stack Overflow (i.e. this is not a code problem). You should consider asking at stats.stackexchange.com or dsp.stackexchange.com. – Dan
@Dan Thanks, I'll ask on dsp. – Mike Miller

2 Answers

1
votes

You can use either a Support Vector Machine (SVM) or a neural network. SVMs are widely used and give great results. Here is an example of how you can use one in Matlab.

  1. First of all, you need to divide your data into a 'Training' set and a 'Testing' set.
  2. The 'Training' set is the one whose labels you already know, i.e. in your case you know which textures are defective and which are non-defective.
  3. The 'Testing' set is the one on which you want to test your classification method.
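The split described above can be sketched in Python with scikit-learn (a hedged equivalent of the Matlab workflow; the feature matrix here is made-up random data standing in for real Gabor features):

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Hypothetical feature matrix: 50 images, each a 40-dimensional Gabor feature vector
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 40))
# Labels: 1 = non-defective, -1 = defective
y = np.array([1] * 25 + [-1] * 25)

# Hold out 20% of the images for testing, keeping the class balance (stratify)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y)
print(X_train.shape, X_test.shape)  # (40, 40) (10, 40)
```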

Let's say the training matrix contains the Gabor features of all training-set images, where each row is the feature vector of one image (the transposed column vector). Assume the first 25 rows are non-defective and the next 25 are defective. Now you need to create a group vector that tells the SVM which images are defective and which are not. So,

group = [ones(25,1); -1*ones(25,1)]; % non-defective = 1, defective = -1
SVMStruct = svmtrain(training, group);

SVMStruct is the trained SVM model (it stores the support vectors), which you will use for classifying the 'Testing' data. Let's say the testing matrix contains Gabor features laid out as before.

results = svmclassify(SVMStruct, testing);

results is the final decision vector, containing 1 or -1 for each test image depending on the decision made.
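The same train/classify flow can be sketched in Python with scikit-learn as a hedged equivalent of svmtrain/svmclassify (the feature data here is synthetic, with the two classes deliberately well separated):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Hypothetical Gabor features: rows are images, columns are filter responses.
# First 25 rows non-defective (label 1), next 25 defective (label -1);
# the defective samples are shifted so the classes are clearly separable.
training = np.vstack([rng.normal(0, 1, (25, 40)),
                      rng.normal(4, 1, (25, 40))])
group = np.array([1] * 25 + [-1] * 25)

clf = SVC(kernel='linear')      # plays the role of svmtrain
clf.fit(training, group)

testing = np.vstack([rng.normal(0, 1, (5, 40)),
                     rng.normal(4, 1, (5, 40))])
results = clf.predict(testing)  # plays the role of svmclassify
print(results)
```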

1
votes

There are many ways to go if you have extracted your feature vectors.

  • For example you can use an SVM approach on the samples from your two classes.

  • Simpler approaches include nearest neighbor, nearest centroid, etc.
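A nearest-centroid classifier is simple enough to sketch in a few lines of Python (the 2-D feature vectors below are hypothetical placeholders for real Gabor features):

```python
import numpy as np

def nearest_centroid(train, labels, sample):
    """Assign `sample` the label of the closest class centroid."""
    classes = np.unique(labels)
    centroids = {c: train[labels == c].mean(axis=0) for c in classes}
    return min(classes, key=lambda c: np.linalg.norm(sample - centroids[c]))

# Hypothetical 2-D feature vectors for two classes (1 and -1)
train = np.array([[0.0, 0.0], [0.2, 0.1], [5.0, 5.0], [5.1, 4.9]])
labels = np.array([1, 1, -1, -1])

print(nearest_centroid(train, labels, np.array([0.1, 0.3])))  # 1
print(nearest_centroid(train, labels, np.array([4.8, 5.2])))  # -1
```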

Edit:

I thought this would be a comment but it's getting too big to fit.

As regards the separability of your samples:

  • One way to determine linear separability is to fit a linear SVM as a boundary (and if you are concerned about time efficiency, you are probably stuck with a linear kernel anyway). A linear SVM model does not overtrain easily and can give a clue about separability.
  • Another option is PCA, which projects your samples to fewer dimensions; the reduced-dimensional samples can then easily be plotted and examined visually. This approach has the advantage of visual inspection, but how well it represents the separability of your samples depends on the PCA step. The separability may lie in a non-principal component (i.e. dimension of your samples), in which case PCA simply fails.
  • As a rough approximation, I often plot random dimensions of my samples against each other to get a quick (and possibly inaccurate, of course) look at them. For example, if you have 100-dimensional samples you can plot just the first two dimensions (as if you had 2-D samples) to see whether the two classes overlap to a large degree. If they do, you can check other dimensions; if they don't, you know the classes are separable at least in some dimensions.
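The PCA projection described above can be sketched in Python with scikit-learn (synthetic high-dimensional samples stand in for real feature vectors; the 2-D result is what you would scatter-plot, colored by class):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
# Hypothetical 50-dimensional samples from two classes, 30 per class
X = np.vstack([rng.normal(0, 1, (30, 50)),
               rng.normal(3, 1, (30, 50))])

# Project down to the 2 leading principal components for visual inspection
X2 = PCA(n_components=2).fit_transform(X)
print(X2.shape)  # (60, 2)
# X2 can now be scatter-plotted, colored by class, to eyeball separability
```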