7 votes

I have a three-class problem with unbalanced data (90%, 5%, 5%), and I want to train a classifier using LIBSVM.

The problem is that grid.py tunes the parameters gamma and C (cost) for optimal overall accuracy, which means that 100% of the examples get classified as class 1, which is of course not what I want.

I've tried modifying the weight parameters (-wi) without much success.

So what I want is to modify grid.py so that it optimizes C and gamma for per-class precision and recall rather than for overall accuracy. Is there any way to do that? Or are there other scripts out there that can do something like this?


4 Answers

8 votes

The -w parameter is what you need for unbalanced data. What have you tried so far?

If your classes are:

  • class 0: 90%
  • class 1: 5%
  • class 2: 5%

You should pass the following parameters to svm-train:

-w0 5 -w1 90 -w2 90
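
The weights above are inversely proportional to class frequency (5:90 is the same 1:18 ratio as 1/0.9 : 1/0.05). A minimal sketch of that heuristic, assuming you know the class counts (the scaling is cosmetic, only the ratios matter to LIBSVM):

```python
# Sketch: derive LIBSVM -wi weights inversely proportional to class
# frequency, using the 90/5/5 split from the question.
def inverse_frequency_weights(counts):
    """Return per-class weights proportional to 1 / class frequency,
    rescaled so the smallest weight is 1 (purely cosmetic)."""
    total = sum(counts.values())
    raw = {label: total / n for label, n in counts.items()}
    smallest = min(raw.values())
    return {label: w / smallest for label, w in raw.items()}

weights = inverse_frequency_weights({0: 90, 1: 5, 2: 5})
flags = " ".join(f"-w{label} {w:g}" for label, w in sorted(weights.items()))
print(flags)  # -w0 1 -w1 18 -w2 18  (same ratio as -w0 5 -w1 90 -w2 90)
```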

4 votes

If you want to try an alternative, one of the programs in the svmlight family, http://www.cs.cornell.edu/people/tj/svm_light/svm_rank.html, directly optimizes the area under the ROC curve.

Optimizing the AUC may give better results than re-weighting training examples.
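
To see why AUC is a better target than accuracy here: it is the fraction of (positive, negative) pairs the scorer ranks correctly, so a degenerate classifier gains nothing from the 90% majority class. A small illustrative sketch (not svm_rank's actual implementation, just the quantity it targets):

```python
# Sketch: AUC computed as the fraction of (positive, negative) pairs
# ranked correctly; ties count half. Rank-based SVMs optimize this
# quantity directly instead of per-example accuracy.
def pairwise_auc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# A classifier that scores everything identically gets AUC 0.5, even
# though on 90/10 data it could still reach 90% accuracy.
print(pairwise_auc([1, 1, 1, 1], [0, 0, 0, 1]))  # 0.5
```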

0 votes

You can optimize precision, recall, F-score, or AUC using grid.py. The tweak is that you have to change the cross-validation evaluation measure used by svm-train in LIBSVM; follow the procedure given on the LIBSVM website.

0 votes

If you have unbalanced data, you probably shouldn't be optimizing accuracy. Instead, optimize the F-score (or recall, if that's more important to you). You can change the evaluation function as described here.

I think you should still optimize gamma and C, but across different class-weight configurations. I modified the get_cmd function in grid.py to pass different class weights (-wi weight) for that purpose. In my experience, though, class weighting doesn't always help.
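
The idea of extending the search space with weight configurations can be sketched as follows (ranges, weight candidates, and the training-file name are all placeholders; grid.py's real get_cmd builds similar command strings):

```python
# Sketch: cross the usual (C, gamma) grid with candidate class-weight
# settings so each svm-train invocation also carries -wi flags, as
# get_cmd would after the modification described above.
import itertools

c_range = [2 ** k for k in (-1, 1, 3)]          # placeholder C values
gamma_range = [2 ** k for k in (-5, -3)]        # placeholder gamma values
# Candidate weights; class 0 (the majority class) stays at 1.
weight_range = [(1, 1, 1), (1, 9, 9), (1, 18, 18)]

def make_cmd(c, g, weights):
    wflags = " ".join(f"-w{i} {w}" for i, w in enumerate(weights))
    return f"svm-train -c {c} -g {g} {wflags} -v 5 train.scale"

commands = [make_cmd(c, g, w) for c, g, w in
            itertools.product(c_range, gamma_range, weight_range)]
print(len(commands))   # 3 * 2 * 3 = 18 candidate configurations
print(commands[0])
```

Each command's reported cross-validation measure would then be compared as usual, so the weight setting is selected jointly with C and gamma rather than fixed up front.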