
I have a dataset of around 3000 positive and 1500 negative samples, with around 1000 features. All features are real numbers. I want to train a random forest classifier with the "randomForest" R package.

The problem is that I want a classifier with 100% precision (TP / (TP + FP)) on the training dataset. However, I can hardly achieve this by adjusting the $votes threshold of the trained random forest.

I wonder if anybody has experience with, or any ideas about, this kind of problem? If you have any clue, please give me a hint. Thanks in advance!

I am open to any other machine learning method, if it promises 100% precision.

Comment (Steve Tjoa): Recall = TP / (TP + FN). Precision = TP / (TP + FP). en.wikipedia.org/wiki/…

1 Answer


If you haven't been able to do it by modifying your votes fraction threshold, then you'll have to somehow modify the trees themselves.
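
For reference, this is roughly what I mean by the votes-fraction route, in case you haven't fully exhausted it. A minimal sketch, assuming a feature matrix `x` and a factor label `y` with levels `"neg"` and `"pos"` (names and shapes are hypothetical stand-ins for your data):

```r
library(randomForest)

## Toy stand-in data (assumed shapes; replace with your real x and y).
set.seed(42)
x <- matrix(rnorm(4500 * 10), ncol = 10)
y <- factor(c(rep("pos", 3000), rep("neg", 1500)))
x[y == "pos", 1] <- x[y == "pos", 1] + 1   # make "pos" somewhat separable

rf <- randomForest(x, y, ntree = 500)

## rf$votes holds out-of-bag vote fractions, one column per class.
## Demand near-unanimity before calling "pos"; everything else is "neg".
thr  <- 0.95
pred <- ifelse(rf$votes[, "pos"] >= thr, "pos", "neg")

## Precision = TP / (TP + FP) at this threshold.
tp <- sum(pred == "pos" & y == "pos")
fp <- sum(pred == "pos" & y == "neg")
tp / (tp + fp)

## The same idea can be baked into the forest via the cutoff argument
## (one entry per class, in the order of levels(y), summing to <= 1).
rf_strict <- randomForest(x, y, ntree = 500,
                          cutoff = c(neg = 0.05, pos = 0.95))
```

Note that the fractions in `rf$votes` are out-of-bag, so "training precision" measured this way is really a (mildly pessimistic) OOB estimate rather than a resubstitution figure.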

One way to do this is to actually train weighted trees. Unfortunately, I don't have a pointer for this right now, but it is similar to what's done in the Viola/Jones paper here (though they do it for boosting).

(On second thought, have you looked at the classwt parameter, documented as "Priors of the classes. Need not add up to one. Ignored for regression.", on this page?)
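
A minimal sketch of that route, using the same hypothetical `x` and `y` as above (the weights are illustrative, not tuned):

```r
## Penalize the negative class heavily so false positives become expensive.
## classwt entries follow the order of levels(y), here c("neg", "pos").
rf_wt <- randomForest(x, y, ntree = 500, classwt = c(neg = 10, pos = 1))
rf_wt$confusion   # inspect the FP count in the OOB confusion matrix
```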

One quick point: false positive rate doesn't equal FP / (FP + TP). It's really FP / (FP + TN), or equivalently FP / (actual negatives), because you only want to measure how many false positives are raised as a fraction of the actual negatives.
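
A tiny worked example of the distinction, with made-up labels:

```r
truth <- factor(c("pos", "pos", "neg", "neg", "neg"))
pred  <- factor(c("pos", "neg", "pos", "neg", "neg"), levels = levels(truth))

tab <- table(pred, truth)   # rows = predicted, columns = actual

precision <- tab["pos", "pos"] / sum(tab["pos", ])  # TP / (TP + FP) = 1/2
fpr       <- tab["pos", "neg"] / sum(tab[, "neg"])  # FP / (FP + TN) = 1/3
```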