I'm using LibSVM to solve a binary classification problem. My dataset has ~50K features and 18 samples. I'm using leave-one-out validation (training on 17 samples and testing on the remaining one). I normalize the data with:
svm-scale -s scaling_parameters Train$i > TrainScaled$i
svm-scale -r scaling_parameters Test$i > TestScaled$i
Training and prediction are done as:
svm-train -s 0 -c 5 -t 2 -g 0.5 -e 0.1 TrainScaled$i model
svm-predict TestScaled$i model predicted.out
The model always predicts the same class (the majority one). So I obtain 75% accuracy, but the model is useless because the prediction never changes. I have tried different kernel types and parameters, but I still get the same result. What could the cause be? Is the data really so hard to separate with a hyperplane?
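For reference, here is a quick sanity check of what the RBF kernel value looks like at this dimensionality (a sketch with synthetic uniform features in [0, 1], not my real data — my suspicion is that with ~50K features the squared distances are so large that exp(-gamma * ||x - y||^2) underflows to 0 for every pair of distinct points, making the kernel matrix nearly the identity):

```python
import math
import random

random.seed(0)
n_features = 50_000
gamma = 0.5  # the -g value I pass to svm-train

# Two synthetic points with features scaled to [0, 1]
x = [random.random() for _ in range(n_features)]
y = [random.random() for _ in range(n_features)]

# Squared Euclidean distance; for i.i.d. uniform features
# E[(a - b)^2] = 1/6, so this is roughly n_features / 6 ~ 8300
sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))

# RBF kernel value: exp(-0.5 * ~8300) underflows to exactly 0.0
k = math.exp(-gamma * sq_dist)

print(sq_dist)
print(k)
```

If k really is 0 between all distinct samples, the SVM can't discriminate and would fall back to the dominant class, which would match the behavior I'm seeing.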