
I'm using the function 'fitcsvm' to train an SVM with a polynomial kernel on a dataset with 4 classes, using a one-versus-all approach. As a sanity check, I applied the resulting models back to the same dataset I used for training, via the function 'predict'. For each observation I get a prediction from each of the 4 binary SVMs and assign the label of the SVM with the highest posterior probability. However, the training error reported by the models and the error I compute from these predictions on the same data aren't exactly equal. What is the reason behind this?
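For reference, the one-versus-all scheme described above could be sketched roughly like this (variable names `X`, `Y` and the polynomial order are assumptions, not from the question; adapt to your data):

```matlab
% X: N-by-p feature matrix, Y: N-by-1 vector of class labels (4 classes).
classes = unique(Y);
K = numel(classes);
models = cell(K, 1);
for k = 1:K
    % Binary problem: class k versus all the rest.
    yk = (Y == classes(k));
    mdl = fitcsvm(X, yk, 'KernelFunction', 'polynomial', ...
                  'PolynomialOrder', 3);          % order 3 is an assumption
    % fitPosterior maps raw SVM scores to posterior probabilities.
    models{k} = fitPosterior(mdl);
end

% Prediction: choose the class whose SVM reports the highest posterior.
post = zeros(size(X, 1), K);
for k = 1:K
    [~, p] = predict(models{k}, X);
    post(:, k) = p(:, 2);          % column 2 = posterior of the positive class
end
[~, idx] = max(post, [], 2);
predicted = classes(idx);
manualErr = mean(predicted ~= Y);  % error computed by hand on the training set
```

This is only a sketch of the setup being asked about, not a claimed answer to the question.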

Could you provide some code and data? – Cleb

1 Answer


Do the 4 classes have the same number of instances? If not, it may be that the loss fitcsvm reports is weighted by the class priors (by default these are the empirical class frequencies), which would make it differ from a raw misclassification count computed by hand.
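One quick way to check both hypotheses: inspect the class counts, and compare the model's own resubstitution loss against a manually computed misclassification rate on the training data. Here `mdl` is assumed to be one of the binary SVMs returned by fitcsvm, and `X`, `yk` its training inputs and binary labels (all names assumed):

```matlab
% Class balance of the full 4-class label vector Y.
tabulate(Y)

% Built-in resubstitution loss (default: weighted classification error)
% versus a plain, unweighted error rate on the same training data.
lossBuiltIn = resubLoss(mdl);
predTrain   = predict(mdl, X);
lossManual  = mean(predTrain ~= yk);
fprintf('built-in: %.4f  manual: %.4f\n', lossBuiltIn, lossManual);
```

If the two numbers disagree on an imbalanced dataset but agree on a balanced one, the prior weighting explanation above is likely the cause.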

This also sounds like a good question for MathWorks tech support.