Does Matlab's crossval method in the context of (binary) classification respect class frequencies?
Most classification models in Matlab offer the possibility to compute a cross-validated model. For instance, when training a linear SVM with `svm = fitcsvm(X, y);`, one can compute a cross-validated model by calling `cv = crossval(svm);` (see the documentation of the `crossval` method for objects of type `ClassificationSVM`). This cross-validated model can then be used to estimate the generalization error of the training process.
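For concreteness, here is a minimal sketch of the workflow I mean (with made-up placeholder data `X` and `y`):

```matlab
% Minimal sketch of the workflow described above (X and y are placeholders).
rng(1);                              % for reproducibility
X = randn(120, 4);                   % 120 observations, 4 features
y = [zeros(100, 1); ones(20, 1)];    % imbalanced labels, roughly 5:1

svm = fitcsvm(X, y);                 % train a linear SVM
cv  = crossval(svm);                 % 10-fold cross-validated model
err = kfoldLoss(cv);                 % estimate of the generalization error
```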
Now my question: when partitioning the training data, does `crossval` take the class frequencies into account? For instance, we may have 5 times more observations $X_0$ of class 0 than observations $X_1$ of class 1. Do the partitioned versions of the data then roughly preserve that ratio (5:1 in my example)? Or is this ignored completely, on the grounds that if the dataset is large enough, the partitions will very likely have about the same relative class sizes anyway?
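To make the question concrete, this is roughly how I would inspect the per-fold class ratios, or force a stratified split explicitly. I'm assuming here that the cross-validated model exposes its `cvpartition` object via a `Partition` property, and that `cvpartition` stratifies when given the label vector as its grouping argument:

```matlab
% Hypothetical check of the per-fold class ratios (assuming the cross-validated
% model exposes its cvpartition object via the Partition property).
c = cv.Partition;
for i = 1:c.NumTestSets
    testIdx = test(c, i);                         % logical index of fold i
    ratio   = sum(y(testIdx) == 0) / sum(y(testIdx) == 1);
    fprintf('Fold %d: class ratio 0:1 = %.2f\n', i, ratio);
end

% Alternatively, pass an explicitly constructed partition, which (as far as I
% understand) is stratified by the grouping variable y:
cStrat  = cvpartition(y, 'KFold', 10);
cvStrat = crossval(svm, 'CVPartition', cStrat);
```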
Before switching to Matlab's `crossval` feature, I used my own partitioning algorithm, which respected the relative class sizes when splitting the data. In essence, if the class frequencies were 5/6 and 1/6, the algorithm would repeatedly draw 5 items of class 0 at random and then 1 item of class 1, until the partitions were full (a simplified sketch follows below).
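This is not my exact code, but a minimal sketch of the kind of stratified k-fold split I mean, where each class is shuffled and dealt out to the folds separately so that the class ratio is preserved in every fold:

```matlab
% Sketch of a stratified k-fold assignment that preserves the class ratio.
% Returns a vector folds(i) in 1..k giving the fold of observation i.
function folds = stratifiedKFold(y, k)
    n = numel(y);
    folds = zeros(n, 1);
    for c = unique(y)'                              % handle each class separately
        idx = find(y == c);
        idx = idx(randperm(numel(idx)));            % shuffle within the class
        folds(idx) = mod(0:numel(idx)-1, k)' + 1;   % deal out fold labels 1..k
    end
end
```

Usage would then be `folds = stratifiedKFold(y, 10);`, with fold `i` given by `folds == i`.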
If the relative class sizes were ignored, I'd say this could be problematic for very imbalanced and/or small datasets. Or am I mistaken here? I'd be very glad to read your thoughts on this.