Python's scikit-learn SGDClassifier() supports l1, l2, and elastic net penalties, so it seems important to find the optimal value of the regularization parameter.
I was advised to use SGDClassifier() with GridSearchCV() for this, but SGDClassifier only exposes the regularization parameter alpha. If I use loss functions corresponding to SVM or logistic regression, I would expect a C parameter (as in SVC or LogisticRegression) instead of alpha. Is there a way to tune the regularization strength in SGDClassifier() when using the logistic regression or SVM losses?
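For reference, this is roughly what I am trying now, a minimal sketch that grid-searches alpha with the hinge loss (i.e. a linear SVM); the alpha grid values and the toy dataset are just placeholders:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import GridSearchCV

# Toy dataset just for illustration
X, y = make_classification(n_samples=200, n_features=20, random_state=0)

# Grid over the regularization parameter alpha (placeholder values)
param_grid = {"alpha": [1e-4, 1e-3, 1e-2, 1e-1]}

search = GridSearchCV(
    SGDClassifier(loss="hinge", penalty="l2", random_state=0),  # hinge = linear SVM
    param_grid,
    cv=5,
)
search.fit(X, y)
print(search.best_params_)  # the alpha chosen by cross-validation
```

This tunes alpha directly, but I am unsure how the chosen alpha relates to the C I would pass to SVC or LogisticRegression.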
In addition, I have one more question about the iteration parameter n_iter: I don't understand what this parameter means. Does it work like bagging when used together with the shuffle option? So if I use an l1 penalty and a large value of n_iter, would it behave like RandomizedLasso()?