From scikit-learn's documentation, the default penalty is "l2", and C (inverse of regularization strength) is "1". If I keep this setting penalty='l2' and C=1.0, does it mean the training algorithm is an unregularized logistic regression? In contrast, when C is anything other than 1.0, then it's a regularized logistic regression classifier?
2 Answers
4 votes
No, it's not like that.
Let's have a look at the definitions in sklearn's user guide.
We see: `C` is multiplied with the loss, while the left term (the regularization) is untouched.
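For the `penalty='l2'` case, the objective from the user guide looks roughly like this (reproduced from memory; check the guide for the exact form):

```latex
\min_{w, c} \; \frac{1}{2} w^{T} w \;+\; C \sum_{i=1}^{n} \log\!\left(\exp\!\bigl(-y_i (x_i^{T} w + c)\bigr) + 1\right)
```

The first term is the l2 penalty; only the second (log-loss) term is scaled by `C`.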
This means:

- Without modifying the code you can never switch off the regularization completely
- But: you can approximately switch off regularization by setting `C` to a huge number!
- As the optimization tries to minimize the sum of the regularization penalty and the loss, increasing `C` decreases the relative weight of the regularization penalty
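A small sketch of this effect (toy data from `make_classification`, chosen here just for illustration): with a huge `C` the l2 penalty barely matters, so the fitted weights are typically larger than with the default `C=1.0`, because the penalty no longer shrinks them toward zero.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Toy binary classification problem
X, y = make_classification(n_samples=200, n_features=5, random_state=0)

# Default: C=1.0 -> l2 penalty at full strength relative to the loss
clf_default = LogisticRegression(C=1.0, max_iter=1000).fit(X, y)

# Huge C -> the loss term dominates, the l2 penalty is nearly switched off
clf_huge_c = LogisticRegression(C=1e8, max_iter=1000).fit(X, y)

print(np.linalg.norm(clf_default.coef_))
print(np.linalg.norm(clf_huge_c.coef_))
# The weakly regularized fit typically has a larger weight norm,
# since almost nothing pulls the coefficients toward zero.
```

Note that neither setting is truly "unregularized"; `C=1e8` only makes the penalty negligible in practice.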