
So I've done a Coursera ML course, and now I'm looking at scikit-learn's logistic regression, which is a little different. I was using the sigmoid function, and the cost function was split into two separate cases, for y = 0 and y = 1. But scikit-learn has a single function (I've found out this is the generalised logistic loss), which really doesn't make sense to me.

I'm sorry, I don't have enough reputation to post the image; it's here: http://scikit-learn.org/stable/_images/math/760c999ccbc78b72d2a91186ba55ce37f0d2cf37.png. Written out, it is $\min_{w,c} \frac{1}{2} w^T w + C \sum_{i=1}^{n} \log\left(\exp\left(-y_i (X_i^T w + c)\right) + 1\right)$.

So my main concern with this function is the case y = 0: then the cost function always contributes $\log(e^0 + 1) = \log 2$, no matter what $X$ or $w$ are. Can someone explain that to me?
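A minimal numpy sketch of what I mean (the function name is mine, it just evaluates the per-sample term from the scikit-learn formula):

```python
import numpy as np

def sklearn_per_sample_cost(y, z):
    # Per-sample term from the scikit-learn formula,
    # where z stands for X_i^T w + c.
    return np.log(np.exp(-y * z) + 1)

# If y = 0, the term is log(e^0 + 1) = log(2) for ANY z:
for z in [-5.0, 0.0, 5.0]:
    print(sklearn_per_sample_cost(0, z))  # always log(2), about 0.6931
```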

Show your original definition of logistic regression, the one you learned. — sascha
I think this answer may help you: the answer! — jinyu0310

1 Answer


If I'm correct, the $y_i$ in this formula can only take the values -1 or 1 (as derived in the link given by @jinyu0310). Normally you would use this cost function (with regularisation):

(Sorry, the equation was originally inserted as an image, from goo.gl/M53u44, since LaTeX can't be used here; written out:)

$J(w) = -\frac{1}{m}\sum_{i=1}^{m}\left[y_i \log(h_w(x_i)) + (1 - y_i)\log(1 - h_w(x_i))\right] + \frac{\lambda}{2m}\sum_{j=1}^{n} w_j^2$

where $h_w(x) = \frac{1}{1 + e^{-w^T x}}$ is the sigmoid.

So you always have two terms, one of which plays a role when $y_i = 0$ and the other when $y_i = 1$. I'm still trying to find a better explanation of the notation scikit-learn uses in this formula, but so far no luck.
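You can check numerically that the two ways of writing the per-sample cost agree once you map the labels $0 \to -1$ and $1 \to +1$ (function names here are mine, for illustration):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def coursera_cost(y01, z):
    # Two-case cross-entropy with labels in {0, 1}, z = w^T x
    h = sigmoid(z)
    return -(y01 * np.log(h) + (1 - y01) * np.log(1 - h))

def sklearn_cost(ypm, z):
    # Single-expression cost from the scikit-learn docs, labels in {-1, +1}
    return np.log(np.exp(-ypm * z) + 1)

z = np.linspace(-4, 4, 9)
for y01 in (0, 1):
    ypm = 2 * y01 - 1          # map 0 -> -1, 1 -> +1
    assert np.allclose(coursera_cost(y01, z), sklearn_cost(ypm, z))
print("both formulations agree")
```

This works because $1 - \sigma(z) = \sigma(-z)$, so both branches of the two-case cost collapse into the single expression $\log(1 + e^{-\tilde{y} z})$ with $\tilde{y} \in \{-1, +1\}$.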

Hope that helps. It's just another way of writing the cost function, with a regularisation factor. Keep in mind also that a constant factor in front of everything plays no role in the optimisation: you want to find the minimum, and an overall factor that multiplies the whole cost doesn't move it.
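A quick toy check of that last point (the cost function here is made up, just anything with a unique minimum):

```python
import numpy as np

# Minimise J(w) and 3*J(w) over a grid; the argmin is the same,
# so an overall constant factor can be dropped from the cost.
w = np.linspace(-3, 3, 601)
J = (w - 1.2) ** 2 + 0.5
assert w[np.argmin(J)] == w[np.argmin(3 * J)]
print(w[np.argmin(J)])  # close to 1.2 either way
```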