
I am studying regression with the Machine Learning in Action book and I came across the following code:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def stocGradAscent0(dataMatrix, classLabels):
    m, n = np.shape(dataMatrix)
    alpha = 0.01                      # learning rate
    weights = np.ones(n)              # initialize weights to all ones
    for i in range(m):                # one pass over the training samples
        h = sigmoid(sum(dataMatrix[i]*weights))           # predicted probability for sample i
        error = classLabels[i] - h                        # difference between label and prediction
        weights = weights + alpha * error * dataMatrix[i] # gradient ascent step
    return weights
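
For context, here is a minimal sketch of how the function might be called; the toy data values and labels below are my own illustration, not from the book (it reuses np from the snippet above):

# hypothetical toy data: 4 samples, 3 features (first column acts as a bias term)
dataMatrix = np.array([[1.0,  0.5,  1.2],
                       [1.0, -0.3,  0.8],
                       [1.0,  1.5, -0.4],
                       [1.0, -1.1, -0.9]])
classLabels = [1, 0, 1, 0]          # binary class labels

weights = stocGradAscent0(dataMatrix, classLabels)
print(weights)                      # learned weight vector, one entry per feature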

You may be able to guess what the code means, but I didn't understand it. I read the book several times and searched related material like the wiki and Google, but I still don't see where the exponential function comes from, or why using it yields the weights with minimum differences. Why do we get proper weights by applying the exponential function to sum(X * weights)? It seems to be something like OLS. Anyway, we then get a result like the one below:

[image: plot of the resulting fit]

Thanks!


1 Answer


It's just the basics of logistic regression. In the for loop it computes the hypothesis and the error:

Z = β₀ + β₁X        (where β₁ and X are vectors, so β₁X is a weighted sum)

hΘ(x) = sigmoid(Z)

i.e. hΘ(x) = 1 / (1 + e^(−(β₀ + β₁X)))
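
That is where the exponential comes from: the sigmoid squashes any real-valued score Z into the interval (0, 1), so the output can be read as a class probability. A minimal sketch, assuming plain numpy (the demo values are mine):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# large negative Z maps near 0, large positive Z maps near 1
for z in (-5.0, -1.0, 0.0, 1.0, 5.0):
    print(z, sigmoid(z))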

Then it updates the weights. Normally it's better to use a high number of iterations for the loop, like 1000 passes over the data, instead of the single pass shown; m by itself is usually small, I guess.
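
For example, here is a minimal sketch of the same update wrapped in an outer iteration loop; the numIter parameter and the function name are my own illustration, not from the book (it reuses np and sigmoid from the question's snippet):

def stocGradAscentMultiPass(dataMatrix, classLabels, numIter=1000):
    m, n = np.shape(dataMatrix)
    alpha = 0.01
    weights = np.ones(n)
    for _ in range(numIter):          # repeat the whole pass many times
        for i in range(m):
            h = sigmoid(sum(dataMatrix[i] * weights))
            error = classLabels[i] - h
            weights = weights + alpha * error * dataMatrix[i]
    return weights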

I want to explain more, but I can't explain it better than this dude here.

Happy learning!!