
I've been trying to implement stochastic gradient descent as part of a recommendation system following these equations:

[missing image: the update equations]
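The image no longer loads; judging from the code below (and the answer's fix), the equations are most likely the standard regularized matrix-factorization SGD updates, with the pairing of the learning rates inferred from the answer:

$$e_{xi} = 2\,\bigl(r_{xi} - q_i^\top p_x\bigr)$$
$$q_i \leftarrow q_i + \mu_1\,\bigl(e_{xi}\,p_x - 2\lambda_1 q_i\bigr)$$
$$p_x \leftarrow p_x + \mu_2\,\bigl(e_{xi}\,q_i - 2\lambda_2 p_x\bigr)$$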

I have:

for step in range(max_iter):
    e = 0
    for x in range(len(R)):
        for i in range(len(R[x])):
            if R[x][i] > 0:
                exi = 2 * (R[x][i] - np.dot(Q[:, i], P[x, :]))
                qi, px = Q[:, i], P[x, :]

                qi += _mu_2 * (exi * px - (2 * _lambda_1 * qi))
                px += _mu_1 * (exi * qi - (2 * _lambda_2 * px))

                Q[:, i], P[x, :] = qi, px

The output isn't what I expect, but I can't put my finger on why. Please help me identify the problem in my code.

I'd much appreciate your support.

Did you ever figure this out? I am looking for a solution too. - nad
Unfortunately I never did, but I reckon I should ask my fellow classmates who scored 100 on this one for their solution. - Thang Do

1 Answer


When you update qi and px, the learning rates are swapped: the code applies _mu_2 to qi and _mu_1 to px, which is the opposite of the equations. Exchange _mu_1 and _mu_2.
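Here is a self-contained sketch of the corrected loop with the learning rates swapped. The variable names follow the question; the toy ratings matrix, factor count, and hyperparameter values are my own assumptions, not taken from the original post:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ratings matrix; zeros mark missing entries (assumed data).
R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [0, 1, 5, 4]], dtype=float)

K = 2                              # number of latent factors (assumed)
P = rng.random((R.shape[0], K))    # user factors
Q = rng.random((K, R.shape[1]))    # item factors
_mu_1, _mu_2 = 0.002, 0.002        # learning rates (assumed values)
_lambda_1, _lambda_2 = 0.02, 0.02  # regularization strengths (assumed)
max_iter = 5000

for step in range(max_iter):
    for x in range(len(R)):
        for i in range(len(R[x])):
            if R[x][i] > 0:
                exi = 2 * (R[x][i] - np.dot(Q[:, i], P[x, :]))
                # Copy so the px update uses the *old* qi: in the original
                # code, qi is a view into Q, so `qi += ...` mutates Q and
                # the px update then sees the freshly updated qi.
                qi, px = Q[:, i].copy(), P[x, :].copy()

                # Learning rates swapped relative to the question:
                # _mu_1 now pairs with qi and _mu_2 with px.
                Q[:, i] = qi + _mu_1 * (exi * px - 2 * _lambda_1 * qi)
                P[x, :] = px + _mu_2 * (exi * qi - 2 * _lambda_2 * px)
```

After training, `P @ Q` should reproduce the observed entries of `R` closely while leaving the zero entries filled in with predicted ratings.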