
I am trying to translate a piece of MATLAB code to Python that uses the gradient function on (500x500) 2D matrices. t and s are 2D matrices with some values; T and S are 2D matrices initialized to zeros (zeros in MATLAB, np.zeros in Python). row and col are integers with the same value, in my case 127.

for i=1:row
    for j=2:col
        T(i,j)=t(i,j-1)+gradient(t(i,j-1));
        S(i,j)=s(i,j-1)+gradient(s(i,j-1));
    end
end

My resultant Python code is:

for i in range(1, row):
    for j in range(2, col):
        T[i][j] = t[i][j - 1] + np.gradient(t[i][j - 1])
        S[i][j] = s[i][j - 1] + np.gradient(s[i][j - 1])

But this conversion gives the error:

    in gradient
        if max(axes) >= N or min(axes) < 0:
    ValueError: max() arg is an empty sequence

I get the error on the first loop iteration, inside the gradient function. What am I missing here? Any suggestions?

Did you miss an indent? On the line under for i in..., should the for j... be indented, or is that a typo? – Chuck
No, that was a typo, sorry for that. I've made the correction here. – Aman
How are you initializing T and S? I think you should be using [i, j] instead of [i][j]. – Divakar
No, that doesn't work @Divakar – Aman
Elaborate on "doesn't work"? – Divakar

1 Answer


EDIT:

You are trying to call np.gradient on a scalar value, which will always raise an error: for example, np.gradient(2) produces the same error you see above. np.gradient requires an array input. MATLAB's gradient, by contrast, returns 0 when given a scalar, so your MATLAB code is effectively just adding 0 to your t and s matrices to obtain T and S.
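To see the difference concretely, here is a minimal sketch: np.gradient needs at least a 1-D array to take differences over, while a single matrix element like t[i][j - 1] is just a scalar with nothing to difference:

```python
import numpy as np

# np.gradient on an array works: one-sided differences at the edges,
# central differences in the interior.
arr = np.array([1.0, 2.0, 4.0])
grad = np.gradient(arr)  # gradient is [1.0, 1.5, 2.0]

# But indexing out a single element gives a scalar (0 dimensions),
# and np.gradient raises on it instead of returning 0 like MATLAB.
elem = arr[0]
print(np.ndim(elem))  # 0 -- a scalar, not an array
```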

Not sure if this will fix your specific problem, but don't forget Python indexing starts at 0, whereas MATLAB starts at 1. Try:

for i in range(0, row):
    for j in range(1, col):
        # np.gradient needs an array, and MATLAB's gradient of a
        # scalar is 0, so the gradient term contributes nothing:
        T[i, j] = t[i, j - 1]
        S[i, j] = s[i, j - 1]
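Since each T[i, j] just receives t[i, j - 1] (the MATLAB gradient-of-a-scalar term is always 0), the double loop can also be replaced by a single NumPy slice assignment. A sketch with small stand-in sizes instead of your real 127:

```python
import numpy as np

row, col = 4, 5                      # stand-ins for the real dimensions
t = np.arange(row * col, dtype=float).reshape(row, col)
T = np.zeros_like(t)

# Vectorized equivalent of the double loop: every column j of T
# takes column j-1 of t; column 0 stays at its initial zeros.
T[:, 1:] = t[:, :-1]
```

The same one-liner applies to S and s.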