I am trying to translate a piece of code from MATLAB to Python that uses the gradient function on (500x500) 2D matrices. `t` and `s` are 2D matrices holding some values; `T` and `S` are 2D matrices initialized to zeros (via `np.zeros` on the Python side). `row` and `col` are integers with the same value, in my case 127.
for i=1:row
    for j=2:col
        T(i,j) = t(i,j-1) + gradient(t(i,j-1));
        S(i,j) = s(i,j-1) + gradient(s(i,j-1));
    end
end
My resultant Python code is:
for i in range(1, row):
    for j in range(2, col):
        T[i][j] = t[i][j - 1] + np.gradient(t[i][j - 1])
        S[i][j] = s[i][j - 1] + np.gradient(s[i][j - 1])
But this conversion gives the following error on the very first iteration, inside the gradient function:

    in gradient
        if max(axes) >= N or min(axes) < 0:
    ValueError: max() arg is an empty sequence

What am I missing here? Any suggestions?
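For context on why this fails: `t[i][j - 1]` is a single scalar, and `np.gradient` of a scalar (a 0-d array) has no axis to differentiate along, hence the empty-sequence error. (MATLAB's `gradient` of a single element simply returns 0, so the original loop runs there.) A minimal sketch of one likely intent, assuming the gradient was meant to be taken over the whole array and sampled per element (the intended axis is an assumption here, as is the small demo size):

```python
import numpy as np

row = col = 5  # small demo size; the question uses 127

t = np.arange(row * col, dtype=float).reshape(row, col)
s = t * 2.0

# np.gradient on a 2D array returns two arrays: the gradient along
# axis 0 (down the rows) and along axis 1 (across the columns).
gt_rows, gt_cols = np.gradient(t)
gs_rows, gs_cols = np.gradient(s)

T = np.zeros((row, col))
S = np.zeros((row, col))

# One reading of the MATLAB loop: take the previous column's value
# plus its column-direction gradient. Note the index shift: MATLAB's
# 1:row / 2:col become range(row) / range(1, col) in 0-based Python.
for i in range(row):
    for j in range(1, col):
        T[i, j] = t[i, j - 1] + gt_cols[i, j - 1]
        S[i, j] = s[i, j - 1] + gs_cols[i, j - 1]
```

With `np.gradient` hoisted out of the loop, each element lookup is a plain index into a precomputed array, so no scalar ever reaches `gradient`.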
Should the `for j in ...` be indented under the `for i in ...`, or is that a typo? – ChuckT
I think you should be using `[i, j]` instead of `[i][j]`. – Divakar
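As a side note on that last comment: for reading a single element of a NumPy ndarray, `a[i][j]` and `a[i, j]` return the same value, but `a[i][j]` first materializes the row `a[i]` and then indexes it, while `a[i, j]` indexes the 2-D array in one step and is the idiomatic form. A quick check:

```python
import numpy as np

a = np.arange(12).reshape(3, 4)

# Both forms read the same element from the array...
chained = a[1][2]  # indexes row a[1] first, then element 2 of that row
tuple_ix = a[1, 2]  # single tuple index into the 2-D array

print(chained, tuple_ix)
```

Tuple indexing also matters for writes and fancy indexing, where the chained form can silently operate on a temporary instead of the original array.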