I have a problem with optimizing the computation of errors for disparity map estimation.
To compute the errors I created a class with a method for each error metric. I need to iterate over every pixel to get the error. The arrays are big because I'm iterating over 1937 x 1217 images. Do you know how to optimize it?
Here is the code of my method:
EDIT:
import numpy as np


def mreError(self):
    # Ground-truth disparity, binary validity mask (255 = valid) and estimated disparity
    s_gt = self.ref_disp_norm
    s_all = self.disp_bin
    s_r = self.disp_norm

    s_gt = s_gt.astype(np.float32)
    s_r = s_r.astype(np.float32)

    n, m = s_gt.shape
    all_arr = []
    # Loop over every pixel of the 1937 x 1217 image
    for i in range(n):
        for j in range(m):
            if s_all[i, j] == 255:
                if s_gt[i, j] == 0:
                    sub_mre = 0
                else:
                    # Relative error at this pixel
                    sub_mre = np.abs(s_gt[i, j] - s_r[i, j]) / s_gt[i, j]
                all_arr.append(sub_mre)
    mre_all = np.mean(all_arr)
    return mre_all
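
What I'm hoping for is something like a mask-based NumPy version that avoids the Python loops. Below is a rough, untested sketch of what I have in mind (the name mreErrorVectorized is just a placeholder; it assumes the same attributes as above and that 255 in disp_bin marks the pixels to evaluate). I'm not sure it is equivalent or actually faster:

def mreErrorVectorized(self):
    # Same inputs as above, cast once to float32
    s_gt = self.ref_disp_norm.astype(np.float32)
    s_r = self.disp_norm.astype(np.float32)
    mask = self.disp_bin == 255          # boolean mask of valid pixels

    gt = s_gt[mask]                      # ground truth at valid pixels
    est = s_r[mask]                      # estimate at valid pixels

    # Relative error per valid pixel; pixels with gt == 0 contribute 0, as in the loop
    rel = np.zeros_like(gt)
    nonzero = gt != 0
    rel[nonzero] = np.abs(gt[nonzero] - est[nonzero]) / gt[nonzero]

    return rel.mean()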