
I have two major problems with defining a custom loss function in Keras to compile my CNN. I am working on 2D image registration (aligning a pair of 2D images so they best fit each other) via a CNN. The output of the network is a 5-element float array, the network's prediction (1 rotation, 2 translations, and 2 scalings over x and y). There are two main loss functions (and also metrics) for the registration problem: the Dice coefficient and TRE (Target Registration Error, the sum of distances between pairs of landmark points marked by a physician). I need to implement both of these as loss functions. For the Dice coefficient:

1- First of all, I need to know which sample is currently under consideration by the optimizer, so that I can read that sample's content and compute Dice; but according to the Keras documentation, only y_true and y_pred are available inside a custom loss function.

2- I wrote the following code as my loss function to 1) warp the first image, 2) binarize both images (each sample is composed of two images: a moving image and a fixed image), and 3) return the Dice coefficient between the resulting pair (warped and fixed).

Since the parameters of a custom loss function are restricted to y_true and y_pred, there is no index for the sample under consideration, and my problem is unsupervised (i.e., no labels are needed). So I used the indices of the samples fed to the CNN as the labels, set the batch size to 1, and tried to use y_true[0] as the index of the training sample currently seen by the CNN, roughly as sketched below.
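For clarity, this is roughly how I pass those indices in as dummy labels (a simplified sketch: my real network takes the stacked image pair as input, the model definition is omitted here, and the epoch count is arbitrary):

import numpy as np

# Train_DataCT / Train_DataMR hold the paired images, shape (N, H, W)
N = Train_DataCT.shape[0]
indices = np.arange(N, dtype='float32')  # each sample's own index, used as its "label"

model.compile(optimizer='rmsprop', loss=my_loss_f)
model.fit(Train_DataCT, indices, batch_size=1)  # batch_size=1, so y_true[0] is the sample index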

import numpy as np
import scipy.ndimage as ndi
from scipy.spatial import distance as dis

def my_loss_f(y_true, y_pred):
    a = y_true[0]  # intended to be the index of the current sample
    nimg1 = warping(Train_DataCT[a], y_pred)  # line 68 in CNN1.py (see the traceback)
    return dis.dice(BW(nimg1).flatten(), BW(Train_DataMR[a]).flatten())

def warping(nimg, x):
    # Apply the predicted transform: rotation, (row, col) shift, then anisotropic zoom
    nimg1 = ndi.rotate(nimg, x[0], reshape=False)
    nimg1 = ndi.shift(nimg1, [x[1], x[2]])
    nimg1 = clipped_zoom(nimg1, [x[3], x[4]])  # clipped_zoom is my own zoom-and-crop helper
    return nimg1

def BW(nimg1):
    # Binarize around the center of mass of the intensity histogram
    hist = ndi.histogram(nimg1, 0, 255, 255)
    som = ndi.center_of_mass(hist)
    bwnimg = np.where(nimg1 > som, 1, 0)
    return bwnimg

But I constantly get errors such as the one below. Someone told me to rewrite my loss function with TensorFlow or the Keras backend, but I need NumPy and SciPy and cannot dive into that kind of low-level programming, as my time to complete the project is very limited.

The main problem is that y_true is empty (it is just a placeholder, not a real variable with a value), so it cannot be used as the index in Train_DataCT[y_true[0]]; the error says an index must be an integer, a slice, a boolean, and so on, and a tensor cannot be used as an index. I tried a number of workarounds, e.g., converting y_true to an ndarray, or calling y_true.eval() to give it a value, but instead I got the error: no default session.

Thanks in advance. Please, can someone help me?


Traceback (most recent call last):
  File "D:/Python/Reg/Deep/CNN1.py", line 83, in <module>
    model.compile(optimizer='rmsprop',loss=my_loss_f)
  File "C:\Users\Hamidreza\Anaconda3\lib\site-packages\keras\engine\training.py", line 342, in compile
    sample_weight, mask)
  File "C:\Users\Hamidreza\Anaconda3\lib\site-packages\keras\engine\training_utils.py", line 404, in weighted
    score_array = fn(y_true, y_pred)
  File "D:/Python/Reg/Deep/CNN1.py", line 68, in my_loss_f
    nimg1=warping(Train_DataCT[1],y_pred)
  File "D:/Python/Reg/Deep/CNN1.py", line 55, in warping
    nimg1 = ndi.rotate(nimg, x[0], reshape=False)
  File "C:\Users\Hamidreza\Anaconda3\lib\site-packages\scipy\ndimage\interpolation.py", line 703, in rotate
    m11 = math.cos(angle)
TypeError: must be real number, not Tensor

Process finished with exit code 1

Train_DataCT[1] and y_pred: either of them is a tensor, not a real value. Try converting it to a list and computing the rest; it should work. Also, please edit the error log so that it is more readable. – venkata krishnan

Train_DataCT is an ndarray, but y_pred and y_true are tensors. They should be converted to ndarrays, but how? I do not know, since y_pred.numpy() does not work at run time (inside the loss function). – Hamidreza

2 Answers

1 vote

Your loss function should work on the tensor type of your backend. If you're using Keras with the TensorFlow backend, the following function may help you combine advanced NumPy/SciPy functions with tensors:

https://www.tensorflow.org/api_docs/python/tf/numpy_function?version=stable
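For example, here is a minimal sketch of wrapping your existing NumPy/SciPy pipeline with tf.numpy_function (assuming TF 1.14+ or 2.x, and reusing your warping, BW, Train_DataCT and Train_DataMR as-is):

import numpy as np
import tensorflow as tf
from scipy.spatial import distance as dis

def np_dice(y_true, y_pred):
    # Runs as plain Python: y_true and y_pred arrive here as real ndarrays
    a = int(y_true.flat[0])                        # the sample index passed in as the label
    warped = warping(Train_DataCT[a], y_pred[0])   # y_pred has shape (1, 5) with batch_size=1
    d = dis.dice(BW(warped).flatten(), BW(Train_DataMR[a]).flatten())
    return np.float32(d)

def my_loss_f(y_true, y_pred):
    # Bridge: tensors in -> ndarrays inside np_dice -> scalar float32 tensor out
    return tf.numpy_function(np_dice, [y_true, y_pred], tf.float32)

One caveat: TensorFlow registers no gradient for tf.numpy_function, so this removes the type error, but the optimizer still cannot backpropagate through the SciPy code; for actual training you will eventually need a differentiable, tensor-based version of the warp and the Dice.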

You can also find a lot more useful material on this topic here:

How to make a custom activation function with only Python in Tensorflow?

0 votes

Let me refine my question: I need my input sample data in order to calculate the loss function. With or without batching, I have to know the index of the sample currently under consideration by the CNN in order to compute the loss, e.g., the Dice coefficient between a pair of input images.

Since my problem is unsupervised learning, as an alternative solution I used y_true as the sample index, but when I use y_true[0] (e.g., after tf.flatten) as in Train_DataCT[y_true[0]], I get the error: the index cannot be a tensor!

How could I use .run() or .eval() in a custom loss function so that y_true gets a value, which I could then convert to, e.g., an ndarray?
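Update: one workaround I am experimenting with (assuming tf.keras on TF 2.x, where eager execution is available, and reusing warping, BW and dis from my question) is to force the model to run eagerly, so that the loss receives concrete EagerTensors and .numpy() works:

def my_loss_f(y_true, y_pred):
    # With run_eagerly the tensors carry real values, so .numpy() is allowed
    a = int(y_true.numpy().flat[0])                   # the sample index is now readable
    nimg1 = warping(Train_DataCT[a], y_pred.numpy()[0])
    return dis.dice(BW(nimg1).flatten(), BW(Train_DataMR[a]).flatten())

model.compile(optimizer='rmsprop', loss=my_loss_f)
model.run_eagerly = True  # disable graph compilation for the train/test steps

This at least lets me read the values, but note that nothing differentiable connects y_pred to the returned SciPy result, so getting usable gradients is still an open problem.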