
I have a loss function built in TensorFlow; it needs logits and labels as input:

    def median_weight_class_loss(labels, logits):
        epsilon = tf.constant(value=1e-10)
        logits = logits + epsilon
        softmax = tf.nn.softmax(logits)
        # the number of samples in each class of my dataset, divided by the total number of samples (10015)
        weight_sample = np.array([1113, 6705, 514, 327, 1099, 115, 142]) / 10015
        weight_sample = 0.05132302 / weight_sample
        xent = -tf.reduce_sum(tf.multiply(labels * tf.log(softmax + epsilon), weight_sample), axis=1)
        return xent
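
As an aside on where the constant in that code appears to come from: 0.05132302 matches the median of the per-class frequencies (514/10015), so the weighting looks like median-frequency balancing. A minimal NumPy sketch of that computation (the variable names here are illustrative, not from the original code):

```python
import numpy as np

# per-class sample counts from the question
class_counts = np.array([1113, 6705, 514, 327, 1099, 115, 142])

freq = class_counts / class_counts.sum()  # per-class frequency
median_freq = np.median(freq)             # median-frequency balancing constant
weights = median_freq / freq              # rare classes get weights > 1
```

With these counts, `median_freq` comes out to roughly 0.05132302, the constant hard-coded in the loss.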

The problem is that Keras loss functions use a different signature:

   custom_loss(y_true, y_pred)

which takes y_true and y_pred as inputs.

I found a way to get logits in Keras, by using a linear activation instead of softmax in the last layer of my model:

   model.add(Activation('linear'))

But I need my model to have a softmax activation in the last layer. What do you think the solution is? Thank you.


1 Answer


Strictly speaking, this loss does not need logits: you can feed it softmax probabilities directly by modifying it like this:

    def median_weight_class_loss(y_true, y_pred):
        epsilon = tf.constant(value=1e-10)
        weight_sample = np.array([1113, 6705, 514, 327, 1099, 115, 142]) / 10015
        weight_sample = 0.05132302 / weight_sample
        xent = -tf.reduce_sum(tf.multiply(y_true * tf.log(y_pred + epsilon), weight_sample), axis=1)
        return xent
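
Since y_pred is now a probability distribution, the model can keep its softmax in the last layer and you can pass this function to model.compile(loss=median_weight_class_loss, ...) as usual. To sanity-check the arithmetic without TensorFlow, here is a NumPy sketch of the same computation (the example logits and one-hot label are made up for illustration):

```python
import numpy as np

def median_weight_class_loss_np(y_true, y_pred, eps=1e-10):
    # same class weights as in the answer's loss
    weight_sample = np.array([1113, 6705, 514, 327, 1099, 115, 142]) / 10015
    weight_sample = 0.05132302 / weight_sample
    # weighted categorical cross-entropy over the class axis
    return -np.sum(y_true * np.log(y_pred + eps) * weight_sample, axis=1)

# made-up logits for a single sample, converted to probabilities via softmax
logits = np.array([[2.0, 1.0, 0.1, 0.0, -1.0, 0.5, 0.3]])
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

# one-hot label for class index 2
y_true = np.array([[0, 0, 1, 0, 0, 0, 0]])

loss = median_weight_class_loss_np(y_true, probs)  # one loss value per sample
```

The result is one non-negative scalar per sample, exactly what Keras expects from a per-sample loss before it averages over the batch.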