22
votes

How to compute the mean IU (mean Intersection over Union) score as in this paper?

Long, Jonathan, Evan Shelhamer, and Trevor Darrell. "Fully Convolutional Networks for Semantic Segmentation."


4 Answers

32
votes

For each class, the Intersection over Union (IU) score is:

true positive / (true positive + false positive + false negative)

The mean IU is simply the average over all classes.


Regarding the notation in the paper:

  • n_cl : the number of classes
  • t_i : the total number of pixels in class i
  • n_ij : the number of pixels of class i predicted to belong to class j. So for class i:

    • n_ii : the number of correctly classified pixels (true positives)
    • n_ij (j ≠ i) : the number of pixels of class i wrongly predicted as class j (false negatives)
    • n_ji (j ≠ i) : the number of pixels of class j wrongly predicted as class i (false positives)

With this notation, the per-class score is IU_i = n_ii / (t_i + Σ_j n_ji − n_ii), and mean IU = (1/n_cl) Σ_i IU_i, which is exactly the formula given in the paper.

You can find the MATLAB code to compute this directly in the PASCAL VOC DevKit here; a NumPy sketch of the same computation follows below.
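
As a quick reference, here is a minimal NumPy sketch of that formula (my own illustration, not code from the paper or the DevKit), assuming pred and gt are integer label maps with classes 0..n_cl-1 and using a hypothetical mean_iu helper:

import numpy as np

def mean_iu(pred, gt, n_cl):
    # build n[i, j] = number of pixels of ground-truth class i predicted as class j
    pred = pred.flatten()
    gt = gt.flatten()
    n = np.bincount(n_cl * gt + pred, minlength=n_cl * n_cl).reshape(n_cl, n_cl)
    n_ii = np.diag(n)          # true positives per class
    t_i = n.sum(axis=1)        # total pixels of class i in the ground truth
    pred_i = n.sum(axis=0)     # total pixels predicted as class i
    union = t_i + pred_i - n_ii
    valid = union > 0          # skip classes absent from both prediction and ground truth
    return np.mean(n_ii[valid] / union[valid])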

16
votes
 from sklearn.metrics import confusion_matrix  
 import numpy as np

 def compute_iou(y_pred, y_true):
     # y_pred and y_true are flattened into 1-D label vectors
     y_pred = y_pred.flatten()
     y_true = y_true.flatten()
     # rows = ground truth, columns = predictions; labels=[0, 1] assumes binary
     # segmentation -- pass labels=list(range(n_classes)) for the multi-class case
     current = confusion_matrix(y_true, y_pred, labels=[0, 1])
     # per-class IoU = diagonal / (row sum + column sum - diagonal)
     intersection = np.diag(current)
     ground_truth_set = current.sum(axis=1)
     predicted_set = current.sum(axis=0)
     union = ground_truth_set + predicted_set - intersection
     IoU = intersection / union.astype(np.float32)
     return np.mean(IoU)
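
A small toy check (my own example, not part of the original answer), using the same arrays that appear in a later answer:

 y_true = np.array([[0, 1, 1],
                    [1, 1, 0]])
 y_pred = np.array([[1, 1, 1],
                    [1, 0, 0]])
 print(compute_iou(y_pred, y_true))  # (1/3 + 3/5) / 2, about 0.467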
2
votes

This should help. Note that pixelAccuracy below actually computes per-image pixel accuracy over the non-background pixels (ground-truth class 0 is ignored), not a per-class intersection over union.

import numpy as np

# y_pred_batch and y_true_batch are assumed to be one-hot / per-class score maps that can
# be reshaped to [N_CLASSES_PASCAL, img_rows, img_cols] per sample; these constants are
# assumed to be defined elsewhere
def computeIoU(y_pred_batch, y_true_batch):
    return np.mean(np.asarray([pixelAccuracy(y_pred_batch[i], y_true_batch[i]) for i in range(len(y_true_batch))]))

def pixelAccuracy(y_pred, y_true):
    # convert the per-class maps to per-pixel class labels
    y_pred = np.argmax(np.reshape(y_pred, [N_CLASSES_PASCAL, img_rows, img_cols]), axis=0)
    y_true = np.argmax(np.reshape(y_true, [N_CLASSES_PASCAL, img_rows, img_cols]), axis=0)
    # zero out predictions wherever the ground truth is background (class 0)
    y_pred = y_pred * (y_true > 0)
    # fraction of non-background pixels that are classified correctly
    return 1.0 * np.sum((y_pred == y_true) * (y_true > 0)) / np.sum(y_true > 0)
0
votes

scikit-learn's jaccard_score (the replacement for the old jaccard_similarity_score; per How to find IoU from segmentation masks?) can be used to get the same results as @Alex-zhai's code above:

import numpy as np
from sklearn.metrics import jaccard_score

y_true = np.array([[0, 1, 1],
                   [1, 1, 0]])
y_pred = np.array([[1, 1, 1],
                   [1, 0, 0]])

labels = [0, 1]
jaccards = []
for label in labels:
    # per-class Jaccard index (= IoU), treating `label` as the positive class
    jaccard = jaccard_score(y_true.flatten(), y_pred.flatten(), pos_label=label)
    jaccards.append(jaccard)
print(f'avg={np.mean(jaccards)}')
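
For this toy input the per-class scores should be 1/3 (for label 0) and 3/5 (for label 1), so the printed average is roughly 0.467, the same value compute_iou from the earlier answer returns.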