The top-voted answer has a mathematical error if you are working with screen (pixel) coordinates! I submitted an edit a few weeks ago with a long explanation for all readers, so that they would understand the math. But that edit wasn't understood by the reviewers and was removed, so I submitted the same edit again, this time more briefly summarized. (Update: Rejected 2vs1 because it was deemed a "substantial change", heh.)
So I will fully explain the BIG problem with its math here in this separate answer.
So, yes, in general, the top-voted answer is correct and is a good way to calculate the IoU. But (as other people have pointed out too) its math is incorrect for computer screens. You cannot just do `(x2 - x1) * (y2 - y1)`, since that will not produce the correct area calculations. Screen indexing starts at pixel `0,0` and ends at `width-1,height-1`. The range of screen coordinates is `inclusive:inclusive` (inclusive on both ends), so a range from `0` to `10` in pixel coordinates is actually 11 pixels wide, because it includes `0 1 2 3 4 5 6 7 8 9 10` (11 items). So, to calculate the area of screen coordinates, you MUST therefore add +1 to each dimension, as follows: `(x2 - x1 + 1) * (y2 - y1 + 1)`.
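To see the off-by-one concretely, here is a tiny Python sketch (plain Python only; the variable names are made up for illustration) that counts the pixels in an inclusive range and compares the two formulas:

```python
# Hypothetical example: an inclusive pixel range from x1 to x2 (both endpoints are pixels).
x1, x2 = 0, 10
pixels = list(range(x1, x2 + 1))  # [0, 1, 2, ..., 10]
print(len(pixels))                # 11 pixels in the range

# The naive width undercounts by one; adding +1 matches the actual pixel count.
print(x2 - x1)                    # 10 (wrong for inclusive pixel ranges)
print(x2 - x1 + 1)                # 11 (correct)
```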
If you're working in some other coordinate system where the range is not inclusive (such as an `inclusive:exclusive` system where `0` to `10` means "elements 0-9 but not 10"), then this extra math would NOT be necessary. But most likely, you are processing pixel-based bounding boxes. Screen coordinates start at `0,0` and go up from there. A `1920x1080` screen is indexed from `0` (first pixel) to `1919` (last pixel horizontally), and from `0` (first pixel) to `1079` (last pixel vertically).
So if we have a rectangle in "pixel coordinate space", to calculate its area we must add 1 in each direction. Otherwise, we get the wrong answer for the area calculation.
Imagine that our `1920x1080` screen has a pixel-coordinate based rectangle with `left=0,top=0,right=1919,bottom=1079` (covering all pixels on the whole screen).

We know that `1920x1080` pixels is `2073600` pixels, which is the correct area of a 1080p screen.

But with the wrong math `area = (x_right - x_left) * (y_bottom - y_top)`, we would get: `(1919 - 0) * (1079 - 0)` = `1919 * 1079` = `2070601` pixels! That's wrong!
That is why we must add `+1` to each calculation, which gives us the following corrected math: `area = (x_right - x_left + 1) * (y_bottom - y_top + 1)`, giving us: `(1919 - 0 + 1) * (1079 - 0 + 1)` = `1920 * 1080` = `2073600` pixels! And that's indeed the correct answer!
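As a quick sanity check, this throwaway snippet (assuming the same full-screen box as above; the variable names simply mirror the formulas) reproduces both numbers:

```python
# Full-screen box on a 1920x1080 display, in inclusive pixel coordinates.
x_left, y_top, x_right, y_bottom = 0, 0, 1919, 1079

print((x_right - x_left) * (y_bottom - y_top))          # 2070601 -> wrong
print((x_right - x_left + 1) * (y_bottom - y_top + 1))  # 2073600 -> correct (1920 * 1080)
```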
The shortest possible summary is: Pixel coordinate ranges are `inclusive:inclusive`, so we must add `+1` to each axis if we want the true area of a pixel coordinate range.
For a few more details about why `+1` is needed, see Jindil's answer: https://stackoverflow.com/a/51730512/8874388

As well as this pyimagesearch article: https://www.pyimagesearch.com/2016/11/07/intersection-over-union-iou-for-object-detection/

And this GitHub comment: https://github.com/AlexeyAB/darknet/issues/3995#issuecomment-535697357
Since the fixed math wasn't approved, anyone who copies the code from the top-voted answer will hopefully see this answer and be able to bugfix it themselves, simply by copying the bugfixed assertions and area-calculation lines below, which have been fixed for `inclusive:inclusive` (pixel) coordinate ranges:
```python
assert bb1['x1'] <= bb1['x2']
assert bb1['y1'] <= bb1['y2']
assert bb2['x1'] <= bb2['x2']
assert bb2['y1'] <= bb2['y2']

# ... (the rest of the top-voted answer's code stays unchanged) ...

# The intersection of two axis-aligned bounding boxes is always an
# axis-aligned bounding box.
# NOTE: We MUST ALWAYS add +1 to calculate area when working in
# screen coordinates, since 0,0 is the top left pixel, and w-1,h-1
# is the bottom right pixel. If we DON'T add +1, the result is wrong.
intersection_area = (x_right - x_left + 1) * (y_bottom - y_top + 1)

# compute the area of both AABBs
bb1_area = (bb1['x2'] - bb1['x1'] + 1) * (bb1['y2'] - bb1['y1'] + 1)
bb2_area = (bb2['x2'] - bb2['x1'] + 1) * (bb2['y2'] - bb2['y1'] + 1)
```
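If it helps, here is a minimal, self-contained sketch of how the fix slots into a complete IoU function. The function name `get_iou_pixels`, the intersection-corner logic, and the overlap check are my own illustration (modeled on the approach the top-voted answer describes); the assertions and the three `+1` area lines are exactly the fixed lines from above:

```python
def get_iou_pixels(bb1, bb2):
    """IoU of two axis-aligned boxes given in inclusive pixel coordinates.

    Each box is a dict with keys 'x1', 'y1' (top-left pixel) and
    'x2', 'y2' (bottom-right pixel), with x1 <= x2 and y1 <= y2.
    """
    assert bb1['x1'] <= bb1['x2']
    assert bb1['y1'] <= bb1['y2']
    assert bb2['x1'] <= bb2['x2']
    assert bb2['y1'] <= bb2['y2']

    # Corners of the intersection rectangle (still in pixel coordinates).
    x_left = max(bb1['x1'], bb2['x1'])
    y_top = max(bb1['y1'], bb2['y1'])
    x_right = min(bb1['x2'], bb2['x2'])
    y_bottom = min(bb1['y2'], bb2['y2'])

    # No overlap at all.
    if x_right < x_left or y_bottom < y_top:
        return 0.0

    # +1 on each axis because the ranges are inclusive:inclusive.
    intersection_area = (x_right - x_left + 1) * (y_bottom - y_top + 1)
    bb1_area = (bb1['x2'] - bb1['x1'] + 1) * (bb1['y2'] - bb1['y1'] + 1)
    bb2_area = (bb2['x2'] - bb2['x1'] + 1) * (bb2['y2'] - bb2['y1'] + 1)

    iou = intersection_area / float(bb1_area + bb2_area - intersection_area)
    assert 0.0 <= iou <= 1.0
    return iou


# Example: two 11x11-pixel boxes, the second shifted 5 pixels to the right.
print(get_iou_pixels({'x1': 0, 'y1': 0, 'x2': 10, 'y2': 10},
                     {'x1': 5, 'y1': 0, 'x2': 15, 'y2': 10}))  # 0.375 (66 / 176)
```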