This algorithm has been on my mind for a long time, but I can't find it described anywhere. It's simple enough, though, that I can't be the only one who has thought of it. Here's how it works:
You start with an image. Say, 7x7px:
You need to resample it to, say, 5x5px:
So all you do is take the average color of each new square:
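To make the idea concrete, here's a minimal sketch in plain Python for a single grayscale channel (a color image would just repeat this per channel). The function name and the list-of-rows image representation are my own choices, not from any library; each destination pixel is the area-weighted average of every source pixel it overlaps, including fractional overlaps at the edges:

```python
import math

def area_average_resample(src, new_w, new_h):
    """Resample a grayscale image (list of rows of numbers) so that each
    destination pixel is the area-weighted mean of all source pixels it
    overlaps -- fractional overlaps at the cell edges included."""
    old_h, old_w = len(src), len(src[0])
    sx, sy = old_w / new_w, old_h / new_h  # dest pixel size in source units
    out = []
    for j in range(new_h):
        y0, y1 = j * sy, (j + 1) * sy      # dest row's span in source coords
        row = []
        for i in range(new_w):
            x0, x1 = i * sx, (i + 1) * sx  # dest column's span in source coords
            total = area = 0.0
            for yy in range(int(y0), min(math.ceil(y1), old_h)):
                hy = min(y1, yy + 1) - max(y0, yy)  # vertical overlap
                for xx in range(int(x0), min(math.ceil(x1), old_w)):
                    wx = min(x1, xx + 1) - max(x0, xx)  # horizontal overlap
                    total += src[yy][xx] * wx * hy
                    area += wx * hy
            row.append(total / area)
        out.append(row)
    return out
```

Shrinking a 7x7 image to 5x5 with this gives each new pixel a 1.4x1.4-source-pixel footprint, so interior source pixels contribute fully and boundary ones contribute their fractional slivers.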
This isn't nearest-neighbor, because that takes the color of just one source pixel and ignores the fractional source pixels that happen to overlap the new pixel. It's also not bilinear, bicubic, Lanczos, or any other interpolating filter.
So - what is it? It intuitively seems to me that this should be the "mathematically perfect" resampling algorithm, although since I don't have a definition of what "mathematically perfect" is, I cannot prove or disprove that.
Last but not least, "mathematically perfect" isn't always "best looking", so I wonder how it compares to other mainstream image resampling algorithms (bicubic, Lanczos) in terms of "quality"? This is a subjective term, of course, so I'm really interested in whether there are significant differences between this algorithm and the others that most people would agree upon.
P.S. A few things I can already tell about it: it won't be "best looking" for pixel art, as demonstrated here; there are special algorithms for that (2xSaI, etc.). It also won't be best for enlarging pictures, where interpolation would win out. But for shrinking pictures...?
Update 1: Hmm, I just found out about supersampling. This seems like a variant of it with a grid-type arrangement of samples, where the number of samples is adapted to the resolutions of the source and target images.