Of course, despite the overflow possibility others have mentioned, the function only needs to return, e.g., -1, 0, or 1, which easily fit in a signed char. The real historical reason is that in the original version of C in the 1970s, functions could not return a char: any attempt to do so returned an int instead.
In those early compilers, int was also the default type: in many situations, including function return types (as seen for main below), you could declare something as int without actually writing the int keyword. So it made sense to define any function that didn't specifically need a different return type as returning int.
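As a short illustration of that early style (the function name twice is just an illustrative example), both a function and its untyped parameter simply default to int:

/* Pre-ANSI C: no return type and no parameter type written;
   the function twice and its parameter x both default to int. */
twice(x)
{
    return x + x;
}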
Even now, returning a char simply sign-extends the value into the int return register (r0 on the PDP-11, eax on x86). Treating it as a char would bring no performance benefit, whereas allowing the result to be the actual difference between the characters, rather than forcing it to -1 or 1, did have a small performance benefit. And axiac's answer also makes the good point that the value would have had to be promoted back to an int anyway, for the comparison operator. The reason for these promotions is also historical, incidentally: it meant the compiler did not have to implement separate operators for every possible combination of char and int, especially since the comparison instructions on many processors only work on ints anyway.
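To make that concrete, here is a minimal strcmp-style sketch (an illustrative example, not the actual library source): the return value is simply the byte difference at the first mismatch, and since both characters are promoted to int before the subtraction, nothing would be gained by narrowing the result to char.

/* Minimal strcmp-style comparison: returns the raw difference of the
   first differing bytes rather than mapping it to -1, 0, or 1. */
int my_strcmp(const char *a, const char *b)
{
    while (*a != '\0' && *a == *b) {
        a++;
        b++;
    }
    /* Both operands are promoted to int by the usual arithmetic
       conversions, so the subtraction is done in int anyway. */
    return (unsigned char)*a - (unsigned char)*b;
}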
Proof: if I build a test program on Unix V6 for the PDP-11, the char return type is silently ignored and an integer value outside char's range is returned:
char foo() {             /* declared as returning char... */
    return 257;          /* ...but the value comes back as a full int */
}

main() {
    printf("%d\n", foo());
    return 0;
}
# cc foo.c
# a.out
257
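For contrast, a modern compiler does convert the value to char on return. A present-day version of the same program is sketched below with the declarations today's C expects; converting 257 to a signed char is implementation-defined, but on typical 8-bit-char platforms it prints 1.

#include <stdio.h>

/* Modern C: 257 is converted to char on return; on the usual
   8-bit-char platforms the stored value is 1 (implementation-defined
   for signed char). */
char foo(void) {
    return 257;
}

int main(void) {
    printf("%d\n", foo());
    return 0;
}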