I'm not sure how the current situation came to be, but FP exception detection support is currently very different from integer. It's common for integer division to trap: POSIX requires it to raise SIGFPE if it raises an exception at all.
However, you can sort out what kind of SIGFPE it was, to see that it was actually a division exception. (Not necessarily divide-by-zero, though: 2's complement INT_MIN / -1 division traps, and x86's div and idiv also trap when the quotient of a 64b/32b division doesn't fit in the 32-bit output register. That's not the case on AArch64, where sdiv never traps.)
The glibc manual explains that BSD and GNU systems deliver an extra argument to the signal handler for SIGFPE, which will be FPE_INTDIV_TRAP for integer divide by zero. POSIX documents the corresponding FPE_INTDIV as a possible value for siginfo_t's int si_code field, on systems where siginfo_t includes that member.
IDK if Windows delivers a different exception in the first place, or if it bundles things into different flavours of the same arithmetic exception like Unix does. If the latter, the default handler decodes the extra info to tell you what kind of exception it was.
POSIX and Windows both use the phrase "division by zero" to cover all integer division exceptions, so apparently this is common shorthand. For people who do know about INT_MIN / -1 (with 2's complement) being a problem, "division by zero" can be taken as synonymous with a divide exception. For people who don't know why integer division might be a problem, the phrase immediately points out the common case.
FP exception semantics
FP exceptions are masked by default for user-space processes in most operating systems / C ABIs.
This makes sense, because IEEE floating point can represent infinities, and has NaN to propagate the error to all future calculations using the value.
- 0.0/0.0 => NaN
- If x is finite and nonzero: x/0.0 => +/-Inf, with the sign of x
This even allows things like this to produce a sensible result when exceptions are masked:
double x = 0.0;
double y = 1.0/x; // y = +Inf
double z = 1.0/y; // z = 1/Inf = 0.0, no FP exception
FP vs. integer error detection
The FP way of detecting errors is pretty good: when exceptions are masked, they set a flag in the FP status register instead of trapping. (e.g. x86's MXCSR for SSE instructions). The flag stays set until manually cleared, so you can check once (after a loop for example) to see which exceptions happened, but not where they happened.
There have been proposals for having similar "sticky" integer-overflow flags to record if overflow happened at any point during a sequence of computations. Allowing integer division exceptions to be masked would be nice in some cases, but dangerous in other cases (e.g. in an address calculation, you should trap instead of potentially storing to a bogus location).
On x86, though, detecting whether integer overflow happened during a sequence of calculations requires putting a conditional branch after every one of them, because flags are simply overwritten. MIPS has an add instruction that will trap on signed overflow, and an addu instruction that never traps. So integer exception detection and handling is a lot less standardized.
Integer division doesn't have the option of producing NaN or Inf results, so it makes sense for it to work this way. Any integer bit pattern it produced for the error case would be wrong, because every bit pattern represents a specific finite value.
However, on x86, converting an out-of-range floating point value to integer with cvtsd2si or any similar conversion instruction produces the "integer indefinite" value if the floating-point "invalid" exception is masked. The value is all-zero except the sign bit, i.e. INT_MIN. (See the Intel manuals, links in the x86 tag wiki.)