In this answer, zwol made this claim:
The correct way to convert two bytes of data from an external source into a 16-bit signed integer is with helper functions like this:
#include <stdint.h>

int16_t be16_to_cpu_signed(const uint8_t data[static 2]) {
    uint32_t val = (((uint32_t)data[0]) << 8) |
                   (((uint32_t)data[1]) << 0);
    return ((int32_t) val) - 0x10000u;
}

int16_t le16_to_cpu_signed(const uint8_t data[static 2]) {
    uint32_t val = (((uint32_t)data[0]) << 0) |
                   (((uint32_t)data[1]) << 8);
    return ((int32_t) val) - 0x10000u;
}
Which of the above functions is appropriate depends on whether the array contains a little-endian or a big-endian representation. Endianness is not the issue in question here; I am wondering why zwol subtracts 0x10000u from the uint32_t value converted to int32_t.
Why is this the correct way?
How does it avoid the implementation-defined behavior when converting to the return type?
Since you can assume two's complement representation, how would this simpler cast fail: return (uint16_t)val;
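To make the first two questions concrete, here is a small test harness I put together (my own sketch, not part of zwol's answer) that prints the intermediate value of the subtraction before it is converted to the int16_t return type. It assumes a 32-bit int, so the usual arithmetic conversions turn the whole subtraction into unsigned arithmetic:

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* 0x0001 is a small positive value; 0xFFFE is the big-endian pair
       {0xFF, 0xFE}, i.e. -2 in two's complement. */
    uint32_t samples[] = { 0x0001u, 0xFFFEu };
    for (size_t i = 0; i < sizeof samples / sizeof samples[0]; i++) {
        uint32_t val = samples[i];
        /* Same expression as in the helpers above, minus the final
           conversion to int16_t. */
        uint32_t intermediate = ((int32_t)val) - 0x10000u;
        printf("val = 0x%04X  ->  ((int32_t)val) - 0x10000u = 0x%08X\n",
               (unsigned)val, (unsigned)intermediate);
    }
    return 0;
}

For the 0xFFFE input the low 16 bits of the result are what I would expect for -2, so I can see what the subtraction is aiming at; what I don't follow is why the final conversion of such an out-of-range value to int16_t is portable.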
What is wrong with this naive solution:
int16_t le16_to_cpu_signed(const uint8_t data[static 2]) {
    return (uint16_t)data[0] | ((uint16_t)data[1] << 8);
}
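For reference, this is the kind of input I had in mind for the naive version (again my own sketch, assuming a 32-bit int, with the little-endian byte pair {0xFF, 0xFF} that is meant to represent -1):

#include <stdint.h>
#include <stdio.h>

int main(void) {
    const uint8_t data[2] = { 0xFF, 0xFF };  /* little-endian -1 */
    /* Both operands are promoted to int before the OR, so the OR itself
       cannot overflow; the step I am unsure about is the conversion of
       the combined value to the int16_t return type. */
    unsigned combined = (uint16_t)data[0] | ((uint16_t)data[1] << 8);
    printf("combined value before the return:  0x%04X\n", combined);
    /* 0xFFFF does not fit in int16_t; this implicit conversion is the
       step I am asking about. */
    int16_t result = combined;
    printf("after the conversion to int16_t:   %d\n", (int)result);
    return 0;
}

As far as I understand, a typical two's complement implementation prints -1 here, but that conversion is exactly the implementation-defined step my question is about.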
Converting an out-of-range value to int16_t is implementation-defined, so the naive approach isn't portable. – nwellnhof

In the first approach, 0xFFFF0001u can't be represented as int16_t, and in the second approach 0xFFFFu can't be represented as int16_t. – Sander De Dycker