Regardless of endianness, x & 0xFF will give you the least significant byte.
First of all, you should understand the difference between endianness and significance. Endianness refers to the order in which a value's bytes are written to memory; it's completely irrelevant to any computation in the CPU. Significance refers to which bits carry a higher value; it's completely irrelevant to any system of storage.
Once you load a value from memory into the CPU, its endianness doesn't matter, since to the CPU (more accurately, the ALU) all that matters is the significance of the bits.
So, as far as C is concerned, 0x000000FF has 1s in its least significant byte, and ANDing it with a variable gives you that variable's least significant byte.
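For instance, something like this (the value 0x12345678 is just an arbitrary example):

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint32_t x = 0x12345678;

        /* The mask keeps only the 8 least significant bits, so this
           prints 0x78 on any machine, big-endian or little-endian. */
        printf("x & 0xFF = 0x%X\n", (unsigned)(x & 0xFF));
        return 0;
    }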
In fact, in the whole C standard, you can't find the word "endian". C defines an "abstract machine" where only the significance of the bits matters. It's the responsibility of the compiler to compile the program in such a way that it behaves the same as the abstract machine, regardless of endianness. So unless you are expecting a certain layout of memory (for example through a union or a cast of pointers), you don't need to think about endianness at all.
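Inspecting the raw bytes is exactly where endianness does show up. A minimal sketch of that contrast (assuming a 4-byte uint32_t):

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint32_t x = 0x11223344;
        unsigned char *p = (unsigned char *)&x;  /* view the same object as raw bytes */

        /* The value of x is the same everywhere, but the byte at the lowest
           address is 0x44 on a little-endian machine and 0x11 on a big-endian one. */
        printf("x & 0xFF          = 0x%X\n", (unsigned)(x & 0xFF));
        printf("first byte in RAM = 0x%X\n", p[0]);
        return 0;
    }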
Another example that might interest you is shifting. As I said before, endianness doesn't matter to the ALU, so << always means "shift towards the more significant bits"; that interpretation is fixed not just by the compiler but by the CPU itself, regardless of endianness.
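Combining a shift with a mask therefore picks out any byte by its significance, again without depending on the memory layout. A small sketch (the byte_at helper is just for illustration):

    #include <stdio.h>
    #include <stdint.h>

    /* Byte 0 is the least significant byte, byte 3 the most significant. */
    static unsigned byte_at(uint32_t x, unsigned n)
    {
        return (x >> (8 * n)) & 0xFF;
    }

    int main(void)
    {
        uint32_t x = 0x12345678;

        /* Prints "78 56 34 12" on every machine, regardless of endianness. */
        for (unsigned n = 0; n < 4; ++n)
            printf("%02X ", byte_at(x, n));
        printf("\n");
        return 0;
    }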
Let me put this in a diagram with two orthogonal directions so it's easier to see. This is how a load operation looks from the CPU's point of view.
On a little-endian machine you have:
MEMORY                                  CPU Register
LSB  BYTE2  BYTE3  MSB  ------------->  MSB
  \     \      \--------------------->  BYTE3
   \     \--------------------------->  BYTE2
    \--------------------------------->  LSB
On a big-endian machine you have:
MEMORY                                  CPU Register
    /--------------------------------->  MSB
   /     /--------------------------->  BYTE3
  /     /      /--------------------->  BYTE2
MSB  BYTE3  BYTE2  LSB  ------------->  LSB
As you can see, in both cases, you have:
CPU Register
MSB
BYTE3
BYTE2
LSB
which means that in both cases the CPU ends up loading exactly the same value.
#define MSB(x) (((x) & 0xFF000000) >> 24)

Or just #define MSB(x) ((x) >> 24) (assuming a 32-bit value is passed)? – alk

The mask is needed in MSB(x) = ((x) >> 24), otherwise code like if (MSB(x) == 0xFF) ... won't work. – japreiss
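To see japreiss's point, here is a sketch of why the mask matters when the value has a signed type (the macro names below are mine, just for illustration):

    #include <stdio.h>
    #include <stdint.h>

    #define MSB_MASKED(x)   (((x) & 0xFF000000) >> 24)
    #define MSB_UNMASKED(x) ((x) >> 24)

    int main(void)
    {
        int32_t x = (int32_t)0xFF000000;  /* a negative value whose top byte is 0xFF */

        /* Masking first keeps only the top byte, so this prints 0xFF. */
        printf("masked:   0x%X\n", (unsigned)MSB_MASKED(x));

        /* Right-shifting a negative signed value is implementation-defined; on most
           compilers it sign-extends, so this prints 0xFFFFFFFF rather than 0xFF,
           and a test like MSB(x) == 0xFF would fail. */
        printf("unmasked: 0x%X\n", (unsigned)MSB_UNMASKED(x));
        return 0;
    }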