When we say a specific architecture is either little-endian or big-endian, we are referring to whether numerical significance is stored from left to right or right to left in memory. My question is: does this ordering refer to how bits are ordered within a byte, or to how bytes are ordered in memory?
For example, consider the number 6000 = 1770h = 0001011101110000b. If both the bits in a byte and the bytes in memory are little-endian, this would be stored as
00001110 11101000 = 0E E8,
if the bits in a byte were big-endian, but the bytes in memory were little-endian, this would be stored as (for what it's worth, this happens to be how Visual Studio tells me memory is organized on the x64 architecture)
01110000 00010111 = 70 17,
if the bits were little-endian, but the bytes were big-endian, this would be stored as
11101000 00001110 = E8 0E,
and finally, if both the bits and the bytes were big-endian, this would be stored as
00010111 01110000 = 17 70
(Hopefully I did that right.)
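For concreteness, here is a minimal C sketch of the kind of experiment I have in mind (assuming a 16-bit `uint16_t` and a typical x64 toolchain): it copies the raw bytes of the value 6000 out of the object and prints them, so on my machine I would expect to see 70 followed by 17, matching the second case above.

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
    uint16_t value = 6000;             /* 0x1770 */
    unsigned char bytes[sizeof value];

    /* Copy the raw object representation so each byte can be inspected. */
    memcpy(bytes, &value, sizeof value);

    /* Print the bytes in increasing address order. */
    for (size_t i = 0; i < sizeof value; ++i)
        printf("byte %zu: %02X\n", i, bytes[i]);

    return 0;
}
```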
So then, what do the terms "little-endian" and "big-endian" actually refer to? Do they refer to the ordering of bits in a byte, the ordering of bytes in memory, or both? Furthermore, if VS tells me that, for example, 7C is 'in' a particular byte, does it mean that the bits that make up that byte in computer memory are literally 0111 1100, or does it just mean that the value stored in that byte is 7Ch = 124, which may or may not actually be represented as 01111100, depending on whether or not the underlying architecture happens to be little-endian?
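To illustrate what I mean by "the value stored in that byte", here is a minimal sketch (again just assuming a plain C toolchain): the shift and mask operators below work on the numeric value 7Ch, so this always prints 01111100, and as far as I can tell it says nothing about how the hardware physically orders the bits.

```c
#include <stdio.h>

int main(void)
{
    unsigned char b = 0x7C;

    /* Print the bits of b from most significant to least significant.
       Shifting operates on the numeric value, not on any physical
       bit layout, so this prints 01111100 regardless of the hardware. */
    for (int i = 7; i >= 0; --i)
        printf("%d", (b >> i) & 1);
    printf("\n");

    return 0;
}
```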