As we know, there are two types of endianness: big endian and little endian.
Let's say an integer takes 4 bytes; then the integer 1 is laid out in memory as 0x01 0x00 0x00 0x00 on little endian and 0x00 0x00 0x00 0x01 on big endian.
To check whether a machine is little endian or big endian, we can write code like this:
int main(void)
{
    int a = 1;
    char *p = (char *)&a;
    /* *p == 1 means little endian; otherwise, big endian */
    return 0;
}
My understanding is that *p reads the first byte in memory: 0x01 on little endian and 0x00 on big endian (the first byte of each layout above); that's how the code works.
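For reference, here is a small sketch along the same lines (not part of the original code, and assuming a 4-byte int) that prints every byte of a in memory order, which makes the two layouts visible: 01 00 00 00 on a little-endian machine versus 00 00 00 01 on a big-endian one.

#include <stdio.h>

int main(void)
{
    int a = 1;
    const unsigned char *p = (const unsigned char *)&a;  /* view the object representation byte by byte */

    for (size_t i = 0; i < sizeof a; i++)
        printf("%02x ", p[i]);   /* prints the bytes in memory order */
    printf("\n");
    return 0;
}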
Now I don't quite understand how bit fields work with different endianness.
Let's say we have a struct like this:
typedef struct {
    unsigned char a1 : 1;
    unsigned char a2 : 1;
    unsigned char a6 : 3;
} Bbit;
And we do the assignments below:
Bbit bit;
bit.a1 = 1;
bit.a2 = 1;
Is this piece of code implementation-specific? That is, will the values of bit.a1 and bit.a2 be 1 on little endian but 0 on big endian, or are they definitely 1 regardless of endianness?
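For reference, here is a minimal sketch (not from the original question) that could be run on both kinds of machine to compare: it reads bit.a1 and bit.a2 back through normal member access and also dumps the raw storage bytes, so the read-back values can be compared with where the bits actually landed. It assumes the compiler packs the three fields into a single storage unit, which is itself implementation-specific.

#include <stdio.h>

typedef struct {
    unsigned char a1 : 1;
    unsigned char a2 : 1;
    unsigned char a6 : 3;
} Bbit;

int main(void)
{
    Bbit bit = {0};
    bit.a1 = 1;
    bit.a2 = 1;

    /* Read the fields back through ordinary member access. */
    printf("a1 = %u, a2 = %u\n", (unsigned)bit.a1, (unsigned)bit.a2);

    /* Dump the underlying storage bytes to see where the bits landed. */
    const unsigned char *p = (const unsigned char *)&bit;
    for (size_t i = 0; i < sizeof bit; i++)
        printf("%02x ", p[i]);
    printf("\n");
    return 0;
}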
Comments:

"unsigned char a1 : 1;" lacks portability. (chux - Reinstate Monica)

"int" might differ from "signed int": "it is implementation-defined whether the specifier int designates the same type as signed int or the same type as unsigned int." So int a1 : 1; might encode -1,0 or 0,1. (chux - Reinstate Monica)