This is not an answer to the stated question (it has already been well answered by others), but a footnote explaining some of the terms, in the hope that it will clarify the related concepts. In particular, none of this is specific to C at all.
Endianness and byte order
When a value larger than a byte is stored or serialized into multiple bytes, the choice of the order in which the component bytes are stored is called byte order, or endianness.
Historically, there have been three byte orders in use: "big-endian", "little-endian", and "PDP-endian" or "middle-endian".
Big-endian and little-endian byte order names are derived from the way they order the bytes: big-endian puts the most significant byte (the byte that affects the logical value most) first, with successive bytes in decreasing order of significance; and little-endian puts the least significant byte first, with successive bytes in increasing order of significance.
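For instance, here is a minimal C sketch (assuming a 32-bit uint32_t) that copies the in-memory representation of a value into an array of unsigned char and prints the bytes, so you can see which byte order your machine uses:

    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>

    int main(void)
    {
        uint32_t value = 0x11223344;          /* most significant byte is 0x11 */
        unsigned char bytes[sizeof value];

        memcpy(bytes, &value, sizeof value);  /* copy the in-memory representation */

        /* A little-endian machine prints 44 33 22 11,
           a big-endian machine prints 11 22 33 44. */
        for (size_t i = 0; i < sizeof value; i++)
            printf("%02x ", bytes[i]);
        printf("\n");

        return 0;
    }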
Note that byte order may differ for integer types and floating-point types; they may even be implemented in separate hardware units. On most hardware they do have the same byte order, though.
Bit order
Bit order is a concept very similar to endianness, except that it involves individual bits rather than bytes. The two concepts are related, but not the same.
Bit order is only meaningful when bits are serialized, one after another, for example over a serial, SPI, or I2C bus.
When bits are referred to in a larger group used in parallel, as one unit, like in a byte or a word, there is no order: there is only labeling and significance. (It is because they are accessed and manipulated as a group, in parallel, rather than serially one by one, that there is no specific order. Their interpretation as a group gives each bit a different significance, and we humans can label or number them for ease of reference.)
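As an illustration of bit order during serialization, here is a small sketch with hypothetical helper functions that print the bits of a byte in the two possible orders. Real buses differ in which one they use; for example, I2C is specified most-significant-bit first, while classic UARTs shift out the least significant bit first.

    #include <stdio.h>

    /* Hypothetical helpers: the same byte can be serialized starting from
       either end. Nothing about the byte itself changes; only the order in
       which its bits appear on the wire. */
    static void send_msb_first(unsigned char byte)
    {
        for (int i = 7; i >= 0; i--)
            putchar('0' + ((byte >> i) & 1));
        putchar('\n');
    }

    static void send_lsb_first(unsigned char byte)
    {
        for (int i = 0; i <= 7; i++)
            putchar('0' + ((byte >> i) & 1));
        putchar('\n');
    }

    int main(void)
    {
        send_msb_first(0xB4);  /* prints 10110100 */
        send_lsb_first(0xB4);  /* prints 00101101 */
        return 0;
    }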
Bit significance
When a group of bits are treated as a binary value, there is a least significant bit, and a most significant bit. These names are derived from the fact that if you change the least significant bit, the value of the bit group changes by the smallest amount possible; if you change the most significant bit, the value of the bit group changes by the largest amount possible (by a single bit change).
Let's say you have a group of five bits, say a, b, c, d, and e, that form a five-bit unsigned integer value. If a is the most significant, and e the least significant, and the three others are in order of decreasing significance, the unsigned integer value is
value = a·2⁴ + b·2³ + c·2² + d·2¹ + e·2⁰
i.e.
value = 16a + 8b + 4c + 2d + e
In other words, bit significance is derived from the mathematical (or logical) interpretation of a group of bits, and is completely separate from the order in which the bits might be serialized on some bus, and also from any human-assigned labels or numbers.
This is true for all bit groups that logically construct numerical values, even for floating-point numbers.
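To make the five-bit example above concrete, here is a short C sketch that computes the value of bits a to e, once as the weighted sum and once with shifts and ORs; both give the same result:

    #include <stdio.h>

    int main(void)
    {
        /* Five individual bits; a is the most significant, e the least. */
        unsigned a = 1, b = 0, c = 1, d = 1, e = 0;   /* 10110 in binary */

        unsigned value = 16*a + 8*b + 4*c + 2*d + e;                     /* weighted sum    */
        unsigned same  = (a << 4) | (b << 3) | (c << 2) | (d << 1) | e;  /* shifts and ORs  */

        printf("%u %u\n", value, same);   /* both print 22 */
        return 0;
    }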
Bit labels or bit numbering
For ease of reference, in documentation for example, it is often useful to label the individual bits. This is essentially arbitrary; indeed, I used letters a to e in the example above. More often, numbers are easier than letters: it's not that easy to label more than 26 bits with single letters.
There are two approaches to labeling bits with numbers.
The most common one currently is to label the bits according to their significance, with bit 0 referring to the least significant bit. This is useful, because bit i then has logical value 2ⁱ.
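For example, with this labeling you set or test bit i simply by shifting by i:

    #include <stdio.h>

    int main(void)
    {
        unsigned value = 0;

        value |= 1u << 3;                  /* set bit 3: adds 2^3 = 8 */
        value |= 1u << 0;                  /* set bit 0: adds 2^0 = 1 */

        printf("%u\n", value);             /* prints 9 */
        printf("%u\n", (value >> 3) & 1u); /* test bit 3: prints 1 */
        return 0;
    }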
In the documentation of certain architectures, like IBM's POWER documentation, the most significant bit is labeled 0, with bit numbers increasing in decreasing order of significance. In this case, the logical value of a bit depends on the number of bits in the unit. If a unit has N bits, then bit i has a logical value of 2ᴺ⁻ⁱ⁻¹.
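Here is a small sketch, using a hypothetical helper, of how the value of a bit number under this labeling depends on the width of the unit it belongs to:

    #include <stdio.h>

    /* Illustrative only: the value of bit i under MSB-0 labeling (as in IBM's
       POWER documentation) depends on the width N of the unit. */
    static unsigned long msb0_bit_value(unsigned i, unsigned nbits)
    {
        return 1ul << (nbits - i - 1u);
    }

    int main(void)
    {
        printf("%lu\n", msb0_bit_value(0, 8));   /* bit 0 of a byte:         2^7  = 128        */
        printf("%lu\n", msb0_bit_value(0, 32));  /* bit 0 of a 32-bit word:  2^31 = 2147483648 */
        printf("%lu\n", msb0_bit_value(31, 32)); /* bit 31 of a 32-bit word: 2^0  = 1          */
        return 0;
    }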
While this ordering may feel weird, these architectures are all big-endian, and it might be useful for humans to just remember/assume that most significant comes first on these systems.
Remember, however, that this is a completely arbitrary decision, and in both cases the documentation could be written with the other bit labeling scheme, without any effect on the real-world performance of the systems. It is like choosing whether to write from left to right, or from right to left (or top-down, for that matter): the contents are unaffected, as long as you know and understand the convention.
While there is some correlation between byte order and bit labeling, all four of the above concepts are separate. The correlation exists only in the sense that the documentation for a lot of big-endian hardware uses bit labeling where the most significant bit is bit zero, but that is purely a choice made by humans.
In C, the order in which the compiler packs bitfields in a struct varies between compilers and architectures; it is not specified by the C standard at all. Because of this, it is usually a bad idea to read binary files into a struct type with bitfields. (Even if it works on some specific machine and compiler, there is no guarantee it works on others; often, it does not. It definitely makes the code less portable.) Instead, read into a buffer, an array of unsigned char, and use helper accessor functions to extract the bit fields from the array using bit shifts (<<, >>), binary ORs (|), and masking (binary AND, &).
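For example, here is a minimal sketch, assuming a hypothetical one-byte record with a 3-bit type field in the most significant bits, then a 1-bit flag, then a 4-bit length field in the least significant bits:

    #include <stdio.h>

    /* Accessor functions for a hypothetical one-byte record layout:
       bits 7..5 = type, bit 4 = flag, bits 3..0 = length.
       The shifts and masks spell out the layout explicitly, instead of
       relying on how a particular compiler would pack a bitfield struct. */
    static unsigned get_type(const unsigned char *buf)   { return (buf[0] >> 5) & 0x07; }
    static unsigned get_flag(const unsigned char *buf)   { return (buf[0] >> 4) & 0x01; }
    static unsigned get_length(const unsigned char *buf) { return  buf[0]       & 0x0F; }

    int main(void)
    {
        unsigned char buffer[1] = { 0xB7 };  /* 1011 0111: type=5, flag=1, length=7 */

        printf("type=%u flag=%u length=%u\n",
               get_type(buffer), get_flag(buffer), get_length(buffer));
        return 0;
    }

Because the accessors make the shifts and masks explicit, the result is the same on every machine and compiler, regardless of how that compiler would have packed an equivalent bitfield struct.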