33
votes

I'm writing a userspace filesystem driver on Windows, and endianness conversions are something I've been dealing with, as this particular filesystem always stores values in little-endian format and the driver is expected to convert them (if necessary) for the CPU it's running on. However, I find myself wondering if I even need to worry about endianness conversions, since as far as I can tell, desktop Windows only supports little-endian architectures (IA-32, x86-64, etc.), and therefore the on-disk little-endian values are perfectly fine sans conversion. Is this observation accurate, and if so, is it generally acceptable to make the assumption that Windows will always be running on little-endian hardware? Additionally, is it even possible (in 2011) to run Windows on a big-endian emulator or something, such that one could even test for endianness issues?

Edit: For additional clarity, the way my code currently works, I do an endianness check at startup time, and then every time I load a value off the disk, I run it through an inline function that uses an intrinsic to change endianness if the architecture is big-endian. The problem is, I don't know if I might have missed one or more of those places where I needed to do a conversion, and the easiest way to see if I screwed up is to run the program on a big-endian architecture. So I'm interested in knowing (a) if it's even necessary to do these checks, since Windows doesn't ordinarily run on big-endian platforms (today anyway), and (b) how I could possibly test my code, seeing as I can't think of a way to run Windows on a big-endian architecture, and manually reversing all the multibyte values on disk is a manual process that I might well screw up.
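Roughly, the current approach looks like this (a simplified sketch, not my exact code; the real helpers cover other widths too, and the exact intrinsic may differ):

    #include <stdint.h>
    #include <stdlib.h>   // MSVC's _byteswap_ulong intrinsic

    // Set once at startup by checking how the host stores a known value.
    static bool g_host_is_big_endian;

    static void detect_endianness()
    {
        const uint32_t probe = 1;
        // Little-endian hosts store the low-order byte first.
        g_host_is_big_endian = (*(const unsigned char *)&probe != 1);
    }

    // Every 32-bit value read from the (always little-endian) on-disk format
    // goes through a helper like this one.
    static inline uint32_t le32_to_host(uint32_t v)
    {
        return g_host_is_big_endian ? _byteswap_ulong(v) : v;
    }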

2
Endianness checks are (or should be) done at compile time, never at runtime. The endianness of an architecture is intrinsic to the machine code generated; it's a waste to check at runtime something that was fixed at compile time. – Chris Becke
@Chris Becke Good idea. I switched over to using <boost/detail/endian.hpp>, which has a preprocessor-definition-based way of determining the byte order. This let me use conditional compilation to make the conversion functions one-liners rather than checking the endianness each time, which I believe should make the functions disappear entirely when optimized on little-endian platforms (see the sketch after these comments). – jgottula
@ChrisBecke So is there any way to test the endianness at compile time without using Boost? – Niklas R
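For illustration, a minimal sketch of the compile-time approach discussed in these comments, assuming the BOOST_LITTLE_ENDIAN/BOOST_BIG_ENDIAN macros from <boost/detail/endian.hpp>; the helper names are made up, and a big-endian build is assumed to be GCC/Clang rather than MSVC:

    #include <stdint.h>
    #include <boost/detail/endian.hpp>  // defines BOOST_LITTLE_ENDIAN or BOOST_BIG_ENDIAN

    // The on-disk format is always little-endian, so on little-endian hosts
    // these helpers are no-ops that the optimizer removes entirely.
    #if defined(BOOST_LITTLE_ENDIAN)
    static inline uint16_t le16_to_host(uint16_t v) { return v; }
    static inline uint32_t le32_to_host(uint32_t v) { return v; }
    static inline uint64_t le64_to_host(uint64_t v) { return v; }
    #elif defined(BOOST_BIG_ENDIAN)
    // A big-endian build won't be using MSVC, so plain shifts / GCC builtins here.
    static inline uint16_t le16_to_host(uint16_t v) { return (uint16_t)((v << 8) | (v >> 8)); }
    static inline uint32_t le32_to_host(uint32_t v) { return __builtin_bswap32(v); }
    static inline uint64_t le64_to_host(uint64_t v) { return __builtin_bswap64(v); }
    #else
    #error "Unable to determine byte order"
    #endif

Without Boost, the same structure can be keyed off compiler-provided macros instead (for example GCC's and Clang's predefined __BYTE_ORDER__).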

2 Answers

9
votes

All versions of Windows that you'll see are little-endian, yes. The NT kernel itself does run on a big-endian architecture even today, but not in any form you're likely to encounter on the desktop.

6
votes

Edit after changed question:

A) No, it is not necessary to check endianness if your sole target is Windows on x86 or x64. I wouldn't even spend the time checking the endianness in that case.

B) If you want to check that your code handles both byte orders, I recommend splitting it into libraries that are themselves cross-platform compilable. Then compile and run the code on your favorite Linux flavor that supports big-endian and see if it works. I have yet to hear of any compiler or tool that can detect endianness issues automatically.
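One way to make that concrete is a tiny self-test with a hard-coded little-endian fixture (a sketch; load_le32 here is just a stand-in for whatever decoding helper the library actually exposes). On an x86 box a missing byte swap is invisible, but the same test run against the real decoder on a big-endian Linux machine or emulator fails immediately:

    #include <assert.h>
    #include <stdint.h>

    // Stand-in for the driver's real decoder; in a real test you would link
    // against the library's own routine so the test exercises the real code path.
    static uint32_t load_le32(const unsigned char *p)
    {
        return (uint32_t)p[0] | ((uint32_t)p[1] << 8) |
               ((uint32_t)p[2] << 16) | ((uint32_t)p[3] << 24);
    }

    int main()
    {
        // 0x12345678 exactly as it appears on disk (little-endian byte order).
        const unsigned char fixture[4] = { 0x78, 0x56, 0x34, 0x12 };

        // With the library's real decoder linked in, a forgotten byte swap
        // shows up as an assertion failure when this runs on big-endian hardware.
        assert(load_le32(fixture) == 0x12345678u);
        return 0;
    }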

Original response:

As far as I'm aware there are no desktop or server versions of Windows that support big-endian. Itanium processors (which I believe were always called IA-64, not IA-32, but I could be wrong) have the ability to run in big-endian mode, but Windows doesn't support it.

That isn't to say Windows 8 will necessarily be little-endian only, since Windows 8 is also targeting ARM processors, which are capable of running in either byte order.

If for some reason you do find yourself on Windows (#ifdef _WIN32) on a big-endian machine, simply reverse the multibyte fields of your data structures when you load them from disk, and always save in little-endian format, which is far more common.
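For example (a sketch with a made-up header layout, not any particular filesystem's), converting the multibyte fields in place right after reading the structure from disk:

    #include <stdint.h>
    #if !defined(_WIN32)
    #include <endian.h>   // le16toh/le32toh/le64toh on glibc; BSDs use <sys/endian.h>
    #endif

    // Illustrative on-disk header; a real filesystem's layout will differ.
    struct fs_header {
        uint32_t magic;
        uint16_t version;
        uint64_t block_count;
    };

    // Convert the little-endian on-disk fields to host order in place.
    static void fs_header_to_host(struct fs_header *h)
    {
    #if defined(_WIN32)
        (void)h;   // Windows targets are little-endian, so nothing to do.
    #else
        h->magic       = le32toh(h->magic);        // no-ops on little-endian Linux,
        h->version     = le16toh(h->version);      // byte swaps on big-endian
        h->block_count = le64toh(h->block_count);
    #endif
    }

Saving is the mirror image (htole32 and friends), so the on-disk format stays little-endian no matter what the host is.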