So can someone explain what should be used when doing network IO using Java NIO ByteBuffers, when sending bytes across the wire from one machine to another?

The byte order has to match what the other end expects. If the protocol doesn't dictate one, I would leave the default (big endian, which is also the conventional network byte order), because any performance gain from switching is unlikely to matter, even on a 1 Gb network.
If you want to use the machine's native endianness, call order(...) on the buffer instance with ByteOrder.nativeOrder() (note that nativeOrder() is a static method on ByteOrder, not ByteBuffer):

    bb.order(ByteOrder.nativeOrder());
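For context, the byte order only changes how multi-byte values are laid out in the buffer; the bytes themselves travel over the wire unchanged. Here is a minimal sketch (my own example, not from the original post) that makes the layout visible:

    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;

    public class ByteOrderDemo {
        public static void main(String[] args) {
            for (ByteOrder order : new ByteOrder[] { ByteOrder.BIG_ENDIAN, ByteOrder.LITTLE_ENDIAN }) {
                ByteBuffer bb = ByteBuffer.allocate(4).order(order);
                bb.putInt(0x12345678);
                bb.flip();
                // BIG_ENDIAN prints 12 34 56 78; LITTLE_ENDIAN prints 78 56 34 12.
                System.out.printf("%s:", order);
                while (bb.hasRemaining())
                    System.out.printf(" %02x", bb.get());
                System.out.println();
            }
        }
    }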
Here is a comparison of how long writing and reading ints takes with each byte order:
    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;

    public class EndianBenchmark {
        public static void main(String... args) {
            ByteBuffer bb = ByteBuffer.allocateDirect(1024 * 1024);
            // Repeat so the JIT has warmed up by the later iterations.
            for (int i = 0; i < 5; i++) {
                testEndian(bb.order(ByteOrder.LITTLE_ENDIAN));
                testEndian(bb.order(ByteOrder.BIG_ENDIAN));
            }
        }

        public static void testEndian(ByteBuffer bb) {
            long start = System.nanoTime();
            int runs = 100;
            for (int i = 0; i < runs; i++) {
                bb.clear();
                // Fill the buffer with sequential ints.
                int count = 0;
                while (bb.remaining() > 3)
                    bb.putInt(count++);
                bb.flip();
                // Read them back and verify.
                int count2 = 0;
                while (bb.remaining() > 3)
                    if (count2++ != bb.getInt())
                        throw new AssertionError();
            }
            long time = System.nanoTime() - start;
            System.out.printf("%s took %,d μs to write/read %,d ints%n",
                    bb.order(), time / 1000 / runs, bb.capacity() / 4);
        }
    }
prints
LITTLE_ENDIAN took 1,357 μs to write/read 262,144 ints
BIG_ENDIAN took 1,484 μs to write/read 262,144 ints
LITTLE_ENDIAN took 867 μs to write/read 262,144 ints
BIG_ENDIAN took 880 μs to write/read 262,144 ints
LITTLE_ENDIAN took 860 μs to write/read 262,144 ints
BIG_ENDIAN took 881 μs to write/read 262,144 ints
LITTLE_ENDIAN took 853 μs to write/read 262,144 ints
BIG_ENDIAN took 879 μs to write/read 262,144 ints
LITTLE_ENDIAN took 858 μs to write/read 262,144 ints
BIG_ENDIAN took 871 μs to write/read 262,144 ints
The first iteration is slower because the JIT hasn't warmed up yet. In the steady state, using little endian saved around 20 μs (0.02 ms) per MB of data. For comparison, sending 1 MB over a 1 Gb/s link takes about 9,000 μs (1 MB is roughly 8.4 million bits, so about 8,400 μs at line rate, plus framing overhead), so the endianness saving is around 0.2% of the transfer time.