2 votes

I have a problem concerning the endianness of my data types. I have to send data over Ethernet using TCP/IP. The byte order needs to be big-endian when the data is sent, and the data is in big-endian order when received. Therefore I try to reverse all my data before sending it, using this class:

class ReverseBinaryReader : BinaryReader
{
    private byte[] a16 = new byte[2];
    private byte[] ua16 = new byte[2];
    private byte[] a32 = new byte[4];
    private byte[] a64 = new byte[8];
    private byte[] reverse8 = new byte[8];
    public ReverseBinaryReader(System.IO.Stream stream) : base(stream) { }

    public override int ReadInt32()
    {
        a32 = base.ReadBytes(4);
        Array.Reverse(a32);
        return BitConverter.ToInt32(a32, 0);
    }

    public override Int16 ReadInt16()
    {
        a16 = base.ReadBytes(2);
        Array.Reverse(a16);
        return BitConverter.ToInt16(a16, 0);
    }

    [ . . . ] // All other types are converted accordingly.

}

This works fine until I assign the converted values like this:

ReverseBinaryReader binReader = new ReverseBinaryReader(new MemoryStream(content));

this.Size = binReader.ReadInt16();  // public short Size

For example, if I want to save the bytes 0x00, 0x02 as big-endian, I would expect 0x0200 in memory; however, the short value of Size becomes 0x0002. Why is that?
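To illustrate, here is a minimal sketch of what I observe (assuming a little-endian machine, which is typical for x86/x64):

byte[] raw = { 0x00, 0x02 };             // bytes in wire order (big-endian 0x0002)
byte[] rev = { 0x02, 0x00 };             // the same bytes after Array.Reverse
short a = BitConverter.ToInt16(raw, 0);  // 0x0200 on a little-endian machine
short b = BitConverter.ToInt16(rev, 0);  // 0x0002 -- the value Size receives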

Any ideas? Thanks, Peer

// Edit 2:

To clear the issue up a little I'll try to show an example:

public class Message {
    public short Size;
    public byte[] Content;

    public Message(byte[] rawData)
    {
        ReverseBinaryReader binReader = new ReverseBinaryReader(new MemoryStream(rawData));

        this.Size = binReader.ReadInt16();  // public short Size
        this.Content = binReader.ReadBytes(2); // These are not converted and work just fine.
    }
}

public class prog {
     public static void Main()
     {
          TCPClient aClient = new TCPClient("127.0.0.1",999); // Async socket
          aClient.Send(new Message(new byte[] { 0x00, 0x02 }));
     }
}
It's very unclear whether the problem is with what data is being sent, or how it's being received. Please show a short but complete program demonstrating the problem. – Jon Skeet
I tried to clear it up a little, did it help? :P – peer
Not a lot - there's still no short but complete program demonstrating the problem... – Jon Skeet
Using your aMsg example, what values does a16 have, and what value is returned by return BitConverter.ToInt16(a16, 0);? – Trisped
a16 has the values 0x02, 0x00, which is correct. The return value is 2 (0x0002). – peer

2 Answers

1 vote

Edit:

Moved this to the top, since it is the actual answer.

I had to stare at it for far too long, but here is the source of your error.

  • Your initial array is { 0x00, 0x02 }. Interpreted as LE that is 0x0200; as BE it is 0x0002.
  • You send it across the network and read the first two bytes, still { 0x00, 0x02 }.
  • You reverse it, giving you { 0x02, 0x00 }: LE is 0x0002, BE is 0x0200.
  • Your architecture is little-endian, so you get the "correct" result of 0x0002.

Ergo, the error is in your test array. You can verify that your architecture is little-endian by checking the IsLittleEndian property on BitConverter.
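A minimal sketch of that walkthrough (assuming a little-endian machine; the full program below makes the same point):

byte[] wire = { 0x00, 0x02 };               // your test array: BE 0x0002, LE 0x0200
Array.Reverse(wire);                        // what ReverseBinaryReader does: { 0x02, 0x00 }
short size = BitConverter.ToInt16(wire, 0); // 0x0002 on a little-endian machine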

=====================================================

The following code is meant to help clarify more than to answer your direct question, and to show that BitConverter is predictable on all architectures.

using System;

namespace BitConverterTest
{
    class Program
    {
        static void Main(string[] args)
        {
            UInt32 u32 = 0x01020304;

            byte[] u32ArrayLittle = {0x04, 0x03, 0x02, 0x01};
            byte[] u32ArrayBig = {0x01, 0x02, 0x03, 0x04};

            if (BitConverter.IsLittleEndian)
            {
                if (BitConverter.ToUInt32(u32ArrayLittle, 0) != u32)
                {
                    throw new Exception("Failed to convert the Little endian bytes");
                }
                Array.Reverse(u32ArrayBig); // convert the bytes to LittleEndian since that is our architecture.
                if (BitConverter.ToUInt32(u32ArrayBig, 0) != u32)
                {
                    throw new Exception("Failed to convert the Big endian bytes");
                }
            } else
            {
                Array.Reverse(u32ArrayLittle); // we are on a big endian platform
                if (BitConverter.ToUInt32(u32ArrayLittle, 0) != u32)
                {
                    throw new Exception("Failed to convert the Little endian bytes");
                }
                if (BitConverter.ToUInt32(u32ArrayBig, 0) != u32)
                {
                    throw new Exception("Failed to convert the Big endian bytes");
                }
            }
        }
    }
}
0 votes

Please try the following:

public override Int16 ReadInt16()
{
    a16 = base.ReadBytes(2);
    // Treat the first byte as the least significant byte.
    return (Int16)((a16[1] << 8) | a16[0]);
}

public override int ReadInt32()
{
    a32 = base.ReadBytes(4);
    return (a32[3] << 24) | (a32[2] << 16) | (a32[1] << 8) | a32[0];
}

As for why this works: if you expect 0x00, 0x02 to be 0x0200, then you are using little-endian order (since the least significant byte comes first).
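For instance, with the bytes from the question (a sketch using the corrected overrides above):

byte[] a16 = { 0x00, 0x02 };                   // first byte is least significant
short value = (short)((a16[1] << 8) | a16[0]); // 0x0200, as expected in the question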

Since BitConverter is system-dependent and your implementation is not, I would suggest staying away from it where reasonable.