13
votes

I need to convert bytes in two's complement format to positive integer bytes: the range -128 to 127 mapped to 0 to 255.

Examples: -128 (10000000) -> 0, 127 (01111111) -> 255, etc.

EDIT To clear up the confusion: the input byte is (of course) an unsigned integer in the range 0 to 255, but it represents a signed integer in the range -128 to 127 in two's complement format. For example, the input byte value 128 (binary 10000000) actually represents -128.

EXTRA EDIT Alright, let's say we have the following byte stream: 0, 255, 254, 1, 127. In two's complement format this represents 0, -1, -2, 1, 127, which I need mapped onto the 0 to 255 range. For more info, check out this hard-to-find article: Two's complement

10
byte is not signed, what are you trying to do? – leppie
I am still not completely sure what is attempted here. Either it is the way all the answers give you, or you are understanding 2's complement representation incorrectly. – leppie

10 Answers

8
votes

From your sample input you simply want:

sbyte something = -128;

byte foo = (byte)(something + 128);
7
votes
new = old + 128;

bingo :-)
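Applied to the byte stream from the question, that might look like the following sketch (my own illustration; the raw bytes first have to be reinterpreted as sbyte, since "new = old + 128" assumes a signed value):

byte[] stream = { 0, 255, 254, 1, 127 };   // two's complement for 0, -1, -2, 1, 127
byte[] shifted = new byte[stream.Length];

for (int i = 0; i < stream.Length; i++)
{
    sbyte old = (sbyte)stream[i];          // reinterpret the raw byte as signed
    shifted[i] = (byte)(old + 128);        // new = old + 128
}
// shifted is now { 128, 127, 126, 129, 255 }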

3
votes

Try

sbyte signed = (sbyte)input;

or

int signed = input >= 0x80 ? input | ~0xFF : input; // sign-extend only when the top bit is set
2
votes
    public static int MakeHexSigned(int value)
    {
        if (value > 255 / 2)
        {
            value = -1 * (255 + 1) + value;
        }

        return value;
    }
2
votes

I believe that two's complement bytes would be best handled with the following. Maybe not elegant or short, but clear and obvious. I would put it as a static method in one of my util classes.

public static sbyte ConvertTo2Complement(byte b)
{
    if(b < 128)
    {
        return Convert.ToSByte(b);
    }
    else
    {
        int x = Convert.ToInt32(b);
        return Convert.ToSByte(x - 256);
    }
}
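For example, fed the byte values from the question, it hands back the signed values (a quick sketch of how you might call it):

byte[] raw = { 0, 255, 254, 1, 127 };
foreach (byte b in raw)
{
    Console.WriteLine(ConvertTo2Complement(b)); // prints 0, -1, -2, 1, 127
}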
1
votes

If I understood correctly, your problem is how to convert the input, which is really a signed byte (sbyte) but is stored in an unsigned integer, and then also avoid negative values by converting them to zero.

To be clear, when you use a signed type (like sbyte) the framework uses two's complement behind the scenes, so just by casting to the right type you will be using two's complement.

Then, once you have that conversion done, you could clamp the negative values with a simple if or a conditional ternary operator (?:).

The functions presented below will return 0 for values from 128 to 255 (or from -128 to -1), and the same value for values from 0 to 127.

So, if you must use unsigned integers as input and output you could use something like this:

private static uint ConvertSByteToByte(uint input)
{
    sbyte properDataType = (sbyte)input; //128..255 will be taken as -128..-1
    if (properDataType < 0) { return 0; } //when negative just return 0
    if (input > 255) { return 0; } //just in case as uint can be greater than 255
    return input;
}

Or, IMHO, you could change your input and output to the data types best suited to them (sbyte and byte):

private static byte ConvertSByteToByte(sbyte input)
{
    return input < 0 ? (byte)0 : (byte)input;
}
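For instance, running the first overload over the byte stream from the question (a quick sketch of mine) prints 0, 0, 0, 1, 127, with the negatives clamped to zero:

uint[] stream = { 0, 255, 254, 1, 127 };          // two's complement for 0, -1, -2, 1, 127
foreach (uint value in stream)
{
    Console.WriteLine(ConvertSByteToByte(value)); // prints 0, 0, 0, 1, 127
}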
1
votes
#include <stdint.h>

int8_t indata;                /* -128, -127, ... -1, 0, 1, ... 127 */
uint8_t byte = indata ^ 0x80; /* flipping the sign bit adds 128 mod 256 */

XOR the MSB, that's all.
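The same trick in C#, for completeness (a sketch of mine, not part of the original answer; the method name FlipSignBit is made up):

// Assumption: the input is a raw byte whose bit pattern is two's complement.
// Flipping the most significant bit maps -128..127 onto 0..255 in one step.
static byte FlipSignBit(byte raw) => (byte)(raw ^ 0x80);

// FlipSignBit(0x80) == 0   (was -128)
// FlipSignBit(0x7F) == 255 (was 127)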

1
votes

Here is my solution to this problem, for numbers bigger than 8 bits. My example is for a 16-bit value. Note: you will have to check the first bit to see whether the value is negative or not.

Steps:

  1. Convert the number to its complement by placing '~' before the variable (i.e. y = ~y).

  2. Convert the numbers to binary strings.

  3. Break the binary strings into character arrays.

  4. Starting with the rightmost bit, add 1, keeping track of carries. Store the result in a character array.

  5. Convert the character array back to a string.

    private string TwosComplimentMath(string value1, string value2)
    {
        char[] binary1 = value1.ToCharArray();
        char[] binary2 = value2.ToCharArray();
        bool carry = false;
        char[] calcResult = new char[16];
    
        for (int i = 15; i >= 0; i--)
        {
            if (binary1[i] == binary2[i])      // both bits equal: the sum bit is decided by the carry alone
            {
                if (binary1[i] == '1')         // 1 + 1: always produces a carry out
                {
                    if (carry)
                    {
                        calcResult[i] = '1';
                        carry = true;
                    }
                    else
                    {
                        calcResult[i] = '0';
                        carry = true;
                    }
                }
                else
                {
                    if (carry)
                    {
                        calcResult[i] = '1';
                        carry = false;
                    }
                    else
                    {
                        calcResult[i] = '0';
                        carry = false;
                    }
                }
            }
            else                               // bits differ (1 + 0): the sum bit is the inverse of the carry
            {
                if (carry)
                {
                    calcResult[i] = '0';
                    carry = true;
                }
                else
                {
                    calcResult[i] = '1';
                    carry = false;
                }
            }
    
        }
    
        string result = new string(calcResult);
        return result;
    
    }
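A possible usage sketch, tying the numbered steps to the method (the 0xFF38 example value and the wiring are mine, not from the original answer, and the call assumes you are inside the same class since the method is private):

    // Negate the raw 16-bit word 0xFF38 (-200 in two's complement) by inverting the
    // bits (step 1) and then adding 1 with TwosComplimentMath (steps 2 to 5).
    ushort raw = 0xFF38;
    ushort inverted = (ushort)~raw;                               // one's complement: 0x00C7
    string bits = Convert.ToString(inverted, 2).PadLeft(16, '0'); // "0000000011000111"
    string one = "0000000000000001";
    string sum = TwosComplimentMath(bits, one);                   // "0000000011001000"
    Console.WriteLine(Convert.ToUInt16(sum, 2));                  // 200, i.e. the magnitude of -200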
    
1
votes

The thing is, the OP's problem isn't really two's complement conversion. He's adding a bias to a set of values to adjust the range from -128..127 to 0..255.

To actually do a two's complement conversion, you just cast the signed value to the unsigned type, like this:

sbyte test1 = -1;
byte test2 = (byte)test1;

-1 becomes 255. -128 becomes 128. This doesn't sound like what the OP wants, though. He just wants to slide an array up so that the lowest signed value (-128) becomes the lowest unsigned value (0).

To add a bias, you just do integer addition:

newValue = signedValue + 128;
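Side by side, with a few sample values (a small sketch of my own), you can see the difference between the plain cast and the bias:

sbyte[] samples = { -1, -128, 0, 127 };
foreach (sbyte s in samples)
{
    byte cast = (byte)s;            // plain two's complement reinterpretation
    byte biased = (byte)(s + 128);  // the bias the OP actually wants
    Console.WriteLine($"{s,4} -> cast: {cast,3}  biased: {biased,3}");
}
// cast:   255, 128,   0, 127
// biased: 127,   0, 128, 255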
0
votes

You could be describing something as simple as adding a bias to your number (in this case, adding 128 to the signed number).