1 vote

I'm working on a homework assignment to print the big-endian and little-endian representations of an int and a float. I'm having trouble converting to little endian.

Here's my code:

void convertLitteE(string input)
{
    int theInt;
    stringstream stream(input);
    while(stream >> theInt)
    {
        float f = (float)theInt;

        printf("\n%d\n",theInt);
        printf("int:   0x");
        printLittle((char *) &theInt, sizeof(theInt));

        printf("\nfloat: 0x");
        printLittle((char *) &f, sizeof(f));
        printf("\n\n");
    }
}

void printLittle(char *p, int nBytes)
{
    for (int i = 0; i < nBytes; i++, p++)
    {
        printf("%02X", *p);
    }
}

When the input is 12, I get what I would expect.

output:
int:   0x0C000000
float: 0x00004041

But when the input is 1234, I get this instead.

output:
int:   0xFFFFFFD2040000
float: 0x0040FFFFFF9A44

But I would expect:

int:   0xD2040000
float: 0x00409A44

When I step through the for loop, I can see what appears to be a garbage value, and then it prints all the F's, but I don't know why. I've tried this so many different ways, but I can't get it to work.

Any help would be greatly appreciated.

I'm using a string because we have to be able to enter multiple numbers at once, so I just use getline and then read the number values from the string. – anthony hall
What language is this? C++? You need to tag your question with the language in use. – jwodder
@Jeff It looks like convertLitteE is supposed to convert from a decimal string to little-endian hexadecimal output. No fundamental issue with that part. – aschepler
Your approach seems to be to print the bytes of the internal representation of the int, one by one. That is not guaranteed to give you a little-endian integer representation; it will give you whatever representation your system uses for integers. – jogojapan
@jogojapan True, but for this homework assignment I'm assuming the system is big endian. – anthony hall

1 Answer

2 votes

Apparently, on your system char is a signed 8-bit type. Using unsigned 8-bit bytes, the 4-byte little-endian representation of 1234 would be 0xd2, 0x04, 0x00, 0x00. But when 0xd2 is interpreted as a signed char, as it is on most systems, it becomes -0x2e.

Then the call to printf promotes that char to an int with value -0x2e, and printf (which is not type-safe) reads an unsigned int where you passed that int, since %X expects an unsigned int. This is undefined behavior, but on most systems it behaves like a static_cast, so you get the value 0xFFFFFFD2 when trying to print the first byte.
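
For example, a minimal standalone sketch (not from the original post; it assumes char is signed on the machine running it) reproduces the effect:

#include <cstdio>

int main()
{
    char c = static_cast<char>(0xD2);   // on a signed-char system this holds -0x2E
    printf("%02X\n", c);                // c is promoted to int (-0x2E); %X reads it as
                                        // unsigned, typically printing FFFFFFD2
    return 0;
}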

If you stick to using unsigned char instead of char in these functions, you can avoid this particular problem.
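
A minimal sketch of that fix, keeping the original names (the cast at the call site changes to unsigned char * as well; the output shown assumes a little-endian machine):

#include <cstdio>

void printLittle(unsigned char *p, int nBytes)
{
    for (int i = 0; i < nBytes; i++, p++)
    {
        printf("%02X", *p);   // *p promotes to a non-negative int, so no FF padding
    }
}

int main()
{
    int theInt = 1234;
    float f = 1234.0f;

    printf("int:   0x");
    printLittle((unsigned char *) &theInt, sizeof(theInt));
    printf("\nfloat: 0x");
    printLittle((unsigned char *) &f, sizeof(f));
    printf("\n");
    return 0;
}

On a little-endian x86 machine this prints int:   0xD2040000 and float: 0x00409A44, which is the output the question expects.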

(But as @jogojapan pointed out, this entire approach is not portable at all.)