
I'd like to store two float values in a single 32-bit float variable. The encoding will happen in C# while the decoding is to be done in an HLSL shader.

The best solution I've found so far is hard-wiring the position of the decimal point in the encoded values and storing them as the integer and fractional parts of the "carrier" float:

123.456 -> 12.3 and 45.6

It can't handle negative values but that's ok.
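
In code, the scheme looks something like this (a sketch only; it assumes both values fit into 0..99.9 with one decimal digit kept, and the names are just placeholders):

// Sketch of the hard-wired decimal offset scheme:
// "a" becomes the integer part of the carrier, "b" the fractional part.
float PackNaive(float a, float b)
{
    // 12.3 and 45.6 -> 123 + 0.456 = 123.456
    return Mathf.Floor(a * 10f) + Mathf.Floor(b * 10f) / 1000f;
}

Vector2 UnpackNaive(float carrier)
{
    float integerPart = Mathf.Floor(carrier);
    // 123.456 -> 12.3 and 45.6
    return new Vector2(integerPart / 10f, (carrier - integerPart) * 100f);
}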

However, I was wondering if there is a better way to do this.

EDIT: A few more details about the task:

I'm working with a fixed data structure in Unity where the vertex data is stored as floats (float2 for a UV, float3 for the normal, and so on). Apparently there is no way to properly add extra data, so I have to work within these limits; that's why I figured it came down to a more general issue of encoding data. For example, I could sacrifice the secondary UV data to carry the 2x2 extra data channels.

The target is shader model 3.0, but I wouldn't mind if the decoding worked reasonably on SM2.0 too.

Data loss is fine as long as it's "reasonable". The expected value range is 0..64, but now that I think of it, 0..1 would be fine too since that is cheap to remap to any range inside the shader. The important thing is to keep precision as high as possible. Negative values are not important.

@ZoltanE: Do you know the ranges of the variables you are trying to encode? – Andrew Russell
@ZoltanE: Also, what shader model and version of DirectX are you targeting? Can you give more details about why you are encoding like this? Finally, consider asking over on the gamedev site; you might get better answers. – Andrew Russell

1 Answer


Following Gnietschow's recommendation, I adapted YellPika's algorithm. (It's C# for Unity 3D.)

float Pack(Vector2 input, int precision)
{
    Vector2 output = input;
    // Quantize both 0..1 inputs to (precision - 1) discrete steps.
    output.x = Mathf.Floor(output.x * (precision - 1));
    output.y = Mathf.Floor(output.y * (precision - 1));

    // Store x in the "high digits" and y in the "low digits" of the result.
    return (output.x * precision) + output.y;
}

Vector2 Unpack(float input, int precision)
{
    Vector2 output = Vector2.zero;

    // Split the packed value back into its low and high "digits"...
    output.y = input % precision;
    output.x = Mathf.Floor(input / precision);

    // ...and rescale both into the 0..1 range.
    return output / (precision - 1);
}
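
On the shader side this translates almost one-to-one to HLSL: fmod() takes the place of the % operator and floor() replaces Mathf.Floor, and both intrinsics are available from shader model 2.0 up.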

Quick and dirty testing produced the following stats (1 million random value pairs in the 0..1 range):

Precision: 2048 | Avg error: 0.00024424 | Max error: 0.00048852
Precision: 4096 | Avg error: 0.00012208 | Max error: 0.00024417
Precision: 8192 | Avg error: 0.00011035 | Max error: 0.99999940

A precision of 4096 seems to be the sweet spot, which makes sense: 4096 * 4096 = 2^24 (16,777,216), and 2^24 is the largest range of consecutive integers a 32-bit float can represent exactly. At 8192 the packed value needs 26 bits, so nearby codes get rounded together, and a rounding step can carry from the low "digit" into the high one, which is where the near-1.0 max error comes from. Note that both packing and unpacking in these tests ran on the CPU, so the results could be worse on a GPU if it cuts corners with float precision.
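
A harness along these lines is enough to reproduce this kind of measurement (a sketch, not the exact code; MeasureError is just an ad-hoc name):

void MeasureError(int precision)
{
    const int samples = 1000000;
    float sumError = 0f;
    float maxError = 0f;

    for (int i = 0; i < samples; i++)
    {
        // Random pair in the 0..1 range, round-tripped through Pack/Unpack.
        Vector2 v = new Vector2(Random.value, Random.value);
        Vector2 r = Unpack(Pack(v, precision), precision);

        float error = Mathf.Max(Mathf.Abs(r.x - v.x), Mathf.Abs(r.y - v.y));
        sumError += error;
        maxError = Mathf.Max(maxError, error);
    }

    Debug.Log("Precision: " + precision +
              " | Avg error: " + (sumError / samples) +
              " | Max error: " + maxError);
}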

Anyway, I don't know if this is the best algorithm, but it seems good enough for my case.