1 vote

While trying to fix a liquid simulation app that works fine on the iPad 3 (PowerVR SGX543MP4, iOS 8.1) and older hardware but fails on the iPad Air (PowerVR G6430, iOS 8.1), I stumbled upon quite "curious" handling of small floating-point values on the latter. It boils down to the following test case:

Fragment Shader 1 (writes the test value):

precision highp float;
const highp float addValue = 0.000013;

uniform highp sampler2D maskField;
varying mediump vec2 pixelUv;

void main()
{
    highp vec4 mask = texture2D(maskField, pixelUv);

    // mask.x is verified to be 1.0 on both devices (see below).
    gl_FragColor = vec4(addValue * mask.x);
}

Fragment Shader 2 (checks the value and outputs red):

precision highp float;
uniform highp sampler2D testTexture;
varying highp vec2 pixelUv;

void main()
{
    highp vec4 test = texture2D(testTexture, pixelUv);

    // Red if the stored value survived, black if it was flushed to 0.0.
    gl_FragColor = vec4(test.a > 0.000012, 0.0, 0.0, 1.0);
}

The first shader writes into a HALF_FLOAT_OES texture. The result is red on the iPad 3 and black on the iPad Air. And don't rush to post about the mask.x value: it is verified correct (1.0) through the debugger on both devices.
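For reference, the render target is created roughly like this (a simplified sketch; the function name is a placeholder and error/completeness checks are omitted):

#include <OpenGLES/ES2/gl.h>
#include <OpenGLES/ES2/glext.h>

// Half-float render target. Requires OES_texture_half_float; whether it
// is renderable is implementation-dependent (EXT_color_buffer_half_float),
// so real code should also consult glCheckFramebufferStatus.
GLuint createHalfFloatTarget(GLsizei width, GLsizei height)
{
    GLuint tex, fbo;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                 GL_RGBA, GL_HALF_FLOAT_OES, NULL);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, tex, 0);
    return tex;
}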

Even weirder, if I increase the value of addValue to 0.000062, it is written successfully. But if the value is lower than that and the variable is combined with anything but constants, it gets truncated to 0.0 on the iPad Air. So... this works:

const highp float addValue = 0.000013;

void main()
{
    gl_FragColor = vec4(addValue * 1.0);
}

but this doesn't:

uniform highp float one; // set to 1.0 by the application
const highp float addValue = 0.000013;

void main()
{
    gl_FragColor = vec4(addValue * one);
}

The same happens with addition: if I try to accumulate the value like this, nothing gets added:

const highp float addValue = 0.000013;
uniform highp sampler2D pressureField;

varying mediump vec2 pixelUv;

void main()
{
    highp vec4 pressureData = texture2D(pressureField, pixelUv);
    pressureData.a += addValue; // on the iPad Air this adds nothing
    gl_FragColor = vec4(pressureData.a);
}

Any suggestions on what is going on and how to fix it are welcome! The algorithm really needs high-precision floats. Right now I'm on the verge of just filing a bug report with Apple and waiting for an eternity.


1 Answer

2 votes

When storing 0.000013 as a GL_HALF_FLOAT, you're dealing with denormalized numbers. The smallest normalized number that a standard half-float (IEEE 754-2008 binary16) can represent is 2^-14, which is approximately 0.000061, i.e. larger than the value you are trying to store.
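To make this concrete, here is a quick back-of-the-envelope check (plain C, not GL; the helper names are mine) of what the two behaviors discussed below do to your constants:

#include <math.h>
#include <stdio.h>

/* Denormal-preserving rounding: with E == 0 the stored half-float
 * value is M * 2^-24, with M an integer in [0, 1023]. */
static float half_with_denormals(float v)
{
    return roundf(v * 16777216.0f) / 16777216.0f; /* 2^24 = 16777216 */
}

/* Flush-to-zero alternative: anything below the smallest normalized
 * half-float, 2^-14, becomes 0.0. */
static float half_flush_to_zero(float v)
{
    return (v < 1.0f / 16384.0f) ? 0.0f : v; /* 2^-14 = 1/16384 */
}

int main(void)
{
    printf("%.9f\n", half_with_denormals(0.000013f)); /* ~0.000012994, passes "> 0.000012" */
    printf("%.9f\n", half_with_denormals(0.000012f)); /* ~0.000011981 */
    printf("%.9f\n", half_flush_to_zero(0.000013f));  /* 0.000000000, hence the black output */
    return 0;
}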

The OpenGL spec leaves implementations some latitude in how to deal with denormalized 16-bit floats. This is documented very similarly both in the OES_texture_half_float extension spec for ES 2.0 and in the ES 3.0 spec.

The extension spec uses this as the primary definition for the case where the exponent is 0, which is the case for denormalized numbers:

(-1)^S * 2^-14 * (M / 2^10),         if E == 0 and M != 0,

This is enough precision to resolve the difference between 0.000012 and 0.000013. But it also says that "Implementations are also allowed to use any of the following alternative encodings":

(-1)^S * 0.0,                        if E == 0 and M != 0,

With this encoding, both 0.000012 and 0.000013 are rounded to 0.0, and therefore become equal.

The ES 3.0 spec contains the first definition, but then adds:

Providing a denormalized number or negative zero to GL must yield predictable results, whereby the value is either preserved or forced to positive or negative zero.

This also allows flushing these values to 0.0.

As for why this behaves differently between hardware generations, I don't have any detailed insight. But it looks to me like both implementations are spec compliant, since they use two different variations that are both explicitly allowed by the spec.

If you want to be sure that these values are represented with your required range/precision, you'll have to use a texture of type GL_FLOAT, which will store the values in full 32-bit range/precision.
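On ES 2.0 that is the OES_texture_float path. A rough sketch (the function name is mine; renderability of float textures is itself implementation-dependent, so check framebuffer completeness after attaching):

#include <OpenGLES/ES2/gl.h>
#include <OpenGLES/ES2/glext.h>
#include <string.h>

// Full 32-bit float texture; 0.000013 is a perfectly normal fp32 value,
// so no denormal handling comes into play at this magnitude.
GLuint createFloatTexture(GLsizei width, GLsizei height)
{
    const char *ext = (const char *)glGetString(GL_EXTENSIONS);
    if (!ext || !strstr(ext, "GL_OES_texture_float"))
        return 0; // no float texture support

    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                 GL_RGBA, GL_FLOAT, NULL);
    return tex;
}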