
In HLSL, pow(0, 2.2) gives 1, but pow(0, 2.0) gives 0.

It seems that any fractional exponent gives 1, while integer-valued exponents give 0.

I am using DirectX 9 and the HLSL compiler "D3DCompiler_43.dll". I have confirmed this behaviour on both Nvidia and ATI cards.

I am confused! Is this some kind of known behaviour, or a bug?

To illustrate the effect, try the following simple test shader:

// Test shader demonstrating strange pow behaviour:
// when Brightness becomes 0, the resulting color jumps to white

float4x4 WorldViewProjXf : WorldViewProjection < string UIWidget="None";>;

float Brightness
<
    string UIName = "Brightness";   
    float UIMin =  0;   
    float UIMax =  1;       
> = 0;


struct VS_Input
{
    float4 position : POSITION0;
};

struct VS_Output
{
    float4 position : POSITION0;
};

VS_Output Vertex_Func( VS_Input in_data )
{
    VS_Output outData;
    outData.position = mul(in_data.position, WorldViewProjXf);
    return outData;
}

float4 Fragment_Func( VS_Output in_data ) : COLOR
{
    // The scalar result is implicitly replicated to all four channels.
    return pow(Brightness, 2.2);
}

technique Main 
{
    pass p0 
    {       
        VertexShader = compile vs_3_0 Vertex_Func();
        PixelShader = compile ps_3_0 Fragment_Func();
    }
}

1 Answer


Looking at the HLSL docs, pow(x, y) appears to be implemented directly as exp(y * log(x)). Since x = 0 in your question, and the docs say log(0) == -INF, the value of y shouldn't matter at that point as long as it is greater than zero: exp(y * -INF) == exp(-INF) == 0.
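
For reference, here is a minimal sketch of that expansion written out as its own HLSL function. This is just an illustration of what the docs describe, not the actual driver or hardware implementation:

// Sketch of pow(x, y) as the HLSL docs describe it;
// not the actual driver/hardware implementation.
float pow_expanded(float x, float y)
{
    // With x == 0: log(0) == -INF, and for any y > 0
    // y * -INF == -INF, so exp(-INF) == 0.
    return exp(y * log(x));
}

By that reasoning, pow(0, 2.2) and pow(0, 2.0) should both come out as 0.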

You might be accidentally comparing pow(0.0, 2.0) == 0.0 with pow(0.0, 0.0) == 1.0; that's my best guess as to what's happening.
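
If you want a well-defined result at Brightness == 0 regardless of how a given driver evaluates pow, one possible workaround is to keep the base away from zero. The epsilon below is an arbitrary choice for illustration:

float4 Fragment_Func( VS_Output in_data ) : COLOR
{
    // Clamp the base so pow never sees exactly 0.
    // 1e-6 is an arbitrary epsilon chosen for illustration.
    return pow(max(Brightness, 1e-6), 2.2);
}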

log(0) == -INF (reference)

pow(x, y) == exp(y * log(x)) (reference)