0
votes

I'm experimenting with shaders a bit and I keep getting this weird compilation error that's driving me crazy!

The following pixel shader code snippet:

            DirectionVector = normalize(f3LightPosition[i] - PixelPos);
            LightVec = PixelNormal - DirectionVector;

            // Get the light strength factor
            LightStrFactor = float(abs((LightVec.x + LightVec.y + LightVec.z) / 3.0f));

            // TEST!!!
            LightStrFactor = 1.0f;

            // Add this light to the total light on this pixel
            LightVal += f4Light[i] * LightStrFactor;

works perfectly, but as soon as I remove the "LightStrFactor = 1.0f;" line, i.e. let LightStrFactor be the result of the calculation above, the shader fails to compile.

LightStrFactor is a float; LightVal and f4Light[i] are float4; all the rest are float3.

My question, besides why it doesn't compile, is: how come the DX compiler cares about the value of a float? Even if my values are incorrect, shouldn't that be a run-time issue? The shader compilation code is this:

/* Compile the shader */
if (FAILED(D3DXCompileShaderFromFile(fileName, NULL, NULL, "PS_MAIN", "ps_2_0", 0, &this->m_pCode, NULL, &this->m_constantTable)))
    GraphicException("Failed to compile pixel shader!");  // <-- gets here :(

if (FAILED(g_D3dDevice->CreatePixelShader( (DWORD*)this->m_pCode->GetBufferPointer(), &this->m_hPixelShader )))
    GraphicException("Failed to create pixel shader!");

this->m_fLoaded = true;

Any help is appreciated, thanks! :]

3
What's the error message? Take the time to get the errors and output them while you compile; this will be a time saver. – Coincoin

3 Answers

0
votes

Don't forget that shaders get optimized a lot when they are being compiled. This might be why it doesn't fail when you hardcode the value.

When you hardcode a value right after assigning it the result of an expression, the whole expression gets optimized out and you are left with only the final assignment.

0
votes

Pixel shaders don't support C++-style casts -- the float(...) in your example. Since it's completely redundant, you can just get rid of it, but if you want a cast, use (float) as in C.

0
votes

From your shader snippet, it looks like you're iterating through a number of lights, accumulating their contribution.

My guess would be that when the compiler unrolls the loop with your actual light shading calculations, the compiled shader uses more arithmetic instruction slots than the ps_2_0 profile supports (max 64 instructions).

When you replace the calculations with LightStrFactor=1, the compiler completely optimizes away the three code lines preceding it, which results in your test shader being significantly shorter, and hence fitting inside the allotted 64 instructions.

If your application's hardware target allows it, simply bumping the shader profile version will give your shader more instruction slots and let it compile without errors. Any of ps_3_0 / ps_2_a / ps_2_b should be able to compile your shader. (The 2_a/2_b profiles are somewhat awkward, but officially supported, NV/ATI extensions to the base 2_0 profile.)
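
For example, assuming the compile call from your question, the only change needed is the profile string (a sketch, not tested against your code):

    // Sketch only: same arguments as the compile call in the question, with the target
    // profile bumped from "ps_2_0" to "ps_2_b" ("ps_2_a" or "ps_3_0" work the same way)
    if (FAILED(D3DXCompileShaderFromFile(fileName, NULL, NULL, "PS_MAIN", "ps_2_b",
                                         0, &this->m_pCode, NULL, &this->m_constantTable)))
        GraphicException("Failed to compile pixel shader!");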

(As mentioned in another reply, taking the time to capture and print the compilation errors will be well worth your while.)
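
A minimal sketch of that, again assuming the call from your question: pass an ID3DXBuffer through the ppErrorMsgs parameter (the NULL just before the constant table) and print whatever the compiler reports:

    LPD3DXBUFFER pErrors = NULL;   // receives the compiler's error/warning text

    HRESULT hr = D3DXCompileShaderFromFile(fileName, NULL, NULL, "PS_MAIN", "ps_2_0",
                                           0, &this->m_pCode, &pErrors, &this->m_constantTable);

    if (FAILED(hr))
    {
        if (pErrors != NULL)
        {
            // The buffer holds a null-terminated ANSI string with the full compiler message
            OutputDebugStringA((const char*)pErrors->GetBufferPointer());
            pErrors->Release();
        }
        GraphicException("Failed to compile pixel shader!");
    }
    else if (pErrors != NULL)
    {
        // Warnings can be returned here even when compilation succeeds
        pErrors->Release();
    }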