I'm writing a refraction shader that takes two surfaces into account. To do so, I'm using FBOs to render the depth and normals to textures, and a cubemap to represent the environment. I need to use the normals stored in the texture to fetch values from the cubemap, in order to get the refraction normal of the back surface.
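For context, the normals are written in a first pass into a texture attached to an FBO, along these general lines (a simplified sketch with placeholder names, sizes and formats, not my exact setup):
GLuint fbo, normalTex;
glGenTextures(1, &normalTex);
glBindTexture(GL_TEXTURE_2D, normalTex);
// Placeholder format and size; the point is simply that the normals end up in 'normalTex'
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, width, height, 0, GL_RGBA, GL_FLOAT, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, normalTex, 0);
// ... render the back-surface normals here, then bind FBO 0 again for the final pass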
The cubemap works perfectly as long as I don't try to access it with a vector whose value has been retrieved from a texture.
Here is a minimal fragment shader that fails. The color stays desperately black. I'm sure that the call to texture2D returns non-zero values: if I display the texture color (representing the normals) contained in direction, I get a perfectly colored model. No matter what kind of operations I apply to the "direction" vector, it keeps failing.
uniform samplerCube cubemap;
uniform sampler2D normalTexture;
uniform vec2 viewportSize;
void main()
{
    vec3 direction = texture2D(normalTexture, gl_FragCoord.xy / viewportSize).xyz;
    // direction = vec3(1., 0., 0) + direction; // fails as well!!
    vec4 color = textureCube(cubemap, direction);
    gl_FragColor = color;
}
Here are the values of the "direction" vector displayed as colors, just as proof that they are not null!
And here is the result of the above shader (just the teapot).
While this code works perfectly:
uniform samplerCube cubemap;
uniform vec2 viewportSize;
varying vec3 T1;
void main()
{
    vec4 color = textureCube(cubemap, T1);
    gl_FragColor = color;
}
I can't think of any reason why the color would stay black whenever I sample the cubemap with a vector read back from a texture!
Just for the sake of completeness, even though my cubemap works, here are the parameters used to set it up:
glGenTextures(1, &mTextureId);
glEnable(GL_TEXTURE_CUBE_MAP);
glBindTexture(GL_TEXTURE_CUBE_MAP, mTextureId);
// Set parameters
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
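The six faces themselves are then uploaded in the usual way, roughly like this (simplified; 'faces' and the RGBA/unsigned-byte format stand in for my actual image data and format):
for (int i = 0; i < 6; ++i) {
    // The six cube-map face targets are consecutive enum values starting at +X
    glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, 0, GL_RGBA,
                 faceSize, faceSize, 0, GL_RGBA, GL_UNSIGNED_BYTE, faces[i]);
}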
Unless I've missed something important somewhere, I'm thinking it might be a driver bug. I don't have a dedicated graphics card; I'm using the integrated graphics of an Intel Core i5:
00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09)
Any idea why this might be occurring, or do you have a workaround?
Edit: Here is the debug output when my shader class binds the textures:
4 textures to bind
Bind texture 3 on texture unit unit 0
Bind to shader uniform: 327680
Bind texture 4 on texture unit unit 1
Bind to shader uniform: 262144
Bind texture 5 on texture unit unit 2
Bind to shader uniform: 393216
Bind texture 9 on texture unit unit 3
Bind to shader uniform: 196608
Textures 3 and 4 are the depth textures, 5 is the normal map, and 9 is the cubemap.
And the code that does the binding:
void Shader::bindTextures() {
    dinf << m_textures.size() << " textures to bind" << endl;
    int texture_slot_index = 0;
    for (auto it = m_textures.begin(); it != m_textures.end(); it++) {
        dinf << "Bind texture " << it->first << " on texture unit unit "
             << texture_slot_index << std::endl;
        glActiveTexture(GL_TEXTURE0 + texture_slot_index);
        glBindTexture(GL_TEXTURE_2D, it->first);
        // Binds to the shader
        dinf << "Bind to shader uniform: " << it->second << endl;
        glUniform1i(it->second, texture_slot_index);
        texture_slot_index++;
    }
    // Make sure that the texture unit which is left active is the number 0
    glActiveTexture(GL_TEXTURE0);
}
m_textures is a map from texture ids to uniform locations.
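It is declared more or less like this (exact types from memory):
// OpenGL texture id -> uniform location the texture should be bound to
std::map<GLuint, GLint> m_textures;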