I'm working on Mac OS X and trying to map an image onto a cube through a GLSL shader.
My method to display the cube (and the image, when it's not going through a shader) is:
glPushMatrix();
{
    glTranslatef(position.getX(), position.getY(), position.getZ());
    glRotatef(angle, axis.getX(), axis.getY(), axis.getZ());

    if (bodyImage &&
        textureCoords != 0 &&
        [bodyImage lockTextureRepresentationWithColorSpace:CGColorSpaceCreateDeviceRGB() forBounds:[bodyImage imageBounds]]) {
        [bodyImage bindTextureRepresentationToCGLContext:cgl_ctx textureUnit:GL_TEXTURE0 normalizeCoordinates:YES];
        texture = [bodyImage textureName];
        if (shader != nil) {
            glUseProgramObjectARB([shader programObject]);
            glUniform1iARB([shader getUniformLocation:"tex0"], 0);
        } else {
            glEnableClientState(GL_TEXTURE_COORD_ARRAY);
            glTexCoordPointer(3, GL_FLOAT, sizeof(btVector3), &textureCoords[0].getX());
        }
    }

    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_NORMAL_ARRAY);
    glVertexPointer(3, GL_FLOAT, sizeof(btVector3), &vertices[0].getX());
    glNormalPointer(GL_FLOAT, sizeof(btVector3), &normals[0].getX());
    glDrawElements(GL_TRIANGLES, indicesCount, GL_UNSIGNED_INT, indices);
    glDisableClientState(GL_NORMAL_ARRAY);
    glDisableClientState(GL_VERTEX_ARRAY);

    if (bodyImage) {
        if (shader != nil) {
            glUseProgramObjectARB(NULL);
        } else {
            glDisableClientState(GL_TEXTURE_COORD_ARRAY);
        }
        [bodyImage unbindTextureRepresentationFromCGLContext:cgl_ctx textureUnit:GL_TEXTURE0];
        [bodyImage unlockTextureRepresentation];
    }
}
glPopMatrix();
As you can see, for this test I check whether a shader should be applied to my object (it's a wrapper that works really well).
If there's no shader, I just enable GL_TEXTURE_COORD_ARRAY; if there is one, I try to bind the image to the sampler2D uniform in the shader.
The shader I'm using is very simple: it only displays the texture. I tested it under Quartz Composer and it works well.
But here, it only displays black.
EDIT
Here is the shader.
Vertex
varying vec2 texture_coordinate;

void main() {
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    texture_coordinate = vec2(gl_MultiTexCoord0);
}
Fragment
varying vec2 texture_coordinate;
uniform sampler2D tex0;

void main()
{
    gl_FragColor = gl_Color * texture2D(tex0, gl_TexCoord[0].xy);
}
GL_TEXTURE_COORD_ARRAY is deprecated; you should be using vertex attributes. If you are not using vertex attributes, then you need to use the appropriate fixed-function GLSL reserved word (e.g. gl_MultiTexCoord0). And come to think of it, even if you were using that reserved word in your shader to get the texture coordinates, since you are not setting the texture coordinate pointer when shader is non-NULL this would not work anyhow. – Andon M. Coleman

Your fragment shader is sampling with gl_TexCoord[0].xy. I would suggest you replace that with texture_coordinate in your fragment shader and you should be good to go. gl_TexCoord[n] can be used to store texture coordinates between the vertex and fragment shader stages, but it is a good idea to avoid using it if you ever want to migrate to modern OpenGL (where that particular reserved word does not exist). Alternatively, you could have set gl_TexCoord[0] = gl_MultiTexCoord0; in the vertex shader, but again I am trying to set you on a path that will help you transition to modern GLSL. – Andon M. Coleman

Also, this should be glTexCoordPointer(2, GL_FLOAT, sizeof(btVector2), &textureCoords[0].getX()); unless you are doing things like texture projection or 3D texturing. – Andon M. Coleman
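Putting those comments together, a minimal sketch of the fixed fragment shader (assuming the vertex shader above stays as-is) samples the varying that the vertex shader actually writes, instead of gl_TexCoord[0]:

```glsl
varying vec2 texture_coordinate;
uniform sampler2D tex0;

void main()
{
    // Read the varying written by the vertex shader. gl_TexCoord[0]
    // is never written by this vertex shader, so its contents are
    // undefined when read here.
    gl_FragColor = gl_Color * texture2D(tex0, texture_coordinate);
}
```

On the client side, the texture coordinate array would also need to be enabled and pointed at data on the shader path, not only the fixed-function path, since gl_MultiTexCoord0 is sourced from that same client state; two components per coordinate are enough unless you are doing texture projection or 3D texturing.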