I have a running program that uses glVertexPointer, glNormalPointer, etc. to draw a large number of elements. This runs at around 30 FPS. I've now reached a point where integrating shaders would be a viable option (each vertex needs a color calculated based on a specific class). 1. My first question is: how much should using shaders affect my FPS? My current implementation (which I'm 99.99% sure is flawed in at least some way; code below) drops the FPS drastically, to 3 FPS. If this kind of drop is normal, there is no point in struggling with this.
Now for some code. My shaders look like:
vertexsource = """
attribute vec3 position;
attribute vec3 normal;
varying vec3 norm;
varying vec3 color_out;
uniform vec3 color_in;
void main()
{
    gl_Position = gl_ModelViewProjectionMatrix * vec4(position, 1.0);
    color_out = color_in;
    norm = normal;
}"""
fragmentsource = """
varying vec3 norm;
varying vec3 color_out;
void main()
{
    gl_FragColor = vec4(color_out, 1.0);
}"""
vshader = compileShader(vertexsource, GL_VERTEX_SHADER)
fshader = compileShader(fragmentsource, GL_FRAGMENT_SHADER)
program = compileProgram(vshader, fshader)
color = glGetUniformLocation(program, "color_in")
normal = glGetAttribLocation(program, "normal")
position = glGetAttribLocation(program, "position")
glUseProgram(program)
# count is the number of vec3s being uploaded, so it must be 1, not 3
glUniform3fv(color, 1, (0, 0, 1))
return position, normal
So I return position and normal because I will use them later to pass the actual vertices and normals. Right from here I have a question: without shaders I just pass the vertex and normal arrays using glVertexPointer and glNormalPointer, and OpenGL handles the rest. 2. I've read that shaders have built-in gl_Vertex and gl_Normal attributes. How do I pass the values from my VBOs to these? As you can see, I'm currently passing my vertex positions to the attribute position and my normals to normal, but the shader does nothing with the normals yet, as I don't know what to do with them.
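To illustrate the built-in route asked about in question 2: in legacy (compatibility-profile) GLSL, gl_Vertex and gl_Normal are fed by the same glVertexPointer/glNormalPointer calls already used in the fixed-function path, so a shader can read them without any custom attribute setup. A minimal sketch of such a shader pair, kept as plain source strings in the style of the code above (the names `builtin_vertexsource`/`builtin_fragmentsource` are mine, and the compile/link code would be unchanged):

```python
# Sketch only: gl_Vertex, gl_Normal, gl_NormalMatrix etc. exist in legacy
# (compatibility-profile) GLSL and are filled from the fixed-function arrays
# set up with glVertexPointer / glNormalPointer.
builtin_vertexsource = """
varying vec3 norm;
varying vec3 color_out;
uniform vec3 color_in;
void main()
{
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    norm = gl_NormalMatrix * gl_Normal;
    color_out = color_in;
}"""

builtin_fragmentsource = """
varying vec3 norm;
varying vec3 color_out;
void main()
{
    gl_FragColor = vec4(color_out, 1.0);
}"""
```

With these sources, the drawing code would not need glVertexAttribPointer at all; the existing glVertexPointer/glNormalPointer calls feed the built-ins directly.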
The drawing is done like this:
glEnableVertexAttribArray(self.position)
glEnableVertexAttribArray(self.normal)
glBindBufferARB(GL_ARRAY_BUFFER_ARB, self.bufferVertices)
glVertexAttribPointer(self.position, 3, GL_FLOAT, GL_FALSE, 0, None)
# Note: glEnableClientState(GL_NORMAL_ARRAY) was called here, but that is
# fixed-function client state and is not needed for the generic attribute
glBindBufferARB(GL_ARRAY_BUFFER_ARB, self.bufferNormals)
glVertexAttribPointer(self.normal, 3, GL_FLOAT, GL_FALSE, 0, None)
glDrawElements(GL_TRIANGLES, len(self.triangles), GL_UNSIGNED_SHORT, ADT.voidDataPointer(self.triangles))
Here self.bufferVertices and self.bufferNormals are VBOs containing my vertices and normals, and self.triangles is the index array. It draws the correct indices so far, but as mentioned my FPS is very low.
3. Is this drawing sequence correct? Also, are there other things that, when enabled, could conflict with the shader (GL_LIGHTING, GL_DEPTH_TEST and so on)?
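For comparison, here is a hedged sketch of how the per-frame draw could look using generic attributes only, with no fixed-function client state mixed in and the attribute arrays disabled again afterwards. It assumes PyOpenGL with `from OpenGL.GL import *` in scope and that `self.program`, `self.position` and `self.normal` come from the setup code above; `draw_with_shader` is a name I made up, not part of the original program:

```python
def draw_with_shader(self):
    # Assumes the program was compiled/linked once at startup; re-compiling
    # shaders every frame is a classic cause of a huge FPS drop.
    glUseProgram(self.program)

    glBindBuffer(GL_ARRAY_BUFFER, self.bufferVertices)
    glEnableVertexAttribArray(self.position)
    glVertexAttribPointer(self.position, 3, GL_FLOAT, GL_FALSE, 0, None)

    glBindBuffer(GL_ARRAY_BUFFER, self.bufferNormals)
    glEnableVertexAttribArray(self.normal)
    glVertexAttribPointer(self.normal, 3, GL_FLOAT, GL_FALSE, 0, None)

    # The indices could also live in a GL_ELEMENT_ARRAY_BUFFER VBO, which
    # would avoid re-sending them from client memory every frame.
    glDrawElements(GL_TRIANGLES, len(self.triangles),
                   GL_UNSIGNED_SHORT, ADT.voidDataPointer(self.triangles))

    # Leave attribute state clean for any fixed-function drawing afterwards.
    glDisableVertexAttribArray(self.position)
    glDisableVertexAttribArray(self.normal)
    glUseProgram(0)
```

State like GL_DEPTH_TEST does not conflict with shaders; GL_LIGHTING simply has no effect once a vertex shader replaces the fixed-function lighting.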
The gl_Vertex, gl_Normal, ... attributes are just set with the usual attribute calls (glVertex/glVertexPointer, glNormal/glNormalPointer, ...), that's their advantage. – Christian Rau