1
votes

I have a VGA compatible controller: Intel Corporation 82G33/G31 Express Integrated Graphics Controller (rev 10) on Ubuntu 10.10 Linux.

I'm rendering one static VBO per frame. It holds 30,000 triangles, with 3 lights and one texture, and I'm getting 15 FPS.

Are Intel cards really this bad, or am I doing something wrong?

The drivers are the standard open-source drivers from Intel.

My code:


void init() {
  glGenBuffersARB(4, vbos);

  // vbos[0]: vertex positions (3 floats per vertex)
  glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbos[0]);
  glBufferDataARB(GL_ARRAY_BUFFER_ARB, sizeof(GLfloat) * verticesNum * 3, vertXYZ, GL_STATIC_DRAW_ARB);

  // vbos[1]: vertex colors (4 floats per vertex)
  glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbos[1]);
  glBufferDataARB(GL_ARRAY_BUFFER_ARB, sizeof(GLfloat) * verticesNum * 4, colorRGBA, GL_STATIC_DRAW_ARB);

  // vbos[2]: vertex normals (3 floats per vertex)
  glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbos[2]);
  glBufferDataARB(GL_ARRAY_BUFFER_ARB, sizeof(GLfloat) * verticesNum * 3, normXYZ, GL_STATIC_DRAW_ARB);

  // vbos[3]: texture coordinates (2 floats per vertex)
  glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbos[3]);
  glBufferDataARB(GL_ARRAY_BUFFER_ARB, sizeof(GLfloat) * verticesNum * 2, texXY, GL_STATIC_DRAW_ARB);
}

void draw() {
  glPushMatrix();

  const Vector3f O = ps.getPosition();

  glScalef(scaleXYZ[0], scaleXYZ[1], scaleXYZ[2]);
  glTranslatef(O.x() - originXYZ[0], O.y() - originXYZ[1], O.z() - originXYZ[2]);

  glEnableClientState(GL_VERTEX_ARRAY);
  glEnableClientState(GL_COLOR_ARRAY);
  glEnableClientState(GL_NORMAL_ARRAY);
  glEnableClientState(GL_TEXTURE_COORD_ARRAY);

  glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbos[0]);
  glVertexPointer(3, GL_FLOAT, 0, 0);

  glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbos[1]);
  glColorPointer(4, GL_FLOAT, 0, 0);

  glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbos[2]);
  glNormalPointer(GL_FLOAT, 0, 0);

  glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbos[3]);
  glTexCoordPointer(2, GL_FLOAT, 0, 0);

  texture->bindTexture();
  glDrawArrays(GL_TRIANGLES, 0, verticesNum);

  glBindBufferARB(GL_ARRAY_BUFFER_ARB, 0); // unbind the VBO

  glDisableClientState(GL_VERTEX_ARRAY);
  glDisableClientState(GL_COLOR_ARRAY);
  glEnableClientState(GL_NORMAL_ARRAY);
  glEnableClientState(GL_TEXTURE_COORD_ARRAY);

  glPopMatrix();
}

EDIT: maybe it's not clear - the initialization is in a different function and is only called once.

In my experience, with no empirical data to back it up, Intel graphics cards are horrible. But any integrated graphics card is going to suck anyway. - user114600
I planned to target Intel graphics cards so it will work everywhere, but I hoped I could display something like 100,000 textured triangles in a static VBO for terrain + 10-100 small static VBOs for objects. If that's not possible on Intel cards, I would have to target something more powerful. But it's probably me doing something stupid. - ajuc
I've removed your link because that website is possibly dangerous. - Bobby
Make sure your system has OpenGL acceleration enabled: $ glxinfo | grep rendering - ewindisch
Intel cards are this bad ON LINUX. On Windows, they are perfectly fine. Seriously. A few years ago it all started to degrade, and we're at about the worst point now. - Apache

3 Answers

4
votes

A few hints:

  • With that number of vertices you should interleave the arrays. Vertex caches usually don't hold more than 1000 entries. Interleaving the data of course implies that all of it is held in a single VBO.

  • Using glDrawArrays is suboptimal if there are a lot of shared vertices, which is likely the case for a (static) terrain. Instead draw using glDrawElements (see the sketch after this list). You can use the index array to implement some cheap LOD.

  • Experiment with the number of indices per call given to glDrawElements. Try batches of at most 2^14, 2^15 or 2^16 indices. This is again to relieve cache pressure.
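
To make the first two hints concrete, here is a minimal sketch of an interleaved, indexed VBO drawn with glDrawElements. This is not your code: the Vertex struct and the verts/vertCount/indices/indexCount names are made up for illustration, and it sticks to the same ARB entry points your init() already uses.

  /* Sketch only: the Vertex layout and names are illustrative.
     offsetof() needs <stddef.h>. */
  typedef struct {
    GLfloat x, y, z;     /* position  */
    GLfloat r, g, b, a;  /* color     */
    GLfloat nx, ny, nz;  /* normal    */
    GLfloat u, v;        /* texcoord  */
  } Vertex;

  GLuint vbo, ibo;

  void initInterleaved(const Vertex *verts, GLsizei vertCount,
                       const GLuint *indices, GLsizei indexCount) {
    /* One buffer holds all per-vertex data... */
    glGenBuffersARB(1, &vbo);
    glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbo);
    glBufferDataARB(GL_ARRAY_BUFFER_ARB, vertCount * sizeof(Vertex), verts, GL_STATIC_DRAW_ARB);

    /* ...plus an index buffer, so shared vertices are stored only once. */
    glGenBuffersARB(1, &ibo);
    glBindBufferARB(GL_ELEMENT_ARRAY_BUFFER_ARB, ibo);
    glBufferDataARB(GL_ELEMENT_ARRAY_BUFFER_ARB, indexCount * sizeof(GLuint), indices, GL_STATIC_DRAW_ARB);
  }

  void drawInterleaved(GLsizei indexCount) {
    glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbo);
    glBindBufferARB(GL_ELEMENT_ARRAY_BUFFER_ARB, ibo);

    /* All four pointers reference the same buffer: the stride is the
       struct size, the offset picks the field. */
    glVertexPointer(3, GL_FLOAT, sizeof(Vertex), (const GLvoid *) offsetof(Vertex, x));
    glColorPointer(4, GL_FLOAT, sizeof(Vertex), (const GLvoid *) offsetof(Vertex, r));
    glNormalPointer(GL_FLOAT, sizeof(Vertex), (const GLvoid *) offsetof(Vertex, nx));
    glTexCoordPointer(2, GL_FLOAT, sizeof(Vertex), (const GLvoid *) offsetof(Vertex, u));

    glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, 0);
  }

The glEnableClientState calls stay the same as in your draw(). If you keep each batch at or below 2^16 indices, you can also store them as GL_UNSIGNED_SHORT, which halves the index data.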

Oh, and regarding these lines at the end of your draw():

  glDisableClientState(GL_VERTEX_ARRAY);
  glDisableClientState(GL_COLOR_ARRAY);
  glEnableClientState(GL_NORMAL_ARRAY); 
  glEnableClientState(GL_TEXTURE_COORD_ARRAY);

I think you meant the last two of those to be glDisableClientState, i.e. the end of draw() should read:
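
  glDisableClientState(GL_VERTEX_ARRAY);
  glDisableClientState(GL_COLOR_ARRAY);
  glDisableClientState(GL_NORMAL_ARRAY);
  glDisableClientState(GL_TEXTURE_COORD_ARRAY);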

0
votes

Make sure your system has OpenGL acceleration enabled:

$ glxinfo | grep rendering
direct rendering: Yes

If you get 'No', then you don't have hardware OpenGL acceleration.
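
It can also be worth checking which renderer is actually in use, in case Mesa has fallen back to software rendering:

$ glxinfo | grep "OpenGL renderer"

If that shows a software renderer rather than your Intel chip, the hardware isn't actually being used.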

-1
votes

Thanks for the answers.

Yeah, I have direct rendering on, according to glxinfo. In glxgears I get something like 150 FPS, and games like Warzone or Glest work fast enough. So the problem is probably in my code.

I'll buy a real graphics card eventually anyway, but I wanted my game to work on integrated graphics cards too; that's why I posted this question.