I have a Java application in which I compute a 3D model on the CPU. The computation takes some time (on the order of seconds to minutes), but happens in iterations, levels of detail if you will. I am using JOGL to visualize the model, and I want to visualize it right away and update the view with a new level of detail as soon as one is ready.

I have a setup in which this works, but I have the feeling I'm doing something wrong (I'm learning OpenGL with this project). Right now I cache the vertex data of a new LoD in a FloatBuffer and set a flag; the render loop checks this flag and, if it is set, allocates a new data store (glBufferData) for the VBO (see the sketch below my questions).

I considered using glBufferSubData for better performance, because allocating a new data store causes a hiccup in performance at the higher levels of detail (which have at least 1M vertices). The problem is that a new level of detail has an arbitrary number of vertices (not the same as the previous level), so when initializing the VBO for the first time I can't predict how large the buffer would maximally need to be. I've tried allocating a really big one, but that leaves me with JVM crashes or GL error 1281 (GL_INVALID_VALUE); I guess I'm trying to allocate a buffer that is too big (1440*1440*4 bytes).

My questions:

- Should I be using multiple separate VBOs, and how large can a single VBO be?
- Should I have a separate (set of) VBOs for vertex positions, vertex colors and normals?
- What is the best practice when a new level of detail is ready?
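For reference, here is roughly what my current handoff between the CPU thread and the render loop looks like. This is a minimal sketch, not my exact code: the class and method names and the AtomicReference (whose non-null value doubles as the flag) are illustrative.

import java.nio.FloatBuffer;
import java.util.concurrent.atomic.AtomicReference;

import com.jogamp.opengl.GL;
import com.jogamp.opengl.util.GLBuffers;

class LodUploader {
    // A non-null value doubles as the "new LoD ready" flag.
    private final AtomicReference<FloatBuffer> pendingVertexData = new AtomicReference<>();

    // CPU thread: called once per finished level of detail; a level
    // that gets overwritten before it was rendered is simply skipped.
    void publishLevelOfDetail(float[] vertices) {
        pendingVertexData.set(GLBuffers.newDirectFloatBuffer(vertices));
    }

    // GL thread: called from the render loop once per frame.
    void updateVboIfNeeded(GL gl, int vbo) {
        FloatBuffer newData = pendingVertexData.getAndSet(null);
        if (newData != null) {
            gl.glBindBuffer(GL.GL_ARRAY_BUFFER, vbo);
            // Throws away the old data store and allocates a new one
            // sized exactly to this level of detail (4 bytes per float).
            gl.glBufferData(GL.GL_ARRAY_BUFFER, (long) newData.limit() * 4,
                    newData, GL.GL_STATIC_DRAW);
        }
    }
}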
Extra information:
Output from the check in the render loop that initializes a new buffer data store (time measured using System.currentTimeMillis()), for the different levels of detail arriving, showing the noticeable hiccups in performance:
OpenGL received new data of length 0
Allocating a new data store took 0 ms
OpenGL received new data of length 0
Allocating a new data store took 0 ms
OpenGL received new data of length 360
Allocating a new data store took 3 ms
OpenGL received new data of length 6240
Allocating a new data store took 0 ms
OpenGL received new data of length 34020
Allocating a new data store took 0 ms
OpenGL received new data of length 177840
Allocating a new data store took 1 ms
OpenGL received new data of length 1018980
Allocating a new data store took 7 ms
OpenGL received new data of length 5214660
Allocating a new data store took 20 ms
OpenGL received new data of length 19392000
Allocating a new data store took 59 ms
OpenGL received new data of length 56229840
Allocating a new data store took 157 ms
The actual code:
long before = System.currentTimeMillis();
// size in bytes = number of floats in vertexData * 4 bytes per float
gl.glBufferData(GL_ARRAY_BUFFER, vertexData.limit() * 4, vertexData, GL_STATIC_DRAW);
System.out.println("Allocating a new data store took " + (System.currentTimeMillis() - before) + " ms");
where vertexData is allocated using GLBuffers.newDirectFloatBuffer(...) on the float[] containing the vertex data (this is done on the CPU thread). On a side note, notice how the largest level of detail contains 5622984 vertices (each vertex consists of 10 float values, hence the 56229840 floats above). This code works, and my PC does not complain when it is asked to allocate a buffer data store of that size with this line of code. What weirds me out is that if I try to directly allocate a buffer of that size in my init routine, by hard-coding that amount of bytes, the JVM crashes (# Problematic frame: [nvoglv64.DLL+0xd16e7b]). So changing
gl.glBufferData(
GL_ARRAY_BUFFER,
vertexData.limit() * 4,
vertexData,
GL_STATIC_DRAW);
to
gl.glBufferData(
GL_ARRAY_BUFFER,
56229840*4,
vertexData,
GL_STATIC_DRAW);
causes a JVM crash.
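For reference, the glBufferSubData route I was considering would look roughly like the sketch below: only reallocate (glBufferData) when a level of detail no longer fits the current store, and otherwise update it in place. This is just the idea, not code I have working; the grow-only capacityBytes bookkeeping and the 2x headroom are illustrative.

import java.nio.FloatBuffer;

import com.jogamp.opengl.GL;

class GrowOnlyVbo {
    // Bytes currently reserved in the VBO's data store (illustrative bookkeeping).
    private long capacityBytes = 0;

    void uploadLevelOfDetail(GL gl, int vbo, FloatBuffer vertexData) {
        long requiredBytes = (long) vertexData.limit() * 4;
        gl.glBindBuffer(GL.GL_ARRAY_BUFFER, vbo);
        if (requiredBytes > capacityBytes) {
            // Only reallocate when the new level no longer fits, with some
            // headroom so the next few levels can reuse the same store.
            capacityBytes = requiredBytes * 2;
            gl.glBufferData(GL.GL_ARRAY_BUFFER, capacityBytes, null, GL.GL_STATIC_DRAW);
        }
        // Update the front of the store in place; the draw call then only
        // uses the first vertexData.limit() / 10 vertices (10 floats each).
        gl.glBufferSubData(GL.GL_ARRAY_BUFFER, 0, requiredBytes, vertexData);
    }
}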