I have the following goal:
1. Use the input vertex data to perform vertex processing (vertex, tessellation and/or geometry shaders). This should take all object-space vertex data from the different mesh-specific buffers, transform it into world space and store it in a single world-mesh buffer. The geometry shader would also analyze the mesh in world space (e.g. collision detection). The vertex data and the collision detection output would be transform-feedback'd into different buffers (see the capture-setup sketch after this list).
2. Use the world-space vertex data in another vertex processing stage, which applies the view transformation and perspective projection. This allows moving the "camera" around without repeating a potentially expensive object-to-world transformation (e.g. instantiating a Bézier surface). At this stage camera-related mesh analysis can be done (e.g. ray-triangle intersection for mouse selection). As before, the vertex data and the ray-triangle intersection data would be written into separate buffers.
3. Use the camera-space vertex data to rasterize the mesh and render into the framebuffer. Updating the screen (e.g. when the window gets resized) would only require repeating the third pass; moving the camera around would only require repeating the second and third passes.
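Since each stage captures two kinds of output into different buffers, the capture mode matters. Here is a minimal sketch of how I picture the stage-1 setup; all identifiers (stage1Program, worldPosition, collisionResult, worldVertexBuffer, collisionBuffer) are placeholders, not my actual code:

```c
/* Register the varyings to capture; with GL_SEPARATE_ATTRIBS each
 * varying is written to its own transform feedback buffer binding. */
const GLchar *varyings[] = { "worldPosition", "collisionResult" };
glTransformFeedbackVaryings(stage1Program, 2, varyings, GL_SEPARATE_ATTRIBS);
glLinkProgram(stage1Program); /* the varying list only takes effect on (re)link */

/* One buffer per captured varying, at matching binding indices. */
glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, worldVertexBuffer);
glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 1, collisionBuffer);
```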
The point of having the entire pipeline split into these 3 stages is:
- Do vertex processing for rendering and for GPGPU (collision detection, ray-triangle intersection, ...) all at once, thus avoiding repetition.
- Reuse model transformations when no world-space changes have been made.
Here's what I have:
My prototype solution only separates the pipeline into two stages (stages 1 and 2 are combined for now), with a separate program for each stage. The first program has only a vertex shader and a geometry shader; the second has only a pass-through vertex shader and a fragment shader. I successfully set up everything necessary to get the first stage to run: I verified it by mapping the transform feedback buffer and querying the transform feedback primitive count, and when I perturbed the vertices in the geometry shader the captured data reflected the changes accordingly. I have two vertex array objects: one for the input vertex data and one for the transform feedback output vertex data. A rough outline of the first pass follows.
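This is roughly how the first pass is driven and verified (a sketch from memory, not my exact code; identifiers are placeholders):

```c
/* Run the capture pass with rasterization disabled and count the
 * primitives that the geometry shader emitted. */
GLuint query;
glGenQueries(1, &query);

glUseProgram(stage1Program);
glEnable(GL_RASTERIZER_DISCARD);   /* nothing is drawn in this pass */
glBindVertexArray(inputVAO);       /* object-space mesh data */

glBeginQuery(GL_TRANSFORM_FEEDBACK_PRIMITIVES_WRITTEN, query);
glBeginTransformFeedback(GL_TRIANGLES);
glDrawArrays(GL_TRIANGLES, 0, vertexCount);
glEndTransformFeedback();
glEndQuery(GL_TRANSFORM_FEEDBACK_PRIMITIVES_WRITTEN);

GLuint primitivesWritten = 0;
glGetQueryObjectuiv(query, GL_QUERY_RESULT, &primitivesWritten);

/* Mapping the buffer shows the transformed vertices, so capture works. */
glBindBuffer(GL_TRANSFORM_FEEDBACK_BUFFER, worldVertexBuffer);
float *captured = (float *)glMapBuffer(GL_TRANSFORM_FEEDBACK_BUFFER, GL_READ_ONLY);
/* ... inspect captured data ... */
glUnmapBuffer(GL_TRANSFORM_FEEDBACK_BUFFER);
```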
Here's my problem:
When I try to bind the transform feedback buffer to the GL_ARRAY_BUFFER binding point and use the second vertex array object for it, I get nothing. I used the OpenGL Profiler for OS X to check for GL errors by setting breakpoints on everything, and nothing fires. So I am not causing a GL error, and I can read the feedback data by mapping the buffer, but I don't see anything being rendered. I tried both glDrawTransformFeedback() and glDrawArrays() with a manually queried primitive count, roughly as sketched below.
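In outline, the failing second pass looks like this (again a sketch with placeholder identifiers, assuming a vec4 position at attribute 0):

```c
/* Feed the captured world-space vertices back in as ordinary
 * vertex attributes and try to draw them. */
glUseProgram(stage2Program);
glBindVertexArray(feedbackVAO);
glBindBuffer(GL_ARRAY_BUFFER, worldVertexBuffer);
glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(0);

/* Either of these produces no visible output for me: */
glDrawTransformFeedback(GL_TRIANGLES, feedbackObject);
/* or, using the queried primitive count: */
glDrawArrays(GL_TRIANGLES, 0, primitivesWritten * 3);
```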
Note:
My code base is tightly integrated with Cocoa and is spread across many files. I realize I will have to post some source code, so please tell me which parts you need to see and I'll collect them and post them in a neat manner.
GL_RASTERIZER_DISCARD? – Andon M. Coleman
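(For context on the comment: GL_RASTERIZER_DISCARD is global state, not per-program, so if it is still enabled from the capture pass, no fragments are ever generated in the render pass. No error is raised and the feedback data stays readable, which would match the symptoms above. If that is the cause, the fix is one line:)

```c
/* Rasterization must be re-enabled before the pass that actually
 * renders to the framebuffer. */
glDisable(GL_RASTERIZER_DISCARD);
```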