
I have a question about a very specific method for rendering surface particles. The method is explained very well in NVIDIA GPU Gems 3, Chapter 7, "Point-Based Visualization of Metaballs on a GPU".

The article is about rendering an implicit surface using points or splats that are evenly distributed over the surface. They say that the computation of these particles is done completely on the GPU; only the data that defines the surface is sent from the CPU to the GPU, to keep the traffic as low as possible.

They also give some pseudocode examples of fragment shader programs that compute the particle positions, velocities, etc., and to me it looks like these programs should run once per particle.

Now my question is: how do they store these particles? What kind of data structure is it? It must be some kind of buffer or texture that can be accessed for both reading and writing on the GPU. But how do I render this buffer/texture again in the next rendering step?

My first idea was some kind of vertex buffer object that is sent to the GPU once at the beginning and then continuously updated there at each rendering pass. Is that possible at all?

One requirement for me is that it must be implemented using OpenGL/GLSL; I hope that is possible.

That book is ancient by today's standards, just so you know. Since that book was written, about 2-3 generations of hardware have introduced new techniques for persistent storage of transformed vertices. On modern hardware, the preferred approach would be Shader Storage Buffers and Compute Shaders. But back when that was written, it was common to handle particle simulation in a fragment shader and transfer the pixel data to a vertex buffer, or to use vertex texture fetches. Transform feedback came along later. Ultimately, your approach will depend on your target hardware requirements - what are they? – Andon M. Coleman
Yep, there may be better ways to render particles using modern techniques. I planned to stick as close as possible to the pipeline they used, so that I can apply their methods for fast access to neighboring particle data and also hash table access (texture queries). Maybe I have to try whether these methods could also be applied using the transform feedback method to render particles. (Hardware requirements are not that important for now; shader model 4+ hardware is available :) ) – bender
In that case, you could use a pixel pack buffer (PBO) to transfer the output of the fragment shader to a buffer object completely on the GPU, then turn around and use that as a VBO for drawing. Chances are you will need multiple fragment shader outputs and thus multiple PBOs. – Andon M. Coleman
Ok, that sounds interesting. Do you have an example of this? If I understand correctly, I would first create an FBO holding a PBO, then do an off-screen drawing of the changed particles into the PBO. But I am still missing the PBO-to-VBO part ;-) I could not find any materials on how to do that (so far). – bender
Not off the top of my head. I assume you have used FBOs before? Despite their name, they are not actually "Buffer Objects" (they are Framebuffer Objects). What you need to do is draw into a texture attached to an FBO, then read the image for that attachment into your Pixel Pack Buffer using glGetTexImage (...). Buffer objects are just general memory, so that Pixel Pack Buffer can immediately be used as a Vertex Buffer Object (just bind it to GL_ARRAY_BUFFER) as long as the alignment / image format are meaningful (e.g. GL_RGBA32F == vec4). – Andon M. Coleman
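
For reference, here is a minimal sketch of the PBO-to-VBO path described in the comments above. It assumes an existing GL 3.x context, a bound VAO, and a GL_RGBA32F texture into which a simulation fragment shader has already written one vec4 position per texel; all names are illustrative, not from the article:

```cpp
#include <GL/glew.h>

// Hedged sketch: copy a simulation texture into a buffer object on the GPU,
// then reuse that buffer as a vertex buffer and draw the particles as points.
void copyParticlesAndDraw(GLuint simTexture, GLuint particleBuffer,
                          GLsizei texWidth, GLsizei texHeight)
{
    const GLsizei numParticles = texWidth * texHeight;

    // 1. Read the simulation texture into the buffer object. Binding the
    //    buffer to GL_PIXEL_PACK_BUFFER makes glGetTexImage write into it
    //    instead of into client memory, so the data never leaves the GPU.
    glBindBuffer(GL_PIXEL_PACK_BUFFER, particleBuffer);
    glBufferData(GL_PIXEL_PACK_BUFFER, numParticles * 4 * sizeof(GLfloat),
                 nullptr, GL_STREAM_COPY);
    glBindTexture(GL_TEXTURE_2D, simTexture);
    glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_FLOAT, nullptr); // offset 0 into the PBO
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);

    // 2. Re-bind the same buffer as a vertex buffer: each RGBA32F texel is
    //    interpreted as one vec4 position at attribute location 0.
    glBindBuffer(GL_ARRAY_BUFFER, particleBuffer);
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 0, nullptr);

    // 3. Splat the particles.
    glDrawArrays(GL_POINTS, 0, numParticles);

    glDisableVertexAttribArray(0);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
}
```

The key point is the one made in the last comment: a buffer object has no intrinsic type, so the same object can be bound to GL_PIXEL_PACK_BUFFER for the readback and to GL_ARRAY_BUFFER for drawing.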

1 Answer


Yes, you need some kind of VBO and repeated passes over the same data. The data structure can be an SoA (Structure of Arrays) or an AoS (Array of Structures), depending on how you prefer to code access to the different properties of the particles (see the sketch after the list), i.e.:

SoA:

  • Positions Array
  • Speed Array
  • Normal Array

AoS:

  • Just one Array containing [Position, Speed, Normal].

    An AoS is the same as an interleaved array for rendering, where you keep all the properties of the mesh in a single array.
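
As a rough illustration of the two layouts (a sketch only; the Vec4 helper and field names are made up here, not taken from the article):

```cpp
#include <vector>

// Minimal vec4 stand-in so the sketch is self-contained.
struct Vec4 { float x, y, z, w; };

// SoA (Structure of Arrays): one tightly packed array per property.
// Maps naturally to one VBO / texture per property.
struct ParticlesSoA {
    std::vector<Vec4> positions;
    std::vector<Vec4> velocities;
    std::vector<Vec4> normals;
};

// AoS (Array of Structures): one interleaved array, where each element
// carries all the properties of a single particle.
struct ParticleVertex {
    Vec4 position;
    Vec4 velocity;
    Vec4 normal;
};
using ParticlesAoS = std::vector<ParticleVertex>;
```

With the AoS/interleaved layout you would set up the vertex attributes with a stride of sizeof(ParticleVertex); with the SoA layout each property typically gets its own buffer or texture.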

You could use either a VBO or a texture; the only difference is the way the caching is done, since textures are optimized for 2D access.
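
If you keep the data in a texture instead of copying it into a VBO, the vertex shader can fetch each particle's data itself (the "vertex texture fetch" mentioned in the comments). A hedged sketch, assuming an RGBA32F particle texture and GLSL 1.30+ for texelFetch and gl_VertexID; all uniform names are illustrative:

```cpp
// Vertex shader that reads one particle position per vertex directly from
// the simulation texture, so no PBO-to-VBO copy is needed.
const char* kParticleVertexShader = R"glsl(
#version 130
uniform sampler2D uParticleTex;   // simulation output, one RGBA32F texel per particle
uniform int uTexWidth;            // width of the particle texture
uniform mat4 uMvp;

void main()
{
    // Map the vertex index back to a texel coordinate.
    ivec2 texel = ivec2(gl_VertexID % uTexWidth, gl_VertexID / uTexWidth);
    vec4 particle = texelFetch(uParticleTex, texel, 0);
    gl_Position = uMvp * vec4(particle.xyz, 1.0);
}
)glsl";
```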

The rendering is done in steps, exactly as you are picturing it: all you need to do is "render" the physics stepping of the system using shaders that compute the properties you want, and then bind the same structures to the actual graphics rendering in a subsequent step (see the sketch below).
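
Put together, one frame of that two-step scheme might look roughly like this. This is a sketch only: drawFullScreenQuad() is a hypothetical helper, and copyParticlesAndDraw() refers to the earlier PBO-to-VBO sketch in the comments:

```cpp
#include <GL/glew.h>

// Assumed helpers, declared elsewhere in the application.
void drawFullScreenQuad();                        // hypothetical: draws a viewport-filling quad
void copyParticlesAndDraw(GLuint simTexture, GLuint particleBuffer,
                          GLsizei texWidth, GLsizei texHeight);

// One frame: (1) physics pass into a texture, (2) graphics pass using that data.
void renderFrame(GLuint simFbo, GLuint simProgram, GLuint simTexture,
                 GLuint drawProgram, GLuint particleBuffer,
                 GLsizei texWidth, GLsizei texHeight)
{
    // Pass 1: "render" the physics step. The fragment shader runs once per
    // texel, i.e. once per particle, and writes the updated state into the
    // texture attached to simFbo.
    glBindFramebuffer(GL_FRAMEBUFFER, simFbo);
    glViewport(0, 0, texWidth, texHeight);
    glUseProgram(simProgram);
    drawFullScreenQuad();
    glBindFramebuffer(GL_FRAMEBUFFER, 0);

    // Pass 2: reuse the simulation output as vertex data and splat the particles.
    glUseProgram(drawProgram);
    copyParticlesAndDraw(simTexture, particleBuffer, texWidth, texHeight);
}
```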