8 votes

I'm trying to learn how to take advantage of GPU capabilities for Three.js and WebGL work, so I'm analysing code to pick up patterns and methods for how things are done, and I need some code explanation.

I found this example: One million particles, which seems to be the simplest one involving calculations done in shaders and spat back out.

So from what I have figured out:

  • Velocity and position data for the particles are kept in textures that are passed to the shaders, where the calculations are performed, and then come back for the update.

  • Particles are created randomly on the plane, no more than the texture size?

    for (var i = 0; i < 1000000; i++) {
        particles.vertices.push(new THREE.Vector3(
            (i % texSize) / texSize,
            Math.floor(i / texSize) / texSize,
            0
        ));
    }

  • I don't see any particle position updates. How is the data retrieved from the shaders, and how does it update each particle?

    pick() only passes the mouse position, which is used to calculate the direction of the particles' movement?

  • Why are there 2 buffers, and 8 shaders (4 pairs of vertex and fragment shaders)? Isn't the one for calculating velocity and position enough on its own?

  • How does the shader update the texture? I only see it reading from the texture, not writing to it.

Thanks in advance for any explanations!

The example you linked to does not run for me. See this three.js example: threejs.org/examples/webgl_gpgpu_birds.html. Google "GPGPU". Google "GPGPU ping-pong". Ideally, the data is not passed back to the CPU "for updates", as you assume -- all computations are performed on the GPU. – WestLangley
@WestLangley If it is not passed back to the CPU, then it is stored in the texture data and read again in the next iteration to update the position further in the vertex shader? – mjanisz1

1 Answer

20 votes

How the heck did they do that:

In this post, I'll explain how these results get computed almost entirely on the GPU via WebGL/Three.js. It might look a bit sloppy here, as I'm using the integrated graphics of an Intel i7 4770k:

[Animation: particle system computed on the graphics card in the browser via WebGL/Three.js]


Introduction:

The simple idea for keeping everything intra-GPU: each particle's state is represented by one texture pixel's color value. One million particles result in 1024x1024-pixel textures, one holding the current positions and another holding the velocities of those particles.

Nobody ever forbade abusing the RGB color values of a texture for data that has nothing to do with colors in the 0...255 universe. You basically have 32 bits (R + G + B + alpha) per texture pixel for whatever you want to store in GPU memory. (One might even use multiple texture pixels per particle/object if more data needs to be stored.)
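To make that idea concrete, here is a minimal sketch (not the demo's code; the variable names and the use of float textures are my assumptions) of packing one particle state per pixel into a Three.js DataTexture:

// Hypothetical sketch: one particle state per RGBA pixel.
// Assumes float textures are available (OES_texture_float);
// the demo may instead pack its data into 8-bit RGBA channels.
var texSize = 1024;                         // 1024 * 1024 = ~1M particles
var data = new Float32Array(texSize * texSize * 4);

for (var i = 0; i < texSize * texSize; i++) {
    data[i * 4 + 0] = Math.random();        // R: x position
    data[i * 4 + 1] = Math.random();        // G: y position
    data[i * 4 + 2] = 0.0;                  // B: z position (unused here)
    data[i * 4 + 3] = 1.0;                  // A: free for extra per-particle data
}

var posTexture = new THREE.DataTexture(
    data, texSize, texSize, THREE.RGBAFormat, THREE.FloatType
);
posTexture.needsUpdate = true;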

They basically used multiple shaders in sequential order. From the source code, one can identify these steps in their processing pipeline (a rough per-frame sketch in code follows the list):

  1. Randomize particles (ignored in this answer) ('randShader')
  2. Determine each particle's velocity by its distance to the mouse location ('velShader')
  3. Based on velocity, move each particle accordingly ('posShader')
  4. Display the result on screen ('dispShader')
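In code, one frame of such a pipeline could look roughly like the sketch below. All names are mine, and I'm using the current Three.js render-target API rather than whatever the demo used; the key point is the "ping-pong" between two position targets, so the previous frame's positions stay readable while the new ones are written:

// Hypothetical per-frame loop (names assumed, not from the demo).
// velScene/posScene each hold a full-screen quad with the respective
// shader material; displayScene holds the 1M points. Set up elsewhere.
var currentTarget = 0;  // index into posTargets, the two ping-pong buffers

function animate() {
    requestAnimationFrame(animate);

    // Step 2: write new velocities, reading old velocities + positions
    velUniforms.targetPos.value.copy(mousePos);
    renderer.setRenderTarget(velTarget);
    renderer.render(velScene, orthoCamera);

    // Step 3: write new positions into the *other* position target
    posUniforms.velTex.value = velTarget.texture;
    posUniforms.posTex.value = posTargets[currentTarget].texture;
    renderer.setRenderTarget(posTargets[1 - currentTarget]);
    renderer.render(posScene, orthoCamera);
    currentTarget = 1 - currentTarget;      // swap the two buffers

    // Step 4: draw 1M points, positioned by sampling the new texture
    displayUniforms.posTex.value = posTargets[currentTarget].texture;
    renderer.setRenderTarget(null);         // back to the default framebuffer
    renderer.render(displayScene, camera);
}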



Step 2: Determining Velocity per particle:

They issue a draw call on one million points, whose output will be saved as a texture. In the vertex shader, each fragment gets a varying named "vUv", whose two components determine the x and y pixel position inside the textures used in the process.

The next step is its fragment shader, since only that shader can produce output (as RGB values into the framebuffer, which gets converted into a texture buffer afterwards, all of it happening inside GPU memory only). You can see in the id="velFrag" fragment shader that it gets an input variable called uniform vec3 targetPos;. It contains the mouse coordinate, probably in the -1.00f to +1.00f range. Such uniforms are set cheaply from the CPU each frame, because they are shared among all instances and don't involve large memory transfers. (They probably also update the mouse coordinates only once every few frames, to lower CPU usage.)
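Feeding that uniform could look like the sketch below (the event handler and variable names are my assumptions, not the demo's pick()):

// Hypothetical: convert a mouse event to the -1..+1 range the shader
// expects and hand it over as a uniform before the next render.
window.addEventListener('mousemove', function (event) {
    var x =  (event.clientX / window.innerWidth)  * 2 - 1;
    var y = -(event.clientY / window.innerHeight) * 2 + 1;
    velUniforms.targetPos.value.set(x, y, 0);
});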

What's going on here? Well, that shader calculates the distance of the particle to the mouse coordinate and, depending on that, alters the particle's velocity; the velocity also holds information about the particle's flight direction. Note: this velocity step also makes particles gain momentum and keep flying past/overshooting the mouse position, depending on the gray value.
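A minimal sketch of what such a velocity fragment shader could look like (this is not the demo's velFrag; I'm assuming positions and targetPos share the same coordinate range, and the damping/steering constants are invented):

// Hypothetical velocity pass, in the style of id="velFrag"
uniform sampler2D velTex;   // last frame's velocities
uniform sampler2D posTex;   // last frame's positions
uniform vec3 targetPos;     // mouse position
varying vec2 vUv;           // this particle's pixel in the textures

void main() {
    vec3 pos = texture2D(posTex, vUv).rgb;
    vec3 vel = texture2D(velTex, vUv).rgb;

    // steer toward the mouse; damping < 1.0 keeps the momentum bounded
    vec3 toMouse = targetPos - pos;
    vel = vel * 0.95 + toMouse * 0.01;

    gl_FragColor = vec4(vel, 1.0);
}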



Step 3: Updating positions per particle:

So far, each particle has a velocity and a previous position. Those two values get processed into a new position, which is again output as a texture, this time into the positionTexture. Until the whole frame has been rendered (into the default framebuffer) and then marked as the new texture, the old positionTexture remains unchanged and can be read with ease:

In the id="posFrag" fragment shader, they read from both textures (posTexture and velTexture) and process this data into a new position. They output the x and y position coordinates into the colors of that texture (as red and green values).
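Sketched out, such a position pass could look like this (again not the demo's posFrag, just an assumed minimal version):

// Hypothetical position pass, in the style of id="posFrag"
uniform sampler2D posTex;   // old positions (previous frame)
uniform sampler2D velTex;   // velocities from step 2
varying vec2 vUv;

void main() {
    vec3 pos = texture2D(posTex, vUv).rgb;
    vec3 vel = texture2D(velTex, vUv).rgb;
    pos += vel;                       // simple Euler integration step
    gl_FragColor = vec4(pos, 1.0);    // x -> red, y -> green
}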



Step 4: Prime time (=output)

To output the results, they probably took a million points/vertices again and gave them the positionTexture as input. The vertex shader then sets the position of each point by reading the texture's RGB value at location x,y (passed in as vertex attributes).

// From <script type="x-shader/x-vertex" id="dispVert">
// x and y are vertex attributes addressing this particle's pixel in posTex
vec3 mvPosition = texture2D(posTex, vec2(x, y)).rgb;
gl_PointSize = 1.0;
gl_Position = projectionMatrix * modelViewMatrix * vec4(mvPosition, 1.0);

In the display fragment shader, they only need to set a color. Note the low alpha: about 20 particles (1 / 0.05) have to stack up on a pixel to light it up fully.

// From <script type="x-shader/x-fragment" id="dispFrag">
gl_FragColor = vec4(vec3(0.5, 1.0, 0.1), 0.05);
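For that alpha stacking to work, the point material has to blend rather than overwrite. Here is a sketch of how the display pass might be set up; additive blending is my assumption, as are the names:

// Hypothetical display material setup (names assumed)
var displayMaterial = new THREE.ShaderMaterial({
    uniforms: { posTex: { value: null } },
    vertexShader: document.getElementById('dispVert').textContent,
    fragmentShader: document.getElementById('dispFrag').textContent,
    blending: THREE.AdditiveBlending,  // lets the 0.05 alphas accumulate
    transparent: true,
    depthTest: false                   // points should not occlude each other
});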



I hope this made it clear how this little demo works :-) I am not the author of that demo, though. I just noticed this answer actually became a super duper detailed one; skim the bold keywords to get the short version.