For a physics particle-system simulation, I want to optimize a WebGL2 program. Which is faster: adjusting vertex positions with transform feedback that reads from a 3D texture (setting a position component to, for example, `color.r` from a texel of the texture), or reading the entire 3D texture back to the CPU, extracting the position values for all the vertices there, and resubmitting the new vertex array to the GPU for the next draw cycle?
Being a beginner, I have no idea which would be faster. I need a texture because the position calculation for each vertex requires the positions of its 26 neighboring particles.
I have no real code to show; I'm hoping for guidance on which approach to take before I write either one. The sketch below is only to illustrate what I have in mind.
My intuition says that staying on the GPU would be faster than pumping at least 1,000,000 vertices' worth of data back and forth every draw cycle, but that's a newbie intuition, and I'd rather get guidance from someone who knows this area with confidence.
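For concreteness, here's roughly what I imagine the transform-feedback vertex shader would look like. This is an untested sketch; `u_positions`, `a_cell`, and the update rule at the end are placeholders, and boundary texels are ignored:

```glsl
#version 300 es
precision highp float;
precision highp sampler3D;

// Transform-feedback pass: each vertex reads its own texel and its 26
// neighbors from the position texture, then writes an updated position.
// Rasterization would be disabled (gl.RASTERIZER_DISCARD), so no
// gl_Position is written.
uniform sampler3D u_positions;   // RGBA32F, position packed into .rgb
in ivec3 a_cell;                 // this particle's texel coordinate

out vec3 v_newPosition;          // varying captured by transform feedback

void main() {
  vec3 self = texelFetch(u_positions, a_cell, 0).rgb;
  vec3 sum = vec3(0.0);
  for (int dz = -1; dz <= 1; dz++) {
    for (int dy = -1; dy <= 1; dy++) {
      for (int dx = -1; dx <= 1; dx++) {
        if (dx == 0 && dy == 0 && dz == 0) continue;
        // NOTE: edge cells would need clamping; ignored in this sketch.
        sum += texelFetch(u_positions, a_cell + ivec3(dx, dy, dz), 0).rgb;
      }
    }
  }
  // Placeholder update rule; the real physics would replace this line.
  v_newPosition = mix(self, sum / 26.0, 0.01);
}
```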
You can look up values from a texture in a vertex shader with the `texelFetch` instruction: `vec4 color = texelFetch(someSampler, ivec3(intX, intY, intZ), mipLevel);` (use `ivec2` if it's a 2D texture). Given the values are in a texture, though, it seems like you'd render with fragment shaders, one draw call per N slices. Seems like you asked that question here: stackoverflow.com/questions/55815145/…. I can't imagine how transform feedback would be faster for this case, but maybe I don't understand what you're trying to do – gman
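For reference, a minimal sketch of the per-slice fragment-shader approach the comment describes. This is an untested assumption of how it might look; `u_positions`, `u_sliceZ`, and the update rule are placeholders, and boundary texels are again ignored:

```glsl
#version 300 es
precision highp float;
precision highp sampler3D;

// One draw call per Z slice: a quad covering the slice is rendered into
// layer u_sliceZ of the destination 3D texture (attached with
// gl.framebufferTextureLayer), and each fragment updates one particle.
uniform sampler3D u_positions;   // current positions, one particle per texel
uniform int u_sliceZ;            // which Z slice this draw call writes

out vec4 o_newPosition;

void main() {
  ivec3 cell = ivec3(ivec2(gl_FragCoord.xy), u_sliceZ);
  vec3 self = texelFetch(u_positions, cell, 0).rgb;
  vec3 sum = vec3(0.0);
  for (int dz = -1; dz <= 1; dz++)
    for (int dy = -1; dy <= 1; dy++)
      for (int dx = -1; dx <= 1; dx++) {
        if (dx == 0 && dy == 0 && dz == 0) continue;
        sum += texelFetch(u_positions, cell + ivec3(dx, dy, dz), 0).rgb;
      }
  // Placeholder update rule, as in the sketch above.
  o_newPosition = vec4(mix(self, sum / 26.0, 0.01), 1.0);
}
```

Either way, the whole update stays on the GPU; the difference is only whether the result is captured into a buffer (transform feedback) or into the next position texture (render-to-slice).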