
I have a GPU implementation of Marching Cubes which uses a sequence of 6 GL compute shaders, with each reading from buffers written to by previous shaders, after the appropriate memory barriers. The buffers used in earlier stages hold temporary marker variables, and should be resized to 0 when no longer needed, but not deleted as I'll want them again for later runs.

In some stages, I need to read from a buffer in a shader then deallocate it immediately after the shader completes, before allocating buffers for the next shader stage. My question is how to do this safely. The memory barrier docs talk about ensuring all writes are completed before allowing another shader to read, but say nothing about reads in the first shader.

If I do:

glUseProgram(firstShader);
glDispatchCompute(size,1,1);
glMemoryBarrier(GL_BUFFER_UPDATE_BARRIER_BIT);
glNamedBufferData(firstBuffer,0,NULL,GL_DYNAMIC_DRAW);
glNamedBufferData(secondBuffer,1000000,&data,GL_DYNAMIC_DRAW);
glUseProgram(secondShader);
glDispatchCompute(size,1,1);

is firstBuffer guaranteed not to be resized until firstShader is done reading from it? If not, how do I make this happen?

Be advised that allocating/deallocating memory in your primary performance loop is not going to buy you anything in terms of performance. Indeed, you'd probably get a lot more out of your algorithm if you restructured your compute shaders to use shared variables rather than multiple passes and reads/writes to GPU memory. – Nicol Bolas
Thanks for the comment, Nicol. I'm modelling pretty large meshes, so the issue is memory use rather than performance. I use the first couple of passes to gather the samples and find the geometry-producing cubes, so I can run a much reduced number of invocations from then on. There are a couple of points where I have to keep the whole sample space in memory, which can take hundreds of MB, so I'd like to get rid of it as soon as I'm done. I'm not sure how to go about using shared variables, as they're quite small and limited to a single workgroup, right? – russ

1 Answer


"and should be resized to 0 when no longer needed, but not deleted as I'll want them again for later runs."

Resizing a buffer is equivalent to deleting it and allocating a new buffer on the same id.
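
For example, using the id from the question (a minimal sketch; the two calls are alternatives, not a sequence, and in either case the old data store is only actually freed once nothing, including already-issued GPU commands, still references it):

// "Resize to 0": attach a fresh, empty data store; the id survives for later runs.
glNamedBufferData(firstBuffer,0,NULL,GL_DYNAMIC_DRAW);

// Or delete outright: the id goes away too; recreate it later with glCreateBuffers.
glDeleteBuffers(1,&firstBuffer);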

"In some stages, I need to read from a buffer in a shader then deallocate it immediately after the shader completes, before allocating buffers for the next shader stage. My question is how to do this safely."

Just delete it. Deleting a buffer only deletes the id, not the underlying storage. The id is just another reference to the actual buffer object; when you resize or delete a buffer, only that association between the id and the actual buffer is severed. Resizing actually creates a new data store and reassociates the id with it, and any call to glBufferData does the same thing (in contrast to glBufferSubData, which updates the existing store). This is called "orphaning".
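
A small sketch of that distinction, with an illustrative id (buf), size, and data pointer:

GLuint buf;
glCreateBuffers(1,&buf);
glNamedBufferData(buf,1024,NULL,GL_DYNAMIC_DRAW); // allocate a data store

// Updates the existing data store in place; no new allocation.
glNamedBufferSubData(buf,0,1024,data);

// Re-specifies the buffer: detaches the old store and attaches a brand-new
// one to the same id ("orphaning"), even if the size is unchanged.
glNamedBufferData(buf,1024,data,GL_DYNAMIC_DRAW);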

The actual buffer storage is deallocated once the last reference to it, whether from an id or from pending GPU use, is released.
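
Applied to the sequence from the question, the resize (or delete) can therefore go straight after the dispatch; nothing extra is needed for the deallocation itself. A sketch follows (the barrier is still required, but only so secondShader sees firstShader's writes; GL_SHADER_STORAGE_BARRIER_BIT here assumes secondShader reads those buffers as SSBOs, whereas the question used GL_BUFFER_UPDATE_BARRIER_BIT):

glUseProgram(firstShader);
glDispatchCompute(size,1,1);

// Safe immediately: the dispatch above was already issued, so the old data
// store of firstBuffer stays alive until that dispatch has finished with it.
glNamedBufferData(firstBuffer,0,NULL,GL_DYNAMIC_DRAW); // or glDeleteBuffers(1,&firstBuffer);

glNamedBufferData(secondBuffer,1000000,&data,GL_DYNAMIC_DRAW);

// Needed for visibility of firstShader's writes to the buffers secondShader
// reads, not for the deallocation above.
glMemoryBarrier(GL_SHADER_STORAGE_BARRIER_BIT);

glUseProgram(secondShader);
glDispatchCompute(size,1,1);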