3
votes

Is there a way to set the color of a specific pixel by its coordinate, instead of writing to the predetermined coordinate via gl_FragColor?

I'm currently trying to implement the Mean Shift algorithm via shaders. My input is a black and white texture, where white dots represent points to be clustered and black represents no-data.

After calculating the weighted average of all point positions in the neighborhood, I have to set the pixel in the resulting position to a new color that represents a cluster.

For example, say I look at an 18x18 neighborhood centered on the pixel corresponding to gl_FragCoord and find 3 white pixels: gl_FragCoord = (30,33); Pixel 1 at (30,33); Pixel 2 at (27,33); Pixel 3 at (30,30).

After averaging their positions, I get (29,32). Is there a way to set the pixel at (29,32) to a different color from within a fragment shader invocation that has a different gl_FragCoord (for example, 30,33)?

Something like `gl_FragColor(vec2(29,32)) = vec4(1.0,1.0,1.0,1.0);`?


2 Answers

3
votes

As Christian said, it's not possible. If your hardware supports it, switching to a compute framework or to image load/store is your best option.

If you must use GLSL without image load/store, you do have an option: if your image has n pixels total, send n vertices to the vertex shader as points. In the vertex shader, read from the texture based on gl_VertexID (available since GLSL 1.30; if you have 1.40+ you should probably use instancing and gl_InstanceID instead), and position the point so that when it reaches the fragment shader it covers exactly the pixel you want. Then just have the fragment shader output white no matter what.

It's a hack, but it may work fine if you have no other options.
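A minimal sketch of the vertex-shader half of this hack (the uniform names and the mean-shift computation itself are illustrative, not part of the answer): draw `width * height` vertices as `GL_POINTS` with no vertex attributes, and let each invocation recover "its" source pixel from gl_VertexID.

```glsl
#version 330 core
// Scatter hack: one point-vertex per input pixel. Names below are assumptions.
uniform sampler2D uInput;  // the black/white input texture
uniform vec2 uSize;        // texture dimensions in pixels

void main() {
    // Recover this vertex's source pixel from its index.
    ivec2 src = ivec2(gl_VertexID % int(uSize.x), gl_VertexID / int(uSize.x));

    // ... compute the 18x18 neighborhood mean around `src` with texelFetch,
    // yielding the target pixel `target` ...
    vec2 target = vec2(src);  // placeholder for the computed cluster position

    // Position the point so it rasterizes exactly over pixel `target`:
    // pixel center -> [0,1] texture space -> [-1,1] clip space.
    vec2 ndc = ((target + 0.5) / uSize) * 2.0 - 1.0;
    gl_Position = vec4(ndc, 0.0, 1.0);
    gl_PointSize = 1.0;
}
```

The paired fragment shader then just emits `vec4(1.0)` unconditionally. Note that a point landing on the same target pixel from several vertices simply overdraws, which is usually fine for marking cluster positions.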

3
votes

No, that's not possible. A fragment shader is invoked for a specific fragment at a specific position and can only output the values for that particular fragment (or discard it entirely); those values then get written into the framebuffer at exactly that predetermined fragment position.

What you can do is write your outputs not to the framebuffer at all, but into some other storage: either an arbitrary image (using image load/store) or a shader storage buffer. But both of those features require quite modern hardware (GL 4+). And in that case you could just as well do the whole thing in a proper compute shader in the first place (or in an actual compute framework like CUDA or OpenCL, if you don't need any other OpenGL functionality).
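For illustration, here is roughly what the image load/store route looks like in a fragment shader (GL 4.2+ core; the image name and the hard-coded target are taken from the question's example, the rest is an assumption, not a full implementation):

```glsl
#version 420 core
// An arbitrary-write output image, bound separately from the framebuffer.
layout(binding = 0, rgba8) uniform writeonly image2D uClusters;

void main() {
    // ... compute the weighted mean of the white pixels in the
    // neighborhood, e.g. (29, 32) from the question's example ...
    ivec2 target = ivec2(29, 32);

    // Scatter: write white to `target`, regardless of gl_FragCoord.
    imageStore(uClusters, target, vec4(1.0));
    // Caution: writes from concurrent invocations to the same texel are
    // unordered; use imageAtomic* operations (on supported formats) and a
    // memory barrier on the application side if you need defined results.
}
```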

Another way that also works on older hardware is to do the work in the vertex shader instead of the fragment shader. There you can compute the vertex's clip-space position (which then becomes the fragment position) however you like. Using a geometry shader instead of (or after) the vertex shader even lets you scatter data (emit more than one output for a single input) or discard inputs entirely.
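A sketch of the geometry-shader variant (GL 3.2+ / GLSL 1.50; the input varying `vMean` is a hypothetical per-point cluster position computed earlier in the vertex shader): each input point can emit one point at the computed position, several points, or none at all, which is exactly the scatter/discard flexibility described above.

```glsl
#version 150 core
layout(points) in;
layout(points, max_vertices = 4) out;

in vec2 vMean[];   // cluster position in pixels, from the vertex shader (assumed)
uniform vec2 uSize;

void main() {
    // Emit one point per output pixel we want written; emitting nothing
    // discards this input entirely.
    vec2 ndc = ((vMean[0] + 0.5) / uSize) * 2.0 - 1.0;
    gl_Position = vec4(ndc, 0.0, 1.0);
    EmitVertex();
    EndPrimitive();
}
```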