I'm trying to take advantage of a GPU's parallelism to build an image-processing application. I have a shader that takes two textures and, based on some uniform variables, computes an output texture. But instead of a transparency (alpha) value, each texture pixel needs an extra metadata byte that is mandatory in the computation.
So I'm considering running the shader twice each frame: once to compute the dynamic metadata as a single-byte texture, and once to compute the resulting paint texture, which I need to be 3 bytes per pixel (to limit memory usage, as quite a few such textures might be loaded at once).
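To be concrete, this is roughly how I imagine allocating the two outputs (just a sketch assuming a GL 3.0+ context; the texture names and the 512x512 size are placeholders):

```c
/* Sketch: allocate the two render targets with sized internal formats.
 * GL_R8   -> one byte per pixel, for the metadata texture.
 * GL_RGB8 -> three bytes per pixel, for the paint texture.
 * Assumes an OpenGL 3.0+ context is already current. */
GLuint metaTex, paintTex;

glGenTextures(1, &metaTex);
glBindTexture(GL_TEXTURE_2D, metaTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, 512, 512, 0,
             GL_RED, GL_UNSIGNED_BYTE, NULL);

glGenTextures(1, &paintTex);
glBindTexture(GL_TEXTURE_2D, paintTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, 512, 512, 0,
             GL_RGB, GL_UNSIGNED_BYTE, NULL);
```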
I find this a bit complicated. I've used OpenGL to paint to the screen before, but this time I need to render to two different textures, which I don't know how to do. Besides, the built-in gl_FragColor variable is of type vec4, but I need different output values.
- So, to sum it up a little: is it possible for a fragment shader to output anything other than a vec4?
- Is it possible to write to two different textures in a single draw call?
- Is it possible to keep an editable texture on the GPU to accumulate changes, until editing ends and the data has to be passed back to the CPU?
- Which OpenGL calls would be most useful for the above?
- The paint texture should also be retrievable, so it can be shown on the screen.
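From what I've gathered so far, a framebuffer object with two color attachments might be the direction, something like the sketch below, but I'm not sure it's correct (metaTex and paintTex are hypothetical textures allocated elsewhere, and the shader body is a placeholder):

```c
/* Sketch: attach both textures to one FBO and declare two outputs in
 * the fragment shader, so a single draw call writes both textures.
 * Assumes GL 3.0+; metaTex and paintTex are allocated elsewhere. */
GLuint fbo;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, paintTex, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1,
                       GL_TEXTURE_2D, metaTex, 0);
const GLenum bufs[2] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 };
glDrawBuffers(2, bufs);

/* Matching fragment shader (GLSL 3.30), using user-declared outputs
 * instead of gl_FragColor: */
const char *fragSrc =
    "#version 330 core\n"
    "uniform sampler2D paintIn;\n"
    "uniform sampler2D otherIn;\n"
    "in vec2 uv;\n"
    "layout(location = 0) out vec3  paintOut; /* -> GL_RGB8 target */\n"
    "layout(location = 1) out float metaOut;  /* -> GL_R8 target   */\n"
    "void main() {\n"
    "    vec4 a = texture(paintIn, uv);\n"
    "    vec4 b = texture(otherIn, uv);\n"
    "    paintOut = mix(a.rgb, b.rgb, 0.5); /* placeholder computation */\n"
    "    metaOut  = a.a;                    /* placeholder metadata    */\n"
    "}\n";
```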
All of the above could easily be done by blitting textures on the CPU instead: keep all the relevant data CPU-side, do the work 60 times per second, and upload the modified regions to the GPU each frame. For changing relatively small regions of a texture each frame (about 20% of roughly 512x512 textures), would you consider the shader-based approach worth the trouble?
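For comparison, the CPU-side alternative I mean is just uploading the dirty sub-rectangle each frame, along these lines (a sketch; pixels, x, y, w, and h are hypothetical):

```c
/* Sketch of the CPU alternative: modify pixels in system memory,
 * then upload only the changed sub-rectangle of the texture.
 * 'pixels' points to tightly packed RGB8 data for the w*h region. */
glBindTexture(GL_TEXTURE_2D, paintTex);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1); /* RGB8 rows aren't 4-byte aligned */
glTexSubImage2D(GL_TEXTURE_2D, 0, x, y, w, h,
                GL_RGB, GL_UNSIGNED_BYTE, pixels);
```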