It seems like I am completely confused by OpenGL format conversions related to image load/store. I came from DX world, where things are relatively clear. For example:
- You create a texture with some specific format (say `RGBA32_FLOAT`)
- You declare an RW texture in your shader, `RWTexture2D<float4>`, that matches the format
- You read/write to/from the texture

You can also bind, say, an `RGBA8_UNORM` texture to a `RWTexture2D<float4>`, and DirectX will perform the format conversion in an obvious and clear way.
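For concreteness, the DX side looks roughly like this (a sketch; the resource names, register slot, and thread-group size are just illustrative):

```hlsl
// Shader side: an RW texture whose element type is float4.
RWTexture2D<float4> myTex : register(u0);

[numthreads(8, 8, 1)]
void CSMain(uint3 id : SV_DispatchThreadID)
{
    float4 v = myTex[id.xy];   // read: converted to float4 for me
    myTex[id.xy] = v * 0.5f;   // write: converted back to the bound format
}
```

Even if the underlying resource is `RGBA8_UNORM`, the runtime handles the conversion to and from `float4` on each load/store.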
Now say I want to do the same thing in OpenGL. So I:

- Create a GL texture with the `GL_RGBA32F` internal format
- In my shader, declare `layout(rgba32f) uniform image2D myImg;`
- Bind my texture to the image unit with `glBindImageTexture(..., GL_RGBA32F);`
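Putting those steps together, my shader looks roughly like this (a sketch, assuming a compute shader; the binding index and thread-group size are arbitrary):

```glsl
#version 430
layout(local_size_x = 8, local_size_y = 8) in;

// The format is stated again here, even though the texture
// was already created with internalformat GL_RGBA32F.
layout(rgba32f, binding = 0) uniform image2D myImg;

void main()
{
    ivec2 p = ivec2(gl_GlobalInvocationID.xy);
    vec4 v = imageLoad(myImg, p);
    imageStore(myImg, p, v * 0.5);
}
```

and on the C side I bind it with `glBindImageTexture(0, tex, 0, GL_FALSE, 0, GL_READ_WRITE, GL_RGBA32F);`, naming the format yet again.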
So I counted four places where I specify the format:

- Once when I create the texture
- Twice when I declare the image uniform: `layout(rgba32f)` names the format, and `image2D` itself (as opposed to `iimage2D` or `uimage2D`) tells me that all load/store operations will take `vec4` (not `ivec4` or `uvec4`)
- Once when I bind the image with `glBindImageTexture`
I cannot understand the purpose of `layout(rgba32f)`, which seems totally redundant: knowing the texture's internal format should be enough to perform all format conversions. What layout do I need to specify if my internal texture format is normalized RGBA8 and the image is declared as `uniform image2D myImg;`? And why do I need to specify the layout at all? Isn't it clear what kind of format conversion should be performed?
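For the RGBA8 case, my guess (which may be wrong, hence the question) is that it would look like this:

```glsl
// Texture created with internalformat GL_RGBA8 (normalized)
layout(rgba8) uniform image2D myImg;

void main()
{
    // imageLoad still returns vec4, with components unpacked to [0, 1]
    vec4 v = imageLoad(myImg, ivec2(0, 0));
    // imageStore takes vec4; values are clamped and packed back to 8-bit UNORM
    imageStore(myImg, ivec2(0, 0), vec4(0.25));
}
```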
The only idea I have that justifies these layouts is data reinterpretation, like writing raw RGBA32F data to an RGBA32U texture, which does not seem very useful to me. Besides, there are functions like `floatBitsToInt()` that do that job.
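To illustrate what I mean: reinterpretation is already available in plain GLSL without any layout tricks, something like this sketch:

```glsl
layout(rgba32f) uniform image2D myImg;

void main()
{
    vec4 v = imageLoad(myImg, ivec2(0, 0));
    // Same bit patterns, reinterpreted componentwise as unsigned integers:
    uvec4 bits = floatBitsToUint(v);
}
```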
The purpose of passing the format to `glBindImageTexture` is a complete mystery to me.
So all these layouts are a huge source of confusion for me. Could you please help me better understand the reasoning behind them?
> …`layout(format)` if the texture is `writeonly`. – Yakov Galka