
It seems like I am completely confused by OpenGL format conversions related to image load/store. I came from DX world, where things are relatively clear. For example:

  1. You create a texture with some specific format (say RGBA32_FLOAT)
  2. You declare an RW texture in your shader, RWTexture2D<float4>, that matches the format
  3. You read/write to/from texture

You can also bind, say, an RGBA8_UNORM texture to an RWTexture2D<float4>, and DirectX will perform the format conversion in an obvious and clear way.

Now say I want to do the same thing in OpenGL. So I

  1. Create GL texture with GL_RGBA32F internal format
  2. In my shader, I declare layout(rgba32f) uniform image2D myImg;
  3. Bind the texture to an image unit with glBindImageTexture(..., GL_RGBA32F);
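
The three steps above would look roughly like this in host code (a minimal sketch, assuming a current GL 4.2+ context; the texture size and image unit are arbitrary):

```c
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);

/* 1. Create immutable storage with the GL_RGBA32F internal format */
glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGBA32F, 256, 256);

/* 2. In the shader: layout(rgba32f) uniform image2D myImg; */

/* 3. Bind level 0 of the texture to image unit 0, naming the format again */
glBindImageTexture(0, tex, 0, GL_FALSE, 0, GL_READ_WRITE, GL_RGBA32F);
```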

So I counted four places where I specify the format:

  1. When I create the texture
  2. Twice when I declare the image uniform:

    • layout(rgba32f)
    • image2D itself tells me that all load/store operations will take vec4 (not ivec4 or uvec4)
  3. When I bind image with glBindImageTexture

I cannot understand the purpose of layout(rgba32f), which seems totally redundant. It seems that knowing the internal texture format is enough to perform all format conversions. What layout do I need to specify if my internal texture format is normalized RGBA8 and the image is declared as uniform image2D myImg;? And why do I need to specify the layout at all — isn't it clear what kind of format conversion should be performed? The only idea I have that justifies these layouts is data reinterpretation, like writing raw RGBA32F data to an RGBA32U texture, which does not seem very useful to me. Besides, you have functions like floatBitsToInt() that do that work.

The purpose of passing the format to glBindImageTexture is a complete mystery to me.

So all these layouts are a huge source of confusion for me. Could you please help me better understand the reasoning behind them?

I think it should be pointed out that all this juggling with images and image layouts is only required if you want to write to an image from a shader. If your intention is just to source pixels from a texture, everything can be inferred from the internal type. - datenwolf
@datenwolf well if I only want to read from the image, then what's the point in using images? Texture samplers already provide read-only random access. Or am I missing something? Again, I am trying to match things in DX and GL. In DX, you declare texture as RW with clear intent to write to random locations. Otherwise you should just use normal texture, not RW. - Egor
@datenwolf: that's backwards. You don't need the layout(format) if the texture is writeonly. - Yakov Galka

1 Answer


The purpose of passing the format to glBindImageTexture is a complete mystery to me.

In the GL, one can often find the reasoning behind particular API decisions in the "Issues" section of the relevant extension specifications. In this case, Issue 31 of the GL_ARB_shader_image_load_store extension might be helpful:

(31) Why do we have a format parameter on BindImageTexture?

RESOLVED: It allows some amount of bit-casting, to view a texture with one format using another format. In addition to any benefits from viewing textures with a different format, it also permits atomics operations on some multi-component textures by allowing them to be viewed using R32I or R32UI formats.

In the EXT_shader_image_load_store extension, there was an additional benefit to working around a more severe limitation on the set of formats supported for stores -- only formats like R8, R16, R32F, RG32F, RGBA32F are supported there. Other formats not supported there can be viewed as supported formats (e.g., RGBA8 could map to R32UI), with shader code doing any needed packing and unpacking.

So one should consider the internal texture format and the format of the image as two different things. They do have to be compatible, but they don't have to match.
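
As a sketch of such a re-view (the texture name and image unit are placeholders): a texture whose storage was created as GL_RGBA8 could be bound with a different, size-compatible image format:

```c
/* 'tex' was created with glTexStorage2D(..., GL_RGBA8, ...).
 * Both formats are 32 bits per texel, so they are compatible;
 * viewing it as R32UI enables imageAtomic* on the whole texel. */
glBindImageTexture(0, tex, 0, GL_FALSE, 0, GL_READ_WRITE, GL_R32UI);

/* Matching declaration on the shader side:
 * layout(r32ui) uniform uimage2D myImg; */
```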

I cannot understand what is the purpose of layout(rgba32f), which seems totally redundant. It seems that knowing internal texture format is enough to perform all format conversions.

The internal format of the image (not the texture) might be enough - but that information is not part of the shader's state and not known at shader compile time. And the GL does not abstract that away behind the user's back. The format in the layout qualifier must match exactly the format for the associated image unit, otherwise the results will be undefined. The shader will just read or write the data in the specified format and if that does not match the actual one, you are screwed and the spec doesn't guarantee anything.

What layout do I need to specify if my internal texture format is normalized RGBA8 and image is defined as uniform image2D myImg;

layout(rgba8) is the only one allowed for that case as per the spec.
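
For reference, the matching declaration could look like this (the uniform name and the trivial copy are just illustration):

```glsl
// Normalized RGBA8 texture accessed as a float image:
layout(rgba8) uniform image2D myImg;

void touch(ivec2 p) {
    vec4 texel = imageLoad(myImg, p);        // channels arrive as floats in [0,1]
    imageStore(myImg, p, vec4(texel.rgb, 1.0));
}
```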

The only idea I have that justifies these layouts is to perform data reinterpretation like writing raw RGBA32F data to RGBA32U texture, which does not seem to be very useful for me.

You actually got that backwards. The layout(format) doesn't allow reinterpretation; the format parameter of glBindImageTexture() does, at least in a limited way.