I've been under the assumption that my gamma correction pipeline should be as follows:
- Use an sRGB format (GL_SRGB8_ALPHA8) for all textures loaded in, as all art programs pre-gamma correct their files. When sampling from a GL_SRGB8_ALPHA8 texture in a shader, OpenGL will automatically convert to linear space.
- Do all lighting calculations, post processing, etc. in linear space.
- Convert back to sRGB space when writing the final color that will be displayed on the screen.
Note that in my case the final color write involves writing from an FBO (which is a linear RGB texture) to the back buffer.
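Roughly, the setup I'm describing looks like this (a sketch only; variables such as width, height, pixels, fboColorTex and drawFullScreenQuad are placeholders, the internal formats are just examples, and context/FBO creation is omitted):

```
// Upload art textures as sRGB so sampling them in a shader returns linear values.
GLuint albedoTex;
glGenTextures(1, &albedoTex);
glBindTexture(GL_TEXTURE_2D, albedoTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_SRGB8_ALPHA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);

// The FBO color attachment uses a linear (non-sRGB) format, so lighting
// and post processing happen in linear space.
glBindTexture(GL_TEXTURE_2D, fboColorTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, width, height, 0,
             GL_RGBA, GL_FLOAT, nullptr);

// Final pass: draw the FBO texture to the back buffer and let the driver
// apply the linear -> sRGB conversion on write.
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glEnable(GL_FRAMEBUFFER_SRGB);
drawFullScreenQuad();               // samples fboColorTex
glDisable(GL_FRAMEBUFFER_SRGB);
```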
My assumption has been challenged: if I gamma correct in the final stage, my colors are brighter than they should be. I set up a solid color of value { 255, 106, 0 } to be drawn by my lights, but when I render I get { 255, 171, 0 } (as determined by print-screening and color picking). Instead of orange I get yellow. If I don't gamma correct at the final step I get exactly the right value of { 255, 106, 0 }.
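For what it's worth, the 171 is what I'd expect if a second encode were applied on top of already-encoded values (using the common gamma = 2.2 approximation of sRGB): 106 / 255 ≈ 0.416, 0.416^(1/2.2) ≈ 0.671, and 0.671 * 255 ≈ 171.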
According to some resources modern LCD screens mimic CRT gamma. Do they always? If not, how can I tell if I should gamma correct? Am I going wrong somewhere else?
Edit 1
I've now noticed that even though the color I write with the light is correct, places where I use colors from textures are not correct (but rather far darker as I would expect without gamma correction). I don't know where this disparity is coming from.
Edit 2
After trying GL_RGBA8 for my textures instead of GL_SRGB8_ALPHA8, everything looks perfect, even when using the texture values in lighting computations (if I halve the intensity of the light, the output color values are halved).
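(As a rough sanity check with gamma = 2.2: if the pipeline were still encoding at the end, halving a linear value of 0.8 would take the output from about 230 to about 168, i.e. roughly 73% rather than 50%, so the clean halving does suggest a fully linear path.)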
My code is no longer taking gamma correction into account anywhere, and my output looks correct.
This confuses me even more, is gamma correction no longer needed/used?
Edit 3 - In response to datenwolf's answer
After some more experimenting I'm confused on a couple points here.
1 - Most image formats are stored non-linearly (in sRGB space)
I've loaded a few images (in my case both .png and .bmp images) and examined the raw binary data. It appears to me as though the images are actually in the RGB color space: if I compare the pixel values reported by an image editing program against the byte array I get in my program, they match up perfectly. Since my image editor is giving me RGB values, this would indicate the images are stored in RGB.
I'm using stb_image.h/.c to load my images and followed it all the way through loading a .png and did not see anywhere that it gamma corrected the image while loading. I also examined the .bmps in a hex editor and the values on disk matched up for them.
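For reference, the comparison I did is roughly this (a sketch; the file name is just an example):

```
#include <cstdio>
#include "stb_image.h"

int main()
{
    int w, h, n;
    // stb_image hands back the bytes as stored in the file; as far as I can
    // tell it applies no gamma / sRGB conversion for 8-bit loads.
    unsigned char *data = stbi_load("test.png", &w, &h, &n, 4);
    if (!data) return 1;

    // First pixel, to compare against the color picker in the image editor.
    std::printf("pixel(0,0) = %u %u %u %u\n",
                data[0], data[1], data[2], data[3]);

    stbi_image_free(data);
    return 0;
}
```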
If these images are actually stored on disk in linear RGB space, how am I supposed to know (programmatically) when to specify that an image is in sRGB space? Is there some way to query for this that a more fully featured image loader might provide? Or is it up to the image creators to save their images as gamma corrected (or not), meaning establishing a convention and following it for a given project? I've asked a couple of artists and neither of them knew what gamma correction is.
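One possible avenue (an assumption on my part, not something my current loader supports): .png files at least can declare this in their metadata via the sRGB and gAMA chunks, though stb_image ignores them. A quick-and-dirty scan for those chunks might look like this (a sketch; no validation or error handling):

```
#include <cstdint>
#include <cstdio>
#include <cstring>

// Scan a .png for the "sRGB" and "gAMA" chunks, which, if present, declare
// the color space / transfer curve the pixel data is encoded in.
int main()
{
    std::FILE *f = std::fopen("test.png", "rb");
    if (!f) return 1;

    unsigned char sig[8];
    std::fread(sig, 1, 8, f);                       // skip the PNG signature

    for (;;)
    {
        unsigned char hdr[8];
        if (std::fread(hdr, 1, 8, f) != 8) break;

        // Chunk layout: 4-byte big-endian length, 4-byte type, data, 4-byte CRC.
        uint32_t len = (uint32_t(hdr[0]) << 24) | (uint32_t(hdr[1]) << 16) |
                       (uint32_t(hdr[2]) << 8)  |  uint32_t(hdr[3]);
        char type[5] = { char(hdr[4]), char(hdr[5]), char(hdr[6]), char(hdr[7]), 0 };

        if (std::strcmp(type, "sRGB") == 0) std::printf("file declares sRGB\n");
        if (std::strcmp(type, "gAMA") == 0) std::printf("file declares a gamma value\n");
        if (std::strcmp(type, "IEND") == 0) break;

        std::fseek(f, long(len) + 4, SEEK_CUR);     // skip chunk data + CRC
    }
    std::fclose(f);
    return 0;
}
```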
If I specify my images are sRGB, they are too dark unless I gamma correct in the end (which would be understandable if the monitor output using sRGB, but see point #2).
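(The amount of darkening is consistent with decoding without re-encoding: with the 2.2 approximation, a stored value of 128 decodes to about 0.5^2.2 ≈ 0.22 linear, which comes out around 56 if it is then sent to the screen unmodified.)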
2 - "On most computers the effective scanout LUT is linear! What does this mean though?"
I'm not sure I can find where this thought is finished in your response.
From what I can tell, having experimented, all monitors I've tested on output linear values. If I draw a full screen quad and color it with a hard-coded value in a shader with no gamma correction the monitor displays the correct value that I specified.
The sentence I quoted above from your answer, together with my results, would lead me to believe that modern monitors output linear values (i.e. do not emulate CRT gamma).
The target platform for our application is the PC. For this platform (excluding people with CRTs or really old monitors), would it be reasonable to follow whatever your response to #1 is, and then for #2 not gamma correct at all (i.e. skip the final RGB->sRGB transformation, whether done manually or via GL_FRAMEBUFFER_SRGB)?
If this is so, what are the platforms GL_FRAMEBUFFER_SRGB is meant for (or where it would still be valid to use it today)? Or are monitors that use linear RGB really that new, given that GL_FRAMEBUFFER_SRGB was introduced in 2008?
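A related thing I'm wondering: is the right move simply to query whether the back buffer is even sRGB-capable before deciding? Something along these lines (a fragment, assuming a GL 3.x context):

```
// Ask the default framebuffer how its back buffer is encoded. If this
// reports GL_SRGB, enabling GL_FRAMEBUFFER_SRGB makes the driver apply the
// linear -> sRGB conversion on writes; if it reports GL_LINEAR, enabling it
// should have no effect.
GLint encoding = GL_LINEAR;
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glGetFramebufferAttachmentParameteriv(GL_FRAMEBUFFER, GL_BACK_LEFT,
                                      GL_FRAMEBUFFER_ATTACHMENT_COLOR_ENCODING,
                                      &encoding);
printf("back buffer encoding: %s\n", encoding == GL_SRGB ? "sRGB" : "linear");
```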
--
I've talked to a few other graphics devs at my school and from the sounds of it, none of them have taken gamma correction into account and they have not noticed anything incorrect (some were not even aware of it). One dev in particular said that he got incorrect results when taking gamma into account so he then decided to not worry about gamma. I'm unsure what to do in my project for my target platform given the conflicting information I'm getting online/seeing with my project.
Edit 4 - In response to datenwolf's updated answer
Yes, indeed. If somewhere in the signal chain a nonlinear transform is applied, but all the pixel values go unmodified from the image to the display, then that nonlinearity has already been pre-applied on the image's pixel values. Which means, that the image is already in a nonlinear color space.
Your response would make sense to me if I were examining the image on my display. To be clear, when I said I was examining the byte array for the image, I meant I was examining the numerical values in memory for the texture, not the image output on the screen (which I did do for point #2). The only way I can see what you're saying being true, then, is if the image editor was giving me values in sRGB space.
Also note that I did try examining the output on the monitor, as well as modifying the texture color (for example, halving or doubling it), and the output appeared correct (measured using the method I describe below).
How did you measure the signal response?
Unfortunately my methods of measurement are far cruder than yours. When I said I experimented on my monitors, what I meant was that I output a solid-color full-screen quad, whose color was hard coded in a shader, to a plain OpenGL framebuffer (which does not do any color space conversion when written to). When I output white, 75% gray, 50% gray, 25% gray and black, the correct colors are displayed. Admittedly, my interpretation of "correct colors" could be wrong. I take a screenshot and then use an image editing program to see what the values of the pixels are (as well as a visual appraisal to make sure the values make sense). If I understand correctly, if my monitors were non-linear I would need to perform an RGB->sRGB transformation before presenting the values to the display device for them to appear correct.
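Concretely, the test was along these lines (a sketch with window/context setup and includes omitted; I've written it with a plain clear instead of the hard-coded-shader quad since it amounts to the same constant color, and swapBuffers() stands in for whatever the windowing layer provides):

```
// Write known gray levels into the default framebuffer (no sRGB conversion
// requested anywhere), then read back what actually ended up stored there.
const float grays[] = { 0.0f, 0.25f, 0.5f, 0.75f, 1.0f };
for (float g : grays)
{
    glClearColor(g, g, g, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);

    unsigned char pixel[4] = {};
    glReadPixels(0, 0, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, pixel);
    // 0.5 reads back as ~128, which matches what the screenshot / color
    // picker shows and is what I'm calling "correct" above.
    printf("wrote %.2f, framebuffer holds %u\n", g, pixel[0]);

    swapBuffers();   // placeholder for the windowing layer's swap call
}
```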
I'm not going to lie, I feel I'm getting a bit out of my depth here. I'm thinking the solution I might pursue for my second point of confusion (the final RGB->sRGB transformation) will be a tweakable brightness setting, defaulting to whatever looks correct on my devices (no gamma correction).