I have 10-bit YUV (V210) video frames coming in from a capture card, and I would like to unpack this data inside of a GLSL shader and ultimately convert to RGB for screen output. I'm using a Quadro 4000 card on Linux (OpenGL 4.3).
I am uploading the texture with the following settings:
video frame: 720x486 pixels
physically occupies 933120 bytes; each row is padded to a multiple of 128 bytes, giving a stride of 1920 bytes
texture is currently uploaded as 480x486 pixels (stride/4 x height) since this matches the byte count of the data
internalFormat of GL_RGB10_A2
format of GL_RGBA
type of GL_UNSIGNED_INT_2_10_10_10_REV
filtering is currently set to GL_NEAREST
Here is the upload command for clarity:
int stride = ((m_videoWidth + 47) / 48) * 128;  // V210 packs 48 pixels into each 128-byte block, so rows are padded to whole blocks
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB10_A2, stride / 4, m_videoHeight, 0, GL_RGBA, GL_UNSIGNED_INT_2_10_10_10_REV, bytes);  // each 32-bit word becomes one texel, hence width = stride / 4
The data itself is packed like so:
U Y V A | Y U Y A | V Y U A | Y V Y A
Or see Blackmagic's illustration here: http://i.imgur.com/PtXBJbS.png
Each texel is 32 bits total (10 bits each for the "R", "G" and "B" channels and 2 bits for alpha). Where it gets complicated is that 6 video pixels are packed into each 128-bit block of four texels. These blocks simply repeat the above pattern until the end of the frame.
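To spell out my understanding of that pattern in terms of what the shader would actually see after this upload (this is my own reading of the layout, so corrections welcome): if base is the x coordinate of the first texel in a group and row is the scanline, then
vec3 t0 = texelFetch(tex, ivec2(base + 0, row), 0).rgb; // (Cb0, Y0,  Cr0)
vec3 t1 = texelFetch(tex, ivec2(base + 1, row), 0).rgb; // (Y1,  Cb2, Y2)
vec3 t2 = texelFetch(tex, ivec2(base + 2, row), 0).rgb; // (Cr2, Y3,  Cb4)
vec3 t3 = texelFetch(tex, ivec2(base + 3, row), 0).rgb; // (Y4,  Cr4, Y5)
i.e. the six luma samples Y0..Y5 for those six pixels plus the 4:2:2 chroma pairs (Cb0,Cr0), (Cb2,Cr2) and (Cb4,Cr4).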
I know that the components of each texel can be accessed with texture(tex, coord).rgb, but since the order is not the same for every texel (e.g. UYV vs. YUY), the texture coordinates must be manipulated to account for that.
However, I'm not sure how to deal with the fact that more pixels are packed into this texture than the GL knows about. I believe this means I have to handle scaling up/down, as well as min/mag filtering (I need bilinear), manually in the shader. The output window can be any size (smaller than, the same as, or larger than the video), so the shader should not hard-code anything related to those sizes.
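For what it's worth, here is the kind of fragment shader I have been sketching. It assumes a uniform I added myself (videoSize, set to 720x486 from the application) for the real pixel dimensions, uses texelFetch so no hardware filtering is involved, does nearest-neighbour sampling only, and uses approximate Rec. 601 video-range constants for the colour conversion. I don't know if this is the right general approach:
#version 430

uniform sampler2D tex;       // the 480x486 GL_RGB10_A2 texture described above
uniform vec2 videoSize;      // real video dimensions in pixels, e.g. vec2(720.0, 486.0)

in vec2 vTexCoord;           // 0..1 across the output quad
out vec4 fragColor;

// Return (Y, Cb, Cr) for video pixel (px, py), normalized to 0..1.
vec3 fetchYCbCr(int px, int py)
{
    int group  = px / 6;          // which 4-texel group this pixel lives in
    int offset = px - group * 6;  // 0..5 within the group
    int base   = group * 4;       // x coordinate of the group's first texel

    // the four texels of this group (component layout as listed above)
    vec3 t0 = texelFetch(tex, ivec2(base + 0, py), 0).rgb;
    vec3 t1 = texelFetch(tex, ivec2(base + 1, py), 0).rgb;
    vec3 t2 = texelFetch(tex, ivec2(base + 2, py), 0).rgb;
    vec3 t3 = texelFetch(tex, ivec2(base + 3, py), 0).rgb;

    if (offset == 0) return vec3(t0.g, t0.r, t0.b);
    if (offset == 1) return vec3(t1.r, t0.r, t0.b);
    if (offset == 2) return vec3(t1.b, t1.g, t2.r);
    if (offset == 3) return vec3(t2.g, t1.g, t2.r);
    if (offset == 4) return vec3(t3.r, t2.b, t3.g);
    return              vec3(t3.b, t2.b, t3.g);
}

void main()
{
    // Map the 0..1 output coordinate onto real video pixels and clamp to the frame.
    vec2  pix = vTexCoord * videoSize;
    ivec2 p   = ivec2(clamp(pix, vec2(0.0), videoSize - 1.0));

    vec3 ycbcr = fetchYCbCr(p.x, p.y);

    // Approximate Rec. 601 video-range conversion (10-bit black = 64, achromatic = 512).
    float y  = ycbcr.x - 64.0  / 1023.0;
    float cb = ycbcr.y - 512.0 / 1023.0;
    float cr = ycbcr.z - 512.0 / 1023.0;

    vec3 rgb = vec3(1.164 * y + 1.596 * cr,
                    1.164 * y - 0.392 * cb - 0.813 * cr,
                    1.164 * y + 2.017 * cb);

    fragColor = vec4(clamp(rgb, 0.0, 1.0), 1.0);
}
My assumption is that for bilinear output I would have to call fetchYCbCr for the four surrounding video pixels and mix() them by fract(pix) myself, since the hardware filter would blend across the packed channels, but I'm not sure whether that (or this whole texelFetch approach) is the sensible way to do it.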
How can I accomplish this?