2
votes

I am attempting to do some processing on a texture in the pixel shader. The data for the texture comes from a chunk of 8-bit data in memory. The problem I am facing is how to read that data in the shader.

Code to create the texture and resource view:

In OnD3D11CreateDevice:

D3D11_TEXTURE2D_DESC tDesc;
tDesc.Height = 480;
tDesc.Width = 640;
tDesc.Usage = D3D11_USAGE_DYNAMIC;
tDesc.MipLevels = 1;
tDesc.ArraySize = 1;
tDesc.SampleDesc.Count = 1;
tDesc.SampleDesc.Quality = 0;
tDesc.Format = DXGI_FORMAT_R8_UINT;
tDesc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
tDesc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
tDesc.MiscFlags = 0;
V_RETURN(pd3dDevice->CreateTexture2D(&tDesc, NULL, &g_pCurrentImage));
D3D11_SHADER_RESOURCE_VIEW_DESC rvDesc;
g_pCurrentImage->GetDesc(&tDesc);
rvDesc.Format = DXGI_FORMAT_R8_UINT;
rvDesc.Texture2D.MipLevels = tDesc.MipLevels;
rvDesc.Texture2D.MostDetailedMip = tDesc.MipLevels - 1;
rvDesc.ViewDimension = D3D_SRV_DIMENSION_TEXTURE2D;
V_RETURN(pd3dDevice->CreateShaderResourceView(g_pCurrentImage, &rvDesc, &g_pImageRV));

In OnD3D11FrameRender:

HRESULT okay;

if( !g_updateDone ) {
    D3D11_MAPPED_SUBRESOURCE resource;
    resource.pData = mImage.GetData();
    resource.RowPitch = 640;
    resource.DepthPitch = 1;
    okay = pd3dImmediateContext->Map(g_pCurrentImage, 0, D3D11_MAP_WRITE_DISCARD, 0, &resource);

    g_updateDone = true;
}

pd3dImmediateContext->PSSetShaderResources(0, 1, &g_pImageRV);

None of these calls returns an error, so everything seems to work so far.
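For reference, Map fills in the D3D11_MAPPED_SUBRESOURCE rather than reading from it: the usual pattern is to call Map first, copy the image into the pointer the driver returns (honouring RowPitch, which may be larger than the row width), and then call Unmap. A minimal sketch of a row-pitch-aware copy — CopyRows is a hypothetical helper, and the surrounding D3D11 calls (using the names from the snippet above) are shown as comments:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <cstring>

// Copies a tightly packed width x height 8-bit image into a mapped
// destination whose rows are rowPitch bytes apart (rowPitch >= width).
void CopyRows(uint8_t* dst, size_t rowPitch,
              const uint8_t* src, size_t width, size_t height)
{
    for (size_t y = 0; y < height; ++y)
        std::memcpy(dst + y * rowPitch, src + y * width, width);
}

// In OnD3D11FrameRender, the update would then look roughly like:
//
//   D3D11_MAPPED_SUBRESOURCE resource;
//   okay = pd3dImmediateContext->Map(g_pCurrentImage, 0,
//                                    D3D11_MAP_WRITE_DISCARD, 0, &resource);
//   if (SUCCEEDED(okay)) {
//       CopyRows(static_cast<uint8_t*>(resource.pData), resource.RowPitch,
//                mImage.GetData(), 640, 480);
//       pd3dImmediateContext->Unmap(g_pCurrentImage, 0);
//   }
```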

The HLSL Shader:

//-----  
// Textures and Samplers  
//-----  

Texture2D <int> g_txDiffuse : register( t0 );  
SamplerState g_samLinear : register( s0 );  

//-----  
// shader input/output structure  
//-----  

struct VS_INPUT  
{  
    float4 Position     : POSITION; // vertex position   
    float2 TextureUV    : TEXCOORD0;// vertex texture coords   
};  

struct VS_OUTPUT  
{  
    float4 Position     : SV_POSITION; // vertex position   
    float2 TextureUV    : TEXCOORD0;   // vertex texture coords   
};  

//-----  
// Vertex shader  
//-----  
VS_OUTPUT RenderSceneVS( VS_INPUT input )  
{  
    VS_OUTPUT Output;  

    Output.Position = input.Position;  

    Output.TextureUV = input.TextureUV;   

    return Output;      
}  

//-----  
// Pixel Shader  
//-----  

float4 RenderScenePS( VS_OUTPUT In ) : SV_TARGET  
{   
    int3 loc;  
    loc.x = 0;  
    loc.y = 0;  
    loc.z = 1;  
    int r = g_txDiffuse.Load(loc);  
    //float fTest = (float) r;  

    return float4( In.TextureUV.x, In.TextureUV.y, In.TextureUV.x + In.TextureUV.y, 1);  
}

The thing is, I can't even debug it in PIX to see what r ends up as, because even with shader optimization disabled, the line int r = ... is never reached.

I tested

float fTest = (float) r;
return float4( In.TextureUV.x, In.TextureUV.y, In.TextureUV.x + In.TextureUV.y, fTest);

but this results in "cannot map expression to pixel shader instruction set", even though fTest is a float.

So how do I read and use 8-bit integers from a texture, preferably without any sampling at all?

Thanks for any feedback.

2
When you write a question, you just have to click the {} button to format code. It shouldn't be that difficult, if you spend just 3 seconds looking at the page before posting. There's even a big orange ? you might click which would explain it. I fixed some of your code, but the first one, with all the <br>'s interleaved, that's just too much trouble. – jalf
But remember that the way you ask your question influences how/if people answer it. If your question looks like you spent less than 5 seconds on it, then most people who could answer it won't see why they should spend more time on it either. If you take the trouble of making your answer readable, so it seems like you actually want it to be read and answered, then people will be more willing to answer it. So format your code correctly. – jalf

2 Answers

0
votes
loc.z = 1;

This should be 0 here: the third component of the location passed to Load is the mip level, mip levels are zero-based in HLSL, and your texture has only one mip level (index 0).
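With that fix, the load would look like this (also note that an R8_UINT texture maps to Texture2D&lt;uint&gt; in HLSL; Texture2D&lt;int&gt; corresponds to the SINT formats):

```hlsl
// Texture2D.Load takes an int3: texel x, texel y, and the mip level.
// With MipLevels = 1 the only valid mip index is 0.
Texture2D <uint> g_txDiffuse : register( t0 );

// inside the pixel shader:
//   uint r = g_txDiffuse.Load( int3(0, 0, 0) );
```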

0
votes

Oh my this is a really old question, I thought it said 2012!

But anyway, as it's still open:

Due to the nature of GPUs being optimised for floating point arithmetic, you probably won't get a great deal of performance advantage from using a Texture2D<int> over a Texture2D<float>.

You could attempt to use a Texture2D<float> and then try:

return float4( In.TextureUV.x, In.TextureUV.y, In.TextureUV.x + In.TextureUV.y, g_txDiffuse.Load(loc));
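One caveat, assuming the texture is still created as in the question: Texture2D&lt;float&gt; needs a matching DXGI format, so the texture would have to be created with DXGI_FORMAT_R8_UNORM rather than DXGI_FORMAT_R8_UINT; Load then returns the byte value divided by 255.0. A sketch under that assumption:

```hlsl
// Assumes the texture was created with DXGI_FORMAT_R8_UNORM (not R8_UINT).
Texture2D <float> g_txDiffuse : register( t0 );

float4 RenderScenePS( VS_OUTPUT In ) : SV_TARGET
{
    // Load returns byteValue / 255.0 for an R8_UNORM texture; z = mip level 0.
    float v = g_txDiffuse.Load( int3(0, 0, 0) );
    return float4( In.TextureUV.x, In.TextureUV.y, In.TextureUV.x + In.TextureUV.y, v );
}
```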