
I am trying to do some GPGPU calculations using HLSL and Direct3D 11. My goal is to compute several images sequentially (one rendering pass per image) and then sum them.

To achieve this I created three textures: one is the input, and the other two are for accumulating the results. The method I am trying: first I render to texture2 (using texture and texture3 as inputs), then I render to texture3 with a slightly modified pixel shader (using texture and texture2 as inputs). After that, I dump the textures to PNG files. My problem is that somehow the second pixel shader doesn't see the result of the first pass (texture2), but only the original, empty texture. I created texture, texture2, and texture3 with usage D3D11_USAGE_DEFAULT and the D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE bind flags.
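
For reference, the textures and their views are created roughly along these lines (a simplified sketch, not the exact code; the width, height, format, and the srvTexture2 name below are placeholders):

// Sketch: a texture usable both as a render target and as a shader input.
// Width, height, and format are assumptions; error handling is omitted.
D3D11_TEXTURE2D_DESC desc = {};
desc.Width = width;
desc.Height = height;
desc.MipLevels = 1;
desc.ArraySize = 1;
desc.Format = DXGI_FORMAT_R32G32B32A32_FLOAT;
desc.SampleDesc.Count = 1;
desc.Usage = D3D11_USAGE_DEFAULT;
desc.BindFlags = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;

device->CreateTexture2D(&desc, NULL, &texture2);
device->CreateRenderTargetView(texture2, NULL, &targetViewTexture2);
device->CreateShaderResourceView(texture2, NULL, &srvTexture2);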

My HLSL shader (shader1):

float4 main(PS_INPUT input) : SV_TARGET
{
...
float4 val5 = 0.25F * (val + val2 + val3 + val4) + tex3.Sample(sam, input.Tex4);
return val5;
}

The other shader (shader2):

float4 main(PS_INPUT input) : SV_TARGET
{
...
float4 val5 = 0.25F * (val + val2 + val3 + val4) + tex2.Sample(sam, input.Tex4);
return val5;
}

The code that performs the rendering and dumps the textures:

// First pass: render into texture2 using pixel shader ps.
context->OMSetRenderTargets(1, &targetViewTexture2, NULL);

float z = 280.0F;
GenMatrix(z);
context->VSSetShader(vs, NULL, 0);
context->PSSetShader(ps, NULL, 0);
context->PSSetSamplers(0, 1, &state);
context->VSSetConstantBuffers(0, 1, &cbuf);
context->Draw(6, 0);
swapChain->Present(0, 0);

// Second pass: render into texture3 using pixel shader ps2.
context->OMSetRenderTargets(1, &targetViewTexture3, NULL);
z = 0.0F;
GenMatrix(z);
context->VSSetShader(vs, NULL, 0);
context->PSSetShader(ps2, NULL, 0);
context->PSSetSamplers(0, 1, &state);
context->VSSetConstantBuffers(0, 1, &cbuf);
context->Draw(6, 0);
swapChain->Present(0, 0);
// Dump both result textures to PNG files with DirectXTex.
DirectX::ScratchImage im;
DirectX::CaptureTexture(device, context, texture3, im);
const DirectX::Image *realImage = im.GetImage(0, 0, 0);
HRESULT hr2 = DirectX::SaveToWICFile(*realImage, DirectX::WIC_FLAGS_NONE, GUID_ContainerFormatPng, L"output_tex3.png", NULL);
DirectX::CaptureTexture(device, context, texture2, im);
realImage = im.GetImage(0, 0, 0);
hr2 = DirectX::SaveToWICFile(*realImage, DirectX::WIC_FLAGS_NONE, GUID_ContainerFormatPng, L"output_tex2.png", NULL);

The output of the first pass (ps) is correct, but the second is not. If I reduce the second shader to only:

    float4 val5 = tex2.Sample(sam, input.Tex4);

I get an empty image. Am I missing something? Do I have to call some method to be able to use the texture in the second pixel shader? I think I have pinpointed the relevant sections of my large code base, but if you need more information, just ask for it in the comments.

Are you using Visual Studio 2012? Because it has very powerful Graphics debugging Tools. - Cédric Bignon
Why do you have a swapChain->Present in your code? - Cédric Bignon
I had swapChain->Present before, because I used the screen to look at the result of each step (I have commented it out now). I use VS2012, but I don't know where to start; the texture seems to be filled after the first Draw call, but appears to be empty in the second. - WebMonster
Have you tried to use the Visual Studio 2012 Graphics debugging Tools? In Debug->Graphics->Start Diagnostics. - Cédric Bignon
I've got an interesting result: using the debugging tools you mentioned, I looked at the input texture, and it is in fact not empty but contains the correct values. The output might be incorrect because of some other problem. - WebMonster

1 Answer


The problem was that I used the same texture both as an input and as an output. In a single pass, a texture can be bound either as a shader resource (input) or as a render target (output), but not as both at the same time; when you set a conflicting binding, the runtime silently forces the shader resource view to NULL, which is presumably why the second pass sampled an empty texture.
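
A minimal sketch of how the two passes can be arranged with this in mind (the SRV variable names and register slots are placeholders, not the original code; sampler, constant buffer, and vertex shader setup are omitted):

// Pass 1: write texture2, read texture and texture3.
context->OMSetRenderTargets(1, &targetViewTexture2, NULL);
ID3D11ShaderResourceView *pass1Inputs[2] = { srvTexture, srvTexture3 };
context->PSSetShaderResources(0, 2, pass1Inputs);
context->PSSetShader(ps, NULL, 0);
context->Draw(6, 0);

// Unbind the pixel shader inputs so texture3 is free to become the render
// target and texture2 is free to be sampled in the next pass.
ID3D11ShaderResourceView *nullViews[2] = { NULL, NULL };
context->PSSetShaderResources(0, 2, nullViews);

// Pass 2: write texture3, read texture and texture2.
context->OMSetRenderTargets(1, &targetViewTexture3, NULL);
ID3D11ShaderResourceView *pass2Inputs[2] = { srvTexture, srvTexture2 };
context->PSSetShaderResources(0, 2, pass2Inputs);
context->PSSetShader(ps2, NULL, 0);
context->Draw(6, 0);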