I am currently working on a Windows Mixed Reality app using SharpDX, porting from another VR platform. On Mixed Reality, the API asks the user to draw to a single provided back buffer that is a Texture2D array of size 2 (one element per eye), whereas my existing VR framework asks the user to draw to two separate textures that are submitted manually.
Preferably, I would like to extract each element of this array as a separate Texture2D, so that my VR backends can keep drawing to individual textures as they do now. The Mixed Reality sample app does not help here, since it uses an instanced draw call to render to both array slices at once. Is it possible in DirectX to get a reference to a single element of a texture array, or do I have to change my backend to render to the array directly?
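For context, the kind of per-slice extraction I have in mind would look roughly like the sketch below: copy each array slice of the back buffer into its own standalone Texture2D with CopySubresourceRegion. This is untested, and the `device`, `context`, and `cameraBackBuffer` names are just placeholders for objects my app already has; I would also presumably need the reverse copy (standalone texture back into the array slice) after rendering each frame.

```csharp
// Sketch only: copy each slice of the back-buffer array into its own
// single-element Texture2D. `device`, `context`, `cameraBackBuffer` are
// assumed to exist already; names are illustrative.
var desc = cameraBackBuffer.Description;
desc.ArraySize = 1;  // each copy holds a single slice
desc.BindFlags = BindFlags.RenderTarget | BindFlags.ShaderResource;

var eyeTextures = new Texture2D[2];
for (int slice = 0; slice < 2; slice++)
{
    eyeTextures[slice] = new Texture2D(device, desc);

    // Subresource index of mip 0 in array slice `slice`.
    int srcSubresource =
        Resource.CalculateSubResourceIndex(0, slice, desc.MipLevels);

    // Copy the whole slice (null region) into mip 0 of the standalone texture.
    context.CopySubresourceRegion(
        cameraBackBuffer, srcSubresource, null, eyeTextures[slice], 0);
}
```

This would cost an extra GPU copy per eye per frame, which is why I would rather render to the array slices directly if that is possible.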
EDIT: According to the documentation, RenderTargetViews appear to be the way to render to one resource as if it were a different resource, including rendering to a single array slice as if it were a standalone texture. However, when I create two render target views as follows,
RenderTargetView l_target_view = new RenderTargetView(cameraBackBuffer.Device, cameraBackBuffer, new RenderTargetViewDescription()
{
    Format = (SharpDX.DXGI.Format)parameters.Direct3D11BackBuffer.Description.Format,
    Dimension = RenderTargetViewDimension.Texture2D,
    Texture2DArray = new RenderTargetViewDescription.Texture2DArrayResource()
    {
        ArraySize = 1,
        FirstArraySlice = 0
    }
});
RenderTargetView r_target_view = new RenderTargetView(cameraBackBuffer.Device, cameraBackBuffer, new RenderTargetViewDescription()
{
    Format = (SharpDX.DXGI.Format)parameters.Direct3D11BackBuffer.Description.Format,
    Dimension = RenderTargetViewDimension.Texture2D,
    Texture2DArray = new RenderTargetViewDescription.Texture2DArrayResource()
    {
        ArraySize = 1,
        FirstArraySlice = 1
    }
});
rendering operations on both targets are applied only to the first array slice.