I know this has been asked a million times, but I just don't get some of the details.
As an example, let's say that I have a swap chain created and one staging ID3D11Texture2D.
What I can do with it is load a bitmap into this 2D texture and then copy it to the render target (assuming the size and format of both resources are the same).
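For reference, here is roughly how I picture that staging-copy path (just my sketch; error handling omitted, and I'm assuming the staging texture was created with D3D11_CPU_ACCESS_WRITE and matches the back buffer's size and format, since CopyResource requires that):

```cpp
// Sketch only: assumes an existing ID3D11DeviceContext* context,
// IDXGISwapChain* swapChain, ID3D11Texture2D* stagingTex, and the
// bitmap bytes in bitmapPixels with row pitch bitmapPitch.
D3D11_MAPPED_SUBRESOURCE mapped = {};
context->Map(stagingTex, 0, D3D11_MAP_WRITE, 0, &mapped);
for (UINT row = 0; row < height; ++row)
{
    memcpy(static_cast<BYTE*>(mapped.pData) + row * mapped.RowPitch,
           bitmapPixels + row * bitmapPitch,
           width * 4); // assuming a 32-bit format such as DXGI_FORMAT_B8G8R8A8_UNORM
}
context->Unmap(stagingTex, 0);

ID3D11Texture2D* backBuffer = nullptr;
swapChain->GetBuffer(0, __uuidof(ID3D11Texture2D),
                     reinterpret_cast<void**>(&backBuffer));
context->CopyResource(backBuffer, stagingTex); // sizes and formats must match
backBuffer->Release();
```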
Now I would like to display a sprite on top of that. One solution is to use vertex and pixel shaders. This is where my understanding breaks down.
Vertex shader:
I guess I should draw two triangles (two triangles make a rectangle, or a quad).
DirectX uses a left-handed coordinate system, but I guess that's irrelevant here because I'm dealing with 2D sprites, right?
For the same reason, I assume I can ignore the world->view->projection transformations, right? Actually, I only need a translation here in order to place the sprite at the right spot on the screen, right?
Should the coordinates of these two triangles match the sprite dimensions, plus translation? In what order should I provide these vertices? Should the origin of the coordinate system be in the center of the sprite, or should the origin be at the top left corner of the sprite?
As an example, if I have an 80x70 sprite, what would the vertex values be?
The raw vertices (X, Y, Z, with no translation applied):
1. -40, -35, 0
2. -40, 35, 0
3. 40, -35, 0
4. 40, 35, 0
5. -40, 35, 0
6. 40, -35, 0
Is that correct?
The rasterization step should call the pixel shader once for each pixel covered by the sprite, and the output of the vertex shader will be the input of the pixel shader. Does that mean the pixel shader needs access to the sprite's image in order to return the correct pixel value when it is called?
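For what it's worth, my current understanding is that the pixel shader would not see the sprite directly; instead the sprite's texture is bound as a shader resource view (via PSSetShaderResources) and sampled at UV coordinates that are passed through the vertex shader and interpolated by the rasterizer. Something like this (my sketch; all names are placeholders):

```hlsl
Texture2D    spriteTexture : register(t0); // bound with PSSetShaderResources
SamplerState spriteSampler : register(s0); // bound with PSSetSamplers

struct VSOut
{
    float4 pos : SV_Position;
    float2 uv  : TEXCOORD0;
};

VSOut VS(float3 pos : POSITION, float2 uv : TEXCOORD0)
{
    VSOut o;
    o.pos = float4(pos, 1.0f); // positions assumed already in clip space
    o.uv  = uv;                // interpolated per pixel by the rasterizer
    return o;
}

float4 PS(VSOut input) : SV_Target
{
    // Sample the bound texture at the interpolated UV.
    return spriteTexture.Sample(spriteSampler, input.uv);
}
```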
FullScreenQuad uses DirectX 12. I'm not familiar with it, and it's quite complex. – some_user