1 vote

I'm using some demos that came with my OpenGL ES emulator as a starting point. The demo I'm currently using is 960x540 pixels with a simple 2D triangle drawn in the middle of the screen.

I see that the triangle has been drawn with floats, and that the lower-left corner of the screen is (-1,-1) while the upper-right corner of the screen is (1,1).

Is it possible to change this? I would like to use integers instead of floats, and have the lower-left corner of the screen be (1,1) while the upper-right corner is (960,540).

This would be much easier to work with because I plan to make a 2D platformer game. I do not want to split any pixels in this game; everything (textures, player movement, camera movement) will conform to whole-pixel coordinates.

I tried crunching the numbers (using the default [-1,1] grid and floats) to draw a right triangle that was 24x24 pixels, but it ended up being 24x23 pixels on the screen. I didn't want to keep going down that path if there was a more accurate, faster method that could use pixel coordinates.


2 Answers

2 votes

You can apply an appropriate transform using an orthographic projection matrix whose extents are set to your resolution.

How to make such matrices: http://db-in.com/blog/2011/04/cameras-on-opengl-es-2-x/

Applying this transform in the vertex shader allows you to supply pixel coordinates directly in the vertex data.
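
As a minimal sketch of what that can look like (identifier names are my own, and in a real program you'd usually upload the matrix as a uniform rather than hard-code it), a GLSL ES vertex shader taking pixel coordinates directly:

// Orthographic matrix for a 960x540 viewport, written out as a constant.
// Columns are listed left to right; (0,0) maps to (-1,-1), (960,540) to (1,1).
const mat4 ortho = mat4(
    2.0 / 960.0, 0.0,         0.0,  0.0,
    0.0,         2.0 / 540.0, 0.0,  0.0,
    0.0,         0.0,        -1.0,  0.0,
   -1.0,        -1.0,         0.0,  1.0
);

attribute vec2 a_position; // vertex position in whole-pixel coordinates

void main() {
    gl_Position = ortho * vec4(a_position, 0.0, 1.0);
}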

The tricky part is that pixels (or rather their centers) aren't located at whole numbers in this case! This is where things usually get odd; a useful search term is 'pixel perfect OpenGL'.

Adding 0.5 to the pixel coordinates in the vertex shader (which can then be supplied as whole numbers) should do the trick. Note that in the vertex shader you have to convert integer vertex coords to float; if floats are passed in, you should round() them first.
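
A sketch of that adjustment (again with names of my own), assuming positions arrive in pixel units as above:

attribute vec2 a_pixelPos; // intended to be whole numbers
uniform mat4 u_ortho;      // pixel space to clip space, e.g. the matrix above

void main() {
    // floor(x + 0.5) rounds to the nearest whole pixel, since GLSL ES 1.00
    // has no round(); the extra 0.5 then lands the vertex on a pixel center.
    vec2 snapped = floor(a_pixelPos + 0.5) + 0.5;
    gl_Position = u_ortho * vec4(snapped, 0.0, 1.0);
}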

If you want to keep supplying normalized coordinates, {-1,-1} to {1,1}, as you do now (I don't recommend that):

vec2 pixelSize         = 2.0 / vec2(viewportWidth, viewportHeight); // size of one pixel in NDC
vec2 halfPixelSize     = 1.0 / vec2(viewportWidth, viewportHeight);
vec2 pixelPerfectCoord = floor(coord / pixelSize + 0.5) * pixelSize + halfPixelSize; // snap to the pixel grid, then offset to the pixel center
0 votes

The values coming out of the vertex shader need to be in OpenGL's usual clip space, but you needn't supply vertices in that space. In ES 1 you'd have adjusted the projection stack; does your demo have any analogue to that?

You should also take the fill rules into account. A pixel is always filled if its center is inside the geometry; if its center lies exactly on a boundary, the implementation makes a decision, usually based on which boundary the pixel is on: the top, bottom, left or right. (These distinctions make more sense if you think in terms of the pixel output: a pixel on the left boundary would be first on a scanline, a pixel on the right would be last, a pixel on the top would start the polygon further up the screen, and a pixel on the bottom would extend it further down.)

The aim is that OpenGL never draws the same pixel twice where two pieces of geometry meet along an edge.

With that in mind, a triangle can be exactly 24 pixel units tall, but if it runs from the center of one pixel to the center of another it will paint at most 23 rows, so your maths may already be correct. Consider one polygon from y = 0 to y = 24 and another from y = 24 to y = 36: you don't want them both painting at y = 24, especially as they may be partially transparent.