0 votes

As I understand it, in OpenGL polygons are usually clipped in clip space, and only those triangles (or parts of triangles, if the clipping process splits them) that survive the comparison against ±w are rasterized. This requires implementing a polygon clipping algorithm such as Sutherland-Hodgman.

I am implementing my own CPU rasterizer and would like to avoid that for now. I have the NDC coordinates of the vertices available (not really normalized, since I did not clip anything, so the positions may lie outside the range [-1, 1]). I would like to interpolate these values for all pixels and only draw those pixels whose NDC coordinates fall within [-1, 1] in the x, y and z dimensions. I would then additionally perform the depth test.

Would this work? If yes, what would the interpolation look like? Can I use the attribute interpolation formula from the OpenGL spec (equation 14.9, page 427) as described here? Or should I instead use formula 14.10, which is used for depth (z) interpolation, for all three coordinates (I don't really understand why a different formula is used there)?
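For reference, this is how I read the two equations (a, b, c are the barycentric weights of the fragment, f_a, f_b, f_c the attribute values at the three vertices, and w_a, w_b, w_c their clip-space w; treat this as my paraphrase, not a verbatim quote of the spec):

14.9 (perspective-correct attribute interpolation):
f = (a*f_a/w_a + b*f_b/w_b + c*f_c/w_c) / (a/w_a + b/w_b + c/w_c)

14.10 (depth/z interpolation, no perspective correction):
z = a*z_a + b*z_b + c*z_c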

Update: I have tried interpolating the NDC values per pixel by two methods:

w0, w1, w2 are the barycentric weights of the vertices.
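(For context, a rough sketch of how such weights can be computed from the screen-space vertex positions via edge functions; this is illustrative only, the names are made up and it is not my exact code.)

/* Signed area (times 2) of triangle (a, b, p); the sign tells which side
   of the edge a->b the point p lies on. */
static float edge(float ax, float ay, float bx, float by, float px, float py)
{
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax);
}

/* Barycentric weights of the pixel center (px, py) with respect to the
   screen-space positions v0, v1, v2 (each given as {x, y}). */
static void barycentric(const float v0[2], const float v1[2], const float v2[2],
                        float px, float py, float *w0, float *w1, float *w2)
{
    float area = edge(v0[0], v0[1], v1[0], v1[1], v2[0], v2[1]);
    *w0 = edge(v1[0], v1[1], v2[0], v2[1], px, py) / area;
    *w1 = edge(v2[0], v2[1], v0[0], v0[1], px, py) / area;
    *w2 = edge(v0[0], v0[1], v1[0], v1[1], px, py) / area;
    /* w0 + w1 + w2 == 1; the pixel is inside the triangle iff all three are >= 0. */
}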

1)
float x_ndc = w0 * v0_NDC.x + w1 * v1_NDC.x + w2 * v2_NDC.x;
float y_ndc = w0 * v0_NDC.y + w1 * v1_NDC.y + w2 * v2_NDC.y;
float z_ndc = w0 * v0_NDC.z + w1 * v1_NDC.z + w2 * v2_NDC.z;

2)
float x_ndc = (w0*v0_NDC.x/v0_NDC.w + w1*v1_NDC.x/v1_NDC.w + w2*v2_NDC.x/v2_NDC.w) /
              (w0/v0_NDC.w + w1/v1_NDC.w + w2/v2_NDC.w);
float y_ndc = (w0*v0_NDC.y/v0_NDC.w + w1*v1_NDC.y/v1_NDC.w + w2*v2_NDC.y/v2_NDC.w) /
              (w0/v0_NDC.w + w1/v1_NDC.w + w2/v2_NDC.w);
float z_ndc = w0 * v0_NDC.z + w1 * v1_NDC.z + w2 * v2_NDC.z;

The clipping + depth test always looks like this:

if (-1.0f < z_ndc && z_ndc < 1.0f && z_ndc < currentDepth &&
    -1.0f < y_ndc && y_ndc < 1.0f &&
    -1.0f < x_ndc && x_ndc < 1.0f)

Case 1) corresponds to using equation 14.10 for the interpolation; case 2) corresponds to using equation 14.9.

Results are documented in GIFs on imgur. 1) Strange things happen when the second cube is behind the camera or when I move into a cube. 2) Those artifacts are not visible, but as the camera approaches vertices they start disappearing. And since this is perspective-correct attribute interpolation, vertices nearer to the camera(?) have greater weight, so as soon as a vertex gets clipped that information is interpolated with a strong weight into the triangle's pixels.

Is all of this expected or have I done something wrong?

I don't know what you expect here. As Nicol Bolas already pointed out, the clipping is really important if any primitive intersects the z_eye = 0 plane. Interpolation for such primitives becomes completely meaningless (and for a vertex exactly at z_eye = 0, you would even end up with a division by zero). Checking against -1 <= x,y,z <= 1 in NDC is not equivalent to -w <= x,y,z <= w in clip space, since points fulfilling w <= x,y,z <= -w (behind the camera) would also pass that check. I have no idea what you mean by NDC.w, since in NDC there is no w (or it would be 1). – derhass
To summarize: in my opinion, clipping in screen space just doesn't make the slightest sense (unless you do the rasterization itself in homogeneous coordinates), and it will be a lot slower. So I don't know what you gain here. – derhass
Thanks for the clarification, derhass. I didn't understand Nicol's answer correctly then (I thought he meant that the math could work even in the case where a primitive would need clipping against z_eye = 0). In my code I was using NDC.w as the camera-space z of the vertex. Admittedly that doesn't make sense after the perspective division. – pseudomarvin
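To make the behind-the-camera case concrete (assuming the usual GL convention where w_clip = -z_eye): a point at z_eye = +2 sits behind the camera and has w_clip = -2. If its x_clip is 0.5, then x_ndc = x_clip / w_clip = -0.25, which passes the -1 < x_ndc < 1 check, while the clip-space test -w <= x <= w reads 2 <= 0.5 <= -2 and correctly rejects it.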

2 Answers

2 votes

I don't see any reason why this wouldn't work, but it will be way slower than traditional clipping. Note that you might get into trouble with triangles close to the projection center, since they will be vanishingly small and might cause problems in the barycentric coordinate calculation.
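A minimal sketch of guarding against that, assuming screen-space vertex positions; the vec2 type and the epsilon threshold are arbitrary illustrative choices, not something from this answer:

#include <math.h>

typedef struct { float x, y; } vec2;

/* Returns nonzero if the screen-space triangle is large enough for a
   stable barycentric setup; the threshold is arbitrary. */
static int triangle_is_usable(vec2 v0, vec2 v1, vec2 v2)
{
    float area2 = (v1.x - v0.x) * (v2.y - v0.y)
                - (v1.y - v0.y) * (v2.x - v0.x);
    return fabsf(area2) > 1e-6f;
}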

The difference between equations 14.9 and 14.10 is that depth is basically z/w (remapped to [0, 1]). Since the perspective divide has already happened for that value, it must not be applied again during interpolation.

2 votes

Clipping against the near plane is not strictly necessary, unless the triangle goes to or past 0 in the camera-space Z. Once that happens, the homogeneous coordinate math gets weird.
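If you do end up needing that one clip, a minimal sketch of clipping a triangle against the plane w = epsilon in clip space (which, for the usual perspective projection where w_clip = -z_eye, corresponds to keeping only points strictly in front of camera-space Z = 0) could look like this. The vec4 type, the epsilon and all names are illustrative assumptions, not something from this answer:

#define W_EPS 1e-5f

typedef struct { float x, y, z, w; } vec4;

static vec4 lerp4(vec4 a, vec4 b, float t)
{
    vec4 r = { a.x + t * (b.x - a.x), a.y + t * (b.y - a.y),
               a.z + t * (b.z - a.z), a.w + t * (b.w - a.w) };
    return r;
}

/* Sutherland-Hodgman against the single plane w = W_EPS.
   Returns the number of output vertices written to out (0, 3 or 4);
   a 4-vertex result must be split into two triangles. */
static int clip_near(const vec4 in[3], vec4 out[4])
{
    int n = 0;
    for (int i = 0; i < 3; ++i) {
        vec4 a = in[i];
        vec4 b = in[(i + 1) % 3];
        int a_in = a.w >= W_EPS;
        int b_in = b.w >= W_EPS;
        if (a_in)
            out[n++] = a;
        if (a_in != b_in) {
            /* Intersection with the plane w = W_EPS along edge a->b. */
            float t = (W_EPS - a.w) / (b.w - a.w);
            out[n++] = lerp4(a, b, t);
        }
    }
    return n; /* 0 = fully behind the camera plane */
}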

Most hardware only bothers to clip triangles if they extend more than a screen's width outside clip space or if they cross a camera-space Z of zero. This is called "guard-band clipping", and it saves a lot of performance, since clipping isn't cheap.

So yes, the math can work fine. The main thing you have to do when setting up your scan lines is figure out where each of them starts and ends on screen. The interpolation math is the same either way.