2 votes

When mapping a texture to geometry, we can choose between the filtering methods GL_NEAREST and GL_LINEAR.
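(For context, this choice is made per texture with glTexParameteri; textureId below is a placeholder for an already-created texture object:)

```c
/* Choose how texels are filtered when the texture is sampled.
   textureId is a placeholder for an already-created texture object. */
glBindTexture(GL_TEXTURE_2D, textureId);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);  /* or GL_NEAREST */
```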

In the examples, we have a texture coordinate surrounded by texels, like so:

[Image: a single texture coordinate surrounded by neighboring texels]

And it's explained how each algorithm chooses what color the fragment should be; for example, GL_LINEAR interpolates all the neighboring texels based on their distance from the texture coordinate.

Isn't each texture coordinate essentially the fragment position, which is mapped to a pixel on screen? So how can these coordinates be smaller than the texels, which are essentially pixels and the same size as fragments?

In general 3D, a screen pixel doesn't map onto a single texture pixel: a perspective view will distort even flat planes, or imagine a sphere. So a screen pixel doesn't hit exactly one texel, and you need a way to produce some color at a point on the texture with fractional coordinates, hence the need for filtering. – Vlad

2 Answers

4 votes

A (2D) texture can be looked at as a function t(u, v), whose output is a "color" value. This is a pure function, so it will return the same value for the same u and v values. The value comes from a lookup table stored in memory, indexed by u and v, rather than through some kind of computation.

Texture "mapping" is the process whereby you associate a particular location on a surface with a particular location in the space of a texture. That is, you "map" a surface location to a location in a texture. As such, the inputs to the texture function t are often called "texture coordinates". Some surface locations may map to the same position on a texture, and some texture positions may not have surface locations mapped to them. It all depends on the mapping

An actual texture image is not a smooth function; it is a discrete function. It has a value at the texel locations (0, 0), and another value at (1, 0), but the value of a texture at (0.5, 0) is undefined. In image space, u and v are integers.
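As a minimal sketch of that idea (the Texture struct and texel_fetch helper here are invented for illustration, with a single float channel standing in for a color value):

```c
/* Illustrative sketch only: a texture as a lookup table indexed by
   integer texel coordinates. A single float channel stands in for
   the "color" value. */
typedef struct {
    int    width, height;
    float *texels;           /* one value per texel, row-major */
} Texture;

/* t(u, v) in image space: defined only at integer u and v.
   There is simply no entry for (0.5, 0). */
float texel_fetch(const Texture *t, int u, int v)
{
    return t->texels[v * t->width + u];
}
```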

Your picture of a zoomed-in part of the texture is incorrect. There are no values "between" the texels, because "between the texels" is not possible. There is no number between 0 and 1 on an integer number line.

However, any useful mapping from the surface to the texture function is going to need to happen in a continuous space, not a discrete space. After all, it's unlikely that every fragment will land exactly on a location that maps to an exact integer within a texture. And especially in shader-based rendering, a shader can just invent a mapping arbitrarily. The "mapping" could be based on light directions (projective texturing), the elevation of a fragment relative to some surface, or anything a user might want. To a fragment shader, a texture is just a function t(u, v) which can be evaluated to produce a value.

So we really want that function to be in a continuous space.

The purpose of filtering is to create a continuous function t by inventing values in-between the discrete texels. This allows you to declare that u and v are floating-point values, rather than integers. We also get to normalize the texture coordinates, so that they're on the range [0, 1] rather than being based on the texture's size.
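As a rough illustration of what GL_NEAREST and GL_LINEAR do with those normalized, floating-point coordinates, here is a sketch building on the hypothetical Texture type above (real hardware also handles wrap modes, mipmaps, and more, which this ignores):

```c
#include <math.h>

static int clampi(int x, int lo, int hi)
{
    return x < lo ? lo : (x > hi ? hi : x);
}

/* Nearest filtering: snap the continuous coordinate to the closest
   texel. u and v are normalized to [0, 1]. */
float sample_nearest(const Texture *t, float u, float v)
{
    int x = clampi((int)floorf(u * t->width),  0, t->width  - 1);
    int y = clampi((int)floorf(v * t->height), 0, t->height - 1);
    return texel_fetch(t, x, y);
}

/* Linear (bilinear) filtering: invent a value between the four
   surrounding texels, weighted by distance. */
float sample_linear(const Texture *t, float u, float v)
{
    /* Shift by 0.5 so texel centers sit at integer coordinates. */
    float x = u * t->width  - 0.5f;
    float y = v * t->height - 0.5f;
    int   x0 = (int)floorf(x), y0 = (int)floorf(y);
    float fx = x - x0, fy = y - y0;

    int x1 = clampi(x0 + 1, 0, t->width  - 1);
    int y1 = clampi(y0 + 1, 0, t->height - 1);
    x0 = clampi(x0, 0, t->width  - 1);
    y0 = clampi(y0, 0, t->height - 1);

    float top    = texel_fetch(t, x0, y0) * (1.0f - fx) + texel_fetch(t, x1, y0) * fx;
    float bottom = texel_fetch(t, x0, y1) * (1.0f - fx) + texel_fetch(t, x1, y1) * fx;
    return top * (1.0f - fy) + bottom * fy;
}
```

Calling sample_linear at, say, (0.5, 0.5) now returns a well-defined value even though no texel sits exactly there; that is the continuous function described above.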

3 votes

Texture filtering does not decide what color the fragment should be. That is what the fragment shader does. However, the fragment shader may sample a texture at a given position to get a color. It may return that color directly, or it can process it further (e.g., apply shading).

Texture filtering happens at sampling time. The texture coordinates are not necessarily perfect pixel positions. For example, the texture could be the material of a 3D model that you show in a perspective view. Then a fragment may cover more than a single texel, or it may cover less. Or it might not be aligned with the texture grid. In all these cases you need some kind of filtering.
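One way these cases show up in the OpenGL API: minification (a fragment covers many texels) and magnification (a fragment covers less than one texel) get separate filter settings, and minification can additionally use mipmaps. A sketch, again assuming textureId is an existing texture object:

```c
/* textureId is a placeholder for an existing texture object. */
glBindTexture(GL_TEXTURE_2D, textureId);
glGenerateMipmap(GL_TEXTURE_2D);  /* required for mipmap-based minification */

/* Minification: the fragment covers many texels (texture drawn small). */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);

/* Magnification: the fragment covers less than one texel (texture drawn large). */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
```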

For applications that render a sprite at its original size without any deformation, you usually don't need filtering, as you have a 1:1 mapping from screen pixels to texels.
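For that 1:1 case, a common setup is to disable interpolation entirely so each pixel samples exactly one texel (assuming a bound 2D texture):

```c
/* Pixel-perfect sprites: no interpolation, each pixel samples one texel. */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
```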