I'm trying to render text as textured quads under a perspective projection (I don't want to use an orthographic projection), and I'm struggling with pixel alignment.
The setup is simple: I have text with an alignment anchor point in 3D, I replace its model transformation with a billboard transformation, and I calculate a scale factor (by triangle similarity) so the text always appears the same size on screen. Since the text quad geometry is constructed with world units corresponding to pixels, the resulting text does indeed appear the same size regardless of camera orientation or anchor point offset.
// Vector from the text anchor to the camera
Vector3d dist = camera.getPosition();
dist.sub(translation);
// Angle subtended by a single pixel: FOV divided by screen width in pixels
double pxFov = camera.getFOV() / camera.getScreenWidth();
// Law of sines on the right triangle: world-space size of one pixel at this distance
double scale = Math.sin(pxFov) / Math.sin((Math.PI / 2) - pxFov)
        * dist.length() * camera.getAspectRatio();
// Billboard: transpose (invert) the camera rotation so the quad faces the camera
V.getRotationScale(R);
R.transpose();
M.setIdentity();
M.setRotation(R);
M.setScale(scale);
M.setTranslation(translation);
where V is the 4x4 camera view matrix, R is a temporary 3x3 matrix, and M is the 4x4 model matrix used in the MVP calculation.
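As a side note, the two sines in the scale formula reduce to a single tangent, since sin(x) / sin(π/2 − x) = sin(x) / cos(x) = tan(x). A minimal standalone sketch (FOV, screen width, and distance values are made up) checking the equivalence:

```java
public class ScaleCheck {
    public static void main(String[] args) {
        // Hypothetical numbers: 60-degree horizontal FOV, 1280 px wide
        // screen, text anchor 10 world units from the camera.
        double fov = Math.toRadians(60.0);
        double screenWidth = 1280.0;
        double distance = 10.0;

        // Angle subtended by one pixel, as in the question's code
        double pxFov = fov / screenWidth;

        // Original form: law of sines with the right angle opposite the hypotenuse
        double scaleA = Math.sin(pxFov) / Math.sin(Math.PI / 2 - pxFov) * distance;

        // Equivalent form: sin(x) / cos(x) = tan(x)
        double scaleB = Math.tan(pxFov) * distance;

        System.out.println(scaleA);
        System.out.println(scaleB);
    }
}
```

This doesn't change the result, but `Math.tan(pxFov) * dist.length()` is clearer about what the factor is: the world-space width of one pixel at that distance.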
I found a supposed solution, but it only slightly changed the behaviour of the rendered text instead of fixing the problem. With the vertex shader
void main() {
    vec2 view = vec2(1280, 720);             // hard-coded viewport size
    vec4 cpos = MVP * vec4(position, 1.0);
    // clip space -> pixel coordinates
    vec2 p = floor(cpos.xy * view * 0.5 / cpos.w);
    p += 0.5;                                // does not influence the result
    // pixel coordinates -> back to clip space
    cpos.xy = p * (2.0 / view) * cpos.w;
    gl_Position = cpos;
}
the text renders sharp in some places and blurred in others.
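To see what that shader's round trip does numerically, the same arithmetic can be sketched on the CPU (viewport size, clip-space input, and all names here are illustrative, not part of the actual renderer):

```java
// CPU sketch of the shader's snapping arithmetic:
// clip space -> pixel coordinates -> clip space.
public class SnapDemo {
    // Snap a clip-space xy to a pixel centre of a viewW x viewH viewport.
    static double[] snapToPixelCentre(double cx, double cy, double w,
                                      double viewW, double viewH) {
        // perspective-divide, scale to pixel coordinates, floor, then
        // move to the pixel centre
        double px = Math.floor(cx * viewW * 0.5 / w) + 0.5;
        double py = Math.floor(cy * viewH * 0.5 / w) + 0.5;
        // undo the viewport scale and re-apply w
        return new double[] { px * 2.0 / viewW * w, py * 2.0 / viewH * w };
    }

    public static void main(String[] args) {
        double[] snapped = snapToPixelCentre(0.1234, -0.5678, 2.0, 1280, 720);
        // Dividing by w and scaling back to pixels should land on a pixel
        // centre (an x.5 coordinate), up to floating-point rounding.
        System.out.println(snapped[0] / 2.0 * 1280 * 0.5);
        System.out.println(snapped[1] / 2.0 * 720 * 0.5);
    }
}
```

Note that this snaps each vertex independently with its own `w`; the two vertices of a quad edge have different `w`, which is one way the quad's pixel dimensions can end up non-integer even after snapping.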
With the simple vertex shader
void main() {
    gl_Position = MVP * vec4(position, 1.0);
}
the text is either completely blurred or completely sharp.
It seems logical to transform the vertex positions into viewport space, round them there, and transform them back, but something seems to be missing.
EDIT: an explanation of how I'm calculating the scale factor.
Here you can see I'm getting a right-angle triangle drawn as the red, green, and blue lines. Knowing the length of the red line (the camera-to-anchor distance), the angle between red and blue ((fov/2)/(screen width/2)), and that the angle between red and green is a right angle, I can use the law of sines to calculate the length of the green line, which is also the scale one texel needs to keep the same size in the current projection.
The scale of the text appears correct regardless of camera/text orientation and position (the desired 8 pixels). It is possible the scale is just slightly wrong and that this causes the blur, but I fail to see how.
I don't see how that `scale` calculation is supposed to work. The perspective division does not depend on the distance between the point and the camera, but on the distance between the point and a plane containing the camera and parallel to the image plane (or, in other words, the orthogonal projection of the distance onto the principal axis of the projection). Your scale factor should be wrong for any point not exactly on that principal axis. – derhass
`dist.length()` is `length(pos - camPos)`, so what you use as the "camera - text distance" is not the horizontal distance you have drawn in the figure. Furthermore, you introduced a `pxFov`, relating an angle to the screen size, which doesn't make any sense at all. – derhass
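The distinction the comments draw, between the Euclidean camera distance and the distance to the camera plane (the view-space depth), can be sketched with made-up vectors (camera at the origin looking down −Z; all values here are illustrative):

```java
public class PlaneDistance {
    public static void main(String[] args) {
        // Hypothetical camera at the origin, unit view direction along -Z,
        // and a text anchor off to the side of the principal axis.
        double[] camPos  = { 0.0, 0.0,  0.0 };
        double[] viewDir = { 0.0, 0.0, -1.0 };
        double[] anchor  = { 3.0, 0.0, -4.0 };

        double dx = anchor[0] - camPos[0];
        double dy = anchor[1] - camPos[1];
        double dz = anchor[2] - camPos[2];

        // Euclidean distance: what dist.length() in the question returns.
        double euclidean = Math.sqrt(dx * dx + dy * dy + dz * dz);

        // Distance to the plane through the camera parallel to the image
        // plane: the offset projected onto the view direction.
        double planar = dx * viewDir[0] + dy * viewDir[1] + dz * viewDir[2];

        System.out.println(euclidean); // 5.0 for this anchor
        System.out.println(planar);    // 4.0 for this anchor
    }
}
```

The two values differ by the cosine of the off-axis angle, which is largest near the screen edges, consistent with a scale that is only slightly off and text that is sharp in some places and blurred in others.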