
Let me introduce my problem.

This is a triangle rendered with WebGL. It is enlarged a little: [screenshot of a triangle with jagged, aliased edges]

And this is the triangle I want to have: [screenshot of the same triangle with smooth, anti-aliased edges]

So I'm looking for a shader that can blend the edges of a primitive triangle. I have an idea of how to realize one, but I'm probably not good enough to write it yet.

My idea is something like this: based on the positions of the 3 vertices, calculate for each fragment how much the primitive covers that pixel, and then set the transparency of the pixel based on the calculated coverage...

I can pass 2D coordinates from the vertex shader and use them in the fragment shader. I probably want to use gl_FragCoord.xy or gl_PointCoord.xy and calculate the percentage of pixel coverage, but I'm not able to compare these values with my vertex coordinates: the units are different (it's as if I were comparing miles with millimetres) and the origin is in a different place for each of these vectors, so I can't calculate the final transparency value. (gl_FragCoord.xy is in window pixels with the origin at the bottom-left corner of the canvas, while gl_PointCoord.xy runs from 0 to 1 across a point sprite, so the coordinate systems really are different.)
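The coordinate mismatch above can be bridged by converting the vertex-shader output into the same window-pixel space that gl_FragCoord uses. A minimal sketch of that conversion in plain JavaScript (the function name and parameters are illustrative, not from any library):

```javascript
// Sketch: convert a clip-space vertex position into window (pixel)
// coordinates, the space gl_FragCoord.xy lives in. Clip space is what the
// vertex shader outputs; window space has its origin at the bottom-left
// of the viewport, measured in pixels.
function clipToWindow(clipX, clipY, clipW, viewportWidth, viewportHeight) {
  // Perspective divide: clip space -> normalized device coordinates (-1..1).
  const ndcX = clipX / clipW;
  const ndcY = clipY / clipW;
  // Viewport transform: NDC -> pixels.
  return {
    x: (ndcX * 0.5 + 0.5) * viewportWidth,
    y: (ndcY * 0.5 + 0.5) * viewportHeight,
  };
}

// A vertex at NDC (0, 0) lands at the centre of a 200x100 viewport:
console.log(clipToWindow(0, 0, 1, 200, 100)); // { x: 100, y: 50 }
```

Once the vertex positions are in this space, they can be compared directly against gl_FragCoord.xy.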

Can anyone help me, please? Just point me in the right direction.


2 Answers

Answer 1 (1 vote)

There are lots of ways to achieve this.

You can render at a higher resolution. Make your canvas larger than the size it's displayed at; the browser will almost certainly bilinearly interpolate the result. Example:

<canvas width="400" height="400" style="width: 200px; height: 200px"></canvas>

declares a canvas with a 400x400 backing store that is scaled to 200x200 when displayed.
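The same sizing can be done from JavaScript. A sketch, where the supersampling factor of 2 is an arbitrary choice for illustration:

```javascript
// Sketch: compute a backing-store size larger than the CSS display size,
// so the browser downscales (and thereby smooths) the rendered image.
// `scale` is an arbitrary supersampling factor.
function backingStoreSize(cssWidth, cssHeight, scale) {
  return { width: cssWidth * scale, height: cssHeight * scale };
}

// In the browser you would then apply it like this (DOM code, not run here):
// const size = backingStoreSize(200, 200, 2);
// canvas.width = size.width;       // 400 -- drawing-buffer size
// canvas.height = size.height;     // 400
// canvas.style.width = '200px';    // displayed size stays 200x200
// canvas.style.height = '200px';
```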

Here's a fiddle.

Another technique would be to compute an alpha value in the shader such that you get the blending you want along the edge of the polygon.
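A minimal sketch of that shader idea. The varying `v_edgeDist` is an assumption: you would have to compute the per-vertex distance to the opposite edge yourself (in pixels) and let the rasterizer interpolate it.

```glsl
precision mediump float;

// Assumed varying: interpolated distance from this fragment to the nearest
// triangle edge, in pixels, supplied per-vertex by your own code.
varying float v_edgeDist;

void main() {
  // Fade alpha from 1 to 0 over the last pixel before the edge.
  float alpha = clamp(v_edgeDist, 0.0, 1.0);
  gl_FragColor = vec4(0.0, 0.0, 0.0, alpha);
}
```

Blending must be enabled (gl.enable(gl.BLEND)) for the alpha to have any visible effect.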

I'm sure there are others. Most Canvas2D implementations are GPU-accelerated and anti-aliased even if the GPU does not support anti-aliasing, so you could try digging through one of those.

Answer 2 (0 votes)

The problem with your plan is that OpenGL applies its own test to decide which pixels to draw before your shader ever runs: if the centre of a fragment lies inside the geometry boundary, it is rasterised; if it lies outside, it is not; if it lies exactly on the boundary, rasterisation depends on whether it is at the start or end of a horizontal or vertical run. The boundary condition ensures that where two triangles exactly meet along a shared edge, they never both produce the same fragments.

So if you compute coverage per fragment, you're almost never going to get a number less than 50% (corners and other very thin pieces of geometry being the exception), because a fragment only reaches your shader when its centre is already inside the triangle. You're not going to get the complete anti-aliasing you desire: you'll get the anti-aliased version clipped by the aliased version.
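The centre-sampling rule described above can be demonstrated in plain JavaScript (a sketch with an illustrative triangle, not WebGL's exact tie-breaking rules):

```javascript
// Sketch: a pixel is rasterised when its centre (x+0.5, y+0.5) lies inside
// the triangle, tested here with edge functions (the sign of the cross
// product against each edge).
function insideTriangle(px, py, a, b, c) {
  const edge = (p, q) => (px - p[0]) * (q[1] - p[1]) - (py - p[1]) * (q[0] - p[0]);
  const e0 = edge(a, b), e1 = edge(b, c), e2 = edge(c, a);
  // All non-negative or all non-positive -> the point is inside (or on an edge).
  return (e0 >= 0 && e1 >= 0 && e2 >= 0) || (e0 <= 0 && e1 <= 0 && e2 <= 0);
}

const tri = [[0, 0], [4, 0], [0, 4]];
// Pixel (1,1): its centre (1.5, 1.5) is inside, so it is drawn at full opacity.
console.log(insideTriangle(1.5, 1.5, ...tri)); // true
// Pixel (3,3): its centre (3.5, 3.5) is outside, so it is skipped entirely --
// a shader-computed alpha can never soften it.
console.log(insideTriangle(3.5, 3.5, ...tri)); // false
```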

Hardware anti-aliasing achieves this by taking multiple samples per output pixel. You can simulate that by rendering to a texture at a multiple of your output size, then scaling down; the mipmap generation will filter the input image.
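The downscaling step amounts to averaging blocks of samples. A sketch of that filter in plain JavaScript for a grayscale image stored row-major (in WebGL this averaging is done for you by bilinear/mipmap texture filtering):

```javascript
// Sketch: 2x2 box filter -- average each 2x2 block of the supersampled
// image down to one output pixel. Assumes width and height are even.
function downsample2x(pixels, width, height) {
  const out = [];
  for (let y = 0; y < height; y += 2) {
    for (let x = 0; x < width; x += 2) {
      const i = y * width + x;
      out.push((pixels[i] + pixels[i + 1] + pixels[i + width] + pixels[i + width + 1]) / 4);
    }
  }
  return out;
}

// A hard aliased edge (0 next to 255) becomes an intermediate grey:
console.log(downsample2x([0, 255, 0, 255], 2, 2)); // [127.5]
```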

That all being said, have you tried just passing antialias: true in the options object when calling canvas.getContext? That will use the hardware's anti-aliasing capabilities, subject to hardware and browser support.