5 votes

My aim is to pass an array of points to the shader, calculate each point's distance to the fragment, and paint a circle at each point, colored with a gradient that depends on that distance.

For example:
(Image: the desired result, from a working example I set up on Shadertoy.)

Unfortunately, it isn't clear to me how I should calculate and convert the coordinates I pass in so they can be used inside the shader.

What I'm currently trying is to pass two arrays of floats to the shader through uniforms - one for the x positions and one for the y positions of each point. Then, inside the shader, I iterate through each point like so:

#ifdef GL_ES
precision mediump float;
precision mediump int;
#endif

uniform float sourceX[100];
uniform float sourceY[100];
uniform vec2 resolution;

in vec4 gl_FragCoord;

varying vec4 vertColor;
varying vec2 center;
varying vec2 pos;

void main()
{
    float intensity = 0.0;

    for(int i=0; i<100; i++)
    {
        vec2 source = vec2(sourceX[i],sourceY[i]);
        vec2 position = ( gl_FragCoord.xy / resolution.xy );
        float d = distance(position, source);
        intensity += exp(-0.5*d*d);
    }

    intensity=3.0*pow(intensity,0.02);

    if (intensity<=1.0) 
        gl_FragColor=vec4(0.0,intensity*0.5,0.0,1.0);
    else if (intensity<=2.0)
        gl_FragColor=vec4(intensity-1.0, 0.5+(intensity-1.0)*0.5,0.0,1.0);
    else 
        gl_FragColor=vec4(1.0,3.0-intensity,0.0,1.0);
}

But that doesn't work, and I believe it's because I'm working with the pixel coordinates without properly translating them. Could anyone explain to me how to make this work?

Update:

The current result is: (image: all the circles are drawn black). The sketch's code is:

PShader pointShader;

float[] sourceX;
float[] sourceY;


void setup()
{

  size(1024, 1024, P3D);
  background(255);

  sourceX = new float[100];
  sourceY = new float[100];
  for (int i = 0; i<100; i++)  
  {
    sourceX[i] = random(0, 1023);
    sourceY[i] = random(0, 1023);
  }


  pointShader = loadShader("pointfrag.glsl", "pointvert.glsl");  
  shader(pointShader, POINTS);
  pointShader.set("sourceX", sourceX);
  pointShader.set("sourceY", sourceY);
  pointShader.set("resolution", float(width), float(height));
}


void draw()
{
  for (int i = 0; i<100; i++) {   
    strokeWeight(60);
    point(sourceX[i], sourceY[i]);
  }
}

while the vertex shader is:

#define PROCESSING_POINT_SHADER

uniform mat4 projection;
uniform mat4 transform;


attribute vec4 vertex;
attribute vec4 color;
attribute vec2 offset;

varying vec4 vertColor;
varying vec2 center;
varying vec2 pos;

void main() {

  vec4 clip = transform * vertex;
  gl_Position = clip + projection * vec4(offset, 0, 0);

  vertColor = color;
  center = clip.xy;
  pos = offset;
}
Can you expand on how it "doesn't work"? Apart from not handling point scale or viewport aspect ratio, nothing looks immediately wrong. Actually, I'm not sure about the uniform arrays, as there is a limit on the number of uniforms you can have. How do you set their values? Do you use uniform buffers? – jozxyqk

Ignore that, setting a uniform array is fine, and it looks like you won't run out of locations given at least 1024, but you might want to look at uniform buffer objects. – jozxyqk

Thanks for the reply; the problem is that the shader draws all the circles black. From what I can tell, there's some problem with the value of d. Here's what I get: (image). – Giuseppe

2 Answers

2 votes

Update:

Based on the comments it seems you have confused two different approaches:

  1. Draw a single full screen polygon, pass in the points and calculate the final value once per fragment using a loop in the shader.
  2. Draw bounding geometry for each point, calculate the density for just one point in the fragment shader and use additive blending to sum the densities of all points.

The other issue is that your points are given in pixels but the code expects the 0 to 1 range, so d is large and the points come out black. Fixing this as @RetoKoradi describes should address the points being black, but I suspect you'll then find ramp clipping issues when many points are in close proximity. Passing points into the shader also limits scalability and is inefficient unless the points cover the whole viewport.
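
If you do stay with approach 1 for now, a minimal sketch-side fix is to normalize the point positions into the 0 to 1 range before uploading them, so they match gl_FragCoord.xy / resolution.xy in the shader. A sketch under that assumption (normX and normY are my names; note that gl_FragCoord has its origin at the bottom-left, so a y flip is likely needed against Processing's top-left origin):

float[] normX = new float[100];
float[] normY = new float[100];
for (int i = 0; i < 100; i++) {
  // scale pixel positions down into the 0 to 1 range the shader expects
  normX[i] = sourceX[i] / float(width);
  // flip y: gl_FragCoord grows upward, Processing's y grows downward
  normY[i] = 1.0 - sourceY[i] / float(height);
}
pointShader.set("sourceX", normX);
pointShader.set("sourceY", normY);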

As noted below, I think sticking with approach 2 is better. To restructure your code for it, remove the loop, don't pass in the array of points, and use center as the point coordinate instead:

//calc center in pixel coordinates
vec2 centerPixels = (center * 0.5 + 0.5) * resolution.xy;

//find the distance in pixels (avoiding aspect ratio issues)
float dPixels = distance(gl_FragCoord.xy, centerPixels);

//scale down to the 0 to 1 range
float d = dPixels / resolution.y;

//write out the intensity
gl_FragColor = vec4(exp(-0.5*d*d));

Draw this to a texture (from comments: opengl-tutorial.org code and this question) with additive blending:

glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);
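
Since your sketch uses Processing, the closest equivalent there is drawing into an offscreen PGraphics with blendMode(ADD). A minimal sketch of the idea, assuming the per-point fragment shader above is saved as densityfrag.glsl (density, densityShader and the file names are my placeholders):

PGraphics density;     // offscreen buffer that accumulates intensity
PShader densityShader; // the per-point shader from the snippet above
float[] sourceX = new float[100];
float[] sourceY = new float[100];

void setup() {
  size(1024, 1024, P3D);
  for (int i = 0; i < 100; i++) {
    sourceX[i] = random(0, 1023);
    sourceY[i] = random(0, 1023);
  }
  density = createGraphics(width, height, P3D);
  densityShader = loadShader("densityfrag.glsl", "pointvert.glsl");
  densityShader.set("resolution", float(width), float(height));
}

void draw() {
  density.beginDraw();
  density.background(0);
  density.blendMode(ADD);  // sums overlapping discs, like glBlendFunc(GL_ONE, GL_ONE)
  density.shader(densityShader, POINTS);
  density.strokeWeight(60);
  for (int i = 0; i < 100; i++) {
    density.point(sourceX[i], sourceY[i]);
  }
  density.endDraw();
  image(density, 0, 0);    // or feed it to the second pass below
}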

Now that texture will contain intensity as it was after your original loop. In another fragment shader during a full screen pass (draw a single triangle that covers the whole viewport), continue with:

uniform sampler2D intensityTex;
...
float intensity = texture2D(intensityTex, gl_FragCoord.xy/resolution.xy).r;
intensity = 3.0*pow(intensity, 0.02);
...
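
In Processing terms, one way to run that full-screen pass is to draw a rectangle through the ramp shader (a sketch; rampfrag.glsl is assumed to contain the snippet above, and density is the PGraphics from the blending step):

PShader rampShader;

// in setup():
rampShader = loadShader("rampfrag.glsl");
rampShader.set("resolution", float(width), float(height));

// in draw(), once density has been rendered:
rampShader.set("intensityTex", density);
shader(rampShader);
rect(0, 0, width, height);  // full-screen quad; the ramp shader runs once per pixel
resetShader();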

The code you have shown is fine, assuming you're drawing a full screen polygon so the fragment shader runs once for each pixel. Potential issues are:

  • resolution isn't set correctly
  • The point coordinates aren't in the range 0 to 1 on the screen.
  • Although minor, d will be stretched by the aspect ratio, so you might be better off scaling the points up to pixel coordinates and dividing the distance by resolution.y (see the sketch after this list).
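
For that last point, a minimal GLSL sketch of the aspect-ratio fix (assuming source has already been normalized into the 0 to 1 range):

vec2 sourcePixels = source * resolution.xy;              // scale back up to pixels
float dPixels = distance(gl_FragCoord.xy, sourcePixels); // isotropic distance in pixels
float d = dPixels / resolution.y;                        // divide by one axis so d isn't stretched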

This looks pretty similar to creating a density field for 2D metaballs. For performance you're best off limiting the density function for each point so it doesn't extend forever, then splatting discs into a texture using additive blending. This saves processing the pixels a point doesn't affect (just like in deferred shading). The result is the density field, or in your case the per-pixel intensity.


0 votes

It looks like the point center and the fragment position are in different coordinate spaces when you compute the distance between them:

vec2 source = vec2(sourceX[i],sourceY[i]);
vec2 position = ( gl_FragCoord.xy / resolution.xy );
float d = distance(position, source);

Based on your explanation and code, sourceX and sourceY are in window coordinates, meaning that they are in units of pixels. gl_FragCoord is in the same coordinate space. And even though you don't show it directly, I assume that resolution is the size of the window in pixels.

This means that:

vec2 position = ( gl_FragCoord.xy / resolution.xy );

calculates the normalized position of the fragment within the window, in the range [0.0, 1.0] for both x and y. But then on the next line:

float d = distance(position, source);

you subtract source, which is still in window coordinates, from this position in normalized coordinates.

Since it looks like you wanted the distance in normalized coordinates, which makes sense, you'll also need to normalize source:

vec2 source = vec2(sourceX[i],sourceY[i]) / resolution.xy;
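
With that change, the loop needs nothing else; both operands of distance are now in the same normalized space:

for (int i = 0; i < 100; i++) {
    // both source and position are in normalized [0.0, 1.0] coordinates
    vec2 source = vec2(sourceX[i], sourceY[i]) / resolution.xy;
    vec2 position = gl_FragCoord.xy / resolution.xy;
    float d = distance(position, source);
    intensity += exp(-0.5*d*d);
}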