2
votes

So I have a rather simple real-time 2D game that I am trying to add some nice glow to. Taken down to its most basic form, it is simply circles and lines drawn on a black surface. If you consider the scene from an HSV color-space perspective, all colors (except for black) have a "V" value of 100%.

Currently I have a sort of "accumulation" buffer where the current frame is joined with the previous frame. It works by using two off-screen buffers and a black texture.

  1. Buffer one activated-------------
  2. Lines and dots drawn
  3. Buffer one deactivated
  4. Buffer two activated-------------
  5. Buffer two's contents drawn as a full-screen quad
  6. Black texture drawn with slight transparency over the full screen (this fade step is sketched as a shader just after the list)
  7. Buffer one contents drawn
  8. Buffer two deactivated
  9. On Screen buffer activated-------
  10. Buffer two's contents drawn to screen
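
For reference, the fade in step 6 is just the equivalent of drawing a full-screen quad with normal alpha blending and a trivial fragment shader, something like this (the 0.1 fade amount is only an example value):

//Fade-to-black sketch (equivalent of step 6); drawn over the whole buffer
//with glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) enabled
precision mediump float;

void main()
{
   //0.1 is an example value; larger values mean shorter trails
   gl_FragColor = vec4(0.0, 0.0, 0.0, 0.1);
}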

Right now, by far the biggest source of "lag" is latency on the CPU; the GPU handles all of this really well.

So I was thinking of spicing things up a bit by adding a glow effect. Perhaps for step 10, instead of using a regular texture shader, I could use one that draws the texture with glow!

Unfortunately I am a bit confused about how to do this, for a few reasons:

  • Blur stuff: some people claim that a Gaussian blur can be done in real-time, while others say you shouldn't try. People also mention another type of blur called a "focus" blur, and I don't know what that is.
  • Most of the examples I can find use XNA. I need one written in a shader language compatible with OpenGL ES 2.0.
  • Some people call it glow, others call it bloom.
  • Different blending modes (which?) can be used to add the glow to the original texture.
  • How to combine vertical and horizontal blur? Perhaps in one draw call?

Anyway, the process for rendering glow, as I understand it, is this (a rough sketch of step 1 follows the list):

  1. Cut out the dark data from the rendered frame
  2. Blur the light data (using a Gaussian?)
  3. Blend the light data on top of the original (screen blending?)
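
For step 1, I imagine something like the following rough threshold shader (the 0.5 cutoff is an arbitrary value I picked; in my case everything non-black is already at full value):

//Rough guess at step 1: keep bright pixels, zero out dark ones
precision mediump float;

varying vec2 v_texcoord;
uniform sampler2D s_texture;

void main()
{
   vec4 color = texture2D(s_texture, v_texcoord);
   //brightness here is roughly the HSV "v" of the pixel
   float brightness = max(color.r, max(color.g, color.b));
   //0.5 is an arbitrary cutoff
   gl_FragColor = brightness > 0.5 ? color : vec4(0.0, 0.0, 0.0, color.a);
}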

So far I have gotten to the point where I have a shader that draws a texture. What does my next step look like?

//Vertex
precision highp float;

attribute vec2 positionCoords;
attribute vec2 textureCoords;
uniform mat4 matrix;
uniform float alpha;
varying vec2 v_texcoord;
varying float o_alpha;

void main()
{
   gl_Position = matrix * vec4(positionCoords, 0.0, 1.0);
   v_texcoord = textureCoords.xy;
   o_alpha = alpha;
}



//Fragment
precision mediump float;  // ES 2.0 fragment shaders need a default float precision

varying vec2 v_texcoord;
uniform sampler2D s_texture;
varying float o_alpha;

void main()
{
   vec4 color = texture2D(s_texture, v_texcoord);
   gl_FragColor = vec4(color.r, color.g, color.b, color.a - o_alpha);
}

Also is this a feasible thing to do in real-time?

Edit: I probably want to do a 5px or less blur


2 Answers

0
votes

To address your initial confusion items:

  • Any kind of blur filter will effectively spread each pixel into a blob based on its original position, and accumulate this result additively for all pixels. The difference between filters is the shape of the blob.
    • For a Gaussian blur, this blob should be a smooth gradient, feathering gradually to zero around the edges. You probably want a Gaussian blur.
    • A "focus" blur would be an attempt to emulate an out-of-focus camera: rather than fading gradually to zero, its blob would spread each pixel over a hard-edged circle, giving a subtly different effect.
  • For a straightforward, one-pass effect, the computational cost is proportional to the width of the blur. This means that a narrow (e.g. 5px or less) blur is likely to be feasible as a real-time one-pass effect. (It is possible to achieve a wide Gaussian blur in real-time by using multiple passes and a multi-resolution pyramid, but I'd recommend trying something simpler first...)
  • You could reasonably call the effect either "glow" or "bloom". However, to me, "glow" connotes a narrow blur leading to a neon-like effect, while "bloom" connotes using a wide blur to emulate the visual effect of bright objects in a high-dynamic-range visual environment.
  • The blend mode determines how what you draw is combined with the existing colors in the target buffer. In OpenGL, activate blending with glEnable(GL_BLEND) and set the mode with glBlendFunc(). (One shader-side way of combining the glow with the original scene is sketched just after this list.)
  • For a narrow blur, you should be able to do horizontal and vertical filtering in one pass.
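
To make the blend-mode point concrete, here is one possible composite shader that combines a blurred "glow" texture with the original scene; the sampler names are placeholders, and an additive combine (scene + glow) would be the shader-side equivalent of glBlendFunc(GL_ONE, GL_ONE):

//Composite sketch: screen-blend the blurred glow over the original scene
//s_scene and s_glow are placeholder names for the two input textures
precision mediump float;

varying vec2 v_texcoord;
uniform sampler2D s_scene;
uniform sampler2D s_glow;

void main()
{
   vec3 scene = texture2D(s_scene, v_texcoord).rgb;
   vec3 glow  = texture2D(s_glow,  v_texcoord).rgb;
   //screen blend: brightens without hard clipping at 1.0
   vec3 combined = 1.0 - (1.0 - scene) * (1.0 - glow);
   gl_FragColor = vec4(combined, 1.0);
}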

To do fast one-pass full-screen sampling, you will need to determine the pixel increment in your source texture. It is fastest to determine this statically, so that your fragment shader doesn't need to compute it at run-time:

float dx = 1.0 / x_resolution_drawn_over;  // width of one texel, in texture coordinates
float dy = 1.0 / y_resolution_drawn_over;  // height of one texel, in texture coordinates
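
For example, if the buffer being drawn over were 1024x768 (an arbitrary example resolution), these could simply be baked into the shader source as constants:

const float dx = 1.0 / 1024.0;  // example values only
const float dy = 1.0 / 768.0;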

You can do a 3-pixel (1,2,1) Gaussian blur in one pass by setting your texture sampling mode to GL_LINEAR, and taking 4 samples from source texture t as follows:

float dx2 = 0.5*dx;  float dy2 = 0.5*dy;  // filter steps

[...]

vec2 a1 = vec2(x+dx2, y+dy2);  // x,y = current texture coordinates
vec2 a2 = vec2(x+dx2, y-dy2);
vec2 b1 = vec2(x-dx2, y+dy2);
vec2 b2 = vec2(x-dx2, y-dy2);
result = 0.25*(texture2D(t,a1) + texture2D(t,a2) + texture2D(t,b1) + texture2D(t,b2));
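
As a complete ES 2.0 fragment shader, the 4-sample version might look like this (the uniform and varying names are placeholders, not something your code already has; u_texelSize would hold (1/width, 1/height) of the source buffer):

//3-pixel (1,2,1) Gaussian via 4 bilinear samples; requires GL_LINEAR filtering
precision mediump float;

varying vec2 v_texcoord;
uniform sampler2D s_texture;
uniform vec2 u_texelSize;

void main()
{
   vec2 o = 0.5 * u_texelSize;  // half-texel offsets, so each sample averages a 2x2 block
   vec4 sum = texture2D(s_texture, v_texcoord + vec2( o.x,  o.y))
            + texture2D(s_texture, v_texcoord + vec2( o.x, -o.y))
            + texture2D(s_texture, v_texcoord + vec2(-o.x,  o.y))
            + texture2D(s_texture, v_texcoord + vec2(-o.x, -o.y));
   gl_FragColor = 0.25 * sum;
}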

You can do a 5-pixel (1,4,6,4,1) Gaussian blur in one pass by setting your texture sampling mode to GL_LINEAR, and taking 9 samples from source texture t as follows:

float dx12 = 1.2*dx; float dy12 = 1.2*dy;  // filter steps
float k0 = 0.375; float k1 = 0.3125;  // filter constants

vec4 filter(vec4 a, vec4 b, vec4 c) {
  return k1*a + k0*b + k1*c;
}

[...]

vec2 a1 = vec2(x+dx12, y+dy12);  // x,y = current texture coordinates
vec2 a2 = vec2(x,      y+dy12);
vec2 a3 = vec2(x-dx12, y+dy12);
vec4 a = filter(texture2D(t,a1), texture2D(t,a2), texture2D(t,a3));

vec2 b1 = vec2(x+dx12, y     );
vec2 b2 = vec2(x,      y     );
vec2 b3 = vec2(x-dx12, y     );
vec4 b = filter(texture2D(t,b1), texture2D(t,b2), texture2D(t,b3));

vec2 c1 = vec2(x+dx12, y-dy12);
vec2 c2 = vec2(x,      y-dy12);
vec2 c3 = vec2(x-dx12, y-dy12);
vec4 c = filter(texture2D(t,c1), texture2D(t,c2), texture2D(t,c3));

result = filter(a,b,c);

I can't tell you if these filters will be real-time feasible on your platform; 9 samples/pixel at full resolution could be slow.

Any wider Gaussian would make separate horizontal and vertical passes advantageous (one direction of such a pass is sketched below); a substantially wider Gaussian would require multi-resolution techniques for real-time performance. (Note that, unlike the Gaussian, filters such as the "focus" blur are not separable, so they cannot be split into horizontal and vertical passes...)
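
For reference, a separated blur runs a one-dimensional pass horizontally into a temporary buffer and then the same pass vertically. A sketch of one such pass, reusing the (1,4,6,4,1)/16 weights and GL_LINEAR trick from above (u_step is a placeholder uniform), might look like this; wider kernels would simply add more sample pairs:

//One direction of a separable 5-pixel Gaussian; requires GL_LINEAR filtering
//u_step is (dx, 0) for the horizontal pass and (0, dy) for the vertical pass
precision mediump float;

varying vec2 v_texcoord;
uniform sampler2D s_texture;
uniform vec2 u_step;

void main()
{
   gl_FragColor = 0.375  * texture2D(s_texture, v_texcoord)
                + 0.3125 * texture2D(s_texture, v_texcoord + 1.2 * u_step)
                + 0.3125 * texture2D(s_texture, v_texcoord - 1.2 * u_step);
}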

0
votes

Everything that @comingstorm has said is true, but there's a much easier way. Don't write the blur or glow yourself. Since you're on iOS, why not use CoreImage, which has a number of interesting filters to choose from and which already work in real time? For example, it has a Bloom filter which will likely produce the results you want. Also of interest might be the Gloom filter.

Chaining together CoreImage filters is much easier than writing shaders. You can create a CIImage from an OpenGL texture via [+CIImage imageWithTexture:size:flipped:colorSpace:].