3 votes

I have created a hairstyle in Blender 2.66 using the hair particle system. It looks like this:

[screenshot: the hairstyle rendered in Blender, with correct shading along the strands]

As you can see, lighting is applied to the line segments. After converting the particles to a mesh, I exported the hairstyle to the OBJ file format. I parsed it in my program and the render looks like this:

[screenshot: the same hair rendered in my program, without proper lighting]

The particles are drawn as GL_LINES (in my OBJ file each face has 2 vertices).

In another test program I wanted to test lighting on a single line segment. Here are my vertex buffers:

static GLfloat vertices[6] =   // two endpoints of a segment along the Z axis
{
    0.000000f, 0.000000f, -2.000000f,
    0.000000f, 0.000000f,  2.000000f
};

static GLfloat normals[6] =    // one normal per vertex, both pointing up (+Y)
{
    0.000000f, 1.000000f, 0.000000f,
    0.000000f, 1.000000f, 0.000000f
};

static GLfloat colors[6] =     // blue at both endpoints
{
    0.000000f, 0.000000f, 1.000000f,
    0.000000f, 0.000000f, 1.000000f
};

And the result (the line rotates around the origin in a plane orthogonal to the X axis; I called glLineWidth(5.0f) to make the result more visible):

[screenshot: the rotating line, lit only on one side]

In the real-time animation I could see that the lighting is correct, but only on one specific 'side' of the line. That makes sense: a line segment has infinitely many normals, and I supply only two (one per vertex). Note that these two normals are the normal of the plane Y = 0, i.e. n(0.0, 1.0, 0.0). So I wonder: is it possible to attach several normals to a single vertex? I believe OpenGL can't do that, but maybe there is another way. Here's a drawing that explains what I would like to do to compute correct lighting on every side of the line segment:

[diagram: a line segment with normals fanning out around it in all directions]

As you can see in the first picture above, Blender computes lighting on line segments in real time, and that picture is itself an OpenGL render, so I'm sure it is possible. I tried repeating the same line segment coordinates a second time, applying the opposite normal n2(0.0, -1.0, 0.0) to the second line, but it does not work: the 'other side' stays dark. The same thing happens with two polygons. I currently use GLSL shaders. I think this may be possible with special shader stages like a geometry shader or a tessellation shader, or maybe CUDA is required, but I'm not sure.

Can anyone help me?


2 Answers

2 votes

This is a complex topic. You should begin by reading Marschner's 2003 paper on hair lighting ("Light Scattering from Human Hair Fibers"). Once you have been confused enough, look at NVIDIA's explanation of this model in GPU Gems 2 (sec. 23.3, "Hair Shading"), which includes nice diagrams and shader code.

Hope this helps!

1 vote

ananthonline has already given you some references, but those are complete overkill if all you need is a simple illumination model for line strands.

If your demands are not that advanced, you can apply the Phong illumination model to the strands. You may ask: "wait, strands don't have normals, but you need those for Phong?" Well, yes and no. A line segment has an infinite number of normals; together they sweep out a plane. Or, put the other way around, the line strand itself is the normal of a plane.

The Phong model starts from the assumption of a Lambertian scattering model, i.e. the "more" perpendicular the angle of incidence, the brighter it gets. The math describing this is

I(phi) = I_max * cos( phi )

or, substituting phi with a pair of unit vectors and using cos( angle(↑a, ↑b) ) = ↑a · ↑b where ||↑a|| = ||↑b|| = 1,

I(↑a, ↑b) = I_max * ↑a · ↑b

Now let ↑b be the unit direction of the line segment and ↑c the unit direction toward the light. A normal ↑n of the segment satisfies ↑n · ↑b = 0; among all of them, pick the one lying in the plane spanned by ↑b and ↑c. For that normal, ↑c decomposes into its components along ↑b and along ↑n, so (↑c · ↑b)² + (↑c · ↑n)² = 1 and hence ↑c · ↑n = sqrt(1 - (↑c · ↑b)²). Which means for a line segment of direction ↑b and direction to light source ↑c you can write the intensity as

I(↑b, ↑c) = I_max * sqrt(1 - (↑b · ↑c)²)

And that is a Phong-style diffuse illumination model for lines; in fact it is the diffuse term of the classic Kajiya-Kay hair shading model, and essentially what Blender does.

Update: Specular reflection

You need the eye position only for the calculation of the specular highlight. You can handle that as well, by taking as the strand's normal the component of the light direction perpendicular to the strand. Let ↑b again be the direction of the line segment and ↑c the direction toward the light. Then, applying the Gram-Schmidt orthogonalization step, you can derive an in-situ normal ↑n by

↑n = normalize( ↑c - (↑b · ↑c) ↑b )

Using that you can build the usual set of

- vertex position
- vertex "normal"
- light direction
- light half vector

in a line-strand vertex shader, pass them to a regular Phong fragment shader, and do the math as usual.