2
votes

Updated: It appears that most of this stuff is now gone. I think the way to do lighting properly in OpenGL ES 2.0 is to write your own shaders, pass the normals in any way you wish, then do the lighting in a shader. In particular, according to the OpenGL ES 2.0 docs, there is no longer a glMaterialfv, glLightfv, etc. (let alone glNormalPointer and friends). I think the reason this compiles for me is that these functions are provided on iOS by GLKit, though apparently they don't really work out of the box as drop-in replacements. However, GLKit provides some standard shaders to mimic the fixed-function lighting of OpenGL ES 1.1.

If anyone has any further information, I appreciate it, but I think that's the answer.


I'm new to OpenGL, so apologies for any confusion in advance. I'm using OpenGL ES 2.0 on iOS. Many of the examples I find on the Internet seem to be outdated. For example, the glBegin(GL_POLYGON) methodology seems to be deprecated, so I don't think I can use glNormal3f, as most older examples do.

I started from Ray Wenderlich's nice tutorial and integrated it with a game I'm building, successfully rendering the basic geometry of some rooms. Now I just want to add lighting. I'm not sure what's not working, but I suspect it has to do with normals.

I have made a confused attempt to enable GL_LIGHT0, largely taken from the OpenGL documentation.

//* Some attempts at lighting
GLfloat mat_specular[] = { 1.0, 1.0, 1.0, 1.0 };
GLfloat mat_shininess[] = { 50.0 };
GLfloat light0_position[] = { 0.0, 0.0, 7.99, 0.0 };
GLfloat light0_ambience[] = { 0.2, 0.2, 0.2, 1.0 };
GLfloat light0_specular[] = { 1.0, 1.0, 1.0, 1.0 };

glMaterialfv(GL_FRONT, GL_SPECULAR, mat_specular);
glMaterialfv(GL_FRONT, GL_SHININESS, mat_shininess);
glLightfv(GL_LIGHT0, GL_POSITION, light0_position);
glLightfv(GL_LIGHT0, GL_AMBIENT, light0_ambience);
glLightfv(GL_LIGHT0, GL_SPECULAR, light0_specular);
glShadeModel(GL_SMOOTH);

glEnable(GL_LIGHTING);
glEnable(GL_LIGHT0);
//*/
glEnable(GL_DEPTH_TEST);

I'm using VBOs to pass vertex positions and colors to a pair of simple shaders. The following is from the tutorial:

_positionSlot = glGetAttribLocation(programHandle, "Position");
_colorSlot = glGetAttribLocation(programHandle, "SourceColor");

And then in the render: method (I've modified the Vertex structure to include normals between the position and color fields):

glVertexAttribPointer(_positionSlot, 3, GL_FLOAT, GL_FALSE,
                      sizeof(Vertex), 0);
glVertexAttribPointer(_colorSlot, 4, GL_FLOAT, GL_FALSE,
                      sizeof(Vertex), (GLvoid*) (sizeof(float) * 6));

All of this works very well, except for the lighting. I've calculated and recalculated my normals. For vertical, rectangular walls parallel to the X and Y axes, this is trivial. They are part of the buffer that is bound to GL_ARRAY_BUFFER, along with the position and color information.

I have tried repeatedly to use glNormalPointer:

glEnableClientState(GL_NORMAL_ARRAY);
glNormalPointer(GL_FLOAT, sizeof(Vertex), (GLvoid*)(sizeof(float)*3));

The Vertex structure from the tutorial is now:

typedef struct {
    float Position[3];
    float Normal[3];
    float Color[4];
} Vertex;

The call above succeeds, but no lighting effects appear. Some sources suggest I should use glVertexAttribPointer for the normals as well. To this end, I modified the tutorial's vertex shader:

attribute vec4 Position;
attribute vec4 SourceColor;
attribute vec3 Normal;

varying vec4 DestinationColor;

uniform mat4 Projection;
uniform mat4 Modelview;

void main(void) {
    DestinationColor = SourceColor;
    gl_Position = Projection * Modelview * Position;
}

and added calls I thought would allow me to use that Normal variable (though I'm not sure how OpenGL would know what to do with it):

// just before glLinkProgram()
glBindAttribLocation(programHandle, 1, "Normal");

GLuint _normalSlot = glGetAttribLocation(programHandle, "Normal");

glVertexAttribPointer(_normalSlot, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (GLvoid*)(sizeof(float)*3));

I notice that no matter where I put the Normal variable in the GLSL file, glBindAttribLocation always fails for that attribute, and glGetAttribLocation always returns -1 for that attribute. (I can change the location of the SourceColor attribute successfully using glBindAttribLocation, but it will not accept Normal.) There used to be a gl_Normal variable in older versions of OpenGL/GLSL, but apparently not any more.

I'm at a loss and just keep trying whatever occurs to me. My next step will be to try glInterleavedArrays. There are so many moving parts that it's hard to isolate the problem. It may be that my normals have been fine all along and there's some problem with the light instead. Aside from lighting, is there any way to confirm that your normals are correct? Or should I be able to see anything at all from a light even if the normals are wrong?

So apologies for the length and confusion of this post, but any detailed explanation of how normals are specified in this situation would be welcome. I repeatedly run into remarks that things are different now in OpenGL ES 2.0, but there are never any detailed examples. Thanks in advance.


1 Answer

7
votes

I've learned a lot since posting that and realize it was never really properly answered. The update is correct, but it's worth explaining a little more, since I've still never seen a very good explanation anywhere. This is brief and could benefit from a detailed, concrete example, but hopefully it hits the major points.

You usually transmit the normal vector as a triplet of vertex attributes in a VBO. If you're working with textures, you'll define a vertex struct in your code containing at minimum twelve floating-point attributes per vertex: the three spatial coordinates (xyz) of the vertex, the three components of the normal vector at the vertex, the four color components (rgba), and the two texture coordinates there. Depending on your lighting model, you may add further attributes of your own definition, up to the hardware limit reported by GL_MAX_VERTEX_ATTRIBS.

It's up to you precisely how you lay out the struct because you're the one who will be doing all the lighting computations; these things mean nothing to OpenGL ES 2.0. You have to build the GLSL vertex and fragment shader programs that compute the color of each pixel. You call glVertexAttribPointer once for each attribute vector you define, and this results in an attribute vector being passed to the vertex shader with the name you specify. You can pass it to the fragment shader from the vertex shader using a varying. The attribute values in the fragment shader will be determined by linear interpolation.

You will usually also pass in a number of uniforms with further lighting parameters. Use uniforms for anything that does not vary from one vertex to the next, like the position or intensity of a light or a material property like shininess. Use vertex attributes for anything that varies within your model.

Most of the magic should happen in the fragment shader. The simplest lighting effect that uses the normal vector is diffuse lighting, which you can produce with a fragment shader like this:

varying mediump vec4 FragmentPosition;
varying mediump vec3 FragmentNormal;
varying lowp vec4 FragmentColor;

uniform mediump vec3 LightPosition;
uniform mediump float LightIntensity;

void main() {
  // .xyz makes a vec3 out of a vec4
  mediump vec3 inLightFrame = FragmentPosition.xyz - LightPosition;
  mediump float distance = length(inLightFrame);
  // normalize the light vector so the dot product is a pure cosine,
  // then apply inverse-square falloff with distance
  mediump float lambert = max(0.0, -dot(inLightFrame / distance,
                                        normalize(FragmentNormal)));
  gl_FragColor = lambert / (distance * distance)
    * LightIntensity * FragmentColor;
}

This assumes the LightPosition and LightIntensity uniforms are set from application code, and a vertex shader like:

attribute vec4 Position;
attribute vec3 Normal;
attribute vec4 Color;

varying vec4 FragmentPosition;
varying vec3 FragmentNormal;
varying vec4 FragmentColor;

uniform mat4 Projection;
uniform mat4 Modelview;

void main() {
    FragmentPosition = Position;
    FragmentNormal = normalize(Normal);
    FragmentColor = Color;
    gl_Position = Projection * Modelview * Position;
}

In addition, there will be some calls to glVertexAttribPointer to define the Position, Normal and Color, such as:

glVertexAttribPointer(glGetAttribLocation(programHandle, "Position"), 3, GL_FLOAT, 
  GL_FALSE, sizeof(Vertex), (GLvoid*)(sizeof(float)*0));
glVertexAttribPointer(glGetAttribLocation(programHandle, "Normal"), 3, GL_FLOAT, 
  GL_FALSE, sizeof(Vertex), (GLvoid*)(sizeof(float)*3));
glVertexAttribPointer(glGetAttribLocation(programHandle, "Color"), 4, GL_FLOAT, 
  GL_FALSE, sizeof(Vertex), (GLvoid*)(sizeof(float)*6));

This assumes a struct like

typedef struct {
  float Position[3];
  float Normal[3];
  float Color[4];
} Vertex;

This trivial example assumes a white light of unit intensity. You may want more lights, and you'll probably want to specify red, green and blue intensities for each. This example ignores material properties and ambient and specular lighting, as well as shadows. But this is how you handle normal vectors in OpenGL ES 2.0.