
I am rendering polyhedra in WebGL and I'm wondering if there's a more efficient and less error-prone way of generating vertex positions and normals.

Ideally I would send a buffer containing only indices to the GPU and calculate the per-vertex positions and normals in the vertex shader. The vertex shader would look something like this:

uniform mat4 transform;

uniform ??? polyhedra; // some kind of representation of the polyhedra type    

attribute float index; // GLSL ES 1.00 has no integer attributes

varying vec4 normal;
varying vec2 uv;

vec3 calculatePosition() { /* mathemagic w/index and polyhedra */ }
vec3 calculateNormal  () { /* mathemagic w/index and polyhedra */ }
vec2 calculateUV      () { /* mathemagic w/index and polyhedra */ }

void main() {
  gl_Position = transform * vec4(calculatePosition(), 1.0);
  normal      = transform * vec4(calculateNormal(), 0.0); // w = 0.0: a normal is a direction, not a position
  uv          = calculateUV();
}

and then send it a buffer of vertex indices. For a cube rendered with two triangles per face and flat shading (which matters, because per-face normals mean vertices can't be shared between faces), I would send a buffer of indices [0..36) (6 faces * 2 triangles * 3 vertices each).
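On the JavaScript side, the setup would be something like this (a rough sketch only; gl and program stand in for an existing WebGL context and a program linked from the shader above):

// Fill a plain float attribute with the values 0..35; GLSL ES 1.00 has no
// integer attributes, so the indices are sent as floats.
var indices = new Float32Array(36);          // 6 faces * 2 triangles * 3 vertices
for (var i = 0; i < indices.length; i++) {
  indices[i] = i;
}

var indexBuffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, indexBuffer);
gl.bufferData(gl.ARRAY_BUFFER, indices, gl.STATIC_DRAW);

var loc = gl.getAttribLocation(program, 'index');
gl.enableVertexAttribArray(loc);
gl.vertexAttribPointer(loc, 1, gl.FLOAT, false, 0, 0);

gl.drawArrays(gl.TRIANGLES, 0, 36);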

Is this possible efficiently and without heroic effort?

WebGL is essentially OpenGL ES 2.0, so no geometry shaders.

Other than the convenience of vector/matrix operators, there's no particular reason generating the vertices would be better done in GLSL than JavaScript. – Kevin Reid

1 Answer


Sure, this is possible... probably not practical, however.

You cannot generate new vertices in a vertex shader, but you can at least fill them with procedurally computed values. One approach would be to compute the positions in a fragment shader, render them into a texture via an FBO, and then fetch the positions/normals/UVs in the vertex shader with a texture lookup. That is about the best you can do, since OpenGL ES 2.0 lacks transform feedback to store the computed data persistently.
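Roughly, the vertex-side lookup could look like the sketch below (positions and textureSize are made-up names here, and since vertex texture fetch is optional in OpenGL ES 2.0 you have to check that the device supports it at all):

// Vertex texture fetch is optional in ES 2.0 / WebGL 1.
if (gl.getParameter(gl.MAX_VERTEX_TEXTURE_IMAGE_UNITS) === 0) {
  // No texture lookups in the vertex shader on this device; fall back to
  // ordinary vertex buffers.
}

var vertexShaderSource =
  'uniform mat4 transform;\n' +
  'uniform sampler2D positions;  // filled by the FBO pass\n' +
  'uniform float textureSize;    // width/height of the position texture\n' +
  'attribute float index;\n' +
  'void main() {\n' +
  '  // Map the linear index to the center of its texel.\n' +
  '  float u = (mod(index, textureSize) + 0.5) / textureSize;\n' +
  '  float v = (floor(index / textureSize) + 0.5) / textureSize;\n' +
  '  vec3 position = texture2D(positions, vec2(u, v)).xyz;\n' +
  '  gl_Position = transform * vec4(position, 1.0);\n' +
  '  // Normals/UVs would need additional textures (or packing) and lookups.\n' +
  '}\n';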

However, if you are only generating these polyhedra once, there is little incentive to offload the work to the GPU in the first place. WebGL runs on a wide variety of devices, some with higher vertex throughput than others, but in all cases reading vertex positions from a vertex buffer is quicker than a texture lookup. If WebGL had transform feedback I might suggest otherwise, but as it stands this just sounds like a colossal waste of time.
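For what it's worth, the conventional route the comment above points at is not much code either, since the positions and normals only have to be computed once in JavaScript and uploaded as ordinary buffers. A sketch, showing a single flat-shaded cube face:

// One face (+Z) of a flat-shaded unit cube: 2 triangles, 6 vertices, all
// sharing the same face normal.
var positions = new Float32Array([
  -1, -1, 1,   1, -1, 1,   1,  1, 1,    // first triangle
  -1, -1, 1,   1,  1, 1,  -1,  1, 1     // second triangle
]);
var normals = new Float32Array(18);
for (var i = 0; i < 18; i += 3) {
  normals[i] = 0; normals[i + 1] = 0; normals[i + 2] = 1;   // +Z normal
}

var positionBuffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer);
gl.bufferData(gl.ARRAY_BUFFER, positions, gl.STATIC_DRAW);
// ...same for the normals, then point the shader's attributes at the buffers.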