I'm drawing a small number of fairly large meshes (terrain built from SRTM data, in fact) in an Android app, and I would like it to go faster (or at least run acceptably on slower devices). Currently I'm using one buffer of floats for the vertex positions and one buffer of bytes for the normals.
Would combining the two buffers into one interleaved buffer improve performance? I've found a few posts here and in blogs claiming that a single buffer will be better, but with little hard evidence.
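For reference, the single-buffer version I have in mind would interleave the position floats and normal bytes per vertex, roughly like this (only a sketch; vertexCount, posX/posY/posZ, normX/normY/normZ and the two attribute handles are placeholder names):

// uses java.nio.ByteBuffer, java.nio.ByteOrder, android.opengl.GLES20
// One interleaved record per vertex: 3 position floats (12 bytes)
// + 3 normal bytes + 1 pad byte = 16 bytes.
final int STRIDE = 16;
ByteBuffer interleaved = ByteBuffer.allocateDirect(vertexCount * STRIDE)
        .order(ByteOrder.nativeOrder());
for (int i = 0; i < vertexCount; i++) {
    interleaved.putFloat(posX[i]).putFloat(posY[i]).putFloat(posZ[i]); // position
    interleaved.put(normX[i]).put(normY[i]).put(normZ[i]).put((byte) 0); // normal + padding
}

// At draw time both attributes point into the same buffer, using the stride.
interleaved.position(0);
GLES20.glVertexAttribPointer(aVposHandle, 3, GLES20.GL_FLOAT, false, STRIDE, interleaved);
interleaved.position(12); // normals start 12 bytes into each vertex record
// 'true' maps signed bytes to [-1, 1]; adjust to match whatever the current setup uses.
GLES20.glVertexAttribPointer(aNormalHandle, 3, GLES20.GL_BYTE, true, STRIDE, interleaved);
GLES20.glEnableVertexAttribArray(aVposHandle);
GLES20.glEnableVertexAttribArray(aNormalHandle);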
The test case uses 6 separate meshes, each with 65k vertices and 128k triangles. (I'm using glDrawElements, since each vertex is shared by up to 8 triangles.)
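The per-mesh draw call is essentially the following (again a sketch; triangleCount and triangleIndices are placeholders, and this is the client-side buffer form rather than a VBO):

// uses java.nio.ByteBuffer, java.nio.ByteOrder, java.nio.ShortBuffer, android.opengl.GLES20
// 128k triangles -> 3 indices each; unsigned shorts are enough while the
// per-mesh vertex count stays at or below 65,536.
ShortBuffer indices = ByteBuffer.allocateDirect(triangleCount * 3 * 2)
        .order(ByteOrder.nativeOrder())
        .asShortBuffer();
indices.put(triangleIndices).position(0);

// One call per mesh, after the attribute pointers are set up.
GLES20.glDrawElements(GLES20.GL_TRIANGLES, triangleCount * 3, GLES20.GL_UNSIGNED_SHORT, indices);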
The colors (so far) are calculated from the point heights in the vertex shader, so I don't need to pass in color info as attribute data.
This is all Java code running in a standard Android VM.
The fragment shader just passes the colour through:

precision mediump float;
varying vec4 v_Colour;

void main() {
    gl_FragColor = v_Colour;
}
The vertex shader is:

uniform mat4 u_MVPMatrix;
uniform mat4 u_MVMatrix;
uniform float u_many[8];

// Indices into u_many: the vector to the light, then the ambient factor.
const int inflightx = 0;
const int inflighty = 1;
const int inflightz = 2;
const int ambfactor = 3;

const vec3 basecolour1 = vec3(28.0 / 255.0, 188.0 / 255.0, 108.0 / 255.0);
const vec3 basecolour2 = vec3(150.0 / 255.0, 75.0 / 255.0, 0.0);
const vec3 basecolour3 = vec3(0.85, 0.85, 0.85);

attribute vec4 a_vpos;
attribute vec3 a_vertexnormal;

varying vec4 v_Colour;

void main() {
    // Transform the normal into eye space and renormalise it.
    vec3 eyenormal = normalize(vec3(u_MVMatrix * vec4(a_vertexnormal, 0.0)));

    // Ramp the base colour by height: green to brown below 100, brown to grey above.
    vec3 basecolour;
    if (a_vpos.z < 100.0) {
        basecolour = basecolour1 + ((basecolour2 - basecolour1) * a_vpos.z / 100.0);
    } else {
        basecolour = basecolour2 + ((basecolour3 - basecolour2) * (a_vpos.z - 100.0) / 500.0);
    }

    // Simple diffuse plus ambient lighting, evaluated per vertex.
    v_Colour = vec4((dot(eyenormal, vec3(u_many[inflightx], u_many[inflighty], u_many[inflightz])) + u_many[ambfactor]) * basecolour, 1.0);
    gl_Position = u_MVPMatrix * a_vpos;
}
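For completeness, u_many is filled from Java each frame along these lines (handle and variable names are placeholders; the light vector is presumably already in eye space, since the shader dots it with the eye-space normal):

// uses android.opengl.GLES20
// Light vector and ambient factor go into the first four slots of u_many;
// the remaining slots are spare.
float[] many = new float[8];
many[0] = lightX;
many[1] = lightY;
many[2] = lightZ;
many[3] = ambient;
int uManyHandle = GLES20.glGetUniformLocation(program, "u_many");
GLES20.glUniform1fv(uManyHandle, 8, many, 0);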