I am using a WebGL float texture to store matrices. Each row holds N matrices.
In the shader, I reconstruct these matrices from the texture, given the size of each vector and of each texture row relative to the full texture, which I pass in as uniforms.
For instance, if I have 10 matrices per row, and 10 rows, the shader gets:
u_vector_size = 1 / (10 * 4)
u_row_size = 1 / 10
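In other words:

10 matrices per row * 4 pixels per matrix = 40 pixels per row
u_vector_size = 1 / 40 = 0.025 (the width of one pixel, in texture coordinates)
u_row_size = 1 / 10 = 0.1 (the height of one row, in texture coordinates)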
With that information, I fetch 4 pixels from the texture:
uniform sampler2D u_boneMap;
uniform float u_vector_size; // width of one pixel, relative to the texture width
uniform float u_row_size;    // height of one row, relative to the texture height

mat4 boneAtIndex(float column, float row) {
    // u_vector_size * 4.0 is the width of one matrix (4 pixels), so this turns
    // a matrix index into a texture x coordinate, and a row index into a y coordinate.
    column *= u_vector_size * 4.0;
    row *= u_row_size;

    // Read the 4 columns of the matrix from 4 adjacent pixels.
    return mat4(texture2D(u_boneMap, vec2(column, row)),
                texture2D(u_boneMap, vec2(column + u_vector_size, row)),
                texture2D(u_boneMap, vec2(column + u_vector_size * 2.0, row)),
                texture2D(u_boneMap, vec2(column + u_vector_size * 3.0, row)));
}
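For context, here is a simplified sketch of how the function gets used in a vertex shader; the attribute and uniform names are made up for illustration, not my real ones:

attribute vec3 a_position;
attribute float a_bone;      // illustrative: index of the bone matrix within its row
uniform float u_row;         // illustrative: which texture row holds this instance's matrices
uniform mat4 u_mvp;          // illustrative: combined model-view-projection matrix

void main() {
    mat4 bone = boneAtIndex(a_bone, u_row);
    gl_Position = u_mvp * bone * vec4(a_position, 1.0);
}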
The problem? This doesn't work under some drivers. I mostly see it on Macs, but it has also happened on Windows: the fetches return wrong data and the rendering falls apart.
I am assuming this is due to floating-point errors, mainly because the results change depending on whether the texture has power-of-two dimensions or not. The data is exactly the same; only the size of each vector and row relative to the full texture size changes.
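To make that concrete with the example numbers above (my own back-of-the-envelope arithmetic, not something I measured on a specific driver): for matrix index 1, column = 1 * (u_vector_size * 4) = 0.1, and with 40 pixels per row, pixel 3 covers [0.075, 0.100) while pixel 4 covers [0.100, 0.125). The sample for that matrix's first column lands exactly on the boundary between two pixels, so a tiny rounding difference decides which one nearest-neighbour sampling picks.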
Normally I would use a texture buffer, but this is WebGL, and WebGL only supports regular textures.
Is there any way to make this kind of matrix fetching from a texture reliable and consistent?