I'm using a texture array to render Minecraft-style voxel terrain. It's working fantastically, but I noticed recently that GL_MAX_ARRAY_TEXTURE_LAYERS is a lot smaller than GL_MAX_TEXTURE_SIZE.
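For reference, this is how I'm comparing the two limits at runtime; nothing exotic, just glGetIntegerv (it assumes a current GL context and a loader like GLAD/GLEW is already set up):

#include <stdio.h>

GLint maxLayers = 0, maxSize = 0;
glGetIntegerv(GL_MAX_ARRAY_TEXTURE_LAYERS, &maxLayers);
glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxSize);
printf("GL_MAX_ARRAY_TEXTURE_LAYERS = %d, GL_MAX_TEXTURE_SIZE = %d\n",
       maxLayers, maxSize);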
My textures are very small (8x8), but I need to be able to render from an array of hundreds to thousands of them; I just need GL_MAX_ARRAY_TEXTURE_LAYERS to be larger.
OpenGL 4.5 requires GL_MAX_ARRAY_TEXTURE_LAYERS to be at least 2048, which might suffice, but my application targets OpenGL 3.3, which only guarantees 256.
I'm drawing blanks trying to figure out a prudent workaround for this limitation; dividing up the terrain rendering based on the maximum number of supported texture layers does not sound trivial at all to me (see the sketch below for the kind of thing I mean).
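The only approach I can picture is bucketing chunk geometry by which 256-layer range its textures fall into, then binding a different array texture per bucket and issuing one draw call each. Something like this (purely hypothetical; bucketTextures and drawBucket are made-up names):

#define LAYERS_PER_ARRAY 256

for (int b = 0; b < numBuckets; ++b) {
    /* each bucket's geometry only references layers in
       [b * LAYERS_PER_ARRAY, (b + 1) * LAYERS_PER_ARRAY) */
    glBindTexture(GL_TEXTURE_2D_ARRAY, bucketTextures[b]);
    drawBucket(b);  /* made-up helper: draws that bucket's meshes */
}

Splitting meshes along those boundaries is exactly the part that sounds painful.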
I looked into whether ARB_sparse_texture could help, but GL_MAX_SPARSE_ARRAY_TEXTURE_LAYERS_ARB is the same as GL_MAX_ARRAY_TEXTURE_LAYERS; that extension is a workaround for VRAM usage rather than for the layer limit.
Can I just have my GLSL shader sample from an array of sampler2DArray? GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS has to be at least 80, so at least 80 * 256 = 20480 layers (though only GL_MAX_TEXTURE_IMAGE_UNITS of those, at least 16, are usable by the fragment stage, which still gives 16 * 256 = 4096), and that would be enough layers for my purposes. So, in theory, could I do something like this?
#version 330 core

const int MAXLAYERS = 256;
const int NUMARRAYS = 8; // made-up count; 8 * 256 = 2048 layers

in vec3 texCoord;
out vec4 FragColor;

uniform sampler2DArray tex[NUMARRAYS];

void main()
{
    // split the global layer index into (which array, layer within it)
    int layer = int(texCoord.z + 0.5);
    int arrayIdx = layer / MAXLAYERS;
    float arrayOffset = float(layer % MAXLAYERS);
    FragColor = texture(tex[arrayIdx],
                        vec3(texCoord.x, texCoord.y, arrayOffset));
}
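On the C side, I assume the setup would be to bind one array texture per texture unit and point the sampler array at units 0..N-1, something like this (untested sketch; arrayTextures and NUMARRAYS are my own names):

GLint units[NUMARRAYS];
for (int i = 0; i < NUMARRAYS; ++i) {
    glActiveTexture(GL_TEXTURE0 + i);
    glBindTexture(GL_TEXTURE_2D_ARRAY, arrayTextures[i]);
    units[i] = i;
}
/* a sampler array uniform is set with glUniform1iv */
glUniform1iv(glGetUniformLocation(program, "tex"), NUMARRAYS, units);

Is there anything wrong with this approach, or is there a better way to get past the layer limit on GL 3.3 hardware?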