I haven't dealt with the specific problem you've faced, but I've found texture pools to be immensely useful in OpenGL for getting efficient results without much up-front planning.
In my case the problem was that I couldn't use the same texture as both an input to a deferred shader and the render target for its output. Yet I often wanted to do just that:
// Make the texture blurry.
blur(texture);
Yet instead I was having to create 11 different textures at varying resolutions and swap between them as inputs and outputs for the horizontal/vertical blur shaders with FBOs just to get a decent-looking blur. I never liked GPU programming very much, because some of the most complex state management I've ever encountered lives there. It felt incredibly wrong that I needed to go to the drawing board just to figure out how to minimize the number of textures allocated, all because of the fundamental requirement that a texture a shader reads from cannot also be the texture it writes to.
So I created a texture pool and, OMG, it simplified things so much! It let me create temporary texture objects left and right without worrying about it, because destroying a texture object doesn't actually call glDeleteTextures; it simply returns the texture to the pool. So I was finally able to just do:
blur(texture);
... as I wanted all along. And funnily enough, the more I used the pool, the higher the frame rates went. I guess even with all the thought I had put into minimizing the number of textures allocated, I was still allocating more than I needed in ways the pool eliminated. (Note that the actual real-world example does a whole lot more than blurs, including DOF, bloom, hipass, lowpass, CMAA, etc., and the GLSL code is actually generated on the fly from a visual programming language users can use to create new shaders.)
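To give a concrete feel for it, here's a hypothetical sketch of what a single blur pass can look like once the pool exists (the pool interface is shown further below; render_pass and the two shader names are made-up stand-ins for however you run a fullscreen pass):

// Hypothetical sketch: one blur pass ping-ponging through a pooled temporary.
void blur(GLuint& texture, GlTexturePool& pool, const GlTextureDesc& desc)
{
    // Grab a scratch texture matching the input's description.
    GLuint scratch = pool.allocate(desc);

    // Blur horizontally into the scratch texture, then vertically back into
    // the original, so no pass ever reads and writes the same texture.
    render_pass(horizontal_blur, /*input=*/texture, /*output=*/scratch);
    render_pass(vertical_blur, /*input=*/scratch, /*output=*/texture);

    // "Destroying" the scratch texture just hands it back to the pool.
    pool.free(desc, scratch);
}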
So I really recommend starting by exploring that idea. It sounds like it would be helpful for your problem. In my case I used this:
struct GlTextureDesc
{
...
};
... and it's a pretty hefty structure given how many texture parameters there are to specify (pixel format, number of color components, LOD level, width, height, etc.).
Yet the structure is equality-comparable and hashable, and it ends up being used as a key in a hash table (like unordered_multimap), with the actual texture handle as the associated value.
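For illustration, here's a minimal sketch of what such a description might look like. The exact fields are assumptions (the real structure carries many more parameters); the point is only that it compares and hashes:

#include <cstddef>    // size_t
#include <functional> // std::hash

// Illustrative subset of the fields; extend with whatever glTexImage2D needs.
struct GlTextureDesc
{
    GLint internal_format;  // e.g. GL_RGBA8
    GLsizei width;
    GLsizei height;
    GLenum format;          // e.g. GL_RGBA
    GLenum type;            // e.g. GL_UNSIGNED_BYTE

    bool operator==(const GlTextureDesc& other) const
    {
        return internal_format == other.internal_format &&
               width == other.width && height == other.height &&
               format == other.format && type == other.type;
    }
};

// Any reasonable field-combining hash will do.
struct GlTextureDescHash
{
    size_t operator()(const GlTextureDesc& d) const
    {
        size_t h = std::hash<GLint>()(d.internal_format);
        h = h * 31 + std::hash<GLsizei>()(d.width);
        h = h * 31 + std::hash<GLsizei>()(d.height);
        h = h * 31 + std::hash<GLenum>()(d.format);
        h = h * 31 + std::hash<GLenum>()(d.type);
        return h;
    }
};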
That allows us to then do this:
// Provides a pool of textures. This allows us to conveniently rapidly create,
// and destroy texture objects without allocating and freeing an excessive number
// of textures.
class GlTexturePool
{
public:
// Creates an empty pool.
GlTexturePool();
// Cleans up any textures which haven't been accessed in a while.
void cleanup();
// Allocates a texture with the specified properties, retrieving an existing
// one from the pool if available. The function returns a handle to the
// allocated texture.
GLuint allocate(const GlTextureDesc& desc);
// Returns the texture with the specified key and handle to the pool.
void free(const GlTextureDesc& desc, GLuint texture);
private:
...
};
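The internals can stay simple too. Here's a rough sketch under the assumption that the private section holds an unordered_multimap keyed on the description (the Entry type and member names are made up; sampler parameters and error handling elided):

// Assumes #include <chrono> and <unordered_map>. Hypothetical private members:
//
//   struct Entry
//   {
//       GLuint handle;
//       std::chrono::steady_clock::time_point returned;
//   };
//   std::unordered_multimap<GlTextureDesc, Entry, GlTextureDescHash> pool_;

GLuint GlTexturePool::allocate(const GlTextureDesc& desc)
{
    // Reuse a pooled texture with a matching description if one exists.
    auto it = pool_.find(desc);
    if (it != pool_.end())
    {
        GLuint texture = it->second.handle;
        pool_.erase(it);
        return texture;
    }

    // Otherwise allocate a brand new one.
    GLuint texture = 0;
    glGenTextures(1, &texture);
    glBindTexture(GL_TEXTURE_2D, texture);
    glTexImage2D(GL_TEXTURE_2D, 0, desc.internal_format, desc.width,
                 desc.height, 0, desc.format, desc.type, nullptr);
    return texture;
}

void GlTexturePool::free(const GlTextureDesc& desc, GLuint texture)
{
    // Don't delete; just time-stamp the texture and return it to the pool.
    pool_.emplace(desc, Entry{texture, std::chrono::steady_clock::now()});
}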
At which point we can create temporary texture objects left and right without worrying about excessive calls to glTexImage2D and glDeleteTextures. I found it enormously helpful.
Finally of note is that cleanup function above. When I store textures in the hash table, I put a time stamp on them (using system real time). Periodically I call this cleanup function, which scans through the textures in the hash table and checks their time stamps. If a texture has been sitting idle in the pool past a certain period (say, 8 seconds), I call glDeleteTextures and remove it from the pool. I use a separate thread along with a condition variable to build up a list of textures to remove the next time a valid context is available, by periodically scanning the hash table; but if your application is all single-threaded, you might just invoke this cleanup function every few seconds in your main loop.
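For the single-threaded case, that sweep is just a timed loop over the table (again assuming the hypothetical Entry layout from the sketch above; I use a monotonic clock here, though the idea is the same with real time):

void GlTexturePool::cleanup()
{
    const auto now = std::chrono::steady_clock::now();
    for (auto it = pool_.begin(); it != pool_.end(); )
    {
        // Delete any texture that has sat idle in the pool too long.
        if (now - it->second.returned > std::chrono::seconds(8))
        {
            glDeleteTextures(1, &it->second.handle);
            it = pool_.erase(it);
        }
        else
            ++it;
    }
}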
That said, I work in VFX which doesn't have quite as tight realtime requirements as, say, AAA games. There's more of a focus on offline rendering in my field and I'm far from a GPU wizard. There might be better ways to tackle this problem. However, I found it enormously helpful to start with this texture pool and I think it might be helpful in your case as well. And it's fairly trivial to implement (just took me half an hour or so).
This could still end up allocating and deleting lots and lots of textures if the sizes, formats, and parameters you request are all over the place. In that case it might help to unify things a bit: at least use POT (power-of-two) sizes and settle on a minimal set of pixel formats. In my case that wasn't much of a problem, since I only use one pixel format and the majority of the texture temporaries I want to create are exactly the size of the viewport scaled up to the ceiling POT.
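For what it's worth, rounding a viewport dimension up to the ceiling POT is a tiny helper:

// Round a dimension up to the next power of two (returns 1 for 0).
GLsizei ceil_pot(GLsizei x)
{
    GLsizei p = 1;
    while (p < x)
        p <<= 1;
    return p;
}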
As for FBOs, I'm not sure how they help your immediate problem with excessive texture allocation/freeing either. I use them primarily for deferred shading: rendering geometry in multiple passes, then applying post-processing effects like DOF in a compositing style to the 2D textures that result. FBOs are the natural tool for that, but I can't think of how they directly reduce the number of textures you have to allocate/deallocate, unless you can just use one big texture with an FBO and render multiple input textures into an offscreen output texture. In that case it wouldn't be the FBO helping directly so much as the ability to create one huge texture whose sections you use as input/output instead of many smaller ones.