Summary
I'm currently creating a 3D tiled hex board in Three.js. For artistic and functional reasons, each tile is its own mesh, built from a basic (unchanging) geometry and a material carrying a generated set of maps: Displacement, Diffuse, and Normal.
I started noticing a reduction in FPS as I added more texture maps, which prompted me to look into the source of the problem. I have a 15x15 game board, meaning there are 225 individual meshes being rendered every frame. Each mesh, at the time, consisted of 215 faces due to poor design, resulting in 48,375 faces in the scene.
Thinking it would cure the performance troubles, I redesigned the mesh to contain only 30 faces, totaling 6,750 faces across the scene: an astounding improvement, on paper. I was disappointed to find that this 86% reduction in faces had almost no effect on performance.
So, I resolved to find exactly what was causing the drop in performance. I set up an abstracted test environment and used a grid of planes, each subdivided 3x10 to give it 30 faces, just like my own tile model. I then tried different grid sizes (mesh counts) and materials of differing complexity, using a small harness along the lines of the sketch below.
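A rough reconstruction of that harness (the function name and layout values here are placeholders, not my actual code; it assumes the usual THREE scene/renderer boilerplate):

```javascript
// Build an N x N grid of planes sharing one material. The segment counts
// control the per-mesh face count (3x10 segments matches my 30-face tile).
function makeTestGrid(scene, gridSize, segmentsX, segmentsY, material) {
    for (var x = 0; x < gridSize; x++) {
        for (var y = 0; y < gridSize; y++) {
            var geometry = new THREE.PlaneGeometry(1, 1, segmentsX, segmentsY);
            var mesh = new THREE.Mesh(geometry, material);
            mesh.rotation.x = -Math.PI / 2;          // lay the plane flat
            mesh.position.set(x * 1.1, 0, y * 1.1);  // spread the grid out
            scene.add(mesh);
        }
    }
}

// Materials test:  makeTestGrid(scene, 15, 3, 10, someMaterial); etc.
// Face-count test: keep the grid at 15x15 and raise the segment counts instead.
```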
Materials Test
// /------------------------------------------------------------------\
// | Material            |  15x15  |  20x20  |  25x25  | Material hit |
// |---------------------|---------|---------|---------|--------------|
// | Flat Lambert Color  |  60 FPS |  48 FPS |  30 FPS |      -0%     |
// | Lambert Diffuse     |  57 FPS |  41 FPS |  27 FPS |     -10%     |
// | Blank Shader        |  51 FPS |  37 FPS |  24 FPS |     -20%     |
// | Full Shader (-H)    |  49 FPS |  32 FPS |  21 FPS |     -30%     |
// | Full Shader (+H)    |  42 FPS |  28 FPS |  19 FPS |     -37%     |
// |---------------------|---------|---------|---------|--------------|
// | Mesh count hit      |   -0%   |  -33%   |  -55%   |              |
// \------------------------------------------------------------------/
Rows (the materials themselves are sketched in code after the Columns list):
- MeshLambertMaterial({ color }) was my baseline
- MeshLambertMaterial({ map }) suffered roughly a 10% performance hit
- ShaderMaterial() using default settings suffered roughly a 20% performance hit
- ShaderMaterial() using a Diffuse map suffered roughly a 30% performance hit
- ShaderMaterial() using Diffuse + Normal + Displacement maps suffered a 37% performance hit
Columns:
- 15x15 (225 Meshes) was my baseline
- 20x20 (400 Meshes) suffered a 33% performance hit
- 25x25 (625 Meshes) suffered a 55% performance hit
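For reference, the row materials were built roughly as follows. This is a sketch only: diffuseTexture stands in for one of my generated maps, and the shader strings are minimal placeholders rather than my real shaders (the full "+H" version additionally binds normal and displacement maps as uniforms).

```javascript
// Baseline: flat-colored Lambert.
var flatLambert = new THREE.MeshLambertMaterial({ color: 0xff0000 });

// Lambert with a diffuse map (diffuseTexture is one of my generated maps).
var lambertDiffuse = new THREE.MeshLambertMaterial({ map: diffuseTexture });

// "Blank Shader": ShaderMaterial with Three.js's default vertex/fragment shaders.
var blankShader = new THREE.ShaderMaterial();

// "Full Shader (-H)": custom shader sampling a diffuse map. Older Three.js
// releases also expect a `type: 't'` field on texture uniforms.
var fullShader = new THREE.ShaderMaterial({
    uniforms: { tDiffuse: { value: diffuseTexture } },
    vertexShader: [
        'varying vec2 vUv;',
        'void main() {',
        '    vUv = uv;',
        '    gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);',
        '}'
    ].join('\n'),
    fragmentShader: [
        'uniform sampler2D tDiffuse;',
        'varying vec2 vUv;',
        'void main() {',
        '    gl_FragColor = texture2D(tDiffuse, vUv);',
        '}'
    ].join('\n')
});
```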
Synopsis
So I learned that there is a significant hit coming from the shaders I'm using and the maps I'm applying. However, a much larger hit comes from the sheer number of "things". I wasn't sure whether that meant faces, meshes, or something else, so I ran another test. Using my baseline material (MeshLambertMaterial({ color: 'red' })), I tested two variables: the number of faces per mesh and the number of meshes. Here's what I found:
Face/Mesh Count Test
// 15x15 (225)  Meshes @  30 Faces =   6,750 Faces = 60 FPS
// 20x20 (400)  Meshes @  30 Faces =  12,000 Faces = 48 FPS
// 25x25 (625)  Meshes @  30 Faces =  18,750 Faces = 30 FPS
// 30x30 (900)  Meshes @  30 Faces =  27,000 Faces = 25 FPS
// 40x40 (1600) Meshes @  30 Faces =  48,000 Faces = 15 FPS
// 50x50 (2500) Meshes @  30 Faces =  75,000 Faces = 10 FPS
//
// 15x15 (225)  Meshes @ 100 Faces =  22,500 Faces = 60 FPS
// 15x15 (225)  Meshes @ 400 Faces =  90,000 Faces = 60 FPS
// 15x15 (225)  Meshes @ 900 Faces = 202,500 Faces = 60 FPS
Synopsis
This seems to show quite conclusively that the number of faces has little, if any, effect on the frame rate. Rather, the number of individual meshes being drawn to the scene creates virtually all of the performance drag. I'm not sure exactly what causes this lag; I would imagine there is a large amount of overhead per mesh. Perhaps there is a way to eliminate some of this overhead?
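One way to see that overhead directly is the renderer's built-in stats: with one material per tile, each mesh costs its own draw call, so the call count tracks the mesh count while the triangle count barely matters. A minimal sketch, assuming renderer is the THREE.WebGLRenderer used in the render loop:

```javascript
// Log per-frame render stats; call right after renderer.render(scene, camera).
function logRenderStats(renderer) {
    var info = renderer.info.render;
    // `triangles` is named `faces` in older Three.js releases.
    console.log('draw calls:', info.calls, 'triangles:', info.triangles);
}
```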
Considerations
I have already considered merging my geometries. This does almost completely eliminate the drop in frame rate. However, as I stated at the beginning of this post, I need each tile to be individually translatable, rotatable, scalable, and otherwise modifiable. To my knowledge, this is not possible with merged geometries.
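For completeness, a minimal sketch of what that merge looks like, assuming the BufferGeometryUtils addon from current Three.js builds (the helper is named mergeBufferGeometries, and applyMatrix4 was applyMatrix, in older releases):

```javascript
import * as THREE from 'three';
import * as BufferGeometryUtils from 'three/addons/utils/BufferGeometryUtils.js';

// Bake every tile's world transform into one static geometry: a single mesh
// and a single draw call, but individual tiles can no longer be transformed
// afterwards without rebuilding the whole thing.
function buildMergedBoard(tileMeshes, material) {
    var pieces = tileMeshes.map(function (tile) {
        tile.updateMatrixWorld(true);
        return tile.geometry.clone().applyMatrix4(tile.matrixWorld);
    });
    var merged = BufferGeometryUtils.mergeGeometries(pieces); // mergeBufferGeometries in older releases
    return new THREE.Mesh(merged, material);
}
```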
I have also considered defaulting to a merged geometry and recreating the geometries/scenes when a function that alters a tile is called. However, two problems exist with this approach:
- With 200-400 individual meshes on the board to merge, rebuilding could take upwards of 1000ms and would cause a noticeable visual stutter.
- Large effects, such as one that "shakes" or "wobbles" all tiles simultaneously (sketched below), would require constant rebuilding, making them just as laggy as the board is now, so there would be no reason to implement them.
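To illustrate, that kind of effect is just a cheap per-frame transform when every tile is its own mesh (tileMeshes and the wobble parameters below are purely illustrative), whereas a merged board would need to be rebuilt every frame:

```javascript
// Illustrative "wobble": nudge each tile's height every frame.
// Trivial with individual meshes; a merged geometry would have to be
// re-merged every frame to do the same thing.
function wobbleTiles(tileMeshes, time) {
    for (var i = 0; i < tileMeshes.length; i++) {
        tileMeshes[i].position.y = Math.sin(time * 0.002 + i) * 0.1;
    }
}
```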
I hope to find a solution that eliminates this performance hit rather than one that merely avoids it.
Question
Which brings me to my question: Is there a more efficient way to render high quantities of individual meshes?