This is one of the most common issues in rendering with transparency. Many useful kinds of alpha blending are noncommutative: the order in which you draw things matters.
When drawing opaque surfaces, we use the z-buffer to resolve which fragment is frontmost on a pixel-by-pixel basis. Enable depth write/read, draw your triangles, and let the closest fragment win. This works regardless of drawing order.
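For instance, the opaque pass in Metal typically uses a depth-stencil state like this (a minimal sketch, not SceneKit's actual internals):

```swift
import Metal

let device = MTLCreateSystemDefaultDevice()!

// Opaque pass: test against the depth buffer *and* write to it,
// so the nearest fragment wins no matter what order triangles arrive in.
let opaqueDesc = MTLDepthStencilDescriptor()
opaqueDesc.depthCompareFunction = .less  // keep fragments nearer than the stored depth
opaqueDesc.isDepthWriteEnabled = true    // record the winner's depth for later tests
let opaqueDepthState = device.makeDepthStencilState(descriptor: opaqueDesc)
```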
When the surfaces are translucent, we can't naively expect the z-buffer to automatically produce the correct result; it can only hold one value per pixel at a time. If we enable depth write/read and draw in an arbitrary order, we have a good chance of rejecting fragments that should have contributed to the picture. That's the phenomenon illustrated on the left of your image above.
On the other hand, if we don't read the depth buffer, we have a high likelihood of incorrectly drawing on top of opaque geometry that's already been rendered, making translucent surfaces uncannily "float" in front of objects they should be occluded by.
We resolve these artifacts by first drawing opaque geometry with depth write/read enabled, then drawing translucent surfaces with depth write disabled. Crucially, though, unless you're using a more advanced technique like order-independent transparency (OIT, which is not a silver bullet), you must sort your geometry to get correct compositing. This, again, is because compositing is not generally commutative.
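Concretely, the translucent pass keeps the depth *test* on (so opaque geometry still occludes translucent surfaces) but turns depth *writes* off. In SceneKit terms, that looks roughly like this (a sketch; `hairMaterial` stands in for whatever material your translucent surfaces use):

```swift
import SceneKit

let hairMaterial = SCNMaterial()
hairMaterial.blendMode = .alpha            // standard "over" compositing
hairMaterial.readsFromDepthBuffer = true   // opaque geometry still occludes us
hairMaterial.writesToDepthBuffer = false   // translucent fragments don't reject each other
```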
In 2017, SceneKit introduced "transparency modes" to make rendering translucent objects easier, especially convex objects, whose depth complexity tends to be low. Unfortunately, as mentioned at 50:07 in this video introducing the feature, individual polygons are not sorted when rendering, so transparency modes are not a complete solution.
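For a convex object, a transparency mode alone may be all you need. As a sketch (`glassMaterial` is a placeholder name):

```swift
import SceneKit

// For convex shapes, .dualLayer renders back-facing fragments before
// front-facing ones, so the two layers composite correctly without
// any per-polygon sorting.
let glassMaterial = SCNMaterial()
glassMaterial.transparency = 0.5
glassMaterial.transparencyMode = .dualLayer
```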
I suspect the situation is much the same with RealityKit. Sorting polygons every time the camera moves is costly, and is not something you want to do for every translucent object in every scenario, so these engines don't tend to support it.
One way to get perfect rendering in this tricky case is to:

1. Ensure your geometry is not self-intersecting (if it is, it will be impossible to sort it for correct compositing).
2. Put each hair card in its own node (gross, I know).
3. Sort your geometry so it is rendered back-to-front, using double-sided materials and the "single layer" transparency mode.

The sorting step will likely need to be done on the CPU, and the resulting order can then be conveyed to SceneKit by setting the `renderingOrder` property of the nodes comprising the translucent object, as sketched below.
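Here's a minimal sketch of that per-frame sort, assuming the hair-card nodes are collected in an array (`hairCardNodes` is a hypothetical name) and this object is installed as the view's `SCNSceneRendererDelegate`:

```swift
import SceneKit
import simd

final class HairCardSorter: NSObject, SCNSceneRendererDelegate {
    // One node per hair card, as described in step 2 above.
    var hairCardNodes: [SCNNode] = []

    func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
        guard let pov = renderer.pointOfView else { return }
        let cameraPosition = pov.simdWorldPosition

        // Sort back-to-front: the card farthest from the camera draws first.
        let sorted = hairCardNodes.sorted {
            simd_length_squared($0.simdWorldPosition - cameraPosition) >
            simd_length_squared($1.simdWorldPosition - cameraPosition)
        }

        // Convey the draw order to SceneKit: nodes with higher
        // renderingOrder values are drawn later, i.e. closest cards last.
        for (index, node) in sorted.enumerated() {
            node.renderingOrder = index
        }
    }
}
```

Note that sorting by each node's origin is an approximation; very long or interpenetrating cards may need a finer-grained sort key, which is part of why step 1 matters.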
Alternatively, you can use the `SCNNodeRendererDelegate` API to intercept the drawing of the geometry and draw it yourself with Metal. This opens the door to rendering with OIT, and lets you draw more efficiently by using a single node to represent the whole mesh. You might even be able to move the sort step to the GPU through clever use of the `SCNSceneRendererDelegate` and `SCNGeometrySource` APIs, but that's beyond the scope of this answer.
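To make the delegate route more concrete, a bare-bones sketch might look like this (`pipelineState` and `vertexBuffer` are placeholder Metal resources you'd create during setup):

```swift
import SceneKit
import Metal

final class HairRenderer: NSObject, SCNNodeRendererDelegate {
    var pipelineState: MTLRenderPipelineState!
    var vertexBuffer: MTLBuffer!
    var vertexCount = 0

    func renderNode(_ node: SCNNode, renderer: SCNRenderer, arguments: [String: Any]) {
        // SceneKit exposes its current Metal encoder; encode custom draws into it.
        guard let encoder = renderer.currentRenderCommandEncoder else { return }
        encoder.setRenderPipelineState(pipelineState)
        encoder.setVertexBuffer(vertexBuffer, offset: 0, index: 0)
        // A real implementation would also bind view/projection transforms
        // and textures before drawing (and could implement OIT here).
        encoder.drawPrimitives(type: .triangle, vertexStart: 0, vertexCount: vertexCount)
    }
}

// Usage: hairNode.rendererDelegate = hairRenderer
// (keep a strong reference to the delegate elsewhere).
```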