
The SceneKit rendering loop is well documented here: https://developer.apple.com/documentation/scenekit/scnscenerendererdelegate and here: https://www.raywenderlich.com/1257-scene-kit-tutorial-with-swift-part-4-render-loop. However, neither of these documents explains what SceneKit does between the calls to didApplyConstraints and willRenderScene.

I've modified my SCNSceneRendererDelegate to measure the time between each call, and I can see that around 5ms elapses between those two calls. It isn't running my code in that time, but presumably some aspect of the way I've set up my scene is creating work that has to be done there. Any insight into what SceneKit is doing would be very helpful.
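
For reference, here's roughly how I'm measuring the gap (a minimal sketch, not my exact code; the delegate callbacks and CACurrentMediaTime() are real API, but the class and property names are mine):

import SceneKit
import QuartzCore

class TimingDelegate: NSObject, SCNSceneRendererDelegate {
    private var constraintsTime: CFTimeInterval = 0

    func renderer(_ renderer: SCNSceneRenderer, didApplyConstraintsAtTime time: TimeInterval) {
        constraintsTime = CACurrentMediaTime()
    }

    func renderer(_ renderer: SCNSceneRenderer, willRenderScene scene: SCNScene, atTime time: TimeInterval) {
        // This is the ~5ms gap the question is about.
        let gapMs = (CACurrentMediaTime() - constraintsTime) * 1000
        print("didApplyConstraints -> willRenderScene: \(gapMs) ms")
    }
}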

I am calling SceneKit myself from an MTKView's draw call (rather than using an SCNView) so that I can render the scene twice. The first render is normal; the second uses the depth buffer from the first but draws just the subset of the scene that I want to "glow" onto a separate colour buffer. That colour buffer is then scaled down, Gaussian blurred, scaled back up, and blended over the top of the first scene (all with custom Metal shaders).
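
In outline, the draw call looks like this (a sketch, not my exact code: the SCNRenderer render method is real API, but the command queue, viewport, and pass-descriptor names stand in for setup done elsewhere):

// Inside MTKView's draw(in:).
let commandBuffer = commandQueue.makeCommandBuffer()!

// Pass 1: render the full scene normally.
sceneRenderer.render(atTime: time, viewport: viewport,
                     commandBuffer: commandBuffer,
                     passDescriptor: mainPassDescriptor)

// Pass 2: render only the glow subset into a separate colour buffer,
// reusing the depth buffer from pass 1 via the pass descriptor.
sceneRenderer.render(atTime: time, viewport: viewport,
                     commandBuffer: commandBuffer,
                     passDescriptor: glowPassDescriptor)

// ... downscale, blur, upscale and composite with custom Metal shaders ...

commandBuffer.present(view.currentDrawable!)
commandBuffer.commit()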

The 5ms spent between didApplyConstraints and willRenderScene started happening when I introduced this extra rendering pass. To control which nodes appear in each pass I switch the opacity of a small number of parent nodes between 0 and 1. If I remove the code that switches opacity but keep everything else (so there are two rendering passes, but they both draw everything), the extra 5ms disappears and the overall frame rate is actually faster, even though much more rendering is happening.
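
The switching itself is trivial, done around the two render calls (glowParent and regularParent are my names for the parent nodes):

// Pass 1: hide the glow clones, show everything else.
glowParent.opacity = 0
regularParent.opacity = 1
// ... render pass 1 ...

// Pass 2: show only the glow clones.
glowParent.opacity = 1
regularParent.opacity = 0
// ... render pass 2 ...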

I'm writing in Swift, targeting macOS on a 2018 MacBook Pro.

UPDATE: mnuages has explained that changing the opacity causes SceneKit to rebuild the scene graph, and that explains part of the lost time. However, I've now discovered that my use of a custom SCNProgram for the nodes in one rendering pass also triggers a 5ms pause between didApplyConstraints and willRenderScene. Does anyone know why this might be?

Here is my code for setting up the SCNProgram and the SCNMaterial, both done once:

// Build the SCNProgram once, pointing at the Metal functions below.
let device = MTLCreateSystemDefaultDevice()!
let library = device.makeDefaultLibrary()
glowProgram = SCNProgram()
glowProgram.library = library
glowProgram.vertexFunctionName = "emissionGlowVertex"
glowProgram.fragmentFunctionName = "emissionGlowFragment"

...

let glowMaterial = SCNMaterial()
glowMaterial.program = glowProgram
// Bind the emission texture via KVC; the key "tex" matches the
// texture argument name in the fragment function below.
let emissionImageProperty = SCNMaterialProperty(contents: emissionImage)
glowMaterial.setValue(emissionImageProperty, forKey: "tex")

Here's where I apply the material to the nodes:

let nodeWithGeometryClone = nodeWithGeometry.clone()
nodeWithGeometryClone.categoryBitMask = 2
// clone() shares the geometry, so rebuild it to get an independent copy
// whose material can be replaced without affecting the original node.
let geometry = nodeWithGeometryClone.geometry!
nodeWithGeometryClone.geometry = SCNGeometry(sources: geometry.sources, elements: geometry.elements)
glowNode.addChildNode(nodeWithGeometryClone)
nodeWithGeometryClone.geometry!.firstMaterial = glowMaterial

The glow nodes are a deep clone of the regular nodes, but with an alternative SCNProgram. Here's the Metal code:

#include <metal_stdlib>
using namespace metal;
#include <SceneKit/scn_metal>

struct NodeConstants {
    float4x4 modelTransform;
    float4x4 modelViewProjectionTransform;
};

struct EmissionGlowVertexIn {
    float3 pos [[attribute(SCNVertexSemanticPosition)]];
    float2 uv [[attribute(SCNVertexSemanticTexcoord0)]];
};

struct EmissionGlowVertexOut {
    float4 pos [[position]];
    float2 uv;
};

// SceneKit binds the per-node constants automatically because the
// parameter is named scn_node; the struct fields are matched by name.
vertex EmissionGlowVertexOut emissionGlowVertex(EmissionGlowVertexIn in [[stage_in]],
                                               constant NodeConstants &scn_node [[buffer(1)]]) {
    EmissionGlowVertexOut out;
    // Bias clip-space z slightly toward the camera so this pass wins the
    // depth test against the depth buffer reused from the first render.
    out.pos = scn_node.modelViewProjectionTransform * float4(in.pos, 1) + float4(0, 0, -0.01, 0);
    out.uv = in.uv;
    return out;
}

constexpr sampler linSamp = sampler(coord::normalized, address::clamp_to_zero, filter::linear);

fragment half4 emissionGlowFragment(EmissionGlowVertexOut in [[stage_in]],
                                    texture2d<half, access::sample> tex [[texture(0)]]) {
    return tex.sample(linSamp, in.uv);
}

1 Answer


By changing the opacity of nodes you're invalidating parts of the scene graph, which can result in additional work for the renderer.

It would be interesting to see whether setting the camera's categoryBitMask is more performant (it doesn't modify the scene graph).
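
A minimal sketch of that approach, assuming the glow clones keep categoryBitMask = 2 as in the question (sceneRenderer is a placeholder for your SCNRenderer):

// The camera renders only nodes whose categoryBitMask overlaps its own,
// so switching the mask selects the pass contents without touching nodes.
let camera = sceneRenderer.pointOfView!.camera!
camera.categoryBitMask = 1   // pass 1: regular nodes (default node mask is 1)
// ... render pass 1 ...
camera.categoryBitMask = 2   // pass 2: only the glow clones (mask 2)
// ... render pass 2 ...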