I am working on an app that should render a chrome-style, reflective, sphere-like object inside a skybox (using a six-faced cube map).
I am doing this in Swift using SceneKit and have tried different approaches.
Everything is fine and perfectly reflected (see Figure 1 below) as long as I let SceneKit do all the work: in other words, using a standard SCNMaterial with metalness 1.0, roughness 0.0, and color UIColor.white (with .physicallyBased as the lighting model), attached to the firstMaterial of the node's geometry (and with a directional light in the scene).
But the goal is to use an SCNProgram instead (attached to the node's material), with its own vertex and fragment shaders, following Apple's documentation on the subject. I have a working scenario, but the reflections on the object are wrong (as you can see in Figure 2 below).
The main question is: which matrix values from scn_node or scn_frame (in the shaders.metal file) are the correct ones to use, so that the object shows the same reflection as SceneKit produces in Figure 1, but using only the SCNProgram with its shaders (and without the light)? Unfortunately, Apple provides very little information about the different matrices that are fed to the shader by the SCNProgram, which one to use for what, or any kind of examples.
Here is my current vertex shader, in which I assume I am using some wrong matrices (I left in some commented-out code to show what has already been tested; the non-commented code corresponds 1:1 to Figure 2):
vertex SimpleVertexChromeOrig myVertexChromeOrig(MyVertexInput in [[ stage_in ]],
                                                 constant SCNSceneBuffer& scn_frame [[buffer(0)]],
                                                 constant MyNodeBuffer& scn_node [[buffer(1)]])
{
    SimpleVertexChromeOrig OUT;

    OUT.position = scn_node.modelViewProjectionTransform * float4(in.position, 1.0);
    // OUT.position = scn_frame.viewProjectionTransform * float4(in.position, 1.0);

    float4 eyeSpacePosition = scn_frame.viewTransform * float4(in.position, 1.0);
    float3 eyeSpaceEyeVector = normalize(-eyeSpacePosition).xyz;

    // float3 eyeSpaceNormal = normalize(scn_frame.inverseViewTransform * float4(in.normal, 1.0)).xyz;
    float3 eyeSpaceNormal = normalize(scn_node.normalTransform * float4(in.normal, 1.0)).xyz;

    // Reflection and refraction vectors
    float3 eyeSpaceReflection = reflect(-eyeSpaceEyeVector, eyeSpaceNormal);

    OUT.worldSpaceReflection = (scn_node.inverseModelViewTransform * float4(eyeSpaceReflection, 1.0)).xyz;
    // OUT.worldSpaceReflection = (scn_node.modelViewTransform * float4(eyeSpaceReflection, 1.0)).xyz;
    // OUT.worldSpaceReflection = (scn_node.modelTransform * float4(eyeSpaceReflection, 1.0)).xyz;

    return OUT;
}
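One thing I am unsure about: in.position is in model space, but scn_frame.viewTransform maps world space to view space, and scn_node.inverseModelViewTransform maps view space back to model space (not world space), so the code above may be mixing coordinate spaces. For comparison, here is a variant I have been sketching that keeps everything in world space. It is untested against Figure 1, the function name myVertexChromeWorld is just a placeholder, and it assumes the node has no non-uniform scale (otherwise the normal would presumably need the inverse-transpose, see the commented line):

vertex SimpleVertexChromeOrig myVertexChromeWorld(MyVertexInput in [[ stage_in ]],
                                                  constant SCNSceneBuffer& scn_frame [[buffer(0)]],
                                                  constant MyNodeBuffer& scn_node [[buffer(1)]])
{
    SimpleVertexChromeOrig OUT;
    OUT.position = scn_node.modelViewProjectionTransform * float4(in.position, 1.0);

    // Vertex position and normal in world space (w = 0.0 so the normal ignores translation)
    float3 worldPosition = (scn_node.modelTransform * float4(in.position, 1.0)).xyz;
    float3 worldNormal = normalize((scn_node.modelTransform * float4(in.normal, 0.0)).xyz);
    // For non-uniform scale, the inverse-transpose of the model matrix instead:
    // float3 worldNormal = normalize((transpose(scn_node.inverseModelTransform) * float4(in.normal, 0.0)).xyz);

    // Camera position in world space: the translation column of the inverse view transform
    float3 worldCameraPosition = scn_frame.inverseViewTransform[3].xyz;

    // Direction from the camera to the vertex, reflected about the world-space normal
    float3 worldEyeDirection = normalize(worldPosition - worldCameraPosition);
    OUT.worldSpaceReflection = reflect(worldEyeDirection, worldNormal);

    return OUT;
}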
Here is the current fragment shader (a very plain cube-map lookup with a bound sampler):
fragment float4 myFragmentChromeOrig(SimpleVertexChromeOrig in [[stage_in]],
                                     texturecube<float, access::sample> cubeTexture [[texture(0)]],
                                     sampler cubeSampler [[sampler(0)]])
{
    float3 reflection = cubeTexture.sample(cubeSampler, in.worldSpaceReflection).rgb;

    float4 color;
    color.rgb = reflection;
    color.a = 1.0;

    return color;
}
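For completeness, a variant of the fragment shader that declares the sampler in the shader source itself (via Metal's constexpr sampler), so nothing has to be bound from the Swift side; myFragmentChromeWorld is again just a placeholder name:

constexpr sampler cubeSamplerInShader(filter::linear, mip_filter::linear);

fragment float4 myFragmentChromeWorld(SimpleVertexChromeOrig in [[stage_in]],
                                      texturecube<float, access::sample> cubeTexture [[texture(0)]])
{
    // Cube sampling only needs a direction; re-normalizing after interpolation keeps it well-behaved
    float3 direction = normalize(in.worldSpaceReflection);
    float3 reflection = cubeTexture.sample(cubeSamplerInShader, direction).rgb;
    return float4(reflection, 1.0);
}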
These are the matrices I get from the node buffer (provided more or less automatically by the SCNProgram); they just have to be declared in a struct in the shader file to become accessible, like so (the comments reflect my understanding of Apple's documentation):
struct MyNodeBuffer {
    float4x4 modelTransform;                      // model (node) space -> world space
    float4x4 inverseModelTransform;               // world space -> model space
    float4x4 modelViewTransform;                  // model space -> view (eye) space
    float4x4 inverseModelViewTransform;           // view space -> model space
    float4x4 normalTransform;                     // inverse transpose of modelViewTransform, for normals into view space
    float4x4 modelViewProjectionTransform;        // model space -> clip space
    float4x4 inverseModelViewProjectionTransform; // clip space -> model space
};
This is the vertex input struct:
typedef struct {
    float3 position [[ attribute(SCNVertexSemanticPosition) ]];
    float3 normal   [[ attribute(SCNVertexSemanticNormal) ]];
} MyVertexInput;
This is the struct filled by the vertex shader:
struct SimpleVertexChromeOrig
{
    float4 position [[position]];
    float3 worldSpaceReflection;
};
(The skybox is always provided through a material-contents value holding six images and is attached to sceneView.scene.background.contents.)