I have a Metal shader (written as an SCNProgram for an ARKit app) that takes a depth map of the current scene captured from smoothedSceneDepth. I would like to use the captured depth information to discard parts of my virtual objects that are behind a real-world object. However, I am having trouble getting the expected fragment depth in my shader.
Here's my basic fragment shader:
struct ColorInOut {
    float4 position [[ position ]];
    float2 depthTexCoords;
};

fragment float4 fragmentShader(
    const ColorInOut in [[ stage_in ]],
    depth2d<float, access::sample> sceneDepthTexture [[ texture(1) ]]
) {
    constexpr sampler textureSampler(mag_filter::linear, min_filter::linear);

    // Closest = 0.0m, farthest = 5.0m (LiDAR max)
    // Seems to be in meters?
    const float lidarDepth = sceneDepthTexture.sample(textureSampler, in.depthTexCoords);

    float fragDepth = // ??? somehow get z distance to the current fragment in meters ???

    // Compare the distances
    const float zMargin = 0.1;
    if (lidarDepth < fragDepth - zMargin) {
        discard_fragment();
    }

    ...
}
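One workaround I have been sketching but have not tested (everything on the vertex side below is a placeholder for whatever my real vertex function does, following the usual SCNProgram Metal pattern) is to carry the view-space position out of the vertex stage and take its negated z as the distance in meters, instead of deriving it from position.z:

#include <metal_stdlib>
#include <SceneKit/scn_metal>
using namespace metal;

// Placeholder per-node buffer, declared the way SCNProgram examples usually do
struct NodeBuffer {
    float4x4 modelTransform;
    float4x4 modelViewTransform;
    float4x4 normalTransform;
    float4x4 modelViewProjectionTransform;
};

// Placeholder vertex input
struct VertexInput {
    float3 position [[ attribute(SCNVertexSemanticPosition) ]];
};

struct ColorInOut {
    float4 position [[ position ]];
    float2 depthTexCoords;
    float3 viewPosition;   // view-space position, interpolated per fragment
};

vertex ColorInOut vertexShaderSketch(
    VertexInput in [[ stage_in ]],
    constant SCNSceneBuffer& scn_frame [[ buffer(0) ]],  // SceneKit per-frame data (unused here)
    constant NodeBuffer& scn_node [[ buffer(1) ]]
) {
    ColorInOut out;
    const float4 viewPos = scn_node.modelViewTransform * float4(in.position, 1.0);
    out.viewPosition = viewPos.xyz;
    out.position = scn_node.modelViewProjectionTransform * float4(in.position, 1.0);
    out.depthTexCoords = float2(0.0); // placeholder: however depthTexCoords is really computed
    return out;
}

// In the fragment shader the camera looks down -z in view space, so:
//     float fragDepth = -in.viewPosition.z;   // distance along the view axis, in meters

I'd still prefer to understand what position.z itself represents, though.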
My understanding was that position.z in a fragment shader should be in the range closest = 0 to farthest = 1. However, when I tried converting this back to real-world distances using the current camera's near and far planes, the results seemed off:
const float zNear = 0.001;
const float zFar = 1000;
float fragDepth = in.position.z * (zFar - zNear);
When I debugged the shader using return float4(fragDepth, 0, 0, 1);, the red channel is brightest when I am closest to the object and falls off as I back away. Even if I use fragDepth = 1 - fragDepth, the depth still seems to differ from lidarDepth.
Here's using 1 - fragDepth:
(I also tried using the mapping from this answer but wasn't able to get it working)
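For what it's worth, my understanding is that for a conventional (non-reversed) perspective projection, the [0, 1] depth value is non-linear in distance, so recovering meters would be a reciprocal mapping rather than a straight scale. A sketch, reusing the zNear/zFar guesses from above:

// Sketch: invert a standard perspective depth mapping with a [0, 1] depth range
// (d = 0 at the near plane, d = 1 at the far plane; NOT reverse-z).
float linearizeDepth(float d, float zNear, float zFar) {
    return (zNear * zFar) / (zFar - d * (zFar - zNear));
}

// e.g. float fragDepth = linearizeDepth(in.position.z, zNear, zFar);

Even so, the brightest-when-close behavior makes me wonder whether SceneKit is writing reversed depth (SCNCamera has a usesReverseZ option that I believe defaults to true on Metal), in which case the mapping above would be flipped.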
So my questions are:
1. What coordinate system is in.position.z in?
2. How can I transform in.position.z into a depth value I can compare against the captured depth information I already have? (or vice versa)

Comments:

"fragDepth = 1 - fragDepth, the depth seems to differ from lidarDepth" » how different are they? Do you have a screenshot? Have you tried using SCNSceneBuffer's inverseProjectionTransform? - mnuages

fragDepth is in meters, this should be much more gradual. The core problem is that I don't know what coordinate space in.position.z is in though, because I would not expect that 1 - fragDepth would be required at all. - Matt Bierner
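Update: to make the inverseProjectionTransform suggestion concrete, here is the rough shape of what I think is meant (untested sketch; it assumes scn_frame can be bound in the fragment stage at buffer(0) just like in the vertex stage, and that the camera uses a standard perspective projection):

fragment float4 fragmentShader(
    const ColorInOut in [[ stage_in ]],
    constant SCNSceneBuffer& scn_frame [[ buffer(0) ]],   // SceneKit per-frame data from <SceneKit/scn_metal>
    depth2d<float, access::sample> sceneDepthTexture [[ texture(1) ]]
) {
    constexpr sampler textureSampler(mag_filter::linear, min_filter::linear);
    const float lidarDepth = sceneDepthTexture.sample(textureSampler, in.depthTexCoords);

    // in.position.z is the value written to the depth buffer for this fragment.
    // Unproject it back into view space; for a perspective projection the recovered
    // view-space z depends only on the depth value, so x and y can stay at zero.
    const float4 clipPos = float4(0.0, 0.0, in.position.z, 1.0);
    const float4 viewPos = scn_frame.inverseProjectionTransform * clipPos;

    // The camera looks down -z in view space, so negate to get meters in front of the camera.
    const float fragDepth = -viewPos.z / viewPos.w;

    // Same comparison as before
    const float zMargin = 0.1;
    if (lidarDepth < fragDepth - zMargin) {
        discard_fragment();
    }

    // Debug visualization, scaled by the ~5 m LiDAR range
    return float4(fragDepth / 5.0, 0.0, 0.0, 1.0);
}

Since inverseProjectionTransform is the inverse of whatever projection SceneKit actually used, this should stay valid even if the depth range is reversed.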