RealityView postProcess effect depth texture

Hello,

Question re: iOS RealityView postProcess. I've got a working postProcess kernel and I'd like to add some depth-based effects to it. Theoretically I should be able to just do:

encoder.setTexture(context.sourceDepthTexture, index: 1)

and then in the kernel: texture2d<float, access::read> depthIn [[texture(1)]]

...

outTexture.write(depthIn.read(gid), gid);

With that, I consistently see all black rendered to the view. The postProcess shader itself works, so that's not the issue; it just doesn't seem to be receiving actual depth information.

(If I set a breakpoint at the encoder setTexture step, I can preview the color texture of the scene, but the context's depth texture looks like it's all NaN / blank.)

I've looked at the WWDC samples, but they all use ARView for the depth sample code, and ARView has a different set of configuration options than RealityView. So far I haven't found anywhere to explicitly tell RealityView to include the depth information, so I'm not sure if I'm missing something there.

It appears that there is indeed a depth texture being passed, but it looks blank.

Is there a working example somewhere that we can reference?

Answered by DTS Engineer in 865081022

I have a feeling this is a misunderstanding of the depth texture itself, which does not contain linear depth, but instead contains a non-linear representation of the scene's depth. Odds are, most of these values are close to 0, which is why you are seeing an "all black" output with that compute kernel.

As an example, consider the following:

  1. The scene:
struct ContentView: View {
    var body: some View {
        RealityView { content in
            let box = ModelEntity(mesh: .generateBox(size: 1), materials: [SimpleMaterial(color: .red, isMetallic: false)])
            
            content.add(box)
            
            content.renderingEffects.customPostProcessing = .effect(CustomPostProcess())
        }
    }
}
  2. The effect:
final class CustomPostProcess: PostProcessEffect {
    
    let pipelineState: MTLComputePipelineState
    
    init() {
        let device = MTLCreateSystemDefaultDevice()!
        let library = device.makeDefaultLibrary()!
        let function = library.makeFunction(name: "customPostProcess")!
        pipelineState = try! device.makeComputePipelineState(function: function)
    }
    
    // Post process callback.
    func postProcess(context: borrowing PostProcessEffectContext<MTLCommandBuffer>) {
        let encoder = context.commandBuffer.makeComputeCommandEncoder()!
        
        encoder.setComputePipelineState(pipelineState)
        
        encoder.setTexture(context.sourceColorTexture, index: 0)
        encoder.setTexture(context.sourceDepthTexture, index: 1)
        encoder.setTexture(context.targetColorTexture, index: 2)
        
        let threadsPerGrid = MTLSize(width: context.sourceColorTexture.width,
                                     height: context.sourceColorTexture.height,
                                     depth: 1)
        
        let w = pipelineState.threadExecutionWidth
        let h = pipelineState.maxTotalThreadsPerThreadgroup / w
        let threadsPerThreadgroup = MTLSizeMake(w, h, 1)
        
        encoder.dispatchThreads(threadsPerGrid,
                                threadsPerThreadgroup: threadsPerThreadgroup)
        
        encoder.endEncoding()
    }
}
  3. The compute kernel:
#include <metal_stdlib>
using namespace metal;

[[kernel]]
void customPostProcess(uint2 gid [[thread_position_in_grid]],
                             texture2d<half, access::read> inColor [[texture(0)]],
                             texture2d<float, access::read> inDepth [[texture(1)]],
                             texture2d<float, access::write> outColor [[texture(2)]])
{
    if (!(gid.x < inColor.get_width() && gid.y < inColor.get_height())) {
        return;
    }
    
    float depth = inDepth.read(gid)[0];
        
    if (depth > FLT_EPSILON * 10) {
        outColor.write(float4(1.0, 0.0, 0.0, 1.0), gid);
    } else {
        outColor.write(float4(0.0, 1.0, 0.0, 1.0), gid);
    }
}

With this example setup, you should see a red box and a green background, based on the depth information in the scene :)
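
If your effect needs something closer to linear depth (for fog, depth-based outlines, and so on), you can convert the stored value back to an approximate view-space distance. The kernel below is a minimal sketch rather than part of the sample above: it assumes a reverse-Z style projection where the stored depth is roughly near / viewZ, and nearPlane is a placeholder constant you would bind yourself (for example with encoder.setBytes), not something the context provides.

#include <metal_stdlib>
using namespace metal;

[[kernel]]
void visualizeLinearDepth(uint2 gid [[thread_position_in_grid]],
                          texture2d<float, access::read> inDepth [[texture(1)]],
                          texture2d<float, access::write> outColor [[texture(2)]],
                          constant float &nearPlane [[buffer(0)]]) // assumed: bound by the caller
{
    if (!(gid.x < inDepth.get_width() && gid.y < inDepth.get_height())) {
        return;
    }

    float depth = inDepth.read(gid).r;

    // Assuming reverse-Z depth of the form depth ≈ near / viewZ,
    // invert it to recover view-space distance. Guard against the
    // cleared background (depth == 0), which would divide to infinity.
    float viewZ = (depth > 0.0) ? (nearPlane / depth) : 0.0;

    // Map the first 10 meters onto a 0...1 gray ramp for visualization.
    float shade = saturate(viewZ / 10.0);
    outColor.write(float4(shade, shade, shade, 1.0), gid);
}

Dispatched with the same grid as the kernel above, this should render nearer geometry darker and farther geometry brighter, with the empty background staying black.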

--Greg

