What is the right way to append a mask to a PhotogrammetrySample?

Problem

I'm trying to attach masks to PhotogrammetrySamples, but when I run a request on the samples, the app crashes with Thread 5: signal SIGABRT, and the console says:

2022-01-31 13:38:13.575333-0800 HelloPhotogrammetry[5538:258947] [HelloPhotogrammetry] Data ingestion is complete.  Beginning processing...

libc++abi: terminating with uncaught exception of type std::__1::bad_function_call: std::exception

The app works fine if I run it without attaching the masks, or if I turn off object masking in the configuration. I'm at a bit of a loss as to why it's breaking.

Process

On my iPhone, I've modified the sample capture app. After a capture session, I run a VNGeneratePersonSegmentationRequest and save the resulting mask buffer to disk using:

context.pngRepresentation(of: CIImage(cvPixelBuffer: maskPixelBuffer), format: .L8, colorSpace: CGColorSpace(name: CGColorSpace.linearGray)!, options: [:])
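
For context, the segmentation-and-save step looks roughly like this (a trimmed sketch rather than the exact code from my modified capture app; the function name, frame buffer, and output URL are placeholders):

import Vision
import CoreImage

// Sketch: run person segmentation on a captured frame and write the
// one-channel mask out as an 8-bit linear-gray PNG.
// `framePixelBuffer`, `maskURL`, and the function name are placeholders.
func saveSegmentationMask(for framePixelBuffer: CVPixelBuffer,
                          to maskURL: URL,
                          context: CIContext) throws {
    let request = VNGeneratePersonSegmentationRequest()
    request.qualityLevel = .accurate
    request.outputPixelFormat = kCVPixelFormatType_OneComponent8

    let handler = VNImageRequestHandler(cvPixelBuffer: framePixelBuffer, options: [:])
    try handler.perform([request])

    guard let maskPixelBuffer = request.results?.first?.pixelBuffer else { return }

    // Same call as above: encode the mask as a .L8 PNG in linear gray.
    if let pngData = context.pngRepresentation(of: CIImage(cvPixelBuffer: maskPixelBuffer),
                                               format: .L8,
                                               colorSpace: CGColorSpace(name: CGColorSpace.linearGray)!,
                                               options: [:]) {
        try pngData.write(to: maskURL)
    }
}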

Then I AirDrop the files over to my laptop and run a modified version of the HelloPhotogrammetry sample app, where I prepare an array of PhotogrammetrySamples from the image, depth, gravity, and mask data.

I create a PhotogrammetrySession using the samples array and have object masking enabled in my configuration.

When I process my request on the session, it looks like the data is ingested just fine, but processing then breaks with the bad_function_call exception above.
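
For reference, the macOS side boils down to roughly this (again a sketch rather than my exact HelloPhotogrammetry changes; inputFolderURL and outputModelURL are placeholders, and makeSequenceFromStructuredFolder is the function shown under "Related code" below):

import RealityKit

// Build the configuration with object masking on, create the session from
// the sample sequence, listen for output messages, then request a model file.
var configuration = PhotogrammetrySession.Configuration()
configuration.isObjectMaskingEnabled = true

let samples = makeSequenceFromStructuredFolder(folder: inputFolderURL)
let session = try PhotogrammetrySession(input: samples, configuration: configuration)

// Subscribe to progress / completion / errors before kicking off processing.
Task {
    for try await output in session.outputs {
        switch output {
        case .requestError(let request, let error):
            print("Request \(request) failed: \(error)")
        case .requestComplete(let request, let result):
            print("Request \(request) completed: \(result)")
        default:
            break
        }
    }
}

try session.process(requests: [
    .modelFile(url: outputModelURL, detail: .medium)
])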

Related code

Here is how I construct my PhotogrammetrySample sequence, shortened for brevity:

private func makeSequenceFromStructuredFolder(folder: URL) -> [PhotogrammetrySample] {
        
        // Look in the sample folder, prepare arrays of capture data
        // For each capture set
            // Load color image, depth image, and mask as CIImages
            // Load and decode gravity data

            // Convert all CIImages into CVPixelBuffers
            // kCVPixelFormatType_32BGRA, <CGColorSpace 0x108e04f80> (kCGColorSpaceICCBased; kCGColorSpaceModelRGB; Display P3)
            _image = pixelBufferFromCIImage(image, context: context, pixelFormat: kCVPixelFormatType_32BGRA, colorSpace: image.colorSpace!)
            // kCVPixelFormatType_DepthFloat32, <CGColorSpace 0x100d1b000> (kCGColorSpaceICCBased; kCGColorSpaceModelMonochrome; Linear Gray)
            _depth = pixelBufferFromCIImage(depthImage, context: context, pixelFormat: kCVPixelFormatType_DepthFloat32, colorSpace: depthImage.colorSpace!)
            // kCVPixelFormatType_OneComponent8, <CGColorSpace 0x100d1b000> (kCGColorSpaceICCBased; kCGColorSpaceModelMonochrome; Linear Gray)
            _mask = pixelBufferFromCIImage(mask, context: context, pixelFormat: kCVPixelFormatType_OneComponent8, colorSpace: CGColorSpace(name: CGColorSpace.linearGray)!)
            
            // Prepare sample
            var sample = PhotogrammetrySample(id: index, image: _image!)
            if let _gravity = _gravity {
                sample.gravity = _gravity
            }
            if let _depth = _depth {
                sample.depthDataMap = _depth
            }
            if let _mask = _mask {
                sample.objectMask = _mask
            }
            
            // Append the sample
            sampleSequence.append(sample)
        } // closes the per-capture-set loop sketched in the comments above
        return sampleSequence
    }

And here is how I convert my CIImages into CVPixelBuffers.

func pixelBufferFromCIImage(_ image: CIImage, context: CIContext, pixelFormat: OSType, colorSpace: CGColorSpace) -> CVPixelBuffer? {
        var pixelBuffer: CVPixelBuffer?
        let attrs = [kCVPixelBufferCGImageCompatibilityKey: true,
             kCVPixelBufferCGBitmapContextCompatibilityKey: true] as CFDictionary
        let width: Int = Int(image.extent.width)
        let height: Int = Int(image.extent.height)
        let status = CVPixelBufferCreate(kCFAllocatorDefault,
                                         width,
                                         height,
                                         pixelFormat,
                                         attrs,
                                         &pixelBuffer)
        switch status {
        case kCVReturnInvalidPixelFormat:
            print("status == kCVReturnInvalidPixelFormat")
        case kCVReturnInvalidSize:
            print("status == kCVReturnInvalidSize")
        case kCVReturnPixelBufferNotMetalCompatible:
            print("status == kCVReturnPixelBufferNotMetalCompatible")
        case kCVReturnSuccess:
            print("status == kCVReturnSuccess")
        default:
            print("status is other")
        }
        
        guard status == kCVReturnSuccess else {
            return nil
        }
        
        context.render(image, to: pixelBuffer!, bounds: image.extent, colorSpace: colorSpace)
        return pixelBuffer
    }

Other attempted steps that ultimately failed

  • Scaled the mask to be the same size as the color image using CIFilter.lanczosScaleTransform() (see the sketch after this list).
  • Created a binary mask using CIFilter.colorThreshold().
  • Rendered an intermediary image to be extra sure the right pixel format was being used for the mask.
  • Checked all image extents and made sure the color image and mask were the same size and rotation.
  • Read all the documentation and looked for similar questions.
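
For completeness, the mask-scaling attempt from the first bullet looked roughly like this (a sketch; the helper name is mine):

import CoreImage
import CoreImage.CIFilterBuiltins

// Resize the mask CIImage so its extent matches the color image's extent
// before converting it to a CVPixelBuffer.
func scaledMask(_ mask: CIImage, toMatch image: CIImage) -> CIImage {
    let filter = CIFilter.lanczosScaleTransform()
    filter.inputImage = mask
    // `scale` controls the vertical scaling; `aspectRatio` adds horizontal scaling.
    filter.scale = Float(image.extent.height / mask.extent.height)
    filter.aspectRatio = Float((image.extent.width / mask.extent.width) /
                               (image.extent.height / mask.extent.height))
    return filter.outputImage ?? mask
}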

I appreciate any help!

I'm having the exact same problem, so if you figure it out, I'd love to hear what the solution was!
