Render advanced 3D graphics and perform data-parallel computations on graphics processors with Metal.

Posts under Metal tag

199 Posts
Post | Replies | Boosts | Views | Activity

Implementing Scalable Order-Independent Transparency (OIT) in Metal
Hi, Apple’s documentation on Order-Independent Transparency (OIT) describes an approach using image blocks, where an array of size 4 is allocated per fragment to store depth and color in a tile shading compute pass. However, when increasing the scene’s depth complexity by adding more overlapping quads, the OIT implementation fails due to the fixed array size. Is there a way to dynamically allocate storage for fragments based on actual depth complexity encountered during rasterization, rather than using a fixed-size array? Specifically, can an adaptive array of fragments be maintained and sorted by depth, where the size grows as needed instead of being limited to 4 entries? Any insights or alternative approaches would be greatly appreciated. Thank you!
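A commonly suggested alternative to the fixed-size image block array is a per-pixel linked list (an "A-buffer"): fragments append nodes to one shared pool sized for the whole frame, so per-pixel depth complexity is bounded only by the pool's total capacity, and a later resolve pass sorts each pixel's list by depth and blends. Below is a minimal sketch of the build pass, with the MSL kept in a Swift string in playground style; the node layout, buffer indices, and overflow policy are assumptions, and the resolve pass is not shown.

import Metal

// Hypothetical build pass for per-pixel linked-list OIT. Each fragment
// appends one node to a shared pool and atomically splices itself into
// its pixel's list; nothing here is limited to 4 layers per pixel.
let oitBuildSource = """
#include <metal_stdlib>
using namespace metal;

struct FragmentNode {
    float4 color;   // fragment color to blend in the resolve pass
    float  depth;   // depth used to sort the list back to front
    uint   next;    // index of the next node for this pixel
};

struct VertexOut {
    float4 position [[position]];
    float4 color;
};

fragment void oit_build(VertexOut in                   [[stage_in]],
                        device FragmentNode *nodes     [[buffer(0)]],
                        device atomic_uint  &nodeCount [[buffer(1)]],
                        device atomic_uint  *heads     [[buffer(2)]],
                        constant uint       &width     [[buffer(3)]],
                        constant uint       &capacity  [[buffer(4)]])
{
    uint index = atomic_fetch_add_explicit(&nodeCount, 1u, memory_order_relaxed);
    if (index >= capacity) { return; }   // pool exhausted: drop this fragment
    uint2 p = uint2(in.position.xy);
    nodes[index].color = in.color;
    nodes[index].depth = in.position.z;
    // Swap this node in as the new list head for the pixel.
    nodes[index].next = atomic_exchange_explicit(&heads[p.y * width + p.x],
                                                 index, memory_order_relaxed);
}
"""

// Host side: compile at runtime; a follow-up compute pass would walk each
// pixel's list, sort by depth, and composite into the drawable.
let device = MTLCreateSystemDefaultDevice()!
let library = try device.makeLibrary(source: oitBuildSource, options: nil)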
0
0
100
12h
Regarding AR App Submission Built in Xcode - Swift Student Challenge submission
Hello guys, I have a question regarding the submission requirements. My app uses ARKit and requires Metal files for shaders, which are not supported by Swift Playgrounds, so I developed my app using Xcode (Swift Playgrounds returns an error for the Metal file). Since my app relies on a real device for proper functionality, I would like to know whether, under these circumstances, the scene build is performed by Xcode. If the build were instead done by Swift Playgrounds, my scene would not function correctly. I'm asking because of this note. Thank you for your time and assistance.
1
0
156
22h
Metal not working in Swift Playgrounds (SSC Scene)
Hi everyone, I'm currently working on a Swift Playgrounds project where I need to incorporate a Metal shader file. However, when I tried to include my shader file (PincushionShader.metal), I encountered the following error: Is it possible to use Metal shader files within Swift Playgrounds? It is really important for my Swift Student Challenge scene. If not, are there any workarounds or recommended approaches for testing Metal shaders in a similar environment? Any guidance or suggestions would be greatly appreciated!
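A workaround that often comes up for Playgrounds is to avoid .metal files entirely and compile the shader from a source string at runtime, which needs no Metal build phase. A minimal sketch with a placeholder kernel; note this produces an MTLLibrary for compute/render pipelines, not a SwiftUI ShaderLibrary.

import Metal

// Shader source kept in a Swift string, so no .metal file (and no Metal
// build phase) is required, which is the part Swift Playgrounds lacks.
let source = """
#include <metal_stdlib>
using namespace metal;

kernel void fill(texture2d<float, access::write> out [[texture(0)]],
                 uint2 gid [[thread_position_in_grid]])
{
    out.write(float4(1, 0, 0, 1), gid);   // placeholder: solid red
}
"""

let device = MTLCreateSystemDefaultDevice()!
// Compile at runtime; compile errors surface here instead of at build time.
let library = try device.makeLibrary(source: source, options: nil)
let fill = library.makeFunction(name: "fill")!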
3
0
152
2h
Trouble Loading Precompiled Metal Shader (.metallib) into ShaderLibrary
I am currently finalizing my Swift Student Challenge submission, and Metal shaders are an essential part of my app. However, during submission, I noticed a note explaining: "Note: Xcode app playgrounds are run in Simulator", which is not possible for my app, as it also requires the camera of a physical device to function. So I am currently transferring my app from Xcode into Swift Playgrounds, which I presume will run on physical devices. However, Swift Playgrounds does not yet support Metal shader files directly, so I am now pre-compiling my shaders to load them at runtime instead. Note that all the code below was run either in the Terminal or in Xcode.

I have already compiled my Metal shaders with:

xcrun -sdk iphoneos metal -o Shaders.ir -c Shaders.metal
xcrun -sdk iphoneos metallib Shaders.ir -o Shaders.metallib

which seems to have run without any problems. When I run:

let shaderPath = Bundle.main.path(forResource: "Shaders", ofType: "metallib")
let shaderURL = URL(fileURLWithPath: shaderPath!)
let shaderData = try! Data(contentsOf: shaderURL)

do {
    let device = MTLCreateSystemDefaultDevice()!
    let library = try shaderData.withUnsafeBytes { bytes -> MTLLibrary? in
        let dispatchData = DispatchData(bytes: bytes)
        return try device.makeLibrary(data: dispatchData as __DispatchData)
    }
    print(library!.functionNames)
} catch {
    print(error.localizedDescription)
}

my Metal shader functions are printed correctly in the console. However, based on my research, it seems that an MTLLibrary cannot be converted into a SwiftUI ShaderLibrary. That is why I am now looking at these two initializers:

ShaderLibrary(url: URL)
ShaderLibrary(data: Data)

which state: "Creates a new Metal shader library from the contents of url/data, which must be the contents of a precompiled Metal library. Functions compiled from the returned library will only be cached as long as the returned library exists." I believe this should work for my use case. However, the problem arises when I run this code:

let shaderPath = Bundle.main.path(forResource: "Shaders", ofType: "metallib")
let shaderURL = URL(fileURLWithPath: shaderPath!)
let library = ShaderLibrary(url: shaderURL)

My app consistently crashes on the ShaderLibrary initialization, rendering the app unusable. Why does ShaderLibrary(url: shaderURL) cause a crash, even though my .metallib file is valid? Are there additional requirements for loading a ShaderLibrary that I may have missed?
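One requirement that is easy to miss here: SwiftUI's ShaderLibrary resolves only functions declared with the [[ stitchable ]] attribute and SwiftUI's expected signatures, so a metallib full of ordinary vertex/fragment/kernel functions can load and list its names fine through makeLibrary(data:) and still be unusable from SwiftUI. Whether that is the cause of this particular crash is an assumption, but it is worth checking. A sketch of what a distortion-style stitchable function looks like, shown as the hypothetical contents of Shaders.metal:

// Hypothetical contents of Shaders.metal, kept in a string for reference.
// SwiftUI's distortionEffect expects a [[ stitchable ]] function taking
// float2 position (plus user arguments) and returning float2; colorEffect
// expects (float2 position, half4 color) returning half4.
let stitchableSource = """
#include <metal_stdlib>
using namespace metal;

[[ stitchable ]] float2 pincushion(float2 position, float2 size) {
    return position;   // placeholder: identity; real distortion math goes here
}
"""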
4
0
167
1d
Metal calls hanging/stuck if app is started quickly after login
Our app uses Metal for image processing. We have found that if our app (and its possibly intensive image processing) is started quickly after the user logs in, calls to Metal may hang or get stuck for a good while. For example, it can take 1-2 minutes for something that usually takes 3-5 seconds! The Metal threads are just hanging in a memmove... In Activity Monitor we see that a lot is happening right after login, but why the Metal calls block for so long is unknown to us. The workaround is to wait a minute before starting our app and its intensive Metal image processing, but that is hard to explain to end users... It doesn't happen on all computers but is fairly easy to reproduce on some. We are using macOS 15.3.1 on M1/M3 Max. Any good ideas for how to proceed with this problem and possibly reach out to Apple engineers? Thanks! :)
2
0
153
1w
Metal Integration with SwiftUI
Hello! I have asked this question in previous years, but I want to make sure, as each challenge could be different. Are applicants for the Swift Student Challenge allowed to use the features and technologies involved with Metal/MetalKit? Last year, the answer was yes, and I have seen a few people here and there use it with Swift and win. I would like to know if we can use it for the 2025 challenge as well. Thanks! :)
2
0
189
1w
alternative for CustomShader in visionOS
Following the post at https://developer.apple.com/documentation/realitykit/custommaterial, it's simple to use a shader for materials and get uniforms and params from each vertex. However, CustomMaterial is not available on visionOS. Is there an alternative to use in this case? I want to write a shader to fill the material myself. (I have shader experience from the web and am familiar with fragment shaders.)
1
0
165
1w
Concurrent conflicting texture writes
Hello! I need to "draw" a set of particles into a texture. It would be trivial in a render encoder, of course; however, I would like to implement the task in a compute kernel. Every particle draw operation is expected to set 5 texels: the "center" one plus its left/right/upper/lower neighbors. Particles can and will overlap, so concurrent draws are to be expected. I tried using texture atomics, atomic_store() to be more precise. This worked, albeit pretty slowly, too slowly for my purpose. Just to test what would happen, I tried using a normal texture write(). I was expecting to see some kind of visual artifacts, but to my surprise, it worked very well (and much faster). My question: is it safe? I understand that calling write() doesn't guarantee any ordering of the operations, so if multiple threads write to the same texel, the final value may come from any of those threads. But suppose all the threads were to write the very same color: can I assume that the texel in question will have said color after the compute kernel finishes? I am using an M2 Pro MacBook, but ideally I would love to get the answer for all Apple Silicon devices. My texture format is R32Int (so as to be able to use atomics), but I could do with any single-channel format; the purpose of the texture is to be a binary mask of sorts. Thanks!
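For reference, a sketch of the write()-based kernel being described, with the MSL in a source string; the particle layout and the missing bounds checks are assumptions. As far as the Metal Shading Language spec goes, racing non-atomic writes are unordered, so "every thread writes the identical value" working reliably is an observation about current hardware rather than a documented guarantee.

import Metal

// Plot each particle as a 5-texel plus shape into a single-channel mask.
// Every colliding thread writes the same value, which is the case the
// question asks about; atomic_store() is the conservative, well-defined
// (but slower) alternative.
let kernelSource = """
#include <metal_stdlib>
using namespace metal;

kernel void plot_plain(texture2d<int, access::write> mask [[texture(0)]],
                       constant uint2 *particles [[buffer(0)]],
                       uint id [[thread_position_in_grid]])
{
    uint2 c = particles[id];
    const int4 v = int4(1);        // identical value from every thread
    // Bounds checks at texture edges omitted for brevity.
    mask.write(v, c);
    mask.write(v, c + uint2(1, 0));
    mask.write(v, c - uint2(1, 0));
    mask.write(v, c + uint2(0, 1));
    mask.write(v, c - uint2(0, 1));
}
"""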
0
0
169
2w
Rendering Order with ModelSortGroup
I have a huge sphere where the camera stays inside, with front-face culling turned on in the ShaderGraphMaterial applied to that sphere, so that I can place other 3D content inside. However, when it comes to attachments, object occlusion never works as I expect: my attachments are occluded by my sphere (some are not, so the behavior is not deterministic). I suspected a depth-testing issue, so I started using ModelSortGroup to reorder the rendering sequence, but it doesn't work. Searching the internet, this post's comments show that ModelSortGroup simply doesn't work on attachments. So how should I tackle this issue to let my attachments appear inside my sphere? OS/Sys: visionOS 2.3 / Xcode 16.3
1
0
248
2w
Learn Metal
I am interested in learning the Metal framework for rendering development. However, most of Apple’s official documentation uses Objective-C code. Therefore, I am seeking guidance on whether it is more advantageous for me to focus solely on learning Swift to gain proficiency in Metal.
2
0
443
3w
After updating CAMetalLayer.drawableSize, [CAMetalLayer nextDrawable:] frequently takes ~1s
I have a bare-bones Metal app setup where I attach a CAMetalLayer to a window that inherits from NSWindow with a custom delegate. Everything else is vanilla. I'm also using metal-cpp and the Metal shader converter. I'm running into an issue where the application runs fine in the beginning, but once I resize the window, it starts hitching. It turns out that [CAMetalLayer nextDrawable:] frequently (but not always) takes around a full second (plus or minus a few milliseconds) to return once drawableSize has been updated. I've tried setting allowsNextDrawableTimeout to false, which doesn't work; it returns a valid drawable after a second instead of nil. Setting displaySyncEnabled to false reduces the likelihood of this happening from 90%+ to around 50% but does not eliminate it. Setting maximumDrawableCount to 2 or 3 does not seem to make a difference. By dumping the resource IDs of the returned textures, I've noticed something interesting: before resizing, the layer seems to shuffle between 2 textures, or at least 2 resource IDs, but after resizing it starts to create new textures for each returned drawable. Occasionally it seems to reuse a previous resource ID, but that does not seem to correlate with whether the method returns quickly or not. Why does this happen, and how can I fix it? Should I create a new CAMetalLayer when resizing the window instead of updating drawableSize?
3
0
360
3w
Use Metal to convert an HDR pixel buffer to an SDR pixel buffer
I have seen some demos that convert HDR video to an SDR pixel buffer using AVAssetReader, AVVideoComposition, AVComposition, and AVFoundation. But in some cases I want to render the HDR pixel buffer and record video.

AVCaptureSession *session = [[AVCaptureSession alloc] init];
session.sessionPreset = AVCaptureSessionPresetHigh;
AVCaptureDevice *videoDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
if ([videoDevice isVideoHDRSupported]) {
    NSError *error = nil;
    if ([videoDevice lockForConfiguration:&error]) {
        videoDevice.automaticallyAdjustsVideoHDREnabled = NO;
        videoDevice.videoHDREnabled = YES; // enable HDR
        [videoDevice unlockForConfiguration];
    } else {
        NSLog(@"Error: %@", error.localizedDescription);
    }
}

Real-time processing of the HDR data requires processing the video frame data (such as filters) while ensuring that the processing chain supports 10-bit color depth and HDR metadata, and I also use the image buffers for object tracking, etc. How can I solve this problem?
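One approach that may fit this pipeline is to convert each captured pixel buffer with a Metal-backed CIContext before filtering, tracking, and recording. A hedged sketch; the output format, the sRGB target, and the assumption that the source buffer carries its HDR color-space attachment are all mine, and a dedicated tone-mapping filter could be chained in for nicer highlight rolloff.

import CoreImage
import CoreVideo

// One shared, Metal-backed context; creating a CIContext per frame is costly.
let ciContext = CIContext()

// Convert an HDR CVPixelBuffer into a new 8-bit BGRA buffer in sRGB.
// Rendering into an sRGB destination performs the color conversion.
func makeSDRBuffer(from hdrBuffer: CVPixelBuffer) -> CVPixelBuffer? {
    let width = CVPixelBufferGetWidth(hdrBuffer)
    let height = CVPixelBufferGetHeight(hdrBuffer)
    var out: CVPixelBuffer?
    CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                        kCVPixelFormatType_32BGRA, nil, &out)
    guard let sdrBuffer = out else { return nil }

    let image = CIImage(cvPixelBuffer: hdrBuffer)
    let sRGB = CGColorSpace(name: CGColorSpace.sRGB)!
    // Additional CIFilters (custom filters, tone mapping) could be applied
    // to `image` here before rendering.
    ciContext.render(image, to: sdrBuffer, bounds: image.extent, colorSpace: sRGB)
    return sdrBuffer
}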
1
0
188
3w
CATransaction commit() crashed on background thread [EXC_BREAKPOINT: com.apple.root.****-qos.cooperative]
Problem Description

We are developing an app for iOS and iPadOS that involves extensive custom drawing of paths, shapes, texts, etc. To improve drawing and rendering speed, we use CARenderer to generate cached images (CGImage) on a background thread. We adopted this approach based on this StackOverflow post: https://stackoverflow.com/a/75497329/9202699. However, we are experiencing frequent crashes in our production environment that we can hardly reproduce in our development environment. Despite months of debugging and seeking support from DTS and the Apple Feedback platform, we have not been able to fully resolve this issue. Our recent crash reports indicate that the crashes occur when calling CATransaction.commit(). We suspect that CATransaction may not be functioning properly outside the main thread. However, based on feedback from the Apple Feedback platform, we were advised to use CATransaction.begin() and CATransaction.commit() on a background thread. If anyone has any insights, we would greatly appreciate it.

Code Sample

The line CATransaction.commit() is causing the crash: [EXC_BREAKPOINT: com.apple.root.****-qos.cooperative]

private let transactionLock = NSLock() // to ensure one transaction at a time
private let device = MTLCreateSystemDefaultDevice()!

@inline(never)
static func drawOnCGImageWithCARenderer(
    layerRect: CGRect,
    itemsToDraw: [ItemsToDraw]
) -> CGImage? {
    // We have encapsulated everything related to CALayer and its
    // associated creations and manipulations within CATransaction
    // as suggested by engineers from the Apple Feedback Portal.
    transactionLock.lock()
    CATransaction.begin()

    // Create the root layer.
    let layer = CALayer()
    layer.bounds = layerRect
    layer.masksToBounds = true

    // Add one sublayer for each item to draw.
    itemsToDraw.forEach { item in
        // We have thousands or hundreds of thousands of drawing items to add.
        // Each drawing item may produce a CALayer, CAShapeLayer or CATextLayer.
        // This is also why we want to utilise CARenderer to leverage GPU rendering.
        layer.addSublayer(
            item.createCALayerOrCATextLayerOrCAShapeLayer()
        )
    }

    // Create the MTLTexture and CARenderer.
    let textureDescriptor = MTLTextureDescriptor.texture2DDescriptor(
        pixelFormat: .rgba8Unorm,
        width: Int(layer.frame.size.width),
        height: Int(layer.frame.size.height),
        mipmapped: false
    )
    textureDescriptor.usage = [MTLTextureUsage.shaderRead, .shaderWrite, .renderTarget]
    let texture = device.makeTexture(descriptor: textureDescriptor)!
    let renderer = CARenderer(mtlTexture: texture)
    renderer.bounds = layer.frame
    renderer.layer = layer

    /* ********************************************************* */
    // From our crash report, this is where the crash happens.
    CATransaction.commit()
    /* ********************************************************* */

    transactionLock.unlock()

    // Render the layers onto the MTLTexture using CARenderer.
    renderer.beginFrame(atTime: 0, timeStamp: nil)
    renderer.render()
    renderer.endFrame()

    // Draw the MTLTexture onto an image.
    guard let colorSpace = CGColorSpace(name: CGColorSpace.sRGB),
          let ciImage = CIImage(mtlTexture: texture, options: [.colorSpace: colorSpace])
    else { return nil }

    // Convert the CIImage to a CGImage.
    let context = CIContext()
    return context.createCGImage(ciImage, from: ciImage.extent)
}
0
1
220
3w
Instruments showing incorrect values
Hello, I'm encountering an issue with the Instruments app while running a benchmark on an M2 Ultra Mac Studio. Although I am certain that GPU activities involving memory read and write operations are occurring, all related performance counters consistently return 0. Interestingly, this problem does not occur when I run the same code on an M1 MacBook Air, where the counters behave as expected. What could be causing this discrepancy? Any insights or suggestions would be greatly appreciated. Thank you!
0
0
175
4w
Request for gaze data in fully immersive Metal apps
Hi, We are trying to port our Unity app from other XR devices to Vision Pro, so it's much easier for us to use the Metal rendering layer, fully immersive. To stay true to the platform, we want to keep the gaze/pinch interaction system. But we just noticed that, unlike PolySpatial XR apps, visionOS XR in Metal does not provide gaze info unless the user is actively pinching, which forbids any attempt to give visual feedback on what they are looking at (buttons, etc.). Is this planned on Apple's roadmap? Thanks
3
0
292
1w
CATransaction commit [Crashed: com.apple.root.user-initiated-qos.cooperative]
Description

We are developing an app for iOS and iPadOS that involves extensive custom drawing of paths, shapes, texts, etc. To improve drawing and rendering speed, we use CARenderer to generate cached images (CGImage) on a background thread. We adopted this approach based on this StackOverflow post: https://stackoverflow.com/a/75497329/9202699. However, we are experiencing frequent crashes in our production environment that we cannot reproduce in our development environment. Despite months of debugging and seeking support from DTS and the Apple Feedback platform, we have not been able to fully resolve this issue. Our recent crash reports indicate that the crashes occur when calling CATransaction.commit().

Crash traceback

The method names in this traceback are mapped to those in the code sample below. The app name has been masked.

Crashed: com.apple.root.user-initiated-qos.cooperative
0  MyApp 0x887408 specialized static CAUtils.commitCATransaction() + 4340151304 (<compiler-generated>:4340151304)
1  MyApp 0x887408 specialized static CAUtils.commitCATransaction() + 4340151304 (<compiler-generated>:4340151304)
2  MyApp 0x8874a4 specialized static CAUtils.addDrawingItemsToRenderer(***) + 250 (CAUtils.swift:250)
3  MyApp 0x887710 specialized static CAUtils.drawOnCGImageWithCARenderer(***) + 267 (CAUtils.swift:267)
4  MyApp 0x8878c0 specialized static CAUtils.drawOnCGImageWithCARendererWithRetry(***) + 315 (CAUtils.swift:315)
5  MyApp 0x736294 XXXManager.generateCGImages(***) + 570 (XXXManager.swift:570)
6  MyApp 0x73404c closure #1 in XXXManager.updateCachedCGImages(***) + 427 (XXXManager.swift:427)
7  libswift_Concurrency.dylib 0x61104 swift::runJobInEstablishedExecutorContext(swift::Job*) + 252
8  libswift_Concurrency.dylib 0x62514 swift_job_runImpl(swift::Job*, swift::SerialExecutorRef) + 144
9  libdispatch.dylib 0x15d8c _dispatch_root_queue_drain + 392
10 libdispatch.dylib 0x16590 _dispatch_worker_thread2 + 156
11 libsystem_pthread.dylib 0x4c40 _pthread_wqthread + 228
12 libsystem_pthread.dylib 0x1488 start_wqthread + 8

Code Sample

Below is a sample of our code. While the complete snippet is too long, the issue occurs in addDrawingItemsToRenderer. Please refer to the other methods for completeness and reference purposes.

private let transactionLock = NSLock()
private let deviceLock = NSLock()
private let device = MTLCreateSystemDefaultDevice()!

/// This is the method we call from outside.
@inline(never)
static func drawOnCGImageWithCARenderer(
    layerRect: CGRect,
    drawingItems: [DrawingItem]
) -> CGImage? {
    guard let (texture, renderer) = addDrawingItemsToRenderer(
        layerRect: layerRect,
        drawingItems: drawingItems
    ) else { return nil }

    renderer.beginFrame(atTime: 0, timeStamp: nil)
    renderer.render()
    renderer.endFrame()

    guard let colorSpace = CGColorSpace(name: CGColorSpace.sRGB),
          let ciImage = CIImage(mtlTexture: texture, options: [.colorSpace: colorSpace])
    else { return nil }

    let context = CIContext()
    return context.createCGImage(ciImage, from: ciImage.extent)
}

/// This is the method where the crash happens.
@inline(never)
fileprivate static func addDrawingItemsToRenderer(
    layerRect: CGRect,
    drawingItems: [DrawingItem]
) -> (MTLTexture, CARenderer)? {
    // We have encapsulated everything related to CALayer and its
    // associated creations and manipulations within CATransaction
    // as suggested by engineers from the Apple Feedback Portal.
    beginCATransaction()
    defer {
        commitCATransaction() // The crash happens here.
    }
    let (layer, imageWidth, imageHeight) = addDrawingItemsToLayer(layerRect: layerRect, drawingItems: drawingItems)
    return createTextureAndRenderer(
        layer: layer,
        imageWidth: imageWidth,
        imageHeight: imageHeight
    )
}

// Below are all internal methods. We have split the method into very
// granular parts and marked them as @inline(never) to prevent the
// compiler from inlining our code, which may otherwise obscure usage
// traceback information in our crash reports.

@inline(never)
fileprivate static func beginCATransaction() {
    transactionLock.lock()
    CATransaction.begin()
}

@inline(never)
fileprivate static func commitCATransaction() {
    // From our crash report, we believe the crash happens on this line.
    CATransaction.commit()
    // It is unlikely that the lock causes the crash, as we added it only
    // recently to ensure that there is only one transaction on our
    // background thread. After we added this lock, the crash rate indeed
    // lowered, but it still has not fully disappeared.
    transactionLock.unlock()
}

// --------------------------------

// The methods below are provided for reference and completeness. While
// they may have issues, they do not appear in our crash reports as
// frequently as the one caused by CATransaction.commit().

@inline(never)
fileprivate static func addDrawingItemsToLayer(
    layerRect: CGRect,
    drawingItems: [DrawingItem]
) -> (layer: CALayer, imageWidth: CGFloat, imageHeight: CGFloat) {
    let layer = CALayer()
    layer.isGeometryFlipped = SharedAppUtils.isIOS
    layer.anchorPoint = CGPoint.zero
    layer.bounds = layerRect
    layer.masksToBounds = true

    for drawingItem in drawingItems {
        // We have thousands or hundreds of thousands of drawing items to add.
        // Each drawing item may produce a CALayer, CAShapeLayer or CATextLayer.
        // This is also why we want to utilise CARenderer to leverage GPU rendering.
        let sublayerForDrawingItem = drawingItem.createCALayerOrCATextLayerOrCAShapeLayer()
        layer.addSublayer(sublayerForDrawingItem)
    }

    let imageWidth = max(1, layer.frame.size.width * UIScreen.main.scale)
    let imageHeight = max(1, layer.frame.size.height * UIScreen.main.scale)
    layer.transform = CATransform3DMakeScale(UIScreen.main.scale, UIScreen.main.scale, 1)
    layer.frame = .init(origin: .zero, size: .init(width: imageWidth, height: imageHeight))
    return (layer, imageWidth, imageHeight)
}

@inline(never)
fileprivate static func createTextureAndRenderer(
    layer: CALayer,
    imageWidth: CGFloat,
    imageHeight: CGFloat
) -> (MTLTexture, CARenderer)? {
    deviceLock.lock()
    defer { deviceLock.unlock() }

    let textureDescriptor = MTLTextureDescriptor.texture2DDescriptor(
        pixelFormat: .rgba8Unorm,
        width: Int(imageWidth),
        height: Int(imageHeight),
        mipmapped: false
    )
    textureDescriptor.usage = [MTLTextureUsage.shaderRead, .shaderWrite, .renderTarget]
    guard let texture = device.makeTexture(descriptor: textureDescriptor) else { return nil }

    let renderer = CARenderer(mtlTexture: texture)
    renderer.bounds = layer.frame
    renderer.layer = layer
    return (texture, renderer)
}
1
1
217
Jan ’25
Usage of colorCurves CIFilter
How can I use my RGB curve points:

let redCurve = [CIVector(x: 0, y: 0), CIVector(x: 0.235, y: 0.152),
                CIVector(x: 0.5, y: 0.5), CIVector(x: 1, y: 1)]
let greenCurve = [CIVector(x: 0, y: 0), CIVector(x: 0.247, y: 0.196),
                  CIVector(x: 0.5, y: 0.5), CIVector(x: 1, y: 1)]
let blueCurve = [CIVector(x: 0, y: 0), CIVector(x: 0.235, y: 0.184),
                 CIVector(x: 0.466, y: 0.466), CIVector(x: 1, y: 1)]

in the colorCurves filter, which I've found in the Apple docs:

func colorCurves(inputImage: CIImage) -> CIImage {
    let colorCurvesEffect = CIFilter.colorCurves()
    colorCurvesEffect.inputImage = inputImage
    colorCurvesEffect.curvesDomain = CIVector(x: 0, y: 1)
    colorCurvesEffect.curvesData = Data(
        bytes: [Float32]([
            0.0, 0.0, 0.0,
            0.8, 0.8, 0.8,
            1.0, 1.0, 1.0
        ]), count: 36)
    colorCurvesEffect.colorSpace = CGColorSpaceCreateDeviceRGB()
    return colorCurvesEffect.outputImage!
}
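For what it's worth, curvesData does not take control points: it expects N uniformly spaced samples across curvesDomain, interleaved as r, g, b per sample (the 36-byte example above is 3 samples x 3 channels x 4 bytes). One way to bridge the two is to resample each channel's control points at uniform positions, for example with linear interpolation. A sketch; the sample count and the interpolation choice are assumptions:

import CoreImage
import CoreImage.CIFilterBuiltins

// Piecewise-linear evaluation of a curve given (x, y) control points.
func sample(_ points: [CIVector], at x: CGFloat) -> Float {
    guard let after = points.firstIndex(where: { $0.x >= x }) else {
        return Float(points.last!.y)
    }
    if after == 0 { return Float(points[0].y) }
    let (a, b) = (points[after - 1], points[after])
    let t = (x - a.x) / max(b.x - a.x, .ulpOfOne)
    return Float(a.y + t * (b.y - a.y))
}

func colorCurves(inputImage: CIImage,
                 red: [CIVector], green: [CIVector], blue: [CIVector],
                 sampleCount: Int = 32) -> CIImage {
    // Interleave r, g, b samples taken at uniform positions in [0, 1].
    var values: [Float32] = []
    for i in 0..<sampleCount {
        let x = CGFloat(i) / CGFloat(sampleCount - 1)
        values += [sample(red, at: x), sample(green, at: x), sample(blue, at: x)]
    }
    let filter = CIFilter.colorCurves()
    filter.inputImage = inputImage
    filter.curvesDomain = CIVector(x: 0, y: 1)
    filter.curvesData = values.withUnsafeBufferPointer { Data(buffer: $0) }
    filter.colorSpace = CGColorSpaceCreateDeviceRGB()
    return filter.outputImage!
}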
0
0
174
Jan ’25
Swift playground + metal crashes on swift 6
The following code crashes (SIGSEGV in lldb-rpc-server) when run as Swift 6, but runs correctly when run as Swift 5 (from "Metal by Tutorials"):

import PlaygroundSupport
import MetalKit

print("start")

guard let device = MTLCreateSystemDefaultDevice() else {
    fatalError("GPU is not supported")
}

let frame = CGRect(x: 0, y: 0, width: 600, height: 600)
let view = MTKView(frame: frame, device: device)
view.clearColor = MTLClearColor(red: 1, green: 1, blue: 0.8, alpha: 1)

let allocator = MTKMeshBufferAllocator(device: device)
let mdlMesh = MDLMesh(sphereWithExtent: [0.75, 0.75, 0.75],
                      segments: [100, 100],
                      inwardNormals: false,
                      geometryType: .triangles,
                      allocator: allocator)
let mesh = try MTKMesh(mesh: mdlMesh, device: device)

guard let commandQueue = device.makeCommandQueue() else {
    fatalError("Could not create a command queue")
}

let shader = """
#include <metal_stdlib>
using namespace metal;

struct VertexIn {
    float4 position [[attribute(0)]];
};

vertex float4 vertex_main(const VertexIn vertex_in [[stage_in]]) {
    return vertex_in.position;
}

fragment float4 fragment_main() {
    return float4(1, 0, 0, 1);
}
"""

print("A")
let library = try device.makeLibrary(source: shader, options: nil)
let vertexFunction = library.makeFunction(name: "vertex_main")
let fragmentFunction = library.makeFunction(name: "fragment_main")

let pipelineDescriptor = MTLRenderPipelineDescriptor()
pipelineDescriptor.colorAttachments[0].pixelFormat = .bgra8Unorm
pipelineDescriptor.vertexFunction = vertexFunction
pipelineDescriptor.fragmentFunction = fragmentFunction
print("X")
pipelineDescriptor.vertexDescriptor = MTKMetalVertexDescriptorFromModelIO(mesh.vertexDescriptor)
let pipelineState = try device.makeRenderPipelineState(descriptor: pipelineDescriptor)

guard let commandBuffer = commandQueue.makeCommandBuffer(),
      let renderPassDescriptor = view.currentRenderPassDescriptor,
      let renderEncoder = commandBuffer.makeRenderCommandEncoder(descriptor: renderPassDescriptor)
else { fatalError() }

renderEncoder.setRenderPipelineState(pipelineState)
renderEncoder.setVertexBuffer(mesh.vertexBuffers[0].buffer, offset: 0, index: 0)

guard let submesh = mesh.submeshes.first else { fatalError() }
renderEncoder.drawIndexedPrimitives(type: .triangle,
                                    indexCount: submesh.indexCount,
                                    indexType: submesh.indexType,
                                    indexBuffer: submesh.indexBuffer.buffer,
                                    indexBufferOffset: 0)
renderEncoder.endEncoding()

guard let drawable = view.currentDrawable else { fatalError() }
commandBuffer.present(drawable)
commandBuffer.commit()
print("test")

PlaygroundPage.current.liveView = view

Crash report: https://gist.githubusercontent.com/tumdum/8aa53bc806619c0d21c93a55fae07937/raw/370b00c07b08fff8856f9fc678de9888faa8d06e/crash.log

I'm on macOS 15.1.1 (24B2091) + Xcode 16.2 (16C5032a)
0
1
246
3w
How to save a point cloud in the sample code "Capturing depth using the LiDAR camera" with the photoOutput
Hello dear community, I have the sample code from Apple, "CapturingDepthUsingLiDAR", to access the LiDAR on my iPhone 12 Pro. My goal is to use the photoOutput function to generate a point cloud from a single image and then save it as a .ply file. So far I have tested different approaches to create a .ply file from the depth map, the intrinsic camera data, and the RGBA values. Unfortunately, I have had no success so far; the result has always been an incorrect point cloud. My question now is whether there are existing approaches to this and whether anyone has experience with it. Thank you very much in advance!!!
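For reference, the usual unprojection is the pinhole model: with focal lengths fx, fy and principal point cx, cy, a pixel (u, v) with depth z (in meters) maps to x = (u - cx) * z / fx and y = (v - cy) * z / fy. A sketch under two assumptions: the depth map has already been copied out of the CVPixelBuffer into a Float32 array, and the intrinsics have been scaled from their reference dimensions to the depth map's resolution (a mismatch there is a common cause of distorted clouds).

import simd

// Unproject a depth map (meters, Float32, row-major) to 3D points using
// the pinhole camera model, then serialize them as an ASCII PLY string.
func makePLY(depth: [Float32], width: Int, height: Int,
             intrinsics: simd_float3x3) -> String {
    // AVCameraCalibrationData stores the intrinsic matrix column-major:
    // column 0 = (fx, 0, 0), column 1 = (0, fy, 0), column 2 = (cx, cy, 1).
    let fx = intrinsics[0][0], fy = intrinsics[1][1]
    let cx = intrinsics[2][0], cy = intrinsics[2][1]

    var points: [SIMD3<Float>] = []
    for v in 0..<height {
        for u in 0..<width {
            let z = depth[v * width + u]
            guard z.isFinite, z > 0 else { continue }   // skip invalid samples
            let x = (Float(u) - cx) * z / fx
            let y = (Float(v) - cy) * z / fy
            points.append(SIMD3(x, y, z))
        }
    }

    var ply = """
    ply
    format ascii 1.0
    element vertex \(points.count)
    property float x
    property float y
    property float z
    end_header

    """
    for p in points { ply += "\(p.x) \(p.y) \(p.z)\n" }
    return ply
}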
2
0
346
Jan ’25