Object Capture

Turn photos from your iPhone or iPad into high‑quality 3D models that are optimized for AR using the new Object Capture API on macOS Monterey.

Posts under the Object Capture tag

64 Posts

New Object Capture Sample Code
Hi, Object Capture's original sample code was released last year, and this year there was a talk about adding area mode to it. The talk links to the old Object Capture code. When can I expect the new version with area mode to be published, and is there anything I can do to help get it published faster? Thanks!
Replies: 2 · Boosts: 1 · Views: 44 · Activity: 9h
Website Header Disabled on MacBook
Hi, my name is Zubair and I am a blogger. I recently created a website about gold prices in Oman (omangoldprices.com). I built a well-designed header, footer, and everything else to match my competitors, but the header of my website is disabled and not working on MacBook and MacBook Pro, and the logo does not show on iOS (iPhone 6, iOS 12.6.1). I am very confused about this; can anyone help me?
Replies: 0 · Boosts: 0 · Views: 182 · Activity: 3w
Object Capture to USDZ Scaling
I'm trying to take an object capture and scale it. What I did so far: I created a Reality Composer project, inserted the .objcap file into the project, and scaled it from 100% to 200%. I then exported it as a USDZ, but now it won't show up in the Xcode preview, and I'm not sure why. Is there any way to fix this? I'm going crazy trying to find a fix. (A code-level alternative for applying the scale is sketched below.)
Replies: 1 · Boosts: 0 · Views: 354 · Activity: Apr ’24
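If the Reality Composer route keeps producing an empty preview, one alternative is to apply the scale at load time in RealityKit instead of baking it into the exported USDZ. This is only a minimal sketch, assuming the capture has already been exported as a plain USDZ; "model.usdz" is a placeholder name.

    import RealityKit

    // Minimal sketch, assuming the capture was exported as "model.usdz"
    // (placeholder name). Scaling the loaded entity avoids re-exporting
    // from Reality Composer at a different scale.
    do {
        let entity = try ModelEntity.loadModel(named: "model.usdz")
        entity.scale = SIMD3<Float>(repeating: 2.0) // 200%
    } catch {
        print("Failed to load model: \(error)")
    }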
Meet Object Capture for iOS
Hi,
We are using the "Scanning objects using Object Capture" sample app provided by Apple. It was working fine, but it suddenly started crashing while scanning an object with the bounding box settings. We are collecting device logs, but they show different reasons for the crash.
Device: iPad Pro / iPhone 15 Pro
iOS: 17.4
Attaching the device log for the crash; waiting for your response.
-------------------------------------
Translated Report (Full Report Below)
-------------------------------------
Incident Identifier: 0797ADAF-D653-4C92-8AA0-300AA167002B
CrashReporter Key: 48724a3b30ef15e069f513afff7d1aa2e935a520
Hardware Model: iPhone16,1
Process: GuidedCapture [918]
Path: /private/var/containers/Bundle/Application/ACCE6C58-98F0-4DD7-AA6E-732190E0FD30/GuidedCapture.app/GuidedCapture
Identifier: com.example.apple-samplecode.GuidedCaptureH28X75MLUY
Version: 1.0 (1)
Code Type: ARM-64 (Native)
Role: Foreground
Parent Process: launchd [1]
Coalition: com.example.apple-samplecode.GuidedCaptureH28X75MLUY [743]
Date/Time: 2024-03-26 18:23:07.8269 +0530
Launch Time: 2024-03-26 18:20:02.5964 +0530
OS Version: iPhone OS 17.4.1 (21E236)
Release Type: User
Baseband Version: 1.55.04
Report Version: 104
Exception Type: EXC_BREAKPOINT (SIGTRAP)
Exception Codes: 0x0000000000000001, 0x000000022e6577a4
Termination Reason: SIGNAL 5 Trace/BPT trap: 5
Terminating Process: exc handler [918]
Triggered by Thread: 33
Thread 33 name: Dispatch queue: com.apple.coreoc.queues.serial.session
Thread 33 Crashed:
0 CoreOC 0x22e6577a4 0x22e657400 + 932
1 CoreOC 0x22e657588 0x22e657401 + 391
2 CoreOC 0x22e697628 0x22e697221 + 1031
3 CoreOC 0x22e684864 0x22e684431 + 1075
4 CoreOC 0x22e6831ec 0x22e6828d5 + 2327
5 CoreOC 0x22e6c1fb4 0x22e6c1f5d + 87
6 CoreOC 0x22e5ef128 0x22e5ef105 + 35
7 libdispatch.dylib 0x19291113c _dispatch_call_block_and_release + 31
8 libdispatch.dylib 0x192912dd4 _dispatch_client_callout + 19
9 libdispatch.dylib 0x19291a400 _dispatch_lane_serial_drain + 747
10 libdispatch.dylib 0x19291af30 _dispatch_lane_invoke + 379
11 libdispatch.dylib 0x192925cb4 _dispatch_root_queue_drain_deferred_wlh + 287
12 libdispatch.dylib 0x192925528 _dispatch_workloop_worker_thread + 403
13 libsystem_pthread.dylib 0x1e69f8f20 _pthread_wqthread + 287
14 libsystem_pthread.dylib 0x1e69f8fc0 start_wqthread + 7
Replies: 0 · Boosts: 0 · Views: 335 · Activity: Mar ’24
Object Capture API Limitation Concerns
Hello, I'm currently building an app that implements the on-device Object Capture API to create 3D models. I have two concerns that I cannot find addressed anywhere on the internet:
1. Can on-device Object Capture be performed by devices without LiDAR? I understand that depth data is necessary for making scale-accurate models; if there is an option to disable it, where would one specify that in code?
2. Can models be exported to .obj instead of .usdz? WWDC21 (at 3:00) mentions that this is possible with the Apple Silicon API, but what about with on-device scanning?
I would be very grateful if anyone is knowledgeable enough to provide some insight. Thank you so much! (A device-support check is sketched below.)
Replies: 2 · Boosts: 0 · Views: 381 · Activity: Mar ’24
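Regarding the LiDAR question, a common pattern (shown below as a hedged sketch, not an official answer) is to gate the capture UI on ObjectCaptureSession.isSupported rather than checking for a specific sensor; the flag reports whether on-device capture can run at all on the current device.

    import RealityKit

    // Hedged sketch: gate the on-device capture flow on API support.
    // The fallback branch is a placeholder for a plain photo-capture flow
    // whose images could be reconstructed on a Mac instead.
    if ObjectCaptureSession.isSupported {
        // Present ObjectCaptureView(session:) here.
    } else {
        // Fall back to manual photo capture + off-device reconstruction.
    }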
Photogrammetry CLI Tool Error
Hi all, I took a bunch of photos using Apple's "Capture Sample" iOS app. Even though all the images are in .HEIC/HEIF file format, the CLI tool logs a bunch of the following errors, and I couldn't find any solution:
1. HEIF file is expected.
2. *** Assertion failure in OCReturn OCNonModularSPI_CMPhoto_readResolution(const OCHeicReadHandle, const NSURL *__strong, uint64_t *, uint64_t *)(), CMPhoto+NonModularSPI.m:1271
(A small diagnostic for spotting non-HEIC files is sketched below.)
Replies: 1 · Boosts: 1 · Views: 349 · Activity: Mar ’24
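Since the first error complains that a HEIF file is expected, one quick diagnostic (a sketch, not part of Apple's CLI tool; the folder path is a placeholder) is to scan the input folder for files whose extension is not .heic:

    import Foundation

    // Diagnostic sketch: list files in the capture folder that are not
    // .heic, since the CLI reports "HEIF file is expected".
    let inputFolder = URL(fileURLWithPath: "/path/to/Images")
    let files = try FileManager.default.contentsOfDirectory(
        at: inputFolder, includingPropertiesForKeys: nil)
    for url in files where url.pathExtension.lowercased() != "heic" {
        print("Non-HEIC file: \(url.lastPathComponent)")
    }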
Seeking Guidance on Extracting Point Cloud and Facial Measurements from Object Capture Scans
Hello Apple community, I am currently working with Object Capture and would appreciate some guidance on extracting specific data from the scans. I have successfully scanned objects, but I am now looking to obtain the point cloud and facial measurements from these scans. I have used https://developer.apple.com/documentation/RealityKit/guided-capture-sample as a reference for implementation.
Point cloud: How can I extract the point cloud data from my Object Capture scans? Are there any specific tools or methods recommended for this purpose?
Facial measurements: Is there a way to extract facial measurements accurately using Object Capture? Are there any built-in features or third-party tools that can assist with this?
I've explored the documentation, but I would greatly benefit from any insights, tips, or recommended workflows from the community. Your expertise is highly appreciated! Thank you in advance. (A point-cloud request is sketched below.)
Replies: 0 · Boosts: 0 · Views: 329 · Activity: Feb ’24
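For the point-cloud part of the question, newer OS releases (iOS 17 / macOS 14) add a .pointCloud request to PhotogrammetrySession. The sketch below assumes that API and a points property on the delivered cloud; treat both as assumptions to verify against the current SDK. Object Capture itself is not designed for facial measurement; ARKit face tracking is the usual starting point for that.

    import Foundation
    import RealityKit

    // Hedged sketch, assuming the .pointCloud request available from
    // iOS 17 / macOS 14. `imagesFolder` is a placeholder URL.
    func extractPointCloud(from imagesFolder: URL) async throws {
        let session = try PhotogrammetrySession(input: imagesFolder)
        try session.process(requests: [.pointCloud])
        for try await output in session.outputs {
            if case let .requestComplete(_, .pointCloud(cloud)) = output {
                print("Recovered \(cloud.points.count) points") // `points` is an assumption
            }
        }
    }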
Photogrammetry on iOS 17.0+ with TrueDepth
Hello, I came across the Object Capture for iOS example from WWDC23, which utilizes the LiDAR sensor. However, I'm interested in using the TrueDepth camera system instead. What I have tried is to save depth photos (.HEIC) to the Images/ folder (based on modifying the example below), which is hopefully used by the photogrammetry session. But I haven't been successful so far in starting the 3D reconstruction. Could there be something I've missed, or is the Object Capture sample code exclusively designed for LiDAR? Or maybe .HEIC is not the right format to use? Thank you for your assistance.

    import AVFoundation
    import UIKit

    class DepthPhotoCapture: NSObject, AVCapturePhotoCaptureDelegate {
        let photoOutput = AVCapturePhotoOutput()
        let captureSession = AVCaptureSession()

        override init() {
            super.init()
            setupCaptureSession()
        }

        func setupCaptureSession() {
            // Get the front camera (TrueDepth camera).
            guard let frontCamera = AVCaptureDevice.default(.builtInTrueDepthCamera, for: .video, position: .front) else {
                print("Unable to access front camera!")
                return
            }
            do {
                // Create an input object from the camera and add it to the session.
                let input = try AVCaptureDeviceInput(device: frontCamera)
                captureSession.addInput(input)
            } catch {
                print("Unable to create AVCaptureDeviceInput: \(error)")
            }
            // Add the photo output first: depth-data support is only reported
            // once the output is attached to a session with a depth-capable input.
            captureSession.addOutput(photoOutput)
            if photoOutput.isDepthDataDeliverySupported {
                photoOutput.isDepthDataDeliveryEnabled = true
            }
            // startRunning() blocks, so keep it off the main thread.
            DispatchQueue.global(qos: .userInitiated).async {
                self.captureSession.startRunning()
            }
        }

        func captureDepthPhoto() {
            // Request an HEVC-encoded photo with depth data attached.
            let photoSettings = AVCapturePhotoSettings(format: [AVVideoCodecKey: AVVideoCodecType.hevc])
            photoSettings.isDepthDataDeliveryEnabled = photoOutput.isDepthDataDeliveryEnabled
            photoOutput.capturePhoto(with: photoSettings, delegate: self)
        }

        // AVCapturePhotoCaptureDelegate callback: write the finished photo to disk.
        func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
            guard let imageData = photo.fileDataRepresentation() else {
                print("Error while generating image from photo capture data.")
                return
            }
            let documentsDirectory = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask).first!
            let imagesDirectory = documentsDirectory.appendingPathComponent("Images/")
            do {
                // Make sure the Images/ folder exists before writing into it.
                try FileManager.default.createDirectory(at: imagesDirectory, withIntermediateDirectories: true)
                let fileURL = imagesDirectory.appendingPathComponent(UUID().uuidString).appendingPathExtension("heic")
                try imageData.write(to: fileURL)
                print("Saved photo with depth data to \(fileURL)")
            } catch {
                print("Failed to write the image data to disk: \(error)")
            }
        }
    }
Replies: 1 · Boosts: 1 · Views: 480 · Activity: Jan ’24
iOS 17: Cannot find type 'PhotogrammetrySession' in scope (Object Capture)
Dear all,
I'm building a SwiftUI-based frontend for the RealityKit Object Capture feature. It is using the iOS+macOS (native, not Catalyst) template on Xcode 15.1. When compiling for macOS 14.2, everything works as expected. When compiling for iPadOS 17.2, I receive the compiler error message "Cannot find type 'PhotogrammetrySession' in scope". As minimum deployment target, I selected iOS 17 and macOS 14 (see attachment). An example of the lines of code causing errors is

    import RealityKit

    typealias FeatureSensitivity = PhotogrammetrySession.Configuration.FeatureSensitivity // error: Cannot find type PhotogrammetrySession in scope
    typealias LevelOfDetail = PhotogrammetrySession.Request.Detail // error: Cannot find type PhotogrammetrySession in scope

I made sure to wrap code that uses features unavailable on iOS (e.g. the .raw LOD setting) in #available checks. Is this an issue with Xcode, or am I missing some compatibility check? (One possible cause is sketched below.) The SDK clearly says

    /// Manages the creation of a 3D model from a set of images.
    ///
    /// For more information on using ``PhotogrammetrySession``, see
    /// <doc://com.apple.documentation/documentation/realitykit/creating-3d-objects-from-photographs>.
    @available(iOS 17.0, macOS 12.0, macCatalyst 15.0, *)
    @available(visionOS, unavailable)
    public class PhotogrammetrySession { /* ... */ }

Thanks for any help with this :) And greetings from Köln, Germany
~ Alex
Replies: 1 · Boosts: 0 · Views: 577 · Activity: Jan ’24
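One pitfall worth ruling out (a guess, not a confirmed diagnosis): PhotogrammetrySession is unavailable when building for the iOS simulator, which produces exactly this "cannot find type in scope" error even though the availability annotation looks right. A hedged sketch of the usual guard:

    import RealityKit

    // Hedged sketch: exclude the simulator, where the RealityKit
    // object-reconstruction types are not compiled in.
    #if !targetEnvironment(simulator)
    typealias FeatureSensitivity = PhotogrammetrySession.Configuration.FeatureSensitivity
    typealias LevelOfDetail = PhotogrammetrySession.Request.Detail
    #endif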
ObjectCaptureView presents a memory leak
I have an app which shows a screen using ObjectCaptureView, and each time the view appears and disappears, memory increases by around 400-500 MB. After checking the memory graph, I found that it was related to the SwiftUI view, which creates the ARKit view under the hood. To be sure that ObjectCaptureView was what had the memory leak, I commented out only the line that creates the ObjectCaptureView, keeping the rest of the logic to handle the session state, feedback, etc. (A minimal repro container is sketched below.)
Replies: 1 · Boosts: 1 · Views: 476 · Activity: Dec ’23
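To help isolate the leak, a minimal reproduction (a sketch; names are placeholders) is a container view whose body is nothing but the ObjectCaptureView, so an Instruments or memory-graph run attributes any growth to the view itself rather than to the surrounding session logic:

    import RealityKit
    import SwiftUI

    // Minimal repro sketch: no session handling, feedback, or overlays,
    // only the capture view, to isolate where the memory goes.
    struct CaptureContainer: View {
        let session: ObjectCaptureSession
        var body: some View {
            ObjectCaptureView(session: session)
        }
    }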
How to replicate ObjectCaptureSession's boundary restriction?
Hello, I want to use Apple's PhotogrammetrySession to scan a window. However, ObjectCaptureSession seems to be a monotasker and won't allow capture of anything but a small object on a flat surface. So, I need to manually feed data into PhotogrammetrySession. But when I do, it focuses way too much on the scene behind the window, sacrificing detail on the window itself. Is there a way for me to either coax ObjectCaptureSession into capturing an area on the wall, or to restrict PhotogrammetrySession's target bounding box manually? How does ObjectCaptureSession communicate the limited bounding box to PhotogrammetrySession? (One possible approach using object masks is sketched below.) Thanks, Sebastian
Replies: 1 · Boosts: 0 · Views: 420 · Activity: Dec ’23
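One avenue to explore when feeding PhotogrammetrySession manually (a hedged sketch; the pixel buffers are placeholders) is the objectMask property of PhotogrammetrySample, which marks the pixels that belong to the object so reconstruction can ignore the scene behind the window:

    import CoreVideo
    import RealityKit

    // Hedged sketch: attach a per-image object mask to each sample.
    // `imageBuffer` and `maskBuffer` are placeholder CVPixelBuffers;
    // the mask should be non-zero where the window is.
    func makeSample(id: Int, imageBuffer: CVPixelBuffer, maskBuffer: CVPixelBuffer) -> PhotogrammetrySample {
        var sample = PhotogrammetrySample(id: id, image: imageBuffer)
        sample.objectMask = maskBuffer
        return sample
    }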
Object Capture: Exporting model as OBJ.
Per https://developer.apple.com/documentation/realitykit/photogrammetrysession/request/modelfile(url:detail:geometry:), if the given path is a file path with a .usdz extension, the model will be saved as .usdz; otherwise, if we provide a folder, it will be saved as OBJ. I tried it, but with no luck. Right before saving, it shows the folder that the model will be saved to, but after I click Done and check the folder, it's always empty. (A directory-URL request is sketched below.)
Replies: 1 · Boosts: 1 · Views: 601 · Activity: Nov ’23
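For reference, a minimal request for OBJ output (a sketch under the assumption that the documented folder-URL behavior applies; the output path is a placeholder) makes sure the directory actually exists and is passed as a directory URL before processing:

    import Foundation
    import RealityKit

    // Hedged sketch: a directory URL (not a .usdz file path) should
    // produce OBJ output per the documentation. `outputDir` is a placeholder.
    func exportOBJ(from session: PhotogrammetrySession, to outputDir: URL) throws {
        try FileManager.default.createDirectory(at: outputDir, withIntermediateDirectories: true)
        try session.process(requests: [.modelFile(url: outputDir, detail: .medium)])
    }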
iOS 17 ObjectCaptureSession beginNewScanPassAfterFlip
I am trying to implement Object Capture using ObjectCaptureSession in iOS 17. I have been following the example supplied by Apple, but I cannot get the session object into the correct state to allow ObjectCaptureSession.beginNewScanPassAfterFlip() to be called. I get the following error when I call session.beginNewScanPassAfterFlip():

    Can't beginNewScanPassAfterFlip() from state == capturing
    Must be .paused from .capturing

To start with, there is no state of ObjectCaptureSession called .paused, so is this referring to .isPaused? I have tried using session.pause() and confirmed that it is paused via .isPaused, but I get the same error as above. I have checked the output of session.state and confirmed it is .capturing. I have put print statements in the example and confirmed that before session.beginNewScanPassAfterFlip() is called at line 104 of OnboardingButtonView, the state is .capturing. This goes against the documentation on this page: https://developer.apple.com/documentation/realitykit/objectcapturesession/beginnewscanpassafterflip() Note, I have also tried pausing the session and then calling beginNewScanPassAfterFlip(), but this results in another warning. I am hoping for some clarification in case there is something I am missing.
Replies: 0 · Boosts: 0 · Views: 460 · Activity: Nov ’23
Does ObjectCaptureSession.startDetecting() output BoundingBox properties
I was able to run "Scanning objects using Object Capture" and get the bounding box after tapping the "Continue" button. Does the code output the BoundingBox properties as well, for example, the vertices of all corners of the bounding box? The Apple documentation says: "If success, the session state will eventually transition to .detecting and switch to the bounding box selection mode UI." What UI is that? Does anyone have a link I can look into? Thanks!
Replies: 0 · Boosts: 0 · Views: 450 · Activity: Nov ’23