Meet Object Capture for iOS


Discuss the WWDC23 Session Meet Object Capture for iOS


Posts under wwdc2023-10191 tag

47 Posts
Post marked as solved
1 Reply
453 Views
Hi, I'm watching https://developer.apple.com/videos/play/wwdc2023/10191 and would like to generate a high-detail object, but it looks like that isn't possible on iOS yet. However, the project sets configuration.isOverCaptureEnabled = true, which captures additional images for later transfer to macOS. Is there a way to get those images off the phone? Thanks, Pitt
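For context, here is a minimal sketch of how the over-capture flag and the images directory fit together in an ObjectCaptureSession. The folder names ("Scans/Images") are illustrative assumptions, not the sample app's actual paths.

import RealityKit
import Foundation

// A minimal sketch (not the full sample app): start an ObjectCaptureSession with
// over-capture enabled and point it at a folder inside the app's Documents
// directory so the HEIC shots can be pulled off the device afterwards.
// The folder names are illustrative assumptions.
@MainActor
func startCapture() throws -> ObjectCaptureSession {
    let documents = try FileManager.default.url(
        for: .documentDirectory, in: .userDomainMask,
        appropriateFor: nil, create: true)
    let imagesDirectory = documents.appendingPathComponent("Scans/Images/", isDirectory: true)
    try FileManager.default.createDirectory(at: imagesDirectory, withIntermediateDirectories: true)

    var configuration = ObjectCaptureSession.Configuration()
    configuration.isOverCaptureEnabled = true   // capture extra shots for macOS reconstruction

    let session = ObjectCaptureSession()
    // Captured images, including the additional over-capture shots,
    // are written into imagesDirectory as HEIC files.
    session.start(imagesDirectory: imagesDirectory, configuration: configuration)
    return session
}

If the folder lives in Documents, one way to get it off the phone is to enable UIFileSharingEnabled and LSSupportsOpeningDocumentsInPlace in Info.plist so it shows up in the Files app, or to expose it with a ShareLink.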
Posted
by pitt500.
Last updated
.
Post not yet marked as solved
2 Replies
606 Views
Hi, In the Scanning objects using Object Capture sample project, the AppDataModel is always retained when the content view is dismissed, and its deinit is never called.
@StateObject var appModel: AppDataModel = AppDataModel.instance
I am presenting the ContentView using a UIHostingController:
let hostingController = UIHostingController(rootView: ContentView())
hostingController.modalPresentationStyle = .fullScreen
present(hostingController, animated: true)
I have tried manually detaching the listeners and setting the objectCaptureSession to nil. In the debug memory graph, a coaching overlay is retaining the AppDataModel. I want the appModel released from memory when the ContentView is dismissed. Any suggestions?
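Note that AppDataModel.instance in the sample is a static singleton, and a static singleton stays alive for the lifetime of the process, so its deinit can never run no matter how many listeners are detached. Below is a minimal sketch that assumes two modifications to Apple's sample (both assumptions, not how it ships): AppDataModel is no longer a singleton, and ContentView reads the model from the environment instead of AppDataModel.instance.

import SwiftUI
import UIKit

// A sketch only: CaptureCoordinator is a hypothetical wrapper that owns one
// AppDataModel per capture flow. It assumes AppDataModel has been changed from
// a static singleton into an ordinary ObservableObject with an accessible
// initializer, and that ContentView reads it via @EnvironmentObject.
final class CaptureCoordinator {
    private var hostingController: UIHostingController<AnyView>?

    func presentCapture(from presenter: UIViewController) {
        let model = AppDataModel()                       // no longer a singleton (assumed change)
        let host = UIHostingController(
            rootView: AnyView(ContentView().environmentObject(model)))
        host.modalPresentationStyle = .fullScreen
        presenter.present(host, animated: true)
        hostingController = host
    }

    func dismissCapture() {
        hostingController?.dismiss(animated: true) { [weak self] in
            // Releasing the hosting controller drops the view hierarchy and,
            // with it, the only strong reference to the model, so deinit can fire.
            self?.hostingController = nil
        }
    }
}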
Posted
by gebs.
Last updated
.
Post marked as solved
4 Replies
1.7k Views
I am currently developing a mobile and server-side application using the new ObjectCaptureSession on iOS and PhotogrammetrySession on macOS, and I have two questions about the newly updated APIs.
From the WWDC23 session "Meet Object Capture for iOS", I know that the Object Capture API uses point cloud data captured by the iPhone's LiDAR sensor. I want to know how to take the point cloud data captured on iPhone by ObjectCaptureSession and use it to create 3D models with PhotogrammetrySession on macOS. From the WWDC21 example code, I know that PhotogrammetrySession uses the depth map from captured photos by embedding it into the HEIC image, and uses that data to create a 3D asset on macOS. Is the point cloud data also embedded into the image for use during reconstruction, and if not, how else is it supplied to the reconstruction?
My other question: I know that point cloud data is returned as a result of a PhotogrammetrySession.Request. Is that point cloud data the same set of data captured during ObjectCaptureSession (WWDC23) that is used to create the ObjectCapturePointCloudView?
Thank you to everyone for the help in advance. It's a real pleasure to be developing with all the updates to RealityKit and the Object Capture API.
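For the server side, a minimal macOS sketch of reconstructing from the folder of HEIC images copied off the device follows. Whether the framework consumes any embedded depth or point-cloud payload happens internally and is not exposed through this API; the folder path and detail level here are placeholders.

import RealityKit
import Foundation

// A minimal macOS-side sketch: reconstruct a model from the folder of HEIC
// images copied off the iPhone.
func reconstruct(imagesFolder: URL, outputModel: URL) async throws {
    var configuration = PhotogrammetrySession.Configuration()
    configuration.featureSensitivity = .normal

    let session = try PhotogrammetrySession(input: imagesFolder,
                                            configuration: configuration)
    try session.process(requests: [
        .modelFile(url: outputModel, detail: .full)
    ])

    for try await output in session.outputs {
        switch output {
        case .processingComplete:
            print("Reconstruction finished: \(outputModel.path)")
        case .requestError(let request, let error):
            print("Request \(request) failed: \(error)")
        case .requestProgress(_, let fraction):
            print("Progress: \(fraction)")
        default:
            break
        }
    }
}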
Posted
by KKodiac.
Last updated
.
Post not yet marked as solved
1 Reply
798 Views
When I install and run the sample app Apple released just recently, everything works fine up until I try to start the capture. The bounding box sets up without a problem, but then, every time, this error occurs:
*** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '*** -[AVCapturePhotoOutput capturePhotoWithSettings:delegate:] You are not authorized to use custom shutter sounds'
*** First throw call stack: (0x19d6e8300 0x195cd4f30 0x1b9bfdcb4 0x1cc4fbf98 0x1cc432964 0x19d6e8920 0x19d70552c 0x1cc4328f8 0x1cc4a8fac 0x19d6e8920 0x19d70552c 0x1cc4a8e44 0x23634923c 0x23637abfc 0x2362d21a4 0x2362d139c 0x236339874 0x23636dc04 0x1a67f9b74 0x1a68023ac 0x1a67fa964 0x1a67faa78 0x1a67fa5d0 0x1039c6b34 0x1039d80b4 0x1a6800188 0x1a67f94bc 0x1a67f9fd0 0x1a6800098 0x1a67f9504 0x23633777c 0x23637201c 0x2354d081c 0x2354c8658 0x1039c6b34 0x1039c9c20 0x1039e1078 0x1039dfacc 0x1039d6ebc 0x1039d6ba0 0x19d774e94 0x19d758594 0x19d75cda0 0x1df4c0224 0x19fbcd154 0x19fbccdb8 0x1a142f1a8 0x1a139df2c 0x1a1387c1c 0x102a5d944 0x102a5d9f4 0x1c030e4f8)
libc++abi: terminating due to uncaught exception of type NSException
I have no idea why this is happening, so any help would be appreciated. My iPad is running the latest iPadOS 17 beta, and the crash also occurs when it isn't connected to Xcode.
Posted
by chart_s.
Last updated
.
Post not yet marked as solved
1 Reply
461 Views
When running the code from the Object Capture session at WWDC23, I'm currently getting the error: "dyld[607]: Symbol not found: _$s21DeveloperToolsSupport15PreviewRegistryPAAE7previewAA0D0VvgZ Referenced from: <411AA023-A110-33EA-B026-D0103BAE08B6> /private/var/containers/Bundle/Application/9E9526BF-C163-420D-B6E0-2DC9E02B3F7E/ObjectCapture.app/ObjectCapture Expected in: <0BD6AC59-17BF-3B07-8C7F-6D9D25E0F3AD> /System/Library/Frameworks/DeveloperToolsSupport.framework/DeveloperToolsSupport"
Posted
by mattura.
Last updated
.
Post not yet marked as solved
3 Replies
961 Views
Now that we have the Vision Pro, I really want to start using Apple's Object Capture API to transform real objects into 3D assets. I watched the latest Object Capture video from WWDC23 and noticed they were using a "sample app". Does Apple provide this sample app to visionOS developers, or do we have to build our own iOS app? Thanks and cheers!
Posted Last updated
.
Post not yet marked as solved
3 Replies
727 Views
Hi. Each time I try to capture an object using the example from the session https://developer.apple.com/videos/play/wwdc2023/10191, I get a crash. iPhone 14 Pro Max, iOS 17 beta 3, Xcode Version 15.0 beta 3 (15A5195k). Log:
ObjectCaptureSession.: mobileSfM pose for the new camera shot is not consistent.
<<<< PlayerRemoteXPC >>>> fpr_deferPostNotificationToNotificationQueue signalled err=-12785 (kCMBaseObjectError_Invalidated) (item invalidated) at FigPlayer_RemoteXPC.m:829
Compiler failed with XPC_ERROR_CONNECTION_INTERRUPTED
Compiler failed with XPC_ERROR_CONNECTION_INTERRUPTED
Compiler failed with XPC_ERROR_CONNECTION_INTERRUPTED
MTLCompiler: Compilation failed with XPC_ERROR_CONNECTION_INTERRUPTED on 3 try
/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShaders/MPSCore/Utility/MPSLibrary.mm:485: failed assertion `MPSLibrary::MPSKey_Create internal error: Unable to get MPS kernel NDArrayMatrixMultiplyNNA14_EdgeCase. Error: Compiler encountered an internal error '
/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShaders/MPSCore/Utility/MPSLibrary.mm, line 485: error ''
Posted Last updated
.
Post marked as solved
2 Replies
728 Views
Hello, I am testing the updated PhotogrammetrySession with a test iOS app I created that uses the ObjectCaptureSession announced at this year's WWDC23 session "Meet Object Capture for iOS". I currently have server-side code using RealityKit's PhotogrammetrySession on macOS 14.0 beta, which means that PhotogrammetrySession should be able to utilize the point cloud data captured during ObjectCaptureSession in the iOS app (as announced during the WWDC23 session). My expectation was that the point cloud captured during ObjectCaptureSession was embedded into the image, so that I only needed to import the HEIC image files for PhotogrammetrySession on macOS. However, I came across the following warning message for all of my input images:
Image Folder Reader: Cannot read temporal depth point clouds of sample (id = 20)
The thing to note is, when I run the same PhotogrammetrySession on iOS, the point cloud data seems to be processed just fine. After digging into the hex of a HEIC image captured during ObjectCaptureSession, I came across the following item entries:
mimeapplication/rdf+xml infe 6hvc1<infe 7uri octag:com:apple:oc:cameraTrackingState>infe 8uri octag:com:apple:oc:cameraCalibrationData=infe 9uri octag:com:apple:oc:2022:objectTransform:infe :uri octag:com:apple:oc:objectBoundingBox9infe ;uri octag:com:apple:oc:rawFeaturePoints7infe <uri octag:com:apple:oc:pointCloudData0infe =uri octag:com:apple:oc:version2infe >uri octag:com:apple:oc:segmentID=infe ?uri octag:com:apple:oc:wideToDepthTransform
which, to me, looks like the location of the data (including the point cloud data) in each captured HEIC image. So, the question is: is it possible for me to access these items, read them, and send their data to the server-side PhotogrammetrySession to be processed alongside the respective HEIC image? Or am I getting this completely wrong?
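As far as I know there is no public API for decoding the octag:com:apple:oc:* items directly. Below is a diagnostic sketch only: it lists what ImageIO exposes for a captured HEIC (standard metadata keys and depth auxiliary data), which can at least confirm what is reachable without private formats.

import ImageIO
import Foundation

// A diagnostic sketch: list the metadata and auxiliary-data types that ImageIO
// exposes for a HEIC captured by ObjectCaptureSession. This does not decode
// Apple's proprietary octag:com:apple:oc:* payloads.
func inspectCapturedImage(at url: URL) {
    guard let source = CGImageSourceCreateWithURL(url as CFURL, nil) else {
        print("Could not open \(url.path)")
        return
    }
    if let properties = CGImageSourceCopyPropertiesAtIndex(source, 0, nil) as? [CFString: Any] {
        print("Image property keys:", properties.keys)
    }
    // Depth data, if present in a standard form, is exposed as auxiliary data.
    if let depth = CGImageSourceCopyAuxiliaryDataInfoAtIndex(
        source, 0, kCGImageAuxiliaryDataTypeDepth) {
        print("Found depth auxiliary data:", depth)
    } else {
        print("No standard depth auxiliary data found.")
    }
}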
Posted
by KKodiac.
Last updated
.
Post not yet marked as solved
0 Replies
474 Views
In the WWDC 2021 session, it says: 'we also offer an interface for advanced workflows to provide a sequence of custom samples. A PhotogrammetrySample includes the image plus other optional data such as a depth map, gravity vector, or custom segmentation mask.' But in the sample code, PhotogrammetrySession is initialized with a directory of saved data. How can I give a sequence of PhotogrammetrySamples as input to a PhotogrammetrySession?
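RealityKit does provide a PhotogrammetrySession initializer that takes a sequence of PhotogrammetrySample values instead of a folder URL (on macOS). A minimal sketch follows; loadPixelBuffer and loadDepthMap are hypothetical helpers, stubbed out here.

import RealityKit
import CoreVideo
import Foundation

// Hypothetical helpers, stubbed: decode a photo / its depth map into CVPixelBuffers
// (e.g. via ImageIO or AVDepthData) in a real implementation.
func loadPixelBuffer(_ url: URL) -> CVPixelBuffer? { nil }
func loadDepthMap(_ url: URL) -> CVPixelBuffer? { nil }

// A minimal sketch of the advanced input path: build PhotogrammetrySample values
// yourself and hand the sequence to PhotogrammetrySession instead of a folder URL.
func makeSession(from imageURLs: [URL]) throws -> PhotogrammetrySession {
    let samples = imageURLs.enumerated().compactMap { (index, url) -> PhotogrammetrySample? in
        guard let pixelBuffer = loadPixelBuffer(url) else { return nil }
        var sample = PhotogrammetrySample(id: index, image: pixelBuffer)
        if let depth = loadDepthMap(url) {
            // Optional extras such as a depth map (or gravity vector, mask)
            // can be attached per sample when available.
            sample.depthDataMap = depth
        }
        return sample
    }
    return try PhotogrammetrySession(input: samples,
                                     configuration: PhotogrammetrySession.Configuration())
}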
Posted
by HeoJin.
Last updated
.
Post not yet marked as solved
2 Replies
539 Views
Hi there, Just wondering when the sample project will be available. I am having trouble getting anything good out of the snippets and want to see the workings of the full project. Where/when can we get this?
Posted Last updated
.
Post marked as solved
1 Reply
895 Views
I saw in the WWDC23 session "Meet Object Capture for iOS" that the new tool released today along with Xcode 15 beta 2, "Reality Composer Pro", will be capable of creating 3D models with Apple's PhotogrammetrySession. However, I do not see any such feature in the tool. Has anyone managed to find the feature for creating 3D models as shown in the session?
Posted
by KKodiac.
Last updated
.
Post not yet marked as solved
1 Reply
765 Views
I've saved a bunch of images on an iPhone XS without using the ObjectCapture API, since it's not supported there. Then I tried to use PhotogrammetrySession, but it fails with:
Error: The operation couldn't be completed. (CoreOC.PhotogrammetrySession.Error error 3.)
CorePG: Initialization error = sessionError(reason: "CPGSessionCreate failed!")
Internal photogrammetry session init failed for checkpointDirectory =
Any idea why this would be the case? I managed to use my iPhone 14 Pro to successfully create a USDZ file with on-device photogrammetry using only images and no LiDAR, but it seems that it's not working on the iPhone XS. Are there restrictions on PhotogrammetrySession? I know it's iPhone 12 Pro and up for Object Capture, but what about photogrammetry on iOS 17? Thanks!
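One thing worth checking before creating a session is the runtime support check; a short sketch, assuming the isSupported type properties available in iOS 17:

import RealityKit

// A short runtime check (assuming the isSupported type properties in iOS 17):
// older devices can take photos, but may not support guided capture or
// on-device reconstruction.
func logCaptureSupport() {
    // Guided capture requires a LiDAR-equipped device.
    print("ObjectCaptureSession supported:", ObjectCaptureSession.isSupported)
    // On-device reconstruction has its own hardware requirements.
    print("PhotogrammetrySession supported:", PhotogrammetrySession.isSupported)
}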
Posted Last updated
.
Post not yet marked as solved
1 Reply
530 Views
I found that this new API works well in the beta. But I want more customizable settings, so I'd like to build my own capture with AVFoundation. I want to know the AVCaptureSession and AVCapturePhotoSettings configuration that this new API uses.
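ObjectCaptureSession does not expose its internal AVCaptureSession or photo settings, so anything here is an approximation. Below is a hedged AVFoundation sketch that captures HEVC-encoded HEIC stills with depth delivery from the LiDAR depth camera, roughly comparable output, not the framework's actual configuration.

import AVFoundation

// An approximation only: a standard AVFoundation photo pipeline with depth data
// delivery. All settings here are assumptions, not ObjectCaptureSession's
// internal configuration.
func makeCaptureSession() throws -> (AVCaptureSession, AVCapturePhotoOutput) {
    let session = AVCaptureSession()
    session.sessionPreset = .photo

    guard let device = AVCaptureDevice.default(.builtInLiDARDepthCamera,
                                               for: .video, position: .back) else {
        throw NSError(domain: "Capture", code: -1,
                      userInfo: [NSLocalizedDescriptionKey: "No LiDAR depth camera available"])
    }
    let input = try AVCaptureDeviceInput(device: device)
    if session.canAddInput(input) { session.addInput(input) }

    let photoOutput = AVCapturePhotoOutput()
    if session.canAddOutput(photoOutput) { session.addOutput(photoOutput) }
    photoOutput.isDepthDataDeliveryEnabled = photoOutput.isDepthDataDeliverySupported
    return (session, photoOutput)
}

// Per-shot settings: HEVC-encoded HEIC with embedded depth, mirroring
// (approximately) the HEIC-plus-depth output the capture produces.
func makePhotoSettings(for output: AVCapturePhotoOutput) -> AVCapturePhotoSettings {
    let settings = AVCapturePhotoSettings(format: [AVVideoCodecKey: AVVideoCodecType.hevc])
    settings.isDepthDataDeliveryEnabled = output.isDepthDataDeliveryEnabled
    settings.embedsDepthDataInPhoto = true
    return settings
}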
Posted
by HeoJin.
Last updated
.
Post not yet marked as solved
0 Replies
793 Views
I am currently developing a mobile and server-side application using the new ObjectCaptureSession on iOS and PhotogrammetrySession on macOS. From the session "Meet Object Capture for iOS", I understand that the API now accepts point cloud data from the iPhone's LiDAR sensor to create 3D assets. However, I was not able to find anything in the official Apple documentation for RealityKit and Object Capture that explains how to utilize the point cloud data when creating the session. I have two questions about this API.
The original example from the documentation explains how to utilize the depth map from a captured image by embedding it into the HEIC image. This made me assume that PhotogrammetrySession also uses point cloud data embedded in the photo. Is this correct?
I would also like to use the photos (and point cloud data) captured on iOS with PhotogrammetrySession on macOS for full model detail. I know that PhotogrammetrySession provides a point cloud request result. Will that output be the same as the point cloud captured on-device by ObjectCaptureSession?
Thanks everyone in advance, and it's been a real pleasure working with the updated Object Capture APIs.
Posted
by KKodiac.
Last updated
.