Meet Object Capture for iOS


Discuss the WWDC23 session "Meet Object Capture for iOS".


Posts under wwdc2023-10191 tag

47 Posts
Post marked as solved
2 Replies
747 Views
Hello, I am testing the updated PhotogrammetrySession with a test iOS app I created that uses the ObjectCaptureSession announced at this year's WWDC23 session "Meet Object Capture for iOS". I have server-side code using RealityKit's PhotogrammetrySession on macOS 14.0 beta, which should therefore be able to use the point cloud data captured during the ObjectCaptureSession in the iOS app (as announced during the WWDC23 session). My expectation was that the point cloud captured during ObjectCaptureSession is embedded in the images, so that I only need to transfer the HEIC image files for the PhotogrammetrySession on macOS to use.
However, I get the following warning for all of my input images: "Image Folder Reader: Cannot read temporal depth point clouds of sample (id = 20)". The notable thing is that when I run the same PhotogrammetrySession on iOS, the point cloud data seems to be processed just fine.
After inspecting the hex dump of a HEIC image captured during ObjectCaptureSession, I found item entries ("infe" boxes) referencing the following URIs, alongside the hvc1 image item and an application/rdf+xml item:
octag:com:apple:oc:cameraTrackingState
octag:com:apple:oc:cameraCalibrationData
octag:com:apple:oc:2022:objectTransform
octag:com:apple:oc:objectBoundingBox
octag:com:apple:oc:rawFeaturePoints
octag:com:apple:oc:pointCloudData
octag:com:apple:oc:version
octag:com:apple:oc:segmentID
octag:com:apple:oc:wideToDepthTransform
To me these look like the items that hold the point cloud (and related) data for each captured HEIC image. So, the question is: can I access these items, read them, and send their data to the server-side PhotogrammetrySession to be processed alongside the corresponding HEIC image? Or am I getting this completely wrong?
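One way to see what is publicly readable from those files is to inspect them with ImageIO. This is only a sketch under the assumption that you have a captured HEIC locally (the path in the usage comment is hypothetical), and it may well not surface the proprietary octag:com:apple:oc:* items at all:

```swift
import Foundation
import ImageIO

// Print whatever metadata ImageIO exposes for one captured HEIC file. The proprietary
// Object Capture items may not appear here; this only shows the public dictionaries.
func dumpReadableMetadata(of url: URL) {
    guard let source = CGImageSourceCreateWithURL(url as CFURL, nil) else { return }
    // Container-level properties of the HEIC file
    if let fileProperties = CGImageSourceCopyProperties(source, nil) as? [String: Any] {
        print("File property keys:", fileProperties.keys)
    }
    // Per-image properties (Exif, TIFF, HEIC dictionaries, ...)
    if let imageProperties = CGImageSourceCopyPropertiesAtIndex(source, 0, nil) as? [String: Any] {
        print("Image property keys:", imageProperties.keys)
    }
}

// Usage (hypothetical path):
// dumpReadableMetadata(of: URL(fileURLWithPath: "/path/to/IMG_0001.HEIC"))
```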
Post not yet marked as solved
2 Replies
552 Views
Hi there, just wondering when the sample project will be available. I am having trouble getting anything good out of the code snippets and want to see the workings of the full project. Where and when can we get it?
Post not yet marked as solved
2 Replies
984 Views
With AVFoundation's builtInLiDARDepthCamera, if I save photo.fileDataRepresentation() to HEIC, the file only contains Exif and TIFF metadata. But the HEIC images from RealityKit's Object Capture contain not only Exif and TIFF but also HEIC-level metadata, including camera calibration data. What should I do so that the image exported from AVFoundation carries the same metadata?
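For the AVFoundation side, a hedged sketch of the closest publicly documented route: with depth delivery enabled, the delivered AVDepthData carries an AVCameraCalibrationData. How you then persist or embed that calibration next to the HEIC is left open here; this is not a claim that fileDataRepresentation() will gain the same metadata Object Capture writes:

```swift
import AVFoundation

// Request depth delivery so the captured AVCapturePhoto carries AVDepthData and,
// with it, the camera calibration data. Assumes a capture session already configured
// with a depth-capable device such as builtInLiDARDepthCamera.
final class CalibrationCapture: NSObject, AVCapturePhotoCaptureDelegate {
    func capture(with output: AVCapturePhotoOutput) {
        if output.isDepthDataDeliverySupported {
            output.isDepthDataDeliveryEnabled = true   // must be enabled on the output first
        }
        let settings = AVCapturePhotoSettings(format: [AVVideoCodecKey: AVVideoCodecType.hevc])
        settings.isDepthDataDeliveryEnabled = output.isDepthDataDeliveryEnabled
        output.capturePhoto(with: settings, delegate: self)
    }

    func photoOutput(_ output: AVCapturePhotoOutput,
                     didFinishProcessingPhoto photo: AVCapturePhoto,
                     error: Error?) {
        // Calibration data rides along with the depth map when depth delivery is on.
        if let calibration = photo.depthData?.cameraCalibrationData {
            print("Intrinsic matrix:", calibration.intrinsicMatrix)
        }
        // photo.fileDataRepresentation() still contains only the standard metadata here;
        // embedding the calibration into the file would be a separate step.
    }
}
```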
Post not yet marked as solved
0 Replies
485 Views
In the WWDC 2021 session it says: "we also offer an interface for advanced workflows to provide a sequence of custom samples. A PhotogrammetrySample includes the image plus other optional data such as a depth map, gravity vector, or custom segmentation mask." But in the code I have seen, PhotogrammetrySession is initialized with a directory of saved data. How can I provide PhotogrammetrySamples as input to a PhotogrammetrySession?
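For reference, a sketch of the sample-based input path on macOS (the Sequence-of-samples initializer is, as far as I know, not available on iOS); the imageBuffers and depthMaps parameters are hypothetical inputs you would decode yourself from your captured data:

```swift
import RealityKit
import CoreVideo

// Build PhotogrammetrySample values yourself and hand the session a Sequence of them
// instead of a folder URL.
func makeSession(imageBuffers: [CVPixelBuffer],
                 depthMaps: [CVPixelBuffer?]) throws -> PhotogrammetrySession {
    var samples: [PhotogrammetrySample] = []
    for (index, image) in imageBuffers.enumerated() {
        var sample = PhotogrammetrySample(id: index, image: image)
        sample.depthDataMap = depthMaps[index]      // optional extras: depth, gravity, objectMask, ...
        samples.append(sample)
    }
    // This initializer accepts any Sequence whose Element is PhotogrammetrySample.
    return try PhotogrammetrySession(input: samples,
                                     configuration: PhotogrammetrySession.Configuration())
}
```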
Post not yet marked as solved
1 Reply
815 Views
When I install and run the sample app Apple released just recently, everything works fine up until I try to start the capture. The bounding box sets up without a problem, but then every time this error occurs:
*** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '*** -[AVCapturePhotoOutput capturePhotoWithSettings:delegate:] You are not authorized to use custom shutter sounds'
*** First throw call stack:
(0x19d6e8300 0x195cd4f30 0x1b9bfdcb4 0x1cc4fbf98 0x1cc432964 0x19d6e8920 0x19d70552c 0x1cc4328f8 0x1cc4a8fac 0x19d6e8920 0x19d70552c 0x1cc4a8e44 0x23634923c 0x23637abfc 0x2362d21a4 0x2362d139c 0x236339874 0x23636dc04 0x1a67f9b74 0x1a68023ac 0x1a67fa964 0x1a67faa78 0x1a67fa5d0 0x1039c6b34 0x1039d80b4 0x1a6800188 0x1a67f94bc 0x1a67f9fd0 0x1a6800098 0x1a67f9504 0x23633777c 0x23637201c 0x2354d081c 0x2354c8658 0x1039c6b34 0x1039c9c20 0x1039e1078 0x1039dfacc 0x1039d6ebc 0x1039d6ba0 0x19d774e94 0x19d758594 0x19d75cda0 0x1df4c0224 0x19fbcd154 0x19fbccdb8 0x1a142f1a8 0x1a139df2c 0x1a1387c1c 0x102a5d944 0x102a5d9f4 0x1c030e4f8)
libc++abi: terminating due to uncaught exception of type NSException
I have no idea why this is happening, so any help would be appreciated. My iPad is running the latest iPadOS 17 beta, and the crash also occurs when it isn't connected to Xcode.
Post not yet marked as solved
2 Replies
958 Views
Is it possible to capture only manually (with automatic capture off) using the Object Capture API? And can I proceed to the capturing stage right away? Only the Object Capture API produces a real-scale object. Using AVFoundation or ARKit, I've tried capturing HEVC with LiDAR and creating PhotogrammetrySamples, but neither produces a real-scale object. I think that during Object Capture the API gathers the point cloud and intrinsic parameters, and that helps the mesh come out at real scale. Does anyone know about "Object Capture with only manual capturing" or "capturing with AVFoundation for a real-scale mesh"?
Post not yet marked as solved
3 Replies
751 Views
Hi. Each time I try to capture an object using the example from session https://developer.apple.com/videos/play/wwdc2023/10191 I get a crash. iPhone 14 Pro Max, iOS 17 beta 3, Xcode Version 15.0 beta 3 (15A5195k). Log:
ObjectCaptureSession.: mobileSfM pose for the new camera shot is not consistent.
<<<< PlayerRemoteXPC >>>> fpr_deferPostNotificationToNotificationQueue signalled err=-12785 (kCMBaseObjectError_Invalidated) (item invalidated) at FigPlayer_RemoteXPC.m:829
Compiler failed with XPC_ERROR_CONNECTION_INTERRUPTED
Compiler failed with XPC_ERROR_CONNECTION_INTERRUPTED
Compiler failed with XPC_ERROR_CONNECTION_INTERRUPTED
MTLCompiler: Compilation failed with XPC_ERROR_CONNECTION_INTERRUPTED on 3 try
/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShaders/MPSCore/Utility/MPSLibrary.mm:485: failed assertion `MPSLibrary::MPSKey_Create internal error: Unable to get MPS kernel NDArrayMatrixMultiplyNNA14_EdgeCase. Error: Compiler encountered an internal error '
/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShaders/MPSCore/Utility/MPSLibrary.mm, line 485: error ''
Post not yet marked as solved
1 Reply
715 Views
I am trying the demo code from https://developer.apple.com/documentation/realitykit/guided-capture-sample
macOS: 13.4.1 (22F82)
Xcode: 15 beta 4
iPadOS: 17.0 public beta
iPad: Pro 11-inch, 2nd generation (has a LiDAR scanner)
But I get a runtime error: "Thread 1: Fatal error: ObjectCaptureSession is not supported on this device!"
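A minimal sketch of the guard the sample presumably hits: check the ObjectCaptureSession support flag before constructing a session. The reading that the 2020 iPad Pro falls outside the supported hardware despite having a LiDAR scanner is an assumption, not something this snippet verifies:

```swift
import RealityKit

// Only create the session when the current device supports Object Capture.
@available(iOS 17.0, *)
func makeSessionIfSupported() -> ObjectCaptureSession? {
    guard ObjectCaptureSession.isSupported else {
        print("Object Capture is not supported on this device.")
        return nil
    }
    return ObjectCaptureSession()
}
```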
Post marked as solved
1 Reply
462 Views
Hi, I'm watching https://developer.apple.com/videos/play/wwdc2023/10191 and would like to generate a high-detail object, but it looks like that is not possible on iOS yet. However, the project sets configuration.isOverCaptureEnabled = true, which captures additional images for later transfer to macOS. Is there a way to get those images off the phone? Thanks, Pitt
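A sketch under one assumption: the session writes its HEIC shots into whatever imagesDirectory the app passes to start(imagesDirectory:configuration:), so that folder can be enumerated after the capture and its files uploaded or AirDropped to a Mac:

```swift
import Foundation
import RealityKit

// Start a capture session that writes its images into the app's Documents folder,
// with over-capture enabled so the extra shots are kept for macOS reconstruction.
@available(iOS 17.0, *)
func startCapture(_ session: ObjectCaptureSession) throws -> URL {
    let imagesDirectory = URL.documentsDirectory.appending(path: "Scans/Images/")
    try FileManager.default.createDirectory(at: imagesDirectory,
                                            withIntermediateDirectories: true)

    var configuration = ObjectCaptureSession.Configuration()
    configuration.isOverCaptureEnabled = true      // keep the extra shots for macOS
    session.start(imagesDirectory: imagesDirectory, configuration: configuration)
    return imagesDirectory                          // enumerate this folder when finished
}
```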
Post not yet marked as solved
1 Reply
476 Views
When running the code from the Object Capture session at WWDC23 I'm currently getting this error:
dyld[607]: Symbol not found: _$s21DeveloperToolsSupport15PreviewRegistryPAAE7previewAA0D0VvgZ
Referenced from: <411AA023-A110-33EA-B026-D0103BAE08B6> /private/var/containers/Bundle/Application/9E9526BF-C163-420D-B6E0-2DC9E02B3F7E/ObjectCapture.app/ObjectCapture
Expected in: <0BD6AC59-17BF-3B07-8C7F-6D9D25E0F3AD> /System/Library/Frameworks/DeveloperToolsSupport.framework/DeveloperToolsSupport
Post not yet marked as solved
2 Replies
621 Views
Hi, in the "Scanning objects using Object Capture" project, when the content view is dismissed the AppDataModel is always retained and deinit is never called.
@StateObject var appModel: AppDataModel = AppDataModel.instance
I am presenting the content view using a UIHostingController:
let hostingController = UIHostingController(rootView: ContentView())
hostingController.modalPresentationStyle = .fullScreen
present(hostingController, animated: true)
I have tried manually detaching the listeners and setting the objectCaptureSession to nil. In the debug memory graph there is a coaching overlay retaining the AppDataModel. I want to remove the appModel from memory when the content view is dismissed. Any suggestions?
Post not yet marked as solved
8 Replies
1.6k Views
The sample project from https://developer.apple.com/documentation/RealityKit/guided-capture-sample was fine with beta 3. In beta 4 I'm getting this error:
Generic struct 'ObservedObject' requires that 'ObjectCaptureSession' conform to 'ObservableObject'
Does anyone have a fix? Thanks
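A hedged sketch of one workaround that has been floated: if ObjectCaptureSession now adopts the Observation framework rather than ObservableObject in beta 4, holding it as plain @State (or as a stored property of your own @Observable model) instead of @ObservedObject/@StateObject sidesteps the constraint:

```swift
import RealityKit
import SwiftUI

// Hold the session without an ObservableObject-based property wrapper.
@available(iOS 17.0, *)
struct CaptureContainerView: View {
    @State private var session = ObjectCaptureSession()

    var body: some View {
        ObjectCaptureView(session: session)
    }
}
```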
Post not yet marked as solved
0 Replies
456 Views
Is it possible to customize the ObjectCaptureView? I'd like the turntable that indicates which photos have been captured, shown together with the point cloud image, to have a different foreground color. That is, I want the white part under the point clouds to be some other color that I specify. Would that be possible by extending ObjectCapturePointCloudView?
Post not yet marked as solved
0 Replies
566 Views
We have implemented all the recent additions Apple made on the iOS side for guided capture using LiDAR and image data via ObjectCaptureSession. After the capture finishes we send our images to PhotogrammetrySession on macOS to reconstruct models at higher quality (Medium) than the Preview quality currently supported on iOS.
We have now done a few side-by-side captures using the new ObjectCaptureSession versus the traditional capture via the AVFoundation framework, but have not seen any of the improvements that were claimed during Apple's WWDC session. As a matter of fact, we feel the results are actually worse, because the images obtained through the new ObjectCaptureSession aren't as high quality as the images we get from AVFoundation.
Are we missing something here? Is PhotogrammetrySession on macOS not using this new additional LiDAR data, or have the improvements been overstated? From the documentation it is not at all clear how the new LiDAR data gets stored and how it transfers. We are using iOS 17 beta 4 and macOS Sonoma beta 4 in our testing. Both codebases were compiled using Xcode 15 beta 5.
Post not yet marked as solved
3 Replies
913 Views
Hello, I am trying to get Object Capture up and running. On the physical device side everything works great (except that the framework update required code adjustments, and it still crashes while reconstructing). I marked every class with @available(iOS 17.0, *) and the project also runs on devices with iOS 16. The problem is that when I want to build the project for the simulator (I know it won't work there, but the scan is part of a bigger app and I need to keep simulator functionality for testing other features), the build fails because it can't find ObjectCaptureSession. Is there any known way to fix this? Thanks in advance! Kind regards
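A sketch of the usual workaround: compile every reference to ObjectCaptureSession only for real devices with #if !targetEnvironment(simulator), so simulator builds never touch the missing symbol. The placeholder texts and the ObjectCaptureEntry wrapper below are assumptions about how the app might degrade, not part of Apple's sample:

```swift
import SwiftUI
#if !targetEnvironment(simulator)
import RealityKit
#endif

// Entry point that the rest of the app can reference from any build destination.
struct ScanLauncherView: View {
    var body: some View {
        #if targetEnvironment(simulator)
        Text("Object Capture is unavailable in the simulator.")
        #else
        if #available(iOS 17.0, *) {
            ObjectCaptureEntry()
        } else {
            Text("Object Capture requires iOS 17.")
        }
        #endif
    }
}

#if !targetEnvironment(simulator)
// Only this device-only view ever names ObjectCaptureSession.
@available(iOS 17.0, *)
private struct ObjectCaptureEntry: View {
    @State private var session = ObjectCaptureSession()
    var body: some View {
        ObjectCaptureView(session: session)
    }
}
#endif
```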
Post not yet marked as solved
9 Replies
1k Views
Apple's Object Capture sample code crashes while generating the 3D model when more than 10 images are used. The code ran fine with Xcode beta 4 (and the corresponding iOS version); since beta 5 I get these crashes. When scanning with exactly 10 images the process runs through fine. Does anybody know a workaround?
Post not yet marked as solved
3 Replies
858 Views
Hello, after installing Xcode 15 beta and the sample project provided for Object Capture at WWDC23 I am getting the error below:
dyld[2006]: Symbol not found: _$s19_RealityKit_SwiftUI20ObjectCaptureSessionC7Combine010ObservableE0AAMc
Referenced from: <35FD44C0-6001-325E-9F2A-016AF906B269> /private/var/containers/Bundle/Application/776635FF-FDD4-4DE1-B710-FC5F27D70D4F/GuidedCapture.app/GuidedCapture
Expected in: <6A96F77C-1BEB-3925-B370-266184BF844F> /System/Library/Frameworks/_RealityKit_SwiftUI.framework/_RealityKit_SwiftUI
I am trying to run the sample project on an iPhone 12 Pro (iOS 17.0 (21A5291j)). Any help in solving this issue would be appreciated. Thank you.
Post marked as solved
1 Reply
610 Views
Running on iOS 17 beta 6 and getting the issue below:
Conformance of 'ObjectCaptureSession.CaptureState' to protocol 'Equatable' was already stated in the type's module '_RealityKit_SwiftUI'
Operator function '==' will not be used to satisfy the conformance to 'Equatable'
'ObjectCaptureSession.CaptureState' declares conformance to protocol 'Equatable' here
Please help!
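One hedged reading of these diagnostics: the project (perhaps carried over from an earlier beta of the guided-capture sample) declares its own Equatable conformance for ObjectCaptureSession.CaptureState, and the beta 6 SDK now declares that conformance itself. If the project contains something along the lines of the hypothetical reconstruction below, deleting it should clear the warnings:

```swift
import RealityKit

// Hypothetical reconstruction of the kind of local conformance these diagnostics point
// at; it is code to remove (not to add) once the SDK declares Equatable itself.
@available(iOS 17.0, *)
extension ObjectCaptureSession.CaptureState: Equatable {
    public static func == (lhs: Self, rhs: Self) -> Bool {
        String(describing: lhs) == String(describing: rhs)
    }
}
```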