Meet Object Capture for iOS


Discuss the WWDC23 Session Meet Object Capture for iOS


Posts under wwdc2023-10191 tag

47 Posts
Post not yet marked as solved
1 Reply
464 Views
In ARKit, I captured a few color CVPixelBuffers and depth CVPixelBuffers and ran a PhotogrammetrySession with PhotogrammetrySamples. In my service, precise real-world scale is important, so I tried to figure out what determines whether the generated model comes out at real scale. I ran experiments with the same number of images (10), the same object, the same shot angles, and the same distances to the object (30 cm, 50 cm, 100 cm). Even with those variables controlled, the output sometimes comes out at real scale and sometimes does not. Since I can't see the photogrammetry source code or how it works internally, I'd like to know what I'm missing and how I can get real scale every time, if that's possible.
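A minimal sketch of the sample-based input path being described, assuming the buffers come from ARKit and the reconstruction runs where the PhotogrammetrySample-sequence initializer is available (macOS); a valid depthDataMap on each sample is what gives the session a chance to recover metric scale:

```swift
import Foundation
import CoreVideo
import RealityKit

// Sketch: pair each color frame with its depth map so the reconstruction
// can estimate real-world scale. The buffer arrays are placeholders.
func makeSamples(colorBuffers: [CVPixelBuffer],
                 depthBuffers: [CVPixelBuffer]) -> [PhotogrammetrySample] {
    zip(colorBuffers, depthBuffers).enumerated().map { index, pair in
        var sample = PhotogrammetrySample(id: index, image: pair.0)
        sample.depthDataMap = pair.1   // depth is what enables metric scale
        return sample
    }
}

func reconstruct(samples: [PhotogrammetrySample], outputURL: URL) throws {
    let session = try PhotogrammetrySession(input: samples,
                                            configuration: .init())
    try session.process(requests: [.modelFile(url: outputURL, detail: .medium)])
}
```

If scale is still inconsistent, checking that every depth buffer is valid (expected depth pixel format, not empty) and optionally setting each sample's gravity vector from ARKit is worth trying; samples that arrive without usable depth leave the session free to pick an arbitrary scale.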
Post not yet marked as solved
1 Reply
460 Views
Scanning objects using Object Capture: my iPad Pro on iPadOS 17.0 (21A329) has LiDAR (it can run RoomPlan). When I run the demo I get an error: ObjectCaptureSession.isCurrentDeviceSupported: The device is not supported on this device. Did I miss something? Looking forward to your reply.
Post not yet marked as solved
1 Reply
432 Views
Hey there, I recently tried out the iOS 17 photogrammetry sample app. The results are very promising compared to the iOS 16 apps, and the real-world scale retention works amazingly well. However, my use case involves keeping the camera still and rotating the object instead, which was an option in iOS 16 but was unfortunately removed in iOS 17. I wonder if there's a way to do this in the iOS 17 app!
Post not yet marked as solved
0 Replies
499 Views
Hi, we are looking for a solution to create real-life-size 3D models using reflex cameras. We built a Mac app called Smart Capture that uses Object Capture to recreate 3D models from pictures, and we used it to digitize 5,000 archaeological finds from the Archaeological Park of Pompeii. We built a solid workflow using Orbitvu automated photography boxes with 3 reflex cameras per box to speed up the capture process, which gets us a 3D model in less than 10 minutes (2-3 minutes to capture and about 7-8 minutes to process on an M2 Max). The problem is that the resulting object has no size information, so we have to measure the real object manually and resize the 3D model accordingly, which introduces a manual step and a possible source of error in the workflow. I was wondering whether it's possible, using the iOS 17 Object Capture APIs, to get point cloud data that I could add to the reflex camera pictures and process the whole package on the Mac to recover the size of the real object. As far as I understand, the only way to get this working before iOS 17 was to use depth information (I tried the Sample Capture project), but the problem is that we have to work with everything from small to fairly large objects (our range is roughly 1 to 25 inches). Do you have any clue how to achieve this?
Post not yet marked as solved
0 Replies
345 Views
I have tested the sample app with a few objects, and it seems very robust. However, I would like to capture the interior of a car for a use case I have. It seems like this won't work because the bounding box is around the camera. Is there support in other APIs for this?
Post marked as solved
1 Reply
569 Views
When my app is about to access the clipboard, the Apple paste permission prompt appears and asks for permission. But the localization doesn't seem to change when I change the phone language. Scenario: if my phone is set to English the first time, the paste permission prompt appears in English, which is correct; but when I switch the phone language to Spanish, the prompt still appears in English. Only after I restart the phone does the prompt appear in Spanish. If I switch back to English, the prompt remains in Spanish until I restart the phone again. Is there any way to override this in the plist like other privacy permissions, or is this a known bug? This is on iOS 16.6; I will attach a screenshot. Can anyone help with this? Thank you so much.
Post marked as solved
1 Reply
591 Views
Running on iOS 17 beta 6 and getting the issue below. Conformance of 'ObjectCaptureSession.CaptureState' to protocol 'Equatable' was already stated in the type's module '_RealityKit_SwiftUI'. Operator function '==' will not be used to satisfy the conformance to 'Equatable'. 'ObjectCaptureSession.CaptureState' declares conformance to protocol 'Equatable' here. Please help!
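One workaround worth trying for this beta diagnostic, shown as a minimal sketch: remove any Equatable conformance your own module declares for ObjectCaptureSession.CaptureState and pattern match on the state instead of comparing with ==.

```swift
import RealityKit

// Sketch: pattern matching avoids relying on an Equatable conformance for
// ObjectCaptureSession.CaptureState declared outside _RealityKit_SwiftUI.
@available(iOS 17.0, *)
func isCaptureCompleted(_ session: ObjectCaptureSession) -> Bool {
    if case .completed = session.state {
        return true
    }
    return false
}
```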
Post not yet marked as solved
3 Replies
840 Views
Hello, after installing the Xcode 15 beta and the sample project provided for Object Capture at WWDC23, I am getting the error below: dyld[2006]: Symbol not found: _$s19_RealityKit_SwiftUI20ObjectCaptureSessionC7Combine010ObservableE0AAMc Referenced from: <35FD44C0-6001-325E-9F2A-016AF906B269> /private/var/containers/Bundle/Application/776635FF-FDD4-4DE1-B710-FC5F27D70D4F/GuidedCapture.app/GuidedCapture Expected in: <6A96F77C-1BEB-3925-B370-266184BF844F> /System/Library/Frameworks/_RealityKit_SwiftUI.framework/_RealityKit_SwiftUI I am trying to run the sample project on an iPhone 12 Pro (iOS 17.0 (21A5291j)). Any help in solving this issue would be appreciated. Thank you.
Post not yet marked as solved
9 Replies
1k Views
The Object Capture Apple sample code crashes while generating the 3D model when using more than 10 images. The code was running fine in Xcode beta 4 (and the corresponding iOS version). Since beta 5 I get these crashes. When scanning with exactly 10 images the process runs through fine. Does anybody know a workaround for that?
Post not yet marked as solved
3 Replies
872 Views
Hello everyone, I am trying to get Object Capture up and running. On the physical device side everything works great (except that since the framework update the code needed adjustments, and it still crashes while reconstructing). I marked every class with @available(iOS 17.0, *) and the project also runs on devices with iOS 16. The problem is that when I build the project for the simulator (I know it won't work there, but the scan is part of a bigger app and I need to keep simulator functionality for testing other features), the build fails because it can't find ObjectCaptureSession. Is there any known way to fix this? Thanks in advance! Kind regards
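A minimal sketch of one common workaround (the view names are hypothetical): fence every reference to ObjectCaptureSession behind targetEnvironment(simulator) so simulator builds never reference the missing symbol, while device builds keep the capture UI.

```swift
import SwiftUI
import RealityKit

// Sketch: simulator builds compile only the placeholder branch; the capture
// views exist only in device builds.
struct CaptureLauncherView: View {
    var body: some View {
        #if targetEnvironment(simulator)
        Text("Object Capture is not available in the simulator.")
        #else
        if #available(iOS 17.0, *) {
            RealCaptureView()
        } else {
            Text("Object Capture requires iOS 17.")
        }
        #endif
    }
}

#if !targetEnvironment(simulator)
@available(iOS 17.0, *)
private struct RealCaptureView: View {
    @State private var session: ObjectCaptureSession?

    var body: some View {
        Group {
            if let session {
                ObjectCaptureView(session: session)
            } else {
                ProgressView()
            }
        }
        .task {
            let directory = FileManager.default.temporaryDirectory
                .appendingPathComponent("Images/", isDirectory: true)
            try? FileManager.default.createDirectory(at: directory,
                                                     withIntermediateDirectories: true)
            let newSession = ObjectCaptureSession()
            newSession.start(imagesDirectory: directory,
                             configuration: ObjectCaptureSession.Configuration())
            session = newSession
        }
    }
}
#endif
```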
Post not yet marked as solved
0 Replies
547 Views
We have implemented all the recent additions Apple made on the iOS side for guided capture using LiDAR and image data via ObjectCaptureSession. After the capture finishes we send our images to PhotogrammetrySession on macOS to reconstruct models at higher quality (Medium) than the Preview quality currently supported on iOS. We have now done a few side-by-side captures using the new ObjectCaptureSession versus the traditional capture via the AVFoundation framework, but have not seen any of the improvements that were claimed during the session Apple hosted at WWDC. As a matter of fact, we feel the results are actually worse, because the images obtained through the new ObjectCaptureSession aren't as high quality as the images we get from AVFoundation. Are we missing something here? Is PhotogrammetrySession on macOS not using this new additional LiDAR data, or have the improvements been overstated? From the documentation it is not at all clear how the new LiDAR data gets stored and how that data transfers. We are using iOS 17 beta 4 and macOS Sonoma beta 4 in our testing. Both codebases have been compiled using Xcode 15 beta 5.
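For reference, a minimal sketch of the macOS reconstruction step; the folder names and the use of checkpointDirectory are assumptions about what gets copied off the device, and as far as we can tell the LiDAR-derived data travels inside the captured images and checkpoint data rather than as a separate artifact you manage yourself.

```swift
import Foundation
import RealityKit

// Sketch: rebuild at Medium detail on the Mac from the folder copied off the
// device. Folder names are assumptions, not a documented contract.
func reconstructOnMac(captureFolder: URL, outputModelURL: URL) async throws {
    var configuration = PhotogrammetrySession.Configuration()
    // Reusing the on-device checkpoints (if copied over) lets the macOS
    // session pick up work the phone already did.
    configuration.checkpointDirectory = captureFolder
        .appendingPathComponent("Snapshots/", isDirectory: true)

    let session = try PhotogrammetrySession(
        input: captureFolder.appendingPathComponent("Images/", isDirectory: true),
        configuration: configuration)
    try session.process(requests: [.modelFile(url: outputModelURL, detail: .medium)])

    for try await output in session.outputs {
        if case .processingComplete = output {
            print("Reconstruction finished: \(outputModelURL.path)")
        }
    }
}
```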
Post not yet marked as solved
0 Replies
446 Views
Is it possible to customize ObjectCaptureView? I'd like the turntable that indicates whether a photo was captured, shown together with the point cloud image, to have a different foreground color. In other words, I want the white part underneath the point cloud to be some other color that I specify. Would that be possible by extending ObjectCapturePointCloudView?
Post not yet marked as solved
8 Replies
1.6k Views
The sample project from https://developer.apple.com/documentation/RealityKit/guided-capture-sample was fine with beta 3. In beta 4 I'm getting this error: Generic struct 'ObservedObject' requires that 'ObjectCaptureSession' conform to 'ObservableObject'. Does anyone have a fix? Thanks
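One change that reportedly resolves this after the beta 4 API shift, shown as a minimal sketch: the session now uses the new Observation model rather than ObservableObject, so hold it with @State (or a plain stored property) instead of @ObservedObject.

```swift
import SwiftUI
import RealityKit

// Sketch: @ObservedObject requires ObservableObject, which the beta 4 session
// no longer adopts; @State works with the Observation-based type.
@available(iOS 17.0, *)
struct CapturePreview: View {
    @State private var session = ObjectCaptureSession()   // was: @ObservedObject

    var body: some View {
        ObjectCaptureView(session: session)
    }
}
```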
Post not yet marked as solved
2 Replies
607 Views
Hi, in the "Scanning objects using Object Capture" project, when the content view is dismissed the AppDataModel is always retained and deinit is never called.
@StateObject var appModel: AppDataModel = AppDataModel.instance
I am presenting the content view using a UIHostingController:
let hostingController = UIHostingController(rootView: ContentView())
hostingController.modalPresentationStyle = .fullScreen
present(hostingController, animated: true)
I have tried manually detaching the listeners and setting the objectCaptureSession to nil. In the debug memory graph there is a coaching overlay retaining the AppDataModel. I want to remove the appModel from memory when the content view is dismissed. Any suggestions?
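A minimal sketch of one direction to try (the names are hypothetical, not the sample's API): replace the AppDataModel.instance singleton with a per-presentation model and cancel the session explicitly before dismissal; a static instance on its own will keep the object alive no matter what the view does.

```swift
import SwiftUI
import RealityKit

// Sketch: a per-presentation model that owns the session and tears it down
// explicitly. A `static let instance` singleton can never reach deinit.
@available(iOS 17.0, *)
@MainActor
final class CaptureDataModel: ObservableObject {
    @Published var objectCaptureSession: ObjectCaptureSession?

    func start(imagesDirectory: URL) {
        let session = ObjectCaptureSession()
        session.start(imagesDirectory: imagesDirectory,
                      configuration: ObjectCaptureSession.Configuration())
        objectCaptureSession = session
    }

    func tearDown() {
        objectCaptureSession?.cancel()   // stop capture before releasing it
        objectCaptureSession = nil
    }

    deinit {
        print("CaptureDataModel deinit")   // should now fire after dismissal
    }
}
```

Calling tearDown() from onDisappear (or just before dismissing the hosting controller) and passing the model into ContentView instead of reading a shared instance makes it much easier to see in the memory graph what, if anything, is still holding on.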
Post not yet marked as solved
1 Reply
462 Views
When running the code from the Object Capture session at WWDC23, I'm currently getting the error "dyld[607]: Symbol not found: _$s21DeveloperToolsSupport15PreviewRegistryPAAE7previewAA0D0VvgZ Referenced from: <411AA023-A110-33EA-B026-D0103BAE08B6> /private/var/containers/Bundle/Application/9E9526BF-C163-420D-B6E0-2DC9E02B3F7E/ObjectCapture.app/ObjectCapture Expected in: <0BD6AC59-17BF-3B07-8C7F-6D9D25E0F3AD> /System/Library/Frameworks/DeveloperToolsSupport.framework/DeveloperToolsSupport"
Post marked as solved
1 Reply
453 Views
Hi, I'm watching https://developer.apple.com/videos/play/wwdc2023/10191 and would like to generate a high-detail object, but it looks like that is not possible on iOS yet. However, the project has configuration.isOverCaptureEnabled = true, which captures additional images for transferring to macOS later. Is there a way to get those images off the phone? Thanks, Pitt
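A minimal sketch of the capture-side setup being discussed (the directory layout is a choice for illustration, not mandated by the API): the extra over-capture shots land in the imagesDirectory passed to start, so writing that folder somewhere exportable, for example under Documents with file sharing enabled, is one way to pull the images off the phone for macOS processing.

```swift
import Foundation
import RealityKit

// Sketch: enable over-capture and write images to a folder the user can
// export later (e.g. via the Files app when file sharing is enabled).
@available(iOS 17.0, *)
func startOverCapture(session: ObjectCaptureSession) throws -> URL {
    let imagesDirectory = FileManager.default
        .urls(for: .documentDirectory, in: .userDomainMask)[0]
        .appendingPathComponent("Scans/Images/", isDirectory: true)
    try FileManager.default.createDirectory(at: imagesDirectory,
                                            withIntermediateDirectories: true)

    var configuration = ObjectCaptureSession.Configuration()
    configuration.isOverCaptureEnabled = true   // keep the extra shots for macOS
    session.start(imagesDirectory: imagesDirectory, configuration: configuration)
    return imagesDirectory
}
```

Setting UIFileSharingEnabled and LSSupportsOpeningDocumentsInPlace in the app's Info.plist then makes that Documents folder visible in the Files app, from where the images can be copied to a Mac.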
Post not yet marked as solved
1 Reply
689 Views
I am trying the demo code from https://developer.apple.com/documentation/realitykit/guided-capture-sample. macOS: 13.4.1 (22F82), Xcode: 15 beta 4, iPadOS: 17.0 public beta, iPad: Pro 11-inch 2nd generation (has a LiDAR scanner). But I get a runtime error: "Thread 1: Fatal error: ObjectCaptureSession is not supported on this device!"
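A minimal sketch of a guard worth adding before constructing the session: ObjectCaptureSession.isSupported reports whether the current hardware qualifies. The documented requirement is a LiDAR scanner plus an A14 or newer chip, which the 2020 11-inch iPad Pro (A12Z) does not meet even though it has LiDAR.

```swift
import RealityKit

// Sketch: check support up front instead of hitting the fatal error at runtime.
@available(iOS 17.0, *)
func makeObjectCaptureSession() -> ObjectCaptureSession? {
    guard ObjectCaptureSession.isSupported else {
        print("Object Capture needs a LiDAR scanner and an A14 or newer chip.")
        return nil
    }
    return ObjectCaptureSession()
}
```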
Post not yet marked as solved
3 Replies
729 Views
Hi. Each time I try to capture an object using the example from the session https://developer.apple.com/videos/play/wwdc2023/10191, I get a crash. iPhone 14 Pro Max, iOS 17 beta 3, Xcode Version 15.0 beta 3 (15A5195k). Log: ObjectCaptureSession.: mobileSfM pose for the new camera shot is not consistent. <<<< PlayerRemoteXPC >>>> fpr_deferPostNotificationToNotificationQueue signalled err=-12 785 (kCMBaseObjectError_Invalidated) (item invalidated) at FigPlayer_RemoteXPC.m:829 Compiler failed with XPC_ERROR_CONNECTION_INTERRUPTED Compiler failed with XPC_ERROR_CONNECTION_INTERRUPTED Compiler failed with XPC_ERROR_CONNECTION_INTERRUPTED MTLCompiler: Compilation failed with XPC_ERROR_CONNECTION_INTERRUPTED on 3 try /Library/Caches/com.apple.xbs/Sources/MetalPerformanceShaders/MPSCore/Utility/MPSLibrary.mm:485: failed assertion `MPSLibrary::MPSKey_Create internal error: Unable to get MPS kernel NDArrayMatrixMultiplyNNA14_EdgeCase. Error: Compiler encountered an internal error ' /Library/Caches/com.apple.xbs/Sources/MetalPerformanceShaders/MPSCore/Utility/MPSLibrary.mm, line 485: error ''