Meet Object Capture for iOS

Discuss the WWDC23 session "Meet Object Capture for iOS"

Posts under the wwdc2023-10191 tag

47 Posts
Post not yet marked as solved
1 Reply
803 Views
With the code below, I added color and depth images from a RealityKit ARView and ran Photogrammetry on an iOS device. The mesh looks fine, but the scale of the mesh is quite different from the real-world scale.

```swift
let color = arView.session.currentFrame!.capturedImage
let depth = arView.session.currentFrame!.sceneDepth!.depthMap

// 😀 Color
let colorCIImage = CIImage(cvPixelBuffer: color)
let colorUIImage = UIImage(ciImage: colorCIImage)
let depthCIImage = CIImage(cvPixelBuffer: depth)
let heicData = colorUIImage.heicData()!
let fileURL = imageDirectory!.appendingPathComponent("\(scanCount).heic")
do {
    try heicData.write(to: fileURL)
    print("Successfully wrote image to \(fileURL)")
} catch {
    print("Failed to write image to \(fileURL): \(error)")
}

// 😀 Depth
let context = CIContext()
let colorSpace = CGColorSpace(name: CGColorSpace.linearGray)!
let depthData = context.tiffRepresentation(of: depthCIImage,
                                           format: .Lf,
                                           colorSpace: colorSpace,
                                           options: [.disparityImage: depthCIImage])
let depthURL = imageDirectory!.appendingPathComponent("IMG_\(scanCount)_depth.TIF")
try! depthData!.write(to: depthURL, options: [.atomic])
print("depth saved")
```

I also tried this:

```swift
let colorSpace = CGColorSpace(name: CGColorSpace.linearGray)
let depthCIImage = CIImage(cvImageBuffer: depth,
                           options: [.auxiliaryDepth: true])
let context = CIContext()
let linearColorSpace = CGColorSpace(name: CGColorSpace.linearSRGB)
guard let heicData = context.heifRepresentation(of: colorCIImage,
                                                format: .RGBA16,
                                                colorSpace: linearColorSpace!,
                                                options: [.depthImage: depthCIImage]) else {
    print("Failed to convert combined image into HEIC format")
    return
}
```

Does anyone know why, and how to fix it?
Posted by HeoJin. Last updated.
Post not yet marked as solved
2 Replies
946 Views
With AVFoundation's builtInLiDARDepthCamera, if I save photo.fileDataRepresentation to HEIC, it only has EXIF and TIFF metadata. But the HEIC images from RealityKit's Object Capture have not only EXIF and TIFF but also HEIC metadata, including camera calibration data. What should I do so that AVFoundation's exported image has the same metadata?
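A minimal sketch of the AVFoundation delivery flags involved (this assumes a capture session already configured with the LiDAR depth camera; whether calibration ends up embedded in the HEIC container the same way Object Capture writes it is exactly the open question):

```swift
import AVFoundation

func makePhotoSettings(for output: AVCapturePhotoOutput) -> AVCapturePhotoSettings {
    let settings = AVCapturePhotoSettings(format: [AVVideoCodecKey: AVVideoCodecType.hevc])
    // Requires output.isDepthDataDeliveryEnabled = true beforehand.
    if output.isDepthDataDeliverySupported {
        settings.isDepthDataDeliveryEnabled = true
        settings.embedsDepthDataInPhoto = true
    }
    // Calibration arrives on the AVCapturePhoto callback
    // (photo.depthData?.cameraCalibrationData); copying it into the
    // file's metadata may still have to be done manually.
    if output.isCameraCalibrationDataDeliverySupported {
        settings.isCameraCalibrationDataDeliveryEnabled = true
    }
    return settings
}
```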
Posted by HeoJin. Last updated.
Post not yet marked as solved
1 Reply
447 Views
Scanning objects using Object Capture: my iPad Pro on iPadOS 17.0 (21A329) has LiDAR (it can use RoomPlan). When I run the demo I get an error: ObjectCaptureSession.isCurrentDeviceSupported: The device is not supported on this device. Did I miss something? Looking forward to your reply.
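For what it's worth, Object Capture on iOS requires both a LiDAR scanner and a recent chip (Apple's stated floor is A14 Bionic or later, which excludes the earlier LiDAR iPad Pros); the documented runtime check is sketched below:

```swift
import RealityKit

@available(iOS 17.0, *)
@MainActor
func startCaptureIfSupported() {
    // False on hardware that has LiDAR but not a supported chip.
    guard ObjectCaptureSession.isSupported else {
        print("Object Capture is not supported on this device.")
        return
    }
    // ... create and start the ObjectCaptureSession here
}
```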
Posted by RookieRed. Last updated.
Post not yet marked as solved
1 Reply
449 Views
In ARKit, I took a few color CVPixelBuffers and depth CVPixelBuffers and ran a PhotogrammetrySession with PhotogrammetrySamples. In my service, precise real scale is important, so I tried to figure out what determines whether the created model is at real scale. I ran experiments with the same number of images (10), the same object, the same shot angles, and the same distances to the object (30 cm, 50 cm, 100 cm). But even with those controlled variables, sometimes it generates real scale and sometimes it doesn't. Since I can't see the photogrammetry source code or how it works inside, I wonder what I'm missing and how I can get real scale every time, if that's possible.
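For reference, a minimal sketch of supplying depth to the reconstruction through PhotogrammetrySample, which is what lets the session estimate metric scale. makeSamples and the buffer pairing are placeholders, and the depth buffers are assumed to be kCVPixelFormatType_DepthFloat32 registered to the color images:

```swift
import CoreVideo
import RealityKit

@available(iOS 17.0, macOS 12.0, *)
func makeSamples(colors: [CVPixelBuffer], depths: [CVPixelBuffer]) -> [PhotogrammetrySample] {
    zip(colors, depths).enumerated().map { index, pair in
        var sample = PhotogrammetrySample(id: index, image: pair.0)
        // Without a consistent depth map per image, scale is estimated and can drift.
        sample.depthDataMap = pair.1
        return sample
    }
}

@available(iOS 17.0, macOS 12.0, *)
func reconstruct(samples: [PhotogrammetrySample], output: URL) async throws {
    let session = try PhotogrammetrySession(input: samples)
    // Detail levels vary by platform; .reduced is a safe on-device choice.
    try session.process(requests: [.modelFile(url: output, detail: .reduced)])
    for try await result in session.outputs {
        if case .processingComplete = result { print("Done: \(output)") }
    }
}
```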
Posted by HeoJin. Last updated.
Post not yet marked as solved
1 Reply
413 Views
Hey there, I recently tried out the iOS 17 photogrammetry sample app. The results are very promising compared to the iOS 16 apps; the real-world scale retention works amazingly. However, my use case involves keeping the camera still and rotating the object instead, which was an option in iOS 16 but was unfortunately removed in iOS 17. I wonder if there's a way to do this in the iOS 17 app!
Posted. Last updated.
Post not yet marked as solved
0 Replies
485 Views
Hi, we are searching for a solution to create real-life-size 3D models using reflex cameras. We created a Mac app called Smart Capture that uses Object Capture to recreate 3D models from pictures. We used this project to digitize 5,000 archaeological findings of the Archaeological Park of Pompeii. We built a strong workflow around Orbitvu automated photography boxes, with 3 reflex cameras per box to speed up the capture process, which lets us get a 3D model in less than 10 minutes (2-3 minutes to capture and about 7-8 minutes to process on an M2 Max). The problem is that the resulting object has no size information, and we have to take measurements manually and resize the 3D model accordingly, introducing a manual step and a possible source of error in the workflow. I was wondering whether it's possible, using the iOS 17 Object Capture APIs, to get point cloud data that I could add to the reflex camera pictures and process the whole package on the Mac to retrieve the size of the real object. As far as I understood, the only way to get this working before iOS 17 was to use depth information (I tried the Sample Capture project), but the problem is that we have to work with everything from small objects up to huge ones (our range is roughly 1 to 25 inches). Do you have any clue how to achieve this?
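For the Mac-side step, a minimal folder-based reconstruction sketch (paths are placeholders). This only shows the documented folder-input path; whether iOS 17 point-cloud data can be combined with reflex-camera images to recover size is precisely the open question:

```swift
import RealityKit

@available(macOS 12.0, *)
func reconstructFolder(input: URL, output: URL) async throws {
    // If the input images carry depth (e.g. HEICs from ObjectCaptureSession),
    // the session can use it for scale; plain DSLR shots carry none.
    let session = try PhotogrammetrySession(input: input)
    try session.process(requests: [.modelFile(url: output, detail: .full)])
    for try await result in session.outputs {
        switch result {
        case .processingComplete: print("Wrote \(output)")
        case .requestError(_, let error): print("Failed: \(error)")
        default: break
        }
    }
}
```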
Posted. Last updated.
Post not yet marked as solved
2 Replies
916 Views
Is it possible to capture only manually (with automatic capture off) using the Object Capture API? And can I proceed to the capturing stage right away? Only the Object Capture API captures objects at real scale. Using AVFoundation or ARKit, I've tried capturing HEVC with LiDAR and creating PhotogrammetrySamples, but neither produces a real-scale object. I think that during Object Capture the session gathers the point cloud and intrinsic parameters, and that helps the mesh come out at real scale. Does anyone know about 'Object Capture with only manual capturing' or 'capturing with AVFoundation for a real-scale mesh'?
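For context, a sketch of the documented iOS 17 session flow — the public API exposes no manual shutter, so capture itself is automatic once startCapturing() runs (imagesURL is a placeholder):

```swift
import RealityKit

@available(iOS 17.0, *)
@MainActor
func beginCapture(into imagesURL: URL) -> ObjectCaptureSession {
    let session = ObjectCaptureSession()
    session.start(imagesDirectory: imagesURL,
                  configuration: ObjectCaptureSession.Configuration())
    // Normally driven by UI once the session reaches .ready:
    if session.startDetecting() {   // bounding-box detection
        session.startCapturing()    // automatic shot gathering begins
    }
    return session
}
```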
Posted by HeoJin. Last updated.
Post not yet marked as solved
6 Replies
2.4k Views
I'm really excited about the Object Capture APIs being moved to iOS, and the complex UI shown in the WWDC session. I have a few unanswered questions: Where is the sample code available? Are the new Object Capture APIs on iOS limited to certain devices? Can we capture images from the front-facing cameras?
Posted. Last updated.
Post not yet marked as solved
0 Replies
331 Views
I have tested the sample app with a few objects, and it seems very robust. However, I would like to capture the interior of a car for a use case I have. It seems like this won't work because the bounding box is around the camera. Is there support in other APIs for this?
Posted by steezeman. Last updated.
Post not yet marked as solved
9 Replies
1k Views
The Object Capture Apple sample code crashes while generating the 3D model when using more than 10 images. The code ran fine in Xcode beta 4 (and the corresponding iOS version); since beta 5 I get these crashes. When scanning with exactly 10 images, the process runs through fine. Does anybody know a workaround?
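Not a workaround, but a sketch that surfaces the underlying reconstruction error from the output stream, which may reveal why runs with more than 10 images die (assumes the sample's PhotogrammetrySession, here passed in as session):

```swift
import RealityKit

@available(iOS 17.0, *)
func logOutputs(of session: PhotogrammetrySession) async throws {
    for try await output in session.outputs {
        switch output {
        case .requestError(let request, let error):
            print("Request \(request) failed: \(error)")   // the real failure reason
        case .invalidSample(let id, let reason):
            print("Sample \(id) rejected: \(reason)")
        default:
            break
        }
    }
}
```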
Posted by NickYaw. Last updated.
Post not yet marked as solved
8 Replies
1.5k Views
The sample project from https://developer.apple.com/documentation/RealityKit/guided-capture-sample was fine with beta 3. In beta 4, I'm getting these errors: Generic struct 'ObservedObject' requires that 'ObjectCaptureSession' conform to 'ObservableObject'. Does anyone have a fix? Thanks
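One hedged reading of the error: as of that beta, ObjectCaptureSession appears to use the new Observation framework rather than ObservableObject, so @ObservedObject no longer applies. A sketch of the adjustment, assuming that is the cause:

```swift
import RealityKit
import SwiftUI

@available(iOS 17.0, *)
struct CaptureContainerView: View {
    // @Observable types are tracked automatically; no property wrapper needed.
    let session: ObjectCaptureSession

    var body: some View {
        ObjectCaptureView(session: session)
    }
}
```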
Posted by iosdevil. Last updated.
Post not yet marked as solved
3 Replies
818 Views
Hello, after installing the Xcode 15 beta and the sample project provided for Object Capture at WWDC23, I am getting the error below:

dyld[2006]: Symbol not found: _$s19_RealityKit_SwiftUI20ObjectCaptureSessionC7Combine010ObservableE0AAMc
Referenced from: <35FD44C0-6001-325E-9F2A-016AF906B269> /private/var/containers/Bundle/Application/776635FF-FDD4-4DE1-B710-FC5F27D70D4F/GuidedCapture.app/GuidedCapture
Expected in: <6A96F77C-1BEB-3925-B370-266184BF844F> /System/Library/Frameworks/_RealityKit_SwiftUI.framework/_RealityKit_SwiftUI

I am trying to run the sample project on an iPhone 12 Pro (iOS 17.0 (21A5291j)). Any help in solving this issue would be appreciated. Thank you.
Posted by igyehia. Last updated.
Post marked as solved
1 Reply
550 Views
When I am about to access the clipboard, the Apple paste permission prompt appears and asks for permission. But the localization doesn't seem to change language when I change the phone language. Scenario: if my phone is set to English the first time, the paste permission prompt appears in English, which is correct. But when I switch the phone language to Spanish, the paste permission prompt is still in English. I need to restart the phone, and only then does the permission prompt appear in Spanish. If I switch back to English, the prompt remains in Spanish until I restart the phone. Is there any way we can override this in the plist like other privacy permissions, or is this a known bug? This is on iOS 16.6; I will attach a screenshot. Can anyone answer and help with this? Thank you so much.
Posted by niewsicpa. Last updated.
Post marked as solved
1 Reply
571 Views
Running on iOS 17 beta 6 and getting the issue below:

Conformance of 'ObjectCaptureSession.CaptureState' to protocol 'Equatable' was already stated in the type's module '_RealityKit_SwiftUI'
Operator function '==' will not be used to satisfy the conformance to 'Equatable'
'ObjectCaptureSession.CaptureState' declares conformance to protocol 'Equatable' here

Please help!
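Since the diagnostics concern the synthesized '==', one possible sketch of a workaround is to pattern-match on the capture state instead of comparing with '==' wherever the warning fires in your own code:

```swift
import RealityKit

@available(iOS 17.0, *)
@MainActor
func handleStateChange(of session: ObjectCaptureSession) {
    switch session.state {
    case .completed:
        print("Capture completed")      // e.g. kick off reconstruction here
    case .failed(let error):
        print("Capture failed: \(error)")
    default:
        break
    }
}
```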
Posted. Last updated.
Post not yet marked as solved
3 Replies
843 Views
Hello guys, I am trying to get object capturing up and running. On the physical-device side everything works great (except that since the framework update it needed code adjustments and still crashes while reconstructing). I marked every class with @available(iOS 17.0, *), and the project also runs on devices with iOS 16. The problem is that when I want to build the project for the simulator (I know it won't work there, but the scan is part of a bigger app and I need to keep simulator functionality for testing other features), the build fails because it can't find ObjectCaptureSession. Is there any known way to fix this? Thanks in advance! Kind regards
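A common sketch for this situation is to compile the capture code out of simulator builds entirely, assuming the build failure is the symbol being absent from the simulator SDK:

```swift
// Device-only: keep every ObjectCaptureSession reference behind this check,
// and give simulator builds a stub in an #else branch if other code needs it.
#if !targetEnvironment(simulator)
import RealityKit

@available(iOS 17.0, *)
@MainActor
final class CaptureController {
    let session = ObjectCaptureSession()
    // ... capture logic, compiled only for device builds
}
#endif
```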
Posted by NavidF. Last updated.
Post not yet marked as solved
1 Reply
672 Views
I am trying the demo code from https://developer.apple.com/documentation/realitykit/guided-capture-sample. macOS: 13.4.1 (22F82); Xcode: 15 beta 4; iPadOS: 17.0 public beta; iPad: Pro 11-inch 2nd generation (has LiDAR scanner). But I get an error at runtime: "Thread 1: Fatal error: ObjectCaptureSession is not supported on this device!"
Posted. Last updated.
Post not yet marked as solved
0 Replies
529 Views
We have implemented all the recent additions Apple made on the iOS side for guided capture using LiDAR and image data via ObjectCaptureSession. After the capture finishes, we send our images to PhotogrammetrySession on macOS to reconstruct models at higher quality (Medium) than the Preview quality currently supported on iOS. We have now done a few side-by-side captures using the new ObjectCaptureSession versus the traditional capture via the AVFoundation framework, but have not seen any of the improvements claimed during the session Apple hosted at WWDC. As a matter of fact, we feel the results are actually worse, because the images obtained through the new ObjectCaptureSession aren't as high quality as the images we get from AVFoundation. Are we missing something here? Is PhotogrammetrySession on macOS not using this new additional LiDAR data, or have the improvements been overstated? From the documentation it is not at all clear how the new LiDAR data gets stored and how that data transfers. We are using iOS 17 beta 4 and macOS Sonoma beta 4 in our testing. Both codebases were compiled using Xcode 15 beta 5.
Posted by lanxinger. Last updated.
Post not yet marked as solved
0 Replies
429 Views
Is it possible for me to customize the ObjectCaptureView? I'd like the turntable that indicates whether a photo was captured, with its point-cloud image, to have a different foreground color. So I want the white part under the point clouds to be some other color that I specify. Would it be possible by extending ObjectCapturePointCloudView?
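No styling hooks are documented for these views; the closest sketch is composing ObjectCapturePointCloudView inside your own review screen and applying standard SwiftUI modifiers — whether foregroundStyle reaches the internal point-cloud rendering is an assumption to verify on device:

```swift
import RealityKit
import SwiftUI

@available(iOS 17.0, *)
struct TintedPointCloudView: View {
    let session: ObjectCaptureSession

    var body: some View {
        ObjectCapturePointCloudView(session: session)
            .foregroundStyle(.cyan)   // assumption: may not affect internals
            .background(.black)
    }
}
```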
Posted by KKodiac. Last updated.