Discuss Spatial Computing on Apple Platforms.

Post · Replies · Boosts · Views · Activity

OBJECT CAPTURE API
Now that we have the Vision Pro, I really want to start using Apple's Object Capture API to transform real objects into 3D assets. I watched the latest Object Capture video from WWDC23 and noticed they were using a "sample app". Does Apple provide this sample app to visionOS developers, or do we have to build our own iOS app? Thanks and cheers!
3 replies · 0 boosts · 1.1k views · Jun ’23
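Editor's note: the reconstruction side of Object Capture is the PhotogrammetrySession API in RealityKit. Below is a minimal, hedged sketch of driving it on macOS from a folder of captured images; the input and output paths are placeholders.

import Foundation
import RealityKit

// Minimal sketch: reconstruct a USDZ model from a folder of captured images.
let inputFolder = URL(fileURLWithPath: "/path/to/Captures/", isDirectory: true)
let outputModel = URL(fileURLWithPath: "/path/to/model.usdz")

let session = try PhotogrammetrySession(
    input: inputFolder,
    configuration: PhotogrammetrySession.Configuration()
)

// Listen for progress and completion before kicking off the request.
Task {
    for try await output in session.outputs {
        switch output {
        case .requestProgress(_, let fraction):
            print("Progress: \(Int(fraction * 100))%")
        case .requestComplete(_, .modelFile(let url)):
            print("Model written to \(url)")
        case .requestError(_, let error):
            print("Reconstruction failed: \(error)")
        default:
            break
        }
    }
}

try session.process(requests: [.modelFile(url: outputModel, detail: .medium)])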
Ventura Model output worse than Monterey
We run a photogrammetry studio for shoes, and since upgrading our computers to Ventura, the quality of the models has gotten significantly worse in terms of UV correctness, shape, and smoothness. We still have one computer on Monterey that I will not upgrade until this is figured out. I've run the same sets of files on Ventura vs. Monterey, using the same apps (tested on many, not just our main one), and there is a marked difference on certain shoes (ones with larger smooth areas?). I'm convinced it's something in the API, since different apps produce very similar results. I've reached out to Apple as well as the main app's developer, but haven't heard back. I'm curious if anyone else has seen this or has been able to dig into the code a bit more. Obviously it's not great that software that was working really well has taken a step back on a new OS. TIA.
0 replies · 0 boosts · 478 views · Jul ’23
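Editor's note: one way to make a Ventura-vs-Monterey comparison as controlled as possible is to drive PhotogrammetrySession directly with every configuration knob pinned explicitly, so no app-level defaults differ between machines. A hedged sketch; paths are placeholders, and in a real run you would also iterate session.outputs to observe completion.

import Foundation
import RealityKit

// Pin the configuration so runs on different machines differ only by the OS itself.
var config = PhotogrammetrySession.Configuration()
config.sampleOrdering = .unordered      // or .sequential if the shots were taken in order
config.featureSensitivity = .normal     // or .high for low-texture, smooth surfaces
config.isObjectMaskingEnabled = true

let session = try PhotogrammetrySession(
    input: URL(fileURLWithPath: "/path/to/shoe-images/", isDirectory: true),
    configuration: config
)
try session.process(requests: [
    .modelFile(url: URL(fileURLWithPath: "/path/to/shoe.usdz"), detail: .full)
])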
Bug: visionOS and the storyboard key in the plist?
I have an older app that is a mix of Swift and Objective-C. I have two groups of storyboards, for iPhone and iPad, using storyboard references. There seems to be a bug: when running in the visionOS simulator, the app loads the storyboard specified by the key "Main storyboard file base name" instead of the one under "Main storyboard file base name (iPad)". When I changed the first key to point to the iPad storyboard, it worked as expected in the visionOS simulator. The raw keys are UIMainStoryboardFile and UIMainStoryboardFile~ipad. What should I do?
0 replies · 0 boosts · 549 views · Jul ’23
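Editor's note: this doesn't answer the underlying simulator behavior, but a quick, hedged debugging aid is to print which storyboard keys the built bundle actually carries and which idiom the running process reports, to confirm what the visionOS simulator is resolving.

import UIKit

// Debugging aid: dump the storyboard keys from the Info.plist and the current idiom.
let info = Bundle.main.infoDictionary ?? [:]
print("UIMainStoryboardFile:", info["UIMainStoryboardFile"] as? String ?? "nil")
print("UIMainStoryboardFile~ipad:", info["UIMainStoryboardFile~ipad"] as? String ?? "nil")
print("userInterfaceIdiom:", UIDevice.current.userInterfaceIdiom.rawValue)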
How to display stereo images in Apple Vision Pro?
Hi community, I have a pair of stereo images, one for each eye. How should I render them on visionOS? I know that for 3D videos, AVPlayerViewController can display them in fullscreen mode, but I couldn't find any docs about 3D stereo images. I guess my question can be put more generally: is there any way to render different content for each eye? This could also be helpful to someone who has sight in only one eye.
7 replies · 0 boosts · 2.3k views · Jul ’23
WWDC 23 Object Capture 2023
When running the code from the WWDC23 Object Capture session, I'm currently getting the error "dyld[607]: Symbol not found: _$s21DeveloperToolsSupport15PreviewRegistryPAAE7previewAA0D0VvgZ Referenced from: <411AA023-A110-33EA-B026-D0103BAE08B6> /private/var/containers/Bundle/Application/9E9526BF-C163-420D-B6E0-2DC9E02B3F7E/ObjectCapture.app/ObjectCapture Expected in: <0BD6AC59-17BF-3B07-8C7F-6D9D25E0F3AD> /System/Library/Frameworks/DeveloperToolsSupport.framework/DeveloperToolsSupport"
1 reply · 0 boosts · 530 views · Jul ’23
AppDataModel is retained
Hi, in the Scanning objects using Object Capture sample project, when the content view is dismissed, the AppDataModel is always retained and its deinit is never called.

@StateObject var appModel: AppDataModel = AppDataModel.instance

I am presenting the ContentView using a UIHostingController:

let hostingController = UIHostingController(rootView: ContentView())
hostingController.modalPresentationStyle = .fullScreen
present(hostingController, animated: true)

I have tried manually detaching the listeners and setting the objectCaptureSession to nil. In the debug memory graph there is a coaching overlay retaining the AppDataModel. I want to remove the appModel from memory when the ContentView is dismissed. Any suggestions?
2 replies · 0 boosts · 680 views · Jul ’23
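Editor's note: one thing worth keeping in mind is that a static AppDataModel.instance singleton will never deinit by design; what can be done is to tear its state down explicitly when the view goes away. A hedged sketch, where AppDataModel is a simplified stand-in for the sample's class and detachListeners() is a hypothetical name for whatever observers it wires up.

import SwiftUI
import RealityKit

// Simplified stand-in for the sample's AppDataModel; the real class holds more state.
final class AppDataModel: ObservableObject {
    static let instance = AppDataModel()          // a singleton never deinits by design
    @Published var objectCaptureSession: ObjectCaptureSession?

    func detachListeners() {
        // Hypothetical: remove Combine subscriptions / notification observers here.
    }
}

struct ContentView: View {
    @StateObject private var appModel = AppDataModel.instance

    var body: some View {
        Text("Capture UI goes here")
            .onDisappear {
                // Tear the session down explicitly so nothing (e.g. the coaching
                // overlay) keeps a strong reference through the session.
                appModel.objectCaptureSession?.cancel()
                appModel.objectCaptureSession = nil
                appModel.detachListeners()
            }
    }
}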
Does anyone actually notice any improvements using the new ObjectCaptureSession with PhotogrammetrySession?
We have implemented all the recent additions Apple made on the iOS side for guided capture using LiDAR and image data via ObjectCaptureSession. After the capture finishes, we send our images to PhotogrammetrySession on macOS to reconstruct models at higher quality (Medium) than the Preview quality currently supported on iOS. We have now done a few side-by-side captures using the new ObjectCaptureSession vs. traditional capture via the AVFoundation framework, but have not seen any of the improvements claimed during the session Apple hosted at WWDC. As a matter of fact, we feel the results are actually worse, because the images obtained through the new ObjectCaptureSession aren't as high quality as the images we get from AVFoundation. Are we missing something here? Is PhotogrammetrySession on macOS not using this new additional LiDAR data, or have the improvements been overstated? From the documentation it is not clear at all how the new LiDAR data gets stored and how that data transfers. We are using iOS 17 beta 4 and macOS Sonoma beta 4 in our testing. Both codebases have been compiled using Xcode 15 beta 5.
1 reply · 0 boosts · 617 views · Jul ’23
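Editor's note: for reference, the iOS-side capture that produces the images handed off to macOS looks roughly like the sketch below (paths are placeholders). As I understand it, the depth and gravity gathered on LiDAR devices is embedded in the saved images, so the macOS PhotogrammetrySession is simply pointed at the same image folder.

import Foundation
import RealityKit

// Sketch of starting a guided capture that writes images into a folder which
// can later be fed to PhotogrammetrySession on macOS.
let imagesDirectory = URL(fileURLWithPath: "/path/to/Scans/Images/", isDirectory: true)

let session = ObjectCaptureSession()
var configuration = ObjectCaptureSession.Configuration()
configuration.isOverCaptureEnabled = true   // allow extra shots beyond the guided ring

session.start(imagesDirectory: imagesDirectory, configuration: configuration)
// The capture UI (ObjectCaptureView) then drives the session through its
// detecting / capturing states; see the GuidedCapture sample for the full flow.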
Unable to run the sample code
Hello, after installing Xcode 15 beta and running the Object Capture sample project from WWDC23, I am getting the error below: dyld[2006]: Symbol not found: _$s19_RealityKit_SwiftUI20ObjectCaptureSessionC7Combine010ObservableE0AAMc Referenced from: <35FD44C0-6001-325E-9F2A-016AF906B269> /private/var/containers/Bundle/Application/776635FF-FDD4-4DE1-B710-FC5F27D70D4F/GuidedCapture.app/GuidedCapture Expected in: <6A96F77C-1BEB-3925-B370-266184BF844F> /System/Library/Frameworks/_RealityKit_SwiftUI.framework/_RealityKit_SwiftUI I am trying to run the sample project on an iPhone 12 Pro (iOS 17.0 (21A5291j)). Any help in solving this issue would be appreciated. Thank you.
4 replies · 0 boosts · 932 views · Aug ’23
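Editor's note: a symbol-not-found error at launch like this typically means the OS on the device doesn't ship the symbol the binary was linked against, i.e. the installed iOS 17 beta and the Xcode 15 beta SDK are out of step. Independently of that, it's worth guarding the capture UI on device support, since ObjectCaptureSession only reports itself as supported on LiDAR-equipped devices. A small sketch:

import SwiftUI
import RealityKit

// Guard both the OS version and device support before showing the capture UI.
struct CaptureEntryPoint: View {
    var body: some View {
        if #available(iOS 17.0, *) {
            if ObjectCaptureSession.isSupported {
                Text("Start Object Capture")   // a real app would present ObjectCaptureView here
            } else {
                Text("Object Capture needs a device with a LiDAR scanner.")
            }
        } else {
            Text("Object Capture requires iOS 17.")
        }
    }
}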
Difference between M1 and M2 outputs
Hi! My team has been playing with Object Capture on Mac for a while, and now that we have gotten our hands on a MacBook Pro M2, we are starting to see differences between running the API on an M1 Mac and an M2 Mac. The major difference observed is that in the M2 outputs the object is correctly rotated, normal to the ground, while M1 outputs may have random rotations. This has been observed on the latest Ventura 13.5.1 on the following machines: MacBook Pro M2 Pro 32 GB, MacBook Air M1 8 GB, and Mac mini 8 GB. We have tested using both the PhotoCatch application and the HelloPhotogrammetry example (even using the same compiled binary), and we are certain we use the same options and the same files on both sides. Is this expected behavior? We would appreciate having the same rotation on both sides. Regards
0 replies · 0 boosts · 374 views · Aug ’23
Graphic Engineer
Hey there, I recently tried out the iOS 17 photogrammetry sample app. The results are very promising compared to the iOS 16 apps, and the real-world scale retention works amazingly well. However, my use case involves keeping the camera still and rotating the object instead, which was an option in iOS 16 but was unfortunately removed in iOS 17. I wonder if there's a way to do this in the iOS 17 app!
1 reply · 0 boosts · 499 views · Sep ’23
Within Apple's PhotogrammetrySession, which variables relate to real scale?
In ARKit, I captured a few color CVPixelBuffers and depth CVPixelBuffers, then ran a PhotogrammetrySession with PhotogrammetrySamples. In my service, precise real-world scale is important, so I tried to figure out what determines whether the created model comes out at real scale. I ran some experiments with the same number of images (10), the same object, the same shot angles, and controlled distances to the object (30 cm, 50 cm, 100 cm). But even with these controlled variables, sometimes it generates the model at real scale and sometimes it doesn't. Since I can't see the photogrammetry source code or how it works inside, I wonder what I am missing and how I can get real scale every time, if that's possible.
1 reply · 0 boosts · 545 views · Sep ’23
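Editor's note: as I understand the API, real-world scale comes from the depth (and gravity) attached to each PhotogrammetrySample, so it's worth verifying every sample actually carries a valid metric depth map. A hedged sketch of building samples from the ARKit buffers; the helper names and output path are illustrative.

import Foundation
import RealityKit
import CoreVideo
import simd

// Build a sample that carries depth and gravity alongside the color image.
// Without per-sample depth, the reconstruction has no metric reference.
func makeSample(id: Int,
                color: CVPixelBuffer,
                depth: CVPixelBuffer?,
                gravity: simd_float3?) -> PhotogrammetrySample {
    var sample = PhotogrammetrySample(id: id, image: color)
    sample.depthDataMap = depth   // metric depth from the LiDAR / sceneDepth buffer
    sample.gravity = gravity      // gravity vector at capture time
    return sample
}

// Feed the samples to a session and request a model.
func reconstruct(samples: [PhotogrammetrySample], outputURL: URL) throws {
    let session = try PhotogrammetrySession(input: samples)
    try session.process(requests: [.modelFile(url: outputURL, detail: .medium)])
    // In a real app, also iterate session.outputs to observe progress and errors.
}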
Question about Checkpoint Directory
Hello! I have a question about using snapshots from the iOS 17 sample app on macOS 14. I exported the "Photos" and "Snapshots" folders captured on iOS and then wrote:

let checkpointDirectoryPath = "/path/to/the/Snapshots/"
let checkpointDirectoryURL = URL(fileURLWithPath: checkpointDirectoryPath, isDirectory: true)
if #available(macOS 14.0, *) {
    configuration.checkpointDirectory = checkpointDirectoryURL
} else {
    // Fallback on earlier versions
}

But I didn't notice any speed or performance improvements. It looks like the "Snapshots" folder was simply ignored. Please advise what I can do so that the "Snapshots" folder is used during reconstruction.
0 replies · 0 boosts · 345 views · Oct ’23
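Editor's note: for reference, the two halves that need to line up are the checkpoint directory set on the capturing device and the one given to the macOS session; as far as I can tell, the snapshots are only usable when both point at the same transferred folder. A hedged sketch of both sides (they run on different platforms; paths are placeholders):

import Foundation
import RealityKit

#if os(iOS)
// Capture side: write checkpoints while capturing so they can be reused later.
func startCapture(imagesDir: URL, snapshotsDir: URL) -> ObjectCaptureSession {
    let session = ObjectCaptureSession()
    var config = ObjectCaptureSession.Configuration()
    config.checkpointDirectory = snapshotsDir      // becomes the "Snapshots" folder
    session.start(imagesDirectory: imagesDir, configuration: config)
    return session
}
#endif

#if os(macOS)
// Reconstruction side: point the session at the transferred images and snapshots.
func reconstruct(imagesDir: URL, snapshotsDir: URL, outputURL: URL) throws -> PhotogrammetrySession {
    var config = PhotogrammetrySession.Configuration()
    config.checkpointDirectory = snapshotsDir      // reuse the capture-time checkpoints
    let session = try PhotogrammetrySession(input: imagesDir, configuration: config)
    try session.process(requests: [.modelFile(url: outputURL, detail: .medium)])
    return session
}
#endif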
Downgrade iPad OS
I have several iPads that have been upgraded to 17.0.3, but I need to restore them to 16.6.1. We have apps that do not currently work on 17. I have downloaded the 16.6.1 .ipsw file, and every time I try to use it I get "OS cannot be restored on 'iPad'. Personalization failed." Is there any way to get an OS file that would work?
0 replies · 0 boosts · 337 views · Oct ’23
Object Capture : Pose Information
Hi, in the newly released Object Capture API, a PhotogrammetrySession can return the poses of the saved images, and the same images are used to create the model. But in the sample project (https://developer.apple.com/documentation/realitykit/guided-capture-sample), only the generated 3D model is saved; for the others (pose, poses, bounds, point cloud, and model entity) there is a comment saying // Not supported yet. When will this be available to developers? Can you at least give us a tentative date?
1 reply · 0 boosts · 452 views · Nov ’23
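Editor's note: while the on-device sample marks these outputs as not supported yet, my reading of the docs is that the macOS PhotogrammetrySession accepts a .poses request alongside the model request in recent SDKs. A hedged sketch; paths are placeholders.

import Foundation
import RealityKit

// Ask for camera poses in addition to the model (macOS).
let session = try PhotogrammetrySession(
    input: URL(fileURLWithPath: "/path/to/Images/", isDirectory: true)
)

Task {
    for try await output in session.outputs {
        if case .requestComplete(_, let result) = output,
           case .poses(let poses) = result {
            print("Received poses:", poses)   // per-image camera transforms
        }
    }
}

try session.process(requests: [
    .poses,
    .modelFile(url: URL(fileURLWithPath: "/path/to/model.usdz"), detail: .medium)
])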
View frame / bounds incorrect (iOS app on visionOS)
I'm running into an issue with the frame bounds of a Metal-based iOS app in the visionOS simulator. The snapshot I attached shows the result of downloading Apple's sample code and running it in the simulator (Apple Vision Pro (Designed for iPad)). Is this a bug in the simulator / iOS-to-visionOS emulation, or is the sample code doing something odd that isn't compatible with visionOS? Thanks! Eddy
1 reply · 0 boosts · 425 views · Nov ’23
How to replicate ObjectCaptureSession's boundary restriction?
Hello, I want to use Apple's PhotogrammetrySession to scan a window. However, ObjectCaptureSession seems to be a monotasker and won't allow capture to occur with anything but a small object on a flat surface. So, I need to manually feed data into PhotogrammetrySession. But when I do, it focuses way too much on the scene behind the window, sacrificing detail on the window itself. Is there a way for me to either coax ObjectCaptureSession into capturing an area on the wall, or for me to restrict PhotogrammetrySession's target bounding box manually? How does ObjectCaptureSession communicate the limited bounding box to PhotogrammetrySession? Thanks, Sebastian
1 reply · 0 boosts · 433 views · Dec ’23
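Editor's note: I don't know of a public knob that replicates ObjectCaptureSession's capture box exactly, but on macOS you can at least ask PhotogrammetrySession for its estimated bounds up front and compare them with the region you actually care about before committing to a full model request; recent SDKs also expose a geometry parameter on the model request, which may be the manual restriction being asked about, though I haven't verified its exact shape here. A hedged sketch of the bounds query; the path is a placeholder.

import Foundation
import RealityKit

// Query the estimated reconstruction bounds before requesting a model.
let session = try PhotogrammetrySession(
    input: URL(fileURLWithPath: "/path/to/WindowImages/", isDirectory: true)
)

Task {
    for try await output in session.outputs {
        if case .requestComplete(_, .bounds(let box)) = output {
            print("Estimated bounds:", box.min, box.max)
        }
    }
}

try session.process(requests: [.bounds])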