Create 3D models with Object Capture


Discuss the WWDC21 session Create 3D models with Object Capture.


Posts under wwdc21-10076 tag

63 Posts
Post not yet marked as solved
2 Replies
774 Views
I am trying to make this work, but after building the command-line app I keep getting an error when running it: https://developer.apple.com/documentation/realitykit/creating_a_photogrammetry_command-line_app
I am using a MacBook Air (2017) and Xcode 13.0 beta 4. Example error:
2021-08-09 14:43:03.714557+0530 HelloPhotogrammetry[3641:41200] Metal API Validation Enabled
2021-08-09 14:43:07.679733+0530 HelloPhotogrammetry[3641:41200] [HelloPhotogrammetry] Error creating session: cantCreateSession("A GPU that is not in low power mode is required. https://developer.apple.com/documentation/metal/mtldevice/1433409-lowpower")
Program ended with exit code: 1
Post not yet marked as solved
0 Replies
313 Views
I have a question about Object Capture, a new API from Apple. I created 3D models of a sofa, shoes, and a bag using HelloPhotogrammetry, the sample command-line application. Only the sofa model also includes a 3D model of the floor. Is there a way to avoid generating a 3D model of the floor? The photos used are HEIC, there are about 50 of them, and the runtime options are the sample defaults.
Post not yet marked as solved
0 Replies
245 Views
I have a question about Object Capture, a new API from Apple. I created 3D models of a sofa, shoes, and a bag using HelloPhotogrammetry, the sample command-line application. Only the sofa model also includes the floor and other surrounding objects. Is there any way to avoid creating 3D models of these surrounding objects? Or is there any information about a size limit below which surrounding objects are excluded from the model? The photos used for this shoot are HEIC, there are about 50 of them, and the runtime options are the sample defaults.
Post not yet marked as solved
0 Replies
218 Views
I'm working on an Object Capture app using PhotogrammetrySession, but the session cannot be created for some reason. Error message: cantCreateSession("A GPU with supportsRaytracing is required.") My Mac: Mac Pro (2019), Graphics: AMD Radeon Pro Vega II 32 GB, OS version: 12.0 Beta (21A5304g). The code runs fine on a MacBook Pro (2020).
Post not yet marked as solved
0 Replies
256 Views
I'm working on an Object Capture app using PhotogrammetrySession, but the session cannot be created for some reason. Error message: cantCreateSession("A GPU with supportsRaytracing is required.") My Mac: Mac Pro (2019), Graphics: AMD Radeon Pro Vega II 32 GB, OS version: 12.0 Beta (21A5304g). The same code works fine without error on a MacBook Pro (16-inch, 2019), Graphics: AMD Radeon Pro 5300M 4 GB.
Post not yet marked as solved
1 Reply
380 Views
I'm trying to gather some depth data to send off to Object Capture for processing. What depth file formats are supported? I can see from the capture sample code that they are written as 32-bit grayscale TIFF, converting depth to disparity. Are any other formats supported? Unfortunately, the documentation is very light on this. Would a 16-bit PNG be supported? Some more detail on this would go a long way, thank you.
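For reference, a sketch of the conversion step the question describes, on the iOS capture side (assuming the capture app uses AVFoundation's AVDepthData; the helper function name is mine, not from the sample): AVDepthData can normalize whatever pixel format the camera delivers into the 32-bit float disparity representation that gets written out as TIFF.

```swift
import AVFoundation

// Sketch: normalize captured depth to 32-bit float disparity before saving.
// `disparityData(from:)` is a hypothetical helper, not part of the sample.
func disparityData(from depth: AVDepthData) -> AVDepthData {
    // Already 32-bit float disparity? Pass it through unchanged.
    if depth.depthDataType == kCVPixelFormatType_DisparityFloat32 {
        return depth
    }
    // Otherwise convert (e.g. from 16-bit depth) to 32-bit float disparity.
    return depth.converting(toDepthDataType: kCVPixelFormatType_DisparityFloat32)
}
```

Whether Object Capture itself accepts other containers such as 16-bit PNG is not documented here; the conservative path is to convert to the format the sample demonstrates.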
Post not yet marked as solved
1 Reply
660 Views
My name is Daria. I represent a student team from Omsk, Russia. After WWDC21 we decided to experiment with the Object Capture technology to reconstruct historical museum objects and present them as an art exhibition near the museum. We talked with different museums, and our idea was supported by the Vrubel museum (http://vrubel.ru). They gave us access to their historical sculptures (dated to the 19th century). The following are the reconstructed models we created with Object Capture:
Young Woman
Psyche
Psyche with a butterfly
Cupid's head
Silvio
Deer with a branch
All together, we created a unique experience that is available through an iOS app to any person walking around the museum. Video recording of the experience. We would be glad to hear any feedback from Apple and to scale our experiment to other museums!
Post not yet marked as solved
1 Reply
389 Views
Hi, I am trying to build and run the HelloPhotogrammetry command-line app that I downloaded from here. When I run the app, I get the following error: Error creating session: cantCreateSession("A GPU that is not in low power mode is required. https://developer.apple.com/documentation/metal/mtldevice/1433409-lowpower") Does this app require a Mac with a dedicated GPU, or can I somehow run it on my MacBook Pro with an integrated GPU?
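One way to check the hardware up front (a sketch, not part of the sample) is to enumerate the Metal devices and print the two properties the error messages in these threads mention, `isLowPower` and `supportsRaytracing`:

```swift
import Metal

// List every Metal GPU on the Mac and the properties that matter here.
for device in MTLCopyAllDevices() {
    print("GPU: \(device.name)")
    print("  isLowPower: \(device.isLowPower)")                 // must be false
    print("  supportsRaytracing: \(device.supportsRaytracing)") // must be true
}
```

A MacBook with only an integrated GPU reports `isLowPower == true`, which matches the cantCreateSession error quoted above.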
Post marked as solved
1 Reply
648 Views
Is the API from "Create 3D models with Object Capture" currently only working in console apps? It works perfectly in a console app but not in a macOS GUI app, and I couldn't find such information in the documentation either. In my GUI app, I get the error: "Process got error: invalidRequest(RealityFoundation.PhotogrammetrySession.Request.modelFile(url: file:///Users/s***ik/Desktop/Pot_usdz/sam.usdz, detail: RealityFoundation.PhotogrammetrySession.Request.Detail.preview, geometry: nil), ".modelFile directory path file:///Users/snayvik/Desktop/Pot_usdz/ cannot be written to!")" Any help is appreciated.
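For comparison, a minimal sketch of a valid request (paths are placeholders): `.modelFile` wants a writable file URL ending in `.usdz`, and in a sandboxed GUI app the enclosing folder must be one the process can actually write to, e.g. a location the user picked via NSOpenPanel or a security-scoped bookmark, unlike a console app that runs with full user permissions.

```swift
import RealityKit

// Placeholder paths. In a sandboxed GUI app, outputFile's folder must come
// from a user-selected (security-scoped) location or the write will fail.
let inputFolder = URL(fileURLWithPath: "/path/to/images/", isDirectory: true)
let outputFile  = URL(fileURLWithPath: "/path/to/output/model.usdz")

do {
    let session = try PhotogrammetrySession(input: inputFolder)
    try session.process(requests: [
        .modelFile(url: outputFile, detail: .preview)
    ])
} catch {
    print("Session error: \(error)")
}
```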
Post not yet marked as solved
3 Replies
943 Views
Hi, I am trying to build and run the HelloPhotogrammetry app associated with WWDC21 session 10076 (available for download here). But when I run the app, I get the following error message: A GPU with supportsRaytracing is required. I have a Mac Pro (2019) with an AMD Radeon Pro 580X 8 GB graphics card and 96 GB of RAM. According to the requirements slide in the WWDC session, this should be sufficient. Is this a configuration issue, or do I actually need a different graphics card (and if so, which one)? Thanks in advance.
Post not yet marked as solved
0 Replies
226 Views
As the subject says, the CaptureSample app doesn't get depth data on an iPhone 12 Pro Max. The app reports that depth data is present, and captured photos are shown with the green depth-data badge attached, but the generated .tif files are always completely white, with no detail. Is there something I'm missing? Do I need to do something more to enable depth-data acquisition? I'm taking pictures of objects no more than 50 cm away.
Post marked as solved
1 Reply
756 Views
I work in the thoroughbred industry. I am interested in capturing a 3D model of a racehorse (at rest) to later use in a dataset for analysis. A recent paper (see "Body measurement of riding horses with a versatile tablet-type 3D scanning device") used an iPhone 12, a commercial app (Scandy), and LiDAR to create 3D models of the horse. It reads as a fairly straightforward process; however, I was wondering whether there is any benefit to using Object Capture over LiDAR. It would seem just as easy to walk around the horse capturing video and then extract frames from the video for Object Capture. In terms of creating 3D models, is one method better or more accurate than the other?
Post not yet marked as solved
1 Reply
278 Views
What do you mean by "start by creating a session"? I'm an architect (building designer) and a novice programmer trying to learn to use these tools to help in my design process. But I'm kind of stuck trying to start the engine, let alone drive it. Any advice would be helpful!
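In concrete terms, "creating a session" boils down to a few lines: point a PhotogrammetrySession at a folder of photos, listen to its output stream, and ask it for a model file. A sketch with placeholder paths:

```swift
import Foundation
import RealityKit

// Placeholder paths: a folder of HEIC photos in, a .usdz model out.
let photos = URL(fileURLWithPath: "/path/to/photos/", isDirectory: true)
let output = URL(fileURLWithPath: "/path/to/model.usdz")

do {
    // Step 1: create the session from the image folder.
    let session = try PhotogrammetrySession(input: photos)

    // Step 2: listen for progress and completion messages.
    Task {
        for try await message in session.outputs {
            switch message {
            case .processingComplete:
                print("Model written to \(output.path)")
            case .requestError(_, let error):
                print("Reconstruction failed: \(error)")
            default:
                break
            }
        }
    }

    // Step 3: kick off reconstruction at reduced detail.
    try session.process(requests: [.modelFile(url: output, detail: .reduced)])

    // Keep a command-line tool alive while the session works.
    RunLoop.main.run()
} catch {
    print("Could not create session: \(error)")
}
```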
Post not yet marked as solved
1 Reply
632 Views
I'm on macOS 12 (Monterey) and Xcode 13, but I still get the error "Cannot find type 'PhotogrammetrySession' in scope". I tried restarting Xcode and restarting the Mac, but I still get the error. I have imported RealityKit. I'm trying to run the HelloPhotogrammetry code provided by Apple.
Post not yet marked as solved
4 Replies
682 Views
Hi! I'm really excited to try the new ObjectCapture API. I have an iPhone 12 Pro (with LiDAR) but an old MacBook. I'm planning to get a new MacBook to run the RealityKit photogrammetry software, as shown in this example: https://developer.apple.com/videos/play/wwdc2021/10076/. Are there any restrictions on the Mac hardware, or is any Mac fine as long as it supports macOS 12.0+ beta and Xcode 13.0+? Thanks!
Post not yet marked as solved
0 Replies
242 Views
I took photos of Nike shoes using the CaptureSample app from WWDC21. The final generated model of the shoes is initially positioned upside down. How do I set the initial orientation of the model to be upright?
Post not yet marked as solved
1 Reply
317 Views
Hello, I am trying to get the Object Capture command-line example program working, but I am running into a weird error: "Cannot find 'PhotogrammetrySession' in scope." I am running Xcode 13, and I am on macOS 12.0 Beta (21A5522h). From what I have seen online, this error only occurs when attempting to use Object Capture on an older version of macOS. I am probably overlooking something obvious, but I would appreciate any help. The Object Capture command-line example program: https://developer.apple.com/documentation/realitykit/creating_a_photogrammetry_command-line_app
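Since `PhotogrammetrySession` only exists in the macOS 12 SDK, an availability guard makes the requirement explicit; if code like this still fails to compile, the project's deployment target or base SDK is older than macOS 12. A sketch:

```swift
import RealityKit

// The PhotogrammetrySession symbol only resolves when building against
// the macOS 12 SDK with a macOS 12+ deployment target.
if #available(macOS 12.0, *) {
    print("PhotogrammetrySession is available on this system.")
} else {
    print("Object Capture requires macOS 12 Monterey or later.")
}
```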