Create 3D models with Object Capture


Discuss the WWDC21 session Create 3D models with Object Capture.


Posts under wwdc21-10076 tag

63 Posts
Post not yet marked as solved
0 Replies
226 Views
As the subject says, the CaptureSample app doesn't get depth data on an iPhone 12 Pro Max. I mean, the app reports that we correctly have depth data, and captured photos are shown with the green depth-data badge attached, but the generated .tif files are always completely white, with no detail. Is there something I'm missing? Do I need to do anything more to enable depth-data acquisition? I'm taking pictures of objects no more than 50 cm away.
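One thing worth checking before assuming the capture failed: a 32-bit float TIFF usually renders as pure white in ordinary image viewers, because its values are not normalized to the display range, so the depth may be present even though the preview looks blank. For reference, a minimal sketch of converting AVDepthData to 32-bit disparity and writing it with Core Image, roughly what the capture sample does (the helper name and color-space choice here are illustrative, not taken from the sample):

```swift
import AVFoundation
import CoreImage

// Illustrative helper: convert AVDepthData to 32-bit disparity and
// write it as a single-channel float TIFF, similar to CaptureSample.
func writeDepthTIFF(_ depthData: AVDepthData, to url: URL) throws {
    // If the camera delivered depth rather than disparity, convert it.
    let disparity = depthData.converting(
        toDepthDataType: kCVPixelFormatType_DisparityFloat32)
    let image = CIImage(cvPixelBuffer: disparity.depthDataMap)
    try CIContext().writeTIFFRepresentation(
        of: image,
        to: url,
        format: .Lf,  // 32-bit float luminance
        colorSpace: CGColorSpace(name: CGColorSpace.linearGray)!)
}
```

Opening the resulting file in Preview will still look mostly white; inspect the pixel values programmatically (or normalize a copy) to verify the data is actually there.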
Post not yet marked as solved
1 Reply
660 Views
My name is Daria. I represent a student team from Omsk, Russia. After WWDC21 we decided to experiment with the Object Capture technology to reconstruct historical museum objects and place them in an art exhibition near the museum. We talked with different museums, and our idea was supported by the Vrubel Museum (http://vrubel.ru). They provided us access to their historical sculptures (dating from the 19th century). The following are the reconstructed models we created with Object Capture: Young Woman, Psyche, Psyche with a Butterfly, Cupid's Head, Silvio, Deer with a Branch. All together, we created a unique experience that is available through an iOS app to any person walking around the museum. A video recording of the experience is available. We would be glad to hear any feedback from Apple and to scale our experiment to other museums!
Posted by melamory.
Post not yet marked as solved
1 Reply
389 Views
Hi, I am trying to build and run the HelloPhotogrammetry command-line app that I downloaded from here. When I run the app, I get the following error: Error creating session: cantCreateSession("A GPU that is not in low power mode is required. https://developer.apple.com/documentation/metal/mtldevice/1433409-lowpower") Does this app require a Mac with a dedicated GPU, or can I somehow run it on my MacBook Pro with an integrated GPU?
Posted by sbrnaderi.
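Object Capture rejects GPUs whose Metal device reports isLowPower (typical of Intel integrated graphics), which is why the error links to that property. A small sketch to see what the GPUs in a given Mac report, using only standard Metal calls:

```swift
import Metal

// List every Metal device and the two properties that Object Capture's
// session creation appears to check: low-power status and ray-tracing
// support. On a Mac with only integrated Intel graphics, isLowPower is
// true, which matches the cantCreateSession error above.
for device in MTLCopyAllDevices() {
    print("\(device.name): isLowPower=\(device.isLowPower), " +
          "supportsRaytracing=\(device.supportsRaytracing)")
}
```

If every device prints isLowPower=true, the machine cannot create a PhotogrammetrySession; an external GPU or a different Mac (Apple silicon reports isLowPower=false) is needed.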
Post marked as solved
8 Replies
2.1k Views
Hi, I'm using the sample code to create a 3D object from photos using PhotogrammetrySession, but it returns this error: Error creating session: cantCreateSession("Native session create failed: CPGReturn(rawValue: -11)") The sample code I've used is this and this. Any ideas? Thanks in advance!
Posted by Rubenfern.
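For comparison, a minimal sketch of creating a session the way the samples do (the paths are hypothetical). Since cantCreateSession is thrown when the session is created, before processing starts, hardware and OS requirements are the usual suspects rather than the photos themselves:

```swift
import Foundation
import RealityKit

// Hypothetical input and output locations.
let input = URL(fileURLWithPath: "/tmp/CapturedImages/", isDirectory: true)
let output = URL(fileURLWithPath: "/tmp/model.usdz")

do {
    var config = PhotogrammetrySession.Configuration()
    config.sampleOrdering = .sequential  // photos were taken in order
    let session = try PhotogrammetrySession(input: input, configuration: config)
    try session.process(requests: [
        .modelFile(url: output, detail: .reduced, geometry: nil)
    ])
} catch {
    // cantCreateSession (unsupported or low-power GPU, etc.) lands here.
    print("Error creating session: \(error)")
}
```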
Post not yet marked as solved
3 Replies
2.1k Views
From my understanding, you capture images on an iOS device and send them to macOS, which uses photogrammetry via the Object Capture API to process them into a 3D model… Is it possible to leave macOS out and call the API within the app itself, so it does everything on device, from scanning to processing? I see there are already scanner apps on the App Store, so I know it is possible to create 3D models on the iPhone within an app, but can this API do that? If not, any resources to point me in the right direction? (I'm working on a 3D food app that scans food items and turns them into 3D models for restaurant owners… I'd like the restaurant owner to be able to scan their food item entirely within the app.)
Post not yet marked as solved
2 Replies
774 Views
I am trying to make this work, but after building the command-line app from https://developer.apple.com/documentation/realitykit/creating_a_photogrammetry_command-line_app I keep getting an error when running it. I am using a MacBook Air (2017) and Xcode 13.0 beta 4. Example error:
2021-08-09 14:43:03.714557+0530 HelloPhotogrammetry[3641:41200] Metal API Validation Enabled
2021-08-09 14:43:07.679733+0530 HelloPhotogrammetry[3641:41200] [HelloPhotogrammetry] Error creating session: cantCreateSession("A GPU that is not in low power mode is required. https://developer.apple.com/documentation/metal/mtldevice/1433409-lowpower")
Program ended with exit code: 1
Post not yet marked as solved
4 Replies
1.9k Views
Has anyone run into limitations with an 8 GB RAM M1 Mac mini? I'm expecting there are some compromises with only 8 GB, but I'm curious about real-world results. It's impressive that the M1 Mac mini is capable of running PhotogrammetrySession at all, unlike my 2020 Intel MBP, and I'm planning to buy one just for this purpose. The requirements slide from the presentation says that any M1 will work, whereas Intel machines need 16 GB RAM and a 4 GB AMD video card. I'm inclined to get a Mac mini with 16 GB, but that config isn't available near me for pickup and delivery is more than a week out. If I knew that 8 GB was enough to process 150 or so photos at high quality, that's probably all I would need; I could save $200 and get it immediately.

Side note: I've been doing photogrammetry on PCs for years and would occasionally run out of memory using Agisoft on a 64 GB system, which I needed to upgrade to 128 GB. Those were large datasets (500+ photos) covering several hundred square meters, shot from a drone at high resolution. My object-scanning needs won't be as demanding, but 8 GB just doesn't seem like much to work with. Then again, even an Nvidia 3070 Ti only has 8 GB of video memory, and the M1's unified memory architecture might make that a better comparison than traditional system memory.
Post not yet marked as solved
1 Reply
380 Views
I'm trying to gather depth data to send to Object Capture for processing. What depth file formats are supported? I can see from the capture sample code that depth is written as 32-bit grayscale TIFF, converting depth to disparity. Are any other formats supported? Unfortunately the documentation is very light on this. Would 16-bit PNG be supported? Some more detail here would go a long way, thank you.
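On the depth-to-disparity point: the conversion is just the reciprocal, disparity = 1/depth. A minimal sketch (the helper name and the plain-array representation are illustrative; the capture sample operates on CVPixelBuffers):

```swift
// Convert per-pixel depth in meters to disparity (1/meters), the
// representation the capture sample stores in its 32-bit float TIFFs.
// Zero or negative depths (invalid pixels) map to zero disparity.
func depthToDisparity(_ depthMeters: [Float]) -> [Float] {
    depthMeters.map { $0 > 0 ? 1.0 / $0 : 0.0 }
}

let disparity = depthToDisparity([0.5, 1.0, 2.0])  // [2.0, 1.0, 0.5]
```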
Post not yet marked as solved
0 Replies
256 Views
I'm working on an Object Capture app using PhotogrammetrySession, but the session cannot be created for some reason. Error message: cantCreateSession("A GPU with supportsRaytracing is required.") My Mac: Mac Pro (2019), graphics: AMD Radeon Pro Vega II 32 GB, OS version: 12.0 beta (21A5304g). The same code works fine without error on a MacBook Pro (16-inch, 2019) with AMD Radeon Pro 5300M 4 GB graphics.
Posted by Memtimen.
Post not yet marked as solved
3 Replies
612 Views
I have tested Object Capture with the iOS app and the command-line tool on macOS. I'm wondering which Apple device gives the best quality (geometry and texture), since several configurations may not give the same results. I have installed iOS 15 on an iPhone 11 Pro Max. The iOS app outputs some depth data. Which cameras are used to compute the depth? Does it use three cameras or two? If it uses only two, which pair? In theory, if only two cameras are used, the best configuration is telephoto and wide; I'm afraid that with only wide and ultra-wide, the results will be less accurate. In short, can we get the same accuracy with an iPhone 12 as with an iPad Pro? The iPad seems more ergonomic than the iPhone for measuring an object. Can the LiDAR of the iPhone 12 Pro / iPad Pro also be used to improve results?
Posted by cglinel.
Post not yet marked as solved
0 Replies
218 Views
I'm working on an Object Capture app using PhotogrammetrySession, but the session cannot be created for some reason. Error message: cantCreateSession("A GPU with supportsRaytracing is required.") My Mac: Mac Pro (2019), graphics: AMD Radeon Pro Vega II 32 GB, OS version: 12.0 beta (21A5304g). The code runs fine on a MacBook Pro (2020).
Posted by Memtimen.
Post not yet marked as solved
0 Replies
245 Views
I have a question about Object Capture, the new API from Apple. I have created 3D models of a sofa, shoes, and a bag using HelloPhotogrammetry, the sample command-line application. Only the sofa model includes the floor and other surrounding objects; is there any way to avoid reconstructing these surrounding objects? Or do you have any information about a size limit below which surrounding objects are not reconstructed? The photos I am using are HEIC, there are about 50 of them, and the runtime options are the sample defaults.
Posted by itorin.
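One approach worth trying (a hedged sketch; the input path is hypothetical, and the exact Geometry initializer should be checked in the PhotogrammetrySession.Request documentation) is to ask the session for its estimated scene bounds, tighten the box around the object, and pass it back through the optional geometry parameter of the .modelFile request so the floor and background fall outside the reconstruction volume:

```swift
import Foundation
import RealityKit

// Hypothetical input folder of captured images.
let input = URL(fileURLWithPath: "/tmp/CapturedImages/", isDirectory: true)
let session = try PhotogrammetrySession(input: input)

// First ask the session what bounds it estimates for the scene.
try session.process(requests: [.bounds])
for try await output in session.outputs {
    if case .requestComplete(_, .bounds(let box)) = output {
        // Shrink or reposition `box` so it encloses only the object,
        // then supply it via the optional `geometry` argument of the
        // .modelFile request to exclude the surroundings.
        print("Estimated bounds: \(box.min) ... \(box.max)")
    }
}
```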
Post marked as solved
1 Reply
542 Views
Hi, I have a Mac Pro and am looking to buy a Sapphire AMD RX 580 8 GB GPU (since my PowerColor R9 280X 3 GB is just shy of the minimum 4 GB requirement). I'm wondering: what if I bought two RX 580s? Would Object Capture take advantage of a dual-GPU setup, and if so, would it increase performance? PS: just to clarify, I'm not talking about a Dual-Link / CrossFire setup (since that practice is more or less dead); I'm just wondering whether Object Capture would recognize that there are two identical GPUs in the system and use both.
Posted by danalien.
Post not yet marked as solved
1 Reply
352 Views
I am using the sample code project associated with WWDC21 session 10076: Create 3D Models with Object Capture. Is there a folder of sample photos I can use for testing this project? Thank you.
Posted by mvallance.
Post marked as solved
1 Reply
454 Views
I created a 3D model using Object Capture. https://developer.apple.com/videos/play/wwdc2021/10076/ I want to know where, in the model's coordinate system, each image used to create the model was taken. Can I get this information from the PhotogrammetrySession?
Posted by sp9103.
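The session can report estimated camera poses. A hedged sketch (the input path is hypothetical, and the exact shape of the Poses payload is best checked in the RealityKit documentation): submit a .poses request and read the result from the outputs stream:

```swift
import Foundation
import RealityKit

// Hypothetical input folder of captured images.
let input = URL(fileURLWithPath: "/tmp/CapturedImages/", isDirectory: true)
let session = try PhotogrammetrySession(input: input)

// Ask for the estimated camera poses instead of (or alongside) a model.
try session.process(requests: [.poses])
for try await output in session.outputs {
    if case .requestComplete(_, .poses(let poses)) = output {
        // `poses` holds the estimated camera transform for each input
        // sample, expressed in the reconstruction's coordinate system.
        print(poses)
    }
}
```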
Post not yet marked as solved
0 Replies
243 Views
I'm running the HelloPhotogrammetry command-line app on a few of the example image sets, but the results differ from the example USDZ files. With the two Nike trainers, the front gets clipped off for some reason. Am I doing something wrong?
Posted by fnSG.