Create 3D models with Object Capture


Discuss the WWDC21 session Create 3D models with Object Capture.


Posts under wwdc21-10076 tag

63 Posts
Post marked as solved
8 Replies
2.1k Views
Hi, I'm using the sample code to create a 3D object from photos using PhotogrammetrySession, but it returns this error: Error creating session: cantCreateSession("Native session create failed: CPGReturn(rawValue: -11)"). The sample code I've used is this and this. Any ideas? Thanks in advance!
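For context, here is a minimal sketch of where that error surfaces, assuming a hypothetical folder of captured images; cantCreateSession with CPGReturn(rawValue: -11) is typically thrown at construction time when the machine doesn't meet the hardware requirements (Apple silicon, or an Intel Mac with at least 16 GB of RAM and a 4 GB discrete GPU):

```swift
import Foundation
import RealityKit

// Hypothetical input folder of HEIC/JPEG captures; adjust for your setup.
let inputFolder = URL(fileURLWithPath: "/tmp/Captures/", isDirectory: true)

do {
    var configuration = PhotogrammetrySession.Configuration()
    configuration.featureSensitivity = .normal

    // Session creation is where cantCreateSession errors are thrown.
    let session = try PhotogrammetrySession(input: inputFolder,
                                            configuration: configuration)
    _ = session // connect session.outputs and process requests from here
} catch {
    print("Error creating session: \(error)")
}
```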
Post not yet marked as solved
3 Replies
2.1k Views
From my understanding, you capture images on an iOS device and send them to macOS, which uses photogrammetry with the Object Capture API to process them into a 3D model… Is it possible to exclude macOS and use the API within the app itself, so it does all the processing in the app, from scanning to processing? I see there are already scanner apps on the App Store, so I know it is possible to create 3D models on the iPhone within an app, but can this API do that? If not, any resources to point me in the right direction? (I'm working on a 3D food app that scans food items and turns them into 3D models for restaurant owners… I'd like the restaurant owner to be able to scan their food item entirely within the app.)
Post not yet marked as solved
2 Replies
2.2k Views
In session #10076 it was mentioned, at the 3:00 mark, that USDZ, USDA, and OBJ are supported, but I haven't been able to find details on how to make the sample command-line app export .obj files, only .usdz. Does anyone have any information on that? Or any tips on how to convert a .usdz to .obj? It doesn't seem to be very easy to do.
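If it works the way the sample's argument handling suggests, the output format is inferred from the extension of the path passed to the modelFile request, so pointing the sample at a .obj path may be all that's needed. A hedged sketch with hypothetical paths:

```swift
import Foundation
import RealityKit

// Hypothetical paths; OBJ export writes several files (.obj, .mtl,
// textures), so a dedicated output folder keeps them together.
let input = URL(fileURLWithPath: "/tmp/Captures/", isDirectory: true)
let output = URL(fileURLWithPath: "/tmp/Models/MyObject.obj")

let session = try PhotogrammetrySession(input: input)
try session.process(requests: [.modelFile(url: output, detail: .medium)])
```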
Post not yet marked as solved
3 Replies
612 Views
I have tested Object Capture with the iOS app and the command-line tool on macOS. I'm wondering which Apple device gives the best quality (geometry and texture), since the various camera configurations may not give the same results. I have installed iOS 15 on an iPhone 11 Pro Max, and the iOS app outputs some depth data. Which cameras are used to compute the depth: three cameras or two? If it uses only two cameras, which pair does it use? In theory, if only two cameras are used, the best configuration is telephoto and wide; I'm afraid a configuration with only wide and ultra-wide will give less accurate results. In short, can we get the same accuracy with an iPhone 12 as with an iPad Pro? The iPad seems more ergonomic than the iPhone for capturing an object. Can the LiDAR on the iPhone 12 Pro / iPad Pro also be used to improve the results?
Post marked as solved
3 Replies
1.6k Views
Hi, I'm using the sample code from https://developer.apple.com/documentation/realitykit/creating_3d_objects_from_photographs. I copied it into a Playground in Xcode 13 beta and added imports of RealityKit and RealityFoundation, but I'm getting this error: Cannot find 'PhotogrammetrySession' in scope. I tried in a project created in Xcode 13 but get the same error. I know how dumb this may seem, but what am I missing here? I'm on macOS 11.4 and Xcode 13.0 beta (13A5154h).
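One thing worth checking: PhotogrammetrySession is only available from macOS 12 Monterey onward, so on macOS 11.4 the symbol genuinely isn't in scope even with the Xcode 13 beta installed. A small availability guard, as a sketch:

```swift
import Foundation

// PhotogrammetrySession requires macOS 12; gate any use behind an
// availability check so the code still builds for earlier deployment targets.
func objectCaptureAvailable() -> Bool {
    if #available(macOS 12.0, *) {
        return true
    }
    return false
}

print(objectCaptureAvailable() ? "Object Capture API available"
                               : "Requires macOS 12 Monterey or later")
```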
Post not yet marked as solved
9 Replies
2.1k Views
Hi there, when I run the ObjectCapture sample project on my iPad Pro 2020, depth is always disabled. Is there a way to enable it? Thanks in advance
Post not yet marked as solved
0 Replies
470 Views
I have just set up the Object Capture API using the CaptureSample app provided by Apple. Is there a way to zoom out the camera?
Post not yet marked as solved
4 Replies
1.9k Views
Has anyone run into limitations with an 8 GB RAM M1 Mac mini? I'm expecting there are some compromises with only 8 GB, but I'm curious about real-world results. It's impressive that the M1 Mac mini can run PhotogrammetrySession at all, unlike my 2020 Intel MBP, and I'm planning to buy one just for this purpose. The requirements slide from the presentation says that any M1 will work, whereas Intel machines need 16 GB of RAM and a 4 GB AMD video card. I'm inclined to get a Mac mini with 16 GB, but that config isn't available near me for pickup and delivery is more than a week out. If I knew that 8 GB was enough to process 150 or so photos at high quality, that's probably all I would need; I could save $200 and get it immediately. Side note: I've been doing photogrammetry on PCs for years and would occasionally run out of memory using Agisoft on a 64 GB system, which I needed to upgrade to 128 GB. Those were large datasets (500+ photos) covering several hundred square meters from a drone at high resolution. My object-scanning needs won't be as demanding; still, 8 GB just doesn't seem like much to work with. Then again, even an Nvidia 3070 Ti only has 8 GB of video memory, and the M1's unified memory architecture might make that a better comparison than traditional system memory.
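Whichever configuration you buy, one knob that should matter for memory headroom is the requested detail level, since lower levels keep the working set smaller. A minimal sketch, assuming hypothetical paths:

```swift
import Foundation
import RealityKit

let input = URL(fileURLWithPath: "/tmp/Captures/", isDirectory: true)
let session = try PhotogrammetrySession(input: input)

// .preview and .reduced are far lighter than .full or .raw; on a
// memory-constrained machine, start low and only rerun at higher
// detail once the geometry looks right.
try session.process(requests: [
    .modelFile(url: URL(fileURLWithPath: "/tmp/Models/draft.usdz"),
               detail: .preview),
    .modelFile(url: URL(fileURLWithPath: "/tmp/Models/final.usdz"),
               detail: .reduced)
])
```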
Post not yet marked as solved
4 Replies
842 Views
Hello, when I use Photogrammetry I receive this error: Error creating session: cantCreateSession("Native session create failed: CPGReturn(rawValue: -11)"). My specs are listed below; I think my MacBook Pro should be supported. Please let me know if I missed something or if there is a fix for the issue.
Model Name: MacBook Pro (2019)
Model Identifier: MacBookPro15,2
Processor Name: Quad-Core Intel Core i7
Processor Speed: 2.8 GHz
Number of Processors: 1
Total Number of Cores: 4
L2 Cache (per Core): 256 KB
L3 Cache: 8 MB
Hyper-Threading Technology: Enabled
Memory: 16 GB
Post not yet marked as solved
2 Replies
640 Views
Hi! In the WWDC session on Object Capture, at 17:38, they drag and edit the bounds of the object. Can someone please guide me on how to do this, or how to get started? Is there any sample code anywhere for editing bounds? https://developer.apple.com/videos/play/wwdc2021/10076/
Post marked as solved
5 Replies
1.3k Views
So I updated my Mac laptop and ran the HelloPhotogrammetry sample project; here is the crash message:
dyld[4936]: Symbol not found: _$s17RealityFoundation21PhotogrammetrySessionC13ConfigurationV13SampleOverlapO3lowyA2GmFWC
Not sure what's happening; has anyone had the same problem? Best
Post not yet marked as solved
2 Replies
883 Views
Hey, I have run several tests with masks in the given folder upon PhotogrammetrySession init. The masks do seem to be taken into account, since the results differ from when I don't provide them. Unfortunately, the results aren't as good as one would expect when masks are provided. Has anyone been able to make it work? How? Example of the ImageMagick conversion applied, and a resulting filename (my original masks are in PNG format): magick mogrify -monitor -format tif -depth 8 *.png → IMG_0001_mask.TIF
Posted by hni
Post marked as solved
3 Replies
841 Views
Hi, I'm getting this error code "-21"… does anyone know what it means?
cantCreateSession("Native session create failed: CPGReturn(rawValue: -21)")
2021-07-17 14:01:29.621817+0200 HelloPhotogrammetry[2578:40148] [HelloPhotogrammetry] Using configuration: Configuration(isObjectMaskingEnabled: true, sampleOrdering: RealityFoundation.PhotogrammetrySession.Configuration.SampleOrdering.unordered, featureSensitivity: RealityFoundation.PhotogrammetrySession.Configuration.FeatureSensitivity.normal)
2021-07-17 14:01:29.669711+0200 HelloPhotogrammetry[2578:40148] Metal API Validation Enabled
2021-07-17 14:01:29.709715+0200 HelloPhotogrammetry[2578:40148] [HelloPhotogrammetry] Error creating session: cantCreateSession("Native session create failed: CPGReturn(rawValue: -21)")
Program ended with exit code: 1
Post marked as solved
1 Reply
541 Views
Hi, I have a Mac Pro and am looking to buy a Sapphire AMD RX 580 8 GB GPU (since my PowerColor R9 280X 3 GB is just shy of the minimum 4 GB requirement). And I'm wondering: what if I bought two RX 580s? Would Object Capture take advantage of a dual-GPU setup, and if so, would it increase performance? PS. Just to clarify, I'm not talking about a Dual-Link / CrossFire setup (since that practice is kinda "dead")… just wondering if Object Capture would recognise "aaah, there are two identical GPUs in the system, let's use both…"
Post not yet marked as solved
1 Reply
242 Views
The API documentation and demo show that PhotogrammetrySession has an outputs property: https://developer.apple.com/documentation/realitykit/photogrammetrysession. But on my Mac the compiler can't find outputs.
My Mac: MacBook Pro (13-inch, 2020, Four Thunderbolt 3 ports)
Version: macOS 12.0 beta (21A5284e)
Xcode: Version 13.0 beta (13A5154h)
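For reference, outputs is an AsyncSequence of PhotogrammetrySession.Output values, so it is consumed inside an async context rather than accessed like a stored array; this also requires building against the macOS 12 SDK from Xcode 13, where both the property and the async machinery are new. A minimal sketch with a hypothetical input folder:

```swift
import Foundation
import RealityKit

let session = try PhotogrammetrySession(
    input: URL(fileURLWithPath: "/tmp/Captures/", isDirectory: true))

// outputs is an AsyncSequence; iterate it with for try await.
Task {
    for try await output in session.outputs {
        switch output {
        case .requestProgress(let request, let fraction):
            print("Progress \(request): \(fraction)")
        case .requestComplete(let request, let result):
            print("Completed \(request): \(result)")
        case .requestError(let request, let error):
            print("Failed \(request): \(error)")
        case .processingComplete:
            print("Done")
        default:
            break
        }
    }
}

try session.process(requests: [
    .modelFile(url: URL(fileURLWithPath: "/tmp/Models/out.usdz"),
               detail: .medium)
])
```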