Post not yet marked as solved
Good afternoon!
I was wondering whether it is possible to create AR objects from a sequential set of images on a completely homogeneous white background, e.g. a rotated 3D molecule from an online database? When I try this, I get the error:
RealityFoundation.PhotogrammetrySession.Request.Detail.full, geometry: nil) had an error: reconstructionFailed("Camera alignment failed -- ensure that there is sufficient overlap between the images.")
On the other hand, if I change the ordering from 'sequential' to 'unordered' the error goes away, but I get a very sloppy model that is not nearly as detailed as the molecule example (photos attached).
I am using 120 photos that capture exactly 360° around the molecule. Should I perhaps capture several rotations?
Thank you for any feedback!!
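For reference, the ordering and feature-sensitivity options live on PhotogrammetrySession.Configuration. A minimal sketch (macOS 12+; the folder path is hypothetical). Note that a perfectly homogeneous white background gives the feature matcher very little to work with, which is a likely cause of the "Camera alignment failed" error regardless of ordering:

```swift
import RealityKit
import Foundation

// Hypothetical input folder containing the rendered molecule frames.
let inputFolder = URL(fileURLWithPath: "/path/to/molecule-images", isDirectory: true)

var configuration = PhotogrammetrySession.Configuration()
// .sequential tells the solver that neighboring images are spatially
// adjacent; .unordered lets it match features freely across all images.
configuration.sampleOrdering = .sequential
// Raising feature sensitivity can help with low-texture subjects such as
// renders on a plain white background.
configuration.featureSensitivity = .high

// The session is then driven with process(requests:) as usual.
let session = try PhotogrammetrySession(input: inputFolder,
                                        configuration: configuration)
```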
So, I've modified the CaptureSample iOS app to take photos using the TrueDepth front camera. It worked perfectly, and I have TIFF depth maps together with the gravity vector and the photos I took.
Using the HelloPhotogrammetry command-line tool, I created the meshes without any problems.
I notice the meshes have a consistent size between them; for example, after creating a mesh of my face and a mesh of my nose, the nose mesh fits perfectly on top of the nose of the face mesh! Great!
BUT, when I open the meshes in Maya, for example, they are really really tiny!
I was expecting to see the objects at the proper scale, and hopefully be able to take measurements in Maya to check whether they match the real measurements of the scanned object, but they don't come out at the right size at all. I tried setting Maya to meters, centimetres and millimetres, but it always imports the meshes really tiny; I have to apply a scale of 100 just to be able to see them, and even then they don't measure correctly. By trial and error, I found that scaling the meshes by 86 makes them match real-world scale in centimetres.
Is there a proper space conversion that needs to be applied to the mesh to convert it to the real world scale?
Could the problem be that I'm using the TrueDepth camera instead of the back camera, so the depth map values come in a different scale than HelloPhotogrammetry expects?
Any images or PhotogrammetrySamples after 1000 will be rejected and ignored. This is regardless of image resolution, bit depth, and format.
This restriction is still present in macOS 12.0.1. Please remove this restriction.
Hello, I am trying to get the Object Capture command-line example program working, but I am running into a weird error: "Cannot find 'PhotogrammetrySession' in scope." I am running Xcode 13 on macOS 12.0 Beta (21A5522h). From what I have seen online, this error only occurs when attempting to use Object Capture on an older version of macOS. I am probably overlooking something obvious, but I would appreciate any help.
The object capture command line program example: (https://developer.apple.com/documentation/realitykit/creating_a_photogrammetry_command-line_app)
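Assuming the build problem is an SDK/OS mismatch, one sanity check is worth adding: PhotogrammetrySession only resolves against the macOS 12 SDK (so the deployment target and selected SDK both matter), and the class exposes a runtime flag for hardware support. A minimal sketch:

```swift
import RealityKit

// Verify at runtime that this Mac can run Object Capture.
// The symbol only exists when building against the macOS 12 SDK.
if #available(macOS 12.0, *) {
    if PhotogrammetrySession.isSupported {
        print("Object Capture is supported on this machine.")
    } else {
        print("macOS 12 is present, but the GPU does not meet the requirements.")
    }
} else {
    print("Object Capture requires macOS 12 or later.")
}
```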
I took photos of Nike shoes using the CaptureSample app from WWDC21. The final generated model of the shoes comes out upside down. How do I set the initial orientation of the model so that it faces forward?
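If re-capturing or fixing the orientation in a DCC tool isn't an option, one workaround is to rotate the loaded entity at runtime in RealityKit. A sketch (the file name is hypothetical):

```swift
import RealityKit
import simd

// Load the generated model (hypothetical file name) and flip it upright.
let shoe = try ModelEntity.loadModel(named: "shoes.usdz")

// Rotate 180° around the z-axis to turn an upside-down capture
// right side up; adjust the axis to suit your capture.
shoe.orientation = simd_quatf(angle: .pi, axis: SIMD3<Float>(0, 0, 1))
```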
Hi!
I'm really excited to try the new Object Capture API. I have an iPhone 12 Pro (with LiDAR) but an old MacBook. I'm planning to get a new MacBook to run the RealityKit photogrammetry software, as shown in this example: https://developer.apple.com/videos/play/wwdc2021/10076/.
Are there any restrictions on the Mac hardware, or is any Mac fine as long as it supports macOS 12.0+ Beta and Xcode 13.0+?
Thanks!
I'm on macOS 12 (Monterey) and Xcode 13, but I still get the error "Cannot find type 'PhotogrammetrySession' in scope".
I tried restarting Xcode and restarting the Mac, but I still get the error. I have imported RealityKit.
I'm trying to run the HelloPhotogrammetry code provided by Apple.
What do you mean by "start by creating a session"? I'm an architect (building designer) and a novice programmer trying to learn these tools to help in my design process, but I'm stuck trying to start the engine, let alone drive it. Any advice would be helpful!
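For anyone else at the same starting point: "creating a session" means constructing a PhotogrammetrySession pointed at a folder of photos, asking it for an output model, and listening to its output stream. A minimal sketch (both paths are hypothetical):

```swift
import RealityKit
import Foundation

// Hypothetical paths: a folder of photos in, a USDZ model out.
let input = URL(fileURLWithPath: "/path/to/photos", isDirectory: true)
let output = URL(fileURLWithPath: "/path/to/model.usdz")

// 1. Create the session over the photo folder.
let session = try PhotogrammetrySession(input: input)

// 2. Listen for progress and results asynchronously.
Task {
    for try await message in session.outputs {
        switch message {
        case .requestProgress(_, let fraction):
            print("Progress: \(fraction)")
        case .requestComplete(_, .modelFile(let url)):
            print("Model written to \(url)")
        case .processingComplete:
            print("Done.")
        default:
            break
        }
    }
}

// 3. Kick off reconstruction at medium detail.
try session.process(requests: [.modelFile(url: output, detail: .medium)])
```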
I work in the thoroughbred industry. I am interested in capturing a 3D model of a racehorse (at rest) to later use in a dataset for analysis.
A recent paper (see "Body measurement of riding horses with a versatile tablet-type 3D scanning device") used an iPhone 12, a commercial app (Scandy) and LiDAR to create 3D models of the horse. It reads as a fairly straightforward process; however, I was wondering whether there is any benefit to using Object Capture over LiDAR. It would seem just as easy to walk around the horse capturing video and then extract frames from the video for Object Capture.
In terms of creating 3D models, is one method better/more accurate than another?
As the subject says, the CaptureSample app doesn't get depth data on an iPhone 12 Pro Max.
That is, the app reports that depth data is captured correctly, and the photos are shown with the green depth-data badge attached, but the generated .tif files are always completely white, with no detail.
Is there something I'm missing? Do I need to do anything more to enable depth data acquisition?
I'm taking pictures of objects no more than 50 cm away.
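One thing worth ruling out, as a hedged guess: a 32-bit float depth/disparity TIFF often *previews* as plain white in Finder and Preview even when the data is fine, because the float values all sit near the top of the display range. Sampling the raw pixel values tells you whether the file actually contains data. A sketch (the file path is hypothetical):

```swift
import CoreImage
import Foundation

// Hypothetical path to one of the generated depth TIFFs.
let url = URL(fileURLWithPath: "/path/to/depth.tif")

if let image = CIImage(contentsOf: url) {
    let context = CIContext()
    var pixel = [Float32](repeating: 0, count: 4)
    // Read the raw float value of the pixel at (0, 0).
    context.render(image,
                   toBitmap: &pixel,
                   rowBytes: 4 * MemoryLayout<Float32>.size,
                   bounds: CGRect(x: 0, y: 0, width: 1, height: 1),
                   format: .RGBAf,
                   colorSpace: nil)
    print("Sample value at (0,0): \(pixel[0])")
}
```

If the sampled values vary across the image, the depth data is present and only the preview is misleading.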
Hi,
I am trying to build and run the HelloPhotogrammetry app that is associated with WWDC21 session 10076 (available for download here).
But when I run the app, I get the following error message:
A GPU with supportsRaytracing is required
I have a Mac Pro (2019) with an AMD Radeon Pro 580X 8 GB graphics card and 96 GB RAM. According to the requirements slide in the WWDC session, this should be sufficient. Is this a configuration issue, or do I actually need a different graphics card (and if so, which one)?
Thanks in advance.
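One way to see what the session is objecting to is to enumerate the Metal devices in the machine and print the two properties Object Capture appears to check. A short sketch:

```swift
import Metal

// List every GPU in the system with the properties that matter for
// Object Capture: raytracing support and whether Metal classifies the
// device as low power. A 2019 Mac Pro may expose several devices.
for device in MTLCopyAllDevices() {
    print(device.name,
          "supportsRaytracing:", device.supportsRaytracing,
          "isLowPower:", device.isLowPower)
}
```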
Hi,
Is it possible to create a 3D model by ingesting a video, the way we can with pictures?
Is there an API provided by Apple for this?
Any help is appreciated.
Thanks
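As far as I know, PhotogrammetrySession only accepts still images (or PhotogrammetrySamples), not video directly, but you can extract frames from a video yourself and feed the resulting folder to the session. A hedged sketch using AVFoundation (paths and the frame count are hypothetical; pick a count that leaves plenty of overlap between frames):

```swift
import AVFoundation
import AppKit
import Foundation

// Hypothetical input video and output folder.
let asset = AVURLAsset(url: URL(fileURLWithPath: "/path/to/walkaround.mov"))
let generator = AVAssetImageGenerator(asset: asset)
generator.appliesPreferredTrackTransform = true
// Request exact frame times rather than nearby keyframes.
generator.requestedTimeToleranceBefore = .zero
generator.requestedTimeToleranceAfter = .zero

let duration = asset.duration.seconds
let frameCount = 60  // hypothetical; more frames = more overlap

for i in 0..<frameCount {
    let time = CMTime(seconds: duration * Double(i) / Double(frameCount),
                      preferredTimescale: 600)
    let cgImage = try generator.copyCGImage(at: time, actualTime: nil)
    // Write each frame as a JPEG into the folder Object Capture will read.
    let rep = NSBitmapImageRep(cgImage: cgImage)
    let data = rep.representation(using: .jpeg, properties: [:])
    try data?.write(to: URL(fileURLWithPath: "/path/to/frames/frame\(i).jpg"))
}
```

Note that video frames usually carry motion blur and lower resolution than stills, so dedicated photos generally reconstruct better.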
Hey,
I have a working photogrammetry command-line app. Please help me embed it into my other macOS GUI app; the GUI app is complete, I just need to connect the two.
What is the appropriate way to do this?
Does the API from "Creating 3D models with Object Capture" only work in a console app for now?
It works perfectly in the console app but not in my macOS GUI app.
I couldn't find any information about this in the documentation either.
In my GUI app, I am getting the error as, "Process got error: invalidRequest(RealityFoundation.PhotogrammetrySession.Request.modelFile(url: file:///Users/s***ik/Desktop/Pot_usdz/sam.usdz, detail: RealityFoundation.PhotogrammetrySession.Request.Detail.preview, geometry: nil), ".modelFile directory path file:///Users/snayvik/Desktop/Pot_usdz/ cannot be written to!")"
Any help is appreciated.
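A hedged guess based on that error text: ".modelFile directory path … cannot be written to" usually means the output folder either doesn't exist or isn't writable by a sandboxed GUI app (a console app is typically not sandboxed, which would explain the difference). A sketch of the usual fix (the path is hypothetical):

```swift
import Foundation

// Hypothetical output location. Create the folder before processing, and
// make sure the App Sandbox entitlement allows writing there, or use a
// folder the user granted access to via NSOpenPanel.
let outputDir = URL(fileURLWithPath: "/path/to/output", isDirectory: true)
try FileManager.default.createDirectory(at: outputDir,
                                        withIntermediateDirectories: true)

let modelURL = outputDir.appendingPathComponent("model.usdz")
// Then pass modelURL to PhotogrammetrySession.Request.modelFile(url:detail:).
```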
Hi,
I am trying to build and run the HelloPhotogrammetry command line app that I have downloaded from here.
When I run the app, I get the following error:
Error creating session: cantCreateSession("A GPU that is not in low power mode is required. https://developer.apple.com/documentation/metal/mtldevice/1433409-lowpower")
Does this app require a Mac with a dedicated GPU, or can I somehow run it on my MacBook Pro with an integrated GPU?
My name is Daria. I represent a student team from Omsk, Russia.
After WWDC21 we decided to experiment with the Object Capture technology to reconstruct historical museum objects and place them in an art exhibition near the museum.
We talked with different museums, and our idea was supported by the Vrubel Museum (http://vrubel.ru). They gave us access to their historical sculptures (dating from the 19th century).
The following are the reconstructed models that we created with the Object Capture technology:
Young Woman
Psyche
Psyche with a butterfly
Cupid's head
Silvio
Deer with a branch
Altogether, we created a unique experience that is available through an iOS app to any person walking around the museum.
Video recording of the experience
We would be glad to hear any feedback from Apple and scale our experiment to other museums!
I'm trying to gather some depth data to send to Object Capture for processing. What depth file formats are supported? I can see from the capture sample code that they are written as 32-bit grayscale TIFFs, with depth converted to disparity. Are any other formats supported? Unfortunately, the documentation is very light on this. Do you know whether 16-bit PNG would be supported?
Some more detail on this would go a long way, thank you.
I'm working on an Object Capture app using PhotogrammetrySession, but the session cannot be created for some reason. Error message:
cantCreateSession("A GPU with supportsRaytracing is required.")
My Mac: Mac Pro (2019)
Graphics: AMD Radeon Pro Vega II 32 GB
OS Version: 12.0 Beta (21A5304g)
Same code on MacBook Pro (16-inch, 2019) works fine without error.
Graphics: AMD Radeon Pro 5300M 4 GB
The code also runs fine on a MacBook Pro (2020).
I have a question about Object Capture, Apple's new API. I have created 3D models of a sofa, shoes and a bag using HelloPhotogrammetry, the sample command-line application. Only the sofa's model also includes the floor and other surrounding objects.
Is there any way to avoid reconstructing these surrounding objects? Or do you have any information about the size threshold below which surrounding objects are not reconstructed?
The photos I am using for this shoot are HEIC, there are about 50 of them, and the runtime options are the sample defaults.
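One possible route, as far as I can tell from the API: when you provide PhotogrammetrySamples yourself (instead of pointing the session at an image folder), each sample can carry an objectMask telling the reconstruction which pixels belong to the subject, so the floor and nearby objects are left out. A sketch; the two helper functions are hypothetical stand-ins for your own pixel-buffer creation:

```swift
import RealityKit
import CoreVideo

// Hypothetical helpers: supply your own photo and mask pixel buffers here.
func makeImageBuffer() -> CVPixelBuffer { fatalError("supply your photo") }
func makeMaskBuffer() -> CVPixelBuffer { fatalError("supply your mask") }

// Build a sample and attach a mask: non-zero mask pixels are kept,
// zero pixels (floor, background) are excluded from reconstruction.
var sample = PhotogrammetrySample(id: 0, image: makeImageBuffer())
sample.objectMask = makeMaskBuffer()
```

The masked samples are then fed to PhotogrammetrySession through its sequence-based initializer. With plain folder input, the simpler mitigation is to shoot against a plain surface and crop the model afterwards.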