3D Graphics

Discuss integrating three-dimensional graphics into your app.

Posts under 3D Graphics tag

48 Posts
Post marked as solved
1 Reply
755 Views
I work in the thoroughbred industry. I am interested in capturing a 3D model of a racehorse (at rest) to later use in a dataset for analysis. A recent paper (see "Body measurement of riding horses with a versatile tablet-type 3D scanning device") used the iPhone 12, a commercial app (Scandy), and LiDAR to create 3D models of the horse. It reads as a fairly straightforward process, but I was wondering whether there is any benefit to using Object Capture over LiDAR. It seems it would be just as easy to walk around the horse capturing video and then extract frames from the video for Object Capture. In terms of creating 3D models, is one method better or more accurate than the other?
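The frame-extraction step mentioned above can be sketched with AVFoundation. This is a minimal macOS sketch, not Apple's recommended pipeline; the file paths, frame count, and JPEG output are assumptions to adapt (Object Capture wants sharp, well-overlapping stills, ideally with depth data).

```swift
import AVFoundation
import AppKit

// Hypothetical input/output paths.
let asset = AVURLAsset(url: URL(fileURLWithPath: "/tmp/horse.mov"))
let generator = AVAssetImageGenerator(asset: asset)
generator.appliesPreferredTrackTransform = true
// Ask for exact frame times rather than nearby keyframes.
generator.requestedTimeToleranceBefore = .zero
generator.requestedTimeToleranceAfter = .zero

let duration = asset.duration.seconds
let frameCount = 60 // roughly one still every few degrees of a walk-around
for i in 0..<frameCount {
    let time = CMTime(seconds: duration * Double(i) / Double(frameCount),
                      preferredTimescale: 600)
    let cgImage = try generator.copyCGImage(at: time, actualTime: nil)
    let rep = NSBitmapImageRep(cgImage: cgImage)
    let data = rep.representation(using: .jpeg, properties: [:])
    try data?.write(to: URL(fileURLWithPath: "/tmp/frames/frame\(i).jpg"))
}
```

The resulting folder of stills can then be fed to a PhotogrammetrySession. Note that extracted video frames lose the depth and gravity metadata that the Object Capture sample capture app records, which can affect reconstructed scale.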
Posted
by
Post not yet marked as solved
0 Replies
333 Views
I have a 3D scene with a perspective camera and I'd like some of the elements to be projected using an orthographic projection instead. My use case is that I have some 3D elements with attached text nodes. I'd like the text on these nodes to always be the same size no matter how far away the camera is. Is there a way I can use SceneKit to mix and match? Or is there another technique I can use?
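SceneKit doesn't let one camera mix projection types per node, but a common workaround is to cancel the perspective shrink by rescaling the text nodes each frame. A minimal sketch, assuming a `textNodes` array and a tuning factor of 0.02 (both hypothetical):

```swift
import SceneKit
import simd

// Keep labelled nodes a constant on-screen size under a perspective camera
// by rescaling them every frame in the renderer delegate.
class ConstantSizeDelegate: NSObject, SCNSceneRendererDelegate {
    var textNodes: [SCNNode] = []

    func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
        guard let pov = renderer.pointOfView else { return }
        for node in textNodes {
            let distance = simd_distance(node.simdWorldPosition,
                                         pov.simdWorldPosition)
            // Scale grows linearly with distance, cancelling perspective shrink.
            node.simdScale = SIMD3<Float>(repeating: distance * 0.02)
        }
    }
}
```

Combining this with an `SCNBillboardConstraint` keeps the labels both screen-sized and facing the camera.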
Posted
by
Post not yet marked as solved
1 Reply
631 Views
I'm on macOS 12 (Monterey) and Xcode 13, but I still get the error "Cannot find type 'PhotogrammetrySession' in scope". I tried restarting Xcode and restarting the Mac, but I still get the error. I have imported RealityKit. I'm trying to run the HelloPhotogrammetry code provided by Apple.
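One common cause of that error is building for the wrong platform: at this point `PhotogrammetrySession` only exists in the macOS 12 SDK, so the target must be a macOS app or command-line tool built with Xcode 13's macOS SDK, not an iOS target. A hedged sketch of a correctly-guarded call (the folder URL and `sampleOrdering` choice are placeholders):

```swift
import RealityKit

// Only compiles in a macOS target against the macOS 12 SDK.
@available(macOS 12.0, *)
func makeSession(imagesFolder: URL) throws -> PhotogrammetrySession {
    var configuration = PhotogrammetrySession.Configuration()
    configuration.sampleOrdering = .unordered // hypothetical choice
    return try PhotogrammetrySession(input: imagesFolder,
                                     configuration: configuration)
}
```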
Posted
by
Post not yet marked as solved
3 Replies
1.5k Views
My app uses SceneKit to do 3D rendering, and on the iPad Pro, it detects the 120Hz screen and lets you pick that as a target frames per second in the settings. All works well. On the iPhone 13 Pro, it can see the screen, and shows the option, but everything seems to be capped at 60Hz regardless of what you set the preferredFramesPerSecond of the SceneView to. Does anybody have an idea what I need to do on the iPhone to get this to work? Thanks!
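The iPhone behaviour described above is expected on iOS 15: ProMotion iPhones cap apps at 60 Hz unless the app explicitly opts in via an Info.plist key (this is documented by Apple under "Optimizing ProMotion refresh rates"), whereas iPad Pro does not require the opt-in:

```xml
<!-- Info.plist: opt in to frame rates above 60 Hz on ProMotion iPhones -->
<key>CADisableMinimumFrameDurationOnPhone</key>
<true/>
```

With that key set, `preferredFramesPerSecond = 120` on the SceneKit view should take effect on the iPhone 13 Pro (the system may still lower the rate for thermal or low-power reasons).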
Posted
by
Post not yet marked as solved
4 Replies
682 Views
Hi! I'm really excited to try the new Object Capture API. I have an iPhone 12 Pro (with LiDAR) but an old MacBook. I'm planning to get a new MacBook to run the RealityKit photogrammetry software, as shown in this example: https://developer.apple.com/videos/play/wwdc2021/10076/. Are there any restrictions on the Mac hardware, or is any Mac fine as long as it supports macOS 12.0+ beta and Xcode 13.0+? Thanks!
Posted
by
Post not yet marked as solved
0 Replies
366 Views
Hi guys! I'm studying Core ML conversion at the moment. I want to convert a model that handles 3D point cloud data, but I can't work out how to specify the input shape. A point cloud's shape depends on the number of points, which varies with every LiDAR capture. Is there any way to do this?
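Core ML conversion is done in Python with coremltools, which supports flexible input shapes via `RangeDim`. A minimal sketch, assuming a traced PyTorch model `traced_model` that takes a `(1, N, 3)` point tensor; the input name and bounds are hypothetical:

```python
import coremltools as ct

# RangeDim makes the point count N flexible at prediction time.
points = ct.TensorType(
    name="points",
    shape=(1, ct.RangeDim(lower_bound=1, upper_bound=120_000, default=1024), 3),
)

# `traced_model` is assumed to be a torch.jit.trace'd model.
mlmodel = ct.convert(traced_model, inputs=[points])
mlmodel.save("PointCloudModel.mlmodel")
```

The model itself must of course be able to handle a variable N (e.g. a PointNet-style architecture with a symmetric pooling step); declaring a flexible shape does not make a fixed-size network variable.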
Posted
by
Post not yet marked as solved
2 Replies
660 Views
So, I've modified the CaptureSample iOS app to take photos using the TrueDepth front camera. It worked perfectly, and I have TIF depth maps together with the gravity vector and the photos I took. Using the HelloPhotogrammetry command line tool, I created the meshes without any problems. I notice the meshes have a consistent size between them; for example, creating a mesh of my face and a mesh of my nose, the nose mesh fits perfectly on top of the nose on the face mesh! Great! BUT, when I open the meshes in Maya, for example, they are really, really tiny! I was expecting to see the objects at the proper scale, and hopefully even be able to take measurements in Maya to see if they would match the real measurements of the scanned object, but they don't seem to come in at the right size at all. I tried setting Maya to meters, centimetres and millimetres, but it always imports the meshes really tiny. I have to apply a scale of 100 to be able to see the meshes, but then they don't measure correctly. By trial and error, I found that scaling the meshes by 86 makes them match the real-world scale in centimetres. Is there a proper space conversion that needs to be applied to the mesh to convert it to real-world scale? Could the problem be that I'm using the TrueDepth camera instead of the back camera, and the depth map values are coming in a different scale than what HelloPhotogrammetry expects?
Posted
by
Post not yet marked as solved
3 Replies
909 Views
I'm using Xcode 13 after recently updating to MacOS Monterey, and only after updating am I getting this error: [MTLDebugCommandBuffer lockPurgeableObjects]:2103: failed assertion `MTLResource 0x14a8a8cc0 (label: null), referenced in cmd buffer 0x149091400 (label: null) is in volatile or empty purgeable state at commit' I haven't changed my code at all between updating to the latest OS, and it worked perfectly before. How can I fix this? I don't think there should be any reason that I can't use a command buffer on a texture resource with a volatile/empty purgeable state.
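For context on the assertion above: the Metal debug layer (which Monterey's Xcode 13 appears to enforce more strictly) rejects committing a command buffer that references a resource whose purgeable state is volatile or empty, because the system may have discarded its contents. A hedged sketch of the usual pattern, with `texture` assumed:

```swift
import Metal

// Pin the texture before encoding work that uses it.
let previous = texture.setPurgeableState(.nonVolatile)
if previous == .empty {
    // The system already discarded the contents; re-upload before rendering.
}

// ... encode and commit command buffers that reference the texture ...

// Optionally make it discardable again once the GPU work has completed.
_ = texture.setPurgeableState(.volatile)
```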
Posted
by
Post marked as solved
9 Replies
838 Views
I have two triangles (T1, T2) and their vertices. I want to find the line along which the triangles intersect. For the vertices I use SIMD3. It would be great if someone could help me with my problem.
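A sketch of the first half of the standard approach: compute the intersection line of the two triangles' supporting planes with simd. Clipping that line against the edges of each triangle (intersecting the two parameter intervals) then yields the shared segment; this sketch assumes non-degenerate, non-coplanar triangles.

```swift
import simd

// Plane through triangle (a, b, c): all x with dot(n, x) == d.
func plane(_ a: SIMD3<Double>, _ b: SIMD3<Double>, _ c: SIMD3<Double>)
    -> (n: SIMD3<Double>, d: Double) {
    let n = simd_cross(b - a, c - a)
    return (n, simd_dot(n, a))
}

// Intersection line of two planes: a point on the line and its direction,
// or nil when the planes are (nearly) parallel.
func intersectionLine(_ p1: (n: SIMD3<Double>, d: Double),
                      _ p2: (n: SIMD3<Double>, d: Double))
    -> (point: SIMD3<Double>, direction: SIMD3<Double>)? {
    let direction = simd_cross(p1.n, p2.n)
    let det = simd_length_squared(direction)
    guard det > 1e-12 else { return nil } // parallel or degenerate planes
    // Solves dot(n1, p) == d1 and dot(n2, p) == d2 simultaneously.
    let point = simd_cross(p1.d * p2.n - p2.d * p1.n, direction) / det
    return (point, direction)
}
```

For example, the planes of a triangle in z = 0 and one in x = 0 yield a line through the origin along the y-axis. For a full, robust triangle-triangle test (including the coplanar case), the Möller interval-overlap method is the usual reference.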
Posted
by
Post not yet marked as solved
2 Replies
457 Views
I've run into two (possibly related) problems involving mipmaps in BC7 RGBA Unorm textures. The first, and more serious, is a crash when uploading the last mipmap level of a texture. Thus far this has only happened on two machines, both running Catalina. Also, only certain textures cause the crash, but there doesn't seem to be anything unusual about them. From the crash reports: MacOS 10.15.7 19H1519, Intel Graphics 4000 (this is from a debug, single-threaded build) Thread 0 Crashed:: Dispatch queue: com.apple.main-thread 0 com.apple.driver.AppleIntelHD4000GraphicsMTLDriver 0x00007fff25cff33d CpuSwizzleBlt + 11667 1 com.apple.driver.AppleIntelHD4000GraphicsMTLDriver 0x00007fff25ce7ca0 -[MTLIGAccelTexture replaceRegion:mipmapLevel:slice:withBytes:bytesPerRow:bytesPerImage:] + 1387 2 com.apple.driver.AppleIntelHD4000GraphicsMTLDriver 0x00007fff25ce7e2f -[MTLIGAccelTexture replaceRegion:mipmapLevel:withBytes:bytesPerRow:] + 74 MacOS 10.15.7 19H1419, Intel Graphics 5000 (this is from a release, multi-threaded build) Thread 9 Crashed: 0  com.apple.driver.AppleIntelHD5000GraphicsMTLDriver         0x00007fff2973b9c0 CpuSwizzleBlt + 9224 1  com.apple.driver.AppleIntelHD5000GraphicsMTLDriver         0x00007fff2972714b -[MTLIGAccelTexture replaceRegion:mipmapLevel:slice:withBytes:bytesPerRow:bytesPerImage:] + 1385 2  com.apple.driver.AppleIntelHD5000GraphicsMTLDriver         0x00007fff297272c3 -[MTLIGAccelTexture replaceRegion:mipmapLevel:withBytes:bytesPerRow:] + 64 I'll post the full crash reports to Feedback Assistant. The second problem only happens on Mojave, and results in what looks like garbled pixel data in the mipmaps (I don't have access to the machine to do a frame capture). I can work around this issue by disabling mipmaps in the texture sampler. There are no Metal validation errors, and neither problem happens on Big Sur (I don't yet have a Monterey machine). 
Uncompressed textures are fine, as well, although mipmaps for those are generated on-the-fly rather than uploaded. Padding the source pixel data doesn't help, so the seg fault likely isn't caused by a too-large or unaligned read. Has anyone else run into problems with mipmaps in BC7 compressed textures?
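One workaround worth trying, on the assumption that the crash is in the driver's CPU swizzle path behind `replaceRegion`: stage the compressed bytes in a shared `MTLBuffer` and let a blit encoder do the upload on the GPU instead. A hedged sketch; `device`, `queue`, and the texture parameters are assumed:

```swift
import Metal

func uploadBC7Level(device: MTLDevice, queue: MTLCommandQueue,
                    texture: MTLTexture, level: Int,
                    pixels: [UInt8], width: Int, height: Int) {
    let blocksWide = max(1, (width + 3) / 4)
    let blocksHigh = max(1, (height + 3) / 4)
    let bytesPerRow = blocksWide * 16 // BC7: 16 bytes per 4x4 block
    let staging = device.makeBuffer(bytes: pixels,
                                    length: bytesPerRow * blocksHigh,
                                    options: .storageModeShared)!
    let commandBuffer = queue.makeCommandBuffer()!
    let blit = commandBuffer.makeBlitCommandEncoder()!
    blit.copy(from: staging, sourceOffset: 0,
              sourceBytesPerRow: bytesPerRow,
              sourceBytesPerImage: bytesPerRow * blocksHigh,
              sourceSize: MTLSize(width: width, height: height, depth: 1),
              to: texture, destinationSlice: 0, destinationLevel: level,
              destinationOrigin: MTLOrigin(x: 0, y: 0, z: 0))
    blit.endEncoding()
    commandBuffer.commit()
}
```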
Posted
by
dwn
Post not yet marked as solved
2 Replies
510 Views
I have two iMacs where MTLDevice.currentAllocatedSize is acting strange--the reported size keeps rising, despite periodically freeing resources to keep under MTLDevice.recommendedMaxWorkingSetSize. The affected iMacs are both late 2014 models running MacOS Big Sur 11.6, one with an AMD Radeon R9 M290X and the other with an AMD Radeon R9 M295X. So far none of our other Macs have shown this behaviour, which suggests this may be an API or driver problem. I do have the option of using my own resource size estimates, but that's likely not as accurate as what the system reports, assuming MTLDevice.currentAllocatedSize is working properly. Any suggestions?
Posted
by
dwn
Post marked as solved
2 Replies
458 Views
I want to embed a 3D object on a website, but I don't know how. I don't mean AR, as on Apple's product pages; I mean something like the rotating globe on github.com's top page. In short, I want to display a 3D object inline, without any page jump. This question may not quite belong here on the Apple Developer Forums, but I'd appreciate an answer all the same.
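One common approach (not an Apple API) is a WebGL viewer such as Google's `<model-viewer>` web component, which renders a glTF/GLB model inline with no page jump; github.com's globe is custom WebGL along the same lines. A minimal sketch with a placeholder model URL:

```html
<script type="module"
        src="https://unpkg.com/@google/model-viewer/dist/model-viewer.min.js"></script>

<!-- "model.glb" is a placeholder; camera-controls enables drag-to-orbit. -->
<model-viewer src="model.glb"
              camera-controls auto-rotate
              style="width: 400px; height: 400px;">
</model-viewer>
```

On Apple platforms specifically, linking a USDZ file opens AR Quick Look, which does navigate away from the page; for in-page rendering a WebGL-based viewer is the usual route.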
Posted
by
Post not yet marked as solved
1 Reply
355 Views
Will iPad ever receive these tools for Object Capture? Or at the very least Xcode, for the ability to use the command line apps for it? I have an M1 iPad Pro that should be able to do everything the M1 Macs can, but it's being held back by software limitations.
Posted
by
Post not yet marked as solved
0 Replies
212 Views
Total newbie here, and I recognize that this isn't strictly a developer question, but it is where Photo Capture led me. So... we're having trouble with the OBJ files generated by Photo Catch being recognized by Cinema 4D. Is anyone having the same issue? There's not a lot to adjust in Photo Catch, so I'm not entirely sure where we would be going wrong here. If this isn't the right place for these questions, can you steer me to where I should go? Much obliged!
Posted
by
Post not yet marked as solved
1 Reply
268 Views
I'm trying to populate a scene view inside a table cell (for 3D objects) as a subview. The first load works fine, but when new child nodes are added or updated, the view doesn't change; it stays static (objects are not updated on screen). Using the renderer delegate I can see that the child nodes' values are updated, but the view only refreshes after I switch to a different scene and back. How can I update or refresh the subview continuously?
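SceneKit normally redraws only when it believes the scene changed, and a view embedded in a reusable cell can miss externally-driven node updates. A hedged sketch of two common workarounds; `sceneView` (the SCNView in the cell) and `nodeToUpdate` are assumed names:

```swift
import SceneKit

// Force the render loop to run every frame instead of on-demand.
sceneView.rendersContinuously = true
sceneView.isPlaying = true

// Also wrap programmatic node changes in a transaction so the renderer
// picks them up (and can optionally animate them).
SCNTransaction.begin()
nodeToUpdate.position = SCNVector3(0, 1, 0)
SCNTransaction.commit()
```

Also make sure the node mutations happen on the main thread (or inside the renderer delegate callbacks), since cross-thread scene graph changes can silently fail to invalidate the view.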
Posted
by
Post not yet marked as solved
0 Replies
228 Views
The same graphics program runs in a web view, and the iPhone 12 Pro shows 15%+ more GPU usage than the iPhone 11 Pro. In theory, the performance of the A14 should be stronger than that of the A13. Is it possible that the extra 45 × 96 pixels of screen resolution alone increase GPU overhead by 15%+? For the fragment profiling in Xcode Instruments, please see the screenshot.
Posted
by