Is there a way, based on Metal 2, to do multi indirect draw like DX12's ExecuteIndirect?
Will iPad ever receive these tools for Object Capture? Or at the very least Xcode, so the command-line apps for it can be used? I have an M1 iPad Pro that should be able to do everything the M1 Macs can, but it's being held back by software limitations.
I have two iMacs where MTLDevice.currentAllocatedSize is behaving strangely: the reported size keeps rising, even though we periodically free resources to stay under MTLDevice.recommendedMaxWorkingSetSize.
The affected iMacs are both late 2014 models running macOS Big Sur 11.6, one with an AMD Radeon R9 M290X and the other with an AMD Radeon R9 M295X.
So far none of our other Macs have shown this behaviour, which suggests this may be an API or driver problem.
I do have the option of using my own resource size estimates, but that's likely not as accurate as what the system reports, assuming MTLDevice.currentAllocatedSize is working properly.
Any suggestions?
I am getting an error from the graphics driver while converting the EnvironmentTexture (from ARKit.AREnvironmentProbeAnchor) to a CVPixelBuffer. The EnvironmentTexture is an IMTLTexture. I am using Xamarin.iOS.
This is the code I use to convert the IMTLTexture to a CVPixelBuffer:
buffers[i] = new CVPixelBuffer((nint)epAnchor.EnvironmentTexture.Width, (nint)epAnchor.EnvironmentTexture.Height, CVPixelFormatType.CV32RGBA);
GetEnvironmentTextureSlice(buffers[i], epAnchor.EnvironmentTexture, i);
public void GetEnvironmentTextureSlice(CVPixelBuffer pixelBuffer, Metal.IMTLTexture texture, int slice)
{
    // Environment textures are cube maps, so read the requested face via the
    // overload that takes a slice; the simpler overload only reads slice 0.
    // The region should match the texture's actual size rather than a hardcoded 256x256.
    Metal.MTLRegion mtlRegion = Metal.MTLRegion.Create2D((nuint)0, 0, texture.Width, texture.Height);
    nuint bytesPerPixel = 4;
    nuint bytesPerImage = bytesPerPixel * texture.Width * texture.Height;
    pixelBuffer.Lock(CVPixelBufferLock.None);
    // Pass the CVPixelBuffer's actual (possibly padded) row stride; a stride that
    // disagrees with what the driver expects triggers the bytes_per_row assertion.
    // Also verify that texture.PixelFormat actually matches the buffer's pixel format.
    texture.GetBytes(pixelBuffer.BaseAddress, (nuint)pixelBuffer.BytesPerRow, bytesPerImage, mtlRegion, 0, (nuint)slice);
    pixelBuffer.Unlock(CVPixelBufferLock.None);
}
The error I am getting from the driver is AGX: Texture read/write assertion failed: bytes_per_row = used_bytes_per_row
I tried different values of pixelBuffer.BytesPerRow but am still getting the error. Can someone help me?
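For reference, the stride mismatch that this assertion complains about usually comes from the fact that a CVPixelBuffer pads each row for alignment, so its BytesPerRow can be larger than width * bytesPerPixel. The arithmetic can be sketched in plain Python (an illustration of the layout issue, not the Metal API; the 64-byte alignment is an assumed example value):

```python
def padded_bytes_per_row(width, bytes_per_pixel, alignment=64):
    """CVPixelBuffer-style row stride: width * bpp rounded up to the alignment."""
    tight = width * bytes_per_pixel
    return ((tight + alignment - 1) // alignment) * alignment

def copy_rows(src, width, height, bytes_per_pixel, dst_stride):
    """Copy tightly packed source rows into a padded destination, row by row.
    This is the safe fallback when source and destination strides disagree."""
    tight = width * bytes_per_pixel
    dst = bytearray(dst_stride * height)
    for y in range(height):
        dst[y * dst_stride : y * dst_stride + tight] = src[y * tight : (y + 1) * tight]
    return bytes(dst)
```

If reading the whole texture with the buffer's stride keeps asserting, copying into a tightly packed intermediate buffer and then transferring row by row, as above, sidesteps the mismatch at the cost of an extra copy.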
I want to put a 3D object on a website, but I don't know how.
I don't mean AR like Apple's product pages.
I mean something like the 3D earth on github.com's top page.
In short, I want to embed a 3D object without any page-jumping.
This question may not belong here on the Apple Developer Forums, but I'd appreciate an answer all the same.
Hi
I'm trying to become familiar with RealityKit 2. I'm trying to build the code in the session, but I'm getting compile errors.
Any advice?
Link to the sample code below
https://developer.apple.com/documentation/realitykit/building_an_immersive_experience_with_realitykit
Hello,
It looks like my previous question was closed without being resolved.
https://developer.apple.com/forums/thread/668171
Below are FPS values from our new benchmark.
Indirect command buffers are not working properly, so there is no way to emulate multi-draw-indirect-count functionality other than issuing a loop of indirect draw commands. As the numbers below show, the same hardware runs roughly three times slower under Metal because of this, and Apple M1 performance is worse than that of AMD integrated graphics.
We have a buffer with multiple draw commands. How should we render it efficiently under Metal?
AMD Vega 56 eGPU:
Direct3D12: 94.0
Direct3D11: 87.2
Vulkan: 91.1
Metal: 35.8
AMD Ryzen™ 7 4800H:
Direct3D12: 21.1
Direct3D11: 19.4
Vulkan: 20.5
Apple M1:
Metal: 16.9
Thank you
I have two triangles (T1, T2) and their vertices. I want to find the line at which the triangles intersect. For the vertices I use SIMD3. It would be great if someone could help me with my problem.
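The first step of the standard approach is to intersect the two triangles' supporting planes; the actual intersection segment then requires clipping that line against both triangles, which is omitted here. A minimal sketch of the plane-plane step (plain tuples standing in for SIMD3; illustrative, not library code):

```python
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def sub(a, b):
    return (a[0]-b[0], a[1]-b[1], a[2]-b[2])

def plane_intersection_line(t1, t2, eps=1e-9):
    """Return (point, direction) of the line where the supporting planes of
    triangles t1 and t2 meet, or None if the planes are parallel/degenerate.
    t1 and t2 are triples of 3D points."""
    n1 = cross(sub(t1[1], t1[0]), sub(t1[2], t1[0]))   # plane normals
    n2 = cross(sub(t2[1], t2[0]), sub(t2[2], t2[0]))
    d = cross(n1, n2)                                  # line direction
    denom = dot(d, d)
    if denom < eps:
        return None
    d1, d2 = dot(n1, t1[0]), dot(n2, t2[0])            # plane offsets n.x = d
    # A point lying on both planes: (d1*(n2 x d) + d2*(d x n1)) / |d|^2
    p = tuple((d1*a + d2*b) / denom
              for a, b in zip(cross(n2, d), cross(d, n1)))
    return p, d
```

To get the finite intersection segment, project each triangle's vertices onto this line, compute the interval each triangle covers, and intersect the two intervals (the Möller triangle-triangle test follows this scheme).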
I've run into two (possibly related) problems involving mipmaps in BC7 RGBA Unorm textures.
The first, and more serious, is a crash when uploading the last mipmap level of a texture. Thus far this has only happened on two machines, both running Catalina. Also, only certain textures cause the crash, but there doesn't seem to be anything unusual about them. From the crash reports:
macOS 10.15.7 19H1519, Intel Graphics 4000 (this is from a debug, single-threaded build)
Thread 0 Crashed:: Dispatch queue: com.apple.main-thread
0 com.apple.driver.AppleIntelHD4000GraphicsMTLDriver 0x00007fff25cff33d CpuSwizzleBlt + 11667
1 com.apple.driver.AppleIntelHD4000GraphicsMTLDriver 0x00007fff25ce7ca0 -[MTLIGAccelTexture replaceRegion:mipmapLevel:slice:withBytes:bytesPerRow:bytesPerImage:] + 1387
2 com.apple.driver.AppleIntelHD4000GraphicsMTLDriver 0x00007fff25ce7e2f -[MTLIGAccelTexture replaceRegion:mipmapLevel:withBytes:bytesPerRow:] + 74
macOS 10.15.7 19H1419, Intel Graphics 5000 (this is from a release, multi-threaded build)
Thread 9 Crashed:
0 com.apple.driver.AppleIntelHD5000GraphicsMTLDriver 0x00007fff2973b9c0 CpuSwizzleBlt + 9224
1 com.apple.driver.AppleIntelHD5000GraphicsMTLDriver 0x00007fff2972714b -[MTLIGAccelTexture replaceRegion:mipmapLevel:slice:withBytes:bytesPerRow:bytesPerImage:] + 1385
2 com.apple.driver.AppleIntelHD5000GraphicsMTLDriver 0x00007fff297272c3 -[MTLIGAccelTexture replaceRegion:mipmapLevel:withBytes:bytesPerRow:] + 64
I'll post the full crash reports to Feedback Assistant.
The second problem only happens on Mojave, and results in what looks like garbled pixel data in the mipmaps (I don't have access to the machine to do a frame capture). I can work around this issue by disabling mipmaps in the texture sampler.
There are no Metal validation errors, and neither problem happens on Big Sur (I don't yet have a Monterey machine). Uncompressed textures are fine, as well, although mipmaps for those are generated on-the-fly rather than uploaded.
Padding the source pixel data doesn't help, so the seg fault likely isn't caused by a too-large or unaligned read.
Has anyone else run into problems with mipmaps in BC7 compressed textures?
As the title says, I want to create a 3D human model. Is there an API I can use for this?
Has anyone worked out how to create OBJ files instead of USDZ?
Hi,
I am trying to build and run the HelloPhotogrammetry app that is associated with WWDC21 session 10076 (available for download here).
But when I run the app, I get the following error message:
A GPU with supportsRaytracing is required
I have a Mac Pro (2019) with an AMD Radeon Pro 580X 8 GB graphics card and 96 GB RAM. According to the requirements slide in the WWDC session, this should be sufficient. Is this a configuration issue, or do I actually need a different graphics card (and if so, which one)?
Thanks in advance.
Hi guys!
I'm studying CoreML converting now.
I want to convert a model that processes 3D point-cloud data, but I can't figure out how to specify the input shape.
The shape of a point-cloud data set depends on the number of points, which varies every time the LiDAR captures data.
Is there any way to handle this?
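coremltools does support flexible input shapes (e.g. RangeDim or EnumeratedShapes on the input TensorType), which is the first thing to try. A simpler, model-agnostic workaround is to resample every cloud to a fixed point count before it reaches the model, so the converted model can use a static shape. A hedged sketch (the function name and strategy are illustrative, not a CoreML API):

```python
import random

def fix_point_count(points, n, seed=0):
    """Resample a variable-length point cloud to exactly n points:
    randomly subsample when there are too many points, and pad by
    sampling with replacement when there are too few."""
    rng = random.Random(seed)
    if len(points) >= n:
        return rng.sample(points, n)
    return list(points) + [rng.choice(points) for _ in range(n - len(points))]
```

Many point-cloud networks (PointNet-style architectures, for instance) are trained on fixed-size clouds produced exactly this way, so resampling at inference time usually matches the training distribution.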
Hi!
I'm really excited to try the new ObjectCapture API. I have an iPhone 12 Pro (with LiDAR) but an old MacBook. I'm planning to get a new MacBook to run the RealityKit and photogrammetry software, as shown in this example: https://developer.apple.com/videos/play/wwdc2021/10076/.
Are there any restrictions on the Mac hardware or is it fine as long as they support macOS 12.0+ Beta and Xcode 13.0+?
Thanks!
My app uses SceneKit to do 3D rendering, and on the iPad Pro, it detects the 120Hz screen and lets you pick that as a target frames per second in the settings. All works well.
On the iPhone 13 Pro, it detects the 120Hz display and shows the option, but everything seems to be capped at 60Hz regardless of what you set the preferredFramesPerSecond of the SceneView to.
Does anybody have an idea what I need to do on the iPhone to get this to work? Thanks!
I'm on macOS 12 (Monterey) and Xcode 13, but I still get the error "Cannot find type 'PhotogrammetrySession' in scope".
I tried restarting Xcode and restarting the Mac, but I still get the error. I have imported "RealityKit".
I'm trying to run the HelloPhotogrammetry code provided by Apple.
I have a 3D scene with a perspective camera and I'd like some of the elements to be projected using an orthographic projection instead.
My use case is that I have some 3D elements with attached text nodes. I'd like the text on these nodes to always be the same size no matter how far away the camera is. Is there a way I can use SceneKit to mix and match? Or is there another technique I can use?
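One workaround that avoids mixing projections entirely: under a perspective camera, an object's projected size falls off linearly with its distance from the camera, so scaling the text node in proportion to that distance keeps its screen size constant. A minimal sketch of the math (plain Python, not SceneKit API; the function name is illustrative):

```python
def constant_screen_scale(camera_pos, node_pos, reference_distance=1.0):
    """Scale factor that keeps a node's projected size constant under a
    perspective camera: projected size ~ 1/distance, so scale ~ distance.
    reference_distance is where the node appears at its authored size."""
    dx, dy, dz = (n - c for n, c in zip(node_pos, camera_pos))
    distance = (dx*dx + dy*dy + dz*dz) ** 0.5
    return distance / reference_distance
```

In SceneKit this could be evaluated per frame (for example in a renderer delegate) and applied to the text node's scale; alternatively, rendering the labels in a screen-space overlay is another common way to keep them a fixed size.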
I know it's uncool to ask vague questions here, but what is it called when you create a world and follow it with a camera in Swift? Like an RPG? Like Doom?
I want to try to learn that now. More importantly, can it be done without using the Xcode scene editor, entirely in code?
Thanks, as always. Without the forum I would never have gotten much farther than "Hello World!"
I work in the thoroughbred industry. I am interested in capturing a 3D model of a racehorse (at rest) to later use in a dataset for analysis.
A recent paper (see "Body measurement of riding horses with a versatile tablet-type 3D scanning device") used the iPhone 12, a commercial app (Scandy), and LiDAR to create 3D models of horses. It reads as a fairly straightforward process, but I was wondering whether there is any benefit to using Object Capture over LiDAR. It would seem just as easy to walk around the horse capturing video and then extract frames from the video for Object Capture.
In terms of creating 3D models, is one method better/more accurate than another?