MetalKit


Render graphics in a standard Metal view, load textures from many sources, and work efficiently with models provided by Model I/O using MetalKit.

MetalKit Documentation

Posts under MetalKit tag

59 Posts
Post not yet marked as solved
0 Replies
84 Views
Where do I start with this error? I am using the Metal Debugger and have a bunch of stuck command buffers. How do I inspect the command buffers to see the errors? My suspicion is that the cause is some sort of memory leak, and not having access to the source for Metal leaves me stuck. The following message shows up in the logging pane during execution:

Execution of the command buffer was aborted due to an error during execution. Ignored (for causing prior/excessive GPU errors) (00000004:kIOGPUCommandBufferCallbackErrorSubmissionsIgnored)
Type: Error | Timestamp: 2024-04-11 14:16:13.464336-05:00 | Library: Metal | Subsystem: Metal | Category: Default | TID: 0x2a0b8c

I just need some guidance.
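A minimal sketch of enabling Metal's extended command-buffer error reporting, which surfaces per-encoder error state in the completion handler (assumes an existing MTLCommandQueue named commandQueue):

import Metal

let descriptor = MTLCommandBufferDescriptor()
descriptor.errorOptions = .encoderExecutionStatus   // opt in to detailed per-encoder error tracking

if let commandBuffer = commandQueue.makeCommandBuffer(descriptor: descriptor) {
    commandBuffer.addCompletedHandler { buffer in
        // After a GPU fault, the error's userInfo describes the state each encoder was in.
        if let error = buffer.error as NSError? {
            print("Command buffer failed: \(error.localizedDescription)")
            let infos = error.userInfo[MTLCommandBufferEncoderInfoErrorKey] as? [MTLCommandBufferEncoderInfo]
            for info in infos ?? [] {
                print("Encoder \(info.label): error state \(info.errorState.rawValue)")
            }
        }
    }
    // ... encode work as usual ...
    commandBuffer.commit()
}

With errorOptions set, the faulting encoder should show up with a specific error state instead of the generic "Ignored" message, which can help narrow down where the GPU fault originates.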
Posted by mfstanton. Last updated.
Post not yet marked as solved
3 Replies
159 Views
It appears that when a class adopts multiple delegate protocols, for example:

class RoomCaptureViewController: UIViewController, RoomCaptureViewDelegate, ARSCNViewDelegate, MTKViewDelegate, ARSessionDelegate, RoomCaptureSessionDelegate

the delivery of each message to a delegate follows some priority-sensitive ordering, and a given message is processed by only one delegate rather than being passed along to the other delegates if they lack the proper entry points. Specifically, I noticed that changing the order of the protocols seems to result in a delegate not receiving a message it should be seeing. Is there a "handoff" call a delegate can make after it has seen a message but needs to pass it off to another delegate for processing? This pattern is commonly used in interrupt handlers for PCIe and other messaging protocols, and I have not been able to find a similar capability in the voluminous documentation available for iOS and macOS. I would also like to know how a message is dispatched by a class to the particular delegate for which it was intended. Is there a detailed document that explains how this messaging protocol works that is not so fragmented as to require multiple monitors open just to form a coherent picture of the delegate messaging interface for a class?
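For reference, UIKit and MetalKit route each callback to the single object assigned to the relevant delegate property, not by any priority algorithm among adopted protocols; any handoff to a second object has to be written by hand. A minimal sketch, with a hypothetical secondaryHandler object, of forwarding MTKViewDelegate callbacks:

import MetalKit

final class PrimaryDelegate: NSObject, MTKViewDelegate {
    // Hypothetical second object that also wants to react to the same callbacks.
    var secondaryHandler: MTKViewDelegate?

    func mtkView(_ view: MTKView, drawableSizeWillChange size: CGSize) {
        // Handle the message here first...
        // ...then forward it by hand; MetalKit itself only calls the assigned delegate.
        secondaryHandler?.mtkView(view, drawableSizeWillChange: size)
    }

    func draw(in view: MTKView) {
        // Encode and present a frame here, then forward if needed.
        secondaryHandler?.draw(in: view)
    }
}

The object that receives the messages is simply whatever was assigned, e.g. mtkView.delegate = primaryDelegate.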
Posted by mfstanton. Last updated.
Post not yet marked as solved
1 Reply
164 Views
I'm brand new to Metal. I've googled, but can't get the right answer to come up. (Thanks, unhelpful ChatGPT-generated answers polluting everything, but I digress...) Ultimately, I'm trying to figure out how to use Metal to render 3D DICOM data on iOS specifically. If you're not familiar with DICOM, let's just say I've got a whole stack of CT image slices. Or, to keep it really simple, I've got a cube of voxel values with a different value at each voxel coordinate. Where do I even start in Metal to render something like this? (I was trying to get the VTK toolkit compiled for iOS, which uses OpenGL, but that appears to be a dead end. And besides, Metal is supposed to be so much better.) Thanks for any tips/leads/suggestions/general pointers.
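Not an answer to the full volume-rendering pipeline, but a minimal sketch of one common starting point: uploading the CT voxel stack into a 3D texture that a shader can then raymarch. It assumes the voxel values are already in a contiguous [UInt16] array named voxels with known dimensions:

import Metal

func makeVolumeTexture(device: MTLDevice, voxels: [UInt16],
                       width: Int, height: Int, depth: Int) -> MTLTexture? {
    let descriptor = MTLTextureDescriptor()
    descriptor.textureType = .type3D
    descriptor.pixelFormat = .r16Unorm        // one normalized channel per voxel
    descriptor.width = width
    descriptor.height = height
    descriptor.depth = depth
    descriptor.usage = .shaderRead

    guard let texture = device.makeTexture(descriptor: descriptor) else { return nil }

    let bytesPerRow = width * MemoryLayout<UInt16>.stride
    let bytesPerImage = bytesPerRow * height
    voxels.withUnsafeBytes { buffer in
        texture.replace(region: MTLRegionMake3D(0, 0, 0, width, height, depth),
                        mipmapLevel: 0,
                        slice: 0,
                        withBytes: buffer.baseAddress!,
                        bytesPerRow: bytesPerRow,
                        bytesPerImage: bytesPerImage)
    }
    return texture
}

A fragment or compute shader can then march rays through this texture to produce a maximum-intensity projection or a full volume rendering of the CT stack.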
Posted. Last updated.
Post not yet marked as solved
0 Replies
131 Views
I'm trying to follow the metal-cpp tutorials I've found at https://developer.apple.com/metal/sample-code/?q=learn The program seems to launch correctly (I can see the menu bar and interact with it), but nothing is rendered inside the window. I suppose the culprit is somewhere in the following function (it binds the device, the view and the window to the object in charge of drawing into the view):

void core::Application::applicationDidFinishLaunching(NS::Notification *pNotification)
{
    CGRect frame = (CGRect){{100.0, 100.0}, {512.0, 512.0}};

    m_Window->init(frame,
                   NS::WindowStyleMaskClosable | NS::WindowStyleMaskTitled,
                   NS::BackingStoreBuffered,
                   false);

    m_Device = MTL::CreateSystemDefaultDevice();

    m_View = MTK::View::alloc()->init(frame, m_Device);
    m_View->setColorPixelFormat(MTL::PixelFormat::PixelFormatBGRA8Unorm);
    m_View->setClearColor(MTL::ClearColor::Make(1.0, 0.0, 0.0, 1.0));

    m_ViewDelegate = new graphics::ViewDelegate(m_Device);
    m_View->setDelegate(m_ViewDelegate);

    m_Window->setContentView(m_View);
    m_Window->setTitle(NS::String::string("Template 1", NS::StringEncoding::UTF8StringEncoding));
    m_Window->makeKeyAndOrderFront(nullptr);

    NS::Application* nsApp = reinterpret_cast<NS::Application*>(pNotification->object());
    nsApp->activateIgnoringOtherApps(true);
}

but, as you can infer from the fact that I'm failing at the very first tutorial of the bunch, I'm quite lost. I've tried debugging the app with the Xcode debugger and I saw that it never enters this function:

void ViewDelegate::drawInMTKView(MTK::View *pView)
{
    m_Renderer->Draw(pView);
}

Can it be a symptom of some call missing from my code? Thank you in advance for your help.
Posted by p_Each. Last updated.
Post not yet marked as solved
2 Replies
198 Views
I have provided a test UIKit app which displays three different images, side by side, each inside a separate MTKView. Each image is tagged with a different color profile: Display P3, uRGB, and Test RGB (from an image supplied in Apple's ImageApp sample).

I set up default values for all color spaces and formats. I then check whether the image is tagged and, if so, override those values with state from the tagged color space. The variables I am setting:

"workingColorSpace" in the Metal CIContext, default = sRGB
"workingFormat" in the Metal CIContext, default = RGBAf
"outputColorSpace" in the Metal CIContext, default = displayP3
"colorPixelFormat" in the MTKView, default = bgra8Unorm
"colorSpace" in a CIRenderDestination that I use in the MTKView delegate draw method; the default value = CGColorSpaceCreateDeviceRGB()

I also set "pixelFormat" in the CIRenderDestination from MTKView.colorPixelFormat. If the image is tagged, I override the following values with the tagged colorSpace:

CIContext.workingColorSpace
CIContext.outputColorSpace
CIRenderDestination.colorSpace

If the tagged colorSpace.isWideGamutRGB == true, then I set CIRenderDestination.colorSpace to extendedSRGB, ignoring the color space in the tagged wide-gamut color space, and set colorPixelFormat = bgr10_xr.

Results: the above scenario properly renders the Display P3 image and the uRGB image, but the "Test RGB" image fails. If I do not override CIRenderDestination.colorSpace with a value from the tagged image, then the "Test RGB" image succeeds, but the "uRGB" image fails to render properly.

Question: do I have everything hooked up correctly and, if so, why does one image fail and the other succeed?

Link to sample project: https://www.dropbox.com/scl/fi/57u2fcrgdvys7jtzykzxt/ColorSpaceTest.zip?rlkey=unjeeiu7mi0wx9wfpylt78nwd&dl=0
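For reference, a minimal sketch of the color-management hookup described above. The option values shown are the defaults listed in the post, not a recommendation, and device, commandQueue, image, and taggedColorSpace are assumed to exist elsewhere; view.framebufferOnly is assumed to be false so Core Image can write into the drawable's texture:

import CoreImage
import MetalKit

// Metal-backed Core Image context with explicit color management.
let context = CIContext(mtlDevice: device, options: [
    .workingColorSpace: CGColorSpace(name: CGColorSpace.sRGB)!,
    .workingFormat: NSNumber(value: CIFormat.RGBAf.rawValue),
    .outputColorSpace: CGColorSpace(name: CGColorSpace.displayP3)!
])

// Inside the MTKView delegate's draw method.
func draw(in view: MTKView) {
    guard let drawable = view.currentDrawable,
          let commandBuffer = commandQueue.makeCommandBuffer() else { return }

    let destination = CIRenderDestination(mtlTexture: drawable.texture,
                                          commandBuffer: commandBuffer)
    // Override with the image's tagged color space when one is present.
    destination.colorSpace = taggedColorSpace ?? CGColorSpaceCreateDeviceRGB()

    _ = try? context.startTask(toRender: image, to: destination)
    commandBuffer.present(drawable)
    commandBuffer.commit()
}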
Posted. Last updated.
Post not yet marked as solved
4 Replies
344 Views
I am working on an application where we plan to use Metal to render custom content directly. When the user looks at something in the rendered image, I want to get the position or ray of the cursor (the point the user is currently looking at) so I can render something else there, like a crosshair. Is it possible to get this cursor position information on visionOS? How can I know whether something is being hovered over by the eyes?
Posted. Last updated.
Post not yet marked as solved
7 Replies
340 Views
I want to draw a pixel buffer directly to the screen with the Metal API. In OpenGL I can use glDrawPixels; how do I do the equivalent in Metal?
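There is no direct glDrawPixels equivalent; a minimal sketch of one common substitute is to copy the CPU pixels into an MTLTexture and blit it into the MTKView's drawable. This assumes a BGRA8 pixel buffer whose size matches the drawable and that view.framebufferOnly has been set to false:

import MetalKit

func present(pixels: UnsafeRawPointer, width: Int, height: Int,
             in view: MTKView, queue: MTLCommandQueue) {
    guard let device = view.device,
          let drawable = view.currentDrawable,
          let commandBuffer = queue.makeCommandBuffer() else { return }

    // Stage the CPU pixels in a texture that matches the drawable's format.
    let descriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .bgra8Unorm,
                                                              width: width,
                                                              height: height,
                                                              mipmapped: false)
    guard let staging = device.makeTexture(descriptor: descriptor) else { return }
    staging.replace(region: MTLRegionMake2D(0, 0, width, height),
                    mipmapLevel: 0,
                    withBytes: pixels,
                    bytesPerRow: width * 4)

    // Blit the staging texture into the drawable (requires view.framebufferOnly = false
    // and matching sizes/formats).
    if let blit = commandBuffer.makeBlitCommandEncoder() {
        blit.copy(from: staging, to: drawable.texture)
        blit.endEncoding()
    }
    commandBuffer.present(drawable)
    commandBuffer.commit()
}

The other common route is to render a full-screen quad (or run a compute kernel) that samples the staging texture, which also lets you scale or convert formats on the way to the screen.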
Posted by Key-Real. Last updated.
Post marked as solved
1 Reply
279 Views
I'm currently using Metal to create a game board with floating balloons; each balloon is an SKSpriteNode with an image of a balloon attached. The user touches and drags a balloon to a second balloon, merging the two. Exactly how they get merged is based on input from the user: a UISegmentedControl pops up where the user selects one of four responses, and the merge occurs. Currently, the UISegmentedControl pops up in the middle of the game board; however, I would like it to overlay the first balloon instead. Once the two balloons touch each other, I have tried this:

bubble1.physicsBody!.velocity = CGVector(dx: 0, dy: 0) // Stopping the balloon
requestView.frame.origin.x = bubble1.position.x
requestView.frame.origin.y = bubble1.position.y

where requestView is a UIView (with the same dimensions as the balloon) containing various subviews (including the UISegmentedControl) and bubble1 is the SKSpriteNode (balloon). However, when I add requestView as a subview of the game board, it does not overlay the SKSpriteNode (bubble1). In fact, each time I try it, it doesn't even appear in the same place relative to the location of bubble1. Any thoughts on what I might be doing wrong? Thanks!
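One likely culprit is mixing SpriteKit scene coordinates with UIKit view coordinates; a minimal sketch of the conversion, assuming the scene is presented in an SKView named skView and requestView is added to skView's superview:

import SpriteKit
import UIKit

// SKScene coordinates have their origin at the bottom-left and are scene-sized,
// so convert the node's position into the SKView's (UIKit) coordinate space first.
let pointInSKView = scene.convertPoint(toView: bubble1.position)

// Then convert into the coordinate space of the view that hosts requestView.
let pointInBoard = skView.convert(pointInSKView, to: requestView.superview)

// Position the overlay by its center so it sits on top of the balloon.
requestView.center = pointInBoard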
Posted. Last updated.
Post not yet marked as solved
2 Replies
715 Views
I know OpenGL ES has been deprecated since iOS 12, but I have an old project that uses it, and I want to update some of its features and then release the updated version. So I'm wondering: can I still release an app that uses OpenGL ES to the App Store today? (I know it would be better to move to MetalKit, but for cost reasons I'd like to avoid the migration if I can.)
Posted. Last updated.
Post not yet marked as solved
1 Reply
307 Views
I used Metal and CompositorLayer to render an immersive-space skybox. In this space, the window I created with SwiftUI only displays the gray frosted-glass background effect; it seems to ignore the Metal-rendered skybox and only samples and displays the black background. Why is that? Is there any solution for displaying the normal frosted-glass background? Thank you very much!
Posted by zane1024. Last updated.
Post not yet marked as solved
0 Replies
255 Views
Hi there, I've run into a problem in my working project's build settings: there is no Metal Compiler build-settings section, although it shows up fine in my demo project. And I'm certain I've moved the shader files (.metal) into the bundle. How can I resolve this problem? Xcode 15.0, macOS 13.6.1 (22G313).
Posted. Last updated.
Post not yet marked as solved
0 Replies
234 Views
In the project template for using ARKit with Metal, there's a definition for the memory alignment of the buffer that holds the SharedUniforms structure. It is defined like this:

// The 16 byte aligned size of our uniform structures
let kAlignedSharedUniformsSize: Int = (MemoryLayout<SharedUniforms>.size & ~0xFF) + 0x100

If I understood correctly, this line of code does the following:

Calculates the size of the SharedUniforms structure in bytes
Clears out the last 8 bits of that size
Adds 256 bytes to the result

So, if I'm not mistaken, this rounds the size of the SharedUniforms structure up to 256 bytes, not 16 bytes as the comment suggests. Is there something I've overlooked, since I can't wrap my head around how this would align the size to 16 bytes?
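That reading is right: & ~0xFF zeroes the low 8 bits (rounding down to a multiple of 256 = 0x100) and + 0x100 then bumps the result up, so it is always a multiple of 256. A minimal sketch contrasting it with a genuine round-up-to-16 formula, using a placeholder struct in place of the template's SharedUniforms:

// Placeholder struct so the arithmetic is self-contained.
struct SharedUniforms { var a: SIMD4<Float>; var b: SIMD4<Float>; var c: Float }

let size = MemoryLayout<SharedUniforms>.size      // 36 bytes for this placeholder

let alignedTo256 = (size & ~0xFF) + 0x100         // template's formula -> 256
let alignedTo16  = (size + 0xF) & ~0xF            // smallest multiple of 16 that fits -> 48

print(size, alignedTo256, alignedTo16)

The 256-byte granularity is presumably there because the template sub-allocates each frame's uniforms at an offset within one shared MTLBuffer, and constant-buffer offsets have stricter alignment requirements on some GPUs; only the comment's "16 byte" wording appears misleading.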
Posted by BanSee. Last updated.
Post not yet marked as solved
3 Replies
648 Views
Is it possible to use the Metal API on Vision Pro? I noticed that MTKView is not recognized in my visionOS app, and I've also seen other forum posts from months ago saying that MTKView is not yet supported. If it is still not an option, when (if ever) will it be supported? I'm also wondering about metal-cpp support, since my app involves integrating an existing C++ library with visionOS (see here: https://github.com/MinVR/MinVR). Is this possible?
Posted. Last updated.
Post not yet marked as solved
0 Replies
285 Views
It appears that the Metal debugging interface does not support this method; at least the function-hashing algorithm does not have a pattern for it in the symbol dictionary as presented. Where do we get updated C libraries and functions that are in sync with the things presented in the demo kits and samples that Apple puts in the user domain? Why does this stuff get out into the wild insufficiently tested? It seems that the demo kits made available to users should be included in the test domain used to verify new code releases. I came from a development environment where the six-month release cycle involved automated execution of the test suite before anything went to beta or anywhere else.
Posted by mfstanton. Last updated.
Post not yet marked as solved
1 Reply
435 Views
I've been asked to update our app from its current Metal code to Metal 3, and one of the concerns is whether we have to worry about any changed or deprecated features. Is there a way to confirm that our app won't behave differently now that it's built against Metal 3? I am able to compile the iOS app targeting iOS 16, and the shaders and pipelines are working as expected. Is that proof enough that we are good to go? I don't see any version declarations or imports tied to specific Metal versions (unlike OpenGL, where you would declare the shader version with #version 150 or similar).
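There is no per-app "Metal version" declaration; capability is expressed per GPU family and per shading-language version. A minimal sketch of checking both at runtime (shaderSource is a placeholder):

import Metal

// Runtime check: does this GPU support the Metal 3 feature set?
let device = MTLCreateSystemDefaultDevice()!
if device.supportsFamily(.metal3) {
    print("Metal 3 features available on this device")
}

// When compiling shaders at runtime, the shading-language version can be pinned explicitly.
let options = MTLCompileOptions()
options.languageVersion = .version3_0
// let library = try device.makeLibrary(source: shaderSource, options: options)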
Posted by Bond604. Last updated.
Post not yet marked as solved
0 Replies
344 Views
Related APIs: CAMetalLayer, wantsExtendedDynamicRangeContent, potentialEDRHeadroom, currentEDRHeadroom, [UIScreen mainScreen].brightness.

Premise: I am a video-player developer. My understanding of currentEDRHeadroom from Apple's documentation is that if I set wantsExtendedDynamicRangeContent = YES on a layer, the value of currentEDRHeadroom increases with the current system brightness, up to the ceiling reported by potentialEDRHeadroom. On iOS 16 at least, this was not a problem.

Question: Device: iPad Pro 12.9" 2022 model, iPadOS 17.2, Reference Mode off, Night Shift off, auto-brightness off (leaving it on gives the same result), using the same picture or video.

Step (1): Set [UIScreen mainScreen].brightness to 1.0 (or use the system controls to set brightness to 100%), then create the UIView whose CAMetalLayer has wantsExtendedDynamicRangeContent = YES and start rendering. potentialEDRHeadroom is always 16 on this iPad Pro 12.9" 2022 (presumably fixed by the hardware). currentEDRHeadroom is about 3.7-3.9, meaning roughly 3-4x the standard 100% brightness, and my picture shows no overexposed pixels.

Step (2): Set [UIScreen mainScreen].brightness to 0.8 (or 80% via the system controls) or lower, such as 50%, then create the UIView whose CAMetalLayer has wantsExtendedDynamicRangeContent = YES. After that, set [UIScreen mainScreen].brightness back to 1.0 (100%). Now the maximum value of currentEDRHeadroom is only about 1.79, and the picture or video has overexposed pixels.

Bug or feature: My impression is that this problem did not exist on iPadOS 16, and neither Apple's documentation nor the sessions in the Developer app mention that the range of currentEDRHeadroom depends on the system brightness at the moment the CAMetalLayer is created; they only talk about real-time system brightness. I want to know whether this is a bug in iOS 17.2 or whether it will be designed this way going forward.
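For reference, a minimal sketch of the EDR opt-in and the headroom queries being discussed (the behavior described above concerns how currentEDRHeadroom evolves over time, which this code only observes):

import UIKit
import QuartzCore

// Opt the layer in to extended dynamic range before rendering HDR content.
let metalLayer = CAMetalLayer()
metalLayer.wantsExtendedDynamicRangeContent = true
metalLayer.pixelFormat = .rgba16Float

// Query the display's EDR capabilities (iOS 16+).
let screen = UIScreen.main
print("Potential headroom:", screen.potentialEDRHeadroom)   // hardware ceiling
print("Current headroom:", screen.currentEDRHeadroom)       // varies with brightness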
Posted by liu103. Last updated.
Post not yet marked as solved
1 Reply
375 Views
Hello, I am creating a cross-platform application with Flutter. The problem is that when I launch my application on my MacBook, only a black page is displayed. This is a recurring problem with all Flutter applications on this Mac. When I debug my application, this is what appears in the console:

Error submitting command buffer.
2023-12-27 15:58:12.468 tranzic[2333:21044] Error Domain=MTLCommandBufferErrorDomain Code=4 "Ignored (for causing prior/excessive GPU errors) (00000004:kIOAccelCommandBufferCallbackErrorSubmissionsIgnored)" UserInfo={NSLocalizedDescription=Ignored (for causing prior/excessive GPU errors) (00000004:kIOAccelCommandBufferCallbackErrorSubmissionsIgnored), MTLCommandBufferEncoderInfoErrorKey=(
"<errorState: MTLCommandEncoderErrorStatePending, label: (null), debugSignposts: (null)>",
"<errorState: MTLCommandEncoderErrorStatePending, label: (null), debugSignposts: (null)>",
"<errorState: MTLCommandEncoderErrorStatePending, label: (null), debugSignposts: (null)>"
)}

Error submitting command buffer.
2023-12-27 15:58:18.455 tranzic[2333:21044] Error Domain=MTLCommandBufferErrorDomain Code=4 "Ignored (for causing prior/excessive GPU errors) (00000004:kIOAccelCommandBufferCallbackErrorSubmissionsIgnored)" UserInfo={NSLocalizedDescription=Ignored (for causing prior/excessive GPU errors) (00000004:kIOAccelCommandBufferCallbackErrorSubmissionsIgnored), MTLCommandBufferEncoderInfoErrorKey=(
"<errorState: MTLCommandEncoderErrorStatePending, label: (null), debugSignposts: (null)>",
"<errorState: MTLCommandEncoderErrorStatePending, label: (null), debugSignposts: (null)>",
"<errorState: MTLCommandEncoderErrorStatePending, label: (null), debugSignposts: (null)>"
)}

I have a mid-2012 MacBook Pro running macOS Monterey. Here is an issue I opened on the Flutter repo for more details: https://github.com/flutter/flutter/issues/137859
Posted by el2zay. Last updated.
Post not yet marked as solved
0 Replies
327 Views
Hello! The aim of my project is as specified in the title, and the code I am currently trying to modify uses CVPixelBufferGetBaseAddress to acquire depth data using LiDAR. For some context, I made use of the available "Capturing depth using the LiDAR camera" documentation with AVFoundation and edited the code after referring to a few Q&As on the Developer Forums. I have a few doubts and would be grateful for insights or a push in the right direction.

Regarding the LiDAR depth data:
Where is the origin (0,0), and in what order is the data saved? (Basically, how do the row and column correspond to the real-world scene?)
How do I add a touch gesture to fill in the X and Y values for "distanceAtXYPoint" so that I can acquire the depth data on user touch input rather than in real time?

The function for reference:

// New function to show the depth data value in meters
func depthDataOutput(syncedDepthData: AVCaptureSynchronizedDepthData) {
    let depthData = syncedDepthData.depthData.converting(toDepthDataType: kCVPixelFormatType_DepthFloat16)
    let depthMapWidth = CVPixelBufferGetWidthOfPlane(depthData.depthDataMap, 0)
    let depthMapHeight = CVPixelBufferGetHeightOfPlane(depthData.depthDataMap, 0)
    CVPixelBufferLockBaseAddress(depthData.depthDataMap, .readOnly)
    if let rowData = CVPixelBufferGetBaseAddress(depthData.depthDataMap)?.assumingMemoryBound(to: Float16.self) {
        // Need to find a way to get a specific depth point (using row data) on a touch gesture.
        // Currently uses the depth point at row = 0, column = 0.
        let depthPoint = rowData[0]
        for y in 0...depthMapHeight-1 {
            var distancesLine = [Float16]()
            for x in 0...depthMapWidth-1 {
                let distanceAtXYPoint = rowData[y * depthMapWidth + x]
            }
        }
        print("⭐️Depth value of (0,0) point in meters: \(depthPoint)")
    }
    CVPixelBufferUnlockBaseAddress(depthData.depthDataMap, .readOnly)
}

The current real-time console log output is as shown below. A slight concern is that the output at (0,0) sometimes shows a value greater than 1 m even when the real distance is probably a few centimeters. Any experience and countermeasures on this would also be greatly helpful. Thanks in advance.
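A minimal sketch of one way to do the touch lookup, assuming a UITapGestureRecognizer on the preview view; latestDepthData and DepthTapHandler are placeholder names, and the preview is assumed to fill the view in the same orientation as the depth buffer (a real app should convert through the preview layer and handle rotation):

import UIKit
import AVFoundation

final class DepthTapHandler: NSObject {
    // Most recent depth data, stored by the capture delegate (placeholder).
    var latestDepthData: AVDepthData?

    @objc func handleTap(_ gesture: UITapGestureRecognizer) {
        guard let view = gesture.view,
              let depthData = latestDepthData?.converting(toDepthDataType: kCVPixelFormatType_DepthFloat16)
        else { return }

        let depthMap = depthData.depthDataMap
        let width = CVPixelBufferGetWidth(depthMap)
        let height = CVPixelBufferGetHeight(depthMap)

        // Map the touch from view coordinates to depth-map coordinates by simple scaling.
        let location = gesture.location(in: view)
        let x = min(Int(location.x / view.bounds.width * CGFloat(width)), width - 1)
        let y = min(Int(location.y / view.bounds.height * CGFloat(height)), height - 1)

        CVPixelBufferLockBaseAddress(depthMap, .readOnly)
        defer { CVPixelBufferUnlockBaseAddress(depthMap, .readOnly) }

        if let base = CVPixelBufferGetBaseAddress(depthMap)?.assumingMemoryBound(to: Float16.self) {
            // Row 0 starts at the base address; index row-major using the buffer's row stride,
            // which may be wider than width due to padding.
            let rowElements = CVPixelBufferGetBytesPerRow(depthMap) / MemoryLayout<Float16>.stride
            let distance = base[y * rowElements + x]
            print("Depth at (\(x), \(y)): \(distance) m")
        }
    }
}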
Posted by TSHKS. Last updated.