Post not yet marked as solved
Hi, I have been getting the following crash in my app, which has a custom Metal render engine:
CoreFoundation -[__NSSetM clumpingFactor] + 264
libobjc.A.dylib __objc_empty_cache + 888
CoreAutoLayout DA979160-E330-3C35-BF6F-D3248DCC3246 + 67536
CoreAutoLayout DA979160-E330-3C35-BF6F-D3248DCC3246 + 68272
UIKitCore __OBJC_$_INSTANCE_METHODS__UIDatePickerCalendarTimeLabel + 600
UIKitCore __OBJC_$_INSTANCE_METHODS__UINavigationBarStarkVisualStyle + 100
UIKitCore ___79+[UISwitchModernVisualElement _modernThumbImageWithColor:mask:traitCollection:]_block_invoke_2 + 204
UIKitCore -[UISwitchModernVisualElement _switchTrackPositionAnimationWithFromValue:toValue:on:] + 388
UIKitCore -[UISwitchModernVisualElement _effectiveGradientImage] + 128
UIKitCore __OBJC_$_INSTANCE_METHODS__UISearchBarVisualProviderLegacy + 1924
QuartzCore CA::Layer::add_animation(CAAnimation*, __CFString const*) + 72
QuartzCore CA::Layer::remove_sublayer(CA::Transaction*, CALayer*) + 272
QuartzCore CA::OGL::Context::draw_elements(CA::OGL::PrimitiveMode, unsigned int, unsigned short const*, CA::OGL::Vertex const*, unsigned int, unsigned int, CA::OGL::ClipPlane const*) + 60
QuartzCore CAML::cgcolor_end(CAML::Context*, CAML::State*, char*, unsigned long) + 1252
QuartzCore native_window_swap(_EAGLNativeWindowObject*, unsigned int, double) + 712
QuartzCore -[CAStateControllerAnimation initWithLayer:key:] + 52
CoreFoundation ___CFSocketSetSocketReadBufferAttrs + 444
CoreFoundation __CFNonObjCEqual + 8
CoreFoundation __CFRelease + 952
Foundation 4E7D1FF6-6B64-3833-9E60-CC662AFE2647 + 36236
danmu -[DMEngineBase runMetalThread] (in DMEngineBase.mm:148)
Foundation 4E7D1FF6-6B64-3833-9E60-CC662AFE2647 + 1549068
libsystem_pthread.dylib _pthread_rwlock_unlock$VARIANT$armv81 + 160
libsystem_pthread.dylib __pthread_create + 1196
This crash seems to be caused by the Metal rendering thread triggering CoreAnimation drawing, which finally crashed in an internal CoreAutoLayout method:
Crashed View : appEnterForeground
Exception Name : NSInternalInconsistencyException
Exception Reason :
Modifications to the layout engine must not be performed from a background thread after it has been accessed from the main thread.
In the latest version I have been trying to mitigate this crash by adding some protection like this:
- (void)runMetalThread
{
    NSRunLoop *runLoop = [NSRunLoop currentRunLoop];
    [_displayLink addToRunLoop:runLoop forMode:DMMetalRunLoopModelFrame];
    BOOL continueRunLoop = YES;
    while (continueRunLoop)
    {
        @autoreleasepool
        {
            [runLoop runMode:DMMetalRunLoopModelFrame beforeDate:[NSDate distantFuture]];
        }
        continueRunLoop = _continueRunLoop;
    }
}
- (void)onBulletDraw:(CADisplayLink *)displayLink
{
    self.renderer.stoped = !_isActive || _stoped;
    self.renderer.paused = _isPaused;
    [self.renderer onDanmuDraw:displayLink];
}
#pragma mark - Notification

- (void)willResignActive
{
    self.isActive = NO;
    MTLog(@"metal# engine %@ resign active", self);
}

- (void)didBecomeActive
{
    // Mitigation: delay re-activating the Metal engine until after the app is actually active
    dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(0.7 * NSEC_PER_SEC)), dispatch_get_main_queue(), ^{
        if (UIApplication.sharedApplication.applicationState == UIApplicationStateActive) {
            self.isActive = YES;
            MTLog(@"metal# engine %@ become active", self);
        } else {
            MTLog(@"metal# engine still inactive after 0.7s");
        }
    });
}
This protection makes sure the Metal renderer is stopped after the app resigns active, and it seems to have had some effect, reducing crashes in my released app by 84%. But it did not completely solve the problem; the remaining 16% of crashes occur in some unknown scenario.
I have checked all of my Metal rendering thread code, and I can guarantee that no method triggers CoreAnimation drawing on the Metal thread. So is there a complete solution for this bug?
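Since the exception complains about layout-engine access from a background thread, one common mitigation (my suggestion, not from the post above) is to funnel anything that could touch UIKit or Auto Layout from the render thread through the main queue. A minimal Swift sketch, where `updateOverlay()` stands in for whatever hypothetical UIKit work a frame callback might need:

```swift
import UIKit

/// Runs `work` immediately if already on the main thread,
/// otherwise dispatches it asynchronously to the main queue.
func onMainThread(_ work: @escaping () -> Void) {
    if Thread.isMainThread {
        work()
    } else {
        DispatchQueue.main.async(execute: work)
    }
}

// Example use from the Metal render thread: view/layout access is
// hopped to the main thread instead of running on the render thread.
func frameDidRender() {
    onMainThread {
        // updateOverlay() -- hypothetical UIKit work goes here.
    }
}
```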
I've built a lightweight app that displays some advanced animated graphics UI using MetalKit.
Unfortunately, the app launches too slowly. The app's icon in the Dock bounces 3-5 times, the bounce animation is jerky, and the whole Mac becomes temporarily sluggish and may be much less responsive for a few seconds. So it's not merely a matter of my app's launch-time optimization. I also don't have this problem with other apps that don't use Metal.
I narrowed the problem down and found that it disappears if I comment out the code starting from -[CAMetalLayer setDevice:] in my -[NSApplicationDelegate applicationWillFinishLaunching:].
I also noticed in Activity Monitor that when I launch my app, kernel_task instantly becomes hugely active, taking up 100-200% of CPU, and quickly drops back to 3-4% once the launch completes. This apparently causes the whole Mac to become temporarily sluggish.
My theory is that, for some reason, macOS maxes out kernel_task when my app calls into the GPU for initialization.
I know that kernel_task is used to throttle the CPU to avoid overheating, but my MacBook is at a normal temperature and no other CPU-intensive tasks are running.
I am running a MacBook Pro (16-inch, 2019):
2.6 GHz 6-Core Intel Core i7
16 GB 2667 MHz DDR4
AMD Radeon Pro 5300M 4 GB
Intel UHD Graphics 630 1536 MB
Can someone please advise about the possible causes of this problem and how to deal with it? My app is extremely lightweight and I really want it to launch instantly.
Hello all,
I need my texture to not be premultiplied-alpha, because I use the alpha channel for some additional calculations in the fragment shader.
At the moment I load my texture like this:
let textureLoader = MTKTextureLoader(device: sceneView.device!)
myTexture = try textureLoader.newTexture(URL: URL(fileURLWithPath: path!), options: [MTKTextureLoader.Option.SRGB: false])
because I need the raw RGBA values.
But of course it gets premultiplied.
How can I turn off this behavior?
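I'm not aware of an MTKTextureLoader option that skips premultiplication for PNGs, so one workaround (an assumption on my part, not a documented API) is to un-premultiply the decoded pixel data on the CPU before uploading it with `MTLTexture.replace(region:...)`, or equivalently to divide RGB by alpha in the shader. A sketch that un-premultiplies an 8-bit RGBA buffer in place:

```swift
import Foundation

/// Un-premultiplies an 8-bit RGBA pixel buffer in place.
/// Assumes tightly packed RGBA8 data with alpha in the last component.
func unpremultiply(rgba: inout [UInt8]) {
    for i in stride(from: 0, to: rgba.count, by: 4) {
        let a = Int(rgba[i + 3])
        guard a > 0 else { continue }   // fully transparent: leave as-is
        for c in 0..<3 {
            // premultiplied = straight * alpha / 255, so invert that
            let straight = min(255, Int(rgba[i + c]) * 255 / a)
            rgba[i + c] = UInt8(straight)
        }
    }
}
```

Note that un-premultiplying loses precision for small alpha values, which may or may not matter for the calculations in the fragment shader.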
I notice that when I open the Photos app on my iPhone 12 Pro, viewing Photos or Videos shot in HDR makes them brighter than the overall display brightness level.
On macOS, there are APIs like EDRMetadata on CAMetalLayer and maximumExtendedDynamicRangeColorComponentValue on NSScreen.
I did see CAMetalLayer.wantsExtendedDynamicRangeContent, but I'm not sure if this does what I'm looking for.
The "Using Color Spaces to Display HDR Content" - https://developer.apple.com/documentation/metal/drawable_objects/displaying_hdr_content_in_a_metal_layer/using_color_spaces_to_display_hdr_content?language=objc documentation page describes setting the .colorspace on the CAMetalLayer for BT2020_PQ content, but it's not clear if this is referring to macOS or iOS. Is that the right way to get colors to be "brighter" than 1.0 on "XDR" mobile displays?
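For reference, the configuration that documentation page describes comes down to roughly the following on a CAMetalLayer; whether every property (in particular wantsExtendedDynamicRangeContent, which is documented for macOS) is available on a given iOS version is something to verify against the headers:

```swift
import QuartzCore
import Metal

func configureHDRLayer(_ layer: CAMetalLayer) {
    // A half-float format keeps values above 1.0 through the pipeline.
    layer.pixelFormat = .rgba16Float
    // Tag the content as BT.2100 PQ so the system tone-maps it.
    layer.colorspace = CGColorSpace(name: CGColorSpace.itur_2100_PQ)
    // Opts the layer into EDR on macOS; check iOS availability
    // before relying on it there.
    layer.wantsExtendedDynamicRangeContent = true
}
```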
I have CVPixelBuffers in kCVPixelFormatType_420YpCbCr10BiPlanarVideoRange, which is 10-bit HDR. I need to convert these to RGB and display them in an MTKView. I need to know the correct pixel format to use, the BT.2020 conversion matrix, and how to display the 10-bit RGB pixel buffer in an MTKView.
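A sketch of the Metal-side plumbing for a biplanar buffer: wrap each plane in a Metal texture via a CVMetalTextureCache, then apply the BT.2020 YCbCr-to-RGB matrix in a shader. Treating the 10-bit-in-16-bit planes as .r16Unorm (luma) and .rg16Unorm (chroma) is my assumption here, not something I've confirmed for this exact format:

```swift
import CoreVideo
import Metal

/// Wraps one plane of a CVPixelBuffer in an MTLTexture.
func planeTexture(from buffer: CVPixelBuffer,
                  cache: CVMetalTextureCache,
                  plane: Int,
                  format: MTLPixelFormat) -> MTLTexture? {
    let width = CVPixelBufferGetWidthOfPlane(buffer, plane)
    let height = CVPixelBufferGetHeightOfPlane(buffer, plane)
    var cvTexture: CVMetalTexture?
    CVMetalTextureCacheCreateTextureFromImage(
        kCFAllocatorDefault, cache, buffer, nil,
        format, width, height, plane, &cvTexture)
    return cvTexture.flatMap { CVMetalTextureGetTexture($0) }
}

// Plane 0 (luma) as .r16Unorm, plane 1 (chroma) as .rg16Unorm;
// the YCbCr -> RGB conversion then happens in the fragment shader.
```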
I'm using Xcode 12.5.1. I have followed a swift goose tutorial for an outline view and tree view exactly, but it fails with:
2021-09-04 09:32:44.832345-0700 treeview1[21236:868981] Metal API Validation Enabled
2021-09-04 09:32:44.851673-0700 treeview1[21236:868981] MTLIOAccelDevice bad MetalPluginClassName property (null)
2021-09-04 09:32:44.853029-0700 treeview1[21236:868981] +[MTLIOAccelDevice registerDevices]: Zero Metal services found
When I download swift goose's project code from GitHub, it works without any error. The only things I can think of are: (1) when I set up a new Xcode project for macOS and App, I'm missing something, even though everything basically looks the same; or (2) something is missing in my Xcode setup because I have the "free" Xcode versus the developer one. Something linking behind the scenes? Is there a difference?
When I use Metal to render, switching the application to the background results in a Metal rendering failure on iOS 15.
What can I do?
Error:
Execution of the command buffer was aborted due to an error during execution.Insufficient Permission (to submit GPU work from background) (00000006:kIOGPUCommandBufferCallbackErrorBackgroundExecutionNotPermitted)
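iOS does not permit submitting GPU work from a backgrounded app, so the usual approach (a sketch, not an official recipe) is to stop scheduling command buffers when the app leaves the foreground and resume once it is active again:

```swift
import UIKit
import MetalKit

/// Pauses an MTKView's draw loop around background transitions so no
/// command buffers are committed while the app cannot use the GPU.
final class RenderPauser {
    private let view: MTKView
    private var observers: [NSObjectProtocol] = []

    init(view: MTKView) {
        self.view = view
        let center = NotificationCenter.default
        // Stop the draw loop before the app is backgrounded.
        observers.append(center.addObserver(
            forName: UIApplication.willResignActiveNotification,
            object: nil, queue: .main) { [weak view] _ in
                view?.isPaused = true
        })
        // Resume only once the app is active again.
        observers.append(center.addObserver(
            forName: UIApplication.didBecomeActiveNotification,
            object: nil, queue: .main) { [weak view] _ in
                view?.isPaused = false
        })
    }

    deinit { observers.forEach(NotificationCenter.default.removeObserver) }
}
```

Any in-flight command buffers at the moment of backgrounding can still fail, so treating that specific error as non-fatal is also worth considering.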
I have tried everything, but it looks to be impossible to get MTKView to display the full range of colors of an HDR CIImage made from a CVPixelBuffer (in 10-bit YUV format). Only built-in layers such as AVCaptureVideoPreviewLayer, AVPlayerLayer, and AVSampleBufferDisplayLayer are able to fully display HDR images on iOS. Is MTKView incapable of displaying the full BT2020_HLG color range? Why does MTKView clip colors even if I set the pixel color format to bgra10_xr or bgra10_xr_srgb?
convenience init(frame: CGRect, contentScale: CGFloat) {
    self.init(frame: frame)
    contentScaleFactor = contentScale
}

convenience init(frame: CGRect) {
    let device = MetalCamera.metalDevice
    self.init(frame: frame, device: device)
    colorPixelFormat = .bgra10_xr
    self.preferredFramesPerSecond = 30
}

override init(frame frameRect: CGRect, device: MTLDevice?) {
    guard let device = device else {
        fatalError("Can't use Metal")
    }
    guard let cmdQueue = device.makeCommandQueue(maxCommandBufferCount: 5) else {
        fatalError("Can't make Command Queue")
    }
    commandQueue = cmdQueue
    context = CIContext(mtlDevice: device, options: [CIContextOption.cacheIntermediates: false])
    super.init(frame: frameRect, device: device)
    self.framebufferOnly = false
    self.clearColor = MTLClearColor(red: 0, green: 0, blue: 0, alpha: 0)
}
And then the rendering code:
override func draw(_ rect: CGRect) {
    guard let image = self.image else {
        return
    }
    let dRect = self.bounds
    let drawImage: CIImage
    let targetSize = dRect.size
    let imageSize = image.extent.size
    let scalingFactor = min(targetSize.width / imageSize.width, targetSize.height / imageSize.height)
    let scalingTransform = CGAffineTransform(scaleX: scalingFactor, y: scalingFactor)
    let translation: CGPoint = CGPoint(x: (targetSize.width - imageSize.width * scalingFactor) / 2,
                                       y: (targetSize.height - imageSize.height * scalingFactor) / 2)
    let translationTransform = CGAffineTransform(translationX: translation.x, y: translation.y)
    let scalingTranslationTransform = scalingTransform.concatenating(translationTransform)
    drawImage = image.transformed(by: scalingTranslationTransform)

    let commandBuffer = commandQueue.makeCommandBufferWithUnretainedReferences()
    guard let texture = self.currentDrawable?.texture else {
        return
    }
    var colorSpace: CGColorSpace
    if #available(iOS 14.0, *) {
        colorSpace = CGColorSpace(name: CGColorSpace.itur_2100_HLG)!
    } else {
        // Fallback on earlier versions
        colorSpace = drawImage.colorSpace ?? CGColorSpaceCreateDeviceRGB()
    }
    NSLog("Image \(colorSpace.name), \(image.colorSpace?.name)")
    context.render(drawImage, to: texture, commandBuffer: commandBuffer, bounds: dRect, colorSpace: colorSpace)
    commandBuffer?.present(self.currentDrawable!, afterMinimumDuration: 1.0 / Double(self.preferredFramesPerSecond))
    commandBuffer?.commit()
}
I am new to Metal and am trying to move a material's normal texture by an offset while also taking advantage of Metal's geometry modifier. When I was using a PhysicallyBasedMaterial, I used this in the session function in the ViewController:
waterMaterial.textureCoordinateTransform.offset.x += 0.0001
The normal is a PNG. This moved the texture every frame. Now that I'm using a CustomMaterial to take advantage of a geometryModifier, this no longer works. I can see the texture and am using the shader successfully, but the texture itself is not moving. I assume I need to do this in my Metal shader file, possibly starting in this direction:
[[visible]]
void moveTexture(realitykit::geometry_parameters params)
{
    auto normal = params.textures().normal();
}
Any help replicating the above functionality in Metal would be much appreciated.
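One direction worth trying (assumptions: CustomMaterial's `custom.value` vector and the matching surface-shader uniform, both of which I'd verify against the RealityKit headers) is to drive the offset from Swift each frame and read it in the shader instead of using textureCoordinateTransform:

```swift
import RealityKit

// Called every frame (e.g. from a scene-events subscription):
// pack the scrolling offset into the custom parameter vector that a
// CustomMaterial surface shader can read back.
func advanceWater(entity: ModelEntity, offset: inout Float) {
    offset += 0.0001
    guard var material = entity.model?.materials.first as? CustomMaterial
    else { return }
    material.custom.value = SIMD4<Float>(offset, 0, 0, 0)
    entity.model?.materials = [material]
}
```

On the shader side, the idea would be to read that value back (via the surface shader's custom-parameter uniform) and add it to the UV coordinates before sampling the normal texture.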
I was following raywenderlich's Metal tutorial but got stuck rendering a texture on a plane: it seems to show only one color of the image, not the entire image. I'm running on an iPad with iOS 12.3.
The weirdest thing is that I can render a multicolored rectangle when the texture is nil, but I can't render the texture.
Here's a repo for the project: https://github.com/TheJoseph-Dev/MyMetalProgram
Can anyone help me?
While developing my Metal application, I noticed that making a draw call is a lot slower than using a tile shader. In particular, when operating on a 4K-resolution texture, it takes about 3 ms to complete a draw call, while the tile shader takes about 150 ns. I was wondering: is a tile shader now the preferred approach for drawing with Metal, or is there a particular reason a typical draw call should be used?
I have a 3D scene with a perspective camera and I'd like some of the elements to be projected using an orthographic projection instead.
My use case is that I have some 3D elements with attached text nodes. I'd like the text on these nodes to always be the same size no matter how far away the camera is. Is there a way I can use SceneKit to mix and match? Or is there another technique I can use?
Does Metal support utilizing the ray tracing acceleration hardware available in the Radeon 6000 series GPUs?
Can the 6000 series run the WWDC20 session 10012: Discover Ray Tracing with Metal demo?
Do any AMD GPUs support it or only the Intel integrated ones? The WWDC session video shows a sample forest scene running on a Mac Pro with the W5000 series AMD GPU.
Please see: https://developer.apple.com/forums/thread/651077
Hello,
We recently noticed that copying pixel data from a Metal texture to memory is a lot slower on the new iPhones equipped with the A14 Bionic.
We tracked down the guilty function on MTLTexture and found that getBytes(_:bytesPerRow:from:mipmapLevel:) runs 8 to 20 times slower than on two-year-old iPhones (iPhone XR). To measure how long it takes, we used signposts.
We've created a dummy demo project where we convert a MTLTexture to a CVPixelBuffer in this project: https://github.com/alikaragoz/UsingARenderPipelineToRenderPrimitives
The interesting part is located at this line: https://github.com/alikaragoz/UsingARenderPipelineToRenderPrimitives/blob/41f7f4385a490e889b94ee2c8913ce532a43aacb/Renderer/MetalUtils.swift#L40
Do you guys have an idea about what could be the issue?
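One workaround worth profiling (a sketch from my side, not a confirmed fix for the A14): instead of getBytes on the texture, blit into a shared-storage MTLBuffer on the GPU and read the buffer's contents pointer once the command buffer completes. On some devices this avoids a slow GPU-to-CPU readback path:

```swift
import Metal

/// Copies a texture's contents into a CPU-visible buffer via a blit,
/// returning the buffer once the GPU has finished.
func readback(texture: MTLTexture,
              queue: MTLCommandQueue) -> MTLBuffer? {
    let bytesPerPixel = 4            // assumes a 4-byte format like .bgra8Unorm
    let bytesPerRow = texture.width * bytesPerPixel
    let length = bytesPerRow * texture.height
    guard let buffer = queue.device.makeBuffer(length: length,
                                               options: .storageModeShared),
          let commandBuffer = queue.makeCommandBuffer(),
          let blit = commandBuffer.makeBlitCommandEncoder()
    else { return nil }
    blit.copy(from: texture,
              sourceSlice: 0, sourceLevel: 0,
              sourceOrigin: MTLOrigin(x: 0, y: 0, z: 0),
              sourceSize: MTLSize(width: texture.width,
                                  height: texture.height, depth: 1),
              to: buffer, destinationOffset: 0,
              destinationBytesPerRow: bytesPerRow,
              destinationBytesPerImage: length)
    blit.endEncoding()
    commandBuffer.commit()
    commandBuffer.waitUntilCompleted() // then read buffer.contents()
    return buffer
}
```

In a real pipeline the waitUntilCompleted would be replaced with a completion handler to avoid stalling the CPU.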
I am trying to push content to an MTKView in SwiftUI, wrapped in a UIViewRepresentable, by manually calling draw(in:) on the MTKViewDelegate.
My question is how to obtain and release the correct drawable from the 3 available.
As I only want to push draw calls from an external source, the view settings are:
mtkView.isPaused = true // only push data
mtkView.enableSetNeedsDisplay = false // only push data from our single source
mtkView.framebufferOnly = true // we don't render to anything but the screen
The MTKViewDelegate draw call is as follows:
func draw(in view: MTKView) {
    autoreleasepool {
        let passDescriptor = view.currentRenderPassDescriptor!
        // make command buffer, encoder from descriptor
        // encode data
        let drawable = view.currentDrawable!
        commandBuffer.present(drawable)
        commandBuffer.commit()
    }
}
This works fine for the first trigger of draw, but the second draw call raises [CAMetalLayerDrawable texture] should not be called after already presenting this drawable. Get a nextDrawable instead. and Each CAMetalLayerDrawable can only be presented once!
Setting mtkView.isPaused = false renders fine, so I suppose some internal loop handles calling nextDrawable(). How should I go about getting the next drawable and releasing the current one when I assume control of drawing?
Best regards,
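With isPaused = true and enableSetNeedsDisplay = false, my understanding is that the intended way to trigger a frame is to call the view's own draw() method, which in turn invokes the delegate; calling the delegate's draw(in:) directly skips MTKView's per-frame drawable management, which would match the symptom above. A sketch:

```swift
import MetalKit

// External push source: ask the view to render rather than calling
// the delegate directly. MTKView then vends a fresh currentDrawable
// for each frame and retires it after presentation.
func pushFrame(to view: MTKView) {
    assert(view.isPaused && !view.enableSetNeedsDisplay)
    view.draw()   // calls delegate.draw(in:) with a valid drawable
}
```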
Hello, could you please advise how to use a UIScrollView with MTKViews? I have 3 MTKViews and want to scroll them up and down, but I get an error with the drawable.
Hello everyone, I'm a beginner graphics programmer.
I want to use a 3D texture in Metal for my projects, but I can't because of an error.
I tried the example from this link:
fragment half4 mip_fragment
(
    VertexOutput in [[ stage_in ]],
    texture2d<float> backface [[ texture(0) ]],
    texture3d<float> volume [[ texture(1) ]]
)
{
    constexpr sampler s(s_address::clamp_to_edge, t_address::clamp_to_edge, min_filter::linear, mag_filter::linear);
    float3 rgb = backface.sample(s, in.pixelCoord).rgb;
    float3 lookupColor = volume.sample(s, rgb, 0).rgb;
    return half4(half3(lookupColor), 1.h);
}
But I get this error:
Fragment Function(mip_fragment): incorrect type of texture (MTLTextureType2D) bound at texture binding at index 1 (expect MTLTextureType3D) for volume[0].
And the app crashes. Please help me.
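The error says the texture bound at index 1 is a 2D texture, so the fix is on the CPU side: the volume texture must be created with a 3D descriptor before it can match a texture3d<float> shader parameter. A sketch of that setup (the dimensions and pixel format here are placeholders, not values from the post):

```swift
import Metal

/// Creates an empty 3D texture suitable for binding to a
/// `texture3d<float>` parameter in a fragment shader.
func makeVolumeTexture(device: MTLDevice) -> MTLTexture? {
    let descriptor = MTLTextureDescriptor()
    descriptor.textureType = .type3D      // not the default .type2D
    descriptor.pixelFormat = .rgba8Unorm  // placeholder format
    descriptor.width = 64                 // placeholder dimensions
    descriptor.height = 64
    descriptor.depth = 64
    descriptor.usage = .shaderRead
    return device.makeTexture(descriptor: descriptor)
}
```

The volume data would then be uploaded slice by slice with `replace(region:mipmapLevel:slice:withBytes:bytesPerRow:bytesPerImage:)` and bound with `setFragmentTexture(_:index: 1)`.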
I am just starting to learn AR. Thanks for the help.
I am trying to anchor large objects to a certain location in an open area. I tried anchoring with an image and with an object in Reality Composer. After anchoring, the objects do not stay in place when I move around. ARGeoTrackingConfiguration is not available in my region. And if I scan the surrounding world and then relocalize against it, the terrain will not be recognized on a rainy day or after the slightest change in the area (for example, a mowed lawn). What do you advise?
Hello,
I have two quads with different vertex coordinates.
How can I multiply the first quad's color by mask components (red, blue, or green) taken from the second quad's color?
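If "multiply" here means blending the second quad over the first, one way (a sketch of standard multiplicative blending, which may or may not match the intended effect) is to configure the render pipeline's blend factors so the destination color is scaled by the source color, restricting the affected channel via the write mask:

```swift
import Metal

/// Configures a color attachment for multiplicative blending:
/// out = src * dst. Restricting `writeMask` to .red (or .green/.blue)
/// multiplies only that channel of the color already in the target.
func enableMultiplyBlend(_ attachment: MTLRenderPipelineColorAttachmentDescriptor) {
    attachment.isBlendingEnabled = true
    attachment.rgbBlendOperation = .add
    attachment.sourceRGBBlendFactor = .destinationColor
    attachment.destinationRGBBlendFactor = .zero
    attachment.alphaBlendOperation = .add
    attachment.sourceAlphaBlendFactor = .destinationAlpha
    attachment.destinationAlphaBlendFactor = .zero
    attachment.writeMask = .red   // touch only the red component
}
```

The first quad would be drawn normally, then the second quad drawn with this pipeline state so its color multiplies what is already in the framebuffer.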
Given six KTX, ASTC compressed textures -- all equal in size and attributes -- a.ktx, b.ktx, c.ktx, d.ktx, e.ktx, & f.ktx, I can embed them in the bundle and then create a working cube via-
let cube = MDLTexture(cubeWithImagesNamed: [ "a.ktx", "b.ktx", "c.ktx", "d.ktx", "e.ktx", "f.ktx"])
This can be assigned to background.contents and works great.
If, on the other hand, I have loaded those six textures from some other source into six separate MTLTextures, I cannot provide them as an array to background.contents (it fails with "image at index 0 is NULL"). I have attempted to create a cube MTLTexture with the appropriate MTLTextureDescriptor.textureCubeDescriptor (using the pixel format and other attributes from the source textures), then copy the data via MTLBlitCommandEncoder; however, the end result, while error free, is a cube that is wholly purple.
I suspect this may be because the source textures are ASTC-compressed, but I am a bit at a loss, as the documentation is rather sparse. Everything else has been incredibly easy relative to this very simple need: creating a cube from textures that aren't named bundle items.
Any guidance or hints would be greatly appreciated.
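For reference, the slice-copy approach described above would look roughly like the sketch below. One easy mismatch to check: if the sources are ASTC-compressed, the cube descriptor's pixelFormat must be the exact same ASTC format as the faces, or the copy can silently produce garbage output:

```swift
import Metal

/// Builds a cube texture by blitting six equally-sized 2D textures
/// into slices 0...5. Assumes all faces share size and pixel format.
func makeCube(from faces: [MTLTexture],
              queue: MTLCommandQueue) -> MTLTexture? {
    guard faces.count == 6, let first = faces.first else { return nil }
    let descriptor = MTLTextureDescriptor.textureCubeDescriptor(
        pixelFormat: first.pixelFormat,   // must match, incl. ASTC formats
        size: first.width,
        mipmapped: false)
    guard let cube = queue.device.makeTexture(descriptor: descriptor),
          let commandBuffer = queue.makeCommandBuffer(),
          let blit = commandBuffer.makeBlitCommandEncoder()
    else { return nil }
    for (slice, face) in faces.enumerated() {
        blit.copy(from: face, sourceSlice: 0, sourceLevel: 0,
                  to: cube, destinationSlice: slice, destinationLevel: 0,
                  sliceCount: 1, levelCount: 1)
    }
    blit.endEncoding()
    commandBuffer.commit()
    return cube
}
```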