Delve into the world of graphics and game development. Discuss creating stunning visuals and optimizing game mechanics, and share resources for game developers.

All subtopics

Post · Replies · Boosts · Views · Activity
Issue with Hand Occlusion in a Metal CompositorLayer
I have an issue with hand occlusion in immersive mode. I have an entry view for the app and a Metal CompositorLayer (which is the immersive volume) where I have set .upperLimbVisibility(Visibility.hidden). The problem is that when I dismiss the entry view, sometimes it hides the hands and sometimes it doesn't (randomly).

    @main
    struct AVPainterApp: App {
        @State var hand: Int32 = 0

        var body: some Scene {
            WindowGroup() {
                ContentView(hand: $hand)
            }
            .windowResizability(.contentSize)

            ImmersiveSpace(id: "ImmersiveSpace") {
                CompositorLayer(configuration: MetalLayerConfiguration()) { layerRenderer in
                    SpatialSceneRun(layerRenderer, hand)
                }
            }
            .upperLimbVisibility(Visibility.hidden)
            .immersionStyle(selection: .constant(.full), in: .full)
        }
    }
Replies: 1 · Boosts: 0 · Views: 220 · Activity: 4w
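For the hand-occlusion question above, one thing worth trying (a guess at a workaround, not a confirmed fix) is to open the immersive space first and dismiss the entry window only after the space reports it has opened, so the .upperLimbVisibility(.hidden) preference is already in effect when the window goes away. A minimal sketch, assuming the entry window contains a view roughly like this (the view name is a placeholder):

    import SwiftUI

    struct EntryControls: View {
        @Environment(\.openImmersiveSpace) private var openImmersiveSpace
        @Environment(\.dismissWindow) private var dismissWindow

        var body: some View {
            Button("Enter Immersive Space") {
                Task {
                    // Wait for the space to actually open before removing the entry window.
                    switch await openImmersiveSpace(id: "ImmersiveSpace") {
                    case .opened:
                        dismissWindow()
                    default:
                        break
                    }
                }
            }
        }
    }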
Sample Project for WWDC24 10092 Metal with Passthrough?
It’s great that we’ll be able to use Metal custom renderers in passthrough mode on visionOS. https://developer.apple.com/wwdc24/10092 This is a lot of complicated set-up, however, and it’s also unclear how occlusion and custom algorithms / raytracing will work in tandem with scene understanding. May we have a project template and/or sample, preferably with the C API and not just Swift? This would be much appreciated and helpful to everyone who wants this set-up. I’d like to see the whole process. Thank you for introducing this feature!
Replies: 2 · Boosts: 1 · Views: 328 · Activity: 4w
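For reference, the scene-level part of the passthrough set-up requested above is small; the renderer and the scene-understanding integration are what a sample would really need to show. A rough sketch of just the scene declaration, reusing the configuration and render-loop names from the first post on this page as placeholders:

    import SwiftUI
    import CompositorServices

    @main
    struct PassthroughMetalApp: App {
        var body: some Scene {
            ImmersiveSpace(id: "MixedImmersiveSpace") {
                CompositorLayer(configuration: MetalLayerConfiguration()) { layerRenderer in
                    // Your Metal render loop goes here.
                    SpatialSceneRun(layerRenderer)
                }
            }
            // Mixed immersion keeps passthrough visible around the rendered content.
            .immersionStyle(selection: .constant(.mixed), in: .mixed)
        }
    }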
State-of-the-Art 3D (no AR) on macOS using RealityKit?
What is the current recommendation for creating high-quality 3D content? The context is a hobbyist, specialised CAD app for macOS (with an iPadOS companion) that is mostly 2D but also offers a 3D visualization option (currently OpenGL). Somewhere down the line there might be an AR view, but at the moment - certainly for macOS - it's purely generated 3D visualization, all rendered content. So, starting a rewrite of the 3D visualization in 2024 targeting macOS Sequoia / iPadOS 18, is RealityKit the suggested way forward? Cheers, Jay
Replies: 4 · Boosts: 0 · Views: 407 · Activity: 4w
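For the non-AR RealityKit question above: RealityView is available on macOS 15 and iOS 18, so a purely generated scene needs no AR session at all. A minimal sketch, assuming the 2024 RealityView APIs (the .realityViewCameraControls modifier is from those releases):

    import SwiftUI
    import RealityKit

    struct ModelPreview: View {
        var body: some View {
            RealityView { content in
                // Purely generated content: a metallic sphere plus a directional light.
                let sphere = ModelEntity(mesh: .generateSphere(radius: 0.1),
                                         materials: [SimpleMaterial(color: .systemTeal, isMetallic: true)])
                content.add(sphere)

                let light = DirectionalLight()
                light.look(at: .zero, from: [1, 1, 1], relativeTo: nil)
                content.add(light)
            }
            // Built-in orbit camera for inspecting generated content.
            .realityViewCameraControls(.orbit)
        }
    }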
Metal 3.2 device memory coherency
I am seeking clarification regarding the new device-coherent memory (buffers and textures) in Metal 3.2. Do I understand the documentation correctly that this feature allows threads from different threadgroups to update data in device memory cooperatively? The documentation mentions, "[results of operations] are visible to other threads across thread groups if you synchronize them properly." How does one do proper synchronization? From what I understand, Metal has no device-scoped barriers.
Replies: 1 · Boosts: 0 · Views: 354 · Activity: 4w
Best Practice to Add Objects at Eye Level in RealityKit
I would think it would be common practice that, when you add a new entity to your RealityView scene, it appears in front of the user, and then the user places the entity in the scene. Imagine a puzzle piece appearing in front of you that you drag to your puzzle board. If you move around your puzzle board, you’d expect that wherever you are, the new piece appears in front of you. That seems applicable to a lot of applications. I can add a new entity using the head anchor, but as we all know that transform is the identity, so reparenting the entity to something (e.g. the puzzle board) won’t work. I’ve been trying to use world positioning and query pose, which helps, but I’m stumped as to how to get the new entity to appear in front of me no matter which way I turn. Looking for suggestions and guidance on this.
Replies: 2 · Boosts: 0 · Views: 221 · Activity: 4w
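For the eye-level placement question above, the usual ingredients are a running WorldTrackingProvider and the device pose from queryDeviceAnchor. A sketch, assuming an ARKitSession is already running the provider (board is a placeholder for the puzzle-board entity):

    import ARKit
    import RealityKit
    import QuartzCore
    import simd

    func placeInFrontOfUser(_ entity: Entity, board: Entity, worldTracking: WorldTrackingProvider) {
        guard let device = worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) else { return }
        let pose = device.originFromAnchorTransform

        // Forward is -Z of the device pose; offset one metre out, at eye height.
        let forward = -simd_normalize(simd_make_float3(pose.columns.2))
        let position = simd_make_float3(pose.columns.3) + forward * 1.0

        // Position in world space first, then reparent to the board without moving it.
        entity.setPosition(position, relativeTo: nil)
        entity.setParent(board, preservingWorldTransform: true)
    }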
OpenGL ES support on Apple Silicon Simulators
Hey folks, I have a legacy game that is running OpenGL ES, and it no longer works on the simulators that run on Apple Silicon, i.e. iPhone 15 Pro or the 13" iPads. And yes, I'm also running on Apple Silicon (M1 Max). The apps work fine on the actual devices, but the simulator crashes on any glDrawElements with a stack that looks like the following:

I have not yet seen an announcement about this not working, but I've seen mention in other projects of dropping GL support (https://github.com/maplibre/maplibre-native/issues/2351). Can anyone shed some light? I'm obviously going to try to fix it, or find a recent sample app from which to start to see what might be up. Or move to Metal, but I hadn't bargained for that level of effort atm ;) Any suggestions appreciated!
Replies: 5 · Boosts: 0 · Views: 404 · Activity: 4w
Wrong hitTest results in iOS 17.2
We’re experiencing an issue with wrong SceneKit hit-testing results in iOS 17.2 compared with iOS 16.1, when using either the Metal or the OpenGL ES2 rendering engine. We tap on a 3D model to place an SCNNode:

    // pointInScene: tapped point
    let hitResults = sceneView.hitTest(pointInScene, options: nil)
    return hitResults.first { $0.node.name?.compare("node_name") == .orderedSame }
Replies: 2 · Boosts: 0 · Views: 245 · Activity: 4w
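A diagnostic sketch for the hit-test regression above (not a confirmed fix): make the hit test as permissive as possible and filter afterwards, which helps show whether the 17.2 difference comes from the default search options rather than from the geometry itself. sceneView and pointInScene are the same names used in the post:

    import SceneKit

    func hitTestNamedNode(in sceneView: SCNView, at pointInScene: CGPoint) -> SCNHitTestResult? {
        let options: [SCNHitTestOption: Any] = [
            .searchMode: SCNHitTestSearchMode.all.rawValue,  // return every hit, not just the closest
            .ignoreHiddenNodes: false,
            .boundingBoxOnly: false
        ]
        let hitResults = sceneView.hitTest(pointInScene, options: options)
        return hitResults.first { $0.node.name?.compare("node_name") == .orderedSame }
    }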
Placing text over images
What is the best way to display text over images? I'd like the image to fade to white underneath the text so that the text is easier to read, since I have no control over the contents of the images. I thought about having a second label behind the actual label with the same text in a slightly larger font and white color, but I'd rather have a gradual fading of the image just under the text than what looks like 3D text. Any suggestions?
Replies: 2 · Boosts: 0 · Views: 350 · Activity: Jun ’24
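One common way to get the fade described above is a CAGradientLayer inserted between the image and the label, transparent at the top and mostly-opaque white behind the text. A sketch (the view, colors and metrics are placeholders to tune):

    import UIKit

    final class CaptionedImageView: UIView {
        let imageView = UIImageView()
        let label = UILabel()
        private let fade = CAGradientLayer()

        override init(frame: CGRect) {
            super.init(frame: frame)
            addSubview(imageView)

            // Clear at the top, fading to near-white where the text sits.
            fade.colors = [UIColor.white.withAlphaComponent(0).cgColor,
                           UIColor.white.withAlphaComponent(0.85).cgColor]
            fade.locations = [0.6, 1.0]
            layer.addSublayer(fade)

            label.textColor = .darkText
            addSubview(label)
        }

        required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }

        override func layoutSubviews() {
            super.layoutSubviews()
            imageView.frame = bounds
            fade.frame = bounds
            label.frame = CGRect(x: 16, y: bounds.height - 44, width: bounds.width - 32, height: 32)
        }
    }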
Tap gesture collisions not fully detecting for distant large entities
I am working on an app where we are attempting to place large entities quite far away from the user, but when trying to recognise a tap gesture on them, the gesture isn't picked up for part of the model. It seems that the larger and further away a model is placed, the more offset the collision shape becomes: it responds to taps in a region that shrinks towards the bottom right. The actual size of the collision shape appears to be correct when viewed with the collision-shape debug visualisation. I've been able to replicate this behaviour in the simulator and on a physical device. It's hard to explain in words; there's a video in the README for the repo here. I've been able to replicate the issue in a simple sample app. Not sure if I might be using it wrong or if it is expected behaviour for tap gestures to be a bit off when placed a large distance from the user. Appreciate any help, thanks.

    struct ImmersiveView: View {
        @State private var tapCount = 0

        var body: some View {
            RealityView { content in
                let sphere = ModelEntity(mesh: .generateSphere(radius: 50),
                                         materials: [UnlitMaterial(color: .red)])
                sphere.setPosition([500, 0, 0], relativeTo: nil)
                sphere.components.set([
                    InputTargetComponent(),
                    CollisionComponent(shapes: [.generateBox(width: 250, height: 250, depth: 250)]),
                ])
                content.add(sphere)
            }
            .gesture(
                SpatialTapGesture()
                    .targetedToAnyEntity()
                    .onEnded { value in
                        tapCount += 1
                        print(tapCount)
                    }
            )
        }
    }
Replies: 2 · Boosts: 0 · Views: 459 · Activity: Jun ’24
Does anyone know if HDR video is supported in a RealityView?
I have attempted to use VideoMaterial with HDR HLS stream, and also a TextureResource.DrawableQueue with rgba16Float in a ShaderGraphMaterial. I'm capturing to 64RGBAHalf with AVPlayerItemVideoOutput and converting that to rgba16Float. I don't believe it's displaying HDR properly or behaving like a raw AVPlayer. Since we can't configure any EDR metadata or color space for a RealityView, how do we display HDR video? Is using rgba16Float supposed to be enough? Is expecting the 64RGBAHalf capture to handle HDR properly a mistake and should I capture YUV and do the conversion myself? Thank you
Replies: 1 · Boosts: 0 · Views: 407 · Activity: Jun ’24
Problem with setNeedsDisplay:
Hi, with the default values the rotation takes place, but changing the value does not. I bind a button to a slider action:

    - (IBAction)rotateXAction:(id)sender
    {
        NSLog(@"%@ \n", sender);
        BOOL yn = YES;
        _rotationX = [_sliderX intValue];
        if (yn)
            printf("rotationX %d \n", _rotationX);   // value o.k.
        [self setNeedsDisplay:YES];
    }

drawRect: is not called. Please tell me what is wrong with my code. Uwe
Replies: 1 · Boosts: 0 · Views: 399 · Activity: Jun ’24
SwiftUI full screen animation uses less energy than Metal Game template
I've got a full-screen animation of a bunch of circles filled with gradients, with plenty of (careless) overdraw, plus real-time audio processing driving the animation, plus the overhead of SwiftUI's dependency analysis, and that app uses less energy (on iPhone 13) than the Xcode "Metal Game" template which is a rotating textured cube (a trivial GPU workload). Why is that? How can I investigate further? Does CoreAnimation have access to a compositor fast-path that a Metal app cannot access? Maybe another data point: when I do the same circles animation using SwiftUI's Canvas, the energy use is "Very High" and GPU utilization is also quite high. Eventually the phone's thermal state goes "Serious" and I get a message on the device that "Charging will resume when iPhone returns to normal temperature".
Replies: 0 · Boosts: 4 · Views: 465 · Activity: May ’24
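One variable worth ruling out in the comparison above: the Metal Game template redraws every frame at the display refresh rate, while Core Animation / SwiftUI only recomposites layers that changed. A sketch of throttling or pausing the template's MTKView so the energy comparison is fairer (assuming an MTKView-based renderer, which is what the template generates):

    import MetalKit

    func configurePacing(for view: MTKView) {
        // Option 1: cap the continuous redraw rate.
        view.preferredFramesPerSecond = 30

        // Option 2: stop the internal display link and redraw only on demand.
        view.isPaused = true
        view.enableSetNeedsDisplay = true
        // ...then call view.setNeedsDisplay() when the content actually changes.
    }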
[CAMetalLayer nextDrawable] returning nil because allocation failed.
Why do I get this error almost immediately on starting my rendering pass?

    2024-05-29 20:02:22.744035-0500 RoomPlanExampleApp[491:10341] [] <<<< AVPointCloudData >>>> Fig assert: "_dataBuffer" at bail (AVPointCloudData.m:217) - (err=0)
    2024-05-29 20:02:22.744455-0500 RoomPlanExampleApp[491:10341] [] <<<< AVPointCloudData >>>> Fig assert: "_dataBuffer" at bail (AVPointCloudData.m:217) - (err=0)
    2024-05-29 20:05:54.079981-0500 RoomPlanExampleApp[491:10025] [CAMetalLayer nextDrawable] returning nil because allocation failed.
    2024-05-29 20:05:54.080144-0500 RoomPlanExampleApp[491:10341] [] <<<< AVPointCloudData >>>> Fig assert: "_dataBuffer" at bail (AVPointCloudData.m:217) - (err=0)
Replies: 6 · Boosts: 1 · Views: 648 · Activity: May ’24
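For the nextDrawable error above: a common (though not the only) cause is holding on to drawables long enough that CAMetalLayer's small internal pool runs dry, at which point nextDrawable() returns nil. A sketch of the usual pattern, acquiring the drawable as late as possible inside an autoreleasepool and presenting/committing before leaving it:

    import Metal
    import QuartzCore

    func renderFrame(layer: CAMetalLayer, queue: MTLCommandQueue,
                     encode: (MTLRenderCommandEncoder) -> Void) {
        autoreleasepool {
            guard let drawable = layer.nextDrawable(),
                  let commandBuffer = queue.makeCommandBuffer() else { return }

            let pass = MTLRenderPassDescriptor()
            pass.colorAttachments[0].texture = drawable.texture
            pass.colorAttachments[0].loadAction = .clear
            pass.colorAttachments[0].storeAction = .store

            if let encoder = commandBuffer.makeRenderCommandEncoder(descriptor: pass) {
                encode(encoder)
                encoder.endEncoding()
            }
            commandBuffer.present(drawable)
            commandBuffer.commit()
        }
    }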
CAMetalLayer vs AVSampleBufferDisplayLayer (GPU usage, performance, ...)
I am a VOIP app developer. I am planning to develop a VOIP app on iOS using WebRTC that operates in PiP (Picture-in-Picture) mode. Since MTKView (CAMetalLayer) cannot be used in PiP mode, I am considering using AVSampleBufferDisplayLayer. Regarding this, I am curious about the performance differences between CAMetalLayer and AVSampleBufferDisplayLayer. As far as I know, CAMetalLayer utilizes the GPU. Does AVSampleBufferDisplayLayer also render using the GPU? If AVSampleBufferDisplayLayer renders using the GPU, will the rendering performance be similar? => Based on tests, there seems to be no difference in CPU usage between the two, which leads me to speculate that AVSampleBufferDisplayLayer also uses the GPU. If both use the GPU and there are no performance differences, is there a significant advantage to using CAMetalLayer? Thank you in advance.
Replies: 1 · Boosts: 0 · Views: 326 · Activity: May ’24
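For reference in the comparison above, the AVSampleBufferDisplayLayer path hands decoded frames to the layer as CMSampleBuffers and lets the system composite them; whether that compositing is GPU-backed is exactly what the question asks. A minimal feeding sketch:

    import AVFoundation

    func display(_ sampleBuffer: CMSampleBuffer, on layer: AVSampleBufferDisplayLayer) {
        guard layer.isReadyForMoreMediaData else { return }
        layer.enqueue(sampleBuffer)

        if layer.status == .failed {
            layer.flush()   // recover from a failed state and keep feeding frames
        }
    }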
I'm trying to create a Magic Wand Selection Tool
So I have a class that does the selection, but what I get back is a total mess. Can anyone help me with the code, or possibly point me at an example class that performs a magic wand selection like in Photoshop? Below is my current code; I've also attempted to use Metal and got an identical result in the same amount of time.

    func colorMatches(selectedColor: UIColor, pixelColor: UIColor, tolerance: Float) -> Bool {
        var red1: CGFloat = 0
        var green1: CGFloat = 0
        var blue1: CGFloat = 0
        var alpha1: CGFloat = 0
        selectedColor.getRed(&red1, green: &green1, blue: &blue1, alpha: &alpha1)

        var red2: CGFloat = 0
        var green2: CGFloat = 0
        var blue2: CGFloat = 0
        var alpha2: CGFloat = 0
        pixelColor.getRed(&red2, green: &green2, blue: &blue2, alpha: &alpha2)

        let tolerance = CGFloat(tolerance)
        return abs(red1 - red2) < tolerance &&
               abs(green1 - green2) < tolerance &&
               abs(blue1 - blue2) < tolerance &&
               abs(alpha1 - alpha2) < tolerance
    }

    func getPixelData(from image: UIImage) -> [UInt8]? {
        guard let cgImage = image.cgImage else { return nil }
        let width = Int(cgImage.width)
        let height = Int(cgImage.height)
        let bytesPerPixel = 4
        let bytesPerRow = bytesPerPixel * width
        let bitsPerComponent = 8
        let bitmapInfo = CGImageAlphaInfo.premultipliedLast.rawValue
        var pixelData = [UInt8](repeating: 0, count: width * height * bytesPerPixel)
        guard let context = CGContext(data: &pixelData,
                                      width: width,
                                      height: height,
                                      bitsPerComponent: bitsPerComponent,
                                      bytesPerRow: bytesPerRow,
                                      space: CGColorSpaceCreateDeviceRGB(),
                                      bitmapInfo: bitmapInfo) else { return nil }
        context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))
        return pixelData
    }

    func performMagicWandSelection(inputImage: UIImage, selectedColor: UIColor, tolerance: Float) -> UIBezierPath? {
        guard let pixelData = getPixelData(from: inputImage) else { return nil }
        let width = Int(inputImage.size.width)
        let height = Int(inputImage.size.height)
        let bytesPerPixel = 4
        let path = UIBezierPath()
        var isDrawing = false

        for y in 0..<height {
            for x in 0..<width {
                let index = (y * width + x) * bytesPerPixel
                let pixelColor = UIColor(red: CGFloat(pixelData[index]) / 255.0,
                                         green: CGFloat(pixelData[index + 1]) / 255.0,
                                         blue: CGFloat(pixelData[index + 2]) / 255.0,
                                         alpha: CGFloat(pixelData[index + 3]) / 255.0)
                if colorMatches(selectedColor: selectedColor, pixelColor: pixelColor, tolerance: tolerance) {
                    let point = CGPoint(x: x, y: y)
                    if !isDrawing {
                        path.move(to: point)
                        isDrawing = true
                    } else {
                        path.addLine(to: point)
                    }
                } else {
                    if isDrawing {
                        path.close()
                        isDrawing = false
                    }
                }
            }
        }
        if isDrawing {
            path.close()
        }
        return path
    }
Replies: 0 · Boosts: 0 · Views: 287 · Activity: May ’24
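For the magic wand question above: the row-by-row scan traces every matching pixel in reading order, which is at least part of why the resulting path is a mess; a magic wand is usually a flood fill that grows outward from the tapped pixel. A sketch of that region-growing step on the same pixelData buffer (tolerance here is a per-channel 0...255 value; turning the mask into a UIBezierPath, e.g. via marching squares, is a separate step):

    import Foundation

    // Grow a selection mask outwards from the tapped pixel, accepting neighbours whose
    // colour is within `tolerance` of the seed colour on every channel.
    func magicWandMask(pixelData: [UInt8], width: Int, height: Int,
                       start: (x: Int, y: Int), tolerance: Int) -> [Bool] {
        let bytesPerPixel = 4
        func channel(_ x: Int, _ y: Int, _ c: Int) -> Int {
            Int(pixelData[(y * width + x) * bytesPerPixel + c])
        }
        let seed = (0..<4).map { channel(start.x, start.y, $0) }

        var mask = [Bool](repeating: false, count: width * height)
        var queue = [start]
        mask[start.y * width + start.x] = true

        while let (x, y) = queue.popLast() {
            for (dx, dy) in [(1, 0), (-1, 0), (0, 1), (0, -1)] {
                let nx = x + dx, ny = y + dy
                guard nx >= 0, nx < width, ny >= 0, ny < height, !mask[ny * width + nx] else { continue }
                let matches = (0..<4).allSatisfy { abs(channel(nx, ny, $0) - seed[$0]) <= tolerance }
                if matches {
                    mask[ny * width + nx] = true
                    queue.append((x: nx, y: ny))
                }
            }
        }
        return mask
    }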
Mistake in OpenGLView:drawRect
I am rewriting an old project since OpenGL is deprecated, and the downgrade to High Sierra came with the loss of the old project. Here is the drawRect:

    - (void)drawRect:(NSRect)dirtyRect
    {
        [super drawRect:dirtyRect];

        if (backgroundColor == 1)
            glClearColor(0.95f, 1.0, 1.0f, 1.0);
        else
            glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

        glLoadIdentity();
        glTranslatef(0.0, 0.0, -10.0);        // After mistake I wrote PushMatrix()
        glTranslatef(-0.01, -0.01, -10.0);    // Screen Shot 0
        // glTranslatef(-0.25, -0.25, -10.0); // Screen Shot 1
        glLineWidth(1.0);
        glRotated(rotationX, 1, 0, 0);
        glRotated(rotationY, 0, 1, 0);
        glCallList(axes);                     // After mistake I wrote PopMatrix()
        // Axes vertices of x, y, z are 1.0
        // With vertices +/-0.99 nothing is drawn

        [self glError];
        [self.openGLContext flushBuffer];
    }

I hope you accept my text and hope you can help me! Kind regards, Uwe

Attached: Screen Shot 0, Screen Shot 1
Replies: 1 · Boosts: 0 · Views: 327 · Activity: May ’24