Delve into the world of graphics and game development. Discuss creating stunning visuals and optimizing game mechanics, and share resources for game developers.

OpenGL ES support on Apple Silicon Simulators
Hey folks, I have a legacy game that runs on OpenGL ES, and it no longer works on the simulators that run on Apple Silicon, i.e. the iPhone 15 Pro or the 13" iPads. And yes, I'm also running on Apple Silicon (M1 Max). The app works fine on actual devices, but the simulator crashes on any glDrawElements call with a stack that looks like the following: I have not yet seen an announcement about this not working, but I've seen other projects mention dropping GL support (https://github.com/maplibre/maplibre-native/issues/2351). Can anyone shed some light? I'm obviously going to try to fix it, or find a recent sample app to start from and see what might be up. Or move to Metal, but I hadn't bargained for that level of effort at the moment ;) Any suggestions appreciated!
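A possible stopgap while investigating, offered only as a sketch: it assumes the game can tolerate skipping GL rendering in simulator builds, and `submitDraw` is a hypothetical wrapper, not something from the post.

```swift
import OpenGLES

// Sketch of a stopgap, not a fix: compile the GL submission out of simulator
// builds so the rest of the game stays testable there, and keep rendering
// tests on device. `submitDraw` is a hypothetical single draw entry point.
func submitDraw(indexCount: GLsizei) {
    #if targetEnvironment(simulator)
    // glDrawElements is where the crash is reported on Apple Silicon simulators;
    // skip submission here.
    _ = indexCount
    #else
    glDrawElements(GLenum(GL_TRIANGLES), indexCount, GLenum(GL_UNSIGNED_SHORT), nil)
    #endif
}
```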
Replies: 5 · Boosts: 0 · Views: 299 · Activity: 2w
Tap gesture collisions not fully detecting for distant large entities
I am working on an app where we place large entities quite far away from the user, but when we try to recognise a tap gesture on them, the gesture isn't picked up over part of the model. The larger and further away a model is placed, the more offset the collision shape seems to be: it responds to taps in a region that shrinks towards the bottom right. The actual size of the collision shape appears to be correct when viewed with the collision shape debug visualisation. I've been able to replicate this behaviour in the simulator and on a physical device. It's hard to explain in words; there's a video in the README for the repo here. I've also been able to replicate the issue in a simple sample app. I'm not sure if I'm using it wrong or if it is expected behaviour for tap gestures to be a bit off when entities are placed a large distance from the user. Appreciate any help, thanks.

```swift
import SwiftUI
import RealityKit

struct ImmersiveView: View {
    @State private var tapCount = 0

    var body: some View {
        RealityView { content in
            let sphere = ModelEntity(mesh: .generateSphere(radius: 50),
                                     materials: [UnlitMaterial(color: .red)])
            sphere.setPosition([500, 0, 0], relativeTo: nil)
            sphere.components.set([
                InputTargetComponent(),
                CollisionComponent(shapes: [.generateBox(width: 250, height: 250, depth: 250)]),
            ])
            content.add(sphere)
        }
        .gesture(
            SpatialTapGesture()
                .targetedToAnyEntity()
                .onEnded { value in
                    tapCount += 1
                    print(tapCount)
                }
        )
    }
}
```
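One variation that might be worth comparing, purely as a sketch (the helper name is mine): letting RealityKit derive the collision shape from the mesh itself, to rule out a mismatch between the visual and collision geometry, since the sphere above has radius 50 while the collision box is 250 per side.

```swift
import RealityKit

// Sketch (hypothetical variant of the sample above): generate the collision
// shape from the mesh bounds instead of hand-sizing a box. Distant hit
// behaviour may still differ; this only removes one variable.
func makeDistantSphere() -> ModelEntity {
    let sphere = ModelEntity(mesh: .generateSphere(radius: 50),
                             materials: [UnlitMaterial(color: .red)])
    sphere.setPosition([500, 0, 0], relativeTo: nil)
    sphere.components.set(InputTargetComponent())
    // Convex collision shape fitted to the sphere mesh.
    sphere.generateCollisionShapes(recursive: false)
    return sphere
}
```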
Replies: 2 · Boosts: 0 · Views: 379 · Activity: 3w
Wrong hitTest results in iOS 17.2
We’re experiencing an issue with wrong SceneKit hit-testing results in iOS 17.2 compared with iOS 16.1, when using either the Metal or OpenGL ES 2 rendering engine. We tap on a 3D model to place an SCNNode:

```swift
// pointInScene: tapped point
let hitResults = sceneView.hitTest(pointInScene, options: nil)
return hitResults.first { $0.node.name?.compare("node_name") == .orderedSame }
```
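For diagnosis, a hedged sketch that asks SceneKit for every hit along the ray, so differences between iOS 16 and 17 show up in the full result list; the option values are illustrative, not a known fix.

```swift
import SceneKit

// Diagnostic sketch: request every hit along the ray so version differences
// appear in the complete result list, not just the first element.
func debugHitTest(in sceneView: SCNView, at pointInScene: CGPoint) -> SCNHitTestResult? {
    let options: [SCNHitTestOption: Any] = [
        .searchMode: SCNHitTestSearchMode.all.rawValue,
        .ignoreHiddenNodes: false
    ]
    let hitResults = sceneView.hitTest(pointInScene, options: options)
    for result in hitResults {
        print(result.node.name ?? "<unnamed>", result.worldCoordinates)
    }
    return hitResults.first { $0.node.name == "node_name" }
}
```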
Replies: 2 · Boosts: 0 · Views: 204 · Activity: 2w
Does anyone know if HDR video is supported in a RealityView?
I have attempted to use VideoMaterial with an HDR HLS stream, and also a TextureResource.DrawableQueue with rgba16Float in a ShaderGraphMaterial. I'm capturing to 64RGBAHalf with AVPlayerItemVideoOutput and converting that to rgba16Float. I don't believe it's displaying HDR properly or behaving like a raw AVPlayer. Since we can't configure any EDR metadata or color space for a RealityView, how do we display HDR video? Is using rgba16Float supposed to be enough? Is expecting the 64RGBAHalf capture to handle HDR properly a mistake, and should I capture YUV and do the conversion myself? Thank you.
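For reference, a minimal sketch of the capture side described above: an AVPlayerItemVideoOutput configured for 64RGBAHalf. Whether this alone preserves HDR/EDR in a RealityView is exactly the open question; the attribute keys are standard, but the downstream colour handling is an assumption.

```swift
import AVFoundation
import CoreVideo

// Sketch: configure the player output for half-float RGBA frames, as described
// in the post. Colour management after this point is not covered here.
func makeHalfFloatOutput() -> AVPlayerItemVideoOutput {
    let attributes: [String: Any] = [
        kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_64RGBAHalf,
        kCVPixelBufferMetalCompatibilityKey as String: true
    ]
    return AVPlayerItemVideoOutput(pixelBufferAttributes: attributes)
}

// Per display-link tick (hypothetical caller): pull the latest frame if one is ready.
func copyLatestFrame(from output: AVPlayerItemVideoOutput, at hostTime: CFTimeInterval) -> CVPixelBuffer? {
    let itemTime = output.itemTime(forHostTime: hostTime)
    guard output.hasNewPixelBuffer(forItemTime: itemTime) else { return nil }
    return output.copyPixelBuffer(forItemTime: itemTime, itemTimeForDisplay: nil)
}
```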
Replies: 1 · Boosts: 0 · Views: 368 · Activity: 4w
Problem with setNeedsDisplay:
Hi, with the default values the rotation takes place, but after changing the value it does not. I bound a button to a slider action:

```objc
- (IBAction)rotateXAction:(id)sender
{
    NSLog(@"%@ \n", sender);
    BOOL yn = YES;
    _rotationX = [_sliderX intValue];
    if (yn)
        printf("rotationX %d \n", _rotationX); // value OK
    [self setNeedsDisplay:YES];
}
```

drawRect: is not being called. What is wrong with my code? Please tell me. Uwe
Replies: 1 · Boosts: 0 · Views: 354 · Activity: 4w
CGContextDrawLayerAtPoint Problems in macOS Sonoma
My app stopped working on macOS Sonoma 14.0, and I quickly isolated the problem to CGContextDrawLayerAtPoint. Two issues: first, about half the time no data was copied (the updated CGLayer did not show up in the window). Then the app would crash in libswiftCore.dylib after about 5 updates with a very unusual message: "Fatal error: Duplicate keys of type 'DisplayList' were found in a Dictionary. This usually means either that the type violates Hashable's requirements, or that members of such a dictionary were mutated after insertion". This behavior showed up in builds made with Xcode 13 on a macOS Monterey machine, as well as Xcode 15 on macOS Ventura, when the app was run on Sonoma. My app uses a very traditional method to create an off-screen graphics context in drawRect:

```objc
- (void)drawRect:(NSRect)dirtyRect
{
    // Obtain context from the current NSGraphicsContext ...
    viewNSContext = [NSGraphicsContext currentContext];
    viewCGContext = (CGContextRef)[viewNSContext graphicsPort];
    drawingLayer  = CGLayerCreateWithContext(viewCGContext, size, NULL);
```

So the exact details of the off-screen drawing area were based upon the characteristics of the window being drawn to. Fortunately the work-around was very easy: by creating a custom CGBitmapContext, everything was resolved. My drawing requirements are very basic, so a simple 32-bit RGB off-screen context was adequate.

```objc
colorSpaceRef    = CGColorSpaceCreateDeviceRGB();
bitMapContextRef = CGBitmapContextCreate(NULL,
                                         (int)rintf(size.width),
                                         (int)rintf(size.height),
                                         8, 0, colorSpaceRef,
                                         kCGImageAlphaNoneSkipLast);
drawingLayer     = CGLayerCreateWithContext(bitMapContextRef, size, NULL);
```

Once I changed to a bitmap offscreen context, the problem was resolved. In my case I verified that the portion of the window updated by CGContextDrawLayerAtPoint was indeed restricted to the dirty part of the view rectangle, at least in Sonoma 14.5. Hope this helps someone else searching for the issue, as I found nothing in the Forums or online.
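For anyone hitting the same thing from Swift, a minimal sketch of the equivalent workaround using the same Core Graphics calls; the pixel format mirrors the post, and the helper name is mine.

```swift
import CoreGraphics

// Sketch of the workaround from Swift: back the CGLayer with an explicit
// bitmap context instead of the window's context. 32-bit RGB, alpha skipped,
// matching the post; size is whatever the view needs.
func makeDrawingLayer(size: CGSize) -> CGLayer? {
    guard let bitmapContext = CGContext(
        data: nil,
        width: Int(size.width.rounded()),
        height: Int(size.height.rounded()),
        bitsPerComponent: 8,
        bytesPerRow: 0, // let Core Graphics choose
        space: CGColorSpaceCreateDeviceRGB(),
        bitmapInfo: CGImageAlphaInfo.noneSkipLast.rawValue
    ) else { return nil }
    return CGLayer(bitmapContext, size: size, auxiliaryInfo: nil)
}
```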
Replies: 2 · Boosts: 1 · Views: 274 · Activity: May ’24
Mistake in OpenGLView:drawRect
I am rewriting an old project, since OpenGL is deprecated and the downgrade to High Sierra came with the loss of the old project. Here is the drawRect: method:

```objc
- (void)drawRect:(NSRect)dirtyRect
{
    [super drawRect:dirtyRect];

    if (backgroundColor == 1)
        glClearColor(0.95f, 1.0, 1.0f, 1.0);
    else
        glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    glLoadIdentity();
    glTranslatef(0.0, 0.0, -10.0);
    // After the mistake I wrote PushMatrix() here
    glTranslatef(-0.01, -0.01, -10.0);   // Screen Shot 0
    // glTranslatef(-0.25, -0.25, -10.0); // Screen Shot 1
    glLineWidth(1.0);
    glRotated(rotationX, 1, 0, 0);
    glRotated(rotationY, 0, 1, 0);
    glCallList(axes);
    // After the mistake I wrote PopMatrix() here
    // Axes vertices of x, y, z are 1.0
    // With vertices +/-0.99 nothing is drawn

    [self glError];
    [self.openGLContext flushBuffer];
}
```

I hope you accept my text and hope you can help me! Kind regards, Uwe

Screen Shot 0
Screen Shot 1
Replies: 1 · Boosts: 0 · Views: 289 · Activity: May ’24
CAMetalLayer vs AVSampleBufferDisplayLayer (GPU usage, performance, ...)
I am a VoIP app developer, planning to build a VoIP app on iOS using WebRTC that operates in Picture-in-Picture (PiP) mode. Since MTKView (CAMetalLayer) cannot be used in PiP mode, I am considering AVSampleBufferDisplayLayer instead, and I am curious about the performance differences between the two. As far as I know, CAMetalLayer renders using the GPU. Does AVSampleBufferDisplayLayer also render using the GPU, and if so, will the rendering performance be similar? Based on my tests there seems to be no difference in CPU usage between the two, which leads me to speculate that AVSampleBufferDisplayLayer also uses the GPU. If both use the GPU and there are no performance differences, is there a significant advantage to using CAMetalLayer? Thank you in advance.
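For context, a minimal sketch of the AVSampleBufferDisplayLayer feeding path being considered; the renderer class and the frame source are assumptions, while the layer API itself is standard.

```swift
import AVFoundation

// Sketch: feeding decoded frames to an AVSampleBufferDisplayLayer, as one
// might in a PiP video pipeline. `sampleBuffer` is assumed to come from the
// WebRTC decode path (hypothetical source); nothing here measures GPU cost.
final class SampleBufferRenderer {
    let displayLayer = AVSampleBufferDisplayLayer()

    func enqueue(_ sampleBuffer: CMSampleBuffer) {
        // Recover from a failed layer rather than stalling the pipeline.
        if displayLayer.status == .failed {
            displayLayer.flush()
        }
        // Drop frames if the layer is not ready for more data.
        if displayLayer.isReadyForMoreMediaData {
            displayLayer.enqueue(sampleBuffer)
        }
    }
}
```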
Replies: 1 · Boosts: 0 · Views: 282 · Activity: May ’24
SwiftUI full screen animation uses less energy than Metal Game template
I've got a full-screen animation of a bunch of circles filled with gradients, with plenty of (careless) overdraw, plus real-time audio processing driving the animation, plus the overhead of SwiftUI's dependency analysis, and yet that app uses less energy (on an iPhone 13) than the Xcode "Metal Game" template, which is a rotating textured cube, a trivial GPU workload. Why is that? How can I investigate further? Does Core Animation have access to a compositor fast path that a Metal app cannot access? Maybe another data point: when I do the same circles animation using SwiftUI's Canvas, the energy use is "Very High" and GPU utilization is also quite high. Eventually the phone's thermal state goes to "Serious" and I get a message on the device that "Charging will resume when iPhone returns to normal temperature".
Replies: 0 · Boosts: 4 · Views: 426 · Activity: 4w
I'm trying to create a Magic Wand Selection Tool
So I have a class that does the selection, but what I get back is a total mess. Can anyone help me with the code, or possibly point me at an example class that performs a magic wand selection like in Photoshop? Below is my current code; I've also attempted to use Metal and got an identical result in the same amount of time.

```swift
import UIKit

// Signature reconstructed from the call site in performMagicWandSelection below.
func colorMatches(selectedColor: UIColor, pixelColor: UIColor, tolerance: Float) -> Bool {
    var red1: CGFloat = 0
    var green1: CGFloat = 0
    var blue1: CGFloat = 0
    var alpha1: CGFloat = 0
    selectedColor.getRed(&red1, green: &green1, blue: &blue1, alpha: &alpha1)

    var red2: CGFloat = 0
    var green2: CGFloat = 0
    var blue2: CGFloat = 0
    var alpha2: CGFloat = 0
    pixelColor.getRed(&red2, green: &green2, blue: &blue2, alpha: &alpha2)

    let tolerance = CGFloat(tolerance)
    return abs(red1 - red2) < tolerance &&
           abs(green1 - green2) < tolerance &&
           abs(blue1 - blue2) < tolerance &&
           abs(alpha1 - alpha2) < tolerance
}

func getPixelData(from image: UIImage) -> [UInt8]? {
    guard let cgImage = image.cgImage else { return nil }
    let width = Int(cgImage.width)
    let height = Int(cgImage.height)
    let bytesPerPixel = 4
    let bytesPerRow = bytesPerPixel * width
    let bitsPerComponent = 8
    let bitmapInfo = CGImageAlphaInfo.premultipliedLast.rawValue
    var pixelData = [UInt8](repeating: 0, count: width * height * bytesPerPixel)
    guard let context = CGContext(data: &pixelData,
                                  width: width,
                                  height: height,
                                  bitsPerComponent: bitsPerComponent,
                                  bytesPerRow: bytesPerRow,
                                  space: CGColorSpaceCreateDeviceRGB(),
                                  bitmapInfo: bitmapInfo) else { return nil }
    context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))
    return pixelData
}

func performMagicWandSelection(inputImage: UIImage, selectedColor: UIColor, tolerance: Float) -> UIBezierPath? {
    guard let pixelData = getPixelData(from: inputImage) else { return nil }
    let width = Int(inputImage.size.width)
    let height = Int(inputImage.size.height)
    let bytesPerPixel = 4
    let path = UIBezierPath()
    var isDrawing = false
    for y in 0..<height {
        for x in 0..<width {
            let index = (y * width + x) * bytesPerPixel
            let pixelColor = UIColor(red: CGFloat(pixelData[index]) / 255.0,
                                     green: CGFloat(pixelData[index + 1]) / 255.0,
                                     blue: CGFloat(pixelData[index + 2]) / 255.0,
                                     alpha: CGFloat(pixelData[index + 3]) / 255.0)
            if colorMatches(selectedColor: selectedColor, pixelColor: pixelColor, tolerance: tolerance) {
                let point = CGPoint(x: x, y: y)
                if !isDrawing {
                    path.move(to: point)
                    isDrawing = true
                } else {
                    path.addLine(to: point)
                }
            } else {
                if isDrawing {
                    path.close()
                    isDrawing = false
                }
            }
        }
    }
    if isDrawing {
        path.close()
    }
    return path
}
```
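For what it's worth, magic wand tools are usually implemented as a flood fill from the tapped pixel rather than a row-by-row path; here is a minimal sketch of that approach (function and parameter names are mine, reusing the RGBA8 pixel layout produced by getPixelData above).

```swift
import UIKit

// Sketch of a flood-fill based selection (names are illustrative, not from the
// post): starting at the tapped pixel, grow a region over 4-connected
// neighbours whose colour is within tolerance, and return a boolean mask.
// `pixelData` is assumed to be RGBA8, row-major, as produced by getPixelData.
func floodFillMask(pixelData: [UInt8],
                   width: Int,
                   height: Int,
                   start: (x: Int, y: Int),
                   tolerance: Int) -> [Bool] {
    func channel(_ x: Int, _ y: Int, _ c: Int) -> Int {
        Int(pixelData[(y * width + x) * 4 + c])
    }
    func matches(_ x: Int, _ y: Int, _ ref: (Int, Int, Int, Int)) -> Bool {
        abs(channel(x, y, 0) - ref.0) <= tolerance &&
        abs(channel(x, y, 1) - ref.1) <= tolerance &&
        abs(channel(x, y, 2) - ref.2) <= tolerance &&
        abs(channel(x, y, 3) - ref.3) <= tolerance
    }

    var mask = [Bool](repeating: false, count: width * height)
    guard (0..<width).contains(start.x), (0..<height).contains(start.y) else { return mask }

    // Reference colour is taken from the tapped pixel itself.
    let ref = (channel(start.x, start.y, 0), channel(start.x, start.y, 1),
               channel(start.x, start.y, 2), channel(start.x, start.y, 3))
    var stack = [start]
    mask[start.y * width + start.x] = true

    while let (x, y) = stack.popLast() {
        for (nx, ny) in [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)] {
            guard (0..<width).contains(nx), (0..<height).contains(ny) else { continue }
            let index = ny * width + nx
            if !mask[index] && matches(nx, ny, ref) {
                mask[index] = true
                stack.append((x: nx, y: ny))
            }
        }
    }
    return mask
}
```

The resulting mask can then be traced into an outline path or used directly as the selection, which avoids the stray connecting lines that move/addLine across scanlines produces.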
Replies: 0 · Boosts: 0 · Views: 263 · Activity: May ’24
RealityView and Anchors
Hello, in the documentation for ARView we see a diagram showing that all entities are connected to the Scene via AnchorEntitys: https://developer.apple.com/documentation/realitykit/arview. What happens when we are using a RealityView? Here the documentation suggests we directly add entities: https://developer.apple.com/documentation/realitykit/realityview/. Three questions:
1. Do we need to add entities to an AnchorEntity first and add that via content.add(...)?
2. Is an Entity ignored by the physics engine if it is attached via an anchor?
3. If both the AnchorEntity and an attached Entity are added via content.add(...), is the anchor's position ignored?
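For context, a minimal sketch of the pattern the questions are about: parenting an entity to an AnchorEntity and adding only the anchor to the content. Whether the extra anchor is required is exactly what is being asked; the calls themselves are standard RealityKit.

```swift
import SwiftUI
import RealityKit

// Sketch of the anchor-parenting pattern in question. This only shows the
// shape of the code; it does not answer whether RealityView requires it.
struct AnchoredContentView: View {
    var body: some View {
        RealityView { content in
            let anchor = AnchorEntity(world: [0, 0, -1])   // fixed point 1 m in front
            let box = ModelEntity(mesh: .generateBox(size: 0.2))
            anchor.addChild(box)
            content.add(anchor)
        }
    }
}
```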
Replies: 1 · Boosts: 0 · Views: 370 · Activity: May ’24
Coordinate system in SharePlay + Spatial Persona experience
I cannot figure out how SharePlay + Spatial Personas place the origin of RealityKit's coordinate system. I have an app on visionOS with a mixed-immersion ImmersiveSpace scene. In the scene I am using ARKit to track my hand, creating a virtual object that follows the movement of my palm. Every frame I query positions from the HandAnchor to update the position of my object, using originFromAnchorTransform to correctly place it in the scene. However, when I try to adopt that in a SharePlay experience with Spatial Personas, the virtual object's position updates become a mess. With different templates (.sideBySide or .conversational), the origin of my space appears with no pattern; I can always see that the virtual object doesn't follow my hand but sits in a random place. It seems there is a difference/transform between the HandAnchor's origin and the immersive space origin under Spatial Persona + SharePlay mode, isn't there? Or should I try something like convert(displacement.inverse.rotation, from: .immersiveSpace, to: .scene), as mentioned in https://developer.apple.com/documentation/realitykit/realitycoordinatespaceconverting and https://developer.apple.com/documentation/swiftui/environmentvalues/immersivespacedisplacement, to compute the translation and apply it to my virtual object? I haven't got that working yet. Can someone tell me how to do this correctly?
Replies: 1 · Boosts: 0 · Views: 361 · Activity: May ’24
Anchor a Reality scene on an image anchor
I am developing a prototype Vision Pro app and would like to render a 3D scene made in Reality Composer Pro on an image anchor in a RealityView. So far I have had no luck making it work and need some guidance to move on. I have the image file stored in the asset catalog, and below is the source code:

```swift
import SwiftUI
import RealityKit
import RealityKitContent

struct AnchorView: View {
    @State var imageEntity: Entity = {
        let anchorEntity = AnchorEntity(.image(group: "AR Resources", name: "reanchor"))
        return anchorEntity
    }()

    var body: some View {
        RealityView { content in
            do {
                // Add the initial RealityKit content
                if let scene = try? await Entity(named: "Scene", in: realityKitContentBundle) {
                    imageEntity.addChild(scene)
                    content.add(imageEntity)
                }
            } catch {
                print("Error occurs when adding reality view content: \(error)")
            }
        }
    }
}
```
Replies: 2 · Boosts: 0 · Views: 414 · Activity: May ’24
How to Make Xcode Recognize Morph Targets in a DAE File Imported from Blender?
I'm working on a project in Xcode where I need to use a 3D model with multiple morph targets (shape keys in Blender) for animations. The model, specifically the Wolf3D_Head node, contains dozens of morph targets which are crucial for my project. Here's what I've done so far:
1. I verified the morph targets in Blender (I can see all of them correctly when opening both the original .glb file and the converted .dae file in Blender).
2. Given that Xcode does not support the .glb file format directly, I converted the model to .dae, aiming to use it in my Xcode project.
3. After importing the .dae file into Xcode, I noticed that Xcode does not show any morph targets for the Wolf3D_Head node or any other node in the model.
I've already attempted using tools like ColladaMorphAdjuster and another version by JakeHoldom to adjust the .dae file, hoping Xcode would recognize the morph targets, but it didn't resolve the issue. My question is: how can I make Xcode recognize and display the morph targets present in the .dae file exported from Blender? Is there a specific process or tool I need to use to ensure Xcode properly imports all the morph target information from a .dae file? Tools tried: https://github.com/JonAllee/ColladaMorphAdjuster, https://github.com/JakeHoldom/ColladaMorphAdjuster. Thanks in advance!
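One way to check what actually survives the conversion, as a hedged diagnostic sketch: load the asset at runtime and list which nodes carry an SCNMorpher. The file name is hypothetical; the API is standard SceneKit.

```swift
import SceneKit

// Diagnostic sketch: print every node that has an SCNMorpher and how many
// targets it carries. If nothing shows up here, the morph data was already
// lost before Xcode's editor gets involved. "model.dae" is a placeholder name.
func dumpMorphTargets(sceneNamed name: String = "model.dae") {
    guard let scene = SCNScene(named: name) else {
        print("Could not load \(name)")
        return
    }
    scene.rootNode.enumerateHierarchy { node, _ in
        if let morpher = node.morpher {
            print("\(node.name ?? "<unnamed>"): \(morpher.targets.count) morph targets")
        }
    }
}
```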
Replies: 1 · Boosts: 1 · Views: 487 · Activity: Mar ’24
How to make an entity move in a RealityView so that collisions can be detected
I'm trying to detect when two entities collide. The following code shows a very basic set-up. How do I get the upperSphere to move? If I set its physicsBody.mode to .dynamic it moves with gravity (and the collision is reported), but when in kinematic mode it doesn't respond to the impulse:

```swift
import SwiftUI
import RealityKit

struct CollisionView: View {
    @State var subscriptions: [EventSubscription] = []

    var body: some View {
        RealityView { content in
            let upperSphere = ModelEntity(mesh: .generateSphere(radius: 0.04))
            let lowerSphere = ModelEntity(mesh: .generateSphere(radius: 0.04))

            upperSphere.position = [0, 2, -2]
            upperSphere.physicsBody = .init()
            upperSphere.physicsBody?.mode = .kinematic
            upperSphere.physicsMotion = PhysicsMotionComponent()
            upperSphere.generateCollisionShapes(recursive: false)

            lowerSphere.position = [0, 1, -2]
            lowerSphere.physicsBody = .init()
            lowerSphere.physicsBody?.mode = .static
            lowerSphere.generateCollisionShapes(recursive: false)

            let sub = content.subscribe(to: CollisionEvents.Began.self, on: nil) { _ in
                print("Collision!")
            }
            subscriptions.append(sub)

            content.add(upperSphere)
            content.add(lowerSphere)

            Task {
                try? await Task.sleep(for: .seconds(2))
                print("Impulse applied")
                upperSphere.applyLinearImpulse([0, -1, 0], relativeTo: nil)
            }
        }
    }
}
```
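For reference, a hedged sketch of how a kinematic body is typically driven: by setting a velocity (or moving it directly) rather than applying impulses. Whether that is the intended fix for this collision set-up is the open question, and the value below is illustrative.

```swift
import RealityKit

// Sketch: a kinematic body is moved by the app rather than by forces, so
// instead of applyLinearImpulse one would typically drive it through
// PhysicsMotionComponent's velocity.
func startFalling(_ entity: ModelEntity) {
    entity.physicsMotion?.linearVelocity = [0, -0.5, 0]   // metres per second, straight down
}
```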
Replies: 3 · Boosts: 0 · Views: 389 · Activity: May ’24
Store USDZ with SwiftData
I am trying to store usdz files with SwiftData for now. I am converting the usdz to Data, then storing it with SwiftData. My model:

```swift
import Foundation
import SwiftData
import SwiftUI

@Model
class Item {
    var name: String
    @Attribute(.externalStorage) var usdz: Data? = nil
    var id: String

    init(name: String, usdz: Data? = nil) {
        self.id = UUID().uuidString
        self.name = name
        self.usdz = usdz
    }
}
```

My function to convert the usdz to Data (I am currently using a local usdz just to test whether this is going to work):

```swift
func usdzData() -> Data? {
    do {
        guard let usdzURL = Bundle.main.url(forResource: "tv_retro", withExtension: "usdz") else {
            fatalError("Unable to find USDZ file in the bundle.")
        }
        let usdzData = try Data(contentsOf: usdzURL)
        return usdzData
    } catch {
        print("Error loading USDZ file: \(error)")
    }
    return nil
}
```

Loading the items:

```swift
@Query private var items: [Item]
...
var body: some View {
    ...
    ForEach(items) { item in
        HStack {
            Model3D(?????) { model in
                model
                    .resizable()
                    .scaledToFit()
            } placeholder: {
                ProgressView()
            }
        }
    }
    ...
}
```

How can I load the Model3D? I have tried Model3D(data: item.usdz), which gives me the errors "Cannot convert value of type '[Item]' to expected argument type 'Binding<C>'" and "Generic parameter 'C' could not be inferred". Both errors are reported on the ForEach. I am able to print the content inside item:

```swift
ForEach(items) { item in
    HStack {
        Text("\(item.name)")
        Text("\(item.usdz)")
    }
}
```

This works fine for me; item.usdz prints something like Optional(10954341 bytes). I would like to know two things: Is this the correct way to save usdz files into SwiftData, or should I use FileManager, and if so, how should I do that? Also, how can I get the usdz out of storage (SwiftData) and use it in Model3D?
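One approach that may work, sketched under the assumption that Model3D loads from a URL rather than from Data directly: write the stored bytes to a temporary file once and pass that file URL to Model3D. The helper name is mine, not an existing API.

```swift
import SwiftUI
import RealityKit

// Sketch: bridge SwiftData-stored bytes to Model3D via a temporary file.
// `usdzFileURL(for:)` is illustrative; `Item` is the SwiftData model above.
func usdzFileURL(for item: Item) -> URL? {
    guard let data = item.usdz else { return nil }
    let url = FileManager.default.temporaryDirectory
        .appendingPathComponent(item.id)
        .appendingPathExtension("usdz")
    if !FileManager.default.fileExists(atPath: url.path) {
        try? data.write(to: url)   // write once, reuse on later loads
    }
    return url
}

struct ItemModelView: View {
    let item: Item

    var body: some View {
        if let url = usdzFileURL(for: item) {
            Model3D(url: url) { model in
                model
                    .resizable()
                    .scaledToFit()
            } placeholder: {
                ProgressView()
            }
        }
    }
}
```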
Replies: 3 · Boosts: 0 · Views: 355 · Activity: May ’24
Game rejected due to Bitcoin logo - Copycats
Hi everyone, last month my endless runner game set in a crypto world was rejected for using the Bitcoin logo, labeled as a "Copycats" issue. We were asked to remove the logo and Bitcoin prices, even though the Bitcoin logo is public domain. I noticed other Bitcoin games on the App Store were updated recently without any problems, and I couldn't find any rules against using Bitcoin in the guidelines. When I appealed, they just told me to remove it again but didn't explain why. We've updated our game several times before, and only this small bug-fix release was rejected. Any advice on what the next steps are? Has anyone experienced this? Cheers
Replies: 0 · Boosts: 0 · Views: 386 · Activity: Mar ’24