Delve into the world of graphics and game development. Discuss creating stunning visuals and optimizing game mechanics, and share resources for game developers.

All subtopics

App using MetalKit creates many IOSurfaces in rapid succession, causing MTKView to freeze and app to hang
I've got an iOS app that uses MetalKit to display raw video frames coming in from a network source. I read the pixel data from the packets into a single MTLTexture, several rows at a time, and the texture is drawn into an MTKView each time a frame has been completely received over the network. The app works, but only for several seconds (a seemingly random duration) before the MTKView freezes, even though packets are still being received. Watching the debugger while my app was running revealed that the display freeze coincided with a large spike in memory. The memory profile in Instruments showed that the spike came from the rapid creation of many IOSurfaces and IOAccelerators. Profiling CPU usage shows that CAMetalLayerPrivateNextDrawableLocked is what runs during this rapid creation of surfaces. What does this function do? Being a complete newbie to iOS programming as a whole, I wonder if this issue comes from a misuse of the MetalKit library. Below is the code that I'm using to render the video frames:

```swift
class MTKViewController: UIViewController, MTKViewDelegate {
    /// Metal texture to be drawn whenever the view controller is asked to render its view.
    private var metalView: MTKView!
    private var device = MTLCreateSystemDefaultDevice()
    private var commandQueue: MTLCommandQueue?
    private var renderPipelineState: MTLRenderPipelineState?
    private var texture: MTLTexture?
    private var networkListener: NetworkListener!
    private var textureGenerator: TextureGenerator!

    override public func loadView() {
        super.loadView()
        assert(device != nil, "Failed creating a default system Metal device. Please, make sure Metal is available on your hardware.")
        initializeMetalView()
        initializeRenderPipelineState()
        networkListener = NetworkListener()
        textureGenerator = TextureGenerator(width: streamWidth, height: streamHeight, bytesPerPixel: 4, rowsPerPacket: 8, device: device!)
        networkListener.start(port: NWEndpoint.Port(8080))
        networkListener.dataRecievedCallback = { data in
            self.textureGenerator.process(data: data)
        }
        textureGenerator.onTextureBuiltCallback = { texture in
            self.texture = texture
            self.draw(in: self.metalView)
        }
        commandQueue = device?.makeCommandQueue()
    }

    public func mtkView(_ view: MTKView, drawableSizeWillChange size: CGSize) {
        /// need implement?
    }

    public func draw(in view: MTKView) {
        guard let texture = texture, let _ = device else { return }
        let commandBuffer = commandQueue!.makeCommandBuffer()!
        guard let currentRenderPassDescriptor = metalView.currentRenderPassDescriptor,
              let currentDrawable = metalView.currentDrawable,
              let renderPipelineState = renderPipelineState else { return }
        currentRenderPassDescriptor.renderTargetWidth = streamWidth
        currentRenderPassDescriptor.renderTargetHeight = streamHeight
        let encoder = commandBuffer.makeRenderCommandEncoder(descriptor: currentRenderPassDescriptor)!
        encoder.pushDebugGroup("RenderFrame")
        encoder.setRenderPipelineState(renderPipelineState)
        encoder.setFragmentTexture(texture, index: 0)
        encoder.drawPrimitives(type: .triangleStrip, vertexStart: 0, vertexCount: 4, instanceCount: 1)
        encoder.popDebugGroup()
        encoder.endEncoding()
        commandBuffer.present(currentDrawable)
        commandBuffer.commit()
    }

    private func initializeMetalView() {
        metalView = MTKView(frame: CGRect(x: 0, y: 0, width: streamWidth, height: streamWidth), device: device)
        metalView.delegate = self
        metalView.framebufferOnly = true
        metalView.colorPixelFormat = .bgra8Unorm
        metalView.contentScaleFactor = UIScreen.main.scale
        metalView.autoresizingMask = [.flexibleWidth, .flexibleHeight]
        view.insertSubview(metalView, at: 0)
    }

    /// Initializes the render pipeline state with a default vertex function mapping the texture
    /// to the view's frame and a simple fragment function returning the texture pixel's value.
    private func initializeRenderPipelineState() {
        guard let device = device, let library = device.makeDefaultLibrary() else { return }
        let pipelineDescriptor = MTLRenderPipelineDescriptor()
        pipelineDescriptor.rasterSampleCount = 1
        pipelineDescriptor.colorAttachments[0].pixelFormat = .bgra8Unorm
        pipelineDescriptor.depthAttachmentPixelFormat = .invalid
        /// Vertex function to map the texture to the view controller's view
        pipelineDescriptor.vertexFunction = library.makeFunction(name: "mapTexture")
        /// Fragment function to display texture's pixels in the area bounded by vertices of `mapTexture` shader
        pipelineDescriptor.fragmentFunction = library.makeFunction(name: "displayTexture")
        do {
            renderPipelineState = try device.makeRenderPipelineState(descriptor: pipelineDescriptor)
        } catch {
            assertionFailure("Failed creating a render state pipeline. Can't render the texture without one.")
            return
        }
    }
}
```

My question is simply: what gives?
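Editor's note, not from the thread: judging by its name, CAMetalLayerPrivateNextDrawableLocked is the private path CAMetalLayer takes when the next drawable is requested, and requesting drawables faster than the display can present them (here, once per network frame from the texture callback, off the main thread) is one plausible way to force the layer to keep allocating IOSurface-backed drawables. A minimal sketch of the usual MTKView pattern for externally driven content, assuming the rest of the controller above stays the same: pause the view's internal timer, enable redraw-on-demand, and only mark the view dirty on the main thread.

```swift
// Sketch only: configure the MTKView so it draws at most once per display update,
// triggered by setNeedsDisplay(), instead of being driven directly by network callbacks.
private func initializeMetalView() {
    metalView = MTKView(frame: view.bounds, device: device)
    metalView.delegate = self
    metalView.isPaused = true                 // stop the internal display timer
    metalView.enableSetNeedsDisplay = true    // draw only when the view is marked dirty
    // ... other configuration (pixel format, framebufferOnly, autoresizing) as above ...
    view.insertSubview(metalView, at: 0)
}

// In the texture callback, hop to the main thread and request a redraw rather than
// calling draw(in:) directly from the network thread; MTKView then calls the delegate.
textureGenerator.onTextureBuiltCallback = { [weak self] texture in
    DispatchQueue.main.async {
        self?.texture = texture
        self?.metalView.setNeedsDisplay()
    }
}
```

With this arrangement the delegate's draw(in:) is only entered from the view's own display cycle, so currentDrawable is requested at a bounded rate.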
Replies: 0 · Boosts: 0 · Views: 242 · Activity: 3w
Can I get a point on a texture in RealityKit on visionOS?
Hi, I am currently considering porting my AR game from SceneKit to RealityKit so it appears in 3D on visionOS, but one crucial question in deciding whether it can even be ported is this: I need a tap on a 3D model that is then translated into a tap on the texture of that model. On visionOS this would be driven by the person's gaze, so the question is whether I can get the point on the texture the user is looking at (optimally continuously, so I can have a hover effect, but if not, at least when they tap their fingers together). If that is not possible, is it possible to touch a RealityKit object and get the location of that touch on the texture? All the best, Christoph
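Editor's note, not from the thread: visionOS does not expose where the user is looking (gaze stays private; HoverEffectComponent can highlight an entity under gaze but reports no coordinates), so the earliest point an app gets a location is the tap itself. A minimal sketch, assuming the model entity has CollisionComponent and InputTargetComponent and assuming the common visionOS gesture-conversion pattern; the gesture yields a 3D point, not a UV, so mapping to texture coordinates would still need your own lookup against the mesh data.

```swift
import SwiftUI
import RealityKit

// Sketch: receive a spatial tap on an entity and convert the hit location into
// the entity's local space. Deriving a texture coordinate from that point would
// require intersecting your own copy of the mesh and interpolating the hit
// triangle's UVs; RealityKit does not return UVs directly.
struct TapToTexturePointView: View {
    let model: Entity   // hypothetical entity, provided elsewhere, with collision + input target

    var body: some View {
        RealityView { content in
            content.add(model)
        }
        .gesture(
            SpatialTapGesture()
                .targetedToEntity(model)
                .onEnded { value in
                    // Tap location in the scene's coordinate space.
                    let scenePoint = value.convert(value.location3D, from: .local, to: .scene)
                    // Bring it into the tapped entity's local space (from: nil = world space).
                    let localPoint = value.entity.convert(position: scenePoint, from: nil)
                    print("Tapped \(value.entity.name) at local position \(localPoint)")
                }
        )
    }
}
```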
Replies: 4 · Boosts: 0 · Views: 285 · Activity: 3w
Error: apple/apple/game-porting-toolkit 1.1 did not build
I was trying to update GPTK, so I removed the old one (installed with Xcode 15.0) and reinstalled it, but ran into an issue:

Error: apple/apple/game-porting-toolkit 1.1 did not build
Logs:
/Users/CASEM/Library/Logs/Homebrew/game-porting-toolkit/00.options.out
/Users/CASEM/Library/Logs/Homebrew/game-porting-toolkit/01.configure
/Users/CASEM/Library/Logs/Homebrew/game-porting-toolkit/01.configure.cc
/Users/CASEM/Library/Logs/Homebrew/game-porting-toolkit/02.make
/Users/CASEM/Library/Logs/Homebrew/game-porting-toolkit/wine64-build
If reporting this issue please do so to (not Homebrew/brew or Homebrew/homebrew-core): apple/apple
Error: Your Xcode (15.0) is outdated. Please update to Xcode 15.4 (or delete it). Xcode can be updated from the App Store.

So I upgraded Xcode to 15.4 via the App Store and still encountered the same "apple/apple/game-porting-toolkit 1.1 did not build" error with the same log paths. I've read some posts that say to use 15.1, but I still get the same error report. What should I do next? Any help would be appreciated.
Replies: 0 · Boosts: 0 · Views: 377 · Activity: 3w
How to reduce draw call count in RealityKit
I'm trying to render a large number of entities. It looks like each ModelEntity causes a draw call, even if you share the ModelComponent so each Entity shares the mesh and materials. I tried to use the MeshInstanceCollection inside MeshResource to generate a large number of objects in the scene; the code works and draws many objects, but the draw count is still one call per instance. This seems strange: I would assume it should be only one draw call for the single entity, since I have specified instancing in the resource. Has anybody else successfully used instancing in RealityKit to draw a large number of Entities (maybe around 10,000), or drawn that many items at 60 fps any other way? Here is some sample code that draws 100 cubes using instancing but still causes 100 draw calls:

```swift
func instanceTest(scene: RealityKit.Scene) {
    var resource = MeshResource.generateBox(size: 0.2)
    var contents = MeshResource.Contents()
    contents.models = resource.contents.models

    var arr: [MeshResource.Instance] = []
    var matrix = matrix_identity_float4x4
    matrix[3, 0] = 0.5
    for i in 0..<100 {
        let inst = MeshResource.Instance(id: "\(i)", model: "MeshModel", at: matrix)
        arr.append(inst)
    }
    contents.instances = MeshInstanceCollection(arr)
    let updatedResource = try? MeshResource.generate(from: contents)

    let unlitMaterial = UnlitMaterial(color: .red)
    let modelEntity = ModelEntity(
        mesh: updatedResource!,
        materials: [unlitMaterial]
    )

    let anchor = AnchorEntity()
    anchor.addChild(modelEntity)
    scene.addAnchor(anchor)
}
```
Replies: 2 · Boosts: 0 · Views: 242 · Activity: 3w
Couldn't link game on Game Center
My Game Center account is linked on both my iPad and iPhone, but the activity in Game Center doesn't update when I play games on my iPad; it only reflects the games on my phone. So none of the games I've downloaded on my iPad are linked to Game Center. Is there any solution so that the games I've downloaded on my iPad can link to Game Center? I have tried turning it off and on again on both my iPad and iPhone, but it is still only linked to the phone.
Replies: 0 · Boosts: 0 · Views: 171 · Activity: 3w
How to create screen-space meshes selectively in RealityKit AR Mode Using New OrthographicCameraComponent?
I'd like to create meshes in RealityKit (AR mode on iPad) in screen space, i.e. for UI. I noticed a lot of useful new functionality in RealityKit for the next OS versions, including the OrthographicCameraComponent: https://developer.apple.com/documentation/realitykit/orthographiccameracomponent?changes=_3 I think this would help, but I need AR world tracking as well as a regular perspective camera to work with the 3D elements. Firstly, can I have a camera attached selectively to a few entities, just for those entities? This could be the orthographic camera. Secondly, can I make it so those entities are always rendered in front, in screen space? (They'd need to follow the camera.) If I can't have multiple cameras, what can be done in that case? Is it actually better to use a completely different view / API for layering on top of RealityKit? I would much rather keep everything in RealityKit, however, for simplicity.
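Editor's note, not from the thread: while the multi-camera question stays open, one workaround that stays inside RealityKit on iPad is to parent UI entities to a camera anchor so they follow the AR camera. A minimal sketch, assuming an ARView session; this is a camera-locked layer, not a true orthographic screen-space pass, and it does not by itself guarantee the UI draws in front of closer scene geometry.

```swift
import RealityKit
import UIKit

// Sketch: attach UI entities to AnchorEntity(.camera) so they track the AR camera.
func addCameraLockedUI(to arView: ARView) {
    let cameraAnchor = AnchorEntity(.camera)          // follows the AR camera's transform
    let panel = ModelEntity(
        mesh: .generatePlane(width: 0.2, height: 0.1),
        materials: [UnlitMaterial(color: .white)]
    )
    panel.position = [0, 0, -0.5]                     // half a meter in front of the camera
    cameraAnchor.addChild(panel)
    arView.scene.addAnchor(cameraAnchor)
}
```

For a guaranteed always-on-top layer, overlaying a UIKit/SwiftUI view above the ARView remains the simpler option if leaving RealityKit is acceptable.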
Replies: 0 · Boosts: 0 · Views: 273 · Activity: 3w
Using a scene from Reality Composer Pro in an iOS app?
I am trying to establish a workflow using Reality Composer Pro to make scenes; I am grey-boxing a scene with primitives at the moment. I have set up a cube with a textured material and a simple spin animation. I am confused as to what I should be loading. I have created what I think is a scene asset in the package for the Reality Composer Pro project. Here is a code snippet:

```swift
struct ContentView: View {
    var body: some View {
        RealityView { content in
            do {
                let scene = try await ModelEntity(named: "HOF")
                content.add(scene)
            } catch {
                print("Error loading scene: \(error.localizedDescription)")
            }
        }
    }
}
```

Here is the project layout in Reality Composer Pro:
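Editor's note, not from the thread: Reality Composer Pro scenes are normally loaded as a plain Entity from the generated Swift package's bundle rather than with ModelEntity(named:), which only loads a single model file. A minimal sketch under that assumption; the module and bundle names below (RealityKitContent, realityKitContentBundle) are the defaults generated for a new package and depend on what the project's package is actually called.

```swift
import SwiftUI
import RealityKit
import RealityKitContent   // assumed name of the Reality Composer Pro package

struct ContentView: View {
    var body: some View {
        RealityView { content in
            do {
                // Load the "HOF" scene from the package's generated bundle.
                let scene = try await Entity(named: "HOF", in: realityKitContentBundle)
                content.add(scene)
            } catch {
                print("Error loading scene: \(error.localizedDescription)")
            }
        }
    }
}
```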
Replies: 1 · Boosts: 0 · Views: 281 · Activity: 3w
SceneKit or RealityKit for Non-AR Game Development
Hi everyone, I'm choosing a framework for developing a game that doesn't involve augmented reality (AR) and I'm unsure whether to use SceneKit or RealityKit. I would like to hear from Apple engineers on this matter. Which of these frameworks is better suited for creating non-AR games? Additionally, I'd like to know if it's possible to disable AR in RealityKit using the updated RealityView? Thanks in advance for your insights and recommendations!
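Editor's note, not from the thread: on iOS, RealityKit can already render without AR by creating an ARView in nonAR camera mode and adding a virtual PerspectiveCamera; whether the updated RealityView offers an equivalent switch is exactly the open question above, so this sketch covers only the ARView path.

```swift
import RealityKit
import UIKit

// Sketch: a non-AR RealityKit view on iOS using ARView's .nonAR camera mode
// with an explicit virtual camera, instead of the device camera feed.
func makeNonARView(frame: CGRect) -> ARView {
    let arView = ARView(frame: frame,
                        cameraMode: .nonAR,
                        automaticallyConfigureSession: false)

    let camera = PerspectiveCamera()
    camera.look(at: .zero, from: [0, 1, 3], relativeTo: nil)

    let cameraAnchor = AnchorEntity(world: .zero)
    cameraAnchor.addChild(camera)
    arView.scene.addAnchor(cameraAnchor)
    return arView
}
```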
Replies: 2 · Boosts: 0 · Views: 448 · Activity: 3w
Reality Composer Pro inline documentation comments
The WWDC24 video "Build a spatial drawing app with RealityKit" https://developer.apple.com/wwdc24/10104 at 12:04 includes a slide showing a Reality Composer Pro shader graph that features wonderful inline documentation comment boxes: Are shader graph inline comments a new feature that Reality Composer Pro supports? This would be extraordinarily useful, as complex shader graphs can be challenging to decipher. If so, how are inline shader graph comments created in Reality Composer Pro?
Replies: 1 · Boosts: 0 · Views: 278 · Activity: 3w
copyFromBuffer offset and size working even when not multiple of 4
Hi, the copyFromBuffer documentation states that on macOS, sourceOffset, destinationOffset, and size "needs to be a multiple of 4, but can be any value in iOS and tvOS". However, I have noticed that, at least on my M2 Max, this limitation does not seem to exist: there are no warnings and the copy works correctly regardless of the offset value. I'm curious to know if this is something that should still be avoided. Is the multiple-of-4 limitation reserved for non-Apple-Silicon devices, so that the note can be ignored on Apple Silicon? I ask because I am a contributor to Metal.jl, and I recently noticed that our tests pass even when copying using copyWithBuffer with offsets and sizes that are not multiples of 4. If that could cause issues or correctness problems, we would need to fix it. Thank you. Christian
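Editor's note, not from the thread: for readers unfamiliar with the call in question, a minimal Swift sketch of the blit copy (the post concerns Metal.jl, but the underlying Metal API is the same). It only shows where the sourceOffset, destinationOffset, and size parameters from the documentation note appear; it does not settle whether non-multiple-of-4 values are officially supported on Apple silicon.

```swift
import Metal

// Sketch: a buffer-to-buffer blit copy with explicit offsets and size.
func copyBytes(device: MTLDevice,
               from source: MTLBuffer, to destination: MTLBuffer,
               sourceOffset: Int, destinationOffset: Int, size: Int) {
    guard let queue = device.makeCommandQueue(),
          let commandBuffer = queue.makeCommandBuffer(),
          let blit = commandBuffer.makeBlitCommandEncoder() else { return }

    blit.copy(from: source, sourceOffset: sourceOffset,
              to: destination, destinationOffset: destinationOffset,
              size: size)
    blit.endEncoding()
    commandBuffer.commit()
    commandBuffer.waitUntilCompleted()
}
```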
Replies: 0 · Boosts: 0 · Views: 189 · Activity: 3w
Unable to create PhysicsJoint using Entity's Geometric Pin
Hello, I'm trying to attach one entity to another entity via the new PhysicsFixedJoint. I have a USDZ that contains a skeletal pose which exposes the joints as pins, as desired. However, when I access a pin, it returns a GeometricPin instead of the EntityGeometricPin you would expect, and I can't use the returned GeometricPin to create the joint. Am I missing something? Shouldn't accessing the Entity's pins object return EntityGeometricPins instead of GeometricPin? Here is the code sample:

```swift
var body: some View {
    RealityView { content in
        if let scene = try? await Entity(named: "Scene", in: untitledBundle) {
            content.add(scene)

            let attack = try! Entity.load(named: "Attack01_SingleSword")
            let anchor = scene.findEntity(named: "Root")
            anchor?.addChild(attack)

            let sword = try! Entity.load(named: "OHS08_Sword")
            anchor?.addChild(sword)

            if let swordEntity = findModelComponentEntity(entity: sword) {
                let swordPin = swordEntity.pins.set(
                    named: "test",
                    position: SIMD3<Float>.zero
                )
                if let attackEntity = findModelComponentEntity(entity: attack) {
                    // This is returning GeometricPin instead of the EntityGeometricPin that the "pins" object contains
                    let attackPin = attackEntity.pins["root/pelvis/spine_01/spine_02/spine_03/clavicle_r/upperarm_r/lowerarm_r/hand_r/weapon_r"]!
                    let joint = PhysicsFixedJoint(
                        pin0: swordPin,
                        pin1: attackPin // This is a compile error since it is not an EntityGeometricPin type
                    )
                    try! joint.addToSimulation()
                }
            }
        }
    }
}
```
Replies: 1 · Boosts: 0 · Views: 247 · Activity: 3w
Metal and Swift Concurrency
Hi, introducing Swift Concurrency to my Metal app has been a bit challenging, as Swift Concurrency is limited by the cooperative thread pool. GPU work is obviously not CPU-bound and can block forward progress, especially when using waitUntilCompleted on the command buffer. For concurrent render work this has the potential of underutilizing the CPU and even creating deadlocks. My question is: what is the Metal team's general recommendation when it comes to concurrency? It seems to me that Dispatch or OperationQueues are still the preferred way for Metal-bound tasks in order to gain maximum performance. To integrate with Swift Concurrency, my idea is to use continuations that kick off render jobs via Dispatch or queues. Would this be the best solution to bridge async tasks with Metal work? Thanks!
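Editor's note, not from the thread: a minimal sketch of the continuation-bridging idea the post describes, using the command buffer's completion handler so a task can await GPU completion without parking a cooperative-pool thread in waitUntilCompleted. Whether this is the recommended pattern is exactly the question being asked; this only illustrates the mechanics.

```swift
import Metal

// Sketch: await command-buffer completion via a continuation instead of blocking.
extension MTLCommandBuffer {
    func commitAndWaitAsync() async {
        await withCheckedContinuation { (continuation: CheckedContinuation<Void, Never>) in
            // The completion handler fires on a Metal-owned thread once the GPU finishes.
            self.addCompletedHandler { _ in
                continuation.resume()
            }
            self.commit()
        }
    }
}

// Usage: encode work as usual, then `await commandBuffer.commitAndWaitAsync()` inside a Task.
```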
Replies: 4 · Boosts: 0 · Views: 279 · Activity: 3w
USDZ with vertex color
Hello, I have a USDC file with vertex color (WITHOUT textures), and it displays perfectly in Preview. If I package it in a zip (without compression) and rename the resulting file to USDZ, I can see it without any issues in AVP and Mac. However, if I send it to an iPhone, the vertex color does not display. Is there anything else I need to do besides packaging the USDC without compression in a ZIP? Thank you very much.
Replies: 2 · Boosts: 0 · Views: 196 · Activity: 3w
Bug? Xcode 16 macOS 15 SDK on macOS 14.5 causes Metal shader colors to be wrong
I've been upgrading Xcode consistently for years and have never seen Metal shaders behave differently from one version to another until now. On macOS 14.5, Xcode 16 beta, suddenly several color outputs turn out completely black where there should be color. All validation is on and nothing seems to be wrong (and hasn't been since maybe Xcode version 11). I've attached two screens. The first is the normal color scheme, the second is in Xcode 16. The settings are the exact same. Normal: Buggy with black + transparent colors (so it seems like either colors are overflowing or are all 0s)? Before I file a bug report or code level request, may I have some thoughts on how to debug this? The only clue I have is that I'm using bindless to multiply color texture samples with color values from my vertex struct. But it still fails even if I use hard-coded values for the texture samples, meaning somehow the color values are not being sent to the shader correctly? This is the most stable part of my rendering pipeline, so I'm surprised if the issue is there. Thank you.
Replies: 1 · Boosts: 0 · Views: 348 · Activity: 3w
Turning on ARConfiguration.providesAudioData = true and adding a ModelEntity whose material is a VideoMaterial(avPlayer: player) with a video that contains audio to an ARView: why does the video not play properly?
```swift
import SwiftUI
import RealityKit
import ARKit
import AVFoundation

struct ContentView: View {
    var body: some View {
        ARViewContainer().edgesIgnoringSafeArea(.all)
    }
}

struct ARViewContainer: UIViewRepresentable {
    func makeUIView(context: Context) -> ARView {
        let arView = ARView(frame: .zero)
        arView.session.delegate = context.coordinator
        let worldConfig = ARWorldTrackingConfiguration()
        worldConfig.planeDetection = .horizontal
        // worldConfig.providesAudioData = true // open here -----> Error:
        arView.session.run(worldConfig)
        addTestEntity(arView: arView)
        return arView
    }

    func updateUIView(_ uiView: ARView, context: Context) {}

    func makeCoordinator() -> Coordinator { Coordinator() }

    class Coordinator: NSObject, ARSessionDelegate, ARSessionObserver {
        func session(_ session: ARSession, didOutputAudioSampleBuffer audioSampleBuffer: CMSampleBuffer) {
        }
    }
}

func addTestEntity(arView: ARView) {
    let mesh = MeshResource.generatePlane(width: 0.5, depth: 0.35)
    guard let url = Bundle.main.url(forResource: "videoplayback", withExtension: "mp4") else { return }
    let player = AVPlayer(url: url)
    let videoMaterial = VideoMaterial(avPlayer: player)
    let model = ModelEntity(mesh: mesh, materials: [videoMaterial])
    model.transform.translation.y = 0.05
    let anchor = AnchorEntity(.plane(.horizontal, classification: .any, minimumBounds: SIMD2<Float>(0.2, 0.2)))
    anchor.children.append(model)
    player.play()
    arView.scene.anchors.append(anchor)
}
```

Error:

failed to update STS state: Error Domain=com.apple.STS-N Code=1396929899 "Error: failed to signal change" UserInfo={NSLocalizedDescription=Error: failed to signal change}
failed to update STS state: Error Domain=com.apple.STS-N Code=1396929899 "Error: failed to signal change" UserInfo={NSLocalizedDescription=Error: failed to signal change}
......
ARSession <0x125d88040>: did fail with error: Error Domain=com.apple.arkit.error Code=102 "Required sensor failed." UserInfo={NSLocalizedFailureReason=A sensor failed to deliver the required input., NSUnderlyingError=0x302922dc0 {Error Domain=AVFoundationErrorDomain Code=-11819 "Cannot Complete Action" UserInfo={NSLocalizedDescription=Cannot Complete Action, NSLocalizedRecoverySuggestion=Try again later.}}, NSLocalizedRecoverySuggestion=Make sure that the application has the required privacy settings., NSLocalizedDescription=Required sensor failed.}

iOS 17.5.1, Xcode 15.4
Replies: 2 · Boosts: 0 · Views: 285 · Activity: 3w
Multiplayer testing with the new TabletopKit
Hey, I just watched the session introducing the new TabletopKit framework and started investigating it; it looks fantastic. I'm looking at how multiplayer can be tested. I found info about testing on multiple real devices here: https://developer.apple.com/documentation/tabletopkit/tabletopkitsample#Start-a-multiplayer-game-on-devices I wanted to ask about options for testing multiplayer on simulators, maybe with a simulator plus a single real device. Unfortunately, having multiple Vision Pros for indie development is unrealistic, so I hope there are ways to do it.
Replies: 2 · Boosts: 0 · Views: 384 · Activity: 3w