Delve into the world of graphics and game development. Discuss creating stunning visuals and optimizing game mechanics, and share resources for game developers.

Posts under Graphics & Games topic

Post | Replies | Boosts | Views | Activity

GameKit Turn Based Matches Push Notifications
I'm developing a game that supports GameKit turn-based matches. What I don't understand is this: is tapping the Game Center push notification the only way for the GKTurnBasedEventListener to trigger? What if someone misses the push notification (swiping it away by accident, or something like that) but still wants to join? Is there some inbox somewhere where the pending messages can be seen or fetched? It was also mentioned in a very old WWDC video (from 2013, which I think is the latest with information about turn-based matches) that the notification also includes a badge for the icon. However, I don't understand how to implement that. Is there any documentation for it?
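The post doesn't include code, so here is a minimal, hypothetical sketch (names are illustrative) of the two pieces the question touches on: registering a GKLocalPlayerListener once the local player is authenticated, and fetching pending turn-based matches directly with GKTurnBasedMatch.loadMatches so a missed push notification isn't the only entry point.

import GameKit

// A sketch, not the poster's code: register a turn-based event listener after
// authentication, and separately load pending matches as an "inbox"-style check.
final class TurnEventObserver: NSObject, GKLocalPlayerListener {
    func player(_ player: GKPlayer, receivedTurnEventFor match: GKTurnBasedMatch, didBecomeActive: Bool) {
        // Fires for turn events; didBecomeActive is true when the app was
        // brought to the foreground because of the event.
        print("Turn event for match \(match.matchID)")
    }
}

let observer = TurnEventObserver() // must stay retained for callbacks to arrive

GKLocalPlayer.local.authenticateHandler = { viewController, error in
    guard GKLocalPlayer.local.isAuthenticated else { return }
    GKLocalPlayer.local.register(observer)

    // Fetch all matches the local player participates in, then inspect whose
    // turn it is, independently of any push notification.
    GKTurnBasedMatch.loadMatches { matches, error in
        for match in matches ?? []
        where match.currentParticipant?.player?.gamePlayerID == GKLocalPlayer.local.gamePlayerID {
            print("Pending turn in match \(match.matchID)")
        }
    }
}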
3
0
620
2w
ARView ignores multi-touch events
Hi, how do I enable multi-touch on ARView? The touch functions (touchesBegan, touchesMoved, ...) seem to only handle one touch at a time. In order to handle multiple touches at a time with ARView, I have to either:
- use SwiftUI .simultaneousGesture on top of an ARView representable, or
- position a UIView on top of ARView to capture touches and do hit testing by passing a reference to ARView.
Expected behavior: ARView should capture all touches via touchesBegan/Moved/Ended/Cancelled. Here is what I tried, on iOS 26.1 and macOS 26.1.

ARView Multitouch
The setup below is a minimal ARView presented by SwiftUI, with touch events handled inside ARView. Multi-touch doesn't work with this setup. Note that multi-touch wouldn't work either if the ARView were presented with a UIViewController instead of SwiftUI.

import RealityKit
import SwiftUI

struct ARViewMultiTouchView: View {
    var body: some View {
        ZStack {
            ARViewMultiTouchRepresentable()
                .ignoresSafeArea()
        }
    }
}

#Preview {
    ARViewMultiTouchView()
}

// MARK: Representable ARView

struct ARViewMultiTouchRepresentable: UIViewRepresentable {
    func makeUIView(context: Context) -> ARView {
        let arView = ARViewMultiTouch(frame: .zero)

        let anchor = AnchorEntity()
        arView.scene.addAnchor(anchor)

        let boxWidth: Float = 0.4
        let boxMaterial = SimpleMaterial(color: .red, isMetallic: false)
        let box = ModelEntity(mesh: .generateBox(size: boxWidth), materials: [boxMaterial])
        box.name = "Box"
        box.components.set(CollisionComponent(shapes: [.generateBox(width: boxWidth, height: boxWidth, depth: boxWidth)]))
        anchor.addChild(box)

        return arView
    }

    func updateUIView(_ uiView: ARView, context: Context) { }
}

// MARK: ARView

class ARViewMultiTouch: ARView {
    required init(frame: CGRect) {
        super.init(frame: frame)

        /// Enable multi-touch
        isMultipleTouchEnabled = true

        cameraMode = .nonAR
        automaticallyConfigureSession = false
        environment.background = .color(.gray)

        /// Disable gesture recognizers to not conflict with touch events
        /// But it doesn't fix the issue
        gestureRecognizers?.forEach { $0.isEnabled = false }
    }

    required dynamic init?(coder decoder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        for touch in touches {
            /// # Problem
            /// This should print for every new touch, up to 5 simultaneously on an iPhone (multi-touch)
            /// But it only fires for one touch at a time (single-touch)
            print("Touch began at: \(touch.location(in: self))")
        }
    }
}

Multitouch with an Overlay
This setup works, but it doesn't seem right. There must be a way to make ARView handle multi-touch directly, right?

import SwiftUI
import RealityKit

struct MultiTouchOverlayView: View {
    var body: some View {
        ZStack {
            MultiTouchOverlayRepresentable()
                .ignoresSafeArea()

            Text("Multi touch with overlay view")
                .font(.system(size: 24, weight: .medium))
                .foregroundStyle(.white)
                .offset(CGSize(width: 0, height: -150))
        }
    }
}

#Preview {
    MultiTouchOverlayView()
}

// MARK: Representable Container

struct MultiTouchOverlayRepresentable: UIViewRepresentable {
    func makeUIView(context: Context) -> UIView {
        /// The view that SwiftUI will present
        let container = UIView()

        /// ARView
        let arView = ARView(frame: container.bounds)
        arView.autoresizingMask = [.flexibleWidth, .flexibleHeight]
        arView.cameraMode = .nonAR
        arView.automaticallyConfigureSession = false
        arView.environment.background = .color(.gray)

        let anchor = AnchorEntity()
        arView.scene.addAnchor(anchor)

        let boxWidth: Float = 0.4
        let boxMaterial = SimpleMaterial(color: .red, isMetallic: false)
        let box = ModelEntity(mesh: .generateBox(size: boxWidth), materials: [boxMaterial])
        box.name = "Box"
        box.components.set(CollisionComponent(shapes: [.generateBox(width: boxWidth, height: boxWidth, depth: boxWidth)]))
        anchor.addChild(box)

        /// The view that will capture touches
        let touchOverlay = TouchOverlayView(frame: container.bounds)
        touchOverlay.autoresizingMask = [.flexibleWidth, .flexibleHeight]
        touchOverlay.backgroundColor = .clear
        /// Pass an arView reference to the overlay for hit testing
        touchOverlay.arView = arView

        /// Add views to the container.
        /// ARView goes in first, at the bottom.
        container.addSubview(arView)
        /// TouchOverlay goes in last, on top.
        container.addSubview(touchOverlay)

        return container
    }

    func updateUIView(_ uiView: UIView, context: Context) { }
}

// MARK: Touch Overlay View

/// A UIView to handle multi-touch on top of ARView
class TouchOverlayView: UIView {
    weak var arView: ARView?

    override init(frame: CGRect) {
        super.init(frame: frame)
        isMultipleTouchEnabled = true
        isUserInteractionEnabled = true
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        let totalTouches = event?.allTouches?.count ?? touches.count
        print("--- Touches Began --- (New: \(touches.count), Total: \(totalTouches))")

        for touch in touches {
            let location = touch.location(in: self)
            /// Hit testing.
            /// ARView and Touch View must be of the same size
            if let arView = arView {
                let entity = arView.entity(at: location)
                if let entity = entity {
                    print("Touched entity: \(entity.name)")
                } else {
                    print("Touched: none")
                }
            }
        }
    }

    override func touchesCancelled(_ touches: Set<UITouch>, with event: UIEvent?) {
        let totalTouches = event?.allTouches?.count ?? touches.count
        print("--- Touches Cancelled --- (Cancelled: \(touches.count), Total: \(totalTouches))")
    }
}
2
0
433
3w
macOS SwiftUI app with external 4K camera & sensors for Hospital Avatar: ARKit, MLX, and Thermal feasibility?
We are developing a standalone AI avatar application for hospital reception kiosks using a Mac mini (M2/M4). The app runs on SwiftUI + RealityKit, displays on a 75-inch monitor, and utilizes a USB-connected 4K camera and external sensors (LiDAR/mmWave). We have several technical concerns regarding the transition from iPadOS to macOS. Could you please provide insights on the following?

1. ARKit/Vision framework on macOS with an external camera. On iPadOS, ARKit provides robust face tracking. On macOS with an external USB 4K camera: Can we achieve real-time face tracking (expression/gaze/depth) with the Vision framework or ARKit comparable to iPadOS performance? Are there any specific limitations for accessing the Neural Engine via the Vision framework for real-time 4K video analysis on macOS?

2. Accessing external hardware (LiDAR/sensors) in the sandbox. We plan to connect external LiDAR and mmWave sensors (e.g., Akara) via USB/Bluetooth. Is it feasible to communicate with these custom drivers/devices within the App Sandbox environment? Would DriverKit be required, or can we use standard serial communication APIs?

3. On-device LLM (MLX) and thermals. We intend to run a local LLM (e.g., Llama 3 using the MLX framework) for offline conversation, alongside 3D rendering. With the M2/M4 Mac mini fan design, is there a risk of thermal throttling during 10+ hours of continuous operation (simultaneous Core ML + 3D rendering)? Is the Mac Studio recommended over the Mac mini for this thermal profile?

4. Long-running speech APIs. Are there any known issues (memory leaks, API limits) when using the Speech framework and AVSpeechSynthesizer continuously for over 10 hours daily?

5. 3D display output. Are there any macOS constraints on rendering a SwiftUI window in a specific 3D format (e.g., side-by-side) and outputting it via HDMI to a 3D digital signage display (fixed refresh rate/resolution)?

Thank you for your assistance.
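As a reference point for the Vision question, here is a minimal sketch (my own code with assumed names, assuming macOS 14+ and an external UVC camera exposed through AVFoundation) of running Vision face-landmark detection on frames from a USB camera. It is not a substitute for ARKit face tracking and says nothing about Neural Engine scheduling.

import AVFoundation
import Vision

// Sketch: capture frames from an external camera and run face-landmark detection.
final class FaceTracker: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    private let session = AVCaptureSession()
    private let output = AVCaptureVideoDataOutput()

    func start() throws {
        session.sessionPreset = .high
        // External USB cameras appear as .external devices on macOS 14+.
        let discovery = AVCaptureDevice.DiscoverySession(
            deviceTypes: [.external, .builtInWideAngleCamera],
            mediaType: .video, position: .unspecified)
        guard let camera = discovery.devices.first else { return }
        session.addInput(try AVCaptureDeviceInput(device: camera))
        output.setSampleBufferDelegate(self, queue: DispatchQueue(label: "camera.frames"))
        session.addOutput(output)
        session.startRunning()
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        let request = VNDetectFaceLandmarksRequest { request, _ in
            for face in (request.results as? [VNFaceObservation]) ?? [] {
                print("Face at \(face.boundingBox), roll: \(face.roll?.doubleValue ?? 0)")
            }
        }
        let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, orientation: .up)
        try? handler.perform([request])
    }
}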
0
0
245
2w
SpriteKit Offline Rendering with SKRenderer
Hi! I'd like to share a technical sample app, SKRenderer Demo. This app demonstrates:
- setting up SKRenderer
- recording SpriteKit scenes to image sequences
- recording SpriteKit scenes to video using IOSurface and AVFoundation
- applying Core Image filters
- exploring SpriteKit's simulation timing and physics determinism

Use case: record SpriteKit simulations as video or images for sharing and creating content. I explored several approaches, including the excellent view.texture(from:crop:) for live recording from SKView. The SKRenderer approach assumes recording happens asynchronously: you capture user interactions as commands during live interaction, then replay those commands through an offline render pass to generate the final output. I hope this helps others working on replay systems, simulation capture, or SpriteKit projects in general!
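The demo itself isn't shown here; the following is a rough sketch, under my own assumptions about sizes and structure, of what a single offline SKRenderer pass into a Metal texture can look like. A real recorder would reuse the renderer and render target across frames.

import SpriteKit
import Metal

// Sketch: advance an SKScene to a given time and render one frame offscreen.
func renderFrame(of scene: SKScene, at time: TimeInterval,
                 device: MTLDevice, commandQueue: MTLCommandQueue) -> MTLTexture? {
    let renderer = SKRenderer(device: device)
    renderer.scene = scene

    // Offscreen render target matching the scene size.
    let descriptor = MTLTextureDescriptor.texture2DDescriptor(
        pixelFormat: .bgra8Unorm,
        width: Int(scene.size.width), height: Int(scene.size.height),
        mipmapped: false)
    descriptor.usage = [.renderTarget, .shaderRead]
    guard let texture = device.makeTexture(descriptor: descriptor),
          let commandBuffer = commandQueue.makeCommandBuffer() else { return nil }

    let passDescriptor = MTLRenderPassDescriptor()
    passDescriptor.colorAttachments[0].texture = texture
    passDescriptor.colorAttachments[0].loadAction = .clear
    passDescriptor.colorAttachments[0].storeAction = .store

    // Advance the simulation to the requested time, then draw.
    renderer.update(atTime: time)
    renderer.render(withViewport: CGRect(origin: .zero, size: scene.size),
                    commandBuffer: commandBuffer,
                    renderPassDescriptor: passDescriptor)
    commandBuffer.commit()
    commandBuffer.waitUntilCompleted()
    return texture
}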
0
0
153
3w
TapGesture stops responding on ViewAttachmentComponent after disabling or removing and re-adding the Entity (visionOS 26)
Issue
When an Entity with a ViewAttachmentComponent is:
- disabled using isEnabled = false, or
- removed using removeFromParent()
and then enabled or added back again, the attached SwiftUI view is rendered correctly, but tap interactions stop working. Specifically:
- Button actions inside the attached view do not fire
- TapGesture closures on child views do not respond

Expected behavior
Tap interactions inside the attached view should continue to work after the Entity is re-enabled or re-added.

Actual behavior
After being disabled or removed once, all tap interactions stop responding.

Comparison
When displaying the same SwiftUI view using RealityViewAttachments, this issue does not occur. Removing and re-displaying the attachment still allows taps to work correctly.

Reproduction
The attached sample code reproduces the issue:
- a RealityView with an Entity that has a ViewAttachmentComponent
- the attached SwiftUI view contains a Toggle
- the toggle updates isEnabled on the Entity
- after toggling off and on, tap interactions stop responding

Environment
Xcode 26, visionOS 26

Question
Is this expected behavior of ViewAttachmentComponent, or a bug? Is there a recommended way to temporarily hide or disable an Entity with a ViewAttachmentComponent without breaking tap interactions?

import SwiftUI
import RealityKit

struct GestureTestView: View {
    @State var sampleEnabled = true
    @State var sampleEntity: Entity?

    var body: some View {
        RealityView { contents, attachments in
            // After deleting and re-displaying it, taps no longer respond.
            let sample = Entity(components: ViewAttachmentComponent(rootView: SampleView()))
            // Executed successfully
            //let sample = attachments.entity(for: "SampleView")!
            contents.add(sample)
            sample.position = [0, 1.2, -1]
            sampleEntity = sample

            let toggleButton = Entity(components: ViewAttachmentComponent(rootView: ToggleButtonView(isOn: $sampleEnabled)))
            contents.add(toggleButton)
            toggleButton.position = [0, 1, -1]
        } update: { _, _ in
            // run update closure
            print(sampleEnabled)
            // update sample entity enable
            sampleEntity?.isEnabled = sampleEnabled
        } attachments: {
            Attachment(id: "SampleView") {
                SampleView()
            }
        }
    }
}

struct ToggleButtonView: View {
    @Binding var isOn: Bool

    var body: some View {
        VStack {
            Toggle(isOn: $isOn) {
                Text("Toggle")
            }
        }
        .padding()
        .glassBackgroundEffect()
    }
}

struct SampleView: View {
    var body: some View {
        VStack {
            Button {
                print("Hello, World!")
            } label: {
                Text("Hello, World!")
                    .padding()
            }
        }
        .padding()
        .glassBackgroundEffect()
    }
}

#Preview(immersionStyle: .mixed) {
    GestureTestView()
}
2
0
405
3w
Metal and Swift Concurrency
Hi, introducing Swift Concurrency to my Metal app has been a bit challenging, since Swift Concurrency is limited by the cooperative thread pool. GPU work is obviously not CPU-bound and can block forward progress, especially when using waitUntilCompleted on the command buffer. For concurrent render work this has the potential of underutilizing the CPU and even creating deadlocks. My question is: what is the Metal team's general recommendation when it comes to concurrency? It seems to me that Dispatch or OperationQueues are still the preferred way for Metal-bound tasks in order to get maximum performance. To integrate with Swift Concurrency, my idea is to use continuations that kick off render jobs via Dispatch or queues. Would this be the best way to bridge async tasks with Metal work? Thanks!
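For what it's worth, here is a minimal sketch of the continuation idea described above (my own code, not an official recommendation): suspend the task and resume from the command buffer's completion handler instead of blocking a cooperative-pool thread with waitUntilCompleted.

import Metal

enum RenderError: Error { case commandBufferUnavailable }

// Sketch: encode work synchronously, then await GPU completion without
// blocking a thread from the cooperative pool.
func submitAndAwait(commandQueue: MTLCommandQueue,
                    encode: (MTLCommandBuffer) -> Void) async throws {
    guard let commandBuffer = commandQueue.makeCommandBuffer() else {
        throw RenderError.commandBufferUnavailable
    }
    encode(commandBuffer)

    await withCheckedContinuation { continuation in
        // Runs on a Metal-owned thread when the GPU finishes; no cooperative
        // thread is blocked while the work is in flight.
        commandBuffer.addCompletedHandler { _ in
            continuation.resume()
        }
        commandBuffer.commit()
    }
}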
5
0
1.1k
Apr ’25
Clarifying when Game Center activity events fire relative to authentication
Hello, in our game we enforce an age gate before showing Game Center sign-in. Only after the user passes the age gate do we call GKLocalPlayer.localPlayer.authenticateHandler. The reason I'm asking is that we want to reliably detect if the game was launched from a Game Center activity in the Games app (iOS 26+). If the user prefers to enter via activities, we don't want to miss that event during cold start. Our current proposal is:
1. Register a GKLocalPlayerListener early in didFinishLaunchingWithOptions: so the app is ready to catch events.
2. Queue any incoming events in our dispatcher.
3. Only process those events after the user passes the age gate and authentication succeeds.
My questions are:
- Does player:wantsToPlayGameActivity:completionHandler: ever fire before authentication, or only after the local player is authenticated?
- If it only fires after authentication, is our "register early but gate processing" approach the correct way to ensure we don't miss activity launches?
- Is there any recommended pattern to distinguish an "activity launch" vs. a "normal launch" in this age-gate scenario?
We want to respect Apple's age gate requirements, but also ensure activity launches are not lost if the user prefers that entry point. Sorry if this is a stupid question — I just want to be sure we're following the right pattern. Thanks for any clarification or best-practice guidance!
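A sketch of the "register early, gate processing" idea (my own illustration; the exact GameKit activity callback signature is deliberately not reproduced, since only the queueing pattern matters here):

import GameKit

// Sketch: a dispatcher that is registered at launch, buffers events, and only
// processes them once the age gate and authentication have both completed.
final class GameEventDispatcher: NSObject, GKLocalPlayerListener {
    static let shared = GameEventDispatcher()

    private var pending: [() -> Void] = []
    private var canProcess = false   // set true after age gate + authentication

    /// Call from didFinishLaunching so no early event is lost.
    func registerEarly() {
        GKLocalPlayer.local.register(self)
    }

    /// Queue work now, run it once the gate is open.
    func enqueue(_ work: @escaping () -> Void) {
        if canProcess { work() } else { pending.append(work) }
    }

    /// Call after the age gate passes and authenticateHandler reports success.
    func openGate() {
        canProcess = true
        pending.forEach { $0() }
        pending.removeAll()
    }

    // GKLocalPlayerListener callbacks (including the activity callback named in
    // the post) would call enqueue { ... } instead of acting immediately.
}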
2
0
372
5d
Xcode 26.x Metal 4 classes do not compile
Hi, I am using Xcode 26.x, but my Metal 4 classes are not compiling. I downloaded the sample code from Apple's website: https://developer.apple.com/documentation/Metal/processing-a-texture-in-a-compute-function. For example, I am getting errors like "Cannot find protocol declaration for 'MTL4CommandQueue'". I have hit a deadline, so any recommendations are very welcome. I have downloaded the Metal Toolchain. When I run the following commands in the terminal:

xcodebuild -showComponent metalToolchain ; xcrun -f metal ; xcrun metal --version

I get the following response:

Asset Path: /System/Library/AssetsV2/com_apple_MobileAsset_MetalToolchain/86fbaf7b114a899754307896c0bfd52ffbf4fded.asset/AssetData
Build Version: 17A321
Status: installed
Toolchain Identifier: com.apple.dt.toolchain.Metal.32023
Toolchain Search Path: /Users/private/Library/Developer/DVTDownloads/MetalToolchain/mounts/86fbaf7b114a899754307896c0bfd52ffbf4fded
/Users/private/Library/Developer/DVTDownloads/MetalToolchain/mounts/86fbaf7b114a899754307896c0bfd52ffbf4fded/Metal.xctoolchain/usr/bin/metal
Apple metal version 32023.830 (metalfe-32023.830.2)
Target: air64-apple-darwin24.6.0
Thread model: posix
InstalledDir: /Users/private/Library/Developer/DVTDownloads/MetalToolchain/mounts/86fbaf7b114a899754307896c0bfd52ffbf4fded/Metal.xctoolchain/usr/metal/current/bin
2
0
1.1k
Jan ’26
RealityKit, Attachments - not working
The simplest RealityView { content, attachments in ... } causes "Contextual closure expects 1 argument, but 2 were used in closure body." I have checked every example and I cannot understand why I get this error regardless of the content. Note: I have added Attachment(id: "test") to the attachments closure and get "Attachment not in scope". I imported both RealityKit and SwiftUI.
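For comparison, here is a minimal sketch of the attachments-aware RealityView initializer (assuming a visionOS 1.0+ or iOS 18+ deployment target; on earlier targets the make closure only receives content, which produces exactly the "expects 1 argument" diagnostic described above):

import SwiftUI
import RealityKit

// Sketch: the three-closure form of RealityView with an attachments builder.
struct AttachmentExampleView: View {
    var body: some View {
        RealityView { content, attachments in
            // The attachment entity only exists because the attachments:
            // builder below declares an Attachment with the same id.
            if let label = attachments.entity(for: "test") {
                label.position = [0, 0.2, -0.5]
                content.add(label)
            }
        } update: { _, _ in
            // React to state changes here if needed.
        } attachments: {
            Attachment(id: "test") {
                Text("Hello from an attachment")
                    .padding()
            }
        }
    }
}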
2
0
305
Dec ’25
Multiple App Icons
Hi, I have a Unity game. I need multiple app icons so the game can be recognized in different countries. In other words, is it possible to have an iOS app whose app icon changes based on the device locale/language? On Android this is possible using the Unity Localization package "com.unity.localization".
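iOS has no fully automatic locale-based icon, but alternate icons declared under CFBundleIcons > CFBundleAlternateIcons in Info.plist can be switched at runtime. A hedged sketch on the native side (icon names are placeholders; Locale.Language requires iOS 16+; the system shows an alert when the icon changes and only changes it while the app is in the foreground):

import UIKit

// Sketch: pick an alternate icon based on the current language at launch.
func applyLocalizedIconIfNeeded() {
    guard UIApplication.shared.supportsAlternateIcons else { return }

    let language = Locale.current.language.languageCode?.identifier
    let iconName: String?
    switch language {
    case "ja": iconName = "AppIcon-ja"   // placeholder names declared in Info.plist
    case "fr": iconName = "AppIcon-fr"
    default:   iconName = nil            // nil restores the primary icon
    }

    guard UIApplication.shared.alternateIconName != iconName else { return }
    UIApplication.shared.setAlternateIconName(iconName) { error in
        if let error { print("Icon change failed: \(error)") }
    }
}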
0
0
259
Oct ’25
GKAccessPoint triggerAccessPointWithState handler not invoked on iOS 26.0 and iOS 15.8.4
Incorrect/unexpected behaviour: when calling [GKAccessPoint.shared triggerAccessPointWithState:GKGameCenterViewControllerStateAchievements handler:^{}] on a real device running iOS 26 beta, the overlay appears as expected, but the handler block is never called. The same incorrect behavior occurs on previous iOS versions (tested on iOS 15.8.4).

Steps to reproduce:
1. Authenticate GKLocalPlayer.
2. Call triggerAccessPointWithState:handler: with a block that logs or performs logic.
3. Observe that the overlay appears, but the block is not executed.

Behavior: the UI appears correctly, but the handler is not invoked at all.
Expected result: the handler should fire immediately after the dashboard is shown.
Actual result: the handler is never called.

Use case: as GKGameCenterViewController is deprecated, we are moving to GKAccessPoint, but due to the above issue we are unable to.

Environment:
- Devices: iPhone 16, iPhone 7
- iOS: 26.0 and 15.8.4
- Xcode: 26.0 beta and 16.4
6
0
534
Oct ’25
Bone deformation
I have been tasked with creating content for the Apple Vision Pro. Just the 3D content and animation, not the programming end of things. I can't seem to get any kind of mesh deformation animation to import into Reality Composer Pro. By that I mean bones/skin, or even point cache. I work on PC, and my main software is 3ds Max, but I'm borrowing an iMac for this job and was instructed to use RCP on it for testing before handing things off to the programmer. My files open and play fine in other USD programs, like Omniverse or usdview, just not Reality Composer Pro. I've seen the dinosaur demo in AVP, so I know mesh deformation is possible. If there are other essential tools that might make this possible, I have not been made aware of them. I am experimenting with round-tripping through Blender, in case that exports better, but I'm not really having luck there either, though my results are different. Thanks.
0
0
123
Sep ’25
moveCharacter reports collision with itself
I'm running into an issue with collisions between two entities that have a character controller component. In the collision handler for moveCharacter, the collision has both hitEntity and characterEntity set to the same object: the entity that was moved with moveCharacter().

The example below configures 3 objects:
- a stationary sphere with a character controller
- a falling sphere with a character controller
- a stationary cube with a collision component

If the falling sphere hits the stationary sphere, the collision handler reports both hitEntity and characterEntity to be the falling sphere. I would expect the hitEntity to be the stationary sphere and the characterEntity to be the falling sphere. If the falling sphere hits the cube with a collision component, the hitEntity is the cube and the characterEntity is the falling sphere, as expected.

Is this the expected behavior? The entities act as expected visually; however, if I want the spheres to react differently depending on which character they collided with, I am not getting the expected results. I.e., if a player-controlled character collides with an NPC, exchange a resource with the NPC; if the player collides with an enemy, take damage.

import SwiftUI
import RealityKit

struct ContentView: View {
    @State var root: Entity = Entity()
    @State var stationary: Entity = createCharacter(named: "stationary", radius: 0.05, color: .blue)
    @State var falling: Entity = createCharacter(named: "falling", radius: 0.05, color: .red)
    @State var collisionCube: Entity = createCollisionCube(named: "cube", size: 0.1, color: .green)

    //relative to root
    @State var fallFrom: SIMD3<Float> = [0, 0.5, 0]

    var body: some View {
        RealityView { content in
            content.add(root)
            root.position = [0, -0.5, 0.0]

            root.addChild(stationary)
            stationary.position = [0, 0.05, 0]

            root.addChild(falling)
            falling.position = fallFrom

            root.addChild(collisionCube)
            collisionCube.position = [0.2, 0, 0]
            collisionCube.components.set(InputTargetComponent())
        }
        .gesture(SpatialTapGesture().targetedToAnyEntity().onEnded { tap in
            let tapPosition = tap.entity.position(relativeTo: root)
            falling.components.remove(FallComponent.self)
            falling.teleportCharacter(to: tapPosition + fallFrom, relativeTo: root)
        })
        .toolbar {
            ToolbarItemGroup(placement: .bottomOrnament) {
                HStack {
                    Button("Drop") {
                        falling.components.set(FallComponent(speed: 0.4))
                    }
                    Button("Reset") {
                        falling.components.remove(FallComponent.self)
                        falling.teleportCharacter(to: fallFrom, relativeTo: root)
                    }
                }
            }
        }
    }
}

@MainActor
func createCharacter(named name: String, radius: Float, color: UIColor) -> Entity {
    let character = ModelEntity(mesh: .generateSphere(radius: radius), materials: [SimpleMaterial(color: color, isMetallic: false)])
    character.name = name
    character.components.set(CharacterControllerComponent(radius: radius, height: radius))
    return character
}

@MainActor
func createCollisionCube(named name: String, size: Float, color: UIColor) -> Entity {
    let cube = ModelEntity(mesh: .generateBox(size: size), materials: [SimpleMaterial(color: color, isMetallic: false)])
    cube.name = name
    cube.generateCollisionShapes(recursive: true)
    return cube
}

struct FallComponent: Component {
    let speed: Float
}

struct FallSystem: System {
    static let predicate: QueryPredicate<Entity> = .has(FallComponent.self) && .has(CharacterControllerComponent.self)
    static let query: EntityQuery = .init(where: predicate)

    let down: SIMD3<Float> = [0, -1, 0]

    init(scene: RealityKit.Scene) { }

    func update(context: SceneUpdateContext) {
        let deltaTime = Float(context.deltaTime)
        for entity in context.entities(matching: Self.query, updatingSystemWhen: .rendering) {
            let speed = entity.components[FallComponent.self]?.speed ?? 0.5
            entity.moveCharacter(by: down * speed * deltaTime, deltaTime: deltaTime, relativeTo: nil) { collision in
                if collision.hitEntity == collision.characterEntity {
                    print("hit entity has collided with itself")
                }
                print("\(collision.characterEntity.name) collided with \(collision.hitEntity.name) ")
            }
        }
    }
}

#Preview(windowStyle: .volumetric) {
    ContentView()
}
1
0
174
Aug ’25
Improving person segmentation and occlusion quality in RealityKit
I’m building an app that uses RealityKit and specifically ARConfiguration.FrameSemantics.personSegmentationWithDepth. The goal is to insert an AR object into the scene behind a person, and an additional AR object in front of the person, while being as photo realistic as possible. Through testing, I’ve noticed that many times, the edges of the person segmentation mask are not well matched to the actual person, and parts of the person are transparent, with the AR object bleeding through. It’s sort of like a “bad green screen” effect, which I’d expect to see a little bit, but not to this extent. I’ve been testing on iPhone 16, iPhone 14 Pro, iPad Pro 12.9 inch 6th Generation, and iPhone 12 Pro, with similar results across all devices. I’m wondering what else I can do to improve this… either code changes, platform (like different iPhone models), or environment (like lighting, distance, etc). Attaching some example screen grabs and a minimum reproducible code sample. Appreciate any insights! import ARKit import SwiftUI import RealityKit struct RealityViewContainer: UIViewRepresentable { func makeUIView(context: Context) -> ARView { let arView = ARView(frame: .zero) arView.environment.sceneUnderstanding.options.insert(.occlusion) arView.renderOptions.insert(.disableMotionBlur) arView.renderOptions.insert(.disableDepthOfField) let configuration = ARWorldTrackingConfiguration() configuration.planeDetection = [.horizontal] if ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) { configuration.frameSemantics.insert(.personSegmentationWithDepth) } arView.session.run(configuration) arView.session.delegate = context.coordinator context.coordinator.arView = arView } func makeCoordinator() -> Coordinator { Coordinator(self) } class Coordinator: NSObject, ARSessionDelegate { var parent: RealityViewContainer var floorAnchor: ARPlaneAnchor? init(_ parent: RealityViewContainer) { self.parent = parent } func session(_ session: ARSession, didAdd anchors: [ARAnchor]) { if let arView,floorAnchor == nil { for anchor in anchors { if let horizontalPlaneAnchor = anchor as? ARPlaneAnchor, horizontalPlaneAnchor.alignment == .horizontal, horizontalPlaneAnchor.transform.columns.3.y < arView.cameraTransform.translation.y { // filter out ceiling floorAnchor = horizontalPlaneAnchor let backgroundEntity = BackgroundEntity() let anchorEntity = AnchorEntity(anchor: horizontalPlaneAnchor) anchorEntity.addChild(background) let foregroundEntity = ForegroundEntity() backgroundEntity.addChild(foregroundEntity) arView.scene.addAnchor(anchorEntity) arView.installGestures([.rotation, .translation], for: backgroundEntity) break // Stop after adding the first horizontal plane (floor) } } } } } }
1
0
112
May ’25
Updated Object Capture -- needs LiDAR?
I have two apps released -- ReefScan and ReefBuild -- that are based on the WWDC21 sample photogrammetry apps for iOS and macOS. Those run fine without LiDAR and are used mostly for underwater models, where LiDAR does not work at all. It now appears that the updated photogrammetry session requires LiDAR data, and building my app with the current Xcode results in a non-working app. Has the "old" version of the photogrammetry session been broken by this update? It worked very well previously, so I would hate to see this regression to needing LiDAR. Most of my users do not have it.
3
0
598
Mar ’25
SceneKit - different behavior when debugging
Hello, I'm currently working on my first SceneKit game and have encountered an issue related to moving an SCNNode using a UIPanGestureRecognizer. When I deploy the game to my iPhone via Xcode in debug mode, all interactions are smooth. However, when I stop the debugging session and run the game directly from the device (outside of Xcode), the SCNNode movement behaves inconsistently: sometimes it works smoothly, and sometimes the interaction becomes choppy. The SCNNode movement is controlled by a UIPanGestureRecognizer. Do you have any idea what might be causing the issue?
1
0
267
May ’25
ModelEntity(named:in:) fails to load USD file from RealityKitContent bundle with misleading error?
My experience has been that ModelEntity(named:in:) can be used to load a USD file with a simple structure consisting of entities and model entities, and, critically, it will flatten the entity hierarchy down to a single ModelEntity, presumably reducing the number of draw calls. However, can anyone verify that the following is true? If ModelEntity(named:in:) is used to load a USD file from a RealityKit content bundle, it may fail when the USD file contains more complex data, such as shader graph material definitions, or perhaps for some other reason (I am not sure). And the error that ModelEntity(named:in:) throws in this case is "Cannot load RealityKitContent entity: Failed to find resource with name "<name>" in bundle", which would literally suggest that the file does not exist, instead of what I assume the error actually is: "the file exists, but its entity hierarchy could not be flattened to a single ModelEntity". Is that an accurate description of the known behavior of ModelEntity(named:in:)? I understand that I could use Entity(named:in:) instead, without the flattening feature. My question is really more about the seemingly misleading error message. Thank you for any clarification you can provide.
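A small sketch of the comparison described above (assuming the standard RealityKitContent package generated by the visionOS app template; "Scene" is a placeholder asset name): log whatever ModelEntity(named:in:) throws, then fall back to Entity(named:in:), which keeps the hierarchy instead of flattening it.

import RealityKit
import RealityKitContent

// Sketch: try the flattening initializer first, fall back to the plain one.
func loadScene() async -> Entity? {
    do {
        // May fail for USD files with more complex content (e.g. shader graph materials).
        return try await ModelEntity(named: "Scene", in: realityKitContentBundle)
    } catch {
        // Reportedly surfaces as a misleading "Failed to find resource" message.
        print("ModelEntity load failed: \(error)")
    }
    // Fallback: load without flattening the hierarchy.
    return try? await Entity(named: "Scene", in: realityKitContentBundle)
}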
2
0
183
May ’25
RealityKit VideoMaterial renders pink on iOS 18
Our app is live, and it appears that since the iOS 18 update the VideoMaterial renders a pink/purple color instead of the video (picture attached). The audio is rendered properly. We found that it occurs on old devices: iPhone 11 and iPhone SE (2020). I found this thread by Andy Jazz on Stack Overflow.

Steps to reproduce:
1. Create a plane for the video screen.
2. Apply a VideoMaterial using AVPlayerItem.
3. Anchor the model entity to an ARImageAnchor.

Expected outcome: the video should play as a material on the plane in RealityKit.
Actual outcome: on iOS 18, the plane appears pink, indicating the VideoMaterial isn't applied.

What I've tried:
- Verified the video URL is correct.
- Checked that the AVPlayerItem and VideoMaterial are initialised correctly.
- Ensured the AVPlayer is playing the video.
I also tried different formats (mov/mp4/m4v) and verified that the video's status is readyToPlay. Any suggestions?
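For reference, a minimal sketch of the setup described in the steps above (the URL is a placeholder, and this is my own illustration rather than the app's code): a plane whose material is a VideoMaterial driven by an AVPlayer.

import AVFoundation
import RealityKit

// Sketch: build a 16:9 "video screen" plane backed by a VideoMaterial.
func makeVideoPlane(videoURL: URL) -> ModelEntity {
    let player = AVPlayer(playerItem: AVPlayerItem(url: videoURL))
    let material = VideoMaterial(avPlayer: player)

    let plane = ModelEntity(mesh: .generatePlane(width: 1.6, height: 0.9),
                            materials: [material])
    player.play()
    return plane
}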
1
0
213
Jun ’25