Discuss spatial computing on Apple platforms and how to design and build an entirely new universe of apps and games for Apple Vision Pro.

Posts under Spatial Computing topic
Each entry below shows the post title and body, followed by its reply count, boost count, view count, and most recent activity date.

Presenting images in RealityKit sample No Longer Builds
After updating to the latest visionOS beta, visionOS 26 Beta 4 (23M5300g), the ‘Presenting images in RealityKit’ sample from the following link no longer runs on device. https://developer.apple.com/documentation/RealityKit/presenting-images-in-realitykit Expected / Previous: the application builds and runs on device, working as described in the documentation. Reality: the application builds, but crashes on device with “Thread 1: EXC_BAD_ACCESS (code=1, address=0xb)” (shown in the screenshot). It still runs in the simulator, just not on device. When launched from Xcode, the app builds and installs correctly but hangs on that error. When launched from the Home Screen, it does not load and immediately returns to the Home Screen. The Xcode project previously ran with no changes to the code - the only change was updating the visionOS system software to visionOS 26 Beta 4 (23M5300g). Is anyone else experiencing this issue?
4
0
247
Aug ’25
visionOS: Unable to programmatically close child WindowGroup when parent window closes
Hi , I'm struggling with visionOS window management and need help with closing child windows programmatically. App Structure My app has a Main-Sub window hierarchy: AWindow (Home/Main) BWindow (Main feature window) CWindow (Tool window - child of BWindow) Navigation flow: AWindow → BWindow (switch, 1 window on screen) BWindow → CWindow (opens child, 2 windows on screen) I want BWindow and CWindow to be separate movable windows (not sheet/popover) so users can position them independently in space. The Problem CWindow doesn't close when BWindow closes by tapping the X button below the app (next to the window bar) User clicks X on BWindow → BWindow closes but CWindow remains CWindow becomes orphaned on screen Can close CWindow programmatically when switching BWindow back to AWindow App launch issue After closing both windows, CWindow is remembered as last window Reopening app shows only CWindow instead of BWindow User gets stuck in CWindow with no way back to BWindow I've Tried Environment dismissWindow in cleanup but its not working. // In BWindow.swift .onDisappear { if windowManager.isWindowOpen("cWindow") { dismissWindow(id: "cWindow") } } My App Structure Code Now // in MyNameApp.swift @main struct MyNameApp: App { var body: some Scene { WindowGroup(id: "aWindow") { AWindow() } WindowGroup(id: "bWindow") { BWindow() } WindowGroup(id: "cWindow") { CWindow() } } } // WindowStateManager.swift class WindowStateManager: ObservableObject { static let shared = WindowStateManager() @Published private var openWindows: Set<String> = [] @Published private var windowDependencies: [String: String] = [:] private init() {} func markWindowAsOpen(_ id: String) { markWindowAsOpen(id, parent: nil) } func markWindowAsClosed(_ id: String) { openWindows.remove(id) windowDependencies[id] = nil } func isWindowOpen(_ id: String) -> Bool { let isOpen = openWindows.contains(id) return isOpen } func markWindowAsOpen(_ id: String, parent: String? = nil) { openWindows.insert(id) if let parentId = parent { windowDependencies[id] = parentId } } func getParentWindow(of childId: String) -> String? { let parent = windowDependencies[childId] return parent } func getChildWindows(of parentId: String) -> [String] { let children = windowDependencies.compactMap { key, value in value == parentId ? key : nil } return children } func setNextWindowParent(_ parentId: String) { UserDefaults.standard.set(parentId, forKey: "nextWindowParent") } func getAndClearNextWindowParent() -> String? 
{ let parent = UserDefaults.standard.string(forKey: "nextWindowParent") UserDefaults.standard.removeObject(forKey: "nextWindowParent") return parent } func forceCloseChildWindows(of parentId: String) { let children = getChildWindows(of: parentId) for child in children { markWindowAsClosed(child) NotificationCenter.default.post( name: Notification.Name("ForceCloseWindow"), object: nil, userInfo: ["windowId": child] ) forceCloseChildWindows(of: child) } } func hasMainWindowOpen() -> Bool { let mainWindows = ["main", "bWindow"] return mainWindows.contains { isWindowOpen($0) } } func cleanupOrphanWindows() { for (child, parent) in windowDependencies { if isWindowOpen(child) && !isWindowOpen(parent) { NotificationCenter.default.post( name: Notification.Name("ForceCloseWindow"), object: nil, userInfo: ["windowId": child] ) markWindowAsClosed(child) } } } } // BWindow.swift struct BWindow: View { @Environment(\.dismissWindow) private var dismissWindow @ObservedObject private var windowManager = WindowStateManager.shared var body: some View { VStack { Button("Open C Window") { windowManager.setNextWindowParent("bWindow") openWindow(id: "cWindow") } } .onAppear { windowManager.markWindowAsOpen("bWindow") } .onDisappear { windowManager.markWindowAsClosed("bWindow") windowManager.forceCloseChildWindows(of: "bWindow") } .onChange(of: scenePhase) { oldValue, newValue in if newValue == .background || newValue == .inactive { windowManager.forceCloseChildWindows(of: "bWindow") } } } } // CWindow.swift import SwiftUI struct cWindow: View { @ObservedObject private var windowManager = WindowStateManager.shared @State private var shouldClose = false var body: some View { // Content } .onDisappear { windowManager.markWindowAsClosed("cWindow") NotificationCenter.default.removeObserver( self, name: Notification.Name("ForceCloseWindow"), object: nil ) } .onChange(of: scenePhase) { oldValue, newValue in if newValue == .background { } } .onAppear { let parent = windowManager.getAndClearNextWindowParent() windowManager.markWindowAsOpen("cWindow", parent: parent) NotificationCenter.default.addObserver( forName: Notification.Name("ForceCloseWindow"), object: nil, queue: .main) { notification in if let windowId = notification.userInfo?["windowId"] as? String, windowId == "cWindow" { shouldClose = true } } } .onChange(of: shouldClose) { _, newValue in if newValue { dismissWindow() } } } The logs show everything executes correctly, but CWindow remains visible on screen. Questions Why doesn't dismissWindow(id:) work in cleanup scenarios? Is there a proper way to create a window relationships like parent-child relationships in visionOS? How can I ensure main windows open on app launch instead of tool windows? What's the recommended pattern for dependent windows in visionOS? Environment: Xcode 16.2, visionOS 2.0, SwiftUI
2
0
373
Aug ’25
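A minimal sketch related to the window-management post above, assuming visionOS 2 / Xcode 16 or later. It shows two things: dismissing the tool window by identifier from the parent's onDisappear, and keeping the tool window out of launch and state restoration via the defaultLaunchBehavior and restorationBehavior scene modifiers, which should at least stop the app from reopening into the orphaned CWindow. The window identifiers come from the post; modifier availability is worth verifying against the current SwiftUI documentation, and this is not a confirmed fix for the dismissWindow-in-cleanup problem the poster describes.

import SwiftUI

// Stub content views; the real AWindow/BWindow/CWindow come from the post above.
struct AWindow: View { var body: some View { Text("Home") } }
struct CWindow: View { var body: some View { Text("Tool") } }

struct BWindow: View {
    @Environment(\.openWindow) private var openWindow
    @Environment(\.dismissWindow) private var dismissWindow

    var body: some View {
        Button("Open C Window") { openWindow(id: "cWindow") }
            .onDisappear {
                // Close the dependent tool window when this parent scene goes away.
                dismissWindow(id: "cWindow")
            }
    }
}

@main
struct WindowDemoApp: App {
    var body: some Scene {
        WindowGroup(id: "aWindow") { AWindow() }
        WindowGroup(id: "bWindow") { BWindow() }
        // Keep the tool window out of app launch and scene restoration so the app
        // reopens with a main window instead of the last-used tool window.
        WindowGroup(id: "cWindow") { CWindow() }
            .defaultLaunchBehavior(.suppressed)
            .restorationBehavior(.disabled)
    }
}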
Possible Bug - Hover Effects/Spatial Event Compatibility with PSVR2 Controllers?
Hi, I would like clarification on whether the new hover-effects feature introduced in visionOS 26 supports pinch gestures triggered by the PSVR2 controllers. In your sample application, I found that this was not working: pulling the trigger on the controller while looking at the 3D object did not activate the hover-effect spatial event (the object does show the highlight), and only pinch-clicking with my fingers seems to register/trigger the spatial event. I am using visionOS 26.3. This is inconsistent with how the PSVR2 controller behaves on SwiftUI views and UIView elements, where a trigger press does count as a button click. The sample I used was this one: https://developer.apple.com/documentation/compositorservices/rendering_hover_effects_in_metal_immersive_apps
2
0
913
Mar ’26
visionOS 26 new hand tracking: how to use it to replace hands in a virtual environment
I am still not finding resources on how to replace hands in a fully immersive Space. I reached the goal by creating an ARKit session that detects the hand-tracking joints and connects the USDZ hand-mesh joints to them, but I feel that's not the best solution. I really want to use RealityKit's potential to track and replace hands (with skinned USDZ ones) in an immersive environment, but the only resources I have found are from November 2023. :( Can someone help me?
0
0
119
Jun ’25
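For the hand-replacement question above, one RealityKit-only sketch (no manual ARKit joint plumbing) is to attach skinned hand models to the system hand anchors with AnchorEntity. The USDZ names are hypothetical placeholders, hiding the real passthrough hands would still typically require upperLimbVisibility(.hidden) on the immersive space, and whether palm-anchored models track well enough to pass as full replacements is something to judge on device.

import SwiftUI
import RealityKit

struct HandOverlayView: View {
    var body: some View {
        RealityView { content in
            // Anchor at each palm; other locations (.wrist, .indexFingerTip, ...) are also available.
            let leftAnchor = AnchorEntity(.hand(.left, location: .palm))
            let rightAnchor = AnchorEntity(.hand(.right, location: .palm))

            // "LeftHand" / "RightHand" are hypothetical USDZ assets containing skinned hand meshes.
            if let leftHand = try? await Entity(named: "LeftHand") {
                leftAnchor.addChild(leftHand)
            }
            if let rightHand = try? await Entity(named: "RightHand") {
                rightAnchor.addChild(rightHand)
            }
            content.add(leftAnchor)
            content.add(rightAnchor)
        }
    }
}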
CustomMaterial disable unlit tone mapping
Hi, since iOS 18 UnlitMaterial and ShaderGraphMaterial have the option to disable tone mapping, e.g. via https://developer.apple.com/documentation/realitykit/unlitmaterial/init(applypostprocesstonemap:) Is it possible to do the same for CustomMaterial? I tried initializing a CustomMaterial based on an UnlitMaterial with tone mapping disabled, like so: let unlitMat = UnlitMaterial(applyPostProcessToneMap: false) let customMaterial = try CustomMaterial( from: unlitMat, surfaceShader: surfaceShader, geometryModifier: geometryModifier ) but that does not seem to work. The colors of my texture still look altered in comparison to a plain UnlitMaterial or a ShaderGraphMaterial where it's disabled. Any hints? Thank you!
1
0
148
Jun ’25
RealityKit_DirectionalLight_Question
My application calculates three distinct Meesus Double [x, y, z] Radian values to light a sphere in RealityKit with DirectionalLight. It is my understanding that I must use (simd_quatf) for each radian value to properly light the sphere in the view. The code correctly [orientates] the sphere with the combined (simd_quatf) DirectionalLight in the view, but the illumination (Z-axis) fails to properly illuminate the sphere with the expected result, compared to associated Meesus web page images. For the moment, I do not know how to correct the (Z-axis). Curious for a suggestion ... :] // Location values. let theLatitude: Double = 51.13107260 let theLongitude: Double = -114.01127910 let currentDate: Date = Date() struct TheCalculatedMoonPhaseTest_ContentView: View { var body: some View { VStack { if #available(macOS 15.0, *) { RealityView { content in let moonSphere_Entity = Entity.createSphere(radius: 0.90, color: .black) moonSphere.Entity.name = "MoonSphere" moonSphere.Entity.position = SIMD3<Float>(x: 0, y: 0, z: 0) content.add(moonSphere.Entity) let sunLight_Entity = createDirectionalLight(latitude: theLatitude, longitude: theLongitude, date: currentDate) content.add(sunLight_Entity) } // End of [RealityView] } else { // Earlier version required. } // End of [if #available(macOS 15.0, *)] } // End of [VStack] .background(Color.black) } // End of [var body: some View] // MARK: - 🟠🟠🟠🟠 [SET THE BACKGROUND COLOUR] 🟠🟠🟠🟠 var backgroundColor: Color = Color.init(.black) // MARK: - 🟠🟠🟠🟠 [CREATE THE DIRECTIONAL LIGHT FOR THE SPHERE] 🟠🟠🟠🟠 func createDirectionalLight(latitude: Double, longitude: Double, date: Date) -> Entity { let directionalLight = DirectionalLight() directionalLight.light.color = .white directionalLight.light.intensity = 1000000 directionalLight.shadow = DirectionalLightComponent.Shadow() directionalLight.shadow?.maximumDistance = 5 directionalLight.shadow?.depthBias = 1 // MARK: 🟠🟠🟠🟠 Retrieve the [MEESUS MOON AGE VALUES] from the [CONSTANT FOLDER] 🟠🟠🟠🟠 let theMeesusMoonAge_LunarAgeDaysValue = 25.90567592898601 if theMeesusMoonAge_LunarAgeDaysValue >= 23.10 && theMeesusMoonAge_LunarAgeDaysValue < (29.530588853 - 1.00) { let someCalculatedX_WestEastRadian: Float = Float(1.00) // Identify the sphere’s DirectionalLight Tilt Angle (Y) radian value :: // Note :: The following Tilt Angle is corrected to [Zenith] with the [MeesusCalculatedTilt_Angle] minus the [MeesusCalculatedPar_Angle]. let someCalculatedY_TiltAngleRadian: Float = Float(1.3396086) // Identify the sphere’s DirectionalLight Illumination (Z) radian Value :: // Note :: The Meesus calculated illumination fraction is converted to degrees, then converted to a radian value. let someCalculatedZ_IlluminationAngleRadian: Float = Float(0.45176168630244457) // <=== 14.3800% Illumination. // Define rotation angles in radians for X, Y, and Z axes. let x_Radians = someCalculatedX_WestEastRadian let y_Radians = someCalculatedY_TiltAngleRadian let z_Radians = someCalculatedZ_IlluminationAngleRadian // Identify and separate the quaternion [simd_quatf] for each Radian. let q_X = simd_quatf(angle: x_Radians, axis: SIMD3<Float>(1, 0, 0)) let q_Y = simd_quatf(angle: y_Radians, axis: SIMD3<Float>(0, 1, 0)) let q_Z = simd_quatf(angle: z_Radians, axis: SIMD3<Float>(0, 0, 1)) // Apply and combine the rotations, where order matters. let combinedRotation = q_Z * q_Y * q_X // Identify the [Combined Rotation]. 
// The [MyMoonMeesus] :: [WANING CRESCENT] calculated [combinedRotation] :: simd_quatf(real: 0.73715997, imag: SIMD3<Float>(0.24427173, 0.61516714, -0.13599981)) ° Radians // Normalize the [combinedRotation]. let theNormalizesRotation = simd_normalize(combinedRotation) // Identify the [Normalized Combined Rotation]. // The [MyMoonMeesus] :: [WANING CRESCENT] calculated [normalizedRotation] :: simd_quatf(real: 0.73715997, imag: SIMD3<Float>(0.24427173, 0.61516714, -0.13599981)) ° Radians // Assume the [theNormalizesRotation] appears reversed. let theCorrectedRotation = theNormalizesRotation.inverse // Identify the [Reversed Combined Rotation]. // The [MyMoonMeesus] :: [WANING CRESCENT] calculated [correctedRotation] :: simd_quatf(real: 0.73715997, imag: SIMD3<Float>(-0.24427173, -0.61516714, 0.13599981)) ° Radians // Apply the [Corrected Rotation] to the entity. directionalLight.transform.rotation *= theCorrectedRotation // Add the [directionalLight] to the scene :: let anchor = AnchorEntity() anchor.addChild(directionalLight) } // End of [if theMeesusMoonAge_LunarAgeDaysValue >= 23.10 && theMeesusMoonAge_LunarAgeDaysValue < (29.530588853 - 1.00)] return directionalLight } // End of [func createDirectionalLight(latitude: Double, longitude: Double, date: Date) -> Entity] } // End of [struct TheCalculatedMoonPhaseTest_ContentView: View] // MARK: 🟠🟠🟠🟠 [ENTITY HELPER EXTENSION] 🟠🟠🟠🟠 extension Entity { static func createSphere(radius: Float, color: NSColor) -> Entity { let mesh = MeshResource.generateSphere(radius: radius) var material = PhysicallyBasedMaterial() material.baseColor = .init(tint: color) let modelComponent = ModelComponent(mesh: mesh, materials: [material]) let entity = Entity() entity.components.set(modelComponent) entity.components.set(Transform()) return entity } // End of [static func createSphere(radius: Float, color: NSColor) -> Entity] } // End of [extension Entity] // Application Image :: Calgary // Website Image :: timeanddate // mooncalc.org
1
0
184
Feb ’26
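One suggestion for the DirectionalLight question above: a RealityKit directional light emits along its local -Z axis, so rather than composing per-axis quaternions it is often simpler to turn the calculated values into a single sun-direction vector and orient the light with look(at:from:relativeTo:). A minimal sketch; sunDirection is a placeholder standing in for the Meesus-derived output, so this illustrates the orientation step only.

import RealityKit
import simd

// Aim a DirectionalLight at the origin (where the moon sphere sits) from a given sun direction.
@MainActor
func makeSunLight(sunDirection: SIMD3<Float>) -> Entity {
    let light = DirectionalLight()
    light.light.color = .white
    light.light.intensity = 1_000_000

    // The light shines along its local -Z axis, so place it "at" the sun and look at the origin.
    let sunPosition = simd_normalize(sunDirection) * 10
    light.look(at: .zero, from: sunPosition, relativeTo: nil)
    return light
}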
AVQueuePlayer and AVPlayerLooper implementation.
I am trying to loop my videoMaterial. I have researched AVQueuePlayer and AVPlayerLooper and tried to implement them in my code (please see attached). There are no errors showing up, but the videoMaterial no longer plays. Please also see the attached working code that plays the videoMaterial. I am stumped - can anyone help me solve this? Thank you.
0
0
114
Jun ’25
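A minimal sketch of the looping setup for the post above. The most common pitfall with AVPlayerLooper is letting it be deallocated: it must be kept in a stored property for as long as looping should continue, and the VideoMaterial must wrap the same AVQueuePlayer the looper was created with. The resource name in the usage note is a placeholder.

import AVFoundation
import RealityKit

final class LoopingVideo {
    let player = AVQueuePlayer()
    // Keep a strong reference; if the looper is deallocated, playback/looping silently stops.
    private var looper: AVPlayerLooper?

    @MainActor
    func makeMaterial(url: URL) -> VideoMaterial {
        let item = AVPlayerItem(url: url)
        looper = AVPlayerLooper(player: player, templateItem: item)
        let material = VideoMaterial(avPlayer: player)   // AVQueuePlayer is an AVPlayer subclass
        player.play()
        return material
    }
}

// Usage (hypothetical bundled clip):
// let video = LoopingVideo()
// let url = Bundle.main.url(forResource: "clip", withExtension: "mp4")!
// modelEntity.model?.materials = [video.makeMaterial(url: url)]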
Any recommended content-aware compression strategy for .ktx textures in Reality Composer Pro?
In my Reality Composer Pro workflow for Vision Pro development, I’m using xcrun realitytool image to pre-compress textures into .ktx format, typically using ASTC block compression. These textures are used for cubemaps and environment assets. I’ve noticed that regardless of the image content—whether it’s a highly detailed photo or a completely black image—once compressed with the same ASTC block size (e.g., ASTC_8x8), the resulting .ktx file size is nearly identical. There appears to be no content-aware logic that adapts the compression ratio to the actual texture complexity. In contrast, Unreal Engine behaves differently: even when all cubemap faces are imported at the same resolution as DDS textures, the engine performs content-aware compression during packaging: Low-complexity images are compressed more aggressively The final packaged file size varies based on content complexity Since Reality Composer Pro requires textures to be pre-compressed as .ktx, there’s no opportunity for runtime optimization or per-image compression adjustment. Just wondering: is there any recommended way to implement content-aware compression for .ktx textures in Reality Composer Pro? Or any best practices to optimize .ktx sizes based on image complexity? Thanks!
0
0
247
May ’25
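One note on the observation above: ASTC is a fixed-rate format, so every block occupies 128 bits regardless of content, and for a given resolution and block size the compressed payload is constant by design; image complexity affects quality, not size. A quick sanity check for a single 2048 x 2048 level at ASTC_8x8:

// Fixed-rate ASTC size estimate: 16 bytes (128 bits) per block, independent of pixel content.
let width = 2048, height = 2048, blockW = 8, blockH = 8
let blocks = ((width + blockW - 1) / blockW) * ((height + blockH - 1) / blockH)
let bytes = blocks * 16
print(bytes)   // 1048576 -> ~1 MiB per face, whatever the pixels contain

Any content-aware savings would therefore have to come from resolution, block size, mip trimming, or container-level supercompression, and it is worth checking which of those realitytool's .ktx output actually supports.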
Template Project Entity Overlapping and Sticking Issues
Hello, There are three issues I am running into with a default template project + additional minimal code changes: the Sphere_Left entity always overlaps the Sphere_Right entity. when I release the Sphere_Left entity, it does not remain sticking to the Sphere_Right entity when I release the Sphere_Left entity, it distances itself from the Sphere_Right entity When I manipulate the Sphere_Right entity, these above 3 issues do not occur: I get a correct and expected behavior. These issues are simple to replicate: Create a new project in XCode Choose visionOS -> App, then click Next Name your project, and leave all other options as defaults: Initial Scene: Window, Immersive Space Renderer: RealityKit, Immersive Space: Mixed, then click Next Save you project anywhere... Replace the entire ImmersiveView.swift file with the below code. Run. Try to manipulate the left sphere, you should get the same issues I mentioned above If you restart the project, and manipulate only the right sphere, you should get the correct expected behaviors, and no issues. I am running this in macOS 26, XCode 26, on visionOS 26, all released lately. ImmersiveView Code: // // ImmersiveView.swift // import OSLog import SwiftUI import RealityKit import RealityKitContent struct ImmersiveView: View { private let logger = Logger(subsystem: "com.testentitiessticktogether", category: "ImmersiveView") @State var collisionBeganUnfiltered: EventSubscription? var body: some View { RealityView { content in // Add the initial RealityKit content if let immersiveContentEntity = try? await Entity(named: "Immersive", in: realityKitContentBundle) { content.add(immersiveContentEntity) // Add manipulation components setupManipulationComponents(in: immersiveContentEntity) collisionBeganUnfiltered = content.subscribe(to: CollisionEvents.Began.self) { collisionEvent in Task { @MainActor in handleCollision(entityA: collisionEvent.entityA, entityB: collisionEvent.entityB) } } } } } private func setupManipulationComponents(in rootEntity: Entity) { logger.info("\(#function) \(#line) ") let sphereNames = ["Sphere_Left", "Sphere_Right"] for name in sphereNames { guard let sphere = rootEntity.findEntity(named: name) else { logger.error("\(#function) \(#line) Failed to find \(name) entity") assertionFailure("Failed to find \(name) entity") continue } ManipulationComponent.configureEntity(sphere) var manipulationComponent = ManipulationComponent() manipulationComponent.releaseBehavior = .stay sphere.components.set(manipulationComponent) } logger.info("\(#function) \(#line) Successfully set up manipulation components") } private func handleCollision(entityA: Entity, entityB: Entity) { logger.info("\(#function) \(#line) Collision between \(entityA.name) and \(entityB.name)") guard entityA !== entityB else { return } if entityB.isAncestor(of: entityA) { logger.debug("\(#function) \(#line) \(entityA.name) already under \(entityB.name); skipping reparent") return } if entityA.isAncestor(of: entityB) { logger.info("\(#function) \(#line) Skip reparent: \(entityA.name) is an ancestor of \(entityB.name)") return } reparentEntities(child: entityA, parent: entityB) entityA.components[ParticleEmitterComponent.self]?.burst() } private func reparentEntities(child: Entity, parent: Entity) { let childBounds = child.visualBounds(relativeTo: nil) let parentBounds = parent.visualBounds(relativeTo: nil) let maxEntityWidth = max(childBounds.extents.x, parentBounds.extents.x) let childPosition = child.position(relativeTo: nil) let parentPosition = parent.position(relativeTo: 
nil) let currentDistance = distance(childPosition, parentPosition) child.setParent(parent, preservingWorldTransform: true) logger.info("\(#function) \(#line) Set \(child.name) parent to \(parent.name)") child.components.remove(ManipulationComponent.self) logger.info("\(#function) \(#line) Removed ManipulationComponent from child \(child.name)") if currentDistance > maxEntityWidth { let direction = normalize(childPosition - parentPosition) let newPosition = parentPosition + direction * maxEntityWidth child.setPosition(newPosition - parentPosition, relativeTo: parent) logger.info("\(#function) \(#line) Adjusted position: distance was \(currentDistance), now \(maxEntityWidth)") } } } fileprivate extension Entity { func isAncestor(of other: Entity) -> Bool { var current: Entity? = other.parent while let node = current { if node === self { return true } current = node.parent } return false } } #Preview(immersionStyle: .mixed) { ImmersiveView() .environment(AppModel()) }
8
0
467
Sep ’25
How to renew visionOS Enterprise API entitlement / license?
Hi everyone, I’m currently using the visionOS Enterprise APIs, and I noticed in the official documentation that: However, I couldn’t find any clear instructions on how to actually renew the license. What is the correct process to renew a visionOS Enterprise API license? Do I need to submit a new entitlement request again to renew? Is there any official step-by-step guide or documentation for renewal? Any advice or shared experience would be greatly appreciated 🙏 Thank you!
1
0
1k
3w
Nearby Sharing a Volume won't work
Hi, we've developed an app for Vision Pro that utilises the GroupActivities SDK to provide shared experiences for our users. Remote participation works great, but we can't get nearby sharing to work. The behaviour we're observing: User 1 engages the share sheet from the Volume; the 2nd Vision Pro is visible. User 1 starts nearby sharing. Session initialisation runs for approx. 30 seconds, then fails. Sometimes, the nearby participant doesn't show up at all after the initialisation has failed once. As stated in the Configure your visionOS app for sharing with people nearby article, we didn't make any changes to our implementation to support nearby sharing. Any help would be greatly appreciated. Kind regards, David
3
0
963
Jan ’26
Lighting disabled inside PortalComponent in Progressive/Full Immersion (visionOS 26.2)
Hello, I am experiencing an issue where lighting is disabled within a Portal Component specifically when using Progressive or Full Immersion styles in visionOS 26.2. [Steps to Reproduce] Create a scene in Reality Composer Pro with custom lighting (Directional Light, IBL, etc.) and load it as an Entity. Add a PortalComponent to this Entity. Observe the Entity in an ImmersiveSpace under different Immersion Styles. [Observed Behavior] Mixed Immersion: Lighting works as expected. Progressive / Full Immersion: Lighting is disabled, causing the content to render incorrectly. [Additional Context] Without PortalComponent: The same Reality Composer Pro scene renders lighting correctly across all Immersion Styles (Mixed, Progressive, and Full). Regression: In applications built with Xcode 16.4 / visionOS 2.5, lighting inside the Portal functioned correctly. This issue appears to have emerged with Xcode 26.2 and visionOS 26.2. [Questions] Is the disabling of lighting inside Portals during Full/Progressive Immersion an intended architectural change or optimization in visionOS 26.2? Are there any workarounds, such as specific PortalComponent configurations or new API flags, to re-enable lighting in these immersion modes? Thank you.
1
0
543
Feb ’26
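A possible workaround sketch for the portal-lighting regression above, assuming the missing contribution is image-based lighting: attach an ImageBasedLightComponent directly to the portal-world content and mark the entities as receivers, so the lighting no longer depends on what the immersion style supplies. The EnvironmentResource name is a placeholder, and this is untested against the 26.2 behavior described in the post.

import RealityKit

@MainActor
func applyExplicitIBL(to portalWorldRoot: Entity) async throws {
    // "Sunlight" is a hypothetical EnvironmentResource bundled with the app.
    let environment = try await EnvironmentResource(named: "Sunlight")
    portalWorldRoot.components.set(
        ImageBasedLightComponent(source: .single(environment), intensityExponent: 1.0))

    // Mark every entity under the portal world as a receiver of that light.
    func markReceivers(_ entity: Entity) {
        entity.components.set(
            ImageBasedLightReceiverComponent(imageBasedLight: portalWorldRoot))
        entity.children.forEach(markReceivers)
    }
    markReceivers(portalWorldRoot)
}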
How to mix Animation and IKRig in RealityKit
I want an AR character to be able to look at a position while still playing the character's animation. So far, I managed to manually adjust a single bone rotation using skeletalComponent.poses.default = Transform( scale: baseTransform.scale, rotation: lookAtRotation, translation: baseTransform.translation ) which I run at every rendering update, while a full-body animation is running. But of course, hardcoding a single joint (in my case the head) to point in a direction does not look as nice as running some inverse kinematics that includes the hips + neck + head joints. I found some good IKRig code in Composing interactive 3D content with RealityKit and Reality Composer Pro. But when I try to adjust rigs while animations are playing, the animations usually win over the IKRig changes to the mesh.
1
0
542
Aug ’25
VideoMaterial Black Screen on Vision Pro Device (Works in Simulator)
VideoMaterial Black Screen on Vision Pro Device (Works in Simulator) App Overview App Name: Extn Browser Bundle ID: ai.extn.browser Purpose: A visionOS web browser that plays 360°/180° VR videos in an immersive sphere environment Development Environment & SDK Versions Component Version Xcode 26.2 Swift 6.2 visionOS Deployment Target 26.2 Swift Concurrency MainActor isolation enabled App is released in the TestFlight. Frameworks Used SwiftUI - UI framework RealityKit - 3D rendering, MeshResource, ModelEntity, VideoMaterial AVFoundation - AVPlayer, AVAudioSession WebKit - WKWebView for browser functionality Network - NWListener for local proxy server Sphere Video Mechanism The app creates an immersive 360° video experience using the following approach: // 1. Create sphere mesh (10 meter radius for immersive viewing) let mesh = MeshResource.generateSphere(radius: 10.0) // 2. Create initial transparent material var material = UnlitMaterial() material.color = .init(tint: .clear) // 3. Create entity and invert sphere (negative X scale) let sphere = ModelEntity(mesh: mesh, materials: [material]) sphere.scale = SIMD3<Float>(-1, 1, 1) // Inverts normals for inside-out viewing sphere.position = SIMD3<Float>(0, 1.5, 0) // Eye level // 4. Create AVPlayer with video URL let player = AVPlayer(url: videoURL) // 5. Configure audio session for visionOS let audioSession = AVAudioSession.sharedInstance() try audioSession.setCategory(.playback, mode: .moviePlayback, options: [.mixWithOthers]) try audioSession.setActive(true) // 6. Create VideoMaterial and apply to sphere let videoMaterial = VideoMaterial(avPlayer: player) if var modelComponent = sphere.components[ModelComponent.self] { modelComponent.materials = [videoMaterial] sphere.components.set(modelComponent) } // 7. Start playback player.play() ImmersiveSpace Configuration // browserApp.swift ImmersiveSpace(id: appModel.immersiveSpaceID) { ImmersiveView() .environment(appModel) } .immersionStyle(selection: .constant(.mixed), in: .mixed) Entitlements <!-- browser.entitlements --> <key>com.apple.security.app-sandbox</key> <true/> <key>com.apple.security.network.client</key> <true/> <key>com.apple.security.network.server</key> <true/> Info.plist Network Configuration <key>NSAppTransportSecurity</key> <dict> <key>NSAllowsArbitraryLoads</key> <true/> </dict> The Issue Behavior in Simulator: Video plays correctly on the inverted sphere surface - 360° video is visible and wraps around the user as expected. Behavior on Physical Vision Pro: The sphere displays a black screen. No video content is visible, though the sphere entity itself is present. Important: Not a DRM/Licensing Issue This issue is NOT related to Digital Rights Management (DRM) or FairPlay. I have tested with: Unlicensed raw MP4 video files (no DRM protection) Self-hosted video content with no copy protection Direct MP4 URLs from CDN without any licensing requirements The same black screen behavior occurs with all unprotected video sources, ruling out DRM as the cause. (Plain H.264 MP4, no DRM) Screen Recording: Working in Simulator The following screen recording demonstrates playing a 360° YouTube video in the immersive sphere on the visionOS Simulator: https://cdn.commenda.kr/screen-001.mov This confirms that the VideoMaterial and sphere rendering work correctly in the simulator, but the same setup shows a black screen on the physical Vision Pro device. 
Observations AVPlayer status reports .readyToPlay - The video appears to load successfully VideoMaterial is created without errors - No exceptions thrown Sphere entity renders - The geometry is visible (black surface) Audio session is configured - No errors during audio session setup Network requests succeed - The video URL is accessible from the device Same result with local/unprotected content - DRM is not a factor Console Logs (Device) The logging shows: Sphere created and added to scene AVPlayer created with correct URL VideoMaterial created and applied Player status transitions to .readyToPlay player.play() called successfully Rate shows 1.0 (playing) Despite all success indicators, the rendered output is black. Questions for Apple Are there known differences in VideoMaterial behavior between the visionOS Simulator and physical Vision Pro hardware? Does VideoMaterial(avPlayer:) require specific video codec/format requirements that differ on device? (The test video is a standard H.264 MP4) Is there a required Metal capability or GPU feature for VideoMaterial that may not be available in certain contexts on device? Does the immersion style (.mixed) affect VideoMaterial rendering on hardware? Are there additional entitlements required for video texture rendering in RealityKit on physical hardware? Attempted Solutions Configured AVAudioSession with .playback category Added delay before player.play() to ensure material is applied Verified sphere scale inversion (-1, 1, 1) Tested multiple video URLs (including raw, unlicensed MP4 files) Confirmed network connectivity on device Ruled out DRM/FairPlay issues by testing unprotected content Environment Details Device: Apple Vision Pro visionOS Version: 26.2 Xcode Version: 26.2 macOS Version: Darwin 25.2.0
0
0
280
Feb ’26
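For the black-sphere report above, a small device-side diagnostic that sometimes surfaces what the success logs miss: the player item's error log and its decoded presentationSize (a 0 x 0 presentationSize alongside .readyToPlay usually points at a decode or codec problem rather than a material problem). This is purely a debugging sketch, not a fix.

import AVFoundation

// Debug-only: dump decode-side diagnostics for an AVPlayer that renders black.
func logVideoDiagnostics(for player: AVPlayer) {
    guard let item = player.currentItem else { print("no current item"); return }
    print("status:", item.status.rawValue, "rate:", player.rate)
    print("presentationSize:", item.presentationSize)   // 0x0 despite readyToPlay suggests a decode issue
    if let error = item.error { print("item error:", error) }
    for event in item.errorLog()?.events ?? [] {
        print("errorLog:", event.errorStatusCode, event.errorComment ?? "-")
    }
}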
Request: More Fine-Grained Control in Object Capture (PhotogrammetrySession)
Hi Apple Team and Developers, First of all, I’d like to express my appreciation for the incredible results achieved using PhotogrammetrySession. I’ve been developing a portrait scanning app using Object Capture, and in many tests—especially with human models—I’ve found the reconstructed body surfaces are remarkably smooth and clean, often outperforming tools like Metashape and RealityCapture in terms of aesthetic results. However, I’ve encountered some challenges when working with complex areas like long hair overlapping the face. For instance, with female models where strands of hair partially occlude the face, the resulting mesh tends to merge the hair and facial geometry. This leads to distorted or “melted” facial features, likely due to ambiguity in the geometry estimation phase. Feature Suggestion: Would it be possible to allow developers to supply two versions of the input images: • One version (original) for texture generation • A pre-processed version (e.g., contrast-enhanced or CLAHE filtered) to guide mesh reconstruction only This would give us the flexibility to enhance edge features or shadow detail without affecting the final texture appearance. In other photogrammetry pipelines, applying image enhancement selectively before dense reconstruction improves geometry quality in low-contrast areas. Question: Is there any plan to support this kind of two-path workflow in future versions of PhotogrammetrySession? Or perhaps expose more intermediate stages or tunable parameters to developers? Also, any hints on what we can expect from WWDC 2025 regarding improvements to Object Capture or related vision/3D technologies? Thanks again for this powerful API. Looking forward to hearing insights from the team and other developers. Warm regards, KitCheng
0
0
130
May ’25
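Not the requested two-path workflow, but for reference these are the reconstruction knobs that exist on PhotogrammetrySession.Configuration today; featureSensitivity in particular can help low-contrast regions such as hair, at the cost of processing time. A minimal sketch with placeholder URLs, assuming a platform where PhotogrammetrySession is available.

import RealityKit

func reconstruct(imagesFolder: URL, outputURL: URL) throws {
    var config = PhotogrammetrySession.Configuration()
    config.featureSensitivity = .high        // more exhaustive feature detection for low-contrast areas
    config.sampleOrdering = .sequential      // hint that capture order is spatially coherent
    config.isObjectMaskingEnabled = true

    let session = try PhotogrammetrySession(input: imagesFolder, configuration: config)
    try session.process(requests: [.modelFile(url: outputURL, detail: .full)])
    // Monitor session.outputs (an AsyncSequence) for progress and completion.
}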
Low-latency live streaming using APMP Wide FoV
Is it possible to achieve sub-second end-to-end latency when displaying live streaming video using APMP (Apple Projected Media Profile) with Wide FoV? APMP supports HLS playback, but my understanding is that standard HLS introduces several seconds of latency. I would like to know whether APMP (especially Wide FoV) supports Low-Latency HLS, or if there are inherent limitations that make sub-second latency impractical. If APMP is not suitable for this use case, are there any recommended alternatives within AVFoundation or related frameworks for rendering wide-FoV live video with very low latency? Thank you for any insights.
1
0
505
Jan ’26
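On the latency question above: standard HLS latency is dominated by segment duration, and Low-Latency HLS typically lands in the region of a couple of seconds rather than sub-second, so truly sub-second glass-to-glass delivery usually means a different transport; whether APMP Wide FoV playback honors an LL-HLS playlist is something to confirm with Apple. Client-side, the only real knobs are small ones like the sketch below (the packager has to produce the LL-HLS playlist; AVPlayer picks that up automatically).

import AVFoundation

// Sketch: keep the client-side buffer small for a low-latency HLS stream.
func makeLowLatencyPlayer(url: URL) -> AVPlayer {
    let item = AVPlayerItem(url: url)
    item.preferredForwardBufferDuration = 1           // seconds of forward buffer to aim for
    let player = AVPlayer(playerItem: item)
    player.automaticallyWaitsToMinimizeStalling = false
    return player
}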
Vision Pro stuck on "Retrieving configuration"
Hi, after upgrading to visionOS 2.4.1 (from 1.0) my Vision Pro is stuck on the "Retrieving configuration" screen. The Apple Store didn't support my case since the device was sold in the USA and the product is still not available on the Italian market. I don't have a Developer Strap - how can I manage the issue? Thank you
0
0
121
May ’25
ARKit Face Tracking works in total darkness?
I’ve seen, mainly in discussions with AIs, that ARFaceTrackingConfiguration uses the same technology as Face ID and therefore should work in complete darkness. However, I haven’t been able to achieve this. Does anyone know if this is actually true? I’m testing with an iPhone 16, and Face ID itself works well in darkness.
0
0
141
Feb ’26
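A small empirical probe for the question above: run a bare face-tracking session and watch whether ARFaceAnchor updates keep arriving (isTracked staying true) when the lights go out; face tracking relies on the TrueDepth IR system, so anchor updates in darkness are the thing to measure rather than the camera image. Sketch only, assuming a TrueDepth-equipped device.

import ARKit

// Minimal probe: does ARFaceTrackingConfiguration keep delivering anchors in the dark?
final class FaceTrackingProbe: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        guard ARFaceTrackingConfiguration.isSupported else { print("unsupported device"); return }
        session.delegate = self
        session.run(ARFaceTrackingConfiguration())
    }

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        if let face = anchors.compactMap({ $0 as? ARFaceAnchor }).first {
            print("tracked:", face.isTracked, "at", Date())
        }
    }
}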
RotationSystem and RotationComponent API Updates for visionOS 26 Beta
Are there any changes to RotationSystem: System and RotationComponent: Component that I should be aware of to see if I need to update my use in my visionOS app?
1
0
63
Jun ’25
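RotationComponent and RotationSystem come from the visionOS app template rather than the SDK itself, so changes are more likely to show up as compiler or concurrency warnings in your copy than as API differences; an approximation of the template's pattern is sketched below for comparison (names as in the template, logic simplified).

import RealityKit
import simd

struct RotationComponent: Component {
    var speed: Float = 1.0                 // radians per second
    var axis: SIMD3<Float> = [0, 1, 0]
}

struct RotationSystem: System {
    static let query = EntityQuery(where: .has(RotationComponent.self))

    init(scene: RealityKit.Scene) {}

    func update(context: SceneUpdateContext) {
        for entity in context.entities(matching: Self.query, updatingSystemWhen: .rendering) {
            guard let rotation = entity.components[RotationComponent.self] else { continue }
            entity.orientation *= simd_quatf(
                angle: rotation.speed * Float(context.deltaTime),
                axis: rotation.axis)
        }
    }
}

// Registration, e.g. in the App initializer:
// RotationComponent.registerComponent()
// RotationSystem.registerSystem()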
Apple Vision Pro AirDrop signal to Mac is unstable
I hope to achieve stable transmission. Also, the colors are different: the colors in the headset are not consistent with the colors projected on the screen.
0
0
108
Apr ’25
CustomMaterial disable unlit tone mapping
Hi, since iOS 18 UnlitMaterial and ShaderGraphMaterial have the option to disable tone mapping, e.g via https://developer.apple.com/documentation/realitykit/unlitmaterial/init(applypostprocesstonemap:) Is it possible to do the same for CustomMaterial? I tried initializing a CustomMaterial based on an UnlitMaterial where tone mapping is disabled, like so: let unlitMat = UnlitMaterial(applyPostProcessToneMap: false) let customMaterial = try CustomMaterial( from: unlitMat, surfaceShader: surfaceShader, geometryModifier: geometryModifier ) but that does not seem to work. The colors of my texture still look altered in comparison to a plain UnlitMaterial or a ShaderGraphMaterial where its disabled. Any hints? Thank you!
Replies
1
Boosts
0
Views
148
Activity
Jun ’25
RealityKit_DirectionalLight_Question
My application calculates three distinct Meesus Double [x, y, z] Radian values to light a sphere in RealityKit with DirectionalLight. It is my understanding that I must use (simd_quatf) for each radian value to properly light the sphere in the view. The code correctly [orientates] the sphere with the combined (simd_quatf) DirectionalLight in the view, but the illumination (Z-axis) fails to properly illuminate the sphere with the expected result, compared to associated Meesus web page images. For the moment, I do not know how to correct the (Z-axis). Curious for a suggestion ... :] // Location values. let theLatitude: Double = 51.13107260 let theLongitude: Double = -114.01127910 let currentDate: Date = Date() struct TheCalculatedMoonPhaseTest_ContentView: View { var body: some View { VStack { if #available(macOS 15.0, *) { RealityView { content in let moonSphere_Entity = Entity.createSphere(radius: 0.90, color: .black) moonSphere.Entity.name = "MoonSphere" moonSphere.Entity.position = SIMD3<Float>(x: 0, y: 0, z: 0) content.add(moonSphere.Entity) let sunLight_Entity = createDirectionalLight(latitude: theLatitude, longitude: theLongitude, date: currentDate) content.add(sunLight_Entity) } // End of [RealityView] } else { // Earlier version required. } // End of [if #available(macOS 15.0, *)] } // End of [VStack] .background(Color.black) } // End of [var body: some View] // MARK: - 🟠🟠🟠🟠 [SET THE BACKGROUND COLOUR] 🟠🟠🟠🟠 var backgroundColor: Color = Color.init(.black) // MARK: - 🟠🟠🟠🟠 [CREATE THE DIRECTIONAL LIGHT FOR THE SPHERE] 🟠🟠🟠🟠 func createDirectionalLight(latitude: Double, longitude: Double, date: Date) -> Entity { let directionalLight = DirectionalLight() directionalLight.light.color = .white directionalLight.light.intensity = 1000000 directionalLight.shadow = DirectionalLightComponent.Shadow() directionalLight.shadow?.maximumDistance = 5 directionalLight.shadow?.depthBias = 1 // MARK: 🟠🟠🟠🟠 Retrieve the [MEESUS MOON AGE VALUES] from the [CONSTANT FOLDER] 🟠🟠🟠🟠 let theMeesusMoonAge_LunarAgeDaysValue = 25.90567592898601 if theMeesusMoonAge_LunarAgeDaysValue >= 23.10 && theMeesusMoonAge_LunarAgeDaysValue < (29.530588853 - 1.00) { let someCalculatedX_WestEastRadian: Float = Float(1.00) // Identify the sphere’s DirectionalLight Tilt Angle (Y) radian value :: // Note :: The following Tilt Angle is corrected to [Zenith] with the [MeesusCalculatedTilt_Angle] minus the [MeesusCalculatedPar_Angle]. let someCalculatedY_TiltAngleRadian: Float = Float(1.3396086) // Identify the sphere’s DirectionalLight Illumination (Z) radian Value :: // Note :: The Meesus calculated illumination fraction is converted to degrees, then converted to a radian value. let someCalculatedZ_IlluminationAngleRadian: Float = Float(0.45176168630244457) // <=== 14.3800% Illumination. // Define rotation angles in radians for X, Y, and Z axes. let x_Radians = someCalculatedX_WestEastRadian let y_Radians = someCalculatedY_TiltAngleRadian let z_Radians = someCalculatedZ_IlluminationAngleRadian // Identify and separate the quaternion [simd_quatf] for each Radian. let q_X = simd_quatf(angle: x_Radians, axis: SIMD3<Float>(1, 0, 0)) let q_Y = simd_quatf(angle: y_Radians, axis: SIMD3<Float>(0, 1, 0)) let q_Z = simd_quatf(angle: z_Radians, axis: SIMD3<Float>(0, 0, 1)) // Apply and combine the rotations, where order matters. let combinedRotation = q_Z * q_Y * q_X // Identify the [Combined Rotation]. 
// The [MyMoonMeesus] :: [WANING CRESCENT] calculated [combinedRotation] :: simd_quatf(real: 0.73715997, imag: SIMD3<Float>(0.24427173, 0.61516714, -0.13599981)) ° Radians // Normalize the [combinedRotation]. let theNormalizesRotation = simd_normalize(combinedRotation) // Identify the [Normalized Combined Rotation]. // The [MyMoonMeesus] :: [WANING CRESCENT] calculated [normalizedRotation] :: simd_quatf(real: 0.73715997, imag: SIMD3<Float>(0.24427173, 0.61516714, -0.13599981)) ° Radians // Assume the [theNormalizesRotation] appears reversed. let theCorrectedRotation = theNormalizesRotation.inverse // Identify the [Reversed Combined Rotation]. // The [MyMoonMeesus] :: [WANING CRESCENT] calculated [correctedRotation] :: simd_quatf(real: 0.73715997, imag: SIMD3<Float>(-0.24427173, -0.61516714, 0.13599981)) ° Radians // Apply the [Corrected Rotation] to the entity. directionalLight.transform.rotation *= theCorrectedRotation // Add the [directionalLight] to the scene :: let anchor = AnchorEntity() anchor.addChild(directionalLight) } // End of [if theMeesusMoonAge_LunarAgeDaysValue >= 23.10 && theMeesusMoonAge_LunarAgeDaysValue < (29.530588853 - 1.00)] return directionalLight } // End of [func createDirectionalLight(latitude: Double, longitude: Double, date: Date) -> Entity] } // End of [struct TheCalculatedMoonPhaseTest_ContentView: View] // MARK: 🟠🟠🟠🟠 [ENTITY HELPER EXTENSION] 🟠🟠🟠🟠 extension Entity { static func createSphere(radius: Float, color: NSColor) -> Entity { let mesh = MeshResource.generateSphere(radius: radius) var material = PhysicallyBasedMaterial() material.baseColor = .init(tint: color) let modelComponent = ModelComponent(mesh: mesh, materials: [material]) let entity = Entity() entity.components.set(modelComponent) entity.components.set(Transform()) return entity } // End of [static func createSphere(radius: Float, color: NSColor) -> Entity] } // End of [extension Entity] // Application Image :: Calgary // Website Image :: timeanddate // mooncalc.org
Replies
1
Boosts
0
Views
184
Activity
Feb ’26
AVQueuePlayer and AVPlayerLooper implementation.
I am trying to loop my videoMaterial. I have researched the AXQueuePlayer and AVPlayerLooper and tried to implement them into my code. Please see attached. There are no errors showing up but the videoMaterial is no longer working. Please see the attached for the working code that plays the videoMaterial. I am stumped can anyone help me solve this? Thank you.
Replies
0
Boosts
0
Views
114
Activity
Jun ’25
Any recommended content-aware compression strategy for .ktx textures in Reality Composer Pro?
In my Reality Composer Pro workflow for Vision Pro development, I’m using xcrun realitytool image to pre-compress textures into .ktx format, typically using ASTC block compression. These textures are used for cubemaps and environment assets. I’ve noticed that regardless of the image content—whether it’s a highly detailed photo or a completely black image—once compressed with the same ASTC block size (e.g., ASTC_8x8), the resulting .ktx file size is nearly identical. There appears to be no content-aware logic that adapts the compression ratio to the actual texture complexity. In contrast, Unreal Engine behaves differently: even when all cubemap faces are imported at the same resolution as DDS textures, the engine performs content-aware compression during packaging: Low-complexity images are compressed more aggressively The final packaged file size varies based on content complexity Since Reality Composer Pro requires textures to be pre-compressed as .ktx, there’s no opportunity for runtime optimization or per-image compression adjustment. Just wondering: is there any recommended way to implement content-aware compression for .ktx textures in Reality Composer Pro? Or any best practices to optimize .ktx sizes based on image complexity? Thanks!
Replies
0
Boosts
0
Views
247
Activity
May ’25
Template Project Entity Overlapping and Sticking Issues
Hello, There are three issues I am running into with a default template project + additional minimal code changes: the Sphere_Left entity always overlaps the Sphere_Right entity. when I release the Sphere_Left entity, it does not remain sticking to the Sphere_Right entity when I release the Sphere_Left entity, it distances itself from the Sphere_Right entity When I manipulate the Sphere_Right entity, these above 3 issues do not occur: I get a correct and expected behavior. These issues are simple to replicate: Create a new project in XCode Choose visionOS -> App, then click Next Name your project, and leave all other options as defaults: Initial Scene: Window, Immersive Space Renderer: RealityKit, Immersive Space: Mixed, then click Next Save you project anywhere... Replace the entire ImmersiveView.swift file with the below code. Run. Try to manipulate the left sphere, you should get the same issues I mentioned above If you restart the project, and manipulate only the right sphere, you should get the correct expected behaviors, and no issues. I am running this in macOS 26, XCode 26, on visionOS 26, all released lately. ImmersiveView Code: // // ImmersiveView.swift // import OSLog import SwiftUI import RealityKit import RealityKitContent struct ImmersiveView: View { private let logger = Logger(subsystem: "com.testentitiessticktogether", category: "ImmersiveView") @State var collisionBeganUnfiltered: EventSubscription? var body: some View { RealityView { content in // Add the initial RealityKit content if let immersiveContentEntity = try? await Entity(named: "Immersive", in: realityKitContentBundle) { content.add(immersiveContentEntity) // Add manipulation components setupManipulationComponents(in: immersiveContentEntity) collisionBeganUnfiltered = content.subscribe(to: CollisionEvents.Began.self) { collisionEvent in Task { @MainActor in handleCollision(entityA: collisionEvent.entityA, entityB: collisionEvent.entityB) } } } } } private func setupManipulationComponents(in rootEntity: Entity) { logger.info("\(#function) \(#line) ") let sphereNames = ["Sphere_Left", "Sphere_Right"] for name in sphereNames { guard let sphere = rootEntity.findEntity(named: name) else { logger.error("\(#function) \(#line) Failed to find \(name) entity") assertionFailure("Failed to find \(name) entity") continue } ManipulationComponent.configureEntity(sphere) var manipulationComponent = ManipulationComponent() manipulationComponent.releaseBehavior = .stay sphere.components.set(manipulationComponent) } logger.info("\(#function) \(#line) Successfully set up manipulation components") } private func handleCollision(entityA: Entity, entityB: Entity) { logger.info("\(#function) \(#line) Collision between \(entityA.name) and \(entityB.name)") guard entityA !== entityB else { return } if entityB.isAncestor(of: entityA) { logger.debug("\(#function) \(#line) \(entityA.name) already under \(entityB.name); skipping reparent") return } if entityA.isAncestor(of: entityB) { logger.info("\(#function) \(#line) Skip reparent: \(entityA.name) is an ancestor of \(entityB.name)") return } reparentEntities(child: entityA, parent: entityB) entityA.components[ParticleEmitterComponent.self]?.burst() } private func reparentEntities(child: Entity, parent: Entity) { let childBounds = child.visualBounds(relativeTo: nil) let parentBounds = parent.visualBounds(relativeTo: nil) let maxEntityWidth = max(childBounds.extents.x, parentBounds.extents.x) let childPosition = child.position(relativeTo: nil) let parentPosition = parent.position(relativeTo: 
nil) let currentDistance = distance(childPosition, parentPosition) child.setParent(parent, preservingWorldTransform: true) logger.info("\(#function) \(#line) Set \(child.name) parent to \(parent.name)") child.components.remove(ManipulationComponent.self) logger.info("\(#function) \(#line) Removed ManipulationComponent from child \(child.name)") if currentDistance > maxEntityWidth { let direction = normalize(childPosition - parentPosition) let newPosition = parentPosition + direction * maxEntityWidth child.setPosition(newPosition - parentPosition, relativeTo: parent) logger.info("\(#function) \(#line) Adjusted position: distance was \(currentDistance), now \(maxEntityWidth)") } } } fileprivate extension Entity { func isAncestor(of other: Entity) -> Bool { var current: Entity? = other.parent while let node = current { if node === self { return true } current = node.parent } return false } } #Preview(immersionStyle: .mixed) { ImmersiveView() .environment(AppModel()) }
Replies
8
Boosts
0
Views
467
Activity
Sep ’25
How to renew visionOS Enterprise API entitlement / license?
Hi everyone, I’m currently using the visionOS Enterprise APIs, and I noticed in the official documentation that: However, I couldn’t find any clear instructions on how to actually renew the license. What is the correct process to renew a visionOS Enterprise API license? Do I need to submit a new entitlement request again to renew? Is there any official step-by-step guide or documentation for renewal? Any advice or shared experience would be greatly appreciated 🙏 Thank you!
Replies
1
Boosts
0
Views
1k
Activity
3w
Nearby Sharing a Volume won't work
Hi, we've developed an app for Vision Pro that utilises the GroupActivitites SDK to provide shared experiences for our users. Remote Participation works great, but we can't get nearby sharing to work. The behaviour we're observing: User 1 engages share sheet from Volume, 2nd Vision Pro is visible. User 1 starts nearby sharing Session initialisation runs for approx. 30 seconds, then fails Sometimes, the nearby participant doesn't show up at all after the initialisation has failed once. As stated in the Configure your visionOS app for sharing with people nearby article, we didn't make any changes to our implementation to support nearby sharing. Any help would be greatly appreciated. Kind regards, David
Replies
3
Boosts
0
Views
963
Activity
Jan ’26
Lighting disabled inside PortalComponent in Progressive/Full Immersion (visionOS 26.2)
Hello, I am experiencing an issue where lighting is disabled within a Portal Component specifically when using Progressive or Full Immersion styles in visionOS 26.2. [Steps to Reproduce] Create a scene in Reality Composer Pro with custom lighting (Directional Light, IBL, etc.) and load it as an Entity. Add a PortalComponent to this Entity. Observe the Entity in an ImmersiveSpace under different Immersion Styles. [Observed Behavior] Mixed Immersion: Lighting works as expected.Progressive / Full Immersion: Lighting is disabled, causing the content to render incorrectly. [Additional Context] Without PortalComponent: The same Reality Composer Pro scene renders lighting correctly across all Immersion Styles (Mixed, Progressive, and Full).Regression: In applications built with Xcode 16.4 / visionOS 2.5, lighting inside the Portal functioned correctly. This issue appears to have emerged with Xcode 26.2 and visionOS 26.2. [Questions] Is the disabling of lighting inside Portals during Full/Progressive Immersion an intended architectural change or optimization in visionOS 26.2? Are there any workarounds, such as specific PortalComponent configurations or new API flags, to re-enable lighting in these immersion modes? Thank you.
Replies
1
Boosts
0
Views
543
Activity
Feb ’26
How to mix Animation and IKRig in RealityKit
I want an AR character to be able to look at a position while still playing the character's animation. So far, I've managed to manually adjust a single bone rotation using skeletalComponent.poses.default = Transform( scale: baseTransform.scale, rotation: lookAtRotation, translation: baseTransform.translation ), which I run at every rendering update while a full-body animation is running. But of course, hardcoding a single joint to point in a direction (in my case the head) does not look as nice as running some inverse kinematics that includes the hips + neck + head joints. I found some good IKRig code in Composing interactive 3D content with RealityKit and Reality Composer Pro. But when I try to adjust rigs while animations are playing, the animations usually win over the IKRig changes to the mesh.
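As a side note on the single-joint approach, a rough sketch of how a lookAtRotation could be computed is shown below. It assumes the head joint's rest pose faces +Z and that the target position has already been converted into the joint's coordinate space; neither assumption comes from the post.

import simd

// Rotates an assumed rest-forward axis (+Z here) toward a target position
// expressed in the same coordinate space as the joint.
func lookAtRotation(jointPosition: SIMD3<Float>,
                    targetPosition: SIMD3<Float>,
                    restForward: SIMD3<Float> = [0, 0, 1]) -> simd_quatf {
    let toTarget = targetPosition - jointPosition
    guard simd_length(toTarget) > .ulpOfOne else {
        return simd_quatf(angle: 0, axis: [0, 1, 0])
    }
    return simd_quatf(from: simd_normalize(restForward), to: simd_normalize(toTarget))
}

Slerping from the animated rotation toward this result each frame, rather than writing it directly, usually avoids visible snapping of the head.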
Replies
1
Boosts
0
Views
542
Activity
Aug ’25
VideoMaterial Black Screen on Vision Pro Device (Works in Simulator)
VideoMaterial Black Screen on Vision Pro Device (Works in Simulator) App Overview App Name: Extn Browser Bundle ID: ai.extn.browser Purpose: A visionOS web browser that plays 360°/180° VR videos in an immersive sphere environment Development Environment & SDK Versions Component Version Xcode 26.2 Swift 6.2 visionOS Deployment Target 26.2 Swift Concurrency MainActor isolation enabled App is released in the TestFlight. Frameworks Used SwiftUI - UI framework RealityKit - 3D rendering, MeshResource, ModelEntity, VideoMaterial AVFoundation - AVPlayer, AVAudioSession WebKit - WKWebView for browser functionality Network - NWListener for local proxy server Sphere Video Mechanism The app creates an immersive 360° video experience using the following approach: // 1. Create sphere mesh (10 meter radius for immersive viewing) let mesh = MeshResource.generateSphere(radius: 10.0) // 2. Create initial transparent material var material = UnlitMaterial() material.color = .init(tint: .clear) // 3. Create entity and invert sphere (negative X scale) let sphere = ModelEntity(mesh: mesh, materials: [material]) sphere.scale = SIMD3<Float>(-1, 1, 1) // Inverts normals for inside-out viewing sphere.position = SIMD3<Float>(0, 1.5, 0) // Eye level // 4. Create AVPlayer with video URL let player = AVPlayer(url: videoURL) // 5. Configure audio session for visionOS let audioSession = AVAudioSession.sharedInstance() try audioSession.setCategory(.playback, mode: .moviePlayback, options: [.mixWithOthers]) try audioSession.setActive(true) // 6. Create VideoMaterial and apply to sphere let videoMaterial = VideoMaterial(avPlayer: player) if var modelComponent = sphere.components[ModelComponent.self] { modelComponent.materials = [videoMaterial] sphere.components.set(modelComponent) } // 7. Start playback player.play() ImmersiveSpace Configuration // browserApp.swift ImmersiveSpace(id: appModel.immersiveSpaceID) { ImmersiveView() .environment(appModel) } .immersionStyle(selection: .constant(.mixed), in: .mixed) Entitlements <!-- browser.entitlements --> <key>com.apple.security.app-sandbox</key> <true/> <key>com.apple.security.network.client</key> <true/> <key>com.apple.security.network.server</key> <true/> Info.plist Network Configuration <key>NSAppTransportSecurity</key> <dict> <key>NSAllowsArbitraryLoads</key> <true/> </dict> The Issue Behavior in Simulator: Video plays correctly on the inverted sphere surface - 360° video is visible and wraps around the user as expected. Behavior on Physical Vision Pro: The sphere displays a black screen. No video content is visible, though the sphere entity itself is present. Important: Not a DRM/Licensing Issue This issue is NOT related to Digital Rights Management (DRM) or FairPlay. I have tested with: Unlicensed raw MP4 video files (no DRM protection) Self-hosted video content with no copy protection Direct MP4 URLs from CDN without any licensing requirements The same black screen behavior occurs with all unprotected video sources, ruling out DRM as the cause. (Plain H.264 MP4, no DRM) Screen Recording: Working in Simulator The following screen recording demonstrates playing a 360° YouTube video in the immersive sphere on the visionOS Simulator: https://cdn.commenda.kr/screen-001.mov This confirms that the VideoMaterial and sphere rendering work correctly in the simulator, but the same setup shows a black screen on the physical Vision Pro device. 
Observations AVPlayer status reports .readyToPlay - The video appears to load successfully VideoMaterial is created without errors - No exceptions thrown Sphere entity renders - The geometry is visible (black surface) Audio session is configured - No errors during audio session setup Network requests succeed - The video URL is accessible from the device Same result with local/unprotected content - DRM is not a factor Console Logs (Device) The logging shows: Sphere created and added to scene AVPlayer created with correct URL VideoMaterial created and applied Player status transitions to .readyToPlay player.play() called successfully Rate shows 1.0 (playing) Despite all success indicators, the rendered output is black. Questions for Apple Are there known differences in VideoMaterial behavior between the visionOS Simulator and physical Vision Pro hardware? Does VideoMaterial(avPlayer:) require specific video codec/format requirements that differ on device? (The test video is a standard H.264 MP4) Is there a required Metal capability or GPU feature for VideoMaterial that may not be available in certain contexts on device? Does the immersion style (.mixed) affect VideoMaterial rendering on hardware? Are there additional entitlements required for video texture rendering in RealityKit on physical hardware? Attempted Solutions Configured AVAudioSession with .playback category Added delay before player.play() to ensure material is applied Verified sphere scale inversion (-1, 1, 1) Tested multiple video URLs (including raw, unlicensed MP4 files) Confirmed network connectivity on device Ruled out DRM/FairPlay issues by testing unprotected content Environment Details Device: Apple Vision Pro visionOS Version: 26.2 Xcode Version: 26.2 macOS Version: Darwin 25.2.0
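One thing worth ruling out when an item reports .readyToPlay but renders black only on device is a decode error that surfaces after playback begins. A small diagnostic sketch using standard AVFoundation APIs (not specific to RealityKit or VideoMaterial) is shown below:

import AVFoundation

// Attaches basic failure diagnostics to a player item so that decode errors
// that only occur on device show up in the console.
func attachDiagnostics(to player: AVPlayer) {
    guard let item = player.currentItem else { return }

    // Fires if playback fails partway through (keep the token if you need to remove the observer later).
    _ = NotificationCenter.default.addObserver(
        forName: .AVPlayerItemFailedToPlayToEndTime,
        object: item,
        queue: .main
    ) { notification in
        let error = notification.userInfo?[AVPlayerItemFailedToPlayToEndTimeErrorKey] as? Error
        print("Playback failed:", error?.localizedDescription ?? "unknown error")
    }

    // The error log often contains more detail than item.error alone.
    if let errorLog = item.errorLog() {
        for event in errorLog.events {
            print("Error log:", event.errorComment ?? "no comment", event.errorStatusCode)
        }
    }
    print("Item error:", item.error?.localizedDescription ?? "none",
          "| presentationSize:", item.presentationSize)
}

A presentationSize of zero on device, despite .readyToPlay, would point at the video track not being decodable rather than at the RealityKit material.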
Replies
0
Boosts
0
Views
280
Activity
Feb ’26
Request: More Fine-Grained Control in Object Capture (PhotogrammetrySession)
Hi Apple Team and Developers, First of all, I’d like to express my appreciation for the incredible results achieved using PhotogrammetrySession. I’ve been developing a portrait scanning app using Object Capture, and in many tests—especially with human models—I’ve found the reconstructed body surfaces are remarkably smooth and clean, often outperforming tools like Metashape and RealityCapture in terms of aesthetic results. However, I’ve encountered some challenges when working with complex areas like long hair overlapping the face. For instance, with female models where strands of hair partially occlude the face, the resulting mesh tends to merge the hair and facial geometry. This leads to distorted or “melted” facial features, likely due to ambiguity in the geometry estimation phase. Feature Suggestion: Would it be possible to allow developers to supply two versions of the input images: • One version (original) for texture generation • A pre-processed version (e.g., contrast-enhanced or CLAHE filtered) to guide mesh reconstruction only This would give us the flexibility to enhance edge features or shadow detail without affecting the final texture appearance. In other photogrammetry pipelines, applying image enhancement selectively before dense reconstruction improves geometry quality in low-contrast areas. Question: Is there any plan to support this kind of two-path workflow in future versions of PhotogrammetrySession? Or perhaps expose more intermediate stages or tunable parameters to developers? Also, any hints on what we can expect from WWDC 2025 regarding improvements to Object Capture or related vision/3D technologies? Thanks again for this powerful API. Looking forward to hearing insights from the team and other developers. Warm regards, KitCheng
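For context on where a separate geometry-only input would have to hook in, this is roughly what the current single-input PhotogrammetrySession flow looks like; the folder paths and detail level are illustrative, and today the same images drive both mesh reconstruction and texturing.

import RealityKit

// Current flow: one image folder drives both geometry and texture.
// A hypothetical two-path workflow would need a second, geometry-only input here.
func reconstruct(imagesFolder: URL, outputModel: URL) async throws {
    var configuration = PhotogrammetrySession.Configuration()
    configuration.featureSensitivity = .high   // often used for low-texture surfaces such as skin

    let session = try PhotogrammetrySession(input: imagesFolder, configuration: configuration)
    try session.process(requests: [.modelFile(url: outputModel, detail: .full)])

    for try await output in session.outputs {
        switch output {
        case .requestComplete(_, let result):
            print("Finished:", result)
        case .requestError(_, let error):
            print("Failed:", error)
        default:
            break
        }
    }
}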
Replies
0
Boosts
0
Views
130
Activity
May ’25
Low-latency live streaming using APMP Wide FoV
Is it possible to achieve sub-second end-to-end latency when displaying live streaming video using APMP (Apple Projected Media Profile) with Wide FoV? APMP supports HLS playback, but my understanding is that standard HLS introduces several seconds of latency. I would like to know whether APMP (especially Wide FoV) supports Low-Latency HLS, or if there are inherent limitations that make sub-second latency impractical. If APMP is not suitable for this use case, are there any recommended alternatives within AVFoundation or related frameworks for rendering wide-FoV live video with very low latency? Thank you for any insights.
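Player-side tuning is only part of the picture here, since true sub-second latency depends mostly on how the stream is packaged (segment/part durations, LL-HLS support on the origin). Still, for reference, these are the standard AVPlayer knobs usually adjusted for live playback; the one-second live offset below is an illustrative value, not a recommendation:

import AVFoundation

// Configures an AVPlayer for a live stream with a tighter live-edge offset.
func makeLowLatencyLivePlayer(url: URL) -> AVPlayer {
    let item = AVPlayerItem(url: url)

    // Ask the player to hold close to the live edge (illustrative 1-second offset).
    item.configuredTimeOffsetFromLive = CMTime(seconds: 1.0, preferredTimescale: 600)
    item.automaticallyPreservesTimeOffsetFromLive = true

    // Keep the forward buffer small so the player doesn't build up extra delay.
    item.preferredForwardBufferDuration = 1.0

    let player = AVPlayer(playerItem: item)
    player.automaticallyWaitsToMinimizeStalling = false
    return player
}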
Replies
1
Boosts
0
Views
505
Activity
Jan ’26
Vision Pro stuck on "Retrieving configuration"
Hi, after upgrading to visionOS 2.4.1 (from 1.0), my Vision Pro gets stuck on the "Retrieving configuration" screen. The Apple Store didn't take my case since the device was sold in the USA and the product isn't yet available on the Italian market. I don't have a Developer Strap; how can I resolve the issue? Thank you
Replies
0
Boosts
0
Views
121
Activity
May ’25
ARKit Face Tracking works in total darkness?
I've seen, mainly in discussions with AIs, that ARFaceTrackingConfiguration uses the same technology as Face ID and therefore should work in complete darkness. However, I haven't been able to achieve this. Does anyone know if this is actually true? I'm testing on an iPhone 16, and Face ID works well in darkness.
Replies
0
Boosts
0
Views
141
Activity
Feb ’26