Discuss Spatial Computing on Apple Platforms.

Posts under General subtopic


Triangle Recommendation Limit for M5 Vision Pro
For the M2 Apple Vision Pro, Apple's guidance is: "a general guideline, we recommend no more than 500 thousand triangles for an immersive scene, with 250 thousand for applications in the shared space" (https://developer.apple.com/videos/play/wwdc2024/10186/?time=147). Is there a revised recommendation for the M5 Apple Vision Pro?
Replies: 2 · Boosts: 1 · Views: 765 · Activity: Nov ’25
Custom Material half visible..?
I'm currently implementing 180° / 360° immersive video for my app. The 360° case was easy: I just apply a VideoMaterial to a flipped sphere. But I'm stuck on 180°. My plan is to apply a VideoMaterial to a hemisphere (half sphere), so that the front half of the sphere shows the video and the back half is transparent/clear. Is there any advice, information, or idea on how to implement this? I'd be grateful for any help.
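For reference, a minimal sketch of the working 360° path described above (the function name, radius, and scale flip are illustrative placeholders, not my actual code):

import RealityKit
import AVFoundation

// Sketch of the 360° case: a VideoMaterial on an inside-out sphere.
func makeVideoSphere(url: URL) -> ModelEntity {
    let player = AVPlayer(url: url)
    let material = VideoMaterial(avPlayer: player)
    let sphere = ModelEntity(mesh: .generateSphere(radius: 10), materials: [material])
    sphere.scale = [-1, 1, 1]   // flip the sphere so the video renders on the inside faces
    player.play()
    return sphere
}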
Replies: 0 · Boosts: 0 · Views: 111 · Activity: Oct ’25
Material showing only half?
Hi, I'm currently implementing a 180° / 360° option for immersive video in my app. I was able to implement 360° easily by just applying a VideoMaterial to a flipped sphere. However, I'm a bit stuck on 180°. I want to implement it by setting a VideoMaterial on a hemisphere mesh, but since RealityKit doesn't yet provide a built-in helper such as MeshResource.generateHemisphere, my idea is to make the VideoMaterial visible on the front half of the sphere and transparent on the back half, so the sphere looks like a hemisphere. I can't find a way to implement this. I would appreciate any advice, idea, or information that might help.
Replies: 0 · Boosts: 0 · Views: 77 · Activity: Oct ’25
Developer Strap Gen 2 - Only USB2 Speeds
I am testing the Gen 2 developer strap on my Vision Pro (M2), and I have only been able to get USB 2 speeds when connecting it to my M3 Max MacBook Pro. I used the official Apple Thunderbolt 4 cable, which does get Thunderbolt speeds with my T7 Touch drive. Has anyone figured out a solution for this issue? The Gen 2 developer strap does advertise 20 Gb/s speeds.
Replies: 5 · Boosts: 2 · Views: 1.2k · Activity: Nov ’25
Logitech Muse: OS-level support?
I've been experimenting with the Muse pen and understand that my app can access it through a SpatialTrackingSession, but is there any current or planned support for using devices like this for general UI input, the way game controllers are supported? For example, using the button as a tap analogue for SwiftUI views.
Replies: 0 · Boosts: 0 · Views: 103 · Activity: Oct ’25
mipmapsMode trade-off?
I am building a 360° photo viewer on visionOS 26. It lets the user choose a 2:1 JPG and renders it on a sphere mesh entity, loading the texture with TextureResource(contentsOf: url, options: options). I've noticed two different behaviors depending on the mipmaps option.

With mipmapsMode: .none:
- The image within the "gaze area" looks sharp and clear.
- The two poles (top and bottom) are perfectly rendered.
- There is massive shimmer around the "gaze area".

With mipmapsMode: .allocateAndGenerateAll:
- The image looks slightly blurrier than with .none within the "gaze area".
- The two poles are very blurry and the texture is hard to recognize.
- There is much less shimmer around the "gaze area".

My question: is there a way to get the sharp quality of .none without the massive shimmer? Thank you!

Screenshots (attached): mipmapsMode: .none and mipmapsMode: .allocateAndGenerateAll
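For context, the texture load being compared looks roughly like this (a sketch using the async TextureResource initializer mentioned above; the function name is a placeholder):

import RealityKit

// Sketch: load a 2:1 equirectangular JPG with an explicit mipmaps mode.
// mipmapsMode is the knob being compared (.none vs .allocateAndGenerateAll).
func loadPanoramaTexture(from url: URL) async throws -> TextureResource {
    var options = TextureResource.CreateOptions(semantic: .color)
    options.mipmapsMode = .allocateAndGenerateAll   // or .none
    return try await TextureResource(contentsOf: url, options: options)
}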
Replies: 0 · Boosts: 0 · Views: 221 · Activity: Oct ’25
PSVR2 controller button quirks
I have an open Feedback conversation with Apple on this topic, but I am curious whether others have run into this, or want to try my sample code in their setup.

There are two APIs for reading controller buttons, axes, and D-pads: GCPhysicalInputProfile and GCControllerLiveInput. There are inconsistencies in behaviour between the two. Apple recommends we use GCControllerLiveInput; however, some capabilities of these controllers are only accessible through GCPhysicalInputProfile, as discussed below.

- The PSVR2 R2/L2 buttons (triggers) have analogue force values. These can only be accessed through GCPhysicalInputProfile.
- PSVR2 thumbstick direction values are read through "axes" on GCPhysicalInputProfile, but only through "dpads" on GCControllerLiveInput.
- On both GCPhysicalInputProfile and GCControllerLiveInput, pressed events for all buttons fire properly using the generic aliases (Trigger, Grip, Menu, Right Thumbstick, Left Thumbstick, Right Button A & B (Circle & Cross), Left Button A & B (Triangle & Square)). Apple reserves the system button as the equivalent of a home button for the OS.
- On GCPhysicalInputProfile, touch events are fired when the button is also pressed, but not for touches alone.
- On GCControllerLiveInput, touch events only work for the following buttons: Left Thumbstick, Right Thumbstick, Right Button A (Circle), and Right Button B (Cross). And the Right Button B touch event isn't labelled correctly: it fires as the Right Button A event.

I observed this inside ALVR, which uses a polling-based approach to event processing: https://github.com/alvr-org/alvr-visionos/blob/17b5968f9d894944b53e97134b39dfce0993302a/ALVRClient/WorldTracker.swift#L301

To see this in a very simple app, I used Apple's TrackingAccessories example application: https://developer.apple.com/documentation/ARKit/tracking-accessories-in-volumetric-windows

I've attached the code that replaces the AccessoryTrackingModel class. I added code that prints out what is touched/pressed; see the trackAllConnectedSpatialControllers method: https://github.com/svrc/TrackingAccessories
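For illustration, a minimal polling sketch of the GCPhysicalInputProfile side described above (generic element names; this is not the attached sample code, and whether a given controller exposes GCInputRightTrigger is an assumption):

import GameController

// Sketch: poll every connected controller's physical input profile and read
// the analogue trigger value plus all axes, as described in the post.
func pollSpatialControllers() {
    for controller in GCController.controllers() {
        let profile = controller.physicalInputProfile

        // Trigger force is exposed as a button value in [0, 1] on the profile.
        if let trigger = profile.buttons[GCInputRightTrigger] {
            print("R2 value: \(trigger.value), pressed: \(trigger.isPressed), touched: \(trigger.isTouched)")
        }

        // Thumbstick directions show up under `axes` on GCPhysicalInputProfile.
        for (name, axis) in profile.axes {
            print("axis \(name): \(axis.value)")
        }
    }
}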
Replies: 6 · Boosts: 0 · Views: 786 · Activity: 3w
Nearby Sharing a Volume won't work
Hi, we've developed an app for Vision Pro that uses the GroupActivities SDK to provide shared experiences for our users. Remote participation works great, but we can't get nearby sharing to work. The behaviour we're observing:

- User 1 opens the share sheet from the Volume; the 2nd Vision Pro is visible.
- User 1 starts nearby sharing.
- Session initialisation runs for approx. 30 seconds, then fails.
- Sometimes the nearby participant doesn't show up at all after the initialisation has failed once.

As stated in the "Configure your visionOS app for sharing with people nearby" article, we didn't make any changes to our implementation to support nearby sharing. Any help would be greatly appreciated. Kind regards, David
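For context, our session handling follows the standard GroupActivities pattern, roughly like this (a sketch with a hypothetical activity type, not our actual code):

import GroupActivities

// Placeholder activity type; the real app defines its own.
struct SharedExperienceActivity: GroupActivity {
    var metadata: GroupActivityMetadata {
        var meta = GroupActivityMetadata()
        meta.title = "Shared Experience"
        meta.type = .generic
        return meta
    }
}

// Sketch: wait for incoming sessions (remote or nearby) and join them.
func observeSessions() async {
    for await session in SharedExperienceActivity.sessions() {
        // configure messengers / state sync here before joining
        session.join()
    }
}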
Replies: 2 · Boosts: 0 · Views: 265 · Activity: 3w
Performance drop when particle emitter is combined with video play
Hi all, we're a studio building an app, and one scene contains a 3D asset with a smoke particle emitter and a curved mesh that plays video. I've noticed that when the video plays alone, or the particle effect runs alone, the scene works fine, but the frame rate drops drastically when both are turned on. How do I solve this? It's an important storytelling feature.
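For reference, a stripped-down sketch of the combination in question, i.e. a video-textured mesh next to a particle emitter (placeholder names, sizes, and values, not our actual scene):

import RealityKit
import AVFoundation

// Sketch: a mesh playing video plus a smoke-style particle emitter.
func makeVideoAndSmoke(videoURL: URL) -> (screen: ModelEntity, smoke: Entity) {
    let player = AVPlayer(url: videoURL)
    let screen = ModelEntity(mesh: .generatePlane(width: 1.6, depth: 0.9),
                             materials: [VideoMaterial(avPlayer: player)])
    player.play()

    var emitter = ParticleEmitterComponent()
    emitter.mainEmitter.birthRate = 200   // particle count is one obvious GPU-cost knob
    let smoke = Entity()
    smoke.components.set(emitter)

    return (screen, smoke)
}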
Replies: 2 · Boosts: 0 · Views: 290 · Activity: Oct ’25
Logitech Muse API?
Hello, I've pre-ordered the Logitech Muse in hopes of developing with it, but I have yet to find any documentation on the capabilities it will have or the APIs that will be available to take advantage of it. Is anyone aware of what might become available? Thank you in advance.
Replies: 1 · Boosts: 0 · Views: 137 · Activity: Oct ’25
Share Extensions embedded in visionOS apps
I'm trying to add a feature to my app that lets a user import items from other apps, like Safari, via the share sheet. I've done this many times on iOS/iPadOS easily with a Share Extension. From what I can tell, Xcode tells me share extensions are not available on visionOS, though my experience on device tells me differently (Reminders, Notes, and more seem to implement them somehow). I was finally able to get it working on device only, but I can now no longer test in the simulator, and I have not found a way to distribute this app.

When attempting to run in the simulator, I get this issue:

Please try again later. Appex bundle at /Users/jason/Library/Developer/CoreSimulator/Devices/09A70160-4F4F-4F5E-B679-F6F7D876D7EF/data/Library/Caches/com.apple.mobile.installd.staging/temp.6OAEZp/extracted/LaunchBar.app/PlugIns/LaunchBarShareExtension.appex with id co.swiftfox.LaunchBar.ShareExtension specifies a value (com.apple.share-services) for the NSExtensionPointIdentifier key in the NSExtension dictionary in its Info.plist that does not correspond to a known extension point.

When trying to archive and upload to TestFlight, I get this similar error:

Invalid Info.plist value. The value for the key 'DTPlatformName' in bundle LaunchBar.app/PlugIns/LaunchBarShareExtension.appex is invalid. (ID: 207610c7-b7e1-48be-959b-22a43cd32d16)

The app is visionOS-only, which I'm thinking might be the problem. The share extension is "Designed for iPhone" and requires me to include iPhone as a run destination. In the worst case I can build an iPhone UI for the app, but I'd rather not, as it is very specific to visionOS. Has anyone successfully launched a share extension in a visionOS-only app? I have an iPad app with a share extension that shows up fine on visionOS, but the issue seems to be specific to visionOS-only apps.
Replies: 1 · Boosts: 0 · Views: 107 · Activity: Oct ’25
version update in Vision Pro
Hi, I'm developing an app for Vision Pro using Xcode. After installing the latest update, things that worked in my app suddenly didn't. In my app flow I'm tapping spheres to get their positions, and for some reason there is an offset between where I tap and where the marker for that position shows up. Here's the part of the code that does that, and the part responsible for the alignment that happens afterwards:

func loadMainScene(at position: SIMD3<Float>) async {
    guard let content = self.content else { return }
    do {
        let rootEntity = try await Entity(named: "surgery 16.09", in: realityKitContentBundle)
        rootEntity.scale = SIMD3<Float>(repeating: 0.5)
        rootEntity.generateCollisionShapes(recursive: true)
        self.modelRootEntity = rootEntity

        let bounds = rootEntity.visualBounds(relativeTo: nil)
        print("📏 Model bounds: center=\(bounds.center), extents=\(bounds.extents)")

        let pivotEntity = Entity()
        pivotEntity.addChild(rootEntity)
        self.pivotEntity = pivotEntity

        let modelAnchor = AnchorEntity(world: [1, 1.3, -0.8])
        modelAnchor.addChild(pivotEntity)
        content.add(modelAnchor)
        updateModelOpacity(0.5)
        self.modelAnchor = modelAnchor

        rootEntity.visit { entity in
            print("👀 Entity in model: \(entity.name)")
            if entity.name.lowercased().hasPrefix("focus") {
                entity.generateCollisionShapes(recursive: true)
                entity.components.set(InputTargetComponent())
                print("🎯 Made tappable: \(entity.name)")
            }
        }
        print("✅ Model loaded with collisions")

        guard let sphere = placementSphere else { return }
        let sphereWorldXform = sphere.transformMatrix(relativeTo: nil)
        var newXform = sphereWorldXform
        newXform.columns.3.y += 0.1 // move up by 20 cm
        let gridAnchor = AnchorEntity(world: newXform)
        self.gridAnchor = gridAnchor
        content.add(gridAnchor)

        let baseScene = try await Entity(named: "Scene", in: realityKitContentBundle)

        let gridSizeX = 18
        let gridSizeY = 10
        let gridSizeZ = 10
        let spacing: Float = 0.05
        let startX: Float = -Float(gridSizeX - 1) * spacing * 0.5 + 0.3
        let startY: Float = -Float(gridSizeY - 1) * spacing * 0.5 - 0.1
        let startZ: Float = -Float(gridSizeZ - 1) * spacing * 0.5

        for i in 0..<gridSizeX {
            for j in 0..<gridSizeY {
                for k in 0..<gridSizeZ {
                    if j < 2 || j > gridSizeY - 5 { continue } // remove 2 bottom, 4 top
                    let cell = baseScene.clone(recursive: true)
                    cell.name = "Sphere"
                    cell.scale = .one * 0.02
                    cell.position = [
                        startX + Float(i) * spacing,
                        startY + Float(j) * spacing,
                        startZ + Float(k) * spacing
                    ]
                    cell.generateCollisionShapes(recursive: true)
                    gridCells.append(cell)
                    gridAnchor.addChild(cell)
                }
            }
        }
        content.add(gridAnchor)
        print("✅ Grid added")
    } catch {
        print("❌ Failed to load: \(error)")
    }
}

private func handleModelOrGridTap(_ tappedEntity: Entity) {
    guard let modelRootEntity = modelRootEntity else { return }
    let localPosition = tappedEntity.position(relativeTo: modelRootEntity)
    let worldPosition = tappedEntity.position(relativeTo: nil)

    switch tapStep {
    case 0:
        modelPointA = localPosition
        modelAnchor?.addChild(createMarker(at: worldPosition, color: [1, 0, 0]))
        print("📍 Model point A: \(localPosition)")
        tapStep += 1
    case 1:
        modelPointB = localPosition
        modelAnchor?.addChild(createMarker(at: worldPosition, color: [1, 0.5, 0]))
        print("📍 Model point B: \(localPosition)")
        tapStep += 1
    case 2:
        targetPointA = worldPosition
        targetMarkerA = createMarker(at: worldPosition, color: [0, 1, 0])
        modelAnchor?.addChild(targetMarkerA!)
        print("✅ Target point A: \(worldPosition)")
        tapStep += 1
    case 3:
        targetPointB = worldPosition
        targetMarkerB = createMarker(at: worldPosition, color: [0, 0, 1])
        modelAnchor?.addChild(targetMarkerB!)
        print("✅ Target point B: \(worldPosition)")
        alignmentReady = true
        tapStep += 1
    default:
        print("⚠️ Unexpected tap on model helper at step \(tapStep)")
    }
}

func alignModel2Points() {
    guard let modelPointA = modelPointA,
          let modelPointB = modelPointB,
          let targetPointA = targetPointA,
          let targetPointB = targetPointB,
          let modelRootEntity = modelRootEntity,
          let pivotEntity = pivotEntity,
          let modelAnchor = modelAnchor else {
        print("❌ Missing data for alignment")
        return
    }

    let modelVec = modelPointB - modelPointA
    let targetVec = targetPointB - targetPointA
    let modelLength = length(modelVec)
    let targetLength = length(targetVec)
    let scale = targetLength / modelLength

    let modelDir = normalize(modelVec)
    let targetDir = normalize(targetVec)

    var axis = cross(modelDir, targetDir)
    let axisLength = length(axis)
    var rotation = simd_quatf()

    if axisLength < 1e-6 {
        if dot(modelDir, targetDir) > 0 {
            rotation = simd_quatf(angle: 0, axis: [0, 1, 0])
        } else {
            let up: SIMD3<Float> = [0, 1, 0]
            axis = cross(modelDir, up)
            if length(axis) < 1e-6 {
                axis = cross(modelDir, [1, 0, 0])
            }
            rotation = simd_quatf(angle: .pi, axis: normalize(axis))
        }
    } else {
        let dotProduct = dot(modelDir, targetDir)
        let clampedDot = max(-1.0, min(dotProduct, 1.0))
        let angle = acos(clampedDot)
        rotation = simd_quatf(angle: angle, axis: normalize(axis))
    }

    modelRootEntity.scale = .one * scale
    modelRootEntity.orientation = rotation

    let transformedPointA = rotation.act(modelPointA * scale)
    pivotEntity.position = -transformedPointA
    modelAnchor.position = targetPointA
    alignedModelPosition = modelAnchor.position

    print("✅ Aligned with scale \(scale), rotation \(rotation)")
}
Replies: 2 · Boosts: 0 · Views: 290 · Activity: Oct ’25
visionOS – Starting GroupActivity FaceTime Call dismisses Immersive Space
Hello, I am in the process of implementing SharePlay support in my visionOS app. Everything runs fine when I test locally, but when my app is distributed via TestFlight, calling try await activity.activate() shows the SharePlay dialog as usual; then, when I start a new FaceTime call, my ImmersiveSpace gets dismissed. This only happens when the app is distributed via TestFlight; when I run it locally, the ImmersiveSpace stays active as expected. Looking at the console on my Mac I found this log:

Invalid initial client settings class: UIApplicationSceneClientSettings; expected class: MRUISharedApplicationSceneClientSettings; bundle ID: com.apple.facetime; scene ID: com.apple.facetime:SFBSystemService-DDA8C751-C0C4-487E-AD85-59EF4E6C6050

Does anyone have an idea how I can fix this? It's driving me nuts; I've wasted over a day looking for a workaround, so far without success. Thanks!
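For context, the activation call follows the standard GroupActivities pattern, roughly like this (a sketch with a hypothetical activity type, not my actual code):

import GroupActivities

// Placeholder activity type for illustration only.
struct ImmersiveActivity: GroupActivity {
    var metadata: GroupActivityMetadata {
        var meta = GroupActivityMetadata()
        meta.title = "Immersive Session"
        meta.type = .generic
        return meta
    }
}

func startSharePlay() async {
    let activity = ImmersiveActivity()
    switch await activity.prepareForActivation() {
    case .activationPreferred:
        // The activate() call below is the point after which the FaceTime call starts.
        _ = try? await activity.activate()
    default:
        break
    }
}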
Replies: 7 · Boosts: 1 · Views: 976 · Activity: 3w
ManipulationComponent create parent/child crash
Hello, if you add a ManipulationComponent to a RealityKit entity and then continue to add instructions, sooner or later you will encounter a crash with the following error message:

Attempting to move entity “%s” (%p) under “%s” (%p), but the new parent entity is currently being removed. Changing the parent/child entities of an entity in an event handler while that entity is already being reassigned is not supported.

CoreSimulator 1048 – Device: Apple Vision Pro 4K (B87DD32A-E862-4791-8B71-92E50CE6EC06) – Runtime: visionOS 26.0 (23M336) – Device Type: Apple Vision Pro

The problem occurs precisely with this code: ManipulationComponent.configureEntity(object)

I adapted Apple's ObjectPlacementExample and made the changes available via GitHub. The desired behavior is that I can add entities to the ManipulationComponent and RealityKit then runs stably and does not crash randomly. GitHub Repo

Thanks
Andre
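For context, the setup that leads to the crash is roughly the following (a sketch; the entity creation here is a placeholder, the full adapted example is in the repo):

import RealityKit

// Sketch: configure an entity for manipulation. The configureEntity call is
// the one identified above as the trigger for the crash.
@MainActor
func makeManipulableObject() -> ModelEntity {
    let object = ModelEntity(mesh: .generateBox(size: 0.1),
                             materials: [SimpleMaterial(color: .cyan, isMetallic: false)])
    object.generateCollisionShapes(recursive: true)
    object.components.set(InputTargetComponent())
    ManipulationComponent.configureEntity(object)
    return object
}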
Replies: 3 · Boosts: 0 · Views: 483 · Activity: Oct ’25
ARKit Eye Tracking Calibration Issues - Word-Level Reading Tracking Feasibility
Hi Apple Developer Community,

I'm developing an eye-tracking application in Xcode using ARKit's ARFaceTrackingConfiguration and ARFaceAnchor.blendShapes for gaze detection. I'm experiencing several calibration and accuracy issues and would appreciate insights from the community.

Current Implementation
- Using ARFaceAnchor.blendShapes (.eyeLookUpLeft, .eyeLookDownLeft, .eyeLookInLeft, .eyeLookOutLeft, etc.)
- Implementing custom sensitivity curves and smoothing algorithms
- Applying baseline correction and coordinate mapping
- Using quadratic regression for calibration point mapping

Issues I'm Facing
1. Calibration mismatch
- The red dot position doesn't align with where I'm actually looking
- Significant offset between the intended gaze point and the actual cursor position
- Calibration seems to drift or become inaccurate over time
2. Extreme eye movement requirements
- I need to make exaggerated eye movements to reach screen edges/corners
- Natural eye movements don't translate to proportional cursor movement
- Difficulty reaching certain screen regions even with calibration
3. Sensitivity and stability issues
- The cursor jitters or jumps around when looking at the center
- Too much sensitivity to micro-movements
- Inconsistent behavior between calibration and normal operation
4. I also noticed that tracking on the calibration screen, as well as on the reading screen, works better when there is head movement, but I do not want much head movement. I want tracking with normal eye movement while reading an ebook.

Primary Question: Word-Level Eye Tracking Feasibility
Is word-level eye tracking (tracking gaze as users read through individual words in an ebook) technically feasible with current iPhone/iPad hardware? I understand that Apple's built-in eye tracking is primarily an accessibility feature for UI navigation. However, I'm wondering if the TrueDepth camera and ARKit's eye tracking capabilities are sufficient for:
- Tracking natural reading patterns (left-to-right, line-by-line progression)
- Detecting which specific words a user is looking at
- Maintaining accuracy for sustained reading sessions (15-30 minutes)
- Working reliably across different users and lighting conditions

Questions for the Community
- Hardware limitations: Are iPhone/iPad TrueDepth cameras capable of the precision needed for word-level tracking, or is this beyond current hardware capabilities?
- Calibration best practices: What calibration strategies have worked best for accurate gaze mapping? How many calibration points are typically needed?
- Reading-specific challenges: Are there particular challenges when tracking reading behavior vs. general gaze tracking?
- Alternative approaches: Are there better approaches than ARKit blend shapes for this use case?

Current Setup
- Device: iPhone 14 Pro
- iOS version: iOS 18.3
- ARKit version: latest available

Any insights, experiences, or technical guidance would be greatly appreciated. I'm particularly interested in hearing from developers who have worked on similar eye tracking applications or have experience with the limitations and capabilities of ARKit's eye tracking features. Thank you for your time and expertise!
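For context, the blend-shape reading at the core of my implementation looks roughly like this (a simplified sketch; the left-eye-only weighting is illustrative and not my calibration model):

import ARKit

// Sketch: pull the eye-look coefficients from ARFaceAnchor and combine them
// into a rough horizontal / vertical gaze estimate.
final class GazeReader: NSObject, ARSessionDelegate {
    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        guard let face = anchors.compactMap({ $0 as? ARFaceAnchor }).first else { return }
        let shapes = face.blendShapes

        let lookIn   = shapes[.eyeLookInLeft]?.floatValue ?? 0
        let lookOut  = shapes[.eyeLookOutLeft]?.floatValue ?? 0
        let lookUp   = shapes[.eyeLookUpLeft]?.floatValue ?? 0
        let lookDown = shapes[.eyeLookDownLeft]?.floatValue ?? 0

        let horizontal = lookOut - lookIn   // rough left/right estimate from the left eye
        let vertical   = lookUp - lookDown
        print("gaze estimate x: \(horizontal), y: \(vertical)")
    }
}

// Usage sketch: run face tracking and receive per-frame updates.
// let session = ARSession()
// let reader = GazeReader()          // keep a strong reference; the delegate is weak
// session.delegate = reader
// session.run(ARFaceTrackingConfiguration())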
Replies: 0 · Boosts: 0 · Views: 718 · Activity: Oct ’25
Request File Access from Unity for Apple Vision Pro
Hi, I am trying to load files from the Apple Vision Pro's storage into a Unity app (using the Apple visionOS XR Plugin, not the PolySpatial package). So far, I've tried UnitySimpleFileBrowser and UnityStandaloneFileBrowser (neither is made for the Vision Pro, and they don't work there), and then implemented my own naive file browser that at least lets me view directories (those I can see from the app sandbox). This is of course very limited: gray folders can't be accessed, and the only three available ones don't contain anything where a user would put files through the Files app. I know that an app can request access to these "Files & Folders" permissions. So my question is: is there a way to request this access for a Unity-built app at the moment? If yes, what do I need to do? I've looked into the generated Xcode project's "Capabilities", but did not find anything related to file access. Any help is appreciated!
Replies: 5 · Boosts: 0 · Views: 343 · Activity: Oct ’25
AVPlayer stutters when using AVPlayerItemVideoOutput
We’re trying to build a custom player for Unity. For this, we’re using AVPlayer with AVPlayerItemVideoOutput to get textures. However, we noticed that playback is not smooth and the stream often freezes. For testing, we used this 8K video: https://deovr.com/nwfnq1

The video was played using the following code:

@objc public func playVideo(urlString: String) {
    guard let url = URL(string: urlString) else { return }

    let pItem = AVPlayerItem(url: url)
    playerItem = pItem
    pItem.preferredForwardBufferDuration = 10.0

    let pixelBufferAttributes: [String: Any] = [
        kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange,
        kCVPixelBufferMetalCompatibilityKey as String: true,
    ]
    let output = AVPlayerItemVideoOutput(pixelBufferAttributes: pixelBufferAttributes)
    pItem.add(output)

    playerItemObserver = pItem.observe(\.status) { [weak self] pItem, _ in
        guard pItem.status == .readyToPlay else { return }
        self?.playerItemObserver = nil
        self?.player.play()
    }

    player = AVPlayer(playerItem: pItem)
    player.currentItem?.preferredPeakBitRate = 35_000_000
}

When AVPlayerItemVideoOutput is attached, the video stutters and the log looks like this:

🟢 Playback likely to keep up
🟡 Buffer ahead: 4.08s | buffer: 4.08s
🟡 Buffer ahead: 4.08s | buffer: 4.08s
🟡 Buffer ahead: -0.07s | buffer: 0.00s
🟡 Buffer ahead: 2.94s | buffer: 3.49s
🟡 Buffer ahead: 2.50s | buffer: 4.06s
🟡 Buffer ahead: 1.74s | buffer: 4.30s
🟡 Buffer ahead: 0.74s | buffer: 4.30s
🟠 Playback may stall
🛑 Buffer empty
🟡 Buffer ahead: 0.09s | buffer: 4.30s
🟠 Playback may stall
🟠 Playback may stall
🛑 Buffer empty
🟠 Playback may stall
🟣 Buffer full
🟡 Buffer ahead: 1.41s | buffer: 1.43s
🟡 Buffer ahead: 1.41s | buffer: 1.43s
🟡 Buffer ahead: 1.07s | buffer: 1.43s
🟣 Buffer full
🟡 Buffer ahead: 0.47s | buffer: 1.65s
🟠 Playback may stall
🛑 Buffer empty
🟡 Buffer ahead: 0.10s | buffer: 1.65s
🟠 Playback may stall
🟡 Buffer ahead: 1.99s | buffer: 2.03s
🟡 Buffer ahead: 1.99s | buffer: 2.03s
🟣 Buffer full
🟣 Buffer full
🟡 Buffer ahead: 1.41s | buffer: 2.00s
🟡 Buffer ahead: 0.68s | buffer: 2.27s
🟡 Buffer ahead: 0.09s | buffer: 2.27s
🟠 Playback may stall
🛑 Buffer empty
🟠 Playback may stall

When we remove AVPlayerItemVideoOutput from the player, the video plays smoothly, and the output looks like this:

🟢 Playback likely to keep up
🟡 Buffer ahead: 1.94s | buffer: 1.94s
🟡 Buffer ahead: 1.94s | buffer: 1.94s
🟡 Buffer ahead: 1.22s | buffer: 2.22s
🟡 Buffer ahead: 1.05s | buffer: 3.05s
🟡 Buffer ahead: 1.12s | buffer: 4.12s
🟡 Buffer ahead: 1.18s | buffer: 5.18s
🟡 Buffer ahead: 0.72s | buffer: 5.72s
🟡 Buffer ahead: 1.27s | buffer: 7.28s
🟡 Buffer ahead: 2.09s | buffer: 3.03s
🟡 Buffer ahead: 4.16s | buffer: 6.10s
🟡 Buffer ahead: 6.66s | buffer: 7.09s
🟡 Buffer ahead: 5.66s | buffer: 7.09s
🟡 Buffer ahead: 4.66s | buffer: 7.09s
🟡 Buffer ahead: 4.02s | buffer: 7.45s
🟡 Buffer ahead: 3.62s | buffer: 8.05s
🟡 Buffer ahead: 2.62s | buffer: 8.05s
🟡 Buffer ahead: 2.49s | buffer: 3.53s
🟡 Buffer ahead: 2.43s | buffer: 3.38s
🟡 Buffer ahead: 1.90s | buffer: 3.85s

We’ve tried different attribute settings for AVPlayerItemVideoOutput. We also removed all logic related to reading frame data, but the choppy playback still remained. Can you advise whether this is a player issue or if we’re doing something wrong?
Replies: 1 · Boosts: 0 · Views: 398 · Activity: Oct ’25
How to update TextureResource with MTLTexture?
Hi, I have a monitoring app that takes video input from UVC, processes it using Metal, and eventually produces an MTLTexture. The problem I'm facing is that I have to convert the MTLTexture to a CGImage and then call TextureResource.replace, which is super slow. The Metal processing keeps up with the input frame rate (50 fps), but the MTLTexture -> CGImage -> TextureResource path only reaches about 7 fps... Is there any way I can make it faster?
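For context, the conversion path I'm timing looks roughly like this (a sketch that assumes Core Image for the MTLTexture → CGImage step; the actual app may differ):

import CoreImage
import Metal
import RealityKit

// Sketch of the slow path described above: MTLTexture -> CGImage -> TextureResource.replace.
// The CIContext should be created once and reused.
let ciContext = CIContext()

func update(_ textureResource: TextureResource, from metalTexture: MTLTexture) throws {
    guard let ciImage = CIImage(mtlTexture: metalTexture, options: nil),
          let cgImage = ciContext.createCGImage(ciImage, from: ciImage.extent) else {
        return
    }
    // This CPU-side round trip is what limits the pipeline to ~7 fps.
    try textureResource.replace(withImage: cgImage, options: .init(semantic: .color))
}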
Replies: 2 · Boosts: 0 · Views: 419 · Activity: Oct ’25