visionOS

Discuss developing for spatial computing and Apple Vision Pro.

Posts under visionOS tag

200 Posts

Post · Replies · Boosts · Views · Activity

TipKit for visionOS (TabView, etc.)
I am trying to create a user flow that guides the user through navigating my app. I want to add a tip to a TabView that directs the user to a specific tab. I have seen this work properly on iOS, but I am a little lost because visionOS does not respond the same way to .popoverTip and related modifiers. Any guidance is appreciated!
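A minimal sketch of the setup in question, assuming TipKit on visionOS 1 or later; GoToLibraryTip and the tab contents are hypothetical placeholders, and the inline TipView is a common fallback where .popoverTip fails to anchor:

    import SwiftUI
    import TipKit

    // Hypothetical tip that nudges the user toward the Library tab.
    struct GoToLibraryTip: Tip {
        var title: Text { Text("Check out your Library") }
        var message: Text? { Text("Open the Library tab to see your saved items.") }
    }

    struct RootView: View {
        private let libraryTip = GoToLibraryTip()

        var body: some View {
            TabView {
                Text("Home") // placeholder content
                    .tabItem { Label("Home", systemImage: "house") }
                VStack {
                    // Fallback: an inline tip renders even where
                    // .popoverTip's anchoring misbehaves.
                    TipView(libraryTip)
                    Text("Library") // placeholder content
                }
                .popoverTip(libraryTip)
                .tabItem { Label("Library", systemImage: "books.vertical") }
            }
            .task {
                // TipKit must be configured once before any tips display.
                try? Tips.configure()
            }
        }
    }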
0 replies · 0 boosts · 200 views · Jul ’25
visionOS Window launch size
Before visionOS beta 4 it was possible to define the launch size in the Info.plist using PreferredLaunchSize, like so:

    <key>UILaunchPlacementParameters</key>
    <dict>
        <key>PreferredLaunchSize</key>
        <dict>
            <key>Height</key>
            <integer>750</integer>
            <key>Width</key>
            <integer>750</integer>
        </dict>
    </dict>

As of visionOS beta 4 this no longer works: the window opens in a 16:9 format and then scales down to the .defaultSize of the WindowGroup with an animation. Settings, Notes, and Safari still open at different default sizes, though, including their launch screens. How are we supposed to do this now?
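For reference, the SwiftUI-side knob the post refers to is the scene-level .defaultSize modifier; a minimal sketch, with ContentView as a placeholder (visionOS treats the value as a request, not a guarantee):

    import SwiftUI

    @main
    struct MyApp: App {
        var body: some Scene {
            WindowGroup {
                ContentView() // placeholder root view
            }
            // Hints the initial window size in points; per the post, the
            // system may still open at 16:9 first and animate down to this.
            .defaultSize(width: 750, height: 750)
        }
    }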
2 replies · 1 boost · 1.2k views · Jul ’25
Personas and Lighting in a visionOS 26 (or earlier) SharePlay Session
Use case: when SharePlaying a fully immersive 3D scene (e.g. a virtual stage), I would like to shine lights on specific Personas so they show up brighter when someone in the scene is recording the feed (think of a camera person in the scene wearing Vision Pro). Note: this spotlight effect only needs to render in the camera person's headset and does NOT need to be journaled or shared. Before I dive into this, my technical question: can environmental and/or scene lighting affect Persona brightness in a SharePlay session? If not, is there a way to programmatically make Personas "brighter" when recording? My screen recordings always seem to turn out darker than what's rendered in the environment, and manually adjusting the contrast tends to blow out the details in a Persona's face (especially in visionOS 26).
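Whether RealityKit lights affect Persona rendering at all is the open question here, but for concreteness, a sketch of how the attempt itself might look: aim a SpotLightComponent at a target entity (the intensity, angles, and offset are arbitrary placeholder values):

    import RealityKit

    // Builds a spotlight entity aimed at a target (e.g. an entity tracking
    // a Persona's position). Ordinary scene meshes will be lit either way;
    // whether Personas respond is exactly what the post asks.
    func makeSpotlight(aimedAt target: Entity) -> Entity {
        let light = Entity()
        light.components.set(SpotLightComponent(color: .white,
                                                intensity: 15_000,
                                                innerAngleInDegrees: 30,
                                                outerAngleInDegrees: 45,
                                                attenuationRadius: 6))
        // Place the light above and in front of the target, then aim it.
        let targetPosition = target.position(relativeTo: nil)
        light.look(at: targetPosition,
                   from: targetPosition + SIMD3<Float>(0, 1.5, 1.0),
                   relativeTo: nil)
        return light
    }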
0 replies · 0 boosts · 102 views · Jun ’25
ShaderGraphMaterial.getParameter(handle:) always returns nil; is this expected behavior?
I've loaded a ShaderGraphMaterial from a RealityKit content bundle and I'm attempting to access the initial values of its parameters using getParameter(handle:), but this method appears to always return nil:

    let shaderGraphMaterial = try await ShaderGraphMaterial(named: "MyMaterial", from: "MyFile")

    let namedParameterValue = shaderGraphMaterial.getParameter(name: "myParameter")
    // This prints the value of the `myParameter` parameter, as expected.
    print("namedParameterValue = \(namedParameterValue)")

    let handle = ShaderGraphMaterial.parameterHandle(name: "myParameter")
    let handleParameterValue = shaderGraphMaterial.getParameter(handle: handle)
    // Expected behavior: prints the value of the `myParameter` parameter, as above.
    // Observed behavior: prints `nil`.
    print("handleParameterValue = \(handleParameterValue)")

Is this expected behavior? Based on the documentation at https://developer.apple.com/documentation/realitykit/shadergraphmaterial/getparameter(handle:) I'd expect getParameter(handle:) to return the value of the parameter, just as getParameter(name:) does. I've tested this on iOS 18.5 and iOS 26.0 beta 2. If getParameter(handle:) is working as designed, is the following ShaderGraphMaterial extension an appropriate workaround, or can you recommend a better approach? Thank you.

    public extension ShaderGraphMaterial {
        /// Reassigns the values of all named material parameters using the handle-based API.
        ///
        /// This works around an issue where, at least as of RealityKit 26.0 beta 2 and
        /// earlier, `getParameter(handle:)` will always return `nil` when used to read the
        /// initial value of a shader graph material parameter loaded via
        /// `ShaderGraphMaterial(named:from:in:)`, whereas `getParameter(name:)` works
        /// as expected.
        mutating func copyNamedParametersToHandles() {
            for parameterName in self.parameterNames {
                if let value = self.getParameter(name: parameterName) {
                    let handle = ShaderGraphMaterial.parameterHandle(name: parameterName)
                    do {
                        try self.setParameter(handle: handle, value: value)
                    } catch {
                        assertionFailure("Cannot set parameter value")
                    }
                }
            }
        }
    }
1 reply · 0 boosts · 366 views · Jun ’25
visionOS 26 - threadsPerThreadgroup limit causing a crash on device (but not in the simulator)
Hi all, I'm running into an issue with an app that previously worked fine on device using visionOS 2.0. After updating to visionOS 26, the same code runs fine in the simulator but crashes on the device with the following error:

    -[MTLDebugComputeCommandEncoder _validateThreadsPerThreadgroup:]:1330: failed assertion
    `(threadsPerThreadgroup.width(32) * threadsPerThreadgroup.height(32) *
    threadsPerThreadgroup.depth(1))(1024) must be <= 832. (kernel threadgroup size limit)`

Is there a documented way to check or increase the allowed threadsPerThreadgroup size on Apple Vision Pro? Or any recommended workaround for this regression? Thanks in advance!
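The limit itself can't be raised, but it can be queried at runtime; a minimal sketch of the standard Metal pattern, sizing the threadgroup from the pipeline's reported maximum instead of hard-coding 32 × 32 = 1024 (the function and parameter names are placeholders):

    import Metal

    func dispatchCompute(encoder: MTLComputeCommandEncoder,
                         pipeline: MTLComputePipelineState,
                         gridSize: MTLSize) {
        // Query the pipeline's limit instead of assuming 1024.
        let maxThreads = pipeline.maxTotalThreadsPerThreadgroup  // 832 on this device
        let width = pipeline.threadExecutionWidth                // typically 32
        let threadsPerThreadgroup = MTLSize(width: width,
                                            height: maxThreads / width,
                                            depth: 1)
        encoder.setComputePipelineState(pipeline)
        // dispatchThreads copes with grids that aren't exact multiples of
        // the threadgroup size (requires non-uniform threadgroup support).
        encoder.dispatchThreads(gridSize, threadsPerThreadgroup: threadsPerThreadgroup)
    }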
3 replies · 0 boosts · 177 views · Jun ’25
WebXR and PSVR2
We got very excited when we saw support for the PSVR2 controllers at WWDC! WebXR is particularly interesting to us, so we got the controllers to give it a try. Unfortunately, they only register as gamepads on the navigator object, not as XRInputDevices. We went through the experimental feature flags and didn't find anything directly related. Is there a flag we missed? If not, when is PSVR2 support planned for WebXR?
1 reply · 5 boosts · 263 views · Jun ’25
Can we access a "Locked in Place" value when a window has been locked without being snapped to a surface?
Starting in visionOS 26, users can snap windows to surfaces. These windows are locked in place and are later restored by visionOS. We can access the snapped state via surfaceSnappingInfo. Users can also lock a free-floating (unsnapped) window from a context menu in the window controls. Is there a way to tell when a window has been locked without being snapped to a surface?
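For context, a sketch of reading the snapping state, assuming the visionOS 26 surfaceSnappingInfo environment value exposes an isSnapped flag as shown at WWDC25; note it reports only surface snapping, which is exactly the gap the post describes:

    import SwiftUI

    struct SnapAwareView: View {
        // Reports whether this window is snapped to a surface (visionOS 26).
        @Environment(\.surfaceSnappingInfo) private var snappingInfo

        var body: some View {
            Text(snappingInfo.isSnapped ? "Snapped to a surface" : "Free-floating")
            // There appears to be no equivalent API exposing the
            // "locked in place" state of an unsnapped window.
        }
    }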
7 replies · 3 boosts · 198 views · Jun ’25
Apple Vision Pro Developer Strap: Video Out and Recording Time Limit
Is there any way to extend the video recording time limit in Reality Composer Pro beyond 3:00, such as by editing preferences in Terminal or another workaround? Is there any way to use the strap and a USB-C cable as a live video stream input source that would mirror to QuickTime or another video capture tool? I am assuming there is no online documentation or user manual for the strap, but please correct me if I'm wrong. Thank you.
1 reply · 0 boosts · 151 views · Jun ’25
How to play blend shape animations or morph animations exported from Blender in Vision Pro apps using RealityKit
I am exporting a .usdc file from Blender that already has some morph (shape key) animations. The animations play well in Blender, but after export I cannot play them in RealityKit or Reality Composer Pro: Entity.availableAnimations is an empty array, and none of the child objects in the entity hierarchy has an AnimationLibraryComponent. Maybe I am exporting it wrong; I have tried multiple combinations of settings, but it doesn't seem to work. Here are my export settings in Blender. The original file I purchased is an FBX file that has the animation, but when I convert it directly in Reality Converter it doesn't play the animations either.
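As a diagnostic, a small sketch that walks the loaded hierarchy and plays the first animation it finds; if this finds nothing anywhere in the tree, the animation data was lost at export rather than merely attached to an unexpected child entity:

    import RealityKit

    // Recursively searches an entity tree for an animation and plays it.
    // Returns false if no descendant carries any animations.
    @discardableResult
    func playFirstAnimation(in entity: Entity) -> Bool {
        if let animation = entity.availableAnimations.first {
            entity.playAnimation(animation.repeat())
            return true
        }
        for child in entity.children {
            if playFirstAnimation(in: child) { return true }
        }
        return false
    }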
2 replies · 0 boosts · 182 views · Jun ’25
How to load a USDZ inside a Reality Composer Pro package as a ModelEntity in RealityKit?
I need a MeshResource from a ModelEntity to generate a box collider, but ModelEntity fails to load USDZ files from the Reality Composer Pro (RCP) bundle. This code works for loading a generic Entity:

    // Successfully loads as a generic Entity
    previewEntity = try await Entity(named: fileName, in: realityKitContentBundle)

But this fails when trying to load it as a ModelEntity:

    // Fails to load as a ModelEntity
    modelEntity = try await ModelEntity(named: fileName, in: realityKitContentBundle)

I found a thread mentioning: "You'll likely go from USDZ to Entity which contains a MeshResource when you load/init the USDZ file." But it doesn't explain how to actually extract the MeshResource. Could anyone advise: How do I properly load USDZ files as a ModelEntity from an RCP bundle? How do I extract a MeshResource from a successfully loaded Entity? Are there alternative approaches to generate box colliders if direct extraction isn't possible? Sample code for extraction or workarounds would be greatly appreciated!
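A sketch addressing the extraction question, under the assumption that the USDZ's geometry lives in a ModelComponent somewhere in the loaded hierarchy; the commented usage shows one way to get a box collider without touching the mesh at all:

    import RealityKit

    // Depth-first search for the first ModelComponent mesh in a hierarchy.
    func firstMeshResource(in entity: Entity) -> MeshResource? {
        if let model = entity.components[ModelComponent.self] {
            return model.mesh
        }
        for child in entity.children {
            if let mesh = firstMeshResource(in: child) { return mesh }
        }
        return nil
    }

    // For a box collider specifically, the mesh may be unnecessary:
    // let extents = entity.visualBounds(relativeTo: entity).extents
    // entity.components.set(CollisionComponent(shapes: [.generateBox(size: extents)]))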
1 reply · 0 boosts · 188 views · Jun ’25
Extracting the IP address with Swift on visionOS
Hey everyone, I'm developing an app for visionOS where I need to display the Apple Vision Pro's current IP address. I'm using the following code, which works on iOS, macOS, and visionOS in the simulator; only on a real Apple Vision Pro is it unable to extract an IP. Could it be that visionOS currently doesn't allow this? Has anyone had the same experience and found a workaround?

    import Foundation

    func getIPAddress() -> String {
        var address = "no ip"
        var ifaddr: UnsafeMutablePointer<ifaddrs>? = nil
        if getifaddrs(&ifaddr) == 0 {
            var ptr = ifaddr
            while ptr != nil {
                defer { ptr = ptr?.pointee.ifa_next }
                guard let interface = ptr?.pointee else { continue }
                let addrFamily = interface.ifa_addr.pointee.sa_family
                if addrFamily == UInt8(AF_INET),
                   String(cString: interface.ifa_name) == "en0" {
                    var hostname = [CChar](repeating: 0, count: Int(NI_MAXHOST))
                    getnameinfo(interface.ifa_addr,
                                socklen_t(interface.ifa_addr.pointee.sa_len),
                                &hostname, socklen_t(hostname.count),
                                nil, socklen_t(0), NI_NUMERICHOST)
                    address = String(cString: hostname)
                }
            }
            freeifaddrs(ifaddr)
        }
        return address
    }

Thanks in advance for any insights or tips! Best Regards, David
2 replies · 1 boost · 160 views · Jun ’25
Please provide access to face tracking blend shapes on visionOS
Apple, please provide access to face tracking blend shapes on visionOS, just like you do on iOS. You have the best eye and face tracking implementation on the market; please let us use it. There is a sizable audience who will buy the headset just for this. I personally know multiple people who are not buying the headset simply because you locked these features out. No raw camera access is needed, just abstracted blend shape values. You would make the headset so much more useful by doing this simple thing.
1 reply · 1 boost · 136 views · Jun ’25
WebXR Consent Dialog
Based on the "Build immersive web experiences with WebXR" video for visionOS, there is no way to disable the consent prompts for entering an immersive experience or consenting to hand tracking. For the microphone it's possible to "greenlight" specific websites for mic input, which works great. I'd welcome the ability to add specific websites in Settings for which those consent dialogs aren't shown each time. In my opinion, the user interaction through a button that launches the experience would be sufficient to avoid disorienting the user.
0 replies · 1 boost · 99 views · Jun ’25
'tangents' was deprecated in visionOS 2.0: Use cp_drawable_compute_projection instead
I'm using a class with tangents to render with RealityKit on visionOS, but on visionOS 26 it causes a crash in the app, and there is no documentation on how to implement cp_drawable_compute_projection. I have tried a few options without success. Could you help me implement it? The relevant part of the code is:

    return drawable.views.map { view in
        let userViewpointMatrix = (simdDeviceAnchor * view.transform).inverse
        let projectionMatrix = ProjectiveTransform3D(
            leftTangent: Double(view.tangents[0]),
            rightTangent: Double(view.tangents[1]),
            topTangent: Double(view.tangents[2]),
            bottomTangent: Double(view.tangents[3]),
            nearZ: Double(drawable.depthRange.y),
            farZ: Double(drawable.depthRange.x),
            reverseZ: true
        )
        let screenSize = SIMD2(x: Int(view.textureMap.viewport.width),
                               y: Int(view.textureMap.viewport.height))
        return ModelRendererViewportDescriptor(
            viewport: view.textureMap.viewport,
            projectionMatrix: .init(projectionMatrix),
            viewMatrix: userViewpointMatrix * translationMatrix * rotationMatrix
                * scalingMatrix * commonUpCalibration,
            screenSize: screenSize)
    }
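A sketch of the replacement, assuming the CompositorServices Swift overlay's computeProjection(convention:viewIndex:); verify the axis convention against your renderer, and note that translationMatrix, rotationMatrix, scalingMatrix, and commonUpCalibration are the post's own surrounding context:

    // Replaces the deprecated tangents-based ProjectiveTransform3D with the
    // projection matrix CompositorServices computes for each view.
    return drawable.views.indices.map { viewIndex in
        let view = drawable.views[viewIndex]
        let userViewpointMatrix = (simdDeviceAnchor * view.transform).inverse
        let projectionMatrix = drawable.computeProjection(convention: .rightUpBack,
                                                          viewIndex: viewIndex)
        let screenSize = SIMD2(x: Int(view.textureMap.viewport.width),
                               y: Int(view.textureMap.viewport.height))
        return ModelRendererViewportDescriptor(
            viewport: view.textureMap.viewport,
            projectionMatrix: .init(projectionMatrix), // simd_float4x4; adapt if your type differs
            viewMatrix: userViewpointMatrix * translationMatrix * rotationMatrix
                * scalingMatrix * commonUpCalibration,
            screenSize: screenSize)
    }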
1 reply · 0 boosts · 150 views · Jun ’25
Escalated to App Review Board Again — Urgent Concern Over Process Breakdown
Dear App Review Team, We are raising serious concerns regarding the ongoing review process for our app (App ID: 6737148404), which has once again been escalated to the App Review Board, just days after the previous issue was finally resolved.

We submitted a new build on June 18 containing a standard feature update. The app entered "In Review" promptly, but after three days with no communication it has now, it appears, been escalated to the board again. This pattern is deeply troubling. What's particularly alarming is that the current reviewer appears to lack the necessary context or understanding of the app, and instead of engaging with us or taking ownership of the review, they've defaulted to escalating it without providing any rationale or feedback. This repeated hand-off to the board without accountability or explanation is effectively grinding our release process to a halt. Based on past experience, we suspect we are now waiting for another board meeting early next week, which means yet another week lost to silence and uncertainty. This is not a scalable or sustainable process.

This is one of the top-performing apps on visionOS, and yet every iteration, regardless of how minor, is delayed by escalations and inaction. These delays are damaging to our business, destabilizing to our user experience, and increasingly eroding our trust in the App Review process.

We urgently request: immediate clarity on the current review status; a direct line of communication with someone accountable for the review; and an explanation as to why nearly every update requires board involvement.

We've complied fully with all App Store guidelines. All standard support channels have already been exhausted without resolution. This cycle must change. We are ready and willing to work collaboratively, but we need responsiveness and consistency from Apple in return. App ID: 6737148404
1 reply · 0 boosts · 159 views · Jun ’25