visionOS

Discuss developing for spatial computing and Apple Vision Pro.

Posts under visionOS tag

200 Posts

Post

Replies

Boosts

Views

Activity

visionOS plane anchor rotation and wall direction are inconsistent
I have a problem with wall plane detection in visionOS/ARKit: I am using an ARKitSession with a PlaneDetectionProvider to detect .wall planes in a visionOS immersive space. I recorded the position and rotation of the first detected plane, but found that the rotation value deviates depending on the direction the user is facing when the space starts. That is, even for planes located on the same wall, the rotation quaternion differs. I would like to obtain the wall's true direction no matter which direction the user is facing when the scan begins, so that virtual content can be accurately aligned with the wall. I have tried using anchor.originFromAnchorTransform and Transform.rotation directly, but the rotation value is still affected by the user's initial orientation. In addition, I would like to know whether the user's initial orientation also affects the position information; if so, please suggest a workaround. Thank you!
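For reference, a minimal sketch of one way to inspect the anchor's world-space orientation, assuming (as with ARKit plane anchors generally) that the anchor's local +Y axis is the plane normal; the helper is mine, not an API:

import ARKit
import simd

// Sketch: read a wall anchor's normal and heading in the session's world space.
// Assumes the plane anchor's local +Y axis is the plane normal.
func wallOrientation(for anchor: PlaneAnchor) -> (normal: SIMD3<Float>, yaw: Float) {
    let m = anchor.originFromAnchorTransform            // anchor space -> world space
    let normal = simd_normalize(simd_make_float3(m.columns.1))
    // Heading of the normal projected onto the horizontal plane. Note that the
    // world origin itself is established from the user's position/orientation
    // when the immersive space opens, so this yaw is only stable within one
    // session; aligning content to the anchor's own axes (rather than to a
    // fixed world direction) keeps it flush with the wall regardless.
    let yaw = atan2(normal.x, normal.z)
    return (normal, yaw)
}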
1
0
466
Sep ’25
Misaligned visionOS Simulator Home Position
Using Xcode 26 Beta 6 on macOS 26 Beta (25A5349a). When pressing the home button of the visionOS simulator, I am not positioned in the middle of the room as I normally would be. This occurred after moving around the space a lot to find an element added to an ImmersiveSpace. How to resolve: restart the simulator device. See the attached pictures, visionOSSimulatorCorrectHomePosition and visionOSSimulatorMisallignedHomePosition.
2
0
874
Sep ’25
How to apply the same SystemImage to both mainEmitter and spawnedEmitter without clipping in ParticleEmitterComponent?
Hi everyone, I'm currently learning about ParticleEmitterComponent and exploring the sample app provided in the "Simulating particles in your visionOS app" documentation. In the sample app, when I set the EmitterPreset to fireworks from the settings panel on the left side of the window and choose SystemImage, I noticed two issues:
- The image applied to mainEmitter appears clipped or cropped.
- The image on spawnedEmitter does not update to the selected SystemImage.
What I want to achieve:
- Apply the same SystemImage to both mainEmitter and spawnedEmitter so that it displays correctly without clipping.
- Remove the animation that changes the size of spawnedEmitter over time and keep it at a constant size.
Could someone explain which properties should be adjusted to achieve this behavior? Any guidance or examples would be greatly appreciated. Thanks in advance!
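A minimal sketch of the properties I would expect to control this, based on RealityKit's ParticleEmitter API (whether this fixes the preset's clipping is an assumption on my part):

import RealityKit

// Sketch: share one texture between the main and spawned emitters, and keep
// spawned particles at a constant size over their lifespan.
func applyParticleImage(_ texture: TextureResource, to entity: Entity) {
    var emitter = ParticleEmitterComponent.Presets.fireworks
    emitter.mainEmitter.image = texture       // same texture on the main emitter...
    emitter.spawnedEmitter?.image = texture   // ...and on the spawned emitter
    // A multiplier of 1 means "same size at the end of life as at the start",
    // which removes the size-over-time animation.
    emitter.spawnedEmitter?.sizeMultiplierAtEndOfLifespan = 1.0
    entity.components.set(emitter)
}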
0
0
443
Sep ’25
Unable to play .aivu with VideoPlayerComponent
I'm trying to play an Apple Immersive Video in the .aivu format using VideoPlayerComponent, following the official documentation here: https://developer.apple.com/documentation/RealityKit/VideoPlayerComponent

Here is a simplified version of the code I'm running in another application:

import SwiftUI
import RealityKit
import AVFoundation

struct ImmersiveView: View {
    var body: some View {
        RealityView { content in
            let player = AVPlayer(url: Bundle.main.url(forResource: "Apple_Immersive_Video_Beach", withExtension: "aivu")!)
            let videoEntity = Entity()
            var videoPlayerComponent = VideoPlayerComponent(avPlayer: player)
            videoPlayerComponent.desiredImmersiveViewingMode = .full
            videoPlayerComponent.desiredViewingMode = .stereo
            player.play()
            videoEntity.components.set(videoPlayerComponent)
            content.add(videoEntity)
        }
    }
}

Full code is here: https://github.com/tomkrikorian/AIVU-VideoPlayerComponentIssueSample

But the video does not play in my project, even though the file is correct (it can be played in Apple Immersive Video Utility), and I get these errors when the app crashes:

App VideoPlayer+Component Caption: onComponentDidUpdate Media Type is invalid
Domain=SpatialAudioServicesErrorDomain Code=2020631397 "xpc error" UserInfo={NSLocalizedDescription=xpc error}
CA_UISoundClient.cpp:436 Got error -4 attempting to SetIntendedSpatialAudioExperience
[0x101257490|InputElement #0|Initialize] Number of channels = 0 in AudioChannelLayout does not match number of channels = 2 in stream format.

The video I'm using is the official sample, which can be found here: https://developer.apple.com/documentation/immersivemediasupport/authoring-apple-immersive-video. I also tried several different files shot by my clients, and the same errors appear, so the issue is definitely not the files but something on the RealityKit side.

Steps to reproduce the issue:
- Open the AIVUPlayerSample project and run, then look at the logs. All code can be found in ImmersiveView.swift; the sample file is included in the project.

Expected results: having followed the documentation and samples provided, I should see my video played in full immersive mode inside my ImmersiveSpace. Am I doing something wrong in the code? I'm basically following the documentation here. Feedback ticket: FB19971306
3
0
778
Aug ’25
Looking for solutions to build a video chat app (Omegle/Chatroulette style) on Vision Pro
Hi everyone, We are working on a prototype app for Apple Vision Pro that is similar in functionality to Omegle or Chatroulette, but exclusively for Vision Pro owners. The core idea is:
– a matching system where one user connects to another through a virtual persona;
– real-time video and audio transmission;
– time limits for sessions, with the ability to extend them;
– users can skip a match and move on to the next one.
We have explored WebRTC and Twilio, but unfortunately they don't fit our use case. Question: what alternative services or SDKs are available for implementing real-time video/audio communication on Vision Pro that would work with this scenario? Has anyone encountered a similar challenge and can recommend which technologies or tools to use? Thanks in advance!
2
0
253
Aug ’25
Unity PolySpatial
Hello, We are experiencing an issue with apparent lag or latency when interacting with objects in Unity using the XR Interaction Toolkit. Even when interactables are configured with the Movement Type set to Instantaneous, objects exhibit a noticeable delay when following hand movement. This issue is particularly apparent when the hand moves at higher speeds. We would like to know: is any additional configuration required, or is there something else we need to do to resolve this? Thank you very much for your assistance.
2
0
448
Aug ’25
visionOS: WKWebView stops rendering after WindowGroup is closed
I'm building a visionOS app where users can place canvases into the 3D environment. These canvases are RealityKit entities that render web content using a WKWebView. The web view is regularly snapshotted, and its image is applied as a texture on the canvas surface. When the user taps a canvas, a WindowGroup is opened that displays the same shared WKWebView instance. This works great: both the canvas in 3D and the WindowGroup reflect the same live web content. The canvas owns the WKWebView and keeps a strong reference to it.

Problem: once the user closes the WindowGroup, the 3D canvas stops receiving snapshot updates. The snapshot loop task is still running (verified via logs), but WKWebView.takeSnapshot(...) never returns; the continuation hangs forever. No error is thrown, no image is returned. If the user taps the canvas again (reopening the window), snapshots resume and everything works again.

@MainActor
private func getSnapshot() async -> UIImage? {
    guard !webView.isLoading else { return nil }
    let config = WKSnapshotConfiguration()
    config.rect = webView.bounds
    config.afterScreenUpdates = true
    return await withCheckedContinuation { continuation in
        webView.takeSnapshot(with: config) { image, _ in
            continuation.resume(returning: image)
        }
    }
}

What I've already verified:
- The WKWebView is still in memory.
- The snapshot loop (Task) is still running; it just gets stuck inside takeSnapshot(...).
- I tried keeping the WKWebView inside a hidden UIWindow (with .alpha = 0.01, .windowLevel = .alert + 1, and isHidden = false).

Suspicion: it seems that once a WKWebView is passed into SwiftUI and rendered (e.g., via UIViewRepresentable), SwiftUI takes full control of its lifecycle. SwiftUI tells WebKit "this view got closed," and WebKit reacts by stopping the WKWebView, even though it still exists in memory and is held by a strong reference in the canvas entity. Right now this feels like a one-way path: starting from the canvas, tapping it to open the WindowGroup works fine, but once the WindowGroup is closed, the WebView freezes and snapshots stop until the view is reopened.

Questions: is there any way under visionOS to:
- Keep a WKWebView rendering after its SwiftUI view (WindowGroup) is dismissed?
- Prevent WebKit from suspending or freezing the view?
- Embed the view in a persistent system-rendered context to keep snapshotting functional?

For context, here's the relevant SwiftUI setup:

struct MyWebView: View {
    var body: some View {
        if let webView = WebViewManager.shared.getWebView() {
            WebViewContainer(webView: webView)
        }
    }
}

struct WebViewContainer: UIViewRepresentable {
    let webView: WKWebView

    func makeUIView(context: Context) -> UIView {
        return webView
    }

    func updateUIView(_ uiView: UIView, context: Context) {}
}
1
0
161
Aug ’25
Scanning for iBeacons on Vision Pro
I'm trying to scan for beacons in a visionOS app. iBeacon detection via CoreLocation (CLBeacon or CLBeaconRegion) isn't supported on visionOS, so I went with CoreBluetooth plus manual beacon parsing. The problem is that my beacons are never discovered; I only see the other Bluetooth devices:

central.scanForPeripherals(withServices: nil, options: [CBCentralManagerScanOptionAllowDuplicatesKey: true])

What should I do?
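For reference, a minimal sketch of the "manual beacon parsing" approach described above (the helper is mine, not an Apple API). One caveat worth noting: Apple platforms are known to withhold iBeacon manufacturer frames from CoreBluetooth scan callbacks, which would explain seeing every Bluetooth device except the beacons.

import CoreBluetooth

// Sketch: decode an iBeacon frame from a scan's advertisement dictionary,
// as delivered to centralManager(_:didDiscover:advertisementData:rssi:).
func parseIBeacon(_ advertisementData: [String: Any]) -> (uuid: UUID, major: UInt16, minor: UInt16, txPower: Int8)? {
    guard let data = advertisementData[CBAdvertisementDataManufacturerDataKey] as? Data,
          data.count >= 25,
          data[0] == 0x4C, data[1] == 0x00,  // Apple's company identifier (little-endian)
          data[2] == 0x02, data[3] == 0x15   // iBeacon type (0x02) and payload length (0x15)
    else { return nil }
    let uuid = data.subdata(in: 4..<20).withUnsafeBytes { UUID(uuid: $0.load(as: uuid_t.self)) }
    let major = UInt16(data[20]) << 8 | UInt16(data[21])   // big-endian
    let minor = UInt16(data[22]) << 8 | UInt16(data[23])
    let txPower = Int8(bitPattern: data[24])               // calibrated RSSI at 1 m
    return (uuid, major, minor, txPower)
}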
1
0
95
Aug ’25
Perspective problem
Hi, I called it a "perspective problem," but I'm not quite sure what it is. I have a tag that I track with the built-in camera. I calculate its pose, then use the extrinsics and the device anchor to calculate where to place an entity with a model. When I place an entity so that it overlaps a physical object and start to look at it from different angles, the virtual object begins to move. Initially I thought something was wrong with my calculations, or that image distortion closer to the camera edges was affecting tag detection. To check, I calculated the position only once and displayed the entity there; the tracked physical object is not moving. Now, when I move my head so the object is more to the left or right in my field of view, the virtual object becomes misaligned to the left or right. It feels like a parallax effect, but the distances from me to the entity and to the physical object are exactly the same. Is that expected, perhaps because of some passthrough-correction magic? And if so, can I somehow correct for it so the entity always overlaps the object? I'm currently on v26 beta 5. I also don't quite understand the camera extrinsics, because it seems I need to flip them around X by 180 degrees to make deviceAnchor * extrinsics.inverse * tag work (shouldn't they be in the same coordinates as all other RealityKit things?).
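For reference, a minimal sketch of the pose chain described above (helper and parameter names are mine; the direction of the extrinsics is an assumption):

import simd

// Sketch: compose the tag's world pose from the chain
// worldFromTag = worldFromDevice * deviceFromCamera * cameraFromTag,
// where the extrinsics are assumed to map device space into camera space
// (hence the inverse). A needed 180-degree flip about X would suggest the
// extrinsics use a different camera-axis convention (e.g. +Y down vs. +Y up).
func tagInWorld(worldFromDevice: simd_float4x4,   // from the DeviceAnchor
                cameraFromDevice: simd_float4x4,  // camera extrinsics (assumed direction)
                cameraFromTag: simd_float4x4) -> simd_float4x4 {
    return worldFromDevice * cameraFromDevice.inverse * cameraFromTag
}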
3
0
225
Aug ’25
Unexpected behavior when writing entities and loading .reality files
I have a simple visionOS app that creates an Entity, writes it to the device, and then attempts to load it. However, once the entity file gets overwritten, the app can no longer load it correctly.

Here is my code for saving the entity:

import SwiftUI
import RealityKit
import UniformTypeIdentifiers

struct ContentView: View {
    var body: some View {
        VStack {
            ToggleImmersiveSpaceButton()
            Button("Save Entity") {
                Task {
                    // if let entity = await buildEntityHierarchy(from: urdfPath) {
                    let type = UTType.realityFile
                    let filename = "testing.\(type.preferredFilenameExtension ?? "bin")"
                    let documentsURL = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
                    let fileURL = documentsURL.appendingPathComponent(filename)
                    do {
                        let mesh = MeshResource.generateBox(size: 1, cornerRadius: 0.05)
                        let material = SimpleMaterial(color: .blue, isMetallic: true)
                        let modelComponent = ModelComponent(mesh: mesh, materials: [material])
                        let entity = Entity()
                        entity.components.set(modelComponent)
                        print("Writing \(fileURL)")
                        try await entity.write(to: fileURL)
                    } catch {
                        print("Failed writing")
                    }
                }
            }
        }
        .padding()
    }
}

Every time I press "Save Entity", I see a warning similar to:

Writing file:///var/mobile/Containers/Data/Application/1140E7D6-D365-48A4-8BED-17BEA34E3F1E/Documents/testing.reality
Failed to set dependencies on asset 1941054755064863441 because NetworkAssetManager does not have an asset entity for that id.

When I open the immersive space, I attempt to load the same file:

import SwiftUI
import RealityKit
import UniformTypeIdentifiers

struct ImmersiveView: View {
    @Environment(AppModel.self) private var appModel

    var body: some View {
        RealityView { content in
            guard let type = UTType.realityFile.preferredFilenameExtension else { return }
            let documentsURL = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
            let fileURL = documentsURL.appendingPathComponent("testing.\(type)")
            guard FileManager.default.fileExists(atPath: fileURL.path) else {
                print("❌ File does not exist at path: \(fileURL.path)")
                return
            }
            if let entity = try? await Entity(contentsOf: fileURL) {
                content.add(entity)
            }
        }
    }
}

I also get errors after I overwrite the entity (by pressing "Save Entity" after I have successfully loaded it once). The warnings that appear when the immersive space attempts to load the new entity are:

Asset 13277375032756336327 Mesh (RealityFileAsset)URL/file:///var/mobile/Containers/Data/Application/1140E7D6-D365-48A4-8BED-17BEA34E3F1E/Documents/testing.reality/Mesh_0.compiledmesh failure: Asset provider load failed: type 'RealityFileAsset' -- RERealityArchive: Failed to open load stream for entry 'assets/Mesh_0.compiledmesh'.
Asset 8308977590385781534 Scene (RealityFileAsset)URL/file:///var/mobile/Containers/Data/Application/1140E7D6-D365-48A4-8BED-17BEA34E3F1E/Documents/testing.reality/Scene_0.compiledscene failure: Asset provider load failed: type 'RealityFileAsset' -- RERealityArchive: Failed to read archive entry.
AssetLoadRequest failed because asset failed to load '13277375032756336327 Mesh (RealityFileAsset)URL/file:///var/mobile/Containers/Data/Application/1140E7D6-D365-48A4-8BED-17BEA34E3F1E/Documents/testing.reality/Mesh_0.compiledmesh' (Asset provider load failed: type 'RealityFileAsset' -- RERealityArchive: Failed to open load stream for entry 'assets/Mesh_0.compiledmesh'.)

The order of operations to make this happen:
1. Launch app
2. Press "Save Entity" to save the entity
3. "Open Immersive Space" to view entity
4. Press "Save Entity" to overwrite the entity
5. "Open Immersive Space" to view entity; the asset load request fails

Also:
1. Launch app; the entity should still be saved from the last time the app ran
2. "Open Immersive Space" to view entity
3. Press "Save Entity" to overwrite the entity
4. "Open Immersive Space" to view entity; the asset load request fails

NOTE: It appears I can get it to work slightly better by pressing the "Save Entity" button twice before attempting to view it again in the immersive space.
0
0
165
Aug ’25
How to Obtain License File for Main Camera Access Entitlement in visionOS (Email Was Deactivated During Approval)
Hi everyone, I'm developing a visionOS application using Unity with an enterprise developer account. I applied for the Main Camera Access entitlement, but at the time of submission, the email address associated with my Apple ID was deactivated, so I couldn't receive any email communication from Apple. Later, I updated the email address for my Apple ID. Now, in the Apple Developer portal under Identifiers, I can see that my app has been granted Main Camera Access, and I can also add the corresponding capability in Xcode. However, according to Apple's documentation (https://developer.apple.com/documentation/visionos/building-spatial-experiences-for-business-apps-with-enterprise-apis): "To use entitlements, you need to include both the entitlement file and a corresponding license file in your app. After Apple approves your app for one or more entitlements, you receive a license file, along with additional instructions." I never received this license file, possibly due to the deactivated email, and I don't know where to find it or how to retrieve it now. My questions:
- What exactly is this license file?
- If it was originally sent to an unreachable email, how can I request it again or have it resent?
- Where in the Apple Developer portal (or elsewhere) can I access or download this file?
Any help or guidance would be greatly appreciated! Thanks in advance.
2
0
101
Aug ’25
Unexpected lines appear in PDFView on visionOS 26 beta
I'm attempting to display a PDF file in a visionOS application using the PDFView in PDFKit. When running on a device with visionOS 26, a horizontal solid line appears on some pages, while on other pages both a horizontal and a vertical solid line appear. These lines do not appear:
- in the Xcode preview canvas (macOS, visionOS)
- on a device running visionOS 2.5
- on a Mac running macOS 15.6
I thought these could be the page breaks, but setting displaysPageBreaks = false did not appear to be effective. Are there any other settings that could be causing the lines to display?

Code example:

struct ContentView: View {
    @State var pdf: PDFDocument? = nil

    var body: some View {
        PDFViewWrapper(pdf: pdf)
            .padding()
    }
}

#Preview(windowStyle: .automatic) {
    ContentView(pdf: PDFDocument(url: Bundle.main.url(forResource: "SampleApple", withExtension: "pdf")!))
        .environment(AppModel())
}

struct PDFViewWrapper: UIViewRepresentable {
    let pdf: PDFDocument?

    func makeUIView(context: Context) -> PDFView {
        let view = PDFView()
        view.document = pdf
        view.displaysPageBreaks = false
        return view
    }

    func updateUIView(_ uiView: PDFView, context: Context) {
        uiView.document = pdf
    }
}

Tested with:
- Xcode Version 16.4 (16F6)
- Xcode Version 26.0 beta 5 (17A5295f)
- visionOS 2.5
- visionOS 26 Beta 5
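One more setting that might be worth ruling out (an assumption on my part, not a confirmed fix): PDFView also draws page shadows, whose edge shading between pages can render like thin solid lines.

import PDFKit

// Sketch: disable page shadows in addition to page breaks, to rule them out
// as the source of the stray lines.
func makePDFView(document: PDFDocument?) -> PDFView {
    let view = PDFView()
    view.document = document
    view.displaysPageBreaks = false
    view.pageShadowsEnabled = false   // PDFKit's page-shadow toggle
    return view
}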
2
0
96
Aug ’25