visionOS

Discuss developing for spatial computing and Apple Vision Pro.

Posts under visionOS tag

200 Posts

white gap between objects in RealityView
I want to display a huge image in 3D space in a RealityView on Vision Pro. Instead of one giant file, I'm using a lot of big images: to achieve this, I generate multiple planes exactly beside each other and put one image on each. Although the planes are exactly adjacent, there is still a white gap between them (image below). Does anybody know how to fix this issue?
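
A minimal sketch of the tiling approach described above, assuming pre-sliced tile textures. Two common seam mitigations are shown as assumptions, not confirmed fixes: clamp-to-edge sampling (so tile borders don't bleed) and a tiny geometric overlap; the helper name and parameters are hypothetical.

    import Metal
    import RealityKit

    // Hypothetical helper: lays out pre-sliced image tiles as adjacent unlit planes.
    func makeTiledWall(tileTextures: [[TextureResource]], tileSize: Float = 0.5) -> Entity {
        let wall = Entity()
        let overlap: Float = 0.0005 // slight overlap so float rounding can't open a gap

        // Clamp sampling at the texture borders instead of wrapping.
        let samplerDescriptor = MTLSamplerDescriptor()
        samplerDescriptor.sAddressMode = .clampToEdge
        samplerDescriptor.tAddressMode = .clampToEdge

        for (row, rowTextures) in tileTextures.enumerated() {
            for (column, texture) in rowTextures.enumerated() {
                var material = UnlitMaterial()
                material.color = .init(texture: .init(texture, sampler: .init(samplerDescriptor)))

                let plane = ModelEntity(
                    mesh: .generatePlane(width: tileSize + overlap, height: tileSize + overlap),
                    materials: [material])
                // Tiles sit exactly one tileSize apart; the extra width/height overlaps neighbors.
                plane.position = [Float(column) * tileSize, Float(row) * tileSize, 0]
                wall.addChild(plane)
            }
        }
        return wall
    }
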
Replies: 0 · Boosts: 0 · Views: 112 · Last activity: May ’25

visionOS NavigationSplitView - Refreshable ProgressView Disappears
Description: I've encountered an issue with NavigationSplitView on visionOS when using a refreshable ScrollView or List in the detail view.

The problem: when implementing pull-to-refresh in the detail view of a NavigationSplitView, the ProgressView disappears and generates this warning:

Trying to convert coordinates between views that are in different UIWindows, which isn't supported. Use convertPoint:fromCoordinateSpace: instead.

I discovered that if the detail view includes a .navigationTitle(), the ProgressView remains visible and works correctly! Below is a minimal reproducible example showing this behavior. When you run this code, you'll notice:

The sidebar refreshable works fine.
The detail refreshable works only when .navigationTitle("Something") is present.
Remove the navigationTitle and the detail view's refresh indicator disappears.

Minimal demo:

    import SwiftUI

    struct MinimalRefreshableDemo: View {
        @State private var items = ["Item 1", "Item 2", "Item 3"]
        @State private var detailItems = ["Detail 1", "Detail 2", "Detail 3"]
        @State private var selectedItem: String? = "Item 1"

        var body: some View {
            NavigationSplitView {
                List(items, id: \.self, selection: $selectedItem) { item in
                    Text(item)
                }
                .refreshable {
                    items = ["Item 1", "Item 2", "Item 3"]
                }
                .navigationTitle("Chat")
            } detail: {
                List {
                    ForEach(detailItems, id: \.self) { item in
                        Text(item)
                            .frame(height: 100)
                            .frame(maxWidth: .infinity)
                    }
                }
                .refreshable {
                    detailItems = ["Detail 1", "Detail 2", "Detail 3"]
                }
                .navigationTitle("Something")
            }
        }
    }

    #Preview {
        MinimalRefreshableDemo()
    }

Is this expected behavior? Has anyone else encountered this issue or found a solution that doesn't require adding a navigation title?
Replies: 1 · Boosts: 0 · Views: 61 · Last activity: May ’25

How to use CharacterControllerComponent.
I am trying to implement a CharacterControllerComponent following this documentation page: https://developer.apple.com/documentation/realitykit/charactercontrollercomponent

I have written sample code, but PhysicsSimulationEvents.WillSimulate is not executed and nothing happens.

    import SwiftUI
    import RealityKit
    import RealityKitContent

    struct ImmersiveView: View {
        let gravity: SIMD3<Float> = [0, -50, 0]
        let jumpSpeed: Float = 10

        enum PlayerInput {
            case none, jump
        }

        @State private var testCharacter: Entity = Entity()
        @State private var myPlayerInput = PlayerInput.none

        var body: some View {
            RealityView { content in
                // Add the initial RealityKit content
                if let immersiveContentEntity = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
                    content.add(immersiveContentEntity)
                    testCharacter = immersiveContentEntity.findEntity(named: "Capsule")!
                    testCharacter.components.set(CharacterControllerComponent())
                    let _ = content.subscribe(to: PhysicsSimulationEvents.WillSimulate.self, on: testCharacter) { event in
                        print("subscribe run")
                        let deltaTime: Float = Float(event.deltaTime)
                        var velocity: SIMD3<Float> = .zero
                        var isOnGround: Bool = false
                        // RealityKit automatically adds `CharacterControllerStateComponent` after moving the character for the first time.
                        if let ccState = testCharacter.components[CharacterControllerStateComponent.self] {
                            velocity = ccState.velocity
                            isOnGround = ccState.isOnGround
                        }
                        if !isOnGround {
                            // Gravity is a force, so you need to accumulate it for each frame.
                            velocity += gravity * deltaTime
                        } else if myPlayerInput == .jump {
                            // Set the character's velocity directly to launch it in the air when the player jumps.
                            velocity.y = jumpSpeed
                        }
                        testCharacter.moveCharacter(by: velocity * deltaTime, deltaTime: deltaTime, relativeTo: nil) { event in
                            print("playerEntity collided with \(event.hitEntity.name)")
                        }
                    }
                }
            }
        }
    }

The scene is loaded from Reality Composer Pro. It is simple: just a capsule on a pedestal. Do I need separate code to run testCharacter from this state?
Replies: 0 · Boosts: 0 · Views: 104 · Last activity: May ’25

SFSpeechRecognizer is not working inside visionOS 2.4 simulator
I know there have been issues with SFSpeechRecognizer in iOS 17+ inside the simulator. I'm running into issues with speech not being recognized inside the visionOS 2.4 simulator as well (likely because it borrows from iOS frameworks). Just wondering if anyone has any workarounds or advice for this simulator issue. I can't test on device because I don't have an Apple Vision Pro. Using Swift 6 on Xcode 16.3. Below are the console logs and the code that I am using.

Console logs:

    BACKGROUND SPATIAL TAP (hit BackgroundTapPlane)
    SpeechToTextManager.startRecording() called
    [0x15388a900|InputElement #0|Initialize] Number of channels = 0 in AudioChannelLayout does not match number of channels = 2 in stream format.
    iOSSimulatorAudioDevice-22270-1: Abandoning I/O cycle because reconfig pending
    iOSSimulatorAudioDevice-22270-1: Abandoning I/O cycle because reconfig pending
    iOSSimulatorAudioDevice-22270-1: Abandoning I/O cycle because reconfig pending
    iOSSimulatorAudioDevice-22270-1: Abandoning I/O cycle because reconfig pending
    iOSSimulatorAudioDevice-22270-1: Abandoning I/O cycle because reconfig pending
    iOSSimulatorAudioDevice-22270-1: Abandoning I/O cycle because reconfig pending
    SpeechToTextManager.startRecording() completed successfully and recording is active.
    GameManager.onTapToggle received. speechToTextManager.isAvailable: true, speechToTextManager.isRecording: true
    GameManager received tap toggle callback. Tapped Object: None
    BACKGROUND SPATIAL TAP (hit BackgroundTapPlane)
    GESTURE MANAGER - User is already recording, stopping recording
    SpeechToTextManager.stopRecording() called
    GameManager.onTapToggle received. speechToTextManager.isAvailable: true, speechToTextManager.isRecording: false
    Audio data size: 134400 bytes
    Recognition task error: No speech detected <---

Code:

    private(set) var isRecording: Bool = false
    private var recognitionRequest: SFSpeechAudioBufferRecognitionRequest?
    private var recognitionTask: SFSpeechRecognitionTask?

    @MainActor
    func startRecording() async throws {
        logger.debug("SpeechToTextManager.startRecording() called")
        guard !isRecording else {
            logger.warning("Cannot start recording: Already recording.")
            throw AppError.alreadyRecording
        }

        currentTranscript = ""
        processingError = nil
        audioBuffer = Data()
        isRecording = true

        do {
            try await configureAudioSession()

            try await Task.detached { [weak self] in
                guard let self = self else {
                    throw AppError.internalError(description: "SpeechToTextManager instance deallocated during recording setup.")
                }

                try await self.audioProcessor.configureAudioEngine()

                let (recognizer, request) = try await MainActor.run { () -> (SFSpeechRecognizer, SFSpeechAudioBufferRecognitionRequest) in
                    guard let result = self.createRecognitionRequest() else {
                        throw AppError.configurationError(description: "Speech recognition not available or SFSpeechRecognizer initialization failed.")
                    }
                    return result
                }

                await MainActor.run {
                    self.recognitionRequest = request
                }

                await MainActor.run {
                    self.recognitionTask = recognizer.recognitionTask(with: request) { [weak self] result, error in
                        guard let self = self else { return }
                        if let error = error {
                            // WE ENTER INTO THIS BLOCK, ALWAYS
                            self.logger.error("Recognition task error: \(error.localizedDescription)")
                            self.processingError = .speechRecognitionError(description: error.localizedDescription)
                            return
                        }
                        . . .
                    }
                }
                . . .
            }.value
        } catch {
            . . .
        }
    }

    @MainActor
    func stopRecording() {
        logger.debug("SpeechToTextManager.stopRecording() called")
        guard isRecording else {
            logger.debug("Not recording, nothing to do")
            return
        }

        isRecording = false

        Task.detached { [weak self] in
            guard let self = self else { return }
            await self.audioProcessor.stopEngine()
            let finalBuffer = await self.audioProcessor.getAudioBuffer()
            await MainActor.run {
                self.recognitionRequest?.endAudio()
                self.recognitionTask?.cancel()
            }
            . . .
        }
    }
Replies: 0 · Boosts: 0 · Views: 96 · Last activity: May ’25

Granting Microphone Access through in-app authorisation request on the visionOS 2.4 Simulator causes it to crash
When requesting authorisation to gain access to the user's microphone in the visionOS 2.4 simulator, the simulator crashes upon the user clicking allow. Using Swift 6, code below. Bug Report: FB17667361

    @MainActor
    private func checkMicrophonePermission() async {
        logger.debug("Checking microphone permissions...")

        // Get the current permission status
        let micAuthStatus = AVAudioApplication.shared.recordPermission
        logger.debug("Current Microphone permission status: \(micAuthStatus.rawValue)")

        if micAuthStatus == .undetermined {
            logger.info("Requesting microphone authorization...")

            // Use structured concurrency to wait for permission result
            let granted = await withCheckedContinuation { continuation in
                AVAudioApplication.requestRecordPermission() { allowed in
                    continuation.resume(returning: allowed)
                }
            }
            logger.debug("Received microphone permission result: \(granted)")

            // Convert to SFSpeechRecognizerAuthorizationStatus for consistency
            let status: SFSpeechRecognizerAuthorizationStatus = granted ? .authorized : .denied

            // Handle the authorization status
            handleAuthorizationStatus(status: status, type: "Microphone")

            // If granted, configure audio session
            if granted {
                do {
                    try await configureAudioSession()
                } catch {
                    logger.error("Failed to configure audio session after microphone authorization: \(error.localizedDescription)")
                    self.isAvailable = false
                    self.processingError = .audioSessionError(description: "Failed to configure audio session after microphone authorization")
                }
            }
        } else {
            // Convert to SFSpeechRecognizerAuthorizationStatus for consistency
            let status: SFSpeechRecognizerAuthorizationStatus = (micAuthStatus == .granted) ? .authorized : .denied
            handleAuthorizationStatus(status: status, type: "Microphone")

            // If already granted, configure audio session
            if micAuthStatus == .granted {
                do {
                    try await configureAudioSession()
                } catch {
                    logger.error("Failed to configure audio session for existing microphone authorization: \(error.localizedDescription)")
                    self.isAvailable = false
                    self.processingError = .audioSessionError(description: "Failed to configure audio session for existing microphone authorization")
                }
            }
        }
    }
Replies: 1 · Boosts: 1 · Views: 92 · Last activity: May ’25

Requesting user Permission for Speech Framework crashes visionOS simulator
When a new application runs on the visionOS 2.4 simulator and tries to access the Speech framework, prompting a request for authorisation to use speech recognition, the application freezes. Using Swift 6. Report Identifier: FB17666252

    @MainActor
    func checkAvailabilityAndPermissions() async {
        logger.debug("Checking speech recognition availability and permissions...")

        // 1. Verify that the speechRecognizer instance exists
        guard let recognizer = speechRecognizer else {
            logger.error("Speech recognizer is nil - speech recognition won't be available.")
            reportError(.configurationError(description: "Speech recognizer could not be created."), context: "checkAvailabilityAndPermissions")
            self.isAvailable = false
            return
        }

        // 2. Check recognizer availability (might change at runtime)
        if !recognizer.isAvailable {
            logger.error("Speech recognizer is not available for the current locale.")
            reportError(.configurationError(description: "Speech recognizer not available."), context: "checkAvailabilityAndPermissions")
            self.isAvailable = false
            return
        }
        logger.trace("Speech recognizer exists and is available.")

        // 3. Request Speech Recognition Authorization
        // IMPORTANT: Add `NSSpeechRecognitionUsageDescription` to Info.plist
        let speechAuthStatus = SFSpeechRecognizer.authorizationStatus() // FAILS HERE
        logger.debug("Current Speech Recognition authorization status: \(speechAuthStatus.rawValue)")

        if speechAuthStatus == .notDetermined {
            logger.info("Requesting speech recognition authorization...")

            // Use structured concurrency to wait for permission result
            let authStatus = await withCheckedContinuation { continuation in
                SFSpeechRecognizer.requestAuthorization { status in
                    continuation.resume(returning: status)
                }
            }
            logger.debug("Received authorization status: \(authStatus.rawValue)")

            // Now handle the authorization result
            let speechAuthorized = (authStatus == .authorized)
            handleAuthorizationStatus(status: authStatus, type: "Speech Recognition")

            // If speech is granted, now check microphone
            if speechAuthorized {
                await checkMicrophonePermission()
            }
        } else {
            let speechAuthorized = (speechAuthStatus == .authorized)
            handleAuthorizationStatus(status: speechAuthStatus, type: "Speech Recognition")

            // If speech is already authorized, check microphone
            if speechAuthorized {
                await checkMicrophonePermission()
            }
        }
    }
Replies: 1 · Boosts: 0 · Views: 83 · Last activity: May ’25

Symbol not found: _$sSo22CLLocationCoordinate2DVSE12CoreLocationMc when building for visionOS 2.5 with Xcode 16.3
Hello, I'm encountering a runtime crash when building my visionOS app with Xcode 16.3 for visionOS 2.5. Our existing App Store/TestFlight app is also instantly crashing on visionOS 2.5 when opened, but works fine on, e.g., visionOS 2.4. The app builds successfully but crashes on launch with this symbol lookup error (slightly adjusted because the forum complained about sensitive data):

    Symbol not found: _$sSo22CLLocationCoordinate2DVSE12CoreLocationMc
    Referenced from: <XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX> /private/var/containers/Bundle/Application/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX/MyApp.app/MyApp.debug.dylib
    Expected in: <XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX> /usr/lib/swift/libswiftCoreLocation.dylib
    dyld config: DYLD_LIBRARY_PATH=/usr/lib/system/introspection DYLD_INSERT_LIBRARIES=/usr/lib/libLogRedirect.dylib:/usr/lib/libBacktraceRecording.dylib:/usr/lib/libMainThreadChecker.dylib:/System/Library/PrivateFrameworks/GPUToolsCapture.framework/GPUToolsCapture:/usr/lib/libViewDebuggerSupport.dylib

I've already implemented my own Codable conformance for CLLocationCoordinate2D:

    extension CLLocationCoordinate2D: Codable {
        // implementation details...
    }

This worked fine on previous visionOS/Xcode versions. Has anyone encountered this issue or found a solution?

System details:
macOS version: 15.3.2
Xcode version: 16.3
visionOS target: 2.5

Thank you!
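
The missing symbol appears to name a Codable-related conformance for CLLocationCoordinate2D in the CoreLocation overlay, which suggests the retroactive conformance above may now collide with one the SDK provides itself. A minimal sketch of one commonly suggested workaround, under that assumption (the wrapper type below is hypothetical, not a confirmed fix for this report): wrap the coordinate in your own Codable type instead of extending CLLocationCoordinate2D.

    import CoreLocation

    // Hypothetical wrapper: owns its Codable conformance outright, so it
    // cannot collide with a conformance added later by the SDK.
    struct Coordinate: Codable {
        var latitude: CLLocationDegrees
        var longitude: CLLocationDegrees

        init(_ coordinate: CLLocationCoordinate2D) {
            latitude = coordinate.latitude
            longitude = coordinate.longitude
        }

        // Convert back when CoreLocation APIs need the original type.
        var clCoordinate: CLLocationCoordinate2D {
            CLLocationCoordinate2D(latitude: latitude, longitude: longitude)
        }
    }
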
Replies: 2 · Boosts: 0 · Views: 80 · Last activity: May ’25

App Stuck In Review For 8 Days, Repeated Board Escalations Severely Impacting Release Cycle
Dear App Review Team,

I am writing to express deep concern over the repeated and prolonged delays the app is facing during the App Review process. Once again, a recent build, submitted on May 12, has been stuck in the "In Review" state for over a week. I have been informed that, as has become routine, the submission has been escalated to the App Review Board without any clear timeline or communication.

This is not an isolated incident. Nearly every feature update submitted is referred to the board, resulting in unpredictable, weeks-long delays that severely disrupt the development and release cycle. These escalations consistently happen without clear reasoning, and without any proactive outreach or follow-up from the review team.

The app is currently one of the highest-performing indie titles on the visionOS App Store, and I am committed to delivering the highest quality experience for users on Apple platforms. However, the current review process is making it increasingly difficult to operate responsibly or efficiently. The lack of transparency and responsiveness is not only frustrating, it is actively harming product stability, user trust, and overall business health.

I am requesting immediate attention and action on this matter. Specifically:

Clear communication on the status of the current review.
An explanation as to why the updates are repeatedly escalated to the board.
A path toward a more predictable and professional review process moving forward.

I am fully committed to maintaining a positive and productive relationship with Apple, but the current pattern is unsustainable.

App ID: 6737148404
Replies: 1 · Boosts: 0 · Views: 101 · Last activity: May ’25

Using a WKWebView inside RealityView attachment causes crashes.
I have an attachment anchored to the user's head, and I put a WKWebView in the attachment. When I try to interact with the web view, the app crashes with the following errors:

    *** Assertion failure in -[UIGestureGraphEdge initWithLabel:sourceNode:targetNode:directed:], UIGestureGraphEdge.m:28
    *** Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: 'Invalid parameter not satisfying: targetNode'
    *** First throw call stack:
    (0x18e529340 0x185845e80 0x192c2283c 0x2433874d4 0x243382ebc 0x2433969a8 0x24339635c 0x243396088 0x243907760 0x2438e4c94 0x24397b488 0x24397e28c 0x243976a20 0x242d7fdc0 0x2437e6e88 0x2437e6254 0x18e4922ec 0x18e492230 0x18e49196c 0x18e48bf3c 0x18e48b798 0x1d3156090 0x2438c8530 0x2438cd240 0x19fde0d58 0x19fde0a64 0x19fa5890c 0x10503b0bc 0x10503b230 0x2572247b8)
    libc++abi: terminating due to uncaught exception of type NSException
    *** Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: 'Invalid parameter not satisfying: targetNode'
    *** First throw call stack:
    (0x18e529340 0x185845e80 0x192c2283c 0x2433874d4 0x243382ebc 0x2433969a8 0x24339635c 0x243396088 0x243907760 0x2438e4c94 0x24397b488 0x24397e28c 0x243976a20 0x242d7fdc0 0x2437e6e88 0x2437e6254 0x18e4922ec 0x18e492230 0x18e49196c 0x18e48bf3c 0x18e48b798 0x1d3156090 0x2438c8530 0x2438cd240 0x19fde0d58 0x19fde0a64 0x19fa5890c 0x10503b0bc 0x10503b230 0x2572247b8)
    terminating due to uncaught exception of type NSException
    Message from debugger: killed

This is the code for the RealityView:

    struct ImmersiveView: View {
        @Environment(AppModel.self) private var appModel

        var body: some View {
            RealityView { content, attachments in
                let anchor = AnchorEntity(AnchoringComponent.Target.head)
                if let sceneAttachment = attachments.entity(for: "test") {
                    sceneAttachment.position = SIMD3<Float>(0, 0, -3.5)
                    anchor.addChild(sceneAttachment)
                }
                content.add(anchor)
            } attachments: {
                Attachment(id: "test") {
                    WebViewWrapper(webView: appModel.webViewModel.webView)
                }
            }
        }
    }

This is the AppModel:

    import SwiftUI
    import WebKit

    /// Maintains app-wide state
    @MainActor
    @Observable
    class AppModel {
        let immersiveSpaceID = "ImmersiveSpace"
        enum ImmersiveSpaceState {
            case closed
            case inTransition
            case open
        }
        var immersiveSpaceState = ImmersiveSpaceState.closed

        public let webViewModel = WebViewModel()
    }

    @MainActor
    final class WebViewModel {
        let webView = WKWebView()

        func loadViz(_ addressStr: String) {
            guard let url = URL(string: addressStr) else { return }
            webView.load(URLRequest(url: url))
        }
    }

    struct WebViewWrapper: UIViewRepresentable {
        let webView: WKWebView

        func makeUIView(context: Context) -> WKWebView {
            webView
        }

        func updateUIView(_ uiView: WKWebView, context: Context) { }
    }

And finally the ContentView, where I added a button to load the webpage:

    struct ContentView: View {
        @Environment(AppModel.self) private var appModel

        var body: some View {
            VStack {
                ToggleImmersiveSpaceButton()
                Button("Go") {
                    appModel.webViewModel.loadViz("http://apple.com")
                }
            }
            .padding()
        }
    }
Replies: 1 · Boosts: 0 · Views: 70 · Last activity: May ’25

How to add visual thickness to a glass background view
Hi guys, In visionOS, when using a ZStack decorated with .glassBackgroundEffect(), you can see the 3D glass background from the front, but when viewed from the side, the view appears to have no thickness. However, I noticed that in an app built by Apple, when viewing a glass background view from the side, it appears to have thickness. I tried adding .frame(depth:) to a glass background view, but it appears as two separate layers spaced by the depth value. My question is: Is there a view modifier that adds visual thickness to a glass background view, as shown in the picture? Or, if not, how should I write a custom view modifier to achieve this effect? Thanks!
Replies: 0 · Boosts: 0 · Views: 85 · Last activity: May ’25

Transparent Material Turning into Occlusion Material
I am trying to create an object in immersive space that is partially transparent (~50% opacity). I have implemented this in a few different ways, including creating a model entity and setting its opacity component to 0.5, and creating a custom material with blending set to a transparent opacity of 0.5. Both work partially: they behave as intended in many cases, but seemingly at random they act like occlusion material and block any other immersive content behind them, showing the real world instead.

Some notes:

I am using RealityKit to render the semi-transparent object and an opaque object that is behind it.
I am using visionOS 2.1, and am updating the location of the semi-transparent object often.
Both objects are ModelEntities.

I would appreciate any guidance on how to implement this. Please let me know if there are any other questions.
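
For reference, a minimal sketch of the two setups described above (the sphere and the exact parameter spellings are assumptions for illustration; this reproduces the configuration in question rather than fixing the occlusion behavior):

    import RealityKit
    import UIKit

    // Approach 1 from the post: set the entity's opacity component to 0.5.
    func makeHalfTransparentSphere() -> ModelEntity {
        let sphere = ModelEntity(
            mesh: .generateSphere(radius: 0.1),
            materials: [SimpleMaterial(color: .white, isMetallic: false)])
        sphere.components.set(OpacityComponent(opacity: 0.5))
        return sphere
    }

    // Approach 2 from the post: a custom material with transparent blending at 0.5.
    func makeHalfTransparentMaterial() -> UnlitMaterial {
        var material = UnlitMaterial(color: UIColor(white: 1.0, alpha: 1.0))
        material.blending = .transparent(opacity: .init(floatLiteral: 0.5))
        return material
    }
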
Replies: 3 · Boosts: 0 · Views: 132 · Last activity: May ’25

Obtaining the file URL of a video selected with PhotosPicker
In an Apple Vision Pro project, a picker pops up that should filter for spatial videos only. When the user selects one of the spatial videos, the selection result comes back empty. How can we obtain the video the user selected and get the file's path and URL? The code is as follows:

    PhotosPicker(selection: $selectedItem, matching: .videos) {
        Text("Choose a spatial photo or video")
    }

    func loadTransferable(from imageSelection: PhotosPickerItem) -> Progress {
        return imageSelection.loadTransferable(type: URL.self) { result in
            DispatchQueue.main.async {
                // guard imageSelection == self.imageSelection else { return }
                print("Successfully loaded selection: \(result)")
                switch result {
                case .success(let url?):
                    self.selectSpatialVideoURL = url
                    print("Got video URL: \(url)")
                case .success(nil):
                    break // Handle the success case with an empty value.
                case .failure(let error):
                    print("Spatial video error: \(error)")
                    // Handle the failure case with the provided error.
                }
            }
        }
    }
Replies: 4 · Boosts: 0 · Views: 93 · Last activity: May ’25

Reality Composer Pro 2.0 shader graphs can't be loaded on visionOS 1
Using Reality Composer Pro 2.0, I created a simple shader graph that displays a texture on an unlit surface.

On visionOS 2 beta, I can successfully use ShaderGraphMaterial(named:from:in:) to load that shader graph material and assign it to a model entity. However, on visionOS 1.2 and earlier, either in the Simulator or on the device, ShaderGraphMaterial(named:from:in:) fails, and I see the following logged to the console:

If, using Reality Composer Pro 1.0, I experimentally open the same project and delete and recreate exactly the same nodes above, then ShaderGraphMaterial(named:from:in:) works as expected on visionOS 1.2.

Is it a known issue that Reality Composer Pro 2 can't be used with visionOS 1? Is this intentional behavior? I've submitted feedback as FB14828873, including a sample project and repro steps. If possible, I would appreciate guidance from an Apple engineer, like "This is a known issue for [list of node types]," or "Reality Composer Pro 2 is not supported for visionOS 1 development, please refer to [documentation]," or "We recommend [workaround]." Thank you.
Replies: 7 · Boosts: 0 · Views: 1.4k · Last activity: May ’25

visionOS 2.4 Required - But doesn't warn user?
I have an Apple Vision Pro app that's already been approved and that requires visionOS 2.4 for the Background Assets feature. The project's minimum is set to 2.4, but during TestFlight testing, folks who did not have 2.4 did not receive a warning or anything; they just got a download error and then thought it was a bug. Is there a way to only allow devices with 2.4 to download the app? I'm new to Apple development, but I'd assume this is standard. Thanks!
Replies: 0 · Boosts: 0 · Views: 79 · Last activity: May ’25

Launching a timeline on a specific model via notification
Hello! I’m familiar with the discussion on “Sending messages to the scene”, and I’ve successfully used that code. However, I have several instances of the same model in my scene. Is it possible to make only one specific model respond to a notification? For example, can I pass something like RealityKit.NotificationTrigger.SourceEntity in userInfo or use another method to target just one instance?
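
For reference, a sketch of the notification-trigger pattern from that discussion, extended with the per-entity key mentioned above. Whether RealityKit honors RealityKit.NotificationTrigger.SourceEntity this way is an assumption to verify, not documented behavior:

    import Foundation
    import RealityKit

    // Fires the Reality Composer Pro notification trigger named `identifier`.
    // The Scene and Identifier keys follow the "Sending messages to the scene"
    // pattern; the SourceEntity key is the hypothetical per-instance scoping.
    func triggerTimeline(named identifier: String, on entity: Entity) {
        guard let scene = entity.scene else { return }
        NotificationCenter.default.post(
            name: Notification.Name("RealityKit.NotificationTrigger"),
            object: nil,
            userInfo: [
                "RealityKit.NotificationTrigger.Scene": scene,
                "RealityKit.NotificationTrigger.Identifier": identifier,
                "RealityKit.NotificationTrigger.SourceEntity": entity, // assumption
            ])
    }
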
Replies: 1 · Boosts: 1 · Views: 87 · Last activity: May ’25

Background Download Support for Large Video Files in visionOS App
Hi everyone,

I'm developing a visionOS app that allows users to download large video files (similar to a movie download experience, with each file being around 10 GB). I've successfully implemented the core video download functionality using URLSession, and everything works as expected while the app is active.

Now, I'm looking to support background downloading. Specifically, I want users to be able to start a download and then leave the app (e.g., switch apps or return to the home screen) while the download continues in the background.

Additionally, I'd like to confirm a specific scenario: if the user starts a download, then removes the headset (keeping the device turned on and connected to power), will the download continue in the background? Or does visionOS suspend the app or downloads in this case?

I'm considering using a background URLSessionConfiguration (as done in iOS/macOS) to enable this behavior, but I'm not sure if it behaves the same way on visionOS or if there are special limitations or best practices when handling large downloads on this platform.

Any insights or official guidance would be greatly appreciated! Thanks!
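
For reference, the background-session setup from iOS/macOS carries over at the API level; a minimal sketch with a hypothetical session identifier and delegate (whether visionOS keeps transfers running once the headset is removed is exactly the open question above):

    import Foundation

    // Background URLSession setup as used on iOS; the identifier is illustrative.
    final class DownloadManager: NSObject, URLSessionDownloadDelegate {
        private lazy var session: URLSession = {
            let config = URLSessionConfiguration.background(
                withIdentifier: "com.example.videodownloads") // hypothetical ID
            config.isDiscretionary = false
            config.sessionSendsLaunchEvents = true
            return URLSession(configuration: config, delegate: self, delegateQueue: nil)
        }()

        func startDownload(from url: URL) {
            session.downloadTask(with: url).resume()
        }

        // Called when a (possibly backgrounded) download finishes.
        func urlSession(_ session: URLSession,
                        downloadTask: URLSessionDownloadTask,
                        didFinishDownloadingTo location: URL) {
            // Move the file out of the temporary location before returning.
            print("Downloaded to \(location)")
        }
    }
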
Replies: 1 · Boosts: 0 · Views: 60 · Last activity: May ’25