As in the title: openImmersiveSpace opens the space as expected, but the function never resolves with a Result. The ImmersiveSpace with the given ID opens normally. Here is a workaround I used to make sure it was on the UI thread and scenePhase was active:
```swift
@MainActor func openSpaceWithStateCheck() async {
    if scenePhase == .active {
        Task {
            switch await openImmersiveSpace(id: "RoomCaptureInteraction") {
            case .opened:
                isCapturingImagery = true
            case .error:
                print("!! An error occurred when trying to open the immersive space captureRoomImagery")
            case .userCancelled:
                print("!! The user declined opening immersive space captureRoomImagery")
            @unknown default:
                print("!! unknown default result of opening space")
            }
        }
    } else {
        print("Scene not active, deferring immersive space opening")
    }
}
```
I'm on visionOS 2.4 and SDK 2.2.
I have tried uninstalling the app and rebuilding. Tried simply opening an empty ImmersiveSpace.
The consistency of the ImmersiveSpace opening at least means I can work around it. Even dismissImmersiveSpace works normally and closes the immersive space. But a workaround seems hamfisted.
As seen in the screenshot below, I set up the project in Xcode 16 as a visionOS app at the beginning and later added iOS as an additional platform, and it runs there too.
I use SwiftUI as the framework for the UI layout here. It runs smoothly on visionOS. However, when it comes to iOS, even though I have included modifiers such as .edgesIgnoringSafeArea(.all), the content still shrinks to an odd area of the screen (I also tried to modify the ContentView() window size in the App file and it doesn't work). Does anybody have any ideas for cross-platform UI settings? What should I do to make the UI on iOS display normally (and it looks even weirder when it comes to tab bars)?
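For reference, the kind of root layout I'd expect to fill the screen on both platforms looks roughly like this (a sketch only; ContentView and the tab content are placeholders, and .ignoresSafeArea() is the newer spelling of .edgesIgnoringSafeArea(.all)):

```swift
import SwiftUI

// Minimal sketch: a root view that expands to fill the available space on
// both iOS and visionOS instead of sizing itself to its content.
struct ContentView: View {
    var body: some View {
        TabView {
            Text("Home")
                .frame(maxWidth: .infinity, maxHeight: .infinity) // take all available space
                .background(Color.blue.opacity(0.2))
                .ignoresSafeArea()                                // extend under bars if that's the intent
                .tabItem { Label("Home", systemImage: "house") }
        }
    }
}
```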
Hi, I just want to ask: is it possible to run YOLOv3 on visionOS using the main camera to detect objects and show bounding boxes with labels in real time? I'm wondering whether camera access and custom models work for this, or if there's a better way. Any tips?
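For what it's worth, once you have a camera frame as a CVPixelBuffer, running a converted YOLOv3 Core ML model through Vision looks roughly like this (a sketch; YOLOv3 here is the class Xcode generates from a .mlmodel you add yourself, and as far as I know getting the main-camera frames on visionOS requires the Enterprise camera-access entitlement):

```swift
import Vision
import CoreML
import CoreVideo

// Sketch: run a YOLOv3 Core ML model on one camera frame and print the
// detected objects with their bounding boxes. Assumes a YOLOv3.mlmodel
// has been added to the project (Xcode generates the `YOLOv3` class).
func detectObjects(in pixelBuffer: CVPixelBuffer) throws {
    let coreMLModel = try YOLOv3(configuration: MLModelConfiguration()).model
    let visionModel = try VNCoreMLModel(for: coreMLModel)

    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        guard let results = request.results as? [VNRecognizedObjectObservation] else { return }
        for observation in results {
            // boundingBox is normalized (0...1) with the origin at the bottom-left.
            let label = observation.labels.first?.identifier ?? "unknown"
            print(label, observation.confidence, observation.boundingBox)
        }
    }
    request.imageCropAndScaleOption = .scaleFill

    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer)
    try handler.perform([request])
}
```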
I have an app that has a WKWebView for watching YouTube videos. When the videos are windowed the audio seems fine, positionally as well. All perfectly.
When I fullscreen the video and it goes into the native visionOS video player the audio messes up.
It will suddenly sound like it is in your ears, or maybe even just one ear channel, or the position will be wrong. It might be fine for a moment but the second I touch the controls or move the window the sound jumps across the room, away from the window, or switches to stereo.
Sometimes, after exiting the window entirely, you will still hear the videos playing. Even if you open the window back up, go to another screen, and open another video, you now hear two videos playing at the same time with no way to stop the first one in the background, requiring a force-restart of the app.
It is all sorts of glitchy. I haven't the slightest clue what is happening here. I am strongly feeling this is a visionOS bug.
I tried using AVAudioSession to change some of the sound settings, and that makes zero difference in behavior.
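For context, the kind of configuration I tried was along these lines (a sketch of a standard playback setup; I'm not claiming these particular options are the right ones for visionOS):

```swift
import AVFoundation

// Sketch: configure the shared audio session for video playback.
// None of these settings changed the fullscreen audio behavior for me.
func configureAudioSession() {
    let session = AVAudioSession.sharedInstance()
    do {
        try session.setCategory(.playback, mode: .moviePlayback, options: [])
        try session.setActive(true)
    } catch {
        print("Failed to configure AVAudioSession: \(error)")
    }
}
```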
Multiple testers have also reported this behavior and it has been seen on both visionOS 2.3 and 2.4 betas.
Thanks for the help! This is driving me mad! It is extremely consistent behavior!
In our visionOS app, we have two types of windows:
Main App Window – This is the default window that launches when the app starts. It displays the video listings and other primary content.
Immersive Space Window – This opens only when a user starts streaming or playing a video.
Issue:
When entering the immersive space, the main app window remains visible in front of it unless manually closed. To avoid this, I currently close the main window when transitioning to immersive space and reopen it when exiting. However, this causes the app to restart instead of resuming from its previous state.
Desired Behavior:
I want the main app window to retain its state and seamlessly resume from where it was before entering immersive mode, rather than restarting.
Attempts & Challenges:
Tried managing opacity, visibility, and state preservation, but none worked as expected.
Couldn’t find a way to push the main window to the background while bringing the immersive space to the foreground.
Looking for a solution to keep the main window’s state intact while transitioning between immersive and normal modes.
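A sketch of one direction I'm considering (untested): keep the listing state in an app-level observable model that outlives the window, so closing and reopening the main WindowGroup restores where the user left off instead of starting fresh. AppModel, "MainWindow", and "PlayerSpace" below are placeholder names:

```swift
import SwiftUI
import Observation

// Placeholder model owning the state that should survive the main window
// being dismissed while the immersive space is open.
@Observable
class AppModel {
    var selectedVideoID: String?
    var scrollPosition: Int = 0
}

@main
struct VideoApp: App {
    // Lives for the whole app, not for one window's lifetime.
    @State private var model = AppModel()

    var body: some Scene {
        WindowGroup(id: "MainWindow") {
            MainListingView()
                .environment(model)   // a reopened window sees the same model
        }

        ImmersiveSpace(id: "PlayerSpace") {
            PlayerView()
                .environment(model)
        }
    }
}

// Minimal stand-ins for the real views.
struct MainListingView: View {
    @Environment(AppModel.self) private var model
    var body: some View {
        Text("Last video: \(model.selectedVideoID ?? "none")")
    }
}

struct PlayerView: View {
    var body: some View {
        Text("Immersive content goes here")
    }
}
```

The idea is that the window's views stay cheap to recreate as long as the data they render lives above the window itself.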
I am developing a visionOS app. I am now very interested in Metal and Compositor Services, but I have not explored them in depth. I know that Metal offers a higher degree of control and freedom. I am wondering whether using Compositor Services gives me fewer AR capabilities than RealityKit (such as scene reconstruction and understanding, hover effects, etc.).
I'm developing a visionOS app. I want to know how to play spatial audio by means other than RealityKit. And on iOS or macOS, how would I play spatial audio outside of RealityKit as well?
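Outside of RealityKit, the building block I know of on iOS/macOS (and it appears to be available on visionOS as well) is AVAudioEngine with an AVAudioEnvironmentNode. A minimal sketch, assuming a mono audio file at a known URL:

```swift
import AVFoundation

// Sketch: position a mono sound in 3D space using AVAudioEngine's
// environment node instead of RealityKit's audio components.
final class SpatialPlayer {
    private let engine = AVAudioEngine()
    private let environment = AVAudioEnvironmentNode()
    private let player = AVAudioPlayerNode()

    func play(url: URL) throws {
        let file = try AVAudioFile(forReading: url)

        engine.attach(environment)
        engine.attach(player)

        // Mono sources are required for 3D positioning.
        engine.connect(player, to: environment, format: file.processingFormat)
        engine.connect(environment, to: engine.mainMixerNode, format: nil)

        // Listener at the origin, source 2 m in front and 1 m to the right.
        environment.listenerPosition = AVAudio3DPoint(x: 0, y: 0, z: 0)
        player.position = AVAudio3DPoint(x: 1, y: 0, z: -2)
        player.renderingAlgorithm = .HRTFHQ

        try engine.start()
        player.scheduleFile(file, at: nil)
        player.play()
    }
}
```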
Hello,
A lot of the RealityKit APIs (e.g. LowLevelMesh, LowLevelTexture, etc.) are marked with @MainActor, so they need to be accessed on the main thread.
This creates issues when we need to perform expensive GPU-related operations, since now we need to perform those on the main thread as well. This results in bottlenecks and hangs in our application. We would like to use a multi-threaded approach to solve these problems, which is difficult to do here. We are constantly streaming data, whether the app is just appearing or the user is interacting with it, so we need to be able to perform these operations on a separate thread.
Any advice on how to achieve this using RealityKit?
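For illustration, the kind of split we're hoping for is roughly this (a sketch only; generateVertexData is a hypothetical stand-in for our own expensive work, and the LowLevelMesh update call is from memory of the WWDC sample, so the exact signature may need adjusting):

```swift
import RealityKit

// Sketch: do the heavy work off the main actor, then hop to the main
// actor only for the LowLevelMesh buffer update itself.
func streamMeshUpdate(into mesh: LowLevelMesh) async {
    // Placeholder for expensive, non-UI work (decoding, simulation, etc.).
    let vertexData: [Float] = await Task.detached(priority: .userInitiated) {
        generateVertexData()
    }.value

    // Only this short section needs the main actor.
    await MainActor.run {
        mesh.withUnsafeMutableBytes(bufferIndex: 0) { destination in
            vertexData.withUnsafeBytes { source in
                destination.copyMemory(from: source)
            }
        }
    }
}

// Hypothetical stand-in for the real data-generation code.
func generateVertexData() -> [Float] {
    (0..<1_000).map { Float($0) * 0.001 }
}
```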
Thank you.
As in the system environments, where we get real-time reflections of the movie on a screen, or reflections of the surrounding Mount Hood scenery in the background...
Could I get a metallic surface showing accurate reflections of a box sitting on top of it?
I don't mean using a probe or an HDR cubemap; I mean the same accurate reflections as the Mount Hood water reflecting the movie I'm watching in another app.
I am currently creating an app where two people share an instance of an immersive space so that they are able to point to certain things in the immersive space. Right now, other people are hidden behind the immersive space, and even with people awareness enabled for everything, people are still too difficult to see. I've found this documentation (https://developer.apple.com/documentation/arkit/occluding-virtual-content-with-people) which describes what I want to do, but it is only listed as working on iOS and iPadOS. Is there anything similar to this that will work on visionOS?
Hi!
I attempted to run a sample project for detecting human pose in photos, which can be found here:
https://developer.apple.com/documentation/vision/detecting-human-body-poses-in-3d-with-vision
The project works perfectly when run on my MacBook Pro M1, but it fails on Apple Vision Pro. After selecting the photo, an endless loading screen is presented and the following output is produced in the console:
Failed to initialize 2D Detection Algorithm.
Failed to initialize 2D Pose Estimation Algorithm.
Failed to initialize algorithm modules
Network path is nil: (null)
Failed to initialize 2D Detection Algorithm.
Failed to initialize 2D Pose Estimation Algorithm.
Failed to initialize algorithm modules
Unable to perform the request: Error Domain=com.apple.Vision Code=9 "Async status object reported as failed but without an error" UserInfo={NSLocalizedDescription=Async status object reported as failed but without an error}.
de-activating session 70138 after timeout
It seems that VNDetectHumanBodyPose3DRequest is failing on Vision Pro for some reason. Are there any additional requirements for running the Vision framework on visionOS that I might be missing?
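For reference, a stripped-down sketch of the code path I believe is failing (essentially what the sample project does, reduced to the VNDetectHumanBodyPose3DRequest call):

```swift
import Vision

// Sketch: the minimal 3D body pose request. This works on macOS for me,
// but on Apple Vision Pro it produces the algorithm-initialization errors above.
func detect3DPose(in imageURL: URL) {
    let request = VNDetectHumanBodyPose3DRequest()
    let handler = VNImageRequestHandler(url: imageURL)
    do {
        try handler.perform([request])
        print("Found \(request.results?.count ?? 0) body observation(s)")
    } catch {
        print("Unable to perform the request: \(error)")
    }
}
```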
Sorry for the cross-post but it's now two days in and this isn't fixed.
If you try to use Xcode 16.3b3 with visionOS, it won't download the visionOS SDK; it gives a 'network error', so you can't use the latest beta for Apple Vision Pro.
FB16927025
FB16917874
FB16910449
Hello everyone, can you help me? I requested the Main Camera Access Enterprise API and have received the license for it. I set up Apple's main camera access demo project with my new license and created an app bundle and identifier for it, but when I tried to deploy it to TestFlight I got errors saying "Profile doesn't support Main Camera Access" and "Profile doesn't include the com.apple.developer.arkit.main-camera-access.allow entitlement", even though I have done this in Certificates, Identifiers & Profiles and added the additional Main Camera Access capability. Can you help me fix this so that I can use the Main Camera Access entitlement?
TLDR: Applying a clipShape in a hoverEffect closure is preventing taps from getting through to buttons nested within an ornament.
I need to make a custom ornament menu, similar to the stock ornament available via TabView but with some visual tweaks. It displays icons, and then expands to display a label as the user hovers. Example:
I've put together a piece of sample code, following guidance from WWDC docs:
```swift
VStack {
}
.ornament(attachmentAnchor: .scene(.leading)) {
    VStack {
        ForEach(0...7, id: \.self) { index in
            Button(action: {
                print(index) // <---- This will not print
            }) {
                HStack {
                    Text("\(index)")
                    Text(" button")
                }
            }
        }
    }
    .padding(12)
    .glassBackgroundEffect()
    .hoverEffect { effect, isActive, proxy in
        effect
            .clipShape(
                RoundedRectangle(cornerRadius: 36)
                    .size(width: isActive ? proxy.size.width : 72, height: proxy.size.height, anchor: .leading)
            )
    }
}
```
The buttons in this code cannot be interacted with, as the print statement never executes. What am I missing here? I've managed to get some weird behavior: sometimes a specific clipShape (like a circle) will allow a tap on a single button, but not on the others.
I have been using ARKit to get hand tracking data in a continuous loop by iterating over the AnchorUpdateSequence.
I want to try out .predicted hand tracking, but it seems as though using an ARKitSession and HandTrackingProvider does not allow me to enable this feature?
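For context, the continuous loop I'm using today is essentially this (sketch):

```swift
import ARKit

// Sketch of the continuous hand-tracking loop (visionOS ARKit). The open
// question is how to opt into .predicted poses instead of only these
// streamed updates.
func runHandTracking() async throws {
    let session = ARKitSession()
    let handTracking = HandTrackingProvider()

    try await session.run([handTracking])

    for await update in handTracking.anchorUpdates {
        let anchor = update.anchor
        guard anchor.isTracked else { continue }
        // chirality tells you left vs right; handSkeleton has the joints.
        print("\(anchor.chirality) hand at \(anchor.originFromAnchorTransform.columns.3)")
    }
}
```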
I am developing an app which needs high-quality immersion on visionOS. I found that when certain system messages pop up, the virtual objects become transparent, so the immersion is broken. How can I disable such pop-up messages while the ImmersiveSpace is open?
I'm developing a visionOS app using Unity and CoreBluetooth to connect to BLE devices. However, after adding the required Bluetooth permissions in Info.plist, the app does not prompt the user for Bluetooth access, and scanning for peripherals does not work.
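For reference, the minimal native-side check I'd use to see whether the permission flow fires at all is roughly this (a sketch; the Unity plumbing differs, and Info.plist needs NSBluetoothAlwaysUsageDescription for the prompt to appear at all):

```swift
import CoreBluetooth

// Sketch: the first creation of a CBCentralManager should trigger the
// Bluetooth permission prompt if NSBluetoothAlwaysUsageDescription is set.
final class BLEScanner: NSObject, CBCentralManagerDelegate {
    private var central: CBCentralManager!

    func start() {
        central = CBCentralManager(delegate: self, queue: nil)
    }

    func centralManagerDidUpdateState(_ central: CBCentralManager) {
        print("Bluetooth state: \(central.state.rawValue), auth: \(CBCentralManager.authorization.rawValue)")
        if central.state == .poweredOn {
            // A nil service filter scans for all nearby peripherals.
            central.scanForPeripherals(withServices: nil)
        }
    }

    func centralManager(_ central: CBCentralManager,
                        didDiscover peripheral: CBPeripheral,
                        advertisementData: [String: Any],
                        rssi RSSI: NSNumber) {
        print("Discovered: \(peripheral.name ?? "unknown") RSSI \(RSSI)")
    }
}
```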
I am using RealityKit Entities to display virtual content; however, I find that sometimes a real object in front of the virtual content cannot occlude it.
For example, I place an Entity in a room, but when I walk into another room, I can still see the Entity through the wall.
I wonder how should I fix the problem. Thank you!
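One thing that might help (a sketch, not verified for this exact setup): as far as I know, real-world occlusion doesn't happen automatically for plain entities, but invisible geometry with OcclusionMaterial placed where the wall is (from ARKit scene reconstruction, plane detection, or even a manually placed box) will hide virtual content behind it:

```swift
import RealityKit

// Sketch: an invisible "wall" that occludes any virtual content behind it.
// Position and size are placeholders; in practice they would come from
// ARKit scene reconstruction or plane detection.
func makeOccluderWall() -> ModelEntity {
    let wall = ModelEntity(
        mesh: .generateBox(width: 4.0, height: 2.5, depth: 0.1),
        materials: [OcclusionMaterial()]   // draws nothing, but hides what's behind it
    )
    wall.position = [0, 1.25, -2]          // roughly where the real wall is
    return wall
}
```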