Hi community,
I have a pair of stereo images, one for each eye. How should I render them on visionOS?
I know that for 3D videos, AVPlayerViewController can display them in fullscreen mode, but I couldn't find any docs relating to 3D stereo images.
I guess my question can be put more generally: is there any method to render different content for each eye? This could also help someone who has sight in only one eye.
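In case it helps frame the question: the lowest-level route I know of is Compositor Services, where each LayerRenderer.Drawable exposes one view per eye, so an app can submit different pixels to each eye. Below is a rough sketch, not a complete renderer; it assumes preloaded left/right MTLTextures that exactly match the drawable's color texture in size and pixel format, and a .layered layout (one array texture with one slice per eye):

import CompositorServices
import Metal

// Sketch only: copy a different source texture into each eye's slice of the
// drawable. Assumes leftTexture/rightTexture match the drawable's color
// texture in size and pixel format, and that the layer uses .layered layout.
func present(frame: LayerRenderer.Frame,
             leftTexture: MTLTexture,
             rightTexture: MTLTexture,
             commandQueue: MTLCommandQueue) {
    guard let drawable = frame.queryDrawable() else { return }
    frame.startSubmission()
    guard let commandBuffer = commandQueue.makeCommandBuffer(),
          let blit = commandBuffer.makeBlitCommandEncoder() else {
        frame.endSubmission()
        return
    }
    for index in drawable.views.indices {
        // One slice per view; verify the left/right ordering against the view data.
        blit.copy(from: index == 0 ? leftTexture : rightTexture,
                  sourceSlice: 0, sourceLevel: 0,
                  to: drawable.colorTextures[0],
                  destinationSlice: index, destinationLevel: 0,
                  sliceCount: 1, levelCount: 1)
    }
    blit.endEncoding()
    drawable.encodePresent(commandBuffer: commandBuffer)
    commandBuffer.commit()
    frame.endSubmission()
}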
How do you force quit apps on the Vision Pro simulator?
I've read online that you can double-press the Digital Crown on a real Vision Pro device, but there is no such button on the simulator. So far I've tried:
Pressing the home button twice (⇧⌘H)
Pressing the Siri button twice (⌥⇧⌘H)
Neither of them works.
I'm on Xcode 15.0 beta 5 (15A5209g) & visionOS 1.0 beta 2 (21N5207e)
What will be the pros and cons if Unreal Engine is used on Apple Vision Pro?
What issues will Unreal face on Apple Vision Pro in the future?
What specifications will be required for Unreal to work on Vision Pro?
I want to open a view in my app that contains a Model3D view, and I want that object to rotate continuously around the Y axis while it is visible. Is it possible to animate the rotation of a Model3D view?
I've tried this code, but the object just sits there and doesn't rotate.
import RealityKit
import RealityKitContent
import SwiftUI
struct QuantumComputerArea: View {
    @State var degreesRotating = 0.0

    var body: some View {
        VStack {
            Model3D(named: "quantumComputer") { phase in
                switch phase {
                case .empty:
                    ProgressView()
                case let .failure(error):
                    Text(error.localizedDescription)
                case let .success(model):
                    model
                        .resizable()
                        .scaledToFit()
                        .offset(x: -75, y: 0)
                        .rotation3DEffect(.degrees(degreesRotating), axis: (x: 0, y: 1, z: 0))
                @unknown default:
                    fatalError()
                } //phase
            } //Model3D
            .onAppear {
                withAnimation(Animation.linear(duration: 10).repeatForever(autoreverses: false)) {
                    degreesRotating = 360
                }
            }
        } //VStack
    } //View
} //View
I'm probably missing something simple, but if anyone has any suggestions (including using a RealityView) I'd be grateful for the advice.
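For reference, here is a minimal sketch of the RealityView route: a custom component/system pair that spins any tagged entity a little on each render update. SpinComponent, SpinSystem, and SpinningModelView are names made up for this example; it assumes the same "quantumComputer" asset in the RealityKitContent bundle.

import RealityKit
import RealityKitContent
import SwiftUI

// Tag any entity that should spin around Y.
struct SpinComponent: Component {
    var radiansPerSecond: Float = .pi / 5 // one full turn every 10 seconds
}

// Rotates every tagged entity a little on each render update.
struct SpinSystem: System {
    static let query = EntityQuery(where: .has(SpinComponent.self))

    init(scene: RealityKit.Scene) {}

    func update(context: SceneUpdateContext) {
        for entity in context.entities(matching: Self.query, updatingSystemWhen: .rendering) {
            guard let spin = entity.components[SpinComponent.self] else { continue }
            let angle = spin.radiansPerSecond * Float(context.deltaTime)
            entity.setOrientation(simd_quatf(angle: angle, axis: [0, 1, 0]), relativeTo: entity)
        }
    }
}

struct SpinningModelView: View {
    var body: some View {
        RealityView { content in
            SpinComponent.registerComponent()
            SpinSystem.registerSystem()
            if let model = try? await Entity(named: "quantumComputer", in: realityKitContentBundle) {
                model.components.set(SpinComponent())
                content.add(model)
            }
        }
    }
}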
I tried to show a spatial photo in my application using SwiftUI's Image, but it shows only a flat version, even on Vision Pro.
So how can I show spatial photos to users?
Are there any options for this?
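One possibility on visionOS 2 (an assumption, not a confirmed recommendation) is to hand the file to the system previewer via QuickLook's PreviewApplication, which displays spatial photos stereoscopically. photoURL is a placeholder for a file URL to a spatial HEIC:

import QuickLook

// Hedged sketch (visionOS 2+): open the photo in the system previewer,
// which renders spatial HEICs with depth. `photoURL` is a placeholder.
let previewSession = PreviewApplication.open(urls: [photoURL])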
Is there any way to detect whether an entity is being looked at in a RealityView? I know it is possible to add a HoverEffectComponent(), which highlights the entity a little when you gaze at it, but there doesn't seem to be any way to call a function from it. There is also no GazeGesture or anything similar.
Rendering the scene onto a RenderTarget with twice the resolution of the Drawable, and then downsampling to the Drawable, causes the image to appear distorted.
The modifications below were made to the Xcode visionOS template.
Foveation should be enabled by default:
struct ContentStageConfiguration: CompositorLayerConfiguration {
    func makeConfiguration(capabilities: LayerRenderer.Capabilities, configuration: inout LayerRenderer.Configuration) {
        configuration.depthFormat = .depth32Float
        configuration.colorFormat = .bgra8Unorm_srgb

        let foveationEnabled = capabilities.supportsFoveation
        configuration.isFoveationEnabled = foveationEnabled

        let options: LayerRenderer.Capabilities.SupportedLayoutsOptions = foveationEnabled ? [.foveationEnabled] : []
        let supportedLayouts = capabilities.supportedLayouts(options: options)
        configuration.layout = supportedLayouts.contains(.layered) ? .layered : .dedicated
    }
}
To avoid errors, rasterizationRateMap is not set.
let renderPassDescriptor = MTLRenderPassDescriptor()
renderPassDescriptor.colorAttachments[0].texture = self.renderTarget.currentFrameColor
renderPassDescriptor.renderTargetWidth = self.renderTarget.currentFrameColor.width
renderPassDescriptor.renderTargetHeight = self.renderTarget.currentFrameColor.height
renderPassDescriptor.colorAttachments[0].loadAction = .clear
renderPassDescriptor.colorAttachments[0].storeAction = .store
renderPassDescriptor.colorAttachments[0].clearColor = MTLClearColor(red: 0.0, green: 0.0, blue: 0.0, alpha: 0.0)
renderPassDescriptor.depthAttachment.texture = self.renderTarget.currentFrameDepth
renderPassDescriptor.depthAttachment.loadAction = .clear
renderPassDescriptor.depthAttachment.storeAction = .store
renderPassDescriptor.depthAttachment.clearDepth = 0.0
//renderPassDescriptor.rasterizationRateMap = drawable.rasterizationRateMaps.first
if layerRenderer.configuration.layout == .layered {
    renderPassDescriptor.renderTargetArrayLength = drawable.views.count
}
The rendering process draws the scene into this double-resolution render target and then downsamples the result into the Drawable.
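One thing worth ruling out, offered as an assumption rather than a confirmed diagnosis: when isFoveationEnabled is true, the drawable's textures are sized and composited for rate-map-warped content, so writing uniformly sampled pixels into them (as a plain downsample does) could come out distorted. Disabling foveation in makeConfiguration(...) above isolates whether that is the cause:

// Hypothetical test: render without foveation so the drawable expects
// uniformly sampled pixels, then compare the downsampled output.
configuration.isFoveationEnabled = false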
I’ve submitted the following feedback:
FB13820942 (List Outline View Not Using Accent Color on Disclosure Caret for visionOS)
I'd appreciate help on this, to see whether I'm doing something wrong or whether this is indeed how visionOS currently works, in which case the feedback stands as a suggestion.
The new Mac virtual display feature on visionOS 2 offers a curved/panoramic window. I was wondering if this is simply a property that can be applied to a window, or if it involves an immersive mode or SceneKit/RealityKit?
I am getting the error "Initializing hosting entity without a context" in the console when I build and run my game in Xcode 16.0 beta, targeting visionOS 2.0 (22N5252n).
Not sure where the error is originating.
Is there a way to provide CLHeading in visionOS so I can more accurately understand the direction in which a user is facing?
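As far as I know, visionOS doesn't surface CLHeading, but a rough sketch of an alternative using ARKit's device pose (relative to the app's world origin, not magnetic north, so it is not a true compass heading) could look like this:

import ARKit
import QuartzCore

let session = ARKitSession()
let worldTracking = WorldTrackingProvider()

// Run once, e.g. when an immersive space opens; device-anchor queries
// only deliver data while the app presents an immersive space.
func startTracking() async throws {
    try await session.run([worldTracking])
}

// Returns the device's forward direction in app-origin coordinates,
// or nil if tracking hasn't produced a pose yet.
func currentFacingDirection() -> SIMD3<Float>? {
    guard let anchor = worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) else {
        return nil
    }
    let m = anchor.originFromAnchorTransform
    // The device looks down its local -Z axis.
    return -SIMD3<Float>(m.columns.2.x, m.columns.2.y, m.columns.2.z)
}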
Thank you again for pushing the web forward in visionOS 2, super exciting!
The latest WWDC24 video touched on VR experiences for visionOS 2.0 using WebXR; however, there was no mention of passthrough AR experiences.
Samples such as this one are not supported:
https://immersive-web.github.io/webxr-samples/immersive-ar-session.html
In Settings > Safari, there is a feature flag for the AR WebXR module, but enabling it did not seem to change anything.
Is this the expected behavior at this time? Any developer preview(s) we could try?
I downloaded Xcode 16 and updated my macOS to 15, but I keep getting this error when trying to build the game in the simulator or on a device:
[xrsimulator] Exception thrown: The operation couldn’t be completed. (realitytool.RKAssetsCompiler.RKAssetsCompilerError error 3.)
For visionOS 2.0+, the object tracking feature has been announced. Is there any support for it in Unity's PolySpatial, or is it only available in Swift and Xcode?
I started a visionOS app using Apple's new "App Environment" template, and when I looked at the UV mapping for the half SkyDome, the bottom edge had a UV 'Y' value of 0.318.
Naively, I had assumed the bottom edge of a half dome would have a UV 'Y' value of 0.5 (halfway up the texture map).
Is this the standard UV mapping for half a SkyDome?
It has caused some issues when I've applied some HDRIs.
In the "Discover RealityKit APIs for iOS, macOS, and visionOS" presentation, there was a slide at the end highlighting new features not covered in the video. One of them was surface subdivision, but I have not been able to find any documentation or APIs that support this feature. Does anyone have any further details on how this works in RealityKit?
Hello,
I have an iOS app that uses SwiftUI, but the gesture code is written with UIGestureRecognizer. When I run this app on visionOS using the "Designed for iPad" destination and try to use any of my gestures, I see this warning in the console:
Trying to convert coordinates between views that are in different UIWindows, which isn't supported. Use convertPoint:fromCoordinateSpace: instead.
But I don't see any visible problems with the gestures.
I see this warning printed after the gesture takes place but before any of our gesture methods are called. So now I am wondering whether this is something we need to deal with, or internal work that needs to happen in UIKit.
Does anyone have any thoughts on this?
I'm having trouble pairing my Apple Vision Pro with my MacBook Pro M3. The MacBook Pro is on Sonoma 14.6, and I have tried pairing Vision Pros on visionOS 1.2 and 2.0, but it still doesn't work. I have a Mac mini that pairs and connects fine with the same headsets. These are the steps I have tried so far on the Vision Pro and MacBook Pro, with no success:
On the same Windows Wi-Fi hotspot
On the same iPhone hotspot
On another Wi-Fi hotspot
Tried to clear remote devices; still not recognized
Tried turning developer mode off and on; still nothing
Tried resetting network parameters
Tried restarting the headset
Tried restarting Xcode
Tried restarting the Mac
Just after the restart the headset showed up; I clicked pair and typed in the code, but the headset still showed as "disconnected" and couldn't connect to the Mac
Tried restarting the Mac and headset
Tried renaming the headset
Tried switching Macs
Tried one headset on at a time
Tried cleaning the build folder
Deleted the contents of ~/Library/Developer/Xcode/DerivedData
Tried sudo defaults write /Library/Preferences/com.apple.mDNSResponder.plist NoMulticastAdvertisements -bool true
Tried deactivating the firewall
I have a visionOS app that plays audio using AVAudioEngine and presents both a window and an immersive space. If I close the window, the audio session gets interrupted and attempting to restart the session and audio engine has no effect. I need to dismiss the app, then reopen it, which reopens the main window, in order for audio to start playing again.
This happens in all visionOS 2 betas. Note that I have background audio enabled for my app.
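For context, a sketch of the standard interruption handling, assuming an existing AVAudioEngine instance named audioEngine; per the report above, this kind of restart currently has no effect on visionOS 2:

import AVFoundation

// Hedged sketch: reactivate the session and restart the engine when an
// interruption ends. `audioEngine` is an assumed pre-existing AVAudioEngine.
NotificationCenter.default.addObserver(
    forName: AVAudioSession.interruptionNotification,
    object: AVAudioSession.sharedInstance(),
    queue: .main
) { notification in
    guard let value = notification.userInfo?[AVAudioSessionInterruptionTypeKey] as? UInt,
          let type = AVAudioSession.InterruptionType(rawValue: value),
          type == .ended else { return }
    try? AVAudioSession.sharedInstance().setActive(true)
    try? audioEngine.start()
}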
We've recently discovered that our app crashes on startup on the latest visionOS 2.0 beta 5 (22N5297g) build. The entire field of view dims down and visionOS then restarts, showing the Apple logo. Interestingly, no app crash is reported by Xcode during debugging.
After investigation, we have isolated the issue to a specific USDZ asset in our app. Loading it in a sample, blank project also causes visionOS to reliably crash, or become extremely unresponsive with rendering artifacts everywhere.
This looks like a potentially serious issue. Even if the asset is problematic, loading it should not crash the entire OS. We have filed feedback FB14756285, along with a demo project. Hopefully someone can take a look. Thanks!