HoverEffectComponent works fine on macOS 15 and iOS 18 when using RealityView, but seems to be ignored when ARView is used (even when wrapped in a SwiftUI UIViewRepresentable).
Feedback ID: FB15080805
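For reference, this is roughly the RealityView setup where the hover effect does show up for me (a simplified sketch; the view name, sphere, and material are just placeholders):
import SwiftUI
import RealityKit

struct HoverDemo: View {
    var body: some View {
        RealityView { content in
            let sphere = ModelEntity(
                mesh: .generateSphere(radius: 0.1),
                materials: [SimpleMaterial(color: .blue, isMetallic: false)]
            )
            // The hover effect needs an input target and collision shapes to receive input.
            sphere.components.set(InputTargetComponent())
            sphere.components.set(HoverEffectComponent())
            sphere.generateCollisionShapes(recursive: true)
            content.add(sphere)
        }
    }
}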
I have been experimenting with the Hello World sample app from https://developer.apple.com/documentation/visionos/world and came across behavior that appears inconsistent with the user-facing documentation describing the device controls at https://support.apple.com/en-gb/guide/apple-vision-pro/tan1e2a29e00/visionos
I pressed the simulator's "Home" button while the "Objects in Orbit" immersive space was presented alongside the main application window. According to the user documentation, pressing the Digital Crown should take the user directly to the Home View. In my test, a single press only dismissed the immersive space; I needed another press to "exit" the app and go to the Home View.
Is this behavior expected? I am assuming the "Home" button in the simulator behaves as if the user pressed the Digital Crown on the device; I don't have access to the actual hardware.
Hi!
I want to know whether it's possible to mirror a Vision Pro to other Vision Pros.
If it is possible, how do I go about it? Can I get some hints?
Hi,
My app allows users to share and view spatial photos.
For viewing spatial photos, I'm using a plane in a RealityView with a camera index switch material node that takes the stereo images as inputs.
For sharing native spatial photos taken on the Vision Pro, prior to visionOS 2.0 I extract the stereo image pair and merge them into a single side-by-side image to upload to the app's backend.
However, since visionOS 2.0 introduced generating spatial photos from normal photos, I've been seeing some unexpected behaviours in my app, even though the same photos are viewed correctly in the system Photos app:
Sometimes the extracted images have different sizes: the right image is smaller than the left image. See the first image in the Google Drive folder below, taken with an iPhone 15 Pro.
Even if the image pair has the same size, when viewed in my app it shows some artefacts, especially around the edges of objects that are closer to the camera. See the second image in the Google Drive folder below, taken with an iPhone 11.
Google drive link here:
https://drive.google.com/drive/folders/1UTfpxvO3-ChqshwfyzY5E_KCgk8VgUaa
I know that the Quick Look preview application can now display spatial photos, but I would like to keep the way I implemented it in the app, for compatibility reasons.
Below is a code snippet that deals with the extraction. Please point out the correct way to extract a stereo image pair from a generated spatial photo.
Happy to submit a code-level support request if more information is needed.
import UIKit
import ImageIO
import CoreImage

// The data comes from a PhotosPicker item.
let data = try await photo.loadTransferable(type: Data.self)
let source = CGImageSourceCreateWithData(data as CFData, nil)
let sbsImage = source.extractSpatialPhoto()

extension CGImageSource {
    /// Extracts both eye images and merges them into a single side-by-side UIImage.
    func extractSpatialPhoto() -> UIImage? {
        guard let leftCIImage = extractSpatialImage(at: 0),
              let rightCIImage = extractSpatialImage(at: 1)
        else {
            return nil
        }
        let leftImage = UIImage(ciImage: leftCIImage)
        let rightImage = UIImage(ciImage: rightCIImage)
        guard leftImage.size == rightImage.size else {
            return nil
        }
        // Merge left + right side by side.
        let size = CGSize(width: leftImage.size.width * 2, height: leftImage.size.height)
        UIGraphicsBeginImageContextWithOptions(size, true, 1.0)
        leftImage.draw(in: CGRect(x: 0, y: 0, width: leftImage.size.width, height: leftImage.size.height))
        rightImage.draw(in: CGRect(x: leftImage.size.width, y: 0, width: rightImage.size.width, height: rightImage.size.height))
        let mergedImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return mergedImage
    }

    // Not sure if this actually works.
    func extractSpatialImage(at index: Int) -> CIImage? {
        guard let cgImage = CGImageSourceCreateImageAtIndex(self, index, nil) else {
            return nil
        }
        var ciImage = CIImage(cgImage: cgImage)
        if let properties = CGImageSourceCopyPropertiesAtIndex(self, index, nil) as? [String: Any],
           let heifDictionary = properties[kCGImagePropertyHEIFDictionary as String] as? [String: Any],
           let extrinsics = heifDictionary[kIIOMetadata_CameraExtrinsicsKey as String] as? [String: Any],
           let position = extrinsics[kIIOCameraExtrinsics_Position as String] as? [Double]
        {
            // Default baseline is 64 mm (0 for the left camera, 0.064 m for the right camera).
            let standardBaseline = 0.064
            // Check whether this is the right image (expected at [0.064, 0, 0]).
            let isRightImage = (index == 1)
            let expectedPosition = isRightImage ? standardBaseline : 0.0
            // Calculate the translation needed to align to the standard baseline.
            let positionDelta = position[0] - expectedPosition
            // Apply the translation only if there is a positional mismatch.
            if positionDelta != 0 {
                let transform = CGAffineTransform(translationX: CGFloat(positionDelta), y: 0)
                ciImage = ciImage.transformed(by: transform)
            }
        }
        return ciImage
    }
}
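One thing I'm not sure about in my own code is the hard-coded indices 0 and 1. As an assumption on my side, the stereo-pair group in the image source's properties might be the more reliable way to find the two images; something like this sketch (I haven't verified it against generated spatial photos):
import ImageIO

// Hypothetical helper: read the left/right image indices that the spatial photo
// declares in its stereo-pair group, instead of assuming indices 0 and 1.
func stereoPairIndices(in source: CGImageSource) -> (left: Int, right: Int)? {
    guard let properties = CGImageSourceCopyProperties(source, nil) as? [String: Any],
          let groups = properties[kCGImagePropertyGroups as String] as? [[String: Any]]
    else { return nil }
    for group in groups {
        if let type = group[kCGImagePropertyGroupType as String] as? String,
           type == (kCGImagePropertyGroupTypeStereoPair as String),
           let left = group[kCGImagePropertyGroupImageIndexLeft as String] as? Int,
           let right = group[kCGImagePropertyGroupImageIndexRight as String] as? Int {
            return (left, right)
        }
    }
    return nil
}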
My visionOS app uses an immersive view. If the app encounters an error, I want to present an alert.
I tried to present such an alert in a demo app, but it is not shown. Nearly the same code on iOS presents an alert window.
Here is my demo code, based on Apple's Immersive Environment App template:
import SwiftUI
import RealityKit
import RealityKitContent

struct ErrorInfo: LocalizedError, Equatable {
    var errorDescription: String?
    var failureReason: String?
}

struct ImmersiveView: View {
    @State private var presentAlert = false
    let error = ErrorInfo(
        errorDescription: "My error",
        failureReason: "No reason"
    )

    var body: some View {
        RealityView { content, attachments in
            let mesh = MeshResource.generateBox(width: 1.0, height: 0.05, depth: 1.0)
            var material = UnlitMaterial()
            material.color.tint = .red
            let boardEntity = ModelEntity(mesh: mesh, materials: [material])
            boardEntity.transform.translation = [0, 0, -3]
            content.add(boardEntity)
        } update: { content, attachments in
            // …
        } attachments: {
            // …
        }
        .onAppear {
            presentAlert = true
        }
        .alert(
            isPresented: $presentAlert,
            error: error,
            actions: { error in
            },
            message: { error in
                Text(error.failureReason!)
            }
        )
    }
}
Since I cannot see any alert, is something wrong with my code? How should an alert be presented in immersive space?
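For reference, I assume the standard 2D case is the following, where the same alert attached to a view in a regular window scene is presented as expected (a sketch for comparison only, reusing the ErrorInfo struct from the code above); my question is specifically about the immersive space:
import SwiftUI

struct MainWindowView: View {
    @State private var presentAlert = false
    // ErrorInfo is the LocalizedError struct defined in the demo code above.
    let error = ErrorInfo(
        errorDescription: "My error",
        failureReason: "No reason"
    )

    var body: some View {
        Text("Main window content")
            .onAppear { presentAlert = true }
            .alert(
                isPresented: $presentAlert,
                error: error,
                actions: { _ in },
                message: { error in
                    Text(error.failureReason ?? "")
                }
            )
    }
}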
With the new ImagePresentationComponent in visionOS 26, how can text/overlays be shown on top of the image as seen in the Spatial Gallery app?
I am trying to run widgets on visionOS 26. Specifically, I am trying to pin them to the simulator room's walls; however, I am unable to do so.
Is this a limitation with the visionOS simulator right now, or am I missing a trick here?
I have a Mac mini M4 with 16 GB of memory and Xcode 16.1. When I test my Vision Pro app in the Simulator, it is very slow and the system shows that memory is under high pressure.
How do I run/test/debug the application on the Vision Pro directly? I tried to add my Vision Pro to my developer account, but it didn't work because I couldn't find the UDID; when I hook the USB up to the battery, it only shows a Battery device ID.
Hey there,
since SceneView has been marked as "deprecated" for SwiftUI, I'm wondering which alternatives should be considered for the following situation:
I have a SwiftUI app (for iOS and iPadOS) where users can view 3D models (USDZ) in a scene and rotate, scale, and move them with gestures. The models are downloaded from a web backend and loaded via local URL paths.
What I tested:
I tried ARView in .nonAR mode and RealityView, but I didn't get the expected result: the user being able to rotate and scale the 3D models in a virtual space.
ARView in .nonAR mode still shows the object as in normal AR mode, just without the camera stream.
I tried to add gestures to the RealityView on iOS; loading USDZ 3D models worked, but the gestures didn't (see the rough sketch at the end of this post for what I mean).
Model3D is only available for visionOS (it would be amazing to have it for iOS).
I also checked the Quick Look preview, but it works rather awkwardly via a file picker and so on, which is not how users should load 3D models in my app.
Maybe I missed something, but I couldn't find anything that helps me. I'm pretty much stuck adopting the latest and greatest frameworks/APIs in my app and taking the next steps toward porting it to visionOS.
Long story short 😃:
Does anyone have an idea what the alternative to SceneView is for USDZ 3D models?
I appreciate your support!!
Thanks in advance!
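For reference, the kind of setup I mean is roughly this (a simplified sketch, not my exact code; the model URL is a placeholder, and I'm not sure the camera setting or the gesture targeting below is correct):
import SwiftUI
import RealityKit

struct ModelViewer: View {
    let modelURL: URL   // local file URL of the downloaded USDZ (placeholder)

    var body: some View {
        RealityView { content in
            // Assumption: .virtual renders into a virtual scene without a camera feed.
            content.camera = .virtual
            if let model = try? ModelEntity.loadModel(contentsOf: modelURL) {
                // Gestures only reach entities that have an input target and collision shapes.
                model.components.set(InputTargetComponent())
                model.generateCollisionShapes(recursive: true)
                content.add(model)
            }
        }
        .gesture(
            MagnifyGesture()
                .targetedToAnyEntity()
                .onChanged { value in
                    // Simplified: scales from the original size on every change.
                    value.entity.scale = SIMD3<Float>(repeating: Float(value.magnification))
                }
        )
    }
}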
My visionOS app requires access to the user's personal photos. The trigger mechanism is: when the user first opens a FooView, a task attached to that view calls let status = PHPhotoLibrary.authorizationStatus(for: .readWrite); if the status is .notDetermined, it then calls PHPhotoLibrary.requestAuthorization(for: .readWrite, handler: authCompletionHandler) so that visionOS shows a window requesting photo access.
However, the app crashes every time the user selects Limited Access and the system tries to present the photo library picker. By the way, I have set Prevent limited photos access alert to Yes, but I guess that shouldn't affect the behavior here.
This is the debugger message:
*** Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: 'Presentations are not permitted within volumetric window scenes.'
However, the window this view belongs to is a .plain style window (though 3D objects appear in another view of the same WindowGroup).
Here is my code snippet, in case it helps.
checkAndUpdatePhotoAuthorization is just a wrapper around PHPhotoLibrary.authorizationStatus(for: .readWrite):
private func checkAndUpdatePhotoAuthorization() -> PHAuthorizationStatus {
    let currentStatus = PHPhotoLibrary.authorizationStatus(for: .readWrite)
    switch currentStatus {
    case .authorized:
        print("Photo library access authorized.")
        isPhotoGalleryAuthorized = true
        isPhotoGalleryLimited = false
        isPhotoGalleryAccessRestricted = false
        isPhotoGalleryDetermined = true
    case .limited:
        print("Photo library access limited.")
        isPhotoGalleryLimited = true
        isPhotoGalleryAuthorized = false
        isPhotoGalleryAccessRestricted = false
        isPhotoGalleryDetermined = true
    case .notDetermined:
        isPhotoGalleryDetermined = false
        print("Photo library access not determined.")
    case .denied:
        print("Photo library access denied.")
        isPhotoGalleryAuthorized = false
        isPhotoGalleryLimited = false
        isPhotoGalleryAccessRestricted = false
        showSettingsAlert = true
        isPhotoGalleryDetermined = true
    case .restricted:
        print("Photo library access restricted.")
        isPhotoGalleryAuthorized = false
        isPhotoGalleryLimited = false
        isPhotoGalleryAccessRestricted = true
        showPhotoAuthExplainationAlert = true
        isPhotoGalleryDetermined = true
    @unknown default:
        print("Photo library unknown authorization status.")
        isPhotoGalleryAuthorized = false
        isPhotoGalleryLimited = false
        isPhotoGalleryAccessRestricted = false
        isPhotoGalleryDetermined = true
    }
    return currentStatus
}
And then FooView attaches a task that calls checkAndUpdatePhotoAuthorization():
var body: some View {
    EmptyView()
        .task {
            try? await Task.sleep(for: .seconds(1.0))
            let status = self.checkAndUpdatePhotoAuthorization()
            if status == .notDetermined {
                DispatchQueue.main.async {
                    PHPhotoLibrary.requestAuthorization(for: .readWrite, handler: authCompletionHandler)
                }
            }
        }
}
Another thing worth mentioning is that SOMETIMES it won't crash when running a debug build, but it does crash in TestFlight builds.
Any other ideas? Big thanks in advance.
Xcode version: 16.2 beta 3
visionOS version: 2.2
I want to display an image of a model in a WindowGroup window. The image is not fixed, because the model varies. How should I convert the model into an image so I can display it?
I tried to use the application icon from the sample project https://developer.apple.com/documentation/visionos/diorama, but the 3 layers of the app icon are not separated when I hover over the icon in the Vision Pro simulator. Could you please advise how to fix this problem? I am using the latest Xcode, Version 15.4 (15F31d). Thank you.
I'm setting:
.immersionStyle(selection: .constant(.progressive(0.1...1.0, initialAmount: 0.1)), in: .progressive(0.1...1.0, initialAmount: 0.1))
in UnityVisionOSSettings.swift before building in Xcode.
I'm having an issue where this only works occasionally; it seems random. I'll either get no immersion level available (the Crown dial is greyed out and no changes can be made), or it will only allow 0.5 - 1.0 immersion (the dial goes below 0.5 but springs back to 0.5 when released).
With no changes to my setup or to how I'm setting immersionStyle, I have sometimes been able to get this to work as I would expect, so I'm wondering if there is some bug causing it to fail. I've tested a simple native-SDK progressive immersion style with the same code for the custom setting and it works every time, so it's something related to Unity.
Here is the entire UnityVisionOSSettings file that, as far as I can tell, controls this:
// GENERATED BY BUILD
import Foundation
import SwiftUI
import PolySpatialRealityKit
import UnityFramework

let unityStartInBatchMode = false

extension UnityPolySpatialApp {
    func initialWindowName() -> String { return "Unbounded" }

    func getAllAvailableWindows() -> [String] { return ["Bounded-0.500x0.500x0.500", "Unbounded"] }

    func getAvailableWindowsForMatch() -> [simd_float3] { return [] }

    func displayProviderParameters() -> DisplayProviderParameters { return .init(
        framebufferWidth: 1830,
        framebufferHeight: 1600,
        leftEyePose: .init(position: .init(x: 0, y: 0, z: 0),
                           rotation: .init(x: 0, y: 0, z: 0, w: 1)),
        rightEyePose: .init(position: .init(x: 0, y: 0, z: 0),
                            rotation: .init(x: 0, y: 0, z: 0, w: 1)),
        leftProjectionHalfAngles: .init(left: -1, right: 1, top: 1, bottom: -1),
        rightProjectionHalfAngles: .init(left: -1, right: 1, top: 1, bottom: -1)
        )
    }

    @SceneBuilder
    var mainScenePart0: some Scene {
        ImmersiveSpace(id: "Unbounded", for: UUID.self) { uuid in
            PolySpatialContentViewWrapper(minSize: .init(1.000, 1.000, 1.000), maxSize: .init(1.000, 1.000, 1.000))
                .environment(\.pslWindow, PolySpatialWindow(uuid.wrappedValue, "Unbounded", .init(1.000, 1.000, 1.000)))
                .onImmersionChange() { oldContext, newContext in
                    PolySpatialWindowManagerAccess.onImmersionChange(oldContext.amount, newContext.amount)
                }
            KeyboardTextField().frame(width: 0, height: 0).modifier(LifeCycleHandlerModifier())
        } defaultValue: { UUID() } .upperLimbVisibility(.automatic)
            .immersionStyle(selection: .constant(.progressive(0.1...1.0, initialAmount: 0.1)), in: .progressive(0.1...1.0, initialAmount: 0.1))

        WindowGroup(id: "Bounded-0.500x0.500x0.500", for: UUID.self) { uuid in
            PolySpatialContentViewWrapper(minSize: .init(0.100, 0.100, 0.100), maxSize: .init(0.500, 0.500, 0.500))
                .environment(\.pslWindow, PolySpatialWindow(uuid.wrappedValue, "Bounded-0.500x0.500x0.500", .init(0.500, 0.500, 0.500)))
            KeyboardTextField().frame(width: 0, height: 0).modifier(LifeCycleHandlerModifier())
        } defaultValue: { UUID() } .windowStyle(.volumetric).defaultSize(width: 0.500, height: 0.500, depth: 0.500, in: .meters).windowResizability(.contentSize) .upperLimbVisibility(.automatic) .volumeWorldAlignment(.gravityAligned)
    }

    @SceneBuilder
    var mainScene: some Scene {
        mainScenePart0
    }

    struct LifeCycleHandlerModifier: ViewModifier {
        func body(content: Content) -> some View {
            content
                .onOpenURL(perform: { url in
                    UnityLibrary.instance?.setAbsoluteUrl(url.absoluteString)
                })
        }
    }
}
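For comparison, a plain SwiftUI progressive immersive space looks roughly like this (a sketch, not my exact native test; the space id, entity, and Info.plist setup are placeholders/assumptions):
import SwiftUI
import RealityKit

@main
struct ProgressiveTestApp: App {
    // Assumes the Info.plist is configured to launch directly into the immersive space.
    var body: some Scene {
        ImmersiveSpace(id: "TestSpace") {
            RealityView { content in
                let sphere = ModelEntity(
                    mesh: .generateSphere(radius: 0.2),
                    materials: [SimpleMaterial(color: .green, isMetallic: false)]
                )
                sphere.position = [0, 1.5, -2]
                content.add(sphere)
            }
            .onImmersionChange { oldContext, newContext in
                print("Immersion changed:", oldContext.amount as Any, "->", newContext.amount as Any)
            }
        }
        .immersionStyle(
            selection: .constant(.progressive(0.1...1.0, initialAmount: 0.1)),
            in: .progressive(0.1...1.0, initialAmount: 0.1)
        )
    }
}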
Hello! I’m excited to see that Look to Scroll has been included in visionOS 26 Beta. I’m aiming to achieve a feature where the user’s gaze at a specific edge automatically scrolls to that position. However, I’ve experimented with ScrollView and haven’t been able to trigger this functionality. Could you advise if additional API modifiers are necessary? Thank you!
At the moment the MapKit APIs only support non-volumetric maps (i.e. in a window or in a volume, but on a 2D surface).
Is support for 3D volumetric maps on visionOS in the works? And if so, when can we expect it to be available?
Hi, are we allowed to push the default support in Package.swift up to iOS 18 to allow for the latest APIs?
And under the terms of the competition, can we use stock 3D USDZ assets?
Thank you!
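For reference, the Package.swift change I'm asking about is just the platforms entry; a sketch of a generic manifest (an App Playground manifest has extra fields, but the platforms part is the same idea, and the names here are placeholders):
// swift-tools-version: 6.0
import PackageDescription

let package = Package(
    name: "MySubmission",              // placeholder name
    platforms: [
        .iOS(.v18)                     // raise the minimum deployment target to iOS 18
    ],
    targets: [
        .executableTarget(name: "MySubmission")
    ]
)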
If I understand correctly, a new Enterprise API has been introduced in visionOS 26 that allows fixing windows to the user's frame of reference, implementing something like a head-up display, with the window tracking the user's movements.
Is this API only available to enterprise applications, and if so, is there a plan to make it available to every kind of app?
With Xcode 26, loading resources with RealityKit is extremely slow.
Here my project takes almost 50 seconds to load.
I also get multiple "Hang detected" messages in the console.
When I uncheck "Debug executable" in the scheme, the same project loads in 2 seconds.
I'm using RealityKit asynchronous loading:
private static func loadFromRealityComposerPro(
    named entityName: String,
    fromSceneNamed sceneName: String
) async -> Entity? {
    var entity: Entity?
    do {
        let scene = try await Entity(
            named: sceneName,
            in: visionPetsContentBundle
        )
        entity = scene.findEntity(named: entityName)
    } catch {
        print(
            "Error loading \(entityName) from scene \(sceneName): \(error.localizedDescription)"
        )
    }
    return entity
}
Anyone having the same problem?
Hi,
One of the great features introduced in WWDC24 is the Translation API. But unfortunately it's currently unavailable on visionOS.
My question is, does Apple have any plan to support it on visionOS as well? If so, what's the ETA for this feature?
I would really like to see it on visionOS, otherwise I'll have to pay Google to use their translation API.
Hi,
I'm trying to correct the lens distortion in frames provided by the Enterprise API camera frame provider. The frames seem to come with only intrinsics/extrinsics info, but not the distortion lookup table.
Is there some magic setting or function to do that (I can't seem to find anything like this)? Or is there a way to use AVCameraCalibrationData together with the provider?