There was no mention of whether the Vision Pro can be used outside. Several other AR/VR systems prohibit this because direct sunlight can overload their sensors. Can the Vision Pro be used in sunlight?
Thanks!
visionOS
Discuss developing for spatial computing and Apple Vision Pro.
Posts under visionOS tag
How to create a visionOS project in Xcode 15.0 beta
Has Apple worked out how WebXR-authored projects in Safari operate with visionOS? Quest already has support. And I imagine many cross-platform experiences (especially for professional markets, where the apps run on Windows through the web) would be served well by this.
Is there documentation for this?
Does visionOS support MapKit?
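For context, here is the kind of minimal test this question implies, assuming visionOS exposes the same SwiftUI Map API as iOS 17 (the coordinates are an arbitrary example):
```
import SwiftUI
import MapKit

// Minimal map view using the iOS 17-style Map initializer; whether this
// compiles and renders on visionOS is exactly the open question.
struct MapTestView: View {
    var body: some View {
        Map(initialPosition: .region(MKCoordinateRegion(
            center: CLLocationCoordinate2D(latitude: 37.3349, longitude: -122.0090),
            span: MKCoordinateSpan(latitudeDelta: 0.05, longitudeDelta: 0.05)
        )))
    }
}
```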
Is there support for using multiple UV channels in AR QuickLook in iOS 17?
One important use case would be to put a tiling texture in an overlapping, tiling UV set while mapping ambient occlusion to a separate, non-overlapping unwrapped UV set. This is very important for authoring 3D content that combines high-resolution surface detail with high-quality ambient occlusion data while keeping file size to a minimum.
Hi,
After watching the WWDC23 session Meet Core Location for spatial computing, I was wondering: does the Apple Vision Pro have GPS, or does it provide Core Location functionality via Wi-Fi?
Also, in Unity we use Input.location to get latitude and longitude. When developing for the Apple Vision Pro in Unity, do we still use Input.location to get latitude and longitude?
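For reference, here is a minimal sketch of the native Core Location request flow, assuming visionOS keeps the same CLLocationManager API covered in the session; whether the fix comes from GPS hardware or Wi-Fi would be opaque to the app either way:
```
import CoreLocation

// Minimal sketch: request authorization and a one-shot location fix.
// Assumes the standard iOS CLLocationManager API carries over to visionOS;
// NSLocationWhenInUseUsageDescription must be present in Info.plist.
final class LocationReader: NSObject, CLLocationManagerDelegate {
    private let manager = CLLocationManager()

    override init() {
        super.init()
        manager.delegate = self
        manager.requestWhenInUseAuthorization()
        manager.requestLocation() // one-shot fix; use startUpdatingLocation() for a stream
    }

    func locationManager(_ manager: CLLocationManager, didUpdateLocations locations: [CLLocation]) {
        guard let coordinate = locations.last?.coordinate else { return }
        print("lat: \(coordinate.latitude), lon: \(coordinate.longitude)")
    }

    func locationManager(_ manager: CLLocationManager, didFailWithError error: Error) {
        print("Location error: \(error)")
    }
}
```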
Best regards.
Sadao Tokuyama
https://1planet.co.jp/
https://twitter.com/tokufxug
Hey, just wondering: for those of you planning on delving into visionOS, what is your current background? Is it XR/AR/VR, software dev, or something else?
I just grabbed the portal code made available for testing and ran into this error when trying to run it in the Vision Pro simulator:
Thread 1: Fatal error: SwiftUI Scene ImmersiveSpace requires a UISceneSessionRole of "UISceneSessionRoleImmersiveSpaceApplication" for key UIApplicationPreferredDefaultSceneSessionRole in the Application Scene Manifest.
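For what it's worth, the error reads like a missing Info.plist entry rather than a code problem. Below is a minimal sketch of an app whose default scene is an ImmersiveSpace; the type name and scene id are placeholders, and the plist fix described in the comment is inferred from the error text rather than from documentation:
```
import SwiftUI

// Hypothetical minimal app that launches directly into an immersive scene.
// Per the error message, the Application Scene Manifest in Info.plist
// apparently needs UIApplicationPreferredDefaultSceneSessionRole set to
// "UISceneSessionRoleImmersiveSpaceApplication" so the system knows the
// default scene is an ImmersiveSpace.
@main
struct PortalApp: App {
    var body: some Scene {
        ImmersiveSpace(id: "Portal") {
            Text("Immersive content goes here")
        }
    }
}
```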
It seems that @Model is not fully supported in visionOS. I've tried both an iPad app and a native visionOS app, and both crash when trying to use SwiftData. It's a minimal app from the template (no data) that I add a little data code to:
import SwiftData

@Model
final class mapPin {
    var lat: Double
    var lon: Double

    init(lat: Double, lon: Double) {
        self.lat = lat
        self.lon = lon
    }
}
Building for visionOS produces:
/var/folders/9p/ppkjrfhs393__cqm9z57k9mr0000gn/T/swift-generated-sources/@__swiftmacro_7Vision16mapPin5ModelfMm_.swift:2:13: error: declaration name '_$backingData' is not covered by macro 'Model'
private var _$backingData: any SwiftData.BackingData<mapPin> = SwiftData.DefaultBackingData(for: mapPin.self)
^
/Users/arenberg/Developer/Beta/Vision1/Vision1/ContentView.swift:13:7: note: in expansion of macro 'Model' here
final class mapPin {
^~~~~~~~~~~~~~
/var/folders/9p/ppkjrfhs393__cqm9z57k9mr0000gn/T/swift-generated-sources/@__swiftmacro_7Vision16mapPin5ModelfMc_.swift:1:1: error: type 'mapPin' does not conform to protocol 'PersistentModel'
extension mapPin : SwiftData.PersistentModel {}
^
/Users/arenberg/Developer/Beta/Vision1/Vision1/ContentView.swift:13:7: note: in expansion of macro 'Model' here
final class mapPin {
^~~~~~~~~~~~~~
/var/folders/9p/ppkjrfhs393__cqm9z57k9mr0000gn/T/swift-generated-sources/@__swiftmacro_7Vision16mapPin5ModelfMc_.swift:1:1: note: do you want to add protocol stubs?
extension mapPin : SwiftData.PersistentModel {}
^
/Users/arenberg/Developer/Beta/Vision1/Vision1/ContentView.swift:13:7: note: in expansion of macro 'Model' here
final class mapPin {
^~~~~~~~~~~~~~
/var/folders/9p/ppkjrfhs393__cqm9z57k9mr0000gn/T/swift-generated-sources/@__swiftmacro_7Vision16mapPin5ModelfMm_.swift:2:64: error: module 'SwiftData' has no member named 'DefaultBackingData'
private var _$backingData: any SwiftData.BackingData<mapPin> = SwiftData.DefaultBackingData(for: mapPin.self)
^~~~~~~~~ ~~~~~~~~~~~~~~~~~~
/Users/arenberg/Developer/Beta/Vision1/Vision1/ContentView.swift:13:7: note: in expansion of macro 'Model' here
final class mapPin {
^~~~~~~~~~~~~~
There is a project tutorial for visionOS Metal rendering in immersive mode here (https://developer.apple.com/documentation/compositorservices/drawing_fully_immersive_content_using_metal?language=objc), but there is no downloadable sample project. Would Apple please provide sample code? The setup is non-trivial.
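For anyone else attempting it, here is a trimmed sketch of the skeleton that article describes, as best I understand it; the Metal pipeline setup and actual draw calls are elided, the type names are placeholders, and the frame-loop calls reflect my reading of the CompositorServices API from WWDC23 rather than verified sample code:
```
import SwiftUI
import CompositorServices

@main
struct FullyImmersiveApp: App {
    var body: some Scene {
        ImmersiveSpace {
            CompositorLayer { layerRenderer in
                // Rendering runs on a dedicated thread, not the main thread.
                let renderThread = Thread {
                    renderLoop(layerRenderer)
                }
                renderThread.name = "Render Thread"
                renderThread.start()
            }
        }
    }
}

func renderLoop(_ layerRenderer: LayerRenderer) {
    while layerRenderer.state != .invalidated {
        if layerRenderer.state == .paused {
            layerRenderer.waitUntilRunning()
            continue
        }
        guard let frame = layerRenderer.queryNextFrame() else { continue }
        frame.startUpdate()
        // Update app state and animations for this frame here.
        frame.endUpdate()
        frame.startSubmission()
        // Query the drawable, encode Metal commands, and present here.
        frame.endSubmission()
    }
}
```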
Is MTKView intentionally unavailable on visionOS or is this an issue with the current beta?
Registering simulator runtime with CoreSimulator failed.
Domain: DVTDownloadableErrorDomain
Code: 29
User Info: {
DVTErrorCreationDateKey = "2023-06-22 17:40:21 +0000";
}
The service used to manage runtime disk images (simdiskimaged) crashed or is not responding
Domain: com.apple.CoreSimulator.SimError
Code: 410
System Information
macOS Version 13.4.1 (Build 22F82)
Xcode 15.0 (22181.22) (Build 15A5161b)
Timestamp: 2023-06-22T12:40:21-05:00
How to Recreate:
Download the Xcode beta
Create a visionOS app project
Click at the top where it says "visionOS not installed"
Wait for it to download
Get the above error
Hope someone can help!
The documentation is very sparse.
I was starting to test the visionOS SDK on an existing project that has been running fine on iPad (iOS 17) with Xcode 15. It can be configured to run on the visionOS simulator on a MacBook with an M1 chip without any change to the Xcode project's Build Settings.
However, the Apple Vision Pro simulator doesn't appear when I run Xcode 15 on an Intel MacBook Pro, unless I change the SUPPORTED_PLATFORMS key in the Xcode project's Build Settings to visionOS.
Although I understand that a MacBook Pro running an M1/M2 chip would be the ideal platform for the visionOS simulator, it would be so much better if we could run the visionOS simulator on iPadOS: it has the same arm64 architecture, and it has all the hardware needed, including camera, GPS, and LiDAR.
The Mac is not a good simulator host, even with an M1/M2 chip:
It doesn’t have a dual facing camera (front and back)
It doesn’t have LiDAR
It doesn’t have GPS
It doesn’t have a 5G cellular radio
It’s not portable enough for developers to design use cases around spatial computing
Last but not least, it is not clear to me how to simulate ARKit with actual camera frames on a Vision Pro simulator, while I would expect this could be simulated well on iPadOS.
My suggestion is to provide us developers with a simulator that runs on iPadOS; that would increase developer adoption and improve the design and prototyping phase of apps for the actual Vision Pro device.
Hi
Is anyone aware of a way to emulate the Digital Crown within the Vision Pro simulator? I've tried various key/scrolling combinations without any luck.
I am interested to see how the progressive immersion mode works.
Thanks
I spent many hours attempting to overcome the problems surrounding the installation of the visionOS simulator. The final round of steps I took to install it was:
Completely remove existing Xcode installations (a mix of Xcode 14 and Xcode 15 beta 2)
Download the Xcode 15 beta 2 packages via https://developer.apple.com/download/ with all platforms selected.
Extract Xcode 15 app
Move it to Applications
Installation of the visionOS simulator fails; close Xcode 15
Open the visionOS_1_beta_Simulator_Runtime
Manually move the contents of /Library/Developer/CoreSimulator/Profiles/Runtimes/xrOS 1.0.simruntime to the same path locally.
Open Xcode-beta and receive a "could not verify the developer" notification
Run sudo spctl --master-disable (not feeling great about that)
Open Xcode-beta and see that the simulator is installed but "Not compatible with Xcode 15.0 beta 2"
Between the "developer not verified" warning (for an app from Apple) and the incompatibility on a platform built specifically to support things like the visionOS simulator, I'm disappointed in this rollout.
Any ideas of what could be going wrong? I'll leave some details below that may lead to some answers.
Location: Switzerland (Swiss Apple AppStore)
Chip: M1 Pro
Memory: 32GB
macOS: Ventura 13.4.1
Available space on Startup disk: ~1.4 TB
Newbie developer here, developing a visionOS app that requires live camera and microphone access.
Struggling to understand the Vision Pro's access-permission policy, I am seeking help from the community on where I can find policies around whether and how an app can gain access to the Vision Pro's cameras and microphone.
I'm hoping that getting user consent is all the access requires.
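For context, below is a minimal sketch of the standard consent flow on Apple platforms, assuming it carries over from iOS; whether visionOS actually exposes its cameras to third-party apps at all is exactly the part I cannot find documented:
```
import AVFoundation

// Standard capture-permission requests, assuming the iOS pattern applies.
// Info.plist must contain NSCameraUsageDescription and
// NSMicrophoneUsageDescription, or these requests will fail outright.
func requestCaptureAccess() {
    AVCaptureDevice.requestAccess(for: .video) { videoGranted in
        print("Camera access granted: \(videoGranted)")
    }
    AVCaptureDevice.requestAccess(for: .audio) { audioGranted in
        print("Microphone access granted: \(audioGranted)")
    }
}
```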
Any help is appreciated!
Hi all,
Does anyone know the dimensions of screenshots from Vision Pro?
Thanks
I am working on a project where changes in a window are reflected in a volumetric view that includes a RealityView. I have a shared data model between the window and the volumetric view, but it is unclear to me how I can programmatically refresh the RealityViewContent. Initially I tried holding the RealityViewContent passed from the RealityView closure in the data model, and I also tried embedding a .sink into the closure, but because the RealityViewContent is inout, neither of those works. Changes to the window's contents also do not cause the RealityView's update closure to fire. Is there a way to notify the RealityViewContent to update?
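In case it helps frame the question, here is a hypothetical sketch of the pattern I would expect to work: keep the shared state in an @Observable model and let the update closure react to properties it reads, rather than holding RealityViewContent directly. The type and entity names are mine, not from any sample:
```
import SwiftUI
import RealityKit
import Observation

// Hypothetical shared model; the window view mutates `scale`.
@Observable
final class SceneModel {
    var scale: Float = 1.0
}

struct VolumeView: View {
    var model: SceneModel

    var body: some View {
        RealityView { content in
            // Build the initial scene once.
            let sphere = ModelEntity(mesh: .generateSphere(radius: 0.1))
            sphere.name = "sphere"
            content.add(sphere)
        } update: { content in
            // Should re-run when model.scale changes, because it is read here.
            if let sphere = content.entities.first(where: { $0.name == "sphere" }) {
                sphere.scale = .init(repeating: model.scale)
            }
        }
    }
}
```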