Hi, I'm trying to run the visionOS simulator, but all I see is a black screen. The tvOS simulator works fine, so this seems to be an issue specific to visionOS. The Xcode preview doesn't work for visionOS either.
Is it possible to specify a default window size for a 2D window in visionOS? I know this is normally achieved by modifying the WindowGroup with .defaultSize(width:height:), but I get an error that this modifier was not included in "xrOS". I am able to specify .defaultSize(width:height:depth:) for a volumetric window, but it has no effect when applied to a 2D one.
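For reference, a minimal sketch of the two cases (the app name is mine):

```swift
import SwiftUI

@main
struct SizeTestApp: App {
    var body: some Scene {
        // Volumetric window: compiles, and the default size is respected.
        WindowGroup(id: "volume") {
            Text("Volumetric")
        }
        .windowStyle(.volumetric)
        .defaultSize(width: 0.5, height: 0.5, depth: 0.5, in: .meters)

        // Plain 2D window: this is where I get the error that
        // defaultSize(width:height:) was not included in "xrOS".
        WindowGroup(id: "planar") {
            Text("2D")
        }
        .defaultSize(width: 800, height: 600)
    }
}
```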
I have an iPad app I've just started testing in visionOS, and it has gone pretty well so far except for one issue: none of the long-press or swipe gestures in my List work.
The app is SwiftUI-based, so I'm using a List with the swipeActions and contextMenu modifiers.
Could these be broken or unsupported, or am I misunderstanding how to trigger them in the simulator?
For a long press, I'd assume just holding down the mouse button should work; that does work in Safari.
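Here's a stripped-down version of what I'm doing (the model and actions are placeholders); both modifiers behave as expected on iPadOS:

```swift
import SwiftUI

// Minimal List with the two gesture-driven modifiers that aren't
// firing for me in the visionOS simulator.
struct ItemListView: View {
    @State private var items = ["One", "Two", "Three"]

    var body: some View {
        List {
            ForEach(items, id: \.self) { item in
                Text(item)
                    .swipeActions(edge: .trailing) {
                        Button(role: .destructive) {
                            items.removeAll { $0 == item }
                        } label: {
                            Label("Delete", systemImage: "trash")
                        }
                    }
                    .contextMenu {
                        Button("Duplicate") {
                            items.append(item)
                        }
                    }
            }
        }
    }
}
```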
Translated Report (Full Report Below)
Incident Identifier: 65E8ECA4-02EA-4B3C-ABF6-1D403C816375
CrashReporter Key: 023C6B08-044C-AF61-DEC5-A0D76D107688
Hardware Model: MacBookPro17,1
Process: BossXX-RD [75031]
Path: /Users/USER/Library/Developer/CoreSimulator/Devices/9DD37986-B14F-4D2F-A5AC-E66A3D3B079F/data/Containers/Bundle/Application/4108A013-C774-4CD3-A2E2-0FCCE31CA5D0/BossXX-RD.app/BossXX-RD
Identifier: com.***.***
Version: 11.120 (11.120)
Code Type: X86-64 (Native)
Role: Foreground
Parent Process: launchd_sim [74675]
Coalition: com.apple.CoreSimulator.SimDevice.9DD37986-B14F-4D2F-A5AC-E66A3D3B079F [157714]
Responsible Process: SimulatorTrampoline [3052]
Date/Time: 2023-06-25 19:54:36.0512 +0800
Launch Time: 2023-06-25 19:54:26.2936 +0800
OS Version: macOS 13.4 (22F66)
Release Type: User
Report Version: 104
Exception Type: EXC_BREAKPOINT (SIGTRAP)
Exception Codes: 0x0000000000000001, 0x000000010f36654c
Termination Reason: SIGNAL 9 Killed: 9
Terminating Process: debugserver [75046]
Triggered by Thread: 0
Thread 0 Crashed:
0 ??? 0x10f36653c ???
1 ??? 0x10f3c5280 ???
2 dyld 0x20f74421a dyld4::prepareSim(dyld4::RuntimeState&, char const*) + 968
3 dyld 0x20f742abc dyld4::prepare(dyld4::APIs&, dyld3::MachOAnalyzer const*) + 249
4 dyld 0x20f7423bd start + 1805
Thread 1:: com.apple.rosetta.exceptionserver
0 ??? 0x7ff7fff3c694 ???
Thread 2:: com.apple.rosetta.debugserver
0 ??? 0x7ff7fff3c694 ???
Thread 0 crashed with X86 Thread State (64-bit):
rax: 0x0000000000000001 rbx: 0x00000003161e8e48 rcx: 0x000000011795a000 rdx: 0x00000003161e8ad0
rdi: 0x0000000000000000 rsi: 0x0000000000000001 rbp: 0x00000003161e8b80 rsp: 0x00000003161e8ac8
r8: 0x0000000000000000 r9: 0x0000000000000001 r10: 0x0000000000000000 r11: 0x00000001177dd740
r12: 0x00000003161e8ae0 r13: 0x00000001177df960 r14: 0x0000000000000000 r15: 0x0000000000000001
rip: rfl: 0x0000000000000283
tmp0: 0x000000020f73dd03 tmp1: 0x0000000000000003 tmp2: 0x000000020f770340
Binary Images:
0x20f73c000 - 0x20f7d7fff dyld () <9e98a840-a3ac-31c1-ab97-829af9bd6864> /usr/lib/dyld
0x11795a000 - 0x1179c2fff dyld_sim () /Volumes/VOLUME//dyld_sim
0x0 - 0xffffffffffffffff ??? () <00000000-0000-0000-0000-000000000000> ???
Error Formulating Crash Report:
dyld_process_snapshot_get_shared_cache failed
EOF
Model: MacBookPro17,1, BootROM 8422.121.1, proc 8:4:4 processors, 16 GB, SMC
Graphics: Apple M1, Apple M1, Built-In
Display: Color LCD, 2560 x 1600 Retina, Main, MirrorOff, Online
Display: PHL 272B7QPJ, 2560 x 1440 (QHD/WQHD - Wide Quad High Definition), MirrorOff, Online
Memory Module: LPDDR4, Hynix
AirPort: spairport_wireless_card_type_wifi (0x14E4, 0x4378), wl0: Jan 12 2023 05:52:26 version 18.20.383.14.7.8.149 FWID 01-1469d19d
Bluetooth: Version (null), 0 services, 0 devices, 0 incoming serial ports
Network Service: Wi-Fi, AirPort, en0
Network Service: iPhone, Ethernet, en9
USB Device: USB31Bus
USB Device: USB3.0 Hub
USB Device: USB2.0 Hub
USB Device: iPhone
USB Device: Apple Watch Magnetic Charging Cable
USB Device: USB 2.0 BILLBOARD
USB Device: USB31Bus
Thunderbolt Bus: MacBook Pro, Apple Inc.
Thunderbolt Bus: MacBook Pro, Apple Inc.
After taking a look at the Deliver Video Content for Spatial Experiences session, alongside the Destination Video sample code, I'm a bit unclear on how one might go about creating stereoscopic content that can be bundled up as an MV-HEVC file and played on Vision Pro.
I see the ISO Base Media File Format and Apple HEVC Stereo Video format specifications, alongside the new mvhevc1440x1440 output presets in AVFoundation, but I'm unclear what sort of camera equipment could be used to capture stereoscopic content, and how one might create an MV-HEVC file using a command-line tool that leverages AVFoundation/VideoToolbox, or using something like Final Cut Pro.
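The only concrete handle I've found so far is the new output-settings preset; a minimal sketch of inspecting it, assuming I'm reading the API right:

```swift
import AVFoundation

// Inspect the suggested video settings for the new MV-HEVC preset.
// Actually producing a two-eye file presumably also involves an
// AVAssetWriter fed with multiview sample buffers, which is the part
// I can't find documentation for.
if let assistant = AVOutputSettingsAssistant(preset: .mvhevc1440x1440) {
    print(assistant.videoSettings ?? [:])
}
```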
Is there guidance available on how to film and create this type of file? Thanks!
For those with existing projects that rely on a SceneDelegate and are primarily UIKit-based, I am curious how one might launch a window with the volumetric style, or an ImmersiveSpace. Is this possible without a SwiftUI-based @main entry point?
How do you enable WebXR support in visionOS's Safari using the simulator? Is there a hidden option or flag somewhere? I believe I've seen videos showcasing WebXR in the simulator, so I think it is possible.
Hello,
Although the documentation says PointLight is available for visionOS (https://developer.apple.com/documentation/realitykit/pointlight), it doesn't work when I try to add a light:
let light = DirectionalLight()
This throws the error: 'DirectionalLight' is unavailable in xrOS.
Will lights be available later in visionOS RealityKit?
Hi,
I created a visionOS demo app and tried to use a custom material shader for a box model, but I failed to compile the project.
Xcode says that "'CustomMaterial' is unavailable in xrOS".
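For reference, what I'm attempting is roughly this (the shader name is a placeholder):

```swift
import Metal
import RealityKit

// Roughly my setup; the CustomMaterial lines are where Xcode reports
// "'CustomMaterial' is unavailable in xrOS".
func makeBoxWithCustomShader() throws -> ModelEntity {
    let device = MTLCreateSystemDefaultDevice()!
    let library = device.makeDefaultLibrary()!
    let surfaceShader = CustomMaterial.SurfaceShader(
        named: "mySurfaceShader",  // placeholder Metal function name
        in: library
    )
    let material = try CustomMaterial(surfaceShader: surfaceShader,
                                      lightingModel: .lit)
    return ModelEntity(mesh: .generateBox(size: 0.2), materials: [material])
}
```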
Is there a way to use a custom shader for a RealityKit ModelEntity in visionOS?
SceneReconstructionProvider.isSupported and PlaneDetectionProvider.isSupported both return false when running in the simulator (Xcode 15b2).
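For reference, the check in question is just this:

```swift
import ARKit

// Both of these report false under the visionOS simulator (Xcode 15b2).
func logProviderSupport() {
    print("scene reconstruction:", SceneReconstructionProvider.isSupported)
    print("plane detection:", PlaneDetectionProvider.isSupported)
}
```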
There is no mention of this in the release notes. It seems this makes any kind of AR app that depends on scene understanding impossible to run in the simulator.
For example, the code described in this article cannot run in the simulator: https://developer.apple.com/documentation/visionos/incorporating-surroundings-in-an-immersive-experience
Am I missing something, or is this really the current state of the simulator?
Does this mean that if we want to build mixed-immersion apps, we need to wait for access to Vision Pro hardware?
Throughout the WWDC guides and videos, Apple claims existing iOS and iPadOS apps will run in the visionOS simulator "unmodified." But none of my existing UIKit apps do.
xrOS is installed, and it does run for new project templates. But even after making sure "Apple Vision (Designed for iPad)" is added to my project settings, the destination never appears in the picker UI.
What is Xcode detecting? If existing apps must be using SwiftUI or some such, then Apple needs to state that as a requirement.
And if that is the case, is there a migration example of meeting this requirement without breaking or rewriting my apps?
Will I be able to open an ARSession with ARFaceTrackingConfiguration on visionOS?
Will I have access to the face blend shapes?
Hi,
I would like to learn how to create custom materials using Shader Graph in Reality Composer Pro. I would like to know more about Shader Graph in general, including descriptions of the nodes and how a material's appearance changes when nodes are connected. However, I cannot find a manual for Shader Graph in Reality Composer Pro, which leaves me totally clueless about how to create custom materials.
Thanks.
Sadao Tokuyama
https://1planet.co.jp/
https://twitter.com/tokufxug
Hi,
is there a way in visionOS to anchor an entity to the POV via RealityKit?
I need an entity which is always fixed to the 'camera'.
I'm aware that this is discouraged from a design perspective as it can be visually distracting. In my case though I want to use it to attach a fixed collider entity, so that the camera can collide with objects in the scene.
Edit:
ARView on iOS has a lot of very useful helper properties and functions like cameraTransform (https://developer.apple.com/documentation/realitykit/arview/cameratransform)
How would I get this information on visionOS? RealityView's content does not seem to offer anything comparable.
An example use case: I would like to add an entity to the scene at the user's eye level, which depends on their height.
I found https://developer.apple.com/documentation/realitykit/realityrenderer, which has an activeCamera property, but so far it's unclear to me in which context RealityRenderer is used and how I could access it.
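The closest thing I've found so far is a head-target anchor; a sketch of what I mean (I don't know yet whether it tracks the way ARView.cameraTransform did on iOS):

```swift
import SwiftUI
import RealityKit

// Sketch: fix an entity to the wearer's head pose via an anchor.
struct POVColliderView: View {
    var body: some View {
        RealityView { content in
            let headAnchor = AnchorEntity(.head)
            // Stand-in for my fixed collider entity.
            let collider = Entity()
            collider.components.set(
                CollisionComponent(shapes: [.generateSphere(radius: 0.1)])
            )
            headAnchor.addChild(collider)
            content.add(headAnchor)
        }
    }
}
```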
Appreciate any hints, thanks!
In full immersive (VR) mode on visionOS, if I want to use Compositor Services and a custom Metal renderer, can I still get the user's hands texture so my hands appear as they do in reality? If so, how?
If not, is this a valid feature request in the short term? It's purely for aesthetic reasons: I'd like to see my own hands, even in immersive mode.
When I create a USDZ file from the original Reality Composer (non-Pro) and view it in the visionOS simulator, the transforms and rotations don't look the same.
For example, a simple Tap and Flip behavior does not rotate the same way in visionOS.
Should we regard Reality Composer as discontinued software and work only with Reality Composer Pro?
Hopefully Apple will combine the features from the original Reality Composer into the new Reality Composer Pro!
Hello,
When an iOS app runs on Vision Pro in compatibility mode, is there a flag, such as isiOSAppOnVision, to determine the underlying OS at runtime, just like ProcessInfo.isiOSAppOnMac? It would be useful for optimizing the app for visionOS.
Already checked but not useful (see the sketch after this list):
#if os(xrOS) does not work in compatibility mode, since no code is recompiled.
UIDevice.userInterfaceIdiom returns .pad instead of .reality.
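A sketch of those two checks, plus the hypothetical flag I'm hoping for (isiOSAppOnVision does not exist as far as I know):

```swift
import UIKit

func isRunningOnVisionPro() -> Bool {
    #if os(xrOS)
    // Never reached by an iOS app in compatibility mode: the binary runs
    // unmodified, so nothing is recompiled against the xrOS SDK.
    return true
    #else
    // Reports .pad rather than .reality in compatibility mode.
    if UIDevice.current.userInterfaceIdiom == .reality {
        return true
    }
    // Hypothetical, by analogy with the real ProcessInfo.isiOSAppOnMac:
    // return ProcessInfo.processInfo.isiOSAppOnVision
    return false
    #endif
}
```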
Thanks.
Apple docs for RealityView state:
"You can also use the optional update closure on your RealityView to update your RealityKit content in response to changes in your view's state."
Unfortunately, I've not been able to get this to work.
All of my 3D content is programmatically generated - I'm not using any external 3D modeling tools. I have an object that conforms to ObservableObject. Its @Published variables define the size of a programmatically created Entity. Using the initial values of these @Published variables, the first rendering of the RealityView { content in } works like a charm.
Basic structure is this:
var body: some View {
    RealityView { content in
        // create original 3D content using initial values of @Published variables - works perfectly
    } update: { content in
        // modify 3D content in response to changes of @Published variables - never works
    }
}
Debug statements show that the update: closure gets called as expected, based upon changes in the view model's @Published variables. However, the 3D content never changes, even though it is based upon those @Published variables.
If the @Published variables are used in the first rendering, and the update: closure is called whenever these variables change, then why isn't the update: closure updating the RealityKit content as described in the Apple docs?
I've tried everything I can think of, including removing all objects in the update: closure and replacing them with the same call that populated them in the first rendering. Debug statements show that the new @Published values are correct when the update: closure is called, but the RealityView never changes.
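For reference, here is an illustrative reduction of the pattern (the model and view names are mine; my real content is more involved):

```swift
import SwiftUI
import RealityKit

// Illustrative stand-in for my view model.
final class SizeModel: ObservableObject {
    @Published var size: Float = 0.1
}

struct BoxView: View {
    @ObservedObject var model: SizeModel

    var body: some View {
        RealityView { content in
            // First render: works, using the initial @Published value.
            let box = ModelEntity(mesh: .generateBox(size: model.size))
            box.name = "box"
            content.add(box)
        } update: { content in
            // Called on every @Published change (confirmed via debug
            // prints), but the visible content never updates for me.
            if let box = content.entities.first(where: { $0.name == "box" }) {
                box.scale = .one * (model.size / 0.1)
            }
        }
    }
}
```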
Known limitation of the beta? Programmer error? Thoughts?
This is the HelloWorld project from https://developer.apple.com/documentation/visionos/world.
In ViewModel.swift, there are tons of:
Expansion of macro 'ObservationTracked' produced an unexpected 'init' accessor
I still get this even after "Clean Build Folder".
Like many of us, I have this question: on Vision Pro, will users get an additional storage option for the data that is captured, or will cloud storage be the only option?