How can I request access to the Enterprise APIs for Vision Pro with an individual developer account? I want it for learning and testing.
Sorry for the cross-post, but it's now two days in and this isn't fixed.
If you try to use Xcode 16.3b3 with visionOS, it won't download the visionOS SDK; it gives a 'network error', so you can't use the latest beta for Apple Vision Pro.
FB16927025
FB16917874
FB16910449
Hi Apple Team,
We noticed the following exciting changelog in the latest macOS 26 beta:
A new algorithm significantly improves PhotogrammetrySession reconstruction quality of low-texture objects not captured with the ObjectCaptureSession front end. It will be downloaded and cached once in the background when the PhotogrammetrySession is used at runtime. If network isn’t available at that time, the old low quality model will be used until the new one can be downloaded. There is no code change needed to get this improved model. (145220451)
However, after trying this on the latest beta and running some tests, we do not see any difference on objects with low texture, such as single-coloured surfaces. Is there anything we are missing? The machine is definitely connected to the internet, but we have no way of knowing from the logs whether the new model is being used.
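For reference, this is roughly how we exercise PhotogrammetrySession in our tests, reduced to a minimal sketch (the input and output paths are placeholders; the images were not captured with the ObjectCaptureSession front end, which is exactly the case the release note mentions):

import Foundation
import RealityKit

// Minimal sketch of our macOS test harness; paths are placeholders.
func reconstructLowTextureObject() async throws {
    let inputFolder = URL(fileURLWithPath: "/tmp/LowTextureObjectImages", isDirectory: true)
    let outputModel = URL(fileURLWithPath: "/tmp/LowTextureObject.usdz")

    // These images were NOT captured with ObjectCaptureSession, which is the
    // case the changelog says the new algorithm should improve.
    let session = try PhotogrammetrySession(
        input: inputFolder,
        configuration: PhotogrammetrySession.Configuration()
    )

    try session.process(requests: [.modelFile(url: outputModel, detail: .full)])

    for try await output in session.outputs {
        switch output {
        case .requestComplete(_, .modelFile(let url)):
            print("Reconstruction finished: \(url)")
        case .requestError(let request, let error):
            print("Request \(request) failed: \(error)")
        case .processingComplete:
            return
        default:
            break
        }
    }
}

Nothing in the session output tells us which reconstruction model was actually used, so a log marker or API flag for that would be very helpful.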
thanks
Platform: visionOS 26
Frameworks: RealityKit, SwiftUI
Component: ImagePresentationComponent
I’m working with the new ImagePresentationComponent from visionOS 26 and hitting a rendering limitation when switching to .spatialStereoImmersive viewing mode within a WindowGroup context.
This is what I’m seeing:
Pure immersive space: ImagePresentationComponent with .spatialStereoImmersive mode works perfectly in a standalone ImmersiveSpace
Mode switching API: All mode transitions work correctly (logs confirm the component updates)
Spatial content: .spatialStereo mode renders correctly in both window and immersive contexts.
This is where it’s breaking for me:
Window context: When the same RealityView + ImagePresentationComponent is placed inside a WindowGroup (even when that window is floating in a mixed immersive space), switching to .spatialStereoImmersive mode shows no visual change
The API calls succeed, state updates correctly, but the immersive content doesn’t render.
Apple’s Spatial Gallery demonstrates exactly what I’m trying to achieve:
Spatial photos are displayed in a window with what feels like a horizontal scroll view, using the system window control bar, etc.
Tapping a spatial photo smoothly transitions it to immersive mode in-place.
The immersive content appears to “grow” from the original window position just by changing ImagePresentationComponent viewing modes.
This proves the functionality should be possible, but I can’t determine the correct configuration.
So, my questions are:
Is there a specific RealityView or WindowGroup configuration required to enable immersive content rendering from window contexts?
Are there bounds/clipping settings that need to be configured to allow immersive content to “break out” of window constraints?
Does .spatialStereoImmersive require a specific rendering context that’s not available in windowed RealityView instances?
How do you think Apple’s Spatial Gallery app achieves this functionality?
For a little more context:
All viewing modes are available: [.mono, .spatialStereo, .spatialStereoImmersive]
The spatial photos are valid and work correctly in pure immersive space
Mixed immersive space is active when testing window context
No errors or warnings in console beyond the successful mode switching logs I’m getting
Any insights into the proper configuration for window-hosted immersive content would be appreciated.
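For reference, here is a minimal sketch of the window-hosted setup described above. makeSpatialPhotoComponent() is a hypothetical placeholder for however the ImagePresentationComponent is actually created from a spatial photo; the viewing modes are the ones listed above.

import RealityKit
import SwiftUI

// Hypothetical helper: stands in for however the component is built from a spatial photo.
func makeSpatialPhotoComponent() async -> ImagePresentationComponent? { nil }

struct SpatialPhotoWindow: View {
    @State private var component: ImagePresentationComponent?
    @State private var photoEntity = Entity()

    var body: some View {
        VStack {
            RealityView { content in
                content.add(photoEntity)
            } update: { _ in
                // Re-apply the component whenever the desired viewing mode changes.
                if let component {
                    photoEntity.components.set(component)
                }
            }

            Button("Go immersive") {
                // The mode switch succeeds and logs correctly, but inside a
                // WindowGroup nothing changes visually (the issue described above).
                component?.desiredViewingMode = .spatialStereoImmersive
            }
        }
        .task {
            component = await makeSpatialPhotoComponent()
        }
    }
}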
I have a visionOS app that plays audio using AVAudioEngine and presents both a window and an immersive space. If I close the window, the audio session gets interrupted and attempting to restart the session and audio engine has no effect. I need to dismiss the app, then reopen it, which reopens the main window, in order for audio to start playing again.
This is in all visionOS 2 betas. Note that I have background audio enabled for my app.
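For context, this is roughly the restart path I attempt after the interruption, as a minimal sketch (engine graph setup omitted); as described above, the restart appears to have no effect until the app is relaunched.

import AVFoundation

// Minimal sketch of the restart attempt after the session interruption.
final class AudioController {
    let engine = AVAudioEngine()

    init() {
        NotificationCenter.default.addObserver(
            forName: AVAudioSession.interruptionNotification,
            object: AVAudioSession.sharedInstance(),
            queue: .main
        ) { [weak self] notification in
            guard
                let info = notification.userInfo,
                let rawType = info[AVAudioSessionInterruptionTypeKey] as? UInt,
                let type = AVAudioSession.InterruptionType(rawValue: rawType),
                type == .ended
            else { return }
            self?.restart()
        }
    }

    func restart() {
        do {
            try AVAudioSession.sharedInstance().setActive(true)
            try engine.start()   // Restarting has no audible effect in the scenario described above.
        } catch {
            print("Failed to restart audio: \(error)")
        }
    }
}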
In VisionOS versions 2.1 and 2.2, I’m encountering a significant limitation when using the .immersionStyle(selection: .constant(.mixed), in: .mixed) mode, specifically in mixed immersive style. Here’s a breakdown of the behavior:
In full immersion mode (.immersionStyle(selection: .constant(.full), in: .full)), users can interact with and manipulate system windows while inside a 3D model, allowing typical interactions like moving windows, pinching, or activating UI switches.
However, in mixed immersive mode, using the exact same layout “inside” a 3D model (which doesn’t visually obstruct the window), users are unable to interact with window content or move the window. Basic interactions like pinching or toggling switches require users to physically touch these elements in AR space, which is inconsistent with the behavior in full immersion.
From a usability perspective, this restriction seems unnecessary, as the software should ideally allow for similar interaction capabilities across both immersive styles. The expected behavior is to enable window manipulation within a 3D model in mixed mode, matching the functionality observed in full immersion.
The scene in question is a house in which the user is placed during the immersion; that's why I am referring to the user being “inside” the scene.
Has anyone else experienced this or found a workaround?
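For reference, the two setups being compared boil down to the scene declarations below (a minimal sketch; ContentView and HouseImmersiveView are just stand-in names).

import RealityKit
import SwiftUI

// Stand-in views, only to make the sketch self-contained.
struct ContentView: View { var body: some View { Text("Controls") } }
struct HouseImmersiveView: View { var body: some View { RealityView { _ in } } }

@main
struct HouseApp: App {
    var body: some Scene {
        WindowGroup {
            ContentView() // the window with the switches and other controls
        }

        // Full immersion: windows stay fully interactable while the user is inside the house model.
        ImmersiveSpace(id: "FullHouse") {
            HouseImmersiveView()
        }
        .immersionStyle(selection: .constant(.full), in: .full)

        // Mixed immersion: the same layout, but window interaction breaks once the user is inside the model.
        ImmersiveSpace(id: "MixedHouse") {
            HouseImmersiveView()
        }
        .immersionStyle(selection: .constant(.mixed), in: .mixed)
    }
}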
We've recently discovered that our app crashes on startup on the latest visionOS 2.0 beta 5 (22N5297g) build. In fact, the entire field of view would dim down and visionOS would then restart, showing the Apple logo. Interestingly, no app crash is reported by Xcode during debug.
After investigation, we have isolated the issue to a specific USDZ asset in our app. Loading it in a sample, blank project also causes visionOS to reliably crash, or become extremely unresponsive with rendering artifacts everywhere.
This looks like a potentially serious issue. Even if the asset is problematic, loading it should not crash the entire OS. We have filed feedback FB14756285, along with a demo project. Hopefully someone can take a look. Thanks!
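For completeness, the blank-project repro is nothing more than loading the asset into a RealityView ("ProblemAsset" is a placeholder for the USDZ attached to the feedback):

import RealityKit
import SwiftUI

// Blank-project repro: loading the problematic USDZ is enough to trigger the issue.
struct ReproView: View {
    var body: some View {
        RealityView { content in
            // "ProblemAsset" stands in for the asset attached to FB14756285.
            if let entity = try? await Entity(named: "ProblemAsset") {
                content.add(entity)
            }
        }
    }
}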
Suppose there is an ImmersiveSpace, and an Entity() is added to the space as a child entity of the content. This entity is responsible for playing background music by calling prepareAudio, obtaining a controller, and playing the music (see the basic code below).
While the music is playing, a .plain window and the ImmersiveSpace are both presented. I believe the ImmersiveSpace is holding the handle of the controller, so as long as the ImmersiveSpace is open, the music shouldn't stop.
However, if I close the .plain window (using the system-level close button), the music just stops, even though the ImmersiveSpace is still open. If I check the value of controller.isPlaying at that point, it is still true, but you cannot hear the music anymore.
To reproduce, simply open a visionOS template App project, select Volume and Full Immersive, and replace some code in ImmersiveView.swift with the code below. Also, drag in any .mp3 file and replace the AudioFileResource's name. You should then be able to reproduce this bug.
RealityView { content in
    // Add the initial RealityKit content
    if let immersiveContentEntity = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
        content.add(immersiveContentEntity)

        // Put skybox here. See example in World project available at
        // https://developer.apple.com/
        if let audioResource = try? await AudioFileResource(named: "anyMP3file.mp3") {
            let ent = Entity()
            immersiveContentEntity.addChild(ent)
            let controller = ent.prepareAudio(audioResource)
            controller.play()
        }
    }
}
I wonder why this happens. How should I keep the music playing when I close the .plain window?
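One direction I've been considering (a sketch only, not a confirmed fix) is to keep the playback controller in state that outlives the .plain window, and to restart playback explicitly after that window closes, since isPlaying can't be trusted here:

import RealityKit

// Sketch of a possible workaround: hold the controller outside the RealityView closure.
final class MusicPlayer {
    static let shared = MusicPlayer()
    var controller: AudioPlaybackController?

    func resumeIfSilent() {
        guard let controller else { return }
        // isPlaying still reports true when the bug hits, so restart explicitly.
        controller.stop()
        controller.play()
    }
}

Inside the RealityView closure the assignment would become MusicPlayer.shared.controller = ent.prepareAudio(audioResource), and resumeIfSilent() could be called when the window scene disappears. I haven't verified whether this survives the window being closed, which is really the core of my question.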
Thanks!
Hello,
I've been tinkering a bit with TextComponent.
Based on the docs it seems like this component should always render sharp and nice text, no matter how close the user gets:
RealityKit dynamically adjusts the backing size to a value that results in high-fidelity text at its current location.
And it does on visionOS, but on iOS and macOS the text gets pixelated when I get close to it, as if it's just rendered once as a plain image texture.
Can anyone tell me if this is expected behavior or a bug?
Here are two screenshots for comparison (iPhone and Vision Pro):
Thanks!
When we use RoomPlan to collect data and process it with StructureBuilder, the app crashes.
Crash thread:
RoomScanCore.offlineFloorPlanGeneration
How should I deal with this issue? I’ve already implemented crash capture, but no crash was logged—the app just crashes directly.
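For context, our processing step is essentially the standard StructureBuilder call, shown here as a minimal sketch (capturedRooms is the array of CapturedRoom values collected from our RoomCaptureSession delegate):

import RoomPlan

// Minimal sketch of the processing step that crashes.
func buildStructure(from capturedRooms: [CapturedRoom]) async throws -> CapturedStructure {
    let builder = StructureBuilder(options: [.beautifyObjects])
    // The crash surfaces in RoomScanCore.offlineFloorPlanGeneration during this call.
    return try await builder.capturedStructure(from: capturedRooms)
}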
We're developing an iOS application that integrates RoomCaptureSession with ARSCNView for room scanning. Our implementation differs from the standard RoomCaptureView because we need custom UI guidance with 3D dots placed in the scanning environment to guide users through the capture process.
Bug Description:
The application crashes when users attempt to scan multiple rooms or apartments in sequence. The crash specifically occurs with the following pattern:
User successfully scans first room with multiple hotspots (working correctly)
User stops scanning, moves to a new room
In the new room, first 1-2 hotspots work correctly
Application crashes when attempting to scan additional hotspots
Technical Details:
Error: SLAM Anchor assertion failure in SlamAnchor.cpp:37 : HasValidPose()
Crash occurs in Thread 27 with CAPIDetectionOutputFwdNode
Error suggests invalid positioning when placing AR anchors
Steps to Reproduce:
Start room scan
Complete multiple hotspot captures in first room
Stop scanning
Start new room scan
Capture 1-2 hotspots successfully
Attempt additional hotspot captures -> crashes
Attempted Solutions:
Implemented anchor cleanup between sessions
Added position validation before anchor placement
Implemented ARSession error handling
Added proper thread management for AR operations
Environment:
Device: iPhone 14 Pro (LiDAR equipped)
iOS Version: 18.1.1 (22B91)
Testing through TestFlight
Crash Log Details:
Exception Type: EXC_CRASH (SIGABRT)
Exception Codes: 0x0000000000000000, 0x0000000000000000
Exception Note: EXC_CORPSE_NOTIFY
Triggered by Thread: 27
Thread 27 Crashed:
0 libsystem_kernel.dylib 0x00000001f0cc91d4 __pthread_kill + 8
1 libsystem_pthread.dylib 0x0000000228e12ef8 pthread_kill + 268
2 libsystem_c.dylib 0x00000001a86bbad8 abort + 128
3 AppleCV3D 0x0000000234d71a28 cv3d::vio::capi::SlamAnchor::SlamAnchor
Question:
Is there a recommended approach for handling multiple room captures with custom ARSCNView integration? The standard RoomCaptureView implementation doesn't show this behavior, but we need the custom guidance functionality that ARSCNView provides.
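For what it's worth, this is the multi-room flow as we understand the recommended pattern: one ARSession shared across rooms and kept running between scans (a sketch; whether this coexists cleanly with a custom ARSCNView is part of the question).

import ARKit
import RoomPlan

// Sketch of the multi-room flow: a single ARSession shared across room scans.
final class MultiRoomScanner {
    private let arSession = ARSession()
    private var captureSession: RoomCaptureSession?
    private(set) var capturedRooms: [CapturedRoom] = []

    func startRoom() {
        let session = RoomCaptureSession(arSession: arSession)
        captureSession = session
        session.run(configuration: RoomCaptureSession.Configuration())
    }

    func finishRoom(with room: CapturedRoom) {
        capturedRooms.append(room)
        // Keep the underlying ARSession (and its anchors) alive so the next room
        // is relocalized in the same coordinate space.
        captureSession?.stop(pauseARSession: false)
    }
}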
Code and full crash logs can be provided if needed.
Hello,
Let me ask you a question about Apple Immersive Video.
https://www.apple.com/newsroom/2024/07/new-apple-immersive-video-series-and-films-premiere-on-vision-pro/
I am currently considering implementing a feature to play Apple Immersive Video as a background scene in the app I developed, using 3DCG-created content converted into Apple Immersive Video format.
First, I would like to know if it is possible to integrate Apple Immersive Video into an app.
Could you provide information about the required software and the integration process for incorporating Apple Immersive Video into an app?
It would be great if you could also share any helpful website resources.
I am considering creating Apple Immersive Video content and would like to know about the necessary equipment and software for producing both live-action footage and 3DCG animation videos.
As I mentioned earlier, I’m planning to play Apple Immersive Video as a background in the app. In doing so, I would also like to place some 3D models as RealityKit entities and spatial audio elements.
I’m also planning to develop the visionOS app as a Full Space Mixed experience. Is it possible to have an immersive viewing experience with Apple Immersive Video in Full Space Mixed mode? Does Apple Immersive Video support Full Space Mixed?
I’ve asked several questions, and that’s all for now. Thank you in advance!
How do you force quit apps on the Vision Pro simulator?
I've read online that you can double-press the Digital Crown on a real Vision Pro device, but there is no such button on the simulator. So far I've tried:
Pressing the home button twice (⇧⌘H)
Pressing the Siri button twice (⌥⇧⌘H)
Neither of them works.
I'm on Xcode 15.0 beta 5 (15A5209g) & visionOS 1.0 beta 2 (21N5207e)
The object tracking feature has been announced for visionOS 2.0+. Is there any support for it in Unity PolySpatial, or is it only available in Swift and Xcode?
Hi,
I was wondering whether the Enterprise API for visionOS 2 includes access to the raw LiDAR data from the Apple Vision Pro, or any intermediate data representation (like the depthMap shown in this post), or whether there is any other way to get access to this data?
Thanks in advance!
We have discovered that our UIViewRepresentable view isn't being dismantled after its window is dismissed via dismissWindow().
This seems to result in a leak of our custom Coordinator class. Every time the user opens a new window, a new Coordinator is created; if the user then dismisses the window manually, or we dismiss it programmatically, the Coordinator remains in memory with no way to destroy it.
Is this expected behavior? How can we be sure to clean up our Coordinator when the view's window is closed? Thanks.
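For reference, this is the kind of cleanup we expected to hook into, shown as a minimal sketch with a hypothetical representable (the notification observer stands in for our Coordinator's real resources); as described above, the view is never dismantled after dismissWindow(), so this never runs.

import SwiftUI
import UIKit

// Minimal sketch; MyRepresentable and its observer are stand-ins.
struct MyRepresentable: UIViewRepresentable {
    final class Coordinator {
        var observer: NSObjectProtocol?

        func tearDown() {
            if let observer {
                NotificationCenter.default.removeObserver(observer)
            }
            observer = nil
        }
    }

    func makeCoordinator() -> Coordinator { Coordinator() }

    func makeUIView(context: Context) -> UIView {
        let view = UIView()
        context.coordinator.observer = NotificationCenter.default.addObserver(
            forName: UIApplication.didBecomeActiveNotification,
            object: nil,
            queue: .main
        ) { _ in
            // ... react to the notification ...
        }
        return view
    }

    func updateUIView(_ uiView: UIView, context: Context) {}

    // We expected this to run when the hosting window goes away, giving us a
    // place to release the Coordinator's resources, but it doesn't fire.
    static func dismantleUIView(_ uiView: UIView, coordinator: Coordinator) {
        coordinator.tearDown()
    }
}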
I am trying to apply a drag gesture only to entities that have a specific component. My entities have that component on them, along with the input target and collision components. The gesture works when I use the .targetedToAnyEntity() modifier, but the .targetedToEntity(where:) modifier fails.
struct ImmersiveView: View {
    var body: some View {
        RealityView { content in
            // Add the initial RealityKit content
            if let scene = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
                content.add(scene)
            }
        }
        .gesture(
            DragGesture()
                .targetedToEntity(where: .has(ToyComponent.self))
                .onChanged { value in
                    value.entity.position = value.convert(value.location3D, from: .local, to: value.entity.parent!)
                }
        )
    }
}
What could be wrong here?
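For completeness, this is the setup I expect to satisfy the predicate, sketched with a local stand-in for ToyComponent (in the real project the component lives with the Reality Composer Pro content); whether .targetedToEntity(where:) only checks the entity that is actually hit, rather than walking up the hierarchy, is exactly what I'm unsure about.

import RealityKit
import SwiftUI

// Local stand-in for the real component.
struct ToyComponent: Component {}

@main
struct ToyApp: App {
    init() {
        // Custom components need to be registered before entities that use them are loaded.
        ToyComponent.registerComponent()
    }

    var body: some Scene {
        ImmersiveSpace(id: "Immersive") {
            ImmersiveView()
        }
    }
}

// The entity that owns the collision and input-target components also carries
// ToyComponent, so a hit on its collision shape should match the predicate.
func makeToy() -> Entity {
    let toy = Entity()
    toy.components.set(ToyComponent())
    toy.components.set(InputTargetComponent())
    toy.components.set(CollisionComponent(shapes: [.generateBox(size: [0.2, 0.2, 0.2])]))
    return toy
}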
Hi guys,
I'm working in the VFX industry, and my question is: is it possible to create immersive video directly from a virtual scene built in DCC software like Maya, render it into footage, encode it into immersive video, and finally play it back on Vision Pro?
Thanks.
A spatial photo in a RealityView has a default corner radius. I made a parallax effect with spatial photos in a ScrollView (like Spatial Gallery), but the corner radius disappeared on the left and right spatial photos. I've tried the .clipShape and .mask modifiers, but they didn't work. How can I clip or mask a spatial photo with a corner-radius effect?
Hello,
There are odd artifacts (one looks like an image placeholder) appearing when dismissing an immersive space that is displaying an ImagePresentationComponent. Both artifacts look like widgets.
See below our simple code displaying the ImagePresentationComponent and the images of the odd artifacts that appear briefly when dismissing the immersive space.
import OSLog
import RealityKit
import SwiftUI
struct ImmersiveImageView: View {
    let logger = Logger(subsystem: AppConstant.SUBSYSTEM, category: "ImmersiveImageView")

    @Environment(AppModel.self) private var appModel

    var body: some View {
        RealityView { content in
            if let currentMedia = appModel.currentMedia,
               var imagePresentationComponent = currentMedia.imagePresentationComponent {
                let imagePresentationComponentEntity = Entity()

                switch currentMedia.type {
                case .iphoneSpatialMovie:
                    logger.info("\(#function) \(#line) spatial3DImmersive display for \(String(describing: currentMedia))")
                    imagePresentationComponent.desiredViewingMode = .spatial3DImmersive
                case .twoD:
                    logger.info("\(#function) \(#line) spatial3DImmersive display for \(String(describing: currentMedia))")
                    imagePresentationComponent.desiredViewingMode = .spatial3DImmersive
                case .visionProConvertedSpatialPhoto:
                    logger.info("\(#function) \(#line) spatialStereoImmersive display for \(String(describing: currentMedia))")
                    imagePresentationComponent.desiredViewingMode = .spatialStereoImmersive
                default:
                    logger.error("\(#function) \(#line) Unsupported media type \(currentMedia.type)")
                    assertionFailure("Unsupported media type \(currentMedia.type)")
                }

                imagePresentationComponentEntity.components.set(imagePresentationComponent)
                imagePresentationComponentEntity.position = AppConstant.Position.spacialImagePosition
                content.add(imagePresentationComponentEntity)
            }

            let toggleViewAttachmentComponent = ViewAttachmentComponent(rootView: ToggleImmersiveSpaceButton())
            let toggleViewAttachmentComponentEntity = Entity(components: toggleViewAttachmentComponent)
            toggleViewAttachmentComponentEntity.position = SIMD3<Float>(
                AppConstant.Position.spacialImagePosition.x + 1,
                AppConstant.Position.spacialImagePosition.y,
                AppConstant.Position.spacialImagePosition.z
            )
            toggleViewAttachmentComponentEntity.scale = AppConstant.Scale.attachments
            content.add(toggleViewAttachmentComponentEntity)
        }
    }
}