The new Mac virtual display feature on visionOS 2 offers a curved/panoramic window. I was wondering if this is simply a property that can be applied to a window, or if it involves an immersive mode or SceneKit/RealityKit?
I am experimenting with RealityKit to set up a portal. Everything works, but I was wondering where the scene's origin is with respect to the front of the portal window?
From experiments, the origin's X and Y appear to be at the center of the portal window, while the origin's Z appears to be about a meter behind the portal window.
Is this (at least roughly) correct? Is it documented anywhere?
PS. I began with the standard visionOS app and edited the Reality Composer Pro file to create the scene.
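For reference, a portal setup of the kind described here typically looks roughly like this (a simplified sketch; "Immersive" is the scene name from the template):
import SwiftUI
import RealityKit
import RealityKitContent

struct PortalView: View {
    var body: some View {
        RealityView { content in
            // World whose contents are only visible through the portal.
            let world = Entity()
            world.components.set(WorldComponent())
            if let scene = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
                world.addChild(scene)
            }

            // A 1 m x 1 m portal plane that reveals the world's contents.
            let portal = Entity()
            portal.components.set(ModelComponent(mesh: .generatePlane(width: 1, height: 1),
                                                 materials: [PortalMaterial()]))
            portal.components.set(PortalComponent(target: world))

            content.add(world)
            content.add(portal)
        }
    }
}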
Hello, I am creating a visionOS application in which a simple web browser is implemented in an immersive view using WKWebView. Presentations cause a crash with "Presentations are not permitted within volumetric window scenes."
I am trying to apply a drag gesture only to specific entities that have a specific component. My entities have that component on them, along with the input target and collision components. The gestures work when I use the .targetedToAnyEntity() modifier, but the .targetedToEntity(where:) modifier fails:
struct ImmersiveView: View {
    var body: some View {
        RealityView { content in
            // Add the initial RealityKit content
            if let scene = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
                content.add(scene)
            }
        }
        .gesture(
            DragGesture()
                .targetedToEntity(where: .has(ToyComponent.self))
                .onChanged({ value in
                    value.entity.position = value.convert(value.location3D, from: .local, to: value.entity.parent!)
                })
        )
    }
}
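For context, a component set up as described (a custom ToyComponent plus input target and collision) would look roughly like this (simplified sketch; names are illustrative):
import RealityKit

// Marker component identifying the entities that should be draggable.
struct ToyComponent: Component {}

// Applied to each toy entity, alongside input target and collision components.
func makeDraggable(_ entity: Entity) {
    entity.components.set(ToyComponent())
    entity.components.set(InputTargetComponent())
    entity.components.set(CollisionComponent(shapes: [.generateBox(size: [0.2, 0.2, 0.2])]))
}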
What could be wrong here?
This is related to the WWDC presentation, What's new in Metal rendering for immersive apps.
Specifically, the macOS spatial streaming to visionOS feature. For reference: the page in the docs.
The presentation demonstrates it using a full immersive space and Metal rendering with Compositor Services.
I'd like clarity on a few things:
Is the remote device wireless, or must the visionOS device be connected via a wired connection?
Is there a limit to the number of remote devices, and if not, could macOS render different things per remote device simultaneously?
Can I also use mixed mode with passthrough enabled, instead of just a fully-immersive mode?
Can I use RealityKit instead of Metal? If so, could someone point me to an example?
Hello Developers,
I am currently in the initial planning stages of my bachelor thesis in computer science, where I will be developing an application in collaboration with a manufacturer of large-scale machinery. One of the core features I aim to implement is the ability for multiple Apple Vision Pro users to view the same object in augmented reality simultaneously, each from their respective positions relative to the object.
I am still exploring how best to achieve this feature. My initial approach involves designating one device as the host of a "room" within the application, allowing other users to join. If I can accurately determine the relative positions of all users to the host device, it should be possible to display the AR content correctly in terms of angle, size, and location for each user.
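To illustrate the "room" idea, a minimal GroupActivities sketch (identifier and metadata are placeholders, not a definitive design) might look like this:
import GroupActivities

// A shared activity that the host starts and other participants join.
struct MachineInspectionActivity: GroupActivity {
    static let activityIdentifier = "com.example.machine-inspection"

    var metadata: GroupActivityMetadata {
        var metadata = GroupActivityMetadata()
        metadata.title = "Inspect Machine Together"
        metadata.type = .generic
        return metadata
    }
}

// Host side: activate the activity so other users can join the session.
func startSharedRoom() async throws {
    let activity = MachineInspectionActivity()
    switch await activity.prepareForActivation() {
    case .activationPreferred:
        _ = try await activity.activate()
    default:
        break
    }
}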
Despite my research, I haven't found much information on similar projects, and I would appreciate any insights or suggestions. Specifically, I am curious about common approaches for synchronizing AR experiences across multiple devices. Given that the Apple Vision Pro does not have a GPS sensor, I am also looking for alternative methods to precisely determine the positions of multiple devices relative to each other.
Any advice or shared experiences would be greatly appreciated!
Best regards,
Revin
Hi Apple Team,
We noticed the following exciting changelog in the latest macOS 26 beta:
A new algorithm significantly improves PhotogrammetrySession reconstruction quality of low-texture objects not captured with the ObjectCaptureSession front end. It will be downloaded and cached once in the background when the PhotogrammetrySession is used at runtime. If network isn’t available at that time, the old low quality model will be used until the new one can be downloaded. There is no code change needed to get this improved model. (145220451)
However, after trying this on the latest beta and running some tests, we do not see any difference on objects with low texture, such as single-coloured surfaces. Is there anything we are missing? The machine is definitely connected to the internet, but we have no way of knowing from the logs whether the new model is being used.
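For reference, a minimal PhotogrammetrySession run of the kind being compared here might look like this (paths are placeholders):
import Foundation
import RealityKit

// Minimal reconstruction run: process a folder of images into a USDZ model.
func reconstruct(imagesFolder: URL, outputModel: URL) async throws {
    let session = try PhotogrammetrySession(input: imagesFolder)
    try session.process(requests: [.modelFile(url: outputModel, detail: .full)])

    for try await output in session.outputs {
        switch output {
        case .processingComplete:
            print("Reconstruction finished: \(outputModel.path)")
        case .requestError(let request, let error):
            print("Request \(request) failed: \(error)")
        default:
            break
        }
    }
}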
thanks
When I show a window while a sky sphere is shown, the handles to drag/close/resize the window are hidden. The colliders still work, so they are there, but only the visuals are hidden. I already know from another project that this also happens to volumes.
They only appear once you get closer to the window or if the sky sphere gets removed.
Is this a known issue or is there a fix for that?
.persistentSystemOverlays(.visible) does not fix it.
Xcode 16.3.0 Beta, visionOS 2.4
After updating to the latest visionOS beta, visionOS 26 Beta 4 (23M5300g), the ‘Presenting images in RealityKit’ sample from the following link no longer runs due to an error. https://developer.apple.com/documentation/RealityKit/presenting-images-in-realitykit
Expected / Previous:
Application builds and runs on device, working as described in the documentation.
Reality:
Application builds, but does not run on device due to an error (shown in screenshot): “Thread 1: EXC_BAD_ACCESS (code=1, address=0xb)”. The application still runs in the simulator, but not on device. When launching the app from Xcode, it builds and installs correctly but hangs with the error above. When launching the app from the Home Screen, it does not load and immediately returns to the Home Screen.
This Xcode project previously ran with no changes to code - the only change was updating the visionOS system software to the latest version. visionOS 26 Beta 4 (23M5300g)
Is anyone else experiencing this issue?
Hi,
I have used the template code for Plane Detection and placing models on them from here https://developer.apple.com/documentation/visionos/placing-content-on-detected-planes
This source code did not copy the animations from the preview model to the placed model, so I modified it to do a manual copy of animations and textures. A function called materialize() does this, and I was able to modify it so that the placed models now animate. The issue is when I apply gestures to them, such as drag or rotate: for models that go through this logic, I'm unable to add gestures, even though I'm making sure that the collision and input target components are set on the placed models. Has anyone been able to get this working, or is it even possible?
My materialize function
func materialize() -> PlacedObject {
    let shapes = previewEntity.components[CollisionComponent.self]!.shapes

    // Clone render content first as we need its materials
    let clonedRenderContent = renderContent.clone(recursive: true)

    print("To be finding main model: \(descriptor.displayName)")

    // Find the main model in preview hierarchy
    func findMainModel(_ entity: Entity) -> Entity? {
        if entity.name == descriptor.displayName.replacingOccurrences(of: " ", with: "_") {
            print("Found main model: \(entity.name)")
            return entity
        }
        for child in entity.children {
            if child.name == descriptor.displayName.replacingOccurrences(of: " ", with: "_") {
                print("Found main model in children: \(child.name)")
                return child
            }
        }
        return nil
    }

    // Clone hierarchy preserving structure, names, and materials
    func cloneHierarchy(_ entity: Entity) -> Entity {
        print("Cloning: \(entity.name)")
        let cloned: Entity
        if let model = entity as? ModelEntity {
            // Clone with recursive false to handle children manually
            cloned = model.clone(recursive: false)
            if let clonedModel = cloned as? ModelEntity,
               let originalMaterials = model.model?.materials {
                // Preserve the original model's materials
                clonedModel.model?.materials = originalMaterials
            }
        } else {
            cloned = Entity()
        }
        // Preserve name and transform
        cloned.name = entity.name
        cloned.transform = entity.transform
        // Clone children
        for child in entity.children {
            let clonedChild = cloneHierarchy(child)
            cloned.addChild(clonedChild)
        }
        return cloned
    }

    print("=== Cloning Preview Structure ===")

    // Clone the preview hierarchy with proper structure
    let clonedStructure = cloneHierarchy(previewEntity)

    // Find and use the main model
    if let mainModel = findMainModel(clonedStructure) {
        print("Using main model for PlacedObject")
        let modelEntity: ModelEntity
        if let asModel = mainModel as? ModelEntity {
            print("Using asModel")
            modelEntity = asModel
        } else {
            modelEntity = ModelEntity()
            modelEntity.name = mainModel.name
            // Copy children and transforms
            for child in mainModel.children {
                modelEntity.addChild(child)
            }
            modelEntity.transform = mainModel.transform
        }

        // Add collision component here
        let collisionComponent = CollisionComponent(shapes: shapes, isStatic: false,
                                                    filter: CollisionFilter(group: PlacedObject.collisionGroup, mask: .all))
        modelEntity.components.set(collisionComponent)

        // Create the placed object
        let placedObject = PlacedObject(descriptor: descriptor, renderContentToClone: modelEntity, shapes: shapes)

        // Set input target on the placed object itself
        placedObject.components.set(InputTargetComponent(allowedInputTypes: [.direct, .indirect]))
        return placedObject
    } else {
        print("Fallback to original render content")
        let placedObject = PlacedObject(descriptor: descriptor, renderContentToClone: clonedRenderContent, shapes: shapes)
        placedObject.components.set(InputTargetComponent(allowedInputTypes: [.direct, .indirect]))
        return placedObject
    }
}
My PlacedObject class, where the init has the recursive cloning removed because it is handled in materialize():
class PlacedObject: Entity {
    let fileName: String

    // The 3D model displayed for this object.
    private let renderContent: ModelEntity

    static let collisionGroup = CollisionGroup(rawValue: 1 << 29)

    // The origin of the UI attached to this object.
    // The UI is gravity aligned and oriented towards the user.
    let uiOrigin = Entity()

    var affectedByPhysics = false {
        didSet {
            guard affectedByPhysics != oldValue else { return }
            if affectedByPhysics {
                // Let gravity act on the object while it isn't being dragged.
                components[PhysicsBodyComponent.self]!.mode = .dynamic
            } else {
                components[PhysicsBodyComponent.self]!.mode = .static
            }
        }
    }

    var isBeingDragged = false {
        didSet {
            affectedByPhysics = !isBeingDragged
        }
    }

    var positionAtLastReanchoringCheck: SIMD3<Float>?
    var atRest = false

    init(descriptor: ModelDescriptor, renderContentToClone: ModelEntity, shapes: [ShapeResource]) {
        fileName = descriptor.fileName
        // renderContent = renderContentToClone.clone(recursive: true)
        renderContent = renderContentToClone
        super.init()
        name = renderContent.name

        // Apply the rendered content’s scale to this parent entity to ensure
        // that the scale of the collision shape and physics body are correct.
        scale = renderContent.scale
        renderContent.scale = .one

        // Make the object respond to gravity.
        let physicsMaterial = PhysicsMaterialResource.generate(restitution: 0.0)
        let physicsBodyComponent = PhysicsBodyComponent(shapes: shapes, mass: 1.0, material: physicsMaterial, mode: .static)
        components.set(physicsBodyComponent)
        components.set(CollisionComponent(shapes: shapes, isStatic: false,
                                          filter: CollisionFilter(group: PlacedObject.collisionGroup, mask: .all)))
        addChild(renderContent)
        addChild(uiOrigin)
        uiOrigin.position.y = extents.y / 2 // Position the UI origin in the object’s center.

        // Allow direct and indirect manipulation of placed objects.
        components.set(InputTargetComponent(allowedInputTypes: [.direct, .indirect]))

        // Add a grounding shadow to placed objects.
        renderContent.components.set(GroundingShadowComponent(castsShadow: true))
    }

    required init() {
        fatalError("`init` is unimplemented.")
    }
}
Thanks
I tried to show a spatial photo in my application using SwiftUI's Image, but it just shows a flat version of it, even on Vision Pro.
How can I show spatial photos to users?
Are there any options for this?
I’m working with RealityView in visionOS and noticed that the content closure seems to run twice, causing content.add to be called twice automatically. This results in duplicate entities being added to the scene unless I manually check for duplicates. How can I fix that? Thanks.
Hi, currently tinkering with a little SharePlay app for the Vision Pro that allows people to FaceTime and use SharePlay to play with random 3D models (as well as move them around, which should sync the model positions for everyone in relative space).
When the users start their FaceTime call and then open the immersive space to see the 3D models, the models load in properly in the context of the group immersive space's coordinate system, and moving the models reflects the new positions in real time for each participant.
The main issue comes if/when users use the Digital Crown to re-center their view. It appears to re-center the model and view, which is expected. However, it also seems to re-position the model/root entity to match the user's origin. Not sure if this is intentional or not, but this essentially de-syncs the model (so me moving the model next to someone does not reflect it 1:1; it still moves properly, but the new "initial" position after re-centering makes it offset).
Is there a potential solution or work-around for this such that re-centering the view doesn't de-sync the model/entity's position?
Rough code for my RealityView component is below:
RealityView { content, attachments in
    content.add(appModel.originEntity)
    appModel.originEntity.addChild(appModel.modelContainerEntity)
    appModel.setInitialModelPosition()
    configureGestures(forModel: appModel.modelContainerEntity)
    configureToolbarAttachment(content: content, attachments: attachments)
} update: { content, _ in
    // I have modified the Apple provided gesture components to
    // send the app model the new positions/rotations
    // as well as broadcast the position/rotation to shareplay participants

    // When user re-centers view, it seems to also re-position the model
    // so that its origin is at the local user's origin, rather than
    // the original origin
    // Can we receive a notification that user has re-centered view?
    // Or some other work-around?
    appModel.modelContainerEntity.setPosition(appModel.modelState.position, relativeTo: nil)
    appModel.modelContainerEntity.setOrientation(.init(appModel.modelState.rotation3d), relativeTo: nil)
} attachments: {
    Attachment(id: "customViewAttachment") {
        CustomView()
    }
}
.installGestures()
Please let me know if anything wasn't clear or if more information is needed. Thanks!
I'm trying to develop an immersive visionOS app in which you can move an Entity that has a PerspectiveCamera as its child in the immersive space, and render the camera's view in a 2D window.
According to this thread, this seems to be achievable using RealityRenderer. But when I added the scene entity loaded from realityKitContentBundle to realityRenderer.entities, I needed to clone all entities of the scene; otherwise, all entities in the immersive space disappear.
@Observable
@MainActor
final class OffscreenRenderModel {
    private let renderer: RealityRenderer
    private let colorTexture: MTLTexture

    init(scene: Entity) throws {
        renderer = try RealityRenderer()
        // If the scene's entities are not cloned here, all entities in the immersive space disappear
        renderer.entities.append(scene.clone(recursive: true))

        let camera = PerspectiveCamera()
        renderer.activeCamera = camera
        renderer.entities.append(camera)
        ...
    }
}
Is this the expected behavior? Or is there any other way to do this (move the camera in the immersive space and render its output in a 2D window)?
Here is my sample code:
https://github.com/TAATHub/RealityKitPerspectiveCamera
Is there a way to provide CLHeading in visionOS so I can more accurately understand the direction in which a user is facing?
I'm working on a school project that allows users to open a .USDZ file (using Quick Look) on a webpage while using Apple Vision Pro to put the object in their physical environment; the project is deployed on Vercel. I'm testing the page with my Apple Vision Pro: when I click to open the .USDZ file, I see a triangle with an exclamation mark while it's trying to load, but it never loads. Does anybody know how to troubleshoot this issue?
At WWDC25 we launched a new type of Lab event for the developer community - Group Labs. A Group Lab is a panel Q&A designed for a large audience of developers. Group Labs are a unique opportunity for the community to submit questions directly to a panel of Apple engineers and designers. Here are the highlights from the WWDC25 Group Lab for visionOS.
I saw that there is a new way to add SwiftUI View attachments in my RealityView, what advantages does this have over the old way?
Attachments can now be added directly to your entities with ViewAttachmentComponent. This removes the need to declare your attachments upfront in your RealityView initializer and then add those attachments as child entities. The new approach provides greater flexibility. Canyon Crosser and Petite Asteroids both utilize the new approach.
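For example, a minimal sketch of the new approach (LabelView is a placeholder SwiftUI view; exact usage may vary by SDK version):
import SwiftUI
import RealityKit

struct LabelView: View {
    var body: some View {
        Text("Hello, attachment!")
            .padding()
            .glassBackgroundEffect()
    }
}

struct AttachmentDemoView: View {
    var body: some View {
        RealityView { content in
            let entity = Entity()
            // Attach the SwiftUI view directly to the entity; no attachments
            // closure or separate attachment entities are required.
            entity.components.set(ViewAttachmentComponent(rootView: LabelView()))
            content.add(entity)
        }
    }
}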
ManipulationComponent looks really cool! Right now my app has a series of complicated custom gestures. What gestures does it handle for me exactly, and are there any situations where I should prefer my own custom gestures?
ManipulationComponent provides natural interaction with virtual objects. It seamlessly handles translation and rotation. You can easily add manipulation to a SwiftUI view like Model3D with the manipulable view modifier.
The new Object Manipulation API is great for most apps, and is a breeze to implement, but sometimes you might want a more custom feel, and that’s ok! Custom gestures are still fully supported for that scenario.
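As a rough illustration of both routes mentioned above (the model name and entity are placeholders; assuming the manipulable modifier and ManipulationComponent.configureEntity described here):
import SwiftUI
import RealityKit

struct ToyModelView: View {
    var body: some View {
        // SwiftUI route: make a Model3D manipulable directly.
        Model3D(named: "ToyRocket")
            .manipulable()
    }
}

// RealityKit route: opt an entity into system-provided manipulation,
// which also configures the supporting components it needs.
func enableManipulation(on entity: Entity) {
    ManipulationComponent.configureEntity(entity)
}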
I saw that there is a new API to also access the right main camera. What can I do with this?
Correct, in visionOS 26, you can access the left and right main cameras. You can even access them simultaneously as a stereo pair. Camera access still requires a managed entitlement and an enterprise license; see Accessing the main camera for more details about those requirements.
More computer vision and machine learning use cases are unlocked with access to both cameras. We are excited to see what you will do!
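As a rough sketch of main camera access with the enterprise APIs (left camera only here; the stereo pair follows the same pattern, and details may vary by SDK version):
import ARKit

// Stream frames from the left main camera. Requires the enterprise
// camera-access entitlement and license mentioned above.
func streamMainCamera() async throws {
    let formats = CameraVideoFormat.supportedVideoFormats(for: .main, cameraPositions: [.left])
    guard let format = formats.first else { return }

    let provider = CameraFrameProvider()
    let session = ARKitSession()
    try await session.run([provider])

    guard let updates = provider.cameraFrameUpdates(for: format) else { return }
    for await frame in updates {
        if let sample = frame.sample(for: .left) {
            // Feed sample.pixelBuffer into a computer vision or ML pipeline here.
            _ = sample.pixelBuffer
        }
    }
}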
What do I need to do to add spatial accessory input for my app?
First, use the GameController framework to establish a connection with the spatial accessory, and then listen for events from the controller. Then you can use RealityKit, ARKit, or a combination of both to track the accessory, anchor virtual content to it, and fine-tune the accessory's interaction with the content in your app.
For more details, check out Discovering and tracking spatial game controllers and styli.
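A minimal sketch of the discovery step with GameController (tracking and anchoring would then be layered on with RealityKit and/or ARKit, as the session describes):
import GameController

// Observe spatial game controllers connecting and disconnecting.
final class AccessoryObserver {
    private var observers: [NSObjectProtocol] = []

    func start() {
        observers.append(NotificationCenter.default.addObserver(
            forName: .GCControllerDidConnect, object: nil, queue: .main
        ) { notification in
            guard let controller = notification.object as? GCController else { return }
            print("Connected: \(controller.vendorName ?? "Unknown controller")")
            // Next: track the accessory and anchor content to it with RealityKit or ARKit.
        })

        observers.append(NotificationCenter.default.addObserver(
            forName: .GCControllerDidDisconnect, object: nil, queue: .main
        ) { _ in
            print("Controller disconnected")
        })
    }
}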
By far, the biggest difficulty in implementing visionOS apps is SwiftUI window management: placing, opening, closing, etc. Are there any improvements to window management in visionOS 26?
Yes! We recommend watching Set the scene with SwiftUI in visionOS.
You can use the defaultLaunchBehavior to choose whether a particular window is presented (or suppressed) at launch. You can also prevent a window like a secondary toolbar from launching as the initial window using .restorationBehavior(.disabled). Adopting best practices for persistent UI provides a great overview of SwiftUI window management on visionOS.
As for placing windows, there is still no API for an app to specify the placement of its windows other than relative placement. If that is a feature you are interested in, please file an enhancement request for it using Feedback Assistant!
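As a small illustration of the modifiers mentioned above (scene IDs and views are placeholders):
import SwiftUI

struct ContentView: View { var body: some View { Text("Main") } }
struct ToolbarView: View { var body: some View { Text("Tools") } }

@main
struct MyVisionApp: App {
    var body: some Scene {
        // Main window: presented at launch as usual.
        WindowGroup(id: "main") {
            ContentView()
        }

        // Secondary toolbar: never the initial window, and not restored after relaunch.
        WindowGroup(id: "secondaryToolbar") {
            ToolbarView()
        }
        .defaultLaunchBehavior(.suppressed)
        .restorationBehavior(.disabled)
    }
}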
How do I get access to the Enterprise APIs?
First, request the entitlement and license through your Apple Developer or enterprise account. Once these have been granted, include the license and entitlement in your project. Then you can build, test, and distribute as an in-house app.
Hi,
We’ve been successfully using the RoomPlan API in our application for over two years. Recently, however, users have reported encountering persistent capture errors during their sessions. Specifically, the errors observed are:
CaptureError.worldTrackingFailure
CaptureError.exceedSceneSizeLimit
What we have observed:
Persistent Errors: The errors continue to occur even after initiating new capture sessions.
Normal Usage: Our implementation adheres to typical usage patterns of the RoomPlan API without exceeding any documented room size limits.
Limited Feature Usage: We are not utilizing the WorldTracking feature for the StructureBuilder functionality to stitch rooms together.
Potential State Caching: Given that these errors persist across sessions, we suspect that there might be memory or state cached between sessions that is not being cleared, particularly since we are not taking advantage of StructureBuilder.
Request:
Could you please advise if there is any internal caching or memory retention between capture sessions that might lead to these errors? Additionally, we would appreciate guidance on how to clear or manage this state when the StructureBuilder feature is not in use.
Here is a generalised version of our capture session initialization code to help diagnose the issue.
struct RoomARCaptureView: UIViewRepresentable {
    typealias Handler = (CapturedRoom, Error?) -> Void

    @Binding var stop: Bool
    @Binding var done: Bool

    let completion: Handler?

    func makeUIView(context: Self.Context) -> RoomCaptureView {
        let view = RoomCaptureView(frame: .zero)
        view.delegate = context.coordinator
        view.captureSession.run(configuration: .init())
        return view
    }

    func updateUIView(_ uiView: RoomCaptureView, context: Self.Context) {
        if stop {
            // Stop the session only once, multiple times causes issues with the final presentation
            uiView.captureSession.stop()
            stop = false
            done = true
        }
    }

    static func dismantleUIView(_ uiView: RoomCaptureView, coordinator: Self.Coordinator) {
        uiView.captureSession.stop()
    }

    func makeCoordinator() -> ARViewCoordinator {
        ARViewCoordinator(completion)
    }

    @objc(ARViewCoordinator)
    class ARViewCoordinator: NSObject, RoomCaptureViewDelegate {
        var completion: Handler?

        public required init?(coder: NSCoder) {}
        public func encode(with coder: NSCoder) {}

        public init(_ completion: Handler?) {
            super.init()
            self.completion = completion
        }

        public func captureView(shouldPresent roomDataForProcessing: CapturedRoomData, error: (Error)?) -> Bool {
            return true
        }

        public func captureView(didPresent processedResult: CapturedRoom, error: (Error)?) {
            completion?(processedResult, error)
        }
    }
}
Thank you for your assistance.
Hey there,
since SceneView has been marked as "deprecated" for SwiftUI, I'm wondering which alternatives should be considered for the following situation:
I have a SwiftUI app (for iOS and iPadOS) where users can view 3D models (USDZ) in a scene, with rotate, scale, and move gestures. The models are downloaded from a web backend and loaded via local file URLs.
What I tested:
I've tried ARView in .nonAR mode and RealityView; however, I didn't get the expected result, i.e. the user being able to rotate and scale the 3D models in a virtual space.
ARView in .nonAR mode still shows the object as in normal AR mode, just without the camera stream.
I tried to add gestures to the RealityView on iOS; loading USDZ 3D models worked, but the gestures didn't (see the sketch after this list).
Model3D is only available for visionOS (it would be amazing to have it for iOS).
I also checked Quick Look preview; however, it works rather awkwardly via a file picker etc., which is not how users should load the 3D models in my app.
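A rough sketch of the RealityView-on-iOS attempt described above (simplified; the model URL is a placeholder for a downloaded file):
import SwiftUI
import RealityKit

struct ModelViewer: View {
    let modelURL: URL // local file URL of the downloaded USDZ

    var body: some View {
        RealityView { content in
            // Render with a virtual (non-AR) camera.
            content.camera = .virtual

            if let entity = try? await Entity(contentsOf: modelURL) {
                // Give the model collision and input target so gestures can hit it.
                let bounds = entity.visualBounds(relativeTo: nil)
                entity.components.set(CollisionComponent(shapes: [.generateBox(size: bounds.extents)]))
                entity.components.set(InputTargetComponent())
                content.add(entity)
            }
        }
        .gesture(
            DragGesture()
                .targetedToAnyEntity()
                .onChanged { value in
                    // Spin the model around its Y axis based on horizontal drag.
                    let angle = Float(value.translation.width) * .pi / 180
                    value.entity.orientation = simd_quatf(angle: angle, axis: [0, 1, 0])
                }
        )
    }
}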
Maybe I missed something; I couldn't find anything that helps me. I'm pretty much stuck adopting the latest and greatest frameworks/APIs in my app and taking the next steps toward porting my app to visionOS.
Long story short 😃:
Does someone have an idea what is the alternative to SceneView for USDZ 3D models?
I appreciate your support!!
Thanks in advance!