At a recent community meeting we were wondering how Apple creates this soft-edge effect around the occlusion cutouts. We see this effect on keyboard cutouts, iPhone cutouts, and in progressive spaces.
An example: notice the soft edge around the occlusion cutout for the keyboard.
One of our members created some Shader Graph materials to explore soft edges. These work by sending data into the opacity channel of the PreviewSurface node.
Unfortunately, the Occlusion Surface nodes lack any sort of input. If you know how to blend these concepts with RealityKit Occlusion, please let us know!
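For reference, here is a minimal sketch of the stock RealityKit occlusion setup we are trying to soften. OcclusionMaterial hides virtual content behind it but always produces a hard edge, which is exactly the limitation described above (the function name and sizing are placeholders):

import RealityKit

// A hard-edged occluder: anything rendered behind this box is cut out sharply.
// OcclusionMaterial has no opacity or feathering input, which is the problem.
func makeKeyboardOccluder(size: SIMD3<Float>) -> ModelEntity {
    ModelEntity(
        mesh: .generateBox(size: size),
        materials: [OcclusionMaterial()]
    )
}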
The landing page for visionOS 26 mentions
The Unified Coordinate Conversion API makes moving views and entities between scenes straightforward — even between views and ARKit accessory anchors.
This WWDC session very briefly shows a single example of using this, but with no context. For example, they discuss a way to tell the distance between a Model3D and an entity in a RealityView. But they don't provide any details for how they are referencing the entity (bolts in the slide).
The session used the BOT-anist example project that we saw in visionOS 2, but the version in the Sample Code library has not been updated with these examples.
I was able to put together a simple example where we can get the position of a window relative to the world origin. It even updates when the user recenters.
struct Lab080: View {
    @State private var posX: Float = 0
    @State private var posY: Float = 0
    @State private var posZ: Float = 0

    var body: some View {
        GeometryReader3D { geometry in
            VStack {
                Text("Unified Coordinate Conversion")
                    .font(.largeTitle)
                    .padding(24)
                VStack {
                    Text("X: \(posX)")
                    Text("Y: \(posY)")
                    Text("Z: \(posZ)")
                }
                .font(.title)
                .padding(24)
            }
            .onGeometryChange3D(for: Point3D.self) { proxy in
                try! proxy
                    .coordinateSpace3D()
                    .convert(value: Point3D.zero, to: .worldReference)
            } action: { old, new in
                posX = Float(new.x)
                posY = Float(new.y)
                posZ = Float(new.z)
            }
        }
    }
}
This is all that I've been able to figure out so far. What other features are included in this new Unified Coordinate Conversion?
Can we use this to get the position of one window relative to another? Can we use this to get the position of a view in a window relative to an entity in a RealityView, for example in a Volume or Immersive Space? What else can Unified Coordinate Conversion do?
Are there documentation pages that I'm missing? I'm not sure what to search for. Are there any Sample projects that use these features? Any additional information would be very helpful.
Is there any way to render a RealityView to an Image/UIImage like we used to be able to do with SCNView.snapshot()?
ImageRenderer doesn't work because it renders a SwiftUI view hierarchy, and I need the currently presented RealityView with the camera background and 3D scene content the way the user sees it.
I tried UIHostingController and UIGraphicsImageRenderer like this:
extension View {
    func snapshot() -> UIImage {
        let controller = UIHostingController(rootView: self)
        let view = controller.view
        let targetSize = controller.view.intrinsicContentSize
        view?.bounds = CGRect(origin: .zero, size: targetSize)
        view?.backgroundColor = .clear

        let renderer = UIGraphicsImageRenderer(size: targetSize)
        return renderer.image { _ in
            view?.drawHierarchy(in: view!.bounds, afterScreenUpdates: true)
        }
    }
}
but that freezes the app and logs an endless stream of
[CAMetalLayer nextDrawable] returning nil because allocation failed.
Same thing happens when I try
return renderer.image { ctx in
    view.layer.render(in: ctx.cgContext)
}
Now that SceneKit is deprecated, I don't want to start a new app on deprecated APIs.
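One possible workaround (a sketch, not a confirmed answer): RealityKit's ARView, which is not deprecated, exposes a snapshot API that captures the rendered frame including the camera background, so building the AR screen on ARView instead of RealityView may be an option.

import RealityKit
import UIKit

// Sketch: capture the currently rendered frame of an ARView, including the
// camera background, as a UIImage. The completion runs asynchronously.
func capture(_ arView: ARView, completion: @escaping (UIImage?) -> Void) {
    arView.snapshot(saveToHDR: false) { image in
        completion(image)
    }
}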
I downloaded the file through Scoot, and when I take off the Vision Pro, the app's StreamDelegate method is called with .endEncountered. How can I solve this problem?
Thank you!
I am creating a Vision Pro app with a 3D model that has a mesh hierarchy of head, hands, feet, etc. I want the character to look towards the camera, but I am not able to access the head of the character through SceneKit or RealityKit. When I try to print the names of the child meshes, it only prints down to the character; it does not iterate through all the body parts. Can anyone help?
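A minimal debugging sketch that might help: walk the entity hierarchy recursively (children only exposes one level at a time), and also check ModelEntity.jointNames, since skinned models often keep their bones as joints on a single ModelEntity rather than as separate child entities (whether that applies to this particular model is an assumption).

import RealityKit

// Recursively print every descendant, not just the first level of children.
func printHierarchy(_ entity: Entity, depth: Int = 0) {
    let indent = String(repeating: "  ", count: depth)
    print("\(indent)\(entity.name) (\(type(of: entity)))")
    if let model = entity as? ModelEntity, !model.jointNames.isEmpty {
        // Skinned meshes usually expose their bones here rather than as children.
        print("\(indent)  joints: \(model.jointNames)")
    }
    for child in entity.children {
        printHierarchy(child, depth: depth + 1)
    }
}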
Do you retain a reference to your content (RealityViewContent) event subscriptions? For example, the Manipulation Events docs from Apple use _ to discard the result. In theory the subscription should keep working as long as the content is alive.
_ = content.subscribe(to: ManipulationEvents.WillBegin.self) { event in
    event.entity.components[ModelComponent.self]?.materials[0] = SimpleMaterial(color: .blue, isMetallic: false)
}

_ = content.subscribe(to: ManipulationEvents.WillEnd.self) { event in
    event.entity.components[ModelComponent.self]?.materials[0] = SimpleMaterial(color: .red, isMetallic: false)
}
We could store these events in state. I've seen this in a few samples and apps.
@State var beginSubscription: EventSubscription?

...

beginSubscription = content.subscribe(to: ManipulationEvents.WillBegin.self) { event in
    event.entity.components[ModelComponent.self]?.materials[0] = SimpleMaterial(color: .blue, isMetallic: false)
}
The main advantage I see is that we can be more explicit about when we cancel the subscription. Are there other reasons to keep a reference to these subscriptions?
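A minimal sketch of that explicit teardown, assuming the subscription is stored in @State as above: EventSubscription exposes cancel(), so you can stop receiving events at a moment of your choosing instead of waiting for the content to go away.

// For example, when the view disappears or the manipulation UI is torn down:
beginSubscription?.cancel()
beginSubscription = nil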
While developing a widget on the visionOS 26 beta, I found that it does not work normally when running on a real visionOS device, and an error appears:
Please adopt container background api
It is worth mentioning that this problem does not occur in the visionOS simulator.
Does anyone know the reason and the solution, or whether this is a visionOS bug that needs a Feedback report? Thank you!
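For what it's worth, that error string matches the WidgetKit requirement to adopt the container background API, so a hedged first thing to try is adding the modifier to the widget's root view (the view name below is a placeholder):

import SwiftUI
import WidgetKit

// Placeholder view name; the key part is the containerBackground modifier.
struct MyWidgetEntryView: View {
    var body: some View {
        Text("Hello")
            .containerBackground(for: .widget) {
                Color.clear   // any color or material works here
            }
    }
}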
I thought ARCoachingOverlayView was a nice touch, so each app's ARKit coaching was recognizable, and I used it in my ARView/ARSCNView-based apps.
Now with RealityView, is there any replacement planned?
Or should we just use UIViewRepresentable and wrap ARCoachingOverlayView?
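If wrapping is the way to go, here is a minimal sketch of the UIViewRepresentable route. It assumes you still have access to an ARSession (for example from an ARView), since as far as I can tell RealityView on iOS does not hand you one directly, which is part of the question.

import SwiftUI
import ARKit

struct CoachingOverlay: UIViewRepresentable {
    let session: ARSession
    var goal: ARCoachingOverlayView.Goal = .horizontalPlane

    func makeUIView(context: Context) -> ARCoachingOverlayView {
        let overlay = ARCoachingOverlayView()
        overlay.session = session
        overlay.goal = goal
        overlay.activatesAutomatically = true
        return overlay
    }

    func updateUIView(_ uiView: ARCoachingOverlayView, context: Context) {
        uiView.goal = goal
    }
}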
I developed an app for Vision Pro and created a button that allows users to exit the app instead of being forced to quit it. I use exit(0) to exit the application, but when I re-enter, the loaded window is not the initial window. Is there any relevant code that can be used? Thank you
Hi guys,
I'm working in the VFX industry and my question is: is it possible to create immersive video directly from a virtual scene created in DCC software like Maya, render it into footage, encode it into immersive video, and finally play it on Vision Pro?
Thanks.
iOS currently restricts background Bluetooth advertising and scanning in order to preserve battery life and protect user privacy. While these restrictions serve important purposes, they also limit legitimate use cases where users have explicitly opted in to proximity-based experiences.
The core challenge is that modern social applications need a way to detect when users are physically present at the same location or event without requiring every participant to keep their app in the foreground. Under the current system, background BLE advertising is heavily throttled and can only transmit a limited payload, background scanning intervals are sparse and unpredictable, peer-to-peer proximity detection cannot be maintained reliably when apps are in the background, and Background App Refresh is non-deterministic, making any kind of time-based proximity validation impossible.
A proposed enhancement would be to introduce an “Enhanced Proximity Permission.” This would allow developers to enable reliable background BLE advertising and scanning for declared time windows, such as a maximum of eight hours. It would also allow devices running the same app to detect each other’s proximity using ephemeral, rotating identifiers that preserve privacy, with clear user consent and prominent indicators whenever the feature is active.
Unlocking this capability would open up new categories of applications. Live events could offer automatic attendance tracking at concerts, conferences, or sports venues. Retail environments could support opt-in foot traffic analysis and dwell-time insights. Social apps could allow users to find friends at festivals, campuses, or other large venues. Safety applications could extend to crowd density monitoring and contact tracing beyond COVID-era needs. Gaming could offer real-world multiplayer experiences based on physical proximity, and transportation providers could verify rideshare pickups or measure public transit flows automatically.
Privacy safeguards would remain central. Permissions would be time-boxed and expire after an event or session. A mandatory visual indicator would be displayed whenever proximity tracking is active. A user-facing dashboard would show all apps granted enhanced proximity access. Permissions would automatically be revoked after a period of non-use, and only ephemeral tokens not permanent identifiers would be broadcast.
The industry impact would be significant. With this enhancement, iOS could power the next generation of location-aware social platforms while maintaining Apple’s leadership in privacy through explicit user control and transparency. Current alternatives, such as requiring users to keep apps in the foreground or deploying dedicated hardware beacons, produce poor user experiences and constrain innovation in spatial computing and social applications.
Can anyone from Apple consider this change? Having to buy iBeacons is brutal and means slower adoption. Please reconsider this for users who opt in.
Hi everyone,
I’m encountering a memory overflow issue in my visionOS app and I’d like to confirm if this is expected behavior or if I’m missing something in cleanup.
App Context
The app showcases apartments in real scale using AR.
Apartments are heavy USDZ models (hundreds of thousands of triangles, high-resolution textures).
Users can walk inside the apartments, and performance is good even close to hardware limits.
Flow
The app starts in a full immersive space (RealityView) for selecting the apartment.
When an apartment is selected, a new ImmersiveSpace opens and the apartment scene loads.
The scene includes multiple USDZ models, EnvironmentResources, and dynamic textures for skyboxes.
When the user dismisses the experience, we attempt cleanup:
Nulling out all entity references.
Removing ModelComponents.
Clearing cached textures and skyboxes.
Forcing dictionaries/collections to empty.
Despite this cleanup, memory usage remains very high.
Problem
After dismissing the ImmersiveSpace, memory does not return to baseline.
Check the attached screenshot of the profiling made using Instruments:
Initial state: ~30MB (main menu).
After loading models sequentially: ~3.3GB.
Skybox textures bring it near ~4GB.
After dismissing the experience (at ~01:00 mark): memory only drops slightly (to ~2.66GB).
When loading the second apartment, memory continues to increase until ~5GB, at which point the app crashes due to memory pressure.
The issue is consistently visible under VM: IOSurface in Instruments. No leaks are detected.
So it looks like RealityKit (or lower-level frameworks) keeps caching meshes and textures and does not free them when the RealityView is torn down. But for my use case, these resources should be fully released once the ImmersiveSpace is dismissed, since new apartments load entirely different models and textures.
Cleanup Code Example
Here’s a simplified version of the cleanup I’m doing:
func clearAllRoomEntities() {
    for (entityName, entity) in entityFromMarker {
        entity.removeFromParent()
        if let modelEntity = entity as? ModelEntity {
            modelEntity.components.removeAll()
            modelEntity.children.forEach { $0.removeFromParent() }
            modelEntity.clearTexturesAndMaterials()
        }
        entityFromMarker[entityName] = nil
        removeSkyboxPortals(from: entityName)
    }
    entityFromMarker.removeAll()
}

extension ModelEntity {
    func clearTexturesAndMaterials() {
        guard var modelComponent = self.model else { return }
        for index in modelComponent.materials.indices {
            removeTextures(from: &modelComponent.materials[index])
        }
        modelComponent.materials.removeAll()
        self.model = modelComponent
        self.model = nil
    }

    private func removeTextures(from material: inout any Material) {
        if var pbr = material as? PhysicallyBasedMaterial {
            pbr.baseColor.texture = nil
            pbr.emissiveColor.texture = nil
            pbr.metallic.texture = nil
            pbr.roughness.texture = nil
            pbr.normal.texture = nil
            pbr.ambientOcclusion.texture = nil
            pbr.clearcoat.texture = nil
            material = pbr
        } else if var simple = material as? SimpleMaterial {
            simple.color.texture = nil
            material = simple
        }
    }
}
Questions
Is this expected RealityKit behavior (textures/meshes cached internally)?
Is there a way to force RealityKit to release GPU resources tied to USDZ models when they’re no longer used?
Should dismissing the ImmersiveSpace automatically free those IOSurfaces, or do I need to handle this differently?
Any guidance, best practices, or confirmation would be hugely appreciated.
Thanks in advance!
Using Xcode v26 Beta 6 on macOS v26 Beta 25a5349a
When pressing the home button of the visionOS simulator, I am not positioned in the middle of the room like I normally would be. This occurred after moving around the space a lot to find an element added to an ImmersiveSpace.
How to resolve: restart the simulator device.
See the attached pictures, visionOSSimulatorCorrectHomePosition and visionOSSimulatorMisallignedHomePosition.
Hi, I'm developing a prototype VisionOS game.
How do I access bone or joint information when exporting a USD file from Blender to RCP?
The animation works fine in RCP, and the joint information is correctly embedded in the USDA file (verified with usdchecker). However, RCP does not read it from USDA, USDC, or USDZ. It should be possible based on Apple's WWDC24 session (Compose interactive 3D content in RCP).
I want to attach and detach an entity to a particular bone at certain moments, so I need the bone data. These are standard Mixamo animations, and my mesh is a single unified mesh. Using Blender 4.4.
I have a mesh-based animation for a 3D model, meaning every frame is a new mesh. I import it into a RealityView, but I can't play its animation; RealityKit tells me this model has no animations when I use print(entity.availableAnimations).
I have a problem with the wall plane detection using visionOS/ARKit:
I am using ARKitSession's PlaneDetectionProvider with wall detection in a visionOS immersive space. I recorded the position and rotation of the first detected plane, but found that the rotation value depends on the direction the user is facing when the space starts. In other words, even when the plane lies on the same wall, the rotation quaternion comes out different.
I would like to obtain the real orientation of the wall no matter which direction the user is facing when the scan starts, so the virtual content can be accurately aligned with the wall.
I have tried using anchor.originFromAnchorTransform and Transform.rotation directly, but the rotation value is still affected by the user's initial orientation.
In addition, I would like to know whether the user's initial orientation also affects the position information. If so, please suggest a solution.
Thank you!
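Not a full answer, but a sketch of one possible workaround: express the virtual content in the anchor's own coordinate space by parenting it under the anchor transform, rather than computing rotations in world space, so its orientation follows the wall regardless of the world origin's yaw.

import ARKit
import RealityKit

// Sketch: place content relative to the detected wall anchor itself, so its
// orientation follows the wall regardless of where the user was facing when
// the space (and therefore the world origin) was established.
func place(_ content: Entity, on anchor: PlaneAnchor, under root: Entity) {
    let holder = Entity()
    holder.transform = Transform(matrix: anchor.originFromAnchorTransform)
    holder.addChild(content)
    root.addChild(holder)
}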
Entity.animate() makes entity animation much easier, but in many cases I want to interrupt the animation in progress because of a gesture. I couldn't find any way to do this (I even tried entity.stopAllAnimations()); I have to wait until Entity.animate() completes.
iOS 26 / visionOS 26
If I long-press on an element, the sidebar disappears and then a Done button appears on the screen, but nothing else changes. So what are the Environments in Vision Pro's simulator?
I want to open the control center in Vision Pro’s Xcode simulator. Can I open it? If I can, please tell me how to do it. Thank you.
Spatial photos in a RealityView have a default corner radius. I made a parallax effect with spatial photos in a ScrollView (like Spatial Gallery), but the corner radius disappears on the left and right spatial photos. I've tried the .clipShape and .mask modifiers, but they didn't work. How can I clip or mask a spatial photo with a corner-radius effect?