Hi everyone,
I’m encountering a memory overflow issue in my visionOS app and I’d like to confirm if this is expected behavior or if I’m missing something in cleanup.
App Context
The app showcases apartments in real scale using AR.
Apartments are heavy USDZ models (hundreds of thousands of triangles, high-resolution textures).
Users can walk inside the apartments, and performance is good even close to hardware limits.
Flow
The app starts in a full immersive space (RealityView) for selecting the apartment.
When an apartment is selected, a new ImmersiveSpace opens and the apartment scene loads.
The scene includes multiple USDZ models, EnvironmentResources, and dynamic textures for skyboxes.
When the user dismisses the experience, we attempt cleanup:
Nulling out all entity references.
Removing ModelComponents.
Clearing cached textures and skyboxes.
Forcing dictionaries/collections to empty.
Despite this cleanup, memory usage remains very high.
Problem
After dismissing the ImmersiveSpace, memory does not return to baseline.
See the attached screenshot of the profiling done in Instruments:
Initial state: ~30MB (main menu).
After loading models sequentially: ~3.3GB.
Skybox textures bring it near ~4GB.
After dismissing the experience (at ~01:00 mark): memory only drops slightly (to ~2.66GB).
When loading the second apartment, memory continues to increase until ~5GB, at which point the app crashes due to memory pressure.
The issue is consistently visible under VM: IOSurface in Instruments. No leaks are detected.
So it looks like RealityKit (or a lower-level framework) keeps caching meshes and textures and does not free them when the RealityView is torn down. But for my use case, these resources should be fully released once the ImmersiveSpace is dismissed, since new apartments will load entirely different models and textures.
Cleanup Code Example
Here’s a simplified version of the cleanup I’m doing:
func clearAllRoomEntities() {
    // Iterate over a copy of the dictionary (Swift value semantics),
    // detaching and stripping each entity before dropping our references.
    for (entityName, entity) in entityFromMarker {
        entity.removeFromParent()
        if let modelEntity = entity as? ModelEntity {
            modelEntity.components.removeAll()
            modelEntity.children.forEach { $0.removeFromParent() }
            modelEntity.clearTexturesAndMaterials()
        }
        entityFromMarker[entityName] = nil
        removeSkyboxPortals(from: entityName)
    }
    entityFromMarker.removeAll()
}

extension ModelEntity {
    func clearTexturesAndMaterials() {
        guard var modelComponent = self.model else { return }
        for index in modelComponent.materials.indices {
            removeTextures(from: &modelComponent.materials[index])
        }
        modelComponent.materials.removeAll()
        // Write the stripped component back, then drop it entirely.
        self.model = modelComponent
        self.model = nil
    }

    private func removeTextures(from material: inout any Material) {
        if var pbr = material as? PhysicallyBasedMaterial {
            pbr.baseColor.texture = nil
            pbr.emissiveColor.texture = nil
            pbr.metallic.texture = nil
            pbr.roughness.texture = nil
            pbr.normal.texture = nil
            pbr.ambientOcclusion.texture = nil
            pbr.clearcoat.texture = nil
            material = pbr
        } else if var simple = material as? SimpleMaterial {
            simple.color.texture = nil
            material = simple
        }
    }
}
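A small debugging sketch along these lines can confirm whether the entities themselves actually deinit after cleanup, to rule out a lingering strong reference on my side (this is test code, not part of the app):

    // If weakRef never becomes nil, something besides entityFromMarker
    // (e.g. a closure or the RealityView content) still retains the entity,
    // and RealityKit can't release the GPU resources behind it.
    weak var weakRef: Entity? = entityFromMarker.values.first
    clearAllRoomEntities()
    DispatchQueue.main.async {
        print("entity deallocated:", weakRef == nil)
    }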
Questions
Is this expected RealityKit behavior (textures/meshes cached internally)?
Is there a way to force RealityKit to release GPU resources tied to USDZ models when they’re no longer used?
Should dismissing the ImmersiveSpace automatically free those IOSurfaces, or do I need to handle this differently?
Any guidance, best practices, or confirmation would be hugely appreciated.
Thanks in advance!
Since updating to iOS 26.0 (and confirmed on 26.1), ARBodyTrackingConfiguration no longer detects a valid ARBodyAnchor on devices with LiDAR (e.g., iPhone 15 Pro, iPhone 17 Pro Max).
This issue reproduces in custom projects and Apple’s official sample “Capturing Body Motion in 3D”.
The AR session runs normally, but the delegate call:

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor])

never yields an ARBodyAnchor with valid joint transforms. All joints return nil when calling:

    body.skeleton.modelTransform(for: jointName)

resulting in 0 valid joints per frame.
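For reference, this is roughly the check that produces that count (a minimal sketch of my test code, not Apple's sample):

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        for case let body as ARBodyAnchor in anchors {
            // Count the joints whose model transform resolves; on iOS
            // 26.0/26.1 this count never becomes nonzero.
            let names = ARSkeletonDefinition.defaultBody3D.jointNames
            let valid = names.filter {
                body.skeleton.modelTransform(for: ARSkeleton.JointName(rawValue: $0)) != nil
            }
            print("valid joints: \(valid.count) / \(names.count)")
        }
    }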
Environment
• Device: iPhone 17 Pro Max (LiDAR)
• iOS: 26.0 / 26.1
• Xcode: 16.0 (stable)
• Framework: ARKit + RealityKit
• Configuration used:

    let config = ARBodyTrackingConfiguration()
    config.worldAlignment = .gravityAndHeading
    config.isAutoFocusEnabled = true
    config.environmentTexturing = .none
    session.run(config)
Also tested: with and without frameSemantics = .bodyDetection
Expected Behavior
ARBodyAnchor should be detected and body.skeleton should contain ~89 valid joints with continuous updates.
I am using AccessoryTrackingProvider from ARKit to get the transform of the PSVR2 controller via originFromAnchorTransform on the AccessoryAnchor. I am also trying to attach an AnchorEntity to the controller using RealityKit.
However, none of the three options for Accessory.LocationName, which is used to define the AnchorEntity target, seems to match the position on the controller that ARKit reports.
The attached picture shows two transforms:
RealityKit - using .gripSurface to define the AnchoringComponent.Target.accessory location.
ARKit - using originFromAnchorTransform from AccessoryTrackingProvider.
They are not aligned at the same point.
As for the other options of Accessory.LocationName, .aim is located at the tip of the controller, and .grip is at the same position as .gripSurface but with a different orientation.
I am wondering why there is no option for Accessory.LocationName that actually matches the transform captured by ARKit.
I'd like to have a bit of control over the transparency of the VideoMaterial. Is there any way to prepare a ShaderGraph unlit shader and use it with the VideoMaterial?
I have an arguably massive project and am not sure if the issue is with the assets or with my approach in the code.
The error says: Tool terminated due to error "SIGNAL 6: Abort trap: 6"
Basically, I have around 15-20 assets (usda files built out of usdz files). In the code I load a scene containing all the usda files, and I have functions to enable and disable a particular asset when needed.
This was working as intended when I was using dummy assets (fewer polygons, fewer textures).
But once I placed the actual assets, the error appears and persists. Is loading all the scenes at once a bad approach?
Previously I used an approach that loads scenes on demand, and that involved some lag before the assets rendered. My current approach (with the dummies) works like a charm, rendering and hiding the assets in real time with no lag (roughly as sketched below).
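A simplified sketch of that approach ("MasterScene", realityKitContentBundle, and content are placeholders for my actual scene, bundle, and RealityView content):

    // Load one scene containing all the usda assets up front...
    let scene = try await Entity(named: "MasterScene", in: realityKitContentBundle)
    content.add(scene)

    // ...then toggle visibility instead of loading/unloading at runtime.
    func show(onlyAsset name: String) {
        for child in scene.children {
            child.isEnabled = (child.name == name)
        }
    }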
Kindly suggest any workarounds.
https://developer.apple.com/documentation/realitykit/videomaterial
The documentation: "Video materials support transparency if the source video’s file format also supports transparency."
I have a video with transparency (Hand.mov, HEVC with alpha). I can show the video with a transparent background correctly in the Vision Pro simulator, but on a physical device the video has a black background. I'm sure the video format is OK, because I can get the texture from the video and display it on an UnlitMaterial.
How can I show the transparent video correctly with RealityKit's VideoMaterial?
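For context, the setup is essentially this (a minimal sketch; the plane mesh and its size are placeholders):

    import AVFoundation
    import RealityKit

    // VideoMaterial is documented to honor alpha when the source format
    // supports it (here: HEVC with alpha).
    let url = Bundle.main.url(forResource: "Hand", withExtension: "mov")!
    let player = AVPlayer(url: url)
    let material = VideoMaterial(avPlayer: player)
    let plane = ModelEntity(mesh: .generatePlane(width: 0.5, height: 0.3),
                            materials: [material])
    player.play()

This shows the transparent background correctly in the simulator; on the device the same plane renders with a black background.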
Hi there,
I received an enterprise license file to include the enhanced object tracking configuration for the Vision Pro. My account is part of the team that got the allowance from Apple to use this capability. Unfortunately, although I followed the guide, I cannot find the Object Tracking capability when I try to add it to my project. Other capabilities, like Main Camera on the Vision Pro, are there, but Object Tracking is not. I am using Xcode 26.1 and visionOS 26.1. What am I missing here?
Thanks in advance,
Matthias
After adding TextComponents to my entities on visionOS, I have observed that visualBounds ignores the TextComponents.
The documentation states that a TextComponent should render a rounded rectangle mesh. These meshes are visible on the device, but they are not visible in the debugger ("Capture Entity Hierarchy") and are ignored by visualBounds.
Am I missing something?
static func makeDirection(_ direction: Direction) -> Entity {
    let text = Entity()
    text.name = direction.rawValue
    text.setScale(SIMD3(repeating: 5), relativeTo: nil)
    text.transform.rotation = direction.rotation
    text.components.set(direction.textComponent)
    return text
}
My workaround is to add a disabled ModelEntity and take its bounds 😬
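Roughly like this (a sketch of the workaround; the generateText parameters are trimmed):

    // A real text mesh that visualBounds can measure, disabled so it
    // never renders next to the TextComponent version.
    let measured = ModelEntity(mesh: .generateText(direction.rawValue))
    measured.isEnabled = false
    text.addChild(measured)
    let bounds = measured.visualBounds(relativeTo: nil)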
Environment
visionOS 26.1, Xcode 26.1.1
Problem
When a WindowGroup opens an ImmersiveSpace and the user closes the window via the X button, the async Task in .onDisappear gets cancelled before dismissImmersiveSpace() completes, leaving the ImmersiveSpace active with no way to exit.
Steps
WindowGroup opens the ImmersiveSpace in .onAppear
User clicks X to close the window
.onDisappear fires, but the async cleanup is cancelled
ImmersiveSpace remains active; the user is trapped
Expected
ImmersiveSpace dismissed when window closes
Actual
ImmersiveSpace remains active
Code
.onAppear {
    Task {
        await openImmersiveSpace(id: "VideoCallMainCamera")
    }
}
.onDisappear {
    Task {
        await dismissImmersiveSpace() // gets cancelled
    }
}
What I've Tried
Task in .onDisappear ❌
scenePhase monitoring ❌
High priority Task ❌
.restorationBehavior(.disabled) + .defaultLaunchBehavior(.suppressed) ✅ (prevents restoration but doesn't fix immediate cleanup)
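One more variation I haven't ruled out (a sketch only, unverified): running the dismissal in an unstructured detached task, on the theory that it won't inherit the cancelled context of the closing view:

    .onDisappear {
        // Task.detached doesn't inherit the surrounding task context, so
        // in theory it shouldn't be cancelled when the window tears down.
        // Unverified on visionOS 26.1.
        Task.detached {
            await dismissImmersiveSpace()
        }
    }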
Question
What's the recommended pattern for ensuring ImmersiveSpace cleanup when WindowGroup closes? Is there a way to block window closure until async cleanup completes, or should ImmersiveSpaces automatically dismiss with their parent window?
Problem Description:
I am developing an application that runs in the Shared Space on Apple Vision Pro using Unity. When using the UI ScrollView component, I found that Mask / RectMask2D does not function in the Shared Space.
Scrolled content is not masked or cropped; it extends beyond the view boundary and is displayed directly.
The same UI works correctly on platforms such as the Unity Editor, iOS, and macOS; the issue only occurs in the Shared Space on Vision Pro.
Reproduction steps:
Create a ScrollView in Unity.
Add a Mask or RectMask2D to the viewport.
Deploy the application to Apple Vision Pro and run it in Shared Space mode.
Scrolled content is not clipped by the mask; the masking is entirely ineffective.
Expected behavior:
The content of ScrollView should be properly clipped by Mask / RectMask2D and should not render outside the mask boundary.
Actual results:
In the Shared Space on Vision Pro, the mask is ineffective, causing scrolled content to extend beyond the designated area and resulting in severe UI distortion.
Environmental Information:
Device: Apple Vision Pro
Mode: Shared Space
Unity Version: 6000.0.40f1
visionOS version: visionOS 26.0
Unity PolySpatial Version: 2.0.4
Impact
This issue causes Unity UI to fail to display correctly on Vision Pro, preventing ScrollView from properly clipping content, which impacts the UI experience and interaction effects in practical applications.
Expected Result: When running a Unity app in the Shared Space on visionOS, the Mask / RectMask2D of a ScrollView functions correctly.
Hi all,
I am currently developing a game in Unity for visionOS, and for my specific use case I'd prefer to use the PSVR2 controllers as the source of the raycast for menu selection instead of the default visionOS gaze. Is there a way to access the IMU of the PSVR2 controllers to do this, instead of just using eye gaze + controller click for selection? Is there perhaps a specific configuration for GCController from within Unity?
Thank you!
I recently added pushWindow to my app, and I discovered that in visionOS 26.2 RC (23N301), pushWindow followed by dismissWindow no longer works as expected.
Specifically, if the user moves the pushed window, then when the pushed window is later dismissed, the parent window's position isn't aligned with the pushed window's new position; its original position is restored instead.
Curiously, the bug only happens when the app is launched from the visionOS home view, not when the app is launched from Xcode. It also doesn't happen in the visionOS 26.2 simulator.
Another interesting detail: while the parent window is hidden, if the user long-presses the Digital Crown and then dismisses the pushed window, the parent window's position seems to be immune to the Digital Crown scene reorientation. It's restored to its original real-world position.
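For context, the calls involved are just the standard pair (a minimal sketch; the window id and button labels are placeholders):

    // In the parent window:
    @Environment(\.pushWindow) private var pushWindow
    Button("Push") { pushWindow(id: "detail") }   // parent hides, detail appears

    // In the pushed window:
    @Environment(\.dismissWindow) private var dismissWindow
    Button("Done") { dismissWindow() }            // parent should reappear in place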
Demo video: https://youtu.be/zR3t2ON3Wz0
I've submitted feedback as FB21287011 with a sample app and detailed repro steps.
Has anyone else encountered this issue already and figured out a workaround? It would be nice if I could get pushWindow to work correctly in my app.
Thanks everybody! 😀
Hi,
We’ve been successfully using the RoomPlan API in our application for over two years. Recently, however, users have reported encountering persistent capture errors during their sessions. Specifically, the errors observed are:
CaptureError.worldTrackingFailure
CaptureError.exceedSceneSizeLimit
What we have observed:
Persistent Errors: The errors continue to occur even after initiating new capture sessions.
Normal Usage: Our implementation adheres to typical usage patterns of the RoomPlan API without exceeding any documented room size limits.
Limited Feature Usage: We are not utilizing the WorldTracking feature for the StructureBuilder functionality to stitch rooms together.
Potential State Caching: Given that these errors persist across sessions, we suspect that there might be memory or state cached between sessions that is not being cleared, particularly since we are not taking advantage of StructureBuilder.
Request:
Could you please advise if there is any internal caching or memory retention between capture sessions that might lead to these errors? Additionally, we would appreciate guidance on how to clear or manage this state when the StructureBuilder feature is not in use.
Here is a generalised version of our capture session initialization code to help diagnose the issue.
struct RoomARCaptureView: UIViewRepresentable {
    typealias Handler = (CapturedRoom, Error?) -> Void

    @Binding var stop: Bool
    @Binding var done: Bool
    let completion: Handler?

    func makeUIView(context: Self.Context) -> RoomCaptureView {
        let view = RoomCaptureView(frame: .zero)
        view.delegate = context.coordinator
        view.captureSession.run(configuration: .init())
        return view
    }

    func updateUIView(_ uiView: RoomCaptureView, context: Self.Context) {
        if stop {
            // Stop the session only once; stopping multiple times causes
            // issues with the final presentation.
            uiView.captureSession.stop()
            stop = false
            done = true
        }
    }

    static func dismantleUIView(_ uiView: RoomCaptureView, coordinator: Self.Coordinator) {
        uiView.captureSession.stop()
    }

    func makeCoordinator() -> ARViewCoordinator {
        ARViewCoordinator(completion)
    }

    @objc(ARViewCoordinator)
    class ARViewCoordinator: NSObject, RoomCaptureViewDelegate {
        var completion: Handler?

        // NSCoding stubs required by RoomCaptureViewDelegate.
        public required init?(coder: NSCoder) {}
        public func encode(with coder: NSCoder) {}

        public init(_ completion: Handler?) {
            super.init()
            self.completion = completion
        }

        public func captureView(shouldPresent roomDataForProcessing: CapturedRoomData, error: Error?) -> Bool {
            return true
        }

        public func captureView(didPresent processedResult: CapturedRoom, error: Error?) {
            completion?(processedResult, error)
        }
    }
}
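One thing we may also try (a hypothetical sketch; scanID and handleRoom are not in our current code) is forcing a brand-new RoomCaptureView per scan by changing the representable's SwiftUI identity, so no view-level state can survive between sessions:

    // Assigning a fresh UUID before each capture makes SwiftUI dismantle
    // and rebuild the RoomCaptureView instead of reusing it.
    RoomARCaptureView(stop: $stop, done: $done, completion: handleRoom)
        .id(scanID)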
Thank you for your assistance.
We have been using RoomPlan in our app for 2+ years. Through a combination of in-app and manual coaching on scanning best practices, most users are able to achieve high-quality scans on a consistent basis.
In recent weeks, however, we have observed an increase in reports of degraded scanning performance, even from veteran users who had not previously encountered issues. The RoomCaptureView overlay is jittery and crooked, and the resulting scan file has significant issues, even for simple, well-lit rectangular rooms.
It is difficult to troubleshoot these issues given the number of variables at play, and the overall volume of reports is still relatively low, but we'd appreciate any guidance on known issues or workarounds that could help unblock our users who are being affected by this. I noticed that this post includes an acknowledgement of FB14454922 and FB15035788. Our issues seem slightly different as the scans are simply inaccurate and jittery without failing outright. I haven't found any other threads on similar issues.
What is the reason the hand-tracking joints have these axes? I'm trying to create a virtual hand model, and these axes make it a mess.
I posted https://developer.apple.com/forums/thread/809481 yesterday about an issue I discovered with pushWindow in visionOS 26.2 RC, but today I discovered a second problem with pushWindow.
If window A calls pushWindow to present window B, and the user pins window B to a wall, the following unexpected behaviors are observed:
Window B spontaneously disappears.
If the user re-launches the (still running) app from the visionOS home view, both window A and window B appear simultaneously. I assume only window B should be visible at this point, since window A pushed window B.
If the user closes window B, it's now impossible to present window B again. Calls to pushWindow appear to be ignored.
If the user force-quits the app and relaunches it, and pushWindow is called again, window B appears, but window A remains visible.
I also noticed this surprising behavior:
This broken state of pushWindow behavior now affects all other apps on the system that subsequently call pushWindow, not just the app whose pushed window was pinned as described above.
A workaround is to reboot the device, and then the system will behave as expected until the next time the user pins a pushed window.
We're trying to switch from using main camera access in ARKit to screen capture with passthrough; however, we're facing some issues, and it seems a bit complicated to debug.
We have set up a broadcast extension and added some logs to the SampleHandler, but we get nothing in the console, nor does the recording start. We set up the picker as well, and we can see our extension in Control Center as one of the choices, but tapping Start results in it stopping in less than one second.
The only messages we see in Console.app are rather contradictory:
[INFO] -[RPRecordingManager getSystemBroadcastExtensionInfo:]_block_invoke:1333 Extension has passthrough license
and just right after
[INFO] -[RPRecordingManager getSystemBroadcastExtensionInfo:]_block_invoke:1336 Extension does not have passthrough license
Hi,
we've developed an app for Vision Pro that utilises the GroupActivities SDK to provide shared experiences for our users.
Remote participation works great, but we can't get nearby sharing to work.
The behaviour we're observing:
User 1 engages the share sheet from a Volume; the second Vision Pro is visible.
User 1 starts nearby sharing
Session initialisation runs for approx. 30 seconds, then fails
Sometimes, the nearby participant doesn't show up at all after the initialisation has failed once.
As stated in the Configure your visionOS app for sharing with people nearby article, we didn't make any changes to our implementation to support nearby sharing.
Any help would be greatly appreciated.
Kind regards,
David
I have this problem on visionOS: when I dismiss and reopen a window that displays an ImagePresentationComponent, the reopened window is missing the resize UI elements when I look at the window corners. The rest of the window UI elements (drag, close, ...) are there. Resizing was possible before the window was dismissed.
The code is something like this:
WindowGroup(id: "image-display-window", ...) {
    // ...
}
.windowResizability(.automatic)
.windowStyle(.plain)
I call dismissWindow() from the window view and it is dismissed correctly.
Then I call openWindow(id: "image-display-window", value: data) from another view to reopen it. It reopens, but it's missing the ability to resize.
Does anyone know how to fix this?
Thanks.
I'm trying to develop an app that broadcasts what the user sees (previously we were using main camera access), but now we'd like to investigate this option.
I have set up the BroadcastExtension and added the picker. When I tap my button, I can see my broadcast extension in the options list in Control Center; once I tap Start, it stops after roughly one second.
I'm not able to get anything in the console from my SampleHandler (no prints, logs, or anything).
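For reference, the handler is essentially the empty template (a sketch; nothing in it ever logs):

    import CoreMedia
    import ReplayKit

    class SampleHandler: RPBroadcastSampleHandler {
        override func broadcastStarted(withSetupInfo setupInfo: [String: NSObject]?) {
            NSLog("broadcastStarted") // never appears in Console for us
        }
        override func processSampleBuffer(_ sampleBuffer: CMSampleBuffer,
                                          with sampleBufferType: RPSampleBufferType) {
            // Frames would be handled here.
        }
    }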
I can see, however, some misleading information in Console.app (one message right after the other):
[INFO] -[RPRecordingManager getSystemBroadcastExtensionInfo:]_block_invoke:1333 Extension has passthrough license
[INFO] -[RPRecordingManager getSystemBroadcastExtensionInfo:]_block_invoke:1336 Extension does not have passthrough license
We have the enterprise license and the capability, and I did add the capability to the extension target as well.