Hello,
I am in the process of implementing SharePlay support in my visionOS app. Everything runs fine when I test locally, but when the app is distributed via TestFlight, calling try await activity.activate() shows the SharePlay dialog as usual, and then, as soon as I start a new FaceTime call, my ImmersiveSpace gets dismissed.
This only happens when the app is distributed via TestFlight; when I run it locally, the ImmersiveSpace stays active as expected.
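For context, the activation path is essentially this (a simplified sketch with a placeholder activity type; the real app defines its own metadata):

import GroupActivities

// Placeholder activity type; the real app uses its own metadata.
struct SharedImmersiveActivity: GroupActivity {
    var metadata: GroupActivityMetadata {
        var meta = GroupActivityMetadata()
        meta.title = "Shared Immersive Experience"
        meta.type = .generic
        return meta
    }
}

func startSharePlay() async {
    let activity = SharedImmersiveActivity()
    do {
        // Presents the SharePlay sheet when no FaceTime call is active yet.
        _ = try await activity.activate()
    } catch {
        print("Failed to activate activity: \(error)")
    }
}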
Looking at the console on my Mac I found this log:
Invalid initial client settings class: UIApplicationSceneClientSettings; expected class: MRUISharedApplicationSceneClientSettings; bundle ID: com.apple.facetime; scene ID: com.apple.facetime:SFBSystemService-DDA8C751-C0C4-487E-AD85-59EF4E6C6050
Does anyone have an idea how I can fix this? It's driving me nuts; I've spent over a day looking for a workaround, so far without success.
Thanks!
I have a question I guess more for the Apple team.
But why are there no fully 3D experiences for the Vision Pro lineup?
I know Apple has given us tools to bring Unity 3D games to iPhone, and I guess you can also build them in RealityKit. But why, at this point, are 3D games limited to iPad and iPhone, and why can't they be brought to Vision Pro?
Just to explain: when I say a totally 3D game, I mean games like Gorn. The Vision Pro is definitely powerful enough, but it feels limited to tabletop games and AR games.
Is this something Apple is thinking about implementing?
The issue is reproducible with an empty project. When you run it and tap "Open immersive space", it takes a couple of minutes to respond. The issue is only reproducible on a real device with the debugger attached, and other developers can reproduce it too (it's not specific to my environment). The issue doesn't exist in Xcode 16.
After the initial long delay, subsequent opens work fine.
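For completeness, the "Open immersive space" button in the empty project is just the default template code, roughly this (sketched from memory; the space ID is a placeholder):

import SwiftUI

struct ToggleImmersiveSpaceButton: View {
    @Environment(\.openImmersiveSpace) private var openImmersiveSpace

    var body: some View {
        Button("Open immersive space") {
            Task {
                // With the debugger attached, this call takes minutes to return on device;
                // without the debugger it returns immediately.
                _ = await openImmersiveSpace(id: "ImmersiveSpace")
            }
        }
    }
}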
Console logs:
nw_socket_copy_info [C1:2] getsockopt TCP_INFO failed [102: Operation not supported on socket]
nw_socket_copy_info getsockopt TCP_INFO failed [102: Operation not supported on socket]
Failed to set dependencies on asset 9303749952624825765 because NetworkAssetManager does not have an asset entity for that id.
void * _Nullable NSMapGet(NSMapTable * _Nonnull, const void * _Nullable): map table argument is NULL
PSO compilation completed for driver shader copyFromBufferToTexture so=0 sbpr=256 sbpi=16384 ss=(64, 64, 1) p=70 sc=1 ds=0 dl=0 do=(0, 0, 0) in 1997
XPC connection interrupted
<<<< FigAudioSession(AV) >>>> audioSessionAVAudioSession_CopyMXSessionProperty signalled err=-19224 (kFigAudioSessionError_UnsupportedOperation) (getMXSessionProperty unsupported) at FigAudioSession_AVAudioSession.m:606
Failed to load item AXCodeItem<0x14706f250> [Rank:6000] SpringBoardUIServices [AXBundle name:/System/Library/AccessibilityBundles/SpringBoardUIServices.axbundle/SpringBoardUIServices] [Platforms and Targets:{ iOS = SpringBoardUIServices; } Framework] [Excluded: (null)]. error: Error Domain=AXLoading Code=0 "URL does not exist: file:///System/Library/AccessibilityBundles/SpringBoardUIServices.axbundle" UserInfo={NSLocalizedDescription=URL does not exist: file:///System/Library/AccessibilityBundles/SpringBoardUIServices.axbundle}
Failed to load item AXCodeItem<0x14706f250> [Rank:6000] SpringBoardUIServices [AXBundle name:/System/Library/AccessibilityBundles/SpringBoardUIServices.axbundle/SpringBoardUIServices] [Platforms and Targets:{ iOS = SpringBoardUIServices; } Framework] [Excluded: (null)]. error: Error Domain=AXLoading Code=0 "URL does not exist: file:///System/Library/AccessibilityBundles/SpringBoardUIServices.axbundle" UserInfo={NSLocalizedDescription=URL does not exist: file:///System/Library/AccessibilityBundles/SpringBoardUIServices.axbundle}
[b30780-MRUIFeedbackTypeButtonWithBackgroundTouchDown] Playback timed out before completion (after 3111 ms)
Failed to set dependencies on asset 7089614247973236977 because NetworkAssetManager does not have an asset entity for that id.
If I trigger the Apple rating modal in an ImmersiveSpace, it appears on the ground at (0,0,0). I need it to appear in front of the user, the way the push notification permission prompt and other permission requests do.
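For reference, the prompt is triggered via the standard StoreKit requestReview environment action, something like this (a simplified sketch, not my exact call site):

import SwiftUI
import StoreKit

struct RatingButton: View {
    // Standard StoreKit review action; where the prompt appears is system-controlled.
    @Environment(\.requestReview) private var requestReview

    var body: some View {
        Button("Rate the app") {
            // Inside an ImmersiveSpace this currently shows the prompt at the
            // scene origin (0, 0, 0) instead of in front of the user.
            requestReview()
        }
    }
}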
Hello,
If you add a ManipulationComponent to a RealityKit entity and then continue to add instructions, sooner or later you will encounter a crash with the following error message:
Attempting to move entity “%s” (%p) under “%s” (%p), but the new parent entity is currently being removed. Changing the parent/child entities of an entity in an event handler while that entity is already being reassigned is not supported.
CoreSimulator 1048 – Device: Apple Vision Pro 4K (B87DD32A-E862-4791-8B71-92E50CE6EC06) – Runtime: visionOS 26.0 (23M336) – Device Type: Apple Vision Pro
The problem occurs precisely with this code:
ManipulationComponent.configureEntity(object)
I adapted Apple's ObjectPlacementExample and made the changes available via GitHub.
The desired behavior is that I can add entities to the ManipulationComponent and RealityKit then runs stably and does not crash randomly.
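A possible workaround (an untested sketch only) would be to defer the configuration call off the current event handler, so it doesn't run while RealityKit is still reassigning the entity:

// Untested workaround sketch: defer the call to the next main-actor turn so it
// doesn't run inside an event handler while the entity is being reassigned.
Task { @MainActor in
    ManipulationComponent.configureEntity(object)
}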
GitHub Repo
Thanks
Andre
I'm working on an application for viewing AMF models on macOS, using RealityKit. AMF supports several different ways to color models, including per-vertex color (where the color of a triangle is interpolated from vertex to vertex) as well as per-face color (where the color of the triangle is the same across the entire face).
I'm trying to figure out how to support those color models using a RealityKit mesh. Apple's documentation (https://developer.apple.com/documentation/realitykit/modifying-realitykit-rendering-using-custom-materials) talks about per-vertex colors, but I haven't found a way to create a mesh that includes per-vertex colors, other than using a texture map (which might be the correct solution).
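In case it helps frame the question, the texture-map route for per-face colors would look roughly like this: bake one texel per face into a small texture and point every vertex of a face at that texel (a sketch under that assumption; makeFaceColoredModel and its inputs are hypothetical, it isn't tested against real AMF data, and mip-mapping/filtering may still blend neighboring texels):

import RealityKit
import CoreGraphics

// Sketch: bake per-face colors into a 1 x faceCount texture and point each
// face's three vertices at the texel for that face.
func makeFaceColoredModel(positions: [SIMD3<Float>],      // 3 entries per face
                          faceColors: [SIMD4<UInt8>]) throws -> ModelEntity {
    let faceCount = faceColors.count

    // One RGBA8 texel per face.
    let pixelData = Data(faceColors.flatMap { [$0.x, $0.y, $0.z, $0.w] })
    let provider = CGDataProvider(data: pixelData as CFData)!
    let image = CGImage(width: faceCount, height: 1,
                        bitsPerComponent: 8, bitsPerPixel: 32,
                        bytesPerRow: faceCount * 4,
                        space: CGColorSpaceCreateDeviceRGB(),
                        bitmapInfo: CGBitmapInfo(rawValue: CGImageAlphaInfo.premultipliedLast.rawValue),
                        provider: provider, decode: nil,
                        shouldInterpolate: false, intent: .defaultIntent)!
    let texture = try TextureResource.generate(from: image, options: .init(semantic: .color))

    // Every vertex of face i samples the center of texel i.
    let uvs: [SIMD2<Float>] = (0..<faceCount).flatMap { face -> [SIMD2<Float>] in
        let u = (Float(face) + 0.5) / Float(faceCount)
        return [SIMD2(u, 0.5), SIMD2(u, 0.5), SIMD2(u, 0.5)]
    }

    var descriptor = MeshDescriptor(name: "faceColored")
    descriptor.positions = MeshBuffer(positions)
    descriptor.textureCoordinates = MeshBuffer(uvs)
    descriptor.primitives = .triangles(Array(0..<UInt32(positions.count)))

    var material = UnlitMaterial()
    material.color = .init(texture: .init(texture))

    let mesh = try MeshResource.generate(from: [descriptor])
    return ModelEntity(mesh: mesh, materials: [material])
}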
Can someone give me some pointers?
Hey there, I’m currently planning to use RealityKit in a new multiplatform app I’m building. Unfortunately, I noticed that watchOS is not supported by RealityKit, while SceneKit is being deprecated. However, I’d like to maintain the same codebase across platforms. What are my options?
Hi team, I'm looking for the RealityKit debugger in Xcode 26 beta 3. I'm running a RealityKit app on my iPad running iPadOS 26 b3, but the debugger option is not there in Xcode.
Hello,
In my project, I have attached a ManipulationComponent to Entity A, and as expected, I'm able to interact with it using the built-in gestures. I have another Entity B, a child of A, that I would like to interact with as well, so I attempted to add a ManipulationComponent to B. However, no gestures seem to register on B; I can still interact with A, but B cannot be interacted with, despite both entities having ManipulationComponents.
So I'm wondering if I'm just doing something wrong, if this is an issue with the ManipulationComponent, or if this is a limitation of the API.
Attached is the code used to add the ManipulationComponent to an Entity and it was done on both A and B:
let mc = ManipulationComponent()
model.components.set(mc)

var boxShape = ShapeResource.generateBox(width: 0.25, height: 0.05, depth: 0.25)
boxShape = boxShape.offsetBy(translation: simd_float3(0, -0.05, -0.25))
ManipulationComponent.configureEntity(model, collisionShapes: [boxShape])

if var mc = model.components[ManipulationComponent.self] {
    mc.releaseBehavior = .stay
    mc.dynamics.inertia = .low
    model.components.set(mc)
}
I am using visionOS 26.0; let me know if there's any additional information needed.
Hi all,
I've developed some code that enables an arcball camera interaction with my scene. I've done this using components and systems. The implementation feels a bit messy, as I've got gesture code on my RealityView and then a bunch of other code that consumes those gesture inputs in my component and system.
Is there a demo app or some example code that shows a nice way to encapsulate these things into one item for custom cameras, something like Apple's .realityViewCameraControls(.orbit)?
If not, can anyone recommend an approach to take?
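The kind of encapsulation I'm after would look something like this: the gesture lives in a view modifier that only writes into a component, and a system does all the transform math (a rough sketch with made-up names like ArcballCameraComponent, not code from my project):

import SwiftUI
import RealityKit
import simd

// Everything camera-related lives in one component + system, so the
// RealityView only has to attach the component and the gesture modifier.
struct ArcballCameraComponent: Component {
    var target: SIMD3<Float> = .zero
    var radius: Float = 2
    var yaw: Float = 0    // radians around Y
    var pitch: Float = 0  // radians around X
}

struct ArcballCameraSystem: System {
    static let query = EntityQuery(where: .has(ArcballCameraComponent.self))
    init(scene: RealityKit.Scene) {}

    func update(context: SceneUpdateContext) {
        for entity in context.entities(matching: Self.query, updatingSystemWhen: .rendering) {
            guard let arcball = entity.components[ArcballCameraComponent.self] else { continue }
            // Orbit position = target + rotated offset along +Z at the given radius.
            let rotation = simd_quatf(angle: arcball.yaw, axis: [0, 1, 0])
                         * simd_quatf(angle: arcball.pitch, axis: [1, 0, 0])
            let position = arcball.target + rotation.act([0, 0, arcball.radius])
            // look(at:from:relativeTo:) sets both position and orientation.
            entity.look(at: arcball.target, from: position, relativeTo: nil)
        }
    }
}

// The gesture lives in a modifier so the RealityView body stays clean.
struct ArcballControls: ViewModifier {
    let cameraEntity: Entity
    @State private var lastTranslation: CGSize = .zero

    func body(content: Content) -> some View {
        content.gesture(
            DragGesture()
                .onChanged { value in
                    guard var arcball = cameraEntity.components[ArcballCameraComponent.self] else { return }
                    // Apply only the delta since the last change event.
                    arcball.yaw   -= Float(value.translation.width - lastTranslation.width) * 0.005
                    arcball.pitch -= Float(value.translation.height - lastTranslation.height) * 0.005
                    cameraEntity.components.set(arcball)
                    lastTranslation = value.translation
                }
                .onEnded { _ in lastTranslation = .zero }
        )
    }
}

extension View {
    func arcballCameraControls(for cameraEntity: Entity) -> some View {
        modifier(ArcballControls(cameraEntity: cameraEntity))
    }
}

ArcballCameraComponent.registerComponent() and ArcballCameraSystem.registerSystem() would still need to be called once at launch, and the component would go on whatever entity owns the PerspectiveCamera.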
This post documents an issue I reported in feedback FB19610114, and I'd like to see if anyone knows of a workaround. Here is a copy of the feedback.
Short version
Manipulation (SwiftUI OR RealityKit) fails to translate entities after changing rooms. By changing rooms, I mean a human wearing an Apple Vision Pro leaving one room and entering another room. Once this issue occurs, it impacts all apps that use these features. A device restart is the only solution I have to fix it.
Feedback FB19610114
This is an odd one. I'm using the new Manipulation Component in visionOS 26. Most of the time this works well. Sometimes it stops working, and when it does, the only way to get it working again is to reboot the headset.
When this happens, I can continue to rotate and scale items, but translation no longer works. It is as if the item is stuck to a fixed point in the parent scene (window, volume, etc.). When this bug occurs, it affects every app across the entire operating system that uses manipulation, including both the RealityKit component AND the SwiftUI version; it is not limited to one app or to apps that I am working on, and it even affects apps from Apple.
I won't speculate on the cause of this, but I do know of one way where I can always get it to happen.
Here is how to reproduce it:
Make an Xcode project with a single entity that uses the Manipulation Component. There is no need to customize the configuration of this component. The default implementation will work.
Build and run this app on device. You can keep running from device or quit and launch the app like normal on device.
Open the app and manipulate the entity - it should work as expected.
Physically walk into another room. It is vital that you leave the current room that you are in and enter a different room entirely.
Use the digital crown to recenter your view and bring your window or volume to you.
Test the manipulation on the entity again - it should still be working as expected at this point.
Physically move yourself and your headset back into the original room where you started.
Use the digital crown to recenter your view and bring your window or volume to you.
Test the manipulation on the entity again - you should now see the issue.
When I follow the steps above, manipulation translation stops working at this point 100% of the time. It impacts any application using this API, and the only way to fix it is to restart my headset.
A few points to keep in mind
It does not matter if an app is actively being run from Xcode.
When this occurs, it impacts every single app, not just one.
When this occurs, rotation and scaling continue to work, but the entity/view cannot be translated.
This impacts BOTH the SwiftUI version and the RealityKit version.
When this occurs, the only way to "fix" it is to reboot the device.
Hello Apple team,
I'm working on an iOS AR app using SwiftUI and RealityKit,
and I was wondering if the Cinematic API can be used with a RealityKit scene. I’d like to achieve a shallow depth of field while keeping the 3D asset in focus, and vice versa.
Thanks!
When using the new RealityKit Manipulation Component on Entities, indirect input will never translate the entity - no matter what settings are applied. Direct manipulation works as expected for both translation and rotation.
Is this intended behaviour? This is different from how indirect manipulation works on Model3D. How else can we get translation from this component?
visionOS 26 Beta 2
Build from macOS 26 Beta 2 and Xcode 26 Beta 2
Attached is replicable sample code; I have tried this in other projects with the same results.
var body: some View {
    RealityView { content in
        // Add the initial RealityKit content
        if let immersiveContentEntity = try? await Entity(named: "MovieFilmReel", in: reelRCPBundle) {
            ManipulationComponent.configureEntity(immersiveContentEntity,
                                                  allowedInputTypes: .all,
                                                  collisionShapes: [ShapeResource.generateBox(width: 0.2, height: 0.2, depth: 0.2)])
            immersiveContentEntity.position.y = 1
            immersiveContentEntity.position.z = -0.5

            var mc = ManipulationComponent()
            mc.releaseBehavior = .stay
            immersiveContentEntity.components.set(mc)

            content.add(immersiveContentEntity)
        }
    }
}
Hi,
I am in the process of implementing SharePlay in our app. The shared experience opens an Immersive Space, and we set systemCoordinator.configuration.supportsGroupImmersiveSpace = true.
Now visionOS establishes a shared coordinate space for the immersive space.
From the docs:
To achieve consistent positioning of RealityKit entities across multiple devices in an immersive space during a SharePlay session
There are cases where we want to position content in front of the user (independent of the shared session, and individually for each user). Normally we do that with the transform retrieved via worldTrackingProvider.queryDeviceAnchor.originFromAnchorTransform, plus some Z offset and smooth interpolation, to position content in front of the user.
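Concretely, the per-frame placement looks roughly like this (a simplified sketch without the smoothing; updatePlacement is a hypothetical helper):

import ARKit
import RealityKit
import QuartzCore

// Simplified: place `entity` half a meter in front of the device each frame.
func updatePlacement(of entity: Entity, using worldTrackingProvider: WorldTrackingProvider) {
    guard let deviceAnchor = worldTrackingProvider.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) else {
        return
    }
    let deviceTransform = deviceAnchor.originFromAnchorTransform

    // Offset along the device's forward (-Z) axis, expressed in origin space.
    var offset = matrix_identity_float4x4
    offset.columns.3.z = -0.5
    let targetTransform = deviceTransform * offset

    entity.setTransformMatrix(targetTransform, relativeTo: nil)
}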
This works fine in non-SharePlay sessions, and the device transform is where I would expect it to be, but during the FaceTime call deviceAnchor.originFromAnchorTransform seems to use the shared origin of the immersive space, and I end up with a transform that can be offset.
Here is a video of the issue in action: https://streamable.com/205r2p
The blue rect is placed using AnchorEntity(.head, trackingMode: .continuous). This works regardless of the call, and the entity is always placed based on the head position.
The green rect is adjusted on every frame using the transform I get from worldTrackingProvider.queryDeviceAnchor. As you can see, it's offset.
Is there any way I can query this transform locally for the user during a FaceTime call?
Also, is it possible to disable this automatic entity transform syncing behavior?
Setting entity.synchronization = nil results in the entity not showing up at all.
https://developer.apple.com/documentation/realitykit/synchronizationcomponent
Is SynchronizationComponent only relevant for the legacy MultiPeerConnectivity approach?
Thank you!
Hello,
There are three issues I am running into with a default template project plus minimal additional code changes:
the Sphere_Left entity always overlaps the Sphere_Right entity.
when I release the Sphere_Left entity, it does not remain stuck to the Sphere_Right entity.
when I release the Sphere_Left entity, it distances itself from the Sphere_Right entity.
When I manipulate the Sphere_Right entity, these above 3 issues do not occur: I get a correct and expected behavior.
These issues are simple to replicate:
Create a new project in Xcode
Choose visionOS -> App, then click Next
Name your project, and leave all other options as defaults: Initial Scene: Window, Immersive Space Renderer: RealityKit, Immersive Space: Mixed, then click Next
Save your project anywhere...
Replace the entire ImmersiveView.swift file with the code below.
Run.
Try to manipulate the left sphere; you should see the same issues I mentioned above.
If you restart the app and manipulate only the right sphere, you should get the correct, expected behaviors and no issues.
I am running this on macOS 26, Xcode 26, and visionOS 26, all released recently.
ImmersiveView Code:
//
// ImmersiveView.swift
//

import OSLog
import SwiftUI
import RealityKit
import RealityKitContent

struct ImmersiveView: View {
    private let logger = Logger(subsystem: "com.testentitiessticktogether", category: "ImmersiveView")
    @State var collisionBeganUnfiltered: EventSubscription?

    var body: some View {
        RealityView { content in
            // Add the initial RealityKit content
            if let immersiveContentEntity = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
                content.add(immersiveContentEntity)

                // Add manipulation components
                setupManipulationComponents(in: immersiveContentEntity)

                collisionBeganUnfiltered = content.subscribe(to: CollisionEvents.Began.self) { collisionEvent in
                    Task { @MainActor in
                        handleCollision(entityA: collisionEvent.entityA, entityB: collisionEvent.entityB)
                    }
                }
            }
        }
    }

    private func setupManipulationComponents(in rootEntity: Entity) {
        logger.info("\(#function) \(#line) ")
        let sphereNames = ["Sphere_Left", "Sphere_Right"]
        for name in sphereNames {
            guard let sphere = rootEntity.findEntity(named: name) else {
                logger.error("\(#function) \(#line) Failed to find \(name) entity")
                assertionFailure("Failed to find \(name) entity")
                continue
            }
            ManipulationComponent.configureEntity(sphere)
            var manipulationComponent = ManipulationComponent()
            manipulationComponent.releaseBehavior = .stay
            sphere.components.set(manipulationComponent)
        }
        logger.info("\(#function) \(#line) Successfully set up manipulation components")
    }

    private func handleCollision(entityA: Entity, entityB: Entity) {
        logger.info("\(#function) \(#line) Collision between \(entityA.name) and \(entityB.name)")
        guard entityA !== entityB else { return }

        if entityB.isAncestor(of: entityA) {
            logger.debug("\(#function) \(#line) \(entityA.name) already under \(entityB.name); skipping reparent")
            return
        }
        if entityA.isAncestor(of: entityB) {
            logger.info("\(#function) \(#line) Skip reparent: \(entityA.name) is an ancestor of \(entityB.name)")
            return
        }

        reparentEntities(child: entityA, parent: entityB)
        entityA.components[ParticleEmitterComponent.self]?.burst()
    }

    private func reparentEntities(child: Entity, parent: Entity) {
        let childBounds = child.visualBounds(relativeTo: nil)
        let parentBounds = parent.visualBounds(relativeTo: nil)
        let maxEntityWidth = max(childBounds.extents.x, parentBounds.extents.x)
        let childPosition = child.position(relativeTo: nil)
        let parentPosition = parent.position(relativeTo: nil)
        let currentDistance = distance(childPosition, parentPosition)

        child.setParent(parent, preservingWorldTransform: true)
        logger.info("\(#function) \(#line) Set \(child.name) parent to \(parent.name)")
        child.components.remove(ManipulationComponent.self)
        logger.info("\(#function) \(#line) Removed ManipulationComponent from child \(child.name)")

        if currentDistance > maxEntityWidth {
            let direction = normalize(childPosition - parentPosition)
            let newPosition = parentPosition + direction * maxEntityWidth
            child.setPosition(newPosition - parentPosition, relativeTo: parent)
            logger.info("\(#function) \(#line) Adjusted position: distance was \(currentDistance), now \(maxEntityWidth)")
        }
    }
}

fileprivate extension Entity {
    func isAncestor(of other: Entity) -> Bool {
        var current: Entity? = other.parent
        while let node = current {
            if node === self { return true }
            current = node.parent
        }
        return false
    }
}

#Preview(immersionStyle: .mixed) {
    ImmersiveView()
        .environment(AppModel())
}
In sessions like Compose interactive 3D content in Reality Composer Pro, RealityKit engineers recommend working with Reality Composer Pro to create RealityKit packages to embed in our RealityKit Xcode projects.
And, comparing the workflow to Unity/Unreal, I can see the reasoning, since it is nice to prepare scenes/materials/assets visually.
But when we also want to run an Xcode Cloud CI/CD pipeline, this seems to come into conflict:
When adding a basic *.usdz to the RealityKitContent.rkassets folder, every build we run on Xcode Cloud fails with:
Compile Reality Asset RealityKitContent.rkassets
❌realitytool requires Metal for this operation and it is not available in this build environment
I have also found a related forum post, but it was specifically about compiling a *.skybox.
A bit of background on what our app is doing:
We have a RealityKit ARView session running.
During this period we place objects in RealityKit.
At some point user can "take photo" and we use session.captureHighResolutionFrame to capture a frame.
We then use the captured frame and frame.camera.projectPoint to project our objects back to 2D (see the sketch below).
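The projection step itself is the standard ARCamera call, roughly like this (a sketch; worldPosition is one of our placed objects' positions as SIMD3<Float>, and the orientation in the real app matches the current interface orientation):

import ARKit

// Project a world-space point into the captured frame's 2D image space.
// `session` is the ARView's running ARSession.
func projectToImage(_ worldPosition: SIMD3<Float>, in session: ARSession) async throws -> CGPoint {
    let frame = try await session.captureHighResolutionFrame()
    return frame.camera.projectPoint(worldPosition,
                                     orientation: .landscapeRight,
                                     viewportSize: frame.camera.imageResolution)
}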
The issue we found: on devices running iOS 26, the first photo the user takes, i.e. the first frame received from session.captureHighResolutionFrame, gives an incorrect CGPoint from frame.camera.projectPoint. If the user takes a second photo from the same camera position, the second frame received from session.captureHighResolutionFrame gives the correct CGPoint from frame.camera.projectPoint.
I noticed a difference between the first and subsequent frames that I believe corresponds to the issue: the camera's yaw value (frame.camera.eulerAngles.y) on the first frame is not correct (it is inconsistent with any subsequent frame).
I also created a small example app, following the Building an Immersive Experience with RealityKit example. The issue exists in this app on iOS 26, while on iOS 18.* the values are consistent between the first and subsequent captured frames.
Note:
The yaw value seems to differ more if we start the session in portrait but take the photo in landscape.
Example result for 3 captured frames:
Frame captured with yaw: 1.4855177402496338
Frame captured with yaw: -0.08803760260343552
Frame captured with yaw: -0.08179682493209839
Example code:
class CustomARView: ARView, ARSessionDelegate {
    required init(frame: CGRect) { super.init(frame: frame) }
    required init?(coder decoder: NSCoder) { fatalError("init(coder:) has not been implemented") }

    func setup() {
        let singleTap = UITapGestureRecognizer(target: self, action: #selector(handleTap))
        addGestureRecognizer(singleTap)
    }

    @objc
    func handleTap(_ gestureRecognizer: UIGestureRecognizer) {
        Task {
            do {
                let frame = try await session.captureHighResolutionFrame()
                print("Frame captured with yaw: \(Double(frame.camera.eulerAngles.y))")
            } catch { }
        }
    }
}

struct CustomARViewUIViewRepresentable: UIViewRepresentable {
    func makeUIView(context: Context) -> some UIView {
        let arView = CustomARView(frame: .zero)
        arView.setup()
        return arView
    }

    func updateUIView(_ uiView: UIViewType, context: Context) { }
}

struct ContentView: View {
    var body: some View {
        CustomARViewUIViewRepresentable()
            .frame(maxWidth: .infinity, maxHeight: .infinity)
            .ignoresSafeArea()
    }
}
let component = GestureComponent(DragGesture())
iOS: ☑️
visionOS: ❌
This bug has been present from the betas through the public release; please fix it.
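A minimal setup would look something like this (a sketch only; the entity, collision, and input-target setup are my assumptions about the minimum needed, while the GestureComponent line is the one above):

import SwiftUI
import RealityKit

// Attach a SwiftUI DragGesture to an entity via GestureComponent.
// On iOS the drag values arrive; on visionOS nothing is delivered.
let entity = ModelEntity(mesh: .generateBox(size: 0.1))
entity.components.set(InputTargetComponent())
entity.components.set(CollisionComponent(shapes: [.generateBox(size: [0.1, 0.1, 0.1])]))

let component = GestureComponent(
    DragGesture()
        .onChanged { value in
            print("drag changed: \(value.translation)")
        }
)
entity.components.set(component)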
Basically, take just the Xcode 26 AR App template and put the ContentView at the detail end of a NavigationStack.
On opening the app, it uses < 20 MB of memory. Tapping "Open AR", memory usage goes up to ~700 MB for the AR scene. Tapping back, memory stays at ~700 MB.
Checking with the Debug Memory Graph, I can still see all the RealityKit classes in memory, like ARView, ARRenderView, and ARSessionManager.
Here's the sample app to illustrate the issue.
PS: To keep memory pressure on the system low, there should be a way of freeing all the memory AR uses, for apps that only occasionally show AR scenes.
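A sketch of the kind of explicit teardown I'd expect to be needed when the AR screen is popped (assuming the template's ARViewContainer wrapper; I haven't verified that this alone releases the ~700 MB):

import SwiftUI
import RealityKit
import ARKit

struct ARViewContainer: UIViewRepresentable {
    func makeUIView(context: Context) -> ARView {
        ARView(frame: .zero)
    }

    func updateUIView(_ uiView: ARView, context: Context) {}

    // Called when SwiftUI tears the view down (e.g. when navigating back):
    // stop the session and detach the content so ARKit/RealityKit can release resources.
    static func dismantleUIView(_ uiView: ARView, coordinator: ()) {
        uiView.session.pause()
        uiView.scene.anchors.removeAll()
        uiView.removeFromSuperview()
    }
}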
Hi, I downloaded and ran https://developer.apple.com/documentation/realitykit/rendering-stereoscopic-video-with-realitykit
and noticed that memory usage grows linearly.
I replaced the sample video with a different 8K side-by-side video, and the app crashed almost immediately due to a memory leak.
It looks like the culprit is the makeMutablePixelBuffer() function; the allocated pixel buffers are not recycled after being used.
The screenshot is from a physical device.