I found that third-party apps like ChatGPT and Shadowrocket can show widgets in visionOS 26. How is this achieved? How can I also enable widgets for the apps I develop?
RealityKit
Simulate and render 3D content for use in your augmented reality apps using RealityKit.
Posts under RealityKit tag
200 Posts
I am building a 360° photo viewer in visionOS 26. It lets the user choose a 2:1 JPEG and renders it on a sphere mesh entity, and I load the texture with TextureResource(contentsOf: url, options: options).
I noticed two different outcomes depending on the mipmapsMode option.
When setting "mipmapsMode: .none":
The graphic quality within the "gaze area" looks sharp and clear
The two poles (top and bottom) are perfectly rendered
Massive shimmer around the "gaze area"
When setting "mipmapsMode: .allocateAndGenerateAll":
The graphic looks slightly blurrier than in ".none" within the "gaze area"
The two poles are very blurry, and the texture is hard to recognize
Much less shimmer around the "gaze area"
My question would be: Is there a way to keep the sharp image quality of ".none" without the massive shimmer?
Thank you!
Screenshots:
mipmapsMode: .none
mipmapsMode: .allocateAndGenerateAll
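For reference, a minimal sketch of the loading call in question, assuming a local 2:1 equirectangular image at url; only the mipmapsMode value changes between the two cases described above:

import Foundation
import RealityKit

// Minimal sketch: load the panorama texture with an explicit mipmapsMode.
// Only the RealityKit calls mentioned in the post are used here.
func loadPanorama(from url: URL) async throws -> TextureResource {
    var options = TextureResource.CreateOptions(semantic: .color)
    // .none                   -> sharpest in the gaze area, but heavy shimmer elsewhere
    // .allocateAndGenerateAll -> stable image, but softer, especially at the poles
    options.mipmapsMode = .allocateAndGenerateAll
    return try await TextureResource(contentsOf: url, options: options)
}

The trade-off here is the classic one between minification aliasing (no mips) and mip blur, so any answer likely comes down to filtering or a higher-resolution source image.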
Hello,
If you add a ManipulationComponent to a RealityKit entity and then keep adding entities, sooner or later you will encounter a crash with the following error message:
Attempting to move entity “%s” (%p) under “%s” (%p), but the new parent entity is currently being removed. Changing the parent/child entities of an entity in an event handler while that entity is already being reassigned is not supported.
CoreSimulator 1048 – Device: Apple Vision Pro 4K (B87DD32A-E862-4791-8B71-92E50CE6EC06) – Runtime: visionOS 26.0 (23M336) – Device Type: Apple Vision Pro
The problem occurs precisely with this code:
ManipulationComponent.configureEntity(object)
I adapted Apple's ObjectPlacementExample and made the changes available on GitHub.
The desired behavior is that I can add entities with a ManipulationComponent and RealityKit then runs stably and does not crash randomly.
GitHub Repo
Thanks
Andre
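Until this is fixed, one workaround sketch (not from the linked repo; the helper name is illustrative) is to avoid changing the hierarchy inside the event handler itself: record the intended reparent and apply it later, for example from a SceneEvents.Update subscription.

import RealityKit

// Hypothetical helper: queue parent changes and apply them outside event handlers.
@MainActor
final class ReparentQueue {
    private var pending: [(child: Entity, parent: Entity)] = []

    func enqueue(child: Entity, parent: Entity) {
        pending.append((child, parent))
    }

    // Call from a SceneEvents.Update subscription, not from the
    // manipulation/collision event handler that detected the contact.
    func flush() {
        for (child, parent) in pending where parent.isActive {
            child.setParent(parent, preservingWorldTransform: true)
        }
        pending.removeAll()
    }
}

The collision or manipulation handler then only calls enqueue(child:parent:), so no parent/child change happens while the entity is being reassigned.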
Hi, I have a hand model in FBX that I'm exporting to USD from Blender. I get a skinned mesh, and while I can track the whole hand, how do I track each joint, assign it, and animate the skinned mesh itself? Everything I've tried suggests this is not possible in RealityKit as of now. True?
Hi All,
We're a studio building an app, and one scene includes a 3D asset with a smoke particle emitter and a curved mesh that plays video. I notice that when the video alone plays, or the particle effect alone runs, the scene works fine, but the frame rate drops drastically when both are on at once.
How do I solve this? It's an important storytelling feature.
Hello,
I've been trying to leverage instanced rendering in RealityKit on visionOS but have not had success.
RealityKit states this is supported:
https://developer.apple.com/documentation/realitykit/validating-usd-files
https://developer.apple.com/videos/play/wwdc2021/10075/?time=1373
https://developer.apple.com/videos/play/wwdc2023/10099/?time=772
RealityKit Trace metrics
Validating instancing is working:
To test, I made a basic visionOS app with an immersive space and replaced the entity with my test usdz file. I've been profiling with the RealityKit Trace template in Xcode Instruments, with the immersive space open and the volume closed. This gives consistent draw-call results.
If I have a single sphere mesh with one material I get one draw call, but the number of draw calls grows linearly with mesh count no matter how my entity is configured.
What I've tried
Create a test scene in Blender and export it with instancing enabled
Create a test scene in Reality Composer Pro using references
Author usda files by hand based on the OpenUSD spec
Programmatically create a MeshResource with Contents at runtime
References
https://openusd.org/release/api/_usd__page__scenegraph_instancing.html
https://developer.apple.com/documentation/realitykit/meshresource
https://developer.apple.com/documentation/realitykit/meshresource/instance
Thank you
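For comparison, the baseline I would profile against is plain resource sharing rather than the MeshResource.Instance route: many ModelEntity instances reusing the same MeshResource and material. This is not scenegraph instancing, just the simplest sharing setup to measure draw calls against in RealityKit Trace.

import RealityKit

// Baseline sketch: entities that share one MeshResource and one material.
// Compare its draw-call count against the instanced USD variants listed above.
func addSharedResourceSpheres(to root: Entity, count: Int) {
    let mesh = MeshResource.generateSphere(radius: 0.05)
    let material = SimpleMaterial(color: .white, isMetallic: false)
    for i in 0..<count {
        let sphere = ModelEntity(mesh: mesh, materials: [material])
        sphere.position = [Float(i % 10) * 0.15, Float(i / 10) * 0.15, -1.0]
        root.addChild(sphere)
    }
}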
Hello,
I'm getting started with Xcode Cloud for my project since I upgraded to the macOS Sequoia beta, and Xcode 16 now refuses to archive builds for TestFlight.
Somewhere very late in the build process I get the following error:
realitytool requires Metal for this operation and it is not available in this build environment
The log says this happens at:
Compile Skybox urban.skybox
My project uses RealityKit. How can I fix this issue?
Thanks!
I have a question I guess more for the Apple team.
But why are there no totally 3D experiences for the Vision Pro lineup?
I know Apple has given us tools to bring Unity 3D games to iPhone, and I guess you can also build them in RealityKit. But why, at this moment, are 3D games limited to just iPad and iPhone, and why can't you bring them to Vision Pro?
Just to explain: when I say a totally 3D game, I mean games like Gorn. The Vision Pro is definitely powerful enough, but it feels limited to tabletop games and AR games.
Is this something Apple is thinking about implementing?
Topic: Graphics & Games
SubTopic: RealityKit
Tags: ARKit, Reality Composer, RealityKit, Reality Composer Pro
The issue is reproducible with an empty project. When you run it and tap "Open immersive space", it takes a couple of minutes to respond. The issue is only reproducible on a real device with the debugger attached, and other developers can reproduce it too (it's not specific to my environment). The issue doesn't exist in Xcode 16.
After the initial long delay, subsequent opens work fine.
Console logs:
nw_socket_copy_info [C1:2] getsockopt TCP_INFO failed [102: Operation not supported on socket]
nw_socket_copy_info getsockopt TCP_INFO failed [102: Operation not supported on socket]
Failed to set dependencies on asset 9303749952624825765 because NetworkAssetManager does not have an asset entity for that id.
void * _Nullable NSMapGet(NSMapTable * _Nonnull, const void * _Nullable): map table argument is NULL
PSO compilation completed for driver shader copyFromBufferToTexture so=0 sbpr=256 sbpi=16384 ss=(64, 64, 1) p=70 sc=1 ds=0 dl=0 do=(0, 0, 0) in 1997
XPC connection interrupted
<<<< FigAudioSession(AV) >>>> audioSessionAVAudioSession_CopyMXSessionProperty signalled err=-19224 (kFigAudioSessionError_UnsupportedOperation) (getMXSessionProperty unsupported) at FigAudioSession_AVAudioSession.m:606
Failed to load item AXCodeItem<0x14706f250> [Rank:6000] SpringBoardUIServices [AXBundle name:/System/Library/AccessibilityBundles/SpringBoardUIServices.axbundle/SpringBoardUIServices] [Platforms and Targets:{ iOS = SpringBoardUIServices; } Framework] [Excluded: (null)]. error: Error Domain=AXLoading Code=0 "URL does not exist: file:///System/Library/AccessibilityBundles/SpringBoardUIServices.axbundle" UserInfo={NSLocalizedDescription=URL does not exist: file:///System/Library/AccessibilityBundles/SpringBoardUIServices.axbundle}
Failed to load item AXCodeItem<0x14706f250> [Rank:6000] SpringBoardUIServices [AXBundle name:/System/Library/AccessibilityBundles/SpringBoardUIServices.axbundle/SpringBoardUIServices] [Platforms and Targets:{ iOS = SpringBoardUIServices; } Framework] [Excluded: (null)]. error: Error Domain=AXLoading Code=0 "URL does not exist: file:///System/Library/AccessibilityBundles/SpringBoardUIServices.axbundle" UserInfo={NSLocalizedDescription=URL does not exist: file:///System/Library/AccessibilityBundles/SpringBoardUIServices.axbundle}
[b30780-MRUIFeedbackTypeButtonWithBackgroundTouchDown] Playback timed out before completion (after 3111 ms)
Failed to set dependencies on asset 7089614247973236977 because NetworkAssetManager does not have an asset entity for that id.
If I trigger the Apple rating modal in an immersive space, it appears on the ground at (0, 0, 0). I need it to appear in front of the user, the way the push-notification permission prompt and other permission requests do.
Hey there, I’m currently planning to use RealityKit in a new multiplatform app I’m building. Unfortunately, I noticed that WatchOS is not supported for RealityKit, while SceneKit is getting deprecated. However, I’d like to maintain the same codebase across platforms. What are my options?
Hi team, I'm looking for the RealityKit debugger in Xcode 26 beta 3. I'm running a RealityKit app on my iPad running iPadOS 26 b3, but the debugger option is not there in Xcode.
Hello,
In my project, I have attached a ManipulationComponent to Entity A and, as expected, I'm able to interact with it using the built-in gestures. I have another Entity B, a child of A, that I would like to interact with as well, so I attempted to add a ManipulationComponent to B. However, no gestures seem to be registered on B; I can still interact with A, but B cannot be interacted with despite both entities having ManipulationComponents.
So I'm wondering if I'm doing something wrong, if this is an issue with the ManipulationComponent, or if this is a limitation of the API.
Attached is the code used to add the ManipulationComponent to an entity; it was applied to both A and B:
// Attach a ManipulationComponent and a collision shape to the entity.
let mc = ManipulationComponent()
model.components.set(mc)

var boxShape = ShapeResource.generateBox(width: 0.25, height: 0.05, depth: 0.25)
boxShape = boxShape.offsetBy(translation: simd_float3(0, -0.05, -0.25))
ManipulationComponent.configureEntity(model, collisionShapes: [boxShape])

// Tweak the component that configureEntity installed.
if var mc = model.components[ManipulationComponent.self] {
    mc.releaseBehavior = .stay
    mc.dynamics.inertia = .low
    model.components.set(mc)
}
I am using visionOS 26.0; let me know if there's any additional information needed.
Hi all,
I've developed some code that enables an arcball camera interaction with my scene, using components and systems. The implementation feels a bit messy: I have gesture code on my RealityView, and then a bunch of other code that consumes those gesture inputs in my component and system.
Is there a demo app or some example code that shows a nice way to encapsulate these things into one item for custom cameras, something like Apple's .realityViewCameraControls(.orbit)?
If not, can anyone recommend an approach to take?
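I haven't seen an official sample for this, but a rough sketch of how it could be encapsulated (the component and system names below are my own, not Apple API) is to keep all orbit state in a component that the RealityView gestures write to, and let a system derive the transform:

import Foundation
import RealityKit

// Illustrative component/system pair; gesture handlers only mutate the component.
struct ArcballCameraComponent: Component {
    var target: SIMD3<Float> = .zero
    var radius: Float = 2
    var yaw: Float = 0     // radians, written by the drag gesture
    var pitch: Float = 0   // radians
}

struct ArcballCameraSystem: System {
    static let query = EntityQuery(where: .has(ArcballCameraComponent.self))

    init(scene: RealityKit.Scene) {}

    func update(context: SceneUpdateContext) {
        for entity in context.entities(matching: Self.query, updatingSystemWhen: .rendering) {
            guard let arcball = entity.components[ArcballCameraComponent.self] else { continue }
            // Spherical coordinates around the target point.
            let offset = SIMD3<Float>(
                arcball.radius * cosf(arcball.pitch) * sinf(arcball.yaw),
                arcball.radius * sinf(arcball.pitch),
                arcball.radius * cosf(arcball.pitch) * cosf(arcball.yaw)
            )
            entity.look(at: arcball.target, from: arcball.target + offset, relativeTo: nil)
        }
    }
}

// Register once, e.g. in the app initializer:
// ArcballCameraComponent.registerComponent()
// ArcballCameraSystem.registerSystem()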
Hello Apple team,
I'm working on an iOS AR app using SwiftUI and RealityKit,
and I was wondering if the Cinematic API can be used with a RealityKit scene. I’d like to achieve a shallow depth of field while keeping the 3D asset in focus, and vice versa.
Thanks!
Hi,
I am in the process of implementing SharePlay into our app. The shared experience opens an Immersive Space and we set systemCoordinator.configuration.supportsGroupImmersiveSpace = true
Now visionOS establishes a shared coordinate space for the immersive space.
From the docs:
To achieve consistent positioning of RealityKit entities across multiple devices in an immersive space during a SharePlay session
There are cases where we want to position content in front of the user (independent of the shared session, and for each user individually). Normally we do that with the transform retrieved via worldTrackingProvider.queryDeviceAnchor(...).originFromAnchorTransform, positioning the content in front of the user (plus some Z offset and smooth interpolation).
This works fine in non-SharePlay instances, and the device transform is where I would expect it to be, but during the FaceTime call deviceAnchor.originFromAnchorTransform seems to use the shared origin of the immersive space, and I end up with a transform that might be offset.
Here is a video of the issue in action: https://streamable.com/205r2p
The blue rect is placed using AnchorEntity(.head, trackingMode: .continuous). This works regardless of the call, and the entity is always placed based on the head position.
The green rect is adjusted on every frame using the transform I get from worldTrackingProvider.queryDeviceAnchor. As you can see it's offset.
Is there any way I can query this transform locally for the user during a FaceTime call?
Also I would like to know if it's possible to disable this automatic entity transform syncing behavior?
Setting entity.synchronization = nil results in the entity not showing up at all.
https://developer.apple.com/documentation/realitykit/synchronizationcomponent
Is SynchronizationComponent only relevant for the legacy MultiPeerConnectivity approach?
Thank you!
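For context, the placement code in question looks roughly like this (a trimmed sketch; the 1 m Z offset and the smoothing are simplified placeholders):

import ARKit
import QuartzCore
import RealityKit

// Sketch: place an entity about 1 m in front of the device using the
// world-tracking device anchor. During SharePlay this transform appears to be
// expressed relative to the shared immersive-space origin, hence the offset.
func placeInFrontOfDevice(_ entity: Entity, worldTracking: WorldTrackingProvider) {
    guard let deviceAnchor = worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) else { return }
    var offset = matrix_identity_float4x4
    offset.columns.3.z = -1.0   // 1 m along the device's -Z (forward) axis
    entity.setTransformMatrix(deviceAnchor.originFromAnchorTransform * offset, relativeTo: nil)
}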
Hello,
There are three issues I run into with a default template project plus minimal additional code changes:
the Sphere_Left entity always overlaps the Sphere_Right entity.
when I release the Sphere_Left entity, it does not remain stuck to the Sphere_Right entity
when I release the Sphere_Left entity, it distances itself from the Sphere_Right entity
When I manipulate the Sphere_Right entity, these above 3 issues do not occur: I get a correct and expected behavior.
These issues are simple to replicate:
Create a new project in Xcode
Choose visionOS -> App, then click Next
Name your project, and leave all other options as defaults: Initial Scene: Window, Immersive Space Renderer: RealityKit, Immersive Space: Mixed, then click Next
Save your project anywhere...
Replace the entire ImmersiveView.swift file with the below code.
Run.
Try to manipulate the left sphere; you should see the same issues I mentioned above
If you restart the project, and manipulate only the right sphere, you should get the correct expected behaviors, and no issues.
I am running this on macOS 26, Xcode 26, and visionOS 26, all released recently.
ImmersiveView Code:
//
// ImmersiveView.swift
//

import OSLog
import SwiftUI
import RealityKit
import RealityKitContent

struct ImmersiveView: View {
    private let logger = Logger(subsystem: "com.testentitiessticktogether", category: "ImmersiveView")
    @State var collisionBeganUnfiltered: EventSubscription?

    var body: some View {
        RealityView { content in
            // Add the initial RealityKit content
            if let immersiveContentEntity = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
                content.add(immersiveContentEntity)

                // Add manipulation components
                setupManipulationComponents(in: immersiveContentEntity)

                collisionBeganUnfiltered = content.subscribe(to: CollisionEvents.Began.self) { collisionEvent in
                    Task { @MainActor in
                        handleCollision(entityA: collisionEvent.entityA, entityB: collisionEvent.entityB)
                    }
                }
            }
        }
    }

    private func setupManipulationComponents(in rootEntity: Entity) {
        logger.info("\(#function) \(#line) ")
        let sphereNames = ["Sphere_Left", "Sphere_Right"]
        for name in sphereNames {
            guard let sphere = rootEntity.findEntity(named: name) else {
                logger.error("\(#function) \(#line) Failed to find \(name) entity")
                assertionFailure("Failed to find \(name) entity")
                continue
            }
            ManipulationComponent.configureEntity(sphere)
            var manipulationComponent = ManipulationComponent()
            manipulationComponent.releaseBehavior = .stay
            sphere.components.set(manipulationComponent)
        }
        logger.info("\(#function) \(#line) Successfully set up manipulation components")
    }

    private func handleCollision(entityA: Entity, entityB: Entity) {
        logger.info("\(#function) \(#line) Collision between \(entityA.name) and \(entityB.name)")
        guard entityA !== entityB else { return }
        if entityB.isAncestor(of: entityA) {
            logger.debug("\(#function) \(#line) \(entityA.name) already under \(entityB.name); skipping reparent")
            return
        }
        if entityA.isAncestor(of: entityB) {
            logger.info("\(#function) \(#line) Skip reparent: \(entityA.name) is an ancestor of \(entityB.name)")
            return
        }
        reparentEntities(child: entityA, parent: entityB)
        entityA.components[ParticleEmitterComponent.self]?.burst()
    }

    private func reparentEntities(child: Entity, parent: Entity) {
        let childBounds = child.visualBounds(relativeTo: nil)
        let parentBounds = parent.visualBounds(relativeTo: nil)
        let maxEntityWidth = max(childBounds.extents.x, parentBounds.extents.x)
        let childPosition = child.position(relativeTo: nil)
        let parentPosition = parent.position(relativeTo: nil)
        let currentDistance = distance(childPosition, parentPosition)

        child.setParent(parent, preservingWorldTransform: true)
        logger.info("\(#function) \(#line) Set \(child.name) parent to \(parent.name)")
        child.components.remove(ManipulationComponent.self)
        logger.info("\(#function) \(#line) Removed ManipulationComponent from child \(child.name)")

        if currentDistance > maxEntityWidth {
            let direction = normalize(childPosition - parentPosition)
            let newPosition = parentPosition + direction * maxEntityWidth
            child.setPosition(newPosition - parentPosition, relativeTo: parent)
            logger.info("\(#function) \(#line) Adjusted position: distance was \(currentDistance), now \(maxEntityWidth)")
        }
    }
}

fileprivate extension Entity {
    func isAncestor(of other: Entity) -> Bool {
        var current: Entity? = other.parent
        while let node = current {
            if node === self { return true }
            current = node.parent
        }
        return false
    }
}

#Preview(immersionStyle: .mixed) {
    ImmersiveView()
        .environment(AppModel())
}
In sessions like "Compose interactive 3D content in Reality Composer Pro", RealityKit engineers recommend working with Reality Composer Pro to create RealityKit packages to embed in our RealityKit Xcode projects.
And, comparing the workflow to Unity/Unreal, I can see the reasoning, since it is nice to prepare scenes/materials/assets visually.
But when we also want to run an Xcode Cloud CI/CD pipeline, this seems to come into conflict:
When adding a basic *.usdz to the RealityKitContent.rkassets folder, every build we run on Xcode Cloud fails with:
Compile Reality Asset RealityKitContent.rkassets
❌realitytool requires Metal for this operation and it is not available in this build environment
I also found a related forum post here, but it was specifically about compiling a *.skybox.
A bit of background on what our app is doing:
We have a RealityKit ARView session running.
During this period we place objects in RealityKit.
At some point the user can "take a photo", and we use session.captureHighResolutionFrame to capture a frame.
We then use the captured frame and frame.camera.projectPoint to project our objects back to 2D.
The issue we found is that on devices running iOS 26, the first photo the user takes, i.e. the first frame received from session.captureHighResolutionFrame, gives an incorrect CGPoint from frame.camera.projectPoint. If the user takes a second photo from the same camera position, the second frame received from session.captureHighResolutionFrame gives the correct CGPoint.
I noticed a difference between the first and subsequent frames that I believe corresponds to the issue: the yaw value of the camera (frame.camera.eulerAngles.y) on the first frame is not correct (it is inconsistent with any subsequent frame).
I also created a small example app, following the Building an Immersive Experience with RealityKit example. The issue exists in this app on iOS 26, while iOS 18.* has consistent values between the first and subsequent captured frames.
Note:
The yaw value seems to differ more if we start the session in portrait but take the photo in landscape.
Example result for 3 captured frames:
Frame captured with yaw: 1.4855177402496338
Frame captured with yaw: -0.08803760260343552
Frame captured with yaw: -0.08179682493209839
Example code:
class CustomARView: ARView, ARSessionDelegate {
    required init(frame: CGRect) { super.init(frame: frame) }
    required init?(coder decoder: NSCoder) { fatalError("init(coder:) has not been implemented") }

    func setup() {
        let singleTap = UITapGestureRecognizer(target: self, action: #selector(handleTap))
        addGestureRecognizer(singleTap)
    }

    @objc
    func handleTap(_ gestureRecognizer: UIGestureRecognizer) {
        Task {
            do {
                let frame = try await session.captureHighResolutionFrame()
                print("Frame captured with yaw: \(Double(frame.camera.eulerAngles.y))")
            } catch { }
        }
    }
}

struct CustomARViewUIViewRepresentable: UIViewRepresentable {
    func makeUIView(context: Context) -> some UIView {
        let arView = CustomARView(frame: .zero)
        arView.setup()
        return arView
    }

    func updateUIView(_ uiView: UIViewType, context: Context) { }
}

struct ContentView: View {
    var body: some View {
        CustomARViewUIViewRepresentable()
            .frame(maxWidth: .infinity, maxHeight: .infinity)
            .ignoresSafeArea()
    }
}
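For completeness, the projection step we run on the captured frame looks roughly like this (a sketch; worldPosition and the viewport size are placeholders, and the orientation should match how the photo was taken):

import ARKit
import UIKit

// Sketch: map a world-space point into 2D using the captured frame's camera.
func projectToScreen(_ worldPosition: SIMD3<Float>, frame: ARFrame, viewportSize: CGSize) -> CGPoint {
    frame.camera.projectPoint(worldPosition,
                              orientation: .landscapeRight,
                              viewportSize: viewportSize)
}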
let component = GestureComponent(DragGesture())
iOS: ☑️
visionOS: ❌
This bug has persisted from the beta into the public release; please fix it.