Hi everyone,
I'm currently learning about ParticleEmitterComponent and exploring the sample app provided in the "Simulating particles in your visionOS app" documentation.
In the sample app, when I set the EmitterPreset to fireworks from the settings panel on the left side of the window and choose SystemImage, I noticed two issues:
The image applied to mainEmitter appears clipped or cropped.
The image on spawnedEmitter does not update to the selected SystemImage.
What I want to achieve:
Apply the same SystemImage to both mainEmitter and spawnedEmitter so that it displays correctly without clipping.
Remove the animation that changes the size of spawnedEmitter over time and keep it at a constant size.
Could someone explain which properties should be adjusted to achieve this behavior? Any guidance or examples would be greatly appreciated.
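For reference, this is roughly what I've been trying. I'm not sure these are the right properties, and the texture name and particleEntity are placeholders (the texture loading also assumes an async throwing context):

var particles = ParticleEmitterComponent.Presets.fireworks

// Placeholder texture - in the sample this would come from the selected SystemImage.
let texture = try await TextureResource(named: "MyParticleImage")

// 1. Use the same image on both emitters.
particles.mainEmitter.image = texture
particles.spawnedEmitter?.image = texture

// 2. Keep spawned particles at a constant size over their lifetime
//    (assuming the end-of-lifespan size multiplier is what drives the size animation).
particles.spawnedEmitter?.sizeMultiplierAtEndOfLifespan = 1.0

particleEntity.components.set(particles)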
Thanks in advance!
I’m trying to play an Apple Immersive video in the .aivu format using VideoPlayerComponent using the official documentation found here:
https://developer.apple.com/documentation/RealityKit/VideoPlayerComponent
Here is a simplified version of the code I'm running in another application:
import SwiftUI
import RealityKit
import AVFoundation

struct ImmersiveView: View {
    var body: some View {
        RealityView { content in
            let player = AVPlayer(url: Bundle.main.url(forResource: "Apple_Immersive_Video_Beach", withExtension: "aivu")!)
            let videoEntity = Entity()

            var videoPlayerComponent = VideoPlayerComponent(avPlayer: player)
            videoPlayerComponent.desiredImmersiveViewingMode = .full
            videoPlayerComponent.desiredViewingMode = .stereo

            player.play()

            videoEntity.components.set(videoPlayerComponent)
            content.add(videoEntity)
        }
    }
}
Full code is here:
https://github.com/tomkrikorian/AIVU-VideoPlayerComponentIssueSample
But the video does not play in my project even though the file is correct (It can be played in Apple Immersive Video Utility) and I’m getting this error when the app crashes:
App VideoPlayer+Component Caption: onComponentDidUpdate Media Type is invalid
Domain=SpatialAudioServicesErrorDomain Code=2020631397 "xpc error" UserInfo={NSLocalizedDescription=xpc error}
CA_UISoundClient.cpp:436 Got error -4 attempting to SetIntendedSpatialAudioExperience
[0x101257490|InputElement #0|Initialize] Number of channels = 0 in AudioChannelLayout does not match number of channels = 2 in stream format.
The video I'm using is the official sample, which can be found at the link below. I also tried several different files shot by my clients and the same errors are displayed, so the issue is definitely not with the files but on the RealityKit side of things:
https://developer.apple.com/documentation/immersivemediasupport/authoring-apple-immersive-video
Steps to reproduce the issue:
- Open AIVUPlayerSample project and run. Look at the logs.
All code can be found in ImmersiveView.swift
Sample file is included in the project
Expected results:
If I followed the documentation and samples provided, I should see my video played in full immersive mode inside my ImmersiveSpace.
Am I doing something wrong in the code? I'm basically following the documentation here.
Feedback ticket: FB19971306
I have two planes with textures on them. I want these planes to intersect [ –|– ], and I want the blend mode to be additive. Currently I get z-fighting on the planes, and I can't see how to set blend modes.
I've done this before in Unity and Godot in a fairly straight forward manner.
How do I accomplish this with RealityKit, preferably using code only (my scene is quite dynamic)?
Do I need to do it with a shader manually? How can I stop the z-fighting?
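For reference, this is the direction I've been experimenting with so far (a transparent blend on a standard material), though it isn't additive and I'm not sure it's the intended approach; planeEntity is one of my two plane ModelEntities:

// Give each plane a non-opaque blending mode so the two surfaces aren't both
// written as opaque geometry at the intersection. As far as I can tell, the
// standard material types only expose .opaque and .transparent, not additive.
var material = PhysicallyBasedMaterial()
material.baseColor = .init(tint: .white)   // the texture would go here
material.blending = .transparent(opacity: .init(floatLiteral: 1.0))
planeEntity.model?.materials = [material]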
I am rewriting an unfinished SceneKit project as RealityKit (NonAR). As far as I can see, RealityKit is missing basic fog functionality?
Fog was simple & easy to implement in SceneKit (fogStartDistance / fogEndDistance / fogDensityExponent / fogColor). Are there any plans to implement something like this in RealityKit?
Are there any simple workarounds?
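For reference, this is all the SceneKit version needed (just to illustrate what I'm hoping to reproduce; scene is an SCNScene and the values are arbitrary):

// SceneKit fog setup I'm trying to reproduce in RealityKit.
scene.fogStartDistance = 10
scene.fogEndDistance = 60
scene.fogDensityExponent = 2
scene.fogColor = UIColor(white: 0.8, alpha: 1.0)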
In the CanyonCrosser example project, some RealityKit systems are implemented as classes while others are structs. What’s the reason for using different types?
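For context, the System protocol itself seems to accept either form; for example (type names here are just illustrative):

// Both of these compile; System is a protocol, so value types and reference
// types can conform. A class is presumably chosen when the system needs
// reference semantics or stored state shared across updates.
struct GravitySystem: System {
    init(scene: RealityKit.Scene) {}
    func update(context: SceneUpdateContext) { /* per-frame work */ }
}

final class PathfindingSystem: System {
    init(scene: RealityKit.Scene) {}
    func update(context: SceneUpdateContext) { /* per-frame work */ }
}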
Post can be removed.
I'm running into an issue with collisions between two entities that have a character controller component. In the collision handler for moveCharacter, the collision has both hitEntity and characterEntity set to the same object: the entity that was moved with moveCharacter().
The example below configures three objects:
stationary sphere with character controller
falling sphere with character controller
a stationary cube with a collision component
If the falling sphere hits the stationary sphere, the collision handler reports both hitEntity and characterEntity as the falling sphere. I would expect hitEntity to be the stationary sphere and characterEntity to be the falling sphere.
If the falling sphere hits the cube with a collision component, then the hitEntity is the cube and the characterEntity is the falling sphere, as expected.
Is this the expected behavior? The entities act as expected visually; however, if I want the spheres to react differently depending on which character they collided with, I am not getting the expected results. For example: if a player-controlled character collides with an NPC, exchange resources with the NPC; if the player collides with an enemy, take damage.
import SwiftUI
import RealityKit

struct ContentView: View {
    @State var root: Entity = Entity()
    @State var stationary: Entity = createCharacter(named: "stationary", radius: 0.05, color: .blue)
    @State var falling: Entity = createCharacter(named: "falling", radius: 0.05, color: .red)
    @State var collisionCube: Entity = createCollisionCube(named: "cube", size: 0.1, color: .green)

    // relative to root
    @State var fallFrom: SIMD3<Float> = [0, 0.5, 0]

    var body: some View {
        RealityView { content in
            content.add(root)
            root.position = [0, -0.5, 0.0]

            root.addChild(stationary)
            stationary.position = [0, 0.05, 0]

            root.addChild(falling)
            falling.position = fallFrom

            root.addChild(collisionCube)
            collisionCube.position = [0.2, 0, 0]
            collisionCube.components.set(InputTargetComponent())
        }
        .gesture(SpatialTapGesture().targetedToAnyEntity().onEnded { tap in
            let tapPosition = tap.entity.position(relativeTo: root)
            falling.components.remove(FallComponent.self)
            falling.teleportCharacter(to: tapPosition + fallFrom, relativeTo: root)
        })
        .toolbar {
            ToolbarItemGroup(placement: .bottomOrnament) {
                HStack {
                    Button("Drop") {
                        falling.components.set(FallComponent(speed: 0.4))
                    }
                    Button("Reset") {
                        falling.components.remove(FallComponent.self)
                        falling.teleportCharacter(to: fallFrom, relativeTo: root)
                    }
                }
            }
        }
    }
}

@MainActor
func createCharacter(named name: String, radius: Float, color: UIColor) -> Entity {
    let character = ModelEntity(mesh: .generateSphere(radius: radius), materials: [SimpleMaterial(color: color, isMetallic: false)])
    character.name = name
    character.components.set(CharacterControllerComponent(radius: radius, height: radius))
    return character
}

@MainActor
func createCollisionCube(named name: String, size: Float, color: UIColor) -> Entity {
    let cube = ModelEntity(mesh: .generateBox(size: size), materials: [SimpleMaterial(color: color, isMetallic: false)])
    cube.name = name
    cube.generateCollisionShapes(recursive: true)
    return cube
}

struct FallComponent: Component {
    let speed: Float
}

struct FallSystem: System {
    static let predicate: QueryPredicate<Entity> = .has(FallComponent.self) && .has(CharacterControllerComponent.self)
    static let query: EntityQuery = .init(where: predicate)

    let down: SIMD3<Float> = [0, -1, 0]

    init(scene: RealityKit.Scene) {
    }

    func update(context: SceneUpdateContext) {
        let deltaTime = Float(context.deltaTime)
        for entity in context.entities(matching: Self.query, updatingSystemWhen: .rendering) {
            let speed = entity.components[FallComponent.self]?.speed ?? 0.5
            entity.moveCharacter(by: down * speed * deltaTime, deltaTime: deltaTime, relativeTo: nil) { collision in
                if collision.hitEntity == collision.characterEntity {
                    print("hit entity has collided with itself")
                }
                print("\(collision.characterEntity.name) collided with \(collision.hitEntity.name) ")
            }
        }
    }
}

#Preview(windowStyle: .volumetric) {
    ContentView()
}
Hi,
I'm rewriting my game from SceneKit to RealityKit, and I'm having trouble implementing the following scenario:
I tap on the iPhone screen to select an Entity that I want to drag.
If an Entity was tapped, it should then be possible to drag it left, right, etc.
SceneKit solution:
func CGPointToSCNVector3(_ view: SCNView, depth: Float, point: CGPoint) -> SCNVector3 {
    let projectedOrigin = view.projectPoint(SCNVector3Make(0, 0, Float(depth)))
    let locationWithz = SCNVector3Make(Float(point.x), Float(point.y), Float(projectedOrigin.z))
    return view.unprojectPoint(locationWithz)
}
and then I was calling:
SCNView().hitTest(location, options: [SCNHitTestOption.firstFoundOnly:true])
The code was called inside the UIPanGestureRecognizer handler in my UIViewController.
Could I reuse that code, or should I go with the SwiftUI approach - something like this:
var body: some View {
    RealityView {
        ....
    }
    .gesture(TapGesture().onEnded {
    })
}
?
I already have this code:
@State private var location: CGPoint?

.onTapGesture { location in
    self.location = location
}
I'm trying to identify the entity that was tapped within the RealityView, like this:
RealityView { content in
    let box: ModelEntity = createBox() // for now there is only one box, however there will be many boxes
    content.add(box)

    let anchor = AnchorEntity(world: [0, 0, 0])
    content.add(anchor)

    _ = content.subscribe(to: SceneEvents.Update.self) { event in
        //TODO: find tapped entity, so that it could be dragged inside of the DragGesture()
    }
}
Any help would be appreciated.
I also noticed that if I create a TapGesture like this:
TapGesture(count: 1)
    .targetedToAnyEntity()
and add it to my view using .gesture(), it is not triggered.
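For completeness, this is the pattern I've been trying to get working based on my reading of the docs. I'm not sure all of it is available on iOS (some of it may be visionOS-only), and createBox() is my own helper:

RealityView { content in
    let box = createBox()
    // My understanding is that an entity needs both of these for SwiftUI gestures to target it.
    box.components.set(InputTargetComponent())
    box.generateCollisionShapes(recursive: true)
    content.add(box)
}
.gesture(
    DragGesture()
        .targetedToAnyEntity()
        .onChanged { value in
            // Convert the gesture location into the entity's parent space and move the entity there.
            guard let parent = value.entity.parent else { return }
            value.entity.position = value.convert(value.location3D, from: .local, to: parent)
        }
)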
RealityKit spatial audio crackles and pops on iOS 26.0 beta 5.
It works correctly on iOS 18.6 and visionOS 26.0 beta 5.
The APIs used are AudioPlaybackController, Entity.prepareAudio, Entity.play
Videos of the expected and observed behavior are attached to the feedback FB19423059.
The audio should be a consistent, repeating sound, but it seems oddly abbreviated and the volume varies unexpectedly.
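For reference, the playback path is essentially this (the resource name is a placeholder and entity is an existing entity in the scene):

// Simplified version of the playback path described above.
let resource = try AudioFileResource.load(named: "LoopingSound")
entity.components.set(SpatialAudioComponent())
let controller = entity.prepareAudio(resource)
controller.play()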
Thank you for investigating this issue.
Hi all,
I've encountered a potential issue with how the winding order of geometry is handled when its transformation involves negative scaling.
I created a simple test asset, a single triangle, to demonstrate this. The triangle's vertices are defined in a counter-clockwise ("right-handed") winding order, and its transform has a negative scale on the X-axis. According to the OpenUSD specification, this negative determinant in the transformation matrix should effectively reverse the winding order of the geometry:
However, any given gprim's local-to-world transformation can flip its effective orientation, when it contains an odd number of negative scales. This condition can be reliably detected using the (Jacobian) determinant of the local-to-world transform: if the determinant is less than zero, then the gprim's orientation has been flipped, and therefore one must apply the opposite handedness rule when computing its surface normals (or just flip the computed normals) for the purposes of hidden surface detection and lighting calculations.
When I view the asset in tools like Blender or Preview on macOS, it behaves as expected. The triangle's effective orientation is flipped to CW.
However, when the same asset is viewed in Reality Composer Pro or with QuickLook on iOS, its effective orientation remains CCW. In other words, the triangle faces the opposite direction.
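For reference, the determinant test described in the OpenUSD excerpt above is straightforward to reproduce; here is a minimal sketch:

import simd

// A negative determinant of the linear (upper-left 3x3) part of the
// local-to-world matrix means the effective orientation is flipped.
func orientationIsFlipped(localToWorld m: simd_float4x4) -> Bool {
    let columns = [
        SIMD3(m.columns.0.x, m.columns.0.y, m.columns.0.z),
        SIMD3(m.columns.1.x, m.columns.1.y, m.columns.1.z),
        SIMD3(m.columns.2.x, m.columns.2.y, m.columns.2.z),
    ]
    return simd_float3x3(columns).determinant < 0
}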
My questions for the community and Apple are:
Is this behavior in RealityKit a known issue?
If this is a known issue, is there official guidance for DCC tools on how to export USDZ assets to ensure they appear correctly in the Apple ecosystem?
Any insights or recommendations would be greatly appreciated.
During regular use, RealityKit generates an excessive amount of internal logging that is not actionable by third party developers. When developing an iOS RealityKit/ARKit app, this makes the Xcode console challenging to use for regular work.
(FB19173812)
See screenshots below.
Xcode does have an option for filtering out logging from specific SDKs, but enabling this feature to suppress the logging of RealityKit and related SDKs like PHASE is something developers have to do dozens of times each day. After a year of developing a RealityKit app, this process becomes frustrating.
If SDKs like Foundation, UIKit, and SwiftUI generated as much logging as RealityKit and related SDKs, Xcode's console would be unusable.
Is there any way to disable the logging of RealityKit and PHASE permanently?
Thank you for any help you provide.
Hi all,
I’m running into an issue when trying to reconstruct a 3D model using PhotogrammetrySession on macOS from a set of images captured via the iOS Object Capture sample app, specifically in Area mode.
When I attempt to create the model from these images (using the raw Images/ folder exported directly from the capture session), I get the following errors:
ERROR cv3dapi.pg: Internal error codes (2): 4011 4012
WARN cv3dapi.pg: Internal warning codes (1): 4507
Output error with code = -15
requestError: CoreOC.PhotogrammetrySession.Error.processError
I'm using the "Images" directory exported directly from Object Capture on my iPhone 12 Pro Max (which has LiDAR), with the capture app set to Area mode.
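For reference, this is roughly how I'm invoking the session on macOS (paths and detail level simplified):

import RealityKit

func reconstruct() async throws {
    let input = URL(fileURLWithPath: "/path/to/Images", isDirectory: true)
    let session = try PhotogrammetrySession(input: input)

    let request = PhotogrammetrySession.Request.modelFile(
        url: URL(fileURLWithPath: "/path/to/model.usdz"),
        detail: .medium
    )
    try session.process(requests: [request])

    for try await output in session.outputs {
        switch output {
        case .requestError(let request, let error):
            print("requestError:", request, error)
        case .requestComplete(let request, let result):
            print("requestComplete:", request, result)
        default:
            break
        }
    }
}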
Here is example HEIC metadata from one image in the sequence.
heif-info Images/00044.869568833.HEIC
MIME type: image/heic
main brand: heic
compatible brands: mif1, MiHE, MiPr, miaf, MiHB, heic
image: 3024x4032 (id=49), primary
tiles: 6x8, tile size: 512x512
colorspace: YCbCr, 4:2:0
bit depth: 8
thumbnail: 240x320
color profile: nclx
alpha channel: no
depth channel: yes
size: 192x256
bits per pixel: 8
z-near: 1.173828
z-far: 2.552734
d-min: undefined
d-max: undefined
representation: uniform Z
metadata:
Exif: 960 bytes
uri /tag:apple.com,2023:ObjectCapture#CameraTrackingState: 4 bytes
uri /tag:apple.com,2023:ObjectCapture#CameraCalibrationData: 1015 bytes
uri /tag:apple.com,2023:ObjectCapture#ObjectTransform: 48 bytes
uri /tag:apple.com,2023:ObjectCapture#ObjectBoundingBox: 48 bytes
uri /tag:apple.com,2023:ObjectCapture#RawFeaturePoints: 832 bytes
uri /tag:apple.com,2023:ObjectCapture#PointCloudData: 23984 bytes
uri /tag:apple.com,2023:ObjectCapture#BundleVersion: 5 bytes
uri /tag:apple.com,2023:ObjectCapture#SegmentID: 4 bytes
uri /tag:apple.com,2024:ObjectCapture#SessionUUID: 36 bytes
uri /tag:apple.com,2024:ObjectCapture#CaptureMode: 4 bytes
uri /tag:apple.com,2023:ObjectCapture#Feedback: 4 bytes
uri /tag:apple.com,2023:ObjectCapture#WideToDepthCameraTransform: 48 bytes
uri /tag:apple.com,2023:ObjectCapture#TemporalDepthPointClouds: 864026 bytes
transformations:
angle (ccw): 270
region annotations:
none
properties:
camera intrinsic matrix:
focal length: 2813.695557; 2813.695557
principal point: 1522.338502; 2002.843018
skew: 0.000000
camera extrinsic matrix:
rotation matrix:
-0.695 0.344 -0.632
0.007 -0.875 -0.483
-0.719 -0.340 0.606
Questions:
• What do internal error codes 4011 and 4012 refer to?
• Is there something specific about Area mode captures that require preprocessing before they’re compatible with PhotogrammetrySession?
• Has anyone successfully reconstructed a model from an Area mode session using the stock Apple tools?
NOTE: I can provide the folder of images for debugging if that would help!
Hi team, I'm looking for the RealityKit debugger in Xcode 26 beta 3. I'm running a RealityKit app on my iPad running iPadOS 26 b3, but the debugger option is not there in Xcode.
Hi everyone,
I’m running into an issue with RealityKit when trying to animate BlendShapes (ShapeKeys) while a skeletal animation is playing. The model is a rigged character in .usdz format with both predefined skeletal animations and BlendShapes (exported from Blender).
The problem: when I play any animation using entity.playAnimation(...), the BlendShapes stop responding. Calling setBlendShapes(...) still logs that weights are being updated, but the visual changes are not visible.
The exact same blend shape animation works perfectly when no animation is playing.
In SceneKit the same model works as expected: shape keys animate during animation playback. But not in RealityKit.
Still, as soon as an animation starts, the shape keys don’t animate anymore.
Here’s the test project on GitHub that demonstrates the issue clearly:
https://github.com/IAMTHEBURT/RealityKitWitnBlendShapesSample
The goal is to play facial expressions (like blinking or talking) while a body animation (like waving) is playing.
Is this a known limitation in RealityKit? Or is there a recommended way to combine skeletal animations with real-time BlendShape updates?
Thanks in advance for any insights.
Hello RealityKit developers,
I'm currently working on physics simulations in my visionOS app and am trying to adapt the concepts from the official sample Simulating physics joints in your RealityKit app.
In the sample, a sphere is connected to the ceiling using a PhysicsRevoluteJoint to create a hinge-like simulation. I've successfully modified this setup to use a PhysicsSphericalJoint instead.
The basic replacement works as expected: pin1 (attached to the sphere) rotates freely around pin0 (attached to the ceiling), much like a ball-and-socket joint should, removing all translational degrees of freedom.
My challenge lies with the PhysicsSphericalJoint's angularLimitInYZ property. The documentation mentions that this property allows limiting the rotation around the Y and Z axes, defining an "elliptical cone shape around the x-axis of pin0." However, I'm struggling to understand how to specify these values to achieve a desired rotational limit.
If I have a sphere that is currently capable of rotating 360 degrees around pin0 (like a free-spinning ball on a string), how would I use angularLimitInYZ to restrict its rotation to a certain height or angular range, preventing it from completing a full circle?
Specifically, I'm trying to achieve a "swing" like behavior where the sphere oscillates back and forth but cannot rotate completely overhead or underfoot. What values or approach should I use for the angularLimitInYZ tuple to define such a restricted pendulum-like motion?
Any insights, code examples, or explanations on how to properly configure angularLimitInYZ for this kind of behavior would be incredibly helpful!
The following code is modified from the sample.
extension MainView {
    func addPinsTo(ballEntity: Entity, attachmentEntity: Entity) throws {
        let hingeOrientation = simd_quatf(from: [1, 0, 0], to: [0, 0, 1])

        let attachmentPin = attachmentEntity.pins.set(
            named: "attachment_hinge",
            position: .zero,
            orientation: hingeOrientation
        )

        let relativeJointLocation = attachmentEntity.position(
            relativeTo: ballEntity
        )
        let ballPin = ballEntity.pins.set(
            named: "ball_hinge",
            position: relativeJointLocation,
            orientation: hingeOrientation
        )

        // Create a PhysicsSphericalJoint between the two pins.
        let sphericalJoint = PhysicsSphericalJoint(pin0: attachmentPin, pin1: ballPin)
        try sphericalJoint.addToSimulation()
    }
}
The following image is a screenshot of the operation when changing to PhysicsSphericalJoint.
Thank you in advance for your assistance.
Hello everyone,
I'm working on a visionOS application using RealityKit and am encountering a common coordinate system challenge when integrating 3D models created in Blender.
My goal is to display and dynamically update the Transform (position, rotation, scale) of models created in Blender within RealityKit.
The issue arises because Blender's default coordinate system is Z-up, and while exporting to USD/USDZ, I don't have a reliable "Y-up" export option that correctly reorients the model and its transform data for RealityKit's Y-up convention. This means I'm essentially exporting models with their "up" direction along the Z-axis.
When I load these Z-up exported models into RealityKit, they are often oriented incorrectly. To then programmatically update their Transform (e.g., move them, rotate them based on game logic, or apply physics), I need to ensure that the Transform values I set align with RealityKit's Y-up system, even though the original model data was authored in a Z-up context.
My questions are:
What is the recommended transformation process (e.g., using simd_quatf or simd_float4x4) to convert a Transform that was conceptually defined in a Z-up coordinate system to RealityKit's Y-up coordinate system? Specifically, when I have a Transform (or its translation, rotation, scale components) from a Z-up context, how should I apply this to a RealityKit Entity so it appears and behaves correctly in a Y-up world?
Are there any existing convenience APIs or helper functions within RealityKit, simd, or other Apple frameworks that simplify this Z-up to Y-up Transform conversion process? Or is a manual application of a transformation quaternion (e.g., simd_quatf(angle: -.pi / 2, axis: [1, 0, 0])) the standard approach?
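For reference, the workaround I've been experimenting with is simply parenting the asset under a corrective entity, though I'm not sure this is the recommended pattern (blenderEntity, zUpTransform, and content are placeholders, with content being the RealityView content):

import RealityKit

// Parent the Z-up asset under a corrective entity rotated -90 degrees about X,
// then keep driving the child's transform in the original Z-up convention.
let zUpRoot = Entity()
zUpRoot.orientation = simd_quatf(angle: -.pi / 2, axis: SIMD3<Float>(1, 0, 0))
zUpRoot.addChild(blenderEntity)   // blenderEntity: the entity loaded from the Z-up USDZ
content.add(zUpRoot)

// Transforms authored in Z-up terms can then be applied relative to zUpRoot:
blenderEntity.transform = zUpTransform   // zUpTransform: a Transform defined in the Z-up convention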
Any guidance, code examples, or best practices from those who have faced similar challenges would be greatly appreciated!
Thank you.
The following code using the new GestureComponent demonstrates inconsistency. The tap gesture prints output, but the drag gesture does not.
I already checked this post, which points to this seemingly outdated sample code
I assume that example is deprecated in favour of the now built-in GestureComponent.
Nonetheless, there are no compiler warnings or errors; it just fails silently.
TapGesture, LongPressGesture, MagnifyGesture, RotateGesture all work, so this feels like an oversight.
RealityView { content in
    let testEntity = ModelEntity(mesh: .generateBox(size: .init(x: 1, y: 1, z: 1)))
    testEntity.position = SIMD3<Float>(0, 0, -1)
    testEntity.components.set(InputTargetComponent())
    testEntity.components.set(CollisionComponent(
        shapes: [.generateBox(size: .init(x: 1, y: 1, z: 1))]
    ))

    let testGesture = TapGesture()
        .onEnded { value in
            print("Tapped")
        }
    testEntity.components.set(GestureComponent(testGesture))

    let dragGesture = DragGesture()
        .onEnded { value in
            print("Dragged")
        }
    testEntity.components.set(GestureComponent(dragGesture))

    content.add(testEntity)
}
In SceneKit, when creating an .scn file from a rigged model, the framework created an SCNNode for each bone/joint, so you could add and remove child nodes directly to and from joints, and like any other SCNNode, you could access world position and world orientation for each joint. The analog would be for joints to be accessible as child entities of a ModelEntity in RealityKit. I am unable to proceed with migrating my project from SceneKit because of this, as there does not seem to be a way to even access the true world position of a joint with the current jointNames/jointTransforms paradigm.
The translation information from the given transforms is insufficient to determine the location of a joint at any given time, and other approaches like creating a GeometricPin for the given joint name and attaching it to another entity do not seem to work. So conveniently being able to attach an item to the hand of a rigged model was trivial in SceneKit and now feels impossible in RealityKit.
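The closest I've come is manually composing jointTransforms along a joint's path, something like the sketch below (assuming jointNames entries are slash-separated ancestor paths, which I'm not certain is guaranteed), but this feels like working against the framework:

import RealityKit
import simd

// Approximate a joint's model-space transform by composing the jointTransforms
// of the joint and all of its ancestors, e.g. for a path like "root/hips/spine/arm".
func modelSpaceTransform(ofJoint jointPath: String, in model: ModelEntity) -> simd_float4x4? {
    guard model.jointNames.contains(jointPath) else { return nil }
    var matrix = matrix_identity_float4x4
    let parts = jointPath.split(separator: "/")
    for depth in 1...parts.count {
        let ancestorPath = parts[0..<depth].joined(separator: "/")
        guard let index = model.jointNames.firstIndex(of: ancestorPath) else { continue }
        matrix = matrix * model.jointTransforms[index].matrix
    }
    return matrix
}

// World space would then be roughly:
// model.transformMatrix(relativeTo: nil) * modelSpaceTransform(ofJoint: "...", in: model)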
I am not the first person to notice this, and I'm feeling demoralized about proceeding with RealityKit with such a critical piece of functionality blocked:
https://stackoverflow.com/questions/76726241/how-do-i-attach-an-entity-to-a-skeletons-joint-in-realitykit
Will this be addressed in some way?
Breaking Through PolySpatial's ~8k Object Limit – Seeking Alternative Approaches for Large-Scale Digital Twins
Confirmed: PolySpatial doubles the MeshFilter count – hard limit at ~8k active objects (15.9k total)
Project Context & Research Goals
I’m developing an industrial digital twin application for Apple Vision Pro using Unity’s PolySpatial framework (RealityKit rendering in Unbounded_Volume mode). The scene contains complex factory environments with:
Production line equipment (many fragmented grid objects that need to be merged)
Dynamic product racks (state-switchable assets)
Animated worker avatars
To optimize performance, I’m systematically testing visionOS’s rendering capacity limits. Through controlled stress tests, I’ve identified a critical threshold:
Key Finding
When the total MeshFilter count reaches 15,970 (system baseline + 7,985 user-created objects × 2 due to PolySpatial cloning), the application crashes consistently. This suggests:
PolySpatial’s mirroring mechanism effectively doubles GameObject overhead
An apparent hard limit exists around ~8k active mesh objects in practice
Objectives for This Discussion
Verify if others have encountered similar limits with PolySpatial/RealityKit
Understand whether this is a:
Memory constraint (per-app allocation)
Render pipeline limit (Metal draw calls)
Unity-specific PolySpatial behavior
Explore optimization strategies beyond brute-force object reduction
Why This Matters
Industrial metaverse applications require rendering thousands of interactive objects. Confirming these limits will help our team:
Design safer content guidelines
Prioritize GPU instancing/LOD investments
Potentially contribute back to PolySpatial’s optimization
I’d appreciate insights from engineers who’ve:
Pushed similar large-scale scenes in visionOS
Worked around PolySpatial’s cloning overhead
Discovered alternative capacity limits (vertices/draw calls)
TL;DR: RealityKit and Reality Composer Pro aren't forward or backward compatible with each other, and the resulting error message is terse and unhelpful. (FB14828873)
So far, I've been sticking with Xcode 16.4 for development and only using Xcode 26.0 beta experimentally.
Yesterday, I used xcode-select to switch to Xcode 26.0 beta 3 to test it, but I forgot to switch back.
Consequently, this morning I unintentionally used the future Reality Composer Pro (the version included with Xcode 26) to make a small change to a USD file.
Now I realize that, if I'm unlucky, Reality Composer Pro may have silently introduced a small change into the USD file that makes RealityKit fail to read it on iOS 18 and visionOS 2. In the past, that kind of failure has cost hours of debugging to track down, often to a single line in the USD file that RealityKit can't communicate to me other than with the error "the operation couldn't be completed".
As an analogy, this situation is as if, during regular development (not involving Reality Composer Pro), Xcode didn't warn you about specific API version conflicts, but instead failed with a generic error message, without highlighting the line in your Swift file that was the source of the error.