If I have one portal on the ceiling and one on the floor, can a tall Entity cross multiple portals at once? Will the opposing portal directions cause it to fail?
No matter what I try for the crossingMode and clippingMode of the PortalComponent I can only get it to fully work for one portal at a time.
I have tried flipping the normals for the crossingMode and clippingMode planes.
I have also tried creating a ceiling portal plane with inverted normals.
It seems like whatever Entity is passing through a portal only wants to deal with one portal at a time.
My other option is to create the portals using occlusion, but I'd prefer the simpler approach.
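For reference, here is a minimal sketch of the two-portal setup being described, assuming the visionOS portal-crossing API (PortalComponent with target/clippingMode/crossingMode, plus PortalCrossingComponent on the entity that passes through). The entity names and plane choices are placeholders, not a verified fix:
import RealityKit

// Hedged sketch of the floor + ceiling portal setup described above.
// Entity names and plane directions are placeholders.
func configurePortals(world: Entity, floorPortal: Entity, ceilingPortal: Entity, tallEntity: Entity) {
    world.components.set(WorldComponent())

    // Floor portal: clipping/crossing plane oriented so it faces up into the room.
    floorPortal.components.set(PortalComponent(
        target: world,
        clippingMode: .plane(.positiveZ),
        crossingMode: .plane(.positiveZ)
    ))

    // Ceiling portal: same component; the portal entity itself is rotated so its
    // plane faces down into the room (the flipped-normal case from the question).
    ceilingPortal.components.set(PortalComponent(
        target: world,
        clippingMode: .plane(.positiveZ),
        crossingMode: .plane(.positiveZ)
    ))

    // The entity that should pass through portal geometry needs a PortalCrossingComponent.
    tallEntity.components.set(PortalCrossingComponent())
}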
RealityKit
I want to compare the colors of two model entities (spheres). How can I do it? The method I'm currently trying is as follows:
// "controlSphere" is an assumed name for the control entity.
if let controlMaterial = controlSphere.model?.materials.first as? SimpleMaterial,
   case let .color(controlColor) = controlMaterial.baseColor, controlColor == .green {
    // Flip target sphere colour
    if let targetMaterial = targetsphere.model?.materials.first as? SimpleMaterial,
       case let .color(targetColor) = targetMaterial.baseColor, targetColor == .blue {
        targetsphere.model?.materials = [SimpleMaterial(color: .green, isMetallic: false)] // Change to |1⟩
    } else {
        targetsphere.model?.materials = [SimpleMaterial(color: .blue, isMetallic: false)] // Change to |0⟩
    }
}
The baseColor property was deprecated in iOS 15.0 in favor of 'color', but I cannot compare the resulting color values to each other. 👾
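For what it's worth, with the newer SimpleMaterial.color property the comparison can be done on its tint; a minimal sketch (the sphere names are assumed) might look like this:
// Compare colours via SimpleMaterial.color.tint (the replacement for the deprecated baseColor).
// "controlSphere" and "targetSphere" are assumed names for the two entities.
if let controlMaterial = controlSphere.model?.materials.first as? SimpleMaterial,
   controlMaterial.color.tint == .green {
    if let targetMaterial = targetSphere.model?.materials.first as? SimpleMaterial,
       targetMaterial.color.tint == .blue {
        targetSphere.model?.materials = [SimpleMaterial(color: .green, isMetallic: false)] // |1⟩
    } else {
        targetSphere.model?.materials = [SimpleMaterial(color: .blue, isMetallic: false)] // |0⟩
    }
}
Note that direct UIColor equality can be brittle across colour spaces, so comparing the individual RGBA components within a small tolerance may be more reliable.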
Hello, I am experiencing an issue with a character animation using RealityKit.
I have a file created in Blender that contains the rigged Character, a sword, and a shield. The sword and the shield have bones connected to the character's hands so they can follow the character's animation. When I run the animation in Blender and preview the exported USDZ file on Mac, I can see the sword and shield attached to the hands and the animation is fine. But when I add the USDZ model in RealityKit and play the animation, only the character is animating, the sword and the shield are not moving at all.
This is the code I use to animate the character:
private func loadAnimations() {
    // Take the first animation baked into the USDZ's root child and loop it on that child.
    let unifiedAnimations = children[0].availableAnimations.first!.definition
    let animationResource = try! AnimationResource.generate(with: unifiedAnimations)
    self.children[0].playAnimation(
        animationResource.repeat()
    )
}
Here is a link with a Demo Xcode project, the Blender file, and a video:
https://www.dropbox.com/scl/fi/ypq2iwxc5f9dwzjggsvin/AppleTest.zip?rlkey=wiag3rg44urhjdh2wal8cdx2u&st=vbpf7x11&dl=0
STEPS TO REPRODUCE
I have created a demo project that displays the character on a horizontal surface, and the animation starts playing.
Run the App. The ARView will be set up, and a yellow square will appear in the middle of the screen.
When a horizontal surface is detected the yellow Square will change indicating that the surface is found.
Tap on the screen to load the USDZ model and position it in the yellow square's position.
The Animation will start playing and you can see that the character is animating, but the sword and shield remain still.
Thanks
Topic:
Graphics & Games
SubTopic:
RealityKit
Hi everyone,
I’m experiencing a critical issue with USDZ files created in Reality Composer on an iPad 9th Generation (iPadOS 18.3). The files work perfectly on iPads from the 10th Generation onwards and on iPad Pros. However, on older devices like the iPad 9th Generation and older iPhones, QuickLook (file preview) crashes when opening them.
This is a major issue because these USDZ files are part of an exhibition where artworks are extended with AR elements via a web page. If some visitors cannot view the 3D content, it significantly impacts the experience.
What’s puzzling is that two years ago, we exported USDZ files from Reality Composer, made them available via a website, and they worked flawlessly on all devices, including older iPads and iPhones. Now, with the latest iPadOS, they consistently crash on older devices.
Has anyone encountered a similar issue? Are there known limitations with QuickLook on older devices, or is there a way to optimize the USDZ files to prevent crashes? Could this be related to changes in iPadOS or RealityKit? Any advice or workaround would be greatly appreciated!
Thanks in advance!
I'm trying to build a Shader in "Reality Composer Pro" that updates from a start time. Initially I tried the following:
The idea was that when the startTime was 0, the output would be 0; I would then set startTime from code, it would be compared with the current GPU time, and the difference would be used to drive another part of the shader graph:
if
    let testEntity = root.findEntity(named: "Test"),
    var shaderGraphMaterial = testEntity.components[ModelComponent.self]?.materials.first as? ShaderGraphMaterial
{
    let time = CFAbsoluteTimeGetCurrent()
    try! shaderGraphMaterial.setParameter(name: "StartTime", value: .float(Float(time)))
    testEntity.components[ModelComponent.self]?.materials[0] = shaderGraphMaterial
}
However, I haven't found a reference to the time the shader would be using.
So now I am trying to write an EntityAction to achieve the same effect. Instead of comparing a start time to the GPU's time, I'm trying to animate one of the shader's uniform inputs. However, I'm not sure how to specify the bind target. Here's my attempt so far:
import RealityKit

struct ShaderAction: EntityAction {
    let startValue: Float
    let targetValue: Float

    var animatedValueType: (any AnimatableData.Type)? { Float.self }

    static func registerEntityAction() {
        ShaderAction.subscribe(to: .updated) { event in
            guard let animationState = event.animationState else { return }
            let value = simd_mix(event.action.startValue, event.action.targetValue, Float(animationState.normalizedTime))
            animationState.storeAnimatedValue(value)
        }
    }
}

extension Entity {
    func updateShader(from startValue: Float, to targetValue: Float, duration: Double) {
        let fadeAction = ShaderAction(startValue: startValue, targetValue: targetValue)
        if let shaderAnimation = try? AnimationResource.makeActionAnimation(for: fadeAction, duration: duration, bindTarget: .material(0).customValue) {
            playAnimation(shaderAnimation)
        }
    }
}
Currently when I run this I get an assertion failure: 'Index out of range (operator[]:line 797) index = 260, max = 8'
Furthermore, even if it didn't crash, I don't understand how to pass a binding to the custom shader value "startValue".
Any clues on how to achieve this effect would be appreciated, even if it's a completely different approach.
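One completely different approach, sketched here only as an untested suggestion: skip the action/bind-target machinery and drive the shader parameter from a custom System every frame via setParameter. The "ElapsedTime" input name is hypothetical and would need to match a float input exposed by the Shader Graph:
import RealityKit

// Hedged sketch: accumulate time on the CPU and push it into the ShaderGraphMaterial
// each frame, instead of binding an EntityAction to the material.
final class ShaderTimeSystem: System {
    private static let query = EntityQuery(where: .has(ModelComponent.self))
    private var elapsed: TimeInterval = 0

    init(scene: Scene) {}

    func update(context: SceneUpdateContext) {
        elapsed += context.deltaTime
        for entity in context.entities(matching: Self.query, updatingSystemWhen: .rendering) {
            guard var model = entity.components[ModelComponent.self],
                  var material = model.materials.first as? ShaderGraphMaterial else { continue }
            // "ElapsedTime" is a placeholder for whatever float input the graph exposes.
            try? material.setParameter(name: "ElapsedTime", value: .float(Float(elapsed)))
            model.materials[0] = material
            entity.components.set(model)
        }
    }
}

// Register once, e.g. at app/scene setup:
// ShaderTimeSystem.registerSystem()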
Topic:
Graphics & Games
SubTopic:
RealityKit
Tags:
RealityKit
Reality Composer Pro
Shader Graph Editor
I want to use SwiftUI and RealityView to get AR scene understanding data (ARMeshAnchor) on iOS devices with LiDAR. The only way we can do that is by using ARSession (unless there is another way).
However in previous iOS 18 builds there was this function:
https://developer.apple.com/documentation/realitykit/spatialtrackingsession/run(_:session:arconfiguration:)
, which worked with SpatialTrackingSession and a custom ARSession together. This function has since been removed from the RealityKit framework in the latest iOS and Xcode, but it is still listed in the documentation.
I also wanted to get ARFaceAnchor data, which I still cannot get without ARSession; the closest I can get is by using:
let target = AnchoringComponent.Target.face
let anchoringComponent = AnchoringComponent(target, trackingMode: .predicted)
entity = Entity()
entity!.components.set(anchoringComponent)
But I still can't find a way to get the current frame (ARFrame) or the anchors ([ARAnchor]) in the view.
Alternatively, if I use this function: https://developer.apple.com/documentation/realitykit/spatialtrackingsession/run(_:) and start the ARSession separately, the session delegate callbacks (didUpdate and didAdd) only run for a few frames before getting interrupted.
And if I completely remove SpatialTrackingConfiguration and just run the ARSession, there is still a valid tracked entity for the AnchoringComponent.Target.face component, provided the ARSession is configured with ARWorldTrackingConfiguration with face tracking enabled, and I still get updated facial data each frame. But the ARSession didUpdate and didAdd callbacks don't get called past the first few frames.
Interestingly, if I switch the RealityViewCameraContent.RealityViewCamera to .virtual, I get ARMeshAnchor and ARFaceAnchor data, but no camera feed (as expected). This happens with or without SpatialTrackingConfiguration.
My overarching question is: what is the proper way to access ARMeshAnchors and other ARAnchors created by the system, and track them live, while also using SwiftUI?
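For reference, here's a minimal ARKit-only sketch of the configuration described above (scene reconstruction plus user face tracking on a LiDAR device); it only illustrates the ARKit side and doesn't resolve the conflict with RealityView's own session:
import ARKit

// Hedged sketch: a standalone ARSession that produces ARMeshAnchors (scene reconstruction)
// and ARFaceAnchors (user face tracking) and surfaces them through the delegate.
final class AnchorStreamer: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        let configuration = ARWorldTrackingConfiguration()
        if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
            configuration.sceneReconstruction = .mesh      // ARMeshAnchors on LiDAR devices
        }
        if ARWorldTrackingConfiguration.supportsUserFaceTracking {
            configuration.userFaceTrackingEnabled = true   // ARFaceAnchors while world tracking
        }
        session.delegate = self
        session.run(configuration)
    }

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        let meshAnchors = anchors.compactMap { $0 as? ARMeshAnchor }
        let faceAnchors = anchors.compactMap { $0 as? ARFaceAnchor }
        // Forward these to an observable model object for the SwiftUI layer.
        _ = (meshAnchors, faceAnchors)
    }
}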
GitHub Repo with sample project can be found here: https://github.com/bpate75/RealityViewTesting
Following the post on
https://developer.apple.com/documentation/realitykit/custommaterial it's simple to use a shader for materials and get uniforms and params for each vertex. However, CustomMaterial isn't available on visionOS. Is there any alternative to use in this case? I want to write the shader that fills the material myself. (I have shader experience from the web and am familiar with fragment shaders.)
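For what it's worth, the usual alternative on visionOS is a Shader Graph material authored in Reality Composer Pro instead of CustomMaterial. A minimal loading sketch, where the material path, scene file, parameter name, and the RealityKitContent package bundle are all assumptions about the project setup:
import RealityKit
import RealityKitContent  // Reality Composer Pro package (assumed to be part of the project)

func applyGraphMaterial(to entity: ModelEntity) async throws {
    // Load a Shader Graph material authored in Reality Composer Pro.
    var material = try await ShaderGraphMaterial(named: "/Root/MyMaterial",
                                                 from: "Scene.usda",
                                                 in: realityKitContentBundle)
    // Inputs promoted in the graph can then be driven from code.
    try material.setParameter(name: "MyParameter", value: .float(0.5))
    entity.model?.materials = [material]
}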
Hey, I wanted to make an app that tracks changes in the room and in the room lighting, and I was wondering if it's possible to use VirtualEnvironmentProbeComponent to obtain the EnvironmentResource image and store it?
If so, are there any examples of a similar operation I could use?
Thank you!
Hello,
I'm writing an EntityAction that animates a material base tint between two different colours. However, the colour that actually gets set differs in RGB values from the one requested.
For example, trying to set an end target of R0.5, G0.5, B0.5 results in a value of R0.735357, G0.735357, B0.735357. I can also see during the animation cycle that the intermediate tint values are incorrect versus those being set.
My understanding is that the values of the material base colour are passed as a SIMD4. I therefore have a couple of helper extensions to convert a UIColor into this format and to mix between two colours. Note, however, that I don't think the issue is with these functions; even if their outputs were wrong, the final value of the base tint still wouldn't match the value being set.
I wondered if this was a colour space issue?
import simd
import RealityKit
import UIKit

typealias Float4 = SIMD4<Float>

extension Float4 {
    func mixedWith(_ value: Float4, by mix: Float) -> Float4 {
        Float4(
            simd_mix(x, value.x, mix),
            simd_mix(y, value.y, mix),
            simd_mix(z, value.z, mix),
            simd_mix(w, value.w, mix)
        )
    }
}

extension UIColor {
    var float4: Float4 {
        var r: CGFloat = 0.0
        var g: CGFloat = 0.0
        var b: CGFloat = 0.0
        var a: CGFloat = 0.0
        getRed(&r, green: &g, blue: &b, alpha: &a)
        return Float4(Float(r), Float(g), Float(b), Float(a))
    }
}
struct ColourAction: EntityAction {
    let startColour: SIMD4<Float>
    let targetColour: SIMD4<Float>

    var animatedValueType: (any AnimatableData.Type)? { SIMD4<Float>.self }

    init(startColour: UIColor, targetColour: UIColor) {
        self.startColour = startColour.float4
        self.targetColour = targetColour.float4
    }

    static func registerEntityAction() {
        ColourAction.subscribe(to: .updated) { event in
            guard let animationState = event.animationState else { return }
            let interpolatedColour = event.action.startColour.mixedWith(event.action.targetColour, by: Float(animationState.normalizedTime))
            animationState.storeAnimatedValue(interpolatedColour)
        }
    }
}
extension Entity {
    func updateColour(from currentColour: UIColor, to targetColour: UIColor, duration: Double, endAction: @escaping (Entity) -> Void = { _ in }) {
        // ColourAction's initializer (above) doesn't take a completion handler, so endAction isn't forwarded to it.
        let colourAction = ColourAction(startColour: currentColour, targetColour: targetColour)
        if let colourAnimation = try? AnimationResource.makeActionAnimation(for: colourAction, duration: duration, bindTarget: .material(0).baseColorTint) {
            playAnimation(colourAnimation)
        }
    }
}
The EntityAction can only be applied to an entity with a ModelComponent (because of the material), so it can be called like so:
guard
    let modelComponent = entity.components[ModelComponent.self],
    let material = modelComponent.materials.first as? PhysicallyBasedMaterial
else {
    return
}

let currentColour = material.baseColor.tint
let targetColour = UIColor(red: 0.5, green: 0.5, blue: 0.5, alpha: 1.0)
entity.updateColour(from: currentColour, to: targetColour, duration: 2)
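On the colour-space question: R0.735357 is exactly what you get when a linear-space value of 0.5 is re-encoded with the sRGB transfer function, so a linear-versus-sRGB mismatch does look likely. A hedged sketch of converting the UIColor to linear sRGB before building the SIMD4, assuming that is indeed the mismatch:
import UIKit
import simd

extension UIColor {
    // Convert to linear sRGB components before handing the values to the material,
    // on the assumption that the tint is interpreted as linear while
    // getRed(_:green:blue:alpha:) returns sRGB-encoded components.
    var linearFloat4: SIMD4<Float> {
        guard
            let linearSpace = CGColorSpace(name: CGColorSpace.linearSRGB),
            let converted = cgColor.converted(to: linearSpace, intent: .defaultIntent, options: nil),
            let components = converted.components, components.count >= 4
        else {
            return SIMD4<Float>(0, 0, 0, 1)
        }
        return SIMD4<Float>(Float(components[0]), Float(components[1]),
                            Float(components[2]), Float(components[3]))
    }
}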
I have a scene built up in RealityComposerPro, in which I've added a ParticleEmitter with isEmitting set to False and 'Loop' set to True.
In my app, when I toggle isEmitting to True there can be a delay of a few seconds before the ParticleEmitter starts.
However, if I programmatically add the emitter in code at that point, it starts immediately.
To be clear, I'm seeing this on the visionOS simulator; I don't have access to a device at this time.
Am I misunderstanding how to control the ParticleEmitter when I need precise control over when it starts?
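For context, this is roughly how the toggle is done; a minimal sketch where "particleEntity" is a placeholder for the entity that carries the emitter from the Reality Composer Pro scene:
// Toggle the emitter authored in Reality Composer Pro by modifying a copy of the
// component and writing it back. "particleEntity" is a placeholder name.
if var emitter = particleEntity.components[ParticleEmitterComponent.self] {
    emitter.isEmitting = true
    particleEntity.components.set(emitter)
}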
Hi there,
I've discovered an issue with gesture handling in RealityKit when setting the camera’s fieldOfViewOrientation to horizontal. For instance, if I render a simple box at the center of the view with a collision shape that exactly matches its dimensions, the actual hit area behaves as if it's smaller than the box. Additionally, when attempting to drag the box away from the center, the hit area appears misaligned—offset slightly towards the center.
Since the default fieldOfViewOrientation is vertical and everything works as expected in that mode, it seems that the gesture system might be assuming a vertical FOV. Given that the API allows setting it to horizontal, perhaps gestures should function correctly regardless of the orientation?
Thank you!
Topic:
Graphics & Games
SubTopic:
RealityKit
As in the system environments, we get real-time reflections of the movie on the screen, or reflections of the surrounding Mount Hood scenery in the background...
Could I get a metallic surface to show accurate reflections of a box placed on top of it?
I don't mean using a probe or an HDR cubemap; I mean the same accurate reflections as the water in the Mount Hood environment reflecting the movie I'm watching in another app.
Hello,
I created a new project with the provided template for Immersive Environments.
Straight out of the box, I build to both the Simulator and to Vision Pro, and the provided Environment looks like this.
What's interesting is that in Reality Composer Pro it looks correct, so how do I achieve the same look?
Thank you in advance!
I am currently using RealityKit (perspective camera) to render a character in my SwiftUI app.
The character has customization such as clothing items and hair and all objects are properly weighted to the rig.
The way the model is setup in Blender is like so: Groups of objects that will be swapped (ex: Shoes -> Shoes objects) and an armature. I then export it to usdc with all objects active. This is the resulting entity hierarchy, viewed in Reality Composer Pro:
My problem is that when I export with the Armature Modifier applied to the objects, so that animations get exported, the ModelComponent gets flattened to the armature and swapping entities is no longer as simple as removing the entity with the corresponding name.
What's the best practice here? Should animation be exported separately and then applied to the skeleton? If so, how is that achieved? I'm not really sure how to proceed here.
I am currently using RealityKit (perspective camera) to render a character in my SwiftUI app.
The character has customization such as clothing items and hair and all objects are properly weighted to the rig.
The way the model is setup in Blender is like so: Groups of objects that will be swapped (ex: Shoes -> Shoes objects) and an armature. I then export it to usdc with all objects active. This is the resulting hierarchy:
Before exporting with the animation (Armature Modifier applied), I simply had to store the model entities and swap them in. But now, when I export with the Armature Modifier applied so that animations get exported, the ModelComponent gets flattened into the armature, and swapping entities and applying new materials to them is no longer as simple.
Here's a demo blend file and usdc export with a setup like mine, having an animated bone to swing a cube and sphere, to be swapped so that only one is visible https://www.dropbox.com/scl/fo/be2q6qcztc83z7c4gj1w0/AMapxWc_ip2KZ8oTOYDUMv8?rlkey=rcdaggcxq06dyen09mw5mqmem&st=bnc0d7j0&dl=0
This is how I'm loading the entity and removing a part, using the demo files:
import SwiftUI
import RealityKit

struct SwapDemoView: View {
    var body: some View {
        RealityView { content in
            let camera = PerspectiveCamera()
            camera.transform.translation = SIMD3(x: 0, y: 0.1, z: 3)

            guard let root = try? await Entity(named: "simpleSwapDemo") else { fatalError("simpleSwapDemo.usdc is not present") }
            print(root) // Get initial hierarchy

            guard let cube = root.findEntity(named: "Cube") else { fatalError("Entity cube doesn't exist") }
            cube.removeFromParent() // <-- Cube is still visible after removal
            print(root) // Get hierarchy to confirm removal of cube

            let resource = root.availableAnimations[0]
            root.playAnimation(resource.repeat())

            content.add(root)
            content.add(camera)
        }
        .background(.white)
    }
}
And this is what the entity hierarchy looks like in RealityKit before cube removal
▿ 'root' : Entity, children: 1
⟐ SynchronizationComponent
⟐ AnimationLibraryComponent
⟐ Transform
▿ 'Armature' : ModelEntity, children: 2
⟐ SynchronizationComponent
⟐ ModelComponent
⟐ SkeletalPosesComponent
⟐ AnimationLibraryComponent
⟐ Transform
▿ 'Armature' : Entity
⟐ SynchronizationComponent
⟐ Transform
▿ 'Primitives' : Entity, children: 2
⟐ SynchronizationComponent
⟐ Transform
▿ 'Sphere' : Entity, children: 1
⟐ SynchronizationComponent
⟐ Transform
▿ 'Sphere' : Entity
⟐ SynchronizationComponent
⟐ Transform
▿ 'Cube' : Entity, children: 1
⟐ SynchronizationComponent
⟐ Transform
▿ 'Cube' : Entity
⟐ SynchronizationComponent
⟐ Transform
And here's the hierarchy after removal
▿ 'root' : Entity, children: 1
⟐ SynchronizationComponent
⟐ AnimationLibraryComponent
⟐ Transform
▿ 'Armature' : ModelEntity, children: 2
⟐ SynchronizationComponent
⟐ ModelComponent
⟐ SkeletalPosesComponent
⟐ AnimationLibraryComponent
⟐ Transform
▿ 'Armature' : Entity
⟐ SynchronizationComponent
⟐ Transform
▿ 'Primitives' : Entity, children: 1
⟐ SynchronizationComponent
⟐ Transform
▿ 'Sphere' : Entity, children: 1
⟐ SynchronizationComponent
⟐ Transform
▿ 'Sphere' : Entity
⟐ SynchronizationComponent
⟐ Transform
And this is the result:
What's the best practice here? Should animation be exported separately and then applied to the skeleton? If so, how is that achieved? I'm not really sure how to proceed here.
I’m building an app that uses RealityKit and specifically ARConfiguration.FrameSemantics.personSegmentationWithDepth.
The goal is to insert an AR object into the scene behind a person, and an additional AR object in front of the person, while being as photo realistic as possible.
Through testing, I’ve noticed that many times, the edges of the person segmentation mask are not well matched to the actual person, and parts of the person are transparent, with the AR object bleeding through. It’s sort of like a “bad green screen” effect, which I’d expect to see a little bit, but not to this extent. I’ve been testing on iPhone 16, iPhone 14 Pro, iPad Pro 12.9 inch 6th Generation, and iPhone 12 Pro, with similar results across all devices.
I’m wondering what else I can do to improve this… either code changes, platform (like different iPhone models), or environment (like lighting, distance, etc).
Attaching some example screen grabs and a minimum reproducible code sample. Appreciate any insights!
import ARKit
import SwiftUI
import RealityKit

struct RealityViewContainer: UIViewRepresentable {
    func makeUIView(context: Context) -> ARView {
        let arView = ARView(frame: .zero)
        arView.environment.sceneUnderstanding.options.insert(.occlusion)
        arView.renderOptions.insert(.disableMotionBlur)
        arView.renderOptions.insert(.disableDepthOfField)

        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = [.horizontal]
        if ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) {
            configuration.frameSemantics.insert(.personSegmentationWithDepth)
        }
        arView.session.run(configuration)
        arView.session.delegate = context.coordinator
        context.coordinator.arView = arView
        return arView
    }

    func updateUIView(_ uiView: ARView, context: Context) {}

    func makeCoordinator() -> Coordinator {
        Coordinator(self)
    }

    class Coordinator: NSObject, ARSessionDelegate {
        var parent: RealityViewContainer
        var arView: ARView?
        var floorAnchor: ARPlaneAnchor?

        init(_ parent: RealityViewContainer) {
            self.parent = parent
        }

        func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
            if let arView, floorAnchor == nil {
                for anchor in anchors {
                    if let horizontalPlaneAnchor = anchor as? ARPlaneAnchor,
                       horizontalPlaneAnchor.alignment == .horizontal,
                       horizontalPlaneAnchor.transform.columns.3.y < arView.cameraTransform.translation.y { // filter out ceiling
                        floorAnchor = horizontalPlaneAnchor

                        // BackgroundEntity and ForegroundEntity are custom entity types from the sample project.
                        let backgroundEntity = BackgroundEntity()
                        let anchorEntity = AnchorEntity(anchor: horizontalPlaneAnchor)
                        anchorEntity.addChild(backgroundEntity)

                        let foregroundEntity = ForegroundEntity()
                        backgroundEntity.addChild(foregroundEntity)

                        arView.scene.addAnchor(anchorEntity)
                        arView.installGestures([.rotation, .translation], for: backgroundEntity)
                        break // Stop after adding the first horizontal plane (floor)
                    }
                }
            }
        }
    }
}
Topic:
Graphics & Games
SubTopic:
RealityKit
Our app is live, and it appears that since the iOS 18 update the VideoMaterial renders a pink/purple color instead of the video (picture attached); the audio is rendered properly.
We found that it occurs on older devices: iPhone 11 and iPhone SE (2020).
I've found this thread by Andy Jazz on Stack Overflow:
Steps to Reproduce:
Create a plane for the video screen.
Apply a VideoMaterial using AVPlayerItem.
Anchor the model entity to an ARImageAnchor.
Expected Outcome:
The video should play as a material on the plane in RealityKit.
Actual Outcome:
On iOS 18, the plane appears pink, indicating the VideoMaterial isn’t applied.
What I’ve Tried:
-Verified the video URL is correct.
-Checked that the AVPlayerItem and VideoMaterial are initialised correctly.
-Ensured the AVPlayer is playing the video.
I also tried different formats (mov / mp4 / m4v) and verified that the video's status is readyToPlay.
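For reference, here's a minimal sketch of the setup described in the steps above; the video URL, plane sizing, and anchoring shown here are placeholders rather than the app's actual implementation:
import ARKit
import RealityKit
import AVFoundation

// Sketch of the image-anchored video plane described above.
func makeVideoPlane(for imageAnchor: ARImageAnchor, videoURL: URL) -> AnchorEntity {
    let player = AVPlayer(playerItem: AVPlayerItem(url: videoURL))
    let material = VideoMaterial(avPlayer: player)

    // Size the plane to the detected reference image (in meters).
    let width = Float(imageAnchor.referenceImage.physicalSize.width)
    let height = Float(imageAnchor.referenceImage.physicalSize.height)
    let plane = ModelEntity(mesh: .generatePlane(width: width, depth: height),
                            materials: [material])

    let anchor = AnchorEntity(anchor: imageAnchor)
    anchor.addChild(plane)
    player.play()
    return anchor
}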
Any suggestions?
As part of the WWDC25 Keynote, a technology was announced that can present 2D images as 3D spatial scenes. This announcement is supported by a Press Release.
...developers can use the Spatial Scene API to make their app experience even more immersive. Zillow is taking advantage of the API for their Zillow Immersive app, allowing users to see images of homes and apartments with the rich depth and dimension that spatial scenes offer.
The feature also appears in the Photos App on iOS 26 Developer Beta 1. Tapping "Spatial Scene" on any photo opens a view of that photo with a parallax effect. I've searched the WWDC sessions and new documentation and have come up short. Reaching out here for help.
Is there any documentation for Spatial Scene API? Or any guidance on how to implement the spatial scene in iOS?
Has anyone experienced convexCast causing a crash, and what might be behind it?
Here's the call stack:
We're using RealityKit to create a science education AR app for iOS, iPadOS, and visionOS.
In the WWDC25 session video "Bring your SceneKit project to RealityKit" https://developer.apple.com/videos/play/wwdc2025/288 at 8:15, it's explained that when using RealityKit, RealityView should be used in all cases, whereas in the past, SceneKit required SCNView, SceneView, or ARSCNView, depending on an app's requirements.
Because the initial development of our app on iOS predates iOS 18's RealityView, our app currently uses ARView to render RealityKit AR content on iOS and iPadOS.
Is it recommended that we migrate to RealityView, or can we safely continue using our existing ARView implementation? We'd prefer to avoid unnecessary development cost.
If migrating from ARView to RealityView is recommended, what specific benefits should we expect from this transition?
Thank you.
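For context, a rough sketch of what an equivalent RealityView-based AR scene looks like on iOS 18+; the .spatialTracking camera mode and the anchoring shown here are assumptions about the newer API surface, not something taken from the session video:
import SwiftUI
import RealityKit

// Hedged sketch of a RealityView AR scene on iOS 18+, for comparison with an ARView setup.
struct MigratedARContentView: View {
    var body: some View {
        RealityView { content in
            // Use the device camera feed with world tracking (assumed .spatialTracking mode).
            content.camera = .spatialTracking

            // Anchor a simple model to a detected horizontal surface.
            let anchor = AnchorEntity(.plane(.horizontal, classification: .any, minimumBounds: [0.2, 0.2]))
            anchor.addChild(ModelEntity(mesh: .generateBox(size: 0.1),
                                        materials: [SimpleMaterial(color: .blue, isMetallic: false)]))
            content.add(anchor)
        }
    }
}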