What is the current best practice for instancing meshes in RealityKit?
I see both MeshInstanceComponent and MeshInstanceCollection.
My intent is to bind a transform to a circle agent (a GameplayKit agent) and feed that result into instancing.
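Whichever instancing API turns out to be current, the agent-to-transform half can be sketched independently. Below is a rough, hypothetical sketch (the type and function names are mine, not an Apple API): step each GKAgent3D, then copy its pose into a per-instance Transform that the instancing data would consume.

import GameplayKit
import RealityKit

// Hypothetical container pairing an agent with the transform its instance should use.
struct AgentDrivenInstance {
    let agent: GKAgent3D
    var transform: Transform = .identity
}

// Step the agents and refresh the per-instance transforms each frame.
func updateInstanceTransforms(_ instances: inout [AgentDrivenInstance], deltaTime: TimeInterval) {
    for index in instances.indices {
        let agent = instances[index].agent
        agent.update(deltaTime: deltaTime)
        // GKAgent3D exposes position as a SIMD3<Float> and rotation as a 3x3 matrix.
        instances[index].transform = Transform(scale: .one,
                                               rotation: simd_quatf(agent.rotation),
                                               translation: agent.position)
    }
}

The resulting transforms could then be written into whichever per-instance container the instancing component expects.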
Hi!
Using ARView in UIKit or through a UIViewRepresentable in SwiftUI, we can do:
arView.debugOptions = [.showPhysics, .showStatistics]
What is the equivalent in RealityView?
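For reference, a minimal sketch of the ARView-in-UIViewRepresentable path described in the question (the wrapper name is made up):

import SwiftUI
import RealityKit

// Hypothetical wrapper: debugOptions is a property of ARView, so it is set on
// the wrapped view rather than through a SwiftUI modifier.
struct DebugARViewContainer: UIViewRepresentable {
    func makeUIView(context: Context) -> ARView {
        let arView = ARView(frame: .zero)
        arView.debugOptions = [.showPhysics, .showStatistics]
        return arView
    }

    func updateUIView(_ uiView: ARView, context: Context) { }
}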
Hey there, I’m currently planning to use RealityKit in a new multiplatform app I’m building. Unfortunately, I noticed that watchOS is not supported for RealityKit, while SceneKit is getting deprecated. However, I’d like to maintain the same codebase across platforms. What are my options?
Hi team, I'm looking for the RealityKit debugger in Xcode 26 beta 3. I'm running a RealityKit app on my iPad running iPadOS 26 beta 3, but the debugger option is not there in Xcode.
Hi all,
I've encountered a potential issue with how the winding order of geometry is handled when its transformation involves negative scaling.
I created a simple test asset, a single triangle, to demonstrate this. The triangle's vertices are defined in a counter-clockwise ("right-handed") winding order, and its transform has a negative scale on the X-axis. According to the OpenUSD specification, this negative determinant in the transformation matrix should effectively reverse the winding order of the geometry:
However, any given gprim's local-to-world transformation can flip its effective orientation, when it contains an odd number of negative scales. This condition can be reliably detected using the (Jacobian) determinant of the local-to-world transform: if the determinant is less than zero, then the gprim's orientation has been flipped, and therefore one must apply the opposite handedness rule when computing its surface normals (or just flip the computed normals) for the purposes of hidden surface detection and lighting calculations.
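For what it's worth, the check the spec describes is easy to reproduce on the app side; a minimal Swift sketch (illustrative only) would be:

import simd

// Per the quoted OpenUSD rule: a negative determinant of the local-to-world
// transform means the effective orientation (winding) has been flipped.
// For an affine transform, the 4x4 determinant equals that of its upper 3x3.
func orientationIsFlipped(localToWorld: simd_float4x4) -> Bool {
    localToWorld.determinant < 0
}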
When I view the asset in tools like Blender or Preview on macOS, it behaves as expected. The triangle's effective orientation is flipped to CW.
However, when the same asset is viewed in Reality Composer Pro or with QuickLook on iOS, its effective orientation remains CCW. In other words, the triangle faces the opposite direction.
My questions for the community and Apple are:
Is this behavior in RealityKit a known issue?
If this is a known issue, is there official guidance for DCC tools on how to export USDZ assets to ensure they appear correctly in the Apple ecosystem?
Any insights or recommendations would be greatly appreciated.
Hello, I am new to the Apple ecosystem and to Apple development, though I have some coding experience with C#. I'd like to develop my game for iPhone 16 and up (due to the ability to run AI models), but I'm having a hard time figuring out what to use. There are a lot of resources for SceneKit, but its documentation page says it's deprecated, so I looked at the RealityKit docs and tutorials, and they mostly show how to develop for visionOS. I'm really confused, since there are no tutorials that show how to develop a game for iOS with RealityKit without focusing on visionOS. I just want to develop for iPhone 16 and up, but I can't find resources focused on that.
Hi all,
I've developed some code that enables an arcball camera interaction with my scene. I've done this using components and systems. The implementation feels a bit messy, as I've got gesture code on my RealityView and then a bunch of other code that uses those gesture inputs in my component and system.
Is there a demo app or some example code that shows a nice way to encapsulate these things into one item for custom cameras, something like Apple's .realityViewCameraControls(.orbit)?
If not, can anyone recommend an approach to take?
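I'm not aware of an official sample for exactly this, but one way to keep it self-contained is to have the gesture only write values into a component, and let the system derive the camera transform from that component. A rough sketch under those assumptions (the component/system names and the orbit math are mine, not Apple's):

import RealityKit
import SwiftUI

// Gesture handlers write into this component; nothing else leaves the view layer.
struct OrbitCameraComponent: Component {
    var azimuth: Float = 0          // radians, driven by a drag gesture
    var elevation: Float = 0.3      // radians
    var radius: Float = 2.0         // distance from the target
    var target: SIMD3<Float> = .zero
}

// The system converts the orbit parameters into a camera pose every frame.
struct OrbitCameraSystem: System {
    static let query = EntityQuery(where: .has(OrbitCameraComponent.self))

    init(scene: RealityKit.Scene) { }

    func update(context: SceneUpdateContext) {
        for entity in context.entities(matching: Self.query, updatingSystemWhen: .rendering) {
            guard let orbit = entity.components[OrbitCameraComponent.self] else { continue }
            // Spherical coordinates around the target.
            let x = orbit.radius * cos(orbit.elevation) * sin(orbit.azimuth)
            let y = orbit.radius * sin(orbit.elevation)
            let z = orbit.radius * cos(orbit.elevation) * cos(orbit.azimuth)
            let position = orbit.target + SIMD3(x, y, z)
            entity.look(at: orbit.target, from: position, relativeTo: nil)
        }
    }
}

// Register once at app start:
// OrbitCameraComponent.registerComponent()
// OrbitCameraSystem.registerSystem()

With this split, the drag gesture on the RealityView only mutates azimuth/elevation/radius on the camera entity's component, which keeps all the camera logic in one place.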
Hello Apple team,
I'm working on an iOS AR app using SwiftUI and RealityKit, and I was wondering if the Cinematic API can be used with a RealityKit scene. I'd like to achieve a shallow depth of field while keeping the 3D asset in focus, and vice versa.
Thanks!
I have been tasked with creating content for the Apple Vision Pro. Just the 3D content and animation, not the programming end of things.
I can't seem to get any kind of mesh deformation animation to import into Reality Composer Pro. By that I mean bones/skin, or even point cache.
I work on PC, and my main software is 3DS Max, but I'm borrowing an iMac for this job, and was instructed to use RCP on it for testing before handing things off to the programmer. My files open and play fine in other USD programs, like Omniverse, or USD View, just not Reality Composer Pro.
I've seen the dinosaur demo in AVP, so I know mesh deformation is possible. If there are other essential tools that might make this possible, I have not been made aware of them.
I am experimenting with bouncing things off of Blender, in case that exports better, but I'm not really having luck there either, though my results are different.
Thanks.
A bit of background on what our app is doing:
We have a RealityKit ARView session running.
During this period we place objects in RealityKit.
At some point the user can "take a photo", and we use session.captureHighResolutionFrame to capture a frame.
We then use the captured frame and frame.camera.projectPoint to project our objects back to 2D (see the sketch below).
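For context, the projection step above boils down to a single ARCamera call; a sketch (the point, viewport size, and orientation are placeholders):

import ARKit
import UIKit

// Project a world-space point from the captured high-resolution frame back
// into 2D viewport coordinates.
func projectToScreen(frame: ARFrame,
                     worldPosition: SIMD3<Float>,
                     viewportSize: CGSize,
                     orientation: UIInterfaceOrientation) -> CGPoint {
    frame.camera.projectPoint(worldPosition,
                              orientation: orientation,
                              viewportSize: viewportSize)
}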
The issue we found is that on devices running iOS 26, the first photo the user takes and the first frame received from session.captureHighResolutionFrame give an incorrect CGPoint from frame.camera.projectPoint. If the user takes a second photo from the same camera position, the second frame received from session.captureHighResolutionFrame gives the correct CGPoint from frame.camera.projectPoint.
I noticed a difference between the first and subsequent frames that I believe corresponds to the issue: the yaw value of the camera (frame.camera.eulerAngles.y) on the first frame is not correct (it is inconsistent with any subsequent frame).
I also created a small example app, following the Building an Immersive Experience with RealityKit example. The issue exists in this app on iOS 26, while iOS 18.* has consistent values between the first and subsequent captured frames.
Note:
The yaw value seems to differ more if we start the session in portrait but take the photo in landscape.
Example result for 3 captured frames:
Frame captured with yaw: 1.4855177402496338
Frame captured with yaw: -0.08803760260343552
Frame captured with yaw: -0.08179682493209839
Example code:
import SwiftUI
import RealityKit
import ARKit

class CustomARView: ARView, ARSessionDelegate {
    required init(frame: CGRect) { super.init(frame: frame) }
    required init?(coder decoder: NSCoder) { fatalError("init(coder:) has not been implemented") }

    func setup() {
        let singleTap = UITapGestureRecognizer(target: self, action: #selector(handleTap))
        addGestureRecognizer(singleTap)
    }

    @objc
    func handleTap(_ gestureRecognizer: UIGestureRecognizer) {
        Task {
            do {
                // Capture a high-resolution frame and log the camera yaw.
                let frame = try await session.captureHighResolutionFrame()
                print("Frame captured with yaw: \(Double(frame.camera.eulerAngles.y))")
            } catch { }
        }
    }
}

struct CustomARViewUIViewRepresentable: UIViewRepresentable {
    func makeUIView(context: Context) -> some UIView {
        let arView = CustomARView(frame: .zero)
        arView.setup()
        return arView
    }

    func updateUIView(_ uiView: UIViewType, context: Context) { }
}

struct ContentView: View {
    var body: some View {
        CustomARViewUIViewRepresentable()
            .frame(maxWidth: .infinity, maxHeight: .infinity)
            .ignoresSafeArea()
    }
}
I'm developing a prototype Vision Pro app and would like to render a 3D scene made in Reality Composer Pro on an image anchor in a RealityView. I've had no luck making it work so far and need some guidance to move on.
The reference image is stored in the app's assets (in the "AR Resources" group).
Below is the source code:
import SwiftUI
import RealityKit
import RealityKitContent

struct AnchorView: View {
    // Anchor entity tied to the reference image in the "AR Resources" group.
    @State var imageEntity: Entity = AnchorEntity(.image(group: "AR Resources", name: "reanchor"))

    var body: some View {
        RealityView { content in
            do {
                // Add the initial RealityKit content from the Reality Composer Pro bundle.
                let scene = try await Entity(named: "Scene", in: realityKitContentBundle)
                imageEntity.addChild(scene)
                content.add(imageEntity)
            } catch {
                print("Error occurred when adding RealityView content: \(error)")
            }
        }
    }
}
After launching a nearby experience via Quick Look or inside our app, all the users in the group sometimes see the model blinking and becoming transparent.
Just one user doesn't have the issue: either the one who launched SharePlay or the user who force-aligns the immersive space in front.
Weird.
I implemented an EntityAction to change the baseColor tint, and had it working on visionOS 2.x.
import RealityKit
import UIKit

typealias Float4 = SIMD4<Float>

extension UIColor {
    var float4: Float4 {
        if
            cgColor.numberOfComponents == 4,
            let c = cgColor.components
        {
            Float4(Float(c[0]), Float(c[1]), Float(c[2]), Float(c[3]))
        } else {
            Float4()
        }
    }
}

struct ColourAction: EntityAction {
    // MARK: - PUBLIC PROPERTIES
    let startColour: Float4
    let targetColour: Float4

    // MARK: - PUBLIC COMPUTED PROPERTIES
    var animatedValueType: (any AnimatableData.Type)? { Float4.self }

    // MARK: - INITIATION
    init(startColour: UIColor, targetColour: UIColor) {
        self.startColour = startColour.float4
        self.targetColour = targetColour.float4
    }

    // MARK: - PUBLIC STATIC FUNCTIONS
    @MainActor static func registerEntityAction() {
        ColourAction.subscribe(to: .updated) { event in
            guard let animationState = event.animationState else { return }
            let interpolatedColour = event.action.startColour.mixedWith(event.action.targetColour, by: Float(animationState.normalizedTime))
            animationState.storeAnimatedValue(interpolatedColour)
        }
    }
}

extension Entity {
    // MARK: - PUBLIC FUNCTIONS
    func changeColourTo(_ targetColour: UIColor, duration: Double) {
        guard
            let modelComponent = components[ModelComponent.self],
            let material = modelComponent.materials.first as? PhysicallyBasedMaterial
        else {
            return
        }
        let colourAction = ColourAction(startColour: material.baseColor.tint, targetColour: targetColour)
        if let colourAnimation = try? AnimationResource.makeActionAnimation(for: colourAction, duration: duration, bindTarget: .material(0).baseColorTint) {
            playAnimation(colourAnimation)
        }
    }
}
This doesn't work in visionOS 26. My current fix is to set the material base colour directly, but this feels like the wrong approach:
@MainActor static func registerEntityAction() {
    ColourAction.subscribe(to: .updated) { event in
        guard
            let animationState = event.animationState,
            let entity = event.targetEntity,
            let modelComponent = entity.components[ModelComponent.self],
            var material = modelComponent.materials.first as? PhysicallyBasedMaterial
        else { return }
        let interpolatedColour = event.action.startColour.mixedWith(event.action.targetColour, by: Float(animationState.normalizedTime))
        material.baseColor.tint = UIColor(interpolatedColour)
        entity.components[ModelComponent.self]?.materials[0] = material
        animationState.storeAnimatedValue(interpolatedColour)
    }
}
So before I raise this as a bug, was I doing anything wrong in the former version and got lucky? Is there a better approach?
Hello, I’m porting my UIKit/SceneKit app to SwiftUI/RealityKit and I’m wondering how to change the camera target programmatically. I created a simple scene in Reality Composer Pro with two spheres. My goal is straightforward: when the user taps a sphere, the camera should look at it as the main target.
Following Apple’s videos, I implemented the .gesture modifier and it is printing the tapped sphere correctly, but updating my targetEntity state doesn’t change anything, so the camera won't update its target. Is there a way to access the scene content at that level? Or what else should I do?
Here’s my current code implementation:
Thanks!
Is there any standard way of efficiently showing an MTLTexture on a RealityKit Entity?
I can't find anything proper on how to, for example, generate a LowLevelTexture out of an MTLTexture. The closest match was a two-year-old thread.
In the old SceneKit app, we would just do
guard let material = someNode.geometry?.materials.first else { return }
material.diffuse.contents = mtlTexture
Our flow is as follows (for visualizing the currently detected object):
Camera stream -> Core ML segmentation -> send the relevant part of the MLShapedArray tensor to a Metal compute shader that returns an MTLTexture -> show the resulting texture on a 3D object to the user.
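Not a confirmed answer, but one direction worth trying is to allocate a LowLevelTexture, blit the compute output into its backing Metal texture, and wrap it in a TextureResource for a material. The sketch below is only that, a sketch: I haven't verified the exact LowLevelTexture descriptor/replace API against the current SDK, and the input names are placeholders.

import RealityKit
import Metal

// Sketch (unverified API details): copy a compute-shader output MTLTexture into
// a RealityKit LowLevelTexture and expose it as a TextureResource for a material.
func makeTextureResource(from computedTexture: MTLTexture,
                         commandQueue: MTLCommandQueue) throws -> TextureResource {
    var descriptor = LowLevelTexture.Descriptor()
    descriptor.pixelFormat = computedTexture.pixelFormat
    descriptor.width = computedTexture.width
    descriptor.height = computedTexture.height
    descriptor.textureUsage = [.shaderRead, .shaderWrite]

    let lowLevelTexture = try LowLevelTexture(descriptor: descriptor)

    if let commandBuffer = commandQueue.makeCommandBuffer() {
        // replace(using:) hands back the Metal texture backing the LowLevelTexture
        // for this command buffer; blit the compute result into it.
        let destination = lowLevelTexture.replace(using: commandBuffer)
        if let blit = commandBuffer.makeBlitCommandEncoder() {
            blit.copy(from: computedTexture, to: destination)
            blit.endEncoding()
        }
        commandBuffer.commit()
    }

    // The resource can then be assigned to e.g. an UnlitMaterial's color texture.
    return try TextureResource(from: lowLevelTexture)
}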
Hi everyone,
I’m currently learning about ParticleEmitterComponent and exploring the sample app provided in the Simulating particles in your visionOS app documentation.
In the sample app, when I set the EmitterPreset to fireworks from the settings panel on the left side of the window and choose SystemImage, I noticed two issues:
The image applied to mainEmitter appears clipped or cropped.
The image on spawnedEmitter does not update to the selected SystemImage.
What I want to achieve:
Apply the same SystemImage to both mainEmitter and spawnedEmitter so that it displays correctly without clipping.
Remove the animation that changes the size of spawnedEmitter over time and keep it at a constant size.
Could someone explain which properties should be adjusted to achieve this behavior? Any guidance or examples would be greatly appreciated.
Thanks in advance!
I’m trying to play an Apple Immersive Video file in the .aivu format using VideoPlayerComponent, following the official documentation found here:
https://developer.apple.com/documentation/RealityKit/VideoPlayerComponent
Here is a simplified version of the code I'm running in another application:
import SwiftUI
import RealityKit
import AVFoundation

struct ImmersiveView: View {
    var body: some View {
        RealityView { content in
            let player = AVPlayer(url: Bundle.main.url(forResource: "Apple_Immersive_Video_Beach", withExtension: "aivu")!)
            let videoEntity = Entity()
            var videoPlayerComponent = VideoPlayerComponent(avPlayer: player)
            videoPlayerComponent.desiredImmersiveViewingMode = .full
            videoPlayerComponent.desiredViewingMode = .stereo
            player.play()
            videoEntity.components.set(videoPlayerComponent)
            content.add(videoEntity)
        }
    }
}
Full code is here:
https://github.com/tomkrikorian/AIVU-VideoPlayerComponentIssueSample
But the video does not play in my project, even though the file is correct (it can be played in Apple Immersive Video Utility), and I’m getting this error when the app crashes:
App VideoPlayer+Component Caption: onComponentDidUpdate Media Type is invalid
Domain=SpatialAudioServicesErrorDomain Code=2020631397 "xpc error" UserInfo={NSLocalizedDescription=xpc error}
CA_UISoundClient.cpp:436 Got error -4 attempting to SetIntendedSpatialAudioExperience
[0x101257490|InputElement #0|Initialize] Number of channels = 0 in AudioChannelLayout does not match number of channels = 2 in stream format.
The video I’m using is the official sample that can be found here, but I tried several different files shot by my clients and the same error is displayed, so the issue is definitely not the files but something on the RealityKit side of things:
https://developer.apple.com/documentation/immersivemediasupport/authoring-apple-immersive-video
Steps to reproduce the issue:
- Open AIVUPlayerSample project and run. Look at the logs.
All code can be found in ImmersiveView.swift
Sample file is included in the project
Expected results:
If I followed the documentation and samples provided, I should see my video played in full immersive mode inside my ImmersiveSpace.
Am I doing something wrong in the code? I'm basically following the documentation here.
Feedback ticket: FB19971306
I am rewriting an unfinished SceneKit project in RealityKit (non-AR). As far as I can see, RealityKit is missing basic fog functionality?
Fog was simple and easy to implement in SceneKit (fogStartDistance / fogEndDistance / fogDensityExponent / fogColor). Are there any plans to implement something like this in RealityKit?
Are there any simple workarounds?
I have two planes with textures on them. I want these planes to intersect [ –|– ], and I want the blend mode to be additive. Currently I get z-fighting on the planes, and I can't see how to set blend modes.
I've done this before in Unity and Godot in a fairly straightforward manner.
How do I accomplish this with RealityKit, preferably using code only (my scene is quite dynamic)?
Do I need to do it manually with a shader? How can I stop the z-fighting?
In the CanyonCrosser example project, some RealityKit systems are implemented as classes while others are structs. What’s the reason for using different types?
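I can't speak for the sample's authors, but the System protocol allows both. The practical difference is that a class-based system can keep mutable per-system state across update calls, while a struct-based system cannot mutate its own stored properties inside update(context:). A minimal illustration (the system names are made up, not from CanyonCrosser):

import RealityKit
import Foundation

// Class-based system: can accumulate state (e.g. elapsed time) across frames.
final class ElapsedTimeSystem: System {
    private var elapsed: TimeInterval = 0

    init(scene: RealityKit.Scene) { }

    func update(context: SceneUpdateContext) {
        elapsed += context.deltaTime
    }
}

// Struct-based system: value type, no mutable stored state.
struct StatelessSystem: System {
    init(scene: RealityKit.Scene) { }

    func update(context: SceneUpdateContext) {
        // Per-frame work that only reads and writes components, not `self`.
    }
}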