I'm building a proof of concept application leveraging the PlaneDetectionProvider to generate UI and interactive elements on a horizontal plane the user is looking at. I'm able to create a cube at the centroid of the plane and change its location via its position property. However, I can't seem to rotate the cube programmatically, and based on a forum post from September I'm not sure whether the ModelEntity.move functionality is still bugged or the documentation is out of date.
if let planeCentroid = planeEntity.centroid {
    // Create a cube at the centroid
    let cubeMesh = MeshResource.generateBox(size: 0.1) // cube with 0.1 m sides
    let cubeMaterial = SimpleMaterial(color: .blue, isMetallic: false)
    let cubeEntity = ModelEntity(mesh: cubeMesh, materials: [cubeMaterial])
    cubeEntity.position = planeCentroid
    cubeEntity.position.y += 0.3048
    planeEntity.addChild(cubeEntity)

    let rotationY = simd_quatf(angle: Float(45.0 * .pi / 180.0), axis: SIMD3(x: 0, y: 1, z: 0))
    let cubeTransform = Transform(rotation: rotationY)
    cubeEntity.move(to: cubeTransform, relativeTo: planeEntity, duration: 5, timingFunction: .linear)
}
Ideally, I'd like to have the cube start/stop rotation when the user pinches on the plane mesh but I'd be happy just to see it rotate!
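One thing worth trying, offered as a sketch against the code above: build the target transform from the cube's current transform so that only the rotation changes. A Transform created with just a rotation also carries zero translation and unit scale, so move(to:) would otherwise animate the cube back to the plane's origin as well.
// Sketch: animate only the rotation by starting from the cube's current transform.
// A rotation-only Transform also has zero translation and unit scale, so moving
// to it would additionally slide the cube back to the plane's origin.
var targetTransform = cubeEntity.transform
targetTransform.rotation = simd_quatf(angle: .pi / 4, axis: [0, 1, 0]) * targetTransform.rotation
cubeEntity.move(to: targetTransform, relativeTo: planeEntity, duration: 5, timingFunction: .linear)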
RealityKit
Simulate and render 3D content for use in your augmented reality apps using RealityKit.
Hello,
I am trying to use the subdivision mesh rendering option.
I can see it working in RealityComposerPro:
But not when loading the asset and displaying it in the Simulator:
Using this code:
import SwiftUI
import RealityKit
import RealityKitContent

struct AirspaceView: View {
    // MARK: - VIEW BODY
    var body: some View {
        RealityView { content in
            if let a = try? await Entity(named: "Models/Test/Test.usdc", in: realityKitContentBundle) {
                content.add(a)
            }
        }
    }
}
Any ideas why?
Hi, is there any way to use the front camera to do motion capture?
I want to recognize whether the user has raised their hands up using the front camera on iPhone.
I was able to do it with the back camera, but not the front.
Also, if there is any sample code or documentation, I would be super happy.
Waiting for your reply!!
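In case it is useful: ARKit's motion-capture body tracking uses the rear camera, but the Vision framework's body-pose request runs on frames from any camera. A minimal sketch, assuming pixelBuffer is a frame delivered by an AVCaptureSession configured with the front camera (capture setup not shown):
import Vision

// Sketch: detect "hands raised" on a front-camera frame using Vision body pose.
// `pixelBuffer` is assumed to come from a front-camera AVCaptureSession.
func handsAreRaised(in pixelBuffer: CVPixelBuffer) -> Bool {
    let request = VNDetectHumanBodyPoseRequest()
    // Orientation may need adjusting depending on how the capture session is configured.
    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, orientation: .up, options: [:])
    try? handler.perform([request])
    guard let body = request.results?.first,
          let leftWrist = try? body.recognizedPoint(.leftWrist),
          let rightWrist = try? body.recognizedPoint(.rightWrist),
          let nose = try? body.recognizedPoint(.nose),
          leftWrist.confidence > 0.3, rightWrist.confidence > 0.3, nose.confidence > 0.3
    else { return false }
    // Vision points are normalized with the origin at the bottom-left,
    // so a larger y means higher in the image.
    return leftWrist.location.y > nose.location.y && rightWrist.location.y > nose.location.y
}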
I have a 3D model with morphing animation that works correctly in Blender.
I exported this model as a USDZ file and tried to display it in an Xcode-developed visionOS app, but the morphing animation does not play.
What I Have Tried:
Morphing animation works correctly in Blender.
After exporting to USDZ, the morphing animation does not play in the Xcode app.
Linear motion animations (such as object movement) work fine.
Behavior in Reality Converter:
GLB files do not display.
USDZ files load, but morphing animations do not play.
What I Want to Know:
Is there a way to play morphing animations in an Xcode-developed app?
Does RealityKit support morphing animations?
Can morphing animations be played in an Xcode-developed app?
If RealityKit does not support morphing animations, what alternative methods can be used to play them?
I am looking for a way to use the existing animations without recreating them.
Additional Information:
I have both the Blender file (where animations work) and the USDZ file (where animations do not play).
I am developing a visionOS app using Xcode.
Any advice or solutions would be greatly appreciated.
Thank you in advance!
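While waiting for a definitive answer on morphing support, one quick diagnostic (a sketch; "MorphModel" is a placeholder asset name) is to print which animation resources actually survived the USDZ export, since the post above already shows transform animations coming through while the morph does not:
// Sketch: check what animation resources the exported USDZ actually contains.
// "MorphModel" is a placeholder asset name.
if let model = try? await Entity(named: "MorphModel", in: realityKitContentBundle) {
    print(model.availableAnimations.map { $0.name ?? "unnamed" })
    if let animation = model.availableAnimations.first {
        model.playAnimation(animation.repeat())
    }
}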
My experience has been that ModelEntity(named:in:) can be used to load a USD file with a simple structure consisting of entities and model entities, and, critically, it will flatten the entity hierarchy down to a single ModelEntity, presumably reducing the number of draw calls.
However, can anyone verify that the following is true?
If ModelEntity(named:in:) is used to load a USD file from a RealityKit content bundle, it may fail when the USD file contains more complex data, such as shader graph material definitions, or perhaps for some other reason. I am not sure.
AND the error that ModelEntity(named:in:) throws in this case is
Cannot load RealityKitContent entity: Failed to find resource with name "<name>" in bundle
which would literally suggest that the file does not exist, instead of what I assume the error actually is, which is "the file exists but its entity hierarchy could not be flattened to a single ModelEntity" ?
Is that an accurate description of the known behavior of ModelEntity(named:in:)?
I understand that I could use Entity(named:in:) instead, without the flattening feature. My question is really more about the seemingly misleading error message.
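For what it is worth, a fallback sketch (the asset name "MyAsset" is a placeholder): prefer the flattened ModelEntity load and drop back to the plain Entity load when it fails.
// Sketch: try the flattened ModelEntity load first, and fall back to Entity(named:in:)
// when it fails (e.g. for USD files with shader graph materials). "MyAsset" is a placeholder.
let loaded: Entity
if let flattened = try? await ModelEntity(named: "MyAsset", in: realityKitContentBundle) {
    loaded = flattened
} else {
    loaded = try await Entity(named: "MyAsset", in: realityKitContentBundle)
}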
Thank you for any clarification you can provide.
I'm new here so I don't know which topic this question belongs to... sorry about that!
I watched the WWDC stream and I am really interested in this feature; I'm wondering if it could be used in my apps.
I looked up the documentation, but I found it only supports visionOS (I'm not sure about that, but the demo I saw was based on visionOS).
Breaking Through PolySpatial's ~8k Object Limit – Seeking Alternative Approaches for Large-Scale Digital Twins
Confirmed: PolySpatial Doubles the MeshFilter Count – Hard Limit at ~8k Active Objects (15.9k Total)
Project Context & Research Goals
I’m developing an industrial digital twin application for Apple Vision Pro using Unity’s PolySpatial framework (RealityKit rendering in Unbounded_Volume mode). The scene contains complex factory environments with:
Production line equipment (many fragmented grid objects that need to be merged)
Dynamic product racks (state-switchable assets)
Animated worker avatars
To optimize performance, I’m systematically testing visionOS’s rendering capacity limits. Through controlled stress tests, I’ve identified a critical threshold:
Key Finding
When the total MeshFilter count reaches 15,970 (system baseline + 7,985 user-created objects × 2 due to PolySpatial cloning), the application crashes consistently. This suggests:
PolySpatial’s mirroring mechanism effectively doubles GameObject overhead
An apparent hard limit exists around ~8k active mesh objects in practice
Objectives for This Discussion
Verify if others have encountered similar limits with PolySpatial/RealityKit
Understand whether this is a:
Memory constraint (per-app allocation)
Render pipeline limit (Metal draw calls)
Unity-specific PolySpatial behavior
Explore optimization strategies beyond brute-force object reduction
Why This Matters
Industrial metaverse applications require rendering thousands of interactive objects. Confirming these limits will help our team:
Design safer content guidelines
Prioritize GPU instancing/LOD investments
Potentially contribute back to PolySpatial’s optimization
I’d appreciate insights from engineers who’ve:
Pushed similar large-scale scenes in visionOS
Worked around PolySpatial’s cloning overhead
Discovered alternative capacity limits (vertices/draw calls)
Hi team, I'm looking for the RealityKit debugger in Xcode 26 beta 3. I'm running a RealityKit app on my iPad running iPadOS 26 b3, but the debugger option is not there in Xcode.
Hi all,
I've encountered a potential issue with how the winding order of geometry is handled when their transformations involve negative scaling.
I created a simple test asset, a single triangle, to demonstrate this. The triangle's vertices are defined in a counter-clockwise ("right-handed") winding order, and its transform has a negative scale on the X-axis. According to the OpenUSD specification, this negative determinant in the transformation matrix should effectively reverse the winding order of the geometry:
However, any given gprim's local-to-world transformation can flip its effective orientation, when it contains an odd number of negative scales. This condition can be reliably detected using the (Jacobian) determinant of the local-to-world transform: if the determinant is less than zero, then the gprim's orientation has been flipped, and therefore one must apply the opposite handedness rule when computing its surface normals (or just flip the computed normals) for the purposes of hidden surface detection and lighting calculations.
When I view the asset in tools like Blender or Preview on macOS, it behaves as expected. The triangle's effective orientation is flipped to CW.
However, when the same asset is viewed in Reality Composer Pro or with QuickLook on iOS, its effective orientation remains CCW. In other words, the triangle faces the opposite direction.
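For reference, the determinant check described in the quoted passage can be applied directly to a RealityKit entity; a minimal sketch:
import RealityKit
import simd

// Sketch: apply the OpenUSD rule quoted above to a RealityKit entity. If the
// determinant of the linear part of the local-to-world matrix is negative,
// the effective winding order (and orientation) of the geometry is flipped.
func orientationIsFlipped(_ entity: Entity) -> Bool {
    let m = entity.transformMatrix(relativeTo: nil)
    let x = SIMD3(m.columns.0.x, m.columns.0.y, m.columns.0.z)
    let y = SIMD3(m.columns.1.x, m.columns.1.y, m.columns.1.z)
    let z = SIMD3(m.columns.2.x, m.columns.2.y, m.columns.2.z)
    // The determinant of the 3x3 linear part is the scalar triple product of its columns.
    return simd_dot(simd_cross(x, y), z) < 0
}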
My questions for the community and Apple are:
Is this behavior in RealityKit a known issue?
If this is a known issue, is there official guidance for DCC tools on how to export USDZ assets to ensure they appear correctly in the Apple ecosystem?
Any insights or recommendations would be greatly appreciated.
Hi,
I'm rewriting my game from SceneKit to RealityKit, and I'm having trouble implementing the following scenario:
I tap on the iPhone screen to select an Entity that I want to drag.
If an Entity was tapped, it should then be possible to drag it left, right, etc.
SceneKit solution:
func CGPointToSCNVector3(_ view: SCNView, depth: Float, point: CGPoint) -> SCNVector3 {
    let projectedOrigin = view.projectPoint(SCNVector3Make(0, 0, Float(depth)))
    let locationWithz = SCNVector3Make(Float(point.x), Float(point.y), Float(projectedOrigin.z))
    return view.unprojectPoint(locationWithz)
}
and then I was calling:
SCNView().hitTest(location, options: [SCNHitTestOption.firstFoundOnly:true])
the code was called inside of the UIPanGestureRecognizer in my UIViewController.
Could I reuse that code, or should I go with the SwiftUI approach? Something like this:
var body: some View {
    RealityView {
        ....
    }
    .gesture(TapGesture().onEnded {
    })
}
I already have this code:
@State private var location: CGPoint?
.onTapGesture { location in
    self.location = location
}
I'm trying to identify the entity that was tapped within the RealityView like that:
RealityView { content in
    let box: ModelEntity = createBox() // for now there is only one box, however there will be many boxes
    content.add(box)

    let anchor = AnchorEntity(world: [0, 0, 0])
    content.add(anchor)

    _ = content.subscribe(to: SceneEvents.Update.self) { event in
        // TODO: find the tapped entity so that it can be dragged inside of the DragGesture()
    }
}
Any help would be appreciated.
I also noticed that if I create a TapGesture like that:
TapGesture(count: 1)
.targetedToAnyEntity()
and add it to my view using .gesture() then it is not triggered.
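On that last point, one thing to check (a sketch reusing the createBox() from the snippet above): gestures modified with .targetedToAnyEntity() only hit entities that carry both a CollisionComponent and an InputTargetComponent, which may be why the TapGesture never fires.
// Sketch: give the box the components that SwiftUI entity gestures require.
RealityView { content in
    let box = createBox()
    box.generateCollisionShapes(recursive: true)   // adds a CollisionComponent
    box.components.set(InputTargetComponent())     // opts the entity into input handling
    content.add(box)
}
.gesture(
    TapGesture()
        .targetedToAnyEntity()
        .onEnded { value in
            print("tapped:", value.entity.name)
        }
)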
I have 2 planes with textures on. I want these planes to intersect [ –|– ], and I want the blend mode to be additive. Currently I get z fighting on the planes, and I can't see how to set blend modes.
I've done this before in Unity and Godot in a fairly straightforward manner.
How do I accomplish this with RealityKit, preferably using code only (my scene is quite dynamic)?
Do I need to do it with a shader manually? How can I stop the z fighting?
I'm developing a 3D scanner that works on iPad.
I'm using AVCapturePhoto and Photogrammetry Session.
My photoCaptureDelegate looks like this:
extension PhotoCaptureDelegate: AVCapturePhotoCaptureDelegate {
    func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
        let fileUrl = CameraViewModel.instance.imageDir!.appendingPathComponent("\(PhotoCaptureDelegate.name)\(id).heic")
        let img = CIImage(cvPixelBuffer: photo.pixelBuffer!, options: [.auxiliaryDepth: true, .properties: photo.metadata])
        let depthData = photo.depthData!.converting(toDepthDataType: kCVPixelFormatType_DepthFloat32)
        let colorSpace = CGColorSpace(name: CGColorSpace.sRGB)
        let fileData = CIContext().heifRepresentation(of: img, format: .RGBA8, colorSpace: colorSpace!, options: [.avDepthData: depthData])
        try? fileData!.write(to: fileUrl, options: .atomic)
    }
}
But the Photogrammetry session spits out warning messages:
Sample 0 missing LiDAR point cloud!
Sample 1 missing LiDAR point cloud!
Sample 2 missing LiDAR point cloud!
Sample 3 missing LiDAR point cloud!
Sample 4 missing LiDAR point cloud!
Sample 5 missing LiDAR point cloud!
Sample 6 missing LiDAR point cloud!
Sample 7 missing LiDAR point cloud!
Sample 8 missing LiDAR point cloud!
Sample 9 missing LiDAR point cloud!
Sample 10 missing LiDAR point cloud!
The session creates a USDZ 3D model, but the scale is not correct.
I think the point cloud could help the Photogrammetry session find the right scale, but I don't know how to attach the point cloud.
I'm developing a prototype Vision Pro app and would like to render a 3D scene made in Reality Composer Pro on an image anchor in a RealityView. But I have had no luck making it work so far and need some guidance to move on.
I have the image file stored in the assets like below:
And below is the source code:
import SwiftUI
import RealityKit
import RealityKitContent

struct AnchorView: View {
    @State var imageEntity: Entity = {
        let anchorEntity = AnchorEntity(.image(group: "AR Resources", name: "reanchor"))
        return anchorEntity
    }()

    var body: some View {
        RealityView { content in
            do {
                // Add the initial RealityKit content
                // (use `try` rather than `try?` so the catch block below is reachable)
                let scene = try await Entity(named: "Scene", in: realityKitContentBundle)
                imageEntity.addChild(scene)
                content.add(imageEntity)
            } catch {
                print("Error occurs when adding reality view content: \(error)")
            }
        }
    }
}
Hi, I have set up an animation using a timeline in Reality Composer Pro like below:
It gets triggered by a notification posted from the code. Once the timeline is triggered, I want to repeat the 2 animations in it until the user takes the next action. How can I make them repeat forever?
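One possible direction, offered only as a sketch (I am not certain a Reality Composer Pro timeline is exposed this way): if the timeline shows up in the scene entity's availableAnimations, it can be replayed with .repeat() and stopped when the user acts.
// Sketch: replay a loaded animation indefinitely and stop it on the next user action.
// Assumes `sceneEntity` is the Reality Composer Pro scene containing the timeline.
if let timeline = sceneEntity.availableAnimations.first {
    let controller = sceneEntity.playAnimation(timeline.repeat())
    // Later, when the user takes the next action:
    // controller.stop()
}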
I have two apps released -- ReefScan and ReefBuild -- that are based on the WWDC21 sample photogrammetry apps for iOS and macOS. Those run fine without LiDAR and are used mostly for underwater models, where LiDAR does not work at all. It now appears that the updated photogrammetry session requires LiDAR data, and building my app on current Xcode results in a non-working app. Has the "old" version of the photogrammetry session been broken by this update? It worked very well previously, so I would hate to see a regression to needing LiDAR. Most of my users do not have it.
Hi
Hopefully someone can share some ideas on how to accomplish this.
I know we can load models from realityKitContentBundle like
let model = try? await Entity(named: "testModel", in: realityKitContentBundle)
But this only works from the root of RealityKitContent.rkassets; if I have the models in some subfolder, then I have to add the complete path, like
let model = try? await Entity(named: "/superModels/testModel", in: realityKitContentBundle)
What I want is to be able to search recursively in all folders for that file as I have several subfolders with different models.
Any suggestions?
Thanks in advance.
Guillermo
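A brute-force fallback for the question above, as a sketch (the folder names here are placeholders): try each known subfolder prefix until a load succeeds.
// Sketch: try a list of known subfolders inside RealityKitContent.rkassets.
// The folder names are placeholders.
func loadModel(named name: String) async -> Entity? {
    let folders = ["", "superModels/", "props/", "characters/"]
    for folder in folders {
        if let entity = try? await Entity(named: folder + name, in: realityKitContentBundle) {
            return entity
        }
    }
    return nil
}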
When importing FBX animations (generated by Cinema 4D or Blender), the models come in very far away and cannot be resized or zoomed in on. I have tried every setting from both programs to no avail. Is there a secret to providing the right export options? When importing without animations/rigging, the model imports fine and at the correct size. But once motion is included, something is awry. I also tried changing the base units in Converter, but that did not work. I have attached my model hierarchy in C4D as well as the imported result. It appears the animation is imported, as I can see it move, but I can barely see it :)
I can't create any breakpoints in Xcode after I upgraded to macOS 15.4.
macOS: Version 15.4 (24E248)
visionOS Simulator: 2.3
Xcode: Version 16.2 (16C5032a)
My app works well without any breakpoints.
But if I create any breakpoint it shows me this:
Couldn't find the Objective-C runtime library in loaded images.
Message from debugger: The LLDB RPC server has crashed. You may need to manually terminate your process. The crash log is located in ~/Library/Logs/DiagnosticReports and has a prefix 'lldb-rpc-server'. Please file a bug and attach the most recent crash log.
Hi, following the recent deprecation of SceneKit, I'm trying to move a couple of my SceneKit projects to RealityKit.
One thing I can't seem to find is how to change the content scale factor when using a RealityView in SwiftUI. It was really easy to do in SceneKit with just an SCNView property, and it seems it's also possible when using ARView, but I can't find a way to do it with a RealityView. Maybe it's a SwiftUI limitation?