RealityKit


Simulate and render 3D content for use in your augmented reality apps using RealityKit.

Posts under RealityKit tag

165 Posts

Post | Replies | Boosts | Views | Activity

Unexpected Behavior in Entity Movement System When Using AVAudioPlayer in visionOS Development
I am currently developing an app for visionOS and have encountered an issue involving a component and system that moves an entity up and down within a specific Y-axis range. The system works as expected until I introduce sound playback using AVAudioPlayer. Whenever I use AVAudioPlayer to play sound, the entity exhibits unexpected behaviors, such as freezing or becoming unresponsive. The freezing of the entity's movement is particularly noticeable when playing the audio for the first time. After that, it becomes less noticeable, but you can still feel it, especially when the audio is played in quick succession. The issue is also more noticeable on a real device than in the simulator.

// IssueApp.swift — Issue — Created by Zhendong Chen on 2/1/25.
import SwiftUI

@main
struct IssueApp: App {
    var body: some Scene {
        WindowGroup {
            ContentView()
        }
        .windowStyle(.volumetric)
    }
}

// ContentView.swift — Issue — Created by Zhendong Chen on 2/1/25.
import SwiftUI
import RealityKit
import RealityKitContent

struct ContentView: View {
    @State var enlarge = false

    var body: some View {
        RealityView { content, attachments in
            // Add the initial RealityKit content
            if let scene = try? await Entity(named: "Scene", in: realityKitContentBundle) {
                if let sphere = scene.findEntity(named: "Sphere") {
                    sphere.components.set(UpAndDownComponent(speed: 0.03, minY: -0.05, maxY: 0.05))
                }
                if let button = attachments.entity(for: "Button") {
                    button.position.y -= 0.3
                    scene.addChild(button)
                }
                content.add(scene)
            }
        } attachments: {
            Attachment(id: "Button") {
                VStack {
                    Button {
                        SoundManager.instance.playSound(filePath: "apple_en")
                    } label: {
                        Text("Play audio")
                    }
                    .animation(.none, value: 0)
                    .fontWeight(.semibold)
                }
                .padding()
                .glassBackgroundEffect()
            }
        }
        .onAppear {
            UpAndDownSystem.registerSystem()
        }
    }
}

// SoundManager.swift — LinguaBubble — Created by Zhendong Chen on 1/14/25.
import Foundation
import AVFoundation

class SoundManager {
    static let instance = SoundManager()
    private var audioPlayer: AVAudioPlayer?

    func playSound(filePath: String) {
        guard let url = Bundle.main.url(forResource: filePath, withExtension: ".mp3") else { return }
        do {
            audioPlayer = try AVAudioPlayer(contentsOf: url)
            audioPlayer?.play()
        } catch let error {
            print("Error playing sound. \(error.localizedDescription)")
        }
    }
}

// UpAndDownComponent+System.swift — Issue — Created by Zhendong Chen on 2/1/25.
import RealityKit

struct UpAndDownComponent: Component {
    var speed: Float
    var axis: SIMD3<Float>
    var minY: Float
    var maxY: Float
    var direction: Float = 1.0 // 1 for up, -1 for down
    var initialY: Float?

    init(speed: Float = 1.0, axis: SIMD3<Float> = [0, 1, 0], minY: Float = 0.0, maxY: Float = 1.0) {
        self.speed = speed
        self.axis = axis
        self.minY = minY
        self.maxY = maxY
    }
}

struct UpAndDownSystem: System {
    static let query = EntityQuery(where: .has(UpAndDownComponent.self))

    init(scene: RealityKit.Scene) {}

    func update(context: SceneUpdateContext) {
        let deltaTime = Float(context.deltaTime) // Time between frames

        for entity in context.entities(matching: Self.query, updatingSystemWhen: .rendering) {
            guard var component: UpAndDownComponent = entity.components[UpAndDownComponent.self] else { continue }

            // Ensure we have the initial Y value set
            if component.initialY == nil {
                component.initialY = entity.transform.translation.y
            }

            // Calculate the current position
            let currentY = entity.transform.translation.y

            // Move the entity up or down
            let newY = currentY + (component.speed * component.direction * deltaTime)

            // If the entity moves out of the allowed range, reverse the direction
            if newY >= component.initialY! + component.maxY {
                component.direction = -1.0 // Move down
            } else if newY <= component.initialY! + component.minY {
                component.direction = 1.0 // Move up
            }

            // Apply the new position
            entity.transform.translation = SIMD3<Float>(entity.transform.translation.x, newY, entity.transform.translation.z)

            // Update the component with the new direction
            entity.components[UpAndDownComponent.self] = component
        }
    }
}

Could someone help me with this?
2
0
334
Feb ’25
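A possible direction for the stall described in the post above, offered as a hedged sketch rather than a confirmed fix: AVAudioPlayer(contentsOf:) decodes the file synchronously on whatever thread calls it, and doing that from a button action for the first time lines up with the one-time hitch seen in the entity-movement system. Preloading the player and calling prepareToPlay() before the button is tapped keeps the decode off the render path. PreloadedSoundManager and preload(_:) are illustrative names, not APIs from the post.

import AVFoundation

final class PreloadedSoundManager {
    static let instance = PreloadedSoundManager()
    private var players: [String: AVAudioPlayer] = [:]

    // Call once up front (e.g. from .onAppear or .task) so the first tap
    // doesn't pay the MP3 decode cost while the RealityKit system is updating.
    func preload(_ name: String) {
        guard players[name] == nil,
              let url = Bundle.main.url(forResource: name, withExtension: "mp3") else { return }
        let player = try? AVAudioPlayer(contentsOf: url)
        player?.prepareToPlay() // decode now, not on the first play()
        players[name] = player
    }

    func play(_ name: String) {
        guard let player = players[name] else { return }
        player.currentTime = 0
        player.play()
    }
}

If the hitch persists even with a prepared player, RealityKit's own audio path (AudioFileResource plus entity.playAudio(_:)) is another option worth trying, since it keeps playback inside the scene's update machinery.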
Place a box on a wall or on the floor
Hi, I wanted to do something quite simple: put a box on a wall or on the floor.

My box:

let myBox = ModelEntity(
    mesh: .generateBox(size: SIMD3<Float>(0.1, 0.1, 0.01)),
    materials: [SimpleMaterial(color: .systemRed, isMetallic: false)],
    collisionShape: .generateBox(size: SIMD3<Float>(0.1, 0.1, 0.01)),
    mass: 0.0)

For that I used plane detection to identify the walls and floor in the room. Then with SpatialTapGesture I was able to retrieve the position where the user is looking and taps:

let position = value.convert(value.location3D, from: .local, to: .scene)

And then I positioned my box:

myBox.setPosition(position, relativeTo: nil)

When I tested it I realized that the box was not parallel to the wall but sat at a slightly inclined angle. I also realized that if I tried to put my box on the wall to my left, the box was placed perpendicular to that wall instead of flat against it. After various searches and several attempts I ended up playing with transform.matrix to identify whether the plane is a wall or the floor, and whether it is in front of me or to the side, and then set a rotation on the box to "place" it on the wall or floor:

let surfaceTransform = surface.transform.matrix
let surfaceNormal = normalize(surfaceTransform.columns.2.xyz)
let baseRotation = simd_quatf(angle: .pi, axis: SIMD3<Float>(0, 1, 0))
var finalRotation: simd_quatf
if acos(abs(dot(surfaceNormal, SIMD3<Float>(0, 1, 0)))) < 0.3 {
    logger.info("Surface: ceiling/floor")
    finalRotation = simd_quatf(angle: surfaceNormal.y > 0 ? 0 : .pi, axis: SIMD3<Float>(1, 0, 0))
} else if abs(surfaceNormal.x) > abs(surfaceNormal.z) {
    logger.info("Surface: left/right")
    finalRotation = simd_quatf(angle: surfaceNormal.x > 0 ? .pi/2 : -.pi/2, axis: SIMD3<Float>(0, 1, 0))
} else {
    logger.info("Surface: front/back")
    finalRotation = baseRotation
}

Playing with matrices is not really my thing, so I don't know if I'm doing it right. Could you tell me if my tests for the orientation of the walls are correct? During my tests I don't always correctly identify whether the wall is in front or to the side. Is this generally the right way to do it? Is there an easier way to do this?

Regards
Tof
0
0
408
Feb ’25
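One lower-math alternative for the post above, sketched under an assumption rather than verified: take the orientation straight from the detected plane anchor's transform instead of classifying walls by the columns of an entity transform. This assumes visionOS ARKit plane detection (PlaneAnchor from a PlaneDetectionProvider) and that the plane's normal is the anchor's local Y axis, as it is for ARPlaneAnchor on iOS; placeBox(on:at:in:) is a hypothetical helper.

import ARKit
import RealityKit

func placeBox(on plane: PlaneAnchor, at worldPosition: SIMD3<Float>, in root: Entity) {
    let box = ModelEntity(
        mesh: .generateBox(size: SIMD3<Float>(0.1, 0.1, 0.01)),
        materials: [SimpleMaterial(color: .systemRed, isMetallic: false)])

    // originFromAnchorTransform maps plane-local space into world space,
    // so its rotation already encodes the surface orientation.
    let worldFromAnchor = plane.originFromAnchorTransform
    var orientation = simd_quatf(worldFromAnchor)

    // The box is thin along its local Z axis, while the plane's normal is assumed
    // to be the anchor's local Y, so tilt the box -90 degrees about X to lie flat.
    orientation = orientation * simd_quatf(angle: -.pi / 2, axis: SIMD3<Float>(1, 0, 0))

    box.setPosition(worldPosition, relativeTo: nil)
    box.setOrientation(orientation, relativeTo: nil)
    root.addChild(box)
}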
Is there a way to scale a RealityKit ShapeResource?
I can generate a ShapeResource from a RealityKit entity's extents. Can I apply some scaling to the generated shape? Is there a way to do that?

// model is a ModelResource and bounds is a BoundingBox
var shape = ShapeResource.generateConvex(from: model.mesh)
shape = shape.offsetBy(translation: bounds.center)
// How can I scale the shape to fit within the bounds?

The following API only provides rotation and translation support, and I cannot find scale support:

offsetBy(rotation: simd_quatf = simd_quatf(ix: 0, iy: 0, iz: 0, r: 1), translation: SIMD3<Float> = SIMD3<Float>())

I can put the ShapeResource on an entity and scale the entity, but I would like to know whether it is possible to scale the ShapeResource itself without attaching it to an entity.
3
0
515
Feb ’25
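For the question above: I'm not aware of a public call that scales an existing ShapeResource in place, so the usual workaround is to scale the inputs before generating the shape. A hedged sketch follows, approximating the scaled convex hull with a box of the scaled extents; scaledCollisionShape(for:toFit:) is a hypothetical helper, and an exact result would require scaling the mesh vertices and re-running generateConvex.

import RealityKit

func scaledCollisionShape(for mesh: MeshResource, toFit bounds: BoundingBox) -> ShapeResource {
    let meshBounds = mesh.bounds
    // Uniform factor that makes the mesh extents fit inside the target bounds.
    let ratios = bounds.extents / meshBounds.extents
    let scale = min(ratios.x, ratios.y, ratios.z)

    // Box approximation of the scaled hull; cheap, and the cooking step cannot fail.
    var shape = ShapeResource.generateBox(size: meshBounds.extents * scale)
    shape = shape.offsetBy(translation: bounds.center)
    return shape
}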
Rendering Order with ModelSortGroup
I have a huge sphere where the camera stays inside the sphere, and I turn on front-face culling on the ShaderGraphMaterial applied to that sphere so that I can place other 3D content inside it. However, when it comes to attachments, object occlusion never works the way I expect: my attachments are occluded by the sphere (some are not, so the behavior is not deterministic). I suspected a depth-testing issue, so I started using ModelSortGroup to reorder the rendering sequence, but it doesn't work. Searching the internet, a forum post's comments suggest that ModelSortGroup simply doesn't work on attachments. How should I tackle this issue so that my attachments appear inside my sphere? OS: visionOS 2.3, Xcode 16.3
1
0
475
Feb ’25
Object Capture app getting rejected for not supporting non-Pro models
Hi, I created an app using the iOS Object Capture API, which works only on LiDAR-enabled phones; that is a limitation of the API provided by Apple itself. I submitted the app for review, but it is getting rejected (twice) for not working on non-Pro models. Even though I explained that capturing needs LiDAR and is supported only on Pro models, it still gets rejected after testing on non-Pro models. Is there a way out?
1
0
331
Feb ’25
What causes »ARSessionDelegate is retaining X ARFrames« console warning?
Hi, since iOS 15 I've repeatedly noticed the console warning »ARSessionDelegate is retaining X ARFrames. This can lead to future camera frames being dropped« even for rather simple projects using RealityKit and ARKit. Could someone from the ARKit team please elaborate what causes this warning and what can be done to avoid it? If I remember correctly I didn't even assign an ARSessionDelegate. Thank you!
4
1
3.3k
Feb ’25
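Not an answer from the ARKit team, but the pattern behind this warning is well known enough to sketch: ARKit hands out frames from a small fixed pool of capture buffers, so any code path that keeps an ARFrame (or its capturedImage) alive past the delegate callback eventually starves the camera and drops frames. The sketch below shows the usual shape of a delegate that avoids it; lastCameraTransform is an illustrative property name.

import ARKit
import simd

final class FrameHandler: NSObject, ARSessionDelegate {
    // Anti-pattern: `var lastFrame: ARFrame?` would retain the capture buffer
    // and is exactly what produces the "retaining X ARFrames" warning.

    private(set) var lastCameraTransform = matrix_identity_float4x4

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        // Copy the lightweight values you need; let the frame deallocate immediately.
        lastCameraTransform = frame.camera.transform

        // If the pixel data is needed, process or copy it synchronously here rather
        // than capturing `frame` in a closure dispatched to another queue.
    }
}

When no delegate is assigned, RealityKit's ARView still consumes frames internally, so long stalls on the main thread can surface the same message, which may explain seeing it without setting an ARSessionDelegate yourself.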
Equipment collision issue in TabletopKit
Hey! I'm facing an issue with equipment collision when adding and moving TabletopKit equipment with different pose rotations. Let me share a very simple TabletopKit setup as an example:

Table

struct Table: Tabletop {
    var shape: TabletopShape = .rectangular(width: 1, height: 1, thickness: 0.01)
    var id: EquipmentIdentifier = .tableID
}

Board

struct Board: Equipment {
    let id: EquipmentIdentifier = .boardID
    var initialState: BaseEquipmentState {
        .init(
            parentID: .tableID,
            seatControl: .restricted([]),
            pose: .init(position: .init(), rotation: .zero),
            boundingBox: .init(center: .zero, size: .init(1.0, 0, 1.0))
        )
    }
}

Equipment

struct Object: EntityEquipment {
    var id: ID
    var size: SIMD2<Float>
    var position: SIMD2<Double>
    var rotation: Float
    var entity: Entity
    var initialState: BaseEquipmentState

    init(id: Int, size: SIMD2<Float>, position: SIMD2<Double>, rotation: Float) {
        self.id = EquipmentIdentifier(id)
        self.size = size
        self.position = position
        self.rotation = rotation
        self.entity = objectEntity
        self.initialState = .init(
            parentID: .boardID,
            seatControl: .any,
            pose: .init(
                position: .init(x: position.x, z: position.y),
                rotation: .degrees(Double(rotation))
            ),
            entity: entity
        )
    }
}

Setup

class GameSetup {
    var setup: TableSetup

    init(root: Entity) {
        setup = TableSetup(tabletop: Table())
        setup.add(equipment: Board())
        setup.add(seat: PlayerSeat())

        let object1 = Object(
            id: 2,
            size: .init(x: 0.1, y: 0.1),
            position: .init(x: 0.1, y: -0.1),
            rotation: 0
        )
        let object2 = Object(
            id: 3,
            size: .init(x: 0.2, y: 0.1),
            position: .init(x: -0.1, y: -0.1),
            rotation: 90
        )
        setup.add(equipment: object1)
        setup.add(equipment: object2)
    }
}

The issue

When I add two equipment entities with different rotation poses, the collisions between them behave oddly. If one is at 90º and the other at 0º, for example, the former intersects the latter as if its bounding box had not been rotated. But if both pieces of equipment have the same rotation (e.g. both 0º or both 90º), there is no collision issue at all, which seems to indicate their bounding boxes were correctly rotated.

I'd really appreciate some help understanding whether this is a bug or I'm just missing something. Thanks in advance!
3
0
808
Feb ’25
Automatic Plane Measurements like in the Apple Measure App
I’m working on an iOS app that needs to measure the area of planes or surfaces (for example, the length and width of objects), just like the Apple Measure app does. I’ve been exploring ARKit, but I’m curious whether there are any APIs or techniques that can help automate the process of detecting and measuring planes. Specifically, I’m looking for a way to automatically detect and measure planes (e.g., from a top-down view), such as measuring a box's width and length. I have attached a screenshot and a video of the Apple Measure app doing this. Does Apple provide any tools or APIs for this, or are there any best practices I should know about? I’d love to hear from anyone who’s tackled something similar. Video: https://drive.google.com/file/d/1BxM7fIbFxsCsYwY7w8ZxIeq_4WTGkkwA/view?usp=drive_link
2
0
686
Jan ’25
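A partial-answer sketch for the post above: Apple doesn't expose the Measure app's automatic rectangle detection directly, but ARKit's plane detection does report the fitted dimensions of each detected plane, which covers top-down measurement of flat surfaces. The sketch assumes iOS 16+ for ARPlaneAnchor.planeExtent; measuring an arbitrary box's full footprint would still need extra logic (for example RoomPlan, or your own bounding of the reconstructed mesh).

import ARKit

final class PlaneMeasurer: NSObject, ARSessionDelegate {
    func run(on session: ARSession) {
        let config = ARWorldTrackingConfiguration()
        config.planeDetection = [.horizontal, .vertical]
        session.delegate = self
        session.run(config)
    }

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        for case let plane as ARPlaneAnchor in anchors {
            // planeExtent reports the fitted plane's size in meters.
            let width = plane.planeExtent.width
            let length = plane.planeExtent.height
            print("Plane \(plane.identifier): \(width) m x \(length) m (\(plane.classification))")
        }
    }
}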
generateConvex(from mesh: MeshResource) crashes instead of throwing a Swift error
I have a MeshResource and I would like to create a collision component from it.

let childBounds = child.visualBounds(relativeTo: self)
var childShape: ShapeResource
do {
    // Crashes on the following line instead of throwing a Swift error
    childShape = try await ShapeResource.generateConvex(from: childModel.mesh)
} catch {
    childShape = ShapeResource.generateBox(size: childBounds.extents)
    childShape = childShape.offsetBy(translation: childBounds.center)
}

Based on this documentation: https://developer.apple.com/documentation/realitykit/shaperesource/generateconvex(from:)-6upj9

"Will throw an error if mesh does not define a nonempty convex volume. For example, will fail if all the vertices in mesh are coplanar."

But the method crashes the app instead of throwing a Swift error:

Incident Identifier: 35CD58F8-FFE3-48EA-85D3-6D241D8B0B4C
CrashReporter Key: FE6790CA-6481-BEFD-CB26-F4E27652BEAE
Hardware Model: Mac15,11
...
Version: 1.0 (1)
Code Type: ARM-64 (Native)
Role: Foreground
Parent Process: launchd_sim [2057]
Coalition: com.apple.CoreSimulator.SimDevice.85A2B8FA-689F-4237-B4E8-DDB93460F7F6 [1496]
Responsible Process: SimulatorTrampoline [910]
Date/Time: 2025-01-26 16:13:17.5053 +0800
Launch Time: 2025-01-26 16:13:09.5755 +0800
OS Version: macOS 15.2 (24C101)
Release Type: User
Report Version: 104
Exception Type: EXC_BREAKPOINT (SIGTRAP)
Exception Codes: 0x0000000000000001, 0x00000001abf841d0
Termination Reason: SIGNAL 5 Trace/BPT trap: 5
Terminating Process: exc handler [17316]
Triggered by Thread: 0

Thread 0 Crashed:
0  CoreRE             0x1abf841d0 REAssetManagerCollisionShapeAssetCreateConvexPolyhedron + 232
1  CoreRE             0x1abf845f0 REAssetManagerCollisionShapeAssetCreateConvexPolyhedronFromMesh + 868
2  RealityFoundation  0x1d25613bc static ShapeResource.generateConvex(from:) + 148

Here is the message on the app console from Xcode:

/Library/Caches/com.apple.xbs/Sources/REKit_Sim/ThirdParty/PhysX/physx/source/physxcooking/src/convex/QuickHullConvexHullLib.cpp (935) : internal error : QuickHullConvexHullLib::findSimplex: Simplex input points appers to be coplanar.
Failed to cook convex mesh (0x3)
assertion failure: 'convexPolyhedronShape != nullptr' (REAssetManagerCollisionShapeAssetCreateConvexPolyhedron:line 356)
Bad parameters passed for convex mesh creation.

The above crash happened on a visionOS simulator (visionOS 2.2 (22N840)).
1
0
408
Jan ’25
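Until the crash above is addressed in the framework, a hedged defensive sketch: detect degenerate (flat) meshes before calling generateConvex, since a near-zero bounding-box dimension is exactly the coplanar case the documentation says should throw. safeConvexShape(for:fallbackBounds:) is a hypothetical helper; this only catches axis-aligned flat meshes, so the do/catch stays in place for when the call behaves as documented.

import RealityKit

func safeConvexShape(for mesh: MeshResource, fallbackBounds: BoundingBox) async -> ShapeResource {
    let extents = mesh.bounds.extents
    let epsilon: Float = 1e-5

    // All vertices (nearly) coplanar means some extent is (nearly) zero,
    // which is the case PhysX convex cooking fails on.
    guard extents.x > epsilon, extents.y > epsilon, extents.z > epsilon else {
        var shape = ShapeResource.generateBox(size: fallbackBounds.extents)
        shape = shape.offsetBy(translation: fallbackBounds.center)
        return shape
    }

    do {
        return try await ShapeResource.generateConvex(from: mesh)
    } catch {
        var shape = ShapeResource.generateBox(size: fallbackBounds.extents)
        shape = shape.offsetBy(translation: fallbackBounds.center)
        return shape
    }
}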
How to find the camera transform (or view matrix) in world coordinates from a camera frame
I'm trying to implement a prototype that renders virtual objects in a mixed immersive space on the camera frames captured by CameraFrameProvider. Here is what I have done:

Get the camera's intrinsics from frame.primarySample.parameters.intrinsics
Get the camera's extrinsics from frame.primarySample.parameters.extrinsics
Get the device anchor with worldTrackingProvider.queryDeviceAnchor(atTimestamp: CACurrentMediaTime())
Set up a RealityKit.RealityRenderer to render virtual objects on the captured camera frames:

let realityRenderer = try RealityKit.RealityRenderer()
realityRenderer.cameraSettings.colorBackground = .outputTexture()
let cameraEntity = PerspectiveCamera()
// see https://developer.apple.com/forums/thread/770235
let cameraTransform = deviceAnchor.originFromAnchorTransform * extrinsics.inverse
cameraEntity.setTransformMatrix(cameraTransform, relativeTo: nil)
cameraEntity.camera.near = 0.01
cameraEntity.camera.far = 100
cameraEntity.camera.fieldOfViewOrientation = .horizontal
// manually calculated based on camera intrinsics
cameraEntity.camera.fieldOfViewInDegrees = 105
realityRenderer.entities.append(cameraEntity)
realityRenderer.activeCamera = cameraEntity

Virtual objects, which should be visible in the camera frames, are clipped out by this camera transform. If I use deviceAnchor.originFromAnchorTransform as the camera transform, virtual objects are rendered on the camera frames but at the wrong positions (I think this is because the camera extrinsics aren't used to adjust the camera to the correct position).

My question is how to use the camera extrinsic matrix for this purpose. Do the camera extrinsics point in a similar orientation to the device anchor, with some minor rotation and position change? Here is an extrinsics matrix from a camera frame. It seems that the directions of the Y axis and Z axis are flipped by the extrinsics, so the camera points in the wrong direction.

simd_float4x4([[0.9914258, 0.012555369, -0.13006608, 0.0],    // X axis
               [-0.0009778949, -0.9946325, -0.10346654, 0.0], // Y axis
               [-0.13066702, 0.10270659, -0.98609203, 0.0],   // Z axis
               [0.024519, -0.019568002, -0.058280986, 1.0]])  // translation
3
0
753
Jan ’25
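A speculative sketch for the post above, based purely on the signs in the sample matrix rather than documented behavior: the extrinsics look like a computer-vision style camera frame (+Y down, +Z forward), while a RealityKit PerspectiveCamera looks along -Z with +Y up. If that reading is right, inserting a pi rotation about X when composing the transform would reorient the camera. realityKitCameraTransform(worldFromDevice:extrinsics:) is a hypothetical helper, and the convention itself is the assumption to verify.

import simd

func realityKitCameraTransform(worldFromDevice: simd_float4x4,
                               extrinsics: simd_float4x4) -> simd_float4x4 {
    // Rotate pi about X: maps a (+Y down, +Z forward) camera frame to RealityKit's
    // (+Y up, -Z forward) camera frame.
    let cvToRealityKit = simd_float4x4(simd_quatf(angle: .pi, axis: SIMD3<Float>(1, 0, 0)))

    // Assumes extrinsics maps device space to camera space, so its inverse maps
    // camera space back to device space; the anchor then maps device space to world.
    return worldFromDevice * extrinsics.inverse * cvToRealityKit
}

// Usage with the names from the post (untested assumption):
// let cameraTransform = realityKitCameraTransform(
//     worldFromDevice: deviceAnchor.originFromAnchorTransform,
//     extrinsics: frame.primarySample.parameters.extrinsics)
// cameraEntity.setTransformMatrix(cameraTransform, relativeTo: nil)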
USDZ with Blend Shapes Workflow Recommendations
Hi, since RealityKit 4 now supports blend shapes, I was wondering if there are any workflow or tooling recommendations for baking/exporting them into a USDZ. Are Blender or Cinema 4D capable of doing that out of the box, or should we look into NVIDIA Omniverse (https://docs.omniverse.nvidia.com/connect/latest/blender/manual.htm)? So far this topic seems very sparsely documented and I would appreciate any hints. Thank you!
3
0
1.3k
Jan ’25
RealityKit Entities Appearing to Lag in a Full or Progressive Style Immersive Space When Opened with Environment Turned On
PLATFORM AND VERSION
visionOS
Development environment: Xcode 16.2, macOS 15.2
Run-time configuration: visionOS 2.3 (on a real device, not the simulator)

Please, someone confirm I'm not crazy and this issue is actually out of my control. I spent hours trying to fix my app and running profiles because I thought it was an issue with my app's performance. I finally considered the chance that it was an issue with the API itself and made a sample app to isolate the problem, and the issue still exists there.

The issue: when a model entity moves around in a Full space that was launched while the system environment immersion was turned up, the entity looks very choppy as it moves around. If you take off the headset while still in the space and put it back on, this fixes it and the entity moves smoothly as it should. Alternatively, you can leave the space and turn the system environment immersion all the way down before launching the Full space again; this also makes the entity move smoothly. If you launch a Mixed immersion style instead of a Full immersion style, this issue never arises. The issue only arises if you launch the space with either a Full or Progressive style while the system immersion level is turned up.

STEPS TO REPRODUCE
https://github.com/nathan-707/ChoppyEntitySample
Open my test project; it's a small, modified visionOS project template that shows the issue clearly. Otherwise:

Create an immersive space with either the Full or Progressive immersion style.
Set up an entity in kinematic mode and apply a velocity to it so it passes over your head when the space appears.
If you opened the space while the Apple Vision Pro's system environment was turned up, the entity will look choppy.
If you take the headset off while in the same space and put it back on, it will fix the issue and the movement will look smooth.
Alternatively, if you open the space with the system immersion environment all the way down, you will not run into the issue.
Again, the issue also does not happen if the space launched is in the Mixed style.
1
0
527
Jan ’25
RealityView Not Refreshing With SwiftData
Hi, I am trying to update which entities are visible in my RealityView. After the SwiftData set is updated, I have to restart the app for the change to appear in the RealityView. Also, the RealityView does not close when I move to a different tab; it keeps everything on and tracking, leaving the model in the same location I left it.

import SwiftUI
import RealityKit
import MountainLake
import SwiftData

struct RealityLakeView: View {
    @Environment(\.modelContext) private var context
    @Query private var items: [Item]

    var body: some View {
        RealityView { content in
            print("View Loaded")
            let lakeScene = try? await Entity(named: "Lake", in: mountainLakeBundle)
            let anchor = AnchorEntity(.plane(.horizontal, classification: .any, minimumBounds: SIMD2<Float>(0.2, 0.2)))

            @MainActor func addEntity(name: String) {
                if let lakeEntity = lakeScene?.findEntity(named: name) {
                    // Add the entity to the RealityView
                    anchor.addChild(lakeEntity)
                } else {
                    print(name + " entity not found in the Lake scene.")
                }
            }

            addEntity(name: "Island")
            for item in items {
                if item.enabled {
                    addEntity(name: item.value)
                }
            }

            // Add the horizontal plane anchor to the scene
            content.add(anchor)
            content.camera = .spatialTracking
        } placeholder: {
            ProgressView()
        }
        .edgesIgnoringSafeArea(.all)
    }
}

#Preview {
    RealityLakeView()
}
3
0
497
Jan ’25
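A hedged sketch of the usual fix for the post above: RealityView's make closure runs only once, so changes to the @Query results have to be applied in the update: closure, which SwiftUI re-runs whenever observed state changes. The entity and bundle names below come from the post; the enable/disable logic is illustrative and only toggles entities that were loaded up front (newly inserted items would still need to be added inside update:).

RealityView { content in
    // Runs once: load the scene and add every candidate entity.
    let lakeScene = try? await Entity(named: "Lake", in: mountainLakeBundle)
    let anchor = AnchorEntity(.plane(.horizontal, classification: .any,
                                     minimumBounds: SIMD2<Float>(0.2, 0.2)))
    anchor.name = "LakeAnchor"
    for name in ["Island"] + items.map(\.value) {
        if let entity = lakeScene?.findEntity(named: name) {
            anchor.addChild(entity)
        }
    }
    content.add(anchor)
    content.camera = .spatialTracking
} update: { content in
    // Re-runs when `items` changes: drive visibility from the SwiftData models.
    guard let anchor = content.entities.first(where: { $0.name == "LakeAnchor" }) else { return }
    for item in items {
        anchor.findEntity(named: item.value)?.isEnabled = item.enabled
    }
}

On the second symptom: the RealityView keeps running as long as it stays in the view hierarchy, so removing it (for example with an if on the selected tab) when its tab is inactive is the usual way to stop tracking.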
Combining ARKit Face Tracking with High-Resolution AVCapture and Perspective Rendering on Front Camera
Hello Apple Developer Community,

We're developing an application using the front camera that requires both real-time ARKit face tracking/guidance and the capture of high-resolution still images via AVCaptureSession. Our goal is to leverage ARKit's depth and face data to render a captured image from another perspective post-capture, maintaining high image quality.

Our approach:

Real-time ARKit guidance: use ARKit (e.g., ARFaceTrackingConfiguration) for continuous face tracking, depth, and scene understanding to guide the user in real time.

High-resolution capture transition: at the moment of capture, we plan to pause the ARKit session and switch to an AVCaptureSession to take a high-resolution image. We assume that for a front-facing image the subject's face is directly front-on and the relative pose between the face and camera remains the same during the transition; the only variation we expect is a change in distance. Our intention is to minimize the delay between the last ARKit frame and the high-res capture to maintain temporal consistency.

Post-processing perspective rendering: using the last ARKit face data (depth, pose, and landmarks) along with the high-resolution 2D image, we aim to render the scene from another perspective. We want to correct the perspective of the 2D image using SceneKit or RealityKit, leveraging the collected ARKit scene information to achieve a natural, high-quality rendering from a different viewpoint. The rendering should match the quality of a normally captured high-resolution image, adjusting for the difference in distance while using the stored ARKit data to correct perspective.

Our questions:

1. Session transition best practices: What are the recommended best practices to seamlessly pause ARKit and switch to a high-resolution AVCapture session on the front camera? How can we minimize user movement or other issues during this brief transition, given our assumption that the face-camera pose remains largely consistent except for distance changes?

2. Data integration for perspective rendering: How can we effectively integrate stored ARKit face, depth, and pose data with the high-res image to perform accurate perspective correction or rendering from another viewpoint? Given that we assume the relative pose is constant except for distance, are there strategies or APIs that leverage this assumption to simplify the perspective transformation?

3. Perspective correction with SceneKit/RealityKit: What techniques or workflows using SceneKit or RealityKit are recommended for correcting the perspective of a captured 2D image based on ARKit scene data? How can we use these frameworks to render the high-resolution image from an alternative perspective while maintaining image quality and fidelity?

4. Pitfalls and guidelines: What common pitfalls should we be aware of when combining ARKit tracking data with high-res capture and post-processing for perspective rendering? Are there performance considerations, recommended thresholds for acceptable temporal consistency, or validation techniques to ensure the ARKit data remains applicable at the moment of high-res capture?

We appreciate any advice, sample code references, or documentation pointers that could assist us in implementing this workflow effectively. Thank you!
2
0
710
Jan ’25