RealityKit


Simulate and render 3D content for use in your augmented reality apps using RealityKit.

Posts under the RealityKit tag (200 posts)

Does visionOS have an equivalent of ARView.physicsOrigin?
I'm trying to scale the physics of a scene without changing its apparent size, to avoid the physics simulation zeroing out low-speed motion. I found a technique in the docs that uses separate render and physics roots, but it relies on ARView, which visionOS doesn't have. That seems more elegant than scaling absolutely everything under a shared root -- any chance I'm just failing in my searches to find the equivalent functionality?
Replies: 0 · Boosts: 0 · Views: 21 · Activity: 4h

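For reference, a minimal sketch of the ARView technique the docs describe, assuming a macOS/iOS target; the 10x factor and entity setup are illustrative only. On visionOS, PhysicsSimulationComponent on a root entity may be the closest analogue, though I have not verified that it honors the root's scale the same way.

import RealityKit

// Minimal sketch (macOS/iOS ARView, not visionOS): physicsOrigin decouples
// simulation scale from render scale. With the origin scaled to 0.1, a 1 m
// entity behaves like a 10 m body in the simulation, which keeps velocities
// above the engine's low-speed cutoff.
let arView = ARView(frame: .zero)

let anchor = AnchorEntity(world: .zero)
arView.scene.addAnchor(anchor)

let physicsRoot = Entity()
physicsRoot.scale = SIMD3<Float>(repeating: 0.1)
anchor.addChild(physicsRoot)

arView.physicsOrigin = physicsRoot
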
RealityKit: filling the background environment
I am new to RealityKit and Metal, and I am building a RealityKit app that renders a procedural LowLevelMesh road. But the terrain to the left and right of the road is a single plain green mesh, and it doesn't look great. I want to add rocks, tall trees, and dense bushes (or weeds) so it looks like the player is in the woods. But when I add many of those objects, performance drops. What is the best approach to filling the empty background space in the scene?
Replies: 2 · Boosts: 0 · Views: 267 · Activity: 1d

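One common mitigation, sketched below with a hypothetical "Tree" asset name: load a single prototype and clone it, so every instance shares the prototype's mesh and material resources instead of paying a full load per object.

import RealityKit

// Hedged sketch: clones share mesh/material resources with the prototype,
// so hundreds of instances are far cheaper than hundreds of separate loads.
func scatterTrees(under root: Entity) async throws {
    let prototype = try await Entity(named: "Tree") // hypothetical asset name
    for _ in 0..<200 {
        let tree = prototype.clone(recursive: true)
        tree.position = SIMD3<Float>(Float.random(in: -50...50),
                                     0,
                                     Float.random(in: -50...50))
        tree.scale *= Float.random(in: 0.8...1.2) // vary sizes a little
        root.addChild(tree)
    }
}

Combining this with a few low-poly variants, and keeping distant filler coarse, usually stretches the frame budget further than many unique high-detail models.
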
RealityView content disappears when selecting Lock In Place on visionOS
Hi, I'm experiencing an issue where all RealityView content disappears when the user selects "Lock In Place" from the window management menu (long press on the close button). "Follow Me" works correctly. This happens in TestFlight builds only; it is not reproducible when I run locally. I have reproduced this with a minimal project containing nothing but a simple red cube — no custom anchors, no app state, no dependencies.

Steps to reproduce:
1. Open an ImmersiveSpace. A red cube is placed 1 m in front of the user via RealityView.
2. Long press the X button on any floating window.
3. Select "Lock In Place". The cube disappears immediately.

Expected: cube remains visible after the window is locked.
Actual: cube disappears.

Minimal reproducible code:

struct ImmersiveView: View {
    var body: some View {
        RealityView { content in
            let cube = ModelEntity(
                mesh: .generateBox(size: 0.3),
                materials: [SimpleMaterial(color: .red, isMetallic: false)]
            )
            cube.setPosition(SIMD3<Float>(0, 1.5, -1), relativeTo: nil)
            content.add(cube)
        }
    }
}

Device: Apple Vision Pro
visionOS version: 26.2 (23N301)
Xcode version: 26.3 (17C529)

Is this a known issue? Is there a recommended workaround to preserve RealityView content during Lock In Place transitions? Thank you!
Replies: 0 · Boosts: 0 · Views: 330 · Activity: 3d

RealityView content disappears when selecting Lock In Place on visionOS
I'm experiencing an issue where all RealityView content disappears when the user selects "Lock In Place" from the window management menu (long press on the close button). "Follow Me" works correctly. This happens in TestFlight builds only; it is not reproducible when I run locally. I have reproduced this with a minimal project containing nothing but a simple red cube — no custom anchors, no app state, no dependencies.

Steps to reproduce:
1. Open an ImmersiveSpace. A red cube is placed 1 m in front of the user via RealityView.
2. Long press the X button on any floating window.
3. Select "Lock In Place". The cube disappears immediately.

Expected: cube remains visible after the window is locked.
Actual: cube disappears.
Note: "Follow Me" does NOT reproduce this issue.

Minimal reproducible code:

struct ImmersiveView: View {
    var body: some View {
        RealityView { content in
            let cube = ModelEntity(
                mesh: .generateBox(size: 0.3),
                materials: [SimpleMaterial(color: .red, isMetallic: false)]
            )
            cube.setPosition(SIMD3(0, 1.5, -1), relativeTo: nil)
            content.add(cube)
        }
    }
}

Device: Apple Vision Pro
visionOS version: 26.2 (23N301)
Xcode version: 26.3 (17C529)

Is this a known issue? Is there a recommended workaround to preserve RealityView content during Lock In Place transitions? Thank you!
Replies: 0 · Boosts: 0 · Views: 281 · Activity: 3d

ManipulationComponent Not Translating using indirect input
When using the new RealityKit ManipulationComponent on entities, indirect input never translates the entity, no matter what settings are applied. Direct manipulation works as expected for both translation and rotation. Is this intended behaviour? It differs from how indirect manipulation works on Model3D. How else can we get translation from this component? visionOS 26 Beta 2, built from macOS 26 Beta 2 and Xcode 26 Beta 2. Attached is replicable sample code; I have tried this in other projects with the same results.

var body: some View {
    RealityView { content in
        // Add the initial RealityKit content
        if let immersiveContentEntity = try? await Entity(named: "MovieFilmReel", in: reelRCPBundle) {
            ManipulationComponent.configureEntity(
                immersiveContentEntity,
                allowedInputTypes: .all,
                collisionShapes: [ShapeResource.generateBox(width: 0.2, height: 0.2, depth: 0.2)]
            )
            immersiveContentEntity.position.y = 1
            immersiveContentEntity.position.z = -0.5

            var mc = ManipulationComponent()
            mc.releaseBehavior = .stay
            immersiveContentEntity.components.set(mc)

            content.add(immersiveContentEntity)
        }
    }
}
Replies: 15 · Boosts: 5 · Views: 1.9k · Activity: 5d

RealityKit equivalent of ARGeoAnchor?
In ARKit there is ARGeoAnchor, which lets you anchor content using latitude and longitude so objects stay fixed to a real-world location. Is there an equivalent feature in RealityKit? I want to place points in the world and make sure they don't move or drift after placement. If RealityKit doesn't support this directly, what is the recommended approach?
Replies: 0 · Boosts: 0 · Views: 451 · Activity: 1w

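Not an official answer, but on iOS one possible bridge is to create the ARGeoAnchor through ARKit and attach RealityKit content to it with AnchorEntity(anchor:). A sketch with placeholder coordinates:

import ARKit
import CoreLocation
import RealityKit

// Hedged sketch (iOS): geo anchors come from ARKit, but RealityKit content
// can be tied to them through AnchorEntity(anchor:).
func placeGeoAnchored(model: Entity, in arView: ARView) {
    // Geo tracking requires ARGeoTrackingConfiguration (supported regions only).
    arView.session.run(ARGeoTrackingConfiguration())

    let coordinate = CLLocationCoordinate2D(latitude: 37.3349, longitude: -122.0090)
    let geoAnchor = ARGeoAnchor(coordinate: coordinate)
    arView.session.add(anchor: geoAnchor)

    let anchorEntity = AnchorEntity(anchor: geoAnchor)
    anchorEntity.addChild(model)
    arView.scene.addAnchor(anchorEntity)
}
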
Can I use a Metal shader if I use RealityKit to build a visionOS app?
I asked an AI to build a realistic ocean shader for a visionOS project using RealityKit. It gave a bunch of instructions and asked me to connect a large number of nodes in the ShaderGraph in Reality Composer Pro. I am wondering whether AI can generate the Metal shader code directly, or build the ShaderGraph nodes for me automatically.
Replies: 0 · Boosts: 0 · Views: 491 · Activity: 1w

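For context: RealityKit on visionOS does not take hand-written Metal surface shaders the way CustomMaterial does on iOS/macOS, so shader graphs authored in Reality Composer Pro are the usual route. A sketch of loading one at runtime; the material path, file name, and bundle are hypothetical placeholders from a typical RCP package:

import RealityKit
import RealityKitContent

// Hedged sketch: load a shader-graph material authored in Reality Composer
// Pro. "OceanMaterial", "Immersive.usda", and realityKitContentBundle are
// placeholder names.
func loadOceanMaterial() async throws -> ShaderGraphMaterial {
    try await ShaderGraphMaterial(
        named: "/Root/OceanMaterial",
        from: "Immersive.usda",
        in: realityKitContentBundle
    )
}

Since the graph ultimately lives in a USD file, having a tool generate that file (rather than Metal source) is the more promising direction here.
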
Best approach for animating a speaking avatar in a macOS/iOS SwiftUI application
I am developing a macOS application using SwiftUI (with an iOS version as well). One feature we are exploring is displaying an avatar that reads or speaks dynamically generated text produced by an AI service. The basic flow would be:

1. Text generated by an AI service
2. Text converted to speech using a TTS engine
3. An avatar (2D or 3D) rendered in the app that animates lip movement synchronized with the speech

Ideally the avatar would render locally on the device. Questions:

- What Apple frameworks would be most appropriate for implementing a speaking avatar: SceneKit, RealityKit, or SpriteKit (for 2D avatars)?
- Is there a recommended way to drive lip-sync animation from speech audio using Apple frameworks?
- Does AVSpeechSynthesizer expose phoneme or viseme timing information that could be used for avatar animation?
- If such timing information is not available, what is the recommended approach for synchronizing character mouth animation with speech audio on macOS/iOS?
- Are there examples of real-time character animation synchronized with speech on macOS/iOS?

Any architectural guidance or references would be greatly appreciated.
Replies: 0 · Boosts: 0 · Views: 514 · Activity: 1w

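On the AVSpeechSynthesizer question above: to my knowledge it exposes word-level timing through its delegate, not phonemes or visemes, which is still enough for a coarse mouth animation. A minimal sketch:

import AVFoundation

// Hedged sketch: the delegate reports each word as it is about to be spoken;
// open the avatar's mouth on the callback and ease it closed between words.
final class SpeechAnimator: NSObject, AVSpeechSynthesizerDelegate {
    private let synthesizer = AVSpeechSynthesizer()

    override init() {
        super.init()
        synthesizer.delegate = self
    }

    func speak(_ text: String) {
        synthesizer.speak(AVSpeechUtterance(string: text))
    }

    func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer,
                           willSpeakRangeOfSpeechString characterRange: NSRange,
                           utterance: AVSpeechUtterance) {
        // Word-level callback: trigger a mouth-open pose here.
    }
}
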
Position and orientation of a window in an immersive space
Is it possible to retrieve the position and orientation of a window that is opened in an immersive space? The following code:

struct MyWindow: View {
    var body: some View {
        VStack {
            Text("Hello")
        }
        .onGeometryChange3D(for: Point3D.self) { proxy in
            try! proxy
                .coordinateSpace3D()
                .convert(value: Point3D.zero, to: .worldReference)
        } action: { point in
            print(point)
        }
    }
}

seems to work for the position, but I also need the orientation.
Replies: 0 · Boosts: 0 · Views: 790 · Activity: 1w

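One untested idea, resting on an assumption I have not verified: if coordinateSpace3D()'s convert(value:to:) also accepts a Pose3D, converting the identity pose would return orientation along with position.

import Spatial
import SwiftUI

struct MyWindow: View {
    var body: some View {
        VStack {
            Text("Hello")
        }
        // Unverified: assumes convert(value:to:) accepts Pose3D the same way
        // it accepts Point3D; if so, pose.rotation carries the orientation.
        .onGeometryChange3D(for: Pose3D.self) { proxy in
            try! proxy
                .coordinateSpace3D()
                .convert(value: Pose3D.identity, to: .worldReference)
        } action: { pose in
            print(pose.position, pose.rotation)
        }
    }
}
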
RealityKit - Full 3D experience
I have a question, I guess more for the Apple team: why are there no fully 3D experiences for the Vision Pro lineup? I know Apple has given us tools to bring Unity 3D games to iPhone, and I guess you can also build them in RealityKit. But why, at this moment, are 3D games limited to just iPad and iPhone, and why can't that be brought to Vision Pro? To explain what I mean by a fully 3D game: games like Gorn. The Vision Pro is definitely powerful enough, but it feels limited to tabletop and AR games. Is this something Apple is thinking about implementing?
Replies: 1 · Boosts: 0 · Views: 1.4k · Activity: 1w

RealityKit crashes when rendering SpriteKit scene with SKShapeNode in postProcess callback
I'm converting my game from SceneKit to RealityKit. It has a SpriteKit overlay that, according to "Explore advanced rendering with RealityKit 2", I can add with the code below. The code runs fine if the SKScene only contains an SKSpriteNode (see the commented-out line), but when I add an SKShapeNode with a fillColor instead, the app crashes with this error:

-[MTLDebugRenderCommandEncoder validateCommonDrawErrors:]:5970: failed assertion `Draw Errors Validation
MTLDepthStencilDescriptor uses frontFaceStencil but MTLRenderPassDescriptor has a nil stencilAttachment texture
MTLDepthStencilDescriptor uses backFaceStencil but MTLRenderPassDescriptor has a nil stencilAttachment texture
'

I don't know enough about low-level graphics and stencils yet to figure out a quick solution, so I would appreciate it if anyone could share an easy fix or an explanation of what's wrong. Thanks!

class ViewController: NSViewController {
    var device: MTLDevice!
    var renderer: SKRenderer!

    override func loadView() {
        let arView = ARView(frame: NSScreen.main!.frame)
        view = arView

        arView.renderCallbacks.prepareWithDevice = { [weak self] device in
            guard let self = self else { return }
            self.device = device
            renderer = SKRenderer(device: MTLCreateSystemDefaultDevice()!)
            let scene = SKScene()
            let shape = SKShapeNode(rectOf: CGSize(width: 10, height: 10))
            shape.fillColor = .red
            scene.addChild(shape)
            // scene.addChild(SKSpriteNode(color: .red, size: CGSize(width: 10, height: 10)))
            renderer.scene = scene
        }

        arView.renderCallbacks.postProcess = { [weak self] context in
            guard let self = self else { return }
            let encoder = context.commandBuffer.makeBlitCommandEncoder()
            encoder?.copy(from: context.sourceColorTexture, to: context.targetColorTexture)
            encoder?.endEncoding()
            renderer.update(atTime: context.time)
            let descriptor = MTLRenderPassDescriptor()
            descriptor.colorAttachments[0].loadAction = .load
            descriptor.colorAttachments[0].storeAction = .store
            descriptor.colorAttachments[0].texture = context.targetColorTexture
            renderer.render(withViewport: CGRect(x: 0, y: 0,
                                                 width: context.targetColorTexture.width,
                                                 height: context.targetColorTexture.height),
                            commandBuffer: context.commandBuffer,
                            renderPassDescriptor: descriptor)
        }
    }
}
Replies: 2 · Boosts: 0 · Views: 939 · Activity: 2w

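The assertion reads as SKShapeNode's fill pass using stencil state that the hand-built render pass never provides. An untested sketch of supplying one, inserted before the renderer.render call in the postProcess closure above:

// Untested sketch: give the pass a stencil attachment sized to the target.
let stencilDesc = MTLTextureDescriptor.texture2DDescriptor(
    pixelFormat: .stencil8,
    width: context.targetColorTexture.width,
    height: context.targetColorTexture.height,
    mipmapped: false)
stencilDesc.usage = .renderTarget
stencilDesc.storageMode = .private
let stencilTexture = self.device.makeTexture(descriptor: stencilDesc)

descriptor.stencilAttachment.texture = stencilTexture
descriptor.stencilAttachment.loadAction = .clear
descriptor.stencilAttachment.storeAction = .dontCare

In a real implementation the stencil texture would be created once and reused across frames rather than allocated per callback.
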
ImageAnchoringSource from URL
Hello, I was wondering how I can initialize an ImageAnchoringSource using https://developer.apple.com/documentation/realitykit/anchoringcomponent/imageanchoringsource/init(_:). When I construct one from a URL, it doesn't seem to be tracked, and I see the following when I debug-print the component:

▿ 0 : AnchoringComponent
  ▿ target : Target
    ▿ referenceImage : 1 element
      ▿ from : ImageAnchoringSource
        ▿ url : Optional<URL>
          ▿ some : file:///var/mobile/Containers/Data/Application/D1126EA0-A1D7-468F-A40C-8578B7F5BDDF/Library/Caches/CodeCache/0E457AA7-2195-48B9-9DD4-58CEB9397F69.png
            - _url : file:///var/mobile/Containers/Data/Application/D1126EA0-A1D7-468F-A40C-8578B7F5BDDF/Library/Caches/CodeCache/0E457AA7-2195-48B9-9DD4-58CEB9397F69.png
            - _parseInfo : nil
            - _baseParseInfo : nil
        - name : nil
        - group : nil
  ▿ trackingMode : TrackingMode
    - trackingMode : 2

Is there a specific format for the parseInfo? When I use the same image to make an image anchoring source by group and name in AR Resources, it is tracked. Thank you!
Replies: 2 · Boosts: 1 · Views: 1.1k · Activity: 2w

Highlight or select entity in RealityKit
I'm in the process of converting my SceneKit game to RealityKit. In SceneKit I used to be able to mark nodes as selected by setting SCNMaterial.emission with a custom color. I can do the same with PhysicallyBasedMaterial.emissiveColor, but I'd like to keep my entities unaffected by the scene lights by using UnlitMaterial. In SceneKit I can set a category mask to indicate which lights should affect which nodes, but there doesn't seem to be such a thing in RealityKit. So at the moment it seems like I have to choose between being able to mark an entity as selected, or having entities unaffected by lighting, but not both. Is there some effect or component I can use to mark entities as selected by applying some coloring, regardless of the material used?
Replies: 2 · Boosts: 0 · Views: 189 · Activity: 2w

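A sketch of one workaround rather than a built-in effect: since UnlitMaterial already ignores scene lighting, a selection tint can be faked by swapping the material's color.

import RealityKit

// Hedged sketch: emulate SceneKit's emission-based selection by recoloring
// the unlit material; unlit materials ignore scene lights either way.
func setSelected(_ entity: ModelEntity, _ selected: Bool) {
    let color: UnlitMaterial.Color = selected ? .yellow : .white
    entity.model?.materials = [UnlitMaterial(color: color)]
}

UnlitMaterial's color input takes both a tint and a texture, so textured models can keep their texture and vary only the tint on selection.
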
RealityKit animation with bindTarget: .opacity doesn't work
I want to fade objects in and out, and while setting an entity's OpacityComponent works, animating it doesn't seem to do anything. In the following code the second sphere should fade out, but it keeps its initial opacity. On the other hand, the animation that changes its transform works. What am I doing wrong?

class ViewController: NSViewController {
    override func loadView() {
        let arView = ARView(frame: NSScreen.main!.frame)
        let anchor = AnchorEntity(.world(transform: matrix_identity_float4x4))
        arView.scene.addAnchor(anchor)

        let sphere = ModelEntity(mesh: .generateSphere(radius: 0.5))
        anchor.addChild(sphere)
        sphere.components.set(OpacityComponent(opacity: 0.1))

        let sphere2 = ModelEntity(mesh: .generateSphere(radius: 0.5))
        sphere2.position = .init(x: 0.2, y: 0, z: 0)
        anchor.addChild(sphere2)
        sphere2.components.set(OpacityComponent(opacity: 0.1))

        sphere.playAnimation(try! AnimationResource.makeActionAnimation(
            for: FromToByAction(to: 0, timing: .linear),
            duration: 1,
            bindTarget: .opacity))
        sphere.playAnimation(try! AnimationResource.makeActionAnimation(
            for: FromToByAction(to: Transform(translation: SIMD3(x: 0.1, y: 0, z: 0)), timing: .linear),
            duration: 1,
            bindTarget: .transform))

        view = arView
    }
}
Replies: 4 · Boosts: 0 · Views: 389 · Activity: 2w

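A hedged alternative worth trying, not a confirmed fix: the keyframe-style FromToByAnimation also takes the .opacity bind target, and may succeed where the action-based API above does not.

// Hedged sketch: keyframe animation bound to .opacity instead of an action.
let fade = FromToByAnimation<Float>(from: 0.1,
                                    to: 0.0,
                                    duration: 1.0,
                                    timing: .linear,
                                    bindTarget: .opacity)
sphere2.playAnimation(try! AnimationResource.generate(with: fade))
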
RealityKit game keeps using ~65% CPU even with empty scene
I'm trying to convert my game from SceneKit to RealityKit. I noticed that even when the scene is static (nothing moves), RealityKit keeps using the CPU. In SceneKit, CPU usage goes down to 0% with a static scene. With this simplest of apps, RealityKit keeps using about 65% CPU:

class ViewController: NSViewController {
    override func loadView() {
        view = ARView(frame: NSScreen.main!.frame)
    }
}

Is this expected or a bug? I created FB22125047.
Replies: 0 · Boosts: 1 · Views: 101 · Activity: 2w

Crash when displaying RealityView on multiple screens, only when connected with Xcode
I have an iOS app that uses RealityView to display models and interact with them; the app uses regular iOS navigation. The challenge I'm facing is how to maintain multiple RealityViews across multiple screens. For example, Screen A has a RealityView, and I then navigate to Screen B (which also has a RealityView) using stack-based navigation. When I do so, I get a crash:

-[MTLDebugRenderCommandEncoder validateCommonDrawErrors:]:5970: failed assertion `Draw Errors Validation
Fragment Function(fsRealityPbr): argument envProbeTable[0] from Buffer(7) with offset(0) and length(16) has space for 16 bytes, but argument has a length(864).
Fragment Function(fsRealityPbr): incorrect type of texture (MTLTextureType2D) bound at Texture binding at index 20 (expect MTLTextureTypeCubeArray) for envProbeDiffuseArray[0].

Interestingly, this crash only happens when debugging with Xcode; it doesn't happen when the app runs on its own. I'm not sure whether what I'm doing is an anti-pattern or whether this is an Xcode debugging limitation.
Replies: 0 · Boosts: 0 · Views: 742 · Activity: 3w

Draw An Outline Around a Model Entity
Hi, is there a resource or sample code showing how to draw an outline around a mesh in RealityKit? Typically this is useful for visualizing a selection, as in Reality Composer Pro. How can such an effect be achieved? A shader material? A post-process effect in ARView or RealityRenderer? Methods such as duplicating the entity's mesh, scaling it up, and using material.faceCulling = .front did not look good in my experiments. Thank you.
Replies: 1 · Boosts: 0 · Views: 520 · Activity: 3w

RealityKit Scene
Hi, I’m wondering whether RealityKit has its own scene management system, since it uses ARView (backed by ARKit) to present AR content. Does RealityKit manage scenes independently, or does it rely entirely on ARKit’s scene handling? Thank you.
Replies: 1 · Boosts: 0 · Views: 148 · Activity: 3w

Equivalent in RealityKit of SceneKit's SCNAction.customAction to run custom animation
I'm currently converting my game from SceneKit to RealityKit. What's the easiest way to run an animation in which every frame I want to run custom code, similar to SCNAction.customAction which accepts a callback that is called repeatedly until the action completes?
Replies: 0 · Boosts: 0 · Views: 257 · Activity: 3w

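A sketch of the closest analogue I know of: subscribing to SceneEvents.Update gives a per-frame callback with a frame delta, which can be cancelled when the "action" completes. The scaling body is just example per-frame work.

import Combine
import RealityKit

// Hedged sketch: SceneEvents.Update fires every rendered frame, standing in
// for SCNAction.customAction's repeated callback.
var updateSubscription: Cancellable?

func runCustomAnimation(on entity: Entity, in arView: ARView, duration: TimeInterval) {
    var elapsed: TimeInterval = 0
    updateSubscription = arView.scene.subscribe(to: SceneEvents.Update.self) { event in
        elapsed += event.deltaTime
        let t = Float(min(elapsed / duration, 1))
        entity.scale = SIMD3<Float>(repeating: 1 + t) // example per-frame work
        if elapsed >= duration {
            updateSubscription?.cancel() // the "action" is complete
        }
    }
}
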
RealityView attachment draw order
My visionOS 26.3 app displays a diorama-like scene in a RealityView in a mixed immersive space, about 1 meter square, with view attachments floating above the scene. Each view attachment fades out after user interaction, by animating the view's opacity. What I'm observing is that, depending on the position of a view attachment relative to the scene and the camera, an unwanted cutout effect appears (presumably because of draw-order issues), as shown in the right column of the screenshots. YouTube video of these sequences: https://youtu.be/oTuo0okKCkc (19 seconds).

My question: how does visionOS determine the draw order of view attachments relative to the RealityView scene? If I better understood how the draw order is determined, I could modify my scene to ensure the view attachments are always drawn after the scene, fixing the unwanted cutout effect. I've successfully used ModelSortGroupComponent to control the draw order of entities within the RealityView scene, but my understanding is that this approach cannot be used with view attachments. I've submitted FB22014370 about this issue. Thank you.
Replies: 0 · Boosts: 0 · Views: 213 · Activity: Feb ’26