I'm developing a prototype Vision Pro app and would like to render a 3D scene made in Reality Composer Pro on an image anchor in a RealityView. So far I haven't been able to make it work and need some guidance to move on.
The image file is stored in the assets as shown below:
Below is the source code:
import SwiftUI
import RealityKit
import RealityKitContent

struct AnchorView: View {
    @State var imageEntity: Entity = {
        // Anchor the content to the "reanchor" reference image in the "AR Resources" group.
        let anchorEntity = AnchorEntity(.image(group: "AR Resources", name: "reanchor"))
        return anchorEntity
    }()

    var body: some View {
        RealityView { content in
            do {
                // Add the initial RealityKit content from the Reality Composer Pro package.
                // Using `try` (not `try?`) so the catch block actually receives the error.
                let scene = try await Entity(named: "Scene", in: realityKitContentBundle)
                imageEntity.addChild(scene)
                content.add(imageEntity)
            } catch {
                print("Error occurred when adding RealityView content: \(error)")
            }
        }
    }
}
I have attempted to use VideoMaterial with an HDR HLS stream, and also a TextureResource.DrawableQueue with rgba16Float in a ShaderGraphMaterial.
I'm capturing to 64RGBAHalf with AVPlayerItemVideoOutput and converting that to rgba16Float.
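Roughly, the pipeline is set up like this (a minimal sketch; the frame size, the "VideoTexture" parameter name, and the placeholder texture parameter are illustrative):

import AVFoundation
import CoreVideo
import RealityKit

// Minimal sketch of my capture + drawable-queue setup. The frame size, the
// "VideoTexture" parameter name, and the placeholder texture are illustrative.
func attachHDRVideo(to material: inout ShaderGraphMaterial,
                    playerItem: AVPlayerItem,
                    placeholderTexture: TextureResource) throws -> (AVPlayerItemVideoOutput, TextureResource.DrawableQueue) {
    // 1. Capture frames from the player item as 64RGBAHalf.
    let output = AVPlayerItemVideoOutput(pixelBufferAttributes: [
        kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_64RGBAHalf
    ])
    playerItem.add(output)

    // 2. Create a drawable queue with a matching float pixel format.
    let descriptor = TextureResource.DrawableQueue.Descriptor(
        pixelFormat: .rgba16Float, width: 1920, height: 1080,
        usage: [.shaderRead, .shaderWrite], mipmapsMode: .none)
    let queue = try TextureResource.DrawableQueue(descriptor)

    // 3. Back the texture with the queue and hand it to the shader graph material.
    placeholderTexture.replace(withDrawables: queue)
    try material.setParameter(name: "VideoTexture", value: .textureResource(placeholderTexture))
    return (output, queue)
}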
I don't believe either approach displays HDR properly or behaves like a raw AVPlayer.
Since we can't configure any EDR metadata or color space for a RealityView, how do we display HDR video? Is using rgba16Float supposed to be enough?
Is expecting the 64RGBAHalf capture to handle HDR properly a mistake and should I capture YUV and do the conversion myself?
Thank you
I have various USDZ files in my visionOS app. Loading the USDZ files works quite well. I only have problems with the positioning of the 3D model. For example, I have a USDZ file that is displayed directly above me. I can't move the model or perform any other actions on it. If I sit on a chair or stand up again, the 3D model automatically moves with me. This is my source code for loading the USDZ files:
struct ImmersiveView: View {
    @State var modelName: String
    @State private var loadedModel = Entity()

    var body: some View {
        RealityView { content in
            if let usdModel = try? await Entity(named: modelName) {
                print("====> \(modelName) : \(usdModel) <====")
                let bounds = usdModel.visualBounds(relativeTo: nil).extents
                usdModel.scale = SIMD3<Float>(1.0, 1.0, 1.0)
                usdModel.position = SIMD3<Float>(0.0, 0.0, 0.0)
                usdModel.components.set(CollisionComponent(shapes: [.generateBox(size: bounds)]))
                usdModel.components.set(HoverEffectComponent())
                usdModel.components.set(InputTargetComponent())
                loadedModel = usdModel
                content.add(usdModel)
            }
        }
    }
}
For now, I only want the 3D models from the USDZ files to be displayed; being able to move them via gestures is step 2. First, I need to make sure the models are displayed correctly. What have I forgotten or done wrong?
Using Reality Composer Pro 2.0, I created a simple shader graph that displays a texture on an unlit surface:
On visionOS 2 beta, I can successfully use ShaderGraphMaterial(named:from:in:) to load that shader graph material and assign it to a model entity.
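The loading and assignment code is essentially the following (a minimal sketch; the material path, scene file name, and entity parameter are illustrative):

import RealityKit
import RealityKitContent

// Minimal sketch of how I load and assign the material. The material path,
// scene file name, and the entity parameter are illustrative.
func applyUnlitTextureMaterial(to modelEntity: ModelEntity) async throws {
    let material = try await ShaderGraphMaterial(named: "/Root/UnlitTextureMaterial",
                                                 from: "Scene.usda",
                                                 in: realityKitContentBundle)
    modelEntity.model?.materials = [material]
}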
However, on visionOS 1.2 and earlier, either in Simulator or on the device, ShaderGraphMaterial(named:from:in:) fails and I see the following logged to the console:
As an experiment, if I open the same project in Reality Composer Pro 1.0 and delete and recreate exactly the same nodes, then ShaderGraphMaterial(named:from:in:) works as expected on visionOS 1.2.
Is it a known issue that Reality Composer Pro 2 can't be used with visionOS 1?
Is this intentional behavior?
I've submitted feedback as FB14828873, including a sample project and repro steps.
If possible, I would appreciate guidance from an Apple engineer, like "This is a known issue for [list of node types]" or "Reality Composer Pro 2 is not supported for visionOS 1 development, please refer to [documentation]" or "We recommend [workaround]."
Thank you.
My app uses RealityKit with an ARView with world tracking and a scene reconstruction mesh, and it's working well.
As I understand it, the default camera selection in ARKit is the ultra-wide camera. However, in the Camera app, there is a 0.5x option that increases the field of view further.
Is there any way to enable this 0.5x option in code? Or is there any control over the FOV when using RealityKit and a mesh?
Thanks!
I have a neural network model for segmentation. I integrated it successfully and am getting a grayscale mask image. Next, I need to apply that segmentation mask in RealityKit to achieve an occlusion effect (like person segmentation). I tried doing it through post-processing and other methods, but none of them worked. Is there an example of how this can be done in RealityKit?
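For reference, the built-in person occlusion I'm trying to mimic with my own mask is enabled like this on iOS (a minimal sketch, assuming an already configured ARView):

import ARKit
import RealityKit

// Sketch of the built-in person occlusion whose effect I want to reproduce with my own mask.
// `arView` is assumed to be an existing, already configured ARView.
func enablePersonOcclusion(on arView: ARView) {
    let configuration = ARWorldTrackingConfiguration()
    if ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) {
        // ARKit produces the segmentation matte and RealityKit occludes virtual content with it.
        configuration.frameSemantics.insert(.personSegmentationWithDepth)
    }
    arView.session.run(configuration)
}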
I'm building a proof-of-concept application leveraging the PlaneDetectionProvider to generate UI and interactive elements on a horizontal plane the user is looking at. I'm able to create a cube at the centroid of the plane and change its location via position. However, I can't seem to rotate the cube programmatically, and from this forum post in September I'm not sure if the modelEntity.move functionality is still bugged or the documentation is not up to date.
if let planeCentroid = planeEntity.centroid {
    // Create a cube at the centroid with a side length of 0.1 meters.
    let cubeMesh = MeshResource.generateBox(size: 0.1)
    let cubeMaterial = SimpleMaterial(color: .blue, isMetallic: false)
    let cubeEntity = ModelEntity(mesh: cubeMesh, materials: [cubeMaterial])
    cubeEntity.position = planeCentroid
    cubeEntity.position.y += 0.3048
    planeEntity.addChild(cubeEntity)

    // Rotate the cube 45 degrees around the Y axis over 5 seconds.
    let rotationY = simd_quatf(angle: Float(45.0 * .pi/180.0), axis: SIMD3(x: 0, y: 1, z: 0))
    let cubeTransform = Transform(rotation: rotationY)
    cubeEntity.move(to: cubeTransform, relativeTo: planeEntity, duration: 5, timingFunction: .linear)
}
Ideally, I'd like to have the cube start/stop rotation when the user pinches on the plane mesh but I'd be happy just to see it rotate!
I have two apps released, ReefScan and ReefBuild, that are based on the WWDC21 sample photogrammetry apps for iOS and macOS. Those run fine without LiDAR and are used mostly for underwater models, where LiDAR does not work at all. It now appears that the updated photogrammetry session requires LiDAR data, and building my app with the current Xcode results in a non-working app. Has the "old" version of PhotogrammetrySession been broken by this update? It worked very well previously, so I would hate to see this regression to needing LiDAR. Most of my users do not have it.
Hello everyone,
Since last night, the Object Capture feature in my app has stopped working. Whenever I try to use it, a blank screen is displayed instead of the expected functionality.
I’ve also tested several other apps that rely on Object Capture, and they are experiencing the same issue. This makes me think it might not be a problem specific to my device or app.
I’ve already tried restarting my device and ensuring all apps are up to date, but the issue persists.
Does anyone have more information about this issue? If so, is there any update on when it might be resolved?
Thank you in advance for your help!
Best regards
Hi
Hopefully someone can share some ideas on how to accomplish this.
I know we can load models from realityKitContentBundle like
let model = try? await Entity(named: "testModel", in: realityKitContentBundle)
But this is relative to the root of RealityKitContent.rkassets, so if I have the models in some subfolder then I have to add the complete path, like
let model = try? await Entity(named: "/superModels/testModel", in: realityKitContentBundle)
What I want is to be able to search recursively in all folders for that file as I have several subfolders with different models.
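Something like the brute-force helper below would work (a rough sketch; the subfolder names are illustrative and the hard-coded list is an assumption), but maintaining that list by hand is exactly what I'd like to avoid:

import RealityKit
import RealityKitContent

// Rough sketch: try the root of RealityKitContent.rkassets first, then each known
// subfolder, until the entity loads. The subfolder names here are just examples.
func loadModel(named name: String,
               searching subfolders: [String] = ["superModels", "otherModels"]) async -> Entity? {
    for folder in [""] + subfolders {
        let path = folder.isEmpty ? name : "/\(folder)/\(name)"
        if let entity = try? await Entity(named: path, in: realityKitContentBundle) {
            return entity
        }
    }
    return nil
}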
Any suggestions?
Thanks in advance.
Guillermo
Hi,
When I attach a BillboardComponent to my anchor entities, I am no longer able to retrieve the tapped entity, because the collision shapes of the entity get out of sync as it is constantly reoriented towards the camera. The collision shapes don't seem to be updated: if I tap somewhere that is clearly not my model entity, I still get a hit out of nowhere.
I tried updating the collision shapes of the entity every frame:
for child in existingPassport.mainEntity!.children {
    child.generateCollisionShapes(recursive: true)
}
However, nothing comes of it, and it is not a smart solution in the first place because it is too heavy to recreate the shapes every frame.
I am using the usual ARView view-controller setup, which works just fine when I comment out the BillboardComponent line:
private func setupTapRecognizer() {
    let tapRecognizer = UITapGestureRecognizer(target: self, action: #selector(handleTap))
    arView.addGestureRecognizer(tapRecognizer)
}

@objc func handleTap(_ recognizer: UITapGestureRecognizer) {
    print("handle tap URL 1")
    let location = recognizer.location(in: arView)
    if let entity = arView.entity(at: location) {
        print("handle tap URL 2")
        // Assuming each entity has a URL stored in a component
        if let urlComponent = entity.components[URLComponent.self] {
            webViewPresenter?.presentFullScreenWebView(url: urlComponent.url)
            print("handle tap URL: \(urlComponent.url)")
        }
    }
}
How should we tackle this issue on iOS 18?
Thanks!
I am trying to model something similar to an odometer in RealityKit, where 3D numbers scroll up or down, as they increase or decrease, within a container entity. Is there a way for an Entity to clip its children so that anything that extends beyond its dimensions is not rendered?
I want to compare the colors of two model entities (spheres). How can I do it? The method I'm currently trying to apply is as follows:
// The snippet originally started mid-condition; the opening `if let` line is an assumed reconstruction.
if let controlMaterial = controlSphere.model?.materials.first as? SimpleMaterial,
   case let .color(controlColor) = controlMaterial.baseColor, controlColor == .green {
    // Flip target sphere colour
    if let targetMaterial = targetsphere.model?.materials.first as? SimpleMaterial,
       case let .color(targetColor) = targetMaterial.baseColor, targetColor == .blue {
        targetsphere.model?.materials = [SimpleMaterial(color: .green, isMetallic: false)] // Change to |1⟩
    } else {
        targetsphere.model?.materials = [SimpleMaterial(color: .blue, isMetallic: false)] // Change to |0⟩
    }
}
The baseColor property was deprecated in iOS 15.0 in favor of color, but I cannot compare the resulting color values to each other. 👾
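For clarity, this is roughly what I'm trying to express with the replacement color property (a sketch; the sphere parameters are assumptions, and I'm not sure that comparing UIColor values with == is even reliable here):

import RealityKit
import UIKit

// Rough sketch using the non-deprecated `color` property instead of `baseColor`.
// `controlSphere` and `targetSphere` are assumed ModelEntity spheres; comparing
// UIColor values with == may be unreliable across color spaces.
func flipTargetColor(controlSphere: ModelEntity, targetSphere: ModelEntity) {
    guard let controlMaterial = controlSphere.model?.materials.first as? SimpleMaterial,
          controlMaterial.color.tint == .green,
          let targetMaterial = targetSphere.model?.materials.first as? SimpleMaterial else { return }

    let newColor: UIColor = (targetMaterial.color.tint == .blue) ? .green : .blue
    targetSphere.model?.materials = [SimpleMaterial(color: newColor, isMetallic: false)]
}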
Hi! I'm having issues retrieving the intrinsics matrix of camera poses for photogrammetry sessions.
The camera object always seems to be nil, no matter what dataset I use.
From the documentation, I can't see any indication of this issue. Is there something I need to do on the code side, or is it something related to the photo dataset?
I'm on macOS 15.2.
Hello, I am experiencing an issue with a character animation using RealityKit.
I have a file created in Blender that contains the rigged Character, a sword, and a shield. The sword and the shield have bones connected to the character's hands so they can follow the character's animation. When I run the animation in Blender and preview the exported USDZ file on Mac, I can see the sword and shield attached to the hands and the animation is fine. But when I add the USDZ model in RealityKit and play the animation, only the character is animating, the sword and the shield are not moving at all.
This is the code I use to animate the character:
private func loadAnimations() {
    let unifiedAnimations = children[0].availableAnimations.first!.definition
    let animationResource = try! AnimationResource.generate(with: unifiedAnimations)
    self.children[0].playAnimation(
        animationResource.repeat()
    )
}
Here is a link with a Demo Xcode project, the Blender file, and a video:
https://www.dropbox.com/scl/fi/ypq2iwxc5f9dwzjggsvin/AppleTest.zip?rlkey=wiag3rg44urhjdh2wal8cdx2u&st=vbpf7x11&dl=0
STEPS TO REPRODUCE
I have created a demo project that displays the character on a horizontal surface, and the animation starts playing.
1. Run the app. The ARView will be set up, and a yellow square will appear in the middle of the screen.
2. When a horizontal surface is detected, the yellow square will change, indicating that the surface has been found.
3. Tap on the screen to load the USDZ model and position it at the yellow square's position.
4. The animation will start playing, and you can see that the character is animating, but the sword and shield remain still.
Thanks
Hi everyone,
I’m experiencing a critical issue with USDZ files created in Reality Composer on an iPad 9th Generation (iPadOS 18.3). The files work perfectly on iPads from the 10th Generation onwards and on iPad Pros. However, on older devices like the iPad 9th Generation and older iPhones, QuickLook (file preview) crashes when opening them.
This is a major issue because these USDZ files are part of an exhibition where artworks are extended with AR elements via a web page. If some visitors cannot view the 3D content, it significantly impacts the experience.
What’s puzzling is that two years ago, we exported USDZ files from Reality Composer, made them available via a website, and they worked flawlessly on all devices, including older iPads and iPhones. Now, with the latest iPadOS, they consistently crash on older devices.
Has anyone encountered a similar issue? Are there known limitations with QuickLook on older devices, or is there a way to optimize the USDZ files to prevent crashes? Could this be related to changes in iPadOS or RealityKit? Any advice or workaround would be greatly appreciated!
Thanks in advance!
I'm trying to build a Shader in "Reality Composer Pro" that updates from a start time. Initially I tried the following:
The idea was that when startTime was 0, the output would be 0; then I would set startTime from code, it would be compared with the current GPU time, and the difference would drive another part of the shader graph:
if let testEntity = root.findEntity(named: "Test"),
   var shaderGraphMaterial = testEntity.components[ModelComponent.self]?.materials.first as? ShaderGraphMaterial {
    let time = CFAbsoluteTimeGetCurrent()
    try! shaderGraphMaterial.setParameter(name: "StartTime", value: .float(Float(time)))
    testEntity.components[ModelComponent.self]?.materials[0] = shaderGraphMaterial
}
However, I haven't found a reference to the time the shader would be using.
So now I am trying to write an EntityAction to achieve the same effect. Instead of comparing a start time to the GPU's time, I'm trying to animate one of the shader's uniform inputs. However, I'm not sure how to specify the bind target. Here's my attempt so far:
import RealityKit

struct ShaderAction: EntityAction {
    let startValue: Float
    let targetValue: Float

    var animatedValueType: (any AnimatableData.Type)? { Float.self }

    static func registerEntityAction() {
        ShaderAction.subscribe(to: .updated) { event in
            guard let animationState = event.animationState else { return }
            let value = simd_mix(event.action.startValue, event.action.targetValue, Float(animationState.normalizedTime))
            animationState.storeAnimatedValue(value)
        }
    }
}

extension Entity {
    func updateShader(from startValue: Float, to targetValue: Float, duration: Double) {
        let fadeAction = ShaderAction(startValue: startValue, targetValue: targetValue)
        if let shaderAnimation = try? AnimationResource.makeActionAnimation(for: fadeAction, duration: duration, bindTarget: .material(0).customValue) {
            playAnimation(shaderAnimation)
        }
    }
}
Currently when I run this I get an assertion failure: 'Index out of range (operator[]:line 797) index = 260, max = 8'
Furthermore, even if it didn't crash, I don't understand how to pass a binding to the custom shader value "startValue".
Any clues on how to achieve this effect would be appreciated, even if it's done a completely different way.
Following the documentation at
https://developer.apple.com/documentation/realitykit/custommaterial, it's simple to use a shader for materials and get uniforms and parameters for each vertex. However, CustomMaterial isn't available on visionOS. Is there any alternative to use in this case? I want to write a shader to fill the material myself. (I have shader experience from the web and am familiar with fragment shaders.)
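For context, this is the kind of setup I'd use on iOS or macOS with CustomMaterial (a minimal sketch; the "mySurfaceShader" Metal function name is illustrative), and it's this capability I'm looking for an equivalent of on visionOS:

import Metal
import RealityKit

// Minimal CustomMaterial sketch for iOS/macOS; the "mySurfaceShader" Metal function
// name is illustrative. This is the API that is unavailable on visionOS.
func makeCustomMaterial() throws -> CustomMaterial {
    guard let device = MTLCreateSystemDefaultDevice(),
          let library = device.makeDefaultLibrary() else {
        throw NSError(domain: "MaterialSetup", code: 1)
    }
    let surfaceShader = CustomMaterial.SurfaceShader(named: "mySurfaceShader", in: library)
    return try CustomMaterial(surfaceShader: surfaceShader, lightingModel: .unlit)
}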
When importing FBX animations (generated by Cinema 4D or Blender), the models come in very far away and cannot be resized or zoomed in on. I have tried every setting in both programs to no avail. Is there a secret to providing the right export options? When importing without animations and rigging, the model imports fine and at the correct size. But once motion is included, something is awry. I also tried changing the base units in the converter, but that did not work. I have attached my model hierarchy in C4D as well as the imported result. It appears the animation is imported, as I can see it move, but I can barely see it :)
Hi, is there any way to use the front camera to do motion capture?
I want to recognize whether the user has raised their hands up using the front camera on iPhone.
I was able to do it with the back camera, but not the front.
Also, if there is any sample code or documentation, I would be super happy.
Waiting for your reply!!