Hi everyone,
I'm developing an ARKit app using RealityKit and encountering an issue where a video displayed on a 3D plane shows up as a pink screen instead of the actual video content.
Here's a simplified version of my setup:
func createVideoScreen(video: AVPlayerItem, canvasWidth: Float, canvasHeight: Float, aspectRatio: Float, fitsWidth: Bool = true) -> ModelEntity {
    // Fit the plane to the canvas while preserving the video's aspect ratio.
    let width = fitsWidth ? canvasWidth : canvasHeight * aspectRatio
    let height = fitsWidth ? canvasWidth * (1 / aspectRatio) : canvasHeight

    // Note: generatePlane(width:depth:) creates a plane lying in the x-z plane.
    let screenPlane = MeshResource.generatePlane(width: width, depth: height)
    let videoMaterial: Material = createVideoMaterial(videoItem: video)
    let videoScreenModel = ModelEntity(mesh: screenPlane, materials: [videoMaterial])
    return videoScreenModel
}

func createVideoMaterial(videoItem: AVPlayerItem) -> VideoMaterial {
    let player = AVPlayer(playerItem: videoItem)
    let videoMaterial = VideoMaterial(avPlayer: player)
    player.play()
    return videoMaterial
}
Despite following the standard process, the video plane renders pink. Has anyone encountered this before, or does anyone know what might be causing it?
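One thing I still plan to rule out is whether the AVPlayerItem ever actually reaches .readyToPlay. Here's a small debugging sketch I intend to use for that (my own helper, not part of the setup above):

import AVFoundation
import Combine

// Debugging sketch (my own addition): confirm the item becomes ready and surface any
// playback error, while keeping a strong reference to the player.
final class VideoPlaybackChecker {
    let player: AVPlayer
    private var cancellable: AnyCancellable?

    init(videoItem: AVPlayerItem) {
        player = AVPlayer(playerItem: videoItem)
        cancellable = videoItem.publisher(for: \.status)
            .sink { status in
                switch status {
                case .readyToPlay:
                    print("Player item is ready to play")
                case .failed:
                    print("Player item failed:", videoItem.error?.localizedDescription ?? "unknown error")
                default:
                    break
                }
            }
        player.play()
    }
}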
Thanks in advance!
Hi,
is there a way in visionOS to anchor an entity to the POV via RealityKit?
I need an entity which is always fixed to the 'camera'.
I'm aware that this is discouraged from a design perspective as it can be visually distracting. In my case though I want to use it to attach a fixed collider entity, so that the camera can collide with objects in the scene.
Edit:
ARView on iOS has a lot of very useful helper properties and functions like cameraTransform (https://developer.apple.com/documentation/realitykit/arview/cameratransform)
How would I get this information on visionOS? RealityView's content does not seem to offer anything comparable.
An example use case: I would like to add an entity to the scene at my user's eye level, basically depending on their height.
I found https://developer.apple.com/documentation/realitykit/realityrenderer which has an activeCamera property but so far it's unclear to me in which context RealityRenderer is used and how I could access it.
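For completeness, this is what I've been experimenting with so far for the collider itself, though I'm not sure it's the intended approach (a minimal sketch):

import SwiftUI
import RealityKit

// A head-target AnchorEntity carrying a sphere collider, added in the RealityView make closure.
struct POVColliderView: View {
    var body: some View {
        RealityView { content in
            let headAnchor = AnchorEntity(.head)
            let collider = Entity()
            collider.components.set(CollisionComponent(shapes: [.generateSphere(radius: 0.15)]))
            headAnchor.addChild(collider)
            content.add(headAnchor)
        }
    }
}

As far as I can tell, the head anchor's transform isn't exposed back to my code, which is why I'm still looking for something like ARView's cameraTransform.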
Appreciate any hints, thanks!
I have attempted to use VideoMaterial with an HDR HLS stream, and also a TextureResource.DrawableQueue with rgba16Float in a ShaderGraphMaterial.
I'm capturing to 64RGBAHalf with AVPlayerItemVideoOutput and converting that to rgba16Float.
I don't believe it's displaying HDR properly or behaving like a raw AVPlayer.
Since we can't configure any EDR metadata or color space for a RealityView, how do we display HDR video? Is using rgba16Float supposed to be enough?
Is expecting the 64RGBAHalf capture to handle HDR properly a mistake and should I capture YUV and do the conversion myself?
Thank you
Using Reality Composer Pro 2.0, I created a simple shader graph that displays a texture on an unlit surface:
On visionOS 2 beta, I can successfully use ShaderGraphMaterial(named:from:in:) to load that shader graph material and assign it to a model entity.
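The load call itself looks roughly like this (the material path, scene file, and bundle name are placeholders for my actual Reality Composer Pro package):

import RealityKit
import RealityKitContent

// Roughly my loading code; path, file, and bundle names are placeholders.
func applyShaderGraphMaterial(to modelEntity: ModelEntity) async throws {
    let material = try await ShaderGraphMaterial(
        named: "/Root/UnlitTextureMaterial",
        from: "Scene.usda",
        in: realityKitContentBundle)
    modelEntity.model?.materials = [material]
}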
However, on visionOS 1.2 and earlier, either in Simulator or on the device, ShaderGraphMaterial(named:from:in:) fails and I see the following logged to the console:
If, using Reality Composer Pro 1.0, I experimentally open the same project and delete and recreate exactly the same nodes above, then ShaderGraphMaterial(named:from:in:) works as expected on visionOS 1.2.
Is it a known issue that Reality Composer Pro 2 can't be used with visionOS 1?
Is this intentional behavior?
I've submitted feedback as FB14828873, including a sample project and repro steps.
If possible, I would appreciate guidance from an Apple engineer, like "This is a known issue for [list of node types]" or "Reality Composer Pro 2 is not supported for visionOS 1 development, please refer to [documentation]" or "We recommend [workaround]."
Thank you.
I'm having issues getting collision shapes working in Reality Composer on iPadOS, and also with Reality Composer Pro via Xcode on macOS.
I've posted a video recorded through my Vision Pro showing the issue.
The project I'm working on is a dice-rolling application. The dice don't appear to collide correctly when set to Collision Shape = Automatic, which I assume takes the actual silhouette of the shape into account.
https://youtu.be/upPtQY4QOAk?si=yyx6rbSSmVkLxBLg
They also don't rest on a face when they land.
Has anyone experienced this type of behavior and found a solution? I'm currently doing this with Reality Composer, but I'll most likely also want to get it working properly in Reality Composer Pro.
Thx!
Has anyone come across the issue that setting GKLocalPlayer.local.authenticateHandler breaks a RealityView's world tracking on iOS / iPadOS 18 beta 5?
I'm in the process of upgrading my app to make use of the much appreciated RealityView unification, using RealityView not only on visionOS but now also on iOS and iPadOS. In my RealityView, I enable world tracking on iOS like this:
content.camera = .worldTracking
However, device position and orientation were ignored (the camera remained static) and there was no camera pass-through. Then I discovered that the issue disappeared when I removed the following code:
GKLocalPlayer.local.authenticateHandler = { viewController, error in
// ... some more code ...
}
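For context, a simplified sketch of the relevant setup (the view name is mine; entity setup omitted):

import SwiftUI
import RealityKit
import GameKit

// Simplified repro sketch: a RealityView with world tracking enabled, plus the
// Game Center authentication handler that appears to break tracking on beta 5.
struct TrackedARView: View {
    var body: some View {
        RealityView { content in
            content.camera = .worldTracking   // device tracking + camera pass-through on iOS
            // ... add entities here ...
        }
        .onAppear {
            GKLocalPlayer.local.authenticateHandler = { viewController, error in
                // ... some more code ...
            }
        }
    }
}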
So I filed FB14731139 and hope that it will be resolved before the release of iOS / iPadOS 18.
I'm writing a RealityKit/ARKit app that runs on iOS.
Starting with Xcode 16.0 beta 1, at least through Xcode 16.1 beta 2 (16B5014f), in the iOS 18 simulator, my app randomly crashes in about 20% of app sessions the first time it attempts to present an ARView.
The crashes seem to occur at multiple points within RealityKit and Metal.
Below, I've included screenshots of the call stacks of the crashes, which occur as a result of both EXC_BAD_ACCESS and assertion failures within RealityKit.
The app only crashes in the iOS 18 simulator, and does not crash in the iOS 17 simulator or earlier.
The app only crashes in the simulator, and does not crash on a device running iOS 18.
Before I investigate further, I'd appreciate it if an Apple engineer could give me a sense of if these crashes are most likely the result of known issues within RealityKit and/or the simulator, or if your opinion is that there are probably bugs in my app's code.
I've submitted several feedback issues in the past, and I'd love to submit this issue too, but I expect that I would spend many hours attempting to create a repro case in a sample app. Understandably, I'd rather not spend this time if an Apple engineer could tell me this is a known issue, for example.
Thank you.
In a macOS project using RealityKit and SwiftUI, adding an OrthographicCameraComponent causes crashes both in Xcode Preview and at runtime.
import SwiftUI
import RealityKit

struct ContentView: View {
    var body: some View {
        RealityView { content in
            // Orthographic camera looking at a sphere at the origin.
            let camera = Entity()
            var component = OrthographicCameraComponent()
            component.scale = 5
            camera.position = [0, 0, 5]
            camera.components.set(component)
            content.add(camera)

            content.add(ModelEntity(mesh: .generateSphere(radius: 1)))
        }
    }
}

#Preview {
    ContentView()
}
Has anyone faced this issue or knows a fix?
The farther away the center of a large entity is, the less accurate the positioning is?
For example, I am changing only the y-axis position of an entity that is tens of meters long, but I notice x and z drifting slowly the farther away the center of the entity is. I would not expect x and z to move.
It might be compounding rounding errors somewhere, or maybe the RealityKit engine is deciding not to be super precise about distant objects? Otherwise I just have a bug somewhere.
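For reference, the only per-frame update I'm making is essentially this (names are mine), and logging the world-space position is how I'm observing the drift:

// The only value I change; x and z are never written, yet they drift over time.
largeEntity.position.y = targetHeight
print(largeEntity.position(relativeTo: nil))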
In SceneKit, when creating an .scn file from a rigged model, the framework created an SCNNode for each bone/joint, so you could add and remove child nodes directly on joints, and, like any other SCNNode, you could access each joint's world position and world orientation. The analog would be for joints to be accessible as child entities of a ModelEntity in RealityKit. I am unable to proceed with migrating my project from SceneKit because of this: there does not seem to be a way to even access the true world position of a joint with the current jointNames/jointTransforms paradigm.
The translation information in the given transforms is insufficient to determine the location of a joint at any given time, and other approaches, like creating a GeometricPin for the given joint name and attaching it to another entity, do not seem to work. Conveniently attaching an item to the hand of a rigged model was trivial in SceneKit and now feels impossible in RealityKit.
I am not the first person to notice this, and I am feeling demoralized about proceeding with RealityKit with such a critical piece of functionality blocked:
https://stackoverflow.com/questions/76726241/how-do-i-attach-an-entity-to-a-skeletons-joint-in-realitykit
Will this be addressed in some way?
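For what it's worth, the closest I've gotten is chaining the parent-relative jointTransforms myself. A rough sketch (the helper is my own, and it assumes jointNames entries are slash-separated ancestor paths with parent-relative transforms):

import RealityKit
import simd

// Approximate the model-space transform of a joint by chaining the transforms of its ancestors.
func modelSpaceTransform(ofJoint jointPath: String, in model: ModelEntity) -> simd_float4x4? {
    guard model.jointNames.contains(jointPath) else { return nil }
    // Build the ancestor chain root-first: "root", "root/hips", "root/hips/spine", ...
    let components = jointPath.split(separator: "/").map(String.init)
    let ancestorPaths = (1...components.count).map { components.prefix($0).joined(separator: "/") }
    var matrix = matrix_identity_float4x4
    for path in ancestorPaths {
        guard let index = model.jointNames.firstIndex(of: path) else { continue }
        matrix *= model.jointTransforms[index].matrix
    }
    return matrix
}

// Usage: multiply by the entity's own world matrix to estimate a joint's world position.
// let jointWorld = model.transformMatrix(relativeTo: nil) * modelSpaceTransform(ofJoint: "root/hips/spine", in: model)!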
Is there any standard way of efficiently showing a MTLTexture on a RealityKit Entity?
I can't find anything proper on how to, for example, generate a LowLevelTexture out of an MTLTexture. The closest match was this two-year-old thread.
In the old SceneKit app, we would just do
guard let material = someNode.geometry?.materials.first else { return }
material.diffuse.contents = mtlTexture
Our flow is as follows (for visualizing the currently detected object):
Camera-Stream -> CoreML Segmentation -> Send the relevant part of the MLShapedArray-Tensor to a MTLComputeShader that returns a MTLTexture -> Show the resulting texture on a 3D object to the user
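For reference, the closest thing I've found so far is going through a TextureResource.DrawableQueue. A rough sketch of what I mean (the bridge type is my own, and the pixel format and size are placeholders that would have to match the compute output):

import RealityKit
import Metal
import CoreGraphics

// Sketch: feed an MTLTexture into RealityKit through a TextureResource.DrawableQueue.
struct MetalTextureBridge {
    let drawableQueue: TextureResource.DrawableQueue
    let textureResource: TextureResource

    init(placeholderImage: CGImage, width: Int, height: Int) throws {
        let descriptor = TextureResource.DrawableQueue.Descriptor(
            pixelFormat: .bgra8Unorm,
            width: width,
            height: height,
            usage: [.shaderRead, .renderTarget],
            mipmapsMode: .none)
        drawableQueue = try TextureResource.DrawableQueue(descriptor)
        // Any texture resource works as a starting point; the queue replaces its contents.
        textureResource = try TextureResource.generate(from: placeholderImage,
                                                       options: .init(semantic: .color))
        textureResource.replace(withDrawables: drawableQueue)
    }

    // Analogous to material.diffuse.contents in the old SceneKit code.
    func apply(to model: ModelEntity) {
        var material = UnlitMaterial()
        material.color = .init(texture: .init(textureResource))
        model.model?.materials = [material]
    }

    // Call per new frame: blit the compute shader's output into the next drawable.
    func present(_ mtlTexture: MTLTexture, commandQueue: MTLCommandQueue) throws {
        let drawable = try drawableQueue.nextDrawable()
        guard let commandBuffer = commandQueue.makeCommandBuffer(),
              let blit = commandBuffer.makeBlitCommandEncoder() else { return }
        blit.copy(from: mtlTexture, to: drawable.texture)
        blit.endEncoding()
        commandBuffer.commit()
        drawable.present()
    }
}

Whether this is the intended replacement, or whether LowLevelTexture is now the preferred route, is exactly what I'd like to know.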
I am creating a RealityKit scene that will contain over 12,000 duplicate cubes arranged in a circle (see image below). This is for some high-energy physics simulations I am doing. I build the scene by creating a single cube and cloning it many times, so there is a single MeshResource and Material even though there are a lot of entities; I have confirmed this with Swift's === operator. Even so, the program is unworkably slow.
Any suggestions or tricks that could help with this type of scene?
Using a single geometry was the trick to getting SceneKit to work fast with geometries like this. I've been updating my software to RealityKit because I far prefer the structure of RealityKit over SceneKit.
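In case it helps the discussion, the direction I've started experimenting with is porting that same single-geometry trick: baking all cube positions into one MeshDescriptor so RealityKit sees a single merged ModelEntity. A rough sketch of my own (normals and UVs omitted for brevity):

import RealityKit
import simd

// Merge every cube into one MeshDescriptor; a real version would also supply .normals
// (and possibly chunk the mesh if it gets very large).
func mergedCubesMesh(positions: [SIMD3<Float>], cubeSize: Float) throws -> MeshResource {
    let half = cubeSize / 2
    // Cube corner offsets and the 12 triangles (36 indices, counter-clockwise winding).
    let corners: [SIMD3<Float>] = [
        SIMD3(-half, -half, -half), SIMD3( half, -half, -half),
        SIMD3( half,  half, -half), SIMD3(-half,  half, -half),
        SIMD3(-half, -half,  half), SIMD3( half, -half,  half),
        SIMD3( half,  half,  half), SIMD3(-half,  half,  half),
    ]
    let cubeIndices: [UInt32] = [
        0, 2, 1, 0, 3, 2,   4, 5, 6, 4, 6, 7,   // back, front
        0, 1, 5, 0, 5, 4,   3, 7, 6, 3, 6, 2,   // bottom, top
        0, 4, 7, 0, 7, 3,   1, 2, 6, 1, 6, 5,   // left, right
    ]

    var vertices: [SIMD3<Float>] = []
    var indices: [UInt32] = []
    vertices.reserveCapacity(positions.count * 8)
    indices.reserveCapacity(positions.count * 36)
    for (i, center) in positions.enumerated() {
        let base = UInt32(i * 8)
        vertices.append(contentsOf: corners.map { center + $0 })
        indices.append(contentsOf: cubeIndices.map { base + $0 })
    }

    var descriptor = MeshDescriptor(name: "mergedCubes")
    descriptor.positions = MeshBuffers.Positions(vertices)
    descriptor.primitives = .triangles(indices)
    return try MeshResource.generate(from: [descriptor])
}

// Usage sketch: one entity, one material, one mesh for all 12,000 cubes.
// let entity = ModelEntity(mesh: try mergedCubesMesh(positions: cubeCenters, cubeSize: 0.02),
//                          materials: [SimpleMaterial()])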
Hello Dev Community,
I've been thinking about Apple's preference for USDZ for AR and 3D content, especially given the widely used glTF format. I'm keen to discuss this choice and hear your insights.
USDZ, backed by Apple, has seen a surge in the AR community. It boasts advantages like compactness, animation support, and ARKit compatibility. In contrast, glTF is also a popular format with its own merits, such as being an open standard and offering flexibility.
Here are some of my questions about the use of USDZ:
Why did Apple choose USDZ over other 3D file formats like glTF?
What benefits does USDZ bring to Apple's AR and 3D content ecosystem?
Are there any limitations of USDZ compared to other file formats?
Could factors like compatibility, security, or integration ease have influenced Apple's decision?
I would love to hear your thoughts on this. Feel free to share any experiences with USDZ or other 3D file formats within Apple's ecosystem!
Hi experts,
When I open a USDZ file that contains perspective cameras with the Files app on iOS 18.2 / iPadOS 18.2, I can't see anything, whereas the same file works well on iOS 18.1 / iPadOS 18.1.
On the other hand, when I open a USDZ file that contains orthographic cameras on either iOS 18.1 or iOS 18.2, the scene is frozen.
Could you help solve these issues, please?
Thanks.
Hi! I'm having issues retrieving the intrinsics matrix of camera poses for photogrammetry sessions.
The camera object always seems to be nil, no matter what dataset I use.
From the documentation, I can't see any indication of this issue. Is there something I need to do on the code side, or is it something related to the photo dataset?
I'm on macOS 15.2.
Hello!
I'm currently building an app where I feed images into a PhotogrammetrySession to create a USDZ. Pretty straightforward, and it works great. We've recently started testing on older devices and have discovered that photogrammetry requires devices that have LiDAR (we've seen some console logs referencing LiDAR when we stumble through a photogrammetry process without checking isSupported first).
Judging from @swredcam's post about ReefScan from November 24 (https://developer.apple.com/forums/thread/769221), it looks like photogrammetry did work on those non-LiDAR devices. In my own testing on an iPhone 12 mini with iOS 17, PhotogrammetrySession says it's not supported.
Since we're only feeding in a sequence of photos that have never had depth data, and they process fine on Pro/Max devices, we're curious why this would require a LiDAR sensor when it seems to have worked without LiDAR in the past. Or is there some other limitation of non-Pro devices that causes photogrammetry to be unsupported (especially on today's really powerful hardware)?
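For reference, the gate we're hitting is just the documented support check we run before starting a session:

import RealityKit

// On the iPhone 12 mini (iOS 17) this returns false, so we never reach session creation.
func canRunPhotogrammetry() -> Bool {
    guard PhotogrammetrySession.isSupported else {
        print("PhotogrammetrySession is not supported on this device")
        return false
    }
    return true
}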
Thanks!
++md
I am an AR developer working on Apple Silicon Macs. Currently, Reality Composer Pro does not allow exporting .reality files, and Reality Composer (classic) is not available for Apple Silicon. This creates a gap in the workflow for ARKit/RealityKit developers who need interactive .reality files for use in Xcode projects.
Having the ability to export .reality files directly from Reality Composer Pro on Mac would greatly streamline development and enable a fully native workflow on modern Macs. Alternatively, bringing Reality Composer (classic) to Apple Silicon would also resolve this issue.
I have submitted this as a feature request via Feedback Assistant (FB17900386). I encourage others with similar needs to reply or submit feedback as well.
Thank you!
Hi! I watched the WWDC25 session "Bring your SceneKit project to RealityKit" which seemed like a great resource for those of us transitioning from the now-deprecated SceneKit framework. The session mentioned that the full sample code for the project would be available to download, but I haven't been able to find it in the Code section of the video page or in the Sample Code Library.
Has the sample code been released yet? Having the project code would make it much easier to follow along with the RealityKit changes shown in the video. Thanks again for the great session.
I'm developing a 3D scanner app that runs on iPad.
I'm using AVCapturePhoto and PhotogrammetrySession.
My photo capture delegate looks like this:
extension PhotoCaptureDelegate: AVCapturePhotoCaptureDelegate {
    func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
        let fileUrl = CameraViewModel.instance.imageDir!.appendingPathComponent("\(PhotoCaptureDelegate.name)\(id).heic")

        // Wrap the captured pixel buffer in a CIImage, carrying the photo metadata.
        let img = CIImage(cvPixelBuffer: photo.pixelBuffer!, options: [ .auxiliaryDepth: true, .properties: photo.metadata ])

        // Convert the captured depth to 32-bit float and embed it in the HEIC file.
        let depthData = photo.depthData!.converting(toDepthDataType: kCVPixelFormatType_DepthFloat32)
        let colorSpace = CGColorSpace(name: CGColorSpace.sRGB)
        let fileData = CIContext().heifRepresentation(of: img, format: .RGBA8, colorSpace: colorSpace!, options: [ .avDepthData: depthData ])
        try? fileData!.write(to: fileUrl, options: .atomic)
    }
}
However, the PhotogrammetrySession emits these warnings:
Sample 0 missing LiDAR point cloud!
Sample 1 missing LiDAR point cloud!
Sample 2 missing LiDAR point cloud!
Sample 3 missing LiDAR point cloud!
Sample 4 missing LiDAR point cloud!
Sample 5 missing LiDAR point cloud!
Sample 6 missing LiDAR point cloud!
Sample 7 missing LiDAR point cloud!
Sample 8 missing LiDAR point cloud!
Sample 9 missing LiDAR point cloud!
Sample 10 missing LiDAR point cloud!
The session creates a USDZ 3D model, but the scale is not correct. I think a point cloud could help the PhotogrammetrySession find the right scale, but I don't know how to attach one.
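In case it clarifies what I'm after, here is the direction I'm considering instead of writing HEIC files: feeding PhotogrammetrySample values directly, since depth plus gravity is, as far as I understand, what the session can use to recover metric scale without a LiDAR point cloud. This is only a sketch of mine, not something I've confirmed fixes the scale:

import AVFoundation
import CoreMotion
import RealityKit

// Hypothetical helper: build a PhotogrammetrySample from the captured photo,
// attaching the depth map and a gravity vector from CoreMotion.
func makeSample(id: Int, photo: AVCapturePhoto, gravity: CMAcceleration?) -> PhotogrammetrySample? {
    guard let pixelBuffer = photo.pixelBuffer else { return nil }
    var sample = PhotogrammetrySample(id: id, image: pixelBuffer)
    if let depth = photo.depthData?.converting(toDepthDataType: kCVPixelFormatType_DepthFloat32) {
        sample.depthDataMap = depth.depthDataMap
    }
    if let gravity {
        sample.gravity = SIMD3<Double>(gravity.x, gravity.y, gravity.z)
    }
    return sample
}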
Hi, I have set up an animation using a timeline in Reality Composer Pro, like below.
It gets triggered by a notification posted from the code. Once the timeline is triggered, I want the two animations in it to repeat until the user takes the next action. How can I make them repeat forever?
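A direction I'm considering, in case the timeline is also exposed through the entity's availableAnimations (I haven't confirmed that it is), would be to play it from code with an infinite repeat instead of relying on the notification trigger. A rough sketch:

import RealityKit

// Play the first available animation on the entity and repeat it until stopped.
func playTimelineForever(on entity: Entity) {
    guard let animation = entity.availableAnimations.first else { return }
    entity.playAnimation(animation.repeat(duration: .infinity),
                         transitionDuration: 0,
                         startsPaused: false)
}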