Hi,
I'm trying to rotate an entity on Vision Pro.
Most of the code is the same as the Diorama code from WWDC23.
The problem I'm having is that the rotation occurs, but the axis of rotation is not the center of my object.
It seems to be centered on the zero coordinate of the immersive space. How do I change the rotation3DEffect to tell it to rotate around the entity, not the space?
Is it even possible?
This is the code; the rotation is at the end.
var body: some View {
    @Bindable var viewModel = viewModel

    RealityView { content, _ in
        do {
            let entity = try await Entity(named: "DioramaAssembled", in: RealityKitContent.RealityKitContentBundle)
            viewModel.rootEntity = entity
            content.add(entity)
            viewModel.updateScale()

            // Offset the scene so it doesn't appear underneath the user or conflict with the main window.
            entity.position = SIMD3<Float>(0, 0, -2)

            subscriptions.append(content.subscribe(to: ComponentEvents.DidAdd.self, componentType: PointOfInterestComponent.self, { event in
                createLearnMoreView(for: event.entity)
            }))

            entity.generateCollisionShapes(recursive: true)
            entity.components.set(InputTargetComponent())
        } catch {
            print("Error in RealityView's make: \(error)")
        }
    }
    .rotation3DEffect(.radians(currentrotateByX), axis: .y)
    .rotation3DEffect(.radians(currentrotateByY), axis: .x)
}
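In case it helps clarify what I'm after, this is the fallback I'm considering if rotation3DEffect can't be told to pivot on the entity: driving the entity's own transform in the RealityView's update closure, since orientation there is applied about the entity's origin rather than the origin of the immersive space. A rough, untested sketch using the same viewModel.rootEntity and rotation state as above:

RealityView { content, _ in
    // ... same loading code as above ...
} update: { _, _ in
    guard let entity = viewModel.rootEntity else { return }
    // The rotation pivots around the entity's own origin, so the (0, 0, -2)
    // offset stays put while the diorama spins in place.
    entity.transform.rotation =
        simd_quatf(angle: Float(currentrotateByX), axis: SIMD3<Float>(0, 1, 0)) *
        simd_quatf(angle: Float(currentrotateByY), axis: SIMD3<Float>(1, 0, 0))
}

If the model's visual center turns out not to be at its origin, I assume I'd also need to parent it under a pivot entity placed at that center and rotate the pivot instead. Is that the intended approach, or can rotation3DEffect itself be anchored to the entity?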
Hello, I'm currently building an app that implements the on-device object capture API to create 3D models. I have two concerns that I cannot find addressed anywhere on the internet:
Can on-device object capture be performed by devices without LiDAR? I understand that depth data is necessary for making scale-accurate models - if there is an option to disable it, where would one specify that in code?
Can models be exported to .obj instead of .usdz? In the WWDC2021 session (around 3:00) it is mentioned that this is possible with the Apple Silicon API, but what about with on-device scanning?
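For context, this is roughly how I'm driving the reconstruction step (a simplified sketch; the folder and output paths are placeholders for wherever the capture session stores its images). My second question is about the URL passed to the modelFile request:

import Foundation
import RealityKit

func reconstructModel() throws {
    // Placeholder paths for the captured images and the output model.
    let imagesFolder = URL(fileURLWithPath: "/path/to/Images/", isDirectory: true)
    let outputURL = URL(fileURLWithPath: "/path/to/model.usdz")

    let session = try PhotogrammetrySession(input: imagesFolder)

    Task {
        for try await output in session.outputs {
            switch output {
            case .processingComplete:
                print("Reconstruction finished: \(outputURL)")
            case .requestError(let request, let error):
                print("Request \(request) failed: \(error)")
            default:
                break
            }
        }
    }

    // The output format follows from the extension on the request URL;
    // .usdz is the only format I've tried here.
    try session.process(requests: [.modelFile(url: outputURL, detail: .medium)])
}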
I would be very grateful if anyone is knowledgeable enough to provide some insight. Thank you so much!
Hi,
I am investigating how to achieve emissive (glowing) materials like the ones shown in the following links in my visionOS app.
https://www.hiroakit.com/archives/1432
https://blog.terresquall.com/2020/01/getting-your-emission-maps-to-work-in-unity/
Right now, I'm trying various things with Shader Graph in Reality Composer Pro, but from the official documentation and WWDC session videos I can't tell what the individual Shader Graph nodes do or what effects their combinations produce, so I'm having a hard time making progress.
I have a feeling that such luminous materials and expressions are not possible in visionOS to begin with. If there is a way to achieve this, please let me know.
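For concreteness, this is the kind of self-illuminated look I mean, expressed in code rather than Shader Graph: a rough sketch using PhysicallyBasedMaterial's emissive inputs (the surface glows on its own but does not light neighboring geometry). As far as I can tell, the PBR surface in Reality Composer Pro's Shader Graph exposes an Emissive Color input that should map to the same thing, but I'm not sure.

import RealityKit
import UIKit

// A small glowing sphere: black base color so only the emissive term shows.
var glowMaterial = PhysicallyBasedMaterial()
glowMaterial.baseColor = .init(tint: .black)
glowMaterial.emissiveColor = .init(color: UIColor(red: 0.2, green: 0.9, blue: 0.6, alpha: 1.0))
glowMaterial.emissiveIntensity = 2.0

let glowSphere = ModelEntity(
    mesh: .generateSphere(radius: 0.1),
    materials: [glowMaterial]
)

What I can't tell is whether anything beyond this simple glow (for example, bloom that spills onto surrounding geometry, as in the linked articles) is achievable in visionOS.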
Thanks.
I'm trying to better understand how loading entities works. If I do this:
RealityView { content in
    // Add the initial RealityKit content
    if let scene = try? await Entity(named: "RCP_Scene", in: realityKitContentBundle) {
        content.add(scene)
    }
}
It returns the root with the two objects I have in the scene (sphere_01 and sphere_02). If I add a drag gesture to this entity it works on the root and gets applied to both sphere_01 and sphere_02 together (they both individually have collision and input components set to allow gestures). How do I get individual control of sphere_01 and sphere_02? Is it possible to load the root scene, as I'm doing above, and have individual control?
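For reference, this is the direction I've been experimenting with (an untested sketch): targeting the gesture to any entity and using the entity reported by the gesture value, on the assumption that the hit test returns sphere_01 or sphere_02 rather than the root, since each sphere carries its own collision and input-target components.

RealityView { content in
    if let scene = try? await Entity(named: "RCP_Scene", in: realityKitContentBundle) {
        content.add(scene)
    }
}
.gesture(
    DragGesture()
        .targetedToAnyEntity()
        .onChanged { value in
            // value.entity should be the individual sphere under the drag,
            // not the scene root.
            value.entity.position = value.convert(value.location3D,
                                                  from: .local,
                                                  to: value.entity.parent!)
        }
)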
Is there any way to specify a clip volume or clipping planes on either a RealityView or the underlying RealityKit entity on visionOS? This was easy on SceneKit with shader modifiers, or in OpenGL, or WebGL, or with RealityKit on iOS or macOS with CustomMaterial surface shader, but CustomMaterial is not supported on visionOS.
Hello,
I've been trying to leverage instanced rendering in RealityKit on visionOS but have not had success.
RealityKit states this is supported:
https://developer.apple.com/documentation/realitykit/validating-usd-files
https://developer.apple.com/videos/play/wwdc2021/10075/?time=1373
https://developer.apple.com/videos/play/wwdc2023/10099/?time=772
RealityKit Trace metrics
Validating that instancing is working:
To test, I made a base visionOS app with an immersive space and replaced the entity with my test USDZ file. I've been using the RealityKit Trace profiling template in Xcode Instruments with the immersive space open and the volume closed, which gives consistent draw-call results.
If I have a single sphere mesh with one material I get one draw call, but the number of draw calls grows linearly with mesh count no matter how my entity is configured.
What I've tried
Create a test scene in Blender, export with instancing enabled
Create a test scene in Reality Composer Pro using references
Author usda files by hand based on the OpenUSD spec
Programmatically create a MeshResource with Contents at runtime
References
https://openusd.org/release/api/_usd__page__scenegraph_instancing.html
https://developer.apple.com/documentation/realitykit/meshresource
https://developer.apple.com/documentation/realitykit/meshresource/instance
Thank you
We are developing an AR app which uses spatial audio. If we want to use RealityKit to create the app, will we need to use a MacBook Pro running Apple silicon?
I'm developing an app for Apple Vision Pro and have a question about RealityKit. Recently, I attempted to use drag gestures to manipulate two entities, A and B, with my left and right hands respectively. The two entities belong to the same RealityView.
I anticipated that I could move Entity A with my left hand and Entity B with my right hand independently. However, I noticed that the movement of one hand affects both entities simultaneously.
Presumably, DragGesture().onChanged is triggered twice for each entity. In an attempt to properly pair each hand with its corresponding entity, I investigated the platform.manipulatorGroup in the debugger. However, I encountered a compile error when trying to access the platform variable.
Is it feasible to pair each hand with a specific entity and move both objects separately?
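For what it's worth, this is what I plan to try next (a sketch with placeholder entityA/entityB; I haven't verified that visionOS actually delivers both drags at once): one gesture targeted to each entity, combined via simultaneousGesture, with each handler moving only value.entity.

RealityView { content in
    content.add(entityA)
    content.add(entityB)
}
.gesture(
    DragGesture()
        .targetedToEntity(entityA)
        .onChanged { value in
            // Only move the entity this gesture is targeted to.
            value.entity.position = value.convert(value.location3D,
                                                  from: .local,
                                                  to: value.entity.parent!)
        }
)
.simultaneousGesture(
    DragGesture()
        .targetedToEntity(entityB)
        .onChanged { value in
            value.entity.position = value.convert(value.location3D,
                                                  from: .local,
                                                  to: value.entity.parent!)
        }
)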
Thank you in advance.
Hi,
I am implementing a player using RealityKit's VideoPlayerComponent and AVPlayer. When the app enters the immersive space, playback begins, but I only get audio playback; I can't see the video. Do I need to specify the entity's position and size?
struct MyApp: App {
    @State private var playerImmersionStyle: ImmersionStyle = .full

    var body: some Scene {
        WindowGroup {
            ContentView()
        }
        .defaultSize(width: 800, height: 200)

        ImmersiveSpace(id: "playerImmersionStyle") {
            ImmersiveSpaceView()
        }
        .immersionStyle(selection: $playerImmersionStyle, in: playerImmersionStyle)
    }

    func application(_ application: UIApplication,
                     configurationForConnecting connectingSceneSession: UISceneSession,
                     options: UIScene.ConnectionOptions) -> UISceneConfiguration {
        return UISceneConfiguration(name: "My Scene Configuration", sessionRole: connectingSceneSession.role)
    }
}

struct PlayerViewEx: View {
    let entity = Entity()

    var body: some View {
        RealityView() { content in
            let entity = makeVideoEntity()
            content.add(entity)
        }
    }

    public func makeVideoEntity() -> Entity {
        let url = Bundle.main.url(forResource: "football", withExtension: "mov")!
        let asset = AVURLAsset(url: url)
        let playerItem = AVPlayerItem(asset: asset)
        let player = AVPlayer()

        var videoPlayerComponent = VideoPlayerComponent(avPlayer: player)
        videoPlayerComponent.isPassthroughTintingEnabled = true
        entity.components[VideoPlayerComponent.self] = videoPlayerComponent
        entity.scale *= 0.4

        player.replaceCurrentItem(with: playerItem)
        player.play()

        return entity
    }
}

#Preview {
    PlayerViewEx()
}
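My current guess is that the video screen is spawning at the origin of the immersive space, which sits on the floor at the user's feet, so it is simply out of view. The next thing I plan to try is positioning the entity in front of the viewer inside makeVideoEntity(), roughly like this (values are guesses):

// Place the video screen at about eye height, 2 m in front of the viewer.
entity.position = SIMD3<Float>(0, 1.5, -2)

Is that the expected fix, or does VideoPlayerComponent also require an explicit size?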
I'm adding an AVPlayer as an attachment on the side using RealityKit. The video in it, though, is not aligned. Any thoughts on what could be going wrong?
RealityView { content, attachments in
    let url = self.video.resolvedURL
    let asset = AVURLAsset(url: url)
    let playerItem = AVPlayerItem(asset: asset)

    var videoPlayerComponent = VideoPlayerComponent(avPlayer: player)
    videoPlayerComponent.isPassthroughTintingEnabled = true
    // entity.components[VideoPlayerComponent.self] = videoPlayerComponent

    entity.position = [0, 0, 0]
    entity.scale *= 0.50

    player.replaceCurrentItem(with: playerItem)
    player.play()

    content.add(entity)
} update: { content, attachments in
    // if content.entities.count < 2 {
    if showAnotherPlayer {
        if let attachment = attachments.entity(for: "Attachment") {
            playerModel.loadVideo(library.selectedVideo!, presentation: .fullWindow)

            // 4. Position the Attachment and add it to the RealityViewContent
            attachment.position = [1.0, 0, 0]
            attachment.scale *= 1.0
            // let radians = -45.0 * Float.pi / 180.0
            // attachment.transform.rotation += simd_quatf(angle: radians, axis: SIMD3<Float>(0,1,0))

            let entity = content.entities.first
            attachment.setParent(entity)
            content.add(attachment)
        }
    }

    if showLibrary {
        if let attachment = attachments.entity(for: "Featured") {
            // 4. Position the Attachment and add it to the RealityViewContent
            attachment.position = [0.0, -0.3, 0]
            attachment.scale *= 0.7
            // let radians = -45.0 * Float.pi / 180.0
            // attachment.transform.rotation += simd_quatf(angle: radians, axis: SIMD3<Float>(0,1,0))

            let entity = content.entities.first
            attachment.setParent(entity)
            viewModel.attachment = attachment
            content.add(attachment)
        }
    } else {
        if let scene = content.entities.first?.scene {
            let _ = print("found scene")
        }
        if let featuredEntity = content.entities.first?.scene?.findEntity(named: "Featured") {
            let _ = print("featured entity found")
        }
        if let attachment = viewModel.attachment {
            let _ = print("-- removing attachment")
            if let anchor = attachment.anchor {
                let _ = print("-- removing anchor")
                anchor.removeFromParent()
            }
            attachment.removeFromParent()
            content.remove(attachment)
        } else {
            let _ = print("the attachment is missing")
        }
    }
    // }
} attachments: {
    Attachment(id: "Attachment") {
        PlayerView()
            .frame(width: 2048, height: 1024)
            .environment(library)
            .environment(playerModel)
            .onAppear {
                DispatchQueue.main.asyncAfter(deadline: .now() + 1) {
                    playerModel.play()
                }
            }
            .onDisappear {
            }
    }
    if showLibrary {
        Attachment(id: "Featured") {
            VideoListView(title: "Featured",
                          videos: library.videos,
                          cardStyle: .full,
                          cardSpacing: 20) { video in
                library.selectedVideo = video
                showAnotherPlayer = true
            }
            .frame(width: 2048, height: 1024)
        }
    }
}
PlayerView
I am trying to make a shader for a disco ball lighting effect for my app. I want the light to reflect on the scene mesh.
I was curious if anyone has pointers on how to do this in Shader Graph in Reality Composer Pro, or by writing a surface shader.
The effect rotates the dots as the ball spins.
This is the effect in Apple's Clips app, which applies it to the scene mesh.
Hey, I'm wondering what would be the proper way to add RealityView content asynchronously, while doing the heavy lifting in a background thread. My use case is that I am generating procedural geometry which takes a few seconds to complete. Meanwhile I would like the UI to show other geometry / UI elements and the Main thread to be responsive.
Basically what I would like to do, in pseudocode, is:
runInBackgroundThread {
    let geometry = generateGeometry() // CPU intensive, takes 1-2 s
    let entity = createEntity(geometry) // CPU intensive, takes ~1 s
    let material = try! await ShaderGraphMaterial(..)
    entity.model!.materials = [material]
    runInMainThread {
        addToRealityViewContent(entity)
    }
}
With this I am running into many issues, especially with the material, which apparently cannot be constructed on a non-main thread and cannot be passed across thread boundaries.
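To make the question concrete, this is the structure I'm currently attempting: the heavy work runs in a detached task and returns only value types (a MeshDescriptor), and everything RealityKit-related happens back on the main actor. GeometryModel, pendingEntities, generateGeometryDescriptor(), and the material paths are placeholders for my own code.

import SwiftUI
import RealityKit
import RealityKitContent

// Placeholder for the expensive geometry generation; returns only a value type
// so it can run off the main actor.
func generateGeometryDescriptor() -> MeshDescriptor {
    var descriptor = MeshDescriptor(name: "procedural")
    descriptor.positions = MeshBuffers.Positions([
        SIMD3<Float>(0, 0, 0),
        SIMD3<Float>(0, 0.1, 0),
        SIMD3<Float>(0.1, 0, 0)
    ])
    descriptor.primitives = .triangles([0, 1, 2])
    return descriptor
}

@MainActor @Observable
final class GeometryModel {
    // Read by the RealityView's update closure, which adds these to content.
    var pendingEntities: [Entity] = []

    func buildInBackground() {
        Task {
            // Heavy CPU work off the main actor; only a value type crosses back.
            let descriptor = await Task.detached(priority: .userInitiated) {
                generateGeometryDescriptor()
            }.value

            // RealityKit objects are created back on the main actor.
            guard let mesh = try? MeshResource.generate(from: [descriptor]) else { return }

            var material: any Material = SimpleMaterial()
            if let shaderGraph = try? await ShaderGraphMaterial(
                named: "/Root/MyMaterial",   // placeholder material path
                from: "Materials",           // placeholder scene in the RCP package
                in: realityKitContentBundle
            ) {
                material = shaderGraph
            }

            pendingEntities.append(ModelEntity(mesh: mesh, materials: [material]))
        }
    }
}

Even with this shape I'm not sure whether constructing the ShaderGraphMaterial on the main actor and keeping it there is the intended pattern, hence the question.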
The transparency in RealityKit is not rendered properly from specific ordinal axes. It seems like a depth-sorting issue where it is rejecting some transparent surfaces when it should not. Some view directions relative to specific ordinal axes are fine; I have not narrowed down which specific axis is the problem. This is true across particle systems and/or meshes. It is very easy to replicate the issue using multiple transparent meshes or particle systems.
In the above gif you can see the problem in multiple instances: the fire and snow particles are sorted behind the terrain, which has transparency since it is a procedural blend of grass, rock, and ice, but they are correctly sorted in front of opaque materials such as the rocks and wood.
In the above gif there are two back-to-back grid meshes (since double-sided rendering is not supported) that have a custom surface shader to animate the mesh in a wave and also apply transparency. In the distance the transparency seems to be rendered/overlapped correctly, but as the overlap approaches the screen (and crosses an ordinal axis) the transparent portion of the surface renders black, when the green of the mesh behind should be rendered.
This is a blocking problem for the development of this demo.
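One workaround I'm considering for the mesh-vs-mesh case (it doesn't help the particle systems) is forcing an explicit draw order with a model sort group. A sketch, where terrainMesh and waterMesh are placeholders for the two overlapping transparent surfaces:

// Entities in the same ModelSortGroup are drawn in ascending `order`,
// regardless of view direction.
let transparentGroup = ModelSortGroup(depthPass: nil)
terrainMesh.components.set(ModelSortGroupComponent(group: transparentGroup, order: 0)) // drawn first
waterMesh.components.set(ModelSortGroupComponent(group: transparentGroup, order: 1))   // drawn last

But that is a per-entity band-aid, not a fix for the underlying sorting behavior.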
I'm trying to get a similar experience to Apple TV's immersive videos, but I cannot figure out how to present the AVPlayerViewController controls detached from the video.
I am able to use the same AVPlayer in a window and projected on a VideoMaterial, but I can't figure out how to just present the controls, while displaying the video only in the 3D entity, without having a 2D projection in any view.
Is this even possible?
I'm trying to build a project with a moderately complex Reality Composer Pro project, but am unable to because my Mac mini (2023, 8GB RAM) keeps running out of memory.
I'm wondering if there are any known memory leaks in realitytool, but basically the tool is taking up 20-30GB (!) memory during builds.
I have a Mac Pro for content creation, which is why I didn't go for more RAM on the mini – it was supposed to just be a build machine for Apple Silicon compatibility, as my Pro is Intel.
But, I'm kinda stuck here.
I have a scene that builds fine, but any time I add a USD – in this case a tree asset – with lots of instances, or a lot of geometry, I run into the memory issue. I've tried greatly simplifying the model, but even a 2MB USD results in the crash. I'm failing to see how adding a 2MB asset would cause the memory of realitytool to balloon so much during builds.
If someone from Apple is willing to look, I can provide the scene – but it's proprietary so I can't just post it publicly here.
Hello, I would like to change the aspect (scale, texture, color) of a 3D element (ModelEntity) when I hover it with my eyes. What should I do if I want to create a request for this feature? And how would I know if it will ever be considered, or when it will appear?
I'm currently developing an application where the models present inside a volumetric window may exceed the clipping boundaries of the window (which I currently understand to be a maximum of 2m).
Because of this, as models move through the clipping boundaries, the interior of the models becomes visible. If possible, I'd like to cap these interiors with a solid fill so as to make them more visually appealing.
However, as far as I can tell, I'm quite limited in how I might be able to achieve this when using RealityKit on VisionOS.
Some approaches I've seen to accomplish similar effects seem to use multiple passes of model geometries rendering into stencil buffers and using that to inform whether or not a cap should be drawn. However, as far as I can tell, if I have opted into using a RealityView and RealityKit, I don't have the level of control over my render pipeline that would let me render ModelEntities and also run multiple rendering passes over the set of contained entities into a stencil buffer that I then provide to a separate set of "capping planes" (how I currently imagine I might accomplish this effect).
Alternatively ( due to the nature of the models I'm using ) I considered using a height map to construct an approximation of a surface cap, but how I might use a shader to construct a height map of rendered entities seems similarly difficult using the VisionOS RealityView pipeline. It is not obvious to me how I could use a ShaderGraphMaterial to render to an arbitrary image buffer that I might then pass to other functions to use as an input; ShaderGraphMaterial seems biased to the fact that all image inputs and outputs are either literal files or the actual rendered buffer.
Would anyone out there have already created an effect like this that might have some advice? Or, potentially correct any misunderstandings I have with regards to accessing the Metal pipeline for RealityView or using ShaderGraphMaterial to construct a height map?
extension Entity {
    func addPanoramicImage(for media: WRMedia) {
        let subscription = TextureResource.loadAsync(named: "image_20240425_201630").sink(
            receiveCompletion: {
                switch $0 {
                case .finished: break
                case .failure(let error): assertionFailure("\(error)")
                }
            },
            receiveValue: { [weak self] texture in
                guard let self = self else { return }
                var material = UnlitMaterial()
                material.color = .init(texture: .init(texture))
                self.components.set(ModelComponent(
                    mesh: .generateSphere(radius: 1E3),
                    materials: [material]
                ))
                self.scale *= .init(x: -1, y: 1, z: 1)
                self.transform.translation += SIMD3(0.0, -1, 0.0)
            }
        )
        components.set(Entity.WRSubscribeComponent(subscription: subscription))
    }
}
The problem: the load fails and hits the failure case:
case .failure(let error): assertionFailure("\(error)")
Thread 1: Fatal error: Error Domain=MTKTextureLoaderErrorDomain Code=0 "Image decoding failed" UserInfo={NSLocalizedDescription=Image decoding failed, MTKTextureLoaderErrorKey=Image decoding failed}
I can't find a way to download a USDZ at runtime and load it into a RealityView with RealityKit.
As an example, imagine downloading one of the 3D models from this Apple Developer page: https://developer.apple.com/augmented-reality/quick-look/
I think the process should be:
Download the file from the web and store in temporary storage with the FileManager API
Load the entity from the temp file location using Entity.init (I believe Entity.load is being deprecated in Swift 6 - throws up compiler warning) - https://developer.apple.com/documentation/realitykit/loading-entities-from-a-file
Add the entity to content in the Reality View.
I'm doing this at runtime on visionOS in the simulator. I can get this to work with textures using slightly different APIs, so I think the logic is sound, but in that case I'm creating the entity with a mesh and material. Not sure if file size has an effect.
Is there any official guidance or a code sample for this use case?
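For reference, this is the flow I've been attempting (a sketch; the rename to a .usdz extension is my own guess about what the loader needs, and error handling is minimal):

import SwiftUI
import RealityKit

struct RemoteUSDZView: View {
    let remoteURL = URL(string: "https://developer.apple.com/augmented-reality/quick-look/models/retrotv/tv_retro.usdz")!

    var body: some View {
        RealityView { content in
            do {
                // 1. Download to temporary storage.
                let (tempURL, _) = try await URLSession.shared.download(from: remoteURL)

                // Give the file a .usdz extension so the loader recognizes it.
                let usdzURL = tempURL.deletingPathExtension().appendingPathExtension("usdz")
                try? FileManager.default.removeItem(at: usdzURL)
                try FileManager.default.moveItem(at: tempURL, to: usdzURL)

                // 2. Load the entity from the temp file location.
                let entity = try await Entity(contentsOf: usdzURL)

                // 3. Add it to the RealityView content.
                entity.position = [0, 1, -1.5]
                content.add(entity)
            } catch {
                print("Failed to load remote USDZ: \(error)")
            }
        }
    }
}

Is something along these lines the supported approach?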
Context
https://developer.apple.com/forums/thread/751036
I found some sample code that does the process I described in my other post for ModelEntity here: https://www.youtube.com/watch?v=TqZ72kVle8A&ab_channel=ZackZack
At runtime I'm loading:
Immersive scene in a RealityView from Reality Composer Pro with the robot model baked into the file (not remote - asset in project)
A Model3D view that pulls in the robot model from the web url
A RemoteObjectView (RealityView) which downloads the model to temp, creates a ModelEntity, and adds it to the content of the RealityView
Method 1 above is fine, but Methods 2 + 3 load the model with a pure black texture for some reason.
Ideal state is Methods 2 + 3 look like the Method 1 result (see screenshot).
Am I doing something wrong? E.g., should I not be using multiple RealityViews at once?
Screenshot
Code
struct ImmersiveView: View {
    var body: some View {
        RealityView { content in
            // Add the initial RealityKit content
            if let immersiveContentEntity = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
                content.add(immersiveContentEntity)

                // Add an ImageBasedLight for the immersive content
                guard let resource = try? await EnvironmentResource(named: "ImageBasedLight") else { return }
                let iblComponent = ImageBasedLightComponent(source: .single(resource), intensityExponent: 0.25)
                immersiveContentEntity.components.set(iblComponent)
                immersiveContentEntity.components.set(ImageBasedLightReceiverComponent(imageBasedLight: immersiveContentEntity))

                // Put skybox here. See example in World project available at
                // https://developer.apple.com/
            }
        }

        Model3D(url: URL(string: "https://developer.apple.com/augmented-reality/quick-look/models/vintagerobot2k/robot_walk_idle.usdz")!)

        SkyboxView()

        // RemoteObjectView(remoteURL: "https://developer.apple.com/augmented-reality/quick-look/models/retrotv/tv_retro.usdz")
        RemoteObjectView(remoteURL: "https://developer.apple.com/augmented-reality/quick-look/models/vintagerobot2k/robot_walk_idle.usdz")
    }
}
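One theory I have about the black textures: the ImageBasedLight above is only applied to the Reality Composer Pro scene entity, so the remotely loaded models may have nothing lighting their PBR materials. I'm planning to try giving the remote entity the same IBL components inside RemoteObjectView's RealityView closure (sketch; remoteEntity stands in for the entity created there):

if let resource = try? await EnvironmentResource(named: "ImageBasedLight") {
    let ibl = ImageBasedLightComponent(source: .single(resource), intensityExponent: 0.25)
    remoteEntity.components.set(ibl)
    remoteEntity.components.set(ImageBasedLightReceiverComponent(imageBasedLight: remoteEntity))
}

Does that sound like the right direction, or is something else going on with multiple RealityViews?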
}