RealityKit

Simulate and render 3D content for use in your augmented reality apps using RealityKit.

Posts under RealityKit tag

200 Posts

RealityKit, DrawableQueue, and synchronizing scene updates
I have a visionOS app that uses DrawableQueue and CADisplayLink to update an Entity, a TextureResource tied to the drawable, and a Material that uses that TextureResource. The TextureResource is updated when a video frame is ready; material properties can be updated from the video or from other sources. Current process: when each video frame is ready, we get the next drawable, render to it, present it, and make an Entity update (e.g. a transform change). However, I'm experiencing jitter in the rendered content, where the updates to the entity and the drawable being presented appear to be milliseconds apart. Should I be using Drawable.presentOnSceneUpdate() to ensure all updates happen in the same update cycle? And if so, do you have any additional details on how to use this function correctly (the docs are unclear)?
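For context, here is a simplified sketch of my per-frame update path. textureResource, renderer, videoEntity, and latestTranslation are placeholders for the real objects in my pipeline, and the queue settings are just illustrative:

    import RealityKit
    import Metal

    // One-time setup: back the material's texture with a drawable queue.
    let descriptor = TextureResource.DrawableQueue.Descriptor(
        pixelFormat: .bgra8Unorm,
        width: 1920, height: 1080,
        usage: [.renderTarget, .shaderRead],
        mipmapsMode: .none)
    let drawableQueue = try TextureResource.DrawableQueue(descriptor)
    textureResource.replace(withDrawables: drawableQueue)

    // Called from the CADisplayLink callback when a new video frame is ready.
    func videoFrameReady(_ frame: MTLTexture) {
        guard let drawable = try? drawableQueue.nextDrawable() else { return }

        renderer.copy(frame, to: drawable.texture)  // placeholder for my Metal render/blit pass
        drawable.present()                          // should this be presentOnSceneUpdate() instead?

        // Entity update made at the same time, but apparently not in the same scene update.
        videoEntity.transform.translation = latestTranslation
    }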
Replies: 0 · Boosts: 1 · Views: 155 · Activity: 1w
visionOS object tracking
We have a visionOS project that was started with visionOS 1.1, and now, with the latest visionOS, we want to use object tracking. We add the Anchor component to our scene in Reality Composer Pro and select a referenceObject, but when we try to run the app on a physical device we get the following error:

    [xrsimulator] Exception thrown: This asset (ARReferenceObject_0.compiledarreferenceobject) is not supported by this version of RealityKit.
Replies: 1 · Boosts: 0 · Views: 119 · Activity: 1w
Editor (Reality Composer Pro): IBL component settings issue
In the Reality Composer Pro editor, I can edit and attach the IBL component in real time, and the preview in the editor looks correct. However, when I load the exported scene in Xcode, some models render black. I have tried several model formats (usda, usdc, usdz) and the result is the same. If I create the IBL effect in code instead, it works correctly. I suspect that the IBL component exported from Reality Composer Pro has a limit on the number of materials it can affect.
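For reference, this is roughly how I apply the IBL in code, which does light the models correctly; the environment resource name ("MyEnvironment") is a placeholder for my actual asset:

    import RealityKit

    func setUpImageBasedLight(on modelEntity: Entity) async throws {
        // Load an environment resource bundled with the app (name is a placeholder).
        let environment = try await EnvironmentResource(named: "MyEnvironment")

        // Use it as an image-based light and have the model receive it.
        let iblEntity = Entity()
        iblEntity.components.set(ImageBasedLightComponent(source: .single(environment)))
        modelEntity.components.set(ImageBasedLightReceiverComponent(imageBasedLight: iblEntity))
        modelEntity.addChild(iblEntity)
    }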
Replies: 0 · Boosts: 0 · Views: 141 · Activity: 1w
App Environment SkyDome's UV values
I started a visionOS app using Apple's new "App Environment" template, and when I looked at the UV mapping for the half SkyDome, the bottom edge had a UV 'Y' value of 0.318. Naively, I had assumed the bottom edge of a half dome would have a UV 'Y' value of 0.5 (halfway up the texture map). Is this the standard UV mapping for half a SkyDome? It has caused some issues when I've applied some HDRIs.
Replies: 0 · Boosts: 0 · Views: 185 · Activity: 2w
How to optimise RealityKit performance with many similar objects
I have code such as the following. The performance on the Vision Pro seems to get quite bad once I hit a few thousand of these models. It feels like I should be able to optimise this somehow, perhaps using instancing. Is that possible with RealityKit in visionOS 2?

    let material = UnlitMaterial(color: .white)
    let sphereModel = ModelEntity(
        mesh: .generateSphere(radius: 0.001),
        materials: [material])

    for index in 0..<5000 {
        let point = generatedPoints[index]
        let model = sphereModel.clone(recursive: false)
        model.position = [point.x, point.y, point.z]
        parent.addChild(model)
    }
Replies: 0 · Boosts: 0 · Views: 206 · Activity: 2w
EnvironmentResource.generate(fromEquirectangular:) does not compile with Xcode 16.0 beta 2
Hi! It seems that Xcode 16 beta 2 thinks that EnvironmentResource.generate(fromEquirectangular:) is unavailable even when the minimum deployment target remains set to visionOS 1.0 (the API is only deprecated, yet Xcode reports a build error). The only way I was able to keep this in place for visionOS 1.x while compiling with Xcode 16 was the following:

    #if compiler(>=6.0)
    var environment: EnvironmentResource?
    if #available(visionOS 2.0, *) {
        environment = try? await EnvironmentResource(equirectangular: skyBoxWithSun())
    } else {
        fatalError("EnvironmentResource.generate(fromEquirectangular:) does not compile with Xcode 16.0 beta 2.")
    }
    #else
    let environment = try? await EnvironmentResource.generate(fromEquirectangular: skyBoxWithSun())
    #endif

This will build with both Xcode 15.4 and 16 beta 2, but it will obviously crash when built with Xcode 16 and run on visionOS 1.x. Do I have any better options? I would like to adopt some visionOS 2.0 features (e.g. try to replace my custom skybox with the new dynamic lighting) but prefer to maintain backward compatibility for now.
Replies: 1 · Boosts: 0 · Views: 217 · Activity: 1w
PortalComponent invalid
Here is my code:

    let portal = Entity()
    portal.components[ModelComponent.self] = .init(
        mesh: .generatePlane(
            width: Float(size.width),
            height: Float(size.height),
            cornerRadius: 0.02),
        materials: [PortalMaterial()])
    portal.components[PortalComponent.self] = .init(target: world)
    portal.components[PortalComponent.self]?.clippingPlane = .init(
        position: SIMD3(x: 0, y: 0, z: 0),
        normal: SIMD3(x: 0, y: 0, z: 0))
    portal.components.set(HoverEffectComponent())

I added a RealityView to multiple HStacks and implemented the portal effect. I found that on some devices the portal effect causes the rendering order to get confused, as shown in the figure.
Replies: 2 · Boosts: 0 · Views: 186 · Activity: 1w
3D Object Capture not working on iPhone 12 Pro
The 3D Object Capture feature doesn't seem to work on my iPhone 12 Pro. The circle that is supposed to show up when you begin to move around the object doesn't appear, so object capture never starts; it just says 'more light..' or 'move closer'. This doesn't happen on my iPhone 14 Pro, which works perfectly fine even with the same lighting. How can this be fixed?
Replies: 1 · Boosts: 0 · Views: 201 · Activity: 1w
Object Tracking Training: Objects to avoid
I'm working on training an object-tracking model in CreateML for visionOS. The object has fan blades on it, and I'd like to train while ignoring that section of the geometry. Currently, the trained model can detect the object when the fan blades are in the same orientation as when it was scanned, but it struggles once they move. I see there is an "objects to avoid" data source that can be added, but from its description I don't think it does what I need (though I could be wrong). Is there any way to have the training ignore a part of the geometry that has a significant effect on the silhouette of the object?
Replies: 0 · Boosts: 0 · Views: 253 · Activity: 2w
Disable reverb effect in immersive spaces
I'm developing an app where a user can bring a video or content from a WKWebView into an immersive space using SwiftUI attachments on a RealityView. This works just fine, but I'm having some trouble configuring how the audio from the web content should sound in an immersive space.

When in windowed mode, content playback sounds just fine and very natural. The spatial audio effect with head tracking is pronounced and adds depth to content with multichannel or Dolby Atmos audio. When I move the same web view into an immersive space, however, the audio becomes excessively echoey, as if a large amount of reverb has been applied. The spatial audio effect is also reduced, and while still there, is nowhere near as immersive.

I've tried the following:

Setting all entities in my space to use channel audio, including the web view attachment:

    for entity in content.entities {
        entity.channelAudio = ChannelAudioComponent()
        entity.ambientAudio = nil
        entity.spatialAudio = nil
    }

Changing the AVAudioSessionSpatialExperience. I've tried every soundstage size and anchoring strategy; large works the best, but doesn't remove that reverb:

    let experience = AVAudioSessionSpatialExperience.headTracked(
        soundStageSize: .large,
        anchoringStrategy: .automatic
    )
    try? AVAudioSession.sharedInstance().setIntendedSpatialExperience(experience)

I'm also aware of ReverbComponent in visionOS 2 (which I haven't updated to just yet), but ideally I need a way to configure this for visionOS 1 users too. Am I missing something? Surely there's a way for developers to stop the system from messing with the audio and applying these effects? A few of my users have complained that the audio sounds considerably worse in my cinema immersive space compared to in a window.
Replies: 2 · Boosts: 0 · Views: 316 · Activity: 2w
Help with Virtual Hands
At the end of the WWDC 2023 session https://developer.apple.com/wwdc23/10111 the presenter talks about implementing virtual hands. However, the code shown in the video is no longer correct with the latest version of visionOS, and the example "SpaceGloves" entity referenced in the video is not documented; there is no example project resource to reference either. For the last two weeks I have been trying to implement the same basic example shown in the video, including skinning and rigging my own hand meshes, but I have had significant issues doing so and have found no working code or USDZ examples to reference across various internet resources. Would it be possible to provide a working project/code example, including USDZ files for the hand meshes, as a starting point for building my own virtual hands? Thanks for your help.
Replies: 0 · Boosts: 3 · Views: 289 · Activity: 3w
Blender to Reality Composer Pro 2.0 to SwiftUI + RealityKit visionOS Best Practices
Hi, I'm very new to 3D and am currently porting a SwiftUI iOS app to visionOS 2.0. I saw WWDC24 feature Blender in multiple spatial videos, and have begun integrating Blender models and animations into my visionOS app (I would also like to integrate skeletons and programmatic rigging, more on that later). I'm wondering if there are "best practices" for this workflow, from Blender to USD to Reality Composer Pro 2.0 to visionOS 2 in Xcode. I've cobbled together the following, which has some obvious holes:

I've been able to find some pre-rigged and pre-animated models online that can serve as a great starting point. As a reference, here is a free model from Sketchfab, a simple rigged skeleton with six built-in animations: https://sketchfab.com/3d-models/skeleton-character-low-poly-8856e0138f424d68a8e0b40e185951f6

When exporting to USD from Blender, I haven't been able to export more than one animation per USD file. Is there a workflow to export multiple animations in a single USDC file, or is this just not possible? As a temporary workaround, here is a Python script I've been using to loop through all Blender animations and export a model for each animation:

    import bpy
    import os

    # Set the directory where you want to save the USD files
    output_directory = "/path/to/export"

    # Ensure the directory exists
    if not os.path.exists(output_directory):
        os.makedirs(output_directory)

    # Function to export the current scene as USD
    def export_scene_as_usd(output_path, start_frame, end_frame):
        bpy.context.scene.frame_start = start_frame
        bpy.context.scene.frame_end = end_frame

        # Export the scene as a USD file
        bpy.ops.wm.usd_export(
            filepath=output_path,
            export_animation=True
        )

    # Save the current scene name
    original_scene = bpy.context.scene.name

    # Iterate through each action and export it as a USD file
    for action in bpy.data.actions:
        # Create a new scene for each action
        bpy.context.window.scene = bpy.data.scenes[original_scene].copy()
        new_scene = bpy.context.scene

        # Link the action to all relevant objects
        for obj in new_scene.objects:
            if obj.animation_data is not None:
                obj.animation_data.action = action

        # Determine the frame range for the action
        start_frame, end_frame = action.frame_range

        # Export the scene as a USD file
        output_path = os.path.join(output_directory, f"{action.name}.usdc")
        export_scene_as_usd(output_path, int(start_frame), int(end_frame))

        # Delete the temporary scene to free memory
        bpy.data.scenes.remove(new_scene)

    print("Export completed.")

I have also been able to successfully export rigging armatures as a single skeleton, with each "bone" getting imported into Reality Composer Pro 2.0 when exporting/importing manually.

I would like to have all of these animations available in a single scene to be used in a RealityView in visionOS, so I have placed all the animation models in an RCP scene and created named Timeline Action animations for each, showing the correct model and hiding the rest when triggering a specific animation. I apply materials/textures to each with Shader Graph so they appear the same. Then, in SwiftUI, I use notifications (as shown here: https://forums.developer.apple.com/forums/thread/756978) to trigger each RCP Timeline Action animation from code.

Two questions:

Is there a better way than having multiple models of the same skeleton, each with a different animation, in a scene in order to trigger multiple animations? Or would this require recreating the Blender animations using skeleton rigging and keyframes within RCP Timelines? (I sketch what I'm hoping for below.)
If I want to programmatically create custom animations and move parts of the skeleton/armature, do I need to do this by defining custom components in RCP and using IKRig to define the movement of each of the "bones" in Xcode?

I'm looking for any tips/tricks/workflow from experienced engineers or 3D artists that could make for a more efficient, optimized workflow using Blender, USD, RCP 2, and visionOS 2 with SwiftUI. Thanks so much, I appreciate any help! I am very excited about all the new tools that keep evolving to make spatial apps really fun to build!
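For what it's worth, here is the kind of single-model setup I was hoping for: a minimal sketch assuming one USD file that carries several animation clips, where the "Skeleton" name and the use of the first clip are placeholders:

    import RealityKit
    import RealityKitContent

    // Load one rigged model (placeholder name) from the Reality Composer Pro package
    // and trigger one of the animation clips it carries.
    func loadAndAnimateSkeleton() async throws -> Entity {
        let skeleton = try await Entity(named: "Skeleton", in: realityKitContentBundle)

        // If a single USD could carry several animations, they would all appear here...
        let animations = skeleton.availableAnimations

        // ...and each could be triggered individually on the same entity.
        if let firstClip = animations.first {
            skeleton.playAnimation(firstClip.repeat(), transitionDuration: 0.3, startsPaused: false)
        }
        return skeleton
    }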
Replies: 1 · Boosts: 2 · Views: 281 · Activity: 3w
The app “SGKJ_3DRealisticModel” has been killed by the operating system because it is using too much memory.
Hello! I ran into a problem while developing a visionOS project. I placed a model file of 294.8 MB in the project and tried to locate and load it using code like:

    FileManager.default.contentsOfDirectory(atPath: resourcesPath).filter { $0.hasSuffix(".usdz") }

However, when the app loaded the model, it was terminated by the system with a message saying it had exceeded the memory limit. How can I solve this problem?
Replies: 0 · Boosts: 0 · Views: 113 · Activity: 3w
Xbox controller and visionOS 2
I am having problems getting button input from an Xbox game controller. I have the visionOS 2 beta on my Apple Vision Pro, and I am trying to use an Xbox game controller with a RealityView, following the instructions from the WWDC session "Explore game input in visionOS".

The game controller connect notification is firing and picks up the controller; it finds GCInputButtonA, and I am setting closures for touchedChangedHandler, pressedChangedHandler, and valueChangedHandler that just print an os_log statement:

    buttonA.valueChangedHandler = { button, value, pressed in
        os_log("Got valueChangedHandler")
    }

At the end of the RealityView, I have the modifier:

    RealityView { content in
        // stuff
    }
    .handlesGameControllerEvents(matching: .gamepad)

But I never see the log message appear in the console when I press the 'A' button (or any other button). Any ideas what I might be doing wrong? The Xbox controller is pretty old; Settings reports it as version 9.0.3.
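For completeness, here is a simplified sketch of how I discover the controller and wire up buttonA; the Logger subsystem string is a placeholder and the handler body just logs:

    import GameController
    import os

    let logger = Logger(subsystem: "com.example.app", category: "input") // placeholder subsystem

    NotificationCenter.default.addObserver(
        forName: .GCControllerDidConnect, object: nil, queue: .main
    ) { notification in
        guard let controller = notification.object as? GCController,
              let gamepad = controller.extendedGamepad else { return }

        // Same pattern as above, just with all three handlers reduced to one.
        gamepad.buttonA.valueChangedHandler = { _, value, pressed in
            logger.log("buttonA value: \(value), pressed: \(pressed)")
        }
    }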
Replies: 1 · Boosts: 0 · Views: 304 · Activity: 4w
Add a modifier to a single model in the Reality Composer Pro scene
We can add many models to a Reality Composer Pro scene, but when I use a RealityView to display the scene and add modifiers in SwiftUI, the modifiers take effect on the entire scene, which is not what I want. I would like a modifier to apply to just a single model in the Reality Composer Pro scene. How can I add a modifier to a single model in the Reality Composer Pro scene? (A sketch of the kind of thing I mean is below.)
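For example, with gestures, this is the kind of per-model targeting I'm after. A rough sketch, assuming the model in the scene is named "Robot" and that it has collision and input-target components set up (the scene name, model name, and scale effect are placeholders):

    import SwiftUI
    import RealityKit
    import RealityKitContent

    struct SceneView: View {
        var body: some View {
            RealityView { content in
                // Load the whole Reality Composer Pro scene (placeholder scene name).
                if let scene = try? await Entity(named: "Scene", in: realityKitContentBundle) {
                    content.add(scene)
                }
            }
            .gesture(
                TapGesture()
                    .targetedToAnyEntity()
                    .onEnded { value in
                        // The gesture value carries the entity that was tapped,
                        // so the effect can be limited to a single model.
                        guard value.entity.name == "Robot" else { return } // placeholder model name
                        value.entity.transform.scale *= 1.1
                    }
            )
        }
    }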
Replies: 2 · Boosts: 0 · Views: 291 · Activity: 2w