Hi there!
I'm trying to make a 360 image carousel in RealityView/SwiftUI with very large textures. I've managed to load one 12K 360 image and show it on an inverted sphere with a ShaderGraphMaterial made in Reality Composer Pro. When I try to load the next image I get an out-of-memory error. The carousel works fine with smaller textures.
My question is: how do I release the memory from the current texture before loading the next one?
In theory, ARC should release it eventually once nothing holds a reference?
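For reference, a minimal sketch of the swap I have in mind, assuming the texture feeds the material through a shader graph input (the parameter name "ImageTexture" is a placeholder):
import RealityKit

// Sketch only: swap the sphere's texture in place and drop every reference
// to the old TextureResource so ARC can free it. "ImageTexture" is a
// placeholder for whatever the shader graph input is actually named.
func swapTexture(on sphere: ModelEntity, imageURL: URL) async throws {
    guard var model = sphere.components[ModelComponent.self],
          var material = model.materials.first as? ShaderGraphMaterial
    else { return }

    let texture = try await TextureResource(contentsOf: imageURL,
                                            options: .init(semantic: .color))
    try material.setParameter(name: "ImageTexture",
                              value: .textureResource(texture))
    model.materials = [material]
    sphere.components.set(model) // the old texture is now unreferenced
}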
Hope someone can help =)
Thanks in advance!
Best regards,
Kim
Reality Composer Pro
Prototype and produce content for AR experiences using Reality Composer Pro.
The documentation at https://developer.apple.com/documentation/visionos/designing-realitykit-content-with-reality-composer-pro states:
Reality Composer Pro treats your imported assets as read-only.
This is a huge obstacle for me, as I need to do multiple adjustments to the scene.
I somehow managed to actually import one of my assets into the scene and can manipulate it directly, but now I can't figure out how I did this.
As I have to prepare further assets and would like to do this directly in Reality Composer Pro, I'm looking for a way to actually load them into the scene.
Any idea how this can be done?
Topic: Spatial Computing · SubTopic: Reality Composer Pro
I'm having a heck of a time getting this to work. I'm trying to add an event notification at the end of a timeline animation to trigger something in code, but I'm not receiving the notification from RC Pro. I've watched the Compose interactive 3D content video quite a few times now and have tried many different approaches. RC Pro has the correct ID names on the notifications. I'm not a programmer at all, just a lowly 3D artist. Here is my code...
import SwiftUI
import RealityKit
import RealityKitContent

extension Notification.Name {
    static let button1Pressed = Notification.Name("button1pressed")
    static let button2Pressed = Notification.Name("button2pressed")
    static let button3Pressed = Notification.Name("button3pressed")
}

struct MainButtons: View {
    @State private var transitionToNextSceneForButton1 = false
    @State private var transitionToNextSceneForButton2 = false
    @State private var transitionToNextSceneForButton3 = false
    @Environment(AppModel.self) var appModel
    @Environment(\.dismissWindow) var dismissWindow

    // Notification publishers for each button
    private let button1PressedReceived = NotificationCenter.default.publisher(for: .button1Pressed)
    private let button2PressedReceived = NotificationCenter.default.publisher(for: .button2Pressed)
    private let button3PressedReceived = NotificationCenter.default.publisher(for: .button3Pressed)

    var body: some View {
        ZStack {
            RealityView { content in
                // Load your RC Pro scene that contains the 3D buttons.
                if let immersiveContentEntity = try? await Entity(named: "MainButtons", in: realityKitContentBundle) {
                    content.add(immersiveContentEntity)
                }
            }
            // Optionally attach a gesture if you want to debug a generic tap:
            .gesture(
                TapGesture().targetedToAnyEntity().onEnded { value in
                    print("3D Object tapped")
                    _ = value.entity.applyTapForBehaviors()
                    // Do not post a test notification here—rely on RC Pro timeline events.
                }
            )
        }
        .onAppear {
            dismissWindow(id: "main")
            // Remove any test notification posting code.
        }
        // Listen for distinct button notifications.
        .onReceive(button1PressedReceived) { output in
            print("Button 1 pressed notification received")
            transitionToNextSceneForButton1 = true
        }
        .onReceive(button2PressedReceived.receive(on: DispatchQueue.main)) { _ in
            print("Button 2 pressed notification received")
            transitionToNextSceneForButton2 = true
        }
        .onReceive(button3PressedReceived.receive(on: DispatchQueue.main)) { _ in
            print("Button 3 pressed notification received")
            transitionToNextSceneForButton3 = true
        }
        // Present next scenes for each button as needed. For example, for button 1:
        .fullScreenCover(isPresented: $transitionToNextSceneForButton1) {
            FacilityTour()
                .environment(appModel)
        }
        // You can add additional fullScreenCover modifiers for button 2 and 3 transitions.
    }
}
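For reference, the pattern from the session's sample code, as I understand it: RC Pro's notification timeline action posts a single "RealityKit.NotificationTrigger" notification and carries the ID you typed in RC Pro in userInfo, rather than posting under custom names. A hedged sketch (I haven't confirmed this fixes my case):
import SwiftUI
import RealityKit

// Sketch, following the "Compose interactive 3D content" sample: listen for
// the one notification RC Pro timelines post, then branch on the identifier.
let timelineNotifications = NotificationCenter.default.publisher(
    for: Notification.Name("RealityKit.NotificationTrigger"))

// In the view body, replacing the three custom-name onReceive modifiers:
// .onReceive(timelineNotifications) { output in
//     guard let identifier = output.userInfo?["RealityKit.NotificationTrigger.Identifier"] as? String
//     else { return }
//     switch identifier {
//     case "button1pressed": transitionToNextSceneForButton1 = true
//     case "button2pressed": transitionToNextSceneForButton2 = true
//     case "button3pressed": transitionToNextSceneForButton3 = true
//     default: break
//     }
// }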
Topic: Spatial Computing · SubTopic: Reality Composer Pro · Tags: Graphics and Games, Xcode, SwiftUI, Reality Composer Pro
I have a scene that has been assembled in RCP, but I'm losing the correct hierarchy and transforms when running the scene on the headset or in the simulator.
This is in RCP:
This is at runtime with the debugger:
As you can see, the "MAIN_WAGON" entity is gone and parts of the hierarchy are now children of "TRAIN_ROOT" instead.
Another issue: not only does part of the hierarchy disappear, the transforms also revert to their default values instead of what is set in RCP:
This is in RCP:
This is in the simulator/headset:
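To compare the two, here's a small debugging sketch I've been using to dump the runtime hierarchy and transforms to the console:
import RealityKit

// Debugging sketch: recursively print entity names and transforms so the
// runtime tree can be compared against the outline RCP shows.
func dumpHierarchy(_ entity: Entity, indent: String = "") {
    let t = entity.transform
    print("\(indent)\(entity.name)  translation: \(t.translation)  scale: \(t.scale)")
    for child in entity.children {
        dumpHierarchy(child, indent: indent + "  ")
    }
}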
I'm filing a feedback ticket too and will post the number here.
Anyone had a similar issue and found a fix or workaround?
Topic: Spatial Computing · SubTopic: Reality Composer Pro · Tags: Reality Composer, RealityKit, Reality Composer Pro
Reality Composer is no longer available in the Xcode 15 release. Is this intended, or will it be available in later releases? Do I need to revert to Xcode 15 beta 8 to get Reality Composer?
I made an animation in Blender using geometry nodes and exported it to a USDC file (then I used Reality Converter to convert it to USDZ). I can see the animation when previewing the file in Finder, but it does not play after importing into RCP. Any idea how I can play the animation? Or can the animation be accessed through Xcode?
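If it can be accessed through Xcode, I'd expect something like this sketch to play it at runtime (the entity name is a placeholder; if availableAnimations comes back empty, the animation probably didn't survive the export):
import RealityKit
import RealityKitContent

// Sketch: play the first animation baked into the loaded entity, if any.
let entity = try await Entity(named: "MyBlenderModel", in: realityKitContentBundle)
if let animation = entity.availableAnimations.first {
    entity.playAnimation(animation.repeat())
} else {
    print("No animations found on the entity")
}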
Thanks!
Hi, I'm very new to 3D and am currently porting a SwiftUI iOS app to visionOS 2.0.
I saw Blender featured in multiple WWDC24 spatial videos, and have begun integrating Blender models and animations into my visionOS app (I would also like to integrate skeletons and programmatic rigging; more on that later).
I'm wondering if there are “Best Practices” for this workflow, from Blender to USD to RCP 2.0 to visionOS 2 in Xcode. I've cobbled together the following, which has some obvious holes:
I've been able to find some pre-rigged and pre-animated models online that can serve as a great starting point. As a reference, here is a free model from Sketchfab, a simple rigged skeleton with 6 built-in animations:
https://sketchfab.com/3d-models/skeleton-character-low-poly-8856e0138f424d68a8e0b40e185951f6
When exporting to USD from Blender, I haven’t been able to export more than one animation per USD file. Is there a workflow to export multiple animations in a single USDC file, or is this just not possible?
As a temporary workaround, here is a Python script I've been using to loop through all Blender actions and export a separate model for each animation:
import bpy
import os

# Set the directory where you want to save the USD files
output_directory = "/path/to/export"

# Ensure the directory exists
if not os.path.exists(output_directory):
    os.makedirs(output_directory)

# Function to export the current scene as USD
def export_scene_as_usd(output_path, start_frame, end_frame):
    bpy.context.scene.frame_start = start_frame
    bpy.context.scene.frame_end = end_frame

    # Export the scene as a USD file
    bpy.ops.wm.usd_export(
        filepath=output_path,
        export_animation=True
    )

# Save the current scene name
original_scene = bpy.context.scene.name

# Iterate through each action and export it as a USD file
for action in bpy.data.actions:
    # Create a new scene for each action
    bpy.context.window.scene = bpy.data.scenes[original_scene].copy()
    new_scene = bpy.context.scene

    # Link the action to all relevant objects
    for obj in new_scene.objects:
        if obj.animation_data is not None:
            obj.animation_data.action = action

    # Determine the frame range for the action
    start_frame, end_frame = action.frame_range

    # Export the scene as a USD file
    output_path = os.path.join(output_directory, f"{action.name}.usdc")
    export_scene_as_usd(output_path, int(start_frame), int(end_frame))

    # Delete the temporary scene to free memory
    bpy.data.scenes.remove(new_scene)

print("Export completed.")
I have also been able to successfully export rigging armatures as a single skeleton; each "bone" shows up in Reality Composer Pro 2.0 when exporting/importing manually.
I would like all of these animations available in a single scene to be used in a RealityView in visionOS, so I have placed all the animation models in an RCP scene and created named Timeline Action animations for each, showing the correct model and hiding the rest when triggering a specific animation.
I apply materials/textures to each so they appear the same, using Shader Graph.
Then in SwiftUI I use notifications (as shown here - https://forums.developer.apple.com/forums/thread/756978) to trigger each RCP Timeline Action animation from code.
Two questions:
Is there a better way than having multiple models of the same skeleton, each with a different animation, in a scene in order to trigger multiple animations (see the sketch after these questions)? Or would this require recreating the Blender animations with skeleton rigging and keyframes inside RCP Timelines?
If I want to programmatically create custom animations and move parts of the skeleton/armature, do I need to define custom components in RCP, use IKRig, and drive the movement of each "bone" from Xcode?
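On the first question, here is a sketch of the alternative I've been considering, assuming the skeletons really are identical across the per-animation exports (file names are placeholders): keep one base model in the scene and borrow AnimationResources from the other USDZ files.
import RealityKit
import RealityKitContent

// Sketch (assumes identical skeletons across exports): load each
// per-animation USDZ once, harvest its AnimationResource, and play
// everything on a single base model instead of swapping whole models.
func loadAnimations(from files: [String]) async throws -> [String: AnimationResource] {
    var animations: [String: AnimationResource] = [:]
    for file in files {
        let donor = try await Entity(named: file, in: realityKitContentBundle)
        if let animation = donor.availableAnimations.first {
            animations[file] = animation
        }
    }
    return animations
}

// Usage idea:
// let animations = try await loadAnimations(from: ["Walk", "Run", "Idle"])
// baseSkeleton.playAnimation(animations["Walk"]!.repeat())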
I'm looking for any tips, tricks, or workflow advice from experienced engineers or 3D artists for a more efficient, optimized workflow using Blender, USD, RCP 2, and visionOS 2 with SwiftUI.
Thanks so much, I appreciate any help! I am very excited about all the new tools that keep evolving to make spatial apps really fun to build!
Topic: Spatial Computing · SubTopic: Reality Composer Pro · Tags: RealityKit, Reality Composer Pro, visionOS
Compilation of the project for the WWDC 2024 session titled Compose interactive 3D content in Reality Composer Pro fails.
After applying the fix mentioned here (https://developer.apple.com/forums/thread/762030), the project still won't compile.
Using Xcode 16 beta 7, I get these errors:
error: [xrsimulator] Component Compatibility: EnvironmentLightingConfiguration not available for 'xros 1.0', please update 'platforms' array in Package.swift
error: [xrsimulator] Component Compatibility: AudioLibrary not available for 'xros 1.0', please update 'platforms' array in Package.swift
error: [xrsimulator] Component Compatibility: BlendShapeWeights not available for 'xros 1.0', please update 'platforms' array in Package.swift
error: [xrsimulator] Exception thrown during compile: compileFailedBecause(reason: "compatibility faults")
error: Tool exited with code 1
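My assumption (not verified) is that these errors point at the platforms array in the package's Package.swift, which would need to target visionOS 2.0 for these newer components, roughly:
// Sketch of the Package.swift change the errors seem to ask for: raise the
// visionOS deployment target so EnvironmentLightingConfiguration,
// AudioLibrary, and BlendShapeWeights are available. Targeting .v2 may also
// require bumping the swift-tools-version comment at the top of the manifest.
platforms: [
    .visionOS(.v2)
],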
Topic: Spatial Computing · SubTopic: Reality Composer Pro · Tags: RealityKit, Reality Composer Pro, visionOS
I created a custom component for Reality Composer Pro in which I have several variables I need an entity to have.
The idea is to add this component to some 3D models, save them as USDZs, then load these USDZs in code and do specific things depending on those variables.
The component shows up fine in Composer Pro and I can set the variables there. The problem is that the values I set in Composer Pro are different from what I see in code: say I set canMove = true in Composer Pro; when I read it in code it's false.
I don't know if I'm missing something.
public struct MyObjectComponent: Component, Codable {
    public var affectAll: Bool = false
    public var affectFloor: Bool = false
    public var canMove: Bool = false
    public var moveX: Bool = false
    public var moveY: Bool = false
    public var moveZ: Bool = false
    public var canRotate: Bool = false
    public var rotateX: Bool = false
    public var rotateY: Bool = false
    public var rotateZ: Bool = false

    public init() {
    }
}
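For completeness, a sketch of the registration I believe has to happen at app launch, before any scene containing the component is loaded (the app type here is hypothetical):
import SwiftUI
import RealityKitContent

// Sketch: register the custom component before loading any entity that
// carries it; without this I'd expect decoded values not to come through.
@main
struct MyApp: App {
    init() {
        RealityKitContent.MyObjectComponent.registerComponent()
    }

    var body: some Scene {
        WindowGroup {
            ContentView()
        }
    }
}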
Any help appreciated.
Guillermo
Hey there, I am working on an app that displays environmental data using PNG color channels to represent data ranges, which get overlaid on a map. The sampled values aren't what I'm expecting, though... for example, an RGB value of 0x7f0000 (R = 0.5, G = 0, B = 0) comes through as (0.21, 0, 0) in the shader. This basically makes it unusable for showing scientific data... (suspiciously, 0.21 is about what you get when 0.5 is converted from sRGB to linear). I'm half wondering if I am completely misunderstanding how sampling works in RealityKit / Reality Composer Pro. Anybody have any idea why it works like this?
[Screenshots: actual result (chart labels added in Photoshop) vs. expected result, and the "Red > 0.1" Shader Graph]
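If the sRGB theory is right, one possible workaround is sketched below, assuming the texture can be created in code rather than assigned in RCP: generate the TextureResource with the .raw semantic so the stored channel values come through unconverted.
import RealityKit
import CoreGraphics

// Sketch of the working theory: the .color semantic applies sRGB-to-linear
// decoding when sampling, while .raw should return the stored values.
func makeDataTexture(from cgImage: CGImage) throws -> TextureResource {
    try TextureResource.generate(
        from: cgImage,
        options: .init(semantic: .raw))
}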
Topic: Spatial Computing · SubTopic: Reality Composer Pro · Tags: RealityKit, Reality Composer Pro, visionOS
Hello,
After watching the Work with Reality Composer Pro content in Xcode session, I created the following custom component.
public struct TestComponent: Component, Codable {
    public var text: String = "helloWorld"
    public init() {}
}
I registered the custom component in the App.init function, as suggested:
init() {
    RealityKitContent.TestComponent.registerComponent()
}
The custom component is decoded and the RealityView shows the sphere when I load the "Scene" from the RealityKitContent bundle.
But if I export the scene to a separate file named "test_scene.usdz" on disk, share it to the simulator, and then try to load it in a RealityView, I get:
EXC_BAD_ACCESS #0 0x0000000194c8d508 in Swift._StringObject.getSharedUTF8Start() -> Swift.UnsafePointer<Swift.UInt8> ()
Printing the loaded entity shows the custom component, but loading it in the RealityView crashes the app immediately. Is there a way to fix this?
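For reference, the loading path that crashes looks roughly like this (the file URL is illustrative):
// Sketch of the failing path: loading the exported USDZ from disk instead
// of from the RealityKitContent bundle (URL shown is illustrative).
let url = URL(fileURLWithPath: "/path/to/test_scene.usdz")
let entity = try await Entity(contentsOf: url)
content.add(entity) // crashes in String decoding, per the trace above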
Hi, I'm developing a prototype visionOS game.
How do I access the bones or joints information when exporting a USD file from Blender to RCP?
The animation works fine in RCP, and the joints' information is correctly embedded in the USDA file (verified with usdchecker). However, RCP does not read it from USDA, USDC, or USDZ. It should be possible based on Apple's WWDC24 session (Compose interactive 3D content in RCP).
I want to attach and detach an entity to a particular bone at certain moments, so I need the bones' data. They are standard Mixamo animations, and my mesh is a single unified mesh. Using Blender 4.4.
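For what it's worth, this sketch is how I'd expect to find the joints at runtime, assuming the skinned mesh comes in as a ModelEntity:
import RealityKit

// Sketch: walk the loaded scene and print joint paths for any skinned
// ModelEntity; this is where I'd expect the Mixamo bones to show up.
func printJoints(in entity: Entity) {
    if let model = entity as? ModelEntity, !model.jointNames.isEmpty {
        for (index, name) in model.jointNames.enumerated() {
            print(index, name) // paths like "root/hips/spine_01"
        }
    }
    for child in entity.children {
        printJoints(in: child)
    }
}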
I've been using the macOS Xcode Reality Composer to export interactive .reality files that can be hosted on the web and linked to, triggering Quick Look to open the interactive AR experience.
That works really well.
I've just downloaded the Xcode 15 beta, which ships with the new Reality Composer Pro, and I can't see a way to export .reality files anymore. It seems this is now only for building content that ships inside native iOS etc. apps, rather than content that can be viewed in Quick Look.
Am I missing something, or is it no longer possible to export .reality files?
Thanks.
Topic: Spatial Computing · SubTopic: Reality Composer Pro · Tags: QuickLook, Reality Composer, Reality Composer Pro
Hello,
I am looking to create a shader to update an entity's rendering. As a basic example, say I want to recolour an entity but leave its original textures showing through:
I understand that with visionOS I need to use Reality Composer Pro to create the shader, but I'm lost as to how to reference, in the node graph, the original colour that I'm trying to update. All my attempts completely override the textures in the entity (and its sub-entities) that I want to affect. Also, the tutorials and examples I've looked at appear to create materials from scratch rather than add an effect on top of existing materials.
Any hints or pointers?
Assuming this is possible, I've been trying to load the material in code and apply it to an entity. But do I need to do this for all child entities, or just the topmost one?
do {
    let entity = MyAssets.createModelEntity(.plane) // Loads from bundle and performs config
    let material = try await ShaderGraphMaterial(named: "/Root/TestMaterial", from: "Test", in: realityKitContentBundle)
    entity.applyToChildren {
        $0.components[ModelComponent.self]?.materials = [material]
    }
    root.addChild(entity)
} catch {
    fatalError(error.localizedDescription)
}
Topic: Spatial Computing · SubTopic: Reality Composer Pro · Tags: RealityKit, Reality Composer Pro, Shader Graph Editor, visionOS
We are developing a mixed reality app for Vision Pro using Reality Composer Pro, but we're consistently encountering a Protobuf-related crash whenever we enter the immersive space. Our Reality Composer Pro package is quite complex, with numerous objects. Could the complexity of the package be contributing to the issue, or could something else be at play? No errors are flagged in the code or in Reality Composer Pro itself.
Here’s the error log:
[libprotobuf FATAL /Library/Caches/com.apple.xbs/Sources/REKit/ThirdParty/protobuf/src/google/protobuf/io/zero_copy_stream_impl_lite.cc:276] CHECK failed: (count) >= (0):
libc++abi: terminating due to uncaught exception of type google::protobuf::FatalException: CHECK failed: (count) >= (0):
Any insight on what might be causing this would be appreciated.
Topic: Spatial Computing · SubTopic: Reality Composer Pro · Tags: Reality Composer Pro, visionOS, Debugging
I've tried importing a 3D KTX file into the Reality Composer Pro Shader Graph, but got no output. Does RealityKit not support Image3D yet?
First, I use pyktx to create a 3D KTX texture file, and it looks correct in the Finder preview.
import numpy as np
from pyktx import KtxTexture2, KtxTextureCreateInfo, KtxTextureCreateStorage, VkFormat

size = 32
image = np.zeros((size, size, size, 3), dtype=np.uint8)
for i in range(size):
    image[i, :, :, 0] = np.interp(i, [0, size], [50, 255])
    image[:, i, :, 1] = np.interp(i, [0, size], [100, 255])
    image[:, :, i, 2] = np.interp(i, [0, size], [150, 255])

info = KtxTextureCreateInfo(
    gl_internal_format=None,
    vk_format=VkFormat.VK_FORMAT_R8G8B8_SRGB,
    base_width=image.shape[2],
    base_height=image.shape[1],
    base_depth=image.shape[0],
    num_dimensions=3,
    num_levels=1,
    num_layers=1,
    num_faces=1,
    generate_mipmaps=False,
    is_array=False,
)

texture = KtxTexture2.create(info, KtxTextureCreateStorage.ALLOC)
for slice_index in range(1):
    texture.set_image_from_memory(0, 0, slice_index, image[slice_index].tobytes())
texture.write_to_named_file(f'{size}.ktx')
Then I import it into the Shader Graph like this, but it seems nothing can be read.
In addition, I tested a 2D KTX file and it worked.
I also tested different vk_format values and KTX1/KTX2, but they did not work.
I also tested Image3DPixel/Image2DArray/Image3DRead, and none of them work as long as the texture is 3D.
I have a RealityView displaying a Reality Composer Pro scene in a window. Things are generally working fine, but the content appears in front of the visionOS window, blocking it, rather than being contained inside it. Do I need to switch to a volumetric window for this to work? My scene simply contains a flat display that renders 3D content (it has a material that sends different imagery to each eye).
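For context, a sketch of the volumetric option I'm considering, in case that is the expected fix (the window ID, view, and size are placeholders):
// Sketch: host the RealityView in a volumetric window so the 3D content
// gets its own depth bounds instead of poking out of a flat window.
WindowGroup(id: "viewer") {
    MySceneView() // placeholder for the RealityView described above
}
.windowStyle(.volumetric)
.defaultSize(width: 1, height: 1, depth: 1, in: .meters)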
Hi,
In the downloadable WWDC sample project "CreatingASpaceshipGame" there is an audio file named "WorkMusic.aiff", as mentioned in the video. The Info panel says it's PCM 4-channel Quadraphonic.
Where can I find further information on how this file was authored? Was it simply exported from Logic Pro with Quadraphonic surround settings, or did it have some other specific treatment?
Thanks,
Axel
Hi, I'm trying to achieve a 3D photo effect using a photo and a depth map with a GeometryModifier. The offset is applied correctly in Reality Composer Pro and in Xcode, but when I launch the app the model offset is not applied. Here are screenshots of the shader graphs and how it looks in the app:
Topic: Spatial Computing · SubTopic: Reality Composer Pro · Tags: RealityKit, Reality Composer Pro, Shader Graph Editor