In my Reality Composer Pro workflow for Vision Pro development, I’m using xcrun realitytool image to pre-compress textures into .ktx format, typically using ASTC block compression. These textures are used for cubemaps and environment assets.
I’ve noticed that regardless of the image content—whether it’s a highly detailed photo or a completely black image—once compressed with the same ASTC block size (e.g., ASTC_8x8), the resulting .ktx file size is nearly identical. There appears to be no content-aware logic that adapts the compression ratio to the actual texture complexity.
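For what it's worth, this matches my understanding of ASTC as a fixed-rate format: every block encodes to exactly 128 bits (16 bytes) regardless of its pixel content, so the payload size depends only on resolution and block footprint, never on complexity. A quick sanity check of the expected sizes (plain Swift, purely illustrative):

import Foundation

/// ASTC is fixed-rate: each block occupies exactly 16 bytes, whatever the
/// pixels inside it look like, so two same-sized images compressed with the
/// same block footprint always produce (nearly) the same payload size.
func astcPayloadSize(width: Int, height: Int, blockWidth: Int, blockHeight: Int) -> Int {
    let blocksX = (width + blockWidth - 1) / blockWidth    // ceil(width / blockWidth)
    let blocksY = (height + blockHeight - 1) / blockHeight // ceil(height / blockHeight)
    return blocksX * blocksY * 16
}

// A 2048x2048 face with ASTC_8x8: 256 * 256 * 16 = 1,048,576 bytes (~1 MiB),
// whether the face is a detailed photo or solid black.
print(astcPayloadSize(width: 2048, height: 2048, blockWidth: 8, blockHeight: 8))

So, if I understand correctly, any content-aware savings would have to come from an additional lossless pass on top of the ASTC data (such as the optional supercompression the KTX 2 container defines), not from the ASTC encoding itself.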
In contrast, Unreal Engine behaves differently: even when all cubemap faces are imported at the same resolution as DDS textures, the engine performs content-aware compression during packaging:
Low-complexity images are compressed more aggressively
The final packaged file size varies based on content complexity
Since Reality Composer Pro requires textures to be pre-compressed as .ktx, there’s no opportunity for runtime optimization or per-image compression adjustment.
Just wondering: is there any recommended way to implement content-aware compression for .ktx textures in Reality Composer Pro?
Or any best practices to optimize .ktx sizes based on image complexity?
Thanks!
I have an application running on visionOS 2.0 that uses the ARKit C API to create anchors and listen for updates.
I am running an ARKit session with a WorldTrackingProvider (and a CameraFrameProvider, if that is relevant).
Then, I register a callback using ar_world_tracking_provider_set_anchor_update_handler_f.
When updates arrive, I iterate over the updated anchors using ar_world_anchors_enumerate_anchors_f.
Then, as described in the https://developer.apple.com/documentation/visionos/tracking-points-in-world-space documentation, I walk around and hold down the Digital Crown to reposition the current space. This resets the world origin to my current position.
When this happens, anchor updates arrive. In most cases the updates return the new transform (via ar_world_anchor_get_origin_from_anchor_transform), but sometimes I get an update that reports the anchor's transform from before the world origin was repositioned; instead of staying in place in the physical world, the world anchor moves relative to me.
I can work around this by calling ar_world_tracking_provider_copy_all_world_anchors_f which provides me with the correct transform, but this async method also adds some noticeable delay to the anchor updates.
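For reference, the equivalent flow in the Swift API looks roughly like this (a simplified sketch of what my C code does, not my actual code):

import ARKit

// Simplified Swift sketch of the same flow I drive through the C API.
func observeWorldAnchors() async throws {
    let session = ARKitSession()
    let worldTracking = WorldTrackingProvider()
    try await session.run([worldTracking])

    for await update in worldTracking.anchorUpdates {
        // After recentering with the Digital Crown, this transform should be
        // expressed relative to the new world origin; occasionally I still
        // receive the pre-recentering transform here.
        let transform = update.anchor.originFromAnchorTransform
        print(update.event, transform.columns.3)
    }
}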
Is this already a known issue?
I noticed that when I drag the menu window in an immersive view, the entities behind it become semi-transparent, and the boundary between virtual and real-world objects is very pronounced.
How does visionOS implement this effect? Is there an API or technique I can use in my own code to enable the same semi-transparent overlay, even when I am not dragging the menu window?
Specifically: materials display correctly in the Unity editor, but after deployment to a physical Vision Pro some materials are lost or shader effects are abnormal (for example, the transparency channel fails, or lighting calculations are wrong). This problem is blocking development progress, and I hope to get help from technical support.
Topic: Spatial Computing
SubTopic: Reality Composer Pro
I'm getting the following error message when compiling the Apple-provided Spaceship sample game for the Apple Vision Pro. I've already tried deleting derived data, resetting the package cache, and restarting Xcode, but I still get the following error: [xrsimulator] Exception thrown during compile: Cannot get rkassets content for path /Users/myoungkang/Downloads/CreatingASpaceshipGame/Packages/Studio/Sources/Studio/Studio.rkassets because 'The file “Studio.rkassets” couldn’t be opened because you don’t have permission to view it.'
error: Tool exited with code 1
Topic: Spatial Computing
SubTopic: General
I am still not finding resources on how to replace hands in a fully immersive Space. I reached the goal by creating an ARKit session that detects the hand joints and attaches a USDZ hand mesh to the tracked joints, but I feel that's not the best solution. I really want to use RealityKit's potential to track and replace hands (with skinned USDZ ones) in an immersive environment, but the only resources I found are from November 2023.
Can someone help me?
Topic: Spatial Computing
SubTopic: Reality Composer Pro
Dear Apple Engineers,
I am working on a project in visionOS and need to implement a curved surface effect for video playback, where the width of the surface can be dynamically adjusted. Specifically, I want the video to be displayed on a curved surface (similar to a scroll unfolding), and the user should be able to adjust the width of this surface.
I have the following specific questions:
How can I implement a curved surface for video playback and ensure the video content is not stretched or distorted on the surface?
How can I create a dynamic curved surface (such as a bending plane) in RealityKit or visionOS, where the width can be adjusted by the user?
Is it possible to achieve more complex curved surface effects (such as scroll unfolding or bending) using Shaders or other techniques?
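The rough direction I have been sketching for the first two questions is below (purely illustrative: a cylindrical-segment mesh built with MeshDescriptor and textured with a VideoMaterial; all names are my own). Keeping the arc length (radius * arcAngle) divided by the height equal to the video's aspect ratio should avoid stretching:

import RealityKit
import AVFoundation

// Illustrative sketch: a cylindrical-segment "screen" whose angular width
// (arcAngle) is the adjustable parameter. Regenerate the mesh when the
// user changes the width.
func makeCurvedVideoEntity(player: AVPlayer,
                           radius: Float = 2.0,
                           arcAngle: Float = .pi / 2,
                           height: Float = 1.0,
                           segments: Int = 64) throws -> ModelEntity {
    var positions: [SIMD3<Float>] = []
    var uvs: [SIMD2<Float>] = []
    var indices: [UInt32] = []

    for i in 0...segments {
        let t = Float(i) / Float(segments)
        let angle = (t - 0.5) * arcAngle        // arc centered on -Z
        let x = radius * sin(angle)
        let z = -radius * cos(angle)
        positions.append(SIMD3(x, 0, z))        // bottom edge
        positions.append(SIMD3(x, height, z))   // top edge
        uvs.append(SIMD2(t, 0))                 // flip v if the video is upside down
        uvs.append(SIMD2(t, 1))
    }
    for i in 0..<segments {
        let base = UInt32(i * 2)
        indices += [base, base + 2, base + 1,     // two triangles per quad, wound
                    base + 1, base + 2, base + 3] // to face the viewer at the origin
    }

    var descriptor = MeshDescriptor(name: "curvedScreen")
    descriptor.positions = MeshBuffer(positions)
    descriptor.textureCoordinates = MeshBuffer(uvs)
    // Normals omitted for brevity; VideoMaterial is effectively unlit.
    descriptor.primitives = .triangles(indices)

    let mesh = try MeshResource.generate(from: [descriptor])
    return ModelEntity(mesh: mesh, materials: [VideoMaterial(avPlayer: player)])
}

Whether a Shader Graph or LowLevelMesh approach would handle the scroll-unfolding animation better is exactly what I am unsure about.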
Thank you very much for your help!
I have a visionOS app using immersive space with RealityView. The app adds RealityKit entities to the app's Scene instance, and uses raycast to find CollisionCastHits.
I want now to write a unit test to check if the app finds the right hits.
To do so, I have to access the Scene instance to add entities, and to check if they are hit by scene.raycast.
But how can I access the scene instance?
I can access it, e.g., after creating the RealityView via its content parameter, or via @Environment(\.realityKitScene). But neither seems to be possible in a unit test.
I tried the following test function:
@MainActor @Test func test() async throws {
    var scene: RealityKit.Scene?
    await withCheckedContinuation { continuation in
        _ = RealityView(make: { content in
            print("make")
            let entity = Entity()
            content.add(entity)
            scene = entity.scene
            continuation.resume()
        })
    }
    #expect(scene != nil)
}
But this logs
◇ Test test() started.
SWIFT TASK CONTINUATION MISUSE: test() leaked its continuation!
The reason is apparently that the make closure of RealityView is only called when SwiftUI calls it within the body of a SwiftUI View.
So, is it possible at all to access the app's scene in a unit test?
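One direction I have considered but not verified: hosting the view so that SwiftUI actually evaluates its body. In a plain unit-test target there may be no UIWindowScene to attach a window to, so this may only work with a host application configured for the test target:

import SwiftUI
import RealityKit
import UIKit

// Untested sketch: try to force SwiftUI to evaluate the view's body by hosting it.
let view = RealityView { content in
    content.add(Entity())
}
let host = UIHostingController(rootView: view)
host.loadViewIfNeeded()
// The make closure may still not run until the view is attached to a
// window that is actually on screen and a layout pass has occurred.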
I am working on a React Native app, specifically on the iOS native module with RealityKit. An apparently unexplainable error keeps happening at runtime, as you can see from the following image:
[Image: crash log from Xcode]
When I try to retrieve the position of my AnchorEntity relative to the world space (using relativeTo: nil), it triggers a runtime exception during one of its internal calls:
CoreRE: re::BucketArray<unsigned short*, 32ul>::operator[](unsigned long) + 204
As you can see from the code, my AnchorEntity is not nil, since there is a guard check. I also tried moving that code into an Objective-C static function so I could use @try/@catch to catch the runtime exception, only to realise that RealityKit is not usable from Objective-C. Do you have any idea or suggestion on how to fix or prevent it?
Topic: Spatial Computing
SubTopic: ARKit
I'm placing a sphere at the fingertip and updating its position as the hand moves.
Finger-joint tracking functions correctly, but I've observed noticeable latency in hand-tracking updates whenever a UITextView becomes active. This lag happens intermittently during app usage, lasting about 5-10 seconds, after which the latency disappears and the sphere starts following the finger joints immediately again.
When I open the immersive space for the first time, the profiler shows a large performance spike, up to 328%. After that, it stabilizes and runs smoothly.
Note: I don't observe any lag when CPU usage spikes to around 300% (on immersive-view load), yet the lag still occurs even when CPU usage remains below 100%.
I’m using the following code for hand tracking:
private func processHandTrackingUpdates() async {
    for await update in handTracking.anchorUpdates {
        let handAnchor = update.anchor
        if handAnchor.isTracked {
            switch handAnchor.chirality {
            case .left:
                leftHandAnchor = handAnchor
                updateHandJoints(for: handAnchor, with: leftHandJointEntities)
            case .right:
                rightHandAnchor = handAnchor
                updateHandJoints(for: handAnchor, with: rightHandJointEntities)
            }
        } else {
            switch handAnchor.chirality {
            case .left:
                leftHandAnchor = nil
                hideAllJoints(in: leftHandJointEntities)
            case .right:
                rightHandAnchor = nil
                hideAllJoints(in: rightHandJointEntities)
            }
        }
        await MainActor.run {
            handTrackingData.processNewHandAnchors(
                leftHand: self.leftHandAnchor,
                rightHand: self.rightHandAnchor
            )
        }
    }
}
And here’s the function I’m using to update the joint positions:
private func updateHandJoints(
    for handAnchor: HandAnchor,
    with jointEntities: [HandSkeleton.JointName: Entity]
) {
    guard handAnchor.isTracked else {
        hideAllJoints(in: jointEntities)
        return
    }

    // Check that the little finger tip and intermediate tip joints are both tracked.
    if let tipJoint = handAnchor.handSkeleton?.joint(.littleFingerTip),
       let intermediateTipJoint = handAnchor.handSkeleton?.joint(.littleFingerIntermediateTip),
       tipJoint.isTracked,
       intermediateTipJoint.isTracked,
       let pinkySphere = jointEntities[.littleFingerTip] {

        // Convert joint transforms to world space.
        let tipTransform = handAnchor.originFromAnchorTransform * tipJoint.anchorFromJointTransform
        let intermediateTipTransform = handAnchor.originFromAnchorTransform * intermediateTipJoint.anchorFromJointTransform

        // Extract positions from the transforms.
        let tipPosition = SIMD3<Float>(tipTransform.columns.3.x,
                                       tipTransform.columns.3.y,
                                       tipTransform.columns.3.z)
        let intermediateTipPosition = SIMD3<Float>(intermediateTipTransform.columns.3.x,
                                                   intermediateTipTransform.columns.3.y,
                                                   intermediateTipTransform.columns.3.z)

        // Calculate the midpoint.
        let midpointPosition = (tipPosition + intermediateTipPosition) / 2.0

        // Position the sphere at the midpoint and make it visible.
        pinkySphere.isEnabled = true
        pinkySphere.transform.translation = midpointPosition
    } else {
        // If either joint is not tracked, hide the sphere.
        jointEntities[.littleFingerTip]?.isEnabled = false
    }

    // Update the positions of all other hand joint spheres.
    for (jointName, entity) in jointEntities {
        if jointName == .littleFingerTip {
            // Already handled the pinky above.
            continue
        }
        guard let joint = handAnchor.handSkeleton?.joint(jointName),
              joint.isTracked else {
            entity.isEnabled = false
            continue
        }
        entity.isEnabled = true
        let jointTransform = handAnchor.originFromAnchorTransform * joint.anchorFromJointTransform
        entity.transform.translation = SIMD3<Float>(jointTransform.columns.3.x,
                                                    jointTransform.columns.3.y,
                                                    jointTransform.columns.3.z)
    }
}
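One variation I am experimenting with (a sketch; it assumes the visionOS 2 HandTrackingProvider.latestAnchors property and would be driven from a per-frame callback such as a SceneEvents.Update subscription, instead of awaiting the async stream):

// Sketch: poll the most recent hand anchors once per frame so rendering
// never waits on the anchorUpdates async sequence.
func updateFromLatestAnchors() {
    let (leftHand, rightHand) = handTracking.latestAnchors
    if let leftHand, leftHand.isTracked {
        updateHandJoints(for: leftHand, with: leftHandJointEntities)
    } else {
        hideAllJoints(in: leftHandJointEntities)
    }
    if let rightHand, rightHand.isTracked {
        updateHandJoints(for: rightHand, with: rightHandJointEntities)
    } else {
        hideAllJoints(in: rightHandJointEntities)
    }
}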
I’ve attached both a profiler trace and a video recording from Vision Pro that clearly demonstrate the issue.
Profiler: https://drive.google.com/file/d/1fDWyGj_fgxud2ngkGH_IVmuH_kO-z0XZ
Vision Pro Recordings:
https://drive.google.com/file/d/17qo3U9ivwYBsbaSm26fjaOokkJApbkz-
https://drive.google.com/file/d/1LxTxgudMvWDhOqKVuhc3QaHfY_1x8iA0
Has anyone else experienced this behavior? My thought is that there might be some background calculations happening at the OS level causing this latency. Any guidance would be greatly appreciated.
Thanks!
Hello everyone,
I'm developing an app for visionOS that utilizes HealthKit to query heart rate data. However, I'm encountering an issue where the app doesn't retrieve the latest heart rate values. Specifically, it fails to get live heart rate data even after the data has been saved to the Health app. The readings my app displays are outdated and do not match the current values shown in the Health app.
Here's what I've tried so far:
Fetching Heart Rate Samples: Used HKSampleQuery and HKAnchoredObjectQuery to fetch the most recent heart rate samples. Despite this, the data retrieved is still not up-to-date.
Checking Permissions: Ensured that all necessary HealthKit permissions are granted. The app has authorization to read heart rate data and write workout data.
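For reference, the long-running anchored query I tried looks roughly like this (simplified; healthStore and handle(_:) are placeholders for my own store and processing code):

import HealthKit

let heartRateType = HKQuantityType.quantityType(forIdentifier: .heartRate)!
let query = HKAnchoredObjectQuery(type: heartRateType,
                                  predicate: nil,
                                  anchor: nil,
                                  limit: HKObjectQueryNoLimit) { _, samples, _, _, _ in
    handle(samples)   // initial batch
}
query.updateHandler = { _, samples, _, _, _ in
    handle(samples)   // should fire as new samples arrive while the query runs
}
healthStore.execute(query)
// Heart rate in BPM: sample.quantity.doubleValue(for: .count().unitDivided(by: .minute()))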
My questions are:
Is there a known issue or limitation with HealthKit on visionOS that prevents apps from accessing the latest heart rate data?
Are there additional steps or configurations required to access live heart rate data in visionOS apps?
Has anyone successfully implemented live heart rate monitoring on visionOS, and if so, could you share how you achieved it?
The visionOS 26 Enterprise API has a new feature, Shared Coordinate Space: participants exchange their coordinate data via SharedCoordinateSpaceProvider over their own network, and when a shared coordinate space is established with nearby participants, the connectedParticipantIdentifiers(participants: [UUID]) event is received.
But Event.participantIdentifier is still an invalid default value (00000000-0000-0000-FFFF-FFFFFFFF) at this point. When or how can I get a valid event.participantIdentifier, or is there some other way to get the local participant identifier?
If it's a bug, please fix it in a later beta release. Thank you.
We are currently using Object Capture from ARKit, and we would like to fix the exposure time, white balance parameters, and ISO. How can we do this?
Additionally, we'd like to obtain the following information from ARKit: the white balance parameters (in case we cannot fix them) and the color correction matrices.
Which information will I get from the CMSampleBuffer?
Is there an option to block close-up accommodation of the camera?
Is there a way for the Object Capture module to take a video instead of a series of pictures?
It would be fantastic to have an answer to all of these questions so we can move forward on new implementations.
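To be concrete about the first question, this is the kind of control we are after; these are standard AVCaptureDevice calls, and whether anything equivalent is reachable under an ObjectCapture/ARKit session is exactly what we are asking (this is not ObjectCapture API):

import AVFoundation

// Standard AVCaptureDevice exposure/white-balance locking, shown only to
// illustrate the parameters we want to fix during capture.
func lockCamera(_ device: AVCaptureDevice, duration: CMTime, iso: Float) throws {
    try device.lockForConfiguration()
    defer { device.unlockForConfiguration() }
    device.setExposureModeCustom(duration: duration, iso: iso, completionHandler: nil)
    device.setWhiteBalanceModeLocked(with: device.deviceWhiteBalanceGains,
                                     completionHandler: nil)
}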
I am honored to have been accepted to the "Envision the future: Build great apps for visionOS" event (https://developer.apple.com/events/view/ZCH7ZUY24C/dashboard). Unfortunately, I am in China, and because of visa timing (a visa usually takes at least two months to obtain, but there were only about ten days between the notification and the event), I cannot travel to the United States to attend in person. I hope Apple could place an iPad on site and create a FaceTime link so that I could call in to that iPad. I made this suggestion to Apple, but with only a few days left I have received no reply, even though Apple has already sent me the ticket to the Developer Center and asked me to add it to the Apple Wallet app, which suggests my request has not been handled. I would be grateful for any advice from those who know about this, and if any Apple engineers could contact the people who manage this event, I would very much appreciate it. Thank you.
Hi, everyone!
I'm trying to bind a timeline (animation + audio) and behaviors to an entity in Reality Composer Pro.
In Xcode, I need to clone this entity and use the behavior, but I found that the behaviors are not cloned: the notification is sent (as shown below) but never received by the Reality Composer Pro content, so the timeline does not execute.
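For reference, I post the notification the standard Reality Composer Pro way (scene here is the RealityKit.Scene containing the cloned entity, and "TriggerName" stands for the identifier set in the Behaviors component):

import Foundation
import RealityKit

// Standard pattern for firing a Reality Composer Pro notification trigger.
NotificationCenter.default.post(
    name: Notification.Name("RealityKit.NotificationTrigger"),
    object: nil,
    userInfo: [
        "RealityKit.NotificationTrigger.Scene": scene as Any,
        "RealityKit.NotificationTrigger.Identifier": "TriggerName"
    ]
)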
How can I solve this problem? Thanks!
Topic: Spatial Computing
SubTopic: Reality Composer Pro
Tags: ARKit, AR / VR, RealityKit, Reality Composer Pro
Ever since updating to Xcode 16, my AR app doesn't compile, because Xcode doesn't recognize the .rcproject files used to load the AR experiences in the iOS app. The .rcproject files were authored in Reality Composer on iPadOS.
The expected behavior is described in this official Apple documentation article: https://developer.apple.com/documentation/realitykit/loading-entities-from-a-file
How do I submit a ticket to Apple?
Hello, I am getting the following errors on the console and my app crashes; the screen goes dark, then the Apple logo appears.
apply fence tx failed (client=0x61dbbfd7) [0xfffffecc (ipc/mig) server died]
[C:3] Error received: Connection interrupted.
Failed to commit transaction (client=0x94097449) [0x10000003 (ipc/send) invalid destination port]
Failed to commit transaction (client=0xe9684b50) [0x10000003 (ipc/send) invalid destination port]
[C:3-1] Error received: Connection interrupted.
Failed to commit transaction (client=0xbcac17e9) [0x10000003 (ipc/send) invalid destination port]
Failed to commit transaction (client=0x52392119) [0x10000003 (ipc/send) invalid destination port]
Failed to commit transaction (client=0xff841d17) [0x10000003 (ipc/send) invalid destination port]
Failed to commit transaction (client=0xdef5c915) [0x10000003 (ipc/send) invalid destination port]
Failed to commit transaction (client=0xefdc8bf3) [0x10000003 (ipc/send) invalid destination port]
Failed to commit transaction (client=0xd50c1eff) [0x10000003 (ipc/send) invalid destination port]
Failed to commit transaction (client=0x15690a46) [0x10000003 (ipc/send) invalid destination port]
Failed to commit transaction (client=0xf296f56b) [0x10000003 (ipc/send) invalid destination port]
Failed to commit transaction (client=0x61dbbfd7) [0x10000003 (ipc/send) invalid destination port]
apply fence tx failed (client=0x61dbbfd7) [0x10000003 (ipc/send) invalid destination port]
nw_read_request_report [C1] Receive failed with error "No message available on STREAM" (repeated nine times)
nw_protocol_socket_reset_linger [C1:2] setsockopt SO_LINGER failed [22: Invalid argument]
apply fence tx failed (client=0x61dbbfd7) [0x10000003 (ipc/send) invalid destination port]
Failed to set override status for bind point component member. (repeated three times)
Message from debugger: Terminated due to signal 9
Could you please tell me what the reason is and how I can resolve this? After launching two or three times, the app works fine from that point onwards, but this happens from time to time while debugging.
Topic: Spatial Computing
SubTopic: Reality Composer Pro
Tags: RealityKit, Reality Composer Pro, visionOS
I am seeking a comprehensive pathway for learning Metal programming on visionOS. The official documentation's Pathway on Metal is insufficient in this regard. I kindly request that someone create a detailed pathway to assist me in this endeavor.
The pathway should encompass the following key areas:
Knowledge Base:
Understand the fundamental principles of Metal and other frameworks, as well as basic concepts, to prepare for future learning.
Metal 3 (very important):
Gain a deep understanding of Metal itself, the API and shading language used to communicate with the device's GPU to render graphics. This knowledge forms the foundation for all Metal-related tasks.
Compositor Services and ARKit (important):
Learn how to display Metal scenes within the Vision device’s space and enable augmented reality (AR) and hand interaction. This knowledge is essential for creating interactive and immersive experiences.
Metal Performance Shaders:
Acquire expertise in optimizing material rendering to enhance performance.
MetalKit:
Simplifies the tasks that display your Metal content onscreen.
MetalFX:
Develop proficiency in using MetalFX to improve rendering efficiency and achieve visually stunning effects.
I would appreciate it if you could provide me with a detailed and comprehensive pathway, including the URLs of relevant documents, to guide my learning journey. Thank you for your assistance.
A spatial photo in RealityView has a default corner radius. I made a parallax effect with spatial photos in a ScrollView (like Spatial Gallery), but the corner radius disappeared on the left and right spatial photos. I've tried the .clipShape and .mask modifiers, but they didn't work. How can I clip or mask a spatial photo with a corner radius effect?