Hello,
we have a RealityKit app that also runs on macOS via Catalyst.
For specific USD assets containing particle systems we have observed a reproducible crash.
Steps to reproduce:
Open Reality Composer Pro
Create new file
Create simple particle system (default one is fine)
Export as USDZ
Create project in Xcode
Call Entity.load(… and pass in your USD
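For reference, a minimal sketch of that load call, assuming the exported particle system is bundled as "Particles.usdz" (the file name is a placeholder):

import Foundation
import RealityKit

func loadParticleAsset() {
    // Locate the exported USDZ in the app bundle.
    guard let url = Bundle.main.url(forResource: "Particles", withExtension: "usdz") else { return }
    do {
        // On the configuration described below, this call leads to the crash.
        let entity = try Entity.load(contentsOf: url)
        print("Loaded \(entity.name)")
    } catch {
        print("Failed to load USDZ: \(error)")
    }
}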
Running this on an Intel iMac with macOS Sequoia 15.3 will lead to a crash with the following console log:
validateWithDevice:4704: failed assertion `Render Pipeline Descriptor Validation
depthAttachmentPixelFormat (MTLPixelFormatDepth32Float) and stencilAttachmentPixelFormat (MTLPixelFormatStencil8) must match.
'
(The same assertion is printed several times, with the copies interleaved in the log.)
Xcode version: 16.2.0
iMac 2020, 3.8 GHz Intel Core i7
macOS Sequoia 15.3
FB16477373
It would be great if this could be fixed quickly or a workaround provided, since it affects our production app. Thank you!
In the DestinationVideo demo, onAppear in UpNextView is triggered again when the view is closed, but I only want it to fire once. How can I achieve this?
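A minimal sketch of one common workaround (not taken from the DestinationVideo sample): guard the one-time work behind a @State flag so repeated onAppear calls only run it once.

import SwiftUI

struct UpNextStyleView: View {
    @State private var hasAppeared = false   // hypothetical helper state

    var body: some View {
        List {
            Text("Up Next content")   // placeholder content
        }
        .onAppear {
            // onAppear may fire again when the view is dismissed and shown;
            // the flag makes sure the work below runs only once.
            guard !hasAppeared else { return }
            hasAppeared = true
            // One-time work, e.g. logging an analytics event
        }
    }
}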
Alternatively, I would like to capture the button click events in the player menu, as shown in the screenshot below.
I’m facing an issue while using CustomHoverEffect. In my view, there is a long title, which causes the title to be truncated. When the user hovers over it, the title should scroll. Although I have already implemented the scrolling effect, I am unsure how to trigger the scroll on hover. How should I approach this?
When the three-horizontal-lines button in the background is tapped, analytics logging is done in the view-will-appear callback; but when the close button is tapped, the view-will-appear method runs again, which is equivalent to the method executing twice and results in incorrect analytics counts.
Hi, I am a new developer. I want to add articulated objects and deformable objects to my AR game, and I would like to interact with these objects. I haven't found any tutorial on this. Please let me know if this is available in visionOS.
How do I obtain the device's camera permissions when developing camera apps?
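A minimal sketch of the usual pattern, assuming the Info.plist contains an NSCameraUsageDescription entry (on visionOS, direct main-camera access is additionally gated behind the Enterprise APIs):

import AVFoundation

func requestCameraAccess() async -> Bool {
    switch AVCaptureDevice.authorizationStatus(for: .video) {
    case .authorized:
        return true
    case .notDetermined:
        // Prompts the user the first time; returns the user's choice.
        return await AVCaptureDevice.requestAccess(for: .video)
    default:
        // .denied or .restricted: direct the user to Settings instead.
        return false
    }
}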
Hello,
I was looking back into downloading the Tracking geographic locations in AR sample app from https://developer.apple.com/documentation/arkit/tracking-geographic-locations-in-ar
Unfortunately the Download link points to the .zip of the DisplayingAPointCloudUsingSceneDepth sample project.
The exact same issue occurs when trying to download the sample code from https://developer.apple.com/documentation/ARKit/creating-a-fog-effect-using-scene-depth
Wondering if those links are deliberately broken because of possible deprecations.
Thanks to any Apple Engineer willing to look into that.
Issue: when the three-horizontal-lines button in the background is tapped, we log an analytics event in onAppear. When the close button is tapped afterwards, onAppear runs again, so the method executes twice and the analytics event cannot be counted correctly.
My coworkers and I are guessing at what data defines an anchor. I tried searching but struggled to find anything helpful.
Our best guess was a combination of Triangulated Irregular Networks (TIN), GPS, magnetic compass direction, and maybe elevation sensors.
Is this documented anywhere? If not, can a definition or description be provided?
I'm trying to develop an immersive visionOS app in which you can move an Entity that has a PerspectiveCamera as its child in immersive space, and render the camera's view in a 2D window.
According to this thread, it seems this can be achieved using RealityRenderer. But when I added the scene entity loaded from realityKitContentBundle to realityRenderer.entities, I needed to clone all entities of the scene, otherwise all entities in the immersive space would disappear.
import Metal
import Observation
import RealityKit

@Observable
@MainActor
final class OffscreenRenderModel {
    private let renderer: RealityRenderer
    private let colorTexture: MTLTexture

    init(scene: Entity) throws {
        renderer = try RealityRenderer()
        // If the scene's entities are not cloned, all entities in the immersive space disappear
        renderer.entities.append(scene.clone(recursive: true))

        let camera = PerspectiveCamera()
        renderer.activeCamera = camera
        renderer.entities.append(camera)
        ...
    }
}
Is this the expected behavior? Or is there another way to do this (move a camera in immersive space and render its output in a 2D window)?
Here is my sample code:
https://github.com/TAATHub/RealityKitPerspectiveCamera
My friend cannot build my visionOS project in the simulator. He gets the following error.
Error:
[xrsimulator] Exception thrown during compile: cannotGetRkassetsContents(path: "/Users/path/to/Packages/RealityKitContent/Sources/RealityKitContent/RealityKitContent.rkassets")
In Xcode, he is able to open the RealityKitContent package in Reality Composer Pro by clicking the Package.realitycomposerpro file. No warnings related to this error show up in RCP either, and all scenes appear to be usable and navigable there. The error only comes up when he builds the project in Xcode (Command+B). There is no other information about this error in the Report Navigator's build logs. The error is always followed by this next error.
Error:
Tool exited with code 1
Yikes, please help!
Hi,
When I'm looking at the RoomAnchor documentation I can see the planeAnchorIDs property.
My question: how can I get an array of PlaneAnchor values from planeAnchorIDs?
A code example would be greatly appreciated.
Regards
Tof
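A minimal sketch of one possible approach (untested): keep a dictionary of PlaneAnchors fed by a PlaneDetectionProvider, then look up the IDs listed in RoomAnchor.planeAnchorIDs.

import ARKit

// Hypothetical helper: `knownPlanes` is assumed to be filled elsewhere from
// PlaneDetectionProvider.anchorUpdates (e.g. knownPlanes[update.anchor.id] = update.anchor).
func planeAnchors(for room: RoomAnchor, knownPlanes: [UUID: PlaneAnchor]) -> [PlaneAnchor] {
    // Keep only the plane anchors the room actually references.
    room.planeAnchorIDs.compactMap { knownPlanes[$0] }
}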
I am working on a submission for the Swift Student Challenge. I have created a RealityContent folder using Reality Composer Pro. How can I import this folder into a Swift Package Manager (.swiftpm) project in Swift Playgrounds so that it becomes a usable package?
We use SceneReconstructionProvider to detect meshes in the surrounding environment and apply an OcclusionMaterial to them.
// Assuming `entity` represents one of the detected meshes in the environment
entity.components.set(ModelComponent(
    mesh: mesh,
    materials: [OcclusionMaterial()]
))
While this correctly occludes entities placed in the immersive space, it also occludes system windows. This becomes problematic when a window is dragged into an occluded area (before or after entering the immersive space), preventing interaction with its elements. In some cases, it also makes it impossible to focus on the window’s drag handle, since this might become occluded as well after moving the window nearby. More generally, system windows can be occluded when they come into proximity with a model that has OcclusionMaterial applied.
I'm aware of a change introduced in visionOS 2 regarding how occlusions interact with UI elements (as noted in the release notes). I believe this change was intended to ensure windows do not remain visible when opened in another room. However, this also introduces some challenges, as described in the scenario above.
Is there a way to prevent system window occlusion while still allowing entities to be occluded by environmental features? Perhaps not using OcclusionMaterial at all?
Development environment: Xcode 16.2, macOS 15.2
Run-time configuration: visionOS 2.2 and 2.3
I am considering adding finger pad haptics (Data flow for haptic feedback is directed from the AVP to the fingers, not vice versa). Simple piezos wired to a wrist connection holding the driver/battery.
But I'm concerned it will impact the hand tracking. Any guidance regarding gloves and/or the size of any peripherals attached to fingers?
Or, if anyone has another (inexpensive) low profile option on the market please LMK. Thanks
Hi,
I have a question.
In visionOS, when a user looks at a button and performs a pinch gesture with their index finger and thumb, the button responds. By default, this works with both the left and right hands. However, I want to disable the pinch gesture when performed with the left hand while keeping it functional with the right hand.
I understand that the system settings allow users to configure input for both hands, the left hand only, or the right hand only. However, I would like to control this behavior within the app itself.
Is this possible?
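A minimal sketch of one possible direction (untested, and assuming SpatialEventCollection.Event's chirality property reports the acting hand): handle input through a SpatialEventGesture and ignore events from the left hand. Whether this can be applied to a standard Button is a separate question.

import SwiftUI

struct RightHandOnlyView: View {
    var body: some View {
        Circle()
            .fill(.blue)
            .frame(width: 120, height: 120)
            .gesture(
                SpatialEventGesture()
                    .onEnded { events in
                        // React only to events that came from the right hand.
                        for event in events where event.chirality == .right {
                            print("Right-hand pinch ended")
                        }
                    }
            )
    }
}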
Best regards.
Hi,
I created an app using the iOS Object Capture API, which works only on LiDAR-enabled phones; that is a limitation of the API provided by Apple itself.
I submitted the app for review, but it is getting rejected (twice) with the reason that it doesn't work on non-Pro models. Even though I explained that capturing needs LiDAR and is supported only on Pro models, it still gets rejected after testing on non-Pro models. Is there a way out?
Hi there.
Thanks to amazing help from you guys, I've managed to code a 360 image carousel, where the user can browse 360 images located inside the project package.
Is there a way to access the filesystem on AVP outside the app?
I know about the FileManager, and I can get access to the .documentsDirectory, but how do I access documents folder from the "Files" app on the AVP?
My goal is to read images from a hardcoded folder location on the AVP, so that the user never has to select the images themselves.
I know this may not be the "right" way to do this. The app is supposed to be "foolproof" with a minimum of user interaction.
The only way to change the images should be to change the contents of the hardcoded image folder.
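A minimal sketch of one approach, assuming the folder lives in the app's own Documents directory, which can be surfaced in the Files app by adding the UIFileSharingEnabled and LSSupportsOpeningDocumentsInPlace keys to Info.plist (the folder name and extensions are placeholders):

import Foundation

func loadPanoramaURLs(folderName: String = "Panoramas") throws -> [URL] {
    // The app's own Documents directory; visible in Files when the Info.plist keys above are set.
    let folder = URL.documentsDirectory.appending(path: folderName, directoryHint: .isDirectory)
    return try FileManager.default
        .contentsOfDirectory(at: folder, includingPropertiesForKeys: nil)
        .filter { ["jpg", "jpeg", "png", "heic"].contains($0.pathExtension.lowercased()) }
        .sorted { $0.lastPathComponent < $1.lastPathComponent }
}

Note that apps cannot read arbitrary folders outside their own sandbox without user interaction, so the hardcoded folder has to live somewhere the app already owns.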
I hope this makes sense =)
Thanks in advance!
Regards,
Kim
Is it possible to have a skydome that influences lighting in the scene but is otherwise invisible? In a raytracer, that would be a dome visible to secondary rays but invisible to primary rays.
Cheers, thanks.
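A minimal sketch of one way this is often done in RealityKit, using image-based lighting instead of a visible dome ("Sunlight" is a placeholder EnvironmentResource name in the app bundle; untested):

import RealityKit

@MainActor
func applyInvisibleSkyLighting(to root: Entity) async throws {
    // Load an environment map and use it purely as an image-based light;
    // no visible sky geometry is added to the scene.
    let environment = try await EnvironmentResource(named: "Sunlight")
    let lightEntity = Entity()
    lightEntity.components.set(ImageBasedLightComponent(source: .single(environment)))
    root.addChild(lightEntity)
    // Entities that should be lit by the environment need a receiver component.
    root.components.set(ImageBasedLightReceiverComponent(imageBasedLight: lightEntity))
}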
I am currently developing an app for visionOS and have encountered an issue involving a component and system that moves an entity up and down within a specific Y-axis range. The system works as expected until I introduce sound playback using AVAudioPlayer.
Whenever I use AVAudioPlayer to play sound, the entity exhibits unexpected behavior, such as freezing or becoming unresponsive. The freeze in the entity's movement is particularly noticeable the first time the audio is played. After that it becomes less noticeable, but you can still feel it, especially when the audio is played in quick succession.
Also, the issue is more noticeable on a real device than in the simulator.
//
// IssueApp.swift
// Issue
//
// Created by Zhendong Chen on 2/1/25.
//

import SwiftUI

@main
struct IssueApp: App {
    var body: some Scene {
        WindowGroup {
            ContentView()
        }
        .windowStyle(.volumetric)
    }
}
//
// ContentView.swift
// Issue
//
// Created by Zhendong Chen on 2/1/25.
//

import SwiftUI
import RealityKit
import RealityKitContent

struct ContentView: View {
    @State var enlarge = false

    var body: some View {
        RealityView { content, attachments in
            // Add the initial RealityKit content
            if let scene = try? await Entity(named: "Scene", in: realityKitContentBundle) {
                if let sphere = scene.findEntity(named: "Sphere") {
                    sphere.components.set(UpAndDownComponent(speed: 0.03, minY: -0.05, maxY: 0.05))
                }
                if let button = attachments.entity(for: "Button") {
                    button.position.y -= 0.3
                    scene.addChild(button)
                }
                content.add(scene)
            }
        } attachments: {
            Attachment(id: "Button") {
                VStack {
                    Button {
                        SoundManager.instance.playSound(filePath: "apple_en")
                    } label: {
                        Text("Play audio")
                    }
                    .animation(.none, value: 0)
                    .fontWeight(.semibold)
                }
                .padding()
                .glassBackgroundEffect()
            }
        }
        .onAppear {
            UpAndDownSystem.registerSystem()
        }
    }
}
//
// SoundManager.swift
// LinguaBubble
//
// Created by Zhendong Chen on 1/14/25.
//

import Foundation
import AVFoundation

class SoundManager {
    static let instance = SoundManager()
    private var audioPlayer: AVAudioPlayer?

    func playSound(filePath: String) {
        guard let url = Bundle.main.url(forResource: filePath, withExtension: ".mp3") else { return }
        do {
            audioPlayer = try AVAudioPlayer(contentsOf: url)
            audioPlayer?.play()
        } catch let error {
            print("Error playing sound. \(error.localizedDescription)")
        }
    }
}
//
// UpAndDownComponent+System.swift
// Issue
//
// Created by Zhendong Chen on 2/1/25.
//

import RealityKit

struct UpAndDownComponent: Component {
    var speed: Float
    var axis: SIMD3<Float>
    var minY: Float
    var maxY: Float
    var direction: Float = 1.0 // 1 for up, -1 for down
    var initialY: Float?

    init(speed: Float = 1.0, axis: SIMD3<Float> = [0, 1, 0], minY: Float = 0.0, maxY: Float = 1.0) {
        self.speed = speed
        self.axis = axis
        self.minY = minY
        self.maxY = maxY
    }
}

struct UpAndDownSystem: System {
    static let query = EntityQuery(where: .has(UpAndDownComponent.self))

    init(scene: RealityKit.Scene) {}

    func update(context: SceneUpdateContext) {
        let deltaTime = Float(context.deltaTime) // Time between frames

        for entity in context.entities(matching: Self.query, updatingSystemWhen: .rendering) {
            guard var component: UpAndDownComponent = entity.components[UpAndDownComponent.self] else { continue }

            // Ensure we have the initial Y value set
            if component.initialY == nil {
                component.initialY = entity.transform.translation.y
            }

            // Calculate the current position
            let currentY = entity.transform.translation.y

            // Move the entity up or down
            let newY = currentY + (component.speed * component.direction * deltaTime)

            // If the entity moves out of the allowed range, reverse the direction
            if newY >= component.initialY! + component.maxY {
                component.direction = -1.0 // Move down
            } else if newY <= component.initialY! + component.minY {
                component.direction = 1.0 // Move up
            }

            // Apply the new position
            entity.transform.translation = SIMD3<Float>(entity.transform.translation.x, newY, entity.transform.translation.z)

            // Update the component with the new direction
            entity.components[UpAndDownComponent.self] = component
        }
    }
}
Could someone help me with this?
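Not a confirmed fix, but a minimal sketch of a variation worth trying: create and prepare the AVAudioPlayer once up front (for example when the view appears), so the decoder setup cost is not paid on the tap that competes with the RealityKit update loop. The class name and API are hypothetical alternatives to the SoundManager above.

import Foundation
import AVFoundation

final class PreloadedSoundManager {
    static let instance = PreloadedSoundManager()
    private var players: [String: AVAudioPlayer] = [:]

    /// Creates the player and decodes its buffers ahead of time.
    func preload(_ name: String) {
        guard players[name] == nil,
              let url = Bundle.main.url(forResource: name, withExtension: "mp3") else { return }
        if let player = try? AVAudioPlayer(contentsOf: url) {
            player.prepareToPlay()
            players[name] = player
        }
    }

    /// Plays a previously preloaded sound; does nothing if it wasn't preloaded.
    func play(_ name: String) {
        players[name]?.play()
    }
}

For example, you could call PreloadedSoundManager.instance.preload("apple_en") in onAppear and then play("apple_en") from the button action.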