Discuss Spatial Computing on Apple Platforms.

Posts under General subtopic

Post

Replies

Boosts

Views

Activity

Converting a Stop Motion Animation to usdz
Hello everyone, I've been trying for a few weeks now to convert a sequential series of meshes into a stop-motion animation in USDZ format. In Unreal Engine I've already figured out how to turn the sequence of individual meshes into a smooth animation using the node system and arrays. Unfortunately, neither Unreal nor Blender can export that node-based logic as USDZ animation logic. Because of this, I have tried several other ways to bake in the animation. Here's what I've tried so far:

1. I attempted to create the animation in Blender with render/viewports and map it to keyframes. However, in my experience, viewports are not supported in the conversion.

2. I tried aligning the vertices of the individual objects and merging the frames using the Shrinkwrap modifier in Blender, then setting up a morph animation with keyframes. However, because the individual meshes differ too much, this results in artifacts, and manually editing each mesh is too difficult for me to handle.

3. I placed all individual meshes at the same position and animated them sequentially by scaling them from 0 to 100 in keyframes (frame 1 is visible for 10 frames, then scales down at frame 11, while frame 2 becomes visible at frame 11, and so on). I also set the keyframe interpolation to "constant" rather than the default Bezier or linear. I then converted this animation to .abc, and the result initially looked good. However, some information is lost when converting it with OpenUSD: the animation does not keep its intended jump-like behavior in USDZ format, and the scaling of the individual meshes becomes visible in the animation instead.

4. I tried a Blender add-on (StepMotion), which allows the animation to be exported as .abc, but the result can only be read in Blender or Unreal. Even in the preview the animation is not displayed correctly, so converting the animation logic does not work that way either.
Unfortunately, I have no alternative way to create the animation, as the individual frames have been provided to me as meshes. So far, I haven’t found a way to implement this successfully. I would be very grateful for any tips or ideas, as I am running out of options on how to make this work. Thanks in advance!
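The only remaining fallback I can think of is to skip the baked USDZ animation entirely and step through the frame meshes at runtime in RealityKit, which would only help if the target is an app rather than a standalone .usdz file. Here is a minimal sketch of that idea; the entity names ("Frame_0", "Frame_1", ...) and the 12 fps timing are placeholders, not something from an existing pipeline:

import Foundation
import RealityKit

// Sketch: show one frame mesh at a time by toggling isEnabled.
// Assumes the frame meshes are children of `container`, named "Frame_0", "Frame_1", ...
final class StopMotionPlayer {
    private let frames: [Entity]
    private var current = 0
    private var timer: Timer?

    init(container: Entity, frameCount: Int) {
        frames = (0..<frameCount).compactMap { container.findEntity(named: "Frame_\($0)") }
        frames.forEach { $0.isEnabled = false }
    }

    func play(framesPerSecond: Double = 12) {
        frames.first?.isEnabled = true
        timer = Timer.scheduledTimer(withTimeInterval: 1 / framesPerSecond, repeats: true) { [weak self] _ in
            guard let self, !self.frames.isEmpty else { return }
            self.frames[self.current].isEnabled = false
            self.current = (self.current + 1) % self.frames.count
            self.frames[self.current].isEnabled = true
        }
    }

    func stop() { timer?.invalidate() }
}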
2
0
140
Apr ’25
How to create a MultiplayerDelegate and use TabletopNetworkSession features?
When building a multiplayer Tabletop game, the documentation describes how to attach a custom TabletopNetworkSessionCoordinator, which can be used in addition to TabletopGame.MultiplayerDelegate. But so far we have been unable to create one of these custom coordinators or get a delegate that works. Our current setup with our generic GroupActivity works by passing the session to TabletopGame's coordinateWithSession method (as in the current sample project), but we haven't found a way to access and control, for example, the arbiter, seats, or player events, among other features mentioned on https://developer.apple.com/documentation/tabletopkit/tabletopnetworksession. Is it correct to expect access to the participants, messenger, or journal without having to maintain a parallel coordinator? Possibly we are missing something here; any suggestions?
2
0
221
Apr ’25
VisionPro camera frame rate
Hi, I'm working with CameraFrameProvider from the Enterprise API. Is it always capped at 30 fps, or is there something I can switch to get more? I assume it is capped at 30, so let me cram in an additional question here :). If I get a developer strap and attach an external camera capable of more than 30 fps, will I get the full stream, or will some other limitation kick in?
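For context, here is roughly how I am enumerating the available formats to see what frame rates are advertised; this is a sketch from memory of the visionOS Enterprise camera API, so the exact supportedVideoFormats(for:cameraPositions:) signature should be checked against the SDK:

import ARKit

// Sketch: print every camera format the device advertises so the offered
// resolutions and frame rates can be inspected. Assumes the Enterprise API
// entitlement and visionOS 2's CameraFrameProvider / CameraVideoFormat types.
func dumpCameraFormats() {
    let formats = CameraVideoFormat.supportedVideoFormats(for: .main, cameraPositions: [.left])
    for format in formats {
        print(format) // inspect the resolution / frame rate of each advertised format
    }
}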
2
0
104
Apr ’25
CompositorServices Or RealityKit
I have been concentrating on developing a visionOS application. I am currently quite familiar with RealityKit, but CompositorServices has also caught my attention, and I have not yet learned it. Could you please clarify whether it is essential for me to learn CompositorServices? I would also appreciate insights into the respective advantages of RealityKit and CompositorServices.
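From what I understand so far, the practical difference shows up at the scene declaration level: RealityKit gives you RealityView plus its entity-component system, while Compositor Services hands you a CompositorLayer and leaves all rendering to your own Metal code. A rough sketch of the two entry points; the .default configuration is an assumption on my part, and the Metal render loop itself is omitted:

import SwiftUI
import RealityKit
import CompositorServices

@main
struct SpatialApp: App {
    var body: some Scene {
        // RealityKit path: declarative scene graph, materials, physics, ECS.
        WindowGroup {
            RealityView { content in
                content.add(ModelEntity(mesh: .generateSphere(radius: 0.1)))
            }
        }

        // Compositor Services path: you own the Metal render loop.
        ImmersiveSpace(id: "MetalSpace") {
            CompositorLayer(configuration: .default) { layerRenderer in
                // Your own Metal render loop would be driven from `layerRenderer` here.
            }
        }
    }
}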
2
0
725
Mar ’25
Ground Shadows pass through objects
This issue has existed since visionOS 1, unless this is how it is supposed to work. As you can see in the screen capture, the shadows from the top box are shown on all three boxes below. This is a screen capture in Reality Composer Pro, but the same thing happens on the Vision Pro. Is there any way to stop this behavior and have shadows only on the first object below the object that is casting them?
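For reference, this is roughly how the shadows are set up on my side: grounding shadows are opted into per entity via GroundingShadowComponent, and casting can be toggled per entity, but I haven't found a per-receiver switch, which is essentially what I'm asking about. The entities below are placeholders:

import RealityKit

// Sketch: casting is controlled per entity; whether an entity can refuse to
// *receive* a grounding shadow is the open question here.
let topBox = ModelEntity(mesh: .generateBox(size: 0.2))
let lowerBox = ModelEntity(mesh: .generateBox(size: 0.2))
lowerBox.position.y = -0.3

topBox.components.set(GroundingShadowComponent(castsShadow: true))
lowerBox.components.set(GroundingShadowComponent(castsShadow: false))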
2
0
509
Dec ’24
Metal Compositor Service & Persona (VisionOS)
Hello, I'm currently trying to make a collaborative app. But it only works in a RealityView; when I try to use a CompositorLayer as below, the Personas disappear.

ImmersiveSpace(id: "ImmersiveSpace-Metal") {
    CompositorLayer(configuration: MetalLayerConfiguration()) { layerRenderer in
        SpatialRenderer_InitAndRun(layerRenderer)
    }
}

Is there any potential solution to see Personas in a Metal view? Thanks in advance!
2
0
736
Sep ’25
RoomCaptureSession with ARSCNView crashes when scanning multiple hotspots across different rooms
We're developing an iOS application that integrates RoomCaptureSession with ARSCNView for room scanning. Our implementation differs from the standard RoomCaptureView because we need custom UI guidance with 3D dots placed in the scanning environment to guide users through the capture process.

Bug Description:
The application crashes when users attempt to scan multiple rooms or apartments in sequence. The crash occurs with the following pattern:
- User successfully scans the first room with multiple hotspots (working correctly)
- User stops scanning and moves to a new room
- In the new room, the first 1-2 hotspots work correctly
- The application crashes when attempting to scan additional hotspots

Technical Details:
- Error: SLAM Anchor assertion failure in SlamAnchor.cpp:37 : HasValidPose()
- Crash occurs in Thread 27 with CAPIDetectionOutputFwdNode
- The error suggests invalid positioning when placing AR anchors

Steps to Reproduce:
1. Start a room scan
2. Complete multiple hotspot captures in the first room
3. Stop scanning
4. Start a new room scan
5. Capture 1-2 hotspots successfully
6. Attempt additional hotspot captures -> crash

Attempted Solutions:
- Implemented anchor cleanup between sessions
- Added position validation before anchor placement
- Implemented ARSession error handling
- Added proper thread management for AR operations

Environment:
- Device: iPhone 14 Pro (LiDAR equipped)
- iOS Version: 18.1.1 (22B91)
- Testing through TestFlight

Crash Log Details:
Exception Type: EXC_CRASH (SIGABRT)
Exception Codes: 0x0000000000000000, 0x0000000000000000
Exception Note: EXC_CORPSE_NOTIFY
Triggered by Thread: 27

Thread 27 Crashed:
0 libsystem_kernel.dylib 0x00000001f0cc91d4 __pthread_kill + 8
1 libsystem_pthread.dylib 0x0000000228e12ef8 pthread_kill + 268
2 libsystem_c.dylib 0x00000001a86bbad8 abort + 128
3 AppleCV3D 0x0000000234d71a28 cv3d::vio::capi::SlamAnchor::SlamAnchor

Question:
Is there a recommended approach for handling multiple room captures with custom ARSCNView integration? The standard RoomCaptureView implementation doesn't show this behavior, but we need the custom guidance functionality that ARSCNView provides. Code and full crash logs can be provided if needed.
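One thing we're considering, purely as a sketch and not a confirmed fix, is to throw away the capture session and its underlying ARSession entirely between rooms rather than reusing them, so no stale SLAM anchors can survive into the next scan. This assumes the iOS 17+ RoomCaptureSession(arSession:) initializer, and the re-wiring of the new ARSession into the ARSCNView is omitted here:

import RoomPlan
import ARKit

// Sketch: rebuild the whole session stack for each room instead of reusing it.
final class RoomScanController {
    private var arSession: ARSession?
    private var captureSession: RoomCaptureSession?

    func startNewRoom() {
        // Tear down the previous stack completely.
        captureSession?.stop()
        arSession?.pause()

        // Fresh sessions so no anchors from the previous room carry over.
        let newARSession = ARSession()
        let newCaptureSession = RoomCaptureSession(arSession: newARSession)
        arSession = newARSession
        captureSession = newCaptureSession

        var configuration = RoomCaptureSession.Configuration()
        configuration.isCoachingEnabled = false
        newCaptureSession.run(configuration: configuration)
    }
}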
2
1
612
Feb ’25
Unexpected Behavior in Entity Movement System When Using AVAudioPlayer in visionOS Development
I am currently developing an app for visionOS and have encountered an issue involving a component and system that moves an entity up and down within a specific Y-axis range. The system works as expected until I introduce sound playback using AVAudioPlayer. Whenever I use AVAudioPlayer to play sound, the entity exhibits unexpected behavior, such as freezing or becoming unresponsive. The freeze in the entity's movement is particularly noticeable the first time the audio is played. After that it becomes less noticeable, but you can still feel it, especially when the audio is played in quick succession. The issue is also more noticeable on a real device than in the simulator.

//
// IssueApp.swift
// Issue
//
// Created by Zhendong Chen on 2/1/25.
//

import SwiftUI

@main
struct IssueApp: App {
    var body: some Scene {
        WindowGroup {
            ContentView()
        }
        .windowStyle(.volumetric)
    }
}

//
// ContentView.swift
// Issue
//
// Created by Zhendong Chen on 2/1/25.
//

import SwiftUI
import RealityKit
import RealityKitContent

struct ContentView: View {
    @State var enlarge = false

    var body: some View {
        RealityView { content, attachments in
            // Add the initial RealityKit content
            if let scene = try? await Entity(named: "Scene", in: realityKitContentBundle) {
                if let sphere = scene.findEntity(named: "Sphere") {
                    sphere.components.set(UpAndDownComponent(speed: 0.03, minY: -0.05, maxY: 0.05))
                }
                if let button = attachments.entity(for: "Button") {
                    button.position.y -= 0.3
                    scene.addChild(button)
                }
                content.add(scene)
            }
        } attachments: {
            Attachment(id: "Button") {
                VStack {
                    Button {
                        SoundManager.instance.playSound(filePath: "apple_en")
                    } label: {
                        Text("Play audio")
                    }
                    .animation(.none, value: 0)
                    .fontWeight(.semibold)
                }
                .padding()
                .glassBackgroundEffect()
            }
        }
        .onAppear {
            UpAndDownSystem.registerSystem()
        }
    }
}

//
// SoundManager.swift
// LinguaBubble
//
// Created by Zhendong Chen on 1/14/25.
//

import Foundation
import AVFoundation

class SoundManager {
    static let instance = SoundManager()
    private var audioPlayer: AVAudioPlayer?

    func playSound(filePath: String) {
        guard let url = Bundle.main.url(forResource: filePath, withExtension: ".mp3") else { return }
        do {
            audioPlayer = try AVAudioPlayer(contentsOf: url)
            audioPlayer?.play()
        } catch let error {
            print("Error playing sound. \(error.localizedDescription)")
        }
    }
}

//
// UpAndDownComponent+System.swift
// Issue
//
// Created by Zhendong Chen on 2/1/25.
//

import RealityKit

struct UpAndDownComponent: Component {
    var speed: Float
    var axis: SIMD3<Float>
    var minY: Float
    var maxY: Float
    var direction: Float = 1.0 // 1 for up, -1 for down
    var initialY: Float?

    init(speed: Float = 1.0, axis: SIMD3<Float> = [0, 1, 0], minY: Float = 0.0, maxY: Float = 1.0) {
        self.speed = speed
        self.axis = axis
        self.minY = minY
        self.maxY = maxY
    }
}

struct UpAndDownSystem: System {
    static let query = EntityQuery(where: .has(UpAndDownComponent.self))

    init(scene: RealityKit.Scene) {}

    func update(context: SceneUpdateContext) {
        let deltaTime = Float(context.deltaTime) // Time between frames

        for entity in context.entities(matching: Self.query, updatingSystemWhen: .rendering) {
            guard var component: UpAndDownComponent = entity.components[UpAndDownComponent.self] else { continue }

            // Ensure we have the initial Y value set
            if component.initialY == nil {
                component.initialY = entity.transform.translation.y
            }

            // Calculate the current position
            let currentY = entity.transform.translation.y

            // Move the entity up or down
            let newY = currentY + (component.speed * component.direction * deltaTime)

            // If the entity moves out of the allowed range, reverse the direction
            if newY >= component.initialY! + component.maxY {
                component.direction = -1.0 // Move down
            } else if newY <= component.initialY! + component.minY {
                component.direction = 1.0 // Move up
            }

            // Apply the new position
            entity.transform.translation = SIMD3<Float>(entity.transform.translation.x, newY, entity.transform.translation.z)

            // Update the component with the new direction
            entity.components[UpAndDownComponent.self] = component
        }
    }
}

Could someone help me with this?
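One workaround I'm considering: AVAudioPlayer(contentsOf:) decodes the file synchronously on whatever thread calls it, which here is the main thread that also drives the RealityKit system, so preloading and preparing the players ahead of time might remove the hitch. A hedged sketch of that variant (nothing here is visionOS-specific, and the type and method names are mine):

import Foundation
import AVFoundation

// Sketch: preload players once, off the main thread, so tapping "Play audio"
// no longer pays the file-decoding cost on the thread updating the entity.
final class PreloadedSoundManager {
    static let instance = PreloadedSoundManager()
    private var players: [String: AVAudioPlayer] = [:]
    private let loadQueue = DispatchQueue(label: "sound.preload")

    func preload(_ names: [String]) {
        loadQueue.async {
            for name in names {
                guard let url = Bundle.main.url(forResource: name, withExtension: "mp3"),
                      let player = try? AVAudioPlayer(contentsOf: url) else { continue }
                player.prepareToPlay() // buffer the audio ahead of time
                DispatchQueue.main.async { self.players[name] = player }
            }
        }
    }

    func playSound(named name: String) {
        players[name]?.play() // nil until preload(_:) has finished for this name
    }
}

With this, preload(["apple_en"]) would be called once in onAppear and the button would call playSound(named: "apple_en").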
2
0
343
Feb ’25
Reading image files outside of app content
Hi there. Thanks to amazing help from you guys, I've managed to code a 360 image carousel where the user can browse 360 images located inside the project package. Is there a way to access the filesystem on the AVP outside the app? I know about FileManager, and I can get access to .documentsDirectory, but how do I access the Documents folder shown in the "Files" app on the AVP? My goal is to read images from a hardcoded folder location on the AVP, so that the user never has to select the images themselves. I know this may not be the "right" way to do this. The app is supposed to be "foolproof" with a minimum of user interaction. The only way to change the images should be to change the contents of the hardcoded image folder. I hope this makes sense =) Thanks in advance! Regards, Kim
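What I'm picturing is something like the sketch below: the app's own Documents directory shows up in the Files app once the two Info.plist keys are set, and then a fixed subfolder can be read directly. The folder name "Panoramas" is just made up for illustration:

import Foundation
import UIKit

// Sketch: read every image from <Documents>/Panoramas, a folder the user can
// fill from the Files app once the app's Info.plist contains
//   UIFileSharingEnabled = YES
//   LSSupportsOpeningDocumentsInPlace = YES
func loadPanoramaImages() -> [UIImage] {
    let folder = URL.documentsDirectory.appending(path: "Panoramas")
    let urls = (try? FileManager.default.contentsOfDirectory(
        at: folder,
        includingPropertiesForKeys: nil
    )) ?? []
    return urls
        .filter { ["jpg", "jpeg", "png", "heic"].contains($0.pathExtension.lowercased()) }
        .compactMap { UIImage(contentsOfFile: $0.path) }
}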
2
0
268
Feb ’25
Spatial Gallery App functionality
Similar to the visionOS Spatial Gallery app, I'm developing a visionOS app that will show spatial photos and videos. Is it possible to re-create the horizontal (or a vertical) scrolling functionality that shows spatial photos and spatial video previews? Does the Spatial Gallery app use private APIs to create this functionality? I've been looking at the Quick Look documentation and have been able to use the PreviewApplication to show a single preview, but do not see anything for a collection of files as the Spatial Gallery app presents in the scrolling view. Any insights or direction on how this may be done is greatly appreciated.
2
0
138
Jun ’25
How to search location in global rather than in local?
I'm building a weather app where users can search locations to get the weather, but the results only show locations in my country, not globally. For example, I'm in China and I can't search for New York; it just shows nothing. Here's my code:

@Observable
class SearchPlaceManager: NSObject {
    var searchText: String = ""
    let searchCompleter = MKLocalSearchCompleter()
    var searchResults: [MKLocalSearchCompletion] = []

    override init() {
        super.init()
        searchCompleter.resultTypes = .address
        searchCompleter.delegate = self
    }

    @MainActor
    func seachLocation() {
        if !searchText.isEmpty {
            searchCompleter.queryFragment = searchText
        }
    }
}

extension SearchPlaceManager: MKLocalSearchCompleterDelegate {
    func completerDidUpdateResults(_ completer: MKLocalSearchCompleter) {
        withAnimation {
            self.searchResults = completer.results
        }
    }
}

I've also tried setting

searchCompleter.region = MKCoordinateRegion(
    center: CLLocationCoordinate2D(latitude: 0, longitude: 0),
    span: MKCoordinateSpan(latitudeDelta: 180, longitudeDelta: 360)
)

but it doesn't work.
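For completeness, once a completion does come back, this is roughly how I resolve it to an actual place with MKLocalSearch; the error handling here is schematic:

import MapKit

// Sketch: resolve a selected MKLocalSearchCompletion to a concrete map item.
func resolve(_ completion: MKLocalSearchCompletion) async throws -> MKMapItem? {
    let request = MKLocalSearch.Request(completion: completion)
    let response = try await MKLocalSearch(request: request).start()
    return response.mapItems.first
}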
2
0
771
Dec ’24
How to prevent system windows occlusion when using OcclusionMaterial
We use SceneReconstructionProvider to detect meshes in the surrounding environment and apply an OcclusionMaterial to them.

// Assuming `entity` represents one of the detected meshes in the environment
entity.components.set(ModelComponent(
    mesh: mesh,
    materials: [OcclusionMaterial()]
))

While this correctly occludes entities placed in the immersive space, it also occludes system windows. This becomes problematic when a window is dragged into an occluded area (before or after entering the immersive space), preventing interaction with its elements. In some cases, it also makes it impossible to focus on the window's drag handle, since this might become occluded as well after moving the window nearby. More generally, system windows can be occluded whenever they come into proximity with a model that has OcclusionMaterial applied. I'm aware of a change introduced in visionOS 2 regarding how occlusions interact with UI elements (as noted in the release notes). I believe this change was intended to ensure windows do not remain visible when opened in another room. However, it also introduces some challenges, as described in the scenario above. Is there a way to prevent system window occlusion while still allowing entities to be occluded by environmental features? Perhaps not using OcclusionMaterial at all?

Development environment: Xcode 16.2, macOS 15.2
Run-time configuration: visionOS 2.2 and 2.3
2
2
308
Feb ’25
TextComponent on iOS/macOS pixelated when viewed from short distance
Hello, I've been tinkering a bit with TextComponent. Based on the docs, it seems like this component should always render sharp and nice text, no matter how close the user gets: "RealityKit dynamically adjusts the backing size to a value that results in high-fidelity text at its current location." And it does on visionOS, but on iOS and macOS the text gets pixelated when I get close to it, as if it's just rendered once as a plain image texture. Can anyone tell me whether this is expected behavior or a bug? Here are two screenshots for comparison (iPhone and Vision Pro): Thanks!
2
2
101
Aug ’25
Launching a Unity fully immersive game from SwiftUI
I am trying to launch a fully immersive game from Unity in a SwiftUI view. The game uses Metal rendering with Compositor Services. I added the Unity Xcode project to the workspace and added the necessary bridge code. When I click the button to call ufw?.showUnityWindow(), it does not start, and I get the following in the console: AR session failed to start after 5 seconds. Is the app configured to use an immersive space?
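For reference, my understanding is that the SwiftUI side has to declare an immersive space for Compositor Services to render into, and open it before the Unity content can start. A schematic of that wiring; the .default configuration is an assumption on my part, and the body of the CompositorLayer closure is a placeholder for whatever the Unity bridge actually exposes:

import SwiftUI
import CompositorServices

@main
struct LauncherApp: App {
    @Environment(\.openImmersiveSpace) private var openImmersiveSpace

    var body: some Scene {
        WindowGroup {
            Button("Start game") {
                Task { _ = await openImmersiveSpace(id: "UnityImmersiveSpace") }
            }
        }

        ImmersiveSpace(id: "UnityImmersiveSpace") {
            CompositorLayer(configuration: .default) { layerRenderer in
                // Placeholder: hand `layerRenderer` to the Unity bridge here.
            }
        }
        .immersionStyle(selection: .constant(.full), in: .full)
    }
}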
2
0
99
Jun ’25
How to trigger other effects using hoverEffect?
I’m facing an issue while using CustomHoverEffect. In my view, there is a long title, which causes the title to be truncated. When the user hovers over it, the title should scroll. Although I have already implemented the scrolling effect, I am unsure how to trigger the scroll on hover. How should I approach this?
2
0
353
Feb ’25
Realitykit asset loading
With Xcode 26, loading resources with RealityKit is extremely slow. Here my project takes almost 50 seconds to load. I also get multiple "Hang detected" messages in the console. When I uncheck "Debug executable" in the scheme, the same project loads in 2 seconds. I'm using RealityKit asynchronous loading:

private static func loadFromRealityComposerPro(
    named entityName: String,
    fromSceneNamed sceneName: String
) async -> Entity? {
    var entity: Entity?
    do {
        let scene = try await Entity(
            named: sceneName,
            in: visionPetsContentBundle
        )
        entity = scene.findEntity(named: entityName)
    } catch {
        print(
            "Error loading \(entityName) from scene \(sceneName): \(error.localizedDescription)"
        )
    }
    return entity
}

Is anyone having the same problem?
2
0
73
Jun ’25