Discuss Spatial Computing on Apple Platforms.

Posts under General subtopic

Post

Replies

Boosts

Views

Activity

RotateGesture3D auto constrained to axis
Hi, on visionOS we can rely on RotateGesture3D to manage entity rotation. With the constrainedToAxis parameter we can even allow rotation only around the x, y, or z axis, or combinations of them. What I want to know is whether it is possible to constrain the rotation to an axis automatically. Let me explain: the functionality I would like to implement is to constrain the rotation to a single axis only once the user has started their gesture. The initial motion the user makes should tell us which axis they want to rotate around. This would be equivalent to automatically activating a constraint on one of the axes, as if we had defined the gesture on that axis: RotateGesture3D(constrainedToAxis: .x), RotateGesture3D(constrainedToAxis: .y), or RotateGesture3D(constrainedToAxis: .z). Is it possible to do this? If so, what would be the best way to do it? A code example would be greatly appreciated. Regards, Tof
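One possible approach, sketched below under the assumption that an unconstrained RotateGesture3D reports a Rotation3D whose dominant axis reflects the user's initial motion: sample the axis on the first meaningful update, lock to it, and apply only the angle about that axis for the rest of the gesture.

import SwiftUI
import RealityKit

struct AutoAxisRotateView: View {
    @State private var lockedAxis: SIMD3<Float>? = nil        // chosen on the first update
    @State private var startOrientation: simd_quatf? = nil    // orientation at gesture start

    var body: some View {
        RealityView { content in
            // Assumed: the scene contains an entity with InputTargetComponent and
            // CollisionComponent so it can receive the gesture.
        }
        .gesture(
            RotateGesture3D()                       // deliberately unconstrained
                .targetedToAnyEntity()
                .onChanged { value in
                    let entity = value.entity
                    if startOrientation == nil { startOrientation = entity.orientation }

                    let rotation = value.rotation   // Rotation3D
                    // Lock to the dominant axis once the gesture is clearly under way.
                    if lockedAxis == nil, abs(rotation.angle.radians) > 0.05 {
                        let axis = rotation.axis
                        let mags = [abs(axis.x), abs(axis.y), abs(axis.z)]
                        switch mags.firstIndex(of: mags.max()!)! {
                        case 0:  lockedAxis = SIMD3<Float>(1, 0, 0)
                        case 1:  lockedAxis = SIMD3<Float>(0, 1, 0)
                        default: lockedAxis = SIMD3<Float>(0, 0, 1)
                        }
                    }
                    guard let lockedAxis, let startOrientation else { return }
                    // Approximation: keep only the gesture's angle, applied about the locked axis.
                    let delta = simd_quatf(angle: Float(rotation.angle.radians), axis: lockedAxis)
                    entity.orientation = delta * startOrientation
                }
                .onEnded { _ in
                    lockedAxis = nil
                    startOrientation = nil
                }
        )
    }
}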
3
0
411
Feb ’25
Unable to Create a Fully Immersive Experience That Hides Other Windows in visionOS App
Description: I'm developing a travel/panorama viewing app for visionOS that allows users to view 360° panoramic images in an immersive space. When users enter panorama viewing mode, I want to provide a fully immersive experience where the main interface window and Earth 3D globe window are hidden. I've implemented the app following Apple's documentation on Creating Fully Immersive Experiences, but when users enter the immersive space, both the main window and the Earth 3D window remain visible, diminishing the immersive experience. Implementation Details: My app has three main components: A main content window showing panorama thumbnails A 3D globe window (volumetric) showing locations An immersive space for viewing 360° panoramas I'm using .immersionStyle(selection: $panoImageView, in: .full) to create a fully immersive experience, but other windows remain visible. Relevant Code: @main struct Travel_ImmersiveApp: App { @StateObject private var appModel = AppModel() @State private var panoImageView: ImmersionStyle = .full var body: some Scene { WindowGroup { ContentView() .environmentObject(appModel) } .windowStyle(.automatic) .defaultSize(width: 1280, height: 825) WindowGroup(id: "Earth") { Globe3DView() .environmentObject(appModel) .onAppear { appModel.isGlobeWindowOpen = true appModel.globeWindowOpen = true } .onDisappear { if !appModel.shouldCloseApp { appModel.handleGlobeWindowClose() } } } .windowStyle(.volumetric) .defaultSize(width: 0.8, height: 0.8, depth: 0.8, in: .meters) .windowResizability(.contentSize) ImmersiveSpace(id: "ImmersiveView") { ImmersiveView() .environmentObject(appModel) } .immersionStyle(selection: $panoImageView, in: .full) } } Opening the Immersive Space: func getPanoImageAndOpenImmersiveSpace() async { appModel.clearMemoryCache() do { let canView = appModel.canViewImage(image) if canView { let downloadedImage = try await appModel.getPanoramaImage(for: image) { progress in Task { @MainActor in cardState = .loading(progress: progress) } } await MainActor.run { appModel.updateCurrentImage(image, panoramaImage: downloadedImage) } if !appModel.immersiveSpaceOpened { try await openImmersiveSpace(id: "ImmersiveView") await MainActor.run { appModel.immersiveSpaceOpened = true cardState = .normal } } else { await MainActor.run { appModel.updateImmersiveView = true cardState = .normal } } } else { await MainActor.run { appModel.errorMessage = "You do not have permission to view this image." cardState = .normal } } } catch { // Error handling } } Immersive View Implementation: struct ImmersiveView: View { @EnvironmentObject var appModel: AppModel var body: some View { RealityView { content in let rootEntity = Entity() content.add(rootEntity) Task { if let selectedImage = appModel.selectedImage, appModel.canViewImage(selectedImage) { await loadPanorama(for: rootEntity) } } } update: { content in if appModel.updateImmersiveView, let selectedImage = appModel.selectedImage, appModel.canViewImage(selectedImage), let rootEntity = content.entities.first { Task { await loadPanorama(for: rootEntity) appModel.updateImmersiveView = false } } } .onAppear { print("ImmersiveView appeared") } .onDisappear { appModel.resetImmersiveState() } } // loadPanorama implementation... 
}
What I've Tried:
- Set immersionStyle to .full as recommended in the documentation
- Confirmed that the immersive space is properly opened and displaying panoramas
- Verified that the state management for the immersive space is working correctly
Questions:
- How can I ensure that when the user enters the immersive panorama viewing experience, all other windows (main interface and Earth 3D globe) are automatically hidden?
- Is there a specific API or approach I'm missing to properly implement a fully immersive experience that hides all other windows?
- Do I need to manually dismiss the windows when opening the immersive space, and if so, what's the best approach for doing this?
Any guidance or sample code would be greatly appreciated. Thank you!
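Opening a full-style immersive space does not automatically dismiss the app's own windows, so the usual approach is to dismiss them explicitly with the dismissWindow environment action right after the space opens. A minimal sketch, reusing the ids from the code above; the main WindowGroup would need an explicit id (e.g. "Main"), which is assumed here:

import SwiftUI

struct OpenPanoramaButton: View {
    @Environment(\.openImmersiveSpace) private var openImmersiveSpace
    @Environment(\.dismissWindow) private var dismissWindow

    var body: some View {
        Button("View panorama") {
            Task {
                switch await openImmersiveSpace(id: "ImmersiveView") {
                case .opened:
                    // Dismiss the auxiliary windows once the space is up.
                    dismissWindow(id: "Earth")
                    dismissWindow(id: "Main")   // assumes WindowGroup(id: "Main") { ContentView() ... }
                default:
                    break
                }
            }
        }
    }
}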
3
0
129
Apr ’25
Performance drop when particle emitter is combined with video play
Hi All, We're a studio building an app, and as part of a scene we have a 3D asset with a smoke particle emitter and a curved mesh that plays video. I notice that when the video alone plays, or the particle effect alone runs, the scene works fine, but the frame rate drops drastically when both are turned on. How do I solve this? This is an important storytelling feature.
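A hedged mitigation sketch, not a root-cause fix: throttle the emitter's birth rate while the video is actually playing to keep the combined GPU load down. The entity passed in and the birth-rate values are placeholders.

import RealityKit
import AVFoundation

func throttleSmoke(on smokeEntity: Entity, whilePlaying player: AVPlayer) {
    guard var emitter = smokeEntity.components[ParticleEmitterComponent.self] else { return }
    let isPlaying = player.timeControlStatus == .playing
    // Placeholder values: fewer particles while the video is decoding and rendering.
    emitter.mainEmitter.birthRate = isPlaying ? 100 : 500
    smokeEntity.components.set(emitter)
}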
2
0
260
Oct ’25
Dynamically assigning texture resource to ShaderGraphMaterial on VisionOS
I implemented a ShaderGraphMaterial and tried to load it from my usda scene with ShaderGraphMaterial.init(name: in: bundle). I want to dynamically set a TextureResource on that material, so I wanted to expose a texture as a uniform input of the ShaderGraphMaterial. But apparently RCP's Shader Graph doesn't support a texture input as a parameter, as the image shows. And at the code level, ShaderGraphMaterial doesn't expose a way to set TextureResources either: its parameterNames shows an empty array if I haven't set any custom input parameters. The texture I get comes from my backend, so it really cannot be saved to a file and loaded again (that would be too weird). Is there something I am missing?
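ShaderGraphMaterial does accept texture parameters through setParameter(name:value:) with a .textureResource value, assuming the graph exposes a promoted image input; the parameter name "baseTexture" below is hypothetical and must match the input's name in Reality Composer Pro, and once promoted it should also appear in parameterNames. A minimal sketch for an in-memory image from a backend:

import RealityKit
import CoreGraphics
import ImageIO
import Foundation

func applyBackendTexture(_ imageData: Data, to material: inout ShaderGraphMaterial) throws {
    guard let source = CGImageSourceCreateWithData(imageData as CFData, nil),
          let cgImage = CGImageSourceCreateImageAtIndex(source, 0, nil) else {
        throw URLError(.cannotDecodeContentData)
    }
    // Build a texture directly from the decoded CGImage; no file round trip needed.
    let texture = try TextureResource.generate(from: cgImage,
                                               options: .init(semantic: .color))
    // "baseTexture" is hypothetical; it must match the promoted input's name in the graph.
    try material.setParameter(name: "baseTexture", value: .textureResource(texture))
}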
1
0
533
Jan ’25
Ground Shadows pass through objects
This issue has existed since visionOS 1, unless this is how it is supposed to work. As you can see in the screen capture, the shadows from the top box are shown on all three boxes below. This screen capture is from Reality Composer Pro, but the same thing happens on the Vision Pro. Is there any way to stop this behavior and just have shadows on the first object below the object that is casting them?
2
0
509
Dec ’24
CompositorServices Or RealityKit
I have been concentrating on developing a visionOS application. While I am currently quite familiar with RealityKit, CompositorServices has also captured my attention, but I have not yet learned it. Could you please clarify whether it is essential for me to learn CompositorServices? Additionally, I would appreciate insights into the respective advantages of RealityKit and CompositorServices.
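For comparison, a minimal sketch of the CompositorServices entry point: instead of entities and materials, you receive a LayerRenderer and drive your own Metal frame loop (the render loop itself is left as a stub).

import SwiftUI
import CompositorServices

struct PassthroughConfiguration: CompositorLayerConfiguration {
    func makeConfiguration(capabilities: LayerRenderer.Capabilities,
                           configuration: inout LayerRenderer.Configuration) {
        // Accept the defaults; pixel formats, foveation, and layout would be tuned here.
    }
}

struct MetalImmersiveScene: Scene {
    var body: some Scene {
        ImmersiveSpace(id: "MetalSpace") {
            CompositorLayer(configuration: PassthroughConfiguration()) { layerRenderer in
                // Spawn a render thread here: query frames and drawables from layerRenderer,
                // encode Metal command buffers, and present them yourself.
                // runMetalRenderLoop(layerRenderer)   // hypothetical render loop
            }
        }
    }
}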
2
0
725
Mar ’25
generateConvex(from mesh: MeshResource) crashes instead of throwing a Swift error
I have a MeshResource and I would like to create a collision component from it. let childBounds = child.visualBounds(relativeTo: self) var childShape: ShapeResource do { // Crashed by the following line instead of throwing a Swift Error childShape = try await ShapeResource.generateConvex(from: childModel.mesh); } catch { childShape = ShapeResource.generateBox(size: childBounds.extents) childShape = childShape.offsetBy(translation: childBounds.center) } Based on this document: https://developer.apple.com/documentation/realitykit/shaperesource/generateconvex(from:)-6upj9 Will throw an error if mesh does not define a nonempty convex volume. For example, will fail if all the vertices in mesh are coplanar. But, the method crashes the app instead of throwing a Swift error Incident Identifier: 35CD58F8-FFE3-48EA-85D3-6D241D8B0B4C CrashReporter Key: FE6790CA-6481-BEFD-CB26-F4E27652BEAE Hardware Model: Mac15,11 ... Version: 1.0 (1) Code Type: ARM-64 (Native) Role: Foreground Parent Process: launchd_sim [2057] Coalition: com.apple.CoreSimulator.SimDevice.85A2B8FA-689F-4237-B4E8-DDB93460F7F6 [1496] Responsible Process: SimulatorTrampoline [910] Date/Time: 2025-01-26 16:13:17.5053 +0800 Launch Time: 2025-01-26 16:13:09.5755 +0800 OS Version: macOS 15.2 (24C101) Release Type: User Report Version: 104 Exception Type: EXC_BREAKPOINT (SIGTRAP) Exception Codes: 0x0000000000000001, 0x00000001abf841d0 Termination Reason: SIGNAL 5 Trace/BPT trap: 5 Terminating Process: exc handler [17316] Triggered by Thread: 0 Thread 0 Crashed: 0 CoreRE 0x1abf841d0 REAssetManagerCollisionShapeAssetCreateConvexPolyhedron + 232 1 CoreRE 0x1abf845f0 REAssetManagerCollisionShapeAssetCreateConvexPolyhedronFromMesh + 868 2 RealityFoundation 0x1d25613bc static ShapeResource.generateConvex(from:) + 148 Here is the message on the app console from Xcode /Library/Caches/com.apple.xbs/Sources/REKit_Sim/ThirdParty/PhysX/physx/source/physxcooking/src/convex/QuickHullConvexHullLib.cpp (935) : internal error : QuickHullConvexHullLib::findSimplex: Simplex input points appers to be coplanar. Failed to cook convex mesh (0x3) assertion failure: 'convexPolyhedronShape != nullptr' (REAssetManagerCollisionShapeAssetCreateConvexPolyhedron:line 356) Bad parameters passed for convex mesh creation. Message from debug The above crash happened on a visionOS simulator (visionOS 2.2 (22N840)
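A hedged workaround sketch: since the coplanar case traps instead of throwing, screen the mesh first and fall back to a box shape, as in the original catch block. The extents check only catches axis-aligned degenerate meshes, so it is a heuristic, not a guarantee.

import RealityKit

func makeCollisionShape(for model: ModelComponent,
                        relativeBounds: BoundingBox) async -> ShapeResource {
    // Heuristic screen: a (near) zero extent on any axis means the vertices are
    // coplanar in an axis-aligned plane, the case the crash log complains about.
    let extents = model.mesh.bounds.extents
    let epsilon: Float = 1e-5
    let looksDegenerate = extents.x < epsilon || extents.y < epsilon || extents.z < epsilon

    if !looksDegenerate,
       let convex = try? await ShapeResource.generateConvex(from: model.mesh) {
        return convex
    }
    // Fallback: a box matching the child's bounds, as in the original catch block.
    return ShapeResource.generateBox(size: relativeBounds.extents)
        .offsetBy(translation: relativeBounds.center)
}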
1
0
414
Jan ’25
Is there a way to scale a RealityKit ShapeResource?
I can generate a ShapeResource from a RealityKit entity's extents. Can I apply some scaling to the generated shape? Is there a way to do that? // model is a ModelResource and bounds is a BoundingBox var shape = ShapeResource.generateConvex(from: model.mesh); shape = shape.offsetBy(translation: bounds.center) // How can I scale the shape to fit within the bounds? The following API only provides rotation and translation support, and I cannot find any scale support: offsetBy(rotation: simd_quatf = simd_quatf(ix: 0, iy: 0, iz: 0, r: 1), translation: SIMD3<Float> = SIMD3<Float>()) I can put the ShapeResource on an entity and scale the entity, but I would like to know if it is possible to scale the ShapeResource itself without attaching it to an entity.
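There is no scale counterpart to offsetBy, so one rough workaround is to scale the mesh's vertex positions and regenerate the convex shape from the scaled copy. This sketch assumes the MeshResource.Contents collections can be rebuilt as shown, which may need adjusting:

import RealityKit

func generateScaledConvexShape(from mesh: MeshResource,
                               scale: SIMD3<Float>) async throws -> ShapeResource {
    var contents = mesh.contents
    contents.models = .init(contents.models.map { model in
        var model = model
        model.parts = .init(model.parts.map { part in
            var part = part
            // Scale every vertex position; normals and UVs are left untouched.
            let scaledPositions = part.positions.elements.map { $0 * scale }
            part.positions = MeshBuffers.Positions(scaledPositions)
            return part
        })
        return model
    })
    let scaledMesh = try MeshResource.generate(from: contents)
    return try await ShapeResource.generateConvex(from: scaledMesh)
}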
3
0
564
Feb ’25
Rendering Order with ModelSortGroup
I have a huge sphere, with the camera staying inside it, and I turn on front-face culling on the ShaderGraphMaterial applied to that sphere so that I can place other 3D content inside. However, when it comes to attachments, object occlusion never works as I expect: specifically, my attachments are occluded by my sphere (some are not, so the behavior is not deterministic). I suspected a depth-testing issue, so I started using ModelSortGroup to reorder the rendering sequence, but it doesn't work. While searching the internet, I found a post whose comments show that ModelSortGroup simply doesn't work on attachments. So how should I tackle this issue now, to let my attachments appear inside my sphere? OS/Sys: visionOS 2.3 / Xcode 16.3
1
0
484
Feb ’25
CapturedRoom.Section is missing a lot of information
The Section struct only makes the center property publicly available, but this is a SIMD3 that doesn't seem to line up with the rest of the model. All other objects have a 4x4 transform matrix that accurately gives their position and rotation. When inspecting a Section in the debugger, many more properties are visible, such as polygon and transform. Why are these not exposed publicly? The transform in particular seems necessary to make any real use of the Sections.
1
0
341
Sep ’25
USDZ with Blend Shapes Workflow Recommendations
Hi, since RealityKit 4 now supports blend shapes, I was wondering if there are any workflow or tooling recommendations for baking/exporting them into a USDZ. Are Blender or Cinema 4D capable of doing that out of the box? Should we look into NVIDIA Omniverse (https://docs.omniverse.nvidia.com/connect/latest/blender/manual.htm)? So far this topic seems very sparsely documented, and I would appreciate any hints. Thank you!
3
0
1.3k
Jan ’25
Unexpected Behavior in Entity Movement System When Using AVAudioPlayer in visionOS Development
I am currently developing an app for visionOS and have encountered an issue involving a component and system that moves an entity up and down within a specific Y-axis range. The system works as expected until I introduce sound playback using AVAudioPlayer. Whenever I use AVAudioPlayer to play sound, the entity exhibits unexpected behaviors, such as freezing or becoming unresponsive. The freezing of the entity's movement is particularly noticeable when playing the audio for the first time. After that, it becomes less noticeable, but you can still feel it, especially when the audio is played in quick succession. Also, the issue is more noticable on real device than the simulator // // IssueApp.swift // Issue // // Created by Zhendong Chen on 2/1/25. // import SwiftUI @main struct IssueApp: App { var body: some Scene { WindowGroup { ContentView() } .windowStyle(.volumetric) } } // // ContentView.swift // Issue // // Created by Zhendong Chen on 2/1/25. // import SwiftUI import RealityKit import RealityKitContent struct ContentView: View { @State var enlarge = false var body: some View { RealityView { content, attachments in // Add the initial RealityKit content if let scene = try? await Entity(named: "Scene", in: realityKitContentBundle) { if let sphere = scene.findEntity(named: "Sphere") { sphere.components.set(UpAndDownComponent(speed: 0.03, minY: -0.05, maxY: 0.05)) } if let button = attachments.entity(for: "Button") { button.position.y -= 0.3 scene.addChild(button) } content.add(scene) } } attachments: { Attachment(id: "Button") { VStack { Button { SoundManager.instance.playSound(filePath: "apple_en") } label: { Text("Play audio") } .animation(.none, value: 0) .fontWeight(.semibold) } .padding() .glassBackgroundEffect() } } .onAppear { UpAndDownSystem.registerSystem() } } } // // SoundManager.swift // LinguaBubble // // Created by Zhendong Chen on 1/14/25. // import Foundation import AVFoundation class SoundManager { static let instance = SoundManager() private var audioPlayer: AVAudioPlayer? func playSound(filePath: String) { guard let url = Bundle.main.url(forResource: filePath, withExtension: ".mp3") else { return } do { audioPlayer = try AVAudioPlayer(contentsOf: url) audioPlayer?.play() } catch let error { print("Error playing sound. \(error.localizedDescription)") } } } // // UpAndDownComponent+System.swift // Issue // // Created by Zhendong Chen on 2/1/25. // import RealityKit struct UpAndDownComponent: Component { var speed: Float var axis: SIMD3<Float> var minY: Float var maxY: Float var direction: Float = 1.0 // 1 for up, -1 for down var initialY: Float? 
init(speed: Float = 1.0, axis: SIMD3<Float> = [0, 1, 0], minY: Float = 0.0, maxY: Float = 1.0) { self.speed = speed self.axis = axis self.minY = minY self.maxY = maxY } } struct UpAndDownSystem: System { static let query = EntityQuery(where: .has(UpAndDownComponent.self)) init(scene: RealityKit.Scene) {} func update(context: SceneUpdateContext) { let deltaTime = Float(context.deltaTime) // Time between frames for entity in context.entities(matching: Self.query, updatingSystemWhen: .rendering) { guard var component: UpAndDownComponent = entity.components[UpAndDownComponent.self] else { continue } // Ensure we have the initial Y value set if component.initialY == nil { component.initialY = entity.transform.translation.y } // Calculate the current position let currentY = entity.transform.translation.y // Move the entity up or down let newY = currentY + (component.speed * component.direction * deltaTime) // If the entity moves out of the allowed range, reverse the direction if newY >= component.initialY! + component.maxY { component.direction = -1.0 // Move down } else if newY <= component.initialY! + component.minY { component.direction = 1.0 // Move up } // Apply the new position entity.transform.translation = SIMD3<Float>(entity.transform.translation.x, newY, entity.transform.translation.z) // Update the component with the new direction entity.components[UpAndDownComponent.self] = component } } } Could someone help me with this?
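A hedged sketch of one likely cause and mitigation: AVAudioPlayer(contentsOf:) decodes the file synchronously, and doing that on the main actor at tap time can hitch the render loop that drives the UpAndDownSystem. Preloading and preparing the players once, off the main queue, keeps the hot path cheap (RealityKit's own AudioFileResource plus Entity.playAudio is another option).

import AVFoundation

final class PreloadedSoundManager {
    static let shared = PreloadedSoundManager()
    private var players: [String: AVAudioPlayer] = [:]
    private let loadQueue = DispatchQueue(label: "sound.preload", qos: .userInitiated)

    /// Call once (e.g. at launch) for every clip the app expects to play.
    func preload(_ names: [String]) {
        loadQueue.async {
            for name in names {
                guard let url = Bundle.main.url(forResource: name, withExtension: "mp3"),
                      let player = try? AVAudioPlayer(contentsOf: url) else { continue }
                player.prepareToPlay()   // primes buffers so play() is cheap later
                DispatchQueue.main.async { self.players[name] = player }
            }
        }
    }

    func playSound(filePath name: String) {
        players[name]?.play()            // no decoding on the hot path
    }
}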
2
0
343
Feb ’25
Reading image files outside of app content
Hi there. Thanks to amazing help from you guys, I've managed to code a 360° image carousel where the user can browse 360° images located inside the project package. Is there a way to access the filesystem on AVP outside the app? I know about FileManager, and I can get access to the .documentsDirectory, but how do I access the Documents folder from the "Files" app on the AVP? My goal is to read images from a hardcoded folder location on the AVP, so that the user never has to select the images themselves. I know this may not be the "right" way to do this. The app is supposed to be "foolproof" with a minimum of user interaction. The only way to change the images should be to change the contents of the hardcoded image folder. I hope this makes sense =) Thanks in advance! Regards, Kim
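A sketch of the usual mechanism, assuming the same Info.plist keys that expose an app's Documents folder in the Files app on iOS also apply on visionOS; the folder name "Panoramas" is a placeholder.

import Foundation
import UIKit

// Assumed Info.plist keys:
//   UIFileSharingEnabled = YES               ("Application supports iTunes file sharing")
//   LSSupportsOpeningDocumentsInPlace = YES  ("Supports opening documents in place")
// With both set, the app's Documents folder appears in the Files app, so users can
// drop images into it without any in-app picker.
func loadPanoramaImages(subfolder: String = "Panoramas") -> [UIImage] {
    let fileManager = FileManager.default
    let folder = URL.documentsDirectory.appendingPathComponent(subfolder)
    // Create the hardcoded folder on first launch so users can see where to put files.
    try? fileManager.createDirectory(at: folder, withIntermediateDirectories: true)
    let urls = (try? fileManager.contentsOfDirectory(at: folder, includingPropertiesForKeys: nil)) ?? []
    return urls.compactMap { url in
        guard let data = try? Data(contentsOf: url) else { return nil }
        return UIImage(data: data)
    }
}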
2
0
268
Feb ’25
Object Capture app getting rejected for not supporting non-Pro models
Hi, I created an app using the iOS Object Capture API, which works only on LiDAR-enabled phones. This is a limitation of the API provided by Apple itself. I submitted the app for review, but it is getting rejected (twice) saying it doesn't work on non-Pro models. Even though I explained that capturing needs LiDAR and is supported only on Pro models, it still gets rejected after testing on non-Pro models. Is there a way out?
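App Review generally expects graceful degradation on unsupported hardware rather than a hard requirement it cannot verify, so one common pattern is to gate the capture UI on ObjectCaptureSession.isSupported and explain the limitation in-app. A sketch; the capture flow view here is a placeholder for the app's real flow:

import SwiftUI
import RealityKit

struct ObjectCaptureFlowView: View {   // placeholder for the app's real capture flow
    var body: some View { Text("Object Capture flow goes here") }
}

struct CaptureGateView: View {
    var body: some View {
        if ObjectCaptureSession.isSupported {
            ObjectCaptureFlowView()
        } else {
            ContentUnavailableView(
                "Capturing requires a LiDAR-equipped iPhone or iPad",
                systemImage: "camera.metering.unknown",
                description: Text("You can still browse and share models created on supported devices.")
            )
        }
    }
}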
1
0
342
Feb ’25
Cannot extract imagePair from generated Spatial Photos
Hi I am trying to implement something simple as people can share their Spatial Photos with others (just like this post). I encountered the same issue with him, but his answer doesn't help me out here. Briefly speaking, I am using CGImgaeSoruce to extract paired leftImage and rightImage from one fetched spatial photo let photos = PHAsset.fetchAssets(with: .image, options: nil) // enumerating photos .... if asset.mediaSubtypes.contains(PHAssetMediaSubtype.spatialMedia) { spatialAsset = asset } // other code show below I can fetch left and right images from native Spatial Photo (taken by Apple Vision Pro or iPhone 15+), but it didn't work on generated spatial photo (2D -> 3D feat in Photos). // imageCount is 1 when it comes to generated spatial photo let imageCount = CGImageSourceGetCount(source) I searched over the net and someone says the generated version is having a depth image instead of left/right pair. But still I cannot extract any depth image from imageSource. The full code below, the imagePair extraction will stop at "no groups found": func extractPairedImage(phAsset: PHAsset, completion: @escaping (StereoImagePair?) -> Void) { let options = PHImageRequestOptions() options.isNetworkAccessAllowed = true options.deliveryMode = .highQualityFormat options.resizeMode = .none options.version = .original return PHImageManager.default().requestImageDataAndOrientation(for: phAsset, options: options) { imageData, _, _, _ in guard let imageData, let imageSource = CGImageSourceCreateWithData(imageData as CFData, nil) else { completion(nil) return } let stereoImagePair = stereoImagePair(from: imageSource) completion(stereoImagePair) } } } func stereoImagePair(from source: CGImageSource) -> StereoImagePair? { guard let properties = CGImageSourceCopyProperties(source, nil) as? [CFString: Any] else { return nil } let imageCount = CGImageSourceGetCount(source) print(String(format: "%d images found", imageCount)) guard let groups = properties[kCGImagePropertyGroups] as? [[CFString: Any]] else { /// function returns here print("no groups found") return nil } guard let stereoGroup = groups.first(where: { let groupType = $0[kCGImagePropertyGroupType] as! CFString return groupType == kCGImagePropertyGroupTypeStereoPair }) else { return nil } guard let leftIndex = stereoGroup[kCGImagePropertyGroupImageIndexLeft] as? Int, let rightIndex = stereoGroup[kCGImagePropertyGroupImageIndexRight] as? Int, let leftImage = CGImageSourceCreateImageAtIndex(source, leftIndex, nil), let rightImage = CGImageSourceCreateImageAtIndex(source, rightIndex, nil), let leftProperties = CGImageSourceCopyPropertiesAtIndex(source, leftIndex, nil), let rightProperties = CGImageSourceCopyPropertiesAtIndex(source, rightIndex, nil) else { return nil } return (leftImage, rightImage, self.identifier) } Any suggestion? Thanks visionOS 2.4
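One thing worth probing, sketched below with standard ImageIO/AVFoundation calls: converted photos may carry auxiliary depth or disparity data rather than a stereo group. If both this and the stereo-group lookup come back empty, the asset simply does not expose either representation through CGImageSource.

import ImageIO
import AVFoundation

func auxiliaryDepthData(from source: CGImageSource) -> AVDepthData? {
    // Disparity is what the system camera pipeline usually attaches; check depth too.
    let auxTypes = [kCGImageAuxiliaryDataTypeDisparity, kCGImageAuxiliaryDataTypeDepth]
    for auxType in auxTypes {
        if let info = CGImageSourceCopyAuxiliaryDataInfoAtIndex(source, 0, auxType) as? [AnyHashable: Any],
           let depth = try? AVDepthData(fromDictionaryRepresentation: info) {
            return depth
        }
    }
    return nil
}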
3
0
131
Jun ’25
Spatial Gallery App functionality
Similar to the visionOS Spatial Gallery app, I'm developing a visionOS app that will show spatial photos and videos. Is it possible to re-create the horizontal (or vertical) scrolling functionality that shows spatial photos and spatial video previews? Does the Spatial Gallery app use private APIs to create this functionality? I've been looking at the Quick Look documentation and have been able to use PreviewApplication to show a single preview, but I don't see anything for a collection of files like the Spatial Gallery app presents in its scrolling view. Any insights or direction on how this may be done would be greatly appreciated.
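Since PreviewApplication.open already takes a collection, one assumption worth testing is that passing all the spatial photo URLs in a single call lets the user move between them inside the Quick Look preview; the exact signature and in-preview navigation are not verified here.

import Foundation
import QuickLook

func presentSpatialPhotos(_ urls: [URL]) {
    // Assumes local .heic spatial photo files; the returned session handle could be
    // kept around to dismiss or update the preview later.
    _ = PreviewApplication.open(urls: urls)
}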
2
0
138
Jun ’25
Misaligned visionOS Simulator Home Position
Using Xcode 26 Beta 6 on macOS 26 Beta (25A5349a). When pressing the home button of the visionOS simulator, I am not positioned in the middle of the room as I normally would be. This occurred after moving around the space a lot to find an element added to an ImmersiveSpace. Workaround: restart the simulator device. See the attached pictures, visionOSSimulatorCorrectHomePosition and visionOSSimulatorMisallignedHomePosition.
2
0
874
Sep ’25