visionOS

Discuss developing for spatial computing and Apple Vision Pro.

Posts under visionOS tag

200 Posts
Post

Replies

Boosts

Views

Activity

Adapt distance/depth of view relative to user
Hi, I'm currently working on messages that should appear in front of the user depending on the system state of my visionOS app. How can I change the distance of the appearing message relative to the user if the message is displayed as a View? Or is this only possible if I create an entity for that message and then apply .setPosition() relative to, e.g., the head anchor? Currently I can change the x and y coordinates of the view, since it works within a 2D space, but as I intend to display that view in my immersive space, it would be great if I could display my message a little further away from the user, as it is currently a little too close in the user's view. If there is a solution without the use of entities, I would prefer that one. Thank you for your help! Below is an example:

Feedback.swift

import SwiftUI

struct Feedback: View {
    let message: String

    var body: some View {
        VStack {
            Text(message)
        }
        .position(x: 0, y: -850) // how to adapt distance/depth relative to user in UI?
    }
}

ImmersiveView.swift

import SwiftUI
import RealityKit

struct ImmersiveView: View {
    @State private var feedbackMessage = "Hello World"

    public var body: some View {
        VStack {}
            .overlay(
                Feedback(message: feedbackMessage)
            )

        RealityView { content in
            let configuration = SpatialTrackingSession.Configuration(tracking: [.hand])
            let spatialTrackingSession = SpatialTrackingSession()
            _ = await spatialTrackingSession.run(configuration)

            // Head
            let headEntity = AnchorEntity(.head)
            content.add(headEntity)
        }
    }
}
0
0
64
12h
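One possible direction for the question above: on visionOS, SwiftUI views placed in a volume or immersive space can be pushed along the z-axis with the offset(z:) modifier, without creating any entities. A minimal sketch, assuming the message view lives inside the immersive space's view hierarchy (the value is in points and is a hypothetical starting point to tune, not a confirmed distance):

```swift
import SwiftUI

// Sketch: pushing a 2D message view away from the viewer along the z-axis.
// A negative z offset moves the view farther from the user on visionOS.
struct FeedbackOverlay: View {
    let message: String

    var body: some View {
        Text(message)
            .padding()
            .glassBackgroundEffect()
            // Hypothetical value: experiment to find a comfortable depth.
            .offset(z: -500)
    }
}
```

Whether this is sufficient depends on where the view is hosted; if the view sits in a plain 2D window rather than inside the immersive space, a z offset may have no visible effect.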
How to display spatial images or videos on swiftUI view
Hi! I am making a visionOS program. I have an idea: I want to embed spatial videos or pictures into my UI, but I have run into problems and have no way to implement it. I have tried the following: Use AVPlayerViewController to play a spatial video, but it only displays the spatial video when modalPresentationStyle = .fullScreen. Once embedded in a SwiftUI view, it shows as a normal 2D video. I also tried the method from https://developer.apple.com/forums/thread/733813, using a shader graph to display spatial images, but the material can only be attached to an entity, and I don't know how to make it show up in a view. I also tried to use CAMetalLayer and write a custom shader to display spatial images, but I couldn't find a function like unity_StereoEyeIndex in Unity to handle per-eye rendering. Does anyone have a good solution to my problem? Thank you!
0
0
87
22h
1 meter size limit on object visual presentation?
I’m encountering a 1-meter size limit on the visual presentation of objects presented in an immersive environment in visionOS, both in the simulator and on the device. For example, if I load a USDZ object that’s 1.0 x 0.5 x 0.05 meters, all of the 1.0 x 0.5 meter side is visible. If I scale it by a factor of 2.0, only a 1.0 x 1.0 viewport onto the object is shown, even though the object's size reads out as scaled when queried by usdz.visualBounds(relativeTo: nil).extents, and if the USDZ is animated, the animation reflects the motion of the entire object. I haven’t been able to determine why this is the case, nor found any way to adjust or mitigate it. Is this a hard constraint of the system, or is there a workaround? The target environment is visionOS 1.2.
1
0
103
1d
Running multiple ARKitSessions in the same app?
I would like to implement the following, but I am not sure if this is a supported use case based on the current documentation:

Run one ARKitSession with a WorldTrackingProvider in Swift for mixed immersion Metal rendering (to get the device anchor for the layer renderer drawable & view matrix).

Run another ARKitSession with a WorldTrackingProvider and a CameraFrameProvider in a different library (that is part of the same app) using the ARKit C API, and use the transforms from the anchors in that session to render objects in the Swift application part.

In general, is this a supported use case, or is it necessary to have one shared ARKitSession? Assuming this is supported, will the (device) anchors from both WorldTrackingProviders reference the same world coordinate system? Are there any performance downsides to having multiple ARKitSessions? Thanks
1
0
113
2d
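A common alternative to the two-session setup asked about above, sketched here under the assumption that a single shared session is acceptable: run one ARKitSession with both providers, so every anchor is guaranteed to come from the same world coordinate system. Function name and error handling are illustrative, not from the original post:

```swift
import ARKit

// Sketch: one shared ARKitSession running both providers.
// Running them together sidesteps any question of whether two
// sessions agree on the world origin.
func startSharedSession() async throws -> (ARKitSession, WorldTrackingProvider, CameraFrameProvider) {
    let session = ARKitSession()
    let worldTracking = WorldTrackingProvider()
    let cameraFrames = CameraFrameProvider()

    try await session.run([worldTracking, cameraFrames])
    return (session, worldTracking, cameraFrames)
}
```

The anchors from both providers could then be handed to the Metal renderer and the C-API library respectively; whether separate sessions share a coordinate system is exactly the open question in the post, which this arrangement avoids having to answer.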
Open the vision pro camera using Enterprise API and view it in application window
I want to see the Vision Pro camera view in my application window. I have written some code based on Apple's samples, but I am stuck on CVPixelBuffer: how do I convert the pixel buffer to a video frame?

Button("Camera Feed") {
    Task {
        if #available(visionOS 2.0, *) {
            let formats = CameraVideoFormat.supportedVideoFormats(for: .main, cameraPositions: [.left])
            let cameraFrameProvider = CameraFrameProvider()
            var arKitSession = ARKitSession()
            var pixelBuffer: CVPixelBuffer?

            await arKitSession.queryAuthorization(for: [.cameraAccess])

            do {
                try await arKitSession.run([cameraFrameProvider])
            } catch {
                return
            }

            guard let cameraFrameUpdates = cameraFrameProvider.cameraFrameUpdates(for: formats[0]) else {
                return
            }

            for await cameraFrame in cameraFrameUpdates {
                guard let mainCameraSample = cameraFrame.sample(for: .left) else {
                    continue
                }
                //====
                print("=========================")
                print(mainCameraSample.pixelBuffer)
                print("=========================")
                // self.pixelBuffer = mainCameraSample.pixelBuffer
            }
        } else {
            // Fallback on earlier versions
        }
    }
}

I want to convert mainCameraSample.pixelBuffer into video. Could you please guide me!
2
0
149
3d
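One way to show each CVPixelBuffer from the loop above as a frame in the UI — a minimal sketch using Core Image, assuming the goal is to display the feed rather than encode it to a file:

```swift
import CoreImage
import CoreVideo
import UIKit

// Sketch: converting a CVPixelBuffer into a UIImage that a SwiftUI
// Image view can display. Reusing one CIContext avoids paying its
// setup cost on every frame.
let ciContext = CIContext()

func makeImage(from pixelBuffer: CVPixelBuffer) -> UIImage? {
    let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
    guard let cgImage = ciContext.createCGImage(ciImage, from: ciImage.extent) else {
        return nil
    }
    return UIImage(cgImage: cgImage)
}
```

Publishing the resulting image from the for await loop (e.g. via an @Observable model) turns the stream of pixel buffers into a live camera view. For writing an actual video file, AVAssetWriter with an AVAssetWriterInputPixelBufferAdaptor would be the usual route.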
rendering MV HEVC encoded stereoscopic video frames using RealityKit VideoMaterial and AVSampleBufferVideoRenderer on VisionOS
Hi, I'm trying to use this example (https://developer.apple.com/documentation/avfoundation/media_reading_and_writing/converting_side-by-side_3d_video_to_multiview_hevc_and_spatial_video) to encode a stereoscopic (left eye, right eye) video frame using MV-HEVC. The sample project creates tagged buffers for the left and right eye and uses a writer to write the MV-HEVC encoded video buffers. But after I get the right and left tagged buffers, I want to use VideoMaterial and its AVSampleBufferVideoRenderer to enqueue these video frames. If I render the MV-HEVC encoded left eye sample buffer and right eye sample buffer sequentially, will the AVSampleBufferVideoRenderer render it as a stereoscopic view? How does this work with VideoMaterial and AVSampleBufferVideoRenderer? Thanks!
0
0
193
6d
VisionOS crashes Loading Entities from Disk Bundles with EXC_BREAKPOINT
Summary: I’m working on a visionOS project where I need to dynamically load a .bundle file containing RealityKit content from the app’s Application Support directory. The .bundle is saved to disk after being downloaded or retrieved as an On-Demand Resource (ODR). Sample project with the issue: Github repo. Run the target test-odr to use the local bundle and reproduce the crash.

Overall setup:

Add a .bundle named RealityKitContent_RealityKitContent.bundle to the app’s resources. This bundle contains a Reality file with two USDAs: “Immersive” and “Scene”.

Save to disk: save the bundle to the Application Support directory, ensuring that the file is correctly copied and saved.

Load the bundle: load the bundle from the saved URL using Bundle(url: bundleURL) to initialize the Bundle object.

Load entity from bundle: load a specific entity (“Scene”) from the bundle. When trying to load the entity using let storedEntity = try await Entity(named: "Scene", in: bundle), the app crashes with an EXC_BREAKPOINT error.

contentsOf method issue: if I use the Entity.load(contentsOf: realityFileURL, withName: entityName) method, it always loads the first root entity found (in this case, “Immersive”) rather than “Scene”, even when specifying the entity name. This is why I want to use the Bundle to load entities by name more precisely.

Issue: the crash consistently occurs on the Entity(named: "Scene", in: bundle) line. I have verified that the bundle exists and is accessible at the specified path, and that it contains the expected .reality file with multiple entities (“Immersive” and “Scene”). The error code I get is EXC_BREAKPOINT (code=1, subcode=0x1d135d4d0).

What I’ve tried:
• Ensured the bundle is properly saved and accessible.
• Checked that the bundle is initialized correctly from the URL.
• Tested loading the entity using the contentsOf method, which works fine but always loads the “Immersive” entity, ignoring the specified name.

Hence, I want to use the Bundle-based approach to load multiple USDA entities selectively. Question: Has anyone faced a similar issue, or does anyone know why loading entities using Entity(named:in:) from a disk-based bundle causes this crash? Any advice on how to debug or resolve this, especially for managing multiple root entities in a .reality file, would be greatly appreciated.
1
0
222
3d
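For the bundle-loading crash described above, one possible workaround — offered as an assumption, not a confirmed fix: load the file's root with Entity(contentsOf:) and then search the hierarchy by name, instead of asking the Bundle-based initializer to resolve the name:

```swift
import Foundation
import RealityKit

// Sketch: load the whole .reality file, then pick out the named
// sub-entity. findEntity(named:) searches the entity hierarchy
// recursively, so "Scene" can be found even when "Immersive" is
// the first root. The entity names are those from the post.
func loadScene(from realityFileURL: URL) async throws -> Entity? {
    let root = try await Entity(contentsOf: realityFileURL)
    return root.findEntity(named: "Scene")
}
```

This only helps if "Scene" is reachable somewhere under the loaded root; if the .reality file stores the two scenes as truly independent roots, the Bundle-based path (and hence the crash) would still need to be debugged.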
Vision Pro App Stuck on Loading Screen – Works Fine on Simulator
Hi everyone, I'm currently developing an app for Vision Pro using SwiftUI, and I've encountered an issue when testing on the Vision Pro device. The app works perfectly fine in the Vision Pro simulator in Xcode, but when I run it on the actual device, it gets stuck on the loading screen. The logo appears and pulsates as it loads, as expected, but it never progresses beyond that point.

Issue details: The app doesn't crash, and I don't see any major errors in the console. However, in the debug logs, I encounter an exception:

Thread 1: "*** -[NSProxy doesNotRecognizeSelector:plane] called!"

I’ve searched through my project, but there’s no direct reference to a selector named plane. I suspect it may be related to a framework or system call failing on the device. There’s also this warning:

NSBundle file:///System/Library/PrivateFrameworks/MetalTools.framework/ principal class is nil because all fallbacks have failed.

What I’ve tried:

Verified that all assets and resources are properly bundled and loading (since simulators tend to be more forgiving with file paths).

Tested the app with minimal UI to isolate potential causes, but the issue persists.

Checked the app's Info.plist configuration to ensure it’s properly set up for Vision Pro.

No crashes, just a loading-screen hang on the device, while the app works fine in the Vision Pro simulator.

Additional info: The app’s UI consists of a loading animation (pulsating logo) before transitioning to the main content. Using Xcode 16.1 Beta with the visionOS SDK. The app is based on SwiftUI, with Vision Pro optimizations for an immersive experience.

Has anyone experienced something similar when moving from the simulator to the Vision Pro hardware? Any help or guidance would be appreciated, especially with regard to the exception or potential resource-loading issues specific to the device. Thanks in advance!
1
0
143
4d
Can't view SwiftUI previews on Vision Pro because of code signing
Hello! I'm making an app for VisionOS. I can run the app on my Vision Pro, and I can see the SwiftUI previews in the simulator, but for some reason the previews refuse to run on the device. Here's what I'm seeing: == PREVIEW UPDATE ERROR: AppHostMustHaveGetTaskAllowError: Cards.app not code signed properly ”Cards.app” must be code signed in order to use on-device previews. Check your code signing settings for the target. As far as I can tell, the code signing settings are correct because the app itself runs just fine on the device. I'm not sure what to do…
1
0
116
1w
Using Native ARKit Object Tracking in Unity
Hello, Has anyone had success implementing object tracking in Unity, or adding native tracking capability to a visionOS project built from Unity? I am working on an application for Vision Pro, mainly in Unity using PolySpatial. The application requires me to track objects and make decisions based on a tracked object's location. I was able to create an object tracking application in native Swift, but have not yet been able to combine this with my Unity project. Each separate project (the main Unity app using PolySpatial, and the native Swift app) builds successfully and can be deployed onto Vision Pro. I know that PolySpatial and AR Foundation do not support ARKit's object tracking feature for Vision Pro as of today; they only support image tracking inside Unity. For that reason I have been exploring different ways of creating a bridge for two-way interaction between the native tracking functionality and the rest of the functionality in Unity. Below are the methods I have tried and failed with so far:

Package the tracking functionality as a Swift plugin and access it in Unity, then build for Vision Pro: I can create packages and access them for simple exposed variables and methods, but not for outputs and methods from ARKit, which throw dependency errors while trying to build the Swift package.

Build the project from Unity to Vision Pro and expose a boolean to start/stop tracking that can be read by the native code, then carry the tracking classes into the built project. In this approach I keep getting an error that says _TrackingStateChanged cannot be found; this is the symbol that exposes the bool toggled by the Unity button press:

using System.Runtime.InteropServices;

public class UnityBridge {
    [DllImport("__Internal")]
    private static extern void TrackingStateChanged(bool isTracking);

    public static void NotifyTrackingState() {
        // Call the Swift method
        TrackingStateChanged(TrackingStartManager.IsTrackingActive());
    }
}

This seems to be translated to C++ code in the IL2CPP output from Unity, and even though I made sure that all necessary packages were added to the target, I keep receiving this error from the UnityFramework plugin:

Undefined symbol: _TrackingStateChanged

I have considered extending the current image tracking approach in AR Foundation to include object tracking, but that seems too complicated for my use case and time frame for now. The final resort will be to forgo the Unity implementation and do everything in native code. However, I really want to be able to use Unity's conveniences, and I have very limited experience with Swift development.
0
0
163
1w
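For the undefined symbol above, one commonly suggested fix — sketched here as an assumption about the missing piece, not something confirmed by the post — is to export the Swift function with a C-style symbol name via @_cdecl, so the extern declaration IL2CPP generates from [DllImport("__Internal")] can link against it. Note that @_cdecl is an unofficial, underscored Swift attribute, so this approach should be treated as fragile:

```swift
// Sketch: Swift side of the Unity bridge. @_cdecl exports the function
// with an unmangled C symbol (_TrackingStateChanged), which is what the
// [DllImport("__Internal")] declaration on the C# side links against.
// The body is a hypothetical hand-off into the native tracking code.
@_cdecl("TrackingStateChanged")
public func trackingStateChanged(_ isTracking: Bool) {
    if isTracking {
        // start ARKit object tracking here
    } else {
        // stop tracking here
    }
}
```

The Swift file containing this would also need to be compiled into the same target (or a linked static library) as the UnityFramework, otherwise the symbol remains undefined at link time.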
How to Get Featured on the Apple Vision Pro App Store?
Hello everyone, I’ve recently released an app on the Apple Vision Pro App Store, and I believe it could be a strong candidate for featuring. I’ve already filled out the necessary forms for getting featured a few times, but I haven’t heard back, and it’s been over a month. I don’t mean to sound overly confident, but I genuinely think my app has the potential to be featured. If my app isn’t eligible for being featured for some reason, it would be great to know why. Additionally, if it doesn’t get chosen for featuring, is there someone specific I can contact for more information? I’d appreciate any insights or advice on how the featuring process works or how I might be able to improve my chances. Here is the link to my app: https://apps.apple.com/us/app/the-simulation-archive-1147/id6639664425 Also, I have the featuring artwork ready to go, but I understand I need to get it featured first to upload it. Thanks in advance for your help!
0
0
181
1w
Collision detection between two entities not working
Hi, I've tried to implement collision detection between my left index finger (represented by a sphere) and a simple 3D rectangular box. The sphere on my left index finger passes through the object, but no collision seems to take place. What am I missing? Thank you very much for your consideration! Below is my code:

App.swift

import SwiftUI

@main
private struct TrackingApp: App {
    public init() { ... }

    public var body: some Scene {
        WindowGroup {
            ContentView()
        }
        ImmersiveSpace(id: "AppSpace") {
            ImmersiveView()
        }
    }
}

ImmersiveView.swift

import SwiftUI
import RealityKit

struct ImmersiveView: View {
    @State private var subscriptions: [EventSubscription] = []

    public var body: some View {
        RealityView { content in
            /* LEFT HAND */
            // Note: the location argument was truncated in the original post;
            // .indexFingerTip is assumed from context.
            let leftHandIndexFingerEntity = AnchorEntity(.hand(.left, location: .indexFingerTip))
            let leftHandIndexFingerSphere = ModelEntity(mesh: .generateSphere(radius: 0.01), materials: [SimpleMaterial(color: .orange, isMetallic: false)])
            leftHandIndexFingerEntity.addChild(leftHandIndexFingerSphere)
            leftHandIndexFingerEntity.generateCollisionShapes(recursive: true)
            leftHandIndexFingerEntity.components[CollisionComponent.self] = CollisionComponent(shapes: [.generateSphere(radius: 0.01)])
            leftHandIndexFingerEntity.name = "LeftHandIndexFinger"
            content.add(leftHandIndexFingerEntity)

            /* 3D RECTANGLE */
            let width: Float = 0.7
            let height: Float = 0.35
            let depth: Float = 0.005
            let rectangleEntity = ModelEntity(mesh: .generateBox(size: [width, height, depth]), materials: [SimpleMaterial(color: .red.withAlphaComponent(0.5), isMetallic: false)])
            rectangleEntity.transform.rotation = simd_quatf(angle: -.pi / 2, axis: [1, 0, 0])
            let rectangleAnchor = AnchorEntity(world: [0.1, 0.85, -0.5])
            rectangleEntity.generateCollisionShapes(recursive: true)
            rectangleEntity.components[CollisionComponent.self] = CollisionComponent(shapes: [.generateBox(size: [width, height, depth])])
            rectangleEntity.name = "Rectangle"
            rectangleAnchor.addChild(rectangleEntity)
            content.add(rectangleAnchor)

            /* Collision Handling */
            let subscription = content.subscribe(to: CollisionEvents.Began.self, on: rectangleEntity) { collisionEvent in
                print("Collision detected between \(collisionEvent.entityA.name) and \(collisionEvent.entityB.name)")
            }
            subscriptions.append(subscription)
        }
    }
}
1
0
204
1w
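One thing worth checking for the collision question above — offered as an assumption, since the posted code never starts hand tracking: anchors created with AnchorEntity(.hand(...)) generally need a running SpatialTrackingSession with hand tracking before their transforms (and hence their collision shapes) track the real hand. A minimal sketch of starting one before building the scene:

```swift
import RealityKit

// Sketch: without a running SpatialTrackingSession, hand AnchorEntity
// transforms may not be delivered to the app, so collision events
// against hand-anchored entities can silently fail to fire.
func startHandTracking() async {
    let session = SpatialTrackingSession()
    let configuration = SpatialTrackingSession.Configuration(tracking: [.hand])
    // run(_:) returns the subset of requested capabilities that
    // could NOT be enabled, or nil if everything was authorized.
    if let unavailable = await session.run(configuration) {
        print("Unavailable tracking capabilities: \(unavailable)")
    }
}
```

If the session is already running elsewhere in the app, the next things to verify would be that both entities carry CollisionComponents (they do in the post) and that the finger sphere actually reaches the box's position in world space.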
Forward and reverse animations with RealityKit on Vision Pro
Hello! I'm trying to play an animation with a toggle button. When the button is toggled, the animation either plays forward from the first frame (.speed = 1) OR plays backward from the last frame (.speed = -1), so if the button is toggled when the animation is only halfway through, it 'jumps' to the first or last frame. The animation is 120 frames, and I want the position in playback to be preserved when the button is toggled, so the animation reverses or continues forward from whatever frame it was currently on. Any tips on implementation? Thanks!

import SwiftUI
import RealityKit
import RealityKitContent

struct ModelView: View {
    var isPlaying: Bool
    @State private var scene: Entity? = nil
    @State private var unboxAnimationResource: AnimationResource? = nil

    var body: some View {
        RealityView { content in
            // Specify the name of the Entity you want
            scene = try? await Entity(named: "TestAsset", in: realityKitContentBundle)
            scene!.generateCollisionShapes(recursive: true)
            scene!.components.set(InputTargetComponent())
            content.add(scene!)
        }
        .installGestures()
        .onChange(of: isPlaying) {
            if isPlaying {
                var playerDefinition = scene!.availableAnimations[0].definition
                playerDefinition.speed = 1
                playerDefinition.repeatMode = .none
                playerDefinition.trimDuration = 0
                let playerAnimation = try! AnimationResource.generate(with: playerDefinition)
                scene!.playAnimation(playerAnimation)
            } else {
                var playerDefinition = scene!.availableAnimations[0].definition
                playerDefinition.speed = -1
                playerDefinition.repeatMode = .none
                playerDefinition.trimDuration = 0
                let playerAnimation = try! AnimationResource.generate(with: playerDefinition)
                scene!.playAnimation(playerAnimation)
            }
        }
    }
}
2
0
205
1w
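One approach to the jump-to-first/last-frame problem above, sketched under the assumption that keeping a single AnimationPlaybackController around is acceptable: playAnimation(_:) returns a controller whose time property can be read when the toggle flips, then written back after starting the new animation, so playback resumes from the same frame. Names here are illustrative, not from the original post:

```swift
import RealityKit

// Sketch: preserving the playback position when reversing.
// `controller` would live in @State in the real view.
var controller: AnimationPlaybackController?

func toggleDirection(on entity: Entity, forward: Bool) {
    // Remember where the previous playback was.
    let currentTime = controller?.time ?? 0

    var definition = entity.availableAnimations[0].definition
    definition.speed = forward ? 1 : -1
    definition.repeatMode = .none

    if let animation = try? AnimationResource.generate(with: definition) {
        controller = entity.playAnimation(animation)
        // Seek the new controller to the old position so the
        // animation continues from the same frame.
        controller?.time = currentTime
    }
}
```

How the time value is interpreted for a negative-speed definition (from the start or from the end) may need experimentation; mapping currentTime to duration minus currentTime when reversing is the obvious variant to try if the first attempt still jumps.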
Unable to update IBL at runtime
I seem to be running into an issue in an app I am working on where I am unable to update the IBL for an entity more than once in a RealityKit scene. The app is being developed for visionOS. I have a scene with a model the user interacts with and 360° panoramas as a skybox. These skyboxes can change based on user interaction. I have created an IBL for each of the skyboxes and was intending to swap out the ImageBasedLightComponent and ImageBasedLightReceiverComponent components when updating the skybox in the RealityView's update closure. The first update works as expected, but updating the components after that has no effect. Not sure if this is intended or if I'm just holding it wrong. Would really appreciate any guidance. Thanks

Simplified example:

// Task spun up from update closure in RealityView
Task {
    if let information = currentSkybox.iblInformation,
       let resource = try? await EnvironmentResource(named: information.name) {
        parentEntity.components.remove(ImageBasedLightReceiverComponent.self)
        if let iblEntity = content.entities.first(where: { $0.name == "ibl" }) {
            content.remove(iblEntity)
        }

        let newIBLEntity = Entity()
        var iblComponent = ImageBasedLightComponent(source: .single(resource))
        iblComponent.inheritsRotation = true
        iblComponent.intensityExponent = information.intensity
        newIBLEntity.transform.rotation = .init(angle: currentPanorama.rotation, axis: [0, 1, 0])
        newIBLEntity.components.set(iblComponent)
        newIBLEntity.name = "ibl"
        content.add(newIBLEntity)

        parentEntity.components.set([
            ImageBasedLightReceiverComponent(imageBasedLight: newIBLEntity),
            EnvironmentLightingConfigurationComponent(environmentLightingWeight: 0),
        ])
    } else {
        parentEntity.components.remove(ImageBasedLightReceiverComponent.self)
    }
}
1
0
176
1w
Track hardware input (keyboard, trackpad, etc.) in visionOS app during Mac Virtual Display usage?
Hi, I'm experimenting with how my visionOS app interacts with the Mac Virtual Display while the immersive space is active. Specifically, I'm trying to find out if my app can detect key presses or trackpad interactions (like clicks) when the Mac Virtual Display is in use for work, and my app is running in the background with an active immersive space. So far, I've tested a head-tracking system in my app that works when the app is open with an active immersive space, where I just moved the Mac Virtual Display in front of the visionOS app window. Could my visionOS app listen to keyboard and trackpad events that happen in the Mac Virtual Display environment?
0
0
161
1w
Creating Shared Experiences in Physical Locations
Hello everyone, I'm working on developing an app that allows users to share and enjoy experiences together while they are in the same physical locations. Despite trying several approaches, I haven't been able to achieve the desired functionality. If anyone has insights on how to make this possible or is interested in joining the project, I would greatly appreciate your help!
3
0
331
1w
Memory Leak using simple app with visionOS
Hello. When displaying a simple app like this:

struct ContentView: View {
    var body: some View {
        EmptyView()
    }
}

and running the Leaks tool from the developer tools in Xcode, I see a memory leak which I don't see when running the same application on iOS. You can simply run the app and it will show a memory leak. And this is what I see in the Leaks application. Any ideas on what is going on? Thanks!
2
0
259
1w