visionOS


Discuss developing for spatial computing and Apple Vision Pro.

Posts under visionOS tag

200 Posts

Hover effect in Custom UIKit Views
I am adapting my custom UI framework for visionOS, and I'm wondering whether it will be possible to detect hover over different UI elements within my view. The framework draws to a Metal layer in a UIView. I don't currently support UIHoverGestureRecognizer on the view, but I suspect it wouldn't help anyway, since visionOS doesn't report gaze coordinates. An unpleasant solution I can imagine is to add an invisible UIControl for each of my custom controls that are drawn by my own framework.
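For what it's worth, a minimal sketch of that invisible-overlay workaround, assuming the framework can report each control's frame (MetalCanvasView and controlFrames are hypothetical names; hoverStyle is UIKit's hover-effect API on visionOS):

    import UIKit

    // Sketch: one transparent UIView per custom control region, each with a
    // system hover style. The system draws the highlight itself when the user
    // looks at the overlay; gaze coordinates are never exposed to the app.
    final class MetalCanvasView: UIView {   // hypothetical host view
        private var hoverOverlays: [UIView] = []

        // `controlFrames` is a hypothetical accessor on the custom framework.
        func rebuildHoverOverlays(controlFrames: [CGRect]) {
            hoverOverlays.forEach { $0.removeFromSuperview() }
            hoverOverlays = controlFrames.map { frame in
                let overlay = UIView(frame: frame)
                overlay.backgroundColor = .clear
                overlay.hoverStyle = UIHoverStyle(shape: .rect(cornerRadius: 8))
                addSubview(overlay)
                return overlay
            }
        }
    }

The unpleasant part remains keeping the overlays in sync with the framework's own layout.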
0 replies · 0 boosts · 8 views · last activity 16h

Cursor display issue on attachment view in immersive space
While using Screen Mirroring in developer mode within my immersive space, I noticed an alignment issue with the system cursor (the transparent circle). When I move it toward an attachment view, the cursor stays horizontal instead of aligning with the surface of the attachment view. It displays correctly on a 2D window; it is only wrong on an attachment view. Is this behavior a bug, or could it be caused by a missing or incorrect configuration on the attachment view? Any help would be appreciated, thanks.
0 replies · 0 boosts · 17 views · last activity 1d

Inconsistent ornament scale
I am developing an application that makes use of two ornaments anchored to a volumetric window, one used as a toolbar and one to display different views. The problem I am facing consistently is that the ornaments seem to scale up or down after moving the volume using the OS handle or starting a GroupActivity session. The first screenshot showed the ornaments as soon as I started the app, with no dragging and no group activities; the second showed them as soon as I joined a group activity session. The map, which might seem smaller, had not been touched and kept the same scale. In the last screenshot I had just dragged the entire volume using the OS handle, and the ornaments had scaled down. This is how the volume and the ornaments are declared:

    WindowGroup(id: "CityVolume") {
        let cityVM = CityViewModel(volumeSize: CityView.initialVolumeSize)
        CityView(cityVM: cityVM)
            .ornament(attachmentAnchor: .scene(.bottomFront)) {
                HStack {
                    TourismChartsButton()
                    LandmarksListButton()
                    CenterMapButton()
                    ToggleImmersiveSpaceButton()
                    TrafficDataButton()
                    BusLinesButton()
                }
                .padding()
                .offset(z: 10)
                .rotation3DEffect(Angle(degrees: 15), axis: (x: 1.0, y: 0.0, z: 0.0))
            }
            .ornament(attachmentAnchor: .scene(.back)) {
                ZStack {
                    if AppModel.Instance.tourismVM.isChartViewVisible {
                        TourismChartsView()
                    }
                    if AppModel.Instance.busLinesVM.isDataViewEnabled {
                        BusLineView()
                    }
                }
            }
            .task(observeGroupActivity)
            .onAppear {
                appModel.cityVM = cityVM
            }
    }
    .windowStyle(.volumetric)
    .windowResizability(.contentSize)
    .volumeWorldAlignment(.gravityAligned)
    .defaultSize(CityView.initialVolumeSize, in: .meters)

It also happens without starting a SharePlay session, but not as frequently as during SharePlay. I have experienced the same behaviour with toolbars. Am I doing something wrong with how I created the ornaments? Am I missing something?
0 replies · 0 boosts · 22 views · last activity 2d

Partial Occlusion Material
I am looking for a material that functions the same way OcclusionMaterial does, except that it only partially occludes whatever is behind it. One approach I thought of was changing the opacity of the entity that was covered in OcclusionMaterial, but that did not change anything. Please let me know if this is possible.
1 reply · 1 boost · 30 views · last activity 2d

Apply mesh to real-world people
As far as I know, Apple hasn't opened access to the Vision Pro camera for developers yet, so I'm trying to find possible workarounds within the current capabilities. Is there any way to apply a mesh to a person in the scene on Vision Pro, or is there an alternative approach to roughly detect a human shape in front of the user?
2 replies · 0 boosts · 36 views · last activity 2d

Vision Pro camera frame rate
Hi, I'm working with CameraFrameProvider from the Enterprise API. Is it always capped at 30 fps, or is there something I can switch to get more? I assume it is capped at 30, so let me cram in an additional question here :). If I got a developer strap and attached an external camera capable of doing more than 30 fps, would I get the full stream, or would some other limitation kick in?
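Not an answer on the cap itself, but a minimal sketch for checking what the device actually reports rather than assuming it, using the documented Enterprise API entry points (requires the Enterprise entitlement and license file; picking the first format is arbitrary):

    import ARKit

    // Enumerate the video formats the Enterprise camera API offers, then
    // subscribe with one of them. Printing the formats shows the
    // resolutions and frame rates available on this device.
    func inspectMainCameraFormats() async throws {
        let formats = CameraVideoFormat.supportedVideoFormats(for: .main,
                                                              cameraPositions: [.left])
        formats.forEach { print($0) }

        let provider = CameraFrameProvider()
        let session = ARKitSession()
        try await session.run([provider])

        guard let first = formats.first,
              let updates = provider.cameraFrameUpdates(for: first) else { return }
        for await frame in updates {
            // Each frame carries per-camera samples with pixel buffers.
            _ = frame.sample(for: .left)?.pixelBuffer
        }
    }
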
2 replies · 0 boosts · 41 views · last activity 1d

Are visionOS Enterprise APIs handled differently from one another?
Hi,

At work, we've done some development on an Apple Vision Pro. On the project, we used object tracking to track an object in 3D and found the default tracking refresh rate (I believe 5 Hz) to be too slow, so we applied for the Enterprise APIs so we could change it. At some point, in the capabilities (as a beginner to Swift and the Apple development environment), I noticed that's where you enable the Object Tracking Parameter Adjustment API, and I did so before hearing back about whether we had been granted access to the Enterprise APIs and the license file that comes with them. I then set the refresh rate to 30 Hz and logged the settings of the ObjectTrackingProvider, which showed it was set at 30 Hz, and tracking felt better than the default when we ran our app. In the Xcode runtime logs there was no warning or error saying that the license file for the Enterprise API was not found (and I don't think we ever heard back from Apple on whether they had granted our request; even if they did, I think the license would be expired by now).

Fast forward to today: I was running the sample code for Main Camera Access for visionOS linked in the official developer documentation, and when I ran the project in Xcode, I noticed in the logs that it wanted an Enterprise license, which is why it wasn't running as expected in the immersive space. We've since applied for the Enterprise API for Main Camera Access.

I'm now confused. Did I mistakenly believe the object tracking refresh rate was set to 30 Hz when it actually wasn't, due to the lack of a license file and not being granted access to the Enterprise APIs? It seemed to be running as expected without a license file. Is the Object Tracking Parameter Adjustment API handled with different permissions than the Main Camera Access API, even though they are both Enterprise APIs? This is all for internal development; we are not planning on distributing an app, but I find the behaviour between the different Enterprise APIs confusing. Does anyone have more insight? I find the developer notes on the Enterprise APIs to be a bit sparse.
0 replies · 0 boosts · 33 views · last activity 1w

Activate hoverEffect on separate entity attachment view
Hi, I'm working with RealityView and I have two entities in Reality Composer Pro. In order to set views for both entities, I have to create two separate attachments, one for each entity. What I want to achieve is that when I hover (by eye) on one entity's attachment, it triggers the hover effect of the other entity's attachment. I tried using hoverEffectGroup, but it only activates the hover effect within a subview, not in a completely separate view. I was following this WWDC session on hover effects: https://developer.apple.com/videos/play/wwdc2024/10152/
0 replies · 0 boosts · 20 views · last activity 1w

Is Using Metal Compute Shaders for Efficient Resource Copying to RealityKit the Best Approach for Streaming Data in Real-Time Rendering?
Hi Apple, in visionOS, for real-time streaming of large 3D scenes, I plan to create Metal buffers and textures on multiple threads and then, on the main thread, use a compute shader to copy the Metal resources into RealityKit, minimizing main-thread usage. Given that most of RealityKit's default APIs require execution on the main actor (main thread), they are not ideal for streaming data. Is this approach the best way to handle streaming data and real-time rendering? Thank you very much.
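One pattern worth comparing against: RealityKit's LowLevelTexture (visionOS 2+) hands back an MTLTexture that a compute pass can fill from a background queue, so per-frame uploads don't have to funnel through main-actor texture APIs. A rough sketch, with the size, pixel format, and pipeline state as placeholder assumptions:

    import Metal
    import RealityKit

    // Create a LowLevelTexture once and expose it to RealityKit as a
    // TextureResource; per-frame writes then happen purely in Metal.
    func makeStreamingTexture() throws -> (LowLevelTexture, TextureResource) {
        let desc = LowLevelTexture.Descriptor(pixelFormat: .rgba8Unorm,
                                              width: 1024,
                                              height: 1024,
                                              textureUsage: [.shaderRead, .shaderWrite])
        let lowLevel = try LowLevelTexture(descriptor: desc)
        let resource = try TextureResource(from: lowLevel)
        return (lowLevel, resource)
    }

    // Called from a background render loop: replace(using:) returns the
    // MTLTexture to fill for this frame, scheduled on the command buffer.
    func encodeUpdate(into lowLevel: LowLevelTexture,
                      pipeline: MTLComputePipelineState,
                      queue: MTLCommandQueue) {
        guard let commandBuffer = queue.makeCommandBuffer() else { return }
        let target = lowLevel.replace(using: commandBuffer)
        guard let encoder = commandBuffer.makeComputeCommandEncoder() else { return }
        encoder.setComputePipelineState(pipeline)
        encoder.setTexture(target, index: 0)
        encoder.dispatchThreadgroups(MTLSize(width: 64, height: 64, depth: 1),
                                     threadsPerThreadgroup: MTLSize(width: 16, height: 16, depth: 1))
        encoder.endEncoding()
        commandBuffer.commit()
    }

The TextureResource is assigned to a material once up front; there is a LowLevelMesh counterpart for geometry. Whether this beats batched main-actor updates for a whole scene is something to profile.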
0 replies · 0 boosts · 36 views · last activity 1w

ARAnchor shifts
I encountered some issues while developing a Vision Pro program using Unity. After binding an ARAnchor to a game object, I aligned the virtual game object with a real-world cup. However, when I moved around wearing the Vision Pro, the virtual game object shifted, so the real-world cup and the virtual object no longer coincided. Is there a way to solve this?
0 replies · 0 boosts · 17 views · last activity 1w

PencilKit on visionOS Doesn’t Support Left-Handed Users? How Can We Customize Hand Roles?
I’m building a visionOS app that uses PencilKit for drawing. Currently, PencilKit defaults to using the right hand for drawing and the left hand for panning, with no apparent way to change this behavior. Some of my users are left-handed, and they naturally want to draw with their left hand and pan with their right. However, PencilKit doesn’t seem to support this interaction pattern. Is there a way to customize which hand does what in PencilKit on visionOS? Or have I missed some API or workaround that would allow support for left-handed users?
1 reply · 0 boosts · 45 views · last activity 1w

Pass Video Frames to a Shader Graph?
Wondering if this is even possible without using CVImageBuffer and passing each frame as an image, which I imagine would be very expensive. I have a PoC of a shader graph that applies a radial zoom effect to an image. In RealityKit I'm passing the image as a resource:

    if let textureResource = try? await TextureResource(named: "fuji") {
        let value = MaterialParameters.Value.textureResource(textureResource)
        try? material.setParameter(name: "MyImage", value: value)
        model.model?.materials = [material]
    }

Thanks in advance
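One route that avoids rebuilding a TextureResource per frame is TextureResource.DrawableQueue: attach a queue to the texture bound to the "MyImage" parameter once, then blit each decoded frame into the next drawable. A sketch, assuming the material is a ShaderGraphMaterial, the incoming frames are already MTLTextures, and the pixel formats match:

    import Metal
    import RealityKit

    // One-time setup: route a DrawableQueue into the texture behind "MyImage".
    func attachDrawableQueue(to material: inout ShaderGraphMaterial,
                             width: Int, height: Int) throws -> TextureResource.DrawableQueue {
        let desc = TextureResource.DrawableQueue.Descriptor(pixelFormat: .bgra8Unorm,
                                                            width: width,
                                                            height: height,
                                                            usage: [.shaderRead, .renderTarget],
                                                            mipmapsMode: .none)
        let queue = try TextureResource.DrawableQueue(desc)
        let texture = try TextureResource(named: "fuji")   // placeholder; the queue takes over
        texture.replace(withDrawables: queue)
        try material.setParameter(name: "MyImage", value: .textureResource(texture))
        return queue
    }

    // Per decoded video frame: copy into the next drawable and present it.
    func present(frame: MTLTexture,
                 via queue: TextureResource.DrawableQueue,
                 commandQueue: MTLCommandQueue) throws {
        let drawable = try queue.nextDrawable()
        guard let commandBuffer = commandQueue.makeCommandBuffer(),
              let blit = commandBuffer.makeBlitCommandEncoder() else { return }
        blit.copy(from: frame, to: drawable.texture)
        blit.endEncoding()
        commandBuffer.commit()
        drawable.present()
    }

The video decode side (e.g. AVPlayerItemVideoOutput to Metal) is omitted here.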
0 replies · 0 boosts · 31 views · last activity 1w

Unable to Create a Fully Immersive Experience That Hides Other Windows in visionOS App
Description: I'm developing a travel/panorama viewing app for visionOS that allows users to view 360° panoramic images in an immersive space. When users enter panorama viewing mode, I want to provide a fully immersive experience in which the main interface window and the Earth 3D globe window are hidden. I've implemented the app following Apple's documentation on Creating Fully Immersive Experiences, but when users enter the immersive space, both the main window and the Earth 3D window remain visible, diminishing the immersive experience.

Implementation Details: My app has three main components:
A main content window showing panorama thumbnails
A 3D globe window (volumetric) showing locations
An immersive space for viewing 360° panoramas
I'm using .immersionStyle(selection: $panoImageView, in: .full) to create a fully immersive experience, but the other windows remain visible.

Relevant Code:

    @main
    struct Travel_ImmersiveApp: App {
        @StateObject private var appModel = AppModel()
        @State private var panoImageView: ImmersionStyle = .full

        var body: some Scene {
            WindowGroup {
                ContentView()
                    .environmentObject(appModel)
            }
            .windowStyle(.automatic)
            .defaultSize(width: 1280, height: 825)

            WindowGroup(id: "Earth") {
                Globe3DView()
                    .environmentObject(appModel)
                    .onAppear {
                        appModel.isGlobeWindowOpen = true
                        appModel.globeWindowOpen = true
                    }
                    .onDisappear {
                        if !appModel.shouldCloseApp {
                            appModel.handleGlobeWindowClose()
                        }
                    }
            }
            .windowStyle(.volumetric)
            .defaultSize(width: 0.8, height: 0.8, depth: 0.8, in: .meters)
            .windowResizability(.contentSize)

            ImmersiveSpace(id: "ImmersiveView") {
                ImmersiveView()
                    .environmentObject(appModel)
            }
            .immersionStyle(selection: $panoImageView, in: .full)
        }
    }

Opening the Immersive Space:

    func getPanoImageAndOpenImmersiveSpace() async {
        appModel.clearMemoryCache()
        do {
            let canView = appModel.canViewImage(image)
            if canView {
                let downloadedImage = try await appModel.getPanoramaImage(for: image) { progress in
                    Task { @MainActor in
                        cardState = .loading(progress: progress)
                    }
                }
                await MainActor.run {
                    appModel.updateCurrentImage(image, panoramaImage: downloadedImage)
                }
                if !appModel.immersiveSpaceOpened {
                    try await openImmersiveSpace(id: "ImmersiveView")
                    await MainActor.run {
                        appModel.immersiveSpaceOpened = true
                        cardState = .normal
                    }
                } else {
                    await MainActor.run {
                        appModel.updateImmersiveView = true
                        cardState = .normal
                    }
                }
            } else {
                await MainActor.run {
                    appModel.errorMessage = "You do not have permission to view this image."
                    cardState = .normal
                }
            }
        } catch {
            // Error handling
        }
    }

Immersive View Implementation:

    struct ImmersiveView: View {
        @EnvironmentObject var appModel: AppModel

        var body: some View {
            RealityView { content in
                let rootEntity = Entity()
                content.add(rootEntity)
                Task {
                    if let selectedImage = appModel.selectedImage,
                       appModel.canViewImage(selectedImage) {
                        await loadPanorama(for: rootEntity)
                    }
                }
            } update: { content in
                if appModel.updateImmersiveView,
                   let selectedImage = appModel.selectedImage,
                   appModel.canViewImage(selectedImage),
                   let rootEntity = content.entities.first {
                    Task {
                        await loadPanorama(for: rootEntity)
                        appModel.updateImmersiveView = false
                    }
                }
            }
            .onAppear {
                print("ImmersiveView appeared")
            }
            .onDisappear {
                appModel.resetImmersiveState()
            }
        }

        // loadPanorama implementation...
    }

What I've Tried:
Set immersionStyle to .full as recommended in the documentation
Confirmed that the immersive space is properly opened and displaying panoramas
Verified that the state management for the immersive space is working correctly

Questions:
How can I ensure that when the user enters the immersive panorama viewing experience, all other windows (main interface and Earth 3D globe) are automatically hidden?
Is there a specific API or approach I'm missing to properly implement a fully immersive experience that hides all other windows?
Do I need to manually dismiss the windows when opening the immersive space, and if so, what's the best approach for doing this?
Any guidance or sample code would be greatly appreciated. Thank you!
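On the third question: as far as I know, visionOS does not hide other windows automatically, even with a .full immersion style; the app dismisses them itself once the space opens. A sketch using the environment actions (the "Main" id is an assumption — the main WindowGroup in the post has no id, and dismissWindow needs one):

    import SwiftUI

    // Hypothetical button that opens the panorama space and then hides the
    // other scenes explicitly; .full immersion does not do this by itself.
    struct EnterPanoramaButton: View {
        @Environment(\.openImmersiveSpace) private var openImmersiveSpace
        @Environment(\.dismissWindow) private var dismissWindow

        var body: some View {
            Button("View Panorama") {
                Task {
                    if case .opened = await openImmersiveSpace(id: "ImmersiveView") {
                        dismissWindow(id: "Earth")   // the volumetric globe window
                        dismissWindow(id: "Main")    // assumes WindowGroup(id: "Main") { ContentView() }
                    }
                }
            }
        }
    }

Reopening the windows when the user exits the space is the mirror image, via the openWindow environment action.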
3 replies · 0 boosts · 64 views · last activity 1w

Reality Composer Pro Transparent Textures
Hey everyone, I am currently developing an app for visionOS and using Reality Composer Pro to create scenes to put into my app. I have a humanoid model with hair strands, and each strand of hair has an opacity map. However, some reflections are still visible even though the opacity is zero. There is also some weird culling among hair strands (in the left circle of my screenshot) and there are weird reflections on the hair cards (in the right circle). The screenshots also showed my material settings. Since all the hair strands are interconnected with each other, it is hard to decide the drawing order in Xcode, so I am wondering if there's an easier way to handle transparent objects. Please let me know if you know anything helpful; much appreciated!
0 replies · 0 boosts · 34 views · last activity 2w

Unity on VisionOS development - best practice on structuring a project
Hello, I am experimenting with Unity to develop a mixed reality (MR) application for visionOS, and I would like to understand the best approach for structuring my project. Should I build the entire experience in Unity (both Windows and Volumes)? Or is it better to create only certain elements (e.g., Volumes) in Unity while managing Windows separately in Xcode? Also, how well do interactions (e.g., pinch, grab…) created in Unity integrate with Xcode? If I use the PolySpatial plugin, does that allow me to manage all interactions entirely within Unity, or would I still need to handle or integrate part of them in Xcode? What's worked best for you? Please let me know if you have any recommendations. Thanks!
3 replies · 0 boosts · 63 views · last activity 1w

Using Vision Pro to detect distance to real-life objects
Is it possible to detect the distance from the Vision Pro to real-life objects and people? I tried using scene.raycast to perform a raycast forward from the center of the viewport, but it doesn't seem to react to real-life objects, only entities. I see it mentioned here: https://developer.apple.com/forums/thread/776807?answerId=829576022#829576022 that a raycast combined with scene reconstruction should allow me to measure that distance, as long as the object is non-moving. How could I accomplish that?
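A sketch of what that linked answer describes, as I understand it: run a SceneReconstructionProvider and give each MeshAnchor a static collision shape; once those entities are in the RealityView content, scene raycasts also hit real-world surfaces (people only insofar as the mesh happens to capture them):

    import ARKit
    import RealityKit

    // Give the reconstructed world collision geometry so scene.raycast
    // can hit real surfaces, not just the app's own entities.
    @MainActor
    final class WorldMesh {
        let root = Entity()   // add this to your RealityView content
        private let session = ARKitSession()
        private let provider = SceneReconstructionProvider()
        private var meshEntities: [UUID: ModelEntity] = [:]

        func start() async throws {
            try await session.run([provider])
            for await update in provider.anchorUpdates {
                let anchor = update.anchor
                switch update.event {
                case .added, .updated:
                    guard let shape = try? await ShapeResource
                        .generateStaticMesh(from: anchor) else { continue }
                    let entity = meshEntities[anchor.id] ?? {
                        let e = ModelEntity()
                        meshEntities[anchor.id] = e
                        root.addChild(e)
                        return e
                    }()
                    entity.transform = Transform(matrix: anchor.originFromAnchorTransform)
                    entity.collision = CollisionComponent(shapes: [shape], isStatic: true)
                case .removed:
                    meshEntities[anchor.id]?.removeFromParent()
                    meshEntities[anchor.id] = nil
                }
            }
        }
    }

With the collision entities in place, a raycast from the device position along its forward vector returns hits whose distance from the origin you can measure.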
2 replies · 0 boosts · 80 views · last activity 1w

visionOS - Gamepad steals focus
I am developing a visionOS app for controlling an underwater ROV. I have ornaments with telemetry and buttons around a central video feed view, and custom button mappings, such as "A" for locking the depth of the drone. However, when I look at buttons or certain ornaments, my custom gamepad logic is kept from running: when a SwiftUI Button gains focus on visionOS, pressing the controller's A button triggers the system's default "click" on that Button rather than my custom buttonA handler. Essentially, the system's focus handling intercepts my A-press events and prevents my custom gamepad logic from running. Is there a way to disable the built-in gamepad interaction and only allow my custom gamepad mappings?
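If memory serves, the GameController framework has a hint for exactly this: a view can declare that it handles controller events itself, which should keep the focus system from consuming the A press. A sketch (treat the modifier and its .gamepad option as assumptions to verify; the placeholder view and lockDepth handler are hypothetical):

    import SwiftUI
    import GameController

    struct ROVControlView: View {
        var body: some View {
            Color.black   // placeholder for the central video feed view
                .handlesGameControllerEvents(matching: .gamepad)   // claim gamepad input for the app
                .onAppear(perform: bindController)
        }

        private func bindController() {
            NotificationCenter.default.addObserver(forName: .GCControllerDidConnect,
                                                   object: nil, queue: .main) { note in
                guard let controller = note.object as? GCController,
                      let gamepad = controller.extendedGamepad else { return }
                gamepad.buttonA.pressedChangedHandler = { _, _, pressed in
                    if pressed { lockDepth() }   // custom mapping instead of the system "click"
                }
            }
        }

        private func lockDepth() {
            // Send the depth-hold command to the ROV here.
        }
    }
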
1 reply · 0 boosts · 75 views · last activity 2w