visionOS


Discuss developing for spatial computing and Apple Vision Pro.

Posts under visionOS tag

200 Posts

Post · Replies · Boosts · Views · Activity

Triangle Recommendation Limit for M5 Vision Pro
For the M2 Apple Vision Pro, there's a general guideline: "we recommend no more than 500 thousand triangles for an immersive scene, with 250 thousand for applications in the shared space" (https://developer.apple.com/videos/play/wwdc2024/10186/?time=147). Is there a revised recommendation for the M5 Apple Vision Pro?
Replies: 0 · Boosts: 0 · Views: 25 · Activity: 4h
[WebXR] Support for AR module in VisionOS 2.x
Thank you again for pushing the web forward in visionOS 2, super exciting! The latest WWDC24 video touched on VR experiences for visionOS 2.0 using WebXR, but there was no mention of passthrough AR experiences. Samples such as this one are not supported: https://immersive-web.github.io/webxr-samples/immersive-ar-session.html
In Settings > Safari there is a feature flag for the AR WebXR module, but enabling it did not seem to change anything. Is this the expected behavior at this time? Are there any developer previews we could try?
Replies: 9 · Boosts: 6 · Views: 2.1k · Activity: 17h
Custom Material half visible..?
I'm currently implementing 180° / 360° immersive video for my app. I implemented 360° easily by applying a VideoMaterial to a flipped sphere, but I'm stuck at 180°. I'm trying to apply a VideoMaterial to a hemisphere (half sphere): I want the material to be visible on the front half of the sphere and the back half to be transparent/clear. Any advice, information, or ideas on how to implement this would be greatly appreciated.
Replies: 0 · Boosts: 0 · Views: 55 · Activity: 17h
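A minimal sketch of one possible answer to the hemisphere question above, assuming a MeshDescriptor-based approach rather than a transparency trick: build the half-sphere geometry directly and wind its triangles so the video faces inward. The function name makeInwardHemisphere and its parameters are illustrative, not an existing RealityKit API.

import Foundation
import RealityKit
import simd

// Builds an inward-facing hemisphere suitable for a 180° VideoMaterial.
// slices/stacks control tessellation; radius is in meters.
func makeInwardHemisphere(radius: Float = 10,
                          slices: Int = 64,
                          stacks: Int = 32) throws -> MeshResource {
    var positions: [SIMD3<Float>] = []
    var normals: [SIMD3<Float>] = []
    var uvs: [SIMD2<Float>] = []
    var indices: [UInt32] = []

    for stack in 0...stacks {
        let v = Float(stack) / Float(stacks)        // 0 at the top pole, 1 at the bottom pole
        let phi = v * .pi                           // polar angle
        for slice in 0...slices {
            let u = Float(slice) / Float(slices)    // 0...1 across the 180° span
            let theta = (u - 0.5) * .pi             // azimuth from -90° to +90°
            let x = radius * sin(phi) * sin(theta)
            let y = radius * cos(phi)
            let z = -radius * sin(phi) * cos(theta) // open side faces -Z, in front of the viewer
            positions.append([x, y, z])
            normals.append(-normalize(SIMD3<Float>(x, y, z))) // normals point inward
            uvs.append([u, 1 - v])                  // flip V if the video appears upside down
        }
    }

    // Two triangles per grid cell. If the video renders on the outside of the
    // dome instead of the inside, reverse each triangle's index order.
    let columns = UInt32(slices + 1)
    for stack in 0..<stacks {
        for slice in 0..<slices {
            let a = UInt32(stack) * columns + UInt32(slice)
            let b = a + 1
            let c = a + columns
            let d = c + 1
            indices += [a, b, c, b, d, c]
        }
    }

    var descriptor = MeshDescriptor(name: "hemisphere")
    descriptor.positions = MeshBuffers.Positions(positions)
    descriptor.normals = MeshBuffers.Normals(normals)
    descriptor.textureCoordinates = MeshBuffers.TextureCoordinates(uvs)
    descriptor.primitives = .triangles(indices)
    return try MeshResource.generate(from: [descriptor])
}

The resulting mesh would then be used like the 360° case, for example ModelComponent(mesh: try makeInwardHemisphere(), materials: [VideoMaterial(avPlayer: player)]).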
Material showing only half?
Hi, I'm currently implementing 180° / 360° playback for immersive video in my app. I was able to implement 360° easily by just applying a VideoMaterial to a flipped sphere. However, I'm a bit stuck at 180°. I want to implement it by setting a VideoMaterial on a hemisphere mesh, but since RealityKit doesn't yet provide a default function such as MeshResource.generateHemisphere, I want to make the VideoMaterial visible on the front half and transparent on the back half, so the sphere looks like a hemisphere. I can't find a way to implement this. I would appreciate any advice, idea, or information that might help.
Replies: 0 · Boosts: 0 · Views: 32 · Activity: 17h
ManipulationComponent Not Translating using indirect input
When using the new RealityKit ManipulationComponent on entities, indirect input never translates the entity, no matter what settings are applied. Direct manipulation works as expected for both translation and rotation. Is this intended behaviour? It differs from how indirect manipulation works on Model3D. How else can we get translation from this component?
visionOS 26 Beta 2, built from macOS 26 Beta 2 and Xcode 26 Beta 2.
Attached is replicable sample code; I have tried this in other projects with the same results.

var body: some View {
    RealityView { content in
        // Add the initial RealityKit content
        if let immersiveContentEntity = try? await Entity(named: "MovieFilmReel", in: reelRCPBundle) {
            ManipulationComponent.configureEntity(
                immersiveContentEntity,
                allowedInputTypes: .all,
                collisionShapes: [ShapeResource.generateBox(width: 0.2, height: 0.2, depth: 0.2)]
            )
            immersiveContentEntity.position.y = 1
            immersiveContentEntity.position.z = -0.5

            var mc = ManipulationComponent()
            mc.releaseBehavior = .stay
            immersiveContentEntity.components.set(mc)

            content.add(immersiveContentEntity)
        }
    }
}
Replies: 11 · Boosts: 2 · Views: 912 · Activity: 2d
Automating pickerWheels in VisionOS
Hi! I am learning Swift and UIKit for work. I am trying to automate a pickerWheel in visionOS, but since .adjust(toValue:) was removed from visionOS's API, I am struggling to find a way to set a pickerWheel to a specific value. My current approach is to calculate how many times I would need to increment or decrement the wheel to get from the current value to the desired value, and then do so one step at a time. However, this does not work: .accessibilityIncrement() and .accessibilityDecrement() have no effect, and .swipeUp() and .swipeDown() go too far. What can I do? Note: I am not a front-end engineer, so while solutions may exist that involve changes to the front end, I would much rather get the front end we do have to work as is.
Replies: 1 · Boosts: 0 · Views: 40 · Activity: 3d
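A hedged sketch of the increment idea in the question above, using coordinate drags in place of swipeUp()/swipeDown(): XCUICoordinate's withOffset(_:) and press(forDuration:thenDragTo:) allow a much shorter drag, which may move the wheel one row at a time. The function name, the 30-point offset, and whether this registers reliably on the visionOS simulator are all assumptions to verify.

import CoreGraphics
import XCTest

// Nudges a picker wheel with small drags instead of full swipes.
// The drag distance (dy) is a guess and needs tuning to the wheel's row height.
func nudge(_ pickerWheel: XCUIElement, downward: Bool, times: Int) {
    for _ in 0..<times {
        let start = pickerWheel.coordinate(withNormalizedOffset: CGVector(dx: 0.5, dy: 0.5))
        let end = start.withOffset(CGVector(dx: 0, dy: downward ? 30 : -30))
        start.press(forDuration: 0.1, thenDragTo: end)
    }
}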
visionOS Simulator Rotate and Scale gestures difficult to register (capture)
We were having an issue where the system rotate and scale gestures (two-handed gestures / RotateGesture3D and MagnifyGesture) were extremely difficult to register (make work) in the visionOS simulator. The solution we found was to:
1. Launch your app in the simulator.
2. Move the pointer on top of the 3D object for which you are testing rotation and scaling gestures.
3. Press and hold the Option key to display touch points (i.e. the two-handed gesture points).
4. While keeping the Option key pressed, release the pointer and re-enable it again.
I am using a trackpad with tap-to-click enabled and three-finger drag enabled in Accessibility, so "release the pointer and re-enable it again" translates simply to removing the three fingers and placing them on the trackpad again. If you have kept the Option key pressed, you should now be able to rotate and scale the 3D object.
Context, if you are interested: our issue also occurred in Apple's own gesture sample project "Transforming RealityKit entities using gestures" (link below). Apple's article "Interacting with your app in the visionOS simulator" (link below) states, for two-handed gestures: "Press and hold the Option key to display touch points. Move the pointer while pressing the Option key to change the distance between the touch points. Move the pointer and hold the Shift and Option keys to reposition the touch points." This simply did not work anymore for rotation and scaling gestures; these gestures used to be a lot more responsive in Sonoma. Either the article should be updated to what I described above, or there is an issue. Our colleague who is using macOS Sonoma 14.6.1 with the latest release of Xcode is not having these issues.
Here is the list of configurations (troubleshooting we tried!) where it is difficult to achieve rotation and scaling gestures in the visionOS simulator:
- macOS Sequoia 16.1 Beta, Xcode 16.1 RC with visionOS 2.1
- macOS Sequoia 16.1 Beta, Xcode 16.1 RC with visionOS 2.0
- macOS Sequoia 16.1 Beta, Xcode 16.2 Beta 1 with visionOS 2.1
- macOS Sequoia 16.1 Beta, Xcode 16.2 Beta 1 with visionOS 2.0
- macOS Sequoia 16.1 Beta, removed all Xcodes and installed the build from the App Store (Xcode 16.1)
- macOS Sequoia 16.1 Beta, Xcode 16.0 with visionOS 2.0
- Completely wiped and reset the entire development machine, re-installed the latest releases of Sequoia (15.1) and Xcode (15.1)
Throughout this troubleshooting I often:
- restarted both Xcode and the simulator
- erased all derived data
- erased all contents and settings from the simulators
- performed fresh git clones
None of the above worked; only the workaround described above works at the moment. As you can maybe deduce, it was very time consuming to find the workaround, and we also wasted some development effort thinking our gesture code was no good. Hopefully this will help other devs.
Article link: https://developer.apple.com/documentation/xcode/interacting-with-your-app-in-the-visionos-simulator
Gesture sample project link: https://developer.apple.com/documentation/realitykit/transforming-realitykit-entities-with-gestures
Replies: 3 · Boosts: 0 · Views: 1k · Activity: 4d
Alert within Popover is clipped in height causing title to be hidden
When showing an Alert from within a Popover that has a fixed height, the newly presented Alert appears in the same position but is limited by the Popover's height, which causes the title of the Alert to be hidden. Is this intentional behavior, or are Alerts not supported within Popovers, so that I'd have to pass the presentation through to my main view?
Code:

//
//  DemoalertPopover.swift
//  ********
//
//  Created by Thilo on 26.06.2023.
//

import SwiftUI

struct DemoAlertPopover: View {
    @Environment(\.dismiss) var dismiss
    @State private var showDeleteConfirmation = false

    let dateFormatter: DateFormatter = {
        let formatter = DateFormatter()
        formatter.dateFormat = "dd.MM.yyyy"
        return formatter
    }()

    var body: some View {
        VStack(alignment: .leading, spacing: 0) {
            HStack {
                Text("Title").font(.extraLargeTitle).lineLimit(1)
                Spacer()
                Button(action: { dismiss() }) {
                    Label("Close", systemImage: "xmark").labelStyle(.iconOnly)
                }
            }
            HStack(alignment: .center) {
                Text("Content").font(.largeTitle)
                Text("Content2").foregroundStyle(.secondary)
            }
            LazyVGrid(columns: [GridItem(.flexible(), spacing: 10), GridItem(.flexible(), spacing: 10)], spacing: 10) {
                Button(action: { showDeleteConfirmation = true }) {
                    ZStack {
                        Image(systemName: "trash.fill").resizable().foregroundColor(.primary)
                    }
                    .aspectRatio(1/1, contentMode: .fit)
                    .frame(maxWidth: .infinity)
                    .padding(5)
                }
                .aspectRatio(3/1, contentMode: .fit)
            }
            .alert("Are you sure you want to delete ...?", isPresented: $showDeleteConfirmation) {
                Button("Trash", role: .destructive, action: {
                    print("Deleted")
                    dismiss()
                })
                Button("Cancel", role: .cancel) {}
            } message: {
                Text("This is a small message below the title, just so you know.")
            }
        }
        .padding(.all, 10)
        .frame(width: 300)
    }
}

#Preview {
    DemoAlertPopover()
}

Video: https://www.youtube.com/shorts/31Kl7qbJIiA
Replies: 2 · Boosts: 0 · Views: 808 · Activity: 5d
Performance drop when particle emitter is combined with video play
Hi all, we're a studio building an app, and as part of a scene we have a 3D asset with a smoke particle emitter and a curved mesh that plays video. When the video alone is played, or the particle effect alone is running, the scene works fine, but the frame rate drops drastically when both are turned on. How do I solve this? It is an important storytelling feature.
Replies: 2 · Boosts: 0 · Views: 228 · Activity: 6d
Why don’t the dinosaurs in “Encounter Dinosaurs” respond to real-world light intensity?
I have a question about Apple’s preinstalled visionOS app “Encounter Dinosaurs.” In this app, the dinosaurs are displayed over the real-world background, but the PhysicallyBasedMaterial (PBM) in RealityKit doesn’t appear to respond to the actual brightness of the environment. Even when I change the lighting in the room, the dinosaurs’ brightness and shading remain almost the same. If this behavior is intentional — for example, if the app disables real-world lighting influence or uses a fixed lighting setup — could someone explain how and why it’s implemented that way?
Replies: 1 · Boosts: 0 · Views: 409 · Activity: 6d
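A minimal sketch of one way an app can pin an entity's lighting, in case it helps reason about the behavior described above: attach a fixed image-based light plus a receiver so the PBR shading comes from that IBL rather than the room. This is only an illustration of the technique, not a claim about how Encounter Dinosaurs is actually implemented, and "FixedStudioLight" is an assumed EnvironmentResource name.

import RealityKit
import SwiftUI

// Ties a model's lighting to a fixed image-based light so real-world brightness
// changes no longer affect its shading.
func applyFixedLighting(to model: Entity, in content: RealityViewContent) async throws {
    let environment = try await EnvironmentResource(named: "FixedStudioLight") // assumed asset name

    let lightEntity = Entity()
    lightEntity.components.set(ImageBasedLightComponent(source: .single(environment),
                                                        intensityExponent: 1.0))
    content.add(lightEntity)

    // The receiver points the model at the fixed IBL instead of the automatic
    // environment lighting, which is one way to get constant-looking shading.
    model.components.set(ImageBasedLightReceiverComponent(imageBasedLight: lightEntity))
}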
visionOS – Starting GroupActivity FaceTime Call dismisses Immersive Space
Hello, I am in the process of implementing SharePlay support in my visionOS app. Everything runs fine when I test locally, but when my app is distributed via TestFlight, calling try await activity.activate() shows the SharePlay dialog as usual, but then when I start a new FaceTime call, my ImmersiveSpace gets dismissed. This only happens when the app is distributed via TestFlight; when I run it locally the ImmersiveSpace stays active as expected.
Looking at the console on my Mac I found this log:

Invalid initial client settings class: UIApplicationSceneClientSettings; expected class: MRUISharedApplicationSceneClientSettings; bundle ID: com.apple.facetime; scene ID: com.apple.facetime:SFBSystemService-DDA8C751-C0C4-487E-AD85-59EF4E6C6050

Does anyone have an idea how I can fix this? It's driving me nuts; I wasted over a day looking for a workaround but have so far been unsuccessful. Thanks!
Replies: 5 · Boosts: 0 · Views: 687 · Activity: 1w
Can I access Enterprise APIs for visionOS (main camera) with an individual developer account?
Hello, my name is Kanazawa Kenta, and I am a university student currently conducting research and development using visionOS. In my project, I would like to access the main camera using the Enterprise APIs for visionOS. However, when I try to request permission or view the documentation, I receive the following message: "Your account can’t access this page. There may be certain requirements to view this content. You must be the Account Holder of an Apple Developer Program for Organizations or an Apple Developer Enterprise Program to view this page." I am currently enrolled in the Apple Developer Program as an Individual. My question is: Is it possible to use the Enterprise APIs with an individual developer account? If not, what are the available options for students or researchers who wish to use these APIs for academic purposes? I would greatly appreciate any guidance or suggestions on how I can gain access, or alternative ways to use the main camera for research in visionOS. Thank you for your help. Best regards, Kanazawa Kenta
Replies: 5 · Boosts: 0 · Views: 157 · Activity: 1w
Logitech Muse API?
Hello, I've pre-ordered the Logitech Muse in the hope of developing with it, but I have yet to find any documentation about its capabilities or about any APIs that will be available to take advantage of the Muse. Is anyone aware of what might become available? Thank you in advance.
Replies: 1 · Boosts: 0 · Views: 65 · Activity: 1w
RealityKit Instanced Rendering on visionOS
Hello, I've been trying to leverage instanced rendering in RealityKit on visionOS but have not had success. RealityKit states this is supported:
https://developer.apple.com/documentation/realitykit/validating-usd-files
https://developer.apple.com/videos/play/wwdc2021/10075/?time=1373
https://developer.apple.com/videos/play/wwdc2023/10099/?time=772

Validating that instancing is working with RealityKit Trace metrics: to test, I made a base visionOS app with an immersive space and replaced the entity with my test usdz file. I've been using the RealityKit Trace profiling template in Xcode Instruments, in the immersive space with the volume closed, which gives consistent draw-call results. If I have a single sphere mesh with one material I get one draw call, but the number of draw calls grows linearly with mesh count no matter how my entity is configured.

What I've tried:
- Create a test scene in Blender and export with instancing enabled
- Create a test scene in Reality Composer Pro using references
- Author usda files by hand based on the OpenUSD spec
- Programmatically create a MeshResource with Contents at runtime

References:
https://openusd.org/release/api/_usd__page__scenegraph_instancing.html
https://developer.apple.com/documentation/realitykit/meshresource
https://developer.apple.com/documentation/realitykit/meshresource/instance

Thank you
Replies: 4 · Boosts: 4 · Views: 1.3k · Activity: 1w
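A hedged sketch of the last item in the post above, the programmatic MeshResource.Contents path, under the assumption that the MeshResource.Instance API referenced there behaves as documented. Whether RealityKit actually collapses these instances into a single draw call on visionOS is exactly what the post is trying to verify; the function and id names here are illustrative.

import RealityKit
import simd

// Builds a MeshResource whose Contents share one model across many instances.
func makeInstancedSpheres(count: Int, spacing: Float = 0.25) throws -> MeshResource {
    // Borrow the vertex data (models) from a generated sphere.
    let sphere = MeshResource.generateSphere(radius: 0.05)
    guard let baseModel = Array(sphere.contents.models).first else {
        fatalError("Generated sphere unexpectedly has no model")
    }

    var contents = MeshResource.Contents()
    contents.models = sphere.contents.models

    // One instance per position along a line, all referencing the same model id.
    let instances = (0..<count).map { index -> MeshResource.Instance in
        let translation = SIMD3<Float>(Float(index) * spacing, 0, 0)
        return MeshResource.Instance(id: "instance-\(index)",
                                     model: baseModel.id,
                                     at: Transform(translation: translation).matrix)
    }
    contents.instances = MeshInstanceCollection(instances)

    return try MeshResource.generate(from: contents)
}

A mesh built this way could be placed in a ModelComponent and profiled with RealityKit Trace in the same way as the usdz tests.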
Request File Access from Unity for Apple Vision Pro
Hi, I am trying to load files from the Apple Vision Pro's storage into a Unity app (using the Apple visionOS XR Plugin, not the PolySpatial package). So far, I've tried UnitySimpleFileBrowser and UnityStandaloneFileBrowser (neither is made for the Vision Pro, and they don't work there), and then implemented my own naive file browser that at least allows me to view the directories visible from the app sandbox. This is of course very limited: gray folders can't be accessed, and the only three available ones don't contain anything where a user would put files through the Files app. I know that an app can request access to "Files & Folders". So my question is: is there a way to request this access for a Unity-built app at the moment? If yes, what do I need to do? I've looked into the generated Xcode project's Capabilities, but did not find anything related to file access. Any help is appreciated!
Replies: 5 · Boosts: 0 · Views: 301 · Activity: 1w
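A minimal sketch of one alternative to a broad Files & Folders grant for the question above, assuming the Unity app can call into a small native plugin: present UIDocumentPickerViewController and read the user-picked file under its security-scoped URL. FilePickerBridge and onPicked are illustrative names, and wiring this into Unity's plugin bridge is left out.

import UIKit
import UniformTypeIdentifiers

// Presents the system document picker; the selected URL arrives with a
// security-scoped grant, so no special file-access entitlement is required.
final class FilePickerBridge: NSObject, UIDocumentPickerDelegate {
    var onPicked: ((URL) -> Void)?

    func present(from viewController: UIViewController) {
        let picker = UIDocumentPickerViewController(forOpeningContentTypes: [.item])
        picker.allowsMultipleSelection = false
        picker.delegate = self
        viewController.present(picker, animated: true)
    }

    func documentPicker(_ controller: UIDocumentPickerViewController,
                        didPickDocumentsAt urls: [URL]) {
        guard let url = urls.first else { return }
        // Security-scoped access must be started before reading the picked file.
        guard url.startAccessingSecurityScopedResource() else { return }
        defer { url.stopAccessingSecurityScopedResource() }
        onPicked?(url)
    }
}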
Xcode shows alert about unknown com.apple.quicklook.preview extension point when running on Apple Vision Pro Simulator
I have an iOS app with a QuickLook extension. I also added Apple Vision Pro in the target's General > Supported Destinations section. About one year ago, I was able to run the app on iPhone, iPad and Apple Vision Pro Simulators. Today I tried running it again on Apple Vision Pro with Xcode 26.0.1, but Xcode shows this error:

Try again later. Appex bundle at ~/Library/Developer/CoreSimulator/Devices/F6B3CCA8-82FA-485F-A306-CF85FF589096/data/Library/Caches/com.apple.mobile.installd.staging/temp.PWLT59/extracted/problem.app/PlugIns/problemQuickLook.appex with id org.example.problem.problemQuickLook specifies a value (com.apple.quicklook.preview) for the NSExtensionPointIdentifier key in the NSExtension dictionary in its Info.plist that does not correspond to a known extension point.

I tried again later a couple times, even after running Clean Build Folder Immediately, without any change. I can reproduce this with a fresh Xcode project to which I add a Quick Look Preview Extension and Apple Vision Pro as a supported destination. The error doesn't happen when running on Apple Vision Pro (Designed for iPad) or iPad Pro 13-inch (M4) destinations. What is the problem? I created FB20448815.
Replies: 4 · Boosts: 0 · Views: 202 · Activity: 1w
AVPlayer stutters when using AVPlayerItemVideoOutput
We’re trying to build a custom player for Unity. For this, we’re using AVPlayer with AVPlayerItemVideoOutput to get textures. However, we noticed that playback is not smooth and the stream often freezes. For testing, we used this 8K video: https://deovr.com/nwfnq1
The video was played using the following code:

@objc public func playVideo(urlString: String) {
    guard let url = URL(string: urlString) else { return }

    let pItem = AVPlayerItem(url: url)
    playerItem = pItem
    pItem.preferredForwardBufferDuration = 10.0

    let pixelBufferAttributes: [String: Any] = [
        kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange,
        kCVPixelBufferMetalCompatibilityKey as String: true,
    ]
    let output = AVPlayerItemVideoOutput(pixelBufferAttributes: pixelBufferAttributes)
    pItem.add(output)

    playerItemObserver = pItem.observe(\.status) { [weak self] pItem, _ in
        guard pItem.status == .readyToPlay else { return }
        self?.playerItemObserver = nil
        self?.player.play()
    }

    player = AVPlayer(playerItem: pItem)
    player.currentItem?.preferredPeakBitRate = 35_000_000
}

When AVPlayerItemVideoOutput is attached, the video stutters and the log looks like this:

🟢 Playback likely to keep up
🟡 Buffer ahead: 4.08s | buffer: 4.08s
🟡 Buffer ahead: 4.08s | buffer: 4.08s
🟡 Buffer ahead: -0.07s | buffer: 0.00s
🟡 Buffer ahead: 2.94s | buffer: 3.49s
🟡 Buffer ahead: 2.50s | buffer: 4.06s
🟡 Buffer ahead: 1.74s | buffer: 4.30s
🟡 Buffer ahead: 0.74s | buffer: 4.30s
🟠 Playback may stall
🛑 Buffer empty
🟡 Buffer ahead: 0.09s | buffer: 4.30s
🟠 Playback may stall
🟠 Playback may stall
🛑 Buffer empty
🟠 Playback may stall
🟣 Buffer full
🟡 Buffer ahead: 1.41s | buffer: 1.43s
🟡 Buffer ahead: 1.41s | buffer: 1.43s
🟡 Buffer ahead: 1.07s | buffer: 1.43s
🟣 Buffer full
🟡 Buffer ahead: 0.47s | buffer: 1.65s
🟠 Playback may stall
🛑 Buffer empty
🟡 Buffer ahead: 0.10s | buffer: 1.65s
🟠 Playback may stall
🟡 Buffer ahead: 1.99s | buffer: 2.03s
🟡 Buffer ahead: 1.99s | buffer: 2.03s
🟣 Buffer full
🟣 Buffer full
🟡 Buffer ahead: 1.41s | buffer: 2.00s
🟡 Buffer ahead: 0.68s | buffer: 2.27s
🟡 Buffer ahead: 0.09s | buffer: 2.27s
🟠 Playback may stall
🛑 Buffer empty
🟠 Playback may stall

When we remove AVPlayerItemVideoOutput from the player, the video plays smoothly, and the output looks like this:

🟢 Playback likely to keep up
🟡 Buffer ahead: 1.94s | buffer: 1.94s
🟡 Buffer ahead: 1.94s | buffer: 1.94s
🟡 Buffer ahead: 1.22s | buffer: 2.22s
🟡 Buffer ahead: 1.05s | buffer: 3.05s
🟡 Buffer ahead: 1.12s | buffer: 4.12s
🟡 Buffer ahead: 1.18s | buffer: 5.18s
🟡 Buffer ahead: 0.72s | buffer: 5.72s
🟡 Buffer ahead: 1.27s | buffer: 7.28s
🟡 Buffer ahead: 2.09s | buffer: 3.03s
🟡 Buffer ahead: 4.16s | buffer: 6.10s
🟡 Buffer ahead: 6.66s | buffer: 7.09s
🟡 Buffer ahead: 5.66s | buffer: 7.09s
🟡 Buffer ahead: 4.66s | buffer: 7.09s
🟡 Buffer ahead: 4.02s | buffer: 7.45s
🟡 Buffer ahead: 3.62s | buffer: 8.05s
🟡 Buffer ahead: 2.62s | buffer: 8.05s
🟡 Buffer ahead: 2.49s | buffer: 3.53s
🟡 Buffer ahead: 2.43s | buffer: 3.38s
🟡 Buffer ahead: 1.90s | buffer: 3.85s

We’ve tried different attribute settings for AVPlayerItemVideoOutput. We also removed all logic related to reading frame data, but the choppy playback still remained. Can you advise whether this is a player issue or if we’re doing something wrong?
Replies: 1 · Boosts: 0 · Views: 359 · Activity: 2w
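For the AVPlayerItemVideoOutput post above, a minimal sketch of the conventional pull pattern, in case comparing it with the Unity plugin's frame loop is useful: ask the output whether a new frame exists for the current host time and copy it only then. FrameReader and copyFrameIfAvailable are illustrative names; this is only the consumption cadence the API is designed around, not a claimed fix for the stutter.

import AVFoundation
import CoreVideo
import QuartzCore

// Pulls frames from an AVPlayerItemVideoOutput once per rendered frame
// (for example from a per-frame plugin callback).
final class FrameReader {
    private let output: AVPlayerItemVideoOutput

    init(output: AVPlayerItemVideoOutput) {
        self.output = output
    }

    /// Returns the pixel buffer to display this frame, or nil if the previous one should be reused.
    func copyFrameIfAvailable(hostTime: CFTimeInterval = CACurrentMediaTime()) -> CVPixelBuffer? {
        // Map the host clock to the item's timeline, then check before copying;
        // copying only when a new frame exists avoids redundant per-frame work.
        let itemTime = output.itemTime(forHostTime: hostTime)
        guard output.hasNewPixelBuffer(forItemTime: itemTime) else { return nil }
        return output.copyPixelBuffer(forItemTime: itemTime, itemTimeForDisplay: nil)
    }
}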