visionOS

Discuss developing for spatial computing and Apple Vision Pro.

Posts under visionOS tag

1,225 Posts
Post not yet marked as solved
0 Replies
27 Views
WindowGroup {
    SolarDisplayView()
        .environment(model)
}
.windowStyle(.plain)

Why is the code above correct, while the code below reports an error? How can I modify the following code?

WindowGroup {
    SolarDisplayView()
        .environment(model)
}
.windowStyle(model.isShow ? .plain : .automatic)
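A likely cause is that .plain and .automatic are values of two different concrete WindowStyle types, so the ternary has no single type for the compiler to infer, and the window style is fixed when the scene is declared rather than re-evaluated from model state. One possible workaround, sketched here only as an assumption (the window identifiers and the SolarModel type are illustrative, not from the original post), is to declare two window groups with fixed styles and open whichever one matches the current state:

import SwiftUI

@main
struct SolarApp: App {
    @State private var model = SolarModel()   // assumed observable model type

    var body: some Scene {
        // One window group per fixed style.
        WindowGroup(id: "solar-plain") {
            SolarDisplayView().environment(model)
        }
        .windowStyle(.plain)

        WindowGroup(id: "solar-automatic") {
            SolarDisplayView().environment(model)
        }
        .windowStyle(.automatic)
    }
}

From any view, open the variant that matches the current state, for example:

@Environment(\.openWindow) private var openWindow
// ...
openWindow(id: model.isShow ? "solar-plain" : "solar-automatic")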
Posted by xuhengfei. Last updated.
Post not yet marked as solved
0 Replies
45 Views
In the past, Apple recommended restricting USDZ models to a maximum of 100,000 triangles and a maximum texture size of 2048x2048 for Apple Quick Look (and, I think, for RealityKit on iOS in general). Does Apple have any recommended maximum polygon counts for visionOS? Is it the same for models running in a volumetric window in the Shared Space as for models in an ImmersiveSpace? What is the recommended texture size for visionOS? (I seem to recall 8192x8192, but I can't find it now.)
Posted by Todd2. Last updated.
Post not yet marked as solved
0 Replies
53 Views
I've created a fully immersive visionOS project and added a spatial video player in the ImmersiveView Swift file. I have a few buttons in a separate VideosView Swift file on a floating window, and I'd like to switch the video playing in ImmersiveView when I click a button in VideosView.

Video player working great in ImmersiveView:

RealityView { content in
    if let videoEntity = try? await Entity(named: "Video", in: realityKitContentBundle) {
        guard let url = Bundle.main.url(forResource: "video1", withExtension: "mov") else {
            fatalError("Video was not found!")
        }
        let asset = AVURLAsset(url: url)
        let playerItem = AVPlayerItem(asset: asset)
        let player = AVPlayer()
        videoEntity.components[VideoPlayerComponent.self] = .init(avPlayer: player)
        content.add(videoEntity)
        player.replaceCurrentItem(with: playerItem)
        player.play()
    } else {
        print("file not found!")
    }
}

Buttons in the floating window from VideosView:

struct VideosView: View {
    var body: some View {
        VStack {
            Button(action: {}) {
                Text("video 1").font(.title)
            }
            Button(action: {}) {
                Text("video 2").font(.title)
            }
            Button(action: {}) {
                Text("video 3").font(.title)
            }
        }
    }
}

In general, how do I control the video player across views, and how do I replace the video when each button is selected? Any help/code/links would be greatly appreciated.
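One common pattern, sketched here under the assumption that both scenes can share a single observable object (the VideoLibrary type name and the resource names are illustrative), is to keep one AVPlayer in that object, attach it to the VideoPlayerComponent in ImmersiveView, and let the buttons in VideosView swap the current item:

import AVFoundation
import Observation

@Observable
final class VideoLibrary {
    // One player shared by the immersive scene and the floating window.
    let player = AVPlayer()

    func play(resource: String) {
        guard let url = Bundle.main.url(forResource: resource, withExtension: "mov") else {
            print("file not found: \(resource)")
            return
        }
        player.replaceCurrentItem(with: AVPlayerItem(asset: AVURLAsset(url: url)))
        player.play()
    }
}

Inject one instance with .environment(library) on both the WindowGroup and the ImmersiveSpace, pass library.player to the VideoPlayerComponent in the RealityView, and have each button call library.play(resource: "video1") and so on.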
Posted. Last updated.
Post not yet marked as solved
1 Reply
90 Views
I have the Vision Pro developer strap. Do I need to do anything to make Instruments transfer the data over it rather than Wi-Fi, or will it do that automatically? It seems incredibly slow for transferring and then analysing the data. I can see the Vision Pro recognised in Configurator, so I assume it's working. Otherwise, any tips for speeding up Instruments? Capturing 5 minutes of gameplay (high-frequency) then takes 30-40+ minutes to appear in Instruments on an M2 Max with 32 GB. Thanks!
Posted by yezz. Last updated.
Post not yet marked as solved
0 Replies
65 Views
I want to transfer this video stream to another device and then view it on the other device, but checking the visionOS documentation I did not see any development information related to the camera, so I would like to ask whether anyone knows how to do it. Thank you.
Posted by iOS-LI. Last updated.
Post not yet marked as solved
0 Replies
75 Views
I'm creating a fully immersive app with a large 3D environment, in which I need to be able to move the player with different options such as hand gestures, a game controller, and teleporting. I have worked with Unreal Engine, where moving the player is easy and well documented, but I have not been able to find any information on how I could achieve this in visionOS. Has anyone done something similar who could give me some advice or sample code? Any help appreciated. Guillermo
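visionOS does not let an app move the wearer's camera, so locomotion in a fully immersive app is usually approximated by translating the environment's root entity in the opposite direction. A minimal sketch of that idea (the LocomotionModel type and the world-root wiring are assumptions; the input source, whether hand gesture, game controller, or teleport target, is left to the app):

import RealityKit
import Observation
import simd

@Observable
final class LocomotionModel {
    // Parent of the entire 3D environment; add this to the RealityView content.
    let worldRoot = Entity()

    // "Move the player" by delta metres by shifting the world the opposite way.
    func movePlayer(by delta: SIMD3<Float>) {
        worldRoot.position -= delta
    }

    // Teleport: bring the chosen destination (in environment coordinates)
    // to the user's current origin.
    func teleport(to destination: SIMD3<Float>) {
        worldRoot.position = -destination
    }
}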
Posted by gl5. Last updated.
Post not yet marked as solved
5 Replies
493 Views
Hello everyone! For applications with multiple windows, I've noticed that if I close all open windows, visionOS reopens only the last-closed window when the app is launched again from the menu. Is there a way to set a main window that will always be open when the application is launched?
Posted by kentvchr. Last updated.
Post not yet marked as solved
0 Replies
88 Views
We want to overlay a SwiftUI attachment on a RealityView, like it is done in the Diorama sample. By default, the attachments seem to be placed centered at their position. However, for our use-case we need to set a different anchor point, so the attachment is always aligned to one of the corners of the attachment view, e.g. the lower left should be aligned with the attachment's position. Is this possible?
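Attachments do come back as ordinary entities, so one possible approach, offered only as a sketch, is to measure the attachment's visual bounds after it is created and offset it by half of its extents so a chosen corner, rather than the centre, lands on the target position. The "infoCard" identifier and targetPosition are illustrative assumptions, and the bounds may only be meaningful once the attachment has been laid out:

import SwiftUI
import RealityKit

struct AnchoredAttachmentView: View {
    // Assumed target position in metres, relative to the RealityView origin.
    let targetPosition = SIMD3<Float>(0, 1, -1)

    var body: some View {
        RealityView { content, attachments in
            if let card = attachments.entity(for: "infoCard") {
                content.add(card)
                let bounds = card.visualBounds(relativeTo: nil)
                // Shift so the lower-left corner, not the centre, sits at targetPosition.
                card.position = targetPosition + SIMD3<Float>(bounds.extents.x / 2,
                                                              bounds.extents.y / 2,
                                                              0)
            }
        } attachments: {
            Attachment(id: "infoCard") {
                Text("Info").padding().glassBackgroundEffect()
            }
        }
    }
}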
Posted by waldgeist. Last updated.
Post marked as solved
4 Replies
100 Views
New to Apple development. Vision Pro is the reason I got a developer license and am learning Xcode, SwiftUI, and so on. The Vision Pro tutorials seem to use Wi-Fi or the developer strap to connect the development environment to the Vision Pro. I have the developer strap, but can't use it on my company computer. I have been learning with the developer tools, but I can't test the apps on my personal Vision Pro. Is there a way to generate an app file on the MacBook that I can download to the Vision Pro? This would be a file that I could transfer to cloud storage and download using Safari on the Vision Pro. I will eventually get a Vision Pro at work, but until then I want to start developing.
Posted. Last updated.
Post not yet marked as solved
1 Reply
96 Views
As the title says: while I can find the video capture on the desktop, I cannot find where the screenshots are being stored, even when it says the screenshot succeeded. I am referencing this: https://developer.apple.com/documentation/visionos/capturing-screenshots-and-video-from-your-apple-vision-pro-for-2d-viewing
Posted. Last updated.
Post not yet marked as solved
1 Reply
100 Views
I have some USDZ files saved, and I would like to make thumbnails for them, in 2D of course. I was checking "Creating Quick Look Thumbnails to Preview Files in Your App", but it says: Augmented reality objects using the USDZ file format (iOS and iPadOS only). I would like to have the same functionality in my visionOS app. How can I do that? I thought about using some API to convert the 3D asset into a 2D asset, but it would be better if I could do that inside the Swift environment. Basically I want to do: Image(uiImage: "my_usdz_file")
Posted by Hygison. Last updated.
Post not yet marked as solved
1 Reply
66 Views
I have a Unity scene which I have created for Vision Pro, and I have also created a biometric authentication application for visionOS using Xcode and Swift. What I want to do is call the Unity scene from Xcode after the authentication has taken place. I have seen a Medium post, but it only shows how to do that for iOS apps; I am not able to do it for Vision Pro. I have followed this post: https://medium.com/mop-developers/launch-a-unity-game-from-a-swiftui-ios-app-11a5652ce476 I am doing all of this because, as far as I know, Apple Vision Pro does not currently support Optic ID authentication with Unity's PolySpatial plugin. Any help on this would be appreciated. Thank you in advance.
Posted by snusharma. Last updated.
Post not yet marked as solved
1 Reply
103 Views
Our app needs to scan QR codes (or a similar mechanism) to populate it with content the user wants to see. Is there any update on QR code scanning availability on this platform? I asked this before, but never got any feedback. I know that there is no way to access the camera (which is an issue in itself), but at least the system could provide an API to scan codes. (It would also be cool if we were able to use the same codes Vision Pro uses for detecting the Zeiss glasses, as long as we could create these via server-side JavaScript code.)
Posted by waldgeist. Last updated.
Post not yet marked as solved
1 Reply
128 Views
Hi there. I've been trying to take a snapshot programmatically on Apple Vision Pro but haven't succeeded. This is the code I am using so far:

func takeSnapshot<Content: View>(of view: Content) -> UIImage? {
    var image: UIImage?
    uiQueue.sync {
        let controller = UIHostingController(rootView: view)
        controller.view.bounds = UIScreen.main.bounds
        let renderer = UIGraphicsImageRenderer(size: controller.view.bounds.size)
        image = renderer.image { context in
            controller.view.drawHierarchy(in: controller.view.bounds, afterScreenUpdates: true)
        }
    }
    return image
}

However, UIScreen is unavailable on visionOS. Any idea how I can achieve this? Thanks, Oscar
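Since there is no UIScreen to size against, one alternative worth sketching is SwiftUI's ImageRenderer, which renders a view hierarchy to a UIImage without touching the screen; note it captures only the given SwiftUI view, not what the wearer actually sees. The fixed size and scale below are assumptions:

import SwiftUI
import UIKit

@MainActor
func takeSnapshot<Content: View>(of view: Content,
                                 size: CGSize = CGSize(width: 1280, height: 720)) -> UIImage? {
    // Give the view an explicit frame, since there is no full-screen metric to fall back on.
    let renderer = ImageRenderer(content: view.frame(width: size.width, height: size.height))
    renderer.scale = 2   // assumed rendering scale
    return renderer.uiImage
}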
Posted by ojrlopez. Last updated.
Post marked as solved
1 Reply
85 Views
I built two parts of my app a bit disjointed:

my physics component, which controls all SceneReconstruction, HandTracking, and WorldTracking.

my spatial GroupActivities component, which allows you to see the personas of those that join the activity.

My problem: when trying to use any DataProvider in a spatial experience, I get the ARKit session event dataProviderStateChanged, which disables all of my providers.

My question: has anyone successfully been able to find a workaround for this? I think it would be amazing to have one user be able to be the "host" for the activity and the scene reconstruction provider still continue to run for them.
Posted. Last updated.
Post not yet marked as solved
1 Reply
120 Views
I'm trying to understand how Apple handles dragging windows around in an immersive space. 3D gestures seem to be only half of the solution: they are great if you're standing still and want to move the window an exaggerated amount around the environment, but if you then start walking while dragging, the amplified gesture sends the entity flying off into the distance. It seems they quickly transition from one coordinate system to another depending on whether the user is physically moving. If you drag a window and start walking, the movement suddenly matches your speed. When you stop moving, you can push and pull the windows around again like a superhero. Am I missing something obvious in how to copy this behavior? Hello World, which uses the 3D gesture, has the same problem: you can move the world around, but if you walk with it, it flies off. Are they tracking the head movement and, if it has moved more than a certain amount, using that offset instead? Is there anything out of the box that can do this before I try to hack my own solution?
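One guess, and it is only a guess, is that the system separates the wearer's own locomotion from the hand movement and amplifies only the latter. A sketch of that idea using the WorldTrackingProvider and DeviceAnchor APIs (the class name, amplification factor, and blending scheme are assumptions, not Apple's documented behaviour):

import ARKit
import QuartzCore
import simd

final class WalkAwareDrag {
    private let session = ARKitSession()
    private let worldTracking = WorldTrackingProvider()
    private var headAtDragStart: SIMD3<Float>?
    private var entityAtDragStart: SIMD3<Float> = .zero

    func start() async throws {
        try await session.run([worldTracking])
    }

    private var headPosition: SIMD3<Float>? {
        guard let anchor = worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) else {
            return nil
        }
        let t = anchor.originFromAnchorTransform.columns.3
        return SIMD3<Float>(t.x, t.y, t.z)
    }

    func dragBegan(entityPosition: SIMD3<Float>) {
        entityAtDragStart = entityPosition
        headAtDragStart = headPosition
    }

    // handTranslation is the raw 3D drag translation in world space.
    func updatedPosition(handTranslation: SIMD3<Float>, amplification: Float = 3) -> SIMD3<Float> {
        let headDelta = (headPosition ?? .zero) - (headAtDragStart ?? .zero)
        let handOnly = handTranslation - headDelta          // strip out the wearer's own movement
        return entityAtDragStart + handOnly * amplification + headDelta
    }
}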
Posted. Last updated.
Post not yet marked as solved
0 Replies
98 Views
I'm building a visionOS app which loads a Reality Composer scene with a large number of models. The app includes several of these scenes and allows the user to switch between them. Because the scenes have a large number of models, I want to unload the currently loaded scene before loading a different one. So far I have been unable to reclaim all of the used memory by removing the entities from the scene.

I've made a few small changes to the Mixed Immersive app template which demonstrate this behavior, included below (apparently I'm unable to upload a zip file with the entire project). Using just the two spheres included in the RealityKit content, the leaked memory is fairly small, but if you add a couple of larger models to the scene (I was able to easily find free ones online) then the memory leak becomes much more obvious.

When the immersive space is initially opened, I'm seeing roughly 44MB of used memory (as shown in the Xcode Debug navigator). Each time I tap the "Load Models" and then "Unload Models" buttons, the memory use decreases but does not get back down to the initial amount. Subsequent loads and unloads will continue to increase the used memory (the amount of increase will depend on the models that you add to the scene).

Also note that I've seen similar memory increases when dynamically creating the entities. Inside ViewModel.loadModels I've included some commented-out code that dynamically creates entities instead of loading a Reality Composer scene.

Is there a way to fully reclaim the used memory? I've tried many different ways to clear the RealityKit entities but so far have been unsuccessful.

struct RKMemTestApp: App {
    private var viewModel = ViewModel()

    var body: some Scene {
        WindowGroup {
            ContentView()
                .environment(viewModel)
        }

        ImmersiveSpace(id: "ImmersiveSpace") {
            ImmersiveView()
                .environment(viewModel)
        }
    }
}

Add this above the body in ContentView:

@Environment(ViewModel.self) private var viewModel

The ContentView body should be:

VStack {
    Toggle("Show ImmersiveSpace", isOn: $showImmersiveSpace)
        .font(.title)
        .frame(width: 360)
        .padding(24)
        .glassBackgroundEffect()

    Button("Load Models") {
        viewModel.loadModels()
    }

    Button("Unload Models") {
        viewModel.unloadModels()
    }
}

ImmersiveView:

struct ImmersiveView: View {
    @Environment(ViewModel.self) private var viewModel

    var body: some View {
        RealityView { content in
            if let rootEntity = viewModel.rootEntity {
                content.add(rootEntity)
            }
        } update: { content in
            if viewModel.rootEntity == nil && !content.entities.isEmpty {
                content.entities.removeAll()
            } else if let rootEntity = viewModel.rootEntity, content.entities.isEmpty {
                content.add(rootEntity)
            }
        }
    }
}

ViewModel:

import Foundation
import Observation
import RealityKit
import RealityKitContent

@Observable
class ViewModel {
    var rootEntity: Entity?

    init() { }

    func loadModels() {
        Task {
            if let scene = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
                Task { @MainActor in
                    if rootEntity == nil {
                        rootEntity = Entity()
                    }
                    rootEntity!.addChild(scene)
                }
            }
        }

        /*
        if rootEntity == nil {
            rootEntity = Entity()
        }
        for _ in 0..<1000 {
            let mesh = MeshResource.generateSphere(radius: 0.1)
            let material = SimpleMaterial(color: .blue, roughness: 0, isMetallic: true)
            let entity = ModelEntity(mesh: mesh, materials: [material])
            entity.position = [Float.random(in: 0.0..<1.0), Float.random(in: 0.5..<1.5), -Float.random(in: 1.5..<2.5)]
            rootEntity!.addChild(entity)
        }
        */
    }

    func unloadModels() {
        rootEntity?.children.removeAll()
        rootEntity?.removeFromParent()
        rootEntity = nil
    }
}
Posted by KGraus. Last updated.
Post marked as solved
1 Reply
93 Views
I am trying to launch openImmersiveSpace, but it seems like there is an issue with the openImmersiveSpace Task.

Error: Static method 'buildExpression' requires that 'Task<OpenImmersiveSpaceAction.Result, Never>' conform to 'View'

Here is the code; the error shows up on the "Task" line.

import SwiftUI
import RealityKit
import RealityKitContent

struct TestView: View {
    @Environment(\.openImmersiveSpace) var openImmersiveSpace
    @Environment(\.dismissImmersiveSpace) var dismissImmersiveSpace

    var body: some View {
        VStack {
            Text("Open Full Immersive & switch to NextViewArea")

            NavigationLink {
                Task {
                    await openImmersiveSpace(id: "ImmersiveSpace")
                }
                NextViewArea()
            } label: {
                Label(" Enter Full Immersive Space")
            }
        }
    }
}

How can I move on to the next view area in the floating window while also launching the full immersive space? Any help would be much appreciated.
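The error is reported because a NavigationLink's destination closure is a ViewBuilder, so a Task expression cannot appear inside it. One possible rearrangement, sketched here (NextViewArea comes from the post; the isPresented wiring is an assumed setup), is to open the immersive space from a Button action and then navigate programmatically:

import SwiftUI

struct TestView: View {
    @Environment(\.openImmersiveSpace) private var openImmersiveSpace
    @State private var showNextArea = false

    var body: some View {
        NavigationStack {
            VStack {
                Text("Open Full Immersive & switch to NextViewArea")

                Button("Enter Full Immersive Space") {
                    Task {
                        // Open the immersive space, then trigger the navigation push.
                        await openImmersiveSpace(id: "ImmersiveSpace")
                        showNextArea = true
                    }
                }
            }
            .navigationDestination(isPresented: $showNextArea) {
                NextViewArea()
            }
        }
    }
}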
Posted. Last updated.
Post not yet marked as solved
4 Replies
582 Views
Hi, my app launches with a mixed immersive space; the Preferred Default Scene Session Role is set to Immersive Space Application Session Role.

ImmersiveSpace(id: "sceneSpace") {
    ImmersiveView()
        .environmentObject(modelObject)
}
.immersionStyle(selection: .constant(.mixed), in: .mixed)

Other WindowGroups are opened too.

Problem: when the x button (bottom left corner) is tapped on any WindowGroup, the immersive space is dismissed. When the user opens the app again, the immersive space is gone. The same happens when the user opens the Home Screen. How can I keep the same immersive space when the app is opened again? Thank you!
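One possible mitigation, sketched here as an assumption rather than a fix, is to watch scenePhase in the main window and ask the system to reopen the immersive space whenever the app becomes active without it; this recreates the space rather than literally preserving it, and the ContentControls view and the bookkeeping flag are illustrative:

import SwiftUI

struct RootView: View {
    @Environment(\.scenePhase) private var scenePhase
    @Environment(\.openImmersiveSpace) private var openImmersiveSpace
    @State private var spaceIsOpen = false

    var body: some View {
        ContentControls()   // assumed window content
            .onChange(of: scenePhase) { _, newPhase in
                guard newPhase == .active, !spaceIsOpen else { return }
                Task {
                    // Re-open the immersive space declared with id "sceneSpace".
                    let result = await openImmersiveSpace(id: "sceneSpace")
                    if case .opened = result {
                        spaceIsOpen = true
                    }
                }
            }
    }
}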
Posted by gebs. Last updated.