Discuss spatial computing on Apple platforms and how to design and build an entirely new universe of apps and games for Apple Vision Pro.

Unable to set a maximum window size for views under different tabs

On TikTok for Vision Pro, the home page has different minimum and maximum window heights and widths than the search page. I am able to set a minimum window size for each tab view, but the maximum size doesn't seem to take effect. Code:

// WindowSizeModel.swift
import Foundation
import SwiftUI

enum TabType {
    case home
    case search
    case profile
}

@Observable
class WindowSizeModel {
    var minWidth: CGFloat = 400
    var maxWidth: CGFloat = 500
    var minHeight: CGFloat = 400
    var maxHeight: CGFloat = 500

    func setWindowSize(for tab: TabType) {
        switch tab {
        case .home:
            configureWindowSize(minWidth: 400, maxWidth: 500, minHeight: 400, maxHeight: 500)
        case .search:
            configureWindowSize(minWidth: 300, maxWidth: 800, minHeight: 300, maxHeight: 800)
        case .profile:
            configureWindowSize(minWidth: 800, maxWidth: 1000, minHeight: 800, maxHeight: 1000)
        }
    }

    private func configureWindowSize(minWidth: CGFloat, maxWidth: CGFloat, minHeight: CGFloat, maxHeight: CGFloat) {
        self.minWidth = minWidth
        self.maxWidth = maxWidth
        self.minHeight = minHeight
        self.maxHeight = maxHeight
    }
}

// tiktokForSpacialModelingApp.swift
import SwiftUI

@main
struct tiktokForSpacialModelingApp: App {
    @State private var windowSizeModel: WindowSizeModel = WindowSizeModel()

    var body: some Scene {
        WindowGroup {
            MainView()
                .frame(
                    minWidth: windowSizeModel.minWidth, maxWidth: windowSizeModel.maxWidth,
                    minHeight: windowSizeModel.minHeight, maxHeight: windowSizeModel.maxHeight)
                .environment(windowSizeModel)
        }
        .windowResizability(.contentSize)
    }
}

// MainView.swift
import SwiftUI
import RealityKit

struct MainView: View {
    @State private var selectedTab: TabType = TabType.home
    @Environment(WindowSizeModel.self) var windowSizeModel

    var body: some View {
        @Bindable var windowSizeModel = windowSizeModel

        TabView(selection: $selectedTab) {
            Tab("Home", systemImage: "play.house", value: TabType.home) {
                HomeView()
            }
            Tab("Search", systemImage: "magnifyingglass", value: TabType.search) {
                SearchView()
            }
            Tab("Profile", systemImage: "person.crop.circle", value: TabType.profile) {
                ProfileView()
            }
        }
        .onAppear {
            windowSizeModel.setWindowSize(for: TabType.home)
        }
        .onChange(of: selectedTab) { oldTab, newTab in
            if oldTab == newTab {
                return
            } else if newTab == TabType.home {
                windowSizeModel.setWindowSize(for: TabType.home)
            } else if newTab == TabType.search {
                windowSizeModel.setWindowSize(for: TabType.search)
            } else if newTab == TabType.profile {
                windowSizeModel.setWindowSize(for: TabType.profile)
            }
        }
    }
}
Replies: 1 · Boosts: 0 · Views: 72 · Activity: 5d
Tapping once with both hands only works sometimes in visionOS
Hello! I have an iOS app and I'm looking into supporting visionOS. I have a whole bunch of gestures set up using UIGestureRecognizer, and so far most of them work great in visionOS. But I see something odd that I'm not sure can be fixed on my end. I have a UITapGestureRecognizer set up with numberOfTouchesRequired = 2, which I assume translates in visionOS to tapping your thumb and index finger together on both hands at once. When I tap with both hands, sometimes this gesture fires and other times it doesn't, and it reports that it only received one touch when it should be two. Interestingly, I see the same behavior in Apple Maps, where tapping once with both hands should zoom out the map, but it only works sometimes. Can anyone explain this, or am I missing something?
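For reference, here is a minimal sketch of the gesture setup the post describes; the view controller and handler names are placeholders, not taken from the original project.

import UIKit

class GestureViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()

        // A single tap that requires two simultaneous touches; on visionOS this
        // corresponds to pinching with both hands at the same time.
        let twoHandTap = UITapGestureRecognizer(target: self, action: #selector(handleTwoHandTap(_:)))
        twoHandTap.numberOfTapsRequired = 1
        twoHandTap.numberOfTouchesRequired = 2
        view.addGestureRecognizer(twoHandTap)
    }

    @objc private func handleTwoHandTap(_ gesture: UITapGestureRecognizer) {
        // numberOfTouches reports how many touches the recognizer actually received.
        print("Two-hand tap recognized with \(gesture.numberOfTouches) touches")
    }
}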
Replies: 0 · Boosts: 0 · Views: 114 · Activity: 5d
visionOS: accessing ARKit in the Shared Space

I was planning to experiment with ARKit on visionOS to create a widget app that places small, room-persistent objects around the user's room, which the user can anchor anywhere they like. The trouble is, I don't find it a great experience that this has to be used in a Full Space, as that is limiting: these kinds of widgets only make sense when you want to glance at them quickly, not as part of the main task you are performing. Is there any way the room's positional anchors can be stored and re-established any time someone opens the app in the Shared Space, rather than in a Full Space?
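For context, a minimal sketch of how room-persistent anchors are typically set up with ARKit on visionOS today; this only works while the app is running in a Full Space (an ImmersiveSpace), which is exactly the limitation the post asks about. The type and method names such as placeWidget are placeholders.

import ARKit
import RealityKit

@MainActor
final class WidgetAnchorModel {
    let session = ARKitSession()
    let worldTracking = WorldTrackingProvider()

    func start() async throws {
        // World tracking (and hence WorldAnchor persistence) requires an ImmersiveSpace.
        try await session.run([worldTracking])
    }

    /// Persistently anchors a widget entity at a position in the room.
    func placeWidget(_ entity: Entity, at originFromWidget: simd_float4x4) async throws {
        let anchor = WorldAnchor(originFromAnchorTransform: originFromWidget)
        try await worldTracking.addAnchor(anchor)
        entity.transform = Transform(matrix: originFromWidget)
        // The anchor's UUID can be stored and matched against anchor updates
        // the next time the immersive space is opened.
        print("Stored world anchor \(anchor.id)")
    }
}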
Replies: 1 · Boosts: 0 · Views: 122 · Activity: 5d
visionOS GroupActivities: watch together

I have an application that is meant to be a "watch together" GroupActivity using SharePlay, coordinating video playback with AVPlayerPlaybackCoordinator. In the current implementation, the activity begins before the AVPlayer is opened; however, when tapping the back button in the AVPlayer view, the user is prompted to "End Activity for Everyone" or "End Activity for just me". There is no option to continue the group activity. My goal is to retain the same GroupSession even if a user exits the AVPlayer view. Is there a way to avoid ending the session when coordinating playback using AVPlayerPlaybackCoordinator?

private func startObservingSessions() async {
    sessionInfo = .init()
    // Await new sessions to watch video together.
    for await session in MyActivity.sessions() {
        // Clean up the old session, if it exists.
        cleanUpSession(groupSession)

        #if os(visionOS)
        // Retrieve the new session's system coordinator object to update its configuration.
        guard let systemCoordinator = await session.systemCoordinator else { continue }

        // Create a new configuration that enables all participants to share the same immersive space.
        var configuration = SystemCoordinator.Configuration()
        // Sets up spatial persona configuration.
        configuration.spatialTemplatePreference = .sideBySide
        configuration.supportsGroupImmersiveSpace = true
        // Update the coordinator's configuration.
        systemCoordinator.configuration = configuration
        #endif

        // Set the app's active group session before joining.
        groupSession = session
        // Store the session for use in sending messages.
        sessionInfo?.session = session

        let stateListener = Task {
            await self.handleStateChanges(groupSession: session)
        }
        subscriptions.insert(.init { stateListener.cancel() })

        // Observe when the local user or a remote participant changes the activity on the GroupSession.
        let activityListener = Task {
            await self.handleActivityChanges(groupSession: session)
        }
        subscriptions.insert(.init { activityListener.cancel() })

        // Join the session to participate in playback coordination.
        session.join()
    }
}

/// An implementation of `AVPlayerPlaybackCoordinatorDelegate` that determines how
/// the playback coordinator identifies local and remote media.
private class CoordinatorDelegate: NSObject, AVPlayerPlaybackCoordinatorDelegate {
    var video: Video?

    // Adopting this delegate method is required when playing local media,
    // or any time you need a custom strategy for identifying media. Without
    // implementing this method, coordinated playback won't function correctly.
    func playbackCoordinator(_ coordinator: AVPlayerPlaybackCoordinator,
                             identifierFor playerItem: AVPlayerItem) -> String {
        // Return the video id as the player item identifier.
        "\(video?.id ?? -1)"
    }
}

/// Initializes the playback coordinator for synchronizing video playback.
func initPlaybackCoordinator(playbackCoordinator: AVPlayerPlaybackCoordinator) async {
    self.playbackCoordinator = playbackCoordinator
    if let coordinator = self.playbackCoordinator {
        coordinator.delegate = coordinatorDelegate
    }
    if let activeSession = groupSession {
        // Set the group session on the AVPlayer instance's playback coordinator
        // so it can synchronize playback with other devices.
        playbackCoordinator.coordinateWithSession(activeSession)
    }
}

/// A coordinator that acts as the player view controller's delegate object.
final class PlayerViewControllerDelegate: NSObject, AVPlayerViewControllerDelegate {
    let player: PlayerModel

    init(player: PlayerModel) {
        self.player = player
    }

    #if os(visionOS)
    // The app adopts this method to reset the state of the player model when a user
    // taps the back button in the visionOS player UI.
    func playerViewController(_ playerViewController: AVPlayerViewController,
                              willEndFullScreenPresentationWithAnimationCoordinator coordinator: UIViewControllerTransitionCoordinator) {
        Task { @MainActor in
            // Calling reset dismisses the full-window player.
            player.reset()
        }
    }
    #endif
}
Replies: 0 · Boosts: 0 · Views: 89 · Activity: 5d
Attach an attachment to the user's hand in visionOS

I am trying to attach a button to the user's left hand. The position is tracked, and the button stays above the user's left hand, but it doesn't face the user; it doesn't even face where the wrist is pointing. This is the main code snippet:

if model.editWindowAdded {
    let originalMatrix = model.originFromWristLeft
    let theattachment = attachments.entity(for: "sample")!

    entityDummy.addChild(theattachment)

    let testrotvalue = simd_quatf(real: 0.9906431,
                                  imag: SIMD3<Float>(-0.028681312, entityDummy.orientation.imag.y, 0.025926698))
    entityDummy.orientation = testrotvalue

    theattachment.position = [0, 0.1, 0]

    var timer = Timer.scheduledTimer(withTimeInterval: 0.1, repeats: true) { _ in
        let originalMatrix = model.originFromWristLeft
        print(originalMatrix.columns.0.y)

        let testrotvalue = simd_quatf(real: 0.9906431,
                                      imag: SIMD3<Float>(-0.028681312, 0.1, 0.025926698))
        entityDummy.orientation = testrotvalue
    }
}
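One common way to make an attachment face the user is to orient it toward the device position each frame rather than hard-coding a quaternion. Below is a minimal sketch of that idea, assuming a WorldTrackingProvider (worldTracking) that is already running in an ARKitSession; it is an illustration, not the poster's code, and the 180° flip may or may not be needed depending on which way the attachment's front faces.

import ARKit
import RealityKit
import QuartzCore

/// Rotates `entity` so it faces the wearer's head.
/// Assumes `worldTracking` is a WorldTrackingProvider already running in an ARKitSession.
func faceUser(_ entity: Entity, worldTracking: WorldTrackingProvider) {
    guard let deviceAnchor = worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) else { return }

    // Translation column of the device (head) transform, in world space.
    let head = deviceAnchor.originFromAnchorTransform.columns.3
    let headPosition = SIMD3<Float>(head.x, head.y, head.z)

    // look(at:) points the entity's -Z axis at the target, so a UI attachment
    // whose front faces +Z may need an extra half-turn around Y.
    entity.look(at: headPosition, from: entity.position(relativeTo: nil), relativeTo: nil)
    entity.orientation *= simd_quatf(angle: .pi, axis: [0, 1, 0])
}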
Replies: 2 · Boosts: 0 · Views: 154 · Activity: 5d
Can’t Figure Out How to Get My Earth Entity to Rotate on its Axis
I can't figure out how to get my Earth entity to rotate on its axis. This is a follow-up to a previous Apple Developer Forums post. How would I have the Earth (parent) entity rotate counterclockwise underneath the orbiting starship child? I tried adding the following code block to the RealityView, but it is not working:

if let rotatingEarth = starshipEntity.findEntity(named: "Earth") {
    rotatingEarth.transform.rotation = simd_quatf.init(angle: 360, axis: SIMD3(x: 0, y: 1, z: 0))
    if let animation = try? AnimationResource.generate(with: rotatingEarth as! AnimationDefinition) {
        rotatingEarth.playAnimation(animation)
    }
}

Any advice on getting the Earth to rotate? I tried reviewing the Hello World WWDC23 sample project, but I couldn't follow its complexity or see how it gets the Earth to rotate. I want to do this for visionOS 1.2. I realize some new animation and other capabilities are coming in visionOS 2.0, but I want to address this in the currently released version.
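For illustration, here is a minimal sketch of one way to spin an entity continuously in RealityKit using a custom component and system, similar in spirit to the approach in Apple's Hello World sample; the names and speed value are assumptions, not the sample's actual code.

import RealityKit

/// Marks an entity that should spin about its local Y axis.
struct SpinComponent: Component {
    var radiansPerSecond: Float = 0.3
}

/// Applies a small incremental rotation to every spinning entity each frame.
struct SpinSystem: System {
    static let query = EntityQuery(where: .has(SpinComponent.self))

    init(scene: RealityKit.Scene) {}

    func update(context: SceneUpdateContext) {
        for entity in context.entities(matching: Self.query, updatingSystemWhen: .rendering) {
            guard let spin = entity.components[SpinComponent.self] else { continue }
            let delta = simd_quatf(angle: spin.radiansPerSecond * Float(context.deltaTime),
                                   axis: [0, 1, 0])
            entity.transform.rotation = delta * entity.transform.rotation
        }
    }
}

// Registration (for example at app launch), then tag the Earth entity:
//   SpinComponent.registerComponent()
//   SpinSystem.registerSystem()
//   rotatingEarth.components.set(SpinComponent(radiansPerSecond: -0.3)) // flip the sign to reverse direction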
Replies: 3 · Boosts: 0 · Views: 170 · Activity: 6d
Building for 'iOS', but linking in object file built for 'visionOS'
I have an application made with Flutter, which can run on visionOS as a "Designed for iPad" app, and I would like it to be possible to go into mixed reality somehow from inside this application. What I have tried so far is embedding my visionOS project inside the Swift application that Flutter generates, but in that attempt I got an error from Xcode telling me this approach is not possible. I wonder if there is another way I could achieve my goal?
Replies: 2 · Boosts: 0 · Views: 143 · Activity: 6d
visionOS 2.0 main camera image fusion

I want to align and fuse the video streams from the main camera and my external camera in visionOS 2.0, ensuring that the fused image stays directly in front of the field of view as the head moves, similar to a normal passthrough video image. I have already achieved and verified static image alignment and fusion on the Vision Pro using screenshots from the main camera and the external video stream. However, I don't know how to perform real-time fusion with the main camera images. Could you please advise on how I can achieve this?
Replies: 1 · Boosts: 0 · Views: 86 · Activity: 1w
Unable to display contextMenu
This is a visionOS app. I added a contextMenu to a composite view, but when I long-press the view, nothing happens. I tried using this contextMenu on other views, where it works normally, so I think something is wrong with this composite view, but I don't know what the problem is. I hope you can point me in the right direction. Thank you! The view with the problem:

struct NAMEView: View {
    @StateObject private var placeStore = PlaceStore()

    var body: some View {
        ZStack {
            Group {
                HStack(spacing: 2) {
                    Image(systemName: "mappin.circle.fill")
                        .font(.system(size: 50))
                        .symbolRenderingMode(.multicolor)
                        .accessibilityLabel("your location")
                        .accessibilityAddTraits([.isHeader])
                        .padding(.leading, 5.5)
                    VStack {
                        Text("\(placeStore.locationName)")
                            .font(.title3)
                            .accessibilityLabel(placeStore.locationName)
                        Text("You are here in App")
                            .font(.system(size: 13))
                            .foregroundColor(.secondary)
                            .accessibilityLabel("You are here in App")
                    }
                    .hoverEffect { effect, isActive, _ in
                        effect.opacity(isActive ? 1 : 0)
                    }
                    .padding()
                }
            }
            .onAppear {
                placeStore.updateLocationName()
            }
            .glassBackgroundEffect()
            .hoverEffect { effect, isActive, proxy in
                effect.clipShape(.capsule.size(
                    width: isActive ? proxy.size.width : proxy.size.height,
                    height: proxy.size.height,
                    anchor: .leading
                ))
                .scaleEffect(isActive ? 1.05 : 1.0)
            }
        }
    }
}
Replies: 1 · Boosts: 0 · Views: 178 · Activity: 1w
Floor stability with physics simulations
In RealityKit on visionOS, I scan the room and use the resulting mesh to create occlusion and physical boundaries. That works well, and I can place cubes (with physics enabled) onto it too. However, I also want to update the mesh with versions from new scans, and that makes all my cubes jump. Is there a way to prevent this? I understand that the inaccuracies will produce a slightly different mesh each time, and I don't want to anchor the objects, so my guess is I need to somehow determine a fixed floor height and alter the scanned meshes so they adhere to that fixed height. Any thoughts or ideas appreciated. /Andreas
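To illustrate the fixed-floor idea the post sketches, here is a minimal example of adding a static, invisible floor collider at a fixed height so dynamic bodies rest on it instead of on the re-scanned mesh; the height value, size, and names are assumptions, not from the post.

import RealityKit

/// Creates an invisible, static floor collider at a fixed world-space height.
/// Dynamic bodies resting on this plane are unaffected by later scene-mesh updates.
func makeFixedFloor(atHeight floorY: Float, size: Float = 10) -> Entity {
    let floor = Entity()

    // A thin box used purely for collision; no ModelComponent, so it never renders.
    let shape = ShapeResource.generateBox(width: size, height: 0.02, depth: size)
    floor.components.set(CollisionComponent(shapes: [shape]))
    floor.components.set(PhysicsBodyComponent(shapes: [shape], mass: 1, mode: .static))

    // Position the collider so its top surface sits at the chosen floor height.
    floor.position = [0, floorY - 0.01, 0]
    return floor
}

// Usage sketch: add it once to the RealityView content alongside the scanned mesh.
//   content.add(makeFixedFloor(atHeight: 0))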
Replies: 0 · Boosts: 0 · Views: 161 · Activity: 1w
Where is the device anchor located?

Hi, my goal is to obtain the device pose (6 DoF) of the Apple Vision Pro, and I found a function that might satisfy my need: final func queryDeviceAnchor(atTimestamp timestamp: TimeInterval) -> DeviceAnchor?, which returns a device anchor (containing the position and orientation of the headset). However, I couldn't find any documentation specifying where exactly the device anchor is located on the headset. Is it at the midpoint between the user's eyes? Is it at the centroid of the six world-facing tracking cameras? It would be really helpful if someone could provide a local transformation matrix (similar to a camera extrinsic) from a visible rigid component (say the Digital Crown, top button, or the laser scanner) to the device anchor. Thanks.
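For reference, a minimal sketch of querying the device anchor and reading its pose; this only shows how the transform is obtained, not where on the headset the anchor origin physically sits, which is the open question in the post.

import ARKit
import QuartzCore
import simd

final class DevicePoseProvider {
    private let session = ARKitSession()
    private let worldTracking = WorldTrackingProvider()

    /// Starts world tracking; this requires the app to have an open ImmersiveSpace.
    func start() async throws {
        try await session.run([worldTracking])
    }

    /// The headset's current pose as a world-from-device 4x4 transform, if tracking is available.
    func currentDevicePose() -> simd_float4x4? {
        guard let anchor = worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) else {
            return nil
        }
        return anchor.originFromAnchorTransform
    }
}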
Replies: 1 · Boosts: 1 · Views: 167 · Activity: 1w
LowLevelMesh: Triangle Colors
I am trying to follow the documentation in the visionOS beta for the new RealityKit LowLevelMesh construct (https://developer.apple.com/documentation/realitykit/lowlevelmesh) that draws a triangle. Although the code specifies different colors for each of the three vertices, the triangle renders in white. I believe the missing link may be a shader graph material, but because I will be drawing millions of triangles, with colors defined at the vertices and interpolated over the area of each triangle, I want to make sure it is efficient, whether with shader graph materials or perhaps Metal. With an earlier version of the app I'm working on, I successfully used a shader graph material with MeshDescriptor.primitives as polygons for tetrahedra. However, that is inefficient for more than 1,000 tetrahedra (and crashes), so I'm trying to use the new LowLevelMesh instead (with each tetrahedron split into four triangles). However, I can't get very far using the example code from the documentation (which produces the white triangle), even when trying the default shader graph (GridMaterial), without getting quite a few error messages. I apply the suggested fixes and then get new ones (whack-a-mole) until everything seems broken. So in addition to my general question of shader graph vs. Metal for a LowLevelMesh, a concrete example of using a shader graph material with LowLevelMesh would be most appreciated! Thanks.
Replies: 3 · Boosts: 0 · Views: 284 · Activity: 1w
visionOS 2: Immersive Space / GroupActivity issue

Platform and Version / Development Environment: Xcode 16 Beta 3, visionOS 2 Beta 3.

Description of Problem: I am currently working on integrating SharePlay into my visionOS 2 application. The application features a fully immersive space where users can interact. However, I have encountered an issue during testing on TestFlight. When a user taps a button to activate SharePlay via the GroupActivity's activate() method within the immersive space, the immersive space visually disappears but is not properly dismissed. Instead, the immersive space can be made to reappear by turning the Digital Crown. Unfortunately, when it reappears, it overlaps with the built-in OS immersive space, resulting in a mixed and confusing user interface. This behavior is particularly concerning because the immersive space is not progressive and should not respond to turning the Digital Crown. It is important to note that this problem only appears when testing the app via TestFlight. When the same build is compiled with the Release configuration and run directly through Xcode, the immersive space behaves as expected and the issue does not occur.

Steps to Reproduce:
1. Build a project that includes a fully immersive space and incorporates GroupActivity support.
2. Add a button within a window or through a RealityView attachment that triggers the GroupActivity's activate() method.
3. Upload the build to TestFlight.
4. Connect to a FaceTime call.
5. Open the app, enter the immersive space, then press the button to activate the Group Activity.
Replies: 0 · Boosts: 0 · Views: 244 · Activity: 1w
How to save point cloud data and view it
Hello, recently I've been studying point cloud development. As a beginner in this field, I'm seeking some guidance on how to approach it. I want to obtain point cloud data and be able to display and view it as shown in the picture below. I used the "Displaying a Point Cloud Using Scene Depth" sample code as my starting point and then attempted to add a button to save the data held in the Renderer's private variable particlesBuffer: MetalBuffer. The particlesBuffer array contains data structured as follows:

struct ParticleUniforms {
    simd_float3 position;
    simd_float3 color;
    float confidence;
};

My understanding is that this data represents the point cloud; if I am wrong, please let me know. Next, I wrote my own code to display this data using SceneKit, creating small spheres based on the position and color values. However, in practice this method only lets me display about 30,000 spheres before it becomes very laggy. I believe my implementation is probably wrong, because the 3D Scanner App displays point clouds with much better performance. Could you please advise me on how to achieve an effect like the one shown in the image below? Thank you.
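Not from the post, but one common alternative to per-point spheres is to put all points into a single SCNGeometry using the .point primitive type, which SceneKit renders far more efficiently. A minimal sketch under that assumption follows; the PointVertex struct is a placeholder standing in for the saved ParticleUniforms data.

import SceneKit
import simd

struct PointVertex {
    var position: SIMD3<Float>
    var color: SIMD3<Float>
}

/// Builds a single point-cloud node from an array of points instead of one sphere per point.
func makePointCloudNode(points: [PointVertex]) -> SCNNode {
    let vertices = points.map { SCNVector3($0.position.x, $0.position.y, $0.position.z) }
    let vertexSource = SCNGeometrySource(vertices: vertices)

    // Per-vertex colors, three floats per point.
    let colorFloats = points.flatMap { [$0.color.x, $0.color.y, $0.color.z] }
    let colorData = colorFloats.withUnsafeBufferPointer { Data(buffer: $0) }
    let colorSource = SCNGeometrySource(data: colorData,
                                        semantic: .color,
                                        vectorCount: points.count,
                                        usesFloatComponents: true,
                                        componentsPerVector: 3,
                                        bytesPerComponent: MemoryLayout<Float>.size,
                                        dataOffset: 0,
                                        dataStride: MemoryLayout<Float>.size * 3)

    // One .point primitive per vertex, indexed 0..<count.
    let indices = Array(0..<UInt32(points.count))
    let indexData = indices.withUnsafeBufferPointer { Data(buffer: $0) }
    let element = SCNGeometryElement(data: indexData,
                                     primitiveType: .point,
                                     primitiveCount: points.count,
                                     bytesPerIndex: MemoryLayout<UInt32>.size)
    element.pointSize = 3
    element.minimumPointScreenSpaceRadius = 1
    element.maximumPointScreenSpaceRadius = 5

    return SCNNode(geometry: SCNGeometry(sources: [vertexSource, colorSource], elements: [element]))
}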
Replies: 0 · Boosts: 0 · Views: 89 · Activity: 1w
Remote spatial images
Hello 👋 I have the following question: I am using a Simulator with visionOS 2.0 installed, and I am trying to display a remote spatial image, but I cannot get it to show. I am trying to use the new updates from WebKit (https://webkit.org/blog/15443/news-from-wwdc24-webkit-in-safari-18-beta/#spatial-media) and show the image in a web view, but I cannot make it work; the image is not shown. For the native version, I thought the new Quick Look features would help to display the spatial media, but I think that only works for local files, right? I downloaded the file first but had no success. Any ideas how I can display remote spatial images?
Replies: 0 · Boosts: 0 · Views: 150 · Activity: 1w