Discuss Spatial Computing on Apple Platforms.

Posts under General subtopic


Start Metal3 and visionOS in Compositor Services
I am seeking a comprehensive pathway for learning Metal programming on visionOS. The official documentation's Pathway on Metal is insufficient in this regard, so I kindly request that someone put together a detailed pathway to assist me. It should encompass the following key areas:

- Knowledge base: understand the fundamental principles of Metal and related frameworks, as well as the basic concepts needed to prepare for further learning.
- Metal 3 (very important): gain a deep understanding of Metal itself, the API used to communicate with the device's GPU to render graphics. This knowledge forms the foundation for all Metal-related tasks.
- Compositor Services and ARKit (important): learn how to display Metal scenes within the Vision Pro's space and enable augmented reality (AR) and hand interaction. This is essential for creating interactive and immersive experiences.
- Metal Performance Shaders: acquire expertise in optimizing material rendering to enhance performance.
- MetalKit: simplify the tasks that display Metal content onscreen.
- MetalFX: develop proficiency in using MetalFX to improve rendering efficiency and achieve visually stunning effects.

I would appreciate a detailed and comprehensive pathway, including the URLs of relevant documents, to guide my learning journey. Thank you for your assistance.
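For concreteness, here is a minimal sketch of the starting point a Compositor Services pathway builds toward: an ImmersiveSpace hosting a CompositorLayer whose frames you render with Metal from your own loop. The configuration type name, the space ID, and the render-loop comment are illustrative assumptions, not a complete renderer:

```swift
import SwiftUI
import CompositorServices

// Sketch: a configuration type conforming to CompositorLayerConfiguration.
// The choices here are assumptions for illustration.
struct ContentStageConfiguration: CompositorLayerConfiguration {
    func makeConfiguration(capabilities: LayerRenderer.Capabilities,
                           configuration: inout LayerRenderer.Configuration) {
        // Pick a color format and enable foveation if the device supports it.
        configuration.colorFormat = .bgra8Unorm_srgb
        configuration.isFoveationEnabled = capabilities.supportsFoveation
    }
}

@main
struct MetalOnVisionApp: App {
    var body: some Scene {
        ImmersiveSpace(id: "MetalSpace") {
            CompositorLayer(configuration: ContentStageConfiguration()) { layerRenderer in
                // Hand the LayerRenderer to a dedicated render thread that
                // repeatedly calls queryNextFrame(), encodes Metal work,
                // and presents the frame's drawable.
            }
        }
    }
}
```

From there, the pieces the post lists slot in around this skeleton: ARKit supplies device and hand anchors each frame, and MetalFX upscaling can be layered into the render pass.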
Replies: 0 · Boosts: 0 · Views: 518 · Nov ’24
Creating a multiview video playback experience in visionOS. There is no back button on the player.
Function introduction: https://developer.apple.com/documentation/avkit/creating-a-multiview-video-playback-experience-in-visionos/

When I use this function, my videoPlayer has no back action in the player, and we did not find any system-provided method addChildViewControllerAndView(form). Referencing this document also did not work: https://developer.apple.com/documentation/avkit/adopting-the-system-player-interface-in-visionos

As soon as you enter this line of code, there is no back button, only full screen and zoom out:

```swift
let playerController = AVPlayerViewController()
// Enable the multiview experience along with the default recommended set.
playerController.experienceController.allowedExperiences = .recommended(including: [.multiview])
```
Replies: 8 · Boosts: 0 · Views: 982 · Nov ’24
[App synchronization] I have a question about synchronizing Vision Pro app contents.
Hi! I'm creating an app that works like this: first, it uses image tracking to set a world anchor in the real world. The timeline in the Reality Composer Pro scene then needs to play at the same time for all the people using the app in the same place, so everyone sees the same content, in the same position, at the same time. I already have the image-tracking feature working, but the big problem is synchronization. I found Group Activities and TabletopKit as possible solutions, but I don't know if these are the right frameworks for this project. How do I solve this problem technically? If you have ideas, please let me know; I really need help with this.
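On the framework question: Group Activities looks like a plausible fit, since it gives every participant a shared session plus a low-latency messenger. A hedged sketch follows (the activity identifier, message type, and playback helper are assumptions, not a verified design): broadcast a common start date so every participant begins the Reality Composer Pro timeline at the same wall-clock moment.

```swift
import Foundation
import GroupActivities

// Hypothetical activity describing the shared scene session.
struct SharedSceneActivity: GroupActivity {
    static let activityIdentifier = "com.example.shared-scene" // placeholder ID
    var metadata: GroupActivityMetadata {
        var meta = GroupActivityMetadata()
        meta.title = "Shared Scene"
        meta.type = .generic
        return meta
    }
}

// Message telling every participant when to start the timeline.
struct TimelineStart: Codable, Sendable {
    let startDate: Date
}

// Placeholder: trigger the Reality Composer Pro timeline at the given date.
func scheduleTimelinePlayback(at date: Date) {
    // Your playback code goes here.
}

func join(session: GroupSession<SharedSceneActivity>) {
    let messenger = GroupSessionMessenger(session: session)
    session.join()
    Task {
        // Receive the agreed start time and begin playback then.
        for await (message, _) in messenger.messages(of: TimelineStart.self) {
            scheduleTimelinePlayback(at: message.startDate)
        }
    }
}
```

TabletopKit is aimed specifically at shared tabletop-style games built on a similar session, so plain Group Activities may be the lighter starting point for synchronized scene playback.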
Replies: 1 · Boosts: 0 · Views: 616 · Nov ’24
Maintain rotation sense while dragging
Hi, I have a problem with a visionOS app and I couldn't find a solution. I have a 3D carousel with cards, and when I use the drag gesture and drag to the left I want the carousel to rotate clockwise, and when I drag to the right I want it to rotate counterclockwise. My problem is that when I rotate my body more than 90 degrees to the left or to the right, the drag gesture's values change and the carousel rotates in the opposite direction. Do you know how I can maintain the correct rotation direction, taking into account that the user can rotate their body? I've tried reading the user's orientation with device tracking and checking whether the rotation on the Y axis is greater than 90 degrees in either direction, but there is a small zone, roughly between 70 and 110 degrees, where the carousel still rotates the wrong way. I think that's because the device tracker doesn't update at the same rate as the drag gesture, or doesn't have the same accuracy.
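One idea, offered only as an untested sketch (the yaw source and the sign conventions are assumptions): rather than thresholding head yaw at 90 degrees, rotate the drag vector into the user's current frame on every gesture update, so the left/right decision turns continuously with the body and there is no ambiguous zone.

```swift
import Foundation

// Sketch: express the drag's horizontal motion in the user's frame so
// "left/right" follows the body instead of the world axes. deviceYaw is
// assumed to come from your existing device tracking (radians, around Y);
// depending on your conventions the signs may need flipping.
func carouselRotationDelta(dragTranslation: SIMD3<Double>, deviceYaw: Double) -> Double {
    // Undo the user's yaw so +x always means "toward the user's right".
    let xInUserFrame = dragTranslation.x * cos(deviceYaw)
                     + dragTranslation.z * sin(deviceYaw)
    // Tuning constant: radians of carousel rotation per point of drag.
    let sensitivity = 0.005
    // Negative so a drag to the left produces a clockwise rotation.
    return -xInUserFrame * sensitivity
}
```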
Replies: 1 · Boosts: 0 · Views: 520 · Nov ’24
Recognizing Font Uninstall on visionOS
My visionOS app can install custom fonts, and it lists those fonts as available within the application; I can see them in a list using CTFontManagerCopyAvailableFontFamilyNames. I manually track which fonts have been installed. So far, so good. But here's my problem: when a user uninstalls a font via Settings, I have no way to tell, because CTFontManagerCopyAvailableFontFamilyNames still lists that font (it remains available within the application). How can my app detect that a font has been uninstalled via Settings?
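One avenue worth checking, offered as an assumption rather than a confirmed answer: instead of the list of available fonts, query what the font manager still reports as registered in the persistent scope, and diff that against your own records each time the app returns to the foreground. A minimal sketch:

```swift
import CoreText

// Sketch: list the PostScript names the system still reports as registered
// in the persistent (user-installed) scope. The assumption is that fonts
// removed in Settings drop out of this list, unlike the "available" list.
func registeredPersistentFontNames() -> Set<String> {
    let descriptors = CTFontManagerCopyRegisteredFontDescriptors(.persistent, true)
        as? [CTFontDescriptor] ?? []
    return Set(descriptors.compactMap {
        CTFontDescriptorCopyAttribute($0, kCTFontNameAttribute) as? String
    })
}
```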
Replies: 0 · Boosts: 0 · Views: 508 · Nov ’24
visionOS: Continuously Rotate a 3D Object
I want to open a view in my app that contains a Model3D view, and I want that object to rotate continuously around the Y axis while it is visible. Is it possible to animate the rotation of a Model3D view? I've tried this code, but the object just sits there and doesn't rotate.

```swift
import RealityKit
import RealityKitContent
import SwiftUI

struct QuantumComputerArea: View {
    @State var degreesRotating = 0.0

    var body: some View {
        VStack {
            Model3D(named: "quantumComputer") { phase in
                switch phase {
                case .empty:
                    ProgressView()
                case let .failure(error):
                    Text(error.localizedDescription)
                case let .success(model):
                    model
                        .resizable()
                        .scaledToFit()
                        .offset(x: -75, y: 0)
                        .rotation3DEffect(.degrees(degreesRotating),
                                          axis: (x: 0, y: 1, z: 0))
                @unknown default:
                    fatalError()
                } // phase
            } // Model3D
            .onAppear {
                withAnimation(.linear(duration: 10).repeatForever(autoreverses: false)) {
                    degreesRotating = 360
                }
            }
        } // VStack
    } // body
} // View
```

I'm probably missing something simple, but if anyone has any suggestions (including using a RealityView) I'd be grateful for the advice.
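Since the post invites RealityView suggestions, here is a hedged sketch of that route (the view name is invented, and loading from the RealityKitContent bundle is an assumption): load the model in a RealityView and let a small custom System advance its Y rotation every frame.

```swift
import SwiftUI
import RealityKit
import RealityKitContent

// Marks an entity that should spin; speed is radians per second.
struct SpinComponent: Component {
    var speed: Float = .pi / 5
}

// Advances the rotation of every spinning entity each frame.
struct SpinSystem: System {
    static let query = EntityQuery(where: .has(SpinComponent.self))
    init(scene: RealityKit.Scene) {}
    func update(context: SceneUpdateContext) {
        for entity in context.entities(matching: Self.query, updatingSystemWhen: .rendering) {
            guard let spin = entity.components[SpinComponent.self] else { continue }
            let delta = simd_quatf(angle: spin.speed * Float(context.deltaTime),
                                   axis: [0, 1, 0])
            entity.transform.rotation = delta * entity.transform.rotation
        }
    }
}

struct SpinningModelView: View {
    var body: some View {
        RealityView { content in
            // Register once; app launch is a better place in a real app.
            SpinComponent.registerComponent()
            SpinSystem.registerSystem()
            // "quantumComputer" is the model name from the original post.
            if let model = try? await Entity(named: "quantumComputer",
                                             in: realityKitContentBundle) {
                model.components.set(SpinComponent())
                content.add(model)
            }
        }
    }
}
```

Unlike rotation3DEffect, which animates the SwiftUI view, this rotates the entity's own transform every frame, so it keeps spinning for as long as the view is on screen.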
Replies: 2 · Boosts: 0 · Views: 1.2k · Nov ’24
Synchronizing Physical Properties of EntityEquipment in TabletopKit
I am working on adding synchronized physical properties to EntityEquipment in TabletopKit, allowing seamless coordination during GroupActivities sessions between players.

Current approach and limitations: I have tried setting the EntityEquipment's state to DieState and treating it as a TossableRepresentation object. This achieves basic physical properties synchronized across players, but it has several limitations:

- No collision detection between dice: multiple dice do not collide with each other.
- Shape limitations: custom shapes, like parallelepipeds, cannot be configured.

Below is my existing code for base entity equipment without physical properties:

```swift
struct CubeWithPhysics: EntityEquipment {
    let id: ID
    let entity: Entity
    var initialState: BaseEquipmentState

    init(id: ID, entity: Entity) {
        self.id = id
        self.entity = entity
        initialState = .init(parentID: .tableID,
                             pose: .init(position: .zero, rotation: .zero),
                             entity: self.entity)
    }
}
```

I'd appreciate any guidance on the recommended approach to adding synchronized physical properties to EntityEquipment.
Replies: 5 · Boosts: 4 · Views: 1.1k · Nov ’24
Example: "Volumetric Windows" don't stay in place in the simulator
I just downloaded and opened the sample code for creating 3D models as movable windows (link: https://developer.apple.com/documentation/visionos/creating-a-volumetric-window-in-visionos). I opened the main view in the simulator (canvas), placed the volumetric window somewhere, and then moved the camera a bit.

Expected: volumetric windows stay in place when I put them somewhere and then move around or look in a different direction.

Actual: they don't stay in place; they slightly move with the camera.

Question: is this behavior expected? Is this just a simulator thing that won't happen on real hardware?
Replies: 1 · Boosts: 0 · Views: 378 · Dec ’24
How to draw directly to the pixels of the Vision Pro screens?
I have been playing around with the idea of drawing directly onto the pixels of the Vision Pro, as I am working on a telepresence app that streams a live stereoscopic feed from an articulated robot neck to the wearer. I modified the Compositor Services demo to show the following: a grid pattern drawn in normalized device coordinates (-1 to 1), which looks great in the simulator, as shown below. I wanted to see the effects of lens distortion on the image, so I launched the modified app on the actual Vision Pro; there, it seems that each eye has only a portion of this screen visible. I have included a screen capture of a screen recording made inside the Vision Pro while running the modified app. The lines appear straight, which says to me that some automatic pre-distortion correction must be applied (similar to an image from an AVP teardown that I cannot link here). However, I am wondering why the grid appears cropped, and what defines the bounds of the frame.
Replies: 0 · Boosts: 1 · Views: 400 · Dec ’24
RealityKit Spatial Audio - Volume drops abruptly
I have a class with an Entity, to which I added a spatial audio component, and a function that uses the playAudio() method to start the spatial audio. During the first call of the function, everything is fine. If I call it again, the audio volume drops abruptly after about half a second and becomes very quiet. Approximately, I have the following code:

```swift
import AVFoundation
import RealityKit

class VoiceOutputPlayer: NSObject, ObservableObject, AVAudioPlayerDelegate {
    private var speechEntity = Entity()

    func play() {
        Task {
            let audioRessource = try await AudioFileResource(contentsOf: urlWave)
            self.speechEntity.playAudio(audioRessource)
        }
    }

    func initSpatialAudio() -> Entity {
        speechEntity.transform.translation.y = -0.37
        speechEntity.transform.translation.z = 0.09
        speechEntity.spatialAudio = SpatialAudioComponent(gain: Double.zero)
        speechEntity.spatialAudio?.reverbLevel = -2
        speechEntity.spatialAudio?.directivity = .beam(focus: 0.9)
        speechEntity.orientation = .init(angle: .pi, axis: [0, 1, 0])
        speechEntity.spatialAudio?.distanceAttenuation = .rolloff(factor: 1)
        return speechEntity
    }
}
```

I'm running visionOS 2.2 on the Apple Vision Pro and using Xcode 16.1.
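One thing worth trying, offered as an assumption rather than a confirmed fix: keep the AudioPlaybackController that playAudio(_:) returns and stop the previous playback before starting a new one, so two playbacks on the same entity cannot overlap and interact.

```swift
import RealityKit

// Sketch: ensure only one playback runs on the entity at a time by
// stopping the previous controller before starting the next resource.
final class SinglePlaybackPlayer {
    private let entity: Entity
    private var playback: AudioPlaybackController?

    init(entity: Entity) { self.entity = entity }

    func play(_ resource: AudioFileResource) {
        playback?.stop()                      // end any previous playback
        playback = entity.playAudio(resource) // keep the new controller
    }
}
```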
Replies: 1 · Boosts: 0 · Views: 466 · Dec ’24
Feedback: Improve window management on Vision Pro
Copy of Feedback FB15969432: improve window management with immersive spaces.

It is hard to manage windows from code when entering an immersive space. Look, for instance, at this sample: https://developer.apple.com/documentation/visionos/displaying-a-3d-environment-through-a-portal

The window displayed before entering the immersive space stays there once the space is entered; this window is too big, but it can't be resized programmatically. One could argue that this big window could be closed and a smaller window with the "Exit" button opened by the program, but then this small window would have to be closed and the main window reopened when leaving the immersive space. Inside the immersive space, closing the "Exit" window with the X does not leave the immersive space, and if the Digital Crown is then used, we go back to the Vision Pro main menu. If the app is chosen again, we can see that it wasn't closed: the "Exit" window is now displayed, but we are not in the immersive space!

Don't say "this is just a sample app", because all developers face these issues. Please work with your team to enhance this sample and share the right way to solve them; you may find that the specifications themselves need to be enhanced. Note also that there is no way for a program to exit itself, even though that could be useful for some apps (when you end a SharePlay game, for instance). Thank you very much for your time and consideration.
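For reference, a minimal sketch of the swap this feedback describes, using the standard SwiftUI environment actions (the window and space IDs are placeholders); the pain point above is precisely that the reverse path must be repeated by hand whenever the system, rather than the app, tears down the space:

```swift
import SwiftUI

// Sketch: swap the large main window for a small control window when
// entering the immersive space, and swap back on exit. IDs are placeholders.
struct EnterExitControls: View {
    @Environment(\.openImmersiveSpace) private var openImmersiveSpace
    @Environment(\.dismissImmersiveSpace) private var dismissImmersiveSpace
    @Environment(\.openWindow) private var openWindow
    @Environment(\.dismissWindow) private var dismissWindow

    var body: some View {
        Button("Enter portal") {
            Task {
                let result = await openImmersiveSpace(id: "Portal")
                if case .opened = result {
                    openWindow(id: "ExitControls") // small window inside the space
                    dismissWindow(id: "Main")      // hide the oversized main window
                }
            }
        }
        Button("Exit portal") {
            Task {
                await dismissImmersiveSpace()
                openWindow(id: "Main")
                dismissWindow(id: "ExitControls")
            }
        }
    }
}
```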
Replies: 1 · Boosts: 1 · Views: 466 · Dec ’24
How to obtain FoV values for each eye on visionOS
Hi, we have been experimenting with visionOS, and we need to query the field-of-view values for each eye of the device. We are currently using drawable.views[0].tangents and drawable.views[1].tangents respectively, which are labeled as deprecated. Is there an alternative function for obtaining the FOVs? We had no luck calculating them from the projection matrix we obtain from the drawables. Thanks
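For what it's worth, if the projection matrix has the usual off-axis perspective form, the per-eye half-angle tangents can be recovered from it directly. A sketch under that assumption (if the matrix you obtain from the drawable uses a different convention, the indices or signs will need adjusting):

```swift
import Foundation
import simd

// Sketch: recover per-eye FOV angles (radians) from a column-major,
// off-axis perspective projection matrix of the usual Metal form, where
// columns.0.x = 2 / (tanR - tanL) and columns.2.x = (tanR + tanL) / (tanR - tanL).
func fieldOfView(from p: simd_float4x4)
    -> (left: Float, right: Float, up: Float, down: Float) {
    let tanRight = (p.columns.2.x + 1) / p.columns.0.x
    let tanLeft  = (p.columns.2.x - 1) / p.columns.0.x
    let tanUp    = (p.columns.2.y + 1) / p.columns.1.y
    let tanDown  = (p.columns.2.y - 1) / p.columns.1.y
    return (atan(tanLeft), atan(tanRight), atan(tanUp), atan(tanDown))
}
```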
Replies: 0 · Boosts: 1 · Views: 317 · Dec ’24
visionOS console warning: Trying to convert coordinates between views that are in different UIWindows
Hello, I have an iOS app that uses SwiftUI, but the gesture code is written using UIGestureRecognizer. When I run this app on visionOS using the "Designed for iPad" destination and try to use any of my gestures, I see this warning in the console:

Trying to convert coordinates between views that are in different UIWindows, which isn't supported. Use convertPoint:fromCoordinateSpace: instead.

But I don't see any visible problems with the gestures. The warning is printed after the gesture takes place but before any of our gesture methods are called. So I am wondering whether this is something we need to deal with, or internal work that needs to happen in UIKit. Does anyone have any thoughts on this?
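In case it helps, here is the warning's suggested API in Swift form, as a hedged sketch (the function and parameter names are hypothetical stand-ins): coordinate-space conversion is the path the warning itself recommends for points that cross window boundaries.

```swift
import UIKit

// Sketch: convert a gesture's location into another view's coordinates
// using UICoordinateSpace, the API the console warning recommends.
// `gesture` and `targetView` are hypothetical names for illustration.
func location(of gesture: UIGestureRecognizer, in targetView: UIView) -> CGPoint? {
    guard let sourceView = gesture.view else { return nil }
    let point = gesture.location(in: sourceView)
    return targetView.coordinateSpace.convert(point, from: sourceView.coordinateSpace)
}
```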
Replies: 3 · Boosts: 0 · Views: 1.1k · Dec ’24