Discuss spatial computing on Apple platforms and how to design and build an entirely new universe of apps and games for Apple Vision Pro.

All subtopics
Posts under Spatial Computing topic


Does Apple Spatial Audio Format documentation exist?
The WWDC25 video and notes titled "Learn About Apple Immersive Video Technologies" introduced the Apple Spatial Audio Format (ASAF) and its codec (APAC). However, despite references throughout to using immersive video, there is scant information on ASAF/APAC (no code examples, no framework references), and months on I've found no documentation in Apple's APIs/frameworks about its implementation and use. I want to leverage ambisonic audio in my app, and I don't want to write a custom AU if APAC will be opened up to developers. Reading the notes below alongside the iPhone 17 advertising ("Video is captured with Spatial Audio for immersive listening"), it sounds like this is very much a live feature in iOS 26. Does anyone know the state of play? I'm familiar with how the PHASE engine works; that's unrelated to what I'm asking about here.

Quotes from the video referenced above:

"ASAF enables truly externalized audio experiences by ensuring acoustic cues are used to render the audio. It's composed of new metadata coupled with linear PCM, and a powerful new spatial renderer that's built into Apple platforms. It produces high resolution Spatial Audio through numerous point sources and high resolution sound scenes, or higher order ambisonics."

"ASAF is carried inside of broadcast Wave files with linear PCM signals and metadata. You typically use ASAF in production, and to stream ASAF audio, you will need to encode that audio as an mp4 APAC file."

"APAC efficiently distributes ASAF, and APAC is required for any Apple immersive video experience. APAC playback is available on all Apple platforms except watchOS, and supports Channels, Objects, Higher Order Ambisonics, Dialogue, Binaural audio, interactive elements, as well as provisioning for extendable metadata."
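Since the video describes APAC as an mp4 payload, one way to probe the state of play without any private API is to inspect what format the system reports for a file's audio track. A minimal sketch using only standard AVFoundation/CoreMedia calls (the function and file URL are illustrative; this is not an ASAF/APAC API):

```swift
import AVFoundation
import CoreMedia

// Prints the FourCC of each audio track's format so you can see whether
// the system reports an APAC (or other spatial) codec for a given file.
func probeAudioFormats(of url: URL) async throws {
    let asset = AVURLAsset(url: url)
    for track in try await asset.loadTracks(withMediaType: .audio) {
        for description in try await track.load(.formatDescriptions) {
            let subtype = CMFormatDescriptionGetMediaSubType(description)
            print("Audio format FourCC:", fourCC(subtype))
        }
    }
}

// Renders a FourCharCode such as 'lpcm' as a readable four-character string.
private func fourCC(_ code: FourCharCode) -> String {
    let bytes = [UInt8((code >> 24) & 0xff), UInt8((code >> 16) & 0xff),
                 UInt8((code >> 8) & 0xff), UInt8(code & 0xff)]
    return String(bytes: bytes, encoding: .macOSRoman) ?? "????"
}
```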
Replies: 4 · Boosts: 0 · Views: 436 · Activity: 18h
When placing a TextField within a RealityViewAttachment, the virtual keyboard does not appear in front of the user as expected.
Hello, Thank you for your time. I have a question regarding visionOS app development. When placing a SwiftUI TextField inside RealityView.attachments, we found that focusing on the field does not bring up the virtual keyboard in front of the user. Instead, the keyboard appears around the user’s lower abdomen area. However, when placing the same TextField in a regular SwiftUI layer outside of RealityView, the keyboard appears in the correct position as expected. This suggests that the issue is specific to RealityView.attachments. We are currently exploring ways to have the virtual keyboard appear directly in front of the user when using TextField inside RealityViewAttachments. If there is any method to explicitly control the keyboard position or any known workarounds—including alternative UI approaches—we would greatly appreciate your guidance. Best regards, Sadao Tokuyama
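For reference, a minimal sketch of the setup being described, using the standard RealityView attachments API (the attachment ID and placement values are illustrative):

```swift
import SwiftUI
import RealityKit

// A TextField hosted as a RealityView attachment, matching the report:
// focusing the field should summon the virtual keyboard.
struct AttachmentFieldView: View {
    @State private var text = ""

    var body: some View {
        RealityView { content, attachments in
            if let panel = attachments.entity(for: "inputPanel") {
                panel.position = [0, 1.2, -1]  // roughly eye height, 1 m in front
                content.add(panel)
            }
        } attachments: {
            Attachment(id: "inputPanel") {
                TextField("Enter text", text: $text)
                    .textFieldStyle(.roundedBorder)
                    .frame(width: 320)
            }
        }
    }
}
```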
Replies: 3 · Boosts: 1 · Views: 597 · Activity: Jul ’25
How to disable the download option from Quick Look PreviewApplication?
Hi, following the WWDC24 video "What's new in Quick Look for visionOS", I've managed to open a 3D model using PreviewApplication by calling:

let previewItem = PreviewItem(url: modelURL, displayName: "Easter", editingMode: .disabled)
_ = PreviewApplication.open(items: [previewItem])

However, the "Save to Downloads" option is always there (see attached screenshot). The models are user-generated content, and I don't want the download option to be available to all users. How can I disable it?
Replies: 3 · Boosts: 1 · Views: 621 · Activity: Oct ’24
Attaching VideoMaterial to DockingRegion
I have a VideoMaterial inside a RealityView and want to attach it to a DockingRegion inside an immersive environment. Adding the VideoMaterial entity as a child of the docking region somewhat works, but there are no lighting effects (specular, diffuse) from the playing video. So essentially: how can you add a VideoMaterial to a DockingRegion and achieve the same reflections/behavior as AVPlayerViewController? The latter is not an option, as I need custom controls.
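For context, a minimal sketch of the setup in question, assuming the docking region lives in a scene loaded from Reality Composer Pro (the entity name, plane size, and URL are illustrative):

```swift
import RealityKit
import AVFoundation

// Parents a VideoMaterial-backed plane to a docking region found in a
// loaded immersive scene. "DockingRegion" is an assumed entity name.
func attachVideo(to immersiveScene: Entity, url: URL) {
    let player = AVPlayer(url: url)
    let screen = ModelEntity(
        mesh: .generatePlane(width: 1.6, height: 0.9),
        materials: [VideoMaterial(avPlayer: player)]
    )

    if let dockingRegion = immersiveScene.findEntity(named: "DockingRegion") {
        dockingRegion.addChild(screen)
    }
    player.play()
}
```

The reported gap is that a plane like this does not produce the specular/diffuse lighting spill that AVPlayerViewController's docked playback does.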
Replies: 2 · Boosts: 1 · Views: 550 · Activity: Dec ’24
Send messages to the scene
In the Behaviors configuration in Reality Composer Pro, I saw the OnNotification trigger, which asks me to enter a notification name. In other words, Swift code in Xcode has to send a message containing that name. I would appreciate a walkthrough of how to send such a message from Swift in Xcode. (There is an answer in https://developer.apple.com/forums/thread/756978, but it doesn't work.) Also, in the Reality Composer Pro timeline there is a Notification action, which is used to send messages to Swift. How can my Swift code detect whether the Notification action has sent a message? (There is an answer in https://developer.apple.com/videos/play/wwdc2024/10102/, but it doesn't work.) I have asked this question before (https://developer.apple.com/forums/thread/756978). Those answers used to work, but they are all invalid on the latest system. I hope you can help me. Thank you.
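For the first direction (Swift to scene), the pattern from the linked thread posts a NotificationCenter message whose userInfo carries the target scene and the name entered in the OnNotification trigger. A sketch of that pattern, assuming the key strings from that thread still apply on current systems:

```swift
import Foundation
import RealityKit

// Posts the message an OnNotification behavior listens for.
// `name` stands in for the notification name entered in Reality Composer
// Pro; `scene` is the RealityKit scene containing the behavior's entity
// (e.g. someEntity.scene inside a RealityView closure).
func triggerBehavior(named name: String, in scene: RealityKit.Scene) {
    NotificationCenter.default.post(
        name: Notification.Name("RealityKit.NotificationTrigger"),
        object: nil,
        userInfo: [
            "RealityKit.NotificationTrigger.Scene": scene,
            "RealityKit.NotificationTrigger.Identifier": name
        ]
    )
}
```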
Replies: 1 · Boosts: 1 · Views: 749 · Activity: Oct ’24
Can developers use spatial images in a third-party app's spatial widget?
Spatial widgets are a new feature of visionOS 26. I notice the system Photos app can add a spatial image to its widget. I wonder whether third-party apps can use spatial images, or any 3D content, in their widgets. I tried to use RealityView in a widget and it crashed at runtime. So are spatial images in widgets only supported by the system Photos app, and not available to developers for now?
Replies: 1 · Boosts: 1 · Views: 701 · Activity: Jul ’25
Export camera positions from the Object Capture API
Hi, I would like to train Gaussian splats from my object captures. For that I need a point cloud and camera positions, together with the original photos, to train the splats in an app like PostShot. I could do this with Reality Capture, which supports exporting point clouds and camera positions, but it does not do well with turntable photogrammetry, while the Apple Object Capture API produces really solid results with turntable images. So my question is: can I export camera data from my object captures to use in another application? Or is there perhaps a plan to add this feature in the future? It would be really helpful in creating ultra-realistic 3D objects in Gaussian splat format. Thanks for any suggestions…
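If the API shape hasn't shifted, PhotogrammetrySession can report per-image camera poses and a point cloud alongside the model. A hedged sketch (the .poses and .pointCloud requests are assumptions to verify against the current RealityKit docs):

```swift
import RealityKit

// Requests camera poses and a point cloud alongside the usual model file.
// Verify the .poses/.pointCloud request cases against the current SDK.
func exportCaptureData(imagesDirectory: URL, modelURL: URL) async throws {
    let session = try PhotogrammetrySession(input: imagesDirectory)

    let outputs = Task {
        for try await output in session.outputs {
            if case let .requestComplete(_, result) = output {
                switch result {
                case .poses(let poses):
                    print("Camera poses:", poses)   // transforms per input image
                case .pointCloud(let cloud):
                    print("Point cloud:", cloud)
                case .modelFile(let url):
                    print("Model written to:", url)
                default:
                    break
                }
            }
        }
    }

    try session.process(requests: [.poses, .pointCloud, .modelFile(url: modelURL)])
    _ = await outputs.result
}
```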
Replies: 0 · Boosts: 1 · Views: 342 · Activity: Dec ’24
Please provide access to face tracking blend shapes on visionOS
Apple, please provide access to face tracking blend shapes on visionOS, just like you do on iOS. You have the best eye and face tracking implementation on the market; please let us use it. There is a sizable audience who will buy the headset just for it. I personally know multiple people who are not buying the headset simply because you locked those features out. No raw camera access is needed, just abstracted blend shape values. You will make the headset so much more useful if you do this simple thing.
Replies: 1 · Boosts: 1 · Views: 85 · Activity: Jun ’25
Disable Object Occlusion on iOS
I’m developing an app using RealityKit and RealityView. On newer iPhones, such as the iPhone 15 Pro, Object Occlusion appears to be enabled by default, which causes 3D entities to be hidden behind real-world objects in the scene. However, I need to disable this behavior to ensure proper rendering of my 3D content. This issue does not occur on older devices like the iPhone 13, where the app works as intended. I haven’t been able to find a solution to explicitly disable object occlusion on the newer devices for RealityView. Any guidance or suggestions to resolve this issue would be greatly appreciated! Thanks!
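For what it's worth, when dropping down to ARView instead of RealityView is an option, occlusion is controlled explicitly through the scene-understanding options; a sketch of that alternative:

```swift
import RealityKit

// ARView alternative: removing the occlusion option stops real-world
// geometry (on LiDAR devices) from hiding virtual content.
let arView = ARView(frame: .zero)
arView.environment.sceneUnderstanding.options.remove(.occlusion)
```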
Replies: 1 · Boosts: 1 · Views: 570 · Activity: Dec ’24
Issue in TabletopKit Sample scene + shared immersive space
Hello Community, I am currently developing an experimental visionOS app to investigate the social effects of the new Spatial Persona feature for my bachelor thesis. My setup includes a simple board game in which the participants can engage with their persona avatars. I tried to use TabletopKit for this setup, but ran into issues when starting the SharePlay session. When I tested my app, I couldn't see the other spatial persona anymore, despite the green SharePlay button indicating the session had started. The other person can see my actions on the board in their version of the app, but cannot interact with anything. Also, we are both seated on the default side of the table. I tried removing the environment I added, because it doesn't seem to sync with the other player. When I tried the FaceTime feature in the simulator without the environment, I could then see the test robot avatar, but at a totally wrong place. It seems it isn't just my environment occluding the seats, but a flaw in the seating process as well. When I tried the FaceTime feature in the simulator on the official test scene (TabletopKit Sample), I got the same incorrect placement and the warning "role(for:inSeatNumber:): The provided role identifier does not match a role in the current template." So my questions are: What needs to change so that TabletopKit handles seating correctly? How can I correctly use immersive scenes in combination with TabletopKit? I kept my implementation as close as possible to the TabletopKit example, so looking into that codebase should be enough for now. I debugged the positions of the seats and they are placed correctly in front of their equipment; the personas are just not placed on them.
Replies: 6 · Boosts: 4 · Views: 584 · Activity: Mar ’25
Does Vision Pro support facial recognition apps?
Hello everyone, I'm a Computer Science student. My supervisor has given me some topics for my final year project, and one of them involves using Vision Pro for facial recognition: specifically, identifying a designated face to display specific information. As a developer, my understanding of Vision Pro is quite limited. I've done some research online and found that Unity and Xcode are used as development tools. Traditionally, facial recognition is done using OpenCV. However, I've come across articles stating that Apple, for security reasons, does not allow facial recognition to be implemented. I'd like to ask if that's true. Also, with visionOS 2 featuring object tracking and image tracking, could these methods potentially replace facial recognition?
Replies: 1 · Boosts: 1 · Views: 613 · Activity: Sep ’24
Cannot find the Enterprise API entitlement for Vision Pro
We are developing a visionOS app and have applied for the visionOS Enterprise APIs, including Main Camera Access for Vision Pro. We already received the "Enterprise.license" file in the mail Apple sent us, and we used our developer account to import the license file into Xcode. However, in Xcode we cannot find the entitlement for the Enterprise API. If we put com.apple.developer.arkit.main-camera-access.allow into the project's entitlements file manually, Xcode raises an error, and we find that the app target doesn't have the "Additional Capabilities" section that includes the Enterprise APIs. What should we do to get the entitlement for the Enterprise API so we can use it?
Replies: 6 · Boosts: 1 · Views: 757 · Activity: Oct ’24
visionOS pushWindow being dismissed on app foreground
We seem to have found an issue when using the pushWindow action on visionOS. The issue occurs if the app is backgrounded and then reopened by selecting the app's icon on the home screen. Any window that was opened via the pushWindow action is then dismissed. We've been able to replicate the issue in a small sample project.

Replication steps:
1. Open the app.
2. Open a window via the push action.
3. Press the Digital Crown.
4. On the home screen, select the app's icon again.
5. The pushed window will now be dismissed.

There is a sample project linked here that shows off the issue, including a video of the bug in progress.
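For context, a minimal sketch of the push flow from the steps above (the window ID is illustrative and would be declared as a WindowGroup elsewhere in the App):

```swift
import SwiftUI

// A button that pushes a second window, matching the replication steps.
// "detail" is an assumed window ID declared in the App body.
struct MainView: View {
    @Environment(\.pushWindow) private var pushWindow

    var body: some View {
        Button("Push detail window") {
            pushWindow(id: "detail")
        }
    }
}
```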
Replies: 2 · Boosts: 1 · Views: 326 · Activity: Oct ’24
Launching a timeline on a specific model via notification
Hello! I’m familiar with the discussion on “Sending messages to the scene”, and I’ve successfully used that code. However, I have several instances of the same model in my scene. Is it possible to make only one specific model respond to a notification? For example, can I pass something like RealityKit.NotificationTrigger.SourceEntity in userInfo or use another method to target just one instance?
Replies: 1 · Boosts: 1 · Views: 82 · Activity: May ’25
Partial Occlusion Material
I am looking for a material that functions the same way OcclusionMaterial does, except that it only partially occludes whatever is behind it. One approach I thought of was changing the opacity of the entity covered in OcclusionMaterial; however, this did not change anything. Please let me know if this is possible.
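For reference, the supported way to fade an entity in RealityKit is OpacityComponent rather than a material parameter; whether it combines with OcclusionMaterial to give a partial occluder is exactly the open question here, so treat this as a sketch of the attempt, not a confirmed answer:

```swift
import RealityKit

// The "change the opacity" attempt, expressed with OpacityComponent,
// which controls an entity's opacity hierarchically.
let occluder = ModelEntity(
    mesh: .generateBox(size: 0.3),
    materials: [OcclusionMaterial()]
)
occluder.components.set(OpacityComponent(opacity: 0.5))
```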
Replies: 2 · Boosts: 1 · Views: 98 · Activity: Apr ’25