Does Apple Spatial Audio Format documentation exist

The WWDC25 video and notes titled “Learn About Apple Immersive Video Technologies” introduced the Apple Spatial Audio Format (ASAF) and codec (APAC). However, despite references to them throughout the material on immersive video, there is scant information on ASAF/APAC (no code examples and no framework references), and months on I’ve found no documentation in Apple’s APIs/frameworks about their implementation and use.

I want to leverage ambisonic audio in my app. I don’t want to write a custom AU if APAC will be opened up to developers. If you read the notes below along with the iPhone 17 advertising (“Video is captured with Spatial Audio for immersive listening”), it sounds like this is very much a live feature in iOS 26.

Anyone know the state of play? I’m across how the PHASE engine works, which is unrelated to what I’m asking about here.

Original quote from video referenced above: “ASAF enables truly externalized audio experiences by ensuring acoustic cues are used to render the audio. It’s composed of new metadata coupled with linear PCM, and a powerful new spatial renderer that’s built into Apple platforms. It produces high resolution Spatial Audio through numerous point sources and high resolution sound scenes, or higher order ambisonics.”

“ASAF is carried inside of broadcast Wave files with linear PCM signals and metadata. You typically use ASAF in production, and to stream ASAF audio, you will need to encode that audio as an mp4 APAC file.”

“APAC efficiently distributes ASAF, and APAC is required for any Apple immersive video experience. APAC playback is available on all Apple platforms except watchOS, and supports Channels, Objects, Higher Order Ambisonics, Dialogue, Binaural audio, interactive elements, as well as provisioning for extendable metadata.”

Answered by Vision Pro Engineer in 859551022

Hello @Plight, thank you for your question!

First, could you please share more about what you are trying to do in your app? It's difficult to give advice on workflows without knowing exactly what kind of experience you are trying to create.

Most of our existing documentation is going to be from WWDC videos, like the one you linked above, and the documentation website for specific APIs. Here's another WWDC25 video where we go over the Apple Positional Audio Codec (APAC) near the end: Learn about the Apple Projected Media Profile. It provides an example of how to author an audio file using AVAssetWriter.
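
As a rough illustration of that authoring flow, here is a minimal sketch of transcoding a linear PCM source (for example, an ASAF broadcast WAV) into an APAC .mp4 with AVAssetReader/AVAssetWriter. It is a sketch under assumptions, not confirmed production code: it assumes the kAudioFormatAPAC format ID from the iOS 26 SDK, and the output settings (sample rate, channel count, channel layout) are educated guesses you should adapt to your content.

```swift
import AVFoundation

// Sketch: transcode a PCM audio asset (e.g. an ASAF master) to APAC in an .mp4 container.
// kAudioFormatAPAC and the output settings below are assumptions to adapt, not a recipe.
func encodeToAPAC(source: URL, destination: URL) async throws {
    let asset = AVURLAsset(url: source)
    guard let track = try await asset.loadTracks(withMediaType: .audio).first else {
        throw CocoaError(.fileReadUnknown)
    }

    // Read the source as uncompressed PCM.
    let reader = try AVAssetReader(asset: asset)
    let readerOutput = AVAssetReaderTrackOutput(track: track, outputSettings: [
        AVFormatIDKey: kAudioFormatLinearPCM
    ])
    reader.add(readerOutput)

    // Describe the source's spatial layout (here: third-order ambisonics, 16 channels, ACN/SN3D).
    var layout = AudioChannelLayout()
    layout.mChannelLayoutTag = kAudioChannelLayoutTag_HOA_ACN_SN3D | 16
    let layoutData = Data(bytes: &layout, count: MemoryLayout<AudioChannelLayout>.size)

    // Write APAC-encoded audio.
    let writer = try AVAssetWriter(outputURL: destination, fileType: .mp4)
    let writerInput = AVAssetWriterInput(mediaType: .audio, outputSettings: [
        AVFormatIDKey: kAudioFormatAPAC,   // assumed constant from the iOS 26 SDK
        AVSampleRateKey: 48_000,
        AVNumberOfChannelsKey: 16,
        AVChannelLayoutKey: layoutData
    ])
    writer.add(writerInput)

    guard reader.startReading() else { throw reader.error ?? CocoaError(.fileReadUnknown) }
    guard writer.startWriting() else { throw writer.error ?? CocoaError(.fileWriteUnknown) }
    writer.startSession(atSourceTime: .zero)

    // Pump sample buffers from the reader into the writer.
    let queue = DispatchQueue(label: "apac.encode")
    await withCheckedContinuation { (continuation: CheckedContinuation<Void, Never>) in
        writerInput.requestMediaDataWhenReady(on: queue) {
            while writerInput.isReadyForMoreMediaData {
                guard let sample = readerOutput.copyNextSampleBuffer(),
                      writerInput.append(sample) else {
                    // Source exhausted (or the writer reported an error): finish up.
                    writerInput.markAsFinished()
                    continuation.resume()
                    return
                }
            }
        }
    }
    await writer.finishWriting()
}
```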

DaVinci Resolve is currently capable of authoring ambisonic ASAF content, so I recommend investigating that tool to see if it meets your needs. Additionally, Resolve should be capable of encoding an ASAF master WAV file into APAC. The Compressor app also has this functionality.

You can use the AVPlayer API to play back ASAF content in your app. This is available on a VideoMaterial, for example.
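
For instance, here is a minimal sketch of the playback side (the URL and plane size are placeholders; for an audio-only APAC asset, AVPlayer by itself is sufficient):

```swift
import AVFoundation
import RealityKit

// Sketch: play an APAC-encoded asset with AVPlayer and surface the video on a RealityKit entity.
let player = AVPlayer(url: URL(string: "https://example.com/immersive.mov")!)  // placeholder URL

let videoMaterial = VideoMaterial(avPlayer: player)
let screen = ModelEntity(mesh: .generatePlane(width: 1.6, height: 0.9),
                         materials: [videoMaterial])

// Add `screen` to your RealityView/ARView scene, then start playback; the system renders
// the asset's spatial audio track alongside the video.
player.play()
```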

Please send any feedback about the state of APIs to us using Feedback Assistant. Thank you!

Thanks for the rapid reply!

Apologies, my question was vague in hindsight, but your answer was very helpful. After reviewing the materials you shared, plus doing some extra research, I have straightened out my understanding.

I'm new to iOS and navigating the various audio APIs. I am building an audio-only iOS app (no VR/AR/game engine) with a UI to play back head-tracked audio using third-order ambisonics (i.e. binauralising using head position with reference to the more detailed positional information in 3OA).
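
Concretely, the head-tracked part I have in mind looks something like the sketch below. It's only a sketch under my current assumptions: that AVAudioEnvironmentNode's listenerAngularOrientation is the right hook for steering the binaural render, that CMHeadphoneMotionManager is the right source of head pose on AirPods, and that the axis conventions line up (they may well need adjusting).

```swift
import AVFoundation
import CoreMotion

// Sketch: steer the spatial listener's orientation from headphone motion (AirPods head tracking).
// Assumes the ambisonic bed is rendered through an AVAudioEnvironmentNode (see the engine sketch further down).
let environment = AVAudioEnvironmentNode()
let headTracker = CMHeadphoneMotionManager()

func startHeadTracking() {
    guard headTracker.isDeviceMotionAvailable else { return }
    headTracker.startDeviceMotionUpdates(to: .main) { motion, _ in
        guard let attitude = motion?.attitude else { return }
        // Core Motion reports radians; AVAudio3DAngularOrientation expects degrees.
        // The sign/axis mapping here is a guess and may need flipping.
        environment.listenerAngularOrientation = AVAudio3DAngularOrientation(
            yaw:   Float(attitude.yaw)   * 180 / .pi,
            pitch: Float(attitude.pitch) * 180 / .pi,
            roll:  Float(attitude.roll)  * 180 / .pi
        )
    }
}
```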

I'm looking to overlay other spatialised audio features, which are a bit much for this thread as it's focused on ASAF and the codec; basically, I've been seeking to understand which APIs support ambisonics and where ASAF/APAC fit into the existing picture.

Now that I've got a grip on APAC compatibility and the conversion method (I came across this as well: https://developer.apple.com/av-foundation/Apple-Positional-Audio-Codec.pdf), I'll take this away and have a play.

I've also familiarised myself with the ambisonic implementations in AVAudioEngine, e.g. https://developer.apple.com/videos/play/wwdc2019/510/?time=363 and https://developer.apple.com/documentation/coreaudiotypes/kaudiochannellayouttag_hoa_acn_sn3d
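
For the engine side, here's roughly what I'm planning to try, again as a sketch only: it assumes the environment node will accept and binauralise a 16-channel ACN/SN3D bus (my reading of the WWDC19 session, not something I've confirmed yet), and the file URL is a placeholder.

```swift
import AVFoundation

// Sketch: play a third-order ambisonics file (16 channels, ACN ordering / SN3D normalisation)
// through AVAudioEngine, letting the environment node handle the binaural decode.
let engine = AVAudioEngine()
let playerNode = AVAudioPlayerNode()
let environmentNode = AVAudioEnvironmentNode()

func playThirdOrderAmbisonics(from fileURL: URL) throws {
    let file = try AVAudioFile(forReading: fileURL)

    // The HOA layout tags carry the channel count in their low bits: third order => (3 + 1)^2 = 16.
    let explicitHOAFormat = AVAudioFormat(
        commonFormat: .pcmFormatFloat32,
        sampleRate: file.processingFormat.sampleRate,
        interleaved: false,
        channelLayout: AVAudioChannelLayout(layoutTag: kAudioChannelLayoutTag_HOA_ACN_SN3D | 16)!)

    // Prefer the file's own layout if it carries one; otherwise assume ACN/SN3D third order.
    let busFormat = file.processingFormat.channelLayout != nil ? file.processingFormat : explicitHOAFormat

    engine.attach(playerNode)
    engine.attach(environmentNode)
    engine.connect(playerNode, to: environmentNode, format: busFormat)
    engine.connect(environmentNode, to: engine.mainMixerNode, format: nil)

    try engine.start()
    playerNode.scheduleFile(file, at: nil, completionHandler: nil)
    playerNode.play()
}
```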

I'll jump into Resolve to get some hands-on experience with ASAF. I don't suppose this is coming to a DAW near you anytime soon (e.g. Logic, Pro Tools, Cubase)?

Also - any chance ambisonic support will be added to the AmbientMixer in the PHASE engine?

Best, Plight

I'm very happy to hear this helped you out! :)

Regarding questions about future plans, I'm unable to share anything on that front here. However, I strongly recommend sending us feedback using Feedback Assistant. In your feedback, detail your use case and what you would like to see from Apple. This helps us understand how developers like yourself are using these technologies, or how you'd want to use these technologies, so that we can better support you in the future.

Feel free to start a new thread if you have any other questions on different topics.

Thank you!
