Dive into the technical aspects of audio on your device, including codecs, format support, and customization options.

Audio Documentation

changePlaybackRateCommand does not work on iOS.
Hi. I work on an audio app for iOS that successfully uses MPRemoteCommandCenter for commands like next, back, skip forward, skip backward, etc. I am now trying to implement playback rate controls (so that users can change the playback speed of audio to 0.5x or 2x, for example). While the commands above work, changePlaybackRateCommand does not seem to. I have enabled the command, given it a target/handler, and set supported rates. With the other commands, this caused the UI on the Lock Screen, in Control Center, etc., to change by adding the control for the command (a next button for the next command, for example). However, it does not seem to do anything for the playback rate command. I can implement my own "rate button" UI and rate-change handling, but I'm wondering whether this is a known bug on Apple's side. Looking online, it seems other people face the same issue and haven't been able to get this command to work. Why is this API provided if it doesn't seem to do anything? Is there something I'm missing? Kind regards.
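For reference, a minimal sketch of the setup described above, assuming an AVPlayer drives playback; the function name and the list of supported rates are illustrative, not taken from the post.

import AVFoundation
import MediaPlayer

// Wires up changePlaybackRateCommand; `configurePlaybackRateCommand` and the rate
// list are illustrative placeholders.
func configurePlaybackRateCommand(for player: AVPlayer) {
    let command = MPRemoteCommandCenter.shared().changePlaybackRateCommand
    command.isEnabled = true
    command.supportedPlaybackRates = [0.5, 1.0, 1.5, 2.0]   // NSNumber values
    _ = command.addTarget { event in
        guard let event = event as? MPChangePlaybackRateCommandEvent else { return .commandFailed }
        player.rate = event.playbackRate
        // Keep Now Playing info in sync so the system UI reflects the new rate.
        var info = MPNowPlayingInfoCenter.default().nowPlayingInfo ?? [:]
        info[MPNowPlayingInfoPropertyPlaybackRate] = event.playbackRate
        MPNowPlayingInfoCenter.default().nowPlayingInfo = info
        return .success
    }
}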
Replies: 0 · Boosts: 0 · Views: 259 · Jan ’25

AVAssetWriterInput appendSampleBuffer failed with error -12780
I tried adding watermarks to recorded video. Appending sample buffers using AVAssetWriterInput's append method fails, and when I inspect the AVAssetWriter's error property I get the following: Error Domain=AVFoundationErrorDomain Code=-11800 "This operation cannot be completed" UserInfo={NSLocalizedFailureReason=An unknown error occurred (-12780), NSLocalizedDescription=This operation cannot be completed, NSUnderlyingError=0x302399a70 {Error Domain=NSOSStatusErrorDomain Code=-12780 "(null)"}}. As far as I can tell, -11800 indicates AVErrorUnknown; however, I have not been able to find information about the -12780 error code, which as far as I can tell is undocumented. Thanks! Here is the code
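Since the attached code did not come through, here is a generic sketch (not the poster's code) of the append pattern where this error typically surfaces; the function name is illustrative.

import AVFoundation
import CoreMedia

// Generic illustration of appending sample buffers to an AVAssetWriterInput.
// Checking the writer state and input readiness before each append, and reading
// writer.error on failure, is where codes like -11800 / -12780 usually show up.
func append(_ sampleBuffer: CMSampleBuffer,
            to input: AVAssetWriterInput,
            of writer: AVAssetWriter) -> Bool {
    guard writer.status == .writing, input.isReadyForMoreMediaData else {
        return false
    }
    let succeeded = input.append(sampleBuffer)
    if !succeeded {
        print("append failed:", writer.error ?? "unknown error")
    }
    return succeeded
}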
Replies: 3 · Boosts: 0 · Views: 460 · Dec ’24

CarPlay CPNowPlayingTemplate shows wrong command buttons
I have a CarPlay implementation and I want to show previous/next track buttons on the player UI:
MPRemoteCommandCenter.shared().seekForwardCommand.isEnabled = false
MPRemoteCommandCenter.shared().seekBackwardCommand.isEnabled = false
MPRemoteCommandCenter.shared().previousTrackCommand.isEnabled = true
MPRemoteCommandCenter.shared().nextTrackCommand.isEnabled = true
It works correctly on the CarPlay simulator, but in some cars only the seek buttons are shown. I suspect this is a problem on the car side, but I would like your opinion; maybe there is a piece I'm missing.
Replies: 3 · Boosts: 0 · Views: 346 · Jan ’25

Ambisonic B-Format Playback Issues on Vision Pro
I'm trying to implement Ambisonic B-Format audio playback on Vision Pro with head tracking. So far audio plays, head tracking works, and the sound appears to be stereo. The problem is that it is not proper binaural playback when compared to playing the audio file back in a DAW. Has anyone successfully implemented B-Format playback on Vision Pro? Any suggestions on my current implementation?

func playAmbiAudioForum() async {
    do {
        try AVAudioSession.sharedInstance().setCategory(.playback)
        try AVAudioSession.sharedInstance().setActive(true)

        // Audio file loading/preparation
        guard let testFileURL = Bundle.main.url(forResource: "audiofile", withExtension: "wav") else {
            print("Test file not found")
            return
        }
        let audioFile = try AVAudioFile(forReading: testFileURL)
        let audioFileFormat = audioFile.fileFormat

        // Create an AVAudioFormat with the Ambisonic B-Format channel layout
        guard let layout = AVAudioChannelLayout(layoutTag: kAudioChannelLayoutTag_Ambisonic_B_Format) else {
            print("layout failed")
            return
        }
        let format = AVAudioFormat(commonFormat: audioFile.processingFormat.commonFormat,
                                   sampleRate: audioFile.fileFormat.sampleRate,
                                   interleaved: false,
                                   channelLayout: layout)

        // Read the audio file into a buffer
        guard let buffer = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: UInt32(audioFile.length)) else {
            print("buffer failed")
            return
        }
        try audioFile.read(into: buffer)

        playerNode.renderingAlgorithm = .HRTF

        // Connect nodes
        audioEngine.attach(playerNode)
        audioEngine.connect(playerNode, to: audioEngine.outputNode, format: format)
        audioEngine.prepare()

        playerNode.scheduleBuffer(buffer, at: nil) {
            print("File finished playing")
        }

        try audioEngine.start()
        playerNode.play()
    } catch {
        print("Setup error:", error)
    }
}
Replies: 0 · Boosts: 0 · Views: 300 · Jan ’25

GarageBand displays error 100001 when loading some AU plugins
I recently got some plugins from Universal Audio and have licensed them properly through both UA and iLok manager. Whenever I try to load the plugins (specifically from UA) in GarageBand, it first shows an error saying "NSCreateObjectFileImageFromMemory-p47UEwps" cannot be opened because the developer cannot be verified. After clicking either "Show in Finder" or "OK", it opens the plugin in a form without its GUI, showing that it is not licensed (even though it is), and it displays error code 100001. I have tried some basic troubleshooting like restarting the DAW and my computer and reinstalling/relicensing the software. I don't know if the macOS version has anything to do with it, but for some reason I just can't get it to work.
Replies: 1 · Boosts: 0 · Views: 250 · Jan ’25

Logic Pro loads AUv3 when compiled in Swift 5 but not Swift 6
I have spent a long time refactoring lots of older Swift code to compile without error in Swift 6. The app is a v3 audio unit host and audio unit. Having installed Sonoma and Xcode 16, I compile the code using Swift 6 and it compiles and runs without any warnings or errors. My host will load my AU, no problem. Logic Pro is still the only audio unit host that will load native Mac v3 audio units, so I like to test my code using Logic. In Sonoma with Xcode 16, my AU passes the most stringent auval tests, both in Terminal and in Logic Pro. If I compile the AU source in Swift 5, Logic will see the AU, load it, and run it without problems. But when I compile the AU in Swift 6, Logic sees the AU, will scan it and verify that it passes the tests, but will not load the AU. In Xcode I see a log message that a "helper application failed to run", but the debugger never connects to the AU and I don't think Logic even gets as far as instantiating the AU. So what is causing this? I'm stumped. Developing AUv3 is a brain-aching maze of undocumented hurdles, and I'm hoping someone might have found a solution for this one. Meanwhile, I guess my only option is to continue using the Swift 5 compiler. (A little note just to mention that all the DSP code is written in C/C++; Swift is used mainly for the user interface and also does some offline threaded work.)
Replies: 1 · Boosts: 0 · Views: 336 · Jan ’25

Frequent recent AUv3 "Failed to find component with type..." issues
I've been generating new Audio Unit Extension apps with Xcode 16 (and newer), and although they generally work initially, it is easy (although I'm not sure how to do it reliably) to get the app into a state where it can no longer instantiate the audio unit. Generally the call to AVAudioUnit.findComponent fails and SimplePlayEngine hits the fatalError("Failed to find component with type..."). In the most recent project, merely adding files to the extension (without making any use of them) caused it to go off the rails. If I "Archive" the app + plugin, there is no audio unit extension in the bundle. If I switch to the audio unit extension and build it, it's fine. If I look at the build folder in Library/Developer/Xcode/project_folder, the extension_name.appex is there. Any ideas? If I can coax an unmodified audio unit extension project into exhibiting this behavior, I'll attach it here. Right now what I have contains code I don't want to share.
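For context, a minimal sketch of the kind of component lookup the template performs; the four-character type/subtype/manufacturer codes below are placeholders (they must match the extension's Info.plist), not values from the post.

import AVFoundation
import AudioToolbox

// Looks up a registered Audio Unit extension by its component description.
// An empty result usually means the .appex was not embedded/registered.
func findComponent() {
    let description = AudioComponentDescription(componentType: kAudioUnitType_Effect,
                                                componentSubType: 0x64656d6f,      // 'demo' (placeholder)
                                                componentManufacturer: 0x586d706c, // 'Xmpl' (placeholder)
                                                componentFlags: 0,
                                                componentFlagsMask: 0)
    let matches = AVAudioUnitComponentManager.shared().components(matching: description)
    if matches.isEmpty {
        print("Component not registered - check that the .appex is actually embedded in the built app.")
    } else {
        print("Found:", matches.map { $0.name })
    }
}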
Replies: 4 · Boosts: 1 · Views: 537 · Dec ’24

Can Logic Pro load an Audio Unit v3 in-process?
After investing more than a week into getting a bunch of audio unit projects converted into app + appex + framework, they all are now correctly loaded in-process in the demo host app that is part of Xcode's template. However, Logic Pro adamantly refuses to load them in-process. Does Logic Pro simply not do that ever, or is there some hint or configuration my plugins need to provide to enable that? If it is unsupported, will it be supported in some future version of Logic? The entire point of investing that week was performance, which is moot if it is impossible to test the impact of loading in-process in a real-world usage scenario.
Replies: 1 · Boosts: 0 · Views: 532 · Oct ’24

AVSpeechUtterance Mandarin voice output replaced by Siri language setting after upgrading to iOS 18
Hi, Apple engineers. Hoping you can reply to this one. We're developing a text-to-speech app. Everything went well until iOS was upgraded to 18. AVSpeechSynthesisVoice(language: "zh-CN") works well under iOS 16 and iOS 17; it speaks Mandarin correctly. In iOS 18, we noticed that Siri's language setting interferes with AVSpeechSynthesisVoice: it plays Cantonese instead of Mandarin. The Siri language settings that affect AVSpeechSynthesisVoice are Chinese (Cantonese - China mainland) and Chinese (Cantonese - Hong Kong).
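For reference, a minimal sketch of the call pattern described; the sample string and function name are illustrative.

import AVFoundation

// Requests a Mandarin voice explicitly; the post reports that on iOS 18 the output
// follows the Siri Cantonese setting instead.
let synthesizer = AVSpeechSynthesizer()

func speakMandarin(_ text: String) {
    let utterance = AVSpeechUtterance(string: text)
    utterance.voice = AVSpeechSynthesisVoice(language: "zh-CN")
    synthesizer.speak(utterance)
}

// Example: speakMandarin("你好，世界")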
Replies: 3 · Boosts: 3 · Views: 616 · Nov ’24

Help Needed: How to Make iOS Timer More Stable?
I’m currently developing an iOS metronome app using DispatchSourceTimer as the timer. The interval is set very small, around 50 milliseconds, and I’m using CFAbsoluteTimeGetCurrent to calculate the elapsed time to ensure the beat is played within a ±0.003-second margin. The problem is that once the app goes to the background, the timing becomes unstable—it slows down noticeably, then recovers after 1–2 seconds. When coming back to the foreground, it suddenly speeds up, and again, it takes 1–2 seconds to return to normal. It feels like the app is randomly “powering off” and then “overclocking.” It’s super frustrating. I’ve noticed that some metronome apps in the App Store have similar issues, but there’s one called “Professional Metronome” that’s rock solid with no such problems. What kind of magic are they using? Any experts out there who can help? Thanks in advance! P.S. I’ve already enabled background audio permissions. The professional metronome that has no issues: https://link.zhihu.com/?target=https%3A//apps.apple.com/cn/app/pro-metronome-%25E4%25B8%2593%25E4%25B8%259A%25E8%258A%2582%25E6%258B%258D%25E5%2599%25A8/id477960671
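For context, a minimal sketch of the timing approach described in the post; the class name, interval, and leeway are illustrative.

import Foundation

// A DispatchSourceTimer firing roughly every 50 ms, with CFAbsoluteTimeGetCurrent
// used to measure the actual elapsed time between ticks.
final class MetronomeTicker {
    private let queue = DispatchQueue(label: "metronome.timer", qos: .userInteractive)
    private var timer: DispatchSourceTimer?
    private var lastTick = CFAbsoluteTimeGetCurrent()

    func start(interval: TimeInterval = 0.05, onTick: @escaping (CFTimeInterval) -> Void) {
        let source = DispatchSource.makeTimerSource(flags: .strict, queue: queue)
        source.schedule(deadline: .now() + interval, repeating: interval, leeway: .milliseconds(1))
        source.setEventHandler { [weak self] in
            guard let self else { return }
            let now = CFAbsoluteTimeGetCurrent()
            onTick(now - self.lastTick)   // caller compares this against the ±0.003 s margin
            self.lastTick = now
        }
        source.resume()
        timer = source
    }

    func stop() {
        timer?.cancel()
        timer = nil
    }
}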
Replies: 2 · Boosts: 0 · Views: 325 · Jan ’25

Audio Volume increases by itself
I am using an AVAudioPlayer to play a "tick" sound once per second in a SwiftUI app. When running the app on an iPhone 16 (iOS 18.2.1), the tick sounds increase in volume after a few seconds. This does not happen in the simulator, nor on an iPhone SE 2020 (iOS 18.1.1).
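For reference, a minimal sketch of the pattern described; the file name, class name, and timer are illustrative. The report is that the perceived loudness drifts even though the player's volume is never changed after setup.

import AVFoundation

// Plays a short "tick" once per second with a fixed AVAudioPlayer volume.
final class Ticker {
    private var player: AVAudioPlayer?
    private var timer: Timer?

    func start() {
        guard let url = Bundle.main.url(forResource: "tick", withExtension: "wav") else { return }
        player = try? AVAudioPlayer(contentsOf: url)
        player?.volume = 1.0   // never changed after setup
        player?.prepareToPlay()
        timer = Timer.scheduledTimer(withTimeInterval: 1.0, repeats: true) { [weak self] _ in
            self?.player?.currentTime = 0
            self?.player?.play()
        }
    }

    func stop() {
        timer?.invalidate()
        player?.stop()
    }
}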
Replies: 2 · Boosts: 0 · Views: 218 · Jan ’25

[VisionOS Audio] AVAudioPlayerNode occasionally produces loud popping/distortion when playing PCM data
I'm experiencing audio issues while developing for visionOS when playing PCM data through AVAudioPlayerNode.
Issue description: occasionally the speaker produces loud popping sounds or distorted noise. This occurs during PCM audio playback using AVAudioPlayerNode. The issue is intermittent and doesn't happen every time.
Technical details: platform visionOS; device Vision Pro / simulator; audio framework AVFoundation; audio node AVAudioPlayerNode; audio format PCM.
I would appreciate any insights on common causes of audio distortion with AVAudioPlayerNode, recommended best practices for handling PCM playback in visionOS, and potential configuration issues that might cause this behavior. Has anyone encountered similar issues or found solutions? Any guidance would be greatly helpful. Thank you in advance!
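For context, a minimal sketch of the playback path described; the class name, sample rate, and channel count are assumptions. A common source of clicks and pops in this kind of setup is a format mismatch between the scheduled buffers and the node connection, or gaps between consecutive buffers.

import AVFoundation

// Player node feeding the main mixer; buffers are scheduled back-to-back in the
// same format as the connection.
final class PCMPlayer {
    private let engine = AVAudioEngine()
    private let playerNode = AVAudioPlayerNode()
    // Assumed output format; buffers passed to enqueue(_:) must match it.
    private let format = AVAudioFormat(standardFormatWithSampleRate: 48_000, channels: 2)!

    func start() throws {
        engine.attach(playerNode)
        engine.connect(playerNode, to: engine.mainMixerNode, format: format)
        try engine.start()
        playerNode.play()
    }

    func enqueue(_ buffer: AVAudioPCMBuffer) {
        // Gaps between scheduled buffers or a format mismatch here often produce pops.
        playerNode.scheduleBuffer(buffer, completionHandler: nil)
    }
}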
Replies: 2 · Boosts: 1 · Views: 482 · Dec ’24

How to find `AudioHardwareControl` direction?
I'm working with the modern Core Audio API introduced in macOS Sequoia. I have an AudioHardwareDevice which has several controls of type AudioHardwareControl. I figured out that to filter only the volume controls I can use the classID == kAudioVolumeControlClassID condition. Some devices have volume controls for both input and output. How can I determine the direction of a control? Streams, i.e. AudioHardwareStream objects, have a direction, but I didn't find a way to map controls to streams. There are kAudioObjectPropertyScopeInput and kAudioObjectPropertyScopeOutput property scopes, but no matter what I tried, controls always return false for any control.hasProperty(address: whatever) call. Any other ideas?
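One possible angle, sketched with the classic C-level property API rather than the new Swift wrappers, so treat it as an assumption to verify: AudioControl objects have traditionally exposed kAudioControlPropertyScope, which reports whether the control belongs to the input or output scope.

import CoreAudio

// Reads the scope of an AudioControl object via the C property API.
// kAudioControlPropertyScope is the classic selector for this; verify it against
// what the new AudioHardwareControl wrapper exposes on your system.
func scopeOfControl(_ controlID: AudioObjectID) -> AudioObjectPropertyScope? {
    var address = AudioObjectPropertyAddress(mSelector: kAudioControlPropertyScope,
                                             mScope: kAudioObjectPropertyScopeGlobal,
                                             mElement: kAudioObjectPropertyElementMain)
    var scope: AudioObjectPropertyScope = 0
    var size = UInt32(MemoryLayout<AudioObjectPropertyScope>.size)
    let status = AudioObjectGetPropertyData(controlID, &address, 0, nil, &size, &scope)
    guard status == noErr else { return nil }
    return scope
}

// A control whose scope equals kAudioObjectPropertyScopeInput would be an input-side control.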
Replies: 1 · Boosts: 0 · Views: 413 · Dec ’24

Inquiry about Potential Core Audio Improvements
Hi everyone, I wanted to bring up a question about Core Audio and its potential for future updates or improvements, specifically regarding latency optimization. As someone who relies on Core Audio for real-time audio processing, any enhancements in this area would be incredibly beneficial for professionals in the industry. Does anyone know if Apple has shared any plans or updates regarding Core Audio’s performance, particularly for low-latency applications? I’d appreciate any insights or advice from the community! Thanks so much! Best, Michael
Replies: 1 · Boosts: 0 · Views: 389 · Dec ’24

AVAudioEngine Voice Processing Fails with Mismatched Input/Output Devices: AggregateDevice Channel Count Mismatch
I'm encountering errors while using AVAudioEngine with voice processing enabled (setVoiceProcessingEnabled(true)) in scenarios where the input and output audio devices are not the same. This issue arises specifically with mismatched devices, preventing the application from functioning as expected.
Works: paired devices (e.g., MacBook Pro mic → MacBook Pro speakers). When using paired input and output devices, the setup works as expected.
Fails: mismatched devices (e.g., AirPods mic → MacBook Pro speakers). When using mismatched devices, AVAudioEngine setup fails during aggregate device construction, and error logs indicate a channel count mismatch.
Here are the partial logs. Due to the content limit, I cannot post the entire logs.
AUVPAggregate.cpp:1000 client-side input and output formats do not match (err=-10875)
AUVPAggregate.cpp:1036 err=-10875
AVAEInternal.h:109 [AVAudioEngineGraph.mm:1344:Initialize: (err = PerformCommand(*outputNode, kAUInitialize, NULL, 0)): error -10875
AggregateDevice.mm:329 Failed expectation of constructed aggregate (312): mInput.streamChannelCounts == inputStreamChannelCounts
AggregateDevice.mm:331 Failed expectation of constructed aggregate (312): mInput.totalChannelCount == std::accumulate(inputStreamChannelCounts.begin(), inputStreamChannelCounts.end(), 0U)
AggregateDevice.mm:182 error fetching default pair
AggregateDevice.mm:329 Failed expectation of constructed aggregate (336): mInput.streamChannelCounts == inputStreamChannelCounts
AggregateDevice.mm:331 Failed expectation of constructed aggregate (336): mInput.totalChannelCount == std::accumulate(inputStreamChannelCounts.begin(), inputStreamChannelCounts.end(), 0U)
AUHAL.cpp:1782 ca_verify_noerr: [AudioDeviceSetProperty(mDeviceID, NULL, 0, isInput, kAudioDevicePropertyIOProcStreamUsage, theSize, theStreamUsage), 560227702]
AudioHardware-mac-imp.cpp:3484 AudioDeviceSetProperty: no device with given ID
AUHAL.cpp:1782 ca_verify_noerr: [AudioDeviceSetProperty(mDeviceID, NULL, 0, isInput, kAudioDevicePropertyIOProcStreamUsage, theSize, theStreamUsage), 560227702]
AggregateDevice.mm:182 error fetching default pair
AggregateDevice.mm:329 Failed expectation of constructed aggregate (348): mInput.streamChannelCounts == inputStreamChannelCounts
AggregateDevice.mm:331 Failed expectation of constructed aggregate (348): mInput.totalChannelCount == std::accumulate(inputStreamChannelCounts.begin(), inputStreamChannelCounts.end(), 0U)
Questions: Is it possible to use voice processing with different input/output devices? If yes, are there any specific configurations required to handle mismatched devices? How can we resolve channel count mismatch errors during aggregate device construction? Are there settings or API adjustments to enforce compatibility between input/output devices? Are there any workarounds or alternative approaches to achieve voice processing functionality with mismatched devices? For instance, can we force an intermediate channel configuration or downmix input/output formats?
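For reference, a minimal sketch of enabling voice processing on the engine as described; the function name and tap parameters are illustrative, and device selection is left to the system defaults here.

import AVFoundation

// Voice processing is enabled on the input node before the engine starts; under the
// hood this ties input and output together, which is where the aggregate-device
// channel-count expectations seen in the logs are checked.
func makeVoiceProcessingEngine() throws -> AVAudioEngine {
    let engine = AVAudioEngine()
    try engine.inputNode.setVoiceProcessingEnabled(true)
    let format = engine.inputNode.outputFormat(forBus: 0)
    engine.inputNode.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, _ in
        // Process echo-cancelled microphone audio here.
    }
    try engine.start()
    return engine
}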
Replies: 2 · Boosts: 1 · Views: 418 · Jan ’25

How to set volume with MusicKit Web?
I've got a web app built with MusicKit that displays a list of songs. I have player controls for play, pause, skip next, skip previous, toggle shuffle, and set repeat mode. All of these work through the music instance. The play button, when nothing is playing and nothing is in the queue, will enqueue all the tracks and start playing, for example: await music.setQueue({ songs, startPlaying: true }); I've implemented a progress slider based on feedback from the "playbackProgressDidChange" listener. Now, how in the world can I set the volume? This seems like it should be simple, but I am at a complete loss here. The docs say: "The volume of audio playback, which is set directly on the HTMLMediaElement as the HTMLMediaElement.volume property. This value ranges between 0, which would be muting the audio, and 1, which would be the loudest possible." Given that all my controls work off the music instance, I don't understand how I can do that. In this video from WWDC 2022, music web components are touched on briefly. These are also documented very sparsely. The volume docs are here. For the life of me, I can't even get the volume web component to display in the UI. It appears that MusicKit Web is hobbled compared to the native implementation, but surely adjusting the volume shouldn't be that hard, right? I'd appreciate any insight on how to do this, including how to get the web components to work (in a Next.js app). Thanks.
Replies: 2 · Boosts: 0 · Views: 400 · Jan ’25

Rear View Camera Installed – Now CarPlay Audio Stops After 15 Seconds via Bluetooth
I recently installed a rear-view camera in my car, and ever since, I've been experiencing a frustrating issue with my CarPlay. After about 15 seconds of playing audio via Bluetooth, the sound stops coming out of the speakers, even though the song continues to run in the background. For context, my stereo system is an aftermarket unit that I installed to enable CarPlay functionality. Everything worked perfectly before adding the rear-view camera. Unfortunately, my unit does not have a port for a wired connection, so I can't test the audio using a cable. Has anyone experienced a similar issue? Could the camera installation be interfering with the Bluetooth or audio system somehow? Any advice or troubleshooting tips would be greatly appreciated!
Replies: 1 · Boosts: 0 · Views: 263 · Jan ’25

audioEngine.start() fails when the app is in the background
private var audioEngine = AVAudioEngine()
private var inputNode: AVAudioInputNode!

func startAnalyzing() {
    inputNode = audioEngine.inputNode
    let recordingFormat = inputNode.outputFormat(forBus: 0)
    let hardwareSampleRate = recordingSession.sampleRate
    inputNode.removeTap(onBus: 0)

    if recordingFormat.sampleRate != hardwareSampleRate {
        print("Sample rate mismatch; using the hardware sample rate for the tap.")
        let newFormat = AVAudioFormat(commonFormat: recordingFormat.commonFormat,
                                      sampleRate: hardwareSampleRate,
                                      channels: recordingFormat.channelCount,
                                      interleaved: recordingFormat.isInterleaved)
        inputNode.installTap(onBus: 0, bufferSize: 1024, format: newFormat) { buffer, time in
            self.processAudioBuffer(buffer, time: time)
        }
    } else {
        inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { buffer, time in
            self.processAudioBuffer(buffer, time: time)
        }
    }

    do {
        audioEngine.prepare()
        try audioEngine.start()
    } catch {
        print("Audio engine couldn't start: \(error)")
    }
}

When I send the app to the background and then call startAnalyzing(), it reports the error below, even though background recording permissions are configured.
[10429:570139] [aurioc] AURemoteIO.cpp:1668 AUIOClient_StartIO failed (561145187)
[10429:570139] [avae] AVAEInternal.h:109 [AVAudioEngineGraph.mm:1545:Start: (err = PerformCommand(*ioNode, kAUStartIO, NULL, 0)): error 561145187
Audio engine couldn't start.
Is starting the audio engine from the background not allowed?
Replies: 1 · Boosts: 0 · Views: 360 · Dec ’24