Dive into the technical aspects of audio on your device, including codecs, format support, and customization options.


Posts under the Audio subtopic (each entry lists Replies · Boosts · Views · Created):

Question about Apple Vision Pro audio input sampling rate for research
I am a graduate student conducting research in speech/audio signal processing and multimodal interaction. Apple Vision Pro is widely recognized as a multimodal interactive system supporting voice, eye, and gesture inputs. However, I could not find detailed specifications or documentation about the audio input sampling rate used by the device’s built-in microphone array when capturing user audio. Specifically, I would like to understand:
What is the default audio input sampling rate (e.g., 16 kHz, 44.1 kHz, 48 kHz, etc.) for the Vision Pro’s microphones?
When developing with visionOS / AVAudioSession / AVAudioEngine, is there a documented or recommended sampling rate for audio capture?
Are there any best practices or settings for enabling high-quality voice capture on Vision Pro (especially for voice research tasks)?
For context, my work involves voice processing, analysis, and possibly on-device real-time speech recognition. Any pointers to relevant APIs, documentation, or examples (especially regarding audio capture buffer size or available formats on visionOS) would be very helpful. A runtime format-inspection sketch follows this entry. Thank you in advance! Best regards.
Replies: 0 · Boosts: 0 · Views: 99 · Created: 17h
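A minimal Swift sketch for the question above, assuming the standard AVFoundation capture path on visionOS: rather than relying on a documented figure, it queries the hardware sample rate and input-node format at runtime. Function and variable names are illustrative, not from the original post.

```swift
import AVFoundation

// Minimal sketch: inspect the current input hardware format instead of
// assuming a documented rate. The values reported are device/route dependent.
func inspectCaptureFormat() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord, mode: .measurement, options: [])
    // Ask for a rate; the system may or may not honor it.
    try session.setPreferredSampleRate(48_000)
    try session.setActive(true)

    print("Hardware sample rate: \(session.sampleRate) Hz")
    print("Hardware IO buffer duration: \(session.ioBufferDuration) s")

    let engine = AVAudioEngine()
    let format = engine.inputNode.inputFormat(forBus: 0)
    print("Input node format: \(format.sampleRate) Hz, \(format.channelCount) ch")

    // Tap in the node's native format; convert afterwards if e.g. 16 kHz is needed.
    engine.inputNode.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, _ in
        // buffer.frameLength frames of PCM audio arrive per callback
    }
    try engine.start()
}
```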
AVAudioSession setActive(true) fails after phone call when app is in background
I’m seeing what appears to be an iOS audio-session issue that occurs only when a phone call happens while the app is in the background.
API: AVAudioSession, AVAudioRecorder
Background Modes: Audio enabled (UIBackgroundModes = audio)
Category: .playAndRecord
Microphone permission: granted
Expected behavior: if the app is recording audio in the background and a phone call interrupts it,
AVAudioSession.interruptionNotification (.began) fires,
the call ends,
AVAudioSession.interruptionNotification (.ended) fires, and
the app should be able to reactivate its audio session and resume or restart recording.
Apple documentation suggests this should be supported for background audio apps.
Actual behavior: when the app is in the background and the phone call ends,
AVAudioSession.interruptionNotification (.ended) does fire, but
attempting to reactivate the audio session always fails:
Error Domain=NSOSStatusErrorDomain Code=560557684 ("!int") "Session activation failed"
The session appears to remain permanently “interrupted”.
Retrying activation (with delays) does not help.
Recreating the AVAudioRecorder does not help.
Reactivation works only after the app is opened again. The interruption-handling pattern I am comparing against is sketched after this entry.
Replies: 0 · Boosts: 0 · Views: 95 · Created: 18h
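A sketch of the documented interruption-handling pattern the post above is comparing against; class and method names are illustrative. It does not by itself resolve the background "!int" failure, but it is the baseline to test.

```swift
import AVFoundation

// Reactivate the session only when the system reports .ended and offers
// .shouldResume; this is the standard pattern for resuming after a call.
final class RecorderInterruptionObserver {
    init() {
        NotificationCenter.default.addObserver(
            self, selector: #selector(handleInterruption(_:)),
            name: AVAudioSession.interruptionNotification, object: nil)
    }

    @objc private func handleInterruption(_ note: Notification) {
        guard let info = note.userInfo,
              let rawType = info[AVAudioSessionInterruptionTypeKey] as? UInt,
              let type = AVAudioSession.InterruptionType(rawValue: rawType) else { return }

        switch type {
        case .began:
            // Recording was stopped by the system; just update state here.
            break
        case .ended:
            let rawOptions = info[AVAudioSessionInterruptionOptionKey] as? UInt ?? 0
            let options = AVAudioSession.InterruptionOptions(rawValue: rawOptions)
            guard options.contains(.shouldResume) else { return }
            do {
                try AVAudioSession.sharedInstance().setActive(true)
                // Restart or resume the AVAudioRecorder here.
            } catch {
                print("Reactivation failed: \(error)") // the error reported in the post
            }
        @unknown default:
            break
        }
    }
}
```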
AVAudioSession.outputVolume does not reflect system volume changes made while app is in background
I have a question regarding the behavior of AVAudioSession.sharedInstance().outputVolume.
Observed behavior:
When the app is in the foreground, I read audioSession.outputVolume (for example, 0.1).
The app is then moved to the background.
While the app is in the background, the user changes the system volume using the hardware buttons (for example, to 0.5).
When the app returns to the foreground, audioSession.outputVolume still reports the previous value (0.1).
From my testing, outputVolume only seems to update when the system volume is changed while the app is in the foreground. Volume changes made while the app is in the background are not reflected when the app returns to the foreground.
Apple’s documentation for AVAudioSession.outputVolume describes it as “The systemwide output volume set by the user.” (https://developer.apple.com/documentation/avfaudio/avaudiosession/outputvolume) However, based on our testing on iOS 18.6.2 and iOS 18.1, the observed behavior seems to differ from this description.
Questions:
Is this the expected behavior of AVAudioSession.outputVolume, given that the value does not reflect volume changes made while the app is in the background and only updates while the app is in the foreground?
Is there any other recommended way in Swift to retrieve the current system volume that reflects user changes made both while the app is in the foreground and while it is in the background?
Any clarification on the intended behavior or recommended handling would be greatly appreciated. A small observation sketch follows this entry.
Replies: 0 · Boosts: 0 · Views: 92 · Created: 1d
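A hedged sketch of the two readings being compared in the post above: KVO on outputVolume plus a fresh read when the app returns to the foreground. Whether the foreground read reflects background changes is exactly the open question; names are illustrative.

```swift
import AVFoundation
import UIKit

// Observe outputVolume while active, and re-read it when the app becomes
// active again, so both values can be compared against the hardware setting.
final class VolumeWatcher {
    private var observation: NSKeyValueObservation?

    init() {
        let session = AVAudioSession.sharedInstance()
        observation = session.observe(\.outputVolume, options: [.new]) { _, change in
            print("outputVolume changed to \(change.newValue ?? 0)")
        }
        NotificationCenter.default.addObserver(
            forName: UIApplication.didBecomeActiveNotification,
            object: nil, queue: .main) { _ in
            // The post reports this read still returns the stale value.
            print("On foreground: \(AVAudioSession.sharedInstance().outputVolume)")
        }
    }
}
```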
coreaudio-api mailing list search broken
Hello, The search functionality of the coreaudio-api mailing list archive has been broken for a very long time. Several of the lower-level audio APIs have only been discussed on this mailing list, making it critical for those of us maintaining old audio code. Steps to reproduce: Open https://lists.apple.com/archives/list/coreaudio-api@lists.apple.com/ in your web browser. Enter a search term in the "Search this list" field in the top-right corner of the page. The search will eventually time out with "502 Bad Gateway" Can somebody please forward this information to the current maintainer? I've tried to contact developer support but they weren't sure what to do. Thanks!
Replies: 0 · Boosts: 0 · Views: 101 · Created: 1d
Hybrid Wired-to-Wireless Audio Mode Using AirPods Charging Case
Many Apple users own both Bluetooth earphones (AirPods) and traditional wired earphones. While Bluetooth audio provides freedom of movement, some users still prefer wired earphones for comfort, sound profile, or personal preference. However, plugging wired earphones directly into an iPhone can feel restrictive and inconvenient during daily use. This proposal suggests a hybrid audio approach where wired earphones can be connected to a Bluetooth-enabled AirPods charging case (or a similar Apple-designed module), allowing users to enjoy wired earphones without a physical connection to the iPhone.
#Problem Statement
*Wired earphones offer consistent audio quality and minimal latency
*Bluetooth earphones provide freedom from cables
*Users must currently choose one or the other
*Plugging wired earphones into an iPhone limits movement and can feel intrusive in daily scenarios (walking, commuting, working)
There is no native Apple solution that allows wired earphones to function wirelessly while maintaining Apple’s audio experience standards.
#Proposed Solution
Introduce a Wired-to-Wireless Audio Mode through the AirPods charging case or a dedicated Apple Bluetooth audio bridge. How it works:
*The user plugs wired earphones into the AirPods case (or a future AirPods accessory port)
*The case acts as a Bluetooth audio transmitter
*Audio is streamed wirelessly from the iPhone to the case
*The case outputs audio to the wired earphones
#User Experience
*No cable connected to the iPhone
*Familiar wired earphone sound
*Freedom of movement similar to Bluetooth earbuds
UX flow:
*Plug wired earphones into the AirPods case
*iPhone automatically detects: “Wired Earphones via AirPods Case”
*Seamless pairing using the existing AirPods framework
*Audio controls, volume, and switching handled through iOS
*No additional apps required
#Key Benefits
*Combines wired sound reliability with wireless convenience
*Reduces physical cable disturbance during use
*Extends the usefulness of existing wired earphones
*Minimal learning curve for users
*Fits naturally into Apple’s ecosystem and design philosophy
#Privacy & Performance Considerations
*On-device audio processing only
*No cloud involvement
*Low-latency audio using Apple’s Bluetooth codecs
*Power-efficient usage leveraging the AirPods case battery
#Target Users
*Users who prefer wired earphones but want wireless freedom
*Commuters and walkers
*Developers and professionals who multitask
*Users sensitive to Bluetooth earbud fit or comfort
#Ecosystem Fit
*Builds on the existing AirPods pairing and audio stack
*Aligns with Apple’s focus on seamless UX
*Could be implemented via new AirPods hardware, a firmware update plus an accessory, or a dedicated Apple audio bridge
Replies: 1 · Boosts: 0 · Views: 245 · Created: 4d
AVSpeechSynthesizer system voices (SLA clarification)
Hello, I am building an iOS-only, commercial app that uses AVSpeechSynthesizer with system voices, strictly using the APIs provided by Apple. Before distributing the app, I want to ensure that my current implementation does not conflict with the iOS Software License Agreement (SLA) and is aligned with Apple’s intended usage.
For a better playback experience (more accurate estimation of utterance duration and smoother skip forward/backward during playback), I currently synthesize speech by calling AVSpeechSynthesizer.write(_:toBufferCallback:), converting the received AVAudioPCMBuffer buffers into audio data, storing the audio inside the app sandbox, and playing it back using AVAudioPlayer / AVAudioEngine. A minimal sketch of this caching path follows this entry.
The cached audio is generated fully on-device using system voices, stored only inside the app’s private container, used only for internal playback controls (timeline, seek, skip ±5 seconds), and never shared, exported, uploaded, or exposed outside the app.
The alternative approaches would be keeping the generated audio entirely in memory (RAM) for playback purposes, without writing it to the file system at any point, or using AVSpeechSynthesizer.speak(_:) and playing speech strictly in real time, which gives a poorer user experience compared to my approach.
I have reviewed the current iOS Software License Agreement (https://www.apple.com/legal/sla/docs/iOS18_iPadOS18.pdf). In particular, section (f) mentions restrictions around System Characters, Live Captions, and Personal Voice, including the following excerpt: “…use … only for your personal, non-commercial use… No other creation or use of the System Characters, Live Captions, or Personal Voice is permitted by this License, including but not limited to the use, reproduction, display, performance, recording, publishing or redistribution in a … commercial context.”
I do not see a specific reference in the SLA to system text-to-speech voices used via AVSpeechSynthesizer, and I want to be certain that temporarily caching synthesized speech for internal, non-exported playback is acceptable in a commercial app.
My question is: is caching AVSpeechSynthesizer system-voice output inside the app sandbox for internal playback acceptable, or is Apple’s recommended approach to rely only on real-time playback (speak(_:)) or strictly in-memory buffering without file storage? If this question falls outside DTS technical scope and is instead a policy or licensing matter, I would appreciate guidance on the authoritative Apple documentation or the correct Apple team/contact. Thank you.
Replies: 0 · Boosts: 1 · Views: 249 · Created: 1w
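A minimal sketch of the caching path described in the post above, assuming AVSpeechSynthesizer.write(_:toBufferCallback:) with PCM buffers appended to a file in the sandbox; it has no bearing on the licensing question itself, and the function name is illustrative.

```swift
import AVFoundation

// Synthesize with write(_:toBufferCallback:) and append each PCM buffer to a
// file in the app's container for later playback via AVAudioPlayer/AVAudioEngine.
// In real code, keep a strong reference to the synthesizer until synthesis finishes.
func cacheSpeech(_ text: String, to url: URL) {
    let synthesizer = AVSpeechSynthesizer()
    let utterance = AVSpeechUtterance(string: text)
    var file: AVAudioFile?

    synthesizer.write(utterance) { buffer in
        guard let pcm = buffer as? AVAudioPCMBuffer, pcm.frameLength > 0 else { return }
        do {
            if file == nil {
                file = try AVAudioFile(forWriting: url, settings: pcm.format.settings)
            }
            try file?.write(from: pcm)
        } catch {
            print("Failed to write synthesized buffer: \(error)")
        }
    }
}
```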
Logic Pro - discover channel upstream latency
Hello everyone, I've written an audio unit plugin that needs to be aware of any upstream latency caused by heavy plugins before it on the channel. Is there any way to query this? I know that Logic applies PDC at the channel's output (summing point), but I need to know what the accumulated latency is at the point the audio enters my plugin. Thanks!
Replies: 0 · Boosts: 0 · Views: 315 · Created: 1w
Crash when an AVAssetReader is released
Thread 5 Crashed:
0  libobjc.A.dylib   0x19af7b038 objc_msgSend + 56
1  CoreFoundation    0x19dfdb618 cow_cleanup + 135
2  CoreFoundation    0x19dfdb6fc -[__NSDictionaryM dealloc] + 147
3  MediaToolbox      0x1b167636c FigRemotePropertyCacheTeardown + 31
4  MediaToolbox      0x1b1c5b648 remoteXPCAsset_Finalize + 107
5  CoreMedia         0x1b1e9166c FigBaseObjectFinalize + 275
6  CoreFoundation    0x19dfcc5ec _CFRelease + 295
7  AVFCore           0x1b1054d64 -[AVFigAssetTrackInspector dealloc] + 151
8  AVFCore           0x1b0f818d8 -[AVAssetTrack dealloc] + 63
9  CoreFoundation    0x19dfdba28 RELEASE_OBJECTS_IN_THE_ARRAY + 115
10 CoreFoundation    0x19dfdb7e0 -[__NSArrayM dealloc] + 147
11 AVFCore           0x1b0f52e04 -[AVURLAsset dealloc] + 167
12 libobjc.A.dylib   0x19af821f8 object_cxxDestructFromClass(objc_object*, objc_class*) + 115
13 libobjc.A.dylib   0x19af7df20 objc_destructInstance_nonnull_realized(objc_object*) + 75
14 libobjc.A.dylib   0x19af7d4a4 _objc_rootDealloc + 71
15 AVFCore           0x1b0fef988 -[AVAssetReaderOutput dealloc] + 415
16 AVFCore           0x1b0ff11ec -[AVAssetReaderTrackOutput dealloc] + 127
17 CoreFoundation    0x19dfe20a4 -[__NSSingleObjectArrayI dealloc] + 63
18 libobjc.A.dylib   0x19af7d3f8 AutoreleasePoolPage::releaseUntil(objc_object**) + 203
Replies: 1 · Boosts: 0 · Views: 268 · Created: 2w
iPhone 14 Pro: External USB mic not available in AVAudioSession for call apps, but works in Voice Memos & Instagram Live
I’m facing a strange audio routing issue that seems specific to the iPhone 14 Pro / Pro Max. I’m using LiveKit (WebRTC) in a React Native app, which uses AVAudioSession internally for audio capture (VoIP / call-style usage).
🔍 What’s happening: I’m using an external USB microphone. On these devices:
iPhone 11 → ✅ USB mic works
iPhone 13 → ✅ USB mic works
iPhone 17 Pro → ✅ USB mic works
iPhone 14 Pro Max → ❌ USB mic does NOT work
On the iPhone 14 Pro Max, the same USB mic:
✅ works in Voice Memos
✅ works in Instagram Live
❌ does NOT appear as an input option in my app
❌ does NOT work in WhatsApp / Instagram calls
Also: in my app on the iPhone 14 Pro Max, iOS does not show the audio input selector UI, while on the iPhone 17 Pro the same app and same build does show the selector and the USB mic works.
⚙️ My audio session config (LiveKit):
await AudioSession.setAppleAudioConfiguration({
  audioCategory: 'playAndRecord',
  audioMode: 'default',
  audioCategoryOptions: ['allowBluetooth', 'defaultToSpeaker'],
});
await AudioSession.startAudioSession();
❓ My questions (a native-side input-enumeration sketch follows this entry):
Is this a known limitation or behavior specific to the iPhone 14 Pro / Pro Max?
Does the iPhone 14 Pro have different audio routing rules for call / VoIP mode compared to other devices?
Why does the same USB mic work in recording apps (Voice Memos, Instagram Live) but not in call-style apps (LiveKit, WhatsApp, Instagram calls)?
Is there any documented difference in AVAudioSession behavior on the iPhone 14 Pro regarding external USB audio inputs?
Replies: 1 · Boosts: 0 · Views: 78 · Created: 2w
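A native-side Swift sketch (a hypothetical helper, not part of LiveKit or the post) for checking whether the USB interface is listed as an available input at all under a play-and-record session on the affected device, and for preferring it when present.

```swift
import AVFoundation

// List every available input port under a call-style session configuration,
// then prefer the USB audio input if the system exposes one.
func logAndPreferUSBInput() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord, mode: .voiceChat,
                            options: [.allowBluetooth, .defaultToSpeaker])
    try session.setActive(true)

    for port in session.availableInputs ?? [] {
        print("Input: \(port.portName) (\(port.portType.rawValue))")
    }
    if let usb = session.availableInputs?.first(where: { $0.portType == .usbAudio }) {
        try session.setPreferredInput(usb)
    }
}
```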
Unique identifier of a MIDI device
Hello, I need to know what counts as a unique identifier of a MIDI device (source/destination). Important note: I want to get the same ID when a device is reconnected (unplugged and then plugged back in). The main candidate is the kMIDIPropertyUniqueID property, but I don't know whether it meets the requirement above. Additional question: is it always available for any endpoint? There is also the kMIDIPropertyDeviceID property; what about it? One more option is just the MIDIEndpointRef returned by MIDIGetSource or MIDIGetDestination. So what is the proper way to get an ID that persists between device reconnections? (A small read-out sketch follows this entry.)
Replies: 0 · Boosts: 0 · Views: 34 · Created: 2w
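A small CoreMIDI sketch that reads kMIDIPropertyUniqueID from each source endpoint so its persistence across unplug/replug can be tested empirically; whether it is guaranteed to persist is exactly the open question above, and the function name is illustrative.

```swift
import CoreMIDI

// Print the kMIDIPropertyUniqueID of every source endpoint. Run it before and
// after reconnecting the device to compare the values.
func logSourceUniqueIDs() {
    for index in 0..<MIDIGetNumberOfSources() {
        let source = MIDIGetSource(index)
        var uniqueID: Int32 = 0
        if MIDIObjectGetIntegerProperty(source, kMIDIPropertyUniqueID, &uniqueID) == noErr {
            print("source \(index): kMIDIPropertyUniqueID = \(uniqueID)")
        } else {
            print("source \(index): kMIDIPropertyUniqueID not available")
        }
    }
}
```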
AppleAVBAudio assertion information
Hi, I'm currently developing an AVB hardware device, and I'm stuck because the Apple AVB stack is throwing errors without much information. Is there any way to get more information about these assertions and why they are happening? Furthermore, is there any documentation on the AppleAVBAudio module? It would be very handy.
Here are the logs shown in the Console, filtering the log data using process == "coreaudiod"; the same assertion repeats roughly every 0.5 ms:
2025-12-05 15:44:27.087043+0100 0x15ae74 Default 0x0 12965 0 coreaudiod: (AppleAVBAudio) Assert: <private> (value 0x0 0), <private> file: <private>, line: 1533
2025-12-05 15:44:27.087545+0100 0x15ae74 Default 0x0 12965 0 coreaudiod: (AppleAVBAudio) Assert: <private> (value 0x0 0), <private> file: <private>, line: 1533
(the identical assertion line repeats 14 more times, through 2025-12-05 15:44:27.094543+0100)
Replies: 1 · Boosts: 0 · Views: 83 · Created: 2w
ApplicationMusicPlayer fails to play in macCatalyst 26.3 due to RemotePlayerService crash
I've filed this as FB21446798 but figured I'd post here too. In the first build of macOS 26.3, playback via ApplicationMusicPlayer is completely broken. When starting playback of anything at all, the console shows the following errors:
applicationController: xpc service connection interrupted
Failed to obtain remoteObject: Error Domain=NSCocoaErrorDomain Code=4099 "The connection to service created from an endpoint was invalidated from this process." UserInfo={NSDebugDescription=The connection to service created from an endpoint was invalidated from this process.}
Failed to prepareToPlay with error: Error Domain=MPMusicPlayerControllerErrorDomain Code=10 "(null)" UserInfo={NSUnderlyingError=0xc92910ff0 {Error Domain=NSCocoaErrorDomain Code=4099 "The connection to service created from an endpoint was invalidated from this process." UserInfo={NSDebugDescription=The connection to service created from an endpoint was invalidated from this process.}}}
In addition, several crash logs for RemotePlayerService are generated, showing my app as the parent process. This issue is 100% repeatable. No matter how I load the queue, whether it’s catalog or library content, every variation I can think of fails like this. I really hope this can be fixed before 26.3 comes out, otherwise my app will be totally unusable. 😅
Replies: 1 · Boosts: 0 · Views: 410 · Created: 3w
Indexing of Music App
Recently, after the macOS 26.3 (Tahoe) update, the ordering in my Music app has been horrible at best: music disappearing, tracks not aligning with albums (even when the albums are from different years). It's created quite a problem, because the disappearing-tracks issue seems to be replicating to iOS devices as well (although track numbering and album association seem to be stable there). Has anyone else heard of this issue?
Replies: 0 · Boosts: 0 · Views: 216 · Created: 3w
Obtaining per-channel audio data with AVAudioEngine
Currently, I have successfully used ChannelMap to map hardware input channels and obtained audio data from the hardware device's MIC and OTG inputs. Additionally, I have used ChannelMap to map output channels so I can freely feed data for playback to each output channel. However, I now have a problem: I have a hardware device that only has output channels (no input channels), and the system has set this hardware device as the default playback device. In this case, how can I obtain the audio data being played to the output channels so that I can modify it? (A tap-based sketch follows this entry.)
Replies: 0 · Boosts: 0 · Views: 209 · Created: 3w
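A sketch for the question above, under the assumption that the audio to modify is rendered by the app's own AVAudioEngine graph: a tap observes what is sent to the output, and any actual modification happens in a node inserted before the output. Audio played by other processes to the device cannot be captured this way; node and function names are illustrative.

```swift
import AVFoundation

// Observe the mixed signal just before it reaches the output hardware, and
// perform any processing in a node (here an EQ placeholder) before the mixer.
func startObservedPlayback() throws {
    let engine = AVAudioEngine()
    let player = AVAudioPlayerNode()
    let eq = AVAudioUnitEQ(numberOfBands: 1) // placeholder processing node

    engine.attach(player)
    engine.attach(eq)
    engine.connect(player, to: eq, format: nil)
    engine.connect(eq, to: engine.mainMixerNode, format: nil)

    let format = engine.mainMixerNode.outputFormat(forBus: 0)
    engine.mainMixerNode.installTap(onBus: 0, bufferSize: 4096, format: format) { buffer, _ in
        // `buffer` holds the audio about to be played on the output channels.
    }
    try engine.start()
}
```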
_MPRemoteCommandEventDispatch crashes on iOS 26.x devices.
I'm seeing crashes in _MPRemoteCommandEventDispatch on iOS 26.x devices in 3 apps. According to Bugsnag logs they are:
NSInternalInconsistencyException: event dispatch <_MPRemoteCommandEventDispatch: <MPRemoteCommandEvent: 0x11c049500 commandID=THV0 command=<MPRemoteCommand: 0x109ad1ea0 type=Play (0) enabled=YES handlers=[0x109b6a310]> sourceID=(null) ([HostedRoutingSessionDataSource] handleControlSendingCommand<2W5E>)> state:201> deallocated without calling continuation
I attached a log from Xcode Organizer matching the Bugsnag crash: mpr_remote_command_event.crash
When I set a breakpoint on -[_MPRemoteCommandEventDispatch dealloc], I can see it's hit every time I tap the play or pause button on the lock screen.
Thread 0 Crashed:
0  libsystem_kernel.dylib   0x00000002370420cc __pthread_kill + 8 (:-1)
1  libsystem_pthread.dylib  0x00000001e975c810 pthread_kill + 268 (pthread.c:1721)
2  libsystem_c.dylib        0x0000000198f8ff64 abort + 124 (abort.c:122)
3  libc++abi.dylib          0x000000018a7cf808 __abort_message + 132 (abort_message.cpp:66)
4  libc++abi.dylib          0x000000018a7be484 demangling_terminate_handler() + 304 (cxa_default_handlers.cpp:76)
5  libobjc.A.dylib          0x000000018a6cff78 _objc_terminate() + 156 (objc-exception.mm:496)
6  xxxxxxxxxxxxxx           0x00000001003a7db8 CPPExceptionTerminate() + 416 (BSG_KSCrashSentry_CPPException.mm:156)
7  libc++abi.dylib          0x000000018a7cebdc std::__terminate(void (*)()) + 16 (cxa_handlers.cpp:59)
8  libc++abi.dylib          0x000000018a7ceb80 std::terminate() + 108 (cxa_handlers.cpp:88)
9  CoreFoundation           0x000000018d7341c4 __CFRunLoopPerCalloutARPEnd + 256 (CFRunLoop.c:769)
10 CoreFoundation           0x000000018d70bb5c __CFRunLoopRun + 1976 (CFRunLoop.c:3179)
11 CoreFoundation           0x000000018d70aa6c _CFRunLoopRunSpecificWithOptions + 532 (CFRunLoop.c:3462)
12 GraphicsServices         0x000000022e31c498 GSEventRunModal + 120 (GSEvent.c:2049)
13 UIKitCore                0x00000001930ceba4 -[UIApplication _run] + 792 (UIApplication.m:3902)
14 UIKitCore                0x0000000193077a78 UIApplicationMain + 336 (UIApplication.m:5577)
15 xxxxxxxxxxxxxx           0x00000001000c0134 main + 308 (main.swift:15)
16 dyld                     0x000000018a722e28 start + 7116 (dyldMain.cpp:1477)
Is the crash happening when the app is being terminated? Thank you! (A remote-command handler sketch follows this entry.)
Replies: 0 · Boosts: 0 · Views: 278 · Created: 4w
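A sketch of the remote-command registration involved in the crash above. The "deallocated without calling continuation" message suggests an event whose completion path never ran, so verifying that every registered handler returns a status promptly, on every code path, is a reasonable first check; this is not a confirmed fix, and the names here are illustrative rather than taken from the affected apps.

```swift
import AVFoundation
import MediaPlayer

// Register lock-screen play/pause handlers that always return a
// MPRemoteCommandHandlerStatus so the event can be completed by the system.
func registerRemoteCommands(player: AVAudioPlayer) {
    let center = MPRemoteCommandCenter.shared()

    center.playCommand.addTarget { _ in
        player.play()
        return .success   // return a status on every path, including failures
    }
    center.pauseCommand.addTarget { _ in
        player.pause()
        return .success
    }
}
```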
SpeechAnalyzer > AnalysisContext lack of documentation
I'm using the new SpeechAnalyzer framework to detect certain commands and want to improve accuracy by giving context. AnalysisContext seems to be the solution for this, but I couldn't find any usage example, so I want to make sure whether I'm doing it right:
let context = AnalysisContext()
context.contextualStrings = [
    AnalysisContext.ContextualStringsTag("commands"): [
        "set speed level",
        "set jump level",
        "increase speed",
        "decrease speed",
        ...
    ],
    AnalysisContext.ContextualStringsTag("vocabulary"): [
        "speed",
        "jump",
        ...
    ]
]
try await analyzer.setContext(context)
With this implementation, it still gives outputs like "Set some speed level", "It's speed level", etc. Also, is it possible to make it expect a number after those commands, in order to eliminate results like "set some speed level to" (instead of "two")?
Replies: 0 · Boosts: 0 · Views: 277 · Created: 4w
sysEx struct in CoreMIDI/MIDIMessages.h
The sysEx struct in the MIDIUniversalMessage struct has a channel member but the System Exclusive (7-Bit) Message doesn't have a channel field. The System Exclusive (7-Bit) Message has a # of bytes field but the sysEx struct doesn't have a nrOfBytes, byteCount or bytesUsed member. It looks like the channel member of the sysEx struct contains the number of used bytes. Is this a mistake in the header or did I misunderstand something?
Replies: 1 · Boosts: 0 · Views: 558 · Created: 4w
Start and stop recording Voice Memos with Siri
Using iOS 26.2 and AirPods 4:
Long-press the stem to launch Siri and say "Record Voice Memo" -> recording starts.
While the recording is in progress, long-press the stem to launch Siri again -> nothing happens. To stop the recording I need to use the phone.
Is this intended behaviour? I would like to be able to stop the recording with Siri. I am able to launch Siri from the phone while recording, but the point is to keep the phone in my pocket and start/stop recordings only via the AirPods.
Replies: 1 · Boosts: 0 · Views: 154 · Created: Dec ’25
Error resuming background audio while connected to CarPlay
My app utilizes background audio to play music files. I have the audio background mode enabled and I initialize the AVAudioSession in playback mode with the mixWithOthers option, and it usually works great while the app is backgrounded. I listen for audio interruptions as well as route changes, I am able to handle them appropriately, and I can usually resume my background audio with no problem.
I discovered an issue while connected to CarPlay, though. Roughly 50% of the time, when I disconnect from a phone call while connected to CarPlay, I get the following error after calling the play() method of my AVAudioPlayer instance:
"ATAudioSessionClientImpl.mm:281 activation failed. status = 561015905"
If I instead try to start a new audio session I get a similar error:
Error Domain=NSOSStatusErrorDomain Code=561015905 "Session activation failed" UserInfo={NSLocalizedDescription=Session activation failed}
As I said, this isn't reproducible 100% of the time and has so far only been seen while connected to CarPlay. I don't think I'm forgetting an additional capability or plist setting, but if anyone has any clues it would be greatly appreciated. Otherwise this is likely just a bug that I need to report to Apple.
One very important note, and the reason I believe it's just a bug, is that while I was testing I found that other music apps like Spotify also fail to resume their audio at the same time my app fails. Another important detail is that when it works successfully I receive the audio-session interruption-ended notification, and when it doesn't work I only receive a route-configuration-change or route-override notification. From there I am still successfully granted background time to execute code, but my call to resume audio fails with the error codes mentioned above. (A retry-on-route-change sketch follows this entry.)
Replies: 0 · Boosts: 0 · Views: 236 · Created: Dec ’25
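A sketch of a retry-on-route-change fallback for the CarPlay case above, under the assumption that only a route-change notification (and no interruption-ended notification) arrives after the call; whether it avoids error 561015905 is exactly what is in question, and the delay value and names are illustrative.

```swift
import AVFoundation

// When a relevant route change arrives after the call, retry session
// activation after a short delay before resuming the AVAudioPlayer.
func handleRouteChange(_ note: Notification, player: AVAudioPlayer) {
    guard let raw = note.userInfo?[AVAudioSessionRouteChangeReasonKey] as? UInt,
          let reason = AVAudioSession.RouteChangeReason(rawValue: raw) else { return }

    switch reason {
    case .newDeviceAvailable, .oldDeviceUnavailable, .override:
        DispatchQueue.main.asyncAfter(deadline: .now() + 1.0) {
            do {
                try AVAudioSession.sharedInstance().setActive(true)
                player.play()
            } catch {
                print("Activation retry failed: \(error)")
            }
        }
    default:
        break
    }
}
```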
AVAudioEngine Voice Processing Fails with Mismatched Input/Output Devices: AggregateDevice Channel Count Mismatch
I'm encountering errors while using AVAudioEngine with voice processing enabled (setVoiceProcessingEnabled(true)) in scenarios where the input and output audio devices are not the same. This issue arises specifically with mismatched devices, preventing the application from functioning as expected.
Works: paired devices (e.g., MacBook Pro mic → MacBook Pro speakers). The setup works as expected.
Fails: mismatched devices (e.g., AirPods mic → MacBook Pro speakers). AVAudioEngine setup fails during aggregate device construction, and the error logs indicate a channel count mismatch.
Here are the partial logs (due to the content limit, I cannot post the entire logs):
AUVPAggregate.cpp:1000 client-side input and output formats do not match (err=-10875)
AUVPAggregate.cpp:1036 err=-10875
AVAEInternal.h:109 [AVAudioEngineGraph.mm:1344:Initialize: (err = PerformCommand(*outputNode, kAUInitialize, NULL, 0)): error -10875
AggregateDevice.mm:329 Failed expectation of constructed aggregate (312): mInput.streamChannelCounts == inputStreamChannelCounts
AggregateDevice.mm:331 Failed expectation of constructed aggregate (312): mInput.totalChannelCount == std::accumulate(inputStreamChannelCounts.begin(), inputStreamChannelCounts.end(), 0U)
AggregateDevice.mm:182 error fetching default pair
AggregateDevice.mm:329 Failed expectation of constructed aggregate (336): mInput.streamChannelCounts == inputStreamChannelCounts
AggregateDevice.mm:331 Failed expectation of constructed aggregate (336): mInput.totalChannelCount == std::accumulate(inputStreamChannelCounts.begin(), inputStreamChannelCounts.end(), 0U)
AUHAL.cpp:1782 ca_verify_noerr: [AudioDeviceSetProperty(mDeviceID, NULL, 0, isInput, kAudioDevicePropertyIOProcStreamUsage, theSize, theStreamUsage), 560227702]
AudioHardware-mac-imp.cpp:3484 AudioDeviceSetProperty: no device with given ID
AUHAL.cpp:1782 ca_verify_noerr: [AudioDeviceSetProperty(mDeviceID, NULL, 0, isInput, kAudioDevicePropertyIOProcStreamUsage, theSize, theStreamUsage), 560227702]
AggregateDevice.mm:182 error fetching default pair
AggregateDevice.mm:329 Failed expectation of constructed aggregate (348): mInput.streamChannelCounts == inputStreamChannelCounts
AggregateDevice.mm:331 Failed expectation of constructed aggregate (348): mInput.totalChannelCount == std::accumulate(inputStreamChannelCounts.begin(), inputStreamChannelCounts.end(), 0U)
Questions (a condensed setup sketch follows this entry):
Is it possible to use voice processing with different input/output devices? If yes, are there any specific configurations required to handle mismatched devices?
How can we resolve channel count mismatch errors during aggregate device construction? Are there settings or API adjustments to enforce compatibility between input/output devices?
Are there any workarounds or alternative approaches to achieve voice-processing functionality with mismatched devices? For instance, can we force an intermediate channel configuration or downmix input/output formats?
Replies: 0 · Boosts: 0 · Views: 212 · Created: Dec ’25
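A condensed sketch of the kind of configuration the post above describes, assuming a plain AVAudioEngine with voice processing enabled on macOS and the default input and output being different physical devices; names are illustrative, and in the failing case the -10875 error reportedly surfaces when the engine is initialized or started.

```swift
import AVFoundation

// Enable voice processing on the I/O nodes, set up a playback path and a
// microphone tap, then start the engine.
func startVoiceProcessingEngine() throws {
    let engine = AVAudioEngine()
    try engine.inputNode.setVoiceProcessingEnabled(true)

    // Render path: app audio goes through the main mixer to the output.
    let player = AVAudioPlayerNode()
    engine.attach(player)
    engine.connect(player, to: engine.mainMixerNode, format: nil)

    // Capture path: echo-cancelled microphone audio from the input node.
    let micFormat = engine.inputNode.outputFormat(forBus: 0)
    engine.inputNode.installTap(onBus: 0, bufferSize: 1024, format: micFormat) { buffer, _ in
        // Forward `buffer` to the rest of the app.
    }

    try engine.start()
}
```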