I’m facing a strange audio routing issue that seems specific to iPhone 14 Pro / Pro Max.
I’m using LiveKit (WebRTC) in a React Native app, which uses AVAudioSession internally for audio capture (VoIP / call-style usage).
🔍 What’s happening:
I’m using an external USB microphone.
On these devices:
iPhone 11 → ✅ USB mic works
iPhone 13 → ✅ USB mic works
iPhone 17 Pro → ✅ USB mic works
iPhone 14 Pro Max → ❌ USB mic does NOT work
On iPhone 14 Pro Max:
The same USB mic:
✅ Works in Voice Memos
✅ Works in Instagram Live
❌ Does NOT appear as an input option in my app
❌ Does NOT work in WhatsApp / Instagram calls
Also:
In my app on iPhone 14 Pro Max, iOS does not show the audio input selector UI
On iPhone 17 Pro, the same app and same build does show the selector and the USB mic works
⚙️ My audio session config (LiveKit):
await AudioSession.setAppleAudioConfiguration({
  audioCategory: 'playAndRecord',
  audioMode: 'default',
  audioCategoryOptions: ['allowBluetooth', 'defaultToSpeaker'],
});
await AudioSession.startAudioSession();
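For reference, this is the kind of native-side check I can run to see whether iOS exposes the USB mic as an input at all (a minimal Swift sketch, assuming a bridged native module; not part of the LiveKit config above):

import AVFoundation

// List the inputs AVAudioSession offers once the session is active.
let session = AVAudioSession.sharedInstance()
for input in session.availableInputs ?? [] {
    print(input.portType.rawValue, input.portName)
}
// If the USB port shows up here, it can be forced explicitly:
// try session.setPreferredInput(usbPort)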
❓ My questions:
Is this a known limitation or behavior specific to iPhone 14 Pro / Pro Max?
Does iPhone 14 Pro have different audio routing rules for call / VoIP mode compared to other devices?
Why does the same USB mic work in recording apps (Voice Memos, Instagram Live) but not in call-style apps (LiveKit, WhatsApp, Instagram call)?
Is there any documented difference in AVAudioSession behavior on iPhone 14 Pro regarding external USB audio inputs?
Hi!
I am writing a browser extension that allows you to control the playback of media content on a music service website. Unfortunately, Safari does not support tracking changes to the audible property in the tabs.onUpdated event. Is there an alternative to this event? I'm looking for a way to track when the automatic inference engine interrupts playback on a music service website.
Thank you.
The sysEx struct in the MIDIUniversalMessage struct has a channel member but the System Exclusive (7-Bit) Message doesn't have a channel field.
The System Exclusive (7-Bit) Message has a # of bytes field but the sysEx struct doesn't have a nrOfBytes, byteCount or bytesUsed member.
It looks like the channel member of the sysEx struct contains the number of used bytes.
Is this a mistake in the header or did I misunderstand something?
We are currently working on a CarPlay navigation app and so far everything is working well except for speaking turn notifications.
Our TTS implementation works fine on the phone and works fine on CarPlay if the voice is spoken over the speaker in the car. If users connect a BT headset to the car and listen through that headset, then the voice commands are chopped up / stutter.
Why would users use a BT headset? Well, we are working on a motorcycle app, and there are usually no speakers on a motorcycle.
It sounds like the BT channel is opened and closed repeatedly for every character / word spoken. This happens on different CarPlay devices and different Bluetooth headsets; we have reports from multiple users that they find this behavior annoying and that other apps work fine.
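For context, our TTS path is built on AVSpeechSynthesizer with a playback audio session, roughly along these lines (simplified Swift sketch; the exact option values are illustrative and may differ from what ships in our app):

import AVFoundation

// One long-lived synthesizer and an audio session that stays active, so each
// utterance doesn't have to open and close the output route on its own.
let synthesizer = AVSpeechSynthesizer()

func speakTurnInstruction(_ text: String) {
    let session = AVAudioSession.sharedInstance()
    try? session.setCategory(.playback,
                             mode: .voicePrompt,
                             options: [.duckOthers, .interruptSpokenAudioAndMixWithOthers])
    try? session.setActive(true)
    synthesizer.speak(AVSpeechUtterance(string: text))
}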
Is this a known issue? Are there possible workarounds?
My app utilizes background audio to play music files. I have the audio background mode enabled and I initialize the AVAudioSession in playback mode with the mixWithOthers option. And it usually works great while the app is backgrounded. I listen for audio interruptions as well as route changes and I am able to handle them appropriately and I can usually resume my background audio no problem.
I discovered an issue while connected to CarPlay though. Roughly 50% of the time when I disconnect from a phone call while connected to CarPlay I get the following error after calling the play() method of my AVAudioPlayer instance:
"ATAudioSessionClientImpl.mm:281 activation failed. status = 561015905"
If I instead try to start a new audio session I get a similar error:
Error Domain=NSOSStatusErrorDomain Code=561015905 "Session activation failed" UserInfo={NSLocalizedDescription=Session activation failed}
Like I said, this isn't reproducible 100% of the time and so far is only seen while connected to CarPlay. I don't think I'm forgetting an additional capability or plist setting, but if anyone has any clues it would be greatly appreciated. Otherwise this is likely just a bug that I need to report to Apple.
One very important note, and reason I believe it's just a bug, is that while I was testing I found that other music apps like Spotify will also fail to resume their audio at the same time my app fails.
Another important detail is that when it works successfully I receive the audio session interruption ended notification, and when it doesn't work I only receive a route configuration change or route override notification. From there I am still successfully granted background time to execute code, but my call to resume audio fails with the above-mentioned error codes.
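For completeness, my interruption handling is roughly the following (simplified Swift sketch; player stands in for my AVAudioPlayer instance):

import AVFoundation

// Resume playback only when the interruption ends with the shouldResume hint.
NotificationCenter.default.addObserver(
    forName: AVAudioSession.interruptionNotification,
    object: AVAudioSession.sharedInstance(),
    queue: .main
) { note in
    guard let info = note.userInfo,
          let rawType = info[AVAudioSessionInterruptionTypeKey] as? UInt,
          let type = AVAudioSession.InterruptionType(rawValue: rawType),
          type == .ended else { return }
    let rawOptions = info[AVAudioSessionInterruptionOptionKey] as? UInt ?? 0
    if AVAudioSession.InterruptionOptions(rawValue: rawOptions).contains(.shouldResume) {
        try? AVAudioSession.sharedInstance().setActive(true)
        player.play()   // this is the call that fails with 561015905 after a CarPlay phone call
    }
}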
Since iOS 18, the system setting “Allow Audio Playback” (enabled by default) allows third-party app audio to continue playing while the user is recording video with the Camera app. This has created a problem for the app I’m developing.
➡️ The problem:
My app plays continuous audio in both foreground and background states. If the user starts recording video with the iOS Camera app while my app's audio is still playing in the background, that audio gets captured in the video, which is obviously unintended behavior.
Yes, the user could stop the app manually before starting the video recording, but that can’t be guaranteed. As a developer, I need a way to stop the app’s audio before the video recording begins.
So far, I haven’t found a reliable way to detect when video recording starts if ‘Allow Audio Playback’ is ON.
➡️ What I’ve tried:
— AVAudioSession.interruptionNotification → doesn’t fire
— devicesChangedEventStream → not triggered
I don't want to request mic permission (the app doesn't use the mic). Also, disabling the app's background audio isn't an option, as it is a crucial part of the user experience.
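For completeness, the only other signal I've looked at is the secondary audio hint, though I haven't confirmed it fires in this scenario (Swift sketch):

import AVFoundation

// Fires when the system suggests that secondary audio should be silenced;
// unverified whether Camera recording with "Allow Audio Playback" on triggers it.
NotificationCenter.default.addObserver(
    forName: AVAudioSession.silenceSecondaryAudioHintNotification,
    object: AVAudioSession.sharedInstance(),
    queue: .main
) { note in
    guard let raw = note.userInfo?[AVAudioSessionSilenceSecondaryAudioHintTypeKey] as? UInt,
          let type = AVAudioSession.SilenceSecondaryAudioHintType(rawValue: raw) else { return }
    if type == .begin {
        // stop the app's audio here
    }
}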
➡️ What I need:
A reliable, supported way to detect when the Camera app begins video recording, without requiring mic access — so I can stop audio and avoid unintentional overlap with the user’s recordings.
Any official guidance, workarounds, or AVFoundation techniques would be greatly appreciated.
Thanks.
Hi,
I'm working on an audio mixing app, that comes with bundled audio units that provide some of the app's core functionality.
For the next release of that app, we are planning to make two changes:
make the app sandboxed
package the bundled audio units as .appex bundles instead of .component bundles, so we don't need to take care of installing them at the correct spot in the file system
When trying this new approach, we run into problems where [[AVAudioUnitEffect alloc] initWithAudioComponentDescription:] crashes when trying to load our audio unit with the exception:
AVAEInternal.h:109 [AUInterface.mm:468:AUInterfaceBaseV3: (AudioComponentInstanceNew(comp, &_auv2)): error -10863
Our audio unit has the sandboxSafe flag enabled and loads fine when the host app is not sandboxed, so I'm guessing I got the bundle ID / code signing requirements for the .appex correct.
It seems that my .appex isn't even loaded, and the system rejects it because of its metadata. Maybe there is something wrong with the Info.plist generated by JUCE?
"BuildMachineOSBuild" => "23H222"
"CFBundleDisplayName" => "elgato_sample_recorder"
"CFBundleExecutable" => "ElgatoSampleRecorder"
"CFBundleIdentifier" => "com.iwascoding.EffectLoader.samplerecorderAUv3"
"CFBundleName" => "elgato_sample_recorder"
"CFBundlePackageType" => "XPC!"
"CFBundleShortVersionString" => "1.0.0.0"
"CFBundleSignature" => "????"
"CFBundleSupportedPlatforms" => [
0 => "MacOSX"
]
"CFBundleVersion" => "1.0.0.0"
"DTCompiler" => "com.apple.compilers.llvm.clang.1_0"
"DTPlatformBuild" => "24C94"
"DTPlatformName" => "macosx"
"DTPlatformVersion" => "15.2"
"DTSDKBuild" => "24C94"
"DTSDKName" => "macosx15.2"
"DTXcode" => "1620"
"DTXcodeBuild" => "16C5032a"
"LSMinimumSystemVersion" => "10.13"
"NSExtension" => {
"NSExtensionAttributes" => {
"AudioComponents" => [
0 => {
"description" => "Elgato Sample Recorder"
"factoryFunction" => "elgato_sample_recorderAUFactoryAUv3"
"manufacturer" => "Manu"
"name" => "Elgato: Elgato Sample Recorder"
"sandboxSafe" => 1
"subtype" => "Znyk"
"tags" => [
0 => "Effects"
]
"type" => "aufx"
"version" => 65536
}
]
}
"NSExtensionPointIdentifier" => "com.apple.AudioUnit-UI"
"NSExtensionPrincipalClass" => "elgato_sample_recorderAUFactoryAUv3"
}
"NSHighResolutionCapable" => 1
}
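For what it's worth, a quick way to check whether the system registers the component from the .appex at all (Swift sketch; the four-char codes mirror the AudioComponents entry above):

import AVFoundation
import AudioToolbox

// Look up the component by the description from the extension's Info.plist.
// An empty result means the system never registered the .appex.
let desc = AudioComponentDescription(
    componentType: kAudioUnitType_Effect,   // "aufx"
    componentSubType: 0x5A6E796B,           // "Znyk"
    componentManufacturer: 0x4D616E75,      // "Manu"
    componentFlags: 0,
    componentFlagsMask: 0
)
let found = AVAudioUnitComponentManager.shared().components(matching: desc)
print("Registered components: \(found.map(\.name))")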
Any ideas what I am missing?
Two issues:
No matter what I set in
try audioSession.setPreferredSampleRate(x)
the sample rate on both iOS and macOS is always 48000 when the output goes through the speaker, and 24000 when my AirPods connect to an iPhone/iPad.
Now, I'm checking the current output loudness to animate a 3D character, using
mixerNode.installTap(onBus: 0, bufferSize: y, format: nil) { [weak self] buffer, time in
    Task { @MainActor in
        // calculate rms and animate character accordingly
    }
}
but any buffer size under 4800 is just ignored and the buffers I get are 4800 frames long.
This is OK when the sample rate is 48000, as 10 updates per second lead to decent visual results.
But when AirPods connect, the sample rate is 24000, which means only 5 updates per second, so the character animation looks lame.
My AVAudioEngine setup is the following:
audioEngine.connect(playerNode, to: pitchShiftEffect, format: format)
audioEngine.connect(pitchShiftEffect, to: mixerNode, format: format)
audioEngine.connect(mixerNode, to: audioEngine.outputNode, format: nil)
Now, I'd be fine if the outputNode runs at whatever rate it needs, as long as my tap gets at least 10 buffers per second.
PS: Specifying my favorite format in the
let format = AVAudioFormat(standardFormatWithSampleRate: 48_000, channels: 2)!
mixerNode.installTap(onBus: 0, bufferSize: y, format: format)
doesn't change anything either
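The workaround I'm considering is slicing each delivered buffer into smaller windows, so I always get about 10 RMS values per second regardless of the buffer size the engine hands me (rough Swift sketch):

import AVFoundation

mixerNode.installTap(onBus: 0, bufferSize: 4800, format: nil) { buffer, _ in
    guard let samples = buffer.floatChannelData?[0] else { return }
    let frames = Int(buffer.frameLength)
    let window = max(1, Int(buffer.format.sampleRate / 10))   // ~100 ms of audio per RMS value
    var start = 0
    while start < frames {
        let count = min(window, frames - start)
        var sum: Float = 0
        for i in start..<(start + count) { sum += samples[i] * samples[i] }
        let rms = (sum / Float(count)).squareRoot()
        Task { @MainActor in
            // animate the character with rms here
            _ = rms
        }
        start += count
    }
}

The downside is that all the values for one buffer still arrive at once (every 100 or 200 ms), so the animation would need to interpolate between them.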
Hi,
I've had a new deck installed in my car for about 1.5 weeks.
I'm having compatibility issues with my 15PM.
It happens both wired and wirelessly; I get the error "Accessory not supported by this device". It used to happen all the time; now it's 50/50. Sometimes it works.
I've removed and re-added Bluetooth multiple times on both the phone and the deck. I bought a Belkin USB-C to USB-A cable today and it seemed to fix it, but the problem comes back.
I've changed the setting Face ID & Passcode > Allow Access When Locked > Accessories.
The car stereo guy reckons it's definitely an issue with the phone, not the deck; I'm inclined to believe him since the error says "by this device".
Any advice appreciated.
I have an AUv3 that passes all validation and can be loaded into Logic Pro without issue. The UI for the plug-in can be any aspect ratio, but Logic insists on presenting it in a view with a fixed aspect ratio; that is, when resizing, both the height and width scale together. I have never managed to work out what I need to specify to tell Logic to allow the user to resize width and height independently of each other.
Can anyone tell me what I need to specify in the AU code that will inform Logic that the view can be resized from any side of the window/panel?
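For reference, the only resize-related hooks I'm aware of are the view-configuration overrides on AUAudioUnit; a hypothetical sketch of what I mean (I don't know whether Logic consults these for free-form resizing, and viewController here is a made-up reference to the extension's view controller):

// In the AUAudioUnit subclass:
override func supportedViewConfigurations(
    _ availableViewConfigurations: [AUAudioUnitViewConfiguration]
) -> IndexSet {
    // Claim support for every configuration the host offers.
    return IndexSet(integersIn: 0..<availableViewConfigurations.count)
}

override func select(_ viewConfiguration: AUAudioUnitViewConfiguration) {
    // Resize the view to whatever the host picked.
    viewController?.preferredContentSize = CGSize(width: viewConfiguration.width,
                                                  height: viewConfiguration.height)
}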
I have the new iOS 26 SpeechTranscriber working in my application. The issue I am facing is how to determine whether the device I am running on supports SpeechTranscriber. I was able to write code that tests whether the device supports transcription, but it takes a while to run, so the result is not available when the app launches. What I am looking for is a list of iOS 26 devices it doesn't run on. I think it's safe to assume any new devices will support it, so a list of devices that can run iOS 26 but cannot do transcription would be much faster for the app. I have determined that it doesn't work on an iPhone SE 2nd Gen; it works on iPhone 12, SE 3rd Gen, iPhone 14 Pro, and 15 Pro. Since SpeechTranscriber doesn't work in the simulator, I can't determine it that way. I have checked the docs and they don't list the devices it doesn't work on.
Hello,
I'm observing an intermittent memory leak being reported in the iOS Simulator when initializing and starting an AVAudioEngine. Even with minimal setup—just attaching a single AVAudioPlayerNode and connecting it to the mainMixerNode—Xcode's memory diagnostics and Instruments sometimes flag a leak.
Here is a simplified version of the code I'm using:
// This function is called when the user taps a button in the view controller:
#import "ViewController.h"

void soundCreate(void); // implemented in media.m

@interface ViewController ()
@end

@implementation ViewController

- (void)viewDidLoad {
    [super viewDidLoad];
}

- (IBAction)myButtonAction:(id)sender {
    NSLog(@"Test");
    soundCreate();
}

@end
// media.m
#import <AVFoundation/AVFoundation.h>

static AVAudioEngine *audioEngine = nil;

void soundCreate(void)
{
    if (audioEngine != nil)
        return;

    [[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryAmbient error:nil];
    [[AVAudioSession sharedInstance] setActive:YES error:nil];

    audioEngine = [[AVAudioEngine alloc] init];
    AVAudioPlayerNode *playerNode = [[AVAudioPlayerNode alloc] init];
    [audioEngine attachNode:playerNode];
    [audioEngine connect:playerNode to:(AVAudioNode *)[audioEngine mainMixerNode] format:nil];
    [audioEngine startAndReturnError:nil];
}
In the memory leak report, the following call stack is repeated, seemingly in a loop:
ListenerMap::InsertEvent(XAudioUnitEvent const&, ListenerBinding*) AudioToolboxCore
ListenerMap::AddParameter(AUListener*, void*, XAudioUnitEvent const&) AudioToolboxCore
AUListenerAddParameter AudioToolboxCore
addOrRemoveParameterListeners(OpaqueAudioComponentInstance*, AUListenerBase*, AUParameterTree*, bool) AudioToolboxCore
0x180178ddf
Hi there,
I recently launched a DJ app on the Mac App Store and was wondering how I could access songs for mixing purposes via Apple Music, just like Serato, rekordbox, djay, and other DJ apps do?
Thanks,
Gunek
I'm developing a TTS Audio Unit Extension that needs to write trace/log files to a shared App Group container. While the main app can successfully create and write files to the container, the extension gets sandbox denied errors despite having proper App Group entitlements configured.
Setup:
Main App (Flutter) and TTS Audio Unit Extension share the same App Group
App Group is properly configured in developer portal and entitlements
Main app successfully creates and uses files in the container
Container structure shows existing directories (config/, dictionary/) with populated files
Both targets have App Group capability enabled and entitlements set
Current behavior:
Extension can access/read the App Group container
Extension can see existing directories and files
All write attempts are blocked with "sandbox deny(1) file-write-create" errors
Code example:
// Returns the path of <App Group container>/<component>, creating the
// directory if needed; the createDirectoryAtPath call is what the sandbox denies.
const char* createSharedGroupPathWithComponent(const char* groupId, const char* component) {
    NSString* groupIdStr = [NSString stringWithUTF8String:groupId];
    NSString* componentStr = [NSString stringWithUTF8String:component];
    NSURL* url = [[NSFileManager defaultManager]
        containerURLForSecurityApplicationGroupIdentifier:groupIdStr];
    NSURL* fullPath = [url URLByAppendingPathComponent:componentStr];

    NSError *error = nil;
    if (![[NSFileManager defaultManager] createDirectoryAtPath:fullPath.path
                                   withIntermediateDirectories:YES
                                                    attributes:nil
                                                         error:&error]) {
        NSLog(@"Unable to create directory %@", error.localizedDescription);
    }
    return [[fullPath path] UTF8String];
}
Error output:
Sandbox: simaromur-extension(996) deny(1) file-write-create /private/var/mobile/Containers/Shared/AppGroup/36CAFE9C-BD82-43DD-A962-2B4424E60043/trace
Key questions:
Are there additional entitlements required for TTS Audio Unit Extensions to write to App Group containers?
Is this a known limitation of TTS Audio Unit Extensions?
What is the recommended way to handle logging/tracing in TTS Audio Unit Extensions?
If writing to App Group containers is not supported, what alternatives are available?
Current entitlements:
<dict>
    <key>com.apple.security.application-groups</key>
    <array>
        <string>group.com.<company>.<appname></string>
    </array>
</dict>
Hi,
I'm trying to set up an AVAudioEngine for USB audio recording and monitoring playthrough.
As soon as I try to set up playthrough, I get an error in the console: AVAEInternal.h:83 required condition is false: [AVAudioEngineGraph.mm:1361:Initialize: (IsFormatSampleRateAndChannelCountValid(outputHWFormat))]
Any ideas how to fix it?
// Set the input device
try? setupInputDevice(deviceID: inputDevice)
let input = audioEngine.inputNode

// Force a stereo format
let inputHWFormat = input.inputFormat(forBus: 0)
let stereoFormat = AVAudioFormat(commonFormat: inputHWFormat.commonFormat,
                                 sampleRate: inputHWFormat.sampleRate,
                                 channels: 2,
                                 interleaved: inputHWFormat.isInterleaved)
guard let format = stereoFormat else {
    throw AudioError.deviceSetupFailed(-1)
}

print("Input format: \(inputHWFormat)")
print("Forced stereo format: \(format)")

audioEngine.attach(monitorMixer)
audioEngine.connect(input, to: monitorMixer, format: format)

// MonitorMixer -> MainMixer (output)
// Problem here: format: format also breaks.
audioEngine.connect(monitorMixer, to: audioEngine.mainMixerNode, format: nil)
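For what it's worth, the first thing I'd log before wiring the graph is the hardware formats on both ends, since (as far as I understand) the IsFormatSampleRateAndChannelCountValid assertion usually points at a mismatched or invalid (0 Hz / 0-channel) format (Swift sketch):

// Inspect the hardware formats before connecting the nodes.
let hwInput = audioEngine.inputNode.inputFormat(forBus: 0)
let hwOutput = audioEngine.outputNode.outputFormat(forBus: 0)
print("Input hardware format: \(hwInput)")
print("Output hardware format: \(hwOutput)")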
Hello everyone,
I am working on an app that allows you to review your own music using Apple Music. Currently I am running into an issue with skipping forwards and backwards outside of the app.
How it should work: When skipping forward or backwards on the lock or home screen of an iPhone, the next or previous song on an album should play and the information should change to reflect that in the app.
If you play a song in Apple Music, you can see a Now Playing view in the lock screen.
When you skip forward or backwards, it will do either action and it would reflect that when you see a little frequency icon on artwork image of a song.
What it's doing: When skipping forward or backwards on the lock or home screen of an iPhone, the next or previous song is reflected outside of the app, but not in the app.
When skipping a song outside of the app, it works correctly to head to the next song.
But when I return to the app, it is not reflected
NOTE: I am not using MusicKit types such as Track or Album to display the songs. Since I want to grab the songs and review them, I need a rating, so I created my own type that stores the MusicItemID, name, artist(s), etc.
NOTE: I am using ApplicationMusicPlayer.shared
Is there a way to get the song to reflect in my app?
(If it's easier, a simple example would be nice. No need to create an entire Xcode project.)
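For context, the kind of observation I mean is roughly this (SwiftUI sketch; NowPlayingView is a hypothetical stand-in for my own view):

import MusicKit
import SwiftUI

// Observe the shared player's queue; currentEntry should change when the
// system skips tracks from the lock screen.
struct NowPlayingView: View {
    @ObservedObject private var queue = ApplicationMusicPlayer.shared.queue

    var body: some View {
        Text(queue.currentEntry?.title ?? "Nothing playing")
    }
}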
I wrote a Swift macOS app to control a PCI audio device. The code switches between the default output and input channels. As soon as I launch the Audio MIDI Setup utility, channel switching stops working. The driver properties allow switching, but the system doesn't respond. I have to delete the contents of /Library/Preferences/Audio and reset Core Audio. What am I missing?
func setDefaultChannelsOutput() {
    guard let deviceID = getDeviceIDByName(deviceName: "PCI-424") else { return }
    let selectedIndex = DefaultChannelsOutput.indexOfSelectedItem
    if selectedIndex < 0 || selectedIndex >= 24 { return }

    let channel1 = UInt32(selectedIndex * 2 + 1)
    let channel2 = UInt32(selectedIndex * 2 + 2)
    var channels: [UInt32] = [channel1, channel2]

    var propertyAddress = AudioObjectPropertyAddress(
        mSelector: kAudioDevicePropertyPreferredChannelsForStereo,
        mScope: kAudioDevicePropertyScopeOutput,
        mElement: kAudioObjectPropertyElementWildcard
    )

    let dataSize = UInt32(MemoryLayout<UInt32>.size * channels.count)
    let status = AudioObjectSetPropertyData(deviceID, &propertyAddress, 0, nil, dataSize, &channels)
    if status != noErr {
        print("Error setting default output channels: \(status)")
    }
}
In MusicKit Web the playback states are provided as numbers.
For example the playbackStateDidChange event listener will return:
{oldState: 2, state: 3, item:...}
When the state changes from playing (2) to paused (3).
Those are pretty easy to guess, but I'm having a hard time with some of the others: completed, ended, loading, none, paused, playing, seeking, stalled, stopped, waiting.
I cannot find a mapping of states to numbers documented anywhere. I got the above states from an enum in a d.ts file that is often incorrect/incomplete.
Can someone help out pointing to the docs or provide a mapping?
Thanks.
Hi, I believe I've found a potential error in the sample code on the documentation page for creating and using a process tap with an aggregate device. The issue is in the section explaining how to add a tap to the aggregate device. I have already filed a Feedback Assistant ticket on this (ID: FB17411663) but haven't heard back for months.
Capturing system audio with Core Audio taps
The sample code for modifying the kAudioAggregateDevicePropertyTapList incorrectly uses the tapID as the target AudioObjectID when calling AudioObjectSetPropertyData.
// (Code to get the list and potentially modify listAsArray)
if var listAsArray = list as? [CFString] {
    // ... (modification logic) ...

    // Set the list back on the aggregate device. <--- The comment is correct
    list = listAsArray as CFArray
    _ = withUnsafeMutablePointer(to: &list) { list in
        // INCORRECT: This call uses tapID as the target object.
        AudioObjectSetPropertyData(tapID, &propertyAddress, 0, nil, propertySize, list)
    }
}
The kAudioAggregateDevicePropertyTapList is a property that belongs to the aggregate device, not the individual tap. Therefore, to set this property, the AudioObjectSetPropertyData function must target the AudioObjectID of the aggregate device itself. Using tapID as the first argument is logically incorrect for this operation and will not update the aggregate device as intended.
Furthermore, the preceding AudioObjectGetPropertyData call to fetch the list also appears to use the incorrect tapID as its target in the sample.
The AudioObjectID for both getting and setting this property should be the ID of the aggregate device.
_ = AudioObjectGetPropertyData(aggregateDeviceID, &propertyAddress, 0, nil, &propertySize, &list)
_ = AudioObjectSetPropertyData(aggregateDeviceID, &propertyAddress, 0, nil, propertySize, newList)
Thank you!
Please update the Android MusicKit SDK. Version 1.1.2 fails to compile with the error: SDKUriHandlerActivity: Apps targeting Android 12 and higher are required to specify an explicit value for android:exported when the corres…