Dive into the technical aspects of audio on your device, including codecs, format support, and customization options.


Posts under the Audio subtopic


aumi AUv3 with AVAudioEngine connectMIDI multiple
Hi! I am creating an aumi AUv3 extension and I am trying to achieve simultaneous connections to multiple other AVAudioNodes. I would like to know whether it is possible to route the MIDI to different outputs inside the render process in the AUv3. I am using connectMIDI(_:to:format:eventListBlock:) to connect the output of the AUv3 to multiple AVAudioNodes. However, when I send MIDI out of the AUv3, it gets sent to all of the nodes connected to it. I can't find any documentation on how to route the MIDI to only one of the connected nodes. Is this possible?
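For reference, a minimal sketch of the array-based connection described above, assuming the AUv3 node is already instantiated and attached to the engine; the function name and parameter names are placeholders, not part of the original post:

import AVFoundation

// Sketch: the array form of connectMIDI fans the source node's MIDI output
// out to every destination node in the list.
func routeMIDIToAll(from auv3Node: AVAudioNode,
                    to destinations: [AVAudioNode],
                    in engine: AVAudioEngine) {
    engine.connectMIDI(auv3Node, to: destinations, format: nil, eventListBlock: nil)
}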
3 replies · 0 boosts · 607 views · Dec ’24

Memory leak AVAudioPlayer
Let's consider the following code. I've created an actor that loads a list of .mp3 files from a Bundle and then makes them available for audio playback. Unfortunately, I'm experiencing a memory leak at the play method, player.play(). From Instruments I get:

_malloc_type_malloc_outlined  libsystem_malloc.dylib
start_wqthread  libsystem_pthread.dylib

private actor AudioActor {
    enum Failure: Error {
        case soundsNotLoaded([AudioPlayerClient.Sound: Error])
    }

    enum Player {
        case music(AVAudioPlayer)
    }

    var players: [Sound: Player] = [:]
    let bundles: [Bundle]

    init(bundles: UncheckedSendable<[Bundle]>) {
        self.bundles = bundles.wrappedValue
    }

    func load(sounds: [Sound]) throws {
        try AVAudioSession.sharedInstance().setActive(true, options: [])
        var errors: [Sound: Error] = [:]
        for sound in sounds {
            guard let url = bundle.url(forResource: sound.name, withExtension: "mp3") else { continue }
            do {
                self.players[sound] = try .music(AVAudioPlayer(contentsOf: url))
            } catch {
                errors[sound] = error
            }
        }
        guard errors.isEmpty else {
            throw Failure.soundsNotLoaded(errors)
        }
    }

    func play(sound: Sound, loops: Int?) throws {
        guard let player = self.players[sound] else { return }
        switch player {
        case let .music(player):
            player.numberOfLoops = loops ?? -1
            player.play()
        }
    }

    func stop(sound: Sound) throws {
        guard let player = self.players[sound] else { throw Failure.soundsNotLoaded([:]) }
        switch player {
        case let .music(player):
            player.stop()
        }
    }
}
0 replies · 0 boosts · 96 views · Mar ’25

Destroy MIDIUMPMutableEndpoint again?
Is there a way to destroy MIDIUMPMutableEndpoint again? In my app, the user has a setting to enable and disable MIDI 2.0. If MIDI 2.0 should not be supported (or if iOS version < 18), it creates a virtual destination and a virtual source. And if MIDI 2.0 should be enabled, it instead creates a MIDIUMPMutableEndpoint, which itself creates the virtual destination and source automatically. So here is my problem: I didn't find any way to destroy the MIDIUMPMutableEndpoint again. There is a method to disable it (setEnabled:NO), but that doesn't destroy or hide the virtual destination and source. So when the user turns MIDI 2.0 support off, I will have two virtual destinations and sources, and cannot get rid of the 2.0 ones. What is the correct way to get rid of the MIDIUMPMutableEndpoint once it is created?
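For context, a minimal sketch of the MIDI 1.0 fallback path described above, where classic virtual endpoints can be created and explicitly disposed; the client and endpoint names are hypothetical, and this is not a confirmed way to tear down a MIDIUMPMutableEndpoint:

import CoreMIDI

var client = MIDIClientRef()
var virtualSource = MIDIEndpointRef()
var virtualDestination = MIDIEndpointRef()

MIDIClientCreateWithBlock("MyApp" as CFString, &client, nil)

// Classic (MIDI 1.0) virtual endpoints, created explicitly...
MIDISourceCreateWithProtocol(client, "MyApp Out" as CFString, ._1_0, &virtualSource)
MIDIDestinationCreateWithProtocol(client, "MyApp In" as CFString, ._1_0, &virtualDestination) { _, _ in
    // handle incoming MIDI packets here
}

// ...and torn down explicitly when MIDI 2.0 mode is switched off.
MIDIEndpointDispose(virtualSource)
MIDIEndpointDispose(virtualDestination)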
0 replies · 0 boosts · 56 views · Sep ’25

Issue with Audio Sample Rate Conversion in Video Calls
Hey everyone, I'm encountering an issue with audio sample rate conversion that I'm hoping someone can help with. Here's the breakdown.

Issue description: I've installed a tap on an input device to convert audio to an optimal sample rate, with a converter node added on top of this setup. The problem arises when joining Zoom or FaceTime calls: the converter gets deallocated from memory, causing the program to crash.

Symptoms: The converter node is being deallocated during video calls. The program crashes entirely when this happens. Traditional methods of monitoring sample rate changes (tracking nominal or actual sample rates) aren't working as expected.

The big challenge: I can't figure out how to properly monitor sample rate changes. Listeners set up to track these changes don't trigger when the device joins a Zoom or FaceTime call.

Please, if anyone has experience with this or knows a solution, I'd really appreciate your help. Thanks in advance!
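As context for the monitoring question, here is a minimal sketch of one way to observe nominal sample rate changes on a Core Audio device with a property listener block; the device ID, queue label, and handler are assumptions for illustration, not the poster's code:

import CoreAudio
import Dispatch

// Sketch: watch kAudioDevicePropertyNominalSampleRate on a device (assumed deviceID).
func observeSampleRate(of deviceID: AudioObjectID) {
    var address = AudioObjectPropertyAddress(
        mSelector: kAudioDevicePropertyNominalSampleRate,
        mScope: kAudioObjectPropertyScopeGlobal,
        mElement: kAudioObjectPropertyElementMain)

    let queue = DispatchQueue(label: "sample-rate-observer") // hypothetical queue label
    let status = AudioObjectAddPropertyListenerBlock(deviceID, &address, queue) { _, _ in
        var rate = Float64(0)
        var size = UInt32(MemoryLayout<Float64>.size)
        var rateAddress = AudioObjectPropertyAddress(
            mSelector: kAudioDevicePropertyNominalSampleRate,
            mScope: kAudioObjectPropertyScopeGlobal,
            mElement: kAudioObjectPropertyElementMain)
        if AudioObjectGetPropertyData(deviceID, &rateAddress, 0, nil, &size, &rate) == noErr {
            print("Nominal sample rate is now \(rate) Hz")
        }
    }
    if status != noErr { print("Listener registration failed: \(status)") }
}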
0 replies · 0 boosts · 56 views · Apr ’25

GarageBand displaying error 100001 when loading up some AU plugins
I recently got some plugins from Universal Audio and have licensed them properly through both UA and iLok manager. Whenever I try to load the plugins (specifically from UA) in GarageBand, it first shows an "NSCreateObjectFileImageFromMemory-p47UEwps" error because the developer cannot be verified. After clicking either 'show in finder' or 'okay', it opens the plugin in a form without its GUI, showing that it is not licensed (even though it is). It also displays error code 100001. I have tried only some basic troubleshooting, like restarting the DAW and my computer and reinstalling/relicensing the software. I don't know if the macOS version has anything to do with it, but for some reason I just can't get it to work.
1 reply · 0 boosts · 378 views · Jan ’25

Accessory not supported by this device
Hi, I've had a new deck installed in my car for about 1.5 weeks and I'm having compatibility issues with my 15PM. It happens both wired and wirelessly: I get the error "Accessory not supported by this device". It used to happen all the time; now it's 50/50, and sometimes it works. I've removed and re-added the Bluetooth pairing multiple times on both the phone and the deck. I bought a Belkin USB-C to USB-A cable today and it seemed to fix it, but the problem comes back. I've also changed the setting Face ID & Passcode > Allow Access When Locked > Accessories. The car stereo installer reckons it's definitely an issue with the phone, not the deck, and I'm inclined to believe him since the error says "by this device". Any advice appreciated.
0 replies · 0 boosts · 212 views · Aug ’25

TTS Audio Unit Extension: File Write Access in App Group Container Denied Despite Proper Entitlements
I'm developing a TTS Audio Unit Extension that needs to write trace/log files to a shared App Group container. While the main app can successfully create and write files to the container, the extension gets sandbox denied errors despite having proper App Group entitlements configured.

Setup:
The main app (Flutter) and the TTS Audio Unit Extension share the same App Group.
The App Group is properly configured in the developer portal and in the entitlements.
The main app successfully creates and uses files in the container.
The container structure shows existing directories (config/, dictionary/) with populated files.
Both targets have the App Group capability enabled and entitlements set.

Current behavior:
The extension can access/read the App Group container and can see existing directories and files, but all write attempts are blocked with "sandbox deny(1) file-write-create" errors.

Code example:

const char* createSharedGroupPathWithComponent(const char* groupId, const char* component) {
    NSString* groupIdStr = [NSString stringWithUTF8String:groupId];
    NSString* componentStr = [NSString stringWithUTF8String:component];
    NSURL* url = [[NSFileManager defaultManager] containerURLForSecurityApplicationGroupIdentifier:groupIdStr];
    NSURL* fullPath = [url URLByAppendingPathComponent:componentStr];

    NSError *error = nil;
    if (![[NSFileManager defaultManager] createDirectoryAtPath:fullPath.path
                                   withIntermediateDirectories:YES
                                                    attributes:nil
                                                         error:&error]) {
        NSLog(@"Unable to create directory %@", error.localizedDescription);
    }
    return [[fullPath path] UTF8String];
}

Error output:

Sandbox: simaromur-extension(996) deny(1) file-write-create /private/var/mobile/Containers/Shared/AppGroup/36CAFE9C-BD82-43DD-A962-2B4424E60043/trace

Key questions:
Are there additional entitlements required for TTS Audio Unit Extensions to write to App Group containers?
Is this a known limitation of TTS Audio Unit Extensions?
What is the recommended way to handle logging/tracing in TTS Audio Unit Extensions?
If writing to App Group containers is not supported, what alternatives are available?

Current entitlements:

<dict>
    <key>com.apple.security.application-groups</key>
    <array>
        <string>group.com.<company>.<appname></string>
    </array>
</dict>
0 replies · 0 boosts · 89 views · Apr ’25

Is there an error with SpatialAudioCLI?
Hi everyone, I downloaded the source code EditingSpatialAudioWithAnAudioMix.zip from https://developer.apple.com/documentation/Cinematic/editing-spatial-audio-with-an-audio-mix, and when I carried out the action named "process" on the command line, the program crashed. From the source code, I found that the value of componentType is set to kAudioUnitType_FormatConverter:

// The actual `AudioUnit`.
public var auAudioMix = AVAudioUnitEffect()

init() {
    // Generate a component description for the audio unit.
    let componentDescription = AudioComponentDescription(
        componentType: kAudioUnitType_FormatConverter,
        componentSubType: kAudioUnitSubType_AUAudioMix,
        componentManufacturer: kAudioUnitManufacturer_Apple,
        componentFlags: 0,
        componentFlagsMask: 0)
    auAudioMix = AVAudioUnitEffect(audioComponentDescription: componentDescription)
}

But according to the documentation at https://developer.apple.com/documentation/avfaudio/avaudiouniteffect/init(audiocomponentdescription:), it seems that componentType cannot be set to kAudioUnitType_FormatConverter. Has anyone encountered this problem?
1 reply · 0 boosts · 110 views · 2w

AVAssetWriterInput Crash on appendSampleBuffer Converting PCM
Overview: We are producing audio in real time from an editing application and are trying to put it on an HLS stream. We attempt to submit PCM samples through an audio writer input but are getting a crash after a certain number of samples have been appended. Depending on the number of audio frames in the PCM buffer, we might get more iterations before the crash, but it always has the same traceback (see below).

Code: The setup is rather simple. We took inspiration from a few sources around the web.

NSMutableDictionary *audio = [[NSMutableDictionary alloc] init];
[audio setObject:@(kAudioFormatMPEG4AAC) forKey:AVFormatIDKey];
[audio setObject:[NSNumber numberWithInt:config.audioSampleRate] // 48000
          forKey:AVSampleRateKey];
[audio setObject:[NSNumber numberWithInt:config.audioChannels] // 2
          forKey:AVNumberOfChannelsKey];
[audio setObject:@160000 forKey:AVEncoderBitRateKey];
m_audioConfig = [[NSDictionary alloc] initWithDictionary:audio];

m_audio = [[AVAssetWriterInput alloc] initWithMediaType:AVMediaTypeAudio outputSettings:m_audioConfig];

AVAudioFrameCount audioFrames = BUFFER_SAMPLES * bCount;
AVAudioPCMBuffer *pcmBuffer = [[AVAudioPCMBuffer alloc] initWithPCMFormat:m_full.pcmFormat frameCapacity:audioFrames];
pcmBuffer.frameLength = pcmBuffer.frameCapacity;

AudioChannelLayout layout;
memset(&layout, 0, sizeof(layout));
layout.mChannelLayoutTag = kAudioChannelLayoutTag_Stereo;

CMFormatDescriptionRef format;
OSStatus stats = CMAudioFormatDescriptionCreate(
    kCFAllocatorDefault,
    pcmBuffer.format.streamDescription,
    sizeof(layout), &layout,
    0, nil, nil,
    &format
);

for (int i = 0; i < bCount; i++) {
    AudioPCM pcm;
    audioCallback->callback(pcm);
    memcpy(*(pcmBuffer.int16ChannelData) + (bufferSize * i), pcm.data, bufferSize);
}

size_t samplesConsumed = BUFFER_SAMPLES * bCount;

CMSampleBufferRef sampleBuffer;
CMSampleTimingInfo timing;
timing.duration = CMTimeMake(1, config.audioSampleRate);
timing.presentationTimeStamp = presentationTime;
timing.decodeTimeStamp = kCMTimeInvalid;

OSStatus ostatus = CMSampleBufferCreate(
    kCFAllocatorDefault,
    nil, false, nil, nil,
    format,
    (CMItemCount)pcmBuffer.frameLength,
    1, &timing, 0, nil,
    &sampleBuffer
);

ostatus = CMSampleBufferSetDataBufferFromAudioBufferList(
    sampleBuffer,
    kCFAllocatorDefault,
    kCFAllocatorDefault,
    kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment,
    pcmBuffer.audioBufferList
);
if (ostatus != noErr) {
    NSLog(@"fill audio sample from buffer list failed: %s", logAudioError(ostatus));
    return;
}

ostatus = CMSampleBufferSetDataReady(sampleBuffer);
if (ostatus != noErr) {
    NSLog(@"set sample buffer ready failed: %s", logAudioError(ostatus));
    return;
}

// Finally we can attach it, then shove the presentation time forward
[m_audio appendSampleBuffer:sampleBuffer];

The crash: The crash points towards some level of deallocation when the conversion tooling is done or has enough samples to process an output packet? It's hard to say.
0  caulk                     0x1a1e9532c caulk::alloc::tiered_allocator<caulk::alloc::size_range_tier<0ul, 1008ul, caulk::alloc::tree_allocator<caulk::alloc::chunk_allocator<caulk::alloc::page_allocator, caulk::alloc::bitmap_allocator, caulk::alloc::embed_block_memory, 16384ul, 16ul, 6ul>>>, caulk::alloc::size_range_tier<1009ul, 256000ul, caulk::alloc::guarded_edges_allocator<caulk::alloc::consolidating_free_map<caulk::alloc::page_allocator, 10485760ul>, 4ul>>, caulk::alloc::tracking_allocator<caulk::alloc::page_allocator>>::deallocate(caulk::alloc::block, unsigned long) + 636
1  AudioToolboxCore          0x1993fbfe4 ExtendedAudioBufferList_Destroy + 112
2  AudioToolboxCore          0x1993d5fe0 std::__1::__optional_destruct_base<ACCodecOutputBuffer, false>::~__optional_destruct_base[abi:ne180100]() + 68
3  AudioToolboxCore          0x1993d5f48 acv2::CodecConverter::~CodecConverter() + 196
4  AudioToolboxCore          0x1993d5e5c acv2::CodecConverter::~CodecConverter() + 16
5  AudioToolboxCore          0x1992574d8 std::__1::vector<std::__1::unique_ptr<acv2::AudioConverterBase, std::__1::default_delete<acv2::AudioConverterBase>>, std::__1::allocator<std::__1::unique_ptr<acv2::AudioConverterBase, std::__1::default_delete<acv2::AudioConverterBase>>>>::__clear[abi:ne180100]() + 84
6  AudioToolboxCore          0x199259acc acv2::AudioConverterChain::RebuildConverterChain(acv2::ChainBuildSettings const&) + 116
7  AudioToolboxCore          0x1992596ec acv2::AudioConverterChain::SetProperty(unsigned int, unsigned int, void const*) + 1808
8  AudioToolboxCore          0x199324acc acv2::AudioConverterV2::setProperty(unsigned int, unsigned int, void const*) + 84
9  AudioToolboxCore          0x199327f08 with_resolved(OpaqueAudioConverter*, caulk::function_ref<int (AudioConverterAPI*)>) + 60
10 AudioToolboxCore          0x1993281e4 AudioConverterSetProperty + 72
11 MediaToolbox              0x1a7566c2c FigSampleBufferProcessorCreateWithAudioCompression + 2296
12 MediaToolbox              0x1a754db08 0x1a70b5000 + 4819720
13 MediaToolbox              0x1a754dab4 FigMediaProcessorCreateForAudioCompressionWithFormatWriter + 100
14 MediaToolbox              0x1a77ebb98 0x1a70b5000 + 7564184
15 MediaToolbox              0x1a7804158 0x1a70b5000 + 7663960
16 MediaToolbox              0x1a7801da0 0x1a70b5000 + 7654816
17 AVFCore                   0x1ada530c4 -[AVFigAssetWriterTrack addSampleBuffer:error:] + 192
18 AVFCore                   0x1ada55164 -[AVFigAssetWriterAudioTrack _flushPendingSampleBuffersReturningError:] + 500
19 AVFCore                   0x1ada55354 -[AVFigAssetWriterAudioTrack addSampleBuffer:error:] + 472
20 AVFCore                   0x1ada4ebf0 -[AVAssetWriterInputWritingHelper appendSampleBuffer:error:] + 128
21 AVFCore                   0x1ada4c354 -[AVAssetWriterInput appendSampleBuffer:] + 168
22 lib_devapple_hls.dylib    0x115d2c7cc detail::AppleHLSImplementation::audioRuntime() + 1052
23 lib_devapple_hls.dylib    0x115d2d094 void* std::__1::__thread_proxy[abi:ne180100]<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct>>, void (detail::AppleHLSImplementation::*)(), detail::AppleHLSImplementation*>>(void*) + 72
24 libsystem_pthread.dylib   0x196e5b2e4 _pthread_start + 136

Any insight would be welcome!
2 replies · 0 boosts · 162 views · Jun ’25

Custom AVAssetResourceLoaderDelegate on iOS 15 fails to load large files
In our app we have implemented an AVAssetResourceLoaderDelegate to handle encrypted downloaded files. We have it working on all iOS versions, but we are seeing issues on iOS 15 (15.8.3) with large files (> 1 GB). We have so far seen two cases: either the load method on the AVURLAsset fails early and throws an unknown error code, or it starts requesting more data than the device has available RAM. The CPU usage is almost always over 100%, even after pausing playback. The memory issue can happen even though the player has successfully started playback. When running this on devices running iOS 16 and above, we set isEntireLengthAvailableOnDemand to true on the AVAssetResourceLoadingContentInformationRequest. This seems to be key to solving the issue on those devices that support it; if we set the property to false we see the same memory issue as on iOS 15. So we have a solution for iOS 16 and upwards but are at a loss for how to handle iOS 15. Is there something we have overlooked, or is it in fact an issue with that iOS version?
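For context, a minimal sketch of the iOS 16+ path the post describes, with hypothetical names (handleContentInformation, fileLength); the flag on the content information request only exists on newer OS versions, so it cannot help the iOS 15 case:

import AVFoundation

// Sketch: fill in the content information request as described in the post.
// The content type and fileLength are assumptions for illustration.
func handleContentInformation(_ request: AVAssetResourceLoadingContentInformationRequest,
                              fileLength: Int64) {
    request.contentType = AVFileType.mp4.rawValue
    request.contentLength = fileLength
    request.isByteRangeAccessSupported = true
    if #available(iOS 16.0, *) {
        // Lets AVFoundation treat the whole resource as available on demand,
        // which the post reports avoids the runaway memory growth on iOS 16+.
        request.isEntireLengthAvailableOnDemand = true
    }
}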
0 replies · 0 boosts · 463 views · Dec ’24

AVAudioEngine: Split 1x4 channel bus into 4x1 channel busses?
I'm using a 4-channel USB audio interface with 4 microphones and want to process them through 4 independent effect chains. However, the output from AVAudioInputNode is a single 4-channel bus. How can I split this into 4 mono busses? The following code splits the input into 4 copies and routes them through the effects, but each bus contains all four channels. How can I remap the channels to remove the unwanted channels from each bus? I tried using channelMap on the mixer node but that had no effect. I'm currently using this code primarily on iOS, but it should be portable between iOS and macOS. It would be possible to do this through a matrix mixer node, but that seems like complete overkill for such a basic operation; I'm already using a matrix mixer to combine the inputs, and it's not well supported in AVAudioEngine.

AVAudioInputNode *inputNode = [engine inputNode];
[inputNode setVoiceProcessingEnabled:NO error:nil];

NSMutableArray *micDestinations = [NSMutableArray arrayWithCapacity:trackCount];
for (i = 0; i < trackCount; i++) {
    fixMicFormat[i] = [AVAudioMixerNode new];
    [engine attachNode:fixMicFormat[i]];

    // And create reverb/compressor and eq the same way...

    [engine connect:reverb[i] to:matrixMixerNode fromBus:0 toBus:i format:nil];
    [engine connect:eq[i] to:reverb[i] fromBus:0 toBus:0 format:nil];
    [engine connect:compressor[i] to:eq[i] fromBus:0 toBus:0 format:nil];
    [engine connect:fixMicFormat[i] to:compressor[i] fromBus:0 toBus:0 format:nil];

    [micDestinations addObject:[[AVAudioConnectionPoint alloc] initWithNode:fixMicFormat[i] bus:0]];
}

AVAudioFormat *inputFormat = [inputNode outputFormatForBus:1];
[engine connect:inputNode toConnectionPoints:micDestinations fromBus:1 format:inputFormat];
2 replies · 0 boosts · 224 views · Oct ’25

watchOS longFormAudio session cannot deactivate
My workout watch app supports audio playback during exercise sessions. When users carry an Apple Watch, an iPhone, and AirPods, with the AirPods connected to the iPhone, I want to route audio from the Apple Watch to the AirPods for playback. I've implemented this functionality using the following code.

try? session.setCategory(.playback, mode: .default, policy: .longFormAudio, options: [])
try await session.activate()

When users are playing music on the iPhone and trigger my code in the watch app, the Apple Watch correctly guides users to select AirPods, pauses the iPhone's music, and plays my audio. However, when playback finishes and I end the session using the code below:

try session.setActive(false, options: [.notifyOthersOnDeactivation])

the iPhone doesn't automatically resume the previously interrupted music playback; it requires manual intervention. Is this expected behavior, or am I missing other important steps in my code?
1 reply · 0 boosts · 241 views · 1w

watchOS 26: Audio Playback Interrupted by Fitness Notifications Across Multiple Apps
After upgrading to watchOS 26, users report that when playing music on Apple Watch, if a fitness reminder is received, the music automatically pauses and users need to manually tap the play button to resume playback. This happens with multiple music and podcast apps, and the issue did not exist before the upgrade. We would like to know whether this is an Apple bug or whether any special development configuration is needed.
1 reply · 0 boosts · 145 views · Oct ’25

Essentials of macOS to read and write mp3 and mp4 audio files
Hi, on macOS I used to open MP3 and MP4 files with ExtAudioFile. For a few years now this hasn't worked anymore, so I decided to try a different macOS API using the AudioFileID from the AudioToolbox framework, and wrote a test: https://gist.github.com/joelkraehemann/7f5b241b52ca38c3a765c138fb647588 It fails right here: AudioFileOpenWithCallbacks() It returns OSStatus error 1954115647, which means kAudioFileUnsupportedFileTypeError. The filename was set to an MP4 file: ~/Music/test.mp4 How can I fix this? Regards, Joël
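For comparison with the callback-based path in the post, a minimal sketch of opening the same file by URL with an explicit file type hint; the path and the choice of kAudioFileMPEG4Type are assumptions, not a confirmed fix:

import AudioToolbox
import Foundation

// Sketch: open an MP4/M4A audio file by URL, passing a file type hint.
let url = URL(fileURLWithPath: NSString(string: "~/Music/test.mp4").expandingTildeInPath)
var fileID: AudioFileID?
let status = AudioFileOpenURL(url as CFURL,
                              .readPermission,
                              kAudioFileMPEG4Type,   // hint; pass 0 to let Core Audio guess
                              &fileID)
if status == noErr, let fileID {
    var dataFormat = AudioStreamBasicDescription()
    var size = UInt32(MemoryLayout<AudioStreamBasicDescription>.size)
    AudioFileGetProperty(fileID, kAudioFilePropertyDataFormat, &size, &dataFormat)
    print("Sample rate: \(dataFormat.mSampleRate), channels: \(dataFormat.mChannelsPerFrame)")
    AudioFileClose(fileID)
} else {
    print("AudioFileOpenURL failed: \(status)")
}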
1 reply · 0 boosts · 123 views · Jun ’25

AVAudioSessionCategoryPlayback is not allowed while CallKit call is active
We require assistance in resolving a critical audio design conflict within our Push-to-Talk (PTT) application. Our current volume amplification strategy—which relies on applying a GAIN factor to PCM samples in conjunction with setting the AVAudioSession category to Playback—is working successfully when PTT is used independently. However, upon integrating and reporting the same PTT call through the CallKit framework, this amplification effect is lost. The CallKit integration appears to be forcing a different, non-amplifying audio session category or configuration, negatively impacting the user's perceived call volume. We need guidance on how to maintain the AVAudioSessionCategoryPlayback setting, or an equivalent high-volume configuration, while operating under the control of CallKit.
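As a side note, the PCM gain step mentioned above can be illustrated with a short sketch; the function name, buffer handling, and clamping are assumptions for illustration, not the app's actual code:

import AVFoundation

// Sketch: scale the float samples of a PCM buffer by a gain factor,
// clamping to [-1, 1] to avoid clipping.
func applyGain(_ gain: Float, to buffer: AVAudioPCMBuffer) {
    guard let channels = buffer.floatChannelData else { return }
    let frameCount = Int(buffer.frameLength)
    for channel in 0..<Int(buffer.format.channelCount) {
        let samples = channels[channel]
        for frame in 0..<frameCount {
            samples[frame] = max(-1.0, min(1.0, samples[frame] * gain))
        }
    }
}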
3 replies · 0 boosts · 217 views · 4w

Unexpected AVAudioSession behavior after iOS 18.5 causing audio loss in VoIP calls
After updating to iOS 18.5, we’ve observed that outgoing audio from our app intermittently stops being transmitted during VoIP calls using AVAudioSession configured with .playAndRecord and .voiceChat. The session is set active without errors, and interruptions are handled correctly, yet audio capture suddenly ceases mid-call. This was not observed in earlier iOS versions (≤ 18.4). We’d like to confirm if there have been any recent changes in AVAudioSession, CallKit, or related media handling that could affect audio input behavior during long-running calls.

func configureForVoIPCall() throws {
    try setCategory(
        .playAndRecord,
        mode: .voiceChat,
        options: [.allowBluetooth, .allowBluetoothA2DP, .defaultToSpeaker])
    try setActive(true)
}
1 reply · 0 boosts · 172 views · Aug ’25

Can't set AVAudio sampleRate and installTap needs bufferSize 4800 at minimum
Two issues:

1. No matter what I set in try audioSession.setPreferredSampleRate(x), the sample rate on both iOS and macOS is always 48000 when the output goes through the speaker, and 24000 when my AirPods connect to an iPhone/iPad.

2. I'm checking the current output loudness to animate a 3D character, using

mixerNode.installTap(onBus: 0, bufferSize: y, format: nil) { [weak self] buffer, time in
    Task { @MainActor in
        // calculate rms and animate character accordingly

but any buffer size under 4800 is just ignored, and the buffers I get are 4800 frames long. This is OK when the sample rate is 48000, as 10 samples per second lead to decent visual results. But when AirPods connect, the sample rate is 24000, which means only 5 samples per second, so the character animation looks lame.

My AVAudioEngine setup is the following:

audioEngine.connect(playerNode, to: pitchShiftEffect, format: format)
audioEngine.connect(pitchShiftEffect, to: mixerNode, format: format)
audioEngine.connect(mixerNode, to: audioEngine.outputNode, format: nil)

Now, I'd be fine if the outputNode runs at whatever rate it needs, as long as my tap gets at least 10 samples per second.

PS: Specifying my preferred format in

let format = AVAudioFormat(standardFormatWithSampleRate: 48_000, channels: 2)!
mixerNode.installTap(onBus: 0, bufferSize: y, format: format)

doesn't change anything either.
1 reply · 0 boosts · 337 views · Aug ’25
