Dive into the technical aspects of audio on your device, including codecs, format support, and customization options.

Audio Documentation

Posts under Audio subtopic


scaleTimeRange causes noise in sound playback
I'm using AVFoundation to build a multi-track editor app that can insert multiple tracks and clips, including scaling some clips to change their playback speed (I'm also not sure whether AVFoundation is the best choice for this). After scaling a clip with the scaleTimeRange API, there is a short burst of noise during playback. Also, playback of the AVMutableComposition via AVPlayer with an AVPlayerItem is sometimes fine, but after exporting with AVAssetReader the resulting file contains short bursts of noise. I'm not sure why. Here is an example project, which builds and runs directly: https://github.com/luckysmg/daily_images/raw/refs/heads/main/TestDemo.zip
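For reference, a minimal sketch of the API in question, assuming a simple single-track composition (the asset and durations are illustrative, not taken from the linked project):

```swift
import AVFoundation

// Sketch: insert the first 4 seconds of an asset, then compress that
// range into 2 seconds, which plays it back at 2x speed.
func makeScaledComposition(from sourceAsset: AVAsset) throws -> AVMutableComposition {
    let composition = AVMutableComposition()
    let range = CMTimeRange(start: .zero,
                            duration: CMTime(seconds: 4, preferredTimescale: 600))
    try composition.insertTimeRange(range, of: sourceAsset, at: .zero)
    composition.scaleTimeRange(range,
                               toDuration: CMTime(seconds: 2, preferredTimescale: 600))
    return composition
}
```

Audible clicks at scaled-clip boundaries are often a resampling/discontinuity artifact, so it is worth comparing the audio time-pitch algorithm settings used for playback and for export.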
0 replies · 0 boosts · 125 views · Jul ’25
Playing periodic audio in background using AVFoundation - facing audio session startup failure
Hello everyone, I'm new to Swift development and have been working on an audio module that plays a specific sound at regular intervals, similar to a workout timer that signals switching exercises every few minutes. Following the AVFoundation documentation, I'm configuring my audio session like this:

```swift
let session = AVAudioSession.sharedInstance()
try session.setCategory(
    .playback,
    mode: .default,
    options: [.interruptSpokenAudioAndMixWithOthers, .duckOthers]
)
self.engine.attach(self.player)
self.engine.connect(self.player, to: self.engine.outputNode, format: self.audioFormat)
try? session.setActive(true)
```

When it's time to play cues, I schedule playback on a DispatchQueue:

```swift
// scheduleAudio uses DispatchQueue
self.scheduleAudio(at: interval.start) {
    do {
        try audio.engine.start()
        audio.node.play()
        for sample in interval.samples {
            audio.node.scheduleBuffer(sample.buffer, at: AVAudioTime(hostTime: sample.hostTime))
        }
    } catch {
        print("Audio activation failed: \(error)")
    }
}
```

This works perfectly in the foreground. But once the app goes into the background, the scheduled callback runs, yet the audio engine fails to start, resulting in an error with code 561015905. Interestingly, if the app is already playing audio before going to the background, the scheduled sounds continue to play as expected. I have added the required background audio mode to my Info.plist by including the UIBackgroundModes key with the value audio. Is there anything else I should configure? What is the best practice for playing periodic audio while the app runs in the background? How do apps like turn-by-turn navigation handle continuous audio playback in the background? Any advice or pointers would be greatly appreciated!
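Error 561015905 is the four-character code '!pla' (AVAudioSessionErrorCodeCannotStartPlaying): once the app has been suspended in the background, the session generally can't be (re)activated until the app returns to the foreground. One common approach, sketched below under the assumption that the cue format is known up front, is to start the engine once in the foreground and keep it running between cues, so there is never a cold start from the background:

```swift
import AVFoundation

// Hedged sketch, not a vetted recipe: keep the engine alive across cues.
final class CuePlayer {
    private let engine = AVAudioEngine()
    private let player = AVAudioPlayerNode()

    // Call while the app is still in the foreground.
    func start(format: AVAudioFormat) throws {
        try AVAudioSession.sharedInstance().setCategory(.playback, mode: .default)
        try AVAudioSession.sharedInstance().setActive(true)
        engine.attach(player)
        engine.connect(player, to: engine.mainMixerNode, format: format)
        try engine.start()   // deliberately never stopped between cues
        player.play()
    }

    // From the timer callback: only schedule buffers; don't touch
    // the session or restart the engine while backgrounded.
    func playCue(_ buffer: AVAudioPCMBuffer, at time: AVAudioTime?) {
        player.scheduleBuffer(buffer, at: time)
    }
}
```

The trade-off: an engine that renders continuously counts as "playing audio" for the background-audio entitlement, but it also keeps the audio hardware active between cues.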
0 replies · 0 boosts · 183 views · Jul ’25
Audio always comes from the last-connected external USB mic
Hello! I have two mics connected to a USB hub, and the hub is connected to my iPad. Both mics appear in the audio session's list of available inputs. The problem is that regardless of which mic I select in my app (using setPreferredInput() on the audio session), the audio keeps coming from whichever mic was most recently connected to the hub. Does anyone know whether this is a limitation in iPadOS/iOS?
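For context, a minimal sketch of the selection path described above (the name-matching predicate is illustrative):

```swift
import AVFoundation

// Ask the session to prefer a specific input port by name.
func selectInput(named name: String) throws {
    let session = AVAudioSession.sharedInstance()
    guard let port = session.availableInputs?
        .first(where: { $0.portName == name }) else { return }
    // setPreferredInput expresses a *preference*; the system may still
    // route from another port, which matches the behavior in the post.
    try session.setPreferredInput(port)
}
```

Checking session.currentRoute.inputs after the call shows which port actually won the routing decision.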
1 reply · 1 boost · 185 views · Jul ’25
The files generated using AVAudioRecorder have a constant size of only 4 KB
Hello. My app uses AVAudioRecorder to generate recording files, and some of them are consistently only 4 KB in size. Most users generate audio files normally; only a few users hit this occasionally. After the app is uninstalled and reinstalled it works normally, but the problem reappears after a period of time. I have compared the problematic files: each one is identical, and none can be played. I added the audioRecorderDidFinishRecording delegate method, and it reports that the recording completed normally. The users also report that recording appears normal, but the generated file is broken. How should I handle this issue? Looking forward to your reply.

```objc
- (void)startRecordWithOrderID:(NSString *)orderID {
    AVAudioSession *audioSession = [AVAudioSession sharedInstance];
    [audioSession setCategory:AVAudioSessionCategoryRecord error:nil];
    [audioSession setActive:YES error:nil];

    NSMutableDictionary *settings = [[NSMutableDictionary alloc] init];
    [settings setObject:[NSNumber numberWithFloat:8000.0] forKey:AVSampleRateKey];
    [settings setObject:[NSNumber numberWithInt:kAudioFormatLinearPCM] forKey:AVFormatIDKey];
    [settings setObject:[NSNumber numberWithInt:16] forKey:AVLinearPCMBitDepthKey];
    [settings setObject:[NSNumber numberWithInt:1] forKey:AVNumberOfChannelsKey];
    [settings setObject:[NSNumber numberWithBool:NO] forKey:AVLinearPCMIsBigEndianKey];
    [settings setObject:[NSNumber numberWithBool:NO] forKey:AVLinearPCMIsFloatKey];

    NSString *path = [WDUtility createDirInDocument:@"audios" withOrderID:orderID withPathExtension:@"wav"];
    NSURL *tmpFile = [NSURL fileURLWithPath:path];
    recorder = [[AVAudioRecorder alloc] initWithURL:tmpFile settings:settings error:nil];
    [recorder setDelegate:self];
    [recorder prepareToRecord];
    [recorder record];
}
```
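One hedged observation rather than a confirmed diagnosis: every call above discards its NSError and boolean result, and a constant 4 KB WAV is roughly a header with little or no sample data, which is consistent with session activation or [recorder record] silently failing on the affected devices. A Swift sketch of the same setup with each failure point surfaced (names are illustrative):

```swift
import AVFoundation

// Hypothetical error-surfacing variant of the recorder setup above.
func startRecording(to url: URL) throws -> AVAudioRecorder {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.record)
    try session.setActive(true)   // throws if activation is refused

    let settings: [String: Any] = [
        AVFormatIDKey: kAudioFormatLinearPCM,
        AVSampleRateKey: 8000.0,
        AVNumberOfChannelsKey: 1,
        AVLinearPCMBitDepthKey: 16,
        AVLinearPCMIsBigEndianKey: false,
        AVLinearPCMIsFloatKey: false,
    ]
    let recorder = try AVAudioRecorder(url: url, settings: settings)
    guard recorder.prepareToRecord(), recorder.record() else {
        throw NSError(domain: NSOSStatusErrorDomain, code: -1,
                      userInfo: [NSLocalizedDescriptionKey: "record() returned false"])
    }
    return recorder
}
```

Logging those failures in the field would at least distinguish a session/permission problem from a file-writing one.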
0 replies · 0 boosts · 106 views · Jul ’25
Graceful shutdown during background audio playback.
Hello. My team and I believe we have an issue where our app is asked to shut down gracefully, followed by a SIGTERM. As we’ve learned, this is normally not a problem. However, it also seems to happen while our app (an audio streamer) is actively playing in the background. From our perspective, starting playback indicates strong user intent. We understand there can be extreme circumstances where background audio needs to be killed, but should that be considered part of normal operation? We hope not. All we see in the logs is the graceful shutdown request. We can say with high certainty that it’s happening, though, as we know playback is running within 0.5 seconds of the crash, without any other tracked user interaction. Can you verify whether this is intended behavior, and whether there’s something we can do about it from our end? From our logs it doesn’t look to be related to memory usage, either within the app or in the system as a whole. Best, John
0 replies · 1 boost · 95 views · Jun ’25
AVAssetResourceLoaderDelegate for radio stream
Hi everyone, I’m trying to use AVAssetResourceLoaderDelegate to handle a live radio stream (e.g. an Icecast/HTTP stream). My goal is to have access to the last 30 seconds of audio data during playback, so I can analyze it for specific audio patterns in near-real-time. I’ve implemented a custom resource loader that works fine for podcasts and static files, where the file size and content length are known. However, for infinite live streams, my current implementation stops receiving new loading requests after the first one is served. As a result, playback either stalls or fails to continue. Has anyone successfully used AVAssetResourceLoaderDelegate with a continuous radio stream? Or can you suggest a better approach for buffering and analyzing live audio? Any tips, examples, or advice would be appreciated. Thanks!
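For reference, a skeletal sketch of the delegate involved, assuming a custom URL scheme so AVFoundation routes requests to it. For an unbounded live stream there is no honest contentLength to report, which is one plausible reason the loader stops issuing requests:

```swift
import AVFoundation

final class LiveStreamLoader: NSObject, AVAssetResourceLoaderDelegate {
    func resourceLoader(_ resourceLoader: AVAssetResourceLoader,
                        shouldWaitForLoadingOfRequestedResource loadingRequest: AVAssetResourceLoadingRequest) -> Bool {
        if let info = loadingRequest.contentInformationRequest {
            info.contentType = "public.mp3"          // placeholder content type
            info.isByteRangeAccessSupported = false  // a live stream can't seek
        }
        // Feed network data incrementally:
        //   loadingRequest.dataRequest?.respond(with: chunk)
        // then call loadingRequest.finishLoading() or finishLoading(with:).
        return true
    }
}
```

An alternative worth weighing: let AVPlayer play the stream URL directly and tee the raw bytes through a separate URLSession connection for analysis, so the playback path stays untouched.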
0 replies · 0 boosts · 116 views · Jun ’25
iOS 26.0 (23A5276f) - Bluetooth Call Audio Broken (AirPods + Car)
I’m experiencing a Bluetooth audio issue on iOS 26.0 (build 23A5276f): I cannot make or receive phone calls properly using Bluetooth devices. This affects both my car’s Bluetooth system and my AirPods Pro (2nd generation). Notably:
• Regular phone calls have no audio (either I can’t hear the other person, or they can’t hear me).
• WhatsApp and other VoIP apps work fine with the same Bluetooth devices.
• Media playback (music, video, etc.) works without issues over Bluetooth.
It seems this bug is limited to the native Phone app, or to the system audio routing for regular cellular calls. Please advise if this is a known issue or if a fix is expected in upcoming beta releases.
1 reply · 1 boost · 270 views · Jun ’25
SpeechAnalyzer speech-to-text WWDC sample app
I am using the sample app from https://developer.apple.com/videos/play/wwdc2025/277/?time=763. I installed it on an iPhone 15 Pro with iOS 26 beta 1 and was able to get good transcription with it. The app did crash sometimes while transcribing, and I was going to post here with the details. I then installed iOS 26 beta 2 and uninstalled the sample app. Now every time I try to run the sample app on the 15 Pro I get this message:

```
SpeechAnalyzer: Input loop ending with error: Error Domain=SFSpeechErrorDomain Code=10 "Cannot use modules with unallocated locales [en_US (fixed en_US)]" UserInfo={NSLocalizedDescription=Cannot use modules with unallocated locales [en_US (fixed en_US)]}
```

I can’t continue our work toward using SpeechAnalyzer with this error. I have set breakpoints on all the catch handlers, and none of them catch it. My phone region is "United States".
21 replies · 8 boosts · 1.6k views · Jun ’25
Creating RTP-MIDI Sessions via MIDINetworkSession C API (dlopen/dlsym) on macOS 15?
I’m an amateur developer working on a free utility for composers/producers, for which the macOS release needs to create and name RTP-MIDI sessions in Audio MIDI Setup from the command line (so I can ship a small C helper instead of telling users to click through the UI). Here’s what I’ve tried so far, without luck:
• Plist hacks: Injecting entries into ~/Library/Audio/MIDI Configurations/*.mcfg works when AMS is closed, but AMS immediately locks and reverts my changes when it’s open.
• CoreMIDI C API: I can create virtual ports with MIDISourceCreate, but attempting MIDIObjectGetDataProperty on the apple.midirtp.session plugin always returns err –10836.
• Obj-C & Swift: Loading MIDINetworkSession and calling defaultSession, init, setNetworkName: and setting enabled = YES doesn’t produce a new session object in the Network panel.
• dlopen/dlsym: I extracted the real CoreMIDI binary out of the dyld shared cache and tried binding _MIDINetworkSessionCreate, _SetName, _SetEnabled, etc., but all the symbols come back null or my tool segfaults.
• Plugin registration: I’ve pulled the factory UUID (70C9C5EA-7C65-11D8-B317-000393A34B5A) from /System/Library/Extensions/AppleMIDIRTPDriver.plugin/Contents/Info.plist and called CFPlugInRegisterFactories, but it still never exposes the session-creation calls.
At this point I’m convinced I’m either loading the wrong binary or missing one critical step in registering the RTP-MIDI plugin’s private API. Can anyone point me to:
• The exact path of the dylib or bundle that actually exports the MIDINetworkSessionCreate/MIDINetworkSessionSetName/MIDINetworkSessionSetEnabled symbols?
• A minimal working snippet (C or Obj-C) that reliably creates and names a Network-MIDI session?
Any pointers, sample code, or even ideas about where Apple hides this functionality on macOS 15 would be hugely appreciated. Thanks!
0 replies · 1 boost · 103 views · Jun ’25
Is the Call Translation API available for VoIP?
I might have misunderstood the docs, but is Call Translation going to be available for VoIP applications? E.g., in an already-connected VoIP call, would it be possible for Call Translation to be enabled on an iOS 26 device that supports Apple Intelligence? I have tried it personally and it doesn’t look like VoIP is supported, but I would love to confirm this. Reference: https://developer.apple.com/documentation/callkit/cxsettranslatingcallaction/
1 reply · 0 boosts · 69 views · Jun ’25
Unable to play audio via MusicKit
Hey folks, I'm suddenly running into an odd issue with an app that previously had a working MusicKit integration. I'm using ApplicationMusicPlayer to play Apple Music albums and songs. I'm testing on a physical device, signed in to an Apple ID, with a valid subscription. Apple Music via the first-party app works entirely fine on this device. Attempting to play back any content at all logs:

```
<ICUserIdentityStoreACAccountBackend: 0x1070bf3e0> Failed to initialize primary apple account, error=Error Domain=ICError Code=-7013 "Client is not entitled to access account store" UserInfo={NSDebugDescription=Client is not entitled to access account store}
[ICUserIdentityStore] - initializing account histories with activeAccountDSID = nil, activeLockerAccountDSID = nil, timestamp = 14605951908
[ICUserIdentityStore] Failed to fetch local store account with error: Error Domain=ICError Code=-7013 "Client is not entitled to access account store" UserInfo={NSDebugDescription=Client is not entitled to access account store}
```

The album artwork, track names, etc. all appear in the Control Center playback controls, but the music doesn't play. Trying to trigger playback from Control Center just skips to the next track, which doesn't play either. This exact code used to work. I have the MusicKit service selected in Apple Connect. Since this isn't entitlement-based, I'm not sure how else to check that I'm set up correctly. I've tried deleting/reinstalling the app, restarting the device, cleaning/rebuilding, and deleting DerivedData, to no avail. Any help? Running Xcode 16.4 (16F6), testing on iOS 18.5 (22F76).
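For comparison, a bare-bones sketch of the playback path in question, assuming a Song already fetched from the catalog (the -7013 account error above fires before any of this can succeed):

```swift
import MusicKit

// Minimal ApplicationMusicPlayer flow; `song` is a placeholder for an
// item obtained via MusicCatalogSearchRequest or a library request.
func play(_ song: Song) async throws {
    guard await MusicAuthorization.request() == .authorized else { return }
    let player = ApplicationMusicPlayer.shared
    player.queue = ApplicationMusicPlayer.Queue(for: [song])
    try await player.play()
}
```

If this exact flow used to work unchanged, the ICError -7013 points at account/entitlement state on the device rather than at the playback code itself.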
0 replies · 1 boost · 134 views · Jun ’25
How to capture audio from the stream that's playing on the speakers?
Good day, ladies and gents. I have an application that reads audio from the microphone. I'd like it to also be able to read from the Mac's audio output stream. (A bonus would be if it could detect when the Mac is playing music.) I'd eventually figure it out by reading docs, but if someone can give a hint, I'd be very grateful, and would owe you the libation of your choice. Here's the code used to set up the AudioUnit:

```objc
- (NSString *)configureAU
{
    AudioComponent component = NULL;
    AudioComponentDescription description;
    OSStatus err = noErr;
    UInt32 param;
    AURenderCallbackStruct callback;

    if (audioUnit) {
        AudioComponentInstanceDispose(audioUnit);   // was CloseComponent
        audioUnit = NULL;
    }

    // Open the AudioOutputUnit
    description.componentType = kAudioUnitType_Output;
    description.componentSubType = kAudioUnitSubType_HALOutput;
    description.componentManufacturer = kAudioUnitManufacturer_Apple;
    description.componentFlags = 0;
    description.componentFlagsMask = 0;
    if ((component = AudioComponentFindNext(NULL, &description))) {
        err = AudioComponentInstanceNew(component, &audioUnit);
        if (err != noErr) {
            audioUnit = NULL;
            return [NSString stringWithFormat:@"Couldn't open AudioUnit component (ID=%d)", err];
        }
    }

    // Configure the AudioOutputUnit:
    // You must enable the Audio Unit (AUHAL) for input and output on the same device.
    // In AudioUnitSetProperty the 4th parameter is an AudioUnitElement; for an
    // AudioOutputUnit the input element is '1' and the output element is '0'.
    param = 1;  // Enable input on the AUHAL
    err = AudioUnitSetProperty(audioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Input, 1, &param, sizeof(UInt32));
    chkerr("Couldn't set first EnableIO prop (enable input) (ID=%d)");

    param = 0;  // Disable output on the AUHAL
    err = AudioUnitSetProperty(audioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Output, 0, &param, sizeof(UInt32));
    chkerr("Couldn't set second EnableIO property on the audio unit (disable output) (ID=%d)");

    // Select the default input device
    param = sizeof(AudioDeviceID);
    AudioObjectPropertyAddress OutputAddr = { kAudioHardwarePropertyDefaultInputDevice, kAudioObjectPropertyScopeGlobal, kAudioObjectPropertyElementMaster };
    err = AudioObjectGetPropertyData(kAudioObjectSystemObject, &OutputAddr, 0, NULL, &param, &inputDeviceID);
    chkerr("Couldn't get default input device (ID=%d)");

    // Set the current device to the default input device
    err = AudioUnitSetProperty(audioUnit, kAudioOutputUnitProperty_CurrentDevice, kAudioUnitScope_Global, 0, &inputDeviceID, sizeof(AudioDeviceID));
    chkerr("Failed to hook up input device to our AudioUnit (ID=%d)");

    // Set up the input callback, to be called when the AUHAL has input data
    callback.inputProc = AudioInputProc;
    callback.inputProcRefCon = self;
    err = AudioUnitSetProperty(audioUnit, kAudioOutputUnitProperty_SetInputCallback, kAudioUnitScope_Global, 0, &callback, sizeof(AURenderCallbackStruct));
    chkerr("Could not install render callback on our AudioUnit (ID=%d)");

    // Get the hardware device format
    param = sizeof(AudioStreamBasicDescription);
    err = AudioUnitGetProperty(audioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 1, &deviceFormat, &param);
    chkerr("Could not get the hardware stream format (ID=%d)");

    // Twiddle the format to our liking
    audioChannels = MAX(deviceFormat.mChannelsPerFrame, 2);
    actualOutputFormat.mChannelsPerFrame = audioChannels;
    actualOutputFormat.mSampleRate = deviceFormat.mSampleRate;
    actualOutputFormat.mFormatID = kAudioFormatLinearPCM;
    actualOutputFormat.mFormatFlags = kAudioFormatFlagIsFloat | kAudioFormatFlagIsPacked | kAudioFormatFlagIsNonInterleaved;
    if (actualOutputFormat.mFormatID == kAudioFormatLinearPCM && audioChannels == 1)
        actualOutputFormat.mFormatFlags &= ~kLinearPCMFormatFlagIsNonInterleaved;
#if __BIG_ENDIAN__
    actualOutputFormat.mFormatFlags |= kAudioFormatFlagIsBigEndian;
#endif
    actualOutputFormat.mBitsPerChannel = sizeof(Float32) * 8;
    actualOutputFormat.mBytesPerFrame = actualOutputFormat.mBitsPerChannel / 8;
    actualOutputFormat.mFramesPerPacket = 1;
    actualOutputFormat.mBytesPerPacket = actualOutputFormat.mBytesPerFrame;

    // Set the AudioOutputUnit's output data format
    err = AudioUnitSetProperty(audioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 1, &actualOutputFormat, sizeof(AudioStreamBasicDescription));
    chkerr("Could not change the stream format of the output device (ID=%d)");

    // Get the number of frames in the IO buffer(s)
    param = sizeof(UInt32);
    err = AudioUnitGetProperty(audioUnit, kAudioDevicePropertyBufferFrameSize, kAudioUnitScope_Global, 0, &audioSamples, &param);
    chkerr("Could not determine audio sample size (ID=%d)");

    // Initialize the AU
    err = AudioUnitInitialize(audioUnit);
    chkerr("Could not initialize the AudioUnit (ID=%d)");

    // Allocate our audio buffers
    audioBuffer = [self allocateAudioBufferListWithNumChannels:actualOutputFormat.mChannelsPerFrame
                                                          size:audioSamples * actualOutputFormat.mBytesPerFrame];
    if (audioBuffer == NULL) {
        [self cleanUp];
        return [NSString stringWithFormat:@"Could not allocate buffers for recording (ID=%d)", err];
    }

    return nil;
}
```

(...again, it would be nice to know if audio output is active and thereby choose the clean output stream over the noisy mic, but that would be a different chunk of code, and my main question may just be a quick edit to this chunk.) Thanks for your attention! ==Dave [p.s. if I get more than one useful answer, can I "Accept" more than one, to spread the credit around?]
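On the bonus question (detecting whether the Mac is currently playing audio), one possibility is polling the default output device's kAudioDevicePropertyDeviceIsRunningSomewhere property. A hedged Swift sketch, offered as a starting point under the assumption that "running somewhere" is a good enough proxy for "playing something":

```swift
import CoreAudio

// Rough check: is the default output device rendering audio for any process?
func defaultOutputIsRunning() -> Bool {
    var deviceID = AudioObjectID(kAudioObjectUnknown)
    var size = UInt32(MemoryLayout<AudioObjectID>.size)
    var addr = AudioObjectPropertyAddress(
        mSelector: kAudioHardwarePropertyDefaultOutputDevice,
        mScope: kAudioObjectPropertyScopeGlobal,
        mElement: kAudioObjectPropertyElementMain)
    guard AudioObjectGetPropertyData(AudioObjectID(kAudioObjectSystemObject),
                                     &addr, 0, nil, &size, &deviceID) == noErr else { return false }

    var running: UInt32 = 0
    size = UInt32(MemoryLayout<UInt32>.size)
    addr.mSelector = kAudioDevicePropertyDeviceIsRunningSomewhere
    guard AudioObjectGetPropertyData(deviceID, &addr, 0, nil, &size, &running) == noErr else { return false }
    return running != 0
}
```

Capturing the output stream itself is a separate problem: the AUHAL input path above only sees input devices, so system output generally has to come from a loopback device or an OS-level audio tap.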
1 reply · 0 boosts · 148 views · Jun ’25
Wi-Fi Access Point Not Reconnecting While AVAudioSession Is Active
We’ve encountered a reproducible issue where the iPhone fails to reconnect to a Wi-Fi access point under the following conditions:
• The device is connected to a 2.4 GHz Wi-Fi network.
• A Bluetooth audio accessory is connected (e.g. a headset).
• AVAudioSession is active (such as during a voice call or when using the Voice Memos app).
• The user moves away from the access point, causing a disconnect.
Upon returning within range, the access point is not recognized or reconnected while AVAudioSession remains active. However, if the Bluetooth device is disconnected or the AVAudioSession is deactivated, the Wi-Fi access point is immediately recognized again. We confirmed this behavior not only in our app but also in Apple's built-in Voice Memos app, suggesting it is not specific to our implementation. It appears that the Wi-Fi system deprioritizes reconnection while AVAudioSession is engaged. Could this be by design? Or is this a known issue or limitation of the Wi-Fi and AVAudioSession interaction?
Test environment:
• Device: iPhone 13 mini
• iOS: 17.5.1
• Wi-Fi: 2.4 GHz band
• Accessories: Bluetooth headset
We’d appreciate clarification on whether this is expected behavior or a bug. Thank you!
0 replies · 0 boosts · 123 views · Jun ’25
Issue using Siphon Tap on input AudioQueue
Hi all, I've developed an audio DSP application in C++ using AudioToolbox and CoreAudio on macOS 14.4.1 with Xcode 15. I use one AudioQueue for input and another for output, and this works great. I'm now adding real-time audio analysis (e.g. spectral analysis). I want this to run independently of my audio processing so it cannot interfere with audio playback, and taps on AudioQueues seem like a good way of doing this. Since the analytics won't modify the audio data, I am using a siphon tap by setting the AudioQueueProcessingTapFlags to kAudioQueueProcessingTap_PreEffects | kAudioQueueProcessingTap_Siphon. This works fine on my output queue. However, on my input queue the tap callback is called once and then an EXC_BAD_ACCESS occurs (screenshot below). NB: I believe a callback should only call AudioQueueProcessingTapGetSourceAudio when not using a siphon, so I don't call it. Relevant code:

```cpp
// NOTE: the opening of this function's signature was truncated in the
// original post; it presumably takes the queue and callback as parameters.
void make_tap(AudioQueueRef queue_ref,
              AudioQueueProcessingTapCallback tap_callback)
{   // Makes an audio tap for a queue
    void * tap_data_ptr = NULL;
    AudioQueueProcessingTapFlags tap_flags =
        kAudioQueueProcessingTap_PostEffects | kAudioQueueProcessingTap_Siphon;
    uint32_t max_frames = 0;
    AudioStreamBasicDescription asbd;
    AudioQueueProcessingTapRef tap_ref;

    OSStatus status = AudioQueueProcessingTapNew(queue_ref, tap_callback, tap_data_ptr,
                                                 tap_flags, &max_frames, &asbd, &tap_ref);
    if (status != noErr) printf("Error while making Tap\n");
    else printf("Successfully made tap\n");
}

void tapper(void * tap_data,
            AudioQueueProcessingTapRef tap_ref,
            uint32_t number_of_frames_in,
            AudioTimeStamp * ts_ptr,
            AudioQueueProcessingTapFlags * tap_flags_ptr,
            uint32_t * number_of_frames_out_ptr,
            AudioBufferList * buf_list)
{   // Callback function for the audio queue tap
    printf("Tap callback");
}
```

Image of the exception stack provided by Xcode: https://developer.apple.com/forums/content/attachment/27479e8d-a118-459b-aa2d-7e30528910e3 (Screenshot 2025-06-14 at 1.29.14 PM.png)

What have I missed? Appreciate any help you learned folks may be able to provide. Best, Geoff.
1 reply · 0 boosts · 82 views · Jun ’25