Dive into the technical aspects of audio on your device, including codecs, format support, and customization options.

Audio Documentation

Posts under Audio subtopic

Post · Replies · Boosts · Views · Activity

Music keeps cutting off
Every time I put my AirPods in and connect them to my phone, my Mac, or my iPad since the iOS 18.3 update, they disconnect without reason, pause songs I'm in the middle of playing, and only partially reconnect in one pod. It's getting really frustrating.
Replies: 1 · Boosts: 0 · Views: 297 · Feb ’25
ShazamKit with AirPods
Hi guys, I'm using ShazamKit in my iOS app and successfully capturing the currently playing track details when using the device's (iPhone) built-in mic. When I test with AirPods, though, my app cannot both send the output through the AirPods and capture that same output with the AirPods mic for ShazamKit recognition. I believe this must be possible, because the ShazamKit widget on iOS can do this. Is it restricted in some way for third-party apps? If not, I'd appreciate some guidance on how to achieve this in Swift code. Thanks in advance. (A capture sketch follows this entry.)
Replies: 1 · Boosts: 1 · Views: 528 · Feb ’25
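A minimal capture sketch in Swift for the post above, assuming the standard ShazamKit streaming API (SHSession, matchStreamingBuffer) and an AVAudioEngine input tap. The class name Recognizer and the session options are illustrative; whether the AirPods microphone remains usable while A2DP output is active is exactly the routing question the post raises, and this sketch does not resolve it.

import AVFoundation
import ShazamKit

// Tap the AVAudioEngine input node and stream buffers into an SHSession.
final class Recognizer: NSObject, SHSessionDelegate {
    private let session = SHSession()
    private let engine = AVAudioEngine()

    func start() throws {
        session.delegate = self

        // Ask for a route that allows simultaneous playback and recording.
        // Whether the AirPods mic stays available here is the open question.
        let audioSession = AVAudioSession.sharedInstance()
        try audioSession.setCategory(.playAndRecord,
                                     mode: .default,
                                     options: [.allowBluetoothA2DP, .defaultToSpeaker])
        try audioSession.setActive(true)

        let input = engine.inputNode
        let format = input.outputFormat(forBus: 0)
        input.installTap(onBus: 0, bufferSize: 2048, format: format) { [weak self] buffer, time in
            self?.session.matchStreamingBuffer(buffer, at: time)
        }
        try engine.start()
    }

    // SHSessionDelegate
    func session(_ session: SHSession, didFind match: SHMatch) {
        print("Matched:", match.mediaItems.first?.title ?? "unknown")
    }
}

Comparing behavior with the built-in microphone versus the AirPods microphone as the session input should at least isolate whether recognition or routing is the part that fails.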
Indicate Packet Loss With AVAudioConverter for OPUS Decoding
I'm using an AVAudioConverter object to decode an Opus stream for VoIP. The decoding itself works well; however, whenever the stream stalls (no more audio packets are available to decode because of network instability), this can be heard as crackling or an abrupt stop in the decoded audio. Opus can mitigate this by indicating packet loss: in the C library you pass a null pointer to int opus_decode_float(OpusDecoder *st, const unsigned char *data, opus_int32 len, float *pcm, int frame_size, int decode_fec), see https://opus-codec.org/docs/opus_api-1.2/group__opus__decoder.html#ga9c554b8c0214e24733a299fe53bb3bd2.

With AVAudioConverter in Swift, however, I'm constructing an AVAudioCompressedBuffer like so:

    let compressedBuffer = AVAudioCompressedBuffer(
        format: VoiceEncoder.Constants.networkFormat,
        packetCapacity: 1,
        maximumPacketSize: data.count
    )
    compressedBuffer.byteLength = UInt32(data.count)
    compressedBuffer.packetCount = 1
    compressedBuffer.packetDescriptions!.pointee.mDataByteSize = UInt32(data.count)
    data.copyBytes(
        to: compressedBuffer.data.assumingMemoryBound(to: UInt8.self),
        count: data.count
    )

where data: Data contains the raw Opus frame to be decoded. How can I signal packet loss in this context and cause the AVAudioConverter to output PCM data whenever no more input data is available?

More context: I'm specifying the audio format like this:

    static let frameSize: UInt32 = 960
    static let sampleRate: Float64 = 48000.0
    static var networkFormatStreamDescription = AudioStreamBasicDescription(
        mSampleRate: sampleRate,
        mFormatID: kAudioFormatOpus,
        mFormatFlags: 0,
        mBytesPerPacket: 0,
        mFramesPerPacket: frameSize,
        mBytesPerFrame: 0,
        mChannelsPerFrame: 1,
        mBitsPerChannel: 0,
        mReserved: 0
    )
    static let networkFormat = AVAudioFormat(streamDescription: &networkFormatStreamDescription)!

I've tried 1) setting byteLength and packetCount to zero and 2) returning nil while setting .haveData in the AVAudioConverterInputBlock I'm using, with no success. (A sketch of one possible approach follows this entry.)
Replies: 1 · Boosts: 1 · Views: 820 · May ’25
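A hedged sketch for the post above. The AVAudioConverterInputBlock shape and the AVAudioCompressedBuffer API are real; whether handing the converter a packet whose description has mDataByteSize = 0 is interpreted as packet loss (mirroring the NULL-pointer convention of opus_decode_float) is purely an assumption, not documented behavior, and the helper makeInputBlock and the nextPacket closure are illustrative names.

import AVFoundation

// Builds an input block that either supplies the next Opus packet or,
// when no packet is available, hands over an "empty" packet as a
// packet-loss hint (ASSUMPTION: the decoder may treat this as loss).
func makeInputBlock(nextPacket: @escaping () -> Data?,
                    networkFormat: AVAudioFormat) -> AVAudioConverterInputBlock {
    return { _, outStatus in
        guard let data = nextPacket() else {
            // Alternative to try: report that no data is available right now.
            // outStatus.pointee = .noDataNow
            // return nil

            // Assumed packet-loss signal: a single zero-length packet.
            let lost = AVAudioCompressedBuffer(format: networkFormat,
                                               packetCapacity: 1,
                                               maximumPacketSize: 1)
            lost.byteLength = 0
            lost.packetCount = 1
            lost.packetDescriptions!.pointee.mDataByteSize = 0
            outStatus.pointee = .haveData
            return lost
        }

        // Normal case: wrap the raw Opus frame exactly as in the post.
        let buffer = AVAudioCompressedBuffer(format: networkFormat,
                                             packetCapacity: 1,
                                             maximumPacketSize: data.count)
        buffer.byteLength = UInt32(data.count)
        buffer.packetCount = 1
        buffer.packetDescriptions!.pointee.mDataByteSize = UInt32(data.count)
        data.copyBytes(to: buffer.data.assumingMemoryBound(to: UInt8.self), count: data.count)
        outStatus.pointee = .haveData
        return buffer
    }
}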
Storing AVAsset in SwiftData
Hi, I am creating an app whose data can include videos or images. While @Attribute(.externalStorage) helps with images, for AVAssets I actually want access to the URL behind that data (it would be wasteful to load and then save the data again just to get a URL). One key constraint is to keep all of this clean enough that I can use (private) CloudKit syncing with the resulting model. All the best, Christoph. (A storage sketch follows this entry.)
Replies: 1 · Boosts: 0 · Views: 533 · Jun ’25
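One hedged way to approach the post above: persist only a file name in the SwiftData model and keep the movie file in Application Support, so an AVURLAsset can be built from a real URL without round-tripping the data. The model ClipItem, its property names, and the storage location are assumptions; note that a file referenced by path does not ride along with CloudKit syncing of the model, so synced video would still need separate handling.

import Foundation
import SwiftData
import AVFoundation

// Sketch under assumptions: the model stores only a file name; the video
// itself lives in Application Support so an AVURLAsset can be created from
// a real URL. Only the model fields sync via CloudKit, not the file on disk.
@Model
final class ClipItem {
    @Attribute(.externalStorage) var thumbnail: Data?
    var videoFileName: String?

    init(thumbnail: Data? = nil, videoFileName: String? = nil) {
        self.thumbnail = thumbnail
        self.videoFileName = videoFileName
    }

    // Rebuild the on-disk URL from the persisted file name.
    var videoURL: URL? {
        guard let videoFileName else { return nil }
        return URL.applicationSupportDirectory.appendingPathComponent(videoFileName)
    }

    var asset: AVURLAsset? {
        videoURL.map { AVURLAsset(url: $0) }
    }
}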
Error connecting AVAudioPlayerNode to AVAudioEngine with AVAudioPCMFormatInt16
Hi community, I'm trying to set up an AVAudioFormat with AVAudioPCMFormatInt16, but I get an error:

    AVAEInternal.h:125 [AUInterface.mm:539:SetFormat: ([[busArray objectAtIndexedSubscript:(NSUInteger)element] setFormat:format error:&nsErr])] returned false, error Error Domain=NSOSStatusErrorDomain Code=-10868 "(null)"

If I understand error code -10868 correctly, the format is not supported. But how can I use the PCM Int16 format? Here is my method:

    - (void)setupAudioDecoder:(double)sampleRate audioChannels:(double)audioChannels {
        if (self.isRunning) {
            return;
        }

        self.audioEngine = [[AVAudioEngine alloc] init];
        self.audioPlayerNode = [[AVAudioPlayerNode alloc] init];
        [self.audioEngine attachNode:self.audioPlayerNode];
        AVAudioChannelCount channelCount = (AVAudioChannelCount)audioChannels;

        self.audioFormat = [[AVAudioFormat alloc] initWithCommonFormat:AVAudioPCMFormatInt16
                                                            sampleRate:sampleRate
                                                              channels:channelCount
                                                           interleaved:YES];

        NSLog(@"Audio Format: %@", self.audioFormat);
        NSLog(@"Audio Player Node: %@", self.audioPlayerNode);
        NSLog(@"Audio Engine: %@", self.audioEngine);

        // Error on this line
        [self.audioEngine connect:self.audioPlayerNode to:self.audioEngine.mainMixerNode format:self.audioFormat];

        /**NSError *error = nil;
        if (![self.audioEngine startAndReturnError:&error]) {
            NSLog(@"Error initializing the audio engine: %@", error);
            return;
        }
        [self.audioPlayerNode play];
        self.isRunning = YES;*/
    }

Also, the audio engine does not seem to be running:

    Audio Engine: ________ GraphDescription ________
    AVAudioEngineGraph 0x600003d55fe0: initialized = 0, running = 0, number of nodes = 1

Has anyone already used this format with AVAudioFormat? Thank you! (A conversion sketch follows this entry.)
Replies: 1 · Boosts: 0 · Views: 630 · Oct ’24
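Error -10868 is kAudioUnitErr_FormatNotSupported; AVAudioEngine's mixer connections generally want a standard (Float32, deinterleaved) format rather than interleaved Int16. A common pattern, sketched below in Swift with illustrative names (Int16Playback, scheduleInt16), is to connect the player with a Float32 format and convert incoming Int16 buffers with AVAudioConverter before scheduling them.

import AVFoundation

// Connect the player with a standard Float32 format and convert incoming
// Int16 PCM buffers with AVAudioConverter before scheduling them.
final class Int16Playback {
    let engine = AVAudioEngine()
    let player = AVAudioPlayerNode()
    let int16Format: AVAudioFormat
    let floatFormat: AVAudioFormat
    let converter: AVAudioConverter

    init?(sampleRate: Double, channels: AVAudioChannelCount) {
        guard
            let int16 = AVAudioFormat(commonFormat: .pcmFormatInt16,
                                      sampleRate: sampleRate,
                                      channels: channels,
                                      interleaved: true),
            let float32 = AVAudioFormat(standardFormatWithSampleRate: sampleRate,
                                        channels: channels),
            let conv = AVAudioConverter(from: int16, to: float32)
        else { return nil }

        int16Format = int16
        floatFormat = float32
        converter = conv

        engine.attach(player)
        // Connect with the float format; the Int16 format is what triggered -10868.
        engine.connect(player, to: engine.mainMixerNode, format: floatFormat)
        // engine.start() and player.play() happen elsewhere.
    }

    func scheduleInt16(_ buffer: AVAudioPCMBuffer) throws {
        guard let out = AVAudioPCMBuffer(pcmFormat: floatFormat,
                                         frameCapacity: buffer.frameCapacity) else { return }
        try converter.convert(to: out, from: buffer)
        player.scheduleBuffer(out, completionHandler: nil)
    }
}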
GarageBand displaying error 100001 when loading some AU plugins
I recently got some plugins from Universal Audio and have licensed them properly through both UA and iLok manager. Whenever I try to load the plugins (specifically from UA) in GarageBand, it first shows a message mentioning "NSCreateObjectFileImageFromMemory-p47UEwps" because the developer cannot be verified. After clicking either 'Show in Finder' or 'OK', it opens the plugin in a form without its GUI, shows that it is not licensed (even though it is), and displays error code 100001. I have tried some basic troubleshooting, like restarting the DAW and my computer and reinstalling/relicensing the software. I don't know if the macOS version has anything to do with it, but for some reason I just can't get it to work.
Replies: 1 · Boosts: 0 · Views: 362 · Jan ’25
AVAudioEngine: select input device on macOS
Hello! I'm using AVFoundation to preview video and audio from a selected device, and I'm trying to use AVAudioEngine to preview the audio in real time, but I can't work out how to select the input device; I can only hear my built-in microphone in real time. So far I'm using AVCaptureAudioPreviewOutput to hear audio in real time, but I think it has some delay. On iOS this works easily with AVAudioEngine, but on macOS... bruh. (A sketch follows this entry.)
Replies: 1 · Boosts: 0 · Views: 396 · Mar ’25
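A sketch of one approach to the post above, on the assumption that the input device can be set on the engine's underlying input AUHAL via kAudioOutputUnitProperty_CurrentDevice before the engine starts. Obtaining the AudioDeviceID (for example by enumerating kAudioHardwarePropertyDevices) is not shown, and selectInputDevice is an illustrative name.

import AVFoundation
import AudioToolbox
import CoreAudio

// Point the engine's input audio unit at a specific CoreAudio device.
// Call this before engine.start().
func selectInputDevice(_ deviceID: AudioDeviceID, for engine: AVAudioEngine) -> OSStatus {
    guard let audioUnit = engine.inputNode.audioUnit else {
        return -1 // no underlying audio unit available
    }
    var device = deviceID
    return AudioUnitSetProperty(audioUnit,
                                kAudioOutputUnitProperty_CurrentDevice,
                                kAudioUnitScope_Global,
                                0,
                                &device,
                                UInt32(MemoryLayout<AudioDeviceID>.size))
}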
AVAudioEngine - How to archive configured nodes to file?
I'm looking to add DAW-like capabilities to my macOS music app, and AVAudioEngine seems like the right tool for the job. However, I haven't been able to find any documentation on how to save the user's AVAudioEngine configuration (specifically the connections between nodes and the internal state of each node) to a file. Does AVAudioEngine provide any API for saving and restoring this state, or does it need to be handled manually? If it's manual, are there any sample "DAW" apps or resources that demonstrate how this can be implemented? Any guidance would be greatly appreciated. (A manual-serialization sketch follows this entry.) Thanks, BD
Replies: 1 · Boosts: 0 · Views: 479 · Dec ’24
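As far as I can tell there is no built-in AVAudioEngine API for persisting the graph, so a manual approach is sketched below: describe nodes and connections with your own Codable types and capture each AUAudioUnit's fullState dictionary as a property-list blob. All type and function names here (NodeState, ConnectionState, EngineDocument, captureState, restoreState) are illustrative assumptions, not an Apple API.

import AVFoundation

// Your own description of the graph; only the AUAudioUnit.fullState part
// comes from the framework, everything else is app-defined bookkeeping.
struct NodeState: Codable {
    var name: String          // identifier you assign to the node
    var fullStatePlist: Data? // archived AUAudioUnit.fullState
}

struct ConnectionState: Codable {
    var fromNode: String
    var fromBus: Int
    var toNode: String
    var toBus: Int
}

struct EngineDocument: Codable {
    var nodes: [NodeState]
    var connections: [ConnectionState]
}

// Capture one audio unit's state as a property-list blob.
func captureState(of unit: AUAudioUnit, named name: String) throws -> NodeState {
    var blob: Data?
    if let fullState = unit.fullState {
        blob = try PropertyListSerialization.data(fromPropertyList: fullState,
                                                  format: .binary,
                                                  options: 0)
    }
    return NodeState(name: name, fullStatePlist: blob)
}

// Restore a previously captured state into a freshly created audio unit.
func restoreState(_ state: NodeState, into unit: AUAudioUnit) throws {
    guard let blob = state.fullStatePlist else { return }
    let plist = try PropertyListSerialization.propertyList(from: blob,
                                                           options: [],
                                                           format: nil)
    unit.fullState = plist as? [String: Any]
}

On restore, the app would recreate and attach the nodes from EngineDocument.nodes, apply each fullState, and then re-establish the connections listed in EngineDocument.connections.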
AVMIDIPlayer not working for all instruments
Hi, I'm testing AVMIDIPlayer in order to replace classes written around AVAudioEngine with callback functions that send MIDI events. For the test I use an NSMutableData filled with: the MIDI header, a track for the time signature, and a track containing a few MIDI events. I then create an instance of AVMIDIPlayer using that data. Everything works fine for some instruments (00 … 20, or 90) but not for others (60, 70, …).

The MIDI header and the time signature track are based on the MIDI.org sample (https://midi.org/standard-midi-files-specification, RP-001_v1-0_Standard_MIDI_Files_Specification_96-1-4.pdf). The MIDI events are:

    UInt8 trkEvents[] = {
        0x00, 0xC0, instrument,   // Tubular bell
        0x00, 0x90, 0x4C, 0xA0,   // Note 4C
        0x81, 0x40, 0x48, 0xB0,   // TS + Note 48
        0x00, 0xFF, 0x2F, 0x00};  // End
    for (UInt8 i=0; i<3; i++) {
        printf("0x%X ", trkEvents[i]);
    }
    printf("\n");
    [_midiTempData appendBytes:trkEvents length:sizeof(trkEvents)];

A template application is used to change the instrument in an NSTextField. I was wondering if specifics are required for some instruments?

The interface header:

    #import <AVFoundation/AVFoundation.h>

    NS_ASSUME_NONNULL_BEGIN

    @interface TestMIDIPlayer : NSObject

    @property (retain) NSMutableData *midiTempData;
    @property (retain) NSURL *midiTempURL;
    @property (retain) AVMIDIPlayer *midiPlayer;

    - (void)createTest:(UInt8)instrument;

    @end

    NS_ASSUME_NONNULL_END

The implementation:

    #pragma mark -

    typedef struct _MThd {
        char magic[4];        // = "MThd"
        UInt8 headerSize[4];  // 4 Bytes, MSB first. Always = 00 00 00 06
        UInt8 format[2];      // 16 bit, MSB first. 0; 1; 2 Use 1
        UInt8 trackCount[2];  // 16 bit, MSB first.
        UInt8 division[2];    //
    } MThd;

    MThd MThdMake(void);
    void MThdPrint(MThd *mthd);

    typedef struct _MIDITrackHeader {
        char magic[4];         // = "MTrk"
        UInt8 trackLength[4];  // Ignore, because it is occasionally wrong.
    } Track;

    Track TrackMake(void);
    void TrackPrint(Track *track);

    #pragma mark - C Functions

    MThd MThdMake(void) {
        MThd mthd = {
            "MThd",
            {0, 0, 0, 6},
            {0, 1},
            {0, 0},
            {0, 0}
        };
        MThdPrint(&mthd);
        return mthd;
    }

    void MThdPrint(MThd *mthd) {
        char *ptr = (char *)mthd;
        for (int i=0; i<sizeof(MThd); i++, ptr++) {
            printf("%X", *ptr);
        }
        printf("\n");
    }

    Track TrackMake(void) {
        Track track = {
            "MTrk",
            {0, 0, 0, 0}
        };
        TrackPrint(&track);
        return track;
    }

    void TrackPrint(Track *track) {
        char *ptr = (char *)track;
        for (int i=0; i<sizeof(Track); i++, ptr++) {
            printf("%X", *ptr);
        }
        printf("\n");
    }

    @implementation TestMIDIPlayer

    - (id)init {
        self = [super init];
        printf("%s %p\n", __FUNCTION__, self);
        if (self) {
            _midiTempData = nil;
            _midiTempURL = [[NSURL alloc] initFileURLWithPath:@"midiTempUrl.mid"];
            _midiPlayer = nil;
            [self createTest:0x0E];
            NSLog(@"_midiTempData:%@", _midiTempData);
        }
        return self;
    }

    - (void)dealloc {
        [_midiTempData release];
        [_midiTempURL release];
        [_midiPlayer release];
        [super dealloc];
    }

    - (void)createTest:(UInt8)instrument {
        /* MIDI Header */
        [_midiTempData release];
        _midiTempData = nil;
        _midiTempData = [[NSMutableData alloc] initWithCapacity:1024];
        MThd mthd = MThdMake();
        MThd *ptrMthd = &mthd;
        ptrMthd->trackCount[1] = 2;
        ptrMthd->division[1] = 0x60;
        MThdPrint(ptrMthd);
        [_midiTempData appendBytes:ptrMthd length:sizeof(MThd)];

        /* Track Header Time signature */
        Track track = TrackMake();
        Track *ptrTrack = &track;
        ptrTrack->trackLength[3] = 0x14;
        [_midiTempData appendBytes:ptrTrack length:sizeof(track)];
        UInt8 trkEventsTS[] = {
            0x00, 0xFF, 0x58, 0x04, 0x04, 0x04, 0x18, 0x08,  // Time signature 4/4; 18; 08
            0x00, 0xFF, 0x51, 0x03, 0x07, 0xA1, 0x20,        // tempo 0x7A120 = 500000
            0x83, 0x00, 0xFF, 0x2F, 0x00 };                  // End
        [_midiTempData appendBytes:trkEventsTS length:sizeof(trkEventsTS)];

        /* Track Header Track events */
        ptrTrack->trackLength[3] = 0x0F;
        [_midiTempData appendBytes:ptrTrack length:sizeof(track)];
        UInt8 trkEvents[] = {
            0x00, 0xC0, instrument,   // Tubular bell
            0x00, 0x90, 0x4C, 0xA0,   // Note 4C
            0x81, 0x40, 0x48, 0xB0,   // TS + Note 48
            0x00, 0xFF, 0x2F, 0x00};  // End
        for (UInt8 i=0; i<3; i++) {
            printf("0x%X ", trkEvents[i]);
        }
        printf("\n");
        [_midiTempData appendBytes:trkEvents length:sizeof(trkEvents)];

        [_midiTempData writeToURL:_midiTempURL atomically:YES];

        dispatch_async(dispatch_get_main_queue(), ^{
            if (!_midiPlayer.isPlaying)
                [self midiPlay];
        });
    }

    - (void)midiPlay {
        NSError *error = nil;
        _midiPlayer = [[AVMIDIPlayer alloc] initWithData:_midiTempData soundBankURL:nil error:&error];
        if (_midiPlayer) {
            [_midiPlayer prepareToPlay];
            [_midiPlayer play:^{
                printf("Midi Player ended\n");
                [_midiPlayer stop];
                [_midiPlayer release];
                _midiPlayer = nil;
            }];
        }
    }

    @end

Call from AppDelegate:

    - (IBAction)actionInstrument:(NSTextField*)sender {
        [_testMidiplayer createTest:(UInt8)sender.intValue];
    }
Replies: 1 · Boosts: 0 · Views: 422 · Dec ’24
No audio in screen recordings when using AVAudioEngine Voice Processing
Hello, We are developing a real-time speech recognition application and are utilizing AVAudioEngine with voice processing enabled on the input node. However, we have observed that enabling this mode interferes with the built-in iOS screen recording feature - specifically, the recorded video does not capture any audio when this mode is active. Since we want users to be able to record their experience within our app, this issue significantly impacts our functionality. Is there a known workaround or recommended approach to ensure that both voice processing and screen recording can function simultaneously? Any guidance would be greatly appreciated. Thank you!
Replies: 1 · Boosts: 0 · Views: 287 · Mar ’25
Connect 2 mono nodes as L/R input for a stereo node
Hello, I'm fairly new to AVAudioEngine and I'm trying to connect 2 mono nodes as left/right input to a stereo node. I was successful in splitting the input audio into 2 mono nodes using AVAudioConnectionPoint and channelMap, but I can't figure out how to connect them back to a stereo node. I'll post the code I have so far. The use case is that I'm trying to process the left/right channels with separate audio units. Any ideas? (A hedged sketch follows this entry.)

    let monoFormat = AVAudioFormat(standardFormatWithSampleRate: nativeFormat.sampleRate, channels: 1)!
    let leftInputMixer = AVAudioMixerNode()
    let rightInputMixer = AVAudioMixerNode()
    let leftOutputMixer = AVAudioMixerNode()
    let rightOutputMixer = AVAudioMixerNode()
    let channelMixer = AVAudioMixerNode()

    [leftInputMixer, rightInputMixer, leftOutputMixer, rightOutputMixer, channelMixer].forEach { engine.attach($0) }

    let leftConnectionR = AVAudioConnectionPoint(node: leftInputMixer, bus: 0)
    let rightConnectionR = AVAudioConnectionPoint(node: rightInputMixer, bus: 0)

    plugin.leftInputMixer = leftInputMixer
    plugin.rightInputMixer = rightInputMixer
    plugin.leftOutputMixer = leftOutputMixer
    plugin.rightOutputMixer = rightOutputMixer
    plugin.channelMixer = channelMixer

    leftInputMixer.auAudioUnit.channelMap = [0]
    rightInputMixer.auAudioUnit.channelMap = [1]

    engine.connect(previousNode, to: [leftConnectionR, rightConnectionR], fromBus: 0, format: monoFormat)

    // Process right channel, pass through left channel
    engine.connect(rightInputMixer, to: plugin.audioUnit, format: monoFormat)
    engine.connect(plugin.audioUnit, to: rightOutputMixer, format: monoFormat)
    engine.connect(leftInputMixer, to: leftOutputMixer, format: monoFormat)

    // Mix back to stereo?
    engine.connect(leftOutputMixer, to: channelMixer, format: stereoFormat)
    engine.connect(rightOutputMixer, to: channelMixer, format: stereoFormat)
Replies: 1 · Boosts: 0 · Views: 526 · Nov ’24
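A hedged sketch for the post above, not a confirmed answer: pan the two mono output mixers hard left and right and let a downstream mixer combine them into one stereo stream. It assumes that AVAudioMixerNode's pan (via AVAudioMixing) behaves as expected when a mono branch feeds a stereo mixer; mergeToStereo is an illustrative wrapper around the node names used in the post.

import AVFoundation

// Pan each mono branch fully to one side and let `channelMixer`
// combine them into stereo before reaching the main mixer.
func mergeToStereo(engine: AVAudioEngine,
                   leftOutputMixer: AVAudioMixerNode,
                   rightOutputMixer: AVAudioMixerNode,
                   channelMixer: AVAudioMixerNode,
                   monoFormat: AVAudioFormat,
                   sampleRate: Double) {
    let stereoFormat = AVAudioFormat(standardFormatWithSampleRate: sampleRate, channels: 2)!

    leftOutputMixer.pan = -1.0   // left branch fully left
    rightOutputMixer.pan = 1.0   // right branch fully right

    // Mixer nodes accept multiple inputs; each connect call uses the next free input bus.
    engine.connect(leftOutputMixer, to: channelMixer, format: monoFormat)
    engine.connect(rightOutputMixer, to: channelMixer, format: monoFormat)
    engine.connect(channelMixer, to: engine.mainMixerNode, format: stereoFormat)
}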
arm64 Logic Leaking Plugins (Not Calling AP_Close)
I'm running into an issue where in some cases, when the AUHostingServiceXPC_arrow process is shut down by Logic, the process is terminated abruptly without calling AP_Close on all of the plugins hosted in the process. In our case, we have filesystem resources we need to clean up, and having stale files around from the last run can cause issues in new sessions, so this leak is having some pretty gnarly effects. I can reproduce the issue using only Apple sample plugins, and it seems to be triggered by a timeout. If I have two different AU plugins in the session, and I add a 1 second sleep to the destructor of one of the sample plugins, Logic will force terminate the process and the remaining destructors are not called (even for the plugins without the 1 second sleep). Is there a way to avoid this behavior? Or to safely clean up our plugin even if other plugins in the session take a second to tear down?
Replies: 1 · Boosts: 1 · Views: 519 · Oct ’24
Unexpected AVAudioSession behavior after iOS 18.5 causing audio loss in VoIP calls
After updating to iOS 18.5, we've observed that outgoing audio from our app intermittently stops being transmitted during VoIP calls using AVAudioSession configured with .playAndRecord and .voiceChat. The session is set active without errors, and interruptions are handled correctly, yet audio capture suddenly ceases mid-call. This was not observed in earlier iOS versions (≤ 18.4). We'd like to confirm whether there have been any recent changes in AVAudioSession, CallKit, or related media handling that could affect audio input behavior during long-running calls.

    func configureForVoIPCall() throws {
        try setCategory(
            .playAndRecord,
            mode: .voiceChat,
            options: [.allowBluetooth, .allowBluetoothA2DP, .defaultToSpeaker])
        try setActive(true)
    }
Replies: 1 · Boosts: 0 · Views: 146 · Aug ’25
iOS 26 Beta Personal Voice bug affecting AVSpeechSynthesizer
I have sent in a feedback report (FB18222398) but I have no idea if anyone has looked at it. I know from past experience that Apple devs do look at these forums. This applies to each of the betas: 1, 2, and 3; I have created a new Personal Voice with each beta.

I create a Personal Voice in English. When it's done processing, I tap Preview and it says, in English, what is expected. But after some time, an hour or a day, the language of the voice file changes and it no longer works properly: if I press Preview it is no longer intelligible. I have a text-to-speech app, and initially the created voice works, but once the language of the file changes it no longer works. I have run an app on my iPhone through Xcode that prints to the console the voices installed on the device along with their language. Currently this is the voice file:

    Voice Identifier: com.apple.speech.personalvoice.AAA9C6F2-9125-475F-BA2F-22C63274991D
    Language: es-MX

and on a second device the same Personal Voice is listed in a different language:

    Voice Identifier: com.apple.speech.personalvoice.AAA9C6F2-9125-475F-BA2F-22C63274991D
    Language: zh-CN

A previous Personal Voice file that was listed as Spanish (Mexico) played English text with a Spanish accent, and when playing Spanish text it sounded almost perfect. The current Personal Voice doesn't do that and is unintelligible. Previous attempts have converted to Chinese. I hope someone can look into this. (A diagnostic sketch follows this entry.)
Replies: 1 · Boosts: 0 · Views: 115 · Jul ’25
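A small diagnostic sketch related to the post above: it requests Personal Voice authorization and prints each installed personal voice's identifier and language, which makes the cross-device mismatch easy to capture. It uses the public iOS 17+ API (requestPersonalVoiceAuthorization, voiceTraits); dumpPersonalVoices is an illustrative name, and this does not work around the beta bug.

import AVFoundation

// Print every installed Personal Voice with its identifier and language tag,
// so the output can be compared across devices and beta versions.
func dumpPersonalVoices() {
    AVSpeechSynthesizer.requestPersonalVoiceAuthorization { status in
        guard status == .authorized else {
            print("Personal Voice authorization status:", status.rawValue)
            return
        }
        let personalVoices = AVSpeechSynthesisVoice.speechVoices()
            .filter { $0.voiceTraits.contains(.isPersonalVoice) }
        for voice in personalVoices {
            print("Voice Identifier: \(voice.identifier)  Language: \(voice.language)")
        }
    }
}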
Logic Pro cannot load v3 audio unit with framework compiled with Swift 6
Sequoia 15.4.1 (24E263)
Xcode 16.3 (16E140)
Logic Pro 11.2.1

I've been developing a complex Audio Unit for macOS that works perfectly well in its own bespoke host app and is now well into its beta testing stage. It did take some effort to get it to work well in Logic Pro, but all was fine and working well until now.

The AU part is an empty app extension with a framework containing its code. The framework contains Swift code for the UI and C code for the DSP parts. When the framework is compiled with the Swift 5 compiler, the AU runs in Logic with no problems (I should also mention that the AU passes the strictest auval tests). But when the framework is compiled with Swift 6, Logic Pro cannot load it: Logic displays a message saying the audio unit could not be loaded and to contact the developer. My own host app loads the AU perfectly well with the Swift 6 version, so I know there's nothing wrong with the audio unit. I cannot find any differences in any of the built output files except, of course, the actual binary code in the framework.

I've worked for hours on this and cannot find a solution other than to build the framework with Swift 5 (I worked hard to get all the async code updated and working with Swift 6, so I feel a little cheated!). What is happening? Is this a bug in Logic? Is this a bug in the Swift 6 compiler/linker? I'm at the hands-in-the-air, tearing-out-hair stage (once again!).
Replies: 1 · Boosts: 0 · Views: 312 · Jul ’25
Apple Music iOS 26 features on Android
Many users like me use Apple Music on Android, and the app is almost as feature-rich as on iOS. It would be fantastic if the developers could add the new iOS 26 features to the Android app, along with a minor UI refresh. I know it's challenging to implement Liquid Glass on Android hardware and design, but features like AutoMix, pronunciation, and translation could be added. Kindly consider this request!
Replies: 1 · Boosts: 0 · Views: 105 · Jul ’25