Dive into the technical aspects of audio on your device, including codecs, format support, and customization options.

Audio Documentation

Posts under Audio subtopic

Post | Replies | Boosts | Views | Activity

Unable to match music with ShazamKit for Android
Hello, I can successfully match music using ShazamKit on Apple platforms with SwiftUI (a simple app that lets the user load an audio file and extracts the corresponding match), but I am unable to match music using ShazamKit on Android. I am trying to make the same simple app, but I cannot match music: I get MATCH_ATTEMPT_FAILED every time I try. I don't know what I am doing wrong, but the ShazamKit part of the Kotlin Android code is in this method:

suspend fun processAudioFileInBackground(
    filePath: String,
    developerTokenProvider: DeveloperTokenProvider
) = withContext(Dispatchers.IO) {
    val bufferSize = 1024 * 1024
    val audioFile = FileInputStream(filePath)
    val byteBuffer = ByteBuffer.allocate(bufferSize)
    byteBuffer.order(ByteOrder.LITTLE_ENDIAN)
    var bytesRead: Int
    while (audioFile.read(byteBuffer.array()).also { bytesRead = it } != -1) {
        val signatureGenerator = (ShazamKit.createSignatureGenerator(AudioSampleRateInHz.SAMPLE_RATE_44100) as ShazamKitResult.Success).data
        signatureGenerator.append(byteBuffer.array(), bytesRead, System.currentTimeMillis())
        val signature = signatureGenerator.generateSignature()
        println("Signature: ${signature.durationInMs}")
        val catalog = ShazamKit.createShazamCatalog(developerTokenProvider, Locale.ENGLISH)
        val session = (ShazamKit.createSession(catalog) as ShazamKitResult.Success).data
        val matchResult = session.match(signature)
        println("MatchResult : $matchResult")
        setMatchResult(matchResult)
        byteBuffer.clear()
    }
    audioFile.close()
}

I noticed that changing the Locale in the catalog creation gives a different result: I get NoMatch without an exception. Can you please help me with this? Do I need to create a custom catalog?
Replies: 0 | Boosts: 0 | Views: 105 | Activity: May ’25
Help for a plugin audio unit
Hello All, It seems that it's "very easy" (😬) to implement a little Swift code inside the prepared AU using Xcode 16.2 on Sequoia 15.1.1 and a Mac Studio M1 Ultra, but my issue is that I don't know... where. The documentation says that I have to find the AudioUnitViewController.swift file and then modify the render block:

audioUnit.renderBlock = { (numFrames, ioData) in
    // Process audio here
}

in the Xcode project that is generated automatically, but I can't find such a file... If somebody can show me which file needs to be modified, I'll be very grateful! Thank you very much. J
Replies: 1 | Boosts: 0 | Views: 432 | Activity: Feb ’25
Issue with Audio Sample Rate Conversion in Video Calls
Hey everyone, I'm encountering an issue with audio sample rate conversion that I'm hoping someone can help with. Here's the breakdown:

Issue Description: I've installed a tap on an input device to convert audio to an optimal sample rate. There's a converter node added on top of this setup. The problem arises when joining Zoom or FaceTime calls: the converter gets deallocated from memory, causing the program to crash.

Symptoms:
- The converter node is being deallocated during video calls.
- The program crashes entirely when this happens.
- Traditional methods of monitoring sample rate changes (tracking nominal or actual sample rates) aren't working as expected.

The Big Challenge: I can't figure out how to properly monitor sample rate changes. Listeners set up to track these changes don't trigger when the device joins a Zoom or FaceTime call.

Please, if anyone has experience with this or knows a solution, I'd really appreciate your help. Thanks in advance!
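As an illustration of the kind of listener being described, here is a minimal sketch (mine, not from the post) of registering a Core Audio property listener for nominal sample-rate changes on a device. Here deviceID is assumed to be the AudioObjectID of the tapped input device; whether this listener actually fires when Zoom or FaceTime reconfigures the device is exactly the open question in the post.

import CoreAudio
import Foundation

func observeSampleRateChanges(on deviceID: AudioObjectID) {
    var address = AudioObjectPropertyAddress(
        mSelector: kAudioDevicePropertyNominalSampleRate,
        mScope: kAudioObjectPropertyScopeGlobal,
        mElement: kAudioObjectPropertyElementMain
    )
    // The block is invoked on the given queue whenever the nominal sample rate changes.
    let status = AudioObjectAddPropertyListenerBlock(deviceID, &address, DispatchQueue.main) { _, _ in
        var rate = Float64(0)
        var size = UInt32(MemoryLayout<Float64>.size)
        var rateAddress = AudioObjectPropertyAddress(
            mSelector: kAudioDevicePropertyNominalSampleRate,
            mScope: kAudioObjectPropertyScopeGlobal,
            mElement: kAudioObjectPropertyElementMain
        )
        if AudioObjectGetPropertyData(deviceID, &rateAddress, 0, nil, &size, &rate) == noErr {
            print("Nominal sample rate is now \(rate) Hz")
        }
    }
    if status != noErr {
        print("AudioObjectAddPropertyListenerBlock failed: \(status)")
    }
}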
Replies: 0 | Boosts: 0 | Views: 56 | Activity: Apr ’25
iOS Audio Routing - Bluetooth Output + Built-in Microphone Input
Hello! I'm experiencing an issue with iOS's audio routing system when trying to use Bluetooth headphones for audio output while also recording environmental audio from the built-in microphone.

Desired behavior:
- Play audio through a Bluetooth headset (AirPods)
- Record unprocessed environmental audio from the iPhone's built-in microphone

Actual behavior:
- When explicitly selecting the built-in microphone, iOS reports it's using it (in currentRoute.inputs)
- However, the actual audio data received is clearly still coming from the AirPods microphone
- The audio is heavily processed with voice isolation/noise cancellation, removing environmental sounds

Environment Details:
- Device: iPhone 12 Pro Max
- iOS Version: 18.4.1
- Hardware: AirPods
- Audio Framework: AVAudioEngine (also tried AudioQueue)

Code Attempted: I've tried multiple approaches to force the correct routing:

func configureAudioSession() {
    let session = AVAudioSession.sharedInstance()

    // Configure to allow Bluetooth output but use built-in mic
    try? session.setCategory(.playAndRecord, options: [.allowBluetoothA2DP, .defaultToSpeaker])
    try? session.setActive(true)

    // Explicitly select built-in microphone
    if let inputs = session.availableInputs,
       let builtInMic = inputs.first(where: { $0.portType == .builtInMic }) {
        try? session.setPreferredInput(builtInMic)
        print("Selected input: \(builtInMic.portName)")
    }

    // Log the current route
    let route = session.currentRoute
    print("Current input: \(route.inputs.first?.portName ?? "None")")

    // Configure audio engine with native format
    let inputNode = audioEngine.inputNode
    let nativeFormat = inputNode.inputFormat(forBus: 0)
    inputNode.installTap(onBus: 0, bufferSize: 1024, format: nativeFormat) { buffer, time in
        // Process audio buffer
        // Despite showing "Built-in Microphone" in route, audio appears to be
        // coming from AirPods with voice isolation applied - welp!
    }

    try? audioEngine.start()
}

I've also tried various combinations of:
- Different audio session modes (.default, .measurement, .voiceChat)
- Different option combinations (with/without .allowBluetooth, .allowBluetoothA2DP)
- Setting session.setPreferredInput() both before and after activation

Diagnostic Observations: When AirPods are connected:
- AVAudioSession.currentRoute.inputs correctly shows "Built-in Microphone" after setPreferredInput()
- The actual audio data received shows clear signs of AirPods' voice isolation processing
- Background/environmental sounds are actively filtered out...
- When recording test audio played near the phone (not through the app), the recording is nearly silent. Only headset voice goes through.

Questions:
- Is there a workaround to force iOS to actually use the built-in microphone while maintaining Bluetooth output?
- Are there any lower-level configurations that might resolve this issue?

Any insights, workarounds, or suggestions would be greatly appreciated. This is blocking a critical feature in my application that requires environmental audio recording while providing audio feedback through headphones 😅
Replies: 0 | Boosts: 0 | Views: 155 | Activity: May ’25
AVAudioEngine : Split 1x4 channel bus into 4x1 channel busses?
I'm using a 4-channel USB audio interface with 4 microphones and want to process them through 4 independent effect chains. However, the output from AVAudioInputNode is a single 4-channel bus. How can I split this into 4 mono busses?

The following code splits the input into 4 copies and routes them through the effects, but each bus contains all four channels. How can I remap the channels to remove the unwanted channels from the bus? I tried using channelMap on the mixer node, but that had no effect.

I'm currently using this code primarily on iOS, but it should be portable between iOS and macOS. It would be possible to do this through a matrix mixer node, but that seems like complete overkill for such a basic operation. I'm already using a matrix mixer to combine the inputs, and it's not well supported in AVAudioEngine.

AVAudioInputNode *inputNode = [engine inputNode];
[inputNode setVoiceProcessingEnabled:NO error:nil];

NSMutableArray *micDestinations = [NSMutableArray arrayWithCapacity:trackCount];
for (i = 0; i < trackCount; i++) {
    fixMicFormat[i] = [AVAudioMixerNode new];
    [engine attachNode:fixMicFormat[i]];

    // And create reverb/compressor and eq the same way...

    [engine connect:reverb[i] to:matrixMixerNode fromBus:0 toBus:i format:nil];
    [engine connect:eq[i] to:reverb[i] fromBus:0 toBus:0 format:nil];
    [engine connect:compressor[i] to:eq[i] fromBus:0 toBus:0 format:nil];
    [engine connect:fixMicFormat[i] to:compressor[i] fromBus:0 toBus:0 format:nil];

    [micDestinations addObject:[[AVAudioConnectionPoint alloc] initWithNode:fixMicFormat[i] bus:0]];
}

AVAudioFormat *inputFormat = [inputNode outputFormatForBus:1];
[engine connect:inputNode toConnectionPoints:micDestinations fromBus:1 format:inputFormat];
Replies: 2 | Boosts: 0 | Views: 224 | Activity: Oct ’25
MusicKit playbackTime Accuracy
Hello, Has anyone else experienced variations in the accuracy of the playbackTime value? After a few seconds of playback, the reported time adjusts by a fraction of a second, making it difficult to calculate the actual playbackTime of the audio. This can be recreated by playing a song in MusicKit, recording the start time of the audio, playing for at least 10-20 seconds, and then comparing the playbackTime value to one calculated using the start time of the audio. In my experience this jump occurs after about 10 seconds of playback. Any help would be appreciated. Thanks!
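For what it's worth, here is a minimal sketch (mine, not the poster's code) of one way to measure the drift being described, assuming playback is driven by ApplicationMusicPlayer and starts from the beginning of the track:

import Foundation
import MusicKit

final class PlaybackDriftMonitor {
    private let player = ApplicationMusicPlayer.shared
    private var startDate: Date?

    // Call this right when playback actually begins.
    func markPlaybackStart() {
        startDate = Date()
    }

    // Call this periodically (e.g. from a timer) to compare wall-clock elapsed
    // time against the playbackTime MusicKit reports.
    func logDrift() {
        guard let startDate else { return }
        let wallClock = Date().timeIntervalSince(startDate)
        let reported = player.playbackTime
        print(String(format: "wall clock: %.2fs, playbackTime: %.2fs, drift: %.2fs",
                     wallClock, reported, wallClock - reported))
    }
}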
Replies: 1 | Boosts: 0 | Views: 112 | Activity: May ’25
Populating Now Playing with Objective-C
Hello. I am attempting to display the music inside of my app in Now Playing. I've tried a few different methods and keep running into unknown issues. I'm new to Objective-C and Apple development, so I'm at a loss for how to continue.

Currently, I have an external call to viewDidLoad upon initialization. Then, when I'm ready to play the music, I call playMusic. I have it hardcoded to play an MP3 called "1". I believe I have all the signing set up, as the music plays after I exit the app. However, there is nothing in Now Playing. There are no errors or issues that I can see while the app is running. This is the only file I have in Xcode relating to this feature. Please let me know where I'm going wrong or if there is another object I need to use!

#import <Foundation/Foundation.h>
#import <UIKit/UIKit.h>
#import <MediaPlayer/MediaPlayer.h>
#import <AVFoundation/AVFoundation.h>

@interface ViewController : UIViewController <AVAudioPlayerDelegate>
@property (nonatomic, strong) AVPlayer *player;
@property (nonatomic, strong) MPRemoteCommandCenter *commandCenter;
@property (nonatomic, strong) MPMusicPlayerController *controller;
@property (nonatomic, strong) MPNowPlayingSession *nowPlayingSession;
@end

@implementation ViewController

- (void)viewDidLoad {
    [super viewDidLoad];
    NSLog(@"viewDidLoad started.");
    [self setupAudioSession];
    [self initializePlayer];
    [self createNowPlayingSession];
    [self configureNowPlayingInfo];
    NSLog(@"viewDidLoad completed.");
}

- (void)setupAudioSession {
    AVAudioSession *audioSession = [AVAudioSession sharedInstance];
    NSError *setCategoryError = nil;
    if (![audioSession setCategory:AVAudioSessionCategoryPlayback error:&setCategoryError]) {
        NSLog(@"Error setting category: %@", [setCategoryError localizedDescription]);
    } else {
        NSLog(@"Audio session category set.");
    }
    NSError *activationError = nil;
    if (![audioSession setActive:YES error:&activationError]) {
        NSLog(@"Error activating audio session: %@", [activationError localizedDescription]);
    } else {
        NSLog(@"Audio session activated.");
    }
}

- (void)initializePlayer {
    NSString *soundFilePath = [NSString stringWithFormat:@"%@/base/game/%@", [[NSBundle mainBundle] resourcePath], @"bgm/1.mp3"];
    if (!soundFilePath) {
        NSLog(@"Audio file not found.");
        return;
    }
    NSURL *soundFileURL = [NSURL fileURLWithPath:soundFilePath];
    self.player = [AVPlayer playerWithURL:soundFileURL];
    NSLog(@"Player initialized with URL: %@", soundFileURL);
}

- (void)createNowPlayingSession {
    self.nowPlayingSession = [[MPNowPlayingSession alloc] initWithPlayers:@[self.player]];
    NSLog(@"Now Playing Session created with players: %@", self.nowPlayingSession.players);
}

- (void)configureNowPlayingInfo {
    MPNowPlayingInfoCenter *infoCenter = [MPNowPlayingInfoCenter defaultCenter];
    CMTime duration = self.player.currentItem.duration;
    Float64 durationSeconds = CMTimeGetSeconds(duration);
    CMTime currentTime = self.player.currentTime;
    Float64 currentTimeSeconds = CMTimeGetSeconds(currentTime);
    NSDictionary *nowPlayingInfo = @{
        MPMediaItemPropertyTitle: @"Example Title",
        MPMediaItemPropertyArtist: @"Example Artist",
        MPMediaItemPropertyPlaybackDuration: @(durationSeconds),
        MPNowPlayingInfoPropertyElapsedPlaybackTime: @(currentTimeSeconds),
        MPNowPlayingInfoPropertyPlaybackRate: @(self.player.rate)
    };
    infoCenter.nowPlayingInfo = nowPlayingInfo;
    NSLog(@"Now Playing info configured: %@", nowPlayingInfo);
}

- (void)playMusic {
    [self.player play];
    [self createNowPlayingSession];
    [self configureNowPlayingInfo];
}

- (void)pauseMusic {
    [self.player pause];
    [self configureNowPlayingInfo];
}

@end
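For comparison, a commonly cited checklist for getting Now Playing to appear on iOS is: an active .playback audio session, audio actually playing, at least one remote command handler registered, and nowPlayingInfo set. Below is a minimal Swift sketch of that combination (my own illustration under those assumptions, not a confirmed fix for the Objective-C code above):

import AVFoundation
import MediaPlayer

final class NowPlayingController {
    private var player: AVPlayer?
    private var commandTokens: [Any] = []

    func play(url: URL, title: String) {
        try? AVAudioSession.sharedInstance().setCategory(.playback)
        try? AVAudioSession.sharedInstance().setActive(true)

        let player = AVPlayer(url: url)
        self.player = player

        // Registering at least one remote command handler is usually required
        // before the system surfaces the Now Playing UI.
        let center = MPRemoteCommandCenter.shared()
        commandTokens.append(center.playCommand.addTarget { [weak self] _ in
            self?.player?.play()
            return .success
        })
        commandTokens.append(center.pauseCommand.addTarget { [weak self] _ in
            self?.player?.pause()
            return .success
        })

        player.play()

        MPNowPlayingInfoCenter.default().nowPlayingInfo = [
            MPMediaItemPropertyTitle: title,
            MPNowPlayingInfoPropertyPlaybackRate: player.rate
        ]
    }
}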
Replies: 2 | Boosts: 0 | Views: 552 | Activity: Feb ’25
Microphone recording interrupts when the phone is ringing
I'm developing an iOS app that requires continuous audio recording. Currently, when a phone call comes in, the AVAudioSession is interrupted and recording stops completely during the ringing phase. While I understand recording should stop if the call is answered, my app needs to continue recording while the phone is merely ringing. I've observed that Apple's Voice Memos app maintains recording during incoming call rings. This indicates the hardware and iOS are capable of supporting this functionality.

Request: Please advise on any available AVAudioSession configurations or APIs that would allow my app to:
- Continue recording during an incoming call ring
- Only stop recording if/when the call is actually answered

Impact: This interruption significantly impacts the user experience and core functionality of my app. Workarounds like asking users to enable airplane mode are impractical and create a poor user experience.

Questions:
- Is there an approved way to maintain microphone access during call rings?
- If not currently possible, could this capability be considered for addition to a future iOS SDK?
- Are there any interim solutions or best practices Apple recommends for this use case?

Thank you for your help.

SUPPORT INFORMATION
Did someone from Apple ask you to submit a code-level support request? No
Do you have a focused test project that demonstrates your issue? Yes, I have a focused test project to submit with my request
What code level support issue are you having? Problems with an Apple framework API in my app
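For reference, here is a minimal sketch (not a solution to the ringing-phase limitation, just the standard observation pattern) of watching AVAudioSession interruptions and resuming when the system allows it; startRecording/stopRecording are assumed placeholders for the app's own recorder:

import AVFoundation
import Foundation

final class InterruptionObserver {
    var startRecording: () -> Void = {}
    var stopRecording: () -> Void = {}
    private var token: NSObjectProtocol?

    init() {
        token = NotificationCenter.default.addObserver(
            forName: AVAudioSession.interruptionNotification,
            object: AVAudioSession.sharedInstance(),
            queue: .main
        ) { [weak self] note in
            guard let info = note.userInfo,
                  let rawType = info[AVAudioSessionInterruptionTypeKey] as? UInt,
                  let type = AVAudioSession.InterruptionType(rawValue: rawType) else { return }
            switch type {
            case .began:
                // By the time this fires, recording has already been suspended.
                self?.stopRecording()
            case .ended:
                let rawOptions = info[AVAudioSessionInterruptionOptionKey] as? UInt ?? 0
                if AVAudioSession.InterruptionOptions(rawValue: rawOptions).contains(.shouldResume) {
                    self?.startRecording()
                }
            @unknown default:
                break
            }
        }
    }
}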
Replies: 2 | Boosts: 0 | Views: 107 | Activity: Jun ’25
AVAudioEngine Stop Method
Hi all! I have been experiencing some issues when using AVAudioEngine to play audio and record input while doing a voice chat (through the PTT interface). I noticed that if I connect any players to the audio graph OR call start, the audio session becomes active (this is on iOS). I don't see anything in the docs or the header files in AVFoundation, but is it possible that calling the stop method on an engine deactivates the audio session too?

In a normal app this behavior seems logical, but when using PTT all activation and deactivation of the audio session must go through the framework and its delegate methods. The issue I am debugging: when the engine with the input node tapped gets stopped, and there is a gap between the input and when the server replies with inbound audio to be played, something seems to be getting the hardware/audio session into a jammed state.

Thanks for any feedback and/or confirmation on this behavior!
Replies: 2 | Boosts: 0 | Views: 630 | Activity: Feb ’25
SystemAudio Capture API Fails with OSStatus error 1852797029 (kAudioCodecIllegalOperationError)
Issue Description: I'm implementing a system audio capture feature using AudioHardwareCreateProcessTap and AudioHardwareCreateAggregateDevice. The app successfully creates the tap and aggregate device, but when starting the IO procedure with AudioDeviceStart, it sometimes fails with OSStatus error 1852797029 ("The operation couldn’t be completed. (OSStatus error 1852797029.)"). The error occurs inconsistently, which makes it particularly difficult to debug and reproduce.

Questions:
- Has anyone encountered this intermittent "nope" error code (0x6e6f7065) when working with system audio capture?
- Are there specific conditions or system states that might trigger this error sporadically?
- Are there any known workarounds for handling this intermittent failure case?

Any insights or guidance would be greatly appreciated.
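As a small aside, here is a helper of my own (not from the post) for turning an OSStatus like 1852797029 back into its four-character code, which is how it reads as "nope":

import Foundation

func fourCharCode(from status: OSStatus) -> String {
    let value = UInt32(bitPattern: status)
    let bytes = [
        UInt8((value >> 24) & 0xFF),
        UInt8((value >> 16) & 0xFF),
        UInt8((value >> 8) & 0xFF),
        UInt8(value & 0xFF)
    ]
    // Fall back to the raw number if the bytes aren't printable ASCII.
    guard bytes.allSatisfy({ $0 >= 0x20 && $0 < 0x7F }) else { return "\(status)" }
    return String(bytes: bytes, encoding: .ascii) ?? "\(status)"
}

// fourCharCode(from: 1852797029) == "nope"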
Replies: 0 | Boosts: 0 | Views: 118 | Activity: May ’25
"Baking together" two audio tracks into one for drag-and-drop
Hi all, with my app ScreenFloat, you can record your screen, along with system- and microphone audio. Those two audio feeds are recorded into separate audio tracks in order to individually remove or edit them later on. Now, these recordings you create with ScreenFloat can be drag-and-dropped to other apps instantly. So far, so good, but some apps, like Slack, or VLC, or even websites like YouTube, do not play back multiple audio tracks, just one.

So what I'm trying to do is, on dragging the video recording file out of ScreenFloat, instantly baking together the two individual audio tracks into one, and offering that new file as the drag-and-drop file, so that all audio is played in the target app. But it's slow. I mean, it's actually quite fast, but for drag and drop, it's slow.

My approach is this:
1. "Bake together" the two audio tracks into a one-track m4a audio file using AVMutableAudioMix and AVAssetExportSession.
2. Take the video track, add the new audio file as an audio track to it, and render that out using AVAssetExportSession.

For a quick benchmark, on a 3'40'' movie, step 1 takes ~1.7 seconds and step 2 adds another ~1.5 seconds, so we're at ~3.2 seconds. That's an eternity for a drag and drop, where the user might cancel if there's no immediate feedback. I could also do it in one step, but then I couldn't use the AV*Passthrough preset, and that makes it take around 32 seconds, because I assume it touches the video data (which is unnecessary in this case, so I think the two-step approach here is the fastest).

So, my question is: is there a faster way?

The best idea I can come up with right now is, when initially recording the screen with system- and microphone audio as separate tracks, to also record both of them into a third, muted, "hidden" track I could use later on, basically eliminating the need for step 1 and just ripping the two single audio tracks out of the movie and only having the video and the "hidden" track (then unmuted), but I'd still have a ~1.5 second delay there. Also, there's the processing and data overhead (basically doubling the movie's audio data).

All this would be great for an export operation (where one expects it to take a little time), but for a drag-and-drop operation, it's not ideal. I've discarded the idea of doing a promise file drag, because many apps do not accept those, and I want to keep wide compatibility with all sorts of apps.

I'd appreciate any ideas or pointers. Thank you kindly, Matthias
Replies: 2 | Boosts: 0 | Views: 649 | Activity: Mar ’25
Mic audio before and after a call is answered
I have an app that records a health provider’s conversation with a patient. I am using Audio Queue Services for this. If a phone call comes in while recording, the doctor wants to be able to ignore the call and continue the conversation without touching the phone. If the doctor answers the call, that’s fine – I will stop the recording. I can detect when the call comes in and ends using CXCallObserver and AVAudioSession.interruptionNotification. Unfortunately, when a call comes in and before it is answered or dismissed, the audio is suppressed. After the call is dismissed, the audio continues to be suppressed. How can I continue to get audio from the mic as long as the user does not answer the phone call?
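For what it's worth, here is a minimal sketch (mine, not the poster's code) of the CXCallObserver side: while a call is ringing, the CXCall reports hasConnected == false and hasEnded == false, so the delegate can distinguish "still ringing" from "answered" and "dismissed". The recording hooks are assumed placeholders, and this does not by itself stop iOS from suppressing the mic during the ring:

import CallKit

final class CallWatcher: NSObject, CXCallObserverDelegate {
    private let observer = CXCallObserver()
    var onCallAnswered: () -> Void = {}
    var onCallEnded: () -> Void = {}

    override init() {
        super.init()
        observer.setDelegate(self, queue: .main)
    }

    func callObserver(_ callObserver: CXCallObserver, callChanged call: CXCall) {
        if call.hasEnded {
            // The caller hung up or the user declined; try to resume recording here.
            onCallEnded()
        } else if call.hasConnected {
            // The user answered; stop recording.
            onCallAnswered()
        }
        // While the call is still ringing, hasConnected and hasEnded are both false.
    }
}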
Replies: 1 | Boosts: 0 | Views: 54 | Activity: May ’25
TTS Audio Unit Extension: File Write Access in App Group Container Denied Despite Proper Entitlements
I'm developing a TTS Audio Unit Extension that needs to write trace/log files to a shared App Group container. While the main app can successfully create and write files to the container, the extension gets sandbox denied errors despite having proper App Group entitlements configured.

Setup:
- Main app (Flutter) and TTS Audio Unit Extension share the same App Group
- App Group is properly configured in the developer portal and entitlements
- Main app successfully creates and uses files in the container
- Container structure shows existing directories (config/, dictionary/) with populated files
- Both targets have the App Group capability enabled and entitlements set

Current behavior:
- Extension can access/read the App Group container
- Extension can see existing directories and files
- All write attempts are blocked with "sandbox deny(1) file-write-create" errors

Code example:

const char* createSharedGroupPathWithComponent(const char* groupId, const char* component) {
    NSString* groupIdStr = [NSString stringWithUTF8String:groupId];
    NSString* componentStr = [NSString stringWithUTF8String:component];
    NSURL* url = [[NSFileManager defaultManager] containerURLForSecurityApplicationGroupIdentifier:groupIdStr];
    NSURL* fullPath = [url URLByAppendingPathComponent:componentStr];

    NSError *error = nil;
    if (![[NSFileManager defaultManager] createDirectoryAtPath:fullPath.path
                                   withIntermediateDirectories:YES
                                                    attributes:nil
                                                         error:&error]) {
        NSLog(@"Unable to create directory %@", error.localizedDescription);
    }
    return [[fullPath path] UTF8String];
}

Error output:

Sandbox: simaromur-extension(996) deny(1) file-write-create /private/var/mobile/Containers/Shared/AppGroup/36CAFE9C-BD82-43DD-A962-2B4424E60043/trace

Key questions:
- Are there additional entitlements required for TTS Audio Unit Extensions to write to App Group containers?
- Is this a known limitation of TTS Audio Unit Extensions?
- What is the recommended way to handle logging/tracing in TTS Audio Unit Extensions?
- If writing to App Group containers is not supported, what alternatives are available?

Current entitlements:

<dict>
    <key>com.apple.security.application-groups</key>
    <array>
        <string>group.com.<company>.<appname></string>
    </array>
</dict>
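On the question of alternatives, one common pattern is to route extension tracing through the unified logging system instead of files, since os_log does not require container write access. A minimal sketch under that assumption (the subsystem and category strings are placeholders):

import os

enum ExtensionLog {
    // Subsystem and category are placeholders; pick identifiers for your extension.
    static let logger = Logger(subsystem: "com.example.tts-extension", category: "trace")

    static func trace(_ message: String) {
        // Visible in Console.app even when sandbox rules block file writes.
        logger.debug("\(message, privacy: .public)")
    }
}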
Replies: 0 | Boosts: 0 | Views: 90 | Activity: Apr ’25
In the Speech framework, is SFTranscriptionSegment timing supposed to be off and speechRecognitionMetadata nil until isFinal?
I'm working in Swift/SwiftUI, running Xcode 16.3 on macOS 15.4, and I've seen this when running in the iOS Simulator and in a macOS app run from Xcode. I've also seen this behaviour with 3 different audio files. Nothing in the documentation says that the speechRecognitionMetadata property on an SFSpeechRecognitionResult will be nil until isFinal, but that's the behaviour I'm seeing. I've stripped my class down to the following:

private var isAuthed = false

// I call this in a .task {} in my SwiftUI View
public func requestSpeechRecognizerPermission() {
    SFSpeechRecognizer.requestAuthorization { authStatus in
        Task {
            self.isAuthed = authStatus == .authorized
        }
    }
}

public func transcribe(from url: URL) {
    guard isAuthed else { return }
    let locale = Locale(identifier: "en-US")
    let recognizer = SFSpeechRecognizer(locale: locale)
    let recognitionRequest = SFSpeechURLRecognitionRequest(url: url)
    // the behaviour occurs whether I set this to true or not, I recently set
    // it to true to see if it made a difference
    recognizer?.supportsOnDeviceRecognition = true
    recognitionRequest.shouldReportPartialResults = true
    recognitionRequest.addsPunctuation = true
    recognizer?.recognitionTask(with: recognitionRequest) { (result, error) in
        guard result != nil else { return }
        if result!.isFinal {
            // speechRecognitionMetadata is not nil
        } else {
            // speechRecognitionMetadata is nil
        }
    }
}

Further, and this isn't documented either, the SFTranscriptionSegment values don't have correct timestamp and duration values until isFinal. The values aren't all zero, but they don't align with the timing in the audio, and they change to accurate values when isFinal is true. The transcription otherwise "works", in that I get transcription text before isFinal, and if I wait for isFinal the segments are correct and speechRecognitionMetadata is filled with values.

The context here is that I'm trying to generate a transcription that I can then highlight the spoken sections of as audio plays, and I'm thinking I must be trying to use the Speech framework in a way it does not work. I got my concept working if I pre-process the audio (i.e. run it through until isFinal and save the results I need to JSON), but being able to do even a rougher version of it 'on the fly', which requires segments to have the right timestamp/duration before isFinal, is perhaps impossible?
Replies: 1 | Boosts: 0 | Views: 115 | Activity: Jul ’25
changePlaybackRateCommand does not work on iOS.
Hi. I work on an audio app for iOS which is successfully using the MPRemoteCommandCenter for commands like next, back, skip forward, skip backward etc. I am trying to implement playback rate controls in my app (so that users can change the playback speed of audio to 0.5x or 2x for example). While the above commands work, the changePlaybackRateCommand does not seem to. I have enabled the command, given it a target/handler and set supported rates. With the other commands, this caused the UI to change on lock screen, in command center etc, by adding the control for the command (a next button for the next command for example). However, it does not seem to do anything for the playback rate command. I can implement my own "rate button" UI and rate change handling, but I'm wondering if this is a known bug within Apple? Looking online, it seems other people face the same issue and haven't been able to get this command to work. Why is this API provided if it doesn't seem to do anything? Is there something I'm missing? Kind regards.
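For context, here is a minimal sketch of how this command is typically wired up (my own illustration, not a confirmed fix; player is assumed to be an AVPlayer):

import AVFoundation
import MediaPlayer

var rateCommandToken: Any?

func configurePlaybackRateCommand(for player: AVPlayer) {
    let command = MPRemoteCommandCenter.shared().changePlaybackRateCommand
    command.isEnabled = true
    command.supportedPlaybackRates = [0.5, 1.0, 1.5, 2.0]
    // Keep the returned token if the handler ever needs to be removed.
    rateCommandToken = command.addTarget { event in
        guard let event = event as? MPChangePlaybackRateCommandEvent else { return .commandFailed }
        player.rate = event.playbackRate
        // Keep Now Playing info in sync so the system UI reflects the new rate.
        var info = MPNowPlayingInfoCenter.default().nowPlayingInfo ?? [:]
        info[MPNowPlayingInfoPropertyPlaybackRate] = event.playbackRate
        MPNowPlayingInfoCenter.default().nowPlayingInfo = info
        return .success
    }
}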
Replies: 0 | Boosts: 0 | Views: 335 | Activity: Jan ’25
AVAssetWriterInput Crash on appendSampleBuffer Converting PCM
Overview: We are producing audio in real time from an editing application and are trying to put that on an HLS stream. We attempt to submit PCM samples through an audio writer but are getting a crash after a select number of samples have been appended. Depending on the number of audio frames in the PCM buffer, we might get more iterations before the crash, but it always has the same traceback (see below).

Code: The setup is rather simple. We took inspiration from a few sources around the web.

NSMutableDictionary *audio = [[NSMutableDictionary alloc] init];
[audio setObject:@(kAudioFormatMPEG4AAC) forKey:AVFormatIDKey];
[audio setObject:[NSNumber numberWithInt:config.audioSampleRate] // 48000
          forKey:AVSampleRateKey];
[audio setObject:[NSNumber numberWithInt:config.audioChannels] // 2
          forKey:AVNumberOfChannelsKey];
[audio setObject:@160000 forKey:AVEncoderBitRateKey];
m_audioConfig = [[NSDictionary alloc] initWithDictionary:audio];

m_audio = [[AVAssetWriterInput alloc] initWithMediaType:AVMediaTypeAudio outputSettings:m_audioConfig];

AVAudioFrameCount audioFrames = BUFFER_SAMPLES * bCount;
AVAudioPCMBuffer *pcmBuffer = [[AVAudioPCMBuffer alloc] initWithPCMFormat:m_full.pcmFormat frameCapacity:audioFrames];
pcmBuffer.frameLength = pcmBuffer.frameCapacity;

AudioChannelLayout layout;
memset(&layout, 0, sizeof(layout));
layout.mChannelLayoutTag = kAudioChannelLayoutTag_Stereo;

CMFormatDescriptionRef format;
OSStatus stats = CMAudioFormatDescriptionCreate(
    kCFAllocatorDefault,
    pcmBuffer.format.streamDescription,
    sizeof(layout),
    &layout,
    0,
    nil,
    nil,
    &format
);

for (int i = 0; i < bCount; i++) {
    AudioPCM pcm;
    audioCallback->callback(pcm);
    memcpy(*(pcmBuffer.int16ChannelData) + (bufferSize * i), pcm.data, bufferSize);
}

size_t samplesConsumed = BUFFER_SAMPLES * bCount;

CMSampleBufferRef sampleBuffer;
CMSampleTimingInfo timing;
timing.duration = CMTimeMake(1, config.audioSampleRate);
timing.presentationTimeStamp = presentationTime;
timing.decodeTimeStamp = kCMTimeInvalid;

OSStatus ostatus = CMSampleBufferCreate(
    kCFAllocatorDefault,
    nil,
    false,
    nil,
    nil,
    format,
    (CMItemCount)pcmBuffer.frameLength,
    1,
    &timing,
    0,
    nil,
    &sampleBuffer
);

////
ostatus = CMSampleBufferSetDataBufferFromAudioBufferList(
    sampleBuffer,
    kCFAllocatorDefault,
    kCFAllocatorDefault,
    kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment,
    pcmBuffer.audioBufferList
);
if (ostatus != noErr) {
    NSLog(@"fill audio sample from buffer list failed: %s", logAudioError(ostatus));
    return;
}

ostatus = CMSampleBufferSetDataReady(sampleBuffer);
if (ostatus != noErr) {
    NSLog(@"set sample buffer ready failed: %s", logAudioError(ostatus));
    return;
}

// Finally we can attach it, then shove the presentation time forward
[m_audio appendSampleBuffer:sampleBuffer];

The Crash: The crash points towards some level of deallocation when the conversion tooling is done or has enough samples to process an output packet? It's hard to say.

0  caulk                    0x1a1e9532c caulk::alloc::tiered_allocator<caulk::alloc::size_range_tier<0ul, 1008ul, caulk::alloc::tree_allocator<caulk::alloc::chunk_allocator<caulk::alloc::page_allocator, caulk::alloc::bitmap_allocator, caulk::alloc::embed_block_memory, 16384ul, 16ul, 6ul>>>, caulk::alloc::size_range_tier<1009ul, 256000ul, caulk::alloc::guarded_edges_allocator<caulk::alloc::consolidating_free_map<caulk::alloc::page_allocator, 10485760ul>, 4ul>>, caulk::alloc::tracking_allocator<caulk::alloc::page_allocator>>::deallocate(caulk::alloc::block, unsigned long) + 636
1  AudioToolboxCore         0x1993fbfe4 ExtendedAudioBufferList_Destroy + 112
2  AudioToolboxCore         0x1993d5fe0 std::__1::__optional_destruct_base<ACCodecOutputBuffer, false>::~__optional_destruct_base[abi:ne180100]() + 68
3  AudioToolboxCore         0x1993d5f48 acv2::CodecConverter::~CodecConverter() + 196
4  AudioToolboxCore         0x1993d5e5c acv2::CodecConverter::~CodecConverter() + 16
5  AudioToolboxCore         0x1992574d8 std::__1::vector<std::__1::unique_ptr<acv2::AudioConverterBase, std::__1::default_delete<acv2::AudioConverterBase>>, std::__1::allocator<std::__1::unique_ptr<acv2::AudioConverterBase, std::__1::default_delete<acv2::AudioConverterBase>>>>::__clear[abi:ne180100]() + 84
6  AudioToolboxCore         0x199259acc acv2::AudioConverterChain::RebuildConverterChain(acv2::ChainBuildSettings const&) + 116
7  AudioToolboxCore         0x1992596ec acv2::AudioConverterChain::SetProperty(unsigned int, unsigned int, void const*) + 1808
8  AudioToolboxCore         0x199324acc acv2::AudioConverterV2::setProperty(unsigned int, unsigned int, void const*) + 84
9  AudioToolboxCore         0x199327f08 with_resolved(OpaqueAudioConverter*, caulk::function_ref<int (AudioConverterAPI*)>) + 60
10 AudioToolboxCore         0x1993281e4 AudioConverterSetProperty + 72
11 MediaToolbox             0x1a7566c2c FigSampleBufferProcessorCreateWithAudioCompression + 2296
12 MediaToolbox             0x1a754db08 0x1a70b5000 + 4819720
13 MediaToolbox             0x1a754dab4 FigMediaProcessorCreateForAudioCompressionWithFormatWriter + 100
14 MediaToolbox             0x1a77ebb98 0x1a70b5000 + 7564184
15 MediaToolbox             0x1a7804158 0x1a70b5000 + 7663960
16 MediaToolbox             0x1a7801da0 0x1a70b5000 + 7654816
17 AVFCore                  0x1ada530c4 -[AVFigAssetWriterTrack addSampleBuffer:error:] + 192
18 AVFCore                  0x1ada55164 -[AVFigAssetWriterAudioTrack _flushPendingSampleBuffersReturningError:] + 500
19 AVFCore                  0x1ada55354 -[AVFigAssetWriterAudioTrack addSampleBuffer:error:] + 472
20 AVFCore                  0x1ada4ebf0 -[AVAssetWriterInputWritingHelper appendSampleBuffer:error:] + 128
21 AVFCore                  0x1ada4c354 -[AVAssetWriterInput appendSampleBuffer:] + 168
22 lib_devapple_hls.dylib   0x115d2c7cc detail::AppleHLSImplementation::audioRuntime() + 1052
23 lib_devapple_hls.dylib   0x115d2d094 void* std::__1::__thread_proxy[abi:ne180100]<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct>>, void (detail::AppleHLSImplementation::*)(), detail::AppleHLSImplementation*>>(void*) + 72
24 libsystem_pthread.dylib  0x196e5b2e4 _pthread_start + 136

Any insight would be welcome!
Replies: 2 | Boosts: 0 | Views: 162 | Activity: Jun ’25