AudioToolbox


Record or play audio, convert formats, parse audio streams, and configure your audio session using AudioToolbox.

Posts under AudioToolbox tag

42 Posts

BuildConverter: AudioConverterNew returned -50
AudioQueueObject.cpp:1580   BuildConverter: AudioConverterNew returned -50
  from: 0 ch, 16000 Hz, .... (0x00000000) 0 bits/channel, 0 bytes/packet, 0 frames/packet, 0 bytes/frame
  to:   2 ch, 16000 Hz, Int16, interleaved
AQMEIO_HAL.cpp:2773         iOSSimulatorAudioDevice-15111-0: Abandoning I/O cycle because reconfig pending (1).
HALC_ProxySystem.cpp:163    HALC_ProxySystem::GetObjectInfo: got an error from the server, Error: 560947818 (!obj)
HALC_ShellObject.mm:213     HALC_ShellObject::HasProperty: there is no proxy object
AudioHardware-mac-imp.cpp:1224   AudioObjectRemovePropertyListener: no object with given ID 160
HALSystem.cpp:2216          AudioObjectPropertiesChanged: no such object

Why does this happen? I can't record on iOS 17; recording worked normally on iOS 16.
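For reference, here is a minimal Swift sketch (editorial, not from the thread) of a fully specified recording format handed to AudioQueueNewInput. The -50 (kAudio_ParamError) above appears alongside a source format whose channel count, bit depth, and packet sizes are all zero, so the sketch simply illustrates filling in every AudioStreamBasicDescription field; the 16 kHz mono Int16 values are assumptions.

```swift
import AudioToolbox

// A fully specified PCM recording format. The converter the audio queue creates
// commonly fails with -50 (kAudio_ParamError) when fields such as channels,
// bits per channel, or bytes per packet are left at zero.
var format = AudioStreamBasicDescription(
    mSampleRate: 16_000,
    mFormatID: kAudioFormatLinearPCM,
    mFormatFlags: kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked,
    mBytesPerPacket: 2,
    mFramesPerPacket: 1,
    mBytesPerFrame: 2,
    mChannelsPerFrame: 1,
    mBitsPerChannel: 16,
    mReserved: 0
)

var queue: AudioQueueRef?
// Empty callback just to show the call shape; a real recorder would re-enqueue buffers here.
let status = AudioQueueNewInput(&format, { _, _, _, _, _, _ in }, nil, nil, nil, 0, &queue)
print("AudioQueueNewInput status:", status)
```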
Replies: 0 · Boosts: 0 · Views: 471 · Activity: Sep ’24
ExtAudioFileRead throwing AVAudioSessionErrorCodeResourceNotAvailable error on iOS and iPadOS 18
Calls to ExtAudioFileRead are throwing OSStatus 561145203 (AVAudioSessionErrorCodeResourceNotAvailable) on iOS and iPadOS 18 -- earlier versions of iOS have not exhibited this behavior. This is a longstanding code path that has seen a spike of these error codes since iOS 18's release. The following is also printed to the Xcode 16 console:
Replies: 2 · Boosts: 1 · Views: 620 · Activity: Sep ’24
How best to handle AirPods audio glitches in Game Mode?
Hello! The new lower latency support for AirPods in Game Mode is impressive, but I'm not sure of the best way to handle the transition into/out of Game Mode while audio is playing. In order to lower the latency, the system appears to drop some number of samples, with the result being a good deal less latency.

My use case is macOS, where it's easier to switch in/out of the fullscreen game (a simple swipe left), thus causing more issues for Game Mode since the audio is playing the entire time. It would be nice if offscreen games could remain in Game Mode, but I understand not wanting to give developers that control.

Are there any best practices for avoiding or masking the audio glitch caused by this skip-ahead? Is there a system event I can receive to know when Game Mode is about to be enabled or disabled, where I could perhaps fade out the audio? My callback checks the inTimestamp->mSampleTime value to detect gaps, but it only rarely detects a Game Mode gap, even though the audio skip-ahead always happens.

BTW, I am currently only developing on macOS (15.0) and I'm working at a low level with AudioUnit callbacks and a SpatialMixer. I am not currently using any higher-level audio APIs.

And here are a few questions I don't necessarily expect answers to, but it doesn't hurt to ask: Are there any additional technical details about how this latency reduction works, or exactly how much of a reduction is achieved (or, said another way, how many samples are dropped)? How much does this affect AirPods battery life? And finally, is there a way to query the actual latency value? I check the value for kAudioDevicePropertyLatency but it seems to always report 160ms for AirPods. Thanks!
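On the last question, a short editorial sketch of querying kAudioDevicePropertyLatency for an output device. Note that the property reports frames rather than milliseconds, and device latency is only one component of the total (stream latency, the safety offset, and the IO buffer size also contribute), which may be part of why the reading looks constant; the function name is illustrative.

```swift
import CoreAudio

// Reads the device-level output latency, in frames (not milliseconds).
func deviceOutputLatencyFrames(_ deviceID: AudioObjectID) -> UInt32? {
    var address = AudioObjectPropertyAddress(
        mSelector: kAudioDevicePropertyLatency,
        mScope: kAudioObjectPropertyScopeOutput,
        mElement: kAudioObjectPropertyElementMain)
    var latencyFrames: UInt32 = 0
    var dataSize = UInt32(MemoryLayout<UInt32>.size)
    let status = AudioObjectGetPropertyData(deviceID, &address, 0, nil, &dataSize, &latencyFrames)
    return status == noErr ? latencyFrames : nil
}
```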
Replies: 1 · Boosts: 0 · Views: 814 · Activity: Sep ’24
iPhone 13 iOS 18 Upgrade Issues: Missing Call Recording and Notes Features
I recently upgraded my iPhone 13 to iOS 18, and I'm facing two issues. I didn't get the call recording feature. When I make a call and the person picks up, the call recording icon is not showing. The option to take notes during a call has disappeared. Earlier, the notes option used to be available on the call screen itself. Please help fix these two issues, or let me know if it's possible to resolve them from my end.
Replies: 0 · Boosts: 0 · Views: 1.9k · Activity: Sep ’24
About nullAudio AudioObjectPropertySelector custom attributes
1. I saw that NullAudio declares the custom property selector static const AudioObjectPropertySelector kPlugIn_CustomPropertyID = 'PCst';, but I don't know how to use it in a project.
2. What is the difference between the PlugIn's and the Device's custom properties?
3. When I try to define a custom PropertySelector for the device, by adding kAudioObjectPropertyCustomPropertyInfoList to the NullAudio_HasDeviceProperty method, recompiling, and restarting the Core Audio service, I find that the virtual device no longer shows up.
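Regarding question 1, here is an editorial sketch (assumptions: Swift client code, and that your virtual device's AudioObjectID is known at runtime) of how a client application can at least probe a custom property such as 'PCst' once the plug-in publishes it via kAudioObjectPropertyCustomPropertyInfoList.

```swift
import CoreAudio

// Probe a custom property published by an audio object (plug-in or device).
func probeCustomProperty(on objectID: AudioObjectID) {
    let selector: AudioObjectPropertySelector = 0x50437374 // 'PCst', as declared in NullAudio
    var address = AudioObjectPropertyAddress(
        mSelector: selector,
        mScope: kAudioObjectPropertyScopeGlobal,
        mElement: kAudioObjectPropertyElementMain)

    guard AudioObjectHasProperty(objectID, &address) else {
        print("Object \(objectID) does not report 'PCst'")
        return
    }
    var dataSize: UInt32 = 0
    let status = AudioObjectGetPropertyDataSize(objectID, &address, 0, nil, &dataSize)
    print("Object \(objectID) has 'PCst'; data size = \(dataSize), status = \(status)")
}
```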
Replies: 0 · Boosts: 0 · Views: 473 · Activity: Sep ’24
Understanding the number of input channels in Core Audio
Hello everyone, I'm new to Core Audio and still haven't found my footing. I'm learning how to capture audio from the default device, using Audio Units.

On my MacBook, the default audio input is mono. But when I write a piece of code to capture audio using AUHAL, I'm discovering that I need to provide an AudioBufferList with two channels, not one. Also, when I try to capture audio from an audio interface with 20 audio inputs, I must provide an AudioBufferList with two channels, and not with 20 channels.

To investigate the issue, I wrote a small diagnostic program, which opens the default audio device and probes it for the number of channels. Depending on which way I'm probing, I'm getting different results. When I probe the stream format, I'm told that there is 1 channel. But when I probe the input audio unit, I'm told that there are 2 input channels. Here's my program to demonstrate the issue:

// InputDeviceChannels.m
// Compile with:
//   clang -framework CoreAudio -framework AudioToolbox -framework CoreFoundation -framework AudioUnit -o InputDeviceChannels InputDeviceChannels.m
//
// On my system, this prints:
//   Device Name: MacBook Pro Microphone
//   Number of Channels (Stream Format): 1
//   Number of Elements (Element Count): 2

#import <AudioToolbox/AudioToolbox.h>
#import <AudioUnit/AudioUnit.h>
#import <CoreAudio/CoreAudio.h>
#import <Foundation/Foundation.h>

void printDeviceInfo(AudioUnit audioUnit) {
  UInt32 size;
  OSStatus err;

  AudioStreamBasicDescription streamFormat;
  size = sizeof(streamFormat);
  err = AudioUnitGetProperty(audioUnit, kAudioUnitProperty_StreamFormat,
                             kAudioUnitScope_Input, 1, &streamFormat, &size);
  if (err != noErr) {
    printf("Error getting stream format\n");
    exit(1);
  }
  int numChannels = streamFormat.mChannelsPerFrame;

  UInt32 elementCount;
  size = sizeof(elementCount);
  err = AudioUnitGetProperty(audioUnit, kAudioUnitProperty_ElementCount,
                             kAudioUnitScope_Input, 0, &elementCount, &size);
  if (err != noErr) {
    printf("Error getting element count\n");
    exit(1);
  }

  printf("Number of Channels (Stream Format): %d\n", numChannels);
  printf("Number of Elements (Element Count): %d\n", elementCount);
}

void printDeviceName(AudioDeviceID deviceID) {
  UInt32 size;
  OSStatus err;
  CFStringRef deviceName = NULL;
  size = sizeof(deviceName);
  err = AudioObjectGetPropertyData(
      deviceID,
      &(AudioObjectPropertyAddress){kAudioDevicePropertyDeviceNameCFString,
                                    kAudioObjectPropertyScopeGlobal,
                                    kAudioObjectPropertyElementMain},
      0, NULL, &size, &deviceName);
  if (err != noErr) {
    printf("Error getting device name\n");
    exit(1);
  }

  char deviceNameStr[256];
  if (!CFStringGetCString(deviceName, deviceNameStr, sizeof(deviceNameStr),
                          kCFStringEncodingUTF8)) {
    printf("Error converting device name to C string\n");
    exit(1);
  }
  CFRelease(deviceName);
  printf("Device Name: %s\n", deviceNameStr);
}

int main(int argc, const char *argv[]) {
  @autoreleasepool {
    OSStatus err;

    // Get the default input device ID
    AudioDeviceID input_device_id = kAudioObjectUnknown;
    {
      UInt32 property_size = sizeof(input_device_id);
      AudioObjectPropertyAddress input_device_property = {
          kAudioHardwarePropertyDefaultInputDevice,
          kAudioObjectPropertyScopeGlobal,
          kAudioObjectPropertyElementMain,
      };
      err = AudioObjectGetPropertyData(kAudioObjectSystemObject,
                                       &input_device_property, 0, NULL,
                                       &property_size, &input_device_id);
      if (err != noErr || input_device_id == kAudioObjectUnknown) {
        printf("Error getting default input device ID\n");
        exit(1);
      }
    }

    // Print the device name using the input device ID
    printDeviceName(input_device_id);

    // Open audio unit for the input device
    AudioComponentDescription desc = {kAudioUnitType_Output,
                                      kAudioUnitSubType_HALOutput,
                                      kAudioUnitManufacturer_Apple, 0, 0};
    AudioComponent component = AudioComponentFindNext(NULL, &desc);
    AudioUnit audioUnit;
    err = AudioComponentInstanceNew(component, &audioUnit);
    if (err != noErr) {
      printf("Error creating AudioUnit\n");
      exit(1);
    }

    // Enable IO for input on the AudioUnit and disable output
    UInt32 enableInput = 1;
    UInt32 disableOutput = 0;
    err = AudioUnitSetProperty(audioUnit, kAudioOutputUnitProperty_EnableIO,
                               kAudioUnitScope_Input, 1, &enableInput,
                               sizeof(enableInput));
    if (err != noErr) {
      printf("Error enabling input on AudioUnit\n");
      exit(1);
    }
    err = AudioUnitSetProperty(audioUnit, kAudioOutputUnitProperty_EnableIO,
                               kAudioUnitScope_Output, 0, &disableOutput,
                               sizeof(disableOutput));
    if (err != noErr) {
      printf("Error disabling output on AudioUnit\n");
      exit(1);
    }

    // Set the current device to the input device
    err = AudioUnitSetProperty(audioUnit, kAudioOutputUnitProperty_CurrentDevice,
                               kAudioUnitScope_Global, 0, &input_device_id,
                               sizeof(input_device_id));
    if (err != noErr) {
      printf("Error setting device for AudioUnit\n");
      exit(1);
    }

    // Initialize AudioUnit
    err = AudioUnitInitialize(audioUnit);
    if (err != noErr) {
      printf("Error initializing AudioUnit\n");
      exit(1);
    }

    // Print device info
    printDeviceInfo(audioUnit);

    // Clean up
    AudioUnitUninitialize(audioUnit);
    AudioComponentInstanceDispose(audioUnit);
  }
  return 0;
}

It prints:

Device Name: MacBook Pro Microphone
Number of Channels (Stream Format): 1
Number of Elements (Element Count): 2

I tried to set the number of channels to 1 on the input unit, but it didn't change anything. After calling setNumberOfChannels(1, audioUnit), I'm still getting the same output.

Note 1: I know that I can ignore one channel, etc. My purpose here is not to "somehow get it to work", I already did that. My purpose is to understand the API, so that I'll be able to write code that handles any number of audio inputs.

Note 2: I already read a bunch of documentation, especially this technote: https://developer.apple.com/library/archive/technotes/tn2091/ Perhaps the channel map could help here, but I can't make sense of it; I tried to use it based on my understanding, but I only got the -50 OSStatus.

How should I understand this? Is it that the audio unit is an abstraction layer and automatically converts mono input into stereo input? Can I ask AUHAL to provide the same number of input channels that the audio device has?
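On the final question, an editorial Swift sketch (the same calls exist in C/Objective-C) of the usual approach suggested by TN2091: the format AUHAL delivers to the client is set on the output scope of the input element (element 1), so you can request a client format there whose channel count matches the device. Treat this as a sketch under that assumption, not a guaranteed fix.

```swift
import AudioToolbox

// Ask the AUHAL input element to deliver a client-side format with a chosen
// channel count. For input, the client format lives on the *output* scope of
// element 1 (the input element).
func setClientInputChannels(_ audioUnit: AudioUnit, channels: UInt32, sampleRate: Float64) -> OSStatus {
    var clientFormat = AudioStreamBasicDescription(
        mSampleRate: sampleRate,
        mFormatID: kAudioFormatLinearPCM,
        mFormatFlags: kAudioFormatFlagsNativeFloatPacked | kAudioFormatFlagIsNonInterleaved,
        mBytesPerPacket: 4,
        mFramesPerPacket: 1,
        mBytesPerFrame: 4,
        mChannelsPerFrame: channels,
        mBitsPerChannel: 32,
        mReserved: 0
    )
    return AudioUnitSetProperty(audioUnit,
                                kAudioUnitProperty_StreamFormat,
                                kAudioUnitScope_Output,
                                1, // element 1 = AUHAL's input element
                                &clientFormat,
                                UInt32(MemoryLayout<AudioStreamBasicDescription>.size))
}
```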
Replies: 1 · Boosts: 0 · Views: 797 · Activity: Sep ’24
What methods in what Framework to separate an audio file into two files?
I'm having trouble using SFSpeechRecognizer & SFSpeechRecognitionTask to show me the words from an audio file. I found a solution on stackoverflow to separate the audio file into smaller sizes. How would I do that programmatically using Swift for a macOS app Xcode project? I would prefer not to separate the file into smaller files. I will submit another post with more information for that.
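One possible direction (an editorial sketch, not a tested answer): AVFoundation's AVAudioFile can read a frame range into a buffer and write it out as a new file, which is one way to split an audio file into smaller pieces on macOS. The function name and the single-read approach are illustrative; long segments should be copied in smaller chunks.

```swift
import AVFoundation

// Copies `frameCount` frames starting at `startFrame` from the source file into a new file.
func exportSegment(of sourceURL: URL, to destinationURL: URL,
                   startFrame: AVAudioFramePosition, frameCount: AVAudioFrameCount) throws {
    let source = try AVAudioFile(forReading: sourceURL)
    let destination = try AVAudioFile(forWriting: destinationURL,
                                      settings: source.processingFormat.settings)
    source.framePosition = startFrame

    guard let buffer = AVAudioPCMBuffer(pcmFormat: source.processingFormat,
                                        frameCapacity: frameCount) else { return }
    try source.read(into: buffer, frameCount: frameCount) // for large segments, loop in chunks
    try destination.write(from: buffer)
}
```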
Replies: 3 · Boosts: 0 · Views: 667 · Activity: Aug ’24
Implementing Multi-Channel Audio Recording on iOS with Built-In and External Mics
Hi there community,

First and foremost, a big thank you to everyone who takes the time to read this.

TL;DR: How, if even possible, can I record multiple audio streams simultaneously on an iOS application (iPad/iPhone)?

I'm working on a recorder for the iPad to gather data for a machine learning project focused on speech recognition. Our goal is to capture extensive speech data, which requires recording from multiple microphones. Specifically, I need to record from all mics connected to our Scarlett 4i4 audio interface and, most importantly, also record from the built-in mic on the iPad or iPhone at the same time.

As a newcomer to Swift development, I initially explored AVAudioRecorder. However, I quickly realized that it only supports one active audio node at a time, making multi-channel recording impossible. (Perhaps you can prove me wrong; that would make my day.) Next, I transitioned to using AVAudioEngine, but encountered the same limitation: I couldn't manage to get input nodes for both the built-in mic and the Scarlett interface channels simultaneously. The application started behaving oddly, often resulting in identical audio data being recorded across all files.

Determined to find a solution, I delved deeper into the Core Audio framework, specifically using Audio Toolbox. My approach involved creating and configuring multiple Audio Units, each corresponding to a different audio input device. Here's a brief overview of my current implementation:

1. Listing available input devices: I used AVAudioSession to enumerate all available input devices.
2. Creating Audio Units: For each device, I created an Audio Unit and attempted to configure it for recording.
3. Setting up callbacks: I set up input and output callbacks to handle the audio processing.

Despite my efforts over the last few days, I haven't had much success. The callbacks for the Audio Units don't seem to be invoked correctly, and I'm struggling to achieve simultaneous multi-channel recording. Below is a snippet of my latest attempt:

let audioUnitCallback: AURenderCallback = { (
    inRefCon: UnsafeMutableRawPointer,
    ioActionFlags: UnsafeMutablePointer<AudioUnitRenderActionFlags>,
    inTimeStamp: UnsafePointer<AudioTimeStamp>,
    inBusNumber: UInt32,
    inNumberFrames: UInt32,
    ioData: UnsafeMutablePointer<AudioBufferList>?
) -> OSStatus in
    guard let ioData = ioData else {
        return noErr
    }

    print("Input callback invoked")

    let audioUnit = inRefCon.assumingMemoryBound(to: AudioUnit.self).pointee

    var bufferList = AudioBufferList(
        mNumberBuffers: 1,
        mBuffers: AudioBuffer(
            mNumberChannels: 1,
            mDataByteSize: 0,
            mData: nil
        )
    )

    let status = AudioUnitRender(audioUnit, ioActionFlags, inTimeStamp, inBusNumber, inNumberFrames, &bufferList)
    if status != noErr {
        print("AudioUnitRender failed: \(status)")
        return status
    }

    // Copy rendered data to output buffer
    let buffer = UnsafeMutableAudioBufferListPointer(ioData)[0]
    buffer.mData?.copyMemory(from: bufferList.mBuffers.mData!, byteCount: Int(bufferList.mBuffers.mDataByteSize))
    buffer.mDataByteSize = bufferList.mBuffers.mDataByteSize

    print("Rendered audio data")
    return noErr
}

let outputCallback: AURenderCallback = { (
    inRefCon: UnsafeMutableRawPointer,
    ioActionFlags: UnsafeMutablePointer<AudioUnitRenderActionFlags>,
    inTimeStamp: UnsafePointer<AudioTimeStamp>,
    inBusNumber: UInt32,
    inNumberFrames: UInt32,
    ioData: UnsafeMutablePointer<AudioBufferList>?
) -> OSStatus in
    guard let ioData = ioData else {
        return noErr
    }

    print("Output callback invoked")
    // Process the output data if needed
    return noErr
}

In essence, I'm stuck and in need of guidance. Has anyone here successfully implemented multi-channel recording on iOS, especially involving both built-in microphones and external audio interfaces? Any shared experiences, insights, or suggestions on how to proceed would be immensely appreciated.

Thank you once again for your time and assistance!
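For the first step (listing available input devices), this is roughly what the AVAudioSession enumeration looks like; an editorial sketch with illustrative category options, and it does not by itself solve the simultaneous built-in-mic plus USB-interface capture question.

```swift
import AVFoundation

// Enumerate the input ports the audio session can currently use (iOS).
func listAvailableInputs() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord, options: [.allowBluetooth])
    try session.setActive(true)
    for port in session.availableInputs ?? [] {
        let channelCount = port.channels?.count ?? 0
        print("\(port.portName) [\(port.portType.rawValue)] channels: \(channelCount)")
    }
}
```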
Replies: 0 · Boosts: 0 · Views: 789 · Activity: Jul ’24
getUserMedia not working in background for UIWebView
I have a React website that uses getUserMedia to capture user audio. I'm displaying this website in an iOS mobile app using UIWebView in SwiftUI. The audio is correctly captured when the app is in focus. However, when the app goes to the home screen and runs in the background, the microphone audio gets cut off. This issue does not occur when the website is opened in iOS Safari.

Here's my Info.plist and .entitlements file. I granted most, if not all, permissions for both files in an attempt to get it to work, but it still doesn't resolve the issue.

Info.plist:
audio, bluetooth-central, bluetooth-peripheral, external-accessory, fetch, location, nearby-interaction, processing, push-to-talk, remote-notification, voip

Entitlements.plist:
com.apple.developer.push-to-talk: true
com.apple.developer.spatial-audio.profile-access: true
inter-app-audio: true
Replies: 0 · Boosts: 0 · Views: 473 · Activity: Jul ’24
API or SDK for Integrating Audible Files in iOS App
Hello Apple Community, I am developing an iOS app and would like to add a feature that allows users to play and organize Audible.com files within the app. Does Audible or the App Store provide any API or SDK for third-party apps to access and manage Audible content? If so, could you please provide some guidance on how to integrate it into my app? Thank you for your assistance! Best regards, Yes it labs
Replies: 0 · Boosts: 0 · Views: 676 · Activity: Jul ’24
All SystemSoundID
In the AudioServicesPlaySystemSound function of AudioToolbox, you can pass a SystemSoundID to play some of the sound effects that come with the system. However, I can't be sure which sound effect each number corresponds to, so I want to know all the sound effects in visionOS and their corresponding SystemSoundIDs.
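For reference, the call itself is a one-liner; the ID below is purely illustrative, since (as the post notes) there is no published table mapping SystemSoundID values to the sounds they trigger.

```swift
import AudioToolbox

// Plays whatever system sound is registered under this ID, if any.
// IDs in the 1000 range are predefined system sounds; 1104 is just an example value.
let soundID: SystemSoundID = 1104
AudioServicesPlaySystemSound(soundID)
```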
Replies: 0 · Boosts: 1 · Views: 580 · Activity: Jul ’24
(Audio) WorkGroup in CoreAudio server plugin
Hi, we have multiple threads in our CoreAudio server plugin carrying out necessary asynchronous work (namely handling USB callbacks and shuffling the required data to the IO). Although these threads have been set up with the appropriate THREAD_TIME_CONSTRAINT_POLICY (which does help), on M* processors there is an extremely high, non-realtime amount of jitter of >10 ms(!). Either the run-loop notification from the USB stack arrives that late, or the thread driving the run loop hasn't been set up to handle the callbacks in a timely manner. Since AudioUnit threads that need to meet the frame deadlines can join the workgroup of the audio device, is there a similar opportunity for CoreAudio server plugin threads? And if so, how should these be set up correctly? Thanks for any hints, or for pointing me to the docs!
Replies: 0 · Boosts: 0 · Views: 666 · Activity: Jun ’24
AudioMidi.app / Music.app drift sync
When I connect my MacBook to my living room AirPort (older-gen wall wart) via the Music app, the music output in both rooms is synced. When I try to set up a Multi-Output Device in Audio MIDI Setup, I'm not able to get them synced. I'm outputting to the same devices, they're all on the same sample rate, and I've played with the various settings (Primary Clock Source and Drift Sync). What gives? How are these connections different? Intel MacBook Pro 2018 running Sonoma 14.5.
Replies: 1 · Boosts: 0 · Views: 781 · Activity: May ’24
Deadlock in AudioWorkIntervalCreate
Sometimes when I call AudioWorkIntervalCreate the call hangs with the following stacktrace. The call is made on the main thread.

mach_msg2_trap 0x00007ff801f0b3ce
mach_msg2_internal 0x00007ff801f19d80
mach_msg_overwrite 0x00007ff801f12510
mach_msg 0x00007ff801f0b6bd
HALC_Object_AddPropertyListener 0x00007ff8049ea43e
HALC_ProxyObject::HALC_ProxyObject(unsigned int, unsigned int, unsigned int, unsigned int) 0x00007ff8047f97f2
HALC_ProxyObjectMap::_CreateObject(unsigned int, unsigned int, unsigned int, unsigned int) 0x00007ff80490f69c
HALC_ProxyObjectMap::CopyObjectByObjectID(unsigned int) 0x00007ff80490ecd6
HALC_ShellPlugIn::_ReconcileDeviceList(bool, bool, std::__1::vector<unsigned int, std::__1::allocator<unsigned int>>&, std::__1::vector<unsigned int, std::__1::allocator<unsigned int>>&) 0x00007ff8045d68cf
HALB_CommandGate::ExecuteCommand(void () block_pointer) const 0x00007ff80492ed14
HALC_ShellObject::ExecuteCommand(void () block_pointer) const 0x00007ff80470f554
HALC_ShellPlugIn::ReconcileDeviceList(bool, bool) 0x00007ff8045d6414
HALC_ShellPlugIn::ConnectToServer() 0x00007ff8045d74a4
HAL_HardwarePlugIn_InitializeWithObjectID(AudioHardwarePlugInInterface**, unsigned int) 0x00007ff8045da256
HALPlugInManagement::CreateHALPlugIn(HALCFPlugIn const*) 0x00007ff80442f828
HALSystem::InitializeDevices() 0x00007ff80442ebc3
HALSystem::CheckOutInstance() 0x00007ff80442b696
AudioObjectAddPropertyListener_mac_imp 0x00007ff80469b431
auoop::WorkgroupManager_macOS::WorkgroupManager_macOS() 0x00007ff8040fc3d5
auoop::gWorkgroupManager() 0x00007ff8040fc245
AudioWorkIntervalCreate 0x00007ff804034a33
Replies: 2 · Boosts: 0 · Views: 624 · Activity: Jun ’24
Recording stereo audio with `AVCaptureAudioDataOutput`
Hey all! I'm building a Camera app using AVFoundation, and I am using AVCaptureVideoDataOutput and AVCaptureAudioDataOutput delegates. (I cannot use AVCaptureMovieFileOutput because I am doing some processing in between.)

When recording the audio CMSampleBuffers to the AVAssetWriter, I noticed that compared to the stock iOS camera app, they are mono audio, not stereo audio. I wonder how recording in stereo audio works; are there any guides or documentation available for that? Is a stereo audio frame still one CMSampleBuffer, or will it be multiple CMSampleBuffers? Do I need to synchronize them? Do I need to set up the AVAssetWriter/AVAssetWriterInput differently?

This is my Audio Session code:

func configureAudioSession(configuration: CameraConfiguration) throws {
  ReactLogger.log(level: .info, message: "Configuring Audio Session...")

  // Prevent iOS from automatically configuring the Audio Session for us
  audioCaptureSession.automaticallyConfiguresApplicationAudioSession = false
  let enableAudio = configuration.audio != .disabled

  // Check microphone permission
  if enableAudio {
    let audioPermissionStatus = AVCaptureDevice.authorizationStatus(for: .audio)
    if audioPermissionStatus != .authorized {
      throw CameraError.permission(.microphone)
    }
  }

  // Remove all current inputs
  for input in audioCaptureSession.inputs {
    audioCaptureSession.removeInput(input)
  }
  audioDeviceInput = nil

  // Audio Input (Microphone)
  if enableAudio {
    ReactLogger.log(level: .info, message: "Adding Audio input...")
    guard let microphone = AVCaptureDevice.default(for: .audio) else {
      throw CameraError.device(.microphoneUnavailable)
    }
    let input = try AVCaptureDeviceInput(device: microphone)
    guard audioCaptureSession.canAddInput(input) else {
      throw CameraError.parameter(.unsupportedInput(inputDescriptor: "audio-input"))
    }
    audioCaptureSession.addInput(input)
    audioDeviceInput = input
  }

  // Remove all current outputs
  for output in audioCaptureSession.outputs {
    audioCaptureSession.removeOutput(output)
  }
  audioOutput = nil

  // Audio Output
  if enableAudio {
    ReactLogger.log(level: .info, message: "Adding Audio Data output...")
    let output = AVCaptureAudioDataOutput()
    guard audioCaptureSession.canAddOutput(output) else {
      throw CameraError.parameter(.unsupportedOutput(outputDescriptor: "audio-output"))
    }
    output.setSampleBufferDelegate(self, queue: CameraQueues.audioQueue)
    audioCaptureSession.addOutput(output)
    audioOutput = output
  }
}

This is how I activate the audio session just before I start recording:

let audioSession = AVAudioSession.sharedInstance()
try audioSession.updateCategory(AVAudioSession.Category.playAndRecord,
                                mode: .videoRecording,
                                options: [.mixWithOthers, .allowBluetoothA2DP, .defaultToSpeaker, .allowAirPlay])
if #available(iOS 14.5, *) {
  // prevents the audio session from being interrupted by a phone call
  try audioSession.setPrefersNoInterruptionsFromSystemAlerts(true)
}
if #available(iOS 13.0, *) {
  // allow system sounds (notifications, calls, music) to play while recording
  try audioSession.setAllowHapticsAndSystemSoundsDuringRecording(true)
}
audioCaptureSession.startRunning()

And this is how I set up the AVAssetWriter:

let audioSettings = audioOutput.recommendedAudioSettingsForAssetWriter(writingTo: options.fileType)
let format = audioInput.device.activeFormat.formatDescription
audioWriter = AVAssetWriterInput(mediaType: .audio,
                                 outputSettings: audioSettings,
                                 sourceFormatHint: format)
audioWriter!.expectsMediaDataInRealTime = true
assetWriter.add(audioWriter!)
ReactLogger.log(level: .info, message: "Initialized Audio AssetWriter.")

The rest is trivial: I receive CMSampleBuffers of the audio in my delegate's callback, write them to the audioWriter, and it ends up in the .mov file, but it is not stereo, it's mono. Is there anything I'm missing here?
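For what it's worth, an editorial sketch of the stereo-capture configuration Apple's stereo audio capture sample uses on recent iPhones: pick a built-in-mic data source that supports the stereo polar pattern and set the input orientation (iOS 14+). Whether this is the missing piece for the AVCaptureAudioDataOutput pipeline above is an assumption.

```swift
import AVFoundation

// Request stereo capture from the built-in microphone, if the hardware supports it.
func configureStereoCapture() throws {
    let session = AVAudioSession.sharedInstance()
    guard let builtInMic = session.availableInputs?.first(where: { $0.portType == .builtInMic }),
          let stereoSource = builtInMic.dataSources?.first(where: {
              $0.supportedPolarPatterns?.contains(.stereo) == true
          }) else { return }

    try builtInMic.setPreferredDataSource(stereoSource)
    try stereoSource.setPreferredPolarPattern(.stereo)
    try session.setPreferredInput(builtInMic)
    try session.setPreferredInputOrientation(.portrait)
}
```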
Replies: 0 · Boosts: 1 · Views: 857 · Activity: Apr ’24