AudioToolbox


Record or play audio, convert formats, parse audio streams, and configure your audio session using AudioToolbox.

AudioToolbox Documentation

Posts under AudioToolbox tag

30 Posts
Post not yet marked as solved
1 Reply
765 Views
When I use Audio Toolbox APIs to decode AAC audio, I am getting a kAudioConverterErr_FormatNotSupported (1718449215) error. This works fine in Big Sur (11.5.2) and previous versions.

    prompt> ./ffmpeg -c:a aac_at -i audio24b_he_aac.aac -f s16le -acodec pcm_s16le output.pcm -v trace
    ffmpeg version N-104401-gcd38fbf4f7 Copyright (c) 2000-2021 the FFmpeg developers
      built with Apple clang version 13.0.0 (clang-1300.0.29.3)
      configuration: --disable-x86asm
      libavutil      57.  7.100 / 57.  7.100
      libavcodec     59. 12.100 / 59. 12.100
      libavformat    59.  6.100 / 59.  6.100
      libavdevice    59.  0.101 / 59.  0.101
      libavfilter     8. 15.100 /  8. 15.100
      libswscale      6.  1.100 /  6.  1.100
      libswresample   4.  0.100 /  4.  0.100
    Splitting the commandline.
    Reading option '-c:a' ... matched as option 'c' (codec name) with argument 'aac_at'.
    Reading option '-i' ... matched as input url with argument 'audio24b_he_aac.aac'.
    Reading option '-f' ... matched as option 'f' (force format) with argument 's16le'.
    Reading option '-acodec' ... matched as option 'acodec' (force audio codec ('copy' to copy stream)) with argument 'pcm_s16le'.
    Reading option 'output.pcm' ... matched as output url.
    Reading option '-v' ... matched as option 'v' (set logging level) with argument 'trace'.
    Finished splitting the commandline.
    Parsing a group of options: global .
    Applying option v (set logging level) with argument trace.
    Successfully parsed a group of options.
    Parsing a group of options: input url audio24b_he_aac.aac.
    Applying option c:a (codec name) with argument aac_at.
    Successfully parsed a group of options.
    Opening an input file: audio24b_he_aac.aac.
    [NULL @ 0x7f8cf4004d40] Opening 'audio24b_he_aac.aac' for reading
    [file @ 0x7f8cf4005180] Setting default whitelist 'file,crypto,data'
    Probing aac score:51 size:2048
    [aac @ 0x7f8cf4004d40] Format aac probed with size=2048 and score=51
    [aac @ 0x7f8cf4004d40] Before avformat_find_stream_info() pos: 0 bytes read:65696 seeks:4 nb_streams:1
    [aac_at @ 0x7f8cf40057c0] AudioToolbox init error: 1718449215
    [aac @ 0x7f8cf4004d40] All info found
    [aac @ 0x7f8cf4004d40] Estimating duration from bitrate, this may be inaccurate

The following lines of code are failing:

    status = AudioConverterNew(&in_format, &out_format, &at->converter);
    if (status != 0) {
        av_log(avctx, AV_LOG_ERROR, "AudioToolbox init error: %i\n", (int)status);
        return AVERROR_UNKNOWN;
    }

Is there any change with respect to these APIs in Monterey? I could not find anything in the documentation or release notes.
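To isolate whether AudioConverterNew itself rejects this layout on Monterey, a minimal Swift sketch outside of ffmpeg can help; the sample rate and channel count below are assumptions based on the file name (a 24-bit HE-AAC source), not values taken from the actual stream:

```swift
import AudioToolbox

// Input: HE-AAC; 2048 frames per packet because SBR doubles the AAC frame size.
var inDesc = AudioStreamBasicDescription(
    mSampleRate: 44100,                     // assumption
    mFormatID: kAudioFormatMPEG4AAC_HE,
    mFormatFlags: 0,
    mBytesPerPacket: 0,
    mFramesPerPacket: 2048,
    mBytesPerFrame: 0,
    mChannelsPerFrame: 2,                   // assumption
    mBitsPerChannel: 0,
    mReserved: 0)

// Output: 16-bit signed interleaved PCM, matching the s16le target.
var outDesc = AudioStreamBasicDescription(
    mSampleRate: 44100,
    mFormatID: kAudioFormatLinearPCM,
    mFormatFlags: kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked,
    mBytesPerPacket: 4,
    mFramesPerPacket: 1,
    mBytesPerFrame: 4,
    mChannelsPerFrame: 2,
    mBitsPerChannel: 16,
    mReserved: 0)

var converter: AudioConverterRef?
let status = AudioConverterNew(&inDesc, &outDesc, &converter)
print("AudioConverterNew status: \(status)") // 1718449215 == kAudioConverterErr_FormatNotSupported
```

If this plain call succeeds on Monterey, that might point to how ffmpeg describes the stream (for example, the channel layout it passes) rather than to the codec itself.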
Posted Last updated
.
Post not yet marked as solved
0 Replies
145 Views
My app samples the various inputs available on the iPhone and iPad and performs a frequency analysis. In addition to using the internal accelerometer and gyroscope, I can also sample the microphone and USB input devices such as accelerometers through the audio input subsystem. The highest sample rate I use with the microphone and USB devices is the 48 kHz of the audio sampling subsystem. This provides a bandwidth of 24 kHz (Nyquist frequency) on the sampled signal. This has worked for many generations of iPhone and iPad until now. When I use my iPhone 14 Pro there is a sharp frequency cutoff at about 8 kHz. I see an artifact at the same frequency when I use the simulators. BUT when I use my 11" iPad Pro or my current-generation iPhone SE, I do not see this effect and get good data out to 24 kHz. The iPad Pro does show some rolloff near 24 kHz, which is noticeable but not a problem for most applications. The rolloff at 8 kHz is a serious problem for my customers who are testing equipment vibration and noise.

I am wondering if this is related to the new microphone options "Standard", "Voice Isolation", and "Wide Spectrum". But if so, why only on the iPhone 14 Pro and the simulators? I have searched the documentation, but apparently it is not possible to programmatically change the microphone mode, and the Apple documentation on how to use this new feature is lacking.

I am using AVAudioSession and AVAudioRecorder methods to acquire the data through the audio capture hardware. This code has been working well for me for over 10 years, so I do not think it is a code problem, but it could be a configuration problem because of new hardware in the iPhone 14, although I have not found anything in the documentation. Does anyone have an idea what may be causing this problem?

Examples from various devices and a simulator are shown below for the microphone. [Attachments: iPhone SE 3rd Gen, iPad Gen 9, iPad Pro 11 in, iPhone 14 Pro, iPad 10th Generation Simulator]
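One thing that may be worth ruling out is the system's input signal processing: the microphone mode (Standard / Voice Isolation / Wide Spectrum) is user-selectable rather than programmable, but requesting the .measurement session mode asks the system to minimize its own processing of the input. A minimal sketch of a configuration to try (an assumption for isolating the 8 kHz cutoff, not a confirmed fix):

```swift
import AVFoundation

let session = AVAudioSession.sharedInstance()
do {
    // .measurement asks the system to skip input processing (AGC, EQ, voice
    // isolation) so the widest possible bandwidth reaches the app.
    try session.setCategory(.playAndRecord, mode: .measurement, options: [])
    try session.setPreferredSampleRate(48_000)
    try session.setActive(true)
} catch {
    print("Audio session configuration failed: \(error)")
}
```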
Posted
by BruceT.
Last updated
.
Post not yet marked as solved
0 Replies
201 Views
After searching for a few days, I have come to know that iOS doesn't allow recording to be started in the background, though recording continues when it was started in the foreground and the app then goes to the background. I also came to know that "this is a new privacy protection restriction introduced in 12.4". When trying to start recording from the background, we get an “AVAudioSession.ErrorCode.cannotStartRecording” error.

What I need: our application needs to start recording audio and transmit it while in the background when the user presses an external button, i.e. the PTT button of a BLE device. There are several similar issues posted in this forum, but I could neither find any plausible solution nor any specific documentation from Apple. Requesting help, or at least please point me to specific documentation stating that "background recording is not possible". And if any workaround exists, please suggest it.

P.S. We are using:

    AVAudioSession.sharedInstance().setCategory(.playAndRecord, mode: .spokenAudio, options: [.mixWithOthers, .allowBluetooth, .allowBluetoothA2DP, .duckOthers])
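Without a documented way to begin recording from the background, one pattern worth trying (an assumption on my part, not Apple guidance) is to configure the session and start the engine's input tap while the app is still in the foreground, with the "audio" entry in UIBackgroundModes, and let the BLE button press only toggle whether the already-running tap's buffers are actually transmitted:

```swift
import AVFoundation

final class PTTRecorder {
    private let engine = AVAudioEngine()
    private var isTransmitting = false   // flipped by the BLE/PTT button handler

    // Call while the app is still in the foreground.
    func prepare() throws {
        let session = AVAudioSession.sharedInstance()
        try session.setCategory(.playAndRecord, mode: .spokenAudio,
                                options: [.mixWithOthers, .allowBluetooth, .allowBluetoothA2DP, .duckOthers])
        try session.setActive(true)

        let format = engine.inputNode.outputFormat(forBus: 0)
        engine.inputNode.installTap(onBus: 0, bufferSize: 4096, format: format) { [weak self] buffer, _ in
            guard let self = self, self.isTransmitting else { return }
            // Hand `buffer` to the transmit pipeline here.
        }
        try engine.start()
    }

    func pttButtonChanged(isDown: Bool) { isTransmitting = isDown }
}
```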
Posted
by mcpttdev.
Last updated
.
Post not yet marked as solved
0 Replies
216 Views
I have a High Sierra system which I access via ssh and for which I program old-style from the command line in Objective-C and build via Makefiles. I have a very small program that employs the CoreAudio API and calls the AudioFileOpenURL and AudioFileClose functions. The program compiles ok but the linker is reporting that the symbols AudioFileOpenURL and AudioFileClose are undefined. I cannot find any dylib file among the libraries on my system that has those symbols. Can someone suggest where I might obtain the appropriate library? Here is the linker output that I am getting:

    gcc -o example1 example1.o -lobjc -lextension
    Undefined symbols for architecture x86_64:
      "_AudioFileClose", referenced from:
          _main in example1.o
      "_AudioFileGetProperty", referenced from:
          _main in example1.o
      "_AudioFileGetPropertyInfo", referenced from:
          _main in example1.o
      "_AudioFileOpenURL", referenced from:
          _main in example1.o
    ld: symbol(s) not found for architecture x86_64
    clang: error: linker command failed with exit code 1 (use -v to see invocation)
    make: *** [example1] Error 1
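AudioFileOpenURL, AudioFileClose, and the AudioFileGetProperty calls live in the AudioToolbox framework rather than in a standalone dylib, so undefined symbols here usually mean the framework is missing from the link line. A sketch of the adjusted command, keeping the original flags and adding the frameworks (CoreFoundation for the CFURL parameter that AudioFileOpenURL takes):

```
gcc -o example1 example1.o -lobjc -lextension -framework AudioToolbox -framework CoreFoundation
```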
Posted
by grday52.
Last updated
.
Post not yet marked as solved
3 Replies
679 Views
I'm building an audio recording app. For our users it's important that recordings never get lost - even if the app crashes, users would like to have the partial recording. We encode recordings in AAC or ALAC and store them in an m4a file using AVAudioFile. However, if the app crashes, those m4a files are invalid - the MOOV atom is missing. Are there recording settings that change the m4a file so that it is always playable, even if the recording is interrupted half-way? I'm not at all an expert in audio codecs, but from what I understand, it is possible to write the MOOV atom at the beginning of the audio file instead of the end, which could solve this. But of course, I'd prefer an actual expert to tell me what a good solution is, and how to configure this in AVAudioFile.
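One commonly suggested alternative (an assumption on my part, not a documented guarantee) is to record into a CAF container instead of m4a: CAF does not depend on a trailing MOOV atom, so a file cut off by a crash tends to stay readable, and it can be exported to m4a on the next launch. A minimal sketch:

```swift
import AVFoundation

// Hypothetical helper: write AAC into a .caf container so a crash mid-recording
// is less likely to leave an unreadable file. The URL is assumed to end in ".caf"
// so AVAudioFile picks the CAF container.
func makeCrashTolerantRecordingFile(at url: URL, sampleRate: Double) throws -> AVAudioFile {
    let settings: [String: Any] = [
        AVFormatIDKey: kAudioFormatMPEG4AAC,
        AVSampleRateKey: sampleRate,
        AVNumberOfChannelsKey: 1
    ]
    return try AVAudioFile(forWriting: url, settings: settings)
}
```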
Posted Last updated
.
Post not yet marked as solved
1 Reply
263 Views
I'd like to understand the most robust way to record audio to disk using AVAudioNode's installTap(onBus:bufferSize:format:block:) method. Currently, I'm dispatching the buffers I receive in my AVAudioNodeTapBlock to a serial dispatch queue and then writing to disk using some Audio Toolbox methods, but my concern is that if I hold on to the buffer provided in the AVAudioNodeTapBlock for too long (due to disk I/O, for example), I'll end up with issues. What I'm considering is creating my own larger pool of preallocated AVAudioPCMBuffers (a few seconds' worth) and copying the data from the buffer provided by the tap into one of the buffers from this larger pool in the AVAudioNodeTapBlock directly (no dispatch queue). Is there a simpler way of handling this, or does this sound like the best route?
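For reference, a minimal sketch of the copy-into-a-pool idea described above; the pool type, its capacity, and the buffer count are all illustrative assumptions, not an established API:

```swift
import Foundation
import AVFoundation

// Hypothetical fixed-size pool of preallocated buffers.
final class TapBufferPool {
    private var buffers: [AVAudioPCMBuffer]
    private var nextIndex = 0

    init(format: AVAudioFormat, frameCapacity: AVAudioFrameCount, count: Int) {
        buffers = (0..<count).map { _ in
            AVAudioPCMBuffer(pcmFormat: format, frameCapacity: frameCapacity)!
        }
    }

    // Copy the tap's buffer into a pooled buffer so the tap's own memory is
    // released immediately; the returned copy is what crosses to the disk queue.
    func copyIn(_ source: AVAudioPCMBuffer) -> AVAudioPCMBuffer {
        let destination = buffers[nextIndex]
        nextIndex = (nextIndex + 1) % buffers.count
        destination.frameLength = source.frameLength
        if let src = source.floatChannelData, let dst = destination.floatChannelData {
            let byteCount = Int(source.frameLength) * MemoryLayout<Float>.size
            for channel in 0..<Int(source.format.channelCount) {
                memcpy(dst[channel], src[channel], byteCount)
            }
        }
        return destination
    }
}
```

In the tap block, copyIn(_:) runs synchronously, and only the pooled copy is handed to the serial queue that performs the (possibly slow) disk write.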
Posted
by FlatMap.
Last updated
.
Post not yet marked as solved
3 Replies
557 Views
I am struggling to see why the following low-level audio recording function - which is based on TN2091 - Device input using the HAL Output Audio Unit - (a great article, btw, although a bit dated, and it would be wonderful if it was updated to use Swift and non-deprecated stuff at some point!) fails to work under macOS:

    func createMicUnit() -> AUAudioUnit {
        let compDesc = AudioComponentDescription(
            componentType: kAudioUnitType_Output,
            componentSubType: kAudioUnitSubType_HALOutput, // I am on macOS, so this is good
            componentManufacturer: kAudioUnitManufacturer_Apple,
            componentFlags: 0,
            componentFlagsMask: 0)
        return try! AUAudioUnit(componentDescription: compDesc, options: [])
    }

    func startMic() {
        // mic permission is already granted at this point, but let's check
        let status = AVCaptureDevice.authorizationStatus(for: AVMediaType.audio)
        precondition(status == .authorized) // yes, all good
        let unit = createMicUnit()
        unit.isInputEnabled = true
        unit.isOutputEnabled = false
        precondition(!unit.canPerformInput) // can't record yet, and know why?
        print(deviceName(unit.deviceID)) // "MacBook Pro Speakers" - this is why
        let micDeviceID = defaultInputDeviceID
        print(deviceName(micDeviceID)) // "MacBook Pro Microphone" - this is better
        try! unit.setDeviceID(micDeviceID) // let's switch device to mic
        precondition(unit.canPerformInput) // now we can record
        print("\(String(describing: unit.channelMap))") // channel map is "nil" by default
        unit.channelMap = [0] // not sure if this helps or not
        let sampleRate = deviceActualFrameRate(micDeviceID)
        print(sampleRate) // 48000.0
        let format = AVAudioFormat(
            commonFormat: .pcmFormatFloat32,
            sampleRate: sampleRate,
            channels: 1,
            interleaved: false)!
        try! unit.outputBusses[1].setFormat(format)
        unit.inputHandler = { flags, timeStamp, frameCount, bus in
            fatalError("never gets here") // now the weird part - this is never called!
        }
        try! unit.allocateRenderResources()
        try! unit.startHardware() // let's go!
        print("mic should be working now... why it doesn't?")
        // from now on the (UI) app continues its normal run loop
    }

All sanity checks pass with flying colors but the unit's inputHandler is not being called. Any idea why? Thank you!
Posted Last updated
.
Post not yet marked as solved
0 Replies
787 Views
I've seen applications such as Dante Via (Audinate) and Loopback (Rogue Amoeba) that can capture audio from an application directly. Which framework is used to achieve this? So far all I've seen are ways to tap into audio devices with AUs.
Posted
by EKLynx.
Last updated
.
Post not yet marked as solved
0 Replies
532 Views
I just filed a bug report with Apple, but I wanted to post here in case people had input about this. I would love to hear that there is just some assumption or logic that I am messing up. When an AVAudioEngine with voice processing I/O enabled is running, all other audio sources within the app that are started later will have low volume (seemingly not routed to the speakers?). After either setting the AVAudioSession category to .playAndRecord or overriding the AVAudioSession output route to speaker, the volume corrects itself (output seems to route to the speakers now). The exact reproduction steps can be broken down as follows (make sure you have record permissions):

1. Create an AVAudioEngine.
2. Access each engine's .mainMixerNode.
3. Create an AVPlayer with some audio file. (This is also reproducible with AVAudioPlayer and AVAudioEngine.)
4. Configure the session with AVAudioSession.sharedInstance().setCategory(.playAndRecord, mode: .videoChat, options: [.defaultToSpeaker]). (Note that I'm setting .defaultToSpeaker.)
5. Activate the session with AVAudioSession.sharedInstance().setActive(true).
6. Enable voice processing on the engine with try! engine.outputNode.setVoiceProcessingEnabled(true).
7. Start the engine with try! engine.start().
8. Start the AVPlayer and note the volume. You may need to increase the system volume to hear what you're playing.
9. Call either AVAudioSession.sharedInstance().setCategory(AVAudioSession.sharedInstance().category) or AVAudioSession.sharedInstance().overrideOutputAudioPort(.speaker) and the audio from the AVPlayer will fix its volume. Note that the volume instantly rises.

If you were to have another audio source (AVAudioPlayer, AVPlayer, AVAudioEngine) that is started after step 9, you would need to repeat step 9 again. Obviously, this is not ideal and can cause some very unexpected volume changes for end users. This was reproducible on iOS 13.6, 15.7, and 16.2.1 (latest). If anyone has ideas as to how to prevent this or work around it other than the workaround demonstrated here, I'm all ears.
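For reference, a minimal sketch of the step-9 workaround described in the post (re-asserting the session configuration after the voice-processing engine has started); the function name is mine:

```swift
import AVFoundation

// Workaround sketch: call after a later audio source starts playing quietly.
func reassertSpeakerRoute() {
    let session = AVAudioSession.sharedInstance()
    do {
        // Either of these appears to restore full output volume.
        try session.setCategory(session.category)
        try session.overrideOutputAudioPort(.speaker)
    } catch {
        print("Failed to re-assert audio route: \(error)")
    }
}
```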
Posted Last updated
.
Post not yet marked as solved
1 Reply
584 Views
I am using the MusicSequenceFileCreate method to generate a MIDI file from a beat-based MusicSequence. On iOS 16.0.2 devices, the file that is created has a SysEx MIDI message added (not by me) at time 0:

    f0 2a 11 67 40 40 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ff ff ff ff 00 00 00 00 00 00 00 00 00 00 f7

SysEx messages are manufacturer-dependent; a file with this SysEx message can't be read into apps like Nanostudio, Ableton, or Zenbeats. It can be read by GarageBand. My app's deployment target is iOS 13.0. Has anybody else run into this issue? Thanks
Posted
by fermor.
Last updated
.
Post not yet marked as solved
0 Replies
526 Views
Hi Community Members, We are trying to detect a particular sound through the microphone. We used PKCCheck to measure the sound level in decibels: we detect the sound based on the number of dB readings per second we receive and add some logic on top of that to get the result. But when there is a continuous sound, such as an alarm, we can't detect it, because the gap between sounds is very small. Any suggestions on libraries to achieve this? Also, can we achieve this through a frequency and amplitude method? Please advise.
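A frequency-and-amplitude approach is feasible without extra libraries by combining an AVAudioEngine input tap with Accelerate. A minimal sketch (the buffer size and the -30 dB threshold are assumptions to tune) that computes an RMS level per buffer; recognizing a specific alarm reliably would additionally need an FFT of the same buffers to compare dominant frequencies:

```swift
import AVFoundation
import Accelerate

let engine = AVAudioEngine()
let input = engine.inputNode
let format = input.outputFormat(forBus: 0)

// Compute an RMS level in dBFS for every buffer of microphone audio.
input.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, _ in
    guard let samples = buffer.floatChannelData?[0] else { return }
    var rms: Float = 0
    vDSP_rmsqv(samples, 1, &rms, vDSP_Length(buffer.frameLength))
    let level = 20 * log10(max(rms, .leastNormalMagnitude))
    if level > -30 {                       // assumed threshold; tune for the target sound
        print("Loud frame detected at \(level) dBFS")
    }
}

try engine.start()
```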
Posted Last updated
.
Post not yet marked as solved
1 Reply
917 Views
I built a simple recorder on my website for users to record and play back audio. It works on ALL desktop browsers (including Safari), but when I use any browser on my iPhone, the mic is active at the opposite time. The flow is: ask permission > user allows mic access > user presses record > records audio > saves and plays back. On iPhone, what's happening is that after the user allows permission, the mic goes active (visualized by the mic icon in the Safari browser), and then once the user presses record, it disables the mic. I am using getUserMedia within a React.js app. Why is it doing this?
Posted
by alybla97.
Last updated
.
Post not yet marked as solved
0 Replies
494 Views
Hello, I'm trying to write my own Audio Unit extension and I have a problem. I don't understand the difference between implementorValueObserver and AUParameterEvent. It looks like both of them can be used to update parameters in a DSPKernel. It seems natural to use an observer, but that requires thinking about thread safety, in contrast to AUParameterEvent handling. Can someone comment on this or give me more context about these two entities?
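For context, a minimal sketch of the two paths, assuming a hypothetical Kernel type with a setParameter method: implementorValueObserver runs when the host or UI writes a value through the AUParameterTree (not on the render thread), while sample-accurate automation arrives on the render thread as AUParameterEvents in the realtime event list passed to the render block.

```swift
import AudioToolbox

// Hypothetical DSP state shared with the render block.
final class Kernel {
    func setParameter(address: AUParameterAddress, value: AUValue) {
        // Store atomically here; the render block reads it later.
    }
}

func wireParameters(tree: AUParameterTree, kernel: Kernel) {
    // Path 1: host/UI writes via the parameter tree (non-realtime thread).
    tree.implementorValueObserver = { param, value in
        kernel.setParameter(address: param.address, value: value)
    }
    // Path 2: sample-accurate automation arrives inside the render cycle as
    // AUParameterEvents on the realtime event list handed to internalRenderBlock;
    // those are handled directly in the render code, with no locking needed.
}
```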
Posted
by the7winds.
Last updated
.
Post not yet marked as solved
0 Replies
586 Views
I am using AVAudioConverter in Objective-C. AVAudioConverter has bitRate and bitRateStrategy parameters defined. The default bitRateStrategy value is AVAudioBitRateStrategy_LongTermAverage. If I set bitRateStrategy to AVAudioBitRateStrategy_Constant, there is no change. The bitRate property works correctly.

    AVAudioConverter *audioConverter = [[AVAudioConverter alloc] initFromFormat:inputFormat toFormat:outputFormat];
    audioConverter.bitRate = 128000;
    audioConverter.bitRateStrategy = AVAudioBitRateStrategy_Constant;
Posted
by pavele.
Last updated
.
Post marked as solved
4 Replies
1.7k Views
I have a working AUv3 AUAudioUnit app extension but I had to work around a strange issue: I found that the internalRenderBlock value is fetched and invoked before allocateRenderResources() is called. I have not found any documentation stating that this would be the case, and intuitively it does not make any sense. Is there something I am doing in my code that would be causing this to be the case? Should I *force* a call to allocateRenderResources() if it has not been called before internalRenderBlock is fetched? Thanks! Brad
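Hosts can fetch the block before allocation, so a common defensive pattern (a sketch of one possible approach, not a documented contract) is to have the block check an allocation flag and fail gracefully until allocateRenderResources() has run:

```swift
import AudioToolbox
import AVFoundation

// Sketch only: an AUAudioUnit whose render block refuses to run until
// allocateRenderResources() has been called.
class GuardedAudioUnit: AUAudioUnit {
    private final class RenderState { var isAllocated = false }
    private let state = RenderState()

    override func allocateRenderResources() throws {
        try super.allocateRenderResources()
        state.isAllocated = true
    }

    override func deallocateRenderResources() {
        state.isAllocated = false
        super.deallocateRenderResources()
    }

    override var internalRenderBlock: AUInternalRenderBlock {
        let state = self.state   // capture the state object, not self
        return { _, _, frameCount, _, outputData, _, _ in
            guard state.isAllocated else {
                // Rendering was requested before resources exist; report it.
                return AUAudioUnitStatus(kAudioUnitErr_NoConnection)
            }
            // ... real DSP would fill outputData for frameCount frames here ...
            return noErr
        }
    }
}
```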
Posted
by bradhowes.
Last updated
.
Post not yet marked as solved
1 Reply
889 Views
This just seems like a useful thing to have when rendering audio. For example, let's say you have an effect that pitches audio up/down. That typically requires that you know the sample rate of the incoming audio. The way I do this right now is just to save the sample rate after the AUAudioUnit's render resources have been allocated, but being provided this info on a per-render-callback basis seems more useful. Another use case is for AUAudioUnits on the input chain. Since the format for connections must match the hardware format, you can no longer explicitly set the format that you expect the audio to come in at. You can check the sample rate on the AVAudioEngine's input node or the sample rate on the AVAudioSession singleton, but when you are working with the audio from within the render callback, you don't want to be accessing those methods due to the possibility that they are blocking calls. This is especially true when using AVAudioSinkNode, where you don't have the ability to set the sample rate before the underlying node's render resources are allocated. Am I missing something here, or does this actually seem useful?
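A minimal sketch of the caching pattern the post already hints at, here for the AVAudioSinkNode case (names are mine; this is just one way to keep session and format queries off the audio thread):

```swift
import AVFoundation

let engine = AVAudioEngine()

// Read the hardware format (and its sample rate) once, before rendering starts.
let hardwareFormat = engine.inputNode.outputFormat(forBus: 0)
let sampleRate = hardwareFormat.sampleRate   // captured by value by the block below

let sink = AVAudioSinkNode { _, frameCount, audioBufferList -> OSStatus in
    // `sampleRate` was cached outside the render callback, so no AVAudioSession
    // or format queries (potentially blocking) happen on the audio thread.
    _ = (sampleRate, frameCount, audioBufferList)
    return noErr
}

engine.attach(sink)
engine.connect(engine.inputNode, to: sink, format: hardwareFormat)
try engine.start()
```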
Posted Last updated
.
Post not yet marked as solved
0 Replies
564 Views
The possibility of sending predefined sounds and being able to associate them with images should be implemented in iMessage. For example, the iconic sound of Netflix.
Posted
by Clizia.
Last updated
.