AudioToolbox


Record or play audio, convert formats, parse audio streams, and configure your audio session using AudioToolbox.

AudioToolbox Documentation

Posts under AudioToolbox tag

29 Posts
Post not yet marked as solved
0 Replies
25 Views
I just filed a bug report with Apple, but I wanted to post here in case people had input about this. I would love to hear that there is just some assumption or logic that I am messing up. When an AVAudioEngine with voice processing I/O enabled is running, all other audio sources within the app that are started later have low volume (seemingly not routed to the speakers?). After either setting the AVAudioSession category to .playAndRecord again or overriding the AVAudioSession output route to the speaker, the volume corrects itself (output seems to route to the speakers again). The exact reproduction steps can be broken down as follows. Make sure you have record permission, then:
1. Create an AVAudioEngine.
2. Access the engine's .mainMixerNode.
3. Create an AVPlayer with some audio file. (This is also reproducible with AVAudioPlayer and AVAudioEngine.)
4. Configure the session with AVAudioSession.sharedInstance().setCategory(.playAndRecord, mode: .videoChat, options: [.defaultToSpeaker]). (Note that I'm setting .defaultToSpeaker.)
5. Activate the session with AVAudioSession.sharedInstance().setActive(true).
6. Enable voice processing on the engine with try! engine.outputNode.setVoiceProcessingEnabled(true).
7. Start the engine with try! engine.start().
8. Start the AVPlayer and note the volume. You may need to increase the system volume to hear what you're playing.
9. Call either AVAudioSession.sharedInstance().setCategory(AVAudioSession.sharedInstance().category) or AVAudioSession.sharedInstance().overrideOutputAudioPort(.speaker), and the audio from the AVPlayer will fix its volume. Note that the volume instantly rises.
If you have another audio source (AVAudioPlayer, AVPlayer, AVAudioEngine) that is started after step 9, you need to repeat step 9 again. Obviously, this is not ideal and can cause some very unexpected volume changes for end users. This was reproducible on iOS 13.6, 15.7, and 16.2.1 (latest). If anyone has ideas about how to prevent this or work around it other than the workaround demonstrated here, I'm all ears.
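To make the repro concrete, here is a minimal Swift sketch of the same sequence, assembled from the steps above; the audio file path is a placeholder and error handling is omitted (try! is used as in the post).

import AVFoundation

let engine = AVAudioEngine()
_ = engine.mainMixerNode                                // step 2: touch the mixer so the graph is built

let someAudioFileURL = URL(fileURLWithPath: "/path/to/audio.m4a")   // placeholder
let player = AVPlayer(url: someAudioFileURL)            // step 3

let session = AVAudioSession.sharedInstance()
try! session.setCategory(.playAndRecord, mode: .videoChat, options: [.defaultToSpeaker])   // step 4
try! session.setActive(true)                            // step 5

try! engine.outputNode.setVoiceProcessingEnabled(true)  // step 6: voice processing I/O on
try! engine.start()                                     // step 7

player.play()                                           // step 8: plays back unexpectedly quietly

// Step 9 workaround: either of these restores the expected volume.
try! session.setCategory(session.category)
// or: try! session.overrideOutputAudioPort(.speaker)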
Post not yet marked as solved
2 Replies
208 Views
I'm building an audio recording app. For our users it's important that recordings never get lost: even if the app crashes, users would like to keep the partial recording. We encode recordings in AAC or ALAC and store them in an m4a file using AVAudioFile. However, if the app crashes, those m4a files are invalid because the MOOV atom is missing. Are there recording settings that change the m4a file so that it is always playable, even if the recording is interrupted halfway? I'm not at all an expert in audio codecs, but from what I understand, it is possible to write the MOOV atom at the beginning of the audio file instead of at the end. That could solve the problem. But of course, I'd prefer an actual expert to tell me what a good solution is, and how to configure this in AVAudioFile.
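For reference, a minimal sketch of the kind of AVAudioFile-based AAC recording described above; the settings, URL, and the buffer mentioned in the comments are illustrative assumptions.

import AVFoundation

let settings: [String: Any] = [
    AVFormatIDKey: kAudioFormatMPEG4AAC,
    AVSampleRateKey: 44_100,
    AVNumberOfChannelsKey: 1
]
let url = FileManager.default.temporaryDirectory.appendingPathComponent("recording.m4a")
let file = try! AVAudioFile(forWriting: url, settings: settings)

// Inside the input tap or render callback, each captured AVAudioPCMBuffer is appended:
// try file.write(from: buffer)

// If the process dies before the file is finalized, the trailing MOOV atom is never written,
// which is why the partial .m4a cannot be opened afterwards.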
Post not yet marked as solved
1 Reply
246 Views
I am using the MusicSequenceFileCreate method to generate a MIDI file from a beat-based MusicSequence. On iOS 16.0.2 devices, the file that is created has a Sysex MIDI message added (not by me) at time 0: f0 2a 11 67 40 40 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ff ff ff ff 00 00 00 00 00 00 00 00 00 00 f7. Sysex messages are manufacturer dependent, and a file with this Sysex message can't be read into apps like Nanostudio, Ableton, or Zenbeats. It can be read by GarageBand. My app's deployment target is iOS 13.0. Has anybody else run into this issue? Thanks
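For context, the export call in question looks roughly like this; the sequence parameter, output URL, and resolution value are illustrative.

import Foundation
import AudioToolbox

// `sequence` is assumed to be an existing beat-based MusicSequence.
func exportMIDI(from sequence: MusicSequence, to url: URL) {
    let status = MusicSequenceFileCreate(sequence,
                                         url as CFURL,
                                         .midiType,     // standard MIDI file
                                         .eraseFile,    // overwrite any existing file
                                         480)           // ticks per quarter note
    if status != noErr {
        print("MusicSequenceFileCreate failed: \(status)")
    }
}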
Posted by fermor.
Post not yet marked as solved
0 Replies
245 Views
Hi Community Members, We are trying to detect a particular sound through the microphone. We used PKCCheck to measure the sound level in decibels: we detect the sound based on the number of dB values per second we receive and add some logic on top to get the result. But when there is a continuous sound such as an alarm, we can't detect it because the gaps between sounds are very short. Any suggestions on libraries to achieve this? Also, can this be achieved through a frequency and amplitude approach? Please advise.
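Not from the thread, but as a sketch of the amplitude side of a frequency-and-amplitude approach: tap the microphone with AVAudioEngine and convert each buffer's RMS to decibels (frequency detection would additionally need an FFT, for example with vDSP, on the same buffers).

import AVFoundation
import Accelerate

let engine = AVAudioEngine()
let input = engine.inputNode
let format = input.outputFormat(forBus: 0)

input.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, _ in
    guard let samples = buffer.floatChannelData?[0] else { return }
    var rms: Float = 0
    vDSP_rmsqv(samples, 1, &rms, vDSP_Length(buffer.frameLength))        // root-mean-square of the buffer
    let level = 20 * log10f(max(rms, Float.leastNonzeroMagnitude))       // dBFS, 0 dB = full scale
    print("level: \(level) dBFS")
}
try! engine.start()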
Post not yet marked as solved
1 Reply
616 Views
I built a simple recorder on my website for users to record and play back audio. It works on all desktop browsers (including Safari), but when I use any browser on my iPhone, the mic is active at the opposite time. The flow is: ask permission > user allows mic access > user presses record > records audio > saves and plays back. On iPhone, what's happening is that after the user allows permission, the mic goes active (visualized by the mic icon in the Safari browser), and then once the user presses record, it disables the mic. I am using getUserMedia within a React.js app. Why is it doing this?
Posted by alybla97.
Post not yet marked as solved
0 Replies
286 Views
Hello, I'm trying to write my own audio unit extension and I have a problem. I don't understand the difference between implementorValueObserver and AUParameterEvent. It looks like both of them can be used to update parameters in the DSPKernel. It seems natural to use an observer, but it requires thinking about thread safety, in contrast to AUParameterEvent handling. Can someone comment on this or give me more context about these two entities?
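For orientation, a hedged Swift sketch of the first mechanism; the connectParameters helper and the apply closure are illustrative, and the comment on the second mechanism describes the usual AUv3 template layout rather than anything confirmed in this thread.

import AudioToolbox

// implementorValueObserver is invoked when a parameter value is set through the tree
// (typically from the UI or host automation, not on the audio thread), so the handoff
// into the DSP code is the implementor's responsibility to make thread-safe.
func connectParameters(tree: AUParameterTree,
                       apply: @escaping (AUParameterAddress, AUValue) -> Void) {
    tree.implementorValueObserver = { parameter, value in
        apply(parameter.address, value)   // e.g. a lock-free handoff into the kernel
    }
}

// AUParameterEvent, by contrast, arrives inside the render cycle via the realtime event
// list passed to the render block, so it is handled on the audio thread in sample-accurate
// order (in the usual C++ DSPKernel template it ends up in a handleParameterEvent override).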
Posted by the7winds.
Post not yet marked as solved
0 Replies
355 Views
I am using AVAudioConverter in Objective-C. AVAudioConverter has bitRate and bitRateStrategy parameters defined. The default bitRateStrategy value is AVAudioBitRateStrategy_LongTermAverage. If I set bitRateStrategy to AVAudioBitRateStrategy_Constant there is no change. The bitRate variable works correctly.

AVAudioConverter *audioConverter = [[AVAudioConverter alloc] initFromFormat:inputFormat toFormat:outputFormat];
audioConverter.bitRate = 128000;
audioConverter.bitRateStrategy = AVAudioBitRateStrategy_Constant;
Posted by pavele.
Post marked as solved
4 Replies
1.4k Views
I have a working AUv3 AUAudioUnit app extension but I had to work around a strange issue: I found that the internalRenderBlock value is fetched and invoked before allocateRenderResources() is called. I have not found any documentation stating that this would be the case, and intuitively it does not make any sense. Is there something I am doing in my code that would be causing this to be the case? Should I *force* a call to allocateRenderResources() if it has not been called before internalRenderBlock is fetched? Thanks! Brad
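One defensive pattern (an assumption, not something confirmed in the thread) is to let the render block consult state that allocateRenderResources() establishes and return an error until then. MyAudioUnit and resourcesReady below are illustrative names.

import AudioToolbox

// Sketch only: a production unit would capture realtime-safe state rather than reading a
// property of `self` on the audio thread.
class MyAudioUnit: AUAudioUnit {
    private var resourcesReady = false

    override func allocateRenderResources() throws {
        try super.allocateRenderResources()
        // ... allocate buffers / DSP state here ...
        resourcesReady = true
    }

    override func deallocateRenderResources() {
        resourcesReady = false
        super.deallocateRenderResources()
    }

    override var internalRenderBlock: AUInternalRenderBlock {
        // The host may fetch this early; the block guards at render time instead of assuming
        // that allocateRenderResources() has already run.
        return { [unowned self] _, _, _, _, _, _, _ in
            guard self.resourcesReady else { return kAudioUnitErr_Uninitialized }
            // ... real rendering would go here ...
            return noErr
        }
    }
}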
Posted by bradhowes.
Post not yet marked as solved
1 Reply
634 Views
This just seems like a useful thing to have when rendering audio. For example, let's say you have an effect that pitches audio up or down. That typically requires knowing the sample rate of the incoming audio. The way I do this right now is to save the sample rate after the AUAudioUnit's render resources have been allocated, but being provided this info on a per-render-callback basis seems more useful. Another use case is AUAudioUnits on the input chain. Since the format for connections must match the hardware format, you can no longer explicitly set the format that you expect the audio to come in at. You can check the sample rate on the AVAudioEngine's input node or on the AVAudioSession singleton, but when you are working with the audio from within the render callback, you don't want to be accessing those methods due to the possibility that they are blocking calls. This is especially true when using AVAudioSinkNode, where you don't have the ability to set the sample rate before the underlying node's render resources are allocated. Am I missing something here, or does this actually seem useful?
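For reference, a minimal sketch of the caching approach described above (MyAudioUnit and cachedSampleRate are illustrative names): read the bus format once when resources are allocated, so the render block never has to query the engine or the session.

import AVFoundation
import AudioToolbox

class MyAudioUnit: AUAudioUnit {
    private var cachedSampleRate: Double = 44_100

    override func allocateRenderResources() throws {
        try super.allocateRenderResources()
        // Read once, off the audio thread; the render block only reads the cached value.
        cachedSampleRate = outputBusses[0].format.sampleRate
    }
}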
Post not yet marked as solved
0 Replies
348 Views
The possibility of sending predefined sounds and associating them with images should be implemented in iMessage, for example the iconic Netflix sound.
Posted by Clizia.
Post not yet marked as solved
0 Replies
305 Views
I am trying to build an app for my iPhone XS device with Xcode Version 14.0 beta 2. The first time I connected the device, it correctly showed among the devices, urging me to enable Developer Mode in Privacy & Security. I did that and the iPhone restarted, but once it did, the iPhone disappeared from the targets, thus not allowing me to build for it. For some strange reason my iPad showed up instead, but when I tried to build for it, Xcode of course urged me to connect it. So it seems Xcode sees devices that are not connected and does not see the ones that are. Is anyone able to make sense of this and possibly offer a solution?
Posted by fbartolom.
Post not yet marked as solved
1 Reply
580 Views
The AUAudioUnit in my app uses the user presets system that was introduced with iPadOS/iOS 13. The user presets system worked well in iPadOS 13, and it worked in at least one of the pre-release beta versions of iPadOS 14. However, it has not worked since iPadOS 14.0 was released. The problem is (superficially, at least) that:
- saveUserPreset no longer writes preset files to the app's Documents folder, but the method doesn't fail either.
- The userPresets array, which typically contains the presets that were deserialised from the files stored in the Documents folder, is invariably empty.
- The presets that the array would contain prior to upgrading from iPadOS 13 are no longer accessible.
In other words, prior to upgrading to iPadOS 14, my app could save user presets using the saveUserPreset method and subsequently access their contents from the userPresets array. Since upgrading to iPadOS 14, the userPresets array is invariably empty and the saveUserPreset method no longer saves presets to the Documents folder. Has anyone else experienced this problem?
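For reference, a minimal sketch of the user-preset APIs in question; the preset name and the negative preset number are illustrative.

import AudioToolbox

func saveAndListUserPresets(for audioUnit: AUAudioUnit) {
    guard audioUnit.supportsUserPresets else { return }

    let preset = AUAudioUnitPreset()
    preset.name = "My Preset"
    preset.number = -1                      // user presets are identified by negative numbers

    do {
        try audioUnit.saveUserPreset(preset)
    } catch {
        print("saveUserPreset failed: \(error)")
    }

    // Expected to reflect the presets saved above; the behaviour described in the post is
    // that this array stays empty on iPadOS 14.
    print(audioUnit.userPresets.map { $0.name })
}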
Posted by davidspry.
Post not yet marked as solved
5 Replies
1.9k Views
In previous versions of macOS (Catalina and Big Sur) I used the following code in my app to mute the system input device (assuming a good inputDeviceId):

let inputDeviceId = input.id
var address = AudioObjectPropertyAddress(
    mSelector: AudioObjectPropertySelector(kAudioDevicePropertyMute),
    mScope: AudioObjectPropertyScope(kAudioDevicePropertyScopeInput),
    mElement: AudioObjectPropertyElement(kAudioObjectPropertyElementMaster))
let size = UInt32(MemoryLayout<UInt32>.size)
var mute: UInt32 = muted ? 1 : 0
AudioObjectSetPropertyData(inputDeviceId, &address, 0, nil, size, &mute)

This worked great in both previous operating systems across all input devices I tested. However, in macOS 12.0.1 this no longer works, specifically for Bluetooth devices. And, beyond that, it instead mutes the output volume of these Bluetooth devices. On the system microphone or via line-in, this still works as expected. I'm trying to determine what changed in Monterey that caused Bluetooth devices to start behaving the opposite of how they behaved before with respect to this API. Is there a new recommended approach for muting microphone input from Bluetooth devices? Any help, guidance, or context here is appreciated.
Post not yet marked as solved
2 Replies
678 Views
We're seeing the following when running auvaltool. It seems AUParameterNode cannot be constructed with a default value and has no option to set default values via a property or method, so what does this mean? Does it mean we ALSO have to iterate all the parameters using the old-fashioned, long-winded toolbox calls? Or does it mean nothing at all? The "Will fail in future auval version" is worrying!

Values: Minimum = 5.000000, Default = 0.000000, Maximum = 300.000000
Flags: Expert Mode, Readable
WARNING: use -strict_DefP flag
* Parameter's published default value does not fall within [min, max] range
* This will fail using the -strict option. Will fail in future auval version
-parameter PASS
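Purely as a sketch of what can be expressed with the AUv3 APIs (an assumption, not a confirmed fix for the auval warning): there is no explicit default-value argument, but a parameter's starting value can at least be set inside [min, max] right after the tree is built. The identifier, name, and range below mirror the quoted output.

import AudioToolbox

let cutoff = AUParameterTree.createParameter(
    withIdentifier: "cutoff",          // illustrative identifier and name
    name: "Cutoff",
    address: 0,
    min: 5.0,
    max: 300.0,
    unit: .hertz,
    unitName: nil,
    flags: [.flag_IsReadable, .flag_IsWritable],
    valueStrings: nil,
    dependentParameters: nil)

let tree = AUParameterTree.createTree(withChildren: [cutoff])
cutoff.value = 5.0                     // starting value inside [min, max]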
Post not yet marked as solved
1 Reply
850 Views
Curious if there is a sound way for an AUv3 component to identify how many other instances of it are running on a device. For instance, if GarageBand has 4 tracks and all of the tracks use the same AUv3 component, is there a sound way for each one to obtain a unique index value? Thanks!
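A hedged sketch of one partial approach (an assumption, not from the thread): when all instances of the extension are loaded into the same process, a shared counter can hand each one a unique index.

import Foundation

enum InstanceCounter {
    private static let lock = NSLock()
    private static var next = 0

    static func claimIndex() -> Int {
        lock.lock(); defer { lock.unlock() }
        let index = next
        next += 1
        return index
    }
}

// e.g. in the AUAudioUnit initializer:
// let myIndex = InstanceCounter.claimIndex()
// Note: this does not see instances hosted in other processes (in-process vs. out-of-process
// loading), so it only partially answers the question above.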
Posted by bradhowes.
Post not yet marked as solved
1 Reply
596 Views
I'm writing a macOS audio unit hosting app using the AVAudioUnit and AUAudioUnit APIs. I'm trying to use the NSView cacheDisplay(in:to:) function to capture an image of a plugin's view:

func viewToImage(viewToCapture: NSView) -> NSImage? {
    var image: NSImage? = nil
    if let rep = viewToCapture.bitmapImageRepForCachingDisplay(in: viewToCapture.bounds) {
        viewToCapture.cacheDisplay(in: viewToCapture.bounds, to: rep)
        image = NSImage(size: viewToCapture.bounds.size)
        image!.addRepresentation(rep)
    }
    return image
}

This works OK when a plugin is instantiated using the .loadInProcess option. If the plugin is instantiated using the .loadOutOfProcess option, the resulting bitmapImageRep is blank. I'd much rather be loading plugins out of process for the enhanced stability. Is there any trick I'm missing to be able to capture the contents of the NSView from an out-of-process audio unit?
Posted by opsGordon.