AVAudioEngine

Use a group of connected audio node objects to generate and process audio signals and perform audio input and output.

Posts under AVAudioEngine tag

48 Posts

How to obtain an AVAudioFormat for a canonical format?
I receive a buffer from [AVSpeechSynthesizer convertToBuffer:fromBuffer:] and want to schedule it on an AVAudioPlayerNode. The player node's output format needs to be something the next node can handle, and as far as I understand most nodes can handle a canonical format. The format provided by AVSpeechSynthesizer is not something that AVAudioMixerNode supports. So the following:

AVAudioEngine *engine = [[AVAudioEngine alloc] init];
playerNode = [[AVAudioPlayerNode alloc] init];
AVAudioFormat *format = [[AVAudioFormat alloc] initWithSettings:utterance.voice.audioFileSettings];
[engine attachNode:self.playerNode];
[engine connect:self.playerNode to:engine.mainMixerNode format:format];

throws an exception:

Thread 1: "[[busArray objectAtIndexedSubscript:(NSUInteger)element] setFormat:format error:&nsErr]: returned false, error Error Domain=NSOSStatusErrorDomain Code=-10868 \"(null)\""

I am looking for a way to obtain the canonical format for the platform so that I can use AVAudioConverter to convert the buffer. Since different platforms have different canonical formats, I imagine there should be a library way of doing this. Otherwise each developer has to redefine it for every platform the code runs on (OSX, iOS, etc.) and keep it updated when it changes. I could not find any constant or function that can produce such a format, ASBD, or settings. The smartest way I could think of, which does not work:

AudioStreamBasicDescription toDesc;
FillOutASBDForLPCM(toDesc, [AVAudioSession sharedInstance].sampleRate,
                   2, 16, 16, kAudioFormatFlagIsFloat, kAudioFormatFlagsNativeEndian);
AVAudioFormat *toFormat = [[AVAudioFormat alloc] initWithStreamDescription:&toDesc];

Even the example for iPhone in the documentation linked above uses kAudioFormatFlagsAudioUnitCanonical and AudioUnitSampleType, which are deprecated. So what is the correct way to do this?
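A minimal sketch of one possible approach, not from the post itself: AVAudioFormat's standard-format initializer produces the engine's native deinterleaved Float32 layout, which can serve as the AVAudioConverter target. The source format below is a hypothetical stand-in, and the AVAudioSession call assumes iOS.

import AVFoundation

// Hypothetical stand-in for the format AVSpeechSynthesizer delivers.
let sourceFormat = AVAudioFormat(commonFormat: .pcmFormatInt16,
                                 sampleRate: 22050,
                                 channels: 1,
                                 interleaved: true)!

// "Standard" here is the deinterleaved Float32 layout that most engine nodes accept.
let canonicalFormat = AVAudioFormat(standardFormatWithSampleRate: AVAudioSession.sharedInstance().sampleRate,
                                    channels: 2)!

// Convert buffers from the synthesizer's format before scheduling them on the player node.
let converter = AVAudioConverter(from: sourceFormat, to: canonicalFormat)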
Replies: 4 · Boosts: 0 · Views: 1.7k · Activity: Feb ’24
Low playback volume when using kAudioUnitSubType_VoiceProcessingIO audio unit
I’m developing a voice communication app for the iPad, with both playback and recording, using an AudioUnit of type kAudioUnitSubType_VoiceProcessingIO to get echo cancellation. When playing audio before initializing the recording audio unit, the volume is high. But if I play the audio after initializing the audio unit, or when switching to RemoteIO and then back to VPIO, the playback volume is low. It seems like a bug in iOS; is there any solution or workaround for this? Searching the net I only found this post, without any solution: https://developer.apple.com/forums/thread/671836
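For reference, a sketch of how the voice-processing I/O unit mentioned above is typically instantiated; this is an illustration, not the poster's code, and switching componentSubType between kAudioUnitSubType_VoiceProcessingIO and kAudioUnitSubType_RemoteIO is the switch the post describes.

import AudioToolbox

var description = AudioComponentDescription(componentType: kAudioUnitType_Output,
                                            componentSubType: kAudioUnitSubType_VoiceProcessingIO,
                                            componentManufacturer: kAudioUnitManufacturer_Apple,
                                            componentFlags: 0,
                                            componentFlagsMask: 0)
var voiceUnit: AudioUnit?
if let component = AudioComponentFindNext(nil, &description) {
    // Creating and initializing this unit is the step after which the low playback volume is observed.
    AudioComponentInstanceNew(component, &voiceUnit)
}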
Replies: 1 · Boosts: 2 · Views: 1.9k · Activity: Jul ’23
Enabling Voice Processing changes channel count on the input node.
I've noticed that enabling voice processing on AVAudioInputNode changes the node's format - most noticeably the channel count.

let inputNode = avEngine.inputNode
print("Format #1: \(inputNode.outputFormat(forBus: 0))")
// Format #1: <AVAudioFormat 0x600002bb4be0:  1 ch,  44100 Hz, Float32>
try! inputNode.setVoiceProcessingEnabled(true)
print("Format #2: \(inputNode.outputFormat(forBus: 0))")
// Format #2: <AVAudioFormat 0x600002b18f50:  3 ch,  44100 Hz, Float32, deinterleaved>

Is this expected? How can I interpret these channels? My input device is an aggregate device where each channel comes from a different microphone. I then record each channel to a separate file. But when voice processing messes with the channel layout, I cannot rely on this anymore.
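A sketch of how per-channel data can still be pulled out of the deinterleaved tap buffer, whatever the reported channel count; this is an assumption about the poster's recording step, not their actual code.

import AVFoundation

let engine = AVAudioEngine()
let inputNode = engine.inputNode

inputNode.installTap(onBus: 0, bufferSize: 4096, format: nil) { buffer, _ in
    // With voice processing enabled the buffer is deinterleaved: one pointer per channel.
    guard let channelData = buffer.floatChannelData else { return }
    let frameCount = Int(buffer.frameLength)
    for channel in 0..<Int(buffer.format.channelCount) {
        let samples = UnsafeBufferPointer(start: channelData[channel], count: frameCount)
        // Write `samples` to the file associated with this channel here.
        _ = samples
    }
}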
Replies: 1 · Boosts: 1 · Views: 2.3k · Activity: Jul ’23
Multi Channel Audio With AVAudioEngine, Flipping Audio Channels
Hi, I have multiple audio files and I want to decide which channel goes to which output. For example, how do I route four 2-channel audio files to an 8-channel output? Also, if I have an AVAudioPlayerNode playing a 2-channel track through headphones, can I flip the channels on the output for playback, i.e. flip left and right? I have read the following thread which seeks to do something similar, but it is from 2012 and I do not quite understand how it would work today. Many thanks, I am a bit stumped.
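A minimal sketch of one way the left/right flip could be approached; this is an assumption about a workable technique, not a confirmed answer: read the file into a buffer, swap the two channels' samples, and schedule the swapped buffer on the AVAudioPlayerNode instead of the file.

import AVFoundation

func makeChannelFlippedBuffer(from file: AVAudioFile) throws -> AVAudioPCMBuffer? {
    let format = file.processingFormat
    guard format.channelCount == 2,
          let buffer = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: AVAudioFrameCount(file.length)) else {
        return nil
    }
    try file.read(into: buffer)
    guard let data = buffer.floatChannelData else { return nil }
    // Swap left (channel 0) and right (channel 1) sample by sample.
    for frame in 0..<Int(buffer.frameLength) {
        let tmp = data[0][frame]
        data[0][frame] = data[1][frame]
        data[1][frame] = tmp
    }
    return buffer
}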
Replies: 1 · Boosts: 1 · Views: 1.8k · Activity: Jun ’23
`videoChat` AVAudioSession.Mode Issues on iPhone 14 Pro Max
I work on a video conferencing application, which makes use of AVAudioEngine and the videoChat AVAudioSession.Mode. This past Friday, an internal user reported an "audio cutting in and out" issue with their new iPhone 14 Pro, and I was able to reproduce the issue later that day on my iPhone 14 Pro Max. No other iOS devices running iOS 16 are exhibiting this issue. I have narrowed down the root cause to the videoChat AVAudioSession.Mode after changing line 53 of the ViewController.swift file in Apple's "Using Voice Processing" sample project (https://developer.apple.com/documentation/avfaudio/audio_engine/audio_units/using_voice_processing) from:

try session.setCategory(.playAndRecord, options: .defaultToSpeaker)

to

try session.setCategory(.playAndRecord, mode: .videoChat, options: .defaultToSpeaker)

This only causes issues on my iPhone 14 Pro Max device, not on my iPhone 13 Pro Max, so it seems specific to the new iPhones only. I am also seeing the following logged to the console using either device, which appears to be specific to iOS 16, but I am not sure if it is related to the videoChat issue or not:

2022-09-19 08:23:20.087578-0700 AVEchoTouch[2388:1474002] [as] ATAudioSessionPropertyManager.mm:71  Invalid input size for property 1684431725
2022-09-19 08:23:20.087605-0700 AVEchoTouch[2388:1474002] [as] ATAudioSessionPropertyManager.mm:225  Invalid input size for property 1684431725

I am assuming 1684431725 is 'dfcm' but I am not sure what Audio Session Property that might be.
Replies: 13 · Boosts: 2 · Views: 3.9k · Activity: Nov ’23
AVSpeechSynthesizer write method: getting the audio buffer and playing it doesn't work
I am using AVSpeechSynthesizer to get an audio buffer and play it, using AVAudioEngine and AVAudioPlayerNode to play the buffer. But I am getting an error:

[avae] AVAEInternal.h:76 required condition is false: [AVAudioPlayerNode.mm:734:ScheduleBuffer: (_outputFormat.channelCount == buffer.format.channelCount)]
2023-05-02 03:14:35.709020-0700 AudioPlayer[12525:308940] *** Terminating app due to uncaught exception 'com.apple.coreaudio.avfaudio', reason: 'required condition is false: _outputFormat.channelCount == buffer.format.channelCount'

Can anyone please help me play the AVAudioBuffer from the AVSpeechSynthesizer write method?
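A sketch of one way around the channel-count assertion, under the assumption that connecting the player node with the synthesizer buffer's own format (rather than a preset one) makes the counts match; this is not taken from the post.

import AVFoundation

let engine = AVAudioEngine()
let playerNode = AVAudioPlayerNode()
let synthesizer = AVSpeechSynthesizer()
engine.attach(playerNode)

let utterance = AVSpeechUtterance(string: "Hello")
synthesizer.write(utterance) { buffer in
    guard let pcmBuffer = buffer as? AVAudioPCMBuffer, pcmBuffer.frameLength > 0 else { return }
    // Connect (or reconnect) using the buffer's format so the channel counts agree.
    engine.connect(playerNode, to: engine.mainMixerNode, format: pcmBuffer.format)
    if !engine.isRunning { try? engine.start() }
    playerNode.scheduleBuffer(pcmBuffer, completionHandler: nil)
    playerNode.play()
}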
Replies: 2 · Boosts: 1 · Views: 1.1k · Activity: Jul ’23
AVAudioEngine Cannot Create AVAudioFile from URL
I cannot seem to create an AVAudioFile from a URL to be played in an AVAudioEngine. Here is my complete code, following the documentation.

import UIKit
import AVKit
import AVFoundation

class ViewController: UIViewController {

    let audioEngine = AVAudioEngine()
    let audioPlayerNode = AVAudioPlayerNode()

    override func viewDidLoad() {
        super.viewDidLoad()
        streamAudioFromURL(urlString: "https://samplelib.com/lib/preview/mp3/sample-9s.mp3")
    }

    func streamAudioFromURL(urlString: String) {
        guard let url = URL(string: urlString) else {
            print("Invalid URL")
            return
        }
        let audioFile = try! AVAudioFile(forReading: url)
        let audioEngine = AVAudioEngine()
        let playerNode = AVAudioPlayerNode()
        audioEngine.attach(playerNode)
        audioEngine.connect(playerNode, to: audioEngine.outputNode, format: audioFile.processingFormat)
        playerNode.scheduleFile(audioFile, at: nil, completionCallbackType: .dataPlayedBack) { _ in
            /* Handle any work that's necessary after playback. */
        }
        do {
            try audioEngine.start()
            playerNode.play()
        } catch {
            /* Handle the error. */
        }
    }
}

I am getting the following error on let audioFile = try! AVAudioFile(forReading: url):

Thread 1: Fatal error: 'try!' expression unexpectedly raised an error: Error Domain=com.apple.coreaudio.avfaudio Code=2003334207 "(null)" UserInfo={failed call=ExtAudioFileOpenURL((CFURLRef)fileURL, &_extAudioFile)}

I have tried many other .mp3 file URLs, as well as .wav and .m4a, and none seem to work. The documentation makes this look so easy but I have been trying for hours to no avail. If you have any suggestions, they would be greatly appreciated!
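A minimal sketch of the likely fix: AVAudioFile (ExtAudioFileOpenURL underneath) opens local files, not http URLs, so the remote MP3 has to be downloaded to disk first. The file name and copy step below are assumptions for illustration.

import AVFoundation

let remoteURL = URL(string: "https://samplelib.com/lib/preview/mp3/sample-9s.mp3")!

URLSession.shared.downloadTask(with: remoteURL) { tempURL, _, error in
    guard let tempURL = tempURL, error == nil else { return }
    do {
        // Give the temporary file an .mp3 extension so Core Audio recognizes the container.
        let localURL = FileManager.default.temporaryDirectory.appendingPathComponent("sample.mp3")
        try? FileManager.default.removeItem(at: localURL)
        try FileManager.default.copyItem(at: tempURL, to: localURL)

        let audioFile = try AVAudioFile(forReading: localURL)
        // Attach and connect a player node, then schedule `audioFile`, as in the original code.
        _ = audioFile
    } catch {
        print("Could not open downloaded audio: \(error)")
    }
}.resume()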
Replies: 2 · Boosts: 0 · Views: 1.5k · Activity: Jun ’23
AVAudioEngine input on Mac, disrupting low priority sounds
I am analysing sounds by tapping the mic on the Mac. All is working well, but it disrupts other (what I assume are) low-priority sounds, e.g. dragging an item off the Dock, sending a message in Messages, speaking something in Shortcuts or Terminal. Other sounds, like Music.app playing or Siri speaking, are not disrupted. The disruption sounds like the last part of the sound being repeated two extra times - very noticeable. This is the code:

import Cocoa
import AVFAudio

class AudioHelper: NSObject {
    let audioEngine = AVAudioEngine()

    func start() async throws {
        audioEngine.inputNode.installTap(onBus: 0, bufferSize: 8192, format: nil) { buffer, time in
        }
        try audioEngine.start()
    }
}

I have tried increasing the buffer, changing the QoS to utility (in the hope the sound analysis would become less important than the disrupted sounds), and running on a non-main thread, but no luck. macOS 13.4.1. Any assistance would be appreciated.
Replies: 2 · Boosts: 2 · Views: 1.1k · Activity: Jun ’23
AVFoundation: Play Audio Delayed in Background with Ducking
I have a timer in one of my apps and now want to add audio that plays at the end of the timer. It's a workout app and the sound should remind the user that it is time for the next exercise. The audio should duck music playback (Apple Music / Spotify) and also work in the background. Background audio is enabled for the app. I am not able to achieve everything at the same time. I set the audio session to the playback category with the duckOthers option:

do {
    try AVAudioSession.sharedInstance().setCategory(
        .playback,
        options: .duckOthers
    )
} catch {
    print(error)
}

For playback I just use AVAudioPlayer. When the user starts the timer, I schedule a timer for the future and play the sound. While this works perfectly in the foreground, the sound is not played back when going to the background, as timers are not fired in the background but rather when the user brings the app back to the foreground. I have also tried using AVAudioEngine and AVAudioPlayerNode, as the latter can start playback delayed. That case works now, but the audio ducking begins immediately when initialising the AVAudioEngine, which is also not what I want. Is there any other approach that I am not aware of?
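For reference, a sketch of the delayed-start API mentioned above; it assumes the engine is already running and the node is attached and connected, and it does not address the ducking-timing problem the post describes.

import AVFoundation

func scheduleCue(file: AVAudioFile, on playerNode: AVAudioPlayerNode, afterSeconds delay: Double) {
    playerNode.scheduleFile(file, at: nil, completionHandler: nil)

    // lastRenderTime is only valid while the engine is running.
    guard let now = playerNode.lastRenderTime, now.isSampleTimeValid else { return }
    let sampleRate = playerNode.outputFormat(forBus: 0).sampleRate
    let startSample = now.sampleTime + AVAudioFramePosition(delay * sampleRate)
    playerNode.play(at: AVAudioTime(sampleTime: startSample, atRate: sampleRate))
}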
Replies: 0 · Boosts: 0 · Views: 1.2k · Activity: Jun ’23
Logic Pro, SIGSEGV, critical crashes
Hi, I'm working hard with Logic Pro and it's the 4th time that the application crashes. This is report I receive. What can I do to fix? Thank you in advance Translated Report (Full Report Below) Process: Logic Pro X [1433] Path: /Applications/Logic Pro X.app/Contents/MacOS/Logic Pro X Identifier: com.apple.logic10 Version: 10.7.7 (5762) Build Info: MALogic-5762000000000000~2 (1A85) App Item ID: 634148309 App External ID: 854029738 Code Type: X86-64 (Native) Parent Process: launchd [1] User ID: 501 Date/Time: 2023-07-01 09:16:42.7422 +0200 OS Version: macOS 13.3.1 (22E261) Report Version: 12 Bridge OS Version: 7.4 (20P4252) Anonymous UUID: F5E0021C-707D-3E26-12BC-6E1D779A746A Time Awake Since Boot: 2700 seconds System Integrity Protection: enabled Crashed Thread: 0 Dispatch queue: com.apple.main-thread Exception Type: EXC_BAD_ACCESS (SIGSEGV) Exception Codes: KERN_INVALID_ADDRESS at 0x0000000000000010 Exception Codes: 0x0000000000000001, 0x0000000000000010 Termination Reason: Namespace SIGNAL, Code 11 Segmentation fault: 11 Terminating Process: exc handler [1433] VM Region Info: 0x10 is not in any region. Bytes before following region: 140737486778352 REGION TYPE START - END [ VSIZE] PRT/MAX SHRMOD REGION DETAIL UNUSED SPACE AT START ---> shared memory 7fffffe7f000-7fffffe80000 [ 4K] r-x/r-x SM=SHM Thread 0 Crashed:: Dispatch queue: com.apple.main-thread 0 Logic Pro X 0x108fe6972 0x108a75000 + 5708146 1 Logic Pro X 0x108def2d3 0x108a75000 + 3646163 2 Foundation 0x7ff80e4b3f35 __NSFireDelayedPerform + 440 3 CoreFoundation 0x7ff80d623478 CFRUNLOOP_IS_CALLING_OUT_TO_A_TIMER_CALLBACK_FUNCTION + 20 4 CoreFoundation 0x7ff80d622ff3 __CFRunLoopDoTimer + 807 5 CoreFoundation 0x7ff80d622c19 __CFRunLoopDoTimers + 285 6 CoreFoundation 0x7ff80d608f79 __CFRunLoopRun + 2206 7 CoreFoundation 0x7ff80d608071 CFRunLoopRunSpecific + 560 8 HIToolbox 0x7ff817070fcd RunCurrentEventLoopInMode + 292 9 HIToolbox 0x7ff817070dde ReceiveNextEventCommon + 657 10 HIToolbox 0x7ff817070b38 _BlockUntilNextEventMatchingListInModeWithFilter + 64 11 AppKit 0x7ff81069a7a0 _DPSNextEvent + 858 12 AppKit 0x7ff81069964a -[NSApplication(NSEvent) _nextEventMatchingEventMask:untilDate:inMode:dequeue:] + 1214 13 Logic Pro X 0x10a29885d 0x108a75000 + 25311325 14 MAToolKit 0x1117f0e37 0x1116ec000 + 1068599 15 MAToolKit 0x1117f64ae 0x1116ec000 + 1090734 16 AppKit 0x7ff8108864b1 -[NSWindow(NSEventRouting) _handleMouseDownEvent:isDelayedEvent:] + 4330 17 AppKit 0x7ff8107fdcef -[NSWindow(NSEventRouting) _reallySendEvent:isDelayedEvent:] + 404 18 AppKit 0x7ff8107fd93f -[NSWindow(NSEventRouting) sendEvent:] + 345 19 Logic Pro X 0x108ebf486 0x108a75000 + 4498566 20 AppKit 0x7ff8107fc319 -[NSApplication(NSEvent) sendEvent:] + 345 21 Logic Pro X 0x10a2995f4 0x108a75000 + 25314804 22 Logic Pro X 0x10a2990c9 0x108a75000 + 25313481 23 Logic Pro X 0x10a29337f 0x108a75000 + 25289599 24 Logic Pro X 0x10a29962e 0x108a75000 + 25314862 25 Logic Pro X 0x10a2990c9 0x108a75000 + 25313481 26 AppKit 0x7ff810ab6bbe -[NSApplication _handleEvent:] + 65 27 AppKit 0x7ff81068bcdd -[NSApplication run] + 623 28 AppKit 0x7ff81065fed2 NSApplicationMain + 817 29 Logic Pro X 0x10956565d 0x108a75000 + 11470429 30 dyld 0x7ff80d1d441f start + 1903 Thread 1:: caulk.messenger.shared:17 0 libsystem_kernel.dylib 0x7ff80d4ef52e semaphore_wait_trap + 10 1 caulk 0x7ff816da707e caulk::semaphore::timed_wait(double) + 150 2 caulk 0x7ff816da6f9c caulk::concurrent::details::worker_thread::run() + 30 3 caulk 0x7ff816da6cb0 void* 
caulk::thread_proxy<std::__1::tuple<caulk::thread::attributes, void (caulk::concurrent::details::worker_thread::)(), std::__1::tuplecaulk::concurrent::details::worker_thread*>>(void) + 41 4 libsystem_pthread.dylib 0x7ff80d52e1d3 _pthread_start + 125 5 libsystem_pthread.dylib 0x7ff80d529bd3 thread_start + 15 Thread 2:: com.apple.NSEventThread 0 libsystem_kernel.dylib 0x7ff80d4ef5b2 mach_msg2_trap + 10 1 libsystem_kernel.dylib 0x7ff80d4fd72d mach_msg2_internal + 78 2 libsystem_kernel.dylib 0x7ff80d4f65e4 mach_msg_overwrite + 692 3 libsystem_kernel.dylib 0x7ff80d4ef89a mach_msg + 19 4 SkyLight 0x7ff81219f7ac CGSSnarfAndDispatchDatagrams + 160 5 SkyLight 0x7ff8124b8cfd SLSGetNextEventRecordInternal + 284 6 SkyLight 0x7ff8122d8360 SLEventCreateNextEvent + 9 7 HIToolbox 0x7ff81707bfea PullEventsFromWindowServerOnConnection(unsigned int, unsigned char, __CFMachPortBoost*) + 45 8 HIToolbox 0x7ff81707bf8b MessageHandler(__CFMachPort*, void*, long, void*) + 48 9 CoreFoundation 0x7ff80d637e66 __CFMachPortPerform + 244 10 CoreFoundation 0x7ff80d60a5a3 CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE1_PERFORM_FUNCTION + 41 11 CoreFoundation 0x7ff80d60a4e3 __CFRunLoopDoSource1 + 540 12 CoreFoundation 0x7ff80d609161 __CFRunLoopRun + 2694 13 CoreFoundation 0x7ff80d608071 CFRunLoopRunSpecific + 560 14 AppKit 0x7ff8107fa909 _NSEventThread + 132 15 libsystem_pthread.dylib 0x7ff80d52e1d3 _pthread_start + 125 16 libsystem_pthread.dylib 0x7ff80d529bd3 thread_start + 15
Replies: 0 · Boosts: 0 · Views: 773 · Activity: Jul ’23
Low volume levels over USB-C adapter
We are developing an app that uses external hardware to measure analogue hearing-loop performance. It uses the audio jack on the phone/iPad. With the new hardware on an iPad using USB-C, we have noticed that the same input - one through the Lightning adapter and one through the USB-C adapter - produces very different input levels. The USB-C one is ~23 dB lower, with the same code and settings; that is more than a 10x difference in amplitude. Is there any way to control the USB-C adapter? Am I missing something? The code simply uses AVAudioInputNode and a block attached to it via self.inputNode.installTap. We do adjust the gain to 1.0:

let gain: Float = 1.0
try session.setInputGain(gain)

But that still does not help. I wish there was an Apple lab I could go to, to speak to engineers about it.
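A small sketch related to the gain call above, under the assumption that input gain is only adjustable when the current route reports it as settable, which external USB-C adapters may not.

import AVFoundation

let session = AVAudioSession.sharedInstance()
if session.isInputGainSettable {
    try? session.setInputGain(1.0)
} else {
    // Many external adapters expose a fixed input gain; any level difference then has to be compensated in software.
    print("Input gain is not settable for route: \(session.currentRoute.inputs)")
}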
Replies: 0 · Boosts: 0 · Views: 948 · Activity: Jul ’23
MacOS echo cancellation (AUVoiceProcessing) trouble with device gain/volume
Hi community, I'm developing an application for macOS and I need to capture the mic audio stream. Currently, using Core Audio in Swift, I'm able to capture the audio stream using IO procs and have applied AUVoiceProcessing to prevent echo from the speaker device. I was able to connect the audio unit and perform the echo cancellation. The problem I'm getting is that when I'm using AUVoiceProcessing the gain of the two devices gets reduced, and that affects the volume of both devices (microphone and speaker). I have tried to disable the AGC using the property kAUVoiceIOProperty_VoiceProcessingEnableAGC, but the results are the same. Is there any option to disable the gain reduction, or is there a better approach to get the echo cancellation working?
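For clarity, a sketch of the AGC property call mentioned above; `voiceUnit` is assumed to be the already-configured AUVoiceProcessing audio unit, and this simply illustrates the call the poster says they tried.

import AudioToolbox

func disableVoiceProcessingAGC(on voiceUnit: AudioUnit) -> OSStatus {
    var enableAGC: UInt32 = 0  // 0 disables the automatic gain control
    return AudioUnitSetProperty(voiceUnit,
                                kAUVoiceIOProperty_VoiceProcessingEnableAGC,
                                kAudioUnitScope_Global,
                                0,
                                &enableAGC,
                                UInt32(MemoryLayout<UInt32>.size))
}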
Replies: 3 · Boosts: 0 · Views: 1.4k · Activity: Oct ’23
Unity streaming audio playback to AVAudioSession not playing correctly or captured by screen recording?
Our app is a game written in Unity where we have most of our audio playback handled by Unity. However, one of our game experiences utilizes microphone input for speech recognition, so in order to perform echo cancellation (while the game has audio playback), we set up an audio stream from Unity to native Swift code that performs the mixing of the input/output nodes. We found, however, that by streaming the audio buffer to our AVAudioSession: (1) the volume of the audio playback comes out differently, and (2) when capturing a screen recording of the app, the audio playback played through AVAudioSession does not get captured at all. We are looking to figure out what could be causing the discrepancy in playback as well as the capture behaviour during screen recordings. We set up the AVAudioSession with this configuration:

AVAudioSession.sharedInstance().setCategory(AVAudioSession.Category.playAndRecord, options: .mixWithOthers)

with

inputNode.setVoiceProcessingEnabled(true)

after connecting our IO and mixer nodes. Any suggestions or ideas on what to look out for would be appreciated!
Replies: 0 · Boosts: 0 · Views: 944 · Activity: Jul ’23
Can no longer access iTunes music purchases with AVAudioFile with iOS 17
Prior to iOS 17, I used AVAudioFile to open (for reading) the assetURL of an MPMediaItem for songs that the user purchased through the iTunes Store. With the iOS 17 beta, this no longer seems possible, as AVAudioFile throws this:

ExtAudioFile.cpp:211 about to throw -54: open audio file
AVAEInternal.h:109 [AVAudioFile.mm:135:AVAudioFileImpl: (ExtAudioFileOpenURL((CFURLRef)fileURL, &_extAudioFile)): error -54

I also can't copy the URL to the Documents directory, because I get this:

The file “item.m4a” couldn’t be opened because URL type ipod-library isn’t supported.

This seems to be affecting other apps on the App Store besides mine, and it will reflect very badly on my app if this makes it into the final iOS 17, because I have encouraged users to buy songs on the iTunes Store to use with my app. Now it seems there is no way to access them. Is this a known bug, or is there some kind of workaround?
Replies: 6 · Boosts: 1 · Views: 1.3k · Activity: Sep ’23
Record over bluetooth using SBC
When recording audio over Bluetooth from AirPods to iPhone using AVAudioRecorder, the Bluetooth audio codec used is always AAC-ELD, independent of the storage codec selected in the AVAudioRecorder instance. As far as I know, every Bluetooth device must support SBC; hence, it should be possible for the AirPods to transmit the recorded audio using the SBC codec instead of AAC-ELD. However, I could not find any resource on how to request this codec using AVAudioRecorder or AVAudioEngine. Is it possible to request SBC at all, and if yes, how?
Replies: 0 · Boosts: 0 · Views: 600 · Activity: Aug ’23
Retrieve Audio Workgroup in iOS
Hi! I am working on an audio application on iOS. This is how I retrieve the workgroup from the remote I/O audio unit (ioUnit). The unit is initialized and is working fine (meaning that it is regularly called by the system).

os_workgroup_t os_workgroup{nullptr};
uint32_t os_workgroup_index_size;
if (status = AudioUnitGetProperty(ioUnit, kAudioOutputUnitProperty_OSWorkgroup, kAudioUnitScope_Global, 0, &os_workgroup, &os_workgroup_index_size); status != noErr) {
    throw runtime_error("AudioUnitSetProperty kAudioOutputUnitProperty_OSWorkgroup - Failed with OSStatus: " + to_string(status));
}

However, the resulting os_workgroup's value is 0x40, which does not seem correct. No wonder I cannot join any other realtime threads to the workgroup either. The returned status, however, is a solid 0. Can anyone help?
Replies: 0 · Boosts: 0 · Views: 740 · Activity: Aug ’23
AVAudioSession currentRoute suddenly has no inputs even if availableInputs contains built-in mic
From an app that reads audio from the built-in microphone, I'm receiving many crash logs where the AVAudioEngine fails to start again after the app was suspended. Basically, I'm calling these two methods in the app delegate's applicationDidBecomeActive and applicationDidEnterBackground methods respectively:

let audioSession = AVAudioSession.sharedInstance()

func startAudio() throws {
    self.audioEngine = AVAudioEngine()
    try self.audioSession.setCategory(.record, mode: .measurement)
    try audioSession.setActive(true)
    self.audioEngine!.inputNode.installTap(onBus: 0, bufferSize: 4096, format: nil, block: { ... })
    self.audioEngine!.prepare()
    try self.audioEngine!.start()
}

func stopAudio() throws {
    self.audioEngine?.stop()
    self.audioEngine?.inputNode.removeTap(onBus: 0)
    self.audioEngine = nil
    try self.audioSession.setActive(false, options: [.notifyOthersOnDeactivation])
}

In the crash logs (iOS 16.6) I'm seeing that this works fine several times as the app is opened and closed, but suddenly the audioEngine.start() call fails with the error

Error Domain=com.apple.coreaudio.avfaudio Code=-10851 "(null)" UserInfo={failed call=err = AUGraphParser::InitializeActiveNodesInInputChain(ThisGraph, *GetInputNode())}

and audioEngine!.inputNode.outputFormat(forBus: 0) is something like <AVAudioFormat 0x282301c70: 2 ch, 0 Hz, Float32, deinterleaved>. Also, right before installing the tap, audioSession.availableInputs contains an entry of type MicrophoneBuiltIn, but audioSession.currentRoute lists no inputs at all. I have not been able to reproduce this situation on my own devices yet. Does anyone have an idea why this is happening?
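One possible angle worth checking, sketched here as an editorial assumption rather than a confirmed fix: observing session interruptions and media-services resets before restarting the engine, since a suspended app can come back with a torn-down audio route.

import AVFoundation

let session = AVAudioSession.sharedInstance()

// Rebuild the engine when an interruption ends or when the media services daemon restarts.
NotificationCenter.default.addObserver(forName: AVAudioSession.interruptionNotification,
                                       object: session,
                                       queue: .main) { note in
    let typeValue = note.userInfo?[AVAudioSessionInterruptionTypeKey] as? UInt
    if typeValue == AVAudioSession.InterruptionType.ended.rawValue {
        // Call startAudio() again here.
    }
}

NotificationCenter.default.addObserver(forName: AVAudioSession.mediaServicesWereResetNotification,
                                       object: session,
                                       queue: .main) { _ in
    // Recreate the engine and the session configuration from scratch here.
}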
Replies: 0 · Boosts: 0 · Views: 697 · Activity: Aug ’23
Hello guys, how can I use the speak() API of AVSpeechSynthesizer when I turn off my iPhone?
Hello, I have been struggling to resolve the issue in the question above. The utterance is spoken while my iPhone is unlocked, but when my iPhone goes into background mode (screen turned off), it doesn't speak any more. I think it should be possible to play audio or speak an utterance in the background, because I can play music in the background in YouTube. Any help please?
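A minimal sketch of the usual prerequisites, under the assumptions that the app has the Audio background mode capability enabled and that a playback session is active before the screen locks.

import AVFoundation

let synthesizer = AVSpeechSynthesizer()

func speakInBackground(_ text: String) {
    // An active playback session (plus the Audio background mode) keeps speech audible when the device is locked.
    try? AVAudioSession.sharedInstance().setCategory(.playback, mode: .spokenAudio, options: [])
    try? AVAudioSession.sharedInstance().setActive(true)
    synthesizer.speak(AVSpeechUtterance(string: text))
}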
Replies: 1 · Boosts: 0 · Views: 578 · Activity: Sep ’23