Post not yet marked as solved
New in iOS 17 we can control the amount of 'other audio' ducking through AVAudioEngine. Is this also possible with AVAudioSession?
In my app I don't use voice input, but I do play voice audio while music from other apps plays in the background. Often the music either drowns out the voice, if I use the .mixWithOthers option, or it's not loud enough if I use .duckOthers. It would be awesome to have the level of control that AVAudioEngine has.
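For reference, the AVAudioEngine-side control I'm referring to looks roughly like this on iOS 17 (a sketch; it only applies while voice processing is enabled on the engine's input node):

import AVFoundation

func configureDucking(on engine: AVAudioEngine) throws {
    // The ducking configuration only takes effect while voice processing is enabled on the I/O unit.
    try engine.inputNode.setVoiceProcessingEnabled(true)

    // New in iOS 17: fine-grained control over how much other apps' audio is ducked.
    let ducking = AVAudioVoiceProcessingOtherAudioDuckingConfiguration(
        enableAdvancedDucking: true,
        duckingLevel: .min   // .min, .mid, .max, or .default
    )
    engine.inputNode.voiceProcessingOtherAudioDuckingConfiguration = ducking
}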
Post not yet marked as solved
Thank you for this new API.
Today, when using AUVoiceIO, voice gets processed ahead of rendering to ensure the echo canceller is capable of filtering it out from the input.
Will other audio be processed in the same way? For example, rendered as mono at a 16 kHz sample rate?
I'm asking because I'm wondering if this API will unlock the ability to use wide-band, stereo, high-quality other audio (for example, game audio) while using voice.
Thanks!
Post not yet marked as solved
I'm trying to change the audio input (microphone) between all the available devices from AVAudioSession.sharedInstance().availableInputs. I'm using AVAudioSession.routeChangeNotification to get automatic route changes when devices get connected/disconnected and change the preferred input with setPreferredInput, then I restart my audioEngine and it works fine.
But when I try to change the preferred input programmatically, the audio engine's inputNode doesn't switch; it keeps capturing from the last connected device.
Even though AVAudioSession.sharedInstance().currentRoute.inputs changes, the audioEngine?.inputNode doesn't follow the setPreferredInput call.
WhatsApp seems to handle this without any issues.
Any suggestions or leads are highly appreciated. Thanks.
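For context, the switching code I'm describing is roughly as follows (a simplified sketch; the real code also handles errors and reacts to routeChangeNotification):

import AVFoundation

func switchInput(to port: AVAudioSessionPortDescription, engine: AVAudioEngine) throws {
    // Stop the engine before changing the route.
    engine.stop()

    // port comes from AVAudioSession.sharedInstance().availableInputs
    try AVAudioSession.sharedInstance().setPreferredInput(port)

    // currentRoute.inputs now reflects the new device,
    // but engine.inputNode keeps capturing from the previous one.
    engine.prepare()
    try engine.start()
}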
Post not yet marked as solved
I am developing a macOS video/audio chat app that uses audio input and output only intermittently. The rest of the time I need to stop and tear down AVAudioEngine to allow other applications such as music players to use audio.
I have found that just pausing or stopping the engine is not enough; I need to completely tear it down and force a deinit by setting engine = nil in my Objective-C code (with ARC enabled).
What I have learned is that I have to make sure to tear down and detach absolutely everything, otherwise AVAudioEngine will fail to start the next time, especially when using a Bluetooth headset. After months of trial and error, I have something that appears to be almost stable. However, I am sometimes hitting the crash shown below after alloc + init of an AVAudioEngine instance, when enabling voice processing. The crash occurs when building with Address Sanitizer enabled, and the logging above the line is my own:
stopping audio engine
disabling voice processing...
voice processing disabled
engine stopped
waiting for engine...
starting audio engine...
enabling voice processing...
==75508==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x000111e11be0 at pc 0x000103123360 bp 0x00016d231c90 sp 0x00016d231450
WRITE of size 52 at 0x000111e11be0 thread T218
#0 0x10312335c in wrap_memcpy+0x244 (libclang_rt.asan_osx_dynamic.dylib:arm64e+0x1b35c) (BuildId: f0a7ac5c49bc3abc851181b6f92b308a32000000200000000100000000000b00)
#1 0x1077f407c (CoreAudio:arm64e+0xc07c) (BuildId: 3318bd64e64f3e69991d605d1bc10d7d32000000200000000100000000030d00)
#2 0x1078f1484 (CoreAudio:arm64e+0x109484) (BuildId: 3318bd64e64f3e69991d605d1bc10d7d32000000200000000100000000030d00)
#3 0x1a3d661a0 in AudioUnitGetProperty+0x1c0 (AudioToolboxCore:arm64e+0x2101a0) (BuildId: 3a76e12cd37d3545bb42d52848e0bd7032000000200000000100000000030d00)
#4 0x207d8be38 in AVAudioIOUnit_OSX::_GetHWFormat(unsigned int, unsigned int*)+0x76c (AVFAudio:arm64e+0xbde38) (BuildId: 4a3f05007b8c35c98be4e78396ca9eeb32000000200000000100000000030d00)
#5 0x207d8aea4 in invocation function for block in AVAudioIOUnit::IOUnitPropertyListener(void*, ComponentInstanceRecord*, unsigned int, unsigned int, unsigned int)+0x15c (AVFAudio:arm64e+0xbcea4) (BuildId: 4a3f05007b8c35c98be4e78396ca9eeb32000000200000000100000000030d00)
#6 0x103149f74 in __wrap_dispatch_async_block_invoke+0xc0 (libclang_rt.asan_osx_dynamic.dylib:arm64e+0x41f74) (BuildId: f0a7ac5c49bc3abc851181b6f92b308a32000000200000000100000000000b00)
#7 0x1a1d4a870 in _dispatch_call_block_and_release+0x1c (libdispatch.dylib:arm64e+0x2870) (BuildId: 8e87dc0ea5703933b37d5e05ad51620632000000200000000100000000030d00)
#8 0x1a1d4c3fc in _dispatch_client_callout+0x10 (libdispatch.dylib:arm64e+0x43fc) (BuildId: 8e87dc0ea5703933b37d5e05ad51620632000000200000000100000000030d00)
#9 0x1a1d53a84 in _dispatch_lane_serial_drain+0x298 (libdispatch.dylib:arm64e+0xba84) (BuildId: 8e87dc0ea5703933b37d5e05ad51620632000000200000000100000000030d00)
#10 0x1a1d545f4 in _dispatch_lane_invoke+0x17c (libdispatch.dylib:arm64e+0xc5f4) (BuildId: 8e87dc0ea5703933b37d5e05ad51620632000000200000000100000000030d00)
#11 0x1a1d5f240 in _dispatch_workloop_worker_thread+0x284 (libdispatch.dylib:arm64e+0x17240) (BuildId: 8e87dc0ea5703933b37d5e05ad51620632000000200000000100000000030d00)
#12 0x1a1ef8070 in _pthread_wqthread+0x11c (libsystem_pthread.dylib:arm64e+0x3070) (BuildId: b401cfb38dfe32db92b3ba8af0f8ca6e32000000200000000100000000030d00)
#13 0x1a1ef6d90 in start_wqthread+0x4 (libsystem_pthread.dylib:arm64e+0x1d90) (BuildId: b401cfb38dfe32db92b3ba8af0f8ca6e32000000200000000100000000030d00)
0x000111e11be0 is located 0 bytes to the right of 32-byte region [0x000111e11bc0,0x000111e11be0)
allocated by thread T218 here:
#0 0x10314ae68 in wrap_malloc+0x94 (libclang_rt.asan_osx_dynamic.dylib:arm64e+0x42e68) (BuildId: f0a7ac5c49bc3abc851181b6f92b308a32000000200000000100000000000b00)
#1 0x207d8bdd4 in AVAudioIOUnit_OSX::_GetHWFormat(unsigned int, unsigned int*)+0x708 (AVFAudio:arm64e+0xbddd4) (BuildId: 4a3f05007b8c35c98be4e78396ca9eeb32000000200000000100000000030d00)
#2 0x207d8aea4 in invocation function for block in AVAudioIOUnit::IOUnitPropertyListener(void*, ComponentInstanceRecord*, unsigned int, unsigned int, unsigned int)+0x15c (AVFAudio:arm64e+0xbcea4) (BuildId: 4a3f05007b8c35c98be4e78396ca9eeb32000000200000000100000000030d00)
#3 0x103149f74 in __wrap_dispatch_async_block_invoke+0xc0 (libclang_rt.asan_osx_dynamic.dylib:arm64e+0x41f74) (BuildId: f0a7ac5c49bc3abc851181b6f92b308a32000000200000000100000000000b00)
#4 0x1a1d4a870 in _dispatch_call_block_and_release+0x1c (libdispatch.dylib:arm64e+0x2870) (BuildId: 8e87dc0ea5703933b37d5e05ad51620632000000200000000100000000030d00)
#5 0x1a1d4c3fc in _dispatch_client_callout+0x10 (libdispatch.dylib:arm64e+0x43fc) (BuildId: 8e87dc0ea5703933b37d5e05ad51620632000000200000000100000000030d00)
#6 0x1a1d53a84 in _dispatch_lane_serial_drain+0x298 (libdispatch.dylib:arm64e+0xba84) (BuildId: 8e87dc0ea5703933b37d5e05ad51620632000000200000000100000000030d00)
#7 0x1a1d545f4 in _dispatch_lane_invoke+0x17c (libdispatch.dylib:arm64e+0xc5f4) (BuildId: 8e87dc0ea5703933b37d5e05ad51620632000000200000000100000000030d00)
#8 0x1a1d5f240 in _dispatch_workloop_worker_thread+0x284 (libdispatch.dylib:arm64e+0x17240) (BuildId: 8e87dc0ea5703933b37d5e05ad51620632000000200000000100000000030d00)
#9 0x1a1ef8070 in _pthread_wqthread+0x11c (libsystem_pthread.dylib:arm64e+0x3070) (BuildId: b401cfb38dfe32db92b3ba8af0f8ca6e32000000200000000100000000030d00)
#10 0x1a1ef6d90 in start_wqthread+0x4 (libsystem_pthread.dylib:arm64e+0x1d90) (BuildId: b401cfb38dfe32db92b3ba8af0f8ca6e32000000200000000100000000030d00)
This is on a MacBook Pro (M2 Pro) running macOS 13.3.1 (a) (22E772610a).
What is the best way to proceed with this? It looks to me like a bug in AVAudioEngine/CoreAudio.
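For reference, a simplified Swift sketch of the teardown/startup order I'm describing (my real code is Objective-C and detaches every node and tap before releasing the engine):

import AVFoundation

final class EngineController {
    private var engine: AVAudioEngine?
    private var attachedByMe: [AVAudioNode] = []

    func tearDown() {
        guard let engine = engine else { return }
        engine.stop()
        // Disable voice processing before releasing the engine.
        try? engine.inputNode.setVoiceProcessingEnabled(false)
        engine.inputNode.removeTap(onBus: 0)
        // Detach everything we attached ourselves; the I/O and mixer nodes stay with the engine.
        attachedByMe.forEach { engine.detach($0) }
        attachedByMe.removeAll()
        self.engine = nil   // force deinit so other apps (and Bluetooth routes) are released
    }

    func start() throws {
        let engine = AVAudioEngine()
        try engine.inputNode.setVoiceProcessingEnabled(true)   // the ASan hit happens around here
        try engine.start()
        self.engine = engine
    }
}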
Best regards,
Jacob Gorm Hansen
Post not yet marked as solved
private func updateNowPlayingInfo() {
    var nowPlayingInfo = [String: Any]()
    nowPlayingInfo[MPMediaItemPropertyTitle] = songLabel.text
    nowPlayingInfo[MPMediaItemPropertyPlaybackDuration] = Int(Double(audioLengthSamples) / audioSampleRate)
    nowPlayingInfo[MPNowPlayingInfoPropertyElapsedPlaybackTime] = Int(Double(currentPosition) / audioSampleRate)
    print(isPlaying)
    print("updateNow")
    let playbackRate = isPlaying ? self.timeEffect.rate : 0.0
    print(playbackRate)
    nowPlayingInfo[MPNowPlayingInfoPropertyPlaybackRate] = playbackRate
    MPNowPlayingInfoCenter.default().nowPlayingInfo = nowPlayingInfo
}
Whenever I press my play/pause button in the app, I expect Control Center and the lock screen to reflect this. However, the Control Center symbol stays as pause regardless of what I do in the app.
Am I missing anything? Running on device with iOS 16.4.1.
private func configureRemoteCommandCenter() {
    let commandCenter = MPRemoteCommandCenter.shared()

    // Play command
    commandCenter.playCommand.isEnabled = true
    commandCenter.playCommand.addTarget { [weak self] event in
        // Handle the play command
        self?.playOrPauseFunction()
        return .success
    }

    // Pause command
    commandCenter.pauseCommand.isEnabled = true
    commandCenter.pauseCommand.addTarget { [weak self] event in
        // Handle the pause command
        self?.playOrPauseFunction()
        return .success
    }

    commandCenter.togglePlayPauseCommand.isEnabled = true
    commandCenter.togglePlayPauseCommand.addTarget { [weak self] event in
        self?.playOrPauseFunction()
        return .success
    }

    commandCenter.changePlaybackRateCommand.isEnabled = true

    commandCenter.changePlaybackPositionCommand.isEnabled = true
    commandCenter.changePlaybackPositionCommand.addTarget { [unowned self] event in
        guard let event = event as? MPChangePlaybackPositionCommandEvent else {
            return .commandFailed
        }
        currentPosition = AVAudioFramePosition(event.positionTime * audioSampleRate)
        scrubSeek(to: Double(currentPosition))
        updateNowPlayingInfo()
        return .success
    }

    // Add handlers for other remote control events here...
}
Post not yet marked as solved
Our app uses the playAndRecord category with the options interruptSpokenAudioAndMixWithOthers, allowBluetoothA2DP, allowAirPlay, and defaultToSpeaker. While our app is backgrounded, Spotify and other music apps play normally when the iPhone is not connected to anything or is connected to Bluetooth. However, once the iPhone is connected to CarPlay, either via Bluetooth or via USB cable, the music apps' playback quality becomes flat, which suggests the recording is using the Hands-Free Profile (which would also happen if allowBluetooth is set and connected to Bluetooth devices). Is there any way to keep using the iPhone microphone to record while connected to CarPlay so that we can allow music apps to play normally?
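For reference, the session setup described above is along these lines (a sketch; the mode is omitted because we don't set a special one here):

import AVFoundation

func configureSession() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord,
                            options: [.interruptSpokenAudioAndMixWithOthers,
                                      .allowBluetoothA2DP,
                                      .allowAirPlay,
                                      .defaultToSpeaker])
    try session.setActive(true)
}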
Hi, I know there are many ways to record voice on iOS using one of these well-known frameworks:
AVFoundation
AudioToolbox
Core Audio
But what I want to do is record a phone call, and there are several challenges.
Interruptions: when I get a call, the system interrupts any running app in order to handle the call.
So how can I make my voice recording app keep recording in the background, so that after I receive a call I can open the app and run the record function?
Even if I solved the previous issue, how can I record the sound that comes from the phone's top (earpiece) speaker?
If anyone has an idea or any thoughts, please share them with me.
Post not yet marked as solved
I am using AVSpeechSynthesizer's write method to get the audio buffers, and AVAudioEngine with AVAudioPlayerNode to play them.
But I am getting this error:
[avae] AVAEInternal.h:76 required condition is false: [AVAudioPlayerNode.mm:734:ScheduleBuffer: (_outputFormat.channelCount == buffer.format.channelCount)]
2023-05-02 03:14:35.709020-0700 AudioPlayer[12525:308940] *** Terminating app due to uncaught exception 'com.apple.coreaudio.avfaudio', reason: 'required condition is false: _outputFormat.channelCount == buffer.format.channelCount'
Can anyone please help me play the AVAudioBuffer from AVSpeechSynthesizer's write method?
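In case it helps frame the question: the error suggests the player node's output format doesn't match the buffer's channel count, so one pattern is to connect the player using the format of the first buffer that write delivers. A sketch (not my exact code):

import AVFoundation

let engine = AVAudioEngine()
let player = AVAudioPlayerNode()
let synthesizer = AVSpeechSynthesizer()

func speak(_ text: String) {
    var engineConfigured = false

    synthesizer.write(AVSpeechUtterance(string: text)) { buffer in
        guard let pcmBuffer = buffer as? AVAudioPCMBuffer, pcmBuffer.frameLength > 0 else { return }

        if !engineConfigured {
            // Connect with the synthesizer's own format so the channel counts match.
            engine.attach(player)
            engine.connect(player, to: engine.mainMixerNode, format: pcmBuffer.format)
            try? engine.start()
            player.play()
            engineConfigured = true
        }
        player.scheduleBuffer(pcmBuffer)
    }
}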
Post not yet marked as solved
I've created an app that applies stereo EQ to songs with AVAudioUnitEQ and AVAudioEngine. It works great on MPMediaItems with no protected assets.
I'd like to apply the EQ to Apple Music tracks for users that have an active subscription.
In order to play tracks with AVAudioEngine, I create an AVAudioFile from the MPMediaItem assetURL. When I try to get the URL of an Apple Music track, it returns nil even though I have an active subscription.
Is it possible to get the URL of an Apple Music track that an active subscriber has downloaded to their library? If so, I think I'd be all set getting it to work with AVAudioEngine.
If it's not possible to get the URL, does anyone know if there's some other method to play Apple Music tracks with AVAudioEngine?
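For context, the loading path that works for non-protected library items is roughly this (a sketch):

import MediaPlayer
import AVFoundation

func audioFile(for item: MPMediaItem) -> AVAudioFile? {
    // assetURL is nil for protected content, which is what I'm seeing with Apple Music tracks.
    guard let url = item.assetURL else { return nil }
    return try? AVAudioFile(forReading: url)
}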
Post not yet marked as solved
I just filed a bug report with Apple, but I wanted to post here in case people had input about this. I would love to hear that there is just some assumption or logic that I am messing up.
When an AVAudioEngine with voice processing io enabled is running, all other audio sources within the app that are started later will have low volume (seemingly not routed to the speakers?). After either setting the AVAudioSession category to .playAndRecord or overriding the AVAudioSession output route to speaker, the volume corrects itself (output seems to route to the speakers now).
The exact reproduction steps in the code can be broken down as follows. Make sure you have record permissions:
1. Create an AVAudioEngine
2. Access the engine's .mainMixerNode
3. Create an AVPlayer with some audio file (this is also reproducible with AVAudioPlayer and AVAudioEngine)
4. Configure the session with the following: AVAudioSession.sharedInstance().setCategory(.playAndRecord, mode: .videoChat, options: [.defaultToSpeaker]) (note that I'm setting .defaultToSpeaker)
5. Activate the session with AVAudioSession.sharedInstance().setActive(true)
6. Enable voice processing on the engine with the following: try! engine.outputNode.setVoiceProcessingEnabled(true)
7. Start the engine with the following: try! engine.start()
8. Start the AVPlayer and note the volume. You may need to increase the system volume to hear what you're playing.
9. Call either of the following and the audio from the AVPlayer will fix its volume:
   AVAudioSession.sharedInstance().setCategory(AVAudioSession.sharedInstance().category)
   AVAudioSession.sharedInstance().overrideOutputAudioPort(.speaker)
   Note that the volume instantly rises.
If you were to have another audio source (AVAudioPlayer, AVPlayer, AVAudioEngine) that is started after step 9, you would need to repeat step 9 again.
Obviously, this is not ideal and can cause some very unexpected volume changes for end users.
This was reproducible on iOS 13.6, 15.7, and 16.2.1 (latest).
If anyone has ideas as to how to prevent this or work around it other than the workaround demonstrated here, I'm all ears.
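For convenience, here is a condensed sketch of the repro steps above in one place (the bundled file name is just a placeholder):

import AVFoundation

let engine = AVAudioEngine()
var player: AVAudioPlayer?

func reproduce() throws {
    _ = engine.mainMixerNode   // touching the mixer builds the engine graph

    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord, mode: .videoChat, options: [.defaultToSpeaker])
    try session.setActive(true)

    try engine.outputNode.setVoiceProcessingEnabled(true)
    try engine.start()

    // "sample.mp3" is a placeholder for any bundled audio asset.
    let url = Bundle.main.url(forResource: "sample", withExtension: "mp3")!
    player = try AVAudioPlayer(contentsOf: url)
    player?.play()   // plays at a noticeably low volume

    // Re-applying the category (or overriding the output to speaker) restores normal volume.
    try session.setCategory(session.category)
}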
I'm working on an audio recording app which uses an external USB Audio interface (e.g, Focusrite Scarlett Solo) connected to an iPhone.
When I run AVAudioSession.sharedInstance().currentRoute.inputs it returns the interface correctly.
1 element
- 0 : <AVAudioSessionPortDescription: 0x28307c650, type = USBAudio; name = Scarlett Solo USB; UID = AppleUSBAudioEngine:Focusrite:Scarlett Solo USB:130000:1,2; selectedDataSource = (null)>
Channels are returned correctly as well.
po AVAudioSession.sharedInstance().currentRoute.inputs.first?.channels
▿ Optional<Array<AVAudioSessionChannelDescription>>
▿ some : 2 elements
- 0 : <AVAudioSessionChannelDescription: 0x283070b60, name = Scarlett Solo USB 1; label = 4294967295 (0xffffffff); number = 1; port UID = AppleUSBAudioEngine:Focusrite:Scarlett Solo USB:130000:1,2>
- 1 : <AVAudioSessionChannelDescription: 0x283070b70, name = Scarlett Solo USB 2; label = 4294967295 (0xffffffff); number = 2; port UID = AppleUSBAudioEngine:Focusrite:Scarlett Solo USB:130000:1,2>
When I connect the inputNode to the mainMixerNode in AVAudioEngine, it uses the multi-channel input, so the Line/Instrument input is on the right channel and the Microphone input is on the left.
How can I make it so that I use only the 2nd Channel (guitar) as a mono to be played back in both speakers?
I've been looking through some docs and discussions but could not find the answer.
I tried changing channels to 1 in the audio format, and as expected it plays only the first channel in mono, but I can't select the 2nd channel to be played instead.
let input = engine.inputNode
let inputFormat = input.inputFormat(forBus: 0)

let preferredFormat = AVAudioFormat(
    commonFormat: inputFormat.commonFormat,
    sampleRate: inputFormat.sampleRate,
    channels: 1,
    interleaved: false
)!

engine.connect(input, to: engine.mainMixerNode, format: preferredFormat)
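For the record, one workaround along these lines would be to tap the full two-channel input, copy only the second channel into a mono buffer, and schedule it on a player node so the guitar ends up centered in both speakers (a sketch; it adds some latency compared to a direct connection):

import AVFoundation

let engine = AVAudioEngine()
let player = AVAudioPlayerNode()

func playSecondChannelAsMono() throws {
    let input = engine.inputNode
    let inputFormat = input.inputFormat(forBus: 0)

    // Mono format at the hardware sample rate for playback through the main mixer.
    let monoFormat = AVAudioFormat(commonFormat: .pcmFormatFloat32,
                                   sampleRate: inputFormat.sampleRate,
                                   channels: 1,
                                   interleaved: false)!

    engine.attach(player)
    engine.connect(player, to: engine.mainMixerNode, format: monoFormat)

    // Tap the full multi-channel input and copy only channel index 1 (the instrument input).
    input.installTap(onBus: 0, bufferSize: 1024, format: inputFormat) { buffer, _ in
        guard buffer.format.channelCount >= 2,
              let source = buffer.floatChannelData,
              let mono = AVAudioPCMBuffer(pcmFormat: monoFormat,
                                          frameCapacity: buffer.frameLength) else { return }
        mono.frameLength = buffer.frameLength
        for frame in 0..<Int(buffer.frameLength) {
            mono.floatChannelData![0][frame] = source[1][frame]
        }
        player.scheduleBuffer(mono)
    }

    try engine.start()
    player.play()
}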
Post not yet marked as solved
My app plays music via AVAudioEngine. I'm using effects that prohibit me from using a built-in audio player.
I'm using MPNowPlayingInfoCenter to provide information to the NowPlaying app on the Apple Watch.
When I play a song, the Apple Watch displays the song metadata.
When I hit pause on my phone, the Apple Watch shows that the track is no longer advancing via the progress wheel; however, it continues to display the pause button even though the music is paused. Ideally, it would switch to the play symbol when the music is paused.
In the documentation it says you can use the playbackState property to set the playback state on macOS. There does not seem to be an equivalent in iOS. I figured the watch would know to switch the pause symbol to the play symbol when the playback rate is 0, but it does not.
All other functionality is working properly. The only thing that doesn't update on the watch is the pause/play symbol when the music toggles between play and pause, whether by tapping on my phone or from remote command events.
Thank you in advance for your help.
Post not yet marked as solved
AVAudioUnitTimePitch.latency is 0.09s on my debug devices.
It introduces a small delay when rendering audio with AVAudioEngine.
I just want to change the pitch while playing audio.
So how can I avoid this latency?
Post not yet marked as solved
According to the Apple docs, you should be able to connect audio nodes to an AVAudioEngine instance during runtime, but I'm getting a crash while trying to do so, in particular, when trying to connect instances of AVAudioUnitTimePitch or AVAudioUnitVarispeed to an AVAudioEngine with manual rendering mode enabled. The error message I get is:
Terminating app due to uncaught exception 'com.apple.coreaudio.avfaudio', reason:
'player started when in a disconnected state'
In my code, first, I configure the audio engine:
let engine = AVAudioEngine()
let format = AVAudioFormat(standardFormatWithSampleRate: 48000, channels: 2)!
try! engine.enableManualRenderingMode(.offline, format: format, maximumFrameCount: 1024)
try! engine.start()
Then, I try to attach the player to the engine:
let player = AVAudioPlayerNode()
configureEngine(player: player, useVarispeed: true)
player.play() // this is the line that causes the crash
Finally, this is the function I use to configure the engine nodes graph:
func configureEngine(player: AVAudioPlayerNode, useVarispeed: Bool) {
    engine.attach(player)

    guard useVarispeed else {
        engine.connect(player, to: engine.mainMixerNode, format: format)
        return
    }

    let varispeed = AVAudioUnitVarispeed()
    engine.attach(varispeed)
    engine.connect(player, to: varispeed, format: format)
    engine.connect(varispeed, to: engine.mainMixerNode, format: format)
}
If I pass false as the value for the useVarispeed parameter, the crash goes away. What is even more interesting is that if I add a dummy player node before starting the engine, the crash goes away too 🤷♂️
Could anyone please add clarity on what's going on here? Is this a bug or a limitation of the framework that I'm not aware of?
Here's a simple project demonstrating the problem: https://github.com/rlaguilar/AVAudioEngineBug
Post not yet marked as solved
Adopting Picture in Picture in a Custom Player
https://developer.apple.com/documentation/avkit/adopting_picture_in_picture_in_a_custom_player
I have implemented PiP by following this link. When I tap the PiP button it works fine, but when I am playing a video and minimize the app, Picture in Picture does not start. In my case I have used AVPlayerLayer, and I have set the AVAudioSession category to playback. I am on iOS 14 beta 4.
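For reference, my setup is roughly along these lines (a simplified sketch of the custom-player approach from that article):

import UIKit
import AVKit
import AVFoundation

final class PlayerViewController: UIViewController, AVPictureInPictureControllerDelegate {
    private let player = AVPlayer()
    private let playerLayer = AVPlayerLayer()
    private var pipController: AVPictureInPictureController?

    override func viewDidLoad() {
        super.viewDidLoad()

        try? AVAudioSession.sharedInstance().setCategory(.playback)
        try? AVAudioSession.sharedInstance().setActive(true)

        playerLayer.player = player
        view.layer.addSublayer(playerLayer)

        if AVPictureInPictureController.isPictureInPictureSupported() {
            pipController = AVPictureInPictureController(playerLayer: playerLayer)
            pipController?.delegate = self
        }
    }

    // Starting PiP manually from the button works; starting it when the app is backgrounded does not.
    @objc func pipButtonTapped() {
        pipController?.startPictureInPicture()
    }
}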
Post not yet marked as solved
I work on a video conferencing application which makes use of AVAudioEngine and the videoChat AVAudioSession.Mode.
This past Friday, an internal user reported an "audio cutting in and out" issue with their new iPhone 14 Pro, and I was able to reproduce the issue later that day on my iPhone 14 Pro Max. No other iOS devices running iOS 16 are exhibiting this issue.
I have narrowed down the root cause to the videoChat AVAudioSession.Mode after changing line 53 of the ViewController.swift file in Apple's "Using Voice Processing" sample project (https://developer.apple.com/documentation/avfaudio/audio_engine/audio_units/using_voice_processing) from:
try session.setCategory(.playAndRecord, options: .defaultToSpeaker)
to
try session.setCategory(.playAndRecord, mode: .videoChat, options: .defaultToSpeaker)
This only causes issues on my iPhone 14 Pro Max device, not on my iPhone 13 Pro Max, so it seems specific to the new iPhones only.
I am also seeing the following logged to the console using either device, which appears to be specific to iOS 16, but am not sure if it is related to the videoChat issue or not:
2022-09-19 08:23:20.087578-0700 AVEchoTouch[2388:1474002] [as] ATAudioSessionPropertyManager.mm:71 Invalid input size for property 1684431725
2022-09-19 08:23:20.087605-0700 AVEchoTouch[2388:1474002] [as] ATAudioSessionPropertyManager.mm:225 Invalid input size for property 1684431725
I am assuming 1684431725 is 'dfcm' but I am not sure what Audio Session Property that might be.
Post not yet marked as solved
Hello
My app records voice with a Record_Engine class (AVAudioEngine).
Problem:
Even though I make my app go to the background, it returns to the inactive state within a few seconds (about 3 seconds after the watch screen locks).
How can I keep my app running in the background, like the built-in recording app?
And my recorder class is here:
//Recorder.swift
class Record_Engine : ObservableObject {
    @Published var recording_file : AVAudioFile!

    private var engine: AVAudioEngine!
    private var mixerNode: AVAudioMixerNode!

    init() {
        setupSession()
        setupEngine()
    }

    fileprivate func setupSession() {
        let session = AVAudioSession.sharedInstance()
        do {
            try session.setCategory(AVAudioSession.Category.playAndRecord, mode: .default)
            try session.setActive(true)
        } catch {
            print(error.localizedDescription)
        }
    }

    fileprivate func setupEngine() {
        engine = AVAudioEngine()
        mixerNode = AVAudioMixerNode()
        mixerNode.volume = 0
        engine.attach(mixerNode)
        makeConnections()
        engine.prepare()
    }

    fileprivate func makeConnections() {
        let inputNode = engine.inputNode
        let inputFormat = inputNode.outputFormat(forBus: 0)
        engine.connect(inputNode, to: mixerNode, format: inputFormat)

        let mainMixerNode = engine.mainMixerNode
        let mixerFormat = AVAudioFormat(commonFormat: .pcmFormatFloat32, sampleRate: inputFormat.sampleRate, channels: 1, interleaved: false)
        engine.connect(mixerNode, to: mainMixerNode, format: mixerFormat)
    }

    func startRecording() throws {
        let tapNode: AVAudioNode = mixerNode
        let format = tapNode.outputFormat(forBus: 0)

        self.recording_file = try AVAudioFile(forWriting: get_file_path(), settings: format.settings)

        tapNode.installTap(onBus: 0, bufferSize: 8192, format: format, block: { (buffer, time) in
            do { try self.recording_file.write(from: buffer) }
            catch { print(error.localizedDescription) }
        })

        try engine.start()
    }
}
Post not yet marked as solved
I have the following code to connect inputNode to mainMixerNode of AVAudioEngine:
public func setupAudioEngine() {
    self.engine = AVAudioEngine()
    let format = engine.inputNode.inputFormat(forBus: 0)

    // The main mixer node is connected to the output node by default.
    engine.connect(self.engine.inputNode, to: self.engine.mainMixerNode, format: format)

    do {
        engine.prepare()
        try self.engine.start()
    }
    catch {
        print("error couldn't start engine")
    }

    engineRunning = true
}
But I am seeing a crash in Crashlytics dashboard (which I can't reproduce).
Fatal Exception: com.apple.coreaudio.avfaudio
required condition is false: IsFormatSampleRateAndChannelCountValid(format)
Before calling setupAudioEngine, I make sure the AVAudioSession category is not .playback (where the mic is not available). The function is called where the audio route change notification is handled, and I check this condition specifically. Can someone tell me what I am doing wrong?
Fatal Exception: com.apple.coreaudio.avfaudio
0 CoreFoundation 0x99288 __exceptionPreprocess
1 libobjc.A.dylib 0x16744 objc_exception_throw
2 CoreFoundation 0x17048c -[NSException initWithCoder:]
3 AVFAudio 0x9f64 AVAE_RaiseException(NSString*, ...)
4 AVFAudio 0x55738 AVAudioEngineGraph::_Connect(AVAudioNodeImplBase*, AVAudioNodeImplBase*, unsigned int, unsigned int, AVAudioFormat*)
5 AVFAudio 0x5cce0 AVAudioEngineGraph::Connect(AVAudioNode*, AVAudioNode*, unsigned long, unsigned long, AVAudioFormat*)
6 AVFAudio 0xdf1a8 AVAudioEngineImpl::Connect(AVAudioNode*, AVAudioNode*, unsigned long, unsigned long, AVAudioFormat*)
7 AVFAudio 0xe0fc8 -[AVAudioEngine connect:to:format:]
8 MyApp 0xa6af8 setupAudioEngine + 701 (MicrophoneOutput.swift:701)
9 MyApp 0xa46f0 handleRouteChange + 378 (MicrophoneOutput.swift:378)
10 MyApp 0xa4f50 @objc MicrophoneOutput.handleRouteChange(note:)
11 CoreFoundation 0x2a834 __CFNOTIFICATIONCENTER_IS_CALLING_OUT_TO_AN_OBSERVER__
12 CoreFoundation 0xc6fd4 ___CFXRegistrationPost_block_invoke
13 CoreFoundation 0x9a1d0 _CFXRegistrationPost
14 CoreFoundation 0x408ac _CFXNotificationPost
15 Foundation 0x1b754 -[NSNotificationCenter postNotificationName:object:userInfo:]
16 AudioSession 0x56f0 (anonymous namespace)::HandleRouteChange(AVAudioSession*, NSDictionary*)
17 AudioSession 0x5cbc invocation function for block in avfaudio::AVAudioSessionPropertyListener(void*, unsigned int, unsigned int, void const*)
18 libdispatch.dylib 0x1e6c _dispatch_call_block_and_release
19 libdispatch.dylib 0x3a30 _dispatch_client_callout
20 libdispatch.dylib 0x11f48 _dispatch_main_queue_drain
21 libdispatch.dylib 0x11b98 _dispatch_main_queue_callback_4CF
22 CoreFoundation 0x51800 __CFRUNLOOP_IS_SERVICING_THE_MAIN_DISPATCH_QUEUE__
23 CoreFoundation 0xb704 __CFRunLoopRun
24 CoreFoundation 0x1ebc8 CFRunLoopRunSpecific
25 GraphicsServices 0x1374 GSEventRunModal
26 UIKitCore 0x514648 -[UIApplication _run]
27 UIKitCore 0x295d90 UIApplicationMain
28 libswiftUIKit.dylib 0x30ecc UIApplicationMain(_:_:_:_:)
29 MyApp 0xc358 main (WhiteBalanceUI.swift)
30 ??? 0x104b1dce4 (Missing)
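For what it's worth, one guard that might avoid the crash (a sketch; I haven't confirmed this is the root cause) is to skip the connection when the input format is transiently invalid during a route change, which is what the IsFormatSampleRateAndChannelCountValid assertion appears to be complaining about:

public func setupAudioEngine() {
    self.engine = AVAudioEngine()
    let format = engine.inputNode.inputFormat(forBus: 0)

    // During some route changes the input can report 0 Hz / 0 channels;
    // connecting with such a format raises the exception above.
    guard format.sampleRate > 0, format.channelCount > 0 else {
        print("input format not ready, skipping engine setup")
        return
    }

    engine.connect(engine.inputNode, to: engine.mainMixerNode, format: format)

    do {
        engine.prepare()
        try engine.start()
        engineRunning = true
    } catch {
        print("error couldn't start engine")
    }
}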
Post not yet marked as solved
Using latest version of Ventura on an M1 iMac...
Two issues I've noticed when an AUv3 is loaded out of process:
1. When using tokenByAddingRenderObserver:, the provided block is never called if the AUv3 is loaded out of process.
2. When loaded in process, an AUv3 audio unit that is itself a host for other audio units can see all the audio units in the system, that is, all v2 units and all v3 units. When loaded out of process, the same v3 unit can only find the v2 units when querying. The system appears to be hiding the installed v3 units.
This seems like the reverse of what I would expect. When sandboxed (out of process) why are other V3 units hidden from inspection?
I've filed bug reports on these ages ago but had no response.
I'm particularly interested in issue #2 above, as it makes a big difference when our AU host unit is running in Logic, which only allows loading out of process.
When loading into our own host app, we can load the AU in process and it works nicely, able to host any other unit installed on the system.
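For clarity, by "querying" I mean a component lookup along these lines (a sketch); in process it returns both v2 and v3 units, out of process only the v2 units show up:

import AVFoundation

// List every effect audio unit visible from the current process.
var description = AudioComponentDescription()
description.componentType = kAudioUnitType_Effect

let visibleUnits = AVAudioUnitComponentManager.shared().components(matching: description)
for unit in visibleUnits {
    print(unit.manufacturerName, unit.name, unit.versionString)
}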
Post not yet marked as solved
Hi. I'm an audio/video player developer.
I implemented my player with AVAudioPlayerNode so that I can handle audio buffers directly.
Then I wondered how great it would be if AVPlayer could handle buffers too.
I would like to know if there is a way to handle buffers in AVPlayer, and if not, could this be exposed in the future?
Thanks.