Description
As of iOS 18, AVAudioSession.setPreferredIOBufferDuration ignores the requested buffer size when Sound Recognition or Vocal Shortcuts is enabled. This results in 1) much larger buffer sizes and 2) mismatched buffer sizes between input and output buffers, which causes ‘glitchy’ audio and increased latency.
Additionally, when this issue occurs, AVAudioSession.setPreferredIOBufferDuration still returns 'true' and no error is produced.
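For reference, here is a minimal sketch of the kind of session configuration involved (illustrative only, not the example project's code): request the buffer duration, activate the session, and compare with the duration the session actually grants.

import AVFoundation

// Illustrative sketch (not the example project's code): request a small IO
// buffer and log what the session actually grants after activation.
func configurePreferredBufferDuration() {
    let session = AVAudioSession.sharedInstance()
    do {
        try session.setCategory(.playAndRecord, mode: .default)
        try session.setPreferredSampleRate(48_000)
        try session.setPreferredIOBufferDuration(0.005805)   // the requested value from the report
        try session.setActive(true)
    } catch {
        print("Audio session configuration failed: \(error)")   // never reached in the failing case
    }
    // With Sound Recognition or Vocal Shortcuts enabled, ioBufferDuration comes
    // back around 0.023 s instead of the requested value, with no error thrown.
    print("preferred: \(session.preferredIOBufferDuration), actual: \(session.ioBufferDuration)")
}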
Steps to Reproduce:
1. Enable Vocal Shortcuts on a device running iOS 18, with at least one shortcut enabled (e.g. Control Center).
2. Open or clone the example project (https://github.com/cwalo/SoundRecognitionBug).
3. Build and install the example project.
4. Attach a headset, launch the application, and observe console logs showing:
   - a requested buffer size of 0.005805 (256 samples @ 48k)
   - an actual buffer size of 0.023220 (1104 samples @ 48k; this is regularly the resulting buffer size in all of our tests)
5. Quit the app and detach the headset. Enable mutesOutput in AudioSystem.mm (to avoid feedback).
6. Launch the application and observe:
   - the same result as in step 4
   - a mismatched hardware buffer size of 1104 and recorded frame count of 1024
   - mismatched playbackCount and recordCount
7. Quit the app and disable Vocal Shortcuts.
8. Launch the app and observe the IOBufferDuration matching the requested duration and matching input/output buffer sizes (expected behavior).
Expected results:
- The requested IOBufferDuration is respected, or AVAudioSession returns false / produces an error
- Input and output buffer sizes match
Device(s): iPhone 11 Pro, iPad Pro
OS: iOS 18.0.1
Environment: Xcode 16.1
FB: FB15715421
Related to: https://forums.developer.apple.com/forums/thread/765477
Case-ID: 10075936
PLATFORM AND VERSION
iOS
Development environment: Xcode 15, macOS 14.5
Run-time configuration: iOS 18.0.1
DESCRIPTION OF PROBLEM
Our customer experienced a one-way audio issue when switching from the built-in microphone to AirPods Pro (model: A2084, version: 6F21) during a VoIP call. The customer's voice could not be heard by the other party, but the customer could hear the other party's voice.
STEPS TO REPRODUCE
Here are the details:
After the issue occurred, subsequent VoIP calls experienced the same issue when using AirPods Pro, but the issue did not occur when using the built-in microphone. The issue could only be resolved by restarting the system, and killing the app did not work.
Log and code analysis:
WebRTC listens for AVAudioSessionRouteChangeNotification. In the scenario above, when WebRTC received the route change notification, it printed the audio session configuration. At that point the input channel count was 0, which is abnormal:
[Webrtc] (RTCLogging.mm:33): (audio_device_ios.mm:535 HandleValidRouteChange): RTC_OBJC_TYPE(RTCAudioSession):
{
category: AVAudioSessionCategoryPlayAndRecord
categoryOptions: 128
mode: AVAudioSessionModeVoiceChat
isActive: 1
sampleRate: 48000.00
IOBufferDuration: 0.020000
outputNumberOfChannels: 2
inputNumberOfChannels: 0
outputLatency: 0.021500
inputLatency: 0.005000
outputVolume: 0.600000
isPreferredSpeaker: 0
isCallkit: 0
}
If the app then calls setPreferredInputNumberOfChannels, the call fails with error code -50:
setConfiguration:active:shouldSetActive:error:]): Failed to set preferred input number of channels(1): The operation couldn’t be completed. (OSStatus error -50.)
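For reference, a minimal sketch of the kind of check involved (illustrative only, not the app's or WebRTC's actual code): log the session's input configuration on every route change, which is where the zero input channel count shows up.

import AVFoundation

// Illustrative sketch: log the input configuration on every route change.
final class RouteChangeLogger {
    private var observer: NSObjectProtocol?

    init() {
        observer = NotificationCenter.default.addObserver(
            forName: AVAudioSession.routeChangeNotification,
            object: nil,
            queue: .main
        ) { _ in
            let session = AVAudioSession.sharedInstance()
            // In the failing case described above, inputNumberOfChannels is 0 here.
            print("inputs: \(session.currentRoute.inputs.map { $0.portType.rawValue })")
            print("inputNumberOfChannels: \(session.inputNumberOfChannels)")
            print("maximumInputNumberOfChannels: \(session.maximumInputNumberOfChannels)")
        }
    }

    deinit {
        if let observer = observer { NotificationCenter.default.removeObserver(observer) }
    }
}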
Our questions:
When AVAudioSession is active, the category and mode are as expected. Why is the input channel count 0?
Assuming that the AVAudioSession state is abnormal at this point, why does killing the app not resolve the issue, and why does the system need to be restarted to resolve the issue?
Is it possible that the category and mode the app reads from the AVAudioSession are wrong at this point? Do they need to be set again each time a CallKit call starts, even if the fetched category and mode already match the values we intend to set?
I am experiencing an issue while recording audio using AVAudioEngine with the installTap method. I convert the AVAudioPCMBuffer to Data and send it to a UDP server. However, when I receive the Data and play it back, there is continuous crackling noise during playback.
I am sending the audio data with this library (https://github.com/mindAndroid/swift-rtp), creating packets and sending them.
Please help me resolve this issue. I have attached the code reference that I am currently using.
Thank you.
ViewController.swift
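The attached ViewController.swift is not reproduced here, but for comparison, here is a minimal sketch of serializing a tap buffer, assuming a non-interleaved float32 format; copying more than frameLength frames, or using the wrong channel stride, is a common cause of crackling on the receiving side.

import AVFoundation

// Illustrative sketch (not the attached code): serialize a non-interleaved
// float32 AVAudioPCMBuffer channel by channel. Only frameLength frames are
// valid; copying frameCapacity instead is a typical source of glitches.
func pcmBufferToData(_ buffer: AVAudioPCMBuffer) -> Data? {
    guard let channels = buffer.floatChannelData else { return nil }
    let frameCount = Int(buffer.frameLength)
    let channelCount = Int(buffer.format.channelCount)
    var data = Data(capacity: frameCount * channelCount * MemoryLayout<Float>.size)
    for channel in 0..<channelCount {
        data.append(UnsafeBufferPointer(start: channels[channel], count: frameCount))
    }
    return data
}

The receiving side then needs to rebuild an AVAudioPCMBuffer with the same sample rate, channel count, and frame length before playback.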
I’ve encountered an issue when trying to transcribe audio during a SharePlay session in VisionOS. Specifically, the AVAudioSession appears to fail when sharing audio, preventing successful transcription. The problem seems related to AVAudioSession.sharedInstance() and using the .mixWithOthers option, which is supposed to enable multiple audio sources to coexist without interference.
Here’s the relevant code snippet that throws the error:
private static func prepareEngine() throws -> (AVAudioEngine, SFSpeechAudioBufferRecognitionRequest) {
    let audioEngine = AVAudioEngine()
    let request = SFSpeechAudioBufferRecognitionRequest()
    request.shouldReportPartialResults = true
    let audioSession = AVAudioSession.sharedInstance()
    try audioSession.setCategory(.playAndRecord, mode: .default, options: [.mixWithOthers, .allowBluetooth])
    try audioSession.setActive(true, options: .notifyOthersOnDeactivation)
    let inputNode = audioEngine.inputNode
    let recordingFormat = inputNode.outputFormat(forBus: 0)
    inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { (buffer: AVAudioPCMBuffer, when: AVAudioTime) in
        request.append(buffer)
    }
    audioEngine.prepare()
    try audioEngine.start()
    return (audioEngine, request)
}
The setup is designed to initialize an AVAudioEngine and a SFSpeechAudioBufferRecognitionRequest for real-time transcription, but fails within the SharePlay context. Notably, while .mixWithOthers is intended to handle concurrent audio sessions, it doesn’t appear to work as expected during SharePlay. The audioSession.setActive(true) line is where the setup typically fails, with no clear solution to proceed.
Has anyone else faced similar issues with AVAudioSession and SharePlay in VisionOS? Any insights on how to manage audio sharing or transcription during a SharePlay session would be greatly appreciated!
The specific error is:
The operation couldn't be completed. (com.apple.coreaudio.avfaudio error 561145187.)
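For reference, a small diagnostic sketch (illustrative, not part of the project above): catch the failure and map the numeric code onto AVAudioSession.ErrorCode, since 561145187 appears to be a four-character-code style OSStatus ('!rec').

import AVFAudio

// Illustrative sketch: surface the AVAudioSession error case behind the
// numeric code. 561145187 appears to decode to '!rec', i.e. .cannotStartRecording.
func diagnoseAudioSessionError(_ work: () throws -> Void) {
    do {
        try work()
    } catch {
        let nsError = error as NSError
        if let code = AVAudioSession.ErrorCode(rawValue: nsError.code) {
            print("AVAudioSession error code: \(code.rawValue) (\(code))")
        } else {
            print("Failed: \(error)")
        }
    }
}

This could be called, for example, as diagnoseAudioSessionError { _ = try Self.prepareEngine() } from within the type that declares prepareEngine().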
I am developing an app for vehicle owners with a built-in map navigation feature that supports voice navigation. The app works fine without voice navigation, but with voice navigation enabled it occasionally crashes, and the crashes happen while voice navigation is not actually speaking.
What makes it impossible to diagnose is that even though it has crashed about 10 times on the road, I don't see any crash reports in App Store Connect. I tried running it in a simulator and it didn't crash there, but on a real device, when I drive with the app navigating, it crashes abruptly after a few minutes, and not while the voice navigation is speaking. I also ran the app without AVFoundation and it did not crash then, so I am 100% sure it is something related to the AVFoundation usage.
If anyone can help find the problem in my following code, it would be really helpful.
import SwiftUI
import AVFoundation
struct DirectionHeaderView: View {
@Environment(\.colorScheme) var bgMode: ColorScheme
var directionSign: String?
var nextStepDistance: String
var instruction: String
@Binding var showDirectionsList: Bool
@Binding var height: CGFloat
@StateObject var locationDataManager: LocationDataManager
@State private var synthesizer = AVSpeechSynthesizer()
@State private var audioSession = AVAudioSession.sharedInstance()
@State private var lastInstruction: String = ""
@State private var utteranceDistance: String = ""
@State private var isStepExited = false
@State private var range = 20.0
var body: some View {
VStack {
HStack {
VStack {
if let directionSign = directionSign {
Image(systemName: directionSign)
}
if !instruction.contains("Re-calculating the route...") {
Text("\(nextStepDistance)")
.onChange(of: nextStepDistance) {
let distance = getDistanceInNumber(distance: nextStepDistance)
if distance <= range && !isStepExited {
startVoiceNavigation(with: instruction)
isStepExited = true
}
}
}
}
Spacer()
Text(instruction)
.onAppear {
isStepExited = false
utteranceDistance = nextStepDistance
range = nextStepRange(distance: utteranceDistance)
startVoiceNavigation(with: "In \(utteranceDistance), \(instruction)")
}
.onChange(of: instruction) {
isStepExited = false
utteranceDistance = nextStepDistance
range = nextStepRange(distance: utteranceDistance)
startVoiceNavigation(with: "In \(utteranceDistance), \(instruction)")
}
.padding(10)
Spacer()
}
}
.padding(.horizontal,10)
.background(bgMode == .dark ? Color.black.gradient : Color.white.gradient)
}
func startVoiceNavigation(with utterance: String) {
if instruction.isEmpty || utterance.isEmpty {
return
}
if instruction.contains("Re-calculating the route...") {
synthesizer.stopSpeaking(at: AVSpeechBoundary.immediate)
return
}
let thisUttarance = AVSpeechUtterance(string: utterance)
lastInstruction = instruction
if audioSession.category == .playback && audioSession.categoryOptions == .mixWithOthers {
DispatchQueue.main.async {
synthesizer.speak(thisUttarance)
}
}
else {
setupAudioSession()
DispatchQueue.main.async {
synthesizer.speak(thisUttarance)
}
}
}
func setupAudioSession() {
do {
try audioSession.setCategory(AVAudioSession.Category.playback, options: AVAudioSession.CategoryOptions.mixWithOthers)
try audioSession.setActive(true)
}
catch {
print("error:\(error.localizedDescription)")
}
}
func nextStepRange(distance: String) -> Double {
let thisStepDistance = getDistanceInNumber(distance: distance)
if thisStepDistance != 0 {
switch thisStepDistance {
case 0...200:
if locationDataManager.speed >= 90 {
return thisStepDistance/1.5
}
else {
return thisStepDistance/2
}
case 201...300:
if locationDataManager.speed >= 90 {
return 120
}
else {
return 100
}
case 301...500:
if locationDataManager.speed >= 90 {
return 150
}
else {
return 125
}
case 501...1000:
if locationDataManager.speed >= 90 {
return 250
}
else {
return 200
}
case 1001...10000:
if locationDataManager.speed >= 90 {
return 250
}
else {
return 200
}
default:
if locationDataManager.speed >= 90 {
return 250
}
else {
return 200
}
}
}
return 200
}
func getDistanceInNumber(distance: String) -> Double {
var thisStepDistance = 0.0
if distance.contains("km") {
let stepDistanceSplits = distance.split(separator: " ")
let stepDistanceText = String(stepDistanceSplits[0])
if let dist = Double(stepDistanceText) {
thisStepDistance = dist * 1000
}
}
else {
let stepDistanceSplits = distance.split(separator: " ")
let stepDistanceText = String(stepDistanceSplits[0])
if let dist = Double(stepDistanceText) {
thisStepDistance = dist
}
}
return thisStepDistance
}
}
#Preview {
DirectionHeaderView(directionSign: "", nextStepDistance: "", instruction: "", showDirectionsList: .constant(false), height: .constant(0), locationDataManager: LocationDataManager())
}
I'm building a streaming app on visionOS that can play sound from audio buffers each frame. The audio format has a sample rate of 48000 Hz, and each buffer has 480 samples.
I noticed when calling
audioPlayerNode.scheduleBuffer(audioBuffer)
The memory keeps increasing at a rate of about 0.1 MB per second, and at around 4 minutes the node seems to be full of buffers and has a hard reset, at which point the audio stops temporarily and there is a visible change in memory usage. See the attached screenshot.
However, if I call
audioPlayerNode.scheduleBuffer(audioBuffer, at: nil, options: .interrupts)
The memory growth is gone, but the audio is broken (it sounds as if it has been shortened).
Below is the full code snippet; does anyone know how to fix it?
@Observable
final class MyAudioPlayer {
private var audioEngine: AVAudioEngine = .init()
private var audioPlayerNode: AVAudioPlayerNode = .init()
private var audioFormat: AVAudioFormat?
init() {
audioEngine.attach(audioPlayerNode)
audioEngine.connect(audioPlayerNode, to: audioEngine.mainMixerNode, format: nil)
try? AVAudioSession.sharedInstance().setCategory(.playback, mode: .default)
try? AVAudioSession.sharedInstance().setActive(true)
audioEngine.prepare()
try? audioEngine.start()
audioPlayerNode.play()
}
// more code...
/// callback every frame
private func audioFrameCallback_Non_Interleaved(buf: UnsafeMutablePointer<Float>?, samples: Int) {
guard let buf,
let format = AVAudioFormat(commonFormat: .pcmFormatFloat32, sampleRate: 48000, channels: 2, interleaved: false),
let audioBuffer = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: AVAudioFrameCount(samples))
else { return }
audioBuffer.frameLength = AVAudioFrameCount(samples)
if let data = audioBuffer.floatChannelData {
for channel in 0 ..< Int(format.channelCount) {
for frame in 0 ..< Int(audioBuffer.frameLength) {
data[channel][frame] = buf[frame * Int(format.channelCount) + channel]
}
}
}
// memory leak here
audioPlayerNode.scheduleBuffer(audioBuffer)
}
}
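One pattern that may help here, sketched under assumptions rather than as a confirmed fix: bound the number of buffers queued on the player node with a completion callback, so the per-frame producer cannot outrun playback indefinitely. This assumes the frame callback runs on an ordinary thread that may block briefly; do not block a real-time audio thread this way.

import Foundation
import AVFAudio

// Illustrative sketch (not the code above): cap how many buffers can be queued
// on the AVAudioPlayerNode at once. The producer waits when the cap is reached
// instead of queueing without bound.
final class BoundedBufferScheduler {
    private let node: AVAudioPlayerNode
    private let inFlight: DispatchSemaphore

    init(node: AVAudioPlayerNode, maxQueuedBuffers: Int = 8) {
        self.node = node
        self.inFlight = DispatchSemaphore(value: maxQueuedBuffers)
    }

    func schedule(_ buffer: AVAudioPCMBuffer) {
        inFlight.wait()   // blocks briefly once maxQueuedBuffers are in flight
        let semaphore = inFlight
        node.scheduleBuffer(buffer, completionCallbackType: .dataConsumed) { _ in
            semaphore.signal()
        }
    }
}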
The new iPhone 16 supports spatial audio recordings in the camera app when recording videos. Is it possible to also record spatial audio without video, and is it possible for 3rd party developers to do so? If so, how do I need to configure AVAudioSession and/or AVAudioEngine to record spatial audio in my audio recording app on iPhone 16?
When we tested the audio quality of our VoIP app, we found that when audio was played on an iOS 18.0 device through AirPods Pro 2, we could hear noise similar to peak clipping and distortion, especially when the source material was loud and high-pitched. Here is the device information we tested:
Model: iPhone 16 Pro Max, iPhone 15 Pro
System version: iOS 18.0 (22A3354)
Bluetooth headset model: AirPods Pro 2
Bluetooth firmware version: 6F8
We tested multiple apps (including phone calls, FaceTime, Zoom, WeChat, Tencent Meeting), and they all had the above noise problem.
We also found two phenomena:
If we use the same iOS 18 device to connect HUAWEI FreeBuds Pro or FreeBuds 2, there is no such noise problem;
If we use an iOS 17 device to connect to the same AirPods Pro 2 for testing, there is no such noise problem;
Therefore, we suspect a compatibility problem between iOS 18.0 and AirPods firmware 6F8. The firmware version of our AirPods Pro 2 is 6F8, which was released on June 26, while iOS 18.0 was released on September 16, so perhaps they are not fully compatible. I hope that subsequent firmware updates can fix this problem.
Hi!
I have an AVAudioSequencer with some AVMusicTracks that are filled with AVParameterEvents.
If I toggle the isMuted property of a track, it mutes instantly when changed to true. However, after setting isMuted back to false, the events only trigger on the next pass of the loop rather than immediately. Is this intended behaviour, and is there some way to get the events to trigger immediately after setting isMuted back to false?
Hi everyone,
I’m experiencing an issue where audio interruptions (e.g., phone calls) are not being received while running sound classification in an app that uses AVAudioSession. Classification works fine, but interruptions aren’t handled, even though I’ve followed Apple’s guidelines on handling audio interruptions [1_Document].
The classification was initially based on [2_Classifier], where it worked perfectly. However, when I adopted classification in a more camera-focused app using [3_Cam], the interruption behavior stopped working. The classification setup is functioning with [3_Cam], but audio interruptions are not triggered.
The listener is invoked before starting sound analysis as suggested in [2_Classifier].
startListeningForAudioSessionInterruptions()
try startAnalyzing([(request, observer)])
FYI, one change I have made for classification is the following; this works fine in all cases.
// try audioSession.setCategory(.record, mode: .default)
try audioSession.setCategory(.playAndRecord, mode: .default, options: [.defaultToSpeaker, .allowBluetooth])
I suspect the issue might be related to the AVAudioSession configuration or how the app handles recording and playback together. Is there anything else I should check related to AVAudioSession? Are there additional APIs I could use to pre-check or better handle audio interruptions?
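For reference, the standard interruption-observer pattern looks like the sketch below (illustrative, not the poster's code); whether it fires in the camera-based setup is exactly the open question. If the camera pipeline uses AVCaptureSession, it may also be worth checking its automaticallyConfiguresApplicationAudioSession property, since a capture session can reconfigure the shared audio session on its own.

import AVFAudio

// Standard interruption observer (illustrative sketch).
func observeAudioSessionInterruptions() -> NSObjectProtocol {
    NotificationCenter.default.addObserver(
        forName: AVAudioSession.interruptionNotification,
        object: AVAudioSession.sharedInstance(),
        queue: .main
    ) { notification in
        guard let info = notification.userInfo,
              let rawType = info[AVAudioSessionInterruptionTypeKey] as? UInt,
              let type = AVAudioSession.InterruptionType(rawValue: rawType) else { return }
        switch type {
        case .began:
            print("Interruption began")   // pause or tear down the analysis stream here
        case .ended:
            let rawOptions = info[AVAudioSessionInterruptionOptionKey] as? UInt ?? 0
            let options = AVAudioSession.InterruptionOptions(rawValue: rawOptions)
            print("Interruption ended, shouldResume: \(options.contains(.shouldResume))")
        @unknown default:
            break
        }
    }
}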
Any suggestions or guidance would be greatly appreciated!
Platform: Swift 5, Xcode 16, iOS 18.
References:
Document
Classifier
Cam
Best Regards
AVAudioEngine and AVAudioSession
Welcome! I will start off with the term AVAudioEngineImpl::Initialize(NSError**).
Why? So that those who run into this issue have a chance of finding this post through search engines!
This is a short breakdown based on what I observed while trying to use these two components. It's not a guide that goes into all the details.
If you're trying to figure out how to fix a crash, you may find a common way to fix it in this post!
Is it possible to use AVAudioEngine and AVAudioSession together?
The answer is yes.
But you will face challenges, mostly with AVAudioEngine. Whatever you're trying to do, it will take a lot of testing. I don't know how it is with an IDE, but with just the .app and an iPhone it takes some testing, or a lot of testing.
Something that helped me fix a crash was this: https://developer.apple.com/documentation/avfaudio/audio_engine/audio_units/using_voice_processing
This example Project by Apple, uses both AVAudioEngine and AVAudioSession.
How can I fix AVAudioEngineImpl::Initialize(NSError**) ?
I think this depends. If you're lucky and have a crash log, you may find clues, but the stack trace sometimes doesn't really help either.
I will mention common cases that I encountered though.
inputNode
https://developer.apple.com/documentation/avfaudio/avaudioengine/1386063-inputnode
You apparently need an inputNode. You need to access it, or else I think there won't be one. And if there isn't one, AVAudioEngine.start will most likely crash.
The audio engine creates a singleton on demand when first accessing this variable.
Doing this has prevented this common issue for me.
.prepare deallocates and can cause a crash if you restart your AudioEngine
Another issue I faced was handling .prepare wrong. You don't always need .prepare, but if you use installTap or other things, I think you do.
Here is a common thing to note.
If you had previously initialized inputNode, it could be gone after using .prepare.
You have to ensure you're accessing AVAudioEngine.inputNode (or whatever node you need) again before calling .start().
The voice processing project does this by creating a managing controller for AVAudioEngine with a sort of "setup" function, which ensures that everything is ready before .prepare and .start get called.
AVAudioSession's setCategory
You have to experiment with it. The crashes can be very weird. Sometimes your app will only crash once, and then only after you install it again or start it up.
You are actually able to use .setActive and .setCategory with AVAudioEngine. Just do not try to call .setActive(false) before you've stopped the audio engine, as it will fail.
Sometimes I'd run into an issue with .setActive(true), so you really have to experiment with whether leaving that part out resolves the issue or not.
try session.setCategory(.multiRoute, mode: .default, options: [.defaultToSpeaker, .mixWithOthers])
Experiment with it. But these .multiRoute and .mixWithOthers have allowed me to use AVAudioEngine to make a test recording. And I can even switch the Data Sources and Polar Patterns without any issues.
Sometimes you can get away without setting .setActive at all. Not sure if AVAudioEngine does it automatically.
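Putting the points above together, here is a minimal setup-order sketch (illustrative; it assumes a play-and-record use case with an input tap, so adapt the category and options, for example to the .multiRoute line above, to your situation):

import AVFAudio

// Minimal setup-order sketch based on the notes above (illustrative).
final class EngineController {
    private let engine = AVAudioEngine()
    private let session = AVAudioSession.sharedInstance()

    func start() throws {
        // 1. Configure and activate the session before building the graph.
        try session.setCategory(.playAndRecord, mode: .default,
                                options: [.defaultToSpeaker, .mixWithOthers])
        try session.setActive(true)

        // 2. Access inputNode so it exists before prepare/start.
        let input = engine.inputNode
        let format = input.outputFormat(forBus: 0)
        input.installTap(onBus: 0, bufferSize: 1024, format: format) { _, _ in
            // handle buffers here
        }

        // 3. Prepare and start, guarding against double starts.
        engine.prepare()
        if !engine.isRunning {
            try engine.start()
        }
    }

    func stop() {
        engine.inputNode.removeTap(onBus: 0)
        engine.stop()
        // 4. Only deactivate the session after the engine has stopped.
        try? session.setActive(false)
    }
}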
Short Summary
If you use .prepare and then .stop, make sure to initialize things like .inputNode before calling .prepare and .start again. (THIS CAN BE DIFFERENT)
Only call .setActive(false) after you have used .stop. Otherwise I believe it has no chance to stop it.
AVAudioSession's setCategory is important. Ensure you use .multiRoute, or experiment with all the modes.
If you manage to solve your crash, you'll be able to indeed change the Data Sources and Polar Patterns and more!
Check isRunning before calling .start; this will save you from another crash. If you call .start while the engine is already running, I think try/catch won't save you here, so you have to ensure you're not starting it twice.
I hope that this short breakdown will help you resolve your crash. If you get deeper into AVAudioEngine and AVAudioSession, you'll probably face more crashes that I have yet to figure out how to solve. I have a lot of trouble putting my testing app on my iPhone, so I am sorry if this guide doesn't cover every detail.
A HUGE tip from me is to check the documentation. As an example, when I read the documentation for inputNode I learned why my app crashed: it's because I never accessed and initialized one.
The developer documentation can be a bit of a labyrinth, and I strongly recommend you read the docs for every property you access if you believe it causes issues. I also recommend finding example projects like the voice processing one, as there aren't any code examples in the documentation.
If I have a Bluetooth speaker connected and installTap called on the input node, the callback fires for 1-2 seconds and then stops. I don't see any route change or any notification handler called in between.
engine.inputNode.removeTap(onBus: 0)
engine.inputNode.installTap(
    onBus: 0,
    bufferSize: 4096,
    format: format
) { buffer, _ in
    // 3
    guard let channelData = buffer.floatChannelData else {
        return
    }
    // This callback fails after some time.
}
Not sure if this is expected, but I noticed that some other applications seem to work fine.
If I remove the Bluetooth device, my input works fine.
I also have no issues with output on the speaker.
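A small sketch that may help confirm what is happening when the tap stops (illustrative): log both route changes and the engine's configuration-change notification, since the engine typically stops, and its taps go silent, when its I/O configuration changes.

import AVFAudio

// Illustrative sketch: log route changes and engine configuration changes to
// see whether the engine silently stopped when the Bluetooth device took over.
func observeAudioChanges(for engine: AVAudioEngine) -> [NSObjectProtocol] {
    let routeObserver = NotificationCenter.default.addObserver(
        forName: AVAudioSession.routeChangeNotification,
        object: nil, queue: .main
    ) { note in
        let reason = note.userInfo?[AVAudioSessionRouteChangeReasonKey] as? UInt
        print("Route change, reason raw value: \(reason ?? 0)")
    }

    let configObserver = NotificationCenter.default.addObserver(
        forName: .AVAudioEngineConfigurationChange,
        object: engine, queue: .main
    ) { _ in
        // When the engine's I/O configuration changes it stops, and any taps
        // stop firing until the engine is restarted.
        print("Engine configuration change, isRunning: \(engine.isRunning)")
    }
    return [routeObserver, configObserver]
}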
Is anyone experiencing an issue with initial ‘AVAudioSession.sharedInstance().outputVolume’ returning 0 after updating to iOS 18.0?
I’m observing the outputVolume of AVAudioSession, and when I adjust the device volume, the changed value is reported correctly. However, when I call AVAudioSession.sharedInstance().outputVolume to read the current volume before any change has been observed, it returns 0, even though the device volume is not actually 0.
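For reference, a minimal sketch of the pattern (illustrative): activate the session, read the value once, then observe changes via KVO. One thing worth checking is whether the first read happens before the session has been activated.

import AVFAudio

// Illustrative sketch: read outputVolume once after activation, then observe changes.
final class VolumeWatcher {
    private let session = AVAudioSession.sharedInstance()
    private var observation: NSKeyValueObservation?

    func start() {
        try? session.setActive(true)
        // On iOS 18 this initial read reportedly returns 0 even when the device volume is not 0.
        print("initial outputVolume: \(session.outputVolume)")
        observation = session.observe(\.outputVolume, options: [.new]) { _, change in
            print("outputVolume changed to \(change.newValue ?? 0)")
        }
    }
}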
After upgrading to iOS 18 CarPlay with 2023 Lexus and iPhone 15 Pro Max shows multiple issues:
• speakers reduced to Mono sound (going back to normal after some minutes and then reducing again)
• no speaker sound at all
• touching / moving phone while driving resulting in “on and off” sound
No Reboot / Shutdown helps
No Cable connection works
@Apple: do you test your software professionally or is this outsourced to the community? Doesn’t look at all like a professional approach?
Please solve this dangerous (traffic!) and annoying topic ASAP!
Thanks - Torsten
My Code
private let synthesizer = AVSpeechSynthesizer()

override func didReceive(_ request: UNNotificationRequest, withContentHandler contentHandler: @escaping (UNNotificationContent) -> Void) {
    self.contentHandler = contentHandler
    bestAttemptContent = (request.content.mutableCopy() as? UNMutableNotificationContent)
    do {
        try AVAudioSession.sharedInstance().setCategory(AVAudioSession.Category.playback)
        let utterance = AVSpeechUtterance(string: "Hello Sim")
        utterance.voice = AVSpeechSynthesisVoice(language: "th_TH")
        utterance.rate = 1.0
        utterance.pitchMultiplier = 1.0
        utterance.volume = 1.0
        utterance.preUtteranceDelay = 0
        self.synthesizer.usesApplicationAudioSession = false
        self.synthesizer.speak(utterance)
    } catch {
        print(error.localizedDescription)
    }
    if let bestAttemptContent = self.bestAttemptContent {
        contentHandler(bestAttemptContent)
    }
}
Info.plist: UIBackgroundModes includes audio, fetch, processing, and remote-notification.
Notification payload:
{
  "aps": {
    "alert": {
      "title": "title",
      "subtitle": "subtitle",
      "body": "body"
    },
    "mutable-content": 1,
    "content-available": 1,
    "sound": "tester.caf"
  }
}
This code plays the audio in the simulator but not on a real device.
I have an app that plays audio and the behaviour of it has changed in watchOS 11. I can no longer figure out how to play the audio through the headphones.
To play audio I do the following:
let session = AVAudioSession.sharedInstance()
try session.setCategory(.playback,
                        mode: .default,
                        policy: .longFormAudio,
                        options: [])
let activated = try await session.activate()
if activated {
    // play audio
}
In previous versions, 'try await session.activate()' would bring up a route picker where the user could select their headphones. Now, on watchOS 11, it just plays the audio out of the speaker.
Maybe that's what some people want but if they do want it to play out of the headphones I can't see how I can give that option now? There's no AVRoutePickerView available on watchOS for selecting it.
I've tried setting the category to .multiRoute instead of .playback and that does bring up the picker but selecting the speaker results in an error code and selecting the headphones results in it saying it cannot find my headphones (which shouldn't be the case since Apple Music on watchOS finds them).
I tried overriding the output with try session.overrideOutputAudioPort(.speaker), but the compiler complains that .speaker isn't available on watchOS, which is strange because, if I understand correctly, it's now possible to play through the speaker on at least some Apple Watches.
So is this a bug or is there some way I've not found of playing audio through the headphones?
Hi,
I am trying to detect if an audio stream is Dolby Atmos. I have existing code that determines if a stream is Dolby Atmos based on the following:
Channel count is greater than equal to 8
Binaural is true
Immersive is true
Downmix is false
I am trying to determine whether these rules are correct, and to find documentation that specifies these rules that I can reference in the future.
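For what it's worth, here is the rule set above expressed as a small helper, interpreted as a conjunction of all four conditions; both the input type and that interpretation are assumptions, not documented Atmos-detection criteria.

// Illustrative helper: the four rules above, interpreted as all having to hold.
// The input type is a stand-in for wherever the stream's audio attributes come from
// (e.g. per-rendition attributes of an HLS variant).
struct AudioStreamAttributes {
    let channelCount: Int
    let isBinaural: Bool
    let isImmersive: Bool
    let isDownmix: Bool
}

func looksLikeDolbyAtmos(_ attributes: AudioStreamAttributes) -> Bool {
    attributes.channelCount >= 8 &&
        attributes.isBinaural &&
        attributes.isImmersive &&
        !attributes.isDownmix
}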
Any help you can provide is greatly appreciated.
Regards,
John
Is it possible to play audio while the app is in the background or has been terminated? If so, how can I do that in iOS using Swift? I am receiving an audio link in a Firebase notification and want to play that audio when the app is in the background or terminated.
When using AVSpeechSynthesizer, I get an error after a couple of seconds: "IPCAUClient.cpp:139 IPCAUClient: can't connect to server (-66748) <0x104309130>", and then it speaks the text.
The second time I call speak, there is no delay and error and it speaks immediately.
Where does this error and delay come from and how can I resolve it?
Initialization code:
self.audioSession = AVAudioSession.sharedInstance() // 2) handle the audio session first, before trying to read the text
do {
    try audioSession.setCategory(.playback, mode: .voicePrompt, options: .duckOthers)
    try audioSession.setActive(false)
} catch let error {
    Logger.model.debug("❓\(error.localizedDescription)")
}
speechSynthesizer = AVSpeechSynthesizer()
speechSynthesizer.usesApplicationAudioSession = true
Speak code:
let utterance = AVSpeechUtterance(string: text)
utterance.preUtteranceDelay = 0.1
utterance.rate = 0.5
utterance.pitchMultiplier = 0.75
utterance.prefersAssistiveTechnologySettings = false
self.speechSynthesizer.speak(utterance)
The last statement gives this error message!
I can reproduce a bug where CallKit doesn't activate the audio session after an outgoing call is put on hold because of an incoming call.
VoIP calling with CallKit
Steps to reproduce:
Download the Speakerbox example app from the link above and run it from Xcode
Start a new outgoing call
Call your phone from other phone
Hold and Accept the call
After a few secs finish the call from the other phone
The outgoing call will be still on hold
Click on the call and click Toggle Hold
The call won't become active again because the audio session is not activated again.
Logs:
Received provider(_:didDeactivate:)
Received provider(_:didDeactivate:)
Received provider(_:didDeactivate:)
Received provider(_:didDeactivate:)
Received provider(_:didDeactivate:)
Requested transaction successfully
Starting audio
Type: stdio
AURemoteIO.cpp:1162 failed: 561017449 (enable 3, outf< 1 ch, 44100 Hz, Float32> inf< 1 ch, 44100 Hz, Float32>)
Type: Error | Timestamp: 2024-08-15 12:20:29.949437+02:00 | Process: Speakerbox | Library: libEmbeddedSystemAUs.dylib | Subsystem: com.apple.coreaudio | Category: aurioc | TID: 0x19540d
AVAEInternal.h:109 [AVAudioEngineGraph.mm:1344:Initialize: (err = PerformCommand(*outputNode, kAUInitialize, NULL, 0)): error 561017449
Type: Error | Timestamp: 2024-08-15 12:20:29.949619+02:00 | Process: Speakerbox | Library: AVFAudio | Subsystem: com.apple.avfaudio | Category: avae | TID: 0x19540d
Couldn't start Apple Voice Processing IO: Error Domain=com.apple.coreaudio.avfaudio Code=561017449 "(null)" UserInfo={failed call=err = PerformCommand(*outputNode, kAUInitialize, NULL, 0)}
Type: Notice | Timestamp: 2024-08-15 12:20:29.949730+02:00 | Process: Speakerbox | Library: Speakerbox | TID: 0x19540d
Route change:
Type: Notice | Timestamp: 2024-08-15 12:20:30.167498+02:00 | Process: Speakerbox | Library: Speakerbox | TID: 0x19540d
ReasonUnknown
Type: Notice | Timestamp: 2024-08-15 12:20:30.167549+02:00 | Process: Speakerbox | Library: Speakerbox | TID: 0x19540d
Previous route:
Type: Notice | Timestamp: 2024-08-15 12:20:30.167568+02:00 | Process: Speakerbox | Library: Speakerbox | TID: 0x19540d
<AVAudioSessionRouteDescription: 0x302c00bc0,
inputs = (
"<AVAudioSessionPortDescription: 0x302c01330, type = MicrophoneBuiltIn; name = iPhone Mikrofon; UID = Built-In Microphone; selectedDataSource = (null)>"
);
outputs = (
"<AVAudioSessionPortDescription: 0x302c004d0, type = Receiver; name = Vev\U0151; UID = Built-In Receiver; selectedDataSource = (null)>"
)>
Type: Notice | Timestamp: 2024-08-15 12:20:30.167626+02:00 | Process: Speakerbox | Library: Speakerbox | TID: 0x19540d
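For context, the Speakerbox-style integration only starts its audio unit once the system reports that it has activated the session, via the provider delegate; here is a minimal sketch of that pairing (illustrative, with the audio start/stop bodies left as stubs):

import CallKit
import AVFAudio

// Illustrative sketch of the CXProviderDelegate audio pairing used in
// Speakerbox-style apps: audio is started only in didActivate and stopped in
// didDeactivate. In the scenario above, didActivate appears never to be
// delivered again after the held call is resumed, so audio never restarts.
final class MinimalProviderDelegate: NSObject, CXProviderDelegate {
    func providerDidReset(_ provider: CXProvider) {
        stopAudio()
    }

    func provider(_ provider: CXProvider, didActivate audioSession: AVAudioSession) {
        print("Received provider(_:didActivate:)")
        startAudio()
    }

    func provider(_ provider: CXProvider, didDeactivate audioSession: AVAudioSession) {
        print("Received provider(_:didDeactivate:)")
        stopAudio()
    }

    private func startAudio() { /* start the audio unit / AVAudioEngine here */ }
    private func stopAudio() { /* stop the audio unit / AVAudioEngine here */ }
}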