AVAudioSession


Use the AVAudioSession object to communicate to the system how you intend to use audio in your app.

Posts under AVAudioSession tag

95 Posts

Post · Replies · Boosts · Views · Activity

Are some backgrounded apps allowed to record phone calls but not others?
It’s been established that, generally speaking, background apps cannot record audio while the foreground app is already reading audio data from the microphone, but are there exceptions? For instance, is there an exception for certain Apple apps? If so, and there’s a special exception that most programmers don’t know about but some of Apple’s engineers do, and perhaps some hackers do as well, wouldn’t the mechanism that allows it eventually be exploited?
0
0
166
1w
Can a backgrounded app record phone calls?
I'd like to know: Let's say there's a backgrounded app which has microphone access, such as Signal or SoundHound or Shazam. It's established that these apps are allowed to record audio in the user's environment even after being backgrounded, seemingly for as long as they want and even upload that sound data. But can they ALSO continue recording even while another app that is in the foreground is using the microphone, such as the Phone app or Signal?
1
0
145
1w
How to implement continuous speech recognition in the background?
Hi, I'd like to develop an app which runs speech recognition even after going into the background. I know I can accomplish this using the audio background mode and then processing the audio, but I am not sure whether this workaround would be accepted into the App Store because of the processing limitations while in the background. How can I accomplish this while still being compliant with Apple's privacy policy and other restrictions? Thanks, Marek
0
0
131
2w
PTTFramework w/ AVAudioSession
Hi all, I have spent a lot of time reading the tech note and watching the WWDC video that introduce the PTTFramework on iOS. I currently have a custom setup where I am using AVAudioEngine to schedule and play buffers that are being streamed through a call. I am looking to use the PTTFramework to allow a user to trigger this push-to-talk behavior from the lock screen and the various places in the system UI that it provides. However, I am unsure what the correct behavior is regarding handling of the audio session.

Right now I am using .playback when there is no active voice transmission, so that devices such as AirPods can stay in A2DP mode where applicable, and then transitioning to the .playAndRecord category only when the mic input should become active. Following this change in my AVAudioEngine manager, I am then manually activating and deactivating the audio session when the engine is either playing/recording or idle. The documentation states that you should not attempt to activate or deactivate your audio session directly, but should allow the framework to handle it. Does that mean I need to either call the request-to-transmit delegate function or set an active participant on the channel manager first, and then wait for the didBecomeActive delegate method to trigger before I actually attempt to play or record any audio? (I am using the fullDuplex mode currently.) I noticed that that delegate method only triggers if the audio session wasn't already active before doing one of the above (setting an active participant, requesting transmit).

Lastly, the PTTFramework documentation also mentions that we get support for PTT accessories, and I notice that the didBeginTransmittingFrom parameter has a handsfreeButton case. Is there any documentation or resources for what is actually supported out of the box here? I am currently handling a lot of the push-to-talk through Bluetooth LE, and wanted to make sure there wasn't overlap with what the system provides. Thank you!
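For reference, a minimal sketch of the activation flow being asked about, assuming iOS 16's PushToTalk framework. The class name, logging, and engine start/stop calls are placeholders, the channel UUID is illustrative, and the remaining PTChannelManagerDelegate / PTChannelRestorationDelegate requirements are omitted for brevity: the idea is to start and stop audio only from the framework's audio session callbacks rather than calling setActive yourself.

```swift
import AVFAudio
import PushToTalk

// Hypothetical coordinator; a real app would adopt PTChannelManagerDelegate and
// PTChannelRestorationDelegate and implement their remaining requirements.
final class PTTAudioCoordinator {
    var channelManager: PTChannelManager?
    let engine = AVAudioEngine()
    let channelUUID = UUID()   // placeholder: use your joined channel's UUID

    // Ask the framework to begin transmitting; do NOT call setActive(true) here.
    func userPressedTalk() {
        channelManager?.requestBeginTransmitting(channelUUID: channelUUID)
    }

    // PTChannelManagerDelegate callback: the framework has activated the session.
    func channelManager(_ channelManager: PTChannelManager,
                        didActivate audioSession: AVAudioSession) {
        do {
            try engine.start()   // safe to start playing/recording now
        } catch {
            print("engine start failed: \(error)")
        }
    }

    // PTChannelManagerDelegate callback: the framework is deactivating the session.
    func channelManager(_ channelManager: PTChannelManager,
                        didDeactivate audioSession: AVAudioSession) {
        engine.stop()            // stop audio before the session goes away
    }
}
```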
0
0
93
2w
AudioServicesPlaySystemSound not playing through BluetoothA2DP device
Hello. We have an application that plays some sounds via the system sound APIs from the AudioToolbox framework:

```swift
AudioServicesCreateSystemSoundID(url as CFURL, &soundID)
AudioServicesPlaySystemSoundWithCompletion(soundID, nil)
```

We make sure that an active audio session is available before playing the system sound. But when the device is connected to a Bluetooth A2DP device, the sound is played through the device speaker and not through the Bluetooth A2DP device. Our audio session is configured with the following options: [.allowBluetooth, .defaultToSpeaker, .allowBluetoothA2DP]. Sounds played with AVAudioPlayer are routed to the Bluetooth A2DP device with similar code. Is this a bug in the AudioToolbox framework?
2
0
127
2w
Swift iOS CallKit audio resource contention
I noticed the following behavior with CallKit when receiving a VoIP push notification:

When the app is in the foreground and a CallKit incoming call banner appears, pressing the answer button directly causes the speaker indicator in the CallKit interface to turn on. However, the audio is not actually activated (the iPhone's orange microphone indicator does not light up). In the same foreground scenario, if I expand the CallKit banner before answering the call, the speaker indicator does not turn on, but the orange microphone indicator does light up and audio works as expected. When the app is in the background or not running, the incoming call banner works as expected: I can answer the call directly without expanding the banner, the speaker does not turn on automatically, and the orange microphone indicator lights up as it should.

Why is there a difference in behavior between answering directly from the banner versus expanding it first when the app is in the foreground? Is there a way to ensure consistent audio activation behavior across these scenarios? I tried reconfiguring the audio when answering a call, but an error occurred during setActive, preventing the configuration from succeeding:

```swift
    let audioSession = AVAudioSession.sharedInstance()
    do {
        try audioSession.setActive(false)
        try audioSession.setCategory(.playAndRecord, mode: .voiceChat, options: [.defaultToSpeaker])
        try audioSession.setActive(true, options: [])
    } catch {
        print("Failed to activate audio session: \(error)")
    }
    action.fulfill()
}
```

Error Domain=NSOSStatusErrorDomain Code=561017449 "Session activation failed" UserInfo={NSLocalizedDescription=Session activation failed}
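As a point of comparison, here is a minimal sketch of the pattern Apple recommends for CallKit apps (the delegate class and the commented start/stop calls are placeholders, not the poster's code): configure the session category inside the answer action, but only start audio I/O once the system activates the session and calls provider(_:didActivate:).

```swift
import AVFAudio
import CallKit

final class CallDelegate: NSObject, CXProviderDelegate {
    func providerDidReset(_ provider: CXProvider) {}

    func provider(_ provider: CXProvider, perform action: CXAnswerCallAction) {
        // Configure, but don't activate: CallKit activates the session for you.
        let session = AVAudioSession.sharedInstance()
        try? session.setCategory(.playAndRecord, mode: .voiceChat, options: [.allowBluetooth])
        action.fulfill()
    }

    // Called once the system has actually activated the audio session.
    func provider(_ provider: CXProvider, didActivate audioSession: AVAudioSession) {
        // startAudioIO()  // placeholder: start AVAudioEngine / WebRTC audio here
    }

    func provider(_ provider: CXProvider, didDeactivate audioSession: AVAudioSession) {
        // stopAudioIO()   // placeholder: stop audio here
    }
}
```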
1
0
118
2w
AVAudioSession's "availableInputs" not update in time
```swift
// Register for routeChangeNotification here.
func testAudioRoute() {
    // My app is a VoIP app, so I need "playAndRecord" and "allowBluetooth".
    try? AVAudioSession.sharedInstance().setCategory(.playAndRecord, options: [.duckOthers, .allowBluetooth, .allowBluetoothA2DP])
    NotificationCenter.default.addObserver(self,
                                           selector: #selector(currentRouteChanged(noti:)),
                                           name: AVAudioSession.routeChangeNotification,
                                           object: nil)
}

// Print "availableInputs" whenever a notification arrives.
@objc func currentRouteChanged(noti: Notification) {
    let availableInputs = AVAudioSession.sharedInstance().availableInputs?.compactMap { $0.portType } ?? []
    let currentRouteInputs = AVAudioSession.sharedInstance().currentRoute.inputs.compactMap { $0.portType }
    let currentRouteOutputs = AVAudioSession.sharedInstance().currentRoute.outputs.compactMap { $0.portType }
    print("willtest: \navailableInputs=\(availableInputs), \ncurrentRouteInputs=\(currentRouteInputs), \ncurrentRouteOutputs=\(currentRouteOutputs)")

    /*
     When the BT device (AirPods Pro 2) is CONNECTED, the handler prints the following, which is correct:
     ----------------------------------------------------------
     availableInputs=[__C.AVAudioSessionPort(_rawValue: MicrophoneBuiltIn), __C.AVAudioSessionPort(_rawValue: BluetoothHFP)],
     currentRouteInputs=[],
     currentRouteOutputs=[__C.AVAudioSessionPort(_rawValue: BluetoothA2DPOutput)]
     ----------------------------------------------------------
     When the BT device (AirPods Pro 2) is DISCONNECTED, it prints the following, which is wrong:
     ----------------------------------------------------------
     availableInputs=[__C.AVAudioSessionPort(_rawValue: MicrophoneBuiltIn), __C.AVAudioSessionPort(_rawValue: BluetoothHFP)],
     currentRouteInputs=[],
     currentRouteOutputs=[__C.AVAudioSessionPort(_rawValue: Speaker)]
     */
}
```

So my question is: why does "availableInputs" still contain the BluetoothHFP entry even though I have already disconnected the BT device (put the AirPods in the case)? BTW, if I tap the "Manual" button right after disconnecting the BT device, it also prints the "wrong" value for "availableInputs", and it only becomes normal after about 3–4 seconds.
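One diagnostic step that sometimes helps here (a sketch; the class and print statements are illustrative, not the poster's code) is to also log the route-change reason and the previous route from the notification's userInfo, since availableInputs can lag the route change by a few seconds while the Bluetooth stack settles:

```swift
import AVFAudio

final class RouteChangeLogger {
    init() {
        NotificationCenter.default.addObserver(self,
                                               selector: #selector(routeChangeDiagnostics(_:)),
                                               name: AVAudioSession.routeChangeNotification,
                                               object: nil)
    }

    @objc func routeChangeDiagnostics(_ note: Notification) {
        guard let info = note.userInfo,
              let rawReason = info[AVAudioSessionRouteChangeReasonKey] as? UInt,
              let reason = AVAudioSession.RouteChangeReason(rawValue: rawReason) else { return }

        // .oldDeviceUnavailable is the reason typically delivered when the AirPods go back in the case.
        let previous = info[AVAudioSessionRouteChangePreviousRouteKey] as? AVAudioSessionRouteDescription
        print("reason: \(reason.rawValue)")
        print("previous outputs: \(previous?.outputs.map { $0.portType.rawValue } ?? [])")
        print("available inputs now: \(AVAudioSession.sharedInstance().availableInputs?.map { $0.portType.rawValue } ?? [])")
    }
}
```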
4
0
171
2w
AVAudioEngineConfigurationChange Clearing AVPlayerNode
Hi all, I am working on an app where I have live prompts playing, in addition to a voice channel that sometimes becomes active. Right now I am using two different AVAudioSession configurations so that we only switch to a mic-enabled mode when we actually need input from the mic. These are defined below.

When just using the device hardware, everything works as expected: the modes change and the playback continues as needed. However, when using Bluetooth devices such as AirPods, where the switch from A2DP to HFP is needed, I get an AVAudioEngineConfigurationChange notification. In response I tear down the engine and create a new one with the same two player nodes. This does work fine and there are no crashes, except that all the audio I had scheduled on a player node has now been cleared. All the completion blocks marked with .dataPlayedBack return the second this event happens, which leaves me in a state where I have a valid engine setup again but no idea what actually played or was errantly marked as such. Is this the expected behavior when getting a configuration change notification?

Adding some information about my audio graph for context. These are the parts of the graph; I disconnect them when getting this event and do the same for the new engine:

```swift
private var inputEngine: AVAudioEngine
private var audioEngine: AVAudioEngine
private let voicePlayerNode: AVAudioPlayerNode
private let promptPlayerNode: AVAudioPlayerNode

audioEngine.attach(voicePlayerNode)
audioEngine.attach(promptPlayerNode)

audioEngine.connect(
    voicePlayerNode,
    to: audioEngine.mainMixerNode,
    format: voiceNodeFormat
)

audioEngine.connect(
    promptPlayerNode,
    to: audioEngine.mainMixerNode,
    format: nil
)
```

An example of how I am scheduling playback, and where that completion is firing even if it didn't actually play:

```swift
private func scheduleVoicePlayback(_ id: AudioPlaybackSample.Id, buffer: AVAudioPCMBuffer) async throws {
    guard !voicePlayerQueue.samples.contains(where: { $0 == id }) else {
        return
    }

    seprateQueue.append(buffer)

    if !isVoicePlaying {
        activateAudioSession()
    }

    voicePlayerQueue.samples.append(id)

    if !voicePlayerNode.isPlaying {
        voicePlayerNode.play()
    }

    if let convertedBuffer = buffer.convert(to: voiceNodeFormat) {
        await voicePlayerNode.scheduleBuffer(convertedBuffer, completionCallbackType: .dataPlayedBack)
    } else {
        throw AudioPlaybackError.failedToConvert
    }

    voiceSampleHasBeenPlayed(id)
}
```

And lastly my audio session configuration, if it's useful:

```swift
extension AVAudioSession {
    static func setDefaultCategory() {
        do {
            try sharedInstance().setCategory(
                .playback,
                options: [
                    .duckOthers,
                    .interruptSpokenAudioAndMixWithOthers
                ]
            )
        } catch {
            print("Failed to set default category? \(error.localizedDescription)")
        }
    }

    static func setVoiceChatCategory() {
        do {
            try sharedInstance().setCategory(
                .playAndRecord,
                options: [
                    .defaultToSpeaker,
                    .allowBluetooth,
                    .allowBluetoothA2DP,
                    .duckOthers,
                    .interruptSpokenAudioAndMixWithOthers
                ]
            )
        } catch {
            print("Failed to set category? \(error.localizedDescription)")
        }
    }
}
```
1
0
235
2w
Increased and Mismatched Audio Buffer Sizes on iOS 18 when Sound Recognition or Vocal Shortcuts Is Enabled
Description: As of iOS 18, AVAudioSession.setPreferredIOBufferDuration ignores the requested buffer size when Sound Recognition or Vocal Shortcuts is enabled. This results in (1) much larger buffer sizes and (2) mismatched buffer sizes between input and output buffers, which causes glitchy audio and increased latency. Additionally, when this issue occurs, setPreferredIOBufferDuration continues to return true and no error is produced.

Steps to reproduce:
1. Enable Vocal Shortcuts on a device running iOS 18, with at least one shortcut enabled (e.g. Control Center).
2. Open or clone the example project (https://github.com/cwalo/SoundRecognitionBug).
3. Build and install the example project.
4. Attach a headset and launch the application.
5. Observe console logs showing a requested buffer size of 0.005805 s (256 samples @ 48 kHz) and an actual buffer size of 0.023220 s (1104 samples @ 48 kHz; this is regularly the resulting buffer size in all of our tests).
6. Quit the app, detach the headset, and enable mutesOutput in AudioSystem.mm (to avoid feedback).
7. Launch the application.
8. Observe the same result as in step 5, a mismatched hardware buffer size of 1104 versus a recorded frame count of 1024, and mismatched playbackCount and recordCount.
9. Quit the app and disable Vocal Shortcuts.
10. Launch the app.
11. Observe an IOBufferDuration matching the requested duration and matched buffer sizes (expected behavior).

Expected results: the requested IOBufferDuration is respected, or AVAudioSession returns false or produces an error; input and output buffer sizes match.

Device(s): iPhone 11 Pro, iPad Pro
OS: iOS 18.0.1
Environment: Xcode 16.1
FB: FB15715421
Related to: https://forums.developer.apple.com/forums/thread/765477
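For anyone who wants to confirm the behavior quickly without the example project, here is a minimal sketch (the category and mode are just one plausible configuration, and the expected values simply mirror the report above) that requests a 256-sample buffer at 48 kHz and compares it with what the session actually grants:

```swift
import AVFAudio

let session = AVAudioSession.sharedInstance()
do {
    try session.setCategory(.playAndRecord, mode: .measurement)

    // 256 samples at 48 kHz ≈ 0.005805 s, as in the report above.
    let requested = 256.0 / 48_000.0
    try session.setPreferredIOBufferDuration(requested)
    try session.setActive(true)

    // On an affected device with Vocal Shortcuts enabled, the granted value
    // reportedly comes back around 0.023220 s (1104 samples) instead.
    print("requested \(requested) s, granted \(session.ioBufferDuration) s")
} catch {
    print("audio session configuration failed: \(error)")
}
```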
2
2
274
1w
AirPods Pro issue during a VoIP call
Case-ID: 10075936

PLATFORM AND VERSION
iOS
Development environment: Xcode 15, macOS 14.5
Run-time configuration: iOS 18.0.1

DESCRIPTION OF PROBLEM
Our customer experienced a one-way audio issue when switching from the built-in microphone to AirPods Pro (model: A2084, version: 6F21) during a VoIP call. The customer's voice could not be heard by the other party, but the customer could hear the other party's voice.

STEPS TO REPRODUCE
Here are the details: after the issue occurred, subsequent VoIP calls experienced the same issue when using AirPods Pro, but the issue did not occur when using the built-in microphone. The issue could only be resolved by restarting the system; killing the app did not help.

Log and code analysis: WebRTC listens for AVAudioSessionRouteChangeNotification. In the above scenario, when WebRTC receives the route change notification, it prints the audio session configuration. At this point the input channel count was 0, which is abnormal:

```
[Webrtc] (RTCLogging.mm:33): (audio_device_ios.mm:535 HandleValidRouteChange): RTC_OBJC_TYPE(RTCAudioSession): {
  category: AVAudioSessionCategoryPlayAndRecord
  categoryOptions: 128
  mode: AVAudioSessionModeVoiceChat
  isActive: 1
  sampleRate: 48000.00
  IOBufferDuration: 0.020000
  outputNumberOfChannels: 2
  inputNumberOfChannels: 0
  outputLatency: 0.021500
  inputLatency: 0.005000
  outputVolume: 0.600000
  isPreferredSpeaker: 0
  isCallkit: 0
}
```

If the app tries to call setPreferredInputNumberOfChannels at this point, it fails with error code -50:

```
setConfiguration:active:shouldSetActive:error:]): Failed to set preferred input number of channels(1): The operation couldn’t be completed. (OSStatus error -50.)
```

Our questions:
- When the AVAudioSession is active and the category and mode are as expected, why is the input channel count 0?
- Assuming the AVAudioSession state is abnormal at this point, why does killing the app not resolve the issue, and why does the system need to be restarted to resolve it?
- Is it possible that the category and mode of the AVAudioSession fetched by the app are currently wrong? Does the session need to be reset each time CallKit starts, even if the fetched category and mode equal the values about to be set?
2
0
242
3w
iOS audio crackling issue when sending audio data to a UDP server and playing it back
I am experiencing an issue while recording audio using AVAudioEngine with the installTap method. I convert the AVAudioPCMBuffer to Data and send it to a UDP server. However, when I receive the Data and play it back, there is continuous crackling noise during playback. I am sending the audio data with the library https://github.com/mindAndroid/swift-rtp, by creating a packet and sending it. Please help me resolve this issue. I have attached the code I am currently using for reference. Thank you. ViewController.swift
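As a starting point for debugging this kind of crackling, here is a minimal sketch (the buffer size and the commented send call are placeholders for the poster's networking code) of tapping the input node and serializing whole frames of float32 PCM. Mismatched sample rates or channel counts between sender and receiver, or copying frameCapacity instead of frameLength, are common causes of this symptom, so the receiver must reconstruct buffers with exactly the same format.

```swift
import AVFAudio
import Foundation

let engine = AVAudioEngine()
let input = engine.inputNode
// Use the hardware's own input format for the tap to avoid an implicit conversion.
let format = input.outputFormat(forBus: 0)

input.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, _ in
    guard let channels = buffer.floatChannelData else { return }
    let frameCount = Int(buffer.frameLength)        // frames actually filled, not frameCapacity
    let channelCount = Int(format.channelCount)

    // Interleave the samples into a single Data payload, one whole frame at a time.
    var samples = [Float](repeating: 0, count: frameCount * channelCount)
    for frame in 0..<frameCount {
        for channel in 0..<channelCount {
            samples[frame * channelCount + channel] = channels[channel][frame]
        }
    }
    let payload = samples.withUnsafeBufferPointer { Data(buffer: $0) }
    // sendOverUDP(payload)  // placeholder: hand the packet to the RTP/UDP layer here
    _ = payload
}

do {
    try engine.start()
} catch {
    print("engine failed to start: \(error)")
}
```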
0
0
220
Nov ’24
Cannot Transcribe Audio During SharePlay in visionOS
I’ve encountered an issue when trying to transcribe audio during a SharePlay session in visionOS. Specifically, the AVAudioSession appears to fail when sharing audio, preventing successful transcription. The problem seems related to AVAudioSession.sharedInstance() and using the .mixWithOthers option, which is supposed to enable multiple audio sources to coexist without interference. Here’s the relevant code snippet that throws the error:

```swift
private static func prepareEngine() throws -> (AVAudioEngine, SFSpeechAudioBufferRecognitionRequest) {
    let audioEngine = AVAudioEngine()
    let request = SFSpeechAudioBufferRecognitionRequest()
    request.shouldReportPartialResults = true

    let audioSession = AVAudioSession.sharedInstance()
    try audioSession.setCategory(.playAndRecord, mode: .default, options: [.mixWithOthers, .allowBluetooth])
    try audioSession.setActive(true, options: .notifyOthersOnDeactivation)

    let inputNode = audioEngine.inputNode
    let recordingFormat = inputNode.outputFormat(forBus: 0)
    inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { (buffer: AVAudioPCMBuffer, when: AVAudioTime) in
        request.append(buffer)
    }

    audioEngine.prepare()
    try audioEngine.start()
    return (audioEngine, request)
}
```

The setup is designed to initialize an AVAudioEngine and an SFSpeechAudioBufferRecognitionRequest for real-time transcription, but it fails within the SharePlay context. Notably, while .mixWithOthers is intended to handle concurrent audio sessions, it doesn’t appear to work as expected during SharePlay. The audioSession.setActive(true) line is where the setup typically fails, with no clear solution to proceed. Has anyone else faced similar issues with AVAudioSession and SharePlay in visionOS? Any insights on how to manage audio sharing or transcription during a SharePlay session would be greatly appreciated!

The specific error is: The operation couldn't be completed. (com.apple.coreaudio.avfaudio error 561145187.)
0
0
174
Nov ’24
My app occasionally crashes at run time when I use AVFoundation and related code
I am developing an app for vehicle owners with a built-in map navigation feature that supports voice navigation. The app works fine without voice navigation, but when I use voice navigation it occasionally crashes, and it crashes while voice navigation is not in progress. What makes it impossible to diagnose is that even though it crashed 10 times in the field, I don't see any crash reports in App Store Connect. I tried running it in the simulator and it didn't crash there, but on a real device, when I drive with the app navigating, it crashes abruptly after a few minutes, and not while the voice navigation is speaking. I also ran the app without AVFoundation and it did not crash then, so I am quite sure it is something to do with the AVFoundation framework. If anyone can help find the problem in the following code it would be really helpful.

```swift
import SwiftUI
import AVFoundation

struct DirectionHeaderView: View {
    @Environment(\.colorScheme) var bgMode: ColorScheme
    var directionSign: String?
    var nextStepDistance: String
    var instruction: String
    @Binding var showDirectionsList: Bool
    @Binding var height: CGFloat
    @StateObject var locationDataManager: LocationDataManager
    @State private var synthesizer = AVSpeechSynthesizer()
    @State private var audioSession = AVAudioSession.sharedInstance()
    @State private var lastInstruction: String = ""
    @State private var utteranceDistance: String = ""
    @State private var isStepExited = false
    @State private var range = 20.0

    var body: some View {
        VStack {
            HStack {
                VStack {
                    if let directionSign = directionSign {
                        Image(systemName: directionSign)
                    }
                    if !instruction.contains("Re-calculating the route...") {
                        Text("\(nextStepDistance)")
                            .onChange(of: nextStepDistance) {
                                let distance = getDistanceInNumber(distance: nextStepDistance)
                                if distance <= range && !isStepExited {
                                    startVoiceNavigation(with: instruction)
                                    isStepExited = true
                                }
                            }
                    }
                }
                Spacer()
                Text(instruction)
                    .onAppear {
                        isStepExited = false
                        utteranceDistance = nextStepDistance
                        range = nextStepRange(distance: utteranceDistance)
                        startVoiceNavigation(with: "In \(utteranceDistance), \(instruction)")
                    }
                    .onChange(of: instruction) {
                        isStepExited = false
                        utteranceDistance = nextStepDistance
                        range = nextStepRange(distance: utteranceDistance)
                        startVoiceNavigation(with: "In \(utteranceDistance), \(instruction)")
                    }
                    .padding(10)
                Spacer()
            }
        }
        .padding(.horizontal, 10)
        .background(bgMode == .dark ? Color.black.gradient : Color.white.gradient)
    }

    func startVoiceNavigation(with utterance: String) {
        if instruction.isEmpty || utterance.isEmpty {
            return
        }
        if instruction.contains("Re-calculating the route...") {
            synthesizer.stopSpeaking(at: AVSpeechBoundary.immediate)
            return
        }
        let thisUttarance = AVSpeechUtterance(string: utterance)
        lastInstruction = instruction
        if audioSession.category == .playback && audioSession.categoryOptions == .mixWithOthers {
            DispatchQueue.main.async {
                synthesizer.speak(thisUttarance)
            }
        } else {
            setupAudioSession()
            DispatchQueue.main.async {
                synthesizer.speak(thisUttarance)
            }
        }
    }

    func setupAudioSession() {
        do {
            try audioSession.setCategory(AVAudioSession.Category.playback, options: AVAudioSession.CategoryOptions.mixWithOthers)
            try audioSession.setActive(true)
        } catch {
            print("error:\(error.localizedDescription)")
        }
    }

    func nextStepRange(distance: String) -> Double {
        let thisStepDistance = getDistanceInNumber(distance: distance)
        if thisStepDistance != 0 {
            let fast = locationDataManager.speed >= 90
            switch thisStepDistance {
            case 0...200:      return fast ? thisStepDistance / 1.5 : thisStepDistance / 2
            case 201...300:    return fast ? 120 : 100
            case 301...500:    return fast ? 150 : 125
            case 501...1000:   return fast ? 250 : 200
            case 1001...10000: return fast ? 250 : 200
            default:           return fast ? 250 : 200
            }
        }
        return 200
    }

    func getDistanceInNumber(distance: String) -> Double {
        var thisStepDistance = 0.0
        if distance.contains("km") {
            let stepDistanceSplits = distance.split(separator: " ")
            let stepDistanceText = String(stepDistanceSplits[0])
            if let dist = Double(stepDistanceText) {
                thisStepDistance = dist * 1000
            }
        } else {
            let stepDistanceSplits = distance.split(separator: " ")
            let stepDistanceText = String(stepDistanceSplits[0])
            if let dist = Double(stepDistanceText) {
                thisStepDistance = dist
            }
        }
        return thisStepDistance
    }
}

#Preview {
    DirectionHeaderView(directionSign: "", nextStepDistance: "", instruction: "", showDirectionsList: .constant(false), height: .constant(0), locationDataManager: LocationDataManager())
}
```
2
0
237
Nov ’24
AVAudioPlayerNode scheduleBuffer leaks memory
I'm building a streaming app on visionOS that plays sound from audio buffers each frame. The audio has a sample rate of 48000 Hz, and each buffer has 480 samples. I noticed that when calling audioPlayerNode.scheduleBuffer(audioBuffer), memory keeps increasing at a rate of about 0.1 MB per second, and at around 4 minutes the node seems to be full of buffers and has a hard reset, at which point the audio stops temporarily with a corresponding memory change (see attached screenshot). However, if I call audioPlayerNode.scheduleBuffer(audioBuffer, at: nil, options: .interrupts), the memory leak issue is gone, but the audio is broken (it sounds shortened). Below is the full code snippet; does anyone know how to fix it?

```swift
import AVFAudio
import Observation

@Observable
final class MyAudioPlayer {
    private var audioEngine: AVAudioEngine = .init()
    private var audioPlayerNode: AVAudioPlayerNode = .init()
    private var audioFormat: AVAudioFormat?

    init() {
        audioEngine.attach(audioPlayerNode)
        audioEngine.connect(audioPlayerNode, to: audioEngine.mainMixerNode, format: nil)
        try? AVAudioSession.sharedInstance().setCategory(.playback, mode: .default)
        try? AVAudioSession.sharedInstance().setActive(true)
        audioEngine.prepare()
        try? audioEngine.start()
        audioPlayerNode.play()
    }

    // more code...

    /// callback every frame
    private func audioFrameCallback_Non_Interleaved(buf: UnsafeMutablePointer<Float>?, samples: Int) {
        guard let buf,
              let format = AVAudioFormat(commonFormat: .pcmFormatFloat32,
                                         sampleRate: 48000,
                                         channels: 2,
                                         interleaved: false),
              let audioBuffer = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: AVAudioFrameCount(samples))
        else { return }

        audioBuffer.frameLength = AVAudioFrameCount(samples)
        if let data = audioBuffer.floatChannelData {
            for channel in 0 ..< Int(format.channelCount) {
                for frame in 0 ..< Int(audioBuffer.frameLength) {
                    data[channel][frame] = buf[frame * Int(format.channelCount) + channel]
                }
            }
        }

        // memory leak here
        audioPlayerNode.scheduleBuffer(audioBuffer)
    }
}
```
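One common way to keep memory bounded in this situation is to cap the number of buffers in flight and only schedule a new one when an earlier completion handler has fired. Below is a minimal sketch of that idea, not the poster's code: the class name and the queue depth of 8 are illustrative, and it assumes the producer callback can tolerate briefly blocking.

```swift
import AVFAudio

final class BoundedScheduler {
    private let node: AVAudioPlayerNode
    private let inFlight = DispatchSemaphore(value: 8)   // allow at most 8 queued buffers

    init(node: AVAudioPlayerNode) { self.node = node }

    /// Blocks the calling (producer) thread while 8 buffers are already queued,
    /// so the player node never accumulates an unbounded backlog.
    func schedule(_ buffer: AVAudioPCMBuffer) {
        inFlight.wait()
        node.scheduleBuffer(buffer, at: nil, options: [], completionCallbackType: .dataConsumed) { [inFlight] _ in
            inFlight.signal()   // a slot frees up once the engine has consumed the buffer
        }
    }
}
```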
1
0
271
Nov ’24
Noise occurs when playing on iOS 18.0 device + AirPods Pro 2
When we tested the audio quality of our VoIP app, we found that when an iOS 18.0 device played audio through AirPods Pro 2, we could hear noise similar to peak clipping and distortion, especially when the source audio was loud and high-pitched. Here is the device information we tested: Model: iPhone 16 Pro Max, iPhone 15 Pro. System version: iOS 18.0 (22A3354). Bluetooth headset model: AirPods Pro 2. Bluetooth firmware version: 6F8. We tested multiple apps (including phone calls, FaceTime, Zoom, WeChat, Tencent Meeting), and they all had the above noise problem. We also found two things: if we use the same iOS 18 device to connect HUAWEI FreeBuds Pro or FreeBuds 2, there is no such noise problem; and if we use an iOS 17 device to connect to the same AirPods Pro 2, there is no such noise problem either. Therefore, we suspect that it is caused by a compatibility problem between iOS 18.0 and AirPods firmware 6F8. The firmware version of our AirPods Pro 2 is 6F8, which was released on June 26, and iOS 18.0 was released on September 16; maybe they are not fully compatible. We hope that subsequent firmware updates can fix this problem.
1
0
333
Oct ’24
Toggling AVMusicTrack isMuted
Hi! I have an AVAudioSequencer with some AVMusicTracks that are filled with AVParameterEvents. If I toggle the isMuted property of a track, it mutes instantly when changed to true. However, after setting isMuted back to false, the events only trigger on the next pass of the loop, not immediately. Is this intended behaviour, and is there some way to get the events to trigger immediately after setting isMuted back to false?
1
0
292
Oct ’24
Audio Interruption Not Being Intercepted in AVAudioSession with Classification
Hi everyone, I’m experiencing an issue where audio interruptions (e.g., phone calls) are not being intercepted while running sound classification in an app that uses AVAudioSession. Classification works fine, but interruptions aren’t handled, even though I’ve followed Apple’s guidelines on handling audio interruptions [1_Document]. The classification was initially based on [2_Classifier], where it worked perfectly. However, when I adopted classification in a more camera-focused app using [3_Cam], the interruption behavior stopped working. The classification setup is functioning with [3_Cam], but audio interruptions are not triggered. The listener is invoked before starting sound analysis, as suggested in [2_Classifier]:

```swift
startListeningForAudioSessionInterruptions()
try startAnalyzing([(request, observer)])
```

FYI, one change I have made for classification is the following, which works fine in all cases:

```swift
// try audioSession.setCategory(.record, mode: .default)
try audioSession.setCategory(.playAndRecord, mode: .default, options: [.defaultToSpeaker, .allowBluetooth])
```

I suspect the issue might be related to the AVAudioSession configuration or how the app handles recording and playback together. Is there anything else I should check related to AVAudioSession? Are there additional APIs I could use to pre-check or better handle audio interruptions? Any suggestions or guidance would be greatly appreciated!

Platform: Swift 5, Xcode 16, iOS 18.
References: Document, Classifier, Cam
Best Regards
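For completeness, here is a minimal sketch of the standard interruption-handling pattern; the commented pause/resume calls are placeholders for the app's own analysis code. If this observer never fires, the session configuration or the lifetime of the observing object is usually the first thing to check.

```swift
import AVFAudio

let interruptionObserver = NotificationCenter.default.addObserver(
    forName: AVAudioSession.interruptionNotification,
    object: AVAudioSession.sharedInstance(),
    queue: .main
) { note in
    guard let info = note.userInfo,
          let rawType = info[AVAudioSessionInterruptionTypeKey] as? UInt,
          let type = AVAudioSession.InterruptionType(rawValue: rawType) else { return }

    switch type {
    case .began:
        print("interruption began")
        // pauseSoundAnalysis()      // placeholder: stop the tap / analyzer here
    case .ended:
        let rawOptions = info[AVAudioSessionInterruptionOptionKey] as? UInt ?? 0
        if AVAudioSession.InterruptionOptions(rawValue: rawOptions).contains(.shouldResume) {
            print("interruption ended; safe to resume")
            // resumeSoundAnalysis() // placeholder: reactivate the session and restart analysis
        }
    @unknown default:
        break
    }
}
// Keep `interruptionObserver` alive for as long as interruptions should be handled.
```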
1
0
294
Oct ’24
A short starter guide for AVAudioEngine and AVAudioSession on iOS
AVAudioEngine and AVAudioSession

Welcome! I will start off with the term AVAudioEngineImpl::Initialize(NSError**). Why? I want to give those who run into this issue the possibility of finding this post through search engines! This is a short breakdown based on what I observed while trying to use these two components. It's not a guide that goes into all the details. If you're trying to figure out how to fix a crash, you may find a common way to fix it in this post!

Is it possible to use AVAudioEngine and AVAudioSession together? The answer is yes, but you will face challenges, mostly with AVAudioEngine. Whatever you're trying to do, it will take a lot of testing. I don't know how it will be with an IDE, but with just a .app and an iPhone it will take some testing, or a lot of testing. Something that helped me fix a crash was this example project by Apple, which uses both AVAudioEngine and AVAudioSession: https://developer.apple.com/documentation/avfaudio/audio_engine/audio_units/using_voice_processing

How can I fix AVAudioEngineImpl::Initialize(NSError**)? I think this depends. If you're lucky and have a crash log, you may find clues, but the stack trace sometimes doesn't really help either. I will mention common cases that I encountered.

inputNode (https://developer.apple.com/documentation/avfaudio/avaudioengine/1386063-inputnode): You need an inputNode, apparently. You need to access it, or else I think there won't be one, and if there isn't one, AVAudioEngine.start will most likely crash. The audio engine creates a singleton on demand when first accessing this variable. Doing this has prevented this common issue for me.

.prepare deallocates and can cause a crash if you restart your audio engine: Another issue I faced was handling .prepare wrong. You don't strictly need .prepare, but if you use installTap or other things, I think you do. Here is a common thing to note: if you had previously initialized inputNode, it could be gone after using .prepare. You have to ensure you're accessing AVAudioEngine.inputNode again before calling .start() or whatever node you need. The voice processing project does this by creating a managing controller for AVAudioEngine with a sort of "setup" function, which ensures that everything is ready before .prepare and .start get called.

AVAudioSession's setCategory: You have to experiment with it. The crashes can be very weird. Sometimes your app will only crash once, and then only after you install it again or start it up. You are actually able to use .setActive and .setCategory with AVAudioEngine. Just do not call .setActive(false) before you've stopped the audio engine, as it will fail. Sometimes I'd run into an issue with .setActive(true), so you really have to experiment with whether leaving that part out resolves the issue or not.

```swift
try session.setCategory(.multiRoute, mode: .default, options: [.defaultToSpeaker, .mixWithOthers])
```

Experiment with it, but .multiRoute and .mixWithOthers have allowed me to use AVAudioEngine to make a test recording, and I can even switch the data sources and polar patterns without any issues. Sometimes you can get away without calling .setActive at all; I'm not sure if AVAudioEngine does it automatically.

Short summary:
- If you use .prepare and then .stop, make sure to initialize things like .inputNode before calling .prepare and .start again. (This can vary.)
- Only call .setActive(false) after you have used .stop. Otherwise I believe it has no chance to stop the engine.
- AVAudioSession's setCategory is important. Ensure you use .multiRoute or experiment with all the modes. If you manage to solve your crash, you'll indeed be able to change the data sources, polar patterns, and more!
- Check isRunning before calling .start; this will save you from another crash. If you call .start while the engine is already running, I think try/catch won't save you here; you have to ensure you're not starting it twice.

I hope that this short breakdown will help you resolve your crash. If you get deeper into AVAudioEngine and AVAudioSession, you'll probably face more crashes; I have yet to figure out how to solve them. I have a lot of trouble putting my testing app on my iPhone, so I am sorry if this guide didn't cover every detail. A HUGE tip from me is to check the documentation. For example, when I read the documentation for inputNode I learned why my app crashed: it's because I never accessed and initialized one. The developer documentation can be a bit of a labyrinth, and I strongly recommend reading up on every property you try to access if you believe it causes issues. I also recommend finding example projects like the voice processing one, as there aren't any code examples in the documentation.
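To make the ordering described above concrete, here is a minimal sketch of the sequence the post recommends (the category, options, and tap parameters are just one workable combination, not the only one): touch inputNode first, configure and activate the session, prepare and start the engine, and tear things down in the reverse order.

```swift
import AVFAudio

let session = AVAudioSession.sharedInstance()
let engine = AVAudioEngine()

func startRecording() throws {
    // 1. Access inputNode before prepare/start so the engine actually creates it.
    let input = engine.inputNode
    let format = input.outputFormat(forBus: 0)

    // 2. Configure and activate the session before starting the engine.
    try session.setCategory(.playAndRecord, mode: .default, options: [.defaultToSpeaker, .mixWithOthers])
    try session.setActive(true)

    input.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, _ in
        // process buffers here
        _ = buffer
    }

    engine.prepare()
    if !engine.isRunning {        // guard against starting twice
        try engine.start()
    }
}

func stopRecording() {
    engine.inputNode.removeTap(onBus: 0)
    engine.stop()                 // 3. Stop the engine first...
    try? session.setActive(false) // ...and only then deactivate the session.
}
```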
0
0
388
Sep ’24