AVAudioSession

Use the AVAudioSession object to communicate to the system how you intend to use audio in your app.
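
A minimal configuration, for orientation (a sketch of the common pattern: declare intent via category and mode, then activate):

    import AVFoundation

    let session = AVAudioSession.sharedInstance()
    do {
        try session.setCategory(.playback, mode: .default)
        try session.setActive(true)
    } catch {
        print("Audio session configuration failed: \(error)")
    }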

AVAudioSession Documentation

Posts under AVAudioSession tag

92 Posts
Post not yet marked as solved
1 Reply
2.5k Views
When the screen is unlocked, AVSpeechSynthesizer.speak works fine, but when the screen is locked it does not work.

    do {
        try AVAudioSession.sharedInstance().setCategory(AVAudioSession.Category.playAndRecord, mode: .default, options: .defaultToSpeaker)
        try AVAudioSession.sharedInstance().setActive(true, options: .notifyOthersOnDeactivation)
    } catch {
        print("audioSession properties weren't set because of an error.")
    }
    let utterance = AVSpeechUtterance(string: voiceOutdata)
    utterance.voice = AVSpeechSynthesisVoice(language: "en-US")
    let synth = AVSpeechSynthesizer()
    synth.speak(utterance)
    defer { disableAVSession() }

Error log in the locked state:

    [AXTTSCommon] Failure starting audio queue alp![AXTTSCommon] _BeginSpeaking: couldn't begin playback
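
Two things stand out in the snippet above (assumptions, since the surrounding code isn't shown): the synthesizer is a local variable, so it can be deallocated before speech ever starts, and the defer deactivates the session as soon as speak(_:) returns. A sketch that retains the synthesizer and deactivates the session only when speech finishes; it also swaps in .playback with the .spokenAudio mode, a common choice for TTS:

    import AVFoundation

    final class Speaker: NSObject, AVSpeechSynthesizerDelegate {
        // Keep a strong reference: a locally scoped synthesizer can be
        // deallocated before it begins speaking.
        private let synth = AVSpeechSynthesizer()

        override init() {
            super.init()
            synth.delegate = self
        }

        func speak(_ text: String) {
            do {
                try AVAudioSession.sharedInstance().setCategory(.playback, mode: .spokenAudio)
                try AVAudioSession.sharedInstance().setActive(true)
            } catch {
                print("Audio session setup failed: \(error)")
            }
            let utterance = AVSpeechUtterance(string: text)
            utterance.voice = AVSpeechSynthesisVoice(language: "en-US")
            synth.speak(utterance)
        }

        // Deactivate only after speech finishes, not via defer, which
        // runs before any audio has played.
        func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer, didFinish utterance: AVSpeechUtterance) {
            try? AVAudioSession.sharedInstance().setActive(false, options: .notifyOthersOnDeactivation)
        }
    }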
Posted by
Post not yet marked as solved
4 Replies
3k Views
I want to record both IMU data and audio data from AirPods Pro. I have tried many times and failed. I can successfully record the IMU data and the iPhone's microphone data simultaneously, but when I choose the AirPods Pro microphone in the setCategory() function, the IMU data collection stops. If I change

    recordingSession.setCategory(.playAndRecord, mode: .default, options: .allowBluetooth)

to

    recordingSession.setCategory(.playAndRecord, mode: .default)

everything is okay, except the audio is recorded from the phone. If I add options: .allowBluetooth, the IMU updates stop. Could you give me some suggestions for this? Below are some parts of my code.

    let My_IMU = CMHeadphoneMotionManager()
    let My_writer = CSVWriter()
    var write_state: Bool = false

    func test() {
        recordingSession = AVAudioSession.sharedInstance()
        do {
            try recordingSession.setCategory(.playAndRecord, mode: .default, options: .allowBluetooth)
            try recordingSession.setActive(true)
            recordingSession.requestRecordPermission() { [unowned self] allowed in
                DispatchQueue.main.async {
                    if allowed == false { print("failed to record!") }
                }
            }
        } catch {
            print("failed to record!")
        }

        let audioFilename = getDocumentsDirectory().appendingPathComponent("test_motion_Audio.m4a")
        let settings = [
            AVFormatIDKey: Int(kAudioFormatMPEG4AAC),
            AVSampleRateKey: 8000,
            AVNumberOfChannelsKey: 1,
            AVEncoderAudioQualityKey: AVAudioQuality.high.rawValue
        ]
        do {
            audioRecorder = try AVAudioRecorder(url: audioFilename, settings: settings)
            audioRecorder.delegate = self
            audioRecorder.record()
        } catch {
            print("Fail to record!")
            finishRecording()
        }

        write_state.toggle()
        let dir = FileManager.default.urls(
            for: .documentDirectory,
            in: .userDomainMask
        ).first!

        let filename = "test_motion_Audio.csv"
        let fileUrl = dir.appendingPathComponent(filename)
        My_writer.open(fileUrl)

        My_IMU.startDeviceMotionUpdates(to: OperationQueue.current!, withHandler: { [weak self] motion, error in
            guard let motion = motion, error == nil else { return }
            self?.My_writer.write(motion)
        })
    }
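
One diagnostic worth running (a sketch, not a fix: it only shows whether the headphone-motion link drops when the Bluetooth HFP microphone route activates) is to implement CMHeadphoneMotionManager's connection callbacks:

    import CoreMotion

    // Log headphone-motion connect/disconnect events to see whether
    // enabling the Bluetooth (HFP) mic route tears down the motion link.
    final class MotionObserver: NSObject, CMHeadphoneMotionManagerDelegate {
        let manager = CMHeadphoneMotionManager()

        override init() {
            super.init()
            manager.delegate = self
        }

        func headphoneMotionManagerDidConnect(_ manager: CMHeadphoneMotionManager) {
            print("Headphone motion connected")
        }

        func headphoneMotionManagerDidDisconnect(_ manager: CMHeadphoneMotionManager) {
            print("Headphone motion disconnected")
        }
    }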
Posted by
Post not yet marked as solved
3 Replies
1.7k Views
I have a music app that can play in the background, using AVQueuePlayer. I'm in the process of adding support for CloudKit sync of the CoreData store, switching from NSPersistentContainer to NSPersistentCloudKitContainer. The initial sync can be fairly large (10,000+ records), depending on how much the user has used the app. The issue I'm seeing is this:

✅ When the app is in the foreground, CloudKit sync uses a lot of CPU, nearly 100% for a long time (this is expected during the initial sync).
✅ If I AM NOT playing music, when I put the app in the background, CloudKit sync eventually stops syncing until I bring the app to the foreground again (this is also expected).
❌ If I AM playing music, when I put the app in the background, CloudKit never stops syncing, which leads the system to terminate the app after a certain amount of time due to high CPU usage.

Is there any way to pause the CloudKit sync when the app is in the background, or is there any way to mitigate this?
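
There is no public API to pause NSPersistentCloudKitContainer sync on demand. One commonly suggested, admittedly heavyweight, workaround is to decide at store-load time whether sync is enabled by clearing cloudKitContainerOptions; a sketch, with a hypothetical makeContainer helper, model name, and container identifier:

    import CoreData

    // Hypothetical helper: build the container with sync on or off.
    // With cloudKitContainerOptions set to nil, the same store loads
    // as a purely local one and no sync work happens.
    func makeContainer(syncEnabled: Bool) -> NSPersistentCloudKitContainer {
        let container = NSPersistentCloudKitContainer(name: "Model")
        guard let description = container.persistentStoreDescriptions.first else {
            fatalError("Missing persistent store description")
        }
        description.cloudKitContainerOptions = syncEnabled
            ? NSPersistentCloudKitContainerOptions(containerIdentifier: "iCloud.com.example.app")
            : nil
        container.loadPersistentStores { _, error in
            if let error = error { fatalError("Store load failed: \(error)") }
        }
        return container
    }

Flipping this flag means tearing down and reloading the Core Data stack, so it is better suited to deferring the initial import than to toggling sync on every backgrounding.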
Posted by
Post not yet marked as solved
1 Reply
969 Views
I know that if you want background audio from AVPlayer you need to detach your AVPlayer from either your AVPlayerViewController or your AVPlayerLayer, in addition to having your AVAudioSession configured correctly. I have that all squared away, and background audio is fine until we introduce AVPictureInPictureController or use the PiP behavior baked into AVPlayerViewController. If you want PiP to behave as expected when you put your app into the background by switching to another app or going to the Home Screen, you can't perform the detachment operation, otherwise the PiP display fails. On an iPad, if PiP is active and you lock your device, you continue to get background audio playback. However, on an iPhone, if PiP is active and you lock the device, the audio pauses. If PiP is inactive and you lock the device, the audio pauses and you have to manually tap play on the Lock Screen controls; this is the same on iPad and iPhone. My questions are:

1. Is there a way to keep background-audio playback going when PiP is inactive and the device is locked? (iPhone and iPad)
2. Is there a way to keep background-audio playback going when PiP is active and the device is locked? (iPhone)
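
For reference, the baseline session setup for background audio (a minimal sketch; it assumes the audio background mode is enabled in the target's capabilities and does not by itself answer the PiP questions):

    import AVFoundation

    // Background audio requires the "audio" UIBackgroundModes entry
    // in Info.plist in addition to this session configuration.
    do {
        try AVAudioSession.sharedInstance().setCategory(.playback, mode: .moviePlayback)
        try AVAudioSession.sharedInstance().setActive(true)
    } catch {
        print("Audio session configuration failed: \(error)")
    }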
Posted by
Post not yet marked as solved
1 Reply
1.8k Views
I'm developing a voice communication app for the iPad, with both playback and record, and I'm using an AudioUnit of type kAudioUnitSubType_VoiceProcessingIO to get echo cancellation. When playing audio before initializing the recording audio unit, the volume is high. But if I play audio after initializing the audio unit, or when switching to RemoteIO and then back to VPIO, the playback volume is low. It seems like a bug in iOS; is there any solution or workaround for this? Searching the net, I only found this post, without any solution: https://developer.apple.com/forums/thread/671836
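
For context, this is how such a unit is typically created (a minimal sketch; it does not fix the volume drop, which is often attributed to the voice-processing unit's automatic gain management, but it shows the configuration under discussion):

    import AudioToolbox

    // Find and instantiate a voice-processing I/O unit.
    var desc = AudioComponentDescription(
        componentType: kAudioUnitType_Output,
        componentSubType: kAudioUnitSubType_VoiceProcessingIO,
        componentManufacturer: kAudioUnitManufacturer_Apple,
        componentFlags: 0,
        componentFlagsMask: 0
    )
    guard let component = AudioComponentFindNext(nil, &desc) else {
        fatalError("Voice-processing I/O unit not found")
    }
    var vpioUnit: AudioUnit?
    let status = AudioComponentInstanceNew(component, &vpioUnit)
    assert(status == noErr)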
Posted by
Post not yet marked as solved
1 Reply
1.6k Views
Hi, I have multiple audio files, and I want to decide which channel goes to which output. For example, how do I route four 2-channel audio files to an 8-channel output? Also, if I have an AVAudioPlayerNode playing a 2-channel track through headphones, can I flip the channels on the output for playback, i.e. swap left and right? I have read the following thread, which seeks to do something similar, but it is from 2012 and I do not quite understand how it would work today. Many thanks; I am a bit stumped.
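
For the left/right flip specifically, one self-contained approach (a sketch: it swaps samples in a buffer you schedule yourself, rather than remapping the hardware route) looks like this:

    import AVFoundation

    // Read a stereo file into a buffer and swap its two channel planes,
    // then schedule the result on an AVAudioPlayerNode as usual.
    func swappedStereoBuffer(from url: URL) throws -> AVAudioPCMBuffer {
        let file = try AVAudioFile(forReading: url)
        let format = file.processingFormat
        guard format.channelCount == 2,
              let buffer = AVAudioPCMBuffer(pcmFormat: format,
                                            frameCapacity: AVAudioFrameCount(file.length)) else {
            throw NSError(domain: "ChannelSwap", code: -1)
        }
        try file.read(into: buffer)
        if let data = buffer.floatChannelData {
            for frame in 0..<Int(buffer.frameLength) {
                let left = data[0][frame]
                data[0][frame] = data[1][frame]
                data[1][frame] = left
            }
        }
        return buffer
    }

For routing four 2-channel files to an 8-channel device, the classic mechanism is a channel map (kAudioOutputUnitProperty_ChannelMap) on the output unit, but the details depend on the hardware and graph, so treat that as a direction to research rather than a recipe.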
Posted by
Post not yet marked as solved
10 Replies
3.8k Views
I work on a video conferencing application which makes use of AVAudioEngine and the videoChat AVAudioSession.Mode. This past Friday, an internal user reported an "audio cutting in and out" issue with their new iPhone 14 Pro, and I was able to reproduce the issue later that day on my iPhone 14 Pro Max. No other iOS devices running iOS 16 are exhibiting this issue. I have narrowed down the root cause to the videoChat AVAudioSession.Mode after changing line 53 of the ViewController.swift file in Apple's "Using Voice Processing" sample project (https://developer.apple.com/documentation/avfaudio/audio_engine/audio_units/using_voice_processing) from:

    try session.setCategory(.playAndRecord, options: .defaultToSpeaker)

to:

    try session.setCategory(.playAndRecord, mode: .videoChat, options: .defaultToSpeaker)

This only causes issues on my iPhone 14 Pro Max device, not on my iPhone 13 Pro Max, so it seems specific to the new iPhones only. I am also seeing the following logged to the console using either device, which appears to be specific to iOS 16, but I am not sure if it is related to the videoChat issue or not:

    2022-09-19 08:23:20.087578-0700 AVEchoTouch[2388:1474002] [as] ATAudioSessionPropertyManager.mm:71  Invalid input size for property 1684431725
    2022-09-19 08:23:20.087605-0700 AVEchoTouch[2388:1474002] [as] ATAudioSessionPropertyManager.mm:225  Invalid input size for property 1684431725

I am assuming 1684431725 is 'dfcm' but I am not sure which Audio Session property that might be.
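
The 'dfcm' guess checks out: 1684431725 is 0x6466636D, which is the four bytes 'd' 'f' 'c' 'm'. A quick snippet to decode such FourCC property IDs:

    // Decode a 32-bit FourCC property ID into its character form.
    func fourCC(_ value: UInt32) -> String {
        let bytes = [24, 16, 8, 0].map { UInt8((value >> $0) & 0xFF) }
        return String(bytes: bytes, encoding: .ascii) ?? "????"
    }

    print(fourCC(1684431725)) // "dfcm"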
Posted by
Post not yet marked as solved
1 Reply
1.3k Views
In my app, which gives voice prompts and sound warnings while driving, I test whether the audio session is connected to CarPlay, in order to use CarPlay settings for AVAudioSession, by doing:

    BOOL found = NO;
    for (AVAudioSessionPortDescription *portDescr in [AVAudioSession sharedInstance].currentRoute.outputs) {
        if ([portDescr.portType isEqualToString:AVAudioSessionPortCarAudio])
            found = YES;
    }

This has worked for several years. But now, in iOS 16, when the app is in the background, this method won't detect that the iPhone is connected to CarPlay most of the time. As a result, I set up the audio session in a way that won't play any sound to the user. I have seen this happening in other apps such as Google Maps, so my guess is that something is not working with AVAudioSession in the background on iOS 16. Has anyone experienced this? Any workaround? I tried notifications to detect a change in route, but sometimes I wouldn't get the notification if the app was in the background when connecting to CarPlay, so this workaround didn't work either. Any ideas? Thank you.
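
For comparison, a Swift version of the same check plus a route-change observer (a sketch of what the poster describes already trying, offered as a baseline rather than a fix):

    import AVFoundation

    func isConnectedToCarPlay() -> Bool {
        AVAudioSession.sharedInstance().currentRoute.outputs
            .contains { $0.portType == .carAudio }
    }

    // Route-change notification; the report above is that this can be
    // missed while the app is in the background on iOS 16.
    NotificationCenter.default.addObserver(
        forName: AVAudioSession.routeChangeNotification,
        object: nil,
        queue: .main
    ) { _ in
        print("CarPlay connected: \(isConnectedToCarPlay())")
    }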
Posted by
Post not yet marked as solved
9 Replies
3.6k Views
I am getting an error in iOS 16 that doesn't appear in previous iOS versions. I am using RemoteIO to play back live audio at 4000 Hz. The error is the following:

    Input data proc returned inconsistent 2 packets for 186 bytes; at 2 bytes per packet, that is actually 93 packets

This is how the audio format and the callback are set:

    // Set the audio format
    AudioStreamBasicDescription audioFormat;
    audioFormat.mSampleRate = 4000;
    audioFormat.mFormatID = kAudioFormatLinearPCM;
    audioFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
    audioFormat.mFramesPerPacket = 1;
    audioFormat.mChannelsPerFrame = 1;
    audioFormat.mBitsPerChannel = 16;
    audioFormat.mBytesPerPacket = 2;
    audioFormat.mBytesPerFrame = 2;

    AURenderCallbackStruct callbackStruct;
    // Set output callback
    callbackStruct.inputProc = playbackCallback;
    callbackStruct.inputProcRefCon = (__bridge void * _Nullable)(self);
    status = AudioUnitSetProperty(audioUnit,
                                  kAudioUnitProperty_SetRenderCallback,
                                  kAudioUnitScope_Global,
                                  kOutputBus,
                                  &callbackStruct,
                                  sizeof(callbackStruct));

Note that the mSampleRate I set is 4000 Hz. In iOS 15 I get a buffer duration (IOBufferDuration) of 0.02322 seconds and 93 frames in each callback. This is expected, because

    sample rate × buffer duration = frames per callback
    4000 Hz × 0.02322 s ≈ 93 frames

However, in iOS 16 I am getting the aforementioned error in the callback:

    Input data proc returned inconsistent 2 packets for 186 bytes; at 2 bytes per packet, that is actually 93 packets

Since the number of frames is equal to the number of packets, I am getting 1 or 2 frames in the callback while the buffer duration is still 0.02322 seconds. This didn't affect the playback of the "raw" signal, but it did affect the playback of the "processed" signal. Two frames per 0.02322-second buffer would correspond to an effective rate of only about 86 Hz, which doesn't make any sense. This error appears for other sampling rates (8000, 16000, 32000), but not for 44100. However, I would like to keep 4000 as my sampling rate. I have also tried to set the sampling rate by using the setPreferredSampleRate(_:) function of AVAudioSession, but the attempt didn't succeed: the sampling rate was still 44100 after calling that function. Any help on this issue would be appreciated.
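
On the setPreferredSampleRate(_:) point: it is a request, not a guarantee, and the session reports what the hardware actually granted, so it is worth reading the effective value back (a minimal sketch):

    import AVFoundation

    let session = AVAudioSession.sharedInstance()
    do {
        try session.setCategory(.playAndRecord)
        // A preference only: the hardware may stay at 44100/48000 Hz,
        // in which case the I/O unit converts to the client format.
        try session.setPreferredSampleRate(4000)
        try session.setActive(true)
    } catch {
        print("Session configuration failed: \(error)")
    }
    print("Requested 4000 Hz, hardware granted \(session.sampleRate) Hz")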
Posted by
Post not yet marked as solved
3 Replies
1.7k Views
Hi folks, I'm currently working on a video conferencing app and use AVRoutePickerView for output device selection. Since the iOS 16 release, it has started to display incorrect names for the earpiece and speaker options: for both of them the name is "iPhone" (before, it was "iPhone"/"Speaker"). Output switching works correctly, but the display names are confusing. Here is my audio session configuration:

    private func configureSession() {
        let configuration = RTCAudioSessionConfiguration.webRTC()
        configuration.category = AVAudioSession.Category.playAndRecord.rawValue
        configuration.categoryOptions = [.allowBluetooth, .allowBluetoothA2DP, .duckOthers, .mixWithOthers]
        let session = RTCAudioSession.sharedInstance()
        session.lockForConfiguration()
        do {
            try session.setConfiguration(configuration)
        } catch let error as NSError {
            logError("[AudioSessionManager] Unable to configure RTCAudioSession with error: \(error.localizedDescription)")
        }
        session.unlockForConfiguration()
    }

    private func overrideOutputPortIfNeeded() {
        DispatchQueue.main.async {
            guard let currentOutputType = self.session.currentRoute.outputs.first?.portType else { return }

            self.session.lockForConfiguration()
            let shouldOverride = [.builtInReceiver, .builtInSpeaker].contains(currentOutputType)
            logDebug("[AudioSessionManager] Should override output to speaker? \(shouldOverride)")
            if shouldOverride {
                do {
                    try self.session.overrideOutputAudioPort(.speaker)
                } catch let error as NSError {
                    logError("[AudioSessionManager] Unable to override output to Speaker: \(error.localizedDescription)")
                }
            }
            self.session.unlockForConfiguration()
        }
    }

Any help appreciated. Thanks!
Posted by
Post not yet marked as solved
2 Replies
1k Views
I am using AVSpeechSynthesizer to get an audio buffer and play it, using AVAudioEngine and AVAudioPlayerNode to play the buffer. But I am getting this error:

    [avae] AVAEInternal.h:76 required condition is false: [AVAudioPlayerNode.mm:734:ScheduleBuffer: (_outputFormat.channelCount == buffer.format.channelCount)]
    2023-05-02 03:14:35.709020-0700 AudioPlayer[12525:308940] *** Terminating app due to uncaught exception 'com.apple.coreaudio.avfaudio', reason: 'required condition is false: _outputFormat.channelCount == buffer.format.channelCount'

Can anyone please help me play the AVAudioBuffer from AVSpeechSynthesizer's write method?
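
The exception says the player node's output connection format doesn't match the buffer's channel count: speech buffers are typically mono, while the default mixer connection is stereo. A minimal sketch that connects the node using the first buffer's own format (assuming a simple engine graph):

    import AVFoundation

    let engine = AVAudioEngine()
    let player = AVAudioPlayerNode()
    engine.attach(player)

    let synthesizer = AVSpeechSynthesizer()
    let utterance = AVSpeechUtterance(string: "Hello world")

    var connected = false
    synthesizer.write(utterance) { buffer in
        guard let pcm = buffer as? AVAudioPCMBuffer, pcm.frameLength > 0 else { return }
        if !connected {
            // Connect with the buffer's own format (often mono, 22050 Hz)
            // so the channel counts match and scheduleBuffer doesn't throw.
            engine.connect(player, to: engine.mainMixerNode, format: pcm.format)
            try? engine.start()
            player.play()
            connected = true
        }
        player.scheduleBuffer(pcm, completionHandler: nil)
    }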
Posted by
Post not yet marked as solved
1 Reply
433 Views
Hello. We have published the Voix audio recorder application. The app seems to work, but we have a very strange issue on one particular iPad device: the application requests permission for microphone access, but as you can see in the attached images, the microphone permission is missing entirely from the permission list. On similar iPad devices, we have not noticed this issue. What could be the cause of this issue, and how can we avoid it?
Posted by
Post not yet marked as solved
3 Replies
775 Views
When I press the system Talk button to talk in a channel while my app is in the background, I see that channelManager(_: PTChannelManager, _: UUID, didBeginTransmittingFrom _: PTChannelTransmitRequestSource) is called, but channelManager(_: PTChannelManager, didActivate _: AVAudioSession) is never called. When my app is in the foreground, everything works fine. I've set the app's Background Modes capability to Push to Talk. Can anyone help?
Posted by
Post not yet marked as solved
5 Replies
878 Views
Hi, I'd like to develop an iOS application that keeps the mic open for voice recording and processing even when the screen is off. I want to perform speech-to-text requests whenever samples of voice are detected (using a voice activity detection library) and also send requests to the cloud based on what is spoken. I've enabled the Audio background mode, and preliminary testing seems to indicate that this is working. That is, I can press "record" in my app, switch to another app, then shut the screen off and speak for several seconds before auto-stopping the recording and sending it to an SFSpeechRecognizer task, which appears to succeed. However, I have read that this should not be supported, so before going further down this path I wanted to understand exactly what the processing limitations are in this mode. The documentation doesn't seem very clear to me. Thanks, -- B.
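
For reference, the session setup this workflow depends on (a sketch; it assumes the audio background mode is enabled, as described above, and a session that stays active while recording):

    import AVFoundation

    // With the "audio" UIBackgroundModes entry, an app that is actively
    // recording through an active session keeps running with the screen
    // off; a deactivated or interrupted session does not.
    let session = AVAudioSession.sharedInstance()
    do {
        try session.setCategory(.playAndRecord, mode: .measurement)
        try session.setActive(true)
    } catch {
        print("Session setup failed: \(error)")
    }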
Posted by
Post marked as solved
1 Reply
1k Views
Hi, I know there are many ways to record voice on iOS using one of these well-known frameworks: AVFoundation, AudioToolbox, Core Audio. But what I want to do is record a phone call, and there are several challenges:

1. Interruptions: when I get a call, the system interrupts any running app in order to handle the call. How can I make the voice recording app keep recording in the background, so that after I receive a call I can open the app and run the record function?
2. Even if I solved the previous issue, how can I record the sound that comes from the phone's top speaker?

If anyone has an idea or any thoughts, please share them with me.
Posted by
Post not yet marked as solved
0 Replies
552 Views
Our app uses the playAndRecord category with options interruptSpokenAudioAndMixWithOthers, allowBluetoothA2DP, allowAirPlay, and defaultToSpeaker. While our app is backgrounded, Spotify and other music apps play normally when the iPhone is not connected to anything or is connected to Bluetooth. However, once the iPhone is connected to CarPlay, either via Bluetooth or via a USB cable, the music apps' playback quality becomes flat, which suggests the recording is using the Hands-Free Profile (which would also happen with allowBluetooth set and a Bluetooth device connected). Is there any way to keep using the iPhone microphone to record while connected to CarPlay, so that music apps can play normally?
Posted by
Post not yet marked as solved
0 Replies
338 Views
https://developer.apple.com/documentation/avfaudio/avaudiosession/mode/1616455-voicechat

In this document's Discussion section, a side effect is mentioned, but its details are not given: "Using this mode has the side effect of enabling the allowBluetooth category option." Please share the details of this side effect.
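
In routing terms, the documented side effect means Bluetooth HFP headsets become eligible input/output routes even though you never passed .allowBluetooth. A small probe (a sketch; whether the categoryOptions getter reports the implicitly enabled option is worth verifying on a device):

    import AVFoundation

    let session = AVAudioSession.sharedInstance()
    do {
        // Note: no .allowBluetooth passed here.
        try session.setCategory(.playAndRecord, mode: .voiceChat, options: [])
    } catch {
        print("setCategory failed: \(error)")
    }
    // If the documented side effect holds, this reflects .allowBluetooth
    // as if it had been included explicitly.
    print(session.categoryOptions.contains(.allowBluetooth))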
Posted by
Post not yet marked as solved
0 Replies
393 Views
We noticed that the number of AVAudioSession.mediaServicesWereResetNotification posts increased significantly after upgrading to iOS 16.4.1. When AVAudioSession.mediaServicesWereResetNotification is posted, there is a high probability that restarting the audio modules fails, but only on iOS 16.4.1. Another issue is that, after upgrading to iOS 16.4.1, we can no longer emulate a media services reset in the same way, because CallKit ends the call once media services are reset. Steps to reproduce:

1. Make a native call; the call is connected.
2. Tap the "Reset Media Services" button (Settings -> Developer -> Reset Media Services) and return to the Phone app.
3. The call is terminated by CallKit.
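
For reference, the standard recovery pattern when this notification fires (a sketch: the report above is that re-initialization itself fails on 16.4.1, so this is the baseline, not a fix; rebuildAudioStack is a hypothetical helper):

    import AVFoundation

    // After a media services reset, all audio objects must be recreated
    // and the session reconfigured from scratch.
    NotificationCenter.default.addObserver(
        forName: AVAudioSession.mediaServicesWereResetNotification,
        object: AVAudioSession.sharedInstance(),
        queue: .main
    ) { _ in
        // 1. Dispose of engines, players, and units created earlier.
        // 2. Reconfigure the session category and mode.
        // 3. Rebuild and restart the audio graph.
        rebuildAudioStack() // hypothetical helper
    }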
Posted by
Post not yet marked as solved
0 Replies
924 Views
I'm writing an app in which a sound is played whenever a particular action occurs. During testing I noticed that this sound will always be heard if my phone is connected to a Bluetooth speaker, even if the mute switch is active. Is there a way to prevent this? I feel like my app should respect the mute switch in this case. If I am not connected to the Bluetooth speaker, everything works as expected.
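
Whether sounds honor the Ring/Silent switch is determined by the session category: .ambient and .soloAmbient are silenced by the switch, while .playback is not (a minimal sketch; whether this also covers the Bluetooth-speaker route in this case is worth testing):

    import AVFoundation

    // .ambient is silenced by the Ring/Silent switch and mixes with
    // other audio; .playback ignores the switch.
    do {
        try AVAudioSession.sharedInstance().setCategory(.ambient)
        try AVAudioSession.sharedInstance().setActive(true)
    } catch {
        print("Failed to set category: \(error)")
    }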
Posted by