AVAudioSession


Use the AVAudioSession object to communicate to the system how you intend to use audio in your app.

AVAudioSession Documentation

Posts under AVAudioSession tag

90 Posts
Post not yet marked as solved
0 Replies
343 Views
Hi all, I have created a QuickLook preview for my custom data type in my app. I use SwiftUI wrapped in UIKit for the preview. My issue is that when I try to play audio using AVAudioPlayer, I receive an OSStatus -50 error. Does anyone know if there are separate permissions I need to request before being able to do this? Here are the errors I get while trying to set my audio session active and play on the AVAudioPlayer. Thanks for your help and advice!

```
The operation couldn’t be completed. (OSStatus error -50.)
nwi_state: registration failed (9)
connection <connection: 0x100e0b270> { name = com.apple.audio.AudioQueueServer, listener = false, pid = 0, euid = 4294967295, egid = 4294967295, asid = 4294967295 } : error <dictionary: 0x251524530> { count = 1, transaction: 0, voucher = 0x0, contents = "XPCErrorDescription" => <string: 0x2515246c8> { length = 18, contents = "Connection invalid" } }
auto-cancelling <connection: 0x100e0b270> { name = com.apple.audio.AudioQueueServer, listener = false, pid = 0, euid = 4294967295, egid = 4294967295, asid = 4294967295 }
0x2816bf680 reply: XPC_ERROR_CONNECTION_INVALID
throwing swix::exception: !(is_valid())
AQ_API_V2Impl.cpp:134 AudioQueueNew: <-AudioQueueNew failed -302
rebuilding null connection
0x2816bf680 reply: XPC_ERROR_CONNECTION_INVALID
connection <connection: 0x100822a90> { name = com.apple.audio.AudioQueueServer, listener = false, pid = 0, euid = 4294967295, egid = 4294967295, asid = 4294967295 } : error <dictionary: 0x251524530> { count = 1, transaction: 0, voucher = 0x0, contents = "XPCErrorDescription" => <string: 0x2515246c8> { length = 18, contents = "Connection invalid" } }
throwing swix::exception: !(is_valid())
auto-cancelling <connection: 0x100822a90> { name = com.apple.audio.AudioQueueServer, listener = false, pid = 0, euid = 4294967295, egid = 4294967295, asid = 4294967295 }
AQ_API_V2Impl.cpp:134 AudioQueueNew: <-AudioQueueNew failed -302
```
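For context, a minimal sketch of the activate-then-play sequence the post describes (the resource name is hypothetical):

```swift
import AVFoundation

final class PreviewAudio {
    private var player: AVAudioPlayer?  // keep a strong reference while playing

    func play() {
        do {
            let session = AVAudioSession.sharedInstance()
            try session.setCategory(.playback)
            try session.setActive(true)

            // Hypothetical resource name; substitute the preview's real file URL.
            guard let url = Bundle.main.url(forResource: "preview", withExtension: "m4a") else { return }
            player = try AVAudioPlayer(contentsOf: url)
            player?.prepareToPlay()
            player?.play()
        } catch {
            print("audio setup failed: \(error)")  // e.g. the OSStatus -50 from the post
        }
    }
}
```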
Post not yet marked as solved
1 Reply
373 Views
There is a CustomPlayer class that uses an MTAudioProcessingTap internally to modify the audio buffer. Say there are two instances, A and B, of the CustomPlayer class, both running. On iOS 17.1, when A finishes its operation and the instance is terminated, B's MTAudioProcessingTap process callback stops and its finalize callback fires, even though B still has work left to do. The same code in the same project does not behave this way on iOS 17.0 or lower: there, when A is terminated, B completes its task without any impact. What change in iOS 17.1 is causing this, and how can I avoid it? I'd appreciate an answer.

```swift
let audioMix = AVMutableAudioMix()
var audioMixParameters: [AVMutableAudioMixInputParameters] = []
try composition.tracks(withMediaType: .audio).forEach { track in
    let inputParameter = AVMutableAudioMixInputParameters(track: track)
    inputParameter.trackID = track.trackID
    var callbacks = MTAudioProcessingTapCallbacks(
        version: kMTAudioProcessingTapCallbacksVersion_0,
        clientInfo: UnsafeMutableRawPointer(
            Unmanaged.passRetained(clientInfo).toOpaque()
        ),
        init: { tap, clientInfo, tapStorageOut in
            tapStorageOut.pointee = clientInfo
        },
        finalize: { tap in
            Unmanaged<ClientInfo>.fromOpaque(MTAudioProcessingTapGetStorage(tap)).release()
        },
        prepare: nil,
        unprepare: nil,
        process: { tap, numberFrames, flags, bufferListInOut, numberFramesOut, flagsOut in
            var timeRange = CMTimeRange.zero
            let status = MTAudioProcessingTapGetSourceAudio(tap, numberFrames, bufferListInOut,
                                                            flagsOut, &timeRange, numberFramesOut)
            if noErr == status {
                // ....
            }
        })
    var tap: Unmanaged<MTAudioProcessingTap>?
    let status = MTAudioProcessingTapCreate(kCFAllocatorDefault, &callbacks,
                                            kMTAudioProcessingTapCreationFlag_PostEffects, &tap)
    guard noErr == status else { return }
    inputParameter.audioTapProcessor = tap?.takeUnretainedValue()
    audioMixParameters.append(inputParameter)
    tap?.release()
}
audioMix.inputParameters = audioMixParameters
return audioMix
```
Post not yet marked as solved
1 Reply
432 Views
My project uses an AVAudioEngine with a very simple setup: a speech recognizer runs on a tap on the engine's input, with separate AVAudioPlayerNodes handling playback.

```swift
try session.setCategory(.playAndRecord, mode: .default, options: [])
try session.setActive(true, options: .notifyOthersOnDeactivation)
try session.setAllowHapticsAndSystemSoundsDuringRecording(true)
```

```
filePlayerNode   ---> engine.mainMixerNode
bufferPlayerNode ---> engine.mainMixerNode
engine.mainMixerNode --> engine.outputNode
// bufferPlayer.scheduleBuffer() is called on its own queue
```

The input works fine, since the buffers can be collected into a file that plays back correctly, and the recognizer works fine too; but when I try to play the live audio by sending the buffer to the bufferPlayer on this or another device, the buffer audio plays at a very low volume, sometimes with severe distortion. If I lower the sample rate via AVAudioConverter, the distortion gets worse. I've tried experimenting with the AVAudioSession category options, using separate AVAudioEngines, and much, much more, yet I still haven't figured this out. It's gotten to the point where I've fixed almost all the arcane and minor issues in my audio system, yet I still can't play back my voice properly. The ability to both play and record simultaneously is a basic feature of phones: when on speaker mode, a phone doesn't need to behave like a walkie-talkie. In my mind, it's inconceivable that the relatively new AVAudioEngine doesn't have an implementation for this, since the main issue (feedback loops) can be dealt with via a simple primitive circuit. Live video chat apps like FaceTime wouldn't be possible without this, yet to my surprise I found no answers online (what I did find were articles explaining how to write a file while playback is occurring). Is there truly no way to do this on AVAudioEngine? Am I missing something fundamental? Any pointers would be greatly appreciated.
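For concreteness, a minimal sketch of the graph described above, assuming the session setup shown in the post has already run (node names follow the post; the recognizer hookup is elided):

```swift
import AVFoundation

func makeEngine() throws -> AVAudioEngine {
    let engine = AVAudioEngine()
    let filePlayerNode = AVAudioPlayerNode()
    let bufferPlayerNode = AVAudioPlayerNode()
    engine.attach(filePlayerNode)
    engine.attach(bufferPlayerNode)

    // The input format drives both the tap and the buffer player's connection.
    let inputFormat = engine.inputNode.outputFormat(forBus: 0)
    engine.connect(filePlayerNode, to: engine.mainMixerNode, format: nil)
    engine.connect(bufferPlayerNode, to: engine.mainMixerNode, format: inputFormat)

    // The tap feeds the recognizer and (per the post) the live-monitoring player.
    engine.inputNode.installTap(onBus: 0, bufferSize: 1024, format: inputFormat) { buffer, _ in
        // recognizer.append(buffer) would go here.
        bufferPlayerNode.scheduleBuffer(buffer, completionHandler: nil)
    }

    try engine.start()
    bufferPlayerNode.play()
    return engine
}
```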
Posted by wmk
Post not yet marked as solved
0 Replies
260 Views
Hello, I am having an issue with setting my AVAudioSession output to a Bluetooth A2DP device. I want to use the built-in mic for the input and an A2DP device (AirPods Pro 2) for the output route. Whenever I set the .allowBluetoothA2DP option on my AVAudioSession, the output changes to the speaker. The mode is .default and the category is .playAndRecord. If I do the same procedure with AirPods Pro 1, the output is set to the AirPods Pro 1. The trouble occurs when I use AirPods Pro 2 with an iPhone on iOS 17; there seems to be no issue on iOS versions below 17. Has anyone run into this? Thank you in advance.
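For reference, a minimal sketch of the configuration the post describes, to reproduce the route check:

```swift
import AVFoundation

let session = AVAudioSession.sharedInstance()
do {
    // Category/mode/options as described in the post.
    try session.setCategory(.playAndRecord, mode: .default, options: [.allowBluetoothA2DP])
    try session.setActive(true)
    // Expected route: built-in mic in, A2DP (AirPods) out.
    print(session.currentRoute)
} catch {
    print("session configuration failed: \(error)")
}
```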
Post not yet marked as solved
0 Replies
350 Views
When setting the mode during the configuration of an audio session in Swift, the previously configured categoryOptions get reset. For example, if you call setMode as shown below, you will observe that all previously set categoryOptions are cleared.

```swift
try AVAudioSession.sharedInstance().setCategory(.playAndRecord, mode: .videoChat,
                                                options: [.allowBluetooth, .defaultToSpeaker])
try AVAudioSession.sharedInstance().setMode(.voiceChat)
```

If you need to change the mode while keeping the categoryOptions, you have to call setCategory again. I haven't identified the exact reason for this behavior, and its practical impact on an application's functionality isn't clear to me yet. Why do you think this handling is in place?
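A sketch of the workaround mentioned above: restate the category together with the new mode and the desired options in one call, instead of calling setMode on its own:

```swift
import AVFoundation

do {
    // Re-applying category, mode and options together preserves the options.
    try AVAudioSession.sharedInstance().setCategory(
        .playAndRecord,
        mode: .voiceChat,
        options: [.allowBluetooth, .defaultToSpeaker]
    )
} catch {
    print("setCategory failed: \(error)")
}
```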
Post not yet marked as solved
1 Reply
376 Views
I've been trying to make a native visionOS version of my iPad app, which uses AVAudioPlayer. Everything works fine on iOS and iPadOS; however, when running on visionOS, the audio sounds like it's constantly skipping (both in the simulator and on an actual device). Does anyone know why this might be, how to fix it, or a workaround?
Post not yet marked as solved
0 Replies
419 Views
Hi everyone, I was working on some code that involves recording audio with AVAudioEngine and hit an issue that just crashes the app:

```
EXC_BREAKPOINT
Exception 6, Code 1, Subcode 4304279688
+0x009888 AudioRecordModule.setupAudioEngine
+0x009788 AudioRecordModule.setupAudioEngine
+0x00c5bc AudioRecordModule.handleConfigurationChange
```

Below is the relevant code in the Recorder class.

```swift
public class AudioRecordModule: Module {
    private var audioEngine: AVAudioEngine?

    private func startRecording(options recordingOptions: RecordingOptions) {
        try AVAudioSession.sharedInstance().setCategory(.playAndRecord, options: .mixWithOthers)
        try AVAudioSession.sharedInstance().setActive(true)

        outputFormat = AVAudioFormat(
            commonFormat: recordingOptions.bitDepth == 32 ? .pcmFormatInt32 : .pcmFormatInt16,
            sampleRate: Double(recordingOptions.sampleRate),
            channels: AVAudioChannelCount(recordingOptions.channels),
            interleaved: true
        )!

        let fileUri = URL(string: recordingOptions.fileUri)!
        let formatSettings: [String: Any] = [
            AVFormatIDKey: kAudioFormatMPEG4AAC,
            AVSampleRateKey: recordingOptions.sampleRate,
            AVNumberOfChannelsKey: recordingOptions.channels,
            AVEncoderBitRateStrategyKey: AVAudioBitRateStrategy_Constant,
            AVEncoderAudioQualityKey: AVAudioQuality.high.rawValue,
        ]
        self.recordedFile = try AVAudioFile(
            forWriting: fileUri,
            settings: formatSettings,
            commonFormat: outputFormat.commonFormat,
            interleaved: outputFormat.isInterleaved
        )

        if !hadSetupNotification {
            setupNotifications()
        }
    }

    func handleConfigurationChange() {
        DispatchQueue.main.async {
            self.releaseAudioEngine()
            self.setupAudioEngine()
            if self.state == "recording" {
                // we could attempt to keep recording
                do {
                    try self.audioEngine?.start()
                } catch {
                    self.internalPauseRecording()
                    self.sendInterruptEvent()
                }
            }
        }
    }

    func setupNotifications() {
        nc.addObserver(
            forName: Notification.Name.AVAudioEngineConfigurationChange,
            object: nil,
            queue: nil
        ) { [weak self] _ in
            guard let weakself = self else { return }
            if weakself.state != "inactive" {
                weakself.handleConfigurationChange()
            }
        }
    }

    private func setupAudioEngine() {
        self.audioEngine = nil
        let audioEngine = AVAudioEngine()
        self.audioEngine = audioEngine

        let inputNode = audioEngine.inputNode
        let inputFormat = inputNode.inputFormat(forBus: 0)
        let converter = AVAudioConverter(from: inputFormat, to: outputFormat)!

        inputNode.installTap(onBus: 0, bufferSize: 1024, format: inputFormat) {
            (buffer: AVAudioPCMBuffer, time: AVAudioTime) in
            do {
                let inputBlock: AVAudioConverterInputBlock = { _, outStatus in
                    outStatus.pointee = AVAudioConverterInputStatus.haveData
                    return buffer
                }
                let frameCapacity = AVAudioFrameCount(self.outputFormat.sampleRate) * buffer.frameLength / AVAudioFrameCount(buffer.format.sampleRate)
                let outputBuffer = AVAudioPCMBuffer(
                    pcmFormat: self.outputFormat,
                    frameCapacity: frameCapacity
                )!
                var error: NSError?
                converter.convert(to: outputBuffer, error: &error, withInputFrom: inputBlock)
                if let error = error {
                    throw error
                } else {
                    try self.recordedFile?.write(from: outputBuffer)
                }
            } catch {
                print(error)
            }
        }
    }

    private func releaseAudioEngine() {
        if let audioEngine = self.audioEngine {
            audioEngine.inputNode.removeTap(onBus: 0)
            audioEngine.stop()
        }
        audioEngine = nil
    }
}
```

Beside that, the record module works normally; it is just the configuration change that it doesn't handle well. I understand that when the configuration changes, I need to reinitialize the audio engine to get the correct input format (since the new configuration/audio device can have a different sample rate and so on). If I don't do that, the app also crashes, perhaps due to the format mismatch. AVAudioRecorder is not an option for me. Thank you for your help.
Post not yet marked as solved
1 Reply
364 Views
:( We are currently developing a video-calling app using WebRTC. We initiate one-to-one video calls with the AVAudioSession configured as follows:

```swift
do {
    if audioSession.category != .playAndRecord {
        try audioSession.setCategory(
            AVAudioSession.Category.playAndRecord,
            options: [.defaultToSpeaker]
        )
        try audioSession.setActive(true, options: .notifyOthersOnDeactivation)
    }
    if audioSession.mode != .videoChat {
        try audioSession.setMode(.videoChat)
    }
} catch {
    logger.error(msg: "AVAudioSession: \(error.localizedDescription)")
}
```

After initiating a video call, we recorded this app's video call using the iOS built-in screen recording feature. The recorded video includes system audio. However, iOS/iPadOS apps with similar features (Zoom, Skype, Slack) do not include audio in their recordings. Why does this difference occur? Is this behavior a security feature of iOS, and are there specific conditions required? Is some sort of configuration needed in AVAudioSession? Additionally :( I also reached out to Apple Developer Technical Support, and they responded, "We were able to reproduce it, but since we don't understand the issue, we will investigate it." What's that about...
Post not yet marked as solved
0 Replies
318 Views
I am creating an app where you can record a video and listen to music in the background. At the top of my viewDidLoad I set the AVAudioSession category to .playAndRecord:

```swift
let audioSession = AVAudioSession.sharedInstance()
AVCaptureSession().automaticallyConfiguresApplicationAudioSession = false
do {
    try audioSession.setCategory(AVAudioSession.Category.playAndRecord,
                                 options: [.mixWithOthers, .allowAirPlay, .allowBluetoothA2DP])
    try audioSession.setActive(true)
} catch {
    print("error trying to record and play audio")
}
```

However, when I do this, the audio cuts out for a second or less at app open and app close. I would like the audio to continue playing and not cut out. Is there anything I can do to ensure the audio continues to play?
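One detail worth flagging: the snippet above sets automaticallyConfiguresApplicationAudioSession on a brand-new AVCaptureSession that is immediately discarded, not on the session used for recording. A sketch of applying it to a retained capture session (an assumption about the intent):

```swift
import AVFoundation

// Assumption: `captureSession` is the session the app actually records with,
// kept alive for the life of the camera screen.
let captureSession = AVCaptureSession()
captureSession.automaticallyConfiguresApplicationAudioSession = false

let audioSession = AVAudioSession.sharedInstance()
do {
    try audioSession.setCategory(.playAndRecord,
                                 options: [.mixWithOthers, .allowAirPlay, .allowBluetoothA2DP])
    try audioSession.setActive(true)
} catch {
    print("error configuring audio session: \(error)")
}
```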
Post not yet marked as solved
3 Replies
422 Views
Hi there, I am trying to record a meeting and upload it to an AWS server. The recording is in .m4a format and the upload request is a URLSession request. The following code works perfectly for recordings under 15 minutes, but for longer recordings it gets stuck. Could you please help me out with this?

```swift
func startRecording() {
    let audioURL = getAudioURL()
    let audioSettings = [
        AVFormatIDKey: Int(kAudioFormatMPEG4AAC),
        AVSampleRateKey: 12000,
        AVNumberOfChannelsKey: 1,
        AVEncoderAudioQualityKey: AVAudioQuality.high.rawValue
    ]
    do {
        audioRecorder = try AVAudioRecorder(url: audioURL, settings: audioSettings)
        audioRecorder.delegate = self
        audioRecorder.record()
    } catch {
        finishRecording(success: false)
    }
}

func uploadRecordedAudio() {
    let _ = videoURL.startAccessingSecurityScopedResource()
    let input = UploadVideoInput(signedUrl: signedUrlResponse, videoUrl: videoURL, fileExtension: "m4a")
    self.fileExtension = "m4a"
    uploadService.uploadFile(videoUrl: videoURL, input: input)
    videoURL.stopAccessingSecurityScopedResource()
}

func uploadFileWithMultipart(endPoint: UploadEndpoint) {
    var urlRequest: URLRequest
    urlRequest = endPoint.urlRequest
    uploadTask = URLSession.shared.uploadTask(withStreamedRequest: urlRequest)
    uploadTask?.delegate = self
    uploadTask?.resume()
}
```
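Not from the post, but one common direction for long recordings is a file-based upload task on a background URLSession, which keeps running if the app is suspended; a rough sketch, where the identifier, endpoint URL, and file path are placeholders:

```swift
import Foundation

// Placeholders: session identifier, endpoint URL, and the recorded file's URL.
let audioFileURL = URL(fileURLWithPath: "/path/to/recording.m4a")
let config = URLSessionConfiguration.background(withIdentifier: "com.example.meeting-upload")
// In a real app, pass a delegate here to receive completion callbacks
// for background tasks; background sessions do not use completion handlers.
let session = URLSession(configuration: config, delegate: nil, delegateQueue: nil)

var request = URLRequest(url: URL(string: "https://example.com/upload")!)
request.httpMethod = "PUT"

// A file-based upload lets the system stream the large recording from disk.
let task = session.uploadTask(with: request, fromFile: audioFileURL)
task.resume()
```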
Post marked as solved
1 Reply
460 Views
I am creating a camera app where I would like music from another app (Apple Music, Spotify, etc.) to continue playing once the app is opened. Currently I am using .mixWithOthers to do this in my viewDidLoad:

```swift
let audioSession = AVAudioSession.sharedInstance()
do {
    try audioSession.setCategory(AVAudioSession.Category.playback, options: [.mixWithOthers])
    try audioSession.setActive(true)
} catch {
    print("error trying to record and play audio")
}
```

However, I am running into an issue where the music only plays if you resume music playback after you start recording a video. Otherwise, when you open the app, the music stops as soon as you see the preview. The interesting thing is that if you start playing music while recording, then once you stop, music continues to play in the preview view. If you close the app (not force close) and reopen, music playback continues as expected; once you force close the app, it returns to the original behavior. I've tried to research this and have not been able to find anything. Any help is appreciated. Let me know if more details are needed.
Post not yet marked as solved
0 Replies
307 Views
There is a method setPreferredInput in AVAudioSession which can be used to select a different input device. But is there any similar function, like "setPreferredOutput", so that in my app I can select a specific audio output device to play audio? I do not want the user to change it through system interfaces (such as the Control Center), but through logic inside the app. Thanks!
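For reference, a minimal sketch of the setPreferredInput usage mentioned in the post (the port-type filter is just an example):

```swift
import AVFoundation

let session = AVAudioSession.sharedInstance()
// Pick a specific input port, e.g. a wired headset mic if one is present.
if let headset = session.availableInputs?.first(where: { $0.portType == .headsetMic }) {
    do {
        try session.setPreferredInput(headset)
    } catch {
        print("setPreferredInput failed: \(error)")
    }
}
```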
Post not yet marked as solved
0 Replies
468 Views
How can I record audio in a keyboard extension? I've enabled microphone support by enabling "RequestsOpenAccess". When I try to record, I get the error below in the console. This doesn't make sense, as Apple's docs seem to say that microphone access is allowed with full keyboard access. What is the point of enabling the microphone if the app cannot access the data from it?

```
-CMSUtilities- CMSUtility_IsAllowedToStartRecording: Client sid:0x2205e, XXXXX(17965), 'prim' with PID 17965 was NOT allowed to start recording because it is an extension and doesn't have entitlements to record audio.
```
Post not yet marked as solved
1 Reply
493 Views
Basically, for this iPhone app I want to be able to record from either the built-in microphone or a connected USB audio device while simultaneously playing back processed audio on connected AirPods. It's a pretty simple AVAudioEngine setup that includes a couple of effects units. The category is set to .playAndRecord with the .allowBluetooth and .allowBluetoothA2DP options added. With no attempt to set the preferred input and AirPods connected, the AirPods mic is used and output also goes to the AirPods. If I call setPreferredInput with either the built-in mic or a USB audio device, I get input as desired, but output then always goes to the speaker. I don't really see a good explanation for this, and overrideOutputAudioPort does not seem to have suitable options. Testing this on an iPhone 14 Pro.
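For concreteness, a sketch of the setup described in the post (the port selection is an example):

```swift
import AVFoundation

let session = AVAudioSession.sharedInstance()
do {
    try session.setCategory(.playAndRecord,
                            options: [.allowBluetooth, .allowBluetoothA2DP])
    try session.setActive(true)

    // Selecting the built-in mic; per the post, this is the point
    // at which output falls back to the speaker.
    if let builtInMic = session.availableInputs?.first(where: { $0.portType == .builtInMic }) {
        try session.setPreferredInput(builtInMic)
    }
} catch {
    print("session setup failed: \(error)")
}
```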
Post not yet marked as solved
2 Replies
764 Views
Hello, I started to set up stereo audio recording (both audio and video are recorded) and the audio quality seems to be lower than the quality obtained with the native Camera app (configured for stereo). Using Console to check the log, I found a difference between the Camera app and mine regarding MXSessionMode (of mediaserverd): the Camera app gives MXSessionMode = SpatialRecording, while mine gives MXSessionMode = VideoRecording. How can I configure my capture session to get MXSessionMode = SpatialRecording? Any suggestion? Best regards
Post marked as solved
1 Reply
766 Views
Hello everyone, I'm using Flutter and the just_audio package. When a user receives a push notification, the app plays audio in the background. I've tested this functionality on an iPhone 6s and an iPhone 13. It works correctly on the iPhone 6s: the app plays the sound when the push notification is received. However, on the iPhone 13 the app receives the notification and starts the background process, but fails to play the sound with these errors:

```
mediaserverd(MediaExperience)[17680] <Notice>: -CMSUtilities- CMSUtility_IsAllowedToStartPlaying: Client sid:0x45107e5, Runner(28933), 'prim' with category MediaPlayback and mode Default and mixable does not have assertions to start mixable playback
mediaserverd(MediaExperience)[17680] <Notice>: -CMSessionMgr- MXCoreSessionBeginInterruption_WithSecTaskAndFlags: CMSessionBeginInterruption failed as client 'sid:0x45107e5, Runner(28933), 'prim'' has insufficient privileges to take control
mediaserverd(AudioSessionServer)[17680] <Error>: AudioSessionServerImp.mm:405 { "action":"cm_session_begin_interruption", "error":"translating CM session error", "session":{"ID":"0x45107e5","name":"Runner(28933)"}, "details":{"calling_line":879,"error_code":-16980,"error_string":"Operation denied. Cannot start playing"} }
```

From what I understand of these errors, on the newer iPhones there must be additional permissions. Does anyone have any idea how I can fix this?
Post not yet marked as solved
1 Reply
591 Views
While playing sound, I need to create an AudioUnit to record from the microphone at the same time. To get echo cancellation, I chose the kAudioUnitSubType_VoiceProcessingIO subtype to initialize the AudioUnit. This works well on iOS 16 and below, but on iOS 17 the playback volume decreases while playing audio and recording. Thank you for your help; I hope to see your suggestions.
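For reference, a minimal sketch of creating the voice-processing I/O unit the post describes (OSStatus checks and render callbacks elided):

```swift
import AudioToolbox

// Describe the voice-processing (echo-cancelling) I/O unit.
var description = AudioComponentDescription(
    componentType: kAudioUnitType_Output,
    componentSubType: kAudioUnitSubType_VoiceProcessingIO,
    componentManufacturer: kAudioUnitManufacturer_Apple,
    componentFlags: 0,
    componentFlagsMask: 0
)

var unit: AudioUnit?
if let component = AudioComponentFindNext(nil, &description) {
    AudioComponentInstanceNew(component, &unit)
}
if let unit = unit {
    // Enable input on bus 1; output on bus 0 is enabled by default.
    var enable: UInt32 = 1
    AudioUnitSetProperty(unit, kAudioOutputUnitProperty_EnableIO,
                         kAudioUnitScope_Input, 1,
                         &enable, UInt32(MemoryLayout<UInt32>.size))
    AudioUnitInitialize(unit)
    AudioOutputUnitStart(unit)
}
```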
Post not yet marked as solved
3 Replies
815 Views
I have a SwiftUI app where I use AVAudioSession, AVAudioPlayer and CallKit. I want to play short sounds when a button is pressed during an active call, but apparently this is not possible on iOS 17; it worked on previous iOS versions. I have already tried all the different audio session categories and modes, and nothing seems to work. I tried to find change logs or relevant docs/issues about this but was unable to find any reference. Expected: init call, play sound, hear sound. Actual: init call, play sound, hear nothing. Without the active call, I can hear the sound. Am I doing or understanding something wrong? What could possibly be happening? Thank you, and I'd appreciate it if someone could provide insight on this!
Post not yet marked as solved
0 Replies
597 Views
In a VoIP application, when CallKit is enabled and we try playing a video through AVPlayer, the video renders only frame by frame and the audio of the content is not audible. This issue is observed only in iOS 17. Any idea how we can resolve this?