AVAudioSession


Use the AVAudioSession object to communicate to the system how you intend to use audio in your app.

Posts under the AVAudioSession tag (73 posts)


AirPlay selection not working
I'm trying to implement AirPlay in my app. I can successfully play back sound and trigger the AirPlay selection sheet. If the target device is a Bluetooth-only device I can connect with no problem and stream the audio to it, but if the device is an AirPlay-specific device like a HomePod or an Apple TV, when I select it I get a spinning icon indicating that it is trying to connect, and eventually it times out and stops without connecting. I don't believe it is an AirPlay audio issue, because if I go to a different app, for example a podcast app, select my HomePods for output, and then switch back to my app, my audio will correctly stream to the HomePod. Not only that, my icon changes color to indicate that it is connected via AirPlay, and it correctly shows an AirPlay connection. But I cannot then disconnect it using the AirPlay selector. The issue appears to be on the AirPlay selection side, which I have spent several days attempting to troubleshoot, mostly using ChatGPT to suggest code different from what I have to maybe work around the issue. Most of that has focused on the audio player section, but it doesn't seem like that is really where the problem is.
Replies: 2 · Boosts: 0 · Views: 223 · Jun ’25
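
A hedged sketch related to the post above: for long-form playback, one thing worth trying is handing routing decisions to the system via the .longFormAudio route sharing policy and tracking route changes to drive the in-app AirPlay indicator. The type and method names here are illustrative, not taken from the original post.

    import AVFoundation

    final class AirPlayRouting {
        func configure() throws {
            let session = AVAudioSession.sharedInstance()
            // .longFormAudio lets the system coordinate AirPlay 2 device selection/handoff.
            try session.setCategory(.playback, mode: .default, policy: .longFormAudio, options: [])
            try session.setActive(true)
        }

        func observeRouteChanges() {
            NotificationCenter.default.addObserver(
                forName: AVAudioSession.routeChangeNotification,
                object: nil, queue: .main
            ) { _ in
                let outputs = AVAudioSession.sharedInstance().currentRoute.outputs
                let onAirPlay = outputs.contains { $0.portType == .airPlay }
                print("AirPlay route active: \(onAirPlay)")
            }
        }
    }
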
Push to talk channelManager(_:didActivate:) doesn't get called
I am implementing the new Push to Talk framework and I found an issue where channelManager(_:didActivate:) is not called after I immediately return a non-nil activeRemoteParticipant from incomingPushResult. I have tested it and it can play the PTT audio in the foreground and background. This issue only occurs when I join the PTT channel from the app foreground and then kill the app. The channel gets restored via channelDescriptor(restoredChannelUUID:). After the channel is restored, I send a PTT push. I can see that my device receives the incomingPushResult, returns the activeRemoteParticipant, and the notification panel shows that A is speaking - but channelManager(_:didActivate:) never gets called, resulting in no audio being played. Rejoining the channel fixes the issue, and reopening the app also seems to fix the issue.
Replies: 1 · Boosts: 0 · Views: 150 · Jun ’25
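
A minimal sketch of the two callbacks in question from the post above, assuming the rest of the PTChannelManagerDelegate/PTChannelRestorationDelegate conformance exists elsewhere; playback should only begin once didActivate fires, so logging both is a reasonable first diagnostic. The type name ChannelEventLogger is illustrative.

    import UIKit
    import PushToTalk
    import AVFAudio

    final class ChannelEventLogger: NSObject {
        // Matches PTChannelManagerDelegate's push-result callback: returning a
        // non-nil active remote participant should normally be followed by didActivate.
        func incomingPushResult(channelManager: PTChannelManager,
                                channelUUID: UUID,
                                pushPayload: [String: Any]) -> PTPushResult {
            return .activeRemoteParticipant(PTParticipant(name: "A", image: nil))
        }

        // Matches channelManager(_:didActivate:) - start playing received audio only here.
        func channelManager(_ channelManager: PTChannelManager,
                            didActivate audioSession: AVAudioSession) {
            print("PTT audio session activated at \(Date())")
        }
    }
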
AVAudioSession dropping wired USBAudio source
My app inputs electrical waveforms from an IV485B39 2-channel USB device using an AVAudioSession. Before attempting to acquire data I make sure the input device is available as follows:

    AVAudioSession *audioSession = [AVAudioSession sharedInstance];
    [audioSession setCategory:AVAudioSessionCategoryRecord error:&err];
    NSArray *inputs = [audioSession availableInputs];

I have been using this code for about 10 years. My app is scriptable, so a user can acquire data from the IV485B39 multiple times with various parameter settings (sampling rates and sample durations). Recently the scripts have been failing to complete, and what I have noticed is that when it fails, the list of available inputs is missing the USBAudio input. While debugging I have noticed that when working properly the list of inputs includes both the internal microphone and the USBAudio device, as shown below.

    VIB_TimeSeriesViewController:***Available inputs = (
        "<AVAudioSessionPortDescription: 0x11584c7d0, type = MicrophoneBuiltIn; name = iPad Microphone; UID = Built-In Microphone; selectedDataSource = Front>",
        "<AVAudioSessionPortDescription: 0x11584cae0, type = USBAudio; name = 485B39 200095708064650803073200616; UID = AppleUSBAudioEngine:Digiducer.com :485B39 200095708064650803073200616:000957 200095708064650803073200616:1; selectedDataSource = (null)>"
    )

But when it fails I only see the built-in microphone.

    VIB_TimeSeriesViewController:***Available inputs = (
        "<AVAudioSessionPortDescription: 0x11584cef0, type = MicrophoneBuiltIn; name = iPad Microphone; UID = Built-In Microphone; selectedDataSource = Front>"
    )

If I only see the built-in microphone, I immediately repeat the same three lines of code, and most of the time "inputs" then contains both the internal microphone and the USBAudio device. This fix always works on my M2 iPad Pro and my iPhone 14, but some of my customers have older devices, and even with 3 tries they still get faults about 1 in 10 tries. I rolled back my code to a released version from about 12 months ago where I know we never had this problem, compiled it against the current libraries, and the problem still exists. I assume this is a problem caused by a change in the AVAudioSession framework libraries. I need to find a way to work around the issue or get the library fixed.
Replies: 0 · Boosts: 0 · Views: 90 · May ’25
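
A sketch of the retry the poster above describes, made explicit: re-query availableInputs a few times, pin the USB port with setPreferredInput once it appears, and give the accessory a moment to enumerate between attempts. The attempt count and delay are guesses, and observing AVAudioSession.routeChangeNotification is another way to notice when the port reappears.

    import AVFoundation

    func acquireUSBInput(maxAttempts: Int = 3) -> AVAudioSessionPortDescription? {
        let session = AVAudioSession.sharedInstance()
        for attempt in 1...maxAttempts {
            try? session.setCategory(.record)
            try? session.setActive(true)
            if let usb = session.availableInputs?.first(where: { $0.portType == .usbAudio }) {
                try? session.setPreferredInput(usb)
                return usb
            }
            print("USB input not visible on attempt \(attempt); retrying")
            Thread.sleep(forTimeInterval: 0.5)
        }
        return nil
    }
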
App Randomly Crashes During Continuous Sound Playback Using AVAudioPlayer
Environment: Device: iPad (10th generation); OS: iOS 18.3.2. We're using AVAudioPlayer to play a sound when a button is tapped. In our use case, this button can be tapped very frequently - roughly every 0.1 to 0.2 seconds. Each tap triggers the following function:

    var audioPlayer: AVAudioPlayer?

    func soundPlay(resource: String, type: String) {
        guard let path = Bundle.main.path(forResource: resource, ofType: type) else { return }
        do {
            audioPlayer = try AVAudioPlayer(contentsOf: URL(fileURLWithPath: path))
            audioPlayer!.delegate = self
            try audioSession.setCategory(.playback)
        } catch {
            return
        }
        self.audioPlayer!.play()
    }

The issue is that under high-frequency tapping (especially around 0.1–0.15 s intervals), the app occasionally crashes. The crash does not occur every time, but it happens randomly - sometimes within 30 seconds, within 1 minute, or even 3 minutes of continuous tapping. Interestingly, adding a delay of 0.2 seconds between button taps seems to prevent the crash entirely; delays shorter than 0.2 seconds (e.g. 0.15 s, 0.18 s) still result in occasional crashes. My questions are: Is this expected behavior from AVAudioPlayer or AVAudioSession? Could this be a known issue or a limitation in AVFoundation? Is there any documentation or guidance on handling frequent sound playback safely? Any insights or recommendations on how to handle rapid, repeated audio playback more reliably would be appreciated.
Replies: 0 · Boosts: 0 · Views: 177 · May ’25
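
A sketch related to the post above: configure the session once and reuse a single prepared AVAudioPlayer, rather than allocating a new player and touching the session on every tap. The type name TapSoundPlayer is illustrative.

    import AVFoundation

    final class TapSoundPlayer {
        private let player: AVAudioPlayer

        init?(resource: String, type: String) {
            guard let url = Bundle.main.url(forResource: resource, withExtension: type),
                  let player = try? AVAudioPlayer(contentsOf: url) else { return nil }
            self.player = player
            try? AVAudioSession.sharedInstance().setCategory(.playback)
            player.prepareToPlay()
        }

        func play() {
            player.currentTime = 0   // rewind so rapid taps restart the effect
            player.play()
        }
    }
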
How to disable/hide Audio Controls on lock screen from WKWebView
Hi, I am trying to remove the audio controls for my app on the lock screen. Since I use WKWebView, there are 3 audio tags in my HTML and I play and pause them via JS. If I haven't played any sound since app launch, there are no audio controls on the lock screen. But if I play one of those 3 files (they are sound effects of less than 3 seconds, e.g. for buttons), the audio controls appear on the lock screen. Note that even when the sounds are paused or not playing, they are still listed on the lock screen. What I have tried so far without success:

    MPNowPlayingInfoCenter.default().nowPlayingInfo = [:]

and

    try audioSession.setCategory(.playback, mode: .default, options: [])
    try audioSession.setActive(false, options: .notifyOthersOnDeactivation)

and

    UIApplication.shared.endReceivingRemoteControlEvents()

Another problem is that the app scales with the iOS system setting "Display Zoom". Is there a way to opt out of that? I'm on the latest Xcode version 16.3 and iOS 18, and I have no background modes in my capabilities. Nothing has worked so far. Has anyone an idea? Greetings
Replies: 2 · Boosts: 0 · Views: 122 · May ’25
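
One hedged idea for the post above: if the web audio is only short UI sound effects, a mixable .ambient session generally avoids claiming the Now Playing slot in the way .playback does. Whether WKWebView's internal players honor this in every case is not guaranteed, so treat it as a sketch to try rather than a confirmed fix.

    import AVFoundation

    func configureForUISoundsOnly() {
        let session = AVAudioSession.sharedInstance()
        try? session.setCategory(.ambient, mode: .default, options: [.mixWithOthers])
        try? session.setActive(true)
    }
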
Need to check Call Status without using CallKit
My app's requirement is to check whether the user is on a call while performing a transaction. If the user is on a call, we need to show a caution alert. For this requirement we used CallKit to detect the call status and it works fine, but Apple recently rejected the application because CallKit is banned in China. Could you please suggest any solution for checking the call status only?
Replies: 3 · Boosts: 0 · Views: 187 · May ’25
AVAudioPlayer/SKAudioNode audio no longer plays after interruption
Hi 👋! We have a SpriteKit-based app where we play AVAudio sounds in three different ways:

    1. Effects (incl. UI sounds) with AVAudioPlayer.
    2. Long looping tracks with AVAudioPlayer.
    3. Short animation effects on the timeline of SpriteKit's SKScene files (effectively SKAudioNode nodes).

We've found that when you exit the app or audio playback is otherwise interrupted, future playback often fails. For example, there's a WebKit-based video trailer inside the app, and if you play it, our looping background music track (2) stops playing and won't resume when you close the trailer (return from WebKit). This is probably due to us not manually restarting the track (so it may well be easily fixed). Periodically played AVAudioPlayer audio (1) is not affected. However, the more concerning thing is that the audio tracks on SKScene file timelines (3) will no longer play. My hypothesis is that AVAudioEngine gets interrupted and needs to be restarted for those AVAudioNode elements to regain functionality. The thing is, we don't deal with AVAudioEngine at all currently in the app, meaning it is never explicitly initialized to begin with. Obviously things return to normal when you remove the app from memory and restart it. However, it seems many of our users aren't doing this, and they often report audio failing, presumably due to some interruption in the past without the app ever being cleared from memory. Any idea why timeline-run SKAudioNodes would fail like this? Should the app react to app backgrounding/foregrounding regarding audio? Any help would be very much appreciated ✌️!
Replies: 0 · Boosts: 1 · Views: 124 · May ’25
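
A sketch related to the post above: observe session interruptions and reactivate the session when they end, which is typically the point where engine-backed audio (SKAudioNode and friends) needs to be restarted. The resume hooks are placeholders.

    import AVFoundation

    func observeInterruptions() {
        NotificationCenter.default.addObserver(
            forName: AVAudioSession.interruptionNotification,
            object: nil, queue: .main
        ) { note in
            guard let raw = note.userInfo?[AVAudioSessionInterruptionTypeKey] as? UInt,
                  let type = AVAudioSession.InterruptionType(rawValue: raw) else { return }
            switch type {
            case .began:
                break // pause players here if needed
            case .ended:
                try? AVAudioSession.sharedInstance().setActive(true)
                let rawOptions = note.userInfo?[AVAudioSessionInterruptionOptionKey] as? UInt ?? 0
                if AVAudioSession.InterruptionOptions(rawValue: rawOptions).contains(.shouldResume) {
                    // restart looping tracks / scene audio here
                }
            @unknown default:
                break
            }
        }
    }
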
iOS Audio Routing - Bluetooth Output + Built-in Microphone Input
Hello! I'm experiencing an issue with iOS's audio routing system when trying to use Bluetooth headphones for audio output while also recording environmental audio from the built-in microphone.

Desired behavior: play audio through a Bluetooth headset (AirPods) while recording unprocessed environmental audio from the iPhone's built-in microphone.

Actual behavior: when explicitly selecting the built-in microphone, iOS reports that it's using it (in currentRoute.inputs); however, the actual audio data received is clearly still coming from the AirPods microphone, and the audio is heavily processed with voice isolation/noise cancellation, removing environmental sounds.

Environment details: iPhone 12 Pro Max, iOS 18.4.1, AirPods, AVAudioEngine (also tried AudioQueue).

Code attempted - I've tried multiple approaches to force the correct routing:

    func configureAudioSession() {
        let session = AVAudioSession.sharedInstance()

        // Configure to allow Bluetooth output but use built-in mic
        try? session.setCategory(.playAndRecord, options: [.allowBluetoothA2DP, .defaultToSpeaker])
        try? session.setActive(true)

        // Explicitly select built-in microphone
        if let inputs = session.availableInputs,
           let builtInMic = inputs.first(where: { $0.portType == .builtInMic }) {
            try? session.setPreferredInput(builtInMic)
            print("Selected input: \(builtInMic.portName)")
        }

        // Log the current route
        let route = session.currentRoute
        print("Current input: \(route.inputs.first?.portName ?? "None")")

        // Configure audio engine with native format
        let inputNode = audioEngine.inputNode
        let nativeFormat = inputNode.inputFormat(forBus: 0)
        inputNode.installTap(onBus: 0, bufferSize: 1024, format: nativeFormat) { buffer, time in
            // Process audio buffer. Despite showing "Built-in Microphone" in the route,
            // the audio appears to be coming from the AirPods with voice isolation applied - welp!
        }

        try? audioEngine.start()
    }

I've also tried various combinations of: different audio session modes (.default, .measurement, .voiceChat); different option combinations (with/without .allowBluetooth, .allowBluetoothA2DP); and calling session.setPreferredInput() both before and after activation.

Diagnostic observations when AirPods are connected: AVAudioSession.currentRoute.inputs correctly shows "Built-in Microphone" after setPreferredInput(), but the actual audio data shows clear signs of the AirPods' voice isolation processing, and background/environmental sounds are actively filtered out. When recording while a test audio file is played near the phone (not through the app), the recording is nearly silent; only headset voice gets through.

Questions: Is there a workaround to force iOS to actually use the built-in microphone while maintaining Bluetooth output? Are there any lower-level configurations that might resolve this issue? Any insights, workarounds, or suggestions would be greatly appreciated. This is blocking a critical feature in my application that requires environmental audio recording while providing audio feedback through headphones 😅
Replies: 0 · Boosts: 0 · Views: 186 · May ’25
Audio Session in Notification Service Extension
Is there any way I could use AVAudioSession, AVAudioPlayer, or anything similar in a Notification Service Extension? I am trying to implement audio playback in the Notification Service Extension to play a specific audio file when a notification is received, regardless of the app state (foreground, background, or killed), but I am not able to activate an audio session in the Notification Service Extension.

    NSError *sessionError = nil;
    BOOL success = [[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayback error:&sessionError];
    success = [[AVAudioSession sharedInstance] setActive:YES error:&sessionError];
    if (!success) {
        NSLog(@"Error activating audio session: %@", sessionError);
    }

Below is the error I get when running the code above in the Notification Service Extension:

    Error Domain=NSOSStatusErrorDomain Code=561015905 "Session activation failed" UserInfo={NSLocalizedDescription=Session activation failed}
Replies: 1 · Boosts: 0 · Views: 128 · May ’25
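
Regarding the post above: rather than activating a session in the service extension, a sketch of what is generally workable is attaching a bundled sound (up to roughly 30 seconds) to the notification content itself, which plays regardless of app state. The file name "alert.caf" is a placeholder.

    import UserNotifications

    final class NotificationService: UNNotificationServiceExtension {
        override func didReceive(_ request: UNNotificationRequest,
                                 withContentHandler contentHandler: @escaping (UNNotificationContent) -> Void) {
            let content = (request.content.mutableCopy() as? UNMutableNotificationContent)
                ?? UNMutableNotificationContent()
            // "alert.caf" is a placeholder; the file must ship in the app bundle
            // (or in Library/Sounds of the app container).
            content.sound = UNNotificationSound(named: UNNotificationSoundName("alert.caf"))
            contentHandler(content)
        }
    }
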
Push notification volume for custom sound
Is there a way to configure the APNs notification sound volume to be louder? I am implementing some custom sounds (narrative sentences) for APNs. It does play the custom sound, but the volume of the custom sound is not loud enough, even though I have set the device's volume and the "Ringtone and Alerts" volume to maximum. I tried to amplify the custom sound file; it does play louder, but the gain is minimal if I want to maintain the quality of the sound without it being distorted. I tried using a Notification Service Extension with AVAudioPlayer and AVAudioSession to play the sound, and it does play louder at max volume compared with relying on the default sound payload in APNS, but the problem is that AVAudioPlayer and AVAudioSession do not seem to be usable when the application is in the background or killed state. Is there any other alternative I could use?
Replies: 3 · Boosts: 0 · Views: 155 · Apr ’25
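
For the post above, the one supported way to control notification sound volume is a critical alert, which requires Apple to grant the Critical Alerts entitlement and the user to opt in. A sketch for local notifications, with a placeholder file name; for remote pushes the equivalent is the "critical", "name", and "volume" keys inside the aps sound dictionary.

    import UserNotifications

    // Assumes the Critical Alerts entitlement has been granted:
    // critical sounds carry an explicit volume (0.0-1.0) and bypass the mute switch.
    let criticalSound = UNNotificationSound.criticalSoundNamed(
        UNNotificationSoundName("narration.caf"),   // placeholder file name
        withAudioVolume: 1.0
    )
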
Audio session activation occasionally fails from CarPlay
I'm working on adding CarPlay support to an audio app and am running into an issue. Occasionally, when a user opens the app from CarPlay while the main app scene is either not connected or is currently in the background, I will receive an error when attempting to activate the audio session. The code below mimics my setup:

    do {
        try AVAudioSession.sharedInstance().setCategory(.playback, mode: .spokenAudio)
        try AVAudioSession.sharedInstance().setActive(true)
    } catch {
        print(error) // NSOSStatusErrorDomain - 560557684: Session activation failed
    }

That error code maps to AVAudioSession.ErrorCode.cannotInterruptOthers. Once in this state, all subsequent attempts to play different pieces of content will fail. However, things start working normally if the user opens the app on their phone and tries again from CarPlay (while the app is in the foreground on their phone). I'm not sure why it would behave this way, and I want to note that I do have the audio background mode capability enabled. Has anyone else encountered this? Are there any workarounds or changes I could make to prevent this from happening?
Replies: 0 · Boosts: 1 · Views: 176 · Apr ’25
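
A speculative workaround sketch for the post above, not a confirmed fix: activate the session lazily, right before playback, and retry once after a short delay if activation fails with .cannotInterruptOthers. The function name and delay are illustrative.

    import AVFoundation

    func activatePlaybackSession(retryDelay: TimeInterval = 0.5,
                                 completion: @escaping (Bool) -> Void) {
        let session = AVAudioSession.sharedInstance()
        do {
            try session.setCategory(.playback, mode: .spokenAudio)
            try session.setActive(true)
            completion(true)
        } catch {
            // Retry once shortly afterwards; activation sometimes succeeds on a later attempt.
            DispatchQueue.main.asyncAfter(deadline: .now() + retryDelay) {
                completion((try? session.setActive(true)) != nil)
            }
        }
    }
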
Low-Level Networking on watchOS for Duplex audio streaming
I did watch WWDC 2019 Session 716 and understand that an active audio session is key to unlocking low‑level networking on watchOS. I’m configuring my audio session and engine as follows: private func configureAudioSession(completion: @escaping (Bool) -> Void) { let audioSession = AVAudioSession.sharedInstance() do { try audioSession.setCategory(.playAndRecord, mode: .voiceChat, options: []) try audioSession.setActive(true, options: .notifyOthersOnDeactivation) // Retrieve sample rate and configure the audio format. let sampleRate = audioSession.sampleRate print("Active hardware sample rate: \(sampleRate)") audioFormat = AVAudioFormat(standardFormatWithSampleRate: sampleRate, channels: 1) // Configure the audio engine. audioInputNode = audioEngine.inputNode audioEngine.attach(audioPlayerNode) audioEngine.connect(audioPlayerNode, to: audioEngine.mainMixerNode, format: audioFormat) try audioEngine.start() completion(true) } catch { print("Error configuring audio session: \(error.localizedDescription)") completion(false) } } private func setupUDPConnection() { let parameters = NWParameters.udp parameters.includePeerToPeer = true connection = NWConnection(host: "***.***.xxxxx.***", port: 0000, using: parameters) setupNWConnectionHandlers() } private func setupTCPConnection() { let parameters = NWParameters.tcp connection = NWConnection(host: "***.***.xxxxx.***", port: 0000, using: parameters) setupNWConnectionHandlers() } private func setupWebSocketConnection() { guard let url = URL(string: "ws://***.***.xxxxx.***:0000") else { print("Invalid WebSocket URL") return } let session = URLSession(configuration: .default) webSocketTask = session.webSocketTask(with: url) webSocketTask?.resume() print("WebSocket connection initiated") sendAudioToServer() receiveDataFromServer() sendWebSocketPing(after: 0.6) } private func setupNWConnectionHandlers() { connection?.stateUpdateHandler = { [weak self] state in DispatchQueue.main.async { switch state { case .ready: print("Connected (NWConnection)") self?.isConnected = true self?.failToConnect = false self?.receiveDataFromServer() self?.sendAudioToServer() case .waiting(let error), .failed(let error): print("Connection error: \(error.localizedDescription)") DispatchQueue.main.asyncAfter(deadline: .now() + 2) { self?.setupNetwork() } case .cancelled: print("NWConnection cancelled") self?.isConnected = false default: break } } } connection?.start(queue: .main) } Duplex in this context refers to two-way audio transmission simultaneously recording and sending audio while also receiving and playing back incoming audio, similar to a VoIP/SIP call. The setup works fine on the simulator, which suggests that the core logic is correct. However, since the simulator doesn’t fully replicate WatchOS hardware behavior especially for audio sessions and networking issues might arise when running on a real device. The problem likely lies in either the Watch’s actual hardware limitations, permission constraints, or specific audio session configurations. I am reaching out to seek further assistance regarding the challenges I've been experiencing with establishing a UDP, TCP & web socket connection on watchOS using NWConnection for duplex audio streaming. Despite implementing the recommendations provided earlier, I am still encountering difficulties From what I can see, your implementation is focused on streaming audio playback with the server. 
In my case, I'm looking for a slightly different approach: I want to capture audio and send buffers of a specific size to the server while playing audio simultaneously, essentially achieving full duplex streaming similar to a VOIP call. Additionally, I’d like to ensure that if no external audio route is connected, the Apple Watch speaker is used by default. Any thoughts or insights on adapting this setup for those requirements would be very welcome.
Replies: 1 · Boosts: 0 · Views: 154 · Apr ’25
Low-Level Networking on watchOS for Duplex audio streaming
I did watch WWDC 2019 Session 716 and understand that an active audio session is key to unlocking low‑level networking on watchOS. I’m configuring my audio session and engine as follows: private func configureAudioSession(completion: @escaping (Bool) -> Void) { let audioSession = AVAudioSession.sharedInstance() do { try audioSession.setCategory(.playAndRecord, mode: .voiceChat, options: []) try audioSession.setActive(true, options: .notifyOthersOnDeactivation) // Retrieve sample rate and configure the audio format. let sampleRate = audioSession.sampleRate print("Active hardware sample rate: \(sampleRate)") audioFormat = AVAudioFormat(standardFormatWithSampleRate: sampleRate, channels: 1) // Configure the audio engine. audioInputNode = audioEngine.inputNode audioEngine.attach(audioPlayerNode) audioEngine.connect(audioPlayerNode, to: audioEngine.mainMixerNode, format: audioFormat) try audioEngine.start() completion(true) } catch { print("Error configuring audio session: \(error.localizedDescription)") completion(false) } } private func setupUDPConnection() { let parameters = NWParameters.udp parameters.includePeerToPeer = true connection = NWConnection(host: "***.***.xxxxx.***", port: 0000, using: parameters) setupNWConnectionHandlers() } private func setupTCPConnection() { let parameters = NWParameters.tcp connection = NWConnection(host: "***.***.xxxxx.***", port: 0000, using: parameters) setupNWConnectionHandlers() } private func setupWebSocketConnection() { guard let url = URL(string: "ws://***.***.xxxxx.***:0000") else { print("Invalid WebSocket URL") return } let session = URLSession(configuration: .default) webSocketTask = session.webSocketTask(with: url) webSocketTask?.resume() print("WebSocket connection initiated") sendAudioToServer() receiveDataFromServer() sendWebSocketPing(after: 0.6) } private func setupNWConnectionHandlers() { connection?.stateUpdateHandler = { [weak self] state in DispatchQueue.main.async { switch state { case .ready: print("Connected (NWConnection)") self?.isConnected = true self?.failToConnect = false self?.receiveDataFromServer() self?.sendAudioToServer() case .waiting(let error), .failed(let error): print("Connection error: \(error.localizedDescription)") DispatchQueue.main.asyncAfter(deadline: .now() + 2) { self?.setupNetwork() } case .cancelled: print("NWConnection cancelled") self?.isConnected = false default: break } } } connection?.start(queue: .main) } I am reaching out to seek further assistance regarding the challenges I've been experiencing with establishing a UDP, TCP & web socket connection on watchOS using NWConnection for duplex audio streaming. Despite implementing the recommendations provided earlier, I am still encountering difficulties. Or duplex audio streaming not possible on apple watch?
Replies: 6 · Boosts: 0 · Views: 172 · Apr ’25
AVAudioSession automatically sets the tablet audio volume to 50% when recording audio.
Environment: Device: iPad (10th generation); OS: iOS 18.3.2. I'm using AVAudioSession to record sound in my application, but I recently realized that when the app starts a recording session on the tablet, the OS automatically sets the tablet volume to 50%, and after recording ends it doesn't change back to the volume level that was set before the recording started. So I would like to know whether this is default OS behavior or a bug. If it's default behavior, I would much appreciate a link to the documentation.
Replies: 0 · Boosts: 0 · Views: 115 · Apr ’25
Creating an initial Now Playing state of paused - impossible?
I am working on an app which plays audio - https://youtu.be/VbAfUk_eYl0?si=nJg5ayy2faWE78-g - and one of its features is that, on restart, if you had paused playback of a file when the app was previously shut down (or were playing one at the time of shutdown), the paused state and position in the file are restored exactly as they were. The functionality works. However, it seems impossible to get the "Now Playing" information in iOS into the right state to reflect that via the MediaPlayer API. On restart, handlers are attached to the play/pause/togglePlayPause actions on MPRemoteCommandCenter.shared(), and the map of media info is updated on MPNowPlayingInfoCenter.default().nowPlayingInfo. What happens is that iOS's media view shows the audio as playing and offers a pause button - even though the play action is enabled and the pause action is disabled until playback has been initiated (my workaround is to have the pause action toggle the play state, since otherwise you wouldn't be able to initiate playback from the controls in a car without initiating it once from the device first). I've created a simplified white-noise-player demo to illustrate the problem - simply build and deploy it, then start the app, lock your device, and look at the playback controls on the lock screen. It will show a pause button - the same behavior I've described. https://github.com/timboudreau/ios-play-pause-demo I've tried a few things to narrow down the source of the issue - for example, thinking that not setting MPNowPlayingInfoPropertyPlaybackProgress and MPMediaItemPropertyPlaybackDuration on startup might do the trick (since the system interpolates elapsed time and it's recommended to update those properties infrequently), but the result is the same, just without a duration or progress shown. What governs this behavior, and is there some way to explicitly tell the Media Player API that your current state is paused?
Replies: 0 · Boosts: 2 · Views: 133 · Apr ’25
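
A sketch related to the post above: on iOS the lock-screen play/pause state largely tracks the advertised playback rate, so publishing rate 0.0 together with the restored position is the usual way to describe "paused". The title is a placeholder.

    import MediaPlayer

    func publishPausedNowPlaying(position: TimeInterval, duration: TimeInterval) {
        MPNowPlayingInfoCenter.default().nowPlayingInfo = [
            MPMediaItemPropertyTitle: "Restored item",            // placeholder title
            MPMediaItemPropertyPlaybackDuration: duration,
            MPNowPlayingInfoPropertyElapsedPlaybackTime: position,
            MPNowPlayingInfoPropertyPlaybackRate: 0.0,            // 0.0 advertises a paused state
        ]
    }
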
Audio player app is silent if device connected via CarPlay
I have a SwiftUI app - (https://youtu.be/VbAfUk_eYl0?si=JxUBh0Bpb-vc1E1U) - which I thought was almost ready for release - a manager for airdropped audio files from Logic Pro or other music creation applications. It uses AVAudioEngine and AVAudioPlayerNode to play audio, and the MediaPlayer API to integrate with car audio and similar, all of which works well. It does not currently have an explicit CarPlay integration (and I'm slightly horrified at the amount of work that is going to require). I had the good or bad luck of getting a loaner car with CarPlay while mine is being repaired yesterday, and lo and behold, when connected to the vehicle via CarPlay, there is no audio output in the vehicle at all. The Now Playing panel correctly shows the information my app provides about the currently playing song; the player node believes it is playing, and the AVAudioSession is configured as it should be. But there is no sound. Obviously I cannot ship it in this state. I've tried fiddling with the parameters the AVAudioSession is configured with, in case there was some parameter that was preventing audio output, to no avail - currently:

    var options = AVAudioSession.CategoryOptions()
    options.insert(.allowAirPlay)
    options.insert(.allowBluetooth)
    options.insert(.allowBluetoothA2DP)
    try session.setCategory(.playback, mode: .default, options: options)
    try? session.setPreferredIOBufferDuration(0.002) // ~96 samples at 44.1kHz
    try? session.setPrefersNoInterruptionsFromSystemAlerts(true)
    try? session.setPrefersInterruptionOnRouteDisconnect(false)
    try session.setActive(true, options: [.notifyOthersOnDeactivation])

All diagnostics within the app show the player operating correctly - files are played and flushed, and AVAudioPlayerNode completion callbacks are called when they should be. But the output is not audible in the vehicle. I would much prefer to ship this app without full-blown CarPlay integration, but with working audio when connected via CarPlay, and work on full CarPlay integration for the next release. Is there some secret handshake I am just missing to make this work?
Replies: 1 · Boosts: 0 · Views: 149 · Mar ’25
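
One diagnostic sketch for the post above: a route change to CarPlay can stop AVAudioEngine or change its output format, so observing the engine's configuration-change notification and restarting the engine (and re-scheduling the player node) is worth trying. The type name is illustrative.

    import AVFoundation

    final class EngineRouteObserver {
        private let engine: AVAudioEngine

        init(engine: AVAudioEngine) {
            self.engine = engine
            NotificationCenter.default.addObserver(
                forName: .AVAudioEngineConfigurationChange,
                object: engine, queue: .main
            ) { [weak self] _ in
                guard let self = self, !self.engine.isRunning else { return }
                try? self.engine.start()
                // Re-schedule buffers on the AVAudioPlayerNode here if playback stalled.
            }
        }
    }
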
PTT Framework has compatibility issue with .voiceChat AVAudioSession mode
As I've mentioned before, our app uses the PTT Framework to record and send audio messages. In one of the modes supported by the app we use the WebRTC.org library for that purpose. Internally, the WebRTC.org library uses the Voice-Processing I/O unit (kAudioUnitSubType_VoiceProcessingIO subtype) to retrieve audio from the mic. According to https://developer.apple.com/documentation/avfaudio/avaudiosession/mode-swift.struct/voicechat, using the Voice-Processing I/O unit implicitly enables the .voiceChat AVAudioSession mode (i.e., it appears to be impossible to use the Voice-Processing I/O unit without the .voiceChat mode). The problem is the following: when the user starts an outgoing PTT, the PTT Framework plays an audio notification, but with the .voiceChat mode enabled that sound plays distorted or does not play at all. Questions: Is this a known issue? Is there any way to work around it?
Replies: 17 · Boosts: 0 · Views: 831 · Mar ’25
Play Audio for a Metronome
Hi, I am looking for a good way to play sounds at a high frequency. At the moment I am using the AVAudioEngine, and create a couple AVAudioPlayerNode and for each sound I need to play I create a AVAudioPCMBuffer. When the app needs to play a sound, I get the correct AVAudioPCMBuffer for the sound and use the first available AVAudioPlayerNode and feed it to the buffer. The timing for a metronome app has to be very precise because if it's of by about 16ms the user can hear that it is not playing had the right interval. For low speeds this is working without any problems, but at high speeds it is getting worse. Maybe anyone has an idea on how I can improve my method. Its a Plugin for Flutter. import AVFoundation class FastSoundPlayer { private var audioPlayers: [SoundPlayer?] = [] private var sounds: [String: Sound] = [:] private var engine = AVAudioEngine() let session = AVAudioSession.sharedInstance() init() { do { try session.setCategory(AVAudioSession.Category.playback, mode: AVAudioSession.Mode.default, options: [AVAudioSession.CategoryOptions.mixWithOthers]) try session.setActive(true) createSoundPlayers(count: 20) try engine.start() } catch { print("Error starting audio engine: \(error.localizedDescription)") } } // Selector method to handle applicationDidBecomeActiveNotification func applicationDidBecomeActive() { // Reinitialize AVAudioEngine and reattach all nodes do { engine.reset() objc_sync_enter(audioPlayers) audioPlayers.removeAll() createSoundPlayers(count: 20) objc_sync_exit(audioPlayers) try engine.start() } catch { print("Error starting audio engine: \(error.localizedDescription)") } } func createSoundPlayers(count: Int) { for _ in 0..<count { let player = SoundPlayer() engine.attach(player.player) engine.connect(player.player, to: engine.mainMixerNode, format: nil) audioPlayers.append(player) } } func load(sound: Data, name: String) { let sound = Sound(soundData: sound) sounds[name] = sound } func play(name: String) { if !engine.isRunning { applicationDidBecomeActive() } guard let sound = sounds[name] else { print("Sound not found") return } if let player = getAvailablePlayer() { player.play(sound: sound) } } func getAvailablePlayer() -> SoundPlayer? { for player in audioPlayers { if !player!.isPlaying { return player } } return nil } } class SoundPlayer { let player = AVAudioPlayerNode() var isPlaying = false init() { player.volume = 1.0 } func play(sound: Sound) { player.scheduleBuffer(sound.sound!, at: nil, options: .interrupts, completionCallbackType: .dataPlayedBack) { _ in self.complete() } if (player.engine != nil && player.engine!.isRunning) { player.play() isPlaying = true } } func complete() { isPlaying = false } } class Sound { var sound: AVAudioPCMBuffer? 
init(soundData: Data) { do { let temporaryURL = FileManager.default.temporaryDirectory.appendingPathComponent("tempSound.wav") try soundData.write(to: temporaryURL) // Create AVAudioFile from the temporary file URL let audioFile = try AVAudioFile(forReading: temporaryURL) // Define the format for the PCM buffer (44100Hz, stereo) let format = AVAudioFormat(commonFormat: .pcmFormatInt16, sampleRate: 44100, channels: 2, interleaved: false) // Create AVAudioPCMBuffer guard let pcmBuffer = AVAudioPCMBuffer(pcmFormat: format!, frameCapacity: AVAudioFrameCount(audioFile.length)) else { // Failed to create PCM buffer self.sound = nil return } // Read audio file into PCM buffer try audioFile.read(into: pcmBuffer) // Assign the created AVAudioPCMBuffer to the sound property self.sound = pcmBuffer } catch { print("Error loading sound file: \(error.localizedDescription)") self.sound = nil } } } Thanks!
Replies: 1 · Boosts: 0 · Views: 159 · Mar ’25
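
A sketch for the metronome post above: rather than calling play() for every tick, schedule each click buffer at a sample-accurate AVAudioTime on one player node, so timing no longer depends on when the play call happens to run. The 50 ms lead-in and parameter names are illustrative.

    import AVFoundation

    func scheduleClicks(on player: AVAudioPlayerNode,
                        click: AVAudioPCMBuffer,
                        bpm: Double,
                        beats: Int) {
        let sampleRate = click.format.sampleRate
        let samplesPerBeat = AVAudioFramePosition((60.0 / bpm) * sampleRate)

        if !player.isPlaying { player.play() }   // start the player's sample timeline
        guard let nodeTime = player.lastRenderTime,
              let playerTime = player.playerTime(forNodeTime: nodeTime) else { return }

        // Anchor the first beat slightly in the future on the player's own timeline.
        let firstBeat = playerTime.sampleTime + AVAudioFramePosition(sampleRate * 0.05)
        for beat in 0..<beats {
            let when = AVAudioTime(sampleTime: firstBeat + AVAudioFramePosition(beat) * samplesPerBeat,
                                   atRate: sampleRate)
            player.scheduleBuffer(click, at: when, options: [], completionHandler: nil)
        }
    }
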
Low-Level Networking on watchOS for Duplex audio streaming
Dear Apple Team, I am facing an issue with UDP networking in my watchOS app for duplex audio streaming using NWConnection. I have already added the necessary capabilities, including background mode for audio, to ensure smooth operation. Issue Details: The UDP connection works fine on the simulator since it uses macOS networking and allows low-level access. However, on a real Apple Watch (running watchOS 10), the connection remains in a "waiting" state and fails with Error 50. I am aware of Technical Note TN3135 regarding low-level networking on watchOS, but even after following these guidelines, the issue persists. Questions: Does watchOS impose additional restrictions on UDP networking compared to iOS/macOS? Are there any specific entitlements or configurations required to allow UDP connections on a real Apple Watch? Is there a workaround or debugging method to get more insights into why the connection fails? I would appreciate any guidance or recommendations on resolving this issue.
Replies: 3 · Boosts: 0 · Views: 154 · Mar ’25