Integrate music and other audio content into your apps.

Posts under Audio tag

58 Posts

Audio session activation occasionally fails from CarPlay
I'm working on adding CarPlay support to an audio app and am running into an issue. Occasionally, when a user opens the app from CarPlay while the main app scene is either not connected or is currently in the background, I will receive an error when attempting to activate the audio session. The code below mimics my setup:

```swift
do {
    try AVAudioSession.sharedInstance().setCategory(.playback, mode: .spokenAudio)
    try AVAudioSession.sharedInstance().setActive(true)
} catch {
    print(error) // NSOSStatusErrorDomain - 560557684: Session activation failed
}
```

That error code maps to AVAudioSession.ErrorCode.cannotInterruptOthers. Once in this state, all subsequent attempts to play different pieces of content will fail. However, things will start working normally if the user opens the app on their phone and tries again from CarPlay (while the app is in the foreground on their phone). I'm not sure why it would behave this way and want to note that I do have the audio background mode capability enabled. Has anyone else encountered this? Are there any workarounds or changes I could make to prevent this from happening?
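A workaround worth trying (an untested sketch, not a confirmed fix): defer session activation until the CarPlay scene has actually connected, and retry once or twice on failure in case the system is still tearing down another session. The scene-delegate hook, retry count, and delay below are all assumptions.

```swift
import AVFoundation
import CarPlay
import UIKit

final class CarPlaySceneDelegate: UIResponder, CPTemplateApplicationSceneDelegate {
    func templateApplicationScene(_ templateApplicationScene: CPTemplateApplicationScene,
                                  didConnect interfaceController: CPInterfaceController) {
        activateSessionWithRetry()
    }

    private func activateSessionWithRetry(attempts: Int = 3) {
        let session = AVAudioSession.sharedInstance()
        do {
            try session.setCategory(.playback, mode: .spokenAudio)
            try session.setActive(true)
        } catch {
            guard attempts > 1 else {
                print("Activation failed after retries: \(error)")
                return
            }
            // Hypothetical backoff: give the system a moment before retrying.
            DispatchQueue.main.asyncAfter(deadline: .now() + 0.5) {
                self.activateSessionWithRetry(attempts: attempts - 1)
            }
        }
    }
}
```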
Replies: 0 · Boosts: 1 · Views: 199 · Activity: Apr ’25

AVAudioSession dropping wired USBAudio source
My app inputs electrical waveforms from an IV485B39 2-channel USB device using an AVAudioSession. Before attempting to acquire data I make sure the input device is available as follows:

```objc
AVAudioSession *audioSession = [AVAudioSession sharedInstance];
[audioSession setCategory:AVAudioSessionCategoryRecord error:&err];
NSArray *inputs = [audioSession availableInputs];
```

I have been using this code for about 10 years. My app is scriptable, so a user can acquire data from the IV485B39 multiple times with various parameter settings (sampling rates and sample durations). Recently the scripts have been failing to complete, and what I have noticed is that when it fails, the list of available inputs is missing the USBAudio input. While debugging I have noticed that when working properly the list of inputs includes both the internal microphone and the USBAudio device, as shown below.

```
VIB_TimeSeriesViewController:***Available inputs = (
    "<AVAudioSessionPortDescription: 0x11584c7d0, type = MicrophoneBuiltIn; name = iPad Microphone; UID = Built-In Microphone; selectedDataSource = Front>",
    "<AVAudioSessionPortDescription: 0x11584cae0, type = USBAudio; name = 485B39 200095708064650803073200616; UID = AppleUSBAudioEngine:Digiducer.com :485B39 200095708064650803073200616:000957 200095708064650803073200616:1; selectedDataSource = (null)>"
)
```

But when it fails I only see the built-in microphone:

```
VIB_TimeSeriesViewController:***Available inputs = (
    "<AVAudioSessionPortDescription: 0x11584cef0, type = MicrophoneBuiltIn; name = iPad Microphone; UID = Built-In Microphone; selectedDataSource = Front>"
)
```

If I only see the built-in microphone, I immediately repeat the same three lines of code, and most of the time "inputs" then contains both the internal microphone and the USBAudio device. This fix always works on my M2 iPad Pro and my iPhone 14, but some of my customers have older devices, and even with 3 tries they still get faults about 1 in 10 tries. I rolled back my code to a released version from about 12 months ago, where I know we never had this problem, and compiled it against the current libraries; the problem still exists. I assume this is a problem caused by a change in the AVAudioSession framework libraries. I need to find a way to work around the issue or get the library fixed.
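Until the underlying change is identified, the retry described above can be wrapped so every acquisition path goes through it. This is only a sketch of that workaround in Swift; the attempt count and delay are guesses to tune against the older devices that still fail.

```swift
import AVFoundation

/// Polls availableInputs until a USB audio port appears, up to `attempts` tries.
func waitForUSBInput(attempts: Int = 5, delay: TimeInterval = 0.3) -> AVAudioSessionPortDescription? {
    let session = AVAudioSession.sharedInstance()
    try? session.setCategory(.record)
    for _ in 0..<attempts {
        if let usb = session.availableInputs?.first(where: { $0.portType == .usbAudio }) {
            return usb
        }
        // Re-query after a short pause; the port list sometimes repopulates late.
        Thread.sleep(forTimeInterval: delay)
    }
    return nil
}
```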
Replies: 0 · Boosts: 0 · Views: 120 · Activity: May ’25

Making music….
Maybe a bit of a simplistic question, but…. Best ways to create music - programmatically - within an app? I have a few small, simple games that I would like to add some background music to. I’ve fiddled with AVAudio, SKAudio, and now AudioKit. I’m just not quite finding the holy grail of music generation. I don’t know if it’s more technique or tool. So I thought I’d hit up the pool of minds here for some suggestions…?
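Not a complete answer, but one concrete starting point is AVAudioEngine with an AVAudioUnitSampler driving MIDI notes. A minimal sketch (the note pattern and timing are arbitrary, and with no SoundFont loaded the sampler plays a plain default tone):

```swift
import AVFoundation

let engine = AVAudioEngine()
let sampler = AVAudioUnitSampler()

engine.attach(sampler)
engine.connect(sampler, to: engine.mainMixerNode, format: nil)
do { try engine.start() } catch { print("engine failed to start: \(error)") }

// Play a simple C-major arpeggio as background texture.
let notes: [UInt8] = [60, 64, 67, 72]
for (i, note) in notes.enumerated() {
    DispatchQueue.main.asyncAfter(deadline: .now() + Double(i) * 0.5) {
        sampler.startNote(note, withVelocity: 80, onChannel: 0)
        DispatchQueue.main.asyncAfter(deadline: .now() + 0.4) {
            sampler.stopNote(note, onChannel: 0)
        }
    }
}
```

Loading a SoundFont via sampler.loadSoundBankInstrument(at:program:bankMSB:bankLSB:) gets real instrument timbres out of the same graph.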
Replies: 0 · Boosts: 0 · Views: 141 · Activity: Jun ’25

AutoMix Api Available in MusicKit
Is there any way for me to use an AutoMix API in my iOS apps? I would play tracks using the Apple Music API and use AutoMix to attempt to merge tracks. Is this feature/API available to developers?
Replies: 0 · Boosts: 0 · Views: 129 · Activity: Jun ’25

AVSpeechSynthesisVoices available on device
Hello there! Is there any list of voices that are always available on iOS/iPadOS devices? It seems that AVSpeechSynthesisVoice(identifier: "com.apple.voice.compact.en-US.Samantha") is always available on all devices. I thought that AVSpeechSynthesisVoice(identifier: "com.apple.ttsbundle.siri_Nicky_en-US_compact") and AVSpeechSynthesisVoice(identifier: "com.apple.ttsbundle.siri_Aaron_en-US_compact") were available by default on certain newer devices. Is this true? I also noticed that on the same iPad where I was using those 2 voices (Nicky and Aaron) - when I updated to the iPadOS 26 beta, those voices were no longer available. Any information you can share about which voices should be reliably available on which devices would be extremely helpful for our development. Thanks so much!
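One way to avoid depending on any fixed identifier is to resolve the voice at runtime from what's actually installed, falling back by language. A sketch using only public AVSpeechSynthesisVoice API:

```swift
import AVFoundation

/// Returns the requested voice if installed, otherwise any installed en-US
/// voice, otherwise the system's default voice for en-US.
func resolveVoice(preferredIdentifier: String) -> AVSpeechSynthesisVoice? {
    if let exact = AVSpeechSynthesisVoice(identifier: preferredIdentifier) {
        return exact
    }
    let installed = AVSpeechSynthesisVoice.speechVoices()
    return installed.first { $0.language == "en-US" }
        ?? AVSpeechSynthesisVoice(language: "en-US")
}

let voice = resolveVoice(preferredIdentifier: "com.apple.ttsbundle.siri_Nicky_en-US_compact")
print(voice?.identifier ?? "no voice found")
```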
Replies: 0 · Boosts: 0 · Views: 191 · Activity: Jun ’25

AudioPlaybackIntent usage requesting a screen unlock
Hello, I have an AppIntent that uses AudioPlaybackIntent to trigger my app to open and start an AVPlayer that plays back a media stream I control. When the phone is unlocked, everything works as I expect: the app opens and plays the audio. However, when the phone is locked, any attempt to invoke the intent causes a "Request Code" dialog to be displayed. This seems counter to what I would expect from AudioPlaybackIntent. Am I able to accomplish what I'm after here with AppIntents? Does the fact that I'm using openAppWhenRun require the phone to be unlocked somehow?

```swift
import AppIntents
import Foundation

struct PlayStationAppIntent: AudioPlaybackIntent {
    static var title: LocalizedStringResource = "Play radio station"
    static var description: IntentDescription = .init("Play radio station")
    static var notification: Notification.Name = .init("playStation")
    static var openAppWhenRun: Bool = true

    init() {}

    func perform() async throws -> some IntentResult {
        AudioPlayerService.shared.play()
        return .result()
    }
}
```
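One variable worth isolating (an assumption on my part, not documented behavior): openAppWhenRun forces the app to the foreground, which a locked device can't do without authentication. If playback can start without any UI, a variant of the intent that stays in the background might avoid the prompt:

```swift
import AppIntents

struct PlayStationInBackgroundIntent: AudioPlaybackIntent {
    static var title: LocalizedStringResource = "Play radio station"
    // No openAppWhenRun here: the intent runs without foregrounding the app,
    // which may sidestep the unlock requirement (untested assumption).
    func perform() async throws -> some IntentResult {
        AudioPlayerService.shared.play() // same service as in the original post
        return .result()
    }
}
```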
Replies: 0 · Boosts: 0 · Views: 165 · Activity: Jul ’25

Terminal Command or AppleScript to Set Audio Balance to Perfect Center?
Hi everyone, I'm looking for a way to programmatically set the left/right audio balance to perfect center (50/50) using either a Terminal command or AppleScript.

Background: The audio balance slider in System Settings > Sound > Output & Input works functionally, but I have difficulty determining when it's positioned at the exact center point. The visual nature of the slider makes it challenging for me to achieve the precision I need, and I end up adjusting it repeatedly trying to get it perfectly centered.

What I'm looking for:
- A Terminal command that can set the audio balance to exact center
- An AppleScript that accomplishes the same thing
- Any other programmatic method to ensure perfect 50/50 balance

I've tried searching through the defaults command documentation and Core Audio frameworks but haven't found the right approach yet. Has anyone successfully automated this setting before? Any help would be greatly appreciated! Thanks in advance, Dylan
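Core Audio does expose a balance property on output devices, so a small Swift script may do the job. A sketch follows; kAudioDevicePropertyVirtualMainBalance is my best guess at the current selector (it was named VirtualMasterBalance before macOS 12), and not every output device publishes it, so treat this as unverified:

```swift
import CoreAudio

// Find the default output device.
var deviceID = AudioDeviceID(0)
var size = UInt32(MemoryLayout<AudioDeviceID>.size)
var addr = AudioObjectPropertyAddress(
    mSelector: kAudioHardwarePropertyDefaultOutputDevice,
    mScope: kAudioObjectPropertyScopeGlobal,
    mElement: kAudioObjectPropertyElementMain)
AudioObjectGetPropertyData(AudioObjectID(kAudioObjectSystemObject), &addr, 0, nil, &size, &deviceID)

// Set balance to exact center: 0.0 = full left, 1.0 = full right, 0.5 = center.
var balance: Float32 = 0.5
var balanceAddr = AudioObjectPropertyAddress(
    mSelector: kAudioDevicePropertyVirtualMainBalance,
    mScope: kAudioObjectPropertyScopeOutput,
    mElement: kAudioObjectPropertyElementMain)
let status = AudioObjectSetPropertyData(deviceID, &balanceAddr, 0, nil,
                                        UInt32(MemoryLayout<Float32>.size), &balance)
print(status == noErr ? "Balance centered" : "Failed with OSStatus \(status)")
```

Saved as balance.swift, this runs from Terminal with `swift balance.swift`, which would cover the Terminal-command requirement.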
Replies: 0 · Boosts: 0 · Views: 102 · Activity: Jul ’25

occasional glitches and empty buffers when using AudioFileStream + AVAudioConverter
I'm streaming mp3 audio data using URLSession/AudioFileStream/AVAudioConverter and getting occasional silent buffers and glitches (little bleeps and whoops as opposed to clicks). The issues are present in an offline test, so this isn't an issue of underruns. Doing some buffering on the input coming from the URLSession (URLSessionDataTask) reduces the glitches/silent buffers to rather infrequent, but they do still happen occasionally.

```swift
var bufferedData = Data()

func parseBytes(data: Data) {
    bufferedData.append(data)
    // XXX: this buffering reduces glitching to rather infrequent. But why?
    if bufferedData.count > 32768 {
        bufferedData.withUnsafeBytes { (bytes: UnsafeRawBufferPointer) in
            guard let baseAddress = bytes.baseAddress else { return }
            let result = AudioFileStreamParseBytes(audioStream!, UInt32(bufferedData.count), baseAddress, [])
            if result != noErr {
                print("❌ error parsing stream: \(result)")
            }
        }
        bufferedData = Data()
    }
}
```

No errors are returned by AudioFileStream or AVAudioConverter.

```swift
func handlePackets(data: Data,
                   packetDescriptions: [AudioStreamPacketDescription]) {
    guard let audioConverter else { return }

    var maxPacketSize: UInt32 = 0
    for packetDescription in packetDescriptions {
        maxPacketSize = max(maxPacketSize, packetDescription.mDataByteSize)
        if packetDescription.mDataByteSize == 0 {
            print("EMPTY PACKET")
        }
        if Int(packetDescription.mStartOffset) + Int(packetDescription.mDataByteSize) > data.count {
            print("❌ Invalid packet: offset \(packetDescription.mStartOffset) + size \(packetDescription.mDataByteSize) > data.count \(data.count)")
        }
    }

    let bufferIn = AVAudioCompressedBuffer(format: inFormat!,
                                           packetCapacity: AVAudioPacketCount(packetDescriptions.count),
                                           maximumPacketSize: Int(maxPacketSize))
    bufferIn.byteLength = UInt32(data.count)
    for i in 0 ..< Int(packetDescriptions.count) {
        bufferIn.packetDescriptions![i] = packetDescriptions[i]
    }
    bufferIn.packetCount = AVAudioPacketCount(packetDescriptions.count)
    _ = data.withUnsafeBytes { ptr in
        memcpy(bufferIn.data, ptr.baseAddress, data.count)
    }

    if verbose {
        print("handlePackets: \(data.count) bytes")
    }

    // Setup input provider closure
    var inputProvided = false
    let inputBlock: AVAudioConverterInputBlock = { packetCount, statusPtr in
        if !inputProvided {
            inputProvided = true
            statusPtr.pointee = .haveData
            return bufferIn
        } else {
            statusPtr.pointee = .noDataNow
            return nil
        }
    }

    // Loop until converter runs dry or is done
    while true {
        let bufferOut = AVAudioPCMBuffer(pcmFormat: outFormat, frameCapacity: 4096)!
        bufferOut.frameLength = 0
        var error: NSError?
        let status = audioConverter.convert(to: bufferOut, error: &error, withInputFrom: inputBlock)
        switch status {
        case .haveData:
            if verbose { print("✅ convert returned haveData: \(bufferOut.frameLength) frames") }
            if bufferOut.frameLength > 0 {
                if bufferOut.isSilent {
                    print("(haveData) SILENT BUFFER at frame \(totalFrames), pending: \(pendingFrames), inputPackets=\(bufferIn.packetCount), outputFrames=\(bufferOut.frameLength)")
                }
                outBuffers.append(bufferOut)
                totalFrames += Int(bufferOut.frameLength)
            }
        case .inputRanDry:
            if verbose { print("🔁 convert returned inputRanDry: \(bufferOut.frameLength) frames") }
            if bufferOut.frameLength > 0 {
                if bufferOut.isSilent {
                    print("(inputRanDry) SILENT BUFFER at frame \(totalFrames), pending: \(pendingFrames), inputPackets=\(bufferIn.packetCount), outputFrames=\(bufferOut.frameLength)")
                }
                outBuffers.append(bufferOut)
                totalFrames += Int(bufferOut.frameLength)
            }
            return // wait for next handlePackets
        case .endOfStream:
            if verbose { print("✅ convert returned endOfStream") }
            return
        case .error:
            if verbose { print("❌ convert returned error") }
            if let error = error {
                print("error converting: \(error.localizedDescription)")
            }
            return
        @unknown default:
            fatalError()
        }
    }
}
```
Replies: 0 · Boosts: 0 · Views: 567 · Activity: Jul ’25

How to detect when iOS Camera app starts video recording (with Allow Audio Playback ON)?
Since iOS 18, the system setting “Allow Audio Playback” (enabled by default) allows third-party app audio to continue playing while the user is recording video with the Camera app. This has created a problem for the app I’m developing.

➡️ The problem: My app plays continuous audio in both foreground and background states. If the user starts recording video using the iOS Camera app, the app’s audio — still playing in the background — gets captured in the video, which is obviously unintended behavior. Yes, the user could stop the app manually before starting the video recording, but that can’t be guaranteed. As a developer, I need a way to stop the app’s audio before the video recording begins. So far, I haven’t found a reliable way to detect when video recording starts if “Allow Audio Playback” is ON.

➡️ What I’ve tried:
- AVAudioSession.interruptionNotification → doesn’t fire
- devicesChangedEventStream → not triggered

I don’t want to request mic permission (the app doesn’t use the mic). Also, disabling the app from playing audio in the background isn’t an option, as it is a crucial part of the user experience.

➡️ What I need: A reliable, supported way to detect when the Camera app begins video recording, without requiring mic access — so I can stop audio and avoid unintentional overlap with the user’s recordings. Any official guidance, workarounds, or AVFoundation techniques would be greatly appreciated. Thanks.
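One more notification that may be worth testing, since it exists specifically for "another app's primary audio started" situations: AVAudioSession.silenceSecondaryAudioHintNotification. I have not verified that it fires when the Camera app records with Allow Audio Playback on, so this is a candidate to try rather than a known fix:

```swift
import AVFoundation

NotificationCenter.default.addObserver(
    forName: AVAudioSession.silenceSecondaryAudioHintNotification,
    object: AVAudioSession.sharedInstance(),
    queue: .main
) { note in
    guard let raw = note.userInfo?[AVAudioSessionSilenceSecondaryAudioHintTypeKey] as? UInt,
          let type = AVAudioSession.SilenceSecondaryAudioHintType(rawValue: raw) else { return }
    if type == .begin {
        // Another app's primary audio began; pause playback here (and resume on .end).
    }
}
```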
Replies: 0 · Boosts: 0 · Views: 290 · Activity: Aug ’25

Regression: RealityKit spatial audio crackles and pops on iOS 26.0 beta 5 (FB19423059)
RealityKit spatial audio crackles and pops on iOS 26.0 beta 5. It works correctly on iOS 18.6 and visionOS 26.0 beta 5. The APIs used are AudioPlaybackController, Entity.prepareAudio, Entity.play Videos of the expected and observed behavior are attached to the feedback FB19423059. The audio should be a consistent, repeating sound, but it seems oddly abbreviated and the volume varies unexpectedly. Thank you for investigating this issue.
Replies: 0 · Boosts: 0 · Views: 318 · Activity: Aug ’25

Is it possible to use Apple’s system sounds, specifically ringtones like Reflection or Marimba, in my iOS app?
I want to use Apple ringtones in my app so that the app can play some of them. Could this lead to any legal issues or App Store rejection? What are the guidelines and potential concerns I should be aware of?
Replies: 0 · Boosts: 0 · Views: 190 · Activity: Aug ’25

Homepod Crossfade
I’m running HomePod OS 26 on two HomePod minis and OS 18.6 on my main HomePod (original). I’ve enabled Crossfade in the Home app, and I’m playing Apple Music directly on the HomePod mini. Crossfade just doesn’t work on any HomePod. I can understand it not working on the original HomePod, but why isn’t it working on the minis running OS 26? I’ve tried disabling and re-enabling Crossfade, rebooting the HomePods, etc., but nothing works.
Replies: 0 · Boosts: 0 · Views: 354 · Activity: Aug ’25

Use of MusicKit in mobile app to manipulate audio with EQ
Hello, I want to know if there are any restrictions with MusicKit when used in a mobile app to manipulate audio with an EQ on tracks coming from Apple Music, without modifying the actual track structure/data of course, just the audio output.
Replies: 0 · Boosts: 0 · Views: 415 · Activity: Sep ’25

Default Voices for AVSpeechUtterance
It appears iOS only comes with low-quality voices installed; iOS requires the user to go into Settings to download higher-quality voices for use with AVSpeechUtterance. There doesn't seem to be any API that can make this process easier for the app user. Is there a way / API that would allow an app to download and use a higher-quality voice? Will Apple ever install higher-quality voices by default? We really want to use the text-to-speech API in iOS, but the very high amount of user friction involved in getting high-quality voices is stopping us. I would appreciate a response. Thanks.
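There is no public API I know of that downloads a voice on the app's behalf, but an app can at least detect whether a better voice is installed and steer the user toward Settings otherwise. A sketch:

```swift
import AVFoundation

/// Returns the best installed voice for a language, preferring premium, then enhanced.
func bestInstalledVoice(for language: String = "en-US") -> AVSpeechSynthesisVoice? {
    let candidates = AVSpeechSynthesisVoice.speechVoices().filter { $0.language == language }
    return candidates.first { $0.quality == .premium }
        ?? candidates.first { $0.quality == .enhanced }
        ?? candidates.first
}

if let voice = bestInstalledVoice(), voice.quality == .default {
    // Only a compact voice is installed. Prompt the user toward
    // Settings > Accessibility > Spoken Content > Voices (path may vary by iOS version).
    print("Only compact voice available: \(voice.name)")
}
```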
Replies: 0 · Boosts: 0 · Views: 795 · Activity: Sep ’25

macOS Tahoe: Can't setup AVAudioEngine with playthrough
Hi, I'm trying to set up an AVAudioEngine for USB audio recording and monitoring playthrough. As soon as I try to set up playthrough I get an error in the console:

```
AVAEInternal.h:83 required condition is false: [AVAudioEngineGraph.mm:1361:Initialize: (IsFormatSampleRateAndChannelCountValid(outputHWFormat))]
```

Any ideas how to fix it?

```swift
// Set the input device
try? setupInputDevice(deviceID: inputDevice)
let input = audioEngine.inputNode

// Force a stereo format
let inputHWFormat = input.inputFormat(forBus: 0)
let stereoFormat = AVAudioFormat(commonFormat: inputHWFormat.commonFormat,
                                 sampleRate: inputHWFormat.sampleRate,
                                 channels: 2,
                                 interleaved: inputHWFormat.isInterleaved)
guard let format = stereoFormat else {
    throw AudioError.deviceSetupFailed(-1)
}

print("Input format: \(inputHWFormat)")
print("Forced stereo format: \(format)")

audioEngine.attach(monitorMixer)
audioEngine.connect(input, to: monitorMixer, format: format)

// MonitorMixer -> MainMixer (Output)
// Problem here; format: format also breaks.
audioEngine.connect(monitorMixer, to: audioEngine.mainMixerNode, format: nil)
```
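That assertion usually indicates the output hardware format the engine validates against has a zero sample rate or channel count; that is an educated guess here, not a confirmed diagnosis. Logging what both endpoints report before wiring the graph should narrow it down:

```swift
// Inspect the hardware formats the engine will validate against.
let outputHWFormat = audioEngine.outputNode.outputFormat(forBus: 0)
let inHWFormat = audioEngine.inputNode.inputFormat(forBus: 0)
print("Output HW: \(outputHWFormat.sampleRate) Hz, \(outputHWFormat.channelCount) ch")
print("Input  HW: \(inHWFormat.sampleRate) Hz, \(inHWFormat.channelCount) ch")

// If the output side reports 0 Hz or 0 channels, the output device likely
// isn't configured yet; select it before building the graph. If the rates
// merely differ, connecting with format: nil lets the mixer convert, so a
// plain mismatch alone shouldn't trip this assertion.
```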
Replies: 0 · Boosts: 0 · Views: 206 · Activity: Oct ’25

Asaf production suite AAX 2.91.9
Hi. I am mixing content destined for Vision Pro, locked to video. I have the AAX installer, but the ASAF video player demonstrated in the QuickTime videos is not included in the install package for Pro Tools. Would it be possible to post a link?
Replies: 0 · Boosts: 0 · Views: 642 · Activity: Oct ’25

Apple Music for DJ App
Hi there, I recently launched a DJ app on the Mac App Store, and was wondering how I could access songs for mixing purposes via Apple Music, just like Serato, Rekordbox, djay, and other DJ apps do? Thanks, Gunek
Replies: 0 · Boosts: 0 · Views: 348 · Activity: Nov ’25

AVAudioEngine obtains channel audio data
Currently, I have successfully used ChannelMap to map hardware input channels and obtained audio data from the hardware device's MIC and OTG inputs. Additionally, I have used ChannelMap to map output channels to freely feed data for playback to each output channel. However, I now have a problem. I have a hardware device that only has output channels (no input channels), and the system has set this hardware device as the default playback device. In this case, how can I obtain the audio data being played to the output channels for modification?
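If the data to capture is the app's own playback (other apps' output isn't reachable this way), a tap on the engine's main mixer exposes the mixed signal before it reaches the hardware. A sketch, assuming an AVAudioEngine pipeline; note a tap is read-only, so actual modification has to happen upstream, for example in an effect node or an AVAudioSourceNode:

```swift
import AVFoundation

let engine = AVAudioEngine()
// ... attach and connect source/player nodes to engine.mainMixerNode ...

let tapFormat = engine.mainMixerNode.outputFormat(forBus: 0)
engine.mainMixerNode.installTap(onBus: 0, bufferSize: 1024, format: tapFormat) { buffer, time in
    // buffer.floatChannelData exposes per-channel samples for analysis.
    let channels = Int(buffer.format.channelCount)
    print("Got \(buffer.frameLength) frames on \(channels) channels at \(time.sampleTime)")
}
```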
Replies: 0 · Boosts: 0 · Views: 292 · Activity: Dec ’25

🎧 Define if headphones are the only playing device for current session
I need to apply a headphone-specific scenario only when headphones are the sole active playback device in my iOS audio app. The problem is that there is no definitive way to determine that headphones are the sole active playback device. AVAudioSession.currentRoute.outputs port types don't guarantee headphones:

```swift
let session = AVAudioSession.sharedInstance()
let outputs = session.currentRoute.outputs
let headphonesOnly = outputs.count == 1 &&
    (outputs.first?.portType == .headphones ||
     outputs.first?.portType == .bluetoothA2DP ||
     outputs.first?.portType == .bluetoothHFP ||
     outputs.first?.portType == .bluetoothLE)
```

The issue with the code above is that the listed Bluetooth profiles (A2DP, HFP, LE) can be used by any audio device, not only headphones. Is there any public API on iOS that can:

- Distinguish Bluetooth headphones vs Bluetooth speakers when both use A2DP/LE?
- Expose the user’s “Device Type” classification (headphones / speaker / car stereo, etc.) that is shown in Settings → Bluetooth → Device Type?
- Provide a more reliable way to know “this route is definitely headphones” for A2DP devices, beyond portType and portName string heuristics?
Replies: 0 · Boosts: 0 · Views: 107 · Activity: Feb ’26

SiriKit: INPlayMediaIntent with a targeted speaker
I've got a streaming radio app that loads an HLS stream into an AVAudioPlayer. I've set up an Intents extension that notifies SiriKit that my app must handle the INPlayMediaIntent in app, and I'm able to successfully initiate the stream playing from my phone using the string "Play ". My intent handler in the app looks like this:

```swift
completionHandler(INPlayMediaIntentResponse(code: .success, userActivity: nil))
DispatchQueue.main.async {
    AudioPlayerService.shared.play()
}
```

The audio player service, in its init, does the following:

```swift
try AVAudioSession.sharedInstance().setCategory(
    .playback,
    mode: .default,
    policy: .longFormAudio
)
```

Additionally, in my Info.plist, I have the AirPlay optimization policy set to Long Form Audio. Having said all that, when I try to route my app to play "on a given HomePod speaker" ("play on "), the speaker routing instructions are never followed. I've looked and not been able to find where I might be able to instruct my app to follow the correct path here. I was assuming I could not trigger this behavior manually, as I believe I don't really have any control over AirPlay routing. Is there any guidance for working with SiriKit to do the right thing with regard to audio routing?
Replies: 0 · Boosts: 0 · Views: 146 · Activity: Feb ’26