Hi, in my project I am using AVFoundation to record audio. We use the following AVAudioMixerNode method to capture the audio packets:
func installTap(
    onBus bus: AVAudioNodeBus,
    bufferSize: AVAudioFrameCount,
    format: AVAudioFormat?,
    block tapBlock: @escaping AVAudioNodeTapBlock
)
It works perfectly fine.
But in production, for a small percentage of users, we are seeing an issue where, after a few packets are recorded, recording stops automatically without the audio engine being stopped. Can anyone help explain why this happens? I have also observed mediaServicesWereResetNotification and added a log when it is received, but when this issue happens I don't see any occurrence of that log. Also, is there any callback when the engine stops?
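For reference, this is the kind of diagnostic logging I can add around the engine. As far as I know there is no direct "engine stopped" callback, only the configuration-change and interruption notifications plus the isRunning flag; the engine reference below is a stand-in for our AVAudioEngine instance.

import AVFoundation

// Sketch of extra diagnostics (`engine` is our AVAudioEngine instance).
NotificationCenter.default.addObserver(
    forName: .AVAudioEngineConfigurationChange,
    object: engine,
    queue: .main
) { _ in
    // Posted when the engine's graph is reconfigured (e.g. route or hardware changes).
    print("Engine configuration changed, isRunning = \(engine.isRunning)")
}

NotificationCenter.default.addObserver(
    forName: AVAudioSession.interruptionNotification,
    object: AVAudioSession.sharedInstance(),
    queue: .main
) { note in
    let type = note.userInfo?[AVAudioSessionInterruptionTypeKey] as? UInt
    print("Audio session interruption, type = \(String(describing: type))")
}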
When multiple identical songs are added to a playlist, Playlist.Entry.id uses a suffix-based identifier (e.g. songID_0, songID_1, etc.). Removing one entry causes others to shift, changing their .id values. This leads to diffing errors and collection view crashes in SwiftUI or UIKit when entries are updated.
Steps to Reproduce:
Add the same song to a playlist multiple times.
Observe .id.rawValue of entries (e.g. i.SONGID_0, i.SONGID_1).
Remove one entry.
Fetch playlist again — note the other IDs have shifted.
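For reference, a minimal sketch of how the entry IDs can be read back (playlist lookup simplified; assumes library authorization has already been granted):

import MusicKit

// Fetch one library playlist, load its entries, and print their IDs.
func dumpEntryIDs() async throws {
    var request = MusicLibraryRequest<Playlist>()
    request.limit = 1
    guard let playlist = try await request.response().items.first else { return }

    let detailed = try await playlist.with([.entries])
    if let entries = detailed.entries {
        for entry in entries {
            print(entry.id.rawValue)   // e.g. i.SONGID_0, i.SONGID_1, ...
        }
    }
}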
FB18879062
I am working on an app that needs to play an audio alert in the background.
During debug testing, the function works fine,
but when I test standalone, without the debugger attached, it does not work; the sound only plays once I go back into the app.
Is there any way to trace this issue?
Environment: Device: iPad 10th generation; OS: iOS 18.3.2
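For context, this is roughly how I configure the session for background playback (a sketch; it assumes UIBackgroundModes in Info.plist contains "audio"):

import AVFoundation

// Minimal background-playback setup sketch. Ambient categories are silenced
// in the background even with the background mode enabled, so .playback is used.
func configureBackgroundPlayback() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playback, mode: .default)
    try session.setActive(true)
}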
We're using AVAudioPlayer to play a sound when a button is tapped. In our use case, this button can be tapped very frequently — roughly every 0.1 to 0.2 seconds. Each tap triggers the following function:
var audioPlayer: AVAudioPlayer?
let audioSession = AVAudioSession.sharedInstance() // session used below

func soundPlay(resource: String, type: String) {
    guard let path = Bundle.main.path(forResource: resource, ofType: type) else {
        return
    }
    do {
        audioPlayer = try AVAudioPlayer(contentsOf: URL(fileURLWithPath: path))
        audioPlayer!.delegate = self
        try audioSession.setCategory(.playback)
    } catch {
        return
    }
    self.audioPlayer!.play()
}
The issue is that under high-frequency tapping (especially around 0.1–0.15s intervals), the app occasionally crashes. The crash does not occur every time, but it happens randomly — sometimes within 30 seconds, within 1 minute, or even 3 minutes of continuous tapping.
Interestingly, adding a delay of 0.2 seconds between button taps seems to prevent the crash entirely. Delays shorter than 0.2 seconds (e.g., 0.15s, 0.18s) still result in occasional crashes.
My questions are:
Is this expected behavior from AVAudioPlayer or AVAudioSession?
Could this be a known issue or a limitation in AVFoundation?
Is there any documentation or guidance on handling frequent sound playback safely?
Any insights or recommendations on how to handle rapid, repeated audio playback more reliably would be appreciated.
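For comparison, this is the direction I'm considering: preload one player per sound and rewind it on every tap instead of allocating a new AVAudioPlayer each time. This is only a sketch; the type and method names are illustrative, not from our codebase.

import AVFoundation

// Keep a single preloaded player and restart it from 0 on each tap.
final class TapSoundPlayer {
    private var player: AVAudioPlayer?

    func preload(resource: String, type: String) {
        guard let url = Bundle.main.url(forResource: resource, withExtension: type) else { return }
        player = try? AVAudioPlayer(contentsOf: url)
        player?.prepareToPlay()
        try? AVAudioSession.sharedInstance().setCategory(.playback)
        try? AVAudioSession.sharedInstance().setActive(true)
    }

    func playTap() {
        guard let player else { return }
        player.currentTime = 0   // rewind instead of allocating a new player
        player.play()
    }
}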
Hello!
I have a problem getting extended album info from the user's library. Note that the app is authorized to use Apple Music according to the documentation.
I get albums from the user's library with this code:
func getLibraryAlbums() async throws -> MusicItemCollection<Album> {
    let request = MusicLibraryRequest<Album>()
    let response = try await request.response()
    return response.items
}
This is an example of the albums request response:
{
"data" : [
{
"meta" : {
"musicKit_identifierSet" : {
"isLibrary" : true,
"id" : "1945382328890400383",
"dataSources" : [
"localLibrary",
"legacyModel"
],
"type" : "Album",
"deviceLocalID" : {
"databaseID" : "37336CB19CF51727",
"value" : "1945382328890400383"
},
"catalogID" : {
"kind" : "adamID",
"value" : "1173535954"
}
}
},
"id" : "1945382328890400383",
"type" : "library-albums",
"attributes" : {
"artwork" : {
"url" : "musicKit:\/\/artwork\/transient\/{w}x{h}?id=4A2F444C%2D336D%2D49EA%2D90C8%2D13C547A5B95B",
"width" : 0,
"height" : 0
},
"genreNames" : [
"Pop"
],
"trackCount" : 1,
"artistName" : "Сара Окс",
"isAppleDigitalMaster" : false,
"audioVariants" : [
"lossless"
],
"playParams" : {
"catalogId" : "1173535954",
"id" : "1945382328890400383",
"musicKit_persistentID" : "1945382328890400383",
"kind" : "album",
"musicKit_databaseID" : "37336CB19CF51727",
"isLibrary" : true
},
"name" : "Нимфомания - Single",
"isCompilation" : false
}
},
{
"meta" : {
"musicKit_identifierSet" : {
"isLibrary" : true,
"id" : "-8570883332059662437",
"dataSources" : [
"localLibrary",
"legacyModel"
],
"type" : "Album",
"deviceLocalID" : {
"value" : "-8570883332059662437",
"databaseID" : "37336CB19CF51727"
},
"catalogID" : {
"kind" : "adamID",
"value" : "1618488499"
}
}
},
"id" : "-8570883332059662437",
"type" : "library-albums",
"attributes" : {
"isCompilation" : false,
"genreNames" : [
"Pop"
],
"trackCount" : 1,
"artistName" : "TIMOFEEW & KURYANOVA",
"isAppleDigitalMaster" : false,
"audioVariants" : [
"lossless"
],
"playParams" : {
"catalogId" : "1618488499",
"musicKit_persistentID" : "-8570883332059662437",
"kind" : "album",
"id" : "-8570883332059662437",
"musicKit_databaseID" : "37336CB19CF51727",
"isLibrary" : true
},
"artwork" : {
"url" : "musicKit:\/\/artwork\/transient\/{w}x{h}?id=BEA6DBD3%2D8E14%2D4A10%2D97BE%2D8908C7C5FC2C",
"width" : 0,
"height" : 0
},
"name" : "Не звони - Single"
}
},
...
]
}
In AlbumView, using the task: view modifier, I request extended information about the album with this code:
func loadExtendedInfo(_ album: Album) async throws -> Album {
    let response = try await album.with([.tracks, .audioVariants, .recordLabels], preferredSource: .library)
    return response
}
But in the response some of the fields are always nil, for example recordLabels, releaseDate, url, editorialNotes, and copyright.
Can you tell me what I'm doing wrong?
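For reference, this is the comparison I can make against the catalog: request the album by the adamID from playParams and load the same extended properties. This is only a sketch; the hard-coded ID is the one from the response above.

import MusicKit

// Fetch the catalog counterpart of the library album and load extended fields.
func loadCatalogAlbum() async throws -> Album? {
    let request = MusicCatalogResourceRequest<Album>(
        matching: \.id,
        equalTo: MusicItemID("1173535954")
    )
    let response = try await request.response()
    return try await response.items.first?.with([.tracks, .recordLabels, .audioVariants])
}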
Hello,
I have a CarPlay navigation app and use AVSpeechSynthesizer to speak directions to the user. Everything works great on my CarPlay simulator as well as when plugged into my GMC truck. However, I found out yesterday that for one of my users with a Ford truck the audio would cut in and out.
After much troubleshooting, I was able to replicate this on my own truck when using Bluetooth to connect to CarPlay. My user was also utilizing Bluetooth. Has anyone else experienced this? Is there a fix to the problem?
import SwiftUI
import AVFoundation

class TextToSpeechService: NSObject, ObservableObject, AVSpeechSynthesizerDelegate {
    private var speechSynthesizer = AVSpeechSynthesizer()
    static let shared = TextToSpeechService()

    override init() {
        super.init()
        speechSynthesizer.delegate = self
    }

    func configureAudioSession() {
        speechSynthesizer.delegate = self
        do {
            try AVAudioSession.sharedInstance().setCategory(.playback, mode: .voicePrompt, options: [.mixWithOthers, .allowBluetooth])
        } catch {
            print("Failed to set audio session category: \(error.localizedDescription)")
        }
    }

    func speak(_ text: String) {
        Task(priority: .high) {
            let speechUtterance = AVSpeechUtterance(string: text)
            speechUtterance.voice = AVSpeechSynthesisVoice(language: AVSpeechSynthesisVoice.currentLanguageCode())
            try AVAudioSession.sharedInstance().setActive(true, options: .notifyOthersOnDeactivation)
            speechSynthesizer.speak(speechUtterance)
        }
    }

    func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer, didFinish utterance: AVSpeechUtterance) {
        Task {
            stopSpeech()
            try AVAudioSession.sharedInstance().setActive(false)
        }
    }

    func stopSpeech() {
        speechSynthesizer.stopSpeaking(at: .immediate)
    }
}
Hi Apple Engineers and fellow developers,
I'm experiencing a critical regression with ShazamKit's background operation on iOS 18. ShazamKit's SHManagedSession stops identifying songs in the background after approximately 20 seconds on iOS 18, while the exact same code works perfectly on iOS 17.
The behavior is consistent: the app works perfectly in the foreground, but when backgrounded or device is locked, it initially works for about 20 seconds then stops identifying new songs. The microphone indicator remains active suggesting audio access is maintained, but ShazamKit doesn't send identified songs in the background until you open the app again. Detection immediately resumes when bringing the app to foreground.
My technical setup uses SHManagedSession for continuous matching with background modes properly configured in Info.plist including audio mode, and Background App Refresh enabled. I've tested this on physical devices running iOS 18.0 through 18.5 with the same results across all versions. The exact same code running on iOS 17 devices works flawlessly in the background.
To reproduce: initialize SHManagedSession and start matching, begin song identification in foreground, background the app or lock device, play different songs which are initially detected for about 20 seconds, then after the timeout period new songs are no longer identified until you bring the app to foreground.
This regression has impacted my production app as users who rely on continuous background music identification are experiencing a broken feature. I submitted this as Feedback ID FB15255903 last September with no solution so far.
I've created a minimal demo project that reproduces this issue: https://github.com/tfmart/ShazamKitBackground
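The matching loop follows the standard SHManagedSession pattern, roughly like this (a simplified sketch, not the exact code from the demo project):

import ShazamKit

// Simplified continuous-matching loop; the linked project has the full setup.
final class BackgroundMatcher {
    private let session = SHManagedSession()

    func startMatching() async {
        await session.prepare() // pre-warm the session before matching
        for await result in session.results {
            switch result {
            case .match(let match):
                print("Matched:", match.mediaItems.first?.title ?? "unknown")
            case .noMatch:
                print("No match")
            case .error(let error, _):
                print("Matching error:", error)
            }
        }
    }

    func stop() {
        session.cancel()
    }
}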
Has anyone else experienced this ShazamKit background regression on iOS 18? Are there any known workarounds or alternative approaches? Given the time this issue has persisted, could we please get acknowledgment of this regression, expected timeline for a fix, or any recommended workarounds?
Testing environment is Xcode 16.0+ on iOS 18.0-18.5 across multiple physical device models.
Any guidance would be greatly appreciated.
When using the Apple Devices app to sync Apple Music to the iPhone, where is the Apple Devices backup being written to?
Apple Devices -> Music -> Sync.
I am not trying to back up the iPhone via the Apple Devices app.
Does an artist similarity station broaden selection variety compared to a song similarity station?
You don't have to answer if it is against nondisclosure terms.
Hi everyone 👋
I’m building an iOS app in Swift where I want to do the following:
Record the user’s voice
Transcribe the spoken sentence (speech-to-text)
Auto-detect the spoken language
Translate it to another language selected by the user (e.g., English → Spanish or Hindi → English)
Speak back (text-to-speech) the translated text on the same device
Is it possible to record via the phone's mic and play the transcribed (and translated) speech back through the headphones?
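To make the pipeline concrete, here is a rough sketch of the speech-to-text and text-to-speech ends using Speech and AVFoundation; the translation step in the middle is left out, and the locales and names are placeholders.

import Speech
import AVFoundation

// Rough sketch: transcribe a recorded audio file, then speak a (separately
// translated) string back.
func transcribe(fileURL: URL, locale: Locale) {
    SFSpeechRecognizer.requestAuthorization { status in
        guard status == .authorized,
              let recognizer = SFSpeechRecognizer(locale: locale) else { return }
        let request = SFSpeechURLRecognitionRequest(url: fileURL)
        recognizer.recognitionTask(with: request) { result, _ in
            if let result, result.isFinal {
                print("Transcript:", result.bestTranscription.formattedString)
            }
        }
    }
}

let synthesizer = AVSpeechSynthesizer()

func speak(_ translatedText: String, languageCode: String) {
    let utterance = AVSpeechUtterance(string: translatedText)
    utterance.voice = AVSpeechSynthesisVoice(language: languageCode) // e.g. "es-ES"
    synthesizer.speak(utterance)
}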
I noticed that while playing back the same tracks via MusicKit on different OSes I get different results regarding the audio files being streamed.
Playing back a lossless file with 24Bit 48kHz and watching the Console for RemotePlayerService I get:
on iPadOS: Lossless; groupID: audio-alac-stereo-48000-24; bitDepth: 24-bit; sampleRate: 48khz; codec: alac; channels: 2; layout: Stereo;
on macOS: Creating AudioQueue with format:'paac', framesPerPacket:1024, sampleRate:44100
While the iPad looks perfect, the Mac does not. Is there a way to fix this issue on macOS?
BTW: I changed the Audio MIDI Setup settings before, after, and while the macOS app was launched. I also switched to different output devices. I wasn't able to change the bad audio output on the Mac. I tested this under Sequoia 15.5 and Tahoe beta 1, with Xcode 16.4 and 26 beta 1.
The AudioVariants of the Album/Tracks are .dolbyAtmos, .lossless, .lossyStereo
Apple Music displays Lossless 24-bit/48 kHz ALAC when clicking on the player control icon on macOS.
I hope there are only some missing or misconfigured properties to get macOS up to par.
Thanks :-)
Hello everyone,
I’m new to Swift development and have been working on an audio module that plays a specific sound at regular intervals - similar to a workout timer that signals switching exercises every few minutes.
Following AVFoundation documentation, I’m configuring my audio session like this:
let session = AVAudioSession.sharedInstance()
try session.setCategory(
    .playback,
    mode: .default,
    options: [.interruptSpokenAudioAndMixWithOthers, .duckOthers]
)

self.engine.attach(self.player)
self.engine.connect(self.player, to: self.engine.outputNode, format: self.audioFormat)

try? session.setActive(true)
When it’s time to play cues, I schedule playback on a DispatchQueue:
// scheduleAudio uses DispatchQueue
self.scheduleAudio(at: interval.start) {
    do {
        try audio.engine.start()
        audio.node.play()
        for sample in interval.samples {
            audio.node.scheduleBuffer(sample.buffer, at: AVAudioTime(hostTime: sample.hostTime))
        }
    } catch {
        print("Audio activation failed: \(error)")
    }
}
This works perfectly in the foreground. But once the app goes into the background, the scheduled callback runs, yet the audio engine fails to start, resulting in an error with code 561015905.
Interestingly, if the app is already playing audio before going to the background, the scheduled sounds continue to play as expected.
I have added the required background audio mode to my Info.plist by including the key UIBackgroundModes with the value audio.
Is there anything else I should configure? What is the best practice to play periodic audio when the app runs in the background? How do apps like turn-by-turn navigation handle continuous audio playback in the background?
Any advice or pointers would be greatly appreciated!
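For reference, this is the variant I have been experimenting with: re-activating the session inside the scheduled callback right before starting the engine (a sketch using the same names as above; I am not sure this is the intended pattern):

// Variant under test: re-activate the session in the callback before starting
// the engine (assumes the .playback category configured earlier).
self.scheduleAudio(at: interval.start) {
    do {
        try AVAudioSession.sharedInstance().setActive(true)
        try audio.engine.start()
        audio.node.play()
        for sample in interval.samples {
            audio.node.scheduleBuffer(sample.buffer, at: AVAudioTime(hostTime: sample.hostTime))
        }
    } catch {
        print("Audio activation failed: \(error)")
    }
}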
Hello there!
Is there any list of voices that are always available on iOS/iPadOS devices?
It seems that AVSpeechSynthesisVoice(identifier: "com.apple.voice.compact.en-US.Samantha") is always available on all devices.
I thought that AVSpeechSynthesisVoice(identifier: "com.apple.ttsbundle.siri_Nicky_en-US_compact") and AVSpeechSynthesisVoice(identifier: "com.apple.ttsbundle.siri_Aaron_en-US_compact") were available by default on certain newer devices. Is this true?
I also noticed that on the same iPad where I was using those 2 voices (Nicky and Aaron) - when I updated to the iPadOS 26 beta, those voices were no longer available.
Any information you can share about which voices should be reliably available on which devices would be extremely helpful for our development. Thanks so much!
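In the meantime, the only reliable check I have found is enumerating what is actually installed at runtime, along these lines:

import AVFoundation

// List the en-US voices actually available on this device.
for voice in AVSpeechSynthesisVoice.speechVoices() where voice.language.hasPrefix("en-US") {
    print(voice.identifier, voice.name, voice.quality.rawValue)
}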
Hey folks, I'm running into an odd issue suddenly with an app that had a working MusicKit integration before.
I'm using ApplicationMusicPlayer to play Apple Music albums and songs. I'm testing on a physical device, signed in to Apple ID, and with a valid subscription. Apple Music via the first-party app works entirely fine on this device.
Attempting to play back any content at all gives the log:
<ICUserIdentityStoreACAccountBackend: 0x1070bf3e0> Failed to initialize primary apple account, error=Error Domain=ICError Code=-7013 "Client is not entitled to access account store" UserInfo={NSDebugDescription=Client is not entitled to access account store}
[ICUserIdentityStore] - initializing account histories with activeAccountDSID = nil, activeLockerAccountDSID = nil, timestamp = 14605951908
[ICUserIdentityStore] Failed to fetch local store account with error: Error Domain=ICError Code=-7013 "Client is not entitled to access account store" UserInfo={NSDebugDescription=Client is not entitled to access account store}.
The album artwork, track names, etc, all appear in the control center playback controls, but the music doesn't play. Trying to trigger playback with control center just results in it skipping to the next track, which doesn't play either.
This exact code used to work. I have the MusicKit service selected in Apple Connect. Since this isn't entitlement-based, I'm not sure how else to check that I'm set up correctly.
I've tried deleting/reinstalling the app, restarting the device, cleaning/rebuilding, and deleting DerivedData, to no avail.
Any help?
Running Xcode 16.4 (16F6), testing on iOS 18.5 (22F76)
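For reference, these are the only programmatic sanity checks I know of (authorization status and subscription capabilities), sketched below in case they help narrow it down:

import MusicKit

// Confirm MusicKit authorization and subscription state before attempting
// playback with ApplicationMusicPlayer.
func checkMusicKitSetup() async throws {
    let status = await MusicAuthorization.request()
    print("MusicKit authorization:", status)

    let subscription = try await MusicSubscription.current
    print("Can play catalog content:", subscription.canPlayCatalogContent)
}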
Hi,
I'm trying to set up an AVAudioEngine for USB audio recording and monitoring playthrough.
As soon as I try to set up playthrough I get an error in the console: AVAEInternal.h:83 required condition is false: [AVAudioEngineGraph.mm:1361:Initialize: (IsFormatSampleRateAndChannelCountValid(outputHWFormat))]
Any ideas how to fix it?
// Set the input device
try? setupInputDevice(deviceID: inputDevice)

let input = audioEngine.inputNode

// Force a stereo format
let inputHWFormat = input.inputFormat(forBus: 0)
let stereoFormat = AVAudioFormat(commonFormat: inputHWFormat.commonFormat, sampleRate: inputHWFormat.sampleRate, channels: 2, interleaved: inputHWFormat.isInterleaved)

guard let format = stereoFormat else {
    throw AudioError.deviceSetupFailed(-1)
}

print("Input format: \(inputHWFormat)")
print("Forced stereo format: \(format)")

audioEngine.attach(monitorMixer)
audioEngine.connect(input, to: monitorMixer, format: format)

// MonitorMixer -> MainMixer (output)
// Problem here: format: format also breaks.
audioEngine.connect(monitorMixer, to: audioEngine.mainMixerNode, format: nil)
Hi,
Not sure if this is the right forum to ask this question in, but could you please advise if I can use Apple Digital Masters logo (badge) in my iOS app that is playing music from Apple Music service?
I have an app that displays artwork via MPMediaItem.artwork, requesting an image with a specific size. How do I get a media item's MPMediaItemAnimatedArtwork, and how do I get its preview image and video to display to the user?
Hi,
I need to read whether the transport is playing or stopped, but my current method, which works for VST, does not work for AU.
Is there an LPX (Logic Pro) resource available for developers anywhere?
if (auto* playHead = processor->getPlayHead())
{
    juce::AudioPlayHead::CurrentPositionInfo posInfo;
    if (playHead->getCurrentPosition(posInfo))
    {
        bool isCurrentlyPlaying = posInfo.isPlaying;
        if (isCurrentlyPlaying != wasTransportPlaying)
        {
            if (isCurrentlyPlaying)
            {
                wasTransportPlaying = isCurrentlyPlaying;
                startAllTimers();
            }
            else
            {
                wasTransportPlaying = isCurrentlyPlaying;
                stopAllTimers();
            }
        }
    }
}
thanks :)
I have an AUv3 plugin which uses an FFT - which requires n samples before it can produce any output - so, depending on the relation between the host's buffer size and the FFT window size, it may receive a several buffers of samples, producing no output, and then dumping out what it has once a sufficient number of samples have been received.
This means that output is produced in fits and starts, in batches that match the FFT size (modulo oversampling) - e.g. if being fed buffers of 256 samples with an fft size of 1024, the output buffer sizes will be 0 for the first 3 buffers, and upon the fourth, the first 256 processed samples are returned and the remaining 768 cached; the next three buffers will return the remaining cached samples while processing and buffering subsequent ones, and so forth.
The internal mechanics of that I have solved, caching output if the current output buffer is too small, and so forth - so it all works as advertised, and the plugin reports its latency correctly. And when run as an app in demo-mode, playback works as expected.
In the plugin's render block, it captures the number of frames written, and if it is less than the number of frames passed in, adjusts the mDataByteSize of the output buffers to match the actual quantity of data being returned:
unsigned int framesWritten = (unsigned int) processHelper->processWithEvents(inAudioBufferList, outAudioBufferList, timestamp, frameCount, realtimeEventListHead);

if (framesWritten < frameCount) {
    for (UInt32 i = 0; i < outAudioBufferList->mNumberBuffers; ++i) {
        outAudioBufferList->mBuffers[i].mDataByteSize = framesWritten * 4; // assume 4-byte floats
    }
}
However, there are a couple of serious issues:
auval -v fails it with: Render Test at 64 frames, sample rate: 22050 Hz ERROR: Output Buffer Size does not match requested
When connected to Logic Pro, it appears that mDataByteSize is ignored and the entire allocated buffer is read; the audio has sections of silence snipped into it which correspond to the number of empty buffers being returned
If I set Logic's buffer size to 1024 and use a 1024-sample FFT window, the plugin works correctly. But of course a plugin cannot dictate the buffer size, and 1024 is too small a window size to be useful for anything but filtering very high frequencies
This seems like it has to be a solvable problem, and most likely the issue is in how my code reports the number of usable samples in the returned buffer.
So, what is the correct way for a plugin to report that it has no samples to return, but will, uh, real soon now?
I know I could convert this plugin to be one that does offline rendering of the entire input, but this is real-time processing, just with a fixed amount of latency, so that should not be necessary.
Let's consider the following code.
I've created an actor that loads a list of .mp3 files from a Bundle and then makes them available for audio playback.
Unfortunately, I'm experiencing a memory leak at the play method:
player.play()
From Instruments I get
_malloc_type_malloc_outlined libsystem_malloc.dylib
start_wqthread libsystem_pthread.dylib
private actor AudioActor {
    enum Failure: Error {
        case soundsNotLoaded([AudioPlayerClient.Sound: Error])
    }

    enum Player {
        case music(AVAudioPlayer)
    }

    var players: [Sound: Player] = [:]
    let bundles: [Bundle]

    init(bundles: UncheckedSendable<[Bundle]>) {
        self.bundles = bundles.wrappedValue
    }

    func load(sounds: [Sound]) throws {
        try AVAudioSession.sharedInstance().setActive(true, options: [])
        var errors: [Sound: Error] = [:]
        for sound in sounds {
            // Find the first bundle that contains the sound file.
            guard let url = bundles.lazy.compactMap({ $0.url(forResource: sound.name, withExtension: "mp3") }).first
            else { continue }
            do {
                self.players[sound] = try .music(AVAudioPlayer(contentsOf: url))
            } catch {
                errors[sound] = error
            }
        }
        guard errors.isEmpty
        else { throw Failure.soundsNotLoaded(errors) }
    }

    func play(sound: Sound, loops: Int?) throws {
        guard let player = self.players[sound]
        else { return }
        switch player {
        case let .music(player):
            player.numberOfLoops = loops ?? -1
            player.play()
        }
    }

    func stop(sound: Sound) throws {
        guard let player = self.players[sound]
        else { throw Failure.soundsNotLoaded([:]) }
        switch player {
        case let .music(player):
            player.stop()
        }
    }
}