Among the millions of users of our online product, our metrics show that the rate of silent captured audio on iPadOS 18.4.1 and 18.5 has increased abnormally. However, we are unable to reproduce the issue ourselves. Has anyone encountered something similar? The parameters we use are as follows:
AudioSession:
category:AVAudioSessionCategoryPlayAndRecord
mode:AVAudioSessionModeDefault
option:77
preferredSampleRate:48000.000000
preferredIOBufferDuration:0.010000
AudioUnit
format.mFormatID = kAudioFormatLinearPCM;
format.mSampleRate = 48000.0;
format.mChannelsPerFrame = 2;
format.mBitsPerChannel = 16;
format.mFramesPerPacket = 1;
format.mBytesPerFrame = format.mChannelsPerFrame * 16 / 8;
format.mBytesPerPacket = format.mBytesPerFrame * format.mFramesPerPacket;
format.mFormatFlags = kAudioFormatFlagsNativeEndian | kLinearPCMFormatFlagIsPacked | kLinearPCMFormatFlagIsSignedInteger;
component.componentType = kAudioUnitType_Output;
component.componentSubType = kAudioUnitSubType_RemoteIO;
component.componentManufacturer = kAudioUnitManufacturer_Apple;
component.componentFlags = 0;
component.componentFlagsMask = 0;
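To clarify what we mean by "silent" capture: buffers whose level is essentially zero. A simplified, illustrative sketch of the kind of check we apply (the threshold and buffer representation are arbitrary, not our exact production code):

import Foundation

// Illustrative only: RMS-based silence check on an interleaved Int16 capture buffer.
func isSilent(_ samples: UnsafeBufferPointer<Int16>, threshold: Double = 1e-4) -> Bool {
    guard !samples.isEmpty else { return true }
    var sum = 0.0
    for s in samples {
        let v = Double(s) / Double(Int16.max)
        sum += v * v
    }
    let rms = (sum / Double(samples.count)).squareRoot()
    return rms < threshold
}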
I noticed that while playing back the same tracks via MusicKit on different OSes I get different results regarding the audio files being streamed.
Playing back a lossless file (24-bit/48 kHz) and watching the Console for RemotePlayerService, I get:
on iPadOS: Lossless; groupID: audio-alac-stereo-48000-24; bitDepth: 24-bit; sampleRate: 48khz; codec: alac; channels: 2; layout: Stereo;
on macOS: Creating AudioQueue with format:'paac', framesPerPacket:1024, sampleRate:44100
While the iPad looks perfect, the Mac does not. Is there a way to fix this issue on macOS?
BTW: I switched the Audio MIDI Setup settings before, after, and while the macOS app was launched. I also switched to different output devices. I wasn't able to change the bad audio output on the Mac. I tested this under Sequoia 15.5 and Tahoe beta 1, with Xcode 16.4 and 26 beta 1.
The AudioVariants of the Album/Tracks are .dolbyAtmos, .lossless, .lossyStereo
Apple Music displays Lossless 24-bit/48 kHz ALAC when clicking on the player control icon on macOS.
I hope there are only some missing or misconfigured properties to get macOS up to par.
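For completeness, this is roughly how I'm reading the variants via MusicKit (a minimal sketch; error handling omitted):

import MusicKit

// Illustrative sketch: fetch an album with its tracks and log the audio variants.
func logAudioVariants(for album: Album) async throws {
    let detailed = try await album.with([.tracks])
    // audioVariants is optional; it may be nil if the catalog data is unavailable.
    print("Album variants:", detailed.audioVariants ?? [])
    if let tracks = detailed.tracks {
        for track in tracks {
            if case .song(let song) = track {
                print(song.title, song.audioVariants ?? [])
            }
        }
    }
}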
Thanks :-)
Hello there!
Is there any list of voices that are always available on iOS/iPadOS devices?
It seems that AVSpeechSynthesisVoice(identifier: "com.apple.voice.compact.en-US.Samantha") is always available on all devices.
I thought that AVSpeechSynthesisVoice(identifier: "com.apple.ttsbundle.siri_Nicky_en-US_compact") and AVSpeechSynthesisVoice(identifier: "com.apple.ttsbundle.siri_Aaron_en-US_compact") were available by default on certain newer devices. Is this true?
I also noticed that on the same iPad where I was using those 2 voices (Nicky and Aaron) - when I updated to the iPadOS 26 beta, those voices were no longer available.
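For reference, this is roughly how I check what is actually installed on a given device (a minimal sketch):

import AVFoundation

// Illustrative sketch: list the en-US voices currently installed on this device.
func logInstalledEnUSVoices() {
    let voices = AVSpeechSynthesisVoice.speechVoices()
        .filter { $0.language == "en-US" }
    for voice in voices {
        print(voice.identifier, voice.name, voice.quality.rawValue)
    }
}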
Any information you can share about which voices should be reliably available on which devices would be extremely helpful for our development. Thanks so much!
I have an app that displays artwork via MPMediaItem.artwork, requesting an image at a specific size. How do I get a media item's MPMediaItemAnimatedArtwork, and how do I get its preview image and video to display to the user?
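For context, this is roughly how the app requests static artwork today (a minimal sketch; the size is arbitrary):

import MediaPlayer
import UIKit

// Illustrative sketch: render a media item's static artwork at a requested size.
func artworkImage(for item: MPMediaItem, side: CGFloat = 300) -> UIImage? {
    let size = CGSize(width: side, height: side)
    return item.artwork?.image(at: size)
}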
Is there any way for me to use an AutoMix API in my iOS apps? I would play tracks using the Apple Music API and use AutoMix to attempt to merge tracks.
Is this feature/API available to developers?
Please consider adding the ability to programmatically download Premium and Enhanced voices. At the moment it is extremely inconvenient for our users, as they have to navigate to Settings themselves to download voices. Our app relies heavily on SpeechSynthesis integration, and it would greatly benefit from this feature.
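For context, a simplified sketch of how an app can detect whether an enhanced or premium voice is already installed before falling back to a compact one (illustrative only):

import AVFoundation

// Illustrative sketch: pick the best installed voice for a language by quality.
func bestInstalledVoice(language: String = "en-US") -> AVSpeechSynthesisVoice? {
    let voices = AVSpeechSynthesisVoice.speechVoices()
        .filter { $0.language == language }
    return voices.first { $0.quality == .premium }
        ?? voices.first { $0.quality == .enhanced }
        ?? voices.first
}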
FB16307193
When using the Apple Devices app to sync Apple Music to an iPhone, where is the sync data written to?
Apple Devices -> Music -> Sync.
I am not trying to back up the iPhone via the Apple Devices app.
There appears to be no way to go forward or backward in Get Info in the Music application.
I found that an aggregate device correctly reports its input channels in the standard microphone mode. However, in voice isolation mode, it only reports the channels of the first sub-device in the aggregate device's list. How can I properly obtain channel information in voice isolation mode?
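For reference, this is roughly how I read the input channel count today (a simplified sketch using kAudioDevicePropertyStreamConfiguration; minimal error handling):

import CoreAudio

// Illustrative sketch: count the input channels of a device (e.g. an aggregate device).
func inputChannelCount(of deviceID: AudioObjectID) -> Int {
    var address = AudioObjectPropertyAddress(
        mSelector: kAudioDevicePropertyStreamConfiguration,
        mScope: kAudioObjectPropertyScopeInput,
        mElement: kAudioObjectPropertyElementMain)
    var dataSize: UInt32 = 0
    guard AudioObjectGetPropertyDataSize(deviceID, &address, 0, nil, &dataSize) == noErr else { return 0 }
    let raw = UnsafeMutableRawPointer.allocate(byteCount: Int(dataSize),
                                               alignment: MemoryLayout<AudioBufferList>.alignment)
    defer { raw.deallocate() }
    guard AudioObjectGetPropertyData(deviceID, &address, 0, nil, &dataSize, raw) == noErr else { return 0 }
    let bufferList = UnsafeMutableAudioBufferListPointer(raw.assumingMemoryBound(to: AudioBufferList.self))
    return bufferList.reduce(0) { $0 + Int($1.mNumberChannels) }
}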
Hi,
On macOS I used to open MP3 and MP4 files with ExtAudioFile. For a few years now this hasn't worked anymore.
So I decided to try a different macOS API: AudioFileID from the AudioToolbox framework.
I decided to write a test:
https://gist.github.com/joelkraehemann/7f5b241b52ca38c3a765c138fb647588
It fails right here:
AudioFileOpenWithCallbacks()
It returns OSStatus error 1954115647, which is kAudioFileUnsupportedFileTypeError.
The filename was set to an MP4 file:
~/Music/test.mp4
How can I fix this?
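Would passing an explicit file type hint help? A minimal, untested sketch of what I mean, using AudioFileOpenURL instead of the callback variant:

import AudioToolbox
import Foundation

// Illustrative, untested sketch: open the file with an explicit type hint.
func openWithHint(path: String) -> AudioFileID? {
    let url = URL(fileURLWithPath: (path as NSString).expandingTildeInPath) as CFURL
    var fileID: AudioFileID?
    // kAudioFileMPEG4Type is a guess for .mp4; kAudioFileM4AType may also be worth trying.
    let status = AudioFileOpenURL(url, .readPermission, kAudioFileMPEG4Type, &fileID)
    return status == noErr ? fileID : nil
}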
regards, Joël
Hi Apple Engineers and fellow developers,
I'm experiencing a critical regression with ShazamKit's background operation on iOS 18. ShazamKit's SHManagedSession stops identifying songs in the background after approximately 20 seconds on iOS 18, while the exact same code works perfectly on iOS 17.
The behavior is consistent: the app works perfectly in the foreground, but when backgrounded or device is locked, it initially works for about 20 seconds then stops identifying new songs. The microphone indicator remains active suggesting audio access is maintained, but ShazamKit doesn't send identified songs in the background until you open the app again. Detection immediately resumes when bringing the app to foreground.
My technical setup uses SHManagedSession for continuous matching with background modes properly configured in Info.plist including audio mode, and Background App Refresh enabled. I've tested this on physical devices running iOS 18.0 through 18.5 with the same results across all versions. The exact same code running on iOS 17 devices works flawlessly in the background.
To reproduce: initialize SHManagedSession and start matching, begin song identification in foreground, background the app or lock device, play different songs which are initially detected for about 20 seconds, then after the timeout period new songs are no longer identified until you bring the app to foreground.
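For reference, the matching loop looks roughly like this (a simplified sketch of my setup, not the exact production code):

import ShazamKit

// Simplified sketch of the continuous-matching loop.
final class Matcher {
    private let session = SHManagedSession()

    func startMatching() async {
        await session.prepare()
        for await result in session.results {
            switch result {
            case .match(let match):
                print("Matched:", match.mediaItems.first?.title ?? "unknown")
            case .noMatch:
                continue
            case .error(let error, _):
                print("ShazamKit error:", error)
            }
        }
    }
}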
This regression has impacted my production app as users who rely on continuous background music identification are experiencing a broken feature. I submitted this as Feedback ID FB15255903 last September with no solution so far.
I've created a minimal demo project that reproduces this issue: https://github.com/tfmart/ShazamKitBackground
Has anyone else experienced this ShazamKit background regression on iOS 18? Are there any known workarounds or alternative approaches? Given the time this issue has persisted, could we please get acknowledgment of this regression, expected timeline for a fix, or any recommended workarounds?
Testing environment is Xcode 16.0+ on iOS 18.0-18.5 across multiple physical device models.
Any guidance would be greatly appreciated.
I have a music app I'm developing and I'm having a weird issue where I can see now-playing info on every platform except tvOS. As far as I can tell, I have correctly configured the MPNowPlayingInfoCenter:
MPNowPlayingInfoCenter.default().nowPlayingInfo = nowPlayingInfo
MPNowPlayingInfoCenter.default().playbackState = .playing
Are there any extra requirements to get my app's now-playing info showing in Control Center on tvOS? Another strange issue that might be related: I can use the Apple TV remote to pause audio but not resume playback, so I feel like there's something I'm missing about registering audio playback on tvOS specifically.
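For reference, a simplified sketch of the kind of setup I have, including remote-command targets (the AVPlayer parameter and info keys below are placeholders, not my exact code):

import AVFoundation
import MediaPlayer

// Simplified sketch: publish now-playing info and register remote commands.
func configureNowPlaying(title: String, duration: TimeInterval, player: AVPlayer) {
    MPNowPlayingInfoCenter.default().nowPlayingInfo = [
        MPMediaItemPropertyTitle: title,
        MPMediaItemPropertyPlaybackDuration: duration,
        MPNowPlayingInfoPropertyPlaybackRate: 1.0
    ]

    let commands = MPRemoteCommandCenter.shared()
    commands.playCommand.addTarget { _ in
        player.play()
        return .success
    }
    commands.pauseCommand.addTarget { _ in
        player.pause()
        return .success
    }
}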
In iOS 18, CarPlay shows an error: “There was a problem loading this content” after playback starts. Audio works fine, but the Now Playing screen doesn’t load. I’m using MPPlayableContentManager. This worked fine in iOS 17. Anyone else seeing this error in iOS 18?
Environment: Device: iPad (10th generation); OS: iOS 18.3.2
We're using AVAudioPlayer to play a sound when a button is tapped. In our use case, this button can be tapped very frequently — roughly every 0.1 to 0.2 seconds. Each tap triggers the following function:
import AVFoundation

var audioPlayer: AVAudioPlayer?
let audioSession = AVAudioSession.sharedInstance()

func soundPlay(resource: String, type: String) {
    guard let path = Bundle.main.path(forResource: resource, ofType: type) else {
        return
    }
    do {
        // A new player instance is created on every tap.
        audioPlayer = try AVAudioPlayer(contentsOf: URL(fileURLWithPath: path))
        audioPlayer!.delegate = self
        try audioSession.setCategory(.playback)
    } catch {
        return
    }
    self.audioPlayer!.play()
}
The issue is that under high-frequency tapping (especially around 0.1–0.15s intervals), the app occasionally crashes. The crash does not occur every time, but it happens randomly — sometimes within 30 seconds, within 1 minute, or even 3 minutes of continuous tapping.
Interestingly, adding a delay of 0.2 seconds between button taps seems to prevent the crash entirely. Delays shorter than 0.2 seconds (e.g., 0.15s or 0.18s) still result in occasional crashes.
My questions are:
Is this expected behavior from AVAudioPlayer or AVAudioSession?
Could this be a known issue or a limitation in AVFoundation?
Is there any documentation or guidance on handling frequent sound playback safely?
Any insights or recommendations on how to handle rapid, repeated audio playback more reliably would be appreciated.
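One direction I'm considering is creating the player once and rewinding it on each tap instead of allocating a new AVAudioPlayer per tap; a simplified, untested sketch:

import AVFoundation

// Untested sketch: reuse a single preloaded player instead of recreating it per tap.
final class TapSoundPlayer {
    private var player: AVAudioPlayer?

    func preload(resource: String, type: String) {
        guard let url = Bundle.main.url(forResource: resource, withExtension: type) else { return }
        player = try? AVAudioPlayer(contentsOf: url)
        player?.prepareToPlay()
    }

    func playTap() {
        guard let player else { return }
        player.currentTime = 0   // rewind and retrigger the preloaded sound
        player.play()
    }
}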
Hi,
I am trying to remove the audio controls for my app on the lock screen. Since I use WKWebView, there are 3 audio tags in my HTML and I play and pause them via JS. If I do not play any sound after app launch, there are no audio controls on the lock screen. But as soon as I play one of those 3 files (they are short sound effects of less than 3 seconds, e.g. for buttons), the audio controls appear on the lock screen.
Note: even when the sounds are paused via pause() or not playing at all, they are still listed on the lock screen.
What I have tried so far, without success:
MPNowPlayingInfoCenter.default().nowPlayingInfo = [:]
and
try audioSession.setCategory(.playback, mode: .default, options: [])
try audioSession.setActive(false, options: .notifyOthersOnDeactivation)
and
UIApplication.shared.endReceivingRemoteControlEvents()
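One thing I have not tried yet: using the .ambient category for these short UI sounds, on the assumption that ambient playback may not surface lock-screen controls. A minimal sketch:

import AVFoundation

// Untested idea: short UI sound effects may not need the .playback category.
func configureSessionForUISounds() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.ambient, mode: .default, options: [])
    try session.setActive(true)
}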
Another problem is that the app scales with the iOS system setting "Display Zoom". Is there a way to prevent that?
I am on the latest Xcode version 16.3 and iOS 18.
I have no background mode in my Capabilities.
Nothing worked so far. Has anyone an idea?
Greetings
I created a virtual audio device to capture system audio with a sample rate of 44.1 kHz. After capturing the audio, I forward it to the hardware sound card using AVAudioEngine, also with a sample rate of 44.1 kHz. However, due to the clock sources being unsynchronized, problems occur after a period of playback. How can I retrieve the clock source of the hardware device and set it for the virtual device?
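For reference, this is the kind of query I expected to use to read the hardware device's clock domain (a minimal sketch; whether and how the virtual device can adopt it is exactly my question):

import CoreAudio

// Illustrative sketch: read a device's clock domain via Core Audio properties.
func clockDomain(of deviceID: AudioObjectID) -> UInt32? {
    var address = AudioObjectPropertyAddress(
        mSelector: kAudioDevicePropertyClockDomain,
        mScope: kAudioObjectPropertyScopeGlobal,
        mElement: kAudioObjectPropertyElementMain)
    var domain: UInt32 = 0
    var size = UInt32(MemoryLayout<UInt32>.size)
    let status = AudioObjectGetPropertyData(deviceID, &address, 0, nil, &size, &domain)
    return status == noErr ? domain : nil
}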
ApplicationMusicPlayer is not available on watchOS but all other platforms. Is there a technical reason for that like battery life? Same goes for SystemMusicPlayer and MPMusicPlayerController. I already filed feedbacks for that.
I am working on an app that needs to play an alert sound while in the background.
During debug testing, the function works fine,
but when I test standalone without the debugger attached, it does not work; the sound only plays once I bring the app back to the foreground.
Is there any way to trace this issue?
Hello,
I have been running into issues with setting nowPlayingInfo information, specifically updating information for CarPlay and the CPNowPlayingTemplate.
When I start playback for an item, I see lock screen information update as expected, along with the CarPlay now playing information.
However, the playing items are books with collections of tracks. When I select a new track(chapter) within the book, I set the MPMediaItemPropertyTitle to the new chapter name. This change is reflected correctly on the lock screen, but almost never appears correctly on the CarPlay CPNowPlayingTemplate. The previous chapter title remains set and never updates.
I see "Application exceeded audio metadata throttle limit." in the debug console fairly frequently.
From that a I figured that I need to minimize updates to the nowPlayingInfo dictionary. What I did:
I store the metadata dictionary in a local dictionary and only set values in the main nowPlayingInfo dictionary when they are different from the current value.
I kick off the nowPlayingInfo update via a task that initially sleeps for around 2 seconds (not a final value, just for my current testing). If a previous Task is active, it gets cancelled, so that only one update can happen within that time window.
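In code, the coalescing looks roughly like this (a simplified sketch of the approach described above; names are placeholders):

import MediaPlayer

// Simplified sketch: coalesce nowPlayingInfo updates into one delayed write.
final class NowPlayingUpdater {
    private var pendingInfo: [String: Any] = [:]
    private var updateTask: Task<Void, Never>?

    func set(_ value: Any, for key: String) {
        pendingInfo[key] = value
        scheduleUpdate()
    }

    private func scheduleUpdate() {
        updateTask?.cancel()
        updateTask = Task { [pendingInfo] in
            try? await Task.sleep(nanoseconds: 2_000_000_000)  // test value, not final
            guard !Task.isCancelled else { return }
            var info = MPNowPlayingInfoCenter.default().nowPlayingInfo ?? [:]
            // Diffing against current values happens upstream; here we just merge the batch.
            info.merge(pendingInfo) { _, new in new }
            MPNowPlayingInfoCenter.default().nowPlayingInfo = info
        }
    }
}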
Neither of these things have been sufficient. I can switch between different titles entirely and the information updates (including cover art).
But when I switch chapters within a title, the MPMediaItemPropertyTitle continues to get dropped. I know the value is getting set, because it updates on the lock screen correctly.
In total, I have 12 keys I update for info, though with the above changes, usually 2-4 of them actually get updated with high frequency.
I am running out of ideas to satisfy the throttling thresholds to accurately display metadata. I could use some advice.
Thanks.
I'm working in Swift/SwiftUI, running Xcode 16.3 on macOS 15.4, and I've seen this when running in the iOS simulator and in a macOS app run from Xcode. I've also seen this behaviour with 3 different audio files.
Nothing in the documentation says that the speechRecognitionMetadata property on an SFSpeechRecognitionResult will be nil until isFinal, but that's the behaviour I'm seeing.
I've stripped my class down to the following:
import Foundation
import Speech

final class Transcriber {
    private var isAuthed = false

    // I call this in a .task {} in my SwiftUI View
    public func requestSpeechRecognizerPermission() {
        SFSpeechRecognizer.requestAuthorization { authStatus in
            Task {
                self.isAuthed = authStatus == .authorized
            }
        }
    }

    public func transcribe(from url: URL) {
        guard isAuthed else { return }

        let locale = Locale(identifier: "en-US")
        let recognizer = SFSpeechRecognizer(locale: locale)
        let recognitionRequest = SFSpeechURLRecognitionRequest(url: url)

        // The behaviour occurs whether I set this to true or not; I recently set
        // it to true to see if it made a difference.
        recognizer?.supportsOnDeviceRecognition = true
        recognitionRequest.shouldReportPartialResults = true
        recognitionRequest.addsPunctuation = true

        recognizer?.recognitionTask(with: recognitionRequest) { (result, error) in
            guard let result else { return }
            if result.isFinal {
                // speechRecognitionMetadata is not nil
            } else {
                // speechRecognitionMetadata is nil
            }
        }
    }
}
Further, and this isn't documented either, the SFTranscriptionSegment values don't have correct timestamp and duration values until isFinal. The values aren't all zero, but they don't align with the timing in the audio and they change to accurate values when isFinal is true.
The transcription otherwise "works", in that I get transcription text before isFinal and if I wait for isFinal the segments are correct and speechRecognitionMetadata is filled with values.
The context here is that I'm trying to generate a transcription whose spoken sections I can highlight as the audio plays, and I'm starting to think I'm simply trying to use the Speech framework in a way it isn't designed for. I got my concept working if I pre-process the audio (i.e. run it through until isFinal and save the results I need to JSON), but doing even a rougher version of it 'on the fly' - which requires segments to have the right timestamp/duration before isFinal - is perhaps impossible?