I noticed that while playing back the same tracks via MusicKit on different OSes I get different results regarding the audio files being streamed.
Playing back a lossless 24-bit/48 kHz file and watching the Console output for RemotePlayerService, I get:
on iPadOS: Lossless; groupID: audio-alac-stereo-48000-24; bitDepth: 24-bit; sampleRate: 48khz; codec: alac; channels: 2; layout: Stereo;
on macOS: Creating AudioQueue with format:'paac', framesPerPacket:1024, sampleRate:44100
While the iPad looks perfect, the Mac does not. Is there a way to fix this on macOS?
By the way: I changed the Audio MIDI Setup settings before, after, and while the macOS app was launched, and I also switched to different output devices, but I was never able to change the degraded audio output on the Mac. I tested this under Sequoia 15.5 and Tahoe beta 1, with Xcode 16.4 and 26 beta 1.
The AudioVariants of the Album/Tracks are .dolbyAtmos, .lossless, .lossyStereo
Apple Music displays "Lossless 24-bit/48 kHz ALAC" when clicking on the player-control icon on macOS.
I hope there are only some missing or misconfigured properties to get macOS up to par.
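For reference, this is roughly how I read the variants on the fetched items (a minimal sketch; logVariants is an illustrative helper and assumes audioVariants was populated by the catalog request):

import MusicKit

// Sketch: log the audio variants MusicKit reports for an already-fetched song.
func logVariants(for song: Song) {
    guard let variants = song.audioVariants else {
        print("No audio variant info on this item")
        return
    }
    print("Variants:", variants)                       // e.g. [.dolbyAtmos, .lossless, .lossyStereo]
    print("Lossless offered:", variants.contains(.lossless))
}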
Thanks :-)
On an Apple TV 4K (3rd generation) running tvOS 26 beta 2, when two HomePod 2 speakers are paired to the device, music and movie sources with Dolby Atmos can only be listened to in stereo; Dolby Atmos is not supported.
We’ve encountered a reproducible issue where the iPhone fails to reconnect to a Wi-Fi access point under the following conditions:
The device is connected to a 2.4GHz Wi-Fi network.
A Bluetooth audio accessory is connected (e.g. headset).
AVAudioSession is active (such as during a voice call or when using the Voice Memos app).
The user moves away from the access point, causing a disconnect.
Upon returning within range, the access point is no longer recognized or reconnected while AVAudioSession remains active.
However, if the Bluetooth device is disconnected or the AVAudioSession is deactivated, the Wi-Fi access point is immediately recognized again.
We confirmed this behavior not only in our app but also with Apple's built-in Voice Memos app, suggesting it is not specific to our implementation.
It appears that the Wi-Fi system deprioritizes reconnection while AVAudioSession is engaged. Could this be by design? Or is this a known issue or limitation with Wi-Fi and AVAudioSession interaction?
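For reference, the deactivation that makes the access point reappear in our tests is just the standard call (a minimal sketch, assuming a typical voice-call or recording session):

import AVFAudio

// Deactivating the session when the call or recording ends is what
// restores Wi-Fi reconnection in our tests.
func deactivateAudioSession() {
    do {
        try AVAudioSession.sharedInstance().setActive(false, options: [.notifyOthersOnDeactivation])
    } catch {
        print("Failed to deactivate AVAudioSession:", error)
    }
}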
Test Environment:
Device: iPhone 13 mini
iOS: 17.5.1
Wi-Fi: 2.4GHz band
Accessories: Bluetooth headset
We’d appreciate clarification on whether this is expected behavior or a bug. Thank you!
hi,
Is there an Audio Unit logo I can show on my website? I would love to show that my application is able to host Audio Unit plugins.
regards, Joël
I’m developing a macOS audio monitoring app using AVAudioEngine, and I’ve run into a critical issue on macOS 26 beta where AVFoundation fails to detect any input devices, and AVAudioEngine.start() throws the familiar error 10877.
FB#: FB19024508
Strange Behavior:
AVAudioEngine.inputNode shows no channels or input format on bus 0.
AVAudioEngine.start() fails with -10877 (AudioUnit connection error).
AVCaptureDevice.DiscoverySession returns zero audio devices.
Microphone permission is granted (authorized), and the app is properly signed and sandboxed with com.apple.security.device.audio-input.
However, CoreAudio HAL does detect all input/output devices:
Using AudioObjectGetPropertyDataSize and AudioObjectGetPropertyData with kAudioHardwarePropertyDevices, I can enumerate 14+ devices, including AirPods, USB DACs, and BlackHole.
This suggests the lower-level audio stack is functional.
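For reference, the HAL enumeration that does work looks roughly like this (a minimal sketch of the calls described above):

import CoreAudio

// Enumerate every audio device the HAL knows about.
func copyAllAudioDeviceIDs() -> [AudioObjectID] {
    var address = AudioObjectPropertyAddress(
        mSelector: kAudioHardwarePropertyDevices,
        mScope: kAudioObjectPropertyScopeGlobal,
        mElement: kAudioObjectPropertyElementMain
    )
    var dataSize: UInt32 = 0
    guard AudioObjectGetPropertyDataSize(AudioObjectID(kAudioObjectSystemObject),
                                         &address, 0, nil, &dataSize) == noErr else { return [] }
    var deviceIDs = [AudioObjectID](repeating: 0,
                                    count: Int(dataSize) / MemoryLayout<AudioObjectID>.size)
    guard AudioObjectGetPropertyData(AudioObjectID(kAudioObjectSystemObject),
                                     &address, 0, nil, &dataSize, &deviceIDs) == noErr else { return [] }
    return deviceIDs   // 14+ entries on the affected machine, so the HAL itself is healthy
}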
I have tried:
Resetting CoreAudio with sudo killall coreaudiod
Rebuilding and re-signing the app
Clearing TCC with tccutil reset Microphone
Running on Apple Silicon and testing Rosetta/native detection via sysctl.proc_translated
Using a fallback mechanism that logs device info from HAL and rotates logs for submission via Feedback Assistant
I have submitted logs and a reproducible test case via Feedback Assistant: FB19024508.
Let's consider the following code.
I've created an actor that loads a list of .mp3 files from a Bundle and then makes it available for audio reproduction.
Unfortunately, I'm experiencing a memory leak in the play method, at the player.play() call.
From Instruments I get
_malloc_type_malloc_outlined libsystem_malloc.dylib
start_wqthread libsystem_pthread.dylib
import AVFoundation

private actor AudioActor {
    enum Failure: Error {
        case soundsNotLoaded([AudioPlayerClient.Sound: Error])
    }

    enum Player {
        case music(AVAudioPlayer)
    }

    var players: [Sound: Player] = [:]
    let bundles: [Bundle]

    init(bundles: UncheckedSendable<[Bundle]>) {
        self.bundles = bundles.wrappedValue
    }

    func load(sounds: [Sound]) throws {
        try AVAudioSession.sharedInstance().setActive(true, options: [])
        var errors: [Sound: Error] = [:]
        for sound in sounds {
            // Search every bundle for the .mp3 (the original snippet referenced an
            // undefined `bundle`; iterating `bundles` is the runnable equivalent).
            guard let url = bundles
                .compactMap({ $0.url(forResource: sound.name, withExtension: "mp3") })
                .first
            else { continue }
            do {
                self.players[sound] = try .music(AVAudioPlayer(contentsOf: url))
            } catch {
                errors[sound] = error
            }
        }
        guard errors.isEmpty
        else { throw Failure.soundsNotLoaded(errors) }
    }

    func play(sound: Sound, loops: Int?) throws {
        guard let player = self.players[sound]
        else { return }
        switch player {
        case let .music(player):
            player.numberOfLoops = loops ?? -1
            player.play()
        }
    }

    func stop(sound: Sound) throws {
        guard let player = self.players[sound]
        else { throw Failure.soundsNotLoaded([:]) }
        switch player {
        case let .music(player):
            player.stop()
        }
    }
}
Is there any way for me to use the AutoMix API in my iOS apps? I would play tracks using the Apple Music API and use AutoMix to merge tracks.
Is this feature/API available to developers?
According to the header file, the outputVolume property's supported range is 0.0-1.0:
/*! @property outputVolume
    @abstract The mixer's output volume.
    @discussion
        This accesses the mixer's output volume (0.0-1.0, inclusive).
*/
@property (nonatomic) float outputVolume;
However, when setting the volume to 2.0, the audio does indeed play louder. Is the header file out of date, and if so, what is the supported range for outputVolume?
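For reference, the repro is as simple as this (a sketch assuming the engine's main mixer node):

import AVFAudio

let engine = AVAudioEngine()
let mixer = engine.mainMixerNode
mixer.outputVolume = 2.0      // outside the documented 0.0-1.0 range
print(mixer.outputVolume)     // inspect the stored value; playback is audibly louder, as described above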
Thanks
It sounds simple but searching for the name "Favorite Songs" is a non-starter because it's called different names in different countries, even if I specify "&l=en_us" on the query.
So is there another property, relationship or combination thereof which I can use to tell me when I've found the right playlist?
Properties I've looked at so far:
canEdit: will always be false so narrows things down a little
inFavorites: not helpful, as it depends on whether the user has favourited the Favorites playlist
hasCatalog: seems always true so again may narrow things down a bit
isPublic: doesn't help
Adding the catalog relationship doesn't seem to show anything immediately useful either.
Can anyone help?
Ideally I'd like to see this as a "kind" or "type" as it has different properties to other playlists, but frankly I'll take anything at this point.
I have a Catalyst app ('container') which hosts an embedded AUv3 Audio Unit extension ('plugin'). This used to work for years and has worked with this project until a few days ago.
It still works on iOS as expected.
On macOS, the extension is never registered/installed and won't load.
The extension won't show up in auval.
It seems to have stopped working with the Xcode 26.1 update.
I'm fairly certain the problem is not code related (i.e. likely build settings, project settings, entitlements, signing, etc.)
I have compared all settings with another still-working project and can't find any meaningful difference
(I can't request code-level support because even a minimal example vastly exceeds the 250-line code limit.)
How can I debug this? I literally don't know where to start, short of rebuilding the entire project and hoping it magically starts working again.
I wrote a Swift macOS app to control a PCI audio device. The code switches between the default output and input channels. As soon as I launch the Audio-Midi Setup utility, channel switching stops working. The driver properties allow switching, but the system doesn't respond. I have to delete the contents of /Library/Preferences/Audio and reset Core Audio. What am I missing?
func setDefaultChannelsOutput() {
    guard let deviceID = getDeviceIDByName(deviceName: "PCI-424") else { return }
    let selectedIndex = DefaultChannelsOutput.indexOfSelectedItem
    if selectedIndex < 0 || selectedIndex >= 24 { return }
    let channel1 = UInt32(selectedIndex * 2 + 1)
    let channel2 = UInt32(selectedIndex * 2 + 2)
    var channels: [UInt32] = [channel1, channel2]
    var propertyAddress = AudioObjectPropertyAddress(
        mSelector: kAudioDevicePropertyPreferredChannelsForStereo,
        mScope: kAudioDevicePropertyScopeOutput,
        mElement: kAudioObjectPropertyElementWildcard
    )
    let dataSize = UInt32(MemoryLayout<UInt32>.size * channels.count)
    let status = AudioObjectSetPropertyData(deviceID, &propertyAddress, 0, nil, dataSize, &channels)
    if status != noErr {
        print("Error setting default output channels: \(status)")
    }
}
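A read-back of the same property (a minimal sketch along the same lines as the code above) can show whether the set call actually takes effect once Audio MIDI Setup has been launched:

// Read the device's current preferred stereo pair (the "get" counterpart of the set above,
// using the main element rather than the wildcard).
func currentPreferredStereoPair(deviceID: AudioObjectID) -> [UInt32]? {
    var propertyAddress = AudioObjectPropertyAddress(
        mSelector: kAudioDevicePropertyPreferredChannelsForStereo,
        mScope: kAudioDevicePropertyScopeOutput,
        mElement: kAudioObjectPropertyElementMain
    )
    var channels: [UInt32] = [0, 0]
    var dataSize = UInt32(MemoryLayout<UInt32>.size * channels.count)
    let status = AudioObjectGetPropertyData(deviceID, &propertyAddress, 0, nil, &dataSize, &channels)
    return status == noErr ? channels : nil
}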
Environment:
Device: iPad (10th generation)
OS: iOS 18.3.2
I'm using AVAudioSession to record sound in my application. I recently realized that when the app starts a recording session, the OS automatically sets the device volume to 50%, and after recording ends it doesn't change back to the previous volume level. Is this default OS behavior or a bug?
If it's default behavior, I'd much appreciate a link to the documentation.
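For context, the change can be observed simply by reading the session's output volume around activation (a minimal sketch; the .playAndRecord category is an assumption about a typical recording setup):

import AVFAudio

func demonstrateVolumeChange() throws {
    let session = AVAudioSession.sharedInstance()
    print("Volume before recording:", session.outputVolume)
    try session.setCategory(.playAndRecord)   // assumed typical recording category
    try session.setActive(true)
    print("Volume while recording:", session.outputVolume)   // observed at 0.5 on the affected iPad
    // ... record ...
    try session.setActive(false, options: [.notifyOthersOnDeactivation])
    print("Volume after recording:", session.outputVolume)   // stays at 0.5 instead of restoring
}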
Among Japanese end users, audio issues during screen recording—primarily in game applications—have become a topic of discussion.
We have confirmed that the trigger for this issue is highly likely to be related to changes to IOBufferDuration.
When using setPreferredIOBufferDuration and the IOBufferDuration is set to a value smaller than the default, audio problems occur in the recorded screen capture video.
Audio playback is performed using AudioUnit (RemoteIO).
https://developer.apple.com/documentation/avfaudio/avaudiosession/setpreferrediobufferduration(_:)?language=objc
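The change in question is essentially the following (a minimal sketch; 0.005 s is an illustrative value, and the exact value our middleware requests may differ):

import AVFAudio

let session = AVAudioSession.sharedInstance()
do {
    // Request a smaller-than-default I/O buffer for low-latency RemoteIO playback.
    try session.setPreferredIOBufferDuration(0.005)   // illustrative value
    try session.setActive(true)
    print("Effective IOBufferDuration:", session.ioBufferDuration)
} catch {
    print("Failed to configure AVAudioSession:", error)
}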
This issue was not observed on iOS 18, and it appears to have started occurring after upgrading to iOS 26.
We provide an audio middleware solution, and we had incorporated changes to IOBufferDuration into our product to achieve low-latency audio playback.
As a result, developers using our product as well as their end users are being affected by this issue.
We kindly request that this issue be investigated and addressed in a future update.
Environment
Windows 11 [edition/build]: [e.g., 23H2, 22631.x]
Apple Music for Windows version: [e.g., 1.x.x from Microsoft Store]
Library folder: C:\Users\<user>\Music\Apple Music\Apple Music Library.musiclibrary
Summary
I need a supported way to programmatically enumerate the local Apple Music library on Windows (track file paths, playlists, etc.) for reconciliation with the on-disk Media folder. On macOS this used to be straightforward via scripting/export; on Windows I can’t find an equivalent.
What I’m seeing in the library bundle
Library.musicdb → not SQLite. First 4 bytes: 68 66 6D 61 ("hfma").
Library Preferences.musicdb → also starts with "hfma".
artwork.sqlite → SQLite but appears to be artwork cache only (no track file paths).
Extras.itdb → has SQLite format 3 header but (from a quick scan) not seeing track locations.
Genius.itdb → not a SQLite database on this machine.
What I’ve tried
Attempted to open Library.musicdb with SQLite providers → error: “file is not a database.”
Binary/string scans (ASCII, UTF-16LE/BE, null-stripped) of Library.musicdb → did not reveal file paths or obvious plist/XML/JSON blobs.
The Windows Apple Music UI doesn’t appear to expose “Export Library / Export Playlist” like legacy iTunes did, and I can’t find a public API for local library enumeration on Windows.
What I’m trying to accomplish
Read local track entries (absolute or relative paths), detect broken links, and reconcile against the Media folder. A read-only solution is fine; I do not need to modify the library.
Questions for Apple
Is the Library.musicdb file format documented anywhere, or is there a supported SDK/API to enumerate the local library on Windows?
Is there a supported export mechanism (CLI, UI, or API) on Windows Apple Music to dump the local library and/or playlists (XML/CSV/JSON)?
Is there a Windows-specific equivalent to the old iTunes COM automation or any MusicKit surface that can return local library items (not streaming catalog) and their file locations?
If none of the above exist today, is there a recommended workaround from Apple for library reconciliation on Windows (e.g., documented support for importing M3U/M3U8 to rebuild the local library from disk)?
Are there any plans/timeline for adding Windows feature parity with iTunes/Music on macOS for exporting or scripting the local library?
Why this matters
For large personal libraries, users occasionally end up with orphaned files on disk or broken links in the app. Without an export or API, it’s difficult to audit and fix at scale on Windows.
Reference details (in case it helps triage)
Library.musicdb header bytes: 68-66-6D-61-A0-00-00-00-10-26-34-00-15-00-01-00 (ASCII shows hfma…).
artwork.sqlite is readable but doesn’t contain track file paths (appears limited to artwork).
I can supply a minimal repro tool and logs if that’s helpful.
Feature request (if no current API)
Add an official Export Library/Playlists action on Windows Apple Music, or
Provide a read-only Windows API (or schema doc) that surfaces track file locations and playlist membership from the local library.
Thanks in advance for any guidance or pointers to docs I might have missed.
Currently, I have successfully used ChannelMap to map hardware input channels and obtained audio data from the hardware device's MIC and OTG inputs. Additionally, I have used ChannelMap to map output channels to freely feed data for playback to each output channel. However, I now have a problem.
I have a hardware device that only has output channels (no input channels), and the system has set this hardware device as the default playback device. In this case, how can I obtain the audio data being played to the output channels for modification?
Hi everyone,
We’re encountering an issue where AudioQueueNewOutput blocks indefinitely and never returns, and we’re hoping to get some insight or confirmation if this is a known behavior/regression on newer iOS versions.
Issue Description
When triggering audio playback, we create an output AudioQueue using AudioQueueNewOutput.
On some devices, the call hangs inside AudioQueueNewOutput and never returns, with no OSStatus error and no subsequent logs.
This behavior is reproducible mainly on iOS 18.3.
Earlier iOS versions do not show this issue under the same code path.
if (audioDes)
{
    mAudioDes.mSampleRate       = audioDes->mSampleRate;
    mAudioDes.mBitsPerChannel   = audioDes->mBitsPerChannel;
    mAudioDes.mChannelsPerFrame = audioDes->mChannelsPerFrame;
    mAudioDes.mFormatID         = audioDes->mFormatID;
    mAudioDes.mFormatFlags      = audioDes->mFormatFlags;
    mAudioDes.mFramesPerPacket  = audioDes->mFramesPerPacket;
    mAudioDes.mBytesPerFrame    = audioDes->mBytesPerFrame;
    mAudioDes.mBytesPerPacket   = audioDes->mBytesPerFrame;
    mAudioDes.mReserved         = 0;
}

// Create AudioQueue for output
OSStatus status = AudioQueueNewOutput(
    &mAudioDes,
    AQOutputCallback,
    this,
    NULL,
    NULL,
    0,
    &audioQueue
);
The thread blocks inside AudioQueueNewOutput, and execution never reaches the next line.
Additional Notes / Observations
The ASBD is confirmed to be valid: standard PCM output, with sample rate, channels, and bytes per frame/packet all consistent. The same ASBD works correctly on earlier iOS versions.
The AudioQueue is created on a background thread, not on the main thread and not inside the AudioQueue callback.
On first creation, AVAudioSession may not yet be active: setCategory and setActive:YES may be called shortly before creating the AudioQueue, so there may be a timing window where the session is still activating (see the sketch after these notes).
The issue is reported mainly on iOS 18.3: multiple user reports point to iOS 18.3 devices, while the same code path works on iOS 17.x and earlier.
No OSStatus error is returned; the call simply never returns.
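To make the timing question concrete, the ordering we are asking about is roughly the following (a minimal Swift sketch; our production code is C++, and the PCM format values here are illustrative):

import AVFAudio
import AudioToolbox

func makeOutputQueue() -> AudioQueueRef? {
    // Configure and activate the session *before* creating the queue.
    // The open question is whether AudioQueueNewOutput may block if this
    // activation has not fully completed yet.
    let session = AVAudioSession.sharedInstance()
    do {
        try session.setCategory(.playback)
        try session.setActive(true)
    } catch {
        print("Session activation failed:", error)
        return nil
    }

    // Illustrative 48 kHz / 16-bit / stereo interleaved PCM format.
    var format = AudioStreamBasicDescription(
        mSampleRate: 48_000,
        mFormatID: kAudioFormatLinearPCM,
        mFormatFlags: kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked,
        mBytesPerPacket: 4,
        mFramesPerPacket: 1,
        mBytesPerFrame: 4,
        mChannelsPerFrame: 2,
        mBitsPerChannel: 16,
        mReserved: 0
    )

    var queue: AudioQueueRef?
    let status = AudioQueueNewOutput(&format, { _, _, _ in }, nil, nil, nil, 0, &queue)
    return status == noErr ? queue : nil
}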
Questions
Is it expected that AudioQueueNewOutput can block indefinitely while waiting for AVAudioSession / audio route / HAL readiness?
Have there been any behavior changes in iOS 18.3 regarding AudioQueue creation or AudioSession synchronization?
Is it unsafe to call AudioQueueNewOutput before AVAudioSession is fully active on recent iOS versions?
Are there recommended patterns (or delays / callbacks) to ensure AudioQueue creation does not hang?
Any insight or confirmation would be greatly appreciated.
Thanks in advance!
Is there a way to destroy MIDIUMPMutableEndpoint again?
In my app, the user has a setting to enable and disable MIDI 2.0. If MIDI 2.0 is disabled (or if the iOS version is below 18), the app creates a virtual destination and a virtual source. If MIDI 2.0 is enabled, it instead creates a MIDIUMPMutableEndpoint, which creates the virtual destination and source automatically.
So here is my problem: I didn't find any way to destroy the MIDIUMPMutableEndpoint again. There is a method to disable it (setEnabled:NO), but that doesn't destroy or hide the virtual destination and source. So when the user turns MIDI 2.0 support off, I will have two virtual destinations and sources, and cannot get rid of the 2.0 ones.
What is the correct way to get rid of the MIDIUMPMutableEndpoint once it is created?
When trying to load an ambisonics file using this project: https://github.com/robertncoomber/NativeiOSAmbisonicPlayback/
I get "Unexpected Ambisonics format".
Interestingly, loading a 3rd order ambisonics file works fine:
let ambisonicLayoutTag = kAudioChannelLayoutTag_HOA_ACN_SN3D | 16
let AmbisonicLayout = AVAudioChannelLayout(layoutTag: ambisonicLayoutTag)
let StereoLayout = AVAudioChannelLayout(layoutTag: kAudioChannelLayoutTag_Stereo)
So it's purely related to kAudioChannelLayoutTag_Ambisonic_B_Format.
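For comparison, a quick check of the channel counts implied by the two tags (a small sketch):

import AVFAudio

// First-order B-format is a fixed 4-channel layout (W, X, Y, Z);
// the HOA tag must be OR'ed with the total channel count (16 = 3rd order).
let foaLayout = AVAudioChannelLayout(layoutTag: kAudioChannelLayoutTag_Ambisonic_B_Format)
let hoaLayout = AVAudioChannelLayout(layoutTag: kAudioChannelLayoutTag_HOA_ACN_SN3D | 16)
print(foaLayout?.channelCount ?? 0, hoaLayout?.channelCount ?? 0)   // expected: 4 and 16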
I have an app that displays artwork via MPMediaItem.artwork, requesting an image with a specific size. How do I get a media item's MPMediaItemAnimatedArtwork, and how do I get its preview image and video to display to the user?
Since macOS 26, Apple Music has had inconsistent drops in the quality of some tracks, seemingly indiscriminately. I don't know if others have experienced it. It doesn't happen on the speakers or when connected via Bluetooth, but the AUX I/O has it quite often. It is more noticeable on headphones with 48 kHz and higher frequency bandwidth.
Here is the feedback number: FB18062589