Hi, I believe I've found a potential error in the sample code on the documentation page for creating and using a process tap with an aggregate device. The issue is in the section explaining how to add a tap to the aggregate device. I have already filed a Feedback Assistant ticket on this (ID: FB17411663) but haven't heard back for months.
Capturing system audio with Core Audio taps
The sample code for modifying the kAudioAggregateDevicePropertyTapList incorrectly uses the tapID as the target AudioObjectID when calling AudioObjectSetPropertyData.
// (Code to get the list and potentially modify listAsArray)
if var listAsArray = list as? [CFString] {
    // ... (modification logic) ...
    // Set the list back on the aggregate device. <--- The comment is correct
    list = listAsArray as CFArray
    _ = withUnsafeMutablePointer(to: &list) { list in
        // INCORRECT: This call uses tapID as the target object.
        AudioObjectSetPropertyData(tapID, &propertyAddress, 0, nil, propertySize, list)
    }
}
The kAudioAggregateDevicePropertyTapList is a property that belongs to the aggregate device, not the individual tap. Therefore, to set this property, the AudioObjectSetPropertyData function must target the AudioObjectID of the aggregate device itself. Using tapID as the first argument is logically incorrect for this operation and will not update the aggregate device as intended.
Furthermore, the preceding AudioObjectGetPropertyData call to fetch the list also appears to use the incorrect tapID as its target in the sample.
The AudioObjectID for both getting and setting this property should be the ID of the aggregate device.
_ = AudioObjectGetPropertyData(aggregateDeviceID, &propertyAddress, 0, nil, &propertySize, &list)
_ = AudioObjectSetPropertyData(aggregateDeviceID, &propertyAddress, 0, nil, propertySize, newList)
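For clarity, here is how I would expect the full sequence to read once both calls target the aggregate device. This is only a minimal sketch: aggregateDeviceID and tapUID are placeholder names for the aggregate device and tap UID created earlier in the sample, and it assumes macOS 14.2 or later, where taps are available.

var propertyAddress = AudioObjectPropertyAddress(
    mSelector: kAudioAggregateDevicePropertyTapList,
    mScope: kAudioObjectPropertyScopeGlobal,
    mElement: kAudioObjectPropertyElementMain
)

// Read the current tap list from the aggregate device (not from the tap).
var propertySize = UInt32(MemoryLayout<CFArray>.stride)
var list: CFArray = [CFString]() as CFArray
var status = withUnsafeMutablePointer(to: &list) { listPtr in
    AudioObjectGetPropertyData(aggregateDeviceID, &propertyAddress, 0, nil, &propertySize, listPtr)
}

// Append the tap's UID and write the list back, again targeting the aggregate device.
if status == noErr, var listAsArray = list as? [CFString] {
    listAsArray.append(tapUID as CFString)
    list = listAsArray as CFArray
    status = withUnsafeMutablePointer(to: &list) { listPtr in
        AudioObjectSetPropertyData(aggregateDeviceID, &propertyAddress, 0, nil, propertySize, listPtr)
    }
}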
Thank you!
Audio
Dive into the technical aspects of audio on your device, including codecs, format support, and customization options.
Hi,
Is there an Audio Unit logo I can show on my website? I would love to show that my application is able to host Audio Unit plugins.
Regards, Joël
According to the documentation (https://developer.apple.com/documentation/avfoundation/avplayeritem/externalmetadata), AVPlayerItem should have an externalMetadata property. However, it does not appear to be visible to my app. When I try to use it, I get:
Value of type 'AVPlayerItem' has no member 'externalMetadata'
Documentation states iOS 12.2+; I am building with a minimum deployment target of iOS 18.
Code snippet:
import Foundation
import AVFoundation
/// ... in function ...
// create metadata as described in https://developer.apple.com/videos/play/wwdc2022/110338
var title = AVMutableMetadataItem()
title.identifier = .commonIdentifierAlbumName
title.value = "My Title" as NSString?
title.extendedLanguageTag = "und"
var playerItem = await AVPlayerItem(asset: composition)
playerItem.externalMetadata = [ title ]
I am working on an app that needs to play an audio alert while the app is in the background.
During debug testing, the function works fine.
However, when I test the app standalone, without the debugger attached, the sound does not play; it only plays once I bring the app back to the foreground.
Is there any way to trace this issue?
Hello,
I'm trying to determine the best/recommended AVAudioSession configuration (i.e category, mode, and options) for the following use-case.
Essentially, I'd like to switch between periods of playing an audio file and then recognizing speech. The audio file is typically speech, and I don't intend for playback and speech recognition to occur simultaneously. I'd also like the user to still be able to interact with Siri, and I'd like it to work with CarPlay, where navigation prompts can occur.
I would assume the category to use is 'playAndRecord', but I'm not sure whether it's better to set that once for the entire lifecycle, or to set 'playback' for audio file playback and then switch to 'playAndRecord' for speech recognition. I'm also not sure about the best 'mode' and 'options' to set. Any suggestions would be appreciated.
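For concreteness, the "set it once" variant I'm describing would look roughly like this; the mode and options are guesses on my part, not a known-good configuration:

import AVFoundation

func configureSessionOnce() throws {
    let session = AVAudioSession.sharedInstance()
    // Guess: one category for the whole lifecycle, with a speech-oriented mode.
    try session.setCategory(
        .playAndRecord,
        mode: .spokenAudio,
        options: [.allowBluetooth, .duckOthers]
    )
    try session.setActive(true)
}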
Thanks.
ApplicationMusicPlayer is not available on watchOS, although it is on all other platforms. Is there a technical reason for that, like battery life? The same goes for SystemMusicPlayer and MPMusicPlayerController. I have already filed feedback for this.
Is there a way to destroy MIDIUMPMutableEndpoint again?
In my app, the user has a setting to enable and disable MIDI 2.0. If MIDI 2.0 should not be supported (or if iOS version < 18), it creates a virtual destination and a virtual source. And if MIDI 2.0 should be enabled, it instead creates a MIDIUMPMutableEndpoint, which itself creates the virtual destination and source automatically.
So here is my problem: I didn't find any way to destroy the MIDIUMPMutableEndpoint again. There is a method to disable it (setEnabled:NO), but that doesn't destroy or hide the virtual destination and source. So when the user turns MIDI 2.0 support off, I will have two virtual destinations and sources, and cannot get rid of the 2.0 ones.
What is the correct way to get rid of the MIDIUMPMutableEndpoint once it is created?
Environment
Windows 11 [edition/build]: [e.g., 23H2, 22631.x]
Apple Music for Windows version: [e.g., 1.x.x from Microsoft Store]
Library folder: C:\Users\<user>\Music\Apple Music\Apple Music Library.musiclibrary
Summary
I need a supported way to programmatically enumerate the local Apple Music library on Windows (track file paths, playlists, etc.) for reconciliation with the on-disk Media folder. On macOS this used to be straightforward via scripting/export; on Windows I can’t find an equivalent.
What I’m seeing in the library bundle
Library.musicdb → not SQLite. First 4 bytes: 68 66 6D 61 ("hfma").
Library Preferences.musicdb → also starts with "hfma".
artwork.sqlite → SQLite but appears to be artwork cache only (no track file paths).
Extras.itdb → has SQLite format 3 header but (from a quick scan) not seeing track locations.
Genius.itdb → not a SQLite database on this machine.
What I’ve tried
Attempted to open Library.musicdb with SQLite providers → error: “file is not a database.”
Binary/string scans (ASCII, UTF-16LE/BE, null-stripped) of Library.musicdb → did not reveal file paths or obvious plist/XML/JSON blobs.
The Windows Apple Music UI doesn’t appear to expose “Export Library / Export Playlist” like legacy iTunes did, and I can’t find a public API for local library enumeration on Windows.
What I’m trying to accomplish
Read local track entries (absolute or relative paths), detect broken links, and reconcile against the Media folder. A read-only solution is fine; I do not need to modify the library.
Questions for Apple
Is the Library.musicdb file format documented anywhere, or is there a supported SDK/API to enumerate the local library on Windows?
Is there a supported export mechanism (CLI, UI, or API) on Windows Apple Music to dump the local library and/or playlists (XML/CSV/JSON)?
Is there a Windows-specific equivalent to the old iTunes COM automation or any MusicKit surface that can return local library items (not streaming catalog) and their file locations?
If none of the above exist today, is there a recommended workaround from Apple for library reconciliation on Windows (e.g., documented support for importing M3U/M3U8 to rebuild the local library from disk)?
Are there any plans/timeline for adding Windows feature parity with iTunes/Music on macOS for exporting or scripting the local library?
Why this matters
For large personal libraries, users occasionally end up with orphaned files on disk or broken links in the app. Without an export or API, it’s difficult to audit and fix at scale on Windows.
Reference details (in case it helps triage)
Library.musicdb header bytes: 68-66-6D-61-A0-00-00-00-10-26-34-00-15-00-01-00 (ASCII shows hfma…).
artwork.sqlite is readable but doesn’t contain track file paths (appears limited to artwork).
I can supply a minimal repro tool and logs if that’s helpful.
Feature request (if no current API)
Add an official Export Library/Playlists action on Windows Apple Music, or
Provide a read-only Windows API (or schema doc) that surfaces track file locations and playlist membership from the local library.
Thanks in advance for any guidance or pointers to docs I might have missed.
Hi everyone!
I’ve developed a location-based Audio AR app in Unity with FMOD & Resonance Audio and AirPods Pro Head-Tracking to create a ubiquitous augmented soundscape experience. Think of it as an audio version of Pokémon Go, but with a more precise location requirement to ensure spatial audio is placed correctly.
I want this experience to run in the background on iOS, but from what I’ve gathered, it seems Unity doesn’t support this well. So, I’m considering developing a Swift version instead.
Since this is primarily for research purposes, privacy concerns are not a major issue in my case. However, I’ve come across some potential challenges:
Real-time precise location updates – Can iOS provide fully instantaneous, high-accuracy location updates in the background?
Continuous real-time data processing – Can an app continuously process spatial audio, head-tracking, and location data while running in the background?
I’m not sure if newer iOS versions have improved in these areas or if there are workarounds to achieve this.
Would this kind of experience be feasible to run in the background on iOS? Any insights or pointers would be greatly appreciated!
I’m very new to iOS development, so apologies if this is a basic question. Thanks in advance!
Environment
Device: iPad (10th generation)
OS: iOS 18.3.2
We're using AVAudioPlayer to play a sound when a button is tapped. In our use case, this button can be tapped very frequently — roughly every 0.1 to 0.2 seconds. Each tap triggers the following function:
var audioPlayer: AVAudioPlayer?

func soundPlay(resource: String, type: String) {
    guard let path = Bundle.main.path(forResource: resource, ofType: type) else {
        return
    }
    do {
        // A new player instance is created on every tap.
        audioPlayer = try AVAudioPlayer(contentsOf: URL(fileURLWithPath: path))
        audioPlayer!.delegate = self
        try audioSession.setCategory(.playback) // audioSession: presumably an AVAudioSession property
    } catch {
        return
    }
    self.audioPlayer!.play()
}
The issue is that under high-frequency tapping (especially around 0.1–0.15s intervals), the app occasionally crashes. The crash does not occur every time, but it happens randomly — sometimes within 30 seconds, within 1 minute, or even 3 minutes of continuous tapping.
Interestingly, adding a delay of 0.2 seconds between button taps seems to prevent the crash entirely. Delays shorter than 0.2 seconds (e.g., 0.15 s, 0.18 s) still result in occasional crashes.
My questions are:
Is this expected behavior from AVAudioPlayer or AVAudioSession?
Could this be a known issue or a limitation in AVFoundation?
Is there any documentation or guidance on handling frequent sound playback safely?
Any insights or recommendations on how to handle rapid, repeated audio playback more reliably would be appreciated.
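For context, one variation I'm considering (but haven't verified fixes the crash) is loading the player once up front and reusing it on every tap, roughly like this:

import AVFoundation

final class TapSoundPlayer {
    private var player: AVAudioPlayer?

    // Load the sound and configure the session once, instead of on every tap.
    func prepare(resource: String, type: String) {
        guard let url = Bundle.main.url(forResource: resource, withExtension: type) else { return }
        player = try? AVAudioPlayer(contentsOf: url)
        player?.prepareToPlay()
        try? AVAudioSession.sharedInstance().setCategory(.playback)
        try? AVAudioSession.sharedInstance().setActive(true)
    }

    // Called on each tap: rewind the already-loaded player and play.
    func play() {
        player?.currentTime = 0
        player?.play()
    }
}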
In iOS 18, CarPlay shows an error: “There was a problem loading this content” after playback starts. Audio works fine, but the Now Playing screen doesn’t load. I’m using MPPlayableContentManager. This worked fine in iOS 17. Anyone else seeing this error in iOS 18?
I'm working on a project to support spatial audio editing, using this sample project as a reference: https://developer.apple.com/documentation/Cinematic/editing-spatial-audio-with-an-audio-mix
This sample works well on an unedited capture, but does not work for a capture that has already been edited.
The failure is occurring at "let audioInfo = try await CNAssetSpatialAudioInfo(asset: myAsset)", which is throwing "no eligible audio tracks in asset".
I also find that for already edited captures, if I use CNAssetSpatialAudioInfo.assetContainsSpatialAudio, it returns false.
What i mean by "already edited" is that if I take a spatial capture with my iPhone 16, and then edit that capture in the Photos app using the Cinematic effect, and then save the edited output (e.g. edited_capture.mov), I can't import that edited_capture.mov into my project as a spatial audio asset.
Is this intentional behavior or a bug?
If it's intentional, can you describe why?
I am working on an app which plays audio - https://youtu.be/VbAfUk_eYl0?si=nJg5ayy2faWE78-g - and one of its features is that, on restart, if a file was paused (or playing) when the app was previously shut down, the paused state and position in the file are restored exactly as they were.
The functionality works. However, it seems impossible to get the "now playing" information in iOS into the right state to reflect that via the MediaPlayer API. On restart, handlers are attached to the play/pause/togglePlayPause actions on MPRemoteCommandCenter.shared(), and the map of media info is updated on MPNowPlayingInfoCenter.default().nowPlayingInfo.
What happens is that iOS's media view shows the audio as playing and offers a pause button - even though the play action is enabled and the pause action is disabled.
Once playback has been initiated, the controls behave correctly. (My workaround is to have the pause action toggle the play state, since otherwise you wouldn't be able to initiate playback from the controls in a car without first initiating it once from the device.)
I've created a simplified white-noise-player demo to illustrate the problem - simply build and deploy it, and then start the app, lock your device and look at the playback controls on the lock screen. It will show a pause button - same behavior I've described.
https://github.com/timboudreau/ios-play-pause-demo
I've tried a few things to narrow down the source of the issue. For example, thinking that MPNowPlayingInfoPropertyPlaybackProgress and MPMediaItemPropertyPlaybackDuration might be the culprit (since the system interpolates elapsed time and it's recommended to update those properties infrequently), I tried omitting them on startup, but the result is the same, just without a duration or progress shown.
What governs this behavior, and is there some way to explicitly tell the media player API your current state is paused?
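To make the restore path concrete, here is a stripped-down sketch of the setup described above; names and values are illustrative, and setting the playback rate to 0 is simply what I would expect to signal "paused".

import MediaPlayer

func restoreNowPlaying(title: String, duration: TimeInterval, position: TimeInterval) {
    // Attach remote command handlers, enabling only "play" for a paused track.
    let commandCenter = MPRemoteCommandCenter.shared()
    commandCenter.playCommand.isEnabled = true
    commandCenter.pauseCommand.isEnabled = false
    _ = commandCenter.playCommand.addTarget { _ in
        // Resume playback here.
        return .success
    }

    // Publish now-playing info for a track that is currently paused.
    let info: [String: Any] = [
        MPMediaItemPropertyTitle: title,
        MPMediaItemPropertyPlaybackDuration: duration,
        MPNowPlayingInfoPropertyElapsedPlaybackTime: position,
        MPNowPlayingInfoPropertyPlaybackRate: 0.0
    ]
    MPNowPlayingInfoCenter.default().nowPlayingInfo = info
}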
I have a music app I'm developing, and I'm having a weird issue where I can see now-playing info on every platform other than tvOS. As far as I can tell, I have correctly configured MPNowPlayingInfoCenter:
MPNowPlayingInfoCenter.default().nowPlayingInfo = nowPlayingInfo
MPNowPlayingInfoCenter.default().playbackState = .playing
Are there any extra requirements to get my app's now-playing info to show in Control Center on tvOS? Another strange issue that might be related: I can use the Apple TV remote to pause audio, but not to resume playback, so I feel like there's something I'm missing about registering audio playback on tvOS specifically.
My audio app shows a control bar at the bottom of the window. The controls show nicely, but there is a black "slab" appearing behind the inline controls, the same size as the playerView. Setting the player view background color does nothing:
playerView.wantsLayer = true
playerView.layer?.backgroundColor = NSColor.clear.cgColor
How can I clear the background?
If I use .floating controlsStyle, I don't get the background "slab".
Hi,
I'm trying to set up an AVAudioEngine for USB audio recording and monitoring playthrough.
As soon as I try to set up the playthrough, I get an error in the console: AVAEInternal.h:83 required condition is false: [AVAudioEngineGraph.mm:1361:Initialize: (IsFormatSampleRateAndChannelCountValid(outputHWFormat))]
Any ideas how to fix it?
// Set the input device
try? setupInputDevice(deviceID: inputDevice)
let input = audioEngine.inputNode

// Force a stereo format
let inputHWFormat = input.inputFormat(forBus: 0)
let stereoFormat = AVAudioFormat(commonFormat: inputHWFormat.commonFormat,
                                 sampleRate: inputHWFormat.sampleRate,
                                 channels: 2,
                                 interleaved: inputHWFormat.isInterleaved)
guard let format = stereoFormat else {
    throw AudioError.deviceSetupFailed(-1)
}

print("Input format: \(inputHWFormat)")
print("Forced stereo format: \(format)")

audioEngine.attach(monitorMixer)
audioEngine.connect(input, to: monitorMixer, format: format)

// MonitorMixer -> MainMixer (output)
// Problem here: connecting with format: format also breaks.
audioEngine.connect(monitorMixer, to: audioEngine.mainMixerNode, format: nil)
Hi everyone 👋
I’m building an iOS app in Swift where I want to do the following:
Record the user’s voice
Transcribe the spoken sentence (speech-to-text)
Auto-detect the spoken language
Translate it to another language selected by the user (e.g., English → Spanish or Hindi → English)
Speak back (text-to-speech) the translated text on the same device
Is it possible to record via the phone's mic and play the transcribed (and translated) speech back through the headphones?
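To make the pipeline concrete, here is a rough sketch of the record / transcribe / speak-back parts; the language detection and translation steps are left out, all names are illustrative, and it assumes speech-recognition and microphone permissions have already been granted.

import AVFoundation
import Speech

final class SpeechPipeline {
    private let audioEngine = AVAudioEngine()
    private let recognizer = SFSpeechRecognizer()
    private let synthesizer = AVSpeechSynthesizer()
    private var recognitionTask: SFSpeechRecognitionTask?

    // Record from the microphone and stream buffers into speech recognition.
    func startTranscribing(onText: @escaping (String) -> Void) throws {
        let request = SFSpeechAudioBufferRecognitionRequest()
        let input = audioEngine.inputNode
        let format = input.outputFormat(forBus: 0)
        input.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, _ in
            request.append(buffer)
        }
        audioEngine.prepare()
        try audioEngine.start()
        recognitionTask = recognizer?.recognitionTask(with: request) { result, _ in
            if let result = result {
                onText(result.bestTranscription.formattedString)
            }
        }
    }

    // Speak the (already translated) text back.
    func speak(_ text: String, languageCode: String) {
        let utterance = AVSpeechUtterance(string: text)
        utterance.voice = AVSpeechSynthesisVoice(language: languageCode)
        synthesizer.speak(utterance)
    }
}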
My app Balletrax is a music player for people to use while they teach ballet. It used to be possible to silence notifications while the app was in use, but now the customer seems to have to know how to use Focus mode, remember to turn it on and off, and configure which notifications they do and don't want to allow. Is there no way to silence all notifications while the app is in use?
I found that an aggregate device correctly obtains input channels in standard microphone mode. However, in Voice Isolation mode, it only retrieves channels from the first sub-device in the aggregate device's list. How can I properly obtain channel information in Voice Isolation mode?
When using the Apple Devices app to sync Apple Music to an iPhone, where is the Apple Devices backup being written to?
Apple Devices -> Music -> Sync.
I am not trying to back up the iPhone via the Apple Devices app.