I have a Catalyst app ('container') which hosts an embedded AUv3 Audio Unit extension ('plugin'). This setup worked for years, including in this project until a few days ago.
It still works on iOS as expected.
On macOS the extension is never registered/installed and won't load.
The extension won't show up with auval.
It seems to have stopped working with the Xcode 26.1 update.
I'm fairly certain the problem is not code related (i.e. it is likely build settings, project settings, entitlements, signing, etc.).
I have compared all settings with another still-working project and can't find any meaningful difference.
(I can't request code-level support because even a minimal reproduction vastly exceeds the 250-line code limit.)
How can I debug the issue? I literally don't know where to start, short of rebuilding the entire thing and hoping that it magically starts working again.
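For what it's worth, the only programmatic check I know of is asking AVAudioUnitComponentManager whether the component is registered at all; in the sketch below the subtype and manufacturer codes ('demo', 'Acme') are placeholders for my plugin's actual values, and the type assumes an instrument:

import AVFoundation
import AudioToolbox

// Minimal check: is the AUv3 component registered with the system at all?
// 'demo' / 'Acme' are placeholders for the plugin's actual subtype and
// manufacturer codes.
func fourCC(_ s: String) -> FourCharCode {
    s.utf8.reduce(0) { ($0 << 8) | FourCharCode($1) }
}

var desc = AudioComponentDescription()
desc.componentType = kAudioUnitType_MusicDevice   // 'aumu', assuming an instrument
desc.componentSubType = fourCC("demo")
desc.componentManufacturer = fourCC("Acme")

let found = AVAudioUnitComponentManager.shared().components(matching: desc)
print("Registered matching components:", found.map { $0.name })

On macOS this returns an empty array for my plugin, which matches what auval reports.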
Hi, in my project I am using AVFoundation to record audio. We use the following AVAudioMixerNode method to capture the recorded audio buffers:
func installTap(
    onBus bus: AVAudioNodeBus,
    bufferSize: AVAudioFrameCount,
    format: AVAudioFormat?,
    block tapBlock: @escaping AVAudioNodeTapBlock
)
It works perfectly fine.
But in production, for a small percentage of users, we see an issue where, after a few buffers have been recorded, recording stops on its own even though the audio engine was never stopped. Can anyone help explain why this happens? I have also observed mediaServicesWereResetNotification and added a log on receiving this notification, but when the issue happens I never see that log. Also, is there any callback for when the engine stops?
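For context, this is roughly how the tap is installed, plus the configuration-change observer I am considering adding to catch the engine stopping (a simplified sketch; mixerNode here stands in for the mixer node our app actually uses):

import AVFoundation

// Simplified sketch of the current tap, plus an observer for engine
// configuration changes, which (as far as I understand) is posted when the
// engine is stopped by the system because of a route/hardware change.
let engine = AVAudioEngine()
let mixerNode = engine.mainMixerNode

mixerNode.installTap(onBus: 0, bufferSize: 4096, format: nil) { buffer, time in
    // Forward the recorded buffer to our packetizer (omitted here).
}

NotificationCenter.default.addObserver(
    forName: .AVAudioEngineConfigurationChange,
    object: engine,
    queue: .main
) { _ in
    // If the engine was stopped by the system, we would restart it
    // (and reinstall the tap) here.
    print("Engine configuration changed, isRunning = \(engine.isRunning)")
}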
My audio app shows a control bar at the bottom of the window. The controls show nicely, but there is a black "slab" appearing behind the inline controls, the same size as the playerView. Setting the player view background color does nothing:
playerView.wantsLayer = true
playerView.layer?.backgroundColor = NSColor.clear.cgColor
How can I clear the background?
If I use .floating controlsStyle, I don't get the background "slab".
Does anyone know how to play the sound of a specific instrument when a button is tapped on the screen of an iPhone or iPad? I'm in the middle of creating a music learning app, and I'm thinking of assigning single notes or chords to the button-like frames on the on-screen keyboard and fingerboard. Can this be achieved with SwiftUI alone? I remember that General MIDI (Level 1) had a way to trigger instrument sounds, but I don't know how to implement the same thing on current OS versions. Please lend me your wisdom.
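Here is the kind of sketch I had in mind, using AVAudioEngine with AVAudioUnitSampler to trigger a note from a SwiftUI button; the instrument file name is a placeholder and I have not verified this is the recommended approach:

import SwiftUI
import AVFoundation

// Minimal sketch: an engine with a sampler, triggered from a SwiftUI button.
// "Instrument.sf2" is a placeholder; a real app would bundle a sound font.
final class NotePlayer: ObservableObject {
    private let engine = AVAudioEngine()
    private let sampler = AVAudioUnitSampler()

    init() {
        engine.attach(sampler)
        engine.connect(sampler, to: engine.mainMixerNode, format: nil)
        if let url = Bundle.main.url(forResource: "Instrument", withExtension: "sf2") {
            try? sampler.loadSoundBankInstrument(at: url, program: 0,
                                                 bankMSB: 0x79, bankLSB: 0)
        }
        try? engine.start()
    }

    func play(note: UInt8) { sampler.startNote(note, withVelocity: 100, onChannel: 0) }
    func stop(note: UInt8) { sampler.stopNote(note, onChannel: 0) }
}

struct KeyButton: View {
    @ObservedObject var player: NotePlayer
    var body: some View {
        Button("C4") { player.play(note: 60) }  // MIDI note 60 = middle C
    }
}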
When multiple identical songs are added to a playlist, Playlist.Entry.id uses a suffix-based identifier (e.g. songID_0, songID_1, etc.). Removing one entry causes others to shift, changing their .id values. This leads to diffing errors and collection view crashes in SwiftUI or UIKit when entries are updated.
Steps to Reproduce:
Add the same song to a playlist multiple times.
Observe .id.rawValue of entries (e.g. i.SONGID_0, i.SONGID_1).
Remove one entry.
Fetch playlist again — note the other IDs have shifted.
FB18879062
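A minimal sketch of the fetch I use to observe the shifting IDs (simplified; it assumes MusicKit authorization and an existing library playlist that contains duplicate songs):

import MusicKit

// Fetch a library playlist with its entries and print the entry IDs.
// With duplicate songs, the IDs come back as songID_0, songID_1, ...,
// and they shift after one entry is removed.
func printEntryIDs() async throws {
    var request = MusicLibraryRequest<Playlist>()
    request.limit = 1
    let response = try await request.response()
    guard let playlist = response.items.first else { return }

    let detailed = try await playlist.with([.entries])
    for entry in detailed.entries ?? [] {
        print(entry.id.rawValue)
    }
}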
Hello,
I am building an iOS-only, commercial app that uses AVSpeechSynthesizer with system voices, strictly using the APIs provided by Apple. Before distributing the app, I want to ensure that my current implementation does not conflict with the iOS Software License Agreement (SLA) and is aligned with Apple’s intended usage.
For a better playback experience (more accurate estimation of utterance duration and smoother skip forward/backward during playback), I currently synthesize speech using:
AVSpeechSynthesizer.write(_:toBufferCallback:)
Converting the received AVAudioPCMBuffer buffers into audio data
Storing the audio inside the app sandbox
Playing it back using AVAudioPlayer / AVAudioEngine
The cached audio is:
Generated fully on-device using system voices
Stored only inside the app’s private container
Used only for internal playback controls (timeline, seek, skip ±5 seconds)
Never shared, exported, uploaded, or exposed outside the app
The alternative approaches would be:
Keeping the generated audio entirely in memory (RAM) for playback purposes, without writing it to the file system at any point
Or using AVSpeechSynthesizer.speak(_:) and playing speech strictly in real time which has a poorer user experience compared to my approach
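For completeness, the caching path (the first approach above) is essentially the following, simplified sketch; the file name is illustrative, error handling is omitted, and the synthesizer would of course need to be kept alive until synthesis completes:

import AVFoundation

// Synthesize an utterance to PCM buffers and append them to a file inside
// the app's sandbox; "utterance.caf" is an illustrative name.
func cacheUtterance(_ text: String, to fileURL: URL) {
    let synthesizer = AVSpeechSynthesizer()
    let utterance = AVSpeechUtterance(string: text)
    var outputFile: AVAudioFile?

    synthesizer.write(utterance) { buffer in
        guard let pcm = buffer as? AVAudioPCMBuffer, pcm.frameLength > 0 else {
            return  // a zero-length buffer marks the end of the utterance
        }
        if outputFile == nil {
            outputFile = try? AVAudioFile(forWriting: fileURL,
                                          settings: pcm.format.settings)
        }
        try? outputFile?.write(from: pcm)
    }
}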
I have reviewed the current iOS Software License Agreement:
https://www.apple.com/legal/sla/docs/iOS18_iPadOS18.pdf
In particular, section (f) mentions restrictions around System Characters, Live Captions, and Personal Voice, including the following excerpt:
“…use … only for your personal, non-commercial use…
No other creation or use of the System Characters, Live Captions, or Personal Voice is permitted by this License, including but not limited to the use, reproduction, display, performance, recording, publishing or redistribution in a … commercial context.”
I do not see a specific reference in the SLA to system text-to-speech voices used via AVSpeechSynthesizer, and I want to be certain that temporarily caching synthesized speech for internal, non-exported playback is acceptable in a commercial app.
My question is:
Is caching AVSpeechSynthesizer system-voice output inside the app sandbox for internal playback acceptable, or is Apple’s recommended approach to rely only on real-time playback (speak(_:)) or strictly in-memory buffering without file storage?
If this question falls outside DTS technical scope and is instead a policy or licensing matter, I would appreciate guidance on the authoritative Apple documentation or the correct Apple team/contact.
Thank you.
Since iOS 18, the system setting “Allow Audio Playback” (enabled by default) allows third-party app audio to continue playing while the user is recording video with the Camera app. This has created a problem for the app I’m developing.
➡️ The problem:
My app plays continuous audio in both foreground and background states. If the user starts recording video using the iOS Camera app, the app’s audio — still playing in the background — gets captured in the video — obviously an unintended behavior.
Yes, the user could stop the app manually before starting the video recording, but that can’t be guaranteed. As a developer, I need a way to stop the app’s audio before the video recording begins.
So far, I haven’t found a reliable way to detect when video recording starts if ‘Allow Audio Playback’ is ON.
➡️ What I’ve tried:
— AVAudioSession.interruptionNotification → doesn’t fire
— devicesChangedEventStream → not triggered
I don’t want to request mic permission (app doesn’t use mic). also, disabling the app from playing audio in the background isn’t an option as it is a crucial part of the user experience
➡️ What I need:
A reliable, supported way to detect when the Camera app begins video recording, without requiring mic access — so I can stop audio and avoid unintentional overlap with the user’s recordings.
Any official guidance, workarounds, or AVFoundation techniques would be greatly appreciated.
Thanks.
Environment
Windows 11 [edition/build]: [e.g., 23H2, 22631.x]
Apple Music for Windows version: [e.g., 1.x.x from Microsoft Store]
Library folder: C:\Users\<user>\Music\Apple Music\Apple Music Library.musiclibrary
Summary
I need a supported way to programmatically enumerate the local Apple Music library on Windows (track file paths, playlists, etc.) for reconciliation with the on-disk Media folder. On macOS this used to be straightforward via scripting/export; on Windows I can’t find an equivalent.
What I’m seeing in the library bundle
Library.musicdb → not SQLite. First 4 bytes: 68 66 6D 61 ("hfma").
Library Preferences.musicdb → also starts with "hfma".
artwork.sqlite → SQLite but appears to be artwork cache only (no track file paths).
Extras.itdb → has SQLite format 3 header but (from a quick scan) not seeing track locations.
Genius.itdb → not a SQLite database on this machine.
What I’ve tried
Attempted to open Library.musicdb with SQLite providers → error: “file is not a database.”
Binary/string scans (ASCII, UTF-16LE/BE, null-stripped) of Library.musicdb → did not reveal file paths or obvious plist/XML/JSON blobs.
The Windows Apple Music UI doesn’t appear to expose “Export Library / Export Playlist” like legacy iTunes did, and I can’t find a public API for local library enumeration on Windows.
What I’m trying to accomplish
Read local track entries (absolute or relative paths), detect broken links, and reconcile against the Media folder. A read-only solution is fine; I do not need to modify the library.
Questions for Apple
Is the Library.musicdb file format documented anywhere, or is there a supported SDK/API to enumerate the local library on Windows?
Is there a supported export mechanism (CLI, UI, or API) on Windows Apple Music to dump the local library and/or playlists (XML/CSV/JSON)?
Is there a Windows-specific equivalent to the old iTunes COM automation or any MusicKit surface that can return local library items (not streaming catalog) and their file locations?
If none of the above exist today, is there a recommended workaround from Apple for library reconciliation on Windows (e.g., documented support for importing M3U/M3U8 to rebuild the local library from disk)?
Are there any plans/timeline for adding Windows feature parity with iTunes/Music on macOS for exporting or scripting the local library?
Why this matters
For large personal libraries, users occasionally end up with orphaned files on disk or broken links in the app. Without an export or API, it’s difficult to audit and fix at scale on Windows.
Reference details (in case it helps triage)
Library.musicdb header bytes: 68-66-6D-61-A0-00-00-00-10-26-34-00-15-00-01-00 (ASCII shows hfma…).
artwork.sqlite is readable but doesn’t contain track file paths (appears limited to artwork).
I can supply a minimal repro tool and logs if that’s helpful.
Feature request (if no current API)
Add an official Export Library/Playlists action on Windows Apple Music, or
Provide a read-only Windows API (or schema doc) that surfaces track file locations and playlist membership from the local library.
Thanks in advance for any guidance or pointers to docs I might have missed.
Hello,
The search functionality of the coreaudio-api mailing list archive has been broken for a very long time. Several of the lower-level audio APIs have only been discussed on this mailing list, making it critical for those of us maintaining old audio code.
Steps to reproduce:
Open https://lists.apple.com/archives/list/coreaudio-api@lists.apple.com/ in your web browser.
Enter a search term in the "Search this list" field in the top-right corner of the page.
The search will eventually time out with "502 Bad Gateway"
Can somebody please forward this information to the current maintainer? I've tried to contact developer support but they weren't sure what to do.
Thanks!
Hi,
I'm still stuck getting a basic record-with-playthrough pipeline to work.
Does anyone have a sample of setting up an AVAudioEngine pipeline for recording with playthrough?
Playthrough works with AVPlayerNode as input but not with any microphone input. The docs mention the "enabled state" of the engine's outputNode without explaining the concept, i.e. how to enable an output.
When the engine renders to and from an audio device, the AVAudioSession category and the availability of hardware determines whether an app performs output. Check the output node’s output format (specifically, the hardware format) for a nonzero sample rate and channel count to see if output is in an enabled state.
Well, in my setup the output is NOT enabled, and any attempt to switch (e.g. audioEngine.outputNode.auAudioUnit.setDeviceID(deviceID)) or attach a dedicated device results in exceptions / errors.
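For reference, the minimal playthrough graph I would expect to work (and which, for microphone input, currently leaves the output disabled in my setup) is roughly:

import AVFoundation

// Connect the hardware input straight to the main mixer (which feeds the
// output node), then start the engine.
let engine = AVAudioEngine()
let input = engine.inputNode
let format = input.inputFormat(forBus: 0)

engine.connect(input, to: engine.mainMixerNode, format: format)

do {
    try engine.start()
} catch {
    print("Engine failed to start: \(error)")
}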
I'm running HomePod OS 26 on two HomePod minis and OS 18.6 on my main HomePod (original).
I've enabled Crossfade in the Home app.
I'm playing Apple Music directly on the HomePod mini.
Crossfade just doesn't work on any HomePod.
I can understand it not working on the original HomePod, but why isn't it working on the minis running OS 26?
I've tried disabling and re-enabling Crossfade, rebooting the HomePods, etc., but nothing helps.
I am developing an application with a UVC camera (a webcam). I use the AVFoundation library, but when I run the code [self.mCaptureSession startRunning], I cannot get any buffers, even though I have already set the delegate. Any answer will help.
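In case it helps, this is the rough shape of my setup, translated to Swift for brevity (device selection and error handling simplified):

import AVFoundation

// Simplified capture setup: default video device -> video data output
// with a sample buffer delegate.
final class CameraReader: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    let session = AVCaptureSession()
    private let queue = DispatchQueue(label: "camera.reader")

    func start() {
        session.beginConfiguration()
        if let device = AVCaptureDevice.default(for: .video),
           let input = try? AVCaptureDeviceInput(device: device),
           session.canAddInput(input) {
            session.addInput(input)
        }
        let output = AVCaptureVideoDataOutput()
        output.setSampleBufferDelegate(self, queue: queue)
        if session.canAddOutput(output) {
            session.addOutput(output)
        }
        session.commitConfiguration()
        session.startRunning()  // a real app calls this off the main thread
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        // This is where I expect to receive buffers but never do.
    }
}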
I've been researching how to achieve, on iOS, a recording-with-playback effect similar to the hands-free (speakerphone) calling experience in the system Phone app. How can this be implemented? I tried the voice-chat recording mode, but found that the speaker output volume is too low. How should this issue be addressed? I couldn't find a suitable API. Could you provide me with some documentation or sample code? Thank you.
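For reference, this is roughly the session configuration I tried (and the output override I am not sure about):

import AVFoundation

// Voice-chat style session with output forced to the built-in speaker;
// in my tests the speaker level still seems low.
let session = AVAudioSession.sharedInstance()
do {
    try session.setCategory(.playAndRecord,
                            mode: .voiceChat,
                            options: [.defaultToSpeaker, .allowBluetooth])
    try session.setActive(true)
    try session.overrideOutputAudioPort(.speaker)
} catch {
    print("Audio session setup failed: \(error)")
}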
Hi,
On macOS I used to open MP3 and MP4 files with ExtAudioFile. For a few years now it hasn't worked anymore.
So I decided to try a different macOS API, using the AudioFileID API of the AudioToolbox framework.
I decided to write a test:
https://gist.github.com/joelkraehemann/7f5b241b52ca38c3a765c138fb647588
It fails right here:
AudioFileOpenWithCallbacks()
It returns OSStatus error 1954115647, which means kAudioFileUnsupportedFileTypeError.
The filename was set to an MP4 file:
~/Music/test.mp4
How do I fix this?
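One thing I plan to try next (I am not sure whether a file type hint is required when going through callbacks, since there is no filename extension to infer from) is opening the same file directly with an explicit hint:

import AudioToolbox
import Foundation

// Open the same file directly, passing an M4A/MP4 type hint instead of
// relying on callbacks.
let path = NSString(string: "~/Music/test.mp4").expandingTildeInPath
let url = URL(fileURLWithPath: path)
var fileID: AudioFileID?
let status = AudioFileOpenURL(url as CFURL,
                              .readPermission,
                              kAudioFileM4AType,  // or kAudioFileMPEG4Type
                              &fileID)
print("AudioFileOpenURL status:", status)
if let file = fileID {
    AudioFileClose(file)
}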
regards, Joël
I’m facing a strange audio routing issue that seems specific to iPhone 14 Pro / Pro Max.
I’m using LiveKit (WebRTC) in a React Native app, which uses AVAudioSession internally for audio capture (VoIP / call-style usage).
🔍 What’s happening:
I’m using an external USB microphone.
On these devices:
iPhone 11 → ✅ USB mic works
iPhone 13 → ✅ USB mic works
iPhone 17 Pro → ✅ USB mic works
iPhone 14 Pro Max → ❌ USB mic does NOT work
On iPhone 14 Pro Max:
The same USB mic:
✅ Works in Voice Memos
✅ Works in Instagram Live
❌ Does NOT appear as an input option in my app
❌ Does NOT work in WhatsApp / Instagram calls
Also:
In my app on iPhone 14 Pro Max, iOS does not show the audio input selector UI
On iPhone 17 Pro, the same app and same build does show the selector and the USB mic works
⚙️ My audio session config (LiveKit):
await AudioSession.setAppleAudioConfiguration({
  audioCategory: 'playAndRecord',
  audioMode: 'default',
  audioCategoryOptions: ['allowBluetooth', 'defaultToSpeaker'],
});
await AudioSession.startAudioSession();
❓ My questions:
Is this a known limitation or behavior specific to iPhone 14 Pro / Pro Max?
Does iPhone 14 Pro have different audio routing rules for call / VoIP mode compared to other devices?
Why does the same USB mic work in recording apps (Voice Memos, Instagram Live) but not in call-style apps (LiveKit, WhatsApp, Instagram call)?
Is there any documented difference in AVAudioSession behavior on iPhone 14 Pro regarding external USB audio inputs?
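On the native side, the diagnostic I run to compare devices looks roughly like this (Swift, called from the React Native bridge; simplified):

import AVFoundation

// List every input AVAudioSession reports and try to prefer the USB one.
let session = AVAudioSession.sharedInstance()
for input in session.availableInputs ?? [] {
    print(input.portType.rawValue, input.portName)
}
if let usb = session.availableInputs?.first(where: { $0.portType == .usbAudio }) {
    do {
        try session.setPreferredInput(usb)
    } catch {
        print("setPreferredInput failed: \(error)")
    }
}
// On the iPhone 14 Pro Max the USB port never appears in availableInputs.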
Hi!
I get personal recommendations MusicItemCollection using this code:
func getRecommendations() async throws -> MusicItemCollection<MusicPersonalRecommendation> {
    let request = MusicPersonalRecommendationsRequest()
    let response = try await request.response()
    let recommendations = response.recommendations
    return recommendations
}
However, every recommendation contains no more than 12 MusicItems, while the Music app provides many more for some recommendations; for example, for the "You recently listened" recommendation, the Music app displays 40 items. Each recommendation has an items property containing a collection of music items, MusicItemCollection<MusicPersonalRecommendation.Item>, and the hasNextBatch property for these collections is always false. I expected that loading additional items would be available for some collections. Please tell me if I'm doing something wrong, or is this a MusicKit bug?
Thank you!
There appears to be no way to go forward or backward in the Get Info window in the Music application.
Hi,
I'm currently developing an AVB hardware device, and I'm stuck because the Apple AVB stack is throwing errors without much information.
Is there any way to get more information about these assertions and why they are happening?
Furthermore, is there any documentation on the AppleAVBAudio module? It would be very handy.
Here are the logs shown in the console:
Filtering the log data using "process == "coreaudiod""
Timestamp Thread Type Activity PID TTL
2025-12-05 15:44:27.087043+0100 0x15ae74 Default 0x0 12965 0 coreaudiod: (AppleAVBAudio) Assert: <private> (value 0x0 0), <private> file: <private>, line: 1533
2025-12-05 15:44:27.087545+0100 0x15ae74 Default 0x0 12965 0 coreaudiod: (AppleAVBAudio) Assert: <private> (value 0x0 0), <private> file: <private>, line: 1533
2025-12-05 15:44:27.088043+0100 0x15ae74 Default 0x0 12965 0 coreaudiod: (AppleAVBAudio) Assert: <private> (value 0x0 0), <private> file: <private>, line: 1533
… plus 13 more identical assert lines, all referencing line 1533, arriving at roughly 0.5 ms intervals (16 in total in this capture).
I am trying to stream audio from local filesystem.
For that, I am trying to use an AVAssetResourceLoaderDelegate for an AVURLAsset. However, Content-Length is not known at the start. To overcome this, I tried several methods:
Set content length as nil, in the AVAssetResourceLoadingContentInformationRequest
Set content length to -1, in the ContentInformationRequest
Both of these cause the AVPlayerItem to fail with an error.
I also tried setting Content-Length as INT_MAX, and setting a renewalDate = Date(timeIntervalSinceNow: 5). However, that seems to be buggy. Even after updating the Content-Length to the correct value (e.g. X bytes) and finishing that loading request, the resource loader keeps getting requests with requestedOffset = X with dataRequest.requestsAllDataToEndOfResource = true. These requests keep coming indefinitely, and as a result it seems that the next item in the queue does not get played. Also, .AVPlayerItemDidPlayToEndTime notification does not get called.
I wanted to check if this is an expected behavior or is there a bug in this implementation. Also, what is the recommended way to stream audio of unknown initial length from local file system?
Thanks!
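For reference, the relevant part of my delegate looks roughly like this (simplified; the content type is illustrative, the whole file is re-read per request, and the INT_MAX / renewalDate variant described above is not shown):

import AVFoundation

// Simplified resource loader delegate: fill in content info, then serve
// the requested byte range from the local file as it grows.
final class LocalStreamLoader: NSObject, AVAssetResourceLoaderDelegate {
    let fileURL: URL
    init(fileURL: URL) { self.fileURL = fileURL }

    func resourceLoader(_ resourceLoader: AVAssetResourceLoader,
                        shouldWaitForLoadingOfRequestedResource loadingRequest: AVAssetResourceLoadingRequest) -> Bool {
        if let info = loadingRequest.contentInformationRequest {
            info.contentType = "public.mp3"         // illustrative UTI
            info.isByteRangeAccessSupported = true
            info.contentLength = knownLengthSoFar() // the value in question
        }
        if let dataRequest = loadingRequest.dataRequest,
           let data = try? Data(contentsOf: fileURL) {
            let offset = Int(dataRequest.requestedOffset)
            let length = min(dataRequest.requestedLength, data.count - offset)
            if length > 0 {
                dataRequest.respond(with: data.subdata(in: offset..<offset + length))
            }
        }
        loadingRequest.finishLoading()
        return true
    }

    private func knownLengthSoFar() -> Int64 {
        let attrs = try? FileManager.default.attributesOfItem(atPath: fileURL.path)
        return (attrs?[.size] as? Int64) ?? 0
    }
}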