I am trying to get MIDI output from the AU Host demo app using the recent MIDI processor example. The processor works correctly in Logic Pro, but in standalone mode, under the default host app, I cannot send MIDI from the AUv3 extension to another program (e.g., Ableton).
The MIDI manager, which is part of the standalone host app, works fine, and I can send MIDI using it directly—Ableton receives it without issues. I have already set the midiOutputNames in the extension, and the midiOutBlock is mapped. However, the MIDI data from the AUv3 extension does not reach Ableton in standalone mode. I suspect the issue is that midiOutBlock might never be called in the plugin, or perhaps an input to the plugin is missing, which prevents it from sending MIDI. I am currently using the default routing.
I have modified the MIDI manager so that it works as described above. Here is part of my code from SimplePlayEngine.swift and MIDIManager.swift for reference:
import AVFoundation
import AudioToolbox
import CoreMIDI

@MainActor
@Observable
public class SimplePlayEngine {
    // (engine, player, avAudioUnit, file, isPlaying, reset(),
    //  setSessionActive(_:), scheduleEffectLoop() and other members elided)

    // Note: currently a no-op — the AU's MIDI output is discarded and noErr returned.
    private let midiOutBlock: AUMIDIOutputEventBlock = { sampleTime, cable, length, data in
        return noErr
    }

    var scheduleMIDIEventListBlock: AUMIDIEventListBlock? = nil

    public init() {
        engine.attach(player)
        engine.prepare()
        setupMIDI()
    }

    private func setupMIDI() {
        // Forward incoming Core MIDI events to the plug-in's event-list block.
        if !MIDIManager.shared.setupPort(midiProtocol: MIDIProtocolID._2_0, receiveBlock: { [weak self] eventList, _ in
            if let scheduleMIDIEventListBlock = self?.scheduleMIDIEventListBlock {
                _ = scheduleMIDIEventListBlock(AUEventSampleTimeImmediate, 0, eventList)
            }
        }) {
            fatalError("Failed to set up Core MIDI")
        }
    }

    func initComponent(type: String, subType: String, manufacturer: String) async -> ViewController? {
        reset()
        guard let component = AVAudioUnit.findComponent(type: type, subType: subType, manufacturer: manufacturer) else {
            fatalError("Failed to find component with type: \(type), subtype: \(subType), manufacturer: \(manufacturer)")
        }
        do {
            let audioUnit = try await AVAudioUnit.instantiate(
                with: component.audioComponentDescription,
                options: AudioComponentInstantiationOptions.loadOutOfProcess)
            self.avAudioUnit = audioUnit
            self.connect(avAudioUnit: audioUnit)
            return await audioUnit.loadAudioUnitViewController()
        } catch {
            return nil
        }
    }

    private func startPlayingInternal() {
        guard let avAudioUnit = self.avAudioUnit else { return }
        setSessionActive(true)
        if avAudioUnit.wantsAudioInput { scheduleEffectLoop() }
        let hardwareFormat = engine.outputNode.outputFormat(forBus: 0)
        engine.connect(engine.mainMixerNode, to: engine.outputNode, format: hardwareFormat)
        do {
            try engine.start()
        } catch {
            isPlaying = false
            fatalError("Could not start engine. error: \(error).")
        }
        if avAudioUnit.wantsAudioInput { player.play() }
        isPlaying = true
    }

    private func resetAudioLoop() {
        guard let avAudioUnit = self.avAudioUnit else { return }
        if avAudioUnit.wantsAudioInput {
            guard let format = file?.processingFormat else { fatalError("No AVAudioFile defined.") }
            engine.connect(player, to: engine.mainMixerNode, format: format)
        }
    }

    public func connect(avAudioUnit: AVAudioUnit?, completion: @escaping (() -> Void) = {}) {
        guard let avAudioUnit = self.avAudioUnit else { return }

        // Break the audio unit out of the engine's processing chain.
        engine.disconnectNodeInput(engine.mainMixerNode)
        resetAudioLoop()
        engine.detach(avAudioUnit)

        let auAudioUnit = avAudioUnit.auAudioUnit

        // Declared after auAudioUnit so the nested function can capture it.
        func rewiringComplete() {
            scheduleMIDIEventListBlock = auAudioUnit.scheduleMIDIEventListBlock
            if isPlaying { player.play() }
            completion()
        }

        let hardwareFormat = engine.outputNode.outputFormat(forBus: 0)
        engine.connect(engine.mainMixerNode, to: engine.outputNode, format: hardwareFormat)

        if isPlaying { player.pause() }

        // If the plug-in declares MIDI outputs, install the (currently no-op) output block.
        if !auAudioUnit.midiOutputNames.isEmpty {
            auAudioUnit.midiOutputEventBlock = midiOutBlock
        }

        engine.attach(avAudioUnit)
        if avAudioUnit.wantsAudioInput {
            engine.disconnectNodeInput(engine.mainMixerNode)
            if let format = file?.processingFormat {
                engine.connect(player, to: avAudioUnit, format: format)
                engine.connect(avAudioUnit, to: engine.mainMixerNode, format: format)
            }
        } else {
            let stereoFormat = AVAudioFormat(standardFormatWithSampleRate: hardwareFormat.sampleRate, channels: 2)
            engine.connect(avAudioUnit, to: engine.mainMixerNode, format: stereoFormat)
        }
        rewiringComplete()
    }
}
And here is my MIDIManager.swift:
@MainActor
class MIDIManager: Identifiable, ObservableObject {
    // (client, port, sources, portName, virtualSourceName, virtualSource,
    //  setupClient() and other members elided)

    func setupPort(midiProtocol: MIDIProtocolID,
                   receiveBlock: @escaping @Sendable MIDIReceiveBlock) -> Bool {
        guard setupClient() else { return false }
        if MIDIInputPortCreateWithProtocol(client, portName, midiProtocol, &port, receiveBlock) != noErr {
            return false
        }
        for source in self.sources {
            if MIDIPortConnectSource(port, source, nil) != noErr {
                print("Failed to connect to source \(source)")
                return false
            }
        }
        setupVirtualMIDIOutput()
        return true
    }

    private func setupVirtualMIDIOutput() {
        let virtualStatus = MIDISourceCreate(client, virtualSourceName, &virtualSource)
        if virtualStatus != noErr {
            print("❌ Failed to create virtual MIDI source: \(virtualStatus)")
        } else {
            print("✅ Created virtual MIDI source: \(virtualSourceName)")
        }
    }

    func sendMIDIData(_ data: [UInt8]) {
        var packetList = MIDIPacketList()
        withUnsafeMutablePointer(to: &packetList) { ptr in
            let pkt = MIDIPacketListInit(ptr)
            // Pass the struct's actual size rather than an arbitrary 1024, so
            // MIDIPacketListAdd cannot write past the allocation.
            _ = MIDIPacketListAdd(ptr, MemoryLayout<MIDIPacketList>.size, pkt, 0, data.count, data)
            if virtualSource != 0 {
                let status = MIDIReceived(virtualSource, ptr)
                if status != noErr {
                    print("❌ Failed to send MIDI data: \(status)")
                } else {
                    print("✅ Sent MIDI data: \(data)")
                }
            }
        }
    }
}
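To make the intended data path concrete: once midiOutputEventBlock actually fires, I expect to need a forwarding block along these lines instead of the no-op above (a sketch only; the hop onto the main actor is just to satisfy MIDIManager's isolation and is not render-thread-safe):

private let forwardingMIDIOutBlock: AUMIDIOutputEventBlock = { sampleTime, cable, length, data in
    // Copy the raw MIDI bytes out of the render thread's buffer.
    let bytes = [UInt8](UnsafeBufferPointer(start: data, count: length))
    // Illustration only: allocating a Task on the render thread is not safe in production.
    Task { @MainActor in
        MIDIManager.shared.sendMIDIData(bytes)
    }
    return noErr
}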
I found that the aggregated device correctly obtains input channels in the standard microphone mode. However, in voice isolation mode, it only retrieves channels from the first sub-device in the aggregated device's list. If I want to properly obtain channel information in voice isolation mode, how should I do it?
Hi all,
As soon as audio is played in any app, coreaudiod inserts a sleep-prevention assertion for both the system AND the display.
Can I somehow stop the insertion of the display sleep assertion?
pid 223(coreaudiod): [0x00004e9e00058dc2] 00:03:18 PreventUserIdleDisplaySleep named: "com.apple.audio.AppleGFXHDAEngineOutputDP:10001:0:{B31A-08C6-00000000}.context.preventuseridledisplaysleep"
Created for PID: 4145.
where PID 4145 is Spotify, but it doesn't matter which app is playing the audio.
Any help would be appreciated.
Thanks
Environment
Device: iPhone 16e
iOS Version: 18.4.1 - 18.7.1
Framework: AVFoundation (AVAudioEngine)
Problem Summary
On iPhone 16e (iOS 18.4.1-18.7.1), the installTap callback stops being invoked after resuming from a phone call interruption. This issue is specific to phone call interruptions and does not occur on iPhone 14, iPhone SE 3, or earlier devices.
Expected Behavior
After a phone call interruption ends and audioEngine.start() is called, the previously installed tap should continue receiving audio buffers.
Actual Behavior
After resuming from phone call interruption:
Tap callback is no longer invoked
No audio data is captured
No errors are thrown
Engine appears to be running normally
Note: Normal pause/resume (without phone call interruption) works correctly.
Steps to Reproduce
Start audio recording on iPhone 16e
Receive or make a phone call (triggers AVAudioSession interruption)
End the phone call
Resume recording with audioEngine.start()
Result: Tap callback is not invoked
Tested devices:
iPhone 16e (iOS 18.4.1-18.7.1): Issue reproduces ✗
iPhone 14 (iOS 18.x): Works correctly ✓
iPhone SE 3 (iOS 18.x): Works correctly ✓
Code
Initial Setup (Works)
let inputNode = audioEngine.inputNode
inputNode.installTap(onBus: 0, bufferSize: 4096, format: nil) { buffer, time in
    self.processAudioBuffer(buffer, at: time)
}
audioEngine.prepare()
try audioEngine.start()
Interruption Handling
NotificationCenter.default.addObserver(
    forName: AVAudioSession.interruptionNotification,
    object: AVAudioSession.sharedInstance(),
    queue: nil
) { notification in
    guard let userInfo = notification.userInfo,
          let typeValue = userInfo[AVAudioSessionInterruptionTypeKey] as? UInt,
          let type = AVAudioSession.InterruptionType(rawValue: typeValue) else {
        return
    }
    if type == .began {
        self.audioEngine.pause()
    } else if type == .ended {
        try? self.audioSession.setActive(true)
        try? self.audioEngine.start()
        // Tap callback doesn't work after this on iPhone 16e
    }
}
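For reference, here is the .ended branch extended with the shouldResume option check, which I understand to be the commonly recommended pattern (a sketch; audioSession and audioEngine are the same objects as above):

func handleInterruptionEnded(_ userInfo: [AnyHashable: Any]) {
    // Respect the system's hint about whether resumption is appropriate.
    let rawOptions = userInfo[AVAudioSessionInterruptionOptionKey] as? UInt ?? 0
    let options = AVAudioSession.InterruptionOptions(rawValue: rawOptions)
    guard options.contains(.shouldResume) else { return }
    try? audioSession.setActive(true)
    try? audioEngine.start()
}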
Workaround
Full engine restart is required on iPhone 16e:
func resumeAfterInterruption() throws {
    audioEngine.stop()
    inputNode.removeTap(onBus: 0)
    inputNode.installTap(onBus: 0, bufferSize: 4096, format: nil) { buffer, time in
        self.processAudioBuffer(buffer, at: time)
    }
    audioEngine.prepare()
    try audioSession.setActive(true)
    try audioEngine.start()
}
This works but adds latency and complexity compared to simple resume.
Questions
Is this expected behavior on iPhone 16e?
What is the recommended way to handle phone call interruptions?
Why does this only affect iPhone 16e and not iPhone 14 or SE 3?
Any guidance would be appreciated!
Hi, I'm trying to plan out development of an app and am wondering whether it is possible to have user-generated content automatically populate a custom ShazamKit catalog, and to query that catalog non-locally.
Storing all the submissions locally would obviously not scale.
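For context, the local custom-catalog flow as I understand it looks roughly like this; it is exactly the part that would not scale if every submission had to live on-device (a sketch; the signature data and title are placeholders):

import ShazamKit

// Sketch: build a local custom catalog from one user-generated signature.
func makeSession(signatureData: Data) throws -> SHSession {
    let catalog = SHCustomCatalog()
    let signature = try SHSignature(dataRepresentation: signatureData)
    let item = SHMediaItem(properties: [.title: "User submission"])
    try catalog.addReferenceSignature(signature, representing: [item])
    return SHSession(catalog: catalog)
}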
Hi everyone,
We’re encountering an issue where AudioQueueNewOutput blocks indefinitely and never returns, and we’re hoping to get some insight or confirmation if this is a known behavior/regression on newer iOS versions.
Issue Description
When triggering audio playback, we create an output AudioQueue using AudioQueueNewOutput.
On some devices, the call hangs inside AudioQueueNewOutput and never returns, with no OSStatus error and no subsequent logs.
This behavior is reproducible mainly on iOS 18.3.
Earlier iOS versions do not show this issue under the same code path.
if (audioDes)
{
    mAudioDes.mSampleRate       = audioDes->mSampleRate;
    mAudioDes.mBitsPerChannel   = audioDes->mBitsPerChannel;
    mAudioDes.mChannelsPerFrame = audioDes->mChannelsPerFrame;
    mAudioDes.mFormatID         = audioDes->mFormatID;
    mAudioDes.mFormatFlags      = audioDes->mFormatFlags;
    mAudioDes.mFramesPerPacket  = audioDes->mFramesPerPacket;
    mAudioDes.mBytesPerFrame    = audioDes->mBytesPerFrame;
    mAudioDes.mBytesPerPacket   = audioDes->mBytesPerFrame;
    mAudioDes.mReserved         = 0;
}

// Create AudioQueue for output
OSStatus status = AudioQueueNewOutput(
    &mAudioDes,
    AQOutputCallback,
    this,
    NULL,
    NULL,
    0,
    &audioQueue
);
The thread blocks inside AudioQueueNewOutput, and execution never reaches the next line.
Additional Notes / Observations
ASBD is confirmed to be valid
Standard PCM output
Sample rate, channels, bytes per frame/packet all consistent
Same ASBD works correctly on earlier iOS versions
AudioQueue is created on a background thread
Not on the main thread
Not inside the AudioQueue callback
On first creation, AVAudioSession may not yet be active
setCategory and setActive:YES may be called shortly before creating the AudioQueue
There may be a timing window where the session is still activating
Issue is reported mainly on iOS 18.3
Multiple user reports point to iOS 18.3 devices
Same code path works on iOS 17.x and earlier
No OSStatus error is returned — the call simply never returns.
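Our creation code is in C++, but the ordering change we are considering is easiest to show against the session API directly (a sketch; whether this actually avoids the hang is exactly what we would like to confirm):

import AVFoundation

// Sketch: make sure the session is fully active before creating the queue.
func activateSessionBeforeQueueCreation() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playback)
    try session.setActive(true) // synchronous; returns after activation completes
    // ...only then call AudioQueueNewOutput(...) on the background thread.
}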
Questions
Is it expected that AudioQueueNewOutput can block indefinitely while waiting for AVAudioSession / audio route / HAL readiness?
Have there been any behavior changes in iOS 18.3 regarding AudioQueue creation or AudioSession synchronization?
Is it unsafe to call AudioQueueNewOutput before AVAudioSession is fully active on recent iOS versions?
Are there recommended patterns (or delays / callbacks) to ensure AudioQueue creation does not hang?
Any insight or confirmation would be greatly appreciated.
Thanks in advance!
ApplicationMusicPlayer is not available on watchOS, although it is on all other platforms. Is there a technical reason for that, such as battery life? The same goes for SystemMusicPlayer and MPMusicPlayerController. I have already filed feedback for this.
A bit of a novice to app development here, but I have a paid developer account, and I have registered the identifier for MusicKit on the developer website (using the bundle identifier I've selected in Xcode), yet the option to add MusicKit as a capability is not available in Xcode.
I've manually updated the certificates, closed Xcode and reopened it, started a new project, and tried a different demo project.
Apologies if I am missing something obvious, but could someone help me get this capability added?
We are currently working on a CarPlay navigation app and so far everything is working well except for speaking turn notifications.
Our TTS implementation works fine on the phone and works fine on CarPlay if the voice is spoken over the speaker in the car. If users connect a BT headset to the car and listen through that headset, then the voice commands are chopped up / stutter.
Why would users use BT headset? Well, we are working on a motorcycle app, and there are no speakers usually on a motorcycle.
It sounds like the BT channel is opened and closed repeatedly for every character / word spoken. This happens on different CarPlay devices and different Bluetooth headsets, we have reports from multiple users that they find this behavior annoying and that other apps work fine.
Is this a known issue? Are there possible workarounds?
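For reference, the kind of session setup we are experimenting with looks like this (a sketch; the exact category, mode, and option set here are my assumptions, not a confirmed fix):

import AVFoundation

// Sketch: configure the session once for spoken prompts and keep it active,
// rather than activating/deactivating around every utterance.
func configureSpokenAudioSession() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playback,
                            mode: .voicePrompt,
                            options: [.duckOthers, .interruptSpokenAudioAndMixWithOthers])
    try session.setActive(true)
}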
Environment: Device: iPad 10th generation, OS: iOS 18.3.2
I'm using AVAudioSession to record sound in my application. But I recently came to realize that when the app starts a recording session on a tablet, the OS automatically sets the tablet volume to 50%, and after the recording ends it doesn't restore the previous volume level. So I would like to know whether this is default OS behavior or a bug.
If it's a default behavior, I much appreciate if I can get a link to the documentation.
I'm encountering numerous crashes involving the com.apple.coreaudio.AQClient thread on our application. The crash details are as follows:
#10 com.apple.coreaudio.AQClient
SIGSEGV
SEGV_ACCERR
0 libobjc.A.dylib _objc_msgSend + 44
1 AudioToolbox ClientMessageHandler::PropertyChanged(unsigned int) + 872
2 AudioToolbox ClientAudioQueue::FetchAndDeliverPendingCallbacks(unsigned int) + 924
3 AudioToolbox __XCallbackNotificationsAvailable + 212
4 libAudioToolboxUtility.dylib _mshMIGPerform + 260
5 CoreFoundation ___CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE1_PERFORM_FUNCTION__ + 56
6 CoreFoundation ___CFRunLoopDoSource1 + 596
7 CoreFoundation ___CFRunLoopRun + 2392
8 CoreFoundation _CFRunLoopRunSpecific + 572
9 AudioToolbox CADeprecated::GenericRunLoopThread::Entry(void*) + 156
10 libAudioToolboxUtility.dylib CADeprecated::CAPThread::Entry(CADeprecated::CAPThread*) + 88
11 libsystem_pthread.dylib __pthread_start + 116
All these crashes occur on system versions below iOS/iPadOS 17, primarily when the device's available RAM is low. What steps can I take to resolve this issue? Any insights would be greatly appreciated!
Topic:
Media Technologies
SubTopic:
Audio
On Apple TV 4K (3rd generation) with tvOS 26 beta 2, when two HomePod 2 speakers are paired to the device, music and movie sources with Dolby Atmos can only be listened to in stereo; Dolby Atmos is reported as not supported.
I'm using AVFoundation to build a multi-track editor app that can insert multiple tracks and clips, including scaling a clip to change its speed (I'm also not sure whether AVFoundation is the best choice for this). After scaling a clip with the scaleTimeRange API, there is some short noise in playback. Also, playback of the AVMutableComposition via AVPlayer with an AVPlayerItem is sometimes fine, yet after exporting with AVAssetReader the resulting file contains short bursts of noise. I'm not sure why.
Here is the example project, which can build and run directly. https://github.com/luckysmg/daily_images/raw/refs/heads/main/TestDemo.zip
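For reference, the scaling step is essentially the following (a sketch; the 2x factor is just an example):

import AVFoundation

// Sketch: slow a clip down to twice its duration inside the composition.
// `track` is an AVMutableCompositionTrack that already contains the clip.
func slowDown(track: AVMutableCompositionTrack, clipRange: CMTimeRange) {
    let doubledDuration = CMTimeMultiply(clipRange.duration, multiplier: 2)
    track.scaleTimeRange(clipRange, toDuration: doubledDuration)
}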
Hi,
when a CoreMIDI driver controls physical hardware, it is probably quite common to have to limit the amount of MIDI data received from the system.
What comes to mind is simply delaying the return of the MIDIDriverInterface::Send() callback to the calling process. While the application trying to send MIDI does stall until the callback returns, that seems only to be a side effect of a generally stalled CoreMIDI server. Between callbacks the application can send as much MIDI data as it wants to CoreMIDI; its buffering seems to be endless. However, the hardware might not be able to play out all of that data.
There seems to be no way to signal an overflow/full-buffer condition back to the application/CoreMIDI. How is this supposed to work?
Thanks, any hints or pointers are highly appreciated!
Hagen
I bought two "Apple USB-C to Headphone Jack Adapters". Upon closer inspection, they seem to be of different generations:
The one with product ID 0x110a is working fine. The one with product ID 0x110b has two issues:
There is a short but loud click noise on the headphones when I connect it to the iPad.
When I play audio using AVAudioPlayer, the first half second or so is cut off.
Here's how I'm playing the audio:
audioPlayer = try AVAudioPlayer(contentsOf: url)
audioPlayer?.delegate = self
audioPlayer?.prepareToPlay()
audioPlayer?.play()
Is this a known issue? Am I doing something wrong?
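One workaround I'm considering is scheduling playback slightly in the future so the adapter's output path has time to wake up (a sketch; the 0.5 s offset is a guess, not a verified fix):

audioPlayer = try AVAudioPlayer(contentsOf: url)
audioPlayer?.delegate = self
audioPlayer?.prepareToPlay()
// Start slightly in the future instead of immediately.
if let player = audioPlayer {
    player.play(atTime: player.deviceCurrentTime + 0.5)
}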
I prefer to use the album fetched from the library instead of the catalog, since this is faster. If doing so, how can I check whether all tracks of an album have been added to the library? In that case I'd like to fetch the catalog version, or throw an error (for example, when offline).
Using .with(.tracks) on the library album fetches the tracks added to the library.
The trackCount property is referring to the tracks that can be fetched from the library.
The isComplete property is always nil when fetching from the library.
One possible way is checking the trackNumber and discCount properties. However, this only detects a missing track when a song that is not in the library comes before one that is; see the sketch below. I'd like to be able to handle this edge case as well.
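For concreteness, a sketch of that heuristic (single-disc case; it cannot catch tracks missing at the end of the album, which is exactly the edge case in question):

import MusicKit

// Sketch: flag a gap when the library album's track numbers are not a
// contiguous 1...n run.
func hasObviousGaps(in libraryAlbum: Album) async throws -> Bool {
    let detailed = try await libraryAlbum.with(.tracks)
    guard let tracks = detailed.tracks else { return true }
    let numbers = tracks.compactMap(\.trackNumber).sorted()
    guard !numbers.isEmpty else { return true }
    return numbers != Array(1...numbers.count)
}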
Is there currently a way to do this? I'd prefer to not rely on the apple music catalog for this since this is supposed to work offline as well. Fetching and storing all trackIDs when connected and later comparing against these would work, but this would potentially mean storing tens of thousands of track ids.
Thank you
I am a graduate student conducting research in speech/audio signal processing and multimodal interaction.
Apple Vision Pro is widely recognized as a multimodal interactive system supporting voice, eye, and gesture inputs. However, I could not find detailed specifications or documentation about the audio input sampling rate used by the device’s built-in microphone array when capturing user audio.
Specifically, I would like to understand:
What is the default audio input sampling rate (e.g., 16 kHz, 44.1 kHz, 48 kHz, etc.) for the Vision Pro’s microphones?
When developing with visionOS / AVAudioSession / AVAudioEngine, is there a documented or recommended sampling rate for audio capture?
Are there any best practices or settings for enabling high-quality voice capture on Vision Pro (especially for voice research tasks)?
For context, my work involves voice processing, analysis, and possibly on-device real-time speech recognition. Any pointers to relevant APIs, documentation or examples (especially regarding audio capture buffer size or available formats on visionOS) would be very helpful.
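In the absence of documentation, the best I have found is to inspect the format the input node actually delivers at runtime (a sketch using AVAudioEngine):

import AVFoundation

// Sketch: print the capture format currently in effect on the input node.
let engine = AVAudioEngine()
let format = engine.inputNode.inputFormat(forBus: 0)
print("sample rate: \(format.sampleRate) Hz, channels: \(format.channelCount)")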
Thank you in advance!
Best regards.
My audio app shows a control bar at the bottom of the window. The controls show nicely, but there is a black "slab" appearing behind the inline controls, the same size as the playerView. Setting the player view background color does nothing:
playerView.wantsLayer = true
playerView.layer?.backgroundColor = NSColor.clear.cgColor
How can I clear the background?
If I use .floating controlsStyle, I don't get the background "slab".
Hello.
To determine whether the "AVB/EAV Mode" of AV-capable network interfaces is turned on or off, I query the IO registry and evaluate the property "AVBControllerState".
I was wondering if this is the "correct" approach and whether anything is known about the values of this property?
Network interfaces without AV capability may also carry this property (e.g., my WiFi adapter has the value 1), whereas the value for interfaces with AV capability can be 0 or 3, at least as far as I could observe with the limited number of test devices at hand.
Is it safe to assume that a value of 3 means this feature is turned on, that 0 means it is turned off, and to ignore the value 1?
Is there another approach to determine the status of the "AVB/EAV Mode"?
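For reference, my query currently looks roughly like this (a sketch; the matching class "IOEthernetInterface" is my assumption, and kIOMainPortDefault requires macOS 12 or later):

import Foundation
import IOKit

// Sketch: collect AVBControllerState for all matching network interfaces.
func avbControllerStates() -> [String: Int] {
    var results: [String: Int] = [:]
    var iterator: io_iterator_t = 0
    guard IOServiceGetMatchingServices(kIOMainPortDefault,
                                       IOServiceMatching("IOEthernetInterface"),
                                       &iterator) == KERN_SUCCESS else { return results }
    var entry = IOIteratorNext(iterator)
    while entry != 0 {
        if let state = IORegistryEntryCreateCFProperty(entry, "AVBControllerState" as CFString,
                                                       kCFAllocatorDefault, 0)?
                .takeRetainedValue() as? Int {
            let name = IORegistryEntryCreateCFProperty(entry, "BSD Name" as CFString,
                                                       kCFAllocatorDefault, 0)?
                .takeRetainedValue() as? String ?? "unknown"
            results[name] = state
        }
        IOObjectRelease(entry)
        entry = IOIteratorNext(iterator)
    }
    IOObjectRelease(iterator)
    return results
}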
Thanks for any insight.
Best regards,
Ingo
Is there any way for me to use an AutoMix API in my iOS apps? I would play tracks using the Apple Music API and use AutoMix to attempt to merge tracks.
Is this feature/API available to developers?