I'm building a streaming app on visionOS that can play sound from audio buffers each frame. The source audio buffer has 2 channels and is in a Float32 interleaved format.
However, when setting up the AVAudioFormat with interleaved to true, the app will crash with a memory issue:
AURemoteIO::IOThread (35): EXC_BAD_ACCESS (code=1, address=0x3)
But if I set AVAudioFormat with interleaved to false, and manually set up the AVAudioPCMBuffer, it can play audio as expected.
Could you please help me fix it? Below is the code snippet.
@Observable
final class MyAudioPlayer {
    private var audioEngine: AVAudioEngine = .init()
    private var audioPlayerNode: AVAudioPlayerNode = .init()
    private var audioFormat: AVAudioFormat?

    init() {
        audioEngine.attach(audioPlayerNode)
        audioEngine.connect(audioPlayerNode, to: audioEngine.mainMixerNode, format: nil)
        try? AVAudioSession.sharedInstance().setCategory(.playback, mode: .default)
        try? AVAudioSession.sharedInstance().setActive(true)
        audioEngine.prepare()
        try? audioEngine.start()
        audioPlayerNode.play()
    }

    // more code...

    /// This crashes
    private func audioFrameCallback_Interleaved(buf: UnsafeMutablePointer<Float>?, samples: Int) {
        guard let buf,
              let format = AVAudioFormat(commonFormat: .pcmFormatFloat32, sampleRate: 48000, channels: 2, interleaved: true),
              let audioBuffer = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: AVAudioFrameCount(samples))
        else { return }

        audioBuffer.frameLength = AVAudioFrameCount(samples)
        if let data = audioBuffer.floatChannelData?[0] {
            data.update(from: buf, count: samples * Int(format.channelCount))
        }
        audioPlayerNode.scheduleBuffer(audioBuffer)
    }

    /// This works
    private func audioFrameCallback_Non_Interleaved(buf: UnsafeMutablePointer<Float>?, samples: Int) {
        guard let buf,
              let format = AVAudioFormat(commonFormat: .pcmFormatFloat32, sampleRate: 48000, channels: 2, interleaved: false),
              let audioBuffer = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: AVAudioFrameCount(samples))
        else { return }

        audioBuffer.frameLength = AVAudioFrameCount(samples)
        if let data = audioBuffer.floatChannelData {
            for channel in 0 ..< Int(format.channelCount) {
                for frame in 0 ..< Int(audioBuffer.frameLength) {
                    data[channel][frame] = buf[frame * Int(format.channelCount) + channel]
                }
            }
        }
        audioPlayerNode.scheduleBuffer(audioBuffer)
    }
}
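One plausible explanation for the render-thread crash (hedged, since the post doesn't show the connection format) is a mismatch between the scheduled buffer and the player node's connection: the node is connected with format: nil, which typically resolves to the mixer's standard deinterleaved Float32 format, so handing the node an interleaved buffer can blow up inside AURemoteIO. If you keep the deinterleaved path that already works, the per-sample copy loop can be replaced with a strided copy. Below is a minimal sketch, assuming a 2-channel Float32 interleaved source; the helper name makeDeinterleavedBuffer is hypothetical.

import AVFoundation
import Accelerate

/// Hypothetical helper: copies an interleaved Float32 source into a deinterleaved
/// AVAudioPCMBuffer using strided copies instead of a per-sample loop.
/// `samples` is the frame count per channel; `format` is the deinterleaved format.
func makeDeinterleavedBuffer(from buf: UnsafePointer<Float>,
                             samples: Int,
                             format: AVAudioFormat) -> AVAudioPCMBuffer? {
    guard let audioBuffer = AVAudioPCMBuffer(pcmFormat: format,
                                             frameCapacity: AVAudioFrameCount(samples)),
          let data = audioBuffer.floatChannelData else { return nil }
    audioBuffer.frameLength = AVAudioFrameCount(samples)
    let channels = Int(format.channelCount)
    for channel in 0 ..< channels {
        // Read every `channels`-th sample starting at `channel`, write contiguously.
        cblas_scopy(Int32(samples), buf + channel, Int32(channels), data[channel], 1)
    }
    return audioBuffer
}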
Hi, I'm facing an issue with AudioWorklet in Safari. This issue appears to be an iOS-specific bug (it doesn't occur on iPad or Mac).
Here's the minimal reproduction:
Go to https://googlechromelabs.github.io/web-audio-samples/audio-worklet/basic/hello-audio-worklet/
Press start
Audio will not be playing
Open YouTube on another tab and start any video
Audio from the worklet will start playing
Is this a known issue? Any plans to address that? Any workaround available?
I'm building a streaming app on visionOS that can play sound from audio buffers each frame. The audio format has a sample rate of 48000 Hz, and each buffer has 480 samples.
I noticed when calling
audioPlayerNode.scheduleBuffer(audioBuffer)
The memory keeps increasing at a rate of about 0.1 MB per second, and at around 4 minutes the node seems to be full of buffers and performs a hard reset, at which point the audio stops temporarily and the memory usage changes (see attached screenshot).
However, if I call
audioPlayerNode.scheduleBuffer(audioBuffer, at: nil, options: .interrupts)
The memory growth is gone, but the audio is broken (it sounds as if it has been shortened).
Below is the full code snippet. Does anyone know how to fix it?
@Observable
final class MyAudioPlayer {
    private var audioEngine: AVAudioEngine = .init()
    private var audioPlayerNode: AVAudioPlayerNode = .init()
    private var audioFormat: AVAudioFormat?

    init() {
        audioEngine.attach(audioPlayerNode)
        audioEngine.connect(audioPlayerNode, to: audioEngine.mainMixerNode, format: nil)
        try? AVAudioSession.sharedInstance().setCategory(.playback, mode: .default)
        try? AVAudioSession.sharedInstance().setActive(true)
        audioEngine.prepare()
        try? audioEngine.start()
        audioPlayerNode.play()
    }

    // more code...

    /// callback every frame
    private func audioFrameCallback_Non_Interleaved(buf: UnsafeMutablePointer<Float>?, samples: Int) {
        guard let buf,
              let format = AVAudioFormat(commonFormat: .pcmFormatFloat32, sampleRate: 48000, channels: 2, interleaved: false),
              let audioBuffer = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: AVAudioFrameCount(samples))
        else { return }

        audioBuffer.frameLength = AVAudioFrameCount(samples)
        if let data = audioBuffer.floatChannelData {
            for channel in 0 ..< Int(format.channelCount) {
                for frame in 0 ..< Int(audioBuffer.frameLength) {
                    data[channel][frame] = buf[frame * Int(format.channelCount) + channel]
                }
            }
        }
        // memory leak here
        audioPlayerNode.scheduleBuffer(audioBuffer)
    }
}
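Scheduling a new buffer on every callback without any backpressure lets buffers pile up on the node, which matches the steady memory growth described above. A hedged sketch of one way to bound the queue is shown below: cap the number of in-flight buffers and use the completion-callback variant of scheduleBuffer to release a slot once a buffer has actually played. The names maxPendingBuffers, pendingBuffers, pendingLock and the chosen limit are assumptions added for illustration, not part of the original class.

    // Sketch of members one might add to MyAudioPlayer to bound the scheduled queue.
    private let maxPendingBuffers = 8          // assumed cap, tune to taste
    private var pendingBuffers = 0
    private let pendingLock = NSLock()

    private func scheduleWithBackpressure(_ audioBuffer: AVAudioPCMBuffer) {
        pendingLock.lock()
        guard pendingBuffers < maxPendingBuffers else {
            pendingLock.unlock()
            return // drop (or stash elsewhere) instead of queueing without bound
        }
        pendingBuffers += 1
        pendingLock.unlock()

        audioPlayerNode.scheduleBuffer(audioBuffer,
                                       at: nil,
                                       options: [],
                                       completionCallbackType: .dataPlayedBack) { [weak self] _ in
            guard let self else { return }
            self.pendingLock.lock()
            self.pendingBuffers -= 1
            self.pendingLock.unlock()
        }
    }

Dropping frames when the cap is reached is just one policy; the point of the sketch is that the .dataPlayedBack completion gives you a signal to pace the producer.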
I donate some INPlayMediaIntent instances to the system and can find them in Control Center. When I tap one of them to play media in the background, the handler doesn't execute the resolve method. I want to resolve some media items for a suggested playlist.
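For reference, a typical INPlayMediaIntentHandling implementation looks roughly like the sketch below; the handler class name and the media item identifier are hypothetical, and this does not by itself explain why the resolve method is skipped for Control Center suggestions.

import Intents

// A minimal sketch of resolving media items for a donated INPlayMediaIntent.
final class PlayMediaIntentHandler: NSObject, INPlayMediaIntentHandling {

    func resolveMediaItems(for intent: INPlayMediaIntent,
                           with completion: @escaping ([INPlayMediaMediaItemResolutionResult]) -> Void) {
        // Build the items you want the system to play, e.g. a suggested playlist.
        let items = [
            INMediaItem(identifier: "playlist-morning", // hypothetical identifier
                        title: "Morning Mix",
                        type: .playlist,
                        artwork: nil)
        ]
        completion(INPlayMediaMediaItemResolutionResult.successes(with: items))
    }

    func handle(intent: INPlayMediaIntent,
                completion: @escaping (INPlayMediaIntentResponse) -> Void) {
        // .handleInApp launches the app in the background to start playback.
        completion(INPlayMediaIntentResponse(code: .handleInApp, userActivity: nil))
    }
}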
I have an audio app that can play audio on an AirPlay device.
On non-Apple TV devices, the AirPlay app (on Roku, Samsung, etc.) shows the now playing metadata: title, artist, and album art.
However, on tvOS 18.1, no metadata is shown. The Apple TV device plays the audio, but there is no now playing information shown, nor any other indicators.
Other media apps show the "Now Playing" controls on the upper right of the tvOS home screen.
Can someone point me in the direction of how to solve this issue? I think I am missing something with regard to the tvOS metadata implementation.
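For comparison, the usual ingredients for the system Now Playing UI are populating MPNowPlayingInfoCenter and registering at least one MPRemoteCommandCenter command; the sketch below shows both, with placeholder values, as a baseline to check against rather than a confirmed fix for the tvOS 18.1 behavior.

import MediaPlayer
import UIKit

// A minimal sketch of publishing now-playing metadata; values are placeholders.
func updateNowPlaying(title: String, artist: String, artworkImage: UIImage, duration: TimeInterval) {
    var info: [String: Any] = [
        MPMediaItemPropertyTitle: title,
        MPMediaItemPropertyArtist: artist,
        MPMediaItemPropertyPlaybackDuration: duration,
        MPNowPlayingInfoPropertyPlaybackRate: 1.0
    ]
    info[MPMediaItemPropertyArtwork] = MPMediaItemArtwork(boundsSize: artworkImage.size) { _ in artworkImage }
    MPNowPlayingInfoCenter.default().nowPlayingInfo = info

    // Registering remote commands is typically required for the system UI to appear.
    let commandCenter = MPRemoteCommandCenter.shared()
    _ = commandCenter.playCommand.addTarget { _ in .success }
    _ = commandCenter.pauseCommand.addTarget { _ in .success }
}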
I’ve encountered an issue when trying to transcribe audio during a SharePlay session in VisionOS. Specifically, the AVAudioSession appears to fail when sharing audio, preventing successful transcription. The problem seems related to AVAudioSession.sharedInstance() and using the .mixWithOthers option, which is supposed to enable multiple audio sources to coexist without interference.
Here’s the relevant code snippet that throws the error:
private static func prepareEngine() throws -> (AVAudioEngine, SFSpeechAudioBufferRecognitionRequest) {
    let audioEngine = AVAudioEngine()
    let request = SFSpeechAudioBufferRecognitionRequest()
    request.shouldReportPartialResults = true

    let audioSession = AVAudioSession.sharedInstance()
    try audioSession.setCategory(.playAndRecord, mode: .default, options: [.mixWithOthers, .allowBluetooth])
    try audioSession.setActive(true, options: .notifyOthersOnDeactivation)

    let inputNode = audioEngine.inputNode
    let recordingFormat = inputNode.outputFormat(forBus: 0)
    inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { (buffer: AVAudioPCMBuffer, when: AVAudioTime) in
        request.append(buffer)
    }

    audioEngine.prepare()
    try audioEngine.start()
    return (audioEngine, request)
}
The setup is designed to initialize an AVAudioEngine and an SFSpeechAudioBufferRecognitionRequest for real-time transcription, but it fails within the SharePlay context. Notably, while .mixWithOthers is intended to let multiple audio sources coexist, it doesn't appear to work as expected during SharePlay. The audioSession.setActive(true) line is where the setup typically fails, with no clear way to proceed.
Has anyone else faced similar issues with AVAudioSession and SharePlay in VisionOS? Any insights on how to manage audio sharing or transcription during a SharePlay session would be greatly appreciated!
The specific error is:
The operation couldn't be completed. (com.apple.coreaudio.avfaudio error 561145187.)
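Core Audio OSStatus values often pack a four-character code, and decoding it makes errors like this easier to look up. The helper below is not from the original post; for 561145187 it yields "!rec", which appears to correspond to AVAudioSession.ErrorCode.cannotStartRecording.

// A small helper (not from the original post) that renders a Core Audio OSStatus
// as its four-character code when all four bytes are printable ASCII.
func fourCharCode(from status: OSStatus) -> String {
    let value = UInt32(bitPattern: status)
    let bytes = [UInt8((value >> 24) & 0xFF),
                 UInt8((value >> 16) & 0xFF),
                 UInt8((value >> 8) & 0xFF),
                 UInt8(value & 0xFF)]
    if bytes.allSatisfy({ (0x20...0x7E).contains($0) }) {
        return String(bytes: bytes, encoding: .ascii) ?? String(status)
    }
    return String(status)
}

// fourCharCode(from: 561145187) -> "!rec"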
I’m working on a memo app that records audio from the iPhone’s microphone (and other devices like MacBook or iPad) and processes it in 10-second chunks at a target sample rate of 16 kHz. However, I’ve encountered limitations with installTap in AVAudioEngine, which doesn’t natively support configuring a target sample rate on the mic input (the default being 44.1 kHz).
To address this, I tried using AVAudioMixerNode to downsample the mic input directly. Although everything seems correctly configured, no audio is recorded—just a flat signal with zero levels. There are no errors, and all permissions are granted, so it seems like an issue with downsampling rather than the mic setup itself.
To make progress, I implemented a workaround: resampling each chunk delivered by installTap (every 50 ms in my case) with AVAudioConverter. While this works, it can introduce artifacts at the beginning and end of each chunk, likely because the chunks are processed separately instead of being downsampled continuously.
Here are the key issues and questions I have:
1. Can we change the mic input sample rate directly using AVAudioSession or another native API in AVAudio? Setting up the desired sample rate initially would be ideal for my use case.
2. Are there alternatives to installTap for recording audio at a different sample rate or for continuously downsampling the live input without chunk-based artifacts?
This issue seems longstanding, as noted in a 2018 forum post:
https://forums.developer.apple.com/forums/thread/111726
Any guidance on configuring or processing mic input at a lower sample rate in real-time would be greatly appreciated. Thank you!
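One hedged approach to the chunk-boundary artifacts is to create a single AVAudioConverter and reuse it for every buffer delivered by the tap, so the resampler's internal filter state carries across chunks instead of being rebuilt each time. The sketch below illustrates the idea; the class and property names are assumptions.

import AVFoundation

// A minimal sketch of reusing one AVAudioConverter across installTap callbacks so
// resampler state persists between chunks.
final class MicDownsampler {
    private let targetFormat = AVAudioFormat(commonFormat: .pcmFormatFloat32,
                                             sampleRate: 16_000,
                                             channels: 1,
                                             interleaved: false)!
    private var converter: AVAudioConverter?

    func handleTap(buffer: AVAudioPCMBuffer) -> AVAudioPCMBuffer? {
        if converter == nil {
            converter = AVAudioConverter(from: buffer.format, to: targetFormat)
        }
        guard let converter else { return nil }

        let ratio = targetFormat.sampleRate / buffer.format.sampleRate
        let capacity = AVAudioFrameCount(Double(buffer.frameLength) * ratio) + 1
        guard let output = AVAudioPCMBuffer(pcmFormat: targetFormat, frameCapacity: capacity) else { return nil }

        var provided = false
        var error: NSError?
        let status = converter.convert(to: output, error: &error) { _, outStatus in
            if provided {
                outStatus.pointee = .noDataNow   // only hand over this one chunk
                return nil
            }
            provided = true
            outStatus.pointee = .haveData
            return buffer
        }
        return (status != .error && error == nil) ? output : nil
    }
}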
Dear Apple,
I am currently working on a mental-health-related research project supported by South Korean government funding.
In addition to SensorKit access, we are working with data from the microphone. Is there any contact point, aside from the SensorKit access application, where we can discuss the possibility of collecting research data from a restricted sample of participants?
Hello !
I am working on an app connected to an external streamer .
I would like to display current playing song on the Lock Screen.
I tried to update the information in MPNowPlayingInfoCenter but I need to play a sound on my iPhone for the control to be displayed .
Is there a way to do it without playing a sound?
If not, would playing a silent sound be the only solution? Would that be accepted by Apple? :-/
Thank you
Frederic
Hi,
I'm developing a MusicKit integration in my iOS app, and I want to select songs from recently played (done). The problem is that the queue is not auto-generated, and the user has to select another song if they want to keep listening.
Is there any method to ask for similar or recommended songs, based on a song the user has already selected?
It would be really great :)
Also, if you know it: is there any publisher for the playback duration, or do I need to use a timer? Thanks.
David.
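As far as I know, MusicKit's personal recommendations are not seeded by a specific song; MusicPersonalRecommendationsRequest returns mixed groups of albums, playlists, and stations based on the user's library and listening history. A minimal sketch is below. For the playback position, I'm not aware of a built-in publisher; polling ApplicationMusicPlayer.shared.playbackTime with a timer is the common approach.

import MusicKit

// A hedged sketch: fetch the user's personal recommendations and list their contents.
func loadRecommendations() async throws {
    let request = MusicPersonalRecommendationsRequest()
    let response = try await request.response()
    for recommendation in response.recommendations {
        for album in recommendation.albums {
            print("album:", album.title)
        }
        for playlist in recommendation.playlists {
            print("playlist:", playlist.name)
        }
    }
}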
Hello, I'm fairly new to AVAudioEngine and I'm trying to connect 2 mono nodes as left/right input to a stereo node. I was successful in splitting the input audio into 2 mono nodes using AVAudioConnectionPoint and channelMap.
But I can't figure out how to connect them back to a stereo node.
I'll post the code I have so far. The use case for this is that I'm trying to process the left/right channels with separate audio units.
Any ideas?
let monoFormat = AVAudioFormat(standardFormatWithSampleRate: nativeFormat.sampleRate, channels: 1)!
let leftInputMixer = AVAudioMixerNode()
let rightInputMixer = AVAudioMixerNode()
let leftOutputMixer = AVAudioMixerNode()
let rightOutputMixer = AVAudioMixerNode()
let channelMixer = AVAudioMixerNode()
[leftInputMixer, rightInputMixer, leftOutputMixer,
rightOutputMixer, channelMixer].forEach { engine.attach($0) }
let leftConnectionR = AVAudioConnectionPoint(node: leftInputMixer, bus: 0)
let rightConnectionR = AVAudioConnectionPoint(node: rightInputMixer, bus: 0)
plugin.leftInputMixer = leftInputMixer
plugin.rightInputMixer = rightInputMixer
plugin.leftOutputMixer = leftOutputMixer
plugin.rightOutputMixer = rightOutputMixer
plugin.channelMixer = channelMixer
leftInputMixer.auAudioUnit.channelMap = [0]
rightInputMixer.auAudioUnit.channelMap = [1]
engine.connect(previousNode, to: [leftConnectionR, rightConnectionR], fromBus: 0, format: monoFormat)
// Process right channel, pass through left channel
engine.connect(rightInputMixer, to: plugin.audioUnit, format: monoFormat)
engine.connect(plugin.audioUnit, to: rightOutputMixer, format: monoFormat)
engine.connect(leftInputMixer, to: leftOutputMixer, format: monoFormat)
// Mix back to stereo?
engine.connect(leftOutputMixer, to: channelMixer, format: stereoFormat)
engine.connect(rightOutputMixer, to: channelMixer, format: stereoFormat)
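One hedged way to rebuild a stereo image from the two mono paths is to rely on the pan property that AVAudioMixerNode picks up from AVAudioMixing: pan the left path hard left and the right path hard right as they feed channelMixer, connect them with the mono format, and connect channelMixer onward in stereo. The sketch below would replace the last two connect calls above; it assumes stereoFormat is the 2-channel standard format used elsewhere in the graph.

// A hedged sketch: pan the two mono paths hard left/right as they feed channelMixer.
// AVAudioMixerNode adopts AVAudioMixing, so pan applies to its connection into the next mixer.
leftOutputMixer.pan = -1.0
rightOutputMixer.pan = 1.0

// Feed the mono signals into the stereo channel mixer, then on to the engine's main mixer.
engine.connect(leftOutputMixer, to: channelMixer, format: monoFormat)
engine.connect(rightOutputMixer, to: channelMixer, format: monoFormat)
engine.connect(channelMixer, to: engine.mainMixerNode, format: stereoFormat)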
I am experiencing an issue while recording audio using AVAudioEngine with the installTap method. I convert the AVAudioPCMBuffer to Data and send it to a UDP server. However, when I receive the Data and play it back, there is continuous crackling noise during playback.
I am sending the audio data using the library https://github.com/mindAndroid/swift-rtp, creating packets and sending them.
Please help me resolve this issue. I have attached the code reference that I am currently using.
Thank you.
ViewController.swift
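A common source of crackle when shipping PCM over a network is serializing the buffer incorrectly, for example copying frameCapacity instead of frameLength, or treating the deinterleaved tap buffer as if it were interleaved. The sketch below (not the poster's code) interleaves a multichannel Float32 buffer into Data before sending; the receiver would need to apply the reverse layout and the same sample rate and channel count.

import AVFoundation

// A hedged sketch: serialize only frameLength frames and interleave the
// deinterleaved tap buffer explicitly before sending it over UDP.
func packetData(from buffer: AVAudioPCMBuffer) -> Data? {
    guard let channels = buffer.floatChannelData else { return nil }
    let channelCount = Int(buffer.format.channelCount)
    let frames = Int(buffer.frameLength)
    var samples = [Float](repeating: 0, count: frames * channelCount)
    for frame in 0 ..< frames {
        for channel in 0 ..< channelCount {
            samples[frame * channelCount + channel] = channels[channel][frame]
        }
    }
    return samples.withUnsafeBufferPointer { Data(buffer: $0) }
}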
For an upcoming update of one of my apps, I’m facing an issue:
The .rate parameter of a AVAudioUnitTimePitch allows me to slow down an audio track without any issues: setting .rate to 0.7 or 0.8 results in an almost perfect playback without changing pitch.
However, whenever the .rate parameter is greater than 1 (e.g. 1.1 or 1.15), I'm starting to hear audio artifacts ("fluttering") in the audio output, which is not so nice (even at .overlap = 32).
Intuitively, I'd have thought that speeding up the file should produce fewer artifacts than slowing it down?
I’ve tried different sample rates (44.1 kHz and 48 kHz), but same result.
Grateful for any input on this 🙏
I can't find any way to search for a song by title only. You can search for songs, but any term you provide appears to be applied to any metadata associated with the song. Look at the largely nonsensical results when I search for a song with the letters "de":
In many cases, that string doesn't appear anywhere. I used
MusicCatalogSearchRequest(term: searchTerm, types: [Song.self])
Likewise it stands to reason that people want to search for artist and album names using text strings. How do we do that?
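As far as I know, MusicCatalogSearchRequest does not expose a field-scoped (title-only) filter, so one hedged workaround is to post-filter the returned songs on their title locally, as sketched below.

import MusicKit

// A hedged sketch: search the catalog, then keep only songs whose title contains the term.
func searchSongs(titled searchTerm: String) async throws -> [Song] {
    var request = MusicCatalogSearchRequest(term: searchTerm, types: [Song.self])
    request.limit = 25
    let response = try await request.response()
    return response.songs.filter { $0.title.localizedCaseInsensitiveContains(searchTerm) }
}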
Hi,
I have configured the stream as interleaved, but I am unsure whether the function produces interleaved samples. So here is my question:
Does AudioDeviceCreateIOProcID produce interleaved samples with microphone input?
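Whether the IOProc delivers interleaved or deinterleaved samples is visible in the AudioBufferList the HAL passes in: one AudioBuffer with mNumberChannels == 2 indicates interleaved stereo, while two single-channel buffers indicate non-interleaved data. A hedged inspection helper:

import AudioToolbox

// A hedged sketch: describe the layout of the AudioBufferList handed to the IOProc.
func describeLayout(_ abl: UnsafePointer<AudioBufferList>) -> String {
    let buffers = UnsafeMutableAudioBufferListPointer(UnsafeMutablePointer(mutating: abl))
    let description = buffers.map { "channels=\($0.mNumberChannels) bytes=\($0.mDataByteSize)" }
    return "bufferCount=\(buffers.count) " + description.joined(separator: ", ")
}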
Title: Unable to Access Microphone in Control Center Widget – Is It Possible?
Hello everyone,
I'm attempting to create a widget in the Control Center that accesses the microphone, similar to how Shazam does it. However, I'm running into an issue where the widget always prints "Microphone permission denied." It's worth mentioning that microphone access works fine when I'm using the app itself.
Here's the code I'm using in the widget:
func startRecording() async {
    logger.info("Starting recording...")
    print("Starting recording...")
    recognizedText = ""
    isFinishingRecognition = false

    // First, check speech recognition authorization
    let speechAuthStatus = await withCheckedContinuation { continuation in
        SFSpeechRecognizer.requestAuthorization { status in
            continuation.resume(returning: status)
        }
    }
    guard speechAuthStatus == .authorized else {
        logger.error("Speech recognition not authorized")
        return
    }

    // Then, request microphone permission using our manager
    let micPermission = await AudioSessionManager.shared.requestMicrophonePermission()
    guard micPermission else {
        logger.error("Microphone permission denied")
        print("Microphone permission denied")
        return
    }

    // Continue with recording...
}
Issues:
The code consistently prints "Microphone permission denied" when run from the widget.
Microphone access works without issues when the same code is executed from within the app.
Questions:
Is it possible for a Control Center widget to access the microphone?
If yes, what might be causing the "Microphone permission denied" error in the widget?
Are there additional permissions or configurations required to enable microphone access in a widget?
Any insights or suggestions would be greatly appreciated!
Thank you.
Hi,
May I ask if there is any API or similar way inside an iOS app to set up or switch the transparency and ANC modes of the AirPods Pro 2? One way is to set up a shortcut and activate that shortcut in the app, but it requires manually setting up a shortcut, which is not convenient.
Thx for any advice on that!
Hi,
May I ask if there is any iOS API or similar way to switch between the transparency and ANC modes of AirPods Pro 2? I know there is one way to configure and activate a shortcut in the app, which requires inconvenient manual setup. May I ask for any other advice? Thanks in advance!
I'm developing an app that plays a WAV file through the Lightning headphone adapter. When I connect the adapter, a prompt appears asking whether to select "Headphones" or "Other Device". What does this setting actually do? I've noticed that it affects the maximum amplitude (volume) of the WAV output. Could you explain the precise difference between these two modes?
Case-ID: 10075936
PLATFORM AND VERSION
iOS
Development environment: Xcode 15, macOS 14.5
Run-time configuration: iOS 18.0.1
DESCRIPTION OF PROBLEM
Our customer experienced a one-way audio issue when switching from the built-in microphone to AirPods Pro (model: A2084, version: 6F21) during a VoIP call. The issue was that the customer's voice could not be heard by the other party, while the customer could still hear the other party.
STEPS TO REPRODUCE
Here are the details:
After the issue occurred, subsequent VoIP calls experienced the same issue when using AirPods Pro, but the issue did not occur when using the built-in microphone. The issue could only be resolved by restarting the system, and killing the app did not work.
Log and code analysis:
WebRTC listens for AVAudioSessionRouteChangeNotification. In the scenario above, when WebRTC received the route change notification, it printed the audio session configuration. At that point, the input channel count was 0, which is abnormal:
[Webrtc] (RTCLogging.mm:33): (audio_device_ios.mm:535 HandleValidRouteChange): RTC_OBJC_TYPE(RTCAudioSession):
{
category: AVAudioSessionCategoryPlayAndRecord
categoryOptions: 128
mode: AVAudioSessionModeVoiceChat
isActive: 1
sampleRate: 48000.00
IOBufferDuration: 0.020000
outputNumberOfChannels: 2
inputNumberOfChannels: 0
outputLatency: 0.021500
inputLatency: 0.005000
outputVolume: 0.600000
isPreferredSpeaker: 0
isCallkit: 0
}
If the app then calls setPreferredInputNumberOfChannels, it fails with error code -50:
setConfiguration:active:shouldSetActive:error:]): Failed to set preferred input number of channels(1): The operation couldn’t be completed. (OSStatus error -50.)
Our questions:
When AVAudioSession is active, the category and mode are as expected. Why is the input channel count 0?
Assuming that the AVAudioSession state is abnormal at this point, why does killing the app not resolve the issue, and why does the system need to be restarted to resolve the issue?
Is it possible that the category and mode of the AVAudioSession fetched by the app are wrong? Do they need to be set again each time CallKit starts, even if the fetched category and mode match the values to be set?
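No definitive answer follows from the log alone, but a hedged mitigation is to watch for route changes and rebuild the session when it reports zero input channels while play-and-record is expected. The sketch below only attempts recovery; it does not explain why the broken state survives killing the app.

import AVFoundation

// A hedged recovery sketch (assumption: a VoIP app that expects at least one input
// channel whenever play-and-record is active).
final class RouteChangeWatcher {
    private let session = AVAudioSession.sharedInstance()
    private var observer: NSObjectProtocol?

    init() {
        observer = NotificationCenter.default.addObserver(forName: AVAudioSession.routeChangeNotification,
                                                          object: nil,
                                                          queue: .main) { [weak self] _ in
            self?.checkInputChannels()
        }
    }

    deinit {
        if let observer { NotificationCenter.default.removeObserver(observer) }
    }

    private func checkInputChannels() {
        guard session.inputNumberOfChannels == 0 else { return }
        do {
            // Tear the session down and rebuild it in the expected configuration.
            try session.setActive(false, options: .notifyOthersOnDeactivation)
            try session.setCategory(.playAndRecord, mode: .voiceChat, options: [.allowBluetooth])
            try session.setActive(true)
        } catch {
            print("Session recovery failed: \(error)")
        }
    }
}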