I'm experiencing issues with audio playback in my React video player component specifically on iOS mobile devices (iPhone/iPad). Even after implementing several recommended solutions, including Apple's own guidelines, the audio still isn't working properly on iOS Safari. It works completely fine on Android. On iOS, I ensured the video doesn't autoplay (it requires user interaction). Here are all the details:
Environment
iOS Safari (latest version)
React 18
TypeScript
Video files: MP4 with AAC audio codec
Current Implementation
import React, { useEffect, useRef, useState } from "react";
import { Paper } from "@mantine/core"; // assumed source of <Paper>

// isIOS, isFirefoxBrowser, dismissKeyboard, and the handlers
// handleLoadedData, handleMetadataLoaded, handleVideoEnd, and handleError
// are defined elsewhere in the file and omitted here for brevity.

interface VideoPlayerProps {
  src: string;
  autoplay?: boolean;
}

const VideoPlayer: React.FC<VideoPlayerProps> = ({
  src,
  autoplay = true,
}) => {
  const videoRef = useRef<HTMLVideoElement>(null);
  const isIOSDevice = isIOS(); // Custom iOS detection
  const [touchStartY, setTouchStartY] = useState<number | null>(null);
  const [touchStartTime, setTouchStartTime] = useState<number | null>(null);
  const [error, setError] = useState<Error | null>(null);
  const [loadingState, setLoadingState] = useState<string>("initial");
  const [showPlayButton, setShowPlayButton] = useState(false);
  const [hasPlayed, setHasPlayed] = useState(false);

  // Handle touch start event for gesture detection
  const handleTouchStart = (e: React.TouchEvent) => {
    setTouchStartY(e.touches[0].clientY);
    setTouchStartTime(Date.now());
  };

  // Handle touch end event with gesture validation
  const handleTouchEnd = (e: React.TouchEvent) => {
    if (touchStartY === null || touchStartTime === null) return;
    const touchEndY = e.changedTouches[0].clientY;
    const touchEndTime = Date.now();

    // Validate that it's a legitimate tap (not a scroll)
    const verticalDistance = Math.abs(touchEndY - touchStartY);
    const touchDuration = touchEndTime - touchStartTime;

    // Only trigger for quick taps (< 200 ms) with minimal vertical movement
    if (touchDuration < 200 && verticalDistance < 10) {
      handleVideoInteraction(e);
    }
    setTouchStartY(null);
    setTouchStartTime(null);
  };

  // Simplified video interaction handler following Apple's guidelines
  const handleVideoInteraction = (e: React.MouseEvent | React.TouchEvent) => {
    console.log("Video interaction detected:", {
      type: e.type,
      timestamp: new Date().toISOString(),
    });

    // Ensure the keyboard is dismissed (iOS requirement)
    if (document.activeElement instanceof HTMLElement) {
      document.activeElement.blur();
    }
    e.stopPropagation();

    const video = videoRef.current;
    if (!video || !video.paused) return;

    // Attempt playback in response to the user gesture
    video.play().catch((err) => console.error("Error playing video:", err));
  };

  // Effect to handle the video source and initial state
  useEffect(() => {
    console.log("VideoPlayer props:", { src, loadingState });
    setError(null);
    setLoadingState("initial");
    setShowPlayButton(false); // Never show a custom play button on iOS

    if (videoRef.current) {
      // Set the crossOrigin attribute for CORS
      videoRef.current.crossOrigin = "anonymous";
      if (autoplay && !hasPlayed && !isIOSDevice) {
        // Only autoplay on non-iOS devices
        dismissKeyboard();
        setHasPlayed(true);
      }
    }
  }, [src, autoplay, hasPlayed, isIOSDevice]);

  return (
    <Paper
      shadow="sm"
      radius="md"
      withBorder
      onClick={handleVideoInteraction}
      onTouchStart={handleTouchStart}
      onTouchEnd={handleTouchEnd}
    >
      <video
        ref={videoRef}
        autoPlay={!isIOSDevice && autoplay}
        playsInline
        controls
        crossOrigin="anonymous"
        preload="auto"
        onLoadedData={handleLoadedData}
        onLoadedMetadata={handleMetadataLoaded}
        onEnded={handleVideoEnd}
        onError={handleError}
        onPlay={dismissKeyboard}
        onClick={handleVideoInteraction}
        onTouchStart={handleTouchStart}
        onTouchEnd={handleTouchEnd}
        {...(!isFirefoxBrowser
          ? {
              "x-webkit-airplay": "allow",
              "x-webkit-playsinline": "true",
              "webkit-playsinline": "true",
            }
          : {})}
      >
        <source src={src} type="video/mp4" />
      </video>
    </Paper>
  );
};
Apple's Guidelines Implementation
Removed custom play controls on iOS
Using native video controls for user interaction
Ensuring audio playback is triggered by user gesture
Following Apple's audio session guidelines
Properly handling the canplaythrough event
Current Behavior
Video plays but without sound on iOS mobile
Mute/unmute button in native video controls doesn't work
Audio works fine on desktop browsers and Android devices
Videos are confirmed to have AAC audio codec
No console errors related to audio playback
User interaction doesn't trigger audio as expected
Questions
Are there any additional iOS-specific requirements I'm missing?
Could this be related to iOS audio session handling?
Are there known issues with React's handling of video elements on iOS?
Should I be implementing additional audio context initialization?
Any insights or suggestions would be greatly appreciated!
I'm working on adding CarPlay support to an audio app and I'd like to mimic the behavior of the Apple Music app on launch.
Forgive me, but I think using Gherkin syntax here will help to best describe the desired behavior:
GIVEN the Apple Music app is in a cold state (not launched or in memory)
AND another audio app is actively playing audio
WHEN I launch the Apple Music app from CarPlay
THEN the Now Playing template is shown via a push
AND the appropriate Now Playing info is shown
AND the Now Playing button is shown on the tab bar
AND the actively playing audio from another audio app is NOT interrupted
The current behavior I see in my own app is that I can push on the Now Playing template and fill out the MPNowPlayingInfoCenter's info dictionary, but it won't render the info or show the Now Playing button on the tab bar until I start playing audio.
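For reference, this is roughly what I am doing today (a minimal sketch; interfaceController comes from my CarPlay scene delegate):

import CarPlay
import MediaPlayer

// Push the shared Now Playing template and fill in the info dictionary.
// In my app the template stays empty until playback actually starts.
func showNowPlaying(on interfaceController: CPInterfaceController) {
    interfaceController.pushTemplate(CPNowPlayingTemplate.shared,
                                     animated: true,
                                     completion: nil)
    MPNowPlayingInfoCenter.default().nowPlayingInfo = [
        MPMediaItemPropertyTitle: "Episode Title",
        MPMediaItemPropertyArtist: "Show Name",
        MPNowPlayingInfoPropertyPlaybackRate: 0.0 // nothing playing yet
    ]
}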
Also, is there a way to hide the Now Playing button after the queue of content has finished playing? I'm able to pop the Now Playing template, but the Now Playing button is still present and tapping it will navigate the user to the now blank Now Playing template.
I often find when doing basic actions in MusicKit it is incredibly slow compared to Apple's Music App. I've tried different versions, devices, networks, Apple's sample code, it all throughout the last several years, and it is all the same. Does anyone else have this issue?
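For example, even a basic catalog search like the one below regularly takes several seconds for me, while the Music app returns results almost instantly (an illustrative sketch with an arbitrary search term):

import MusicKit

// Time a minimal catalog search to quantify the slowness.
func timedSearch() async {
    let start = Date()
    do {
        var request = MusicCatalogSearchRequest(term: "test", types: [Song.self])
        request.limit = 5
        let response = try await request.response()
        print("Got \(response.songs.count) songs in \(Date().timeIntervalSince(start)) s")
    } catch {
        print("Search failed: \(error)")
    }
}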
Hello Apple Developers,
I am experiencing an issue where USB audio input (e.g., external USB microphone) is blocked when using AirPlay screen mirroring from my iPhone to a Mac or Apple TV. However, the built-in microphone continues to work without any problem.
Issue Details:
I am using an iPhone 15 (or latest device) running iOS [your iOS version].
I connect a USB audio interface/microphone via a USB-C adapter or Lightning adapter.
The USB microphone works perfectly for audio input before starting AirPlay.
The moment I enable AirPlay screen mirroring, the USB microphone stops working and disappears from the available audio input sources (see the sketch after this list).
The built-in microphone continues to function, but I cannot use the USB microphone while mirroring.
When I stop screen mirroring, the USB microphone immediately becomes available again.
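For what it's worth, this is how I am observing the input list; the USB device simply vanishes from it when mirroring starts (a minimal sketch, assuming an active audio session):

import AVFoundation

// Log the available inputs on every route change; the USB microphone
// disappears from this list the moment AirPlay mirroring begins.
let routeObserver = NotificationCenter.default.addObserver(
    forName: AVAudioSession.routeChangeNotification,
    object: nil,
    queue: .main
) { _ in
    let inputs = AVAudioSession.sharedInstance().availableInputs ?? []
    print("Inputs:", inputs.map { "\($0.portName) (\($0.portType.rawValue))" })
}
// Keep routeObserver around and remove it when no longer needed.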
Expected Behavior:
I would expect iOS to allow me to continue using an external USB microphone while mirroring my screen, just like it allows the built-in microphone to work.
Questions:
Is this an intentional restriction in iOS?
Is there any workaround to enable USB audio input while using AirPlay screen mirroring?
Is there a way to request a feature or configuration option to allow external USB microphones during AirPlay?
I appreciate any insights or guidance from the Apple team or fellow developers. Thanks in advance!
Best regards,
A month ago I was listening to music in Apple Music when I tried to download a song (nothing unusual). The download took longer than usual and then an error appeared. I can't upload a screenshot, but the text reads: "The Song Name / Album / Artist 0% - Stopped (err = -12884)". Uninstalling and reinstalling the app fixes it, but as soon as I enable Lossless Audio the error comes back. It is still happening today, and I'm tired of reinstalling the app over and over. Can anyone help?
I’m working on a macOS app, written in Swift. My goal is to record audio from an external microphone, e.g., one connected via USB.
For this, I’m using an AVCaptureSession and recording its output with an AVAssetWriter. This works perfectly in principle (and reliably with internal microphones, for example).
The problem occurs after my app has successfully completed the first recording and I then want to make additional recordings (which makes me think it might be process-dependent, because it works again after restarting the app).
The problem: Noisy or distorted-sounding audio files. In addition, the following error message appears in the Console from CoreAudio / its AudioConverter:
Input data proc returned inconsistent 512 packets for 2048 bytes; at 3 bytes per packet, that is actually 682 packets
It is easy to reproduce. This problem is reproducible even if I don’t configure the AVAssetWriter manually and instead let it receive its audioSettings using a preset from an AVOutputSettingsAssistant. I’m running on macOS 15.0 (24A335).
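The pipeline is essentially the following (a heavily simplified sketch; names and the output URL are illustrative, and the sample-buffer forwarding into the writer input is elided):

import AVFoundation

// Capture from the default audio device and hand the samples to an
// AVAssetWriter; error handling and writer start/finish are omitted.
final class Recorder: NSObject, AVCaptureAudioDataOutputSampleBufferDelegate {
    let session = AVCaptureSession()
    var writer: AVAssetWriter?
    var writerInput: AVAssetWriterInput?

    func start(to url: URL) throws {
        guard let mic = AVCaptureDevice.default(for: .audio) else { return } // e.g. the USB mic
        session.addInput(try AVCaptureDeviceInput(device: mic))

        let output = AVCaptureAudioDataOutput()
        output.setSampleBufferDelegate(self, queue: DispatchQueue(label: "audio.capture"))
        session.addOutput(output)

        // The same failure occurs when these settings come from an
        // AVOutputSettingsAssistant preset instead of a manual dictionary.
        let assistant = AVOutputSettingsAssistant(preset: .preset1280x720)
        let input = AVAssetWriterInput(mediaType: .audio,
                                       outputSettings: assistant?.audioSettings)
        input.expectsMediaDataInRealTime = true
        let writer = try AVAssetWriter(outputURL: url, fileType: .m4a)
        writer.add(input)
        self.writerInput = input
        self.writer = writer
        session.startRunning()
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        // append(sampleBuffer) to writerInput once the writer is running
    }
}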
I’ve filed a feedback including a demo project → FB15333298 🎟️
I would greatly appreciate any help!
Have a great day,
Martin
Hi all,
I have been quite stumped by this behavior for a little while now, so I thought it best to share here and see if someone more experienced with AVAudioEngine / AVAudioSession can weigh in.
Right now I have an AVAudioEngine that I use for voice chat, handing it buffers to play. This works perfectly until route changes start to occur, which cause the AVAudioEngine to reset itself and, in turn, stop all players attached to the engine.
Once an AVAudioPlayerNode gets stopped because of this (but also at any other time), all samples that were scheduled to be played get purged. What confuses me is that the completion handler gets called every time, regardless of whether the sound was actually played.
Is there a reliable way to know if a sample needs to be rescheduled after a player has been reset?
I am not quite sure in my case what my observer of AVAudioEngineConfigurationChange needs to be doing, as this engine only handles output. All input is through a separate engine for simplicity.
Currently I store a queue of samples as they are sent to the AVAudioPlayerNode for playback, and in each completion handler I check whether the player isPlaying. If it is, I assume the sound was actually played; if not, I leave it in the queue and assume that an observer of the route change or configuration change will notice there are samples in the queue and reschedule them.
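Roughly, that bookkeeping looks like this (a simplified, non-thread-safe sketch; I am using the .dataPlayedBack completion type here):

import AVFoundation

// A buffer stays queued until its completion fires while the player is
// still running; anything left over is rescheduled after a reset.
final class BufferTracker {
    private var pending: [AVAudioPCMBuffer] = []

    func schedule(_ buffer: AVAudioPCMBuffer, on player: AVAudioPlayerNode) {
        pending.append(buffer)
        player.scheduleBuffer(buffer, at: nil, options: [],
                              completionCallbackType: .dataPlayedBack) { [weak self] _ in
            if player.isPlaying {
                self?.pending.removeFirst() // assume it really played
            }
            // Otherwise it stays in `pending` for the configuration-change
            // observer to reschedule.
        }
    }
}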
Thanks for any feedback!
I am developing an iOS app that needs to play spoken audio on demand from a server, while ducking the audio of background music from another app (e.g., SoundtrackYourBrand or Apple Music). This must work even when the app is in the background, and the server dictates when and what audio is played. Ideally, the message should be played within a minute of the server requesting it.
Current Attempt & Observations
I initially tried using Firebase Cloud Messaging (FCM) silent notifications to send a URL to an audio file, which the app would then play using AVPlayer.
This works consistently when the app is active, but in the background, it only works about 60% of the time.
In cases where it fails, iOS ducks the background music (e.g., from SoundtrackYourBrand) but never plays the spoken audio.
Interestingly, when I play the audio without enabling audio ducking, it seems to work 100% of the time in my limited testing, even in the background (the ducking session setup is sketched after this list).
The app has background modes enabled for Audio, Background Fetch, and Remote Notifications.
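The ducking playback path from the attempt above boils down to this (a simplified sketch; player retention details and the end-of-playback deactivation are elided):

import AVFoundation

// Activate a ducking playback session, then play the fetched file; on
// deactivation the other app's music should come back up to volume.
final class SpokenAudioPlayer {
    private var player: AVPlayer?

    func play(url: URL) {
        let session = AVAudioSession.sharedInstance()
        do {
            try session.setCategory(.playback, options: [.duckOthers])
            try session.setActive(true)
        } catch {
            print("Audio session error: \(error)")
        }
        player = AVPlayer(url: url)
        player?.play()
        // When playback ends:
        // try? session.setActive(false, options: [.notifyOthersOnDeactivation])
    }
}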
Best Approach to Achieve This?
I’d like guidance on the best Apple-compliant approach to reliably play audio on command from the server, even when the app is in the background. Some possible paths:
Ensuring the app remains active in the background – Are there recommended ways to prevent the app from getting suspended, such as background tasks, a special background mode, or a persistent connection to the server?
Alternative triggering mechanisms – Would something like VoIP, Push-to-Talk, or another background service be better suited for this use case?
Built-in iOS speech synthesis (AVSpeechSynthesizer) – If playing external audio is unreliable, would generating speech dynamically from text be a more robust approach?
Streaming audio instead of sending a URL – Could continuous streaming from the server keep the app active and allow playback at the right moment?
I want to ensure the solution is reliable and works 100% of the time when needed. Any recommendations on the best approach for this would be greatly appreciated.
Thank you for your time and guidance.
I'm getting hundreds of the message below in Xcode. I've narrowed it down to when I instantiate the following
AVAudioUnitComponentManager.shared()
Message send exceeds rate-limit threshold and will be dropped. { reporterID=231700600717315, rateLimit=32hz }
I’m facing a problem while trying to achieve spatial audio effects in my iOS 18 app. I have tried several approaches to get good 3D audio, but the effect never felt good enough or it didn’t work at all.
What troubles me most is that my AirPods don't recognize my app as one playing spatial audio (the audio settings show "Spatial Audio Not Playing"), so I guess my app isn't using its spatial audio potential.
The first approach uses an AVAudioEnvironmentNode with AVAudioEngine. Changing the position of the player, as well as the listener's, doesn't seem to change anything about how the audio plays.
Here's how I initialize the AVAudioEngine:
import Foundation
import AVFoundation
import Combine

class AudioManager: ObservableObject {
    // important class variables
    var audioEngine: AVAudioEngine!
    var environmentNode: AVAudioEnvironmentNode!
    var playerNode: AVAudioPlayerNode!
    var audioFile: AVAudioFile?
    ...

    // Sound setup
    func setupAudio() {
        do {
            let session = AVAudioSession.sharedInstance()
            try session.setCategory(.playback, mode: .default, options: [])
            try session.setActive(true)
        } catch {
            print("Failed to configure AVAudioSession: \(error.localizedDescription)")
        }

        audioEngine = AVAudioEngine()
        environmentNode = AVAudioEnvironmentNode()
        playerNode = AVAudioPlayerNode()

        audioEngine.attach(environmentNode)
        audioEngine.attach(playerNode)
        audioEngine.connect(playerNode, to: environmentNode, format: nil)
        audioEngine.connect(environmentNode, to: audioEngine.mainMixerNode, format: nil)

        environmentNode.listenerPosition = AVAudio3DPoint(x: 0, y: 0, z: 0)
        environmentNode.listenerAngularOrientation = AVAudio3DAngularOrientation(yaw: 0, pitch: 0, roll: 0)
        environmentNode.distanceAttenuationParameters.referenceDistance = 1.0
        environmentNode.distanceAttenuationParameters.maximumDistance = 100.0
        environmentNode.distanceAttenuationParameters.rolloffFactor = 2.0

        // example.mp3 is a mono sound
        guard let audioURL = Bundle.main.url(forResource: "example", withExtension: "mp3") else {
            print("Audio file not found")
            return
        }
        do {
            audioFile = try AVAudioFile(forReading: audioURL)
        } catch {
            print("Failed to load audio file: \(error)")
        }
    }
    ...

    // Playing sound
    func playSpatialAudio(pan: Float) {
        guard let audioFile = audioFile else { return }
        // left side
        playerNode.position = AVAudio3DPoint(x: pan, y: 0, z: 0)
        playerNode.scheduleFile(audioFile, at: nil, completionHandler: nil)
        do {
            try audioEngine.start()
            playerNode.play()
        } catch {
            print("Failed to start audio engine: \(error)")
        }
        ...
    }
The second, more complex approach using PHASE did better. I made an example app that lets the user move an audio source in 3D space. I added reverb, and sliders that move the source up to 10 meters in each direction from the listener, but the audio only really seems to change left to right (the x axis); again, I suspect the trouble is the app not being recognized as spatial.
import AVFoundation
import PHASE
import ModelIO
import Combine

// Crucial class variables:
class PHASEAudioController: ObservableObject {
    private var soundSourcePosition: simd_float4x4 = matrix_identity_float4x4
    private var audioAsset: PHASESoundAsset!
    private let phaseEngine: PHASEEngine
    private let params = PHASEMixerParameters()
    private var soundSource: PHASESource
    private var phaseListener: PHASEListener!
    private var soundEventAsset: PHASESoundEventNodeAsset?
    // soundPipeline (a spatial mixer definition), rolloffFactor, and
    // loadAudioAsset() are defined elsewhere and omitted from this excerpt.

    // Initialization of PHASE
    init() {
        do {
            let session = AVAudioSession.sharedInstance()
            try session.setCategory(.playback, mode: .default, options: [])
            try session.setActive(true)
        } catch {
            print("Failed to configure AVAudioSession: \(error.localizedDescription)")
        }

        // Init PHASE engine
        phaseEngine = PHASEEngine(updateMode: .automatic)
        phaseEngine.defaultReverbPreset = .mediumHall
        phaseEngine.outputSpatializationMode = .automatic // nothing helps

        // Set listener position to (0,0,0) in world space
        let origin: simd_float4x4 = matrix_identity_float4x4
        phaseListener = PHASEListener(engine: phaseEngine)
        phaseListener.transform = origin
        phaseListener.automaticHeadTrackingFlags = .orientation
        try! self.phaseEngine.rootObject.addChild(self.phaseListener)

        do {
            try self.phaseEngine.start()
        } catch {
            print("Could not start PHASE engine")
        }

        audioAsset = loadAudioAsset()

        // Create sound source
        // Sphere
        soundSourcePosition.translate(z: 3.0)
        let sphere = MDLMesh.newEllipsoid(withRadii: vector_float3(0.1, 0.1, 0.1), radialSegments: 14, verticalSegments: 14, geometryType: MDLGeometryType.triangles, inwardNormals: false, hemisphere: false, allocator: nil)
        let shape = PHASEShape(engine: phaseEngine, mesh: sphere)
        soundSource = PHASESource(engine: phaseEngine, shapes: [shape])
        soundSource.transform = soundSourcePosition
        print(soundSourcePosition)
        do {
            try phaseEngine.rootObject.addChild(soundSource)
        } catch {
            print("Failed to add a child object to the scene.")
        }

        let simpleModel = PHASEGeometricSpreadingDistanceModelParameters()
        simpleModel.rolloffFactor = rolloffFactor
        soundPipeline.distanceModelParameters = simpleModel

        let samplerNode = PHASESamplerNodeDefinition(
            soundAssetIdentifier: audioAsset.identifier,
            mixerDefinition: soundPipeline,
            identifier: audioAsset.identifier + "_SamplerNode")
        samplerNode.playbackMode = .looping
        do {
            soundEventAsset = try phaseEngine.assetRegistry.registerSoundEventAsset(
                rootNode: samplerNode,
                identifier: audioAsset.identifier + "_SoundEventAsset")
        } catch {
            print("Failed to register a sound event asset.")
            soundEventAsset = nil
        }
    }

    // Playing sound
    func playSound() {
        // Fire a new sound event with the currently set properties
        guard let soundEventAsset else { return }
        params.addSpatialMixerParameters(
            identifier: soundPipeline.identifier,
            source: soundSource,
            listener: phaseListener)
        let soundEvent = try! PHASESoundEvent(engine: phaseEngine,
                                              assetIdentifier: soundEventAsset.identifier,
                                              mixerParameters: params)
        soundEvent.start(completion: nil)
    }
    ...
}
It might also be worth mentioning that I only have a personal team account.
I've been researching how to achieve, for recording playback on iOS, an effect similar to the speakerphone (hands-free) mode of the system Phone app. How can this be implemented? I tried the voice-chat recording approach, but found the loudspeaker output volume too low. How should this be addressed? I couldn't find a suitable API. Could you point me to documentation or sample code? Thank you.
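For context, my current attempt is essentially the voice-chat setup below; it does route to the loudspeaker, but much more quietly than the Phone app's speakerphone (a minimal sketch):

import AVFoundation

// Route play-and-record audio to the loudspeaker, similar to
// speakerphone; this is the setup whose output is too quiet for me.
func configureSpeakerphoneLikeSession() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord,
                            mode: .voiceChat,
                            options: [.defaultToSpeaker, .allowBluetooth])
    try session.setActive(true)
    try session.overrideOutputAudioPort(.speaker)
}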
Hi all, I have spent a lot of time reading the tech note and watching the WDDC video that introduce the PTTFramework on iOS. I currently have a custom setup where I am using AVAudioEngine to schedule and play buffers that are being streamed through a call.
I am looking to use the PTTFramework to allow a user to trigger this push to talk behavior from the lock screen and the various places with the system UI it provides.
However I am unsure what the correct behavior is regarding the handling of the audio session. Right now I am using .playback when there is no active voice transmission so that devices such as AirPods can be in AD2P mode where applicable, and then transitioning to .playbackAndRecord category only when the mic input should become active. Following this change in my AVAudioEngine manager I am then manually activating and deactivating the audio session manually when the engine is either playing/recording or idle.
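Concretely, the manual transition I am doing looks like this (a simplified sketch):

import AVFoundation

// Idle / receive-only: plain playback so AirPods can stay in A2DP.
func enterIdleMode() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playback, mode: .default)
    try session.setActive(true)
}

// Mic becomes active: switch to playAndRecord for the transmission.
func enterTransmitMode() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord, mode: .voiceChat,
                            options: [.allowBluetooth])
    try session.setActive(true)
}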
In the documentation it states that you should not attempt to activate or deactivate your audio session directly, but allow the framework to handle it.
Does that mean that I need to either call the request to transmit delegate function or set an active participant on the channel manager first, and then wait for the didBecomeActive delegate method to trigger before I actually attempt to play or record any audio? (I am using the fullDuplex mode currently.) I noticed that that delegate method will only trigger if the audio session wasn't active before doing one of the above (setting active participant, requesting transmit).
Lastly, when using the PTTFramework it also mentions that we get support for PTT devices and I notice on the didBeginTransmittingFrom property we have a handsfreeButton case. Is there any documentation or resources for what is actually supported out of the box for this? I am currently working on handling a lot of the push to talk through bluetooth LE, and wanted to make sure there wasn't overlap with what the system provides.
Thank you!
When a tab goes to sleep, all its resources get released. When the user interacts with it, it becomes active again. However, if there is audio in the tab, it does not play again even after user interaction; one has to reload or reopen the tab.
Is this how it should work?
Is there a fix for this?
I'd like to suggest a different microphone dot icon for Voice Control. I had customized Voice Control to turn on the flashlight, which caused confusion because the orange dot was switched on constantly.
Because of this, I mistakenly reported the orange microphone dot being always on while the iPhone is unlocked as a security vulnerability to Apple Security.
Looking for help getting "On Tap" to work inside RCP for my AVP project. I can get it to work when using "On Added to Scene", but if I switch to "On Tap", the audio will not play when attached to an entity in my scene. I'm using the same entity for the tap gesture that the audio is using for the emitter. Here is my workflow for the "On Added to Scene" setup that works correctly, to help troubleshoot my non-working "On Tap":
Behaviors: "On Added to Scene", action: Timeline
Input Target: checkmark enabled, allowed all
Collision: set to default
Audio Library: source mp3 file
Channel Audio: resource mp3 file above
Timeline: Play Audio with the mp3 file added
This setup in RCP lets my AVP project launch correctly with audio "On Added to Scene". But when I switch the behavior to "On Tap", the audio no longer plays and I cannot figure out why. I've tried several different options and nothing works. Please help!
I followed this guide and added com.apple.developer.spatial-audio.profile-access as an entitlement to the app (via the + Capability button, "Spatial Audio Profile"). I have an audio graph that outputs to AVAudioEngine.
However, the Xcode Cloud build ended up with this error:
Invalid Code Signing Entitlements. Your application bundle's signature contains code signing entitlements that are not supported on iOS. Specifically, key 'com.apple.developer.spatial-audio.profile-access' in 'Payload/…' is not supported.
This guide says it's available on iOS. Does that mean not on iOS 17? And in that case, how can I provide a fallback for iOS 17?
Hello!
I used Apple's CAPlayThrough example code, which pipes audio between devices. It uses AudioUnit callbacks to pipe the input to an output device, and I created a system equalizer with it. However, users reported it stopped working in macOS 15. I am getting the error
HALPlugIn.cpp:552 HALPlugIn::DeviceGetCurrentTime: got an error from the plug-in routine, Error: 1937010544 (stop)
for the output device, and no sound comes out of the speakers. The error only occurs when using a virtual device as input, not when using the microphone. First I thought the problem was in the loopback driver, but it also doesn't work with other loopback drivers like BlackHole.
STEPS TO REPRODUCE
Install a virtual device, for example "brew install blackhole-2ch" and run the CAPlayThrough example code (you need to add Mic Permission in the info.plist). Then set your system audio output to the virtual device, select the device as input in CAPlayThrough and hit start. You should see the error in console.
My question:
What did change in macOS 15 that could cause this? Is it something with the new permission handling maybe?
private var audioEngine = AVAudioEngine()
private var inputNode: AVAudioInputNode!
private let recordingSession = AVAudioSession.sharedInstance() // assumed; referenced below

func startAnalyzing() {
    inputNode = audioEngine.inputNode
    let recordingFormat = inputNode.outputFormat(forBus: 0)
    let hardwareSampleRate = recordingSession.sampleRate
    inputNode.removeTap(onBus: 0)

    if recordingFormat.sampleRate != hardwareSampleRate {
        print("Sample rate mismatch; tapping with the hardware rate.")
        let newFormat = AVAudioFormat(commonFormat: recordingFormat.commonFormat,
                                      sampleRate: hardwareSampleRate,
                                      channels: recordingFormat.channelCount,
                                      interleaved: recordingFormat.isInterleaved)
        inputNode.installTap(onBus: 0, bufferSize: 1024, format: newFormat) { buffer, time in
            self.processAudioBuffer(buffer, time: time)
        }
    } else {
        inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { buffer, time in
            self.processAudioBuffer(buffer, time: time)
        }
    }

    do {
        audioEngine.prepare()
        try audioEngine.start()
    } catch {
        print("Audio engine failed to start: \(error)")
    }
}
I send the app to the background and then call startAnalyzing(), which reports an error, even though the background recording permissions are configured.
error:
[10429:570139] [aurioc] AURemoteIO.cpp:1668 AUIOClient_StartIO failed (561145187)
[10429:570139] [avae] AVAEInternal.h:109 [AVAudioEngineGraph.mm:1545:Start: (err = PerformCommand(*ioNode, kAUStartIO, NULL, 0)): error 561145187
Audio engine couldn't start.
Is starting the audio engine from the background not allowed?
I am trying to get access to raw audio samples from the mic. I've written a simple example application that writes the values to a text file.
Below is my sample application. All the input samples in the buffers delivered to the input tap are zero. What am I doing wrong?
I did add the Privacy - Microphone Usage Description key to my application target properties and I am allowing microphone access when the application launches. I do find it strange that I have to provide permission every time even though in Settings > Privacy, my application is listed as one of the applications allowed to access the microphone.
import AVFoundation

class AudioRecorder {
    private let audioEngine = AVAudioEngine()
    private var fileHandle: FileHandle?

    func startRecording() {
        let inputNode = audioEngine.inputNode
        let audioFormat: AVAudioFormat
        #if os(iOS)
        let hardwareSampleRate = AVAudioSession.sharedInstance().sampleRate
        audioFormat = AVAudioFormat(standardFormatWithSampleRate: hardwareSampleRate, channels: 1)!
        #elseif os(macOS)
        audioFormat = inputNode.inputFormat(forBus: 0) // Use the input node's current format
        #endif

        setupTextFile()
        inputNode.installTap(onBus: 0, bufferSize: 1024, format: audioFormat) { [weak self] buffer, _ in
            self?.processAudioBuffer(buffer: buffer)
        }

        do {
            try audioEngine.start()
            print("Recording started with format: \(audioFormat)")
        } catch {
            print("Failed to start audio engine: \(error.localizedDescription)")
        }
    }

    func stopRecording() {
        audioEngine.stop()
        audioEngine.inputNode.removeTap(onBus: 0)
        print("Recording stopped.")
    }

    private func setupTextFile() {
        let tempDir = FileManager.default.temporaryDirectory
        let textFileURL = tempDir.appendingPathComponent("audioData.txt")
        FileManager.default.createFile(atPath: textFileURL.path, contents: nil, attributes: nil)
        fileHandle = try? FileHandle(forWritingTo: textFileURL)
    }

    private func processAudioBuffer(buffer: AVAudioPCMBuffer) {
        guard let channelData = buffer.floatChannelData else { return }
        let channelSamples = channelData[0]
        let frameLength = Int(buffer.frameLength)

        var textData = ""
        var allZero = true
        for i in 0..<frameLength {
            let sample = channelSamples[i]
            if sample != 0 {
                allZero = false
            }
            textData += "\(sample)\n"
        }

        if allZero {
            print("Got \(frameLength) worth of audio data on \(buffer.stride) channels. All data is zero.")
        } else {
            print("Got \(frameLength) worth of audio data on \(buffer.stride) channels.")
        }

        // Write to file
        if let data = textData.data(using: .utf8) {
            fileHandle?.write(data)
        }
    }
}
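As a side note on the permission prompt: beyond the Info.plist key, the only explicit request I know of is the standard one below, which I would expect to prompt once and then be remembered (a sketch only, not part of the sample above):

import AVFoundation

// Standard microphone permission request; the system should remember
// the answer in Settings > Privacy afterwards.
AVCaptureDevice.requestAccess(for: .audio) { granted in
    print("Microphone access granted: \(granted)")
}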
Hi all,
I am developing a digital signal processing application using AudioToolbox to capture audio from an audio loopback application (BlackHole).
Environment:
macOS Sonoma 14.4.1
Xcode 15.4
QuickTime 10.5 (I also tested with JRiver Media Center)
BlackHole 2ch and 16ch
Problem: All audio samples received are zero.
Steps to recreate:
In Mac System Settings > Sound, set the audio output to BlackHole 2ch.
In Mac System Settings > Sound, set the audio input to BlackHole 2ch.
Authorise Xcode to access the microphone.
In Audio MIDI Setup, enable "Use this device for sound input" and "Use this device for sound output". Set the volume of both to 1.0.
Play a 44.1 kHz, 16-bit signed integer, stereo FLAC file using QuickTime.
Start the C++ application. Key details of my code are below...
AudioStreamBasicDescription asbd = { 0 };
asbd.mFormatID = kAudioFormatLinearPCM;
asbd.mFormatFlags = kLinearPCMFormatFlagIsFloat | kLinearPCMFormatFlagIsPacked;
asbd.mSampleRate = 48000;
asbd.mBitsPerChannel = 32;
asbd.mBytesPerFrame = 8;
asbd.mChannelsPerFrame = 2;
asbd.mBytesPerPacket = asbd.mBytesPerFrame;
asbd.mFramesPerPacket = 1;

status = AudioQueueNewInput(&asbd,
                            read_audio_callback,
                            &userdata,
                            NULL,
                            NULL,
                            0,
                            &queue_ref);

for (uint8_t b = 0; b < num_buffers; b++) {
    AudioQueueBufferRef buf_ref;
    status = AudioQueueAllocateBuffer(queue_ref, audio_buf_size, &buf_ref);
    printf("Allocate buffer status: %d length %d\n", status, buf_ref->mAudioDataByteSize);
    status = AudioQueueEnqueueBuffer(queue_ref, buf_ref, 0, NULL);
    printf("Initial Enqueue Buffer status: %d\n", status);
}

status = AudioQueueStart(queue_ref, NULL);

Here is my callback:

void read_audio_callback(void * ptr, AudioQueueRef queue_ref, AudioQueueBufferRef buf_ref, const AudioTimeStamp * ts_not_used, uint32_t num_packets, const AudioStreamPacketDescription * aspd_not_used) {
    if (num_packets > 0) {
        uint32_t bytesize = buf_ref->mAudioDataByteSize;
        float * sample_buf_float = (float *)buf_ref->mAudioData;
        float data[bytesize / 4];
        memcpy(data, sample_buf_float, bytesize);
        OSStatus status = AudioQueueEnqueueBuffer(queue_ref, buf_ref, 0, NULL);
        printf("Enqueue buffer status: %d\n", status);
        printf("Buffer length %d Packets received %d\n", bytesize, num_packets);
        for (int j = 0; j < bytesize / 4; j++) {
            printf("%f", data[j]);
        }
    }
    printf("read_audio_callback called!\n");
}
All calls to the Apple audio functions return a status of 0.
The samples in the buffer are all 0.0. Why would this be the case?
Also, my callback is called even when playback is stopped, and num_packets is always > 0.
Appreciate any help.
Thanks in advance,
Geoff.