I'm trying to implement AirPlay in my app. I can play back sound and trigger the AirPlay route selector sheet. If the target device is Bluetooth-only, I can connect with no problem and stream audio to it. But if the device is AirPlay-specific, like a HomePod or an Apple TV, selecting it just shows a spinning icon indicating that it is trying to connect, and eventually it times out without connecting.
I don't believe it is an AirPlay audio issue: if I go to a different app, for example a podcast app, select my HomePods for output there, and then switch back to my app, my audio streams to the HomePod correctly. On top of that, my icon changes color to indicate an AirPlay connection, and it correctly shows that it is connected via AirPlay. But I then cannot disconnect it using the AirPlay selector.
The issue appears to be on the AirPlay selection side, which I have spent several days troubleshooting, mostly by having ChatGPT suggest alternative code to work around the issue. Those suggestions have focused on the audio player section, but that doesn't seem to be where the problem actually lies.
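For reference, here is a minimal sketch of the kind of setup in question (class and method names are illustrative, not my actual code). One thing worth checking is the route-sharing policy: .longFormAudio is reportedly what AirPlay 2 devices like HomePod and Apple TV expect for audio-only routing, whereas plain .playback may behave differently.
import AVFoundation
import AVKit
import UIKit

final class PlaybackController {
    // Minimal sketch: names here (PlaybackController, attachRoutePicker) are
    // illustrative, not from the original post.
    func configureSession() {
        let session = AVAudioSession.sharedInstance()
        do {
            // .longFormAudio opts into AirPlay 2 style routing, which HomePod
            // and Apple TV use; plain .playback may route differently.
            try session.setCategory(.playback, mode: .default,
                                    policy: .longFormAudio, options: [])
            try session.setActive(true)
        } catch {
            print("Audio session setup failed: \(error)")
        }
    }

    // Use the system AirPlay picker rather than a custom selector UI.
    func attachRoutePicker(to container: UIView) {
        let picker = AVRoutePickerView(frame: container.bounds)
        picker.prioritizesVideoDevices = false
        container.addSubview(picker)
    }
}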
Audio
Integrate music and other audio content into your apps.
Posts under Audio tag: 84 posts
I'm developing a visionOS app. I'd like to know how to play spatial audio by means other than RealityKit. And on iOS or macOS, how can I play spatial audio without RealityKit?
I'm experiencing issues with audio playback in my React video player component specifically on iOS mobile devices (iPhone/iPad). Even after implementing several recommended solutions, including Apple's own guidelines, the audio still isn't working properly on iOS Safari. It works completely fine on Android. On iOS, I ensured the video doesn't autoplay (it requires user interaction). Here are all the details:
Environment
iOS Safari (latest version)
React 18
TypeScript
Video files: MP4 with AAC audio codec
Current Implementation
const VideoPlayer: React.FC<VideoPlayerProps> = ({
  src,
  autoplay = true,
}) => {
  const videoRef = useRef<HTMLVideoElement>(null);
  const isIOSDevice = isIOS(); // Custom iOS detection
  const [touchStartY, setTouchStartY] = useState<number | null>(null);
  const [touchStartTime, setTouchStartTime] = useState<number | null>(null);

  // Handle touch start event for gesture detection
  const handleTouchStart = (e: React.TouchEvent) => {
    setTouchStartY(e.touches[0].clientY);
    setTouchStartTime(Date.now());
  };

  // Handle touch end event with gesture validation
  const handleTouchEnd = (e: React.TouchEvent) => {
    if (touchStartY === null || touchStartTime === null) return;
    const touchEndY = e.changedTouches[0].clientY;
    const touchEndTime = Date.now();

    // Validate if it's a legitimate tap (not a scroll)
    const verticalDistance = Math.abs(touchEndY - touchStartY);
    const touchDuration = touchEndTime - touchStartTime;

    // Only trigger for quick taps (< 200ms) with minimal vertical movement
    if (touchDuration < 200 && verticalDistance < 10) {
      handleVideoInteraction(e);
    }
    setTouchStartY(null);
    setTouchStartTime(null);
  };

  // Simplified video interaction handler following Apple's guidelines
  const handleVideoInteraction = (e: React.MouseEvent | React.TouchEvent) => {
    console.log('Video interaction detected:', {
      type: e.type,
      timestamp: new Date().toISOString()
    });

    // Ensure keyboard is dismissed (iOS requirement)
    if (document.activeElement instanceof HTMLElement) {
      document.activeElement.blur();
    }
    e.stopPropagation();

    const video = videoRef.current;
    if (!video || !video.paused) return;

    // Attempt playback in response to user gesture
    video.play().catch(err => console.error('Error playing video:', err));
  };

  // Effect to handle video source and initial state
  useEffect(() => {
    console.log('VideoPlayer props:', { src, loadingState });
    setError(null);
    setLoadingState('initial');
    setShowPlayButton(false); // Never show custom play button on iOS

    if (videoRef.current) {
      // Set crossOrigin attribute for CORS
      videoRef.current.crossOrigin = "anonymous";
      if (autoplay && !hasPlayed && !isIOSDevice) {
        // Only autoplay on non-iOS devices
        dismissKeyboard();
        setHasPlayed(true);
      }
    }
  }, [src, autoplay, hasPlayed, isIOSDevice]);

  return (
    <Paper
      shadow="sm"
      radius="md"
      withBorder
      onClick={handleVideoInteraction}
      onTouchStart={handleTouchStart}
      onTouchEnd={handleTouchEnd}
    >
      <video
        ref={videoRef}
        autoPlay={!isIOSDevice && autoplay}
        playsInline
        controls
        crossOrigin="anonymous"
        preload="auto"
        onLoadedData={handleLoadedData}
        onLoadedMetadata={handleMetadataLoaded}
        onEnded={handleVideoEnd}
        onError={handleError}
        onPlay={dismissKeyboard}
        onClick={handleVideoInteraction}
        onTouchStart={handleTouchStart}
        onTouchEnd={handleTouchEnd}
        {...(!isFirefoxBrowser && {
          "x-webkit-airplay": "allow",
          "x-webkit-playsinline": true,
          "webkit-playsinline": true
        })}
      >
        <source src={videoSrc} type="video/mp4" />
      </video>
    </Paper>
  );
};
Apple's Guidelines Implementation
Removed custom play controls on iOS
Using native video controls for user interaction
Ensuring audio playback is triggered by user gesture
Following Apple's audio session guidelines
Properly handling the canplaythrough event
Current Behavior
Video plays but without sound on iOS mobile
Mute/unmute button in native video controls doesn't work
Audio works fine on desktop browsers and Android devices
Videos are confirmed to have AAC audio codec
No console errors related to audio playback
User interaction doesn't trigger audio as expected
Questions
Are there any additional iOS-specific requirements I'm missing?
Could this be related to iOS audio session handling?
Are there known issues with React's handling of video elements on iOS?
Should I be implementing additional audio context initialization?
Any insights or suggestions would be greatly appreciated!
I'm working on adding CarPlay support to an audio app and I'd like to mimic the behavior of the Apple Music app on launch.
Forgive me, but I think Gherkin syntax will best describe the desired behavior:
GIVEN the Apple Music app is in a cold state (not launched or in memory)
AND another audio app is actively playing audio
WHEN I launch the Apple Music app from CarPlay
THEN the Now Playing template is shown via a push
AND the appropriate Now Playing info is shown
AND the Now Playing button is shown on the tab bar
AND the actively playing audio from another audio app is NOT interrupted
The current behavior I see in my own app is that I can push the Now Playing template and fill out MPNowPlayingInfoCenter's info dictionary, but it won't render the info or show the Now Playing button on the tab bar until I start playing audio.
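For reference, here is a trimmed-down sketch of what I'm doing today (titles and values are placeholders):
import CarPlay
import MediaPlayer

// Sketch of the current approach (names are illustrative): push the shared
// Now Playing template and populate MPNowPlayingInfoCenter without playing.
func showNowPlaying(on interfaceController: CPInterfaceController) {
    let nowPlaying = CPNowPlayingTemplate.shared
    interfaceController.pushTemplate(nowPlaying, animated: true, completion: nil)

    var info: [String: Any] = [:]
    info[MPMediaItemPropertyTitle] = "Episode title"
    info[MPMediaItemPropertyArtist] = "Show name"
    info[MPNowPlayingInfoPropertyPlaybackRate] = 0.0   // not playing yet
    MPNowPlayingInfoCenter.default().nowPlayingInfo = info

    // As described above, the template only renders this info (and the
    // tab-bar button appears) once the app actually starts playing audio.
}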
Also, is there a way to hide the Now Playing button after the queue of content has finished playing? I'm able to pop the Now Playing template, but the Now Playing button is still present and tapping it will navigate the user to the now blank Now Playing template.
Hello Apple Developers,
I am experiencing an issue where USB audio input (e.g., external USB microphone) is blocked when using AirPlay screen mirroring from my iPhone to a Mac or Apple TV. However, the built-in microphone continues to work without any problem.
Issue Details:
I am using an iPhone 15 (or latest device) running iOS [your iOS version].
I connect a USB audio interface/microphone via a USB-C adapter or Lightning adapter.
The USB microphone works perfectly for audio input before starting AirPlay.
The moment I enable AirPlay screen mirroring, the USB microphone stops working, and it disappears from available audio input sources.
The built-in microphone continues to function, but I cannot use the USB microphone while mirroring.
When I stop screen mirroring, the USB microphone immediately becomes available again.
Expected Behavior:
I would expect iOS to allow me to continue using an external USB microphone while mirroring my screen, just like it allows the built-in microphone to work.
Questions:
Is this an intentional restriction in iOS?
Is there any workaround to enable USB audio input while using AirPlay screen mirroring?
Is there a way to request a feature or configuration option to allow external USB microphones during AirPlay?
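For anyone trying to reproduce this, here is a minimal sketch (assuming an AVAudioSession-based capture path; names are illustrative) that logs the available inputs on each route change, which makes the USB input's disappearance visible the moment mirroring starts:
import AVFoundation

// Illustrative sketch: log available inputs on every route change so the
// disappearance of the USB input during AirPlay mirroring is easy to see.
final class InputRouteLogger {
    private var observer: NSObjectProtocol?

    func start() {
        observer = NotificationCenter.default.addObserver(
            forName: AVAudioSession.routeChangeNotification,
            object: nil,
            queue: .main
        ) { _ in
            let inputs = AVAudioSession.sharedInstance().availableInputs ?? []
            print("Available inputs:",
                  inputs.map { "\($0.portName) (\($0.portType.rawValue))" })
        }
    }

    deinit {
        if let observer { NotificationCenter.default.removeObserver(observer) }
    }
}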
I appreciate any insights or guidance from the Apple team or fellow developers. Thanks in advance!
Best regards,
A month ago I was listening to music in Apple Music and tried to download a song (nothing unusual). It took longer than usual and then an error appeared. I can't upload a screenshot of the message, but it says: "The Song Name / Album / Artist 0% - Stopped (err = -12884)". Uninstalling and reinstalling the app fixes it, but as soon as I enable Lossless Audio the error comes back. It is still appearing today, and I'm tired of uninstalling and reinstalling the app so many times. Can anyone help?
I am developing an iOS app that needs to play spoken audio on demand from a server, while ducking the audio of background music from another app (e.g., SoundtrackYourBrand or Apple Music). This must work even when the app is in the background, and the server dictates when and what audio is played. Ideally, the message should be played within a minute of the server requesting it.
Current Attempt & Observations
I initially tried using Firebase Cloud Messaging (FCM) silent notifications to send a URL to an audio file, which the app would then play using AVPlayer.
This works consistently when the app is active, but in the background, it only works about 60% of the time.
In cases where it fails, iOS ducks the background music (e.g., from SoundtrackYourBrand) but never plays the spoken audio.
Interestingly, when I play the audio without enabling audio ducking, it seems to work 100% of the time from my limited testing, even in the background.
The app has background modes enabled for Audio, Background Fetch, and Remote Notifications.
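For reference, the playback path is essentially the following (a simplified sketch; in practice the URL comes from the push payload):
import AVFoundation

// Simplified sketch of the playback path (URL is a placeholder): duck other
// audio while the spoken message plays, then deactivate to restore it.
final class AnnouncementPlayer {
    private var player: AVPlayer?

    func play(url: URL) {
        let session = AVAudioSession.sharedInstance()
        do {
            try session.setCategory(.playback,
                                    mode: .spokenAudio,
                                    options: [.duckOthers])
            try session.setActive(true)
        } catch {
            print("Failed to activate session: \(error)")
        }

        player = AVPlayer(url: url)
        player?.play()
    }

    func finish() {
        player = nil
        // Let other apps return to full volume.
        try? AVAudioSession.sharedInstance()
            .setActive(false, options: [.notifyOthersOnDeactivation])
    }
}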
Best Approach to Achieve This?
I’d like guidance on the best Apple-compliant approach to reliably play audio on command from the server, even when the app is in the background. Some possible paths:
Ensuring the app remains active in the background – Are there recommended ways to prevent the app from getting suspended, such as background tasks, a special background mode, or a persistent connection to the server?
Alternative triggering mechanisms – Would something like VoIP, Push-to-Talk, or another background service be better suited for this use case?
Built-in iOS speech synthesis (AVSpeechSynthesizer) – If playing external audio is unreliable, would generating speech dynamically from text be a more robust approach?
Streaming audio instead of sending a URL – Could continuous streaming from the server keep the app active and allow playback at the right moment?
I want to ensure the solution is reliable and works 100% of the time when needed. Any recommendations on the best approach for this would be greatly appreciated.
Thank you for your time and guidance.
I'm getting hundreds of the message below in Xcode. I've narrowed it down to when I instantiate the following
AVAudioUnitComponentManager.shared()
Message send exceeds rate-limit threshold and will be dropped. { reporterID=231700600717315, rateLimit=32hz }
When a tab goes to sleep, all of its resources get released. When the user interacts with it, the tab becomes active again. However, if the tab contains audio, it does not play again even after user interaction; one has to reload or reopen the tab.
Is this how it is supposed to work?
Is there a fix for this?
I'd like to suggest a different microphone dot icon for Voice Control. I had customized Voice Control to turn on the flashlight, which caused confusion because the orange dot was switched on constantly.
I mistakenly reported this to Apple Security as a security vulnerability about the orange microphone dot being always on while the iPhone is unlocked.
Looking for help getting "On Tap" to work inside RCP for my AVP project. I can get it to work with "on added to scene", but if I switch to "on tap", the audio will not play when attaching the audio to an entity in my scene. I'm using the same entity for the tap gesture that the audio is using for the emitter. Here is my workflow for the working "on added to scene" setup, to help troubleshoot the non-working "on tap".
Behaviors: "on added to scene". action - timeline
Input target: check mark enabled, allowed all
Collision set to default
Audio library: source mp3 file
Channel Audio: resource mp3 file above
Timeline: Play Audio with mp3 file added
This setup in RCP allows my AVP project to launch correctly with audio "on added to scene". But when I switch the behavior to "on tap", the audio no longer plays and I cannot figure out why. I've tried several different options and nothing works. Please help!
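In case it helps to compare against a code path, here is a minimal RealityView sketch (scene, entity, and audio names are placeholders) of what an "on tap" audio trigger looks like when driven from Swift instead of an RCP timeline; it assumes the tapped entity already has the InputTarget and Collision components set up as above:
import SwiftUI
import RealityKit

// Minimal sketch (asset names are placeholders): tap an entity that has
// InputTarget + Collision components and play audio on it directly from
// Swift, rather than through an RCP "on tap" timeline.
struct TapAudioView: View {
    var body: some View {
        RealityView { content in
            if let scene = try? await Entity(named: "Scene") {
                content.add(scene)
            }
        }
        .gesture(
            SpatialTapGesture()
                .targetedToAnyEntity()
                .onEnded { value in
                    let entity = value.entity
                    if let resource = try? AudioFileResource.load(named: "sound.mp3") {
                        entity.playAudio(resource)
                    }
                }
        )
    }
}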
Topic: Spatial Computing
SubTopic: Reality Composer Pro
Tags: Audio, USDZ, Reality Composer Pro, visionOS
I've been researching how to achieve a recording-playback effect in iOS similar to the speakerphone (hands-free) effect in the system Phone app. How can this be implemented? I tried the voice-chat recording approach, but found that the speaker output volume is too low. How should this be addressed? I couldn't find a suitable API. Could you provide me with some documentation or sample code? Thank you.
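For context, this is the kind of configuration I've been experimenting with (a sketch, not a confirmed solution): AVAudioSession with .playAndRecord, voice-chat mode, and the speaker override, which is where the low output volume shows up.
import AVFoundation

// Sketch of the setup I've been experimenting with (not a confirmed fix):
// .playAndRecord with voice-chat mode, forcing output to the built-in speaker.
func configureSpeakerphoneStyleSession() {
    let session = AVAudioSession.sharedInstance()
    do {
        try session.setCategory(.playAndRecord,
                                mode: .voiceChat,
                                options: [.defaultToSpeaker, .allowBluetooth])
        try session.setActive(true)
        // Explicit override, in case .defaultToSpeaker alone isn't enough.
        try session.overrideOutputAudioPort(.speaker)
    } catch {
        print("Session configuration failed: \(error)")
    }
}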
I'm facing a problem while trying to achieve spatial audio effects in my iOS 18 app. I have tried several approaches to get good 3D audio, but the effect never felt good enough, or it didn't work at all.
What troubles me most is that my AirPods don't recognize my app as one playing spatial audio (in the audio settings it shows "Spatial Audio Not Playing"), so I guess my app isn't using its spatial audio potential.
The first approach uses AVAudioEnvironmentNode with AVAudioEngine. Changing the position of the player, as well as the listener's, doesn't seem to change anything about how the audio plays.
Here is, in simplified form, how I initialize AVAudioEngine:
import Foundation
import AVFoundation
class AudioManager: ObservableObject {
    // important class variables
    var audioEngine: AVAudioEngine!
    var environmentNode: AVAudioEnvironmentNode!
    var playerNode: AVAudioPlayerNode!
    var audioFile: AVAudioFile?
    ...

    // Sound setup
    func setupAudio() {
        do {
            let session = AVAudioSession.sharedInstance()
            try session.setCategory(.playback, mode: .default, options: [])
            try session.setActive(true)
        } catch {
            print("Failed to configure AVAudioSession: \(error.localizedDescription)")
        }

        audioEngine = AVAudioEngine()
        environmentNode = AVAudioEnvironmentNode()
        playerNode = AVAudioPlayerNode()

        audioEngine.attach(environmentNode)
        audioEngine.attach(playerNode)
        audioEngine.connect(playerNode, to: environmentNode, format: nil)
        audioEngine.connect(environmentNode, to: audioEngine.mainMixerNode, format: nil)

        environmentNode.listenerPosition = AVAudio3DPoint(x: 0, y: 0, z: 0)
        environmentNode.listenerAngularOrientation = AVAudio3DAngularOrientation(yaw: 0, pitch: 0, roll: 0)
        environmentNode.distanceAttenuationParameters.referenceDistance = 1.0
        environmentNode.distanceAttenuationParameters.maximumDistance = 100.0
        environmentNode.distanceAttenuationParameters.rolloffFactor = 2.0

        // example.mp3 is a mono sound
        guard let audioURL = Bundle.main.url(forResource: "example", withExtension: "mp3") else {
            print("Audio file not found")
            return
        }
        do {
            audioFile = try AVAudioFile(forReading: audioURL)
        } catch {
            print("Failed to load audio file: \(error)")
        }
    }
    ...

    // Playing sound
    func playSpatialAudio(pan: Float) {
        guard let audioFile = audioFile else { return }
        // left side
        playerNode.position = AVAudio3DPoint(x: pan, y: 0, z: 0)
        playerNode.scheduleFile(audioFile, at: nil, completionHandler: nil)
        do {
            try audioEngine.start()
            playerNode.play()
        } catch {
            print("Failed to start audio engine: \(error)")
        }
        ...
    }
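One possible culprit, offered as a hedged guess rather than a confirmed fix: as far as I understand, AVAudioEnvironmentNode only spatializes mono inputs, and the player's rendering algorithm matters, so the player-to-environment connection may need an explicit mono format, roughly like this:
// Hedged suggestion, not a confirmed fix: connect the player to the
// environment node with an explicit mono format and an HRTF-based rendering
// algorithm. Connecting with `format: nil` can end up stereo, which (as far
// as I understand) bypasses the 3D panning entirely.
let outputRate = audioEngine.outputNode.outputFormat(forBus: 0).sampleRate
let monoFormat = AVAudioFormat(standardFormatWithSampleRate: outputRate, channels: 1)

audioEngine.connect(playerNode, to: environmentNode, format: monoFormat)
audioEngine.connect(environmentNode, to: audioEngine.mainMixerNode, format: nil)

// AVAudioPlayerNode adopts AVAudioMixing, so the rendering algorithm is set here.
playerNode.renderingAlgorithm = .HRTFHQ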
The second, more complex approach, using PHASE, did better. I've made an example app that lets the user move an audio source in 3D space. I added reverb and sliders that move the source up to 10 meters in each direction from the listener, but the audio only really seems to change left to right (along the x axis). Again, I suspect this is related to the app not being recognized as playing spatial audio.
// Crucial class variables:
class PHASEAudioController: ObservableObject {
    private var soundSourcePosition: simd_float4x4 = matrix_identity_float4x4
    private var audioAsset: PHASESoundAsset!
    private let phaseEngine: PHASEEngine
    private let params = PHASEMixerParameters()
    private var soundSource: PHASESource
    private var phaseListener: PHASEListener!
    private var soundEventAsset: PHASESoundEventNodeAsset?

    // Initialization of PHASE
    init() {
        do {
            let session = AVAudioSession.sharedInstance()
            try session.setCategory(.playback, mode: .default, options: [])
            try session.setActive(true)
        } catch {
            print("Failed to configure AVAudioSession: \(error.localizedDescription)")
        }

        // Init PHASE engine
        phaseEngine = PHASEEngine(updateMode: .automatic)
        phaseEngine.defaultReverbPreset = .mediumHall
        phaseEngine.outputSpatializationMode = .automatic // nothing helps

        // Set listener position to (0,0,0) in world space
        let origin: simd_float4x4 = matrix_identity_float4x4
        phaseListener = PHASEListener(engine: phaseEngine)
        phaseListener.transform = origin
        phaseListener.automaticHeadTrackingFlags = .orientation
        try! self.phaseEngine.rootObject.addChild(self.phaseListener)

        do {
            try self.phaseEngine.start()
        } catch {
            print("Could not start PHASE engine")
        }

        audioAsset = loadAudioAsset()

        // Create sound source (sphere)
        soundSourcePosition.translate(z: 3.0)
        let sphere = MDLMesh.newEllipsoid(withRadii: vector_float3(0.1, 0.1, 0.1), radialSegments: 14, verticalSegments: 14, geometryType: MDLGeometryType.triangles, inwardNormals: false, hemisphere: false, allocator: nil)
        let shape = PHASEShape(engine: phaseEngine, mesh: sphere)
        soundSource = PHASESource(engine: phaseEngine, shapes: [shape])
        soundSource.transform = soundSourcePosition
        print(soundSourcePosition)
        do {
            try phaseEngine.rootObject.addChild(soundSource)
        } catch {
            print("Failed to add a child object to the scene.")
        }

        let simpleModel = PHASEGeometricSpreadingDistanceModelParameters()
        simpleModel.rolloffFactor = rolloffFactor
        soundPipeline.distanceModelParameters = simpleModel

        let samplerNode = PHASESamplerNodeDefinition(
            soundAssetIdentifier: audioAsset.identifier,
            mixerDefinition: soundPipeline,
            identifier: audioAsset.identifier + "_SamplerNode")
        samplerNode.playbackMode = .looping

        do {
            soundEventAsset = try phaseEngine.assetRegistry.registerSoundEventAsset(
                rootNode: samplerNode,
                identifier: audioAsset.identifier + "_SoundEventAsset")
        } catch {
            print("Failed to register a sound event asset.")
            soundEventAsset = nil
        }
    }

    // Playing sound
    func playSound() {
        // Fire a new sound event with the currently set properties
        guard let soundEventAsset else { return }
        params.addSpatialMixerParameters(
            identifier: soundPipeline.identifier,
            source: soundSource,
            listener: phaseListener)
        let soundEvent = try! PHASESoundEvent(engine: phaseEngine,
                                              assetIdentifier: soundEventAsset.identifier,
                                              mixerParameters: params)
        soundEvent.start(completion: nil)
    }
    ...
}
Also worth mentioning: I only have a Personal Team (free) developer account.
Hi all,
I have been quite stumped on this behavior for a little while now, so I thought it best to share here and see if someone more experienced with AVAudioEngine / AVAudioSession can weigh in.
Right now I have an AVAudioEngine that I am using for voice chat, handing it buffers to play. This works perfectly until route changes start to occur, which cause the AVAudioEngine to reset itself, which in turn stops all players attached to the engine.
Once an AVAudioPlayerNode gets stopped because of this (but also at any other time), all samples that were scheduled to be played get purged. What confuses me is that the completion handler gets called every time, regardless of whether the sound was actually played.
Is there a reliable way to know if a sample needs to be rescheduled after a player has been reset?
I am not quite sure in my case what my observer of AVAudioEngineConfigurationChange needs to be doing, as this engine only handles output. All input is through a separate engine for simplicity.
Currently I am storing a queue of samples as they get sent to the AVAudioPlayerNode for playback, and in the completion handler I check whether the player isPlaying. If it is playing, I assume the sound was actually played; if not, I leave it in the queue and assume that an observer of the route change or the configuration change will notice there are samples in the queue and reschedule them.
Thanks for any feedback!
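For what it's worth, here is the rough shape of what I'm converging on (a simplified sketch, single player, names illustrative): keep my own pending queue, schedule with the .dataPlayedBack completion callback type, and on a configuration change restart the engine and reschedule whatever is still queued. Whether the completion handler can be trusted here is exactly the open question above.
import AVFoundation

// Sketch only: my own bookkeeping around AVAudioPlayerNode, not a verified
// solution to the reset/purge behavior described above.
final class OutputPlayer {
    let engine = AVAudioEngine()
    let player = AVAudioPlayerNode()
    private var pending: [AVAudioPCMBuffer] = []   // my own queue, not AVFoundation's
    private var configObserver: Any?

    init(format: AVAudioFormat) {
        engine.attach(player)
        engine.connect(player, to: engine.mainMixerNode, format: format)

        configObserver = NotificationCenter.default.addObserver(
            forName: .AVAudioEngineConfigurationChange,
            object: engine,
            queue: .main
        ) { [weak self] _ in
            self?.handleConfigurationChange()
        }
    }

    func enqueue(_ buffer: AVAudioPCMBuffer) {
        pending.append(buffer)
        // .dataPlayedBack should mean "rendered", not merely "consumed";
        // how it behaves across an engine reset is what I'm still verifying.
        player.scheduleBuffer(buffer, completionCallbackType: .dataPlayedBack) { [weak self] _ in
            DispatchQueue.main.async {
                guard let self, !self.pending.isEmpty else { return }
                self.pending.removeFirst()
            }
        }
    }

    private func handleConfigurationChange() {
        // The engine tore down its graph; restart and re-feed what never played.
        try? engine.start()
        for buffer in pending {
            player.scheduleBuffer(buffer, completionCallbackType: .dataPlayedBack) { _ in }
        }
        player.play()
    }
}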
I followed this guide, and added com.apple.developer.spatial-audio.profile-access as an entitlement to the app (via the + Capability button – Spatial Audio Profile). I have an audio graph that outputs to AVAudioEngine.
However, the Xcode Cloud build ended up with this error:
Invalid Code Signing Entitlements. Your application bundle's signature contains code signing entitlements that are not supported on iOS. Specifically, key 'com.apple.developer.spatial-audio.profile-access' in 'Payload/…' is not supported.
The guide says it's available on iOS. Does that mean it isn't available on iOS 17? If so, how can I provide a fallback for iOS 17?
Hello!
I used Apple's CAPlayThrough example code that pipes audio between devices. It uses AudioUnit callbacks to pipe the input to an output device, and I built a system equalizer with it. However, users reported it stopped working in macOS 15. I am getting the error
HALPlugIn.cpp:552 HALPlugIn::DeviceGetCurrentTime: got an error from the plug-in routine, Error: 1937010544 (stop)
for the output device, and no sound comes out of the speakers. The error only occurs when using a virtual device as input, not when using the microphone. At first I thought the problem was in my loopback driver, but it also does not work with other loopback drivers like BlackHole.
STEPS TO REPRODUCE
Install a virtual device, for example "brew install blackhole-2ch", and run the CAPlayThrough example code (you need to add a microphone usage description to Info.plist). Then set your system audio output to the virtual device, select that device as input in CAPlayThrough, and hit Start. You should see the error in the console.
My question:
What changed in macOS 15 that could cause this? Could it be related to the new permission handling?
Hi all,
I am developing a digital signal processing application using AudioToolbox to capture audio from an audio loopback device (BlackHole).
Environment:
macOS Sonoma 14.4.1
Xcode 15.4
QuickTime Player 10.5 (I also tested with JRiver Media Center)
BlackHole 2ch and 16ch
Problem: All audio samples received are zero.
Steps to recreate:
Set the Mac's sound output to BlackHole 2ch.
Set the Mac's sound input to BlackHole 2ch.
Authorize Xcode to access the microphone.
In Audio MIDI Setup, enable "Use this device for sound input" and "Use this device for sound output", and set the volume of both to 1.0.
Play a 44.1 kHz, 16-bit signed integer stereo FLAC file using QuickTime.
Start the C++ application. Key details of my code are below...
AudioStreamBasicDescription asbd = { 0 };
asbd.mFormatID = kAudioFormatLinearPCM;
asbd.mFormatFlags = kLinearPCMFormatFlagIsFloat | kLinearPCMFormatFlagIsPacked;
asbd.mSampleRate = 48000;
asbd.mBitsPerChannel = 32;
asbd.mBytesPerFrame = 8;
asbd.mChannelsPerFrame = 2;
asbd.mBytesPerPacket = asbd.mBytesPerFrame;
asbd.mFramesPerPacket = 1;

status = AudioQueueNewInput(&asbd,
                            read_audio_callback,
                            &userdata,
                            NULL,
                            NULL,
                            0,
                            &queue_ref);

for (uint8_t b = 0; b < num_buffers; b++) {
    AudioQueueBufferRef buf_ref;
    status = AudioQueueAllocateBuffer(queue_ref, audio_buf_size, &buf_ref);
    printf("Allocate buffer status: %d length %d\n", status, buf_ref->mAudioDataByteSize);
    status = AudioQueueEnqueueBuffer(queue_ref, buf_ref, 0, NULL);
    printf("Initial Enqueue Buffer status: %d\n", status);
}

status = AudioQueueStart(queue_ref, NULL);
Here is my callback:
void read_audio_callback(void *ptr, AudioQueueRef queue_ref, AudioQueueBufferRef buf_ref, const AudioTimeStamp *ts_not_used, uint32_t num_packets, const AudioStreamPacketDescription *aspd_not_used) {
    if (num_packets > 0) {
        uint32_t bytesize = buf_ref->mAudioDataByteSize;
        float *sample_buf_float = (float *)buf_ref->mAudioData;
        float data[bytesize / 4];
        memcpy(data, sample_buf_float, bytesize);

        OSStatus status = AudioQueueEnqueueBuffer(queue_ref, buf_ref, 0, NULL);
        printf("Enqueue buffer status: %d\n", status);
        printf("Buffer length %d Packets received %d\n", bytesize, num_packets);

        for (int j = 0; j < bytesize / 4; j++) {
            printf("%f", data[j]);
        }
    }
    printf("read_audio_callback called!\n");
}
All calls to the Apple audio functions return a status of 0.
The samples in the buffer are all 0.0. Why would this be the case?
Also, my callback is called even when playback is stopped, and num_packets is always > 0.
Appreciate any help.
Thanks in advance,
Geoff.
Hi,
I have been working on a project that lets users listen to their favorite music through a streaming service, which so far has been Spotify. The app has a programmable 3D/2D interface and can connect to devices in your home and have them react to music. As of September 2024, Spotify decommissioned their Audio Analysis API. I have seen other posts mention playing Apple Music through AVFoundation, which would break DRM and so isn't supported. However, the Spotify Audio Analysis API did not allow for a full frequency reconstruction: it is entirely temporal data on beats, kicks, loudness, and timbre changes, which are themselves operators on the spectral data from the FFT. It would be very useful for the developer community to get this capability, and it would probably make Apple Music more popular among developers and the people who use their apps.
Would love to hear your thoughts about this and Happy New Year!
I'm trying to load Music Kit on the server with solid js. I can confirm that my implementation has been sufficient to return authentication tokens and for MusicKit.isAuthorized to return true. My issue is that if I reload the page, it only succeeds intermittently (perhaps 25% of the time?). My question is - what is wrong with my implementation? Removing the async keyword ensures it loads every time but playing and queuing music no longer works. I'm currently assuming this is an SSR issue but the docs haven't explicitly specified this isn't possible.
I have the following boilerplate:
export default createHandler(() => (
  <StartServer
    document={({ assets, children, scripts }) => {
      return (
        <html lang="en">
          <head>
            <meta name="apple-music-developer-token" content={authResult.token} />
            <meta name="apple-music-app-name" content="app name" />
            <meta name="apple-music-app-build" content="1978.4.1" />
            {assets}
            <script
              src="https://js-cdn.music.apple.com/musickit/v3/musickit.js"
              async
            />
          </head>
          <body>
            <div id="app">{children}</div>
            {scripts}
          </body>
        </html>
      )
    }}
  />
))
When I first load my app, I'll encounter:
musickit.js:13 Uncaught TypeError: Cannot read properties of undefined (reading 'node')
at musickit.js:13:10194
at musickit.js:13:140
at musickit.js:13:209
The intermittence signals an issue relating to the async keyword. An expansion on this issue can be found here.
private var audioEngine = AVAudioEngine()
private var inputNode: AVAudioInputNode!
func startAnalyzing() {
    inputNode = audioEngine.inputNode
    let recordingFormat = inputNode.outputFormat(forBus: 0)
    let hardwareSampleRate = recordingSession.sampleRate
    inputNode.removeTap(onBus: 0)

    if recordingFormat.sampleRate != hardwareSampleRate {
        print("。")
        let newFormat = AVAudioFormat(commonFormat: recordingFormat.commonFormat,
                                      sampleRate: hardwareSampleRate,
                                      channels: recordingFormat.channelCount,
                                      interleaved: recordingFormat.isInterleaved)
        inputNode.installTap(onBus: 0, bufferSize: 1024, format: newFormat) { buffer, time in
            self.processAudioBuffer(buffer, time: time)
        }
    } else {
        inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { buffer, time in
            self.processAudioBuffer(buffer, time: time)
        }
    }

    do {
        audioEngine.prepare()
        try audioEngine.start()
    } catch {
        print(": \(error)")
    }
}
I send the app to the background and then call startAnalyzing(), which reports the error below, even though background recording permissions are configured.
error:
[10429:570139] [aurioc] AURemoteIO.cpp:1668 AUIOClient_StartIO failed (561145187)
[10429:570139] [avae] AVAEInternal.h:109 [AVAudioEngineGraph.mm:1545:Start: (err = PerformCommand(*ioNode, kAUStartIO, NULL, 0)): error 561145187
Audio engine couldn't start.
Is starting recording from the background not allowed?
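For reference, 561145187 is the FourCC '!rec' (AVAudioSession.ErrorCode.cannotStartRecording), which, as far as I can tell, is what you get when a recording session cannot be started from the background. A hedged sketch of the pattern that is generally expected to work, assuming the Audio background mode is enabled: configure and activate a record-capable session (and install the tap / start the engine) while the app is still in the foreground, so it can keep running after backgrounding, rather than starting it from the background.
import AVFoundation

// Hedged sketch, not a verified fix: activate a record-capable session in the
// foreground. With the Audio background mode, an already-running session can
// continue recording in the background; starting one from the background is
// what typically fails with '!rec' (561145187).
func prepareRecordingSession() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord,
                            mode: .default,
                            options: [.allowBluetooth])
    try session.setActive(true)
}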