Dive into the technical aspects of audio on your device, including codecs, format support, and customization options.

Audio Documentation

Post · Replies · Boosts · Views · Activity

Issue with AVAudioEngine and AVAudioSession after Interruption and Background Transition - 561145187 error code
Description: I am developing a recording-only application that supports background recording using AVAudioEngine. The app segments the recording into 60-second files for further processing; for example, a 10-minute recording results in ten 60-second files.

Problem: The application functions as expected in the background. After the app receives an interruption (such as a phone call) and the interruption ends, I can successfully restart the recording. The problem arises when the app then transitions to the background: it fails to restart the recording. Specifically, after ending the call and sending the app to the background, the app encounters an error and is unable to restart AVAudioSession and AVAudioEngine. The only resolution is to close and relaunch the app, which is not ideal for the user experience.

Steps to Reproduce:
1. Start recording using AVAudioEngine.
2. The app records and saves 60-second segments.
3. Receive an interruption (e.g., an incoming phone call).
4. End the call.
5. Transition the app to the background.
6. Transition the app to the foreground; the session is activated again.
7. Attempt to restart the recording.

Expected Behavior: The app should resume recording seamlessly after the interruption and background transition.

Actual Behavior: The app fails to restart AVAudioSession and AVAudioEngine, resulting in a persistent error. The recording cannot be resumed without closing and reopening the app.

How I’m Starting the Recording:

Configuration:

    internal func setAudioSessionCategory() {
        do {
            try audioSession.setCategory(
                .playAndRecord,
                mode: .default,
                options: [.defaultToSpeaker, .mixWithOthers, .allowBluetooth]
            )
        } catch {
            debugPrint(error)
        }
    }

    internal func setAudioSessionActivation() {
        if UIApplication.shared.applicationState == .active {
            do {
                try audioSession.setPrefersNoInterruptionsFromSystemAlerts(true)
                try audioSession.setActive(true, options: .notifyOthersOnDeactivation)
                if audioSession.isInputGainSettable {
                    try audioSession.setInputGain(1.0)
                }
                try audioSession.setPreferredIOBufferDuration(0.01)
                try setBuiltInPreferredInput()
            } catch {
                debugPrint(error)
            }
        }
    }

Starting AVAudioEngine:

    internal func setupEngine() {
        if callObserver.onCall() {
            return
        }
        inputNode = audioEngine.inputNode
        audioEngine.attach(audioMixer)
        audioEngine.connect(inputNode, to: audioMixer, format: AVAudioFormat.validInputAudioFormat(inputNode))
    }

    internal func beginRecordingEngine() {
        audioMixer.removeTap(onBus: 0)
        audioMixer.installTap(onBus: 0, bufferSize: 1024, format: AVAudioFormat.validInputAudioFormat(inputNode)) { [weak self] buffer, _ in
            guard let self = self, let file = self.audioFile else { return }
            write(file, buffer: buffer)
        }
        audioEngine.prepare()
        do {
            try audioEngine.start()
            recordingTimer = Timer.scheduledTimer(withTimeInterval: recordingInterval, repeats: true) { [weak self] _ in
                self?.handleRecordingInterval()
            }
        } catch {
            debugPrint(error)
        }
    }

On the try audioEngine.start() call, I receive error code 561145187 in the catch block.

Logs/Error Messages:
• Error code: 561145187

Request: I would appreciate any guidance or solutions to ensure the app can resume recording after interruptions and background transitions without requiring a restart. Thank you for your assistance.
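For context, 561145187 is the four-character code '!rec' (AVAudioSessionErrorCodeCannotStartRecording). Below is a minimal sketch of one commonly used interruption-handling pattern, offered as an illustration rather than a confirmed fix for this report; it assumes the post's `audioSession` and `audioEngine` properties and retries session activation from the interruption and foreground notifications rather than only at launch.

    import AVFoundation
    import UIKit

    // Sketch only: `audioSession` and `audioEngine` are assumed to be the
    // properties named in the post above.
    func installInterruptionHandling() {
        NotificationCenter.default.addObserver(
            forName: AVAudioSession.interruptionNotification,
            object: audioSession,
            queue: .main
        ) { note in
            guard let info = note.userInfo,
                  let rawType = info[AVAudioSessionInterruptionTypeKey] as? UInt,
                  let type = AVAudioSession.InterruptionType(rawValue: rawType) else { return }
            switch type {
            case .began:
                audioEngine.pause()                     // the system has taken the session
            case .ended:
                let rawOptions = info[AVAudioSessionInterruptionOptionKey] as? UInt ?? 0
                let options = AVAudioSession.InterruptionOptions(rawValue: rawOptions)
                if options.contains(.shouldResume) {
                    restartRecording()
                }
            @unknown default:
                break
            }
        }

        // Re-activation can fail while the call UI is still winding down,
        // so also retry when the app returns to the foreground.
        NotificationCenter.default.addObserver(
            forName: UIApplication.didBecomeActiveNotification,
            object: nil,
            queue: .main
        ) { _ in
            restartRecording()
        }
    }

    func restartRecording() {
        do {
            try audioSession.setActive(true, options: .notifyOthersOnDeactivation)
            if !audioEngine.isRunning {
                try audioEngine.start()
            }
        } catch {
            debugPrint(error)                           // 561145187 ('!rec') lands here if activation is refused
        }
    }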
2 replies · 0 boosts · 631 views · Jul ’24
ApplicationMusicPlayer Audio Session Issue When Switching to AVAudioEngine in Background
Hi! I'm developing a music player app that switches between ApplicationMusicPlayer and AVAudioEngine. I'm facing an issue when switching from playback via ApplicationMusicPlayer to AVAudioEngine while the app is in the background. Based on testing, the issue seems to be that the app cannot take audio focus in the background, which produces the error AVAudioSessionErrorCodeCannotInterruptOthers. I would like to confirm whether ApplicationMusicPlayer has its own audio focus, separate from the app's own audio focus. If it does, is there anything I can do to ensure that ApplicationMusicPlayer returns focus to the app? (I notice that the issue does not occur when moving playback in the other direction, from AVAudioEngine to ApplicationMusicPlayer; I'm not sure why the opposite does not work.)
1 reply · 0 boosts · 397 views · Aug ’24
AVAudioPCMBuffer Memory Management
I’m using AVAudioEngine to get a stream of AVAudioPCMBuffers from the device’s microphone using the usual installTap(onBus:) setup. To distribute the audio stream to other parts of the program, I’m sending the buffers to a Combine publisher similar to the following:

    private let publisher = PassthroughSubject<AVAudioPCMBuffer, Never>()

I’m starting to suspect I have some kind of concurrency or memory management issue with the buffers, because when consuming the buffers elsewhere I’m getting a range of crashes that suggest some internal pointer in a buffer is NULL (specifically, I’m seeing crashes in vDSP.convertElements(of:to:) when I try to read samples from the buffer). These crashes happen in production and are fairly rare — I can’t reproduce them locally. I never modify the audio buffers, only read them for analysis. My questions are: should it be possible to put AVAudioPCMBuffers into a Combine pipeline? Does the AVAudioPCMBuffer class not retain/release the underlying AudioBufferList’s memory the way I’m assuming? Is this a fundamentally flawed approach?
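One frequently suggested mitigation for tap buffers that are consumed asynchronously is to deep-copy each AVAudioPCMBuffer before handing it to the publisher, since the engine may reuse the tap's buffer memory. The sketch below is an illustration under assumptions (a float PCM tap format, and `inputNode` from the usual AVAudioEngine setup), not a confirmed diagnosis of the crash above.

    import AVFoundation
    import Combine

    let publisher = PassthroughSubject<AVAudioPCMBuffer, Never>()

    // Deep-copy a buffer so downstream consumers never touch memory the
    // engine may reuse. Assumes a float (non-interleaved) PCM format.
    func copyBuffer(_ buffer: AVAudioPCMBuffer) -> AVAudioPCMBuffer? {
        guard let copy = AVAudioPCMBuffer(pcmFormat: buffer.format,
                                          frameCapacity: buffer.frameLength) else { return nil }
        copy.frameLength = buffer.frameLength
        if let src = buffer.floatChannelData, let dst = copy.floatChannelData {
            for channel in 0..<Int(buffer.format.channelCount) {
                dst[channel].update(from: src[channel], count: Int(buffer.frameLength))
            }
        }
        return copy
    }

    // In the tap (inputNode from the usual AVAudioEngine setup):
    inputNode.installTap(onBus: 0, bufferSize: 1024, format: nil) { buffer, _ in
        if let copy = copyBuffer(buffer) {
            publisher.send(copy)
        }
    }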
1 reply · 1 boost · 1k views · Oct ’21
How to load Media Player album artwork and release the memory afterward
Media Player album artwork images do not release memory after loading, so memory accumulates and eventually impacts performance and causes crashes.

Steps To Reproduce:
1. Download the example project and run it on a device.
2. Grant access to a library of 100+ albums.
3. Scroll through albums on any tab screen.
4. Observe the steady memory increase in the Xcode debug navigator and the crash as you scroll.
5. Observe the same memory accumulation on other screens that use different methods to get album artwork images.

Observations: Various methods of obtaining album artwork insufficiently release memory and impact app performance.
1. Artwork Image sometimes releases memory if the images are small and you scroll slowly, but the app will still accumulate memory, drop frames when scrolling, and eventually crash.
2. Value For Property behaves similarly to Artwork Image, even when using the implicit size of the artwork bounds.
3. The Image From Disk problem seems related to the perform() method, since you can remove UIImage and memory still accumulates.

All three methods result in higher retain counts than expected for artwork objects, which could be preventing memory from releasing.
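One mitigation that is sometimes suggested for artwork memory growth, offered here only as a hedged sketch rather than a confirmed fix for the leak described above, is to request the image at the display size only and wrap the conversion in an autoreleasepool so transient objects can be released while scrolling:

    import MediaPlayer
    import UIKit

    // Sketch: request artwork at the size it will be displayed, inside an
    // autoreleasepool so intermediate objects are drained per call.
    func thumbnail(for item: MPMediaItem, side: CGFloat) -> UIImage? {
        autoreleasepool {
            item.artwork?.image(at: CGSize(width: side, height: side))
        }
    }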
0 replies · 0 boosts · 358 views · Aug ’24
iOS Sound Recognition: to what extent can developers access Apple's built-in sound recognition?
Hi, I am currently developing an app whose core functionality relies on detecting user laughter in the background. In our early stages we noticed Apple's built-in Sound Recognition feature. At its core, I am guessing that Sound Recognition requires permission from the user to access the microphone 24/7. Currently, using the conventional avenue of background audio recording, a yellow indicator is shown at the top of the iPhone screen to indicate recording; this is not the case for Sound Recognition, where no indicator appears. If all sound processing/recognition is kept on-device, is there any way to avoid the yellow dot and achieve laughter detection in a way similar to how Apple's Sound Recognition does it? From the Sound Recognition settings interface accessible to the user in the Settings app, the only detectable "people" sounds are baby crying, coughing, and shouting. Is it also possible to add laughter to this list somehow? Thank you in advance.
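For the on-device classification half of this question, the SoundAnalysis framework exposes the system's built-in sound classifier to third-party apps (iOS 15 and later). The sketch below feeds a normal AVAudioEngine microphone tap into SNClassifySoundRequest; because it uses the app's own microphone input, it does not by itself answer the question about the recording indicator, and the exact label set the classifier reports should be checked at runtime rather than assumed.

    import AVFoundation
    import SoundAnalysis

    // Minimal sketch: run the built-in sound classifier on a live microphone tap.
    final class ClassificationObserver: NSObject, SNResultsObserving {
        func request(_ request: SNRequest, didProduce result: SNResult) {
            guard let result = result as? SNClassificationResult else { return }
            // Inspect the identifiers the classifier actually reports rather
            // than assuming a specific "laughter" label exists.
            if let top = result.classifications.first {
                print(top.identifier, top.confidence)
            }
        }
    }

    let engine = AVAudioEngine()
    let observer = ClassificationObserver()

    func startClassification() throws {
        let input = engine.inputNode
        let format = input.outputFormat(forBus: 0)
        let analyzer = SNAudioStreamAnalyzer(format: format)
        let request = try SNClassifySoundRequest(classifierIdentifier: .version1)
        try analyzer.add(request, withObserver: observer)

        // The analyzer is captured by the tap closure, which keeps it alive.
        input.installTap(onBus: 0, bufferSize: 8192, format: format) { buffer, when in
            analyzer.analyze(buffer, atAudioFramePosition: when.sampleTime)
        }
        engine.prepare()
        try engine.start()
    }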
2 replies · 0 boosts · 461 views · Aug ’24
MPMusicPlayerController's nowPlayingItem not changing the song
MPMusicPlayerController's nowPlayingItem no longer seems to be able to change a song. The code used to work, but seems to be broken on iOS 16, 17, and now the iOS 18 beta. When newSong() is triggered, the song restarts but it does not change songs. Instead I get the following error:

    Failed to set now playing item error=<MPMusicPlayerControllerErrorDomain.5 "Unable to play item <MPConcreteMediaItem: 0x9e9f0ef70> 206357861099970620" {}>

The documentation seems to indicate I’m doing things correctly.

    class MusicPlayer {
        var songTwo: MPMediaItem?
        let player = MPMusicPlayerController.applicationMusicPlayer

        func start() async {
            await MPMediaLibrary.requestAuthorization()
            let myPlaylistsQuery = MPMediaQuery.playlists()
            let playlists = myPlaylistsQuery.collections!.filter { $0.items.count > 2 }
            let playlist = playlists.first!
            let songOne = playlist.items.first!
            songTwo = playlist.items[1]
            player.setQueue(with: playlist)
            play(songOne)
        }

        func newSong() {
            guard let songTwo else { return }
            play(songTwo)
        }

        private func play(_ song: MPMediaItem) {
            player.stop()
            player.nowPlayingItem = song
            player.prepareToPlay()
            player.play()
        }
    }
2 replies · 1 boost · 495 views · Jul ’24
How do I output different sounds to headphones and speakers while simultaneously recording, all without using AVAudioSession.Category.multiroute?
I need to find a way to allow recording from the mic while outputting two different sound streams to two different devices (speaker and headphones). I've done a fair bit of reading around using AVAudioSession.Category.multiroute but haven't found any modern examples. @theanalogkid posted a nice example using Obj-C nine years ago, but others have noted that the code isn't readily translatable to Swift. To make matters worse, this is one of the very few examples of how to properly use multirouting. The official documentation is lacking, to say the least, and the WWDC 2012 session is, well, old enough to attend middle school and be a Taylor Swift fan, but definitely not in Swift.

The few relevant forum posts here are spread over this middle schooler's life span and are likely outdated, with most having no responses other than the poster's own plaintive echo. They don't paint a pretty picture of .multiroute's health: a recent poster noted that volume buttons don't work in this mode and, after contacting DTS, found that there's no fix; another found that it just doesn't work for certain devices; and so on. Audio is giving me enough of a headache, so I'd like to avoid slogging through this if possible. .multiroute feels like the developer mode of AVAudioSession, but without documentation.

tl;dr — Without using .multiroute, is there a way to allow an app to output to two different devices while simultaneously recording audio? If .multiroute is the only way to achieve this, can someone give me a quick rundown of how this category works?
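For reference only, and not an endorsement of the approach: setting the category itself is unremarkable, and the real work is deciding which hardware channels each stream should go to. A minimal sketch of the setup, plus enumeration of the output ports and channels the route exposes, is below; whether and how well this behaves on a given device is exactly the uncertainty described above.

    import AVFoundation

    // Sketch: configure the multi-route category and list the output
    // ports/channels it exposes. This illustrates the setup only; routing a
    // particular stream to a particular output is done by targeting the
    // relevant hardware channels from your playback graph.
    func configureMultiRoute() throws {
        let session = AVAudioSession.sharedInstance()
        try session.setCategory(.multiRoute, mode: .default, options: [])
        try session.setActive(true)

        for output in session.currentRoute.outputs {
            print(output.portType, output.portName, output.channels ?? [])
        }
    }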
1 reply · 0 boosts · 469 views · Aug ’24
Configuring Apple Vision Pro's microphones to effectively pick up other speakers' voices
I am developing a visionOS app that captions speech in real environments. Currently, I am using Apple's built-in speech recognizer. However, when I was testing the app with a Vision Pro, the device seemed to pick up only the user's voice (in other words, the voice of the wearer of the Vision Pro). For example, when the speech recognition task is running and another person in front of me is talking, the system does not pick up the speech well.

I tried to set the AVAudioSession to be equally sensitive in all directions:

    private func configureAudioSession() {
        do {
            try audioSession.setCategory(.record, mode: .measurement)
            try audioSession.setActive(true)
            if #available(visionOS 1.0, *) {
                let availableDataSources = audioSession.availableInputs?.first?.dataSources
                if let omniDirectionalSource = availableDataSources?.first(where: { $0.preferredPolarPattern == .omnidirectional }) {
                    try audioSession.setInputDataSource(omniDirectionalSource)
                }
            }
        } catch {
            print("Failed to set up audio session: \(error)")
        }
    }

And here is how I set up the speech recognition and configure the microphone input:

    private func startSpeechRecognition(completion: @escaping (String) -> Void) {
        do {
            // Cancel the previous task if it's running.
            if let recognitionTask = recognitionTask {
                recognitionTask.cancel()
                self.recognitionTask = nil
            }

            // The AudioSession is already active, creating input node.
            let inputNode = audioEngine.inputNode
            try inputNode.setVoiceProcessingEnabled(false)

            // Create and configure the speech recognition request
            recognitionRequest = SFSpeechAudioBufferRecognitionRequest()
            guard let recognitionRequest = recognitionRequest else { fatalError("Unable to create a recognition request") }
            recognitionRequest.shouldReportPartialResults = true

            // Keep speech recognition data on device
            if #available(iOS 13, *) {
                recognitionRequest.requiresOnDeviceRecognition = true
            }

            // Create a recognition task for the speech recognition session.
            // Keep a reference to the task so that it can be canceled.
            recognitionTask = speechRecognizer?.recognitionTask(with: recognitionRequest) { result, error in
                // var isFinal = false
                if let result = result {
                    // Update the recognizedText
                    completion(result.bestTranscription.formattedString)
                } else if let error = error {
                    completion("Recognition error: \(error.localizedDescription)")
                }
                if error != nil || result?.isFinal == true {
                    // Stop recognizing speech if there is a problem
                    self.audioEngine.stop()
                    inputNode.removeTap(onBus: 0)
                    self.recognitionRequest = nil
                    self.recognitionTask = nil
                }
            }

            // Configure the microphone input
            let recordingFormat = inputNode.outputFormat(forBus: 0)
            inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { (buffer, when) in
                self.recognitionRequest?.append(buffer)
            }

            audioEngine.prepare()
            try audioEngine.start()
        } catch {
            completion("Audio engine could not start: \(error.localizedDescription)")
        }
    }
0 replies · 0 boosts · 501 views · Jul ’24
iOS 18 Beta 4 re-release CarPlay
When listening to music via CarPlay, the sound quality is very bad and I can't turn down the volume of the music. It behaves as if a voice assistant is active instead of music: when turning the volume down or up, it adjusts the voice assistant volume when it should adjust the music volume. This was previously reported in iOS 18 Beta 2. I have a 2024 Honda Ridgeline Black Edition.
1 reply · 1 boost · 425 views · Jul ’24
Is it possible to listen for the physical mute/unmute ringer state?
I would really like to know if it is possible to get the state of the ringer switch on iOS. In my application I want to replicate Instagram's audio behavior while watching a video: the video plays with sound; when the ringer is switched to mute, the video's audio should be muted too; and when you press the volume up or down button, the audio should be unmuted. I found how to catch the volume up/down buttons with AVAudioSession.sharedInstance().observe(\.outputVolume), but I couldn't find anything that could help me with the ringer state. AVAudioSession.Category can't achieve this effect. There is also a possibility to check the ringer state with the Darwin notify library, like this:

    var token = NOTIFY_TOKEN_INVALID
    notify_register_dispatch(
        "com.apple.springboard.ringerstate",
        &token,
        .main
    ) { token in
        var state: UInt64 = 0
        notify_get_state(token, &state)
        print("Changed to", state == 1 ? "ON" : "OFF")
    }

but I'm not sure that this won't lead to the application being rejected; I don't know whether it counts as private API usage or not. I would be glad of any advice and suggestions. Thanks.
1 reply · 2 boosts · 370 views · Jul ’24
Setting Audio Input node for AVAudioEngine causes outside audio to stop
I'm building an app that will allow users to record voice notes. That functionality is all working great; I'm now trying to implement changes to the audio session to manage possible audio streams from other apps. I want it so that if there is audio playing from a different app and the user opens my app, that audio keeps playing. When we start recording, any third-party app audio should stop, and it can then resume again when we stop recording.

This is my main audio setup code:

    private var audioEngine: AVAudioEngine!
    private var inputNode: AVAudioInputNode!

    func setupAudioEngine() {
        audioEngine = AVAudioEngine()
        inputNode = audioEngine.inputNode
        audioPlayerNode = AVAudioPlayerNode()
        audioEngine.attach(audioPlayerNode)

        let format = AVAudioFormat(standardFormatWithSampleRate: AUDIO_SESSION_SAMPLE_RATE, channels: 1)
        audioEngine.connect(audioPlayerNode, to: audioEngine.mainMixerNode, format: format)
    }

    private func setupAudioSession() {
        let audioSession = AVAudioSession.sharedInstance()
        do {
            try audioSession.setCategory(.playAndRecord, mode: .default, options: [.defaultToSpeaker, .allowBluetooth])
            try audioSession.setPreferredSampleRate(AUDIO_SESSION_SAMPLE_RATE)
            try audioSession.setPreferredIOBufferDuration(0.005) // 5ms buffer for lower latency
            try audioSession.setActive(true)

            // Add observers
            setupInterruptionObserver()
        } catch {
            audioErrorMessage = "Failed to set up audio session: \(error)"
        }
    }

This is all called on app startup so we're ready to record whenever the user presses the record button. However, currently when this happens, any outside audio stops playing. I isolated the issue to this line:

    inputNode = audioEngine.inputNode

When that's commented out, the audio keeps playing — but I obviously need it for the recording functionality. Is this a bug or expected behavior?
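One arrangement that is often suggested for this situation, sketched below purely as an illustration and not a confirmed answer to the question above, is to keep the category mixable and defer both session activation and the first access to `inputNode` until the user actually starts recording, so other apps keep playing until then:

    import AVFoundation

    // Sketch only: names and structure are illustrative, not the poster's code.
    final class DeferredRecorder {
        private var audioEngine: AVAudioEngine?

        func configureCategory() throws {
            try AVAudioSession.sharedInstance().setCategory(
                .playAndRecord,
                mode: .default,
                options: [.defaultToSpeaker, .allowBluetooth, .mixWithOthers]
            )
        }

        func startRecording(writeBuffer: @escaping (AVAudioPCMBuffer) -> Void) throws {
            try AVAudioSession.sharedInstance().setActive(true)

            let engine = AVAudioEngine()
            let input = engine.inputNode          // first touched here, not at app launch
            let format = input.outputFormat(forBus: 0)
            input.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, _ in
                writeBuffer(buffer)
            }
            try engine.start()
            audioEngine = engine
        }

        func stopRecording() {
            audioEngine?.inputNode.removeTap(onBus: 0)
            audioEngine?.stop()
            audioEngine = nil
            try? AVAudioSession.sharedInstance().setActive(false, options: .notifyOthersOnDeactivation)
        }
    }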
0 replies · 0 boosts · 441 views · Jul ’24
Issues with volume
I’m having issues with volume after installing iOS 18 beta 4. The volume control in Control Centre is turned up to full volume and is disabled. It only becomes enabled after playing music; as soon as I pause the music it is disabled again and my iPhone goes mute. I'm also having an issue when playing music on AirPods: as soon as I turn my screen on, the music pauses. It happens every time I do this.
2 replies · 1 boost · 423 views · Jul ’24
Implementing Multi-Channel Audio Recording on iOS with Built-In and External Mics
Hi there community,

First and foremost, a big thank you to everyone who takes the time to read this.

TL;DR: How, if it is even possible, can I record multiple audio streams simultaneously in an iOS application (iPad/iPhone)?

I'm working on a recorder for the iPad to gather data for a machine learning project focused on speech recognition. Our goal is to capture extensive speech data, which requires recording from multiple microphones. Specifically, I need to record from all mics connected to our Scarlett 4i4 audio interface and, most importantly, also record from the built-in mic on the iPad or iPhone at the same time.

As a newcomer to Swift development, I initially explored AVAudioRecorder. However, I quickly realized that it only supports one active audio node at a time, making multi-channel recording impossible (perhaps you can prove me wrong — it would make my day). Next, I transitioned to using AVAudioEngine, but encountered the same limitation: I couldn't manage to get input nodes for both the built-in mic and the Scarlett interface channels simultaneously. The application started behaving oddly, often resulting in identical audio data being recorded across all files.

Determined to find a solution, I delved deeper into the Core Audio framework, specifically using Audio Toolbox. My approach involved creating and configuring multiple Audio Units, each corresponding to a different audio input device. Here's a brief overview of my current implementation:

1. Listing available input devices: I used AVAudioSession to enumerate all available input devices.
2. Creating Audio Units: For each device, I created an Audio Unit and attempted to configure it for recording.
3. Setting up callbacks: I set up input and output callbacks to handle the audio processing.

Despite my efforts over the last few days, I haven't had much success. The callbacks for the Audio Units don't seem to be invoked correctly, and I'm struggling to achieve simultaneous multi-channel recording. Below is a snippet of my latest attempt:

    let audioUnitCallback: AURenderCallback = { (
        inRefCon: UnsafeMutableRawPointer,
        ioActionFlags: UnsafeMutablePointer<AudioUnitRenderActionFlags>,
        inTimeStamp: UnsafePointer<AudioTimeStamp>,
        inBusNumber: UInt32,
        inNumberFrames: UInt32,
        ioData: UnsafeMutablePointer<AudioBufferList>?
    ) -> OSStatus in
        guard let ioData = ioData else { return noErr }
        print("Input callback invoked")

        let audioUnit = inRefCon.assumingMemoryBound(to: AudioUnit.self).pointee
        var bufferList = AudioBufferList(
            mNumberBuffers: 1,
            mBuffers: AudioBuffer(
                mNumberChannels: 1,
                mDataByteSize: 0,
                mData: nil
            )
        )
        let status = AudioUnitRender(audioUnit, ioActionFlags, inTimeStamp, inBusNumber, inNumberFrames, &bufferList)
        if status != noErr {
            print("AudioUnitRender failed: \(status)")
            return status
        }

        // Copy rendered data to output buffer
        let buffer = UnsafeMutableAudioBufferListPointer(ioData)[0]
        buffer.mData?.copyMemory(from: bufferList.mBuffers.mData!, byteCount: Int(bufferList.mBuffers.mDataByteSize))
        buffer.mDataByteSize = bufferList.mBuffers.mDataByteSize
        print("Rendered audio data")
        return noErr
    }

    let outputCallback: AURenderCallback = { (
        inRefCon: UnsafeMutableRawPointer,
        ioActionFlags: UnsafeMutablePointer<AudioUnitRenderActionFlags>,
        inTimeStamp: UnsafePointer<AudioTimeStamp>,
        inBusNumber: UInt32,
        inNumberFrames: UInt32,
        ioData: UnsafeMutablePointer<AudioBufferList>?
    ) -> OSStatus in
        guard let ioData = ioData else { return noErr }
        print("Output callback invoked")
        // Process the output data if needed
        return noErr
    }

In essence, I'm stuck and in need of guidance. Has anyone here successfully implemented multi-channel recording on iOS, especially involving both built-in microphones and external audio interfaces? Any shared experiences, insights, or suggestions on how to proceed would be immensely appreciated.

Thank you once again for your time and assistance!
0 replies · 0 boosts · 477 views · Jul ’24
Why do different bundle IDs affect AVAudioSessionInterruptionType notifications?
Some iOS apps, depending on their signature or bundle ID, will receive the AVAudioSessionInterruptionTypeBegan callback when headphones are disconnected but will not receive the AVAudioSessionInterruptionTypeEnded callback. Not all bundle IDs trigger the issue described above. May I ask why different bundle IDs produce this difference, and what settings bound to a bundle ID affect the delivery of AVAudioSessionInterruptionType notifications?
0 replies · 0 boosts · 342 views · Jul ’24
AVSpeechSynthesizer doesn't notifyOthersOnDeactivation
Hello, I am building a new iOS app which uses AVSpeechSynthesizer and should be able to mix audio nicely with audio from other apps. AVSpeechSynthesizer seems to handle setting the AVAudioSession to active on its own, but it does not deactivate the audio session. This leads to issues, namely that other audio sources remain "ducked" after AVSpeechSynthesizer is done speaking.

I have implemented deactivating the audio session myself, which "works", in that it allows other audio sources to become "un-ducked", but it throws this exception each time even though it appears successful:

    Error Domain=NSOSStatusErrorDomain Code=560030580 "Session deactivation failed" UserInfo={NSLocalizedDescription=Session deactivation failed}

It appears to be a bug in how AVSpeechSynthesizer handles activating/deactivating the audio session. Below is a minimal example which illustrates the problem. It has two buttons: one manually deactivates the audio session, which throws the exception but otherwise works; the other leaves audio session management to the AVSpeechSynthesizer but does not "un-duck" other audio. If you play some audio from another app (e.g. Music), you'll see that the button which throws/catches an exception successfully ducks/un-ducks the audio, while the one that does not attempt to deactivate the session ducks but does not un-duck the audio.

    import SwiftUI
    import AVFoundation

    struct ContentView: View {
        let workingSynthesizer = UnduckingSpeechSynthesizer()
        let brokenSynthesizer = BrokenSpeechSynthesizer()

        init() {
            let audioSession = AVAudioSession.sharedInstance()
            do {
                try audioSession.setCategory(.playback, mode: .voicePrompt, options: [.duckOthers])
            } catch {
                print("Setup error info: \(error)")
            }
        }

        var body: some View {
            VStack {
                Button("Works Correctly") {
                    workingSynthesizer.speak(text: "Hello planet")
                }
                Text("-------")
                Button("Does not work") {
                    brokenSynthesizer.speak(text: "Hello planet")
                }
            }
            .padding()
        }
    }

    class UnduckingSpeechSynthesizer: NSObject {
        var synth = AVSpeechSynthesizer()
        let audioSession = AVAudioSession.sharedInstance()

        override init() {
            super.init()
            synth.delegate = self
        }

        func speak(text: String) {
            let utterance = AVSpeechUtterance(string: text)
            synth.speak(utterance)
        }
    }

    extension UnduckingSpeechSynthesizer: AVSpeechSynthesizerDelegate {
        func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer, didFinish utterance: AVSpeechUtterance) {
            do {
                try audioSession.setActive(false, options: .notifyOthersOnDeactivation)
            } catch {
                // Always throws:
                // Error Domain=NSOSStatusErrorDomain Code=560030580 "Session deactivation failed" UserInfo={NSLocalizedDescription=Session deactivation failed}
                print("Deactivate error info: \(error)")
            }
        }
    }

    class BrokenSpeechSynthesizer {
        var synth = AVSpeechSynthesizer()
        let audioSession = AVAudioSession.sharedInstance()

        func speak(text: String) {
            let utterance = AVSpeechUtterance(string: text)
            synth.speak(utterance)
        }
    }

(I have a separate issue where the first speech attempt takes a few seconds, but I don't think it's related.)
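One property worth noting here is AVSpeechSynthesizer's usesApplicationAudioSession (iOS 13+), which tells the synthesizer to speak through the app's shared audio session instead of managing activation itself. The sketch below is a hedged illustration of that configuration, not a confirmed fix for the deactivation error described above:

    import AVFoundation

    // Sketch: let the app, not the synthesizer, own session activation, so
    // ducking/un-ducking can be driven by the app's own setActive calls.
    let synthesizer: AVSpeechSynthesizer = {
        let synth = AVSpeechSynthesizer()
        synth.usesApplicationAudioSession = true
        return synth
    }()

    func speak(_ text: String) throws {
        let session = AVAudioSession.sharedInstance()
        try session.setCategory(.playback, mode: .voicePrompt, options: [.duckOthers])
        try session.setActive(true)
        synthesizer.speak(AVSpeechUtterance(string: text))
        // Deactivate from the didFinish delegate callback (as in the example
        // above) so other audio is un-ducked when speech completes.
    }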
6 replies · 1 boost · 850 views · Jul ’24
Watchdog kills the app with 0x8BADF00D because of MusicKit activity
After integrating MusicKit, I have an issue with the watchdog. The crash log points to this stack trace:

    ProcessState: Running
    WatchdogEvent: scene-update
    WatchdogVisibility: Background
    WatchdogCPUStatistics: (
        "Elapsed total CPU time (seconds): 72.560 (user 49.970, system 22.590), 39% CPU",
        "Elapsed application CPU time (seconds): 11.270, 6% CPU"
    )
    reportType:CrashLog maxTerminationResistance:Interactive>

    Triggered by Thread: 0

    Thread 0 Crashed:
    0   libsystem_kernel.dylib        0x1dfa74808 mach_msg2_trap + 8
    1   libsystem_kernel.dylib        0x1dfa78008 mach_msg2_internal + 80
    2   libsystem_kernel.dylib        0x1dfa77f20 mach_msg_overwrite + 436
    3   libsystem_kernel.dylib        0x1dfa77d60 mach_msg + 24
    4   libdispatch.dylib             0x19e884b18 _dispatch_mach_send_and_wait_for_reply + 544
    5   libdispatch.dylib             0x19e884eb8 dispatch_mach_send_with_result_and_wait_for_reply + 60
    6   libxpc.dylib                  0x1f386bac8 xpc_connection_send_message_with_reply_sync + 264
    7   Foundation                    0x195853998 __NSXPCCONNECTION_IS_WAITING_FOR_A_SYNCHRONOUS_REPLY__ + 16
    8   Foundation                    0x195850004 -[NSXPCConnection _sendInvocation:orArguments:count:methodSignature:selector:withProxy:] + 2160
    9   Foundation                    0x1958c820c -[NSXPCConnection _sendSelector:withProxy:arg1:] + 116
    10  Foundation                    0x1958c7e80 _NSXPCDistantObjectSimpleMessageSend1 + 60
    11  MediaPlayer                   0x1a8c0ff24 -[MPMusicPlayerController _validateServer] + 128
    12  MediaPlayer                   0x1a8c3f4f8 -[MPMusicPlayerApplicationController _establishConnectionIfNeeded] + 2144
    13  MediaPlayer                   0x1a8c0fbb8 -[MPMusicPlayerController onServer:] + 52
    14  MediaPlayer                   0x1a8c0ec94 -[MPMusicPlayerController _nowPlaying] + 372
    15  MediaPlayer                   0x1a8c161a4 -[MPMusicPlayerController nowPlayingItem] + 24
    16  MusicKit                      0x213253e78 -[MusicKit_SoftLinking_MPMusicPlayerController nowPlayingItem] + 24
    17  MusicKit                      0x2136ec1bc 0x2131b9000 + 5452220
    18  MusicKit                      0x2136ec70c 0x2131b9000 + 5453580
    19  MusicKit                      0x2136ed839 0x2131b9000 + 5457977
    20  MusicKit                      0x213221c65 0x2131b9000 + 429157
    21  MusicKit                      0x21354b741 0x2131b9000 + 3745601
    22  libswift_Concurrency.dylib    0x1a1d0e775 completeTaskWithClosure(swift::AsyncContext*, swift::SwiftError*) + 1

According to the log, the app is in the background and the stack trace contains only MusicKit. How can we disable or avoid this activity in order to avoid the watchdog termination?
0 replies · 0 boosts · 405 views · Jul ’24
AVAudioSessionErrorCodeCannotInterruptOthers
We are developing a music app and have encountered a scenario in which there is no way to resume playing music, so I would like to ask how this should be handled technically. For example, when another app plays a video, our music playback is paused; when the video is closed, we should resume playing the music. Our code listens for AVAudioSessionInterruptionNotification, and when we receive the notification and it contains AVAudioSessionInterruptionOptionShouldResume, we try to play the music again, but error 560557684 (AVAudioSessionErrorCodeCannotInterruptOthers) is reported. We are very confused.

    NSError *error = nil;
    AVAudioSession *audioSession = [AVAudioSession sharedInstance];
    [audioSession setCategory:AVAudioSessionCategoryPlayback withOptions:0 error:&error];
    [audioSession setActive:YES error:&error];

We compared with the Apple Music app and found that Apple Music can resume playing.

Here is a video of the behavior in our app: https://drive.google.com/file/d/1J94S2kxkEpNvG536yzCnKmE7IN3cGzIJ/view?usp=sharing
Here is the Apple Music behavior video: https://drive.google.com/file/d/1c1Kdgkn2nhy8SdDvRJAFF2sPvqJ8fL48/view?usp=sharing

We want to improve our user experience. How can we do that?
0 replies · 0 boosts · 337 views · Jul ’24