I'm trying to write 16-bit interleaved 2-channel data captured from a LiveSwitch audio source to an AVAudioFile. The buffer and file formats match, but I get a bad-parameter error (-50) from the API. Does the API not support this format, or is something else wrong?
Here is the debugger output.
(lldb) po audioFile.url
▿ file:///private/var/mobile/Containers/Data/Application/1EB14379-0CF2-41B6-B742-4C9A80728DB3/tmp/Heart%20Sounds%201
- _url : file:///private/var/mobile/Containers/Data/Application/1EB14379-0CF2-41B6-B742-4C9A80728DB3/tmp/Heart%20Sounds%201
- _parseInfo : nil
- _baseParseInfo : nil
(lldb) po error
Error Domain=com.apple.coreaudio.avfaudio Code=-50 "(null)" UserInfo={failed call=ExtAudioFileWrite(_impl->_extAudioFile, buffer.frameLength, buffer.audioBufferList)}
(lldb) po buffer.format
<AVAudioFormat 0x302a12b20: 2 ch, 44100 Hz, Int16, interleaved>
(lldb) po audioFile.fileFormat
<AVAudioFormat 0x302a515e0: 2 ch, 44100 Hz, Int16, interleaved>
(lldb) po buffer.frameLength
882
(lldb) po buffer.audioBufferList
▿ 0x0000000300941e60
- pointerValue : 12894608992
This code handles the details of converting the LiveSwitch frame into an AVAudioPCMBuffer.
import AVFoundation

extension FMLiveSwitchAudioFrame {
    func convertedToPCMBuffer() -> AVAudioPCMBuffer {
        Self.convertToAVAudioPCMBuffer(from: self)!
    }

    static func convertToAVAudioPCMBuffer(from frame: FMLiveSwitchAudioFrame) -> AVAudioPCMBuffer? {
        // Retrieve the audio buffer and format details from the FMLiveSwitchAudioFrame
        guard
            let buffer = frame.buffer(),
            let format = buffer.format() as? FMLiveSwitchAudioFormat else { return nil }

        // Extract PCM format details from FMLiveSwitchAudioFormat
        let sampleRate = Double(format.clockRate())
        let channelCount = AVAudioChannelCount(format.channelCount())

        // Determine bytes per frame from the 16-bit sample size
        let bitsPerSample = 16
        let bytesPerSample = bitsPerSample / 8
        let bytesPerFrame = bytesPerSample * Int(channelCount)
        let frameLength = AVAudioFrameCount(Int(buffer.dataBuffer().length()) / bytesPerFrame)

        // Create an AVAudioFormat from the FMLiveSwitchAudioFormat
        guard let avAudioFormat = AVAudioFormat(commonFormat: .pcmFormatInt16, sampleRate: sampleRate, channels: channelCount, interleaved: true) else {
            return nil
        }

        // Create an AudioBufferList that wraps the existing LiveSwitch buffer
        let audioBufferList = UnsafeMutablePointer<AudioBufferList>.allocate(capacity: 1)
        audioBufferList.pointee.mNumberBuffers = 1
        audioBufferList.pointee.mBuffers.mNumberChannels = channelCount
        audioBufferList.pointee.mBuffers.mDataByteSize = UInt32(buffer.dataBuffer().length())
        audioBufferList.pointee.mBuffers.mData = buffer.dataBuffer().data().mutableBytes // Directly use the LiveSwitch buffer

        // Hand the buffer list to AVAudioPCMBuffer without copying
        let pcmBuffer = AVAudioPCMBuffer(pcmFormat: avAudioFormat, bufferListNoCopy: audioBufferList) /* { buffer in
            // Ensure the buffer is freed when AVAudioPCMBuffer is deallocated
            buffer.deallocate() // Only call this if LiveSwitch allows manual deallocation
        } */
        pcmBuffer?.frameLength = frameLength
        return pcmBuffer
    }
}
This is the handler that is invoked with every frame to convert it for use with AVAudioFile and, optionally, to update a scrolling signal display on the screen.
private func onRaisedFrame(obj: Any!) -> Void {
    // Bail out early if no one is interested in the data.
    guard isMonitoring else { return }

    // Convert the LiveSwitch frame to an AVAudioPCMBuffer (no-copy).
    let frame = obj as! FMLiveSwitchAudioFrame
    let buffer = frame.convertedToPCMBuffer()

    // Hand subscribers a reference to the buffer for rendering to the display.
    bufferPublisher?.send(buffer)

    // If we have an output file, store the data there as well.
    guard let audioFile = self.audioFile else { return }
    do {
        try audioFile.write(from: buffer) // FIXME: This call is throwing error -50
    } catch {
        FMLiveSwitchLog.error(withMessage: "Failed to write buffer to audio file at \(audioFile.url): \(error)")
        self.audioFile = nil
    }
}
This is how the audio file is set up.
static var recordingFormat: AVAudioFormat = {
    AVAudioFormat(commonFormat: .pcmFormatInt16, sampleRate: 44_100, channels: 2, interleaved: true)!
}()

let audioFile = try AVAudioFile(forWriting: outputURL, settings: Self.recordingFormat.settings)
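For reference, AVAudioFile distinguishes its file format from its processing format, and write(from:) expects buffers in the processing format, which defaults to deinterleaved Float32 when the file is opened with forWriting:settings:. Below is a sketch of opening the file so the processing format matches the Int16 interleaved buffers (I haven't confirmed this resolves the -50):

// Sketch: open the file so its processing format matches the buffers we
// write (Int16, interleaved) instead of the default deinterleaved Float32.
let audioFile = try AVAudioFile(forWriting: outputURL,
                                settings: Self.recordingFormat.settings,
                                commonFormat: .pcmFormatInt16,
                                interleaved: true)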
Audio
Dive into the technical aspects of audio on your device, including codecs, format support, and customization options.
Hi,
Not sure if this is the right forum to ask this in, but could you please advise whether I can use the Apple Digital Masters logo (badge) in my iOS app, which plays music from the Apple Music service?
Hello,
We are developing a real-time speech recognition application and are utilizing AVAudioEngine with voice processing enabled on the input node. However, we have observed that enabling this mode interferes with the built-in iOS screen recording feature: specifically, the recorded video does not capture any audio while this mode is active.
Since we want users to be able to record their experience within our app, this issue significantly impacts our functionality. Is there a known workaround or recommended approach to ensure that both voice processing and screen recording can function simultaneously?
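For reference, the voice-processing setup in question is essentially the following (a sketch; the rest of the engine wiring is omitted):

import AVFAudio

let engine = AVAudioEngine()
do {
    // Enables Apple's voice processing (echo cancellation etc.) on the
    // input node; this is the mode that appears to conflict with the
    // built-in screen recording's audio capture.
    try engine.inputNode.setVoiceProcessingEnabled(true)
} catch {
    print("Failed to enable voice processing: \(error)")
}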
Any guidance would be greatly appreciated.
Thank you!
The device is connected to two Bluetooth devices, A and B, and audio currently plays through A. When the user taps a button in the UI, how can I switch playback to B in code?
We have the necessary background recording entitlements, and many users do not run into any issues.
However, there is a subset of users that routinely get recordings ending early. We have narrowed this down and believe it to be the work of the watchdog.
First, we removed the entire view hierarchy when the app is backgrounded, leaving just Text("Recording").
This got the CPU usage in the profiler down to 0%, and we saw massive improvements in the recording success rate.
We walked away assuming that was enough. However, we are still seeing the same sort of crashes, all in the background. We're using Observation to drive audio state changes to a Live Activity.
Are those Observations causing the problem? Why doesn't Apple provide a better API for background audio? The internet is full of similar issues:
https://stackoverflow.com/questions/76010213/why-is-my-react-native-app-sometimes-terminated-in-the-background-while-tracking
https://stackoverflow.com/questions/71656047/why-is-my-react-native-app-terminating-in-the-background-while-recording-ios-r
https://github.com/expo/expo/issues/16807
This is such a terrible user experience, and we have very little visibility into what is happening and why.
Nowhere does Apple's documentation state that, for background recording to work, the app can be nothing more than Text("Recording").
It does not outline a CPU or memory threshold. It just kills us.
I am developing a VOD playback app. When I stream video to an external monitor connected via a Lightning-to-HDMI adapter on iOS 18 or later, the screen goes dark and I cannot confirm playback.
The app does not detect the HDMI display and present a separate player; it simply mirrors the video.
We have confirmed that the same phenomenon occurs with other services, though playback does work with some services, such as Apple TV.
Please let us know if any other settings, such as video certificates, are required for video playback.
We would also like to know whether the problem is specific to iOS 18 and later.
We have an application that uses the PTT framework to record audio messages while the app is backgrounded. Right now we are using AVAudioRecorder for that purpose. The problem: one specific user frequently gets recordings that contain only silence.
I've checked almost everything I can imagine but haven't found a possible cause.
Conditions:
AVAudioRecorder uses the following configuration:
[
    AVEncoderAudioQualityKey: AVAudioQuality.low.rawValue,
    AVFormatIDKey: kAudioFormatMPEG4AAC,
    AVNumberOfChannelsKey: 1,
    AVSampleRateKey: 16000.0
]
The app waits for both didBeginTransmitting and didActivate(audioSession:) from PTChannelManager (the audio session has the playback category at that moment).
The app then changes the AVAudioSession category to playAndRecord.
The app gets a routeChangeNotification with reason categoryChange and category = playAndRecord.
There are no interruption notifications from AVAudioSession during recording.
There are no error notifications from AVAudioRecorder.
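In code, the category change described above happens in the PTChannelManager delegate callback, roughly like this (a sketch; startRecorder() is a hypothetical helper that starts the AVAudioRecorder with the configuration shown earlier):

func channelManager(_ channelManager: PTChannelManager,
                    didActivate audioSession: AVAudioSession) {
    do {
        // The session arrives with the playback category; switch it so
        // the recorder can capture input.
        try audioSession.setCategory(.playAndRecord, mode: .default)
    } catch {
        print("Failed to change category: \(error)")
    }
    startRecorder() // hypothetical helper
}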
Any idea what exactly I'm doing wrong? Is there anything else I should check?
Thanks in advance.
P.S. It looks like recording audio with an AudioUnit has the same issue, but let's exclude that from the question for now, for simplicity.
Hello! I'm using AVFoundation to preview video and audio from a selected device, and I'm trying to use AVAudioEngine to monitor the audio in real time, but I can't figure out how to select the input device: I only ever hear my built-in microphone.
So far I'm using AVCaptureAudioPreviewOutput to hear the audio in real time, but I believe it adds delay.
On iOS this is easy with AVAudioEngine, but on macOS I'm stuck.
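For anyone framing an answer: what I am effectively looking for is something like the following sketch (an unverified assumption on my part; deviceID would come from Core Audio device enumeration, not shown):

import AVFAudio
import AudioToolbox
import CoreAudio

// Sketch: point the input node's underlying audio unit (AUHAL) at a
// specific capture device. macOS only.
func selectInputDevice(_ deviceID: AudioDeviceID, for engine: AVAudioEngine) -> OSStatus {
    guard let audioUnit = engine.inputNode.audioUnit else {
        return kAudioUnitErr_Uninitialized
    }
    var device = deviceID
    return AudioUnitSetProperty(audioUnit,
                                kAudioOutputUnitProperty_CurrentDevice,
                                kAudioUnitScope_Global,
                                0,
                                &device,
                                UInt32(MemoryLayout<AudioDeviceID>.size))
}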
Hi guys,
I am having an issue live-streaming audio from a Bluetooth headset and playing it live on the iPhone speaker.
I am able to redirect audio back to the headset, but this is not what I want.
The issue happens when I try to override the output: the iPhone switches to the speaker but also switches the microphone.
This is an example of the code:
import AVFoundation

class AudioRecorder {
    let player: AVAudioPlayerNode
    let engine: AVAudioEngine
    let audioSession: AVAudioSession
    let audioSessionOutput: AVAudioSession

    init() {
        self.player = AVAudioPlayerNode()
        self.engine = AVAudioEngine()
        self.audioSession = AVAudioSession.sharedInstance()
        self.audioSessionOutput = AVAudioSession()

        do {
            try self.audioSession.setCategory(AVAudioSession.Category.playAndRecord, options: [.defaultToSpeaker])
            try self.audioSessionOutput.setCategory(AVAudioSession.Category.playAndRecord, options: [.allowBluetooth]) // enables Bluetooth HFP profile
            try self.audioSession.setMode(AVAudioSession.Mode.default)
            try self.audioSession.setActive(true)
            // try self.audioSession.overrideOutputAudioPort(.speaker) // doesn't work
        } catch {
            print(error)
        }

        let input = self.engine.inputNode
        self.engine.attach(self.player)

        let bus = 0
        let inputFormat = input.inputFormat(forBus: bus)
        self.engine.connect(self.player, to: engine.mainMixerNode, format: inputFormat)

        input.installTap(onBus: bus, bufferSize: 512, format: inputFormat) { (buffer, time) -> Void in
            self.player.scheduleBuffer(buffer)
            print(buffer)
        }
    }

    public func start() {
        try! self.engine.start()
        self.player.play()
    }

    public func stop() {
        self.player.stop()
        self.engine.stop()
    }
}
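For concreteness, the kind of routing I am after would look something like this sketch (an untested assumption on my part that setPreferredInput can re-select the headset microphone after the output override):

let session = AVAudioSession.sharedInstance()
do {
    try session.setCategory(.playAndRecord, options: [.allowBluetooth])
    try session.setActive(true)
    // Force playback to the built-in speaker...
    try session.overrideOutputAudioPort(.speaker)
    // ...then ask for the Bluetooth HFP microphone back as the input.
    if let headsetMic = session.availableInputs?.first(where: { $0.portType == .bluetoothHFP }) {
        try session.setPreferredInput(headsetMic)
    }
} catch {
    print(error)
}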
I am not sure if this is a bug or not.
Can somebody point me in the right direction?
Is there a way to design custom audio routing?
I would also appreciate some good documentation besides the AVFoundation docs.
Mobile app - Ellie's Gift
https://apps.apple.com/gb/app/ellies-gift/id1617597875
We use AVFoundation to play audio tracks within the app.
It has always worked fine across Apple and Android devices, but iPhone 14 and newer devices are unable to play audio.
Any ideas or suggestions?
Bug Report: ScreenCaptureKit System Audio Capture Crashes with EXC_BAD_ACCESS
Summary
When using ScreenCaptureKit to capture system audio for extended periods, the application crashes with EXC_BAD_ACCESS in Swift's error handling runtime. The crash occurs in swift_getErrorValue when trying to process an error from the SCStream delegate method didStopWithError. This appears to be a framework-level issue in ScreenCaptureKit or its underlying ReplayKit implementation.
Environment
macOS Sonoma 14.6.1
Swift 5.8
ScreenCaptureKit framework
Detailed Description
Our application captures system audio using ScreenCaptureKit's audio capture capabilities. After successfully capturing for several minutes (typically after 3-4 segments of 60-second recordings), the application crashes with an EXC_BAD_ACCESS error. The crash happens when the Swift runtime attempts to process an error in the SCStreamDelegate.stream(_:didStopWithError:) method.
The crash consistently occurs in swift_getErrorValue when attempting to access the class of what appears to be a null object. This suggests that the error being passed from the system framework to our delegate method is malformed or contains invalid memory.
Steps to Reproduce
Create an SCStream with audio capture enabled
Add audio output to the stream
Start capture and write audio data to disk
Allow the capture to run for several minutes (3-5 minutes typically triggers the issue)
The app will crash with EXC_BAD_ACCESS in swift_getErrorValue
Code Sample
func stream(_ stream: SCStream, didStopWithError error: Error) {
    print("Stream stopped with error: \(error)") // Crash occurs before this line executes
}

func stream(_ stream: SCStream, didOutputSampleBuffer sampleBuffer: CMSampleBuffer, of type: SCStreamOutputType) {
    guard type == .audio, sampleBuffer.isValid else { return }
    // Process audio data...
}
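For completeness, the setup from steps 1 through 3 is essentially the following (a condensed sketch; the delegate and output objects are the ones whose callbacks appear above):

import ScreenCaptureKit

func startSystemAudioCapture(delegate: SCStreamDelegate,
                             output: SCStreamOutput,
                             queue: DispatchQueue) async throws -> SCStream {
    let content = try await SCShareableContent.excludingDesktopWindows(false, onScreenWindowsOnly: true)
    guard let display = content.displays.first else {
        throw NSError(domain: "Capture", code: -1) // no display available
    }
    let filter = SCContentFilter(display: display, excludingWindows: [])
    let config = SCStreamConfiguration()
    config.capturesAudio = true // system audio
    let stream = SCStream(filter: filter, configuration: config, delegate: delegate)
    try stream.addStreamOutput(output, type: .audio, sampleHandlerQueue: queue)
    try await stream.startCapture()
    return stream
}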
Expected Behavior
The error should be properly propagated to the delegate method, allowing for graceful error handling and recovery.
Actual Behavior
The application crashes with EXC_BAD_ACCESS when the Swift runtime attempts to process the error in swift_getErrorValue.
Crash Log Details
Thread #35, queue = 'com.apple.NSXPCConnection.m-user.com.apple.replayd', stop reason = EXC_BAD_ACCESS (code=1, address=0x0)
frame #0: 0x0000000194c3088c libswiftCore.dylib`swift::_swift_getClass(void const*) + 8
frame #1: 0x0000000194c30104 libswiftCore.dylib`swift_getErrorValue + 40
frame #2: 0x00000001057fba30 shadow`NewScreenCaptureService.stream(stream=0x0000600002de6700, error=Swift.Error @ 0x000000016b7b5e30) at NEW+ScreenCaptureService.swift:365:15
frame #3: 0x00000001057fc050 shadow`@objc NewScreenCaptureService.stream(_:didStopWithError:) at <compiler-generated>:0
frame #4: 0x0000000219ec5ca0 ScreenCaptureKit`-[SCStreamManager stream:didStopWithError:] + 456
frame #5: 0x00000001ca68a5cc ReplayKit`-[RPScreenRecorder stream:didStopWithError:] + 84
frame #6: 0x00000001ca696ff8 ReplayKit`-[RPDaemonProxy stream:didStopWithError:] + 224
Printing description of stream._streamQueue:
error: ObjectiveC.id:4294967281:18: note: 'id' has been explicitly marked unavailable here
public typealias id = AnyObject
^
error: /var/folders/v4/3xg1hmp93gjd8_xlzmryf_wm0000gn/T/expr23-dfa421..cpp:1:65: 'id' is unavailable in Swift: 'id' is not available in Swift; use 'Any'
Swift._DebuggerSupport.stringForPrintObject(Swift.UnsafePointer<id>(bitPattern: 0x104ae08c0)!.pointee)
^~
ObjectiveC.id:2:18: note: 'id' has been explicitly marked unavailable here
public typealias id = AnyObject
^
warning: /var/folders/v4/3xg1hmp93gjd8_xlzmryf_wm0000gn/T/expr23-dfa421..cpp:5:7: initialization of variable '$__lldb_error_result' was never used; consider replacing with assignment to '_' or removing it
var $__lldb_error_result = __lldb_tmp_error
~~~~^~~~~~~~~~~~~~~~~~~~
_
Before the crash, we observed this error message in the console:
[ERROR] *****SCStream*****RemoteAudioQueueOperationHandlerWithError:1015 Error received from the remote queue -16665
Additional Context
The issue occurs consistently after approximately 3-4 successful audio segment recordings of 60 seconds each
Commenting out custom segment rotation logic does not prevent the crash
The crash involves XPC communication with Apple's ReplayKit daemon
The error appears to be corrupted or malformed when crossing the XPC boundary
Workarounds Attempted
Added proper thread safety for all published properties using DispatchQueue.main.async
Implemented more robust error handling in the delegate methods
None of these approaches prevented the crash since it occurs at the Swift runtime level before our code executes.
Impact
This issue prevents reliable long-duration audio capture using ScreenCaptureKit.
This bug significantly limits the usefulness of ScreenCaptureKit for any application requiring continuous system audio capture for more than a few minutes.
This issue might be related to a macOS bug where the system dialog indicates that the screen is being shared even though nothing is actually being shared; moreover, attempting to stop sharing does nothing.
Is it possible to play WebM audio on iOS? Either with AVPlayer, AVAudioEngine, or some other API?
Safari has supported this for a few releases now, and I'm wondering if I missed something about how to do this. By default these APIs don't seem to work (nor does ExtAudioFileOpen).
Our use case is making it possible for iOS users to play back audio recorded in our web app (desktop versions of Chrome & Firefox only support WebM as a destination format for MediaRecorder).
I've been trying to use AVMIDIControlChangeEvent with a bankSelect message type to change the instrument the sequencer uses on an AVMusicTrack, with no luck.
I started with Apple's AVAEMixerSample, converting the initial setup/loading and the portions dealing with the sequencer to Swift. I got that working and playing the "bluesyRiff", then modified it to play individual notes. My createAndSetupSequencer looked like:
func createAndSetupSequencer() {
    sequencer = AVAudioSequencer(audioEngine: engine)

    // guard let midiFileURL = Bundle.main.url(forResource: "bluesyRiff", withExtension: "mid") else {
    //     print(" failed guard trying to get URL for bluesyRiff")
    //     return
    // }

    let track = sequencer.createAndAppendTrack()
    var currTime = 1.0
    for i: UInt32 in 0...8 {
        let newNoteEvent = AVMIDINoteEvent(channel: 0, key: 60 + i, velocity: 64, duration: 2.0)
        track.addEvent(newNoteEvent, at: AVMusicTimeStamp(currTime))
        currTime += 2.0
    }
}
The notes played, so I then also replaced the gs_instruments sound bank with GeneralUser GS MuseScore v1.442, first by trying:
guard let soundBankURL = Bundle.main.url(forResource: "GeneralUser GS MuseScore v1.442", withExtension: "sf2") else {
    return
}
do {
    try sampler.loadSoundBankInstrument(at: soundBankURL, program: 0x001C, bankMSB: 0x79, bankLSB: 0x08)
} catch {
    // ...
}
This appears to work: the instrument (8, which is "Funk Guitar") plays. If I change to bankLSB: 0x00 I get the "Palm Muted Guitar". So I know that the soundfont has these instruments.
Stuff goes off the rails when I try to change the instruments in createAndSetupSequencer. Putting
let programChange = AVMIDIProgramChangeEvent(channel: 0, programNumber: 0x001C)
let bankChange = AVMIDIControlChangeEvent(channel: 0, messageType: AVMIDIControlChangeEvent.MessageType.bankSelect, value: 0x00)
track.addEvent(programChange, at: AVMusicTimeStamp(1.0))
track.addEvent(bankChange, at: AVMusicTimeStamp(1.0))
just before my add-note loop doesn't produce any change. Loading bankLSB 8 (Funk) in sampler.loadSoundBankInstrument and trying to change with bankSelect 0 (Palm Muted) in createAndSetupSequencer results in instrument 8 (Funk) playing, not Palm Muted.
Loading bankLSB 0 (Palm Muted) and trying to change with bankSelect 8 (Funk) doesn't work either; 0 (Palm Muted) plays.
I also tried sampler.loadInstrument(at: soundBankURL), and then I always get the first instrument in the sound font file (piano) no matter what values I put in my programChange/bankChange.
I've also changed the time in track.addEvent to 0, 1.0, 3.0, etc., with no success.
sampler.loadSoundBankInstrument specifies two UInt8 parameters, bankMSB and bankLSB, while the AVMIDIControlChangeEvent bankSelect value is a UInt32, suggesting it might be some combination of bankMSB and bankLSB. But the documentation makes no mention of what this should look like. I tried various combinations (0x7908, 0x0879, etc.) to no avail.
I will also point out that I am able to successfully execute other control change events
For example, adding
if i == 1 {
    let portamentoOnEvent = AVMIDIControlChangeEvent(channel: 0, messageType: AVMIDIControlChangeEvent.MessageType.portamento, value: 0xFF)
    track.addEvent(portamentoOnEvent, at: AVMusicTimeStamp(currTime))
    let portamentoRateEvent = AVMIDIControlChangeEvent(channel: 0, messageType: AVMIDIControlChangeEvent.MessageType.portamentoTime, value: 64)
    track.addEvent(portamentoRateEvent, at: AVMusicTimeStamp(currTime))
}
does produce a change in the sound. (As an aside, a definition of portamento time, other than "the rate of portamento", would be welcome. Is it notes/second? freq/minute? beats/hour?)
I was able to get the instrument to change in a different program using MusicPlayer and a series of MusicTrackNewMIDIChannelEvent calls on a track, but those operate on a MusicTrack, not the AVMusicTrack the sequencer uses.
Has anyone been successful in switching instruments through an AVMIDIControlChangeEvent or have any feedback on how to do this?
Issue:
Under certain conditions, using CallKit does not automatically enable the microphone.
Steps to Reproduce:
1. Start an outgoing call, then the user manually mutes the audio.
2. Receive a native incoming call, end the current call, then answer the new incoming call. (This order is important.)
3. End the incoming call.
4. Start another outgoing call and observe the microphone; do not manually mute or unmute.
Actual Behavior:
The audio icon indicates that the audio is unmuted, but the microphone remains off, and the small yellow dot in the top status bar (which represents the microphone) does not appear.
Expected Behavior:
The microphone should be on, consistent with the audio icon display, and the small yellow dot should appear in the top status bar.
Device:
iPhone 16 Pro & iPhone 15 Pro, iOS 18.0+
Can it be reproduced using Speakerbox (the CallKit demo)?
YES
In MusicKit Web the playback states are provided as numbers.
For example the playbackStateDidChange event listener will return:
{oldState: 2, state: 3, item:...}
When the state changes from playing (2) to paused (3).
Those are pretty easy to guess, but I'm having a hard time with some of the others: completed, ended, loading, none, paused, playing, seeking, stalled, stopped, waiting.
I cannot find a mapping of states to numbers documented anywhere. I got the above states from an enum in a d.ts file that is often incorrect/incomplete.
Can someone help out pointing to the docs or provide a mapping?
Thanks.
I’m working on a macOS app, written in Swift. My goal is to record audio from an external microphone, e.g., one connected via USB.
For this, I’m using an AVCaptureSession and recording its output with an AVAssetWriter. This works perfectly in principle (and reliably with internal microphones, for example).
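For reference, the capture side of the pipeline is essentially this (a simplified sketch; the sample-buffer delegate that appends to the AVAssetWriterInput is omitted):

import AVFoundation

func makeCaptureSession(device: AVCaptureDevice, // e.g. the USB microphone
                        delegate: AVCaptureAudioDataOutputSampleBufferDelegate,
                        queue: DispatchQueue) throws -> AVCaptureSession {
    let session = AVCaptureSession()
    let input = try AVCaptureDeviceInput(device: device)
    session.addInput(input)
    let output = AVCaptureAudioDataOutput()
    output.setSampleBufferDelegate(delegate, queue: queue)
    session.addOutput(output)
    session.startRunning()
    return session
}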
The problem occurs after my app has successfully completed the first recording and I then want to make additional recordings (which makes me think it might be process-dependent, because it works again after restarting the app).
The problem: Noisy or distorted-sounding audio files. In addition, the following error message appears in the Console from CoreAudio / its AudioConverter:
Input data proc returned inconsistent 512 packets for 2048 bytes; at 3 bytes per packet, that is actually 682 packets
It is easy to reproduce. This problem is reproducible even if I don’t configure the AVAssetWriter manually and instead let it receive its audioSettings using a preset from an AVOutputSettingsAssistant. I’m running on macOS 15.0 (24A335).
I’ve filed a feedback including a demo project → FB15333298 🎟️
I would greatly appreciate any help!
Have a great day,
Martin
Hi all,
I have been quite stumped by this behavior for a little while now, so I thought it best to share here and see if someone more experienced with AVAudioEngine / AVAudioSession can weigh in.
Right now I have an AVAudioEngine that I am using for voice chat, handing it buffers to play. This works perfectly until route changes start to occur, which cause the AVAudioEngine to reset itself and, in turn, stop all player nodes attached to the engine.
Once an AVAudioPlayerNode gets stopped this way (but also at any other time), all samples that were scheduled to be played get purged. Where this becomes confusing for me is that the completion handler gets called every time, regardless of whether the sound was actually played.
Is there a reliable way to know if a sample needs to be rescheduled after a player has been reset?
I am not quite sure in my case what my observer of AVAudioEngineConfigurationChange needs to be doing, as this engine only handles output. All input is through a separate engine for simplicity.
Currently I store a queue of samples as they are sent to the AVAudioPlayerNode for playback, and in the completion handler I check whether the player isPlaying. If it is playing, I assume the sound was actually played; if not, I leave the sample in the queue and assume that an observer of the route change or the configuration change will notice there are samples in the queue and reschedule them.
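For reference, a sketch of the scheduling call with an explicit callback type; whether .dataPlayedBack (as opposed to .dataConsumed) reliably distinguishes played from purged samples is exactly what I am unsure about:

// `player` is the AVAudioPlayerNode; markAsPlayed(_:) is a hypothetical
// bookkeeping helper.
player.scheduleBuffer(buffer, completionCallbackType: .dataPlayedBack) { _ in
    // Called on an internal queue; hop to your own queue before
    // touching shared state.
    markAsPlayed(buffer)
}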
Thanks for any feedback!
I am developing an iOS app that needs to play spoken audio on demand from a server, while ducking the audio of background music from another app (e.g., SoundtrackYourBrand or Apple Music). This must work even when the app is in the background, and the server dictates when and what audio is played. Ideally, the message should be played within a minute of the server requesting it.
Current Attempt & Observations
I initially tried using Firebase Cloud Messaging (FCM) silent notifications to send a URL to an audio file, which the app would then play using AVPlayer.
This works consistently when the app is active, but in the background, it only works about 60% of the time.
In cases where it fails, iOS ducks the background music (e.g., from SoundtrackYourBrand) but never plays the spoken audio.
Interestingly, when I play the audio without enabling audio ducking, it seems to work 100% of the time from my limited testing, even in the background.
The app has background modes enabled for Audio, Background Fetch, and Remote Notifications.
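For context, the session configuration driving the ducking is essentially the standard one (a sketch; deactivating with notification is what restores the other app's volume):

let session = AVAudioSession.sharedInstance()
try session.setCategory(.playback, options: [.duckOthers])
try session.setActive(true)
// ... play the downloaded file with AVPlayer ...
// Deactivate afterwards so the ducked app's volume is restored.
try session.setActive(false, options: [.notifyOthersOnDeactivation])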
Best Approach to Achieve This?
I’d like guidance on the best Apple-compliant approach to reliably play audio on command from the server, even when the app is in the background. Some possible paths:
Ensuring the app remains active in the background – Are there recommended ways to prevent the app from getting suspended, such as background tasks, a special background mode, or a persistent connection to the server?
Alternative triggering mechanisms – Would something like VoIP, Push-to-Talk, or another background service be better suited for this use case?
Built-in iOS speech synthesis (AVSpeechSynthesizer) – If playing external audio is unreliable, would generating speech dynamically from text be a more robust approach?
Streaming audio instead of sending a URL – Could continuous streaming from the server keep the app active and allow playback at the right moment?
I want to ensure the solution is reliable and works 100% of the time when needed. Any recommendations on the best approach for this would be greatly appreciated.
Thank you for your time and guidance.
This only occurs on iOS 18+. The backtrace is attached below.
Exception Codes: 0x0000000000000000, 0x0000000000000000
Termination Reason: SIGNAL 6 Abort trap: 6
Terminating Process: NoteKeys [24384]
Triggered by Thread: 0
Last Exception Backtrace:
0 CoreFoundation 0x1a2d4c7cc __exceptionPreprocess + 164 (NSException.m:249)
1 libobjc.A.dylib 0x1a001f2e4 objc_exception_throw + 88 (objc-exception.mm:356)
2 CoreFoundation 0x1a2e47748 +[NSException raise:format:] + 128 (NSException.m:0)
3 AVFAudio 0x1bd41f4c8 -[AVMIDIPlayer play:] + 300 (AVMIDIPlayer.mm:145)
4 NoteKeys 0x1023c0670 SoundGenerator.playData() + 20 (SoundGenerator.swift:170)
5 NoteKeys 0x1023c0670 EditViewController.playBtnTapped(startIndex:) + 940 (EditViewController.swift:2034)
6 NoteKeys 0x1024497fc specialized Keyboard.playBtnTapped(sender:) + 1904 (Keyboard.swift:1249)
7 NoteKeys 0x10244631c Keyboard.playBtnTapped(sender:) + 4 (<compiler-generated>:0)
8 NoteKeys 0x10244631c @objc Keyboard.playBtnTapped(sender:) + 48
9 UIKitCore 0x1a58739cc -[UIApplication sendAction:to:from:forEvent:] + 100 (UIApplication.m:5816)
10 UIKitCore 0x1a58738a4 -[UIControl sendAction:to:forEvent:] + 112 (UIControl.m:942)
11 UIKitCore 0x1a58736f4 -[UIControl _sendActionsForEvents:withEvent:] + 324 (UIControl.m:1013)
12 UIKitCore 0x1a5fe8d8c -[UIButton _sendActionsForEvents:withEvent:] + 124 (UIButton.m:4198)
13 UIKitCore 0x1a5fea5a0 -[UIControl touchesEnded:withEvent:] + 400 (UIControl.m:692)
14 UIKitCore 0x1a57bb9ac -[UIWindow _sendTouchesForEvent:] + 852 (UIWindow.m:3318)
15 UIKitCore 0x1a57bb3d8 -[UIWindow sendEvent:] + 2964 (UIWindow.m:3641)
16 UIKitCore 0x1a564fb70 -[UIApplication sendEvent:] + 376 (UIApplication.m:12972)
17 UIKitCore 0x1a565009c __dispatchPreprocessedEventFromEventQueue + 1048 (UIEventDispatcher.m:2686)
18 UIKitCore 0x1a5659f3c __processEventQueue + 5696 (UIEventDispatcher.m:3044)
19 UIKitCore 0x1a5552c60 updateCycleEntry + 160 (UIEventDispatcher.m:133)
20 UIKitCore 0x1a55509d8 _UIUpdateSequenceRun + 84 (_UIUpdateSequence.mm:136)
21 UIKitCore 0x1a5550628 schedulerStepScheduledMainSection + 172 (_UIUpdateScheduler.m:1171)
22 UIKitCore 0x1a555159c runloopSourceCallback + 92 (_UIUpdateScheduler.m:1334)
23 CoreFoundation 0x1a2d20328 __CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE0_PERFORM_FUNCTION__ + 28 (CFRunLoop.c:1970)
24 CoreFoundation 0x1a2d202bc __CFRunLoopDoSource0 + 176 (CFRunLoop.c:2014)
25 CoreFoundation 0x1a2d1ddc0 __CFRunLoopDoSources0 + 244 (CFRunLoop.c:2051)
26 CoreFoundation 0x1a2d1cfbc __CFRunLoopRun + 840 (CFRunLoop.c:2969)
27 CoreFoundation 0x1a2d1c830 CFRunLoopRunSpecific + 588 (CFRunLoop.c:3434)
28 GraphicsServices 0x1eecfc1c4 GSEventRunModal + 164 (GSEvent.c:2196)
29 UIKitCore 0x1a5882eb0 -[UIApplication _run] + 816 (UIApplication.m:3844)
30 UIKitCore 0x1a59315b4 UIApplicationMain + 340 (UIApplication.m:5496)
31 NoteKeys 0x10254bc10 main + 68 (AppDelegate.swift:15)
32 dyld 0x1c870aec8 start + 2724 (dyldMain.cpp:1334)
Thanks very much for any help: )
Hi,
I'm working on an audio mixing app, that comes with bundled audio units that provide some of the app's core functionality.
For the next release of that app, we are planning to make two changes:
make the app sandboxed
package the bundled audio units as .appex bundles instead of .component bundles, so we don't need to take care of installing them at the correct spot in the file system
When trying this new approach, we run into problems where [[AVAudioUnitEffect alloc] initWithAudioComponentDescription:] crashes when trying to load our audio unit with the exception:
AVAEInternal.h:109 [AUInterface.mm:468:AUInterfaceBaseV3: (AudioComponentInstanceNew(comp, &_auv2)): error -10863
Our audio unit has the sandboxSafe flag enabled and loads fine when the host app is not sandboxed, so I'm guessing I got the bundle ID / code-signing requirements for the .appex correct.
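For reference, my understanding is that v3 components generally want to be instantiated asynchronously, out of process; the sketch below shows that path (the component description values are placeholders matching the plist dump further down):

import AVFAudio
import AudioToolbox

var description = AudioComponentDescription(componentType: kAudioUnitType_Effect,
                                            componentSubType: 0x5A6E_796B,      // 'Znyk'
                                            componentManufacturer: 0x4D61_6E75, // 'Manu'
                                            componentFlags: 0,
                                            componentFlagsMask: 0)

AVAudioUnit.instantiate(with: description, options: [.loadOutOfProcess]) { unit, error in
    guard let effect = unit as? AVAudioUnitEffect else {
        print("Instantiation failed: \(String(describing: error))")
        return
    }
    // Attach and connect `effect` to the engine here.
    _ = effect
}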
It seems that my .appex isn't even loaded, and the system rejects it because of its metadata. Maybe there's something wrong with the Info.plist generated by Juice?
"BuildMachineOSBuild" => "23H222"
"CFBundleDisplayName" => "elgato_sample_recorder"
"CFBundleExecutable" => "ElgatoSampleRecorder"
"CFBundleIdentifier" => "com.iwascoding.EffectLoader.samplerecorderAUv3"
"CFBundleName" => "elgato_sample_recorder"
"CFBundlePackageType" => "XPC!"
"CFBundleShortVersionString" => "1.0.0.0"
"CFBundleSignature" => "????"
"CFBundleSupportedPlatforms" => [
0 => "MacOSX"
]
"CFBundleVersion" => "1.0.0.0"
"DTCompiler" => "com.apple.compilers.llvm.clang.1_0"
"DTPlatformBuild" => "24C94"
"DTPlatformName" => "macosx"
"DTPlatformVersion" => "15.2"
"DTSDKBuild" => "24C94"
"DTSDKName" => "macosx15.2"
"DTXcode" => "1620"
"DTXcodeBuild" => "16C5032a"
"LSMinimumSystemVersion" => "10.13"
"NSExtension" => {
"NSExtensionAttributes" => {
"AudioComponents" => [
0 => {
"description" => "Elgato Sample Recorder"
"factoryFunction" => "elgato_sample_recorderAUFactoryAUv3"
"manufacturer" => "Manu"
"name" => "Elgato: Elgato Sample Recorder"
"sandboxSafe" => 1
"subtype" => "Znyk"
"tags" => [
0 => "Effects"
]
"type" => "aufx"
"version" => 65536
}
]
}
"NSExtensionPointIdentifier" => "com.apple.AudioUnit-UI"
"NSExtensionPrincipalClass" => "elgato_sample_recorderAUFactoryAUv3"
}
"NSHighResolutionCapable" => 1
}
Any ideas what I am missing?