Dive into the technical aspects of audio on your device, including codecs, format support, and customization options.

Audio Documentation

Post | Replies | Boosts | Views | Activity

Debugging CAAnimation that will not start under very strange circumstances.
I have a navigation controller with two VCs. One VC is pushed onto the NavController, the other is presented on top of the NavController. The presented VC has a relatively complex animation involving a CAEmitter -> animate birth rate down -> fade out -> remove. The pushed VC has an inputAccessoryView and can become first responder.

The expected behavior is: open presented VC -> emitter emits pretty pictures -> emitter stops gracefully. The animation works perfectly, UNLESS I open the pushed VC -> leave -> go to the presented VC. In this case, when I open the presented VC the emitter emits pretty pictures -> they never stop. (Please do not ask me how long it took to figure this much out 🤬😔)

The animation code in question is:

    let animation = CAKeyframeAnimation(keyPath: #keyPath(CAEmitterLayer.birthRate))
    animation.duration = 1
    animation.timingFunction = CAMediaTimingFunction(name: .easeIn)
    animation.values = [1, 0, 0]
    animation.keyTimes = [0, 0.5, 1]
    animation.fillMode = .forwards
    animation.isRemovedOnCompletion = false

    emitter.beginTime = CACurrentMediaTime()
    let now = Date()
    CATransaction.begin()
    CATransaction.setCompletionBlock { [weak self] in
        print("fade beginning -- delta: \(Date().timeIntervalSince(now))")
        let transition = CATransition()
        transition.delegate = self
        transition.type = .fade
        transition.duration = 1
        transition.timingFunction = CAMediaTimingFunction(name: .easeOut)
        transition.setValue(emitter, forKey: kKey)
        transition.isRemovedOnCompletion = false
        emitter.add(transition, forKey: nil)
        emitter.opacity = 0
    }
    emitter.add(animation, forKey: nil)
    CATransaction.commit()

The delegate method is:

    extension PresentedVC: CAAnimationDelegate {
        func animationDidStop(_ anim: CAAnimation, finished flag: Bool) {
            if let emitter = anim.value(forKey: kKey) as? CALayer {
                emitter.removeAllAnimations()
                emitter.removeFromSuperlayer()
            } else {
            }
        }
    }

Here is the pushed VC:

    class PushedVC: UIViewController {
        override var canBecomeFirstResponder: Bool { return true }
        override var canResignFirstResponder: Bool { return true }
        override var inputAccessoryView: UIView? { return UIView() }
    }

So to reiterate: if I push PushedVC onto the navController, pop it, and then present PresentedVC, the emitters emit, but the call to emitter.add(animation, forKey: nil) is essentially ignored. The emitter just keeps emitting.

Here are some sample happy print statements from the completion block:

    fade beginning -- delta: 1.016232967376709
    fade beginning -- delta: 1.0033869743347168
    fade beginning -- delta: 1.0054619312286377
    fade beginning -- delta: 1.0080779790878296
    fade beginning -- delta: 1.0088880062103271
    fade beginning -- delta: 0.9923020601272583
    fade beginning -- delta: 0.99943196773529

Here are my findings:

- The issue presents only when the pushed VC has an inputAccessoryView AND canBecomeFirstResponder is true.
- It does not matter if the inputAccessoryView is UIKit or custom, has size, is visible, or anything.
- When I dismiss PresentedVC the animation is completed and the print statements show. Here are some unhappy print examples:

    fade beginning -- delta: 5.003802061080933
    fade beginning -- delta: 5.219511032104492
    fade beginning -- delta: 5.73025906085968
    fade beginning -- delta: 4.330522060394287
    fade beginning -- delta: 4.786169052124023

- CATransaction.flush() does not fix anything.
- Removing the entire CATransaction block and just calling emitter.add(animation, forKey: nil) similarly does nothing - the birth rate decrease animation does not happen.

I am having trouble creating a simple demo project where the issue is reproducible (it is 100% reproducible in my code, the entirety of which I'm not going to link here), so I think getting a "solution" is unrealistic. What I would love is if anyone had any suggestions on where else to look, or any ways to debug CAAnimation. I think if I can solve the last bullet - emitter.add(animation, forKey: nil) called w/o a CATransaction - I can break this whole thing. Why would a CAAnimation added directly to a layer which is visible and doing stuff refuse to run?
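One low-level way to narrow this down is to dump the timing state of the emitter layer and its ancestors immediately after adding the animation, in both the working and the broken flow, and diff the output: a layer whose speed is 0, or whose converted local time is far from CACurrentMediaTime(), will hold an added animation without visibly running it. This is only a debugging sketch using standard Core Animation API; dumpTimingState is a hypothetical helper name, not something from the post:

    import QuartzCore

    // Debugging sketch: log the timing state of a layer and every ancestor up to the root.
    func dumpTimingState(for layer: CALayer) {
        let now = CACurrentMediaTime()
        var current: CALayer? = layer
        while let l = current {
            // speed == 0 (or a wildly offset local time) means animations are effectively frozen.
            let localTime = l.convertTime(now, from: nil)
            print("\(type(of: l)): speed=\(l.speed) beginTime=\(l.beginTime) timeOffset=\(l.timeOffset) localTime=\(localTime)")
            current = l.superlayer
        }
        // Confirm the animation actually got attached under some key.
        print("animationKeys: \(layer.animationKeys() ?? [])")
    }

Calling dumpTimingState(for: emitter) right after emitter.add(animation, forKey: nil) in both scenarios should at least show whether the animation is attached and whether some ancestor layer's clock differs between the happy and unhappy paths.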
1
0
681
Jan ’24
How to use Swift and AVFoundation to stream/record USB microphone input?
I have a custom USB device that includes a microphone. I can see the microphone on macOS when I plug in the device, so I know that it is working with the kernel and AV subsystems. I can enumerate and reference the microphone using AVCaptureDevice, but I have not been able to figure out how to use this device reference with AVAudioEngine. I'm trying to accomplish two things with this microphone:

- I want to stream audio from the microphone and have it rendered to the speakers on my MacBook Pro.
- I want to capture sound data from the microphone and forward it to a live streaming API.

To my mind, from what I've read, I need AVAudioEngine to do this, but I'm having trouble determining from the documentation just how to go about it on macOS. It seems that there is a lot more information for iOS and iPadOS, but since USB-C support is sparsely documented on those operating systems, I'm focusing on the desktop (macOS) for now. Can I convert an AVCaptureDevice into an audio input for AVAudioEngine? If not, how can I accomplish what I'm trying to do using whatever is available in AVFoundation?
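There is no direct initializer that turns an AVCaptureDevice into an AVAudioEngine input, but one approach sometimes used on macOS is to look up the Core Audio device whose UID matches AVCaptureDevice.uniqueID and point the engine's input node at it through the underlying audio unit. The sketch below is a hedged illustration of that idea, not a verified solution; the helper name audioDeviceID(forUID:) is made up, and kAudioObjectPropertyElementMain assumes macOS 12 or later:

    import AVFoundation
    import CoreAudio
    import AudioToolbox

    // Find the Core Audio AudioDeviceID whose UID matches an AVCaptureDevice's uniqueID.
    func audioDeviceID(forUID uid: String) -> AudioDeviceID? {
        var address = AudioObjectPropertyAddress(
            mSelector: kAudioHardwarePropertyDevices,
            mScope: kAudioObjectPropertyScopeGlobal,
            mElement: kAudioObjectPropertyElementMain)
        var dataSize: UInt32 = 0
        guard AudioObjectGetPropertyDataSize(AudioObjectID(kAudioObjectSystemObject),
                                             &address, 0, nil, &dataSize) == noErr else { return nil }
        var deviceIDs = [AudioDeviceID](repeating: 0,
                                        count: Int(dataSize) / MemoryLayout<AudioDeviceID>.size)
        guard AudioObjectGetPropertyData(AudioObjectID(kAudioObjectSystemObject),
                                         &address, 0, nil, &dataSize, &deviceIDs) == noErr else { return nil }

        for id in deviceIDs {
            var uidAddress = AudioObjectPropertyAddress(
                mSelector: kAudioDevicePropertyDeviceUID,
                mScope: kAudioObjectPropertyScopeGlobal,
                mElement: kAudioObjectPropertyElementMain)
            var deviceUID: CFString = "" as CFString
            var uidSize = UInt32(MemoryLayout<CFString>.size)
            if AudioObjectGetPropertyData(id, &uidAddress, 0, nil, &uidSize, &deviceUID) == noErr,
               (deviceUID as String) == uid {
                return id
            }
        }
        return nil
    }

    // Route that device into an AVAudioEngine and monitor it on the default output.
    func startMonitoring(captureDevice: AVCaptureDevice) throws {
        let engine = AVAudioEngine()
        if var deviceID = audioDeviceID(forUID: captureDevice.uniqueID),
           let unit = engine.inputNode.audioUnit {
            // Point the input I/O unit at the USB microphone instead of the default input device.
            AudioUnitSetProperty(unit,
                                 kAudioOutputUnitProperty_CurrentDevice,
                                 kAudioUnitScope_Global,
                                 0,
                                 &deviceID,
                                 UInt32(MemoryLayout<AudioDeviceID>.size))
        }
        let format = engine.inputNode.outputFormat(forBus: 0)
        // Careful: this creates a live mic-to-speaker path, so feedback is possible.
        engine.connect(engine.inputNode, to: engine.mainMixerNode, format: format)
        try engine.start()
    }

For the second goal (forwarding to a streaming API), an installTap(onBus:bufferSize:format:) on engine.inputNode would hand back PCM buffers that can be converted and sent onward.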
1
0
1.2k
Jan ’24
Spatial Video in AVPlayerController vs Photos app
Hi, the Destinations sample code project and the related WWDC talk on spatial video seem to imply that the video player will show 3D stereoscopic videos. However, in the Photos app there's a vignetting in the simulator (and marketing material) when viewing spatial video, a portal kind of effect. Without access to a device, I'm wondering if my spatial videos are actually being played as 3D spatial videos in the AVPlayerController, since I'm not seeing the vignetting. I'm thinking that the vignetting is a Photos-specific visual effect, but wanted to double check to make sure I'm not misunderstanding something about AVPlayerController. Does anyone know if spatial videos played through AVPlayerController will appear as stereoscopic, even if the vignetting isn't there? Has anyone tried the Destinations sample code to play spatial videos on a device to confirm? Thanks!
2
1
1.2k
Jan ’24
MV-HEVC/Spatial Decoder/Encoder Requirements
What are the Mac hardware and software requirements to decode and encode MV-HEVC video with AVFoundation? Many of the new MV-HEVC-related keys require macOS 14.0+, so I'm guessing that macOS Sonoma or later is required on the software side. What about processor architectures? I can read an MV-HEVC source on my Apple Silicon M1. But when I run the same code on my Intel Mac mini (2018) running Sonoma 14.3, AVAssetReader's startReading() returns false. Similarly, when I try to create an AVAssetWriterInput with MV-HEVC output settings, I receive:

    -[AVAssetWriterInput initWithMediaType:outputSettings:sourceFormatHint:] Compression property MVHEVCVideoLayerIDs is not supported for video codec type 'hvc1'

Is this because Intel-based Macs don't support MV-HEVC? Or am I missing something else?
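For reference, "MV-HEVC output settings" here presumably means something along the lines of the settings used in Apple's side-by-side conversion sample: HEVC plus the multiview layer/view ID compression properties from VideoToolbox (macOS 14+). This is only a sketch to show which keys trigger the error above; the width/height values are placeholders, and on hardware without MV-HEVC encode support the writer-input initializer is presumably where the exception is raised:

    import AVFoundation
    import VideoToolbox

    // Two-layer (left/right eye) MV-HEVC compression properties.
    let compressionProperties: [CFString: Any] = [
        kVTCompressionPropertyKey_MVHEVCVideoLayerIDs: [0, 1],
        kVTCompressionPropertyKey_MVHEVCViewIDs: [0, 1],
        kVTCompressionPropertyKey_MVHEVCLeftAndRightViewIDs: [0, 1]
    ]

    let outputSettings: [String: Any] = [
        AVVideoCodecKey: AVVideoCodecType.hevc,
        AVVideoWidthKey: 1920,          // placeholder
        AVVideoHeightKey: 1080,         // placeholder
        AVVideoCompressionPropertiesKey: compressionProperties
    ]

    let writerInput = AVAssetWriterInput(mediaType: .video, outputSettings: outputSettings)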
0
0
750
Jan ’24
Handle AVAudioEngineConfigurationChange when record audio with AVAudioEngine
Hi everyone, I was working on some code that involves recording audio with AVAudioEngine and got an issue that just crashes the app:

    EXC_BREAKPOINT Exception 6, Code 1, Subcode 4304279688
    +0x009888 AudioRecordModule.setupAudioEngine
    +0x009788 AudioRecordModule.setupAudioEngine
    +0x00c5bc AudioRecordModule.handleConfigurationChange

Below is the relevant code in the Recorder class.

    public class AudioRecordModule: Module {
        private var audioEngine: AVAudioEngine?

        private func startRecording(options recordingOptions: RecordingOptions) {
            try AVAudioSession.sharedInstance().setCategory(.playAndRecord, options: .mixWithOthers)
            try AVAudioSession.sharedInstance().setActive(true)

            outputFormat = AVAudioFormat(
                commonFormat: recordingOptions.bitDepth == 32 ? .pcmFormatInt32 : .pcmFormatInt16,
                sampleRate: Double(recordingOptions.sampleRate),
                channels: AVAudioChannelCount(recordingOptions.channels),
                interleaved: true
            )!

            let fileUri = URL(string: recordingOptions.fileUri)!
            let formatSettings: [String: Any] = [
                AVFormatIDKey: kAudioFormatMPEG4AAC,
                AVSampleRateKey: recordingOptions.sampleRate,
                AVNumberOfChannelsKey: recordingOptions.channels,
                AVEncoderBitRateStrategyKey: AVAudioBitRateStrategy_Constant,
                AVEncoderAudioQualityKey: AVAudioQuality.high.rawValue,
            ]
            self.recordedFile = try AVAudioFile(
                forWriting: fileUri,
                settings: formatSettings,
                commonFormat: outputFormat.commonFormat,
                interleaved: outputFormat.isInterleaved
            )

            if !hadSetupNotification {
                setupNotifications()
            }
        }

        func handleConfigurationChange() {
            DispatchQueue.main.async {
                self.releaseAudioEngine()
                self.setupAudioEngine()
                if self.state == "recording" {
                    // we could attempt to keep recording
                    do {
                        try self.audioEngine?.start()
                    } catch {
                        self.internalPauseRecording()
                        self.sendInterruptEvent()
                    }
                }
            }
        }

        func setupNotifications() {
            nc.addObserver(
                forName: Notification.Name.AVAudioEngineConfigurationChange,
                object: nil,
                queue: nil
            ) { [weak self] _ in
                guard let weakself = self else { return }
                if weakself.state != "inactive" {
                    weakself.handleConfigurationChange()
                }
            }
        }

        private func setupAudioEngine() {
            self.audioEngine = nil
            let audioEngine = AVAudioEngine()
            self.audioEngine = audioEngine

            let inputNode = audioEngine.inputNode
            let inputFormat = inputNode.inputFormat(forBus: 0)
            let converter = AVAudioConverter(from: inputFormat, to: outputFormat)!

            inputNode.installTap(onBus: 0, bufferSize: 1024, format: inputFormat) {
                (buffer: AVAudioPCMBuffer!, time: AVAudioTime!) -> Void in
                do {
                    let inputBlock: AVAudioConverterInputBlock = { _, outStatus in
                        outStatus.pointee = AVAudioConverterInputStatus.haveData
                        return buffer
                    }
                    let frameCapacity = AVAudioFrameCount(self.outputFormat.sampleRate) * buffer.frameLength / AVAudioFrameCount(buffer.format.sampleRate)
                    let outputBuffer = AVAudioPCMBuffer(
                        pcmFormat: self.outputFormat,
                        frameCapacity: frameCapacity
                    )!
                    var error: NSError?
                    converter.convert(to: outputBuffer, error: &error, withInputFrom: inputBlock)
                    if let error = error {
                        throw error
                    } else {
                        try self.recordedFile?.write(from: outputBuffer)
                    }
                } catch {
                    print(error)
                }
            }
        }

        private func releaseAudioEngine() {
            if let audioEngine = self.audioEngine {
                audioEngine.inputNode.removeTap(onBus: 0)
                audioEngine.stop()
            }
            audioEngine = nil
        }
    }

Besides that, the record module works normally. It is just the configuration change that it does not handle well. I understand that when the configuration changes, I need to reinit the audio engine to have the correct input format (since the new config/audio device can have a different sample rate and such). If I don't do that, the app also crashes, perhaps due to the mismatch. AVAudioRecorder is not an option for me. Thank you for your help.
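One thing the crash trace points at, offered only as a hedged guess: setupAudioEngine force-unwraps AVAudioConverter(from:to:), and right after an AVAudioEngineConfigurationChange the input node can momentarily report a format with a zero sample rate or zero channels, for which that initializer returns nil. A defensive variant might look like this (a sketch; the helper name is hypothetical and this is not a confirmed diagnosis of the crash above):

    import AVFoundation

    // Hypothetical helper: only build the converter once the input node reports a usable format.
    func makeConverterIfPossible(engine: AVAudioEngine,
                                 outputFormat: AVAudioFormat) -> AVAudioConverter? {
        let inputFormat = engine.inputNode.inputFormat(forBus: 0)
        // After a route/configuration change the hardware format can briefly be 0 Hz / 0 channels;
        // AVAudioConverter(from:to:) returns nil in that case, so avoid force-unwrapping it.
        guard inputFormat.sampleRate > 0, inputFormat.channelCount > 0 else { return nil }
        return AVAudioConverter(from: inputFormat, to: outputFormat)
    }

If the converter cannot be built yet, the configuration-change handler could skip this pass and retry on the next notification instead of crashing.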
0
0
830
Jan ’24
Integrating Spatial Audio
I'm looking for a sample code project on integrating Spatial Audio into my app, Tunda Island, a music-loving, friend-making and dating app. I have gone as far as purchasing a book, "Exploring MusicKit" by Rudrank Riyam, but to no avail.
1
0
556
Feb ’24
iOS 17 camera capture assertions and issues
Hello, Starting in iOS 17, our application started having some issues publishing to our video session. More specifically, video capture seems to be broken in some, but not all, sessions. What's troubling is that we're seeing it fail consistently every 4 sessions. It also fails silently, without reporting any problems to the app. We only notice that there are no frames being rendered or sent to the remote devices. Here's what shows up in the console:

    <<<< FigCaptureSourceRemote >>>> Fig assert: "! storage->connectionDied" at bail (FigCaptureSourceRemote.m:235) - (err=0)
    <<<< FigCaptureSourceRemote >>>> Fig assert: "err == 0 " at bail (FigCaptureSourceRemote.m:253) - (err=-16453)

Anyone seeing this? Any idea what could be the cause? Our sessions work perfectly on iOS 16 and below. Thanks
1
1
610
Feb ’24
Climatic changes and streaming: human behaviors need to be changed
Each time you listen to music, you are streaming from a server that is frequently powered by coal or gas and rarely by green energy. As a developer on iOS, I request that Apple provide downloading of audio files into our audio apps. The goal is not to resell the audio or violate authors' rights; YouTube already does that. It is time to find tips and tricks to reduce energy consumption, especially in data broadcasting and the useless streaming of the same song again and again. Is it possible to change the API in accordance with this reality?
0
0
473
Feb ’24
CoreAudio server plugin: 24bit big endian audio streams
At least under macOS Sonoma 14.2.1, kAudioFormatFlagIsBigEndian for 24-bit audio doesn't seem to be supported by the CoreAudio engine when providing kAudioServerPlugInIOOperationWriteMix streaming buffers for our CoreAudio server plugin. Is that correct and to be expected? Or how should the AudioStreamBasicDescription be filled out on a kAudioStreamPropertyPhysicalFormat request to correctly announce 24-bit big-endian audio to CoreAudio? Thanks, hagen.
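For reference, a packed 24-bit big-endian PCM physical format would normally be described with an AudioStreamBasicDescription along these lines (a sketch only, written out in Swift with placeholder rate/channel values; whether the HAL actually accepts the big-endian flag for kAudioStreamPropertyPhysicalFormat is exactly the open question in this post):

    import CoreAudio

    // Hypothetical physical format: 48 kHz, stereo, 24-bit signed integer,
    // packed into 3 bytes per sample, big endian.
    var asbd = AudioStreamBasicDescription(
        mSampleRate: 48_000,
        mFormatID: kAudioFormatLinearPCM,
        mFormatFlags: kAudioFormatFlagIsSignedInteger
                    | kAudioFormatFlagIsPacked
                    | kAudioFormatFlagIsBigEndian,
        mBytesPerPacket: 6,        // 3 bytes per sample * 2 channels
        mFramesPerPacket: 1,
        mBytesPerFrame: 6,
        mChannelsPerFrame: 2,
        mBitsPerChannel: 24,
        mReserved: 0
    )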
0
0
644
Feb ’24
Resetting categoryOptions when setting the mode in Swift AVAudioSession
When setting the mode during the configuration of an audio session in Swift, the previously configured categoryOptions get reset. For example, if you perform setMode as shown below, you will observe that all previously set categoryOptions are cleared.

Example:

    try AVAudioSession.sharedInstance().setCategory(.playAndRecord, mode: .videoChat, options: [.allowBluetooth, .defaultToSpeaker])
    try AVAudioSession.sharedInstance().setMode(.voiceChat)

If you need to change the mode while maintaining the categoryOptions, you have to perform setCategory once again. Although the exact reason for this behavior has not been identified, the practical impact on the application's functionality is not yet clear. Why do you think this handling is in place?
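Consistent with the post's own workaround, re-applying the options together with the new mode in a single call keeps them intact. This is a small sketch that simply restates the setCategory(_:mode:options:) overload; it does not explain why setMode alone resets the options:

    import AVFoundation

    // Switch to .voiceChat while re-asserting the category options in the same call.
    try AVAudioSession.sharedInstance().setCategory(
        .playAndRecord,
        mode: .voiceChat,
        options: [.allowBluetooth, .defaultToSpeaker]
    )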
0
0
588
Feb ’24
airpod2 a2dp option issue
Hello, I am having an issue with setting my AVAudioSession output to a Bluetooth A2DP device. I want to use the built-in mic for the input and an A2DP device (AirPods Pro 2) for the output route. Whenever I set the .allowBluetoothA2DP option for my AVAudioSession, the output changes to the speaker. The mode is .default and the category is .playAndRecord. If I do the same procedure with AirPods Pro 1, the output is set to the AirPods Pro 1. I am only having trouble when I use AirPods Pro 2 with an iPhone on iOS 17; it seems like there is no issue with iOS versions below 17. Has anyone gone through this kind of issue? Thank you in advance.
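For context, the configuration being described is presumably along these lines (a sketch of the setup from the post, not a fix; the preferred-input part is an assumption about how the built-in mic is being selected):

    import AVFoundation

    let session = AVAudioSession.sharedInstance()
    // Category .playAndRecord, mode .default, with high-quality A2DP output allowed.
    try session.setCategory(.playAndRecord, mode: .default, options: [.allowBluetoothA2DP])

    // Prefer the built-in microphone as the input route, if it is available.
    if let builtInMic = session.availableInputs?.first(where: { $0.portType == .builtInMic }) {
        try session.setPreferredInput(builtInMic)
    }
    try session.setActive(true)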
0
1
443
Feb ’24
AVAudioEngine: Is there a way to play audio at full volume while having an active input tap?
My project uses an AVAudioEngine with a very simple setup: a speech recognizer running on a tap on the engine's input, with separate AVAudioPlayerNodes handling playback.

    try session.setCategory(.playAndRecord, mode: .default, options: [])
    try session.setActive(true, options: .notifyOthersOnDeactivation)
    try session.setAllowHapticsAndSystemSoundsDuringRecording(true)

    filePlayerNode   ---> engine.mainMixerNode
    bufferPlayerNode --> engine.mainMixerNode
    engine.mainMixerNode --> engine.outputNode
    // bufferPlayer.scheduleBuffer() is called on its own queue

The input works fine, since the buffers can be collected into a file and play back correctly, and also because the recognizer works fine; but when I try to play the live audio by sending the buffer to the bufferPlayer on this or another device, the buffer audio plays at a very low volume, sometimes with severe distortions. If I lower the sample rate via AVAudioConverter, the distortions get worse. I've tried experimenting with the AVAudioSession category options, having separate AVAudioEngines, and much, much more, yet I still haven't figured this out. It's gotten to the point where I've fixed almost all the arcane and minor issues in my audio system, yet I still can't play back my voice properly.

The ability to both play and record simultaneously is a basic feature of phones -- when on speaker mode, a phone doesn't need to behave like a walkie-talkie. In my mind, it's inconceivable that the relatively new AVAudioEngine doesn't have an implementation for this, since the main issue (feedback loops) can be dealt with via a simple primitive circuit. Live video chat apps like FaceTime wouldn't be possible without this, yet to my surprise I found no answers online (what I did find were articles explaining how to write a file while playback is occurring). Is there truly no way to do this on AVAudioEngine? Am I missing something fundamental? Any pointers would be greatly appreciated.
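One avenue worth checking, since FaceTime-style apps are mentioned: AVAudioEngine does expose the system's voice-processing I/O (echo cancellation and related processing) via setVoiceProcessingEnabled(_:) on its input node, which is the usual mechanism for keeping playback audible while the microphone is live. This is only a hedged sketch of that API, not a confirmed fix for the low volume or distortion described, and voice processing applies its own gain handling that may itself change levels:

    import AVFoundation

    // Sketch: build an engine with voice processing enabled so playback and the input tap coexist.
    func makeVoiceProcessingEngine() throws -> AVAudioEngine {
        let engine = AVAudioEngine()

        // Must be enabled before the engine starts; it reconfigures the underlying I/O unit.
        try engine.inputNode.setVoiceProcessingEnabled(true)

        let bufferPlayer = AVAudioPlayerNode()
        engine.attach(bufferPlayer)
        engine.connect(bufferPlayer, to: engine.mainMixerNode, format: nil)
        engine.connect(engine.mainMixerNode, to: engine.outputNode, format: nil)

        try engine.start()
        return engine
    }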
1
0
881
Feb ’24
Random EXC_BAD_ACCESS using AVFoundation
My app uses AVFoundation to pronounce some words. Running the app from Xcode, either to a simulator or a device, I frequently get this crash at start-up: AXSpeech (13): EXC_BAD_ACCESS (code=EXC_I386_GPFLT). It seems to occur randomly, maybe 20%-30% of the time I launch the app. When it does not crash, audio works as expected. When launched from a device, it never crashes (at least, so far). Here's the code that outputs speech.

Declared at the top level of the View struct:

    @State var synth = AVSpeechSynthesizer()

In the View, as part of a Button's closure:

    let utterance = AVSpeechUtterance(string: answer)
    utterance.voice = AVSpeechSynthesisVoice(language: "en-US")
    synth.speak(utterance)

Any idea on how to stop this? It doesn't stop development, but it sure slows it down, requiring multiple app starts often.
0
0
378
Feb ’24
Random EXC_BAD_ACCESS using AVFoundation
My app uses AVFoundation to pronounce some words. Running the app from Xcode, either to a simulator or a device, I frequently get this crash at start-up: AXSpeech (13): EXC_BAD_ACCESS (code=EXC_I386_GPFLT). It seems to occur randomly, maybe 20%-30% of the time I launch the app. When it does not crash, using audio works as expected. When launched from the device, it never crashes (so far, at least). Here's the code that outputs speech.

Declared at the top level of the View struct:

    @State var synth = AVSpeechSynthesizer()

In the View, as part of a Button's action closure:

    let utterance = AVSpeechUtterance(string: answer)
    utterance.voice = AVSpeechSynthesisVoice(language: "en_US")
    synth.speak(utterance)

Any idea on how to stop this? It's annoying having to launch the app multiple times to test on a simulator or device.
0
0
391
Feb ’24
Fisheye Projection
Does anyone have any knowledge or experience with Apple's fisheye projection type? I'm guessing that it's as the name implies: a circular capture in a square frame (encoded in MV-HEVC) that is de-warped during playback. It'd be nice to be able to experiment with this format without guessing/speculating on what to produce.
2
7
799
Feb ’24
MV-HEVC/Spatial Video Reading and Rendering Articles
I don't know when these were posted, but I noticed them in the AVFoundation documentation last night. There have been a lot of questions about working with this format, and these are useful. They also include code samples.

Reading multiview 3D video files: https://developer.apple.com/documentation/avfoundation/media_reading_and_writing/reading_multiview_3d_video_files

Converting side-by-side 3D video to multiview HEVC: https://developer.apple.com/documentation/avfoundation/media_reading_and_writing/converting_side-by-side_3d_video_to_multiview_hevc
2
0
1.1k
Feb ’24