AVAudioEngine


Use a group of connected audio node objects to generate and process audio signals and perform audio input and output.

AVAudioEngine Documentation

Posts under AVAudioEngine tag

55 Posts
Post not yet marked as solved
2 Replies
756 Views
Hello, I started to set up stereo audio recording (both audio and video are recorded) and the audio quality seems lower than what the native Camera app produces (configured for stereo). Using Console to check the log, I found a difference between the Camera app and mine regarding MXSessionMode (of mediaserverd): the Camera app gives MXSessionMode = SpatialRecording, while mine gives MXSessionMode = VideoRecording. How can I configure the capture session so that it ends up with MXSessionMode = SpatialRecording? Any suggestion? Best regards
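A minimal sketch of how stereo capture from the built-in mic is usually requested through AVAudioSession. This is an assumption about the cause, not a confirmed way to obtain MXSessionMode = SpatialRecording; the data-source and polar-pattern APIs below are real, but whether they change the mediaserverd session mode is unverified.

import AVFoundation

// Hypothetical helper: ask for the stereo polar pattern on the built-in microphone.
func enableStereoBuiltInMic() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord, mode: .videoRecording, options: [])
    try session.setActive(true)

    // Find the built-in microphone among the available inputs.
    guard let builtInMic = session.availableInputs?.first(where: { $0.portType == .builtInMic }) else { return }
    try session.setPreferredInput(builtInMic)

    // Pick a data source that supports the stereo polar pattern and request it.
    if let dataSource = builtInMic.dataSources?.first(where: { $0.supportedPolarPatterns?.contains(.stereo) == true }) {
        try dataSource.setPreferredPolarPattern(.stereo)
        try builtInMic.setPreferredDataSource(dataSource)
    }

    // Match the input orientation to the video orientation so left/right stay correct.
    try session.setPreferredInputOrientation(.portrait)
}

// Assumption: if an AVCaptureSession is involved, it may need
// automaticallyConfiguresApplicationAudioSession = false so these preferences are not overridden.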
Post not yet marked as solved
0 Replies
592 Views
In a VoIP application, when CallKit is enabled and we try playing a video through AVPlayer, the video content only advances frame by frame and its audio is not audible. This issue is observed only on iOS 17. Any idea how we can resolve this?
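A hedged sketch of one thing worth checking, purely an assumption rather than a confirmed iOS 17 fix: with CallKit the system hands the app the call's audio session in the provider delegate, and player audio often stays silent until that session is reconfigured there.

import AVFoundation
import CallKit

final class CallManager: NSObject, CXProviderDelegate {
    func providerDidReset(_ provider: CXProvider) {}

    // CallKit activates the shared audio session for the call; reconfigure it here
    // so that AVPlayer content can be heard alongside the call audio (assumption).
    func provider(_ provider: CXProvider, didActivate audioSession: AVAudioSession) {
        do {
            try audioSession.setCategory(.playAndRecord,
                                         mode: .voiceChat,
                                         options: [.mixWithOthers, .allowBluetooth])
            try audioSession.setActive(true)
        } catch {
            print("Audio session configuration failed: \(error)")
        }
    }
}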
Post not yet marked as solved
5 Replies
1.6k Views
Hello, I'm facing an issue with Xcode 15 and iOS 17: it seems impossible to get AVAudioEngine's audio input node to work on simulator. inputNode has a 0ch, 0kHz input format, connecting input node to any node or installing a tap on it fails systematically. What we tested: Everything works fine on iOS simulators <= 16.4, even with Xcode 15. Nothing works on iOS simulator 17.0 on Xcode 15. Everything works fine on iOS 17.0 device with Xcode 15. More details on this here: https://github.com/Fesongs/InputNodeFormat Any idea on this? Something I'm missing? Thanks for your help 🙏 Tom PS: I filed a bug on Feedback Assistant, but it usually takes ages to get any answer so I'm also trying here 😉
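A small defensive sketch (a workaround idea, not an Apple-documented fix): check the input node's format before wiring it up, so the 0 Hz / 0-channel simulator format fails gracefully instead of crashing the tap installation.

import AVFoundation

let engine = AVAudioEngine()
let input = engine.inputNode
let format = input.inputFormat(forBus: 0)

// On the iOS 17 simulator this format can come back as 0 ch / 0 Hz,
// which makes any connection or tap installation fail.
if format.channelCount > 0 && format.sampleRate > 0 {
    input.installTap(onBus: 0, bufferSize: 4096, format: format) { buffer, time in
        // Process microphone buffers here.
    }
    do {
        try engine.start()
    } catch {
        print("Engine failed to start: \(error)")
    }
} else {
    print("Input node reports an unusable format (\(format)); skipping audio capture.")
}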
Post not yet marked as solved
0 Replies
729 Views
Hi there, I'm having some trouble with AVAudioMixerNode only working when there is a single input, and outputting silence or very quiet buzzing when >1 input node is connected. My setup has voice processing enabled, input going to a sink, and N source nodes going to the main mixer node, going to the output node. In all cases I am connecting nodes in the graph with the same declared format: 48kHz 1 channel Float32 PCM. This is working great for 1 source node, but as soon as I add a second it breaks. I can reproduce this behaviour in the SignalGenerator sample, when the same format is used everywhere. Again, it'll work fine with 1 source node even in this configuration, but add another and there's silence. Am I doing something wrong with formats here? Is this expected? As I understood it, with voice processing on and use of a mixer node I should be able to use my own format essentially everywhere in my graph? My modified SignalGenerator repro example follows:

import Foundation
import AVFoundation

// True replicates my real app's behaviour, which is broken.
// You can remove one source node connection
// to make it work even when this is true.
let showBrokenState: Bool = true

// SignalGenerator constants.
let frequency: Float = 440
let amplitude: Float = 0.5
let duration: Float = 5.0
let twoPi = 2 * Float.pi

let sine = { (phase: Float) -> Float in
    return sin(phase)
}
let whiteNoise = { (phase: Float) -> Float in
    return ((Float(arc4random_uniform(UINT32_MAX)) / Float(UINT32_MAX)) * 2 - 1)
}

// My "application" format.
let format: AVAudioFormat = .init(commonFormat: .pcmFormatFloat32,
                                  sampleRate: 48000,
                                  channels: 1,
                                  interleaved: true)!

// Engine setup.
let engine = AVAudioEngine()
let mainMixer = engine.mainMixerNode
let output = engine.outputNode
try! output.setVoiceProcessingEnabled(true)

let outputFormat = engine.outputNode.inputFormat(forBus: 0)
let sampleRate = Float(format.sampleRate)
let inputFormat = format

var currentPhase: Float = 0
let phaseIncrement = (twoPi / sampleRate) * frequency

let srcNodeOne = AVAudioSourceNode { _, _, frameCount, audioBufferList -> OSStatus in
    let ablPointer = UnsafeMutableAudioBufferListPointer(audioBufferList)
    for frame in 0..<Int(frameCount) {
        let value = sine(currentPhase) * amplitude
        currentPhase += phaseIncrement
        if currentPhase >= twoPi { currentPhase -= twoPi }
        if currentPhase < 0.0 { currentPhase += twoPi }
        for buffer in ablPointer {
            let buf: UnsafeMutableBufferPointer<Float> = UnsafeMutableBufferPointer(buffer)
            buf[frame] = value
        }
    }
    return noErr
}

let srcNodeTwo = AVAudioSourceNode { _, _, frameCount, audioBufferList -> OSStatus in
    let ablPointer = UnsafeMutableAudioBufferListPointer(audioBufferList)
    for frame in 0..<Int(frameCount) {
        let value = whiteNoise(currentPhase) * amplitude
        currentPhase += phaseIncrement
        if currentPhase >= twoPi { currentPhase -= twoPi }
        if currentPhase < 0.0 { currentPhase += twoPi }
        for buffer in ablPointer {
            let buf: UnsafeMutableBufferPointer<Float> = UnsafeMutableBufferPointer(buffer)
            buf[frame] = value
        }
    }
    return noErr
}

engine.attach(srcNodeOne)
engine.attach(srcNodeTwo)
engine.connect(srcNodeOne, to: mainMixer, format: inputFormat)
engine.connect(srcNodeTwo, to: mainMixer, format: inputFormat)
engine.connect(mainMixer, to: output, format: showBrokenState ? inputFormat : outputFormat)

// Put the input node to a sink just to match the formats and make VP happy.
let sink: AVAudioSinkNode = .init { timestamp, numFrames, data in .zero }
engine.attach(sink)
engine.connect(engine.inputNode, to: sink, format: showBrokenState ? inputFormat : outputFormat)

mainMixer.outputVolume = 0.5

try! engine.start()
CFRunLoopRunInMode(.defaultMode, CFTimeInterval(duration), false)
engine.stop()
Post not yet marked as solved
1 Reply
532 Views
Hello, I have struggled to resolve the issue in the question above. My app can speak an utterance while the iPhone is on, but when the iPhone goes to the background (screen off), it doesn't speak any more. I think it should be possible to play audio or speak an utterance in the background, because YouTube can keep playing music while in the background. Any help please??
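A minimal sketch of the usual prerequisites for background speech, assuming the app has the "Audio, AirPlay, and Picture in Picture" background mode enabled in its capabilities: configure and activate a playback audio session before speaking.

import AVFoundation

let synthesizer = AVSpeechSynthesizer()

func speakInBackground(_ text: String) {
    do {
        // A playback session (not ambient) is what normally allows audio
        // to keep running when the app is no longer in the foreground.
        try AVAudioSession.sharedInstance().setCategory(.playback, mode: .spokenAudio)
        try AVAudioSession.sharedInstance().setActive(true)
    } catch {
        print("Audio session error: \(error)")
    }

    let utterance = AVSpeechUtterance(string: text)
    utterance.voice = AVSpeechSynthesisVoice(language: "en-US")
    synthesizer.speak(utterance)
}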
Post not yet marked as solved
0 Replies
452 Views
My User Generated Content for my App is audio-based only and anonymous. All the content is deleted after 24 hours. Do I still need a report button, since I don't know the user and the content gets deleted anyway?
Post not yet marked as solved
0 Replies
627 Views
From an app that reads audio from the built-in microphone, I'm receiving many crash logs where the AVAudioEngine fails to start again after the app was suspended. Basically, I'm calling these two methods in the app delegate's applicationDidBecomeActive and applicationDidEnterBackground methods respectively:

let audioSession = AVAudioSession.sharedInstance()

func startAudio() throws {
    self.audioEngine = AVAudioEngine()
    try self.audioSession.setCategory(.record, mode: .measurement)
    try audioSession.setActive(true)
    self.audioEngine!.inputNode.installTap(onBus: 0, bufferSize: 4096, format: nil, block: { ... })
    self.audioEngine!.prepare()
    try self.audioEngine!.start()
}

func stopAudio() throws {
    self.audioEngine?.stop()
    self.audioEngine?.inputNode.removeTap(onBus: 0)
    self.audioEngine = nil
    try self.audioSession.setActive(false, options: [.notifyOthersOnDeactivation])
}

In the crash logs (iOS 16.6) I'm seeing that this works fine several times as the app is opened and closed, but suddenly the audioEngine.start() call fails with the error

Error Domain=com.apple.coreaudio.avfaudio Code=-10851 "(null)" UserInfo={failed call=err = AUGraphParser::InitializeActiveNodesInInputChain(ThisGraph, *GetInputNode())}

and audioEngine!.inputNode.outputFormat(forBus: 0) is something like <AVAudioFormat 0x282301c70: 2 ch, 0 Hz, Float32, deinterleaved>. Also, right before installing the tap, audioSession.availableInputs contains an entry of type MicrophoneBuiltIn, but audioSession.currentRoute lists no inputs at all. I was not able to reproduce this situation on my own devices yet. Does anyone have an idea why this is happening?
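A hedged sketch of a mitigation worth trying (an assumption, not a confirmed root cause): rebuild the engine from the audio session interruption and media-services-reset notifications rather than only from app lifecycle callbacks, since the 0 Hz input format suggests the session lost its input route while suspended.

import AVFoundation

final class AudioController {
    private var audioEngine: AVAudioEngine?
    private var observers: [NSObjectProtocol] = []

    init() {
        let center = NotificationCenter.default
        // Rebuild the engine when an interruption ends (e.g. a phone call or Siri).
        observers.append(center.addObserver(forName: AVAudioSession.interruptionNotification,
                                            object: nil, queue: .main) { [weak self] note in
            guard let info = note.userInfo,
                  let rawType = info[AVAudioSessionInterruptionTypeKey] as? UInt,
                  AVAudioSession.InterruptionType(rawValue: rawType) == .ended else { return }
            try? self?.restart()
        })
        // A mediaserverd restart invalidates every engine; a full rebuild is required.
        observers.append(center.addObserver(forName: AVAudioSession.mediaServicesWereResetNotification,
                                            object: nil, queue: .main) { [weak self] _ in
            try? self?.restart()
        })
    }

    func restart() throws {
        audioEngine?.stop()
        audioEngine = AVAudioEngine()
        try AVAudioSession.sharedInstance().setCategory(.record, mode: .measurement)
        try AVAudioSession.sharedInstance().setActive(true)
        guard let engine = audioEngine else { return }
        // Only install the tap if the input route actually has channels.
        let format = engine.inputNode.inputFormat(forBus: 0)
        guard format.channelCount > 0 else { return }
        engine.inputNode.installTap(onBus: 0, bufferSize: 4096, format: format) { _, _ in }
        engine.prepare()
        try engine.start()
    }
}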
Post not yet marked as solved
0 Replies
667 Views
Hi! I am working on an audio application on iOS. This is how I retrieve the workgroup from the remote IO audio unit (ioUnit). The unit is initialized and is working fine (meaning that it is regularly called by the system).

os_workgroup_t os_workgroup{nullptr};
uint32_t os_workgroup_index_size;
if (status = AudioUnitGetProperty(ioUnit, kAudioOutputUnitProperty_OSWorkgroup, kAudioUnitScope_Global, 0, &os_workgroup, &os_workgroup_index_size); status != noErr) {
    throw runtime_error("AudioUnitSetProperty kAudioOutputUnitProperty_OSWorkgroup - Failed with OSStatus: " + to_string(status));
}

However, the resulting os_workgroup's value is 0x40, which does not seem correct. No wonder I cannot join any other realtime threads to the workgroup either. The returned status, however, is a solid 0. Can anyone help?
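Two hedged observations, both assumptions rather than confirmed answers: AudioUnitGetProperty conventionally expects the ioDataSize argument to be initialized to the size of the destination buffer before the call, which the snippet above does not do; and when AVAudioEngine is in the picture, the render workgroup can also be read from the node's AUAudioUnit (iOS 14+ / macOS 11+), avoiding the manual property query entirely.

import AVFoundation

let engine = AVAudioEngine()
// ... attach and connect nodes, start the engine ...

// Assumption: the output node's underlying AUAudioUnit exposes the OS workgroup directly.
let workgroup = engine.outputNode.auAudioUnit.osWorkgroup
print(workgroup)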
Post not yet marked as solved
0 Replies
554 Views
When recording audio over Bluetooth from AirPods to an iPhone using AVAudioRecorder, the Bluetooth audio codec used is always AAC-ELD, regardless of the storage codec selected in the AVAudioRecorder settings. As far as I know, every Bluetooth device must support SBC; hence, it should be possible for the AirPods to transmit the recorded audio using the SBC codec instead of AAC-ELD. However, I could not find any resource on how to request this codec using AVAudioRecorder or AVAudioEngine. Is it possible to request SBC at all, and if yes, how?
Post not yet marked as solved
6 Replies
1.2k Views
Prior to iOS 17, I used AVAudioFile to open (for reading) the assetURL of MPMediaItem for songs that the user purchased through the iTunes Store. With the iOS 17 beta, this no longer seems possible, as AVAudioFile throws this:

ExtAudioFile.cpp:211 about to throw -54: open audio file
AVAEInternal.h:109 [AVAudioFile.mm:135:AVAudioFileImpl: (ExtAudioFileOpenURL((CFURLRef)fileURL, &_extAudioFile)): error -54

I also can't copy the URL to the Documents directory, because I get this: The file “item.m4a” couldn’t be opened because URL type ipod-library isn’t supported. This seems to be affecting other apps on the App Store besides mine, and it will reflect very badly on my app if this makes it into the final iOS 17, because I have encouraged users to buy songs on the iTunes Store to use with my app. Now it seems like there is no way to access them. Is this a known bug, or is there some type of workaround?
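A hedged workaround sketch, assuming the items are not DRM-protected and that exporting library items is still permitted on iOS 17: export the item to a local file with AVAssetExportSession and open that copy with AVAudioFile.

import AVFoundation
import MediaPlayer

// Hypothetical helper: copy a library item to the app's temporary directory for reading.
func exportForReading(_ item: MPMediaItem, completion: @escaping (AVAudioFile?) -> Void) {
    guard let assetURL = item.assetURL else { completion(nil); return }
    let asset = AVURLAsset(url: assetURL)
    guard let exporter = AVAssetExportSession(asset: asset,
                                              presetName: AVAssetExportPresetAppleM4A) else {
        completion(nil); return
    }
    let destination = FileManager.default.temporaryDirectory
        .appendingPathComponent(UUID().uuidString)
        .appendingPathExtension("m4a")
    exporter.outputURL = destination
    exporter.outputFileType = .m4a
    exporter.exportAsynchronously {
        guard exporter.status == .completed else { completion(nil); return }
        completion(try? AVAudioFile(forReading: destination))
    }
}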
Post not yet marked as solved
0 Replies
862 Views
Our app is a game written in Unity, where most of our audio playback is handled by Unity. However, one of our game experiences uses microphone input for speech recognition, so in order to perform echo cancellation (while the game has audio playback), we set up an audio stream from Unity to native Swift code that performs the mixing of the input/output nodes. We found, however, that by streaming the audio buffer to our AVAudioSession:
- The volume of the audio playback comes out differently.
- When capturing a screen recording of the app, the audio playback played through AVAudioSession does not get captured at all.
We're looking to figure out what could be causing the discrepancy in playback as well as the capture behaviour during screen recordings. We set up the AVAudioSession with this configuration: AVAudioSession.sharedInstance().setCategory(AVAudioSession.Category.playAndRecord, options: .mixWithOthers), with inputNode.setVoiceProcessingEnabled(true) after connecting our IO and mixer nodes. Any suggestions or ideas on what to look out for would be appreciated!
Post not yet marked as solved
3 Replies
1.3k Views
Hi community, I'm developing an application for macOS and I need to capture the mic audio stream. Currently, using Core Audio in Swift, I'm able to capture the audio stream using IO procs and have applied AUVoiceProcessing to prevent echo from the speaker device. I was able to connect the audio unit and perform the echo cancellation. The problem I'm getting is that when I'm using AUVoiceProcessing, the gain of the two devices gets reduced, and that affects the volume of both devices (microphone and speaker). I have tried to disable the AGC using the property kAUVoiceIOProperty_VoiceProcessingEnableAGC, but the results are the same. Is there any option to disable the gain reduction, or is there a better approach to get the echo cancellation working?
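For reference, a sketch of how the AGC flag would typically be cleared on a voice-processing IO unit. Whether this actually removes the gain reduction on macOS is an open question; the level drop may come from the voice-processing algorithm itself rather than AGC.

import AudioToolbox

// Assumes `ioUnit` is an already-created kAudioUnitSubType_VoiceProcessingIO AudioUnit.
func disableAGC(on ioUnit: AudioUnit) -> OSStatus {
    var disable: UInt32 = 0
    return AudioUnitSetProperty(ioUnit,
                                kAUVoiceIOProperty_VoiceProcessingEnableAGC,
                                kAudioUnitScope_Global,
                                0,
                                &disable,
                                UInt32(MemoryLayout<UInt32>.size))
}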
Post not yet marked as solved
0 Replies
868 Views
We are developing an app that uses external hardware to measure analogue hearing-loop performance. It uses the audio jack on the phone/iPad. With the new hardware on iPads using USB-C, we have noticed that the same input, once through a Lightning adapter and once through a USB-C adapter, produces very different input levels. The USB-C one is ~23 dB lower, with the same code and settings. That's almost a 10x difference. Is there any way to control the USB-C adapter? Am I missing something? The code simply uses AVAudioInputNode with a block attached to it via self.inputNode.installTap. We do adjust the gain to 1.0:

let gain: Float = 1.0
try session.setInputGain(gain)

But that still does not help. I wish there was an Apple lab I could go to, to speak to engineers about it.
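One small check worth adding, an assumption on my part rather than a documented fix: setInputGain only has an effect when the current route reports that input gain is settable, which many external USB audio adapters do not.

import AVFoundation

let session = AVAudioSession.sharedInstance()

// Log which input route is active and whether its gain can be changed at all.
for input in session.currentRoute.inputs {
    print("Input port: \(input.portName) (\(input.portType.rawValue))")
}
if session.isInputGainSettable {
    try? session.setInputGain(1.0)
} else {
    print("Input gain is not settable on this route; level differences must be compensated in software.")
}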
Post not yet marked as solved
0 Replies
692 Views
Hi, I'm working a lot with Logic Pro and this is the 4th time the application has crashed. This is the report I receive. What can I do to fix it? Thank you in advance.

Translated Report (Full Report Below)

Process:          Logic Pro X [1433]
Path:             /Applications/Logic Pro X.app/Contents/MacOS/Logic Pro X
Identifier:       com.apple.logic10
Version:          10.7.7 (5762)
Build Info:       MALogic-5762000000000000~2 (1A85)
App Item ID:      634148309
App External ID:  854029738
Code Type:        X86-64 (Native)
Parent Process:   launchd [1]
User ID:          501

Date/Time:        2023-07-01 09:16:42.7422 +0200
OS Version:       macOS 13.3.1 (22E261)
Report Version:   12
Bridge OS Version: 7.4 (20P4252)
Anonymous UUID:   F5E0021C-707D-3E26-12BC-6E1D779A746A

Time Awake Since Boot: 2700 seconds

System Integrity Protection: enabled

Crashed Thread:   0  Dispatch queue: com.apple.main-thread

Exception Type:   EXC_BAD_ACCESS (SIGSEGV)
Exception Codes:  KERN_INVALID_ADDRESS at 0x0000000000000010
Exception Codes:  0x0000000000000001, 0x0000000000000010

Termination Reason: Namespace SIGNAL, Code 11 Segmentation fault: 11
Terminating Process: exc handler [1433]

VM Region Info: 0x10 is not in any region. Bytes before following region: 140737486778352
      REGION TYPE      START - END                  [ VSIZE] PRT/MAX SHRMOD REGION DETAIL
      UNUSED SPACE AT START
--->  shared memory    7fffffe7f000-7fffffe80000    [    4K] r-x/r-x SM=SHM

Thread 0 Crashed:: Dispatch queue: com.apple.main-thread
0   Logic Pro X       0x108fe6972 0x108a75000 + 5708146
1   Logic Pro X       0x108def2d3 0x108a75000 + 3646163
2   Foundation        0x7ff80e4b3f35 __NSFireDelayedPerform + 440
3   CoreFoundation    0x7ff80d623478 CFRUNLOOP_IS_CALLING_OUT_TO_A_TIMER_CALLBACK_FUNCTION + 20
4   CoreFoundation    0x7ff80d622ff3 __CFRunLoopDoTimer + 807
5   CoreFoundation    0x7ff80d622c19 __CFRunLoopDoTimers + 285
6   CoreFoundation    0x7ff80d608f79 __CFRunLoopRun + 2206
7   CoreFoundation    0x7ff80d608071 CFRunLoopRunSpecific + 560
8   HIToolbox         0x7ff817070fcd RunCurrentEventLoopInMode + 292
9   HIToolbox         0x7ff817070dde ReceiveNextEventCommon + 657
10  HIToolbox         0x7ff817070b38 _BlockUntilNextEventMatchingListInModeWithFilter + 64
11  AppKit            0x7ff81069a7a0 _DPSNextEvent + 858
12  AppKit            0x7ff81069964a -[NSApplication(NSEvent) _nextEventMatchingEventMask:untilDate:inMode:dequeue:] + 1214
13  Logic Pro X       0x10a29885d 0x108a75000 + 25311325
14  MAToolKit         0x1117f0e37 0x1116ec000 + 1068599
15  MAToolKit         0x1117f64ae 0x1116ec000 + 1090734
16  AppKit            0x7ff8108864b1 -[NSWindow(NSEventRouting) _handleMouseDownEvent:isDelayedEvent:] + 4330
17  AppKit            0x7ff8107fdcef -[NSWindow(NSEventRouting) _reallySendEvent:isDelayedEvent:] + 404
18  AppKit            0x7ff8107fd93f -[NSWindow(NSEventRouting) sendEvent:] + 345
19  Logic Pro X       0x108ebf486 0x108a75000 + 4498566
20  AppKit            0x7ff8107fc319 -[NSApplication(NSEvent) sendEvent:] + 345
21  Logic Pro X       0x10a2995f4 0x108a75000 + 25314804
22  Logic Pro X       0x10a2990c9 0x108a75000 + 25313481
23  Logic Pro X       0x10a29337f 0x108a75000 + 25289599
24  Logic Pro X       0x10a29962e 0x108a75000 + 25314862
25  Logic Pro X       0x10a2990c9 0x108a75000 + 25313481
26  AppKit            0x7ff810ab6bbe -[NSApplication _handleEvent:] + 65
27  AppKit            0x7ff81068bcdd -[NSApplication run] + 623
28  AppKit            0x7ff81065fed2 NSApplicationMain + 817
29  Logic Pro X       0x10956565d 0x108a75000 + 11470429
30  dyld              0x7ff80d1d441f start + 1903

Thread 1:: caulk.messenger.shared:17
0   libsystem_kernel.dylib   0x7ff80d4ef52e semaphore_wait_trap + 10
1   caulk                    0x7ff816da707e caulk::semaphore::timed_wait(double) + 150
2   caulk                    0x7ff816da6f9c caulk::concurrent::details::worker_thread::run() + 30
3   caulk                    0x7ff816da6cb0 void* caulk::thread_proxy<std::__1::tuple<caulk::thread::attributes, void (caulk::concurrent::details::worker_thread::)(), std::__1::tuplecaulk::concurrent::details::worker_thread*>>(void) + 41
4   libsystem_pthread.dylib  0x7ff80d52e1d3 _pthread_start + 125
5   libsystem_pthread.dylib  0x7ff80d529bd3 thread_start + 15

Thread 2:: com.apple.NSEventThread
0   libsystem_kernel.dylib   0x7ff80d4ef5b2 mach_msg2_trap + 10
1   libsystem_kernel.dylib   0x7ff80d4fd72d mach_msg2_internal + 78
2   libsystem_kernel.dylib   0x7ff80d4f65e4 mach_msg_overwrite + 692
3   libsystem_kernel.dylib   0x7ff80d4ef89a mach_msg + 19
4   SkyLight                 0x7ff81219f7ac CGSSnarfAndDispatchDatagrams + 160
5   SkyLight                 0x7ff8124b8cfd SLSGetNextEventRecordInternal + 284
6   SkyLight                 0x7ff8122d8360 SLEventCreateNextEvent + 9
7   HIToolbox                0x7ff81707bfea PullEventsFromWindowServerOnConnection(unsigned int, unsigned char, __CFMachPortBoost*) + 45
8   HIToolbox                0x7ff81707bf8b MessageHandler(__CFMachPort*, void*, long, void*) + 48
9   CoreFoundation           0x7ff80d637e66 __CFMachPortPerform + 244
10  CoreFoundation           0x7ff80d60a5a3 CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE1_PERFORM_FUNCTION + 41
11  CoreFoundation           0x7ff80d60a4e3 __CFRunLoopDoSource1 + 540
12  CoreFoundation           0x7ff80d609161 __CFRunLoopRun + 2694
13  CoreFoundation           0x7ff80d608071 CFRunLoopRunSpecific + 560
14  AppKit                   0x7ff8107fa909 _NSEventThread + 132
15  libsystem_pthread.dylib  0x7ff80d52e1d3 _pthread_start + 125
16  libsystem_pthread.dylib  0x7ff80d529bd3 thread_start + 15
Post not yet marked as solved
0 Replies
1.1k Views
I have a timer in one of my apps. I now want to add audio that plays at the end of the timer. It's a workout app, and the sound should remind the user that it is time for the next exercise. The audio should duck music playback (Apple Music / Spotify) and also work in the background. Background audio is enabled for the app. I am not able to achieve everything at the same time. I set the audio session to the playback category with the duckOthers option:

do {
    try AVAudioSession.sharedInstance().setCategory(.playback, options: .duckOthers)
} catch {
    print(error)
}

For playback I just use AVAudioPlayer. When the user starts the timer, I schedule a timer in the future and play the sound. While this works perfectly in the foreground, the sound is not played back when going to the background, as timers are not fired in the background, but rather when the user puts the app back in the foreground. I have also tried using AVAudioEngine and AVAudioPlayerNode, as the latter can start playback delayed. The case from above works now, but the audio ducking begins immediately when initialising the AVAudioEngine, which is also not what I want. Is there any other approach that I am not aware of?
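For what it's worth, a hedged sketch of a variant to experiment with. Assumptions: ducking begins when the session is activated, AVAudioPlayer's play(atTime:) fires on the audio clock even while the app is backgrounded, and deactivating with notifyOthersOnDeactivation afterwards restores the other app's volume promptly. This does not eliminate ducking between session activation and the cue, so it may only narrow the problem rather than solve it.

import AVFoundation

final class WorkoutCuePlayer {
    private var player: AVAudioPlayer?
    private let session = AVAudioSession.sharedInstance()

    // Schedule the cue sound `delay` seconds from now using the audio clock
    // instead of a Timer, since Timers do not fire while the app is in the background.
    func scheduleCue(soundURL: URL, after delay: TimeInterval) throws {
        try session.setCategory(.playback, options: .duckOthers)
        try session.setActive(true) // Note: ducking of other audio starts here.

        let player = try AVAudioPlayer(contentsOf: soundURL)
        player.prepareToPlay()
        let scheduled = player.play(atTime: player.deviceCurrentTime + delay)
        assert(scheduled, "play(atTime:) refused to schedule")
        self.player = player
    }

    // Call after the cue has finished so other apps' audio returns to full volume.
    func finish() {
        player?.stop()
        player = nil
        try? session.setActive(false, options: [.notifyOthersOnDeactivation])
    }
}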
Post not yet marked as solved
1 Reply
954 Views
I am analysing sounds by tapping the mic on the Mac. All is working well, but it disrupts other (what I assume are) low-priority sounds, e.g. dragging an item off the Dock, sending a message in Messages, speaking something in Shortcuts or Terminal. Other sounds, like Music.app playing or Siri speaking, are not disrupted. The disruption sounds like the last part of the sound being repeated two extra times, which is very noticeable. This is the code:

import Cocoa
import AVFAudio

class AudioHelper: NSObject {
    let audioEngine = AVAudioEngine()

    func start() async throws {
        audioEngine.inputNode.installTap(onBus: 0, bufferSize: 8192, format: nil) { buffer, time in
        }
        try audioEngine.start()
    }
}

I have tried increasing the buffer, changing the QoS to utility (in the hope the sound analysis would become less important than the disrupted sounds), and running on a non-main thread, but no luck. macOS 13.4.1. Any assistance would be appreciated.
Post not yet marked as solved
2 Replies
1.3k Views
I cannot seem to create an AVAudioFile from a URL to be played in an AVAudioEngine. Here is my complete code, following the documentation.

import UIKit
import AVKit
import AVFoundation

class ViewController: UIViewController {
    let audioEngine = AVAudioEngine()
    let audioPlayerNode = AVAudioPlayerNode()

    override func viewDidLoad() {
        super.viewDidLoad()
        streamAudioFromURL(urlString: "https://samplelib.com/lib/preview/mp3/sample-9s.mp3")
    }

    func streamAudioFromURL(urlString: String) {
        guard let url = URL(string: urlString) else {
            print("Invalid URL")
            return
        }
        let audioFile = try! AVAudioFile(forReading: url)
        let audioEngine = AVAudioEngine()
        let playerNode = AVAudioPlayerNode()
        audioEngine.attach(playerNode)
        audioEngine.connect(playerNode, to: audioEngine.outputNode, format: audioFile.processingFormat)
        playerNode.scheduleFile(audioFile, at: nil, completionCallbackType: .dataPlayedBack) { _ in
            /* Handle any work that's necessary after playback. */
        }
        do {
            try audioEngine.start()
            playerNode.play()
        } catch {
            /* Handle the error. */
        }
    }
}

I am getting the following error on let audioFile = try! AVAudioFile(forReading: url):

Thread 1: Fatal error: 'try!' expression unexpectedly raised an error: Error Domain=com.apple.coreaudio.avfaudio Code=2003334207 "(null)" UserInfo={failed call=ExtAudioFileOpenURL((CFURLRef)fileURL, &_extAudioFile)}

I have tried many other .mp3 file URLs as well as .wav and .m4a and none seem to work. The documentation makes this look so easy but I have been trying for hours to no avail. If you have any suggestions, they would be greatly appreciated!
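A likely explanation plus a hedged sketch: AVAudioFile(forReading:) only opens local file URLs and does not stream over HTTP, so a remote URL fails in ExtAudioFileOpenURL. One workaround, assuming it is acceptable to download the whole file first, is to fetch it with URLSession and open the local copy.

import AVFoundation

// Hypothetical helper: download a remote audio file, then hand the local copy to AVAudioFile.
func downloadAndPlay(remote urlString: String, engine: AVAudioEngine, player: AVAudioPlayerNode) {
    guard let url = URL(string: urlString) else { return }
    URLSession.shared.downloadTask(with: url) { tempURL, _, error in
        guard let tempURL, error == nil else { return }
        // Give the temporary file the right extension so Core Audio can sniff the format.
        let localURL = FileManager.default.temporaryDirectory
            .appendingPathComponent(UUID().uuidString)
            .appendingPathExtension(url.pathExtension.isEmpty ? "mp3" : url.pathExtension)
        do {
            try FileManager.default.moveItem(at: tempURL, to: localURL)
            let file = try AVAudioFile(forReading: localURL)
            engine.attach(player)
            engine.connect(player, to: engine.mainMixerNode, format: file.processingFormat)
            try engine.start()
            player.scheduleFile(file, at: nil)
            player.play()
        } catch {
            print("Playback setup failed: \(error)")
        }
    }.resume()
}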
Post not yet marked as solved
0 Replies
634 Views
The Situation

I'm on macOS and I have an AVCaptureSession with camera and audio device inputs which are fed into an AVCaptureMovieFileOutput. What I am looking for is a way to map audio device input channels to file output audio channels, preferably using an explicit channel map. By default, AVCaptureMovieFileOutput takes (presumably) the maximum number of input channels from an audio device that matches an audio format supported by the capture output, and records all of them. This works as expected for mono devices like the built-in microphone and stereo USB mics, the result being either a 1ch mono or a 2ch stereo audio track in the recorded media file. However, the user experience breaks down for 2ch input devices that have an input signal on only one channel, which is reasonable for a 2ch audio interface with one mic connected. This produces a stereo track with the one input channel panned hard to one side. It gets even weirder for multichannel interfaces. For example, an 8ch audio input device results in a 7.1 audio track in the recorded media file with input audio mapped to separate tracks. This is far from ideal during playback, where audio sources are surprisingly coming from seemingly random directions.

The Favored Solution

Ideally, users should be able to select via UI which channels of their audio input device will be mapped to which audio channel in the recorded media file. The resulting channel map would be configured somewhere on the capture session.

The Workaround

I have found that AVCaptureFileOutput does not respond well to channel layouts that are not standard audio formats like mono, stereo, quadraphonic, 5.1, and 7.1. This means channel descriptions and channel bitmaps are out of the question. What does work is configuring the output with one of the supported channel layouts and disabling audio channels via AVCaptureConnection. With that, the output's encoder produces reasonable results for mono and stereo input devices if the configured channel layout is kAudioChannelLayoutTag_Stereo, but anything else is mixed down to mono. I am somewhat sympathetic to this solution insofar as, in lieu of an explicit channel map, the best guess the audio encoder could make is mixing every enabled channel down to mono. But, as described above, this breaks for 2ch input devices where only one channel is connected to a signal source. The result is a stereo track with audio hard panned to one side.

The Question

Is there a way to implement the described favored solution with AVCapture* API only, and if not, what's the preferred way of dealing with this scenario - going directly for AVAudioEngine and AVAssetWriter?
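For reference, a hedged sketch of the channel-disabling workaround described above (macOS-only API; whether the encoder then produces the desired layout is exactly the open question in this post):

import AVFoundation

// Assumes `movieOutput` is an AVCaptureMovieFileOutput already added to a configured session.
// On macOS, each audio AVCaptureConnection exposes its channels, which can be disabled
// individually so they are left out of the recording.
func keepOnlyFirstAudioChannel(of movieOutput: AVCaptureMovieFileOutput) {
    guard let audioConnection = movieOutput.connection(with: .audio) else { return }
    for (index, channel) in audioConnection.audioChannels.enumerated() {
        channel.isEnabled = (index == 0) // keep channel 0, drop the rest
    }
}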