Post not yet marked as solved
I am using AVAudioSession with playAndRecord category as follows:
private func setupAudioSessionForRecording() {
    let audioSession = AVAudioSession.sharedInstance()
    do {
        try audioSession.setActive(false)
        try audioSession.setPreferredSampleRate(48000)
    } catch {
        NSLog("Unable to deactivate audio session: \(error)")
    }
    let options: AVAudioSession.CategoryOptions = [.allowAirPlay, .mixWithOthers]
    do {
        try audioSession.setCategory(.playAndRecord, mode: .default, options: options)
    } catch {
        NSLog("Could not set audio session category: \(error)")
    }
    do {
        try audioSession.setActive(true)
    } catch {
        NSLog("Unable to activate audio session: \(error)")
    }
}
Next I use AVAudioEngine to repeat what I say into the microphone through external speakers (on the TV connected to the iPhone via an HDMI cable).
//MARK:- AudioEngine
var engine: AVAudioEngine!
var playerNode: AVAudioPlayerNode!
var mixer: AVAudioMixerNode!
var audioEngineRunning = false

public func setupAudioEngine() {
    self.engine = AVAudioEngine()
    engine.connect(self.engine.inputNode, to: self.engine.outputNode, format: nil)
    engine.prepare()
    do {
        try self.engine.start()
        audioEngineRunning = true
    } catch {
        print("error: couldn't start engine: \(error)")
    }
}

public func stopAudioEngine() {
    engine.stop()
    audioEngineRunning = false
}
The issue is that after I speak for a few seconds, I hear a reverb/humming noise that keeps getting amplified and repeated. If I use a RemoteIO unit instead, no such noise comes out of the speakers. I am not sure whether my AVAudioEngine setup is correct. I have tried all kinds of AVAudioSession configurations but nothing changes.
A sample recording with the background speaker noise is posted in this Stack Overflow question: https://stackoverflow.com/questions/72170548/echo-when-using-avaudioengine-over-hdmi#comment127514327_72170548
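One experiment worth trying (a hedged sketch, not a confirmed fix): noise that builds up and repeats like this is characteristic of an acoustic feedback loop between the TV speakers and the iPhone microphone. The `.voiceChat` session mode turns on the system's echo cancellation (the voice-processing I/O path that a kAudioUnitSubType_VoiceProcessingIO RemoteIO setup gets), which may break the loop:

```swift
import AVFoundation

// Sketch: configure the shared session with echo cancellation enabled.
// The category options mirror the ones used in the post above.
func setupSessionWithEchoCancellation() throws {
    let session = AVAudioSession.sharedInstance()
    // .voiceChat applies acoustic echo cancellation to the input path.
    try session.setCategory(.playAndRecord,
                            mode: .voiceChat,
                            options: [.allowAirPlay, .mixWithOthers])
    try session.setActive(true)
}
```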
We start a voice recording via
self.avAudioRecorder = try AVAudioRecorder(
url: self.recordingFileUrl,
settings: settings
)
self.avAudioRecorder.record()
At certain point, we will stop the recording via
self.avAudioRecorder.stop()
I was wondering: is it safe to perform a file copy on self.recordingFileUrl immediately after self.avAudioRecorder.stop()?
Has all recording data been flushed to self.recordingFileUrl, and has the file been closed properly?
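One common pattern (a sketch, assuming the recorder's delegate is set before record() is called) is to wait for audioRecorderDidFinishRecording(_:successfully:) before touching the file, since that callback is delivered after the recorder has finished writing:

```swift
import AVFoundation

final class RecorderDelegate: NSObject, AVAudioRecorderDelegate {
    // Delivered after stop() once the recorder has finished writing the
    // file; copying it here avoids racing the recorder's own I/O.
    func audioRecorderDidFinishRecording(_ recorder: AVAudioRecorder,
                                         successfully flag: Bool) {
        guard flag else { return }
        // destinationUrl is a hypothetical target location.
        let destinationUrl = FileManager.default.temporaryDirectory
            .appendingPathComponent("copy.m4a")
        try? FileManager.default.copyItem(at: recorder.url, to: destinationUrl)
    }
}
```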
Post not yet marked as solved
I want to record both IMU data and audio data from AirPods Pro. I have tried many times and failed. I can successfully record the IMU data and the iPhone's microphone data simultaneously, but when I select the AirPods Pro microphone via the setCategory() options, the IMU data collection stops.
If I change recordingSession.setCategory(.playAndRecord, mode: .default, options: .allowBluetooth) to recordingSession.setCategory(.playAndRecord, mode: .default), everything is okay except that the audio is recorded from the iPhone's built-in microphone. If I add options: .allowBluetooth, the IMU updates stop. Could you give me some suggestions?
Below are some parts of my code.
let My_IMU = CMHeadphoneMotionManager()
let My_writer = CSVWriter()
var write_state: Bool = false

func test() {
    recordingSession = AVAudioSession.sharedInstance()
    do {
        try recordingSession.setCategory(.playAndRecord, mode: .default, options: .allowBluetooth)
        try recordingSession.setActive(true)
        recordingSession.requestRecordPermission() { [unowned self] allowed in
            DispatchQueue.main.async {
                if allowed == false { print("failed to record!") }
            }
        }
    } catch {
        print("failed to record!")
    }
    let audioFilename = getDocumentsDirectory().appendingPathComponent("test_motion_Audio.m4a")
    let settings = [
        AVFormatIDKey: Int(kAudioFormatMPEG4AAC),
        AVSampleRateKey: 8000,
        AVNumberOfChannelsKey: 1,
        AVEncoderAudioQualityKey: AVAudioQuality.high.rawValue
    ]
    do {
        audioRecorder = try AVAudioRecorder(url: audioFilename, settings: settings)
        audioRecorder.delegate = self
        audioRecorder.record()
    } catch {
        print("Fail to record!")
        finishRecording()
    }
    write_state.toggle()
    let dir = FileManager.default.urls(
        for: .documentDirectory,
        in: .userDomainMask
    ).first!
    let filename = "test_motion_Audio.csv"
    let fileUrl = dir.appendingPathComponent(filename)
    My_writer.open(fileUrl)
    My_IMU.startDeviceMotionUpdates(to: OperationQueue.current!, withHandler: { [weak self] motion, error in
        guard let motion = motion, error == nil else { return }
        self?.My_writer.write(motion)
    })
}
Post not yet marked as solved
I can no longer run my app in the Simulator with a version lower than iOS 14.
I tried iOS 12, iOS 12.4, and iOS 13.7, and they all crash with the same error.
This only started after upgrading to Big Sur. Nothing has changed in my code base.
This is the code that crashes:
// Set the render callback on the unit
AURenderCallbackStruct callbackStruct;
callbackStruct.inputProc = playbackCallback;
callbackStruct.inputProcRefCon = self;
status = AudioUnitSetProperty(_audioUnit,
                              kAudioUnitProperty_SetRenderCallback,
                              kAudioUnitScope_Input,
                              kOutputBus,
                              &callbackStruct,
                              sizeof(callbackStruct));
NSAssert1(status == noErr, @"Error setting callback: %d", (int)status);

// Start audio
AudioOutputUnitStart(_audioUnit);
The error "Thread 1: signal SIGABRT" occurs at the last line...
AudioOutputUnitStart(_audioUnit);
And the errors in the console are:
/Library/Audio/Plug-Ins/HAL/JackRouter.plugin/Contents/MacOS/JackRouter: mach-o, but not built for iOS simulator
2020-11-25 15:17:34.229006-0800 PolyNome[20602:879922] Cannot find function pointer New_JackRouterPlugIn for factory <CFUUID 0x600003f11a80> 7CB18864-927D-48B5-904C-CCFBCFBC7ADD in CFBundle/CFPlugIn 0x7fbe247a7390 </Library/Audio/Plug-Ins/HAL/JackRouter.plugin> (bundle, not loaded)
2020-11-25 15:17:34.461372-0800 PolyNome[20602:880389] [AudioHAL_Client] HALB_IOBufferManager.cpp:226:GetIOBuffer: HALB_IOBufferManager::GetIOBuffer: the stream index is out of range
2020-11-25 15:17:34.461528-0800 PolyNome[20602:880389] [AudioHAL_Client] HALB_IOBufferManager.cpp:226:GetIOBuffer: HALB_IOBufferManager::GetIOBuffer: the stream index is out of range
2020-11-25 15:17:34.474317-0800 PolyNome[20602:880389] [aqme] 254: AQDefaultDevice (1): output stream 0: null buffer
2020-11-25 15:17:34.475993-0800 PolyNome[20602:880389] [aqme] 1640: EXCEPTION thrown (-50): -
2020-11-25 15:17:43.424327-0800 PolyNome[20602:879922] RPCTimeout.mm:55:_ReportRPCTimeout: Start: Mach message timeout. Apparently deadlocked. Aborting now.
CoreSimulator 732.18.0.2 - Device: iPhone 8 Plus (796F538B-78DA-4FE7-9005-317621931E88) - Runtime: iOS 12.4 (16G73) - DeviceType: iPhone 8 Plus
As I said, the only thing that's changed since it last worked is that I upgraded to Big Sur.
It runs fine in the latest iOS simulator, but not in simulators running a lower iOS version.
The Audio Output of the Simulator is set to Internal Speakers.
Any help is much appreciated.
Post not yet marked as solved
I have a RemoteIO unit that successfully plays back the microphone samples in realtime via attached headphones. I need to port the same functionality to AVAudioEngine, but I can't seem to make a head start. Here is my code; all I do is connect the inputNode to the playerNode, which crashes.
var engine: AVAudioEngine!
var playerNode: AVAudioPlayerNode!
var mixer: AVAudioMixerNode!
var engineRunning = false

private func setupAudioSession() {
    let options: AVAudioSession.CategoryOptions = [.allowBluetooth, .allowBluetoothA2DP]
    do {
        try AVAudioSession.sharedInstance().setCategory(.playAndRecord, mode: .default, options: options)
        try AVAudioSession.sharedInstance().setAllowHapticsAndSystemSoundsDuringRecording(true)
    } catch {
        MPLog("Could not set audio session category")
    }
    let audioSession = AVAudioSession.sharedInstance()
    do {
        try audioSession.setActive(false)
        try audioSession.setPreferredSampleRate(44100)
    } catch {
        print("Unable to deactivate audio session")
    }
    do {
        try audioSession.setActive(true)
    } catch {
        print("Unable to activate audio session")
    }
}

private func setupAudioEngine() {
    self.engine = AVAudioEngine()
    self.playerNode = AVAudioPlayerNode()
    self.engine.attach(self.playerNode)
    engine.connect(self.engine.inputNode, to: self.playerNode, format: nil)
    do {
        try self.engine.start()
    } catch {
        print("error: couldn't start engine")
    }
    engineRunning = true
}
But starting AVAudioEngine causes a crash:
libc++abi: terminating with uncaught exception of type NSException
*** Terminating app due to uncaught exception 'com.apple.coreaudio.avfaudio', reason:
'required condition is false: inDestImpl->NumberInputs() > 0 || graphNodeDest->CanResizeNumberOfInputs()'
terminating with uncaught exception of type NSException
How do I get realtime record and playback of mic samples via headphones working?
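For what it's worth, the exception message ("NumberInputs() > 0") is the engine complaining that the destination node accepts no inputs: AVAudioPlayerNode is a source node, so nothing can be connected *to* it. A minimal sketch of mic-to-headphones monitoring (assuming the session is already configured for .playAndRecord) routes the input node into the main mixer instead:

```swift
import AVFoundation

// Sketch: play microphone input through the current output route.
func makeMonitoringEngine() throws -> AVAudioEngine {
    let engine = AVAudioEngine()
    let input = engine.inputNode
    // Use the input node's own format to avoid sample-rate mismatches.
    let format = input.inputFormat(forBus: 0)
    // mainMixerNode is implicitly connected to outputNode.
    engine.connect(input, to: engine.mainMixerNode, format: format)
    engine.prepare()
    try engine.start()
    return engine
}
```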
Post not yet marked as solved
I have an audio session enabled and I'm using a screen-sharing extension built with ReplayKit.
If screen sharing is active and you force-close the app during a phone call, the screen-sharing extension does not end. Is there a way to end it?
Post not yet marked as solved
Hello,
Our beta testers have recently had issues connecting their AirPods to the Apple Watch. We have an AVAudioSession in the watch app and have set up a workout session, etc. Regular Bluetooth headphones have no issues, and we can see the connection through AVAudioSession.routeChangeNotification for them, but not for AirPods for some reason. Does anyone have an idea of what may cause this? Thank you.
Post not yet marked as solved
Here is my situation:
I'm working on an app that has background music. This should obviously be muted by the silent switch. For this reason, I'm using AVAudioSession category ambient.
However, part of the app is a list of audio samples. Each one has a preview button, a big ol' rightwards facing triangle ▶️. That's a play button, and by tapping it the user is unambiguously indicating they want to hear the audio. But in ambient, those sounds are silenced if the silent switch is enabled; the play button switches to pause, the progress indicator runs as normal, but no sound comes out. To the user, this looks like a bug. I can't even tell them why it's not working, because there's no API to tell that it's not.
Ok, fine, I can switch the category to playback and we'll hear them… but then we'll also hear the background music suddenly fade in, and then back out after we're done playing the sample.
Alright, so we can pause the BGM before changing the category and playing the sample, then change it back and unpause the BGM. This now works as expected if you have the device on silent.
But these are only samples – sound effects, really – and they should play over the background music, not interrupt it. If we don't have the silent switch enabled, we get the BGM suddenly fading out before playing our sample, and that's almost as bad.
Basically, I feel like I'm being hamstrung by the all-or-nothing nature of the AVAudioSession. There are some sounds I want to play regardless of the position of the silent switch, and others that I want to mute if the silent switch is enabled. But I have no idea how to do that, or if it's even possible.
Here are the options as I see them, and the issues with each of them.
Category                         | Silent                                    | Ring                          | Issue
playback                         | Plays all audio                           | Plays all audio               | Unacceptable: BGM overrides silent switch
ambient                          | Plays no audio                            | Plays all audio               | Sample list appears broken when device is silenced
ambient => playback              | Plays BGM and SFX when user presses play  | Plays correctly               | Weird fade-in of background music when silenced
ambient => playback, pausing BGM | Plays correctly                           | BGM pauses before playing SFX | Weird fade-out of background music when not silenced
Can anyone offer any advice, here?
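The "ambient => playback, pausing BGM" option described above can be sketched as follows (a minimal sketch; bgmPlayer and samplePlayer are hypothetical AVAudioPlayer instances):

```swift
import AVFoundation

// Sketch of the category-switching workaround described in the table.
func playSample(_ samplePlayer: AVAudioPlayer, pausing bgmPlayer: AVAudioPlayer) throws {
    let session = AVAudioSession.sharedInstance()
    bgmPlayer.pause()                    // stop BGM so it can't override the switch
    try session.setCategory(.playback)   // sample plays regardless of silent switch
    try session.setActive(true)
    samplePlayer.play()
    // When the sample ends (e.g. in audioPlayerDidFinishPlaying), restore:
    //   try session.setCategory(.ambient)
    //   bgmPlayer.play()
}
```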
Post not yet marked as solved
I recently hit a weird crash that appears only on iOS 14.5 and iOS 14.5.1, and which I cannot reproduce.
The crash log contains only code from system frameworks; it doesn't contain any code from my app. Does anybody know why? Any information will be appreciated. Below is the stack trace; "TaxiDriverOwner" is my project name.
Date/Time: 2022-04-06 15:20:45.2457 +0800
Launch Time: 2022-04-06 15:15:54.6033 +0800
OS Version: iPhone OS 14.5.1 (18E212)
Release Type: User
Baseband Version: 1.62.11
Report Version: 104
Exception Type: EXC_CRASH (SIGABRT)
Exception Codes: 0x0000000000000000, 0x0000000000000000
Exception Note: EXC_CORPSE_NOTIFY
Triggered by Thread: 0
Thread 0 name:
Thread 0 Crashed:
0 libsystem_kernel.dylib 0x00000001c9be9334 __pthread_kill + 8
1 libsystem_pthread.dylib 0x00000001e760daa0 pthread_kill + 272 (pthread.c:1392)
2 libsystem_c.dylib 0x00000001a4e56b90 abort + 104 (abort.c:110)
3 libsystem_c.dylib 0x00000001a4e56024 __assert_rtn + 292 (assert.c:96)
4 libAXSpeechManager.dylib 0x00000001c8253df8 -[AXSpeechManager _updateAuxiliarySession].cold.2 + 44 (AXSpeechManager.m:360)
5 libAXSpeechManager.dylib 0x00000001c824a070 -[AXSpeechManager _updateAuxiliarySession] + 1672 (AXSpeechManager.m:360)
6 libAXSpeechManager.dylib 0x00000001c824a60c -[AXSpeechManager _handleMediaServicesWereLost:] + 96 (AXSpeechManager.m:517)
7 CoreFoundation 0x000000019ba31534 __CFNOTIFICATIONCENTER_IS_CALLING_OUT_TO_AN_OBSERVER__ + 28 (CFNotificationCenter.c:706)
8 CoreFoundation 0x000000019ba314dc ___CFXRegistrationPost_block_invoke + 52 (CFNotificationCenter.c:173)
9 CoreFoundation 0x000000019ba30a48 _CFXRegistrationPost + 440 (CFNotificationCenter.c:198)
10 CoreFoundation 0x000000019ba30408 _CFXNotificationPost + 716 (CFNotificationCenter.c:1071)
11 Foundation 0x000000019cd2867c -[NSNotificationCenter postNotificationName:object:userInfo:] + 64 (NSNotification.m:575)
12 AudioSession 0x00000001a34c4718 -[AVAudioSession privateHandleServerDied] + 196 (AVAudioSession_iOS.mm:2871)
13 AudioSession 0x00000001a34bdaf0 (anonymous namespace)::HandlePropertyListenerCallback(unsigned int, objc_selector*) + 60 (AVAudioSession_iOS.mm:255)
14 libdispatch.dylib 0x000000019b6c1a54 _dispatch_call_block_and_release + 32 (init.c:1466)
15 libdispatch.dylib 0x000000019b6c37ec _dispatch_client_callout + 20 (object.m:559)
16 libdispatch.dylib 0x000000019b6d1c40 _dispatch_main_queue_callback_4CF + 884 (inline_internal.h:2557)
17 CoreFoundation 0x000000019ba501f8 __CFRUNLOOP_IS_SERVICING_THE_MAIN_DISPATCH_QUEUE__ + 16 (CFRunLoop.c:1790)
18 CoreFoundation 0x000000019ba4a0d0 __CFRunLoopRun + 2524 (CFRunLoop.c:3118)
19 CoreFoundation 0x000000019ba491c0 CFRunLoopRunSpecific + 600 (CFRunLoop.c:3242)
20 GraphicsServices 0x00000001b3031734 GSEventRunModal + 164 (GSEvent.c:2259)
21 UIKitCore 0x000000019e4b77e4 -[UIApplication _run] + 1072 (UIApplication.m:3269)
22 UIKitCore 0x000000019e4bd054 UIApplicationMain + 168 (UIApplication.m:4740)
23 TaxiDriverOwner 0x00000001002228a0 main + 68 (AppDelegate.swift:17)
24 libdyld.dylib 0x000000019b705cf8 start + 4
Thread 1 name:
Thread 1:
0 libsystem_kernel.dylib 0x00000001c9bc44fc mach_msg_trap + 8
1 libsystem_kernel.dylib 0x00000001c9bc3884 mach_msg + 76 (mach_msg.c:103)
2 CoreFoundation 0x000000019ba4fd10 __CFRunLoopServiceMachPort + 372 (CFRunLoop.c:2641)
3 CoreFoundation 0x000000019ba49bb0 __CFRunLoopRun + 1212 (CFRunLoop.c:2974)
4 CoreFoundation 0x000000019ba491c0 CFRunLoopRunSpecific + 600 (CFRunLoop.c:3242)
5 Foundation 0x000000019cd29fac -[NSRunLoop(NSRunLoop) runMode:beforeDate:] + 232 (NSRunLoop.m:377)
6 Foundation 0x000000019cd29e78 -[NSRunLoop(NSRunLoop) runUntilDate:] + 92 (NSRunLoop.m:424)
7 UIKitCore 0x000000019e56c38c -[UIEventFetcher threadMain] + 516 (UIEventFetcher.m:929)
8 Foundation 0x000000019ce9b2fc __NSThread__start__ + 864 (NSThread.m:724)
9 libsystem_pthread.dylib 0x00000001e760cc00 _pthread_start + 320 (pthread.c:881)
10 libsystem_pthread.dylib 0x00000001e7615758 thread_start + 8
Post not yet marked as solved
I've been given a task that requires evaluating the possibility, as the title says, of recording via the mic on a BLE headset while playing sound via the built-in speaker at the same time on iOS.
I have implemented forcing the audio route to the built-in speaker whenever the BLE headset is connected or disconnected. That works if both mic and speaker are set to the built-in ones, but after days of searching and trying, I found that it is not possible to set the mic and speaker separately. Even specifying the input device on AVAudioEngine is supported only on macOS, not iOS.
Can anyone give me a definitive answer about the possibility of recording via the mic on a BLE headset while playing sound via the built-in speaker at the same time?
Post not yet marked as solved
How can you add a live audio player to an Xcode project so users have an interactive UI to control the audio, and so it keeps playing when they exit the app or turn their device off? Is there a framework or API that will work for this? Thanks! I really need help with this…. 🤩 I have looked everywhere and haven't found something that works….
Post not yet marked as solved
I'm unclear on how to access the inward-facing microphone in the AirPods Pro (not the outward-facing one). If this is possible, can you point me in the right direction?
For more context: there is a ticking noise coming from a spasm inside someone's ear that I'd like to try canceling for them.
The standard AirPods Pro noise-cancellation modes don't have any effect on the sound.
I know the latency may be too high to do this on the phone with a custom app, but I thought if I could get to the point where that is the problem, I could experiment with predictive algorithms.
Thank you in advance for ideas or recommendations.
Post not yet marked as solved
Hi,
I'm trying to get this example working on macOS now that SFSpeechRecognizer is available for the platform. A few questions:
Do I need to make an authorization request of the user if I intend to use on-device recognition?
When I ask for authorization to use speech recognition, the dialog that pops up contains text that's not in my speech-recognition usage description, indicating that recordings will be sent to Apple's servers. But that is not accurate if I am using on-device recognition (as far as I can tell). Is there a way to suppress that language if I am not using online speech recognition?
Is there an updated version of the article I linked to that describes how to accomplish the same thing on macOS instead of iOS? My compiler complains that AVAudioSession is not available on macOS, and I'm not sure how to set things up for passing audio from the microphone to the speech recognizer.
Thanks :-D
Brian Duffy
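On macOS there is no AVAudioSession; one common pattern (a sketch, assuming microphone and speech-recognition permissions have already been granted) is to tap AVAudioEngine's input node and feed the buffers to an SFSpeechAudioBufferRecognitionRequest:

```swift
import AVFoundation
import Speech

// Sketch: on-device speech recognition from the default input on macOS.
func startOnDeviceRecognition() throws -> AVAudioEngine {
    let engine = AVAudioEngine()
    let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US"))
    let request = SFSpeechAudioBufferRecognitionRequest()
    // Keep audio on the machine; no server round-trip.
    request.requiresOnDeviceRecognition = true

    let input = engine.inputNode
    let format = input.outputFormat(forBus: 0)
    input.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, _ in
        request.append(buffer)
    }
    engine.prepare()
    try engine.start()

    recognizer?.recognitionTask(with: request) { result, _ in
        if let result = result {
            print(result.bestTranscription.formattedString)
        }
    }
    return engine
}
```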
Post not yet marked as solved
Could anyone help with this crash? I don't understand its exact cause.
Here's the log:
Exception Type: EXC_BAD_ACCESS (SIGSEGV)
Exception Subtype: KERN_INVALID_ADDRESS at 0x0000000000000000
Exception Codes: 0x0000000000000001, 0x0000000000000000
VM Region Info: 0 is not in any region. Bytes before following region: 4303306752
REGION TYPE START - END [ VSIZE] PRT/MAX SHRMOD REGION DETAIL
UNUSED SPACE AT START
--->
__TEXT 1007f4000-100808000 [ 80K] r-x/r-x SM=COW ...
Exception Note: EXC_CORPSE_NOTIFY
Termination Reason: SIGNAL 11 Segmentation fault: 11
Terminating Process: exc handler [9393]
Triggered by Thread: 0
Kernel Triage:
VM - pmap_enter failed with resource shortage
VM - pmap_enter failed with resource shortage
Thread 0 name:
Thread 0 Crashed:
0 libobjc.A.dylib 0x00000001997652d8 lookUpImpOrForward + 76 (objc-runtime-new.h:2123)
1 libobjc.A.dylib 0x00000001997609e4 _objc_msgSend_uncached + 68
2 AVFAudio 0x00000001f3390dd8 tryToSetPlayerSessionListener(AVAudioPlayer*) + 32 (AVAudioPlayer.mm:86)
3 AVFAudio 0x00000001f33e8164 AVAudioPlayerCpp::allocAudioQueue() + 660 (AVAudioPlayerCpp.mm:1535)
4 AVFAudio 0x00000001f33e7b60 AVAudioPlayerCpp::prepareToPlayQueue() + 32 (AVAudioPlayerCpp.mm:813)
5 AVFAudio 0x00000001f33e7a1c AVAudioPlayerCpp::playQueue(AudioTimeStamp const*) + 116 (AVAudioPlayerCpp.mm:922)
6 AVFAudio 0x00000001f33e75f4 AVAudioPlayerCpp::DoAction(unsigned int, unsigned long, void const*) + 168 (AVAudioPlayerCpp.mm:644)
7 AVFAudio 0x00000001f3391864 -[AVAudioPlayer play] + 40 (AVAudioPlayer.mm:544)
8 0x0000000100903ef0 thunk for @escaping @callee_guaranteed () -> () + 20 (<compiler-generated>:0)
9 libdispatch.dylib 0x0000000180ba9924 _dispatch_call_block_and_release + 32 (init.c:1517)
10 libdispatch.dylib 0x0000000180bab670 _dispatch_client_callout + 20 (object.m:560)
11 libdispatch.dylib 0x0000000180bb9b70 _dispatch_main_queue_callback_4CF + 944 (inline_internal.h:2601)
12 CoreFoundation 0x0000000180ef1d84 __CFRUNLOOP_IS_SERVICING_THE_MAIN_DISPATCH_QUEUE__ + 16 (CFRunLoop.c:1795)
13 CoreFoundation 0x0000000180eabf5c __CFRunLoopRun + 2540 (CFRunLoop.c:3144)
14 CoreFoundation 0x0000000180ebf468 CFRunLoopRunSpecific + 600 (CFRunLoop.c:3268)
15 GraphicsServices 0x000000019ca4b38c GSEventRunModal + 164 (GSEvent.c:2200)
16 UIKitCore 0x0000000183861088 -[UIApplication _run] + 1100 (UIApplication.m:3493)
17 UIKitCore 0x00000001835df958 UIApplicationMain + 2092 (UIApplication.m:5046)
18 libswiftUIKit.dylib 0x0000000198791fa0 UIApplicationMain(_:_:_:_:) + 104 (UIKit.swift:530)
19 0x000000010080b850 main + 80 (<compiler-generated>:11)
20 0x000000010080b850 $main + 92 (ListeningViewController.swift:0)
21 0x000000010080b850 main + 108
22 dyld 0x0000000100ea9aa4 start + 520 (dyldMain.cpp:879)
Post not yet marked as solved
Hi!
I am making an app that records the screen and microphone on macOS. I specifically need it to run in a subprocess, so I use fork(). The problem is that it reports the following error in a loop:
Error:
CMIOHardware.cpp:379:CMIOObjectGetPropertyData Error: 2003332927, failed
2022-03-01 20:19:41.708913+0100 WebCamProj[8051:91700] [] CMIO_Unit_Convertor_VideoToolboxCompressor.cpp:461:rebuildVideoCompressor ### Err -12903
2022-03-01 20:19:41.709213+0100 WebCamProj[8051:91700] [] CMIO_Unit_Convertor_VideoToolboxCompressor.cpp:1958:doCompressBuffer [0x15382da00] EXCEPTION ON ERROR -12903
The example on which I base is the following, it comes from Apple's official website for developers:
https://developer.apple.com/library/archive/qa/qa1740/_index.html
The fragment that executes the operation looks something like this:
pid_t pidScreenRecorder;
if ((pidScreenRecorder = fork()) == 0) {
    NSURL *pathUrl = [[NSURL alloc] initFileURLWithPath:[NSString stringWithFormat:@"%@%s", NSHomeDirectory(), "/myScreen.mov"]];
    ScreenRecord *screenRec = [[ScreenRecord alloc] init];
    [screenRec startScreenRecording:pathUrl];
}
I do not know why this error occurs only with this API. I have used others, for example AVAudioRecorder, individually, and have not had any problem.
Thank you.
Post not yet marked as solved
This just seems like a useful thing to have when rendering audio. For example, let's say you have an effect that pitches audio up/down. That typically requires that you know the sample rate of the incoming audio. The way I do this right now is just to save the sample rate after the AUAudioUnit's render resources have been allocated, but being provided this info on a per-render-callback basis seems more useful.
Another use case is for AUAudioUnits on the input chain. Since the format for connections must match the hardware format, you can no longer explicitly set the format that you expect the audio to come in at. You can check the sample rate on the AVAudioEngine's input node or on the AVAudioSession singleton, but when you are working with the audio from within the render callback, you don't want to be accessing those methods because they may be blocking calls. This is especially true when using AVAudioSinkNode, where you don't have the ability to set the sample rate before the underlying node's render resources are allocated.
Am I missing something here, or does this actually seem useful?
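The caching approach described above (saving the sample rate when render resources are allocated, so the render path never queries it) can be sketched roughly like this, assuming a custom AUAudioUnit subclass with at least one configured output bus:

```swift
import AVFoundation

final class PitchEffectUnit: AUAudioUnit {
    // Cached at allocation time so the real-time render path never has to
    // make potentially blocking calls to query the current format.
    private var cachedSampleRate: Double = 44100

    override func allocateRenderResources() throws {
        try super.allocateRenderResources()
        // Assumes outputBusses[0] has been configured by this point.
        cachedSampleRate = outputBusses[0].format.sampleRate
    }
}
```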
Post not yet marked as solved
When using the VoiceProcessingIO audio unit with the voiceChat audio session mode to get echo cancellation, I can't play audio in stereo; it only allows mono audio.
How can I enable stereo playback with echo cancellation?
Is this some kind of limitation? It isn't mentioned anywhere in the documentation.
Post not yet marked as solved
I'm developing a voice-communication app for the iPad with both playback and record, using an AudioUnit of type kAudioUnitSubType_VoiceProcessingIO to get echo cancellation.
When I play audio before initializing the recording audio unit, the volume is high. But if I play audio after initializing the audio unit, or when switching to RemoteIO and then back to VPIO, the playback volume is low.
It seems like a bug in iOS. Is there any solution or workaround? Searching the net, I only found this post without any solution: https://developer.apple.com/forums/thread/671836
Post not yet marked as solved
Is there a way for me to programmatically query whether my AVAudioSession is able to play even when the app is minimized or the screen is locked? I need this to debug background-audio permissions, as my AVAudioSession keeps getting paused when the app goes into the background and resumes once it returns to the foreground. Moreover, when I try to call setActive on the AVAudioSession in didEnterBackground, I get error code 561015905, which indicates it is permission related.
My Info.plist already has
<key>UIBackgroundModes</key>
<array>
    <string>audio</string>
</array>
added to it.
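There is no API that directly answers "can this session play in the background", but one workable setup can be sketched as follows (hedged; the comment about the error code is an inference from the value reported above): activate the session while still in the foreground, keep the .playback category, and observe interruption notifications to see why playback pauses:

```swift
import AVFoundation

// Sketch: configure for background playback and log interruptions.
func configureBackgroundPlayback() throws {
    let session = AVAudioSession.sharedInstance()
    // .playback (plus the 'audio' UIBackgroundMode) is required for audio
    // to keep running while the app is backgrounded or the screen is locked.
    try session.setCategory(.playback)
    // Activate while still in the foreground; activating from
    // didEnterBackground is a common way to hit error 561015905.
    try session.setActive(true)

    NotificationCenter.default.addObserver(
        forName: AVAudioSession.interruptionNotification,
        object: session,
        queue: .main
    ) { note in
        print("Audio session interruption: \(note.userInfo ?? [:])")
    }
}
```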