Post not yet marked as solved
As the title suggests I am using AVAudioEngine for SpeechRecognition input & AVAudioPlayer for sound output.
In this WWDC talk (https://developer.apple.com/videos/play/wwdc2019/510) Apple says that the setVoiceProcessingEnabled function usefully cancels the speaker output that bleeds into the mic. I enable voice processing on both the input and output nodes.
It seems to work; however, the volume is low, even when the system volume is turned up. Any solution to this would be much appreciated.
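For reference, a minimal sketch of the setup being described (enabling voice processing on both ends of an engine), assuming it happens before the engine is started:

```swift
import AVFoundation

// Sketch: enable Apple's voice processing (echo cancellation) on both
// the input and output nodes, before the engine is started.
let engine = AVAudioEngine()
do {
    try engine.inputNode.setVoiceProcessingEnabled(true)
    try engine.outputNode.setVoiceProcessingEnabled(true)
} catch {
    print("Voice processing setup failed: \(error)")
}
```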
Post not yet marked as solved
How can I record audio in a keyboard extension? I've enabled microphone support by enabling "RequestsOpenAccess". When I try to record, I get the error below in the console. This doesn't make sense as Apple's docs seem to say that microphone access is allowed with Full Keyboard Access. What is the point of enabling the microphone if the app cannot access the data from the microphone?
-CMSUtilities- CMSUtility_IsAllowedToStartRecording: Client sid:0x2205e, XXXXX(17965), 'prim' with PID 17965 was NOT allowed to start recording because it is an extension and doesn't have entitlements to record audio.
Post not yet marked as solved
Basically for this iPhone app I want to be able to record from either the built in microphone or from a connected USB audio device while simultaneously playing back processed audio on connected AirPods. It's a pretty simple AVAudioEngine setup that includes a couple of effects units. The category is set to .playAndRecord with the .allowBluetooth and .allowBluetoothA2DP options added. With no attempts to set the preferred input and AirPods connected, the AirPods mic will be used and output also goes to the AirPods. If I call setPreferredInput to either built in mic or a USB audio device I will get input as desired but then output will always go to the speaker. I don't really see a good explanation for this and overrideOutputAudioPort does not really seem to have suitable options.
Testing this on iPhone 14 Pro
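For context, a sketch of the session setup described above (the USB port lookup is illustrative; after setPreferredInput the output reportedly falls back to the speaker):

```swift
import AVFoundation

let session = AVAudioSession.sharedInstance()
do {
    try session.setCategory(.playAndRecord,
                            options: [.allowBluetooth, .allowBluetoothA2DP])
    try session.setActive(true)
    // Selecting a specific input; after this call the output route
    // reverts to the built-in speaker instead of staying on the AirPods.
    if let usb = session.availableInputs?.first(where: { $0.portType == .usbAudio }) {
        try session.setPreferredInput(usb)
    }
} catch {
    print("Session setup failed: \(error)")
}
```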
Post not yet marked as solved
I work on a video conferencing application that makes use of AVAudioEngine and the videoChat AVAudioSession.Mode.
This past Friday, an internal user reported an "audio cutting in and out" issue with their new iPhone 14 Pro, and I was able to reproduce the issue later that day on my iPhone 14 Pro Max. No other iOS devices running iOS 16 are exhibiting this issue.
I have narrowed down the root cause to the videoChat AVAudioSession.Mode after changing line 53 of the ViewController.swift file in Apple's "Using Voice Processing" sample project (https://developer.apple.com/documentation/avfaudio/audio_engine/audio_units/using_voice_processing) from:
try session.setCategory(.playAndRecord, options: .defaultToSpeaker)
to
try session.setCategory(.playAndRecord, mode: .videoChat, options: .defaultToSpeaker)
This only causes issues on my iPhone 14 Pro Max device, not on my iPhone 13 Pro Max, so it seems specific to the new iPhones only.
I am also seeing the following logged to the console using either device, which appears to be specific to iOS 16, but I am not sure whether it is related to the videoChat issue:
2022-09-19 08:23:20.087578-0700 AVEchoTouch[2388:1474002] [as] ATAudioSessionPropertyManager.mm:71 Invalid input size for property 1684431725
2022-09-19 08:23:20.087605-0700 AVEchoTouch[2388:1474002] [as] ATAudioSessionPropertyManager.mm:225 Invalid input size for property 1684431725
I am assuming 1684431725 is 'dfcm' but I am not sure what Audio Session Property that might be.
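The four-character interpretation can be checked with plain code; decoding the big-endian bytes of 1684431725 does yield 'dfcm' (which Audio Session property that is remains unclear):

```swift
import Foundation

// Decode a 32-bit Core Audio property/selector ID into its four-character code.
func fourCharCode(_ value: UInt32) -> String {
    let bytes = [24, 16, 8, 0].map { UInt8((value >> $0) & 0xFF) }
    return String(bytes: bytes, encoding: .ascii) ?? "????"
}

print(fourCharCode(1684431725)) // prints "dfcm"
```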
Post not yet marked as solved
Hello,
I have started setting up stereo audio recording (both audio and video are recorded), and the audio quality seems lower than the quality obtained with the native Camera application (configured for stereo).
Using Console to check the log, I found a difference between the Camera app and mine regarding MXSessionMode (of mediaserverd):
the Camera application gives MXSessionMode = SpatialRecording, while mine gives MXSessionMode = VideoRecording.
How can I configure capture session to finally have MXSessionMode = SpatialRecording?
Any suggestion?
Best regards
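I cannot confirm this is what flips MXSessionMode, but one configuration worth checking is stereo capture via the built-in mic's data source and polar pattern (iOS 14+ API; the front-orientation choice here is an assumption):

```swift
import AVFoundation

let session = AVAudioSession.sharedInstance()
do {
    try session.setCategory(.playAndRecord, mode: .videoRecording)
    // Ask the built-in mic's front data source for a stereo polar pattern.
    if let builtInMic = session.availableInputs?.first(where: { $0.portType == .builtInMic }),
       let front = builtInMic.dataSources?.first(where: { $0.orientation == .front }),
       front.supportedPolarPatterns?.contains(.stereo) == true {
        try front.setPreferredPolarPattern(.stereo)
        try builtInMic.setPreferredDataSource(front)
        try session.setPreferredInput(builtInMic)
    }
} catch {
    print("Stereo capture setup failed: \(error)")
}
```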
Post not yet marked as solved
Sound Pad throws an error on the audio device on iPad.
Sound Pad
Version 1.0.0
3 April 2023
Swift 5.8 Edition
A fatal error occurs in AudioPlayer.swift.
The line that causes it:
var engine: AVAudioEngine
Post not yet marked as solved
Hi community
I'm developing an application for macOS, and I need to capture the mic audio stream. Currently, using Core Audio in Swift, I'm able to capture the audio stream using IO procs, and I have applied AUVoiceProcessing to prevent echo from the speaker device. I was able to connect the audio unit and perform the echo cancellation.
The problem I'm getting is that when I'm using AUVoiceProcessing, the gain of the two devices gets reduced, which affects the volume of both devices (microphone and speaker).
I have tried to disable the AGC using the property kAUVoiceIOProperty_VoiceProcessingEnableAGC, but the results are the same.
Is there any option to disable the gain reduction, or is there a better approach to get the echo cancellation working?
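For reference, this is the property write being described, sketched as a small helper (the voiceUnit parameter stands for the already-created VoiceProcessingIO unit):

```swift
import AudioToolbox

// Sketch: disable automatic gain control on an existing
// kAudioUnitSubType_VoiceProcessingIO audio unit.
func disableAGC(on voiceUnit: AudioUnit) -> OSStatus {
    var enableAGC: UInt32 = 0
    return AudioUnitSetProperty(voiceUnit,
                                kAUVoiceIOProperty_VoiceProcessingEnableAGC,
                                kAudioUnitScope_Global, 0,
                                &enableAGC,
                                UInt32(MemoryLayout<UInt32>.size))
}
```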
Post not yet marked as solved
In a VoIP application, when CallKit is enabled, playing a video through AVPlayer updates the video content frame by frame, but the audio of the content is not audible. This issue is observed only in iOS 17. Any idea how we can resolve this?
Post not yet marked as solved
Hello, I have struggled to resolve the issue in the question above.
The utterance is spoken while my iPhone is on (app in the foreground), but when my iPhone goes to the background (screen turned off), it doesn't speak any more.
I think it should be possible to play audio or speak an utterance in the background, because YouTube can play music while in the background.
Any help please?
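As a general note (not a confirmed fix for this case): speaking in the background usually requires both the audio background mode in Info.plist and a playback-capable session, roughly:

```swift
import AVFoundation

// Also requires the "audio" entry under UIBackgroundModes in Info.plist.
let session = AVAudioSession.sharedInstance()
do {
    try session.setCategory(.playback, mode: .spokenAudio)
    try session.setActive(true)
} catch {
    print("Audio session setup failed: \(error)")
}
```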
Post not yet marked as solved
Prior to iOS 17, I used AVAudioFile to open (for reading) the assetURL of an MPMediaItem for songs that the user purchased through the iTunes Store. With the iOS 17 beta, this no longer seems possible, as AVAudioFile throws this:
ExtAudioFile.cpp:211 about to throw -54: open audio file
AVAEInternal.h:109 [AVAudioFile.mm:135:AVAudioFileImpl: (ExtAudioFileOpenURL((CFURLRef)fileURL, &_extAudioFile)): error -54
I also can't copy the URL to the Documents directory, because I get this:
The file “item.m4a” couldn’t be opened because URL type ipod-library isn’t supported.
This seems to be affecting other apps on the App Store besides mine, and it will reflect very badly on my app if this makes it into the final iOS 17, because I have encouraged users to buy songs on the iTunes Store to use with my app. Now there seems to be no way to access them.
Is this a known bug, or is there some kind of workaround?
Post not yet marked as solved
My User Generated Content for my App is audio-based only and anonymous.
All the content is deleted after 24 hours. Do I still need a report button, since I don't know the user and the content gets deleted anyway?
Post not yet marked as solved
From an app that reads audio from the built-in microphone, I'm receiving many crash logs where the AVAudioEngine fails to start again after the app was suspended.
Basically, I'm calling these two methods in the app delegate's applicationDidBecomeActive and applicationDidEnterBackground methods, respectively:
let audioSession = AVAudioSession.sharedInstance()

func startAudio() throws {
    self.audioEngine = AVAudioEngine()
    try self.audioSession.setCategory(.record, mode: .measurement)
    try self.audioSession.setActive(true)
    self.audioEngine!.inputNode.installTap(onBus: 0, bufferSize: 4096, format: nil, block: { ... })
    self.audioEngine!.prepare()
    try self.audioEngine!.start()
}

func stopAudio() throws {
    self.audioEngine?.stop()
    self.audioEngine?.inputNode.removeTap(onBus: 0)
    self.audioEngine = nil
    try self.audioSession.setActive(false, options: [.notifyOthersOnDeactivation])
}
In the crash logs (iOS 16.6) I'm seeing that this works fine several times as the app is opened and closed, but suddenly the audioEngine.start() call fails with the error
Error Domain=com.apple.coreaudio.avfaudio Code=-10851 "(null)" UserInfo={failed call=err = AUGraphParser::InitializeActiveNodesInInputChain(ThisGraph, *GetInputNode())}
and audioEngine!.inputNode.outputFormat(forBus: 0) is something like
<AVAudioFormat 0x282301c70: 2 ch, 0 Hz, Float32, deinterleaved>
Also, right before installing the tap, audioSession.availableInputs contains an entry of type MicrophoneBuiltIn, but audioSession.currentRoute lists no inputs at all.
I was not able to reproduce this situation on my own devices yet.
Does anyone have an idea why this is happening?
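Not a confirmed diagnosis, but one thing worth handling alongside the code above: a media-services reset can also leave an engine in a state where start() fails, so rebuilding the engine on that notification is a common mitigation:

```swift
import AVFoundation

// Rebuild the engine whenever the system's media services are reset.
NotificationCenter.default.addObserver(
    forName: AVAudioSession.mediaServicesWereResetNotification,
    object: nil,
    queue: .main
) { _ in
    // Tear down and recreate AVAudioEngine here, e.g. by calling
    // stopAudio() followed by startAudio() from above.
}
```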
Post not yet marked as solved
Hi!
I am working on an audio application on iOS. This is how I retrieve the workgroup from the RemoteIO audio unit (ioUnit). The unit is initialized and working fine (meaning that it is regularly called by the system).
os_workgroup_t os_workgroup{nullptr};
uint32_t os_workgroup_index_size;
if (status = AudioUnitGetProperty(ioUnit, kAudioOutputUnitProperty_OSWorkgroup,
                                  kAudioUnitScope_Global, 0,
                                  &os_workgroup, &os_workgroup_index_size);
    status != noErr)
{
    throw runtime_error("AudioUnitGetProperty kAudioOutputUnitProperty_OSWorkgroup - Failed with OSStatus: " +
                        to_string(status));
}
However, the resulting os_workgroup value is 0x40, which does not seem correct. No wonder I cannot join any other realtime threads to the workgroup either. The returned status, however, is a solid 0.
Can anyone help?
Post not yet marked as solved
When recording audio over Bluetooth from AirPods to an iPhone using AVAudioRecorder, the Bluetooth audio codec used is always AAC-ELD, independent of the storage codec selected in the AVAudioRecorder instance.
As far as I know, every Bluetooth device must support SBC; hence, it should be possible for the AirPods to transmit the recorded audio using the SBC codec instead of AAC-ELD. However, I could not find any resource on how to request this codec using AVAudioRecorder or AVAudioEngine.
Is it possible to request SBC at all, and if so, how?
Post not yet marked as solved
Our app is a game written in Unity, where most of our audio playback is handled by Unity. However, one of our game experiences utilizes microphone input for speech recognition, so in order to perform echo cancellation (while the game has audio playback), we set up an audio stream from Unity to native Swift code that performs the mixing of the input/output nodes.
We found, however, that by streaming the audio buffer through our AVAudioSession:
- The volume of the audio playback comes out differently.
- When capturing a screen recording of the app, the audio played through the AVAudioSession does not get captured at all.
We are looking to figure out what could be causing the discrepancy in playback, as well as the capture behavior during screen recordings.
We setup the AVAudioSession with this configuration:
AVAudioSession.sharedInstance().setCategory(AVAudioSession.Category.playAndRecord, options: .mixWithOthers)
with
inputNode.setVoiceProcessingEnabled(true)
after connecting our IO and mixer nodes.
Any suggestions or ideas on what to look out for would be appreciated!
Post not yet marked as solved
It is so frustrating, and this is the second time it has happened to me. I found a fix the first time, but I can't seem to find one now. Help!
Anyone know what to do to stop the noise?
Lisa
Post not yet marked as solved
I am using AVSpeechSynthesizer to get audio buffers, and AVAudioEngine with AVAudioPlayerNode to play them.
But I am getting this error:
[avae] AVAEInternal.h:76 required condition is false: [AVAudioPlayerNode.mm:734:ScheduleBuffer: (_outputFormat.channelCount == buffer.format.channelCount)]
2023-05-02 03:14:35.709020-0700 AudioPlayer[12525:308940] *** Terminating app due to uncaught exception 'com.apple.coreaudio.avfaudio', reason: 'required condition is false: _outputFormat.channelCount == buffer.format.channelCount'
Can anyone please help me to play the AVAudioBuffer from AVSpeechSynthesizer write method?
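The exception says the scheduled buffer's channel count differs from the player node's output format. One possible workaround (sketched here with illustrative names, not a confirmed fix) is to defer connecting the player node until the first buffer arrives and use that buffer's own format for the connection:

```swift
import AVFoundation

let engine = AVAudioEngine()
let playerNode = AVAudioPlayerNode()
engine.attach(playerNode)

let synthesizer = AVSpeechSynthesizer()
synthesizer.write(AVSpeechUtterance(string: "Hello")) { buffer in
    guard let pcm = buffer as? AVAudioPCMBuffer, pcm.frameLength > 0 else { return }
    // Connect using the buffer's own format so the channel counts match.
    if engine.outputConnectionPoints(for: playerNode, outputBus: 0).isEmpty {
        engine.connect(playerNode, to: engine.mainMixerNode, format: pcm.format)
        try? engine.start()
        playerNode.play()
    }
    playerNode.scheduleBuffer(pcm)
}
```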
Post not yet marked as solved
I’m developing a voice communication app for the iPad, with both playback and record, and I'm using an AudioUnit of type kAudioUnitSubType_VoiceProcessingIO to get echo cancellation.
When playing audio before initializing the recording audio unit, the volume is high. But if I play audio after initializing the audio unit, or when switching to RemoteIO and then back to VPIO, the playback volume is low.
It seems like a bug in iOS. Is there any solution or workaround for this? Searching the net, I only found this post without any solution: https://developer.apple.com/forums/thread/671836
Post not yet marked as solved
We are developing an app that uses external hardware to measure analogue hearing-loop performance. It uses the audio jack on the phone/iPad.
With the new hardware on iPads using USB-C, we have noticed that the same input produces very different input levels with a Lightning adapter versus a USB-C adapter.
The USB-C level is ~23 dB lower, with the same code and settings. That's more than a 10x difference in amplitude.
Is there any way to control the USB-C adapter? Am I missing something?
The code simply uses AVAudioInputNode with a block attached to it via self.inputNode.installTap.
We do set the gain to 1.0:
let gain: Float = 1.0
try session.setInputGain(gain)
But that still does not help.
I wish there was an Apple lab I could go to, to speak with engineers about this.
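One thing worth double-checking alongside the gain code above: setInputGain only takes effect when the active route reports a settable gain, which many external adapters do not. A small sketch:

```swift
import AVFoundation

let session = AVAudioSession.sharedInstance()
if session.isInputGainSettable {
    try? session.setInputGain(1.0)
} else {
    // Many external adapters expose no software-controllable input gain.
    print("Input gain not settable on route: \(session.currentRoute)")
}
```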
Post not yet marked as solved
I've noticed that enabling voice processing on AVAudioInputNode changes the node's format - most noticeably the channel count.
let inputNode = avEngine.inputNode
print("Format #1: \(inputNode.outputFormat(forBus: 0))")
// Format #1: <AVAudioFormat 0x600002bb4be0: 1 ch, 44100 Hz, Float32>
try! inputNode.setVoiceProcessingEnabled(true)
print("Format #2: \(inputNode.outputFormat(forBus: 0))")
// Format #2: <AVAudioFormat 0x600002b18f50: 3 ch, 44100 Hz, Float32, deinterleaved>
Is this expected? How should I interpret these channels?
My input device is an aggregate device where each channel comes from a different microphone, and I record each channel to a separate file.
But when voice processing messes with the channel layout, I can no longer rely on this.