Post not yet marked as solved
I work on a video conferencing application that uses AVAudioEngine and the videoChat AVAudioSession.Mode.
This past Friday, an internal user reported an "audio cutting in and out" issue with their new iPhone 14 Pro, and I was able to reproduce the issue later that day on my iPhone 14 Pro Max. No other iOS devices running iOS 16 are exhibiting this issue.
I have narrowed down the root cause to the videoChat AVAudioSession.Mode after changing line 53 of the ViewController.swift file in Apple's "Using Voice Processing" sample project (https://developer.apple.com/documentation/avfaudio/audio_engine/audio_units/using_voice_processing) from:
try session.setCategory(.playAndRecord, options: .defaultToSpeaker)
to
try session.setCategory(.playAndRecord, mode: .videoChat, options: .defaultToSpeaker)
This only causes issues on my iPhone 14 Pro Max, not on my iPhone 13 Pro Max, so it seems specific to the new iPhones.
I am also seeing the following logged to the console using either device, which appears to be specific to iOS 16, but am not sure if it is related to the videoChat issue or not:
2022-09-19 08:23:20.087578-0700 AVEchoTouch[2388:1474002] [as] ATAudioSessionPropertyManager.mm:71 Invalid input size for property 1684431725
2022-09-19 08:23:20.087605-0700 AVEchoTouch[2388:1474002] [as] ATAudioSessionPropertyManager.mm:225 Invalid input size for property 1684431725
I am assuming 1684431725 is 'dfcm', but I am not sure which audio session property that might be.
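For reference, a tiny sketch (a hypothetical helper, not from the log) for decoding a four-character property code like the one above:

import Foundation

// Decode a 32-bit property ID into its four-character (FourCC) form.
// fourCC(1684431725) returns "dfcm".
func fourCC(_ value: UInt32) -> String {
    let bytes = [24, 16, 8, 0].map { UInt8((value >> $0) & 0xFF) }
    return String(bytes: bytes, encoding: .ascii) ?? String(value)
}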
Post not yet marked as solved
Hi, I am facing an issue with AVSpeechSynthesizer after iOS 16.
Crashed: com.apple.TextToSpeech.SpeechThread
0 libobjc.A.dylib 0x3518 objc_release + 16
1 libobjc.A.dylib 0x3518 objc_release_x0 + 16
2 libobjc.A.dylib 0x15d8 AutoreleasePoolPage::releaseUntil(objc_object**) + 196
3 libobjc.A.dylib 0x4f40 objc_autoreleasePoolPop + 256
4 libobjc.A.dylib 0x329dc objc_tls_direct_base<AutoreleasePoolPage*, (tls_key)3, AutoreleasePoolPage::HotPageDealloc>::dtor_(void*) + 168
5 libsystem_pthread.dylib 0x1bd8 _pthread_tsd_cleanup + 620
6 libsystem_pthread.dylib 0x4674 _pthread_exit + 84
7 libsystem_pthread.dylib 0x16d8 _pthread_start + 160
8 libsystem_pthread.dylib 0xba4 thread_start + 8
I am getting many crash reports from my clients, but unfortunately I can't reproduce this on my test devices. Is anybody else seeing this?
Post not yet marked as solved
Hello
My app records voice with a Record_Engine class built on AVAudioEngine.
Problem:
Even though I send my app to the background, it becomes inactive after a few seconds (about 3 seconds after the screen locks).
Example:
How can I keep my app recording in the background, like the built-in Voice Memos app does? (See also the note after the code below.)
My recorder class is here:
// Recorder.swift
import AVFoundation
import Combine

class Record_Engine: ObservableObject {
    @Published var recording_file: AVAudioFile!

    private var engine: AVAudioEngine!
    private var mixerNode: AVAudioMixerNode!

    init() {
        setupSession()
        setupEngine()
    }

    fileprivate func setupSession() {
        let session = AVAudioSession.sharedInstance()
        do {
            try session.setCategory(AVAudioSession.Category.playAndRecord, mode: .default)
            try session.setActive(true)
        } catch {
            print(error.localizedDescription)
        }
    }

    fileprivate func setupEngine() {
        engine = AVAudioEngine()
        mixerNode = AVAudioMixerNode()
        mixerNode.volume = 0
        engine.attach(mixerNode)
        makeConnections()
        engine.prepare()
    }

    fileprivate func makeConnections() {
        let inputNode = engine.inputNode
        let inputFormat = inputNode.outputFormat(forBus: 0)
        engine.connect(inputNode, to: mixerNode, format: inputFormat)

        let mainMixerNode = engine.mainMixerNode
        let mixerFormat = AVAudioFormat(commonFormat: .pcmFormatFloat32,
                                        sampleRate: inputFormat.sampleRate,
                                        channels: 1,
                                        interleaved: false)
        engine.connect(mixerNode, to: mainMixerNode, format: mixerFormat)
    }

    func startRecording() throws {
        let tapNode: AVAudioNode = mixerNode
        let format = tapNode.outputFormat(forBus: 0)
        self.recording_file = try AVAudioFile(forWriting: get_file_path(), settings: format.settings)

        tapNode.installTap(onBus: 0, bufferSize: 8192, format: format) { buffer, time in
            do { try self.recording_file.write(from: buffer) }
            catch { print(error.localizedDescription) }
        }
        try engine.start()
    }
}
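Not from the original post, but worth noting as an assumption for the setup above: keeping a recording alive in the background generally also requires the audio background mode on the app target (the Background Modes capability with "Audio, AirPlay, and Picture in Picture" checked, i.e. UIBackgroundModes containing "audio" in Info.plist), in addition to an active playAndRecord session. A minimal sketch under that assumption:

import AVFoundation

// Assumes UIBackgroundModes in Info.plist contains "audio"; without it the
// system suspends the app shortly after it leaves the foreground.
func configureSessionForBackgroundRecording() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord, mode: .default)
    try session.setActive(true)
}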
Post not yet marked as solved
Some downloaded videos, after being edited in the Photos album, lose their audio when chosen through UIImagePickerController.
But if the video is not edited in the album, the audio is correct.
Post not yet marked as solved
In my app I use AVPlayer so that my video can enter Picture in Picture mode, and it works great, but for some reason my app was rejected anyway because of this. I want to know what I did wrong.
Post not yet marked as solved
I am getting an error in iOS 16. This error doesn't appear in previous iOS versions.
I am using RemoteIO to play back live audio at 4000 Hz. The error is the following:
Input data proc returned inconsistent 2 packets for 186 bytes; at 2 bytes per packet, that is actually 93 packets
This is how the audio format and the callback are set:
// Set the audio format
AudioStreamBasicDescription audioFormat;
audioFormat.mSampleRate       = 4000;
audioFormat.mFormatID         = kAudioFormatLinearPCM;
audioFormat.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
audioFormat.mFramesPerPacket  = 1;
audioFormat.mChannelsPerFrame = 1;
audioFormat.mBitsPerChannel   = 16;
audioFormat.mBytesPerPacket   = 2;
audioFormat.mBytesPerFrame    = 2;

AURenderCallbackStruct callbackStruct;

// Set output callback
callbackStruct.inputProc       = playbackCallback;
callbackStruct.inputProcRefCon = (__bridge void * _Nullable)(self);

status = AudioUnitSetProperty(audioUnit,
                              kAudioUnitProperty_SetRenderCallback,
                              kAudioUnitScope_Global,
                              kOutputBus,
                              &callbackStruct,
                              sizeof(callbackStruct));
Note that the mSampleRate I set is 4000 Hz.
In iOS 15 I get 0.02322 seconds of buffer duration (IOBufferDuration) and 93 frames in each callback. This is expected, because:
number of frames = sampling rate × buffer duration
93 ≈ 4000 Hz × 0.02322 s
However, in iOS 16 I am getting the aforementioned error in the callback.
Input data proc returned inconsistent 2 packets for 186 bytes; at 2 bytes per packet, that is actually 93 packets
Since the number of frames is equal to the number of packets, I am getting 1 or 2 frames in each callback, while the buffer duration is still 0.02322 seconds. This didn't affect the playback of the "raw" signal, but it did affect the playback of the "processed" signal.
implied sampling rate = number of frames / buffer duration
2 / 0.02322 s ≈ 86 Hz
That doesn't make any sense. This error appears for other sampling rates as well (8000, 16000, 32000 Hz), but not for 44100 Hz. However, I would like to keep 4000 Hz as my sampling rate.
I have also tried to set the sampling rate using the setPreferredSampleRate(_:) method of AVAudioSession, but the attempt didn't succeed: the sampling rate was still 44100 Hz after calling it.
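For reference, a minimal sketch (not the original code) of requesting a preferred sample rate and IO buffer duration through AVAudioSession; these are only preferences, and the hardware may keep its own rate, which matches the behaviour described above:

import AVFoundation

let session = AVAudioSession.sharedInstance()
do {
    try session.setCategory(.playAndRecord, options: .defaultToSpeaker)
    // Requests only; the system is free to keep a different hardware rate.
    try session.setPreferredSampleRate(4000)
    try session.setPreferredIOBufferDuration(0.02322)
    try session.setActive(true)
    print("Actual sample rate:", session.sampleRate)
} catch {
    print("Failed to configure session:", error)
}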
Any help on this issue would be appreciated.
Post not yet marked as solved
I'm working on an audio recording app which uses an external USB audio interface (e.g., Focusrite Scarlett Solo) connected to an iPhone.
When I run AVAudioSession.sharedInstance().currentRoute.inputs it returns the interface correctly.
1 element
- 0 : <AVAudioSessionPortDescription: 0x28307c650, type = USBAudio; name = Scarlett Solo USB; UID = AppleUSBAudioEngine:Focusrite:Scarlett Solo USB:130000:1,2; selectedDataSource = (null)>
Channels are returned correctly as well.
po AVAudioSession.sharedInstance().currentRoute.inputs.first?.channels
▿ Optional<Array<AVAudioSessionChannelDescription>>
▿ some : 2 elements
- 0 : <AVAudioSessionChannelDescription: 0x283070b60, name = Scarlett Solo USB 1; label = 4294967295 (0xffffffff); number = 1; port UID = AppleUSBAudioEngine:Focusrite:Scarlett Solo USB:130000:1,2>
- 1 : <AVAudioSessionChannelDescription: 0x283070b70, name = Scarlett Solo USB 2; label = 4294967295 (0xffffffff); number = 2; port UID = AppleUSBAudioEngine:Focusrite:Scarlett Solo USB:130000:1,2>
When I connect the inputNode to mainMixerNode in AVAudioEngine, it uses the multi-channel input, so the Line/Instrument input is on the right channel and the Microphone input is on the left.
How can I use only the 2nd channel (guitar) as a mono signal played back through both speakers?
I've been looking through some docs and discussions but could not find the answer.
I tried changing channels to 1 in the audio format, but as expected it plays the first channel in mono, and I can't select the 2nd channel to be played instead.
let input = engine.inputNode
let inputFormat = input.inputFormat(forBus: 0)
let preferredFormat = AVAudioFormat(
    commonFormat: inputFormat.commonFormat,
    sampleRate: inputFormat.sampleRate,
    channels: 1,
    interleaved: false
)!
engine.connect(input, to: engine.mainMixerNode, format: preferredFormat)
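One possible approach, only as a sketch (not from Apple's documentation): keep the two-channel format on an input tap and copy just the second channel into a mono buffer scheduled on a player node. This adds buffer-sized latency for live monitoring, and the node and helper names here are illustrative:

import AVFoundation

let engine = AVAudioEngine()
let input = engine.inputNode
let inputFormat = input.inputFormat(forBus: 0)   // 2-channel USB interface format

// Mono format at the hardware sample rate for the extracted channel.
let monoFormat = AVAudioFormat(commonFormat: .pcmFormatFloat32,
                               sampleRate: inputFormat.sampleRate,
                               channels: 1,
                               interleaved: false)!

let player = AVAudioPlayerNode()
engine.attach(player)
// The main mixer plays a mono source through both speakers.
engine.connect(player, to: engine.mainMixerNode, format: monoFormat)

input.installTap(onBus: 0, bufferSize: 4096, format: inputFormat) { buffer, _ in
    guard buffer.format.channelCount >= 2,
          let src = buffer.floatChannelData,
          let mono = AVAudioPCMBuffer(pcmFormat: monoFormat,
                                      frameCapacity: buffer.frameLength),
          let dst = mono.floatChannelData else { return }
    mono.frameLength = buffer.frameLength
    // Channel index 1 is the interface's second (instrument) input.
    memcpy(dst[0], src[1], Int(buffer.frameLength) * MemoryLayout<Float>.size)
    player.scheduleBuffer(mono, completionHandler: nil)
}

do {
    try engine.start()
    player.play()
} catch {
    print("Could not start engine:", error)
}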
Post not yet marked as solved
I have the following code to connect inputNode to mainMixerNode of AVAudioEngine:
public func setupAudioEngine() {
    self.engine = AVAudioEngine()
    let format = engine.inputNode.inputFormat(forBus: 0)
    // The main mixer node is connected to the output node by default.
    engine.connect(self.engine.inputNode, to: self.engine.mainMixerNode, format: format)
    do {
        engine.prepare()
        try self.engine.start()
    } catch {
        print("error couldn't start engine")
    }
    engineRunning = true
}
But I am seeing a crash in the Crashlytics dashboard (which I can't reproduce).
Fatal Exception: com.apple.coreaudio.avfaudio
required condition is false: IsFormatSampleRateAndChannelCountValid(format)
Before calling setupAudioEngine I make sure the AVAudioSession category is not playback (where the mic is not available). The function is called from the audio route change notification handler, and I check this condition specifically there. Can someone tell me what I am doing wrong?
Fatal Exception: com.apple.coreaudio.avfaudio
0 CoreFoundation 0x99288 __exceptionPreprocess
1 libobjc.A.dylib 0x16744 objc_exception_throw
2 CoreFoundation 0x17048c -[NSException initWithCoder:]
3 AVFAudio 0x9f64 AVAE_RaiseException(NSString*, ...)
4 AVFAudio 0x55738 AVAudioEngineGraph::_Connect(AVAudioNodeImplBase*, AVAudioNodeImplBase*, unsigned int, unsigned int, AVAudioFormat*)
5 AVFAudio 0x5cce0 AVAudioEngineGraph::Connect(AVAudioNode*, AVAudioNode*, unsigned long, unsigned long, AVAudioFormat*)
6 AVFAudio 0xdf1a8 AVAudioEngineImpl::Connect(AVAudioNode*, AVAudioNode*, unsigned long, unsigned long, AVAudioFormat*)
7 AVFAudio 0xe0fc8 -[AVAudioEngine connect:to:format:]
8 MyApp 0xa6af8 setupAudioEngine + 701 (MicrophoneOutput.swift:701)
9 MyApp 0xa46f0 handleRouteChange + 378 (MicrophoneOutput.swift:378)
10 MyApp 0xa4f50 @objc MicrophoneOutput.handleRouteChange(note:)
11 CoreFoundation 0x2a834 __CFNOTIFICATIONCENTER_IS_CALLING_OUT_TO_AN_OBSERVER__
12 CoreFoundation 0xc6fd4 ___CFXRegistrationPost_block_invoke
13 CoreFoundation 0x9a1d0 _CFXRegistrationPost
14 CoreFoundation 0x408ac _CFXNotificationPost
15 Foundation 0x1b754 -[NSNotificationCenter postNotificationName:object:userInfo:]
16 AudioSession 0x56f0 (anonymous namespace)::HandleRouteChange(AVAudioSession*, NSDictionary*)
17 AudioSession 0x5cbc invocation function for block in avfaudio::AVAudioSessionPropertyListener(void*, unsigned int, unsigned int, void const*)
18 libdispatch.dylib 0x1e6c _dispatch_call_block_and_release
19 libdispatch.dylib 0x3a30 _dispatch_client_callout
20 libdispatch.dylib 0x11f48 _dispatch_main_queue_drain
21 libdispatch.dylib 0x11b98 _dispatch_main_queue_callback_4CF
22 CoreFoundation 0x51800 __CFRUNLOOP_IS_SERVICING_THE_MAIN_DISPATCH_QUEUE__
23 CoreFoundation 0xb704 __CFRunLoopRun
24 CoreFoundation 0x1ebc8 CFRunLoopRunSpecific
25 GraphicsServices 0x1374 GSEventRunModal
26 UIKitCore 0x514648 -[UIApplication _run]
27 UIKitCore 0x295d90 UIApplicationMain
28 libswiftUIKit.dylib 0x30ecc UIApplicationMain(_:_:_:_:)
29 MyApp 0xc358 main (WhiteBalanceUI.swift)
30 ??? 0x104b1dce4 (Missing)
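Not an authoritative fix, but a defensive sketch under the assumption that a route change can briefly leave the input node reporting an invalid (0 Hz / 0-channel) format, which is exactly the condition IsFormatSampleRateAndChannelCountValid asserts on:

public func setupAudioEngine() {
    self.engine = AVAudioEngine()
    let format = engine.inputNode.inputFormat(forBus: 0)

    // During some route changes (or when no input is available) the input
    // format can report 0 Hz / 0 channels; connecting with such a format
    // raises "required condition is false: IsFormatSampleRateAndChannelCountValid".
    guard format.sampleRate > 0, format.channelCount > 0 else {
        print("Input format not ready; skipping engine setup for this route change")
        return
    }

    engine.connect(self.engine.inputNode, to: self.engine.mainMixerNode, format: format)
    do {
        engine.prepare()
        try self.engine.start()
    } catch {
        print("error couldn't start engine")
    }
    engineRunning = true
}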
Post not yet marked as solved
My app samples the various inputs available on the iPhone and iPad and performs a frequency analysis. In addition to using the internal accelerometer and gyroscope, I can also sample the microphone and USB input devices such as accelerometers through the audio input subsystem. The highest sample rate I use with the microphone and USB devices is the 48 kHz of the audio sampling subsystem, which provides a bandwidth of 24 kHz (the Nyquist frequency) on the sampled signal. This has worked for many generations of iPhone and iPad until now.

When I use my iPhone 14 Pro there is a sharp frequency cutoff at about 8 kHz, and I see an artifact at the same frequency when I use the simulators. But when I use my 11" iPad Pro or my current-generation iPhone SE I do not see this effect and get good data out to 24 kHz. The iPad Pro does show some rolloff near 24 kHz, which is noticeable but not a problem for most applications.
The rolloff at 8 kHz is a serious problem for my customers who are testing equipment vibration and noise. I am wondering if this is related to the new microphone options "Standard", "Voice Isolation", and "Wide Spectrum". But if so, why only on the iPhone 14 Pro and the simulators? I have searched the documentation, but apparently it is not possible to programmatically change the microphone mode, and the Apple documentation on how to use this new feature is lacking.
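As far as I know, an app cannot set the microphone mode directly; it can only read the current mode and ask the system to present the mode picker. A small sketch (iOS 15+ API, assuming an active capture session for the modes to have any effect):

import AVFoundation

// Microphone modes are user-controlled. Apps can read the current choice...
let mode = AVCaptureDevice.preferredMicrophoneMode
print("Preferred microphone mode:", mode.rawValue)

// ...and ask the system to show the Control Center picker where the user
// switches between Standard, Voice Isolation, and Wide Spectrum.
AVCaptureDevice.showSystemUserInterface(.microphoneModes)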
I am using AVAudioSession and AVAudioRecorder methods to acquire the data through the audio capture hardware. This code has been working well for me for over 10 years, so I do not think it is a code problem, but it could be a configuration problem caused by the new hardware in the iPhone 14, although I have not found anything in the documentation.
Examples from various devices and a simulator are shown below for microphone. Does anyone have an idea what may be causing this problem?
(Microphone spectra for: iPhone SE 3rd Gen, iPad Gen 9, iPad Pro 11 in, iPhone 14 Pro, iPad 10th Generation Simulator.)
Post not yet marked as solved
Hi, I want to record audio from the keyboard extension of my parent app.
Post not yet marked as solved
I recently released my first ShazamKit app, but there is one thing that still bothers me.
When I started, I followed the steps as documented by Apple here: https://developer.apple.com/documentation/shazamkit/shsession/matching_audio_using_the_built-in_microphone
However, when I ran this on an iPad with this configuration, I received a lot of high-pitched feedback noise. I got it to work by commenting out the output node and format and only using the input.
But now I want to be able to recognise the song that's playing on the device that has my app open, and I am wondering if I need the output nodes for that, or if I can do something else to prevent the mic feedback from happening.
In short:
What can I do to prevent feedback from happening?
Can I use the output of a device to recognise songs or do I just need to make sure that the microphone can run at the same time as playing music?
Other than that, I really love the ShazamKit API and can highly recommend having a go with it!
This is the code as documented in the link above (I just added comments marking what broke it for me):
func configureAudioEngine() {
    // Get the native audio format of the engine's input bus.
    let inputFormat = audioEngine.inputNode.inputFormat(forBus: 0)

    // THIS CREATES FEEDBACK ON IPAD PRO
    let outputFormat = AVAudioFormat(standardFormatWithSampleRate: 48000, channels: 1)

    // Create a mixer node to convert the input.
    audioEngine.attach(mixerNode)

    // Attach the mixer to the microphone input and the output of the audio engine.
    audioEngine.connect(audioEngine.inputNode, to: mixerNode, format: inputFormat)
    // THIS CREATES FEEDBACK ON IPAD PRO
    audioEngine.connect(mixerNode, to: audioEngine.outputNode, format: outputFormat)

    // Install a tap on the mixer node to capture the microphone audio.
    mixerNode.installTap(onBus: 0,
                         bufferSize: 8192,
                         format: outputFormat) { buffer, audioTime in
        // Add captured audio to the buffer used for making a match.
        self.addAudio(buffer: buffer, audioTime: audioTime)
    }
}
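A possible variant, offered only as an untested sketch: keep the converter mixer in the chain, but route it through the engine's main mixer and mute that, so the microphone is never played back audibly while the tap still receives audio for matching (mixerNode and addAudio(buffer:audioTime:) are the same names as in the code above):

func configureAudioEngineWithoutMonitoring() {
    let inputFormat = audioEngine.inputNode.inputFormat(forBus: 0)
    // Mono format at the hardware rate to avoid an implicit rate conversion.
    let tapFormat = AVAudioFormat(standardFormatWithSampleRate: inputFormat.sampleRate,
                                  channels: 1)

    audioEngine.attach(mixerNode)
    audioEngine.connect(audioEngine.inputNode, to: mixerNode, format: inputFormat)

    // Route through the main mixer (already connected to the output) but mute it,
    // so nothing reaches the speakers and no feedback loop can form.
    audioEngine.connect(mixerNode, to: audioEngine.mainMixerNode, format: tapFormat)
    audioEngine.mainMixerNode.outputVolume = 0

    mixerNode.installTap(onBus: 0, bufferSize: 8192, format: tapFormat) { buffer, audioTime in
        // The tap still receives the microphone audio for making a match.
        self.addAudio(buffer: buffer, audioTime: audioTime)
    }
}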
Post not yet marked as solved
I am using the Twilio Video package. When I accept the call, this code:
debug("applyAudioSettings => setActive")
let session: AVAudioSession = AVAudioSession.sharedInstance()
try session.setActive(true, options: AVAudioSession.SetActiveOptions.notifyOthersOnDeactivation)
debug("applyAudioSettings => After SetActive setting")
causes an error.
Post not yet marked as solved
Crashed: com.apple.TextToSpeech.SpeechThread
0 libobjc.A.dylib 0x1c20 objc_msgSend + 32
1 libobjc.A.dylib 0x15d8 AutoreleasePoolPage::releaseUntil(objc_object**) + 196
2 libobjc.A.dylib 0x4f40 objc_autoreleasePoolPop + 256
3 libobjc.A.dylib 0x329dc objc_tls_direct_base<AutoreleasePoolPage*, (tls_key)3, AutoreleasePoolPage::HotPageDealloc>::dtor_(void*) + 168
4 libsystem_pthread.dylib 0x1bd8 _pthread_tsd_cleanup + 620
5 libsystem_pthread.dylib 0x4674 _pthread_exit + 84
6 libsystem_pthread.dylib 0x16d8 _pthread_start + 160
7 libsystem_pthread.dylib 0xba4 thread_start + 8
Post not yet marked as solved
We have a VoIP application, and we set the AVAudioSession category at app launch. We need to support A2DP profiles.
do {
    try AVAudioSession.sharedInstance().setCategory(.playAndRecord, mode: .spokenAudio, options: [.mixWithOthers, .allowBluetooth, .allowBluetoothA2DP, .duckOthers])
    try AVAudioSession.sharedInstance().setActive(true)
    MCXLogVerbose("[Audio] [App] AVAudioSession activation to playAndRecord SUCCESSFUL")
} catch {
    MCXLogVerbose("[Audio] [App] AVAudioSession activation to playAndRecord FAILED")
}
We also have a framework that handles everything regarding networking, audio capture, transmission, etc. Apart from the application side, we also need to change the category from there as required.
Everything works fine while the application is in the foreground, but in the background it seems the category can't be set. ERROR: "AVAudioSession.ErrorCode.cannotInterruptOthers", meaning another session is active (I guess), but I could not find one.
As a result, after a single failure, incoming audio sometimes plays randomly from the receiver, the speaker, or an AirPod.
We are trying to switch the category between playAndRecord and playback.
do {
    try AVAudioSession.sharedInstance().setCategory(.playback, mode: .spokenAudio, options: [.mixWithOthers, .duckOthers])
    try AVAudioSession.sharedInstance().setActive(true)
    MCXLogVerbose("[Audio] [App] AVAudioSession activation to playback SUCCESSFUL")
} catch {
    MCXLogVerbose("[Audio] [App] AVAudioSession activation to playback FAILED")
}
A few points of confusion:
Why does it not fail while in the foreground?
Why can a simple, straightforward demo application set the category from the background without any failure?
It seems iOS 15 did not produce audio routing errors even when the session failed to change its category. What changed?
Am I missing any subtle steps?
Also, AirPlay shows the wrong port as selected.
Post not yet marked as solved
The aurioTouch sample code runs OK on iOS 15 devices, but on iOS 16 the inNumberFrames passed to performRender always becomes 1. On iOS 15, inNumberFrames is usually 512 or 1024, sometimes smaller or bigger, but never 1.
// Render callback function
static OSStatus performRender(void                        *inRefCon,
                              AudioUnitRenderActionFlags  *ioActionFlags,
                              const AudioTimeStamp        *inTimeStamp,
                              UInt32                      inBusNumber,
                              UInt32                      inNumberFrames,
                              AudioBufferList             *ioData)
I know aurioTouch is old sample code, but it runs OK on iOS 15, so I suppose there must be some difference between iOS 15 and iOS 16.
Any help on this issue would be appreciated.
Post not yet marked as solved
I'm making an app that has an alarm clock function that allows a user to set a time for their alarm sound to play. I've seen apps like Alarmy and Sleep cycle be able to play sounds for an indefinite period of time even when the app is in the background or the phone is on DND, with notifications silenced. How can I ensure that the alarm will play while in the background, assuming that the user does not terminate the app?
I've looked at the multiple types of background tasks available to developers, but I'm not sure which one to use, and none of them seems to serve this purpose. I know it's possible because of the current apps on the App Store, but I'm not sure what approach they're using.
With silent push notifications, the task seems to only be allowed to run for 30 seconds. Preferably, the sound would play until the user stops it.
Hi
After the iOS 16 update, users are experiencing audio distortion when attempting to change pitch and/or tempo in the app, which uses AVAudioUnitTimePitch to achieve this. Prior to iOS 16 it worked quite admirably. I have seen suggestions to change the algorithm used, but I have not been able to achieve this. Any pointers are appreciated.
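For reference, a minimal AVAudioUnitTimePitch chain as a sketch (the node and player names are illustrative, not from the original post):

import AVFoundation

let engine = AVAudioEngine()
let player = AVAudioPlayerNode()
let timePitch = AVAudioUnitTimePitch()

// Tempo and pitch are set on the effect node:
// rate is a playback-speed multiplier, pitch is in cents.
timePitch.rate = 1.25
timePitch.pitch = 300

engine.attach(player)
engine.attach(timePitch)
engine.connect(player, to: timePitch, format: nil)
engine.connect(timePitch, to: engine.mainMixerNode, format: nil)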
Post not yet marked as solved
Hi.
In Picture in Picture, a large number of work queue threads are created and CPU usage increases considerably.
This happens when startPictureInPicture() is executed on an AVPictureInPictureController created with init(contentSource: .init(...)). It does not matter how many times enqueue() is called on the AVSampleBufferDisplayLayer; the problem reproduces as soon as the Picture in Picture window appears.
Looking at the work queue threads, they appear to be run by AVTimer in AVKit, but I can't figure out how to control this.
Does anyone know anything about it?
This seems to occur on iOS 16.1 or later.
See: https://github.com/uakihir0/UIPiPView/issues/17
Post not yet marked as solved
At the alarm time, the alarm sound is fetched from the server and played, and the alarm ringtone can be up to 2 minutes long. How can we achieve this? I used a silent push notification and tried to fetch the tone from the server and play it with AVAudioPlayer. It works when the app is in the foreground, but in the background it gives an error while fetching the ringtone: "Software caused connection abort".
What would be the best solution to achieve this?
Post not yet marked as solved
I'm building a recording app, and have built an App Shortcut that allows the user to start recording via Siri ("Hey Siri, start a recording").
This works well until the audio session is activated for the first time. After that, the device no longer listens to "Hey Siri".
I'd like to have the following behaviour:
while the app is recording, the device should not listen to Siri
when the app is not recording, the device should listen to Siri
I've currently configured AVAudioSession this way:
try session.setCategory(.playAndRecord, options: [.defaultToSpeaker, .allowBluetoothA2DP, .allowAirPlay, .mixWithOthers])
I've tried switching the category back and forth between .playAndRecord and .playback when recording/not recording, but that didn't help either.
I'd also like to avoid changing the category or activating/deactivating the audio session, since it always takes a bit of time. When the audio session is configured and/or activated, the user experiences a "slow" record button: there's a noticeable delay between tapping the button and the device actually recording. When it is already configured, the record button starts recording instantly.
What impact does AVAudioSession have on Siri?