The following is my playground code. Any of the Apple audio units show the plugin view; however, anything else (e.g. Kontakt, Spitfire, etc.) does not. There is no error; the area where the view is expected is simply blank.
import AppKit
import PlaygroundSupport
import AudioToolbox
import AVFoundation
import CoreAudioKit

let manager = AVAudioUnitComponentManager.shared()
let description = AudioComponentDescription(componentType: kAudioUnitType_MusicDevice,
                                            componentSubType: 0,
                                            componentManufacturer: 0,
                                            componentFlags: 0,
                                            componentFlagsMask: 0)

var deviceComponents = manager.components(matching: description)
var names = deviceComponents.map { $0.name }

let pluginName: String = "AUSampler" // This works
//let pluginName: String = "Kontakt"  // This does not

var plugin = deviceComponents.filter { $0.name.contains(pluginName) }.first!
print("Plugin name: \(plugin.name)")

var customViewController: NSViewController?

AVAudioUnit.instantiate(with: plugin.audioComponentDescription, options: []) { avAudioUnit, error in
    guard error == nil else {
        print("Error: \(error!.localizedDescription)")
        return
    }
    let ilip = avAudioUnit!.auAudioUnit.isLoadedInProcess
    print("Loaded in process: \(ilip)")
    print("AudioUnit successfully created.")

    let audioUnit = avAudioUnit!.auAudioUnit
    audioUnit.requestViewController { vc in
        if let viewCtrl = vc {
            customViewController = viewCtrl
            let b = viewCtrl.view.bounds // (currently unused)
            PlaygroundPage.current.liveView = viewCtrl
            print("Successfully added view controller.")
        } else {
            print("Failed to load controller.")
        }
    }
}
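For what it's worth, one thing that sometimes matters with third-party units is that they load out of process, and the returned view controller can come back with a zero-size view, which shows up as a blank area. Below is a minimal sketch of the same instantiation forcing out-of-process loading and giving the view an explicit size before assigning the live view. The .loadOutOfProcess option and the fallback size are assumptions for illustration, not a confirmed fix.

AVAudioUnit.instantiate(with: plugin.audioComponentDescription,
                        options: [.loadOutOfProcess]) { avAudioUnit, error in
    guard let avAudioUnit = avAudioUnit, error == nil else { return }
    avAudioUnit.auAudioUnit.requestViewController { vc in
        guard let vc = vc else { return }
        // Some plug-in views report zero-size bounds until they are given a frame.
        let size = vc.preferredContentSize == .zero
            ? NSSize(width: 800, height: 600)   // arbitrary fallback size
            : vc.preferredContentSize
        vc.view.setFrameSize(size)
        customViewController = vc               // keep a strong reference
        PlaygroundPage.current.liveView = vc
    }
}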
AudioToolbox
Record or play audio, convert formats, parse audio streams, and configure your audio session using AudioToolbox.
Posts under the AudioToolbox tag (41 posts)
Periodically when testing, I run into a situation where the app hangs and beach-balls forever when using AVAudioEngine.
This seems to be logged when the effect happens:
When this happens, if I pause the debugger, it's hanging at a call to:
[engine connect:playerNode
to:engine.mainMixerNode
format:buffer.format];
#0 0x000000019391ca9c in __psynch_mutexwait ()
#1 0x0000000104d49100 in _pthread_mutex_firstfit_lock_wait ()
#2 0x0000000104d49014 in _pthread_mutex_firstfit_lock_slow ()
#3 0x00000001938928ec in std::__1::recursive_mutex::lock ()
#4 0x00000001ef80e988 in CADeprecated::RealtimeMessenger::_PerformPendingMessages ()
#5 0x00000001ef818868 in AVAudioNodeTap::Uninitialize ()
#6 0x00000001ef7fdc68 in AUGraphNodeBase::Uninitialize ()
#7 0x00000001ef884f38 in AVAudioEngineGraph::PerformCommand ()
#8 0x00000001ef88e780 in AVAudioEngineGraph::_Connect ()
#9 0x00000001ef8b7e70 in AVAudioEngineImpl::Connect ()
#10 0x00000001ef8bc05c in -[AVAudioEngine connect:to:format:] ()
Currently, all my audio-engine-related calls are on the main queue (though I am curious about this https://forums.developer.apple.com/forums/thread/123540?answerId=816827022#816827022).
In any case, anyone know where I'm going wrong here?
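Since the backtrace shows the hang inside AVAudioNodeTap::Uninitialize while connect is taking a lock, here is a hedged sketch of the kind of mitigation sometimes suggested: funnel every graph mutation through one serial context and remove any installed tap before rewiring. The queue and the ordering are assumptions on my part, not a documented requirement.

import AVFoundation

let graphQueue = DispatchQueue(label: "audio.graph.mutations")  // hypothetical

func reconnect(player: AVAudioPlayerNode, in engine: AVAudioEngine, format: AVAudioFormat) {
    graphQueue.sync {
        // Remove any tap on the mixer before touching the graph, since the
        // backtrace above stalls tearing a tap down during the connect.
        engine.mainMixerNode.removeTap(onBus: 0)
        engine.pause()
        engine.connect(player, to: engine.mainMixerNode, format: format)
        do { try engine.start() } catch { print("engine restart failed: \(error)") }
    }
}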
It’s been established that generally speaking background apps cannot record audio while the foreground app is already reading audio data from the microphone, but are there exceptions? For instance, is there an exception for certain Apple apps?
If so, and there's a special exception that most programmers don't know about but some of Apple's engineers (and perhaps some hackers) do, wouldn't the mechanism that allows it eventually be exploited?
I'd like to know:
Let's say there's a backgrounded app which has microphone access, such as Signal or SoundHound or Shazam. It's established that these apps are allowed to record audio in the user's environment even after being backgrounded, seemingly for as long as they want and even upload that sound data.
But can they ALSO continue recording even while another app that is in the foreground is using the microphone, such as the Phone app or Signal?
I've been generating new Audio Unit Extension apps with Xcode 16 (and newer), and although they generally work initially, it is easy (although I'm not sure how to do it reliably) to cause the app to no longer be able to instantiate the audiounit. Generally the call to AVAudioUnit.findComponent fails and SimplePlayEngine hits the fatalError("Failed to find component with type...")
In the most recent project, merely adding files to the extension (without making any use of them) caused it to go off the rails.
If I "Archive" the app+plugin, there is no audio unit extension in the bundle.
If I switch to the audio unit extension target and build it, it's fine. If I look at the build folder in Library/Developer/Xcode/project_folder, the extension_name.appex is there.
Any ideas? If I can coax an unmodified audio unit extension project to exhibit this behavior I'll attach it here. Right now what I have has code I don't want to share.
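In the meantime, a quick way I use to check whether the system sees the extension at all, independent of SimplePlayEngine, is to enumerate registered components. This is only a diagnostic sketch; adjust componentType to whatever your extension actually is (effect, instrument, etc.).

import AVFoundation
import AudioToolbox

var anyInstrument = AudioComponentDescription(componentType: kAudioUnitType_MusicDevice,
                                              componentSubType: 0,
                                              componentManufacturer: 0,
                                              componentFlags: 0,
                                              componentFlagsMask: 0)
// List every registered component of that type and see whether the extension shows up.
for component in AVAudioUnitComponentManager.shared().components(matching: anyInstrument) {
    print(component.manufacturerName, component.name, component.versionString)
}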
Hello
We have an application that play some sound via the system sound APIs from the AudioToolbox framework.
AudioServicesCreateSystemSoundID(url as CFURL, &soundID)
AudioServicesPlaySystemSoundWithCompletion(soundID)
Our code makes sure that an active audio session is available before playing the system sound. But when the device is connected to a Bluetooth A2DP device, the sound is played through the device speaker and not through the Bluetooth A2DP device.
Our AVAudioSession is configured with the following category options:
[.allowBluetooth, .defaultToSpeaker, .allowBluetoothA2DP]
Sounds played from AVAudioPlayer are routed to the Bluetooth A2DP device with similar code.
Is this a bug in the AudioToolbox framework?
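While waiting for an answer, the workaround we are considering (a sketch only, since AVAudioPlayer already follows the A2DP route in our testing) is to play the short cue through AVAudioPlayer with the same session options. The file name below is a placeholder.

import AVFoundation

func playCue() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord,
                            options: [.allowBluetooth, .defaultToSpeaker, .allowBluetoothA2DP])
    try session.setActive(true)

    let url = Bundle.main.url(forResource: "cue", withExtension: "wav")!  // hypothetical asset
    let player = try AVAudioPlayer(contentsOf: url)
    player.play()
    // Keep `player` alive for the duration of playback in real code.
}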
I'm trying to make an app that is able to quietly run in the background. It needs to detect other apps' or the system's incoming video and/or audio, using only on-device resources to determine if it might be a scam caller.
It will tap into an escalating cascade of resources to do so. For video/image scam detection, it uses OpenCV to detect faces, then refers to a known database of reported scam imagery. For audio scam calls, we defer to known techniques of voice modulation in frequency and/or amplitude. Each video and/or audio result will be relayed via notification banner as well as recorded in-app. Crucially, if the results are uncertain, users have the option to submit it to a global collaborative cloud database for investigative teams; 60 second audio snippets or series of images where faces were detected (60 second equivalent).
In the end, we expect to deploy this app across most parts of Asia and Africa, thereby protecting generations of iPhone and iPad users.
However, we have not been able to find a method that does this, and we have found no documentation or correspondence that provides such technical guidance.
Please assist.
I’m looking to add DAW-like capabilities to my macOS music app, and AVAudioEngine seems like the right tool for the job.
However, I haven’t been able to find any documentation on how to save the user’s AVAudioEngine configuration—specifically the connections between nodes and the internal states of each node—to a file.
Does AVAudioEngine provide any API for saving and restoring this state, or does it need to be handled manually? If it’s manual, are there any sample "DAW" apps or resources that demonstrate how this can be implemented?
Any guidance would be greatly appreciated.
Thanks,
BD
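To make the question concrete, this is the kind of manual approach I have been considering, assuming there is no built-in serialization: keep my own Codable description of the connections, and capture each unit's fullState dictionary for the node internals. The record type and helper names are mine, not an AVAudioEngine API.

import AVFoundation

struct ConnectionRecord: Codable {       // hypothetical type
    let sourceNodeID: String
    let destinationNodeID: String
    let bus: Int
}

func snapshotState(of unit: AVAudioUnit) -> Data? {
    // fullState is a [String: Any]? property vended by AUAudioUnit.
    guard let state = unit.auAudioUnit.fullState else { return nil }
    return try? PropertyListSerialization.data(fromPropertyList: state, format: .binary, options: 0)
}

func restoreState(_ data: Data, into unit: AVAudioUnit) {
    if let state = try? PropertyListSerialization.propertyList(from: data, options: [], format: nil)
        as? [String: Any] {
        unit.auAudioUnit.fullState = state
    }
}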
Hi,
I use AudioQueueNewInput() with my very own run loop and dedicated thread. But now it doesn't show the mic alert window.
How do I fix this?
AudioQueueNewInput(&(core_audio_port->record_format),
                   ags_core_audio_port_handle_input_buffer,
                   core_audio_port,
                   ags_core_audio_port_input_run_loop, kCFRunLoopDefaultMode,
                   0,
                   &(core_audio_port->record_aq_ref));
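One thing I am experimenting with (a sketch only, assuming a macOS target with NSMicrophoneUsageDescription in Info.plist) is requesting microphone access explicitly before creating the input queue, so the consent alert is driven by this call rather than by the capture itself:

import AVFoundation

AVCaptureDevice.requestAccess(for: .audio) { granted in
    print("microphone access granted: \(granted)")
    // Only call AudioQueueNewInput(...) / AudioQueueStart(...) once granted is true.
}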
Hello,
Using ShazamKit, based on a shazam catalog result, would it be possible to detect the audio-recorded FPS (speed)?
I'm thinking that the Shazam catalog, which was created from an audio file, could be compared against live-recorded audio to determine its speed.
Thank you!
I'm running into an issue where in some cases, when the AUHostingServiceXPC_arrow process is shut down by Logic, the process is terminated abruptly without calling AP_Close on all of the plugins hosted in the process. In our case, we have filesystem resources we need to clean up, and having stale files around from the last run can cause issues in new sessions, so this leak is having some pretty gnarly effects.
I can reproduce the issue using only Apple sample plugins, and it seems to be triggered by a timeout. If I have two different AU plugins in the session, and I add a 1 second sleep to the destructor of one of the sample plugins, Logic will force terminate the process and the remaining destructors are not called (even for the plugins without the 1 second sleep).
Is there a way to avoid this behavior? Or to safely clean up our plugin even if other plugins in the session take a second to tear down?
Hello, I have a question regarding the voice and sound recognition features on the iPhone 15 Pro.
The iPhone 15 Pro is equipped with four microphones, and I understand that for features like Apple’s sound recognition and when invoking Siri, the microphone(s) must always be active. My question is whether the device uses a single microphone (mono channel) for these functions or if multiple microphones are activated simultaneously.
I would appreciate clarification on how the microphones are utilized in sound and voice recognition features.
Thank you for your assistance.
Best regards.
We have been testing the iOS AAC-LC encoder using the AudioToolbox framework. No matter whether we set mManufacturer to kAppleHardwareAudioCodecManufacturer or kAppleSoftwareAudioCodecManufacturer, it always runs on the CPU.
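For reference, this is how we are asking for the hardware implementation, via AudioConverterNewSpecific with an AudioClassDescription naming the manufacturer. This is a sketch of our setup rather than a recommendation; it is also possible that recent devices simply do not expose a separate hardware AAC-LC encoder, in which case this call returns an error rather than silently falling back.

import AudioToolbox

func makeHardwareAACEncoder(from src: AudioStreamBasicDescription,
                            to dst: AudioStreamBasicDescription) -> AudioConverterRef? {
    var source = src
    var destination = dst
    var classDescription = AudioClassDescription(mType: kAudioEncoderComponentType,
                                                 mSubType: kAudioFormatMPEG4AAC,
                                                 mManufacturer: kAppleHardwareAudioCodecManufacturer)
    var converter: AudioConverterRef?
    // Request specifically the hardware encoder class; check the status for errors.
    let status = AudioConverterNewSpecific(&source, &destination, 1, &classDescription, &converter)
    print("AudioConverterNewSpecific status: \(status)")
    return converter
}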
Hello,
As explained in this link, the AVAssetReaderTrackOutput.copyNextSampleBuffer() returns a CMSampleBuffer in linear PCM audio format.
I want to place this audio buffer into an AVAssetWriterInput of type kAudioFormatMPEG4AAC, but I can't manage the conversion.
Could you help me by providing an extension that returns a CMSampleBuffer converted from linear PCM audio format to kAudioFormatMPEG4AAC?
Example:
extension CMSampleBuffer {
    func fromPCMToAAC() -> CMSampleBuffer? {
        // Here, get a new AudioStreamBasicDescription, create a CMSampleBuffer and a CMBlockBuffer
    }
}
I've tried multiple times but without success.
Software: iOS 18.1
Xcode: 16.0
Thank you!
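The alternative I am considering, in case a drop-in fromPCMToAAC() turns out to be impractical, is to skip the manual CMSampleBuffer conversion and let AVAssetWriter do the encode: configure the AVAssetWriterInput with AAC output settings and append the linear-PCM sample buffers directly. A sketch follows; the sample rate, channel count, and bit rate are placeholders.

import AVFoundation
import AudioToolbox

let audioSettings: [String: Any] = [
    AVFormatIDKey: kAudioFormatMPEG4AAC,
    AVSampleRateKey: 44_100,
    AVNumberOfChannelsKey: 2,
    AVEncoderBitRateKey: 128_000
]
let writerInput = AVAssetWriterInput(mediaType: .audio, outputSettings: audioSettings)
writerInput.expectsMediaDataInRealTime = false

// Later, in the copy loop, the LPCM buffers from AVAssetReaderTrackOutput are
// appended as-is; the writer performs the PCM-to-AAC encode internally:
// while let sampleBuffer = trackOutput.copyNextSampleBuffer(),
//       writerInput.isReadyForMoreMediaData {
//     writerInput.append(sampleBuffer)
// }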
Since upgrading to tvOS 18, AudioConverterFillComplexBuffer isn't working for me when converting a stream with these formats. It does work for decoding AAC, however.
https://developer.apple.com/documentation/audiotoolbox/1503098-audioconverterfillcomplexbuffer?language=objc
I pass a valid ioOutputDataPacketSize in, but it always comes out as zero.
Has anyone else observed this too?
I wonder if this is related to the issue being discussed widely about 5.1 sound being broken for many people after upgrading to tvOS 18?
https://discussions.apple.com/thread/255769102?login=true&sortBy=rank
EDIT: further information; the callback gets called once, asking for 1 packet (which is ok). I give it one packet and return noErr. However, after this, the callback is never invoked again. Must be a bug?
EDIT2: the same code continues to work correctly on macOS in decoding the same audio stream.
AudioQueueObject.cpp:1580 BuildConverter: AudioConverterNew returned -50
from: 0 ch, 16000 Hz, .... (0x00000000) 0 bits/channel, 0 bytes/packet, 0 frames/packet, 0 bytes/frame
to: 2 ch, 16000 Hz, Int16, interleaved
AQMEIO_HAL.cpp:2773 iOSSimulatorAudioDevice-15111-0: Abandoning I/O cycle because reconfig pending (1).
HALC_ProxySystem.cpp:163 HALC_ProxySystem::GetObjectInfo: got an error from the server, Error: 560947818 (!obj)
HALC_ShellObject.mm:213 HALC_ShellObject::HasProperty: there is no proxy object
AudioHardware-mac-imp.cpp:1224 AudioObjectRemovePropertyListener: no object with given ID 160
HALSystem.cpp:2216 AudioObjectPropertiesChanged: no such object
Why? I can't record on iOS 17; recording worked normally before, on iOS 16.
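The first log line shows the queue's source format as "0 ch, 16000 Hz, ... 0 bits/channel, 0 bytes/packet", i.e. the AudioStreamBasicDescription handed to AudioQueueNewInput appears to be zeroed, which is exactly what produces -50 (paramErr) from AudioConverterNew. A sketch of a fully populated 16 kHz, mono, 16-bit integer PCM format follows; mono and 16 kHz are assumptions taken from the log, so adjust to the real capture format.

import AudioToolbox

var recordFormat = AudioStreamBasicDescription(
    mSampleRate: 16_000,
    mFormatID: kAudioFormatLinearPCM,
    mFormatFlags: kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked,
    mBytesPerPacket: 2,
    mFramesPerPacket: 1,
    mBytesPerFrame: 2,
    mChannelsPerFrame: 1,
    mBitsPerChannel: 16,
    mReserved: 0
)
// Pass &recordFormat to AudioQueueNewInput so that none of the fields arrive as zero.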
Calls to ExtAudioFileRead are throwing OSStatus 561145203 (AVAudioSessionErrorCodeResourceNotAvailable) on iOS and iPadOS 18 -- earlier versions of iOS have not exhibited this behavior. This is a longstanding code path that has seen a spike of these error codes since iOS 18's release.
The following is also printed to the Xcode 16 console:
Hello! The new lower latency support for AirPods in Game Mode is impressive, but I'm not sure of the best way to handle the transition into/out of Game Mode while audio is playing. In order to lower the latency, the system appears to drop some number of samples, with the result being a good deal less latency. My use case is macOS where it's easier to switch in/out of the fullscreen game (a simple swipe left), thus causing more issues for Game Mode since the audio is playing the entire time. It would be nice if offscreen games could remain in game mode, but I understand not wanting to give developers that control.
Are there any best practices for avoiding or masking the audio glitch caused by this skip-ahead? Is there a system event I can receive to know when Game Mode is about to be enabled or disabled, where I could perhaps fade out the audio? My callback checks the inTimestamp->mSampleTime value to detect gaps, but it only rarely detects a Game Mode gap, even though the audio skip-ahead always happens.
BTW, I am currently only developing on macOS (15.0) and I'm working at a low level with AudioUnit callbacks and a SpatialMixer. I am not currently using any higher-level audio APIs.
And here's a few questions I don't necessarily expect answers to, but it doesn't hurt to ask: Is there any additional technical details about how this latency reduction works, or exactly how much of a reduction is achieved (or said another way, how many samples are dropped)? How much does this affect AirPods battery life? And finally, is there a way to query the actual latency value? I check the value for kAudioDevicePropertyLatency but it seems to always report 160ms for AirPods. Thanks!
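For the gap-detection part specifically, here is a sketch of the mSampleTime continuity check I described above; the tolerance is arbitrary, it only reports a discontinuity rather than detecting Game Mode itself, and anything heavier than bookkeeping should stay off the render thread.

import AudioToolbox

final class GapDetector {
    private var expectedSampleTime: Float64?

    // Call from the render callback with inTimestamp.pointee.mSampleTime and the frame count.
    // Returns true when a discontinuity (e.g. a skip-ahead) was detected.
    func advance(sampleTime: Float64, frameCount: UInt32) -> Bool {
        defer { expectedSampleTime = sampleTime + Float64(frameCount) }
        guard let expected = expectedSampleTime else { return false }
        return abs(sampleTime - expected) > 1.0   // tolerance is arbitrary
    }
}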
I recently upgraded my iPhone 13 to iOS 18, and I'm facing two issues.
I didn't get the call recording feature. When I make a call and the person picks up, the call recording icon is not showing.
The option to take notes during a call has disappeared. Earlier, the notes option used to be available on the call screen itself.
Please help fix these two issues, or let me know if it's possible to resolve them from my end.