Post not yet marked as solved
I receive a buffer from [AVSpeechSynthesizer convertToBuffer:fromBuffer:] and want to schedule it on an AVAudioPlayerNode.
The player node's output format needs to be something the next node can handle, and as far as I understand, most nodes can handle a canonical format.
The format provided by AVSpeechSynthesizer is not something that AVAudioMixerNode supports.
So the following:
AVAudioEngine *engine = [[AVAudioEngine alloc] init];
self.playerNode = [[AVAudioPlayerNode alloc] init];
AVAudioFormat *format = [[AVAudioFormat alloc]
    initWithSettings:utterance.voice.audioFileSettings];
[engine attachNode:self.playerNode];
[engine connect:self.playerNode to:engine.mainMixerNode format:format];
Throws an exception:
Thread 1: "[[busArray objectAtIndexedSubscript:(NSUInteger)element] setFormat:format error:&nsErr]: returned false, error Error Domain=NSOSStatusErrorDomain Code=-10868 \"(null)\""
I am looking for a way to obtain the canonical format for the platform so that I can use AVAudioConverter to convert the buffer.
Since different platforms have different canonical formats, I imagine there should be some library way of doing this. Otherwise each developer will have to redefine it for every platform the code runs on (macOS, iOS, etc.) and keep it updated when it changes.
I could not find any constant or function that can produce such a format, ASBD, or settings dictionary.
The smartest way I could think of, which does not work:
AudioStreamBasicDescription toDesc;
FillOutASBDForLPCM(toDesc, [AVAudioSession sharedInstance].sampleRate,
                   2, 16, 16, kAudioFormatFlagIsFloat, kAudioFormatFlagsNativeEndian);
AVAudioFormat *toFormat = [[AVAudioFormat alloc] initWithStreamDescription:&toDesc];
Even the example for iPhone provided in the documentation linked above uses kAudioFormatFlagsAudioUnitCanonical and AudioUnitSampleType, both of which are deprecated.
So what is the correct way to do this?
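For reference, a hedged sketch of one possible direction (not a confirmed answer): AVAudioFormat's "standard format" initializer produces deinterleaved Float32 PCM, which is the canonical format AVAudioEngine nodes generally accept, without touching deprecated flags. The sample rate below is a placeholder; a real app would query the session or device rate.

```swift
import AVFoundation

// "Standard" format = deinterleaved Float32 PCM, the canonical format
// most AVAudioEngine nodes can handle.
let sampleRate: Double = 44_100 // placeholder; query AVAudioSession/device in real code
guard let canonicalFormat = AVAudioFormat(standardFormatWithSampleRate: sampleRate,
                                          channels: 2) else {
    fatalError("could not create canonical format")
}
// canonicalFormat.commonFormat == .pcmFormatFloat32
// It can then serve as the target of an AVAudioConverter:
// let converter = AVAudioConverter(from: speechFormat, to: canonicalFormat)
```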
Post not yet marked as solved
The very sparse and outdated 'documentation' shared by Apple about CoreAudio and CoreMIDI server plugins suggested using syslog for logging.
At least since Big Sur, syslog output doesn't end up anywhere.
(So, while you seem to think it's OK not to document your APIs, you could at least remove APIs that no longer work! Not doing so causes unnecessary and frustrating bug hunting.)
Should we replace syslog with unified logging?
For debugging purposes only, our plugins write to our own log files. Where can I find suitable locations? Where is this documented?
Thanks,
hagen.
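For what it's worth, a minimal unified-logging sketch (macOS 11+); the subsystem and category strings are placeholders, not anything from Apple's plug-in documentation:

```swift
import os

// Messages land in the unified log and can be followed live with:
//   log stream --predicate 'subsystem == "com.example.halplugin"'
let logger = Logger(subsystem: "com.example.halplugin", category: "io")
logger.debug("plug-in property query")  // visible only when debug level is enabled
logger.error("stream start failed")     // persisted at error level
```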
Post not yet marked as solved
Hi,
I've released an open-source AUv3 MIDI processor plugin for iOS and macOS that records and plays MIDI messages in a sample-accurate fashion and never applies any quantization.
I've tested this plugin with 120 beta testers and everything seemed to work fine. However, now that I've released it, there seems to be a problem in Logic Pro X on some Mac computers with MIDI FX processor plugins that are using Catalyst.
You can find my plugin here:
http://uwyn.com/mtr/
... and the source code here:
https://github.com/gbevin/MIDITapeRecorder
When I trace the AUv3 instantiation, I see Logic Pro X obtaining the internalRenderBlock several times, but then never calling it. This means there's no render callback, and no MIDI or parameter events are ever received.
I've talked to the developer of ZOA, which is also a MIDI processor plugin using Catalyst and he's running into exactly the same problem: https://www.audiosymmetric.com/zoa.html
Another developer who's working on a MIDI processor plugin has also been trying to track this down for weeks.
When I test this on my M1 Max MacBook Pro, internalRenderBlock is always called; however, on my M1 MacBook Air and Intel 2019 MacBook Pro, it is never called.
Any thoughts or ideas to work around this would be really helpful.
Thanks!
Post not yet marked as solved
We're trying to join our audio worker threads to a CoreAudio HAL audio workgroup, but haven't managed to get this working yet.
Here's what we do:
Fetch audio workgroup handle from the CoreAudio device:
UInt32 Count = sizeof(os_workgroup_t);
os_workgroup_t pWorkgroup = NULL;
::AudioDeviceGetProperty(SomeCoreAudioDeviceHandle, kAudioUnitScope_Global, 0,
                         kAudioDevicePropertyIOThreadOSWorkgroup, &Count, &pWorkgroup);
This succeeds on an M1 Mac mini for the "Apple Inc.: Mac mini Speakers" device on macOS 11.1.
The returned handle looks fine as well:
[(NSObject*)pWorkgroup debugDescription] returns
"{xref = 2, ref = 1, name = AudioHALC Workgroup}"
Join some freshly created worker threads to the workgroup via:
os_workgroup_join_token_s JoinToken;
int Result = ::os_workgroup_join(pWorkgroup, &JoinToken);
The problem:
The Result from os_workgroup_join is always EINVAL (Invalid argument), whatever we do. Both arguments, the workgroup handle and the join token, are definitely valid. And the device hasn't been stopped or reinitialized at this point, so the workgroup should not be cancelled?
Has anyone else managed to get this working? All examples out there seem to successfully use the AUHAL workgroup instead of the audio device HAL API.
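For anyone comparing notes, here is a hedged sketch (an untested assumption, not a confirmed fix) of fetching the same workgroup through the non-deprecated AudioObjectGetPropertyData call and testing for cancellation first, since a cancelled workgroup is one documented cause of EINVAL from os_workgroup_join:

```swift
import CoreAudio

// Hedged sketch: fetch the device's IO thread workgroup via the modern
// property API and bail out early if the workgroup has been cancelled.
func copyIOWorkgroup(for device: AudioObjectID) -> os_workgroup_t? {
    var address = AudioObjectPropertyAddress(
        mSelector: kAudioDevicePropertyIOThreadOSWorkgroup,
        mScope: kAudioObjectPropertyScopeGlobal,
        mElement: kAudioObjectPropertyElementMaster)
    var workgroup: os_workgroup_t?
    var size = UInt32(MemoryLayout<os_workgroup_t?>.size)
    let status = AudioObjectGetPropertyData(device, &address, 0, nil, &size, &workgroup)
    guard status == noErr, let wg = workgroup else { return nil }
    // os_workgroup_join is documented to fail with EINVAL on a cancelled workgroup.
    guard !os_workgroup_testcancel(wg) else { return nil }
    return wg
}
```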
Post not yet marked as solved
var volumePropertyAddress = AudioObjectPropertyAddress(
    mSelector: kAudioHardwareServiceDeviceProperty_VirtualMainVolume,
    mScope: kAudioDevicePropertyScopeOutput,
    mElement: kAudioObjectPropertyElementMaster
)
var volume: Float32 = 0.5
let status = AudioObjectSetPropertyData(deviceId, &volumePropertyAddress, 0, nil,
                                        UInt32(MemoryLayout<Float32>.size), &volume)
Then the app freezes. Is it not possible to call the AudioObjectSetPropertyData method on the main thread?
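In case it's useful, a hedged sketch of moving the call off the main thread (deviceId and the target volume are placeholders):

```swift
import Foundation
import CoreAudio
import AudioToolbox

// Hedged sketch: perform the potentially blocking HAL call on a
// background queue, then hop back to the main queue for any UI work.
DispatchQueue.global(qos: .userInitiated).async {
    var address = AudioObjectPropertyAddress(
        mSelector: kAudioHardwareServiceDeviceProperty_VirtualMainVolume,
        mScope: kAudioDevicePropertyScopeOutput,
        mElement: kAudioObjectPropertyElementMaster)
    var volume: Float32 = 0.5 // placeholder target volume
    let status = AudioObjectSetPropertyData(deviceId, &address, 0, nil,
                                            UInt32(MemoryLayout<Float32>.size),
                                            &volume)
    DispatchQueue.main.async {
        print("set volume status: \(status)")
    }
}
```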
Post not yet marked as solved
Hi,
I have a problem with an AU host (based on Audio Toolbox/Core Audio, not AVFoundation) when running on macOS 11 or later on Apple Silicon: it crashes after some operations in the GUI. The weird thing is, it crashes in the IOThread. Could this be caused by some inappropriate operation in the GUI (e.g. outside the main thread) that affects the IOThread? Sounds quite improbable to me, and I did not find anything suspicious in the code.
There are two logs in the debugger:
[AUHostingService Client] connection interrupted.
rt_sender::signal_wait failed: 89
...
And here is the crash log:
...
Thanks,
Tomas
Post not yet marked as solved
Hi all,
I'm using AVAudioEngine to play multiple nodes at various times (like GarageBand for example).
So far I managed to play the various files at the right time using this code:
DispatchQueue.global(qos: .background).async {
    AudioManager.instance.audioEngine.attach(AudioManager.instance.mixer)
    AudioManager.instance.audioEngine.connect(AudioManager.instance.mixer, to: AudioManager.instance.audioEngine.outputNode, format: nil)

    // !important - start the engine *before* setting up the player nodes
    try! AudioManager.instance.audioEngine.start()

    for audioFile in data {
        // Create and attach the audioPlayer node for this file
        let audioPlayer = AVAudioPlayerNode()
        AudioManager.instance.audioEngine.attach(audioPlayer)
        AudioManager.instance.nodes.append(audioPlayer)
        // Notice the output is the mixer in this case
        AudioManager.instance.audioEngine.connect(audioPlayer, to: AudioManager.instance.mixer, format: nil)

        let fileUrl = audioFile.audio.fileUrl
        if let file: AVAudioFile = try? AVAudioFile(forReading: fileUrl) {
            let time = audioFile.start > 0 ? AudioManager.instance.secondsToAVAudioTime(hostTime: mach_absolute_time(), time: Double(audioFile.start / CGFloat.secondsToPoints)) : nil
            audioPlayer.scheduleFile(file, at: time, completionHandler: nil)
            audioPlayer.play(at: time)
        }
    }
}
Basically my data object contains structs that have a reference to an audio fileURL and the startPosition at which it should begin.
That works great.
Now I would like to export all these tracks, mixed into a single file, and save it to the user's Documents directory.
How can I achieve this?
Thanks for your help.
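One possible direction (a hedged sketch, not the poster's code): AVAudioEngine's offline manual rendering mode can render the same graph faster than real time and write the result with AVAudioFile. Here `engine`, `totalFrames`, and the output file name are placeholders for values the app already knows, and the player nodes are assumed to be scheduled the same way as above.

```swift
import AVFoundation

// Hedged sketch: render the engine graph offline and write it to a file
// in the user's Documents directory.
func exportMix(engine: AVAudioEngine, totalFrames: AVAudioFramePosition) throws -> URL {
    let format = engine.outputNode.outputFormat(forBus: 0)
    try engine.enableManualRenderingMode(.offline, format: format,
                                         maximumFrameCount: 4096)
    try engine.start()
    // ... schedule and play() the AVAudioPlayerNodes here, as in the code above ...

    let outputURL = FileManager.default.urls(for: .documentDirectory,
                                             in: .userDomainMask)[0]
        .appendingPathComponent("mix.caf")
    let outputFile = try AVAudioFile(forWriting: outputURL, settings: format.settings)
    let buffer = AVAudioPCMBuffer(pcmFormat: engine.manualRenderingFormat,
                                  frameCapacity: engine.manualRenderingMaximumFrameCount)!

    while engine.manualRenderingSampleTime < totalFrames {
        let frames = min(buffer.frameCapacity,
                         AVAudioFrameCount(totalFrames - engine.manualRenderingSampleTime))
        if try engine.renderOffline(frames, to: buffer) == .success {
            try outputFile.write(from: buffer)
        }
    }
    engine.stop()
    return outputURL
}
```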
Post not yet marked as solved
I'm trying to change device of the inputNode of AVAudioEngine.
To do so, I'm calling setDeviceID on its auAudioUnit. Although this call doesn't fail, something goes wrong with the output busses.
When I ask for their format, it shows 0 Hz and 0 channels, which makes the app crash when I try to connect the node to the mainMixerNode.
Can anyone explain what's wrong with this code?
avEngine = AVAudioEngine()
print(avEngine.inputNode.auAudioUnit.inputBusses[0].format)
// <AVAudioFormat 0x1404b06e0: 2 ch, 44100 Hz, Float32, non-inter>
print(avEngine.inputNode.auAudioUnit.outputBusses[0].format)
// <AVAudioFormat 0x1404b0a60: 2 ch, 44100 Hz, Float32, inter>
// Now, let's change a device from headphone's mic to built-in mic.
try! avEngine.inputNode.auAudioUnit.setDeviceID(inputDevice.deviceID)
print(avEngine.inputNode.auAudioUnit.inputBusses[0].format)
// <AVAudioFormat 0x1404add50: 2 ch, 44100 Hz, Float32, non-inter>
print(avEngine.inputNode.auAudioUnit.outputBusses[0].format)
// <AVAudioFormat 0x1404adff0: 0 ch, 0 Hz, 'lpcm' (0x00000029) 32-bit little-endian float, deinterleaved>
// !!!
// Interestingly, 'inputNode' shows a different format than `auAudioUnit`
print(avEngine.inputNode.inputFormat(forBus: 0))
// <AVAudioFormat 0x1404af480: 1 ch, 44100 Hz, Float32>
print(avEngine.inputNode.outputFormat(forBus: 0))
// <AVAudioFormat 0x1404ade30: 1 ch, 44100 Hz, Float32>
Edit:
Further debugging reveals another puzzling thing.
avEngine.inputNode.auAudioUnit == avEngine.outputNode.auAudioUnit // this is true ?!
inputNode and outputNode share the same AUAudioUnit, and its deviceID is set to the speakers by default. It's so confusing to me... why would inputNode's device be a speaker?
Post not yet marked as solved
I'm trying to convert a CMSampleBuffer to an AVAudioPCMBuffer instance to be able to perform audio processing in realtime. I wrote an optional initialiser for my extension to pass a CMSampleBuffer reference. Unfortunately I simply don't know how to write to the AVAudioPCMBuffer's data. Here is my code so far:
import AVKit
extension AVAudioPCMBuffer {
    static func create(from sampleBuffer: CMSampleBuffer) -> AVAudioPCMBuffer? {
        guard let description: CMFormatDescription = CMSampleBufferGetFormatDescription(sampleBuffer),
              let sampleRate: Float64 = description.audioStreamBasicDescription?.mSampleRate,
              let numberOfChannels: Int = description.audioChannelLayout?.numberOfChannels
        else { return nil }
        guard let blockBuffer: CMBlockBuffer = CMSampleBufferGetDataBuffer(sampleBuffer) else {
            return nil
        }
        let length: Int = CMBlockBufferGetDataLength(blockBuffer)
        let audioFormat = AVAudioFormat(commonFormat: .pcmFormatFloat32, sampleRate: sampleRate, channels: AVAudioChannelCount(numberOfChannels), interleaved: false)
        let buffer = AVAudioPCMBuffer(pcmFormat: audioFormat!, frameCapacity: AVAudioFrameCount(length))!
        buffer.frameLength = buffer.frameCapacity
        for channelIndex in 0..<numberOfChannels {
            guard let channel: UnsafeMutablePointer<Float> = buffer.floatChannelData?[channelIndex] else { return nil }
            for pointerIndex in 0..<length {
                let pointer: UnsafeMutablePointer<Float> = channel.advanced(by: pointerIndex)
                pointer.pointee = 100
            }
        }
        return buffer
    }
}
Does anyone know how to convert a CMSampleBuffer to an AVAudioPCMBuffer and back again? I assume there is no other way if I want to interact with AVAudioEngine, am I right?
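For what it's worth, a hedged sketch of one possible approach (not necessarily the intended one): Core Media can copy the PCM payload straight into the buffer's AudioBufferList via CMSampleBufferCopyPCMDataIntoAudioBufferList, assuming the sample buffer holds uncompressed LPCM. The function name below is real; the overall flow is a sketch.

```swift
import AVFoundation
import CoreMedia

// Hedged sketch: build an AVAudioPCMBuffer whose format matches the
// sample buffer, then let Core Media copy the PCM frames into it.
// Assumes the CMSampleBuffer contains uncompressed LPCM audio.
func pcmBuffer(from sampleBuffer: CMSampleBuffer) -> AVAudioPCMBuffer? {
    guard let description = CMSampleBufferGetFormatDescription(sampleBuffer) else { return nil }
    let frameCount = CMSampleBufferGetNumSamples(sampleBuffer)
    // Derive the AVAudioFormat straight from the CMFormatDescription.
    let format = AVAudioFormat(cmAudioFormatDescription: description)
    guard let buffer = AVAudioPCMBuffer(pcmFormat: format,
                                        frameCapacity: AVAudioFrameCount(frameCount))
    else { return nil }
    buffer.frameLength = AVAudioFrameCount(frameCount)
    // Copy the samples; returns noErr (0) on success.
    let status = CMSampleBufferCopyPCMDataIntoAudioBufferList(
        sampleBuffer, at: 0, frameCount: Int32(frameCount),
        into: buffer.mutableAudioBufferList)
    return status == noErr ? buffer : nil
}
```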
I have a situation where at any one time my AudioUnit can be represented by only a few parameters, or by literally many hundreds. This dynamic situation is under the control of the user, and the maximum number of parameters and their hierarchy cannot be predicted in advance (at least not accurately). When the parameters are to change, I set the parameterTree property by creating a new tree with the child nodes and posting KVC notifications:
// ... create childGroups ...
[self willChangeValueForKey:@"parameterTree"];
self.parameterTree = [AUParameterTree createTreeWithChildren:childGroups];
[self didChangeValueForKey:@"parameterTree"];
Most of the app's user interface and AudioUnit is coded in Swift; the engine is coded in C/C++ with an Objective-C AUAudioUnit class that acts as the go-between, hence the above.
However, there is a popular host app that crashes when I do this, and it looks like the host is hanging on to the parameterTree object it was passed originally when the AU first launches, but never updates it, even after the KVC notifications are posted.
So after that long explanation: am I doing this correctly? Or is there a solution that can rebuild a tree without making a new AUParameterTree object? If I can do that, the host in question may not crash (although it might anyway, because all the parameters have changed).
I have posted a code example to the host developer but sadly got a response which gave me the impression he was not prepared to work on a fix. Thanks!
Post not yet marked as solved
Hi,
Wondering if anyone has found a solution to the automatic volume reduction on the host computer when using the macOS native Screen Sharing application.
The volume reduction makes it nearly impossible to comfortably continue working on the host computer when any audio is involved. Is there a way to bypass this function? It seems to be the same native function that FaceTime uses to reduce the system audio volume to give priority to the application.
Please help save my speakers! Thanks.
Post not yet marked as solved
We need to develop a new "control surface mapping driver" for Logic Pro X, to match the "simple" functional requirements of our Tangerine Automation Interface.
I'm trying to find the info on how to create the "mapping driver" that will translate our interface's hardware/MIDI mapping to Logic Pro X's internal controller access (volume, mutes, automation modes, etc.).
Our interface is already recognized as a 5-port plug-and-play USB MIDI device. It can be used with the HUI mapping in Logic Pro X, but we want better control behavior with our own mapping.
Any pointer on where to look in the Apple developer section would be appreciated.
dbmdbu
Post not yet marked as solved
I’m using AVFoundation to access camera on iPad.
But with AVFoundation, CoreMedia is also imported, which in turn imports CoreAudio and CoreVideo.
Keeping privacy concerns in mind, is there any way by which I can ensure that the app is never able to access Microphone or Video Recording?
Post not yet marked as solved
I’m using AVFoundation for image capture using camera on iPad.
But I’m not using Video or Audio related functionality.
Looks like with AVFoundation, CoreMedia, CoreVideo and CoreAudio are also imported into any project.
Is there any way by which I can remove these libraries (CoreMedia, CoreVideo and CoreAudio) from my app?
I have used otool to list all the frameworks and libraries being used by my framework.
Post not yet marked as solved
I'm trying to play audio content built from NSData inside a library (.a). It works properly when my code is inside an app, but not when it is in a library: I get no error and no sound plays.
NSError *errorAudio = nil;
NSError *errorFile;

// Clear all cache
NSArray *tmpDirectory = [[NSFileManager defaultManager] contentsOfDirectoryAtPath:NSTemporaryDirectory() error:NULL];
for (NSString *file in tmpDirectory) {
    [[NSFileManager defaultManager] removeItemAtPath:[NSString stringWithFormat:@"%@%@", NSTemporaryDirectory(), file] error:NULL];
}

// Set temporary directory and temporary file
NSURL *tmpDirURL = [NSURL fileURLWithPath:NSTemporaryDirectory() isDirectory:YES];
NSURL *soundFileURL = [[tmpDirURL URLByAppendingPathComponent:@"temp"] URLByAppendingPathExtension:@"wav"];
[[NSFileManager defaultManager] createDirectoryAtURL:tmpDirURL withIntermediateDirectories:NO attributes:nil error:&errorFile];

// Write NSData to temporary file
NSString *path = [soundFileURL path];
[audioToPlay writeToFile:path options:NSDataWritingAtomic error:&errorFile];
if (errorFile) {
    // Error while writing NSData
} else {
    // Init audio player
    self.audioPlayer = [[AVAudioPlayer alloc] initWithContentsOfURL:soundFileURL error:&errorAudio];
    if (errorAudio) {
        // Audio player could not be initialized
    } else {
        // Audio player was initialized correctly
        [audioPlayer prepareToPlay];
        [audioPlayer stop];
        [audioPlayer setCurrentTime:0];
        [audioPlayer play];
    }
}
I don't check errorFile in this piece of code, but when debugging I can see that its value is nil.
My header file
#import <AudioToolbox/AudioToolbox.h>
#import <AVFoundation/AVFoundation.h>
@property(nonatomic, strong) AVAudioPlayer * audioPlayer;
My m file
#import <AudioToolbox/AudioToolbox.h>
#import <AVFoundation/AVFoundation.h>
@synthesize audioPlayer;
I've been checking for dozens of posts but cannot find any solution, it always works properly in an app, but not in a library. Any help would be greatly appreciated.
I have just begun building plugins for Logic using the AUv3 format. After a few teething problems, to say the least, I have a basic plug-in working (integrated with SwiftUI, which is handy), but the install and validation process is still buggy and bugging me!
Does the stand alone app which Xcode generates to create the plugin have to always be running separately to Logic for the AUv3 to be available? Is there no way to have it as a permanently available plugin without running that?
If anyone has any links to a decent tutorial, please share. There are very few I can find on YouTube or anywhere else, and the Apple tutorials and examples aren't great.
Post not yet marked as solved
I'm making an AUv3 MIDI plugin (aumi type) which will be a port of one of my iOS ones. My basic framework, which at this point is literally an AUv3 with busses, MIDI output set and aumi type, appears in Logic's MIDI FX drop-down 8 times. I have a couple of other bought plugins (Reaktor and Numerology) which appear in the list only once. Any ideas what I might need to change?
Post not yet marked as solved
I am using AVAudioEngine to analyse some audio tracks and quite often the `while engine.manualRenderingSampleTime < sourceFile.length {` loop takes a moment to start receiving audio data. Looking at the soundwave, it looks like the input is simply delayed. This wouldn't be a problem if the length of the buffer varied depending on this latency, but the length unfortunately always stays the same, losing the final part of the track. I took the code from the official tutorial, and this seems to happen regardless of whether I add an EQ effect or not. Actually, the two analyses (with or without EQ), done one after the other, return the same anomaly.
let format: AVAudioFormat = sourceFile.processingFormat
let engine = AVAudioEngine()
let player = AVAudioPlayerNode()
engine.attach(player)

if compress {
    let eq = AVAudioUnitEQ(numberOfBands: 2)
    engine.attach(eq)

    let lowPass = eq.bands[0]
    lowPass.filterType = .lowPass
    lowPass.frequency = 150.0
    lowPass.bypass = false

    let highPass = eq.bands[1]
    highPass.filterType = .highPass
    highPass.frequency = 100.0
    highPass.bypass = false

    engine.connect(player, to: eq, format: format)
    engine.connect(eq, to: engine.mainMixerNode, format: format)
} else {
    engine.connect(player, to: engine.mainMixerNode, format: format)
}

do {
    let maxNumberOfFrames: AVAudioFrameCount = 4096
    try engine.enableManualRenderingMode(.offline, format: format, maximumFrameCount: maxNumberOfFrames)
} catch {
    fatalError("Could not enable manual rendering mode, \(error)")
}

player.scheduleFile(sourceFile, at: nil)

do {
    try engine.start()
    player.play()
} catch {
    fatalError("Could not start engine, \(error)")
}

// buffer to which the engine will render the processed data
let buffer: AVAudioPCMBuffer = AVAudioPCMBuffer(pcmFormat: engine.manualRenderingFormat, frameCapacity: engine.manualRenderingMaximumFrameCount)!

var pi = 0

while engine.manualRenderingSampleTime < sourceFile.length {
    do {
        let framesToRender = min(buffer.frameCapacity, AVAudioFrameCount(sourceFile.length - engine.manualRenderingSampleTime))
        let status = try engine.renderOffline(framesToRender, to: buffer)
        switch status {
        case .success:
            // data rendered successfully
            let flength = Int(buffer.frameLength)
            points.reserveCapacity(pi + flength)
            if let chans = buffer.floatChannelData?.pointee {
                let left = chans.advanced(by: 0)
                let right = chans.advanced(by: 1)
                for b in 0..<flength {
                    let v: Float = max(abs(left[b]), abs(right[b]))
                    points.append(v)
                }
            }
            pi += flength
        case .insufficientDataFromInputNode:
            // applicable only if using the input node as one of the sources
            break
        case .cannotDoInCurrentContext:
            // engine could not render in the current render call, retry in next iteration
            break
        case .error:
            // error occurred while rendering
            fatalError("render failed")
        }
    } catch {
        fatalError("render failed, \(error)")
    }
}

player.stop()
engine.stop()
I thought it was perhaps a simulator issue, but it is also happening on the device. Am I doing anything wrong? Thanks!
Post not yet marked as solved
I’m using AVAudioEngine to get a stream of AVAudioPCMBuffers from the device’s microphone using the usual installTap(onBus:) setup.
To distribute the audio stream to other parts of the program, I’m sending the buffers to a Combine publisher similar to the following:
private let publisher = PassthroughSubject<AVAudioPCMBuffer, Never>()
I’m starting to suspect I have some kind of concurrency or memory management issue with the buffers, because when consuming the buffers elsewhere I’m getting a range of crashes that suggest some internal pointer in a buffer is NULL (specifically, I’m seeing crashes in vDSP.convertElements(of:to:) when I try to read samples from the buffer).
These crashes are in production and fairly rare — I can’t reproduce them locally.
I never modify the audio buffers, only read them for analysis.
My question is: should it be possible to put AVAudioPCMBuffers into a Combine pipeline? Does the AVAudioPCMBuffer class not retain/release the underlying AudioBufferList’s memory the way I’m assuming? Is this a fundamentally flawed approach?
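One thing worth checking (a hedged sketch, not a confirmed diagnosis): if the buffer handed to the tap is reused by the engine after the callback returns, publishing a deep copy rather than the original is a common defensive pattern:

```swift
import AVFoundation

// Hedged sketch: deep-copy a Float32 PCM buffer before handing it to
// downstream consumers, so later writes by the engine can't invalidate it.
func deepCopy(_ source: AVAudioPCMBuffer) -> AVAudioPCMBuffer? {
    guard let copy = AVAudioPCMBuffer(pcmFormat: source.format,
                                      frameCapacity: source.frameLength) else { return nil }
    copy.frameLength = source.frameLength
    guard let src = source.floatChannelData,
          let dst = copy.floatChannelData else { return nil }
    for channel in 0..<Int(source.format.channelCount) {
        dst[channel].update(from: src[channel], count: Int(source.frameLength))
    }
    return copy
}

// In the tap closure, for example:
// if let safeBuffer = deepCopy(buffer) { publisher.send(safeBuffer) }
```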
Post not yet marked as solved
Hi!
How would you synchronize bpm, pitch and playhead position on 10-20 different devices*, all on the same closed ethernet network?
*Mac, iPad and iPhone.
A single device is master.
Required latency tolerance in sub millisecond range, ideally sample sync.
All the devices will play up to 64 channels of audio each. 48khz 24bit wav.
I have considered two strategies:
Broadcast all user events from master and replicate them on the slaves.
Broadcast a continuous stream from master, comparing it on the slaves and slightly increasing / decreasing the corresponding parameter.
I have the feeling there are better options out there, as these are neither fail-safe nor very accurate.
I have looked into the Ableton Link SDK, but it does not support position sync (only beat sync).
All the best.