Post not yet marked as solved
I'd like to use AVAudioConverter to convert audio captured from the microphone to μLaw. Unfortunately, when I try to create an output buffer to convert into, I get an exception. I've tried both AVAudioPCMBuffer and AVAudioCompressedBuffer, and neither works for me. Is this supposed to work? Thanks!

let format = AVAudioFormat(settings: [AVFormatIDKey: NSNumber(value: kAudioFormatULaw), AVSampleRateKey: 8000, AVNumberOfChannelsKey: 1])
let buffer = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: 1000)
// required condition is false: isPCMFormat
let buffer = AVAudioCompressedBuffer(format: format, packetCapacity: 1000)
// required condition is false: !(fmt.IsPCM() || fmt.mFormatID == kAudioFormatALaw || fmt.mFormatID == kAudioFormatULaw)
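In case it's useful while this question is open: μLaw is a fixed-rate 2:1 codec (one output byte per 16-bit PCM sample), so if neither buffer class will accept the format, one possible workaround is to do the G.711 μ-law encoding by hand on the raw samples. This is a sketch of the standard G.711 algorithm, not the AVAudioConverter API, and isn't from the original post:

```swift
// G.711 μ-law encodes a 16-bit linear PCM sample into 8 bits.
// Standard segment/mantissa encoding with the conventional bias and clip.
func linearToULaw(_ sample: Int16) -> UInt8 {
    let bias: Int32 = 0x84      // 132, standard μ-law bias
    let clip: Int32 = 32635
    var pcm = Int32(sample)
    let sign: Int32 = pcm < 0 ? 0x80 : 0x00
    if pcm < 0 { pcm = -pcm }
    if pcm > clip { pcm = clip }
    pcm += bias
    // Find the segment (exponent): highest set bit above bit 7.
    var exponent: Int32 = 7
    var mask: Int32 = 0x4000
    while exponent > 0 && pcm & mask == 0 {
        exponent -= 1
        mask >>= 1
    }
    let mantissa = (pcm >> (exponent + 3)) & 0x0F
    // μ-law bytes are transmitted inverted.
    return UInt8(truncatingIfNeeded: ~(sign | (exponent << 4) | mantissa))
}
```

Mapping this over the Int16 samples of a converted-to-PCM buffer gives you the μLaw payload directly, e.g. silence (0) encodes to 0xFF.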
Hi, I have a couple of questions about the API for installing a tap on an AVAudioEngine node.

1) Does the timestamp on the callback refer to the time at which the buffer starts? And is it right that I should compensate for input latency if I want to calculate the host time corresponding to these samples as accurately as possible? At the moment I'm doing this calculation to find the host time of the first sample in the buffer:

let inputLatency = AVAudioSession.sharedInstance().inputLatency
input.installTap(onBus: 0, bufferSize: sampleRate / 10, format: nil) { buffer, timestamp in
let bufferStart = AVAudioTime.seconds(forHostTime: timestamp.hostTime) - inputLatency
}

2) The documentation states that the callback may happen off the main thread — in practice this always seems to be the case. Could there be any negative consequences to performing signal processing on the thread on which the callback occurs? Or is it essentially a serial queue set up just for this tap? Obviously the safest thing would be to dispatch straight away to my own context, but is that actually necessary in practice?

Thanks!
~Milo
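For reference, the "dispatch straight away to my own context" option can be sketched like this: the tap callback only copies the samples and hands them to a dedicated serial queue, so heavy DSP never runs on whatever thread the tap fires on. The `TapProcessor` type and the `[Float]` payload are illustrative stand-ins for real AVAudioPCMBuffer handling, not anything from the AVAudioEngine API:

```swift
import Dispatch

// Sketch of offloading tap work: enqueue(_:) is the only thing the tap
// callback calls; all processing happens on a private serial queue.
final class TapProcessor {
    private let queue = DispatchQueue(label: "tap.processing") // serial by default
    private(set) var processedFrames = 0

    // Called from the (non-main) tap callback thread.
    func enqueue(_ samples: [Float]) {
        queue.async { [weak self] in
            self?.process(samples)
        }
    }

    private func process(_ samples: [Float]) {
        // Placeholder for real signal processing.
        processedFrames += samples.count
    }

    func waitUntilDrained() {
        queue.sync { } // returns once all previously enqueued work has finished
    }
}
```

Inside the tap you would copy the buffer's channel data into the array before enqueueing, since the buffer may be reused after the callback returns.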
I'm working on a project which requires synchronised audio output from two devices. To do this I'm using a peer-to-peer connection to calculate the offset between system uptime (mach_absolute_time) on the devices, then triggering playback of each track using AVAudioPlayer.play(atTime:). With calibration of output latency this works really well, syncing the devices to within ~10ms.

I'm syncing the devices only at the start of a three-hour session, but this was working fine initially on the two devices I was using (iPhone 6 and 4th-generation Apple TV). Sadly, after updating to some new, more powerful hardware (iPad Pro 2017, Apple TV 4K) I'm getting significant clock drift after the initial sync, despite these two devices having the same CPU. The iPad's clock gains on the Apple TV's to such an extent that even if I compensated for the drift when triggering playback, the sync would become imperfect over the course of a single track lasting 40-50 minutes.

Is it normal to see clock drift between CPUs of the same type? Am I just unlucky with these two particular devices? Is there anything I can do about this? How do other multiroom playback systems such as AirPlay solve this problem?

Many thanks!
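For context, the peer-to-peer offset measurement described above is usually done NTP-style: one device timestamps a request on send and on reply, the peer timestamps receipt and response, and the clock offset falls out of the four timestamps. This is a generic sketch of that calculation (the function name and parameters are illustrative, not from the post); repeating it over a long session and fitting a line through the offsets is one way to estimate and compensate for drift:

```swift
// NTP-style offset estimate from one request/response exchange.
// t1: local send time, t2: remote receive time,
// t3: remote send time,  t4: local receive time (all in seconds).
func clockOffset(localSend t1: Double, remoteReceive t2: Double,
                 remoteSend t3: Double, localReceive t4: Double)
    -> (offset: Double, roundTrip: Double) {
    // Assumes the network delay is roughly symmetric in each direction.
    let offset = ((t2 - t1) + (t3 - t4)) / 2
    let roundTrip = (t4 - t1) - (t3 - t2)
    return (offset, roundTrip)
}
```

Samples with the smallest round-trip time are the most trustworthy, since asymmetric delay is the main error source in the offset estimate.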