Core Audio

Interact with the audio hardware of a device using Core Audio.

Core Audio Documentation

Posts under Core Audio tag

71 Posts
Post not yet marked as solved
0 Replies
341 Views
I’m using AVFoundation for image capture with the camera on iPad, but I’m not using any video- or audio-related functionality. It looks like when AVFoundation is linked, CoreMedia, CoreVideo and CoreAudio are also imported into the project. Is there any way I can remove these libraries (CoreMedia, CoreVideo and CoreAudio) from my app? I used otool to list all the frameworks and libraries used by my framework.
Post not yet marked as solved
0 Replies
435 Views
I'm trying to play audio content built from NSData inside a library (.a). It works properly when my code is inside an app, but not when it is in a library: I get no error and no sound plays.

NSError *errorAudio = nil;
NSError *errorFile;

// Clear all cached files
NSArray *tmpDirectory = [[NSFileManager defaultManager] contentsOfDirectoryAtPath:NSTemporaryDirectory() error:NULL];
for (NSString *file in tmpDirectory) {
    [[NSFileManager defaultManager] removeItemAtPath:[NSString stringWithFormat:@"%@%@", NSTemporaryDirectory(), file] error:NULL];
}

// Set up the temporary directory and temporary file
NSURL *tmpDirURL = [NSURL fileURLWithPath:NSTemporaryDirectory() isDirectory:YES];
NSURL *soundFileURL = [[tmpDirURL URLByAppendingPathComponent:@"temp"] URLByAppendingPathExtension:@"wav"];
[[NSFileManager defaultManager] createDirectoryAtURL:tmpDirURL withIntermediateDirectories:NO attributes:nil error:&errorFile];

// Write the NSData to the temporary file
NSString *path = [soundFileURL path];
[audioToPlay writeToFile:path options:NSDataWritingAtomic error:&errorFile];
if (errorFile) {
    // Error while writing NSData
} else {
    // Init audio player
    self.audioPlayer = [[AVAudioPlayer alloc] initWithContentsOfURL:soundFileURL error:&errorAudio];
    if (errorAudio) {
        // Audio player could not be initialized
    } else {
        // Audio player was initialized correctly
        [audioPlayer prepareToPlay];
        [audioPlayer stop];
        [audioPlayer setCurrentTime:0];
        [audioPlayer play];
    }
}

I don't check errorFile in this piece of code, but when debugging I can see that its value is nil.
My header file:

#import <AudioToolbox/AudioToolbox.h>
#import <AVFoundation/AVFoundation.h>

@property (nonatomic, strong) AVAudioPlayer *audioPlayer;

My .m file:

#import <AudioToolbox/AudioToolbox.h>
#import <AVFoundation/AVFoundation.h>

@synthesize audioPlayer;

I've checked dozens of posts but cannot find any solution; it always works properly in an app, but not in a library. Any help would be greatly appreciated.
Post not yet marked as solved
0 Replies
400 Views
I’m using AVAudioEngine to get a stream of AVAudioPCMBuffers from the device’s microphone using the usual installTap(onBus:) setup. To distribute the audio stream to other parts of the program, I’m sending the buffers to a Combine publisher similar to the following:

private let publisher = PassthroughSubject<AVAudioPCMBuffer, Never>()

I’m starting to suspect I have some kind of concurrency or memory management issue with the buffers, because when consuming the buffers elsewhere I’m getting a range of crashes that suggest some internal pointer in a buffer is NULL (specifically, I’m seeing crashes in vDSP.convertElements(of:to:) when I try to read samples from the buffer). These crashes happen in production and are fairly rare; I can’t reproduce them locally. I never modify the audio buffers, only read them for analysis. My question is: should it be possible to put AVAudioPCMBuffers into a Combine pipeline? Does the AVAudioPCMBuffer class not retain/release the underlying AudioBufferList’s memory the way I’m assuming? Is this a fundamentally flawed approach?
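For what it's worth, buffers handed to a tap block are commonly assumed to be reused by the engine after the block returns, so a defensive deep copy before publishing may help. A hedged sketch, assuming the usual non-interleaved Float32 tap format (`publisher` refers to the subject above):

```swift
import AVFoundation

// Sketch: deep-copy a tap buffer before handing it to Combine, so the
// engine can safely recycle the original once the tap block returns.
// Assumes non-interleaved Float32 data, the default tap format.
func deepCopy(_ source: AVAudioPCMBuffer) -> AVAudioPCMBuffer? {
    guard let copy = AVAudioPCMBuffer(pcmFormat: source.format,
                                      frameCapacity: source.frameCapacity) else { return nil }
    copy.frameLength = source.frameLength
    guard let src = source.floatChannelData,
          let dst = copy.floatChannelData else { return copy }
    let byteCount = Int(source.frameLength) * MemoryLayout<Float>.size
    for channel in 0..<Int(source.format.channelCount) {
        memcpy(dst[channel], src[channel], byteCount)
    }
    return copy
}

// In the tap block, publish the copy rather than the engine-owned buffer:
// engine.inputNode.installTap(onBus: 0, bufferSize: 1024, format: nil) { buffer, _ in
//     if let safe = deepCopy(buffer) { publisher.send(safe) }
// }
```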
Post not yet marked as solved
0 Replies
357 Views
Hi! How would you synchronize BPM, pitch and playhead position on 10-20 different devices (Mac, iPad and iPhone), all on the same closed Ethernet network? A single device is the master. The required latency tolerance is in the sub-millisecond range, ideally sample-accurate sync. All the devices will play up to 64 channels of audio each (48 kHz, 24-bit WAV). I have considered two strategies:

1. Broadcast all user events from the master and replicate them on the slaves.
2. Broadcast a continuous stream from the master, compare against it on the slaves, and slightly increase or decrease the corresponding parameter.

I have the feeling there are better options out there, as these are neither fail-safe nor very accurate. I have looked into the Ableton Link SDK, but it does not support position sync (only beat sync). All the best.
Post not yet marked as solved
1 Reply
442 Views
I have an AVComposition played back via AVPlayer, where the AVComposition has multiple audio tracks with an audioMix applied. My question is: how can I compute audio meter values for the audio playing back through AVPlayer? Using MTAudioProcessingTap, it seems you can only get a callback for one track at a time. But if that route has to be used, it's not clear how to get the sample values of all the audio tracks at a given time in a single callback.
Post not yet marked as solved
0 Replies
535 Views
My iOS app using CoreMIDI is able to receive MIDI messages from various keyboards, but I have just received a note from a customer notifying me that my app does not appear to receive MIDI messages from his Casio CT-S1 keyboard. I am stymied as to how to diagnose and fix this issue; clearly there must be something amiss in my CoreMIDI integration. I thought perhaps someone else might have encountered a similarly odd situation. If it helps, here is my CoreMIDI integration code. Thanks! Regards, Brad
Post marked as solved
1 Reply
608 Views
Hey folks, I've been able to build and run the 'StarterAudioUnitExample' project provided by Apple in Xcode 12.5.1, and run and load it in Logic 10.6.3 on macOS 11.5.2. However, when trying to recreate the same plugin from a blank project, I'm having trouble getting auval or Logic to actually instantiate and load the component. See the auval output below:

validating Audio Unit Tremelo AUv2 by DAVE:
AU Validation Tool
Version: 1.8.0
Copyright 2003-2019, Apple Inc. All Rights Reserved.
Specify -h (-help) for command options

VALIDATING AUDIO UNIT: 'aufx' - 'trem' - 'DAVE'
Manufacturer String: DAVE
AudioUnit Name: Tremelo AUv2
Component Version: 1.0.0 (0x10000)

PASS

TESTING OPEN TIMES:
COLD:
FATAL ERROR: OpenAComponent: result: -1,0xFFFFFFFF

validation result: couldn’t be opened

Does anyone, hopefully someone from Apple, know what the error code <FATAL ERROR: OpenAComponent: result: -1,0xFFFFFFFF> actually refers to? I've been Googling for hours, and nothing I have found has worked for me so far. Also, here is the info.plist file too: Can anyone help steer me in the right direction? Thanks, Dave
Post marked as solved
1 Reply
532 Views
I have just begun building plugins for Logic using the AUv3 format. After a few teething problems, to say the least, I have a basic plug-in working (integrated with SwiftUI, which is handy), but the install and validation process is still buggy and bugging me! Does the standalone app which Xcode generates to create the plugin always have to be running separately from Logic for the AUv3 to be available? Is there no way to have it as a permanently available plugin without running that? If anyone has any links to a decent tutorial, please share; there are very few I can find on YouTube or anywhere else, and the Apple tutorials and examples aren't great.
Post marked as solved
1 Reply
592 Views
We have an audio app that utilises a custom internal audio unit attached to AVAudioEngine to do DSP processing. Currently, MIDI arrives at the input port for the app (created with MIDIDestinationCreateWithProtocol). For MIDI 1 we use AUScheduleMIDIEventBlock to pass the events from the MIDI input to the audio unit, and all works well. So while we ponder it ourselves, we have some questions to throw into the ether...

a) For MIDI 2 there appears to be no equivalent of AUScheduleMIDIEventBlock for sending UMP to an audio unit?

b) We initially chose the audio unit approach because MIDI and audio processing are all handled neatly, but is this approach essentially redundant? Would it be better to put a tap somewhere on the AVAudioEngine and pass MIDI 2 events directly from the input to the tap? I fear that in that case synchronising MIDI to audio nicely would be a pain.

c) Perhaps we should wait until Apple implements a UMP version of AUScheduleMIDIEventBlock?
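Regarding (a): to my knowledge, more recent OS releases (macOS 12 / iOS 15 and later) added scheduleMIDIEventListBlock on AUAudioUnit, which accepts a MIDIEventList of UMP packets. A hedged sketch, where the event list is assumed to come from the app's own MIDIDestinationCreateWithProtocol callback:

```swift
import AudioToolbox
import CoreMIDI

// Sketch: forward a MIDIEventList (UMP) to an audio unit.
// scheduleMIDIEventListBlock is the MIDI 2 counterpart of
// AUScheduleMIDIEventBlock; availability is an assumption to verify
// against your deployment target.
func forward(_ eventList: UnsafePointer<MIDIEventList>, to unit: AUAudioUnit) {
    guard let schedule = unit.scheduleMIDIEventListBlock else { return }
    // Cable 0, play as soon as possible.
    _ = schedule(AUEventSampleTimeImmediate, 0, eventList)
}
```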
Post not yet marked as solved
0 Replies
469 Views
Hi, I'm attempting to call AudioComponentFindNext() from an iOS application (built with JUCE) to get a list of all available plugins. The issue is that the function only returns the generic system plugins and misses any third-party installed plugins. This currently occurs when called from within another AUv3 plugin, though I have also seen it from within a normal iOS app (run on an iPad Air 4); at the moment it is working fine from an iOS app. I've tried setting microphone access and Inter-App Audio capabilities, as suggested in similar forum posts, but that has not solved my problem. Any advice would be very appreciated. Thanks
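As a possible alternative (a sketch, not a guaranteed fix for the AUv3-host case), AVAudioUnitComponentManager queries the component registry and, in a full host app, also reports third-party AUv3 extensions:

```swift
import AVFoundation

// Sketch: list registered audio unit components.
// An all-zero AudioComponentDescription acts as a wildcard
// and matches every component type and manufacturer.
let anyComponent = AudioComponentDescription(componentType: 0,
                                             componentSubType: 0,
                                             componentManufacturer: 0,
                                             componentFlags: 0,
                                             componentFlagsMask: 0)

let components = AVAudioUnitComponentManager.shared()
    .components(matching: anyComponent)
for component in components {
    print(component.manufacturerName, component.name, component.typeName)
}
```

Whether extensions are visible from inside another AUv3 plugin may still depend on the extension sandbox, so this is worth testing in both contexts.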
Post not yet marked as solved
0 Replies
780 Views
I'm trying to change the device of the inputNode of AVAudioEngine. To do so, I'm calling setDeviceID on its auAudioUnit. Although this call doesn't fail, something goes wrong with the output busses: when I ask for their format, it shows 0 Hz and 0 channels, and the app crashes when I try to connect the node to the mainMixerNode. Can anyone explain what's wrong with this code?

avEngine = AVAudioEngine()
print(avEngine.inputNode.auAudioUnit.inputBusses[0].format)
// <AVAudioFormat 0x1404b06e0: 2 ch, 44100 Hz, Float32, non-inter>
print(avEngine.inputNode.auAudioUnit.outputBusses[0].format)
// <AVAudioFormat 0x1404b0a60: 2 ch, 44100 Hz, Float32, inter>

// Now, let's change the device from the headphones' mic to the built-in mic.
try! avEngine.inputNode.auAudioUnit.setDeviceID(inputDevice.deviceID)

print(avEngine.inputNode.auAudioUnit.inputBusses[0].format)
// <AVAudioFormat 0x1404add50: 2 ch, 44100 Hz, Float32, non-inter>
print(avEngine.inputNode.auAudioUnit.outputBusses[0].format)
// <AVAudioFormat 0x1404adff0: 0 ch, 0 Hz, 'lpcm' (0x00000029) 32-bit little-endian float, deinterleaved> // !!!

// Interestingly, 'inputNode' shows a different format than 'auAudioUnit'
print(avEngine.inputNode.inputFormat(forBus: 0))
// <AVAudioFormat 0x1404af480: 1 ch, 44100 Hz, Float32>
print(avEngine.inputNode.outputFormat(forBus: 0))
// <AVAudioFormat 0x1404ade30: 1 ch, 44100 Hz, Float32>

Edit: further debugging reveals another puzzling thing:

avEngine.inputNode.auAudioUnit == avEngine.outputNode.auAudioUnit // this is true ?!

inputNode and outputNode share the same AUAudioUnit, and its deviceID is by default set to the speakers. It's so confusing to me... why would inputNode's device be a speaker?
Post not yet marked as solved
1 Reply
641 Views
In the documentation for AUAudioUnitBusArray, there is this passage: "Some audio units (e.g. mixers) support variable numbers of busses, via subclassing." I tried to implement this by subclassing AUAudioUnitBusArray, creating my own internal array to store the buses, overriding isCountChangeable to return true, and implementing setBusCount to grow or shrink the number of buses. However, I don't think this will work, because AUAudioUnitBus has several properties that I can't set, such as ownerAudioUnit and index. I would also have to override all the observer functions like addObserver(toAllBusses:forKeyPath:options:context:), which seems overkill for a class that is supposedly designed for subclassing. I know about replaceBusses(busArray:), but wouldn't that override the current buses in the bus array, since it copies them?
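For what it's worth, the pattern usually seen for variable bus counts lives in the AUAudioUnit subclass itself rather than in an AUAudioUnitBusArray subclass: the unit owns the array and calls replaceBusses(busArray:) with freshly created buses, which is what assigns their ownerAudioUnit and index. A hedged sketch (class and method names are illustrative):

```swift
import AudioToolbox
import AVFoundation

// Sketch: an AUAudioUnit that rebuilds its input bus array when the
// requested bus count changes, instead of subclassing AUAudioUnitBusArray.
class VariableBusUnit: AUAudioUnit {
    private var _inputBusArray: AUAudioUnitBusArray!

    override init(componentDescription: AudioComponentDescription,
                  options: AudioComponentInstantiationOptions = []) throws {
        try super.init(componentDescription: componentDescription, options: options)
        _inputBusArray = AUAudioUnitBusArray(audioUnit: self, busType: .input, busses: [])
    }

    override var inputBusses: AUAudioUnitBusArray { _inputBusArray }

    // Rebuild the array at the requested size (format is illustrative).
    func setInputBusCount(_ count: Int, format: AVAudioFormat) throws {
        let busses = try (0..<count).map { _ in try AUAudioUnitBus(format: format) }
        _inputBusArray.replaceBusses(busArray: busses)
    }
}
```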
Post not yet marked as solved
4 Replies
1.3k Views
Hi, I am interested in decoding multichannel Higher-Order Ambisonics feeds to the Spatial Audio renderer discussed in the "Immerse your app in spatial audio" WWDC21 talk. However, I could not find any documentation about which multichannel audio formats are actually supported by the renderer, and a search for "Ambisonics" in the developer documentation only contains results pertaining to the Audio DriverKit. Can someone please enlighten me? Thank you!
Post marked as solved
1 Reply
692 Views
My app is using RemoteIO to record audio. It doesn’t do any playback. RemoteIO seems to be broadly compatible with the new Sound Recognition feature in iOS 14, but I’m seeing a glitch when Sound Recognition is first enabled. If my app is started and I initialise RemoteIO, and then turn on Sound Recognition (say, via Control Centre), the RemoteIO input callback is not called thereafter, until I tear down the audio unit and set it back up. So something like the following:

1. Launch app.
2. RemoteIO is initialised and working; can record.
3. Turn on Sound Recognition via Settings or the Control Centre widget.
4. Start recording with the already-set-up RemoteIO.
5. The recording callback is never called again. Though no input callbacks are seen, kAudioOutputUnitProperty_IsRunning is reported as true, so the audio unit thinks it is active.
6. Tear down the audio unit.
7. Set up the audio unit again.
8. Recording works. The buffer size has changed, reflecting some effect of the Sound Recognition feature on the audio session.

I also noticed that when Sound Recognition is enabled, I see several (usually 3) AVAudioSession.routeChangeNotifications in quick succession. When Sound Recognition is disabled while RemoteIO is set up, I don’t see this problem. I’m allocating my own buffers, so it’s not a problem with their size. What could be going on here? Am I not handling a route change properly? There doesn’t seem to be a reliable sequence of events I can catch to know when to reset the audio unit. The only fix I’ve found is to hack in a timer that checks for callback activity shortly after starting recording and resets the audio unit if no callback activity is seen. Better than nothing, but not super reliable.
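One hedged workaround for the burst of route-change notifications (a sketch only; resetAudioUnit stands in for the app's own RemoteIO teardown/setup) is to debounce them and rebuild the unit once they stop arriving:

```swift
import AVFoundation

// Sketch: coalesce rapid route-change notifications and reset the
// audio unit once things have settled. The 0.5 s settle window is an
// assumption to tune.
final class RouteChangeDebouncer {
    private var pendingReset: DispatchWorkItem?
    private let resetAudioUnit: () -> Void

    init(resetAudioUnit: @escaping () -> Void) {
        self.resetAudioUnit = resetAudioUnit
        NotificationCenter.default.addObserver(
            forName: AVAudioSession.routeChangeNotification,
            object: nil, queue: .main) { [weak self] _ in
            self?.scheduleReset()
        }
    }

    private func scheduleReset() {
        // Cancel any reset scheduled by an earlier notification in the burst.
        pendingReset?.cancel()
        let work = DispatchWorkItem { [weak self] in self?.resetAudioUnit() }
        pendingReset = work
        DispatchQueue.main.asyncAfter(deadline: .now() + 0.5, execute: work)
    }
}
```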
Post not yet marked as solved
26 Replies
16k Views
I'm very excited about the new AirTag product and am wondering if there will be any new APIs introduced in iOS 14.5+ to allow developers to build apps around them outside the context of the Find My network? The contexts in which I am most excited about using AirTags are: Gaming Health / Fitness-focused apps Accessibility features Musical and other creative interactions within apps I haven't been able to find any mention of APIs. Thanks in advance for any information that is shared here. Alexander
Post not yet marked as solved
17 Replies
3k Views
I have a USB audio interface that is causing kernel traps and the audio output to "skip" or drop out every few seconds. This behavior occurs with a completely fresh install of Catalina as well as Big Sur with the stock Music app on a 2019 MacBook Pro 16 (full specs below). The Console logs show coreaudiod got an error from a kernel trap, a "USB Sound assertion" in AppleUSBAudio/AppleUSBAudio-401.4/KEXT/AppleUSBAudioDevice.cpp at line 6644, and the Music app "skipping cycle due to overload." I've added a short snippet from the Console logs around the time of the audio skip/dropout. The more complete logs are at this gist: https://gist.github.com/djflux/08d9007e2146884e6df1741770de5105 I've also opened a Feedback Assistant ticket (FB9037528): https://feedbackassistant.apple.com/feedback/9037528 Does anyone know what could be causing this issue? Thanks for any help. Cheers, Flux aka Andy.

Hardware Overview:
Model Name: MacBook Pro
Model Identifier: MacBookPro16,1
Processor Name: 8-Core Intel Core i9
Processor Speed: 2.4 GHz
Number of Processors: 1
Total Number of Cores: 8
L2 Cache (per Core): 256 KB
L3 Cache: 16 MB
Hyper-Threading Technology: Enabled
Memory: 64 GB
System Firmware Version: 1554.80.3.0.0 (iBridge: 18.16.14347.0.0,0)

System Software Overview:
System Version: macOS 11.2.3 (20D91)
Kernel Version: Darwin 20.3.0
Boot Volume: Macintosh HD
Boot Mode: Normal
Computer Name: mycomputername
User Name: myusername
Secure Virtual Memory: Enabled
System Integrity Protection: Enabled

USB interface: Denon DJ DS1

Snippet of Console logs:
error 21:07:04.848721-0500 coreaudiod HALS_IOA1Engine::EndWriting: got an error from the kernel trap, Error: 0xE00002D7
default 21:07:04.848855-0500 Music HALC_ProxyIOContext::IOWorkLoop: skipping cycle due to overload
default 21:07:04.857903-0500 kernel USB Sound assertion (Resetting engine due to error returned in Read Handler) in /AppleInternal/BuildRoot/Library/Caches/com.apple.xbs/Sources/AppleUSBAudio/AppleUSBAudio-401.4/KEXT/AppleUSBAudioDevice.cpp at line 6644
...
default 21:07:05.102746-0500 coreaudiod Audio IO Overload inputs: 'private' outputs: 'private' cause: 'Unknown' prewarming: no recovering: no
default 21:07:05.102926-0500 coreaudiod CAReportingClient.mm:508 message {
    HostApplicationDisplayID = "com.apple.Music";
    cause = Unknown;
    deadline = 2615019;
    "input_device_source_list" = Unknown;
    "input_device_transport_list" = USB;
    "input_device_uid_list" = "AppleUSBAudioEngine:Denon DJ:DS1:000:1,2";
    "io_buffer_size" = 512;
    "io_cycle" = 1;
    "is_prewarming" = 0;
    "is_recovering" = 0;
    "issue_type" = overload;
    lateness = "-535";
    "output_device_source_list" = Unknown;
    "output_device_transport_list" = USB;
    "output_device_uid_list" = "AppleUSBAudioEngine:Denon DJ:DS1:000:1,2";
}: (null)
Post not yet marked as solved
1 Reply
723 Views
I created a multitimbral sampler based on 16 instances of AVAudioUnitSampler (one per MIDI channel/part). It plays fine when receiving MIDI messages (or using the application's virtual keyboard) on a single part. However, I tried to play a MIDI file using AVAudioSequencer (assigning each AVMusicTrack destination to the corresponding AVAudioUnitSampler instance). It uses 3 parts: one with a pad (samples up to 30 s, 100 MB total for the aupreset, many simultaneous notes (up to 6) with many sustain messages), one with a bass sound (a single note at a time, 10 MB aupreset), and one with a lead sound (also one note at a time). Some notes are cut off before the end or do not play (mainly for the third part), as if there were no resources left. I'm using the Simulator, and can't test anymore on my real iPad (it won't boot and will need repair or replacement). The Xcode monitoring tab shows only 2 to 3 percent processor usage (and 60 MB memory used). However, the Simulator runs on an old Mac (mid-2010 Mac mini, Core2Duo 2.4 GHz). Is AVAudioUnitSampler suited to being used this way, or should I subclass AVAudioUnitMIDIInstrument (creating an audio unit with the kAudioUnitSubType_MIDISynth subtype, as detailed in Gene De Lisa's blog post, and loading a SoundFont bank using kMusicDevicePropertySoundBankURL)? Then the only way to change a part's instrument would be to send a program change to the AVAudioUnitMIDIInstrument subclass? I don't know how. Or should I use kAudioUnitSubType_DLSSynth?
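For reference, the MIDISynth approach mentioned above can be sketched as follows (hedged, after Gene De Lisa's posts; the SoundFont path and program number are placeholders):

```swift
import AVFoundation
import AudioToolbox

// Sketch: wrap the built-in MIDISynth in an AVAudioUnitMIDIInstrument
// and point it at a SoundFont bank via kMusicDevicePropertySoundBankURL.
let description = AudioComponentDescription(
    componentType: kAudioUnitType_MusicDevice,
    componentSubType: kAudioUnitSubType_MIDISynth,
    componentManufacturer: kAudioUnitManufacturer_Apple,
    componentFlags: 0, componentFlagsMask: 0)

let synth = AVAudioUnitMIDIInstrument(audioComponentDescription: description)
// engine.attach(synth); engine.connect(synth, to: engine.mainMixerNode, format: nil)

var bankURL = URL(fileURLWithPath: "/path/to/bank.sf2") // placeholder path
let status = AudioUnitSetProperty(synth.audioUnit,
                                  AudioUnitPropertyID(kMusicDevicePropertySoundBankURL),
                                  AudioUnitScope(kAudioUnitScope_Global),
                                  0, &bankURL,
                                  UInt32(MemoryLayout.size(ofValue: bankURL)))

// Changing a part's instrument is then an ordinary program change:
// synth.sendProgramChange(12, onChannel: 0)
```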
Post not yet marked as solved
4 Replies
1.1k Views
Hi all, I'm using AVAudioEngine to play multiple nodes at various times (like GarageBand, for example). So far I've managed to play the various files at the right time using this code:

DispatchQueue.global(qos: .background).async {
    AudioManager.instance.audioEngine.attach(AudioManager.instance.mixer)
    AudioManager.instance.audioEngine.connect(AudioManager.instance.mixer, to: AudioManager.instance.audioEngine.outputNode, format: nil)

    // !important - start the engine *before* setting up the player nodes
    try! AudioManager.instance.audioEngine.start()

    for audioFile in data {
        // Create and attach the audioPlayer node for this file
        let audioPlayer = AVAudioPlayerNode()
        AudioManager.instance.audioEngine.attach(audioPlayer)
        AudioManager.instance.nodes.append(audioPlayer)
        // Notice the output is the mixer in this case
        AudioManager.instance.audioEngine.connect(audioPlayer, to: AudioManager.instance.mixer, format: nil)
        let fileUrl = audioFile.audio.fileUrl
        if let file: AVAudioFile = try? AVAudioFile(forReading: fileUrl) {
            let time = audioFile.start > 0 ? AudioManager.instance.secondsToAVAudioTime(hostTime: mach_absolute_time(), time: Double(audioFile.start / CGFloat.secondsToPoints)) : nil
            audioPlayer.scheduleFile(file, at: time, completionHandler: nil)
            audioPlayer.play(at: time)
        }
    }
}

Basically, my data object contains structs that have a reference to an audio fileURL and the start position at which it should begin. That works great. Now I would like to export all these tracks, mixed, into a single file and save it to the user's Documents directory. How can I achieve this? Thanks for your help.
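One route for the export (a sketch, not tested against the poster's setup) is AVAudioEngine's offline manual rendering mode: enable it before starting the engine, schedule the player nodes exactly as above, then pull rendered buffers and write them to an AVAudioFile:

```swift
import AVFoundation

// Sketch: render the engine's mixed output offline into a file.
// `duration` and the output URL are assumptions supplied by the caller.
func exportMix(engine: AVAudioEngine, duration: TimeInterval, to url: URL) throws {
    let format = engine.mainMixerNode.outputFormat(forBus: 0)
    try engine.enableManualRenderingMode(.offline, format: format,
                                         maximumFrameCount: 4096)
    try engine.start()
    // ...attach, connect, schedule and play the AVAudioPlayerNodes here,
    // exactly as in the real-time code above...

    let outputFile = try AVAudioFile(forWriting: url, settings: format.settings)
    let buffer = AVAudioPCMBuffer(pcmFormat: engine.manualRenderingFormat,
                                  frameCapacity: engine.manualRenderingMaximumFrameCount)!
    let totalFrames = AVAudioFramePosition(duration * format.sampleRate)

    // Pull audio from the engine until the requested length is rendered.
    while engine.manualRenderingSampleTime < totalFrames {
        let remaining = totalFrames - engine.manualRenderingSampleTime
        let frames = min(AVAudioFrameCount(remaining), buffer.frameCapacity)
        if try engine.renderOffline(frames, to: buffer) == .success {
            try outputFile.write(from: buffer)
        }
    }
    engine.stop()
    engine.disableManualRenderingMode()
}
```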
Post not yet marked as solved
4 Replies
3.9k Views
We are using AudioUnit and AUGraph to provide a recording feature for millions of users. For a long time we have been receiving user feedback about recording failures. Most user logs show that AudioOutputUnitStart returns 1852797029 (kAudioCodecIllegalOperationError). It seems that once this error happens, AudioOutputUnitStart will always return this error code until the device is rebooted, and sometimes even rebooting won't do the trick. Has anyone experienced this error code, or does anyone know its possible cause?
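As an aside, many Core Audio OSStatus values are four-character codes, and decoding 1852797029 gives 'nope', which can help when matching an error against the SDK headers. A small sketch:

```swift
import Foundation

// Decode an OSStatus as a four-character code, big-endian byte order.
func fourCC(_ status: UInt32) -> String {
    let bytes = [24, 16, 8, 0].map { UInt8((status >> $0) & 0xFF) }
    return String(bytes: bytes, encoding: .ascii) ?? "????"
}

print(fourCC(1852797029)) // prints "nope"
```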
Post not yet marked as solved
1 Reply
1.3k Views
Hi, I'm developing a VoIP application using CallKit. The AVAudioSession activated by my application is muted after an emergency alert. This issue also happens with Google Duo, Facebook Messenger and Zoom, and is 100% reproducible running on iOS 13.3 ~ 13.5.1. Two kinds of audio interruption events, "AVAudioSessionInterruptionTypeBegan" and "AVAudioSessionInterruptionTypeEnded", arrive at the exact same time. My application tries to activate the audio session after the "AVAudioSessionInterruptionTypeEnded" event, but it fails every time.

**** Error Logs ****
Error Domain=NSOSStatusErrorDomain Code=1701737535 "(null)" -> AVAudioSessionErrorCodeMissingEntitlement

Please let me know if you can give me a tip or hint to overcome this issue. Best Regards.