Posts

Post not yet marked as solved
2 Replies
1.6k Views
Hi,

I'm working with USB MIDI devices in iOS apps, and I'm currently looking for a way to identify them. I know for a fact that reading the MIDI properties through the CoreMIDI API is not sufficiently reliable: some devices return only generic information like "USB MIDI DEVICE" but are nonetheless recognized by other apps. After some research, it appears that the best solution is to get the USB ID of the device (VID/PID: vendor ID and product ID).

My question is the following: how can I get the USB ID of a USB MIDI device on iOS? The IOKit API is not available on iOS, so what is the solution to access USB devices on iOS?

Thank you all for your attention,
Best,
Thomas
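For reference, here is roughly how far the CoreMIDI properties mentioned above get you: a sketch that enumerates devices and reads the standard identification properties. It illustrates the limitation in the question rather than solving it, since none of these properties is the USB VID/PID.

```swift
import CoreMIDI

// Enumerate MIDI devices and print the identification CoreMIDI exposes.
// Note: there is no documented property here for the USB vendor/product ID.
for i in 0..<MIDIGetNumberOfDevices() {
    let device = MIDIGetDevice(i)
    var name: Unmanaged<CFString>?
    var manufacturer: Unmanaged<CFString>?
    var uniqueID: Int32 = 0
    MIDIObjectGetStringProperty(device, kMIDIPropertyName, &name)
    MIDIObjectGetStringProperty(device, kMIDIPropertyManufacturer, &manufacturer)
    MIDIObjectGetIntegerProperty(device, kMIDIPropertyUniqueID, &uniqueID)
    print(name?.takeRetainedValue() as String? ?? "?",
          manufacturer?.takeRetainedValue() as String? ?? "?",
          uniqueID)
}
```

On a device that only reports "USB MIDI DEVICE", the name and manufacturer printed here are exactly the generic strings described in the post, which is why they cannot serve as a stable identifier.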
Posted Last updated
1 Reply
925 Views
Hello all,

I've been working with custom AudioUnits for AVAudioEngine on iOS for a while (Audio Unit v3), and I now want to build an Audio Unit plug-in for macOS DAWs (Logic Pro or Ableton Live, for example). I've been reading about the subject for a while, but I can't manage to understand how it works.

It seems that part of the solution is Audio Unit Extensions, as I saw in WWDC 2015 session 508 and WWDC 2016 session 507, but I don't get how I can reuse the .appex product in Logic Pro or Ableton Live. I've used third-party Audio Units in these applications, but I always get them as .component bundles, to be copied into the /Library/Audio/Plug-Ins directory. I tried to copy the .appex into the same folder, but it doesn't seem to be recognized by Logic Pro.

Any idea about what I'm missing here?

Thanks to you all 🙂
Tom
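One point worth adding (my understanding, not from the post): AUv3 extensions are not installed by copying the .appex anywhere. The extension is registered with the system when its containing app is launched, not by placing files in /Library/Audio/Plug-Ins. Assuming a standard macOS setup, registration can be checked from the command line roughly like this (output formats vary by OS version):

```shell
# List app extensions registered for the Audio Unit extension points
pluginkit -m -p com.apple.AudioUnit
pluginkit -m -p com.apple.AudioUnit-UI

# Ask the Audio Unit validation tool to list every registered Audio Unit
auval -a
```

If the .appex's component appears in the `auval -a` listing after the containing app has been run once, an AUv3-aware host should be able to load it without anything being copied into the plug-ins folder.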
1 Reply
275 Views
Hello all,

I'm trying to write an AAC-compressed audio file in the M4A or CAF file format with AVAudioFile. I'm using the following settings dictionary:

settings = [AVFormatIDKey: kAudioFormatMPEG4AAC,
            AVSampleRateKey: 48000.0,
            AVNumberOfChannelsKey: 2,
            AVEncoderBitRateStrategyKey: AVAudioBitRateStrategy_Variable,
            AVEncoderBitRatePerChannelKey: 64,
            AVEncoderAudioQualityForVBRKey: AVAudioQuality.high,
            AVEncoderBitDepthHintKey: 16] as [String : Any]

The output file is OK, but my issues are:
- changing the bit rate strategy, the bit rate value or the encoder audio quality has no impact whatsoever on the output file,
- switching from the CAF to the M4A file format with the same settings makes a big change in the file size, but it should not.

My question is simple: is there an issue with my settings dictionary, or is it a known issue with AVAudioFile? I spent quite some time exploring how to build this dictionary on the web and in the Apple documentation (very limited on this topic, I must say), but I can't find what I do wrong...

More details about this issue are available here: https://gitlab.com/AudioScientist/avaudiofileaacparameters

Thank you for your help :)
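One observation (an assumption about the cause, not a confirmed answer): AVFoundation's encoder bit rate keys are expressed in bits per second, so a per-channel value of 64 may simply be far below the encoder's minimum and get clamped, which would explain why changing it has no effect. A sketch of the same write with that assumption applied, passing the quality enum as its raw number:

```swift
import AVFoundation

// Sketch: write one second of silence as AAC.
// Assumption: bit rates are in bits per second (64_000 ≈ 64 kbps per channel).
let settings: [String: Any] = [
    AVFormatIDKey: kAudioFormatMPEG4AAC,
    AVSampleRateKey: 48_000.0,
    AVNumberOfChannelsKey: 2,
    AVEncoderBitRateStrategyKey: AVAudioBitRateStrategy_Variable,
    AVEncoderBitRatePerChannelKey: 64_000,
    AVEncoderAudioQualityForVBRKey: AVAudioQuality.high.rawValue,
]
let url = URL(fileURLWithPath: NSTemporaryDirectory() + "test.m4a")
let file = try AVAudioFile(forWriting: url, settings: settings)
let format = AVAudioFormat(standardFormatWithSampleRate: 48_000, channels: 2)!
let buffer = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: 48_000)!
buffer.frameLength = 48_000  // one second of silence
try file.write(from: buffer)
```

If the output size now tracks the bit rate value, the original dictionary was being silently clamped rather than ignored.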
2 Replies
1.3k Views
Dear all,

I've been working for quite a while now on the migration of all my iOS musical apps from AUGraph/AudioToolbox to AVAudioEngine/AVFoundation. We released an app recently with this update and discovered a new source of exceptions introduced by AVAudioEngine (logs with Fabric's Crashlytics).

The app is basically an audio player (with lots of fancy features requiring signal processing code embedded in a custom audio unit) and provides background audio. The AVAudioSession is in the Playback category with no option, as it's not supposed to mix with other sources. Everything is embedded in a single custom AudioUnit directly connected to the output node inside an AVAudioEngine.

I have two new weird exceptions I don't know how to interpret, especially as they are not reproducible at all and concern mostly old devices.

1. At the end of an interruption

When I receive the InterruptionBegan notification, I stop the AVAudioEngine. When I receive the InterruptionEnded notification, I start the AVAudioEngine again. This last action sometimes causes an exception with code 560557684 (AVAudioSessionErrorCodeCannotInterruptOthers). This happens mostly on iPhone 5, but as I said, it is not reproducible and works like a charm most of the time. I was able to reproduce it once on an iPhone 5 by starting the app, starting a song, putting the app in the background, and then calling the phone through FaceTime. I got the error during the InterruptionEnded notification, even though no other app was running at that time.

2. At startup

This one is the weirdest for me. Sometimes, starting the AVAudioEngine during the app startup (inside didFinishLaunchingWithOptions) causes the same error (560557684: AVAudioSessionErrorCodeCannotInterruptOthers). This happens mostly on iPhone 4s and iPhone 5, and I never managed to reproduce it.

The only page I found about a similar problem is the following: https://stackoverflow.com/questions/29036294/avaudiorecorder-not-recording-in-background-after-audio-session-interruption-end

But the solutions there are not very satisfying, as:
- I don't want my app to mix with others, and once again, it all works most of the time.
- My app already uses remote control events, so this doesn't solve anything.

My questions:
- Did anybody encounter a similar problem?
- I don't even know how to start to deal with that, any advice?

Thank you 🙂
Tom
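For context, the interruption handling described above corresponds roughly to the following pattern (a sketch with a hypothetical `engine` property; the `.shouldResume` check is a commonly recommended guard before restarting, since the system does not always intend the app to resume):

```swift
import AVFoundation

final class AudioController {
    let engine = AVAudioEngine()

    func observeInterruptions() {
        NotificationCenter.default.addObserver(
            forName: AVAudioSession.interruptionNotification,
            object: AVAudioSession.sharedInstance(),
            queue: .main
        ) { [weak self] note in
            guard let info = note.userInfo,
                  let raw = info[AVAudioSessionInterruptionTypeKey] as? UInt,
                  let type = AVAudioSession.InterruptionType(rawValue: raw)
            else { return }
            switch type {
            case .began:
                self?.engine.stop()
            case .ended:
                // Only resume when the system says the app should.
                let opts = info[AVAudioSessionInterruptionOptionKey] as? UInt ?? 0
                if AVAudioSession.InterruptionOptions(rawValue: opts).contains(.shouldResume) {
                    try? self?.engine.start()
                }
            @unknown default:
                break
            }
        }
    }
}
```

Restarting only on `.shouldResume` does not explain the startup-time error, but it avoids one class of CannotInterruptOthers failures where the session is reactivated while another app still holds the audio focus.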
0 Replies
557 Views
Hello,

According to the documentation, AVAudioInputNode shouldn't be able to perform sample rate conversion, but it appears that it can perform it in some cases. The question is: shall we or shall we not rely on AVAudioInputNode for sample rate conversion?

I wrote a complete project with two documented examples to illustrate the issue: https://github.com/AudioScientist/AVAudioInputNodeSRConversion

Thank you all 🙂
Tom
1 Reply
1.2k Views
Hello,

I had some issues with AVAudioMixerNode. As a reminder, AVAudioMixerNode is supposed to work with inputs and outputs of any sample rate and take care of the sample rate conversions. Hence, it is a handy tool to convert the audio input's data format, and especially its sample rate, when it does not conform to the internal processing format of the AVAudioEngine.

Summarized scenario: inside an AVAudioEngine,
- connect the input node to an AVAudioMixerNode (let's call it myMixer), using the input node's input format (AVAudioInputNode is not supposed to perform format conversion),
- connect myMixer to the main mixer (which is itself automatically connected to the output node), using a sample rate different from the audio session's sample rate (the hardware sample rate).

In this setup, the audio signal won't go through from input to output unless there is some audio coming from another source (for example an AVAudioPlayerNode) into myMixer.

All details, with several test projects, are available in the following GitHub repo: https://github.com/AudioScientist/AVAudioMixerNodeIssues

My question: is it a bug, or something I do wrong?

Thank you all 🙂
Tom

PS: I already submitted it on Apple's bug report platform.
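The scenario summarized above corresponds roughly to this graph setup (a sketch; 44100 Hz stands in for "a rate different from the hardware rate"):

```swift
import AVFoundation

let engine = AVAudioEngine()
let myMixer = AVAudioMixerNode()
engine.attach(myMixer)

// Input side: keep the hardware format, since AVAudioInputNode
// is not supposed to convert.
let inputFormat = engine.inputNode.inputFormat(forBus: 0)
engine.connect(engine.inputNode, to: myMixer, format: inputFormat)

// Mixer → main mixer at a different sample rate: this connection is
// where the mixer is expected to perform the conversion.
let converted = AVAudioFormat(standardFormatWithSampleRate: 44_100,
                              channels: inputFormat.channelCount)!
engine.connect(myMixer, to: engine.mainMixerNode, format: converted)

try engine.start()
```

With this exact wiring, the symptom in the post is that no input reaches the output until a second source is attached to myMixer.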
0 Replies
733 Views
Hi all,

I'm struggling with the notion of "maximum frames per slice" in the context of AVAudioEngine and home-made AUAudioUnits. I'll make it simple with two questions.

Q1. Is it possible to control the maximum number of frames per render loop (maxFramesPerSlice) in an AVAudioEngine? With AUGraph, I used to set it manually on the RemoteIO unit with AudioUnitSetProperty and kAudioUnitProperty_MaximumFramesPerSlice. But I just read here that I'm not supposed to set it on I/O units. It seemed to work, however.

Q2. Is it possible to get the maxFramesPerSlice of the outputNode in an AVAudioEngine? I understood that the maxFramesPerSlice is set for each AVAudioNode at startup (more precisely at prepare) of the AVAudioEngine, but I need this info in other parts of my code. Is there any way to get this value and get notified when it changes?

Thank you for your help 🙂
Tom
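In the AUv3 world the corresponding property lives on AUAudioUnit as maximumFramesToRender, and AVAudioNode exposes its underlying AUAudioUnit (on iOS 11 / macOS 10.13 and later). A sketch of reading it for the output node; the key-value observation part is an assumption on my side, since not every AUAudioUnit property is documented as KVO-compliant:

```swift
import AVFoundation

let engine = AVAudioEngine()
engine.prepare()

// Read the current maximum render size of the output node's unit.
let outputAU = engine.outputNode.auAudioUnit
print(outputAU.maximumFramesToRender)

// maximumFramesToRender is an Objective-C property, so observing it via
// KVO may work (unverified assumption). Keep the observation alive.
let observation = outputAU.observe(\.maximumFramesToRender) { au, _ in
    print("maximumFramesToRender changed to \(au.maximumFramesToRender)")
}
```

If KVO turns out not to fire for this property, re-reading the value after each engine prepare/start is the fallback.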
0 Replies
603 Views
Dear all,I'm currently working on an audio app with input only, for sound analysis. As usual, I'm working with AVAudioEngine with custom AUAudioUnit to embed my own real-time audio processing code.My question is the following: how can I properly schedule the calls of the input node's render callback?I'm used to work with both input and output, and in that case, the system callback on the output node triggers everything up to the input node's render callback. But without an output, I have no idea how to schedule my calls to the input node's render callback.For now, the solution I found is to create an output bus on my analysis node, and plug it to the output node. That way, I can rely on the output node calls for the scheduling while allways sending empty buffer (with the OutputIsSilence flag).Any better idea?Thank you all 🙂Tom
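If the analysis does not strictly need to run inside an AUAudioUnit render callback, one alternative worth noting (my suggestion, not from the post) is a tap on the input node: the engine pulls the input for you and delivers buffers to a block, at the cost of leaving the real-time thread. A sketch:

```swift
import AVFoundation

let engine = AVAudioEngine()
let input = engine.inputNode
let format = input.inputFormat(forBus: 0)

// The engine delivers input buffers to this block. Taps run on an
// internal thread, not the real-time render thread, so this only fits
// analysis that tolerates some latency.
input.installTap(onBus: 0, bufferSize: 4096, format: format) { buffer, time in
    // Hand the buffer to the analysis code here.
    _ = buffer.frameLength
}

try engine.start()
```

The requested bufferSize is a hint rather than a guarantee, so the analysis code should accept variable buffer lengths.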
0 Replies
597 Views
Dear all,

I'm working on custom AUAudioUnits for both iOS and macOS. For now, my AudioUnits are only integrated in my own app's AVAudioEngine, but I'm considering building AudioUnit extensions in the near future. I'm trying to understand how to declare the number of channels and the channel layouts supported by my AudioUnit, which seems necessary for a proper AudioUnit extension distribution.

All I found for the moment is the following, in AUAudioUnit.h:

@interface AUAudioUnit : NSObject
[...]
@property (NS_NONATOMIC_IOSONLY, readonly, copy, nullable) NSArray *channelCapabilities;
[...]
@end

@interface AUAudioUnitBus : NSObject
[...]
@property (NS_NONATOMIC_IOSONLY, readonly, copy, nullable) NSArray *supportedChannelLayoutTags;
[...]
@end

My questions are the following:
1. What should I do with my AUAudioUnit's channelCapabilities? Should I override it with a computed property, or synthesize it manually?
2. I don't see any way to specify the supported channel layout tags on the input and output busses I instantiated for my AudioUnit, as supportedChannelLayoutTags is read-only. What am I supposed to do with it?
3. Is there anything else I missed on the subject?

Thank you all for your help 🙂
Tom
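On question 1, the usual pattern for a read-only declared property is to override its getter in the subclass. The header comments describe channelCapabilities as an array of number pairs (input count, output count), with negative values acting as wildcards in the same spirit as kAudioUnitProperty_SupportedNumChannels; that semantic detail should be checked against AUAudioUnit.h. A sketch in Swift:

```swift
import AudioToolbox
import Foundation

class MyAudioUnit: AUAudioUnit {
    // Advertise mono-to-mono and stereo-to-stereo configurations.
    // Pairs are (inputs, outputs); see AUAudioUnit.h for the wildcard
    // conventions (e.g. -1) before relying on them.
    override var channelCapabilities: [NSNumber]? {
        return [1, 1, 2, 2]
    }
}
```

The same override-the-getter approach is the natural candidate for supportedChannelLayoutTags on a bus subclass, though subclassing AUAudioUnitBus is less common; that part is an assumption rather than a documented recipe.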
2 Replies
1.3k Views
Dear all,

I have a pretty fundamental question about real-time audio, but I cannot find a proper answer. The context is low-level audio programming on mobile platforms, that is, programming my own audio modules with their own C/C++ audio render functions.

Consider the following case. My audio graph (the API it is implemented in doesn't really matter) contains several independent modules, whose render functions are called, and whose audio outputs are mixed together, at each audio render loop call. At some point, for some reason (there can be plenty, but let's say the order comes from outside the audio thread, which is generally the case), I need to free and reallocate the resources of one of the modules (some of these resources are accessed inside the render function), without disturbing the other modules.

- If the audio render loop is running precisely at the moment I receive the order to reallocate resources, I have to wait for the render function to finish, to avoid freeing memory that is currently being read or written.
- I have to prevent the render function of this particular module, but not those of the other modules, from running while I'm doing the memory free/alloc.
- As always within the audio thread, I shall not use any lock, or free/alloc inside the audio render function.
- I cannot stop the audio render loop, because the other modules are potentially running and producing audio output.

My question is then: how can I SAFELY and PROPERLY free and reallocate some resources used inside the audio thread, without disturbing the audio render loop?

Thank you all for your input 🙂
Tom
1 Reply
755 Views
Dear all,

I faced a quite annoying problem. I'll try to explain it as simply as possible. Note that my code is C/C++/Obj-C/Obj-C++ only.

I'm building a framework, for which I use several pre-built static libraries, distributed and included in my project as follows:

prebuilt
    libraryA.a
    libraryB.a
include
    libraryA
        headerA1.h
        headerA2.h
    libraryB
        headerB1.h
        headerB2.h

An important point is that libraryB depends on libraryA, as headerB1.h contains the following line:

#import <libraryA/headerA1.h>

All this, along with some additional source code, gets compiled into my framework, which contains everything in the end, ready for distribution.

My problem is that I'd like to put headerB1.h, and therefore headerA1.h, in the public headers of the framework. I can build the framework, but as the Headers directory of a framework is "flat" (no folder tree), whenever the framework client tries to

#import <myFramework/headerB1.h>

this leads to a compilation error, as the libraryA folder doesn't exist anywhere in the framework.

I can summarize my problem with the following question: how can I expose as public, in a framework, a header from an internal static library, so that the app client can use it?

A few details about my case, to illustrate with a concrete example:
- libraryA/headerA1.h defines basic types (mostly enums),
- libraryB is a high-level module with an interface, defined in libraryB/headerB1.h and intended to be public, but using some basic types from libraryA/headerA1.h.

Thank you all for your help 🙂
Tom
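One workaround (an assumption about the build setup, not something from the post): since both headers end up flat in myFramework.framework/Headers anyway, ship a patched public copy of headerB1.h whose cross-library import is framework-relative, so clients resolve everything through the framework itself. A sketch of the patched header (all file names are from the example above):

```c
/* headerB1.h — public copy shipped in myFramework.framework/Headers.
 * The original cross-library line
 *     #import <libraryA/headerA1.h>
 * is rewritten so it resolves inside the flat Headers directory:
 */
#import <myFramework/headerA1.h>

/* ... rest of headerB1.h unchanged: the public libraryB interface,
 * using the enum types declared in headerA1.h ... */
```

headerA1.h must also be marked public so it lands next to headerB1.h. The downside is maintaining the patched copy alongside the pristine one used to build against libraryB internally.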
0 Replies
610 Views
Hello,

I need to access the ring/mute switch state. I have many apps with the SoloAmbient AudioSession category, so users can switch off the sound of the app with the ring/mute switch (a very common functionality, apparently). I'd like to read the mute switch state, to know whether the user is using the app with or without sound.

After a lot of searching, it seems that all the previous methods to do that (the AudioSession's current route, the time needed to play a muted sound with an audio player, etc.) don't work anymore. I tried everything I could to check the status of the audio chain in AVAudioSession and AVAudioEngine, but it seems that the mute action happens further downstream.

I'd like to have a confirmation from the Apple audio team: is there really no way to access the mute switch state?

Thanks
Tom
0 Replies
1.3k Views
Dear all,

I'm currently studying the AVAudioEngine API in order to gradually transfer my apps from the old and soon-to-be-deprecated AUGraph API... and I'm facing huge issues at the very beginning of my exploration of AVAudioEngine. The general context of my exploration so far is music playing with players, effects, taps and a mixer.

1. Accessing the current position of an AVAudioPlayerNode

I simply want to access the current AVAudioPlayerNode's position within the scheduled buffer or file. After some research, I found the following way to access the current frame position of an AVAudioPlayerNode:

AVAudioTime *nodeTime = self.playerNode.lastRenderTime;
AVAudioTime *playerTime = [self.playerNode playerTimeForNodeTime:nodeTime];
AVAudioFramePosition framePosition = playerTime.sampleTime;

My problem is that the playerTimeForNodeTime method returns nil if the AVAudioPlayerNode is paused. So in the following basic scenario:

S1. schedule a file or buffer in an AVAudioPlayerNode,
S2. play the AVAudioPlayerNode,
S3. pause the AVAudioPlayerNode,
S4. play (resume) the AVAudioPlayerNode,

I'm completely blind about the position of the AVAudioPlayerNode within the scheduled file or buffer between steps S3 and S4, even though, according to AVAudioPlayerNode's documentation, "The player's sample time does not advance while the node is paused." (unlike the stop method, which flushes all scheduled buffers/files).

Is there a way to get access to the AVAudioPlayerNode's current position while in the paused state? The only solution I see so far is to make a new class that stores the current position when the pause method is called, so it can be read while the AVAudioPlayerNode is paused. This seems not very elegant, and makes me think that I'm on the wrong track...

2. Seeking in a file scheduled in an AVAudioPlayerNode

Still in my music player context, I want to schedule audio files in my AVAudioPlayerNodes and seek within those files while playing. The only solution I found so far is the following:

BOOL wasPlaying = playerNode.isPlaying;
[playerNode stop];
[self.playerNode scheduleSegment:audioFile startingFrame:newFramePosition frameCount:(AVAudioFrameCount)(audioFile.length - newFramePosition) atTime:nil completionHandler:nil];
if (wasPlaying) [playerNode play];

Again, this seems weird and not very elegant... Is there a proper way to seek within a file or buffer currently playing in an AVAudioPlayerNode?

Finally, my guess after these first experiments is that AVAudioPlayerNode is not designed for music playback, given the two basic problems I faced. So my final question is: what could be the proper solution to play music with custom effects/taps/mixers within the AVAudioEngine framework?

Thank you all for your attention and help 🙂
Best
Tom
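The bookkeeping wrapper dismissed above as inelegant is nonetheless the common workaround for both problems at once; here is a sketch in Swift (names like `PlayerWrapper` and `startFrame` are mine, not API — the wrapper tracks the frame offset of the current schedule so position stays readable across pause and seek):

```swift
import AVFoundation

// Hypothetical wrapper: keeps the player's position readable while
// paused, and implements seek as stop + re-schedule.
final class PlayerWrapper {
    let node = AVAudioPlayerNode()
    private var file: AVAudioFile?
    private var startFrame: AVAudioFramePosition = 0   // offset of current schedule
    private var pausedFrame: AVAudioFramePosition = 0  // last known position

    var currentFrame: AVAudioFramePosition {
        guard let nodeTime = node.lastRenderTime,
              let playerTime = node.playerTime(forNodeTime: nodeTime)
        else { return pausedFrame }        // paused/stopped: last known value
        return startFrame + playerTime.sampleTime
    }

    func load(_ f: AVAudioFile) {
        file = f
        startFrame = 0
        node.scheduleFile(f, at: nil, completionHandler: nil)
    }

    func pause() {
        pausedFrame = currentFrame         // capture before the time goes nil
        node.pause()
    }

    func seek(to frame: AVAudioFramePosition) {
        guard let file = file else { return }
        let wasPlaying = node.isPlaying
        node.stop()                        // flushes the current schedule
        startFrame = frame
        pausedFrame = frame
        node.scheduleSegment(file, startingFrame: frame,
                             frameCount: AVAudioFrameCount(file.length - frame),
                             at: nil, completionHandler: nil)
        if wasPlaying { node.play() }
    }
}
```

The key design point is that every re-schedule records its own starting frame, so `currentFrame` is always the schedule offset plus the player's advancing sample time.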
2 Replies
738 Views
Hello everyone,

I've recently started to have trouble with my iOS audio apps. All the apps concerned are based on an AUGraph with a single RemoteIO AudioUnit, sending output audio through a custom render callback (the basic scheme you can find in several sample codes and guides from the Apple documentation).

Sometimes, though rarely, the physical audio output simply shuts down, apparently for no reason, while the render callback continues to be called as normal. I'm still struggling to identify a reproducible scenario. I already saw it happen with a sound-generating app (AVAudioSessionCategoryPlayback) in the background, without anything happening. I found a way to restart audio (without restarting the app) by resetting:
- the AudioSession's category, buffer duration and sample rate,
- and the stream format of the bus 0 input on the RemoteIO Audio Unit.

But still, I don't know:
- why this happens (it seems that the AudioSession and/or RemoteIO enter some kind of weird state which prevents them from working properly),
- how to get notified when this happens,
- how to prevent this from happening (which is the most important, obviously).

Does anyone have any clue about what happens here?

Thank you all for your help 🙂
Best,
😎 Tom 😎
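On the "how to get notified" point, one system signal worth watching (my suggestion, not from the post) is the media services reset notification, which fires when the system's audio daemon restarts and every audio object the app holds becomes invalid. A sketch:

```swift
import AVFoundation

// When media services are reset, the session configuration and all
// audio units must be rebuilt from scratch.
NotificationCenter.default.addObserver(
    forName: AVAudioSession.mediaServicesWereResetNotification,
    object: nil,
    queue: .main
) { _ in
    // Re-set the session category, buffer duration and sample rate,
    // rebuild the AUGraph / RemoteIO unit, and restart playback here.
    print("media services were reset — rebuilding audio stack")
}
```

This notification does not cover every silent-output case, but when it does fire, the rebuild steps it prompts are essentially the manual recovery described in the post.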