Different input gain levels on iOS devices

Is there a special reason why the iPhone 4s reports a default mic input gain of 0.142857 (when opening the shared audio session for a recording with AVAudioRecorder) while the iPad 2 reports a default gain of 0.75?


JFYI: when I change the gain to another value (e.g. the maximum of 1.0), it is back at the old default value when I query the shared audio session after a restart. So these really do seem to be default values!
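For reference, here is a minimal Swift sketch of how such a value can be read (the playAndRecord category and the rest of the setup are just assumptions for the sketch; the point is simply that inputGain is read right after the shared session becomes active):

```swift
import AVFoundation

// Minimal sketch: activate the shared session and print the reported input gain.
// The category/mode chosen here are assumptions; only the printed value differs per device.
func logDefaultInputGain() {
    let session = AVAudioSession.sharedInstance()
    do {
        try session.setCategory(.playAndRecord, mode: .default)
        try session.setActive(true)
        // e.g. 0.142857 on an iPhone 4s, 0.75 on an iPad 2
        print("audioSession.inputGain: \(session.inputGain)")
    } catch {
        print("Audio session setup failed: \(error)")
    }
}
```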



Any ideas?

The iPhone 6 reports an initial default value of audioSession.inputGain: 0.368421.


Why these differences? No Apple staff around?


Anyone?

There is not much to say here, as these numbers aren't meant to be interpreted in any specific way. Different hardware will report different values. It would be nice to have all devices report some consistent target level with a defined meaning, so I'd suggest filing a bug asking for that.

The headers (both the deprecated AudioSession.h, see kAudioSessionProperty_InputGainScalar, and AVAudioSession.h, see setInputGain:) have detailed descriptions of when to expect set values to be restored.
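As a rough sketch (Swift shown here; the Objective-C calls behave the same way), check isInputGainSettable before writing the value, and re-apply your preferred gain when the session becomes active again rather than assuming it persists:

```swift
import AVFoundation

// Sketch only: isInputGainSettable may be false depending on the device or
// current input route, in which case setting the gain has no effect.
func applyPreferredInputGain(_ gain: Float) {   // gain in 0.0...1.0
    let session = AVAudioSession.sharedInstance()
    guard session.isInputGainSettable else {
        print("Input gain is not settable on the current route")
        return
    }
    do {
        try session.setInputGain(gain)
        print("inputGain is now \(session.inputGain)")
    } catch {
        print("setInputGain failed: \(error)")
    }
}
```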

I mean, what are numbers for if I don't get the chance to interpret them in some way? This property is even readable AND writable.


We are talking about the device INPUT gain of a microphone pre-amplifier. The device doesn't have its own 'opinion' about how much pre-amplification it applies to the input signal; someone must have set it. The question here is: why does Apple set it to different values? It is ultimately a relative value between 0.0 and 1.0, i.e. between the MIN and MAX possible amplification. The min value, or at least the max value, is very likely the same for every 'software input' on every device, since we are running the same operating system on all Apple mobile devices.

We are limited in level resolution by the bit depth, just as we are limited in frequency range by the sample rate. And even the bit depth of the amplitude is just a relative value that quantizes our maximum possible input signal into finer-grained volume steps: a full-scale 16-bit sample has the same (audible) volume as a full-scale 24-bit sample! So the only thing that really matters, the only real point of measurement, is the actual voltage of the analog signal (the input gain); everything else is pure interpretation until it reaches some output hardware, where it turns back into something real and TRUTH is spoken again!
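Just to illustrate the bit-depth point with a toy calculation (nothing device-specific here): a full-scale sample normalizes to the same amplitude at any bit depth; only the quantization step gets finer.

```swift
// Toy illustration: bit depth changes the quantization step, not the maximum level.
func fullScale(bitDepth: Int) -> Double {
    Double((1 << (bitDepth - 1)) - 1)   // 32767 for 16-bit, 8388607 for 24-bit
}

for depth in [16, 24] {
    let fs = fullScale(bitDepth: depth)
    print("\(depth)-bit: normalized full scale =", fs / fs,   // always 1.0
          "quantization step ≈", 1.0 / fs)                    // ~3.1e-05 vs. ~1.2e-07
}
```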


So when my application selects a different input gain for different recording situations, such as nearfield vs. ambient recording, the same gain value had better correspond to the same amount of amplification on every device. Otherwise I would have to check every device first, guessing at which ratio between 0.0 and 1.0 they MIGHT behave the same... That would be unacceptable! The user would have to do all the input gain tuning themselves, and I couldn't provide presets for different real-world scenarios. So I strongly suspect there must be some consistent behaviour behind this value.
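To make this concrete, here is the kind of preset logic I would like to be able to write (the preset names and the 0.3 / 0.8 values are purely my own invention for illustration):

```swift
import AVFoundation

// Hypothetical presets; the names and gain values are assumptions, not Apple-documented levels.
enum RecordingPreset: Float {
    case nearfield = 0.3   // source close to the mic: keep pre-amplification low
    case ambient   = 0.8   // room recording: more pre-amplification
}

func apply(_ preset: RecordingPreset) throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord, mode: .default)
    try session.setActive(true)
    guard session.isInputGainSettable else { return }
    // This only works as a preset if 0.3 / 0.8 correspond to comparable
    // amplification on every device, which is exactly what is in question here.
    try session.setInputGain(preset.rawValue)
}
```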

I'm sure you get the picture!!!

So the question is: why IS the default level different on the various devices? Is it because iPhones are primarily used as nearfield devices, spoken into very close to the mic, while an iPad is more likely (guessing here) a non-nearfield device? But the Phone app could lower the input gain for its own purposes, to avoid peaks and overloads during calls, so I also don't see the value in setting a pre-amplifier to amounts as different as 0.14 for a phone and 0.75 for a tablet, which is a huge difference.

And by the way, why do two phones report different values, 0.14 vs. 0.37, which already makes a big difference in the result of your recording?


Since I almost always find a reason and a good explanation for why Apple does a certain thing in a certain way, I assume here as well that there is a special reason for this and that it isn't just a random value.


I would just like to know what that special reason is.



Thx kid


...and thanks for the links to the header files ;-)

The implementation of the microphone gain path is actually very different across devices and the default level has also changed over time as the hardware has improved.

Since the intention was never to have apps infer anything from the numbers, as previously mentioned, there's not much to discuss regarding the API. However, if you're seeing different “default” levels on two devices that are the exact same hardware model, that’s certainly unexpected.

It looks like you have plenty of material for a bug report here where we can improve things, and it would be great to have a radar (or several) on this.

As long as I didn't know whether this is a feature or a bug, I didn't want to file a radar.


So what information should I provide with the radar? What are you guys looking for that I haven't already mentioned?


Or should I just copy my messages into the bug reporter?

What would be most helpful is a bug report describing the "feature", that is, the functionality you are specifically looking for. So, a report describing how you expect this property to report gain levels back to you and what you expect them to mean is a very good starting point. Mention what you're trying to accomplish or represent with this value for your users. Additionally, if you think there's a better (more useful) way for this type of control to be exposed, discuss that as well. For example, maybe a gain view along the lines of MPVolumeView? Or not?


Thanks!
