I mean - what are numbers for if I don't get the chance to interpret them in some way? This property is even read- and WRITE-able.
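For reference, reading and writing that value looks roughly like this - a minimal sketch, assuming the property in question is AVAudioSession's inputGain (the names and the 0.5 value here are just illustration):

```swift
import AVFoundation

// Assumption: the property being discussed is AVAudioSession.inputGain.
// It is read as a plain property and written via setInputGain(_:).
// The session usually needs a record-capable category and must be active
// before the value really takes effect.
let session = AVAudioSession.sharedInstance()
print("current inputGain:", session.inputGain)      // read, 0.0 … 1.0
if session.isInputGainSettable {
    do { try session.setInputGain(0.5) }             // write – 0.5 is an arbitrary example
    catch { print("could not set gain:", error) }
}
```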
We are talking about the device INPUT gain level of a mic pre-amplifier. The device doesn't have its own 'opinion' about how much pre-amplification it applies to the input signal. It must have been set by someone. The question here is:
why does Apple set it to different default values on different devices? It is ultimately a relative value between 0.0 and 1.0, meaning somewhere between the MIN and MAX possible amplification. The min value, or at least the max value, is very likely the same for every 'SOFTWARE input' on every device, since all Apple mobile devices run the same operating system.
So we are limited by the bit depth (as far as volume is concerned) just as we limit the frequency range (with the sample rate). And even the bit depth of the amplitude is just a relative value that quantizes our maximum possible input signal into finer-grained volume slices. A full-scale 16-bit sample has the same (audible) volume as a full-scale 24-bit sample! So the only thing that really counts, the only real point of measure, is the actual voltage of the analog signal (the input gain) - everything else is pure interpretation until it reaches some output hardware, where it turns back into something real and TRUTH is spoken again!
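To make that concrete, a tiny plain-Swift sketch of what I mean by 'just a relative raster': full scale is 1.0 at any bit depth, only the smallest representable volume step shrinks.

```swift
// Bit depth only changes how finely the (relative) full-scale range is sliced,
// not how loud full scale itself is. A full-scale sample normalizes to 1.0 at
// any bit depth; only the smallest volume step gets finer.
let step16 = 1.0 / Double(1 << 15)   // ≈ 3.05e-05 for 16-bit audio
let step24 = 1.0 / Double(1 << 23)   // ≈ 1.19e-07 for 24-bit audio
print("smallest step, 16 bit:", step16, "– 24 bit:", step24)
```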
So when my application selects a different input gain for different recording situations, like nearfield vs. ambient recording, the same gain value had better mean the same degree of amplification on every device. Otherwise I would have to check every device first, guessing at which ratio between 0.0 and 1.0 they MIGHT behave the same... That would be unacceptable! The user would have to do all the input gain tuning themselves, and I couldn't provide presets for different real-world scenarios. So I strongly suspect there must be some consistent behaviour behind this value.
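For illustration, this is roughly what I mean by presets - a minimal sketch, again assuming the property is AVAudioSession's inputGain; the concrete numbers (0.3 / 0.8) and names are placeholders, not recommendations:

```swift
import AVFoundation

// Hypothetical gain presets – the values are made up for illustration only.
enum RecordingPreset: Float {
    case nearfield = 0.3   // mic held close to the mouth
    case ambient   = 0.8   // room / distance recording
}

func apply(_ preset: RecordingPreset) throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord, mode: .default, options: [])
    try session.setActive(true)

    // Not every input route exposes a settable gain (e.g. some external mics),
    // so this has to be checked at runtime.
    guard session.isInputGainSettable else { return }
    try session.setInputGain(preset.rawValue)
    print("inputGain is now \(session.inputGain)")
}
```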
I'm sure you get the picture!!!
So the question is - why IS the default level different on the various devices? Is it because iPhones are primarily used as nearfield devices, spoken into very close to the mic, and an iPad more likely (guessing here) as a non-nearfield device? But the Phone app could lower the input gain for its own purposes - to avoid peaks and overloads while in use - so I also don't see the value in setting a pre-amplifier to amounts as different as 0.14 for a phone and 0.75 for a tablet, which is a huge difference.
And by the way - why do two phones report different values, 0.14 vs. 0.37, which already makes a big difference in the result of your recording?!?
Since I almost always find a reason and a good explanation for why Apple does a certain thing in a certain way, I assume here as well that there is a specific reason for this and that it isn't just a random value.
I would just like to know what that reason is.
Thx kid
...and thanks for the links to the header files ;-)