Rounding when math.h and casting do not do the trick!!!

When I programmatically set the zoom scale of a UIScrollView to a certain ratio of the screen inside scrollViewDidEndZooming, the framework automatically invokes scrollViewDidZoom once more, and scrollViewDidEndZooming as well!


To avoid an infinite loop I compare the scale property once more against my required zoom levels (1 // 1.2857 // 1.8 // 3 and 9) and let the function pass through if the scale value is unchanged. When zooming in one direction, the values I set in the first pass are reported back identically when scrollViewDidEndZooming is invoked a second time by the framework. When zooming in the other direction, the two CGFloats are 'off' due to rounding errors (1 // 1.28569996 // 1.79999995 // 3 and 9).


My problem now is:


No matter how I round(), assign or cast my off-values, once they are finally assigned to a CGFloat the debugger reports back the same old off-value, so my comparison function fails and causes undesired jumps between my predefined zoom levels.


I could of course set the threshold values of my function to the off-values reported by the debugger - and it actually works this way now - but I am not sure how future-proof that is, and as a consequence it may eventually cause a 1-pixel-off glitch and put my views in the wrong spot on screen.


So I'd prefer an accurate procedure. But I can't figure out how to round my 1.79999995 to 1.8 in any other way...



Any ideas?

Answered by QuinceyMorris in 48955022


Can't you just do a

    if (fabs(oldValue - newValue) > .00001) {
        // change the scale
    }

You can't actually round anything to 1.8, because 1.8 isn't representable exactly in binary floating point.


Your best choice, if you're referring to a small number of standard zoom levels, is to represent them as integers scaled by a factor of 10 or 100 or 1000 or whatever you need to get enough precision. Then multiply your actual zoom values by the same factor, round to an integer and compare.


This is vaguely similar to what PBK suggested, but the problem with an absolute threshold is that you don't really know how small it needs to be to "overlook" inaccuracies in the binary floating point representation versus how large it needs to be to match successfully against your zoom levels.

But isn't it odd that -


  • when I am 'zooming in' and tell the scroll view in scrollViewDidEndZooming to take a scale factor of 1.8 and read the value back from the property, it says 1.79999995 - my function compares against 1.8 and gets hit...
  • when I am 'zooming out' and do the same, the property also reports 1.79999995 - but here my function compares against 1.8 and does NOT get hit - I have to change my function's value to 1.79999995 to get a hit...


BTW: I am already running into problems. The incorrect zoom level leads to triggering a view that should actually be out of sight because of the zooming. When I hit the upper edge of the screen, however, the action of this off-screen view gets triggered...


This happens only in zoom level 1.8 - all other factors define a clear edge with no mis-triggering - even the 1.28569996 one...

Accepted Answer

It may be irritating, but it isn't odd. 🙂


If you never calculated a floating point value, but only ever stored constants and compared between them, nothing odd would ever happen. The oddness results from comparing a constant and a calculated value, or two calculated values.


Even if you are not doing the calculations yourself, you may be using calculated values from the Cocoa frameworks. For example, if you set a CGFloat scale factor on a view, then retrieve the scale factor property, you don't know whether the scale factor given back to you was (say) recalculated from a view frame rect, or was stored internally in a different precision that might slightly change the value.


Inspecting the returned value in the debugger won't help, because the original and re-calculated values may have the same decimal representation, but be different binary values.

Thanks a lot Quincey, I got the point.


I further investigated my situation and rewrote my catcher function from scratch and found out the following:


The value of the scale when set to 1.8 is actually somewhere between 1.79999995x and 1.79999996


  • So when going in one direction I have to be smaller than 1.79999995x - (that's why 1.79999995 worked but 1.8 didn't)
  • and going in the other direction I have to be bigger than 1.79999996 - (that's why 1.8 worked but 1.79999995 didn't)


That does the trick - no rounding needed - as long as the floating-point implementation doesn't change in the future ;-)

Q: Do you think this is an issue to worry about?


But what still remains is my problem with the incorrect triggering at the top of the screen at the 1.8 zoom level. The funny thing is that the borders seem to be 100% correct - not even a pixel off, as far as I can tell from the visible views.




But when I pan down from the upper bezel of the iPad, the off-screen view gets triggered as soon as I reach the actual touch area of the screen, and I can move my finger at least 3-4 mm further into the screen. Much too far for a view that is off by one pixel. It almost seems like it's the size of the status bar, which I always keep deactivated in my application...


Could this be an iOS bug? But why only in this very zoom scale out of 5 preset scales? The others (1x 3x 9x) show no symptoms, and neither does the 1.28569996x one!


Q: Any ideas on that?


And a last question while we are at it!

Q: Do you know if the framework can tell me whether the pinchGestureRecognizer of my scroll view is currently 'zooming in or out'? I only found 'zooming', but that's not enough in my case. If not -

Q: where is the best place to hook in and stash away the 'last scale value', and where should I compare it against the current one in order to find out the direction of my zooming?


Thanks again for your time


Much obliged!

Also, thank you for your time...

Don't use the "integer after multiplying by 100" if what you want is a floating point adjustment - you will regret obfuscating the code if you ever want to change the scale in a few weeks (imagine trying to introduce that 1.2857 if the scale is only 100) or if anyone else tries to use your code. The problem is using "==" to compare two floating point values - don't ever do that - instead use the fabs(X-Y)<.00001. It's easier to understand.

You can call it obfuscation, I call it the substitution of a tractable algorithm in place of an intractable one. If I cut an orange in quarters, and then count the quarters — 1, 2, 3, 4 — I'm not obfuscating the orange, I'm just counting quarters.


>> The problem is using "==" to compare two floating point values - don't ever do that


I wish you'd stop saying that, because it's terrible advice. It promotes the idea that floating point numbers are somehow unstable, as if their values change spontaneously. The real problems are (or include):


1. When you use a floating point constant like 1.8, it's easy to make the mistake of thinking that it represents the real number 1.8. As the OP found, it actually represents a slightly smaller real number, and you forget that at your peril.


2. When you perform a floating point calculation, such as addition or multiplication, you may introduce error further down the line because the result cannot be represented within the precision available.


3. When your floating point number represents a real-world measurement, for example, a temperature sensor reading, then there is likely error in the measurement itself.


Comparison of values with error certainly shouldn't be done with '==', but that's true of integer values just as much as floating point values — comparing 1000 against 1001 is always invalid, if the values have (say) only 2 significant digits.


>> instead use the fabs(X-Y)<.00001


And I wish you'd stop saying that, too. Even if there is no measurement or calculation error in X and Y, the fact that floating point representations are not evenly spaced along the real number line means that a fixed comparison interval is going to blow up in your face eventually.

Actually, your solution may already fail on a 32-bit device, where CGFloat only has 24 binary digits of precision. I think that's equivalent to about 6 decimal digits, though I don't remember exactly right now. I'd also point out that what you did is (more or less) use a multiplier of 1,000,000,000, but for your special case of 1.8 you "optimized" away the details so there's no actual multiplication or rounding.


I don't know the answer to your second question. It may be a deeper issue in your code, or it may be a matter of the geometry that iOS uses (e.g. accounting for the status bar even when hidden).


Doesn't UIPinchGestureRecognizer's "scale" property tell you what you want to know about whether you're zoomed in or out (greater or less than 1.0)?

I see your point. Let's agree to disagree. I believe that "==" for floating point numbers is a bad idea and that 'multiplying by 100' will end up being more confusing than fabs(X-Y)<.00001. Note that int(1.8*100)=180 but int(1.237*100)=123, so if you introduced 1.237 later you would need to switch to *1000 and... been there, done that, bad experience.


Using '==' for integers is very different from using it for floating point numbers. Integers are precisely defined: 4/3=1 on an 8-bit Apple 2 and on a 64-bit iBook, no matter what compiler you use. Floating point numbers depend on many variables: 4/3 = 1.3333 or 1.3333333333 or 1.33333333333333333.
