Strange things happening on corners and with finishing passes

Floating point always made my head hurt in class.

I had no problem with integers. 2’s complement binary, octal, hexadecimal, decimal… no problem.

Floating point, though, and I lose track of the numbers. Every time I think I grasp the concepts, I look at the raw data and get lost again. Worse, the machines try to “helpfully” render it as scientific notation.

Then there was the infamous Intel floating-point (FDIV) bug, which spawned plenty of jokes.

Anyway, bug or no bug, floating-point implementations of trig and division often rely on lookup tables and approximations, which have varying degrees (no pun intended) of accuracy and may conflict with what the geometry says should happen. Give the numbers a tiny bit of leeway, and they seem to be able to work stuff out. The routines for conic sections (arcs) and for trigonometry (distance on a 2D plane) may each be individually very good, yet slightly inconsistent with each other when computing what we can plainly see should be the same result. Add in the vagaries of digital floating-point math, and it makes me wonder that this stuff ever works at all.
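A hedged sketch of that endpoint mismatch, in Python: reach the top of a quarter circle of radius 10 two ways, directly via cos/sin of 90°, and by composing ninety 1° rotations. Mathematically both land on (0, 10); in floating point they differ by a tiny amount, which is exactly why a little leeway makes things work out. The radius, step count, and tolerance here are all illustrative, not taken from any particular CAM program.

```python
import math

r = 10.0

# Direct: one trig evaluation at 90 degrees.
direct = (r * math.cos(math.pi / 2), r * math.sin(math.pi / 2))

# Incremental: compose ninety 1-degree rotations, the way an arc
# might be stepped out segment by segment.
step = math.radians(1.0)
c, s = math.cos(step), math.sin(step)
x, y = r, 0.0
for _ in range(90):
    x, y = x * c - y * s, x * s + y * c

# The two "identical" endpoints disagree by a tiny distance.
error = math.hypot(x - direct[0], y - direct[1])
print(error)          # tiny, but not exactly zero
print(error < 1e-6)   # True: comfortably inside a reasonable leeway
```

The incremental path accumulates a little rounding error on every rotation, so the more segments an arc is broken into, the more its computed endpoint can drift from the directly computed one.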

Thank you. The Decimal to Floating-Point Converter page at Exploring Binary includes the following:

Inside the computer, most numbers with a decimal point can only be approximated; another number, just a tiny bit away from the one you want, must stand in for it. For example, in single-precision floating-point [e.g. Estlcam], 0.1 becomes 0.100000001490116119384765625. If your program is printing 0.1, it is lying to you; if it is printing 0.100000001, it’s still lying, but at least it’s telling you you really don’t have 0.1.
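You can verify that quoted value yourself with Python's standard library. Python floats are 64-bit doubles, so round-tripping through `struct`'s `'f'` format simulates the 32-bit single precision a program like Estlcam would use; `Decimal` then prints the exact stored value.

```python
import struct
from decimal import Decimal

# Exact decimal value of the 64-bit double nearest to 0.1
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625

# Round-trip through 32-bit single precision, then show its exact value
single = struct.unpack('f', struct.pack('f', 0.1))[0]
print(Decimal(single))
# 0.100000001490116119384765625
```

Neither result is 0.1; the "lying" the quote describes is just the printer rounding these long exact values back to something short.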

The reason for the slight inaccuracy is that the computer only understands 0’s and 1’s, so floating-point numbers have to be converted (see Jeffeb3’s post) to a string of 32 (or 64) 0’s and 1’s, e.g. 00111101110011001100110011001101 for that 0.1.
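That 32-bit pattern is easy to reproduce: pack 0.1 as a big-endian single-precision float and print the bytes in binary.

```python
import struct

# The 32-bit IEEE 754 single-precision encoding of 0.1
bits = int.from_bytes(struct.pack('>f', 0.1), 'big')
print(f'{bits:032b}')
# 00111101110011001100110011001101
```

Reading it apart: 1 sign bit, 8 exponent bits (01111011, i.e. an exponent of −4), and 23 fraction bits, the repeating 1100 pattern that makes exact 0.1 impossible in binary.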

Q: What’s .2 + .2?
A: .4000000059604644775390625 (32-bit, Estlcam)
or .4000000000000000222044605 (64-bit)
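Both answers can be checked the same way. The `f32` helper below is just an illustrative name for rounding a Python double to the nearest single-precision value via a `struct` round-trip.

```python
import struct
from decimal import Decimal

def f32(x):
    """Round a Python float (a 64-bit double) to the nearest 32-bit single."""
    return struct.unpack('f', struct.pack('f', x))[0]

# 32-bit: round both operands to single precision, add, round the sum
print(Decimal(f32(f32(0.2) + f32(0.2))))
# 0.4000000059604644775390625

# 64-bit: plain Python float arithmetic
print(Decimal(0.2 + 0.2))
# 0.40000000000000002220446049250313080847263336181640625
```

Note the 64-bit answer above is exact; the forum answer’s 25-digit version is that same value rounded off for display.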
