Floating point always made my head hurt in class.
I had no problem with integers. 2’s complement binary, octal, hexadecimal, decimal… no problem.
Floating point, though, and I lose track of the numbers. Every time I think I've grasped the concepts, I look at the raw data and get lost again. Worse, the machines try to “helpfully” interpret the values into scientific notation.
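If you want to see the raw data without the “helpful” decimal interpretation, here's a small Python sketch (my own illustration, nothing exotic) that dumps the actual IEEE 754 bit pattern behind an innocent-looking 0.1:

```python
import struct

# 0.1 cannot be represented exactly in binary floating point.
x = 0.1

# Reinterpret the 64-bit double as an integer to see the raw bits:
# 1 sign bit | 11 exponent bits | 52 fraction bits.
bits = struct.unpack("<Q", struct.pack("<d", x))[0]
print(f"{bits:064b}")
print(f"{bits:016X}")   # 3FB999999999999A
print(f"{x:.20f}")      # 0.10000000000000000555
```

The repeating `1001` pattern in the fraction is the binary equivalent of a repeating decimal: 0.1 has no finite binary representation, so the stored value is the nearest double, not 0.1 itself.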
Then there was the infamous Intel floating point bug (the Pentium FDIV bug), leading to the joke:

Q: Why didn't Intel call the Pentium the 586?
A: Because they added 486 and 100 and got 585.999983605.
Anyway, bug or no bug, floating point math relies on lookup tables and approximation routines that have varying degrees (no pun intended) of accuracy, and may conflict with what plain arithmetic tells us the answer should be. Give them a tiny bit of leeway, though, and they seem to be able to work stuff out. The lookups for conic sections (arcs) and for trigonometry (distances on a 2D plane) use different tables that might each be individually very good, yet be slightly inconsistent with one another when computing what we can plainly see should be the same result. Add in the vagaries of digital floating point math, and it makes me wonder that this stuff ever works at all.
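That “tiny bit of leeway” can be shown in a few lines. This Python sketch (my own example, not anything tied to specific hardware tables) computes the same point two ways: out via trig to Cartesian coordinates, then back via the distance formula. The round trip is not guaranteed to be bit-exact, but a small tolerance sorts it out:

```python
import math

# Reach a point two ways: via trig (polar form), then check it with
# the distance formula. The results should "plainly" be the same.
r, theta = 5.0, 0.7                 # arbitrary radius and angle
x = r * math.cos(theta)
y = r * math.sin(theta)
d = math.hypot(x, y)                # distance back to the origin

# The error is tiny, but not guaranteed to be exactly zero...
print(abs(d - r))

# ...so give it a tiny bit of leeway, and it works out:
print(math.isclose(d, r, rel_tol=1e-12))  # True
```

Each of `cos`, `sin`, the multiplications, and `hypot` rounds to the nearest representable double, so the round-trip error is a handful of ulps at most. That's why comparing floats with `==` is a trap, while a relative-tolerance comparison behaves the way our eyes expect.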