How Python Compares Floats and Ints: When Equals Isn’t Really Equal
Another Python gotcha and an investigation into its internals to understand why this happens
![How Python Compares Floats and Ints: Why It Can Give Surprising Results](https://lemmy.world/pictrs/image/5936275d-9e37-4723-bc16-f7bcd3dc9727.jpeg?format=webp)
TL;DR:
In Python, the following returns False:
9007199254740993 == 9007199254740993.0
The floating point number 9007199254740993.0 is internally represented in memory as 9007199254740992.0 (due to how floating point works).
Python has special logic for comparing ints with floats: rather than converting the int to a float (which could itself lose precision), it compares the two values exactly. Here it compares the int 9007199254740993 with the float 9007199254740992.0. Python sees that the integer parts differ, so it stops there and returns False.
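Both pieces of the TL;DR can be seen in a couple of lines of plain Python (a minimal sketch; 9007199254740993 is 2**53 + 1, the first integer a double can't represent):

```python
big = 9007199254740993                # 2**53 + 1, exact as a Python int
print(float(big))                     # rounds to the nearest double: 9007199254740992.0
print(big == 9007199254740993.0)      # False: the float literal already rounded down
print(big - 1 == 9007199254740993.0)  # True: both sides are exactly ...992
```

Note the last line: the int 9007199254740992 compares equal to the literal 9007199254740993.0, because that literal was rounded to ...992.0 before the comparison ever ran.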
Comparing floats for equality is generally a bad idea anyway.
Floats should really only be used for approximate math. You need something like Java's BigDecimal or BigInteger to do exact arithmetic with arbitrary precision.
Looks like this is the equivalent for Python:
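Python's closest standard-library analogue to BigDecimal is the `decimal` module (and Python ints are already arbitrary-precision, so there's no separate BigInteger). A minimal sketch of the difference:

```python
from decimal import Decimal

# Binary floats accumulate decimal rounding error:
print(0.1 + 0.2)                        # 0.30000000000000004

# Decimal values built from strings stay exact in base 10:
print(Decimal("0.1") + Decimal("0.2"))  # 0.3
```

The catch is constructing Decimals from strings: `Decimal(0.1)` would faithfully capture the already-rounded binary value of 0.1.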
Comparing is fine, but it should be fuzzy. Less than and greater than are fine, so you should basically only be checking whether a value falls within a range, not whether it equals a specific value.
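The standard-library way to do that fuzzy check is `math.isclose` (default relative tolerance 1e-09). A small sketch:

```python
import math

a = 0.1 + 0.2
print(a == 0.3)               # False: exact equality is too strict for floats
print(math.isclose(a, 0.3))   # True: within relative tolerance
print(0.29 < a < 0.31)        # True: an explicit range check, as suggested above
```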
I assume this is because that number is so large that it loses precision, in which case this is more of a quirk of floating point than a quirk of Python.
Disclaimer: Have not read the article yet.
It’s both. As you said, it’s because of loss of floating point precision, but it’s also due to some quirks in how Python compares ints with floats. These two together cause this strange behavior.
If I'm comparing ints with floats, it's my fault in the first place.
Exactly, I'd expect a warning, if not an error.
I guess it's something like: if close enough, set to true.
Now I'll read the article and discover it's like 100x more complex.
Edit: It is indeed at least 100x more complex.
It's not only more complex, it also doesn't work like you described at all.
Did nobody read the manual?
IEEE 754 double precision: The 53-bit significand precision gives from 15 to 17 significant decimal digits precision.
I'm not sure where the 17 comes from. It's 15.
The "15 to 17" part is worded somewhat confusingly, but it's not wrong.
The number of bits in a double is equivalent to ~15.95 decimal digits. If you want a decimal number with a fixed number of significant digits to be stored exactly, floor(15.95) = 15 digits is the most you can hope for. However, if you want to write out a double exactly as a decimal number, you need 17 digits.
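Both directions can be demonstrated with a short sketch, using `math.nextafter` (Python 3.9+) to get the smallest double above 1.0:

```python
import math

x = math.nextafter(1.0, 2.0)     # the smallest double greater than 1.0
print(f"{x:.17g}")               # 1.0000000000000002: 17 digits round-trip
print(float(f"{x:.17g}") == x)   # True
print(float(f"{x:.16g}") == x)   # False: at 16 digits it collapses back to 1.0

# Conversely, a 15-significant-digit decimal survives the round trip:
s = "0.123456789012345"
print(f"{float(s):.15g}" == s)   # True
```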
Well-written 👍
Do we have a JS-type situation here?
Probably more like the old precision problem. It exists in C/C++ too; it's just how floats and ints work.