Stop using floats
Serious answer: Posits seem cool, like they do most of what floats do, but better (in a given amount of space). I think supporting them in hardware would be awesome, but of course there's a chicken and egg problem there with supporting them in programming languages.
Posits aside, that page had one of the best, clearest explanations of how floating point works that I've ever read. The authors of my college textbooks could have learned a thing or two about clarity from this writer.
I had the great honour of seeing John Gustafson give a presentation about unums shortly after he first proposed posits (type III unums). The benefits over floating-point arithmetic seemed incredible, and they seemed much simpler too.
I also got to chat with him about “Gustafson’s Law”, which kinda flips Amdahl’s Law on its head. Parallel computing has long been a bit of an interest for me; I was also in my last year of computer science studies then, and we were covering similar subjects at the time. I found that timing to be especially amusing.
No real use you say? How would they engineer boats without floats?
Just invert a sink.
Just build submarines, smh my head.
I know this is in jest, but if 0.1+0.2 != 0.3 hasn't caught you out at least once, then you haven't really done any programming.
IMO they should just remove the equality operator on floats.
Me making my first calculator in c.
what if i add more =
That should really be written as the gamma function, because factorial is only defined for non-negative integers. /s
But that's not because floats are inaccurate. A very, very pedantic compiler wouldn't even let you write f64 x = 0.1; because 0.1 (and also 0.2 and 0.3) can't be converted to a float exactly (note that 0.5, 0.25, 0.125, etc. can be stored exactly!).
The moment you write f64 x = 0.1; and expect the computer to store that inside a float, you have already made a wrong assumption. What the computer actually stores is the float value that is as close as possible to 0.1. Not because floats are inaccurate, but because floats are base 2. Note that floating-point types in general don't have to be base 2 (they can be any base; decimal types, for example, are base 10), but IEEE 754 floats are base 2 because it allows for simpler hardware implementations.
An even more pedantic compiler would only let you write floating-point literals in binary, like 10.10110001b, and make you do the conversion yourself, because that would make it blatantly obvious that most base-10 decimals can't even be converted without information loss. So the "inaccuracy" is not(!) because float calculations are inaccurate, but because many people wrongly assume that the base-10 literal they wrote can be stored inside a float.
Floats are actually really accurate (ignoring some Intel FPU hardware bugs). I skipped a lot of details, which you can find here: https://zeta.one/floats-are-not-inaccurate/
Equipped with that knowledge, your calculation 0.1+0.2 != 0.3 can simply be translated into: "the closest float to 0.1" + "the closest float to 0.2" is not equal to "the closest float to 0.3". Keep in mind that the addition itself is as accurate as it can possibly be: the result is correctly rounded to the nearest representable float on every IEEE 754-conforming implementation.
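You can watch the "closest float" translation happen (a quick Rust sketch; the comments show what Rust's shortest round-trip printing produces):

```rust
fn main() {
    // Rust prints the shortest decimal string that uniquely
    // identifies the stored float, which exposes the rounding:
    println!("{}", 0.1_f64 + 0.2_f64); // 0.30000000000000004
    println!("{}", 0.3_f64);           // 0.3

    // Forcing more digits shows that 0.1 itself was never stored:
    println!("{:.17}", 0.1_f64);       // 0.10000000000000001

    // Dyadic fractions survive the base-2 conversion exactly:
    assert_eq!(0.5 + 0.25, 0.75);
}
```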
Based and precision pilled.
As a programmer who grew up without an FPU (Archimedes/Acorn), I have never liked floats. But I thought this war had been lost a long time ago. Floats are everywhere. I've not done graphics for a bit, but I never saw a graphics card that took any form of fixed point. All the geometry you load in is in floats. The shaders all work in floats.
Briefly, ARM MCU work was float-free, but loads of those have float support now.
I mean, you can tell good low-level programmers by how they feel about floats. But the battle does seem lost. There are lots of bits of technology that have taken turns I don't like. Sometimes the market/bazaar has spoken and it's wrong, but you still have to grudgingly go with it or everything is too difficult.
But if you throw an FPU in water, does it not sink?
It's all lies.
IMO, floats model real observations.
And since there is no infinite precision in nature, there shouldn't be infinite precision in floats either.
So their odd behavior is actually entirely justified. This is why I can accept them.
I just gave up fighting. There is no system that is going to be both fast and infinitely precise.
So long ago, I worked at a game middleware company. One of the most common problems was skinning in local space vs. global space. We kept having customers try to do global skinning in massive worlds, and then they'd be upset by geometry distortion when miles away from the origin.
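For anyone who hasn't been bitten by this: the distortion is just f32 spending its significand bits on magnitude. A tiny Rust sketch (made-up numbers, treating units as metres):

```rust
fn main() {
    // f32 has a 24-bit significand, so the spacing between adjacent
    // representable values grows with magnitude. Far from the origin,
    // millimetre-scale vertex detail simply vanishes.
    let origin_offset = 100_000.0_f32;       // vertex ~100 km out
    let vertex = origin_offset + 0.001;      // add a 1 mm detail
    println!("{}", vertex - origin_offset);  // prints 0: the detail is gone

    let near = 1.0_f32 + 0.001;              // same detail near the origin
    println!("{}", near - 1.0);              // ~0.001, detail preserved
}
```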
I'd have to double-check, but I think old handheld consoles like the Game Boy or the DS used fixed-point.
I'm pretty sure they do, but the key word there is "old".
Floats make a lot of math way simpler, especially for audio, but then you run into the occasional NaN error.
On the PS3 Cell processor vector units, any NaN meant zero. Makes life easier if there are errors in the data.
Floats are only great if you deal with numbers that have no need for precision and accuracy. Want to calculate the F cost of an A* node? Floats are good enough.
But every time I need any kind of accuracy, I go straight for actual decimal numbers. Unless you are in extreme scenarios, you can afford the extra 64 to 256 bits in your memory.
That's not really true, and it depends on what you mean. If your decimal datatype has the same number of bits, it's not more accurate than base-2 floats. This is often hidden because many decimal implementations aren't 64-bit but 128-bit or more. What a decimal type can do is exactly represent base-10 numbers, which is not a requirement for a lot of applications.
You can use floats everywhere you don't need numbers to be base 10. With base-2 floats, the operations couldn't be more accurate given the limit of 64 bits. But if you write f64 x = 0.1; and assume that the computer somehow stored 0.1 inside x, you have already made a wrong assumption. 0.1 can't be converted into a float exactly, because it's periodic in base 2. A very, very pedantic compiler wouldn't even let you compile that and would force you to pick a value that actually can be represented.
Down the rabbit hole: https://zeta.one/floats-are-not-inaccurate/
Good and bad use-cases for floats
Floats can be used everywhere it doesn’t matter that you can’t store a 100% accurate base-ten representation: for example, positions and speeds in 3D games and animations, “analog” values like temperatures, the speed of a vehicle, geo positions with longitude and latitude, a person’s weight or blood pressure. In fact, if you develop games, there is no way around 32-bit floats, because GPUs are f32 number-crunching beasts. Modern 3D games wouldn’t be possible without all those fast f32 calculations.
You shouldn’t use binary floats if you need or expect accurate base-ten calculations (addition, subtraction, multiplication; note that divisions introduce errors quickly even in decimal types), or for quantities that have a smallest unit that can’t be broken down, like money. If you need to handle money, just store the amount of cents as an integer and only divide by 100 in your display function.
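A minimal sketch of the cents approach in Rust (the helper name format_cents is just for illustration):

```rust
// Store money as integer cents: every addition and subtraction is
// exact, and the rounding policy stays explicit and under your control.
fn format_cents(cents: i64) -> String {
    let sign = if cents < 0 { "-" } else { "" };
    // Divide by 100 only here, at display time, as suggested above.
    format!("{}{}.{:02}", sign, (cents / 100).abs(), (cents % 100).abs())
}

fn main() {
    let price: i64 = 1_999; // $19.99
    let tax: i64 = 160;     // $1.60
    let total = price + tax; // exact integer arithmetic
    println!("${}", format_cents(total)); // $21.59, exactly
}
```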
This is exactly my point. Don't use floats when you need accurate stuff; use them when you just need a "feel" for it.
I have been thinking that maybe modern programming languages should move away from supporting all of IEEE 754 within one data type.
Like, we've figured out that having a null value for everything, always, is a terrible idea. Instead, we've started encoding potential absence into our type system with Option or Result types, which also encourages dealing with such absence at the edges of our program, where it should be done.
Well, NaN is null all over again. Instead, we could make the division operator an associated function which returns a Result<f64> and disallow f64 from ever being NaN.
My main concern is interop with the outside world. So, I guess, there would still need to be an IEEE 754-compliant data type. But we could call it ieee_754_f64 to really get on the nerves of anyone wanting to use it when it's not strictly necessary.
And my secondary concern, that AI models would still want to just calculate with tons of floats without error handling at every intermediate step (even if it sometimes means the end result is a shitty vector of NaNs), would be supported by that type, too.
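Roughly what I have in mind, as a sketch (checked_div and the error type are made up, not an existing API):

```rust
// Division as a fallible operation: a NaN or infinite result becomes
// an Err at the point it happens, instead of silently propagating.
fn checked_div(num: f64, den: f64) -> Result<f64, String> {
    let q = num / den;
    if q.is_finite() {
        Ok(q)
    } else {
        Err(format!("{num} / {den} has no finite representation"))
    }
}

fn main() {
    assert_eq!(checked_div(1.0, 2.0), Ok(0.5));
    assert!(checked_div(0.0, 0.0).is_err()); // would have been NaN
    assert!(checked_div(1.0, 0.0).is_err()); // would have been +inf
}
```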
I agree with moving away from floats, but I have a far simpler proposal... just use a struct of two integers: a value and an offset. If you want to make it an IEEE standard where the offset is a four-bit signed value and the value is just a 28- or 60-bit regular old integer, then sure. But I can count the number of times I used floats on one hand, and I can count the number of times I wouldn't have been better off just using two integers on -0 hands.
Floats specifically solve the issue of how to store an absurdly large range of values in an extremely modest amount of space, and that's not a problem we need to generalize a solution for. In most cases, having values up to the millions in magnitude with three decimals of precision is good enough. Generally speaking, when you do float arithmetic, your numbers will be within an order of magnitude or two of each other... most people aren't adding the length of the universe in seconds to the width of an atom in meters, and if they are, floats don't work anyways.
I think the concept of having a fractionally defined value with a magnitude offset was just deeply flawed from the get-go - we need some way to deal with decimal values on computers but expressing those values as fractions is needlessly imprecise.
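If I'm reading the proposal right, that's essentially decimal fixed/floating point. A rough Rust sketch under that assumption (field widths and names are mine, not the commenter's spec):

```rust
// Two integers: the digits, plus a base-10 exponent.
// Dec { value: 21_590, offset: -3 } represents 21.590 exactly.
#[derive(Debug, Clone, Copy, PartialEq)]
struct Dec {
    value: i64, // the digits
    offset: i8, // represented number = value * 10^offset
}

impl Dec {
    // Addition is exact once both operands share an offset. The
    // rescaling below can overflow for extreme offsets, which is the
    // range problem binary floats trade exactness away to solve.
    fn add(self, other: Dec) -> Dec {
        let shift = (self.offset - other.offset).abs() as u32;
        let scale = 10_i64.pow(shift);
        if self.offset <= other.offset {
            Dec { value: self.value + other.value * scale, offset: self.offset }
        } else {
            Dec { value: self.value * scale + other.value, offset: other.offset }
        }
    }
}

fn main() {
    let a = Dec { value: 1, offset: -1 }; // 0.1
    let b = Dec { value: 2, offset: -1 }; // 0.2
    assert_eq!(a.add(b), Dec { value: 3, offset: -1 }); // exactly 0.3
}
```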
NaN isn't like null at all. It doesn't mean there isn't anything. It means the result of the operation is not a number that can be represented.
The only option is for operations that would result in NaN to be errors, which doesn't seem like a great solution.
While I get your proposal, I'd think this would make dealing with floats hell. Do you really want to .unwrap() every time you deal with one? Surely not.
One thing that would be great is if the / operator could work between Result and f64, as well as between Result and Result. It would be like doing a .map(|left| left / right) operation.
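Something like this, maybe (a sketch; the Checked newtype is made up, and a full version would also want Div<Checked> for Checked):

```rust
use std::ops::Div;

// Wrap the Result so we can implement `/` on it; chaining then works
// because errors just flow through the and_then untouched.
struct Checked(Result<f64, String>);

impl Div<f64> for Checked {
    type Output = Checked;
    fn div(self, right: f64) -> Checked {
        Checked(self.0.and_then(|left| {
            let q = left / right;
            if q.is_finite() { Ok(q) } else { Err(format!("{left} / {right} failed")) }
        }))
    }
}

fn main() {
    // The second division produces +inf, so the chain ends in Err
    // instead of quietly carrying a non-finite value forward.
    let r = Checked(Ok(10.0)) / 4.0 / 0.0;
    assert!(r.0.is_err());
}
```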
Well, not every time. Only if I do a division or get an ieee_754_f64 from the outside world. That doesn't happen terribly often in the applications I've worked on.
And if it does go wrong, I do want it to explode right then and there. The worst case would be if it wrote random NaNs into some database and no one knew where they came from.
As for your suggestion of the slash accepting Results: yeah, that could resolve some pain, but I've rarely seen multiple divisions being necessary back-to-back, and I don't want people passing around a Result<f64> in the codebase. Then you can't see where it went wrong anymore, either.
So, personally, I wouldn't put that division operator into the stdlib, but having it available as a library, if someone needs it, would be cool, yeah.
Float is bloat!
Call me when you've found a way to encode transcendental numbers.
Perhaps you can encode them as computation (i.e. a function of arbitrary precision)
Hard to do, as those functions are often limits and need infinitely many function applications. I'm telling you, math.PI is a finite lie!
May I propose a dedicated circuit (analog because you can only ever approximate their value) that stores and returns transcendental/irrational numbers exclusively? We can just assume they're going to be whatever value we need whenever we need them.
While we're at it, what the hell is -0 and how does it differ from 0?
From time to time I see this pattern in memes, but what is the original meme / situation?
It's my favourite format. I think the original was 'stop doing math'
Thank you 😁
math are numbers and therefore non-physical, and therefore esoterical, so stop giving it credit.
/s
Off topic, but how does one get a profile pic on Lemmy? Also love you ken.
you can configure it in the web interface. just go to your profile
Thank you!
Go to "Settings" (cog wheel) and then "Avatar":
There are probably a lot of scientific applications (e.g. statistics, audio, 3D graphics) where exponential notation is the norm and there’s an understanding about precision and significant digits/bits. It’s a space where fixed-point would absolutely destroy performance, because you’d need as many bits as required to store your largest terms. Yes, NaN and negative zero are utter disasters in the corners of the IEEE spec, but so is trying to do math with 256-bit integers.
For a practical explanation about how stark a difference this is, the PlayStation (one) uses an integer z-buffer (“fixed point”). This is responsible for the vertex popping/warping that the platform is known for. Floating-point z-buffers became the norm almost immediately after the console’s launch, and we’ve used them ever since.
While it's true the PS1 couldn't do floating point math, it did NOT have a z-buffer at all.
What's the problem with -0?
It conceptually makes sense for negative values too close to 0 to be represented as -0.
In practice I have never seen a problem with -0.
On NaN: while its use cases can nowadays be replaced with language constructs like result types, it was created before exceptions or sum types existed. The way it propagates kind of mirrors Haskell's monadic Maybe.
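The propagation in action (a couple of lines of Rust):

```rust
fn main() {
    // Once a NaN appears, it absorbs every later operation,
    // much like None threading through a chain of Maybe binds.
    let nan = f64::NAN; // e.g. the result of 0.0 / 0.0
    let x = (nan + 1.0) * 2.0 - 3.0;
    println!("{x}");        // NaN
    println!("{}", x == x); // false: NaN is unequal even to itself
}
```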
We should be demanding more and better wrapper types from our language/standard library designers.
Precision pilled.
The meme is right for once
I'm like, is that code on the right what I think it is? And it is! I'm so happy now.
Obviously floating point is of huge benefit for many audio DSP calculations. From my observations (non-programmer, just a long-time DAW user, from back in the day when fixed point with relatively low-resolution accumulators was often what we had to work with, versus now, when 64-bit floating point for processing is more the rule): fixed-point equalizers, for example, can potentially lead to DC offset in the results. I don't think peeps would be getting as close to modeling the non-linear behavior of analog processors with just fixed-point math, either.
Audio, like a lot of physical systems, involves logarithmic scales, which is where floating point shines. Problem is, all the other physical systems, which are not logarithmic, only get to eat the scraps left over by IEEE 754. Floating point is a scam!
Not only for audio, but for everything that doesn't have to be an exact base-10 representation (unlike money). Anything that represents something "analog" or "measured" is perfectly fine to store in a float: temperature, humidity, wind speed, car velocity, rocket acceleration, etc. Calculations with floats are correctly rounded, and given the same bit length they are as accurate as decimal types. The only thing they can't do is exactly(!) represent base-10 decimals, but for a very large number of applications that doesn't matter.
I actually hate floats. Integers all the way (unless I have no other choice).
The only reason for floating point numbers is to use your laptop as a life buoy
uses 64-bit double instead
Uhm, I haven't programmed in a low-level language in years. I use Python for my job now, and all I know are floats and ints. I don't know what this foreign language is you speak of.
One of the most accurate ones in this format.
Floats are heresy
Integers have fallen, billions must use long float.
The problem is that most languages have no native support for anything other than 32- or 64-bit floats, and some representations on the wire don't either. And most underlying processors don't have arbitrary-precision support either.
So either you choose speed and sacrifice precision, or you choose precision and sacrifice speed. The architecture might not support arbitrary precision, but most languages have a bignum/bigdecimal library that will do it more slowly. It might be necessary to marshal or store those values in databases or over the wire in whatever hacky way is necessary (e.g. encapsulating values in a string).
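For example, in Rust that route might look like this, assuming the third-party bigdecimal crate (exact API may vary by version):

```rust
use bigdecimal::BigDecimal; // external crate, not in std
use std::str::FromStr;

fn main() {
    // Exact decimal arithmetic, at the cost of speed and native support.
    let a = BigDecimal::from_str("0.1").unwrap();
    let b = BigDecimal::from_str("0.2").unwrap();
    let sum = a + b;
    assert_eq!(sum.to_string(), "0.3"); // exact, unlike f64

    // The "hacky" wire/storage story: round-trip through a string.
    let stored = sum.to_string();
    let restored = BigDecimal::from_str(&stored).unwrap();
    assert_eq!(restored.to_string(), "0.3");
}
```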
Stop using floats
Why?