A truly logical system would be designed entirely around a base-12 number system. But we were born with an imperfect set of 10 fingers, and that doomed us.
Those aliens have 6 fingers. It's an absolutely ironic twist that their own measuring system is super illogical for them, and yet "logical" is the very word they use for it.
Basically it's because 12 is more divisible than 10. The factors of 10 are 1, 2, 5 and 10; 12 has 1, 2, 3, 4, 6 and 12. That gives you more flexibility when discussing numbers. Our clocks effectively run on base 12, which is why "quarter past 4" works out to a whole number of minutes. That's the argument I've heard, anyway.
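The divisibility claim is easy to sanity-check, e.g. with a quick Python sketch:

```python
def divisors(n):
    """Return all positive divisors of n."""
    return [d for d in range(1, n + 1) if n % d == 0]

print(divisors(10))  # [1, 2, 5, 10]
print(divisors(12))  # [1, 2, 3, 4, 6, 12]
```

Twelve picks up 3, 4 and 6 as divisors, which is exactly where the "more flexibility" comes from.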
I believe this is also why we have 360 "degrees" in a circle, and not 365. The ancients hated that a year was close to, but not exactly, 365 days. They chalked it up to the imperfection of Earth relative to the heavens. But a perfect year should be 360 days, because 360 is divisible by every single-digit number except 7.
On the matter of days in a year, there's also the idea of splitting the year into 13 months of 28 days each, for a total of 364 days, more closely matching the lunar cycle (and women's bodies). Every date would always fall on the same day of the week.
And the extra day? It's World Day, a global holiday celebrating the new year, and it doesn't belong to any weekday. Some years we'd also need a leap day (just like the current 29th of February), so those years would simply get two World Days.
A base-12 metric system would be the best of all worlds. In base 12, 1/3 of a cm is written 0.4 cm, i.e. exactly 4 twelfths.
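You can see why 1/3 terminates in base 12 but not in base 10 by computing the digits after the radix point, one long-division step at a time (a quick sketch):

```python
def frac_digits(num, den, base, places=6):
    """Digits after the radix point of num/den in the given base."""
    digits = []
    for _ in range(places):
        num *= base
        digits.append(num // den)  # next digit
        num %= den                 # carry the remainder forward
    return digits

print(frac_digits(1, 3, 12))  # [4, 0, 0, 0, 0, 0] -> 0.4 in base 12
print(frac_digits(1, 3, 10))  # [3, 3, 3, 3, 3, 3] -> 0.333... repeating
```

Any fraction whose denominator divides a power of the base terminates; since 3 divides 12, thirds are exact in base 12.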
Your way seems like the worst of all worlds. What is 0.1 feet in inches? shrugs. If you're going to use a different system to the people around you, why not use normal metric?
I said decimal inches, not decimal feet. Also, I use them personally with my own projects, not when giving measurements to other people. 4.25 inches makes more sense to me than 4 1/4 inches. I could use cm but I'm more used to inches since I live in the US. If I were to give my measurements to someone else I'd use fractions, since that is the standard here.
Base 6, however, is perfect for two hands with 5 fingers each. You can easily represent each of the six possible digits 0 1 2 3 4 5 on one hand, and can therefore comfortably count to 55 (decimal 35) with two hands, using our familiar place-value numeral system.
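The two-hand place-value scheme can be sketched in a few lines (the hand-to-digit assignment here is just one illustrative convention):

```python
def two_hand_value(left, right):
    """Left hand holds the sixes digit, right hand the ones digit (0-5 fingers each)."""
    assert 0 <= left <= 5 and 0 <= right <= 5
    return left * 6 + right

print(two_hand_value(5, 5))  # 35 -- written "55" in base 6
print(two_hand_value(2, 3))  # 15 -- written "23" in base 6
```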
A base 12 number system would have two extra symbols. Twelve would be written 10 and be called ten, and the number 144 would be written 100 and be called one hundred.
Everything you may think is inherent to base 10 largely is not. The quirky rules of 9's multiplication table would apply to 11's instead. Pi and e would still be irrational, and would remain so no matter which base N you choose. Long division would work the same. Etc.
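The "9's trick" really does carry over: in any base B, a number is divisible by B-1 exactly when its digit sum is. A quick check of the base-12 version (where the magic number is 11):

```python
def digit_sum(n, base):
    """Sum of the digits of n written in the given base."""
    s = 0
    while n:
        s += n % base
        n //= base
    return s

# Base 10: multiples of 9 have digit sums divisible by 9.
print(digit_sum(9 * 1234, 10) % 9)    # 0
# Base 12: multiples of 11 have base-12 digit sums divisible by 11.
print(digit_sum(11 * 987, 12) % 11)   # 0
```

It works because B is congruent to 1 mod B-1, so a number and its digit sum leave the same remainder.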
Yep. In computer science you sometimes need to calculate with hexadecimal numbers, where 10-15 are the letters A-F. You just use a different scaling factor.
In hexadecimal 10 is 16 in decimal. So if you do C * 10 it's C0 but that is 192 in decimal (12 * 16, remember the base is 16).
What's cool though is that (all in hexadecimal):
10 / 2 = 8
10 is 2 to the power of 4, which means 10 is divisible by 2 four times over.
Similarly (and arguably even cooler) with a base 12 system 10 is divisible by 2 AND 3!
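The hex arithmetic above is easy to verify in Python, which accepts `0x` literals directly (a quick sketch):

```python
# All in hexadecimal: C * 10 = C0, which is 192 in decimal (12 * 16).
print(hex(0xC * 0x10))  # 0xc0
print(0xC * 0x10)       # 192

# 10 / 2 = 8 in hex, and hex 10 is 2 to the power of 4.
print(hex(0x10 // 2))   # 0x8
print(0x10 == 2 ** 4)   # True
```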
I'll also defend fractional measurements over decimal to my dying breath. Decimal measurements can't express precision very well at all. You can only increase or decrease precision by a power of 10.
If your measurement is precise to a quarter of a unit, how do you express that in decimal? ".25" is implying that your measurement is precise to 1/100th - misrepresenting precision by a factor of 25.
Meanwhile with fractions it's easy. 1/4. Oh, your measurement of 1/4 meter is actually super duper precise? Great! Just don't reduce the fraction.
928/3712 is the same number as 1/4 or .25, but now you know exactly how precise the measurement is. Whereas with a decimal measurement you either have to say it's precise to 1/1000th (0.250), which is massively understating the precision, or 1/10000th (0.2500), which is massively overstating it.
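One wrinkle if you try this on a computer: standard rational types reduce automatically (Python's `fractions.Fraction` turns 928/3712 straight into 1/4), so to carry the precision you'd have to keep numerator and denominator separately. A minimal sketch of that idea:

```python
# Keep (numerator, denominator) as a plain tuple so the denominator
# can encode the claimed precision of the measurement.
measurement = (928, 3712)   # value 1/4, precise to 1/3712 of a unit

value = measurement[0] / measurement[1]
precision = 1 / measurement[1]

print(value)      # 0.25
print(precision)  # 1/3712 of a unit
```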
Honestly, I don't give a shit either way. Wish us 'mericans were on the same wavelength as the rest of the world, but we're awful in so many ways it doesn't even register.
However, this troll is gold and I think you're all sleeping on his genius
i’ve never heard of anyone using non-reduced fractions to measure precision. if you go into a machine shop and ask for a part to be milled to 16/64”, they will ask you what precision you need, they would never assume that means 16/64”+-1/128”.
if you need custom precision in any case, you can always specify that by hand, fractional or decimal.
But you can't specify it with decimal. That's my point. How do you tell the machine operator it needs to be precise to the 64th in decimal? "0.015625" implies precision over 15,000x finer than 1/64th. The difference between 1/10 and 1/100 is massive, and decimal has no way of expressing it with significant figures.
sure you can, you say “i need a hole with diameter 0.25” +- 0.015625“”. it doesn’t matter that you have more sig figs when you state your precision
but regardless, that’s probably not the precision you care about. there’s a good chance that you actually want something totally different, like 0.25+-0.1”. with decimal, it’s exceptionally clear what that means, even for complicated/very small decimals. doing the same thing fractionally has to be written as 1/4+-1/10”, meaning you have to figure out what that range of values is (3/20” to 7/20”)
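For what it's worth, that fractional range is exactly what exact rational arithmetic spits out (a quick check with Python's `fractions` module):

```python
from fractions import Fraction

quarter = Fraction(1, 4)
tol = Fraction(1, 10)

# The tolerance band 1/4 +- 1/10, computed exactly.
print(quarter - tol, quarter + tol)                # 3/20 7/20
print(float(quarter - tol), float(quarter + tol))  # 0.15 0.35
```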
Having to provide a "+/-" for a measurement is a silly alternative to using a measurement that already includes precision. You're just so used to doing things a stupid way that you don't see it.
providing an arbitrarily non-reduced fraction is an even sillier alternative. the same fundamental issue arises either way, and it’s much clearer to use obvious semantics that everyone can understand
How do you represent 1/64th in decimal without implying greater or lesser precision? Or 1/3rd? Or 1/2 or literally anything that isn't a power of 10?
You're defending the practice of saying "this number, but maybe not, because we can't actually measure that precisely, so here are some more numbers you can use to figure out how precise our measurements are."
How is that a more elegant solution than simply having the precision recorded in a single rational measurement?
No measured value will be perfectly precise, so it doesn't make sense to use that as a criterion for a system of measurement. You're never going to be able to cut a board to exactly 1/3 of a foot, so it doesn't matter that the metric value will be rounded a bit.