Let’s look at two questions first:
0.1 + 0.2 == 0.3; // false
9999999999999999 == 10000000000000000; // true
The first problem, the precision of decimals, has been discussed in many blog posts. The second surfaced last year, when a system correcting data in its database found that some records were unexpectedly duplicated. This article starts from the ECMAScript specification and works through both issues.
Maximum integer
Numbers in JavaScript are stored using IEEE 754 double precision 64-bit floating point numbers, and their format is:
s × m × 2^e
Here s is the sign, indicating positive or negative; m is the mantissa, a positive integer less than 2^53 (52 stored bits plus one hidden leading bit); e is the exponent, stored in 11 bits. The ECMAScript specification gives e the range [-1074, 971]. From this it is easy to deduce the maximum integer that JavaScript can represent:
1 × (2^53 - 1) × 2^971 = 1.7976931348623157e+308
This value is Number.MAX_VALUE.
Similarly, the value of Number.MIN_VALUE can be deduced as:
1 × 1 × 2^(-1074) = 5e-324
Note that MIN_VALUE is the positive number closest to 0, not the smallest number. The smallest number is -Number.MAX_VALUE.
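These derivations are easy to check in any JS console (a quick sketch; multiplying by a power of two only shifts the exponent, so the arithmetic below is exact):
(Math.pow(2, 53) - 1) * Math.pow(2, 971) === Number.MAX_VALUE; // true
Math.pow(2, -1074) === Number.MIN_VALUE; // true
Number.MIN_VALUE > 0; // true: a tiny positive number, not a negative one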
Loss of precision in decimals
JavaScript numbers are double-precision floating-point values stored in binary. When a value needs more than 53 significant bits (52 stored mantissa bits plus the hidden bit), precision is lost. For example:
The binary of decimal 0.1 is 0.0 0011 0011 0011 … (0011 repeating)
The binary of decimal 0.2 is 0.0011 0011 0011 … (0011 repeating)
0.1 + 0.2 can then be written out as:
  e = -4; m = 1.10011001100...1100 (52 bits)
+ e = -3; m = 1.10011001100...1100 (52 bits)
----------------------------------------------
  e = -3; m = 0.11001100110...0110 (align the exponents: shift the first mantissa right by one)
+ e = -3; m = 1.10011001100...1100
----------------------------------------------
  e = -3; m = 10.01100110011...001 (add the mantissas)
----------------------------------------------
= 0.01001100110011...001 (binary, after rounding back to 53 significant bits)
= 0.30000000000000004 (decimal)
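You can see the rounded values directly by asking for more digits than the default conversion prints (a quick check; 21 significant digits is enough here):
(0.1).toPrecision(21); // "0.100000000000000005551"
(0.2).toPrecision(21); // "0.200000000000000011102"
(0.1 + 0.2).toPrecision(21); // "0.300000000000000044409"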
Based on the calculation above, we can also draw a conclusion: a decimal whose binary representation terminates within 52 mantissa bits can be stored exactly in JavaScript, e.g. 0.5 + 0.25 == 0.75 is true. Interestingly, even sums of decimals with non-terminating binary representations sometimes compare equal, because the rounding errors happen to cancel:
0.05 + 0.005 == 0.055; // true
0.05 + 0.2 == 0.25; // true
0.05 + 0.9 == 0.95; // false
Predicting results like these requires taking IEEE 754's rounding mode (round half to even) into account; interested readers can study it further.
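Because of this, computed decimals are usually compared against a small tolerance rather than with ==. A minimal sketch, using the ES6 constant Number.EPSILON (the gap between 1 and the next representable double); the helper name is our own:
function nearlyEqual(a, b) {
  // scale the tolerance to the magnitude of the operands
  var scale = Math.max(1, Math.abs(a), Math.abs(b));
  return Math.abs(a - b) < Number.EPSILON * scale;
}
nearlyEqual(0.1 + 0.2, 0.3); // true
0.1 + 0.2 == 0.3; // still false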
Loss of precision in large integers
This issue is mentioned far less often. First, let's pin down what the problem actually is:
1. What is the maximum integer that JavaScript can store?
This was answered above: Number.MAX_VALUE, a very large number.
2. What is the maximum integer that JavaScript can store without losing precision?
According to s × m × 2^e: make the sign positive, fill all 52 stored mantissa bits with 1 (together with the hidden bit, m = 2^53 - 1), and take the maximum exponent e = 971. Clearly, the answer is still Number.MAX_VALUE.
So what exactly is our problem? Go back to the opening code:
9999999999999999 == 10000000000000000; // true
Clearly, a 16-digit run of 9s is nowhere near MAX_VALUE, which is on the order of 10^308. This problem has nothing to do with MAX_VALUE; it comes down to the mantissa m having only 52 bits.
The real question can be described in code:
var x = 1; // to cut down the run time, start higher, e.g. Math.pow(2, 53) - 10
while (x != x + 1) x++;
// x stops at 9007199254740992, i.e. 2^53
That is to say, as long as x is no greater than 2^53, its precision is guaranteed; once x exceeds 2^53, precision may be lost. For example:
When x is 2^53 + 1, its binary representation is:
100000000000...001 (52 zeros in the middle)
When stored as a double-precision float:
e = 1; m = 10000..00 (the leading 1 is the hidden bit, followed by 52 zeros; the trailing 1 does not fit and is rounded away)
Obviously, this is stored exactly the same way as 2^53, so 2^53 + 1 == 2^53.
Following the same idea, for 2^53 + 2 the binary is 100000…0010 (51 zeros in the middle); it fits within 53 significant bits, so it can be stored exactly.
Rule: when x is greater than 2^53 and its binary representation has more than 53 significant bits, precision is lost. This is essentially the same mechanism as the loss of precision for decimals.
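ES6 gives this boundary a name, Number.MAX_SAFE_INTEGER (2^53 - 1), along with Number.isSafeInteger; a quick demonstration of the rule:
Math.pow(2, 53) + 1 === Math.pow(2, 53); // true: the trailing 1 is rounded away
Math.pow(2, 53) + 2 === Math.pow(2, 53); // false: 2^53 + 2 fits in 53 significant bits
Number.MAX_SAFE_INTEGER === Math.pow(2, 53) - 1; // true
Number.isSafeInteger(Math.pow(2, 53)); // false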
For background on the hidden bit, see: A tutorial about Java double type.
Summary
Precision loss for decimals and large integers is not unique to JavaScript. Strictly speaking, any programming language that stores floating-point types in the IEEE 754 format (C/C++/C#/Java, etc.) has the same issue. C# and Java provide wrapper classes, Decimal and BigDecimal respectively, to handle such cases and avoid losing precision.
Note: there is already a decimal proposal for ECMAScript, but it has not been officially adopted yet.
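In the meantime, modern JavaScript (ES2020) does ship BigInt for arbitrary-precision integers, which sidesteps the large-integer problem from the opening of this article (decimals still need a library until a decimal type lands); a brief sketch:
9999999999999999n === 10000000000000000n; // false: BigInt arithmetic is exact
2n ** 53n + 1n; // 9007199254740993n, no rounding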
Finally, a little test for everyone:
Number.MAX_VALUE + 1 == Number.MAX_VALUE;
Number.MAX_VALUE + 2 == Number.MAX_VALUE;
...
Number.MAX_VALUE + x == Number.MAX_VALUE;
Number.MAX_VALUE + x + 1 == Infinity;
...
Number.MAX_VALUE + Number.MAX_VALUE == Infinity;
// Questions:
// 1. What is the value of x?
// 2. Is Infinity - Number.MAX_VALUE == x + 1 true or false?
This concludes our brief discussion of the precision loss of decimals and large integers in JavaScript. I hope it gives you a useful reference, and I hope you will continue to support Wulin.com.