All numbers in JavaScript, whether integers or decimals, are of type Number. Internally, a Number is stored as a 64-bit floating point value, the same representation as Java's double type; in that sense, every number in JavaScript is a floating point number. Following the IEEE 754 standard (the floating point arithmetic standard), the largest value JavaScript can represent is plus or minus 1.7976931348623157 times 10 to the power of 308, and the smallest positive value it can represent is 5 times 10 to the power of minus 324. These two boundary values can be obtained from the MAX_VALUE and MIN_VALUE properties of the Number object respectively.
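Because every Number is a double under the hood, the classic rounding behavior of binary floating point shows up even in simple decimal arithmetic. A minimal illustration (the output is standard IEEE 754 double behavior, not specific to any engine):
var sum = 0.1 + 0.2;
console.log(sum); // 0.30000000000000004, not 0.3
console.log(sum === 0.3); // false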
For integers, the ECMAScript standard (http://ecma262-5.com/ELS5_HTML.htm#Section_8.5) requires that the range within which JavaScript can represent integers and perform exact arithmetic on them is plus or minus 2 to the power of 53, that is, from the minimum value -9007199254740992 to the maximum value +9007199254740992. JavaScript can still operate on integers outside this range, but it no longer guarantees exact results. It is worth noting that for integer bit operations (such as shifts), JavaScript supports only 32-bit integers, that is, integers from -2147483648 to +2147483647.
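The 2 to the power of 53 boundary is easy to observe directly: beyond it, adjacent doubles are 2 apart, so distinct integer literals can collapse into the same value. A small sketch (Number.MAX_SAFE_INTEGER, added in ES2015, names the largest integer below this boundary, 9007199254740991):
console.log(9007199254740991 === 9007199254740992); // false: still distinguishable
console.log(9007199254740992 === 9007199254740993); // true: both parse to 2^53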
Experiment
Display the maximum value and the minimum positive decimal value that JavaScript can represent:
The code is as follows:
console.log(Number.MAX_VALUE);
console.log(Number.MIN_VALUE);
The results are 1.7976931348623157e+308 and 5e-324.
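Incidentally, arithmetic that overflows MAX_VALUE neither wraps around nor throws; it yields the special value Infinity, as the following sketch shows:
console.log(Number.MAX_VALUE * 2); // Infinity
console.log(Number.MAX_VALUE * 2 === Infinity); // true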
JavaScript cannot give exact results for integers outside the range of plus or minus 2 to the power of 53:
The code is as follows:
var a = 9007199254740992;
console.log(a+3);
The correct result should be 9007199254740995, but JavaScript gives 9007199254740996. If you experiment with other expressions, you will find that as soon as an integer exceeds 9007199254740992, such errors occur frequently. Even if this deviation in accuracy were acceptable, the consequences in the following example are more serious:
The code is as follows:
var MAX_INT = 9007199254740992;
for (var i = MAX_INT; i < MAX_INT + 2; ++i) {
// infinite loop
}
Due to this loss of precision, the above for statement falls into an infinite loop: MAX_INT + 1 cannot be represented and rounds back down to MAX_INT, so i never advances and the loop condition never becomes false.
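The failure is easy to verify on its own; the sketch below shows the increment at the boundary silently having no effect:
var MAX_INT = 9007199254740992;
console.log(MAX_INT + 1); // 9007199254740992: rounds back down
console.log(MAX_INT + 1 === MAX_INT); // true, which is why ++i never advances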
For bit operations, JavaScript supports only 32-bit integers:
The code is as follows:
var smallInt = 256;
var bigInt = 2200000000; // larger than 2147483647, outside the 32-bit range
console.log(smallInt / 2); // 128
console.log(smallInt >> 1); // 128, matches the division
console.log(bigInt / 2); // 1100000000
console.log(bigInt >> 1); // -1047483648, far from the correct result
As the output shows, for an integer within the 32-bit range (256), JavaScript performs the bit operation correctly, and the result agrees with that of the division (128). For an integer beyond 32 bits, JavaScript can still divide correctly (1100000000), but the result of the bit operation (-1047483648) is far from the correct one.
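The reason is that the bitwise operators first convert their operands using the spec's ToInt32 operation, which reduces the value modulo 2 to the power of 32 and reinterprets it as a signed 32-bit integer; 2200000000 therefore becomes a negative number before the shift even happens. A sketch using | 0, a common idiom that exposes the same conversion:
var bigInt = 2200000000;
console.log(bigInt | 0); // -2094967296, i.e. 2200000000 minus 2^32
console.log((bigInt | 0) / 2); // -1047483648, exactly what bigInt >> 1 produced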