Native Decimal Arithmetic in JavaScript
JavaScript’s native number type is 64-bit (double precision) floating point, following the IEEE 754 standard. Floating point is an approximation of the real numbers and is not perfectly precise: there are infinitely many real numbers but only a finite number of possible encodings in a 64-bit scheme, so not every real number can be represented exactly.
Floating point gives a close approximation that is suitable for general purpose mathematics. Most popular general purpose programming languages (for example C/C++, Java, Python) also use IEEE 754 floating point as their native non-integral number type. On the other hand, languages like RPG and COBOL are designed specifically for business application programming and use fixed point decimal values.
Floating point arithmetic can lead to unexpected results, especially for programmers who are used to precise fixed point arithmetic. For example, consider this output from a Node.js REPL session:
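```
> 0.1 + 0.2
0.30000000000000004
> 0.2 + 0.4
0.6000000000000001
```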
As you can see, in floating point arithmetic 0.1 + 0.2 is not exactly 0.3, nor is 0.2 + 0.4 exactly 0.6. As discussed above, this is because floating point numbers are approximations of real numbers.
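One way to see the approximation directly is to format the literal 0.1 with more digits than the REPL shows by default; the extra digits expose the nearest double that is actually stored:

```
> (0.1).toFixed(20)
'0.10000000000000000555'
```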
To deal with this behavior, programmers must be aware of it and round (or truncate) values to the desired number of decimal places. Otherwise, subtle errors can be introduced, especially across repeated calculations.
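As a minimal sketch of that kind of cleanup (the variable names here are illustrative), a result can be rounded back to a fixed number of decimal places with the built-in toFixed(), which returns a string, and then converted back to a number:

```
// Round the imprecise sum back to 2 decimal places.
const sum = 0.1 + 0.2;                  // 0.30000000000000004
const rounded = Number(sum.toFixed(2)); // toFixed() returns the string "0.30"
console.log(rounded);                   // 0.3
console.log(rounded === 0.3);           // true
```

Note that toFixed() rounds rather than truncates; truncation requires scaling (for example, Math.trunc(x * 100) / 100), which can itself introduce floating point error.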
Another behavior that can be unexpected is that JavaScript numbers with more than about 15 significant digits will “silently” (that is, without throwing an exception) lose precision; integers are only guaranteed exact up to Number.MAX_SAFE_INTEGER (2^53 − 1, or 9007199254740991). For example:
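```
> 9007199254740993
9007199254740992
> 999999999999999999
1000000000000000000
```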
For solutions to both of these problems, see: