

IEEE 754

The floating-point standard behind every CPU

IEEE 754 is the standard for floating-point arithmetic, first published in 1985 and updated in 2008 and 2019. It defines binary and decimal floating-point formats, rounding behaviour, exception handling, and the special values (NaN, ±Infinity, −0).

The binary64 double-precision format is what JavaScript Numbers, Python floats, and most other “number” types in modern languages use (a sketch of the bit layout follows the list):

  • 64 bits total
  • 1 sign bit
  • 11 exponent bits
  • 52 mantissa bits (an implicit leading 1 gives 53 bits of effective precision)
  • ~15–17 significant decimal digits of precision
  • Range from roughly 4.9e−324 (smallest subnormal) to 1.8e+308
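
As a rough sketch of how those fields sit inside a value, the bits of a JavaScript/TypeScript Number can be read out with a DataView (the helper name decomposeDouble is ours, purely illustrative):

  // Pull apart the 64-bit layout: 1 sign bit, 11 exponent bits, 52 mantissa bits.
  function decomposeDouble(x: number): { sign: number; exponent: number; mantissa: bigint } {
    const buf = new ArrayBuffer(8);
    const view = new DataView(buf);
    view.setFloat64(0, x);
    const bits = view.getBigUint64(0);

    const sign = Number(bits >> 63n);                // top bit
    const exponent = Number((bits >> 52n) & 0x7ffn); // next 11 bits, biased by 1023
    const mantissa = bits & 0xfffffffffffffn;        // low 52 bits
    return { sign, exponent, mantissa };
  }

  console.log(decomposeDouble(0.1));
  // { sign: 0, exponent: 1019, mantissa: 2702159776422298n }
  // Biased exponent 1019 means 2^(1019 − 1023) = 2^−4: 0.1 is stored as the closest
  // 53-bit approximation of 1.6 × 2^−4.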

The famous example: 0.1 + 0.2 === 0.3 returns false in nearly every language. The reason: 0.1 and 0.2 don’t have exact binary representations (just like 1/3 has no exact decimal representation). Their stored values are very close to but not exactly 0.1 and 0.2, and the sum compounds the tiny error. The result is 0.30000000000000004.
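A short illustration of that result, plus the usual workaround of comparing within a tolerance rather than with exact equality (a minimal sketch; Number.EPSILON is the gap between 1 and the next representable double):

  console.log(0.1 + 0.2);           // 0.30000000000000004
  console.log(0.1 + 0.2 === 0.3);   // false

  // Compare within a tolerance instead of exactly.
  // Note: a fixed eps like Number.EPSILON only makes sense for values near 1;
  // scale the tolerance for larger magnitudes.
  const approxEqual = (a: number, b: number, eps = Number.EPSILON) =>
    Math.abs(a - b) < eps;

  console.log(approxEqual(0.1 + 0.2, 0.3)); // true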

Implications for our tools: most conversions are accurate to ~15 decimal digits, which is more precision than any input we accept needs. The main place this matters is cryptocurrency amounts, where 18-decimal Wei values exceed Number's safe-integer range (2^53 − 1, about 9e15), so we use BigInt there to keep precision exact.
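
A minimal sketch of that BigInt approach, assuming amounts arrive as non-negative decimal strings (the helper name ethToWei is illustrative, not an actual function from our tools):

  const WEI_PER_ETH = 10n ** 18n;

  // Parse a decimal ETH string into integer Wei without ever going through a float.
  function ethToWei(eth: string): bigint {
    const [whole, frac = ""] = eth.split(".");
    const fracPadded = (frac + "0".repeat(18)).slice(0, 18); // right-pad to 18 decimals
    return BigInt(whole) * WEI_PER_ETH + BigInt(fracPadded);
  }

  console.log(ethToWei("1.000000000000000001"));         // 1000000000000000001n (exact)
  console.log(Number(ethToWei("1.000000000000000001"))); // 1000000000000000000 (last digit lost as a double)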


Published May 15, 2026