What is JavaScript's highest integer value that a number can go to without losing precision?


Is this defined by the language? Is there a defined maximum? Is it different in different browsers?

JavaScript has two number types: Number and BigInt.

The most frequently used number type, Number, is a 64-bit IEEE 754 floating-point number.

The largest exact integral value of this type is Number.MAX_SAFE_INTEGER, which is:

  • 2^53 - 1, or
  • +/- 9007199254740991, or
  • nine quadrillion seven trillion one hundred ninety-nine billion two hundred fifty-four million seven hundred forty thousand nine hundred ninety-one

To put this in perspective: one quadrillion bytes is a petabyte (or one thousand terabytes).

"Safe" in this context refers to the ability to represent integers exactly and to correctly compare them.

From the spec:

Note that all the positive and negative integers whose magnitude is no greater than 2^53 are representable in the Number type (indeed, the integer 0 has two representations, +0 and -0).

To safely use integers larger than this, you need to use BigInt, which has no upper bound.
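
A minimal illustration of the difference (BigInt literals use an "n" suffix):

console.log(9007199254740992 + 1);   // 9007199254740992  -- Number loses the odd value
console.log(9007199254740992n + 1n); // 9007199254740993n -- BigInt stays exact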

Note that the bitwise operators and shift operators operate on 32-bit integers, so in that case, the max safe integer is 2^31 - 1, or 2147483647.

const log = console.log
var x = 9007199254740992
var y = -x
log(x == x + 1) // true !
log(y == y - 1) // also true !

// Arithmetic operators work, but bitwise/shifts only operate on int32:
log(x / 2)      // 4503599627370496
log(x >> 1)     // 0
log(x | 1)      // 1


Technical note on the subject of the number 9007199254740992: There is an exact IEEE-754 representation of this value, and you can assign and read this value from a variable, so for very carefully chosen applications in the domain of integers less than or equal to this value, you could treat this as a maximum value.

In the general case, you must treat this IEEE-754 value as inexact, because it is ambiguous whether it is encoding the logical value 9007199254740992 or 9007199254740993.
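
A one-line demonstration of that ambiguity (both literals round to the same double):

console.log(9007199254740993 === 9007199254740992); // true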


>= ES6:

Number.MIN_SAFE_INTEGER;
Number.MAX_SAFE_INTEGER;

<= ES5

From the reference:

Number.MAX_VALUE;
Number.MIN_VALUE;

console.log('MIN_VALUE', Number.MIN_VALUE);
console.log('MAX_VALUE', Number.MAX_VALUE);

console.log('MIN_SAFE_INTEGER', Number.MIN_SAFE_INTEGER); //ES6
console.log('MAX_SAFE_INTEGER', Number.MAX_SAFE_INTEGER); //ES6
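
For pre-ES6 environments that lack these constants, a simple fallback sketch (the variable names here are just illustrative):

var MAX_SAFE_INTEGER = Number.MAX_SAFE_INTEGER || Math.pow(2, 53) - 1;
var MIN_SAFE_INTEGER = Number.MIN_SAFE_INTEGER || -(Math.pow(2, 53) - 1);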


It is 2^53 == 9 007 199 254 740 992. This is because Numbers are stored as floating point with a 52-bit mantissa.

The min value is -2^53.

This makes for some fun behavior:

Math.pow(2, 53) == Math.pow(2, 53) + 1
>> true

And can also be dangerous :)

var MAX_INT = Math.pow(2, 53); // 9 007 199 254 740 992
for (var i = MAX_INT; i < MAX_INT + 2; ++i) {
    // infinite loop
}

Further reading: http://blog.vjeux.com/2010/javascript/javascript-max_int-number-limits.html


In JavaScript, there is a number called Infinity.

Examples:

(Infinity>100)
=> true

// Also worth noting
Infinity - 1 == Infinity
=> true

Math.pow(2,1024) === Infinity
=> true

This may be sufficient for some questions regarding this topic.
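
Related to this, overflowing past the top of the finite Number range also lands on Infinity (a quick check):

console.log(Number.MAX_VALUE * 2);                       // Infinity
console.log(Number.MAX_VALUE + 1 === Number.MAX_VALUE);  // true -- adding 1 is far below the precision at this scale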


Jimmy's answer correctly represents the continuous JavaScript integer spectrum as -9007199254740992 to 9007199254740992 inclusive (sorry 9007199254740993, you might think you are 9007199254740993, but you are wrong! Demonstration below or in jsfiddle).

console.log(9007199254740993);

However, there is no answer that finds/proves this programmatically (other than the one CoolAJ86 alluded to in his answer that would finish in 28.56 years ;), so here's a slightly more efficient way to do that (to be precise, it's more efficient by about 28.559999999968312 years :), along with a test fiddle:

/**
 * Checks if adding/subtracting one to/from a number yields the correct result.
 *
 * @param number The number to test
 * @return true if you can add/subtract 1, false otherwise.
 */
var canAddSubtractOneFromNumber = function(number) {
    var numMinusOne = number - 1;
    var numPlusOne = number + 1;
    
    return ((number - numMinusOne) === 1) && ((number - numPlusOne) === -1);
}

//Find the highest number
var highestNumber = 3; //Start with an integer 1 or higher

//Get a number higher than the valid integer range
while (canAddSubtractOneFromNumber(highestNumber)) {
    highestNumber *= 2;
}

//Find the lowest number you can't add/subtract 1 from
var numToSubtract = highestNumber / 4;
while (numToSubtract >= 1) {
    while (!canAddSubtractOneFromNumber(highestNumber - numToSubtract)) {
        highestNumber = highestNumber - numToSubtract;
    }
    
    numToSubtract /= 2;
}        

//And there was much rejoicing.  Yay.    
console.log('HighestNumber = ' + highestNumber);


To be safe

var MAX_INT = 4294967295;

Reasoning

I thought I'd be clever and find the value at which x + 1 === x with a more pragmatic approach.

My machine can only count 10 million per second or so... so I'll post back with the definitive answer in 28.56 years.

If you can't wait that long, I'm willing to bet that

  • Most of your loops don't run for 28.56 years
  • 9007199254740992 === Math.pow(2, 53) + 1 is proof enough
  • You should stick to 4294967295, which is Math.pow(2,32) - 1, so as to avoid expected issues with bit-shifting

Finding x + 1 === x:

(function () {
  "use strict";

  var x = 0
    , start = new Date().valueOf()
    ;

  while (x + 1 != x) {
    if (!(x % 10000000)) {
      console.log(x);
    }

    x += 1
  }

  console.log(x, new Date().valueOf() - start);
}());

The short answer is “it depends.”

If you’re using bitwise operators anywhere (or if you’re referring to the length of an Array), the ranges are:

Unsigned: 0…(-1>>>0)

Signed: (-(-1>>>1)-1)…(-1>>>1)

(It so happens that the bitwise operators and the maximum length of an array are restricted to 32-bit integers.)

If you’re not using bitwise operators or working with array lengths:

Signed: (-Math.pow(2,53))…(+Math.pow(2,53))

These limitations are imposed by the internal representation of the “Number” type, which generally corresponds to IEEE 754 double-precision floating-point representation. (Note that unlike typical signed integers, the magnitude of the negative limit is the same as the magnitude of the positive limit, due to characteristics of the internal representation, which actually includes a negative 0!)
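
Evaluating those expressions makes the ranges concrete:

console.log(-1 >>> 0);        // 4294967295   (unsigned 32-bit maximum)
console.log(-1 >>> 1);        // 2147483647   (signed 32-bit maximum)
console.log(-(-1 >>> 1) - 1); // -2147483648  (signed 32-bit minimum)
console.log(Math.pow(2, 53)); // 9007199254740992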


ECMAScript 6:

Number.MAX_SAFE_INTEGER = Math.pow(2, 53)-1;
Number.MIN_SAFE_INTEGER = -Number.MAX_SAFE_INTEGER;

Many earlier answers have shown that 9007199254740992 === 9007199254740992 + 1 is true, verifying that 9 007 199 254 740 991 is the maximum safe integer.

What if we keep incrementing:

input: 9007199254740992 + 1  output: 9007199254740992  // expected: 9007199254740993
input: 9007199254740992 + 2  output: 9007199254740994  // expected: 9007199254740994
input: 9007199254740992 + 3  output: 9007199254740996  // expected: 9007199254740995
input: 9007199254740992 + 4  output: 9007199254740996  // expected: 9007199254740996

We can see that among numbers greater than 9 007 199 254 740 992, only even numbers are representable.

This is a good entry point for explaining how the double-precision 64-bit binary format works here. Let's see how 9 007 199 254 740 992 is held (represented) in this binary format.

Using an abbreviated form, starting from 4 503 599 627 370 496:

  1 . 0000 ---- 0000  *  2^52            =>  1  0000 ---- 0000.  
     |-- 52 bits --|    |exponent part|        |-- 52 bits --|

On the left side of the arrow, we have the bit value 1 with an adjacent radix point. Multiplying by 2^52 moves the radix point 52 places to the right, to the end. This gives us 4503599627370496 in binary.

Now we keep adding 1 to this value until all the fraction bits are set to 1, which equals 9 007 199 254 740 991 in decimal.

  1 . 0000 ---- 0000  *  2^52  =>  1  0000 ---- 0000.  
                       (+1)
  1 . 0000 ---- 0001  *  2^52  =>  1  0000 ---- 0001.  
                       (+1)
  1 . 0000 ---- 0010  *  2^52  =>  1  0000 ---- 0010.  
                       (+1)
                        . 
                        .
                        .
  1 . 1111 ---- 1111  *  2^52  =>  1  1111 ---- 1111. 

Now, because the double-precision 64-bit binary format strictly allots 52 bits for the fraction, no bit is left to carry when adding one more 1, so all we can do is set all the bits back to 0 and manipulate the exponent part:

  |--> This bit is implicit and persistent.
  |        
  1 . 1111 ---- 1111  *  2^52      =>  1  1111 ---- 1111. 
     |-- 52 bits --|                     |-- 52 bits --|

                          (+1)
                                     (radix point has no way to go)
  1 . 0000 ---- 0000  *  2^52 * 2  =>  1  0000 ---- 0000. * 2  
     |-- 52 bits --|                     |-- 52 bits --|

  =>  1 . 0000 ---- 0000  *  2^53 
         |-- 52 bits --| 

Now we have 9 007 199 254 740 992, and for numbers greater than it, the format scales the fraction by 2: each increment of 1 in the fraction part actually adds 2 to the value. That is why the double-precision 64-bit binary format cannot hold odd numbers when the number is greater than 9 007 199 254 740 992:

                            (consume 2^52 to move radix point to the end)
  1 . 0000 ---- 0001  *  2^53  =>  1  0000 ---- 0001.  *  2
     |-- 52 bits --|                 |-- 52 bits --|

So once the number grows beyond 9 007 199 254 740 992 * 2 = 18 014 398 509 481 984, the fraction is scaled by 4, so only multiples of 4 can be held:

input: 18014398509481984 + 1  output: 18014398509481984  // expected: 18014398509481985
input: 18014398509481984 + 2  output: 18014398509481984  // expected: 18014398509481986
input: 18014398509481984 + 3  output: 18014398509481988  // expected: 18014398509481987
input: 18014398509481984 + 4  output: 18014398509481988  // expected: 18014398509481988

What about numbers in the range [ 2 251 799 813 685 248, 4 503 599 627 370 496 )?

 1 . 0000 ---- 0001  *  2^51  =>  1 0000 ---- 000.1
     |-- 52 bits --|                |-- 52 bits  --|

The bit value 1 after the radix point is exactly 2^-1 (= 1/2 = 0.5). So for numbers less than 4 503 599 627 370 496 (2^52), one bit is available to represent the half part of an integer:

input: 4503599627370495.5   output: 4503599627370495.5  
input: 4503599627370495.75  output: 4503599627370496    // .75 is not representable at this magnitude; it rounds to the nearest representable value

For numbers less than 2 251 799 813 685 248 (2^51), two bits are available after the radix point, so quarters can also be held:

input: 2251799813685246.75   output: 2251799813685246.8  // expected: 2251799813685246.75 
input: 2251799813685246.25   output: 2251799813685246.2  // expected: 2251799813685246.25 
input: 2251799813685246.5    output: 2251799813685246.5

// If the digits exceed 17, JavaScript rounds the value when printing it,
// but the value itself is held correctly:

input: 2251799813685246.25.toString(2) 
output: "111111111111111111111111111111111111111111111111110.01"
input: 2251799813685246.75.toString(2) 
output: "111111111111111111111111111111111111111111111111110.11"
input: 2251799813685246.78.toString(2)   
output: "111111111111111111111111111111111111111111111111110.11"

And what is the available range of the exponent part? The format allots 11 bits for it. The complete format, from Wikipedia (for more details, please go there):

[Image: IEEE 754 double-precision floating-point format]

So to make the exponent part equal 2^52, we need to set e = 1075 exactly (the stored exponent is biased by 1023, and 1075 - 1023 = 52).
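
A sketch that reads those bits back out of a real double to confirm the stored exponent (assuming the standard layout: 1 sign bit, 11 exponent bits, 52 fraction bits):

var buffer = new ArrayBuffer(8);
var view = new DataView(buffer);
view.setFloat64(0, 9007199254740991);        // 2^53 - 1 = 1.11...1 * 2^52, written big-endian
var highBits = view.getUint32(0);            // sign bit + 11 exponent bits + top 20 fraction bits
var storedExponent = (highBits >>> 20) & 0x7FF;
console.log(storedExponent);                 // 1075
console.log(storedExponent - 1023);          // 52 (the unbiased exponent)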


Others may have already given the generic answer, but I thought it would be a good idea to give a fast way of determining it:

for (var x = 2; x + 1 !== x; x *= 2);
console.log(x);

Which gives me 9007199254740992 within less than a millisecond in Chrome 30.

It tests powers of 2 to find which one, when 1 is 'added' to it, equals itself.


Anything you want to use for bitwise operations must be between 0x80000000 (-2147483648 or -2^31) and 0x7fffffff (2147483647 or 2^31 - 1).

The console will tell you that 0x80000000 equals +2147483648, but 0x80000000 & 0x80000000 equals -2147483648.


Try:

maxInt = -1 >>> 1

In Firefox 3.6 it's 2^31 - 1.


At the moment of writing, JavaScript is receiving a new data type: BigInt. It is a TC39 proposal at stage 4 to be included in ECMAScript 2020. BigInt is available in Chrome 67+, Firefox 68+, Opera 54 and Node 10.4.0. It is underway in Safari, et al... It introduces numeric literals having an "n" suffix and allows for arbitrary precision:

var a = 123456789012345678901012345678901n;

Precision will still be lost, of course, when such a number is (maybe unintentionally) coerced to a number data type.
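
A short sketch of that coercion pitfall:

const big = 9007199254740993n;             // exact as a BigInt
console.log(Number(big));                  // 9007199254740992 -- rounded to the nearest double
console.log(big === BigInt(Number(big)));  // false -- the round trip loses the original value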

And, obviously, there will always be precision limitations due to finite memory, and a cost in terms of time in order to allocate the necessary memory and to perform arithmetic on such large numbers.

For instance, generating a number with a hundred thousand decimal digits will cause a noticeable delay before completion:

console.log(BigInt("1".padEnd(100000,"0")) + 1n)

...but it works.


I did a simple test with a formula, X-(X+1)=-1, and the largest value of X I can get to work on Safari, Opera and Firefox (tested on OS X) is 9e15. Here is the code I used for testing:

javascript: alert(9e15-(9e15+1));
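
The same check can be written as a plain function (distinguishesNeighbors is just an illustrative name); 9e15 passes because it is still below 2^53, while 2^53 itself fails:

function distinguishesNeighbors(x) {
    return x - (x + 1) === -1;   // only holds while x and x + 1 are distinct doubles
}

console.log(distinguishesNeighbors(9e15));            // true
console.log(distinguishesNeighbors(Math.pow(2, 53))); // false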

Let's get to the sources

Description

The MAX_SAFE_INTEGER constant has a value of 9007199254740991 (9,007,199,254,740,991 or ~9 quadrillion). The reasoning behind that number is that JavaScript uses double-precision floating-point format numbers as specified in IEEE 754 and can only safely represent numbers between -(2^53 - 1) and 2^53 - 1.

Safe in this context refers to the ability to represent integers exactly and to correctly compare them. For example, Number.MAX_SAFE_INTEGER + 1 === Number.MAX_SAFE_INTEGER + 2 will evaluate to true, which is mathematically incorrect. See Number.isSafeInteger() for more information.

Because MAX_SAFE_INTEGER is a static property of Number, you always use it as Number.MAX_SAFE_INTEGER, rather than as a property of a Number object you created.

Browser compatibility

(Browser compatibility table image omitted; see the MDN page.)


I write it like this:

var max_int = 0x20000000000000;
var min_int = -0x20000000000000;
(max_int + 1) === 0x20000000000000;  //true
(max_int - 1) < 0x20000000000000;    //true

Same for int32

var max_int32 =  0x80000000;
var min_int32 = -0x80000000;

In Google Chrome's built-in JavaScript, you can go up to approximately 2^1024 before the number is called Infinity.


Scato writes:

anything you want to use for bitwise operations must be between 0x80000000 (-2147483648 or -2^31) and 0x7fffffff (2147483647 or 2^31 - 1).

the console will tell you that 0x80000000 equals +2147483648, but 0x80000000 & 0x80000000 equals -2147483648

Hexadecimal literals are unsigned positive values, so 0x80000000 = 2147483648 - that's mathematically correct. If you want to make it a signed value, you have to right shift: 0x80000000 >> 0 = -2147483648. You can write 1 << 31 instead, too.
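
A quick console check of those statements:

console.log(0x80000000);      // 2147483648  (a hex literal is just an unsigned value)
console.log(0x80000000 >> 0); // -2147483648 (ToInt32 reinterprets the bit pattern as signed)
console.log(1 << 31);         // -2147483648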


In JavaScript, the largest integer that can be represented safely is 2^53 - 1.

However, bitwise operations are calculated on 32 bits (4 bytes), meaning that if you exceed 32-bit values in shifts you will start losing bits.
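
For example, the high bits are simply discarded when the value is truncated to 32 bits for the operation:

console.log(Math.pow(2, 33));           // 8589934592
console.log(Math.pow(2, 33) | 0);       // 0 -- 2^33 mod 2^32 is 0 after ToInt32 truncation
console.log((Math.pow(2, 33) + 5) | 0); // 5 -- only the low 32 bits survive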


Firefox 3 doesn't seem to have a problem with huge numbers.

1e+200 * 1e+100 will calculate fine to 1e+300.

Safari seems to have no problem with it either. (For the record, this is on a Mac if anyone else decides to test this.)

Unless I lost my brain at this time of day, this is way bigger than a 64-bit integer.


Node.js and Google Chrome both use 64-bit floating point values, whose exponent allows magnitudes up to nearly 2^1024, so:

Number.MAX_VALUE = 1.7976931348623157e+308
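
That constant sits just below 2^1024, which is where overflow to Infinity happens (a minimal check):

console.log(Number.MAX_VALUE);                                                 // 1.7976931348623157e+308
console.log(Number.MAX_VALUE === (2 - Math.pow(2, -52)) * Math.pow(2, 1023));  // true
console.log(Number.MAX_VALUE * 2);                                             // Infinity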