Saturday, March 2, 2013

16777217

If you ever find yourself wondering what the smallest positive integer is that cannot be represented exactly in a single-precision floating-point number (using the standard binary32 format defined in IEEE 754-2008), the answer is the title of this post.

This little bit of trivia is something I discovered while playing with the tutorial I wrote for the data structures class I'm TA'ing (that tutorial again, in case you missed the plug in my previous post, is available here). The reason it happens is that the binary exponent of 16777216 is 24, but only 23 bits are used to store the mantissa (the non-exponent part of the number). Those 23 stored bits come with an implicit leading 1, giving 24 significant bits in total: just enough to represent 2^24 exactly, but not 2^24 + 1.
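You can watch the rounding happen with a quick sketch in Python, whose standard struct module can pack a number into the 4-byte binary32 format and read it back:

```python
import struct

def to_float32(n):
    # Pack as a little-endian binary32 ('f'), then unpack it again.
    # Whatever comes back is the nearest value a single float can hold.
    return struct.unpack('<f', struct.pack('<f', float(n)))[0]

print(to_float32(16777216))  # 16777216.0 -- 2^24 survives exactly
print(to_float32(16777217))  # 16777216.0 -- 2^24 + 1 rounds back down
print(to_float32(16777218))  # 16777218.0 -- even values are still fine
```

Note that 16777218 round-trips fine: above 2^24 a single float can only land on even integers.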

For a double-precision floating-point number, 52 bits are used to store the mantissa, so the integers from -9007199254740992 to 9007199254740992 can be exactly represented. That's more integers than you can fit in an integer type! At least, it's more than a normal 32-bit integer can hold, which in retrospect is kind of obvious since 32 is smaller than 52.
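Since Python's own floats are IEEE 754 doubles, you can check this limit directly, no packing required:

```python
# 2^53 = 9007199254740992 is the edge of the exactly-representable integers.
LIMIT = 2**53

print(float(LIMIT) == LIMIT)          # True: 2^53 itself is exact
print(float(LIMIT - 1) == LIMIT - 1)  # True: everything below is exact
print(float(LIMIT + 1) == LIMIT)      # True: 2^53 + 1 rounds back to 2^53
```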

Here's another equally nerdy bit of trivia: you can toggle an ASCII letter between lowercase and uppercase by flipping a single bit: the one with value 32 (bit 5, if you number the bits from zero). This works because the value of each lowercase letter is exactly 32 plus the value of the corresponding uppercase letter. Since the alphabet contains only 26 letters, both cases fit in blocks spaced exactly 32 apart, and I assume this spacing was designed deliberately to make case transformations easier. It's hard to imagine a situation in which switching the case of a letter would be so important that it would require this level of optimization, but the ASCII table was defined a long time ago and perhaps it was necessary for Ye Olde Computers.
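A minimal sketch of the trick in Python, using XOR to flip just that one bit:

```python
def toggle_case(c):
    # XOR with 0x20 (32) flips bit 5: 'a' (97) <-> 'A' (65), and so on.
    return chr(ord(c) ^ 0x20)

print(toggle_case('a'))  # A
print(toggle_case('Z'))  # z
```

(A real implementation would first check that the character is actually a letter; flipping bit 5 of a digit or punctuation mark gives you a different character entirely.)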

In summary, I now know far more about data types than I've ever found necessary before. This is what happens when I try to teach stuff.

PS. If you're about to say "the exponent of 16777216 isn't 24, it's 7, dummy" that's because you are thinking in base ten:
16777216 = 1.6777216  * 10^7
when it's actually calculated in base two:
16777216 = 1.0000000 * 2^24

PPS. If you're thinking "52 bits can't represent the integers from -9007199254740992 to 9007199254740992" then you're right. The 52 stored mantissa bits come with an implicit leading 1, which gives 53 bits of precision, and the sign bit (which isn't counted in the mantissa) covers the negative half of the range. Zero is a special case: it gets its own encoding, in which every bit other than the sign bit is off, so there are really two zeros, +0 and -0, and they compare equal. These notes go into more detail.
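You can peek at the actual bit patterns with Python's struct module to see the two zeros:

```python
import struct

# A double is 8 bytes. 0.0 is the all-zero bit pattern, and -0.0
# differs only in the sign bit (the high bit of the last byte here,
# since we're packing little-endian).
print(struct.pack('<d', 0.0).hex())   # 0000000000000000
print(struct.pack('<d', -0.0).hex())  # 0000000000000080
print(0.0 == -0.0)                    # True
```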

PPPS. For anyone who writes code in JavaScript, one of its quirks is that it stores all numbers as doubles. This means that JavaScript has full coverage of the integers from -9007199254740992 to 9007199254740992. Integers of larger magnitude can still be exactly represented, but above 2^53 you need to start counting in twos (and once you hit 2^54, you'll need to count in fours, and so on).
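The same counting-in-twos behavior is easy to poke at from Python, whose floats are also doubles:

```python
# Above 2^53, a double can only land on even integers...
print(float(2**53 + 1) == float(2**53))  # True: the odd value collapses down
print(float(2**53 + 2) == 2**53 + 2)     # True: even values are still exact

# ...and above 2^54, only on multiples of four.
print(float(2**54 + 2) == float(2**54))  # True
print(float(2**54 + 4) == 2**54 + 4)     # True
```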