
I was wondering whether there has been some work on an alternative to floating point numbers where the number is simply represented as an exponent (and a sign bit). It would be similar to floating point numbers, except that the mantissa would be skipped, and the base b would (usually) not be 2.

As a consequence, the only numbers representable would be some powers of b.

Here is a simple example: let b be 2^(2^-4) = 2^(1/16), and let's use 8 bits for the representation. The first bit is used for the sign, the remaining 7 for the exponent, which is in two's complement. Then

00000000 represents (2^(2^-4))^0 = 1
00000001 represents (2^(2^-4))^1 ≈ 1.044
10000000 represents -1
01000001 represents (2^(2^-4))^-63 ≈ 0.065
10111111 represents -(2^(2^-4))^63 ≈ -15.32
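The examples above can be checked with a small decoder. This is a sketch, not part of the question's proposal: `decode` is a hypothetical helper that unpacks the sign bit and 7-bit two's-complement exponent described above.

```python
def decode(byte):
    # Interpret an 8-bit logarithmic value: one sign bit, then a 7-bit
    # two's-complement exponent e; the value is +/- b**e with b = 2**(1/16).
    sign = -1 if byte & 0x80 else 1
    e = byte & 0x7F
    if e >= 64:          # 7-bit two's complement: patterns 64..127 are negative
        e -= 128
    return sign * (2 ** (2 ** -4)) ** e

print(decode(0b00000000))   # 1.0
print(decode(0b00000001))   # ≈ 1.044
print(decode(0b10000000))   # -1.0
print(decode(0b01000001))   # ≈ 0.065
print(decode(0b10111111))   # ≈ -15.32
```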

Note that special cases for 0, NaN, etc., could be added.

The representation has certain advantages over the usual floating-point representation. For example, multiplication becomes addition of exponents, and the representable numbers are more smoothly distributed. One disadvantage is the cost of addition (the implementation I came up with uses a binary tree, which could be fast when the tree is small and implemented in hardware).
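The multiplication claim is easy to illustrate. A minimal sketch, using hypothetical (sign, exponent) pairs rather than packed bytes; exponent overflow and the special cases are ignored:

```python
# A value is represented as (sign, exponent), meaning sign * b**exponent.
# Multiplying needs no multiplier: multiply the signs, add the exponents.
def mul(x, y):
    (sx, ex), (sy, ey) = x, y
    return (sx * sy, ex + ey)   # overflow/saturation of the exponent omitted

b = 2 ** (2 ** -4)
x, y = (1, 5), (-1, 3)          # b**5 and -(b**3)
s, e = mul(x, y)                # -> (-1, 8), i.e. -(b**8)
assert abs(s * b ** e - (b ** 5) * (-(b ** 3))) < 1e-12
```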

Any information related to this representation would be welcome (whether it has been considered, why it would be bad, its name if it has one, etc).

How would this be useful if you need a number which isn't an integer power of b? I don't see how such a system could gain much traction. It isn't a general scheme for representing real numbers. - John Coleman
@JohnColeman It's similar to floats. They can only represent a finite number of reals. Still, they're useful for computation. Note that the precision can be tuned through b; a b closer to 1 increases it, but decreases the expressible range. This representation is as efficient as normal floats in the sense that with the same number of bits, it's possible to represent a similar range with a similar precision (by choosing b appropriately). - Stephane Bersier
Technically, this may be at least partly off-topic for Stack Overflow, since it asks for off-site resources. But it is not uninteresting. But I think it would be more interesting if you showed concrete use cases. I am not aware of work along these lines. - Eric Postpischil
Does addition, within bounds, depend only on the difference of the exponents? Consider b^x + b^y = (b^0 + b^(y-x))•b^x, so knowing 1+b^z for all z makes it easy to compute b^x + b^y. (That is for both elements positive; a similar result holds for all combinations of signs.) - Eric Postpischil
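The identity in that comment can be checked in a few lines. This is a toy verification, not a hardware design; a real implementation would presumably replace the `log` call with a small lookup table of log_b(1 + b^z):

```python
import math

B = 2 ** (2 ** -4)   # the base from the question

def add_exponents(x, y):
    # b**x + b**y = b**x * (1 + b**(y - x)); taking log base b gives
    # x + log_b(1 + b**(y - x)), so addition depends only on y - x.
    z = y - x
    return x + math.log(1 + B ** z, B)

# Check that summing via exponents matches direct evaluation.
e = add_exponents(3, 5)
assert abs(B ** e - (B ** 3 + B ** 5)) < 1e-12
```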
@EricPostpischil Thanks for pointing that out. I removed the reference to links. I hope my question will not be suspended! The use cases are similar to floats, as far as I can see. If they could be implemented efficiently, which I think has more chances to happen for low numbers of bits, then the smooth distribution and its simplicity could give it an edge over traditional floats. - Stephane Bersier

1 Answer


This is a logarithmic number system: “LNS can be considered as a floating-point number with the significand being always equal to 1.”