If anyone can find a finite binary representation for 0.3 or 0.1, let me know. While you're at it, publish a paper in ACM or IEEE Computer, and earn yourself a position at some prestigious university. Even with *80* bits of precision, you can't represent either number exactly, so when printing values you have to introduce a fudge factor to make the display look right. Thankfully, as far as I know, Python doesn't bother with such a fudge factor, so it exposes floating point's warts. For example:
Code: Select all
[kc5tja@capella ~]$ python
Python 2.6 (r26:66714, Jan 22 2009, 12:45:42)
[GCC 4.1.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> a=0.1
>>> a
0.10000000000000001
>>> b=0.3
>>> b
0.29999999999999999
BCD has the benefit that it can represent numbers which binary cannot, invalidating the argument that binary floating point is always more precise than an equivalently-sized BCD floating point quantity. Representing 0.3 in BCD requires only two bytes of space, but an exact binary representation would require an infinite number of bits.
The reverse does not hold: because 1/2 = 5/10, every finite binary fraction also has a finite decimal expansion, so there is no number that binary can represent compactly but BCD cannot.
Note that this explicitly doesn't cover rational representation of numbers.
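You can see the same contrast from within Python: the standard decimal module does base-10 arithmetic (BCD-like in spirit, if not in encoding), so it holds 0.3 exactly, while constructing a Decimal from the binary float 0.3 reveals the nearest binary value the float actually stores. A quick sketch:

```python
from decimal import Decimal

# Decimal arithmetic carries base-10 digits exactly, as BCD would.
exact = Decimal('3') / Decimal('10')
print(exact)            # prints 0.3, exactly

# Converting the binary float 0.3 to Decimal exposes the nearest
# binary value the float really holds -- a long string of digits.
print(Decimal(0.3))
```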
Binary floating point works exactly like its decimal equivalent, more commonly known as "scientific notation." The trick is to recognize the binary place values after the binary point:
Code: Select all
  8    4    2    1   0.5   0.25  0.125
 =====================================
 2^3  2^2  2^1  2^0  2^-1  2^-2  2^-3
So representing the number 8, you could express it as 1000, which is the same as 1000.0e+0 (where "e" refers to a power of two, not a power of ten). This is the same as writing 1.000e+3 (notice the movement of the binary point and its relationship with the exponent). Hence, the number 8 can be expressed as the tuple <3, 0>, with 3 being the exponent and 0 being the mantissa bits, sans the leading '1', which is dropped to save one bit of space.
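As a sketch of that normalise-and-tuple step (the helper name and the 40-bit mantissa width are my own illustrative choices, not a real hardware format), Python's math.frexp does the normalisation for you:

```python
import math

# Sketch: decompose a positive float into the <exponent, mantissa-bits>
# tuple described above. math.frexp gives x == m * 2**e with 0.5 <= m < 1;
# shifting to the 1.xxx form costs one off the exponent, and the leading
# 1 bit is then implicit, so we keep only the fraction bits after it.
def to_tuple(x, bits=40):
    m, e = math.frexp(x)         # x == m * 2**e, 0.5 <= m < 1
    e -= 1                       # now x == (2*m) * 2**e, 1 <= 2*m < 2
    frac = 2.0 * m - 1.0         # strip the implicit leading 1 bit
    return e, int(frac * (1 << bits))

print(to_tuple(8.0))                      # (3, 0)
print('<%d, $%010X>' % to_tuple(1.75))    # <0, $C000000000>
```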
Conversely, 1/2 is expressed as 0.5 in decimal, or 0.1 in binary, because 2^-1 = 1/(2^1) = 1/2. Likewise, 0.25 is 0.01 in binary, because 2^-2 = 1/(2^2) = 1/4 = 0.25. And so on. So:
Code: Select all
Decimal   Binary   Float      Bytes
0.5       0.1      1.0e-1     <-1, $0000000000>
0.25      0.01     1.0e-2     <-2, $0000000000>
0.75      0.11     1.1e-1     <-1, $8000000000>
1.0       1.0      1.0e+0     < 0, $0000000000>
1.25      1.01     1.01e+0    < 0, $4000000000>
1.5       1.1      1.10e+0    < 0, $8000000000>
1.75      1.11     1.11e+0    < 0, $C000000000>
2.0       10.0     1.0e+1     < 1, $0000000000>
You represent the fractional part of a number as a sum of pure binary fractions, just as you'd represent the decimal portion of a number as a sum of fractions of powers of ten.
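A minimal sketch of that sum-of-fractions expansion (the function name is mine): repeatedly doubling the fraction shifts the binary point one place right, peeling off one binary digit per step.

```python
# Sketch: extract the binary digits of a fraction 0 <= x < 1 by
# repeated doubling; each doubling moves the binary point one place
# right, so the digit landing left of the point is the next bit.
def frac_bits(x, limit=16):
    bits = []
    while x and len(bits) < limit:
        x *= 2.0
        bit = int(x)             # the digit now left of the binary point
        bits.append(bit)
        x -= bit
    return bits

print(frac_bits(0.75))   # [1, 1] -> 0.11 in binary
print(frac_bits(0.1))    # never terminates exactly; cut off at 16 bits
```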
To answer the OP's question, I don't know of a clever way to convert an ASCII representation into a floating point number, but you can always brute-force the solution. Given a number like 3.14159, you can break it up into an integer (3) and a fraction (14159 / 100000). Converting an integer into a floating point number is a trivial exercise in bit shifting, so I won't cover it here. Evaluating the fraction in floating point yields 0.14159, to which you add the 3.0 to get the final result, 3.14159, in binary floating point.
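Here's a minimal sketch of that brute-force approach (the function name is mine; positive numbers only, no exponents, no error handling):

```python
# Sketch: brute-force ASCII -> float as described above. Accumulate the
# integer digits, then evaluate the fraction as n / 10**digits and add.
def parse_float(s):
    int_part, _, frac_part = s.partition('.')
    value = 0.0
    for ch in int_part:                      # integer part, digit by digit
        value = value * 10.0 + (ord(ch) - ord('0'))
    if frac_part:                            # fraction = n / 10**len(frac)
        n = 0
        for ch in frac_part:
            n = n * 10 + (ord(ch) - ord('0'))
        value += n / (10.0 ** len(frac_part))
    return value

print(parse_float('3.14159'))
```

Note the rounding caveat: dividing 14159 by 100000.0 rounds once, and adding 3.0 may round again, so the result can differ from a correctly-rounded conversion in the last bit or two.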
Hope this helps.