Page 2 of 3
Re: OT: Binary or decimal for floating point?
Posted: Thu Nov 01, 2018 9:19 am
by GARTHWILSON
Looking briefly at the HP-12C's design, it has a 10-decimal-digit mantissa, equivalent to about 33-34 bits in binary. This means it's incapable of accurately representing the 11-digit sums given in @whartung's example. I don't know what the precision of his machine was, but it can't possibly have been an HP-12C!
I have not studied the innards of HP's handhelds, but I think many of the models used three more digits in intermediate results than they did in memory storage and display. My HP-71 uses 12 digits normally but 15 in chained calculations where things are kept in its 64-bit Saturn processor registers rather than in variables.
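The effect of guard digits is easy to sketch with Python's decimal module (the precisions here are illustrative, not HP's actual internal formats):

```python
from decimal import Decimal, getcontext

# 10 significant digits throughout, like a plain 10-digit calculator:
getcontext().prec = 10
r10 = Decimal(7) / 3 * 3 - 7
print(r10)                # -1E-9: the rounding error is visible

# 13 digits for the intermediate result, then round back to 10 digits:
getcontext().prec = 13
q = Decimal(7) / 3 * 3    # 6.999999999999
getcontext().prec = 10
r13 = +q - 7              # unary + re-rounds q to 10 digits first
print(r13)                # 0E-9: the extra digits absorbed the error
```

The extra three digits don't make the stored result any wider; they just stop errors accumulating mid-calculation.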
Re: OT: Binary or decimal for floating point?
Posted: Thu Nov 01, 2018 9:26 am
by Chromatix
Yes, they did - but that only guarded against loss of precision during the calculation; the result was still limited to 10 significant digits.
Re: OT: Binary or decimal for floating point?
Posted: Thu Nov 01, 2018 9:32 am
by BigEd
Just for fun, I ran some tests, mostly on calculators:
Try x / 3 * 3 - x for various x
Code:
              7         8         10        11
hp 35         -1E-9     1E-9      -1E-9     0
hp 15c        -1E-9     1E-9      -1E-9     0
hp 35s        -1E-11    1E-11     -1E-11    0
hp 30b        -1E-11    1E-11     -1E-11    0
casio SL300   -1E-7     -2E-7     -1E-6     -1E-6    (1981)
4 banger      -1E-7     -2E-7     -1E-6     -1E-6    (2016)
casio fx115s  0         0         0         0
sharp EL556D  0         0         0         0
TI-1500       -1E-7     -2E-7     -1E-6     -1E-6
TI-55         -1E-10    1E-10     0         0
TI-57         -1E-10    -2E-10    -1E-9     -1E-9
Sinclair Scientific
              -1E-5     -2E-5     -1E-4     -2E-4
BBC Basic 2   -1.86E-9  0         0         0        (also Basic 4)
MSBasic C1P   0         0         0         0
(While I think this is a relatively good test for decimal, I'd try a bit harder for a test of binary floats.)
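For comparison, the same test in IEEE-754 double precision, via a quick Python sketch, comes out exactly zero in all four cases:

```python
# x/3*3 - x in IEEE-754 binary64: the rounding error in x/3 is small
# enough that re-multiplying by 3 rounds back to exactly x (the two
# borderline cases are ties, resolved by round-to-nearest-even).
for x in (7.0, 8.0, 10.0, 11.0):
    print(x, x / 3 * 3 - x)   # all print 0.0
```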
Re: OT: Binary or decimal for floating point?
Posted: Thu Nov 01, 2018 9:42 am
by BitWise
As for rounding difficulties: addition and subtraction aren't much of a problem, and multiplication by integers introduces no error so long as the word is wide enough. But as soon as you have multiplication by non-integers (as with the calculation of discounts or taxes), or division, or any kind of interest payment or yield calculation, you find that neither fixed nor floating point matches the behaviour of the real numbers; and currency insists on a finite representation, so it isn't a real number either.
The solution, I think, is not in computer arithmetic, but in accounting practices which define what the right answer is. And those practices had better be compatible with some kind of computer arithmetic!
In investment banking, factors like interest rates, yields, delta, volatility etc. are derived using floating-point calculations (https://en.wikipedia.org/wiki/Greeks_(finance)), but once they are applied to a monetary amount the results are usually held as some kind of decimal value (like Java's BigDecimal, SQL decimal columns, etc.) to ensure that the accuracy of the value is maintained during processing, especially when transferred between systems.
Banks like mine have hundreds of applications covering the stages of the trade life-cycle for the different products we deal in, plus the various reports that have to be generated; each trade may end up being represented in 10-20 of them. We try not to introduce errors and create reconciliation issues as data moves between them.
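The motivation is easy to demonstrate with Python's decimal module standing in for BigDecimal or a SQL decimal column (just a sketch, not a claim about how any particular bank's systems work):

```python
from decimal import Decimal

# Binary floating point can't represent 0.10 exactly, so cents drift:
total_f = sum([0.10] * 10)
print(total_f)                    # 0.9999999999999999, not 1.0

# A decimal type keeps the same sum of monetary amounts exact:
total_d = sum([Decimal("0.10")] * 10)
print(total_d)                    # 1.00
```

Once amounts drift even in the last place, every hand-off between systems becomes a potential reconciliation break, which is exactly what the decimal representation avoids.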
Re: OT: Binary or decimal for floating point?
Posted: Thu Nov 01, 2018 9:43 am
by Chromatix
The HiPER Calc Pro app on my smartphone gives -1E-92 for 7, 8, 10 if I enter it all in one go, but 0 if I enter it step by step (which causes it to recognise and handle repeating decimals in the intermediate results). It gives 0 by both methods for 11.
Re: OT: Binary or decimal for floating point?
Posted: Thu Nov 01, 2018 9:56 am
by BigEd
Here's x / 5 * 5 - x for x = 81:
Code:
MSBasic C1P   7.63E-6
BBC Basic 2   2.98E-8   (also BBC Basic 4)
I'm pretty sure Basic 4 introduced some tweaks to improve the accuracy of arithmetic, but I haven't seen that difference in action yet.
It's common for non-HP calculators to round towards integers in particular, whereas floating-point packages normally treat all numbers alike. It's easy enough with Intel x87 floating point to get different answers depending on whether intermediates stay in the internal 80-bit format or spill out to 64-bit doubles. I have a feeling AMD gave different results for the same code at one time; perhaps it lacked the 80-bit internals. Later x86 floating point doesn't have the 80-bit format, IIRC.
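For what it's worth, IEEE-754 double precision happens to pass this case too (a quick Python check, offered for comparison with the 40-bit BASIC results above):

```python
# x/5*5 - x for x = 81 in IEEE-754 binary64. The quotient 16.2 must be
# rounded, but the error is small enough that multiplying back by 5
# rounds to exactly 81.0 again.
x = 81.0
print(x / 5 * 5 - x)   # 0.0
```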
Re: OT: Binary or decimal for floating point?
Posted: Thu Nov 01, 2018 10:02 am
by Chromatix
Modern x86 and AMD64 CPUs still include full x87 support with the 80-bit internal format and all the transcendental functions. They *also* include SSE and SSE2 instructions which have IEEE single and double precision natively, and support only the basic functions in hardware.
Also, I believe the improvements in BBC BASIC IV were to the transcendental functions' accuracy and the basic operations' speed. There probably isn't any difference in the basic operations' accuracy. However, BBC BASIC VI introduced IEEE-754 double precision as a replacement for the old 40-bit format previously used.
Re: OT: Binary or decimal for floating point?
Posted: Thu Nov 01, 2018 10:19 am
by BigEd
I just had a vague recollection that there's one extra bit of accuracy in multiplication or division in Basic 4, or something like that. It's difficult to get rounding right!
Re: OT: Binary or decimal for floating point?
Posted: Thu Nov 01, 2018 10:24 am
by BigEd
Free42 by Thomas Okken is a nice RPN calculator app. It gives -1E-33, 1E-33, -1E-33 and zero.
Re: OT: Binary or decimal for floating point?
Posted: Thu Nov 01, 2018 1:03 pm
by BigEd
OK, found an example, where BBC Basic 4 calculates zero, correctly, and Basic 2 calculates -1:
2415919106/2147483652*2147483652-2415919106
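Out of curiosity, the same expression can be tried in IEEE-754 double precision from Python (a sketch only; I haven't worked through the rounding by hand, so no particular result is claimed):

```python
# BigEd's stress case: the divisor has enough significant bits that the
# quotient must be rounded, so this probes how the error propagates
# through the re-multiplication.
a, b = 2415919106, 2147483652
print(a / b * b - a)
```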
Re: OT: Binary or decimal for floating point?
Posted: Thu Nov 01, 2018 1:22 pm
by Chromatix
Examining these examples with IEEE-754 binary half-precision format, we start with:
Code:
3 = 0 10000 1000000000
7 = 0 10001 1100000000
8 = 0 10010 0000000000
10 = 0 10010 0100000000
11 = 0 10010 0110000000
Dividing the four initial values by 3 gives, with guard bits before rounding:
Code:
7/3 = 0 10000 0010101010 10 1
8/3 = 0 10000 0101010101 01 1
10/3 = 0 10000 1010101010 10 1
11/3 = 0 10000 1101010101 01 1
In each case, the final sticky bit is set because the remainder is non-zero after producing all the mantissa and guard bits. The rounded values are:
Code:
7/3 = 0 10000 0010101011
8/3 = 0 10000 0101010101
10/3 = 0 10000 1010101011
11/3 = 0 10000 1101010101
If we now re-multiply these results by 3, we get:
Code:
7/3*3 = 0 10001 1100000000 10 0 <-- a tie case for rounding, resolved in favour of the even mantissa, which is exactly 7.0.
8/3*3 = 0 10001 1111111111 10 0 <-- another tie case for rounding, resolved by incrementing the mantissa *and* exponent to get exactly 8.0.
10/3*3 = 0 10010 0100000000 01 0 <-- this unambiguously rounds down to exactly 10.0.
11/3*3 = 0 10010 0101111111 11 0 <-- this unambiguously rounds up to exactly 11.0.
So, with just 16 bits, IEEE-standard floating-point binary arithmetic passes an accuracy test that many pre-IEEE and decimal floating-point implementations fail with 32, 40 or even more bits.
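The walkthrough above can be checked mechanically in Python, which can round a double to binary16 via struct's 'e' format (a sketch; each individual operation here is safe from double-rounding because the exact products fit easily in a double):

```python
import struct

def to_half(v):
    """Round a Python float to IEEE-754 binary16 and back (round-to-nearest-even)."""
    return struct.unpack('<e', struct.pack('<e', v))[0]

# x/3 rounded to half precision, re-multiplied by 3, rounded again:
for x in (7.0, 8.0, 10.0, 11.0):
    q = to_half(x / 3)             # the binary16 quotient
    print(x, to_half(q * 3) - x)   # 0.0 in every case, as derived above
```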
Re: OT: Binary or decimal for floating point?
Posted: Thu Nov 01, 2018 1:26 pm
by BigEd
As I say, not the best test for binary. We need more bits in the divisor, I think, to stress the rounding a bit more. (Which is not to say that IEEE isn't well-designed, or a big advance on predecessors.)
Re: OT: Binary or decimal for floating point?
Posted: Thu Nov 01, 2018 1:31 pm
by Chromatix
Quite true, but I think we'll find that binary floating-point passes more tests than decimal on average. One reason for this is that decimal arithmetic changes precision sharply by factors of 10 at each exponent increment, while binary arithmetic changes precision by factors of 2. Binary is also more efficient with its use of storage space, being able to store 1024 distinct values in 10 bits where BCD requires 12 bits for 1000 distinct values.
So I stand by my assertion that decimal arithmetic is useful only where it is legally mandated. For everything else, use binary.
Re: OT: Binary or decimal for floating point?
Posted: Thu Nov 01, 2018 1:33 pm
by BigEd
See the head post!
Re: OT: Binary or decimal for floating point?
Posted: Thu Nov 01, 2018 3:34 pm
by 1024MAK
Tests on what I had to hand:-
x / 3 * 3 - x for various x
Code:
7                    8                    10                   11
0                    0                    0                    0                Calculator + app on iPad mini without intermediate results
-1.000000083E-09     1.000000083E-09      -1.000000083E-09     0                Calculator + app on iPad mini with intermediate results
0                    0                    0                    0                Sci:Pro Calc app on iPad mini with intermediate results
0                    0                    0                    0                Calc app on Psion 5mx
0                    0                    0                    0                Casio PB-410F without intermediate results
-1E-09               1E-09                -1E-09               0                Casio PB-410F with intermediate results
-0.00000000001       -0.00000000002       -0.00000000001       -0.00000000001   Cheap 12-digit calculator from Wilco (from China)
0                    0                    0                    0                TI83 Plus
0                    0                    0                    0                Casio fx-451
-0                   -0                   -0.000001            -0.000001        Casio Personal M-1 (8-digit display)
8.881784197E-16      0                    0                    0                Calc app on Psion Workabout without intermediate results
-1.00000008274E-11   9.99911264898E-12    -1.00008890058E-11   0                Calc app on Psion Workabout with intermediate results
Mark