Page 2 of 2
Re: Comparison of the different floating point formats.
Posted: Wed May 29, 2019 3:05 pm
by barrym95838
Thanks for your helpful participation, Ed.
FWIW, Applesoft "agrees" more or less with your Version 4r32 result:
Code:
]LIST
10 E = 0
20 FOR I = 15 TO 17 STEP 2
30 FOR J = 1 TO I
40 A = J / I
50 F = A * A / A - A
60 E = E + ABS (F)
70 NEXT
80 NEXT
90 PRINT E
]RUN
1.26601663E-09
]PR#0
Re: Comparison of the different floating point formats.
Posted: Wed May 29, 2019 4:00 pm
by drogon
I've been looking at this thread with interest as I have a need for a single-precision FP format, preferably IEEE 754 to interface with a system where I have some libraries that use that format, but not any actual low-level code like add, multiply and so on ...
I got some slightly surprising results from this test program, which does some divisions and multiplications and sums up the absolute errors:
Code:
10 E=0
20 FOR I = 15 TO 17 STEP 2
30 FOR J=1 TO I
40 A=J/I
50 F=A*A/A-A
60 E=E+ABS(F)
70 NEXT
80 NEXT
90 PRINT E
The oddity is that an older version of Acorn's BBC Basic gets a smaller result, and therefore seems to be more accurate than the newer:
Code:
1982 version 2 8.14907253E-10
1988 version 4r32 1.26601662E-9
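For comparison, the same loop is easy to run under IEEE 754 in Python, whose floats are doubles; a `struct` round-trip can squeeze each intermediate result through single precision as well. This is a sketch for cross-checking, not an exact model of any of the Basics above (in particular, the single-precision simulation rounds each double result to single, which matches true single-precision arithmetic for multiply and is very close for divide):

```python
import struct

def f32(x):
    """Round an IEEE double to the nearest IEEE single and back."""
    return struct.unpack('f', struct.pack('f', x))[0]

e_double = e_single = 0.0
for i in (15, 17):
    for j in range(1, i + 1):
        a = j / i                      # IEEE double division
        e_double += abs(a * a / a - a)
        b = f32(a)
        # simulate single precision by rounding every intermediate
        e_single += abs(f32(f32(f32(b * b) / b) - b))

print(e_double, e_single)
```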
I was curious about this as I want to use 32-bit IEEE 754 for a project (rather than Woz-style FP, etc.), so gave it a go on an ATmega, which uses that format (or is supposed to), and it yielded:
Just to compare, in double precision IEEE 754 using my desktop i3:
I can reproduce 1.26601662E-9 with BBC Basic 4, and 8.14907253E-10 using older BBC Basics.
No real surprise, I guess, that the 5-byte format BBC Basic uses shows more precision than the 4-byte IEEE format.
ehBasic (4-byte floats) yields:
and Applesoft (5-byte floats AIUI):
Interesting observations here: ehBasic and IEEE 754 appear to give the same results, although I've only done some superficial checks. It also appears that while Microsoft's format (MBF) is similar in terms of bits used (1+8+23), the binary representation is different, and IEEE 754 calls for more bits to be used in intermediate calculations. MBF preceded IEEE 754, and for 8-bit computers it was probably good enough at the time.
MS's 5-byte format wasn't generally used on the 6502, as implementations ran out of code space in the typical 8K ROMs of the day; Applesoft, with its 12K ROM, is a notable exception. (All of which I've gleaned from Wikipedia and a few other sites this afternoon.)
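To make that representational difference concrete, here's a sketch of decoding the fields of a normalised value in each format. It decodes fields only, not raw bytes, and the MBF details (exponent stored first with bias 128, significand in [0.5, 1) with the leading 1 implicit) are my reading of the published format descriptions, not tested against a real MS Basic:

```python
def decode_ieee(sign, exp, frac23):
    """IEEE 754 single: (-1)^s * 1.frac * 2**(exp - 127); sign bit stored first."""
    return (-1) ** sign * (1 + frac23 / 2**23) * 2.0 ** (exp - 127)

def decode_mbf(sign, exp, frac23):
    """MBF single: (-1)^s * 0.1frac * 2**(exp - 128); exponent byte stored
    first, significand in [0.5, 1), leading 1 implicit as in IEEE."""
    return (-1) ** sign * (0.5 + frac23 / 2**24) * 2.0 ** (exp - 128)

# 1.0 needs different exponent fields in the two formats:
print(decode_ieee(0, 127, 0), decode_mbf(0, 129, 0))   # 1.0 1.0
```

Note the bit budgets really are the same (1+8+23); only the field meanings and byte order differ.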
However, Applesoft's 5-byte format is virtually identical to BBC Basic's 5-byte format in terms of precision on this trivial test (it might actually be identical, masked by rounding when printing; I'd need to look at the actual binary values to check).
So - for me, I'm still after a native 6502 implementation of IEEE 754 for single precision numbers, but the above has been an interesting little investigation.
-Gordon
Re: Comparison of the different floating point formats.
Posted: Wed May 29, 2019 4:14 pm
by BigEd
Oh, that's quite interesting in itself - I think Applesoft has rounded up incorrectly. I think the number we are printing must be
Code:
> PRINT 1392/1024/1024/1024/1024
1.26601662E-9
which is exactly
.000000001266016624867916107177734375
and so shouldn't be printed as 1.26601663E-9.
(One thing about IEEE and Acorn's floating point formats is that they get one extra bit for free: the mantissa, unless zero, is always normalised to have a leading 1, which therefore need not be stored. I'm not sure about other Basics.)
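That exact dyadic value, and its correct rounding to 9 significant digits, can be double-checked with Python's fractions and decimal modules:

```python
from fractions import Fraction
from decimal import Decimal, getcontext

x = Fraction(1392, 1024**4)   # 1392 * 2**-40 -- a dyadic fraction, so exact
assert x == Fraction('0.000000001266016624867916107177734375')

getcontext().prec = 9         # correctly round to 9 significant digits
print(Decimal(x.numerator) / Decimal(x.denominator))   # 1.26601662E-9
```

The digit after the ninth is a 4, so the correctly-rounded 9-digit result ends in ...662, not ...663.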
I think I may have found the difference in 4r32: when A=1/17 (or 2/17, 4/17, 8/17 or 16/17) then although A*A is computed identically, A*A/A comes out different by 2^-36. Probably one unit in the last place, at a guess. (Edit: confirmed. Mantissa of 1/17 is 70F0F0F1 in hex - it's a repeating number and the missing F rounds up to 1 - and the mantissa of the inaccurate result is coming out as 70F0F0F2.)
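The repeating mantissa is easy to confirm: scale 1/17 into [0.5, 1) and round the significand to 32 bits, and nearest rounding turns the trailing ...F0 into ...F1 (the top bit is then replaced by the sign bit to give the stored 70F0F0F1):

```python
from fractions import Fraction

sig = Fraction(1, 17) * 2**4      # 16/17: scaled into [0.5, 1)
assert Fraction(1, 2) <= sig < 1

m = round(sig * 2**32)            # round the significand to 32 bits
print(hex(m))                     # 0xf0f0f0f1
```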
Re: Comparison of the different floating point formats.
Posted: Wed May 29, 2019 10:44 pm
by barrym95838
Thanks for your research efforts, drogon. Microsoft's 40-bit floats were more prevalent than you imply, though, at least in raw population numbers. I'm relatively certain that Commodore had been using them in all of their 8-bit machines (VIC-20, C=64, etc.), all the way back to the first PETs in 1977. And they sold millions of those little machines!
Re: Comparison of the different floating point formats.
Posted: Thu May 30, 2019 5:20 am
by dclxvi
SANE (Standard Apple Numerics Environment) uses IEEE 754; there is a 6502 implementation. It's also in the Apple IIgs Toolbox, but the implementation (the one in ROM, anyway) just switches to 8-bit mode and executes the 6502 code, if memory serves.
Apple Assembly Line published a series of articles implementing an 18-digit BCD floating point library (DP18), starting in the May 1984 issue. They also sold a 21-digit binary floating point library called DPFP.
http://www.txbobsc.com/aal/index.html
The floating point format used by KIM Focal is the same as the Rankin/Woz format (by coincidence, I suspect).
There's also this discussion of the Rankin/Woz routines from 15 years ago:
viewtopic.php?f=2&t=495
Re: Comparison of the different floating point formats.
Posted: Thu May 30, 2019 3:27 pm
by whartung
SANE (Standard Apple Numerics Environment) uses IEEE 754; there is a 6502 implementation. It's also in the Apple IIgs Toolbox, but the implementation (the one in ROM, anyway) just switches to 8-bit mode and executes the 6502 code, if memory serves.
When you think about it, this makes complete sense. I don't think 8-bit 6502 code on an '816 is at any particular disadvantage in this case. The workload is probably dominated by 8-bit data movement, beyond any potential benefit you might get from using 16-bit ADC and SBC.
Re: Comparison of the different floating point formats.
Posted: Thu May 30, 2019 3:52 pm
by BigEd
Re: Comparison of the different floating point formats.
Posted: Fri May 31, 2019 8:42 pm
by BigEd
Hmm, I think I might have learnt something - see the thread over on stardot where I look a little further at Basic 4 vs Basic 2:
https://stardot.org.uk/forums/viewtopic ... 20#p238620
The bottom line is that my efforts to make a figure of merit by computing A*A/A-A might well be misguided: a non-zero answer for various values of A could be correct, for correctly-rounded finite-precision computation.
That said, I might still be missing something.
Re: Comparison of the different floating point formats.
Posted: Sat Jun 01, 2019 4:56 pm
by Chromatix
True, but generally smaller errors are still better. Correct rounding minimises error at each step.
Re: Comparison of the different floating point formats.
Posted: Sat Jun 01, 2019 6:55 pm
by BigEd
Careful with that - double rounding can introduce error. There's a particular case where it's beneficial to round to odd, where the normal choice is to round to even. I can't find the article I read about this, but here's a similar one:
https://www.exploringbinary.com/double- ... nversions/
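To make the hazard concrete, here's a toy example in exact Fraction arithmetic (my own construction, not the one from the article): rounding 1.01011 binary to 2 fraction bits directly gives 1.25, but going via a 3-bit intermediate with round-half-even at both steps gives 1.5. Rounding the intermediate to odd, with two extra bits of intermediate precision, restores the correct answer:

```python
from fractions import Fraction

def round_half_even(x, bits):
    """Round x >= 0 to a multiple of 2**-bits, ties to even."""
    scaled = x * 2**bits
    n = scaled.numerator // scaled.denominator
    rem = scaled - n
    if rem > Fraction(1, 2) or (rem == Fraction(1, 2) and n % 2 == 1):
        n += 1
    return Fraction(n, 2**bits)

def round_to_odd(x, bits):
    """Truncate x >= 0 to a multiple of 2**-bits; if inexact, force the low bit to 1."""
    scaled = x * 2**bits
    n = scaled.numerator // scaled.denominator
    if scaled != n and n % 2 == 0:
        n += 1
    return Fraction(n, 2**bits)

x = Fraction(0b101011, 2**5)    # 1.01011 binary = 1.34375

direct  = round_half_even(x, 2)                         # 1.25 (correct)
twostep = round_half_even(round_half_even(x, 3), 2)     # 1.5  (double rounding!)
fixed   = round_half_even(round_to_odd(x, 4), 2)        # 1.25 (odd intermediate)
print(direct, twostep, fixed)
```

The first rounding lands exactly on a tie of the second, and ties-to-even then goes the wrong way; round-to-odd never creates a new tie, which is why it needs at least two extra bits in the intermediate format.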
Re: Comparison of the different floating point formats.
Posted: Wed Jun 05, 2019 7:26 pm
by barrym95838
Here's the Woz normalizer in the Apple ][ ROM:
Code:
F455: A5 F9 NORM1 LDA M1 HIGH-ORDER MANT1 BYTE.
F457: C9 C0 CMP #$C0 UPPER TWO BITS UNEQUAL?
F459: 30 0C BMI RTS1 YES, RETURN WITH MANT1 NORMALIZED
F45B: C6 F8 DEC X1 DECREMENT EXP1.
F45D: 06 FB ASL M1+2
F45F: 26 FA ROL M1+1 SHIFT MANT1 (3 BYTES) LEFT.
F461: 26 F9 ROL M1
F463: A5 F8 NORM LDA X1 EXP1 ZERO?
F465: D0 EE BNE NORM1 NO, CONTINUE NORMALIZING.
F467: 60 RTS1 RTS RETURN.
That CMP #$C0; BMI RTS1 is a pretty neat hack! I made my updated version even faster [Edit: at least faster for two or more shifts] by keeping M1 in A and X1 in Y for the loop, and I made it so X can point to either accumulator to minimize the need for SWAP:
Code:
norma:
ldx #fpa ; point to fpa.
normx:
ldy 0,x ; expx
beq normzz ;
lda 1,x ; high-order mantx byte.
norm2:
cmp #$c0 ; upper two bits equal? (neat!)
bmi normz ; no: done.
asl 3,x ; yes:
rol 2,x ; left shift mantx and
rol ;
dey ; decrement exp.
bne norm2 ; if exp > 2^-128 then loop.
normz:
sta 1,x ; store high-order mantx byte.
sty 0,x ; store expx and return.
normzz:
rts ;
;
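The CMP #$C0 trick can be modelled in a few lines of Python (my paraphrase of why it works, not anything from the listing itself): the two's-complement mantissa is normalised exactly when the top two bits of the high byte differ, i.e. when the byte lies in $40..$BF, and that is precisely the range for which the byte result of m1 - $C0 is negative:

```python
def is_normalized(m1):
    """Model of CMP #$C0 / BMI: the 8-bit result of m1 - $C0 is negative
    exactly when $40 <= m1 <= $BF, i.e. when the top two bits differ."""
    return ((m1 - 0xC0) & 0xFF) >= 0x80

# cross-check against the direct "top two bits unequal" test for every byte
assert all(is_normalized(m) == (((m >> 7) & 1) != ((m >> 6) & 1))
           for m in range(256))
```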