Hello, friends!
In an earlier thread I indicated that I would be revisiting the 32-bit floating point package composed by Roy Rankin and the indomitable Steve Wozniak over four decades ago. It has been an interesting journey so far, and it is far from complete, but I wanted to share a snapshot of my work for discussion and critique.
Please be brutally honest. I have much more "in progress" but it's not ready for prime-time.
Code:
...
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
; floatyxa: convert a 24-bit signed integer in Y:X:A
; (H:M:L) into a normalized 32-bit floating point
; number in facc.
; Expects: the high byte in register Y, the mid byte
; in register X and the low byte in register A.
; Returns: the normalized value in facc.
; 19 bytes
floatyxa:
sty facc+1 ;
stx facc+2 ; initialize significand
sta facc+3 ;
ldx #facc ; point to facc
ora facc+2 ; is significand zero?
ora facc+1 ;
beq normz ; yes: clear exp and return
tya ; no: load sig.h,
ldy #$96 ; init exp to 2^22,
bne norm2 ; normalize
;
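; a quick illustration (untested): to convert +1000
; ($0003e8) to floating point,
;     ldy #$00       ; high byte
;     ldx #$03       ; mid byte
;     lda #$e8       ; low byte
;     jsr floatyxa
; which should leave facc = $89 $7d $00 $00 (1000.0),
; if my arithmetic is right.
;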
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
; fsub: subtract two floating point numbers.
; Expects: the minuend in farg and the subtrahend in
; facc; both should be normalized prior to calling
; fsub to retain as much precision as possible.
; A subtrahend of -2^128 will trigger an overflow.
; Returns: the normalized difference (farg - facc) in
; facc, "rounded" to contain the 24 most-significant
; significand bits (including sign); may alter farg.
; 5 bytes
fsub:
ldx #facc ; point to subtrahend
jsr fnegx ; negate it and fall through
;
; ; ; ; ; ; ; ; ; ; ; ; ; ; ; ; ; ; ; ; ; ; ; ; ; ; ; ;
; fadd: add two floating point numbers.
; Expects: the two addends in facc and farg; both
; should be normalized prior to calling fadd to
; retain as much precision as possible.
; Returns: the normalized sum (farg + facc) in facc,
; "rounded" to contain the 24 most-significant
; significand bits (including sign); may alter farg.
; 61 bytes
fadd:
ldx #farg ;
lda facc ; check significand alignment
sec ;
sbc farg ; compare the exponents
bcs align ; align farg to match facc
ldy farg ;
sty facc ;
ldx #facc ; align facc to match farg
eor #$ff ; negate shift counter
adc #1 ;
align:
clc ; pre-clear "guard bit"
beq aligned ; aligned if 0 shift count
tay ; init shift counter
lda 1,x ; sig.h
cpy #26 ; no need to asr > 25 bits
bcc align2 ;
ldy #25 ;
align2:
cmp #$80 ; copy sign to carry
ror ;
ror 2,x ; asr significand
ror 3,x ;
dey ;
bne align2 ;
sta 1,x ;
aligned:
lda farg+3 ; add aligned significands
adc facc+3 ; ("guard bit" from the
sta facc+3 ; alignment is in carry)
lda farg+2 ;
adc facc+2 ;
sta facc+2 ;
lda farg+1 ;
adc facc+1 ;
ldx #facc ; point to facc, fall through
;
; ; ; ; ; ; ; ; ; ; ; ; ; ; ; ; ; ; ; ; ; ; ; ; ; ; ; ;
; normx: normalize fp# in four consecutive bytes of ZP
; (exp:sig.h:sig.m:sig.l).
; Expects: pointer to fp# in register X, sig.h in
; register A, result of last arithmetic operation on
; sig.h in C and V flags.
; Returns: normalized number in fp# if possible (punts
; too large to ovfl & leaves too small denormalized).
; 29 bytes
normx:
ldy 0,x ; load exponent
bvc norm3 ; no overflow: just normalize
ror ; significand overflowed: ror
ror 2,x ; significand, carry to msb
ror 3,x ;
iny ; increment exp
beq ovfl ; result overflowed: punt
norm2:
cmp #$c0 ; sig.h b7 = b6 ? (neat!)
bmi norm4 ; no: done
asl 3,x ; yes:
rol 2,x ; left shift significand
rol ;
dey ; and decrement exp
norm3:
bne norm2 ; loop unless denormal
norm4:
sta 1,x ; store sig.h
normz:
sty 0,x ; store exp and return
rts ;
;
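; illustration (untested): with a normalized 1000.0 in
; farg and a normalized 250.0 in facc,
;     jsr fadd       ; facc = 1000.0 + 250.0 = 1250.0
; or, with the same inputs,
;     jsr fsub       ; facc = 1000.0 - 250.0 = 750.0
;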
...
; ; ; ; ; ; ; ; ; ; ; ; ; ; ; ; ; ; ; ; ; ; ; ; ; ; ; ;
; fabsx: replace fp# in four consecutive bytes of ZP
; (exp:sig.h:sig.m:sig.l) with its normalized
; absolute value.
; An input of -2^128 will trigger an overflow.
; Expects: pointer to fp# in register X.
; Returns: normalized absolute value of fp#.
; 7 bytes
fabsx:
clv ;
lda 1,x ; is fp# negative?
bpl normx ; no: just normalize
inc sgn ; complement sign bit
;
; ; ; ; ; ; ; ; ; ; ; ; ; ; ; ; ; ; ; ; ; ; ; ; ; ; ; ;
; fnegx: negate and normalize fp# in four consecutive
; bytes of ZP (exp:sig.h:sig.m:sig.l).
; An input of -2^128 will trigger an overflow.
; Expects: pointer to fp# in register X.
; Returns: negated and normalized value of fp#.
; 20 bytes
fnegx:
sec ; negate fp#
lda #0 ;
sbc 3,x ;
sta 3,x ;
lda #0 ;
sbc 2,x ;
sta 2,x ;
lda #0 ;
sbc 1,x ;
jmp normx ; normalize (or asr if ovfl)
;
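; illustration (untested): to replace farg with its
; normalized absolute value,
;     ldx #farg
;     jsr fabsx
; fnegx is called the same way; fsub (above) uses it to
; negate facc before falling into fadd.
;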
...
Aside from detangling the original code in the interests of clarity and a modest speed increase, I have also added a "guard bit" in the carry flag to assist in "rounding up" results in which the last bit that the alignment shift pushed off the right end of an addend's significand was a "1". I realize that this may not be a rigorous (or even correct) method of maintaining precision, and I would definitely like your input on this matter as I continue to put the final touches on the multiply and divide subroutines, hopefully coming soon. After that: fix, log, log10, exp, and a custom sqr (assuming the stars align appropriately)!
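For example, if the addend with the smaller exponent has to be shifted right three places for alignment and the third bit to fall off the low end of its significand is a "1", that "1" is parked in the carry and folded back into the low byte of the sum by the first ADC, nudging the result up by one LSB instead of silently truncating it. That's the intent, anyway.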
Please note that these snippets are a work in progress, and are completely untested at this time.
Many thanks!
_________________
Got a kilobyte lying fallow in your 65xx's memory map? Sprinkle some
VTL02C on it and see how it grows on you!
Mike B.
(about me) (learning how to github)