barrym95838 wrote:
To me, it's not a matter of modern vs. vintage, but more of a matter of correct vs. incorrect. The 6502's overflow flag worked correctly for signed binary from its inception ... it was up to the programmer to make proper use of it. In the case of Forth, I believe that access to the overflow flag was limited for application programs, so that duty should have fallen on the person writing the primitives.
That's an interesting idea. I suppose if someone wanted to implement < and D< as high level and keep them as fast as possible, one wild idea would be to write the subtraction primitives ( - and D- ) so that if there was an overflow they would store -1 in a variable named OVERFLOW and store a zero in it otherwise. Better yet, use a VALUE or a soft constant. The high-level D< could then be:
Code:
: D< D- D0< OVERFLOW XOR ;
I've never tried this, so I don't know if the overall size would be smaller. It just seems simpler to have < and D< as primitives that test the processor's overflow flag.
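For anyone who wants to see why the sign-XOR-overflow trick in that D< definition works, here is a minimal Python sketch of the same logic on a hypothetical 16-bit cell (the function and constant names are mine, chosen to mirror the Forth words in the comments; this is an illustration, not anyone's actual implementation):

```python
MASK = 0xFFFF      # 16-bit cell, for illustration
SIGN = 0x8000      # sign bit of a 16-bit cell

def less_than(a: int, b: int) -> bool:
    """Signed a < b, computed the way the D< definition above does:
    subtract, test the sign of the result, and flip that flag when
    the subtraction overflowed (the V flag on a 6502)."""
    au, bu = a & MASK, b & MASK
    diff = (au - bu) & MASK
    sign = (diff & SIGN) != 0                        # like D0<
    # Overflow occurs when the operands have different signs and the
    # result's sign differs from the minuend's sign.
    overflow = ((au ^ bu) & (au ^ diff) & SIGN) != 0 # like OVERFLOW
    return sign != overflow                          # like XOR
```

A plain sign test fails for operands far apart, e.g. -32768 vs. 32767: the subtraction wraps around to a positive result, but the overflow flip corrects the answer.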
Quote:
The discussion in Section 2.2.2 makes clever use of the phrases "handles the vast majority" and "is generally safe", so I guess it boiled down to a matter of efficiency vs. exhaustive correctness. I have little doubt that a non-zero percentage of modern software hides similar trade-offs, documented or not.
Mike
Section 2.2.2 also states that different behaviour can be expected from different versions of polyFORTH.
Quote:
16 bit versions of polyFORTH ISD-4 use the fully signed model (option “a” in Fig. 2.1) to implement most
relationals, as well as MAX and MIN. 32-bit versions of polyFORTH use the circular model.
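To make the distinction between those two models concrete, here is a small Python sketch (a hypothetical 16-bit cell; the function names are mine) showing where the circular and fully signed comparisons disagree:

```python
def circular_less(a: int, b: int, bits: int = 16) -> bool:
    """Circular model: subtract modulo 2^bits and test only the sign
    bit of the result. Correct only while the operands are less than
    half the number circle apart."""
    mask = (1 << bits) - 1
    return ((a - b) & mask) >= (1 << (bits - 1))

def signed_less(a: int, b: int) -> bool:
    """Fully signed model: a true arithmetic comparison."""
    return a < b
```

The two agree for nearby operands (circular_less(3, 5) and signed_less(3, 5) are both true), but across half the range they split: signed_less(-32768, 32767) is true, while circular_less(-32768, 32767) is false, because the subtraction wraps around to +1.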