PostPosted: Wed Mar 29, 2023 5:55 pm 
Joined: Sun Mar 19, 2023 2:04 pm
Posts: 137
Location: about an hour outside of Springfield
I have mentioned a 'calculator' before; what I mean is math like this:
https://www.youtube.com/watch?v=TxidmVD90EM&list=PLbxFfU5GKZz2-4Y3YwRFVVDEMlmxYfY6y&ab_channel=wenshenpsu
linear programming, linear algebra, differential equations, gradient descent, clustering, support vector machines, voronoi diagrams...
things like that.

I'd like to discuss how 65xx-family assembly can begin to support 'big math': not so much big numbers as high dimensionality (many parameters) and big data sets. Just looking ahead here. Copying data and adding two numbers together is a start, but there should be a goal in mind.


PostPosted: Mon May 15, 2023 4:59 pm 
Joined: Wed Jan 08, 2014 3:31 pm
Posts: 578
This is an old post, but I just noticed it. The 6502 can pretty easily handle 32- to 40-bit floating point, as most versions of BASIC did back in the day. Doing it in assembly just requires a good math library.

If you don't want to use a library, fixed point is incredibly simple, and that's how I did the Mandelbrot set in assembly.

For transcendental functions you can use CORDIC or look-up tables.
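
As a rough illustration of the fixed-point approach (sketched in C rather than 6502 assembly, since a C implementation comes up later in the thread; the 8.8 format and the names are arbitrary choices, not taken from any particular library):
Code:
#include <stdint.h>
#include <stdio.h>

/* 8.8 fixed point: 16-bit value, upper 8 bits integer, lower 8 bits fraction. */
typedef int16_t fix8_8;

#define FIX_ONE 256            /* 1.0 in 8.8 format */

/* Multiply two 8.8 numbers: widen to 32 bits, then drop 8 fraction bits.
   (Right-shifting a negative value is an arithmetic shift on typical targets.) */
static fix8_8 fix_mul(fix8_8 a, fix8_8 b)
{
    return (fix8_8)(((int32_t)a * (int32_t)b) >> 8);
}

int main(void)
{
    fix8_8 x = (fix8_8)(1.5 * FIX_ONE);    /* 1.5   -> 0x0180 */
    fix8_8 y = (fix8_8)(-0.25 * FIX_ONE);  /* -0.25 -> 0xFFC0 */
    fix8_8 z = fix_mul(x, y);              /* expect -0.375 */
    printf("%f\n", z / (double)FIX_ONE);   /* prints -0.375000 */
    return 0;
}

On the 65xx the same idea becomes a shift-and-add multiply into a 32-bit product, keeping the middle two bytes as the 8.8 result.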


PostPosted: Tue May 16, 2023 1:33 pm 
Joined: Sun Mar 19, 2023 2:04 pm
Posts: 137
Location: about an hour outside of Springfield
I actually like BCD for its 'truthiness'; that it will match what is done on paper.
CORDIC looks really useful for a system I'm working on, though that one has a 64-bit math coprocessor.

The matrix operations and such are of the highest priority for me, beyond a full implementation of all the math routines exposed with variables.
I am basically starting from scratch and found several resources online to use for now.

In the end, a free-standing or hosted C implementation is the goal.
40-bit math seems like a decent goal; many of the first computers used 40-bit words, IIRC.


PostPosted: Tue May 16, 2023 3:31 pm 
Joined: Sat Dec 01, 2018 1:53 pm
Posts: 730
Location: Tokyo, Japan
wayfarer wrote:
I actually like BCD for its 'truthiness'; that it will match what is done on paper.

It doesn't for me. Only binary matches what I do on paper.

Possibly because I do my paper calculations in hexadecimal.

_________________
Curt J. Sampson - github.com/0cjs


PostPosted: Tue May 16, 2023 4:55 pm 
Joined: Sun Mar 19, 2023 2:04 pm
Posts: 137
Location: about an hour outside of Springfield
cjs wrote:
wayfarer wrote:
I actually like BCD for its 'truthiness'; that it will match what is done on paper.

It doesn't for me. Only binary matches what I do on paper.

Possibly because I do my paper calculations in hexadecimal.


I am referring to the floating point rounding error; my understanding is BCD never has this issue.
I am planning to explore this further, I intend to do a lot of finite/discrete mathematics as I delve into 65xx ASM wherever possible.

https://en.wikipedia.org/wiki/Binary-coded_decimal
https://en.wikibooks.org/wiki/Digital_Circuits/CORDIC

I also like that BCD converts to ASCII by merely adding 0011 to the high-nibble.
Given that it is literally just the binary for the number directly in a nibble, it is very convenient to work with.
It is such a part of 65xx at this point, I think it is worth exploring the efficiency of BCD vs traditional math operations for any serious maths library built... :?:


PostPosted: Tue May 16, 2023 7:15 pm 
Joined: Thu May 28, 2009 9:46 pm
Posts: 8505
Location: Midwestern USA
wayfarer wrote:
cjs wrote:
wayfarer wrote:
I actually like BCD for its 'truthiness'; that it will match what is done on paper.

It doesn't for me. Only binary matches what I do on paper.

Possibly because I do my paper calculations in hexadecimal.

I am referring to the floating point rounding error; my understanding is BCD never has this issue.

I think Curt was being facetious—no one I know does any pencil-and-paper calculations in anything other than decimal. :D

Quote:
I am planning to explore this further, I intend to do a lot of finite/discrete mathematics as I delve into 65xx ASM wherever possible.

https://en.wikipedia.org/wiki/Binary-coded_decimal

From the perspective of working with the 65C02/65C816, that Wikipedia article you refer to is mostly extraneous drivel. 65xx MPUs work with packed BCD, which is simple to describe and understand.

With the 65C02, ADC and SBC consume an extra clock cycle in decimal mode (decimal mode has no timing effect with the 65C816 in native mode). More critically, BCD multiplication and division functions are convoluted and slow compared to their binary counterparts. Typically, these functions have to break down a BCD numeral, operate on one nybble or the other, and then re-pack the nybbles before moving to the next byte. My experience is that a BCD multiplication function runs at about 25 percent of the speed of a binary FP function in handling numbers of equivalent magnitude—division is even slower.

Something else to consider is decimal mode only affects ADC and SBC. DEC and INC always operate in binary mode, which makes using a BCD number as a counter inconvenient.

Quote:
I also like that BCD converts to ASCII by merely adding 0011 to the high-nibble. Given that it is literally just the binary for the number directly in a nibble, it is very convenient to work with.

Well, there’s a little more to it. Since a BCD numeral is $00-$99 ($0000-$9999 with the 16-bit 65C816 accumulator), shifting and masking is required to separate the nybbles prior to conversion. Also, the actual BCD-to-ASCII conversion is typically done by ORing a nybble with %00110000, not adding (using ADC means a preceding CLC is required).
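
To make the shift-and-mask point concrete, here is the same idea in C (a sketch only; on the 65xx it would be four LSRs or an AND #$0F, followed by the ORA #%00110000 described above):
Code:
#include <stdio.h>
#include <stdint.h>

/* Convert one packed-BCD byte (e.g. 0x42 meaning decimal 42) to two ASCII digits. */
static void bcd_to_ascii(uint8_t bcd, char out[2])
{
    out[0] = (char)('0' | (bcd >> 4));    /* high nybble: shift down, OR with $30 */
    out[1] = (char)('0' | (bcd & 0x0F));  /* low nybble: mask, OR with $30 */
}

int main(void)
{
    char digits[3] = {0};
    bcd_to_ascii(0x42, digits);   /* packed BCD for decimal 42 */
    printf("%s\n", digits);       /* prints "42" */
    return 0;
}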


Quote:
It is such a part of 65xx at this point, I think it is worth exploring the efficiency of BCD vs traditional math operations for any serious maths library built... :?:

Efficiency? The “classic” binary formats, such as excess-128 and IEEE-754, are computationally faster overall and use less storage per number.

_________________
x86?  We ain't got no x86.  We don't NEED no stinking x86!


PostPosted: Tue May 16, 2023 8:28 pm 
Joined: Wed Jan 08, 2014 3:31 pm
Posts: 578
wayfarer wrote:
I am referring to the floating point rounding error; my understanding is BCD never has this issue.


All forms of finite-precision arithmetic will have rounding and truncation errors; in single-precision binary floating point they are just a little more obvious than in BCD. Usually adding the machine epsilon and truncating solves these errors. See: https://en.wikipedia.org/wiki/Machine_epsilon
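
For anyone who wants to see this on a modern machine, a small C sketch that finds the single-precision machine epsilon by halving and shows the familiar decimal rounding surprise (the exact value printed can vary with the compiler's floating-point evaluation mode):
Code:
#include <stdio.h>

int main(void)
{
    /* Halve eps until 1.0f + eps is no longer distinguishable from 1.0f. */
    float eps = 1.0f;
    while (1.0f + eps / 2.0f > 1.0f)
        eps /= 2.0f;
    printf("float epsilon ~ %g\n", eps);          /* roughly 1.19209e-07 */

    /* 0.1 has no exact binary representation, so the sum misses 0.3 slightly. */
    printf("0.1 + 0.2 == 0.3 ? %s\n",
           (0.1 + 0.2 == 0.3) ? "yes" : "no");    /* prints "no" */
    return 0;
}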

You'll note that the various fixed-width BCD formats have a machine epsilon as well.

Where BCD has an advantage is that it seems a little easier to write an arbitrary-precision arithmetic package, although, as pointed out, BCD multiplication isn't easy. I have seen it done with a lookup table, which took a lot of memory but was obviously speedy.


PostPosted: Tue May 16, 2023 9:47 pm 
Joined: Fri Aug 30, 2002 1:09 am
Posts: 8543
Location: Southern California
The main thing I've heard of about round-off errors with hex is in dollars and cents, because .01 in decimal does not have any exact representation in hex with a reasonable number of digits.  However, hexadecimal works just fine if you represent a dollar as 100 (64h), not 1.00, and a cent is just 1.  If you need tenths of cents, then represent the dollar as 1,000 (3E8h) instead.  This does not keep you from converting to a more-normal representation (like $28.39) when it's time for human-readable output.  In the internal representation, $.99 times two is still $1.98, $2.49 + $14.40 is still $16.89, etc., and it is exact.
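
That convention is easy to sketch in C (amounts held as an integer count of cents; the names here are arbitrary):
Code:
#include <stdio.h>
#include <stdint.h>

/* Money kept as an integer count of cents, so $2.49 is 249 and $14.40 is 1440.
   All arithmetic is exact integer arithmetic; dollars.cents only appear on output. */
typedef int32_t cents_t;

static void print_dollars(cents_t c)
{
    printf("$%ld.%02ld\n", (long)(c / 100), (long)(c % 100));
}

int main(void)
{
    cents_t a = 249;           /* $2.49  */
    cents_t b = 1440;          /* $14.40 */
    print_dollars(a + b);      /* $16.89, exactly */
    print_dollars(2 * 99);     /* $0.99 * 2 = $1.98, exactly */
    return 0;
}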

For other things, consider that 32 bits of BCD, without sign and exponent, has more than a 1½ extra digit precision disadvantage compared to the same number of bits in hex. Working in hex will reduce the errors accumulated in chained operations, and at the same time, allow the computer to perform better.  Yes, there's computing time involved in the conversion between hex and decimal; but if you have a lot of chained operations and only do the conversion when it's time for human I/O, you'll come out ahead by doing the internal stuff in hex.

That said, if your focus is on a calculator, you may well have to go with BCD.  Also, floating-point is pretty much required for a calculator.  But for other things, you can often get away with scaled-integer.  Notice I did not say "fixed-point."  Fixed-point is a limited subset of scaled-integer.  I tell more about scaled-integer in my article here.  It puts more of a burden on the programmer, as he has to mind the ranges and scale factors, but lets the computer perform better, particularly in situations where there's no floating-point unit on the processor, as floating-point adds a lot of overhead in that case.
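
As one sketch of scaled-integer that is not plain power-of-two fixed point (this example is mine, not taken from the article): angles kept as a 16-bit fraction of a full circle, so the programmer-chosen scale factor is 360/65536 degrees per count and wrap-around past 360 degrees comes free with unsigned overflow.
Code:
#include <stdio.h>
#include <stdint.h>

/* A 16-bit "binary angle": 0x0000 = 0 degrees, 0x4000 = 90, 0x8000 = 180, and
   the count wraps at a full circle, so additions need no range checks at all. */
typedef uint16_t bangle_t;

static double bangle_to_degrees(bangle_t a)
{
    return a * (360.0 / 65536.0);   /* scale factor chosen by the programmer */
}

int main(void)
{
    bangle_t heading = 0xC000;               /* 270 degrees */
    heading = (bangle_t)(heading + 0x6000);  /* turn another 135; wraps past 360 */
    printf("%.1f degrees\n", bangle_to_degrees(heading));  /* prints 45.0 */
    return 0;
}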

_________________
http://WilsonMinesCo.com/ lots of 6502 resources
The "second front page" is http://wilsonminesco.com/links.html .
What's an additional VIA among friends, anyhow?


PostPosted: Tue May 16, 2023 10:05 pm 
Joined: Sun Mar 19, 2023 2:04 pm
Posts: 137
Location: about an hour outside of Springfield
GARTHWILSON wrote:
The main thing I've heard of about round-off errors with hex is in dollars and cents, because .01 in decimal does not have any exact representation in hex with a reasonable number of digits.  However, hexadecimal works just fine if you represent a dollar as 100 (64h), not 1.00, and a cent is just 1.  If you need tenths of cents, then represent the dollar as 1,000 (3E8h) instead.  This does not keep you from converting to a more-normal representation (like $28.39) when it's time for human-readable output.  In the internal representation, $.99 times two is still $1.98, $2.49 + $14.40 is still $16.89, etc., and it is exact.

For other things, consider that 32 bits of BCD, without sign and exponent, has more than a 1½ extra digit precision disadvantage compared to the same number of bits in hex. Working in hex will reduce the errors accumulated in chained operations, and at the same time, allow the computer to perform better.  Yes, there's computing time involved in the conversion between hex and decimal; but if you have a lot of chained operations and only do the conversion when it's time for human I/O, you'll come out ahead by doing the internal stuff in hex.

That said, if your focus is on a calculator, you may well have to go with BCD.  Also, floating-point is pretty much required for a calculator.  But for other things, you can often get away with scaled-integer.  Notice I did not say "fixed-point."  Fixed-point is a limited subset of scaled-integer.  I tell more about scaled-integer in my article here.  It puts more of a burden on the programmer, as he has to mind the ranges and scale factors, but lets the computer perform better, particularly in situations where there's no floating-point unit on the processor, as floating-point adds a lot of overhead in that case.


There is some discussion of floating-point errors out there; hex/binary and decimal/BCD are, in the end, just different representations of the same values. I am looking at some ASCII/BCD-based routines, there is a clock (an RTC) I want to use that works in BCD, and a calculator (or calc software) is desired.

I am not convinced that BCD is appropriate for everything; I do think it still has some uses, and supporting its operations where possible and pragmatic seems wise.

Your scaled-integer math, look-up tables, digital slide rules, and the above-mentioned CORDIC all seem like interesting places to explore.

It is important to look at, BCD will always have 2/5ths the storage density of an 8-bit integer.

For a C library, I have some notes on a struct to represent 'a number', going beyond the basic C data types.
In 65xx assembly we basically only have bytes and BCD; everything else we need to build from scratch. I would think the C data types are one place to start, though they are inadequate for my needs.
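
Purely as a hypothetical starting point (this is not the poster's actual notes, just one way such a struct might look): a sign, a power-of-ten exponent, and a variable-length array of packed-BCD digits.
Code:
#include <stdint.h>
#include <stdlib.h>

/* A hypothetical "number" record going beyond the built-in C types:
   value = sign * (packed-BCD digit string) * 10^exponent. */
typedef struct {
    int8_t   sign;       /* +1, -1, or 0 for zero */
    int16_t  exponent;   /* power-of-ten scale factor */
    uint16_t ndigits;    /* how many decimal digits are in use */
    uint8_t *digits;     /* ndigits/2 (rounded up) bytes of packed BCD */
} number_t;

/* Allocate a number with room for n decimal digits, initialised to zero. */
static number_t *number_new(uint16_t n)
{
    number_t *p = calloc(1, sizeof *p);
    if (!p) return NULL;
    p->ndigits = n;
    p->digits  = calloc((n + 1) / 2, 1);
    if (!p->digits) { free(p); return NULL; }
    return p;
}

int main(void)
{
    number_t *x = number_new(10);   /* room for ten decimal digits */
    /* ...arithmetic routines would operate on x here... */
    if (x) { free(x->digits); free(x); }
    return 0;
}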

Lots of other languages get talked about on here, like Forth or BASIC. What are some ways those languages represent numbers, and how did you implement that in 65xx?


PostPosted: Tue May 16, 2023 10:10 pm 
Joined: Sun Mar 19, 2023 2:04 pm
Posts: 137
Location: about an hour outside of Springfield
BigDumbDinosaur wrote:
I think Curt was being facetious—no one I know does any pencil-and-paper calculations in anything other than decimal. :D

gotcha :P
Quote:
With the 65C02, ADC and SBC consume an extra clock cycle in decimal mode (decimal mode has no timing effect with the 65C816 in native mode). More critically, BCD multiplication and division functions are convoluted and slow compared to their binary counterparts. Typically, these functions have to break down a BCD numeral, operate on one nybble or the other, and then re-pack the nybbles before moving to the next byte. My experience is that a BCD multiplication function runs at about 25 percent of the speed of a binary FP function in handling numbers of equivalent magnitude—division is even slower.

Something else to consider is decimal mode only affects ADC and SBC. DEC and INC always operate in binary mode, which makes using a BCD number as a counter inconvenient.

Quote:
I also like that BCD converts to ASCII by merely adding 0011 to the high-nibble. Given that it is literally just the binary for the number directly in a nibble, it is very convenient to work with.

Well, there’s a little more to it. Since a BCD numeral is $00-$99 ($0000-$9999 with the 16-bit 65C816 accumulator), shifting and masking is required to separate the nybbles prior to conversion. Also, the actual BCD-to-ASCII conversion is typically done by ORing a nybble with %00110000, not adding (using ADC means a preceding CLC is required).


Quote:
It is such a part of 65xx at this point, I think it is worth exploring the efficiency of BCD vs traditional math operations for any serious maths library built... :?:

Efficiency? The “classic” binary formats, such as excess-128 and IEEE-754, are computationally faster overall and use less storage per number.

noted...

the 65816 still does all the BCD stuff, it's just faster in a couple of spots?

I am looking at an ASCII-based system for Oxide, so whatever formats I use will fit that, including using the control characters in sticks 0-1 (ASCII $00-$1F) for motors and such... like a 3D printer. Cybernetics is neat stuff.


PostPosted: Tue May 16, 2023 11:32 pm 
Joined: Wed Jan 08, 2014 3:31 pm
Posts: 578
You might want to take a look at Calc65 which is a BCD math package: http://www.crbond.com/downloads/fltpt65.cba


PostPosted: Wed May 17, 2023 12:33 am 
Joined: Thu May 28, 2009 9:46 pm
Posts: 8505
Location: Midwestern USA
wayfarer wrote:
the 65816 still does all the BCD stuff, it's just faster in a couple of spots?

In theory, it should be faster, since the 65C816 can process 16 bits at a time—addition and subtraction will immediately benefit. It’s also helpful that those two operations do not incur a one-cycle penalty in decimal mode as they do with the 65C02.

That said, BCD multiplication and division involve a lot of repetitive addition and subtraction, respectively, as the common shift-a-bit-and-add (multiplication) and shift-a-bit-and-subtract (division) algorithms are not BCD-compatible. With decimal-mode algorithms, a typical procedure is to isolate a nybble and use it as a counter to add or subtract as required. That means each time a nybble is processed, either masking or right-shifting is required to create the counter. From what I recall, this procedure doesn't readily scale to 16 bits, because it works with four bits at a time. The last time I implemented a decimal-mode math package was 30+ years ago on the 65C02, so I really don't know how well it would perform on the 65C816.

Something that is convenient with BCD numbers is they can be readily multiplied or divided by 10 by doing four left (multiply) or right (divide) rotates, with carry cleared at the start of the rotate. That would definitely be faster on the 65C816, since it can rotate 16 bits in a single instruction and can hold a 16-bit BCD number in the accumulator, thus allowing the fast accumulator mode shift (two clock cycles per shift vs. five cycles for direct page addressing or eight cycles for absolute addressing—even more if indexed). Multiply/divide by 10 is a procedure useful in converting between BCD and ASCII, and is also of value in normalizing a BCD floating point number after some sort of processing.
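
The same multiply-by-ten trick expressed in C for a multi-byte packed-BCD value (a sketch only; on the 65xx it is the ASL/ROL chain described above):
Code:
#include <stdio.h>
#include <stdint.h>

/* Multiply a multi-byte packed-BCD integer by 10 in place by shifting the whole
   value left one nybble.  bcd[0] is the least significant byte.  The top nybble
   of the most significant byte is lost if the result overflows. */
static void bcd_times_10(uint8_t *bcd, int nbytes)
{
    uint8_t carry = 0;                            /* nybble carried into the next byte */
    for (int i = 0; i < nbytes; i++) {
        uint8_t next_carry = bcd[i] >> 4;         /* high nybble moves up a byte */
        bcd[i] = (uint8_t)((bcd[i] << 4) | carry);
        carry = next_carry;
    }
}

int main(void)
{
    uint8_t n[3] = { 0x47, 0x23, 0x00 };          /* packed BCD for 2347 */
    bcd_times_10(n, 3);
    printf("%02X%02X%02X\n", n[2], n[1], n[0]);   /* prints 023470, i.e. 23470 */
    return 0;
}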

Despite that, you can’t get around the need to do a lot of repetitive adding and subtracting when multiplying and dividing. Needless to say, it only gets worse with Fourier transforms, transcendental operations, etc.

Martin_H wrote:
You might want to take a look at Calc65 which is a BCD math package: http://www.crbond.com/downloads/fltpt65.cba

Note that Bond’s assembly language source is designed to work with a somewhat-strange assembler he developed. I’ve looked at the source code but have not tried to assemble or otherwise use it.

_________________
x86?  We ain't got no x86.  We don't NEED no stinking x86!


PostPosted: Wed May 17, 2023 2:43 am 
Joined: Fri Aug 30, 2002 1:09 am
Posts: 8543
Location: Southern California
wayfarer wrote:
It is important to look at, BCD will always have 2/5ths the storage density of an 8-bit integer.

I'm not sure what you mean here.  In BCD, each nybble of a byte can have a value of 0-9, so a byte, which is two nybbles, can hold 0-99d.  That is approximately 2/5 of 255d; but an integer below 100d takes one byte whether in BCD or in binary.  Representing 50,000d requires three bytes in BCD but only two in binary.

Quote:
Lots of other languages get talked about on here, like Forth or BASIC.  What are some ways those languages represent numbers, and how did you implement that in 65xx?

In Forth, internal representations and operations are always done in binary, and conversion to and from other bases is done when it's time for human I/O (or in the case of your RTC IC, it would be done for that).  The way the conversion is done is not always the most efficient, but the same routines work for any base, so there's no need to build anything from scratch, even if for some odd reason you wanted base 7 or 13 or 19 for example.  The method is described briefly in the article I linked to.  Most things are in base 2, 10, or 16; but there is sometimes a need for other ones.  For example, if you were keeping time in seconds and displaying in the format HH:MM:SS, your conversion would use base 10 for the right-most seconds digit, base six for the next digit to the left of that, same for minutes, then back to base 10 for the hours.
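
A small C sketch of that mixed-base conversion, peeling digits off the right with base 10 and base 6 alternately (names and field widths are arbitrary):
Code:
#include <stdio.h>

/* Convert a count of seconds to HH:MM:SS the mixed-base way: the rightmost
   digit is base 10, the next is base 6 (tens of seconds), then base 10 and
   base 6 again for minutes, and whatever is left is hours. */
static void format_hms(unsigned long seconds, char out[16])
{
    static const unsigned bases[4] = { 10, 6, 10, 6 };  /* s, 10s, m, 10m */
    char digits[4];
    for (int i = 0; i < 4; i++) {
        digits[i] = (char)('0' + seconds % bases[i]);
        seconds /= bases[i];
    }
    /* whatever remains is the hour count */
    sprintf(out, "%02lu:%c%c:%c%c",
            seconds, digits[3], digits[2], digits[1], digits[0]);
}

int main(void)
{
    char buf[16];
    format_hms(7384, buf);     /* 7384 s = 2 h 3 min 4 s */
    printf("%s\n", buf);       /* prints 02:03:04 */
    return 0;
}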

_________________
http://WilsonMinesCo.com/ lots of 6502 resources
The "second front page" is http://wilsonminesco.com/links.html .
What's an additional VIA among friends, anyhow?


PostPosted: Wed May 17, 2023 4:03 am 
Joined: Sat Dec 01, 2018 1:53 pm
Posts: 730
Location: Tokyo, Japan
BigDumbDinosaur wrote:
I think Curt was being facetious—no one I know does any pencil-and-paper calculations in anything other than decimal. :D

Facetious, yes, but not entirely so: there's a key kernel of very important truth in what I was saying, and a lesser kernel of perhaps (or perhaps not) interesting truth. More on the latter later.

wayfarer wrote:
I am referring to the floating point rounding error; my understanding is BCD never has this issue.

That is incorrect. BCD has rounding issues just as hexadecimal does, as does any base. While the rounding errors will be different rounding errors, they are still there.

Consider, for example, working in base 3. In that case 1/10 (that is, one third, with the divisor written in base 3) = 0.1, and there is no rounding error at all. But in base 10, 1/3 = 0.333…, which you will eventually have to round to a slightly different value, at least if you want to store it in memory or ever want your program to terminate.

If you're going to choose a base to try to avoid rounding errors as much as possible, but is still of reasonable size, base 60 is a good choice. (There's a reason that the ancient Babylonians used it.) But of course this assumes you're using it for your outside-the-computer work as well; if you're going to convert it to base 10 (or something equally ridiculous) outside the computer you're going to run into rounding issues again because of the conversion.

Base 60 is admittedly a bit annoying to use everywhere, because of difficulties inherent in writing it, but there are definitely better bases to be chosen than 10, which is in general a pretty bad base for most things. This is why when I'm made King of the World my first decree will be that everything that was previously done in base 10 will in the future be done in base 12. (This will also have the beneficent side effect of bringing in world peace, because everybody will be too busy dealing with this change to prosecute wars and the like. Oh, and for the British who will now have to "undecimalise" their currency, sorry about that! You never should have changed!)

Quote:
I am planning to explore this further, I intend to do a lot of finite/discrete mathematics as I delve into 65xx ASM wherever possible.

Sounds to me as if you might be better off with bigints (high-precision or unlimited-precision integers) for that sort of thing, depending on exactly what you're doing. These, again, tend to work best with a binary representation internally, offering some memory savings and considerably more speed. In which case you might consider (also depending on what you're doing) just using hex representation externally, since it's essentially isomorphic. You can get a sense of the costs of conversion from a bit of 6502 bigint code I wrote a while back and its unit tests.
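
For a sense of how small the core of a binary bigint package is, here is a minimal C sketch of multi-byte add with carry (not cjs's code; little-endian byte arrays of equal length are assumed):
Code:
#include <stdio.h>
#include <stdint.h>

/* Add two unsigned bigints held as little-endian byte arrays of equal length;
   the result goes into r.  Returns the final carry (1 if the sum overflowed). */
static unsigned bigint_add(uint8_t *r, const uint8_t *a, const uint8_t *b, int nbytes)
{
    unsigned carry = 0;
    for (int i = 0; i < nbytes; i++) {
        unsigned sum = (unsigned)a[i] + b[i] + carry;   /* like CLC/ADC, byte by byte */
        r[i] = (uint8_t)sum;
        carry = sum >> 8;
    }
    return carry;
}

int main(void)
{
    uint8_t a[4] = { 0xFF, 0xFF, 0x00, 0x00 };   /* 0x0000FFFF = 65535 */
    uint8_t b[4] = { 0x01, 0x00, 0x00, 0x00 };   /* 1 */
    uint8_t r[4];
    bigint_add(r, a, b, 4);
    printf("%02X%02X%02X%02X\n", r[3], r[2], r[1], r[0]);  /* prints 00010000 */
    return 0;
}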

Remember, your desire to work in base 10 is, from a mathematical point of view, completely arbitrary. You're not going to get better answers in that base; you're probably going to get worse ones than several other more sensible bases.

Quote:
It is such a part of 65xx at this point, I think it is worth exploring the efficiency of BCD vs traditional math operations for any serious maths library built... :?:

It's been explored; that's a reason that for a long time we've done most maths in binary when using binary computers.

BigDumbDinosaur wrote:
I think Curt was being facetious—no one I know does any pencil-and-paper calculations in anything other than decimal. :D

Back to this! While admittedly I no longer do many calculations using an actual pencil and paper (though I certainly did back in the early '80s before I got my TI Programmer and, later, HP-16C!), I still do the equivalent all the time at my command line. I have a little wrapper around the Unix bc(1) program (an infinite precision calculator) that lets me easily specify hex input and/or output, and so I quite frequently am doing things like `c -Hh ffe2-c8` (which spits out `FF1A`).

Now admittedly I'm doing only integer calculations these days, but part of the reason for that is that the only floating point I've been working with lately has been with the MSX-BASIC representations, where MS had switched to BCD significands. But I'm sure I'll have use of bc(1)'s floating point support (which works fine: `c -Hh 5a.29/3` gives `1E.0D9`) when I get around to writing my own floating point libraries.

_________________
Curt J. Sampson - github.com/0cjs


PostPosted: Wed May 17, 2023 5:44 am 
Joined: Mon Jan 19, 2004 12:49 pm
Posts: 983
Location: Potsdam, DE
cjs wrote:
Oh, and for the British who will now have to "undecimalise" their currency, sorry about that! You never should have changed!


Oh rats! And just when I'd got used to decimalisation, after which I didn't have to consider working simultaneously in:
  • base 4 - farthings to penny
  • base 12 - pennies to shilling
  • base 20 - shillings to pounds
  • (base 21 - shillings to guineas)
  • base 10 - pounds
Or for weights (we'll ignore drams and grains and pennyweights and scruples and their like)
  • base 16 - ounces to pounds
  • base 14 - pounds to stones
  • base 8 - stones to hundredweight
  • base 12 - hundredweight to tons

And the crazy thing is, people are *still* complaining about decimalisation because it's 'too hard'.

No thanks, I'll stick to ISO measures - though I will allow base 60/12|24/7 for time (even the Germans cheerfully announce the time as 'ten past half eight' when they mean 07:40... go figure).

Neil

