PostPosted: Wed May 17, 2023 12:01 pm 
Joined: Wed Jan 08, 2014 3:31 pm
Posts: 578
Re: Duodecimal or dozenal

There's a group trying to promote the switch from base ten to twelve. Here's a link to their website:

http://www.dozenalsociety.org.uk/

I am sympathetic with the desire to switch, as twelve is a highly composite number. There's a reason why a dozen is a dozen after all. But translation from binary to duodecimal isn't as easy as going to octal or hex. So I am not sure we would be that much better off.


PostPosted: Wed May 17, 2023 2:51 pm 
Joined: Sun Nov 08, 2009 1:56 am
Posts: 411
Location: Minnesota
Quote:
Martin_H wrote:
You might want to take a look at Calc65 which is a BCD math package: http://www.crbond.com/downloads/fltpt65.cba

Note that Bond’s assembly language source is designed to work with a somewhat-strange assembler he developed. I’ve looked at the source code but have not tried to assemble or otherwise use it.


I don't know. I just took a quick look at that and the only thing I didn't recognize for a moment was the "branch to local label" form:

Code:
bcc @B
bcs @F


where 'B' stands for "backward branch to previous @" and 'F' for "forward branch to next @". I don't know if this could be extended to "@BB" or "@FF" since those never appeared, but I kind of like the way '@' looks standing all by itself in the label field.

The source itself is also pretty well commented.


PostPosted: Wed May 17, 2023 3:36 pm 
Joined: Wed Jan 08, 2014 3:31 pm
Posts: 578
A few years ago I ported that BCD code to the Ophis assembler and added macros to make the code more concise in places. It does work, but I haven't used it in any projects.

https://github.com/Martin-H1/6502/blob/ ... cdmath.asm


PostPosted: Wed May 17, 2023 5:15 pm 
Joined: Sat Dec 01, 2018 1:53 pm
Posts: 730
Location: Tokyo, Japan
Martin_H wrote:
I am sympathetic with the desire to switch, as twelve is a highly composite number. There's a reason why a dozen is a dozen after all. But translation from binary to duodecimal isn't as easy as going to octal or hex. So I am not sure we would be that much better off.

I've thought about this quite extensively (yes, I am making preparations to be Ruler Of The World/BDFL), and I seriously considered bases 8 and 16. But overall, base 12 seems to work better. After all, you divide things by thirds all the time in real life, and you have to write the binary/base-12 conversion routines only once.
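
For what it's worth, the conversion really is a small amount of code. Here's a rough sketch in Python rather than 6502 assembly (the 'X' and 'E' digit symbols for ten and eleven are just placeholders, not a claim about standard dozenal notation):

Code:
# Sketch only: binary (integer) to base-12 text by repeated division.
DIGITS = "0123456789XE"          # placeholder symbols for ten and eleven

def to_base12(n: int) -> str:
    if n == 0:
        return "0"
    digits = []
    while n > 0:
        n, r = divmod(n, 12)     # each digit costs a full divide by 12
        digits.append(DIGITS[r])
    return "".join(reversed(digits))

assert to_base12(144) == "100"   # one gross
assert to_base12(1000) == "6E4"  # 6*144 + 11*12 + 4

The catch Martin_H points out is visible in the loop: every digit needs a real division by 12, where octal and hex conversion need only shifts and masks.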

Though I first spent considerable time trying to make base 60 work, it just wasn't happening, due to notation problems.

_________________
Curt J. Sampson - github.com/0cjs


PostPosted: Wed May 17, 2023 6:28 pm 
Joined: Fri Aug 03, 2018 8:52 am
Posts: 746
Location: Germany
while i was standing in line in the supermarket, looking at the thread... i thought of something.
and now i have to write the whole thing out before i can decide if it's stupid or not.

basically, what if you were to use fractions to do math?
so a number would be represented by a pair of integers, one of which is the Numerator and the other the Denominator.
for example 0.333... would be 1/3, 0.1 would be 1/10, and so on. of course that doesn't mean you have infinite precision, but if you use 16-bit ints for both numbers you should have some decent range.
the idea sounds similar to fixed-point numbers, but it actually works quite differently.

also ironically, with fractions addition and subtraction become more complicated, while multiplication and division become relatively simple.
having a hardware MUL/DIV co-processor that can handle 2 numbers at the same time would make this a lot faster.

if my plate wasn't already so full i might've tried this out
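
here's a rough sketch of the idea in python (obviously not 6502 code, and all the names are made up) - multiply and divide only need one multiplication per half, while addition has to go through a common denominator first:

Code:
# rough sketch: a rational as a (numerator, denominator) pair.
# no overflow handling and no reduction to lowest terms here -
# a real 16-bit version would need both.
from math import gcd

def r_mul(a, b):
    (an, ad), (bn, bd) = a, b
    return (an * bn, ad * bd)            # multiply straight across

def r_div(a, b):
    (an, ad), (bn, bd) = a, b
    return (an * bd, ad * bn)            # divide = multiply by the swapped pair

def r_add(a, b):
    (an, ad), (bn, bd) = a, b
    return (an * bd + bn * ad, ad * bd)  # cross-multiply to a common denominator

def r_reduce(a):
    n, d = a
    g = gcd(n, d)
    return (n // g, d // g)              # shrink the parts back down

third, tenth = (1, 3), (1, 10)
print(r_reduce(r_mul(third, tenth)))     # (1, 30)
print(r_reduce(r_add(third, tenth)))     # (13, 30)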


PostPosted: Wed May 17, 2023 7:06 pm 
Joined: Mon Jan 19, 2004 12:49 pm
Posts: 983
Location: Potsdam, DE
I've seen this proposed somewhere, but some time ago and I can't for the life of me remember where...

I suppose each number requires both a numerator and a denominator, and they'd both, I think, have to be big enough - 32 bits? - to be able to give both a reasonable range and a reasonably small fraction. So it's taking a lot of space for what might be a small number, unless you have a separate 'normalised' type with an implicit denominator (or indeed numerator) of 1.

Addition or subtraction would probably be easiest with a 'double' number type which can hold the two necessary cross multiplication results... so now you're looking at ideally a processor which can do an efficient 32x32=64 bit multiply (and divide, too, though perhaps you can dispose of the lower order bits by simple truncation?)
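
To make the widths concrete, here's the worst case in a few lines of Python (standing in for the 64-bit intermediates a real implementation would need):

Code:
# Adding a/b + c/d with 32-bit parts gives (a*d + c*b) / (b*d);
# each product is up to 64 bits, and their sum can be 65.
a, b = 0xFFFFFFFF, 0xFFFFFFFE    # deliberately close to the 32-bit limit
c, d = 0xFFFFFFFD, 0xFFFFFFFB

num = a * d + c * b
den = b * d
print(num.bit_length(), den.bit_length())    # 65 64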

Neil


PostPosted: Wed May 17, 2023 7:26 pm 
Joined: Sat Dec 01, 2018 1:53 pm
Posts: 730
Location: Tokyo, Japan
Proxy wrote:
basically, what if you were to use fractions to do math?
so a number would be represented by a pair of integers, one of which is the Numerator and the other the Denominator.

That's called rational numbers, and yeah, that's a standard data type in some languages, such as Haskell.

As you (implicitly) point out, it works best when you have arbitrary-precision integers. But even with lowish-precision integers it has its uses.

It's interesting and useful enough that, now that you've reminded me about it, I've added it to my enormous list of Things I Should Write For 8-Bit Computers.
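
If you want to play with the idea without writing anything, Python's fractions module (standing in here for Haskell's rational type) shows the behaviour such a type gives you for free:

Code:
from fractions import Fraction

# Exact arithmetic, with automatic reduction to lowest terms.
print(Fraction(1, 3) + Fraction(1, 6))   # 1/2
print(Fraction(2, 4) == Fraction(1, 2))  # True
print(1 / Fraction(7, 3))                # 3/7 - inversion really is just a swap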

_________________
Curt J. Sampson - github.com/0cjs


PostPosted: Wed May 17, 2023 8:06 pm 
Joined: Fri Aug 30, 2002 1:09 am
Posts: 8543
Location: Southern California
Proxy wrote:
basically, what if you were to use fractions to do math?
so a number would be represented by a pair of integers, one of which is the Numerator and the other the Denominator.

That's what rational numbers are all about, and it's also a use of scaled-integer arithmetic.  (Oh, I see Curt posted and said that while I was writing.)

Quote:
the idea sounds similar to fixed-point numbers, but it seems to work a lot more differently.

Right.  A fact that is constantly overlooked is that "integer" does not always mean "fixed-point."  Fixed-point is a limited subset of scaled-integer.

Quote:
also ironically, with fractions addition and subtraction become more complicated, while multiplication and division become relatively simple.

Addition and subtraction are trivial if the denominator is the same.  Don't divide until you need to.  A nice thing is that inversion requires only swapping the two numbers.

Quote:
having a hardware MUL/DIV co-processor that can handle 2 numbers at the same time would make this a lot faster.

Forth has the */ and related operators for this purpose, even on systems with no multiply and divide hardware.  (Forth allows for, but discourages, the use of floating-point, or at least used to in the years before processors had hardware floating-point units.)

Forum member Samuel Falvo (forum name kc5tja) writes, "The PC/GEOS (aka GeoWorks Ensemble) operating system used rationals for its GUI and was one of the first OSes on the entire planet to make use of scalable font technology, all without an FPU, and with adequate performance on an 8086 processor."


barnacle wrote:
I suppose each number requires both a numerator and a denominator, and they'd both, I think, have to be big enough - 32 bits? - to be able to give both a reasonable range and a reasonably small fraction.

Not necessarily.  A pair of unsigned 16-bit numbers can represent anything from 1/65,536 to 65,536, plus 0 and even infinity.  Not counting 0 and infinity, that's still a 32-bit range, even though neither number is 32-bit.  The */ Forth operator I mentioned above uses a multiple-precision intermediate result to cut errors even though that intermediate result is not saved.  You do the multiplication first so you don't lose the precision, and then do the division.  The intermediate result is double-precision to avoid overflow.  To multiply by π (3.1415926...) for example, you multiply your input number by 355 and get a 32-bit intermediate result, then divide that by 113 and get a 16-bit final result.  You can optionally handle the remainder after the division for more accuracy.  The error in this fraction is only 0.0000085%.  I have more such numbers at http://wilsonminesco.com/16bitMathTable ... pprox.html .
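
Here is that multiply-by-π example worked through in Python, just to show the order of operations (the widths are implicit here, where 6502 code would handle the 16- and 32-bit pieces explicitly):

Code:
# Forth-style */ : multiply first into a wide intermediate, then divide,
# so no precision is lost along the way.
def star_slash(n, mul, div):
    return (n * mul) // div              # n*mul needs 32 bits even when n fits in 16

radius = 10000                           # a 16-bit input
circumference = star_slash(2 * radius, 355, 113)
print(circumference)                     # 62831 (true value 62831.85...)

# Relative error of 355/113 itself, about 0.0000085%:
print(abs(355 / 113 - 3.14159265358979) / 3.14159265358979 * 100)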

_________________
http://WilsonMinesCo.com/ lots of 6502 resources
The "second front page" is http://wilsonminesco.com/links.html .
What's an additional VIA among friends, anyhow?


PostPosted: Wed May 17, 2023 8:51 pm 
Joined: Mon Jan 19, 2004 12:49 pm
Posts: 983
Location: Potsdam, DE
Indeed they can, but they require 32 bits to represent that 32 bit range. This is of course a problem with floats: e.g. a 32 bit float has a massive range, but it still only has 2^32 possibilities; there are a lot of values that cannot be accurately represented. And I think that the same applies to rational numbers in that there are a lot of gaps, but I haven't thought it through properly yet so I could be wrong there.

I was aware of the */ operator and that was what I had in mind with the 32x32=64 comment. To add two rational numbers (assuming 32 bit parameters) a/b and c/d, you'd need to cross multiply to get ad/bd and bc/bd, add those two together, and then scale back this 64 bit sum to 32 bits somehow. Can you just shift both parts of the sum by 32 bits? I dunno...
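
One possible answer, sketched in Python with the wide intermediates spelled out: you don't need a fixed 32-bit shift, just shift numerator and denominator right by the same amount until both fit again, and accept the truncation error in the low bits. This is only one policy, not a worked-out format:

Code:
# Renormalise a 64-ish-bit num/den pair back into 32 bits by shifting
# both halves by the same amount; the ratio survives, minus the bits
# truncated off the bottom.
def renorm32(num, den):
    shift = max(num.bit_length(), den.bit_length()) - 32
    if shift > 0:
        num >>= shift
        den >>= shift
    return num, den

a, b = 3000000000, 4000000007        # ~0.75, both parts 32-bit
c, d = 1000000001, 3000000000        # ~0.3333
num, den = a * d + c * b, b * d      # the wide cross-multiplied sum
n32, d32 = renorm32(num, den)
print(num / den, n32 / d32)          # both about 1.0833; the shifted pair is close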

Of course, the question with any numeric system is: does it adequately represent the values which I need to represent?

Neil


PostPosted: Wed May 17, 2023 9:16 pm 
Joined: Mon Jan 19, 2004 12:49 pm
Posts: 983
Location: Potsdam, DE
Hmm, I did a quick spreadsheet to look at the distribution of values with a four bit numerator and denominator.

It's clear that there are some problems - maybe they go away with more bits but I don't expect so. In particular, while the integer values 0-15 can be exactly represented, there are no fractional values available above 8. There are also a lot of repeated values: every integer below 8 is duplicated at least once, and '1' appears fifteen times. There are also fifteen zeros and fifteen infinities; many of the fractions are also present multiple times (0.5, 0.33, 0.25, 0.2, and 0.167).

That's a lot of wasted space.
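
The same tally takes only a few lines of Python, for anyone who wants to reproduce the spreadsheet (the counts here come from brute force, not from the attachment):

Code:
# Tally the values reachable with a 4-bit numerator and denominator.
from collections import Counter
from fractions import Fraction

vals = Counter(Fraction(n, d) for n in range(16) for d in range(1, 16))
print(len(vals))                                   # distinct finite values, far fewer than 16*15
print(vals[0], vals[1])                            # 15 zeros, 15 ways to make 1
print(max(v for v in vals if v.denominator > 1))   # 15/2 - no fractions above 7.5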

Neil


Attachment: rational.png (spreadsheet of the value distribution for a 4-bit numerator and denominator)
PostPosted: Wed May 17, 2023 9:42 pm 
Joined: Fri Aug 30, 2002 1:09 am
Posts: 8543
Location: Southern California
Representation of any given number of values will require at least enough bits for that many.  That's not to say the denominator always has to be stored with the numerator though.  For example, if you have an array or table of percentages, you don't need to store the "100" (ie, the denominator) with each number.  It is implied.

Similarly, for a particular setting, the denominator may be understood, as when dealing with angles, where a 16-bit circle works out nicely, scaling 360° to be exactly 65,536 counts, with 1 lsb being .00549316°, or .32959 arc-minutes, or 19.7754 arc-seconds.  The MOD function doesn't even come up, as there's no need for it.  359° plus 2° is still 1°, and 45° minus 90° is still -45° or 315°.  You can consider it signed or unsigned, either way.  90° is represented by $4000, 180° is represented by $8000, and 270° or -90° is represented by $C000.  A radian is represented by $28BE.  When you take the sin or cos, the output range will be ±1, which, for best resolution, we can scale to as high as ±$7FFF or $8000 depending on how you want to work the trapping (since you can't get +$8000 in a 16-bit signed number).
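
In Python terms (just the arithmetic idea; the real thing would be lookup tables in 6502 assembly), the wraparound comes for free from keeping everything mod $10000:

Code:
# Sketch of 16-bit binary angles: 65,536 counts per circle, so ordinary
# 16-bit wraparound is the mod-360 step.
import math

CIRCLE = 1 << 16                                   # $10000 counts = 360 degrees

def deg_to_bam(deg):
    return round(deg * CIRCLE / 360) & 0xFFFF

def sin_bam(a):
    return round(32767 * math.sin(a * 2 * math.pi / CIRCLE))   # +/-32767 full scale

print(hex(deg_to_bam(90)), hex(deg_to_bam(180)))         # 0x4000 0x8000
print(hex((deg_to_bam(359) + deg_to_bam(2)) & 0xFFFF))   # 0xb6, i.e. 1 degree
print(sin_bam(deg_to_bam(90)))                           # 32767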

In the trig functions, what about tangent, since the function has a nasty habit of going to ±infinity at ±90°?  Just return the sin & cos both, as a rational number.  Even infinity is represented, because you can have the denominator (the cos portion) as 0.  What you do with infinity is your business :lol: , but it can be represented!

_________________
http://WilsonMinesCo.com/ lots of 6502 resources
The "second front page" is http://wilsonminesco.com/links.html .
What's an additional VIA among friends, anyhow?


PostPosted: Wed May 17, 2023 10:20 pm 
Joined: Fri Aug 03, 2018 8:52 am
Posts: 746
Location: Germany
more bits don't make that issue go away, they just give the spikes more "resolution", so to speak...
Attachment: Jf96ZJyWao.png (plot of the distribution of representable values with more bits)

looking at this, it makes sense that the largest non-integer value is exactly half of the maximum integer you can have.
so for 4 bits that's 15, and 15/2 = 7.5.

GARTHWILSON wrote:
Representation of any given number of values will require at least enough bits for that many. That's not to say the denominator always has to be stored with the numerator though. For example, if you have an array or table of percentages, you don't need to store the "100" (ie, the denominator) with each number. It is implied.

wouldn't using an implied or fixed denominator just turn the number into a scaled integer?


PostPosted: Thu May 18, 2023 1:48 am 
Joined: Fri Aug 30, 2002 1:09 am
Posts: 8543
Location: Southern California
True; I was kind of mixing them together.  Most of the tables I provide in that section of my website are 128KB, with 65,536  16-bit results, all pre-calculated, so there's no interpolation needed; but for the inverse, it's twice that size, with 65,536  32-bit results, intended partly for simplifying division by looking up the reciprocal to multiply by, using other tables to speed the multiplication.  You don't have to use all four bytes of a reciprocal in every case; but the availability means you can get the needed resolution in whatever part of the range you're working in.
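
Not the actual layout of those tables, but the general trick of trading a divide for a lookup and a multiply looks roughly like this (sketch only; the table here is a hypothetical 2^32-scaled reciprocal for every 16-bit divisor):

Code:
# Division by reciprocal lookup: build recip[d] ~ 2^32/d once, then
# n // d becomes a multiply, a shift, and a small correction (the
# stored reciprocal is truncated, so the first guess can be low).
SCALE = 1 << 32
recip = [0] + [SCALE // d for d in range(1, 65536)]

def div_by_table(n, d):
    q = (n * recip[d]) >> 32
    while (q + 1) * d <= n:          # nudge up past the truncation error
        q += 1
    return q

for n, d in [(1000000, 7), (65535, 3), (123456, 355)]:
    assert div_by_table(n, d) == n // d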

_________________
http://WilsonMinesCo.com/ lots of 6502 resources
The "second front page" is http://wilsonminesco.com/links.html .
What's an additional VIA among friends, anyhow?


PostPosted: Fri May 19, 2023 4:03 am 
Joined: Sun Mar 19, 2023 2:04 pm
Posts: 137
Location: about an hour outside of Springfield

NZQRC

Integer, Signed, Rational, High-Precision 'point' numbers (or Scientific Notation, or Symbolic Real Numbers), Complex Numbers

so integers are straightforward; signed numbers require a sign bit and center the distribution around zero, yet are still easily represented as binary or BCD numbers. the 'Q' numbers are rational: the numerator and denominator should ideally be any other type of number, however for simplicity I would say use either an Integer or Signed Int for both,

now normally floating- or fixed-point numbers would be next, however I think as you move into more symbolic maths, the limitations of storing such a number become more obvious; you move to a system where the representation is not fixed until needed, yet is always exactly predictable to a given place or level of precision. so if you want Pi, you calculate it out from a unit circle or some other convenient measure, and you select how many digits of precision when you 'call Pi'. same for Phi or e or whatever, so you never have a rounding or truncation error; you have whatever you calculate out that you need 'at run time'. this makes things a bit harder, though still reliable,

and then lastly having a solid representation of complex numbers facilitates all kinds of neat things.

I think having both fixed- and floating-point numbers is okay for a math-heavy library, and then you have scientific notation, which, while basically a 'float', is a fixed-point number with an exponent, and these can then be operated on in ways that might be faster than traditional floating-point math,

logarithms should be covered somewhere, as well as exponential numbers

still, a good matrix library can do lots of things with just 1s and 0s...


PostPosted: Fri May 19, 2023 9:40 am 
Joined: Sat Dec 01, 2018 1:53 pm
Posts: 730
Location: Tokyo, Japan
wayfarer wrote:
NZQRC
Integer, Signed, Rational, High-Precision 'point' numbers (or Scientific Notation, or Symbolic Real Numbers), Complex Numbers

It may help to look at the number system classifications on Wikipedia. You can, of course, use terms however you like, but using the standard terms in the standard ways will reduce confusion all around.

In math, integers are signed as they go on infinitely in both directions along the number line (except when using modular arithmetic).

ℕ is the "natural" numbers, which have a starting point and go infinitely in one direction. These may start at 0 or 1, if you're counting them using numerals, or might be counted without numerals at all, as in Peano numerals (Z, S(Z), S(S(Z)), ...) or Church encoding.

Typical computer encodings of numbers use modular arithmetic, and it's because of that that you distinguish between signed and unsigned representations: you're trying to save space and time by reducing the range of your numbers to something covering the range you need. (Adding even one bit to your range can, depending on how many bits you already have, double the storage space and more than double the cycles needed to do your arithmetic. This is not necessarily just on byte or word boundaries, either; a system I'm working on uses 14-bit "small" ints because adding a 15th bit would blow up my space and time as above.)

ℝ are the "real" numbers. These do not have inherent precision; the precision, and whether or not you use "scientific notation" (more on this below) are artifacts of your implementation.

Quote:
...the 'Q' numbers are rational: the numerator and denominator should ideally be any other type of number, however for simplicity I would say use either an Integer or Signed Int for both...

Actually, it's simpler to use a signed integer for just one of the numerator or denominator (typically the numerator). If both are signed you then have to deal with calculating the sign based on the two signs in your representation: +/+ and -/- are positive, and +/- and -/+ are negative.
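
A sketch of that normalisation step, assuming the convention that the denominator is always kept positive:

Code:
# Keep the sign on the numerator only: if the denominator comes out
# negative, flip both.  +/+ and -/- land on a positive numerator,
# +/- and -/+ on a negative one, and no further case analysis is needed.
def normalize(num, den):
    if den < 0:
        num, den = -num, -den
    return num, den

print(normalize(3, -4))     # (-3, 4)
print(normalize(-3, -4))    # (3, 4)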

Quote:
...and then you have scientific notation, which, while basically a 'float', is a fixed-point number with an exponent, and these can then be operated on in ways that might be faster than traditional floating-point math...

What you've described is traditional floating point math: a significand scaled by an integer exponent of a fixed base. This is not, however, "scientific notation"; whether you choose to represent (i=.123,e=2) as 12.3 or 1.23 × 10² (the latter being scientific notation) is irrelevant to the floating point format. (I understand that this distinction may seem subtle.)
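
A tiny illustration of that distinction (the decimal significand and exponent here are a made-up encoding, chosen only for readability):

Code:
# The stored pair is just significand * base**exponent; whether it is shown
# as 12300 or 1.23e+04 is a formatting decision, not a different number format.
sig, exp = 123, 2
value = sig * 10 ** exp
print(value)                # 12300
print(f"{value:.2e}")       # 1.23e+04 - "scientific notation" is only rendering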


Quote:
...however I think as you move into more symbolic maths, the limitations of storing such a number become more obvious; you move to a system where the representation is not fixed until needed, yet is always exactly predictable to a given place or level of precision. so if you want Pi, you calculate it out from a unit circle or some other convenient measure...

Or you just do symbolic math and π is π everywhere, including in your output.

You may also find it useful to look at how numeric towers and number systems are dealt with in functional programming languages, which tend to be a little more rigorous about how they set these things up. The Spire package for Scala is a simpler example, and if you want to get really serious you can look at the various numeric towers available for Haskell, which usually work up to the various sets of operations from basic algebraic structures such as monoids, semigroups, and so on and are very careful (due to the nature of the language) about things like partial functions (such as division). The Numbers section of the Prelude is what most people use, but NumHask looks like a good alternative, though I've not investigated it.

_________________
Curt J. Sampson - github.com/0cjs

