Let's Talk About Math Baby
Re: Let's Talk About Math Baby
Re: Duodecimal or dozenal
There's a group trying to promote the switch from base ten to twelve. Here's a link to their website:
http://www.dozenalsociety.org.uk/
I am sympathetic with the desire to switch, as twelve is a highly composite number. There's a reason why a dozen is a dozen after all. But translation from binary to duodecimal isn't as easy as going to octal or hex. So I am not sure we would be that much better off.
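To make the conversion point concrete, here's a quick Python sketch (not from the post): hex digits fall out of the bits by masking and shifting, while dozenal digits each cost a genuine division, because 12 is not a power of two.

```python
def to_hex(n):
    # Each hex digit is exactly 4 bits, so shifting and masking suffice.
    digits = "0123456789ABCDEF"
    out = ""
    while True:
        out = digits[n & 0xF] + out
        n >>= 4
        if n == 0:
            return out

def to_base12(n):
    # 12 is not a power of two, so each digit costs a real division.
    # X and E for ten and eleven is a common dozenal convention.
    digits = "0123456789XE"
    out = ""
    while True:
        n, r = divmod(n, 12)
        out = digits[r] + out
        if n == 0:
            return out
```

On a processor without hardware divide, that per-digit division is exactly the cost that makes duodecimal less attractive than octal or hex.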
-
teamtempest
- Posts: 443
- Joined: 08 Nov 2009
- Location: Minnesota
- Contact:
Re: Let's Talk About Math Baby
Quote:
Martin_H wrote:
You might want to take a look at Calc65 which is a BCD math package: http://www.crbond.com/downloads/fltpt65.cba
Note that Bond’s assembly language source is designed to work with a somewhat-strange assembler he developed. I’ve looked at the source code but have not tried to assemble or otherwise use it.
Code:
bcc @B
bcs @F
The source itself is also pretty well commented.
Re: Let's Talk About Math Baby
A few years ago I ported that BCD code to the ophis assembler and added macros to make the code more concise in places. It does work, but I haven't used it in any projects.
https://github.com/Martin-H1/6502/blob/ ... cdmath.asm
Re: Let's Talk About Math Baby
Martin_H wrote:
I am sympathetic with the desire to switch, as twelve is a highly composite number. There's a reason why a dozen is a dozen after all. But translation from binary to duodecimal isn't as easy as going to octal or hex. So I am not sure we would be that much better off.
I spent considerable time trying to make base 60 work first, but it just wasn't happening, due to notation problems.
Curt J. Sampson - github.com/0cjs
Re: Let's Talk About Math Baby
while i was standing in line in the supermarket, looking at the thread... i thought of something.
and now i have to write the whole thing out before i can decide if it's stupid or not.
basically, what if you were to use fractions to do math?
so a number would be represented by a pair of integers, one of which is the Numerator and the other the Denominator.
for example 0.333 would be 1/3, 0.1 would be 1/10, and so on. of course that doesn't mean you have infinite precision, but if you use 16-bit ints for both numbers you should have some decent range.
the idea sounds similar to fixed-point numbers, but it seems to work quite differently.
also ironically, with fractions addition and subtraction become more complicated, while multiplication and division become relatively simple.
having a hardware MUL/DIV co-processor that can handle 2 numbers at the same time would make this a lot faster.
if my plate wasn't already so full i might've tried this out
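The idea above can be sketched in a few lines of Python (a sketch, not a real implementation; the function names are made up): a number is just a (numerator, denominator) pair, and, as noted, multiplication and division are the easy operations while addition needs cross-multiplication.

```python
from math import gcd

# A rational as a plain (numerator, denominator) pair.

def r_mul(a, b):
    # Multiply numerators and denominators pairwise, then reduce.
    n = a[0] * b[0]
    d = a[1] * b[1]
    g = gcd(n, d)
    return (n // g, d // g)

def r_div(a, b):
    # Dividing by c/d is just multiplying by d/c.
    return r_mul(a, (b[1], b[0]))

def r_add(a, b):
    # a/b + c/d = (a*d + c*b) / (b*d): two multiplies, an add,
    # and a reduction -- more work than r_mul, as the post says.
    n = a[0] * b[1] + b[0] * a[1]
    d = a[1] * b[1]
    g = gcd(n, d)
    return (n // g, d // g)
```

The gcd reduction is what keeps the 16-bit pairs from overflowing immediately, though it can't prevent overflow in general.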
Re: Let's Talk About Math Baby
I've seen this proposed somewhere, but some time ago and I can't for the life of me remember where...
I suppose each number requires both a numerator and a denominator, and they'd both, I think, have to be big enough - 32 bits? - to be able to give both a reasonable range and a reasonably small fraction. So it's taking a lot of space for what might be a small number, unless you have a separate 'normalised' type with an implicit denominator (or indeed numerator) of 1.
Addition or subtraction would probably be easiest with a 'double' number type which can hold the two necessary cross multiplication results... so now you're looking at ideally a processor which can do an efficient 32x32=64 bit multiply (and divide, too, though perhaps you can dispose of the lower order bits by simple truncation?)
Neil
Re: Let's Talk About Math Baby
Proxy wrote:
basically, what if you were to use fractions to do math?
so a number would be represented by a pair of integers, one of which is the Numerator and the other the Denominator.
As you (implicitly) point out, it works best when you have arbitrary-precision integers. But even with lowish-precision integers it has its uses.
It's interesting and useful enough that, now that you've reminded me about it, I've added it to my enormous list of Things I Should Write For 8-Bit Computers.
Curt J. Sampson - github.com/0cjs
- GARTHWILSON
- Forum Moderator
- Posts: 8775
- Joined: 30 Aug 2002
- Location: Southern California
- Contact:
Re: Let's Talk About Math Baby
Proxy wrote:
basically, what if you were to use fractions to do math?
so a number would be represented by a pair of integers, one of which is the Numerator and the other the Denominator.
Quote:
the idea sounds similar to fixed-point numbers, but it seems to work a lot more differently.
Quote:
also ironically, with fractions addition and subtraction become more complicated, while multiplication and division become relatively simple.
Quote:
having a hardware MUL/DIV co-processor that can handle 2 numbers at the same time would make this a lot faster.
Forum member Samuel Falvo (forum name kc5tja) writes, "The PC/GEOS (aka GeoWorks Ensemble) operating system used rationals for its GUI and was one of the first OSes on the entire planet to make use of scalable font technology, all without an FPU, and with adequate performance on an 8086 processor."
barnacle wrote:
I suppose each number requires both a numerator and a denominator, and they'd both, I think, have to be big enough - 32 bits? - to be able to give both a reasonable range and a reasonably small fraction.
http://WilsonMinesCo.com/ lots of 6502 resources
The "second front page" is http://wilsonminesco.com/links.html .
What's an additional VIA among friends, anyhow?
Re: Let's Talk About Math Baby
Indeed they can, but they require 32 bits to represent that 32 bit range. This is of course a problem with floats: e.g. a 32 bit float has a massive range, but it still only has 2^32 possibilities; there are a lot of values that cannot be accurately represented. And I think that the same applies to rational numbers in that there are a lot of gaps, but I haven't thought it through properly yet so I could be wrong there.
I was aware of the */ operator and that was what I had in mind with the 32x32=64 comment. To add two rational numbers (assuming 32 bit parameters) a/b and c/d, you'd need to cross multiply to get ad/bd and bc/bd, add those two together, and then scale back this 64 bit sum to 32 bits somehow. Can you just shift both parts of the sum by 32 bits? I dunno...
Of course, the question with any numeric system is: does it adequately represent the values which I need to represent?
Neil
Re: Let's Talk About Math Baby
Hmm, I did a quick spreadsheet to look at the distribution of values with a four bit numerator and denominator.
It's clear that there are some problems - maybe they go away with more bits but I don't expect so. In particular, while the integer values 0-15 can be exactly represented, there are no fractional values available above 8. There are also a lot of repeated values: every integer below 8 is duplicated at least once, and '1' appears fifteen times. There are also fifteen zeros and fifteen infinities; many of the fractions are also present multiple times (0.5, 0.33, 0.25, 0.2, and 0.167).
That's a lot of wasted space.
Neil
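The spreadsheet tally is easy to reproduce (a Python sketch of the same 4-bit numerator/denominator experiment):

```python
from fractions import Fraction

# Enumerate every 4-bit numerator/denominator pair and tally how
# often each distinct value occurs.
values = {}
zeros = infinities = 0
for n in range(16):
    for d in range(16):
        if d == 0:
            if n != 0:
                infinities += 1      # n/0
            continue                 # 0/0 is indeterminate; skip it
        if n == 0:
            zeros += 1
            continue
        v = Fraction(n, d)
        values[v] = values.get(v, 0) + 1

# '1' occurs fifteen times (1/1, 2/2, ..., 15/15), and the largest
# non-integer value is 15/2 = 7.5.
```

Running this confirms the counts above: fifteen zeros, fifteen infinities, fifteen copies of 1, and nothing fractional beyond 15/2.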
Re: Let's Talk About Math Baby
Representation of any given number of values will require at least enough bits for that many. That's not to say the denominator always has to be stored with the numerator though. For example, if you have an array or table of percentages, you don't need to store the "100" (ie, the denominator) with each number. It is implied.
Similarly, for a particular setting, the denominator may be understood, as in dealing with angles, where a 16-bit circle works out nicely, scaling 360° to be exactly 65,536 counts, with 1 lsb being .00549316°, or .32959 arc-minutes, or 19.7754 arc-seconds. The MOD function doesn't even come up, as there's no need for it. 359° plus 2° is still 1°, and 45° minus 90° is still -45° or 315°. You can consider it signed or unsigned, either way. 90° is represented by $4000, 180° is represented by $8000, and 270° or -90° is represented by $C000. A radian is represented by $28BE. When you take the sin or cos, the output range will be ±1, which, for best resolution, we can scale to as high as ±$7FFF or $8000 depending on how you want to work the trapping (since you can't get +$8000 in a 16-bit signed number).
In the trig functions, what about tangent, since the function has a nasty habit of going to ±infinity at ±90°? Just return the sin & cos both, as a rational number. Even infinity is represented, because you can have the denominator (the cos portion) as 0. What you do with infinity is your business, but it can be represented!
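A rough illustration of the 16-bit circle (a Python sketch, not Garth's code; the function names are made up):

```python
# 16-bit "binary angle": the full circle is 65,536 counts, so the
# ordinary wrap-around of 16-bit addition IS the MOD operation.

def deg_to_bam(deg):
    # Scale 360 degrees onto 0..65535, rounding to the nearest count.
    return round(deg * 65536 / 360) & 0xFFFF

def bam_add(a, b):
    # 16-bit wrap-around addition: 359 deg + 2 deg lands on 1 deg.
    return (a + b) & 0xFFFF
```

With this encoding 90° comes out as $4000, 180° as $8000, and 270° (or -90°) as $C000, exactly as described above.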
http://WilsonMinesCo.com/ lots of 6502 resources
The "second front page" is http://wilsonminesco.com/links.html .
What's an additional VIA among friends, anyhow?
Re: Let's Talk About Math Baby
more bits doesn't make that issue go away, it just gives the spikes more "resolution" so to say...
looking at this, it makes sense that the largest fractional number is exactly half the maximum integer you can have.
so for 4 bits that's 15. and 15/2 = 7.5.
wouldn't using an implied or fixed denominator just turn the number into a scaled integer?
GARTHWILSON wrote:
Representation of any given number of values will require at least enough bits for that many. That's not to say the denominator always has to be stored with the numerator though. For example, if you have an array or table of percentages, you don't need to store the "100" (ie, the denominator) with each number. It is implied.
Re: Let's Talk About Math Baby
True; I was kind of mixing them together. Most of the tables I provide in that section of my website are 128KB, with 65,536 16-bit results, all pre-calculated, so there's no interpolation needed; but for the inverse, it's twice that size, with 65,536 32-bit results, intended partly for simplifying division by looking up the reciprocal to multiply by, using other tables to speed the multiplication. You don't have to use all four bytes of a reciprocal in every case; but the availability means you can get the needed resolution in whatever part of the range you're working in.
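The divide-by-reciprocal idea can be sketched like this (this is not the actual table format from the site; the 2^32 scaling and the ceiling adjustment are my assumptions, and the argument below only covers 16-bit operands):

```python
# Division via a precomputed reciprocal: store ceil(2**32 / d) once,
# then each division becomes a multiply and a shift.  For 16-bit n
# and d, the rounding excess is n*e/(d*2**32) with e < d, and since
# n*e < 2**32 here, the shifted product is the exact floor quotient.

def make_recip(d):
    # ceil(2**32 / d); the rounding-up is what prevents undershoot
    # (with floor(2**32/7), 21/7 would come out as 2 instead of 3).
    return (1 << 32) // d + (1 if (1 << 32) % d else 0)

def div_by_recip(n, recip):
    return (n * recip) >> 32
```

In table form you'd precompute make_recip for every divisor of interest, which is the "looking up the reciprocal to multiply by" step.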
http://WilsonMinesCo.com/ lots of 6502 resources
The "second front page" is http://wilsonminesco.com/links.html .
What's an additional VIA among friends, anyhow?
Re: Let's Talk About Math Baby
NZQRC
Integer, Signed, Rational, High-Precision 'point' numbers (or Scientific Notation, or Symbolic Real Numbers), Complex Numbers
so integers are straightforward; signed numbers require a sign bit and center the distribution around zero, yet are still easily represented as binary or BCD numbers; the 'Q' numbers are rational, and the numerator and denominator should ideally be any other type of number, but for simplicity I would say use either an Integer or a Signed Int for both,
now normally floating- or fixed-point numbers would be next, however I think as you move into more symbolic maths the limitations of storing such a number become more obvious; moving to a system where representation is not fixed until needed, and is always exactly predictable to a given place or level of precision: so if you want Pi, you calculate it out from a unit circle or some other convenient meter, and you select how many digits of precision when you 'call Pi'. Same for Phi or e or whatever, so you never have a rounding or truncation error; you have whatever you calculate out that you need at run time. This makes things a bit harder, though still reliable,
and then lastly having a solid representation of complex numbers facilitates all kinds of neat things.
I think having both fixed- and floating-point numbers is okay for a math-heavy library, and then you have scientific notation, which, while basically a 'float', is a fixed-point number with an exponent, and these can then be operated on in ways that might be faster than traditional floating-point math.
logarithms should be covered somewhere, as well as exponential numbers
still, a good matrix library can do lots of things with just 1s and 0s...
Re: Let's Talk About Math Baby
wayfarer wrote:
NZQRC
Integer, Signed, Rational, High-Precision 'point' numbers (or Scientific Notation (OR|| Symbolic Real Numbers)), Complex Numbers
In math, integers are signed as they go on infinitely in both directions along the number line (except when using modular arithmetic).
ℕ is the "natural" numbers, which have a starting point and go infinitely in one direction. These may start at 0 or 1, if you're counting them using numerals, or might be counted without numerals at all, as in Peano numerals (Z, S(Z), S(S(Z)), ...) or Church encoding.
Typical computer encodings of numbers use modular arithmetic, and that is why you distinguish between signed and unsigned representations: you're trying to save space and time by reducing the range of your numbers to something covering the range you need. (Adding even one bit to your range can, depending on how many bits you already have, double the storage space and more than double the cycles needed to do your arithmetic. This is not necessarily just on byte or word boundaries, either; a system I'm working on uses 14-bit "small" ints because adding a 15th bit would blow up my space and time as above.)
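The modular point fits in a couple of lines (a Python sketch; the 14-bit width is taken from the paragraph above):

```python
# A 14-bit unsigned "small int" is arithmetic modulo 2**14:
# results simply wrap at 16384.
MASK14 = (1 << 14) - 1           # 16383, the largest 14-bit value

def add14(a, b):
    return (a + b) & MASK14
```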
ℝ are the "real" numbers. These do not have inherent precision; the precision, and whether or not you use "scientific notation" (more on this below) are artifacts of your implementation.
Quote:
...the 'Q' numbers are rational and the numerator and denominator should ideally be any other type of number however for simplicity I would say use either an Integer or Signed Int for both...
Quote:
...and then you have scientific notation, which while basically a 'float', it is a fixed point number, with an exponent, and these can then be operated on in ways that might be faster than traditional floating point math...
Quote:
...however I think as you move into more symbolic maths, the limitations of storing this number is more obvious; moving to a system where representation is not 'fixed until needed' and 'always exactly predictable to a given place or level of precision' so if you want Pi, you calculate it out from a Unit circle or some other convenient meter...
You may also find it useful to look at how numeric towers and number systems are dealt with in functional programming languages, which tend to be a little more rigorous about how they set these things up. The Spire package for Scala is a simpler example, and if you want to get really serious you can look at the various numeric towers available for Haskell, which usually work up to the various sets of operations from basic algebraic structures such as monoids, semigroups, and so on and are very careful (due to the nature of the language) about things like partial functions (such as division). The Numbers section of the Prelude is what most people use, but NumHask looks like a good alternative, though I've not investigated it.
Curt J. Sampson - github.com/0cjs