PostPosted: Mon Jan 24, 2011 5:41 am 

Joined: Wed Jan 19, 2011 10:20 pm
Posts: 42
Hi all,
for some fun, I am attempting to make a 'compiler' that compiles a high-level language similar to Pascal (Delphi) into C64 assembly code, and I have some number questions if I may :)

I need some code (C/C++/Pascal, whatever) that can convert a number, either an integer (4, 2, -5) or floating point (-1.5, 0.3E-10, etc.) into the compact memory form that the C64 floating point routines can use.

In other words, I want to change this for example:

Code:
const
  a = -1.5; //floating point number
  b = 3;     //integer number


into

Code:
const_a
  .byte <a as some hex digits in C64 fp compact format>

const_b
  .byte <b as some hex digits in C64 fp compact format>


so the C64 can do maths on this number using its built-in floating point routines...

Any ideas?

I have looked up how the C64 floating point maths stuff works, but I am totally confused :)

cheers,
Paul

_________________
"The plastic veneer of civilization is easily melted in the heat of the moment" - Paul Nicholls


PostPosted: Mon Jan 24, 2011 6:46 pm 

Joined: Thu May 28, 2009 9:46 pm
Posts: 8176
Location: Midwestern USA
paul_nicholls wrote:
Hi all,
for some fun, I am attempting to make a 'compiler' that compiles a high-level language similar to Pascal (Delphi) into C64 assembly code, and I have some number questions if I may :)

C64 assembly code? Is that anything like 6502 assembly language? <Grin>

Quote:
I need some code (C/C++/Pascal, whatever) that can convert a number, either an integer (4, 2, -5) or floating point (-1.5, 0.3E-10, etc.) into the compact memory form that the C64 floating point routines can use.

The "compact memory form" to which you refer is called excess-128 notation and was a "feature" of early Micro-Soft BASIC (the BASIC that Bill Gates complained was being stolen by the Homebrew Computer Club). Excess-128 is explained in quite a few books about the C-64, of which Mapping the Commodore 64 immediately comes to mind. That tome details the BASIC ROM locations that handle floating point math.

You should be aware that the version of excess-128 notation used in all Commodore 8 bit machines is not very accurate. It only produces a seven significant digit result and is prone to decimal-to-binary conversion errors. If arithmetic results must be accurate (as for, say, a financial program), you'd be better off writing your own routines using BCD, which, while slower, doesn't suffer from D-to-B conversion errors. Take a look at Kimath, which is around here somewhere.

Lastly, if you decide to go with BCD, be sure to disable IRQs while your math package is being called. The C-64 interrupt handlers do not put the MPU into binary mode. If it is in decimal mode when an interrupt occurs, your machine may go belly-up or engage in rude and bizarre behavior.

_________________
x86?  We ain't got no x86.  We don't NEED no stinking x86!


PostPosted: Mon Jan 24, 2011 11:16 pm 

Joined: Wed Jan 19, 2011 10:20 pm
Posts: 42
BigDumbDinosaur wrote:
paul_nicholls wrote:
Hi all,
for some fun, I am attempting to make a 'compiler' that compiles a high-level language similar to Pascal (Delphi) into C64 assembly code, and I have some number questions if I may :)

C64 assembly code? Is that anything like 6502 assembly language? <Grin>


hahaha! Good one :)

BigDumbDinosaur wrote:
paul_nicholls wrote:
I need some code (C/C++/Pascal, whatever) that can convert a number, either an integer (4, 2, -5) or floating point (-1.5, 0.3E-10, etc.) into the compact memory form that the C64 floating point routines can use.

The "compact memory form" to which you refer is called excess-128 notation and was a "feature" of early Micro-Soft BASIC (the BASIC that Bill Gates complained was being stolen by the Homebrew Computer Club). Excess-128 is explained in quite a few books about the C-64, of which Mapping the Commodore 64 immediately comes to mind. That tome details the BASIC ROM locations that handle floating point math.

Ok, good to know :)

BigDumbDinosaur wrote:
You should be aware that the version of excess-128 notation used in all Commodore 8 bit machines is not very accurate. It only produces a seven significant digit result and is prone to decimal-to-binary conversion errors.


Ok, now I am confused - this site:

http://www.devili.iki.fi/Computers/Commodore/C64/Programmers_Reference/Chapter_1/page_005.html

Says 5 bytes of memory for fp numbers, and 10 places of accuracy?

BigDumbDinosaur wrote:
If arithmetic results must be accurate (as for, say, a financial program), you'd be better off writing your own routines using BCD, which, while slower, doesn't suffer from D-to-B conversion errors. Take a look at Kimath, which is around here somewhere.

Lastly, if you decide to go with BCD, be sure to disable IRQs while your math package is being called. The C-64 interrupt handlers do not put the MPU into binary mode. If it is in decimal mode when an interrupt occurs, your machine may go belly-up or engage in rude and bizarre behavior.


It sounds like from here:

http://www.c64-wiki.com/index.php/Floating_point_arithmetic

that nothing is lost using the 5 byte format for fp numbers due to the assumption:

"Since the mantissa is always in the 1-to-2 range, the first binary digit will always be a "1" — no need to store that."

Anyway, I don't suppose you could still point me to somewhere (if possible) that shows how to create a 5 byte fp number suitable for use with the FAC and ARG routines using some algorithm (not C64 routines)?

I would still like to know this :)

cheers,
Paul

_________________
"The plastic veneer of civilization is easily melted in the heat of the moment" - Paul Nicholls


PostPosted: Tue Jan 25, 2011 10:04 am 

Joined: Wed Oct 22, 2003 4:07 am
Posts: 51
Location: Norway
paul_nicholls wrote:
BigDumbDinosaur wrote:
paul_nicholls wrote:
Hi all,
for some fun, I am attempting to make a 'compiler' that compiles a high-level language similar to Pascal (Delphi) into C64 assembly code, and I have some number questions if I may :)

C64 assembly code? Is that anything like 6502 assembly language? <Grin>


hahaha! Good one :)

BigDumbDinosaur wrote:
paul_nicholls wrote:
I need some code (C/C++/Pascal, whatever) that can convert a number, either an integer (4, 2, -5) or floating point (-1.5, 0.3E-10, etc.) into the compact memory form that the C64 floating point routines can use.

The "compact memory form" to which you refer is called excess-128 notation and was a "feature" of early Micro-Soft BASIC (the BASIC that Bill Gates complained was being stolen by the Homebrew Computer Club). Excess-128 is explained in quite a few books about the C-64, of which Mapping the Commodore 64 immediately comes to mind. That tome details the BASIC ROM locations that handle floating point math.


Ok, good to know :)

BigDumbDinosaur wrote:
You should be aware that the version of excess-128 notation used in all Commodore 8 bit machines is not very accurate. It only produces a seven significant digit result and is prone to decimal-to-binary conversion errors.


Ok, now I am confused - this site:

http://www.devili.iki.fi/Computers/Commodore/C64/Programmers_Reference/Chapter_1/page_005.html

Says 5 bytes of memory for fp numbers, and 10 places of accuracy?

9 significant digits, not 10 or 7. BDD might be thinking of single precision floating point (32-bit fp), which has 7 (decimal) digit precision. (Intermediate results will have higher precision though, for correct rounding.) The 5 byte fp format of the C64 is good enough for what you asked; it can accurately represent all 32 bit integers and all (single precision) floating point numbers.
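
(As a quick cross-check of those digit counts: an n-bit mantissa carries roughly n * log10(2) decimal digits. A tiny C sketch; nothing C64-specific is assumed here:)

Code:
#include <math.h>
#include <stdio.h>

/* Decimal digits carried by an n-bit binary mantissa: n * log10(2). */
int main(void)
{
    printf("24-bit mantissa (IEEE single):    %.1f digits\n", 24 * log10(2.0));
    printf("32-bit mantissa (C64 5-byte fp):  %.1f digits\n", 32 * log10(2.0));
    return 0;   /* prints 7.2 and 9.6, matching the 7- and 9-digit figures */
}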

paul_nicholls wrote:
BigDumbDinosaur wrote:
If arithmetic results must be accurate (as for, say, a financial program), you'd be better off writing your own routines using BCD, which, while slower, doesn't suffer from D-to-B conversion errors. Take a look at Kimath, which is around here somewhere.

Lastly, if you decide to go with BCD, be sure to disable IRQs while your math package is being called. The C-64 interrupt handlers do not put the MPU into binary mode. If it is in decimal mode when an interrupt occurs, your machine may go belly-up or engage in rude and bizarre behavior.


It sounds like from here:

http://www.c64-wiki.com/index.php/Floating_point_arithmetic

that nothing is lost using the 5 byte format for fp numbers due to the assumption:

"Since the mantissa is always in the 1-to-2 range, the first binary digit will always be a "1" — no need to store that."

Not storing the most significant 1 is standard practice and does not lose anything (in fact you gain 1 bit of precision overall). The problem is that in conversion from a fractional decimal number to a fractional binary number you usually lose some precision. But overall, binary numbers have a precision advantage over BCD of the same size, because BCD does not use all possible combinations of bits available for any given number of bits.

paul_nicholls wrote:
Anyway, I don't suppose you could still point me to somewhere (if possible) that shows how to create a 5 byte fp number suitable for use with the FAC and ARG routines using some algorithm (not C64 routines)?

I would still like to know this :)

cheers,
Paul


PostPosted: Tue Jan 25, 2011 11:26 am 

Joined: Wed Jan 19, 2011 10:20 pm
Posts: 42
Thanks for the extra info Thowllly :)

I am perfectly happy to be able to use up to 32-bit integer and single-precision floating point values in my compiler... I am not looking at doing financial programs, just some fun C64 coding - maybe some games, etc.

cheers,
Paul

_________________
"The plastic veneer of civilization is easily melted in the heat of the moment" - Paul Nicholls


PostPosted: Tue Jan 25, 2011 11:56 am 

Joined: Tue Nov 16, 2010 8:00 am
Posts: 2353
Location: Gouda, The Netherlands
Financial programs don't use floating point anyway. It's too sensitive to accumulating errors when adding up small amounts. Instead they use integer amounts of cents (or smaller, depending on the application).


PostPosted: Tue Jan 25, 2011 1:36 pm 

Joined: Tue Mar 02, 2004 8:55 am
Posts: 996
Location: Berkshire, UK
Arlet wrote:
Financial programs don't use floating point anyway. It's too sensitive to accumulating errors when adding up small amounts. Instead they use integer amounts of cents (or smaller, depending on the application).


Yep. We only use floating point when calculating prices (e.g. for derivatives) and for risk measures (e.g. delta, gamma, Value@Risk). Monetary amounts are usually in BCD-like formats.

_________________
Andrew Jacobs
6502 & PIC Stuff - http://www.obelisk.me.uk/
Cross-Platform 6502/65C02/65816 Macro Assembler - http://www.obelisk.me.uk/dev65/
Open Source Projects - https://github.com/andrew-jacobs


PostPosted: Tue Jan 25, 2011 5:09 pm 

Joined: Sat Jan 04, 2003 10:03 pm
Posts: 1706
If anyone can find a binary representation for 0.3 or 0.1, let me know. While you're at it, publish a paper in ACM or IEEE Computer, and earn yourself a position at some prestigious university. Even with *80* bits of precision, you can't represent either number precisely, and so when printing values, you have to introduce a fudge factor to make the display right. Thankfully, Python doesn't bother with a fudge factor that I know of, so it exposes floating point's warts. For example:

Code:
kc5tja@capella ~]python
Python 2.6 (r26:66714, Jan 22 2009, 12:45:42)
[GCC 4.1.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> a=0.1
>>> a
0.10000000000000001
>>> b=0.3
>>> b
0.29999999999999999


BCD has the benefit that it can represent numbers which binary cannot, thus invalidating the argument that binary floating point is always more precise than an equivalently-sized BCD floating point quantity. Representing 0.3 in BCD requires only two bytes of space, but known binary representations take an infinite amount of RAM.

I do not know if the reverse holds as well; it is at least conceivable that binary potentially can represent numbers which BCD similarly couldn't represent compactly. However, I cannot think of any examples at the moment.

Note that this explicitly doesn't cover rational representation of numbers.

Binary floating point works exactly like its decimal equivalent, more commonly known as "scientific notation." The trick is to recognize binary figures after the decimal point:

Code:
Value:   8     4     2     1    0.5   0.25   0.125
Power:  2^3   2^2   2^1   2^0   2^-1  2^-2   2^-3


So representing the number 8, you could express it as 1000, which is the same as 1000.0e+0 (where "e" refers to a power of two, not a power of ten). This is the same as writing 1.000e+3 (notice the movement of the binary point and its relationship with the exponent). Hence, the number 8 can be expressed as the tuple <3,0>, with 3 being the exponent, and 0 being the mantissa bits, sans the leading '1' to save one bit of space.

Conversely, 1/2 is expressed as 0.5 in decimal, or 0.1 in binary. This is because 2^-1 = 1/(2^1) = 1/2. 0.25 is expressed as 2^-2 = 1/(2^2) = 1/4 = 0.25. And so on. So:

Code:
Decimal Binary    Float   Bytes
    0.5    0.1   1.0e-1   <-1, $0000000000>
   0.25   0.01   1.0e-2   <-2, $0000000000>
   0.75   0.11   1.1e-1   <-1, $8000000000>
    1.0    1.0   1.0e+0   <0, $0000000000>
   1.25   1.01   1.01e+0  <0, $4000000000>
    1.5    1.1   1.10e+0  <0, $8000000000>
   1.75   1.11   1.11e+0  <0, $C000000000>
    2.0   10.0   1.0e+1   <1, $0000000000>


You represent the fractional part of a number as a sum of pure binary fractions, just as you'd represent the decimal portion of a number as a sum of fractions of powers of ten.

To answer the OP's question, I don't know how you can convert from some ASCII representation into a floating point number in some clever way, but you can always brute-force the solution. Given a number like 3.14159, you can always break the number up into an integer (3) and a fraction (14159 / 100000). Converting an integer into a floating point number is a trivial exercise in bit-shifting, so I won't cover that. However, if you evaluate the fraction, you'll get 0.14159, to which you can just add the 3.0 to yield the final result, 3.14159 in binary floating point.
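
For the conversion the OP asked about, here is a host-side sketch in C. It assumes the 5-byte layout described in the C64 references above (an excess-128 exponent byte, then four mantissa bytes MSB-first, with the implied leading 1 replaced by the sign bit); to_c64_float is a name made up for this example, and it leans on the host's frexp rather than doing the integer/fraction split by hand:

Code:
#include <math.h>
#include <stdint.h>
#include <stdio.h>

/* Pack a host double into the assumed 5-byte layout:
   out[0]   exponent in excess-128 (0 means the whole value is zero)
   out[1-4] 32 mantissa bits, MSB first, normalized so the top bit is
            always 1; that bit is replaced by the sign (1 = negative).
   No range checking on the exponent is done here. */
static void to_c64_float(double v, uint8_t out[5])
{
    int sign = 0, exp2;

    if (v == 0.0) {
        out[0] = out[1] = out[2] = out[3] = out[4] = 0;
        return;
    }
    if (v < 0.0) { sign = 1; v = -v; }

    double frac = frexp(v, &exp2);               /* v = frac * 2^exp2, frac in [0.5,1) */
    double m = floor(frac * 4294967296.0 + 0.5); /* round to 32 mantissa bits */
    if (m >= 4294967296.0) { m *= 0.5; exp2++; } /* rounding carried out; renormalize */

    uint32_t mant = (uint32_t)m;
    out[0] = (uint8_t)(exp2 + 128);              /* excess-128 exponent */
    out[1] = (uint8_t)(((mant >> 24) & 0x7F) | (sign << 7));
    out[2] = (uint8_t)(mant >> 16);
    out[3] = (uint8_t)(mant >> 8);
    out[4] = (uint8_t)mant;
}

int main(void)
{
    uint8_t b[5];
    to_c64_float(-1.5, b);                       /* expect $81,$C0,$00,$00,$00 */
    printf("const_a\n  .byte $%02X,$%02X,$%02X,$%02X,$%02X\n",
           b[0], b[1], b[2], b[3], b[4]);
    return 0;
}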

Hope this helps.


PostPosted: Tue Jan 25, 2011 6:43 pm 

Joined: Fri Aug 30, 2002 1:09 am
Posts: 8432
Location: Southern California
Quote:
If anyone can find a binary representation for 0.3 or 0.1, let me know.

Simple.  That's partly what scaled-integer is for.  In financial applications for example, instead of trying to represent dollars directly and putting up with errors in the fractional part (ie, the cents), you represent the cents; so $.30 becomes 1E in hex, and it is perfectly accurate.  Of course, you could do the same thing with floating point hex, so it could be represented as 1Ee0.
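
(In C, Garth's example is a one-liner; nothing here is C64-specific:)

Code:
#include <stdio.h>

/* $.30 held as an integer count of cents: exact, and $1E in hex. */
int main(void)
{
    int cents = 30;   /* $0.30 */
    printf("%d cents = $%X hex\n", cents, cents);
    return 0;
}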

_________________
http://WilsonMinesCo.com/ lots of 6502 resources
The "second front page" is http://wilsonminesco.com/links.html .
What's an additional VIA among friends, anyhow?


PostPosted: Tue Jan 25, 2011 7:28 pm 

Joined: Wed Oct 22, 2003 4:07 am
Posts: 51
Location: Norway
kc5tja wrote:
If anyone can find a binary representation for 0.3 or 0.1, let me know. While you're at it, publish a paper in ACM or IEEE Computer, and earn yourself a position at some prestigious university. Even with *80* bits of precision, you can't represent either number precisely, and so when printing values, you have to introduce a fudge factor to make the display right. Thankfully, Python doesn't bother with a fudge factor that I know of, so it exposes floating point's warts. For example:

Code:
kc5tja@capella ~]python
Python 2.6 (r26:66714, Jan 22 2009, 12:45:42)
[GCC 4.1.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> a=0.1
>>> a
0.10000000000000001
>>> b=0.3
>>> b
0.29999999999999999


BCD has the benefit that it can represent numbers which binary cannot, thus invalidating the argument that binary floating point is always more precise than an equivalently-sized BCD floating point quantity. Representing 0.3 in BCD requires only two bytes of space, but known binary representations take an infinite amount of RAM.


I haven't seen anyone claim that binary floating point is always more precise than the same size BCD fp. It's only natural that for numbers already in decimal form BCD is a better fit. But converting an arbitrary number to fp will usually result in a smaller error for binary than BCD, because binary fp can represent more possible numbers than BCD fp. A 32 bit binary fp can represent about 4 billion different numbers, while a 32 bit BCD fp can only represent about 100 million numbers (if the exponent is BCD too, more if not). (Edit: forgot a 0. Oh, and good thing we use base 10 and not base 15, or we would have problems converting back from binary fractions too, not only to them...)


kc5tja wrote:
I do not know if the reverse holds as well; it is at least conceivable that binary potentially can represent numbers which BCD similarly couldn't represent compactly. However, I cannot think of any examples at the moment.


I don't believe the reverse holds. Decimal can represent all rational numbers of the form X/Y (X and Y are integers) where Y only contains the factors 2 and 5. Binary can only represent rational numbers where Y only contains the factor 2. If we used base 30 instead, then Y could contain the factors 2, 3 and 5 (and storing base 30 digits in 5 bits would be less wasteful than base 10 digits in 4 bits). Base 210 (adding the next prime number, 7) would be able to represent even more rational numbers, and so on. But there are still more irrational numbers than rational numbers.

kc5tja wrote:
Note that this explicitly doesn't cover rational representation of numbers.

Binary floating point works exactly like its decimal equivalent, more commonly known as "scientific notation." The trick is to recognize binary figures after the decimal point:

Code:
Value:   8     4     2     1    0.5   0.25   0.125
Power:  2^3   2^2   2^1   2^0   2^-1  2^-2   2^-3


So representing the number 8, you could express it as 1000, which is the same as 1000.0e+0 (where "e" refers to a power of two, not a power of ten). This is the same as writing 1.000e+3 (notice the movement of the binary point and its relationship with the exponent). Hence, the number 8 can be expressed as the tuple <3,0>, with 3 being the exponent, and 0 being the mantissa bits, sans the leading '1' to save one bit of space.

Conversely, 1/2 is expressed as 0.5 in decimal, or 0.1 in binary. This is because 2^-1 = 1/(2^1) = 1/2. 0.25 is expressed as 2^-2 = 1/(2^2) = 1/4 = 0.25. And so on. So:

Code:
Decimal Binary    Float   Bytes
    0.5    0.1   1.0e-1   <-1, $0000000000>
   0.25   0.01   1.0e-2   <-2, $0000000000>
   0.75   0.11   1.1e-1   <-1, $8000000000>
    1.0    1.0   1.0e+0   <0, $0000000000>
   1.25   1.01   1.01e+0  <0, $4000000000>
    1.5    1.1   1.10e+0  <0, $8000000000>
   1.75   1.11   1.11e+0  <0, $C000000000>
    2.0   10.0   1.0e+1   <1, $0000000000>


You represent the fractional part of a number as a sum of pure binary fractions, just as you'd represent the decimal portion of a number as a sum of fractions of powers of ten.

To answer the OP's question, I don't know how you can convert from some ASCII representation into a floating point number in some clever way, but you can always brute-force the solution. Given a number like 3.14159, you can always break the number up into an integer (3) and a fraction (14159 / 100000). Converting an integer into a floating point number is a trivial exercise in bit-shifting, so I won't cover that. However, if you evaluate the fraction, you'll get 0.14159, to which you can just add the 3.0 to yield the final result, 3.14159 in binary floating point.

Hope this helps.
Here is one way to do it. I don't know if it counts as brute force. To convert 0.14159 to binary, try to see if you can subtract 1/2 without getting a negative number: if not, the first bit is 0; if so, the first bit is 1 (and you should actually do the subtraction). Then see if you can subtract 1/4, then 1/8, then 1/16, 1/32... and so on, until you run out of bits or the remainder is zero. And if you want correct rounding, then I guess you should see how large the remainder is compared to doing one subtraction "too much" and adjust accordingly.
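
Sketched in C, with a host double standing in for the fraction (frac_to_bits is an illustrative name, and it truncates rather than rounds):

Code:
#include <stdint.h>
#include <stdio.h>

/* First n binary digits of a fraction 0 <= f < 1, found by trying to
   subtract 1/2, then 1/4, then 1/8, ... as described above. */
static uint32_t frac_to_bits(double f, int n)
{
    uint32_t bits = 0;
    double p = 0.5;
    for (int i = 0; i < n; i++) {
        bits <<= 1;
        if (f >= p) {       /* subtraction wouldn't go negative: bit is 1 */
            f -= p;
            bits |= 1;
        }
        p *= 0.5;
    }
    return bits;
}

int main(void)
{
    printf("0.14159 ~ $%08X / 2^32\n", frac_to_bits(0.14159, 32));
    return 0;
}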


PostPosted: Tue Jan 25, 2011 10:10 pm 

Joined: Sat Jan 04, 2003 10:03 pm
Posts: 1706
GARTHWILSON wrote:
Simple. That's partly what scaled-integer is for.


OK, but I'm not going to multiply all my floating point values by the product of all unrepresentable numbers just to get perfect accuracy. Scaling works fine if, and only if, you are dealing with divisors equal to your scaling factor.

Things break down horrifically, for example, if I divide your scaled integer by 9. Oops.

As you can see, it's not so simple a problem.
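
(A concrete version of that in C, in integer cents:)

Code:
#include <stdio.h>

/* $1.00 split nine ways doesn't come out even, no matter how the
   amounts are represented: 9 x 11 cents leaves 1 cent over. */
int main(void)
{
    int total = 100;               /* $1.00 as 100 cents */
    int share = total / 9;         /* 11 cents each */
    int left  = total - 9 * share; /* the leftover cent */
    printf("9 x %d cents + %d cent remainder\n", share, left);
    return 0;
}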


PostPosted: Tue Jan 25, 2011 10:13 pm 

Joined: Sat Jan 04, 2003 10:03 pm
Posts: 1706
Thowllly wrote:
Here is one way to do it. Don't know if it counts as brute force. To convert 0.14159 to binary, try and see if you can subtract 1/2 without getting a negative number


Doesn't this system of successive approximation require that 0.14159 be in binary form already though? Otherwise, I like it.

I suppose you can use a system of scaled integers, where you first try to subtract 50000 from 14159, then 25000, then 12500, etc. But this seems like it brings a host of complexity along with it.
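
It can be kept entirely in integers, though, if the numerator is doubled each step instead of halving the amount subtracted (halving 50000 repeatedly hits the non-integer 1562.5 after a few steps). A C sketch with illustrative names:

Code:
#include <stdint.h>
#include <stdio.h>

/* First n binary digits of num/den (num < den), all in integers.
   Doubling num plays the role of halving the subtrahend, so nothing
   fractional ever appears. */
static uint32_t frac_bits_scaled(uint64_t num, uint64_t den, int n)
{
    uint32_t bits = 0;
    for (int i = 0; i < n; i++) {
        bits <<= 1;
        num <<= 1;
        if (num >= den) {
            num -= den;
            bits |= 1;
        }
    }
    return bits;
}

int main(void)
{
    /* 0.14159 as the scaled-integer pair 14159 / 100000 */
    printf("0.14159 ~ $%08X / 2^32\n", frac_bits_scaled(14159, 100000, 32));
    return 0;
}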


PostPosted: Tue Jan 25, 2011 10:50 pm 

Joined: Thu Dec 11, 2008 1:28 pm
Posts: 10800
Location: England
Not sure how helpful this is going to be...

BBC Basic has a similar representation, but regardless of the similarity, Steve Fewell has written up pseudocode as a prelude to an annotated disassembly of Basic 4's code for "ASCNUM: Handle Exponential values and complete number conversion"

The outline is to convert the mantissa to binary (read each digit, multiply the present value by 10 and add in the digit) and then to multiply or divide the value you have by 10, repeatedly, until the exponent you have is exhausted. All the calculations are done in the target format of course, so the result you get is in the right format. If you can code up adding a small integer to a value and adding two values, then you have all the pieces you need for positive exponents (because doubling is just adding a value to itself, and if you can double and add then you can multiply by 10). To divide by 10, consult Steve's page on that (Float Divide 10).
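
The same outline in host C, with doubles standing in for the target's 5-byte format (parse_number and its minimal handling of 'E' are illustrative assumptions, not Steve Fewell's code):

Code:
#include <stdio.h>
#include <stdlib.h>

/* Accumulate digits as value*10 + digit, count digits after the '.',
   then multiply or divide by 10 until the decimal exponent is used up. */
static double parse_number(const char *s)
{
    double v = 0.0;
    int exp10 = 0, frac_digits = 0, in_frac = 0, neg = 0;

    if (*s == '-') { neg = 1; s++; }
    for (; *s; s++) {
        if (*s == '.') { in_frac = 1; continue; }
        if (*s == 'e' || *s == 'E') { exp10 = atoi(s + 1); break; }
        v = v * 10.0 + (*s - '0');
        if (in_frac) frac_digits++;
    }
    exp10 -= frac_digits;   /* e.g. 0.3E-10: digits 03, exponent -10-1 = -11 */

    while (exp10 > 0) { v *= 10.0; exp10--; }
    while (exp10 < 0) { v /= 10.0; exp10++; }
    return neg ? -v : v;
}

int main(void)
{
    printf("%g %g\n", parse_number("-1.5"), parse_number("0.3E-10"));
    return 0;
}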


PostPosted: Tue Jan 25, 2011 11:01 pm 

Joined: Fri Aug 30, 2002 1:09 am
Posts: 8432
Location: Southern California
kc5tja wrote:
GARTHWILSON wrote:
Simple. That's partly what scaled-integer is for.


OK, but I'm not going to multiply all my floating point values by the product of all unrepresentable numbers just to get perfect accuracy. Scaling works fine if, and only if, you are dealing with divisors equal to your scaling factor.

Things break down horrifically, for example, if I divide your scaled integer by 9. Oops.

As you can see, it's not so simple a problem.

Most inputs, scaled or not, will, when divided, result in numbers that cannot be represented perfectly with a limited number of digits, either binary or decimal (like a dollar divided into nine equal parts); but the argument "You can't represent a cent exactly" is eliminated.

_________________
http://WilsonMinesCo.com/ lots of 6502 resources
The "second front page" is http://wilsonminesco.com/links.html .
What's an additional VIA among friends, anyhow?


PostPosted: Wed Jan 26, 2011 2:52 am 

Joined: Thu Mar 11, 2004 7:42 am
Posts: 362
Getting back to the original topic, here is an ancient post of mine describing (more or less) the steps that the various BASICs used to convert FP numbers. Untested, as the post says, and I haven't tried it since then. Also, EhBASIC has been updated since then, so whether those label names are still correct, I don't know. But it's a starting point.

viewtopic.php?p=3148&highlight=#3148

There are other FP related threads (with very long-winded posts from me) from back then, which cover some of the details of the various 6502 FP implementations available.

