Author Message
PostPosted: Fri Jul 16, 2004 12:34 pm 

Joined: Tue Jul 06, 2004 5:16 am
Posts: 11
How to convert an Apple floating-point number to printable ASCII?

1) How to convert ASCII to "float"
2) How to convert "float" to ASCII

The floating-point number is stored in Apple's format, which is discussed in "I can't understand FLOATING POINT REPRESENTATION".


PostPosted: Sat Jul 17, 2004 12:45 am 

Joined: Tue Sep 30, 2003 5:46 pm
Posts: 30
Location: Central Wisconsin
tedaz wrote:
How to convert an Apple floating-point number to printable ASCII?

1) How to convert ASCII to "float"
2) How to convert "float" to ASCII

The floating-point number is stored in Apple's format, which is discussed in "I can't understand FLOATING POINT REPRESENTATION".


I think I can point you in the right direction on this one. I was looking at this a number of months ago as I played with an Applesoft compiler. There was a document called "Applesoft: Internals" that was part of the Apple Technical Information Library.

Look here.

In particular, these documents should be helpful: FP 1, FP 2, FP 3.

I converted this into a Word document with better formatting. Do you think I can release it to the general public, or is that a bad thing?


PostPosted: Sun Jul 18, 2004 9:58 am 

Joined: Sun Aug 24, 2003 7:17 pm
Posts: 111
As a general, Apple-independent approach, I would suggest using a table of the ASCII codes "0", "1", ..., "9" in 10 consecutive bytes, with the corresponding floating-point representations in a parallel table: for example 40 bytes if a 4-byte size is used (8-byte "double precision" is usually more adequate for mathematical/scientific computations, so 80 bytes).

A sequence of characters representing an integer, such as "45932", can then be converted to a floating-point number in a loop as follows:

Initialise a variable "value" with floating-point 0.

In the loop, read the next character and fetch the corresponding floating-point value from the table. Floating-point multiply "value" by 10 and floating-point add the value fetched from the table.

To convert a non-integer such as "459.32", one has to take a special branch when the ASCII character is "." and keep track of how many digits follow the decimal point. When the loop above has found a " " and terminated, the variable "value" should be divided by 10 as many times as there were decimal digits.
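The table-driven loop above can be sketched in a higher-level language (Python here for clarity; on the 6502 the multiply, add, and divide steps would be calls to the floating-point routines, and the dictionary stands in for the two parallel byte tables):

```python
# Stands in for the table of ASCII digit codes and the parallel
# table of their floating-point values described above.
DIGIT_VALUE = {chr(ord("0") + d): float(d) for d in range(10)}

def ascii_to_float(s):
    """Convert a digit string such as "459.32" to a float using only
    multiply-by-10, add, and divide-by-10 steps."""
    value = 0.0           # initialise "value" with floating-point 0
    decimals = 0          # count of digits after the decimal point
    seen_point = False
    for ch in s:
        if ch == " ":     # a space terminates the number
            break
        if ch == ".":     # special branch for the decimal point
            seen_point = True
            continue
        value = value * 10.0 + DIGIT_VALUE[ch]
        if seen_point:
            decimals += 1
    for _ in range(decimals):
        value /= 10.0     # divide by 10 once per decimal digit
    return value
```

For "45932" this yields 45932.0; for "459.32" it accumulates 45932 and then divides by 10 twice.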

One should first write the routines for floating-point add, subtract, multiply, and divide, and then write the input/output routines as above. This has obviously already been done for the Apple and Commodore BASIC systems, but probably not for the IEEE floating-point format that is standard nowadays. If it would be considered useful I could write a whole floating-point system for the 6502 using IEEE representation (a significant task), but I guess the days are gone when the 6502 was used for scientific/technical computations with floating-point numbers!


PostPosted: Sun Jul 18, 2004 10:59 am 

Joined: Tue Jul 06, 2004 5:16 am
Posts: 11
Thanks, Mats.
I want to write a statistics program in 6502 assembly language, so I need floating-point routines.
I already have FADD/FSUB/FMUL/FDIV.

But I have run into a serious problem: how do I convert an internal floating-point number to a printable string, and how do I get a floating-point number from an input ASCII string?

Could Mats give a small example?


PostPosted: Mon Jul 19, 2004 7:34 am 

Joined: Thu Mar 11, 2004 7:42 am
Posts: 362
Applesoft uses a different floating-point representation than the routines in the wozfp3.txt file, so those Applesoft routines won't quite do the trick.

You can look at how EhBASIC does it (LAB_2887 and LAB_296E) for hints. Here's the idea:

To convert a string such as -123.45E-67 which is -123.45 * 10 ^ -67:

1. Initialize M to 0. M is a FP number.

2. Accumulate the mantissa digits. Work left to right using M = M * 10 + digit until you get to the exponent.

3. Count the number of digits after the decimal point and call this count K. K is a 1-byte signed integer.

4. Initialize N to zero. N is a 1-byte signed integer.

5. Accumulate the exponent digits. Work left to right using N = N * 10 + digit.

6. If there was a minus sign before the exponent (i.e. after E) then negate N, i.e. N = -N.

7. The number we want is M * 10 ^ (N-K), so start with M.

8. If N-K is less than zero, divide by 10 ^ (K-N), which you can do by dividing by 10 (K-N) times.

9. If N-K is greater than zero, multiply by 10 ^ (N-K), which you can do by multiplying by 10 (N-K) times.

10. If there was a minus sign before the mantissa, negate the result.

It's not terrific, but it's okay.

In the example -123.45E-67, M = 12345, K = 2, and N = -67. N-K is less than zero, so we divide by 10 ^ 69, giving 1.2345 * 10 ^ -65, which is negated to give the correct result, -1.2345 * 10 ^ -65.

You may wish to put some limit on the maximum number of mantissa digits that are accumulated. In EhBASIC, 1.000 works as you would expect, but if you put 50 zeros after the decimal point you'll get an overflow error, because M = 10 ^ 50, which is a larger number than EhBASIC can handle.
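The ten steps above can be sketched as follows (a Python sketch for clarity; a 6502 version would call the FP package's multiply, divide, add, and negate routines instead of using Python floats):

```python
def parse_float(s):
    """Parse a string such as "-123.45E-67" following steps 1-10 above."""
    i = 0
    neg_mantissa = s[0] == "-"
    if s[0] in "+-":
        i += 1
    m = 0.0                            # step 1: M = 0
    k = 0                              # step 3: digits after the point
    seen_point = False
    while i < len(s) and (s[i].isdigit() or s[i] == "."):
        if s[i] == ".":
            seen_point = True
        else:
            m = m * 10.0 + int(s[i])   # step 2: M = M * 10 + digit
            if seen_point:
                k += 1
        i += 1
    n = 0                              # step 4: N = 0
    if i < len(s) and s[i] in "Ee":
        i += 1
        neg_exp = s[i] == "-"
        if s[i] in "+-":
            i += 1
        while i < len(s) and s[i].isdigit():
            n = n * 10 + int(s[i])     # step 5: N = N * 10 + digit
            i += 1
        if neg_exp:
            n = -n                     # step 6: negate N
    for _ in range(abs(n - k)):        # steps 7-9: M * 10 ^ (N - K)
        m = m * 10.0 if n > k else m / 10.0
    if neg_mantissa:
        m = -m                         # step 10: negate the result
    return m
```

For "-123.45E-67" this accumulates M = 12345, K = 2, N = -67 and divides by 10 sixty-nine times.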

To output an FP number, you'll have to decide the maximum number of mantissa digits to output. Here's the idea for outputting an FP number N in scientific notation, where D is the maximum number of mantissa digits and 10 ^ D < 2 ^ 23:

1. If N is zero, output a zero and end.

2. If N is negative, output a minus sign and negate N

3. Initialize K to 0. K is a 1-byte signed integer.

4. If N < 10 ^ (D-1), then multiply N by 10 until N >= 10 ^ (D-1), and decrement K each time you multiply by 10.

5. If N >= 10 ^ D, then divide by 10 until N < 10 ^ D, and increment K each time you divide by 10.

6. With N in FP1 (i.e. the mantissa in M1 and the exponent in X1) shift M1 right (150-X1) times. Now M1 contains a 3-byte (unsigned) integer (M1+0 is the high byte, M1+2 is the low byte), and 10 ^ (D-1) <= M1 < 10 ^ D.

7. Output the integer in M1, but put a decimal point between the first and second digits, then output "E", then the signed integer K+D-1.

Example: N = 12, D = 6

Multiply 12 by 10 four times, so K = -4 and FP1 = 120000 (100000 <= 120000 < 1000000). X1 = 148 and M1 = $75300, so shift M1 right twice to get M1 = $1D4C0 = 120000. K+D-1 = -4+6-1 = 1, so the output is 1.20000E1.

Again, not terrific, but okay.
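As a sketch of steps 1-7 (Python for clarity; a plain integer stands in for the M1 mantissa, and rounding replaces the register shifting of step 6):

```python
def fp_to_sci(n, d=6):
    """Format N in scientific notation per steps 1-7 above."""
    if n == 0:
        return "0"                     # step 1
    sign = ""
    if n < 0:
        sign, n = "-", -n              # step 2
    k = 0                              # step 3
    while n < 10 ** (d - 1):           # step 4: scale up, decrement K
        n *= 10
        k -= 1
    while n >= 10 ** d:                # step 5: scale down, increment K
        n /= 10
        k += 1
    digits = str(round(n))             # step 6: D-digit integer mantissa
    # step 7: point after the first digit, then E and exponent K + D - 1
    return "%s%s.%sE%d" % (sign, digits[0], digits[1:], k + d - 1)
```

With n = 12 and d = 6 this reproduces the worked example: K ends at -4 and the output is 1.20000E1.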

The idea for a routine that outputs a number without an "E" followed by an exponent is to replace step 7 with:

7. If K < 1-D, output a decimal point, then output -K-D zeros, then the integer M1.

8. If 1-D <= K < 0, output the integer M1, but put a decimal point between the (K+D)th digit and the (K+D+1)th digit.

9. If K >= 0, output the integer M1, followed by K zeros.

In the example above, D = 6 and K = -4, so 1-D <= K < 0 (step 8 applies); the decimal point is between the 2nd and 3rd digits, and the output is 12.0000.
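The replacement steps 7-9 can be sketched the same way, taking the D-digit mantissa integer M1 and the exponent K as inputs:

```python
def fp_to_plain(m1, k, d=6):
    """Format the D-digit mantissa integer M1 with exponent K as a
    plain decimal string, per the replacement steps 7-9 above."""
    digits = str(m1)                   # exactly D digits by construction
    if k < 1 - d:
        # step 7: leading point, then -K-D zeros, then the digits
        return "." + "0" * (-k - d) + digits
    if k < 0:
        # step 8: point between the (K+D)th and (K+D+1)th digits
        return digits[:k + d] + "." + digits[k + d:]
    return digits + "0" * k            # step 9: K trailing zeros
```

For M1 = 120000 and K = -4 this gives 12.0000, matching the example.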

Once you get the above working, you can add other features.

Note that a number like .1 cannot be represented exactly, even though it can be entered exactly as a string. It's the same idea as 1/3, which is approximately .333333 but not exactly .333333. So if you convert a number and immediately output it, the output may be a slightly different number than the input. For this reason, you may wish to consider using BCD FP routines like DP18 or KIMATH, both of which can represent .1 exactly. Their input and output routines are much simpler.

I haven't tested all of the above, so it may need some tweaking, but it should be enough to get started.


PostPosted: Thu Aug 05, 2004 8:40 am 

Joined: Sun Aug 24, 2003 7:17 pm
Posts: 111
First of all, the Steve Wozniak floating-point package contains the subroutines FLOAT and FIX to convert between 16-bit integer representation and floating-point representation. With these routines you can immediately derive the floating-point representation of any integer between -32768 ($8000) and +32767 ($7FFF) and transform back to integer representation. Note in particular that 10 has the floating-point representation $83 50 00 00!

To convert a decimal fraction such as 23.58, one then uses FLOAT to get the representation of 2358 ($936), which is $8B 49 B0 00, and then divides twice by 10 ($0A, floating-point representation $83 50 00 00) using subroutine FDIV. The floating-point representation of 23.58 is in this way found to be $84 5E 51 EA.

To convert an integer outside this range, for example 2358000000, to floating-point format, one instead multiplies 2358 ($936), i.e. $8B 49 B0 00, by 10 ($83 50 00 00) six times using FMUL.

To transform back, one uses subroutine FIX. This routine truncates the floating-point value to an integer (it obviously has to!). In order not to lose the decimal part, one should therefore multiply or divide the value by 10 using FMUL or FDIV before using FIX, so that the value falls in the interval from 32767/10 to 32767 (for negative values, between -32768 and -32768/10). The resulting integer should then be assigned an "implicit decimal point" based on these initial multiplications/divisions by 10!
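The FLOAT/FIX scaling technique can be illustrated numerically (a Python sketch; plain float operations stand in for FLOAT, FMUL, FDIV, and FIX, and the function names are mine, not Wozniak's):

```python
def str_to_fp(digits, decimals):
    """FLOAT the digit string as an integer, then FDIV by 10 once per
    decimal place: "23.58" becomes FLOAT(2358) / 10 / 10."""
    value = float(int(digits))         # FLOAT: 2358 -> 2358.0
    for _ in range(decimals):
        value /= 10.0                  # FDIV by the constant 10
    return value

def fp_to_scaled_int(value, decimals):
    """FMUL by 10 once per desired decimal before FIX (truncation),
    so the decimal part is not lost; the caller remembers the
    implicit decimal point."""
    for _ in range(decimals):
        value *= 10.0                  # FMUL by the constant 10
    return int(value)                  # FIX truncates toward zero
```

So str_to_fp("2358", 2) gives approximately 23.58, and fp_to_scaled_int(23.5, 1) gives the integer 235 with an implicit decimal point one digit from the right.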

If you really need an ASCII representation you could then go via BCD; routines to convert between binary integer representation and BCD are available in the Source Code Repository!


PostPosted: Fri Aug 13, 2004 11:32 pm 

Joined: Thu Jul 24, 2003 8:01 pm
Posts: 24
Mats wrote:
One should first write the routines for floating-point add, subtract, multiply, and divide, and then write the input/output routines as above. This has obviously already been done for the Apple and Commodore BASIC systems, but probably not for the IEEE floating-point format that is standard nowadays. If it would be considered useful I could write a whole floating-point system for the 6502 using IEEE representation (a significant task), but I guess the days are gone when the 6502 was used for scientific/technical computations with floating-point numbers!

Apple Inc. has written IEEE floating-point routines for the 6502, 65816, and 68000. See S.A.N.E. (Standard Apple Numerics Engine) and the Apple Numerics Manual (second edition). The 65816 version is included in the Apple IIgs firmware, the 6502 version is used by AppleWorks, and the 68000 version is used in the Macintosh.


PostPosted: Sat Jan 22, 2005 5:22 pm 

Joined: Thu May 08, 2003 6:27 pm
Posts: 4
I suggest tedaz give up the Apple float format.

The IEEE format is more widely used.

_________________
6502 yeah~!


PostPosted: Mon Mar 28, 2005 4:45 pm 

Joined: Sun Aug 24, 2003 7:17 pm
Posts: 111
There is a very good reason to stick with the Apple floating-point format instead of the IEEE one: the availability of the Apple floating-point routines.

If one really wants to use the IEEE format, the best approach is probably to use a floating-point co-processor.

Maybe the co-processor

http://www.micromegacorp.com/umfpu.html

can be interfaced with a 6502.

