How do I convert an Apple floating-point number to a printable ASCII string?
1) How to convert ASCII to "float"
2) How to convert "float" to ASCII
The floating-point number is stored in Apple's format, which is discussed in "I can't understand FLOATING POINT REPRESENTATION".
How to convert an Apple floating-point number to ASCII?
Re: How to convert an Apple floating-point number to ASCII
tedaz wrote:
How do I convert an Apple floating-point number to a printable ASCII string?
1) How to convert ASCII to "float"
2) How to convert "float" to ASCII
The floating-point number is stored in Apple's format, which is discussed in "I can't understand FLOATING POINT REPRESENTATION".
Look here.
In particular, these documents should be helpful: FP 1, FP 2, FP 3.
I converted this into a Word document with better formatting. Do you think I can release it to the general public, or would that be a bad thing?
As a general, "Apple-independent" approach, I would suggest using a table of the ASCII codes "0", "1", ..., "9" in 10 consecutive bytes, with the corresponding floating-point representations in a parallel sequence of, for example, 40 bytes if 4-byte values are used (8-byte "double precision" is usually more adequate for mathematical/scientific computations => 80 bytes).
A sequence of characters representing an integer, such as "45932", could then be converted to a floating-point number in a loop as follows:
Initialize a variable "value" to floating-point 0.
In the loop, read the next character and fetch the corresponding floating-point value from the table.
Floating-point multiply "value" by 10 and floating-point add the value fetched from the table.
To convert a non-integer such as "459.32", one has to take a special branch when the ASCII character is "." and keep track of how many digits follow the decimal point. When the loop above finds a " " and terminates, the value in "value" should then be divided by 10 as many times as there were decimals.
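As a concrete illustration, here is a minimal C sketch of the loop just described. A C double stands in for the floating-point accumulator and the per-digit table lookup; the function name is ours, not from any Apple ROM:

```c
/* Minimal sketch of the ASCII-to-float loop described above.
   A C double stands in for the floating-point accumulator and the
   table of per-digit floating-point values. */
double ascii_to_float(const char *s) {
    double value = 0.0;   /* initialize "value" to floating-point 0 */
    int decimals = -1;    /* -1 until "." is seen, then counts digits */
    for (; *s != '\0' && *s != ' '; s++) {
        if (*s == '.') { decimals = 0; continue; }  /* special branch for "." */
        value = value * 10.0 + (*s - '0');          /* multiply by 10, add digit */
        if (decimals >= 0) decimals++;              /* count digits after "." */
    }
    while (decimals-- > 0)
        value /= 10.0;    /* one divide by 10 per decimal place */
    return value;
}
```

On the 6502 the multiply, add, and divide would be calls to the FP routines, and the digit-to-FP step would be the table lookup described above.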
One should first write the routines for floating-point add, subtract, multiply, and divide, and then write input/output routines as above. This has obviously already been done for the Apple and Commodore with their BASIC systems, but probably not for the IEEE floating-point format that is standard nowadays. If it would be considered useful, I could consider writing a whole floating-point system for the 6502 using IEEE representation (a significant task), but I guess the times are gone when the 6502 was used for scientific/technical computations with floating-point numbers!
I need floating-point routines written in 6502 ASM.
Thanks to "Mats".
I want to write a statistics program in 6502 assembly language.
So, I need floating-point routines.
I already have FADD/FSUB/FMUL/FDIV.
But I've run into a serious problem: how do I convert an internal floating-point number to a printable string? And how do I get a floating-point number from an input ASCII string?
Can "Mats" give a small example?
Applesoft uses a different floating-point representation than the routines in the wozfp3.txt file, so those Applesoft routines won't quite do the trick.
You can look at how EhBASIC does it (LAB_2887 and LAB_296E) for hints. Here's the idea:
To convert a string such as -123.45E-67, which is -123.45 * 10 ^ -67:
1. Initialize M to 0. M is an FP number.
2. Accumulate the mantissa digits. Work left to right, using M = M * 10 + digit, until you get to the exponent.
3. Count the number of digits after the decimal point and call this count K. K is a 1-byte signed integer.
4. Initialize N to zero. N is a 1-byte signed integer.
5. Accumulate the exponent digits. Work left to right, using N = N * 10 + digit.
6. If there was a minus sign before the exponent (i.e. after the E), negate N, i.e. N = -N.
7. The number we want is M * 10 ^ (N-K), so start with M.
8. If N-K is less than zero, divide by 10 ^ (K-N), which you can do by dividing by 10 (K-N) times.
9. If N-K is greater than zero, multiply by 10 ^ (N-K), which you can do by multiplying by 10 (N-K) times.
10. If there was a minus sign before the mantissa, negate the result.
It's not terrific, but it's okay.
In the example -123.45E-67, M = 12345, K = 2, and N = -67. N-K = -69 is less than zero, so we divide by 10 ^ (K-N) = 10 ^ 69: 12345 / 10 ^ 69 = 1.2345 * 10 ^ -65, which is negated to give the correct result, -1.2345 * 10 ^ -65.
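Steps 1 through 10 can be sketched in C, with a double standing in for the 6502 FP accumulator and the variable names M, K, and N taken from the text (the function name is illustrative):

```c
/* Sketch of conversion steps 1-10; a C double stands in for the FP routines. */
double parse_fp(const char *s) {
    int neg_m = 0, neg_n = 0, seen_dot = 0;
    double M = 0.0;                                   /* step 1 */
    int K = 0, N = 0;                                 /* steps 3 and 4 */
    if (*s == '-') { neg_m = 1; s++; }
    for (; *s != '\0' && *s != 'E'; s++) {            /* step 2 */
        if (*s == '.') { seen_dot = 1; continue; }
        M = M * 10.0 + (*s - '0');
        if (seen_dot) K++;                            /* step 3 */
    }
    if (*s == 'E') {                                  /* exponent part */
        s++;
        if (*s == '-') { neg_n = 1; s++; }
        else if (*s == '+') s++;
        for (; *s != '\0'; s++) N = N * 10 + (*s - '0');  /* step 5 */
        if (neg_n) N = -N;                            /* step 6 */
    }
    int e = N - K;                                    /* step 7 */
    while (e < 0) { M /= 10.0; e++; }                 /* step 8 */
    while (e > 0) { M *= 10.0; e--; }                 /* step 9 */
    return neg_m ? -M : M;                            /* step 10 */
}
```

On the 6502 the two while loops would be repeated FDIV/FMUL calls against the FP constant 10.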
You may wish to put some limit on the maximum number of mantissa digits that are accumulated. In EhBASIC 1.000 works as you would expect, but if you put 50 zeros after the decimal point you'll get an overflow error, because M = 10^50 which is a larger number than EhBASIC can handle.
To output an FP number, you'll have to decide the maximum number of mantissa digits to output. Here's the idea for outputting an FP number N in scientific notation, where D is the maximum number of mantissa digits and 10 ^ D < 2 ^ 23:
1. If N is zero, output a zero and end.
2. If N is negative, output a minus sign and negate N.
3. Initialize K to 0. K is a 1-byte signed integer.
4. If N < 10 ^ (D-1), then multiply N by 10 until N >= 10 ^ (D-1), and decrement K each time you multiply by 10.
5. If N >= 10 ^ D, then divide by 10 until N < 10 ^ D, and increment K each time you divide by 10.
6. With N in FP1 (i.e. the mantissa in M1 and the exponent in X1), shift M1 right (150-X1) times. Now M1 contains a 3-byte (unsigned) integer (M1+0 is the high byte, M1+2 is the low byte), and 10 ^ (D-1) <= M1 < 10 ^ D.
7. Output the integer in M1, but put a decimal point between the first and second digits, then output "E", then the signed integer K+D-1.
Example: N = 12, D = 6.
Multiply 12 by 10 four times, so K = -4 and FP1 = 120000 (100000 <= 120000 < 1000000). X1 = 148 and M1 = $75300, so shift M1 right twice to get M1 = $1D4C0 = 120000. K+D-1 = -4+6-1 = 1, so the output is 1.20000E1.
Again, not terrific, but okay.
The idea for a routine that outputs a number without an "E" followed by an exponent is to replace step 7 with:
7. If K < 1-D, output a decimal point, then output -K-D zeros, then the integer M1.
8. If 1-D <= K < 0, output the integer M1, but put a decimal point between the (K+D)th digit and the (K+D+1)th digit.
9. If K >= 0, output the integer M1, followed by K zeros.
In the example above, D = 6 and K = -4, so 1-D <= K < 0: the decimal point goes between the 2nd and 3rd digits, and the output is 12.0000.
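The replacement steps 7-9 can be sketched in C as a formatter taking the D-digit integer mantissa m (from step 6) and the scale K (from steps 4-5), so the value is m * 10^K; the function name is illustrative:

```c
#include <stdio.h>
#include <string.h>

/* Sketch of the fixed-point replacement steps 7-9. */
void format_fixed(long m, int K, int D, char *out) {
    char digits[32];
    sprintf(digits, "%0*ld", D, m);        /* D-digit mantissa, e.g. "120000" */
    if (K < 1 - D) {                       /* step 7: point, -K-D zeros, digits */
        char *p = out;
        *p++ = '.';
        for (int i = 0; i < -K - D; i++) *p++ = '0';
        strcpy(p, digits);
    } else if (K < 0) {                    /* step 8: point after (K+D)th digit */
        int i = K + D;
        sprintf(out, "%.*s.%s", i, digits, digits + i);
    } else {                               /* step 9: digits, then K zeros */
        strcpy(out, digits);
        for (int i = 0; i < K; i++) strcat(out, "0");
    }
}
```

With m = 120000, K = -4, D = 6 this yields 12.0000 as in the example; with K = -8 it yields .00120000 per step 7.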
Once you get the above working, you can add other features.
Note that a number like .1 cannot be represented exactly, even though it can be entered exactly as a string. It's the same idea as 1/3, which is approximately .333333 but not exactly .333333. So if you convert a number and immediately output it, the output may be a slightly different number than the input. For this reason, you may wish to consider using BCD FP routines like DP18 or KIMATH, both of which can represent .1 exactly. Their input and output routines are much simpler.
I haven't tested all of the above, so it may need some tweaking, but it should be enough to get started.
First of all, the Steve Wozniak floating-point system contains the subroutines FLOAT and FIX to convert between 16-bit integer representation and floating-point representation. With these routines you can immediately derive the floating-point representation of any integer between -32768 ($8000) and +32767 ($7FFF), and transform back to integer representation. Note in particular that 10 has the floating-point representation $83 50 00 00!
To convert a decimal fraction such as 23.58, one then uses FLOAT to get the representation of 2358 ($936), which is $8B 49 B0 00, and then divides twice by 10 ($0A), which has floating-point representation $83 50 00 00, using subroutine FDIV. The floating-point representation of 23.58 is in this way found to be $84 5E 51 EA.
To convert an integer outside the range, for example 2358000000, to floating-point format, one instead multiplies 2358 ($936), which is $8B 49 B0 00, by 10 ($83 50 00 00) six times using FMUL.
To transform back, one uses subroutine FIX. This routine truncates the floating-point value to an integer (it obviously has to!). In order not to lose the decimal part, one should therefore multiply or divide the value by 10 using FMUL or FDIV before using FIX, to make the value fall in the interval from 32767/10 to 32767 (for negative values, between -32768 and -32768/10). The resulting integer should then be assigned an "implicit decimal point" based on these initial multiplications/divisions by 10!
If you really need ASCII representation, you could then go via BCD; routines to convert between binary integer representation and BCD representation are available in the Source Code Repository!
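The FLOAT/FDIV and FMUL/FIX recipes above can be sketched in C, with doubles standing in for the Woz FP routines (FLOAT converts an integer to FP; FDIV/FMUL divide/multiply by the FP constant 10, $83 50 00 00; FIX truncates); the function names are illustrative:

```c
/* Sketch: convert a decimal fraction entered as its digits plus a
   decimal-place count, mirroring the FLOAT + repeated-FDIV recipe. */
double float_scaled(int digits, int decimals) {
    double v = (double)digits;             /* FLOAT: 16-bit integer -> FP */
    for (int i = 0; i < decimals; i++)
        v /= 10.0;                         /* FDIV by 10 per decimal place */
    return v;
}

/* Sketch: scale up before truncating, mirroring FMUL then FIX; the
   caller keeps track of the implicit decimal point. */
int fix_scaled(double v, int decimals) {
    for (int i = 0; i < decimals; i++)
        v *= 10.0;                         /* FMUL by 10 before FIX */
    return (int)v;                         /* FIX truncates toward zero */
}
```

For example, float_scaled(2358, 2) mirrors the derivation of $84 5E 51 EA for 23.58, and fix_scaled with an implicit decimal point two places from the right reverses it.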
Mats wrote:
One should first write the routines for floating-point add, subtract, multiply, and divide, and then write input/output routines as above. This has obviously already been done for the Apple and Commodore with their BASIC systems, but probably not for the IEEE floating-point format that is standard nowadays. If it would be considered useful, I could consider writing a whole floating-point system for the 6502 using IEEE representation (a significant task), but I guess the times are gone when the 6502 was used for scientific/technical computations with floating-point numbers!
There is a very good reason to stick to the Apple floating-point format instead of the IEEE one: the availability of the Apple floating-point routines.
If one really wants to convert to IEEE format, the best approach is probably to use a floating-point co-processor. Maybe the co-processor at
http://www.micromegacorp.com/umfpu.html
can be interfaced with a 6502.