Author Message
PostPosted: Mon Aug 12, 2024 9:10 am 
Joined: Thu Dec 11, 2008 1:28 pm
Posts: 10985
Location: England
barrym95838 wrote:
BigDumbDinosaur wrote:
Oh, never mind.  A C-64 can’t represent anything to 116,000 places.

If a 48K Apple ][ can do it, shouldn't a 64K C-64 be able to do it with some sleight-of-hand (maybe only in ML)?

Absolutely - a C64, AIUI, has easy access to a memory map with 64k of RAM available. Not so much from Basic, but then Basic is only good for figuring out what needs to be done - that's how we used it in our adventure - not for undertaking the big computation itself.

I should take a closer look at the Woz article. First observation: a nice bit of orientation for us:
Quote:
Although this computation has little intrinsic value or use, the experience was stimulating and educational. The problems I was forced to overcome gave me insights that greatly contributed to new floating-point routines.


Second observation: a naive approach needs two bignums, so it would deliver only half as much of e as there is available RAM. Some cleverness is needed. In our case we overlap the two bignums' use of memory - one stored big-endian and one little-endian, to help with sharing the space. In Woz's case, it looks like he's able to do the arithmetic in reverse order, so only one working bignum is needed.
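One way such a reverse-order evaluation could work - a sketch of the idea in Python, not Woz's actual routine - is to rewrite e - 2 = 1/2! + 1/3! + ... in nested (Horner) form, e - 2 = (1 + (1 + (1 + ...)/4)/3)/2, and evaluate from the innermost divisor outward. Each pass is then just one in-place divide of a single byte array, exactly the kind of bignum a 6502 program would hold in RAM:

```python
def horner_e(nbytes, terms):
    """Fractional bytes (base 256, MSB first) of e - 2, using one bignum.

    Evaluates e - 2 = (1 + (1 + (1 + ...)/4)/3)/2 from the innermost
    divisor outward, so only one working array is ever needed.
    """
    x = [0] * nbytes
    for k in range(terms, 1, -1):       # k = terms, ..., 3, 2
        rem = 1                         # the "+1" integer part of (1 + x) / k
        for i in range(nbytes):         # long division, one byte at a time
            cur = rem * 256 + x[i]
            x[i], rem = divmod(cur, k)  # truncating; remainder carries down
    return x

# First bytes come out as 0xB7 0xE1 0x51 0x62, matching e = 2.B7E15162... in hex
print(horner_e(8, 30))
```

Truncation error in this scheme is tiny, because each pass's lost remainder is itself divided by all the later divisors.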

We need to perform a great many divisions - 4 per digit, or 7 per three digits, so hundreds of thousands in total. Woz only needs to perform some thirty thousand, but they are shaped somewhat like ours:
Quote:
The problem at hand calls for the division of a very large dividend (possibly several kilobytes) by a moderate divisor (2 bytes).

(Our divisors range up to 4 bytes)

Edit: I see we also rediscovered/reinvented another tactic Woz used:
Quote:
Also, the critical steps 3, 4, and 5 can be coded eight times in-line to avoid the overhead time of a loop. And because the divisor changes infrequently, it can be coded as fast immediate-mode data. After each full divide, the code, which resides in programmable memory, can be modified for the next divisor.
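As a rough Python analogue of that tactic - an analogy only, since Woz is patching immediate operands in 6502 machine code - the divisor can be bound into a specialised routine once per divisor, with the critical divide step written out eight times so the loop overhead is paid once per eight bytes:

```python
def make_divider(d):
    """Return an in-place byte-array divider specialised for divisor d.

    Binding d here plays the role of the patched immediate operand; the
    eight repeated statements play the role of the in-line coded steps.
    Assumes len(digits) is a multiple of 8 (MSB first, base 256).
    """
    def divide(digits):
        rem = 0
        for i in range(0, len(digits), 8):
            cur = rem * 256 + digits[i];     digits[i]     = cur // d; rem = cur % d
            cur = rem * 256 + digits[i + 1]; digits[i + 1] = cur // d; rem = cur % d
            cur = rem * 256 + digits[i + 2]; digits[i + 2] = cur // d; rem = cur % d
            cur = rem * 256 + digits[i + 3]; digits[i + 3] = cur // d; rem = cur % d
            cur = rem * 256 + digits[i + 4]; digits[i + 4] = cur // d; rem = cur % d
            cur = rem * 256 + digits[i + 5]; digits[i + 5] = cur // d; rem = cur % d
            cur = rem * 256 + digits[i + 6]; digits[i + 6] = cur // d; rem = cur % d
            cur = rem * 256 + digits[i + 7]; digits[i + 7] = cur // d; rem = cur % d
        return rem                           # final remainder, discarded or kept
    return divide
```

In Python the unrolling buys nothing, of course - the point is the shape of the 6502 version, where each repeated step is a handful of instructions and the branch is taken one eighth as often.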


Edit: I see both projects also used screen memory at one stage to help visualise progress:
Quote:
The e array begins at hexadecimal location 800 (which is the most significant byte of the array). This secondary text-screen page of the Apple II allows you to view roughly the first 1 K bytes of e as they are calculated. Although the character representation is not readily useful, it is at least comforting to observe that the program is working on the correct section of memory.


Edit: another informative snippet from Woz:
Quote:
In a later program that calculated e to 116,000 digits, I used 47 K bytes (188 pages of 256 bytes each) of memory, and the maximum divisor was 28,800. The divisors were grouped into fifteen blocks of 2 K-byte divisors each, and the number of memory pages not to be divided were precalculated for each block (see table 3). This version of the program used a lookup table to determine how many pages to divide (188 minus the number not to divide) for each divisor. This technique proved extremely beneficial because it reduced the computation time from four days to two.
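That bookkeeping can be sketched in Python (hypothetical names; Woz precomputed a lookup table per 2 K block of divisors, whereas this sketch just tracks the skip point directly). In the forward scheme each term 1/k! gains leading zero bytes as k grows, so both the divide and the accumulate can start at the first possibly-nonzero byte:

```python
def e_fraction(nbytes, kmax):
    """Fractional bytes (base 256, MSB first) of e - 2, skipping zero pages."""
    acc = [0] * nbytes   # running sum of the terms 1/2! + 1/3! + ...
    t = [0] * nbytes     # current term 1/k!, same layout
    first = 0            # index of first possibly-nonzero byte of t
    rem = 1              # seed: the integer 1, about to be divided by k = 2
    for k in range(2, kmax + 1):
        for i in range(first, nbytes):      # divide t by k in place,
            cur = rem * 256 + t[i]          # skipping the known-zero bytes
            t[i], rem = divmod(cur, k)
        rem = 0                             # truncate; next pass divides t only
        while first < nbytes and t[first] == 0:
            first += 1                      # 1/k! grew more leading zero bytes
        carry = 0                           # acc += t, also starting at 'first'
        for i in range(nbytes - 1, first - 1, -1):
            carry, acc[i] = divmod(acc[i] + t[i] + carry, 256)
        i = first - 1
        while carry and i >= 0:             # propagate any carry past the skip
            carry, acc[i] = divmod(acc[i] + carry, 256)
            i -= 1
    return acc
```

Since the divisor climbs to 28,800 in Woz's 116,000-digit run, the skipped region grows fast, which is how the precomputed skip counts cut the run from four days to two.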

