Quote:
Yet to be finished are the 640x400 graphic LCD and IBM keyboard ports.
I'm sure many here (including myself) would be interested in the software interface to these. I have most of the info here on the IBM AT keyboard, but I don't think it tells how to turn the three lights off and on.
Back on name lengths... 32 works out nicely in Forth not just because it's a reasonable length, but because it lets the first byte of a Forth word's name field hold 5 bits for the length, 1 for precedence, and 1 for smudge, while still leaving the high bit always set to facilitate finding the beginning of the name field when backing up from the code field or link field addresses. I won't go into the significance of the extra bits much here since Forth is not the point of this forum, but I'll be glad to discuss the innards with anyone in private E-mail. Having the 5 LSBs tell the length of the name means you don't have to use up 31 bytes for every name, only as much as the name needs. So if you had 6 bits for name length, allowing names up to 63 bytes, but never used more than 31, the extra allowance would not take any more room in memory, or in your case, on the hard disc.
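For anyone curious, here's a minimal sketch in C of how that first name-field byte packs together. The exact bit assignments vary from Forth to Forth; the masks below follow the common fig-Forth-style arrangement described above:

Code:
#include <stdio.h>
#include <stdint.h>

/* Bit layout of the first byte of a Forth word's name field
   (a common arrangement; exact bit positions vary by Forth). */
#define NF_START     0x80u  /* high bit always set: marks start of name field */
#define NF_IMMEDIATE 0x40u  /* precedence bit: word runs even while compiling */
#define NF_SMUDGE    0x20u  /* smudge bit: hides the word during compilation */
#define NF_LENGTH    0x1Fu  /* low 5 bits: name length, 0..31 */

static uint8_t make_name_byte(unsigned len, int immediate, int smudged)
{
    return (uint8_t)(NF_START
                     | (immediate ? NF_IMMEDIATE : 0)
                     | (smudged   ? NF_SMUDGE    : 0)
                     | (len & NF_LENGTH));
}

int main(void)
{
    uint8_t b = make_name_byte(4, 1, 0);   /* e.g. an immediate word named LOOP */
    printf("name byte = %02X, length = %u\n",
           (unsigned)b, (unsigned)(b & NF_LENGTH));
    return 0;
}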
Quote:
Quote:
As to an availability map, file-allocation table, etc. on a disc, I cannot speak from experience; but have you considered file chains?
Thought about it, but it will mean, I think, having pointers stored in each sector. That makes data transfers a little more awkward than the clean 512 bytes/sector you get with a FAT; but the big minus for chains would be searching the chain for free space on a big drive. OK, so I'm currently only using 17 sectors (8.5K) on the Trailblazer, but I want to be able to use it all easily.
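To make the trade-off concrete, here's a rough sketch in C, with illustrative field sizes: chaining a link into every sector eats a few bytes out of each sector's payload, while a FAT keeps the full 512 data bytes per sector and moves the links into a table you can scan in RAM for free space:

Code:
#include <stdint.h>

#define SECTOR_SIZE 512

/* Chained sectors: the link lives inside the sector, so the data
   payload is SECTOR_SIZE minus the link (a 4-byte link is assumed). */
struct chained_sector {
    uint32_t next;                                  /* next sector, 0 = end   */
    uint8_t  data[SECTOR_SIZE - sizeof(uint32_t)];  /* only 508 data bytes    */
};

/* FAT approach: sectors are pure data; the links live in a separate
   table indexed by sector number (fat[n] = sector that follows n). */
struct fat_sector {
    uint8_t data[SECTOR_SIZE];                      /* full 512 data bytes    */
};

#define FAT_FREE 0xFFFFFFFFu    /* marker for an unallocated sector */
#define FAT_EOF  0x00000000u    /* marker for end of a file's chain */

/* Finding free space becomes a linear scan of the table in RAM,
   not a walk of sector links out on the disc. */
uint32_t fat_find_free(const uint32_t *table, uint32_t nsectors)
{
    for (uint32_t i = 1; i < nsectors; i++)
        if (table[i] == FAT_FREE)
            return i;
    return 0;   /* no free sector */
}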
Thinking a little more about it, I do think it would be impractical to use the file-chain idea on a large disc (1.8GB!) if in fact you really have thousands of files on it, rather than fewer, really long ones, or leaving much of the disc untouched. (I'm sure that if you do use anywhere near that much space with a 6502 system, it would be for digital recording of some type, and not for humongous programs!) But to avoid having to search super-long chains of files, you could have several chains on the one disc, possibly dividing the files into categories.

Within a file though, there would not need to be info in each sector pointing to where the next sector is (as in a system where files can get badly fragmented), so reading should not be difficult or slow. If you can keep the whole file together, part of the info in the file's header would tell how long the file is, and you can just read sequentially until you get to the end. Of course, avoiding fragmentation when you edit the file and make it longer will require either scooting other files around to make room, or putting it somewhere else where there's enough room all in one place. It might be a good idea to store the file with a few extra sectors at the end, so the length can be changed some without having to scoot other files around as often. The file's header would tell not only how long the file is, but how much space is allocated to it, so that as you skip through the chain looking for a particular file, the header for the next file is easy to find.
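Just as a sketch of the idea (the field and routine names are invented, and read_sector stands in for whatever your driver provides), the header and the skip-through-the-chain search might look something like this in C:

Code:
#include <stdint.h>
#include <string.h>

#define SECTOR_SIZE 512

/* Hypothetical per-file header, stored at the start of the file's
   first sector.  "allocated" can exceed what "length" needs, so a
   file can grow a little without scooting its neighbours around. */
struct file_header {
    char     name[32];      /* 31-character names, as discussed above */
    uint32_t length;        /* bytes actually in use                  */
    uint32_t allocated;     /* sectors reserved for this file         */
};

/* Assumed driver routine: fetch one 512-byte sector into buf. */
extern int read_sector(uint32_t sector, uint8_t *buf);

/* Walk one chain looking for a file by name.  Because files are kept
   contiguous, each header says exactly where the next one starts:
   just hop "allocated" sectors ahead. */
int32_t find_file(uint32_t first_sector, uint32_t chain_end, const char *name)
{
    uint8_t buf[SECTOR_SIZE];
    uint32_t s = first_sector;

    while (s < chain_end) {
        struct file_header h;
        if (read_sector(s, buf) != 0)
            return -1;                      /* read error            */
        memcpy(&h, buf, sizeof h);
        if (strncmp(h.name, name, sizeof h.name) == 0)
            return (int32_t)s;              /* found: file starts here */
        if (h.allocated == 0)
            return -1;                      /* corrupt header; bail  */
        s += h.allocated;                   /* hop to the next header */
    }
    return -1;                              /* not in this chain     */
}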
Quote:
Quote:
I've done this in battery-backed RAM without a FAT (yes, in a 6502 system).
Want to share the details?
In this case, there was just one chain in the memory. Partly to keep extra checks on the integrity of the chain, one variable held the number of files in the chain, another held the address of the first byte after the last EOF, and that byte had to hold a certain value. If any discrepancy was found after skipping through the chain by the headers, we figured that the back-up power had gone down low enough to really start fouling things up, so we would wipe it out and give the dreaded "memory lost" message. In reality, that never seemed to happen; but if it happened more than once in a blue moon while the back-up power all seemed fine, the thing to do would have been to write a utility program that searched through the chain and tried to put the pieces back together, warning if a given file appeared to be corrupted. Then at least you don't lose everything without an explanation.

Since this file chain was all in RAM that's not divided into sectors or anything like that, we just put files right up against each other so all the free space was together at the end. This means that if a file was edited and enlarged, we had to scoot all the following files down to make room, and if the file was shrunk, all the following files were moved up to close the gap. Being in RAM, all the extra movement of files went fast enough that the user was unaware of it.
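The actual code was of course not in C, but reconstructing the idea, the integrity check amounted to something like this (the names, header format, and sentinel value are invented for illustration):

Code:
#include <stdint.h>
#include <string.h>

#define END_MARKER 0xA5u  /* stand-in for the "certain value" after the last EOF */

/* Walk the chain and cross-check it against the two bookkeeping
   variables.  Each file is assumed to start with a 2-byte length,
   followed immediately by its data; files sit back to back.
   Returns 1 if all three checks agree, 0 if it's time for the
   dreaded "memory lost" message. */
int chain_ok(const uint8_t *base, uint16_t file_count, const uint8_t *end_ptr)
{
    const uint8_t *p = base;
    uint16_t count = 0;

    while (p < end_ptr) {
        uint16_t len;
        memcpy(&len, p, sizeof len);    /* this file's data length */
        p += sizeof len + len;          /* step over header + data */
        count++;
    }
    return count == file_count          /* variable 1: number of files   */
        && p == end_ptr                 /* variable 2: end-of-chain addr */
        && *end_ptr == END_MARKER;      /* the sentinel byte itself      */
}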
With the sectored system of a disc, you would normally start every file header at the beginning of a sector, so that if the file chain ever did get corrupted, it would be easier to rescue most or possibly all of the data.