PostPosted: Fri Jan 24, 2020 8:38 pm 

Joined: Thu May 28, 2009 9:46 pm
Posts: 8403
Location: Midwestern USA
cjs wrote:
I've separated the block device interface from the filesystem implementation for obvious reasons.

Excellent! That opens the door to (in theory) using any kind of mass storage on which random access is possible.

_________________
x86?  We ain't got no x86.  We don't NEED no stinking x86!


PostPosted: Fri Jan 24, 2020 9:45 pm 

Joined: Sat Dec 01, 2018 1:53 pm
Posts: 727
Location: Tokyo, Japan
BigDumbDinosaur wrote:
cjs wrote:
I've separated the block device interface from the filesystem implementation for obvious reasons.
Excellent! That opens the door to (in theory) using any kind of mass storage on which random access is possible.

Well, in practice too, I hope! Even semi-kinda-random-access. (DECtape, I hear you calling me!*)

More seriously, that's the general idea, and I'm also hoping one might be able to layer other filesystems, where necessary, on top of that block driver. But I'm not entirely sure I've divided things up the right way exactly. Experience will tell, I guess. (And comments from those experienced with this kind of stuff!)

----------
* Or for you younger kids, that new "stringy floppies" thing. Your parents will tell you about DECtape when you get older.

_________________
Curt J. Sampson - github.com/0cjs


PostPosted: Fri Jan 24, 2020 10:06 pm 

Joined: Wed Feb 14, 2018 2:33 pm
Posts: 1467
Location: Scotland
[quote="cjs" (And comments from those experienced with this kind of stuff!) [/quote]

For each open file you'll probably need at least one block of data in RAM... Those 4K blocks will soon eat up resources in a 64KB 6502... That's one reason I stuck with 512 byte blocks in RuFS. I also cache some disk and file metadata, which you might want to do too if using slower (i.e. floppy) technology...

-Gordon

_________________
--
Gordon Henderson.
See my Ruby 6502 and 65816 SBC projects here: https://projects.drogon.net/ruby/


PostPosted: Fri Jan 24, 2020 10:27 pm 

Joined: Mon Nov 25, 2019 4:46 am
Posts: 19
floobydust wrote:
Well, the bridging comment was strictly a historical recap. I'm not looking to implement anything on older (vintage) 6502 machines.

I also noted that a standalone filesystem, e.g., an intelligent storage subsystem, would be preferred. In this case, you could write a command processor for any system and add the appropriate hardware interface to the intelligent storage subsystem.

Also... re: the Forth FAT implementation... very interesting indeed, but I'm leaning towards a native assembler-based filesystem. I will read up on that thread more, however, and thanks for pointing it out... I generally don't look in the Forth section.

There's still the option to implement something other than FAT, but there are trade-offs to going a different route, which could be a separate thread unto itself.

I'm using the Seagate ATA Interface Reference Manual (36111-001 Rev. C, dated 21st May 1993) for the hardware and BIOS design.
My current plan is to use 24-bit Logical Block Addressing. This will limit BIOS access to 8GB (16-million 512-byte blocks), but I think this should be sufficient for an initial release. With an 8GB limit, FAT32 will pretty much be a requirement to keep the cluster size down, as the media capacity goes up beyond 512MB.

I also have a decent collection of older 2.5-inch (Travelstar) IDE drives which have sizes below 8GB (5120MB, 4320MB, 3250MB, 2210MB and smaller!) which are mostly in near new condition. I also have some Apacer ATA modules (512MB) which plug directly into a 2.5-inch connector (and the PCB supports this module). I'm also considering a PCB layout that will have a Compact Flash Type-II socket instead of the 44-pin IDE connector. This would permit use of the older IBM Microdrive in addition to Compact Flash cards.


While not 65C02-based, there is a nice example from years ago of an ATMEL AVR implementing a PATA-to-RS232 adapter: https://www.opend.co.za/hardware/avride/avride.htm. With a newer part such as the mega1284p and its 16KB of SRAM, supporting later-model drives should be easy; the larger on-chip SRAM makes it possible to hold one or more 4K sectors in the drive-controller chip for manipulation before writing them back to disk. Not that we would ever need a multi-terabyte storage device, but it's nice to know that if one day that is the only viable option, we could use it.


PostPosted: Sat Jan 25, 2020 4:18 am 

Joined: Sat Dec 01, 2018 1:53 pm
Posts: 727
Location: Tokyo, Japan
drogon wrote:
For each open file you'll probably need at least one block of data in RAM... Those 4K blocks will soon eat up resources in a 64KB 6502... One reason I stuck with 512 byte blocks in RuFS.

Ah, yes. I now realize it wasn't clear that support for block sizes larger than 256 bytes was there to provide an option for situations or media where you really have to have larger blocks, as opposed to being anything you'd want to use normally. I've tweaked the document to say so.

Quote:
I also cache some disk and file metadata too which you might also want to do if using slower (ie. floppy) technology...

Yes, that option's certainly available. In fact, that's the exact reason that the on-disk free block map, file size fields in the directory, and the like are noted to be "cached" values that may be inaccurate; it allows the particular file system implementation to decide the trade-off it wants to make between read and write speeds, and how much extra work might need to be done after removing and reloading a volume. Read-only filesystem implementations might actually use none of that at all; systems with significant extra RAM (e.g., in other banks) might have an implementation that caches in RAM even the entire free block map and directory, if that's worthwhile for them.

Come to think of it, it would also probably be worth caching in the filesystem the last-used file number (8 bits) so that one could always do a (modular) increment of that when generating a new file number; this would allow deletion or replacement of a file without having to go through and explicitly deallocate every block. On moderately large filesystems, especially ones with constant "seek" time, the actual block deallocation might be done only when you run out of known-unused blocks, as a single pass through the disk. I'm not sure about the best place to store that information: the free block map or the beginning of the first directory block or....
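
As a rough sketch of that modular-increment idea in C (the names here are invented and not part of the spec; file number 0 is assumed to mean "unused"):

Code:
#include <stdint.h>

/* Cached last-used file number, e.g. read in when the volume is mounted. */
static uint8_t last_file_number;

/* Hand out a new 8-bit file number by modular increment, skipping 0.
   A deleted file's blocks keep the old number on disk and are reclaimed
   later in a single sweep, once the supply of known-free blocks runs out. */
uint8_t next_file_number(void)
{
    do {
        last_file_number++;          /* wraps 255 -> 0 automatically */
    } while (last_file_number == 0);
    return last_file_number;
}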

_________________
Curt J. Sampson - github.com/0cjs


PostPosted: Sat Jan 25, 2020 5:33 am 

Joined: Thu May 28, 2009 9:46 pm
Posts: 8403
Location: Midwestern USA
drogon wrote:
For each open file you'll probably need at least one block of data in RAM...

"Block of data" in this context is somewhat ambiguous. What I think you are referring to is a file descriptor, which in the UNIX/Linux world is an inode. The S51K filesystem introduced with UNIX System IV used a surprisingly compact 128 byte in-core inode structure. My 816NIX filesystem uses an inode size that will fit into 128 bytes on disk, with an in-core size of 148 bytes. The in-core copy is somewhat larger due to the presence of housekeeping data that is not maintained in the disk copy.

The amount of space in memory required by a file descriptor would ultimately depend on how much state information needs to be immediately available to the filesystem driver. Most filesystem drivers are designed to keep a file's descriptor in core as long as the file remains opened by at least one process. Doing so economizes on the MPU time consumed in updating the descriptor when the file is being accessed—especially during write operations, and also cuts down on disk activity. When the file is closed, the inode is written back to disk. A background process may periodically write "dirty" descriptors to reduce the chances of a crash resulting in a corrupted or inaccessible file. Other disk buffers may also be written at that time for the same reason.
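
To make the on-disk/in-core split concrete, here is a rough C sketch with invented field names (this is not the actual S51K or 816NIX layout):

Code:
#include <stdint.h>

/* Persistent part: the image that gets written back to disk. */
struct disk_inode {
    uint16_t mode;          /* file type and permission bits */
    uint16_t link_count;
    uint32_t size;          /* file size in bytes */
    uint32_t blocks[25];    /* direct/indirect block pointers */
    uint32_t mtime;         /* last-modified time */
};                          /* padded out to 128 bytes on disk */

/* In-core copy: the disk image plus housekeeping that never hits the disk. */
struct incore_inode {
    struct disk_inode d;    /* copy of the on-disk structure */
    uint16_t inumber;       /* which inode this is */
    uint8_t  device;        /* which mounted volume it belongs to */
    uint8_t  ref_count;     /* open references; reusable once this hits 0 */
    uint8_t  dirty;         /* must be written back on close or sync */
};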

_________________
x86?  We ain't got no x86.  We don't NEED no stinking x86!


PostPosted: Sat Jan 25, 2020 6:34 am 

Joined: Wed Feb 14, 2018 2:33 pm
Posts: 1467
Location: Scotland
BigDumbDinosaur wrote:
drogon wrote:
For each open file you'll probably need at least one block of data in RAM...

"Block of data" in this context is somewhat ambiguous.


I meant a media block representing data out of an open file. If the media has a 4096 byte block size then you'll need to read a block of that into memory to serve a program that's taking data byte at a time from that file. You soon get memory issues with more than one file open, although in a small system that may not be an issue.

-Gordon

_________________
--
Gordon Henderson.
See my Ruby 6502 and 65816 SBC projects here: https://projects.drogon.net/ruby/


PostPosted: Sat Jan 25, 2020 6:44 am 

Joined: Sat Dec 01, 2018 1:53 pm
Posts: 727
Location: Tokyo, Japan
BigDumbDinosaur wrote:
drogon wrote:
For each open file you'll probably need at least one block of data in RAM...

"Block of data" in this context is somewhat ambiguous. What I think you are referring to is a file descriptor....

Hmm. I took it as "a data block beyond any metadata you need to keep in memory." The idea being, if there's an open file and I want to append a byte to it, I need to have in memory the block I'm modifying. It would be possible to read, modify, and write it every time there's an append, freeing the memory needed for that block between append operations, but that would be pretty darn inefficient.

More generally, the issue arises for pretty much any sort of file modification; I don't see any way around that. I did, however, try to design the block I/O interface so that one could avoid buffers (i.e., have zero-copy disk I/O) for what I perceive to be the more common cases of reading an entire file or writing an entire file de novo (e.g., SAVE "myprog.bas" or LOAD "myprog.bas").

For reading, the idea is that you transfer bytes read from the disk controller directly to the target address in memory, including the metadata bytes at the end of a block; then, after doing the appropriate thing with the metadata (such as getting the address of the next block), you set your next start point for depositing data to the start of that metadata, overwriting it with the actual file data from the next block.

Writing might be a little more tricky, depending on the disk controller. If you're chugging bytes out of an I/O port it's probably fine; you ensure you've got your metadata handy elsewhere in memory, and then go through the BLKSIZ-6 bytes of file data followed by the six bytes of metadata for the block. If you must use DMA and you don't have scatter-gather, you could save elsewhere the six bytes that follow your current chunk of file data, replace them with the metadata, and then restore the old data after the write.
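
A small C sketch of that DMA trick, assuming a blk_write_dma() call that transfers one whole block and a six-byte trailer (both invented for illustration):

Code:
#include <stdint.h>
#include <string.h>

#define BLKSIZ  512
#define TRAILER 6

/* Hypothetical driver call: DMA-write BLKSIZ contiguous bytes as one block. */
extern int blk_write_dma(uint16_t block, const uint8_t *src);

/* Write one block's worth of file data that sits contiguously in memory,
   temporarily borrowing the six bytes that follow it for the metadata. */
int write_block_in_place(uint16_t block, uint8_t *data,
                         const uint8_t trailer[TRAILER])
{
    uint8_t saved[TRAILER];
    int rc;

    memcpy(saved, data + BLKSIZ - TRAILER, TRAILER);   /* park the next chunk's bytes */
    memcpy(data + BLKSIZ - TRAILER, trailer, TRAILER); /* drop the metadata in */
    rc = blk_write_dma(block, data);
    memcpy(data + BLKSIZ - TRAILER, saved, TRAILER);   /* put the old data back */
    return rc;
}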

Quote:
The S51K filesystem introduced with UNIX System IV used a surprisingly compact 128 byte in-core inode structure. My 816NIX filesystem uses an inode size that will fit into 128 bytes on disk, with an in-core size of 148 bytes.

Ya, you kids these days with your massive 32, 48 and even 64 KB RAM systems can afford to throw around hundreds of bytes of memory like that. :-)

So one of the things I like about my design (assuming it actually works! :-P) is that very little state is required for reading through a file: basically just the current block number (two bytes), once the file's been opened. Every block read gives you both the data and the next (and previous) block number. Great for sequential reads, though obviously if you're doing random access you're going to pay the price in having to read every block from the start of the file to your furthest seek point, even if you do cache all the block numbers in memory after that. I think it's a worthwhile tradeoff for the kinds of operations I do (which generally involve just reading and writing whole files), but if you've got a heavily random-access-oriented application you might want a different filesystem for it. (I'd imagine that an RDBMS, for example, would simply want to use a block device directly through the block I/O layer.)

_________________
Curt J. Sampson - github.com/0cjs


PostPosted: Sat Jan 25, 2020 6:51 am 

Joined: Sat Dec 01, 2018 1:53 pm
Posts: 727
Location: Tokyo, Japan
drogon wrote:
I meant a media block representing data out of an open file. If the media has a 4096 byte block size then you'll need to read a block of that into memory to serve a program that's taking data byte at a time from that file. You soon get memory issues with more than one file open, although in a small system that may not be an issue.

Ok, so yes, I did read you right.

That said, if memory pressure is a serious issue, one way of dealing with this is to extend the block I/O API to allow reading of partial blocks, if the hardware can support this. Say, for example, you want at the moment to read just bytes 32-63 of the current block. If the block I/O API were extended to take parameters along the lines of block number, starting byte and length, and you had a controller where you do programmed I/O reading a byte at a time, you could read bytes 0-31 of the block, throwing them away as you go, read bytes 32-63 and write them to RAM starting at the target address, and continue to read the rest, throwing them away (or possibly abort the read, if your controller supports such a thing). Not efficient when you have to go back and read the block again to get the next bytes, but such is the nature of trying to minimize RAM use, I suppose.
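
Something along these lines, say, where pio_begin_read() and pio_next_byte() are stand-ins for whatever the real controller interface turns out to be:

Code:
#include <stdint.h>

#define BLKSIZ 4096

extern void    pio_begin_read(uint16_t block);  /* start streaming one block */
extern uint8_t pio_next_byte(void);             /* fetch its next byte */

/* Read only bytes [offset, offset+len) of a block into dest, discarding
   the rest.  Assumes offset + len <= BLKSIZ. */
void read_partial(uint16_t block, uint16_t offset, uint16_t len, uint8_t *dest)
{
    uint16_t i;

    pio_begin_read(block);
    for (i = 0; i < offset; i++)
        (void)pio_next_byte();                  /* skip up to the window */
    for (i = 0; i < len; i++)
        dest[i] = pio_next_byte();              /* keep the requested bytes */
    for (i = offset + len; i < BLKSIZ; i++)
        (void)pio_next_byte();                  /* drain (or abort, if supported) */
}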

But again, I don't anticipate 4096 byte blocks being a normal thing; it's there just to support that wonky CD-ROM interface you somehow bodged on to your Apple 1. :-)

_________________
Curt J. Sampson - github.com/0cjs


PostPosted: Sat Jan 25, 2020 3:07 pm 

Joined: Wed Feb 14, 2018 2:33 pm
Posts: 1467
Location: Scotland
cjs wrote:
More generally, the issue arises for pretty much any sort of file modification. I don't see any way around that. I did however, try to design the block I/O interface so that one could avoid buffers (i.e., have zero-copy disk I/O) for what I perceive to be the more common cases of reading an entire file or writing an entire file de novo (e.g., SAVE "myprog.bas" or LOAD "myprog.bas").


That makes sense, and it's what the Acorn MOS (and my RubyOS) starts with: a basic "file at a time" interface (OSFILE). This, as the name suggests, deals with whole files - ideal for BASIC load/save - and works to tape as well as floppy. (The BBC Micro, like most machines of the time, came with a tape interface, but could easily be upgraded for disk.) That more or less negates the need for in-memory data buffers, as you can go directly from RAM to media and back again, although reading the last block may involve a buffer unless you're happy to overwrite RAM - e.g. loading a 10-byte program from a device with a 512-byte block size, where overwriting the "unused" 502 bytes may or may not be an issue. (Sort of harks back to the CP/M filesystem, which handled text files badly IMO - Ctrl-Z terminated and all that.)
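
In C terms, a whole-file interface in that spirit boils down to just two entry points (only a sketch of the shape; the real OSFILE call is driven by a parameter block and registers, not C arguments):

Code:
#include <stdint.h>
#include <stddef.h>

/* Load the named file in its entirety to addr; returns its length in bytes,
   or -1 on error.  The caller guarantees the buffer is big enough. */
long file_load(const char *name, uint8_t *addr);

/* Save len bytes starting at addr as the named file; returns 0 on success. */
int file_save(const char *name, const uint8_t *addr, size_t len);

/* A BASIC-style SAVE/LOAD then reduces to:
     file_save("MYPROG", prog_start, prog_end - prog_start);
     file_load("MYPROG", prog_start);                         */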

Filing systems for the Beeb (there were many) were expected to support that interface (possibly even read-only ones - ROMs) and all the rest was a bonus...

There are then additional system calls to let you start to do byte at a time access to files, seek, append and so on.

Reading the Acorn DFS and ADFS manuals might be of interest, or possibly the Advanced User Guide, which details all the system calls and the filing system interface.

http://stardot.org.uk/mirrors/www.bbcdocs.com/filebase/essentials/BBC%20Microcomputer%20Advanced%20User%20Guide.pdf

(round about page 333)

Cheers,

-Gordon

_________________
--
Gordon Henderson.
See my Ruby 6502 and 65816 SBC projects here: https://projects.drogon.net/ruby/


PostPosted: Sun Jan 26, 2020 3:55 am 

Joined: Thu May 28, 2009 9:46 pm
Posts: 8403
Location: Midwestern USA
drogon wrote:
BigDumbDinosaur wrote:
drogon wrote:
For each open file you’ll probably need at least one block of data in RAM...

"Block of data" in this context is somewhat ambiguous.

I meant a media block representing data out of an open file. If the media has a 4096 byte block size then you’ll need to read a block of that into memory to serve a program that’s taking data byte at a time from that file. You soon get memory issues with more than one file open, although in a small system that may not be an issue.

Oh, you’re referring to the buffer pool.  Yep, that can be a problem in a small system whose mass storage eats up several KB per block.  Theoretically, you can have more files opened than can be buffered—if you are willing to put up with a lot of thrashing.  Throughput will go into the toilet, but it can be made to work.

_________________
x86?  We ain't got no x86.  We don't NEED no stinking x86!



PostPosted: Sun Jan 26, 2020 6:34 am 

Joined: Sat Dec 01, 2018 1:53 pm
Posts: 727
Location: Tokyo, Japan
drogon wrote:
That more or less negates the need for in-memory data buffers, as you can go directly from RAM to media and back again, although reading the last block may involve a buffer unless you're happy to overwrite RAM - e.g. loading a 10-byte program from a device with a 512-byte block size, where overwriting the "unused" 502 bytes may or may not be an issue. (Sort of harks back to the CP/M filesystem, which handled text files badly IMO - Ctrl-Z terminated and all that.)

Yeah, CP/M keeping all file lengths as multiples of a block, with no exact byte count, was one of the several things it did in not such a great way.

The overwriting RAM thing may or may not be an issue; it does rather depend on whether your block driver uses PIO (which I think most would). Thinking about this further, I think it might be an idea to be able to request reads of partial blocks (i.e., parameters: destination address, block number, starting byte in block, length) and let the block driver decide whether it's going to support that or return an error. Or perhaps it should just always support it and always have its own one-sector (or more) buffer available in RAM if that's necessary for the particular hardware. But the former makes for simpler block drivers (at the expense of the client having to deal with buffering, if required), which is probably the better way to go.
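
In C terms the simpler option might look something like this (the call name and error code are invented):

Code:
#include <stdint.h>

#define BIO_OK        0
#define BIO_EPARTIAL  1   /* this device can only transfer whole blocks */

/* Read len bytes of the given block, starting at offset, into dest.
   A driver for whole-block-only hardware can simply decline partial
   requests and leave the buffering and slicing to its caller. */
int bio_read(uint8_t *dest, uint16_t block, uint16_t offset, uint16_t len)
{
    if (offset != 0 || len != 512)
        return BIO_EPARTIAL;
    (void)dest; (void)block;   /* ...whole-block transfer would go here... */
    return BIO_OK;
}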

Quote:
Reading the Acorn DFS and ADFS manuals might be of interest, or possible the Advanced User Guide which details all the system calls and the filing system interface.
http://stardot.org.uk/mirrors/www.bbcdocs.com/filebase/essentials/BBC%20Microcomputer%20Advanced%20User%20Guide.pdf (round about page 333)

Yes, thanks for that. It's definitely interesting reading and providing further food for thought.

_________________
Curt J. Sampson - github.com/0cjs


PostPosted: Sun Jan 26, 2020 1:36 pm 

Joined: Mon May 21, 2018 8:09 pm
Posts: 1462
A natural block size for the 6502 is 256 bytes, as it's easy to write loops of that length. This doesn't need to correspond to the physical block or sector size of the underlying device, as long as you have a way of reading partial or multiple sectors to match a block, and of updating a partial sector when writing a block (which obviously doesn't matter for read-only devices like CD-ROMs). I think it's reasonable to allocate space for 4 such blocks, as most reasonable programs for a small machine will have at most 3 files open at once plus a cache of a directory block, and 1KB is reasonably affordable in a 64KB address space.

LRU cache-replacement rules may be an appropriate starting point, perhaps with a modification to keep dirty blocks (being written) longer than clean ones (being read).
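
A toy version of that replacement rule in C, purely to illustrate the idea (four fixed buffers and a flush() hook are assumed):

Code:
#include <stdint.h>

#define NBUF 4

struct buf {
    uint16_t block;      /* which block this buffer holds */
    uint8_t  valid;      /* holds anything at all? */
    uint8_t  dirty;      /* modified since it was read? */
    uint32_t last_use;   /* tick of most recent access */
    uint8_t  data[256];
};

static struct buf cache[NBUF];

extern void flush(struct buf *b);   /* write a dirty buffer back to the device */

/* Choose a buffer to reuse: any empty one first, then the oldest clean one,
   and only if every buffer is dirty fall back to flushing the oldest. */
struct buf *pick_victim(void)
{
    struct buf *victim = &cache[0];
    int i;

    for (i = 0; i < NBUF; i++)
        if (!cache[i].valid)
            return &cache[i];

    for (i = 0; i < NBUF; i++)
        if (!cache[i].dirty &&
            (victim->dirty || cache[i].last_use < victim->last_use))
            victim = &cache[i];
    if (!victim->dirty)
        return victim;

    for (i = 1; i < NBUF; i++)       /* all dirty: plain LRU, flushed first */
        if (cache[i].last_use < victim->last_use)
            victim = &cache[i];
    flush(victim);
    victim->dirty = 0;
    return victim;
}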

Another option is to take the BBC Master's approach of paging in dedicated memory banks for certain memory-intensive system functions. This would allow choosing a block size without the need to consider the address space consumed by multiples thereof, and indeed the same block of address space might be reused for unrelated functions. For example, a 16K block of "sideways" paged space could contain 12K of filesystem code, and 4K that was further paged onto a large RAM, permitting filesystem blocks up to 4KB size.


PostPosted: Sun Jan 26, 2020 10:56 pm 

Joined: Mon Sep 17, 2018 2:39 am
Posts: 137
Hi!

Chromatix wrote:
A natural block size for the 6502 is 256 bytes, as it's easy to write loops of that length. This doesn't need to correspond to the physical block or sector size of the underlying device, as long as you have a way of reading partial or multiple sectors to match a block, and of updating a partial sector when writing a block (which obviously doesn't matter for read-only devices like CD-ROMs). I think it's reasonable to allocate space for 4 such blocks, as most reasonable programs for a small machine will have at most 3 files open at once plus a cache of a directory block, and 1KB is reasonably affordable in a 64KB address space.


In the Atari, BW-DOS can handle up to 6 open files, reserving a total of only two sector buffers of 256 bytes each, and for each open file (or folder) it uses 8 bytes: 1 byte for status, 3 bytes for the current file position (files are up to 2^24 bytes in length), 2 bytes for the current block-map sector and 2 bytes for the current data sector. Disks can have up to 65536 sectors of 256 bytes, so any disk is up to 16MB - this was huge at the time. Normal disks were 720 or 1040 sectors of 128 bytes; double-density disks with 256-byte sectors were very uncommon.
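
Written out as a C struct, that per-file state might look like this (the field order and packing are illustrative; only the sizes come from the description above):

Code:
#include <stdint.h>

struct open_file {
    uint8_t  status;        /* in-use / open-mode flags               (1 byte)  */
    uint8_t  pos[3];        /* current file position, 24 bits         (3 bytes) */
    uint16_t map_sector;    /* current block-map sector               (2 bytes) */
    uint16_t data_sector;   /* current data sector                    (2 bytes) */
};                          /* 8 bytes per open file, so 6 files = 48 bytes */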

If you read from two files one byte at a time it is very slow, as it reads a full data sector from file #1, returns one byte, then reads a full data sector from file #2 (overwriting the buffer), returns one byte, and so on. But if you read (or write) big blocks from one file at a time, it is as fast as possible.

Remember that BW-DOS, including all its buffers, variables, the command line processor (with support for batch files) and all the filesystem code used exactly 6116 bytes, less than 6kB.

Have Fun!


PostPosted: Mon Jan 27, 2020 3:25 am 

Joined: Sat Dec 01, 2018 1:53 pm
Posts: 727
Location: Tokyo, Japan
dmsc wrote:
If you read from two files one byte at a time it is very slow, as it reads a full data sector from file 1, returns one byte, then reads a full data sector from file #2 overwriting the buffer, returns one byte, etc. But if you read (or write) big blocks from one file at a time, it is as fast as possible.

Well, not quite, unless you can ask the DOS to read into its own buffer and then process the data from it yourself, and you don't need to do a bulk copy of the entire buffer elsewhere in RAM (e.g., because it's one block in the sequence of a program you're loading).

Giving a kernel subsystem an arbitrary address in memory and asking it to place data directly there, without using intermediate kernel buffers, is known as "zero-copy I/O", and was heavily used in networking protocol stacks in the '90s (at least in Unix and its clones). I think there's a lot of potential in this for older eight-bit systems, particularly since PIO is often used instead of DMA, even for block devices. Not only could this result in even more savings than on DMA systems when it comes to moving block data (because PIO is significantly slower), but with at least some PIO systems scatter-gather I/O may also be an option.

I've heard that zero-copy I/O was used in some custom loaders for games and the like on the Apple II in order to double load speed. With Apple DOS and ProDOS the sectors were interleaved on the track in order to provide time to do a copy after reading a sector but before reading the next one; thus a full track read would need at least two revolutions. I think it should be possible to do a full track read in one revolution (although the timing would be very tight) if one can avoid buffer copies between sectors.

Even on block storage systems without seek delays, such as flash RAM (SD cards, etc.), this can make a large difference in speed. Modern flash RAM is much faster than memory on old machines, so the slowest part of I/O is usually copying data. Copying it only once instead of twice can double I/O speed.

There are some other tricks one can use to help along systems like this, too. Trailer encapsulation, also used in networking (developed in 4.2BSD around 1982-1984), put metadata about chunks of information after the information itself. That's why in the filesystem design I described earlier, which has per-block metadata (file number, length of data and next and previous block numbers), I put it at the end; it's then possible (if the BIO interface supports it) to read a block at a given starting address and read the next block at the address where the metadata starts, overwriting the metadata and producing a contiguous sequence of file data in memory from a non-contiguous sequence on disk without any memory copies if the BIO can do zero-copy I/O.
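
A C sketch of that read loop, assuming a bio_read_block() call that transfers one whole block and a guessed six-byte trailer layout:

Code:
#include <stdint.h>
#include <string.h>

#define BLKSIZ  512
#define TRAILER 6
#define DATA_PER_BLOCK (BLKSIZ - TRAILER)

extern int bio_read_block(uint16_t block, uint8_t *dest);  /* reads BLKSIZ bytes */

struct trailer {            /* illustrative layout only; 6 bytes */
    uint8_t  file_number;
    uint8_t  flags;
    uint16_t next_block;    /* 0 = last block of the file */
    uint16_t prev_block;
};

/* Load a whole file into dest: each block is read to where the previous
   block's trailer landed, so the trailers are overwritten as we go and the
   file data ends up contiguous with no extra copies. */
uint32_t load_file(uint16_t first_block, uint8_t *dest)
{
    uint8_t *p = dest;
    uint16_t block = first_block;
    struct trailer t;

    while (block != 0) {
        if (bio_read_block(block, p) != 0)
            break;                              /* I/O error: stop */
        memcpy(&t, p + DATA_PER_BLOCK, sizeof t);
        block = t.next_block;
        p += DATA_PER_BLOCK;                    /* next read starts on the trailer */
    }
    return (uint32_t)(p - dest);   /* a real loader would also trim the final
                                      block using its data-length field */
}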

(As it turns out, Atari DOS stores file metadata within each block, at the end of the block. I don't know if they were considering the trailer idea at the time, though. It's exceedingly similar to what I do in the filesystem I described earlier in this thread, though I didn't know about this when I was doing my design. More information can be found in Chapter 9 of De Re Atari.)

Quote:
Remember that BW-DOS, including all its buffers, variables, the command line processor (with support for batch files) and all the filesystem code used exactly 6116 bytes, less than 6kB.

Such concision is admirable! Though one must remember that the drives themselves stored and ran the code for dealing with the actual sector reads and writes, which I'm guessing saved a half kilobyte or more. (The Apple II RWTS was 1193 bytes, but given the extreme simplicity of the disk controller, that may have been larger than the code would have been had the disk controller hardware been doing more of the work.)

I don't know if the design I did could ever include all that and still be that small, but I am hoping so. And I'm also trying to design it in a way that one can leave out more sophisticated or unnecessary components and make it significantly smaller yet, while maintaining compatibility with filesystems written by more capable systems.

_________________
Curt J. Sampson - github.com/0cjs

