PostPosted: Mon Jan 27, 2020 4:26 am 

Joined: Fri Aug 30, 2002 1:09 am
Posts: 8520
Location: Southern California
This talk about having buffers in RAM for various numbers of files open at once, and the amount of precious RAM taken by those buffers, reminds me of these earlier discussions:

Programming challenge: dynamic memory allocator and
dynamic memory allocator, methods, uses, limitations, etc.

_________________
http://WilsonMinesCo.com/ lots of 6502 resources
The "second front page" is http://wilsonminesco.com/links.html .
What's an additional VIA among friends, anyhow?


PostPosted: Mon Jan 27, 2020 12:29 pm 

Joined: Mon Sep 17, 2018 2:39 am
Posts: 137
Hi!

cjs wrote:
dmsc wrote:
If you read from two files one byte at a time it is very slow, as it reads a full data sector from file #1, returns one byte, then reads a full data sector from file #2 overwriting the buffer, returns one byte, etc. But if you read (or write) big blocks from one file at a time, it is as fast as possible.

Well, not quite, unless you can ask the DOS to read into its own buffer and then process the data from it yourself, and you don't need to do a bulk copy of the entire buffer elsewhere in RAM (e.g., because it's one block in the sequence of a program you're loading).

Giving a kernel subsystem an arbitrary address in memory and asking it to place data directly there, without using intermediate kernel buffers, is known as "zero-copy I/O", and was heavily used in networking protocol stacks in the '90s (at least in Unix and its clones). I think there's a lot of potential in this for older eight-bit systems, particularly since PIO is often used instead of DMA, even for block devices. Not only could this result in even greater savings than on DMA systems when it comes to moves of block data (because PIO is significantly slower), but with at least some PIO systems scatter-gather I/O may also be an option.


Well, BW-DOS (and other similar DOSs like Sparta DOS) used direct I/O when possible. The Atari API has two methods, "read characters" and "write characters"; the application passes a one-byte operation code, a 16-bit buffer address, and a 16-bit length. If the length allows reading a full sector into application memory, this is done, avoiding the intermediate buffer.
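
In C-flavored pseudocode the dispatch looks something like this; read_sector(), secbuf and the constants are placeholders for the sketch, not the real BW-DOS internals:

Code:
#include <stdint.h>
#include <string.h>

#define SECTOR_SIZE 128

/* Device layer: read one sector into dest; 0 on success. (Placeholder.) */
extern int read_sector(uint16_t sector, uint8_t *dest);

static uint8_t secbuf[SECTOR_SIZE];  /* the single intermediate buffer */

/* Read len bytes starting at *sector into the caller's buffer. Whole
 * sectors bypass secbuf entirely; only a trailing partial request is
 * staged through it. */
int read_bytes(uint16_t *sector, uint8_t *buf, uint16_t len)
{
    while (len >= SECTOR_SIZE) {
        if (read_sector(*sector, buf) != 0) return -1;   /* direct */
        (*sector)++; buf += SECTOR_SIZE; len -= SECTOR_SIZE;
    }
    if (len > 0) {
        if (read_sector(*sector, secbuf) != 0) return -1;
        memcpy(buf, secbuf, len);                        /* staged */
    }
    return 0;
}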

Quote:
I've heard that zero-copy I/O was used in some custom loaders for games and the like on the Apple II in order to double load speed. With Apple DOS and ProDOS, the sectors were interleaved on the track in order to provide time to do a copy after reading a sector but before reading the next one; thus a full track read would need at least two revolutions. I think it should be possible to do a full track read in one revolution (although the timing would be very tight) if one can avoid buffer copies between sectors.

Even on block storage systems without seek delays, such as flash RAM (SD cards, etc.), this can make a large difference in speed. Modern flash RAM is much faster than memory on old machines, so the slowest part of I/O is usually copying data. Copying it only once instead of twice can double I/O speed.



In the Atari, the "standard" interface speed is only 19200 baud, so you have plenty of time to copy buffers around when using modern storage. But, as you said, with floppy disks you need to be fast enough that when you read the next sector you don't have to wait for a full revolution - which would halve the loading speed!

Modern serial loaders patch the OS to provide faster speeds, up to 126 kbaud; this is still only about 10 kB/s after all the protocol overhead, so well below buffer-copying speed. And of course, people have flash disks connected to the CPU bus, but those need a special DOS and drivers, and reach read speeds of about 70 kB/s.


Quote:

There are some other tricks one can use to help along systems like this, too. Trailer encapsulation, also used in networking (developed in 4.2BSD around 1982-1984), puts metadata about chunks of information after the information itself. That's why, in the filesystem design I described earlier, which has per-block metadata (file number, length of data, and next and previous block numbers), I put it at the end: it's then possible (if the BIO interface supports it) to read a block at a given starting address and read the next block at the address where the metadata starts, overwriting the metadata and producing a contiguous sequence of file data in memory from a non-contiguous sequence on disk, without any memory copies, so long as the BIO can do zero-copy I/O.

(As it turns out, Atari DOS stores file metadata within each block, at the end of the block. I don't know whether they were considering the trailer idea at the time, though. It's exceedingly similar to what I do in the filesystem I described earlier in this thread, though I didn't know about this when I was doing my design. More information can be found in Chapter 9 of De Re Atari.)



Yes, Atari DOS 2.0 and 2.5 stored 125 data bytes per sector, with the last three bytes holding a pointer to the next data sector and the number of bytes used in that sector. The format is really bad for performance: you can't do direct I/O (you need to copy the last three bytes to a different location), you need to read the full file to append at the end, and it was easy to corrupt a disk.
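
As a sketch, one such sector could be described like this in C; the real format packs a 6-bit file number and a 10-bit next-sector number across the first two trailer bytes, so the field split here is a simplification:

Code:
#include <stdint.h>

/* One Atari DOS 2.x data sector: 125 payload bytes plus a 3-byte trailer. */
struct dos2_sector {
    uint8_t data[125];   /* file payload */
    uint8_t file_next;   /* file number (6 bits) + next-sector high bits */
    uint8_t next_lo;     /* next-sector number, low 8 bits */
    uint8_t count;       /* how many bytes of data[] are in use */
};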

Also, DOS 2.0 and 2.5 were limited to 1024 sectors and did not support sub-directories, so they are really only appropriate for small floppy disks.

The format used by Sparta DOS (and BW-DOS) is more advanced, "Unix-like". It uses two types of sectors: "sector maps", which store the sector numbers for the file, two bytes per sector, and "data blocks", which store the full data. Having an index (the sector map), you can seek to any position in the file really fast. Also, directories are stored the same way as files, and you can have as many sub-directories as you want.
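
A rough C sketch of why the sector map makes seeking fast; the layout here (two link words followed by 126 data-sector numbers in a 256-byte sector) and the helper names are illustrative, and byte order is glossed over:

Code:
#include <stdint.h>

#define SEC_SIZE 256
#define MAP_ENTRIES 126

struct sector_map {
    uint16_t next_map;             /* next map sector (0 = end of chain) */
    uint16_t prev_map;             /* previous map sector */
    uint16_t data[MAP_ENTRIES];    /* data sectors, in file order */
};

extern int read_sector(uint16_t sector, void *dest);   /* placeholder */

/* Return the data sector holding byte offset pos: one map read per
 * MAP_ENTRIES * SEC_SIZE bytes skipped, instead of one read per data
 * sector as in a DOS 2.x-style linked list. */
uint16_t sector_for_offset(uint16_t first_map, uint32_t pos)
{
    struct sector_map m;
    uint32_t idx = pos / SEC_SIZE;
    uint16_t map = first_map;
    while (idx >= MAP_ENTRIES) {       /* skip whole map sectors */
        read_sector(map, &m);
        map = m.next_map;
        idx -= MAP_ENTRIES;
    }
    read_sector(map, &m);
    return m.data[idx];
}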

Quote:

Quote:
Remember that BW-DOS, including all its buffers, variables, the command line processor (with support for batch files) and all the filesystem code used exactly 6116 bytes, less than 6kB.

Such concision is admirable! Though one must remember that the drives themselves stored and ran the code for dealing with actual sector reads and writes, which I am guessing saved half a kilobyte or more. (The Apple II RWTS was 1193 bytes, but due to the extreme simplicity of the disk controller that may have been larger than the code would have been had the disk controller hardware been doing more of the work.)



Yes, the Atari OS leaves all the low-level handling to the drive; the SIO interface has simple commands like "read sector number X", "give disk status", etc. But this has the advantage of allowing many devices to use the same interface, like hard disks and modems, all working transparently. Today there are flash (and SD) card adapters, and even Bluetooth dongles that can load files from your phone :)

Also, the Atari OS is layered: you have the CIO layer (which deals with file-based devices and handlers, like "E:" for the editor, "P:" for the printer, "D:" for the DOS), and that layer calls the SIO layer, which deals with low-level devices via the serial bus. This allows different DOSes to use the same physical devices, or SIO accelerators to work with any DOS, as you can install new CIO devices or add SIO handlers.
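
A toy model of the layering, in C; the structure and function names are invented, though device 0x31 (drive 1) and command 'R' (read sector) are the usual bus codes:

Code:
#include <stdint.h>

struct sio_req {
    uint8_t  device;    /* bus ID, e.g. 0x31 = disk drive 1 */
    uint8_t  command;   /* e.g. 'R' = read sector, 'S' = status */
    uint16_t aux;       /* command argument (sector number) */
    uint16_t len;       /* transfer length */
    uint8_t *buf;       /* where the data goes */
};

extern int sio_call(struct sio_req *r);   /* the SIO layer (placeholder) */

/* The DOS (a CIO handler) never touches hardware; it just builds SIO
 * requests, which is why any DOS works with any SIO device or
 * accelerator that answers the same commands. */
int disk_read_sector(uint16_t n, uint8_t *buf)
{
    struct sio_req r = { 0x31, 'R', n, 128, buf };
    return sio_call(&r);
}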

Quote:

I don't know if the design I did could ever include all that and still be that small, but I am hoping so. And I'm also trying to design it in a way that one can leave out more sophisticated or unnecessary components and make it significantly smaller yet, while maintaining compatibility with filesystems written by more capable systems.


Have Fun!


PostPosted: Mon Jan 27, 2020 2:20 pm 

Joined: Sat Dec 01, 2018 1:53 pm
Posts: 727
Location: Tokyo, Japan
dmsc wrote:
In the Atari, the "standard" interface speed is only 19200 baud, so you have plenty of time to copy buffers around when using modern storage....Modern serial loaders patch the OS to provide faster speeds, up to 126 kbaud; this is still only about 10 kB/s after all the protocol overhead, so well below buffer-copying speed.
...
Well, BW-DOS (and other similar DOSs like Sparta DOS) used direct I/O when possible. The Atari API has two methods, "read characters" and "write characters"; the application passes a one-byte operation code, a 16-bit buffer address, and a 16-bit length. If the length allows reading a full sector into application memory, this is done, avoiding the intermediate buffer.

Sounds like there should be no problem doing zero-copy I/O for any read (though probably not writes), then. After all, if you're reading bytes from the drive at such low rates, you have plenty of time to write individual bytes as you receive them to wherever you like in RAM (or just throw them away). Or does the hardware force you to do DMA to buffers?

Quote:
Yes, Atari DOS 2.0 and 2.5 stored 125 data bytes per sector, with the last three bytes holding a pointer to the next data sector and the number of bytes used in that sector. The format is really bad for performance: you can't do direct I/O (you need to copy the last three bytes to a different location), you need to read the full file to append at the end, and it was easy to corrupt a disk.

I don't see why you can't do zero-copy I/O as I described above, and in my earlier post. Again, unless it forces you to do DMA to fixed buffers.
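
To be concrete, here's a sketch (in C, with the trailer decoding following the DOS 2.x layout dmsc described; read_sector() is a placeholder) of loading a whole file with no copies by letting each read overwrite the previous sector's trailer:

Code:
#include <stdint.h>

extern int read_sector(uint16_t sector, uint8_t *dest);  /* placeholder */

/* Load sector N at p, decode its trailer, then load sector N+1 starting
 * at the trailer's address, so the payloads end up contiguous in RAM. */
uint8_t *load_file(uint16_t sector, uint8_t *p)
{
    while (sector != 0) {
        read_sector(sector, p);
        uint8_t count = p[127];                    /* bytes used */
        sector = ((p[125] & 0x03) << 8) | p[126];  /* next sector; decode
                                                      before it's overwritten */
        p += count;  /* the next read lands on this sector's trailer */
    }
    return p;        /* one past the last file byte */
}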

Yes, if you don't keep the last block number somewhere, appends will be slower; but that's not a common operation on 8-bit systems in my experience, and even if the last block number isn't stored on the disk, it's cheap to cache in memory even after the file is closed, if you have applications that later re-open and append to files.

But it does give me the idea that in my filesystem it might be handy to add a "last block hint" to the metadata for a file (in the directory) to make seeks faster. It doesn't have to be updated frequently, because even if it's not actually the last block, but a few blocks before the end, it still saves a lot of reading when seeking to the end of a large file. (Even if the hint is completely wrong, that's still no big deal, since you'll always know after reading the hinted block that it's not part of the file. I've recently updated the design of my filesystem to ensure that file numbers can't be reused until all of that file's blocks are scrubbed (which need not be done immediately on delete, and in fact should not be, to allow some time for undelete), so accidentally pointing to a deallocated block with the same file number should never be a problem.)
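
A sketch of the seek-to-end with the hint, under my per-block-trailer layout; all the structures and helpers here are hypothetical:

Code:
#include <stdint.h>

struct trailer { uint16_t file_no; uint16_t next; uint16_t prev; uint8_t len; };
struct dirent  { uint16_t file_no; uint16_t first_block; uint16_t last_hint; };

extern struct trailer read_trailer(uint16_t block);  /* reads one block's trailer */

uint16_t seek_last_block(const struct dirent *d)
{
    uint16_t blk = d->last_hint ? d->last_hint : d->first_block;
    struct trailer t = read_trailer(blk);
    if (t.file_no != d->file_no) {    /* stale hint: fall back to the start */
        blk = d->first_block;
        t = read_trailer(blk);
    }
    while (t.next != 0) {             /* walk the (hopefully short) tail */
        blk = t.next;
        t = read_trailer(blk);
    }
    return blk;
}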

Quote:
Also, DOS 2.0 and 2.5 were limited to 1024 sectors and did not support sub-directories, so they are really only appropriate for small floppy disks.

The format used by Sparta DOS (and BW-DOS) is more advanced, "Unix-like". It uses two types of sectors: "sector maps", which store the sector numbers for the file, two bytes per sector, and "data blocks", which store the full data. Having an index (the sector map), you can seek to any position in the file really fast. Also, directories are stored the same way as files, and you can have as many sub-directories as you want.

That's all nice stuff to have, but it adds considerably to the complexity of the filesystem code and also forces more tradeoffs between write speed and the ability to recover from a power failure or someone yanking a floppy out in the middle of a write. On an 8-bit system, one should really think about how worthwhile all that is.

Quote:
...the SIO interface has simple commands like "read sector number X", "give disk status", etc. But this has the advantage of allowing many devices to use the same interface, like hard disks and modems, all working transparently. Today there are flash (and SD) card adapters, and even Bluetooth dongles that can load files from your phone :)

Yes, I think that USB, a huge evolution of Atari's SIO interface, has shown us how wonderful something like this can be. :-)

_________________
Curt J. Sampson - github.com/0cjs


PostPosted: Mon Jan 27, 2020 9:03 pm 

Joined: Mon May 21, 2018 8:09 pm
Posts: 1462
Usually the constraint on where you put the data is how fast the hardware needs you to empty its buffer. Floppy controllers of this vintage didn't have a buffer big enough to hold a whole sector of data. Indeed, the WD1772 had only a one-byte buffer (similar to the 6551 or 6850 UARTs), so you had to service it in lockstep with the data rate of the underlying disk. That's why the BBC Micro dedicated the NMI to servicing the FDC (and, I think, the Econet if you had it).

Since the data rate is only a few kilobytes per second, there's actually no reason why a software routine couldn't read the data out of the buffer directly into application memory; the critical factor is the interrupt service latency, rather than the number of cycles required. But a DMA engine might need a complete, page-aligned sector buffer as its target.
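
For illustration, the service loop is essentially this (a C sketch; the register addresses are placeholders for wherever a WD177x happens to be mapped, and a real routine must also meet the per-byte deadline, about 32 microseconds at double density, and check the error bits afterwards):

Code:
#include <stdint.h>

#define FDC_STATUS (*(volatile uint8_t *)0xFE84)   /* placeholder address */
#define FDC_DATA   (*(volatile uint8_t *)0xFE87)   /* placeholder address */
#define FDC_BUSY 0x01   /* command still in progress */
#define FDC_DRQ  0x02   /* data request: a byte is waiting */

void read_sector_pio(uint8_t *dest)
{
    /* (command-register write to start READ SECTOR omitted) */
    while (FDC_STATUS & FDC_BUSY) {
        if (FDC_STATUS & FDC_DRQ)
            *dest++ = FDC_DATA;   /* any address works: no fixed buffer */
    }
}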


PostPosted: Tue Jan 28, 2020 2:12 am 

Joined: Mon Sep 17, 2018 2:39 am
Posts: 137
Hi!

cjs wrote:
dmsc wrote:
In the Atari, the "standard" interface speed is only 19200 baud, so you have plenty of time to copy buffers around when using modern storage....Modern serial loaders patch the OS to provide faster speeds, up to 126 kbaud; this is still only about 10 kB/s after all the protocol overhead, so well below buffer-copying speed.
...
Well, BW-DOS (and other similar DOSs like Sparta DOS) used direct I/O when possible. The Atari API has two methods, "read characters" and "write characters"; the application passes a one-byte operation code, a 16-bit buffer address, and a 16-bit length. If the length allows reading a full sector into application memory, this is done, avoiding the intermediate buffer.

Sounds like there should be no problem doing zero-copy I/O for any read (though probably not writes), then. After all, if you're reading bytes from the drive at such low rates, you have plenty of time to write individual bytes as you receive them to wherever you like in RAM (or just throw them away). Or does the hardware force you to do DMA to buffers?



It is not the hardware but the default OS routines, which always transfer one block at a time, as the API is high-level: the OS builds the command frame, sends it, waits for the ACK or NAK, reads the data if available, calculates the checksum, and retries if it is not valid. To support zero-copy I/O you would need to re-implement all of that in your own code, making the DOS a lot bigger and only compatible with standard devices.

One of the limitations of early computers (like the Atari) is that the OSes were not very advanced, as the designers had little time to implement everything. The Atari OS has a fair share of redundant code and slow algorithms :)

Quote:
Quote:
Yes, Atari DOS 2.0 and 2.5 stored 125 data bytes per sector, with the last three bytes holding a pointer to the next data sector and the number of bytes used in that sector. The format is really bad for performance: you can't do direct I/O (you need to copy the last three bytes to a different location), you need to read the full file to append at the end, and it was easy to corrupt a disk.

I don't see why you can't do zero-copy I/O as I described above, and in my earlier post. Again, unless it forces you to do DMA to fixed buffers.

Yes, if you don't keep the last block number somewhere, appends will be slower; but that's not a common operation on 8-bit systems in my experience, and even if the last block number isn't stored on the disk, it's cheap to cache in memory even after the file is closed, if you have applications that later re-open and append to files.

But it does give me the idea that in my filesystem it might be handy to add a "last block hint" to the metadata for a file (in the directory) to make seeks faster. It doesn't have to be updated frequently, because even if it's not actually the last block, but a few blocks before the end, it still saves a lot of reading when seeking to the end of a large file. (Even if the hint is completely wrong, that's still no big deal, since you'll always know after reading the hinted block that it's not part of the file. I've recently updated the design of my filesystem to ensure that file numbers can't be reused until all of that file's blocks are scrubbed (which need not be done immediately on delete, and in fact should not be, to allow some time for undelete), so accidentally pointing to a deallocated block with the same file number should never be a problem.)

Quote:
Also, DOS 2.0 and 2.5 were limited to 1024 sectors and did not support sub-directories, so they are really only appropriate for small floppy disks.

The format used by Sparta DOS (and BW-DOS) is more advanced, "Unix-like". It uses two types of sectors: "sector maps", which store the sector numbers for the file, two bytes per sector, and "data blocks", which store the full data. Having an index (the sector map), you can seek to any position in the file really fast. Also, directories are stored the same way as files, and you can have as many sub-directories as you want.

That's all nice stuff to have, but it adds considerably to the complexity of the filesystem code and also forces more tradeoffs between write speed and the ability to recover from a power failure or someone yanking a floppy out in the middle of a write. On an 8-bit system, one should really think about how worthwhile all that is.



In my experience, DOS 2.5 was the one that corrupted disks the most, but perhaps that depends more on the implementation than on the format.

It is especially important to always update the block bitmap before updating the file pointers. This is not easy in the DOS 2.5 case, as you can't update the links after the data. In the Sparta DOS format, you write the data blocks, then update the bitmap, and only then write the sector map. This ensures that you can never have a valid file pointing to unused space.
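
In sketch form (C, with all the helpers as placeholders), appending one block in that order looks like:

Code:
#include <stdint.h>

extern uint16_t pick_free_sector(void);
extern int  write_sector(uint16_t sector, const uint8_t *data);
extern void mark_allocated(uint16_t sector);
extern int  flush_bitmap(void);
extern int  map_append(uint16_t file_map, uint16_t sector);

int append_block(uint16_t file_map, const uint8_t *data)
{
    uint16_t sec = pick_free_sector();
    if (write_sector(sec, data) != 0)   /* 1. data: orphaned if we die here */
        return -1;
    mark_allocated(sec);
    if (flush_bitmap() != 0)            /* 2. bitmap: sector now "in use" */
        return -1;
    return map_append(file_map, sec);   /* 3. only now visible in the file */
}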

Quote:

Quote:
...the SIO interface has simple commands like "read sector number X", "give disk status", etc. But this has the advantage of allowing many devices to use the same interface, like hard disks and modems, all working transparently. Today there are flash (and SD) card adapters, and even Bluetooth dongles that can load files from your phone :)

Yes, I think that USB, a huge evolution of Atari's SIO interface, has shown us how wonderful something like this can be. :-)


:-)

Have Fun!


PostPosted: Tue Jan 28, 2020 5:14 am 

Joined: Sat Dec 01, 2018 1:53 pm
Posts: 727
Location: Tokyo, Japan
Right, so zero-copy I/O can be done on the Atari 8-bitters with standard hardware, though the OS code in ROM that most systems use doesn't do it. Well, as usual, you pays your money and you takes your choice; in this case, the price is extra space and implementation time.

dmsc wrote:
In my experience, DOS 2.5 was the one that corrupted disks the most, but perhaps that depends more on the implementation than on the format.

It is especially important to always update the block bitmap before updating the file pointers. This is not easy in the DOS 2.5 case, as you can't update the links after the data. In the Sparta DOS format, you write the data blocks, then update the bitmap, and only then write the sector map. This ensures that you can never have a valid file pointing to unused space.


Right. It takes some careful thought and design to produce a system that can better survive someone removing a diskette in the middle of a write or whatever. It requires both setting up the filesystem in specific ways and having the code do things in specific ways, and even more care if you want good performance in whatever scenarios you're anticipating. Having to write out information that's not yet valid is a typical thing that will make your filesystem less reliable.

In my (tentative) plan, the free block bitmap is a hint rather than authoritative information, which means that you can delay writes to it indefinitely, though if you lose your current state in RAM you may need to do some (possibly moderately expensive) filesystem checking to get it back to an accurate state later on. (The free block bitmap is actually optional, to avoid wasting space in things like ROM images.)

Writes are always safe because the filesystem always confirms that a block is free before writing it, by having read it once first, either during the write itself or earlier. Now that I think about it, this makes it difficult to do really efficient writes on slow, removable media, though they will always be quite safe. Perhaps I should rethink the tradeoffs and try to provide an optional way to be more efficient, at perhaps some cost in safety.

There are some good tricks out there, such as log-structured filesystems, that can be both very efficient and very safe, but they then tend to hit read performance. You just can't win. :-)

(BTW, you may want to consider trimming quotes a bit when you use them. A good rule of thumb is that if your quotes are larger than the text you wrote, it's likely to be hard to read for others and worth considering a trim.)

_________________
Curt J. Sampson - github.com/0cjs


PostPosted: Sat Feb 01, 2020 2:33 pm 

Joined: Mon Sep 17, 2018 2:39 am
Posts: 137
Hi!

cjs wrote:
In my (tentative) plan, the free block bitmap is a hint rather than authoritative information, which means that you can delay writes to it indefinitely, though if you lose your current state in RAM you may need to do some (possibly moderately expensive) filesystem checking to get it back to an accurate state later on. (The free block bitmap is actually optional, to avoid wasting space in things like ROM images.)

Writes are always safe because the filesystem always confirms that a block is free before writing it, by having read it once first, either during the write itself or earlier. Now that I think about it, this makes it difficult to do really efficient writes on slow, removable media, though they will always be quite safe. Perhaps I should rethink the tradeoffs and try to provide an optional way to be more efficient, at perhaps some cost in safety.



Yes, reading a block before writing it back would slow down writes a lot on rotational media, as you would need to wait a full disk revolution in the drive. In the Atari DOS there was a flag to verify all writes; in that mode the DOS would read each sector just after writing it and compare the data - all the people I knew turned it off because it made write operations too slow :-)

In your case, it would work if you assume that writes are less common than reads in your workflow; it also has the added benefit that you can detect disk changes on devices that don't provide a disk-change signal. In the Atari, the various DOSes read the first sector on file-open operations to verify that the disk has not changed, and assume that the disk does not change between the open and close calls.

Quote:
There are some good tricks out there, such as log-structured filesystems, that can be both very efficient and very safe, but they then tend to hit read performance. You just can't win. :-)


Also, log-structured filesystems tend to use a lot of RAM to keep the filesystem index, and the index has to be rebuilt whenever you "mount" the filesystem.

Quote:

(BTW, you may want to consider trimming quotes a bit when you use them)


Yes, sorry for that!

Have Fun!


PostPosted: Sat Feb 01, 2020 3:19 pm 

Joined: Sat Dec 01, 2018 1:53 pm
Posts: 727
Location: Tokyo, Japan
dmsc wrote:
Yes, reading a block before writing it back would slow down writes a lot on rotational media, as you would need to wait a full disk revolution in the drive. In the Atari DOS there was a flag to verify all writes; in that mode the DOS would read each sector just after writing it and compare the data - all the people I knew turned it off because it made write operations too slow :-)

Yeah, that pretty much cinches it, then; fortunately, I've thought about it and come up with a new plan. We leave the free block bitmap as a hint, but going the other way: the filesystem will never write a block unless it has first been marked as allocated in the free block map. Thus, when you're saving a large program, you can see that you have a dozen blocks to write, read the FBM, find an appropriate (preferably consecutive) series of blocks, mark them as allocated, write back the FBM, and then go write your directory entry and blocks.

This should make the write overhead lower even than FAT-based systems while still maintaining very good safety: removing media in the middle of a write can at worst leave the last-written block not completely valid (its next-block pointer will point to a block that is not part of the file), which is easy to handle (the file is simply truncated), and even if the disk isn't checked, no part of the filesystem structure will ever lead to further corruption.

You are, of course, left with blocks marked as allocated in the FBM that are not actually in use, which wastes space but is otherwise harmless. If you've got a disk that has a truncated file or is mysteriously missing space, you can run a "recover" program that will read the whole disk and fix the situation.
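
A sketch of the whole-file save under this plan (C-ish, with hypothetical names and a fixed block size for illustration):

Code:
#include <stdint.h>
#include <stddef.h>

#define BLOCK_SIZE 256

extern int  read_fbm(void);
extern int  find_free_run(int n, uint16_t *blocks);   /* prefers consecutive */
extern void mark_allocated(const uint16_t *blocks, int n);
extern int  write_fbm(void);
extern int  write_dirent(uint16_t file_no, uint16_t first_block);
extern int  write_block(uint16_t block, const uint8_t *data, uint16_t next);

int save_file(uint16_t file_no, const uint8_t *buf, int nblocks)
{
    uint16_t blocks[64];
    if (nblocks > 64 || read_fbm() != 0) return -1;
    if (find_free_run(nblocks, blocks) != 0) return -1;

    mark_allocated(blocks, nblocks);
    if (write_fbm() != 0) return -1;               /* 1. FBM first, always */
    if (write_dirent(file_no, blocks[0]) != 0)     /* 2. directory entry   */
        return -1;
    for (int i = 0; i < nblocks; i++) {            /* 3. data blocks, each */
        uint16_t next = (i + 1 < nblocks) ? blocks[i + 1] : 0;
        if (write_block(blocks[i], buf + (size_t)i * BLOCK_SIZE, next) != 0)
            return -1;                             /*    linked via trailer */
    }
    return 0;
}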

Quote:
In your case, it would work if you assume that writes are less common than reads in your workflow....

They are in most workflows, so reads definitely want to be as highly optimized as possible, consistent with not making other operations too terrible. Given appropriate sector skew (enough to read the next block number and, if it's on the same track, get a request back to the drive before that block comes under the head), this shouldn't be too bad on rotating media, though I now realize I should think a bit further about the block driver/filesystem interface and how I might be able to do single-rotation track reads where possible. (Obviously, none of this is an issue if you're using an SD card or whatever.)

Quote:
It also has the added benefit that you can detect disk changes on devices that don't provide a disk-change signal. In the Atari, the various DOSes read the first sector on file-open operations to verify that the disk has not changed, and assume that the disk does not change between the open and close calls.


Actually, I don't think my previous system did that reliably, for reasons I won't get into at the moment. But I have been thinking about this a bit, especially since I noticed that the C64 actually writes a disk ID into every sector, which allows it to detect a disk change. It does this in a pretty clever way: the ID is written not in the data block of a sector but in the header block before it, along with the address of that sector. This lets it detect media changes for free, because the disk ID comes along with the information it's reading anyway as the track spins by while it's determining when to start writing the data block.

That idea, along with the fact that some drives provide disk-change information through other means anyway, leads me to think that reporting a disk change is probably best handled by the block device interface. For hardware that supplies a "media changed" signal, the BDI just passes that on. If the drive does not support that, the BDI author can decide whether it's better (or even possible) to use a non-standard format or, like many systems of the day, simply not bother and hope that the user can't change disks fast enough in the middle of a write for the write to continue after the media change.

Also, having thought further on this, I realize now that I probably wasn't clear enough about how important simplicity is in this design. The more I think about it, the more focused I become on ensuring that, while the filesystem is reliable and reasonably speedy, it's also as simple as possible to implement. (Definitely simpler than MS-DOS FAT.) That would basically be its competitive advantage.

_________________
Curt J. Sampson - github.com/0cjs


PostPosted: Sun Feb 02, 2020 6:38 am 

Joined: Thu May 28, 2009 9:46 pm
Posts: 8404
Location: Midwestern USA
dmsc wrote:
Yes, reading a block before writing it back would slow down writes a lot on rotational media, as you would need to wait a full disk revolution in the drive.

That may be true of a floppy disk, but it's unlikely with a reasonably modern hard disk. Nowadays, hard disks buffer a full track in anticipation of the same track being accessed again (based upon statistical models of common filesystems). So the cost of a read before write would be mostly in the bus traffic required to initiate the two transactions, not the mechanical latency of the drive.

That said, I question the quality of a filesystem's design in which read before write is necessary. The entire point of maintaining a buffer cache is to minimize the cost of disk accesses without unduly exposing the filesystem to loss of consistency. That's what makes implementing a foolproof filesystem no simple matter.

_________________
x86?  We ain't got no x86.  We don't NEED no stinking x86!


PostPosted: Sun Feb 02, 2020 8:23 am 

Joined: Sat Dec 01, 2018 1:53 pm
Posts: 727
Location: Tokyo, Japan
BigDumbDinosaur wrote:
That said, I question the quality of a filesystem's design in which read before write is necessary. The entire point of maintaining a buffer cache is to minimize the cost of disk accesses without unduly exposing the filesystem to loss of consistency. That's what makes implementing a foolproof filesystem no simple matter.


Buffers, hahahaha. :-) Not a lot of room for those on the typical target platform for my filesystem, which is commonly an SBC with 32K of RAM or a circa-1980 microcomputer that might not even have that much. And the most common file access pattern is "read a complete file" or "write a complete file," too. Also, since removable media with no disk-change line is a very common configuration, I think that write caching is best avoided (except perhaps for byte-by-byte appends to the last block of a file).

That said, yeah, I think one can achieve significantly faster write speeds with no loss of safety by using a "definitely free or likely not free" bitmap that's written before the data blocks themselves, and then distributing the block list for a file over the file's data sectors. This gives several advantages over a consolidated block-list filesystem like the variants of MS-DOS FAT:
  • Trashing a free block map is usually harmless. Reads are entirely unaffected (since they don't use the map) and, so long as you can tell the FBM has been trashed, you can know not to do writes until you've rebuilt the map. (Or just search for free blocks by reading the sectors themselves.)
  • Writing can be a bit faster since the FBM is considerably smaller than a FAT (typically 1/12th or 1/16th the size), often just a single block.
  • Recovery of partial file writes (say, when a floppy was removed in the middle of writing a file) is more accurate (I think it can be exact) because the file's block list is updated incrementally as you write each sector, rather than the blocks being marked as part of the file before they've been written. Thus you're left with a file that contains exactly what was written, rather than a full-length file with incorrect data in the latter portion. (It's possible to do such incremental updates with a consolidated block list as well, but at a large cost in performance.)

_________________
Curt J. Sampson - github.com/0cjs


PostPosted: Sat Feb 15, 2020 1:48 pm 

Joined: Sun Feb 22, 2004 9:01 pm
Posts: 93
drogon wrote:
(although if they assume things about the Acorn filing systems like :drive.$.path.to.extension.file then they might be in for a surprise)

If a program wants to ensure it can work across different platforms, it needs to read OSBYTE 0 to get the platform type and form its filename paths accordingly, doing something like:

REM OSBYTE 0 with X=1 returns the host OS type in X; USR &FFF4 packs the
REM returned X register into bits 8-15 of its result, hence the masking
A%=0:X%=1:os%=((USR&FFF4)AND&FF00)DIV256
REM Pick the directory and extension separators that suit that OS
d$=".":s$="/":IF(os%AND-24):d$="/":s$=".":IF(os%AND-32):d$="\"
....
filename$=dir$+d$+name$+s$+ext$

_________________
--
JGH - http://mdfs.net

