PostPosted: Mon Jan 28, 2019 3:29 pm 

Joined: Mon May 21, 2018 8:09 pm
Posts: 1462
It's still (just) possible to get hold of new floppy drives and disks, in the formerly ubiquitous 3.5" high-density format. Every so often someone remembers using floppy disks with a classic micro, and wonders whether they can also do so with a homebrew build. The answer is of course "yes, but there are better options" - but, being who I am, I decided to look into the problem a bit more deeply.

The "easy" option is to obtain a standard PC-type drive (or two) and interface them through a standard PC-type floppy drive controller. You then have to implement a filesystem and software driver conforming to the capabilities and restrictions of the standard format; frankly at this point you might as well just implement FAT12 and have done with it. The programming interface of the FDC is arcane and frankly bizarre in places, and requires you to either poll continuously for data, implement a pretty quick ISR and never mask it while accessing the disk, or add a DMA controller to your design.

Yuck.

But then I looked at the datasheet for the drive itself, and things started to actually make sense. (That's a "3-mode" version of the drive, which isn't precisely the same as what you get in a standard PC; for that, ignore the 1.6MB mode and look at the 1MB/2MB ones.) The drive actually has enough intelligence and drive electronics built in that you could reasonably build a controller by hand, without needing to know anything significant about analogue electronics. In particular, the drive takes care of converting magnetic flux reversals on the disk into active-low pulse trains, and vice versa, which I had thought would be the hardest part of implementing an FDC. What you *do* need to get right is the timing and encode/decode logic, but you get a fair amount of freedom in designing that if you ignore the MFM-based PC specs.

Stepping back a bit, most classic micros were satisfied with single-sided, 40-track, single-density 5.25" drives offering roughly 180KB of "formatted capacity". In principle you can replace any of those drives with a 3.5" drive and keep all the same formatting parameters. From there, if you step up to double-sided and 80 tracks, which any 3.5" disk supports, you get four times the capacity, still at "single density". But what does the latter mean?

In fact, single density just means you're using FM encoding with a 250KHz (4-microsecond) maximum flux-transition rate (referenced to the outer track at 300rpm). FM is a very primitive encoding, though one almost universal among classic micros - the Apple ][ being a notable exception - which simply lays down one pulse and one space for a zero, and two pulses for a one; this appears on the disk as a low frequency for zero, and a high frequency for one. You therefore need 8 microseconds to encode a bit on the disk, so at 300rpm (5 revs per second) you can theoretically fit 3125 bytes on a track. Much overhead is then spent on synchronisation and address marks, and gaps to allow writing individual sectors without accidentally overwriting a preceding or following one, so in practice you can only fit five 512-byte sectors per track, for 2560 bytes usable capacity, or eight 256-byte sectors for 2048 bytes usable capacity.
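To make the FM rule concrete, here's a minimal Python sketch (the bit order and the string-of-cells representation are just my illustration; a '1' means a flux transition in that 4µs cell):

Code:
def fm_encode(data: bytes) -> str:
    """FM: every data bit becomes a clock cell (always a transition)
    followed by a data cell (a transition only for a 1)."""
    cells = []
    for byte in data:
        for i in range(7, -1, -1):          # MSB first
            bit = (byte >> i) & 1
            cells.append('1')               # clock pulse, present for every bit
            cells.append('1' if bit else '0')
    return ''.join(cells)

print(fm_encode(b'\xA5'))                   # 1110111010111011

bits_per_track = 200_000 // 8               # 200,000us per revolution / 8us per data bit
print(bits_per_track // 8)                  # 3125 raw bytes per track, as above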

The Apple ][ uses a slightly more sophisticated encoding called 5-in-3 GCR and, like the C64's 1541 drive, reduces the number of sectors on the inner tracks of the disk (aka "zone recording") to keep the physical bit density within limits while employing higher density on the outer tracks. There's a lesson here worth following. Early Macs got a slightly upgraded version of this called 6-in-2 GCR, which allowed a double-sided double-density 3.5" disk to store 800KB, versus the PC's 720KB format on the same disks.

Double density works with exactly the same disks and drives as single density, and changes only the encoding to MFM, which requires at most one flux reversal per bit. With the same maximum flux-transition rate, this means MFM can store a bit in 4µs at 300rpm, so twice as much raw data per track as with FM. The encoding and decoding circuits are a little bit more complicated, but still reasonably practical for low-cost 1980s technology. The overheads actually work out such that you can put in nine 512-byte sectors per track, for 4608 bytes usable capacity out of the theoretical raw 6250 bytes. (This is still a rather poor 73.7% storage efficiency.)
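The same sort of sketch for the MFM rule (again, purely illustrative): the clock transition is only written when neither the previous nor the current data bit already provides one.

Code:
def mfm_encode(data: bytes, prev_bit: int = 0) -> str:
    """MFM: the clock cell carries a transition only between two 0 data bits."""
    cells = []
    for byte in data:
        for i in range(7, -1, -1):          # MSB first
            bit = (byte >> i) & 1
            cells.append('1' if (prev_bit == 0 and bit == 0) else '0')
            cells.append('1' if bit else '0')
            prev_bit = bit
    return ''.join(cells)

print(mfm_encode(b'\xA5'))                  # 0100010010010001 - transitions never closer
                                            #   than 2 cells nor further than 4 apart
print((200_000 // 4) // 8)                  # 6250 raw bytes per track at 4us per data bit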

High density disks use a better-quality disk coating with higher magnetic coercivity, allowing the maximum flux reversal rate to be doubled to 500KHz (2µs) at 300rpm. The PC's very straightforward adaptation of MFM to this gives 18 512-byte sectors per track. Multiplying by two sides and 80 tracks gives the familiar 1440KB per disk. But the theoretical raw capacity is now 12500 bytes per track, for a total of 2000000 bytes per disk, hence the drive manufacturer's description of "2MB mode". As for the "1.6MB mode", that comes from applying the same 500KHz MFM encoding to a disk spinning at 360rpm, where you get only 1/6th of a second per revolution, so the flux reversals end up slightly further apart.
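Spelling out the arithmetic for those three modes (my numbers, derived from the rates above):

Code:
def unformatted_bytes_per_track(flux_rate_hz: int, rpm: int) -> float:
    """Raw MFM capacity: one data bit per minimum flux-cell time."""
    return flux_rate_hz * (60 / rpm) / 8

for label, rate, rpm in (('1MB mode',   250_000, 300),
                         ('2MB mode',   500_000, 300),
                         ('1.6MB mode', 500_000, 360)):
    per_track = unformatted_bytes_per_track(rate, rpm)
    print(label, per_track, per_track * 2 * 80)     # x 2 sides x 80 tracks

print(18 * 512 * 2 * 80 // 1024, 'KB formatted (PC high density)')   # 1440 KB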

There was also a short-lived 2880KB "extended density" format, using special barium-coated disks and special drives with a different magnetic field orientation. I think those are now very difficult to obtain. The format again simply doubled the flux transition rate and applied MFM encoding, obtaining 36 sectors per track.

Now, why am I going into so much detail? Because MFM became obsolete as soon as the Compact Disc appeared. By employing a slight variation of the CD's encoding scheme, we can substantially increase the usable recording density on existing disks and, in the process, give ourselves a reasonable excuse for implementing our own FDC. All within spec.

MFM requires twice as many "symbol bits" as there are data bits, just as FM does. FM requires every "clock bit" to be a flux transition; MFM effectively deletes the "clock bit" if there's a neighbouring data bit with a flux transition. So the minimum distance between flux transitions is 1 symbol bit for FM, or 2 for MFM - or 3 for the EFM (eight-to-fourteen modulation) used on CDs. So we can pack 50% more symbol bits onto a track with EFM than with MFM. Looking at the maximum distance between flux transitions, which is important for accurate clock recovery, we get 2 symbols (twice the minimum) for FM, 4 symbols (again, twice the minimum) for MFM, and 11 symbols (nearly four times the minimum) for EFM. That indicates a more challenging clock-recovery task with EFM, but not beyond reason; it's hard to make a physically spinning disk change significantly in speed within 8µs, especially by accident.
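A quick sanity check on the physical spacing each scheme implies (figures mine, assuming the 1.5MHz symbol clock discussed below): the tightest EFM spacing is the same 2µs an HD disk already has to support, and the widest gap is about 7.3µs - in the same ballpark as the 8µs FM already asks the clock recovery to ride through.

Code:
# symbol clock in Hz, then min/max symbol cells between flux transitions
schemes = {
    'FM  (single density)': (250_000,   1, 2),
    'MFM (double density)': (500_000,   2, 4),
    'MFM (high density)':   (1_000_000, 2, 4),
    'EFM (proposed, HD)':   (1_500_000, 3, 11),
}
for name, (symbol_hz, dmin, dmax) in schemes.items():
    cell_us = 1e6 / symbol_hz
    print(f'{name}: {dmin * cell_us:.2f}us .. {dmax * cell_us:.2f}us between transitions')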

CD's application of EFM actually requires 17 symbol bits per 8-bit data group, as 3 non-data-carrying "merging bits" are used to preserve the invariants of minimum and maximum transition spacings, as well as DC-balance between lands and pits which is apparently important for laser tracking. For a floppy disk, the DC-balance metric is not important, so we can use just 2 merging bits, thereby using exactly twice the number of symbol bits as data bits, just as with FM and MFM. With the 50% higher density of symbol bits, that means we can now put down 18750 raw bytes per track, for a theoretical disk capacity of 3000000 bytes. How many of those bytes are actually usable will depend on how much overhead (in terms of synchronisation and sector gaps) we accept, but it should definitely be possible to get 16KB per track, for a total of 2560KB per disk - almost twice the usable capacity of a standard PC format of the same disk, using the same drive! It may also be possible to include a smaller metadata sector alongside the large data sector(s).
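As a feasibility sketch only - this is not the CD's actual EFM table, and the byte-to-codeword assignment below is left entirely open - here's the shape of the encoding problem in Python: enumerate 14-bit channel words that keep at least two and at most ten 0s between 1s, then splice consecutive words with two merging bits chosen to preserve the same invariant.

Code:
from itertools import product

def zero_runs(bits: str) -> list:
    """Lengths of the runs of 0s: before the first 1, between 1s, after the last 1."""
    return [len(run) for run in bits.split('1')]

def rll_ok(bits: str, d: int = 2, k: int = 10) -> bool:
    """Run-length check: every gap between 1s holds d..k zeros; no zero run exceeds k."""
    runs = zero_runs(bits)
    return all(r <= k for r in runs) and all(r >= d for r in runs[1:-1])

# Candidate 14-bit channel words for a hypothetical 8-to-14 code:
candidates = [w for w in (''.join(p) for p in product('01', repeat=14))
              if '1' in w and rll_ok(w)]
print(len(candidates), 'candidate codewords - we need 256, with a little to spare')

def merging_bits(prev_word: str, next_word: str) -> str:
    """Pick two merging bits that keep the spliced stream legal ('11' never can be)."""
    for m in ('00', '10', '01'):
        if rll_ok(prev_word + m + next_word):
            return m
    raise ValueError('these two codewords need a different table choice')

def efm_encode(data: bytes, table: list) -> str:
    """table: a 256-entry byte-to-codeword assignment (the choice is ours to make)."""
    out = ''
    for byte in data:
        word = table[byte]
        if out:
            out += merging_bits(out[-14:], word)    # the previous codeword is the last 14 cells
        out += word
    return out

Whether two merging bits always suffice depends on how long the zero runs at the ends of the chosen codewords are allowed to be - that's a table-design decision, and part of the freedom mentioned above.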

Let's leave aside the question of how to convert EFM to binary and back, and simply assume we can write software to do that, feeding the controller directly with EFM patterns. Probably a CPLD could be made to do it too.

A more pressing question is how to correctly time the pulse trains, both on read and write, to produce the relatively complex and precise patterns that EFM requires. I've recently advocated the use of a 24MHz master clock to drive a DRAM controller and derive 8MHz or 12MHz CPU clocks, and it fortuitously turns out that a 24MHz master clock is also popular for FDC chips, from which a 1MHz, 500KHz, 250KHz or 300KHz symbol clock can be derived by simple frequency division - as can the 1.5MHz symbol clock we now need (divide by 16).
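The division ratios, just to spell them out:

Code:
MASTER = 24_000_000
for target_hz in (1_500_000, 1_000_000, 500_000, 300_000, 250_000):
    divisor = MASTER // target_hz
    assert divisor * target_hz == MASTER    # every rate divides 24MHz exactly
    print(f'{target_hz // 1000:>5} kHz = 24 MHz / {divisor}')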

A frequency divider is really just a free-running counter with a reset upon reaching some matching value; we can use the same counter to produce EFM pulses by feeding it a sequence of delta counts, or to measure the time delta between pulses when reading a disk - and delta-time readings are naturally self-synchronising for clock recovery. An 8-bit counter/comparator should be sufficient, since the largest legal delta of 11 symbols x 16 clocks = 176 fits comfortably in a byte. If we dump this delta-time data into a FIFO or a specialised bank of RAM, we can interface it to the host computer for decoding. Dumping it to a RAM buffer would eliminate the need to have the CPU grab it in realtime, and we can probably interface it as a giant circular FIFO to minimise the host address space consumption.
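Here's a little software model of that delta-time idea (the function names and the list standing in for the FIFO/RAM are of course just placeholders for the hardware): writing turns channel cells into counter deltas, and reading rounds each measured delta back to a whole number of symbol cells, which is what makes it self-synchronising.

Code:
CLOCKS_PER_SYMBOL = 16                      # 24MHz / 1.5MHz symbol clock

def symbols_to_deltas(channel_bits: str) -> list:
    """'1' = flux transition; output is the counter value captured at each pulse."""
    deltas, ticks = [], 0
    for bit in channel_bits:
        ticks += CLOCKS_PER_SYMBOL
        if bit == '1':
            deltas.append(ticks)
            ticks = 0
    return deltas

def deltas_to_symbols(deltas: list) -> str:
    """Round each delta to the nearest whole symbol count - small timing errors vanish."""
    return ''.join('0' * (round(d / CLOCKS_PER_SYMBOL) - 1) + '1' for d in deltas)

pattern = '0010000000001001'                # an arbitrary stretch of channel cells
deltas = symbols_to_deltas(pattern)
print(deltas)                               # [48, 160, 48]
wobbly = [d + 3 for d in deltas]            # pretend every delta measured a few ticks long
print(deltas_to_symbols(wobbly) == pattern) # True: the rounding absorbs the error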

The same scheme, incidentally, should be capable of producing and decoding MFM format tracks, so we don't need to lose compatibility with PC-formatted disks after all.

A wrinkle here, which the drive electronics unfortunately don't handle for us, is that closely-spaced magnetic flux reversals have a habit of migrating away from each other just after being written - this, broadly speaking, is because the effect of neighbouring magnetised regions overlaps to some extent, and the read head sees the sum of that overlap. Thus it's necessary to move closely-spaced flux reversals even closer together as pre-compensation for that migration. The question of how closely spaced and how much to compensate is an open one, as it's dependent on the disk, the drive, and even the track on the disk. PCs simply apply a flat 125ns pre-compensation (three 24MHz clocks) on minimum-spaced MFM pulses, but since EFM is more subtle, I suspect it'd be wise to record test disks using real hardware and then observe how the migration behaves in practice. The necessary pre-compensation can then be incorporated into the delta-times of the binary-to-EFM encoding when writing a real disk.
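In the delta-time representation, pre-compensation is just a small adjustment to the write-side deltas. A rough sketch, using the PC's 3-clock (125ns) figure purely as a placeholder until real measurements say otherwise:

Code:
MIN_DELTA = 3 * 16                          # minimum EFM spacing: 3 symbols x 16 clocks

def precompensate(deltas: list, ticks: int = 3) -> list:
    """Nudge each minimum-spaced transition towards its close neighbour, to cancel
    the peak shift that pushes closely-spaced transitions apart on the disk."""
    out = list(deltas)
    for i in range(len(deltas) - 1):
        gap_before, gap_after = deltas[i], deltas[i + 1]
        if gap_before <= MIN_DELTA < gap_after:     # crowded behind, open ahead:
            out[i] -= ticks                         #   it would drift late, so write it early
            out[i + 1] += ticks
        elif gap_after <= MIN_DELTA < gap_before:   # open behind, crowded ahead:
            out[i] += ticks                         #   it would drift early, so write it late
            out[i + 1] -= ticks
    return out

print(precompensate([48, 48, 112, 48]))     # [48, 45, 118, 45]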


PostPosted: Mon Jan 28, 2019 4:04 pm 

Joined: Thu Dec 11, 2008 1:28 pm
Posts: 10985
Location: England
Nice idea! There are many degrees of freedom in how to do it, as you note, but any kind of from-the-ground-up floppy interface would be a very good project.

BTW, for anyone seeking a grounding in floppy formats and technologies, see André's recent Floppy Notes page, and the links within.


PostPosted: Mon Jan 28, 2019 4:56 pm 

Joined: Wed Feb 14, 2018 2:33 pm
Posts: 1488
Location: Scotland
Nice article, thanks.

Not sure you're quite there about the Apple II disk though. The drives were as cut down as Apple (Woz) could get - no rotation index and no track zero sensor, for example, only 35 tracks, and the stepper was controlled in software (so rather than direction + pulse you had control over all 4 coils). This actually gave it faster seek times than other drives of the time, as you could ramp up the stepper speed and ramp it down again. No track zero sensor, hence the oft-heard buzz/clatter at boot time (or bad disk time!) as it did a 37-step brute-force seek to make sure it really was at track zero... Whirrrrrr du du du dug, chug chug chug ... It also opened up a whole new copy-protection industry too .....

The on-disk encoding was 5:3 GCR in DOS 3.1 & 3.2, but 6:2 GCR in DOS 3.3 (and ProDOS). This required new ROMs on the disk card, plus a little "16" sticker to indicate 16 sectors per track rather than the older 13, at the time of the upgrade from 3.2 to 3.3 (1980, from what I recall).

I'm not aware of any sort of zone recording on the Apple II though. The early Macs featured variable-speed drives to facilitate this. I remember doing some low-level disk programming on the Apple II, and while I dabbled with some stuff at the bit level (trying to write a bit-copier), mostly I used the RWTS (Read/Write Track/Sector) level, and it was 16 sectors on all 35 tracks all the time. I don't recall seeing anything variable when using tools like Copy II+ and Locksmith either... But this was 35+ years ago, so who knows...

Apple II drives are actually really easy to work on - I've just this weekend refurbished a DuoDrive - which has the amusing property of having only one analog board for both drives - and why not, as you can only use one at a time and both drives are in the same physical box...

However, I do not think that today I'd like to write the software to run one of them at the low level Woz did back then. Apple DOS was about 10K in size, and while some of that was hooks back into BASIC, it was very comprehensive for the time, handling everything from the low-level writing at the bit level, through the track/sector level, up to file handling including random access and sparse files.

Then again, they are relatively cheap and easy to get hold of via ebay, etc. so if anyone does fancy going back to the sub-bit level, then good luck :-)

I am looking at implementing a file-system for my SBC, but am torn between just using FAT or rolling my own. I did make a start on my own many years back, on a BBC Micro - which was sadly stolen along with all my disks before I finished it, though I think I still have my notes somewhere. The temptation to use FAT is strong though - for ease of file transfer and compatibility if nothing else, yucky as that may be...

However, one thing not to forget (IMO) is that we're dealing with an 8-bit system with 64KB of RAM (or a few MB in an '816 system), so keep a sense of perspective - putting down a multi-GB type of interface, while maybe giving you a real "wow, look at this" feeling, isn't that practical in reality. It's like sticking a GoTek unit on my BBC Micro with a single USB data key that has all the BBC Micro floppies on it, 10 times over ... The old Apple II disk was fine - 130KB capacity, typically about 30K of RAM max, so lots of room for many BASIC programs and a few data files. The current "PC" 1.44MB drives are more than adequate on a 6502, although possibly limiting in a 4MB '816.

Cheers,

-Gordon

_________________
--
Gordon Henderson.
See my Ruby 6502 and 65816 SBC projects here: https://projects.drogon.net/ruby/


PostPosted: Mon Jan 28, 2019 5:18 pm 

Joined: Mon May 21, 2018 8:09 pm
Posts: 1462
I'd just like to point out that most C64 demos include several "flip disk" points in their sequence, which more-or-less implies they're trickling data out of the 1541 almost continuously during their runtime. Giving a C64 a bigger (and maybe faster) disk would make it that much more capable in the demoscene. The BBC Micro already has comparatively ready access to twin, double-sided, 80-track drives, which are that much more flexible. The difference amounts to the BBC Micro being able to pull off a decent rendition of the "Bad Apple" video, while the C64… can't. Yet.

Anyway, being able to transfer files directly to/from a PC is potentially useful, but implementing that requires correctly following a bunch of specifications which make little engineering sense (to me, anyway). That's the sort of thing which is headache-inducing rather than fun. What's fun is working within the constraints of a particular mechanical drive and physical disk format, and potentially managing to get more out of it than decades of PC-compatible engineers have.


PostPosted: Mon Jan 28, 2019 5:35 pm 

Joined: Thu Dec 11, 2008 1:28 pm
Posts: 10985
Location: England
Once you've got bytes off a disk that you put on there - which is a major milestone - it might be worth reflecting that the sector-by-sector organisation is a choice, not a necessity. Likewise, a filesystem is a choice; it's only one way of arranging for access to a block device. History has explored other choices, and perhaps left some unexplored. (Exposing a raw block device is one alternative, and exposing an object-oriented database is another.)


PostPosted: Mon Jan 28, 2019 5:53 pm 

Joined: Mon May 21, 2018 8:09 pm
Posts: 1462
Oh yes, filesystems are a rich sea of possibility. It's certainly not necessary, for example, to restrict files to occupying whole numbers of sectors and leave the unused portions to waste. Removing that limitation opens the door to using large sectors, which involve less overhead per track - and that's the key to actually getting 16KB of data plus some amount of metadata on each track. Recent high-capacity hard drives do the same, with 4KB physical sectors and emulated 512-byte sector access for legacy filesystems.

UNIX filesystems offer a source of inspiration here. Each file is really a numbered "inode" referring to a list of locations containing data, but without a name. Names are provided by special inodes containing directory information. A single inode can be referred to by more than one directory entry (a "hard link"), so it carries a reference count to tell when all the directory entries pointing to it have been removed (or now point to new inodes). Conceptually, the drive could handle files at the inode level by itself, but have the host computer worry about which inodes contain directories and what format they're in - and that's called an object-based filesystem. Apparently it's considered cutting-edge technology in the datacentre, but it seems to be how the old 1541 worked as well.
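A toy model of that split, in Python (the names are entirely made up): the "drive side" only tracks inodes, block lists and reference counts; what a directory inode contains is the host's problem.

Code:
class Inode:
    def __init__(self, blocks, is_directory=False):
        self.blocks = blocks              # list of (track, sector) locations
        self.is_directory = is_directory  # only the host cares what's inside
        self.refcount = 0                 # how many directory entries point here

class Volume:
    def __init__(self):
        self.inodes = {}                  # inode number -> Inode
        self.next_ino = 1

    def create(self, blocks, is_directory=False):
        ino = self.next_ino
        self.inodes[ino] = Inode(blocks, is_directory)
        self.next_ino += 1
        return ino

    def link(self, ino):                  # a new directory entry names this inode
        self.inodes[ino].refcount += 1

    def unlink(self, ino):                # a directory entry naming it goes away
        node = self.inodes[ino]
        node.refcount -= 1
        if node.refcount == 0:            # no names left anywhere: reclaim the space
            del self.inodes[ino]

vol = Volume()
ino = vol.create(blocks=[(10, 0), (10, 1)])
vol.link(ino)                             # "FOO" entered in some directory
vol.link(ino)                             # "BAR" hard-linked to the same data
vol.unlink(ino)                           # delete "FOO": the file survives
vol.unlink(ino)                           # delete "BAR": now the space is reclaimed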

Then you could think about the relative unreliability of floppy disks and how to counteract that with error correction codes. Lots and lots of interesting work to do!


PostPosted: Mon Jan 28, 2019 5:57 pm 

Joined: Thu Dec 11, 2008 1:28 pm
Posts: 10985
Location: England
You may know that the Amiga did track-at-a-time floppy access, which makes sense once you have lots of RAM, and you could even do away with inter-sector gaps. A modern 6502 system could have enough RAM to make this worthwhile.


PostPosted: Mon Jan 28, 2019 6:01 pm 

Joined: Mon May 21, 2018 8:09 pm
Posts: 1462
Since there can theoretically be 100,000 flux transitions per track, it would be feasible to implement a double track buffer in delta-time format using a 256K SRAM chip. Read head 0, then head 1, then step to the next track and wait for the index mark, while the CPU crunches through it all and turns the EFM into bytes. All done in three-fifths of a second - hopefully.
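The sizing, roughly (my arithmetic, with one 8-bit delta per flux transition):

Code:
max_transitions = 500_000 * 200 // 1000     # 500kHz ceiling x 200ms/rev = 100,000 per track
buffer_bytes = 2 * max_transitions          # both heads, one delta byte per transition
print(buffer_bytes)                         # 200,000 - fits a 256K x 8 SRAM
print(2 * 200 + 200, 'ms')                  # two revolutions, plus up to one more
                                            #   waiting for the index after the step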


PostPosted: Mon Jan 28, 2019 6:06 pm 

Joined: Thu Dec 11, 2008 1:28 pm
Posts: 10985
Location: England
I think the Amiga doesn't even wait for the index mark: it's enough to wait for a sector mark. One reason to have more than one sector per track.

Edit: see for example http://lclevy.free.fr/adflib/adf_info.html



PostPosted: Mon Jan 28, 2019 6:26 pm 

Joined: Mon May 21, 2018 8:09 pm
Posts: 1462
Also bear in mind that reading that pair of tracks yields a 32KB blob of data in well under a second. And we were just talking about the applicability of such a project to a 64KB machine!


PostPosted: Mon Jan 28, 2019 6:30 pm 

Joined: Wed Feb 14, 2018 2:33 pm
Posts: 1488
Location: Scotland
The book Beneath Apple DOS is well worth a read if you've not seen it. It's online now, e.g. https://fabiensanglard.net/fd_proxy/pri ... %20DOS.pdf although I have an original here, somewhere. It explains how Apple DOS works from the file level down to the bit level - e.g. it shows how the free sector bitmaps work, and there is a resemblance to the Unix inode system in there: a catalog (directory) entry points to a sector that contains a list of track+sector numbers of each sector in the file, plus a pointer to the next sector containing more track/sector pairs. There are no sub-directories in Apple DOS though - those appeared in ProDOS, which I never really got into at all - but I'm sure there is an equivalent book for ProDOS.

An optimisation that some third parties provided for Apple DOS was to re-arrange the sector order on disk. When sequentially reading a file, DOS would read sector 3 on a track, copy it to the user, then go back for sector 4 - but oops, you'd just missed the start of sector 4, so you had to wait another whole rotation... By re-arranging the sector interleaving you could get some very fast transfers indeed. Not quite as fast as reading in a whole track (and taking up 4K of your precious RAM), but almost as good.
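For what it's worth, that optimisation is usually expressed as a logical-to-physical interleave table; a quick sketch (the 2:1 factor here is purely illustrative - the right value depends on how long the OS chews on each sector before asking for the next):

Code:
def interleave(sectors, skip):
    """Physical layout of logical sectors 0..n-1 around the track, 'skip' slots apart."""
    layout, pos = [None] * sectors, 0
    for logical in range(sectors):
        while layout[pos] is not None:      # slot already used: slide to the next free one
            pos = (pos + 1) % sectors
        layout[pos] = logical
        pos = (pos + skip) % sectors
    return layout

print(interleave(10, 2))   # [0, 5, 1, 6, 2, 7, 3, 8, 4, 9] - a 10-sector 2:1 layout
print(interleave(16, 2))   # the same idea for the Apple II's 16-sector tracks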

I found the BBC Micro's DFS and ADFS somewhat confusing though, and they never felt any faster than the Apple, despite Acorn having had four extra years of research on their hands to do it in...

And I might be mistaken, but I'm sure I used some small rack-mount system - 6809 possibly (Flex9?) - that had a filing system which used linked lists of sectors for each file. I thought that rather odd at the time, but then - late 70s/early 80s - memory was still at a premium, and if you weren't running Z80 CP/M you had to do it yourself.

-Gordon



PostPosted: Mon Jan 28, 2019 6:38 pm 

Joined: Wed Feb 14, 2018 2:33 pm
Posts: 1488
Location: Scotland
Chromatix wrote:
Then you could think about the relative unreliability of floppy disks and how to counteract that with error correction codes. Lots and lots of interesting work to do!


And here I am, today, using 40-year-old floppies... Apple ones, admittedly, and one thing I've heard is that they have lasted so long due to the relatively low recording density. However, I know they won't last forever, and I'm slowly copying them to my home server via a serial interface on my Apple - no different really to devices like the GoTek for other platforms - BBC Micro, Atari, and whatnot.

The Apple disks didn't have ECC, but they did have a one-byte checksum per sector. ECC would have been great, but then there's that 1MHz 6502 that would have to run the code...

I have a few whose cases have worn thin - and one where the top of the case has worn right away - I am amazed that these have survived all these years. And then there are the ones we put the notch in to turn them over and use the other (supposedly bad) side - same as you mentioned for the C64 disks earlier... They all more or less just worked - to the dismay of the shops trying to sell us more, with stories like "the disks go the other way, dislodging dust and scratching the surfaces" and so on. They were probably right, but that didn't stop us.

Cheers,

-Gordon



PostPosted: Mon Jan 28, 2019 6:48 pm 

Joined: Mon May 21, 2018 8:09 pm
Posts: 1462
The BBC Micro just used off-the-shelf hardware for the disk subsystem, so it was still single-density encoding at 300rpm. I forget what the first official FDC chip was, but third parties and the BBC Master soon moved to the WD1770. The mechanical bits would have been Shugart or compatible. Acorn were not in the floppy-drive business; they were in the computer business, and they made a pretty fine computer.

(Trivia: after Shugart's first floppy-drive business was bought and promptly run into the ground, he founded Seagate. But Seagate's hard drives didn't really become reliable until they bought Conner.)

The speed of the disk dictated how fast you could read a track, and the amount of data on each track was the same as on any other single-density drive. The filesystem assumed a single-sided disk, and treated the second side as a separate disk (drives 0 and 2 were, confusingly, opposite sides of the same disk), so you didn't get to optimise speed by reading both sides of the disk between track steps. I don't think it used interleave or skew, either, and the filesystem made all files contiguous so you couldn't implement that "by hand".

But it was reasonably reliable, and many times faster than a 1200bps tape. That's what mattered at the time. Only when software grew into the megabytes did hard drives become essential, and you can largely thank Micro$oft for that.


PostPosted: Mon Jan 28, 2019 7:06 pm 

Joined: Thu Dec 11, 2008 1:28 pm
Posts: 10985
Location: England
Chromatix wrote:
The BBC Micro ... I don't think it used interleave or skew, either, and the filesystem made all files contiguous so you couldn't implement that "by hand".


Found this post with a quick search of stardot:
Quote:
In updating the Flex Formatter earlier today I repeated some of the tests I'd done 20+ years ago. With BBC filing systems that store data in sectors sequentially and so can use multiple sector read/writes the fastest track layout is 0,1,2,3,4,5,6,7,8,9. But, with systems such as Flex which have all the disk sectors in a linked list and so have to load each sector one at a time the fastest track layout is 0,5,1,6,2,7,3,8,4,9.


(Even though files in Acorn's DFS are in logically contiguous sectors, the sectors could still be interleaved physically - sector IDs are encoded in the sector header and need not be physically sequential (or even unique!))

(Because the BBC Micro can act as a host to a variety of CPUs and OSes, the file system in use isn't necessarily an Acorn one, or even one running on a 6502.)


PostPosted: Mon Jan 28, 2019 7:34 pm 

Joined: Tue Mar 05, 2013 4:31 am
Posts: 1385
Funny how some things come back around after decades. Not that long ago, I took one of my Commodore 1581 drives apart, pulled all of the chips off the PCB, installed sockets, and replaced all of the caps as well. I also put in a socket for the oscillator. I even managed to get the source code assembled/linked with WDC Tools and made some changes to the code. I've always liked the 1581 drive... size, speed, capacity, and the simple fact that it uses a fairly standard 3.5" diskette drive and an industry-standard FDC, the WD1772.

Over the past week, I pulled out some of my older Teac drives (FD-235HF/HG models) and started to do some checking of some of the lesser-used signals, like HD-OUT, which is pin 2 (if configured). I've also accumulated a fair amount of documentation for these drives and their laptop cousins, the FD-05HF/HG (I have a couple of these as well), which use a 26-pin FFC cable connection. The FD-235 drives can have several different controller boards, and some of the S# jumper pads can give you different options. I traced some of the land patterns to jumper pads, then metered them out with power. It turns out that placing a jumper on the S7 pad will provide the HD-Out line on pin 2, meaning this pin is high if an HD (2.0MB) diskette is inserted and low if a normal-density (1.0MB) diskette is inserted. This is nice: let the drive configure itself for the proper diskette density, then signal the FDC so it uses the correct data rates.

As I'm looking to build a small FDC board to plug into my latest SBC, I've got quite a few parts around which I've collected over the years. I have a few of the NOS VLSI-sourced WD1772 controllers, a few of the Atari-made upgraded ones that support the 500K data rates, an older WD2797 which is already built up on an expansion board for the VIC-20, and I even found a couple of NEC 765 controllers in the attic, but those require too many additional parts for the size I want. I've also looked at the FDC37C78-HT controller (Mouser has them in stock), which is quite nice. While I like the concept presented at the start of this thread, I think I'll probably stick with a WD controller, as I still have the (BIOS) code I wrote back in the 80s to drive the 2797-based board on the VIC-20.

Filesystems are another topic, as is the physical diskette format (sector size, interleave/skew, sectors per track). I've been looking at the older Minix filesystem as a possible option, or perhaps using the Commodore 1581 FS. And as already mentioned... there's always FAT, but somehow I don't want to use it. We'll see.

_________________
Regards, KM
https://github.com/floobydust

