Ruud wrote:
And a 6502 isn't a Z80. CP/M 3.0 supports memory banking, but it can only handle 15 extra banks of 32 KB, and the way it is done doesn't cheer me up either, certainly when seen from the point of view of a 6502.
It pretty much is a matter of just translating assembly sources, actually. CP/M was ported to a number of processors that equally aren't memory-layout compatible with the Z-80 (the 68000 comes to mind!). The question is, however: does anyone *WANT* such an OS? I venture to say no, or else someone would have provided one already. I think Apple's ProDOS comes the closest, but I understand that there was (until the Apple IIgs, at least) a lot of opposition to it.
Quote:
The idea of this OS (a nice name, anyone?) is that it should run on various existing hardware and hardware yet to be built. CP/M requires at least 16 KB, AFAIK. 3.0 needs at least 48 KB. And my VIC-20 only has 5 KB on board.....
Use GEOS/64 as inspiration: 8 KB for the core kernel, DOS, and event-driven architecture. And that's the key: make it event-driven by design, and ruthlessly avoid redundancy. I'll get more into this below.
Quote:
If a CP/M program wants to output a byte to the floppy disk, hard disk, video screen or whatever, it stores the byte somewhere and calls a vector. This vector then leads to the machine-dependent part that performs the wanted actions. What has to be done is to define all the needed vectors. We can use CP/M as a base, but DOS has many more than CP/M and is therefore a better base to start with.
I strongly recommend using a microkernel-like architecture, so that the proliferation of "kernel" API calls is minimized. I seem to recall reading that Acorn OS did something much like this. This style of kernel design is also strongly recommended for other reasons:
* Fewer OS calls mean the kernel is smaller, and therefore more ROM-friendly.
* Fewer OS calls and simpler semantics mean fewer bugs, and therefore fewer ROM revisions.
* Fewer OS calls make for a more flexible architecture, provided the calls are orthogonal with respect to each other. For example, Plan 9 is able to do nearly EVERYTHING that it does using only open(), read(), write(), and close(). There are a few other system calls, but they're used far less often. Even graphics are handled this way.
Quote:
Do we need all those vectors DOS is using? I don't know, and in fact I don't care. The idea is to dream up a mechanism that allows me to use a variable number of vectors. (OTOH, 65536 vectors should be enough IMHO)
If you're so hell-bent on having a huge proliferation of vectors, I recommend instead the AmigaOS-style library system, where libraries supply their own vectors. To call a function:
Code:
; First, open the library, so that we can access it
; elsewhere. (This assumes SysOpenLibrary returns with
; carry set on failure -- the usual KERNAL-style
; convention -- and stores the library's base address
; in libBase/libBase+1.)
lda #<libName
sta $00
lda #>libName
sta $01
lda #libVersion
sta $02
jsr SysOpenLibrary ; one of the few global OS calls
bcs couldNotOpenLibraryForSomeReason
lda libBase
sta myLibBase+1
lda libBase+1
sta myLibBase+2
; Now let's make a few calls to it.
... ; load parameters somehow here.
ldx #MYLIB_FUNC_1
jsr myLibBase
ldx #MYLIB_FUNC_2
jsr myLibBase
; Here's where we store the library "entry points", so to speak.
myLibBase:
jmp $0000
yourLibBase:
jmp $0000
...
Now a library would need a convenient way to decode the function ID that is in the X register:
Code:
ORG LibraryLoadAddress
entryPoint:
; NOTE: jmp (abs,X) exists only on the 65C02/65816, not
; the NMOS 6502. Function IDs must be even, since they
; index a table of 2-byte addresses.
jmp (myTable,X)
myTable:
dw func1
dw func2
dw func3
...etc...
This way, vectors are kept library-specific, and take up space only for the modules that are currently loaded.
Quote:
CP/M has its own filesystem. Should we define our own one as well?
MUCH more important is not the filesystem proper, but the interface to a filesystem. Installable filesystems are damn handy to have, and make the specific choice of filesystem less relevant.
Quote:
No because, for example, I want to be able to play my C64 games under the new OS as well. In case of Commodore an extra problem is that it is not the computer but the drive that defines how and where data is stored on a disk. It is possible to define your own layout but IMHO it is not worth the trouble.
Experience has shown that Commodore's layout is sorely suboptimal. Especially for devices with slow seek times, you want a filesystem that is optimized to minimize fragmentation whenever possible. Also, its layout does not permit subdirectories, which are often strongly desired even on small storage devices for organization.
The only thing that Commodore DID get right is the placement of the root directory blocks and the BAM on the center track (track 18 on the 1541), which minimizes average seek distance to the directory.
Quote:
- vectors
I feel they should be provided by the modules loaded into the system and managed by a core foundation. A single pool of vectors, no matter how big, will always be inefficient (too few vectors used in the system, or too many to remember, or...).
A better solution still might be instead to make use of message passing between multiple tasks in a (perhaps cooperatively) multitasking system, and build the core operating system as a microkernel.
However, the problem with this approach is that you're providing a leaky abstraction: you're trying to convince the programmer that his code more or less has ownership of the machine as a whole, and therefore, every program tends to be written as if it were initializing itself, running its own event loop, etc. This leads to *MASSIVE* code replication -- something a 6502 system can do without!
Instead, building an event-driven architecture is perhaps the best overall choice. Code written for the OS takes the form of a large collection of callbacks, which are invoked by the OS in response to certain stimuli. The benefits of this are many:
* A single, system-wide event queue and kernel dispatcher, which eliminates all the duplication from all the other programs designed to run under it.
* Can "multitask" without explicit support for multitasking. It can do this because the system-wide event queue can interleave events for multiple programs. The performance will be comparable to cooperative multitasking, without all the overhead of task switching. Sophisticated kernels can even profile how long it takes to handle a specific event handler, and can implement fairness algorithms based on this if necessary (probably only on a 65816, where enough memory can exist to support this). Kernels optimized for more sophisticated processors (e.g., a 65816, or an emulated 6502/65816 environment hosted on a more sophisticated processor) can provide real multitasking easily enough.
* 80% of all software today spends 80% of its time waiting for something to happen. Remember that back when Unix and VMS were invented, computers were built for batch processing of work, not human interaction. Today, the situation is reversed. Therefore, it seems that optimizing the OS for the most common occurrence will make for a more space- and time-efficient OS architecture. GEOS/64 proved this quite nicely!
* Compute-heavy tasks can be supported by a number of mechanisms:
+ Submit an event for a single loop iteration to kick off the loop. The loop iteration handler then will, if necessary, submit another event for itself. Thus, the loop self-perpetuates until such time it is done (in which case it just doesn't re-issue its own event).
+ Register an Idle handler so that the OS periodically calls it while waiting for something to arrive in the event queue.
+ Support true multitasking of some sort, and run the compute-heavy tasks as background processes.
* Event-driven architectures can scale to multitasking or multiprocessor architectures relatively easily. OS-supplied shims can be inserted between the application and the core kernel to support these features transparently. Commercial distributed-object middleware libraries and standards like DCOM, MTS, and CORBA make regular use of things called "thread pools," which satisfy all the requirements for an event-driven architecture of this nature.
As a clear example of this style of OS, look no further than GEOS/64. The GEOS API is big and bulky, but nonetheless demonstrates the power of having a system-wide event queue. I wish I had another example, but unfortunately a design prototype I once implemented on Linux has since been lost. As far as commercial examples, look at some of the more sophisticated Enterprise JavaBeans implementations, or CORBA or DCOM/MTS "thread pool" implementations.
Quote:
- filesystem
While it'd be nice to have one, there are plenty of off-the-shelf FSes to choose from. I recommend the LEAN filesystem:
http://freedos-32.sourceforge.net/showdoc.php?page=leanfs.
Quote:
- handling of the available memory
- page swapping
These can be unified. The application can dynamically manage memory using memory allocation and deallocation functions. PC/GEOS demonstrated that this concept can also apply to allocating space in a disk file to emulate memory. Once allocated, the memory can be locked or unlocked (brought into memory or released from memory, respectively) as required, allowing the memory manager to efficiently swap memory segments to and from disk as needed. In fact, PC/GEOS application data files are just virtual address spaces!! This is why PC/GEOS was obnoxiously fast on an 8088, while we still wait 30+ seconds for Word to open an Office document on a Pentium 4 with hyperthreading.
Hopefully, I've provided food for thought on this issue. I strongly advocate a consistent OS architecture. While I am a huge fan of Forth for OS-level interaction, I recognize that not everyone wants this. If I were to accept a more traditional OS architecture, this is what I'd want to see.
Note, however, that a microkernel approach, where multiple tasks exchange messages with each other, is perhaps *the* ideal solution, but it requires support for preemptive multitasking, at the very least. I strongly urge folks to study the L4 microkernel for inspiration in this area. It has something like 12 to 14 system calls, and that is it. The two most frequently used are SEND and RECEIVE, used to send and receive a message, respectively.
SEND will block the calling task until its message is received. RECEIVE will block until a message becomes available. No message queues -- you send directly to task IDs. Thus, memory consumption is *constant* (unlike the event driven architecture I pointed out above), and if you can load a program into memory, you can NEVER run out of memory due to excessive message passing.
But, again, it requires preemptive multitasking at the very least.