PostPosted: Thu Jan 12, 2017 4:53 am 

Joined: Sun Jun 30, 2013 10:26 pm
Posts: 1949
Location: Sacramento, CA, USA
BigDumbDinosaur wrote:
... Canadian computer whiz Craig Bruce had developed a UNIX-like operating environment back in the latter 1980s, written entirely in 6502 assembly language, that ran on the Commodore 128. He had it up on his website at one time, available for download. I don't have a current link, so I don't know if it's still available ...

Mike N. seems to have a lot of Craig's ACE stuff here:

https://github.com/mnaberez/ace

Mike B.


PostPosted: Thu Jan 12, 2017 5:43 am 

Joined: Thu Dec 11, 2008 1:28 pm
Posts: 10977
Location: England
Good find!


PostPosted: Thu Jan 12, 2017 12:15 pm 

Joined: Tue Mar 02, 2004 8:55 am
Posts: 996
Location: Berkshire, UK
A number of early microprocessors had portable disk operating systems: the 6800 had FLEX, the 8080/Z80 had CP/M, and the 6809 had FLEX9 and OS-9. Machines using these processors tended to be very simple: a big block of RAM, a few serial/parallel ports, a disk controller and a small boot ROM. They didn't usually have a video generator.

Systems based on the 6502 tended to have built-in video, which made them cheaper but gobbled up large chunks of RAM. Systems like the BBC, Apple ][, PET, etc. were so different from each other that it was difficult to write large portable applications.

I ran CP/M on my BBC with a Z80 second processor until the early '90s. It was much easier to write applications in compiled C, Pascal or Modula-2 on the Z80 than on the BBC directly. Mine spent most of its time acting as a dumb terminal for the more powerful second processor. The closest I got to using a compiled language on the BBC was BCPL with the add-on stand-alone generator, and even then it was interpreted CINTCODE rather than native 6502.

_________________
Andrew Jacobs
6502 & PIC Stuff - http://www.obelisk.me.uk/
Cross-Platform 6502/65C02/65816 Macro Assembler - http://www.obelisk.me.uk/dev65/
Open Source Projects - https://github.com/andrew-jacobs


PostPosted: Thu Jan 12, 2017 7:24 pm 

Joined: Sat Dec 13, 2003 3:37 pm
Posts: 1004
I'm not sure what folks would be expecting from an OS. The OS's job is to abstract the computer's services and devices.

For the simple 8-bit machines, those were effectively serial and parallel ports, and disk drives.

CP/M had a simple concept of Users for basic permissions and visibility.

The larger machines were distinguished by sophisticated I/O hardware and multiple processes. The early Unix machines were 64K machines, but the CPU architecture made it much simpler for multiple processes to share memory than on the early 8-bit CPUs. You could certainly do a 6502 system with multiple processes, but you would need a relocating loader and some mechanism to share zero page and the stack. It's possible, just not really efficient. The 8080/Z80 were much more flexible in this regard, but you'd still need a relocating loader. (Mind, I'm talking basic hardware here, not something with external support logic.)
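
To make that concrete, here's a minimal sketch in C of the kind of relocating loader such a system would need. The record format (a table of 16-bit offsets marking the absolute addresses to patch) is hypothetical, not any particular historical format:

Code:
#include <stdint.h>
#include <stddef.h>

/* Hypothetical format: code assembled for base 0, plus a table of
 * offsets into the image where a 16-bit little-endian absolute
 * address must be rebased by adding the actual load address. */
void relocate(uint8_t *code, size_t code_len,
              const uint16_t *fixups, size_t n_fixups,
              uint16_t load_base)
{
    size_t i;
    for (i = 0; i < n_fixups; i++) {
        size_t off = fixups[i];
        if (off + 1 >= code_len)
            continue;                      /* malformed record: skip it */
        uint16_t addr = code[off] | (uint16_t)(code[off + 1] << 8);
        addr += load_base;                 /* rebase the absolute address */
        code[off]     = addr & 0xFF;
        code[off + 1] = (addr >> 8) & 0xFF;
    }
}

A loader would read the image and its fixup table from disk, copy the code to whatever free region it found, and call relocate() with that base before jumping in.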

With the 6809 it was possible to write position-independent code, so multiprocessing was much more efficient.

But the true magic came when memory management units became ubiquitous. Then every process had its own address space.

Even today, if you look at an OS, it's device drivers and process handling, multiple users and their access control issues, interprocess communication and, obviously important today, lots of work on the networking stack, which has become almost more ubiquitous than permanent storage.

Clearly modern architectures drive different demands, but, in the end, the OS is basically an abstraction on top of hardware device drivers.


PostPosted: Thu Jan 12, 2017 11:21 pm 

Joined: Thu May 28, 2009 9:46 pm
Posts: 8482
Location: Midwestern USA
whartung wrote:
I'm not sure what folks would be expecting from an OS. The OS's job is to abstract the computer's services and devices.

I recall that when I was working with a Basic Four mini in the 1970s, the documentation for the beast described the operating system as software that shielded the programmer from the pain and agony of accessing and controlling the hardware. At its most basic, that's all that folks should expect on a small system.

_________________
x86?  We ain't got no x86.  We don't NEED no stinking x86!


PostPosted: Fri Jan 13, 2017 5:09 am 

Joined: Tue Mar 05, 2013 4:31 am
Posts: 1385
BigDumbDinosaur wrote:
whartung wrote:
I'm not sure what folks would be expecting from an OS. The OS's job is to abstract the computer's services and devices.

I recall that when I was working with a Basic Four mini in the 1970s, the documentation for the beast described the operating system as software that shielded the programmer from the pain and agony of accessing and controlling the hardware. At its most basic, that's all that folks should expect on a small system.


That sounds more like a well designed and documented BIOS than an OS, but for a "small system" it's quite reasonable.

_________________
Regards, KM
https://github.com/floobydust


PostPosted: Fri Jan 13, 2017 6:31 am 

Joined: Thu May 28, 2009 9:46 pm
Posts: 8482
Location: Midwestern USA
floobydust wrote:
BigDumbDinosaur wrote:
whartung wrote:
I'm not sure what folks would be expecting from an OS. The OS's job is to abstract the computer's services and devices.

I recall that when I was working with a Basic Four mini in the 1970s, the documentation for the beast described the operating system as software that shielded the programmer from the pain and agony of accessing and controlling the hardware. At its most basic, that's all that folks should expect on a small system.

That sounds more like a well designed and documented BIOS than an OS, but for a "small system" it's quite reasonable.

Keep in mind the machine to which I was referring was early 1970s technology. Operating system design as we know it today was barely visible in those days.

In any case and in strict terms, the operating system is intended to run the system, not hold the hands of the users. The hand-holding is the job of higher level functions, e.g., an interpretive shell, something that Microsoft apparently forgot sometime around 1995 when they decided to weld the GUI to the kernel in Windows 95. :D

_________________
x86?  We ain't got no x86.  We don't NEED no stinking x86!


PostPosted: Fri Jan 13, 2017 8:28 am 

Joined: Thu Jun 23, 2011 2:12 am
Posts: 229
Location: Rancho Cucamonga, California
floobydust wrote:

That sounds more like a well designed and documented BIOS than an OS, but for a "small system" it's quite reasonable.


It was. Back then, in the days of CP/M, the BIOS was the operating system. The expectations that people have about operating systems have greatly expanded over the years. Windows also started out as a program that ran on top of an OS (and architecturally, it still is, though it's presented as a single product).

===Jac


PostPosted: Fri Jan 13, 2017 8:40 am 

Joined: Thu Dec 11, 2008 1:28 pm
Posts: 10977
Location: England
Does CP/M manage memory at all? By which I mean, does it tell the application where usable memory starts and ends? I'm guessing there's no concept of allocation or of multiple users of memory needing to coexist. (Whereas in DOS, I gather there's the Terminate-and-Stay-Resident, which must mean that there are multiple occupants.)

Are CP/M programs, or indeed DOS programs, relocatable? It seems to me that load-time relocation as the Amiga did, or a CPU instruction set which allows for position-independent code, are major steps forward.


PostPosted: Fri Jan 13, 2017 9:55 am 

Joined: Tue Mar 05, 2013 4:31 am
Posts: 1385
My comment was a bit tongue-in-cheek. During my Big Blue days, I worked on some really old stuff... where the programs were on wire boards, followed by 80-column punch cards, later 96-column punch cards. Machines back then had fairly simple operation codes (System/3 control code $F8 was to feed a card from hopper 1), and a machine with 16KB of core storage was huge, both physically and resource-wise. There was no actual concept of an OS. Later machines (like the System/32, 34, 38) had control storage and main storage. Their control storage held microcode, which made up the actual processor instructions. The OS in these machines was more apparent but still somewhat hidden under programming like RPG. But I digress into old history; I agree on the BIOS comments for the early machines.

I lived the entire development life cycle from the first IBM PC to the XT, AT, PS/2, etc., and every release of DOS, OS/2 and Windows 1.0 through Windows NT. Not surprisingly, the first 32 function calls in PC DOS 1.0 were identical to the 32 function calls in CP/M. There's some history around this, but a separate thread would be better. We referred to early Windows as a DOS extension, which it was. Oddly enough, MS continued that architecture (a stretch, no doubt) up until NT, when Cutler and Co. came over to write it.

Later releases of DOS (2.0) extended the "OS" to support hard disks, and terminate-and-stay-resident programs (TSRs) became popular, e.g., code from Borland and their cool little pop-up utilities. There was some early level of memory allocation with DOS, but no protection of it. The BIOS continued to grow down from the top of memory with later machines, DOS continued to grow up from the bottom, and what was left in the middle was for user programs and memory on graphics and other adapters. We also started putting in adapter cards for 3270 terminal emulation, and the terminal code itself was a TSR so you could toggle between using DOS and accessing the mainframe. 5250 emulation was also available. The PC/370 was unique: a pair of adapter boards containing a pair of custom-microcoded Motorola 68K chips that made up the 370 mainframe architecture in a desktop.

Needless to say, the 640KB limit became a tough barrier to work in, and the lack of memory protection and multitasking pushed the need for a new OS, which resulted in OS/2 and later Windows. For what it's worth, we did have CP/M-86, which came out quite a bit later (when Dr. Gary came off his high horse), but it was too late; DOS had become the standard and CP/M-86 was toast. I recall tossing a dozen or more cases of CP/M-86 from a stock room back in the '90s.

The 6502 never really had a commercial OS. So many vendors used it, but almost everyone did something different with it. The one piece of code that was common was MS BASIC. Perhaps one of the reasons was the supporting I/O: Commodore (and MOS) had their own set of chips for video, audio, timers and ports; Atari had their own as well, plus the BBC, OSI, etc.; and then the arcade machines used the 6502, 8080 and Z80 chips along with custom I/O chips. That made for a very large 6502 base, but one which was wildly moving as well.

The one fact remains... we're all here on this forum decades later, still using the 65(C)02 and some of the same I/O chips (and a lot of other ones), and still doing a lot of different things with it, for a lot of different reasons. Who knew? :mrgreen:

_________________
Regards, KM
https://github.com/floobydust


PostPosted: Fri Jan 13, 2017 10:23 am 

Joined: Thu Dec 11, 2008 1:28 pm
Posts: 10977
Location: England
(Got to love that custom microcode - thanks for that detail floobydust!)


PostPosted: Fri Jan 13, 2017 11:18 am 

Joined: Tue Mar 05, 2013 4:31 am
Posts: 1385
BigEd wrote:
(Got to love that custom microcode - thanks for that detail floobydust!)


Yes, the microcode was stored on a disk drive in a special location. There was just enough "smarts" built in to go out and load the microcode into control storage first; then you had a CPU that actually worked. Every now and then there would be a microcode update, or patch: a set of diskettes would be sent, and the CE (IBM's service rep) would apply them by invoking a utility and feeding the diskettes to the machine. Considering the timeframe, it was quite an advanced technology. System boot started with "initial microcode load".

_________________
Regards, KM
https://github.com/floobydust


PostPosted: Fri Jan 13, 2017 12:15 pm 

Joined: Thu Dec 11, 2008 1:28 pm
Posts: 10977
Location: England
Back in the 80s, in my first job, the chip simulator ran on a VAX with some customised microcode to improve performance for that specific task. I think writeable control store is a powerful idea - I suppose these days it would be reconfigurable computing, because who has time to run microcode?


PostPosted: Fri Jan 13, 2017 12:51 pm 

Joined: Sun Apr 10, 2011 8:29 am
Posts: 597
Location: Norway/Japan
BigEd wrote:
Does CP/M manage memory at all? By which I mean, does it tell the application where usable memory starts and ends? I'm guessing there's no concept of allocation or of multiple users of memory needing to coexist. (Whereas in DOS, I gather there's the Terminate-and-Stay-Resident, which must mean that there are multiple occupants.)
CP/M didn't allocate anything - your program simply had to assume an origin of $0100. I don't recall if there was an official way to check the top limit of memory, but if there wasn't, there was still a way to do it, I'm sure (every time there's a warm start, and that's after almost every program exit, the BDOS and CCP are loaded into high memory, and CP/M knows where to put them; your program has to end before that. EDIT: BDOS at the top, and the CCP (command processor) below that - your program could overwrite the CCP, as long as you ended your program with a call to the warm-start function, which was the recommended way anyway. The warm start would re-load the CCP.)
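
There was, in fact, a de-facto way to find the top: location 0005h holds a JMP to the BDOS entry point, so the 16-bit word at 0006h is the first address a program must keep clear of. A minimal sketch in C, assuming a compiler that targets CP/M (reading a hard-wired address like this obviously won't run on a modern protected-memory OS):

Code:
/* The byte at 0005h is a JMP opcode; the word at 0006h is its operand,
 * the BDOS entry address. The TPA runs from 0100h up to (but not
 * including) that address. */
unsigned tpa_top(void)
{
    unsigned char *p = (unsigned char *)0x0006;
    return p[0] | (p[1] << 8);   /* little-endian 8080/Z80 address */
}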

DOS had, or got, that TSR hack feature, via a specific INT call which left a chunk of memory alone. CP/M didn't clear out anything, so you could still hack something together to leave some code 'up there' somewhere. But you would have to hack some vectors to be able to intercept e.g. special key sequences to get to the code. But isn't that what you have to do in DOS as well? (I vaguely recall some CP/M tool that worked something like that, but it's been a long time, so I'm not certain.)
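
For reference, the DOS call was INT 27h in DOS 1.x and INT 21h function 31h from DOS 2.0 onward. A rough sketch using the Borland-style dos.h interface (union REGS and int86() are compiler-specific, so treat this as illustrative rather than definitive):

Code:
#include <dos.h>   /* Borland/Microsoft-style DOS interface */

/* INT 21h, AH=31h: exit but keep the first 'paragraphs' (16-byte
 * units) of the program resident; any interrupt vectors hooked
 * beforehand keep pointing into that resident code. */
void stay_resident(unsigned paragraphs)
{
    union REGS r;
    r.h.ah = 0x31;           /* terminate and stay resident */
    r.h.al = 0;              /* exit code */
    r.x.dx = paragraphs;     /* memory to keep, in paragraphs */
    int86(0x21, &r, &r);     /* does not return */
}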

MP/M was a multi-user CP/M, with each user connected via their own terminal. So MP/M must have had a concept of memory management, but I'm not familiar with it. I have only used MP/M in single-user mode (effectively CP/M), once, when I was called in to rescue all the files on the computer of an accounting firm - somehow they had managed to delete it all... got it back, though. So I don't know the internals of MP/M (the filesystem is just normal CP/M). CP/M had 'user' areas in the filesystem, but that's something else - a bit like virtual directories (the filesystem was otherwise flat, with no subdirectories).

Quote:
Are CP/M programs, or indeed DOS programs, relocatable? It seems to me that load-time relocation as the Amiga did, or a CPU instruction set which allows for position-independent code, are major steps forward.
Run-time relocatable programs, or position-independent code, depend on the facilities of the CPU. I guess it's possible to write a small PIC 8080 or Z80 program, but I don't think those architectures are particularly good for PIC. What programmers did instead was create relocatable object code, which could be done by various means - e.g. the PRL format. That's useful for linking modules (libraries) together; without it, it's difficult to write real programs the way we still do them. Incidentally, the BDOS itself was relocatable - via the PRL concept, although it wasn't actually called PRL until MP/M or CP/M 3.0, AFAIK. Otherwise it wouldn't have been possible to create (via MOVCPM) a new version of CP/M when you e.g. increased the amount of RAM in your computer (the source of the BDOS wasn't provided as part of CP/M).
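
The PRL scheme itself is simple enough to sketch: the image is assembled for a base of 0100h (page 1), and a bitmap carries one relocation bit per code byte; a set bit marks the high byte of an absolute address, which gets adjusted by the difference between the actual load page and page 1. A rough sketch in C, assuming that layout (the exact header and bit order are from memory, so check the PRL spec before relying on it):

Code:
#include <stdint.h>
#include <stddef.h>

/* PRL-style relocation: 'code' was assembled for page 1 (0100h);
 * 'bitmap' holds one bit per code byte, MSB first. A set bit means
 * that byte is the high byte of an absolute address and must be
 * rebased to the page the code actually loads at. */
void prl_relocate(uint8_t *code, size_t code_len,
                  const uint8_t *bitmap, uint8_t load_page)
{
    uint8_t delta = load_page - 1;   /* offset from the assembled base */
    size_t i;
    for (i = 0; i < code_len; i++) {
        if (bitmap[i / 8] & (0x80 >> (i % 8)))
            code[i] += delta;        /* patch high byte of the address */
    }
}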


Last edited by Tor on Sat Jan 14, 2017 2:58 pm, edited 2 times in total.

PostPosted: Fri Jan 13, 2017 1:03 pm 

Joined: Sun Apr 10, 2011 8:29 am
Posts: 597
Location: Norway/Japan
You can still load some microcode into Intel and AMD processors; they sometimes deliver microcode updates to fix bugs. So in that sense the 'writeable control store' is still with us, although I suspect there's a core part of the microcode which can't be overwritten (but that last part is pure guesswork on my side).

As far as minis and mainframes are concerned, the writeable control store has probably been with us for as long as the microcode concept. The Norsk Data minis I worked with used the concept both for (most of) their 16-bit models and, in particular, for the 32-bit models, where you would get a new microcode file regularly. I still have one or two of those available. The cheapest model of the 32-bit series, for example, didn't come with floating-point hardware, so you would load a control store which implemented the FP functions in microcode.

