6502.org Forum  Projects  Code  Documents  Tools  Forum

All times are UTC




Post new topic Reply to topic  [ 41 posts ]  Go to page Previous  1, 2, 3  Next
Author Message
PostPosted: Thu Nov 28, 2019 10:54 am 

Joined: Sat Dec 01, 2018 1:53 pm
Posts: 730
Location: Tokyo, Japan
ttlworks wrote:
It's interesting to see how management/marketing decisions have shaped the landscape since the invention of the 4004. In a parallel universe, we would all have 68k-powered PCs...

A more plausible parallel universe, to my mind, would have been one without the 6502, with the 6800 or a variant becoming the processor of choice for the 1977 Trinity and the computers that followed. The primary issue that nixed this was the same one that resulted in the 6502 in the first place: Motorola management not realizing that, for all the apparent success and advantages of the 6800, wherever there's room for a cheaper product, someone will make one and take over. (One could argue that this wasn't so obvious at the time, but exactly this had happened multiple times in the mainframe and minicomputer markets over the previous decade or so, and even Intel had achieved success with some of their earlier products by selling them at a huge loss and banking on the market growing and production prices falling.)

I don't think that would have had any effect on the 16-bit era, though. The 68000 offered little in the way of backwards compatibility with the 68xx ecosystem and it's not clear that backwards compatibility was particularly prized by IBM anyway. The most common story is that IBM chose the 8088 for the PC because of familiarity with the Intel ecosystem gained from building the Datamaster and because they had a license to manufacture Intel-designed CPUs. Had the 8086 architecture been a fresher start rather than hewing so closely to the design goal of being a nearly-drop-in replacement for the 8085, I don't think that would have made any difference.

There might also have been issues with the 68000 at the time. Bill Gates has said in an interview that he and Paul Allen convinced IBM to go 16-bit, and "we looked at 68000 which unfortunately wasn't debugged at the time so decided to go 8086."

Quote:
...running nice and reliable without Microsoft.

I wouldn't count on that. Keep in mind that from 1978 onward the vast majority of 6502 users were running Microsoft software when they turned on their computers, and that followed their (more moderate) success on two previous CPU architectures. The MS-DOS thing was certainly influenced by luck, but mainly by Gates' desire to get his software running on everything and charge everybody for it.

_________________
Curt J. Sampson - github.com/0cjs


PostPosted: Thu Nov 28, 2019 10:56 am 

Joined: Sat Dec 01, 2018 1:53 pm
Posts: 730
Location: Tokyo, Japan
ttlworks wrote:
If you try designing a CPU that uses as little chip area as possible while making efficient use of a small memory size, no matter what approach you take, the end result somehow always shows a tendency to resemble the 6502 ISA.

Really? So what's up with the RCA 1802, then?

_________________
Curt J. Sampson - github.com/0cjs


PostPosted: Thu Nov 28, 2019 11:01 am 

Joined: Fri Nov 09, 2012 5:54 pm
Posts: 1431
The 6502 was deliberately engineered/optimized to work as what we nowadays call an "embedded microcontroller".
//It just happened to end up in computers too, because at its introduction it delivered performance similar to the competing CPUs at a much lower price.

Yes, size is cost: a square inch of processed silicon tends to be more expensive than a square inch of gold.

In the 8085, the registers take a lot of space on the silicon.
In the 6809, the instruction decoder/sequencer takes a lot of space on the silicon. //IMHO the 68HC11 microcontrollers weren't too big a commercial success.
The 8031 core might be small, but when trying to build something other than a PLC with it, the ROM for the code probably takes a lot of space on the silicon.
The PIC16 core might be impressively small, but when trying to do "serious" applications with it, the ROM for the code... :roll:


PostPosted: Thu Nov 28, 2019 11:13 am 

Joined: Fri Nov 09, 2012 5:54 pm
Posts: 1431
cjs wrote:
The MS-DOS thing was certainly influenced by luck

Yes. If IBM had built the PC around a 68000, and without asking Microsoft to write the operating system... :lol:

cjs wrote:
Really? So what's up with the RCA 1802, then?

I think the registers take a lot of space on the silicon, and things like conditional branches and subroutine calls might give you a bit of a headache at some point.
But the radiation-hardened implementations of the RCA 1802 certainly made it far: the Galileo spacecraft, the Hubble telescope, the Magellan Venus probe...
...for that sort of application, the price of the CPU probably doesn't matter.


PostPosted: Thu Nov 28, 2019 11:28 am 

Joined: Fri Nov 09, 2012 5:54 pm
Posts: 1431
Intel always tried to keep some backward compatibility, no matter what.
This made things easier for the end customers, who could fall back on an "existing ecosystem".

Motorola always made a clean break:
the 6809 wasn't too compatible with the 6800, the 68k wasn't too compatible with the 6800, and ColdFire wasn't too compatible with the 68k.
Customers said that using Motorola CPUs always felt as if somebody were pulling the rug out from under their feet.

;---

Now that you mention it: there is a very impressive bug list for the Intel 8086..80386 CPUs,
but not much seems to be known about bugs in the Motorola 68k CPUs; IMHO there must have been some.

I remember that the 68HC11 had bugs related to the status register after Motorola moved 68HC11 production from Scotland to another country.


PostPosted: Thu Nov 28, 2019 11:58 am 

Joined: Sat Dec 01, 2018 1:53 pm
Posts: 730
Location: Tokyo, Japan
ttlworks wrote:
The 6809 wasn't too compatible with the 6800.

The 6809 was about as compatible with the 6800 as the 8086 was with the 8085. For both systems you'd put your old assembler source code through a translating assembler for the new architecture, probably tweak a few things mainly related to initialization, and you're done.

I think that dropping 6800 binary compatibility for the 6809 was a reasonable decision. It wasn't like there were a huge number of (or many at all) microcomputer manufacturers that could have taken advantage of it. (Hitachi's BASIC Master series is the only mass-market microcomputer that comes to mind.) There were far more 6800s in embedded systems and the like, and for them reassembling their source code is a trivial cost when building and testing a platform with the new chip.

(For MOS with the 65816 it made sense to spend gates on binary compatibility rather than reducing the price or adding more features, since they hoped to sell it as an upgrade to existing microcomputer systems where most users had neither the source code for their programs nor the wherewithal to reassemble them.)

Quote:
The 68k wasn't too compatible with the 6800.

Not at all, actually, and that saved software developers enormous amounts of time and money overall. For new code or an actual port of old software (as opposed to an automatic translation) it was easy to take advantage of 16-bit data and >16-bit address space on the 68000 and a pain on the 8086. Intel eventually came around to that point of view and made their 32-bit architecture incompatible with the 16-bit architecture, relegating the 16-bit architecture to a separate mode.

_________________
Curt J. Sampson - github.com/0cjs


PostPosted: Thu Nov 28, 2019 1:47 pm 

Joined: Fri Nov 09, 2012 5:54 pm
Posts: 1431
The 6800 only had one index register; disabling interrupts in order to use/abuse the stack pointer as a second index register doesn't sound like a clean approach to solving that problem.
Some of the 6800 instructions do map directly to the 6809 at the binary level, but a few certainly don't.

68000 assembly language was "the language of the gods": a 68020 plus FPU gave you a chance to translate your C code more or less 1:1 into 68k machine instructions.
Coding in 68k assembly felt like waltzing.
RISC coding, by contrast, sometimes felt like climbing out of the boxing ring, counting your bruises, then trying to remember how you got each of them, and in what order.

The 65816 was meant to be sold as an upgrade, especially the 65802 (now out of production), which was pin compatible with the 6502 and binary compatible with the 65816.

For operating systems in the 80s the Intel 8086 segmentation scheme probably made sense, but I don't think that making use of it is too healthy for a coder in the long run...


PostPosted: Thu Nov 28, 2019 2:01 pm 

Joined: Thu Dec 11, 2008 1:28 pm
Posts: 10980
Location: England
cjs wrote:
ttlworks wrote:
It's interesting to see how management/marketing decisions have shaped the landscape since the invention of the 4004. In a parallel universe, we would all have 68k-powered PCs...

A more plausible parallel universe, to my mind, would have been one without the 6502, with the 6800 or a variant becoming the processor of choice for the 1977 Trinity and the computers that followed. The primary issue that nixed this was the same one that resulted in the 6502 in the first place: Motorola management not realizing that, for all the apparent success and advantages of the 6800, wherever there's room for a cheaper product, someone will make one and take over.
...
I don't think that would have had any effect on the 16-bit era, though. ... The most common story is that IBM chose the 8088 for the PC because of familiarity with the Intel ecosystem gained from building the Datamaster and because they had a license to manufacture Intel-designed CPUs. Had the 8086 architecture been a fresher start rather than hewing so closely to the design goal of being a nearly-drop-in replacement for the 8085, I don't think that would have made any difference.

There might also have been issues with the 68000 at the time. Bill Gates has said in an interview that he and Paul Allen convinced IBM to go 16-bit, and "we looked at 68000 which unfortunately wasn't debugged at the time so decided to go 8086."

Nice link, thanks. See also here:
Quote:
The 68000 was carefully considered. "An excellent architecture chip, it has
proven to be a worthy competitor to the Intel-based architecture."
There were four major concerns:

1) A 16-bit data path would require more bus buffers, therefore a more
expensive system board.

2) More memory chips for a minimum configuration.

3) While it had a performance advantage, the 68000 was not as memory
efficient.

4) Companion and support chips not as well covered as Intel's.

He also felt that the 68000 didn't have as good software and support tools,
and that the similar register model allowed the porting of 8080 tools
to the 8086/8088.


As for Moto losing out on the low-cost market tapped by the 6502, there's a possible view that they weren't too worried. Their big markets were first automotive and then pagers, where volumes are way higher than even the VIC-20's millions. See the oral histories:



PostPosted: Thu Nov 28, 2019 3:38 pm 

Joined: Sat Dec 01, 2018 1:53 pm
Posts: 730
Location: Tokyo, Japan
ttlworks wrote:
The 6800 only had one index register; disabling interrupts in order to use/abuse the stack pointer as a second index register doesn't sound like a clean approach to solving that problem.

The single index register certainly was a major annoyance, and arguably their worst technical design decision. That said, the 6502 has its own annoyances. It's (perhaps arguably) a better design than the 6800, especially given the focus on reducing cost, but it's nowhere near as nice as the 6809.

Quote:
Some of the 6800 instructions do map directly to the 6809 at the binary level, but a few certainly don't.

Which ones don't map? I'm not aware of any, but maybe I missed something. I'm not seeing anything obvious from your link. (I suppose that some of the translations might not be entirely obvious, such as TSX to LEAX ,S.)

Quote:
For operating systems in the 80s the Intel 8086 segmentation scheme probably made sense, but I don't think that making use of it is too healthy for a coder in the long run...

Although there were apparently people inside Intel arguing strongly that the segmentation scheme made sense even for new software (I've heard stories about meetings where it was argued that people would have 96 KB code areas, with the code segment register pointing to $00000 or $08000, so that code in the $08000-$0FFFF area could be called with short calls from both $00000-$07FFF and $10000-$17FFF), I'm not buying that the segmentation helped in any real way, given that all the OSs were single-process anyway. It seems pretty clear to me that the major advantage was that it made it very easy to upgrade embedded 8085 systems that were bursting at the seams with code, data, or both. (I discuss this in more detail here.)

_________________
Curt J. Sampson - github.com/0cjs


PostPosted: Thu Nov 28, 2019 4:10 pm 

Joined: Fri Nov 09, 2012 5:54 pm
Posts: 1431
Quote:
the 6502 has its own annoyances

If you need index registers with more than 8 bits, and 256 bytes of stack won't do, it certainly has.
That puts quite a limit on the operating system you might be able to use in your computer.

Quote:
Which ones don't map? I'm not aware of any, but maybe I missed something.

When comparing the 6800 and 6809 instruction maps:
For instance, TST is at a different place in the "OP2" group, and the instructions for pushing/pulling registers in the "MISC3" group are a different story altogether.

I hadn't considered that the 8086 segmentation scheme might be helpful for upgrading from 8085 systems.
But over time it has brought us a lot of trouble.
640kB should do for everybody... no wait, 640MB... better make that 640GB. ;)


PostPosted: Thu Nov 28, 2019 4:50 pm 

Joined: Sat Dec 01, 2018 1:53 pm
Posts: 730
Location: Tokyo, Japan
ttlworks wrote:
Quote:
the 6502 has its own annoyances

If you need index registers with more than 8 bits, and 256 bytes of stack won't do, it certainly has.

Actually, the 256-byte stack limit doesn't seem like a big issue to me, but I keep forgetting that for languages like C a bigger stack is really helpful. (Perhaps score a point for the 6800 there.) I tend towards languages that dump stuff in a heap.

The 8-bit index registers have their own pain, though. You now have to worry about exceeding the 256-byte limit on data structure size. You have to find another way to pass around base addresses of things. (The zero page and (zp),y addressing mode are pretty helpful with this, though that brings its own issues, such as having to do multibyte additions/subtractions when incrementing/decrementing those base addresses.) There's probably more.

I've not written enough 6800 code to truly know that the 6502 has made a better set of tradeoffs here, but it looks to me like it probably did.

Quote:
When comparing the 6800 and 6809 instruction maps:
For instance, TST is at a different place in the "OP2" group, and the instructions for pushing/pulling registers in the "MISC3" group are a different story altogether.

Let me say again, to be clear: like the 8086 vs. the 8085, the 6809 is source compatible with the 6800, not binary compatible. You run your 6800 source code through a translating assembler, and out comes 6809 binary code. As far as I'm aware, there are very few issues you have to deal with when doing this (though grovelling in the stack may be one of them).

Such a translating assembler isn't hard to write, though you didn't need to do so since Motorola provided one.

This is a pretty reasonable compromise to make when you're looking at a new processor with such vastly increased capabilities; trying for binary compatibility generally would mean an emulation mode or just a hot mess. Intel made the same choice. I'm not sure if MOS/Commodore ever did this for the native mode of the 65816.

Quote:
I hadn't considered that the 8086 segmentation scheme might be helpful for upgrading from 8085 systems.
But over time it has brought us a lot of trouble.

Yup. It was great for what turned out to be a tiny part of the market in the long run, and caused everybody else, including Intel, massive pain that is still ongoing.

_________________
Curt J. Sampson - github.com/0cjs


PostPosted: Thu Nov 28, 2019 5:18 pm 

Joined: Fri Dec 11, 2009 3:50 pm
Posts: 3367
Location: Ontario, Canada
cjs wrote:
the 6809 is source compatible with the 6800, not binary compatible. You run your 6800 source code through a translating assembler, and out comes 6809 binary code. [...]
This is a pretty reasonable compromise to make when you're looking at a new processor with such vastly increased capabilities
Agree.

ttlworks wrote:
I hadn't considered that the 8086 segmentation scheme might be helpful for upgrading from 8085 systems.
Dieter, I think what they did made a lot of sense. But only in a limited context.

ttlworks wrote:
But over time it has brought us a lot of trouble.
Right -- the context was too short-sighted. Segmentation (a la Real Mode) had a limited shelf life. :|

But this could've been substantially extended if the paragraph size weren't fixed at 16 bytes. As memory got cheaper, it would've been attractive to introduce processors that could switch the paragraph size to 64 bytes, for example, or even 256 or more. Suddenly Real Mode's ceiling of 1 MB rises to a more future-proof value of 16 MB or more. :!:

-- Jeff

_________________
In 1988 my 65C02 got six new registers and 44 new full-speed instructions!
https://laughtonelectronics.com/Arcana/ ... mmary.html


PostPosted: Thu Nov 28, 2019 6:44 pm 

Joined: Mon Apr 23, 2012 12:28 am
Posts: 760
Location: Huntsville, AL
I realize that most of us on this forum are old and very much curmudgeons, and I have spent untold amounts of money avoiding the industry leaders like Itty Bitty Machines, Intel, Microsoft, etc.

In reality, I think that the architecture we all seem to hate, that of the x86, is long gone, and the majority of us who ever have the opportunity to program an Intel/AMD product today never have to work with those processors in real mode. By the time we get access to the machine, it has been placed into a mode where a flat memory model is in place.

Perhaps ARM and similar RISCy machines will eventually match the performance of the Intel/AMD products, but for the time being the RISCy Intel/AMD processors are significantly more capable than their competitors.

Threads like this one carrying on about the 8086 real-mode architectural limitations are getting to be like broken records, since it's been nearly two decades since I used an x86 in that mode. I used to carry on about it as well, but the demonstrated performance of the modern Intel/AMD processor makes all of my arguments against its real-mode architectural limitations moot. Just like my thinking that the NEC APC was a better x86 computer than the IBM PC/XT because the APC used an 8086 instead of an 8088. In the end, the IBM PC and compatibles were better computers because of ... price, application software, more sources of supply, open HW specifications, etc., and not because the 8088 was a better processor than the 65816/65802, the National 32000, the Motorola 68000 family, the DEC/Compaq Alpha, the SPARC, the Intergraph Clipper, the MIPS, the Inmos Transputer, the IBM/Motorola/Apple PowerPC, the ARMv8, or the <name another processor>.

_________________
Michael A.


PostPosted: Thu Nov 28, 2019 9:38 pm 

Joined: Wed Mar 01, 2017 8:54 pm
Posts: 660
Location: North-Germany
From the hardware point of view, most if not everything has been said, methinks.

But the software had a huge impact as well. Remember the "killer app" Turbo Pascal? It elevated the poor 8088 above the mighty 68K in terms of usability.

An Atmel engineer once told me that during the development of the AVR RISC controller they invited the people from IAR to discuss a C compiler for the AVR family. IAR explained that a few extra capabilities for R26..31 (aka the X, Y, and Z registers) would help a lot. Well, Atmel implemented those special instructions, and so the IAR C compiler produces extraordinarily fast and dense code => there was nearly no reason to fall back to assembly language. I don't know whether they coined the phrase 'time to market', but AVR µCs and IAR compilers really did the trick.


-- my 2 cents


PostPosted: Fri Nov 29, 2019 1:29 am 

Joined: Sat Dec 01, 2018 1:53 pm
Posts: 730
Location: Tokyo, Japan
MichaelM wrote:
Threads like this one carrying on about the 8086 real-mode architectural limitations are getting to be like broken records, since it's been nearly two decades since I used an x86 in that mode.

I'm not sure about others, but here I've been discussing microprocessors during the 1974-1989 time period, when that 8086 mode was heavily used even on 286+ processors. What happened after that isn't really relevant to the experience of microprocessor software and hardware developers during that decade and a half. This forum is appropriate for discussion of historical issues, is it not?

(Heck, for that matter, from 1977 there was at least one widely used 32-bit flat architecture in the non-microprocessor world that solved all the problems we're discussing, if you were willing to spend several tens of thousands of dollars or more for each machine. Also not terribly relevant to most microcomputer users.)

_________________
Curt J. Sampson - github.com/0cjs

