GARTHWILSON wrote:
65LUN02, it's not clear whether you have delved into the 65816. It would undoubtedly have been designed different if Apple hadn't required that it be able to run legacy '02 software;
I hadn't seen the 65816 until a few months ago. No offense to you, WDC, or Apple, but from reading through the datasheet and reading about how it's used, it looks like the offspring of a shotgun marriage between the 65C02 and the 80286.
GARTHWILSON wrote:
there's always the tradeoff between efficiency and having a larger continuous address space, and I think the '816 has a pretty good compromise.
This is where we'll disagree. I suspect the reason you, WDC, and others are fine with segments is that (judging from your website) your work is mostly with embedded systems. Bill Mensch's response to me was that the microcontroller market was (and still is) bigger than the microcomputer market.
My decades in tech were all spent with microcomputers, from the Apple ][ to the Mac, then a side trip through PenPoint, General Magic, Palm, and Windows CE PDAs, then the first set of phones that ran apps, all with server-side software running on rack-mounted servers until the cloud hid all that complexity away.
The commonality in my work was graphical UIs. VGA had a 640x480, 16-color spec in 1987. The EO 440 tablet computer of 1993 had a 480x640 screen with 4 levels of gray. Those work out to 153,600 and 76,800 bytes per screen buffer, respectively; either way, more than 64K. Whatever Mac II I had in the early 90s had 32-bit color by then, and thus well over two banks of 64K memory just for the screen.
Sure, you can deal with that in banks, but if that were the better choice, the 80386 wouldn't have moved to a flat memory model.
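To make that concrete, here's a quick back-of-the-envelope sketch in C (my own illustration, not taken from any datasheet): it works out the buffer sizes above and shows the bank/offset split that 64K banking forces on a pixel address, versus a single flat offset.

Code:
#include <stdio.h>

/* Bytes for a simple packed framebuffer: width x height at bpp bits per pixel. */
static unsigned long fb_bytes(unsigned long w, unsigned long h, unsigned long bpp)
{
    return w * h * bpp / 8;
}

int main(void)
{
    /* VGA 640x480 in 16 colors (4 bpp) and the EO 440's 480x640 in
       4 grays (2 bpp): 153600 and 76800 bytes -- both past one 64K bank. */
    printf("VGA 16-color:  %lu bytes\n", fb_bytes(640, 480, 4));
    printf("EO 440 4-gray: %lu bytes\n", fb_bytes(480, 640, 2));

    /* Reaching pixel (x, y) in the VGA buffer: with a flat 24-bit address it is
       one offset; a 64K-banked part has to split it into a bank plus an offset. */
    unsigned long x = 300, y = 400;
    unsigned long flat   = (y * 640 + x) * 4 / 8;  /* byte offset from buffer start */
    unsigned long bank   = flat >> 16;             /* which 64K bank */
    unsigned long offset = flat & 0xFFFF;          /* offset within that bank */
    printf("flat offset %lu = bank %lu, offset %lu\n", flat, bank, offset);
    return 0;
}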
GARTHWILSON wrote:
Yes, the '02 left a lot of openings available in the op-code table; but the '816 filled them all in with not just long addressing but also a lot of new instructions and addressing modes, making it able to efficiently do things the '02 couldn't do gracefully, or at all.
Yes, I see that. My point in my post is a what-if question of how those opcodes would have evolved if, for example, the Apple /// had been such a big hit that the Lisa and Macintosh were built upon the success of the ][ and ///, with annual incremental iterations along the way. And in that hypothetical, what if Apple had purchased MOS instead of Commodore, and was thus driving the chip design, optimizing it for microcomputers instead of microcontrollers?
We know from the 2010s what that looks like for ARM-core SoCs. We know from the IIc and IIgs how Apple eventually shrunk the IIe down to a handful of custom chips. Imagine if Apple had done that back in 1979.
Mike Markkula could have taken an hour in 1977 to predict the future size, cost, and speed of RAM into the 1990s, just as Gordon Moore took an hour in 1965 to predict the future density and cost of integrated circuits. I suspect someone at Motorola did that. I suspect that is how they managed to convince the decision makers to jump to 32-bit registers and 24-bit addresses in the 68000. To get that chip out in 1979 means that proposal got a green light in 1976 or 1977. To see how big a leap the 68000 was in the industry,
https://en.wikipedia.org/wiki/Transistor_count lists CPUs, their transistor counts, and their years of release, sorted by year.
As I said in my first post in this thread, I think that in foresight Apple and others saw the 68000 as the leapfrog solution away from 8 bits and 64K. In hindsight I think they would have been better off with an incremental 65..02 path, but incrementalism wasn't the culture of the 70s and 80s, and neither MOS nor WDC seemed to be competing with that strategy.
GARTHWILSON wrote:
I'm not saying everyone should necessarily flock to the '816, but rather that we can learn from it before designing upscale 65-family processors. The extra saving and restoring the '816 has to do for interrupts definitely increases the overhead beyond what the '02 needs. It still dramatically outperforms something like the 68000 in interrupt performance though.
I read most of your website in the last few days. You are clearly an '816 fan. That's fine. It does what you need in your work, which is why it exists and why it is still being purchased.
Interrupt performance is not a spec I've ever seen touted for a microcomputer, PDA, or smartphone, just as cache size and virtual memory performance are not common specs for microcontrollers. There are two markets for CPUs, and their optimization needs are different.
In all the code I or my team wrote over multiple decades, the total amount of assembly code in released products came to fewer than 10 pages. Ease of writing and ease of maintenance were valued above speed for all but the critical loops, and even then, the critical loops were only optimized if they were noticeably slow. Speed to market of the products was much more important than the speed of the products.
I said I worked in the era of "every byte counts," and it did, but typically that meant every byte sent on the wire (or wirelessly) and every byte of the data set. That "every byte" didn't include every byte of the code itself. In the early 90s, if it fit on an 800K floppy, that was sufficient. By the 00s it just had to fit on a CD-ROM. If the code fit on one of those, then it fit into RAM on the computer it ran on.
This is why, given a time machine back to 1977, I would visit Cupertino and convince Steve and Steve to ask for a 652402 with the only change being a flat, 24-bit address space. Put that in an Apple /// and Woz would have pushed for 560x192 with at least 16 colors, which requires roughly 52K per screen buffer. The IIgs maxed out its graphics at 640×200, no doubt because it's next to impossible to have a screen buffer that spans more than 64K on a CPU that addresses data in 64K banks.
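If anyone wants to check my arithmetic on those buffer sizes, here's a quick sketch (assuming a simple packed framebuffer; the real Apple video modes use more involved layouts, so treat these as ballpark figures):

Code:
#include <stdio.h>

/* Ballpark screen-buffer sizes for a packed framebuffer, and whether each
   one fits inside a single 64K bank.  The mode list is just for illustration. */
static unsigned long fb_bytes(unsigned long w, unsigned long h, unsigned long bpp)
{
    return w * h * bpp / 8;
}

int main(void)
{
    struct { const char *name; unsigned long w, h, bpp; } modes[] = {
        { "560x192, 16 colors", 560, 192, 4 },  /*  53760 bytes */
        { "640x200, 16 colors", 640, 200, 4 },  /*  64000 bytes */
        { "640x400, 16 colors", 640, 400, 4 },  /* 128000 bytes */
    };
    for (int i = 0; i < 3; i++) {
        unsigned long n = fb_bytes(modes[i].w, modes[i].h, modes[i].bpp);
        printf("%s: %6lu bytes -- %s a 64K bank\n", modes[i].name, n,
               n <= 65536UL ? "fits in" : "spills past");
    }
    return 0;
}

In this packed-buffer model, 640x200 at 16 colors is about the last mode that still squeezes into one bank; anything bigger spills over.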