SBC-3 Development update

Topics related to the SBC- series of printed circuit boards, designed by Daryl Rictor and popular with many 6502.org visitors.
8BIT
Posts: 1787
Joined: 30 Aug 2002
Location: Sacramento, CA

Post by 8BIT »

Thanks for your example. I was under the false impression that the 68000 was better than that.

Yes, lack of volume is definitely the cause of the high cost.

Daryl
kc5tja
Posts: 1706
Joined: 04 Jan 2003

Post by kc5tja »

I think it'd be fair to say that the 68000 family is definitely the superior of the two, as the 68020, 030, 040, and 060 are simply awesome CPUs. But the 68000, for as much as I love that CPU to death, really isn't a good performer all things considered.

From a programming perspective, though, I absolutely adore the 68000 and its successors. I've never found an easier assembly language. History suggests that the 68000 was strongly influenced by the IBM System/360 architecture; if so, I suspect I'd find that platform quite pleasant as well.
BitWise
In Memoriam
Posts: 996
Joined: 02 Mar 2004
Location: Berkshire, UK

Post by BitWise »

kc5tja wrote:
History suggests that the 68000 was strongly influenced by the IBM System/360 architecture; if so, I suspect I'd find that platform quite pleasant as well.
Hhhm, the 360 has 16 general-purpose 32-bit registers (R0 reads as zero when used as a base or index register, and by convention some registers must be used to point to the code and data sections of the executing application), no hardware stack (you construct one in software), variable-length binary and decimal arithmetic, EBCDIC character encoding, (originally) a really bad floating-point format (based on powers of 16 rather than 2), and a fixation with base + displacement addressing (no auto increment/decrement that I can remember).

Makes me cringe every time I remember the opcodes. The 68000 is much better.
Andrew Jacobs
6502 & PIC Stuff - http://www.obelisk.me.uk/
Cross-Platform 6502/65C02/65816 Macro Assembler - http://www.obelisk.me.uk/dev65/
Open Source Projects - https://github.com/andrew-jacobs
OwenS
Posts: 105
Joined: 26 Jul 2007

Post by OwenS »

kc5tja wrote:
I think it'd be fair to say that the 68000 family is definitely the superior of the two, as the 68020, 030, 040, and 060 are simply awesome CPUs. But the 68000, for as much as I love that CPU to death, really isn't a good performer all things considered.

From a programming perspective, though, I absolutely adore the 68000 and its successors. I've never found an easier assembly language. History suggests that the 68000 was strongly influenced by the IBM System/360 architecture; if so, I suspect I'd find that platform quite pleasant as well.
I must admit that the 68k assembly language is very, very nice...
...But ARM* beats it :).

(* ARM in ARM mode, not Thumb/Thumb2 mode; Thumb may be smaller, but it's much less regular and much more compiler-targeted)
kc5tja
Posts: 1706
Joined: 04 Jan 2003

Post by kc5tja »

BitWise wrote:
some must be used to point to the code and data sections of the executing application)
OS convention, and not at all enforced by the hardware, as far as I can see from Googling. However, you get the same thing with RISCs too. You should study the ABIs of Unix and Windows for RISC processors sometime.

BTW, you get the same thing with the 6502 as well, for zero-page often has specially reserved locations to hold OS-specific variables.
Quote:
no hardware stack
Neither do RISC CPUs. Hardware stacks, unless they're genuine stacks and not convenient stack-like abstractions on top of available RAM, take resources which could better be spent on other functions. They're simply unnecessary, and for leaf subroutines (which occur a lot in well-factored code, particularly in Forth, Lisp, all functional languages, and well-written C), pushing return addresses through memory actually slows things down.
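The "construct one in software" point above can be sketched in a few lines. This is a hedged model, not any particular OS's linkage convention: one value plays the stack-pointer register, and ordinary loads and stores into flat RAM do all the pushing and popping, with no dedicated stack hardware involved.

```python
# Minimal sketch of a software stack, as S/360 and most RISC code builds it:
# a designated "register" (here, self.sp) plus plain memory accesses.
class SoftwareStack:
    def __init__(self, size=256):
        self.mem = [0] * size   # flat RAM, no dedicated stack hardware
        self.sp = size          # stack grows downward from the top of RAM

    def push(self, word):
        self.sp -= 1            # pre-decrement the pointer, then store
        self.mem[self.sp] = word

    def pop(self):
        word = self.mem[self.sp]
        self.sp += 1            # load, then post-increment the pointer
        return word

stack = SoftwareStack()
stack.push(0x1234)
stack.push(0x5678)
assert stack.pop() == 0x5678    # last in, first out
assert stack.pop() == 0x1234
```

A leaf subroutine under this scheme can simply keep its return address in a register and never touch memory at all, which is exactly the saving described above.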

The Intel chips invest a massive amount of silicon into emulating a true stack on-die to minimize these kinds of delays. Stack caching, branch target buffers, and branch prediction logic are all required to restore the mythical 1-cycle overhead for branches in x86.
Quote:
variable length binary and decimal arithmetic
Which you don't have to use if you don't want/need to. Contrast with other CPUs, which must chain multiple instructions through a carry or extend bit (the latter in the case of the 68K family).
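The carry-chaining the other CPUs need can be illustrated like this. A hedged sketch (limb width and values are arbitrary): a wide addition done as a sequence of narrow additions, each one folding in the carry from the previous limb, the way a 65816 chains ADC or a 68000 chains ADD then ADDX.

```python
# Multi-word addition with an explicit carry flag, least-significant limb
# first, mimicking an ADC/ADDX instruction chain on a 16-bit ALU.
def add_multiword(a_limbs, b_limbs, width=16):
    mask = (1 << width) - 1
    carry = 0
    out = []
    for a, b in zip(a_limbs, b_limbs):
        s = a + b + carry
        out.append(s & mask)   # keep the low `width` bits of this limb
        carry = s >> width     # propagate the carry into the next limb
    return out, carry

# 0x0001FFFF + 0x00000001 = 0x00020000, split into 16-bit limbs
limbs, carry = add_multiword([0xFFFF, 0x0001, 0x0000],
                             [0x0001, 0x0000, 0x0000])
assert limbs == [0x0000, 0x0002, 0x0000] and carry == 0
```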
Quote:
EBCDIC character encoding
and ASCII too. The S/360 supported both explicitly. They dropped ASCII support in the S/370 because nobody used it (a mistake too, I think). And you can readily support ASCII with the S/370 and S/390 as well; just don't use any of the dedicated string instructions that expect EBCDIC, or pre-convert all your text.

And I can guarantee you that, with enough demand, IBM will absolutely return ASCII support to the instruction set. The problem is, no demand exists.

Implying that the S/360 is highly faulty because it uses EBCDIC is really no different from claiming the 6502 is trash because its host machines used PETSCII or ATASCII.
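The "pre-convert all your text" option is cheap, since an encoding is just a translation table. As a sketch, Python's standard library happens to ship an EBCDIC codec (cp037, the common US variant), so the round trip is a one-liner; on a 6502, a PETSCII-to-ASCII fixup is a similar table lookup.

```python
# The same text in EBCDIC and ASCII: different byte values, identical
# meaning, converted with a table lookup (the cp037 codec).
ebcdic_bytes = 'Hello'.encode('cp037')   # EBCDIC encoding of the text
ascii_bytes = 'Hello'.encode('ascii')

assert ebcdic_bytes == b'\xc8\x85\x93\x93\x96'   # EBCDIC code points
assert ebcdic_bytes != ascii_bytes               # different bytes...
assert ebcdic_bytes.decode('cp037') == ascii_bytes.decode('ascii')  # ...same text
```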
Quote:
a fixation with base + displacement addressing
Because all program code had to operate PC-relative, and data structures could be moved about in memory at any time as part of memory compaction (remember, paging wasn't introduced until the 370). This almost certainly explains why certain registers were, by OS convention, used to hold addresses to certain structures. Since the OS lacks knowledge of what is and is not a pointer, and we hadn't invented pointer-tagging or tag-maps yet, it makes sense to pre-designate registers to hold pointers.

BTW, you had the same thing in classic MacOS, using its handle mechanism.

All things considered, in my coding, I find I use base+displacement addressing 80% of the time, and immediate mode the rest. Except for 6502/65816 coding, I virtually never use absolute addressing. And if you're coding object-oriented software, you will be using base+offset addressing that much more, because not only do you dereference your data that way, you dereference your code that way too!

Every time you use ABS,X or (DP),Y, you're in effect using base+displacement addressing (the former uses ABS as the displacement added to the base in X, and the latter uses Y as the displacement added to the base fetched from (DP)).
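The base+displacement idea can be modeled directly. A rough sketch (the field layout is invented for illustration): a record lives somewhere in a flat byte buffer, and every field access computes effective address = base + displacement, exactly the shape of ABS,X or (DP),Y.

```python
import struct

# Flat RAM, with a record of two unsigned 16-bit little-endian fields
# placed at an arbitrary base address.
memory = bytearray(64)
record_base = 16                       # where the record happens to live

# Store the two fields at base+0 and base+2.
struct.pack_into('<HH', memory, record_base, 0xCAFE, 0xBEEF)

def load_field(base, displacement):
    # effective address = base + displacement, one 16-bit load
    return struct.unpack_from('<H', memory, base + displacement)[0]

assert load_field(record_base, 0) == 0xCAFE
assert load_field(record_base, 2) == 0xBEEF
```

Because only `record_base` changes when the record moves, the displacements stay valid, which is precisely why position-independent and object-oriented code leans on this mode so heavily.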
Quote:
The 68000 is much better.
I didn't say the 68000 was a System/360. I said it drew inspiration from the S/360. It was also inspired by many of DEC's offerings, including the PDP-11 and PDP-10.
ralphw
Posts: 2
Joined: 14 Feb 2011

Benchmarks (65816 vs. 68000) for graphics

Post by ralphw »

Quote:
This is meaningless. The 65816 is capable of moving four bytes in the same amount of time the 68000 is capable of only moving two...

Additionally, the 68000 has 32-bit registers, but 32-bit operations require additional cycles. This is because it only has a 16-bit ALU inside.

WDC used to have an interesting article on their site comparing the relative performance of the 68000 to the 65816. Surprisingly, the 65816 was only 20% slower than the 68000 at the same clock speed.
So I have to ask the question: has anyone (else) ported QuickDraw to their homebrew 65816 and done some benchmarking? A 128K Mac running a 68000 at 8 MHz vs. a 128K 65816 system running at 8 MHz would be an interesting test.

The 68000 assembly source to QuickDraw is publicly available now, making such a project feasible.