BitWise wrote:
some must be used to point to the code and data sections of the executing application)
That's OS convention, not something enforced by the hardware at all, as far as I can see from Googling. However, you get the same thing with RISCs too. You should study the ABIs of Unix and Windows for RISC processors some time.
BTW, you get the same thing with the 6502 as well, since zero page often has specially reserved locations that hold OS-specific variables.
Quote:
no hardware stack
Neither do RISC CPUs. Hardware stacks, unless they're genuine stacks rather than convenient stack-like abstractions on top of available RAM, take resources which could be better spent on other functions. They're simply unnecessary, and for leaf subroutines (which occur a lot in well-factored code, particularly in Forth, Lisp, functional languages in general, and well-written C) they actually slow things down.
The Intel chips invest a massive amount of silicon into emulating a true stack on-die to minimize these kinds of delays. Stack caching, branch target buffers, and branch prediction logic are all required to restore the mythical one-cycle overhead for branches on x86.
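To put the leaf-subroutine point in concrete terms, here's a rough C sketch (my own example, not tied to any particular ABI). On a link-register RISC, a small leaf like this never touches memory at all: the return address sits in a register and the locals stay in registers, whereas a machine with a mandatory hardware call stack pays a memory push/pop on every single call.

Code:
/* Illustrative example: a leaf function (calls nothing else).
 * On a link-register RISC (ARM, RISC-V, PowerPC, ...) the return
 * address stays in a register and the compiler keeps the locals
 * in registers, so no stack memory is touched at all. */
static unsigned clamp_add(unsigned a, unsigned b, unsigned limit)
{
    unsigned sum = a + b;              /* lives in a register */
    return (sum > limit) ? limit : sum;
}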
Quote:
variable length binary and decimal arithmetic
Which you don't have to use if you don't want or need to. Contrast with other CPUs, where multi-precision arithmetic means chaining multiple instructions through a carry or extend bit (the latter in the case of the 68K family).
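Here's roughly what that looks like from C (a sketch of my own, with a made-up add256 purely for illustration): adding two 256-bit numbers on a fixed-word-width CPU means chaining word-sized adds through the carry by hand, which is exactly the multi-instruction dance the S/360's variable-length arithmetic folds into a single instruction.

Code:
#include <stdint.h>

/* Illustrative sketch: add two 256-bit numbers stored as four 64-bit
 * limbs (least-significant limb first), propagating the carry by hand.
 * This is the carry chain that fixed-width CPUs force on you. */
void add256(uint64_t result[4], const uint64_t a[4], const uint64_t b[4])
{
    uint64_t carry = 0;
    for (int i = 0; i < 4; i++) {
        uint64_t sum = a[i] + carry;
        carry = (sum < carry);          /* carry out of a[i] + carry */
        sum += b[i];
        carry += (sum < b[i]);          /* carry out of sum + b[i]   */
        result[i] = sum;
    }
}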
Quote:
EBCDIC(sic) character encoding
and ASCII too. The S/360 supported both explicitly. They dropped ASCII support from the S/370 because nobody used it (I think this was a mistake too, of course). And you can readily support ASCII with the S/370 and S/390 as well; just don't use any of the dedicated string instructions that expect EBCDIC, or pre-convert all your text first.
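For the "pre-convert" route, here's a rough C sketch of the idea (my own illustrative function, covering only the common printable codes; a real converter just indexes a full 256-byte translation table, which is exactly the kind of job the TR instruction is built for):

Code:
/* Illustrative sketch: map the common EBCDIC codes to ASCII.
 * A production converter would index a full 256-byte table. */
unsigned char ebcdic_to_ascii(unsigned char c)
{
    if (c == 0x40) return ' ';                            /* space */
    if (c >= 0xF0 && c <= 0xF9) return '0' + (c - 0xF0);  /* 0-9   */
    if (c >= 0xC1 && c <= 0xC9) return 'A' + (c - 0xC1);  /* A-I   */
    if (c >= 0xD1 && c <= 0xD9) return 'J' + (c - 0xD1);  /* J-R   */
    if (c >= 0xE2 && c <= 0xE9) return 'S' + (c - 0xE2);  /* S-Z   */
    if (c >= 0x81 && c <= 0x89) return 'a' + (c - 0x81);  /* a-i   */
    if (c >= 0x91 && c <= 0x99) return 'j' + (c - 0x91);  /* j-r   */
    if (c >= 0xA2 && c <= 0xA9) return 's' + (c - 0xA2);  /* s-z   */
    return '?';   /* everything else is left to a fuller table */
}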
And I can guarantee you that, with enough demand, IBM will absolutely return ASCII support to the instruction set. The problem is, no demand exists.
Implying that the S/360 is highly faulty because it uses EBCDIC is really no different from claiming the 6502 is trash because it uses PETSCII or ATASCII.
Quote:
a fixation with base + displacement addressing
Because all program code had to operate PC-relative, and data structures could be moved about in memory at any time as part of memory compaction (remember, paging wasn't introduced until the 370). This almost certainly explains why certain registers were, by OS convention, used to hold the addresses of certain structures. Since the OS has no knowledge of what is and is not a pointer, and we hadn't invented pointer tagging or tag maps yet, it made sense to pre-designate registers to hold pointers.
BTW, you had the same thing in classic MacOS, using its handle mechanism.
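For anyone who never used it: a handle is just a pointer to a master pointer, so the Memory Manager can move the underlying block and only has to fix up one location behind your back. A rough sketch of the idea in C (the Handle typedef mirrors the real Toolbox one, but byte_at is my own illustrative helper, not an actual Toolbox call):

Code:
/* Illustrative sketch of the handle idea: the application holds a
 * pointer to a master pointer; the memory manager may relocate the
 * block at any time and only updates the master pointer. */
typedef char **Handle;

char byte_at(Handle h, long offset)
{
    return (*h)[offset];   /* double dereference: base + displacement */
}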
All things considered, in my coding, I find I use base+displacement addressing 80% of the time, and immediate mode the rest. Except for 6502/65816 coding, I virtually
never use absolute addressing. And if you're coding object-oriented software, you will be using base+offset addressing that much more, because not only do you dereference your data that way, you dereference your
code that way too!
Every time you use ABS,X or (DP),Y, you're in effect using base+displacement addressing (the former uses ABS as the displacement for the base X, and the latter uses Y as the displacement for the base in (DP)).
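To make the "code and data" point concrete, here's a rough C sketch (my own example): the field read is base+displacement off the object pointer, and the virtual call is base+displacement off the vtable pointer followed by an indirect jump.

Code:
#include <stdio.h>

/* Illustrative sketch: a hand-rolled object with a vtable.  Reading a
 * field is base+displacement off the object pointer; a virtual call is
 * base+displacement off the vtable pointer, then an indirect jump. */
struct shape;

struct shape_vtbl {
    double (*area)(const struct shape *self);  /* code reached via base+offset */
};

struct shape {
    const struct shape_vtbl *vtbl;   /* offset 0: pointer to the code table */
    double width, height;            /* data reached via base+offset        */
};

static double rect_area(const struct shape *self)
{
    return self->width * self->height;  /* base = self, displacement = field offsets */
}

static const struct shape_vtbl rect_vtbl = { rect_area };

int main(void)
{
    struct shape r = { &rect_vtbl, 3.0, 4.0 };
    /* two base+displacement loads (vtbl, then the area slot), then the call */
    printf("area = %f\n", r.vtbl->area(&r));
    return 0;
}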
Quote:
The 68000 is much better.
I didn't say the 68000 was a System/360. I said it drew inspiration from the S/360. It was also inspired by many of DEC's offerings, including the PDP-11 and PDP-10.