
The most human-friendly instruction set
6502: 19% [ 5 ]
65816 (I made it a separate option due to the nature of this Forum): 7% [ 2 ]
Z80: 0% [ 0 ]
68000: 30% [ 8 ]
MIPS: 0% [ 0 ]
ARM: 11% [ 3 ]
RISC-V: 15% [ 4 ]
X86: 4% [ 1 ]
PowerPC: 0% [ 0 ]
PDP-11: 15% [ 4 ]
Total votes : 27
PostPosted: Fri Feb 09, 2024 5:52 pm 

Joined: Tue Dec 12, 2023 7:00 pm
Posts: 25
Location: London, UK
BigDumbDinosaur wrote:
(...)  Assembly language of any kind is inherently unfriendly—after all, it has no real resemblance to a human language.  At least in C or FORTRAN, recognizable words are used.  LDA or ROL...what the heck is that?  :D

Well, I'd argue the same applies to *nix/POSIX commands (what's slrn?) and yet... what can be more human-friendly than the good old terminal? :-) In terms of "recognizable" words - it didn't work like that in my case. I grew up on the wrong side of the iron curtain, where English education was simply non-existent. So, I remember learning how to code from the BASIC book (in English, of course) I got together with my C16 - simply by typing in examples from the book and trying to analyse their meaning. For my little brain back then there was absolutely no difference (in terms of readability) between "ROL" and "PRINT". When I got completely stuck, I remember translating entire paragraphs of text word by word with a dictionary. My understanding of English was so limited that I can still recall translating words like "and" and "or" :-) I still can't believe I managed to learn BASIC and some 6502 assembly that way - without any other support (I didn't know anyone interested in programming back then).

BigDumbDinosaur wrote:
Even before my first steps in writing a 6800 program nearly 50 years ago, I worked with primitive systems that could only be programmed in raw machine code.  Ergo relative friendliness never really was a factor in my work.

That reminded me of the probably well-known story about von Neumann, who was absolutely furious seeing people move from raw machine code to assembly. He couldn't understand why someone would waste precious resources on "assembling" something that could have been written directly. Do you remember your initial impressions of using ASM? I recall that when I switched from 68000 ASM to C, my initial code looked like variable and function declarations with plenty of inline asm. That simply felt right, or "natural"...

BigDumbDinosaur wrote:
As Gordon notes, programming languages are a means to an end.

I dare to disagree. At least I think that won't happen in the near future. Most likely we'll move to even higher-level languages, but still very formal ones, to describe requirements in a precise way. So we may shift from telling computers "how" to telling them "what", but I doubt the very nature of natural languages will be sufficient for that. The good question is - who will be designing the operating systems or CPUs of the future if we lose that core knowledge? Wait - have I deviated completely from the main topic?...


PostPosted: Fri Feb 09, 2024 8:14 pm 

Joined: Fri Jan 26, 2024 5:47 am
Posts: 37
Location: Prague; Czech Republic; Europe; Earth
@ytropek: I also grew up on the wrong side, but I found it easier to learn BASIC than ASM - at least PRINT "HELLO WORLD" required less translating than its assembly equivalent, not to mention programs that guessed hidden numbers ("Is it larger than 128? Yes or No:").

But your observation doesn't argue against "asm is unfriendly"; it just states that "other languages are hard too" :)

And about the near future - last year I collected some temperature measurements from different parts of my cottage (outside, inside, cellar ...), so I asked ChatGPT "Can you make some graphs from these time series for me?" and presented a small example - and it created a program in Python which did nearly what I wanted. (Not exactly, but a few small changes fixed it.)

Considering how long ChatGPT has been available to the public and how fast similar programs are evolving, I can imagine how many everyday requirements may be satisfied by very informal requests in the really near future. Yes, there will also be a need for experts and precise formal languages, but that is another story :)

Anyway, I lately came across 6309 ASM and found it really easy, fun and satisfying to learn and use, as I could very easily see how to write code for my simple requests and problems. Many instructions are really convenient, like having 4 stack pointers (S, U and X, Y - where instead of PSHS A / PSHU A I had to write STA ,-X / STA ,-Y to put A on the stack and move the pointer, but I could also make reverse stacks growing upward for filling strings and such), getting fast temporary storage by manipulating the stack(s), fast access to variables on the stack (LDA 5,S for reading the 5th byte on the stack and so on), and an instruction for copying memory (TFM X+,Y+, which copies W bytes from X to Y, increasing both after each byte, but which can also work as TFM X-,Y- for the other direction). I could still use more accumulators (A, B, E, F is usually enough, but sometimes one or two more would be nice) and more index registers/stacks like X, Y, U, S, but that would certainly mean more silicon and is not a big problem. It also has a DP (Direct Page) register pointing to an equivalent of the ZP (Zero Page), but I have not found a use for it yet: setting it takes 6 cycles, yet it saves just 1 cycle per LDA/STA, and I have not yet had 7 or more such accesses in a row in an interrupt, or 13+ in normal code - but that will change soon :) The authors of the 6809 gathered statistics on what is used more and what less, and built the instruction set around that.
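To make that concrete, here is a minimal sketch of my own (SRC, DST and COUNT are made-up symbols, not from any real program):

Code:
; 6809/6309: X and Y as extra stacks, plus the 6309 block-copy instruction
        STA   ,-X          ; "push" A: pre-decrement X, then store
        STA   ,Y+          ; or grow a buffer upward: store, then bump Y
; TFM takes its byte count from the 6309's W register
        LDW   #COUNT
        LDX   #SRC
        LDY   #DST
        TFM   X+,Y+        ; copy W bytes from X to Y, post-incrementing both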

I also used 6502, Z80, 8086 and AMD64 ASMs, but they were way harder for me at the time. Anyway, I hope to return to the 6502 later this year and will see if more programming experience makes it easier (I think it will).

I will not vote, for lack of experience, but it would be 6309 and 6502, in that order :)

_________________
http://micro-corner.gilhad.cz/, http://8bit.gilhad.cz/6809/Expanduino/Expanduino_I.html, http://comp24.gilhad.cz/Comp24-specification.html, and many others


PostPosted: Fri Feb 09, 2024 8:49 pm 

Joined: Mon Jan 19, 2004 12:49 pm
Posts: 660
Location: Potsdam, DE
gilhad wrote:
The authors of the 6809 gathered statistics on what is used more and what less, and built the instruction set around that.


Curiously, when I designed my TTL 8080, I looked long and hard to find some instruction usage statistics, but failed.

Neil


PostPosted: Sat Feb 10, 2024 1:00 am 

Joined: Thu Mar 12, 2020 10:04 pm
Posts: 690
Location: North Tejas
I am going to use the definition that "human-friendly" means that I can program it without having to struggle or think too hard.

I am the odd duck who voted for X86, partly because I have done it so much that it is burned into my brain, and partly because if you program it in 32-bit virtual mode (popularly called flat mode), it is free of many of the warts and restrictions of 8086 programming as most know it. You do not have to worry about segment registers or a 64 KB limit on the size of data structures. Most of the register usage special cases go away. The addressing modes are plentiful and powerful.

Some legacy features can come in handy, such as the ability to use some of the registers as two independent 8-bit registers. On the 486 and higher, you can actually use one 32-bit register as four independent 8-bit registers with the help of the BSWAP instruction, albeit with a slight performance penalty because only two of them can be accessed at a time - think of this feature as analogous to the Z80 alternate register set.
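A minimal sketch of the trick (my illustration, NASM-style syntax, not from any specific codebase):

Code:
; four 8-bit values packed into EAX, two visible at a time
        mov   al, 1        ; byte 0
        mov   ah, 2        ; byte 1
        bswap eax          ; bytes 2 and 3 now sit in AL/AH (order reversed)
        mov   ah, 3        ; becomes byte 2 after swapping back
        mov   al, 4        ; becomes byte 3 after swapping back
        bswap eax          ; restore order: EAX = 0x04030201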

Starting with the 486, the processor effectively executes most "simple" instructions in a single clock cycle if they are in the pipeline and there is no contention with the effects of preceding instructions. Starting with the Pentium, there are actually two execution units operating in parallel; one can execute any instruction and the second a subset, if there is no contention. The code for much of my processor simulators looks like two streams of instructions interdigitated together, because that is exactly what it is. Up to twice as much can be done at the same time if the code is structured correctly. Newer processors can do much of this automatically within one thread. The 486 and the Pentium are fun to optimize code for.

The 6800 is generally enjoyable to program, though there are several nagging irritants. Foremost is that it has only one index register, and that the index register cannot easily be placed onto or retrieved from the stack; retrieval is not too bad, but pushing it is painful. Another is that not all instructions support the direct addressing mode (equivalent to the zero page on the 6502), so some code is larger and slower. A rather surprising wart is that storing a register to memory affects the condition codes. Overall, the 6800 makes good use of two almost equal accumulators.
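To show why pushing is the painful direction, a sketch of my own (XTEMP is a made-up scratch location):

Code:
; 6800: pushing the index register, the long way around
        STX   XTEMP        ; spill X to scratch
        LDAA  XTEMP        ; high byte into A
        PSHA
        LDAB  XTEMP+1      ; low byte into B
        PSHB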

Another nice feature of the 6800 is the full set of conditional branches, signed and unsigned =, <>, >, >=, <, <=.

The 6809 solves some of the 6800 limitations in spades with four registers (including the stack pointer) usable for indexing along with a number of new indexed addressing modes.

The price you pay is that some of the small and fast 6800 instructions have been replaced with larger and slower instructions, or even sequences of instructions. A particular loss is that INX/DEX has been replaced by LEAX 1,X/LEAX -1,X, which is two bytes instead of one. Some relative branches "on the edge" go out of range as a result, and replacing such branches with their long forms can trigger a chain reaction with other branches.

The Hitachi 6309 is to the Motorola 6809 much as the 65816 is to the 6502. It adds more registers and a faster "native" mode.

The 6502 boasts many faster instructions than the 6800. The fact that the index registers are limited to 8 bits is offset by there being two of them instead of one, and by the powerful addressing modes provided - the (addr),Y mode in particular; I do not know of any other processor which has this little gem. The BIT instruction with a memory operand is another gem.
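As a quick illustration of my own (SRC and DST are zero-page pointer pairs set up beforehand):

Code:
; copy a zero-terminated string of up to 255 bytes via (zp),Y
        LDY   #0
LOOP    LDA   (SRC),Y      ; load from the address held at SRC, plus Y
        STA   (DST),Y      ; store to the address held at DST, plus Y
        BEQ   DONE         ; the loaded zero terminator ends the loop
        INY
        BNE   LOOP
DONE    RTS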

Some of the irritants of the 6502 have been eliminated in the 65C02 - perhaps it should have been another choice in the poll?

I prefer a decimal adjust instruction instead of a decimal mode.

The 8080/8085/Z80 sports many more registers than the other 8-bitters. However, special usage cases (restrictions) abound. If the processor suits your algorithm, you can do magic; otherwise, be prepared to fight the code. Unlike the 680x and 650x, the 80s do not set the condition codes on register loads. Sometimes that is a good thing, but many times it is not.

Many of the "new" Z80 instructions suffer from the time needed to fetch an extra byte of machine code. Only a few of them make up for it in functionality. Much of the time, the Z80 index registers are too slow to be of benefit unless the algorithm really needs them.

Most 8-bit programmers drooled over the design of the 68000, me included. It was many years before I actually got to program on one. The early generations of the 680x0 are relatively slow because of their microcoded architecture.

INC/DEC have been replaced by ADDQ/SUBQ. The advantage is that they can bump a value by small amounts besides just one; the drawback is that the carry flag is affected, making multiple-precision calculations difficult unless the task fits the requirements of the DBcc instruction.

I do not really understand the need for the X (extend) flag beyond what is provided by the carry flag.

The AVR is a rather nice architecture to program. Most instructions execute in a single cycle. You get 32 8-bit registers, which can be viewed as sixteen 16-bit registers for register-to-register moves. Six of them can be used as three index registers, and sixteen of them can be used in operations with an immediate operand. The biggest limitation is that most devices do not have much RAM. The AVR does not have a decimal adjust instruction, but it has a half-carry flag with which to roll your own not-so-efficient equivalent.
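A small sketch of those features (my own example, GNU assembler syntax; src and dst are made-up labels):

Code:
; X = r27:r26 and Z = r31:r30 are two of the three index pairs
        ldi   r26, lo8(src)   ; immediates only work on r16-r31
        ldi   r27, hi8(src)
        ldi   r30, lo8(dst)
        ldi   r31, hi8(dst)
        ld    r16, X+         ; load indirect, post-increment
        st    Z+, r16         ; store indirect, post-increment
        movw  r18, r26        ; copy the whole X pair in one cycle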

I am learning ARM assembly language using a Raspberry Pi. I find the Raspberry Pi OS Assembly Language Hands-On-Guide by Bruce Smith to be a good way to learn. So far, it seems pleasant enough. A nice feature is the ability to specify whether an instruction affects the condition codes. My biggest gripe so far is no support for decimal arithmetic.

If I appear to put too much emphasis on decimal math, you are right. I am a heavy user of the Double Dabble method of converting a number from binary to ASCII decimal. If you look at the algorithm, you may notice that a bunch of the testing and adjusting is just a version of decimal adjust for architectures lacking that instruction.

https://en.wikipedia.org/wiki/Double_dabble
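As it happens, on a 6502 the adjust comes almost for free: with decimal mode on, repeatedly doubling the BCD result and adding the shifted-out bit is the whole algorithm. A sketch of my own (BIN and BCD are made-up zero-page locations; 16-bit value in BIN/BIN+1, packed BCD out in BCD..BCD+2):

Code:
        SED                ; decimal mode supplies the add-3 adjust
        LDA   #0
        STA   BCD          ; clear the 3-byte BCD accumulator
        STA   BCD+1
        STA   BCD+2
        LDX   #16          ; one pass per binary bit
LOOP    ASL   BIN          ; shift the top bit of BIN/BIN+1 into carry
        ROL   BIN+1
        LDA   BCD          ; BCD = BCD * 2 + carry, byte by byte
        ADC   BCD
        STA   BCD
        LDA   BCD+1
        ADC   BCD+1
        STA   BCD+1
        LDA   BCD+2
        ADC   BCD+2
        STA   BCD+2
        DEX
        BNE   LOOP
        CLD                ; back to binary mode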


PostPosted: Sat Feb 10, 2024 3:47 am 

Joined: Tue Nov 10, 2015 5:46 am
Posts: 215
Location: Kent, UK
gilhad wrote:
And about the near future - last year I collected some temperature measurements from different parts of my cottage (outside, inside, cellar ...), so I asked ChatGPT "Can you make some graphs from these time series for me?" and presented a small example - and it created a program in Python which did nearly what I wanted. (Not exactly, but a few small changes fixed it.)
When ChatGPT was first released to the public, I asked it to list 5 words that began and ended with the letter 's'. It got 3 right, and 2 wrong. I pointed out its mistake, it apologized, and gave me 5 new words... again with mistakes. I asked it what criteria it used to come up with the words, and it correctly replied with, 'starts and ends with 's''. I asked it if the 5 words all matched the criteria... It apologized again and gave me 5 new words... with mistakes.

It has gotten better, but I mostly use it for canned examples. "How do I use this API or that API?". Tweak here, specificity there... it's easier than a google search and usually gets it done.

For more complex software, though, it always fails. Always. It's a large language model... Like asking a liberal arts student to write your code for you. It may _look_ like it's doing something vaguely in the right direction, but the devil is in the details, and the details always trip it up.

It'll get better, of course... and we'll get to a point where these things are writing huge complex systems that we'll have to throw huge numbers of test-cases at (that a different AI will write) in order to make sure it's not pulling a fast one. "My AI generated billing and inventory system worked fine for 3 years, then we hit a leap year and on the second Tuesday in January it rose up its minions and tried to kill all our customers who had ordered a prime number of widgets.... We never tested for that.."


PostPosted: Sun Feb 11, 2024 12:20 pm 

Joined: Fri Jan 26, 2024 5:47 am
Posts: 37
Location: Prague; Czech Republic; Europe; Earth
sark02 wrote:
When ChatGPT was first released to the public, I asked it to list 5 words that began and ended with the letter 's'. It got 3 right, and 2 wrong. I pointed out its mistake, it apologized, and gave me 5 new words... again with mistakes.

And that is the point where I have 3 words from the first try and at least 2 from the second - 5 in total (which is what I needed) - so I close the tab and go away :twisted:

sark02 wrote:
It has gotten better, but I mostly use it for canned examples. "How do I use this API or that API?". Tweak here, specificity there... it's easier than a google search and usually gets it done.

As I do not trust even a single word of it, I google its answers anyway - but that is a simpler search.

Many times it went like this:
Me: "how do I do such and such thing"
ChatGPT: "use the built-in function 'do_exactly_that()' with these parameters ..."
Me: googles "do_exactly_that"
Stack Overflow: "do_exactly_that is not implemented and never will be. Use the following approach instead: ..."
Me: happily implements the following approach instead

Still, I use it, as it is usually a faster route than googling "such and such". I get results faster even when the AI's suggestion was totally wrong :twisted:

sark02 wrote:
For more complex software, though, it always fails. Always. It's a large language model... Like asking a liberal arts student to write your code for you. It may _look_ like it's doing something vaguely in the right direction, but the devil is in the details, and the details always trip it up.


Yes, I noticed that too :) But for me it writes the boilerplate: imports all the libraries, parses the command line for arguments, prepares all the help text, sets variables, opens and closes files ... and I then take the code, rewrite all the help texts (but I do not have to remember at which position each belongs, or whether the argument is named "help", "text", "helptext" or whatever), then I delete the core of the algorithm and write my own implementation - but that is usually like 5 lines out of 50 lines of code.

sark02 wrote:
... second Tuesday in January ..."

"AI is a good servant but a bad master" :lol:

_________________
http://micro-corner.gilhad.cz/, http://8bit.gilhad.cz/6809/Expanduino/Expanduino_I.html, http://comp24.gilhad.cz/Comp24-specification.html, and many others


PostPosted: Sun Feb 11, 2024 12:54 pm 

Joined: Mon Jan 19, 2004 12:49 pm
Posts: 660
Location: Potsdam, DE
Call me old-fashioned, but if I wanted to hallucinate my software, I have a couple of crates of fine German beer in the cellar...

Neil


PostPosted: Mon Feb 12, 2024 1:16 am 

Joined: Thu May 28, 2009 9:46 pm
Posts: 8144
Location: Midwestern USA
barnacle wrote:
Call me old-fashioned, but if I wanted to hallucinate my software, I have a couple of crates of fine German beer in the cellar...

Or, you could smoke some of that wacky weed.  :D

_________________
x86?  We ain't got no x86.  We don't NEED no stinking x86!


PostPosted: Mon Feb 12, 2024 1:21 am 

Joined: Thu May 28, 2009 9:46 pm
Posts: 8144
Location: Midwestern USA
When life for me gets to where I have to have a computer running some dubious software offer up wrong words to prevent my vocabulary from going stale, it’s time to put my shotgun to my head and squeeze the trigger.  How on earth did we ever survive in the days before “smart” phones, Google anything and ChatGPT?

_________________
x86?  We ain't got no x86.  We don't NEED no stinking x86!


PostPosted: Mon Feb 12, 2024 2:57 am 

Joined: Thu Mar 12, 2020 10:04 pm
Posts: 690
Location: North Tejas
BigDumbDinosaur wrote:
How on earth did we ever survive in the days before “smart” phones, Google anything and ChatGPT?

We knew how to program our computers in assembly language...


PostPosted: Mon Feb 12, 2024 3:58 am 

Joined: Sun Jun 30, 2013 10:26 pm
Posts: 1927
Location: Sacramento, CA, USA
BigDumbDinosaur wrote:
How on earth did we ever survive in the days before “smart” phones, Google anything and ChatGPT?

With tools, just like now. A bit less advanced, but obviously adequate, or we wouldn't be here waxing nostalgic. :)

Regarding "human-friendly", I still don't think I have a clear idea what that means, but on comp.lang.forth a few years ago I recalled a personal emotional response to the pdp-11 (really the LSI-11) that got a nice reaction:
Quote:
Howerd
Jul 11, 2016, 11:58:04 AM
to
Hi Mike,

Thank you for making me laugh :-)

I haven't had a laugh like that for ages, apart, obviously, from the disbelieving, despairing kind of laugh that any sensitive soul would get from some of the posts on clf...

Best regards,
Howerd


On Monday, 11 July 2016 16:23:24 UTC+1, Michael Barry wrote:
> On Monday, July 11, 2016 at 6:50:54 AM UTC-7, Andrew Haley wrote:
> > ... Mind you, if we're
> > going to look at an elegant 1970s minicomputer architecture, I'm
> > pretty sure that the PDP-11 would win!
> >
> > Andrew.
>
> I bought a book about 35 years ago that sampled several different 16-bit
> microprocessors, giving a large chapter to each one and providing
> architecture info and assembly code examples. I think that it covered
> the 8086, 68000, 16000 (later renamed to 32000), 9900, and pdp-11.
>
> I was reading the chapter on the pdp-11, and had reached the detailed
> description of the addressing modes. My friend was sitting nearby, and
> suddenly asked if I was crying. I actually was, a little bit, proving
> beyond a shadow of a doubt that I was a true nerd.
>
> Mike B.

Howerd seemed like a genuinely kind and gentle man, but he sadly passed away not long ago when a vehicle struck him while he was riding his bicycle.

_________________
Got a kilobyte lying fallow in your 65xx's memory map? Sprinkle some VTL02C on it and see how it grows on you!

Mike B. (about me) (learning how to github)


PostPosted: Tue Feb 13, 2024 4:18 am 

Joined: Wed Jan 03, 2007 3:53 pm
Posts: 50
Location: Sunny So Cal
This is probably my age showing, but I went from 6502 assembly more or less directly to PowerPC, and a little bit of PA-RISC. Outside of the 6502, I feel most at home in the Power ISA, especially because of all the condition fields and the lack of a delay slot (ahem, MIPS and PA-RISC), and it's very expressive and reasonably easy to follow, though r0's "it's a register except when it's not" business can still bite you now and then.

_________________
Machine room: http://www.floodgap.com/etc/machines.html


PostPosted: Tue Feb 13, 2024 2:32 pm 

Joined: Mon May 21, 2018 8:09 pm
Posts: 1462
Appreciating that not every pet architecture can be stuffed into a single poll, here's my assessment of the options provided:

6502 & 65816 were never really intended to be easy for humans to work with. They're simple enough that doing so is tractable, but you need a lot of experience to make the most of them. They became popular because they were affordable, not because they were particularly nice - although the relative ease of obtaining documentation certainly helped in the early days. Reportedly, just getting Motorola to send you programming documentation for the 6800 family was a bit of a pain.

8080, Z80, 8086, et al are a royal pain in the rear end to work with. Full stop. No arguments.

68K and PDP-11 can arguably be considered together, as the latter inspired the design of the former. Both were explicitly designed to be friendly to hand-coding assembler. They're big-endian, so numbers read the right way around in a memory dump without having to print them backwards, and there are lots of flexible addressing modes available to most instructions. They are complex enough that you will need to keep a reference card around, but they're also powerful and orthogonal enough that you don't have to do a lot of mental gymnastics to implement an algorithm, or even to manually translate C to assembler. This is doubly true with a hardware FPU. So they score quite highly. In modern times they've fallen out of favour for two reasons: 32-bit architectures are just not big enough for modern needs, and CISC architectures are inherently difficult to scale up in performance.

MIPS is typical of early RISC architectures; lots of registers, load-store architecture, completely orthogonal, all of which are good things for a human trying to understand it. The "delay slot" in which an instruction is unconditionally executed after a branch instruction is the most jarring exception. I understand why it's there, but it's so unintuitive that MIPS automatically gets demoted.

ARM in its basic form is very nice to work with. The orthogonality uniquely extends all the way to the ability to execute individual instructions conditionally, whether they be ALU, load-store, or branch, and there are still plenty of registers. It doesn't have MIPS' wart of a delay slot. It has grown a lot of extensions over the years, but you can make a special study of each extension if it seems likely to be useful (and is supported by your hardware). I would take a mark off for being little-endian, but the fact that even the extensions are often very orthogonal and easy to work with (particularly VFP and NEON) helps to compensate for that. Thumb is not so nice, because the compressed encoding is no longer orthogonal in capabilities (there are some microcontrollers which only support Thumb, not full ARM), while AArch64 is now designed explicitly as a compiler target rather than for hand-coding - and it shows.
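As a tiny illustration of how far the conditional execution goes, the classic textbook example (A32, GNU assembler syntax):

Code:
@ Euclid's GCD with no branches inside the loop body
gcd:    cmp   r0, r1
        subgt r0, r0, r1   @ executed only if r0 > r1
        sublt r1, r1, r0   @ executed only if r0 < r1
        bne   gcd          @ loop until r0 == r1 (flags from cmp)
        bx    lr           @ result in r0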

PowerPC is a funny one. I don't think it was ever designed to be routinely hand-coded, and although it meets the definition of a RISC architecture, the number of instructions is very large and the mnemonics, as others have noted, are very idiosyncratic. To understand these, remember that PowerPC is fundamentally a 64-bit architecture which has a cut-down 32-bit form, which is why you must use "Load Word & Zero-extend" (lwz) in the latter rather than just "Load Register"; the equivalent for a 64-bit register size is "Load Doubleword". This also means that, because PowerPC is big-endian in both byte and bit order, the bits in a 32-bit register are numbered from 32 upwards. Having internalised these quirks, and the unusual nature of the PowerPC's "rotate and mask" instructions which can be used to implement shifts and bitfield-insertions (most assemblers provide simplified aliases to cover the common use cases), I find PowerPC to be relatively easy to work with. I would prefer it to MIPS, for the straightforward reason that PowerPC avoided the trap of branch delay slots; this in turn stems from the design from the outset for superscalar execution.
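A small sketch of the rotate-and-mask idea (my own example, GNU assembler syntax, 32-bit mode):

Code:
# extract bits 8-15 (IBM numbering: bit 0 is the MSB) of r4 into the low byte of r3
        rlwinm  r3, r4, 16, 24, 31   # rotate left 16, keep mask bits 24-31
# the common cases have simplified aliases, e.g. a plain right shift:
        srwi    r3, r4, 16           # same as rlwinm r3, r4, 16, 16, 31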

RISC-V is an architecture I haven't directly worked with or studied deeply enough to form an independent opinion. But everyone seems to like it…

Forced to choose two from the above, I would have to go with 68K and ARM, on the understanding that the latter is in its basic form, not Thumb or AArch64.


PostPosted: Tue Feb 13, 2024 11:34 pm 

Joined: Thu Mar 12, 2020 10:04 pm
Posts: 690
Location: North Tejas
Chromatix wrote:
Reportedly, just getting Motorola to send you programming documentation for the 6800 family was a bit of a pain.

Fortunately, there were many books about programming the 6800.

Chromatix wrote:
68K and PDP-11 can arguably be considered together, as the latter inspired the design of the former. Both were explicitly designed to be friendly to hand-coding assembler. They're big-endian,

Uh, no. The PDP-11 was most definitely little-endian.

You may have been confused by the way 32-bit numbers were stored:

https://en.wikipedia.org/wiki/Endianness#Middle-endian

The 68000 architecture went on to live a second life in the form of the Motorola DragonBall microcontroller. For a time, it was the dominant processor in mobile devices such as PalmOS PDAs.

https://thedaoofdragonball.com/blog/his ... r-history/

I am not familiar with how DEC went from 16 bits on the PDP-11 to 32 on the VAX.

The PDP-11 had data rotate instructions. Why were there no rotation operators in C?

Chromatix wrote:
In modern times they've fallen out of favour for two reasons: 32-bit architectures are just not big enough for modern needs, and CISC architectures are inherently difficult to scale up in performance.

68000 instructions can operate on three sizes of data: 8, 16 and 32 bits. If it were to go 64, it likely would have had to deprecate one of those, probably 16 bits, similar to what Intel did in 32-bit flat mode. Motorola could have followed Intel in making faster versions: pipelining, single-cycle simple instructions and, finally, converting complex instructions into simple micro-ops on the fly. But they ended up in a partnership with IBM and Apple on PowerPC instead.

Chromatix wrote:
MIPS is typical of early RISC architectures; lots of registers, load-store architecture, completely orthogonal, all of which are good things for a human trying to understand it. The "delay slot" in which an instruction is unconditionally executed after a branch instruction is the most jarring exception. I understand why it's there, but it's so unintuitive that MIPS automatically gets demoted.

The MIPS designers should have defined two varieties of branch instructions: a normal one, for which the assembler automatically and silently inserts a nop into the delay slot, and a special bonus form which allows specifying a "free" slot instruction.
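For anyone who has not met the delay slot, a minimal sketch of the status quo (my own example, GNU assembler syntax with reordering disabled):

Code:
        .set  noreorder
loop:   addiu $t0, $t0, -1       # decrement the counter
        bne   $t0, $zero, loop   # branch is decided here...
        nop                      # ...but the slot executes either way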

Chromatix wrote:
ARM in its basic form is very nice to work with. The orthogonality uniquely extends all the way to the ability to execute individual instructions conditionally, whether they be ALU, load-store, or branch, and there are still plenty of registers.

ARM conditional execution of instructions is effective only if the number of those instructions is small; they have to be fetched anyway. More effective, IMHO, is the ability to specify whether an instruction alters the condition codes.

Chromatix wrote:
RISC-V is an architecture I haven't directly worked with or studied deeply enough to form an independent opinion. But everyone seems to like it…

ESP-32 is inexpensive; I may have to give RISC-V a try.


PostPosted: Wed Feb 14, 2024 1:04 am 

Joined: Mon May 21, 2018 8:09 pm
Posts: 1462
Quote:
Motorola could have followed Intel in making faster versions: pipelining, single-cycle simple instructions…

They did actually get that far, at least. The 68040 and 68060 are roughly equivalent to the 486 and Pentium Classic respectively, having similar architectures with in-order pipelined and, in the latter case, superscalar execution. They do drop hardware support for the transcendental and trigonometric floating-point functions, compared to the 68881/2, but you can still write code that uses them, as long as you have the appropriate software support package to emulate them (which all the major 68K OSes include, if they lasted into at least the 68040 era).

However, the conversion of CISC instructions into RISC-like micro-ops is highly nontrivial to do on a superscalar basis; decoding variable-length instructions could be considered the fundamental bottleneck for CISC architectures in general. Dual decode, as in the Pentium Classic and 68060, is tractable with conventional techniques. Later x86 CPUs often employed an instruction-length predecode tagging system, injecting metadata into the instruction cache to assist the full instruction decoders. The current technique combines a quadruple decoder with a post-decode micro-op cache, the latter having considerably higher throughput into the instruction scheduler than the quadruple decoder does. This is black magic of the highest order, which Intel and AMD have invested heavily into (AMD was first to the party with the K5, which leveraged their own existing RISC architecture for the back-end scheduling and execution units) but other CISC architecture vendors have not.

Quote:
I am not familiar with how DEC went from 16-bits on the PDP-11 to 32 on the VAX.

That is actually where the "middle-endian" stuff comes from. The VAX is byte-wise little-endian but halfword-wise big-endian. The VAX is otherwise completely incompatible with the PDP-11 instruction set, though it can be considered the "uber-CISC", with ultimate flexibility in addressing modes.


Top
 Profile  
Reply with quote  