GARTHWILSON wrote:
* the languages themselves?
No. This applies just as well to functional, object-oriented, and procedural languages. And since stack architectures are a subset of accumulator architectures, only stack-based languages like Forth could even claim an exception.
Quote:
* the types of applications done most with those languages
No. C, Java, and numerous other languages in the C family have been applied to problems ranging from arcade games to finance to medical imaging to CAD and beyond.
Quote:
that the computers they are run on normally have more hardware like video and sound cards (or chip sets) to support the processor?
No. Hand-crafted 2.8MHz 65816 assembly demonstrates a level of graphical performance on the Apple IIgs that competes surprisingly favorably with the Amiga's dedicated blitter hardware running at 3.57MHz[1]. Anyone who has used Deluxe Paint II on the Apple IIgs can prove this point with great facility by grabbing an eighth of a 320x200x16-color screen as a brush, then selecting the spirograph tool and drawing with the brush. Granted, the Amiga is still faster in practice, but when push comes to shove, the 65816 has enough horsepower to obviate the need for dedicated graphics hardware relative to comparable 68K-based systems.
That being said, it's clear from published benchmarks that compilers for the 68K architecture produce 2x to 3x faster code. How does one explain this performance discrepancy? I suspect compilers for the 68K family do not have to juggle memory to and from a single accumulator, or tweak word-width flags, or constantly reload the Y register for random access to data structures. Each of these offers only a small speed-up, but since they all occur in combination, their effects multiply.
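To make that concrete, here is a quick sketch of the sort of everyday C that widens the gap (the struct and its fields are made up for illustration, and the comments paraphrase the argument above rather than quote actual compiler output):
Code:
struct particle {
    char  flags;      /* byte-sized field  */
    short x, y;       /* 16-bit fields     */
    long  velocity;   /* 32-bit field      */
};

long step(struct particle *p)
{
    p->y += p->x;            /* 68K: register-offset moves and one add;
                                65816: everything funnels through A, with Y
                                reloaded for each field's (dp),Y offset.      */
    p->flags |= 0x01;        /* byte access: the 65816 compiler must SEP/REP
                                the accumulator width around it.              */
    return p->velocity + 1;  /* 32-bit add: one instruction on the 68K,
                                two carry-chained 16-bit passes on the 65816. */
}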
Quote:
* the size of the job, or maybe that the industry is stuck on preëmptive multitasking?
Preemptive multitasking is a huge, huge, huge performance booster in practice. Cooperative multitasking is more time-efficient only in closed environments such as you'd find in deeply embedded applications. For everything else, cooperative has been shown to be fatally susceptible to poor programming practices (even accidental ones), to the point of rendering a computer so unresponsive as to require a reboot. Cf. Windows 3.0, OS/2 1.3 and earlier, the Contiki event-driven OS, etc.
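Here's how little it takes to wedge a cooperative system -- a toy sketch, not any real OS's API (the scheduler loop and task names are invented for illustration). Control only comes back when a task deigns to return, so one misbehaving task starves everything else, which is exactly the Windows 3.x failure mode:
Code:
#include <stdio.h>

typedef void (*task_fn)(void);

static void update_clock(void) { puts("clock ticks"); }  /* returns promptly     */
static void redraw_ui(void)    { puts("UI redraws"); }   /* returns promptly     */
static void buggy_task(void)   { for (;;) { } }          /* never gives CPU back */

int main(void)
{
    /* Cooperative round-robin: each task runs until it chooses to return. */
    task_fn tasks[] = { update_clock, redraw_ui, buggy_task };
    int i;

    for (;;)
        for (i = 0; i < 3; i++)
            tasks[i]();  /* once buggy_task is entered, nothing else ever runs */
    return 0;
}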
Additionally, if your kernel has a well-chosen set of task primitives, you'll find coding for a preemptive multitasking environment quite easy. The Amiga operating system, for example, has one, and only one, system call that puts a task to sleep -- Wait(). Other blocking system calls exist, but all of them ultimately have to call Wait() in order to put the task to sleep. Conversely, there is one, and only one, mechanism for waking a task -- Signal().
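For flavor, here's a minimal sketch of the sleeping side (standard AmigaOS includes assumed; the CTRL-C break signal stands in for any signal bit that another task could deliver with Signal()):
Code:
#include <exec/types.h>
#include <dos/dos.h>      /* SIGBREAKF_CTRL_C */
#include <proto/exec.h>   /* Wait()           */
#include <stdio.h>

int main(void)
{
    ULONG received;

    printf("Going to sleep...\n");
    /* The one and only sleep primitive: block until any bit in the mask
       arrives.  The Shell (or any other task) wakes us by calling
       Signal(task, SIGBREAKF_CTRL_C). */
    received = Wait(SIGBREAKF_CTRL_C);
    printf("Woken with signal mask 0x%08lx\n", (unsigned long)received);
    return 0;
}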
On top of these basic primitives, AmigaOS provides message queues (called "message ports" in AmigaOS lingo), semaphores, and, I think, a few other basic primitives. But of all the primitives supplied, take a guess as to which one (and I do emphasize the singular here) is preferred for very nearly everything in the OS?
Whether you're implementing a GUI application, a device driver, or a filesystem, you're going to be working with message queues. It's far simpler than the bullshido you get with PThreads or Win32 threads, and it essentially mimics how you'd build a real-world, multi-processor embedded application anyway. Oh, and it's also the programming model used by Erlang -- if you've written an Intuition application for AmigaOS, you already know how to code parallel applications in Erlang, even if you don't yet know how to code in Erlang.
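If you've never seen it, the receive side looks roughly like this -- a sketch in which MyMsg and consumer() are made-up names, but the exec calls are the real ones:
Code:
#include <exec/types.h>
#include <exec/ports.h>
#include <proto/exec.h>

struct MyMsg {
    struct Message msg;   /* exec header: node, reply port, length */
    LONG payload;         /* application-defined content           */
};

void consumer(struct MsgPort *port)
{
    for (;;) {
        struct MyMsg *m;

        WaitPort(port);                          /* sleep until something arrives */
        while ((m = (struct MyMsg *)GetMsg(port)) != NULL) {
            /* ... act on m->payload ... */
            ReplyMsg(&m->msg);                   /* hand it back to the sender    */
        }
    }
}
The sending side is symmetric: embed a struct Message at the front of your own structure, point its mn_ReplyPort at a port you own, and PutMsg() it at the consumer's port; the ReplyMsg() above is what eventually wakes you back up.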
The numbers speak for themselves. In 256K of space (mostly generated by a C compiler, at that), Kickstart provides a full complement of device drivers, libraries, and supporting background tasks, all communicating through message queues. Add in the disk-resident software and that figure goes up to about 1MB or so (Workbench shipped on an 880K floppy as uncompressed binary images). Binaries were kept small, the OS is still considered darn tiny by today's standards, and yet it relies heavily on preemptive multitasking.
No MMU required -- just brains.
Quote:
* or maybe that portability has been more sacred than the ability to get close to the heart of the machine?
I suspect that without the business imperative (deliver a product on time and within budget), the need for high-level programming languages would not be as strong. As it is, companies use HLLs with the full knowledge and acceptance that they're trading some fraction of runtime performance for improved programmer productivity. Coding a program faster than your competitor means, automatically, that you can respond to customer desires faster, which earns you a greater market share. You don't use Ruby to write a real-time engine-control package. But it works great for delivering a production-ready web application to millions of users in only two months.
Alternatively, now that the market is saturated with HLL coders, it also means you need not spend so much time training new hires, which also reduces overheads for the company.
Quote:
I'm trying to understand if any of it is truly relevant to us who may just want to handle bigger numbers in one gulp.
Like I said in my original post, if all you want is a bigger gulp size, then you can get by with a wider byte. But even this fundamentally alters the "look and feel" of the CPU.
Code:
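; one pass, assuming a hypothetical 65xx variant whose accumulator is as wide as aBigNumber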
lda aBigNumber
clc
adc anotherBigNumber
sta aBigNumber
simply feels a lot different when coding than:
Code:
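; the same add when the accumulator is only half as wide (e.g. 32-bit operands on a 65816 in 16-bit mode): two passes, carry chained through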
lda aBigNumber
clc
adc anotherBigNumber
sta aBigNumber
lda aBigNumber+2
adc anotherBigNumber+2
sta aBigNumber+2
___________________
1. Before people chop my head off for making this statement: if you pick up the Amiga Hardware Reference Manual, you will observe that the blitter can write to RAM no faster than one word every two cycles at 7.15909MHz, and that's assuming all you're doing is filling memory with a fixed value. It runs slower still if you're using it to process one or more source channels.