6502.org Forum
PostPosted: Sat Oct 13, 2012 6:15 pm 

Joined: Tue Jul 05, 2005 7:08 pm
Posts: 1043
Location: near Heidelberg, Germany
Hi there,

recently I stumbled over this piece: http://www.alfonsomartone.itb.it/aunlzr.html

It is a somewhat heated argument that the ZX Spectrum is faster than the C64, and that the paper stating the opposite is wrong.

The article mentioned above is here at ffd2.com: http://www.ffd2.com/fridge/speccy/score The code is unavailable.

The ffd2.com article states a "typical" ratio of 3 Z80 cycles to 1 6502 cycle. This would make the Z80 in the Spectrum faster than the C64 (depending on the concrete benchmark), as the Spectrum's Z80 runs at 3.5 MHz.
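As a sanity check on where that ratio leads, here is a back-of-the-envelope calculation (the 3:1 figure is the ffd2.com claim, not a measurement, and the clocks are the nominal ones):

```python
# Rough throughput comparison under the claimed "3 Z80 cycles per
# 1 6502 cycle" rule of thumb. Nominal clocks: Spectrum Z80 at
# 3.5 MHz, C64 6502 at ~1 MHz (0.985 MHz on PAL machines).
Z80_CLOCK_MHZ = 3.5
C64_CLOCK_MHZ = 1.0
Z80_CYCLES_PER_UNIT = 3.0     # the "typical" ratio from the article
C6502_CYCLES_PER_UNIT = 1.0

z80_units_per_us = Z80_CLOCK_MHZ / Z80_CYCLES_PER_UNIT
c64_units_per_us = C64_CLOCK_MHZ / C6502_CYCLES_PER_UNIT
ratio = z80_units_per_us / c64_units_per_us

print(f"Spectrum: {z80_units_per_us:.2f} work units/us")
print(f"C64:      {c64_units_per_us:.2f} work units/us")
print(f"Spectrum/C64 ratio: {ratio:.2f}")   # ~1.17 under these assumptions
```

So if the 3:1 ratio holds, the Spectrum comes out roughly 17% ahead; at 3.5:1 the two machines break even, which is why the exact ratio is the whole argument.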

Is the article above just a heated blowup by a Spectrum fan, or are there valid criticisms in it (like the CDIR comment)? Without the code I don't know.

Other benchmarks are:
http://www.hpmuseum.org/cgi-sys/cgiwrap ... ead=120687 (found here: viewtopic.php?f=1&t=2234&view=previous and scroll to Ed's post from April 24 - which actually refers to the one I started with :-)

Is it time to reevaluate this? Or do we just stick with the current state of knowledge?

Edit: clarified a bit which article is which

_________________
Author of the GeckOS multitasking operating system and the usb65 stack, designer of the Micro-PET, and creator of much more 6502 content: http://6502.org/users/andre/


PostPosted: Sun Oct 14, 2012 11:18 pm 

Joined: Sat Dec 13, 2003 3:37 pm
Posts: 1004
Comparing the 6502 to the Z80 is a different problem than comparing the Spectrum to the C64, the MHz gap being the primary distinction.

Back in the day, pre-IBM PC, the 4 MHz Z80 was downright ubiquitous; everyone and their uncle had a Z80 machine available, no doubt empowered by CP/M. On the other hand, these machines had little to really distinguish themselves. Some had video cards, some didn't, many used serial terminals, but beyond that: 4 MHz, 64K, 2 floppies, knock yourself out. There were some advanced, multi-user systems, but they were all pretty much unique.

But the driver was certainly that the 4 MHz Z80 was pretty fast for its day, the hardware was stable, and a large software library was available through CP/M.

At the same time, they were also expensive. 6502 machines were cheaper, the chip itself was cheaper, and, even at 1 MHz, it was "fast enough". But for the home market, the vendors focused on the embedded graphics and sound capabilities to differentiate themselves from each other. As an Atari owner, we never got into benchmark battles with C64 folks, or Apple folks, even though the Atari was running at 1.7 MHz in contrast to the C64's and Apple's 1 MHz. Speed never came up; we were too busy doing display list demos with our 4 joysticks and smooth scrolling and whatnot. Also, Atari's BASIC was notorious for being slow.

Now, I can't speak to why the difference in MHz never bubbled up to the surface. Perhaps all that cool Atari hardware starved cycles from the CPU, lowering net throughput. But at the time, it really never came up.

Finally, Z80s ran business software, ran 80-character screens, and had fast disk drives. 6502s ran games and other home computing tasks. Even the Apple ][ relied more on the Z80 CP/M card than on its 6502 for "business" uses. The simple truth is that CP/M ruled the Z80, not the other way around. 6502 machines had no need for any compatibility whatsoever with each other, and were able to offer a lot more innovation to the market. As an Atari bigot, I never cared much for the C64, but I certainly respect it and its impact on home computing at large.

By the time the PC came out, it basically stomped on the Z80 market. I can't say if the 4.77 MHz 8088 was faster than a 4.77 MHz Z80, tick for tick. I believe that if you can live within the 64K limits of a raw Z80, it's faster than a similarly clocked 8088. Ciarcia's SB180 was his answer for those folks running legacy CP/M looking for a faster machine without going the PC route. The Hitachi 64180 was a faster Z80 (not just in MHz) plus more.

Of course, today, we have 20 MHz Z80s and faster 65816s, but at the same time we have GHz "8088s". So, now it's all about computational power per watt and other issues rather than raw speed.


PostPosted: Mon Oct 15, 2012 8:26 am 

Joined: Sun Apr 10, 2011 8:29 am
Posts: 597
Location: Norway/Japan
whartung wrote:
[..]
Of course, today, we have 20 MHz Z80s and faster 65816s, but at the same time we have GHz "8088s". So, now it's all about computational power per watt and other issues rather than raw speed.

Thanks for the trip down memory lane! Ah, the good old days.. it's true that nowadays it's mostly about performance per watt (I care more about battery consumption than anything else when I shop for notebooks or tablets, and even our customers with racks of hardware are starting to notice the huge power consumption of these GHz monsters).
However, just recently I started on a new project at work, and for the first time in a couple of decades it's suddenly about performance again.. so I've been doing profiling, and analysis, and thinking about how to do bit-fiddling just a bit faster, slicing off a few milliseconds here and there. It all adds up to some significant numbers. It's one of those problems where it doesn't help to throw cores at it, the only thing that helps, everything else being the same, is the GHz number.. so it would be much better with a single 4GHz core than a 16-core 2.4GHz system.
Anyway, lots of fun! Just like in the old days.

-Tor


PostPosted: Mon Oct 15, 2012 2:37 pm 

Joined: Sat Dec 13, 2003 3:37 pm
Posts: 1004
A few years ago, Apple was in the midst of a whole "MHz doesn't matter" campaign with the PowerPC vs. Intel chips. PPC has caught up somewhat in the MHz area, but both lines have changed quite a bit over time, so I don't know if they're faster for what you'll want to do or not.


PostPosted: Mon Oct 15, 2012 3:37 pm 

Joined: Fri Dec 11, 2009 3:50 pm
Posts: 3367
Location: Ontario, Canada
Tor wrote:
I've been doing profiling [...] It's one of those problems where it doesn't help to throw cores at it, the only thing that helps, everything else being the same, is the GHz number..
Sounds like you've got a handle on what the important issues are, Tor. Of course, as per whartung's remarks, a GHz of performance on one machine isn't necessarily the same as a GHz of performance on another machine. This is not just the "instructions per clock" (IPC) issue but also a question of various cache sizes and behaviors. Cache issues can be very important! It used to be that a programmer's choice of algorithm was the overriding decision, performance-wise. Nowadays that's often still true, but beware -- cache issues can turn a superficially "good" algorithm into a loser and a supposedly inferior algorithm into a winner.
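A crude way to see the access-pattern effect is below (a sketch only; in CPython the interpreter overhead masks much of it, while compiled languages show it dramatically):

```python
import time

# Same O(N^2) work, different memory access order. On most hardware
# the row-major pass walks memory sequentially and is cache-friendly;
# the column-major pass strides across rows and is not.
N = 500
grid = [[1] * N for _ in range(N)]

def row_major():
    return sum(grid[r][c] for r in range(N) for c in range(N))

def col_major():
    return sum(grid[r][c] for c in range(N) for r in range(N))

for fn in (row_major, col_major):
    t0 = time.perf_counter()
    total = fn()
    print(f"{fn.__name__}: sum={total}, {time.perf_counter() - t0:.3f}s")
```

Both passes compute the same answer; only the traversal order (and thus the cache behavior) differs.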

As for 6502 versus Z80, I recall a quotation from Chuck Peddle... (sorry, I don't have a link handy). Naturally he was bullish in regard to the 6502's overall performance. However, he equitably conceded that a Z80 could be faster if the application was one which would keep the Z80's larger register set full of often-used data.

Which in a way brings us back to cache issues. Hmmm....

-- Jeff

_________________
In 1988 my 65C02 got six new registers and 44 new full-speed instructions!
https://laughtonelectronics.com/Arcana/ ... mmary.html


PostPosted: Mon Oct 15, 2012 4:23 pm 

Joined: Thu May 28, 2009 9:46 pm
Posts: 8514
Location: Midwestern USA
Dr Jefyll wrote:
...a Z80 could be faster if the application was one which would keep the Z80's larger register set full of often-used data.

Yep, assuming interrupt latency wasn't an issue. :D

_________________
x86?  We ain't got no x86.  We don't NEED no stinking x86!


PostPosted: Mon Oct 15, 2012 5:18 pm 

Joined: Fri Aug 30, 2002 1:09 am
Posts: 8546
Location: Southern California
In computers, a speed difference of 20% is pretty insignificant. My workbench computer starts having trouble at 40% above the clock speed I've been running it at; so yes, it could go 20% faster, but that's not enough difference to really matter. I found the max and backed it off a bit for some margin. Now if I could double the speed, that would be quite significant (but still not earthshaking). Comparing different computers though, you have to compare for your application, not individual instructions like this man has chosen to pick on. Like I tried to explain to a boss in the 1980's, the power of a computer depends on more than what processor and clock speed it has.

But repeating an earlier post:

There were a couple of paragraphs in an article by Jack Crenshaw in the 9/98 issue of Embedded Systems Programming where he talks about different BASICs he used on computers in the 1970's and 1980's, and said the 6800 and 6502 always seemed to run them faster than any other processor. He says that to him, the 6502 was a "near knock-off" of the 6800, and says he liked the 6800 architecture far more than that of the 80 family, even though his work made him much more familiar with the latter. Quoting two paragraphs:

"To me, the 8080 and Z80 always seemed to be superior chips
to the 6800 and 6502. The 8080 had seven registers to the
6800's two (plus two index registers). The Z80 added
another seven, plus two more index registers. Nevertheless,
I can't deny that, benchmark after benchmark, BASIC
interpreters based on the 68s consistently outperformed
those for the 80s.

"The biggest problem with the 68s was that they had no
16-bit arithmetic. Though the 8080 and Z80 were basically
8-bit processors, at least they had 16-bit registers (three
for the 8080, eight for the Z80), and you could actually
perform arithmetic in them, shift them, test them, and so
on. You couldn't do any of these things with the 6800 or
6502, which is one reason I still don't understand, to this
day, how the 68s could outperform the 80s in benchmarks."

After learning the 6502's instruction set and bus usage, I remember being impressed by the relative inefficiency of the 80 family, including the number of clock cycles it took to execute a single instruction, and how many extra instructions were needed because it did not have things like the 6502's automatic decimal arithmetic with the D flag, and the compare-to-0 implied in all logical, bit, arithmetic, and load instructions.

I don't know why he says, "You couldn't do any of these things in the 6800 or 6502" though. Sure you could.

Even at that time, though, the 80's were generally run at 4 MHz or a little higher, IIRC, and they were still losing to the 1 MHz 6502's and 6800's.

_________________
http://WilsonMinesCo.com/ lots of 6502 resources
The "second front page" is http://wilsonminesco.com/links.html .
What's an additional VIA among friends, anyhow?


PostPosted: Mon Oct 15, 2012 7:10 pm 

Joined: Tue Nov 16, 2010 8:00 am
Posts: 2353
Location: Gouda, The Netherlands
The Z80's interrupt latency isn't that bad, is it? It takes 11 cycles for an NMI, 13 cycles for Mode 0/Mode 1, and 19 cycles for Mode 2, which includes a vectored jump. That's more cycles than a 6502, but the Z80 can run at higher clock frequencies (using the same RAM), so it's not as bad as it looks. Also, the Z80 has an extra register bank that can be reserved for a fast interrupt routine, eliminating the need to save registers.
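Putting rough numbers on that (the Z80 cycle counts are from above; the clock speeds are assumed, era-typical figures, and the 6502's IRQ entry sequence takes 7 cycles):

```python
# Wall-clock interrupt entry time at assumed, era-typical clock speeds.
Z80_MHZ, M6502_MHZ = 4.0, 1.0   # assumed same-era clocks
z80_im1_cycles = 13             # Z80 Mode 1: jump to fixed address 0x0038
m6502_irq_cycles = 7            # 6502 IRQ sequence through the vector fetch

z80_entry_us = z80_im1_cycles / Z80_MHZ
m6502_entry_us = m6502_irq_cycles / M6502_MHZ

print(f"Z80 IM1 entry:  {z80_entry_us:.2f} us")    # 3.25 us
print(f"6502 IRQ entry: {m6502_entry_us:.2f} us")  # 7.00 us
```

On these assumed clocks, the Z80's speed advantage more than covers its extra entry cycles, and swapping to the alternate register bank avoids the save/restore cost entirely.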


PostPosted: Mon Oct 15, 2012 9:15 pm 

Joined: Thu May 28, 2009 9:46 pm
Posts: 8514
Location: Midwestern USA
Arlet wrote:
The Z80's interrupt latency isn't that bad, is it? It takes 11 cycles for an NMI, 13 cycles for Mode 0/Mode 1, and 19 cycles for Mode 2, which includes a vectored jump. That's more cycles than a 6502, but the Z80 can run at higher clock frequencies (using the same RAM), so it's not as bad as it looks. Also, the Z80 has an extra register bank that can be reserved for a fast interrupt routine, eliminating the need to save registers.

It's still a few clocks slower. What matters is not that one MPU is slightly quicker on the draw than the other when an IRQ occurs but the effect of scale. Those one or two extra clock cycles consumed by the Z80 quickly add up when a lot of interrupting is going on, e.g., when asynchronous I/O is in progress.

_________________
x86?  We ain't got no x86.  We don't NEED no stinking x86!


PostPosted: Mon Oct 15, 2012 9:57 pm 

Joined: Thu Dec 11, 2008 1:28 pm
Posts: 10986
Location: England
I think comparisons with clock speeds can be misleading, because the Z80 takes several ticks per memory access. Comparing systems with the same memory cycle times might be more relevant - and that brings us to the point that Z80 was much easier to use in a large-memory DRAM system. The 6502 won for lower chip count in a smallish SRAM system. Now we have huge SRAMs the situation is different. I'm not sure how the memory cycle times compared in those days between SRAM and DRAM.

The point is well made that the Z80's register pool is handy, until it's too small. (The alternate register set is clearly a means to help with interrupt servicing, although it can alternatively be used for extra space.)

Those discussions linked are entertaining in their way but there's a lot of heat in the arguments and not so much light.

Is there some aspect of the Z80 implementations which allows it to run faster on a given process? I see it's sold now at 50MHz, compared to 65xx's 16MHz, which is still 3x faster.


PostPosted: Mon Oct 15, 2012 10:37 pm 

Joined: Sat Dec 13, 2003 3:37 pm
Posts: 1004
GARTHWILSON wrote:
"To me, the 8080 and Z80 always seemed to be superior chips
to the 6800 and 6502. The 8080 had seven registers to the
6800's two (plus two index registers). The Z80 added
another seven, plus two more index registers. Nevertheless,
I can't deny that, benchmark after benchmark, BASIC
interpreters based on the 68s consistently outperformed
those for the 80s.

This is really the crux of the matter for many. MHz and whatnot don't really come into play when your primary interface with the computer is the BASIC language, or VisiCalc, or Excel, or Photoshop.

As I mentioned earlier, even though the Atari was clocked ~80% faster than the C64, speed was never really mentioned in comparisons between the two machines, save that Atari BASIC was known to be slow. So, whatever benefits the processor may give you are consumed by the underlying tool set.

Side anecdote regarding Atari BASICs. My favorite was OSS BASIC XL. One of its distinguishing features was the world's best computer language statement. The statement keyword was "FAST". The FAST statement instructed the runtime to run through the program and pre-compute all of the looping constructs. Turns out that with "FOR I = 1 TO 10 : NEXT I", the runtime pushed the LINE NUMBER of the top of the loop (the FOR statement). When it hit the NEXT statement, the runtime had to SEARCH the entire program for that line number on each iteration. So, you ended up with code that had common routines at the top of the program for faster access. BASIC XL routed around that by pre-computing the destinations at the start. It caused a notable delay, but, boy, was it fast. It had a real impact. But I always enjoyed the idea that the language had such a thing as a "FAST" command. Kind of like the turbo switches on later PCs.
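A toy model of the difference (line numbers and program size invented for illustration; this mimics the behaviour described, not Atari BASIC's actual internals):

```python
# Stock behaviour: NEXT searches the program for its FOR's line
# number on every iteration. FAST-style: one pass at RUN time builds
# a lookup table, then each iteration is a direct lookup.
program_lines = list(range(10, 10_010, 10))   # a 1000-line program
for_line = 9000                               # loop buried near the end

def next_via_search(iterations):
    """Line-number comparisons done by the naive interpreter."""
    comparisons = 0
    for _ in range(iterations):
        for n in program_lines:
            comparisons += 1
            if n == for_line:
                break
    return comparisons

def next_via_table(iterations):
    """FAST-style: pay one scan up front, then O(1) per iteration."""
    table = {for_line: program_lines.index(for_line)}
    comparisons = len(program_lines)          # the one-time precompute
    for _ in range(iterations):
        _ = table[for_line]                   # direct lookup, no scan
    return comparisons

print(next_via_search(100))   # 90000 comparisons
print(next_via_table(100))    # 1000 comparisons, paid once at RUN
```

The precompute cost is fixed, while the search cost scales with both program length and iteration count, which is exactly why FAST paid off on loops.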

During the "MHz" wars, Apple was running Photoshop benchmarks to show their machines were faster than Windows machines, and in the end, that's all that really mattered to those who use the computer.


PostPosted: Tue Oct 16, 2012 5:09 am 

Joined: Thu May 28, 2009 9:46 pm
Posts: 8514
Location: Midwestern USA
whartung wrote:
The statement keyword was "FAST". The FAST statement instructed the runtime to run through the program and pre-compute all of the looping constructs. Turns out that with "FOR I = 1 TO 10 : NEXT I", the runtime pushed the LINE NUMBER of the top of the loop (the FOR statement). When it hit the NEXT statement, the runtime had to SEARCH the entire program for that line number on each iteration. So, you ended up with code that had common routines at the top of the program for faster access. BASIC XL routed around that by pre-computing the destinations at the start. It caused a notable delay, but, boy, was it fast. It had a real impact. But I always enjoyed the idea that the language had such a thing as a "FAST" command.

Actually, that feature mimics the way Business BASIC dialects (e.g., BB86, Thoroughbred, etc.) optimize their runtime performance (BB has been around since the early 1970s). Like Atari BASIC, BB would tokenize keywords, verbs, etc., to both shrink the size of the program and to accelerate runtime processing. However, BB also generates a symbol table that is attached to the "compiled" program, which is loaded into workspace when the program is run. The symbol table has the address of each variable declared and used in the program, so variable lookup is done via a binary search on the symbol table, instead of by looking at the variable space in RAM. However, where the real speed payoff came was in using labels for statements, e.g., GOSUB PRINT_A_CHAR instead of GOSUB 10000. The PRINT_A_CHAR subroutine's relative address would be in the symbol table and again, would be found by doing a binary search. A similar arrangement existed for using line numbers as targets, but because they could not be sorted into lexical order, a separate table maintained the binary line number linked to a relative address. A linear search of this table was required.

Over the years I've written hundreds of megabytes of BB code and have always used the statement labels. The difference in performance when a subroutine is called in a loop is noticeable.
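The two lookup schemes described can be sketched like this (names and addresses are invented for illustration; BB's real tables are of course more involved):

```python
import bisect

# Labels live in a sorted symbol table, so lookup is a binary search.
symbols = sorted([("CLOSE_FILES", 0x0410), ("PRINT_A_CHAR", 0x0120),
                  ("READ_RECORD", 0x0300)])
labels = [name for name, _ in symbols]

def lookup_label(name):
    """Binary search on the sorted label table."""
    i = bisect.bisect_left(labels, name)
    if i < len(labels) and labels[i] == name:
        return symbols[i][1]
    raise KeyError(name)

# Numeric line targets sit in a separate table that is not in sorted
# order, so resolving them takes a linear scan.
line_table = [(100, 0x0000), (9000, 0x0120), (500, 0x0300)]

def lookup_line(number):
    for n, addr in line_table:
        if n == number:
            return addr
    raise KeyError(number)

assert lookup_label("PRINT_A_CHAR") == lookup_line(9000) == 0x0120
```

The binary search costs O(log n) comparisons per GOSUB versus O(n) for the linear scan, which is the payoff described for statement labels.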

Quote:
Kind of like Turbo Switches on later PCs.

Not really. The turbo switch boosted the clock speed. Atari's FAST statement was an execution optimizer that would only affect a BASIC program. Machine code would run as it always did.

Quote:
Apple during the "Mhz" wars were running PhotoShop benchmarks to show their machines were faster than Windows machines, and in the end, that's all that really mattered to those who use the computer.

Perception is everything. Since much of a user's time is spent watching the screen being drawn by the currently-executing program, optimizations that improve screen painting can appear to make a slower system seem faster, at least until compute-bound processing gets into the picture.

_________________
x86?  We ain't got no x86.  We don't NEED no stinking x86!


PostPosted: Tue Oct 16, 2012 6:36 am 

Joined: Tue Nov 16, 2010 8:00 am
Posts: 2353
Location: Gouda, The Netherlands
BigEd wrote:
Is there some aspect of the Z80 implementations which allows it to run faster on a given process? I see it's sold now at 50MHz, compared to 65xx's 16MHz, which is still 3x faster.

I don't see why the 6502 couldn't be clocked at much higher frequencies. If I can get a core to run at 100 MHz in an FPGA, it should be able to go several times that in an ASIC. I think the limiting factor is the memory speed. The 6502 requires a memory access time of less than half a clock, so while it may be possible to make a 100 MHz core, it would be virtually impossible to meet that speed with the memory system.
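Rough arithmetic on that memory window (half-cycle access is the constraint stated above; the clock figures are just sample points):

```python
# Memory access window for a 6502-style bus: the address goes out in
# the first half-cycle, and data must be valid within the second half.
for clock_mhz in (14, 25, 100):
    cycle_ns = 1000 / clock_mhz
    print(f"{clock_mhz:3d} MHz: cycle {cycle_ns:5.1f} ns, "
          f"memory window < {cycle_ns / 2:5.1f} ns")
```

At 100 MHz the memory has under 5 ns to respond, before even accounting for address setup and data hold, which is why the core can outrun any realistic memory system.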

That also means that for comparisons between Z80 and 6502, I don't think we should look at the raw cycle counts. Instead, divide the Z80 cycle counts by 3 or 4, to reflect the naturally higher clock rate.
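For instance (T-state and cycle counts are standard datasheet figures; the instruction pairings are loose analogies, not exact equivalents, and the 3.5x clock ratio is an assumption):

```python
# Normalize Z80 T-state counts by an assumed clock-speed ratio to get
# "6502-equivalent" cycles, per the suggestion above.
CLOCK_RATIO = 3.5   # assumed Z80 clock / 6502 clock at equal memory speed

pairs = [
    ("LD A,(HL)", 7,  "LDA (zp),Y", 5),   # indirect load
    ("ADD HL,DE", 11, "ADC abs",    4),   # 16-bit add vs one 8-bit add
    ("DJNZ e",    13, "DEX + BNE",  5),   # decrement-and-branch, taken
]

for z_op, z_t, m_op, m_c in pairs:
    print(f"{z_op:10s} {z_t:2d}T -> {z_t / CLOCK_RATIO:4.1f} "
          f"6502-equivalent cycles   vs {m_op:10s} {m_c} cycles")
```

Normalized this way, the two instruction sets land much closer together than the raw T-state counts suggest.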


PostPosted: Tue Oct 16, 2012 7:49 am 

Joined: Tue Sep 11, 2007 8:48 am
Posts: 27
Location: Pruszków/Poland
A pity that we can buy 100 MIPS 8051's today... this should be the 65xx, not some ugly i51 stuff.

_________________
Practice safe HEX !


PostPosted: Tue Oct 16, 2012 1:19 pm 

Joined: Mon Apr 23, 2012 12:28 am
Posts: 760
Location: Huntsville, AL
The 8051 has always been a poor excuse for a general-purpose processor. I certainly appreciate its multi-vendor support and all of the vendor-specific implementations, but in an instruction set and architecture comparison, the Z80, MC6800, and WDC65C02 easily outclass it. That so many C compilers are available for this little beast with 5 address spaces is a testament to its widespread use and to the ingenuity of the compiler developers.

Modern re-implementations of this architecture have certainly improved the cycles-per-instruction ratio, but at the cost of a substantial increase in the size of the logic. The additional speed certainly improves performance, but it's never going to convince me that it's a better processor than most of its predecessors. It would be like agreeing that the x86 base architecture is the best because it won the processor wars. If there had ever been a single-chip implementation of the Z80 or 65C02 with capabilities similar to the 8051's, then the 8051 would not even be in the discussion.

The commercial success of the 8051 cannot be disputed, and I believe it is for that reason alone that there continues to be significant work on the development of modern derivatives for inclusion in ASICs and other purposes.

_________________
Michael A.

