 Post subject: MOS6502 Release Schedule
PostPosted: Wed May 24, 2017 5:10 pm 

Joined: Wed May 24, 2017 5:07 pm
Posts: 5
Does anyone have a detailed release schedule for the MOS 6502? So far I've only been able to find references that say November 1975. I was hoping to find dates with more granularity, such as work-week information...

thanks


PostPosted: Wed May 24, 2017 6:57 pm 

Joined: Thu Dec 11, 2008 1:28 pm
Posts: 10793
Location: England
I don't have much, but I can say there's a 6502 with the ROR bug dated week 51, 1975. It sounds as if the ROR bug existed until June 76, in chips of revision C and earlier. Revision D fixes the ROR bug.

Ref:
http://www.weihenstephan.org/~michaste/ ... meline.pdf
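
For anyone who hasn't run into it, here's a minimal Python sketch of what ROR is documented to do on the fixed (revision D and later) parts: an 8-bit rotate right through the carry flag. Accounts of exactly how the buggy early silicon misbehaved vary, so only the correct semantics are modelled here.
Code:
# Minimal sketch of the documented 6502 ROR semantics (rotate right
# through carry).  The pre-revision-D misbehaviour is deliberately
# not modelled, since accounts of it differ.
def ror(value, carry_in):
    """Rotate an 8-bit value right through the carry flag."""
    carry_out = value & 0x01                 # bit 0 falls out into carry
    result = (value >> 1) | (carry_in << 7)  # old carry rotates into bit 7
    return result & 0xFF, carry_out

# Example: ROR of $81 with carry clear -> $40, with carry now set.
print(ror(0x81, 0))   # (64, 1), i.e. $40 and C=1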

MOS did have some working chips they were handing out at WESCON in September 1975, see
https://en.wikipedia.org/wiki/File:MOS_ ... t_1975.jpg

Here's a quote about that:
Quote:
The economics of the semiconductor industry were also in Wozniak’s favor. Chips seldom sold long at their introductory price. Competing devices from the dozen or so major semiconductor manufacturers usually ensured that prices would fall fast and dramatically. In the fall of 1975 the laws of the industry held true and played havoc with the pricing of eight-bit microprocessors. Wozniak first stumbled on the change when he and Baum traipsed to an electronics trade show in San Francisco and spotted a new microprocessor, the MOS Technology 6502, made by a Costa Mesa, California, company. The men at MOS Technology were aiming the 6502 at high-volume markets like copiers, printers, traffic signals, and pinball machines rather than the small-computer hobbyist. The 6502 was almost identical to the Motorola 6800 and the MOS Technology salesmen pointedly stated that their company was trying to make a smaller, simpler version of the older chip. The similarities were so blatant that they eventually became the subject of a lawsuit between the two companies, but for Wozniak and other hobbyists, legal squabbles were a distant blur. The critical issue was price. The Motorola 6800 cost $175. The MOS Technology 6502 cost $25. Wozniak fished a 6502 out of a large glass bowl brimming with microprocessors and immediately modified his plans. He abandoned the 6800 and decided to write a version of the computer language BASIC that would run on the 6502.

- from https://www.quora.com/Was-Steve-Wozniak ... he-Apple-I
quoting Michael Moritz' book, The Little Kingdom.


PostPosted: Thu May 25, 2017 12:55 am 

Joined: Thu May 28, 2009 9:46 pm
Posts: 8153
Location: Midwestern USA
BigEd wrote:
MOS did have some working chips they were handing out at WESCON in September 1975, see
https://en.wikipedia.org/wiki/File:MOS_ ... t_1975.jpg

Here's a quote about that...

Quote:
...Wozniak fished a 6502 out of a large glass bowl brimming with microprocessors and immediately modified his plans...

It still amazes me how a little serendipity got into the picture and changed the course of computer history.

_________________
x86?  We ain't got no x86.  We don't NEED no stinking x86!


PostPosted: Thu May 25, 2017 5:48 am 

Joined: Thu Dec 11, 2008 1:28 pm
Posts: 10793
Location: England
A bit more about those glass jars:
Quote:
During our 2006 interview Chuck explained that selling a dramatically less expensive CPU was not as easy as it sounded. A few years earlier there had been a high-profile scam involving a company that claimed it could produce mainframe terminals it would lease for just $10 per month. The company had gone bankrupt in a cloud of scandal after taking millions of dollars from investors, and blamed the failure on the industry's inability to produce cheap chips.

In an effort to drum up interest in the chip they ran an advert stating that anyone could not only see, but also buy, the amazing $25 microprocessor at the WestCon (Western Electronics Show and Convention) in 1975. Unfortunately, when MOS arrived at the show they were told that, in an effort to keep the show ‘high brow’, exhibitors were not allowed to sell product at their booths. Chuck quickly rented a nearby hotel room and had his very attractive wife sit at a table with two glass jars full of newly minted MOS 6501s. Little did the buyers know that all of the chips in the bottom of those jars were defective. “Image is everything”, Chuck says.

- from http://www.commodore.ca/commodore-histo ... -computer/

And here's Mike Garetz in Confessions of a Micropath:
Quote:
Godbout Electronics was selling 8008 chips for about $50, so I thought I would look deeper into building an 8008 system from scratch. Well, the 8008 is a real strange little chip, and it's even stranger when you don't understand anything. This put me off the 800 series from Intel. About the same time Motorola had announced the 6800. I rushed down to the local distributor and paid $35 for the giant Motorola manuals and also gathered up all the free literature I could. The Motorola manuals were confusing and not very clearly written. They make perfect sense now, but now I'm not a novice. Even though the Motorola manuals were confusing, they were great compared to what Intel had released, which was downright cryptic! Little glimmers of understanding were flashing in my brain.
...
By now it was August and these strange ads had appeared from a company called MOS Technology. They were announcing a new line of microprocessors for $20 and up. $20.00! And, they said you could buy the things at the upcoming Wescon show in September. This was unheard of. Remember that at this time an 8080 was still $175.00.

What a furor this created. Intel and Motorola seemed to be implying that the $20 price was a phony come-on, like you could only get that price if you ordered a million units. Other people were convinced that it was an out-and-out fraud. One salesman I talked to was convinced of this, and I remember him distinctly telling me that the microprocessor chips had reached their bottom price—$175—and that we'd never see them go any lower. I countered by saying that soon we'd see the price of microprocessors drop to under $10 in the next year or so. He said "Never!"—told me I was crazy and everyone else standing around agreed with the salesman.

The only thing to do was to wait and see what happened at Wescon. Well, along came the day of the show and, sure enough, there was MOS Technology but no chips in sight. I was informed that no selling was allowed on the floor, but that the chips were available in their hospitality suite. Away I went to the hospitality suite to find out the real story.

There they were! A big glass bowl of chips and stacks of manuals. They also had a KIM and a TIM system running. A guy named Chuck Peddle was there, happy to explain the features of his newly born baby. They plied me with a drink and I sat down on one of the couches with a copy of the manual to have a look. The damn thing made sense. Take my money!

I went home that evening with a 6502 chip and a hardware and software manual. My own computer and all for $25. Little did I know that I would invest another $300 before my homebrew 6502 system would work. I still needed more "modules" for my system so I decided to build them.

It is interesting to note that this very day Intel and Motorola announced price reductions on their processors to $79.00. The microcomputing craze was really beginning. I would like to point out that no one has really credited Chuck Peddle for bringing the microprocessors within reach of all of us.

- https://archive.org/stream/creativecomp ... 3/mode/1up

And here's a bit from Bill Mensch about pricing and yield:

Quote:
Mensch: Well, the first wafers that I worked on at Motorola, they were like 2 and 1/2 inch wafers. But I think we probably were three, I don't know if they had three and a half, but I don't think they were four inch wafers. But what Ed Armstrong, the process guy that was the head of process at Motorola for the NMOS process at Motorola, he grew a long beard waiting for 10 good die-per-wafer, and we were getting like 100 good die-per-wafer on the 6502. We had at least 10 times the yield per wafer, and it was because of their “spot knocking”.

Diamond: And what is that, “spot knocking”?

Mensch: Well, when you build a mask, you have flaws sometimes in the material that's used--

Diamond: Now, are these contact masks at this point?

Mensch: Well, no, that's another thing. The contact mask meant that you would wear out your mask after using it a few times, so these were projection-- proximity, so it was proximity if I got it right. And so that means it didn't touch the wafer-- so if you've got a good mask, you had a good mask that you could use for hundreds of wafers as long as you didn't damage it. So the “spot knocking” meant that if you compared two, you're not going to have a hole in the same place on both masks. So then if you went in and put little ink or something to cover the hole up, you could create a perfect mask. That's what we had.

Diamond: So you retouched the masks.

Mensch: Yeah, yeah. Motorola wasn't doing that, so we had the advantages in the manufacturing. So when we sold a $20 6501 and a $25 6502, we were making money.

Diamond: So your team went to MOS to build a microprocessor. Who came up with the architecture of the processor?

Mensch: Well, it would be Chuck, I would say. I'm a semiconductor engineer, so I'm building what Chuck wants built. When I did step in and started defining things is when I realized there were some basic things here that could make a big difference. Motorola 6800 had a clock generator that they sold for $69 on top of a $375 microprocessor, so if you add the two together, you got over $400.

- from the CHM's Oral History at http://archive.computerhistory.org/reso ... df#page=18
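
A rough back-of-the-envelope, just to show why the yield gap Mensch describes made the $25 price workable. The wafer cost below is an assumed, purely illustrative number, not a historical figure; only the roughly 10-versus-100 good-die counts come from the interview above.
Code:
# Toy cost-per-good-die arithmetic.  WAFER_COST is an assumed figure
# used only for illustration; the die-per-wafer numbers are the ones
# Mensch quotes (roughly 10 vs. roughly 100 good die per wafer).
WAFER_COST = 1000.0   # assumed processing cost per wafer, in dollars

for label, good_die in [("~10 good die/wafer ", 10),
                        ("~100 good die/wafer", 100)]:
    print(f"{label}: ${WAFER_COST / good_die:.2f} per good die")

# ~10 good die/wafer : $100.00 per good die
# ~100 good die/wafer: $10.00 per good die
# At ten times the yield, a $20-$25 selling price can still carry a margin.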


PostPosted: Tue May 30, 2017 3:22 pm 

Joined: Wed Aug 17, 2005 12:07 am
Posts: 1207
Location: Soddy-Daisy, TN USA
Quote:
Wozniak fished a 6502 out of a large glass bowl brimming with microprocessors and immediately modified his plans. He abandoned the 6800 and decided to write a version of the computer language BASIC that would run on the 6502.


Imagine if we still lived in this world....

Imagine the very notion of being able to change the main CPU at the last minute and still produce a great product that sells millions.

Imagine Dell or HP dropping the x86/i64 architecture at the last minute and going with PowerPC or ARM.

We really live in a different world.

_________________
Cat; the other white meat.


PostPosted: Tue May 30, 2017 6:18 pm 

Joined: Sat Dec 13, 2003 3:37 pm
Posts: 1004
HP did choose the Itanium architecture, but that's never been well received.

At the consumer level, x86 crushed everything else. (Actually, Windows crushed everything else -- there were other OSs on x86 that never really got anywhere, such as BeOS).

Apple has managed to come out with some of the most successful ARM-based computers (commonly known as the iPhone and iPad), partly because they were willing to forgo any notion of legacy software compatibility. They let the community start from scratch (to a point). I completely believe Apple could ship an ARM Mac laptop successfully with six months of warning to the development community.

Apple is also the only company that's actually managed to migrate their entire line across five architecture transitions: 6502 -> 65816 -> 68000 -> PPC -> x86 -> ARM. (Well, not strictly true; it is at the consumer level, though. IBM has done some great stuff over the years.)

Of course, early on, with the Mac, they completely punted on legacy software, so the 65816 -> 68000 may be a bit disingenuous. But they did pull it off successfully from 68K through ARM.

68K through x86 was done compatibly, by bundling simulators to run legacy software.

The jump to ARM was done through source code compatibility (with much of the original Macintosh Obj-C code portable to the new platform, though obviously not entirely -- but the underlying OS was mostly compatible, and iOS was quite comfortable for experienced MacOS developers).

The Unix world was very successful at jumping from architecture to architecture. Back in the heyday, when everyone and their brother had a Unix offering, our business relied on Informix database systems and development tools, one of which created p-code binaries that ran cross-architecture. But, in truth, the development environment (i.e. vi) was consistent across platforms.

Working on everything from PCs ('486), HP (PA-RISC), Data General (Motorola 88K), Sun (SPARC), Sequent (multiprocessing x86), and IBM (PowerPC), all I needed was a modem and a login; the software ported and worked fine (mind you, I wasn't doing the porting, so I don't know what traumas Informix's internal porting staff suffered).

The 6502 wars showed how little CPU compatibility really mattered, in the end. They were all pretty much completely different machines.


PostPosted: Tue May 30, 2017 8:37 pm 

Joined: Tue Jul 24, 2012 2:27 am
Posts: 672
whartung wrote:
At the consumer level, x86 crushed everything else. (Actually, Windows crushed everything else -- there were other OSs on x86 that never really got anywhere, such as BeOS).

Regardless of Windows, x86 is still on top in performance per watt (tons of performance at many watts at the high end; ARM has lower absolute power draw at the low end, but not the perf/watt ratios). I think POWER architecture has chips with greater absolute performance (especially as their caches have been absolutely massive), but cost and wattage are much higher than x86. So yeah, once Intel really got rolling, their chips kept on top, and volume meant consumer affordability.

_________________
WFDis Interactive 6502 Disassembler
AcheronVM: A Reconfigurable 16-bit Virtual CPU for the 6502 Microprocessor


PostPosted: Tue May 30, 2017 9:36 pm 

Joined: Sat Dec 13, 2003 3:37 pm
Posts: 1004
White Flame wrote:
Regardless of Windows, x86 is still on top in performance per watt (tons of performance at many watts at the high end; ARM has lower absolute power draw at the low end, but not the perf/watt ratios). I think POWER architecture has chips with greater absolute performance (especially as their caches have been absolutely massive), but cost and wattage are much higher than x86. So yeah, once Intel really got rolling, their chips kept on top, and volume meant consumer affordability.

Sure, but outside of the laptop market, the consumer market has never been particularly obsessed with power usage. They care indirectly, of course: battery life, fan noise, etc.

Today, of course, in the mobile market, power usage is A#1. And datacenters care about bang per watt, simply because they have so many machines. Apple has spent a lot of time on power management at both the hardware and software level. The modern iPhone has four cores: two are "slow" cores, and two are fast. The kernel schedules slow stuff to the lower-power cores (the "downloading web page" thread doesn't need the CPU of the "render web page" thread, for example). They're not alone in that, mind; it's just an anecdote of the lengths the industry is going to for power management.
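
To make the slow-core/fast-core idea concrete, here's a tiny, hypothetical sketch of the placement decision. The 30% threshold and the core names are made up purely for illustration; a real kernel scheduler uses far more elaborate heuristics.
Code:
# Hypothetical sketch of big/little-style thread placement: park light
# work on efficiency cores, keep performance cores for demanding threads.
# The 30% threshold is an arbitrary, illustrative cut-off.
def pick_core(expected_load):
    """expected_load: estimated fraction of one core the thread needs."""
    return "efficiency core" if expected_load < 0.30 else "performance core"

print(pick_core(0.05))   # "downloading web page" thread -> efficiency core
print(pick_core(0.90))   # "render web page" thread      -> performance core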

Meanwhile, consumers are trying to cram as much wattage into their PCs as possible to drive high-end GPUs, which couldn't give a rip about power management -- for them it's more about cooling. I think some of the Pro Mac laptops have two GPUs: the low-power "great for word processing" chip, and the power-sucking, higher-end "great for gaming" GPU. The latter gets used when the laptop is plugged in.

Also of note is the fast and slow clock on the WDC MCU boards: an interesting idea of bumping the clock speed based on which memory bank is being accessed. I can see that being handy for low-power data logging. You don't need 14 MHz to read a temperature sensor.


PostPosted: Tue May 30, 2017 10:53 pm 

Joined: Thu May 28, 2009 9:46 pm
Posts: 8153
Location: Midwestern USA
White Flame wrote:
So yeah, once Intel really got rolling, their chips kept on top, and volume meant consumer affordability.

Actually, the watershed moment for the x86 architecture came in the summer of 1999 with the debut of AMD's Athlon. Up until that time, no x86 MPU could equal the performance of the RISC boxes. Not only did the development of the Athlon start the gradual fading out of the RISC architecture and its replacement with commodity hardware, it forced Intel out of a certain complacency that had set in after the Pentium II came to market.

The second watershed moment for x86 came in 2003, when AMD released the AMD64 architecture in the form of the "Sledgehammer" series that we now know as "Opteron." It could be argued that had AMD not developed a way to make the 32- and 64-bit ISAs co-exist in one device with no compromise in performance (unlike the Itanium, which was a dog with 32-bit instructions), we might have continued down the 32-bit path for several more years.

_________________
x86?  We ain't got no x86.  We don't NEED no stinking x86!


PostPosted: Wed May 31, 2017 11:35 am 

Joined: Sun Apr 10, 2011 8:29 am
Posts: 597
Location: Norway/Japan
Well... many people claim that the current x86 architecture is simply a VM running on a RISC architecture. I can't really disagree: looking at today's Intel CPUs, they are completely different beasts from those of yesteryear. Tons of registers, etc. The registers exposed to programmers are just what the x86 "VM" wants you to see.

AMD played two parts here: first, by competing with Intel they got Intel to start working on performance. But second, they effectively killed Itanium as a future 64-bit platform, which is a bit of a pity - when it was around it was way faster than same-generation x86. We had an Itanium server at work. Great system. But that road was effectively blocked once the 'easy' road of amd64 was available; there wasn't enough traffic on the Itanium road to make it a cheaper, faster option. I for one couldn't care less about 32-bit 'Windows application' compatibility and would never have looked back if Itanium had gotten enough momentum to get into consumer products. The only potential (and definitely possible) roadblock to that future would have been a lack of any alternative 64-bit competition to keep pressuring Intel to improve performance.


PostPosted: Wed May 31, 2017 4:50 pm 

Joined: Thu May 28, 2009 9:46 pm
Posts: 8153
Location: Midwestern USA
Tor wrote:
AMD played two parts here...they effectively killed Itanium as a future 64-bit platform...

That may be, but at the time, the Itanium was much too expensive to become a mainstream processor. Since then, the x86-64 architecture has gotten so ridiculously fast it doesn't matter any more.

_________________
x86?  We ain't got no x86.  We don't NEED no stinking x86!


PostPosted: Wed May 31, 2017 6:07 pm 

Joined: Thu Dec 11, 2008 1:28 pm
Posts: 10793
Location: England
Ah Itanium, that was supposed to take off:
[image]

I haven't checked the benchmark scores, but when we were building out the compute farm in our chip team, we went from 450MHz SPARC in Sun Ultra boxes, to 933MHz Pentium III in IBM boxes, to Athlon something or other (1200MHz??) in Supermicro rackmount units. Up to that point we'd run headless desktop style units. At some point we also got an H-P PA-RISC system, but mainly because it could be fitted with a huge amount of RAM. I'm not sure what huge was at that time, maybe 16G, maybe a bit more. So, it feels like there was a window when Pentium seemed like a better choice than SPARC, and AMD weren't in the running.


PostPosted: Wed May 31, 2017 6:58 pm 

Joined: Thu Mar 03, 2011 5:56 pm
Posts: 277
Tor wrote:
Well.. many people claim that the current x86 architecture is simply a VM running on a RISC architecture. I can't really disagree when looking at today's Intel CPUs, they are completely different beasts from yesteryear. Tons of registers etc. The registers exposed to programmers are just what the x86 VM wants you to see.

...


Quoting from the wikipedia article on the AMD AM29000 processor family (https://en.wikipedia.org/wiki/AMD_Am29000#Versions):

Quote:
Several portions of the 29050 design were used as the basis for the K5 series of x86-compatible processors. The FPU was used unmodified[dubious – discuss], while the rest of the core design was used along with complex microcode to translate x86 instructions to 29k-like code on the fly.


So yes, from the K5 onwards, at least some processors in the x86 family were RISC processors that translated CISC instructions into sequences of RISC-like operations.
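
As a purely illustrative picture of what that translation looks like, here's a toy expansion of a memory-operand x86-style ADD into load/op/store micro-operations. The micro-op mnemonics and the exact decomposition are invented for the example; they are not the K5's actual microcode.
Code:
# Toy illustration of CISC-to-RISC translation: one x86-style
# read-modify-write instruction expands into several simple micro-ops.
# The micro-op names below are invented for illustration only.
def expand_add_mem_reg(mem, reg):
    """Expand 'ADD [mem], reg' into load/op/store micro-operations."""
    return [
        f"LOAD   tmp0, [{mem}]",      # read the memory operand
        f"ADD    tmp0, tmp0, {reg}",  # do the arithmetic inside the core
        f"STORE  [{mem}], tmp0",      # write the result back
    ]

for uop in expand_add_mem_reg("ebx", "eax"):
    print(uop)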


PostPosted: Wed May 31, 2017 7:01 pm 

Joined: Thu Dec 11, 2008 1:28 pm
Posts: 10793
Location: England
I think what happens is that RISC and CISC become labels to apply to the instruction set architecture, not to the implementations. While an x86 may in fact have hundreds of registers in the implementation, when writing code, or generating code, you have to look to the architectural registers to decide how to refer to values.
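
To put that in concrete terms, here's a tiny, purely illustrative sketch of register renaming: code only ever names the architectural registers, while the implementation quietly hands out entries from a much larger physical register file. The register names and pool size are assumptions for the example, not any particular CPU's figures.
Code:
# Hypothetical sketch of register renaming: software names only the
# architectural registers (eax, ebx, ...), but each new result gets a
# fresh register from a much larger physical pool.
class RenameTable:
    def __init__(self, num_physical=128):
        self.free = list(range(num_physical))  # physical register pool
        self.map = {}                          # architectural -> physical

    def write(self, arch_reg):
        """Allocate a fresh physical register for a result written to arch_reg."""
        phys = self.free.pop(0)
        self.map[arch_reg] = phys
        return phys

    def read(self, arch_reg):
        """Find which physical register currently holds arch_reg's value."""
        return self.map[arch_reg]

rt = RenameTable()
rt.write("eax")        # first write of eax  -> physical register 0
rt.write("eax")        # second write of eax -> physical register 1
print(rt.read("eax"))  # 1: readers see only the latest mapping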


PostPosted: Thu Jun 01, 2017 8:57 am 

Joined: Sun Apr 10, 2011 8:29 am
Posts: 597
Location: Norway/Japan
BigDumbDinosaur wrote:
Tor wrote:
AMD played two parts here...they effectively killed Itanium as a future 64-bit platform...

That may be, but at the time, the Itanium was much too expensive to become a mainstream processor. Since then, the x86-64 architecture has gotten so ridiculously fast it doesn't matter any more.

Sure it's fast (but not as fast as Power 8, not even Power 7, for certain tasks). No one can know for certain, but I believe Itanium could have reached farther than the current range can. After all, that was the whole point of Itanium - to cut ties to the legacy architecture in order to achieve more. Of course it was too expensive; it had to be at first, but that would have changed with scale. But scale could never happen once amd64 came around. The ties to the legacy stayed.

