How was memory decoded in the 6502's heyday

Let's talk about anything related to the 6502 microprocessor.
cbmeeks
Posts: 1254
Joined: 17 Aug 2005
Location: Soddy-Daisy, TN USA

Re: How was memory decoded in the 6502's heyday

Post by cbmeeks »

Regarding the remarks that not everything back in the day was so good:

So, how true is that really? My first computer was the TI-99/4A, which had 16K of RAM. My second computer was the C64. I did so much programming on those machines! One thing it taught me was to be sparse with memory. Be efficient.

Nowadays, I see Java developers (and JavaScript) release *GIGANTIC* modules simply because "we all have broadband and 16GB of RAM". It's pathetic.
Cat; the other white meat.
unclouded
Posts: 81
Joined: 24 Feb 2015

Re: How was memory decoded in the 6502's heyday

Post by unclouded »

cbmeeks wrote:
Nowadays, I see Java developers (and JavaScript) release *GIGANTIC* modules simply because "we all have broadband and 16GB of RAM". It's pathetic.
To be fair, developer time is more expensive than machine time, so it's a commercial decision.

I love the 6502 but I don't think I'd give up being able to write a piece of code in Ruby to iron out the bugs first before porting it to 6502 assembly.
GARTHWILSON
Forum Moderator
Posts: 8775
Joined: 30 Aug 2002
Location: Southern California

Re: How was memory decoded in the 6502's heyday

Post by GARTHWILSON »

unclouded wrote:
cbmeeks wrote:
Nowadays, I see Java developers (and JavaScript) release *GIGANTIC* modules simply because "we all have broadband and 16GB of RAM". It's pathetic.
To be fair, developer time is more expensive than machine time, so it's a commercial decision.
It's also about the customer's time though: time spent waiting for things to load, time shopping for and installing more memory when the increasing demands of new software cause problems with an otherwise-good computer, etc. I do wish the developers would be more considerate.
Quote:
I love the 6502 but I don't think I'd give up being able to write a piece of code in Ruby to iron out the bugs first before porting it to 6502 assembly.
I try things out interactively in Forth on my 6502 workbench computer. When the concept is proven, I can re-write things in assembly and try again, without having to change any of the software that uses it.

I have benefited a ton from being able to process pictures on the computer and incorporate them in emails, and to watch YouTube videos on computer history, aircraft, science, health, etc. But as long as I can still see more optimization to be had on the 8-bitters, in both hardware and software, I'm committed to the little guys. I'll leave the 32- and 64-bit stuff to someone else, and ask them to make more efficient use of the resources and not expect us all to have this year's latest computer like the ones their bosses give them, since it's their living and they can't be wasting time. They forget that.

I remember many years ago (late 1980's?) when a new technology was introduced that improved memory prices and speed a lot in one step, and immediately Microsoft was saying, "This is great because now we don't have to be as careful and we can get new software out faster," and what happened is that the user never got the benefit. Boot-up times made no net improvement, the "disc full" messages came up just as often, and there were just as many bugs.

Today's technology has brought about a lot of conveniences, but I have to say it has not improved the "happiness factor" of life.
http://WilsonMinesCo.com/ lots of 6502 resources
The "second front page" is http://wilsonminesco.com/links.html .
What's an additional VIA among friends, anyhow?
BigDumbDinosaur
Posts: 9428
Joined: 28 May 2009
Location: Midwestern USA (JB Pritzker’s dystopia)

Re: How was memory decoded in the 6502's heyday

Post by BigDumbDinosaur »

unclouded wrote:
cbmeeks wrote:
Nowadays, I see Java developers (and JavaScript) release *GIGANTIC* modules simply because "we all have broadband and 16GB of RAM". It's pathetic.
To be fair, developer time is more expensive than machine time, so it's a commercial decision.
...that costs the end user in consumed bandwidth and time waiting for an elephantine Java app or JavaScript-laden web page to download and render. If a page takes more than 10 seconds to load and render I usually move on. My time is very valuable to me.
GARTHWILSON wrote:
unclouded wrote:
I love the 6502 but I don't think I'd give up being able to write a piece of code in Ruby to iron out the bugs first before porting it to 6502 assembly.
I try things out interactively in Forth on my 6502 workbench computer. When the concept is proven, I can re-write things in assembly and try again, without having to change any of the software that uses it.
Once in a great while when wrestling with a new concept I will write a program in Thoroughbred Dictionary-IV (a high-powered timesharing form of BASIC designed for heavy business use supporting hundreds or thousands of users) to test my code theories. For example, there were some parts of my mkfs 65C816 program that I modeled in TB BASIC. Once I have demonstrated that the theory is sound I will write the equivalent in assembly language. However, such cases are rare and most of the time I can visualize the program in assembly language right from the start.
Quote:
I remember many years ago (late 1980's?) when a new technology was introduced that improved memory prices and speed a lot in one step, and immediately Microsoft was saying, "This is great because now we don't have to be as careful and we can get new software out faster," and what happened is that the user never got the benefit. Boot-up times made no net improvement, the "disc full" messages came up just as often, and there were just as many bugs.
I remember that as well, along with the constant reboots that were necessary because the Microsoft stuff was so unstable (Windows 1.0 and Windows/386 immediately come to mind).
Quote:
Today's technology has brought about a lot of conveniences, but I have to say it has not improved the "happiness factor" of life.
My mantra that "new technology isn't the same as good technology" still holds today. On average, things happen no faster in Windows than they did 15 years ago, and memory and disk space consumption has gotten to the point of ridiculousness.
x86?  We ain't got no x86.  We don't NEED no stinking x86!
BillO
Posts: 1038
Joined: 12 Dec 2008
Location: Canada

Re: How was memory decoded in the 6502's heyday

Post by BillO »

GARTHWILSON wrote:
They used more cascaded logic, further cutting into the access time, and further holding the clock speed down in a day when memory was slow already. One IC I wish were available in the faster families (like 74AC__ and 74VHC__) is the '154. It's like a '138 but has four address (in addition to G1\ and G2\) inputs, and 16 outputs.
I use GALs for RAM and I/O select logic these days. Fast, cheap and configurable. A GAL22V10 will give you up to 10 outputs.
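As an illustration of the kind of single-chip glue being described, here is a minimal Python model of GAL-style select equations. The memory map ($0000-$5FFF RAM, $6000-$7FFF I/O, $8000-$FFFF ROM) is a made-up example, not Bill's actual equations:

```python
# Toy model of GAL-style address decoding for a 6502 system.
# Hypothetical map: RAM $0000-$5FFF, I/O $6000-$7FFF, ROM $8000-$FFFF.
# Each select is active-low (0 = selected), as on the real parts.

def decode(a15: int, a14: int, a13: int) -> dict[str, int]:
    rom_sel = 0 if a15 else 1                            # ROM when A15 = 1
    io_sel = 0 if (not a15 and a14 and a13) else 1       # I/O at %011x xxxx...
    ram_sel = 0 if (not a15 and not (a14 and a13)) else 1
    return {"RAM": ram_sel, "IO": io_sel, "ROM": rom_sel}

def select_for(address: int) -> dict[str, int]:
    """Decode a full 16-bit address using only its top three bits."""
    return decode((address >> 15) & 1, (address >> 14) & 1, (address >> 13) & 1)

assert select_for(0x0200)["RAM"] == 0   # zero page / stack area is RAM
assert select_for(0x6000)["IO"] == 0
assert select_for(0xFFFC)["ROM"] == 0   # reset vector lives in ROM
```

In a real GAL these would be sum-of-products equations on A15-A13 (with Φ2 and R/W\ qualifiers as needed); the Python is only to show the logic in one place.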
Bill
GARTHWILSON
Forum Moderator
Posts: 8775
Joined: 30 Aug 2002
Location: Southern California

Re: How was memory decoded in the 6502's heyday

Post by GARTHWILSON »

I have not looked at the current speeds of GALs, but last I did look, they were still pretty slow, like 10-15ns. Are there faster 22V10's now that don't require an expensive programmer? (I'm partly thinking about the brand of programmable logic that most programmers won't program correctly. Was that Atmel?) 74LVC1G__ gates are in the 3-5ns (max) range for cascaded gate pairs @ 5V. The '139 has 3.6ns max PD @ 5V. They can't lose their programming either, like EPROM, which is only guaranteed to hold it for ten years. They're pretty attractive.
http://www.ti.com/paramsearch/docs/para ... 8nom=0.8;5
http://WilsonMinesCo.com/ lots of 6502 resources
The "second front page" is http://wilsonminesco.com/links.html .
What's an additional VIA among friends, anyhow?
AldoBrasil
Posts: 13
Joined: 14 Apr 2017

Re: How was memory decoded in the 6502's heyday

Post by AldoBrasil »

Dan Moos wrote:
OK, I've got a single DIP with 32k of RAM on it. But that wasn't available back in the early eighties, and the RAM was spread over many chips.

Did those chips work just like the modern ones, but just have fewer address lines? So instead of the simple decode we do to select the entirety of RAM on one chip, did they have a pile more glue logic to decode that same 32k spread over many chips?

Just curious. It would explain why old 8-bit computers' boards were so heavily populated.
The CPU usually had a 16-bit address bus. The RAM chips were usually DRAMs holding 1 bit per cell, with fewer address lines than that. A pin named CE (chip enable) selected a certain group of RAM chips. A common RAM size was 4 Kbit, which means that for a full 64 Kbyte RAM system you needed 128 DRAM chips... The glue logic that generated CE was usually a decoder that took the four most significant bits from the address bus and decoded them (1-of-16) into a CE per group of eight DRAM chips forming an 8-bit word. Unfortunately, I/O space, bank switching, differences in speed between RAM and ROM, etc., made things more complex than that. Add the fact that you are dealing with DRAM (so every row needed a refresh every couple of milliseconds) and things get complex quickly.
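The decode scheme described above can be sketched in Python (a hypothetical illustration of the chip-count arithmetic and the 1-of-16 bank select, not any specific board's logic):

```python
# Sketch of the 1-of-16 bank decode described above: the top four
# address bits (A15..A12) select one group of eight 4K x 1 DRAMs.
# Hypothetical helper, not any particular machine's glue logic.

DRAM_BITS_PER_CHIP = 4 * 1024      # 4 Kbit, 1 bit wide
CHIPS_PER_BANK = 8                 # eight chips in parallel form an 8-bit word
BANKS = 16                         # 64 KB total / 4 KB per bank

def bank_select(address: int) -> list[int]:
    """Return the 16 active-low CE\\ lines for a 16-bit address (0 = selected)."""
    bank = (address >> 12) & 0xF   # A15..A12 feed the decoder
    return [0 if i == bank else 1 for i in range(BANKS)]

total_chips = BANKS * CHIPS_PER_BANK   # 128 DRAMs for a full 64 KB
assert total_chips == 128
assert bank_select(0x0000)[0] == 0     # $0000 enables bank 0
assert bank_select(0xD011)[13] == 0    # $D011 enables bank 13
```

(The real chips also multiplexed their row and column address lines through RAS\ and CAS\, which this sketch ignores.)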
BillO
Posts: 1038
Joined: 12 Dec 2008
Location: Canada

Re: How was memory decoded in the 6502's heyday

Post by BillO »

GARTHWILSON wrote:
I have not looked at the current speeds of GALs, but last I did look, they were still pretty slow, like 10-15ns. Are there faster 22V10's now that don't require an expensive programmer? (I'm partly thinking about the brand of programmable logic that most programmers won't program correctly. Was that Atmel?) 74LVC1G__ gates are in the 3-5ns (max) range for cascaded gate pairs @ 5V. The '139 has 3.6ns max PD @ 5V. They can't lose their programming either, like EPROM, which is only guaranteed to hold it for ten years. They're pretty attractive.
http://www.ti.com/paramsearch/docs/para ... 8nom=0.8;5

The ones I have in abundance are between 5 and 10ns. Lattice is the name on all mine, and they're the ones most readily available on eBay. This compares well with the typical 74LS138 at ~20ns, and it's in the range of ACT logic.

Fast enough for most of my purposes, since I would typically need additional logic beyond the '138 (whatever family) to get what I need. The GAL usually does it all. Single-chip glue.

Programmers are embarrassingly cheap. Well under $100 US. I use a "Genius G540" that cost me less than $45 delivered to my door. I bought it 6 years ago and it's worked flawlessly since. It programs a huge array of devices including EPROMs, EEPROMs, GALs, CPLDs, microcontrollers... and more, much more.

I know that 10ns, or even 5ns, is considerable if you're working on a 15MHz system, but consider that the GAL is configurable to suit specific needs and can usually be employed without much other logic. It usually works fine with room to spare.
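As a back-of-envelope sketch of that budget (illustrative, datasheet-order-of-magnitude numbers, not guaranteed specs for any particular part):

```python
# Rough decode-timing budget for a 15 MHz 6502-style bus.
# All delay figures below are illustrative assumptions.

CLOCK_HZ = 15_000_000
cycle_ns = 1e9 / CLOCK_HZ              # ~66.7 ns per bus cycle

gal_tpd_ns = 10                        # a 10 ns GAL22V10
ls138_tpd_ns = 20                      # a typical 74LS138 (~20 ns)
extra_gate_ns = 10                     # the '138 often needs another gate stage

gal_budget_left = cycle_ns - gal_tpd_ns
ls138_budget_left = cycle_ns - (ls138_tpd_ns + extra_gate_ns)

# The single-chip GAL leaves noticeably more of the cycle for RAM access.
assert gal_budget_left > ls138_budget_left
print(f"cycle {cycle_ns:.1f} ns: GAL leaves {gal_budget_left:.1f} ns, "
      f"'138 + gate leaves {ls138_budget_left:.1f} ns for the memory")
```

The real budget also has to absorb the CPU's address-valid delay and the memory's setup time, but the comparison between one programmable chip and a cascaded-logic decode comes out the same way.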

The biggest factors for me are their programmability, their availability, and their low price. Typically less than $2 apiece, if you are diligent about looking for deals.

One more thing. They are reusable. You can muck around with some idea, and once tired of it, use the GAL(s) for something entirely different. Overall, not a bad tool to have in the box.
Bill