Quote:
Yes I know 12 isn't a power of two, and that may take some getting over. Powers of two have a strong intellectual appeal! But in concrete terms the power-of-two benefits don't seem compelling. It's true that 65Org16 would have an advantage when it comes to rewriting existing 8-bit code. Eg: a double-precision addition translates directly to single precision. (What are some other examples? I'd be happy to hear them, but right now I want to look at the flip side.)
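Before getting to the flip side, the double-precision point in the quote can be sketched quickly. This is just my own illustration (the function names are made up for the example): an addition that needs a carry-chained pair of 8-bit adds on a 6502-style CPU collapses to one single-precision add when the word size itself is 16 bits.

```python
# Hypothetical sketch: a 16-bit add that is "double precision" on an
# 8-bit machine becomes "single precision" on a 16-bit-word machine.

def adc8(a, b, carry_in):
    """8-bit add-with-carry (like ADC): returns (result, carry_out)."""
    total = a + b + carry_in
    return total & 0xFF, total >> 8

def add16_on_8bit(x, y):
    """Double-precision add on an 8-bit machine: two chained ADCs."""
    lo, c = adc8(x & 0xFF, y & 0xFF, 0)     # add low bytes, CLC first
    hi, _ = adc8(x >> 8, y >> 8, c)         # add high bytes plus carry
    return (hi << 8) | lo

def add16_on_16bit(x, y):
    """The same operation is a single add on a 16-bit machine."""
    return (x + y) & 0xFFFF

assert add16_on_8bit(0x12FF, 0x0001) == add16_on_16bit(0x12FF, 0x0001) == 0x1300
```

The same pattern holds for subtraction, shifts, and compares, which is where the "existing 8-bit code translates directly" advantage comes from.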
Going in multiples of 8 makes better use of available memory types. Is 9-bit-wide memory available in the easiest-to-use types (SRAM and ROMs)? I doubt it. The HP-71 hand-held computer, introduced in 1983, was an incredible example of outstanding system engineering: it had plug-and-play ten years before anyone was even talking about it on PCs, it interfaced to a virtually unlimited number of peripherals at once, it let the user extend the OS, it had local environments and user control of interrupts and their servicing, plus a ton of other marvels, and it was nearly bug-free; yet it had a 4-bit data bus, a 20-bit address bus, and 64-bit processor registers. (The 64-bit registers were for efficient implementation of the IEEE floating-point standard before it was even a standard, though its passage was anticipated.) The strange implementation actually had some good reasons, based on the application and the technology available at the time.

ANS Forth requires that the number of bits in a cell be a power of two, 16-bit minimum, but it's not like that's the law. There are other things about ANS Forth I won't ever be going along with. Forth on the HP-71 had 20-bit cells, which, along with 4 bits per address (and two addresses per text character), was kind of strange, but it never presented any problems in my experience. And in situations where 16 bits was a little limiting for the standard numbers you wanted to use, 20 bits avoided much of the potential need for doubles, which would then be 40-bit instead of 32.
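To put numbers on that (my own quick sketch, not anything from the HP-71 documentation), here are the unsigned ranges the various cell widths give; the extra four bits of a 20-bit cell buy roughly sixteen times the headroom of a 16-bit cell:

```python
# Illustrative only: largest unsigned value for each cell width discussed.
def cell_max(bits):
    """Largest unsigned value representable in a cell of the given width."""
    return 2 ** bits - 1

for bits in (16, 20, 32, 40):
    print(f"{bits}-bit cell: 0..{cell_max(bits):,}")
# 16-bit: 0..65,535   20-bit: 0..1,048,575
# 32-bit: 0..4,294,967,295   40-bit: 0..1,099,511,627,775
```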
Quote:
(I may be a bit of a broken record over this [a phrase from the vinyl generation] but we don't need to expose all 32 bits of an address bus for 65Org16, and we can multiplex with the databus if we have to - the 65816 shows that. So, squinting at the 6502 pinout, I can lose or repurpose 6 pins before I start, which gives me pinouts ranging from 26 pins for a 16bit address bus to 40 pins for a 30bit address bus. The 6 pins would be 3 no-connect, phi1 out, sync and so. Or, we use a 50-pin module and drop the multiplexing. Or, we interface to a memory which multiplexes high and low addresses internally.)
Without knowing for sure, I suspect the '816's address-bus multiplexing had to do with the cost of lead frames and IC sockets with more than 40 pins (before PLCCs came along), and with the longer connections in those lead frames as WDC looked forward to higher speeds; it also made it possible to build a board that, with a few jumpers, could take either the 6502 or the '816. Samuel wrote here one time that the cost of testing goes up quickly as you add pins. Maybe DC@wdc (a new member here) can tell us, if indeed he's reading this. I would certainly prefer not to have to add the external latch to go above 64K of address space. That scheme also seems to reduce the '816's maximum speed a little compared to the 6502.
What I meant by not bringing all the address lines out is that probably no one would be using more than say 16Mx16 of memory map in a system like this. For that, 24 bits of address bus would be enough, with no multiplexing.
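Checking that arithmetic (just a sanity check, nothing more): 24 address lines select 2^24 locations, which with a 16-bit data bus is exactly the 16Mx16 map, with no multiplexing needed.

```python
# 24 dedicated address pins reach 2**24 = 16,777,216 word locations,
# i.e. "16Mx16" with a 16-bit data bus and no address multiplexing.
addr_bits = 24
locations = 2 ** addr_bits
assert locations == 16 * 1024 * 1024  # 16M words
print(f"{addr_bits} address bits -> {locations:,} locations")
```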
If this FPGA is really going to allow bus speeds of 32 or 50 or more MHz though, you're going to have to leave the module a lot of pins for ground and Vcc. Just one or two won't do, especially on something big enough to get the pins on .100" centers. On my 4Mx8 5V SRAM module (data sheet here), which I have available with 10ns SRAM chips, no signal pin is more than .2" away from a ground or bypassed Vcc pin.
http://WilsonMinesCo.com/