Thanks to those who have so far offered suggestions.
barrym95838 wrote: You wish to non-destructively test up to 4MB? So we likely have a (read, modify, write, compare, write, loop) times ~4 million bytes or ~2 million words? That's gonna burn some significant cycles either way ...
I only presented the 4 MB as a "fer instance." As the 65C816 can address 16 MB, the possibility of testing millions of bytes is there. It would ultimately depend on how important it is to know that all memory is good at power-on. However, the number of bytes being tested wouldn't matter in determining how many banks of extended RAM are present; at most, two memory cells per bank could be involved.
As an aside, the 65C816 has very good compute-bound performance, especially if use of REP and SEP (three cycles each) can be avoided during memory testing. I envision that a test of megabytes of RAM would run at a reasonable speed, assuming a high clock speed is in use.
Quote: I'm certain you already have an idea of how you want to do this, so may I request that you post a first attempt and let us fiddle with it to see if we can make it more efficient?
Right now, I've only a half-baked idea of how to accomplish this. At the moment, I don't want to go into any more detail on my thoughts (such as they are) lest I unduly influence others' thinking. I will, however, offer one more clue, which could be important in devising a suitable algorithm. Commonly available, large SRAM is produced in even multiples of 64 KB, with 128 KB and 512 KB pieces being the ones I would be using in my designs (for now, at least; larger SRAMs are only available in 3.3-volt versions).
BigEd wrote: I think one would initially only test a byte or a few bytes at the start of each bank: the question to be answered is whether it's RAM, and whether it's a bank previously encountered.
To know that something is RAM, it's necessary to write to some byte a value different from the present value, and determine that the new value has been stored. To make that non-destructive, it's necessary to save the value first read, in order to restore it afterwards.
That would be the method I'm considering, although I was going to test at the high end of the bank, where the test cells (I'd be using 16-bit test patterns) would be ROM if the bank being probed mirrors to bank $00.
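At the risk of going into a bit more detail than I said I would, the rough shape of the probe I'm contemplating is below. This is strictly a sketch; the labels and the direct page assignment are placeholders, not actual POC code.

Code:
;sketch of the bank probe under consideration; labels & DP usage are placeholders
;entry: native mode, m=0 & x=0 (16-bit registers), .X = bank to probe ($01-$FF)
;exit:  carry set if <bank>:$FFFE behaves as RAM, carry clear if it doesn't
;
ptr      =$52              ;3-byte pointer in DP scratch
;
bnkprb   lda #$fffe        ;probe at the top of the bank, where a mirror
         sta ptr           ; of bank $00 would land on ROM
         txa               ;bank number...
         sta ptr+2         ; ...sets the bank byte (ptr+3 is a don't-care)
         lda [ptr]         ;save whatever is there now...
         tay               ; ...in .Y
         lda #$a55a        ;1st test pattern
         sta [ptr]
         cmp [ptr]         ;did it store?
         bne noram         ;no, so not RAM (or a mirror of bank $00)
         lda #$5aa5        ;2nd test pattern
         sta [ptr]
         cmp [ptr]
         bne noram
         tya               ;RAM responded, so put back what was there
         sta [ptr]
         sec
         rts
;
noram    clc               ;nothing writable at this bank
         rts

A bank that mirrors bank $00 fails the compare because its top two bytes land on ROM, so it gets reported the same as an absent bank, which is the behavior I want from the probe.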
Quote: So, I think I'd use a table in the bank 00 RAM - already tested, or assumed to be present - to hold the 255 bytes from each of the 255 possible high banks. Then write the bank number to each bank's first byte, check that worked, then write the inverse and check that, and finally restore the initial bytes from my table, to make the process non-destructive. (Only need to restore to the unique banks of working RAM.)
That would work and definitely prove that a bank is present...or not. However, it would be destructive, as the table would have to be placed in bank $00 RAM whose contents I want to preserve. In developing the memory test for bank $00 (which occurs prior to the first visible signs of activity), I decided that everything in the range $000200-$00BEFF should survive a reboot so evidence could be preserved when I hit reset to recover from a crash. The only bank $00 address ranges that are destructively tested are the physical zero page, $0100-$01FF (system vectors and other critical data), and the native mode stack, which is presently at $BF00-$BFFF, the highest page of accessible bank $00 RAM in POC V1.3, and initially in V2.0.
kernelthread wrote: Write 0 to address 0. Set address to be tested (=P) to 0x010000. Write 0x55 to P, read it back and check it - if it's not 0x55, there is no memory at P. Write 0xAA to P, read back and check it - if it's not 0xAA, there is no memory at P. Now read address 0 - if it's not 0, P is a duplicate of address 0 (address line A16 is not connected to RAM)...
Unfortunately, writing "0 to address 0" would be destructive, as that would step on a direct page location used by the IRQ handler, which will already be running when the probe for extended RAM banks occurs. A different location could be used, however, since direct page above $51 will be unused during POST and could be overwritten without consequence. For efficiency reasons, I'd be using a 16-bit test pattern, i.e., $A55A, followed by $5AA5, but otherwise your method would work.
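With those changes, your sequence would come out something like the following. Again, just a sketch, with placeholder labels, the sentinel and pointer parked in the DP scratch area above $51, and the 16-bit patterns standing in for $55 and $AA.

Code:
;kernelthread's probe with the tweaks mentioned above; sketch only
;entry: native mode, m=0 & x=0 (16-bit registers), .X = bank to probe ($01-$FF)
;exit:  carry set if <bank> has its own RAM, carry clear if it is absent or
;       is a duplicate of bank $00
;
sentry   =$52              ;16-bit sentinel in bank $00 DP scratch
ptr      =$54              ;3-byte pointer to <bank>:sentry
;
bnkprb2  stz sentry        ;known value ($0000) at the bank $00 sentinel
         lda #sentry       ;probe the same offset in the target bank, so a
         sta ptr           ; mirror of bank $00 will land on the sentinel
         txa
         sta ptr+2         ;bank byte (ptr+3 is a don't-care)
         lda [ptr]         ;save current contents...
         tay               ; ...in .Y
         lda #$a55a        ;1st pattern (in place of $55)
         sta [ptr]
         cmp [ptr]
         bne nomem         ;didn't store, so nothing at <bank>
         lda #$5aa5        ;2nd pattern (in place of $AA)
         sta [ptr]
         cmp [ptr]
         bne nomem
         lda sentry        ;did the pattern writes leak into bank $00?
         bne nomem         ;yes, so <bank> duplicates bank $00
         tya               ;unique RAM, so put back what was there
         sta [ptr]
         sec
         rts
;
nomem    clc               ;absent or mirrored; any junk left behind is in
         rts               ; DP scratch, so nothing of value is disturbed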
floobydust wrote: Well, you already have code that will perform a RAM test. Beyond Bank $00, testing is done in 64KB (banks). If you're still using the Maxim Realtime Clock, there's 256 bytes of NVRAM there. Why not use part of that NVRAM for configuration data?
Storing RAM configuration information in NVRAM won't prove that the RAM is usable. It will, however, speed up POST, since the bank probe wouldn't be required and the detailed memory test could be skipped (modern PCs do this when a fast boot is configured in the BIOS).
However, what if NVRAM gets corrupted due to a wild write in an ill-behaved program (which has happened several times), or what if I replace the RTC with a different one having no data in NVRAM? In either case I'd still have to do the bank probe to determine what is present. Also, not doing the bank probe on each boot cheerfully assumes that all banks are present, even if a piece of memory goes south for some reason.
barrym95838 wrote: Riffing on the ideas of the other participants here, there's a way to test each location non-destructively without actually saving the original value and restoring it from "known-good" RAM ... if you EOR the value at a location with one or more portions of the address of that location, store it and EOR it back to original after it responds properly.
That is a good, although slightly slower, alternative to the read-save-write-compare-write-compare-restore-compare sequence, as it would have the somewhat-beneficial effect of "exercising" the test locations' bits a little more than the alternative procedure. However, using a bank $00 stack to preserve things would itself be destructive, which can't happen for the reasons I explained above.
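For what it's worth, here is how I read Mike's suggestion, applied to a single 16-bit location; sketch only, with the same sort of placeholder pointer as above.

Code:
;the EOR-with-address method as I understand it; sketch only
;entry: native mode, m=0 & x=0 (16-bit registers), ptr -> word under test,
;       bank in ptr+2; the low 16 bits of the address must be non-zero or
;       the EOR won't actually change anything
;exit:  carry set if the modified value stored; original contents restored
;
ptr      =$52              ;3-byte pointer in DP scratch
;
eortst   lda [ptr]         ;current contents...
         eor ptr           ; ...scrambled with the low 16 bits of the address
         sta [ptr]         ;store the modified value
         cmp [ptr]         ;did it stick?
         bne eorbad
         eor ptr           ;unscramble (EOR is its own inverse)...
         sta [ptr]         ; ...which restores the original contents
         sec
         rts
;
eorbad   eor ptr           ;recover the original value...
         sta [ptr]         ; ...and try to put it back anyway
         clc
         rts

The appeal is that the original value never has to live anywhere but the accumulator, so no table or stack in bank $00 is needed.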
BigEd wrote: I think catering for various missing and mirroring and not-power-of-two sizes will be quite the challenge. Catering for all possible dataline and address line shorts is another thing. Let alone actual pattern-sensitive failure modes.
I think testing for size is best handled as a different problem from testing for system level faults which is again different from testing for failures within RAM chips.
Good points. The bank discovery probe would probably report a bank as not present if there were an address or data bus fault, but that would depend on which address or data line is faulty and where in the bank the test is occurring.
The detailed test would not be exhaustive and thus wouldn't necessarily detect a hardware fault. Mostly it would be to prove that the memory cells being tested will be able to store and regurgitate the test patterns. Should the need arise to fully qualify all memory locations, as well as address and data bus operation, a much more complex (and much slower) test regimen would be required to do inversions, walking bits, etc. I wouldn't do that in POST due to the time required.
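To illustrate what "much slower" means, a walking-ones pass on a single word would be something along these lines (sketch only, and destructive as written). Multiply that by every word in every bank, then add the inversions and a walking-zeros pass, and it clearly doesn't belong in POST.

Code:
;walking-ones pass on one 16-bit location; sketch only, destructive as written
;entry: native mode, m=0 & x=0 (16-bit registers), ptr -> word under test
;exit:  carry set if all 16 bit positions stored & read back correctly
;
ptr      =$52              ;3-byte pointer in DP scratch
;
walk1    lda #%0000000000000001   ;a single 1 to walk across the word
;
w1loop   sta [ptr]          ;store the lone 1 bit...
         cmp [ptr]          ; ...and verify only that bit reads back
         bne w1bad
         asl a              ;move the 1 to the next bit position
         bne w1loop         ;16 positions, then the bit falls off the end
         sec                ;location passed
         rts
;
w1bad    clc                ;stuck or shorted bit at this location
         rts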
_________________ x86? We ain't got no x86. We don't NEED no stinking x86!