GARTHWILSON wrote:
(This would of course mean the headers all have to be in RAM, not ROM.)
You don't have to put the whole head in RAM, just the link field. For example, assuming that A, B, C, and D were defined in that order:
Code:
ORG TOP_OF_RAM-NUMBER_OF_HEADS*4   ; 4 bytes per entry: link, head pointer
LINK4: DW LINK3,D_HEAD         ; newest definition first
LINK3: DW LINK2,C_HEAD
LINK2: DW LINK1,B_HEAD
LINK1: DW END_OF_LINKS,A_HEAD  ; oldest definition ends the chain
In other words, link fields build downward (toward lower addresses) in RAM. You can use MVN or MVP to set up the link fields at cold start. Notice that it doesn't matter whether the heads are in ROM or RAM. The head and definition of, say, : NIP SWAP DROP ; might look something like:
Code:
NIP_HEAD: DB 3,"NIP" ; name length, name
NIP: DW DO_COLON,SWAP,DROP,EXIT
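To make the split concrete, here is a rough Python model of the scheme (the tuple standing in for ROM heads, the list standing in for the RAM link fields, and the function names are my own stand-ins, not anything from the post):

```python
# Model of the split-header scheme: heads (names) are fixed in "ROM"
# (an immutable tuple), while link fields live in "RAM" (a mutable
# list) and are rebuilt at cold start, as MVN/MVP would on the '816.

HEADS = ("A", "B", "C", "D")       # stand-ins for A_HEAD .. D_HEAD in ROM
END_OF_LINKS = None                # sentinel terminating the chain

def cold_start():
    """Rebuild the RAM link fields in definition order, oldest first."""
    links = []
    prev = END_OF_LINKS
    for head in HEADS:             # A, B, C, D in definition order
        links.append([prev, head]) # [link-to-previous, pointer-to-head]
        prev = len(links) - 1      # index of the entry just appended
    return links, prev             # prev now indexes the newest entry

def find(links, latest, name):
    """Walk the link chain from the newest entry toward the oldest."""
    entry = latest
    while entry is not END_OF_LINKS:
        link, head = links[entry]
        if head == name:
            return head            # found: the "head address"
        entry = link
    return None                    # not in the dictionary
```

Since each RAM entry only holds a link and a pointer, the heads themselves never move, which is what lets them stay in ROM.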
To handle FORGET or MARKER, just reconstruct the links, which can be done in several ways. It may not be particularly fast, but FIND is used far more often than MARKER, so you will usually win big even if FIND is only slightly faster and MARKER is really slow.
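One way to see why reconstruction is straightforward: the heads already record definition order, so the link fields for everything up to the mark can be regenerated in a single linear pass. A minimal sketch (the data model and names here are my assumptions, not from the post):

```python
# Rebuilding link fields for MARKER/FORGET: keep only the first `keep`
# definitions and regenerate their links in one pass, oldest to newest.

def rebuild_links(heads, keep):
    """Return (links, latest) covering only the surviving definitions."""
    links = {}
    prev = None                   # stands in for END_OF_LINKS
    for i in range(keep):         # oldest to newest surviving word
        links[i] = prev           # each entry links back to the previous
        prev = i
    return links, prev            # prev is the newest entry (search start)

heads = ["A", "B", "C", "D"]
links, latest = rebuild_links(heads, 2)   # MARKER was set just after B
# latest == 1 (B); the chain is now B -> A -> end
```

The pass is O(n) in the number of surviving words, which is the "not particularly fast, but rare" cost mentioned above.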
Anyway, having the entire dictionary cached does have some interesting properties.
1. You don't have to decide how big to make the cache, or whether you would want to allow it to be resized.
2. FIND will be a smaller, simpler (but not necessarily faster) routine, since it only has to search the cache, not the cache then the dictionary.
3. FIND doesn't have to compare the same word twice. For example, if A is a recently defined word and is in the cache, and B is not recent and not in the cache, then when using a separate cache and searching for B, FIND will usually compare B to A twice, once when searching the cache and once when searching the dictionary.
4. Words that are used occasionally (i.e. they wouldn't usually be found in a separate cache), would not necessarily fall all the way to the end (i.e. last word found) of the dictionary, so you might be able to do without hashing. I should point out that the use of a cache does not depend on any particular dictionary implementation (hashing, linked lists, etc.).
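Points 3 and 4 can be illustrated with a self-organizing list: if each successful FIND promotes the found word toward the front of the single chain, every name is compared at most once per search, and occasionally-used words settle partway down rather than at the end. The move-to-front policy here is my assumption for illustration; the post doesn't prescribe a specific reordering scheme:

```python
# "Whole dictionary as cache" modeled as a self-organizing list:
# one chain, newest/most-recently-found words near the front, so a
# search never compares the same name twice (unlike a separate cache).

def find_mtf(chain, name):
    """Search `chain` front to back; on a hit, move the word to the front."""
    for i, word in enumerate(chain):
        if word == name:
            chain.insert(0, chain.pop(i))   # promote the found word
            return word
    return None                             # one pass, one compare per name

chain = ["D", "C", "B", "A"]   # newest definition first
find_mtf(chain, "B")           # B is promoted: ["B", "D", "C", "A"]
```

With a separate cache the same lookup would have walked the cache and then the dictionary, comparing "B" against the cached words twice.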
So there are some trade-offs here, but it's an interesting approach.