Quote:
Well, I think TeamTempest would be the right man to answer this question as he has written a port for the C64 (Wizardry).
Hrmph hrmph, somebody mention my name? Ah, that was mumblety-mumble years ago, but...
The p-machine core was portable across the C64, C128 and Apple II (and the Atari, but SirTech decided not to port Wizardry there). The Apple II version existed so SirTech could avoid paying royalties for the run-time system. The main differences between the ports were all in the I/O area (the C64/128 used multi-color text screens, the Apple II a bitmapped one, for instance). Total assembled sizes, including custom I/O, were in the 11-13K range.
Quote:
- stack machine
Yes
Quote:
- very high code density
Yes. Cute tricks with variable-size integer values in there.
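For the curious, the density trick was a variable-size ("big") operand encoding: small values fit in one byte, larger ones spill into two. A minimal Python sketch of the decoding; the exact threshold and bit layout here are my recollection, so treat them as assumptions:

```python
def decode_big(code, pc):
    """Decode a UCSD-style 'big' operand at code[pc].

    Values below 0x80 fit in a single byte; otherwise the low 7 bits
    of the first byte become the high byte of a 16-bit value.
    Returns (value, new_pc). Bit layout is an assumption from memory.
    """
    b = code[pc]
    if b < 0x80:
        return b, pc + 1
    return ((b & 0x7F) << 8) | code[pc + 1], pc + 2
```

Common small constants and offsets thus cost one byte instead of two, which adds up fast across a whole game's worth of p-code.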
Quote:
- comes with pointers to local variables as well as global variables
There are p-codes to directly load and store the first eight local variables (no pushing of offset + base address first, so they're faster).
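The idea, roughly: the variable number is folded into the opcode itself, so there is no operand byte to fetch and no base+offset arithmetic at run time. A toy dispatcher in Python (the opcode numbering here is invented for illustration, not the real UCSD map):

```python
SLDL_BASE = 0x20  # hypothetical opcode numbers for SLDL1..SLDL8

def step(op, frame, stack):
    """Execute one (simplified) p-code: short load-local opcodes
    carry the variable number in the opcode itself, so the load is
    a single indexed fetch with no operand decoding."""
    if SLDL_BASE <= op < SLDL_BASE + 8:
        stack.append(frame[op - SLDL_BASE])
    else:
        raise NotImplementedError(f"opcode {op:#04x} not modeled")
```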
Quote:
special instructions allow access to intermediate variables (used inside local procedures)
Not sure I remember these specifically. I do remember I thought it was a pain to have to keep track of procedure chains so as to be able to access the variables of the parents of procedures nested arbitrarily deep.
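The bookkeeping in question is the classic static-link chain: each activation record points at the frame of its lexically enclosing procedure, and reaching an intermediate variable means walking that chain the right number of levels. A bare-bones sketch (names and layout are mine, not UCSD's):

```python
class Frame:
    """One activation record; static_link points at the frame of the
    lexically enclosing procedure (layout invented for illustration)."""

    def __init__(self, static_link, locals_):
        self.static_link = static_link
        self.locals = locals_

def load_intermediate(frame, hops, offset):
    """Walk the static-link chain 'hops' levels toward the outermost
    procedure, then fetch a variable from that frame's locals."""
    f = frame
    for _ in range(hops):
        f = f.static_link
    return f.locals[offset]
```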
Quote:
- uses the 6502 stack as an evaluation stack (maximum: 128 INTEGER and 64 REAL values)
Yes. "TSX" got used a lot.
Quote:
- parameters are passed on the evaluation stack, not the local variable stack (IOW the stack machine has two stacks)
Yes. Though my implementation of the local-variable stack worked, I think I'd do it differently today. I intermixed the p-code segments and the local-variable stack on the assumption that no local variable could belong to a segment that wasn't in memory (hence it was safe to intermix). The idea was to leave space at one end of memory for heap allocation by the running program. Today I'd take advantage of p-code's inherent relocatability to move the code when heap space is needed, and keep the local stack "pure" at the other end of memory from the heap.
Quote:
INTEGER, CHAR, and BOOLEAN values are always 16 bit. REAL (= floating point) 32 bit. STRINGs can have a size up to 255 chars.
Maybe. I never could clarify that to my satisfaction. I wrote pushes to the evaluation stack in terms of two bytes for the sake of speed, but I also always checked for an odd number of bytes. Possibly wasted effort, but I never could get assurance it wasn't.
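The shape of the problem, modeled in Python: the evaluation stack lives in the 6502's single 256-byte hardware stack page, which is where the 128-INTEGER limit comes from once everything is pushed as 16-bit words, and the odd-byte check guards against a value that somehow occupied only one byte. All details here are illustrative:

```python
EVAL_STACK_BYTES = 256  # one 6502 hardware stack page -> max 128 words

class EvalStack:
    """Toy model of the evaluation stack described above."""

    def __init__(self):
        self.data = []

    def push_word(self, v):
        # Everything goes on as a 16-bit word for speed.
        self.data += [v & 0xFF, (v >> 8) & 0xFF]
        if len(self.data) > EVAL_STACK_BYTES:
            raise OverflowError("evaluation stack overflow")

    def pop_word(self):
        # The odd-byte check: a lone byte on the stack would mean
        # some value was pushed in one byte rather than two.
        if len(self.data) % 2:
            raise RuntimeError("odd number of bytes on evaluation stack")
        hi = self.data.pop()
        lo = self.data.pop()
        return lo | (hi << 8)
```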
Quote:
- INTEGER multiplication is always 16x16, division 16/16. Operations MOD and DIV are separate instructions
Those were fun. Lots of hacks to check for high-byte values of zero so I could speed them up by shifting whole bytes. David Bradley later ripped those out in the Apple II version so he could implement a random number generator he liked better (the C64/128 versions used the white-noise voice of the SID chip for that).
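The flavor of the hack: a straight 16x16 shift-and-add multiply needs 16 iterations, but when one factor's high byte is zero, eight suffice. This Python sketch only models the iteration-count saving (the real 6502 routine would also shift whole bytes at a time):

```python
def mul16(a, b):
    """16x16 -> 16-bit shift-and-add multiply with the high-byte-zero
    fast path: keep the byte-sized factor in b and halve the loop count."""
    a &= 0xFFFF
    b &= 0xFFFF
    if a < 0x100 <= b:
        a, b = b, a               # put the small factor in b
    steps = 8 if b < 0x100 else 16
    result = 0
    for _ in range(steps):
        if b & 1:
            result = (result + a) & 0xFFFF
        a = (a << 1) & 0xFFFF
        b >>= 1
    return result
```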
Quote:
- comes with special instructions for
a) data type REAL (= floating point) (There are no preallocated FACs. That's BASIC.)
Nope. Wizardry was integer only, although it had its own p-coded "bignum" routines. All those REAL p-codes pointed to a trap that halted execution.
Quote:
b) data type SET (up to 512 elements)
c) data type STRING (e.g. string comparison)
Yes.
Quote:
d) data type INTEGER[x] = LONG INTEGER (uses BCD arithmetic)
16-bit binary, not BCD.
Quote:
e) loading/storing a series of bits from a given address + bit position (for PACKED data types)
Probably. Some of those p-codes can do complex things when you string them together.
Quote:
f) array boundary checking (can be omitted) and array indexing
I think we had to give up bounds checking when we first discovered that Wizardry used negative indices in some places as a hack to read size information stored just before an allocated block.
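The hack as I understand it (the exact header layout is a guess on my part): the allocator stored a size word immediately before each block, and Wizardry read it back through a negative index that bounds checking would have rejected.

```python
# Toy flat memory and allocator; the size-word-before-block layout
# is an assumption about Wizardry's scheme, not documented fact.
mem = [0] * 64

def alloc(mem, free_ptr, nwords):
    """Reserve nwords, stashing the size just below the returned base."""
    mem[free_ptr] = nwords
    return free_ptr + 1, free_ptr + 1 + nwords

base, free_ptr = alloc(mem, 0, 10)

# The negative-index hack: indexing -1 from the block's base
# reaches the hidden size word.
size = mem[base - 1]
```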
Quote:
g) loading so-called code segments to allow programs > 64kb (overlay capability).
Yes. That and UCSD's determination not to allow code segments beyond a certain size in order to encourage modularity. These interpreters went a bit beyond the standard UCSD versions because they could use any extra RAM to stash code segments for fast loading. The C64 version used RAM "under" the Kernal ROM to store the two most commonly used segments, and could use a RAM expander if one was attached. The C128 version used the second 64K bank rather than RAM under the Kernal (ooh, lots of segments!) and eventually the 80-column chip's 16K or 64K of RAM as well as any RAM expander. The game simply cached segments at startup as long as the interpreter kept reporting "yes, there's more room".
Overlays coming out of a RAM expander were pretty darn quick.
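The startup protocol amounted to a simple greedy loop. The function names and byte-counting here are invented for illustration; only the "cache until the interpreter says stop" behavior comes from what's described above:

```python
def make_bank(capacity):
    """Toy stand-in for one stash area (RAM under ROM, a REU, ...)."""
    used = {"bytes": 0}

    def has_room(size):
        return used["bytes"] + size <= capacity

    def stash(seg):
        used["bytes"] += len(seg)

    return has_room, stash

def cache_segments(segments, has_room, stash):
    """Cache code segments until the interpreter stops reporting
    'yes, there's more room'."""
    cached = []
    for seg in segments:
        if not has_room(len(seg)):
            break
        stash(seg)
        cached.append(seg)
    return cached
```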
Quote:
- The interpreter is rather short. Most of the UCSD system deals with IO: reading from keyboard, writing to a text screen/terminal, printer output, scanning file directory for names, open/close/delete files etc.
- UCSD comes with its own file system. It's more like GEOS but without the graphics.
Yes. 512-byte sectors and dedicated device numbers that have to be mapped onto whatever hardware you have.
Quote:
- Often used code can be stored in a library (special type of segment) that gets loaded when needed. A library can be written in p-code or 6502 code.
- P-code is relocatable, but added 6502 code is also relocatable due to a special file format.
Yes, but in the case of Wizardry a number of special routines were required by fiat to be implemented as part of the interpreter itself rather than this way. Speeding up common string operations, data decompression, one virtual disk spanning more than one physical disk, the caching system, things like that.