Hmm, screwed up again. I did mean LDA absolute, not LDA immediate. I even looked up an on-line reference to make sure I had the right opcode... apparently I found an unreliable source! There is, of course, no possibility I misread it.
Anyhow...as has been pointed out, the HXA_T variant of HXA does not understand any instruction set, but several of the demos that come with it implement 65xx instruction sets as macros. I note in passing that writing the macros was instructive enough that I incorporated what I learned back into HXA65, the variant that understands these instructions natively, to make it more efficient at the job.
One approach to creating a new assembler would be to modify the file 'i6502.a', which contains macros implementing the NMOS 6502 instruction set. It would be simple if it were just a matter of replacing all the 'BIT08' pseudo ops with 'BIT16' and all the 'BIT16' pseudo ops with 'BIT32', but there is still the problem that the program counter should then advance only half as fast as it currently does. So some modification of the operand fields would also be necessary (for those instructions that have operands, anyway).
I thought of retarding the PC within each macro by having its last instruction set the PC back by half the number of 8-bit bytes generated, but a limitation within HXA currently makes this possible only 1023 times (so programs couldn't be more than 1000 or so instructions long). Also, I haven't entirely worked out how this would affect relative branch calculations.
Mmm, and there are also data storage pseudo ops like 'STRING', 'HEX' and so on. Presumably these should be modified somehow, either natively or via macro, to always output some multiple of 16 bits.
The main reason I'm harping so much on 'what bytes in what order?' is that I've largely come to view an assembler as a tool for doing exactly that: specifying what bytes in what order. Internally HXA simply maintains a sequence of [type, value] pairs, where 'type' is usually one of the '-BIT--' pseudo ops and 'value' is a 32-bit integer. It's only at output time that HXA uses the '-BIT--' type to determine what bytes of each value to extract and in what order.
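To make that internal model concrete, here's a minimal Python sketch (my own illustration, not HXA's actual code) of the idea: assembly builds a list of [type, value] pairs, and only at output time does the '-BIT--' type decide which bytes of each value to extract and in what order.

```python
# Sketch of the [type, value] pair model described above.
# 'BIT08'/'BIT16'/'BIT32' are the pseudo-op names from HXA; the
# extraction logic here is an assumption for illustration only.

def emit(pairs):
    """Flatten [type, value] pairs into an output byte sequence."""
    out = []
    for typ, value in pairs:
        if typ == "BIT08":
            out.append(value & 0xFF)
        elif typ == "BIT16":   # low byte first, the 65xx convention
            out += [value & 0xFF, (value >> 8) & 0xFF]
        elif typ == "BIT32":   # low byte first again
            out += [(value >> (8 * i)) & 0xFF for i in range(4)]
        else:
            raise ValueError("unknown type: " + typ)
    return out

# LDA $1234 on an NMOS 6502: opcode $AD, then the address low byte first.
program = [("BIT08", 0xAD), ("BIT16", 0x1234)]
print([hex(b) for b in emit(program)])   # ['0xad', '0x34', '0x12']
```

The point is that the pair list itself is byte-order agnostic; endianness lives entirely in the output pass.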
If you look at the 'i6502.a' and related instruction set files closely, you'll see that that's what all the macros amount to. If you look at the file 'a_ins65x.awk' (the only difference between HXA_T and HXA65), you'll see that essentially it's doing the same thing, spitting out ['-BIT--', value] pairs (much faster, of course).
Having just written this, it's finally occurred to me that there's absolutely nothing to stop me from defining new '-BIT--' types that have the desired properties - 16-bit values that count as size one as far as the PC is concerned, for instance - while keeping all the current types. A particular native assembler or macro instruction set would use just the types it was interested in.
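A sketch of what such a type might look like, under my own assumptions rather than HXA's internals: each type carries both a byte count and a separate PC increment, so a 16-bit word can occupy one program-counter unit while still emitting two bytes. 'BIT16WORD' is a hypothetical name for the new type.

```python
# Hypothetical type table: each '-BIT--' type maps to (pc_step, byte_count).
TYPES = {
    "BIT16":     (2, 2),  # existing: two bytes, PC advances by 2
    "BIT16WORD": (1, 2),  # new (hypothetical): two bytes, PC advances by 1
}

def assemble(pairs, pc=0):
    """Return (final_pc, output_bytes) for a list of (type, value) pairs."""
    out = []
    for typ, value in pairs:
        pc_step, nbytes = TYPES[typ]
        out += [(value >> (8 * i)) & 0xFF for i in range(nbytes)]
        pc += pc_step
    return pc, out

pc, data = assemble([("BIT16WORD", 0x00AD), ("BIT16WORD", 0xCDEF)])
print(pc, len(data))   # PC advanced 2 units, but 4 bytes were emitted
```

With a table like this, the PC-retarding trick from the macros becomes unnecessary: the type itself records that the machine's addressable unit is a word, not a byte.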
So...16-bit values, opcodes or operands, are not much trouble: there are only two ways (2!) to order the two 8-bit halves. But there are 24 ways (4!) to output a 32-bit value in terms of those 8-bit 'nybbles'. HXA is agnostic when it comes to this sort of thing; it currently knows only two of the 24 orders internally, but that's only because the others haven't been made known to it. There's no fundamental difficulty in implementing other orders; it's just a question of exactly which one(s).
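The counting argument is easy to verify mechanically; this snippet (an illustration, nothing to do with HXA itself) enumerates all orderings of the four 8-bit pieces of a 32-bit value and checks that the two familiar ones are among them.

```python
# A 32-bit value splits into four 8-bit pieces, which can be output
# in 4! = 24 distinct orders; little- and big-endian are two of them.
from itertools import permutations

def byte_orders(value):
    """All 24 ways to order the four 8-bit pieces of a 32-bit value."""
    pieces = [(value >> (8 * i)) & 0xFF for i in range(4)]  # lo..hi
    return [tuple(pieces[i] for i in order)
            for order in permutations(range(4))]

orders = byte_orders(0x89ABCDEF)
print(len(orders))                          # 24
print((0xEF, 0xCD, 0xAB, 0x89) in orders)   # little-endian: True
print((0x89, 0xAB, 0xCD, 0xEF) in orders)   # big-endian: True
```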
The answer to that question is what I'm after. What does the proposed CPU expect to see? If it's this, as EE suggests:
Quote:
LDA $89ABCDEF ; 00AD CDEF 89AB
then this macro (also shown earlier) creates it:
Code:
.cpu T_32_M ; MSB-first order
.macro LDA, ?addr
.word $00AD ; opcode
.word ?addr ; lo 16 bits
.word ^(?addr) ; hi 16 bits
.endm
Though a native version would do this faster, and would also not have to evaluate '?addr' twice.
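As a cross-check of the arithmetic, here's the same expansion sketched in Python (a hypothetical helper, just to confirm the word order matches EE's example): the opcode word $00AD, then the low 16 bits of the address, then the high 16 bits (what '^(?addr)' extracts in the macro).

```python
# Word-by-word expansion of LDA on the proposed 16-bit-word machine.
def lda(addr):
    return [0x00AD,            # opcode word
            addr & 0xFFFF,     # lo 16 bits
            (addr >> 16) & 0xFFFF]  # hi 16 bits

words = lda(0x89ABCDEF)
print(" ".join("%04X" % w for w in words))   # 00AD CDEF 89AB
```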