PostPosted: Thu Jan 14, 2016 6:52 am 

Joined: Thu May 28, 2009 9:46 pm
Posts: 8254
Location: Midwestern USA
sark02 wrote:
It might be fun to hear about members' largest 65xx programming feats (as opposed to small projects that integrated large external works).

The largest single assembly language project I ever did was a truck (lorry) leasing and billing package that ran on a bunch of Commodore 128Ds multiplexed to an 80 megabyte Lt. Kernal hard disk subsystem. The package made heavy use of ISAM databases, which were not natively supported by the Lt. Kernal DOS, requiring that I scratch-develop a lot of primitive code to manage record and file locking, arbitration of access, etc.

The finished project had nearly 100,000 lines of code. Assembling the entire package took nearly two days of run time. However, excepting a REX (resident executive) and the ISAM database engine, programs were mostly standalone and could be independently assembled, with all definitions in INCLUDE files. En masse assembly was only necessary if a common definition had to be changed. Most of the time, assembly of a few files was all that was necessary to make changes or add functions and features.

In the area of development for my POC project, the largest single program right now is my mkfs (make filesystem) program, which has around 17,000 lines of code. Again, it makes heavy use of INCLUDE files containing common definitions that all programs running on POC would need (e.g., BIOS jump table, data types, etc.). In the case of mkfs, I didn't actually type in 17,000 lines, as about half of it was stuff that had already been written for other purposes.

POC's firmware currently has about 12,300 lines of code. Again, it's heavily dependent on INCLUDE files. Of course, I had to write all the INCLUDE files. :D

Quote:
- Was the project scale/function known going in, or did the project evolve over time?

In most cases, I had a pretty good idea of where I wanted the project to go and what its scale would be. However, all non-trivial projects evolve, especially when better algorithms are worked out for key parts of the code.

Quote:
- Was the program modular, with each module developed and tested within a test environment and then integrated into the whole, or were new things added in-place?

I seldom do assembly language programs that way. I do have tested functions (especially for display management) that get integrated. However, they are not standalone programs.

Quote:
- Did you make heavy use of macros?

I make extensive use of macros, especially for calling functions that require a parameter stack frame. Macros not only save a lot of typing, they reduce the likelihood of introducing bugs, as the assembler will halt with an error if the parameters associated with the macro call are incorrect, insufficient, etc. I have written a macro that other macros use to generate the instructions that build the stack frame. If I hadn't written it, I'd spend a lot of time pounding in instructions to build stack frames whose elements are in the right order and of the right type and size.
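As a minimal sketch of the idea (ca65 syntax for illustration, not my actual macros, and all the names are made up), a pair of macros that build a two-parameter frame and refuse to assemble a bad call might look like this:

Code:
        ; A bad call stops the assembly instead of producing a
        ; corrupt frame.
                .macro  pushw   arg
                .ifblank arg
                .error  "pushw: missing argument"
                .endif
                lda     #>(arg)         ; push high byte first...
                pha
                lda     #<(arg)         ; ...so low byte ends up on top
                pha
                .endmacro

                .macro  call2   func, arg1, arg2
                pushw   arg1            ; frame elements in a fixed order
                pushw   arg2
                jsr     func            ; callee consumes the frame
                .endmacro

A call such as "call2 somefunc, $1234, $5678" then always yields the same frame layout, no matter who writes it.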

Quote:
- Was the project spread around multiple independently assembled source files and then combined using a linker to produce a single binary, or did you use independent binaries with jump tables, or did you include all source files into one large file for assembly, or something else?

On POC I assemble everything en masse.

Quote:
- If your system processed I/O and had timing elements, could you exhaustively test all the code paths prior to integration, or did you cook up a test environment that let you simulate events, or something else?

I concocted tests to prove that I/O worked as it should, simply by loading the code on the machine and running it. Serial I/O wasn't too much work (other than the timing issue involving NXP UARTs). Testing and debugging SCSI was a challenge at first, as it is interrupt-driven and does stack acrobatics to route execution as the SCSI bus changes phases. If something was wrong, the machine went down like an engine-less DC-10. :evil:

Quote:
- Did your program include telemetry, stats, counters or other run-time data to let you examine the run-time condition of the system? If so, how was that data extracted and observed?

Other than occasional use of a UART timer to measure SCSI performance, I just watched what happened through the M/L monitor.

Quote:
- Did your system include an interface for debugging and/or examination? If so, and if your system included real-time elements (e.g. service I/O or dealt with critical timing), did you encounter any particular conflicts between debug/run-time.

I suppose the M/L monitor would be the debugging interface. Since I scratch-developed both the monitor and the underlying environment on which it runs, conflicts were (in theory) non-existent. :lol:

_________________
x86?  We ain't got no x86.  We don't NEED no stinking x86!


PostPosted: Thu Jan 14, 2016 7:29 am 

Joined: Tue Nov 10, 2015 5:46 am
Posts: 217
Location: Kent, UK
BigDumbDinosaur, thanks for describing those projects. 100,000 lines taking two days to assemble. Holy God.

I'll assume that the truck leasing and billing system was a commercial project, so I hope you don't mind if I ask a few follow-up questions:
- Why was the Commodore 128D chosen as the hardware platform? Was it your prerogative? Did the client/employer give any push-back to using a "toy" computer? (I'm not disrespecting the C128, but I remember "grown-ups" in the early '80s being very dismissive towards the thought that home computers could be used for anything beyond kids in their bedrooms playing games).
- Why, specifically, the C128D? Did you use the on-board Z80? Did you use the high memory? A friend of mine had a C128, but always used it in C64 mode.
- By "running on a bunch of C128Ds", did you mean running together to perform a single, collective, function, or each C128D was a essentially a client terminal that could run this application or that application (on demand), but they all talked to the same database through the multiplexer? Or something else?


PostPosted: Thu Jan 14, 2016 12:47 pm 

Joined: Sun Apr 10, 2011 8:29 am
Posts: 597
Location: Norway/Japan
I did most of my 6502 work back in the eighties, and I don't have the sources of that around anymore (although I'm still looking - found one manual I wrote the other day). Most of what I did back then I did on an Apple II, and usually that would be a combination of UCSD Pascal and 6502 assembly language. As I don't have the code around anymore I don't remember to what extent I used macros etc., or even how large the 6502 parts were.
Anyway, the two biggest setups were one for controlling a programmed-tracking satellite dish, and one for monitoring ptarmigans in a lab setup. There were a number of birds, each in their own cage, and each cage was set up as a big capacitor: the bird affected the dielectric properties of that capacitor, and by applying a signal and monitoring the changes you could track the activity of the bird. Then there were some other data inputs, a printer which would regularly print out statistics, and data would be stored to floppy now and then. The birds were arctic ptarmigans (living at very high latitudes) and not much was known about their metabolism at the time (long winters with no sunlight for more than four months, and then later 24-hour sunlight for as long).

So I wrote some software in a combination of UCSD Pascal and 6502 code, with cooperative multitasking so that it could do all these things at the same time. There were some VIA I/O boards involved, and some special hardware designed by a med-tech guy who worked at a hospital. I don't remember much more of it, except that there was no room left on the Apple II. I used four floppy drives; the extra ones provided swap space so that the editor had room to work. The whole thing worked though, and I heard some years later that a researcher got a PhD out of the data collected.
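I no longer have the code, but the general shape of such a cooperative loop on a 6502 is easy to sketch (every routine name below is hypothetical; each task did a small slice of work and returned promptly rather than blocking):

Code:
        ; Hypothetical cooperative main loop: each task runs to
        ; completion quickly, so long waits are implemented as
        ; "check state and return" instead of busy loops.
        main:   jsr     poll_cages      ; sample the capacitive cages
                jsr     poll_inputs     ; other data inputs
                jsr     run_printer     ; feed the statistics printout
                jsr     run_floppy      ; flush buffered data to floppy
                jmp     main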


PostPosted: Thu Jan 14, 2016 3:04 pm 

Joined: Sun Nov 08, 2009 1:56 am
Posts: 390
Location: Minnesota
Possibly the largest program I wrote was a UCSD Pascal interpreter for the C64/C128. It came to about 13K assembled (C64) or 14K for the C128 (exploiting the additional hardware accounted for the difference). That was done by conditional assembly from one source.

Quote:
- Was the project scale/function known going in, or did the project evolve over time?


Known from the start. The ultimate purpose was to port the "Wizardry" series of role-playing games, which were written in UCSD Pascal, to the Commodore computers.

Quote:
- Was the program modular, with each module developed and tested within a test environment and then integrated into the whole, or were new things added in-place?


The source eventually became modular once it became too big to fit into the C64's memory at one time.

Quote:
- Did you make heavy use of macros?


This is the project where I learned to use macros. Assembly speed on the C64 slowed down immensely if it had to read source from disk as opposed to memory, and as that prospect came nearer it occurred to me that I could use macros to shrink the source and put that off as long as possible. Once I learned how to use them I came to appreciate how they could also cut down on stupid typing errors.
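Merlin's macro syntax differs, but as a sketch of the kind of source-shrinking macro I mean (shown in ca65 syntax, with a made-up name):

Code:
        ; 16-bit increment: four lines of boilerplate collapse into
        ; one macro call everywhere it's needed, and there's no way
        ; to mistype the branch.
                .macro  incw    addr
                .local  skip
                inc     addr
                bne     skip            ; skip high byte unless low wrapped
                inc     addr+1
        skip:
                .endmacro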

Quote:
- Was the project spread around multiple independently assembled source files and then combined using a linker to produce a single binary, or did you use independent binaries with jump tables, or did you include all source files into one large file for assembly, or something else?


Multiple source files assembled all at once into a single binary. The Merlin64 assembler I used did not have linking capabilities. Merlin128 did, and I did use it eventually for this project, but not its linking ability.

Quote:
- If your system processed I/O and had timing elements, could you exhaustively test all the code paths prior to integration, or did you cook up a test environment that let you simulate events, or something else?


The biggest problem - and we all knew this going in - was the slow transfer speed of the 1541 drives. Since UCSD Pascal has overlay capabilities that Wizardry depends on, this was going to have to be solved. I did eventually figure out a way to use the 6502 in a 1541 to speed up the transfer. This was tested independently of the main program. When it didn't speed up as much as I expected, it was suggested that I play with the sector interleave to account for the time it took the disk to rotate. That pretty much solved the problem.

Of course memory-to-memory transfer was much faster, so if there was a RAMdrive hanging off the system, I used that as a cache. I also used the second 64K RAM bank of the C128, and eventually its 80-column video chip's memory as well.
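The interleave calculation itself is just modular arithmetic over the sectors of a track: while the drive digests one sector, the head passes over the next few, so consecutive logical sectors get laid out several physical sectors apart. A hypothetical sketch, with made-up symbols:

Code:
        ; sector = current sector; spt = sectors per track (21/19/18/17
        ; on a 1541, depending on the zone).  INTERLEAVE is a made-up
        ; value to be tuned per drive.
        INTERLEAVE = 10

        ; next physical sector = (sector + INTERLEAVE) mod spt
                lda     sector
                clc
                adc     #INTERLEAVE
                cmp     spt             ; past the end of the track?
                bcc     keep
                sbc     spt             ; wrap (carry is set, so no SEC)
        keep:   sta     sector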

Quote:
- Did your program include telemetry, stats, counters or other run-time data to let you examine the run-time condition of the system? If so, how was that data extracted and observed?


The only thing I recall is counters for the byte codes being executed. Simple: just use the code itself as an offset into a table of counters. I think I was looking for a reason not to implement a particular code that looked hard to do, so I was hoping for a zero value after running the game for a while.
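In 6502 terms that's only a few cycles per dispatch. A sketch (ca65 syntax, hypothetical table names), assuming the byte code is in X:

Code:
        ; One 16-bit counter per byte code, split across two 256-byte
        ; tables so the code itself can index them directly.
        countlo: .res   256
        counthi: .res   256

                inc     countlo,x       ; X = byte code being executed
                bne     done
                inc     counthi,x       ; propagate the carry
        done: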

Quote:
- Did your system include an interface for debugging and/or examination? If so, and if your system included real-time elements (e.g. service I/O or dealt with critical timing), did you encounter any particular conflicts between debug/run-time.


There was a small self-modifying advance-pc-and-fetch-byte-code loop in zero page, similar to what the BASIC interpreter used. It was easy enough to put a BRK opcode into it, exit to a monitor, examine the machine state, and then return to execute the next byte code. I did find a few bugs in the interpreter that way. But that only affected the machine-independent interpreter; once execution was passed to a machine-specific I/O or other support routine, that ran to completion without interruption.
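For anyone who hasn't seen the trick: as in BASIC's CHRGET, the interpreter's program counter lives inside the operand of the fetch instruction itself. A hypothetical sketch, not the actual interpreter:

Code:
        ; Self-modifying fetch loop: the LDA operand *is* the
        ; interpreter PC.  Overwriting the LDA with a BRK drops
        ; into the monitor between byte codes.
        nextop: inc     fetch+1         ; bump low byte of embedded address
                bne     fetch
                inc     fetch+2         ; carry into the high byte
        fetch:  lda     $ffff           ; operand rewritten by the INCs
                tax                     ; dispatch on the byte code...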


PostPosted: Thu Jan 14, 2016 5:21 pm 

Joined: Thu May 28, 2009 9:46 pm
Posts: 8254
Location: Midwestern USA
sark02 wrote:
BigDumbDinosaur, thanks for describing those projects. 100,000 lines taking two days to assemble. Holy God.

Nowadays I suspect assembly time would be less than a minute. The Kowalski simulator will assemble my mkfs program (17,000+ lines of source code) in a few seconds. Things have improved a bit since the days of the C-128 and its 2 MHz 8502 MPU. :D

BTW, my development system was supported by a UPS. A power blip during assembly would have been unpleasant, to say the least. :twisted:

Quote:
I'll assume that the truck leasing and billing system was a commercial project...

It was.

Quote:
Why was the Commodore 128D chosen as the hardware platform? Was it your prerogative?

Actually, the 128D and Lt. Kernal combination was a pragmatic choice made by my client—I suggested, they chose. The client was a startup company whose capital was largely invested in refrigerated truck trailers, which at the time cost about 40,000 USD each. They were prepared to do all the leasing and billing activity on paper when the owner's then-wife came to me and asked if I could help them out. It was immediately apparent that their requirements would be best met on a system that supported concurrent transaction processing. Unfortunately, the available machines at the time that could do that ran upwards of 25,000 USD minimum, just for the hardware and an operating environment. I was able to do everything for about half of that, with most of the cost in software development. As the husband and wife were friends and were financially stretched very thin, I gave them the best price I could without short-changing myself. They got the functionality they needed at a much lower cost, and the only significant downside was somewhat slower performance.

Quote:
Did the client/employer give any push-back to using a "toy" computer?

I don't think they ever saw the C-128D as a toy. I already had a Lt. Kernal system for my own use, so what I did was bring in two C-128Ds, the Lt. Kernal and a multiplexer for a demo. When they saw what the combination could do they were sold on the idea. It was a whole lot cheaper than a Point 4 or MAI Basic Four mini, and took up less space.

Quote:
Why, specifically, the C128D?

Several reasons, some technical and some cosmetic. Going back to the "it's a toy" mentality, the C-128D looked more professional than the flat 128 due to the detachable keyboard and general arrangement. It was more PC-like in appearance, which meant it would not look out of place in an office.

I already had another friend who was (still is) in the auto repair business doing his business processing on a C-128D, and he was quite satisfied with the machine. So it wasn't as though I would be venturing into the unknown.

From a technical perspective, the 80 column display was a requirement, and since the 8568 VDC in the C-128D was shipped with 64KB of video RAM, the display capabilities were substantially enhanced over the flat 128. I devised a simple windowing system that took advantage of the extra video RAM and the 8568's "blitter" capabilities to produce fast and responsive display changes.
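For anyone unfamiliar with the chip: the 8563/8568 is driven through just an address/status port and a data port, and block operations run entirely inside its own video RAM. A hedged sketch of the handshake (register numbers per the C128 Programmer's Reference Guide; the helper name is made up):

Code:
        VDCADR  = $d600                 ; VDC address/status register
        VDCDAT  = $d601                 ; VDC data register

        ; Write A to internal VDC register X, honoring the ready bit.
        wrvdc:  stx     VDCADR          ; select internal register
        wait:   bit     VDCADR          ; status bit 7 = ready
                bpl     wait
                sta     VDCDAT
                rts

        ; A "blit" then amounts to: R18/R19 = destination address,
        ; R24 bit 7 = 1 (copy mode), R32/R33 = source address, and
        ; writing the byte count to R30 kicks off the copy in VRAM.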

Quote:
Did you use the on-board Z80?

No, and I never even contemplated doing so. For one thing, I'm not fluent in Z80 assembly language. More importantly, the Z80 in the C-128(D) was hamstrung, as it couldn't directly talk to the I/O hardware. CP/M on the C-128 had to use the 8502 as an intermediary, which really hurt performance, since CP/M tends to be heavily I/O-bound. Also, the Lt. Kernal DOS was written entirely in 6502 assembly language, and the APIs were likewise all 6502 assembly language. Getting the Z80 involved would have been a programming nightmare, with little gain.

Quote:
Did you use the high memory?

By high memory are you referring to the RAM under the ROMs? I did use that RAM for disk buffers, as a place for an interrupt handler, and as an intermediary space for inter-workstation communication.

I made extensive use of RAM1 for data structures. Rather than use the cross-bank load/store functions in the kernel, I wrote new ones that were "one way." That reduced the amount of MMU activity and produced better performance.

Quote:
By "running on a bunch of C128Ds", did you mean running together to perform a single, collective, function, or each C128D was a essentially a client terminal that could run this application or that application (on demand), but they all talked to the same database through the multiplexer? Or something else?

Each C-128D essentially ran as a smart terminal, both executing the program in use and handling user interactivity. The only time users would become aware of other users' presence would be when two (or more) attempted to acquire a lock on a resource.

I had a lot of help from Fiscal Information when I embarked on this project, as it was the first of its kind. The final version of the system had twin 40 MB ST-506 disks and a streaming tape drive, attached to an OMTI SASI controller (SASI is the ancestor of SCSI). There were three multiplexers ganged together to attach a total of 12 machines to the bus. Eleven were C-128Ds and the 12th machine was a C-64 acting as a printer driver. The C-64 was attached to three parallel printers via a custom interface built from three 6526 CIAs; the interface plugged into the cartridge slot, which it shared with the Lt. Kernal host adapter.

The client ran on this system for some four years, at which time they had amassed enough capital to afford a UNIX system (which I also built and programmed for them). That was the end of the use of "toy" computers to run a multi-million dollar truck leasing business. :D



PostPosted: Fri Jan 15, 2016 2:47 am 

Joined: Tue Nov 10, 2015 5:46 am
Posts: 217
Location: Kent, UK
BigDumbDinosaur wrote:
Going back to the "it's a toy" mentality, the C-128D looked more professional than the flat 128 due to the detachable keyboard and general arrangement. It was more PC-like in appearance, which meant it would not look out of place in an office.
I hadn't realized that. The picture I had in my head was a flat C=128 with a drive on the side - sort of like the Amiga 500. A Google image search clued me in, though. Yes, it was a professional-looking piece of kit. Very nice.

Quote:
By high memory are you referring to the RAM under the ROMs? I did use that RAM for disk buffers, as a place for an interrupt handler, and as an intermediary space for inter-workstation communication.
I thought the 128 had 128KB of RAM. That's what I meant by "high memory", and I assumed it used some kind of paging scheme. Are we talking about the same thing?


PostPosted: Fri Jan 15, 2016 2:53 am 

Joined: Tue Nov 10, 2015 5:46 am
Posts: 217
Location: Kent, UK
Tor wrote:
Most of what I did back then I did on an Apple II, and usually that would be a combination of UCSD Pascal and 6502 assembly language.
teamtempest wrote:
Possibly the largest program I wrote was a UCSD Pascal interpreter for the C64/C128
I realize they're not the same, but I love these coincidences! Was UCSD Pascal a popular language on 8-bitters?


PostPosted: Fri Jan 15, 2016 2:55 am 

Joined: Thu May 28, 2009 9:46 pm
Posts: 8254
Location: Midwestern USA
sark02 wrote:
I thought the 128 had 128KB of RAM. That's what I meant by "high memory", and I assumed it used some kind of paging scheme. Are we talking about the same thing?

The C-128(D) did have 128 KB (hence the model name), but it also had an 8502 MPU, which, like its 6502 ancestor, has a 16-bit address bus and thus can only see 64 KB at a time.

The C-128 made the 128 KB of RAM appear as two banks, and had a device called the MMU (memory management unit) to determine which bank was in context, as well as the combination of RAM, ROM and I/O. The MMU also defined some common areas of RAM that were the same regardless of the bank in context. These common areas were the key to supporting cross-bank transfers.
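For instance, a fetch routine sitting in common RAM can flip the MMU's configuration register at $FF00 around a single load. A hypothetical sketch ($3F and $7F are the standard all-RAM configuration values for banks 0 and 1; the routine name and pointer location are made up, and the pointer must also sit in common RAM):

Code:
        MMUCR   = $ff00                 ; MMU configuration register
        BANK0   = $3f                   ; all RAM, block 0
        BANK1   = $7f                   ; all RAM, block 1
        ptr     = $fa                   ; hypothetical ZP pointer

        ; Fetch one byte from bank 1; this routine must itself live
        ; in a common RAM area so it survives the bank switch.
        getb1:  lda     #BANK1
                sta     MMUCR           ; bank 1 comes into context
                lda     (ptr),y         ; the actual cross-bank load
                ldx     #BANK0
                stx     MMUCR           ; restore bank 0 without touching A
                rts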



PostPosted: Thu Sep 29, 2016 9:02 pm 

Joined: Tue Jul 05, 2005 7:08 pm
Posts: 1019
Location: near Heidelberg, Germany
I have two candidates for my largest programs, although I would have to count lines of code...

1) my BDOS system to read and write PC disks with a C128. It basically included an almost complete DOS with a command-line interface to copy files, as well as a GUI with some simple menus and windowing. IIRC about 20k+ assembled: http://www.6502.org/users/andre/misc/

2) my GeckOS preemptive multitasking operating system, which included a whole load of accompanying programs - it depends on whether you count them as well:
- kernel 2-6k
- IEEE488 / IEC bus file systems ca. 6k
- TCP/IP over SLIP stack ca. 8k
- machine language monitor 8+k
- lsh "unix" shell (no scripting in there though)
- ...
http://www.6502.org/users/andre/osa/index.html

_________________
Author of the GeckOS multitasking operating system, the usb65 stack, designer of the Micro-PET, and much more 6502 content: http://6502.org/users/andre/


PostPosted: Fri Sep 30, 2016 1:13 am 

Joined: Sun Apr 10, 2011 8:29 am
Posts: 597
Location: Norway/Japan
sark02 wrote:
Tor wrote:
Most of what I did back then I did on an Apple II, and usually that would be a combination of UCSD Pascal and 6502 assembly language.
teamtempest wrote:
Possibly the largest program I wrote was a UCSD Pascal interpreter for the C64/C128
I realize they're not the same, but I love these coincidences! Was UCSD Pascal a popular language on 8-bitters?
I hadn't noticed this reply before. UCSD Pascal was definitely popular with the Apple II, but I'm not sure about other 8-bitters. It was certainly a well-known system. Turbo Pascal included very similar extensions (to make Pascal usable in the real world), so the UCSD dialect had become a kind of standard at that point.
To elaborate more about UCSD Pascal would require its own thread though.


PostPosted: Fri Sep 30, 2016 4:10 am 

Joined: Sat Dec 07, 2013 4:32 pm
Posts: 246
Location: The Kettle Moraine
Historically, I have no idea. I've probably assembled 4k of code without strings for something or other on some project in the past. Most of what I've done over the years is gone and forgotten. Every now and then I stumble upon some large project I had and can hardly believe I did it; I have no recollection whatsoever of having done it.

Currently, my 65xx assembler and editor package comprises a total of about 3500 lines, unassembled. I never use macros, because the assembler I use, which I wrote, doesn't support them at this time. The assembler itself is around 1500 lines, and is far from complete. It will currently only do one pass, and doesn't actually assemble anything yet. It is modular; that is, the assembler is in two pieces. The cross reference for 6502 mnemonics and opcodes is separate from the code, so that it can eventually handle the 65c02 and perhaps others.
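As a sketch of what such a separable cross reference can look like (a hypothetical layout in ca65 syntax, not the table format I actually use): each record packs the three mnemonic characters with a base opcode, so supporting the 65c02 just means assembling against a different table file.

Code:
        ; Hypothetical mnemonic cross reference: three ASCII chars
        ; plus the opcode for a baseline addressing mode; a zero
        ; byte terminates the table.
        mntab:  .byte   "LDA", $a9      ; immediate-mode base opcode
                .byte   "LDX", $a2
                .byte   "STA", $85      ; zero-page base opcode
                .byte   $00             ; end of table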

The amount of code involved in getting to this point is astronomical. First I wrote an editor and assembler in BASIC. That editor was limited to around 650 lines and was painfully slow: only one line was in memory, the rest were always swapped to disk. So I used that to write a barebones editor in assembly, which I used to write the current, very functional editor. Now I'm working on the assembler, written in assembly, which is actually on its third iteration. I scrapped it and started over twice so far.

The amount of man and machine hours in this project, if I had kept track, would be staggering. It was only recently that I started using an emulator to speed up my assembler by nearly 1000%. I ran the editor at 200% for a while, but currently always run that at 100%, so all that gets saved is emulated disk time. I do my editing without the emulator whenever I can; it's much nicer that way. The number of times I would actually assemble in one day went from about a dozen down to just one, prior to using the emulator. I think if I assembled the assembler without the emulator right now, it would take over an hour and a half.

Because of how slowly things progressed, I would lose track of what I was doing a lot. That is the main reason I scrapped and restarted so many times. I don't even recall how many times I did that, or at least wiped out pages of changes, when writing the editor. Several times in the early going I had disk failures, causing me to start over due to that. I have two bad drives, so even though I was using multiple backups I didn't realise my backups were getting corrupted.

For two years, almost all my work was done during Sunday football. I'd watch the game, or nap, whilst the actual assembling was happening.

I went on a spree lately, as documented in another thread. In about a month I wrote over 1000 lines, just working on it a few hours a day. I've been working 60-hour weeks lately, so I haven't got much programming in at all. If I actually had some free time, I'd get a lot further. The assembler is very close to being able to do one-pass, non-symbolic assembling. Getting to two passes and symbols, well enough to assemble the assembler itself, really isn't that far off. From there, things should happen a lot quicker.

I did all of this, exactly the same way, 25 years ago. Except I had time then. Back then I had achieved a macroassembler that would assemble in numerous passes. Granted, it probably didn't have all the capabilities of a commercial product. I wouldn't know; I've never used any other macroassembler. Or at least, when I did (for other CPUs), I never used macros anyway.


PostPosted: Fri Sep 30, 2016 8:07 am 

Joined: Sun Apr 10, 2011 8:29 am
Posts: 597
Location: Norway/Japan
I've become so used to modern source code practices that I don't think I could ever do the editing on the target platform anymore. Now I can't imagine writing anything that isn't under source version control. If I write something at all, even just notes, I make a Git repo out of it, and that lets me keep track of everything I do, including stray side-experiments in the middle of something else. So I never lose a previous change, and it's also incredibly easy to go back to an old project and get into the thought process again - I just take a look at the commits and there it is. I have stuff going back to the nineties where I still make the occasional change - naturally that went through several version control system revisions, but the history is all there in Git now.

The editing is the important part. From there I can transfer the source to the target and build it there (and test), but at that point the step to also cross-*build* on the host platform and transfer the binary to the target is also very short.


PostPosted: Fri Sep 30, 2016 10:56 am 

Joined: Fri Nov 09, 2012 5:54 pm
Posts: 1397
Back in 1987 (29 years ago), I built this little computer:

[photo of the homebrew computer]

A 6522 for I/O, and an EF9367 for graphics (512×512, 8 colors).
//Graphics PCB not shown in the picture.
https://en.wikipedia.org/wiki/Thomson_EF936x

Had no assembler back then, just a C64 with an EPROM programmer, paper, pencil, and the book 'C64 Intern', which contained a documented ROM disassembly listing of the C64 kernal.

If I remember correctly, I spent a week without sleep typing about 8KB of raw 6502 machine code into something like a hex editor on the C64...

The trick is to go with 5 EPROMs in total, keeping 4 of them in the eraser while burning one and then plugging it into the socket to see what happens.

Version 68 of the machine code eventually worked as intended, go figure. :lol:

Needless to say, I'm not too happy that nowadays "another new layer of abstraction" gets thrown between me and the silicon every now and then...


PostPosted: Fri Sep 30, 2016 6:25 pm 

Joined: Fri Aug 30, 2002 1:09 am
Posts: 8464
Location: Southern California
ttlworks wrote:
Needless to say, I'm not too happy that nowadays "another new layer of abstraction" gets thrown between me and the silicon every now and then...

Abstraction is good if you can control it yourself. What I don't like is layers that are given to us that are nearly impenetrable.

_________________
http://WilsonMinesCo.com/ lots of 6502 resources
The "second front page" is http://wilsonminesco.com/links.html .
What's an additional VIA among friends, anyhow?


PostPosted: Sat Oct 01, 2016 12:46 am 

Joined: Wed Nov 18, 2015 8:36 am
Posts: 102
Location: UK
Wow, ttlworks, love what you did back in 1987! At the time, I was still at school and doing machine code on my Atari 800XL (as others have commented, the Atari 8 bits were great machines).

Back in the day when I started learning 6502, I didn't have money for an assembler (I was only just a teenager, and my parents had spent all their money on getting me a micro!). After doing a few hundred bytes of hex coding, I got fed up and decided to write my own assembler in BASIC. I wrote one for my Oric-1 (the first computer I owned) and then for the Atari too.

I think in those days I would not have been writing more than around 1-2KB. I only used machine code for performance-critical sections (i.e. a main game loop), and then did the regular stuff from BASIC (e.g. initial welcome and instruction screens for a game, or initialisation of graphics).

But after a long time away from it, a couple of years ago I decided to build my own 6502 home brew. So far I have around 15,000 lines of assembler (including comments etc.) generating just over 15KB. I have less than a K remaining of the 16KB space in the memory map for ROM. The 15KB squeezes in the following features:
- Kernel (interrupt handlers, timers etc.)
- Keyboard, Serial, Sound, Joystick, VDP, SD Card drivers
- Simplified FAT16 filesystem (load, save, dir and del files)
- Very simple monitor
- A completely home grown programming language (editor and interpreter) with commands to support graphics (sprites), sound and joystick

I'm most pleased with the built-in language (which I call dflat!). It takes up just under 8KB of the ROM, which, considering the number of commands and features it has (no goto - only structured flow and named procedures to transfer control), is not too shabby.

So now, in my 40s and more than 30 years since I first learnt 6502, this is by far the largest and most complex 6502 code I have personally ever written!

