whartung wrote:
drogon wrote:
It lets me build up libraries of routines to be re-used, if needed. I also find it easier to manage projects that way - currently my OS is 12 .s files with corresponding .h files. I know that speed really isn't the issue on a modern PC, though: even when I type 'make', it only assembles what needs to be built, which takes under a second. Sure - I could just have one file full of include statements, but I'd rather work with separate assembly units.
Garth's point (methinks) is that if you have to "include 'file.h'", why not simply "include 'file.s'" and skip the linkage phase.
It's a valid point.
My assembler (which does not currently support include) assembles Fig Forth at about 1000 lines/s, so it's 4s to assemble it.
Time isn't the issue on a modern system. My desktop - a somewhat modest Intel i3 system running Devuan Linux - assembles & links one file in the same time as all the files in my current project; it's in the millisecond region.
After a make clean:
Code:
gordon@wakko: time make
[Assemble] rubyOs.s
[Assemble] bios.s
[Assemble] larson.s
[Assemble] sw16.s
[Assemble] host.s
[Assemble] util.s
[Assemble] osRdWrCh.s
[Assemble] osByte.s
[Assemble] osWord.s
[Link]
[gmonCode.h]
Output is 3202 bytes
0.008u 0.004s 0:00.04 0.0% 0+0k 0+672io 0pf+0w
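For the curious, the Makefile driving that is nothing clever - roughly this shape (a sketch only: asm6502 and ld6502 are placeholder tool names, not my actual assembler and linker):
Code:
# Placeholder tool names - substitute your own assembler/linker.
ASM  = asm6502
LD   = ld6502
SRCS = rubyOs.s bios.s larson.s sw16.s host.s util.s \
       osRdWrCh.s osByte.s osWord.s
OBJS = $(SRCS:.s=.o)

rubyOs.rom: $(OBJS)
	@echo "[Link]"
	@$(LD) -o $@ $(OBJS)

# A .s file is re-assembled only when it or its header changes.
# (Recipe lines are tab-indented, as make requires.)
%.o: %.s %.h
	@echo "[Assemble] $<"
	@$(ASM) -o $@ $<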
whartung wrote:
I think it all depends on the complexity. One of the benefits of using object files and separate assembly is information hiding. Only public labels are public, no worries about name space conflicts across modules, etc.
For me, it's the way I've always done it - going right back to 1980, when I was introduced to the university's PDP-11 Unix system. Back then it was a necessity for anything non-trivial.
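The information-hiding point is a big part of it for me too: only the labels a module explicitly exports are visible to other files, so internal names can never clash across modules. A tiny sketch of the idea (ca65-style directives here, purely as a stand-in - my own assembler's syntax differs):
Code:
; util.s - only strcpy is public; loop/done stay private to this file
        .importzp src, dst      ; zero-page pointers, defined elsewhere
        .export  strcpy
strcpy: ldy #0
loop:   lda (src),y             ; copy bytes, terminator included
        sta (dst),y
        beq done
        iny
        bne loop
done:   rts

; host.s - imports the one public label and nothing else
        .import strcpy
        jsr     strcpy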
Typically, I start a project by writing code; then, when I feel a file has grown and started to accumulate a few subroutines/functions, I split it into what I feel are manageable sections, then edit those as needed, grow, split, and so on. It's not about speed - it's all about management. Makefiles are easy for me (because I've been using them forever), as is cross-referencing files, labels, etc. with the tools I have to hand. My biggest personal project is currently my BASIC interpreter - 130 files (about half source and half header files) totalling about 30K lines.
Part of my longer-term goal, though, is to make my system self-hosting, so to that end I'll be writing various utilities for it, and that's where separate compilation/assembly and common libraries will help time-wise: there is simply no way my little 6502 system, even running at 16MHz, can compete with my desktop, so doing things in smaller chunks makes sense there (just like in the old Unix days).
whartung wrote:
The other benefit of object files is late binding on memory. Simply, you can put all of the code together and all of the data together, vs. having them intermixed in the image. You can assign memory locations at link time rather than assembly time. Thus you could locate, for example, "strcpy" wherever you like without the location being coded within the strcpy routine.
This is why I re-wrote SWEET16 - for space efficiency, Woz gave it a built-in dependency on where it sits in memory, and I wanted to remove that. The computer (assembler/linker) ought to have a far better idea of where to place things than me sitting down, counting bytes and working it out... (at least I hope so!)
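The re-written version has no address baked into the source at all: the code just sits in a named segment, and the only place an address appears is the linker's configuration, so moving it is a one-line change. Roughly (ca65/ld65-style again, as an illustration of the idea rather than my actual tools):
Code:
; sw16.s - no .org in the source; placement happens at link time
        .segment "CODE"
        .export  sw16
sw16:   rts                     ; body omitted - the point is no address here

# ld65-style linker config: change 'start' and everything relocates
MEMORY   { ROM:  start = $C000, size = $4000; }
SEGMENTS { CODE: load  = ROM; }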
whartung wrote:
The simpler the program, the less necessary this kind of thing is. With flexibility comes complexity and build performance.
I think it's really much more important on a large '816 program that uses a large code space.
Highly likely - but who is writing large programs right now ('02 or '816)? People doing the Foenix or Commander X16 projects, probably, although I've no idea what they'll be using. I'll be using my own system to write/port my BASIC into it, though - the best test of it that I can think of.
-Gordon
_________________
--
Gordon Henderson.
See my Ruby 6502 and 65816 SBC projects here: https://projects.drogon.net/ruby/