PostPosted: Thu Nov 21, 2019 9:56 pm 

Joined: Mon Nov 18, 2019 8:08 pm
Posts: 9
I'll use the opportunity now to give a quick overview of how 65816 programming is currently supported, for those interested. It's a bit late, but as I've spent 3+ hours writing this I didn't want to throw it away now.

64tass compiles the usual 65816 opcodes with their addressing modes. By default ambiguous addressing modes are resolved through numeric ranges.

There's a 256-byte direct page range, and if an address falls within it, direct page addressing is used with the offset into that range. The base can be set by .dpage and is 0 by default. That's why addresses <256 normally use direct page addressing.
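
For example, a minimal sketch (the $1000 base is hypothetical, and the CPU's direct page register must actually point at $1000 at run time for this to be correct):
Code:
.dpage $1000   ; direct page range now covers $1000-$10ff
lda $1005      ; assembles as direct page addressing with offset 5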

Similarly, a data bank can be defined for data bank addresses, changeable with the .databank directive. It's 0 by default, so addresses <65536 will use data bank addressing. The assembler of course knows which opcodes always use bank 0 or program bank addressing, and those are not subject to this range.
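
A similar sketch (assuming .databank takes the bank number, and that the CPU's data bank register holds 2 at run time):
Code:
.databank 2    ; addresses $020000-$02ffff are now "in the data bank"
lda $021234    ; assembles as 16-bit data bank addressing of $1234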

Anything else uses long addressing if available.

This is good enough for simple use cases but not if direct page or data bank gets moved around a lot.

That's usually when the not-too-pretty @b, @w and @l style address size forcing comes up. Yes, it can be used to force 16- or 24-bit addressing to get data bank or long addressing, but then it's being used as a workaround against the address range detection.
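
For example ($12 would otherwise fall into the default direct page range):
Code:
lda @w $12   ; force a 16-bit operand, so data bank addressing
lda @l $12   ; force a 24-bit operand, so long addressing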

On the other hand, the address ranges can be disabled (direct page with .dpage ?, data bank with .databank ?), and then no automatic direct page or data bank addressing modes are generated. Without the range check, these addressing modes must be written out explicitly, as follows:
Code:
lda 4,d  ; for direct page
lda 12,b ; for data bank
This is non-standard but more consistent than the address size forcing. For completeness, there's also 0,k for the program bank. Of course, these addressing modes are available even if the ranges are still in use.

As one normally does not want to repeat these register bases (unlike the index registers), it's better to write them into constant definitions:
Code:
tmp  = 0,d
tmp2 = 2,d

lda tmp,x  ; for lda 0,d,x
sta tmp2   ; for sta 2,d
Or define regular labels with them:
Code:
.virtual 6,d    ; accepted for *= or .logical/.here as well
tmp  .word ?
tmp2 .word ?
.endv
The above constructs work similarly with stack addressing, which can simplify stack variable referencing:
Code:
lda (tmp),y  ; for lda (6,s),y
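The definition behind that example could look like this (a sketch following the same pattern as the ,d definitions above; the 6,s base assumes a particular stack layout):
Code:
.virtual 6,s
tmp  .word ?   ; tmp = 6,s
tmp2 .word ?   ; tmp2 = 8,s
.endv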
The other challenge is managing the immediate addressing mode's constant size.

There's nothing innovative here, the accu/index sizes are selected with directives (.al, .xl, .as, .xs) or through rep/sep constant tracking if enabled (.autsiz).
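
For example (with .autsiz enabled, the explicit directives after each rep/sep would be unnecessary):
Code:
sep #$20     ; 8-bit accumulator
.as
lda #$12     ; assembles an 8-bit immediate
rep #$20     ; 16-bit accumulator
.al
lda #$1234   ; assembles a 16-bit immediate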

Alternatively, if one does not want to bother with the directives, the address size forcing may be used where it's needed:
Code:
lda @w#0      ; 16 bit immediate
The >64KiB address space is supported but of course an appropriately large output format must be used for that. Banks are crossed in the image file while the program counter remains in the current program bank.

That's all I can think of which may be relevant for 65816 programming with 64tass.

[edited]

Of course I forgot a few things, like what to do about the ,d and ,b register bases in addresses when plain offsets are needed for indexed addressing. I'll add it here, as the end of the thread is too far away. I've added code blocks above as well, where needed.
Code:
tmp = 7,d

lda tmp        ; direct access is nothing special. But for an indexed one
ldx #tmp-(0,d) ; subtract the "base" (it's 0,d here) to get the offset
lda 0,d,x      ; as the CPU will add the direct page base back here
If this subtraction gets repetitive, as with a table of direct page offsets, then lists/tuples help:
Code:
.byte (adr1, adr2, adr3, adr4, adr5)-(0,d) ; subtracts 0,d from each of them
The 0,d had to be in parentheses because of operator precedence rules.

The same offset calculation is needed if the .dpage range was set to something other than zero, just with a plain number.
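
A sketch with a hypothetical $c00 base:
Code:
.dpage $c00     ; direct page range now covers $c00-$cff
tmp = $c07
ldx #tmp-$c00   ; subtract the plain-number base to get the offset
lda 0,d,x       ; the CPU adds the direct page base back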

65816 code can be in a program bank other than zero, and then jump address tables may need ".addr" instead of ".word". If an address gets larger than 16 bits, .word will complain about it, while .addr stores the low 16 bits and fails only if an address points to a different program bank.
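
For example (handler1 to handler3 are hypothetical labels in the current program bank):
Code:
jumps .addr handler1, handler2, handler3  ; stores the low 16 bits of each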

There are no special directives yet for tables of 16-bit data bank or 8-bit direct page addresses which would strip ,b and ,d and take the current .databank and .dpage settings into account, the way .addr does for program addresses. No one has asked for them, and it can be done by using other directives (.byte/.word) combined with base subtraction, as above.
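
A sketch of such tables (the ,b usage and the buf1/buf2 labels are assumptions following the patterns above; tmp/tmp2 are the ,d-based definitions from earlier):
Code:
.word (buf1, buf2)-(0,b)  ; 16-bit data bank offsets
.byte (tmp, tmp2)-(0,d)   ; 8-bit direct page offsets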


Last edited by soci on Sun Nov 24, 2019 6:29 am, edited 1 time in total.

PostPosted: Thu Nov 21, 2019 10:29 pm 

Joined: Sat Sep 29, 2012 10:15 pm
Posts: 899
soci, thanks. This is very useful and your effort is appreciated!
[edit] Armed with this, I can do everything... my search may be over

_________________
In theory, there is no difference between theory and practice. In practice, there is. ...Jan van de Snepscheut


PostPosted: Fri Nov 22, 2019 2:26 am 

Joined: Sat Dec 01, 2018 1:53 pm
Posts: 727
Location: Tokyo, Japan
Disclaimer: I've been a Unix guy since about 1983. (Not necessarily Linux, though that's my main platform these days because it's popular.) But each system of course sucks in its own unique, different way. (And FWIW, one of my favourite things is functional programming, which puts LISP right up there, though—blub warning—I wouldn't use it if I had a language with a Hindley-Milner type system available.) Enso, here I just want to point out a few of the design decisions that are helpful in some of the systems you don't like.
enso wrote:
I've looked at ca65 a couple of times. At some point I may even install it, but for now I am avoiding C. I think the taste left by GNU as (the worst assembler I've had to work with, perhaps) is what's giving me the willies, which is not fair since ca65 may be a fine assembler.

I have little experience with ca65, but I think you should definitely have a serious look at it.
  • You can completely ignore the C compiler and use just the assembler and linker. This works well, perhaps in part because there are plenty of ca65 users who do the same, so it's an assembler designed for use by assembly language programmers, not just by a compiler.
  • The main design decision you'd have to give a thumbs-up or -down to is using a linker. This can be great if you want to produce a lot of different versions of your code for different platforms (e.g., a LISP that can run from ROM or RAM on anything from a 4 KB Altair-like system to a 48 KB Apple II to a 128 KB Commodore 128, with various features depending on the target); it can be just a bit of added annoyance if you're doing much simpler builds.
  • It comes with some handy libraries for use on a lot of different 6502-based platforms that can be quite useful when you just want to quickly get some I/O working on a new platform.
  • The support is great, at least if you're a good programmer. Recently someone pointed out an obscure bug in their VIC-20 libraries (https://retrocomputing.stackexchange.com/q/12492/7208) that intrigued me, for which I filed an issue on GitHub, and the folks there were great at pointing me to the things I needed to learn to fix it, including helping with refactorings to improve existing library code. If you had features you wanted to add that were sensible and had reasonably broad application, I think you'd be likely to get them on to the master branch.

It does produce listing files, though I've not looked at them myself. These are obviously pre-link (and thus pre-relocation for non-absolute code), but I've often found that good enough for what I need when debugging my stuff built with ASxxxx, which is similar. ASxxxx does offer a nice feature where the linker will read a .lst file and produce a .rst file with the addresses and modified opcode arguments (though not the symbol table, sadly) rewritten in the listing file; you might consider whether it would be worth adding such a feature to ca65 if you settle on it.

Also, remember that the linker can add debug information to the output which, though it's a very different thing, might with some custom tooling be an alternative approach to solving some of the problems to which you currently apply a listing file.
Quote:
BitWise, no offense but I find Java a nuisance - a cumbersome language taking the worst qualities of C++.

Right, but if you're not actually reading/modifying the assembler itself, Java isn't a concern, just the JVM. And yeah, it's a bit of a pain because you need to go find and install a JVM on your system, but on Linux this is relatively painless. The slow startup time is also annoying.

If you are having to write code to run under the JVM, you'd probably like Clojure; it seems to be a fairly decent Lisp. (Re Leiningen, see below.) Or better yet, Scala (blub warning, see above re HM type systems).
Quote:
...I am likewise annoyed with python. I just don't like it - kind of the way I don't like perl. I like languages that either compile code I can look at, and if there is a VM I would much rather deal with something like Smalltalk as gives you an awful lot of power.

I've written a lot of code in Bash, Perl, Ruby and Python, and Python is without question the best of the bunch. Not necessarily the language itself, but the ecosystem around it; it's designed not just for portability like the others, but also for programmers.

Python does compile code you can look at; a disassembler for the bytecode generated by the compiler comes in the standard library. It also comes with a parser, letting you (relatively) easily build tools like pytest, which parses Python code and then tweaks the AST to instrument that code before passing it on to the compiler, allowing the test framework to "take apart" expressions and show you intermediate values when a test fails. (This is a major part of what makes pytest the best unit test framework I've ever seen, and I'm a 20-year TDD guy who's seen a lot of them.) It's good design like this that leaves me little doubt that, for a general "scripting tool," Python is the best one out there for modern platforms.

I wouldn't say that Smalltalk (which I have used) is more powerful than Python. It undoubtedly has better syntax, but the languages themselves are roughly similar in power, and Python has a much better environment for modern Linux/Mac/Windows machines if your concern is to use it as an integration tool for helping to build code, as opposed to writing standalone programs.
Quote:
I do wind up with something that requires python every year or so, and it's kind of different every time. It is a big download, has its own library system that downloads all kinds of crap that scares the pants off me every time

Every language system with a large package library has, and should have, its own package system for those. Trying to use platform packages is a nightmare if you want to run on more than one platform (and especially if you want any chance of working on more than one of Linux/Mac/Windows), and having to manually pull down packages used by your particular system is something you'd quickly write a tool for, thus producing some custom packaging system anyway.

But the really key thing that these systems provide (though it's not always used) is installing the packages in your project directory, not global to the system or even global to the user. (Python uses something called "virtualenv" for this, and also has a nice package called "virtualenvwrapper" that lets you create and enable/disable separate virtualenvs for general command line use. The latter lets you `workon foo`, `pip install somepackage`, use that from the command line, and then `deactivate` to remove it from your current environment, leaving it to be reactivated later if you like.) This is invaluable for being able to use these things without having them clutter up, or worse yet create bad interactions with, other things you're doing on that host.

If you want to see an example of how I use this, have a look at my 8bitdev repo. Just `git clone` it and run `./Test` from the top level (under Linux and probably MacOS; I've made no special provisions for Windows support though that wouldn't be hard to add). You'll see it download, build and install the ASxxxx assembler suite (this is completely separate from Python) and then install the pytest and py65 (a 6502 emulator) packages, build my code, and run the unit tests on it using pytest and py65. All of its stuff is put under the `.build` directory and affects your system and user account not at all, except that maybe you needed to make sure that `python3` itself is available.

_________________
Curt J. Sampson - github.com/0cjs


PostPosted: Fri Nov 22, 2019 4:03 am 

Joined: Sat Sep 29, 2012 10:15 pm
Posts: 899
[Highly opinionated opinion is opined below. No offense is intended. This is my explanation of why I do not care for python, java and the like. If you like them, you are probably smarter and more patient than I am. Permission to speak frankly. Peace!]

cjs - you make valid points. And ok, it's not fair to equate python with perl; it is actually a language, whereas perl is a pile of stuff. And at one point I toyed with indentation syntax in some toy languages I implemented back in the day. Sure it's useful, and if you know it and like it - great.

Clojure has some insanely wonderful stuff, but (aside from requiring java) Clojure is a Lisp-1, using a common namespace which makes macros a drag. And it tries to fix Common Lisp syntax with improvements on (), such as {} and []. Anyone who tries to fix Lisp syntax completely misses the point. People who come up with clever ideas to 'get rid of those annoying parentheses' should use CL for a few months and write a couple of macros. There is a good reason Lisp is here and unchanged 60 years later.

Smalltalk has many limitations (like no metaprogramming at all), but at least it's entirely self-contained. I can start up an image saved 20 years ago and run my code without fear. Same with Lisp, btw.

I have issues with package systems generally, and your point about local installs is a compromise, but why? The only reason to have that kind of an infrastructure is because the libraries are ****ty and need constant fixes, which, by the way, breaks everything along the way (which is the problem with all modern software). If someone has the balls to make 100MB Electron executables that boast 'portability', whatever they mean by that, then you can just include what you need with your python app - compiled or source or whatever - plus the entire compiler and all the libraries needed, and the Encyclopaedia Britannica, with room to spare. How much code is really there in an assembler? It's completely nuts.

As soon as you use a language with a large package library and a package system, especially a fine-grained one, you are totally screwed. Someone updates a library that is used by another library and your code breaks. Whose fault is it? By the time you figure out what happened, you've grown a neckbeard and learned all kinds of things you should never know about. The great chase is on, and you better keep updating your code daily. Bah.

This kind of retroactive bull**** really annoys me. I don't care if Guido has a revelation about what Python 7 should be, if I ran my assembler 10 years ago, 10 years from now when I install Linux on a clean machine it should still run. Even after Linux jams an update down my throat because I ran 'pooping birds' in python or whatever. I don't get a comfy feeling about that at all. Everything I've had that was written in python - that I was foolish enough to use - no longer works a couple of years later.

I am not afraid of learning how to get complicated stuff up and running, or figuring out how to use it. I just don't want to. Any language that cannot apply its full power to extend itself using the same syntax is just blub. You get a kitchen sink, and a set of wrenches, labeled in French, Dutch, whatever. Wrenches are wrenches. Keep them.

And thank you for the repo - I will perhaps give it a whirl sometime. Although the 32-bit linux caveat is already making me shudder (a few years ago I spent a lot of time getting some 32-bit package or other working well. I think it's easier these days).

[Edit] SBCL and other modern compilers infer types without declarations, and generate correct (and usually pretty fast) code.

_________________
In theory, there is no difference between theory and practice. In practice, there is. ...Jan van de Snepscheut


PostPosted: Fri Nov 22, 2019 1:01 pm 

Joined: Sat Dec 01, 2018 1:53 pm
Posts: 727
Location: Tokyo, Japan
enso wrote:
Clojure has some insanely wonderful stuff, but (aside from requiring java) Clojure is a Lisp-1, using a common namespace which makes macros a drag. And it tries to fix Common Lisp syntax with improvements on (), such as {} and [].

Well, Lisp-1 vs. Lisp-2 is a matter of taste, I suppose. But it's orthogonal to macro issues, of which Lisp has plenty of its own. Keep in mind it was Scheme that introduced hygienic macros.

As for {} and [], my understanding is that those are not "fixes" for Lisp syntax but simply syntactic sugar for equivalent S-expressions. (But I could be wrong about that.) If you object to that kind of thing, you also need to drop semicolons for comments and '(...) for (quote ...).

But no big deal; I'm not here to debate the specifics of Lisp implementations.

Quote:
I have issues with package systems generally, and your point about local installs is a compromise, but why? The only reason to have that kind of an infrastructure is because the libraries are ****ty and need constant fixes....

No, the primary reason is because they want to support distributed development, maintenance and use of a huge set of libraries and programs. It's utterly impractical to centrally co-ordinate development of even thousands, much less tens of thousands, of different libraries, command line executables, and so on. It's very much about making individual projects self-contained in a certain way, which is clearly a feature you want.

Quote:
...just include what you need with your python app, compiled or source or whatever, and the entire compiler and all the libraries needed, and the Encyclopedia Britanica, with room to spare. How much code is really there in an assembler? It's completely nuts.


I'm not sure what you mean by this. But certainly including all your stuff, rather than just providing links to it that a tool can automatically download and install in your project directory, is another option. If you object to both, yeah, you get stuck with "I can't use that tool." Fair enough, but when you decide you'll never use an emulator or unit test framework for your large assembler program your productivity will suffer in comparison to those who do.

Quote:
As soon as you use a language with a large package library and a package system, especially a fine-grained one, you are totally screwed. Someone updates a library that is used by another library and your code breaks.

You couldn't be more wrong about this: this is exactly the scenario that the package managers are designed to address and prevent (unless you took steps to tell them, "you can use any random versions of this package rather than the one I specified and tested").

This is why you can check out an old pile of code using Python's pip/requirements.txt, or NPM's package.json/package-lock.json, or similar things, and it just builds and works despite being entirely unable to compile with the modern versions of the packages it uses.

Quote:
Everything I've had that was written in python - that I was foolish enough to use - no longer works a couple of years later.

And that's precisely because you didn't use a system that properly specifies what dependencies you needed. Running decade-old Python 2 code is no problem if you set up your project to properly specify dependencies.

Quote:
And thank you for the repo - I will perhaps give it a whirl sometime. Although the 32-bit linux caveat is already making me shudder....

If it's an issue for your particular project, where you want to use a tool for which only 32-bit binaries are offered, I'd suggest you take another approach I've used elsewhere, which is simply to download the source and build it. I'm not offering my repo as a good tool for you to use; I'm offering it as a demonstration of some techniques and ideas that will help you make projects that are easier to build, for you and others, now and in the future.

_________________
Curt J. Sampson - github.com/0cjs


PostPosted: Fri Nov 22, 2019 4:56 pm 

Joined: Sat Sep 29, 2012 10:15 pm
Posts: 899
cjs wrote:
Well, Lisp-1 vs. Lisp-2 is a matter of taste, I suppose. But it's orthogonal to macro issues, of which Lisp has plenty of its own. Keep in mind it was Scheme that introduced hygenic macros.

There is a rational basis: Lisp-2 macros work pretty well, while Lisp-1 requires special 'hygienic' macros to fix problems that are non-issues in Lisp-2.

I may be wrong about the braces. I have serious issues with the backquote - everyone reads it differently. SBCL reads it as a struct containing the rest of the expression! Quote is benign, reading as (quote ...) - syntactic vinegar to me. And comments are ignored by the reader, so they don't count. You are right, enough Lisp, for now...

Now, package systems. I know very little about python's, and will defer to you. I don't know if python applications normally fix the versions of all libraries or bundle them. I suspect not: in my experience, something triggers an update, things break. The mentioned software that no longer works was not mine - it was stuff floating around GitHub and such. Once some python library disappeared - can't remember which one, but there was a lot of talk about it on the webs. I had to manually fix code to use some other library that replaced it, requiring slightly different parameters. Stone age stuff. Is it because the devs failed to bundle the libraries or specify them? I don't know. Above my paygrade. Someday I will use a python app and it will fail to break, and I may even like it. Until then, I avoid them as a rule.

You are probably right about the need for a deep library ecosystem - but that's for developers. If I, as a user, see package system failures, that is worse than a bug; that is a complete failure of the development environment - the developer should have chosen tools that are not so broken that they poke through the application and stab me. 30 years ago people put up with a few bugs in their software - and waited for version 2. Today it feels like those were the good old days.

My Ubuntu now has three different package systems, I believe. God help me. At least once a year something I need breaks from updates. Some new version of a library steps on an old one, and something does not work. Sometimes I get 'can't get there from here' errors about version conflicts that just cannot be resolved. I've gotten into completely impossible situations that way. That should not happen, but it certainly happens to me an awful lot.

So that's one of the reasons I am here. I can boot up my Apple ][ and run Zork and it will not bitrot on me. That's what I want from my tools, so I can write code that does not require updates, and the insanity won't pollute my mind. And hopefully write some code that will work for as long as the hardware is there.

_________________
In theory, there is no difference between theory and practice. In practice, there is. ...Jan van de Snepscheut


PostPosted: Fri Nov 22, 2019 6:11 pm 

Joined: Thu Dec 11, 2008 1:28 pm
Posts: 10938
Location: England
We do seem to get a lot of dependencies these days, and with that we sometimes get long chains of dependencies. It seems to me a trade-off - if, as a developer, you make use of heaps of packages which have deep dependencies, your offering might be richer and your development easier, but your users will potentially have trouble if they can't either capture a working state of play or successfully keep everything up to date. Or, as a developer, you can make an effort to use fewer dependencies, and shallower ones, and perhaps more stable ones, and then your users will have an easier time.

I think those two types of tactics might show up generational differences - older developers will aim for simplicity and self-containedness, where younger developers will be attracted to an always-connected always-updating ecosystem of dependencies.

I read recently the idea that we used to agonise over the possibility and the difficulty of software reuse. But suddenly we find we have an epidemic of reuse. A facility like a forum might be shipped as a complete OS image with many and deep dependencies already populated, and is meant to be installed in a container or a VM and run as-is. And because we do genuinely now have an epidemic of security problems, that container will be updated and overhauled several times a month.


PostPosted: Fri Nov 22, 2019 7:56 pm 

Joined: Sat Sep 29, 2012 10:15 pm
Posts: 899
BigEd wrote:
... older developers will aim for simplicity and self-containedness, where younger developers will be attracted to an always-connected always-updating ecosystem of dependencies.

Very interesting! That rings true, as the 'universe' expands and the (apparent) quality of code deteriorates, I am more and more reluctant to include other people's work by reference - as opposed to by copy (or at all). When I was younger and more optimistic it sounded like a good idea - everyone is improving their little fiefdom, and the entire project benefits. Now I see it as too many cooks in the oven. Each with their own opinions, quirks and decisions (that for some reason are entirely out of phase with mine!).

So we are between a rock, a hard place, and a cauldron of boiling water. There are three extreme points on a 2-d surface:
* Command: Write or at least grok all code involved, fix/update when you deem necessary (generally not possible);
* Freeze: include all libraries in use (all bugs and security problems are baked-in but will be slowly uncovered over time);
* Flow: Allow dynamic updates and expect constant, unpredictable emergent problems until the day you die;

Note that none of these preclude the existence of a deep ecosystem during development - we are concerned with the rest of the lifecycle.

The current trend is definitely runaway #3. Is there a sweet spot?

[edit] Speaking of 'always connected'... Real objects do not retroactively change - at least in this universe. When you buy a paper book, its text does not change when the author revises his master copy. More importantly, it shouldn't.

Our society was already on the runaway #3 path in terms of consumerism - when physical objects fail to upgrade themselves we force the issue by tossing them into the dump and buying new objects with borrowed money. Digital things presented an opportunity for an apparently free upgrade path (there is always a price!). However, just because you can do something does not mean it's a good idea.

To get back to computers: this retroactive change problem is a core issue in computer science. We are just beginning to scratch its surface. Clojure, for instance, has a pretty good grasp of mutability and persistence (I strongly recommend looking into at least its brilliant implementation of dynamic vectors [1]). Forth programmers work in a hyperstatic global environment: old stuff is still there, and some things that look like modifications to existing things are actually new things that shadow or extend old things. This is a rabbit hole that plunges you into depths of philosophy with seemingly simple questions: how do you determine what identity means in your environment? What does it mean to name things? It turns out there are serious benefits to not just changing things willy-nilly and losing history in dynamic environments. And some things really should be immutable; some edits should create new versions of objects instead of just changing them; some references should stay with the version originally given them.

Version-control systems are a step in the right direction. But local version control of each part does not assure that the entire system makes sense. The next logical step for GitHub is obviously a time-travel feature that lets you see everything as it existed at some point in the past. But unless there is a global protocol for committing not just your code but everything involved, there are no guarantees. Even then, without a really sophisticated system for reasoning about dependencies on a symbolic level, or even lower - deciphering pointer references, maybe - you are still in trouble: going back to a previous version of a utility may drag in an outdated distribution of Linux.

For an alternative look at human so-called information systems through the eyes of an intelligent horse-like alien, see http://ngnghm.github.io/blog/2015/08/02/chapter-1-the-way-houyhnhnms-compute/ by François-René Rideau.

Oddly enough, we are only a little bit off-topic.

(1) https://hypirion.com/musings/understanding-persistent-vector-pt-1

_________________
In theory, there is no difference between theory and practice. In practice, there is. ...Jan van de Snepscheut


PostPosted: Fri Nov 22, 2019 11:13 pm 

Joined: Sat Dec 01, 2018 1:53 pm
Posts: 727
Location: Tokyo, Japan
enso wrote:
You are probably right about the need for a deep library ecosystem - but that's for developers.

Well, yes, but we are talking about developers here, are we not? I had assumed all this was about how we set up the contents of a repo so that developers can most effectively change the code and build and test the new version of the product. That of course means being a "user" of other products, such as assemblers, compilers, languages in which you write your build system, frameworks that help you test, emulators, and so on. Management of those dependencies for a repo has been a major focus of mine for almost two decades now.

Quote:
Someday I will use a python app and it will fail to break, and I may even like it. Until then, I avoid them as a rule.

I'm keen on seeing different ways of doing what I do, given similar goals. Perhaps you could demonstrate how you would deal with the (currently pretty simple) situation I deal with in my 8bitdev repo: I want to write, build and test some assembly code, and have others able to do the same with the code I committed. Feel free to grab my two small proto-bigint routines and their tests (though I suspect you'll be translating those from pytest to something else), or use something else equivalent. Bonus points if you bring up a program in an emulator (I've not implemented that yet, though it's as much due to developer UI issues as technical ones.) What's your ideal for how something like this should work?

(And feel free to frame-challenge: you can propose that there's simply no unit testing because it's not necessary, if that's the way you want to roll, though I would have difficulty accepting that without a pretty convincing demonstration of how my life can be just as easy without unit tests as it is with them, in the case of writing assembly code for 8-bit computers.)

_________________
Curt J. Sampson - github.com/0cjs


PostPosted: Fri Nov 22, 2019 11:39 pm 

Joined: Sat Sep 29, 2012 10:15 pm
Posts: 899
To clarify: I draw a sharp distinction between using code and developing code. The event I was describing had to do with a utility that read an NZB file and pulled data off a news server. I am a user, clearly, and even seeing an error message on the command line is ok with me. Like out of disk space, can't connect, auth error, etc. That is reasonable. However, it was fine the day before, and then the parser broke with a bunch of gobbledygook, incomprehensible to a layperson. I did some googling and found that other people had the same problem, and it had to do with some deprecated library. I found a workaround - it was a few lines of code to pass different parameters to a similar function in another library. I suppose that makes me a developer, but no, this is a complete failure.

I get it - I write tons of code just to get something done, and don't protect it or test it well enough because, well, I am the only user (and when I come across it 5 years later, yes, I am sorry I failed to document or test it). But the trade-off is that for every piece of code I reuse years later, there is a broken hard drive full of useless code in my basement. I don't have 100,000 programmers like Microsoft to document and test useless code.

My point is, however, that it is entirely unacceptable, even for a utility passing itself off as an application, to crash in that particular way. I don't even think it's the programmer's fault. Should everyone, at application start, write verification routines scanning library directories to make sure the libraries that were there yesterday are still there - do you think that's acceptable? No, that's the linker's job. I am out of nice words here. If libc just disappeared (could happen, I suppose), I would just toss everything and switch to something like a BSD derivative with a conservative update policy, or RISC OS or something.

_________________
In theory, there is no difference between theory and practice. In practice, there is. ...Jan van de Snepscheut


PostPosted: Sat Nov 23, 2019 4:58 pm 

Joined: Sat Dec 01, 2018 1:53 pm
Posts: 727
Location: Tokyo, Japan
Well, I'm in full agreement with your complaint: it is certainly an application developer's job to make sure that his application, when installed by the end user, has its proper dependencies where possible and gives informative messages when it does not.

The Python ecosystem provides OS-independent tools for doing just this: pip and the standard packaging system. Ironically, this is the very tool you were saying you don't like and don't want to use.

_________________
Curt J. Sampson - github.com/0cjs


PostPosted: Sat Nov 23, 2019 10:55 pm 

Joined: Sat Sep 29, 2012 10:15 pm
Posts: 899
cjs wrote:
Well, I'm in full agreement with your complaint: it is certainly an application developer's job to make sure that his application, when installed by the end user, has its proper dependencies where possible and gives informative messages when it does not.

I think my point was that using a young (and inherently flawed) infrastructure that encourages mass participation as an application platform is a bad idea. Shame on you. If I use tools built on such a platform, shame on me.

Kind of like living in a high-rise apartment building that has a first floor "Rent-a-pickaxe and have a go at the foundation walls - now at no charge!" business.

J'accuse, Guido! And the rest of you who think applications should have the code swapped out underneath you! Invite me over for dinner and I will kick out chairs from under your butts.

There is no irony in me not wanting to update everyone's stuff all the time and watch other things break. There is an inherent sadness, and a sense of foreshadowing. Perhaps, a feeling of the wind of impending doom brushing softly against my face, as I slowly turn and walk away from the busy crowds.

_________________
In theory, there is no difference between theory and practice. In practice, there is. ...Jan van de Snepscheut


PostPosted: Sun Nov 24, 2019 7:07 am 

Joined: Thu May 28, 2009 9:46 pm
Posts: 8387
Location: Midwestern USA
BigEd wrote:
I think those two types of tactics might show up generational differences - older developers will aim for simplicity and self-containedness, where younger developers will be attracted to an always-connected always-updating ecosystem of dependencies.

There is a phrase that describes what you are thinking, one that I learned as a "green" software developer in the early 1970s: evolution, not revolution. Eric Raymond makes an indirect reference to this in his The Art of UNIX Programming, referring to it as the rule of least surprise.

Pardon me for being the gruff, old curmudgeon I am, but I share Enso's disdain for many of these so-called modern "languages." I dismiss them as languages designed to appeal to non-programmers, often with a structure and syntax that is alien to what the "older developer" (that would be me) would expect to see (the "...or die" error-handling of Perl is an example). In the case of the three "Pees" (Perl, PHP and Python), I find little about them that would excite me as a programmer. It seems something about them changes every five minutes, occasionally, as Enso noted, breaking compatibility with older work. Languages that do that are the work of programmers who are not looking at the big picture. There's a reason why the older languages, e.g., C, BASIC, etc., have stayed current, and that is due to evolution, not revolution. The 20-something crowd that seems to dominate Linux development these days should take note of that.

Getting back to 65C816 assemblers, I do (have done—I can't see well enough to write code and had to get my wife to proof-read this so I wouldn't appear to be a complete fool) my editing and assembling in Kowalski's simulator, using an extensive set of macros to synthesize '816-specific instructions and addressing modes. It's hardly ideal, but it does work.
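
A sketch of what one such macro might look like (hypothetical, assuming Kowalski-style .MACRO/.ENDM syntax with %1 as the first parameter; the real macro set is far more extensive):
Code:
REP     .MACRO          ; synthesize the '816 REP instruction
        .BYTE $C2, %1   ; $C2 is the REP opcode, %1 the mask byte
        .ENDM

SEP     .MACRO          ; synthesize the '816 SEP instruction
        .BYTE $E2, %1   ; $E2 is the SEP opcode
        .ENDM

        REP $30         ; e.g. select 16-bit accumulator and index registers
        SEP $20         ; back to an 8-bit accumulator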

_________________
x86?  We ain't got no x86.  We don't NEED no stinking x86!


PostPosted: Sun Nov 24, 2019 3:50 pm 

Joined: Sat Dec 12, 2015 7:48 pm
Posts: 143
Location: Lake Tahoe
BigDumbDinosaur wrote:
...
There's a reason why the older languages, e.g., C, BASIC, etc., have stayed current, and that is due to evolution, not revolution. The 20-something crowd that seems to dominate Linux development these days should take note of that.

...


"Hey you kids, get off my data bus!"

Seriously, keeping up with the latest iteration of libraries, frameworks, APIs, etc. is nothing short of exhausting. Once I get code developed, it then becomes a full-time job just keeping up with all the external changes that continuously crop up. I gave up a long time ago on Linux kernel drivers; the thrashing of the API was just too much. And trying to pick the GUI framework du jour? Ugh. Least common denominator: command line, C code.

Enough of *my* grumbling. I didn't see any mention of ACME's 65816 support. It's mostly focussed on 6502 code but has some decent support for the 65816. I switched to it (from ca65) when the Lawless Legends team chose it for their development tool. A nice, basic, single-file assembler. I was able to quickly port '02 PLASMA to it, and when 65816 support was needed it easily segued into 16-bit code. It does have some limitations with >64K memory:

https://sourceforge.net/p/acme-crossass ... /65816.txt


PostPosted: Sun Nov 24, 2019 5:35 pm 

Joined: Thu Dec 11, 2008 1:28 pm
Posts: 10938
Location: England
I read quite an interesting point a little while ago: in a field that's rapidly expanding and therefore taking on newly-trained people, the majority view is going to remain in the hands of those with at most a few years of experience. If the number of software engineers doubles every year - as it has done in the past, I think - then fully half the engineers have very little experience.

