I suspect my thoughts here may not be all that applicable to the kind of work the OP is doing, but I'll post them anyway just in case there's anything helpful in them.
My viewpoint might be slightly different because I generally reverse-engineer system and BASIC ROMs and the like, rather than games, so I generally have no need to get new builds of the code working soon after starting, if at all.
But I feel that it's pretty awkward and error-prone to be reworking large chunks of
.byte directives or misaligned disassembly in source code, not to mention handling new or renamed labels for those; I'd rather just run it back through the disassembler. So I generally just stick with adding all my material to the disassembler's info files (and re-running the disassembly) until I'm reasonably convinced that I've got the vast majority of the code worked out to at least the level of what's code and what's data. (This can be tricky in some instances; for example the
National/Panasonic JR-200 BIOS ROM is full of code and data that's never called or used in the BIOS ROM itself, but only used from the BASIC ROM. Also, sometimes labels are never referenced directly anyway, such as with routines that take a function number and then look up the address to call from a table.)
That said, this approach is not without its own pain. I had to develop techniques to quickly re-disassemble and switch between the info files and the disassembly output (a script that watches for changes to the files and automatically re-runs the disassembly, and setting my Vim buffer for the disassembly output to automatically reload when the disassembly output changes) and also write some post-processing to make the output format of
f9dasm a bit nicer. It's also not as nice editing f9dasm's info files as it is editing source code directly, though it's not been
quite annoying enough yet for me to just go write my own disassembler.
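For the curious, the watch-and-rebuild part needn't be fancy. Here's a minimal sketch; the file names are illustrative rather than my exact setup, the -info/-out flags are f9dasm's as I remember them, and the command is overridable via $F9DASM so you can adapt (or test) it:

```shell
# redasm: re-run the disassembly only when the info file is newer than the
# current output.  Wrap it in `while sleep 1; do ...; done` (or drive it from
# inotifywait/entr) to get the "watch for changes" behaviour.
redasm() {
    info=$1 out=$2 rom=$3
    if [ ! -e "$out" ] || [ "$info" -nt "$out" ]; then
        # F9DASM is overridable; defaults to the real disassembler.
        ${F9DASM:-f9dasm} -info "$info" -out "$out" "$rom"
    fi
}
```

On the editor side, Vim's 'autoread' option plus a CursorHold- or timer-triggered :checktime is what makes the buffer pick up the regenerated output automatically.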
The Git workflow for all of this is simple enough, though: do a bit of hacking and commit. I don't use branches and the like nearly as extensively in disassembly as I do when writing or modifying my own code, though I do regularly use branches local to my repo for short periods of time when I'm working something out. (These generally live for only a few hours at most, and never leave my local repo.) I do commit the disassembly output, even though it's a generated file, which means I need to take slightly more care when committing to avoid it getting out of sync, but this is necessary so that I can just point people to the repo on the web (on GitHub or GitLab or whatever) to read the disassembled code, rather than making them pull the repo and run the disassembly themselves. (As an example of the source info file and the disassembly output you can have a look at
Bn-BIOS/B1.info and
Bn-BIOS/B1.dis. Note that that's 6800 code.)
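If keeping the committed output in sync ever gets fiddly, a pre-commit hook can take the care out of it. A sketch, with the same illustrative file names as above and the disassembler command again overridable for testing:

```shell
# install_sync_hook: drop a pre-commit hook into the current repo that
# regenerates the disassembly output and stages it, so the committed copy
# can't drift out of sync with the info file.  Run from the repo root;
# file names and the f9dasm invocation are illustrative.
install_sync_hook() {
    cat > .git/hooks/pre-commit <<'EOF'
#!/bin/sh
${F9DASM:-f9dasm} -info B1.info -out B1.dis B1.bin || exit 1
git add B1.dis
EOF
    chmod +x .git/hooks/pre-commit
}
```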
BigEd wrote:
Then you can, for example, reassemble a series of versions hoping and intending to get the same binary every time, and if you don't you can bisect to find the first point of divergence.
Going back through commits to figure out where your assembly output diverged from the original binary generally isn't necessary so long as you have a script that does the assembly and comparison for you, and you run it on every commit before pushing. I suppose it can be handy to be able to go back to specific points in history and find out where you went wrong in assigning labels or adding comments or whatever, but honestly I don't do that much when reverse-engineering; I just fix things in the current version and move on. However, if you end up with more than one developer working on the code simultaneously, Git will be invaluable in helping you merge your changes together. (Sadly, I have always ended up working on this stuff alone, to date.)
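Concretely, that check is only a few lines of shell. This sketch assumes nothing about your toolchain: $ASM is whatever assembler you use (the `src -o out` argument order here is an assumption, adjust to taste), and the file names are placeholders:

```shell
# check_build: reassemble the source and verify it matches the original ROM
# byte for byte.  Meant to be run on every commit, e.g. from a hook or a
# pre-push script.
check_build() {
    src=$1 rom=$2 out=${3:-build.bin}
    # Invoke the assembler; the argument order is an assumption.
    ${ASM:?set ASM to your assembler command} "$src" -o "$out" || return 1
    if cmp -s "$out" "$rom"; then
        echo "OK: $out matches $rom"
    else
        echo "MISMATCH: $out differs from $rom" >&2
        return 1
    fi
}
```

If that check passes on every commit, bisecting for the point of divergence never comes up; if you do slip, `git bisect run` with this script will find the bad commit for you.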