sburrow wrote:
I was going to wait another day or two to reply, but I figured now is about right to respond to Gordon.
Thank you all for the excellent info! Lots of great ideas here.
Yesterday I tried the "insert 256 characters" approach. It was ok-ish, but even with just a full page of text to shift it was bogging down. If I kept that going for much larger documents it would certainly not be feasible.
This morning I have been working on Gordon's "nano like" idea. At least that's what I think it is. Each time I hit Return (or Arrow Up/Down) it saves the line buffer at the bottom of a virtual stack. Each line is 80 columns, which I round up to 128 bytes, so the number of edits I can hold is limited only by the RAM available. BUT it doesn't slow down much. The slowest it gets is when re-drawing the characters on the screen after a Return (or a Backspace that deletes a line). The compromise is that eventually I will run out of available memory and it will have to "defrag" or "compress" or something, removing older edited lines.
So far so good, still working on it, but seems promising.
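That sounds like a workable plan. For what it's worth, here's a minimal C sketch of how I picture that line-stack idea - the names and the MAX_LINES cap are my own guesses, not sburrow's actual code:

Code:
#include <stdlib.h>
#include <string.h>

#define LINE_BYTES 128        /* 80 columns rounded up to a power of two */
#define MAX_LINES  1024       /* hypothetical cap - the real limit is free RAM */

static char *lines[MAX_LINES];     /* the "virtual stack" of saved lines */
static int   nlines = 0;
static char  curline[LINE_BYTES];  /* the line buffer being typed into */

/* On Return (or cursor up/down): push a copy of the line buffer. */
int commit_line(void)
{
    char *p;
    if (nlines >= MAX_LINES) return -1;   /* time to "defrag"/compress */
    p = malloc(LINE_BYTES);
    if (p == NULL) return -1;             /* out of RAM */
    memcpy(p, curline, LINE_BYTES);
    lines[nlines++] = p;
    return 0;
}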
Nano is a common editor used in Unix/Linux systems. It was based on an editor called Pico, which was part of the Pine/Alpine email package. It's a simple editor: what you type goes into the text, cursor keys move as you might expect, and the page up/down/home/end keys (where present) again do what you might expect, along with a bunch of control-keys to do extra stuff - e.g. Ctrl-K to Kill a line (delete it), Ctrl-U to un-kill a line (so you have copy/paste of lines there) and so on. Press Ctrl-K 10 times, then move and Ctrl-U, and you've moved 10 lines. Or Ctrl-K 10 times, then immediately Ctrl-U to put them back, then move and another Ctrl-U to copy them into a new location...
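Under the hood that kill/un-kill is just a cut buffer of whole lines. A rough sketch in C (my own guess at a minimal version, not nano's actual code; delete_line/insert_line are hypothetical helpers the editor would provide):

Code:
/* Hypothetical helpers provided elsewhere by the editor. */
extern char *delete_line(int row);            /* unlink a line, return its text */
extern void  insert_line(int row, char *text);

#define CUT_MAX 64
static char *cutbuf[CUT_MAX];   /* lines collected by successive Ctrl-K */
static int   ncut = 0;

void do_kill(int row)           /* Ctrl-K */
{
    if (ncut < CUT_MAX)
        cutbuf[ncut++] = delete_line(row);
}

void do_unkill(int row)         /* Ctrl-U: paste the whole buffer; it is not */
{                               /* cleared, so a second Ctrl-U elsewhere     */
    int i;                      /* pastes a copy.                            */
    for (i = 0; i < ncut; i++)
        insert_line(row + i, cutbuf[i]);
}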
It has grown over the years to support things like syntax highlighting and so on, and while I don't use it for coding, it is a good base when you need a "quick and dirty" editor. (And I've been using the Pine, now Alpine, email programs for some 30 years.)
https://en.wikipedia.org/wiki/GNU_nano

Anyway...
A speed-up you can use is to only copy the line the cursor is on into a "working" buffer when you type a command that changes the line (insert char, delete, etc.) - that way you can scroll up/down/left/right very quickly.
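In other words, defer the copy until the first modifying keystroke. A sketch in C (flush_edit here is a hypothetical write-back helper):

Code:
#include <string.h>

extern char *lines[];           /* the editor's array of line pointers */
extern void  flush_edit(void);  /* hypothetical: write workbuf back to its row */

static char workbuf[160 + 1];   /* "working" buffer for the line being changed */
static int  workrow = -1;       /* which row workbuf holds; -1 = none */

/* Called only by keys that modify the line (insert char, delete, ...);
   plain cursor movement never copies anything, so scrolling stays fast. */
void begin_edit(int row)
{
    if (workrow == row) return;        /* already editing this line */
    if (workrow >= 0) flush_edit();    /* commit the previous line first */
    strncpy(workbuf, lines[row], sizeof workbuf - 1);
    workbuf[sizeof workbuf - 1] = '\0';
    workrow = row;
}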
I also forced a maximum line length in my editor of 160 characters. This is based on my personal editing, where I'd normally never have a window more than 160 characters wide (often 120 characters, as that's a comfortable left-right scan for me). (BCPL also uses a byte for string lengths, so the absolute maximum is 255 characters anyway unless I do my own management.)
So I read a file in, line at a time, allocating space for each line and keeping an array of pointers to each line.
So what I have, essentially, is a line editor with many lines.
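In C terms the read-in loop is something like this - just a sketch with error checks omitted, not my actual BCPL code:

Code:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define MAX_LINE 160    /* my enforced maximum line length */

/* Read a file a line at a time: one allocation per line, plus a
   growing array of pointers to the lines. Error checks omitted. */
char **read_file(const char *name, int *count)
{
    char buf[MAX_LINE + 2];          /* room for newline + NUL */
    char **lines = NULL;
    int n = 0, cap = 0;
    FILE *f = fopen(name, "r");

    if (f == NULL) return NULL;
    while (fgets(buf, sizeof buf, f) != NULL) {
        buf[strcspn(buf, "\n")] = '\0';       /* strip the newline */
        if (n == cap) {                       /* grow the pointer array */
            cap = cap ? cap * 2 : 64;
            lines = realloc(lines, cap * sizeof *lines);
        }
        lines[n] = malloc(strlen(buf) + 1);   /* space for just this line */
        strcpy(lines[n++], buf);
    }
    fclose(f);
    *count = n;
    return lines;
}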
Editing and compiling a 2000 line program is right at the limits of this system, and even if it had double the RAM, things like reading/writing to/from disc and the actual compile time start to become limitations on usability. (The BCPL compiler is just under 6700 lines in total and I have compiled it on the system, but it's an all-nighter.)
Heap management is via free/malloc, or freevec/getvec in BCPL - my management code (written in BCPL) is a fairly standard first-fit allocator which 'remembers' the pointer to the last block allocated, so it starts looking for free space where it last allocated something, runs on until it hits the end of free space, then wraps round looking for holes.
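A sketch of that in C (the block layout here is simplified; my real heap code is BCPL):

Code:
#include <stddef.h>

/* Simplified block layout - illustrative only. */
typedef struct block {
    size_t        size;   /* payload size in bytes */
    int           used;
    struct block *next;   /* all blocks, in address order */
} block;

static block *head;    /* first block in the heap */
static block *rover;   /* where the last allocation happened */

void *alloc(size_t want)
{
    block *b = rover ? rover : head;
    block *start = b;

    if (b == NULL) return NULL;              /* heap not initialised */
    do {
        if (!b->used && b->size >= want) {
            b->used = 1;                     /* (a real one would split the block) */
            rover = b;                       /* next search starts here */
            return (void *)(b + 1);
        }
        b = b->next ? b->next : head;        /* hit the end: wrap round */
    } while (b != start);

    return NULL;                             /* no hole big enough */
}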
And when I was looking at options to write a new little OS, and at how to boot BCPL, I initially wrote this allocator in sweet-16, which also worked very well on the 6502. (And I blew the dust off it recently for my next little 6502 project, which may or may not see the light of day...)
I think a good little memory management/allocator like this is a handy thing to have. How you might implement it on an '816 to handle bank crossing is left as an exercise for the reader though... (The underlying VM/bytecode in my BCPL system gives me linear RAM and does the bank shenanigans for you, so accessing RAM in BCPL is very easy - the cost is the speed of the underlying VM.)
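The arithmetic the VM does under the hood is trivial; the pain is that it's needed on every access. Something like this (my illustration):

Code:
#include <stdint.h>

/* Split a 24-bit linear address into an '816 bank byte and a 16-bit offset. */
uint8_t  bank_of(uint32_t addr)   { return (addr >> 16) & 0xFF; }
uint16_t offset_of(uint32_t addr) { return addr & 0xFFFF; }

/* True if a block of len bytes starting at addr straddles a bank boundary -
   the case an allocator on a raw '816 would have to handle. */
int crosses_bank(uint32_t addr, uint32_t len)
{
    return (addr & 0xFFFF) + len > 0x10000;
}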
I do edit and compile short programs regularly on my system and have never run out of memory due to heap fragmentation. The heap management code does insert guard words at the start and end of each allocation to check for overwriting - that works well and the editors never tripped the guards...
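For anyone curious, the guard word idea in miniature (the pattern and layout here are illustrative, not what my code actually uses):

Code:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define GUARD 0xA5A5A5A5u   /* illustrative pattern - an assumption, not my real value */

/* A guard word either side of every allocation; checked when freed. */
void *galloc(size_t n)
{
    unsigned g = GUARD;
    unsigned *p = malloc(2 * sizeof(unsigned) + n + sizeof(unsigned));
    if (p == NULL) return NULL;
    p[0] = (unsigned)n;                          /* stash the payload size */
    p[1] = g;                                    /* front guard */
    memcpy((char *)(p + 2) + n, &g, sizeof g);   /* rear guard (may be unaligned) */
    return p + 2;                                /* caller sees just the payload */
}

void gfree(void *q)
{
    unsigned *p = (unsigned *)q - 2;
    unsigned rear;
    memcpy(&rear, (char *)q + p[0], sizeof rear);
    if (p[1] != GUARD || rear != GUARD)
        fprintf(stderr, "heap guard tripped!\n");
    free(p);
}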
Quote:
drogon wrote:
Let's be realistic. If you're editing a 1MB file on a single-digit MHz system then you're doing it wrong. Really. Give up. It's not the system for this task, no matter what you might think the CPU is capable of. This is not the system. Even on Unix systems I used in the late 80s (I had a Sun3/50 as my 'desktop') I'd not do that - a 16MHz 68030 with 4MB of RAM trying to edit a 1MB file? I might just about get away with it, but I'd need to be very patient. I've enjoyed making my '816 system self-hosting (i.e. run its own editor & compiler), but even then there is a limit to what I think it can do.
I don't understand, maybe because I am so young that all I've ever known is 1.44 MB 'floppies' (they weren't so floppy) and onwards. The 1 MB text file was just meant to show how "shift all the data on each character" is unfeasible, and the single-digit MHz constraint to show I'm not running on a modern PC or something. I don't necessarily think I will ever reach 1 MB files just typing away at my SBC, but you never know.
Well, indeed, you never know, but we'll always push the boundaries. Today we're doing (or thinking about doing) stuff on a CPU that we'd never have dreamed of doing when it was released in 1986 - the Acorn Communicator had 512KB or 1MB of RAM, and while the Apple IIgs has space for expansion (up to 8MB, I think), did anyone/anything ever use it all? The Communicator did have an "Office" suite, but I never used it - how might it have handled a 1MB file? Hm. I presume there was also an upgraded AppleWorks for the IIgs - could that manage very large files?
From my own/personal point of view, I've hit the limit of my '816 system - like many others (companies and individuals) before me, I'm looking at moving on. The '816 now appears to have been a stop-gap, almost - it let me do my own "what if" and implement a multi-tasking OS with development tools, but now I want it a little faster...
Here's another thought: Unix has some interesting tools to manage text files, most of them redundant these days (I suspect). One is split, which splits a file into smaller files, each with a certain number of lines in it; merging them back is trivial with the cat command... Did these arise because files were starting to get big but couldn't be managed as a whole (for editing, sorting, searching)? Then there's the sed command - a stream editor: it reads a file and a set of editing commands and writes a new file... (something I have used 'for real' with a very old computer with tape reader and punch!) And maybe think Unix - do one thing well - and if that one thing is editing files < 64K long, then write something nice to split files and merge them again...
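For example (the file names are just for illustration):

Code:
split -l 1000 novel.txt part_             # part_aa, part_ab, ... 1000 lines each
cat part_* > novel.txt                    # stitch them back together, in order
sed 's/teh/the/g' draft.txt > fixed.txt   # edit as a stream, never the whole file in RAM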
... and we're almost back to writing a chapter per file and my old mantra: save early, save often...
Cheers,
-Gordon