JimDrew wrote:
whartung wrote:
You can fill a pool with a drinking straw. I don't doubt you can move data to the chip. But as Gordon mentioned, drawing an 800 pixel line on a display, pixel by pixel, is expensive.
I agree, but nobody redraws every single pixel on the screen every frame in real-world applications. Not even video playback software does that. There is plenty of speed to do most anything with a 1MHz 6502 (a 20MHz 65816 is even better). Most all of the various game machines of the era only had 320x240 graphics so I don't expect hires (640x480 or larger) games to be written for any '816 machine anyways. Besides, DOOM is 320x240 and that is all that seems to matter.
If you want to invoke the DOOM era, it certainly did redraw every single pixel on the screen every frame, at 320x200. (Maybe some of the border UI elements didn't, but they probably did due to mode 13h page flipping, and that's still just a small fraction of the screen.) Pretty much every 3D game before 3D accelerators became a thing did a full clear and redraw of the visible framebuffer every frame, including those on 6502 home computers (with their requisite low frame rates). Some got away without the clear, if they provably covered the whole display every frame: DOOM did, which is why you get those graphics trails when you idspispopd out of bounds where there's no geometry. But that's still just "redraw every single pixel on the screen every frame" once instead of twice.
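To make the "once instead of twice" point concrete, here's a rough C sketch of that kind of frame loop (invented names, not DOOM's actual code):

Code:
#include <stdint.h>
#include <string.h>

#define SCREEN_W 320
#define SCREEN_H 200

/* Hypothetical 8bpp back buffer -- just the shape of
   "touch every pixel, every frame", not any real engine's internals. */
static uint8_t backbuf[SCREEN_W * SCREEN_H];

/* Stand-in renderer: in a real 3D engine this rasterizes walls/floors and,
   if the scene provably covers the screen, writes every byte of backbuf. */
static void render_scene(uint8_t *dst)
{
    for (int y = 0; y < SCREEN_H; y++)
        for (int x = 0; x < SCREEN_W; x++)
            dst[y * SCREEN_W + x] = (uint8_t)(x ^ y);  /* placeholder pattern */
}

void draw_frame(int scene_covers_whole_screen)
{
    /* The "clear" pass: every pixel touched once... */
    if (!scene_covers_whole_screen)
        memset(backbuf, 0, sizeof backbuf);

    /* ...and the redraw pass: every pixel touched again.  Engines that skip
       the clear only get away with it when geometry covers the whole view;
       otherwise stale pixels persist, hence the noclip trails. */
    render_scene(backbuf);

    /* copy/flip backbuf to video memory here */
}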
Of course, PC hardware was unaccelerated plain framebuffers for the longest time, so the only thing you could do was redraw everything with the CPU, even for scrolling 2D games.
On the 6502, Elite, Driller, many flight simulators, many 3D driving games, etc. simply chugged along in the exact same way, CPU-pushing entire bitmap screens of pixels as fast as they could manage, at single-digit (or lower) frame rates.
Same with video playback: every pixel was placed by software every frame (except with dirty-rectangle compression on single-buffered displays, which was actually pretty rare), drawn into small windows or fullscreen low-resolution modes. The first major acceleration for video playback was hardware scaling onto a chroma key, but the CPU still had to draw a 320x240-or-lower-resolution video fully in software, pixel by pixel, for the GPU to stretch it. Actual hardware video format decoding came even later, but that's well past the era we're talking about.
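For what it's worth, the dirty-rectangle trick is simple enough to sketch in C (invented names and layout, just to show the contrast with the usual full-frame blit):

Code:
#include <stdint.h>
#include <string.h>

typedef struct { int x, y, w, h; } Rect;

/* Hypothetical single-buffered 8bpp playback surface. */
#define VID_W 320
#define VID_H 240

/* Full-frame update: the common case -- every pixel of the video window is
   written by the CPU, every frame. */
static void blit_full(uint8_t *screen, const uint8_t *frame)
{
    memcpy(screen, frame, VID_W * VID_H);
}

/* Dirty-rectangle update: the decoder reports which regions changed since the
   previous frame, and only those pixels are copied.  This only works when the
   display is single-buffered, because the previous frame must still be on
   screen underneath. */
static void blit_dirty(uint8_t *screen, const uint8_t *frame,
                       const Rect *dirty, int count)
{
    for (int i = 0; i < count; i++) {
        const Rect *r = &dirty[i];
        for (int y = 0; y < r->h; y++) {
            int off = (r->y + y) * VID_W + r->x;
            memcpy(screen + off, frame + off, (size_t)r->w);
        }
    }
}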
I've always thought of tile/sprite chipsets as hardware video decoders, with the video data usefully editable directly in its compressed form. In my opinion that's the only reasonable way to get high-quality, well-animated, colorful, fullscreen interactive graphics out of 80s-to-early-90s-class hardware, as the arcade hardware and home game consoles of the time demonstrate. The CPU can get away with doing very little in very dynamic game graphics scenarios, as long as the video chip is pumping out quality from easily editable display specs. I think the SNES had some of the best-looking and best-playing 2D games of the era, yet its '816 ran at a fairly crippled 2 to 3.5 MHz.
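That "editable compressed form" is really just the tile map: a screenful of graphics is described by a small array of tile indices that the video chip expands into pixels on every scanline. A rough C model of why that saves the CPU so much work (made-up layout and SNES-ish numbers, not any specific chipset):

Code:
#include <stdint.h>

/* Rough model of a tile-based display: a 32x28 map of 8x8 tiles covers a
   256x224 screen.  The layout here is invented for illustration. */
#define MAP_W 32
#define MAP_H 28
#define TILE  8

static uint8_t tile_map[MAP_H][MAP_W];        /* one byte per 8x8 cell        */
static uint8_t tile_pixels[256][TILE * TILE]; /* tile art, expanded by the    */
                                              /* video chip at scan-out time  */

/* Changing the picture on tile hardware: the CPU rewrites a handful of map
   bytes (or just a scroll register).  A full-screen change is 32*28 = 896
   byte writes. */
static void place_tile(int col, int row, uint8_t tile_index)
{
    tile_map[row][col] = tile_index;
}

/* The same visual change on a plain framebuffer: the CPU must write all 64
   pixels of the cell itself, and a full-screen change is 256*224 = 57,344
   pixel writes instead of 896 map writes. */
static void draw_tile_soft(uint8_t *fb, int fb_pitch,
                           int col, int row, uint8_t tile_index)
{
    const uint8_t *src = tile_pixels[tile_index];
    for (int y = 0; y < TILE; y++)
        for (int x = 0; x < TILE; x++)
            fb[(row * TILE + y) * fb_pitch + (col * TILE + x)] = src[y * TILE + x];
}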
But even in the 6502 era we had 3D games, GUIs, productivity apps, visualizers, graphics shape editors, proportional fonts, etc., all of which still had to bite the bullet and move every pixel with the CPU, often with very large screen refreshes, sometimes the whole screen by necessity.
Increasing resolution increases the workload quadratically (doubling both dimensions quadruples the pixel count), while CPU throughput only scales linearly with MHz, unless a command-based GPU does the heavy bitmap processing for you (which I think wouldn't be very retro). However, that level of work is sometimes inevitable to do what you want on the machine, and can't be dismissed. Without that capability, i.e. useful fullscreen CPU pixel-pushing speed, a machine stuck with only tile graphics could easily be seen as just a "games console" rather than a "home computer" in my opinion. (Platforms predating that era of color home computer gaming would be held to a different standard.)
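To put rough numbers on the quadratic-vs-linear point, here's a back-of-the-envelope sketch in C (assuming one write per pixel and a 1 MHz 6502; real bitplane or packed modes change the constants, not the shape of the problem):

Code:
#include <stdio.h>

int main(void)
{
    long lores = 320L * 200;          /*  64,000 pixels */
    long hires = 640L * 480;          /* 307,200 pixels */
    long cycles_per_second = 1000000L;

    printf("640x480 vs 320x200: %.1fx the pixels per full redraw\n",
           (double)hires / (double)lores);

    /* Cycle budget per pixel if you want just ONE full-screen redraw per
       second at 640x480 on a 1 MHz CPU: about 3.3 cycles, i.e. less than the
       4-5 cycles a single 6502 store instruction costs. */
    printf("cycles available per pixel at 1 fps: %.1f\n",
           (double)cycles_per_second / (double)hires);

    return 0;
}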