6502.org Forum  Projects  Code  Documents  Tools  Forum
PostPosted: Sat Dec 26, 2015 12:28 am 

Joined: Sun Jun 30, 2013 10:26 pm
Posts: 1949
Location: Sacramento, CA, USA
kc5tja wrote:
... is it truly worth your time taking a library I've written as-is and trying to make it work in your project? I think that's a valid question to ask, particularly since, with ever-increasing frequency, I find situations where just rewriting something from scratch is a more productive use of my time.

If I knew that you personally had written and tested the library in question, I would either use it as is without hesitation (if I felt that it was a good fit), or ask you for the source so I could use it as a solid framework for my own mods. In case you didn't notice, I'm a distant admirer.

Mike B.


PostPosted: Sat Dec 26, 2015 5:51 am 

Joined: Thu May 28, 2009 9:46 pm
Posts: 8513
Location: Midwestern USA
theGSman wrote:
I would have to agree that the "everything is an object" mentality is not always the best way to approach programming. Contrary to what some OOP proponents might have us believe, it is more natural to conceive of a program as DOING something (which invites a procedural or task oriented approach) rather than a collection of inter-connected parts or objects as in a motor vehicle.

Interesting, your comment about a motor vehicle. Cars today are very much a "system," with only a superficial resemblance to much of automotive design in years past. Today's cars really cannot operate well (or at all) if any one component fails. In computer terms, today's cars are "object oriented," as they are composed of a set of "black boxes," about which other black boxes know little except what each black box can accept as input and what it will generate as output. Ergo today's cars tend to suffer some problems that are analogous to those often seen in software developed using OO philosophies.

There is no doubt that OO makes sense in some environments. However, in exchange for the supposed gains of OO programming we see massive code bloat, sluggish performance on anything less than the latest hardware, and sometimes questionable behavior. That last item is significant, since the more abstract the environment becomes the more difficult it is to maintain fine-grained control over what the user experiences and most importantly, what the user can do.

I have done a lot of development in Thoroughbred Dictionary-IV, which is a 4GL environment driven by the Thoroughbred T-code engine. The T-code engine is a very powerful BASIC quasi-interpreter designed for transaction processing (e.g., airline reservations processing) on preemptive multitasking systems, with features that encourage (but don't require) structured programming techniques. When I first started developing in the Thoroughbred environment some 25 years ago I zeroed in on the 4GL features, thinking "Wow! I can write high quality software in a fraction of the time it would take to do it in assembly language." The "fraction of the time" part was partially correct, but the "high quality" part proved to be more wishful thinking than fact.

The core problem is that using 4GL methods gives you rapid application development (RAD) at the expense of control, and it is the latter that keeps users from messing up the data and/or the environment. I soon realized that I was spending increasing amounts of time trying to work around 4GL restrictions that prevented me from getting the precise level of control I had long had when writing in lower-level languages (e.g., C or assembly language). After 18-or-so months of fighting with the object-oriented mentality that effaced procedural methods and, in the process, produced slow, klunky code, I pulled the plug on 4GL and started working at the level of the T-code engine, which is essentially a 3GL environment.

A remarkable thing happened when I made that change. Software got done just as quickly, ran much faster and most importantly, did precisely what I wanted it to do when the user was pounding on the keyboard making dumb mistakes. Strangely enough, I spent less time debugging than I did in the 4GL environment, probably because when writing at the 3GL level I was fully cognizant of exactly what was going to happen when the program was run.

My diatribe is not an indictment of 4GL and OO methods per se, but more a gentle jab at those who promote said methods with religious fervor. OO makes sense in the "assembly line mentality" software shop, where productivity is often measured in terms of lines of code written per time unit, rather than the quality of the finished program. It also makes sense when disparate programmers contribute to the overall project. However, that also produces "design by committee" code that occasionally defies logic.

I daresay that some OO "fanatics" have probably done little in the way of bare-metal assembly language programming. If so, they may have been deprived of the opportunity to develop the mental discipline and good work habits that programming at the machine level demands, the discipline that keeps an experienced programmer from burning up endless hours fighting bugs he would never have introduced in the first place. That discipline and experience, or lack thereof, will have a profound effect on the finished product, both in terms of size and speed, and in ease of use and resistance to failure due to the actions of ham-fisted users.

_________________
x86?  We ain't got no x86.  We don't NEED no stinking x86!


PostPosted: Sat Dec 26, 2015 7:41 am 

Joined: Sat Jan 04, 2003 10:03 pm
Posts: 1706
BigDumbDinosaur wrote:
today's cars are "object oriented," as they are composed of a set of "black boxes," about which other black boxes know little except what each black box can accept as input and what it will generate as output. Ergo today's cars tend to suffer some problems that are analogous to those often seen in software developed using OO philosophies.


So, too, your computer and its relationship to the keyboard and mouse. This is not a bad thing.

Quote:
However, in exchange for the supposed gains of OO programming we see massive code bloat


Gonna hafta call you on this one. Some bloat will certainly exist because of v-tables and template expansions, sure. C++ allocates only one v-table per class that uses them, though, so if you have hundreds of instances of objects, the marginal cost is negligible: each instance carries nothing more than a hidden pointer to its class's table. Templates are expanded, again, on a per-type basis; but, here again, unless you're working with hundreds of types in a single binary, that one template isn't going to expand more than a handful of times.

If you study the bulk of the bloat that appears in most programs these days, it's unequivocally due to frivolous graphical content or GUI configuration patterns. Icons on the iPad and iPhone, for example, are only around 64x64 pixels, but remember that they are true-color with an alpha channel: 32 bits per pixel. That's 16KB for an icon of that size. And modern practices suggest icons as large as 512x512 pixels. I'm not even joking: see http://makeappicon.com/ios7icon . That's 1 megabyte. For. An. Icon. Your applications will often need several of these "assets." (As they're called in the industry.)

No amount of procedural code is going to offset bloat of this nature. You'll save, at most, single-digit percentages of your distribution size.

Quote:
sluggish performance on anything less than the latest hardware,


Going to call you on this one as well. BeOS was written entirely in C++ except for the main kernel, including the use of templates and other contemporaneously advanced features from C++. Binary sizes are quite small compared to their Linux or Windows counterparts (including apps written in plain, vanilla C), and BeOS makes Linux look remarkably static in comparison on 486-class hardware. I've used BeOS on a 486/33 once (BeOS R5, to be specific), and I'm happy to report that it was the first OS I've come across that manages to actually make me happy as a user. Only one other OS ever has done that: AmigaOS.

That said, it's interesting to examine the cause of sluggish performance.

In the case of web apps, we find the use of Javascript everywhere, for virtually everything these days. Ignoring relative deviations in interpreter performance as noise, the overwhelming bulk of what Javascript does in a web app is collect metrics on how you use the website. These are often used to generate "heat maps," which tell the website authors which portions of the website are used most frequently, the average number of clicks needed to perform some task, how long you stay on a page, etc. All of these metrics are useful for basic website usability, but of course, also for deciding how to optimize ad delivery.

In the case of desktop applications, well, I've seen push advertising in them too. This was (is?) a prominent issue on the Windows 10 platform especially, and I even remember Canonical getting flak for their Ubuntu marketplace program. Also, if you run tcpdump on some independent Linux box on the local network, you'll notice that an awful lot of your seemingly innocuous desktop applications actually "call home" to report, well, metadata and metrics. This does have the benefit of providing you with a means of automatically updating your application periodically, but just be aware, all that network I/O will influence perceived performance. It's not a lot, but it is enough.

This brings to mind features like search prediction on Google. This kind of interface is often called a "live" interface, because it's dynamic. On desktop applications, to pull this off well, you need your data to be indexed extremely well. B-trees or skip lists are absolutely required; nothing else will offer sub-decisecond response times for locating information unless your data set is small enough to fit in a small handful of cache lines. Ideally, you'll want millisecond response times, because your "live" interface often involves the display of several auto-searched whatevers, and to be truly fluid, you want that entire data set to come back in less than 100ms. So, not only are you involved with writing the main GUI layout, you're also writing code to keep it up to date live, often with an active data stream, and that means you need a good index (more code and, often, a lot more data requirements too).

Modern tracing JITs for "interpreted languages," like Javascript, Java, C#, and others, now produce executable code that compares quite favorably with C. So, clearly, it's not the use of OO that is bogging the system down. It's what the applications have been programmed to do behind your back that is affecting performance.

Regardless, operating systems today are optimized to lazily load code into memory via demand paging. This means that when you run a program foo, foo does not get loaded into memory right away. Instead, the OS creates an address space for it and sets the PC to the address where it would have been loaded. Obviously, the very first thing that happens is a page fault (since nothing is loaded there), at which point the OS traverses a bunch of data structures to figure out, "OOHH!! This belongs to FOO.EXE, in page 123." With this knowledge, it then loads in just that one page, in the hopes that that's all you will ever need.

Well, as you can imagine, this results in very poor perceived user performance, because the application is, even if you have a billion GB of RAM, apparently thrashing the harddrive. If your application is several hundred kilobytes to a few megabytes in size, which many GUI applications will be, consider that all this OS overhead happens for every 4096 bytes of code and data fetched, and it's quite easy to see how, even on a 2GHz computer with an 800MHz FSB, it leads to some pretty hefty latencies. SSDs are the only way to hide the application start-up costs, and that's only because of their near-zero seek time; I/O overhead is still measured in milliseconds, though. If you do the math, it's not too hard to see why computers "get slower" the faster they become. And none of this is influenced, in any way, by OOPLs.

On the other hand, if you actually take the time to load a binary en masse, you'll find a much better perception of user interface responsiveness. I know this because I've demonstrated it with my Kestrel-2. Despite the Kestrel-2 running at only 12 MIPS, on a 16-bit CPU with fairly poor instruction density, the fact that I get more responsive programs using explicitly managed overlays than my 2GHz Linux box does with demand paging is telling. Moral: demand paging is awesome technology for server applications only. It seems to be utter tripe for real-time user interfaces.

And it all compounds. You have megabyte-sized assets (instead of something like Display PostScript, which could easily render a resolution-independent asset; cf. PC/GEOS for another system using vector graphics almost exclusively), with background processing to communicate with home base and keep constant vigil for automatic updates, with background processing for maintaining a live user interface, with the database-like indexing needed to support these live interfaces ... it all adds up! This is where code bloat actually comes from, and the sluggishness you feel is the multiplicatively compounded effect they introduce. We're not dealing with 3270 terminals anymore. (Though I do kind of like 3270 terminals. Kestrel-3 will some day have a GUI library intended to emulate something like a 3270 terminal, just so I can stick my middle finger at all these "live" UIs today. Not that I hate live UIs, but yeah, they can easily be over-done.)

Quote:
sometimes questionable behavior. That last item is significant, since the more abstract the environment becomes the more difficult it is to maintain fine-grained control over what the user experiences and most importantly, what the user can do.


This is the only thing I agree with, but for different reasons. It is only a problem while the software is under development. If you've worked to make your OO program small and efficient, which is to say, you put as much care into your OO program as you do into any procedural program you'd write, then you'll come to realize that this problem exists with procedural code as well. The reason is that, in the absence of any tracing JIT support, any non-trivial procedural program will make use of jump tables to couple multiple like-typed objects somewhere. It always happens. (The alternative is to use very large IF/ELSE IF/ELSE constructs or switch/case equivalents, which actually produce three times the code that a simple v-table would.) And when that coupling breaks down, you run into the same ambiguities.

Quote:
The core problem is that using 4GL methods ...


Just a point of clarification, in general industry acceptance, all OOP languages today are still 3GL languages. Languages such as SQL are considered 4GL languages.

That said, reading about the problems you were having, I'd have to say your team made the wrong choice with its development tools. BASIC is rarely a good foundation for any serious applications development, primarily because it lacks support for the kind of "programming in the large" features you typically need in a large, enterprise application environment. Even if Thoroughbred managed to bolt such functionality into the language, you're left with a walled garden environment, which means you can't get as much expertise for help as you could with a more open environment, like C, or dare I say it, even C++. Also, it's entirely possible that Thoroughbred's implementation of the environment was sorely lacking, which gave you a really bad taste for OOP.

I'm not suggesting OOP is right for all tasks either, of course. So far, the overwhelming majority of all OO code I write is work-related (in Javascript, no less). Virtually everything I write for myself (even if it uses classes) tends to be procedural in nature. But, I see you blame OOP for many things which I myself have seen occur in plain-vanilla, procedural languages all-too-frequently, or which are just plain poor or inexperienced engineering choices.

The one thing that is common between the pro-OOP and anti-OOP crowds is the desire to rush code out the door to beat the competition. So far as I've been able to tell, every single source of complexity comes from the requirement that my code interoperate with somebody else's code. Terminal handling in Linux is a disaster because of the continued cultural requirement to support ASR-33 teletypes at the login: prompt. C++ programming is a disaster, in large part, because of the Boost and STL libraries. C programming is a disaster because I have to be super careful about how I manage my memory when working with 3rd-party libraries. Python programming is a disaster because the packaging mechanism the community rallies behind is immature and, frankly, broken by design. Javascript is a disaster because it lacks both type-checking and basic arity-checking (meaning that, given function f(a,b,c) {...}, calling f() and f(1,2,3,4,5,6,7,8,9) are perfectly valid things to do in Javascript). Combine this with the JS community's pervasive abhorrence of documenting APIs, and you get a signed waiver for admittance at your local insane asylum. And on it goes. In an enterprise environment, there's a strong incentive for "code reuse" to occur. By my best estimation, this happens to such an extent that it's actually an anti-pattern.

Alas, the software development world disagrees with my point of view. And why shouldn't it? It's made a lot of people a lot of serious money, and kept a lot of people I consider unqualified employed, while concurrently driving me to the point of unemployability because new-fangled "solutions" replace older, more problematic platforms with such velocity that I just can't keep up.


PostPosted: Sat Dec 26, 2015 3:06 pm 

Joined: Sun Nov 08, 2009 1:56 am
Posts: 411
Location: Minnesota
Lately I decided I needed to learn PHP. I had a very hard time with it until the light blinked on and I realized what I was doing wrong and how to work with the language instead of against it (FWIW, it was the realization that if I could see the page, PHP was already done and could no longer affect anything. So whatever I wanted to happen had to happen before then).

I guess I'm just saying that part of programming is knowing how to use the tools at hand appropriately. Every language has its own view of the world, and if you don't understand that view you'll never be able to use it efficiently.


PostPosted: Sat Dec 26, 2015 5:05 pm 

Joined: Sat Jan 04, 2003 10:03 pm
Posts: 1706
barrym95838 wrote:
In case you didn't notice, I'm a distant admirer.

Mike B.


Thanks!


PostPosted: Sat Dec 26, 2015 5:59 pm 

Joined: Sun Feb 23, 2014 2:43 am
Posts: 78
kc5tja wrote:
... I find situations where just rewriting something from scratch is a more productive use of my time.

I agree with this. A while back I needed a certain data structure. Tried a library only to find that it was unstable, and I ended up needing a special feature anyway. So I wrote it myself, and learned something in the process. And no extra dependencies to bother with.

Back to OOP, C++ allows organizing static code into "class-like" structures. Most of my non-GUI specific code is done this way, with a few classes here and there (if any). I mention this because C++ provides useful features that have nothing to do with OOP at all.


PostPosted: Sat Dec 26, 2015 6:41 pm 

Joined: Sun Jun 30, 2013 10:26 pm
Posts: 1949
Location: Sacramento, CA, USA
joe7 wrote:
... I mention this because C++ provides useful features that have nothing to do with OOP at all.

... like mildly amusing ASCII art? :P

http://rosettacode.org/wiki/99_Bottles_ ... _Version_2

Mike B.


PostPosted: Sun Dec 27, 2015 4:11 am 

Joined: Mon Jan 26, 2015 6:19 am
Posts: 85
kc5tja wrote:
Gonna hafta call you on this one. ....(See http://forum.6502.org/viewtopic.php?p=42718#p42718 for rest of post)...

Most of your post is dealing with large systems worked on by teams of programmers.

For much smaller systems that a single programmer might work on, the advantages of OOP are less clear. Even a 65816 system would fall into this category. (It seems that the 65816 falls somewhere between the 8086 and the 80386 in terms of processing power.) It would be a waste of resources to design such a system for a GUI, and even if you did, you almost certainly wouldn't be using 1MB icons! I could see a possible role for a simple monochromatic GUI, but for anything more sophisticated you are dealing with the wrong type of system.

As I said above, for '02 work, speed/memory optimization becomes the important factor and OOP is less likely to deliver that.


PostPosted: Sun Dec 27, 2015 5:07 am 

Joined: Sat Jan 04, 2003 10:03 pm
Posts: 1706
theGSman wrote:
It would be a waste of resources to design such a system for GUI


Don't tell the Wheels authors that. Or for that matter Steve Wozniak and his team behind the Apple IIgs. Or the Super Nintendo engineers. They all might have a word with you in private as they illustrate well-performing counter-examples.

Quote:
and even if you did, you almost certainly wouldn't be using 1MB icons!


How does this relate to object-oriented programming again? It's not clear to me why you still think they are related, especially after I wrote several tens of kilobytes on how they're not at all related and gave detailed counter-examples.

Quote:
As I said above, for '02 work, speed/memory optimization becomes the important factor and OOP is less likely to deliver that.


And as I said above, you're conflating two different, completely orthogonal things that really have zilch to do with each other. OOP came into its own on a computer built around a 16-bit microcoded processor, with a 16-bit word-addressed address space, equipped with a monochrome, bitmapped screen with a far larger frame buffer than you'll typically find on a 6502/65816-based system, and running at around 5.8MHz. The 65816 has plenty of resources to support OOP: it's faster than the Alto clock for clock, its maximum clock speed is twice to thrice that of the Alto's (14MHz to 20MHz versus 5.8MHz), and it addresses significantly more RAM than the Alto (16MB versus 128KB). Even a stock Apple IIgs had more resources at its disposal than an Alto. After all, the 65816 runs at about 80% of the performance of a MC68000, clock for clock (which places it slightly better than an 80286, for what it's worth).

What I am saying, though, is that the kind of bloat BDD was complaining about and how "object oriented" a program is are two independent qualities, to such an extent that the decision to make something OO versus plain procedural would not manifest in any tangible performance hit observable to the user in most cases (there are always counter-examples, of course). 1MB icons have absolutely nothing whatsoever to do with the fact that iOS is written in Objective-C. Nothing. iOS could be written in raw assembly language for all we care; that iOS developers are encouraged to use such high-resolution assets these days is a cultural and political affair, NOT a technical one.

Conversely, I use OOP principles when I develop code with unit tests all the time. The notion of dependency injection makes fine-grained unit testing possible for most programming languages, for few support a hyperstatic global runtime environment like Forth does. OOP, in this case, lets me abstract away the host environment, independently of how sophisticated the runtime OS is, or whether or not it even has an OS. Fault injection, timing tests, etc. are all feasible, thanks to run-time polymorphism and other techniques first pioneered with OOP. And, yes, these do have value on 6502 and 65816 based systems. There's nothing about the 6502 or 65816 that makes these CPUs somehow "less of a CPU" compared to an 80386; they're all Turing-complete environments. It just takes the 6502 and 65816 more effort.

When I started this thread (look on page 1), the crux of my statement was that I decided, based on numerous thought experiments, that the 65816 was not convenient for OO dispatch, which we must remember is not the same as OO programming; the best I could muster was still 3x slower for an empty method than an empty subroutine call and return. I've stated this repeatedly. And I will continue to state this. But, somehow, between that and my disambiguation of what causes code or performance bloat, my message got conflated into, "OO is awesome! We should all do OO on the 6502! Yay!" I really am unclear how this happened, and frankly, it's annoying, and I really wish people would stop. I've stated the opposite too many times already, but too many people just aren't getting it.

This is why I post so infrequently here anymore. I just don't need this kind of stress. I found the conversation insightful until I started repeating myself over and over again. I'm giving up, and silencing this thread. I'm done.


PostPosted: Sun Dec 27, 2015 2:36 pm 

Joined: Sun Apr 10, 2011 8:29 am
Posts: 597
Location: Norway/Japan
@kc5tja

A good summary, I think (including your previous post in that). OOP itself isn't some kind of monster. It turned out that I was using OO methodology to solve programming problems long before I had heard about OOP. Another programmer and I once came up with a setup in Fortran (of all things) which made for a cleaner and more maintainable approach to implementing a new system. That was OO too, but we didn't call it that.

Some programming problems are natural candidates for OOP. That doesn't mean everything is. The *nix driver model has been mentioned. The filesystem too. BDD, re your DESIGNING A NEW FILESYSTEM thread, you would be hard-pressed not to implement features like open, read, write etc. in an OO manner. Inside those functions a lot will probably be procedural. There's no conflict there.

Programming languages designed for OOP have built-in mechanisms to support it. That doesn't mean it can't be done in C (or even Fortran, as I mentioned), it's been done for ages. It just takes a bit more care. But care can be good.. I write OO (when I write OO) in OO languages too, but not always.

Problems with bloat, maintainability, bad performance, all of that really comes from what one does with the tools, and I agree with kc5tja that a lot of that is cultural too, re the big icons. Nothing to do with OOP. And word processors that add every feature imaginable that you'll never need.

What I have most problems with really boils down to what programmers do with the tools. You can do very badly with C; with C++ the possibilities for doing it wrong are endless. Not really C++'s fault, although I personally have a slight dislike for the language's actual design w.r.t. syntax and semantics.

Re. reusability.. that's probably very oversold. That's not where OOP's strength is, not in practice anyway. Reusability is difficult in any case. I'm currently writing a new little module at work. I need some simple library functions. I should ideally use one that exists already elsewhere in our system.. except that it's part of an RPM which contains much more. Which again needs more, and soon I'll need to install half our system just to get to that simple memory handling function. Worse, I need to port all of those modules too, in order to build those RPMs.. because my target is not the standard platform. So don't tell, but this is going to be the copy&paste reusable method..

So our RPMs contain too many sub-parts, maybe. A bit OT, but if you use LaTeX you have probably been installing the TexLive packages. SuSE apparently decided that there were too many parts stuffed into each TexLive RPM. In SuSE Enterprise edition release 11 they had about 37 RPMs available in total, covering everything TeX. Debian has about 60 for the same purpose. The newer SuSE SLE release 13 has divided the same TexLive setup into no less than 5066 different RPMs. That doesn't seem to solve more problems than it creates. My co-worker's machine now has 1456 RPMs installed in order to do the same that he did with 6 RPMs before he upgraded (and I do with about 30 packages on my Debian system). Of course every SuSE TexLive RPM is now very small, but splitting up everything like that just creates another set of problems.

Dependencies always create some kind of problem, one way or the other.


PostPosted: Sun Dec 27, 2015 4:19 pm 

Joined: Mon Jan 26, 2015 6:19 am
Posts: 85
kc5tja wrote:
When I started this thread (look on page 1), the crux of my statement was that I decided, based on numerous thought experiments, that the 65816 was not convenient for OO dispatch, which we must remember is not the same as OO programming; the best I could muster was still 3x slower for an empty method than an empty subroutine call and return. I've stated this repeatedly. And I will continue to state this. But, somehow, between that and my disambiguation of what causes code or performance bloat, my message got conflated into, "OO is awesome! We should all do OO on the 6502! Yay!" I really am unclear how this happened, and frankly, it's annoying, and I really wish people would stop. I've stated the opposite too many times already, but too many people just aren't getting it.

This is why I post so infrequently here anymore. I just don't need this kind of stress. I found the conversation insightful until I started repeating myself over and over again. I'm giving up, and silencing this thread. I'm done.

It appears that I have missed the gist of your post. I got the impression that you were defending OOP on the basis that it is not the cause of software bloat and that other programming methods would unlikely have much impact on the bloat (if any).

If I got this wrong then I apologize.


PostPosted: Sun Dec 27, 2015 4:55 pm 

Joined: Sun Apr 10, 2011 8:29 am
Posts: 597
Location: Norway/Japan
@theGSman - now that's confusing.

I recommend reading the thread again, from the beginning. What OOP really boils down to is simply this, quoted from one of the earliest posts from kc5tja:
kc5tja wrote:
OO is proper modular programming taken seriously and quite literally.
That's all it is, really. And as you can see from that, there's nothing there involving 1MB icons or other bloat - that comes from elsewhere.


PostPosted: Sun Dec 27, 2015 5:22 pm 

Joined: Mon Jan 26, 2015 6:19 am
Posts: 85
Tor wrote:
What OOP really boils down to is simply this, quoted from one of the earliest posts from kc5tja:
kc5tja wrote:
OO is proper modular programming taken seriously and quite literally.

This thread seems to have become a "no true Scotsman" debate and I don't think I can make any further worthwhile contribution to it.

I like '02 programming mainly because the systems (that I deal with) are relatively simple and there is no need to be overly concerned about programming styles. I can use objects or procedures at will. I can do top-down or bottom-up designs or even start at the middle if I choose. Heck I can even fill up my code with spaghetti if I want. As long as it all works, I'm happy. (I have to take a far more disciplined approach with my Linux machine).


PostPosted: Sun Dec 27, 2015 5:48 pm 

Joined: Sun Jun 30, 2013 10:26 pm
Posts: 1949
Location: Sacramento, CA, USA
theGSman wrote:
This thread seems to have become a "no true Scotsman" debate ...

I never heard that one before, so I looked it up. Thanks for the morning smile.

Quote:
I like '02 programming mainly because the systems (that I deal with) are relatively simple and there is no need to be overly concerned about programming styles. I can use objects or procedures at will. I can do top-down or bottom-up designs or even start at the middle if I choose. Heck I can even fill up my code with spaghetti if I want. As long as it all works, I'm happy.

Yeah, programming the 6502 is like a guilty pleasure for my brain. It seems to know what I'm trying to say, and it just does it without complaining about the atrocious way I said it.

Quote:
(I have to take a far more disciplined approach with my Linux machine).

I'm gonna have to do some serious studying, to see if this old dog can learn some new tricks. I want to change careers soon, and I want to get up to speed on some modern coding skills, so I can try to keep up with the "young guns" with whom I'll be working and competing. Wish me luck.

Mike B.


PostPosted: Wed Dec 30, 2015 11:09 am 

Joined: Fri Aug 30, 2002 1:09 am
Posts: 8546
Location: Southern California
theGSman wrote:
kc5tja wrote:
Gonna hafta call you on this one. ....(See http://forum.6502.org/viewtopic.php?p=42718#p42718 for rest of post)...

Most of your post is dealing with large systems worked on by teams of programmers.

For much smaller systems that a single programmer might work on, the advantages of OOP are less clear. Even the 65816 system would fall in this category. (It seems that the 65816 falls somewhere between the 8086 and the 80386 in terms of processing power). It would be a waste of resources to design such a system for GUI and even if you did, you almost certainly wouldn't be using 1MB icons! I could see a possible role for a simple monochromatic GUI

The following seem not to be done with OOP, but they are examples of GUIs on the 6502, not even the '816, one running at 1MHz and the other at 1.79MHz:
A Graphical OS for the Atari 8-bit

(three screenshots of the Atari 8-bit GUI)

Also, GEOS on the C64, which one of our sons used around 1989, was impressive considering the limitations of the C64. Apparently the 512KB RAM Expansion Unit (REU) (which we did not have) made an absolutely huge difference in the performance though (like 100x, I hear, although the actual ratio undoubtedly depends on what you're doing), since the C64's disc interface was so slow. There's a C64 GEOS demo at https://www.youtube.com/watch?v=5XLgAR_vmZo&t=3m38s (already cued up), about 35 seconds' worth. Brian Dougherty, CEO of Berkeley Softworks, specifically said they used sophisticated development tools and extra-efficient programming to do what people thought was not possible on the C64.

There's an Apple IIgs GUI demo at https://www.youtube.com/watch?v=4BduGGDZ15Y&t=17m50s (already cued up). Watch for about six minutes. This is still only 2.8MHz, a fifth as fast as the minimum the '816 is specified to be able to do.

_________________
http://WilsonMinesCo.com/ lots of 6502 resources
The "second front page" is http://wilsonminesco.com/links.html .
What's an additional VIA among friends, anyhow?

