BigDumbDinosaur wrote:
today's cars are "object oriented," as they are composed of a set of "black boxes," about which other black boxes know little except what each black box can accept as input and what it will generate as output. Ergo today's cars tend to suffer some problems that are analogous to those often seen in software developed using OO philosophies.
So, too, your computer and its relationship to the keyboard and mouse. This is not a bad thing.
Quote:
However, in exchange for the supposed gains of OO programming we see massive code bloat
Gonna hafta call you on this one. Some bloat will certainly exist because of v-tables and template expansions, sure. But C++ allocates only one v-table per class that uses them, so if you have hundreds of instances of a class, the per-instance cost is essentially zero: each object carries nothing more than a pointer to the shared table. Templates are expanded, again, on a per-type basis; unless you're working with hundreds of types in a single binary, that one template isn't going to be stamped out more than a handful of times.
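To make that concrete, here's a minimal sketch (the class and template names are mine, purely illustrative): however many instances you create, there's exactly one v-table per polymorphic class, and the template is stamped out once per distinct type it's used with.

Code:
#include <cstdio>

// One v-table is emitted for Shape and one for Circle, regardless of how
// many instances exist; each object carries only a single pointer to it.
struct Shape {
    virtual ~Shape() {}
    virtual double area() const = 0;
};

struct Circle : Shape {
    double r;
    explicit Circle(double r) : r(r) {}
    double area() const override { return 3.14159265358979 * r * r; }
};

// The template is expanded once per distinct T it is instantiated with,
// not once per call and not once per object.
template <typename T>
T square(T x) { return x * x; }

int main() {
    Circle a(1.0), b(2.0), c(3.0);   // three objects, still one Circle v-table
    std::printf("%f %f %f\n", a.area(), b.area(), c.area());
    std::printf("%d %f\n", square(5), square(2.5));  // two expansions: int, double
    return 0;
}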
If you study the bulk of the bloat that appears in most programs these days, it's unequivocally due to frivolous graphical content or GUI configuration patterns. Icons on the iPad and iPhone, for example, are only around 64x64 pixels, but remember that they are true-color with an alpha channel: 32 bits per pixel. That's a whopping 16KB, uncompressed, for an icon of that size. And modern practices suggest icons as large as 512x512 pixels. I'm not even joking: see http://makeappicon.com/ios7icon . That's 1 megabyte. For. An. Icon. Your applications will often need several of these "assets" (as they're called in the industry).
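If you want to check the arithmetic yourself (a throwaway sketch; the figures are uncompressed, in-memory sizes):

Code:
#include <cstdio>

int main() {
    // 32-bit RGBA: 4 bytes per pixel, uncompressed.
    const int bytes_per_pixel = 4;
    std::printf("64x64:   %d KB\n", 64 * 64 * bytes_per_pixel / 1024);    // 16 KB
    std::printf("512x512: %d KB\n", 512 * 512 * bytes_per_pixel / 1024);  // 1024 KB = 1 MB
    return 0;
}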
No amount of procedural code is going to offset bloat of this nature. You'll save, at most, single-digit percentages of your distribution size.
Quote:
sluggish performance on anything less than the latest hardware,
Going to call you on this one as well. BeOS was written entirely in C++ except for the kernel itself, including the use of templates and other contemporaneously advanced C++ features. Its binaries are quite small compared to their Linux or Windows counterparts (including apps written in plain, vanilla C), and on 486-class hardware BeOS makes Linux look remarkably sluggish by comparison. I've used BeOS on a 486/33 once (BeOS R5, to be specific), and I'm happy to report it was the first OS I've come across that actually managed to make me happy as a user. Only one other OS has ever done that: AmigaOS.
That said, it's interesting to examine the cause of sluggish performance.
In the case of web apps, we find Javascript being used everywhere, for virtually everything, these days. Ignoring relative deviations in interpreter performance as noise, the overwhelming bulk of what Javascript does in a web app is collect metrics on how you use the website. These are often used to generate "heat maps," which tell the website authors which portions of the site are used most frequently, how many clicks the average task takes, how long you stay on a page, and so on. All of these metrics are useful for basic website usability work, but, of course, also for deciding how to optimize ad delivery.
In the case of desktop applications, well, I've seen push advertising in them too. This was (is?) a prominent issue on the Windows 10 platform especially, and I even remember Canonical catching flak for their Ubuntu marketplace program. Also, if you run tcpdump on an independent Linux box on the local network, you'll notice that an awful lot of your seemingly innocuous desktop applications actually "call home" to report, well, metadata and metrics. This does have the benefit of providing a means of automatically updating your application periodically, but be aware that all that network I/O will influence perceived performance. It's not a lot, but it is enough.
This brings to mind features like search prediction on Google. This kind of interface is often called a "live" interface, because it's dynamic. In desktop applications, to pull this off well, you need your data to be indexed extremely well. B-trees or skip lists are absolutely required; nothing else will offer sub-decisecond response times for locating information unless your data set is small enough to fit in a small handful of cache lines. Ideally, you want millisecond response times, because a "live" interface often involves displaying several auto-searched whatevers at once, and to be truly fluid, you want that entire result set back in less than 100ms. So not only are you writing the main GUI layout, you're also writing code to keep it up to date live, often against an active data stream, and that means you need a good index (more code and, often, a lot more data requirements too).
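To make the indexing point concrete, here's a minimal sketch of the idea (my own illustrative names; a real application would keep something like this on disk as a B-tree rather than in a std::map): a sorted index answers prefix queries in logarithmic time, which is what keeps a "live" completion box inside that 100ms budget.

Code:
#include <cstdio>
#include <map>
#include <string>
#include <vector>

// Return up to 'limit' completions for 'prefix' from a sorted index.
// lower_bound finds the first key >= prefix in O(log n); we then walk
// forward while the keys still share the prefix.
std::vector<std::string> complete(const std::map<std::string, int>& index,
                                  const std::string& prefix, size_t limit) {
    std::vector<std::string> out;
    for (auto it = index.lower_bound(prefix);
         it != index.end()
         && it->first.compare(0, prefix.size(), prefix) == 0
         && out.size() < limit;
         ++it) {
        out.push_back(it->first);
    }
    return out;
}

int main() {
    std::map<std::string, int> index = {
        {"kernel", 1}, {"kestrel", 2}, {"kettle", 3}, {"keyboard", 4}
    };
    for (const auto& s : complete(index, "ke", 10))
        std::printf("%s\n", s.c_str());
    return 0;
}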
Modern JIT compilers for "interpreted languages" like Javascript, Java, C#, and others now produce executable code that compares quite favorably with C. So, clearly, it's not the use of OO that is bogging the system down; it's what the applications have been programmed to do behind your back that is affecting performance.
Regardless, operating systems today are optimized to lazily load code into memory via demand paging. This means that when you run a program foo, foo does not get loaded into memory right away. Instead, the OS creates an address space for it and sets the PC to the address where the code would have been loaded. Obviously, the very first thing that happens is a page fault (since nothing is loaded there), at which point the OS traverses a bunch of data structures to figure out, "OOHH!! This belongs to FOO.EXE, page 123." With this knowledge, it loads in just that one page, in the hope that that's all you will ever need.

Well, as you can imagine, this results in very poor perceived performance, because the application, even if you have a billion GB of RAM, is apparently thrashing the hard drive. If your application is several hundred kilobytes to a few megabytes in size, which many GUI applications will be, consider that all this OS overhead can happen for every 4096 bytes of code and data fetched, and it's quite easy to see how, even on a 2GHz computer with an 800MHz FSB, it adds up to some pretty hefty latencies. SSDs are the only way to hide the application start-up costs, and that's only because of their near-zero seek times; the I/O overhead is still measured in milliseconds. Do the math and it's not too hard to see why computers "get slower" the faster they become. And none of this is influenced, in any way, by OOPLs.
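Some back-of-envelope numbers, purely as a sketch (the binary size and per-fault cost here are my assumptions, not measurements):

Code:
#include <cstdio>

int main() {
    // Assumed figures, purely illustrative: a 2 MB executable, 4 KB pages,
    // and ~8 ms per fault that actually has to touch a spinning disk.
    const double binary_bytes = 2.0 * 1024 * 1024;
    const double page_bytes   = 4096.0;
    const double ms_per_fault = 8.0;   // seek + rotational latency + transfer

    const double faults = binary_bytes / page_bytes;   // 512 faults
    std::printf("worst case: %.0f faults ~= %.1f seconds\n",
                faults, faults * ms_per_fault / 1000.0);  // ~4.1 s
    return 0;
}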
On the other hand, if you actually take the time to load a binary en masse, you'll find the perceived responsiveness of the user interface is much better. I know this because I've demonstrated it with my Kestrel-2. It runs at only 12 MIPS, on a 16-bit CPU with fairly poor instruction density, and yet I get more responsive programs using explicitly managed overlays than my 2GHz Linux box does with demand paging. That's telling. Moral: demand paging is awesome technology for server applications only. It seems to be utter tripe for real-time user interfaces.
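For what it's worth, on a POSIX box you can approximate en-masse loading yourself. This is only a sketch of the idea using mlockall(); it needs enough RLIMIT_MEMLOCK headroom (or privilege) to succeed, so treat it as illustrative rather than a drop-in fix.

Code:
#include <cstdio>
#include <sys/mman.h>

// "Load me en masse, please": mlockall() faults in and pins every page
// currently mapped into the process (and, with MCL_FUTURE, every page
// mapped later), so execution never stalls waiting on the disk.
int main() {
    if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
        std::perror("mlockall");
        return 1;
    }
    std::printf("all current and future pages resident\n");
    // ... the rest of the application runs without demand-paging stalls ...
    munlockall();
    return 0;
}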
And it all compounds. You have megabyte-sized assets (instead of something like Display Postscript, which could easily render a resolution-independent asset; see PC/GEOS for another system that used vector graphics almost exclusively), background processing to phone home and keep constant vigil for automatic updates, background processing to maintain a live user interface, the database-like indexing needed to support those live interfaces... it all adds up!
This is where code bloat actually comes from, and the sluggishness you feel comes from the multiplicatively compounded effects these things introduce. We're not dealing with 3270 terminals anymore. (Though I do kind of like 3270 terminals. The Kestrel-3 will some day have a GUI library intended to emulate something like a 3270 terminal, just so I can stick my middle finger up at all these "live" UIs today. Not that I hate live UIs, but yeah, they can easily be over-done.)
Quote:
sometimes questionable behavior. That last item is significant, since the more abstract the environment becomes the more difficult it is to maintain fine-grained control over what the user experiences and most importantly, what the user can do.
This is the only thing I agree with, but for different reasons. It is only a problem while the software is under development. If you've worked to make your OO program small and efficient, which is to say, you've put as much care into it as you would into any procedural program you'd write, then you'll come to realize that this problem exists with procedural code as well. The reason is that, in the absence of any tracing JIT support, any non-trivial procedural program will end up using jump tables somewhere to couple multiple like-typed objects. It always happens. (The alternative is the use of very large IF/ELSE IF/ELSE constructs or switch/case equivalents, which actually produces thrice the code a simple v-table would.) And when that coupling breaks down, you run into the same ambiguities.
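Here's what I mean, sketched in C-flavoured C++ with made-up names: the procedural version ends up hand-rolling essentially the same dispatch table the compiler would otherwise generate for you.

Code:
#include <cstdio>

// The "procedural" shape: a hand-rolled jump table of function pointers.
// This is essentially what a compiler-generated v-table looks like.
struct DeviceOps {
    void (*open)(void* self);
    void (*write)(void* self, const char* data);
};

static void serial_open(void*)                     { std::printf("serial: open\n"); }
static void serial_write(void*, const char* data)  { std::printf("serial: %s\n", data); }
static void printer_open(void*)                    { std::printf("printer: open\n"); }
static void printer_write(void*, const char* data) { std::printf("printer: %s\n", data); }

static const DeviceOps serial_ops  = { serial_open,  serial_write  };
static const DeviceOps printer_ops = { printer_open, printer_write };

struct Device {
    const DeviceOps* ops;   // the coupling point: one pointer per object,
    void* state;            // much like the hidden v-table pointer
};

int main() {
    Device devs[] = { { &serial_ops, nullptr }, { &printer_ops, nullptr } };
    for (Device& d : devs) {            // dispatch through the table,
        d.ops->open(d.state);           // just as a virtual call would
        d.ops->write(d.state, "hello");
    }
    return 0;
}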
Quote:
The core problem is that using 4GL methods ...
Just a point of clarification: by general industry convention, all of today's OOP languages are still 3GLs. Languages such as SQL are what's considered 4GL.
That said, reading about the problems you were having, I'd have to say your team made the wrong choice of development tools. BASIC is rarely a good foundation for serious applications development, primarily because it lacks support for the kind of "programming in the large" features you typically need in a large, enterprise application environment. Even if Thoroughbred managed to bolt such functionality onto the language, you're left with a walled-garden environment, which means you can't draw on as much outside expertise as you could with a more open environment like C or, dare I say it, even C++. It's also entirely possible that Thoroughbred's implementation was sorely lacking, which would have left you with a really bad taste for OOP.
I'm not suggesting OOP is right for all tasks either, of course. So far, the overwhelming majority of the OO code I write is work-related (in Javascript, no less); virtually everything I write for myself, even if it uses classes, tends to be procedural in nature. But I see you blaming OOP for many things which I myself have seen occur in plain-vanilla procedural languages all too frequently, or which come down to plain poor or inexperienced engineering choices.
The one thing the pro-OOP and anti-OOP crowds have in common is the desire to rush code out the door to beat the competition. So far as I've been able to tell, every single source of complexity comes from the requirement that my code interoperate with somebody else's code. Terminal handling in Linux is a disaster because of the continued cultural requirement to support ASR-33 teletypes at the login: prompt. C++ programming is a disaster, in large part, because of the Boost and STL libraries. C programming is a disaster because I have to be super careful about how I manage memory when working with third-party libraries. Python programming is a disaster because the packaging mechanism the community rallies behind is immature and, frankly, broken by design. Javascript is a disaster because it lacks both type-checking and basic arity-checking (meaning that, given function f(a,b,c) {...}, calling f() and f(1,2,3,4,5,6,7,8,9) are both perfectly valid things to do in Javascript). Combine this with the JS community's pervasive abhorrence of documenting APIs, and you get a signed waiver for admittance at your local insane asylum. And on it goes. In an enterprise environment, there's a strong incentive for "code reuse" to occur. By my best estimation, this happens to such an extent that it's actually an anti-pattern.
Alas, the software development world disagrees with my point of view. And why shouldn't it? It's made a lot of people a lot of serious money, and kept a lot of people I consider unqualified employed, while concurrently driving me to the point of unemployability because new-fangled "solutions" replace older, more problematic platforms with such velocity that I just can't keep up.