6502.org Forum  Projects  Code  Documents  Tools  Forum

All times are UTC




Post new topic Reply to topic  [ 63 posts ]  Go to page Previous  1, 2, 3, 4, 5  Next
Author Message
 Post subject:
PostPosted: Fri Jan 20, 2006 3:26 pm 
Offline

Joined: Sat Nov 19, 2005 11:44 pm
Posts: 10
Location: London
OO guru here and general strange guy (C# tech architect).

My thought on OO:

OO is an abstraction of thought, and a poor one; it doesn't model the machine or how the data is actually organised (or should be). Sometimes the mappings are terrible, particularly on the e-commerce platform I run, though we've had to make a lot of compromises there. OO will wear off when we finally start designing machines that narrow the gap between human cognitive science and computer science, and realise data isn't actually object-oriented and rigidly structured but is a loosely categorised system (see definitions of taxonomy). I've actually invented my own approach, taxonomic computing, but haven't researched it thoroughly enough to know whether it's practical.

I doubt that a 6502 / derivative has the ability to handle it, purely from a performance point of view. You lose a hell of a lot of that hard-procedural programmed performance with OO, as you're working with something that is over-abstracted.

Anyway back to sleep :wink:


 Post subject:
PostPosted: Fri Jan 20, 2006 5:31 pm 
Offline

Joined: Sat Jan 04, 2003 10:03 pm
Posts: 1706
cseemeuk wrote:
I doubt that a 6502 / derivative has the ability to handle it, purely from a performance point of view. You lose a hell of a lot of that hard-procedural programmed performance with OO, as you're working with something that is over-abstracted.


I've already pointed out earlier that the 6502 family is, thus far, not at all optimized for OO. That is a given, and really isn't contested. Nonetheless, with domain-specific optimizations, you can make the 6502 competitive with the 80386 in dispatch speeds, which is good enough in most cases.

Claiming that OO is a poor abstraction model will require a TON of proof to back up -- if you examine the tenets of modular programming, you'll see that OO is pretty much exactly modular programming, but on a finer scale, and often enforced by the language (finally). The reason everyone is all googly-eyed over OO on more powerful processors is precisely because, for the first time since they got out of college, they're actually following the advice of Wirth, Dijkstra, and Knuth, even if they don't realize it, and finally starting to reap the benefits.

OO does introduce one feature which modular programming didn't yet have -- that of type inheritance -- and one convenience that modular programming hadn't yet invented -- dynamic dispatch (aka polymorphism). However, the Oberon programming language (Wirth) proved that these can be added to a normal, procedural programming language without going hog-wild with OO specific syntax. Indeed, Oberon-2 doesn't even add a single keyword to support such "type-bound procedures."

As with any software development tool, it can be abused, or it can be used properly. I natively think and write all of my software in terms of objects, but I rarely use an OO programming language, for example. Objects truly are the only way to manage open-ended runtime expandability (e.g., to provide for plugins). See COM and CORBA. Writing in terms of objects also makes it significantly easier to identify module boundaries, even if you never decide to support 3rd party expandability.

Part of the issue surrounding OO is that it hasn't made good on its promise of plug-n-play software. No, of course not. Nothing ever will. But some technologies sure make it easier. COM and CORBA, for example, are not 100% object oriented, despite their marketing names. They are *component* oriented, where a component is defined as an object that exposes a predetermined interface. You can create new components that inherit the interface of another, but you cannot perform implementation inheritance -- in other words, you have to hand-implement every interface yourself. Nonetheless, this has still resulted in software which is overwhelmingly more popular with the buying public, since now they can add customized tools to frameworks much more easily than, say, with MFC or with Java's libraries. For the first time, people don't need to know a programming language to do this.

This leads one to conclude that having the principles of normal, procedural, modular programming as espoused by Wirth et al. applied to instances of individual data types, rather than to modules as a whole, is the proper course of evolution for programming, and results in the most future-proof form of programming. See the Oberon System for an example of a non-OO language which nonetheless still provides explicit support for components, and indeed uses them *extensively* internally, yet provides a level of performance that makes Windows 2000 and Linux quake in their boots with fright. The Oberon System is only about 6MB of code when compiled (including the base set of user applications), of which only those pieces which are actually used are loaded into the computer's memory (Oberon's modules, as with its predecessor Modula-2, are dynamically loaded as needed).

In light of all this, I therefore conclude that 'over-abstracted,' as you claim OO is, is an over-reaction based on relative inexperience with the true breadth of OO technology. Remember that C# is hardly the end-all, be-all of OO programming languages, and actually represents one of the worst examples as far as languages go (its back-end execution engine is quite good, however), as it falls squarely in the Java-inspired-just-to-compete-with-Java language category, where the arguments of over-abstraction can be reasonably applied (though I personally blame the execution of the concept rather than the language for the programming model it exposes; a good compiler would be able to statically optimize all the OO cruft, thus generating code equivalent to a procedurally written program that does the same thing). However, these arguments do not apply outright to C, C++, Ada, OCaml, Oberon and all of its derivatives, or even Object Pascal, which is still very much a procedural language at heart.

You might choose to refer me to the so-called "table oriented programming" model, which is pure procedural, where data is maintained in a table, such that a duality with objects exists: columns correspond to objects' fields, and rows to individual instances. (BTW, TOP is used in most closed systems, contrary to some beliefs, to help manage memory. See slab allocators, for example, which are a form of TOP.) If you can guarantee that the precise table schema will remain static throughout the entire lifetime of the running program, then yes, you can exploit this to write your code in a purely procedural manner. However, OO excels where you *don't* have a purely static schema. A graphics drawing program will often deal with outlined and filled shapes, for example. A filled shape has an interior pattern, and often a set of interior colors too, while a hollow shape has zero need for these fields. A table-oriented approach would never be able to handle this situation. The problem is exacerbated when you open up the toolbox to allow 3rd party developers to code their own graphics primitives.

In conclusion, OO has its place, and I think you'll be quite shocked to realize that it applies itself quite nicely to a wide variety of applications. OO provides a set of guidelines one can use to build the ADTs and modules you'd need in a procedural language for a statically bound program. OO provides the basis for interface specification rules allowing for open-ended, 3rd party expansion after-market. And in the more powerful procedural languages like C, C++, and Oberon, you can mix/match the two applications of OO on an as-needed basis, along with pure procedural, traditionally written code to boot, thus letting you choose specifically the abstraction level you need for your application.

I blame the programmer, not the tool.


 Post subject:
PostPosted: Fri Jan 20, 2006 6:45 pm 
Offline

Joined: Sat Nov 19, 2005 11:44 pm
Posts: 10
Location: London
Add note: I've been writing OO code since about 1993 (C++ -> Java -> C)

The problem that I encounter is not the tool or language, but that OO is not a good medium between machine and human thinking. We don't think in an object-oriented fashion at all. If you take an "uninitiated" person (as far as computing goes) and try and explain objects and their interactions to them, things will not work properly straight away (one of the problems I've encountered). I've been analysing people's thought processes and have come up with another method of abstraction that seems to fit nicely. I'm going to write a paper on it when I get a moment (whilst juggling naff customer requirements, family and a half finished 6502 board!).


 Post subject:
PostPosted: Sat Jan 21, 2006 6:42 am 
Offline

Joined: Sat Jan 04, 2003 10:03 pm
Posts: 1706
cseemeuk wrote:
We don't think in an object-oriented fashion at all.


This is a generalization that is patently false -- I do regularly think in terms of objects and their roles, and always have. In fact, I remember my superior officers in the Air Force chewing me out for being too literal. :)

Quote:
I've been analysing people's thought processes and have come up with another method of abstraction that seems to fit nicely. I'm going to write a paper on it when I get a moment (whilst juggling naff customer requirements, family and a half finished 6502 board!).


Thanks for telling me about the paper. I would like to be one of its first readers, as I've researched a number of worthwhile abstraction mechanisms, and I personally haven't really found any that was as general purpose or flexible as procedural + OO. TOP looks really, really interesting to me, but if only it could get around the static schema problem. I'm always looking for new methodologies to be more laz...er...productive in my line of work. :)


 Post subject:
PostPosted: Sat Jan 21, 2006 11:37 am 
Offline

Joined: Sat Nov 19, 2005 11:44 pm
Posts: 10
Location: London
The key misconception is that we think in terms of objects - our clients don't. They think in isolated chunks of genius and madness.

I'm all for lazy - that's why my brain churns on these things all day. If I can spend 6 months making the next 10 years easier, then I gain some of my time back (which is precious).


 Post subject:
PostPosted: Wed Feb 15, 2006 6:23 am 
Offline
User avatar

Joined: Thu Mar 11, 2004 7:42 am
Posts: 362
djmips wrote:
:evil:


LOL!


 Post subject: Re:
PostPosted: Mon Mar 16, 2015 5:19 am 
Offline
User avatar

Joined: Fri Aug 30, 2002 1:09 am
Posts: 8546
Location: Southern California
cseemeuk wrote:
Add note: I've been writing OO code since about 1993 (C++ -> Java -> C)

The problem that I encounter is not the tool or language, but that OO is not a good medium between machine and human thinking. We don't think in an object-oriented fashion at all. If you take an "uninitiated" person (as far as computing goes) and try and explain objects and their interactions to them, things will not work properly straight away (one of the problems I've encountered). I've been analysing people's thought processes and have come up with another method of abstraction that seems to fit nicely. I'm going to write a paper on it when I get a moment (whilst juggling naff customer requirements, family and a half finished 6502 board!).

Did you forget? :D I know it's only been nine years and we have to give you time, but I'd sure like to see it if you ever get it done. :D

_________________
http://WilsonMinesCo.com/ lots of 6502 resources
The "second front page" is http://wilsonminesco.com/links.html .
What's an additional VIA among friends, anyhow?


 Post subject: Re: Re:
PostPosted: Thu Dec 24, 2015 6:56 am 
Offline
User avatar

Joined: Thu May 28, 2009 9:46 pm
Posts: 8513
Location: Midwestern USA
GARTHWILSON wrote:
cseemeuk wrote:
I'm going to write a paper on it when I get a moment (whilst juggling naff customer requirements, family and a half finished 6502 board!).

Did you forget? :D I know it's only been nine years and we have to give you time, but I'd sure like to see it if you ever get it done. :D

:lol: :lol: :lol:

I'm one programmer who has not been convinced that OO programming is "better." OO proponents often seem to be more like fanatics when holding forth on why OO is better than other methods. It all depends on how you define "better."

One thing that the use of OO techniques has done is force the development of ever-faster hardware so as to maintain an adequate level of performance. That becomes apparent if I load and run an older version of Linux on recent hardware: it goes like a raped ape compared to Windows. :lol:

_________________
x86?  We ain't got no x86.  We don't NEED no stinking x86!


PostPosted: Thu Dec 24, 2015 1:48 pm 
Offline

Joined: Sat Jan 04, 2003 10:03 pm
Posts: 1706
You do realize that a significant chunk of even older Linux systems are OO, right?


PostPosted: Thu Dec 24, 2015 7:42 pm 
Offline

Joined: Sun Feb 23, 2014 2:43 am
Posts: 78
There is an attempt underway to add 65816 support to LLVM:
https://github.com/jeremysrand/llvm-65816

If I'm not mistaken, this should allow for C, C++, and Obj-C.


PostPosted: Thu Dec 24, 2015 8:13 pm 
Offline

Joined: Sat Jan 04, 2003 10:03 pm
Posts: 1706
While it'll be nice to have first-class support for the CPU on a mature tool-chain, that doesn't mean it'll be run-time efficient to invoke a C++ method on a 65816 (and even less so Obj-C). I never said it was impossible, only slow. :)


PostPosted: Thu Dec 24, 2015 8:21 pm 
Offline
User avatar

Joined: Thu Dec 11, 2008 1:28 pm
Posts: 10986
Location: England
That looks like one to watch, thanks joe7. Although I see progress is slow. But that's normal.


PostPosted: Thu Dec 24, 2015 9:39 pm 
Offline

Joined: Sun Apr 10, 2011 8:29 am
Posts: 597
Location: Norway/Japan
Well, the Unix (and Linux) device driver model is object oriented. And that both makes a lot of sense and is easy to understand (and implement, something I have done). But then again, there is object oriented and there is object oriented... to me, most object-oriented application programming, particularly in C++, is very messy indeed. That could be caused by a combination of many things: a highly complex and too-large language in the case of C++, high exposure due to popularity, problems not necessarily matching an object-oriented solution, but maybe most of all simply too little effort put into thinking through exactly how the problem should be divided up. That is difficult to get right in general, and adding objects into the mess probably makes it worse.


PostPosted: Thu Dec 24, 2015 10:54 pm 
Offline

Joined: Sat Jan 04, 2003 10:03 pm
Posts: 1706
The device driver model, the Linux module system, file descriptors, the virtual filesystem architecture, the virtual *memory* system, ... there's a lot of object oriented architecture inside the Linux kernel specifically, and I'm willing to bet BSDs as well. And that's just the kernel.

Yes, many C++ libraries are really ugly. I attribute much of that to C++ itself. In particular, if library A wants to use the STL, and library B wants to use multiple inheritance, your application that uses both A and B now must use both together, and that just wreaks havoc on code legibility. No question. Many OO applications written in C++ are complex not because they are OO, but because they're forced to inherit the poor decisions made by others, which in turn promulgates still more poor decisions intended to paper over the original ones.

I blame that on programmers who start their career without having felt the specific pain points of lower-level software engineering: they have no basis for comparing the solutions before them for suitability. They ogle at C++'s feature set, using whatever they think makes sense because, if the language offers it, it must be the right choice.

However, as annoying as it is, feature virality isn't the largest cause of complexity in applications. I still find similar complexity patterns in every language, including those which explicitly opt for a greatly reduced set of programmer-visible features. C++ is particularly egregious, yes, but Python, Perl, Ruby, Java, C, and yes, even Smalltalk itself all have similar complexity problems. The crux of the issue is that well-designed, well-factored software is complex by its very nature, and that complexity is now manifest in the source code because the inter-relationships are explicit. Case in point: the Linux memory-management code, where you'll find several instances of comments like "You are not expected to understand this" throughout the code.

We can only keep seven things in our heads at once, plus or minus two. OO makes explicit just how many different aspects a well-engineered application has, and it's a lot more than seven for anything non-trivial.

Prior to OO, no programmer on the planet could properly realize an abstract data type library that wasn't, in some capacity, coupled to some surrounding environment. This obviously compromises how "abstract" the ADT can be. None were truly reusable in any context. With OO's polymorphism support, it became possible to implement types that were truly divorced from their environment, requiring only some external configuration.

Could you do this with C? Absolutely, and I've done it. Long time ago, I was working on an AX.25 routing daemon in plain C. It was written in a way that exposed its API through a Unix-domain socket. I needed a way to test how it handled a variety of socket failure cases. To do this, I realized I had to wrap all of my system calls into dereferences through an interface, because nothing in C alone would let me abstract away the OS to any degree whatsoever, let alone to a fine enough degree where I could perform fault injection.

But, it requires a large amount of discipline to get right --- discipline that appropriate syntax and compiler support automates.

Which now brings us, I think, to the next question. Is physical code re-use worth it, as distinct from what I call idea re-use? That is, is it truly worth your time taking a library I've written as-is and trying to make it work in your project? I think that's a valid question to ask, particularly since, with ever-increasing frequency, I find situations where just rewriting something from scratch is a more productive use of my time.


 Post subject: Re: Re:
PostPosted: Fri Dec 25, 2015 3:21 pm 
Offline

Joined: Mon Jan 26, 2015 6:19 am
Posts: 85
BigDumbDinosaur wrote:
I'm one programmer who has not been convinced that OO programming is "better." OO proponents often seem to be more like fanatics when holding forth on why OO is better than other methods. It all depends on how you define "better."

I would have to agree that the "everything is an object" mentality is not always the best way to approach programming. Contrary to what some OOP proponents might have us believe, it is more natural to conceive of a program as DOING something (which invites a procedural or task oriented approach) rather than a collection of inter-connected parts or objects as in a motor vehicle. Certainly, some parts of a program may be best viewed as objects (such as windows and their components) but forcing the rest of the program to follow the same principle does not necessarily lead to the best code.

Of course, encapsulation and inheritance are features of OOP that have their advantages, especially in major projects done by teams of quasi-autonomous programmers. Objects produced by one programmer can be taken by another and used or extended/modified to suit. This makes objects very flexible and portable. The original programmer need not even include the source code for an object - only the compiled code - which can be advantageous to that programmer. (These features can be emulated with just C but errors are more easily avoided with a proper OOP language).

An '02 environment is unlikely to have projects large enough for these features to be a big enough advantage. Code optimization may be a more important consideration since speed/memory is likely to be more limited. I would rather use assembler and C or Forth which still allow objects of some form where they are required but still permit optimized code.

