Object orientation has proven its worth time and time again. To be fair, OO is really nothing more than modular programming taken literally and seriously for the first time in comp-sci history. The only true innovation it offers over MP is type inheritance, more often called interface inheritance. Note that I'm distinguishing type from class: any number of classes can all fulfill the same basic type. For my purposes, though it's not strictly true in a mathematical sense, a type can be thought of as a purely abstract base class.
However, today, inheritance is being seriously questioned as a valuable addition, since aggregation and delegation can do everything implementation inheritance can, but the reverse isn't true. Indeed, most modern OO implementations now either keep very tight control over implementation inheritance or refuse to support it at all, while still freely offering interface inheritance. Systems which lack support for implementation inheritance (e.g., COM and CORBA, the latter of which is becoming increasingly popular in embedded environments too) implement its equivalent behind the scenes with aggregation and delegation instead. Often these systems have dedicated support languages (e.g., IDL, in both COM and CORBA) which hide these behind-the-scenes activities, but let's be realistic: in a tight embedded environment, you're probably going to want full control over the runtime of the system.
For this reason, and for the sake of general education, I think it's strongly desirable to explore how to perform object-oriented dispatch that satisfies the requirements of high speed and/or low memory overhead (preferably, obviously, both).
On other CPU architectures, objects often include a pointer to a "v-table" or "entry point vector", as CORBA calls the concept. I'll use CORBA's terminology, since it's easier to abbreviate: EPV.
Code: Select all
Object            EPV (shared by all objects of the same class)
+------+          +----------+
| pEPV |--------->| func ptr |
+------+          +----------+
| .... |          | func ptr |
+------+          +----------+
                  | func ptr |
                  +----------+
                  |   ....   |
There is an alternative approach as well, called dynamic dispatch, where the infrastructure that makes method invocations work is not known a priori by the compiler, and hence must rely on run-time knowledge to work:
Code: Select all
Object              Class descriptor for object
+--------+          +----------+
| pClass |--------->| func ptr |--> Class_XXX_do_method
+--------+          +----------+
|  ....  |          |   ....   |
; Objects refer not to EPV, but instead point directly
; to a Class structure, describing the class of the object.
; Part of the Class structure is a pointer to a "method
; handler," which gets invoked every time a method on an
; object gets called.
Class_Foo_do_method:
; X contains pre-scaled method ID
; useful for quasi-static dispatch.
JMP (epv,X)
epv:
dw foo_method1, foo_method2, ...etc...
Class_Bar_do_method:
; handle the methods we specifically override
CPX #ID_METHOD1
BEQ bar_method1
CPX #ID_METHOD2
BEQ bar_method2
; whoops -- we don't override anything else.
; Pass the method request to the base class.
JMP Class_Foo_do_method
Also notice how the dynamic dispatch system gives the class implementor the choice of how to implement the actual dispatch mechanism. Foo uses a table lookup, like a statically dispatched EPV-based system, while Bar cherry-picks. There are advantages to both -- the former is comparatively fast, while the latter makes adaptability, flexibility, the ability to catch unimplemented methods, and even multiple implementation inheritance (in the Sather sense) much easier to support. For example, in Objective-C, you can create a CORBA-like remote-object system without a dedicated IDL compiler, because the object proxies and skeletons needn't know a class's interface a priori -- they learn this information dynamically.
In an object-oriented system for embedded applications, there will typically be ample opportunity to domain-optimize the code so that most OO dispatches reduce to a simple subroutine call. This is obviously a worthy goal for all CPU architectures, not just the 65816. It's why C++ makes it relatively hard to use virtual methods ("If I have to type the virtual keyword ONE MORE TIME...."), to encourage a more static binding between an object's client and its implementation for the sake of speed.
However, this won't always be possible. OSes and device drivers are a prime example. Most OSes today offer what is called "device-independent I/O," meaning that you can use the same interface (e.g., a file-like interface in the case of Unix, or a request-block interface for VMS and AmigaOS) for most, if not all, devices, regardless of the kind of device. The catch is that you need to implement this interface with speed in mind: when hacking hardware, real-time response is critical (consider sound-card modems, for example!).
I posited in the RISC-vs-CISC article that the 6502 and 65816 really aren't engineered to support OOP in any real capacity, and I stick by that assessment. However, others have come up with good ideas for implementing the concept with various design tradeoffs which lend themselves towards certain applications.
You're really not going to make a dynamically dispatched system any faster than the sum of all its comparisons and failed branches, so the 6502/65816 is about as fast as any other CPU in its class on that front. Therefore, the remainder of this thread will concentrate on statically dispatched method invocations. The assumption is that method calls occur frequently enough to constitute a significant portion of runtime.