* object -- Something. Anything. Whatever you want. An object can represent something on the screen (e.g., a button inside a window), or something intangible (e.g., a collection of other objects, such as the contents of a window).
* inheritance -- The ability to say that something is
like something else,
except for a set of specified differences. For example, a directory is like a file -- it has a filename, a creation datestamp, a modification datestamp, a set of permissions, a disk space quota, etc. It differs from a file in that you cannot open it or read from it in any streaming fashion. (Indeed, UNIX implements directories internally
as structured files.)
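To make that concrete, here's a minimal C++ sketch of the idea -- the class and member names are invented for this illustration, not taken from any real filesystem API:
Code:
#include <ctime>
#include <string>

class File {
public:
    std::string name;
    std::time_t created;
    std::time_t modified;
    int permissions;

    virtual ~File() {}

    // Ordinary files can be opened and read as a stream of bytes.
    virtual bool CanReadAsStream() const { return true; }
};

// A Directory is "like a File, except..." -- the name, datestamps, and
// permissions are inherited for free; only the difference is spelled out.
class Directory : public File {
public:
    bool CanReadAsStream() const override { return false; }
};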
* binding -- The definition of this depends on context; in object-oriented discussions it generally means associating a name or message with the specific piece of code that will handle it. When that association is made at compile-time it's called early (or static) binding; when it's deferred until run-time, late (or dynamic) binding.
* type -- You cannot add an integer and a social security number -- the two are fundamentally different types, the sum of which is meaningless. You can, however, add a natural number to a floating point number -- this is a perfectly natural thing to expect, even if a natural number and floating point number are not of the same class. Which brings us to . . .
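A quick C++ sketch of that point (SocialSecurityNumber is an invented type for the example): the compiler happily mixes an int with a floating point number, but refuses to add an int to a fundamentally different type that defines no such operation.
Code:
// An invented, deliberately distinct type: structurally just a number,
// but not interchangeable with integers.
struct SocialSecurityNumber {
    long digits;
};

int main() {
    int n = 3;
    double f = 2.5;
    double sum = n + f;       // fine: mixing int and floating point is natural
    (void)sum;

    SocialSecurityNumber ssn{123456789};
    (void)ssn;
    // double bad = n + ssn;  // won't compile: the sum of these two types
    //                        // has no defined meaning
    return 0;
}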
* class -- A specific kind of some type. As nebulous as it sounds, it's actually pretty concrete. You have soda cans and coffee cans, for example -- both are kinds of cans -- they both expose the same basic human interface, and can be used to hold some kind of contents.
* base class -- in the above can example, we can define
can to be the
base class for both soda and coffee cans. This base class serves as a foundation for categorization. A box, for example, is clearly not a can, despite being able to do the same basic kinds of things a can can do. For this reason, cans and boxes can each, in turn, be subclasses of a single
container class.
* subclass -- a more detailed or specific kind of object. For example, soda cans and coffee cans are subclasses of cans. But Coca Cola cans and Pepsi cans are subclasses of soda cans. I have never heard of a Pepsi coffee can. Conversely, I have never heard of Sanka brand soda either.
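Assuming the container/can/box examples above, the whole hierarchy might be sketched in C++ like this (all the class names are invented for illustration):
Code:
// Container is the common base class; Can and Box are subclasses of it,
// and the brand-specific cans sit further down.
class Container {
public:
    virtual ~Container() {}
    virtual void Fill() {}
    virtual void Empty() {}
};

class Can : public Container {};
class Box : public Container {};

class SodaCan : public Can {};
class CoffeeCan : public Can {};

class CocaColaCan : public SodaCan {};
class PepsiCan : public SodaCan {};
class SankaCoffeeCan : public CoffeeCan {};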
Here is a good point to tie in classes with interfaces/types. With such a strict hierarchy of categorization, how does one account for all the different diet products out there? With single inheritance, you can't -- unless you rely on multiple inheritance, a god-awful hack that allows you to say that a DietCocaColaCan is both a SodaCan and a DietProduct, while still being able to share implementation details. That is, if I invoke the method UsesAspertame() on a diet coffee can object, but I didn't explicitly define it myself, the programming language needs to figure out how to invoke the right method! For this simple exercise, it usually gets it right. But what if I create a class that inherits from SankaCoffeeCan and DietChocolateBeverage? Sanka's implementation of UsesAspertame() will return FALSE, while DietChocolateBeverage's will return TRUE. How do we resolve this?
Interfaces are a means of supporting
multiple interface inheritance, while still only supporting
single implementation inheritance, thus providing a happy medium. I can inherit from the class that
best solves the problem at hand, while still filling in the rest, as if I had full multiple inheritance available to me. Thus,
I get to manually choose the result of UsesAspertame(): I can either explicitly invoke Sanka's or DietChocolate's implementation, or just return the result myself.
At this point, I should point out that maybe I'm creating a whole new class hierarchy of desserts with this (e.g., let's call the new base class SomeStarBucksConcoction). This new class may not even be in the hierarchy for Cans or for DietBeverages. Yet, because its objects still expose all the same interfaces,
they are still of the same type as far as the client program is concerned.
Hopefully, this helps disambiguate single inheritance, multiple inheritance, and types.
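To make the interface idea concrete, here's a rough C++ sketch. C++ has no separate "interface" keyword, so a pure-virtual class with no data plays that role; the names come from the examples above. The point is that DietCocaColaCan inherits its implementation from SodaCan alone, merely promises to satisfy the DietProduct interface, and therefore has to answer UsesAspertame() explicitly -- there is no ambiguity for the language to resolve.
Code:
// The "interface": pure virtual, no implementation to inherit.
class DietProduct {
public:
    virtual ~DietProduct() {}
    virtual bool UsesAspertame() const = 0;
};

// The implementation hierarchy (single inheritance only).
class SodaCan {
public:
    virtual ~SodaCan() {}
    virtual void Open() { /* pop the tab */ }
};

// One implementation parent (SodaCan), any number of interfaces (DietProduct).
class DietCocaColaCan : public SodaCan, public DietProduct {
public:
    // We have to answer this ourselves; nothing is silently inherited,
    // so the Sanka-vs-DietChocolateBeverage ambiguity never comes up.
    bool UsesAspertame() const override { return true; }
};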
* aggregation -- Extending behavior by creating a large crate of different objects, which are (behind the scenes of some client software module) preconfigured to work together. To use the example above, SomeStarBucksConcoction would be an aggregate object, since it would embed both a DietChocolate object and a SankaCoffeeCan object, preconfigured to work together, seamlessly. Extra credit to those who can successfully explain why inheritance is just a form of aggregation taking place behind the programmer's back.
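A sketch of that aggregate in C++ (the embedded classes are trimmed-down stand-ins): SomeStarBucksConcoction isn't a subclass of either part; it just embeds them and delegates to them.
Code:
// Trimmed-down stand-ins for the embedded parts.
class SankaCoffeeCan {
public:
    bool UsesAspertame() const { return false; }
    void Brew() { /* ... */ }
};

class DietChocolate {
public:
    bool UsesAspertame() const { return true; }
    void Sweeten() { /* ... */ }
};

// The aggregate: not a subclass of either part, just a crate of objects
// preconfigured to work together behind the scenes.
class SomeStarBucksConcoction {
public:
    void Prepare() {
        coffee.Brew();        // delegate to the embedded objects
        chocolate.Sweeten();
    }
    // The aggregate answers for itself, using its parts.
    bool UsesAspertame() const { return chocolate.UsesAspertame(); }

private:
    SankaCoffeeCan coffee;
    DietChocolate chocolate;
};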
* method -- I remember seeing a very clear explanation of why these are called 'methods,' and it's lost on me now. So I'll try to recall from memory. Basically, asking an object to do something amounts to sending a message to the object. So:
Code:
myObject -> PrintStatusOn( someOutputConsole );
shows the client
sending PrintStatusOn() to myObject. How (the "method") this is done depends entirely on the object's class. Hence, the specific implementation that gets executed as a result of a message is called the method. The act of sending a message, since it results in the method's invocation, is also known as "invoking a method." The term "sending a message" is used often in Smalltalk circles while "invoking a method" is used heavily in C++/Java/C# circles, but they both mean precisely the same thing.
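To show how the same message picks a different method depending on the receiver's class, here's a small C++ example (the classes are invented; only PrintStatusOn() comes from the snippet above):
Code:
#include <iostream>

class StatusReporter {
public:
    virtual ~StatusReporter() {}
    // The message every subclass understands...
    virtual void PrintStatusOn(std::ostream &console) const = 0;
};

class Printer : public StatusReporter {
public:
    // ...and one class's method of answering it...
    void PrintStatusOn(std::ostream &console) const override {
        console << "Printer: out of paper\n";
    }
};

class DiskDrive : public StatusReporter {
public:
    // ...and another's.
    void PrintStatusOn(std::ostream &console) const override {
        console << "DiskDrive: 42% full\n";
    }
};

int main() {
    Printer p;
    DiskDrive d;
    StatusReporter *myObject = &p;
    myObject->PrintStatusOn(std::cout);  // invokes Printer's method
    myObject = &d;
    myObject->PrintStatusOn(std::cout);  // same message, DiskDrive's method
    return 0;
}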
* a priori -- a Latin idiomatic phrase (of the same class as "de facto", or "et cetera") meaning "known ahead of time." In the context of compilers, this means that the compiler can either know or deduce critical information at compile-time, before the program is actually run.
Garth wrote:
seems to mean "at compilation time, when runtime address is still unknown and may even be different every time it is loaded into the computer depending on what other programs are already loaded, and may even change after the loading)
No, this actually would be a run-time effect. A better example is this: a compiler will know "a priori" that a method's address will be found in the fourth entry of an EPV (entry point vector) because that method is the fourth item in the interface's function list, as specified in the source code.
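Here's a rough C++ illustration of that, with the EPV hand-rolled as a struct of function pointers (the names are invented, and this is only a sketch of how such tables tend to look): the slot a method occupies is fixed by the order of the interface's function list in the source, so the compiler can emit "fetch the fourth entry" without knowing any run-time addresses.
Code:
// A hand-rolled "entry point vector": one function pointer per method,
// in the order the interface's function list declares them.
struct FileSystemEPV {
    void (*Open)(void *self);     // slot 1
    void (*Close)(void *self);    // slot 2
    long (*Read)(void *self);     // slot 3
    long (*GetSize)(void *self);  // slot 4
};

struct FileObject {
    const FileSystemEPV *epv;  // filled in at load/run time
    // ...instance data...
};

long QuerySize(FileObject *f) {
    // Known a priori: GetSize lives in the fourth slot. Only the address
    // stored in that slot is a run-time matter.
    return f->epv->GetSize(f);
}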
* Sather -- an object oriented programming language that, while never becoming very popular, proved valuable for its research results. It inspired the whole concept of splitting implementation from interfaces, thus providing the awakening academia and industry needed to finally realize the goal of component-driven software design.
* IDL -- Interface Definition Language. A language used to make up for the lack of expressivity of traditional, statically compiled languages like C++, C, etc. when it comes to describing network communications protocols. For example, suppose we want to describe a filesystem as a set of C++ objects. This is relatively easy to do provided the objects are only ever used within the program that created them. But what if you want to access a filesystem remotely, e.g., across a network? You need to create different kinds of objects that fulfill the basic class definitions for files and directories, but translate the method invocations into network requests. You could do this work manually, but it's tedious. Why not just let a computer program auto-generate the code to manage the network connection for you instead? But it doesn't know what you want until you tell it. That's the job of IDL -- to tell this special compiler what you want to expose via the network.
* object proxies and skeletons -- remember the IDL compiler discussed above, and how we expose a filesystem interface using it? Well, those "stand-in" objects that "look like" normal files and directories but really aren't are known as "proxies." On the opposite side of the network connection are "skeletons," which translate incoming network requests into local filesystem object method calls.
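Here's a rough C++ sketch of the kind of code an IDL compiler might generate for the filesystem example -- the interface, the network helper, and all the names are invented for illustration. The proxy looks like an ordinary file object but turns each call into a network request; the skeleton on the far side turns incoming requests back into calls on the real object.
Code:
#include <string>

// The interface both sides agree on -- this is what you would describe in IDL.
class RemoteFile {
public:
    virtual ~RemoteFile() {}
    virtual long GetSize() = 0;
};

// Client side: the proxy. SendRequestAndWait() is a stand-in for whatever
// actually ships bytes across the network.
long SendRequestAndWait(const std::string &request) {
    // Stub for the sketch; a real one would open a socket, send the
    // request, and block until the reply arrives.
    (void)request;
    return 0;
}

class RemoteFileProxy : public RemoteFile {
public:
    explicit RemoteFileProxy(const std::string &path) : path_(path) {}
    long GetSize() override {
        // Looks like an ordinary local call; really a network round trip.
        return SendRequestAndWait("GetSize " + path_);
    }
private:
    std::string path_;
};

// Server side: the skeleton, which turns incoming requests back into
// ordinary method calls on the real object.
class LocalFile : public RemoteFile {
public:
    long GetSize() override { return 12345; /* real filesystem work here */ }
};

long DispatchRequest(const std::string &request, LocalFile &target) {
    if (request.rfind("GetSize", 0) == 0)   // "starts with GetSize"
        return target.GetSize();
    return -1;                              // unknown request
}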
* domain optimization -- an example is better than a formal definition. If I write an OO program, and later learn that all of my method invocations can be resolved by simple subroutine calls, then it follows I really didn't need to use an OO language in the first place. I could just call subroutines instead, and avoid all the static and/or dynamic dispatch overhead. Another example: if I know that the output console will be 80 columns, then I don't need to write an output routine that auto-wraps paragraphs. I can pre-wrap the paragraphs at edit-time (e.g., as I'm entering the program) and never have to worry about it at run-time.
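A tiny C++ illustration of the first example (names invented): once we know only one behavior is ever needed, a plain subroutine call does the same job without the dispatch machinery.
Code:
#include <iostream>

// The general OO form: dynamic dispatch through a base class.
class Logger {
public:
    virtual ~Logger() {}
    virtual void Log(const char *msg) { std::cout << msg << "\n"; }
};

// The plain subroutine we can use instead, once we learn that only this
// one behavior is ever needed.
void LogToConsole(const char *msg) { std::cout << msg << "\n"; }

int main() {
    Logger generic;
    Logger *l = &generic;
    l->Log("via dynamic dispatch");     // pays for a vtable lookup
    LogToConsole("via a direct call");  // same effect, no dispatch overhead
    return 0;
}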
* edit-time -- when you're entering a program's code.
* compile-time -- when the compiler is parsing and generating code from your source code.
* load-time -- at the time the program is being loaded from disk and is being relocated/fixed-up in memory.
* run-time -- while the program is running.