Confusion with polymorphism: Parametric, Inclusion, Coercion, and Overloading - language-agnostic

Reading Stack Overflow questions, the general consensus seems to be that overloading is not part of polymorphism.
However, my OOP lecture notes state that:
"There are four kinds of polymorphism: Parametric, Inclusion, Coercion, and Overloading".
The notes describe overloading both as methods with the same name but different parameters, and as operator overloading, e.g. + working on both ints and floats.
Wikipedia also states "Ad hoc polymorphism is supported in many languages using function overloading."
Thus I'm confused as to why people say this isn't part of polymorphism; it seems to be, in my opinion, since we have different forms of one method.
Could anyone elaborate?
Thanks.

If you take a strict definition of what the word Poly-Morphism ("many forms") means, then yes, overloading is polymorphism. The methods have the same name but different signatures, and the compiler (or, in dynamic languages, the runtime) picks which one to call based on the signature you use. That's many forms of the same method. It is not the "classic" description of polymorphism with classes and inheritance, animals, dogs, and cats, etc. Some languages also have operator overloading. Is that many forms of the same type?
It really depends on what you say is polymorphing. If you say that "many forms" relates only to objects, then yeah, you can't count overloading as "real" polymorphism in the OOP sense, because overloads are methods, not objects.
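For instance, here is a minimal Java sketch of that reading of overloading; the Adder class and its methods are invented purely for illustration:
// One method name, several forms; the compiler picks the overload from the argument types.
class Adder {
    int add(int a, int b) { return a + b; }
    double add(double a, double b) { return a + b; }
    String add(String a, String b) { return a + b; }   // "adds" strings by concatenating them

    public static void main(String[] args) {
        Adder adder = new Adder();
        System.out.println(adder.add(1, 2));          // resolves to add(int, int)
        System.out.println(adder.add(1.5, 2.5));      // resolves to add(double, double)
        System.out.println(adder.add("foo", "bar"));  // resolves to add(String, String)
    }
}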
This can help: Polymorphism vs Overriding vs Overloading.
You can see there are many opinions.

Ad hoc polymorphism treats the operators themselves as something like objects that can be overloaded, yet still work in situations where the user is unaware of the specifics of the overload. This is basically the same as the motivation for polymorphism with objects, except applied to operators. http://en.wikipedia.org/wiki/Operator_overloading

Related

How to monkey patch a generic type tag function table

I found it interesting to read about one of the ways you can do functional dynamic dispatch in SICP: using a table of (type tag, name) -> function that you can fetch from or add to.
I was wondering, is this a typical type dispatch mechanism for a dynamic non-OO language?
Also, what would be the typical way to monkey patch this? Using a chained list of tables (if you don't find it in the first table, try the next table recursively)? Rebinding the table within local scope to a modified copy? Etc.?
I believe this is a typical type dispatch mechanism, even for non-dynamic non-OO languages, based on this article about the JHC Haskell compiler and how it implements type classes. The implication in the article is that most Haskell compilers implement type classes (a kind of type dispatch) by passing dictionaries. His alternative is direct case analysis, which likely would not be applicable in dynamically typed languages, since you don't know ahead of time what the types of the constituents of your expression will be. On the other hand, this isn't extensible at run-time either.
As for dynamic non-OO languages, I'm not aware of many examples outside Lisp/Scheme. Common Lisp's CLOS makes Lisp a proper OO language and provides dynamic dispatch as well as multiple dispatch (you can add or remove generics and methods at run-time, and they can key off the type of more than just the first parameter). I don't know how this is usually implemented, but I do know that it is usually an add-on facility rather than a built-in facility, which implies it's using functionality available to the would-be monkey-patcher, and also that certain versions have been criticized for their lack of speed (CLISP, I think, but they may have resolved this). Of course, you could implement this type of parallel dispatch mechanism within an OO language as well, and you can probably find plenty of examples of that.
If you were using purely-functional persistent maps or dictionaries, you could certainly implement this facility without even needing the chain of inherited maps; as you "modify" the map, you get a new map back, but all the existing references to the old map would still be valid and see it as the old version. If you were implementing a language with this facility you could interpret it by putting the type->function map in the Reader monad and wrapping your interpreter in it.
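To make the table idea concrete, here is a minimal Java sketch of a (type tag, operation name) -> function table with a chained parent table for the "monkey patching" part of the question; the class names and the string-key encoding are invented for illustration:
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Hypothetical type-tag dispatch table with a parent ("chained") table to fall back on.
class DispatchTable {
    private final Map<String, Function<Object, Object>> entries = new HashMap<>();
    private final DispatchTable parent; // next table to try; may be null

    DispatchTable(DispatchTable parent) { this.parent = parent; }

    void put(String typeTag, String opName, Function<Object, Object> fn) {
        entries.put(typeTag + "/" + opName, fn);
    }

    Function<Object, Object> get(String typeTag, String opName) {
        Function<Object, Object> fn = entries.get(typeTag + "/" + opName);
        if (fn != null) return fn;
        if (parent != null) return parent.get(typeTag, opName); // recurse into the next table
        throw new IllegalArgumentException("no method for " + typeTag + "/" + opName);
    }
}

class DispatchTableDemo {
    public static void main(String[] args) {
        DispatchTable base = new DispatchTable(null);
        base.put("rectangle", "area", r -> {
            double[] wh = (double[]) r;
            return wh[0] * wh[1];
        });

        // "Monkey patching": a child table shadows the base entry without modifying it.
        DispatchTable patched = new DispatchTable(base);
        patched.put("rectangle", "area", r -> 0.0); // overridden behaviour

        System.out.println(base.get("rectangle", "area").apply(new double[]{2, 3}));    // 6.0
        System.out.println(patched.get("rectangle", "area").apply(new double[]{2, 3})); // 0.0
    }
}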

Why should you ever have to care whether an object reference is an interface or a class?

I often seem to run into the discussion of whether or not to apply some sort of prefix/suffix convention to interface type names, typically adding "I" to the beginning of the name.
Personally I'm in the camp that advocates no prefix, but that's not what this question is about. Rather, it's about one of the arguments I often hear in that discussion:
You can no longer see at a glance whether something is an interface or a class.
The question that immediately pops up in my head is: apart from object creation, why should you ever have to care whether an object reference is a class or an interface?
I've tagged this question as language agnostic, but as has been pointed out it may not be. I contend that it is because while specific language implementation details may be interesting, I'd like to keep this on a conceptual level. In other words, I think that, conceptually, you'd never have to care whether an object reference is typed as a class or an interface but I'm not sure, hence the question.
This is not a discussion about IDEs and what they do or don't do when visualizing the different types; caring about the type of an object is certainly a necessity when browsing through code (packages/sources/whatever form). Nor is it a discussion about the pros and cons of either naming convention. I just can't seem to figure out in what scenario, other than object creation, you actually care whether you're referencing a concrete type or an interface.
Most of the time, you probably don't care. But here are some instances that I can think of where you would. There are several, and it does vary a little bit by language. Some languages don't mind as much as others.
In the case of inversion of control (where someone PASSES you a parameter) you probably don't care if it's an interface or an object as far as calling its methods etc. But when dealing with types, it definitely can make a difference.
In managed languages such as the .NET languages, a class can inherit from only one class but implement many interfaces, and in a class declaration the base class has to be listed before the interfaces. So you need to know which is which when defining a new class or interface.
In Delphi / VCL, interfaces are reference-counted and automatically collected, whereas classes must be explicitly freed, so lifecycle management as a whole is affected, not just creation.
Interfaces may not be viable sources for class references.
Interfaces can be cast to compatible interfaces, but in many languages, they cannot be cast to compatible classes. Classes can be cast to either.
Interfaces may be passed to parameters of type IID, or IUnknown, whereas classes cannot (without a cast and a supporting interface).
An interface's implementation is unknown. Its input and output are defined, but the implementation which creates the output is abstracted. In general, one's attitude may be that when working with a class, one may know how the class works. But when working with an interface, no such assumption should be made. In a perfect world it might make no difference, but in reality this most certainly can affect your design.
I agree with you (and thereby do not use an "I" prefix for interfaces). We shouldn't have to care whether it is an abstract class or an interface.
Worth noting that Java needs the notion of an interface solely because it does not support multiple inheritance. Otherwise, the "abstract class" concept would suffice (which may be "all" abstract, or partially abstract, or almost concrete and just one tiny bit abstract, whatever).
Things that a concrete class can have and an interface can't:
Constructors
Instance fields
Static methods (before Java 8) and non-constant static fields
So if you use the convention of starting all interface names with 'I', it indicates to the user of your library that the particular type will not have any of the above-mentioned things.
But personally I feel that this is not reason enough to start all interface names with 'I'. Modern IDEs are powerful enough to indicate whether some type is an interface. Also, it hides the true meaning of an interface name: imagine if the Runnable and List interfaces were named IRunnable and IList respectively.
When a class is used, I can make the assumption that I will get objects from a relatively small and almost well-defined range of subclasses. That's because subclassing is - or at least it should be - a decision that isn't made too easily, especially in languages that don't support multiple inheritance. In contrast, interfaces can be implemented by any class, and the implementation can be added later to any class.
So the information is useful, especially when browsing through code, and trying to get a feeling what the code author intended to do - but I think it should be enough, if the IDE shows interfaces/classes as distinctive icons.
You want to see at a glance which are the "interfaces" and which are the "concrete classes" so that you can focus your attention to the abstractions in the design instead of the details.
Good designs are based on abstractions - if you know and understand them you understand the system without knowing any of the details. So you know you can skip the classes without the I prefix, and focus on the ones that do have it while you are understanding the code, and you also know to avoid building new code around non-interface classes without having to refer to some other design document.
I agree that the I* naming convention is just not appropriate for modern OO languages, but truth is this question isn't really language agnostic. There are legitimate cases where you have an interface not for any architectural reason but because you simply don't have an implementation or don't have access to one. For these cases you can read I* as *Stub or similar, and in these cases it might make sense to have both an IBlah and a Blah class.
These days, though, you rarely come up against this, and in modern OO languages when you say Interface you actually mean Interface, not just "I don't have the code for this". So there is no need for the I*, and in fact it encourages really bad OO design, as you won't get the natural naming conflicts that would tell you something's gone wrong in your architecture. Say you had a List and an IList... what's the difference? When would you use one over the other? If you wanted to implement IList, would you be constrained (conceptually at least) by what List does? I'll tell you what... if I found both an IBlah and a Blah class in any of my codebases I would purge one at random and take away that person's commit privileges.
Interfaces don't have fields, hence when you use IDisposable (or whatever), you know you're only declaring what you can do. That seems to me the main point of it.
Distinguishing between interfaces and classes may be useful, anywhere the type is referenced, in the IDE or out, to determine:
Can I make a new implementation of this type?
Can I implement this interface in a language that does not support multiple inheritance of implementation classes (e.g., Java)?
Can there be multiple implementations of this type?
Can I easily mock this interface in an arbitrary mocking framework?
It is worth noting that UML distinguishes between interfaces and implementation classes. In addition, the "I" prefix is used in the examples in "The Unified Modeling Language User Guide" by the three amigos Booch, Jacobson and Rumbaugh. (Incidentally, this also provides an example why IDE syntax coloring alone is not sufficient to distinguish in all contexts.)
You should care, because :
An interface with capital "I" enables one, namely you or your co-workers, to use any implementation which implements the interface. Otherwise, if in the future you figure out a better way to do something, say a better list-sorting algorithm, you will be stuck having to change ALL of the invoking methods as well.
It helps in understanding code - e.g. you don't need to memorize all 10 implementations of, say, ISortableList; you just care that it sorts a list (or something like that). Your code becomes practically self-documenting here.
To complete the discussion, here is a pseudocode example illustrating the above:
//Pseudocode - define implementations of ISortableList
Class SortList1 : ISortableList
Class SortList2 : ISortableList
Class SortList3 : ISortableList
//Pseudocode - the interface way
void Populate(ISortableList list, int[] nums)
{
    list.set(nums)
}
//Pseudocode - the "I don't care" way
void Populate2(SortList1 list, int[] nums)
{
    list.set(nums)
}
...
//Pseudocode - create instances
SortList1 list1 = new SortList1();
SortList2 list2 = new SortList2();
SortList3 list3 = new SortList3();
//Invoke Populate() - the "interface way"
Populate(list1, nums); //OK, list1 is an ISortableList implementation
Populate(list2, nums); //OK, list2 is an ISortableList implementation
Populate(list3, nums); //OK, list3 is an ISortableList implementation
//Invoke Populate2() - the "I don't care way"
Populate2(list1, nums); //OK, list1 is an instance of SortList1
Populate2(list2, nums); //Not OK, list2 is not of the required argument type, won't compile
Populate2(list3, nums); //Same as above
Hope this helps,
Jas.

Idiom vs. pattern

In the context of programming, how do idioms differ from patterns?
I use the terms interchangeably and normally follow the most popular way I've heard something called, or the way it was called most recently in the current conversation, e.g. "the copy-swap idiom" and "singleton pattern".
The best distinction I can come up with is that code meant to be copied almost literally is more often called a pattern, while code meant to be taken less literally is more often called an idiom, but even that isn't always true. This doesn't seem to be more than a stylistic or buzzword difference. Does that match your perception of how the terms are used? Is there a semantic difference?
Idioms are language-specific.
Patterns are language-independent design principles, usually written in a "pattern language" (a uniform template) describing things such as the motivating circumstances, pros & cons, related patterns, etc.
When people observing program development from On High (analysts, consultants, academics, methodology gurus, etc.) see developers doing the same thing over and over again in various situations and environments, the intelligence gained from that observation can be distilled into a Pattern. A pattern is a way of "doing things" with the software tools at hand that represents a common abstraction.
Some examples:
OO programming took global variables away from developers. For those cases where they really still need global variables but need a way to make their use look clean and object oriented, there's the Singleton Pattern.
Sometimes you need to create a new object of one of a variety of possible types, depending on some circumstances. An ugly way might involve an ever-expanding case statement. The accepted "elegant" way to achieve this in an OO-clean way is via the "Factory" or "Factory Method" pattern (see the sketch after this list).
Sometimes a lot of developers do things in a certain way, but it's a bad way that should be discouraged. This can be formalized in an antipattern.
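As a rough illustration of the Factory example above, here is a minimal Java sketch (really a bare-bones "simple factory" rather than the full GoF Factory Method; all class names are invented):
// Call sites ask the factory for "a Shape" and never name Circle or Square themselves.
interface Shape { double area(); }

class Circle implements Shape {
    private final double r;
    Circle(double r) { this.r = r; }
    public double area() { return Math.PI * r * r; }
}

class Square implements Shape {
    private final double side;
    Square(double side) { this.side = side; }
    public double area() { return side * side; }
}

class ShapeFactory {
    // The "ever-expanding case statement" lives in exactly one place instead of at every call site.
    static Shape create(String kind, double size) {
        switch (kind) {
            case "circle": return new Circle(size);
            case "square": return new Square(size);
            default: throw new IllegalArgumentException("unknown kind: " + kind);
        }
    }
}
// Usage: Shape s = ShapeFactory.create("circle", 2.0);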
Patterns are a high-level way of doing things, and most are language independent. Whether you create your objects with new Object or Object.new is immaterial to the pattern.
Since patterns are something a bit theoretical and formal, there is usually a formal pattern (heh - word overload! let's say "template") for their description. Such a template may include:
Name
Effect achieved
Rationale
Restrictions and Limitations
How to do it
Idioms are something much lower-level, and usually operate at the language level. Example:
*dst++ = *src++
in C copies a data element from src to dst while incrementing the pointers to both; it's usually done in a loop. Obviously, you won't see this idiom in Java or Object Pascal.
while (<INFILE>) { print; }
is (roughly quoted from memory) a Perl idiom for looping over an input file and printing out all lines in the file. There's a lot of implicit variable use in that statement. Again, you won't see this particular syntax anywhere but in Perl; but an old Perl hacker will take a quick look at the statement and immediately recognize what you're doing.
Contrary to the idea that patterns are language agnostic, both Paul Graham and Peter Norvig have suggested that the need to use a pattern is a sign that your language is missing a feature. (Visitor Pattern is often singled out as the most glaring example of this.)
I generally think the main difference between "patterns" and "idioms" to be one of size. An idiom is something small, like "use an interface for the type of a variable that holds a collection" while Patterns tend to be larger. I think the smallness of idioms does mean that they're more often language specific (the example I just gave was a Java idiom), but I don't think of that as their defining characteristic.
Since if you put 5 programmers in a room they will probably not even agree on what things are patterns, there's no real "right answer" to this.
One opinion that I've heard once and really liked (though I can't for the life of me recall the source) is that idioms are things that should probably be in your language, or that some other language already has. Put differently, they are tricks we use because our language doesn't offer a direct primitive for them. For instance, there's no built-in singleton in Java, but we can mimic it by hiding the constructor and offering a getInstance method, as sketched below.
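For example, a minimal Java sketch of that workaround (an eagerly initialized singleton; the Logger class is invented for illustration):
// Constructor hidden, single instance exposed via getInstance().
class Logger {
    private static final Logger INSTANCE = new Logger(); // eager initialization keeps it simple
    private Logger() { }                                  // nobody else can construct one
    static Logger getInstance() { return INSTANCE; }

    void log(String msg) { System.out.println(msg); }
}
// Usage: Logger.getInstance().log("hello");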
Patterns, on the other hand, are more language agnostic (though they often refer to a specific paradigm). You may have some infrastructure to support them (e.g., Spring for MVC), but they're not, and are not going to be, language constructs, and yet you could need them in any language from that paradigm.

Syntactic sugar vs. feature

In C# (and Java) a string is little more than a char array with a stored length and a few methods tacked on. Likewise, (reference vs. value stuff aside) objects are little more than glorified structs with inheritance and interfaces added.
On one level, these additions feel like clear features and enhancements unto themselves. On another level, they feel like a marginal upgrade from the status of "syntactic sugar."
To take this idea further, consider (I may have some details wrong, but the point remains):
transistor
-> elementary logic gate
-> compound gate
-> ALU, flip-flop
-> register, RAM
-> CPU
-> microcode
-> assembly
-> C
-> C++
-> MSIL, JavaScript
-> C#, jQuery
Many times, any single layer of abstraction looks a lot like syntactic sugar but multiple layers of separation feel very removed from each other.
How do you know when something has stopped being syntactic sugar and started being a bona fide feature?
It turns out to be a feature instead of syntactic sugar when it implies a different way of thinking.
You are right when you say objects are in fact glorified structs with methods and inheritance. That, however, is just an implementation detail. What objects allow is to think in a different way. You can relate more easily to real-world entities when thinking about objects. The same thing happened when, even further back in time, we jumped from gotos to procedural programming. Under the hood, the processor still keeps on jmp'ing from OP to OP, but we could think in a different, more black-box, way.
Having said that, in the extreme you can say everything is syntactic sugar, but some of that sugar is a feature when it allows you to think differently.
Syntactic sugar is a feature.
All of software is a giant stack of abstractions built on top of other abstractions. A string may be nothing more than an array of characters, but there are many operations that feel natural on strings, but awkward on character arrays. The goal of all of these abstractions is the same: remove irrelevant details so that the developer can focus on the important parts of the problem.
As you point out, all modern programming languages could be eliminated, and we could go back to working in assembly language. But our productivity would plummet.
I guess people call something syntactic sugar when they feel they get little benefit from it, and a feature when they feel they get a large benefit from it. That makes the distinction very fuzzy, and quite subjective.
When the change provides value? I have coded in assembler. I switched to C and looked at the output from the compiler. Its code was 95+% as good as my hand-crafted assembler, and it was much easier to write. For me that provided value, so I'd say it wasn't sugar.
C++ helps me translate my object oriented thoughts into code. As long as the overhead isn't terribly high then I think it's a feature.
I'm a practical sort. "If I can see it's valuable" is my answer.
"Syntactic sugar" is a feature you don't like
It seems that syntactic sugar is syntax that changes nothing about the abilities of the language; using a different construct accomplishes exactly the same thing. A String (thinking in Java) is not just syntactic sugar over a char array. A char array is mutable (in content if not in length), and you could not make a char array immutable using the language features that already existed.
On the other hand, the plus operator working on Strings is indeed syntactic sugar for using a StringBuilder and calling append.
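In Java terms, and roughly speaking (older javac versions literally rewrote string + into StringBuilder calls; newer ones use a different mechanism, but the idea is the same), the two forms below produce the same result; first and last are just illustrative variables:
class ConcatDemo {
    public static void main(String[] args) {
        String first = "Ada", last = "Lovelace";

        String sugared = first + " " + last;        // the sugared form

        String desugared = new StringBuilder()      // roughly what it stands for
                .append(first)
                .append(" ")
                .append(last)
                .toString();

        System.out.println(sugared.equals(desugared)); // true
    }
}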
I would have to say it's a feature when the same result cannot be achieved simply by writing different code with roughly the same amount of effort ("time constraint") as using the syntactic sugar.
My example would be a lambda expression: writing a foreach loop doesn't take a lot of effort, but using .ForEach() sure is nice too; compare that with rewriting the whole HttpRequest class on your own. One is syntactic sugar, the other is a feature. Both save time, one in a much bigger way than the other.
Generally the term "syntactic sugar" refers to language features which never allow a programmer to do something which could not be done before, but rather provide a nicer means of expressing something that could already be expressed in the language, even if somewhat more awkwardly.
Certain constructs may be unambiguously regarded as syntactic sugar. For example, in VB.NET, code to test for whether two references weren't equal used to require If Not (ref1 Is Ref2) but newer versions of the language allow If ref1 IsNot Ref2. Nothing can be expressed in the new syntax that couldn't be expressed in the old, but the new syntax is cleaner, introduces no ambiguities, and the only reason not to use it would be if code had to be back-compatible with old versions of the language.
Some constructs may be a bit harder to classify as sugar. In particular, if a language adds constructs which work identically to existing constructs when used with some types, but fail compilation with others, such constructs may provide a means of compile-time type verification which did not exist previously. Java generics may generally be viewed in this light. One can add a Cat to an ArrayList<Cat> just as easily as to an ArrayList; what the ArrayList<Cat> adds is a guard that rejects Dogs at compile time. Since compile-time constraints don't allow one to write any program that couldn't be written without them, some people may view them as syntactic sugar. On the other hand, even though type verification is performed at compile time rather than run time, it might still be viewed as one of the jobs of a program.
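A short Java sketch of that point, using the Cat/Dog/ArrayList example from the answer (the classes are invented for illustration):
import java.util.ArrayList;

class Animal { }
class Cat extends Animal { }
class Dog extends Animal { }

class GenericsDemo {
    public static void main(String[] args) {
        ArrayList rawList = new ArrayList();        // pre-generics ("raw") list; compiles with a warning
        rawList.add(new Cat());
        rawList.add(new Dog());                     // accepted; mistakes only surface at run time

        ArrayList<Cat> cats = new ArrayList<>();
        cats.add(new Cat());
        // cats.add(new Dog());                     // rejected at compile time -- the "guard"
    }
}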
Syntactic sugar and language feature are basically describing the same thing, even if syntactic sugar is sometimes used in a pejorative way whereas feature is often associated with deeper changes in the language architecture (introducing lambdas etc.).
But this distinction depends very much on an individual point of view (and its subjectively felt usefulness).
Regarding language-design aspects and your example with strings and char arrays, I would say that this should be neither a feature nor sugar, but simply expressible in the language's basic syntax (LOP - language-oriented programming). Generic concepts (typeclasses, metaprogramming, etc.) allow you to express many new and useful constructs by yourself without waiting for the language to get a new feature. Just look at Haskell or C++'s metaprogramming capabilities.

What makes a language Object-Oriented?

Since debate without meaningful terms is meaningless, I figured I would point at the elephant in the room and ask: What exactly makes a language "object-oriented"? I'm not looking for a textbook answer here, but one based on your experiences with OO languages that work well in your domain, whatever it may be.
A related question that might help to answer first is: What is the archetype of object-oriented languages and why?
Definitions for Object-Orientation are of course a huge can of worms, but here are my 2 cents:
To me, Object-Orientation is all about objects that collaborate by sending messages. That is, to me, the single most important trait of an object-oriented language.
If I had to put up an ordered list of all the features that an object-oriented language must have, it would look like this:
Objects sending messages to other objects
Everything is an Object
Late Binding
Subtype Polymorphism
Inheritance or something similarly expressive, like Delegation
Encapsulation
Information Hiding
Abstraction
Obviously, this list is very controversial, since it excludes a great variety of languages that are widely regarded as object-oriented, such as Java, C# and C++, all of which violate points 1, 2 and 3. However, there is no doubt that those languages allow for object-oriented programming (but so does C) and even facilitate it (which C doesn't). So, I have come to call languages that satisfy those requirements "purely object-oriented".
As archetypical object-oriented languages I would name Self and Newspeak.
Both satisfy the above-mentioned requirements. Both are inspired by and successors to Smalltalk, and both actually manage to be "more OO" in some sense. The things that I like about Self and Newspeak are that both take the message sending paradigm to the extreme (Newspeak even more so than Self).
In Newspeak, everything is a message send. There are no instance variables, no fields, no attributes, no constants, no class names. They are all emulated by using getters and setters.
In Self, there are no classes, only objects. This emphasizes what OO is really about: objects, not classes.
According to Booch, the following elements:
Major:
Abstraction
Encapsulation
Modularity
Hierarchy (Inheritance)
Minor:
Typing
Concurrency
Persistence
Basically, Object Oriented really boils down to "message passing".
In a procedural language, I call a function like this:
f(x)
And the name f is probably bound to a particular block of code at compile time. (Unless this is a procedural language with higher-order functions or pointers to functions, but let's ignore that possibility for a second.) So this line of code can only mean one unambiguous thing.
In an object oriented language I pass a message to an object, perhaps like this:
o.m(x)
In this case, m is not the name of a block of code but a "method selector", and which block of code gets called actually depends on the object o in some way. This line of code is more ambiguous or general because it can mean different things in different situations, depending on o.
In the majority of OO languages, the object o has a "class", and the class determines which block of code is called. In a couple of OO languages (most famously, JavaScript) o doesn't have a class, but has methods directly attached to it at runtime, or has inherited them from a prototype.
My demarcation is that neither classes nor inheritance are necessary for a language to be OO. But this polymorphic handling of messages is essential.
Although you can fake this with function pointers in, say, C, that's not sufficient for C to be called an OO language, because you're going to have to implement your own infrastructure. You can do that, and an OO style is possible, but the language hasn't given it to you.
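A small Java sketch of the o.m(x) point above; Animal, Dog and Cat are invented for illustration:
// The call a.speak() below is one line of code with many possible meanings.
interface Animal { String speak(); }

class Dog implements Animal { public String speak() { return "Woof"; } }
class Cat implements Animal { public String speak() { return "Meow"; } }

class DispatchDemo {
    static void greet(Animal a) {
        // Which speak() runs is decided at run time by a's actual class, not by this call site.
        System.out.println(a.speak());
    }

    public static void main(String[] args) {
        greet(new Dog()); // prints Woof
        greet(new Cat()); // prints Meow
    }
}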
It's not really the languages that are OO, it's the code.
It is possible to write object-oriented C code (with structs and even function pointer members, if you wish) and I have seen some pretty good examples of it. (Quake 2/3 SDK comes to mind.) It is also definitely possible to write procedural (i.e. non-OO) code in C++.
Given that, I'd say it's the language's support for writing good OO code that makes it an "Object Oriented Language." I would never bother with using function pointer members in structs in C, for example, for what would be ordinary member functions; therefore I will say that C is not an OO language.
(Expanding on this, one could say that Python is not object oriented, either, with the mandatory "self" reference on every step and constructors called __init__, whatnot; but that's a Religious Discussion.)
Smalltalk is usually considered the archetypal OO language, although Simula is often cited as the first OO language.
Current OO languages can be loosely categorized by which language they borrow the most concepts from:
Smalltalk-like: Ruby, Objective-C
Simula-like: C++, Object Pascal, Java, C#
I am happy to share this with you guys; it was quite interesting and helpful to me. This is an extract from a 1994 Rolling Stone interview where Steve Jobs (not a programmer) explains OOP in simple terms.
Jeff Goodell: Would you explain, in simple terms, exactly what object-oriented software is?
Steve Jobs: Objects are like people. They’re living, breathing things that have knowledge inside them about how to do things and have memory inside them so they can remember things. And rather than interacting with them at a very low level, you interact with them at a very high level of abstraction, like we’re doing right here.
Here’s an example: If I’m your laundry object, you can give me your dirty clothes and send me a message that says, “Can you get my clothes laundered, please.” I happen to know where the best laundry place in San Francisco is. And I speak English, and I have dollars in my pockets. So I go out and hail a taxicab and tell the driver to take me to this place in San Francisco. I go get your clothes laundered, I jump back in the cab, I get back here. I give you your clean clothes and say, “Here are your clean clothes.”
You have no idea how I did that. You have no knowledge of the laundry place. Maybe you speak French, and you can’t even hail a taxi. You can’t pay for one, you don’t have dollars in your pocket. Yet, I knew how to do all of that. And you didn’t have to know any of it. All that complexity was hidden inside of me, and we were able to interact at a very high level of abstraction. That’s what objects are. They encapsulate complexity, and the interfaces to that complexity are high level.
As far as I can tell, the main view of what makes a language "Object Oriented" is that it supports grouping data together with the methods that work on that data, which is generally achieved through classes, modules, inheritance, polymorphism, etc.
See this discussion for an overview of what people think (thought?) Object-Orientation means.
As for the "archetypal" OO language - that is indeed Smalltalk, as Kristopher pointed out.
Supports classes, methods, attributes, encapsulation, data hiding, inheritance, polymorphism, abstraction...?
Disregarding the theoretical implications, it seems to be
"Any language that has a keyword called 'class'" :-P
To further what aib said, I would say that a language isn't really object oriented unless the standard libraries that are available are object oriented. The biggest example of this is PHP. Although it supports all the standard object-oriented concepts, the fact that such a large percentage of the standard libraries aren't object oriented means that it's almost impossible to write your code in an object-oriented way.
It doesn't matter that they are introducing namespaces if all the standard libraries still require you to prefix all your function calls with things like mysql_ and pgsql_. In a language that supported namespaces in the actual API, you could get rid of the mysql_ prefix and have just a simple "include system.db.mysql.*" at the top of your file so that it would know where those functions came from.
When you can make classes, it is object-oriented.
For example: Java is object-oriented, JavaScript is not, and C++ looks like some kind of "object-curious" language.
In my experience, languages are not object-oriented, code is.
A few years ago I was writing a suite of programs in AppleScript, which doesn't really enforce any object-oriented features, when I started to grok OO. It's clumsy to write Objects in AppleScript, although it is possible to create classes, constructors, and so forth if you take the time to figure out how.
The language was the correct language for the domain: getting different programs on the Macintosh to work together to accomplish some automatic tasks based on input files. Taking the trouble to self-enforce an object-oriented style was the correct programming choice because it resulted in code that was easier to trouble-shoot, test, and understand.
The feature that I noticed the most in changing that code over from procedural to OO was encapsulation: both of properties and method calls.
Simples! (as the insurance advert character says)
1-Polymorphism
2-Inheritance
3-Encapsulation
4-Re-use.
:)
Object: An object is a repository of data. For example, if MyList is a ShoppingList object, MyList might record your shopping list.
Class: A class is a type of object. Many objects of the same class might exist; for instance, MyList and YourList may both be ShoppingList objects.
Method: A procedure or function that operates on an object or a class. A method is associated with a particular class. For instance, addItem might be a method that adds an item to any ShoppingList object. Sometimes a method is associated with a family of classes. For instance, addItem might operate on any List, of which a ShoppingList is just one type.
Inheritance: A class may inherit properties from a more general class. For example, the ShoppingList class inherits from the List class the property of storing a sequence of items.
Polymorphism: The ability to have one method call work on several different classes of objects, even if those classes need different implementations of the method call. For example, one line of code might be able to call the "addItem" method on every kind of List, even though adding an item to a ShoppingList is completely different from adding an item to a ShoppingCart.
Object-Oriented: Each object knows its own class and which methods manipulate objects in that class. Each ShoppingList and each ShoppingCart knows which implementation of addItem applies to it.
In this list, the one thing that truly distinguishes object-oriented languages from procedural languages (C, Fortran, Basic, Pascal) is polymorphism.
Source: https://www.youtube.com/watch?v=mFPmKGIrQs4&list=PL-XXv-cvA_iAlnI-BQr9hjqADPBtujFJd
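A minimal Java sketch of the ShoppingList/ShoppingCart polymorphism described above; the interface and the implementations are invented purely for illustration:
import java.util.ArrayList;

interface ItemList { void addItem(String item); }

class ShoppingList implements ItemList {
    private final ArrayList<String> items = new ArrayList<>();
    public void addItem(String item) { items.add(item); }  // just remember the item
}

class ShoppingCart implements ItemList {
    private final ArrayList<String> items = new ArrayList<>();
    private double total = 0;
    public void addItem(String item) {                      // a completely different implementation
        items.add(item);
        total += 1.0;  // pretend every item costs 1.0 -- purely illustrative
    }
}

class PolymorphismDemo {
    public static void main(String[] args) {
        // One line of code, two different implementations of addItem.
        for (ItemList list : new ItemList[]{ new ShoppingList(), new ShoppingCart() }) {
            list.addItem("milk");
        }
    }
}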
If a language is designed with facilities specifically to support object-oriented programming (the 4 features), then it is an object-oriented programming language.
You can program in an object-oriented style in more or less any language. It's the code that is object-oriented, not the language.
Examples of real object-oriented languages are Java, C#, Python, Ruby and C++.
Also, some languages, such as PHP and Perl, have extensions that provide object-oriented features.
You can write object-oriented code in C, but C is not an object-oriented programming language. It was not designed for that (that was the whole point of C++).
Archetype
The ability to express real-world scenarios in code.
foreach(House house in location.Houses)
{
    foreach(Deliverable mail in new Mailbag(new Deliverable[]
    {
        GetLetters(),
        GetPackages(),
        GetAdvertisingJunk()
    }))
    {
        if(mail.AddressedTo(house))
        {
            house.Deliver(mail);
        }
    }
}
-
foreach(Deliverable myMail in GetMail())
{
    IReadable readable = myMail as IReadable;
    if (readable != null)
    {
        Console.WriteLine(readable.Text);
    }
}
Why?
To help us understand this more easily. It makes better sense in our heads and, if implemented correctly, makes the code more efficient and reusable and reduces repetition.
To achieve this you need:
Pointers/References to ensure that this == this and this != that.
Classes to point to (e.g. Arm) that store data (int hairyness) and operations (Throw(IThrowable))
Polymorphism (Inheritance and/or Interfaces) to treat specific objects in a generic fashion so you can read books as well as graffiti on the wall (both implement IReadable)
Encapsulation because an apple doesn't expose an Atoms[] property
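And a tiny Java sketch of the encapsulation point in the last item; the Apple class and its numbers are made up for illustration:
class Apple {
    private final double massGrams = 150;   // internal detail; no Atoms[] property is exposed

    double calories() {                     // callers only see this high-level operation
        return massGrams * 0.52;            // illustrative conversion factor only
    }
}
// Usage: new Apple().calories() -- the caller never touches massGrams directly.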