Bullet Physics, when to choose which DynamicsWorld? - bulletphysics

I have a few general questions about the bullet physics library.
Here is my current understanding in a nutshell:
btDiscreteDynamicsWorld - The simplest physics world; it only handles rigid bodies and presumably has the best performance.
btSoftRigidDynamicsWorld - The only physics world that can handle soft bodies (think large jello moulds) in addition to rigid bodies.
btContinuousDynamicsWorld - If you have really fast objects, this will prevent them from penetrating or tunnelling through each other, but it is otherwise like a btDiscreteDynamicsWorld.
Is my understanding of the btDiscreteDynamicsWorld, btContinuousDynamicsWorld and btSoftRigidDynamicsWorld classes in terms of functionality, purpose, and performance correct?
Why does the user manual recommend the btDiscreteDynamicsWorld class?
btSoftRigidDynamicsWorld appears to be the only world that can handle soft bodies, so what if we wanted continuous physics integration and soft bodies?
How fast is fast enough to consider using a btContinuousDynamicsWorld, and what are the drawbacks of using one?
Edit:
My buddy Mako also posted this question on the Bullet forums: http://www.bulletphysics.org/Bullet/phpBB3/viewtopic.php?f=9&t=4863

Please ignore btContinuousDynamicsWorld, it is not functional (it has never been completed).
If you want to use soft bodies, use btSoftRigidDynamicsWorld; otherwise use btDiscreteDynamicsWorld.
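For readers wondering how to handle fast objects without btContinuousDynamicsWorld: Bullet exposes per-body continuous collision detection on the ordinary discrete world. Below is a minimal, hedged sketch (assuming Bullet 2.x and the standard btBulletDynamicsCommon.h header, not taken from the manual) showing a btDiscreteDynamicsWorld with CCD enabled on one fast rigid body; tune the thresholds to your own scale.

```cpp
#include <btBulletDynamicsCommon.h>

// Minimal sketch (assuming Bullet 2.x): a plain btDiscreteDynamicsWorld,
// with per-body CCD enabled on one fast rigid body instead of the
// unfinished btContinuousDynamicsWorld.
int main()
{
    btDefaultCollisionConfiguration collisionConfig;
    btCollisionDispatcher dispatcher(&collisionConfig);
    btDbvtBroadphase broadphase;
    btSequentialImpulseConstraintSolver solver;

    btDiscreteDynamicsWorld world(&dispatcher, &broadphase, &solver, &collisionConfig);
    world.setGravity(btVector3(0, -9.81f, 0));

    // A small, fast rigid body (a bullet-sized sphere).
    btSphereShape shape(0.05f);
    btVector3 inertia(0, 0, 0);
    shape.calculateLocalInertia(0.1f, inertia);
    btDefaultMotionState motion(btTransform::getIdentity());
    btRigidBody body(btRigidBody::btRigidBodyConstructionInfo(0.1f, &motion, &shape, inertia));

    // Per-body continuous collision detection: if the body moves further than
    // the threshold in a single step, Bullet sweeps a sphere of the given
    // radius along the motion to catch tunnelling through thin geometry.
    body.setCcdMotionThreshold(0.05f);
    body.setCcdSweptSphereRadius(0.02f);

    world.addRigidBody(&body);
    world.stepSimulation(1.0f / 60.0f, 10);
    world.removeRigidBody(&body);
    return 0;
}
```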

Is OOP abused in universities? [closed]

I started college two years ago, and since then I keep hearing "design your classes first". I really ask myself sometimes: should my solution even be a bunch of objects in the first place? Some say that you don't see the benefits because your codebase is very small - university projects. That project-size excuse just doesn't convince me. If the solution goes well with the project, I believe it should also be the right one for the macro-version of that project.
I am not saying OOP is bad, I just feel it is abused in classrooms where students like me are told day and night that OOP is the right way.
IMHO, the proper answer shouldn't come from a professor; I'd prefer to hear it from real engineers in the field.
Is OOP the right approach always?
When is OOP the best approach?
When is OOP a bad approach?
This is a very general question. I am not asking for definite answers, just some real design experience from the field.
I don't care about performance. I am asking about design. I know it is engineering in real life.
==================================================================================
Thanks for all the contributions. I chose Nosredna's answer because it addressed my questions in general and convinced me that I was wrong about the following:
If the solution goes well with the project, I believe it should be the right one also with the macro-version of that project.
The professors have the disadvantage that they can't put you on huge, nasty programs that go on for years, being worked on by many different programmers. They have to use rather unconvincing toy examples and try to trick you into seeing the bigger picture.
Essentially, they have to scare you into believing that when an HO gauge model train hits you, it'll tear your leg clean off. Only the most convincing profs can do it.
"If the solution goes well with the project, I believe it should be the right one also with the macro-version of that project."
That's where I disagree. A small project fits into your brain. The large version of it might not. To me, the benefit of OO is hiding enough of the details so that the big picture can still be crammed into my head. If you lack OO, you can still manage, but it means finding other ways to hide the complexity.
Keep your eye on the real goal--producing reliable code. OO works well in large programs because it helps you manage complexity. It also can aid in reusability.
But OO isn't the goal. Good code is the goal. If a procedural approach works and never gets complex, you win!
OOP is a real world computer concept that the university would be derelict to leave out of the curriculum. When you apply for jobs, you will be expected to be conversant in it.
That being said, pace jalf, OOP was primarily designed as a way to manage complexity. University projects written by one or two students on homework time are not a realistic setting for large projects like this, so the examples feel (and are) toy examples.
Also, it is important to realize that not everyone really sees OOP the same way. Some see it about encapsulation, and make huge classes that are very complex, but hide their state from any outside caller. Others want to make sure that a given object is only responsible for doing one thing and make a lot of small classes. Some seek an object model that closely mirrors real world abstractions that the program is trying to relate to, others see the object model as about how to organize the technical architecture of the problem, rather than the real world business model. There is no one true way with OOP, but at its core it was introduced as a way of managing complexity and keeping larger programs more maintainable over time.
OOP is the right approach when your data can be well structured into objects.
For instance, for an embedded device that's processing an incoming stream of bytes from a sensor, there might not be much that can be clearly objectified.
Also in cases where ABSOLUTE control over performance is critical (when every cycle counts), an OOP approach can introduce costs that might be nontrivial to compute.
In the real world, most often, your problem can be VERY well described in terms of objects, although the law of leaky abstractions must not be forgotten!
Industry generally resolves, eventually, for the most part, to using the right tool for the job, and you can see OOP in many many places. Exceptions are often made for high-performance and low-level. Of course, there are no hard and fast rules.
You can hammer in a screw if you stick at it long enough...
My 5 cents:
OOP is just one instance of a larger pattern: dealing with complexity by breaking down a big problem into smaller ones. Our feeble minds are limited to a small number of ideas they can handle at any given time. Even a moderately sized commercial application has more moving parts than most folks can fully maintain a complete mental picture of at a time. Some of the more successful design paradigms in software engineering capitalize on the notion of dealing with complexity. Whether it's breaking your architecture into layers, your program into modules, doing a functional breakdown of actions, using pre-built components, leveraging independent web services, or identifying objects and classes in your problem and solution spaces. Those are all tools for taming the beast that is complexity.
OOP has been particularly successful in several classes of problems. It works well when you can think about the problem in terms of "things" and the interactions between them. It works quite well when you're dealing with data, with user interfaces, or building general purpose libraries. The prevalence of these classes of apps helped make OOP ubiquitous. Other classes of problems call for other or additional tools. Operating systems distinguish kernel and user spaces, and isolate processes in part to avoid the complexity creep. Functional programming keeps data immutable to avoid the mesh of dependencies that occur with multithreading. Neither is your classic OOP design and yet they are crucial and successful in their own domains.
In your career, you are likely to face problems and systems that are larger than you could tackle entirely on your own. Your teachers are not only trying to equip you with the present tools of the trade. They are trying to convey that there are patterns and tools available for you to use when you are attempting to model real world problems. It's in your best interest to accumulate a collection of tools for your toolbox and choose the right tool(s) for the job. OOP is a powerful tool to have, but by far not the only one.
No...OOP is not always the best approach.
(A true) OOP design is the best approach when your problem can best be modeled as a set of objects that can accomplish your goals by communicating/using one another.
Good question...but I'm guessing Scientific/Analytic applications are probably the best example. The majority of their problems can best be approached by functional programming rather than object oriented programming.
...that being said, let the flaming begin. I'm sure there are holes and I'd love to learn why.
Is OOP the right approach always?
Nope.
When OOP is the best approach?
When it helps you.
When OOP is a bad approach?
When it impedes you.
That's really as specific as it gets. Sometimes you don't need OOP, sometimes it's not available in the language you're using, sometimes it really doesn't make a difference.
I will say this though, when it comes to technique and best practices continue to double check what your professors tell you. Just because they're teachers doesn't mean they're experts.
It might be helpful to think of the P of OOP as Principles rather than Programming. Whether or not you represent every domain concept as an object, the main OO principles (encapsulation, abstraction, polymorphism) are all immensely useful at solving particular problems, especially as software gets more complex. It's more important to have maintainable code than to have represented everything in a "pure" object hierarchy.
My experience is that OOP is mostly useful on a small scale - defining a class with certain behavior, and which maintains a number of invariants. Then I essentially just use that as yet another datatype to use with generic or functional programming.
Trying to design an entire application solely in terms of OOP just leads to huge bloated class hierarchies, spaghetti code where everything is hidden behind 5 layers of indirection, and even the smallest, most trivial unit of work ends up taking three seconds to execute.
OOP is useful --- when combined with other approaches.
But ultimately, every program is about doing, not about being. And OOP is about "being". About expressing that "this is a car. The car has 4 wheels. The car is green".
It's not interesting to model a car in your application. It's interesting to model the car doing stuff. Processes are what's interesting, and in a nutshell, they are what your program should be organized around. Individual classes are there to help you express what your processes should do (if you want to talk about car things, it's easier to have a car object than to talk about all the individual components it is made up of, but the only reason you want to talk about the car at all is because of what is happening to it: the user is driving it, or selling it, or you are modelling what happens to it if someone hits it with a hammer).
So I prefer to think in terms of functions. Those functions might operate on objects, sure, but the functions are the ones my program is about. And they don't have to "belong" to any particular class.
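As a concrete illustration of that "small class plus free functions" style (a hedged sketch with invented names, not anyone's production code): the class below guards a single invariant, and the interesting work lives in an ordinary function that treats it like any other value type fed to a generic algorithm.

```cpp
#include <algorithm>
#include <stdexcept>
#include <vector>

// A tiny class whose only job is to guard one invariant.
class Temperature {
public:
    explicit Temperature(double kelvin) : kelvin_(kelvin) {
        if (kelvin < 0.0)  // invariant: never below absolute zero
            throw std::invalid_argument("temperature below absolute zero");
    }
    double kelvin() const { return kelvin_; }
private:
    double kelvin_;
};

// A plain function does the actual work, treating the class as just another
// datatype handed to a generic algorithm.
double hottest(const std::vector<Temperature>& readings) {
    auto it = std::max_element(readings.begin(), readings.end(),
        [](const Temperature& a, const Temperature& b) {
            return a.kelvin() < b.kelvin();
        });
    return it == readings.end() ? 0.0 : it->kelvin();
}
```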
Like most questions of this nature, the answer is "it depends."
Frederick P. Brooks said it best in "The Mythical Man-Month": "there is no single strategy, technique or trick that will exponentially raise the productivity of programmers." You wouldn't use a broadsword to make a surgical incision and you wouldn't use a scalpel in a sword fight.
There are amazing benefits to OOP, but you need to be comfortable with the pattern to take advantage of these benefits. Knowing and understanding OOP also allows you to create a cleaner procedural implementation for your solutions because of the underlying concepts of separation of concerns.
I've seen some of the best results of using OOP when adding new functionality to a system or maintaining/improving a system. Unfortunately, it's not easy to get that kind of experience while attending a university.
I have yet to work on a project in the industry that was not a combination of both functional and OOP. It really comes down to your requirements and what are the best (maybe cheapest?) solutions for them.
OOP is not always the best approach. However it is the best approach in the majority of applications.
OOP is the best approach in any system that lends itself to objects and the interaction of objects. Most business applications are best implemented in an object-oriented way.
OOP is a bad approach for small one-off applications where the cost of developing a framework of objects would exceed the needs of the moment.
Learning OOA, OOD & OOP skills will benefit most programmers, so it is definitely useful for universities to teach it.
The relevance and history of OOP run back to the Simula languages of the 1960s, as a way to engineer software conceptually, where the developed code defines both the structure of the source and the general permissible interactions with it. Obvious advantages are that a well-defined and well-created object is self-justifying and consistently repeatable as well as reliable; ideally it is also able to be extended and overridden.
The only time I know of that OOP is a 'bad approach' is during embedded system programming efforts where resource availability is restricted; of course that's assuming your environment gives you access to them at all (as was already stated).
The title asks one question, and the post asks another. What do you want to know?
OOP is a major paradigm, and it gets major attention. If metaprogramming becomes huge, it will get more attention. Java and C# are two of the most used languages at the moment (see: SO tags by number of uses). I think it's ignorant to state either way that OOP is a great/terrible paradigm.
I think your question can best be summarized by the old adage: "When the hammer is your tool, everything looks like a nail."
OOP is usually an excellent approach, but it does come with a certain amount of overhead, at least conceptual. I don't do OO for small programs, for example. However, it's something you really do need to learn, so I can see requiring it for small programs in a University setting.
If I have to do serious planning, I'm going to use OOP. If not, I won't.
This is for the classes of problems I've been doing (which includes modeling, a few games, and a few random things). It may be different for other fields, but I don't have experience with them.
My opinion, freely offered, worth as much...
OOD/OOP is a tool. How good of a tool depends on the person using it, and how appropriate it is to use in a particular case depends on the problem. If I give you a saw, you'll know how to cut wood, but you won't necessarily be able to build a house.
The buzz that I'm picking up on is that functional programming is the wave of the future because it's extremely friendly to multi-threaded environments, so OO might be obsolete by the time you graduate. ;-)

At what point should architecture become layered?

Obviously, "Hello World" doesn't require a separated, modular front-end and back-end. But any sort of Enterprise-grade project does.
Assuming some sort of spectrum between these points, at which stage should an application be (conceptually, or at a design level) multi-layered? When a database or some other external resource is introduced? When you find that you're anticipating spaghetti code in your methods/functions?
When a database or some external resource is introduced.
But also:
Always (except for the most trivial of apps) separate AT LEAST the presentation tier and the application tier.
see:
http://en.wikipedia.org/wiki/Multitier_architecture
Layers are a means to keep a design loosely coupled and highly cohesive.
When you start to have a few classes (either implemented or just sketched with UML), they can be grouped logically into layers - or more generally packages, or modules. This is called the art of separating concerns.
The sooner the better: if you do not start layering early enough, you risk never doing it at all, because the effort required becomes too great.
Here are some criteria for when to:
Any time you anticipate the need to replace one part of it with a different part.
Any time you find yourself needing to divide work amongst parallel teams.
There is no real answer to this question. It depends largely on your application's needs, and numerous other factors. I'd suggest reading some books on design patterns and enterprise application architecture. These two are invaluable:
Design Patterns: Elements of Reusable Object-Oriented Software
Patterns of Enterprise Application Architecture
Some other books that I highly recommend are:
The Pragmatic Programmer: From Journeyman to Master
Refactoring: Improving the Design of Existing Code
No matter your skill level, reading these will really open your eyes to a world of possibilities.
I'd say that in most cases, when the concepts your code deals with sit at multiple distinct levels of abstraction, that is a strong signal to mirror them with levels of abstraction in your implementation.
This does not override the scenarios that others have highlighted already though.
I think once you ask yourself "hmm should I layer this" the answer is yes.
I've worked on too many projects that probably started off as a proof of concept/prototype and ended up being full projects used in production, which are horribly written and just reek of "get it done quick, we'll fix it later." Trust me, you won't fix it later.
The Pragmatic Programmer lists this as the Broken Window Theory.
Try and always do it right from the start. Separate your concerns. Build it with modularity in mind.
And of course try and think of the poor maintenance programmer who might take over when you're done!
Thinking of it in terms of layers is a little limiting. It's what you see in whitepapers about a product, but it's not how products really work. They have "boxes" that depend on each other in various ways, and you can make it look like they fit into layers but you can do this in several different configurations, depending on what information you're leaving out of the diagram.
And in a really well-designed application, the boxes get very small. They are down to the level of individual interfaces and classes.
This is important because whenever you change a line of code, you need to have some understanding of the impact your change will have, which means you have to understand exactly what the code currently does, what its responsibilities are, which means it has to be a small chunk that has a single responsibility, implementing an interface that doesn't cause clients to be dependent on things they don't need (the S and the I of SOLID).
You may find that your application can look like it has two or three simple layers, if you narrow your eyes, but it may not. That isn't really a problem. Of course, a disastrously badly designed application can look like it has layers or tiers if you squint as hard as you can. So those "high level" diagrams of an "architecture" can hide a multitude of sins.
My generic rule of thumb is to at least separate the problem into a model and a view layer, and throw in a controller if there is a possibility of more than one way of handling the model or piping data to the view.
(Or as the first answer, at least the presentation tier and the application tier).
Loose coupling is all about minimising dependencies, so I would say 'layer' when a dependency is introduced, e.g. a database, a third-party application, etc.
Although 'layer' is probably the wrong term these days. Most of the time I use Dependency Injection (DI) through an Inversion of Control container such as Castle Windsor. This means that I can code on one part of my system without worrying about the rest. It has the side effect of ensuring loose coupling.
I would recommend DI as a general programming principle all of the time so that you have the choice on how to 'layer' your application later.
Give it a look.
R
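To make the DI point above concrete, here is a minimal hand-rolled constructor-injection sketch (all names invented; a container such as Castle Windsor just automates the wiring shown in main): the service depends only on an interface, so the layer behind it can be swapped without touching the service.

```cpp
#include <iostream>
#include <memory>
#include <string>

// The "layer boundary" is an interface, not a concrete class.
struct IOrderRepository {
    virtual ~IOrderRepository() = default;
    virtual void save(const std::string& order) = 0;
};

// One possible implementation; a database-backed one could replace it later.
struct InMemoryOrderRepository : IOrderRepository {
    void save(const std::string& order) override {
        std::cout << "stored " << order << " in memory\n";
    }
};

// The application-tier class receives its dependency through the constructor.
class OrderService {
public:
    explicit OrderService(std::shared_ptr<IOrderRepository> repo)
        : repo_(std::move(repo)) {}
    void placeOrder(const std::string& order) { repo_->save(order); }
private:
    std::shared_ptr<IOrderRepository> repo_;
};

int main() {
    // Wiring happens in one place (the composition root); an IoC container
    // automates exactly this step.
    OrderService service(std::make_shared<InMemoryOrderRepository>());
    service.placeOrder("order-42");
}
```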

Do design patterns increase or decrease the complexity of an Application?

I was just looking at this question about SQL, and followed a link about DAO to wikipedia. And it mentions as a disadvantage:
"As with many design patterns, a design pattern increases the complexity of the application." -Wikipedia
Which suddenly made me wonder where this idea came from (because it lacks a citation). Personally, I have always considered that patterns reduce the complexity of an application, but I might be delusional, so I'm wondering whether this claimed complexity is based on something or not.
Thanks.
If the person reading the code is aware of design patterns and their concept and is able to identify design patterns in practical use (not just the book examples) then they really do reduce the complexity.
However, I've found that a lot of junior developers, who haven't heard much about design patterns or weren't aware of them at all, believe their use increases the complexity of the code.
I can understand it: You suddenly have more classes or code to go through to solve what seems to be a simple problem at first. If you're not aware of the benefits of design patterns, hacked solutions always look better.
I love design patterns, but they (apart from simple ones like Singleton) definitely add complexity to an application. They add some dimension to a design that is not intuitively obvious to a novice designer (and not part of the features of the programming language).
Some people might feel patterns reduce complexity because of the benefits they bring in terms of the software's non-functional requirements such as maintainability, extendability, reusability, etc. However, I disagree and see the benefits as a return on complexity investment. Perhaps in some cases patterns reduce complexity, but a theoretical discussion like that sheds more heat than light. Almost none of the answers so far used concrete examples, save https://stackoverflow.com/a/760968/1168342.
To be specific, many patterns increase accidental complexity of a design by introducing new structures (interfaces, methods, etc.) that weren't present in the design before the pattern was applied.
Let's use Visitor as an example.
Visitor is a way of separating operations from an object structure on which they operate.
Before the solution with Visitor, the operations are hard-coded into each Element of the object structure. The challenge for the developer is that adding new operations involves modifying the code in the various elements.
After the application of the Visitor pattern, there is an additional class hierarchy of visitors, which encapsulate the operations.
The flow of control in the solution is definitely more complex, and will be harder to debug (anyone who has implemented Visitor and tried to follow the program flow of double-dispatched calls with accept/visit will know this). Understanding and maintaining Visitor functions in terms of cohesive units is less complicated than the alternative of coding functions into each of the Elements in the fixed structure that is visited. This is the benefit of the pattern.
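To make that double dispatch concrete, here is a stripped-down sketch (all names invented, not taken from any particular codebase): two element types, and a new operation added as a visitor without touching them.

```cpp
#include <iostream>
#include <memory>
#include <vector>

struct Circle;
struct Square;

// The visitor interface lists one visit() per element type.
struct Visitor {
    virtual ~Visitor() = default;
    virtual void visit(const Circle& c) = 0;
    virtual void visit(const Square& s) = 0;
};

// The fixed element hierarchy only knows how to accept a visitor.
struct Element {
    virtual ~Element() = default;
    virtual void accept(Visitor& v) const = 0;  // first dispatch: on the element
};

struct Circle : Element {
    void accept(Visitor& v) const override { v.visit(*this); }  // second dispatch: on the visitor
};
struct Square : Element {
    void accept(Visitor& v) const override { v.visit(*this); }
};

// A new operation is a new visitor; the element classes stay untouched.
struct AreaReport : Visitor {
    void visit(const Circle&) override { std::cout << "report circle area\n"; }
    void visit(const Square&) override { std::cout << "report square area\n"; }
};

int main() {
    std::vector<std::unique_ptr<Element>> shapes;
    shapes.push_back(std::make_unique<Circle>());
    shapes.push_back(std::make_unique<Square>());

    AreaReport report;
    for (const auto& s : shapes)
        s->accept(report);  // the accept/visit round trip described above
}
```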
It's difficult to say quantitatively how much increase there is in accidental complexity or how much easier it is to add new operations. I certainly don't agree with answers that make a blanket statement saying in the long-term, complexity is reduced with applying a pattern. It's not like your design "forgets" the double-dispatch added by Visitor's approach, just because you have code which more easily allows operations to be added. The complexity is a price (or tax) you pay to get the benefit in maintainability.
Patterns still have to be applied
Regardless of one's supposed familiarity with patterns, any given pattern must be applied to a solution. That application is going to be different every time (Martin Fowler said patterns are only half-baked solutions). Developers will always have to understand what classes are playing what roles in the existing design, which is subject to the essential complexity (the application problem's complexity) that is often non-trivial.
In the best case, understanding a design pattern applied in an application that's already complex may be trivial, but it's not 0 effort:
Patterns aren't always applied the same way. There are many variants of patterns -- Proxy comes to mind. I'm not sure that everyone agrees about how any given pattern should be applied.
Introducing one pattern (e.g., Strategy to encapsulate algorithms) often leads to other patterns to manage things properly (e.g., Factory to instantiate the concrete Strategies).
Introducing a pattern often leads to more responsibilities. Object cleanup when a Factory is used is not trivial (and also not documented in GoF). How many know about the so-called Lapsed-listener problem?
What happens if there is a change in the assumptions made about the need for the pattern (e.g., there is no longer a need to have multiple encapsulated algorithms provided by the Strategy pattern)? It's going to be extra work to remove the pattern later. If you don't remove it, new developers could be duped by its presence when they come on board. Patterns are intertwined between the classes playing the roles in the pattern. Removal is not trivial.
Erich Gamma gave an anecdote at ECOOP 2006 that designers in one case decided to remove the Abstract Factory pattern from a commercial multi-platform GUI widget framework (the classic Abstract Factory example!). As I remember the anecdote, the multiple-levels of indirection (polymorphic calls) in complex GUIs was a significant performance hit in the client code. Customers complained about GUIs being sluggish, and the "optimization" was to remove the indirections. In this case, performance trumped maintainability; the pattern was only making the coders happy, not the end users.
DAO example
In terms of the DAO example you cite in the question, if you're coding an application that will never need to run with varying databases, then the DAO pattern is an unneeded level of complexity. In general, if your code doesn't need the benefit that a pattern is supposed to provide, applying that pattern will increase your application's complexity unnecessarily.
Revolving door metaphor
Using buildings as a metaphor, let's consider a revolving door as a building design pattern (Wikipedia has a picture of one).
You can "see" the additional complexity in such a door. The benefits of revolving doors are in energy savings. They attempt to solve the problem where there are people frequently going in and out of a building, and opening/closing a standard door allows too much air to be exchanged between the inside and the outside of the building each time.
It probably wouldn't make sense to install a revolving door as the entrance of a two-bedroom house, because there is not enough traffic to justify the additional complexity. The revolving door would still work in a two-bedroom house. The benefits in terms of energy savings would be small (and might actually be worse because of size and air-tightness relative to a conventional door). The door would surely cost more and would take up more space than a traditional door.
Design patterns often lead to additional levels of abstraction around a problem, and if not handled correctly then too much abstraction can lead to complexity.
However, since design patterns provide a common vocabulary to communicate ideas they also reduce complexity and increase maintainability.
At the end of the day it's a double-edged sword, but I can't imagine a situation where I'd avoid using a design pattern...
There's an infamous disease known as "Small Boy With A Pattern Syndrome" that usually strikes someone who has recently read the GoF book for the first time and suddenly sees patterns everywhere. That can add complexity and unnecessary abstraction.
Patterns are best added to code as a discovery or refactoring to solve a particular problem, in my opinion.
In the short term, design patterns will often increase the complexity of the code. They add extra abstractions in places they might not be strictly necessary. However, in the long term they reduce complexity because future enhancements and changes fit into the patterns in a simple way. Without the patterns, these changes would be much more intrusive and complexity would likely be much higher.
Take for example the Decorator pattern. The first time you use it, it will add complexity, because now you have to define an interface for the object and create another class to add the decoration. The same thing could likely be done with a simple property and be done with it. However, when you get to 5 or 20 decorations, the complexity with a decorator is much less than with properties.
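A hedged sketch of that trade-off (invented names, not a prescribed design): one extra interface and a couple of wrapper classes up front, but the 5th or 20th decoration is just one more small wrapper rather than another flag on the original class.

```cpp
#include <iostream>
#include <memory>
#include <string>

// The interface every decoration shares with the real object.
struct Notifier {
    virtual ~Notifier() = default;
    virtual std::string send(const std::string& msg) const = 0;
};

struct EmailNotifier : Notifier {
    std::string send(const std::string& msg) const override { return "email:" + msg; }
};

// Each decorator wraps another Notifier and adds exactly one concern.
struct Encrypted : Notifier {
    explicit Encrypted(std::unique_ptr<Notifier> inner) : inner_(std::move(inner)) {}
    std::string send(const std::string& msg) const override {
        return inner_->send("<encrypted>" + msg + "</encrypted>");
    }
    std::unique_ptr<Notifier> inner_;
};

struct Logged : Notifier {
    explicit Logged(std::unique_ptr<Notifier> inner) : inner_(std::move(inner)) {}
    std::string send(const std::string& msg) const override {
        std::cout << "sending: " << msg << "\n";
        return inner_->send(msg);
    }
    std::unique_ptr<Notifier> inner_;
};

int main() {
    // Decorations stack; adding a 20th one is just another small wrapper.
    auto notifier = std::make_unique<Logged>(
        std::make_unique<Encrypted>(std::make_unique<EmailNotifier>()));
    std::cout << notifier->send("hello") << "\n";
}
```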
As Grover said, the power of design patterns is dependent on the ability of programmers to recognize them when they see them. It's like reducing a mathematical problem to a simpler problem, and then solving the simpler one. To someone who doesn't realize this, though, it seems like you've just created another problem.
I think it's always a good idea to document explicitly, using comments and/or descriptive names, when you're using a pattern to solve a problem. This might even educate another programmer who comes across it about the pattern if he wasn't aware of this.
I think it depends on the "audience" i.e. the maintaining developers of the code base. If they are design pattern illiterate then yes it can increase complexity, because most things one doesn't understand are "complex".
If the team is design pattern literate, i.e. they understand the basics and understand the premise behind why design patterns are useful (and as important when they're not) then I think they reduce complexity.
After all, computer science may be a fledgling science, but it has decades of experience under its belt. The chances are somebody has already solved your problem before, whether the answer is a design pattern, a data structure or an algorithm.
I rather like this humorous explanation by Dylan Beattie. I recommend the read (if nothing else to waste five minutes on a Friday morning!)
Design patterns work like algorithms for object-oriented programming: they show you how to put objects together in a meaningful relationship that performs a particular task. So I would say yes, they reduce complexity by allowing you to understand the design of the software better, the same way algorithms do in procedural programs.
If I told you that X was using a linked list with a bubble sort, it would be a lot easier to follow along with what the programmer was doing.
Design patterns increase the amount of code and divide it into multiple parts. If the design pattern and its concept are known, then it doesn't seem complex; but when the code is based on a design pattern you don't know, it looks complex.
I did some research about this topic in the scope of the GoF patterns. I observed OO metric value fluctuations after design pattern refactorings; you can check it out using this link.
Design patterns definitely add complexity upfront in return for more modular, maintainable, flexible and extensible code in the long run. Perhaps the Iterator pattern epitomizes it best.
Using an index to iterate a list is clearly a very simple and intuitive thing to do, yet the pattern encapsulates it in an iterator, which is definitely more complex than a simple index.
The trade-off is that, while it is fair to say some (implementation) simplicity has been taken away, in return iteration has become more flexible: the iteration responsibility is removed from the container and objectified into something reusable.
That's design patterns for you.
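A small sketch of that index-versus-iterator trade-off (hypothetical code, standard library only): the iterator version is a little more ceremony for a vector, but the identical code also works for containers where indexing isn't even available.

```cpp
#include <cstddef>
#include <list>
#include <vector>

// Iterator-based traversal: the container decides how to walk itself,
// so the same code works for vectors, lists and other containers.
template <typename Container>
int sum(const Container& c) {
    int total = 0;
    for (auto it = c.begin(); it != c.end(); ++it)
        total += *it;
    return total;
}

int main() {
    std::vector<int> v{1, 2, 3};
    std::list<int> l{4, 5, 6};

    // Index-based: simpler to read, but tied to random-access containers.
    int byIndex = 0;
    for (std::size_t i = 0; i < v.size(); ++i)
        byIndex += v[i];

    int fromVector = sum(v);  // iterator version works here...
    int fromList   = sum(l);  // ...and here, where operator[] doesn't exist
    return (byIndex + fromVector + fromList) > 0 ? 0 : 1;
}
```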
Design patterns don't increase complexity; they can make things a lot easier to read and maintain. It can be harder for newcomers to integrate into your developer team, but that effort will be beneficial as a whole.
The problem isn't design patterns as a whole, but their abuse.
They should neither increase nor decrease the complexity.
You should always use an appropriate design for your code. This may use common design patterns or not.
The main benefits to design patterns are
By learning them you have added more design tools to your toolbox
By learning their names, if you use them and put a comment stating the pattern you're using, it helps readers understand your design intent more concisely
When I teach patterns at Hopkins, the two big things I stress are:
Patterns are all about Communication of Intent
Don't use any of the specific patterns as a Golden Hammer; lock them in your toolbox and only pull them out if it makes sense for your application.

Development Cost of Procedural Programming vs. OOP?

I come from a fairly strong OO background, and the benefits of OOD & OOP are second nature to me, but recently I've found myself in a development shop tied to procedural programming habits. The implementation language has some OOP features, but they are not used in optimal ways.
Update: everyone seems to have an opinion about this topic, as do I, but the question was:
Have there been any good comparative studies contrasting the cost of software development using procedural programming languages versus Object Oriented languages?
Some commenters have pointed out the dubious nature of trying to compare apples to oranges, and I agree that it would be very difficult to accurately measure, however not entirely impossible perhaps.
Almost all of these questions are confounded by the problem that individual programmer productivity varies by an order of magnitude or more; if you happen to have an OO programmer who is in the group at productivity x, and a "procedural" programmer who is a 10x programmer, the procedural programmer is liable to win even if OO is faster in some sense.
There's also the problem that coding productivity is usually only 10-20 percent of the total effort in a realistic project, so higher coding productivity doesn't have much impact; even that hypothetical 10x programmer, or an infinitely fast programmer, can't cut the overall effort by more than 10-20 percent.
You might have a look at Fred Brooks' paper "No Silver Bullet".
After poking around with google I found this paper here. The search terms I used are Productivity object oriented.
The opening paragraph goes on to say:
Introduction of object-oriented technology does not appear to hinder overall productivity on new large commercial projects, but it neither seems to improve it in the first two product generations. In practice, the governing influence may be the business workflow and not the methodology.
I think you will find that object-oriented programming is better in specific circumstances but neutral for everything else. What sold my bosses on converting my company's CAD/CAM application to an object-oriented framework is that I showed precisely the areas in which it would help. The focus wasn't on the methodology as a whole but on how it would help us solve specific problems we had. For us that was having an extensible framework for adding more shapes, reports, and machine controllers, and using collections to remove the memory limitations of the older design.
OO and procedural offer two different ways to develop, and both can be costly if badly managed.
If we suppose that the work is done by the best person in both cases, I think the result might be equal in terms of cost.
I believe the cost difference will show in the maintenance phase, where you will need to add features and modify current ones. Procedural projects are harder to cover with automated testing, are less able to be extended without affecting other parts, and are harder to understand concept by concept (because cohesive parts aren't necessarily grouped together).
So I think the OO cost will be lower in the long run compared to procedural.
i think S.Lott was referring to the "unrepeatable experiment" phenomenon, i.e. you cannot write application X procedurally then rewind time and write it OO to see what the difference is.
you could write the same app twice two different ways, but
you would learn something about the app doing it the first way that would help you in the second way, and
you may be better at OO than at procedural, or vice-versa, depending on your experience and the nature of the application and the tools chosen
so there really is no direct basis for comparison
empirical studies are likewise useless, for similar reasons - different applications, different teams, etc.
paradigm shifts are difficult, and a small percentage of programmers may never make the transition
if you are free to develop your way, then the solution is simple: develop things your way, and when your co-workers notice that you are coding circles around them and your code doesn't break nearly as often etc. and they ask you how you do it, then teach them OOP (along with TDD and any other good practices you may use)
if not, well, it might be time to polish the resume... ;-)
Good idea. A head-to-head comparison. Write application X in a procedural style, and in an OO style and measure something. Cost to develop. Return on Investment.
What does it mean to write the same application in two styles? It would be a different application, wouldn't it? The procedural people would balk that the OO folks were cheating when they used inheritance or messaging or encapsulation.
There can't be such a comparison. There's no basis for comparing two "versions" of an application. It's like asking if apples or oranges are more cost-effective at being fruit.
Having said that, you have to focus on things other folks can actually see.
Time to build something that works.
Rate of bugs and problems.
If your approach is better, you'll be successful, and people will want to know why.
When you explain that OO leads to your success... well... you've won the argument.
The key is time. How long does it take the company to change the design to add new features or fix existing ones. Any study you make should focus on that area.
My company had an event-driven, procedure-oriented design for CAM software in the mid-90s, created using VB3. It was taking a long time to adapt the software to new machines, and a long time to test the effects of bug fixes and new features.
When VB6 came along, I was able to graph out the current design and a new design that fixed the testing and adaptation problems. The non-technical boss grasped what I was trying to do right away.
The key is to explain WHY OOP will benefit the project. Use things like Refactoring by Fowler and Design Patterns to show how a new design will lower the time to do things. Also include how you get from Point A to Point B. Refactoring will help with showing how you can have working intermediate stages that can be shipped.
I don't think you'll find a study like that. At the very least you should define what you mean by "cost". Because OOP design takes somewhat longer, in the short term development may be faster with procedural programming. In the very short term, maybe spaghetti coding is even faster.
But when the project begins to grow, things are the opposite, because OOP design is better suited to managing code complexity.
So in a small project procedural design MAY be cheaper, because it's faster and you don't hit the drawbacks.
But in a big project you'll get stuck very quickly using only a simple paradigm like procedural programming.
I doubt you will find a definitive study. As several people have mentioned this is not a reproducible experiment. You will find anecdotal evidence, a lot of it. Some people may find some statistical studies, but I would examine them carefully. I am not aware of any really good ones.
I will also make another point: there is no such thing as purely object-oriented or purely procedural in the real world. Many if not most object methods are written with procedural code. At the same time, many procedural programs use OO methodologies such as encapsulation (also called abstraction by some).
Don't get me wrong, OO and procedural programs look and are different, but it is a matter of dark gray vs light gray instead of black and white.
This article says nothing about OOP vs Procedural. But I'd think that you could use similar metrics from your company for a discussion.
I find it interesting as my company is starting to explore the ROWE initiative. In our first session, it was apparent that we don't currently capture enough metrics on outcomes.
So you need to focus on 1) Is the maintenance of current processes impeding future development? 2) How are different methods going to affect #1?

What's the point of OOP?

As far as I can tell, in spite of the countless millions or billions spent on OOP education, languages, and tools, OOP has not improved developer productivity or software reliability, nor has it reduced development costs. Few people use OOP in any rigorous sense (few people adhere to or understand principles such as LSP); there seems to be little uniformity or consistency to the approaches that people take to modelling problem domains. All too often, the class is used simply for its syntactic sugar; it puts the functions for a record type into their own little namespace.
I've written a large amount of code for a wide variety of applications. Although there have been places where true substitutable subtyping played a valuable role in the application, these have been pretty exceptional. In general, though much lip service is given to talk of "re-use" the reality is that unless a piece of code does exactly what you want it to do, there's very little cost-effective "re-use". It's extremely hard to design classes to be extensible in the right way, and so the cost of extension is normally so great that "re-use" simply isn't worthwhile.
In many regards, this doesn't surprise me. The real world isn't "OO", and the idea implicit in OO--that we can model things with some class taxonomy--seems to me very fundamentally flawed (I can sit on a table, a tree stump, a car bonnet, someone's lap--but not one of those is-a chair). Even if we move to more abstract domains, OO modelling is often difficult, counterintuitive, and ultimately unhelpful (consider the classic examples of circles/ellipses or squares/rectangles).
So what am I missing here? Where's the value of OOP, and why has all the time and money failed to make software any better?
The real world isn't "OO", and the idea implicit in OO--that we can model things with some class taxonomy--seems to me very fundamentally flawed
While this is true and has been observed by other people (take Stepanov, inventor of the STL), the rest is nonsense. OOP may be flawed and it certainly is no silver bullet but it makes large-scale applications much simpler because it's a great way to reduce dependencies. Of course, this is only true for “good” OOP design. Sloppy design won't give any advantage. But good, decoupled design can be modelled very well using OOP and not well using other techniques.
There are much better, more universal models (Haskell's type model comes to mind) but these are also often more complicated and/or difficult to implement efficiently. OOP is a good trade-off between extremes.
OOP isn't about creating re-usable classes, it's about creating usable classes.
All too often, the class is used
simply for its syntactic sugar; it
puts the functions for a record type
into their own little namespace.
Yes, I find this to be too prevalent as well. This is not object-oriented programming; it's object-based programming and data-centric programming. In my 10 years of working with OO languages, I see people mostly doing object-based programming. OBP breaks down very quickly IMHO, since you are essentially getting the worst of both worlds: 1) procedural programming without adhering to proven structured programming methodology and 2) OOP without adhering to proven OOP methodology.
OOP done right is a beautiful thing. It makes very difficult problems easy to solve, and to the uninitiated (not trying to sound pompous there), it can almost seem like magic. That being said, OOP is just one tool in the toolbox of programming methodologies. It is not the be all end all methodology. It just happens to suit large business applications well.
Most developers who work in OOP languages are utilizing examples of OOP done right in the frameworks and types that they use day-to-day, but they just aren't aware of it. Here are some very simple examples: ADO.NET, Hibernate/NHibernate, Logging Frameworks, various language collection types, the ASP.NET stack, The JSP stack etc... These are all things that heavily rely on OOP in their codebases.
Reuse shouldn't be a goal of OOP - or any other paradigm for that matter.
Reuse is a side-effect of a good design and a proper level of abstraction. Code achieves reuse by doing something useful, but not doing so much as to make it inflexible. It does not matter whether the code is OO or not - we reuse what works and is not trivial to do ourselves. That's pragmatism.
The thought of OO as a new way to get to reuse through inheritance is fundamentally flawed. As you note the LSP violations abound. Instead, OO is properly thought of as a method of managing the complexity of a problem domain. The goal is maintainability of a system over time. The primary tool for achieving this is the separation of public interface from a private implementation. This allows us to have rules like "This should only be modified using ..." enforced by the compiler, rather than code review.
Using this, I'm sure you will agree, allows us to create and maintain hugely complex systems. There is lots of value in that, and it is not easy to do in other paradigms.
Verging on religious but I would say that you're painting an overly grim picture of the state of modern OOP. I would argue that it actually has reduced costs, made large software projects manageable, and so forth. That doesn't mean it's solved the fundamental problem of software messiness, and it doesn't mean the average developer is an OOP expert. But the modularization of function into object-components has certainly reduced the amount of spaghetti code out there in the world.
I can think of dozens of libraries off the top of my head which are beautifully reusable and which have saved time and money that can never be calculated.
But to the extent that OOP has been a waste of time, I'd say it's because of lack of programmer training, compounded by the steep learning curve of learning a language specific OOP mapping. Some people "get" OOP and others never will.
There's no empirical evidence that suggests that object orientation is a more natural way for people to think about the world. There's some work in the field of psychology of programming that shows that OO is not somehow more fitting than other approaches.
Object-oriented representations do not appear to be universally more usable or less usable.
It is not enough to simply adopt OO methods and require developers to use such methods, because that might have a negative impact on developer productivity, as well as the quality of systems developed.
Which is from "On the Usability of OO Representations" from Communications of the ACM Oct. 2000. The articles mainly compares OO against theprocess-oriented approach. There's lots of study of how people who work with the OO method "think" (Int. J. of Human-Computer Studies 2001, issue 54, or Human-Computer Interaction 1995, vol. 10 has a whole theme on OO studies), and from what I read, there's nothing to indicate some kind of naturalness to the OO approach that makes it better suited than a more traditional procedural approach.
I think the use of opaque context objects (HANDLEs in Win32, FILE*s in C, to name two well-known examples--hell, HANDLEs live on the other side of the kernel-mode barrier, and it really doesn't get much more encapsulated than that) is found in procedural code too; I'm struggling to see how this is something particular to OOP.
HANDLEs (and the rest of the WinAPI) is OOP! C doesn't support OOP very well so there's no special syntax but that doesn't mean it doesn't use the same concepts. WinAPI is in every sense of the word an object-oriented framework.
See, this is the trouble with every single discussion involving OOP or alternative techniques: nobody is clear about the definition, everyone is talking about something else and thus no consensus can be reached. Seems like a waste of time to me.
It's a programming paradigm.. Designed to make it easier for us mere mortals to break down a problem into smaller, workable pieces..
If you don't find it useful.. Don't use it, don't pay for training and be happy.
I on the other hand do find it useful, so I will :)
Relative to straight procedural programming, the first fundamental tenet of OOP is the notion of information hiding and encapsulation. This idea leads to the notion of the class that separates the interface from the implementation. These are hugely important concepts and the basis for putting a framework in place to think about program design in a different and, I think, better way. You can't really argue against those properties - there is no trade-off made and it is always a cleaner way to modularize things.
Other aspects of OOP, including inheritance and polymorphism, are important too, but as others have alluded to, those are commonly overused. i.e.: Sometimes people use inheritance and/or polymorphism because they can, not because they should. They are powerful concepts and very useful, but they need to be used wisely and are not automatic winning advantages of OOP.
Relative to re-use. I agree re-use is over sold for OOP. It is a possible side effect of well defined objects, typically of more primitive/generic classes and is a direct result of the encapsulation and information hiding concepts. It is potentially easier to be re-used because the interfaces of well defined classes are just simply clearer and somewhat self documenting.
The problem with OOP is that it was oversold.
As Alan Kay originally conceived it, it was a great alternative to the prior practice of having raw data and all-global routines.
Then some management-consultant types latched onto it and sold it as the messiah of software, and lemming-like, academia and industry tumbled along after it.
Now they are lemming-like tumbling after other good ideas being oversold, such as functional programming.
So what would I do differently? Plenty, and I wrote a book on this. (It's out of print - I don't get a cent, but you can still get copies.) Amazon
My constructive answer is to look at programming not as a way of modeling things in the real world, but as a way of encoding requirements.
That is very different, and is based on information theory (at a level that anyone can understand). It says that programming can be looked at as a process of defining languages, and skill in doing so is essential for good programming.
It elevates the concept of domain-specific-languages (DSLs). It agrees emphatically with DRY (don't repeat yourself). It gives a big thumbs-up to code generation. It results in software with massively less data structure than is typical for modern applications.
It seeks to re-invigorate the idea that the way forward lies in inventiveness, and that even well-accepted ideas should be questioned.
HANDLEs (and the rest of the WinAPI) is OOP!
Are they, though? They're not inheritable, they're certainly not substitutable, they lack well-defined classes... I think they fall a long way short of "OOP".
Have you ever created a window using WinAPI? Then you should know that you define a class (RegisterClass), create an instance of it (CreateWindow), call virtual methods (WndProc) and base-class methods (DefWindowProc) and so on. WinAPI even takes the nomenclature from SmallTalk OOP, calling the methods “messages” (Window Messages).
Handles may not be inheritable but then, there's final in Java. They don't lack a class, they are a placeholder for the class: That's what the word “handle” means. Looking at architectures like MFC or .NET WinForms it's immediately obvious that except for the syntax, nothing much is different from the WinAPI.
Yes, OOP did not solve all our problems, sorry about that. We are, however, working on SOA, which will solve all those problems.
OOP lends itself well to programming internal computer structures like GUI "widgets", where for example SelectList and TextBox may be subtypes of Item, which has common methods such as "move" and "resize".
The trouble is, 90% of us work in the world of business where we are working with business concepts such as Invoice, Employee, Job, Order. These do not lend themselves so well to OOP because the "objects" are more nebulous, subject to change according to business re-engineering and so on.
The worst case is where OO is enthusiastically applied to databases, including the egregious OO "enhancements" to SQL databases - which are rightly ignored except by database noobs who assume they must be the right way to do things because they are newer.
In my experience of reviewing the code and design of projects I have been through, the value of OOP is not fully realised because a lot of developers have not properly conceptualised the object-oriented model in their minds. Thus they do not program with OO design, very often continuing to write top-down procedural code, making the classes a pretty flat design. (If you can even call that "design" in the first place.)
It is pretty scary to observe how little colleagues know about what an abstract class or an interface is, let alone how to properly design an inheritance hierarchy to suit the business needs.
However, when good OO design is present, it is just sheer joy reading the code and seeing the code naturally fall into place into intuitive components/classes. I have always perceived system architecture and design like designing the various departments and staff jobs in a company - all are there to accomplish a certain piece of work in the grand scheme of things, emitting the synergy required to propel the organisation/system forward.
That, of course, is quite rare unfortunately. Like the ratio of beautifully-designed versus horrendously-designed physical objects in the world, the same can pretty much be said about software engineering and design. Having the good tools at one's disposal does not necessarily confer good practices and results.
Maybe a bonnet, lap or a tree is not a chair but they all are ISittable.
I think those real world things are objects
You do?
What methods does an invoice have? Oh, wait. It can't pay itself, it can't send itself, it can't compare itself with the items that the vendor actually delivered. It doesn't have any methods at all; it's totally inert and non-functional. It's a record type (a struct, if you prefer), not an object.
Likewise the other things you mention.
Just because something is real does not make it an object in the OO sense of the word. OO objects are a peculiar coupling of state and behaviour that can act of their own accord. That isn't something that's abundant in the real world.
I have been writing OO code for the last 9 years or so. Other than using messaging, it's hard for me to imagine another approach. The main benefit I see is totally in line with what CodingTheWheel said: modularisation. OO naturally leads me to construct my applications from modular components that have clean interfaces and clear responsibilities (i.e. loosely coupled, highly cohesive code with a clear separation of concerns).
I think where OO breaks down is when people create deeply nested class hierarchies. This can lead to complexity. However, factoring out common functionality into a base class, then reusing that in other descendant classes, is a deeply elegant thing, IMHO!
In the first place, the observations are somewhat sloppy. I don't have any figures on software productivity, and have no good reason to believe it's not going up. Further, since there are many people who abuse OO, good use of OO would not necessarily cause a productivity improvement even if OO was the greatest thing since peanut butter. After all, an incompetent brain surgeon is likely to be worse than none at all, but a competent one can be invaluable.
That being said, OO is a different way of arranging things, attaching procedural code to data rather than having procedural code operate on data. This should be at least a small win by itself, since there are cases where the OO approach is more natural. There's nothing stopping anybody from writing a procedural API in C++, after all, and so the option of providing objects instead makes the language more versatile.
Further, there's something OO does very well: it allows old code to call new code automatically, with no changes. If I have code that manages things procedurally, and I add a new sort of thing that's similar but not identical to an earlier one, I have to change the procedural code. In an OO system, I inherit the functionality, change what I like, and the new code is automatically used due to polymorphism. This increases the locality of changes, and that is a Good Thing.
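A minimal sketch of that point (the names are mine, not part of the answer): totalArea() below is written once against the base class, and a type added later is picked up without touching it.

    #include <memory>
    #include <vector>

    struct Shape {
        virtual ~Shape() = default;
        virtual double area() const = 0;
    };

    // "Old code": written once, never edited again.
    double totalArea(const std::vector<std::unique_ptr<Shape>>& shapes) {
        double sum = 0.0;
        for (const auto& s : shapes) sum += s->area();   // dispatches to whatever subclasses exist
        return sum;
    }

    // "New code": added later; totalArea() uses it automatically via polymorphism.
    struct Circle : Shape {
        double r;
        explicit Circle(double r) : r(r) {}
        double area() const override { return 3.14159265 * r * r; }
    };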
The downside is that good OO isn't free: it requires time and effort to learn it properly. Since it's a major buzzword, there's lots of people and products who do it badly, just for the sake of doing it. It's not easier to design a good class interface than a good procedural API, and there's all sorts of easy-to-make errors (like deep class hierarchies).
Think of it as a different sort of tool, not necessarily a generally better one. A hammer in addition to a screwdriver, say. Perhaps we will eventually get out of the habit of treating software engineering as knowing which wrench to use to hammer the screw in.
#Sean
However, factoring out common functionality into a base class, then reusing that in other descendant classes is a deeply elegant thing, IMHO!
But "procedural" developers have been doing that for decades anyway. The syntax and terminology might differ, but the effect is identical. There is more to OOP than "reusing common functionality in a base class", and I might even go so far as to say that that is hard to describe as OOP at all; calling the same function from different bits of code is a technique as old as the subprocedure itself.
#Konrad
OOP may be flawed and it certainly is no silver bullet but it makes large-scale applications much simpler because it's a great way to reduce dependencies
That is the dogma. I am not seeing what makes OOP significantly better in this regard than procedural programming of old. Whenever I make a procedure call I am isolating myself from the specifics of the implementation.
To me, there is a lot of value in the OOP syntax itself. Using objects that attempt to represent real things or data structures is often much more useful than trying to use a bunch of different flat (or "floating") functions to do the same thing with the same data. There is a certain natural "flow" to things with good OOP that just makes more sense to read, write, and maintain long term.
It doesn't necessarily matter that an Invoice isn't really an "object" with functions that it can perform itself - the object instance can exist just to perform functions on the data without having to know what type of data is actually there. The function "invoice.toJson()" can be called successfully without having to know what kind of data "invoice" is - the result will be JSON, no matter if it comes from a database, XML, CSV, or even another JSON object. With procedural functions, you all of a sudden have to know more about your data, and end up with functions like "xmlToJson()", "csvToJson()", "dbToJson()", etc. It eventually becomes a complete mess and a HUGE headache if you ever change the underlying data type.
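A rough sketch of what that might look like (class names invented here for illustration): the caller only sees the toJson() contract and never learns where the data came from.

    #include <string>

    struct Invoice {
        virtual ~Invoice() = default;
        virtual std::string toJson() const = 0;   // the only thing callers rely on
    };

    struct DbInvoice  : Invoice { std::string toJson() const override { /* read rows   */ return "{}"; } };
    struct XmlInvoice : Invoice { std::string toJson() const override { /* parse XML   */ return "{}"; } };
    struct CsvInvoice : Invoice { std::string toJson() const override { /* split lines */ return "{}"; } };

    // The caller never branches on the underlying data source.
    std::string render(const Invoice& invoice) { return invoice.toJson(); }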
The point of OOP is to hide the actual implementation by abstracting it away. To achieve that goal, you must create a public interface. To make your job easier while creating that public interface and keep things DRY, you must use concepts like abstract classes, inheritance, polymorphism, and design patterns.
So to me, the real overriding goal of OOP is to make future code maintenance and changes easier. But even beyond that, it can really simplify things a lot when done correctly in ways that procedural code never could. It doesn't matter if it doesn't match the "real world" - programming with code is not interacting with real world objects anyways. OOP is just a tool that makes my job easier and faster - I'll go for that any day.
#CodingTheWheel
But to the extent that OOP has been a waste of time, I'd say it's because of lack of programmer training, compounded by the steep learning curve of a language-specific OOP mapping. Some people "get" OOP and others never will.
I dunno if that's really surprising, though. I think that technically sound approaches (LSP being the obvious one) make OO hard to use, but if we don't use such approaches the code becomes brittle and inextensible anyway (because we can no longer reason about it). And I think the counterintuitive results that OOP leads us to make it unsurprising that people don't pick it up.
More significantly, since software is already fundamentally too hard for normal humans to write reliably and accurately, should we really be extolling a technique that is consistently taught poorly and appears hard to learn? If the benefits were clear-cut then it might be worth persevering in spite of the difficulty, but that doesn't seem to be the case.
#Jeff
Relative to straight procedural programming, the first fundamental tenet of OOP is the notion of information hiding and encapsulation. This idea leads to the notion of the class that separates the interface from the implementation.
Which has the more hidden implementation: C++'s iostreams, or C's FILE*s?
I think the use of opaque context objects (HANDLEs in Win32, FILE*s in C, to name two well-known examples--hell, HANDLEs live on the other side of the kernel-mode barrier, and it really doesn't get much more encapsulated than that) is found in procedural code too; I'm struggling to see how this is something particular to OOP.
I suppose that may be a part of why I'm struggling to see the benefits: the parts that are obviously good are not specific to OOP, whereas the parts that are specific to OOP are not obviously good! (this is not to say that they are necessarily bad, but rather that I have not seen the evidence that they are widely-applicable and consistently beneficial).
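For reference, the opaque-handle pattern being described looks roughly like this in plain procedural style (a sketch with made-up names, modelled on FILE*; error handling omitted):

    // logger.h -- callers see only an opaque pointer and a few free functions.
    struct Logger;                                      // definition lives in the .cpp file
    Logger* logger_open(const char* path);
    void    logger_write(Logger* log, const char* message);
    void    logger_close(Logger* log);

    // logger.cpp -- the representation is hidden here, exactly like FILE*.
    #include <cstdio>
    struct Logger { std::FILE* file; };
    Logger* logger_open(const char* path)           { return new Logger{ std::fopen(path, "a") }; }
    void logger_write(Logger* log, const char* msg) { std::fputs(msg, log->file); }
    void logger_close(Logger* log)                  { std::fclose(log->file); delete log; }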
In the only dev blog I read, by that Joel-On-Software-Founder-of-SO guy, I read a long time ago that OO does not lead to productivity increases. Automatic memory management does. Cool. Who can deny the data?
I still believe that OO is to non-OO what programming with functions is to programming everything inline. (And I should know, as I started with GWBasic.) When you refactor code to use functions, variable2654 becomes variable3 of the method you're in. Or, better yet, it's got a name that you can understand, and if the function is short, it's called value and that's sufficient for full comprehension.
When code with no functions becomes code with methods, you get to delete miles of code.
When you refactor code to be truly OO, b, c, q, and Z become this, this, this and this. And since I don't believe in using the this keyword, you get to delete miles of code. Actually, you get to do that even if you use this.
I do not think OO is natural metaphor. I don't think language is a natural metaphor either, nor do I think that Fowler's "smells" are better than saying "this code tastes bad." That said, I think that OO is not about natural metaphors and people who think the objects just pop out at you are basically missing the point. You define the object universe, and better object universes result in code that is shorter, easier to understand, works better, or all of these (and some criteria I am forgetting). I think that people who use the customers/domain's natural objects as programming objects are missing the power to redefine the universe.
For instance, when you do an airline reservation system, what you call a reservation might not correspond to a legal/business reservation at all.
Some of the basic concepts are really cool tools. I think that most people exaggerate with that whole "when you have a hammer, they're all nails" thing, but I think the other side of the coin/mirror is just as true: when you have a gadget like polymorphism/inheritance, you begin to find uses where it fits like a glove/sock/contact-lens. The tools of OO are very powerful. Single-inheritance is, I think, absolutely necessary for people not to get carried away, my own multi-inheritance software notwithstanding.
What's the point of OOP? I think it's a great way to handle an absolutely massive code base. I think it lets you organize and reorganize your code, gives you a language to do that in (beyond the programming language you're working in), and modularizes code in a pretty natural and easy-to-understand way.
OOP is destined to be misunderstood by the majority of developers. This is because it's an eye-opening process, like life: you understand OO more and more with experience, and start avoiding certain patterns and employing others as you get wiser. One of the best examples is that you stop using inheritance for classes that you do not control, and prefer the Facade pattern instead.
Regarding your mini-essay/question
I did want to mention that you're right. Reusability is a pipe-dream, for the most part. Here's a (brilliant) quote from Anders Hejlsberg about that topic, from here:
If you ask beginning programmers to write a calendar control, they often think to themselves, "Oh, I'm going to write the world's best calendar control! It's going to be polymorphic with respect to the kind of calendar. It will have displayers, and mungers, and this, that, and the other." They need to ship a calendar application in two months. They put all this infrastructure into place in the control, and then spend two days writing a crappy calendar application on top of it. They'll think, "In the next version of the application, I'm going to do so much more."
Once they start thinking about how they're actually going to implement all of these other concretizations of their abstract design, however, it turns out that their design is completely wrong. And now they've painted themself into a corner, and they have to throw the whole thing out. I have seen that over and over. I'm a strong believer in being minimalistic. Unless you actually are going to solve the general problem, don't try and put in place a framework for solving a specific one, because you don't know what that framework should look like.
Have you ever created a window using WinAPI?
More times than I care to remember.
Then you should know that you define a class (RegisterClass), create an instance of it (CreateWindow), call virtual methods (WndProc) and base-class methods (DefWindowProc) and so on. WinAPI even takes its nomenclature from Smalltalk OOP, calling the methods "messages" (Window Messages).
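For anyone who hasn't done it in a while, the shape of it is roughly this (a bare-bones sketch, error handling omitted):

    #include <windows.h>

    // The "virtual method" shared by every instance of the window class.
    LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
    {
        if (msg == WM_DESTROY) {                      // override one "method"...
            PostQuitMessage(0);
            return 0;
        }
        return DefWindowProc(hwnd, msg, wParam, lParam);  // ...defer the rest to the "base class"
    }

    int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE, LPSTR, int nCmdShow)
    {
        WNDCLASS wc = {};                             // declare the "class"...
        wc.lpfnWndProc   = WndProc;
        wc.hInstance     = hInstance;
        wc.lpszClassName = TEXT("DemoWindow");
        RegisterClass(&wc);

        HWND hwnd = CreateWindow(TEXT("DemoWindow"), TEXT("Hello"),   // ...then instantiate it
                                 WS_OVERLAPPEDWINDOW, CW_USEDEFAULT, CW_USEDEFAULT,
                                 CW_USEDEFAULT, CW_USEDEFAULT, NULL, NULL, hInstance, NULL);
        ShowWindow(hwnd, nCmdShow);

        MSG msg;
        while (GetMessage(&msg, NULL, 0, 0) > 0)      // deliver the "messages"
            DispatchMessage(&msg);
        return 0;
    }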
Then you'll also know that it does no message dispatch of its own, which is a big gaping void. It also has crappy subclassing.
Handles may not be inheritable but then, there's final in Java. They don't lack a class, they are a placeholder for the class: That's what the word “handle” means. Looking at architectures like MFC or .NET WinForms it's immediately obvious that except for the syntax, nothing much is different from the WinAPI.
They're not inheritable either in interface or implementation, minimally substitutable, and they're not substantially different from what procedural coders have been doing since forever.
Is this really it? The best bits of OOP are just... traditional procedural code? That's the big deal?
I agree completely with InSciTek Jeff's answer; I'll just add the following refinements:
Information hiding and encapsulation: Critical for any maintainable code. Can be done by being careful in any programming language, doesn't require OO features, but doing it will make your code slightly OO-like.
Inheritance: There is one important application domain for which all those OO is-a-kind-of and contains-a relationships are a perfect fit: Graphical User Interfaces. If you try to build GUIs without OO language support, you will end up building OO-like features anyway, and it's harder and more error-prone without language support. Glade (recently) and X11 Xt (historically) for example.
Using OO features (especially deeply nested abstract hierarchies), when there is no point, is pointless. But for some application domains, there really is a point.
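A tiny sketch of why the fit is natural (no particular toolkit, names invented): the is-a-kind-of relationship is the widget hierarchy, and contains-a is the child list.

    #include <memory>
    #include <vector>

    struct Widget {                                   // base of the is-a-kind-of hierarchy
        virtual ~Widget() = default;
        virtual void draw() const = 0;
        void add(std::unique_ptr<Widget> child) { children.push_back(std::move(child)); }
        void drawAll() const {                        // contains-a: a widget owns its children
            draw();
            for (const auto& c : children) c->drawAll();
        }
    private:
        std::vector<std::unique_ptr<Widget>> children;
    };

    struct Button : Widget { void draw() const override { /* render a button */ } };
    struct Panel  : Widget { void draw() const override { /* render a panel  */ } };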
I believe the most beneficial quality of OOP is data hiding/managing. However, there are a LOT of examples where OOP is misused and I think this is where the confusion comes in.
Just because you can make something into an object does not mean you should. However, if doing so will make your code more organized/easier to read then you definitely should.
A great practical example where OOP is very helpful is with a "product" class and objects that I use on our website. Since every page is a product, and every product has references to other products, it can get very confusing as to which product the data you have refers to. Is this "strURL" variable the link to the current page, or to the home page, or to the statistics page? Sure, you could make all kinds of different variables that refer to the same information, but proCurrentPage->strURL is much easier to understand (for a developer).
In addition, attaching functions to those pages is much cleaner. I can call proCurrentPage->CleanCache() followed by proDisplayItem->RenderPromo(). If I just called those functions and had them assume the current data was available, who knows what kind of evil would occur. Also, if I had to pass the correct variables into those functions, I would be back to the problem of having all kinds of variables for the different products lying around.
Instead, using objects, all my product data and functions are nice and clean and easy to understand.
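As a rough sketch of the shape of such a class (an illustrative C++ sketch with guessed-at names, not the actual site code):

    #include <string>

    class Product {                         // roughly what proCurrentPage et al. might be
    public:
        explicit Product(std::string url) : strURL(std::move(url)) {}
        const std::string& url() const { return strURL; }
        void CleanCache()  { /* invalidate cached output for this product only */ }
        void RenderPromo() { /* render the promo using this product's own data */ }
    private:
        std::string strURL;                 // no stray strURL variables to mix up
    };

    // Each call acts on the data the object was constructed with:
    //   Product currentPage("/widgets/42");
    //   currentPage.CleanCache();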
However. The big problem with OOP is when somebody believes that EVERYTHING should be OOP. This creates a lot of problems. I have 88 tables in my database. I only have about 6 classes, and maybe I should have about 10. I definitely don't need 88 classes. Most of the time directly accessing those tables is perfectly understandable in the circumstances I use it, and OOP would actually make it more difficult/tedious to get to the core functionality of what is occurring.
I believe a hybrid model of objects where useful and procedural where practical is the most effective method of coding. It's a shame we have all these religious wars where people advocate using one method at the expense of the others. They are both good, and they both have their place. Most of the time, there are uses for both methods in every larger project (In some smaller projects, a single object, or a few procedures may be all that you need).
I don't care for reuse as much as I do for readability. The latter means your code is easier to change. That alone is worth its weight in gold in the craft of building software.
And OO is a pretty damn effective way to make your programs readable. Reuse or no reuse.
"The real world isn't "OO","
Really? My world is full of objects. I'm using one now. I think that having software "objects" model the real objects might not be such a bad thing.
OO designs for conceptual things (like Windows, not real world windows, but the display panels on my computer monitor) often leave a lot to be desired. But for real world things like invoices, shipping orders, insurance claims and what-not, I think those real world things are objects. I have a stack on my desk, so they must be real.
The point of OOP is to give the programmer another means for describing and communicating a solution to a problem in code to machines and people. The most important part of that is the communication to people. OOP allows the programmer to declare what they mean in the code through rules that are enforced in the OO language.
Contrary to many arguments on this topic, OOP and OO concepts are pervasive throughout all code including code in non-OOP languages such as C. Many advanced non-OO programmers will approximate the features of objects even in non-OO languages.
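That approximation usually takes the form of a struct of data plus a table of function pointers, i.e. a hand-rolled vtable. A sketch (invented names, C-flavoured):

    #include <cstdio>

    // A hand-rolled "class": state plus a table of function pointers.
    struct Shape;
    struct ShapeOps {
        double (*area)(const Shape*);
        void   (*print)(const Shape*);
    };
    struct Shape {
        const ShapeOps* ops;   // the "vtable"
        double w, h;
    };

    double rectArea(const Shape* s)  { return s->w * s->h; }
    void   rectPrint(const Shape* s) { std::printf("rect %gx%g\n", s->w, s->h); }
    const ShapeOps rectOps = { rectArea, rectPrint };

    // "Method calls" are explicit dispatch through the table.
    double area(const Shape* s) { return s->ops->area(s); }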
Having OO built into the language merely gives the programmer another means of expression.
The biggest part of writing code is not communicating with the machine; that part is easy. The biggest part is communicating with human programmers.