What part of STL knowledge is a must for a C++ developer?

I have good knowledge of C++ but have never delved into the STL. What parts of the STL must I learn to improve productivity and reduce defects in my work?
Thanks.

I have good knowledge of C++
With all due respect: no, you don't. The standard library, or at least large parts of it (especially the subset known as the "STL"), is a fundamental part of C++. Without knowledge of it you don't know very much about C++ at all.
In fact, much of the modern design of C++ (essentially everything since the 98 version) was guided by design considerations stemming from the standard library, and many of the changes in the language since then are changes to the standard library. If you take a look at the official C++ language specification, a good part of the document is concerned with the library.

In my experience, the first reaction of people who have not worked with the STL before is to be put off by all the template code. So I would start by studying that subject a little.
If you already know template fundamentals, I would recommend taking a brief look at an STL design document. This is usually the second hurdle for people not yet familiar with it, because the STL is not designed under a typical object-oriented paradigm, but under the generic programming paradigm.
With this in mind, a good start could be this introductory article. The terms used throughout the STL components are explained there. Note that it is a relatively old text focused on the SGI implementation (which predates the C++ standard and incorrectly mentions, for example, the hash-based containers as part of it). However, the theory is still valid.
Well, if you already know most of what I've said up to this point, just jump directly to the topics the others provided.

You mention improving your productivity and reducing defects. There are general guidelines you can follow for this. I assume C++11, and I cover a bit more than the STL (smart pointers):
Use containers; they will manage memory for you. You get rid of new for C arrays and the matching delete later, for example.
For dynamic arrays, use std::vector. You also have hash tables in std::unordered_map and balanced trees in std::map. There are more containers; take a look here.
Use std::array instead of plain C arrays whenever you can: they never decay to pointers when passed as arguments to functions, which avoids some nasty bugs.
Use smart pointers and forget forever about naked new and its matching delete in your code.
This can reduce errors far more than you would expect, especially in the presence of exceptions.
Use std::make_shared when possible. It allocates the object and its control block in one go, and it is safer than std::shared_ptr<T>(new T) when used directly as a function argument, since there is no window for a leak if another argument throws.
Use algorithms instead of hand-coded loops. The code will be far more readable and usually more performant.
With this advice your code should look closer (though not necessarily equal or semantically equivalent) to C# or Java, in which manual memory management disappears. In many scenarios this is even better than garbage collection, since you get deterministic guarantees about when a resource will be freed. A short sketch of these points in code follows below.
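Here is a minimal sketch of what that advice looks like in practice (the Widget type and the values are made up purely for illustration):

    #include <algorithm>
    #include <array>
    #include <memory>
    #include <string>
    #include <unordered_map>
    #include <vector>

    struct Widget {
        std::string name;
    };

    int main() {
        // std::vector manages its own memory: no new[] / delete[] needed.
        std::vector<int> values{3, 1, 4, 1, 5};

        // std::array knows its size and never decays to a pointer.
        std::array<int, 3> fixed{{10, 20, 30}};

        // A hash table without any hand-rolled data structures.
        std::unordered_map<std::string, int> counts;
        for (int v : values) ++counts[std::to_string(v)];

        // make_shared: a single allocation, no naked new, exception safe.
        auto w = std::make_shared<Widget>();
        w->name = "demo";

        // An algorithm instead of a hand-written loop.
        std::sort(values.begin(), values.end());

        return fixed[0] > 0 ? 0 : 1;
    }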

I'd say the algorithms from <algorithm> will really clean up your code and make it more concise at the same time.
Obviously, knowing all the containers will help you optimize bottlenecks caused by a container choice that is not optimal (but be sure to profile first).
These are pretty much the basics and they will help you a lot to make more robust code.
After that you can delve into smart pointers like std::shared_ptr which are almost always better than regular pointers (in my case, at least).
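For instance (an illustrative comparison, not code from any particular project), a hand-written counting loop next to its <algorithm> equivalent:

    #include <algorithm>
    #include <cstddef>
    #include <vector>

    int count_even_manual(const std::vector<int>& v) {
        int n = 0;
        for (std::size_t i = 0; i < v.size(); ++i)
            if (v[i] % 2 == 0) ++n;
        return n;
    }

    int count_even_stl(const std::vector<int>& v) {
        // Same result, but the intent is stated in one line.
        return static_cast<int>(
            std::count_if(v.begin(), v.end(), [](int x) { return x % 2 == 0; }));
    }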

I think you can start with containers (vector, list) and algorithms (binary search, sort).
And as Jesse Emond wrote, the more you know, the better you live)))

Related

What are important languages to learn to understand different approaches and concepts? [closed]

When all you have is a pair of bolt cutters and a bottle of vodka, everything looks like the lock on the door of Wolf Blitzer's boathouse. (Replace that with a hammer and a nail if you don't read xkcd)
I currently program in Clojure, Python, Java and PHP, so I am familiar with the C and LISP syntax as well as the whitespace thing. I know imperative, functional, immutable, OOP and a couple of type systems, and other things. Now I want more!
What are languages that take a different approach and would be useful for either practical tool choosing or theoretical understanding?
I don't feel like learning another functional language (Haskell) or another imperative OOP language (Ruby), nor do I want to practice impractical fun languages like Brainfuck.
One very interesting thing I found myself is homoiconic stack-based languages like Factor.
Only when I feel I understand most concepts and have answers to all my questions will I start thinking about my own toy language to contain all my personal preferences.
Matters of practicality are highly subjective, so I will simply say that learning different language paradigms will only serve to make you a better programmer. What is more practical than that?
Functional, Haskell - I know you said that you didn't want to, but you should really really reconsider. You've gotten some functional exposure with Clojure and even Python, but you've not experienced it to its fullest without Haskell. If you're really against Haskell then good compromises are either ML or OCaml.
Declarative, Datalog - Many people would recommend Prolog in this slot, but I think Datalog is a cleaner example of a declarative language.
Array, J - I've only just discovered J, but I find it to be a stunning language. It will twist your mind into a pretzel. You will thank J for that.
Stack, Factor/Forth - Factor is very powerful and I plan to dig into it ASAP. Forth is the grand-daddy of the Stack languages, and as an added bonus it's simple to implement yourself. There is something to be said about learning through implementation.
Dataflow, Oz - I think the influence of Oz is on the upswing and will only continue to grow in the future.
Prototype-based, JavaScript / Io / Self - Self is the grand-daddy and highly influential on every prototype-based language. This is not the same as class-based OOP and shouldn't be treated as such. Many people come to a prototype language and create an ad-hoc class system, but if your goal is to expand your mind, then I think that is a mistake. Use the language to its full capacity. Read Organizing Programs without Classes for ideas.
Expert System, CLIPS - I always recommend this. If you know Prolog then you will likely have the upper-hand in getting up to speed, but it's a very different language.
Frink - Frink is a general purpose language, but it's famous for its system of unit conversions. I find this language to be very inspiring in its unrelenting drive to be the best at what it does. Plus... it's really fun!
Functional+Optional Types, Qi - You say you have experience with some type systems, but do you have experience with "skinnable" type systems? No one has... but they should. Qi is like Lisp in many ways, but its type system will blow your mind.
Actors+Fault-tolerance, Erlang - Erlang's process model gets a lot of the buzz, but its fault-tolerance and hot-code-swapping mechanisms are game-changing. You will not learn much about FP that you wouldn't learn with Clojure, but its FT features will make you wonder why more languages can't seem to get this right.
Enjoy!
What about Prolog (for unification/backtracking etc), Smalltalk (for "everything's a message"), Forth (reverse polish, threaded interpreters etc), Scheme (continuations)?
Not a language, but the Art of the Metaobject Protocol is mind-bending stuff
I second Haskell. Don't think "I know a Lisp, so I know functional programming". Ever heard of type classes? Algebraic data types? Monads? "Modern" (more or less - at least not 50 years old ;) ) functional languages, especially Haskell, have explored a plethora of very powerful and useful new concepts. Type classes add ad-hoc polymorphism, and type inference (yet another thing the languages you already know don't have) works like a charm. Algebraic data types are simply awesome, especially for modelling tree-like data structures, but work fine for enums or simple records, too. And monads... well, let's just say people use them to make exceptions, I/O, parsers, list comprehensions and much more - in purely functional ways!
Also, the whole topic is deep enough to keep one busy for years ;)
I currently program Clojure, Python, Java and PHP [...] What are languages that take a different approach and would be useful for either practical tool choosing or theoretical understanding?
C
There's a lot of C code lying around---it's definitely practical. If you learn C++ too, there's a big lot of more code around (and the leap is short once you know C and Java).
It also gives you (or forces you to have) a great understanding of some theoretical issues; for instance, each running program lives in a 4 GB byte array, in some sense. Pointers in C are really just indices into this array---they're just a different kind of integer. It's no different in Java, Python or PHP, except hidden beneath a surface layer.
Also, you can write object-oriented code in C; you just have to be a bit manual about vtables and such. Simon Tatham's Portable Puzzle Collection is a great example of fairly accessible object-oriented C code; it's also fairly well designed and well worth a read for a beginner/intermediate C programmer. This is what happens in Haskell too---type classes are in some sense "just another vtable".
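To make the "manual vtables" point concrete, here is a rough sketch in C-style C++ (all names invented for illustration; this is not code from the Puzzle Collection):

    #include <cstdio>

    // A "class" is just a table of function pointers plus per-instance data.
    struct ShapeVTable {
        double (*area)(const void* self);
    };

    struct Circle {
        const ShapeVTable* vtbl;  // first member: the hand-written vtable
        double radius;
    };

    static double circle_area(const void* self) {
        const Circle* c = static_cast<const Circle*>(self);
        return 3.14159265 * c->radius * c->radius;
    }

    static const ShapeVTable circle_vtable = { circle_area };

    // "Virtual dispatch": the caller only ever goes through the vtable pointer.
    static double shape_area(const void* shape) {
        const ShapeVTable* vtbl = *static_cast<const ShapeVTable* const*>(shape);
        return vtbl->area(shape);
    }

    int main() {
        Circle c = { &circle_vtable, 2.0 };
        std::printf("%f\n", shape_area(&c));  // dispatches to circle_area
        return 0;
    }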
Another great thing about C: engaging in Q&A with skilled C programmers will get you a lot of answers that explain C in terms of lower-level constructs, which builds your closer-to-the-iron knowledge base.
I may be missing OP's point---I think I am, judging by the other answers---but I think it might be a useful answer to other people who have a similar question and read this thread.
From Peter Norvig's site:
"Learn at least a half dozen programming languages. Include one language that supports class abstractions (like Java or C++), one that supports functional abstraction (like Lisp or ML), one that supports syntactic abstraction (like Lisp), one that supports declarative specifications (like Prolog or C++ templates), one that supports coroutines (like Icon or Scheme), and one that supports parallelism (like Sisal). "
http://norvig.com/21-days.html
I'm amazed that after 6 months and hundreds of votes, no one has mentioned SQL ...
In the types-as-theorems / advanced type systems category: Coq (I think Agda comes in this category too).
Coq is a proof assistant embedded into a functional programming language.
You can write mathematical proofs and Coq helps to build a solution.
You can write functions and prove properties about them.
It has dependent types, that alone blew my mind. A simple example:
concatenate: forall (A:Set)(n m:nat), (array A m)->(array A n)->(array A (n+m))
is the signature of a function that concatenates two arrays of size n and m of elements of A and returns an array of size (n+m). It won't compile if the function doesn't return that!
It is based on the calculus of inductive constructions, and it has a solid theory behind it.
I'm not smart enough to understand it all, but I think it is worth taking a look, especially if you lean towards type theory.
EDIT: I need to mention: you write a function in Coq and then you can PROVE it is correct for any input. That is amazing!
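For comparison, here is roughly the same idea sketched in Lean 4 rather than Coq (Vec and append are defined from scratch here, purely as an illustration): the length is part of the type, so an append that produced the wrong length simply would not type-check.

    inductive Vec (α : Type) : Nat → Type where
      | nil  : Vec α 0
      | cons : α → Vec α n → Vec α (n + 1)

    -- The sum is written m + n so the definition goes through by
    -- definitional equality on Nat addition.
    def Vec.append : Vec α n → Vec α m → Vec α (m + n)
      | .nil,       ys => ys
      | .cons x xs, ys => .cons x (xs.append ys)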
One of the languages I am interested in that has a very different point of view (including a new vocabulary for defining the language elements and a radically different syntax) is J. Haskell would be the obvious choice for me, even though it is a functional language, because its type system and other unique features open your mind and make you rethink your previous knowledge of (functional) programming.
Just like fogus suggested in his list, I too advise you to look at the language OzML/Mozart.
It supports many paradigms, mainly targeted at concurrency/multi-agent programming.
Concerning concurrency and distributed calculus, the equivalent of the lambda calculus (which is behind functional programming) is called the Pi calculus.
I have only just begun to look at some implementations of the Pi calculus, but they have already enlarged my conception of computing.
Pict
Nomadic Pict
FunLoft. (this one is pretty recent, conceived at INRIA)
Dataflow programming, aka flow-based programming, is a good step ahead on the road. Some buzzwords: parallel processing, rapid prototyping, visual programming (not as bad as it sounds at first).
Wikipedia's articles are good:
In computer science, flow-based programming (FBP) is a programming paradigm that defines applications as networks of "black box" processes, which exchange data across predefined connections by message passing, where the connections are specified externally to the processes. These black box processes can be reconnected endlessly to form different applications without having to be changed internally. FBP is thus naturally component-oriented.
http://en.wikipedia.org/wiki/Flow-based_programming
http://en.wikipedia.org/wiki/Dataflow_programming
http://en.wikipedia.org/wiki/Actor_model
Read JPM's book: http://jpaulmorrison.com/fbp/
(We've written a simple implementation in C++ for home automation purposes, and we're very happy with it. Documentation is under construction.)
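To give a flavour of the idea, here is a tiny illustrative sketch in C++ (not the code from JPM's book or our home-automation implementation): components are black boxes reading from input connections and writing to output connections, and the application is just the wiring between them.

    #include <cctype>
    #include <iostream>
    #include <queue>
    #include <string>

    // A "connection" between two black-box processes.
    using Packet = std::string;
    using Connection = std::queue<Packet>;

    // Each component knows only its own input and output connections.
    void to_upper(Connection& in, Connection& out) {
        while (!in.empty()) {
            Packet p = in.front();
            in.pop();
            for (char& ch : p)
                ch = static_cast<char>(std::toupper(static_cast<unsigned char>(ch)));
            out.push(p);
        }
    }

    void printer(Connection& in) {
        while (!in.empty()) {
            std::cout << in.front() << "\n";
            in.pop();
        }
    }

    int main() {
        Connection a, b;
        a.push("hello");
        a.push("dataflow");
        // The wiring between components, not the components themselves,
        // defines the application.
        to_upper(a, b);
        printer(b);
        return 0;
    }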
You've learned a lot of languages. Now is the time to focus on one language, and master it.
Perhaps you might want to try LabVIEW for its visual programming, although it's aimed at engineering purposes.
Nevertheless, you seem pretty interested in all that's out there, hence the suggestion.
Also, you could try the Android App Inventor for visually building stuff.
Bruce A. Tate, taking a page from The Pragmatic Programmer wrote a book on exactly that:
Seven Languages in Seven Weeks: A Pragmatic Guide to Learning Programming Languages
In the book, he covers Clojure, Haskell, Io, Prolog, Scala, Erlang, and Ruby.
Mercury: http://www.mercury.csse.unimelb.edu.au/
It's a typed Prolog, with uniqueness types and modes (i.e. specifying that the predicate append(X,Y,Z) meaning X appended to Y is Z yields one Z given an X and Y, but can yield multiple X/Ys for a given Z). Also, no cut or other extra-logical predicates.
If you will, it's to Prolog as Haskell is to Lisp.
Programming does not cover the task of programmers.
New things are always interesting, but there is some very cool old stuff too.
The first database system for me was dBaseIII; I spent about a month writing small examples (dBase/FoxPro/Clipper is a table-based db with indexes). Then, at my first workplace, I met MUMPS, and I got a headache. I was young and fresh-brained, but it took two weeks to understand the MUMPS database model. There was a moment, like in comics: after two weeks, a switch was flipped and the bulb lit up in my mind. MUMPS is natural, low level, and very, very fast. (It's an unbalanced, unformalized btree without types.) Today's trends show the way back to it: NoSQL, key-value dbs, multidimensional dbs - only a few steps are left, and we reach MUMPS.
Here's a presentation about MUMPS's advantages: http://www.slideshare.net/george.james/mumps-the-internet-scale-database-presentation
A short doc on hierarchical db: http://www.cs.pitt.edu/~chang/156/14hier.html
An introduction to MUMPS globals (in MUMPS, local variables, short: locals, are the memory variables, and global variables, short: globals, are the "db variables"; setting a global variable goes to the disk immediately):
http://gradvs1.mgateway.com/download/extreme1.pdf (PDF)
Say you want to write a love poem...
Instead of using a hammer just because there's one already in your hand, learn the proper tools for the task: learn to speak French.
Once you've reached near-native speaking level, you're ready to start your poem.
While learning new languages on an academic level is an interesting hobby, IMHO you can't really learn to use one until you try to apply it to a real-world problem. So, rather than looking for a new language to learn, I'd first look for new things to build, and only then look for the right language to use for that one specific project. First pick the problem, then the tool, not the other way around.
For anyone who hasn't been around since the mid 80's, I'd suggest learning 8-bit BASIC. It's very low-level, very primitive and it's an interesting exercise to program around its holes.
On the same line, I'd pick an HP-41C series calculator (or emulator, although nothing beats real hardware). It's hard to wrap your brain around it, but well worth it. A TI-57 will do, but will be a completely different experience. If you manage to solve second degree equations on a TI-55, you'll be considered a master (it had no conditionals and no branches except a RST, that jumped the program back to step 0).
And last, I'd pick FORTH (it was mentioned before). It has a nice "build your own language" Lisp-ish thing, but is much more bare metal. It will teach you why Rails is interesting and when DSLs make sense, and you'll get a glimpse of what your non-RPN calculator is thinking while you type.
PostScript. It is a rather interesting language as it's stack based, and it's quite practical once you want to put things on paper and you want either to get it done or to troubleshoot why it isn't getting done.
Erlang. The intrinsic parallelism gives it a rather unusual feel and you can again learn useful things from that. I'm not so sure about practicality, but it can be useful for some fast prototyping tasks and highly redundant systems.
Try programming GPUs - either CUDA or OpenCL. It's just C/C++ extensions, but the mental model of the architecture is again completely different from the classic approach, and it definitely gets practical once you need to get some real number crunching done.
Erlang, Forth and some embedded work with assembly language. Really; buy an Arduino kit or something similar, and create a polyphonic beep in assembly. You'll really learn something.
There's also anic:
https://code.google.com/p/anic/
From its site:
Faster than C, Safer than Java, Simpler than *sh
anic is the reference implementation compiler for the experimental, high-performance, implicitly parallel, deadlock-free general-purpose dataflow programming language ANI.
It doesn't seem to be under active development anymore, but it seems to have some interesting concepts (and that, after all, is what you seem to be after).
While not meeting your requirement of "different" - I'd wager that Fantom is a language that a professional programmer should look at. By their own admission, the authors of Fantom call it a boring language. It merely shores up the most common use cases of Java and C#, with some borrowed closure syntax from Ruby and similar newer languages.
And yet it manages to have its own bootstrapped compiler, provide a platform that has a drop in install with no external dependencies, gets packages right - and works on Java, C# and now the Web (via js).
It may not widen your horizons in terms of new ways of programming, but it will certainly show you better ways of programming.
One thing that I see missing from the other answers: languages based on term-rewriting.
You could take a look at Pure - http://code.google.com/p/pure-lang/ .
Mathematica is also rewriting based, although it's not so easy to figure out what's going on, as it's rather closed.
APL, Forth and Assembly.
Have some fun. Pick up a Lego Mindstorm robot kit and CMU's RobotC and write some robotics code. Things happen when you write code that has to "get dirty" and interact with the real world that you cannot possibly learn in any other way. Yes, same language, but a very different perspective.

What are the disadvantages of code reuse?

A few years ago, we needed a C++ IPC library for making function calls over TCP. We chose one and used it in our application. After a while, it became clear it didn't provide all functionality we needed. In the next version of our software, we threw the third party IPC library out and replaced it by one we wrote ourselves. From then on, I sometimes doubt whether this was a good decision, because it has proven to be quite a lot of work and it obviously felt like reinventing the wheel. So my question is: are there disadvantages to code reuse that justify this reinvention?
I can suggest a few
The bugs get replicated - if you reuse buggy code :)
Sometimes it may add additional overhead. For example, if you just need to do a simple thing, it is not advisable to pull in a big, complex library that implements the required feature.
You might face some licensing concerns.
You may need to spend some time to learn/configure the external library. This may not be worthwhile if redeveloping the functionality yourself would take much less time.
Reusing a poorly documented library may take more time than expected/estimated.
P.S. The reasons for writing our own library were:
Evaluating external libraries is often very difficult and it takes a lot of time. Also, some problems only become visible after a thorough evaluation.
It made it possible to introduce some features that are specific for our project.
It is easier to do maintenance and to write extensions, as you know the library through and through.
It's pretty much always case by case. You have to look at the suitability and quality of what you're trying to reuse.
The number one issue is: you can only successfully reuse code if that code is GOOD code. If it was designed poorly, has bugs, or is very fragile then you'll run into the same issues you already did run into -- you have to go do it yourself anyway because it's so hard to modify the existing code.
However, if it's a third-party library that you are considering using that you don't have the source code for, it's a little different. You can try and get the source if it's that kind of library. Some commercial library vendors are open to suggestions and feature requests.
The Golden Wisdom :: It Has To Be Usable Before It Can Be Reusable.
The biggest disadvantage (you mention it yourself) of reusing third-party libraries is that you are strongly coupled to and dependent on how that library works and how it's supposed to be used, unless you manage to create an intermediate interface layer that can take care of it.
But it's hard to create a generic interface, since replacing an existing library with another one more or less requires that the new functionality works in similar ways. However, you can always rewrite the code using it, but that might be very hard and take a long time.
Another aspect is that if you reinvent the wheel, you have complete control over what's happening and you can make modifications as you see fit. This can be completely impossible if you are depending on a third-party library being alive and constantly providing you with updates and bug fixes. On the other hand, reusing code this way enables you to focus on other things in your software, which sometimes might be the right thing to do.
There's always a trade off.
If your code relies on external resources and those go away, you may be crippling portions of many applications.
Since most reused code comes from the internet, you run into all the issues with the Bathroom Wall of Code Atwood talks about. You can run into issues with insecure or unreliable borrowed code, and the more black boxed it is, the worse.
Disadvantages of code reuse:
Debugging takes a whole lot longer since it's not your code and it's likely that it's somewhat bloated code.
Any specific requirements will also take more work, since you are constrained by the code you're re-using and have to work around its limitations.
Constant code reuse will, in the long run, result in bloated and disorganized applications with hard-to-chase bugs - programming hell.
Re-using code can (depending on the case) reduce the challenge and satisfaction factor for the programmer, and also waste an opportunity to develop new skills.
It depends on the case, the language and the code you want to re-use or re-write. In general, I believe that the higher-level the language is, the more I tend towards code reuse. Bugs in higher-level code can have a bigger impact, and such code is easier to rewrite. High-level code must stay readable, neat and flexible. Of course that could be said of all code, but, somehow, rewriting a C library sounds like less of a good idea than rewriting (or rather re-factoring) PHP model code.
So anyway, these are some of the arguments I'd use to promote "reinventing the wheel".
Sometimes it's just faster, more fun, and better in the long run to rewrite from scratch than having to work around bugs and limitation of a current codebase.
I'm wondering what you are using to maintain this library you reinvented?
Creating reusable code costs more time and money up front.
When the master branch has an update, you need to sync it and deploy again.
The bugs get replicated - if you reuse buggy code.
Reusing poorly documented code may take more time than expected/estimated.

How is PDL used in real-world programming?

I've been reading Code Complete; I'm not far in yet, but one of the things it talks about is PDL - a higher-level design language in which you write each routine before coding it in the language of choice.
I wondered if anyone actually did this in real life? Another thing it says is to leave each line of PDL in the code as comments. Surely that is overly verbose commenting?
I've never used PDL in real life, apart from perhaps something similar called ISWIM for a university class but I've never used it when writing my own code.
Surely if you write every routine/method/whatever in pseudo code first you will end up wasting a lot of time?
Surely if you write every routine/method/whatever in pseudo code first you will end up wasting a lot of time?
Not at all - planning out what you're going to do beforehand can save time. It forces you to think things through and refactor at the easiest stage (i.e. before you've really done anything).
You don't have to fully write each routine - just the key steps, to give you enough of a mental map of what each part will do, and whether you've planned for everything you need.
I've never heard about PDL (Program Design Language?) specifically though, and - after looking at it - it does seem to be wordy, ugly and too much effort, and I wouldn't recommend using it - stick to concise but readable pseudo-code.
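For what it's worth, the "keep the pseudocode as comments" practice usually ends up looking something like this (a made-up routine, just to show the shape of it):

    #include <vector>

    struct Order { double value; };

    // Each pseudocode/PDL line is kept as a comment above the statements
    // that implement it.
    double average_order_value(const std::vector<Order>& orders) {
        // If there are no orders, return zero rather than dividing by zero.
        if (orders.empty()) return 0.0;

        // Sum the value of every order.
        double total = 0.0;
        for (const Order& o : orders) total += o.value;

        // The average is the sum divided by the number of orders.
        return total / static_cast<double>(orders.size());
    }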
I used it in the 1980s when I worked in defense. PDL is overkill for a solo programmer's weekend project of 1-1000 lines of code. But if you are developing a 10k-100k line of code system with a team of a dozen software engineers, it is excellent for defining preliminary software designs in a waterfall methodology. Also, it was designed for compliance with MIL-STD software development requirements.
I remember that one of the lecturers I had during the first year of my Software Engineering degree refused to help students if they hadn't at least attempted some sort of pseudo code.
A lot of people used to complain about it, but it's a skill I acquired from him that I find myself using most of the time while designing software. I always have a pad and pen next to me while coding! :)
Yes, I do. I did not realize that it is called PDL until I read the book, though. I called it pseudocode. The difference between pseudocode and PDL is not big - PDL avoids using target language constructs, which is not a big deal in practice.
I start with PDL if the routine is less than trivial.
BTW, McConnell uses the word pseudocode instead of PDL in the second edition of Code Complete.
Writing things in pseudocode is very useful, and you end up with the documentation already written ;-). It decouples your intentions from your implementation, which is often an optimized hack specific to your language or environment. Future maintainers, people refactoring your code, or people translating it to other languages will be very grateful that you kept that pseudocode in the documentation. I have never called it PDL, partly because in Perl PDL means the Perl Data Language, a very useful package for working with large datasets as vectors or matrices, like in R.

What's the point of OOP?

As far as I can tell, in spite of the countless millions or billions spent on OOP education, languages, and tools, OOP has not improved developer productivity or software reliability, nor has it reduced development costs. Few people use OOP in any rigorous sense (few people adhere to or understand principles such as LSP); there seems to be little uniformity or consistency to the approaches that people take to modelling problem domains. All too often, the class is used simply for its syntactic sugar; it puts the functions for a record type into their own little namespace.
I've written a large amount of code for a wide variety of applications. Although there have been places where true substitutable subtyping played a valuable role in the application, these have been pretty exceptional. In general, though much lip service is given to talk of "re-use" the reality is that unless a piece of code does exactly what you want it to do, there's very little cost-effective "re-use". It's extremely hard to design classes to be extensible in the right way, and so the cost of extension is normally so great that "re-use" simply isn't worthwhile.
In many regards, this doesn't surprise me. The real world isn't "OO", and the idea implicit in OO--that we can model things with some class taxonomy--seems to me very fundamentally flawed (I can sit on a table, a tree stump, a car bonnet, someone's lap--but not one of those is-a chair). Even if we move to more abstract domains, OO modelling is often difficult, counterintuitive, and ultimately unhelpful (consider the classic examples of circles/ellipses or squares/rectangles).
So what am I missing here? Where's the value of OOP, and why has all the time and money failed to make software any better?
The real world isn't "OO", and the idea implicit in OO--that we can model things with some class taxonomy--seems to me very fundamentally flawed
While this is true and has been observed by other people (take Stepanov, inventor of the STL), the rest is nonsense. OOP may be flawed and it certainly is no silver bullet but it makes large-scale applications much simpler because it's a great way to reduce dependencies. Of course, this is only true for “good” OOP design. Sloppy design won't give any advantage. But good, decoupled design can be modelled very well using OOP and not well using other techniques.
There are much better, more universal models (Haskell's type model comes to mind) but these are also often more complicated and/or difficult to implement efficiently. OOP is a good trade-off between extremes.
OOP isn't about creating re-usable classes, it's about creating Usable classes.
All too often, the class is used simply for its syntactic sugar; it puts the functions for a record type into their own little namespace.
Yes, I find this to be too prevalent as well. This is not object-oriented programming; it's object-based programming and data-centric programming. In my 10 years of working with OO languages, I see people mostly doing object-based programming. OBP breaks down very quickly, IMHO, since you are essentially getting the worst of both worlds: 1) procedural programming without adhering to proven structured programming methodology, and 2) OOP without adhering to proven OOP methodology.
OOP done right is a beautiful thing. It makes very difficult problems easy to solve, and to the uninitiated (not trying to sound pompous there), it can almost seem like magic. That being said, OOP is just one tool in the toolbox of programming methodologies. It is not the be all end all methodology. It just happens to suit large business applications well.
Most developers who work in OOP languages are utilizing examples of OOP done right in the frameworks and types that they use day-to-day, but they just aren't aware of it. Here are some very simple examples: ADO.NET, Hibernate/NHibernate, Logging Frameworks, various language collection types, the ASP.NET stack, The JSP stack etc... These are all things that heavily rely on OOP in their codebases.
Reuse shouldn't be a goal of OOP - or any other paradigm for that matter.
Reuse is a side effect of a good design and a proper level of abstraction. Code achieves reuse by doing something useful, but not doing so much as to make it inflexible. It does not matter whether the code is OO or not - we reuse what works and is not trivial to do ourselves. That's pragmatism.
The thought of OO as a new way to get to reuse through inheritance is fundamentally flawed. As you note the LSP violations abound. Instead, OO is properly thought of as a method of managing the complexity of a problem domain. The goal is maintainability of a system over time. The primary tool for achieving this is the separation of public interface from a private implementation. This allows us to have rules like "This should only be modified using ..." enforced by the compiler, rather than code review.
Using this, I'm sure you will agree, allows us to create and maintain hugely complex systems. There is lots of value in that, and it is not easy to do in other paradigms.
Verging on religious but I would say that you're painting an overly grim picture of the state of modern OOP. I would argue that it actually has reduced costs, made large software projects manageable, and so forth. That doesn't mean it's solved the fundamental problem of software messiness, and it doesn't mean the average developer is an OOP expert. But the modularization of function into object-components has certainly reduced the amount of spaghetti code out there in the world.
I can think of dozens of libraries off the top of my head which are beautifully reusable and which have saved time and money that can never be calculated.
But to the extent that OOP has been a waste of time, I'd say it's because of lack of programmer training, compounded by the steep learning curve of learning a language specific OOP mapping. Some people "get" OOP and others never will.
There's no empirical evidence that suggests that object orientation is a more natural way for people to think about the world. There's some work in the field of psychology of programming that shows that OO is not somehow more fitting than other approaches.
Object-oriented representations do not appear to be universally more usable or less usable.
It is not enough to simply adopt OO methods and require developers to use such methods, because that might have a negative impact on developer productivity, as well as the quality of systems developed.
Which is from "On the Usability of OO Representations" from Communications of the ACM Oct. 2000. The articles mainly compares OO against theprocess-oriented approach. There's lots of study of how people who work with the OO method "think" (Int. J. of Human-Computer Studies 2001, issue 54, or Human-Computer Interaction 1995, vol. 10 has a whole theme on OO studies), and from what I read, there's nothing to indicate some kind of naturalness to the OO approach that makes it better suited than a more traditional procedural approach.
I think the use of opaque context objects (HANDLEs in Win32, FILE*s in C, to name two well-known examples--hell, HANDLEs live on the other side of the kernel-mode barrier, and it really doesn't get much more encapsulated than that) is found in procedural code too; I'm struggling to see how this is something particular to OOP.
HANDLEs (and the rest of the WinAPI) is OOP! C doesn't support OOP very well so there's no special syntax but that doesn't mean it doesn't use the same concepts. WinAPI is in every sense of the word an object-oriented framework.
See, this is the trouble with every single discussion involving OOP or alternative techniques: nobody is clear about the definition, everyone is talking about something else and thus no consensus can be reached. Seems like a waste of time to me.
It's a programming paradigm, designed to make it easier for us mere mortals to break down a problem into smaller, workable pieces.
If you don't find it useful, don't use it, don't pay for training, and be happy.
I on the other hand do find it useful, so I will :)
Relative to straight procedural programming, the first fundamental tenet of OOP is the notion of information hiding and encapsulation. This idea leads to the notion of the class, which separates the interface from the implementation. These are hugely important concepts and the basis for putting a framework in place to think about program design in a different and (I think) better way. You can't really argue against those properties - there is no trade-off made and it is always a cleaner way to modularize things.
Other aspects of OOP, including inheritance and polymorphism, are important too, but as others have alluded to, those are commonly overused. i.e.: Sometimes people use inheritance and/or polymorphism because they can, not because they should. They are powerful concepts and very useful, but they need to be used wisely and are not automatic winning advantages of OOP.
Relative to re-use: I agree re-use is oversold for OOP. It is a possible side effect of well-defined objects, typically of more primitive/generic classes, and is a direct result of the encapsulation and information-hiding concepts. Such code is potentially easier to re-use because the interfaces of well-defined classes are simply clearer and somewhat self-documenting.
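A small illustration of that interface/implementation split (types and names invented for the example):

    #include <stdexcept>

    // The public interface: callers can only go through deposit/withdraw,
    // so the invariant "balance never goes negative" is enforced by the
    // compiler refusing direct access to the private data.
    class Account {
    public:
        void deposit(double amount) { balance_ += amount; }

        void withdraw(double amount) {
            if (amount > balance_) throw std::runtime_error("insufficient funds");
            balance_ -= amount;
        }

        double balance() const { return balance_; }

    private:
        double balance_ = 0.0;  // implementation detail, hidden from callers
    };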
The problem with OOP is that it was oversold.
As Alan Kay originally conceived it, it was a great alternative to the prior practice of having raw data and all-global routines.
Then some management-consultant types latched onto it and sold it as the messiah of software, and lemming-like, academia and industry tumbled along after it.
Now they are lemming-like tumbling after other good ideas being oversold, such as functional programming.
So what would I do differently? Plenty, and I wrote a book on this. (It's out of print - I don't get a cent, but you can still get copies on Amazon.)
My constructive answer is to look at programming not as a way of modeling things in the real world, but as a way of encoding requirements.
That is very different, and is based on information theory (at a level that anyone can understand). It says that programming can be looked at as a process of defining languages, and skill in doing so is essential for good programming.
It elevates the concept of domain-specific-languages (DSLs). It agrees emphatically with DRY (don't repeat yourself). It gives a big thumbs-up to code generation. It results in software with massively less data structure than is typical for modern applications.
It seeks to re-invigorate the idea that the way forward lies in inventiveness, and that even well-accepted ideas should be questioned.
HANDLEs (and the rest of the WinAPI) is OOP!
Are they, though? They're not inheritable, they're certainly not substitutable, they lack well-defined classes... I think they fall a long way short of "OOP".
Have you ever created a window using WinAPI? Then you should know that you define a class (RegisterClass), create an instance of it (CreateWindow), call virtual methods (WndProc) and base-class methods (DefWindowProc) and so on. WinAPI even takes the nomenclature from SmallTalk OOP, calling the methods “messages” (Window Messages).
Handles may not be inheritable but then, there's final in Java. They don't lack a class, they are a placeholder for the class: That's what the word “handle” means. Looking at architectures like MFC or .NET WinForms it's immediately obvious that except for the syntax, nothing much is different from the WinAPI.
Yes, OOP did not solve all our problems, sorry about that. We are, however, working on SOA, which will solve all those problems.
OOP lends itself well to programming internal computer structures like GUI "widgets", where for example SelectList and TextBox may be subtypes of Item, which has common methods such as "move" and "resize".
The trouble is, 90% of us work in the world of business where we are working with business concepts such as Invoice, Employee, Job, Order. These do not lend themselves so well to OOP because the "objects" are more nebulous, subject to change according to business re-engineering and so on.
The worst case is where OO is enthusiastically applied to databases, including the egregious OO "enhancements" to SQL databases - which are rightly ignored except by database noobs who assume they must be the right way to do things because they are newer.
In my experience of reviewing the code and design of projects I have been through, the value of OOP is not fully realised because a lot of developers have not properly conceptualised the object-oriented model in their minds. Thus they do not program with OO design, very often continuing to write top-down procedural code, making the classes a pretty flat design (if you can even call that "design" in the first place).
It is pretty scary to observe how little colleagues know about what an abstract class or interface is, let alone how to properly design an inheritance hierarchy to suit the business needs.
However, when good OO design is present, it is just sheer joy reading the code and seeing the code naturally fall into place into intuitive components/classes. I have always perceived system architecture and design like designing the various departments and staff jobs in a company - all are there to accomplish a certain piece of work in the grand scheme of things, emitting the synergy required to propel the organisation/system forward.
That, of course, is quite rare unfortunately. Like the ratio of beautifully-designed versus horrendously-designed physical objects in the world, the same can pretty much be said about software engineering and design. Having the good tools at one's disposal does not necessarily confer good practices and results.
Maybe a bonnet, lap or a tree is not a chair but they all are ISittable.
I think those real world things are objects
You do?
What methods does an invoice have? Oh, wait. It can't pay itself, it can't send itself, it can't compare itself with the items that the vendor actually delivered. It doesn't have any methods at all; it's totally inert and non-functional. It's a record type (a struct, if you prefer), not an object.
Likewise the other things you mention.
Just because something is real does not make it an object in the OO sense of the word. OO objects are a peculiar coupling of state and behaviour that can act of their own accord. That isn't something that's abundant in the real world.
I have been writing OO code for the last 9 years or so. Other than using messaging, it's hard for me to imagine any other approach. The main benefit I see is totally in line with what CodingTheWheel said: modularisation. OO naturally leads me to construct my applications from modular components that have clean interfaces and clear responsibilities (i.e. loosely coupled, highly cohesive code with a clear separation of concerns).
I think where OO breaks down is when people create deeply nested class hierarchies. This can lead to complexity. However, factoring out common functionality into a base class, then reusing that in other descendant classes, is a deeply elegant thing, IMHO!
In the first place, the observations are somewhat sloppy. I don't have any figures on software productivity, and have no good reason to believe it's not going up. Further, since there are many people who abuse OO, good use of OO would not necessarily cause a productivity improvement even if OO was the greatest thing since peanut butter. After all, an incompetent brain surgeon is likely to be worse than none at all, but a competent one can be invaluable.
That being said, OO is a different way of arranging things, attaching procedural code to data rather than having procedural code operate on data. This should be at least a small win by itself, since there are cases where the OO approach is more natural. There's nothing stopping anybody from writing a procedural API in C++, after all, and so the option of providing objects instead makes the language more versatile.
Further, there's something OO does very well: it allows old code to call new code automatically, with no changes. If I have code that manages things procedurally, and I add a new sort of thing that's similar but not identical to an earlier one, I have to change the procedural code. In an OO system, I inherit the functionality, change what I like, and the new code is automatically used due to polymorphism. This increases the locality of changes, and that is a Good Thing.
The downside is that good OO isn't free: it requires time and effort to learn it properly. Since it's a major buzzword, there's lots of people and products who do it badly, just for the sake of doing it. It's not easier to design a good class interface than a good procedural API, and there's all sorts of easy-to-make errors (like deep class hierarchies).
Think of it as a different sort of tool, not necessarily generally better. A hammer in addition to a screwdriver, say. Perhaps we will eventually get out of the practice of software engineering as knowing which wrench to use to hammer the screw in.
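The "old code calls new code" point can be made concrete with a short illustrative sketch: total_area below is written once, and any shape type added later is picked up automatically through dynamic dispatch.

    #include <memory>
    #include <vector>

    struct Shape {
        virtual ~Shape() = default;
        virtual double area() const = 0;
    };

    // "Old" code: written once, never changed again.
    double total_area(const std::vector<std::unique_ptr<Shape>>& shapes) {
        double sum = 0.0;
        for (const auto& s : shapes) sum += s->area();  // dynamic dispatch
        return sum;
    }

    // "New" code added later: total_area picks it up without modification.
    struct Square : Shape {
        double side = 1.0;
        double area() const override { return side * side; }
    };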
#Sean
However, factoring out common functionality into a base class, then reusing that in other descendant classes, is a deeply elegant thing, IMHO!
But "procedural" developers have been doing that for decades anyway. The syntax and terminology might differ, but the effect is identical. There is more to OOP than "reusing common functionality in a base class", and I might even go so far as to say that that is hard to describe as OOP at all; calling the same function from different bits of code is a technique as old as the subprocedure itself.
#Konrad
OOP may be flawed and it certainly is no silver bullet but it makes large-scale applications much simpler because it's a great way to reduce dependencies
That is the dogma. I am not seeing what makes OOP significantly better in this regard than procedural programming of old. Whenever I make a procedure call I am isolating myself from the specifics of the implementation.
To me, there is a lot of value in the OOP syntax itself. Using objects that attempt to represent real things or data structures is often much more useful than trying to use a bunch of different flat (or "floating") functions to do the same thing with the same data. There is a certain natural "flow" to things with good OOP that just makes more sense to read, write, and maintain long term.
It doesn't necessarily matter that an Invoice isn't really an "object" with functions that it can perform itself - the object instance can exist just to perform functions on the data without the caller having to know what type of data is actually there. The function "invoice.toJson()" can be called successfully without having to know what kind of data "invoice" is - the result will be Json, no matter if it comes from a database, XML, CSV, or even another JSON object. With procedural functions, all of a sudden you have to know more about your data, and you end up with functions like "xmlToJson()", "csvToJson()", "dbToJson()", etc. It eventually becomes a complete mess and a HUGE headache if you ever change the underlying data type.
The point of OOP is to hide the actual implementation by abstracting it away. To achieve that goal, you must create a public interface. To make your job easier while creating that public interface and keep things DRY, you must use concepts like abstract classes, inheritance, polymorphism, and design patterns.
So to me, the real overriding goal of OOP is to make future code maintenance and changes easier. But even beyond that, it can really simplify things a lot when done correctly in ways that procedural code never could. It doesn't matter if it doesn't match the "real world" - programming with code is not interacting with real world objects anyways. OOP is just a tool that makes my job easier and faster - I'll go for that any day.
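A sketch of the invoice.toJson() idea (hypothetical types, just to contrast the two styles): the caller asks the object for JSON and never needs to know whether the data came from XML, a database row, or anywhere else.

    #include <string>

    // Each source knows how to serialise itself; callers just ask for JSON.
    class Invoice {
    public:
        virtual ~Invoice() = default;
        virtual std::string toJson() const = 0;
    };

    class XmlInvoice : public Invoice {
    public:
        std::string toJson() const override { /* parse the XML, emit JSON */ return "{}"; }
    };

    class DbInvoice : public Invoice {
    public:
        std::string toJson() const override { /* read the row, emit JSON */ return "{}"; }
    };

    // The procedural alternative forces every caller to know the source:
    // xmlToJson(), csvToJson(), dbToJson(), ... and each new source means
    // touching every call site.
    std::string report(const Invoice& invoice) {
        return invoice.toJson();
    }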
#CodingTheWheel
But to the extent that OOP has been a waste of time, I'd say it's because of lack of programmer training, compounded by the steep learning curve of learning a language specific OOP mapping. Some people "get" OOP and others never will.
I dunno if that's really surprising, though. I think that the technically sound approaches (LSP being the obvious one) make OOP hard to use, but if we don't use such approaches the code becomes brittle and inextensible anyway (because we can no longer reason about it). And I think the counterintuitive results that OOP leads us to make it unsurprising that people don't pick it up.
More significantly, since software is already fundamentally too hard for normal humans to write reliably and accurately, should we really be extolling a technique that is consistently taught poorly and appears hard to learn? If the benefits were clear-cut then it might be worth persevering in spite of the difficulty, but that doesn't seem to be the case.
#Jeff
Relative to straight procedural programming, the first fundamental tenet of OOP is the notion of information hiding and encapsulation. This idea leads to the notion of the class, which separates the interface from the implementation.
Which has the more hidden implementation: C++'s iostreams, or C's FILE*s?
I think the use of opaque context objects (HANDLEs in Win32, FILE*s in C, to name two well-known examples--hell, HANDLEs live on the other side of the kernel-mode barrier, and it really doesn't get much more encapsulated than that) is found in procedural code too; I'm struggling to see how this is something particular to OOP.
I suppose that may be a part of why I'm struggling to see the benefits: the parts that are obviously good are not specific to OOP, whereas the parts that are specific to OOP are not obviously good! (this is not to say that they are necessarily bad, but rather that I have not seen the evidence that they are widely-applicable and consistently beneficial).
In the only dev blog I read, by that Joel-On-Software-Founder-of-SO guy, I read a long time ago that OO does not lead to productivity increases. Automatic memory management does. Cool. Who can deny the data?
I still believe that OO is to non-OO what programming with functions is to programming everything inline. (And I should know, as I started with GWBasic.) When you refactor code to use functions, variable2654 becomes variable3 of the method you're in. Or, better yet, it's got a name that you can understand, and if the function is short, it's called value and that's sufficient for full comprehension.
When code with no functions becomes code with methods, you get to delete miles of code.
When you refactor code to be truly OO, b, c, q, and Z become this, this, this and this. And since I don't believe in using the this keyword, you get to delete miles of code. Actually, you get to do that even if you use this.
I do not think OO is a natural metaphor. I don't think language is a natural metaphor either, nor do I think that Fowler's "smells" are better than saying "this code tastes bad." That said, I think that OO is not about natural metaphors, and people who think the objects just pop out at you are basically missing the point. You define the object universe, and better object universes result in code that is shorter, easier to understand, works better, or all of these (and some criteria I am forgetting). I think that people who use the customer's/domain's natural objects as programming objects are missing the power to redefine the universe.
For instance, when you do an airline reservation system, what you call a reservation might not correspond to a legal/business reservation at all.
Some of the basic concepts are really cool tools. I think that most people exaggerate with that whole "when you have a hammer, they're all nails" thing, but the other side of the coin/mirror is just as true: when you have a gadget like polymorphism/inheritance, you begin to find uses where it fits like a glove/sock/contact-lens. The tools of OO are very powerful. Single inheritance is, I think, absolutely necessary for people not to get carried away, my own multiple-inheritance software notwithstanding.
What's the point of OOP? I think it's a great way to handle an absolutely massive code base. I think it lets you organize and reorganize your code, gives you a language to do that in (beyond the programming language you're working in), and modularizes code in a pretty natural and easy-to-understand way.
OOP is destined to be misunderstood by the majority of developers. This is because it's an eye-opening process, like life: you understand OO more and more with experience, and start avoiding certain patterns and employing others as you get wiser. One of the best examples is that you stop using inheritance for classes that you do not control, and prefer the Facade pattern instead.
Regarding your mini-essay/question
I did want to mention that you're right. Reusability is a pipe dream, for the most part. Here's a (brilliant) quote from Anders Hejlsberg about that topic, from here:
If you ask beginning programmers to write a calendar control, they often think to themselves, "Oh, I'm going to write the world's best calendar control! It's going to be polymorphic with respect to the kind of calendar. It will have displayers, and mungers, and this, that, and the other." They need to ship a calendar application in two months. They put all this infrastructure into place in the control, and then spend two days writing a crappy calendar application on top of it. They'll think, "In the next version of the application, I'm going to do so much more."
Once they start thinking about how they're actually going to implement all of these other concretizations of their abstract design, however, it turns out that their design is completely wrong. And now they've painted themself into a corner, and they have to throw the whole thing out. I have seen that over and over. I'm a strong believer in being minimalistic. Unless you actually are going to solve the general problem, don't try and put in place a framework for solving a specific one, because you don't know what that framework should look like.
Have you ever created a window using WinAPI?
More times than I care to remember.
Then you should know that you define a class (RegisterClass), create an instance of it (CreateWindow), call virtual methods (WndProc) and base-class methods (DefWindowProc) and so on. WinAPI even takes the nomenclature from SmallTalk OOP, calling the methods “messages” (Window Messages).
Then you'll also know that it does no message dispatch of its own, which is a big gaping void. It also has crappy subclassing.
Handles may not be inheritable but then, there's final in Java. They don't lack a class, they are a placeholder for the class: That's what the word “handle” means. Looking at architectures like MFC or .NET WinForms it's immediately obvious that except for the syntax, nothing much is different from the WinAPI.
They're not inheritable either in interface or implementation, minimally substitutable, and they're not substantially different from what procedural coders have been doing since forever.
Is this really it? The best bits of OOP are just... traditional procedural code? That's the big deal?
I agree completely with InSciTek Jeff's answer, I'll just add the following refinements:
Information hiding and encapsulation: Critical for any maintainable code. Can be done by being careful in any programming language, doesn't require OO features, but doing it will make your code slightly OO-like.
Inheritance: There is one important application domain for which all those OO is-a-kind-of and contains-a relationships are a perfect fit: Graphical User Interfaces. If you try to build GUIs without OO language support, you will end up building OO-like features anyway, and it's harder and more error-prone without language support. Glade (recently) and X11 Xt (historically) for example.
Using OO features (especially deeply nested abstract hierarchies), when there is no point, is pointless. But for some application domains, there really is a point.
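To make the GUI point above concrete, here is a tiny hypothetical widget hierarchy in C++ (all class names are invented for illustration); this is roughly the structure you end up reinventing by hand when the language gives you no OO support:

    #include <iostream>
    #include <memory>
    #include <string>
    #include <utility>
    #include <vector>

    // A Button is-a Widget; a Panel is-a Widget and contains Widgets.
    class Widget {
    public:
        virtual ~Widget() = default;
        virtual void draw() const = 0;
    };

    class Button : public Widget {
    public:
        explicit Button(std::string label) : label_(std::move(label)) {}
        void draw() const override { std::cout << "[ " << label_ << " ]\n"; }
    private:
        std::string label_;
    };

    class Panel : public Widget {
    public:
        void add(std::unique_ptr<Widget> child) { children_.push_back(std::move(child)); }
        void draw() const override {
            for (const auto& child : children_) child->draw();
        }
    private:
        std::vector<std::unique_ptr<Widget>> children_;
    };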
I believe the most beneficial quality of OOP is data hiding/managing. However, there are a LOT of examples where OOP is misused and I think this is where the confusion comes in.
Just because you can make something into an object does not mean you should. However, if doing so will make your code more organized/easier to read then you definitely should.
A great practical example where OOP is very helpful is the "product" class and objects that I use on our website. Since every page is a product, and every product has references to other products, it can get very confusing as to which product the data you have refers to. Is this "strURL" variable the link to the current page, or to the home page, or to the statistics page? Sure, you could make all kinds of different variables that refer to the same information, but proCurrentPage->strURL is much easier to understand (for a developer).
In addition, attaching functions to those pages is much cleaner. I can do proCurrentPage->CleanCache(); followed by proDisplayItem->RenderPromo(); If I just called those functions and had them assume the current data was available, who knows what kind of evil would occur. Also, if I had to pass the correct variables into those functions, I would be back to the problem of having all kinds of variables for the different products lying around.
Instead, using objects, all my product data and functions are nice and clean and easy to understand.
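A rough sketch of what that might look like (the names strURL, CleanCache and RenderPromo come from the description above; everything else is assumed):

    #include <iostream>
    #include <string>
    #include <utility>

    // Hypothetical "product" class: each page is a product, and all of its
    // data and behaviour travel together instead of floating around as
    // loose variables.
    class Product {
    public:
        explicit Product(std::string url) : strURL(std::move(url)) {}

        std::string strURL;   // no ambiguity about which page this URL belongs to

        void CleanCache()  { std::cout << "clearing cache for " << strURL << "\n"; }
        void RenderPromo() { std::cout << "rendering promo for " << strURL << "\n"; }
    };

    int main() {
        Product currentPage{"/widgets/blue-widget"};
        Product displayItem{"/widgets/red-widget"};
        Product* proCurrentPage = &currentPage;
        Product* proDisplayItem = &displayItem;

        // The data each call operates on is explicit, not ambient state.
        proCurrentPage->CleanCache();
        proDisplayItem->RenderPromo();
    }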
However, the big problem with OOP is when somebody believes that EVERYTHING should be OOP. This creates a lot of problems. I have 88 tables in my database. I only have about 6 classes, and maybe I should have about 10. I definitely don't need 88 classes. Most of the time, directly accessing those tables is perfectly understandable in the circumstances I use it, and OOP would actually make it more difficult/tedious to get to the core functionality of what is occurring.
I believe a hybrid model of objects where useful and procedural where practical is the most effective method of coding. It's a shame we have all these religious wars where people advocate using one method at the expense of the others. They are both good, and they both have their place. Most of the time, there are uses for both methods in every larger project (In some smaller projects, a single object, or a few procedures may be all that you need).
I don't care for reuse as much as I do for readability. The latter means your code is easier to change. That alone is worth its weight in gold in the craft of building software.
And OO is a pretty damn effective way to make your programs readable. Reuse or no reuse.
"The real world isn't "OO","
Really? My world is full of objects. I'm using one now. I think that having software "objects" model the real objects might not be such a bad thing.
OO designs for conceptual things (like Windows, not real world windows, but the display panels on my computer monitor) often leave a lot to be desired. But for real world things like invoices, shipping orders, insurance claims and what-not, I think those real world things are objects. I have a stack on my desk, so they must be real.
The point of OOP is to give the programmer another means for describing and communicating a solution to a problem in code to machines and people. The most important part of that is the communication to people. OOP allows the programmer to declare what they mean in the code through rules that are enforced in the OO language.
Contrary to many arguments on this topic, OOP and OO concepts are pervasive throughout all code including code in non-OOP languages such as C. Many advanced non-OO programmers will approximate the features of objects even in non-OO languages.
Having OO built into the language merely gives the programmer another means of expression.
The biggest part of writing code is not communication with the machine; that part is easy. The biggest part is communication with other human programmers.

Do you use design patterns?

What's the penetration of design patterns in the real world? Do you use them in your day-to-day job - discussing how and where to apply them with your coworkers - or do they remain more of an academic concept?
Do they provide actual value in your job? Or are they just something people talk about to sound smart?
Note: For the purpose of this question ignore 'simple' design patterns like Singleton. I'm talking about designing your code so you can take advantage of Model View Controller, etc.
Any large program that is well written will use design patterns, even if they aren't named or recognized as such. That's what design patterns are, designs that repeatedly and naturally occur. If you're interfacing with an ugly API, you'll likely find yourself implementing a Facade to clean it up. If you've got messaging between components that you need to decouple, you may find yourself using Observer. If you've got several interchangeable algorithms, you might end up using Strategy.
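For instance, a minimal Strategy sketch in C++ (the sorting criteria are invented purely for illustration):

    #include <algorithm>
    #include <iostream>
    #include <string>
    #include <vector>

    // Several interchangeable algorithms behind one interface.
    struct SortStrategy {
        virtual ~SortStrategy() = default;
        virtual void sort(std::vector<std::string>& items) const = 0;
    };

    struct ByLength : SortStrategy {
        void sort(std::vector<std::string>& items) const override {
            std::sort(items.begin(), items.end(),
                      [](const std::string& a, const std::string& b) { return a.size() < b.size(); });
        }
    };

    struct Alphabetical : SortStrategy {
        void sort(std::vector<std::string>& items) const override {
            std::sort(items.begin(), items.end());
        }
    };

    // The caller decides which algorithm to plug in.
    void display(std::vector<std::string> items, const SortStrategy& strategy) {
        strategy.sort(items);
        for (const auto& s : items) std::cout << s << '\n';
    }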
It's worth knowing the design patterns because you're more likely to recognize them and then converge on a clean solution more quickly. However, even if you don't know them at all, you'll end up creating them eventually (if you are a decent programmer).
And of course, if you are using a modern language, you'll probably be forced to use them for some things, because they're baked into the standard libraries.
In my opinion, the question "Do you use design patterns?" is, on its own, a little flawed, because the answer is universally YES.
Let me explain: we programmers and designers all use design patterns... we just don't always realise it. I know this sounds cliché, but you don't go to patterns, patterns come to you. You design stuff; it might look like an existing pattern, so you name it that way so everyone understands what you are talking about, and the rationale behind your design decision is stronger, knowing it has been discussed ad nauseam before.
I personally use patterns as a communication tool. That's it. They are not design solutions, they are not best practices, they are not tools in a toolbox.
Don't get me wrong: if you are a beginner, books on patterns will show you how a problem is best solved "using" their patterns rather than another, flawed design. You will probably learn from the exercise. However, you have to realise that this doesn't mean that every situation needs a corresponding pattern to solve it. Every situation has a quirk here and there that will require you to think about alternatives and take a difficult decision with no perfect answer. That's design.
Anti-patterns, however, are in a totally different class. You actually want to actively avoid anti-patterns. That's why the name anti-pattern is so controversial.
To get back to your original question:
"Do I use design patterns?", Yes!
"Do I actively lean toward design patterns?", No.
Yes. Design patterns can be wonderful when used appropriately. As you mentioned, I am now using Model-View-Controller (MVC) for all of my web projects. It is a very common pattern in the web space which makes server-side code much cleaner and well-organized.
Beyond that, here are some other patterns that may be useful:
MVVM (Model-View-ViewModel): a similar pattern to MVC; used for WPF and Silverlight applications.
Composition: Great for when you need to use a hierarchy of objects.
Singleton: More elegant than using globals for storing items that truly need a single instance. As you mentioned, a simple pattern but it does have its uses.
It is worth noting a design pattern can also highlight a lack of language features and/or deficiencies in a language. For example, iterators are now built in as part of newer languages.
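A small C++ illustration of that last point: the Iterator pattern is effectively baked into the language and its standard library via begin()/end() and the range-based for loop:

    #include <iostream>
    #include <vector>

    int main() {
        std::vector<int> values{1, 2, 3, 5, 8};

        // Range-for is sugar over begin()/end(): the caller walks the
        // container without knowing anything about its internal layout.
        for (int v : values)
            std::cout << v << ' ';
        std::cout << '\n';
    }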
In general design patterns are quite useful but you should not use them everywhere; just where they are a good fit for your needs.
I try to, yes. They do indeed help the maintainability and readability of your code. However, there are people who abuse them, usually (from what I've seen) by forcing a system into a pattern where one doesn't naturally exist.
I try to use patterns if they are applicable. I think it's kind of sad seeing developers implement design patterns in code just for the sake of it. For the right task though, design patterns can be very useful and powerful.
There are many design patterns beyond the simple ones that are used in the "real world". A good example: Stack Overflow uses the Model-View-Controller pattern. I have used class factories multiple times in projects for my employer, and I have seen many already-written projects using them as well.
I am not saying every design pattern is being used but many are.
Yes, we do; it usually happens when we start designing something and then someone notices that it resembles an existing pattern. We then take a look at it and see how it would help us achieve our goal.
We also use patterns that are not documented but that emerge from designing a lot.
Mind you, we don't use them a lot.
Yes, Factory, Chain of Responsibility, Command, Proxy, Visitor, and Observer, among others, are in use in a codebase I work with daily. As far as MVC goes, this site seems to use it quite well, and the devs couldn't say enough good things in the latest podcast.
Yes, I use a lot of well known design patterns, but I also end up building some software that I later find out uses a 'named' design pattern. Most elegant, reusable designs could be called a 'pattern'. It's a lot like dance moves. We all know the waltz, and the 2-step, but not everyone has a name for the 'bump and scoot' although most of us do it.
MVC is very well known, so yes, we use design patterns quite a lot. Now if you're asking about the Gang of Four patterns, there are several that I use, because other maintainers will know the design and what we are working towards in the code. There are several, though, that remain fairly obscure for what we do, so if I use one I don't get the full benefit of using a pattern.
Are they important? Yes, because they give you a way of talking about software design in a quick, efficient, and generally accepted way. Can you do better with custom solutions? Well, yes (sort of).
The original GoF patterns were pulled from production code, so they catalogued what was already being used in the wild. They aren't purely or even mostly an academic thing.
I find the MVC pattern really useful for isolating your model logic, which can then be reused or worked on without too much trouble. It also helps in de-coupling your classes and makes unit testing easier. I wrote about it recently (yes, shameless plug here...)
Also, I've recently used a factory pattern from a base class to generate and return the proper DataContext class that I needed on the fly, using LINQ.
Bridges are used when trying to glue together two different technologies (like Cocoa and Ruby on the Mac, for example).
I find, however, that whenever I implement a pattern, it's because I knew about it before hand. Some extra thought generally goes into it as I find I must modify the original pattern slightly to accommodate my needs.
You just need to be careful not to become an architecture astronaut!
Yes, design patterns are largely used in the real world - and daily by many of the people I work with.
In my opinion the biggest value provided by design patterns is that they provide a universal, high level language for you to convey software design to other programmers.
For instance, instead of describing your new class as a "utility that creates one of several other classes based on some combination of input criteria", you can simply say it's an "abstract factory", and everyone instantly understands what you're talking about.
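As a rough illustration (every name here is made up), an abstract factory in C++ could look like this:

    #include <iostream>
    #include <memory>

    // One interface that creates one of several related classes,
    // depending on which concrete factory the caller chose.
    struct Connection {
        virtual ~Connection() = default;
        virtual void open() = 0;
    };
    struct PostgresConnection : Connection { void open() override { std::cout << "postgres\n"; } };
    struct SqliteConnection   : Connection { void open() override { std::cout << "sqlite\n"; } };

    struct ConnectionFactory {
        virtual ~ConnectionFactory() = default;
        virtual std::unique_ptr<Connection> create() const = 0;
    };
    struct PostgresFactory : ConnectionFactory {
        std::unique_ptr<Connection> create() const override { return std::make_unique<PostgresConnection>(); }
    };
    struct SqliteFactory : ConnectionFactory {
        std::unique_ptr<Connection> create() const override { return std::make_unique<SqliteConnection>(); }
    };

    // Client code only ever sees the abstract interfaces.
    void runMigrations(const ConnectionFactory& factory) {
        factory.create()->open();
    }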
Yes, design patterns (or, more abstractly, patterns) are part of my life; wherever I look, I begin to see them. Therefore, I am surrounded by them. But, as you know, a little knowledge is a dangerous thing. Therefore, I strongly recommend you read the GoF book.
One of the main problems with design patterns is that most developers just do not get the idea, or do not believe in them. And most of the time they argue about the variables, loops, or switches. But I strongly believe that if you do not speak the pattern language, your software will not go far and you will find yourself in a maintenance nightmare.
As you know, anti-patterns are also a dangerous thing, and they happen when you have little expertise in design patterns. And refactoring anti-patterns is much harder. As a recommended book on this problem, please read "AntiPatterns: Refactoring Software, Architectures, and Projects in Crisis".
Yes.
We are even using them in my current job: Mainframe coding with COBOL and PL/I.
So far I have seen Adapter, Visitor, Facade, Module, Observer, and something very close to Composite and Iterator. Due to the nature of the languages it's mostly structural patterns that are used. Also, I'm not always sure that the people who use them do so consciously :D
I absolutely use design patterns. At this point I take MVC for granted as a design pattern. My primary reason for using them is that I am humble enough to know that I am likely not the first person to encounter a particular problem. I rarely start a piece of code knowing which pattern I am going to use; I constantly watch the code to see if it naturally develops into an existing pattern.
I am also very fond of Martin Fowler's Patterns of Enterprise Application Architecture. When a problem or task presents itself, I flip to the related section (it's mostly a reference book) and read a few overviews of the patterns. Once I have a better idea of the general problem and the existing solutions, I begin to see the long-term path my code will likely take via the experience of others. I end up making much better decisions.
Design patterns definitely play a big role in all of my "for the future" ideas.