New design patterns/design strategies [closed] - language-agnostic

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 4 years ago.
I've studied and implemented design patterns for a few years now, and I'm wondering: what are some of the newer design patterns (since the GoF)? Also, what should someone like me study next in the way of software design?
Note: I've been using TDD and UML for some time now. I'm curious about newer paradigm shifts and/or newer design patterns.

There is roughly an infinite number of design patterns. Design patterns are just that: recurring tricks programmers use to get things done. The most useful thing about the GoF patterns is how famous they are. In that sense, they have become a language -- exactly what the GoF hoped to achieve.
Many other patterns you'll find on the web and in the literature are "just" useful tricks, not so much a language you can use when you speak to fellow programmers. That said, there are a number of patterns that arose in the past ten years or so, particularly in the realm of web development. See the patterns listed in Martin Fowler's patterns book.

I'm surprised that no one has mentioned Martin Fowler's book Patterns of Enterprise Application Architecture. This is an outstanding book with dozens of patterns, many of which are used in modern ORM design (Repository, Active Record), along with a number of UI-layer patterns. Highly recommended.
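For instance, a Repository in its simplest form is just a collection-like facade in front of the data access code. A minimal sketch (names are illustrative and not tied to any particular ORM):

```java
import java.util.*;

// Hypothetical domain object; the names here are purely illustrative.
class Customer {
    private final long id;
    private final String name;
    Customer(long id, String name) { this.id = id; this.name = name; }
    long getId() { return id; }
    String getName() { return name; }
}

// Repository pattern: callers work with something that looks like an
// in-memory collection, while the implementation hides the actual data access.
interface CustomerRepository {
    Optional<Customer> findById(long id);
    void add(Customer customer);
}

// Trivial in-memory implementation; a real one would wrap JDBC or an ORM session.
class InMemoryCustomerRepository implements CustomerRepository {
    private final Map<Long, Customer> store = new HashMap<>();

    public Optional<Customer> findById(long id) {
        return Optional.ofNullable(store.get(id));
    }

    public void add(Customer customer) {
        store.put(customer.getId(), customer);
    }
}
```

The point of the pattern is that domain code depends only on CustomerRepository, so the persistence technology can change without touching it.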

I am an avid follower and supporter of the PCMEF (now PCBMER) framework.
Here's a simpler overview of it.
Its premise is that enterprise systems are huge and complex, and that by combining a number of other design patterns into the PCBMER layers (Presentation, Controller, Bean, Mediator, Entity and Resource), even the most complex system remains easy to understand and manage.
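I won't reproduce the framework here, but its core rule of strict, downward-only dependencies between layers can be sketched roughly like this (a simplified illustration with made-up names, not the actual PCBMER code):

```java
// Illustration only: each layer depends on the layer(s) below it, never above,
// so low-level changes don't ripple up into the presentation code.
interface Resource { String fetchRecord(long id); }        // access to external data sources
interface Entity { String describe(long id); }             // domain/business objects
interface Controller { String handleRequest(long id); }    // application logic

class Presentation {                                        // UI layer, top of the stack
    private final Controller controller;                    // depends downward only
    Presentation(Controller controller) { this.controller = controller; }
    void show(long id) { System.out.println(controller.handleRequest(id)); }
}
```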

One of the newer ones that I found particularly useful is Domain Driven Design. Not so much a pattern in its own right, but more of a mindset: concentrate on the domain objects, i.e. the things that you model, and build the rest of the application around them.
I found that it gave meaning to principles that we all knew before but were too lazy to deal with, like the Single Responsibility Principle and Separation of Concerns. I take those two especially more seriously now.
Another axis of improvement for me was TDD and Dependency Injection. I have discovered that with lots of interfaces and classes implementing them, I was able to let go of the insistence on defining everything exactly once. That is not to say that it is much in conflict with DRY (Don't Repeat Yourself). It's OK to have two classes with the same properties if their purposes are different. Encapsulation and SRP are much more important than only defining a property once.
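As a rough illustration of that last point (hypothetical names, plain constructor injection, no particular DI container assumed):

```java
// A small interface keeps the collaborator focused (SRP) and easy to swap.
interface PaymentGateway {
    boolean charge(String accountId, double amount);
}

// Production implementation would call a real payment provider.
class HttpPaymentGateway implements PaymentGateway {
    public boolean charge(String accountId, double amount) {
        // ... real HTTP call would go here ...
        return true;
    }
}

// The dependency is injected, so a test can pass in a fake PaymentGateway
// instead of hitting the network -- which is where TDD and DI reinforce each other.
class CheckoutService {
    private final PaymentGateway gateway;
    CheckoutService(PaymentGateway gateway) { this.gateway = gateway; }

    boolean checkout(String accountId, double total) {
        return gateway.charge(accountId, total);
    }
}
```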

Umm...none of the things people have mentioned are design patterns.
The GoF book was actually written with C++ and Smalltalk in mind. It explored that space pretty well. However, once you go into other languages some patterns are no longer necessary (the classic Observer is rarely hand-rolled in a language like C# that supports events) and some new ones spring forth. Grab yourself the Pro JavaScript Design Patterns or Design Patterns in Ruby books and see what happens to the standby patterns in these very different paradigms.
My favorites lately have come from leaning on the functional drift of modern languages. I'm a big fan of nested closures and of the functional ways of tackling some of the same problems that GoF does (again, see the Ruby book for great examples). I also am currently in love with the idea of internal domain-specific languages which open up into a whole series of design patterns of their own (including nested closures). Also event-aggregation seems to be poised to hit it big in the .Net world in the near future.
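For example, in a language with first-class functions the classic Strategy pattern often collapses into just passing a function around. A small sketch (illustrative names only):

```java
import java.util.*;
import java.util.function.IntUnaryOperator;

public class StrategyWithLambdas {
    // Instead of a Strategy interface plus one class per algorithm,
    // the "strategy" is simply a function value passed in by the caller.
    static List<Integer> applyPricing(List<Integer> prices, IntUnaryOperator rule) {
        List<Integer> result = new ArrayList<>();
        for (int p : prices) result.add(rule.applyAsInt(p));
        return result;
    }

    public static void main(String[] args) {
        List<Integer> prices = Arrays.asList(100, 250, 999);
        // Two "strategies" defined inline as closures.
        System.out.println(applyPricing(prices, p -> p));            // no discount
        System.out.println(applyPricing(prices, p -> p * 90 / 100)); // 10% off
    }
}
```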
A couple other big ones that have hit the scene but aren't discussed as much in GoF - probably because they are more high-level than what those guys were going for - are Inversion of Control containers, message buses, Aspect-Oriented Programming, Model-View-Controller, Model-View-Presenter, Model-View-ViewModel, and their ilk.
By the way, these are not design patterns, but if you're looking to progress beyond TDD start looking into Behavior-Driven-Development and Context/Specification.

A huge change from a maintenance perspective is the use of a DVCS (distributed version control system). If you don't know what one is or haven't used one, I highly suggest reading up on the two hard hitters:
Mercurial (hg): https://www.mercurial-scm.org/
git : http://git-scm.com/
They've done quite a bit to change the workflow of the common programming environment. Not really a pattern/design, I suppose, but I don't think TDD or UML are technical patterns/designs either, at some level. Maybe more like common practices surrounding programming.

Related

Is OOP abused in universities? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 4 years ago.
I started college two years ago, and since then I keep hearing "design your classes first". I really ask myself sometimes: should my solution even be a bunch of objects in the first place? Some say that you don't see the benefits because your codebase is very small - university projects. The project-size excuse just doesn't go down well with me. If the solution works well for the project, I believe it should also be the right one for the macro-version of that project.
I am not saying OOP is bad; I just feel it is abused in classrooms where students like me are told day and night that OOP is the right way.
IMHO, the proper answer shouldn't come from a professor; I'd prefer to hear it from real engineers in the field.
Is OOP always the right approach?
When is OOP the best approach?
When is OOP a bad approach?
This is a very general question. I am not asking for definite answers, just some real design experience from the field.
I don't care about performance. I am asking about design. I know it is engineering in real life.
==================================================================================
Thanks for all the contributions. I chose Nosredna's answer because she addressed my questions in general and convinced me that I was wrong about the following:
If the solution works well for the project, I believe it should also be the right one for the macro-version of that project.
The professors have the disadvantage that they can't put you on huge, nasty programs that go on for years, being worked on by many different programmers. They have to use rather unconvincing toy examples and try to trick you into seeing the bigger picture.
Essentially, they have to scare you into believing that when an HO gauge model train hits you, it'll tear your leg clean off. Only the most convincing profs can do it.
"If the solution works well for the project, I believe it should also be the right one for the macro-version of that project."
That's where I disagree. A small project fits into your brain. The large version of it might not. To me, the benefit of OO is hiding enough of the details so that the big picture can still be crammed into my head. If you lack OO, you can still manage, but it means finding other ways to hide the complexity.
Keep your eye on the real goal--producing reliable code. OO works well in large programs because it helps you manage complexity. It also can aid in reusability.
But OO isn't the goal. Good code is the goal. If a procedural approach works and never gets complex, you win!
OOP is a real world computer concept that the university would be derelict to leave out of the curriculum. When you apply for jobs, you will be expected to be conversant in it.
That being said, pace jalf, OOP was primarily designed as a way to manage complexity. University projects written by one or two students on homework time are not a realistic setting for large projects like this, so the examples feel (and are) toy examples.
Also, it is important to realize that not everyone really sees OOP the same way. Some see it as being about encapsulation, and make huge classes that are very complex but hide their state from any outside caller. Others want to make sure that a given object is only responsible for doing one thing, and make lots of small classes. Some seek an object model that closely mirrors the real-world abstractions the program relates to; others see the object model as a way to organize the technical architecture of the problem rather than the real-world business model. There is no one true way with OOP, but at its core it was introduced as a way of managing complexity and keeping larger programs maintainable over time.
OOP is the right approach when your data can be well structured into objects.
For instance, for an embedded device that's processing an incoming stream of bytes from a sensor, there might not be much that can be clearly objectified.
Also, in cases where ABSOLUTE control over performance is critical (when every cycle counts), an OOP approach can introduce computational costs that are nontrivial.
In the real world, most often, your problem can be VERY well described in terms of objects, although the law of leaky abstractions must not be forgotten!
Industry generally settles, eventually and for the most part, on using the right tool for the job, and you can see OOP in many, many places. Exceptions are often made for high-performance and low-level work. Of course, there are no hard and fast rules.
You can hammer in a screw if you stick at it long enough...
My 5 cents:
OOP is just one instance of a larger pattern: dealing with complexity by breaking down a big problem into smaller ones. Our feeble minds are limited to a small number of ideas they can handle at any given time. Even a moderately sized commercial application has more moving parts than most folks can fully maintain a complete mental picture of at a time. Some of the more successful design paradigms in software engineering capitalize on the notion of dealing with complexity. Whether it's breaking your architecture into layers, your program into modules, doing a functional breakdown of actions, using pre-built components, leveraging independent web services, or identifying objects and classes in your problem and solution spaces. Those are all tools for taming the beast that is complexity.
OOP has been particularly successful in several classes of problems. It works well when you can think about the problem in terms of "things" and the interactions between them. It works quite well when you're dealing with data, with user interfaces, or building general purpose libraries. The prevalence of these classes of apps helped make OOP ubiquitous. Other classes of problems call for other or additional tools. Operating systems distinguish kernel and user spaces, and isolate processes in part to avoid the complexity creep. Functional programming keeps data immutable to avoid the mesh of dependencies that occur with multithreading. Neither is your classic OOP design and yet they are crucial and successful in their own domains.
In your career, you are likely to face problems and systems that are larger than you could tackle entirely on your own. Your teachers are not only trying to equip you with the present tools of the trade; they are trying to convey that there are patterns and tools available for you to use when you are attempting to model real-world problems. It's in your best interest to accumulate a collection of tools for your toolbox and choose the right tool(s) for the job. OOP is a powerful tool to have, but by far not the only one.
No...OOP is not always the best approach.
(A true) OOP design is the best approach when your problem can best be modeled as a set of objects that can accomplish your goals by communicating/using one another.
Good question...but I'm guessing Scientific/Analytic applications are probably the best example. The majority of their problems can best be approached by functional programming rather than object oriented programming.
...that being said, let the flaming begin. I'm sure there are holes and I'd love to learn why.
Is OOP the right approach always?
Nope.
When is OOP the best approach?
When it helps you.
When is OOP a bad approach?
When it impedes you.
That's really as specific as it gets. Sometimes you don't need OOP, sometimes it's not available in the language you're using, sometimes it really doesn't make a difference.
I will say this though, when it comes to technique and best practices continue to double check what your professors tell you. Just because they're teachers doesn't mean they're experts.
It might be helpful to think of the P of OOP as Principles rather than Programming. Whether or not you represent every domain concept as an object, the main OO principles (encapsulation, abstraction, polymorphism) are all immensely useful at solving particular problems, especially as software gets more complex. It's more important to have maintainable code than to have represented everything in a "pure" object hierarchy.
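For instance, encapsulation and polymorphism pay off even in miniature, without any grand class hierarchy (a minimal, hedged sketch):

```java
// Callers never branch on the concrete type; they just ask each shape for its area.
interface Shape { double area(); }

class Circle implements Shape {
    private final double radius;                         // state stays hidden (encapsulation)
    Circle(double radius) { this.radius = radius; }
    public double area() { return Math.PI * radius * radius; }
}

class Rectangle implements Shape {
    private final double width, height;
    Rectangle(double width, double height) { this.width = width; this.height = height; }
    public double area() { return width * height; }
}
```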
My experience is that OOP is mostly useful on a small scale - defining a class with certain behavior that maintains a number of invariants. Then I essentially just use that as yet another datatype to use with generic or functional programming.
Trying to design an entire application solely in terms of OOP just leads to huge bloated class hierarchies, spaghetti code where everything is hidden behind 5 layers of indirection, and even the smallest, most trivial unit of work ends up taking three seconds to execute.
OOP is useful --- when combined with other approaches.
But ultimately, every program is about doing, not about being. And OOP is about "being". About expressing that "this is a car. The car has 4 wheels. The car is green".
It's not interesting to model a car in your application. It's interesting to model the car doing stuff. Processes are what's interesting, and in a nutshell, they are what your program should be organized around. Individual classes are there to help you express what your processes should do (if you want to talk about car things, it's easier to have a car object than having to talk about all the individual components it is made up of, but the only reason you want to talk about the car at all is because of what is happening to it: the user is driving it, or selling it, or you are modelling what happens to it if someone hits it with a hammer).
So I prefer to think in terms of functions. Those functions might operate on objects, sure, but the functions are the ones my program is about. And they don't have to "belong" to any particular class.
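A small sketch of that style (made-up names): the class does nothing but maintain its invariant, and the interesting behaviour lives in plain functions that operate on it.

```java
// A small value type whose only job is to enforce an invariant.
final class Money {
    private final long cents;
    Money(long cents) {
        if (cents < 0) throw new IllegalArgumentException("negative amount");
        this.cents = cents;
    }
    long cents() { return cents; }
}

// The "doing" lives in functions; they use Money but don't belong to it.
final class Billing {
    static Money addTax(Money net, int percent) {
        return new Money(net.cents() + net.cents() * percent / 100);
    }
    static Money total(Money... items) {
        long sum = 0;
        for (Money m : items) sum += m.cents();
        return new Money(sum);
    }
}
```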
Like most questions of this nature, the answer is "it depends."
Frederick P. Brooks said it best in "The Mythical Man-Month": "there is no single strategy, technique or trick that will exponentially raise the productivity of programmers." You wouldn't use a broadsword to make a surgical incision, and you wouldn't use a scalpel in a sword fight.
There are amazing benefits to OOP, but you need to be comfortable with the pattern to take advantage of these benefits. Knowing and understanding OOP also allows you to create a cleaner procedural implementation for your solutions because of the underlying concepts of separation of concerns.
I've seen some of the best results of using OOP when adding new functionality to a system or maintaining/improving a system. Unfortunately, it's not easy to get that kind of experience while attending a university.
I have yet to work on a project in the industry that was not a combination of both functional and OOP. It really comes down to your requirements and what are the best (maybe cheapest?) solutions for them.
OOP is not always the best approach. However it is the best approach in the majority of applications.
OOP is the best approach in any system that lends itself to objects and the interaction of objects. Most business applications are best implemented in an object-oriented way.
OOP is a bad approach for small one-off applications where the cost of developing a framework of objects would exceed the needs of the moment.
Learning OOA, OOD & OOP skills will benefit most programmers, so it is definitely useful for universities to teach them.
The relevance and history of OOP run back to the Simula languages of the 1960s, as a way to engineer software conceptually, where the developed code defines both the structure of the source and the generally permissible interactions with it. Obvious advantages are that a well-defined and well-created object is self-justifying and consistently repeatable as well as reliable; ideally it is also able to be extended and overridden.
The only time I know of that OOP is a 'bad approach' is during embedded system programming efforts where resource availability is restricted; of course, that's assuming your environment gives you access to OO features at all (as was already stated).
The title asks one question, and the post asks another. What do you want to know?
OOP is a major paradigm, and it gets major attention. If metaprogramming becomes huge, it will get more attention. Java and C# are two of the most used languages at the moment (see: SO tags by number of uses). I think it's ignorant to state either way that OOP is a great/terrible paradigm.
I think your question can best be summarized by the old adage: "When the hammer is your tool, everything looks like a nail."
OOP is usually an excellent approach, but it does come with a certain amount of overhead, at least conceptual. I don't do OO for small programs, for example. However, it's something you really do need to learn, so I can see requiring it for small programs in a University setting.
If I have to do serious planning, I'm going to use OOP. If not, I won't.
This is for the classes of problems I've been doing (which includes modeling, a few games, and a few random things). It may be different for other fields, but I don't have experience with them.
My opinion, freely offered, worth as much...
OOD/OOP is a tool. How good of a tool depends on the person using it, and how appropriate it is to use in a particular case depends on the problem. If I give you a saw, you'll know how to cut wood, but you won't necessarily be able to build a house.
The buzz that I'm picking up on is that functional programming is the wave of the future because it's extremely friendly to multi-threaded environments, so OO might be obsolete by the time you graduate. ;-)

What is "over-engineering" as applied to software? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 9 years ago.
I wonder what would be a good definition of the term "over-engineering" as applied to software development. The expression seems to be used a lot during software design discussions, often in conjunction with "excessive future-proofing", and it would be nice to nail down a more precise definition.
Contrary to most answers, I do not believe that "presently unneeded functionality" is over-engineering; or if it is, it is the least problematic form.
Like you said, the worst kind of over-engineering is usually committed in the name of future-proofing and extensibility - and achieves the exact opposite:
Empty layers of abstraction that are at best unnecessary and at worst restrict you to a narrow, inefficient use of the underlying API.
Code littered with designated "extension points" such as protected methods or components acquired via abstract factories - which all turn out to be not quite what you actually need when you do have to extend the functionality.
Making everything configurable to "avoid hard-coding", with the effect that there's more (complex, failure-prone) application logic in configuration files than in source code.
Over-genericizing: instead of implementing the (technically uninteresting) functional spec, the developer builds a (technically interesting) "business rule engine" that "executes" the specs themselves as supplied by business users. The net result is an interpreter for a proprietary (scripting or domain-specific) language that is usually horribly designed, has no tool support and is so hard to use that no business user could ever work with it.
The truth is that the design that is most easily adapted to new and changing requirements (and is thus the most future-proof and extensible) is the design that is as simple as possible.
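As a caricature of the first two examples above (all names invented for illustration), the "extensible" version buries one line of actual work under layers nobody asked for:

```java
// Over-engineered: an abstract factory and an "extension point" for a task
// that has exactly one known implementation.
interface GreetingStrategy { String greet(String name); }
interface GreetingStrategyFactory { GreetingStrategy create(); }

class DefaultGreetingStrategyFactory implements GreetingStrategyFactory {
    public GreetingStrategy create() { return name -> "Hello, " + name; }
}

// What the requirement actually asked for:
class Greeter {
    static String greet(String name) { return "Hello, " + name; }
}
```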
Contrary to popular belief, over-engineering is really a phenomenon that appears when engineers succumb to hubris and think they understand the user.
In the cases where we've considered things over-engineered, it has always been software designed to be so generic that it loses sight of the main task it was initially designed to perform, and has therefore become not only hard to use but fundamentally unintelligent.
To me, over-engineering is including anything that you don't need and that you don't know you're going to need. If you catch yourself saying that a feature might be nice if the requirements change in a certain way, then you might be over-engineering. Basically, over-engineering is violating YAGNI.
The agile answer to this question is: every piece of code that does not contribute to the requested functionality.
There is this discussion at Joel on Software that starts with,
creating extensive class hierarchies for an imagined future problem that does not yet exist, is a kind of over-engineering, and is therefore, bad.
It then gets into a discussion with examples.
If you spend so much time thinking about the possible ramifications of a problem that you end up interfering with the solving of the problem itself, you may be over-engineering.
There's a fine balance between "best engineering practices" and "real world applicability". At some point you have to decide that even though a particular solution may not be as "pure" from an engineering standpoint as it could be, it will do the job.
For example:
If you are designing a user management system for one-time use at a high school reunion, you probably don't need to add support for incredibly long names, or funky character sets. Setting a reasonable maximum length and doing some basic sanitizing should be sufficient. On the other hand, if you're creating a system that will be deployed for hundreds of similar events, you might want to spend some more time on the problem.
It's all about the appropriate level of effort for the task at hand.
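For the one-off case, a couple of plain checks really can be the whole "design" (an illustrative sketch, not a recommendation for every system):

```java
// Sufficient for a one-time event sign-up form; no configurable rule engine needed.
class NameValidator {
    static boolean isAcceptable(String name) {
        if (name == null) return false;
        String trimmed = name.trim();
        return !trimmed.isEmpty() && trimmed.length() <= 100;
    }
}
```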
I'm afraid that a precise definition is probably not possible as it's highly dependent on the context. For example, it's much easier to over-engineer a web site that displays glittering ponies than it is a nuclear power plant control system. Redundancies, excessive error checking, highly instrumented logging facilities are all over-engineering for a glittering ponies app, but not for a nuclear power plant control system. I think the best you can do is have a feeling about when you are applying too much overhead to your features for the purpose of the application.
Note that I would distinguish between gold-plating and over-engineering. In my mind, gold-plating is creating features that weren't asked for and will never be used. Over-engineering is more about how much "safety" you build into the application either by coding checks around the code or using excessive design for a simple task.
This relates to my controversial programming opinion of "The simplest approach is always the best approach".
Quoting from here: "...Implement things when you actually need them, never when you just foresee that you need them."
To me it is anything that adds more fat to the code. Meat would be any code that does the job according to the spec; fat would be any code that bloats the codebase in a way that just adds more complexity. The programmer might have been expecting a future expansion of the functionality, but it is still fat!
My rough definition would be 'Providing functionality that isn't needed to meet the requirements spec'.
I think they are the same as gold plating and being hit by the golden hammer :)
Basically, you can sit down and spend too much time trying to make a perfect design without ever writing some code to check out how it works. Any agile method will tell you not to do all your design up front, but to create chunks of design, implement them, iterate over them, redesign, go again, etc.
Over-engineering means architecting and designing the application with more components than it really needs according to the requirements list.
There is a big difference between over-engineering and creating an extensible application that can be upgraded as requirements change. If I can think of an example I'll edit the post.
Over-engineering is simply creating a product with greater functionality, quality, generality, extensibility, documentation, or any other aspect than is required.
Of course, you may have requirements outside a specific project -- for example, if you foresee doing similar applications in the future, then you might have additional requirements for extensibility, dependent on cost, that you add on to the project-specific requirements.
When your design actually makes things more complex instead of simplifying things, you’re overengineering.
More on this at:
http://www.codesimplicity.com/post/what-is-overengineering/
Disclaimer #1: I am a big-picture BA. I know no code. I read this site all the time. This is my first post.
Funny, I was just told by my boss that I over-engineered a new software product we're planning for mentoring (target market: HR people). So I came here to look up the term.
They want to get something in place to sell now, re-purposing existing tools. I can't help but sit back and think: fewer signups, lower retention, if it doesn't allow some of the flexibility we talked about and, mainly, doesn't have a highly visual UI that a monkey could use.
He said we could plan future phases to improve the product, especially the UI. We have current customers waiting on "future improvements" that we still aren't doing. They need it though, truly need it.
I am in the process of resigning so I didn't push back.
But my definition would be: making sure it does only as little as possible, as cheaply as possible, and still passes for the thing you say it is. Beyond that is over-engineering.
Disclaimer #2: This site helped me land my next job implementing a more configurable software.
I think the best answers to your question can be found in this other question.
The beauty of Agile programming is that it's hard to over-engineer if you do it right.

What ever happened to Aspect Oriented Programming? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 9 years ago.
I remember that in the late 1990s and early 2000s Aspect Oriented Programming (AOP) was supposed to be the "Next Big Thing". Nowadays I see some AOP still around, but it seems to have faded into the background.
There was maybe a lot of hype in the early 2000s, and what happened is the following: there were a lot of attempts to create aspect-oriented frameworks, and these attempts merged into two significant projects in the Java sphere: AspectJ and Spring AOP. AspectJ is complete, complex, academic, somewhat over-engineered. Spring AOP covers 80% of use cases with 20% of the complexity.
If you look at Google Trends for the terms "AspectJ, Spring AOP", then compare to the popularity of Java itself, you will see that the relative popularity of AspectJ is somewhat constant, but Spring AOP is rising. That means that people use AOP, but don't want the complexity of AspectJ. I think that the creators of AspectJ made a lot of tactical mistakes; AspectJ has always been a research project and has not been designed "for the masses".
In the .NET sphere, we saw a similar interest in AOP in the early 2000s. In 2003, when I started my AOP research, there were half a dozen AOP weavers for .NET; all followed the path of AspectJ, and all were in their infancy. None of these projects survived. Based on this analysis, I built PostSharp, which was designed to cover 80% of use cases with 20% of the complexity, and yet was much more convenient to use than Spring AOP. PostSharp is now considered the leading aspect weaver for .NET. PostSharp 2.0 builds on 5 years of feedback and on the industry experience of AspectJ, and brings "enterprise-ready" AOP (the future will judge whether this claim is deserved). Besides PostSharp, other significant players are Spring Framework for .NET and Castle Windsor, two DI-centric application frameworks that also provide aspects (aspects are treated as dependencies injected into constructed objects). Since these technologies use runtime weaving, they have severe technical limitations, so in practice they can be used only on service objects (which is what these application frameworks were designed for). Another young project in .NET is LinFu, which also supports aspects.
In short, the AOP landscape has undergone some consolidation in recent years, and is probably entering the productivity phase: customers will use it because it really saves money, not because it is cool. Even if it is very cool :).
Oh, and I forgot: most application servers have built-in support for AOP. I'm thinking of JBoss, WebSphere and, to some extent, WCF.
That tends to happen with every "next big thing." Lots of hype, followed by a slow decline in the use of the buzzword. But, even though buzzwords fade and eventually disappear, whatever good ideas were behind them tend to stick around to be absorbed into the mainstream.
[Edit] Okay, an example for those who think I'm "bashing" something, or claiming that aspect oriented programming is going to disappear. At one time the next big thing was structured programming. Object oriented programming grew out of that, and now nobody talks about doing "structured programming" any more. But, in many ways we're still using its best ideas, because OOP adopted them, improved them, and added still more new ideas.
It's around on some projects; my own experience on a recent project is that it is too easy to abuse :( !!! What started as a nice way to set up debugging, timing and, to some extent, transaction management quickly got corrupted into the weirdest and hardest-to-understand-and-debug code that I've seen in a while.
Just to expand a bit on the debug/diagnostic side: the stack traces generated by AOP code often hide, beyond recognition, the actual place where the exception took place.
AOP is actually truly brilliant; the problem with it is that no existing language has really great support for it. Sure, C# has attributes (which only work when you're CODING the thing) and Java has "injection" (which creates a mess out of the runtime), but in general there are no (mainstream) languages with truly great support for it...
So it kind of ended up being a "design pattern" (with insanely different implementations in all the different languages, though), which after all isn't all that bad, I guess ;)
I'm going to suggest that it wasn't big enough. It sounds very appealing, but does it really make coding any easier? I've been wanting to try it out and find what benefits it really holds, but I don't think I do enough coding where I need the relationships that it provides. I don't think it is as beneficial as it sounds.
Also, at this point, making multicore programming easier is a big thing, and I don't think aspect-oriented programming will assist with that.
You can also find lots of content on Adoption Risks.

When is Object Oriented not the correct solution? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 3 years ago.
I've encountered lately some opinions saying that Object Oriented design/programming should not always be used.
Do you know of some use cases that would not benefit from, and should not use, Object Oriented design?
For example: there are some problems (concerns) that will benefit from AOP.
Some problems are best expressed using other paradigms such as Functional Programming. Also, declarative paradigms allow more robust formal reasoning about the correctness of the code. See Erlang for a good example of a language with certain advantages that can't really be matched by OO languages due to the fundamental nature of the paradigm.
Examples of problem domains where other language paradigms have a better fit are database queries (SQL), expert systems (Prolog, CLIPS etc.) or Statistical computing (R).
In my experience, one of the places that does not benefit from OO design is low-end embedded systems. There is a certain amount of overhead required to do OO, and there are times you cannot afford this overhead. On a standard PC the overhead is so minimal it's not even worth considering; however, in low-end embedded systems that overhead can be significant.
Cross-cutting concerns benefit from Aspect Oriented Programming (AOP). By cross-cutting, I mean functionality that could benefit various parts of the application but that really does not belong to a particular object. Logging is usually given as an example. Security could be another. For example, why should a Person object know anything about logging, or about who should be allowed access to it?
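To make that concrete, here is what such a cross-cutting concern can look like using the AspectJ annotation style that Spring AOP also understands (the pointcut expression and package name are made up for illustration, and the aspect still has to be registered with the weaver/container):

```java
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;

// Logging/timing factored out of the domain classes: a Person object never
// sees this code, yet calls into the (hypothetical) service package get timed.
@Aspect
public class TimingAspect {

    @Around("execution(* com.example.service..*.*(..))")   // illustrative pointcut
    public Object logTiming(ProceedingJoinPoint joinPoint) throws Throwable {
        long start = System.nanoTime();
        try {
            return joinPoint.proceed();                     // run the intercepted method
        } finally {
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            System.out.println(joinPoint.getSignature() + " took " + elapsedMs + " ms");
        }
    }
}
```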
One that easily comes to mind... Database-y web applications.
In such a scenario, it makes more sense to conform to an accepted framework rather than eke out a nice OOP design. For example, if you have to do some kind of complex query with JOINs and ORDER BYs, SQL will kick object butt.
Choose the solution based on the problem... instead of hammering the problem till it fits a solution.
The fundamental principle to understand here is that there is no universal methodology, paradigm or approach that can be applied to all problem domains. These are typically designed to cater for a particular set of problems and may not be optimized for other domains.
It is just like an algorithm for a typical type of problem (e.g. Sorting). There cannot be a universal algorithm that is applicable to all possible scenarios or datasets.
Same for OOP. I would not apply it to a problem that is essentially AI related and can be better solved using declarative programming. I would certainly not apply it to develop device drivers that require maximum performance and speed.
OOP and AOP are not mutually exclusive; they are complementary.
As for OO, there are certainly cases where it's less applicable. If there weren't, all we would have would be OO languages. For purely number-crunching tasks, many people still prefer Fortran. Functional languages are useful when you're dealing with concurrency and parallelism.
Also when your app is mainly just a database with a GUI over it (like a CRM app, for instance) OO isn't very useful, even though you might use an OO language to build it.
The advantages of OO design are expandability and maintainability. Hence, it's not of much use where those features aren't needed. These would be very small apps, for a very specific short-term need. (things that you would consider doing as a batch file or in a scripting language)
I wouldn't bother with OOP if the programming language that you are using doesn't easily allow you to use OOP. We use a BDL at my workplace that is made to be procedural. I once tried to do some OOP, and well, that was just a big oops. Shouldn't have bothered.
Not good enough? I don't know if I can come up with an example of that, but I do know that some REALLY simple applications might not see any "benefits" at the beginning from using a fully object-oriented design model. If it is something truly procedural and trivial, however, it might need to be revisited in the end.
I would suggest you visit Wikipedia and read its articles about the different types of programming languages.
Saying that a type of programming "isn't good enough" doesn't make any sense.
Each type has a purpose. You can't compare them. They're not made to do the same thing.
Any time you can't think of a good reason for OO is a good time to avoid it. (Sounds facetious, but I'm serious.)
OOP could be a little too much if you're creating an incredibly simple application or procedural application, as other posters have said. Also, I don't think AOP necessarily needs to replace OOP, if anything it helps to reinforce good OOP design.
Echoing Nigel, SQL seems almost implicitly to be incompatible with any kind of abstraction (including subqueries and functions).
Well, OOP is not especially orthogonal to anything (except perhaps other ways of getting polymorphism) so...uh...whatever.
Object Oriented programming is a good solution if you make a good design.

What essential design artifacts do you produce? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 4 years ago.
In the course of your software development lifecycle, what essential design artifacts do you produce? What makes them essential to your practice?
The project I'm currently on has been in production for 8+ years. This web application has been actively enhanced and maintained over that time. While we have CMMI based policies and processes in place, with portions of our practice being well defined, the design phase has been largely overlooked. Best practices, anyone?
Having worked on a lot of waterfall projects in the past and a lot of ad-hoc and agile projects more recently, there are a number of design artifacts I like to create, although I can't stress enough that it really depends on the details of the project (methodology/team structure/timescale/tools etc.).
For a generic, server-based 'enterprise application' I'd want the bare minimum to be something along these lines:
A detailed functional design document (aka spec). Generally something along the lines of Joel Spolsky's WhatTimeIsIt example spec, although probably with some UML use-case diagrams.
A software technical design document. Not necessarily detailed for 100% system coverage, but detailed in all the key areas and containing all the design decisions. Being a bit of a UML freak, I'd like to see lots of pictures along the lines of package diagrams, component diagrams, key-feature class diagrams, and probably some sequence diagrams thrown in for good measure.
An infrastructure design document. Probably with a UML deployment diagram for the conceptual design and perhaps a network diagram for something more physical.
When I say document any of the above might be broken down into multiple documents, or perhaps stored on a wiki/some other tool.
As for their usefulness, my philosophy has always been that a development team should always be able to hand over an application to a support team without having to hand over their phone numbers. If the design artifacts don't clearly indicate what the application does, how it does it, and where it does it, then you know the support team is going to give the app the same care and attention they would a rabid dog.
I should mention I'm not advocating the practice of handing software over from a dev team to a support team once it's finished, which raises all manner of interesting issues; I'm just saying it should be possible if management so desired.
Working code...and whiteboard drawings.
:P
Designs change so much during development and afterwards that most of my carefully crafted documents rot away in source control and become almost more of a hindrance than a help, once code is in production. I see design documents as necessary to good communication and to clarify your thinking while you develop something, but after that it takes a herculean effort to keep them properly maintained.
I do take pictures of whiteboards and save the JPEGs to source control. Those are some of my best design docs!
In our model (which is fairly specific to business process applications) the design artefacts include:
a domain data model, with comments on each entity and attribute
a properties file listing all the modify and create triggers on each entity, calculated attributes, validators and other business logic
a set of screen definitions (view model)
However, do these really count as design artefacts? Our framework is such that these definitions are used to generate the actual code of the system, so maybe they go beyond design.
But the fact that they serve double duty is powerful because they are, by definition, up to date and synchronised with the code at all times.
This is not a design document, per se, but our unit tests serve the dual purpose of "describing" how the code they test is supposed to function. The nice part about this is that they never get out of date, since our unit tests must pass for our build to succeed.
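For example, a well-named JUnit test reads like a one-line statement about behaviour, with the class under test made up here for illustration:

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

// The class under test (a made-up example).
class PriceCalculator {
    double total(double amount) {
        return amount > 100.0 ? amount * 0.9 : amount;   // 10% discount over 100
    }
}

// Each test name documents one rule the calculator must obey, and the build
// fails the moment that "documentation" stops being true.
public class PriceCalculatorTest {

    @Test
    public void appliesTenPercentDiscountForOrdersOverOneHundred() {
        assertEquals(108.0, new PriceCalculator().total(120.0), 0.001);
    }

    @Test
    public void chargesFullPriceForSmallOrders() {
        assertEquals(50.0, new PriceCalculator().total(50.0), 0.001);
    }
}
```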
I don't think anything can take the place of a good old fashioned design spec for the following reasons:
It serves as a means of communicating how you will build an application to others.
It lets you get ideas out of your head so you don't worry about tracking a million things at the same time.
If you have to pause a project and return to it later you're not starting your thought process over again.
I like to see various bits of info in a design spec:
General explanation of your approach to the challenge at hand
How will you monitor your application?
What are the security concerns and how are they addressed?
Flowcharts / sequence diagrams
Open issues
Known limitations
Unit tests, while a fantastic and arguably critical item to include in your application development, don't cover all of these topics.