What ever happened to Aspect Oriented Programming? [closed] - language-agnostic

I remember that in the late 1990s and early 2000s Aspect Oriented Programming (AOP) was supposed to be the "Next Big Thing". Nowadays I see some AOP still around, but it seems to have faded into the background.

There was perhaps a lot of hype in the early 2000s, and what happened is the following: there have been many attempts to create aspect-oriented frameworks, and these attempts have consolidated into two significant projects in the Java sphere: AspectJ and Spring AOP. AspectJ is complete, complex, academic, and somewhat overengineered. Spring AOP covers 80% of the use cases with 20% of the complexity.
If you look at Google Trends for the terms "AspectJ, Spring AOP" and compare them to the popularity of Java itself, you will see that the relative popularity of AspectJ is roughly constant, while Spring AOP is rising. That means people use AOP, but don't want the complexity of AspectJ. I think the creators of AspectJ made a lot of tactical mistakes; AspectJ has always been a research project and was never designed "for the masses".
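To make that concrete, here is a minimal sketch of the kind of thing Spring AOP makes easy: timing advice applied around service methods. The package in the pointcut expression (com.example.service) is a placeholder, and the snippet assumes the usual Spring AOP/AspectJ annotation dependencies are on the classpath.
```java
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.springframework.stereotype.Component;

// A minimal timing aspect in the Spring AOP style.
// The pointcut package ("com.example.service") is a placeholder; adjust it to your own code.
@Aspect
@Component
public class TimingAspect {

    @Around("execution(* com.example.service..*(..))")
    public Object time(ProceedingJoinPoint joinPoint) throws Throwable {
        long start = System.nanoTime();
        try {
            return joinPoint.proceed();  // run the intercepted method
        } finally {
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            System.out.println(joinPoint.getSignature() + " took " + elapsedMs + " ms");
        }
    }
}
```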
In the .NET sphere, we saw a similar interest in AOP in the early 2000s. In 2003, when I started AOP research, there were half a dozen AOP weavers for .NET; all were following the path of AspectJ, and all were in their infancy. None of these projects survived. Based on this analysis, I built PostSharp, which was designed to cover 80% of the use cases with 20% of the complexity, yet be much more convenient to use than Spring AOP. PostSharp is now considered the leading aspect weaver for .NET. PostSharp 2.0 builds on 5 years of feedback and on the industry experience of AspectJ, and brings "enterprise-ready" AOP (the future will judge whether this claim is deserved). Besides PostSharp, the other significant players are Spring Framework for .NET and Castle Windsor, two DI-centric application frameworks that also provide aspects (aspects are treated as dependencies injected into constructed objects). Since these technologies use runtime weaving, they have severe technical limitations, so in practice they can only be used on service objects (which is what these application frameworks were designed for). Another emerging project in .NET is LinFu, which can also do aspects.
To be brief, the AOP landscape has undergone some consolidation in recent years, and is probably entering the productivity phase: customers will use it because it really saves money, not because it is cool. Even if it is very cool :).
Oh, and I forgot: most application servers have built-in support for AOP. Think of JBoss, WebSphere, and, to some extent, WCF.

That tends to happen with every "next big thing." Lots of hype, followed by a slow decline in the use of the buzzword. But, even though buzzwords fade and eventually disappear, whatever good ideas were behind them tend to stick around to be absorbed into the mainstream.
[Edit] Okay, an example for those who think I'm "bashing" something, or claiming that aspect oriented programming is going to disappear. At one time the next big thing was structured programming. Object oriented programming grew out of that, and now nobody talks about doing "structured programming" any more. But, in many ways we're still using its best ideas, because OOP adopted them, improved them, and added still more new ideas.

It's around on some projects. My own experience on a recent project is that it is too easy to abuse :( !!! What started as a nice way to set up debugging, timing, and to some extent transaction management quickly got corrupted into the weirdest, hardest-to-understand-and-debug code I've seen in a while.
Just to expand a bit on the debug/diagnostic side: the stack traces generated by AOP code often hide, beyond recognition, the actual place where the exception occurred.

AOP is actually truly brilliant; the problem with it is that no existing language has really great support for it. Sure, C# has attributes (which only work when you're CODING the thing) and Java has "injection" (which creates a mess out of the runtime), but in general there are no (mainstream) languages with truly great support for it...
So it kind of ended up being a "Design Pattern" (with insanely different implementations in all the different languages, though), which after all isn't all that bad I guess ;)
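As an illustration of doing it as a pattern rather than with language support, here is a minimal Java sketch that hand-rolls interception with a JDK dynamic proxy; the AccountService interface is a made-up example.
```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

public class LoggingProxyDemo {

    // Hypothetical business interface used only for this sketch.
    interface AccountService {
        void credit(String account, double amount);
    }

    public static void main(String[] args) {
        AccountService target = (account, amount) ->
                System.out.println("credited " + amount + " to " + account);

        // Hand-rolled "advice": log before and after every call on the interface.
        InvocationHandler handler = (proxy, method, methodArgs) -> {
            System.out.println("before " + method.getName());
            try {
                return method.invoke(target, methodArgs);
            } finally {
                System.out.println("after " + method.getName());
            }
        };

        AccountService proxied = (AccountService) Proxy.newProxyInstance(
                AccountService.class.getClassLoader(),
                new Class<?>[] { AccountService.class },
                handler);

        proxied.credit("12345", 100.0);
    }
}
```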

I'm going to suggest that it wasn't big enough. It sounds very appealing, but does it really make coding any easier? I've been wanting to try it out and find what benefits it really holds, but I don't think I do enough coding where I need the relationships that it provides. I don't think it is as beneficial as it sounds.
Also, at this point, making multicore programming easier is a big thing, and I don't think aspect-oriented programming will assist with that.
You can also find lots of content on Adoption Risks.


New design patterns/design strategies [closed]

I've studied and implemented design patterns for a few years now, and I'm wondering: what are some of the newer design patterns (since the GoF)? Also, what should one, similar to myself, study [in the way of software design] next?
Note: I've been using TDD and UML for some time now. I'm curious about the newer paradigm shifts and/or newer design patterns.
There is roughly an infinite number of design patterns. Design patterns are just that: a recurrence of tricks programmers use to get things done. The most useful thing about the GoF patterns is how famous they are. In that, they have become a language -- exactly what the GoF hoped to achieve.
Many other patterns you'll find on the web and in literature are "just" useful tricks, not so much a language you can use when you speak to fellow programmers. That said, there are a number of patterns that arose in the past ten years or so, particularly in the realm of web development. See the patterns listed in Martin Fowler's patterns book.
I'm surprised that no one has mentioned Martin Fowler's book Patterns of Enterprise Application Architecture. This is an outstanding book with dozens of patterns, many of which are used in modern ORM design (repository, active record), along with a number of UI layer patterns. Highly recommended.
I am an avid follower and supporter of the PCMEF (now PCBMER) framework.
Here's a simpler overview of it.
It recognizes that enterprise systems are huge and complex, and that by combining a bunch of other design patterns into the PCBMER framework (Presentation, Controller, Bean, Mediator, Entity and Resource), even the most complex system remains easy to understand and manage.
One of the newer ones that I found particularly useful is Domain-Driven Design. Not so much a pattern in its own right, but more of a mindset - to concentrate on the domain objects, i.e. the things that you model, and build the rest of the application around them.
I found that it gave meaning to principles that we all knew before but were too lazy to deal with - like the Single Responsibility Principle and Separation of Concerns. I take those two especially seriously now.
Another axis of improvement for me was TDD and Dependency Injection. I have discovered that with lots of interfaces and classes implementing them, I was able to let go of this fear of only defining something once. That is not to say that it conflicts much with DRY (Don't Repeat Yourself). It's OK to have two classes with the same properties if their purposes are different. Encapsulation and SRP are much more important than defining a property only once.
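To illustrate the DI point, here is a minimal Java sketch of constructor injection against an interface; all the names (Notifier, EmailNotifier, SignupService) are invented for this example.
```java
// A sketch of constructor injection against an interface.
// All names here are invented for illustration.
interface Notifier {
    void notify(String user, String message);
}

class EmailNotifier implements Notifier {
    @Override
    public void notify(String user, String message) {
        System.out.println("emailing " + user + ": " + message);
    }
}

class SignupService {
    private final Notifier notifier;

    // The dependency is injected, so tests can pass in a fake Notifier.
    SignupService(Notifier notifier) {
        this.notifier = notifier;
    }

    void register(String user) {
        // ... persist the user ...
        notifier.notify(user, "Welcome!");
    }
}
```
In production you would wire up new SignupService(new EmailNotifier()), by hand or via a container, while tests can pass in a stub Notifier.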
Umm...none of the things people have mentioned are design patterns.
GoF was written implicitly with Java in mind. It explored that space pretty well. However, once you go into other languages some patterns are no longer necessary (Observer is rarely used in a language like C# that supports events) and some new ones spring forth. Grab yourself the Pro JavaScript Design Patterns or Design Patterns in Ruby books and see what happens to the standby patterns in these very different paradigms.
My favorites lately have come from leaning on the functional drift of modern languages. I'm a big fan of nested closures and of the functional ways of tackling some of the same problems that GoF does (again, see the Ruby book for great examples). I also am currently in love with the idea of internal domain-specific languages which open up into a whole series of design patterns of their own (including nested closures). Also event-aggregation seems to be poised to hit it big in the .Net world in the near future.
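To make the closures point concrete, here is a rough sketch of a Strategy-style pattern expressed with closures instead of a class hierarchy, written in Java with lambdas; the discount example is invented for illustration.
```java
import java.util.function.DoubleUnaryOperator;

public class ClosureStrategyDemo {

    // Each "strategy" is just a closure over its configuration,
    // instead of a separate ConcreteStrategy class.
    static DoubleUnaryOperator percentageDiscount(double percent) {
        return price -> price * (1.0 - percent / 100.0);
    }

    static DoubleUnaryOperator flatDiscount(double amount) {
        return price -> Math.max(0.0, price - amount);
    }

    static double checkout(double price, DoubleUnaryOperator discount) {
        return discount.applyAsDouble(price);
    }

    public static void main(String[] args) {
        System.out.println(checkout(200.0, percentageDiscount(10)));  // 180.0
        System.out.println(checkout(200.0, flatDiscount(25)));        // 175.0
    }
}
```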
A couple of other big ones that have hit the scene but aren't discussed as much in GoF - probably because they are more high-level than what those guys were going for - are Inversion of Control containers, message bussing, Aspect-Oriented Programming, Model-View-Controller, Model-View-Presenter, Model-View-ViewModel, and their ilk.
By the way, these are not design patterns, but if you're looking to progress beyond TDD start looking into Behavior-Driven-Development and Context/Specification.
A huge change from a maintenance aspect is the use of DVCS. If you don't know what one is or haven't used one, I highly suggest reading up on the two hard hitters:
Mercurial (hg): https://www.mercurial-scm.org/
git : http://git-scm.com/
They've done quite a bit to change the workflow of the common programming environment. Not really a pattern/design I suppose, but I don't think TDD or UML are technical patterns/designs either at some level. Maybe more like common practices surrounding programming.

Is OOP abused in universities? [closed]

I started college two years ago, and since then I keep hearing "design your classes first". I really ask myself sometimes: should my solution be a bunch of objects in the first place? Some say that you don't see its benefits because your codebase is very small - university projects. The project-size excuse just doesn't go down my throat. If the solution goes well with the project, I believe it should be the right one also with the macro-version of that project.
I am not saying OOP is bad, I just feel it is abused in classrooms where students like me are told day and night that OOP is the right way.
IMHO, the proper answer shouldn't come from a professor; I'd prefer to hear it from real engineers in the field.
Is OOP the right approach always?
When is OOP the best approach?
When is OOP a bad approach?
This is a very general question. I am not asking for definite answers, just some real design experience from the field.
I don't care about performance. I am asking about design. I know it is engineering in real life.
==================================================================================
Thanks for all the contributions. I chose Nosredna's answer because she addressed my questions in general and convinced me that I was wrong about the following:
If the solution goes well with the project, I believe it should be the right one also with the macro-version of that project.
The professors have the disadvantage that they can't put you on huge, nasty programs that go on for years, being worked on by many different programmers. They have to use rather unconvincing toy examples and try to trick you into seeing the bigger picture.
Essentially, they have to scare you into believing that when an HO gauge model train hits you, it'll tear your leg clean off. Only the most convincing profs can do it.
"If the solution goes well with the project, I believe it should be the right one also with the macro-version of that project."
That's where I disagree. A small project fits into your brain. The large version of it might not. To me, the benefit of OO is hiding enough of the details so that the big picture can still be crammed into my head. If you lack OO, you can still manage, but it means finding other ways to hide the complexity.
Keep your eye on the real goal--producing reliable code. OO works well in large programs because it helps you manage complexity. It also can aid in reusability.
But OO isn't the goal. Good code is the goal. If a procedural approach works and never gets complex, you win!
OOP is a real world computer concept that the university would be derelict to leave out of the curriculum. When you apply for jobs, you will be expected to be conversant in it.
That being said, pace jalf, OOP was primarily designed as a way to manage complexity. University projects written by one or two students on homework time are not a realistic setting for large projects like this, so the examples feel (and are) toy examples.
Also, it is important to realize that not everyone really sees OOP the same way. Some see it about encapsulation, and make huge classes that are very complex, but hide their state from any outside caller. Others want to make sure that a given object is only responsible for doing one thing and make a lot of small classes. Some seek an object model that closely mirrors real world abstractions that the program is trying to relate to, others see the object model as about how to organize the technical architecture of the problem, rather than the real world business model. There is no one true way with OOP, but at its core it was introduced as a way of managing complexity and keeping larger programs more maintainable over time.
OOP is the right approach when your data can be well structured into objects.
For instance, for an embedded device that's processing an incoming stream of bytes from a sensor, there might not be much that can be clearly objectified.
Also in cases where ABSOLUTE control over performance is critical (when every cycle counts), an OOP approach can introduce costs that might be nontrivial to compute.
In the real world, most often, your problem can be VERY well described in terms of objects, although the law of leaky abstractions must not be forgotten!
Industry generally settles, eventually and for the most part, on using the right tool for the job, and you can see OOP in many, many places. Exceptions are often made for high-performance and low-level work. Of course, there are no hard and fast rules.
You can hammer in a screw if you stick at it long enough...
My 5 cents:
OOP is just one instance of a larger pattern: dealing with complexity by breaking down a big problem into smaller ones. Our feeble minds are limited to a small number of ideas they can handle at any given time. Even a moderately sized commercial application has more moving parts than most folks can fully maintain a complete mental picture of at a time. Some of the more successful design paradigms in software engineering capitalize on the notion of dealing with complexity. Whether it's breaking your architecture into layers, your program into modules, doing a functional breakdown of actions, using pre-built components, leveraging independent web services, or identifying objects and classes in your problem and solution spaces. Those are all tools for taming the beast that is complexity.
OOP has been particularly successful in several classes of problems. It works well when you can think about the problem in terms of "things" and the interactions between them. It works quite well when you're dealing with data, with user interfaces, or building general purpose libraries. The prevalence of these classes of apps helped make OOP ubiquitous. Other classes of problems call for other or additional tools. Operating systems distinguish kernel and user spaces, and isolate processes in part to avoid the complexity creep. Functional programming keeps data immutable to avoid the mesh of dependencies that occur with multithreading. Neither is your classic OOP design and yet they are crucial and successful in their own domains.
In your career, you are likely to face problems and systems that are larger than you could tackle entirely on your own. Your teachers are not only trying to equip you with the present tools of the trade. They are trying to convey that there are patterns and tools available for you to use when you are attempting to model real-world problems. It's in your best interest to accumulate a collection of tools for your toolbox and choose the right tool(s) for the job. OOP is a powerful tool to have, but by far not the only one.
No...OOP is not always the best approach.
(A true) OOP design is the best approach when your problem can best be modeled as a set of objects that can accomplish your goals by communicating/using one another.
Good question...but I'm guessing Scientific/Analytic applications are probably the best example. The majority of their problems can best be approached by functional programming rather than object oriented programming.
...that being said, let the flaming begin. I'm sure there are holes and I'd love to learn why.
Is OOP the right approach always?
Nope.
When is OOP the best approach?
When it helps you.
When is OOP a bad approach?
When it impedes you.
That's really as specific as it gets. Sometimes you don't need OOP, sometimes it's not available in the language you're using, sometimes it really doesn't make a difference.
I will say this though, when it comes to technique and best practices continue to double check what your professors tell you. Just because they're teachers doesn't mean they're experts.
It might be helpful to think of the P of OOP as Principles rather than Programming. Whether or not you represent every domain concept as an object, the main OO principles (encapsulation, abstraction, polymorphism) are all immensely useful at solving particular problems, especially as software gets more complex. It's more important to have maintainable code than to have represented everything in a "pure" object hierarchy.
My experience is that OOP is mostly useful on a small scale - defining a class with certain behavior, and which maintains a number of invariants. Then I essentially just use that as yet another datatype to use with generic or functional programming.
Trying to design an entire application solely in terms of OOP just leads to huge bloated class hierarchies, spaghetti code where everything is hidden behind 5 layers of indirection, and even the smallest, most trivial unit of work ends up taking three seconds to execute.
OOP is useful --- when combined with other approaches.
But ultimately, every program is about doing, not about being. And OOP is about "being". About expressing that "this is a car. The car has 4 wheels. The car is green".
It's not interesting to model a car in your application. It's interesting to model the car doing stuff. Processes are what's interesting, and in a nutshell, they are what your program should be organized around. Individual classes are there to help you express what your processes should do (if you want to talk about car things, it's easier to have a car object than to talk about all the individual components it is made up of, but the only reason you want to talk about the car at all is because of what is happening to it: the user is driving it, or selling it, or you are modelling what happens to it if someone hits it with a hammer).
So I prefer to think in terms of functions. Those functions might operate on objects, sure, but the functions are the ones my program is about. And they don't have to "belong" to any particular class.
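As a small sketch of that style (in Java, assuming a recent version with records; the Car type and its operations are invented for illustration), the data type stays simple and the behaviour lives in free-standing functions:
```java
// "Functions first": a simple data type plus free-standing operations on it.
public class CarProcesses {

    record Car(String colour, int wheels, double price) { }

    // The interesting part is what happens to the car, not the car itself.
    static Car repaint(Car car, String newColour) {
        return new Car(newColour, car.wheels(), car.price());
    }

    static double sell(Car car, double discount) {
        return car.price() - discount;
    }

    public static void main(String[] args) {
        Car car = new Car("green", 4, 15_000.0);
        Car resprayed = repaint(car, "red");
        System.out.println(resprayed + " sold for " + sell(resprayed, 500.0));
    }
}
```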
Like most questions of this nature, the answer is "it depends."
Frederick P. Brooks said it best in "The Mythical Man-Month": "there is no single strategy, technique or trick that will exponentially raise the productivity of programmers." You wouldn't use a broadsword to make a surgical incision, and you wouldn't use a scalpel in a sword fight.
There are amazing benefits to OOP, but you need to be comfortable with the pattern to take advantage of these benefits. Knowing and understanding OOP also allows you to create a cleaner procedural implementation for your solutions because of the underlying concepts of separation of concerns.
I've seen some of the best results of using OOP when adding new functionality to a system or maintaining/improving a system. Unfortunately, it's not easy to get that kind of experience while attending a university.
I have yet to work on a project in the industry that was not a combination of both functional and OOP. It really comes down to your requirements and what are the best (maybe cheapest?) solutions for them.
OOP is not always the best approach. However it is the best approach in the majority of applications.
OOP is the best approach in any system that lends itself to objects and the interaction of objects. Most business applications are best implemented in an object-oriented way.
OOP is a bad approach for small one-off applications where the cost of developing a framework of objects would exceed the needs of the moment.
Learning OOA, OOD and OOP skills will benefit most programmers, so it is definitely useful for universities to teach them.
The relevance and history of OOP go back to the Simula languages of the 1960s as a way to engineer software conceptually, where the developed code defines both the structure of the source and the general permissible interactions with it. Obvious advantages are that a well-defined and well-created object is self-justifying and consistently repeatable as well as reliable; ideally it is also able to be extended and overridden.
The only time I know of that OOP is a 'bad approach' is during embedded systems programming efforts where resource availability is restricted; of course that's assuming your environment gives you access to objects at all (as was already stated).
The title asks one question, and the post asks another. What do you want to know?
OOP is a major paradigm, and it gets major attention. If metaprogramming becomes huge, it will get more attention. Java and C# are two of the most used languages at the moment (see: SO tags by number of uses). I think it's ignorant to state either way that OOP is a great/terrible paradigm.
I think your question can best be summarized by the old adage: "When the hammer is your tool, everything looks like a nail."
OOP is usually an excellent approach, but it does come with a certain amount of overhead, at least conceptual. I don't do OO for small programs, for example. However, it's something you really do need to learn, so I can see requiring it for small programs in a University setting.
If I have to do serious planning, I'm going to use OOP. If not, I won't.
This is for the classes of problems I've been doing (which includes modeling, a few games, and a few random things). It may be different for other fields, but I don't have experience with them.
My opinion, freely offered, worth as much...
OOD/OOP is a tool. How good of a tool depends on the person using it, and how appropriate it is to use in a particular case depends on the problem. If I give you a saw, you'll know how to cut wood, but you won't necessarily be able to build a house.
The buzz that I'm picking up on is that functional programming is the wave of the future because it's extremely friendly to multi-threaded environments, so OO might be obsolete by the time you graduate. ;-)

What are some advanced software development topics every developer should know? [closed]

Let's say your company has given you the time & money to acquire training on as many advanced programming topics as you can eat in a year, carte blanche. What would those topics be and how would you prefer to acquire them?
Assumptions:
You still have deliverables to bring into existence, but you're allowed one week per month of the year for this training.
The training can come from anywhere, e.g. classroom, on-site instructor, books, subscriptions, podcasts, etc.
Subject matter can cover any platform, technology, language, DBMS, toolset, etc.
Concurrent/parallel programming and multi-threading, especially with respect to memory models and memory coherency. I think every programmer should be aware of the considerations in this arena as we move into a world of multi-core/multi-CPU hardware.
For this I would probably use Internet research most heavily, but an on-campus primer at a good university could be a good way to start off.
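As a tiny taste of why memory models matter, here is a classic visibility sketch in Java: without the volatile keyword (or other synchronization), the worker thread is not guaranteed ever to see the main thread's write to the flag.
```java
public class VisibilityDemo {

    // Without 'volatile', the Java memory model does not guarantee that
    // the worker thread ever observes the write made by the main thread.
    private static volatile boolean stopRequested = false;

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            long iterations = 0;
            while (!stopRequested) {
                iterations++;
            }
            System.out.println("stopped after " + iterations + " iterations");
        });
        worker.start();

        Thread.sleep(100);       // let the worker spin for a moment
        stopRequested = true;    // the volatile write is guaranteed to become visible
        worker.join();
    }
}
```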
Security!
Far too many programmers just build something and think they can add security as an afterthought after finishing the "main" part of the program. You could always benefit from knowing more about how to secure your app, how to design software to be secure from the get-go, how to do intrusion detection, etc.
Advanced Database Development
Things like data warehousing (MDX, OLAP queries, star schemas, fact tables, etc), advanced performance tuning, advanced schema and query patterns, and the like are always useful.
Here are the three that I'm always finding myself explaining to junior developers who didn't get enough CS training. All that other stuff is generally more hype than substance, or can be fairly easily picked up. But if you don't know these three, you can do a great deal of damage:
Algorithm analysis, including Big O notation.
The various levels of cohesion and coupling.
Amdahl's Law, and how it pertains to optimizations (a small worked example follows below).
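For Amdahl's Law in particular, a quick worked example: if 90% of a task parallelizes perfectly, the speedup on n processors is 1 / (0.1 + 0.9/n), which approaches, but never exceeds, 10x no matter how many cores you add. A throwaway sketch in Java:
```java
public class AmdahlDemo {

    // Amdahl's Law: speedup = 1 / ((1 - p) + p / n),
    // where p is the parallelizable fraction and n the number of processors.
    static double speedup(double parallelFraction, int processors) {
        return 1.0 / ((1.0 - parallelFraction) + parallelFraction / processors);
    }

    public static void main(String[] args) {
        for (int n : new int[] {1, 2, 4, 8, 64, 1024}) {
            System.out.printf("p = 0.9, n = %4d -> speedup = %.2f%n", n, speedup(0.9, n));
        }
    }
}
```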
Internationalization issues, especially since it sounds like it would not be an advanced topic. But it is.
Accessibility
It's ignored by so many organizations but the simple fact of the matter is that there are a huge number of people with low or no vision, color blindness, or other differences that can make navigating the web a very frustrating experience. If everybody had at least a little bit of training in it, we might get some web based UIs that are a little more inclusive.
Object oriented design patterns.
I guess "advanced" is different for everyone, but I'd suggest the following as being things that most decent developers (i.e. ones that don't need to be told about NP-completeness or design patterns) could gain from:
Multithreading techniques that go beyond "lock" and when to apply them (see the sketch after this list).
In-depth training to learn and habitualize themselves with clever features in their toolchain (IDE/text editor, debugger, profiler, shell).
Some cryptography theory and hands-on experience with different common flaws in security schemes that people create.
If they program against a database, learn the internals of their database and advanced query composition and tuning techniques.
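On the "beyond lock" point, here is a minimal Java sketch contrasting a synchronized counter with a lock-free one built on java.util.concurrent.atomic:
```java
import java.util.concurrent.atomic.LongAdder;

public class CounterDemo {

    // The obvious approach: a mutex-protected counter.
    static class SynchronizedCounter {
        private long value;
        synchronized void increment() { value++; }
        synchronized long get() { return value; }
    }

    // One step beyond "lock": LongAdder spreads contention across internal cells
    // and is usually much faster under heavy concurrent updates.
    static class AdderCounter {
        private final LongAdder value = new LongAdder();
        void increment() { value.increment(); }
        long get() { return value.sum(); }
    }

    public static void main(String[] args) {
        AdderCounter counter = new AdderCounter();
        counter.increment();
        counter.increment();
        System.out.println(counter.get());  // 2
    }
}
```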
Developers should know the basics of SQL development and how their decisions impact database performance. It is one thing to write a query; it is another to write a query, understand the explain plan, and make design decisions based on that output. I think a good course on PL/SQL development and database performance would be very beneficial.
Unfortunately communication skills seem to fall under the "advanced topics" section for most developers (present company excluded, of course).
Best way to acquire this skill: practice.
Take off the headphones, and talk to someone instead of IM'ing or emailing the guy at the next desk.
Pick up the phone and talk to a client instead of lobbing an email over the fence.
Ask questions at a conference instead of sitting behind your laptop screen twittering.
Actively participate in a non-technical meeting at work.
Present something in public.
Most projects do not fail because of technical reasons. They fail because they could not create a team. Communication is vital to team dynamics.
It will not harm your career either.
One of the best courses I took was a technical writing course. It has served me well in my career.
Additionally: it probably does not matter WHAT the topic is - the fact that the organization is interested in it and is paying for it and the developers want to go and do go, is a better indicator of success/improvement than any one particular topic.
I also don't think it matters that much what the topic is. Dev organizations deal with so many things during a project that training and then on the job implementation/trial and error will always get you some better perspective - even if the attempts to try out/use the new stuff fail. That experience will probably help more on the subsequent projects.
I'm a book person, so I wouldn't really bother with instruction.
Not necessarily in this order, and depending on what you know already
OO Programming
Functional Programming
Data structures and algorithms
Parallel processing
Set-based logic (essentially the theory behind SQL and how to apply it)
Building parsers (I only put this because it actually came up where I work)
Software development methodologies
NP Completeness. Specifically, how to detect if a problem is NP-Complete, and how to build an approximate solution to the problem.
I see this as important because you don't want a developer to try and solve an NP-complete problem by getting the optimum solution, unless the problem's search space is very small, in which case brute force is acceptable. However, as the search space increases, the time required to solve the problem increases exponentially.
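As an example of settling for an approximation, the classic greedy 2-approximation for Vertex Cover (an NP-complete problem) gives a usable answer in polynomial time instead of searching exponentially many subsets. A rough Java sketch:
```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class VertexCoverApprox {

    // Classic 2-approximation: repeatedly pick an uncovered edge
    // and add BOTH endpoints to the cover. The result is at most
    // twice the size of an optimal vertex cover.
    static Set<Integer> approximateCover(List<int[]> edges) {
        Set<Integer> cover = new HashSet<>();
        for (int[] edge : edges) {
            if (!cover.contains(edge[0]) && !cover.contains(edge[1])) {
                cover.add(edge[0]);
                cover.add(edge[1]);
            }
        }
        return cover;
    }

    public static void main(String[] args) {
        List<int[]> edges = new ArrayList<>();
        edges.add(new int[] {1, 2});
        edges.add(new int[] {2, 3});
        edges.add(new int[] {3, 4});
        System.out.println(approximateCover(edges));  // e.g. [1, 2, 3, 4]
    }
}
```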
I'd cover new technologies and trends. Some of the new technologies I'm researching/enhancing my skills with include:
Microsoft .NET Framework v3.0/v3.5/v4.0
Cloud Computing Frameworks (Amazon EC2, Windows Azure Services, GoGrid, etc.)
Design Patterns
I am from the MS-based developer world, so here is my take on this:
More about new concepts in cloud computing (various APIs, etc.), as the industry has been betting on it for some time.
More about LINQ for the .NET Framework.
Distributed databases
Refactoring techniques (which implies also learning to write a good set of unit/functional tests).
Knowing how to refactor is the best way to keep code clean -- it is rare when you get it right the first time (especially in new designs).
A number of refactorings, however, require a decent set of tests to check that the refactoring did not add unexpected behavior.
Parallel computing - the easiest and best way to learn it
Debugging
Debugging by David J. Agans is a good book on the topic. Debugging can be very complex when you deal with multi-threaded programs, crashes, algorithms that don't work, etc. Everybody would be better off being good at debugging.
I'd vote for real-world battle stories. Have developers from other organizations present their successes and failures. Don't limit the presentations to technologies you're using. With a significantly complex project, this is bound to cut into 'advanced' topics you haven't even considered. Real-world successes (and failures) have a lot to teach.
Go to the Stack Overflow DevDays
and the ACCU conferences
Read
Agile Software Development, Principles, Patterns, and Practices (Robert C. Martin)
Clean Code (Robert C. Martin)
The Pragmatic Programmer (Andrew Hunt & David Thomas)
Well if you're here I would hope by now you have the basics down:
OOP Best practices
Design patterns
Application Security
Database Security/Queries/Schemas
Most notably developers should strive to learn multiple programming languages and disciplines, in order for their skill set to be expanded in more than one direction. They don't need to become experts in these other skills but at least have a very acute understanding of integration with their central discipline. This will make them much better developers in the long run, and also let them gain the ability to use all tools at their disposal to create applications that can transcend the limitations of a singular language.
Outside of programming specific topics, you should also learn how to work under Agile, XP, or other team based methodologies in order to be more successful while working in a team environment.
I think an advanced programmer should know how to get your employer to give you the time & money to acquire training on as many advanced programming topics that you can eat in a year. I'm not advanced yet. :)
I'd suggest an Artificial Intelligence class at a college/university. Most of the stuff is fun, easy to grasp (the basics at least), and the solutions to problems are usually creative.
The Hitchhiker's Guide to the Galaxy.
How would I prefer to acquire the training? I'd love to have a substantial amount of company time dedicated to self-training.
I totally agree on Accessibility. I was asked to look into it for the website at work, and there is a real lack of good knowledge on the subject, not to mention a lack of CSS standards to aid the likes of screen readers.
However, my answer goes to GUI design - it's quite a difficult thing to get right. There are too many awful applications out there that could be prevented just by taking the time to follow HCI (Human-Computer Interaction) advice/designs. Take Google/Apple for inspiration when making a GUI - not your typical hundreds-of-buttons/labels combo that too often gets pushed out.
Automated testing: Unit testing, functional integration testing, non-functional testing
Compiler details (more relevant on some platforms than others): How does the compiler implement certain common constructs in language X? On a byte-code interpreted platform, how does JIT compilation work? What can be JIT-compiled (for example, can virtual calls be JIT compiled?)?
Basic web security
Common design idioms from other problem domains than the one you're working in at the moment.
I'd recommend learning about Refactoring, Test Driven Development, and various unit testing frameworks (NUnit, Visual Test, CppUnit, etc.) I'd also learn how to incorporate automated unit testing into your continuous integration builds.
Ultimately if you can prove your code does what it claims it can do, you don't have to be there to answer questions as to why or how. If a maintainer comes along and tries to "fix" your code, they'll know instantly if they broke it. Tests written around the requirements (use cases) explain to the maintainer what your users wanted it to do, and provide a little working example of how to call it. Think of unit tests as functional documentation.
Test Driven Development (TDD) is a more novel design approach that begins with the requirements, where you start by writing a test before you write the code. You then write exactly enough code required to pass the test. You have to stop before you write extra code (that you may never need), because you will refactor it later if you find that you really needed it.
What makes TDD cool is that a bad interface (such as one with lots of dependencies) is also very hard to write tests for. It's so hard that a coder would rather refactor the interface to make it easier to test. And that refactoring simplifies the code, removing inappropriate dependencies, or grouping related tests together to make it easier to test, thus improving cohesion. By making it immediately apparent to the developer when he's writing a badly interfaced module, the developer sticks to the architecture and gravitates to the principles of tight cohesion and loose coupling. Good interfaces are the natural result. And as a bonus, once you pass all your tests, you know you're done.
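A minimal red-green sketch of that loop, using JUnit 5-style assertions (the PriceCalculator class and its discount rule are invented for this example): write the failing test first, then just enough code to make it pass.
```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

// Step 1: write the test first (it fails until the production code exists and is correct).
class PriceCalculatorTest {
    @Test
    void appliesTenPercentDiscountOverOneHundred() {
        PriceCalculator calculator = new PriceCalculator();
        assertEquals(90.0, calculator.finalPrice(100.0), 0.001);
        assertEquals(50.0, calculator.finalPrice(50.0), 0.001);
    }
}

// Step 2: write exactly enough production code to make the test pass.
class PriceCalculator {
    double finalPrice(double price) {
        return price >= 100.0 ? price * 0.9 : price;
    }
}
```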
On the surface this seems like an easy question to answer, just enter your favorite pet peeve about what other developers can't do correctly. But when I read through the answers and gave it some thought, I realized that every "advanced topic" brought up was covered in my undergraduate computer science curriculum--20 years ago. And I doubt that OO, security, functional programming, etc. concepts have changed in that time. Sure the tools have, but I argue that tools are different than topics.
So what is an "advanced topic" in computer science? Who is the Turing, Knuth, Yourdon of the 21st century?
I don't have a clear answer to this question, though I'd like to see more work on theories for parallel programming that will enable tools to abstract that messy stuff for developers.
Quite funny that no one has mentioned:
debugging,
the tools and IDE you work with,
and the platform you are developing for.
Everyday development is much more fun if you know your tools really well, and you accomplish more and make your life easier if you know how to debug someone else's code with ease.
Source Control

What is "over-engineering" as applied to software? [closed]

I wonder what would be a good definition of the term "over-engineering" as applied to software development. The expression seems to be used a lot during software design discussions, often in conjunction with "excessive future-proofing", and it would be nice to nail down a more precise definition.
Contrary to most answers, I do not believe that "presently unneeded functionality" is over-engineering; or it is the least problematic form.
Like you said, the worst kind of over-engineering is usually committed in the name of future-proofing and extensibility - and achieves the exact opposite:
Empty layers of abstraction that are at best unnecessary and at worst restrict you to a narrow, inefficient use of the underlying API.
Code littered with designated "extension points" such as protected methods or components acquired via abstract factories - which all turn out to be not quite what you actually need when you do have to extend the functionality.
Making everything configurable to "avoid hard-coding", with the effect that there's more (complex, failure-prone) application logic in configuration files than in source code.
Over-genericizing: instead of implementing the (technically uninteresting) functional spec, the developer builds a (technically interesting) "business rule engine" that "executes" the specs themselves as supplied by business users. The net result is an interpreter for a proprietary (scripting or domain-specific) language that is usually horribly designed, has no tool support and is so hard to use that no business user could ever work with it.
The truth is that the design that is most easily adapted to new and changing requirements (and is thus the most future-proof and extensible) is the design that is as simple as possible.
Contrary to popular belief, over-engineering is really a phenomenon that appears when engineers get "hubris" and think they understand the user.
I made a simple diagram to illustrate this.
In the cases where we've considered things over-engineered, it has always been software designed to be so generic that it loses sight of the main task it was initially designed to perform, and has therefore become not only hard to use but fundamentally unintelligent.
To me, over-engineering is including anything that you don't need and that you don't know you're going to need. If you catch yourself saying that a feature might be nice if the requirements change in a certain way, then you might be over-engineering. Basically, over-engineering is violating YAGNI.
The agile answer to this question is: every piece of code that does not contribute to the requested functionality.
There is this discussion at Joel on Software that starts with,
creating extensive class hierarchies for an imagined future problem that does not yet exist, is a kind of over-engineering, and is therefore, bad.
And it gets into a discussion with examples.
If you spend so much time thinking about the possible ramifications of a problem that you end up interfering with the solving of the problem itself, you may be over-engineering.
There's a fine balance between "best engineering practices" and "real world applicability". At some point you have to decide that even though a particular solution may not be as "pure" from an engineering standpoint as it could be, it will do the job.
For example:
If you are designing a user management system for one-time use at a high school reunion, you probably don't need to add support for incredibly long names, or funky character sets. Setting a reasonable maximum length and doing some basic sanitizing should be sufficient. On the other hand, if you're creating a system that will be deployed for hundreds of similar events, you might want to spend some more time on the problem.
It's all about the appropriate level of effort for the task at hand.
I'm afraid that a precise definition is probably not possible as it's highly dependent on the context. For example, it's much easier to over-engineer a web site that displays glittering ponies than it is a nuclear power plant control system. Redundancies, excessive error checking, highly instrumented logging facilities are all over-engineering for a glittering ponies app, but not for a nuclear power plant control system. I think the best you can do is have a feeling about when you are applying too much overhead to your features for the purpose of the application.
Note that I would distinguish between gold-plating and over-engineering. In my mind, gold-plating is creating features that weren't asked for and will never be used. Over-engineering is more about how much "safety" you build into the application either by coding checks around the code or using excessive design for a simple task.
This relates to my controversial programming opinion of "The simplest approach is always the best approach".
Quoting from here: "...Implement things when you actually need them, never when you just foresee that you need them."
To me it is anything that adds more fat to the code. Meat would be any code that does the job according to the spec, and fat would be any code that bloats the code in a way that just adds more complexity. The programmer might have been expecting a future expansion of the functionality, but still, it is fat!
My rough definition would be 'Providing functionality that isn't needed to meet the requirements spec'.
I think they are the same as Gold plating and being hit by the Golden hammer :)
Basically, you can sit down and spend too much time trying to make a perfect design without ever writing any code to check how it works. Any agile method will tell you not to do all your design up-front, and to just create chunks of design, implement them, iterate over them, re-design, go again, etc...
Over-engineering means architecting and designing the application with more components than it really should have according to the requirements list.
There is a big difference between over-engineering and creating an extensible application that can be upgraded as requirements change. If I can think of an example I'll edit the post.
Over-engineering is simply creating a product with greater functionality, quality, generality, extensibility, documentation, or any other aspect than is required.
Of course, you may have requirements outside a specific project -- for example, if you foresee doing future similar applications, then you might have additional requirements for extensibility, dependent on cost, that you add on to the project-specific requirements.
When your design actually makes things more complex instead of simplifying things, you’re overengineering.
More on this at:
http://www.codesimplicity.com/post/what-is-overengineering/
Disclaimer #1: I am a big-picture BA. I know no code. I read this site all the time. This is my first post.
Funny, I was just told by my boss that I over-engineered a new software product we're planning for mentoring (target market: HR people). So I came here to look up the term.
They want to get something in place to sell now, re-purposing existing tools. I can't help but sit back and think, fewer signups, lower retention, if it doesn't allow some of the flexibility we talked about. And mainly, have a highly visual UI that a monkey could use.
He said we could plan future phases to improve the product, especially the UI. We have current customers waiting on "future improvements" that we still aren't doing. They need it though, truly need it.
I am in the process of resigning so I didn't push back.
But my definition would be... making sure it only does as little as possible, for as cheap as possible, and is still passable as the thing you say it is. Anything beyond that is over-engineering.
Disclaimer #2: This site helped me land my next job implementing a more configurable software.
I think the best answers to your question can be found in this other question.
The beauty of Agile programming is that it's hard to over engineer if you do it right.

What are the real challenges for a developer migrating between programming languages? [closed]

Many developers will claim that moving from one programming language to another is relatively simple especially if the languages are based on similar paradigms. However, in practice the effort comes not from learning the syntax of the language but in developing a deep understanding of the language nuances and more importantly knowing what is offered in the language’s libraries. For example, switching from Java to .Net is not difficult from a syntactic perspective but programming efficiency requires a good knowledge of the available libraries. Switching from PHP to .Net could present an even greater hurdle given the language disparities.
What are the real overheads for a developer to move to a different language in the same paradigm? What if the paradigms are different?
The biggest challenge (for me) is usually the API, rather than the language itself (.NET notwithstanding). For example, I've been using Microsoft's C++ and C# for a lot of years (Delphi before that). But I have great difficulty getting started on Java; even trivial projects can take me a while. Not because the language is difficult (it's not), but because the APIs are different, and arranged differently.
It takes months to get up to speed on an API to the point you can use it fluently, and years to become "good" and learn all the ins and outs of the language. That's daunting for a lot of developers, because you basically have to devote a significant amount (if not all) of your time and effort toward working in the new language to become an expert at it. Many times, the incentive to move out of your current area of expertise just isn't there.
Same paradigm is much easier because it is really just a matter of grasping the various libraries and locating them quickly as you mentioned.
If the paradigms are different, then the switch is more difficult. Moving from a static to a dynamic language, or from a procedural to an OOP language, will require a different mindset. This will take more time, but it is possible and still a very good exercise.
It may be similar to learning foreign languages. If you speak English, then moving to another Latin-based language is far easier than going to something like Greek.
For me it would be finding good bloggers and useful sites on the language. After a while you get to know where the best people are. Those people and sites are good sources of information for learning the subtleties.
Leaving your comfort zone. I think this is one of the biggest reasons some developers don't learn new languages.
But for others, this is what drives them.
Moving within the same paradigm is relatively easy. I find switching between Java and .NET painless because both platforms offer similar functionality and similar libraries. But switching paradigms can be a real challenge.
My students usually have a hard time going to functional and logic languages after learning Java, even though functional and logic programming are easier.
Another problem is switching between types of applications. For example, if you are accustomed to building desktop applications in Java and then suddenly try to build web applications in .NET, the switch is hard because you are not only learning new languages but new fields of programming.
Another challenge is the tool sets that are available for a particular language. Java and .NET have similar tools but with some differences. If you learned to program using Visual Studio, then it is possible that features of Visual Studio become confused with the language. I see students having this problem all the time. When you switch to Java and there is no equivalent menu option or wizard in the new IDE, it can cause problems.
I advocate that people learning programming learn the core concepts of the paradigm rather than a specific language, because that makes you more portable in the future. A person comfortable with object-oriented concepts will have an easier time switching between Java, .NET, or Python than a person who simply learned how to program in C#.
Just like learning a new human language, for me the biggest problem lies on typical constructions you need to do to solve a problem.
I know that perhaps learning "while" or "for" loops in several languages isn't so difficult - but when your problem goes up one level of abstraction (iterate through this array) you'll find yourself using "[" instead of "(" and vice versa.
The learning curve can be even steeper if, besides learning a new language, you have to learn a new framework. When I went from typical ASP.NET to MVC (using NVelocity) I felt completely lost - all my knowledge of how to solve typical problems with ASP.NET controls had to be forgotten.
Finally, the biggest challenge happens when you change between languages with different paradigms - because you can no longer think the same way to solve a problem. For example, when moving from C# to Prolog, instead of thinking in functions, arguments, class hierarchies, etc., I had to think only in states, events linked to data changes, and event chaining via recursion - it was madness, but I could finish my university homework.
Coming from a 15+ year C++ background, I just had to move to Java for a new project I am working on, and this is a rather painful step. There are tools and frameworks for almost everything, but the learning curve is enormous! The new syntax is nothing to worry about, and the API is more difficult, but the most difficult area is finding your way through all the frameworks.
Also, having to use Eclipse as IDE is a step back in terms of stability and reliability compared to Emacs. The features of Eclipse are really compelling, but the bugs in the IDE cause constant hassle.