G'day,
This is related to my question on star developers and to this question regarding telling someone that they're writing bad code, but I'm looking at a more specific situation.
That is, how do I tell a "star" that their changes to a program that I'd written are poorly made and inconsistently implemented without just sounding like I'm annoyed at someone "playing with my stuff"?
The newly added functionality was deliberately left out of the original version of this shell script to keep it as simple as possible until we had an idea of the errors we were going to see with the system under load.
Basically, I'd argued that trying to second-guess every error situation was impossible, and could in fact leave us heading down a completely wrong path after having done a lot of work.
After seeing what needed to be added, someone dived in and made the additions but unfortunately:
the logic is not consistent
the variable names no longer describe the data they contain
there are almost no comments at all
the way in which the variables are used is not easy to follow and massively decreases readability and hence maintainability.
I always try to approach coding from the Damian Conway point of view: "Always code as if your system is going to be maintained by a psychopath who knows where you live." That is, I try to make it easy to follow and not an advertisement for my own brilliance. "What does this piece of code do?" exercises are fun and are best left to obfuscation contests, IMHO.
Any suggestions gratefully received.
cheers,
I would just be honest about it. You don't necessarily need to point out every little detail that's wrong, but it's worth having a couple of examples of any general points you're going to make. You might want to make notes about other examples that you don't call out in the first brief feedback, in case they challenge your reasoning.
Try to make sure that the feedback is entirely about the code rather than the person. For example:
Good: The argument validation in foo() seems inconsistent with that in bar(). In foo(), a NullPointerException is thrown if the caller passes in null, whereas bar() throws IllegalArgumentException.
Bad: Your argument validation is all over the place. You throw NullPointerException in foo() but IllegalArgumentException in bar(). Please try to be consistent.
Even with a "please," the second form is talking about the developer rather than the code.
Of course in many cases you don't need to worry about being so careful, but if you think they're going to be very sensitive about it, it's worth making the effort. (Read what you've written carefully, if it's written feedback: I accidentally included a "you" in the first version to start with :)
I've found that most developers (superstar or not) are pretty reasonable about accepting, "No, I didn't implement that feature because it has problem X." It's possible that I've been lucky though.
Coming from the other perspective, I would encourage you to put yourself in their shoes. I will describe a "hypothetical" experience.
Some things to keep in mind:
The guy was trying to do something good.
Programmers are terrible at mind reading. They tend to only know what they read.
He may not have been given complete guidance as to what needs to be done (or what doesn't need to be done).
He is likely doing the best he knows how to.
Just keep that in mind and talk to them. Teach them. No need for yelling or pissing contests. Just remember that they are not intentionally trying to make your life difficult.
I see that you've asked a lot of questions about how to deal with certain kinds of developers. It seems to be a common thread for you. You keep asking about how to change people around you. If this is a constant problem for you, then perhaps you are the problem.
Now I know you are asking questions to learn how to deal with people you find difficult, and that's good, however, you keep asking (and getting answers) about how to change people.
It seems to me that you need to change. Work with these people to change the code to what you want it to be. With them. Don't try to get them to do it. Just do it, and tell them what you did and why, and ask suggestions for further improvement, and learn from each other. Play off of each other's experience and strengths. Just my 2 cents.
If you have clearly defined coding standards for the project, point out that the code needs to be changed to meet those standards. The list you have there seems like quite reasonable feedback (though #3 is much argued over; I would only push to document the really confusing parts, since fixing the other three points should make the code less confusing).
If there are any other examples in your repository from this developer that are several months old, show one to him and ask him what it is doing. (Show him this one in a few months.) When he has to zip around to find out what is actually in his variables and deconstruct every line of code to figure out what it is doing, break into a code review / pair programming session right there. Refactor and rename together so that he hopefully begins to see for himself exactly why these things are important.
Frankly, I think this is a political problem, not a coding problem. Specifically...
WHO SAID THIS PERSON WAS A "STAR"? If this is the same person you described in your other question, then you already have your answer there: THIS PERSON IS NO "STAR".
So then you get into the other effects of politics...
Who is claiming this person to be a star? Why can you not just tell the person "this is crap code"? Who is protecting them / defending them were you to do that? Can you do that or would you get blasted / demoted / put on the "to be laid off" pile?
You are asking questions that cannot really be answered in isolation. IF the code is crap, then throw it away and do it correctly yourself. IF there are reasons that you cannot do that, then you need to ask yourself if the benefits of this place outweigh the negatives.
Cheers,
-R
Creating a program and then releasing it to be worked on by other developers is tough. You are throwing your code to the mercy of others' development styles, coding conventions, etc.
Telling those developers that they are coding poorly, after the code is in, is one of the hardest things that you can do. It is best to address your concerns before they ever start working with your code. This can be done in two ways: maintaining a detailed coding standard and requiring that submitted code adheres to it, and maintaining a development road map, not just to outline when new features will land, but to establish dependencies that avoid such mishaps.
More to your situation, it is important not to criticize or it could cause hostility and worse code coming in. Maybe you can work with that developer to create standards documentation. You will be able to express your ideas about what the standards should be, and you will get their input, without causing any hard feelings.
Always point out the good things in their code, and when discussing the weaknesses, frame them by pointing out how fixing them will benefit everyone (the developer included); never criticize.
Good luck.
I would do the following:
Make sure he knows that his hard work is appreciated (preferably, this should be the truth)
Ask him if he would mind making a few changes, making it sound like no big deal and easy to fix
Explain the issues, including why they are issues, and suggest specific changes to set him on the right path.
Hopefully, the exercise will help him integrate into the project culture better.
We try to solve these potential 'issues' proactively:
Every 'bigger' project where people work together gets assigned a project 'codelead' (one of the developers). This rotates every project (based on preferences, experience with the particular task ...) so everyone gets to be in the 'contributing' and 'code-project-lead' roles once in a while.
We explicitly made an agreement that these project 'leads' can decide whatever they want to do with the code contributions of the others (sort of like a temporary dictatorship: change it, make suggestions, ask people to redo stuff, etc.). The project 'lead' bears the complete responsibility for making the aggregated code work.
With these formalised 'leads' (and the changing roles) I think people have less problems with (constructive) criticism of the parts they contribute.
Yes, keep the feedback as appreciative, professional and technical as possible, and back up your concerns with possible "worst case" scenarios so that the disadvantages of those features and/or this particular implementation become blatantly obvious.
Also, if this is about features/code that are very specific and are not of any use to most users, express your concerns about the code/use ratio - indicating concerns about increased code base complexity etc.
Ideally, present your concerns as open-ended questions - in the sense of: "Though, I am wondering if this way of doing it may work in the long term, due to ...". So that you actually encourage an active dialogue between contributors.
Invite your fellow contributors and users to provide their opinions on these concerns; in fact, ask other people/contributors what they think about this addition (in terms of pros & cons, requirements, code quality), and do make the statement that you are willing to reconsider your current position if other contributors/users can provide corresponding insight.
You are basically encouraging an informal review that way, asking your community to also look into the proposed additions, so that the advantages and disadvantages can be discussed.
So, whatever the decision will be, it will be one that is community-backed, and not just simply made by you.
As the architect of the original design, you are also in an excellent position to provide architectural reasons why something is not (yet) suitable for inclusion/deployment.
If stability, complexity or code quality are a real concern, do illustrate how other contributions also had to go through a certain review process in order to be acceptable.
You can also mention how specific code doesn't really align with your current design, or how it may not scale too well with future extension to your current design, similarly you can highlight why certain stuff was left out explicitly.
If you actually like the features or the core idea, be sure to highlight the excellent addition these features would make if properly implemented and integrated, but do also highlight that the existing implementation isn't really appropriate due to a number of reasons.
If you can, do make specific suggestions for improvements, provide examples of how to do things better, and what to avoid and do express that you hope, this can be reworked to be added with the help of your project's community.
Ideally, present your requirements for actually accepting this contribution and do mention the background for your requirements, you may in fact say that you hate some of these requirements yourself.
Preferably, present and discuss instances where you yourself contributed similar code (or even worse code) and ended up facing huge issues due to your own code, so that these policies are now in place to prevent such issues. By talking about your own bad code, you can actually be very objective.
Emphasize that you generally appreciate the effort itself, and that you are willing to provide the necessary help and pointers to bring the code in question into a better shape and form. Also, encourage that similar contributions in the future should be properly coordinated within your community, in order to avoid similar issues.
Always think in terms of features and functionality (and remind your contributor to do the same), not code - imagine it like a thorough code review process, where the final code that ends up being committed/accepted, may have hardly anything in common with the original implementation.
This is again a good possibility, to present examples where you yourself developed code that ended up largely reworked, so that much of it is now replaced by a much better implementation.
Similarly, there's always the issue with code that has no active maintainers, so you can just as easily suggest that you feel concerned about code that may end up being unmaintained, you could even ask if the corresponding developer would be willing to help maintain that code, possibly in a separate branch.
In the same sense, always require new code to be accompanied by proper comments, documentation and other updates. In other words, code that adds new functionality or changes existing functionality should always be accompanied by updates to all relevant documentation.
Ultimately, if you know right away that you cannot and will not accept any of that code in the near future, you can at least invite the developer to branch or even fork your project, possibly in your repository and with your help and guidance, so that you still express your gratitude for their work with your project.
I'm working on a product which is meant to be simple to use and simple to set up, the competition largely requiring a long set up period and in some cases going as far as a bespoke solution for each customer. One part of our application is now expanding based on customer requests and it is looking like we'll need to make it very flexible so each customer can have a lot of control over how it behaves for them. The problem being that I don't want to make the system too configurable, as I believe this then makes it more complex to learn and to work with. I'm also concerned it opens the door to someone messing things up for themselves, kind of like handing them a gun, although I'm not actually pointing it at their foot for them.
Has anyone else faced a similar dilemma of putting power in users' hands? How did you solve it, and what was the result?
I don't normally like to subscribe to the idea that all users are stupid, but there is a rule which can still be applied:
If you give them the opportunity, they WILL break it
Now it is up to you whether or not to give them the ability to do potentially dumb things. Or better yet, develop it so that when they do do the stupid voodoo that they do, it can be reverted or recovered from error state gracefully.
I highly recommend you read Joel's Controlling Your Environment Makes You Happy, which can be described as a treatise on user interface design but is really about usability with a healthy dab of psychology thrown in.
The section I'm referring to is Choices:
Every time you provide an option, you're asking the user to make a decision.
This is something I strongly agree with. Many developers, product managers and so on take the easy route and instead of figuring out what users actually need, they just give them a choice. You see this in enterprise bloatware like Clearcase or PVCS where there are so many options--90% of which you'd never change--indicating the designers have tried to make it all things to all men rather than doing one or two things exceptionally well.
Instead it just does lots of things badly.
Keep it simple, follow conventions, don't overwhelm the user with pointless and unnecessary choices and make the software behave like a normal user would expect. That alone would set you apart from an awful lot of other products.
Personally I like the TurboTax model (http://turbotax.intuit.com/). When creating a tax return, I get a simple, tell-me-like-I'm-five wizard that takes me step-by-step through the process, but I can step outside the process at any time and use more advanced features, returning to the process later.
Make it easy and simple and uncluttered for your user to do what they're going to do 80% of the time, but give them the power to deliberately step outside of the norm.
Interesting timing for your question. In the U.S. this is Income Tax week. Filling out the ol' 1040 and associated subforms should give us some sympathy for what users endure.
Lessons I take away are:
Only ask questions that relate to the user domain; avoid questions relating to the software system; and if you can derive the answer or suggest a most likely answer, do so.
Put related questions together (as long as they are normally entered by the same person using data most likely available at the same place and time, which is the definition of related for these purposes).
Make it support incremental input. It should be easy to enter the data they have, and defer completing it when the rest is available.
Show status validity and completeness. Make it clear and obvious how far they are to having validatable data.
Make it interruptable. Make sure it's possible to interrupt the process, leave the application, come back, and resume where they left off.
Yup, it's harder to program. Embrace it.
There are at least two ways to build a good software product:
Focus on a narrow set of functionality, and implement that functionality very well.
Design your system to be customizable (ideally, through scripting.) If you do the base system right, it will be easy to provide the basic, no options, just-do-what-I-want functionality on top of the customization layer.
Unfortunately, there are many more ways to create a bad software product.
Your question implies that you can either provide a flexible solution OR make it foolproof.
I wouldn't put it like that. To me this is rather a matter of user expectations and the question in the first place would be:
How can I meet all important user expectations (even if they conflict with each other) without corrupting the application?
For instance, a web application with a menu, breadcrumb navigation, a site map and a search offers, together with the inline links, five different ways to find what you're looking for and get there.
That way most users can quickly and easily find the functionality they expect, and therefore the need for extensive documentation might actually decrease.
So the answer might be to offer several different carefully chosen ways to solve one specific task, while each of them can be streamlined independently to avoid user mistakes.
The answer with this lies in who your end-users are. I used to write software that got used by professional sports coaches. While these guys were definitely good at what they did, they were hardly proficient at computer use, so our configurability was kept to a minimum (at least as far as what could be done in the GUI).
On the other hand, if you're dealing with power users, adding options is usually not a bad thing as long as they aren't intrusive.
It's all about who's going to be getting them.
Read Jeff Atwood's Training Your Users. It's a great article with some very useful links.
I like the approach of Firefox towards this. The basic options are accessible in the option menu, all the rest is under about:config. Thus you have an easy interface and an incredible flexibility if you need it.
I've had great success, and been happiest as a user, when using sensible defaults. In other words, make the most common use case easy (or even better, free), but give users the ability to step outside of that use case when the situation calls for it.
One thing I struggle with is planning an application's architecture before writing any code.
I don't mean gathering requirements to narrow in on what the application needs to do, but rather effectively thinking about a good way to lay out the overall class, data and flow structures, and iterating those thoughts so that I have a credible plan of action in mind before even opening the IDE. At the moment it is all too easy to just open the IDE, create a blank project, start writing bits and bobs and let the design 'grow out' from there.
I gather UML is one way to do this but I have no experience with it so it seems kind of nebulous.
How do you plan an application's architecture before writing any code? If UML is the way to go, can you recommend a concise and practical introduction for a developer of smallish applications?
I appreciate your input.
I consider the following:
what the system is supposed to do, that is, what is the problem that the system is trying to solve
who is the customer and what are their wishes
what the system has to integrate with
are there any legacy aspects that need to be considered
what are the user interactions
etc...
Then I start looking at the system as a black box and:
what are the interactions that need to happen with that black box
what are the behaviours that need to happen inside the black box, i.e. what needs to happen to those interactions for the black box to exhibit the desired behaviour at a higher level, e.g. receive and process incoming messages from a reservation system, update a database etc.
Then this will start to give you a view of the system that consists of various internal black boxes, each of which can be broken down further in the same manner.
UML is very good for representing such behaviour. You can describe most systems using just two of the many components of UML, namely:
class diagrams, and
sequence diagrams.
You may need activity diagrams as well if there is any parallelism in the behaviour that needs to be described.
A good resource for learning UML is Martin Fowler's excellent book "UML Distilled" (Amazon link - sanitised for the script kiddie link nazis out there (-: ). This book gives you a quick look at the essential parts of each of the components of UML.
Oh, and what I've described is pretty much Ivar Jacobson's approach. Jacobson is one of the Three Amigos of OO. In fact, UML was initially developed by the other two people who make up the Three Amigos, Grady Booch and Jim Rumbaugh.
I really find that a first-off of writing on paper or whiteboard is really crucial. Then move to UML if you want, but nothing beats the flexibility of just drawing it by hand at first.
You should definitely take a look at Steve McConnell's Code Complete, and especially at his giveaway chapter on "Design in Construction".
You can download it from his website:
http://cc2e.com/File.ashx?cid=336
If you're developing for .NET, Microsoft have just published (as a free e-book!) the Application Architecture Guide 2.0b1. It provides loads of really good information about planning your architecture before writing any code.
If you were desperate I expect you could use large chunks of it for non-.NET-based architectures.
I'll preface this by saying that I do mostly web development where much of the architecture is already decided in advance (WebForms, now MVC) and most of my projects are reasonably small, one-person efforts that take less than a year. I also know going in that I'll have an ORM and DAL to handle my business object and data interaction, respectively. Recently, I've switched to using LINQ for this, so much of the "design" becomes database design and mapping via the DBML designer.
Typically, I work in a TDD (test driven development) manner. I don't spend a lot of time up front working on architectural or design details. I do gather the overall interaction of the user with the application via stories. I use the stories to work out the interaction design and discover the major components of the application. I do a lot of whiteboarding during this process with the customer -- sometimes capturing details with a digital camera if they seem important enough to keep in diagram form. Mainly my stories get captured in story form in a wiki. Eventually, the stories get organized into releases and iterations.
By this time I usually have a pretty good idea of the architecture. If it's complicated or there are unusual bits -- things that differ from my normal practices -- or I'm working with someone else (not typical), I'll diagram things (again on a whiteboard). The same is true of complicated interactions -- I may design the page layout and flow on a whiteboard, keeping it (or capturing via camera) until I'm done with that section. Once I have a general idea of where I'm going and what needs to be done first, I'll start writing tests for the first stories. Usually, this goes like: "Okay, to do that I'll need these classes. I'll start with this one and it needs to do this." Then I start merrily TDDing along and the architecture/design grows from the needs of the application.
Periodically, I'll find myself wanting to write some bits of code over again or think "this really smells" and I'll refactor my design to remove duplication or replace the smelly bits with something more elegant. Mostly, I'm concerned with getting the functionality down while following good design principles. I find that using known patterns and paying attention to good principles as you go along works out pretty well.
http://dn.codegear.com/article/31863
I use UML, and find that guide pretty useful and easy to read. Let me know if you need something different.
UML is a notation. It is a way of recording your design, but not (in my opinion) of doing a design. If you need to write things down, I would recommend UML, though, not because it's the "best" but because it is a standard which others probably already know how to read, and it beats inventing your own "standard".
I think the best introduction to UML is still UML Distilled, by Martin Fowler, because it's concise, gives practical guidance on where to use it, and makes it clear you don't have to buy into the whole UML/RUP story for it to be useful.
Doing design is hard. It can't really be captured in one StackOverflow answer. Unfortunately, my design skills, such as they are, have evolved over the years and so I don't have one source I can refer you to.
However, one model I have found useful is robustness analysis (google for it, but there's an intro here). If you have your use-cases for what the system should do, a domain model of what things are involved, then I've found robustness analysis a useful tool in connecting the two and working out what the key components of the system need to be.
But the best advice is read widely, think hard, and practice. It's not a purely teachable skill, you've got to actually do it.
I'm not smart enough to plan ahead more than a little. When I do plan ahead, my plans always come out wrong, but now I've spent n days on bad plans. My limit seems to be about 15 minutes on the whiteboard.
Basically, I do as little work as I can to find out whether I'm headed in the right direction.
I look at my design for critical questions: when A does B to C, will it be fast enough for D? If not, we need a different design. Each of these questions can be answered with a spike. If the spikes look good, then we have the design and it's time to expand on it.
I code in the direction of getting some real customer value as soon as possible, so a customer can tell me where I should be going.
Because I always get things wrong, I rely on refactoring to help me get them right. Refactoring is risky, so I have to write unit tests as I go. Writing unit tests after the fact is hard because of coupling, so I write my tests first. Staying disciplined about this stuff is hard, and a different brain sees things differently, so I like to have a buddy coding with me. My coding buddy has a nose, so I shower regularly.
Let's call it "Extreme Programming".
"White boards, sketches and Post-it notes are excellent design
tools. Complicated modeling tools have a tendency to be more
distracting than illuminating." From Practices of an Agile Developer
by Venkat Subramaniam and Andy Hunt.
I'm not convinced anything can be planned in advance before implementation. I've got 10 years experience, but that's only been at 4 companies (including 2 sites at one company, that were almost polar opposites), and almost all of my experience has been in terms of watching colossal cluster********s occur. I'm starting to think that stuff like refactoring is really the best way to do things, but at the same time I realize that my experience is limited, and I might just be reacting to what I've seen. What I'd really like to know is how to gain the best experience so I'm able to arrive at proper conclusions, but it seems like there's no shortcut and it just involves a lot of time seeing people doing things wrong :(. I'd really like to give a go at working at a company where people do things right (as evidenced by successful product deployments), to know whether I'm just a contrarian bastard, or if I'm really as smart as I think I am.
I beg to differ: UML can be used for application architecture, but is more often used for technical architecture (frameworks, class or sequence diagrams, ...), because this is where those diagrams can most easily be kept in sync with the development.
Application Architecture occurs when you take some functional specifications (which describe the nature and flows of operations without making any assumptions about a future implementation), and you transform them into technical specifications.
Those specifications represent the applications you need for implementing some business and functional needs.
So if you need to process several large financial portfolios (functional specification), you may determine that you need to divide that large specification into:
a dispatcher to assign those heavy calculations to different servers
a launcher to make sure all calculation servers are up and running before starting to process those portfolios.
a GUI to be able to show what is going on.
a "common" component to develop the specific portfolio algorithms, independently of the rest of the application architecture, in order to facilitate unit testing, but also some functional and regression testing.
So basically, to think about application architecture is to decide which "groups of files" you need to develop in a coherent way (you cannot develop a launcher, a GUI and a dispatcher in the same group of files: they would not be able to evolve at the same pace).
When an application architecture is well defined, each of its components is usually a good candidate for a configuration component, that is, a group of files which can be versioned as a whole in a VCS (Version Control System), meaning all its files will be labeled together every time you need to record a snapshot of that application (again, it would be hard to label your whole system, since each of its applications cannot be in a stable state at the same time).
I have been doing architecture for a while. I use BPML first to refine the business process, and then use UML to capture various details! The third step generally is an ERD! By the time you are done with BPML and UML, your ERD will be fairly stable! No plan is perfect and no abstraction is going to be 100%. Plan on refactoring; the goal is to minimize refactoring as much as possible!
I try to break my thinking down into two areas: a representation of the things I'm trying to manipulate, and what I intend to do with them.
When I'm trying to model the stuff I'm trying to manipulate, I come up with a series of discrete item definitions: an e-commerce site will have a SKU, a product, a customer, and so forth. I'll also have some non-material things that I'm working with: an order, or a category. Once I have all of the "nouns" in the system, I'll make a domain model that shows how these objects are related to each other: an order has a customer and multiple SKUs, many SKUs are grouped into a product, and so on.
These domain models can be represented as UML domain models, class diagrams, and SQL ERDs.
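As a rough sketch (class and field names here are illustrative, not from the original answer), the "nouns" stage might produce something like:

```java
import java.util.List;

// The "nouns" of a small e-commerce domain and how they relate:
// an order has a customer and multiple SKUs; SKUs group into a product.
class Sku {
    String code;
    double price;
}

class Product {
    String name;
    List<Sku> skus;      // many SKUs are grouped into a product
}

class Customer {
    String name;
}

class Order {
    Customer customer;   // an order has a customer...
    List<Sku> lines;     // ...and multiple SKUs
}
```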
Once I have the nouns of the system figured out, I move on to the verbs- for instance, the operations that each of these items go through to commit an order. These usually map pretty well to use cases from my functional requirements- the easiest way to express these that I've found is UML sequence, activity, or collaboration diagrams or swimlane diagrams.
It's important to think of this as an iterative process; I'll do a little corner of the domain, and then work on the actions, and then go back. Ideally I'll have time to write code to try stuff out as I'm going along- you never want the design to get too far ahead of the application. This process is usually terrible if you think that you are building the complete and final architecture for everything; really, all you're trying to do is establish the basic foundations that the team will be sharing in common as they move through development. You're mostly creating a shared vocabulary for team members to use as they describe the system, not laying down the law for how it's gotta be done.
I find myself having trouble fully thinking a system out before coding it. It's just too easy to only bring a cursory glance to some components which you only later realize are much more complicated than you thought they were.
One solution is to just try really hard. Write UML everywhere. Go through every class. Think how it will interact with your other classes. This is difficult to do.
What I like doing is to make a general overview at first. I don't like UML, but I do like drawing diagrams which get the point across. Then I begin to implement it. Even while I'm just writing out the class structure with empty methods, I often see things that I missed earlier, so then I update my design. As I'm coding, I'll realize I need to do something differently, so I'll update my design. It's an iterative process. The concept of "design everything first, and then implement it all" is known as the waterfall model, and I think others have shown it's a bad way of doing software.
Try ArchiMate.
This is definitely subjective, but I'd like to try to avoid it becoming argumentative. I think it could be an interesting question if people treat it appropriately.
The idea for this question came from the comment thread from my answer to the "What are five things you hate about your favorite language?" question. I contended that classes in C# should be sealed by default - I won't put my reasoning in the question, but I might write a fuller explanation as an answer to this question. I was surprised at the heat of the discussion in the comments (25 comments currently).
So, what contentious opinions do you hold? I'd rather avoid the kind of thing which ends up being pretty religious with relatively little basis (e.g. brace placing) but examples might include things like "unit testing isn't actually terribly helpful" or "public fields are okay really". The important thing (to me, anyway) is that you've got reasons behind your opinions.
Please present your opinion and reasoning - I would encourage people to vote for opinions which are well-argued and interesting, whether or not you happen to agree with them.
Programmers who don't code in their spare time for fun will never become as good as those that do.
I think even the smartest and most talented people will never become truly good programmers unless they treat it as more than a job. Meaning that they do little projects on the side, or just mess with lots of different languages and ideas in their spare time.
(Note: I'm not saying good programmers do nothing else than programming, but they do more than program from 9 to 5)
The only "best practice" you should be using all the time is "Use Your Brain".
Too many people jumping on too many bandwagons and trying to force methods, patterns, frameworks etc onto things that don't warrant them. Just because something is new, or because someone respected has an opinion, doesn't mean it fits all :)
EDIT:
Just to clarify - I don't think people should ignore best practices, valued opinions etc. Just that people shouldn't just blindly jump on something without thinking about WHY this "thing" is so great, IS it applicable to what I'm doing, and WHAT benefits/drawbacks does it bring?
"Googling it" is okay!
Yes, I know it offends some people out there that their years of intense memorization and/or glorious stacks of programming books are starting to fall by the wayside to a resource that anyone can access within seconds, but you shouldn't hold that against people that use it.
Too often I hear the googling of answers to problems being made the subject of criticism, and it really makes no sense. First of all, it must be conceded that everyone needs materials to reference. You don't know everything and you will need to look things up. Conceding that, does it really matter where you got the information? Does it matter if you looked it up in a book, looked it up on Google, or heard it from a talking frog that you hallucinated? No. A right answer is a right answer.
What is important is that you understand the material, use it as the means to an end of a successful programming solution, and the client/your employer is happy with the results.
(although if you are getting answers from hallucinatory talking frogs, you should probably get some help all the same)
Most comments in code are in fact a pernicious form of code duplication.
We spend most of our time maintaining code written by others (or ourselves) and poor, incorrect, outdated, misleading comments must be near the top of the list of most annoying artifacts in code.
I think eventually many people just blank them out, especially those flowerbox monstrosities.
Much better to concentrate on making the code readable, refactoring as necessary, and minimising idioms and quirkiness.
On the other hand, many courses teach that comments are very nearly more important than the code itself, leading to the "this next line adds one to invoiceTotal" style of commenting.
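A tiny hypothetical example of the difference (the invoiceTotal line echoes the one mentioned above; the "billing policy" is made up for illustration):

```java
class Invoice {
    private int invoiceTotal;

    void addLateFee(int lateFee) {
        // Bad: this comment duplicates the code and will rot as the code changes.
        // add lateFee to invoiceTotal
        invoiceTotal = invoiceTotal + lateFee;
    }

    void applyGoodwillCredit(int credit) {
        // Good: the comment says *why*, which the code cannot.
        // Credits are applied before tax, per the billing policy.
        invoiceTotal -= credit;
    }
}
```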
XML is highly overrated
I think too many jump onto the XML bandwagon before using their brains...
XML for web stuff is great, as it's designed for it. Otherwise I think some problem definition and design thought should precede any decision to use it.
My 5 cents
Not all programmers are created equal
Quite often managers think that DeveloperA == DeveloperB simply because they have same level of experience and so on. In actual fact, the performance of one developer can be 10x or even 100x that of another.
It's politically risky to talk about it, but sometimes I feel like pointing out that, even though several team members may appear to be of equal skill, it's not always the case. I have even seen cases where lead developers were 'beyond hope' and junior devs did all the actual work - I made sure they got the credit, though. :)
I fail to understand why people think that Java is absolutely the best "first" programming language to be taught in universities.
For one, I believe that the first programming language should be one that highlights the need to learn control flow and variables, not objects and syntax.
For another, I believe that people who have not had experience in debugging memory leaks in C / C++ cannot fully appreciate what Java brings to the table.
Also the natural progression should be from "how can I do this" to "how can I find the library which does that" and not the other way round.
If you only know one language, no matter how well you know it, you're not a great programmer.
There seems to be an attitude that says once you're really good at C# or Java or whatever other language you started out learning then that's all you need. I don't believe it- every language I have ever learned has taught me something new about programming that I have been able to bring back into my work with all the others. I think that anyone who restricts themselves to one language will never be as good as they could be.
It also indicates to me a certain lack of inquisitiveness and willingness to experiment that doesn't necessarily tally with the qualities I would expect to find in a really good programmer.
Performance does matter.
Print statements are a valid way to debug code
I believe it is perfectly fine to debug your code by littering it with System.out.println (or whatever print statement works for your language). Often, this can be quicker than stepping through in a debugger, and you can compare printed outputs against other runs of the app.
Just make sure to remove the print statements when you go to production (or better, turn them into logging statements)
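For example, a minimal Java sketch of that last suggestion (the class and message are made up for illustration):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

class OrderProcessor {
    private static final Logger LOG = Logger.getLogger(OrderProcessor.class.getName());

    void process(int orderId) {
        // Quick-and-dirty debug output while investigating:
        System.out.println("processing order " + orderId);

        // The same trace as a logging statement: safe to leave in, and it
        // can be silenced in production via log configuration.
        LOG.log(Level.FINE, "processing order {0}", orderId);
    }
}
```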
Your job is to put yourself out of work.
When you're writing software for your employer, any software that you create is to be written in such a way that it can be picked up by any developer and understood with a minimal amount of effort. It is well designed, clearly and consistently written, formatted cleanly, documented where it needs to be, builds daily as expected, checked into the repository, and appropriately versioned.
If you get hit by a bus, laid off, fired, or walk off the job, your employer should be able to replace you on a moment's notice, and the next guy could step into your role, pick up your code and be up and running within a week tops. If he or she can't do that, then you've failed miserably.
Interestingly, I've found that having that goal has made me more valuable to my employers. The more I strive to be disposable, the more valuable I become to them.
1) The Business Apps farce:
I think that the whole "Enterprise" frameworks thing is smoke and mirrors. J2EE, .NET, the majority of the Apache frameworks and most abstractions to manage such things create far more complexity than they solve.
Take any regular Java or .NET ORM, or any supposedly modern MVC framework for either which does "magic" to solve tedious, simple tasks. You end up writing huge amounts of ugly XML boilerplate that is difficult to validate and write quickly. You have massive APIs where half of those are just to integrate the work of the other APIs, interfaces that are impossible to recycle, and abstract classes that are needed only to overcome the inflexibility of Java and C#. We simply don't need most of that.
How about all the different application servers with their own darned descriptor syntax, the overly complex database and groupware products?
The point of this is not that complexity==bad, it's that unnecessary complexity==bad. I've worked in massive enterprise installations where some of it was necessary, but even in most cases a few home-grown scripts and a simple web frontend is all that's needed to solve most use cases.
I'd try to replace all of these enterprisey apps with simple web frameworks, open source DBs, and trivial programming constructs.
2) The n-years-of-experience-required:
Unless you need a consultant or a technician to handle a specific issue related to an application, API or framework, then you don't really need someone with 5 years of experience in that application. What you need is a developer/admin who can read documentation, who has domain knowledge in whatever it is you're doing, and who can learn quickly. If you need to develop in some kind of language, a decent developer will pick it up in less than 2 months. If you need an administrator for X web server, in two days he should have read the man pages and newsgroups and be up to speed. Anything less and that person is not worth what he is paid.
3) The common "computer science" degree curriculum:
The majority of computer science and software engineering degrees are bull. If your first programming language is Java or C#, then you're doing something wrong. If you don't get several courses full of algebra and math, it's wrong. If you don't delve into functional programming, it's incomplete. If you can't apply loop invariants to a trivial for loop, you're not worth your salt as a supposed computer scientist. If you come out with experience in x and y languages and object orientation, it's full of s***. A real computer scientist sees a language in terms of the concepts and syntaxes it uses, and sees programming methodologies as one among many, and has such a good understanding of the underlying philosophies of both that picking new languages, design methods, or specification languages should be trivial.
Getters and Setters are Highly Overused
I've seen millions of people claiming that public fields are evil, so they make them private and provide getters and setters for all of them. I believe this is almost identical to making the fields public; maybe a bit different if you're using threads (but generally that is not the case) or if your accessors have business/presentation logic (something 'strange' at least).
I'm not in favor of public fields, but against making a getter/setter (or Property) for everyone of them, and then claiming that doing that is encapsulation or information hiding... ha!
UPDATE:
This answer has raised some controversy in its comments, so I'll try to clarify it a bit (I'll leave the original untouched since that is what many people upvoted).
First of all: anyone who uses public fields deserves jail time
Now, creating private fields and then using the IDE to automatically generate getters and setters for every one of them is nearly as bad as using public fields.
Many people think:
private fields + public accessors == encapsulation
I say that (automatic or not) generation of a getter/setter pair for your fields effectively goes against the so-called encapsulation you are trying to achieve.
Lastly, let me quote Uncle Bob in this topic (taken from chapter 6 of "Clean Code"):
There is a reason that we keep our variables private. We don't want anyone else to depend on them. We want the freedom to change their type or implementation on a whim or an impulse. Why, then, do so many programmers automatically add getters and setters to their objects, exposing their private fields as if they were public?
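A small hypothetical sketch of the difference (illustrative classes, not from Clean Code): the first "encapsulates" in name only, while the second exposes behaviour instead of data:

```java
// In name only: the getter/setter pair exposes the field anyway.
class Account {
    private int balance;
    public int getBalance()           { return balance; }
    public void setBalance(int value) { balance = value; }
}

// Actual information hiding: callers see operations, not the field.
class LedgerAccount {
    private int balance;
    public void deposit(int amount) {
        balance += amount;
    }
    public void withdraw(int amount) {
        if (amount > balance) {
            throw new IllegalArgumentException("insufficient funds");
        }
        balance -= amount;
    }
}
```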
UML diagrams are highly overrated
Of course there are useful diagrams e.g. class diagram for the Composite Pattern, but many UML diagrams have absolutely no value.
Opinion: SQL is code. Treat it as such
That is, just like your C#, Java, or other favorite object/procedure language, develop a formatting style that is readable and maintainable.
I hate it when I see sloppy, free-formatted SQL code. If you scream when you see both styles of curly braces on a page, why don't you scream when you see free-formatted SQL, or SQL that obscures or obfuscates the JOIN condition?
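A hypothetical before-and-after of the same query (shown as Java string constants, since that is where a lot of ad-hoc SQL ends up; table and column names are invented):

```java
class Queries {
    // Free-formatted: the join condition hides in the middle of the line.
    static final String SLOPPY =
        "select o.id, c.name from orders o, customers c where o.customer_id = c.id and o.total > 100";

    // Formatted as code: the JOIN condition and the filter are obvious at a glance.
    static final String READABLE =
        "SELECT o.id, c.name\n" +
        "FROM   orders o\n" +
        "JOIN   customers c ON o.customer_id = c.id\n" +
        "WHERE  o.total > 100";
}
```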
Readability is the most important aspect of your code.
Even more so than correctness. If it's readable, it's easy to fix. It's also easy to optimize, easy to change, easy to understand. And hopefully other developers can learn something from it too.
If you're a developer, you should be able to write code
I did quite a bit of interviewing last year, and for my part of the interview I was supposed to test the way people thought, and how they implemented simple-to-moderate algorithms on a white board. I'd initially started out with questions like:
Given that Pi can be estimated using the function 4 * (1 - 1/3 + 1/5 - 1/7 + ...) with more terms giving greater accuracy, write a function that calculates Pi to an accuracy of 5 decimal places.
It's a problem that should make you think, but shouldn't be out of reach to a seasoned developer (it can be answered in about 10 lines of C#). However, many of our (supposedly pre-screened by the agency) candidates couldn't even begin to answer it, or even explain how they might go about answering it. So after a while I started asking simpler questions like:
Given the area of a circle is given by Pi times the radius squared, write a function to calculate the area of a circle.
Amazingly, more than half the candidates couldn't write this function in any language (I can read most popular languages so I let them use any language of their choice, including pseudo-code). We had "C# developers" who could not write this function in C#.
I was surprised by this. I had always thought that developers should be able to write code. It seems that, nowadays, this is a controversial opinion. Certainly it is amongst interview candidates!
Edit:
There's a lot of discussion in the comments about whether the first question is a good or bad one, and whether you should ask questions as complex as this in an interview. I'm not going to delve into this here (that's a whole new question) apart from to say you're largely missing the point of the post.
Yes, I said people couldn't make any headway with this, but the second question is trivial and many people couldn't make any headway with that one either! Anybody who calls themselves a developer should be able to write the answer to the second one in a few seconds without even thinking. And many can't.
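For the curious, here is one possible shape of the answers, sketched in Java rather than the C# mentioned above (one of many valid approaches):

```java
class InterviewAnswers {
    // Question 1: sum the alternating series 4 * (1 - 1/3 + 1/5 - 1/7 + ...)
    // until the next term can no longer affect the fifth decimal place.
    // (For an alternating series, the truncation error is bounded by the
    // first omitted term.)
    static double estimatePi() {
        double sum = 0.0;
        double sign = 1.0;
        for (long k = 1; 4.0 / k > 0.5e-5; k += 2) {
            sum += sign / k;
            sign = -sign;
        }
        return 4.0 * sum;
    }

    // Question 2: the one more than half the candidates couldn't write.
    static double circleArea(double radius) {
        return Math.PI * radius * radius;
    }
}
```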
The use of Hungarian notation should be punished with death.
That should be controversial enough ;)
Design patterns are hurting good design more than they're helping it.
IMO software design, especially good software design, is far too varied to be meaningfully captured in patterns, especially in the small number of patterns people can actually remember - and they're far too abstract for people to really remember more than a handful. So they're not helping much.
And on the other hand, far too many people become enamoured with the concept and try to apply patterns everywhere - usually, in the resulting code you can't find the actual design between all the (completely meaningless) Singletons and Abstract Factories.
Less code is better than more!
If the users say "that's it?", and your work remains invisible, it's done right. Glory can be found elsewhere.
PHP sucks ;-)
The proof is in the pudding.
Unit Testing won't help you write good code
The only reason to have Unit tests is to make sure that code that already works doesn't break. Writing tests first, or writing code to the tests is ridiculous. If you write to the tests before the code, you won't even know what the edge cases are. You could have code that passes the tests but still fails in unforeseen circumstances.
And furthermore, good developers will keep coupling low, which will make the addition of new code unlikely to cause problems with existing stuff.
In fact, I'll generalize that even further,
Most "Best Practices" in Software Engineering are there to keep bad programmers from doing too much damage.
They're there to hand-hold bad developers and keep them from making dumbass mistakes. Of course, since most developers are bad, this is a good thing, but good developers should get a pass.
Write small methods. It seems that programmers love to write loooong methods where they do multiple different things.
I think that a method should be created wherever you can name one.
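A trivial hypothetical illustration of that rule: once a chunk of a long method has an obvious name, extract it, and the top level reads like a summary (all names here are invented):

```java
class OrderProcessing {
    void processOrder(Order order) {
        validate(order);          // was a page of inline checks
        applyDiscounts(order);    // was a nested loop over line items
        sendConfirmation(order);  // was a block of email plumbing
    }

    private void validate(Order order)         { /* ... */ }
    private void applyDiscounts(Order order)   { /* ... */ }
    private void sendConfirmation(Order order) { /* ... */ }
}

class Order { /* ... */ }
```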
It's ok to write garbage code once in a while
Sometimes a quick and dirty piece of garbage code is all that is needed to fulfill a particular task. Patterns, ORMs, SRP, whatever... Throw up a console or web app, write some inline SQL (feels good), and blast out the requirement.
Code == Design
I'm no fan of sophisticated UML diagrams and endless code documentation. In a high level language, your code should be readable and understandable as is. Complex documentation and diagrams aren't really any more user friendly.
Here's an article on the topic of Code as Design.
Software development is just a job
Don't get me wrong, I enjoy software development a lot. I've written a blog for the last few years on the subject. I've spent enough time on here to have >5000 reputation points. And I work in a start-up doing typically 60 hour weeks for much less money than I could get as a contractor because the team is fantastic and the work is interesting.
But in the grand scheme of things, it is just a job.
It ranks in importance below many things such as family, my girlfriend, friends, happiness etc., and below other things I'd rather be doing if I had an unlimited supply of cash such as riding motorbikes, sailing yachts, or snowboarding.
I think sometimes a lot of developers forget that developing is just something that allows us to have the more important things in life (and to have them by doing something we enjoy) rather than being the end goal in itself.
I also think there's nothing wrong with having binaries in source control... if there is a good reason for it. If I have an assembly I don't have the source for, and which might not necessarily be in the same place on each dev's machine, then I will usually stick it in a "binaries" directory and reference it in a project using a relative path.
Quite a lot of people seem to think I should be burned at the stake for even mentioning "source control" and "binary" in the same sentence. I even know of places that have strict rules saying you can't add them.
Every developer should be familiar with the basic architecture of modern computers. This also applies to developers who target a virtual machine (maybe even more so, because they have been told time and time again that they don't need to worry themselves with memory management etc.)
Software Architects/Designers are Overrated
As a developer, I hate the idea of Software Architects. They are basically people that no longer code full time, read magazines and articles, and then tell you how to design software. Only people that actually write software full time for a living should be doing that. I don't care if you were the world's best coder 5 years ago before you became an Architect, your opinion is useless to me.
How's that for controversial?
Edit (to clarify): I think most Software Architects make great Business Analysts (talking with customers, writing requirements, tests, etc), I simply think they have no place in designing software, high level or otherwise.
There is no "one size fits all" approach to development
I'm surprised that this is a controversial opinion, because it seems to me like common sense. However, there are many entries on popular blogs promoting the "one size fits all" approach to development so I think I may actually be in the minority.
Things I've seen being touted as the correct approach for any project - before any information is known about it - are things like the use of Test Driven Development (TDD), Domain Driven Design (DDD), Object-Relational Mapping (ORM), Agile (capital A), Object Orientation (OO), etc. etc. encompassing everything from methodologies to architectures to components. All with nice marketable acronyms, of course.
People even seem to go as far as putting badges on their blogs such as "I'm Test Driven" or similar, as if their strict adherence to a single approach, whatever the details of the project, is actually a good thing.
It isn't.
Choosing the correct methodologies and architectures and components, etc., is something that should be done on a per-project basis, and depends not only on the type of project you're working on and its unique requirements, but also the size and ability of the team you're working with.
What's the penetration of design patterns in the real world? Do you use them in your day to day job - discussing how and where to apply them with your coworkers - or do they remain more of an academic concept?
Do they actually provide actual value to your job? Or are they just something that people talk about to sound smart?
Note: For the purpose of this question ignore 'simple' design patterns like Singleton. I'm talking about designing your code so you can take advantage of Model View Controller, etc.
Any large program that is well written will use design patterns, even if they aren't named or recognized as such. That's what design patterns are, designs that repeatedly and naturally occur. If you're interfacing with an ugly API, you'll likely find yourself implementing a Facade to clean it up. If you've got messaging between components that you need to decouple, you may find yourself using Observer. If you've got several interchangeable algorithms, you might end up using Strategy.
It's worth knowing the design patterns because you're more likely to recognize them and then converge on a clean solution more quickly. However, even if you don't know them at all, you'll end up creating them eventually (if you are a decent programmer).
And of course, if you are using a modern language, you'll probably be forced to use them for some things, because they're baked into the standard libraries.
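As a concrete example of the "several interchangeable algorithms" case, here is a minimal Strategy in Java (the pricing names are illustrative):

```java
import java.util.List;

// Interchangeable algorithms behind one interface: the Strategy pattern.
interface PricingStrategy {
    double price(double base);
}

class RegularPricing implements PricingStrategy {
    public double price(double base) { return base; }
}

class SalePricing implements PricingStrategy {
    public double price(double base) { return base * 0.8; }
}

class Checkout {
    private final PricingStrategy strategy;

    Checkout(PricingStrategy strategy) { this.strategy = strategy; }

    double total(List<Double> basePrices) {
        return basePrices.stream().mapToDouble(strategy::price).sum();
    }
}
```

The JDK's Comparator is the same idea baked into the standard library: you swap sorting behaviour by passing a different implementation.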
In my opinion, the question "Do you use design patterns?", taken alone, is a little flawed because the answer is universally YES.
Let me explain: we, programmers and designers, all use design patterns... we just don't always realise it. I know this sounds cliché, but you don't go to patterns, patterns come to you. You design stuff; it might look like an existing pattern, so you name it that way so everyone understands what you are talking about, and the rationale behind your design decision is stronger, knowing it has been discussed ad nauseam before.
I personally use patterns as a communication tool. That's it. They are not design solutions, they are not best practices, they are not tools in a toolbox.
Don't get me wrong, if you are a beginner, books on patterns will show you how a solution is best solved "using" their patterns rather than another flawed design. You will probably learn from the exercise. However, you have to realise that this doesn't mean that every situation needs a corresponding pattern to solve it. Every situation has a quirk here and there that will require you to think about alternatives and take a difficult decision with no perfect answer. That's design.
Anti-patterns, however, are in a totally different class. You actually want to actively avoid anti-patterns. That's why the name "anti-pattern" is so controversial.
To get back to your original question:
"Do I use design patterns?", Yes!
"Do I actively lean toward design patterns?", No.
Yes. Design patterns can be wonderful when used appropriately. As you mentioned, I am now using Model-View-Controller (MVC) for all of my web projects. It is a very common pattern in the web space which makes server-side code much cleaner and better organized.
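As an extremely stripped-down illustration of that separation - real web frameworks add routing, templating, and persistence on top, and these class names are invented - a toy Python sketch:

    class Model:
        """Owns the data; knows nothing about presentation."""
        def __init__(self):
            self.items = ["apples", "pears"]


    class View:
        """Owns the presentation; knows nothing about storage."""
        @staticmethod
        def render(items):
            return "\n".join(f"- {item}" for item in items)


    class Controller:
        """Mediates: takes a request, consults the model, picks a view."""
        def __init__(self, model, view):
            self.model, self.view = model, view

        def list_items(self):
            return self.view.render(self.model.items)


    print(Controller(Model(), View()).list_items())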
Beyond that, here are some other patterns that may be useful:
MVVM (Model-View-ViewModel): a similar pattern to MVC; used for WPF and Silverlight applications.
Composite: Great for when you need to work with a hierarchy of objects.
Singleton: More elegant than using globals for storing items that truly need a single instance. As you mentioned, a simple pattern, but it does have its uses (see the sketch just below).
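As a rough sketch of that Singleton point, one common Python idiom is to override __new__ so repeated construction returns the same instance (the Configuration class is made up for the example):

    class Configuration:
        """Singleton: repeated construction yields one shared instance."""

        _instance = None

        def __new__(cls):
            if cls._instance is None:
                cls._instance = super().__new__(cls)
                cls._instance.settings = {}  # initialised exactly once
            return cls._instance


    a = Configuration()
    b = Configuration()
    a.settings["debug"] = True
    assert a is b and b.settings["debug"]  # both names share one instance

(In Python a plain module-level instance often does the same job with less machinery, which is itself an argument for using the simplest form that fits.)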
It is worth noting that a design pattern can also highlight missing language features or deficiencies in a language. For example, iterators are now built into newer languages.
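Python illustrates that point nicely: the hand-rolled GoF Iterator and the language's built-in generator support produce the same result, but the latter absorbs all the boilerplate (the countdown example is invented):

    # Hand-rolled Iterator pattern, GoF style:
    class CountdownIterator:
        def __init__(self, start):
            self.current = start

        def __iter__(self):
            return self

        def __next__(self):
            if self.current <= 0:
                raise StopIteration
            value = self.current
            self.current -= 1
            return value


    # The same behaviour using the language's built-in support:
    def countdown(start):
        while start > 0:
            yield start
            start -= 1


    assert list(CountdownIterator(3)) == list(countdown(3)) == [3, 2, 1]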
In general design patterns are quite useful but you should not use them everywhere; just where they are a good fit for your needs.
I try to, yes. They do indeed help the maintainability and readability of your code. However, there are people who do abuse them, usually (from what I've seen) by forcing a pattern onto a system where it doesn't fit.
I try to use patterns if they are applicable. I think it's kind of sad seeing developers implement design patterns in code just for the sake of it. For the right task though, design patterns can be very useful and powerful.
There are many design patterns beyond the simple ones in use in the real world. A good example: Stack Overflow uses the Model View Controller pattern. I have used class factories multiple times in projects for my employer, and I have seen many existing projects using them as well.
I am not saying every design pattern is being used, but many are.
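For anyone unfamiliar with the term, a class factory in the sense used above can be sketched in a few lines of Python (the report classes are invented for illustration):

    class Report:
        def render(self) -> str:
            raise NotImplementedError


    class HtmlReport(Report):
        def render(self) -> str:
            return "<h1>report</h1>"


    class TextReport(Report):
        def render(self) -> str:
            return "REPORT"


    def report_factory(fmt: str) -> Report:
        """Choose the concrete class from input; callers see only Report."""
        formats = {"html": HtmlReport, "text": TextReport}
        try:
            return formats[fmt]()
        except KeyError:
            raise ValueError(f"unknown report format: {fmt!r}")


    print(report_factory("html").render())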
Yes, we do. It usually happens when we start designing something and then someone notices that it resembles an existing pattern. We then take a look at it and see how it would help us achieve our goal.
We also use patterns that are not documented but that emerge from designing a lot.
Mind you, we don't use them a lot.
Yes, Factory, Chain of Responsibility, Command, Proxy, Visitor, and Observer, among others, are in use in a codebase I work with daily. As far as MVC goes, this site seems to use it quite well, and the devs couldn't say enough good things in the latest podcast.
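To pick one from that list: Chain of Responsibility passes a request along a chain of handlers until one of them accepts it. A minimal Python sketch, with hypothetical handler names:

    class Handler:
        def __init__(self, successor=None):
            self.successor = successor

        def handle(self, level: int, message: str):
            if self.successor:  # pass the request along the chain
                self.successor.handle(level, message)


    class ConsoleHandler(Handler):
        def handle(self, level, message):
            if level < 2:
                print(f"console: {message}")
            else:
                super().handle(level, message)  # escalate


    class PagerHandler(Handler):
        def handle(self, level, message):
            print(f"paging on-call: {message}")


    chain = ConsoleHandler(successor=PagerHandler())
    chain.handle(0, "all good")       # handled by the console
    chain.handle(3, "database down")  # escalated along the chain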
Yes, I use a lot of well-known design patterns, but I also end up building some software that I later find out uses a 'named' design pattern. Most elegant, reusable designs could be called a 'pattern'. It's a lot like dance moves. We all know the waltz and the two-step, but not everyone has a name for the 'bump and scoot', although most of us do it.
MVC is very well known, so yes, we use design patterns quite a lot. Now, if you're asking about the Gang of Four patterns, there are several that I use, because other maintainers will know the design and what we are working towards in the code. There are several, though, that remain fairly obscure for what we do, so if I use one I don't get the full benefits of using a pattern.
Are they important? Yes, because they give you a way of talking about software design quickly, efficiently, and in generally accepted terms. Can you do better with custom solutions? Well, yes (sorta).
The original GoF patterns were pulled from production code, so they catalogued what was already being used in the wild. They aren't purely or even mostly an academic thing.
I find the MVC pattern really useful for isolating your model logic, which can then be reused or worked on without too much trouble. It also helps decouple your classes and makes unit testing easier. I wrote about it recently (yes, shameless plug here...)
Also, I've recently used a factory pattern from a base class to generate and return the proper DataContext class that I needed on the fly, using LINQ.
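The DataContext/LINQ details are C#-specific, but the shape of that idea - a base class exposing a factory method that decides at runtime which concrete context to hand back - translates roughly as follows; every name in this sketch is invented:

    class DataContext:
        """Base class; the factory method lives here."""

        def __init__(self, connection_string: str):
            self.connection_string = connection_string

        @classmethod
        def create(cls, connection_string: str) -> "DataContext":
            # Decide on the fly which concrete context fits.
            if connection_string.startswith("sqlite:"):
                return SqliteContext(connection_string)
            return PostgresContext(connection_string)


    class SqliteContext(DataContext):
        pass


    class PostgresContext(DataContext):
        pass


    ctx = DataContext.create("sqlite:///tmp/dev.db")
    print(type(ctx).__name__)  # SqliteContext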
Bridges are used when trying to glue together two different technologies (like Cocoa and Ruby on the Mac, for example).
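(The Cocoa/Ruby case is really a language bridge; the GoF Bridge pattern proper decouples an abstraction from its implementation so the two can vary independently. A small Python sketch with invented names:)

    class Renderer:
        """Implementation side of the bridge."""
        def draw_circle(self, radius: float) -> str:
            raise NotImplementedError


    class VectorRenderer(Renderer):
        def draw_circle(self, radius):
            return f"<circle r='{radius}'/>"


    class RasterRenderer(Renderer):
        def draw_circle(self, radius):
            return f"pixels for a circle of radius {radius}"


    class Circle:
        """Abstraction side; holds a reference to an implementation."""
        def __init__(self, renderer: Renderer, radius: float):
            self.renderer = renderer
            self.radius = radius

        def draw(self) -> str:
            return self.renderer.draw_circle(self.radius)


    # Abstraction and implementation vary independently:
    print(Circle(VectorRenderer(), 2.0).draw())
    print(Circle(RasterRenderer(), 2.0).draw())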
I find, however, that whenever I implement a pattern, it's because I knew about it beforehand. Some extra thought generally goes into it, as I find I must modify the original pattern slightly to accommodate my needs.
You just need to be careful not to become an architecture astronaut!
Yes, design patterns are largely used in the real world - and daily by many of the people I work with.
In my opinion the biggest value provided by design patterns is that they provide a universal, high level language for you to convey software design to other programmers.
For instance, instead of describing your new class as a "utility that creates one of several other classes based on some combination of input criteria", you can simply say it's an "abstract factory" and everyone instantly understands what you're talking about.
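To make that description concrete, an abstract factory - a family of related products behind one creating interface - could be sketched like this in Python (the widget names are made up):

    from abc import ABC, abstractmethod


    class WidgetFactory(ABC):
        """Abstract factory: creates a family of related objects."""

        @abstractmethod
        def make_button(self) -> str: ...

        @abstractmethod
        def make_checkbox(self) -> str: ...


    class DarkThemeFactory(WidgetFactory):
        def make_button(self):
            return "dark button"

        def make_checkbox(self):
            return "dark checkbox"


    class LightThemeFactory(WidgetFactory):
        def make_button(self):
            return "light button"

        def make_checkbox(self):
            return "light checkbox"


    def build_dialog(factory: WidgetFactory):
        # Callers depend only on the abstract interface.
        return factory.make_button(), factory.make_checkbox()


    print(build_dialog(DarkThemeFactory()))

The "everyone instantly understands" benefit is exactly this: the words "abstract factory" tell a colleague the whole structure above without them reading a line of it.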
Yes, design patterns - or, more abstractly, patterns - are part of my life; wherever I look, I begin to see them. Therefore, I am surrounded by them. But, as you know, a little knowledge is a dangerous thing, so I strongly recommend you read the GoF book.
One of the main problems with design patterns is that most developers just do not get the idea, or do not believe in them; most of the time they argue about variables, loops, or switches instead. But I strongly believe that if you do not speak the pattern language, your software will not go far, and you will find yourself in a maintenance nightmare.
As you know, an anti-pattern is also a dangerous thing, and it emerges when you have little expertise with design patterns. And refactoring anti-patterns away is much harder. As a recommended book on this problem, please read "AntiPatterns: Refactoring Software, Architectures, and Projects in Crisis".
Yes.
We are even using them in my current job: Mainframe coding with COBOL and PL/I.
So far I have seen Adapter, Visitor, Facade, Module, Observer, and something very close to Composite and Iterator. Due to the nature of the languages, it's mostly structural patterns that are used. Also, I'm not always sure that the people who use them do so consciously :D
I absolutely use design patterns. At this point I take MVC for granted as a design pattern. My primary reason for using them is that I am humble enough to know that I am likely not the first person to encounter a particular problem. I rarely start a piece of code knowing which pattern I am going to use; I constantly watch the code to see if it naturally develops into an existing pattern.
I am also very fond of Martin Fowler's Patterns of Enterprise Application Architecture. When a problem or task presents itself, I flip to the related section (it's mostly a reference book) and read a few overviews of the patterns. Once I have a better idea of the general problem and the existing solutions, I begin to see the long-term path my code will likely take, via the experience of others. I end up making much better decisions.
Design patterns definitely play a big role in all of my "for the future" ideas.