How would you explain to a non-technical person why writing code (business-logic) behind the onclick event is a bad practice and leads to unmaintainable code?
Edited:
I have to explain to management why some refactoring is needed and why some code is not passing code review. To some people in management this only means more funding. I came up with this example because at some point in a discussion somebody said: "..put the code behind the button and forget all that model-view-controller hype, you will finish your task faster."
This is how I would explain it:
The difference between writing code behind onclick events and writing applications that are tiered or layered is like the difference between a medieval town and a modern city.
In a medieval town, everyone plows his own field, sews his own clothes, builds his own house and educates his children to the best of his abilities. No one is really specialized to do any one task well; they all have to do every task necessary for survival.
This is what writing code behind onclick events is like: the code has to do everything, handle UI interactions, do business validations, handle database access, and this repeats for every event.
In a modern city there are farmers who practice agriculture on a larger scale and specialize in it, tailors who can sew better clothing for everyone because of greater experience and specialization, builders, and teachers who teach children in schools and can do a better job of it because they have more time for it. This is what a tiered application looks like: the UI tier is responsible only for handling user requests and updating the user interface, and is therefore easier to change, replace or extend because it carries no extra code burden; the business tier is responsible for business logic, so all the logic is centralized, reusable, compact and clean; the data access tier handles database interaction and specializes in it, possibly working with more than one type of database.
Writing code behind onclick events is a rudimentary style of programming. It is not the most efficient style and doesn't yield the best possible results in the long run. Although the results are more often than not acceptable (the application works), the application can work a lot better, be easier to maintain and extend, and be more consistent (regarding UI, interactions, error reporting, workflow, etc.) with a proper tiered design that employs good coding practices.
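To make the contrast concrete, here is a minimal sketch in Java Swing (all the names, the table and the JDBC URL are invented for illustration) of the same Save button wired both ways:

    import javax.swing.JButton;
    import javax.swing.JOptionPane;
    import javax.swing.JTextField;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    public class OrderFormSketch {

        // "Medieval town": everything lives directly behind the click event.
        static void wireMonolithicButton(JButton save, JTextField qty) {
            save.addActionListener(e -> {
                // UI handling, business validation and database access all tangled together
                int quantity = Integer.parseInt(qty.getText());
                if (quantity <= 0) {
                    JOptionPane.showMessageDialog(null, "Quantity must be positive");
                    return;
                }
                try (Connection c = DriverManager.getConnection("jdbc:h2:mem:shop"); // placeholder URL
                     PreparedStatement ps = c.prepareStatement("INSERT INTO orders(qty) VALUES (?)")) {
                    ps.setInt(1, quantity);
                    ps.executeUpdate();
                } catch (SQLException ex) {
                    JOptionPane.showMessageDialog(null, "Save failed: " + ex.getMessage());
                }
            });
        }

        // "Modern city": the click handler only translates a UI event into a call
        // to the business tier, which in turn uses the data-access tier.
        static void wireTieredButton(JButton save, JTextField qty, OrderService orders) {
            save.addActionListener(e -> {
                try {
                    orders.placeOrder(Integer.parseInt(qty.getText()));
                    JOptionPane.showMessageDialog(null, "Order saved");
                } catch (IllegalArgumentException ex) {
                    JOptionPane.showMessageDialog(null, ex.getMessage());
                }
            });
        }

        interface OrderRepository { void save(int quantity); }      // data-access tier

        static class OrderService {                                  // business tier
            private final OrderRepository repository;
            OrderService(OrderRepository repository) { this.repository = repository; }
            void placeOrder(int quantity) {
                if (quantity <= 0) {
                    throw new IllegalArgumentException("Quantity must be positive");
                }
                repository.save(quantity);
            }
        }
    }

The first handler has to change whenever the UI, the validation rule or the database changes; the second only changes when the UI does.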
Because you would not connect the doorbell directly to the release buzzer.
Why does a non-technical person need to know this? Since 'code style' really is a technical matter, you won't be able to explain it without explaining some of the technicalities behind it.
Probably the simplest explanation I can think of (but this does not cover all the reasons that it is bad-practice, just what I believe is the most common reason):
When writing software you strive to make it as maintainable as possible; this means being able to quickly adjust the application to changing user/client/management requirements. The code behind an onClick event changes whenever the GUI requirements change; the business logic code changes whenever the business requirements change. By making one independent of the other, there is less work to do when either of these things changes. Furthermore, when updating the business logic you will not need to worry about how it ties into the GUI, and when updating the GUI you will not need to worry about how the business logic actually works.
The other main reason is re-usability. The GUI with all the buttons is really just a 'view' of the underlying data/an interface to that data. You may have several ways of accessing/changing the same information, and it would be redundant to replicate the business logic. This would also add complexity if the business logic changes, since you would have to change it in multiple places.
With pictures and stories :)
Not to be too glib, but build a scenario on a whiteboard where there is a simple piece of business functionality (changing a user's password). Illustrate graphically what it looks like. Now expand it so that two forms need to change the user's password (admin and user). Double the code! Explain DRY. Change the password complexity rules and now we need to fix it twice. Refactor the boxes to move the code to a common area in the same project.
Now expand it again so that another app needs to do the same thing. Now a class library is better. Talk honestly about the increase in complexity versus the maintainability and reuse.
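A rough sketch of where that whiteboard exercise ends up (the class and method names are made up for illustration): one shared class holds the rules, and every form calls it.

    // Shared class library: the password complexity rules live in exactly one place.
    public class ChangePasswordService {

        /** Business rule: if the complexity policy changes, it changes here and nowhere else. */
        public boolean isComplexEnough(String password) {
            return password.length() >= 8
                    && password.chars().anyMatch(Character::isDigit)
                    && password.chars().anyMatch(Character::isUpperCase);
        }

        public void changePassword(String userName, String newPassword) {
            if (!isComplexEnough(newPassword)) {
                throw new IllegalArgumentException("Password does not meet the complexity rules");
            }
            // persist the new password hash for userName here (details omitted)
        }
    }

    // Both forms now delegate to the same object instead of duplicating the rules:
    //   admin form: service.changePassword(selectedUser, newPassword);
    //   user form:  service.changePassword(currentUser, newPassword);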
Rinse and repeat until it sinks in.
I have to ask like anyone else - why?
What matters to a nontechnical person is that the desired behavior is happening when the button is clicked. From their point of view, you are coding into the click event.
But if it really is an issue, focus on what the nontechnical people care about - BUGS. They don't care about making code elegant or having nice design patterns, etc. It's all about whether or not things work.
Say something like this:
Any business rules that have to be programmed into the system may need to be reused in many different places, like a report, a button, a search, etc. I might even need to use that same logic in a web page as well as the software package. You might think it's only going to be needed here right now, but experience has proven that most of the time business rules have to be used in more than one place and it is best to assume it will always be reused.
If I put the business rules behind the button directly, reuse of that logic can be difficult if not impossible. Then I have to put the same logic in the system more than once, which introduces more opportunities for mistakes. I could fix the logic in one place and it would still be broken somewhere else.
Instead, I take the business rules and put them in a central location so no matter where I need them I can reuse them. Then, a bug fix in one place is a bug fixed in all places, and the software will have fewer problems.
An analogy would be links on a webpage. Instead of copying all the text from a webpage into another webpage, we just make a link. Then we always have the most up-to-date information.
Just remember - nontechy people are pragmatic - it's all about what they can immediately see and use.
Tell your managers about technical debt.
I think you should convince your managers of the general principle that refactoring, or paying down the technical debt, is necessary once in a while. Just like a cook has to clean his kitchen once in a while in order to make good food, you have to clean up your code once in a while before you can implement new features.
If your managers don't agree with this general principle, then you're in big trouble. If they do agree, then I think they should not micromanage you but trust your technical expertise on what kind of refactoring has highest priority.
Economics
3/4 of the total life-cycle costs of a typical software application are maintenance. By skimping on the 1/4 part up front you load the 3/4.
Skimp on the development enough and the 3/4 becomes 19/20. Do it properly and your $100,000 project costs $400,000 over its lifetime. Skimp on TLC up front and you save $20,000 now, but your project costs $2 million over its lifetime.
If the CFO is in the meeting you could drop a comment along the lines of:
'But don't worry, the extra $1.5m will come out of someone else's budget so it won't affect your bonus.'
The other hidden cost of leaving a mess lying around is the bit rot will massively slow down changes on the application and increase the risk of undetected bugs that nobody notices until they're right in the middle of a month-end close.
ConcernedOfTunbridgeW explains it well in terms of what management wants to hear. The main point is that if the system has problems right now, it is probably because of redundant code logic. The refactoring you want to do will eliminate that. It will cost a little more now, but it will save money on maintenance in the long term.
The reason I don't usually put the code behind the onclick event is that I'm often duplicating the code, and I want all of those types of onclick events to call the same routine.
At least in those terms, you're not going to have any success explaining this to a non-technical person. It's too technical of a point.
If you generalize a little bit and perhaps try to explain something like separation of concerns (not in those terms), you might have a little more luck.
This is really speaking to separating business layer (how things are processed) and presentation layer (how things are displayed).
The rate of change and the reasons for changing the two are generally very different. You want to be flexible enough to change them, in response to changing business needs as easily as possible, in a manner that reduces risk (of regressions).
We have been trying for a while now to assist the management (of a customer) with the implementation of a new system that is custom developed by ourselves, to their requirements. Their old system is text based (DOS) and their employees have been using it for years. The new system is a Windows GUI and has many advanced features which will make their lives easier and their organisation more efficient. The problem is that the staff do not want to adapt to the new GUI environment and they are now resorting to being unfriendly and as unhelpful as possible, often placing serious obstacles in our way. The management is adamant that implementation must proceed. The system runs trouble free. We have been consistently friendly and helpful with all parties.
Any advice would be greatly appreciated! Have you encountered something like this before, and did you manage to turn it around?
Note: This question is intended to help programmers and others with implementation difficulties by sharing experiences and factual solutions that worked. It is not intended to be subjective, and indeed programming techniques may help to solve the problem.
Use the tool
Somebody needs to really understand how the existing tool works: not just well enough to walk through it, but well enough to do the job for real. Why not take two weeks and go and do their job with them? That will both improve your understanding of the tool and may foster a better working relationship with them. And while you're there, perhaps buy the drinks once or twice; it sounds corny, but anything that lowers the hostility and lets you communicate helps.
User experience
Getting a good developer (or better: designer) who understands user experience is probably key. You can't just completely change their tooling and expect their productivity to remain the same.
Keyboard use:
Think of tools like Visual Studio, AutoCAD, etc.: in most cases you don't need the mouse, and "die hard" types wouldn't notice if you took their mouse away. Try to respect this; provide shortcuts / chords (ideally the same as in the existing system).
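For example, in a Swing front end you can bind the same keystrokes the old text-based screens used. A minimal sketch (the F10-for-save mapping is just a placeholder for whatever the legacy system actually uses):

    import javax.swing.AbstractAction;
    import javax.swing.JComponent;
    import javax.swing.KeyStroke;
    import java.awt.event.ActionEvent;

    public class LegacyShortcuts {

        /** Bind F10 to "save", mirroring the (hypothetical) shortcut from the old DOS system. */
        public static void install(JComponent root, Runnable saveAction) {
            root.getInputMap(JComponent.WHEN_IN_FOCUSED_WINDOW)
                .put(KeyStroke.getKeyStroke("F10"), "legacy-save");
            root.getActionMap().put("legacy-save", new AbstractAction() {
                @Override
                public void actionPerformed(ActionEvent e) {
                    saveAction.run();
                }
            });
        }
    }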
Terminology:
Keep it the same. Don't invent new terms for things.
Talk to them?
This may or may not be possible, but getting a few key users "on board" early can be pivotal; especially if you genuinely empower them to help with the user experience.
Find the faults
In the existing system. Take away their existing pain points and they may forgive you a lot.
Unfortunately it sounds like a case of needing to close the barn doors after the horses have bolted. You really need to get grass roots buy-in on the need for an improved system before beginning the project and maintain that relationship during the development.
Having champions of the system at the "coal face" level in the business would a) make sure you meet not only the management requirements but also the users' goals, which is all-important in a successful system, and b) mean the users get a system to which they have been a development party, not just had a system thrust upon them.
Getting people to moan about the shortcomings of an existing system is easy. Describing a possible new system before it's created, in a way which allows the users to comment, enables them to feel some control and gives you vital feedback. Be absolutely sure to identify those killer gripes about the old system and make sure they are addressed in the new system.
Of course this is all a bit late for you. The way forward is to create a review forum with the most vocal opponents and put them in a room with you and management. Get them to defend their reasons for not wanting the new system. If you can't show how your new system is better, then perhaps it isn't. If you can see how the new system might be slightly improved (the change may only need to be small), then do that; it may go a long way to getting back the feeling of involvement you missed out on before.
I would sit together with the staff, or a couple of the more loud-mouthed opponents, go through what they find lacking in the system, and suggest some of these changes be incorporated in future releases. That way they will pay more attention to the system and also feel more a part of the process, instead of just being handed something like some peon. In addition, it would also help avoid any misunderstandings about the system.
Get one or more of the users to be your champion by involving them in the development process. Make sure to choose the right ones: hopefully ones you can reason with. When launching, do a launch event. Make it a big deal. This may not apply to every application, but I've seen it work in my previous work environments. If this is too late (you went ahead without any involvement from the actual users), well... there is always staff turnover, lol. Out with the old and in with the new. Make the new users your buddies :).
You have to show some kind of benefit to making the change. A demo / mockup can be useful for this. Choose a manager to demo it to and wait. Let it become his idea; then it might move forward. Being too pushy can cause a negative knee-jerk reaction which might block further consideration of the idea.
It is sad that software often gets replaced by a management decision without any user involvement and then people wonder why the system is rejected.
I've witnessed this first hand. A guy I once worked with was told to develop a new version of an application "in secret". At the end of six months of development it was shown to the users. It didn't meet their requirements, and they were angry they hadn't been involved. Needless to say, the software didn't make it into production, and the developer left shortly after (I felt sorry for him, as he had wasted six months of effort and had actually done a really good job considering the circumstances).
The chances are that the software is inferior to the previous application; perhaps data entry is actually slower (you will be biased, as you wrote it, and everyone likes to think their software is better).
Re-engage with the users, do some analysis, and work out what is bad about the old system. If the new system can address the gripes the users have with the old system, you might be able to turn this around.
Edit: who was involved in engaging with your developers? Presumably the managers at the client, who probably never use the system? This is another big mistake people tend to make: managers driving requirements.
If the old system is perfect, then it never needed to be replaced in the first place!
I'm working in a small development group. We are building and improving our product.
Half a year ago we couldn't think about higher-level characteristics, such as usability, because we had so many problems with our product. Many bugs, high technical debt, low performance and other problems kept us from being able to focus on usability.
With time we've improved our process substantially. What we've done:
Real Agile iterations
Continuous integration
Testing (unit tests, functional smoke tests, performance)
Code quality is 'good'
Painless deployment process
So we are now producing stable, reliable releases. The following quote (paraphrased) describes our current situation:
first - make it work; after that, make it reliable; after that, make it usable
We are geeks, so we can't 'make' a great UI by ourselves.
So what should we do? What direction can you recommend?
Maybe we should hire Usability experts part-time or full-time?
How can we explain the importance of Usability to our stakeholders?
How do we convince them that this is useful?
What do your Business people say will make you the most money? Do that. Maybe usability is the next thing to do, maybe more features, maybe a different product. It's not something a "geek" will necessarily be able to guess.
I'm in the same boat as you are - I basically live on the command line, and I'm completely out of touch with the modern UI (both web and desktop application).
The solution for me was using a real UI developer for all my GUIs, and I just live in the back-end as it were.
There are quite a few benefits of this arrangement:
You don't have to debug your own crappy UIs anymore :) that's their job, and they're better at it than you, so no worries.
Your code will naturally gravitate to a MVC or at least tiered API approach, which is easier to code against for all parties involved.
Good UI people know what questions to ask end users, and know when those users don't know what they're talking about. I certainly don't have that skill.
You can do what you do best, and they do what they do best, making a stronger team overall.
The cons are obvious - you need to not only find the money for a talented UI dev, but you need to find a talented UI dev!
Now, I can't speak for you and your company's position in your market etc. etc. (I also don't do business-speak :) ), but if you can afford another hire, it will give back more to the team than the cost of the position. It did for me!
You ask, "How can we explain the importance of Usability to our stakeholders?" but I'm not sure that you yourselves get it!
Interaction design (iD) and usability aren't things that you can tack onto an existing product when the "important" things are done. They should be there from the very start, preferably done in small iterations with small tests and studies. I'm talking about cheap-and-dirty iD/usability: stuff like lo-fi prototyping, user testing with just four people, and having enough stats to be able to detect user errors and such.
If you don't do iD/usability from the start, you risk ending up with the same crappy product as your competitors and/or providing users with band-aids when they need surgery.
What do your users want? They're probably the people best placed to identify requirements.
You are the ones who know and understand the product, so don't assume that just because someone else has 'usability expert' in their title that hiring them will somehow make your product usable.
Also, don't undercut your own instincts for usability. As a programmer, you use software all the time; what products do you find the most usable? Think about what you like about them and compare them to your product.
Think about what your product does, and imagine that you are the person having to use the product and imagine how you would want it to work. Think of what a user wants to accomplish using your product, and imagine the steps they would have to go through to do it. Does it seem easy to understand what to do? Can it be done in fewer steps?
Most importantly, talk to your customers. Find out what they found confusing or difficult to accomplish. See if they have come up with their own workarounds for using your product in ways you didn't initially picture.
If you put as much thought, planning and effort into usability as you did into improving the reliability and deployment, you will end up with a much better product.
When analyzing the next step it really all comes down to business requirements & goals.
What is upper management like? Are they tech-savvy? Are they open to new ideas? Do they think that the current product needs adjustment, improvement, etc.? Is the product still in high demand? Is the marketplace changing such that the product/service will soon be obsolete? Etc., etc., etc.
IF there are real business reasons for spending the $/time/resources then you can begin to explore product improvements. At that point consider the opinions of previous posters regarding user opinion.
I know so many geeks including myself who know usability, so one way would be learning it. Another way bringing someone in who can do UI design and usability.
To convince them that usability is important:
It's useless if you can't use it!
I don't know what sort of product you build, but you always have clients, and clients always love usable applications. This will increase sales and the number of happy clients, and decrease tech support.
What does it do for your users? What do they think about the usability? Maybe it's not an issue for them.
Make it more valuable to your users. Deliver more business value. Help your customers get a better return on their investment. Do this by making it do more of what they need it to do; by doing it better (more accurately, more quickly, more reliably, more usably); or by doing it at lower cost (less infrastructure needed to run it, reduced maintenance costs because you improved reliability) or more flexibly (it deals with their business changes)...
Lots of dimensions, which do connect with the technical ones you refer to (usability, reliability, stability, etc.). But paying customers normally care about their business needs/features, not the technical ones that deliver them.
Go talk to your users (or potential users)
My one-liner about the importance of usability:
What use is a reliable system that is not usable?
If you have an existing product with existing users, then what makes you think that your current UI is not usable?
Do you suspect that there are some minor changes you can make that will greatly improve usability or is something more revolutionary required? If the latter, then what about the needs of your existing users, will they be willing to re-learn a whole new UI?
Edit
A user interface can be considered "poor" for a variety of reasons...
It is just plain ugly / old fashioned / does not "look like a Windows application"
It uses metaphors or workflows which do not relate to things that the user understands or wants to do
The first of these is relatively easy to fix, especially if you hire in a great designer. The fix would be the equivalent of redecorating your lounge and buying a new sofa and TV. Same room, different experience. Your existing users would still be able to use this application.
Fixing the second of these gets a lot more complex and involved, and might really impact your codebase. It's hard to comment further without knowing more about your application.
I think the answer is in the order of things; you say it's:
"first - make it work; after that, make it reliable; after that, make it usable"
But the most important thing here is "make it work". The acceptance criterion for functionality to "work" is that it is in fact usable. If not, it will not be executed. Then it's just a block of dead code, and dead code should not be in the system in the first place.
Make a UI.
Then throw that away, and make another after you have tried to use the first one. Then simplify as much as you can. Any time you can programmatically determine what the user wants from inputs, instead of asking for multiple explicit choices, do so. Too many buttons induce paralysis.
I'm working on a product which is meant to be simple to use and simple to set up; the competition largely requires a long set-up period and in some cases goes as far as a bespoke solution for each customer. One part of our application is now expanding based on customer requests, and it looks like we'll need to make it very flexible so each customer can have a lot of control over how it behaves for them. The problem is that I don't want to make the system too configurable, as I believe this makes it more complex to learn and to work with. I'm also concerned it opens the door to someone messing things up for themselves, kind of like handing them a gun, although I'm not actually pointing it at their foot for them.
Has anyone else faced a similar dilemma of putting power in users' hands? How did you solve it, and what was the result?
I don't normally like to subscribe to the idea that all users are stupid, but there is a rule which can still be applied:
If you give them the opportunity, they WILL break it
Now it is up to you whether or not to give them the ability to do potentially dumb things. Or better yet, develop it so that when they do do the stupid voodoo that they do, it can be reverted or recovered from error state gracefully.
I highly recommend you read Joel's Controlling Your Environment Makes You Happy, which can be described as a treatise on user interface design but is really about usability with a healthy dab of psychology thrown in.
The section I'm referring to is Choices:
Every time you provide an option, you're asking the user to make a decision.
This is something I strongly agree with. Many developers, product managers and so on take the easy route: instead of figuring out what users actually need, they just give them a choice. You see this in enterprise bloatware like ClearCase or PVCS, where there are so many options (90% of which you'd never change) that the designers have clearly tried to be all things to all men rather than doing one or two things exceptionally well.
Instead it just does lots of things badly.
Keep it simple, follow conventions, don't overwhelm the user with pointless and unnecessary choices and make the software behave like a normal user would expect. That alone would set you apart from an awful lot of other products.
Personally I like the TurboTax model (http://turbotax.intuit.com/). When creating a tax return, I get a simple, tell-me-like-I'm-five wizard that takes me step-by-step through the process, but I can step outside the process at any time and use more advanced features, returning to the process later.
Make it easy and simple and uncluttered for your user to do what they're going to do 80% of the time, but give them the power to deliberately step outside of the norm.
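In code, that often comes down to a settings object whose defaults cover the common case so the wizard never has to ask, while the advanced screen can still override anything. A sketch with invented names:

    public class ReturnSettings {

        // Sensible defaults: the simple wizard never asks about these.
        private String currency = "USD";
        private int decimalPlaces = 2;
        private boolean itemizeDeductions = false;

        // The advanced screen lets a power user deliberately step outside the defaults.
        public ReturnSettings withCurrency(String currency) {
            this.currency = currency;
            return this;
        }

        public ReturnSettings withItemizedDeductions() {
            this.itemizeDeductions = true;
            return this;
        }

        public String currency()           { return currency; }
        public int decimalPlaces()         { return decimalPlaces; }
        public boolean itemizeDeductions() { return itemizeDeductions; }
    }

    // Typical user: new ReturnSettings()
    // Power user:   new ReturnSettings().withCurrency("EUR").withItemizedDeductions()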
Interesting timing for your question. In the U.S. this is Income Tax week. Filling out the ol' 1040 and associated subforms should give us some sympathy for what users endure.
Lessons I take away are:
Only ask questions that relate to the user domain; avoid questions relating to the software system; and if you can derive the answer or suggest a most likely answer, do so.
Put related questions together (as long as they are normally entered by the same person using data most likely available at the same place and time, which is the definition of related for these purposes).
Make it support incremental input. It should be easy to enter the data they have, and defer completing it when the rest is available.
Show status validity and completeness. Make it clear and obvious how close they are to having validatable data.
Make it interruptible. Make sure it's possible to interrupt the process, leave the application, come back, and resume where they left off.
Yup, it's harder to program. Embrace it.
There are at least two ways to build a good software product:
Focus on a narrow set of functionality, and implement that functionality very well.
Design your system to be customizable (ideally, through scripting.) If you do the base system right, it will be easy to provide the basic, no options, just-do-what-I-want functionality on top of the customization layer.
Unfortunately, there are many more ways to create a bad software product.
Your question implies that you can either provide a flexible solution OR make it foolproof.
I wouldn't put it like that. To me this is rather a matter of user expectations and the question in the first place would be:
How can I meet all important user expectations (even if they conflict with each other) without corrupting the application?
For instance a web application which has a menu, breadcrumb navigation, a site map and a search offers together with the inline links five different ways to find what you're looking for and how to go there.
That way most users can quickly and easily find the functionality they are expecting, and therefore the need for extensive documentation might actually decrease.
So the answer might be to offer several different, carefully chosen ways to solve one specific task, while each of them can be streamlined independently to avoid user mistakes.
The answer with this lies in who your end-users are. I used to write software that got used by professional sports coaches. While these guys were definitely good at what they did, they were hardly proficient at computer use, so our configurability was kept to a minimum (at least as far as what could be done in the GUI).
On the other hand, if you're dealing with power users, adding options is usually not a bad thing as long as they aren't intrusive.
It's all about who's going to be getting them.
Read Jeff Atwood's Training Your Users. It's a great article with some very useful links.
I like the approach of Firefox towards this. The basic options are accessible in the option menu, all the rest is under about:config. Thus you have an easy interface and an incredible flexibility if you need it.
I've had great success, and been happiest as a user, when using sensible defaults. In other words, make the most common use case easy (or even better, free), but give users the ability to step outside of that use case when the situation calls for it.
One thing I struggle with is planning an application's architecture before writing any code.
I don't mean gathering requirements to narrow in on what the application needs to do, but rather effectively thinking about a good way to lay out the overall class, data and flow structures, and iterating those thoughts so that I have a credible plan of action in mind before even opening the IDE. At the moment it is all too easy to just open the IDE, create a blank project, start writing bits and bobs, and let the design 'grow out' from there.
I gather UML is one way to do this but I have no experience with it so it seems kind of nebulous.
How do you plan an application's architecture before writing any code? If UML is the way to go, can you recommend a concise and practical introduction for a developer of smallish applications?
I appreciate your input.
I consider the following:
what the system is supposed to do, that is, what is the problem that the system is trying to solve
who is the customer and what are their wishes
what the system has to integrate with
are there any legacy aspects that need to be considered
what are the user interactions
etc...
Then I start looking at the system as a black box and:
what are the interactions that need to happen with that black box
what are the behaviours that need to happen inside the black box, i.e. what needs to happen to those interactions for the black box to exhibit the desired behaviour at a higher level, e.g. receive and process incoming messages from a reservation system, update a database etc.
Then this will start to give you a view of the system that consists of various internal black boxes, each of which can be broken down further in the same manner.
UML is very good to represent such behaviour. You can describe most systems just using two of the many components of UML, namely:
class diagrams, and
sequence diagrams.
You may need activity diagrams as well if there is any parallelism in the behaviour that needs to be described.
A good resource for learning UML is Martin Fowler's excellent book "UML Distilled" (Amazon link - sanitised for the script kiddie link nazis out there (-: ). This book gives you a quick look at the essential parts of each of the components of UML.
Oh, and what I've described is pretty much Ivar Jacobson's approach. Jacobson is one of the Three Amigos of OO. In fact, UML was initially developed by the other two people who form the Three Amigos, Grady Booch and Jim Rumbaugh.
I really find that a first-off of writing on paper or whiteboard is really crucial. Then move to UML if you want, but nothing beats the flexibility of just drawing it by hand at first.
You should definitely take a look at Steve McConnell's Code Complete,
and especially at his giveaway chapter on "Design in Construction"
You can download it from his website:
http://cc2e.com/File.ashx?cid=336
If you're developing for .NET, Microsoft have just published (as a free e-book!) the Application Architecture Guide 2.0b1. It provides loads of really good information about planning your architecture before writing any code.
If you were desperate I expect you could use large chunks of it for non-.NET-based architectures.
I'll preface this by saying that I do mostly web development where much of the architecture is already decided in advance (WebForms, now MVC) and most of my projects are reasonably small, one-person efforts that take less than a year. I also know going in that I'll have an ORM and DAL to handle my business object and data interaction, respectively. Recently, I've switched to using LINQ for this, so much of the "design" becomes database design and mapping via the DBML designer.
Typically, I work in a TDD (test driven development) manner. I don't spend a lot of time up front working on architectural or design details. I do gather the overall interaction of the user with the application via stories. I use the stories to work out the interaction design and discover the major components of the application. I do a lot of whiteboarding during this process with the customer -- sometimes capturing details with a digital camera if they seem important enough to keep in diagram form. Mainly my stories get captured in story form in a wiki. Eventually, the stories get organized into releases and iterations.
By this time I usually have a pretty good idea of the architecture. If it's complicated or there are unusual bits -- things that differ from my normal practices -- or I'm working with someone else (not typical), I'll diagram things (again on a whiteboard). The same is true of complicated interactions -- I may design the page layout and flow on a whiteboard, keeping it (or capturing via camera) until I'm done with that section. Once I have a general idea of where I'm going and what needs to be done first, I'll start writing tests for the first stories. Usually, this goes like: "Okay, to do that I'll need these classes. I'll start with this one and it needs to do this." Then I start merrily TDDing along and the architecture/design grows from the needs of the application.
Periodically, I'll find myself wanting to write some bits of code over again or think "this really smells" and I'll refactor my design to remove duplication or replace the smelly bits with something more elegant. Mostly, I'm concerned with getting the functionality down while following good design principles. I find that using known patterns and paying attention to good principles as you go along works out pretty well.
http://dn.codegear.com/article/31863
I use UML, and find that guide pretty useful and easy to read. Let me know if you need something different.
UML is a notation. It is a way of recording your design, but not (in my opinion) of doing a design. If you need to write things down, I would recommend UML, though, not because it's the "best" but because it is a standard which others probably already know how to read, and it beats inventing your own "standard".
I think the best introduction to UML is still UML Distilled, by Martin Fowler, because it's concise, gives practical guidance on where to use it, and makes it clear you don't have to buy into the whole UML/RUP story for it to be useful.
Doing design is hard. It can't really be captured in one Stack Overflow answer. Unfortunately, my design skills, such as they are, have evolved over the years, and so I don't have one source I can refer you to.
However, one model I have found useful is robustness analysis (google for it, but there's an intro here). If you have your use-cases for what the system should do, a domain model of what things are involved, then I've found robustness analysis a useful tool in connecting the two and working out what the key components of the system need to be.
But the best advice is to read widely, think hard, and practice. It's not a purely teachable skill; you've got to actually do it.
I'm not smart enough to plan ahead more than a little. When I do plan ahead, my plans always come out wrong, but now I've spent n days on bad plans. My limit seems to be about 15 minutes on the whiteboard.
Basically, I do as little work as I can to find out whether I'm headed in the right direction.
I look at my design for critical questions: when A does B to C, will it be fast enough for D? If not, we need a different design. Each of these questions can be answered with a spike. If the spikes look good, then we have the design and it's time to expand on it.
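A spike can be as small as a throwaway main method that answers the question with a number. A sketch (the row count and the work inside processRow are stand-ins):

    public class ImportSpike {

        public static void main(String[] args) {
            long start = System.nanoTime();
            for (int i = 0; i < 10_000; i++) {
                processRow(i);                              // the operation whose speed is in doubt
            }
            long millis = (System.nanoTime() - start) / 1_000_000;
            System.out.println("10,000 rows in " + millis + " ms");
            // If this is nowhere near fast enough for D, pick a different design
            // before writing any production code around it.
        }

        private static double processRow(int i) {
            return Math.sqrt(i);                            // stand-in for the real work
        }
    }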
I code in the direction of getting some real customer value as soon as possible, so a customer can tell me where I should be going.
Because I always get things wrong, I rely on refactoring to help me get them right. Refactoring is risky, so I have to write unit tests as I go. Writing unit tests after the fact is hard because of coupling, so I write my tests first. Staying disciplined about this stuff is hard, and a different brain sees things differently, so I like to have a buddy coding with me. My coding buddy has a nose, so I shower regularly.
Let's call it "Extreme Programming".
"White boards, sketches and Post-it notes are excellent design
tools. Complicated modeling tools have a tendency to be more
distracting than illuminating." From Practices of an Agile Developer
by Venkat Subramaniam and Andy Hunt.
I'm not convinced anything can be planned in advance before implementation. I've got 10 years of experience, but that's only been at 4 companies (including 2 sites at one company that were almost polar opposites), and almost all of my experience has been in terms of watching colossal cluster********s occur. I'm starting to think that stuff like refactoring is really the best way to do things, but at the same time I realize that my experience is limited, and I might just be reacting to what I've seen. What I'd really like to know is how to gain the best experience so I'm able to arrive at proper conclusions, but it seems like there's no shortcut and it just involves a lot of time watching people do things wrong :(. I'd really like to have a go at working at a company where people do things right (as evidenced by successful product deployments), to know whether I'm just a contrarian bastard, or if I'm really as smart as I think I am.
I beg to differ: UML can be used for application architecture, but is more often used for technical architecture (frameworks, class or sequence diagrams, ...), because this is where those diagrams can most easily been kept in sync with the development.
Application Architecture occurs when you take some functional specifications (which describe the nature and flows of operations without making any assumptions about a future implementation), and you transform them into technical specifications.
Those specifications represent the applications you need for implementing some business and functional needs.
So if you need to process several large financial portfolios (functional specification), you may determine that you need to divide that large specification into:
a dispatcher to assign those heavy calculations to different servers
a launcher to make sure all calculation servers are up and running before starting to process those portfolios.
a GUI to be able to show what is going on.
a "common" component to develop the specific portfolio algorithms, independently of the rest of the application architecture, in order to facilitate unit testing, but also some functional and regression testing.
So basically, to think about application architecture is to decide what "group of files" you need to develop in a coherent way (you cannot develop a launcher, a GUI, a dispatcher, ... in the same group of files: they would not be able to evolve at the same pace).
When an application architecture is well defined, each of its components is usually a good candidate for a configuration component, that is, a group of files which can be versioned as a whole in a VCS (Version Control System), meaning all its files will be labeled together every time you need to record a snapshot of that application (again, it would be hard to label your whole system, as each of its applications cannot be in a stable state at the same time).
I have been doing architecture for a while. I use BPML to first refine the business process, and then UML to capture various details. The third step is generally an ERD. By the time you are done with BPML and UML, your ERD will be fairly stable. No plan is perfect and no abstraction is going to be 100%. Plan on refactoring; the goal is to minimize refactoring as much as possible.
I try to break my thinking down into two areas: a representation of the things I'm trying to manipulate, and what I intend to do with them.
When I'm trying to model the stuff I'm trying to manipulate, I come up with a series of discrete item definitions: an ecommerce site will have a SKU, a product, a customer, and so forth. I'll also have some non-material things that I'm working with: an order, or a category. Once I have all of the "nouns" in the system, I'll make a domain model that shows how these objects are related to each other: an order has a customer and multiple SKUs, many SKUs are grouped into a product, and so on.
These domain models can be represented as UML domain models, class diagrams, or SQL ERDs.
Once I have the nouns of the system figured out, I move on to the verbs: for instance, the operations that each of these items goes through to commit an order. These usually map pretty well to use cases from my functional requirements. The easiest way I've found to express these is with UML sequence, activity, or collaboration diagrams, or swimlane diagrams.
It's important to think of this as an iterative process: I'll do a little corner of the domain, then work on the actions, then go back. Ideally I'll have time to write code to try stuff out as I go along; you never want the design to get too far ahead of the application. This process goes badly if you think you are building the complete and final architecture for everything; really, all you're trying to do is establish the basic foundations that the team will share as they move through development. You're mostly creating a shared vocabulary for team members to use as they describe the system, not laying down the law for how it's got to be done.
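A first cut of those nouns can be nothing more than a handful of bare classes that record the relationships; the fields below are illustrative, not a complete model:

    import java.math.BigDecimal;
    import java.util.List;

    class Customer { String name; }

    class Sku { String code; BigDecimal price; }

    class Product { String name; List<Sku> skus; }   // many SKUs are grouped into a product

    class Order {                                     // an order has a customer and multiple SKUs
        Customer customer;
        List<Sku> lines;
    }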
I find myself having trouble fully thinking a system out before coding it. It's just too easy to give only a cursory glance to some components which you only later realize are much more complicated than you thought they were.
One solution is to just try really hard. Write UML everywhere. Go through every class. Think how it will interact with your other classes. This is difficult to do.
What I like doing is to make a general overview at first. I don't like UML, but I do like drawing diagrams which get the point across. Then I begin to implement it. Even while I'm just writing out the class structure with empty methods, I often see things that I missed earlier, so then I update my design. As I'm coding, I'll realize I need to do something differently, so I'll update my design. It's an iterative process. The concept of "design everything first, and then implement it all" is known as the waterfall model, and I think others have shown it's a bad way of doing software.
Try Archimate.
Say there are two possible solutions to a problem: the first is quick but hacky; the second is preferable but would take longer to implement. You need to solve the problem fast, so you decide to get the hack in place as quickly as you can, planning to start work on the better solution afterwards. The trouble is, as soon as the problem is alleviated, it plummets down the to-do list. You're still planning to put in the better solution at some point, but it's hard to justify implementing it right now. Suddenly you find you've spent five years using the less-than-perfect solution, cursing it all the while.
Does this sound familiar? I know it's happened more than once where I work. One colleague describes deliberately making a bad GUI so that it wouldn't be accidentally adopted long-term. Do you have a better strategy?
Write a test case which the hack fails.
If you can't write a test which the hack fails, then either there's nothing wrong with the hack after all, or else your test framework is inadequate. If the former, run away quick before you waste your life on needless optimisation. If the latter, seek another approach (either to flagging hacks, or to testing...)
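A sketch of what such a test might look like in JUnit; the truncating "rounding" below is an invented stand-in for whatever the real hack gets wrong:

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class HackRegressionTest {

        // Hypothetical hack: the quick fix truncates money values instead of rounding them.
        static double hackyRound(double amount) {
            return Math.floor(amount * 100) / 100;           // the shortcut that shipped
        }

        // This test pins the correct behaviour, so it keeps failing on every build
        // until the hack is replaced with a real fix.
        @Test
        public void amountsAreRoundedToTheNearestCent() {
            assertEquals(0.38, hackyRound(0.375), 0.0001);   // fails: the hack returns 0.37
        }
    }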
Strategy 1 (almost never selected): Don't implement the kluge. Don't even let people know it's a possibility. Just do it the right way the first time. Like I said, this one is almost never selected, due to time constraints.
Strategy 2 (dishonest): Lie and Cheat. Tell management that there are bugs in the hack, and they could cause major problems later on. Unfortunately, most of the time, the managers just say to wait until the bugs become a problem, then fix the bugs.
Strategy 2a: Same as strategy 2, except there really are bugs. Same problem, though.
Strategy 3 (and my personal favorite): Design the solution whenever you can, and do it well enough that an intern or code-monkey could do it. It's easier to justify spending the small amount of code-monkey money than to justify your own salary, so it might just get done.
Strategy 4: Wait for a rewrite. Keep waiting. Sooner or later (probably later), someone is going to have to rewrite the thing. Might as well do it right then.
Here is a great related article on technical debt.
Basically, it is an analogy of debt with all the technical decisions you make. There is good debt and bad debt... and you have to pick the debt that is going to achieve the goals you want with the least long term cost.
The worst kind of debt is small little accumulating shortcuts that are analogous to credit card debt... each one doesn't hurt, but pretty soon you are in the poor house.
This is a major issue when doing deadline-driven work. I find that adding very detailed comments about why this approach was chosen, with some hints at how it should eventually be coded, helps. This way people looking at the code see it and keep it fresh.
Another option that will work is to add a bug/feature in your tracking system (you do have one, right?) detailing the rework. That way it is visible and may force the issue at some point.
The only time you can ever justify fixing these things (because they're not really broken, just ugly) is when you have another feature or bug fix that touches the same section of code, and you might as well re-write it.
You have to do the math on what a developer's time costs. If software requirements are being met, and the only thing wrong is that the code is embarrassing under the hood, it's not really worth fixing.
Whole companies can go out of business because over-zealous engineers insist on a re-architecture every year or so when they get antsy.
If it's bug-free and meets requirements, it's done. Ship it. Move on.
[Edit]
Of course I'm not advocating that everything be hacked in all the time. You have to design and write code carefully in the normal course of the development process. But when you do end up with hacks that just had to be done quickly, you have to do a cost-benefit analysis on whether or not it's worth it to clean up the code. If over the lifetime of the application you will spend more time coding around a messy hack than you would have fixing it, then of course fix it. But if not, it's way too expensive and risky to re-code a working, bug-free application just because looking at the source makes you ill.
YOU DON'T DO INTERIM SOLUTIONS.
Sometimes I think programmers just need to be told this.
Sorry about that, but seriously: a hacky solution is worthless, and even on the first iteration it can take longer than doing a portion of the solution correctly.
Please stop leaving me your crap code to maintain. Just ALWAYS CODE IT RIGHT. No matter how long it takes and who yells at you.
When you are sitting there twiddling your thumbs after delivering early while everyone else is debugging their stupid hacks, you'll thank me.
Even if you don't think you are a great programmer, always strive to do the best you can, never take shortcuts--it doesn't cost you ANY time to do it right. I can justify this statement if you don't believe me.
Suddenly you find you've spent five years using the less-than-perfect solution, cursing it the while.
If you're cursing it, why is it at the bottom of the TODO list?
If it's not affecting you, why are you cursing it?
If it is affecting you, then it's a problem that needs to be fixed NOW.
I make sure that I'm vocal about the priority of the long-term fix, ESPECIALLY after the short-term fix has gone in.
I detail the reasons why it's a hack and not a good long-term solution, and use those to get the stakeholders (managers, clients, etc.) to understand why it needs to be fixed. Depending on the case, I may even inject a bit of worst-case-scenario fear in there: "If this safety line snaps, the whole bridge could collapse!"
I take responsibility for coming up with a long-term solution and make sure that it gets deployed.
It is a hard call. I have done hacks personally because sometimes you HAVE to get that product out the door and into the customers' hands. However, the way that I take care of it is to just do it.
Tell the project lead or your boss, or the customer: there are some spots that need to be cleaned up and coded better. I need a week to do it, and it is going to cost less to do it now than it will six months from now, when we need to implement an extension onto the subsystem.
Usually problems like this arise from bad communication with management or the customer. If the solution works for the customer then they see no reason to ask for it to be changed. So they need to be told about the tradeoffs you made beforehand so they can plan extra time to fix the problems after you've implemented the quick solution.
How to solve it depends a bit on why it's a bad solution. If your solution is bad because it's hard to change or maintain, then the first time you have to do maintenance and have a bit more time, that is the right time to upgrade to a better solution. In this case it helps if you tell the customer or your boss that you took a shortcut in the first place. That way they know that they can't expect a fast solution next time around. Crippling the UI can be a good way to make sure the customer comes back to get stuff fixed.
If the solution is bad because it's risky or unstable then you really need to talk to the person doing the planning and have some time planned in to fix the problem asap.
Good luck. In my experience this is almost impossible to achieve.
Once you go down the slippery slope of implementing a hack because you are under pressure, you might as well get used to living with it for all time. There is almost NEVER enough time to rework something that already works, no matter how badly it is implemented internally. What makes you think you will magically have more time "at some later date" to fix the hack?
The only exception I can think of to this rule is if the hack completely prevents you from implementing another piece of functionality that is needed by a customer. Then you have no choice but to do the re-work.
I try to build the hacky solution so that it can be migrated to the long-term way as painlessly as possible. Say you've got a guy who is building a database in SQL Server because that's his strongest DB, but your corporate standard is Oracle. Build the DB with as few non-transferable features (like BIT datatypes) as possible. In this example, it's not hard to avoid BIT types, but it makes the later transition an easier process.
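One way to keep that kind of hack migratable (the interface and class names here are invented) is to confine the vendor-specific bits behind a tiny interface from the start, so the later Oracle move touches one class rather than every query:

    // Everything vendor-specific about boolean flags lives behind this interface.
    interface FlagColumn {
        String columnDefinition();              // how a true/false column is declared
        Object toDatabaseValue(boolean flag);   // how a value is written to it
    }

    class SqlServerFlagColumn implements FlagColumn {
        public String columnDefinition()            { return "BIT"; }
        public Object toDatabaseValue(boolean flag) { return flag ? 1 : 0; }
    }

    class OracleFlagColumn implements FlagColumn {
        public String columnDefinition()            { return "NUMBER(1)"; }
        public Object toDatabaseValue(boolean flag) { return flag ? 1 : 0; }
    }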
Educate whoever is in charge of making the final decision why the hacky way of doing things is bad in the long-run.
Describe the problem in terms they can relate to.
Include a graph of cost, productivity, and revenue curves.
Teach them about technical debt.
Regularly refactor if you're pushed forward.
Never call it "refactoring" or "going back and cleaning up" in front of non-technical people. Instead, call it "adapting" the system to handle "new features".
Basically, people who don't understand software don't get the concept of revisiting things that already work. The way they look at it, developers are like mechanics who want to keep taking apart and reassembling the entire car every time someone wants to add a feature, which sounds insane to them.
It helps to make analogies to everyday things. Explain to them how, when you made the interim solution, you made choices that suited building it quickly, as opposed to being stable, maintainable, etc. It's like choosing to build with wood instead of steel because wood is easier to cut, and thus you could build the interim solution more quickly. The wood, however, simply cannot support the foundation of a 20-story building.
We use Java and Hudson for continuous integration. 'Interim solutions' must be commented with:
// TODO: Better solution required.
Every time Hudson runs a build, it provides a report of each TODO item, so that we have an up-to-date, highly visible record of any outstanding items that need to be improved.
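Hudson plugins can scan for such markers, but even a small standalone build step can produce the report. A sketch of what that step might do (the paths and the marker text are whatever your project uses):

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.util.List;
    import java.util.stream.Stream;

    public class TodoReport {

        public static void main(String[] args) throws IOException {
            Path sourceRoot = Paths.get(args.length > 0 ? args[0] : "src");
            try (Stream<Path> files = Files.walk(sourceRoot)) {
                files.filter(p -> p.toString().endsWith(".java"))
                     .forEach(TodoReport::report);
            }
        }

        // Print every line that still carries the interim-solution marker.
        private static void report(Path file) {
            try {
                List<String> lines = Files.readAllLines(file);
                for (int i = 0; i < lines.size(); i++) {
                    if (lines.get(i).contains("TODO: Better solution required")) {
                        System.out.println(file + ":" + (i + 1) + "  " + lines.get(i).trim());
                    }
                }
            } catch (IOException e) {
                System.err.println("Could not read " + file);
            }
        }
    }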
Great question. This bothers me a lot, too - and most of the time I'm the sole person responsible for prioritizing issues in my own projects (yep, small business).
I found out that the problem that needs to be fixed is usually just a subset of the whole problem. IOW, the customer that needs an urgent fix does not need the whole problem to be solved, just a part of it, smaller or larger. That sometimes enables me to create a workaround that is not a solution to the complete problem but just to the customer's subset, and that allows me to leave the bigger issue open in the issue tracker.
That may of course not apply at all to your work environment :(
This reminds me of the story of "CTool". In the beginning CTool was put forward by one of our devs, I'll call him Don, as one possible way to solve the problem we were having. Being an earnest hard-working type, Don plugged away and delivered a working prototype. You know where I am going with this. Overnight, CTool became a part of the company work flow with an entire department depending on it. By the second or third day, bitter complaints started streaming in about CTool's shortcomings. Users questioned Don's competence, commitment and IQ. Don's protests that this was never supposed to be a production app fell on deaf ears. This went on for years. Finally, someone got around to re-writing the app, well after Don had departed. By this time, so much loathing had become attached to the name CTool that naming it CTool version 2 was out of the question. There was even a formal funeral for CTool, somewhat reminiscent of the copier (or was it a printer?) execution scene in Office Space.
Some might say Don deserved the slings and arrows for not making it right and fixing CTool. My only point is that saying you should never hack out a solution is probably unjustifiable in the Real World. But if you are the one to do it, tread cautiously.
Get it in writing (an email). So when it becomes a problem later management doesn't "forget" that it was supposed to be temporary.
Make it visible to the users. The more visible it is the less likely people are going to forget to go back and do it the right way when the crisis is over.
Negotiate, before the temp solution is in place, for a project, resources, and timelines to get the real fix in. Work on the real solution should probably begin as soon as the temp solution is finished.
You file a second, very descriptive bug against your own "fix" and put a to-do comment right in the affected areas that says, "This area needs a lot of work. See defect #555" (use the right number, of course). People who say "don't put in a hack" don't seem to understand the question. Assume you have a system that needs to be up and running now: your non-hack solution is 8 days of work, your hack is 38 minutes of work, and the hack is there to buy you time to do the work without losing money while you're doing it.
Now you still have to get your customer or management to agree to schedule the N*100 minutes of time required to do the real fix, in addition to the N minutes needed now. If you must refuse to implement the hack until you get such agreement, then maybe that's what you have to do, but I've worked with some understanding people in that regard.
The real price of introducing a quick-fix is that when someone else needs to introduce a 2nd quick fix, they will introduce it based on your own quick-fix. So, the longer a quick-fix is in place, the more entrenched it will become. Quite often, a hack takes only a little bit longer than doing things right, until you encounter a 2nd hack which builds on the first.
So, obviously it is (or seems to be) sometimes necessary to introduce a quick fix.
One possible solution, assuming your version control supports it, is to introduce a fork from the source whenever you make such a hack. If people are encouraged to avoid coding new features within these special "get it done" forks, then it will eventually be more work to integrate the new features with the fork than it will be to get rid of the hack. More likely, though, the "good" fork will get discarded. And if you are far enough away from release that making such a fork will not be practical (because it is not worth doing the dual integration mentioned above), then you probably shouldn't even be using a hack anyways.
A very idealistic approach.
A more realistic solution is to keep your program segmented into as many orthogonal components as possible and to occasionally do a complete rewrite of some of the components.
A better question is why the hacky solution is bad. If it is bad because it reduces flexibility, ignore it until you need flexibility. If it is bad because it impacts the program's behavior, ignore it; eventually it will become a bug fix and WILL be addressed. If it is bad because it looks ugly, ignore it, as long as the hack is localized.
Some solutions I've seen in the past:
Mark it with a HACK comment in the code (or a similar scheme, such as XXX)
Have an automatic report run and emailed weekly to those who care, counting how many times the HACK comments appear
Add a new entry in your bug tracking system with the line number and description of the right solution (so the knowledge gained from the research before writing the hack isn't lost)
write a test case that demonstrates how the hack fails (if possible) and check it into the appropriate test suite (i.e. so that it throws errors that someone will eventually want to cleanup)
once the hack is installed and the pressure is off, immediately start on the right solution
This is an excellent question. One thing I've noticed as I get more experience: hacks buy you a very short amount of time, and often cost you a huge amount more. Closely related is the 'quick fix' that solves what you think is the problem, only to find when it blows up that it wasn't the problem at all.
Setting aside the debate about whether you should do it, let's assume that you have to do it. The trick now is to do it in a way that minimizes long-range effects, is easily ripped out later, and makes itself a nuisance so you remember to fix it.
The nuisance part is easy: make it issue a warning every time you execute the kludge.
The ripped-out part can be easy: I like to do this by putting the kludge behind a subroutine name. That makes it easier to update, since you compartmentalize the code. When you get your permanent solution, your subroutine can either implement it or be a no-op. Sometimes a subclass can work nicely for this too. Don't let other people depend on whatever your quick fix is, though. It's difficult to recommend any particular technique without seeing the situation.
Minimizing long range effects should be easy if the rest of the code is nice. Always go through the published interface, and so on.
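A sketch of that compartmentalizing (names invented): the kludge sits behind one method that logs a nagging warning, so no caller depends on its internals and the permanent fix later replaces a single body:

    import java.util.logging.Logger;

    public class ExchangeRates {

        private static final Logger LOG = Logger.getLogger(ExchangeRates.class.getName());

        /**
         * KLUDGE: returns a hard-coded rate until the real rate feed is wired in.
         * Callers only ever see this method, so the permanent fix changes one body.
         */
        public double usdToEur() {
            LOG.warning("Using hard-coded USD->EUR rate; see the tracker entry for the real fix");
            return 0.92;   // placeholder value
        }
    }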
Try to make the cost of the hack clear to the business folks. Then they can make an informed decision either way.
You could intentionally write it in a way that is overly restrictive and single-purposed, and would require a rewrite to be modified.
We had to do this once: make a short-term demo version that we knew we did not want to keep. The customer wanted it on a Wintel box, so we developed the prototype in SGI/X Windows. (We were fluent in both, so it wasn't a problem.)
Confession:
I have used '#define private public' in C++ in order to read data from some other code layer. It went in as a hack but works well and fixing it has never become a priority. It is now 3 years later...
One of the main reasons hacks do not get removed is the risk that one introduces new bugs while fixing the hack. (Especially when dealing with pre-TDD code bases.)
My answer is a bit different from the others. My experience is that the following practices help you stay agile and move from hacky first-iteration/alpha solutions to beta/production-ready ones:
Test Driven Development
Small units of refactoring
Continuous integration
Good Configuration management
Agile database techniques/database refactoring
And it should go without saying that you have to have stakeholder support to do any of these correctly. But with these practices in place you have the right tools and processes to quickly change a product in major ways with confidence. Sometimes your ability to change is your ability to manage the risk of the changes, and from the development perspective these tools/techniques give you surer footing.