Almost everywhere I have worked, I've met lots of people who didn't care that they produced massive amounts of boilerplate code.
For me this is one of the worst things ever: it leads to errors, it is boring, and it adds noise.
The worst example may be Microsoft's unwillingness to give us a better syntax for the annoying "INotifyPropertyChanged" stuff. You can't use auto-implemented properties; you have to introduce a big redundancy, replicating the property name in the call to "OnPropertyChanged" (or whatever your raiser method is called).
Some people go as far as to accept that most programs in many programming languages consist mostly of the same repeated code (noise), not interesting stuff (signal). See the MSDN examples, for instance: there is so much unneeded, repeated code all over the place, the horrible "INotifyPropertyChanged" pattern that ruins all the flow being only the tip of the iceberg.
However, when I raise this issue and propose solutions like AOP (PostSharp.NET) or using delegates (for the non-C# folks: anonymous functions, often written with a lambda operator), all I get is "we don't care".
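For the record, here is a rough sketch of the kind of thing I mean. It assumes a compiler that supports [CallerMemberName] (C# 5 / .NET 4.5); on older compilers a lambda/expression-based raiser gives a similar effect. The ViewModelBase and PersonViewModel names are just for illustration:

    using System.ComponentModel;
    using System.Runtime.CompilerServices;

    public class ViewModelBase : INotifyPropertyChanged
    {
        public event PropertyChangedEventHandler PropertyChanged;

        // The compiler supplies the property name, so setters never repeat it as a string.
        protected void SetField<T>(ref T field, T value, [CallerMemberName] string propertyName = null)
        {
            if (Equals(field, value)) return;
            field = value;
            var handler = PropertyChanged;
            if (handler != null) handler(this, new PropertyChangedEventArgs(propertyName));
        }
    }

    public class PersonViewModel : ViewModelBase
    {
        private string _name;
        public string Name
        {
            get { return _name; }
            set { SetField(ref _name, value); }   // no "Name" literal, no OnPropertyChanged("Name")
        }
    }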
Is anyone else here troubled by the insane amount of noise introduced by boilerplate code, and interested in thinking about ways to push solutions to the boilerplate issue?
For what it's worth, I'm completely on your side.
The boilerplate folks argue that the repetitive, redundant code is "automatic" or "consistent" and therefore doesn't contribute to code complexity. Often when a language forces developers to create boilerplate, the industry creates IDEs and other crutches to automate the process. Then, when the apparent cost of producing that boilerplate code approaches zero, people think it doesn't cost anything.
They're wrong: Boilerplate code contributes to code bulk, and anyone maintaining code has to dig through the irrelevant code to get at the important parts. Also, since auto-generated code can and often does get edited, it can hide bugs introduced by typos, incomplete renaming or other accidents. The cost of boilerplate code is not in its creation but in its maintenance - which many projects try to ignore completely.
In the '80s, I saw trade mags plastered with ads for memory leak debuggers for C++, and it was an obvious sign to me that memory management in C++ is seriously broken. Now, in the Java and C# world, I see a proliferation of code-generation assists, and that indicates to me that those languages have issues that would be better solved elsewhere.
Scala has issues of its own, but I love what they've done with properties and auto-initializing constructors.
Just kill 'em. No, really: if someone writes boilerplate code and doesn't care to improve it, I doubt we can call him a professional. It often happens that management wants to see tasks done fast, and the only thing left is to push out some write-only boilerplate code just to make them happy. If your management encourages such an approach, change your job.
As we program, we all develop practices and patterns that we use and rely on. However, over time, as our understanding, maturity, and even technology usage changes, we come to realize that some practices that we once thought were great are not (or no longer apply).
An example of a practice I once used quite often, but have in recent years changed, is the use of the Singleton object pattern.
Through my own experience and long debates with colleagues, I've come to realize that singletons are not always desirable: they can make testing more difficult (by inhibiting techniques like mocking) and can create undesirable coupling between parts of a system. I now use object factories (typically with an IoC container) that hide the nature and existence of singletons from the parts of the system that don't care or need to know; those parts simply rely on a factory (or service locator) to acquire access to such objects.
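To make that concrete, here is a minimal sketch (all names are made up) of the shape I mean: the consumer asks a factory for an interface instead of reaching for a global Instance, so tests can substitute a fake:

    public interface ILogger { void Log(string message); }

    public interface ILoggerFactory { ILogger GetLogger(); }

    public class ConsoleLogger : ILogger
    {
        public void Log(string message) { System.Console.WriteLine(message); }
    }

    public class DefaultLoggerFactory : ILoggerFactory
    {
        // The factory decides that one shared instance is enough; callers never know.
        private static readonly ILogger shared = new ConsoleLogger();
        public ILogger GetLogger() { return shared; }
    }

    public class OrderService
    {
        private readonly ILogger logger;

        // The dependency is handed in (by hand or by an IoC container),
        // so a test can pass a fake ILoggerFactory instead of hitting a global singleton.
        public OrderService(ILoggerFactory factory) { logger = factory.GetLogger(); }

        public void PlaceOrder() { logger.Log("order placed"); }
    }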
My questions to the community, in the spirit of self-improvement, are:
What programming patterns or practices have you reconsidered recently, and now try to avoid?
What did you decide to replace them with?
//Coming out of university, we were taught to ensure we always had an abundance
//of commenting around our code. But applying that to the real world made it
//clear that over-commenting not only has the potential to confuse/complicate
//things but can make the code hard to follow. Now I spend more time on
//improving the simplicity and readability of the code and inserting fewer yet
//relevant comments, instead of spending that time writing overly-descriptive
//commentaries all throughout the code.
Single return points.
I once preferred a single return point for each method, because with that I could ensure that any cleanup needed by the routine was not overlooked.
Since then, I've moved to much smaller routines, so the likelihood of overlooking cleanup is reduced (and in fact the need for cleanup is reduced), and I find that early returns reduce the apparent complexity (the nesting level) of the code. Artifacts of the single return point, such as keeping "result" variables around, keeping flag variables, and conditional clauses for not-already-done situations, make the code appear much more complex than it actually is and make it harder to read and maintain. Early exits, and smaller methods, are the way to go.
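A tiny, contrived sketch of the difference (a hypothetical method, not from any real codebase):

    public static class AgeDescriber
    {
        // Single exit point: needs a result variable and extra nesting.
        public static string DescribeSingleExit(int? age)
        {
            string result;
            if (age == null)
            {
                result = "unknown";
            }
            else if (age < 18)
            {
                result = "minor";
            }
            else
            {
                result = "adult";
            }
            return result;
        }

        // Early exits: same behaviour, flat and shorter.
        public static string DescribeEarlyExit(int? age)
        {
            if (age == null) return "unknown";
            if (age < 18) return "minor";
            return "adult";
        }
    }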
Trying to code things perfectly on the first try.
Trying to create perfect OO model before coding.
Designing everything for flexibility and future improvements.
In one word: overengineering.
Hungarian notation (both Forms and Systems).
I used to prefix everything. strSomeString or txtFoo.
Now I use someString and textBoxFoo. It's far more readable and easier for someone new to come along and pick up. As an added bonus, it's trivial to keep it consistent: camelCase the control and append a useful/descriptive name. Forms Hungarian has the drawback of not always being consistent, and Systems Hungarian doesn't really gain you much. Grouping all your variables together by type isn't really that useful, especially with modern IDEs.
The "perfect" architecture
I came up with THE architecture a couple of years ago. Pushed myself technically as far as I could so there were 100% loosely coupled layers, extensive use of delegates, and lightweight objects. It was technical heaven.
And it was crap. The technical purity of the architecture just slowed my dev team down aiming for perfection over results and I almost achieved complete failure.
We now have much simpler less technically perfect architecture and our delivery rate has skyrocketed.
The use of caffeine. It once kept me awake and in a glorious programming mood, where the code flew from my fingers with feverish fluidity. Now it does nothing, and if I don't have it I get a headache.
Commenting out code. I used to think that code was precious and that you can't just delete those beautiful gems that you crafted. I now delete any commented-out code I come across unless there's a TODO or NOTE attached because it's too perilous to leave it in. To wit, I've come across old classes with huge commented-out portions and it really confused me why they were there: were they recently commented out? is this a dev environment change? why does it do this unrelated block?
Seriously consider not commenting out code and just deleting it instead. If you need it, it's still in source control. YAGNI though.
The overuse / abuse of #region directives. It's just a little thing, but in C#, I previously would use #region directives all over the place, to organize my classes. For example, I'd group all class properties together in a region.
Now I look back at old code and mostly just get annoyed by them. I don't think it really makes things clearer most of the time, and sometimes they just plain slow you down.
So I have now changed my mind and feel that well laid out classes are mostly cleaner without region directives.
Waterfall development in general, and in specific, the practice of writing complete and comprehensive functional and design specifications that are somehow expected to be canonical and then expecting an implementation of those to be correct and acceptable. I've seen it replaced with Scrum, and good riddance to it, I say. The simple fact is that the changing nature of customer needs and desires makes any fixed specification effectively useless; the only way to really properly approach the problem is with an iterative approach. Not that Scrum is a silver bullet, of course; I've seen it misused and abused many, many times. But it beats waterfall.
Never crashing.
It seems like such a good idea, doesn't it? Users don't like programs that crash, so let's write programs that don't crash, and users should like the program, right? That's how I started out.
Nowadays, I'm more inclined to think that if it doesn't work, it shouldn't pretend it's working. Fail as soon as you can, with a good error message. If you don't, your program is going to crash even harder just a few instructions later, but with some nondescript null-pointer error that'll take you an hour to debug.
My favorite "don't crash" pattern is this:
    public User readUserFromDb(int id) {
        User u = null;
        try {
            Statement stmt = connection.createStatement();
            ResultSet rs = stmt.executeQuery("SELECT * FROM user WHERE id = " + id);
            if (rs.next()) {
                u = new User();
                u.setFirstName(rs.getString("fname"));
                u.setSurname(rs.getString("sname"));
                // etc
            }
        } catch (Exception e) {
            // Swallow the failure -- that's the whole point of the anti-pattern.
            log.info(e);
        }
        if (u == null) {
            // "Never crash": hand back a fake User instead of reporting the failure.
            u = new User();
            u.setFirstName("error communicating with database");
            u.setSurname("error communicating with database");
            // etc
        }
        u.setId(id);
        return u;
    }
Now, instead of asking your users to copy/paste the error message and send it to you, you'll have to dive into the logs trying to find the log entry. (And since they entered an invalid user ID, there'll be no log entry.)
I thought it made sense to apply design patterns whenever I recognised them.
Little did I know that I was actually copying styles from foreign programming languages, while the language I was working with allowed for far more elegant or easier solutions.
Using multiple (very) different languages opened my eyes and made me realise that I don't have to mis-apply other people's solutions to problems that aren't mine. Now I shudder when I see the factory pattern applied in a language like Ruby.
Obsessive testing. I used to be a rabid proponent of test-first development. For some projects it makes a lot of sense, but I've come to realize that it is not only unfeasible, but rather detrimental to many projects to slavishly adhere to a doctrine of writing unit tests for every single piece of functionality.
Really, slavishly adhering to anything can be detrimental.
This is a small thing, but: Caring about where the braces go (on the same line or next line?), suggested maximum line lengths of code, naming conventions for variables, and other elements of style. I've found that everyone seems to care more about this than I do, so I just go with the flow of whoever I'm working with nowadays.
Edit: The exception to this being, of course, when I'm the one who cares the most (or is the one in a position to set the style for a group). In that case, I do what I want!
(Note that this is not the same as having no consistent style. I think a consistent style in a codebase is very important for readability.)
Perhaps the most important "programming practice" I have since changed my mind about, is the idea that my code is better than everyone else's. This is common for programmers (especially newbies).
Utility libraries. I used to carry around an assembly with a variety of helper methods and classes with the theory that I could use them somewhere else someday.
In reality, I just created a huge namespace with a lot of poorly organized bits of functionality.
Now, I just leave them in the project I created them in. In all probability I'm not going to need them, and if I do, I can always refactor them into something reusable later. Sometimes I will flag them with a //TODO for possible extraction into a common assembly.
Designing more than I coded.
After a while, it turns into analysis paralysis.
The use of a DataSet to perform business logic. This binds the code too tightly to the database; the DataSet is usually created from SQL, which makes things even more fragile. If the SQL or the database changes, the change tends to trickle down to everything the DataSet touches.
Performing any business logic inside an object constructor. Combined with inheritance and overloaded constructors, this tends to make maintenance difficult.
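As a sketch of the second point (a hypothetical Invoice class, not any particular framework): keep the constructor dumb and move the database work into a named factory method, so subclassing and testing stay simple:

    using System.Data;

    public class Invoice
    {
        public decimal Total { get; private set; }

        // The constructor only stores state: cheap to call, safe to inherit, easy to test.
        public Invoice(decimal total) { Total = total; }

        // The "business logic" lives in a named factory method instead of the constructor.
        public static Invoice LoadFromDatabase(IDbConnection connection, int id)
        {
            using (var command = connection.CreateCommand())
            {
                command.CommandText = "SELECT total FROM invoice WHERE id = @id";
                var parameter = command.CreateParameter();
                parameter.ParameterName = "@id";
                parameter.Value = id;
                command.Parameters.Add(parameter);
                return new Invoice((decimal)command.ExecuteScalar());
            }
        }
    }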
Abbreviating variable/method/table/... Names
I used to do this all of the time, even when working in languages with no meaningful limits on the lengths of names (well, the limit was probably 255 or something). One of the side effects was a lot of comments littered throughout the code explaining the (non-standard) abbreviations. And of course, if the names were changed for any reason...
Now I much prefer to call things what they really are, with good descriptive names, using only standard abbreviations. There's no need for useless comments, and the code is far more readable and understandable.
Wrapping existing Data Access components, like the Enterprise Library, with a custom layer of helper methods.
It doesn't make anybody's life easier.
It's more code that can have bugs in it.
A lot of people know how to use the EntLib data access components; no one but the local team knows how to use the in-house data access solution.
I first heard about object-oriented programming while reading about Smalltalk in 1984, but I didn't have access to an o-o language until I used the cfront C++ compiler in 1992. I finally got to use Smalltalk in 1995. I had eagerly anticipated o-o technology, and bought into the idea that it would save software development.
Now, I just see o-o as one technique that has some advantages, but it's just one tool in the toolbox. I do most of my work in Python, and I often write standalone functions that are not class members, and I often collect groups of data in tuples or lists where in the past I would have created a class. I still create classes when the data structure is complicated, or I need behavior associated with the data, but I tend to resist it.
I'm actually interested in doing some work in Clojure when I get the time, which doesn't provide o-o facilities, although it can use Java objects if I understand correctly. I'm not ready to say anything like o-o is dead, but personally I'm not the fan I used to be.
In C#, using _notation for private members. I now think it's ugly.
I then changed to this.notation for private members, but found I was inconsistent in using it, so I dropped that too.
I stopped going by the university-recommended method of design before implementation. Working in a chaotic and complex system has forced me to change my attitude.
Of course I still do code research, especially when I'm about to touch code I've never touched before, but normally I try to focus on the smallest implementation possible, just to get something going first. That is the primary goal. Then I gradually refine the logic and let the design emerge by itself. Programming is an iterative process and works very well with an agile approach and lots of refactoring.
The code will not look at all like what you first thought it would. It happens every time :)
I used to be big into design-by-contract. This meant putting a lot of error checking at the beginning of all my functions. Contracts are still important, from the perspective of separation of concerns, but rather than try to enforce what my code shouldn't do, I try to use unit tests to verify what it does do.
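A rough, hypothetical illustration of the shift: a light guard clause stays, but the behaviour itself is pinned down by a test rather than by ever more defensive checks at the top of every function:

    using System;

    public static class Pricing
    {
        public static decimal ApplyDiscount(decimal price, decimal percent)
        {
            // A minimal contract check; the rest is verified by tests, not by assertions.
            if (percent < 0m || percent > 100m)
                throw new ArgumentOutOfRangeException("percent");
            return price - (price * percent / 100m);
        }
    }

    // In a test project (NUnit-style attribute names, shown for illustration only):
    // [Test]
    // public void ApplyDiscount_TakesTenPercentOff()
    // {
    //     Assert.AreEqual(90m, Pricing.ApplyDiscount(100m, 10m));
    // }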
I used statics in a lot of methods/classes because it was more concise. When I started writing tests, that practice changed very quickly.
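A small, made-up example of what bit me: the static version is welded to the real clock, while the instance version takes its dependency and is trivial to test:

    using System;

    public static class HappyHourStatic
    {
        // Hard-wired to the real clock: impossible to test deterministically.
        public static bool IsHappyHour()
        {
            return DateTime.Now.Hour >= 17;
        }
    }

    public class HappyHourChecker
    {
        private readonly Func<DateTime> clock;

        // The clock is injected, so a test can pass () => new DateTime(2009, 1, 1, 18, 0, 0).
        public HappyHourChecker(Func<DateTime> clock)
        {
            this.clock = clock;
        }

        public bool IsHappyHour()
        {
            return clock().Hour >= 17;
        }
    }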
Checked Exceptions
An amazing idea on paper - defines the contract clearly, no room for mistake or forgetting to check for some exception condition. I was sold when I first heard about it.
Of course, it turned out to be such a mess in practice. To the point that libraries today, like Spring JDBC, list hiding the legacy checked exceptions as one of their main features.
That anything worthwhile was only coded in one particular language. In my case I believed that C was the best language ever and I never had any reason to code anything in any other language... ever.
I have since come to appreciate many different languages and the benefits/functionality they offer. If I want to code something small - quickly - I would use Python. If I want to work on a large project I would code in C++ or C#. If I want to develop a brain tumour I would code in Perl.
When I needed to do some refactoring, I thought it was faster and cleaner to start straightaway and implement the new design, fixing up the connections until they work. Then I realized it's better to do a series of small refactorings to slowly but reliably progress towards the new design.
Perhaps the biggest thing that has changed in my coding practices, as well as in others', is the acceptance of outside classes and libraries downloaded from the internet as the basis for behaviors and functionality in applications. When I attended college, we were encouraged to figure out how to make things better with our own code and to rely on the language to solve our problems. With the advances in all aspects of user interfaces and service/data consumption, that is no longer a realistic notion.
There are certain things which will never change in a language, and having a library that wraps this code in a simpler transaction and in fewer lines of code that I have to write is a blessing. Connecting to a database will always be the same. Selecting an element within the DOM will not change. Sending an email via a server-side script will never change. Having to write this time and again wastes time that I could be using to improve my core logic in the application.
Initializing all class members.
I used to explicitly initialize every class member with something, usually NULL. I have come to realize that this:
normally means that every variable is initialized twice before ever being read
is silly, because most languages automatically initialize variables to NULL anyway
actually incurs a slight performance hit in most languages
can bloat code on larger projects
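A small C# sketch of what I mean (the field names are made up): the explicit initializers merely repeat what the runtime already does:

    public class Customer
    {
        // Redundant: reference fields already start as null, value fields as zero/false.
        private string name = null;
        private int orderCount = 0;
        private bool isActive = false;

        // What I write now: the same runtime behaviour, less noise.
        // private string name;
        // private int orderCount;
        // private bool isActive;

        public override string ToString()
        {
            return name + " (" + orderCount + " orders, active: " + isActive + ")";
        }
    }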
Like you, I also have embraced IoC patterns in reducing coupling between various components of my apps. It makes maintenance and parts-swapping much simpler, as long as I can keep each component as independent as possible. I'm also utilizing more object-relational frameworks such as NHibernate to simplify database management chores.
In a nutshell, I'm using "mini" frameworks to aid in building software more quickly and efficiently. These mini-frameworks save lots of time, and if done right can make an application super simple to maintain down the road. Plug 'n Play for the win!
A few years ago, we needed a C++ IPC library for making function calls over TCP. We chose one and used it in our application. After a while, it became clear it didn't provide all functionality we needed. In the next version of our software, we threw the third party IPC library out and replaced it by one we wrote ourselves. From then on, I sometimes doubt whether this was a good decision, because it has proven to be quite a lot of work and it obviously felt like reinventing the wheel. So my question is: are there disadvantages to code reuse that justify this reinvention?
I can suggest a few:
The bugs get replicated - if you reuse buggy code :)
Sometimes it may add additional overhead. For example, if you just need to do one simple thing, it is not advisable to pull in a complex, big library that happens to implement the required feature.
You might face some licensing concerns.
You may need to spend some time learning and configuring the external library. This may not be worthwhile if developing the feature yourself would take less time.
Reusing a poorly documented library may take more time than expected/estimated.
P.S. The reasons for writing our own library were:
Evaluating external libraries is often very difficult and it takes a lot of time. Also, some problems only become visible after a thorough evaluation.
It made it possible to introduce some features that are specific for our project.
It is easier to do maintenance and to write extensions, as you know the library through and through.
It's pretty much always case by case. You have to look at the suitability and quality of what you're trying to reuse.
The number one issue is: you can only successfully reuse code if that code is GOOD code. If it was designed poorly, has bugs, or is very fragile then you'll run into the same issues you already did run into -- you have to go do it yourself anyway because it's so hard to modify the existing code.
However, if it's a third-party library that you are considering using that you don't have the source code for, it's a little different. You can try and get the source if it's that kind of library. Some commercial library vendors are open to suggestions and feature requests.
The Golden Wisdom :: It Has To Be Usable Before It Can Be Reusable.
The biggest disadvantage (you mention it yourself) of reusing third-party libraries is that you become strongly coupled to, and dependent on, how that library works and how it's supposed to be used, unless you manage to create a middle interface layer that can take care of it.
But it's hard to create a generic interface, since replacing an existing library with another one more or less requires that the new functionality works in similar ways. However, you can always rewrite the code that uses it, but that might be very hard and take a long time.
Another aspect is that if you reinvent the wheel, you have complete control over what's happening and you can make modifications as you see fit. This can be completely impossible if you are depending on a third-party library staying alive and constantly providing you with updates and bug fixes. On the other hand, reusing code this way lets you focus on other things in your software, which sometimes might be the right thing to do.
There's always a trade off.
If your code relies on external resources and those go away, you may be crippling portions of many applications.
Since most reused code comes from the internet, you run into all the issues with the Bathroom Wall of Code Atwood talks about. You can run into issues with insecure or unreliable borrowed code, and the more black boxed it is, the worse.
Disadvantages of code reuse:
Debugging takes a whole lot longer, since it's not your code and it's likely to be somewhat bloated.
Any specific requirements will also take more work, since you are constrained by the code you're reusing and have to work around its limitations.
Constant code reuse will, in the long run, result in bloated and disorganized applications with hard-to-chase bugs - programming hell.
Reusing code can (depending on the case) reduce the challenge and satisfaction factor for the programmer, and also waste an opportunity to develop new skills.
It depends on the case, the language and the code you want to re-use or re-write. In general I believe that the higher-level the language is, the more I tend towards code reuse. Bugs in a higher-level language can have a bigger impact, and higher-level code is easier to rewrite. High-level code must stay readable, neat and flexible. Of course that could be said of all code, but, somehow, rewriting a C library sounds less of a good idea than rewriting (or rather re-factoring) PHP model code.
So anyway, these are some of the arguments I'd use to promote "reinventing the wheel".
Sometimes it's just faster, more fun, and better in the long run to rewrite from scratch than having to work around bugs and limitation of a current codebase.
Wondering what you are using to maintain this library you reinvented?
The initial effort to create reusable code is more expensive and time-consuming.
When the master branch has an update, you need to sync it and deploy again.
The bugs get replicated - if you reuse buggy code.
Reusing poorly documented code may take more time than expected/estimated.
Let's say you're the lucky programmer who inherited code that is close to software rot. Software rot, as described in The Pragmatic Programmer, is code so ugly (in this case, unrefactored code) that it is compared to a broken window that no one wants to fix, which in turn can damage a whole house and let crime thrive in the neighborhood.
But it is also the kind of code that Joel Spolsky, on Joel on Software, values because it contains valuable patches that have been debugged throughout its lifetime (even though it may look unstructured and ugly).
How would you maintain this?
Have a look at Working Effectively with Legacy Code by Michael Feathers. Lots of good advice there.
WELC is a great book. You should certainly check it out.
If you don't want to wait for the book to arrive, I can summarise the bits I think are important:
You need to understand your system. Do some throwaway coding to understand the part you need to work on. For example, be prepared to do some work to get the system under test, knowing that you will probably break it, and then understand what went wrong.
Look for areas where you can break dependencies. Michael Feathers calls these seams: points where you can take a bit of the legacy system and refactor it so it becomes testable. A rough sketch of one such seam follows below.
As you work on the system, add tests as you go.
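Here is that sketch, a hypothetical example of the "extract and override" move from the book: the external call is pulled into a virtual method that a test subclass can replace, without touching the legacy calculation logic:

    public class InvoiceProcessor
    {
        public void Process(int invoiceId)
        {
            // ... legacy calculation logic stays exactly as it was ...
            SendConfirmation(invoiceId);
        }

        // The seam: tests override this instead of hitting a real mail server.
        protected virtual void SendConfirmation(int invoiceId)
        {
            // the original call to the mail gateway lives here
        }
    }

    public class TestableInvoiceProcessor : InvoiceProcessor
    {
        public int LastConfirmedInvoice;

        protected override void SendConfirmation(int invoiceId)
        {
            LastConfirmedInvoice = invoiceId; // record instead of sending mail
        }
    }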
You can do a few things:
Refactor the code to make it more maintainable. If the code is being used for feature development as well then refactoring will make sense.
If the code is legacy code and is touched only for bug fixes then I would suggest you only fix as much as required and when required.
Often, the first impression people get from such inherited legacy code is that it's messy. Give it some time and get comfortable with it. In time, you may come to see some valid reasons why the code looks the way it does...
First, make sure that you have a robust test procedure for it, and that it will actually be tested again in depth, by several people (you, QA, ...).
Then, take some time, day after day, to improve the small parts you have to modify. The key is to have a management that understands why it takes longer than expected. Explain that you have to do refactoring and that it is important for both the short and the long term, and ask other developers to review the existing code and confirm your arguments.
What marks a piece of code as legacy code? Any useful metrics will be fine.
One of the things that I look for in code is unit tests. They give you the freedom to refactor it. So if the code does not have tests, I consider it legacy code.
If the code:
has been replaced by newer code that implements the same functionality or better
is not being used by current systems
is soon to be replaced by something else altogether
has been archived for historic reasons
is no longer supported by its vendor
...then I'd call it legacy.
We use the term "legacy" to refer to any code, still in use, developed using technology we have ceased active development in.
It is code that we would rather rewrite using more recent tools than modify in its current state.
Michael Feathers, author of the excellent Working Effectively with Legacy Code, defines it as any code that does not have tests.
A better question would probably be what marks a piece of code as non legacy.
To me, legacy means unchangeable. So as soon as you're no longer 'able' to change it, it's legacy.
Whether that ability is removed by fixed requirements, fear of breakage, knowledge loss, or some other impact is largely irrelevant.
A related note is that I don't think I'd ever use the exact word legacy as it stirs up too many emotions to be useful.
I don't believe there is a definitive answer, but I do believe that the likelihood that code is legacy code increases with the number of people who don't want to touch it and the likelihood that changing it will cause it to break.
the term "legacy code" is subjective and is probably a loaded term. but in general I subscribe to the view that legacy code is one that is not unit-testable and as such is hard to refactor.
When the code is old enough that you never met the developer who originally wrote it.
When 3rd party libraries aren't supported anymore.
In my opinion, all code that is written is legacy code. It might take some time before the original intent and all the decisions made about the code are forgotten, but sooner or later you cannot imagine what they were thinking while writing it. You never write legacy code yourself, right?
Neither unit tests nor some measure like seconds-since-the-developer-left-the-building really tells you whether the code is legacy code. Legacy code may have a good set of unit tests and comments, and it may have undergone strict code review and other analysis. That doesn't mean the code is still relevant for the program at hand; it just suggests that the code might be comparably well written. And if it is no longer relevant, the code will actually make it harder to solve the problem the program is developed for.
Legacy code has been defined in many places as "code without tests". I don't think they are specific in the types of tests, but in general, if you can't make a change to your code without the fear of something unknown happening, well, it quickly devolves.
See "Working Effectively with Legacy Code"
I may be wrong, but I don't think there is an established metric for this.
Usually a piece of code is deemed legacy when it has seen at least 5-6 release cycles (maybe more). More often than not, the original implementors are no longer around and the code is maintained by people who didn't write it.
Almost seconds after the devs leave the premises. :)
If...
there's no money in the bank for new features
you can't find anyone that admits working on the project that needs fixing
the source code to the project you own has gone MIA
...then you're working on legacy code.
Usually people refer to something as legacy code when no one is still around that is familiar with or feels comfortable maintaining the code.
Unit tests make it easier for people unfamiliar with code to dig into it, so the theory is it helps prevent code from becoming "legacy".
Often, when code is legacy, it is changed in a different manner. People are afraid to change it, but the changes they do make tend to be quick and dirty because nobody understands the full consequences. Code duplication issues may arise because people don't want to take the risk associated with deeper changes.
So, in such circumstances, the situation may get worse, at an increasing rate.
I don't know of any real metrics that can be used to determine if something is "legacy code" or not, but anything older than just written could be considered legacy. Legacy code means different things to different people/organizations, so it really is somewhat subjective.