Does writing and speaking on software make you a better programmer?

Do you think that writing about software (i.e. having a blog) and speaking on software (and concepts) make you a better programmer?

Statistically speaking, yes. You retain only about 20% of what you read and hear, but 80% of what you teach.
By writing about something or teaching about it, you force yourself to understand the concepts on a much deeper level.
UPDATE:
I wanted to update this with some links to more concrete data to support the statistics that I have been taught numerous times about learning retention rates. However, it would appear there is some controversy surrounding these numbers, even though the NTL Institute for Applied Behavioral Science maintains that research was done to back them up.

I believe that is the case. As with teaching, you develop a firmer grasp on a subject when you have to explain it to someone else. You see what you do and don't understand in much greater detail.

Yes. In the workforce, being able to communicate effectively is as important as, and sometimes more important than, knowing every obscure detail about language X.

Absolutely yes. You have the chance to be challenged and questioned and second-guessed in ways you'd never think of on your own. It also gives you a chance to work on the organization and presentation of your ideas. All of this will feed back into the decisions that you make when you're writing code.

I would argue the opposite: that in general the good programmers love to write and speak about software. It shows that they are passionate about it, and won't accept crap.

Absolutely. Knowledge that is never used is useless. Talking about technologies, languages, methods, development processes, books, etc. greatly broadens your overall experience and points out possible paths of professional growth.

Absolutely, for one simple reason: it challenges your preconceptions. You could write an article about how perfect .NET is for a given situation, only to hear from someone who used it there and watched it turn out badly.

I think the main thing these activities do is force you to research things more thoroughly and to explore new ones. Does this make you a better programmer? I think so.

I think it encourages you to be a better programmer generally, by making you articulate your opinions and read your readers' responses. I don't think having a blog or being demonstrably knowledgeable about development makes you inherently better, but it might motivate you to be better so you can keep up with your posts.

If you tend to write or speak about software then that means you're thinking about it and you have opinions. Caring about it enough to write makes you a better programmer.

I think that being able to speak and write well makes you a better developer. Not necessarily because it will improve your programming skills, but because software development is a lot more than just banging out code. Whether it's for a company or an open source project, all but the smallest pieces of software are team products. In this environment, it's the developer who can best communicate who will make the biggest contribution, not necessarily the best coder.

Teaching about software absolutely makes you a better programmer. Writing on a blog is not so far off.

Most of the things that I learned about .NET, I learned when I was reviewing it to be able to train newbie devs. So yes, speaking on software helps a whole lot.

Yes. If you get feedback (e.g., blog comments), then doubly so. Others will invariably think of something that you didn't, but they might never have had the chance to tell you if you hadn't spoken up first.


How large a role does subjectiveness play in programming?

I often read about the importance of readability and maintainability. Or, I read very strong opinions about which syntax features are bad or good. Or discussions about the values of certain paradigms, like OOP.
Aside from that, this same question floats about in my mind whenever I read debates on SO or Meta about subjective questions. Or read questions about best practices and sometimes find myself or others disagreeing.
What role does subjectiveness play within the programming realm?
Sometimes I think it plays a large role. Software developers are engineers in a way, but also people. A large part of programming is dealing with code that's human-readable. This is very different from math or physics or other disciplines with very exact and structured rules. Here the exact structure and rules are largely up in the air, changeable on a whim, hence the number of languages in existence. One person may find one language very readable, while another finds their own language the most comfortable.
The same with practices. One person may not like certain accepted practices. I myself find splitting classes into different files very unreadable, for instance.
But, I can't say rules haven't helped in general. Certain practices have and do make life easier. And new languages have given rise to syntax and structure that make life easier. There's certainly been a progression towards code that is easier to read and maintain even given a largely diverse group of people. So maybe these things aren't as subjective as I thought.
It reminds me, in a way, of UI design. Certainly it's subjective, but then there's an entire discipline involved in crafting good UI and it tends to work.
Is there something non-subjective about the ideas behind maintainability, readability, and other best practices? Is there something tangible to grasp when one develops a new language or thinks of new practices?
Arguably your question is really about the distinction between programming, which is mathematical, algorithmic and scientific, and software engineering, which is subjective, variable and human-focused.
Great programmers are not necessarily great software engineers, and vice versa. The two skillsets, while not exclusive by any means, have less overlap than they appear at first. Their relative importance depends a lot on the project: a brilliant programmer working alone can turn out amazing examples of technical genius, and it doesn't matter that nobody else can understand or maintain it, because he's not going to share the code anyway. But move into an enterprise environment -- like corporate in-house software development -- and I'll gladly trade you ten "cave troll" geniuses for a mediocre programmer who understands the importance of readability and documentation.
It's been my experience that the world needs great software engineers more than it needs great programmers. Relatively few people in this day and age are writing software which is truly performance-critical (OS kernels, compilers, graphics engines, realtime embedded systems, etc), and the Internet allows mediocre programmers to quickly grab algorithmic solutions for problems they couldn't solve alone. But nearly everyone writing professional code has to work within a team. And team productivity rises and falls dramatically on the ability of its members to communicate effectively and distribute workload efficiently, two skills which are highly subjective and impossible to prove by rigid formula.
Most software engineering principles are built on experience rather than objective law. Much like the social sciences, we study, learn, adapt and apply -- but with no real guarantees of outcome. All we can say is that some things seem to work better than others in most groups.
I think a lot of it is necessarily determined by how much our minds are able to process at one time. So it comes down to how well the language and tools enable a team or a developer to break the problem into chunks that are meaningful by themselves, but not so large that they become too hard to grasp. The common theme is the art of organizing information (in this case, the code, the logic, ...). But that's not so different from maths or physics, by the way.
Just as the best authors borrow from many styles, the best programmers keep a huge range of patterns in their mental arsenal. Slavishly following a few patterns and adhering to some absolute truth is both lazy and dangerous.
Put it another way, the day we rely on robots for code review is the day I quit.
It all depends on your point of view :-)
But to answer your questions, I think one way to view subjectivity is to recognize that software languages, tools, and best practices are a shared means of communication among individuals. Yes, a programming language is a formal way of instructing a computer how to behave, but a programming language may also be viewed as a way to define and communicate specifications to a high level of detail (the code is the ultimate spec, is it not?).
So as far as we may want to concern ourselves with the degree of subjectivity in software languages, tools, and best practices, I would say that the lack of subjectivity may indicate how well communication is facilitated.
Yes, individuals have certain proclivities that are expressed in their habits and tendencies, but that should not ultimately matter too much in the perfect platform for development.
Turning to my maths PhD wife, I asked if there's any subjectivity in mathematics. Her answer: yes, there is, mainly in the way we as humans arrive at the answer.
If a mathematical proof is the result, how you get to that result can vary. If the dataset is large you may need to use a computer, which can introduce errors, and thus debate about whether that is the right approach. Or sometimes mathematicians disagree on the theory: one is trying to prove that x is true while the other is trying to prove that x is false.
I think the same thing exists in computer science. A correct answer is a program that runs correctly, but that definition of correct may be different for each project. Sometimes correct means no bugs. Sometimes it means running efficiently.
From here programmers can argue how best to achieve the "correct" result. A good example of this is the FizzBuzz application. A simple answer would be just a for loop, but Enterprise FizzBuzz is also "correct" in that it produces the right output; it is generally laughed at as "bad" engineering because it wildly overcomplicates the idea (it was a joke app, after all).
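For reference, a minimal sketch of that "just a for loop" answer (in C#; one of many defensibly correct versions):

```csharp
using System;

class FizzBuzz
{
    static void Main()
    {
        // The simple, non-enterprise answer.
        for (int i = 1; i <= 100; i++)
        {
            if (i % 15 == 0) Console.WriteLine("FizzBuzz");
            else if (i % 3 == 0) Console.WriteLine("Fizz");
            else if (i % 5 == 0) Console.WriteLine("Buzz");
            else Console.WriteLine(i);
        }
    }
}
```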
How large a role does subjectiveness play in programming? I'd say it's a very large part of what we do, simply because we are human, and because there are multiple ways of getting the "correct" answer so there is disagreement over which way is the best.
Studies have been done showing that certain practices reduce defect rates in software. For instance, one study found a strong correlation between cyclomatic complexity and the probability of a module being fault-prone. Other studies put the average effectiveness of design and code inspections at 55 and 60 percent, respectively. So it appears to be in our best interests to favor simplicity, check metrics, and do code reviews.
We're talking probabilities here, though. If I review your code, I'm not guaranteed to find 60% of your bugs. There are also few absolutes in software development; experienced developers know that the correct answer is generally "it depends." That said, there are a number of practices with objective data in their favor.

Why do newbie programmers seem to shy away from libraries?

I've noticed many questions on here from new programmers that can be solved using libraries. When a library is suggested, oftentimes they respond, "I don't want to use X library." Is it the learning curve, or something else? Just curious!
A lot of new programmers are still working at a very low level of abstraction, learning the trade. That's something everyone has to go through. It takes a while to "move up the stack" so to speak.
Once programmers realise that they spend most of the time solving the same problems as someone else already did, and the goal is to realise "business value", then they can really appreciate the value a good library brings.
When you're still learning the ins and outs of a new language, also having to learn how to use a 3rd party library can look like too much work. Also, libraries tend to be badly documented - or at least have documentation that seems totally opaque to a new(er) programmer.
So, faced with trying to solve problem X, being told "use a library" can sound a lot like "solve problem Y, THEN problem X".
(Also, their professors told them not to. I managed to get all the way through my undergrad in C++ without learning the STL existed. Boy, did THAT cook my noodle.)
Some people, when confronted with a problem, think "I know, I'll use a library." Now they have two problems.
Seriously - this is a reasonable way for a newbie, already overwhelmed by new language, programming environment, paradigms, keystrokes, etc. to react to the suggestion to use a library. If you've got a solution, but it's not working, there are many potential sources of error; sorting through them is a challenge. Adding to them can seem irrational.
"Use a library" means find the library, download it, install it in your project, and call the necessary function. Not hard, if you're used to it (and there aren't corporate policies against it, and you have reason to trust the vendor, and the library itself has minimal dependencies, etc.). But if it's all new to you, when you ask a programming question and get back a system configuration answer, it can seem unhelpful (even if it is not, in fact).
Almost always it's because their professor has told them that they can't.
Sometimes it's just because they want to learn it themselves, but I'd say that's rare.
It's the learning curve.
Using libraries is probably one of the worst things a learning programmer can do. Instead of learning how to code, they're learning how to use specific APIs that other people implemented. I'm not saying that every programmer has to understand every single thing that they use, but programmers who know the ins and outs of a computer (digital logic, assembly opcodes, etc.) usually have an edge over people who started with something like Java Swing and are just throwing together libraries.
In production, this is a different matter, of course. But I think the best course of education is to 'make everything' once, at least. Writing my own web application framework from the ground up really improved my programming skills and my ability to work with abstractions. That doesn't mean I'll use that framework if someone hires me to build them an application, but I know the strengths, weaknesses, and reasons behind the things that the 'giant' frameworks do, and that can help me choose a particular framework for a particular situation.
I remember shying away from several libraries simply because I wanted to see if I could create my own algorithm. I didn't want to just give up and let someone do the work for me but rather I wanted to learn from my mistakes. Once I had come up with a solution I was happy with, I looked into the libraries.
So for me it was simply wanting to see if I could do it.
I always have this urge to do it myself, but sometimes I can see my own limitations.
Just recently I downloaded a library to create PDF documents, but that's pretty much the only time I can remember.
At least for me, (trying to) do things myself is my way of learning.
My impression is that many newbie programmers wouldn't consider it their own work if they were to use someone else's libraries.
I don't think that this is necessarily a bad thing. Using libraries is great; it saves time, effort, bugs, etc. However, you learn very little in the process, and for new programmers, learning is the goal. To answer the question, I think they tend to shy away from libraries simply because they are not used to using them, and perhaps they don't know that they exist.
For many poorly documented libraries that are either implemented loosely or in languages that don't allow you to control containment and visibility very well, it can be quite difficult to guess just how the library is supposed to be used.
After you've used it for a while, you've gotten used to the quirks or read other source code that taught you the right way; but until then it can be pretty irritating to use a poorly put-together/designed library. (or even a well designed one that isn't terribly well documented).
If you don't have the source code to the library, that's another problem--you have no control over the ability to keep your program working. This is much more rare these days, but still happens in the case of a purchased library.
Most of the points have been covered already (for me the main one is the learning curve), but one other factor I think plays a part:
Because learning about a library is less exciting than coding the same functionality yourself.
More libraries = fewer billable hours.
I think there's a lot of time that needs to be invested in understanding the library's purpose - yes, a learning curve, but it's more that newbie programmers probably don't know what they need until they have a lot more experience.
Because it's fun.
Because part of maturing as a developer is learning to quickly identify problems which can be solved by a library or existing solution and which need personal attention.
When you're trying to learn how to do things, anytime something is accomplished "magically" by calling AwesomeClass.doAwesomeStuff(), you end up giving away a portion of control. When you are "new" and don't know what you're giving away or why it can be unsettling. This was my primary knock against Rails when I was first learning it. So many things just "worked" and I didn't know why without digging through lots of Rails source (which I generally didn't have time to do).
At least, that's my take on it.
The same reason that more experienced developers do -
Because it can often be as difficult to learn how to use a library as to write the part of it you need yourself. And at least then you can understand how it works when it doesn't do what you expect.
An experienced developer just has practice at figuring out how to use libraries, so he's more likely to consider it. For an inexperienced developer, it's one more thing to learn...
I'm a programmer, not a psychologist! :)
It was a long, long time ago for me, but it was because I wanted to learn and experience. I didn't want to use something I did not understand, so if I didn't think I understood the library and could program it myself, I tried not to use it. There might have been a bit of fear too; programming gives you a feeling of control, and using a library is like giving away this control.
Answer from a noob -
"I am not sure how to use the libraries or even how to access them or how it works"
Libraries often come with the overhead of learning some API and its paradigm. It can get complex fairly quickly, and I can easily understand that beginners would prefer something a bit more in their comfort zone. From my experience, most libraries and frameworks do a great job of abstracting some tedious routine, but when I need to extend that functionality, or use it in a way that wasn't intended, it can be a handful.
I think it's one of those things where "practice makes perfect".
Well, the newbie's purpose might be more solving the problem than implementing a solution. Perhaps what they really want to do is figure out how to solve the problem. I mean, if they're still heavily in the learning phase, it's quite possible they don't want easy answers handed to them.
I think the professors want them to stick to the basics. When I graduated from undergrad school, I knew C++, Java and some other languages, but had no clue about the libraries and frameworks being used in companies. It was like: do you know Java? Yes. Can you write a servlet? No.
As for speed demons: they rarely use 3rd-party libraries, and new programmers are usually looking to squeeze every ounce of speed out of their code. I think they feel that if they don't have control over their code, they can't get the performance they're looking for. At least that's why I avoided libraries when I first started to program.
I remember programming my first DAL and avoiding all the free libraries out on the web because I wanted my code to perform at top speed. Later, I discovered that usually it's not the code that's the bottleneck; it's actually the database.
Some open source libraries are buggy or not as efficient as others.
In my eyes another factor is that additional libraries add complexity. Programs tend to get harder to understand, harder to maintain, and buggier as they get more complex. I think what makes especially new programmers shy away from libraries is that adding library code increases complexity more than adding your own code, simply because understanding how the library works is still out of their grasp. So it seems to be a problem of both skill and psychology.
I think more fundamental issues can be recognized as deterrents to using existing libraries.
Part of this as "newbie programmers" is a lack of exposure to libraries. If you don't know they exist, how do you know to use them?
The number of options available. Let's say I'm really interested in learning more about MVC, but I have to choose between CakePHP and Smarty and Zend and ... well, you can quickly see the gears working to find a way to achieve the goal without investing the time in experimenting. Take a look at Freshmeat or SourceForge to get a better sense of the daunting selection of libraries available.
Questionable support combined with sketchy or outdated documentation for the libraries. Do I want to use a tool that may no longer work or may be abandoned in the future? A project is likely to evolve, and so is the library's own project. Will its usefulness last the lifetime of my project, or will I be required to redo this work again?
Using a library requires you to understand the relatively complex design of the library, something that new programmers might not have mastered because all they've ever written is simple/procedural/single-purpose code. For example, to an experienced programmer standard design patterns like template method, observer and command seem pretty obvious, but to a newbie it all just seems like magic and/or unnecessary complexity. For me the turning point was when I got good enough to grok design patterns and write some basic reusable code.
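To illustrate why such a pattern looks like magic until it clicks, here is a minimal template-method sketch (class and method names are hypothetical):

```csharp
using System;

// Template method: the base class fixes the skeleton of the algorithm;
// subclasses fill in the steps. Obvious once seen, "magic" before that.
abstract class ReportGenerator
{
    public void Generate()              // the invariant skeleton
    {
        var data = FetchData();
        var body = Format(data);
        Console.WriteLine(body);
    }

    protected abstract string FetchData();        // steps a subclass supplies
    protected abstract string Format(string data);
}

class CsvReportGenerator : ReportGenerator
{
    protected override string FetchData() => "a,b,c";
    protected override string Format(string data) => data.Replace(',', ';');
}
```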
It's been a long time now, but when I came out of college, I knew nothing of libraries. This was in the days of mainframes and mini-computers. Our college had a VAX and the managers were paranoid about students hacking the system, so didn't allow us to even see the library manuals. So, when I first came out of college, I didn't even think of libraries being available.
I am sure there are a lot of reasons why the newbie doesn't want to use the new library. But wouldn't this be a good opportunity, if you have enough time, to show them what the advantage of using the library is? With the people I work with, I will usually provide an example of why something is better than their approach. It helps them learn and mature as a programmer.
It happens that noobs use libraries without knowing it, but when they must import or add one that is rather poorly documented, there is a fear of the unknown. That happens mostly with compiled languages!
In interpreted languages (or ones compiled in real time), especially when there is a console, such fear is almost non-existent, since you can require the library and see if it fails, or call a method and see what it returns.
Consoles are tools of bravery!

Does pair programming work when there is a skills impedance mismatch?

For example, can an experienced coder with limited C#.NET experience be successfully paired with an experienced C#.NET coder with the secondary aim of getting the former up to speed with C#.NET?
Absolutely. Sharing knowledge is one of the points of pair-programming (along with the useful dynamic of having one person type for a bit and the other review as they do it).
In my experience, it's one of the most effective ways of doing so - and allows the less experienced coder still to usefully contribute (it takes less experience to review what an expert is doing and make sensible comments/interventions than to do the entire job).
That depends on the personal chemistry between them. If the more experienced programmer is willing and able to share his knowledge, and let the less experienced programmer participate in the development through writing code and discussions, I would say that it is a very efficient way of learning.
Yes, I find good pair programming is always two way, it's essentially a piece of social engineering masquerading as an IT innovation.
Yes, this will work. If 1) the programmer with limited experience is receptive to learning C# and 2) the other programmer is willing to teach C#.
When the skill mismatch is high, it does become more of a teacher/student relationship. This isn't bad, but it can waste the skilled person's time.
However, even if it's impractical or a waste, it can be very useful to have a very occasional pair session! Even if the student is overwhelmed or it's awkward, sometimes it's useful for "students" to see how people of top level work and think. It helps give them an idea of the problems/skills/methods of high quality work. Think of it as a high school student visiting a research laboratory. It's a waste for the pro scientists to teach the high school student, but the 1 hour visit can help focus the student and give them a glimpse of the ultimate goals...
I remember why I chose Emacs as my editor. I just happened to sit near an expert user and, rudely peering over his shoulder, I watched him rearrange and navigate code super-quickly. I watched for less than a minute and I never talked to him; he may not even have noticed I was watching! But I was floored, and decided to learn Emacs. Ten years later I still don't have as much skill as that expert, but I don't regret my decision to change editors, since I got a glimpse of what was possible.
Personally, I think that would work well, and it is one of the goals of pair programming, but how successful it is will depend on the two programmers. If programmer 1 (the one learning C#) puts in some extra time to truly get up to speed, and programmer 2 (the other one) has the patience and desire to teach, it should be good for both.
You can certainly do this - we've done it in the past. But you have to accept that you trade off the "code quality" benefits against training benefits. There's no free training ride, I'm afraid.
It works to some extent. Usually it's one leading the other... so it's not much pair programming in that sense.
It depends heavily on the experienced coder's skill to teach and the other coder's skill to learn quickly.
Yes, but only if the better person is patient and willing to teach and the worse person is willing to learn. I've pair programmed with people not as good as me and it was tedious, but I think they learnt from it. I've pair programmed with people that are better than me and I certainly learnt from it. Depends on the people really.
It can be effective with the following caveat: You must switch partners.
I've actually been in this situation and, if the gap is large, it can be very taxing for both members of the pair. Best to switch partners after a few hours, with the time varying according to your tolerance and the size of the gap. If that option isn't available, mix in some solo programming.
There's a saying that a team is only as strong as its weakest link. Pairing the strongest with the weakest has traditionally been the best strategy, because the weakest learning from the strongest potentially ensures the greatest amount of learning. If there is a worry about the strongest being uninterested, then replace them with someone who would really be the strongest.
It all depends on the personalities of the developers; there is no hard and fast rule.
One thing that is certain is that the experienced developer will be less productive when working with an inexperienced developer. I personally think there needs to be a good match of abilities when pair programming. It is, however, a very good way of getting inexperienced developers up to speed.
Although it's a good idea, in practice it may not be useful. To train somebody, you can organize training and assign a mentor who can help and guide. The mentor can assign work from the real project and monitor it.
Pair programming should be between relatively experienced people if you want to get the benefits of the concept. In my view, pair programming with an inexperienced person brings a loss of productivity, and I'm not sure how much the person will pick up when somebody is constantly checking on him. Assigning a task, giving him the chance to develop it independently, and then reviewing it provides good self-directed learning.
Yes, but the approach needed to make it effective may not be clear at first. The task being pair programmed on should be the task of the less experienced programmer (we will call him Michael). I would also have Michael start the pair programming session so as to explain the objective of the session. This approach puts Michael in the driver's seat, where the more experienced programmer (we will call him Bill) serves more of a mentoring role.
Typically Bill will either take or be given more complex tasks to work on. This approach allows Michael to work on tasks that are more suited to his experience level. I would recommend switching off at 30-minute to one-hour intervals at first, so that Michael can get used to the process of giving someone else control. You can slowly shorten these switch-offs to 15-minute intervals, or whatever works best for the two developers.
I think that the final results you get depend on the guys that are doing this. In this case, you'd probably end up with one leading the other (and where the other is just paying attention to understand the language features the first one is using).
It depends on how much skills impedance mismatch we're talking about.
Two programmers who are each good in different languages can grow quickly with it, obviously at the cost of a small slowdown for the one who is expert in the current project's language.
If the difference is too big (for example, a veteran and a rookie), it might instead be preferable to start with some other kind of approach, to avoid the risk of it becoming highly counterproductive.
Always beware extreme pair programming!

How can I become better at realizing how to solve a particular problem?

I have become pretty fluent in a few different languages now, but I seem to have a hard time figuring out the best way to go about solving particular problems. What are some ways to get better at the actual problem-solving part of programming?
Experience. Solving something completely new is hard. The best way to solve a problem is to find a similar problem that you've solved before and adapt your solution to the new one. So until you have experience with many different kinds of problems, it's hard to solve the new ones you come across. Visiting sites like this and reading questions and their answers is a great way of learning how others solved the problems they encountered.
Basically, "just do it". When you have to make a choice, just make any choice (except flipping a coin).
Once you have something that works, then sit back and scratch your head about what you did wrong and how to do it better.
If you have absolutely no clue how to do even that, just solve a part of the problem completely and move on.
I suggest checking out this book. They aren't the best kid on the block, though they want us to think they are...but they did well with Basecamp.
When all you have is a hammer, everything begins to look like a nail.
So, make sure you're well versed in algorithms and data structures. When you study them, think hard about what sort of uses a particular algorithm is good for.
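For instance, knowing which data structure fits which use is exactly that kind of fit-to-purpose knowledge. A small illustrative C# sketch of a classic case:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class MembershipDemo
{
    static void Main()
    {
        var words = Enumerable.Range(0, 100_000).Select(i => "w" + i).ToList();

        // List<T>.Contains scans every element: O(n) per lookup.
        bool viaList = words.Contains("w99999");

        // HashSet<T>.Contains hashes the key: O(1) per lookup on average.
        var set = new HashSet<string>(words);
        bool viaSet = set.Contains("w99999");

        Console.WriteLine($"{viaList} {viaSet}");
    }
}
```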
Ask someone else. Someone in your office, on Twitter or SO, or even your wife. People with no technical knowledge often come up with simpler solutions.
If you must solve it on your own, try one of these other approaches:
Do a quick search for another person or project which has tried to solve your problem. If they have a blog, documentation or source code, you might be able to learn from their implementation.
Come up with at least TWO solutions and pick the best one.
Pretend you have 15 minutes to solve the problem before the civilized world is destroyed by Nuclear War / Skynet / Permanent endless re-runs of Seinfeld, you might think of something much simpler which gets 99% of the work done.
The book is a set of heuristics to go through when solving a problem. Read about it on Wikipedia. Buy it on Amazon.
By solving actual problems. Practice makes perfect.
If you have time to become fluent in multiple languages, my guess is that you haven't spent much time doing any actual work. If you have a job, it might be time for a new one. If you're still in school, do you have any interest in starting a project for yourself or contributing to one that you use regularly?
It might help to know what kinds of problems you're having difficulty solving.
Go find an open source or free project you can get excited about and contribute. I learned a lot by signing up to code for my favorite video game modification.
Experience.
Study really only goes so far. Find something fun and small. Do it.
One way that seems to work for a lot of people is to use a book like Programming Challenges as a guide, and focus on solving problems of a particular type. For example, if you're weak in an area like graph problems or dynamic programming, find a set of problems on an online judge and work through them. You'll start to recognize patterns and be able to classify problems.
Google for an answer. Chances are someone else has solved the same problem or a similar problem before.
Ask on SO. :)
Read some textbooks or online articles about design patterns.
Problems may have many solutions, some simpler and some more complicated. Don't get stuck thinking there is only one solution. Just go with the simplest solution that makes the most sense in the context of your application.
After years of experience you'll be able to think of your own solutions to most problems. :)
Study Algorithms!
Search out and get hold of as many examples and books on the subject, programming or otherwise, as you can.
Problem solving skills can also be improved by playing tactical games.
These made me enjoy problem solving and become better (not necessarily good) at it:
Chess and igo
I like this general method:
List the possible solutions with their strengths and weaknesses
(This will push you to briefly taste all of them)
Choose the best one and base your design on it
(If you find any heavy obstacle, reconsider other options)
Implement
Most importantly, at every step, learn
The best way is probably to learn from a master, if that's an option. Especially if you can find someone familiar with the problems you're addressing.
Generally, the more tools we have at hand, the more options we have for tackling a problem. I agree that it's important to always code and to always deliver something that works (however inelegant it is). But I think we need to increase our skills/knowledge in many directions:
Language skills (know your language(s) in depth)
Programming paradigms (Imperative, Object, Functional)
Framework knowledge
Algorithms
Patterns
Data structures
Methodologies (Agile, DDD, BDD, ?DD)
Tools
etc
You can get a lot of skill through on-the-job, just-in-time learning, but I usually have a pet subject at any given time that I'm trying to get a deeper understanding of; typically this means getting the book and reading it cover to cover.
Work your way through Project Euler, and look at other people's solutions to the problems. Almost every problem will have been solved in a way that wouldn't have occurred to you, and usually with greater efficiency.
I think that there is a lot more than raw experience involved in becoming a good problem solver - because I've seen poor problem solvers with lots of experience.
Here are a few tips but you can find many more around the web.
Look at a number of problems and figure out what they have in common. The greater the generality with which you understand the solution to a problem, the more you can apply it to other problems.
Try to discover approaches that good problem solvers use. But don't assume anyone has a monopoly on problem solving.
If you read Richard Feynman's books, you'll notice that he considers many different routes to get to his goal. Don't narrow your approach prematurely.
Be positive. Assume that you can find the solution to anything. Your state of mind matters. Enjoying the process of problem solving makes it much easier.
Don't beat your head against the wall. If you don't seem to be making progress with one approach, try another.
Always be looking for more ways of solving problems and more insights into the process of problem solving itself.
Be willing to work. It can still take a lot of effort to solve some problems.
The more different fields of study you know, the more viewpoints you have. I have a strong math background and I find it very useful for many problems. Physics, music or any different viewpoint might be useful.
Practice solving problems.
Take an algorithms or discrete math course.
Here are some tools that I've used in the past to help me understand a particular problem and its solution. I don't always use them today, but they helped me learn how to think about breaking down a problem and coming up with a solution.
Class-Responsibility-Collaboration (CRC) cards
One card per class; it details the responsibility of the class and what other classes it collaborates with. Using the cards you can lay out your design for the solution and see where you have too much coupling or too much responsibility. They allow you to think about the design in a lightweight manner before committing to code.
Use cases
Either actual structured use cases that describe user interaction with the system or even briefer stories or story cards. I still use stories, though I capture them in a wiki instead. This allows you to capture the interaction with the system in an informal way. Stories are basically placeholders for conversations that you need to have with the customer about what is supposed to be done. Using the collected stories, you start getting a grasp on the overall intention of the code. You can also start seeing how things interact and what works with other things. This is really the beginning of design.
UML Diagrams - particularly interaction diagrams
For a while I used these a lot. It really helped to see how things actually worked together under the hood. I will still diagram, informally, some complex interactions to make sure I don't miss anything important. Going through a lot of these really helped me think about how my objects interacted, and now it is sort of second nature to think in terms of interactions.
Class diagrams -- a really high level view of the code.
These allow you to see your code structurally, especially if you can break the diagram down into components or layers of architecture. Mostly I use these now to explain the code to other people when necessary. When starting out these provide a pretty good visualization, though, if you're struggling with the bird's-eye view of the code.
The best advice I can give you if you try these is to follow the "rules" until you really have a good grasp of what is going on. Once you feel like you have a better understanding of what they provide, you can use them or not, or modify how you use them to keep only what is helpful and let the other stuff go.

What's your most controversial programming opinion?

This is definitely subjective, but I'd like to try to avoid it becoming argumentative. I think it could be an interesting question if people treat it appropriately.
The idea for this question came from the comment thread from my answer to the "What are five things you hate about your favorite language?" question. I contended that classes in C# should be sealed by default - I won't put my reasoning in the question, but I might write a fuller explanation as an answer to this question. I was surprised at the heat of the discussion in the comments (25 comments currently).
So, what contentious opinions do you hold? I'd rather avoid the kind of thing which ends up being pretty religious with relatively little basis (e.g. brace placing) but examples might include things like "unit testing isn't actually terribly helpful" or "public fields are okay really". The important thing (to me, anyway) is that you've got reasons behind your opinions.
Please present your opinion and reasoning - I would encourage people to vote for opinions which are well-argued and interesting, whether or not you happen to agree with them.
Programmers who don't code in their spare time for fun will never become as good as those that do.
I think even the smartest and most talented people will never become truly good programmers unless they treat it as more than a job. Meaning that they do little projects on the side, or just mess with lots of different languages and ideas in their spare time.
(Note: I'm not saying good programmers do nothing else than programming, but they do more than program from 9 to 5)
The only "best practice" you should be using all the time is "Use Your Brain".
Too many people jump on too many bandwagons, trying to force methods, patterns, frameworks, etc. onto things that don't warrant them. Just because something is new, or because someone respected has an opinion, doesn't mean it fits all :)
EDIT:
Just to clarify - I don't think people should ignore best practices, valued opinions etc. Just that people shouldn't just blindly jump on something without thinking about WHY this "thing" is so great, IS it applicable to what I'm doing, and WHAT benefits/drawbacks does it bring?
"Googling it" is okay!
Yes, I know it offends some people out there that their years of intense memorization and/or glorious stacks of programming books are starting to fall by the wayside to a resource that anyone can access within seconds, but you shouldn't hold that against people that use it.
Too often I hear googling answers to problems being the target of criticism, and it really makes no sense. First of all, it must be conceded that everyone needs materials to reference. You don't know everything and you will need to look things up. Conceding that, does it really matter where you got the information? Does it matter if you looked it up in a book, looked it up on Google, or heard it from a talking frog that you hallucinated? No. A right answer is a right answer.
What is important is that you understand the material, use it as the means to an end of a successful programming solution, and the client/your employer is happy with the results.
(although if you are getting answers from hallucinatory talking frogs, you should probably get some help all the same)
Most comments in code are in fact a pernicious form of code duplication.
We spend most of our time maintaining code written by others (or ourselves) and poor, incorrect, outdated, misleading comments must be near the top of the list of most annoying artifacts in code.
I think eventually many people just blank them out, especially those flowerbox monstrosities.
Much better to concentrate on making the code readable, refactoring as necessary, and minimising idioms and quirkiness.
On the other hand, many courses teach that comments are very nearly more important than the code itself, leading to the "this next line adds one to invoiceTotal" style of commenting.
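A small made-up C# fragment of the difference (the Invoice class and the "clause 4.2" comment are invented for illustration):

```csharp
class Invoice
{
    decimal invoiceTotal;
    decimal lateFee = 5m;

    void ApplyLateFeeBad()
    {
        // add the late fee to the invoice total  <-- restates the code; will rot
        invoiceTotal += lateFee;
    }

    void ApplyLateFeeGood()
    {
        invoiceTotal += lateFee; // contract clause 4.2: one fee per overdue day
    }
}
```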
XML is highly overrated
I think too many jump onto the XML bandwagon before using their brains...
XML for web stuff is great, as it's designed for it. Otherwise I think some problem definition and design thought should precede any decision to use it.
My 5 cents
Not all programmers are created equal
Quite often managers think that DeveloperA == DeveloperB simply because they have same level of experience and so on. In actual fact, the performance of one developer can be 10x or even 100x that of another.
It's politically risky to talk about it, but sometimes I feel like pointing out that, even though several team members may appear to be of equal skill, it's not always the case. I have even seen cases where lead developers were 'beyond hope' and junior devs did all the actual work - I made sure they got the credit, though. :)
I fail to understand why people think that Java is absolutely the best "first" programming language to be taught in universities.
For one, I believe that a first programming language should be one that highlights the need to learn control flow and variables, not objects and syntax.
For another, I believe that people who have not had experience in debugging memory leaks in C / C++ cannot fully appreciate what Java brings to the table.
Also the natural progression should be from "how can I do this" to "how can I find the library which does that" and not the other way round.
If you only know one language, no matter how well you know it, you're not a great programmer.
There seems to be an attitude that says once you're really good at C# or Java or whatever other language you started out learning then that's all you need. I don't believe it- every language I have ever learned has taught me something new about programming that I have been able to bring back into my work with all the others. I think that anyone who restricts themselves to one language will never be as good as they could be.
It also indicates to me a certain lack of inquisitiveness and willingness to experiment that doesn't necessarily tally with the qualities I would expect to find in a really good programmer.
Performance does matter.
Print statements are a valid way to debug code
I believe it is perfectly fine to debug your code by littering it with System.out.println (or whatever print statement works for your language). Often, this can be quicker than stepping through a debugger, and you can compare printed outputs across different runs of the app.
Just make sure to remove the print statements when you go to production (or better, turn them into logging statements).
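A quick C# sketch of that last point; Debug.WriteLine is the standard System.Diagnostics call and is compiled out of release builds, which makes the cleanup nearly automatic:

```csharp
using System;
using System.Diagnostics;

class Checkout
{
    static decimal ApplyDiscount(decimal total, decimal rate)
    {
        // Quick and dirty while investigating:
        Console.WriteLine($"ApplyDiscount: total={total}, rate={rate}");

        // Better: stripped from release builds via [Conditional("DEBUG")].
        Debug.WriteLine($"ApplyDiscount: total={total}, rate={rate}");

        return total * (1 - rate);
    }

    static void Main() => Console.WriteLine(ApplyDiscount(100m, 0.1m));
}
```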
Your job is to put yourself out of work.
When you're writing software for your employer, any software you create should be written in such a way that it can be picked up by any developer and understood with a minimal amount of effort. It should be well designed, clearly and consistently written, formatted cleanly, documented where it needs to be, built daily as expected, checked into the repository, and appropriately versioned.
If you get hit by a bus, laid off, fired, or walk off the job, your employer should be able to replace you on a moment's notice, and the next guy could step into your role, pick up your code and be up and running within a week tops. If he or she can't do that, then you've failed miserably.
Interestingly, I've found that having that goal has made me more valuable to my employers. The more I strive to be disposable, the more valuable I become to them.
1) The Business Apps farce:
I think that the whole "Enterprise" frameworks thing is smoke and mirrors. J2EE, .NET, the majority of the Apache frameworks and most abstractions to manage such things create far more complexity than they solve.
Take any regular Java or .NET ORM, or any supposedly modern MVC framework for either, which does "magic" to solve tedious, simple tasks. You end up writing huge amounts of ugly XML boilerplate that is difficult to validate and slow to write. You have massive APIs, half of which exist just to integrate the work of the other APIs, interfaces that are impossible to reuse, and abstract classes that are needed only to overcome the inflexibility of Java and C#. We simply don't need most of that.
How about all the different application servers with their own darned descriptor syntax, the overly complex database and groupware products?
The point of this is not that complexity == bad; it's that unnecessary complexity == bad. I've worked in massive enterprise installations where some of it was necessary, but even there, in most cases a few home-grown scripts and a simple web frontend are all that's needed to solve most use cases.
I'd try to replace all of these enterprisey apps with simple web frameworks, open source DBs, and trivial programming constructs.
2) The n-years-of-experience-required:
Unless you need a consultant or a technician to handle a specific issue related to an application, API or framework, then you don't really need someone with 5 years of experience in that application. What you need is a developer/admin who can read documentation, who has domain knowledge in whatever it is you're doing, and who can learn quickly. If you need to develop in some kind of language, a decent developer will pick it up in less than 2 months. If you need an administrator for X web server, in two days he should have read the man pages and newsgroups and be up to speed. Anything less and that person is not worth what he is paid.
3) The common "computer science" degree curriculum:
The majority of computer science and software engineering degrees are bull. If your first programming language is Java or C#, then you're doing something wrong. If you don't get several courses full of algebra and math, it's wrong. If you don't delve into functional programming, it's incomplete. If you can't apply loop invariants to a trivial for loop, you're not worth your salt as a supposed computer scientist. If you come out with experience in x and y languages and object orientation, it's full of s***. A real computer scientist sees a language in terms of the concepts and syntaxes it uses, and sees programming methodologies as one among many, and has such a good understanding of the underlying philosophies of both that picking new languages, design methods, or specification languages should be trivial.
Getters and Setters are Highly Overused
I've seen millions of people claiming that public fields are evil, so they make them private and provide getters and setters for all of them. I believe this is almost identical to making the fields public, maybe a bit different if you're using threads (though that's generally not the case) or if your accessors have business/presentation logic (something 'strange' at least).
I'm not in favor of public fields, but I am against making a getter/setter (or Property) for every one of them, and then claiming that doing so is encapsulation or information hiding... ha!
UPDATE:
This answer has raised some controversy in its comments, so I'll try to clarify it a bit (I'll leave the original untouched, since that is what many people upvoted).
First of all: anyone who uses public fields deserves jail time
Now, creating private fields and then using the IDE to automatically generate getters and setters for every one of them is nearly as bad as using public fields.
Many people think:
private fields + public accessors == encapsulation
I say that (automatic or not) generation of a getter/setter pair for your fields effectively goes against the so-called encapsulation you are trying to achieve.
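A small sketch of the distinction (the two Point types are invented for illustration):

```csharp
using System;

// Effectively public, despite the ceremony: the representation leaks out.
class ExposedPoint
{
    public double X { get; set; }
    public double Y { get; set; }
}

// Encapsulated: callers depend on behavior, so the representation
// could change (say, to polar coordinates) without breaking them.
class Point
{
    private readonly double x, y;
    public Point(double x, double y) { this.x = x; this.y = y; }

    public double DistanceTo(Point other) =>
        Math.Sqrt((x - other.x) * (x - other.x) + (y - other.y) * (y - other.y));
}
```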
Lastly, let me quote Uncle Bob in this topic (taken from chapter 6 of "Clean Code"):
There is a reason that we keep our variables private. We don't want anyone else to depend on them. We want the freedom to change their type or implementation on a whim or an impulse. Why, then, do so many programmers automatically add getters and setters to their objects, exposing their private fields as if they were public?
UML diagrams are highly overrated
Of course there are useful diagrams e.g. class diagram for the Composite Pattern, but many UML diagrams have absolutely no value.
Opinion: SQL is code. Treat it as such
That is, just like your C#, Java, or other favorite object/procedure language, develop a formatting style that is readable and maintainable.
I hate it when I see sloppy, free-formatted SQL code. If you scream when you see both styles of curly braces on a page, why don't you scream when you see free-formatted SQL, or SQL that obscures or obfuscates the JOIN condition?
Readability is the most important aspect of your code.
Even more so than correctness. If it's readable, it's easy to fix. It's also easy to optimize, easy to change, easy to understand. And hopefully other developers can learn something from it too.
If you're a developer, you should be able to write code
I did quite a bit of interviewing last year, and for my part of the interview I was supposed to test the way people thought, and how they implemented simple-to-moderate algorithms on a white board. I'd initially started out with questions like:
Given that Pi can be estimated using the function 4 * (1 - 1/3 + 1/5 - 1/7 + ...) with more terms giving greater accuracy, write a function that calculates Pi to an accuracy of 5 decimal places.
It's a problem that should make you think, but shouldn't be out of reach to a seasoned developer (it can be answered in about 10 lines of C#). However, many of our (supposedly pre-screened by the agency) candidates couldn't even begin to answer it, or even explain how they might go about answering it. So after a while I started asking simpler questions like:
Given the area of a circle is given by Pi times the radius squared, write a function to calculate the area of a circle.
Amazingly, more than half the candidates couldn't write this function in any language (I can read most popular languages so I let them use any language of their choice, including pseudo-code). We had "C# developers" who could not write this function in C#.
I was surprised by this. I had always thought that developers should be able to write code. It seems that, nowadays, this is a controversial opinion. Certainly it is amongst interview candidates!
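For the curious, here is a sketch of what acceptable answers might look like in C# (one of many possibilities, not presented as the canonical interview answer):

```csharp
using System;

class InterviewAnswers
{
    // Question 1: pi via the alternating series 4 * (1 - 1/3 + 1/5 - ...).
    // For an alternating series the truncation error is bounded by the first
    // omitted term, so stop once the next term can't affect the 5th decimal.
    static double EstimatePi()
    {
        const double tolerance = 0.5e-5;
        double pi = 0.0, sign = 1.0;
        for (long i = 1; 4.0 / i >= tolerance; i += 2)
        {
            pi += sign * 4.0 / i;
            sign = -sign;
        }
        return Math.Round(pi, 5);
    }

    // Question 2: the one more than half the candidates couldn't write.
    static double CircleArea(double radius) => Math.PI * radius * radius;

    static void Main() =>
        Console.WriteLine($"{EstimatePi()} {CircleArea(1.0)}");
}
```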
Edit:
There's a lot of discussion in the comments about whether the first question is a good or bad one, and whether you should ask questions as complex as this in an interview. I'm not going to delve into this here (that's a whole new question) apart from to say you're largely missing the point of the post.
Yes, I said people couldn't make any headway with this, but the second question is trivial and many people couldn't make any headway with that one either! Anybody who calls themselves a developer should be able to write the answer to the second one in a few seconds without even thinking. And many can't.
The use of hungarian notation should be punished with death.
That should be controversial enough ;)
Design patterns are hurting good design more than they're helping it.
IMO software design, especially good software design, is far too varied to be meaningfully captured in patterns, especially in the small number of patterns people can actually remember, and those are far too abstract for people to really remember more than a handful. So they're not helping much.
And on the other hand, far too many people become enamoured with the concept and try to apply patterns everywhere - usually, in the resulting code you can't find the actual design between all the (completely meaningless) Singletons and Abstract Factories.
Less code is better than more!
If the users say "that's it?", and your work remains invisible, it's done right. Glory can be found elsewhere.
PHP sucks ;-)
The proof is in the pudding.
Unit Testing won't help you write good code
The only reason to have unit tests is to make sure that code that already works doesn't break. Writing tests first, or writing code to the tests, is ridiculous. If you write to the tests before the code, you won't even know what the edge cases are. You could have code that passes the tests but still fails in unforeseen circumstances.
And furthermore, good developers will keep coupling low, which makes the addition of new code unlikely to cause problems with existing code.
In fact, I'll generalize that even further,
Most "Best Practices" in Software Engineering are there to keep bad programmers from doing too much damage.
They're there to hand-hold bad developers and keep them from making dumbass mistakes. Of course, since most developers are bad, this is a good thing, but good developers should get a pass.
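For concreteness, this is the kind of test the opening claim does endorse: one that pins down behavior that already works, so later changes can't silently break it. A minimal sketch in NUnit syntax (PriceCalculator and its discount rule are hypothetical):

```csharp
using NUnit.Framework;

static class PriceCalculator   // hypothetical code under test
{
    public static decimal Total(int quantity, decimal unitPrice)
    {
        decimal total = quantity * unitPrice;
        return quantity >= 10 ? total * 0.9m : total;   // 10% bulk discount
    }
}

[TestFixture]
class PriceCalculatorTests
{
    // Not driving the design; just guarding code that already works.
    [Test]
    public void BulkDiscount_StillAppliesAtExactlyTenItems()
    {
        Assert.That(PriceCalculator.Total(quantity: 10, unitPrice: 5m),
                    Is.EqualTo(45m));
    }
}
```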
Write small methods. It seems that programmers love to write loooong methods where they do multiple different things.
I think that a method should be created wherever you can name one.
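A tiny before-and-after of that rule of thumb (all names are invented): instead of one long method that validates, prices, and notifies in sequence, each nameable chunk becomes a method and the caller reads as a summary.

```csharp
class OrderProcessor
{
    public void Process(Order order)
    {
        Validate(order);
        decimal total = CalculateTotal(order);
        SendConfirmation(order, total);
    }

    void Validate(Order order) { /* ... */ }
    decimal CalculateTotal(Order order) => 0m; // details elided
    void SendConfirmation(Order order, decimal total) { /* ... */ }
}

class Order { }
```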
It's ok to write garbage code once in a while
Sometimes a quick and dirty piece of garbage code is all that is needed to fulfill a particular task. Patterns, ORMs, SRP, whatever... Throw up a console or web app, write some inline SQL (feels good), and blast out the requirement.
Code == Design
I'm no fan of sophisticated UML diagrams and endless code documentation. In a high level language, your code should be readable and understandable as is. Complex documentation and diagrams aren't really any more user friendly.
Here's an article on the topic of Code as Design.
Software development is just a job
Don't get me wrong, I enjoy software development a lot. I've written a blog for the last few years on the subject. I've spent enough time on here to have >5000 reputation points. And I work in a start-up doing typically 60 hour weeks for much less money than I could get as a contractor because the team is fantastic and the work is interesting.
But in the grand scheme of things, it is just a job.
It ranks in importance below many things such as family, my girlfriend, friends, happiness etc., and below other things I'd rather be doing if I had an unlimited supply of cash such as riding motorbikes, sailing yachts, or snowboarding.
I think sometimes a lot of developers forget that developing is just something that allows us to have the more important things in life (and to have them by doing something we enjoy) rather than being the end goal in itself.
I also think there's nothing wrong with having binaries in source control... if there is a good reason for it. If I have an assembly I don't have the source for, which might not necessarily be in the same place on each dev's machine, then I will usually stick it in a "binaries" directory and reference it in a project using a relative path.
Quite a lot of people seem to think I should be burned at the stake for even mentioning "source control" and "binary" in the same sentence. I even know of places that have strict rules saying you can't add them.
Every developer should be familiar with the basic architecture of modern computers. This also applies to developers who target a virtual machine (maybe even more so, because they have been told time and time again that they don't need to worry themselves with memory management etc.)
Software Architects/Designers are Overrated
As a developer, I hate the idea of Software Architects. They are basically people who no longer code full time, who read magazines and articles and then tell you how to design software. Only people who actually write software full time for a living should be doing that. I don't care if you were the world's best coder 5 years ago before you became an Architect; your opinion is useless to me.
How's that for controversial?
Edit (to clarify): I think most Software Architects make great Business Analysts (talking with customers, writing requirements, tests, etc), I simply think they have no place in designing software, high level or otherwise.
There is no "one size fits all" approach to development
I'm surprised that this is a controversial opinion, because it seems to me like common sense. However, there are many entries on popular blogs promoting the "one size fits all" approach to development so I think I may actually be in the minority.
Things I've seen being touted as the correct approach for any project - before any information is known about it - are things like the use of Test Driven Development (TDD), Domain Driven Design (DDD), Object-Relational Mapping (ORM), Agile (capital A), Object Orientation (OO), etc. etc. encompassing everything from methodologies to architectures to components. All with nice marketable acronyms, of course.
People even seem to go as far as putting badges on their blogs, such as "I'm Test Driven" or similar, as if their strict adherence to a single approach, whatever the details of the project, is actually a good thing.
It isn't.
Choosing the correct methodologies and architectures and components, etc., is something that should be done on a per-project basis, and depends not only on the type of project you're working on and its unique requirements, but also the size and ability of the team you're working with.