Toy projects for new programmers - language-agnostic

When I first started teaching myself programming, after finishing a tutorial I would feel like I still couldn't do anything in the language. So, I looked around for something to work on. Since I had just learned a few of the basics, the amount of work involved in finding, reading and contributing to an open source project seemed insurmountable. Instead I started on a couple of toy projects, which ended up being incredibly beneficial.
Having seen a lot of questions from beginners along the lines of "what should I do now?", and a lot of answers along the lines of "start working on an open source project", I think there has to be better advice for a new programmer. While working on an open source project surely provides great experience, there is a perceptible barrier to entry.
Instead, I think it would be great if new programmers were prodded towards working on a toy program related to some interest they have. Since there are so many directions that programming can take you, I think it would be interesting to list some simple (but fun/rewarding) projects grouped by the direction the new programmer is looking to pursue, such as:
Game Design:
Write a text adventure (like Zork); a minimal skeleton is sketched after this list
Natural Language Processing:
Create a program that writes meaningless, but grammatically valid essays.
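For the text-adventure idea, here is a minimal skeleton to suggest the scale (in Java; the rooms and commands are invented for illustration, and a real game would add items, a parser and a story):

import java.util.Map;
import java.util.Scanner;

// A minimal text-adventure skeleton: two rooms, movement commands, quit.
public class TinyAdventure {
    public static void main(String[] args) {
        // Each room maps a direction to the room it leads to.
        Map<String, Map<String, String>> exits = Map.of(
                "cave", Map.of("north", "forest"),
                "forest", Map.of("south", "cave"));
        String here = "cave";

        Scanner in = new Scanner(System.in);
        while (true) {
            System.out.println("You are in the " + here + ". Exits: "
                    + exits.get(here).keySet());
            System.out.print("> ");
            String cmd = in.nextLine().trim().toLowerCase();
            if (cmd.equals("quit")) break;
            String next = exits.get(here).get(cmd);
            if (next != null) here = next;
            else System.out.println("You can't go that way.");
        }
    }
}

From there, growing the map, adding an inventory, and parsing two-word commands ("take lamp") are each natural next steps.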

I recently asked a similar question (Diverse resource of problems to show merits of different languages) and got links to sites that provide problem sets, as well as validation. Check out:
http://www.codechef.com/
https://www.spoj.pl/problems/classical/
http://wiki.python.org/moin/ProblemSets
http://projecteuler.net/
Although these problems don't often amount to projects, they are still interesting. I'm interested in seeing what people come up with here.

I actually think that a TopCoder approach might be better... programmers can still pick topics of interest, but they're actually working for a prize on a REAL project and they get feedback. Frankly speaking, TopCoder is a bit bloated and, as far as I can tell, they don't allow people to create free competitions. It would be great if there were a TopCoder/StackOverflow type of site: people could submit code, get voted on their implementation and just have a good time!
I'll even pitch my idea: I'm starting to work on my own TopCoder/StackOverflow hybrid monstrosity called MyDevArmy (although I have not done anything so far except buy the domain).

Write a program which renders Wolfram automata (esp. Rule 110).
See YelloSoft for example code.
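For a sense of scale, here is a minimal console version of such a renderer (the width, generation count and display characters are arbitrary choices; the same code runs any of the 256 elementary rules):

// Renders an elementary cellular automaton (e.g., Rule 110) as text,
// one line per generation, with wrap-around at the edges.
public class Rule110 {
    public static void main(String[] args) {
        int rule = 110;          // try 30, 90 or 184 as well
        int width = 64;
        int generations = 32;

        boolean[] cells = new boolean[width];
        cells[width - 1] = true; // a single live cell seeds Rule 110 nicely

        for (int g = 0; g < generations; g++) {
            StringBuilder row = new StringBuilder();
            for (boolean c : cells) row.append(c ? '#' : '.');
            System.out.println(row);

            boolean[] next = new boolean[width];
            for (int i = 0; i < width; i++) {
                int left = cells[(i - 1 + width) % width] ? 4 : 0;
                int center = cells[i] ? 2 : 0;
                int right = cells[(i + 1) % width] ? 1 : 0;
                // The rule number's bits encode the next state for each
                // of the eight 3-cell neighborhood patterns (0..7).
                next[i] = ((rule >> (left | center | right)) & 1) == 1;
            }
            cells = next;
        }
    }
}

Rendering the rows to an image instead of the console is an easy follow-up exercise.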

Start by writing a Blackjack simulation. Choose whichever strategy you want for the first run.
Next, start adding additional runs for different strategies, like hitting/standing when your hand's value is 15 vs. 16 vs. 17 vs. 18, and whether the hand is soft or hard (an ace's value being counted as 1 or 11). The dealer's strategy will be constant, as it really is in casinos.
By the end, your program will run, say, 1000 instances of each strategy combination. It will print out a summary of the rate of hand wins (percentage of times you beat the dealer) for each stand value and hard/soft combination.
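A minimal sketch of such a simulation, deliberately simplified (infinite deck, no splits or doubles, ties counted as losses, and soft/hard handling reduced to demoting aces from 11 to 1 on bust):

import java.util.Random;

public class BlackjackSim {
    static final Random RNG = new Random();

    // Draw a card value: 2-9, a ten (10/J/Q/K), or an ace (11).
    static int draw() {
        int c = RNG.nextInt(13) + 2;     // 2..14
        if (c > 10 && c < 14) return 10; // J, Q, K count as 10
        return c == 14 ? 11 : c;         // 14 stands in for an ace
    }

    // Hit until reaching the stand threshold; demote aces on bust.
    static int playHand(int standAt) {
        int total = 0, aces = 0;
        while (total < standAt) {
            int c = draw();
            if (c == 11) aces++;
            total += c;
            while (total > 21 && aces > 0) { total -= 10; aces--; }
        }
        return total;
    }

    public static void main(String[] args) {
        int trials = 100000;
        for (int standAt = 15; standAt <= 18; standAt++) {
            int wins = 0;
            for (int t = 0; t < trials; t++) {
                int player = playHand(standAt);
                if (player > 21) continue;   // player busts: no win
                int dealer = playHand(17);   // dealer always stands on 17
                if (dealer > 21 || player > dealer) wins++;
            }
            System.out.printf("Stand on %d: won %.1f%% of %d hands%n",
                    standAt, 100.0 * wins / trials, trials);
        }
    }
}

Splitting the win rates by soft and hard hands, as described above, is a small extension of playHand.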
This is easily one of my favorite projects I've done and it can really cement some techniques in the language of your choosing. Plus, if you have the initiative to start learning some of the (fairly simple) discrete math that's involved in coming up with the odds of these situations as a side project, you can come away with an even better experience. Who knows, maybe you could ditch this computer stuff and take up card counting?


Disagreement on software time estimation

How do you deal with a client who has different time estimates for the software product than yours?
I am going to describe a scenario that is not mine, but that captures broadly the same problem. I am working as a subcontractor to a large company that has a programming department. The software project we are working on is in an area that the department believe they have a handle on, but because their expertise and mine are very different we tend to get different results.
Example: At the start of the project I suggested one way of development which they rubbished as being unrealistically difficult and suggested integrating a different framework (one they are familiar with) with the programming language we are using (Python) to get more or less the same result.
Their estimate for this integration: less than a week (they haven't done the integration before).
My estimate for the integration: over two weeks.
Using my suggested way to get the result needed (including using matplotlib among other libraries used elsewhere within the project): 45 minutes. This is not an estimate, the bit was actually finished in 45 minutes.
Example: for the software to be integrated with their internal system, they needed to provide a web service for me to use. They provided a broken one, though it does work with their internal tool (it doesn't work with mainstream .Net or Java packages, among other options). They maintain that it is my fault that the integration has taken longer than the time estimated.
The problem is not that they don't know; the problem is that they have enough knowledge about programming to be dangerous (in my opinion). Are there any guidelines for how to deal with this type of situation? A way to manage expectations? Or maybe I shouldn't get involved in such projects from the start, and in that case, what are the telltale signs?
If a client isn't happy with a time estimate, don't do the work. If they think they can do it better or faster, tell them to go ahead.
The one thing I never allow is for my estimates to be modified. That's something that caught me out early on in my career but we learn our lessons.
If clients were so good at doing the work, they wouldn't be hiring me. I'd simply point out that they hired me for my expertise, so why are they disregarding that expertise? Of course, if they were to allow the scope of the project to change (i.e., less work), that would be another matter, and one up for discussion.
If you didn't lock in exactly what they were meant to provide as part of the deal, then it's a "he says, she says" situation and, unfortunately, the customer controls the purse strings. However, often, the greatest power you can have is the ability to just walk away.
No-one says you have to do the job.
Of course, all that advice above is worth every cent you paid for it :-)
I don't know your specific circumstances.
Or maybe I shouldn't get involved in such projects from the start, and in that case, what are the telltale signs?
My answer: for sure. If you can avoid those projects, do it.
Some signs: people thinking they know how to do things when you can guess they can't. The "oh no, let's not use this perfectly suitable tool because I don't know it" attitude is a major indicator that the person is technically challenged.
First of all, it is no fun to be in such an environment.
So, if you like to have fun at your job, and you do not need to take this job for extenuating financial reasons, then simply do not take the job that is not fun.
Since that is hardly realistic in many cases, you will end up with the job and need to manage the situation as best you can. One way is to make sure there is a paper trail documenting your objections and concerns with the plan. Try not to be overtly negative, but try to be constructive and present valid alternatives. Here you will need to feel out the political landscape, determine if the 'boss' will be appreciative or threatened by your commentary, and act accordingly.
Many times there are other issues that management is dealing with that you are not aware of. Be cautious of this fact, and maybe ask the management team if this is the case, again without being condescending or negative.
Finally, if you have alternatives that take less time than the meetings it would take to discuss them, just try it in a sandbox and show it off. This would go a long way towards 'proving' your points. The caution here is that you could be accused of not being a team player, of wasting resources, or of not following direction. Make sure this is mitigated by doing these types of things on your own time, or after careful consideration of how long you are spending on them as well as how invested your boss seems to be in the alternatives.
hth
I ran into the same problem with integration. "Example: for the software to be integrated with their internal system, they needed to provide a web service for me to use... They maintain that it is my fault that the integration has taken longer than the time estimated."
Wow, very similar to what I was experiencing with a client. The best thing I can suggest is to keep good documentation. In the end that is what saved me. When it came to finger pointing, I had all of the emails and facts in order and was prepared to defend myself.
One thing I would suggest is to separate out a target/goal and an estimate. I would not change my estimate unless it involved actually removing features, or something is revealed that would make the work easier. Tell them you will try to hit the target any way you can and that you care about the business goal. However, your estimate will not change. If it's getting nowhere and they are just dense, then smile and nod and take it, if it's the only gig around.
I was just writing about this on my blog:
How to estimate the WRONG way

How long should it take for someone to be able to type code from memory?

I understand that this question could be answered with a simple sentence and that it may be viewed as subjective; however, I am a young student who is interested in pursuing a career in programming, and I wondered how long it took some of you to get to the level of experience you are at now.
I ask this because I am currently working on building an application in Java on the Android platform, and it bothers me that I am constantly having to look up how to write a certain section of code in my application, such as writing to a database or how an if statement should be structured.
My question really is, how long did it take for you to become experienced enough to actually know exactly how your next line of code was going to look, before you even wrote it?
The speed at which you can quickly recall language syntax, common library functions, and best practice patterns is directly proportional to the amount of time you spend using them.
In other words you will find yourself getting faster the more you do it.
I have been a C++ programmer for the last 20 years. It has taken me that long to get to this expertise level. I'm mostly a Windows programmer, and I keep the MSDN website up on one of my monitors all the time.
Doesn't matter how long you've been doing it. You will never know everything from memory. Don't sweat it.
I've been programming for almost half of my life and I still can't always recall simple syntax, let alone entire tracts of code for more complex tasks. If you ask me, that's what reference books and Google are for.
A far more important skill to have is the general knowledge of programming in any language, i.e. recursion, looping, object oriented design, working with APIs, error handling etc... Once you have all that down, you can apply it to any language and platform.
I can tell you that after 25 years there are lines of code where I don't know exactly how they're going to look.
Want an example? I've been programming in Java since last century and I can honestly still make a mistake if I were to type hashcode() or hashCode().
Why? Because actually typing such a method name yourself is so last century. Your intention is to override Object's hashCode() method, so you use programming by intention.
You hit Ctrl-O then h and you get a list of the methods starting with an 'h' that you can override. Then you hit Enter. As a bonus, the "@Override" gets inserted for you too.
4 keys. 4, to get this:
@Override
public int hashCode() {
}
And honestly, whether hashCode takes an uppercase 'C' or not... I couldn't care less. This is not what a hashcode is about, and my intention is not to know all the inconsistencies languages and API designers came up with. My intention is to override the method that gives back an object's hashcode, and my (modern) IDE allows me to get that skeleton in four keypresses, including hitting Enter.
Another example: there are people who do really type this countless times a day:
for (int i = 0; i < ; i++) {
}
or the more tricky:
for (int i = ; i >= 0; i--) {
}
Note that in the latter case I can still mess up and type "i++" instead of "i--" (a 'thinko', as it's called).
But I don't care at all, because I type "fi<tab>" (three keys) and I get the first one, or "fir<tab>" (four keys, "for i (in) reverse") and I get the second one. You ain't beating that (especially since I'm a touch typist, so I type these three or four keys fast). In addition to speed, as an added bonus, the autocompletion won't mess up your "i--".
In many cases I don't know exactly the line I'll get: sure, I know it "more or less", and that's exactly the way it should be.
Don't sweat it too much. As others have mentioned it eventually gets easier to write code without looking things up so much as you work with a particular language over time.
HOWEVER
There are a few reasons that even veteran programmers find themselves constantly using reference material:
(1) Unlike days of yore, most projects now require you to use a number of languages to get the job done. For example, a single web-site project may require C#, XML, JavaScript, SQL, HTML, XHTML, RegEx and CSS all in the same project. Switching between some of these languages can really throw you for a loop sometimes, because many of them are just similar enough to be familiar, but just different enough to make you forget the subtle differences in syntax.
(2) Just when you start getting comfortable that you know a language inside and out, the vendor will release a new update that changes everything you knew about it. For example ASP->ASP.NET.
I still look up simple things fairly frequently and I've been at this almost 20 years. The important thing is that you understand the underlying concepts and principles.
It took me 12 years of professional programming to get where I am today. You will always keep improving when working with a programming language, even if you have been working with it for the past 5 years.
About your question, it depends. I think that you should be comfortable with the syntax after a week, comfortable with the main libraries after a month, and comfortable with the platform after 6 months.
But when you get there, don't stop!
If you code every day with the same language, you'll probably have the common language elements and patterns memorized in a month or two. But there are plenty of things that you'll probably never memorize, simply because you don't use them often enough, and also because modern IDEs can help you so much that you don't really need to remember everything if you use all their features (like code snippets, shortcuts, IntelliSense).
I've been coding for 15 years, and doing C# for the last three, and I still use the MSDN reference material every day. However, as far as the basic building blocks of the language are concerned, I had them memorized in the first month or two.
Also, the more often you code, the better you'll commit it to memory.
There's a false assumption going on here I think...
At my job I end up working all over the stack in different languages and platforms. If I'm away from a project for 6 months I end up having to look at code to get even basic things done. The advantage of experience is reducing the re-ramp up time on productivity though.
So, instead of it taking weeks or months to get back to a point where I can write 80% of the code from memory, it takes a few or several days (if that, sometimes). I've been programming for about 5 years now. I'm just now getting to the point of being able to visualize small applications in their entirety.
As long as you're working on solving a problem that you haven't already solved (numerous times) you'll probably always have to look up code.
If it can be done, I'm guessing it takes longer than 5 years for most people, unless they work in one language, with one editor, and in one area (e.g., C#, Visual Studio, file system operations). My company isn't big enough to employ someone that specific.
Don't be downhearted by having to consult documentation all the time, man; that's what it's there for. Over time you get used to syntax and things like that, but don't sweat it if you can't remember library methods or ways to connect to a DB.
Over time (with experience) you might remember these off the top of your head, but in reality there's nothing wrong with a quick consultation of the documentation to refresh the memory.
Also remember that technology is ever changing so it's good to keep the mind fresh with new ways of thinking/ways to do things.
The question is not 100% clear. One of the best programmers I know doesn't remember anything and needs to look up printf formatting strings almost daily. On the other hand, if you are having a hard time figuring out how to write that for (int i = 0; i < len; i++) loop after doing this for 6 months -- that doesn't sound right.
The idea that one could memorize every bit of code from even a single language and then type it plainly from memory is pretty far out. The amount of pre-defined functions for, say, PHP or Java alone is immense.
But that being said, it's important to learn the programming structures and know them the best you can. Structures like foreach, if/then/else, switch, etc. are really the things that need to be integrated thoroughly. Also, conceptual things like object orientation (not just "using" objects like mysqli, but understanding things like controlling code, client code, bottom-up and top-down architecture) are the real things that make good programmers great. For myself, I know that I don't have the capacity to learn every defined function that's provided by language writers, so I instead learn whether or not something can be done (and of course still try on occasion to do things that can't be done, lol). If you know that, then it's a matter of Google and books to find the "specific" mechanisms on how.
cheers my friend.
Not an answer per se, but I just wanted to say that as a novice myself, this Q&A is one of the most useful things I've read on SO. It seems that yes, experts can probably code the basic stuff from memory, but even they turn to the book for complex problems, and for beginners, that's what the book is for.
I feel like I'm just beginning
You should use code snippets if you are using a certain piece of code repeatedly. It doesn't bother me if I cannot remember some piece of code from memory.
For me, it depends heavily on what I'm writing. For example, I doubt that most people ever quite memorize all the parameters to some Windows functions. I may know that I need to call CreateFile on the next line, but don't know all the parameters in order until they're typed in (with help from Intellisense, and sometimes MSDN).
With something that's doing simple computation, I'm a lot more likely to be limited by my typing speed (but I'm a fairly poor typist, so thinking faster than I can type doesn't take much).
That means it's really a question of what fraction of the time you can type the next line of code without referring to something -- at first, it's a pretty small percentage, and over time it grows. I doubt it ever gets to 100% for anybody though. I, for one, don't think I'd want it to -- that would indicate I wasn't working on new and different things...
If you use Eclipse and Java, you may find yourself already there.
Other combinations may be a little slower to a lot slower.
Java has the advantage that it's pretty easy for the environment to build an entire parse tree while you are typing. At any place in your code, typing Ctrl-Space can give you an entire list of valid options.
Also syntax errors are always underlined.
If you want to go hard core though: I started in C before the days of decent editors, and it took me about a year before I could type in more than a few lines without getting a compile error.
I don't know about memorization. Repetition is the mother of all learning, and that applies to all aspects of life. Look at the experienced accountant vs. the novice when filing taxes: who looks up stuff more? But what I did discover recently is that I navigate documentation much quicker and have a sense for going directly to where my question is answered. I've got a sixth sense - I can see the code! Seriously, it all comes with experience. Still, when learning something completely new, there's no shame in looking up how to do certain things. That's what separates humans, right - learning from others. The more you work on something, the better you become.
There is absolutely nothing wrong with having to look up the documentation every time you want to write some code. I got lucky in that once I use a certain function, I don't forget it very easily. However, most of the time, especially when I'm coding in a language that I haven't used in a while:
I start out by writing a flowchart of the algorithm that I want to code - just the pure logic. The most important reason for doing this is so that I don't lose my train of thought and forget the algorithm that solves my problem in the midst of technical problems like syntax and "what library functions exist?"
Then I look at the documentation and check to see whether there are simple function calls that will help me accomplish each task in my algorithm.
If such functions do not exist, then I either modify my algorithm to accommodate what functions the language does provide, or write helper functions to fill in the gaps.
I only start coding NOW, which is not too difficult to do anymore, because I already have all the relevant functions written down. So now it's just a question of translating pure logic into syntactically accurate code.
Proper syntax usually does not elude me, but if that does happen (VERY rare), Google provides very nice code snippets if you ask nicely.
Hope this helps
May the Force be with you
I've been programming Java for about 5 years now, and I have never had any trouble remembering syntax. I can <brag>write almost all the java.util.*, java.io.*, java.lang.* and javax.swing.* stuff from memory</brag>, but does it help me? Not very much. It doesn't make me a better programmer than someone who can't!
I'm using NetBeans, which greatly helps with working with libraries. Also, the documentation is right in the place where you need it. Sometimes it's quite unnecessary, but sometimes you wish the "auto complete" screen would pop up faster!
The best thing as a student is to concentrate on what you are doing, not how fast you are doing it. Looking things up isn't bad, as it'll help your so-called "unconscious mind" process what you are really trying to accomplish. Having such breaks, e.g. by looking up a certain piece of documentation or a syntax reference, may even make you better at programming (no proof for this, though).
Question is quite subjective.
With the great many IDE's available and the "newbie" tutorials on getting started, it won't take you long before you're off writing your own apps.
That said, unless you have a "thirst" for how stuff works kinda attitude all the time towards everything, you won't go far. In this field, you really have to have a passion for what you do to be great.
... It bothers me that I am continually having to look things up... How long did it take you to get to the level where you are now?
For me, and for most of the students I teach, the answer depends on two variables:
How many lines of code have I written?
Do I use the language or library every day?
(Reading other people's code is very helpful for learning a language and learning how to think in a language, but for me at least, it hasn't helped me become a fluent writer of programs in a language. Only writing code does that.) So my first comment is that time should be measured in lines of code written, not hours or years.
(Ray Bradbury once advised aspiring writers of fiction to write a thousand words a day six days a week, and after they've written a million words they might start to know something about their craft. This is good advice for programmers too.)
As for my own experience, across a half dozen languages that I currently know well or once knew well, it's been pretty consistent for me that
After writing 100 lines I am continually looking things up in the manual and don't really know what I am doing.
After writing 1,000 lines I use the manual occasionally and am starting to learn how to think in the language.
After writing 10,000 lines I am about as good as I'm going to get without making special efforts.
After writing 25,000 lines I probably will not need the manual again.
It's also true that
To learn to write 100 lines I had to read 100 to 1,000 lines that someone else wrote.
For the first 1,000 lines I write it is good for me to read 2,000 lines someone else wrote. For the next 1,000 lines it is good for me to read 1,000 lines someone else wrote.
After I've written 5,000 lines I learn the most by reading code written by world experts or by people who designed the language and understand what is there. I no longer have much to learn by reading just any program.
On the other hand, my experience about when I stop having to refer continually to the documentation is much less consistent.
I find it especially hard when two languages are very similar; I will never stop needing the ksh manual to tell me what is different from sh or bash, and I will never stop needing the Haskell manual to tell me what is different from Standard ML (though the need grows less with each additional 1,000 lines of Haskell that I write). I also find it interesting that while I have written over 35,000 lines of Lua code, and I will never need the manual again for a language question, I have to look up libraries and API functions almost every time I write something longer than 500 lines. (I've written a lot of short Lua programs and a couple of long ones, and I don't use the language every day, although I definitely use it at least several days each month.)
As for the unspoken question, when are you personally going to get better?, take some advice from Watts Humphrey: measure your own performance and track it over time. I think if each day you count the number of times you had to stop and look things up, and graph that against number of lines of code written or edited (which you can get from source-code control), you will be pleasantly surprised by objective evidence of improvement. And I think once you have such evidence, you will be able to focus more on continuing to improve, and not so much on where you are now or where you hope to be in a year.
It's true that after some years of programming you'll be able to remember a lot of things without having to check the "manual". For me this is not the important milestone in your programming life, though; the really important moment is when you reach the point where you don't know how to do something... but you're sure it can be done, and you know where and how to research it :-)
You made a very important step participating on this site. Exchanging ideas and helping each other is an excellent way of learning.
Author Malcolm Gladwell believes that ten thousand hours is a good benchmark for the amount of practice required to become world class in many fields of endeavour. I think that sort of number applies to programming as well. This isn't quite what you asked, though; being able to code competently certainly requires familiarity with your environment (language constructs, system libraries, third party libraries and perhaps something of the concepts underpinning them), but there are many soft skills involved which are harder to describe and can only really be acquired through practice.
As others have said, being good at programming is not about typing code from memory; it's about recognising patterns, understanding systems, solving problems. It's about choosing the right tools for the job (languages, libraries, algorithms, whatever) and being able to make proper use of them.
In all the jobs I've had, it's about adaptability and flexibility; you might have to learn a new language or pick up somebody else's poorly documented code tomorrow, and a good programmer will be able to take this in their stride.
I've been coding professionally for nearly 10 years now; there's all sorts of code I use semi-regularly which I look up the options for at least some of the time. There are too many commands with too many options in too many languages for me to remember each and every last one in detail and Google is quite good enough at getting the information I want.
That said - there are some bits of routine code which I use all the time but can count on one hand the number of times I've written - the exact syntax for populating a dataset in .Net for example. One of the skills I've most come to value over this time and which saves me the most time is spotting when some code can be quickly and easily moved into utility libraries. If it's fiddly but routine, consider this approach to save yourself hassle and improve your overall code quality.
In the context of this question (Java, C++, JavaScript), the languages are still evolving. I can't say about other languages.
The language standard/specification changes over time
Libraries are added to supplement the language constructs, e.g. Boost, Google Collections, Apache Commons, jQuery
Applications will rarely be bound to a single aspect of a language
Across organizations/projects, coding standards change
A project I worked upon recommended against using primitives
When unfamiliar with the constructs used, I put in pseudo-code flagged with //TODO first, then go in and find the actual API to use.
IMO, the answer to your question is - there is no definite answer.
As a Java programmer, the sheer size of the runtime library makes it impossible to remember everything. Swing is big, there is an XSLT engine (which contains TWO languages), and the concurrency support evolves and grows.
The direct access to the Javadoc API from within Eclipse combined with code completion makes it possible to find the information you cannot remember but you know is there, quickly and efficiently.
I have found the javaalmanac.com (which has been reworked into the more convoluted http://www.exampledepot.com/egs/index.html) invaluable in presenting short, concise and above all correct programs doing just one small thing. Strongly recommended.
Most weeks I program in Java, C#, Python, PHP, JavaScript, SQL, Smarty, Django Templating and occasionally C++ and Objective C. I'm a student so this is partially school work and partially my part-time job. Instead of learning syntax I've learned what concepts to look for.
Seeing patterns and concepts is key, once you know the concepts and what to look for the syntax is secondary.
I find that even when I am just being exposed to a language I can accomplish a lot just by looking for 'what ought to be there'
Why should you be able to remember all of this stuff? Personally I embrace the fact that I can't remember all of this stuff and simply try and remember where to find the information that I need. I find that much more useful. This takes the form of blogging, taking notes, keeping large amounts of 'sample' code and reusable libraries and writing about code that I find useful and interesting; oh and lots of books, some of which I hardly ever 'need' and some of which I hardly ever really read but they've been skimmed and I know where they live.
Technologies come and go and there's just no way I could have kept all of the things that I may one day find useful in my head; so I page them out and just keep the index in memory... For example; 9 years ago I was doing some stuff with Java and CORBA and whatever. There's no way that I could drop back into that now without the notes that I wrote up for my website back when I was doing it: http://www.lenholgate.com/blog/2001/02/corba---enumeration.html. Likewise I have code that I use on a daily basis that has been kicking around since 1997 or earlier. I don't remember how to type it in, I have it in a file with tests (if I'm lucky) and docs (if I'm even luckier).
Whilst I realise that most of what I'm talking about is 'big stuff' I also often have to go and look at some of my old code to simply work out how to structure a typedef...
Of course the day to day stuff will come with time and practice; but you need to work out that you have to page some of it out in a form that you can reload later very quickly. Embrace the fact that your memory is never going to be able to hold it all and outsource it :)

Is writing specifications for hobby projects the only way for them to be finished?

Here's what I'm wondering. Every night that our 3-month-old baby lets us sleep, I jump to my computer and start coding my hobby projects. I have about 20 different projects that I'm working on: different types of projects, from C++ games to web apps, along with some contributions to open source projects. It's truly a passion and has been for a lot of years.
Yet, when I look back, I see that I haven't been able to fully complete one of my hobby projects. I've always done the prototypes and set up the most important features, but with time, instead of finishing my project I end up switching to another project that seems "so much cooler" at the moment. Hence I usually end up with buggy and incomplete games that have no ending or story, 3D engines that have the fastest PolygonDraw routine ever yet implement nothing else, etc... The list is long. I think I must have written unfinished Pong a hundred different times!
I've been told that the remedy is to write specs for my hobby projects.
On one hand, I write a lot of specs at work. I know how crucial they are for defining a product's roadmap and staying within schedule. On the other hand, specs and hobby projects just don't quite seem to go together! It seems to me that the learning curve of building a game is actually what makes it fun, not the game itself. Hence the fun of losing time restructuring an entire engine, the fun of creating the most useless features, and so on...
So here comes the question: Do you ever write specifications for your hobby projects? How are they different from the ones at work? How do you manage to complete your hobby projects?
I'd be glad to know while I work on my new project: a piano sonata generator :)
I don't think writing specs is the solution to your problem. Clearly, your "hobby projects" are things that you find fun. You write the fun parts but then avoid the not fun parts that would be necessary to complete something.
If you're just "programming for fun" then good, you're succeeding. I don't think writing specs is fun.
If you really want to "finish" something, the best way isn't to write a spec, it's to not jump to another project when the fun factor dips.
It is all about 'Self project management' ... even for fun.
I feel for you ... I used to have many repos that tended to all get stuck at around revision 200 or so.
Here is what used to happen: because I didn't do enough planning, after around 200 commits things would get messy and need a rewrite ... then interest would disappear because it seemed like too much hassle.
I learned to write my own specs for personal use to:
Give me focus to get the job done, and not go off into feature-creep lane
Remind me what I am working towards
Have great ideas before I get coding
Keep things more fun for a longer time
For me, writing my own specs is vital to getting anything done!
You wouldn't start a business without a plan, would you?
For personal projects I have tons of Moleskine notebooks filled with rough specs and ideas. When they mature, they migrate from the notebooks into real documents and the coding begins.
BIG EDIT: On a drive for personal efficiency, and to get projects finished, I read "Getting Things Done"... Despite all the hippy crap about 'psyche' and various levels of mind (which I'm sure is not based in any science), the tips are very good.
I don't get too complicated, but listing out all of the features and requirements that you want included in your application really does help. As with most hobby projects you often don't just sit down and code them straight through for 2 months and finish them. It's an hour here, two hours there, etc. Basically it's very common to forget what you were working on last and what the original purpose of this super great idea for an application was.
If you spend a few hours writing down specs and requirements, it will be very valuable to you 6 months down the road, when you get some free time (or your ADD switches back to that project) and you try to remember what it was this thing was supposed to do.
I just found out recently that writing specs is really the thing I need to get my projects done.
I've been a bit like you, tons of projects, jumping from one to the other and never getting things finished. Until about 6 months ago, when I started to actually write specs and have a kind of roadmap for my projects.
All that I can say is that it actually works, because you break your projects into smaller steps, just like a race with checkpoints, and when you start to mark the checkpoints as done, it feels good, it's addictive, and your focus will be on the finish line.
This way, you keep only 1 or 2 projects going at the same time, but actually finish them. And of course, you have the extra and pretty valuable bonus of being able to pick a project back up even if you don't touch it for a month or more. The specs will always be there to remind you of the goals and purposes of your project.
This is just my personal experience, and I believe that you should give it a try. Hopefully it will work out for you too.
I've been able to do some hobby projects and finish some of them. I try to finish them all, but some I just can't muster.
The reason, I think, is that the number of details needed to finish a project is so large that it goes from a passion project to a chore of a project.
What helped me finish most of mine is that they stayed a passion until only the finishing touches were left. So I just plowed through them.
Will a spec help? To some degree, yes. It gets you further into the project, but almost always there's a point where the passion fades and you look for the next shiny object.
It doesn't work for me! In fact, whenever I'm writing up specs I'm generally making the projects even bigger, and less likely to be finished.
Sometimes the best way to do it is to just do it.
Ze Frank explains this much better than me:
http://www.zefrank.com/theshow/archives/2006/07/071106.html (video link with swearing)
EDIT: Just to add. If you are finding you want to leave your half-finished project for a new, grand idea... do it! Don't look back!
Completion is not a requirement for your own pet projects. Nobody will blame you for not finishing stuff that barely anyone else would even bother starting.
The reason you started was because of passion. That is very important. You should not force yourself to 'slog through' in your free time. You will drain your passion which is your most vital resource.
I usually write a first set of spec when I get started.
I'm also a big fan of paper thinking, so I'll draw screens, UML, diagrams, flow charts, design elements... It's just a matter of defining the scope of your project and be able to watch what you had in mind. It really helps me think.
These documents will be my specs for the whole project. I will add others as I go, but I'm not trying to maintain the old ones as much as I would if it were a work project: I know where I'm going and I can keep track of the changes by looking at my code.
Of course, some of my hobby projects are done collaboratively. In these cases, I write down more specs in order to have a better communication with my team and I try to keep documents such as DB Diagrams up to date.
I also have several hobby projects that I have not finished. I have about 10 and have written a specification for exactly one of them, the largest in scope (also a game).
I have finished neither the ones without specifications nor the one with. I think this is because I never publish the work or show it to anyone, so it remains full of bugs and never 'finished'.
I suppose this means that whether or not you have a spec will not affect the success of the project as much as other factors, like time, motivation, help, and confidence.
The single best thing I've ever found to help move towards completion is to have someone else working on the project with you. Find a friend (or two) who is interested in the same thing and design/code it with them. Not only do you have someone to bounce ideas off of, but you've also got someone to motivate you, not to mention progress is twice as fast so you'll hopefully finish before you give up :)
Of course, it requires source control, but you were already using that for your projects, right? :)
Do you want to finish them?
I think it's reasonable to never finish a hobby project. You can just keep working on it as long as you live. Aciddose has been working on his virtual instrument xhip for years, stubbornly never getting to 1.0, making the instrument patches people program worthless from one release to the next. Yet he and the users of his softsynth seem to be having a grand time.
Maybe if you just aim for a "release" and not being "finished" you'll be more satisfied. Betas let you keep dreaming.
Yes and no. I write notes in a notebook as I'm thinking about it, and add to it as I implement it. It is a somewhat different process from work projects where someone else may have to see the spec.
I finish about half of what I start.
I've helped with development on a range of systems from safety critical avionics to throwaway personal projects like a Sudoku solver. Obviously with the avionics systems, specifications were critical to the safe operation of the system and to prevent killing somebody, but I've never bothered with my personal projects.
I think this is because specs are generally boring to read and write. Joel wrote an interesting article about this, and how to make them better if you do write them:
Painless Functional Specifications
Unfortunately I haven't had the guts to try making my specs more fun to read at work yet.
Maybe instead of writing specs you should try working on some projects for or with other people? That could provide some external motivation. I do some web development for my cousin's drive-in theater, and if they need a feature they won't stop asking me about it until I finish it.
The single biggest piece of advice I could give you would be to get something out there - make the spec for your first version small enough that you actually feel you can complete it, even though it won't have nearly all the features you want.
Once you get something out there, the pressure from users of your software will be enough to hopefully keep you going on it. It also ensures that the direction you take in development is the same direction your users want you to go.
If you don't actually get any users, then don't feel so bad about dropping the project - if nobody is interested, it probably isn't worth pursuing.
If pressure from your users isn't enough to keep you focused, then open source it. If there's enough interest in it, somebody else will pick it up where you left off, and you are free to move on to bigger and better things.
Unfortunately, after writing specs for the core of the DIFL engine (don't bother looking it up, as there's no trace of it outside my home systems), I still didn't finish it up.
Short answer: developing specifications for a hobby project is neither necessary nor sufficient to guarantee completion.
That being said...
I keep an engineering notebook for all of my personal projects. I use the notebook to capture all sorts of things about the projects on which I work. This includes project motivation, valuable resources leveraged during the project, things developed over the course of the project that might potentially be reused later, key insights gained, etc. etc. It also includes, more to your question, specifications for most of the projects. I employ an agile/lean approach to creating these specifications which, for me, is compelling from a cost/benefit perspective.
btw...I have many, many personal projects that did not culminate in a complete working system. Some of these I might get around to completing 'someday maybe'. I consciously chose to stop working on some of the others because they had served their purpose (e.g. introduced me to a new technology, helped me better understand a language feature, etc.) Continuing to crank away at projects like these would have led to diminishing returns so I chose to reallocate my time to projects I felt were higher leverage.
The real question is: what is your hobby? Is it finishing a project, or tinkering. If getting the last ten yards is a chore, you have to decide if it's worth it to you. Writing detailed specs will work; so will self-flagellation if you're into that sort of self-discipline. Nothing will make it easy if it's against your make-up, so you have to decide whether the end-goal is worth anything to you.
And, just to demonstrate that there is nothing programming-specific about this point, you might really like this guy. One of the main points in his work is that conceptual artists, such as Picasso and Da Vinci never really cared about the final execution--the idea was everything, and, having asserted it, they were strangely content with someone else finishing the actual work or leaving the sketch unfinished and unpublished.
I'm not sure that writing specs is the solution to your problems (or mine which seem similar) however in the case where I want to make something more than a throwaway experiment there are a few things that help me slightly without taking the fun out of it.
Real specs are quite tight and technical, but for a hobby approach you could write up something similar but much looser that outlines some of the things you would like to feature and shows how they fit together, in a sort of design draft. Though not as detailed or restrictive as a proper spec, it might help to keep the tinkering heading in the right direction.
Secondly, you could break it down and, depending on your time allowances, maybe add a few goals in. If you focus on building one part of the project at a time, breaking it into subprojects that can be linked together at the end, it gives a feeling of progress as you move from part to part, rather than feeling like you have been working on the same thing for ages and can't be bothered any more. It works best if you tick it off on a list, since usually that has to happen at least mentally anyway.
That said, if your goal is to play with certain concepts and not actually create a final product, then you probably won't create one, because you aren't working towards it. One way around that might be to take the above-mentioned idea of breaking the project up and then find a way of adding something personally interesting into each part that bores you, maybe turning it into a challenge or something.
I'm not particularly experienced and still learning, but this is how I keep my tinkering together (unless I hit a total block caused by inexperience), and how I've approached many multimedia and web projects on a hobby basis in past years. Though the guy who said to open-source it when you get bored and let someone else pick it up had a good idea, if you want to see your code used but have satisfied your personal goals.
I have much the same problem. One thing I've noticed that HAS helped, though, is lowering my ambitions. Like WAY WAY low. Writing a spec is one way to rein in the ambitions, if you have some kind of limiting rule for the spec, like "the spec can only be one page", or "the spec can be no longer than 300 words", or "spec only something that I can get done in one day of coding". Getting the balance right can take some practice. If you go with the last limit, you can impose the rule of MANDATORY dismissal of the project if you can't finish it in one day.
The nice thing about this, is it limits you to achievable goals. This might sound really stupid or wrong at first. Or maybe it sounds reasonable, but you just can't help it, you wanna do amazing things, not ordinary things! Not small things that you can only get done in a few hours!
but keep this in mind:
“A complex system that works is invariably found to have evolved from a simple system that worked. The inverse proposition also appears to be true: A complex system designed from scratch never works and cannot be made to work. You have to start over, beginning with a working simple system.” —John Gall
It is SO MUCH easier to build that ambitious project if you already have a FINISHED and WORKING project to base it on. Then the "more complex thing" CAN be a project that fits in a day. This is the ideal and philosophy I'm working towards, because I think it has the best chance of succeeding. Looking at past successful projects, the vast majority of them evolved in this way, whether it was intentional or not.
What helps me a lot is to split a new feature into small tasks that could each be done in an evening hack session. So if I have time, I simply pick one task from the list and just finish it. This is often enough to get "in the flow" and do "just one more".
I do this only for one feature at a time so I don't get distracted by all the other cool things I could add to my application.
I constantly write specs for my projects: at work, at university and in my free time. The biggest weakness of a programmer is his/her memory, so I find it good to keep myself busy during my thinking time by writing down my every thought into some sort of structured document. Before you know it, you've written a full database schema or a Requirements Specification.
At the moment I'm working on improving my SQL skills, and I've been spending a lot of the free time between writing queries writing down my experiences. After a couple of tweaks I had a decent document outlining what needed to be done.
I think the core problem is not the lack of specs, but rather that finishing something (anything) is hard.
It is hard work. It may seem as if your program is 90% done, but doing those last 10% (rooting out all bugs, getting the application to release quality, writing documentation, etc.) requires as much work as the first 90%. And if you want to be serious about marketing your program, answering support emails and fixing other people's bugs is more work still. And perhaps not the kind of work you are most interested in.
It is also hard mentally. An unfinished project has unlimited potential. It is an empty canvas where you can project your unbridled ambitions, lofty ideals and revolutionary thoughts. Once it is finished and made real you have to see it for what it is. Limited. Flawed. Never as pretty as the idea that spawned it.
That said, finishing something can also be very rewarding. You learn a lot, get a reality check on your ideas, the satisfaction of having completed something and you get to see what other people think of your work.
Some advice:
Make sure that you really want to finish the project. I.e., that the rewards are worth all the hard work. (If not, then accept that fact and remain a happy tinkerer.)
Find ways of motivating yourself through the "boring" parts. Specs, maybe, if that keeps you focused. But find whatever works for you, whether it is ticking off todo items, rewarding yourself with a cookie or dreaming of fame and fortune.
Release early, release often. The more you save for a "big release" the bigger is the chance that that release never happens.
First release, then rewrite. When you feel the urge to do a major rewrite, do a release first, then do the rewrite (if you are still up for it). Software is never perfect. If you strive for perfection without any pressure to release your half-baked (but existing) code, then you will never be done.
Most hobby projects of mine don't really get finished either. As long as I'm working on something and learning, though, I don't think that's a problem. Currently I'm not writing specs, but I am practicing TDD. I bring it up because I write tests that are the specs. Some days I'll sit down and just create a bunch of tests outlining what the software should do. Some days I make those tests pass. It's enjoyable in that I don't have to keep the code all in my head, and at any point I can sit down and make further progress by fixing the broken tests. Things just work; it's kind of surreal.
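For a flavor of what "tests as the specs" can look like, here is a minimal sketch assuming JUnit 4 on the classpath; the Wallet class and its behavior are invented for illustration:

import static org.junit.Assert.*;
import org.junit.Test;

// Written first, each test is one line of the spec; the class below is
// the code that eventually makes the spec pass.
public class WalletSpec {
    @Test public void startsEmpty() {
        assertEquals(0, new Wallet().balance());
    }

    @Test public void depositIncreasesBalance() {
        Wallet w = new Wallet();
        w.deposit(50);
        assertEquals(50, w.balance());
    }

    @Test(expected = IllegalArgumentException.class)
    public void rejectsNegativeDeposits() {
        new Wallet().deposit(-1);
    }
}

class Wallet {
    private int cents = 0;
    int balance() { return cents; }
    void deposit(int amount) {
        if (amount < 0) throw new IllegalArgumentException("negative deposit");
        cents += amount;
    }
}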
Joel's article about Evidence Based Scheduling works for me, though I implemented it differently.
The idea is to break the project into small tasks and give estimates, then forecast when your project will finish based on how long the finished tasks actually took.
You may think your project will take years to finish, but actually from the estimate it's just two months or less. If you work more and finish tasks quickly, you will see the finish date coming earlier.
I think the most motivating thing for pressing forward is seeing the goal you are running towards come closer.
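Here is a minimal sketch of that forecasting idea (this is not Joel's full algorithm, which runs a Monte Carlo simulation over your estimation history; all task numbers here are made up):

// Forecast remaining work by scaling estimates with your measured
// velocity (total estimated time / total actual time on finished tasks).
public class SimpleForecast {
    public static void main(String[] args) {
        // {estimated hours, actual hours} for tasks already finished
        double[][] done = { {2, 3}, {4, 4}, {1, 2}, {3, 5} };
        // estimated hours for tasks still to do
        double[] remaining = { 2, 6, 4, 3, 1 };

        double est = 0, act = 0;
        for (double[] task : done) { est += task[0]; act += task[1]; }
        double velocity = est / act; // below 1.0 means you underestimate

        double left = 0;
        for (double e : remaining) left += e;

        System.out.printf("Velocity so far: %.2f%n", velocity);
        System.out.printf("Remaining work: roughly %.1f hours%n", left / velocity);
    }
}

Watching the "remaining hours" number drop as you close tasks is exactly the approaching-goal effect described above.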
Plus: create something you will use later. Using stuff gives you incentive to improve it later.

How to keep a programming course interesting?

I guess the following is a standard problem at every school or university:
It is your job to teach programming. Unfortunately, some of the students are semi-professionals with years of experience, while others do not even know the basic concepts, e.g. the concept of a "typed variable".
As far as I know, this leads to one of the following situations:
Programming is taught from the very basics. The experienced students get bored and stop attending the lectures. As a consequence, they will miss even the stuff they do not already know.
Teachers and professors claim that they require basic knowledge (whatever that means). Inexperienced students cannot follow the lectures and a lot of them will focus on unimportant stuff (e.g. understanding every detail of a complex example while not getting the concept behind the example). Some of them will give up.
Universities invent an artificial programming language to give experienced programmers and newbies "equal chances". Most students will get frustrated about the "useless language".
Is there a fourth solution, which is better than those above?
IMO this is a problem based on the placement of the students, not something you should be too interested in dealing with on your end as a teacher.
If the course is an introduction to programming a computer, then you really need to start with the basics. If you have a classroom full of professionals who know how to program and they don't show, it was either a problem with your course description, or the school forcing them to take the class as a pre-req without allowing them to test out.
Your job should be to describe what you want to teach in the course description, and teach it. If students enroll who are overqualified, that's their problem. I think the only thing you really need to avoid is trying to make the course too advanced for beginners if your course really is for beginners.
I think the best way to keep it interesting is to bring up practical and interesting exercises alongside the theory. Taking a problem-solution approach is great (with interesting, funny, exciting, real-world problems). This requires the professor himself to have hands-on experience, to work with new technologies and know them pretty well, and not just teach what he learned a couple of decades ago.
The thing is, programming should be learned by practice. The instructor should focus on motivating students to code and try to solve problems themselves. This can be done by assigning a complete life-like project at the beginning of the course and working through the subproblems that occur in the project in the class. This way, students will have an idea why some specific feature in the programming language exists and where it might be useful.
Just a thought though. Not tried it! ;)
I recently attended a course in which there was a very wide spectrum of experience in programming among the students. They still managed to keep the experienced programmers in the class interested by having an exercise program in which they timed the practical parts of the exercises (the programming part), and posted the results in a high score chart. At the end of each lecture, the professor gave some pointers as to how we could improve our times even more. As we all know, all engineers love competing for topping such lists, so we kept showing up, and even learned a new thing or two.
The inexperienced students managed to complete the exercises too, even if they didn't care too much about their times.
Don't know if your course is one that can implement this solution, but if it is, you should really consider it.
I think there are a couple of things you can do to help bridge the gap between advanced and beginner students and to keep everyone interested and involved in the course.
Advanced Workshops
If it can be arranged (using PhD students etc.), run an optional weekly workshop which anyone can attend, but which is aimed at the more experienced students. Set a coding task/challenge each week, and then at the workshop go through various solutions to the problem and discuss the implications and the theory behind the different choices.
This provides an interesting challenge for the more experienced coders, as they have something to get their teeth into. It opens some debate and can help intermediate people grasp interesting concepts, and if you get people to present their solutions, it introduces an open reviewing style which is beneficial. It also helps the beginners, in that you don't have to present really advanced concepts in the main lecture series just to keep the experienced people interested.
Student Involvement
Experienced people generally are experienced because they enjoy coding etc. and a lot of people love to share their knowledge. A really good way to use this, and to help both beginners and advanced students is to get the more advanced students involved in the teaching. If you run classes/labs where students complete exercises, try getting volunteers from the more experienced students to act as mentors/ supervisors for the labs. When the beginners struggle they can help out by explaining fine details or subtleties etc.
This can really help the beginners, as there are usually never enough staff available for everyone to be able to ask individual questions. It can also really benefit the more advanced people, as having to explain concepts which you "know" is a great way to reinforce them in your own mind, and even to discover that you have subtle misunderstandings in your own knowledge.
Don't assume more than you need to; try to select programming environments which don't have too much intellectual baggage. You may think a C "Hello world" program is simple, but it requires understanding source files, compilation, static typing and block structuring. These are not easy concepts for a beginner. In comparison, typing "print 'hello world'" into a Python shell avoids them. Declarations, compound types, object orientation, pointers, floating point, recursion, modularity, threads, callbacks, networking, databases and so on are all major concepts which require effort to learn. And there are plenty of fun things to be done without them. Your goal should be to get everyone in the group doing programming exercises as soon as possible.
Mixed ability teaching is hard; stream it by splitting the group up if you can. Maybe publish a quiz of basic concepts, and have an optional basic concepts section for those who didn't get 100%. Some people think they are experienced programmers but have misunderstood basic ideas.
If the course time available is too short to let people try lots of exercises, then I'd drop the more advanced material before I dropped practical work.
In one course I took, a large part of the course grade was derived from an end-of-term project which was announced in advance, with extra credit available for assorted add-ons and frills. Sufficiently experienced students could start working on it while their less prepared brethren were being taught the basics.
But as Dave Markle says, part of this is a matter of getting the right students into your class: you really want a cohort that is fairly well matched at the start.
If you have many experienced students, or this is an upperclassman/graduate-level course, you should focus on integration into an existing ecosystem. Being able to understand and integrate into an existing project, rather than always working from scratch, is one of the most important skills you can give those students.
Thus, programming assignments should come from real world scenarios. E.g., assign them tasks in an open source project. This can also make it more interesting, especially as their work may become part of a real world project.
If it's really beginners, tough luck, you will have to stick to the basics though if the students are non-CS majors, you can create problems from their own domains (e.g., engineering, chemistry, etc.)
I think you could be toast.
After some point, the difference is just too wide. It will take the whole year to get the beginners to the point where they can even understand material that won't bore the more advanced people.
However, this clearly depends on the topic and setting. For some combinations of those, the solution is to teach to the level that the class is billed for. Those who are too advanced will get bored and quit; those who are too inexperienced will fall behind and quit. Don't worry about it too much, as neither should have taken the class anyway. If, on the other hand, they need to take the class, then someone further up the ladder messed up.
Sitting in your chair watching someone talk is boring (even if you talk well).
Things are interesting if you can achieve something, when you can manipulate the world and have a moment of success. So add as many practical exercises as you can, and make really sure that students can complete them successfully in the time available.
Nothing is more frustrating than to hear: "Well, I'm sorry that you couldn't complete it. You can find a solution here. Let's copy that and pretend it did work and move on." Examples during a course are simple and the people in front of you know that. So if they can't even solve the simple examples you bring along for them, what are they going to think?
I always think it is best to learn through practice. At the beginning of the course especially it is incredibly boring to teach language syntax in a lecture. It is far better to require your students to complete some work on their own or in a lab with assistants. This allows the more experienced students to get through the work quickly.
Once this is done you can have a lecture where you discuss some of the solutions to problems. Why they are good and why they are bad.
This works especially well if you also structure your course in such a way that students are always building on their previous work. The first week can be something simple like calculating how many days old I am from my birthday. A problem that is relatively simple mathematically, but with a few weird cases. This might take several hours for someone inexperienced, especially if they are learning syntax at the same time. But it gives them a simple goal to work toward.
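One shape that first exercise might take, as a Python sketch (note that the standard library hides the leap-year and month-length "weird cases" that a from-scratch version would surface):

    from datetime import date

    def days_old(birthday):
        """Whole days lived so far."""
        # Subtracting dates handles leap years and month lengths for us;
        # computing this by hand is where beginners hit the weird cases.
        return (date.today() - birthday).days

    print(days_old(date(1990, 5, 17)))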
After this you can expand on it, e.g. take last week's program and add functionality that allows it to batch process a file. This teaches people the importance of restructuring and refactoring, and it can be expanded week after week. You may even want to distribute a good piece of work from the previous week for those who are falling behind to use. Obviously you will need to make sure people don't get too far behind, but this is a nice way to make sure that everyone feels they have a fair shot even if their previous week's work wasn't too good. Those who are doing well will end up building on their own work throughout the course.
The key is to keep your lecture sessions relatively high level, and have people learn the syntax on their own, or with lab assistants. You can teach them different ways of thinking about a problem but the actual act of writing code is much easier to learn by doing it.
A scheduling nightmare once had me teaching a beginner class and an advanced class in the same classroom at the same time. What I did was divide my time between the two levels, starting out by giving the advanced group an assignment to do in class while I worked with the beginners, and then giving the beginners an assignment while I worked with the advanced group. You could do something similar (only having the groups self-select into the group they wanted to be in). Prepare some extra material for the more advanced ones and you are off to the races.
Another strategy is to keep everything at the beginner level, but offer the more advanced students some other material to do for extra credit (or even as substitution for some of the simpler tasks you require of the beginners). Discuss the more advanced assignments with them outside of class or individually while the class is working on practical work in the lab.
Keeping the lectures interesting with plenty of real world examples is helpful too. I tended to lecture as little as possible though and present the material more through class discussion and practical exercises and through asking leading questions. Making them find the information to answer your questions (and class participation was part of the grade) will make them pay more attention.
I also ended each semester with a course project for which I described only what they had to do in order to get a B. An A would involve doing work beyond that, including work in an area not covered in class. The more advanced students can then really shine by looking for really cool new things to do, and even the beginners usually find a way to do something not covered by the course. It's amazing how much extra effort they will go to when they don't know how much more they have to do to get an A. Other instructors were amazed at the quality of the end-of-course projects I got, and several of them started using the same method.
It may be better to break what some would call "Introductory Programming" into a few areas of concern:
1) Introduction to personal computers and modern computing. Assuming the course's software runs on Windows, some students may need to cover the basics of a computer, e.g. what a hard drive, keyboard, mouse, monitor, CPU, or motherboard is. Note that this has nothing to do with writing even one line of code, other than potentially naming operating systems. For some people this may all be new, so a course that covers the basics may well be worth it. This course would also cover how to use a mouse and all its various buttons, the various kinds of cables and connections, what drivers and patches are, and the parts of a network, e.g. firewall, router, load balancers, etc. The idea here isn't to get into configuring a firewall perfectly, but rather for the person to understand what the various hardware components are for, with configuring a home wireless network as perhaps the most complicated concept taught.
2) Principles of programming. This would start with the idea of what it means to execute a sequence of commands. Things like printing and performing mathematical operations, e.g. converting from Imperial to metric, would be covered, with sorting perhaps the most complicated example, viewed through a variety of different algorithms and a basic understanding of big-O notation (see the sketch after this list).
3) Introduction to Data Structures and Advanced Programming. Now introduce the concept of a relational database and how databases work in general, with projects that have real-world application, e.g. have each student take a list of something they own, like DVDs or CDs, and put it into a database schema that stores the data efficiently. Also cover the idea of floating-point arithmetic and its limitations, e.g. that a computer doesn't store the whole value of pi but rather an approximation that should be good enough in most cases (see the sketch after this list).
4) Introduction to Parallel Programming and Operating Systems. Here you would have some in-depth work on building an operating system, on writing code that runs concurrently or in parallel, and on examining how efficient various programs are under different circumstances.
That is how I could see someone breaking up programming so that it isn't a subject where someone can learn just enough in a week to pass the final without ever looking at anything else.
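A hedged sketch of the kind of material items 2) and 3) describe (Python; the inches-to-centimetres factor is real, the rest is illustrative):

    import math

    # 2) Principles: executing a sequence of commands, e.g. unit conversion...
    def inches_to_cm(inches):
        return inches * 2.54

    # ...up to sorting, compared across algorithms to motivate big-O.
    def selection_sort(items):
        """O(n^2): each pass scans the unsorted tail for its minimum.
        Contrast with the built-in sorted(), which runs in O(n log n)."""
        items = list(items)
        for i in range(len(items)):
            j = min(range(i, len(items)), key=items.__getitem__)
            items[i], items[j] = items[j], items[i]
        return items

    # 3) Floating point stores approximations, not exact values:
    print(math.pi)            # 3.141592653589793 -- not the whole value of pi
    print(0.1 + 0.2 == 0.3)   # False: binary floats can't represent 0.1 exactly
    print(selection_sort([5, 2, 9, 1]), inches_to_cm(12))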
I have frequently been in this situation, first on the student side of things, and then on the teaching side.
Most schools force those sorts of courses and their curriculum. This is silly, but such is life. If your school does allow it, I would suggest offering students an attendance waiver if they pass an early screening test. It is in your interest and the interest of the freshmen not to sit in a class where a significant portion of the population is bored. Even being in a room with tons of people staring at their laptops harms discourse. Everyone is required to attend tests and submit assignments, but at least they don't have to show up.
Once you work with the novices, figure out whether they're majors or non-majors. Non-majors will resent being in a CS course; you have to try to make it approachable for them. E.g., use examples from physics or chemistry or math rather than from building an interactive GUI system.
If they're CS majors, they'd better damned be interested :)
My opinion is that teaching sample programs is dead boring for most people. Searching, sorting, classification of 7-bit ASCII input, using Unix and make, opening a file, writing a file...
These are boring problems. Regardless of their importance/usefulness, these are tools. Unfortunately, tools are what's taught in intro courses, not problems.
But you need tools to be able to solve a problem. So it's a kind of chicken-egg problem.
Give real-world examples of code the students can imagine themselves writing in their free time.
I remember a teacher telling me to use const values; it was the tax rate of something. I only had to use the value in two places. She asked what if I needed to change it; I said it's only in two places and I'd change it by hand, and besides, I couldn't imagine the government ever changing the tax percentage.
I can't think of a non-complex example where I would use a const, so I wouldn't try teaching them to use that. But for arrays I would simply write a guessing game: when the player wins, it plays back all their guesses in the same order. There is no easy way to do that without arrays, and I could see how keeping track of someone's steps/guesses would be useful (bragging rights to how quickly a person guessed it).
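A sketch of that guessing game in Python (the list of guesses is the array the exercise is really about):

    import random

    def guessing_game():
        secret = random.randint(1, 100)
        guesses = []  # the array that motivates the lesson
        while True:
            guess = int(input("Guess a number from 1 to 100: "))
            guesses.append(guess)
            if guess < secret:
                print("Higher!")
            elif guess > secret:
                print("Lower!")
            else:
                break
        # Playing the guesses back in order is awkward without an array.
        print("Got it in", len(guesses), "guesses:", guesses)

    guessing_game()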
On the first day, give the syllabus (what they will learn) and the required basic knowledge (things you must know or else not take this class), and stick with it. After that, all you can do is teach well (explain things well, answer questions, give a joke or two now and then, etc.). Caring who attends class, whether the field is boring, whether students lied about prerequisites, who listens and the other yada yada is beyond your control. Besides, you should expect adults to be adults. If students skip class and ace the tests, maybe that is best for them. If they skip class and bomb the tests, well, maybe they are in the wrong place.
I hated when professors had this mentality when I was in college. Now as a working professional I understand it.
Center the programming exercises around either sports or movies.

What's your most controversial programming opinion?

This is definitely subjective, but I'd like to try to avoid it becoming argumentative. I think it could be an interesting question if people treat it appropriately.
The idea for this question came from the comment thread from my answer to the "What are five things you hate about your favorite language?" question. I contended that classes in C# should be sealed by default - I won't put my reasoning in the question, but I might write a fuller explanation as an answer to this question. I was surprised at the heat of the discussion in the comments (25 comments currently).
So, what contentious opinions do you hold? I'd rather avoid the kind of thing which ends up being pretty religious with relatively little basis (e.g. brace placing) but examples might include things like "unit testing isn't actually terribly helpful" or "public fields are okay really". The important thing (to me, anyway) is that you've got reasons behind your opinions.
Please present your opinion and reasoning - I would encourage people to vote for opinions which are well-argued and interesting, whether or not you happen to agree with them.
Programmers who don't code in their spare time for fun will never become as good as those that do.
I think even the smartest and most talented people will never become truly good programmers unless they treat it as more than a job. Meaning that they do little projects on the side, or just mess with lots of different languages and ideas in their spare time.
(Note: I'm not saying good programmers do nothing else than programming, but they do more than program from 9 to 5)
The only "best practice" you should be using all the time is "Use Your Brain".
Too many people jumping on too many bandwagons and trying to force methods, patterns, frameworks etc onto things that don't warrant them. Just because something is new, or because someone respected has an opinion, doesn't mean it fits all :)
EDIT:
Just to clarify - I don't think people should ignore best practices, valued opinions etc. Just that people shouldn't just blindly jump on something without thinking about WHY this "thing" is so great, IS it applicable to what I'm doing, and WHAT benefits/drawbacks does it bring?
"Googling it" is okay!
Yes, I know it offends some people out there that their years of intense memorization and/or glorious stacks of programming books are starting to fall by the wayside to a resource that anyone can access within seconds, but you shouldn't hold that against people that use it.
Too often I hear googling answers to problems being held up as grounds for criticism, and it really makes no sense. First of all, it must be conceded that everyone needs materials to reference. You don't know everything and you will need to look things up. Conceding that, does it really matter where you got the information? Does it matter if you looked it up in a book, looked it up on Google, or heard it from a talking frog that you hallucinated? No. A right answer is a right answer.
What is important is that you understand the material, use it as the means to an end of a successful programming solution, and the client/your employer is happy with the results.
(although if you are getting answers from hallucinatory talking frogs, you should probably get some help all the same)
Most comments in code are in fact a pernicious form of code duplication.
We spend most of our time maintaining code written by others (or ourselves) and poor, incorrect, outdated, misleading comments must be near the top of the list of most annoying artifacts in code.
I think eventually many people just blank them out, especially those flowerbox monstrosities.
Much better to concentrate on making the code readable, refactoring as necessary, and minimising idioms and quirkiness.
On the other hand, many courses teach that comments are very nearly more important than the code itself, leading to the "this next line adds one to invoiceTotal" style of commenting.
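A contrived Python snippet to illustrate the duplication (the names are invented):

    invoice_total = 100
    # The criticised style: the comment merely restates the code.
    invoice_total = invoice_total + 1  # add one to invoiceTotal

    # The alternative: let a well-chosen name say it, no comment needed.
    handling_fee = 1
    invoice_total += handling_fee
    print(invoice_total)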
XML is highly overrated
I think too many jump onto the XML bandwagon before using their brains...
XML for web stuff is great, as it's designed for it. Otherwise I think some problem definition and design thought should precede any decision to use it.
My 5 cents
Not all programmers are created equal
Quite often managers think that DeveloperA == DeveloperB simply because they have same level of experience and so on. In actual fact, the performance of one developer can be 10x or even 100x that of another.
It's politically risky to talk about it, but sometimes I feel like pointing out that, even though several team members may appear to be of equal skill, it's not always the case. I have even seen cases where lead developers were 'beyond hope' and junior devs did all the actual work - I made sure they got the credit, though. :)
I fail to understand why people think that Java is absolutely the best "first" programming language to be taught in universities.
For one, I believe that a first programming language should be one that highlights the need to learn control flow and variables, not objects and syntax.
For another, I believe that people who have not had experience in debugging memory leaks in C / C++ cannot fully appreciate what Java brings to the table.
Also the natural progression should be from "how can I do this" to "how can I find the library which does that" and not the other way round.
If you only know one language, no matter how well you know it, you're not a great programmer.
There seems to be an attitude that says once you're really good at C# or Java or whatever other language you started out learning then that's all you need. I don't believe it- every language I have ever learned has taught me something new about programming that I have been able to bring back into my work with all the others. I think that anyone who restricts themselves to one language will never be as good as they could be.
It also indicates to me a certain lack of inquisitiveness and willingness to experiment that doesn't necessarily tally with the qualities I would expect to find in a really good programmer.
Performance does matter.
Print statements are a valid way to debug code
I believe it is perfectly fine to debug your code by littering it with System.out.println (or whatever print statement works for your language). Often, this can be quicker than stepping through a debugger, and you can compare printed outputs against other runs of the app.
Just make sure to remove the print statements when you go to production (or better, turn them into logging statements)
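A minimal sketch of that last step, using Python's standard logging module (the function and values are invented for illustration):

    import logging

    logging.basicConfig(level=logging.DEBUG)  # raise to WARNING in production
    log = logging.getLogger(__name__)

    def apply_discount(price, rate):
        discounted = price * (1 - rate)
        # Where a debug print once lived; unlike print, this can be
        # silenced globally just by changing the log level.
        log.debug("price=%s rate=%s -> %s", price, rate, discounted)
        return discounted

    apply_discount(100.0, 0.15)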
Your job is to put yourself out of work.
When you're writing software for your employer, any software that you create is to be written in such a way that it can be picked up by any developer and understood with a minimal amount of effort. It is well designed, clearly and consistently written, formatted cleanly, documented where it needs to be, builds daily as expected, checked into the repository, and appropriately versioned.
If you get hit by a bus, laid off, fired, or walk off the job, your employer should be able to replace you on a moment's notice, and the next guy could step into your role, pick up your code and be up and running within a week tops. If he or she can't do that, then you've failed miserably.
Interestingly, I've found that having that goal has made me more valuable to my employers. The more I strive to be disposable, the more valuable I become to them.
1) The Business Apps farce:
I think that the whole "Enterprise" frameworks thing is smoke and mirrors. J2EE, .NET, the majority of the Apache frameworks and most abstractions to manage such things create far more complexity than they solve.
Take any regular Java or .NET ORM, or any supposedly modern MVC framework for either, which does "magic" to solve tedious, simple tasks. You end up writing huge amounts of ugly XML boilerplate that is difficult to validate and slow to write. You have massive APIs, half of which exist just to integrate the work of the other APIs; interfaces that are impossible to recycle; and abstract classes that are needed only to overcome the inflexibility of Java and C#. We simply don't need most of that.
How about all the different application servers with their own darned descriptor syntax, the overly complex database and groupware products?
The point of this is not that complexity == bad; it's that unnecessary complexity == bad. I've worked in massive enterprise installations where some of it was necessary, but even there, in most cases a few home-grown scripts and a simple web frontend are all that's needed to solve most use cases.
I'd try to replace all of these enterprisey apps with simple web frameworks, open source DBs, and trivial programming constructs.
2) The n-years-of-experience-required:
Unless you need a consultant or a technician to handle a specific issue related to an application, API or framework, then you don't really need someone with 5 years of experience in that application. What you need is a developer/admin who can read documentation, who has domain knowledge in whatever it is you're doing, and who can learn quickly. If you need to develop in some kind of language, a decent developer will pick it up in less than 2 months. If you need an administrator for X web server, in two days he should have read the man pages and newsgroups and be up to speed. Anything less and that person is not worth what he is paid.
3) The common "computer science" degree curriculum:
The majority of computer science and software engineering degrees are bull. If your first programming language is Java or C#, then you're doing something wrong. If you don't get several courses full of algebra and math, it's wrong. If you don't delve into functional programming, it's incomplete. If you can't apply loop invariants to a trivial for loop, you're not worth your salt as a supposed computer scientist. If you come out with experience in x and y languages and object orientation, it's full of s***. A real computer scientist sees a language in terms of the concepts and syntaxes it uses, and sees programming methodologies as one among many, and has such a good understanding of the underlying philosophies of both that picking new languages, design methods, or specification languages should be trivial.
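For readers who haven't met the term, a loop invariant on a trivial loop looks like this (a Python sketch; the assert re-states the invariant on every pass):

    def sum_list(xs):
        total = 0
        for i in range(len(xs)):
            # Invariant: at the top of each iteration, total == sum(xs[:i]).
            assert total == sum(xs[:i])
            total += xs[i]
        # When the loop exits, the invariant gives total == sum(xs).
        return total

    print(sum_list([1, 2, 3, 4]))  # 10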
Getters and Setters are Highly Overused
I've seen millions of people claiming that public fields are evil, so they make them private and provide getters and setters for all of them. I believe this is almost identical to making the fields public, maybe a bit different if you're using threads (but generally is not the case) or if your accessors have business/presentation logic (something 'strange' at least).
I'm not in favor of public fields, but I am against making a getter/setter (or Property) for every one of them, and then claiming that doing so is encapsulation or information hiding... ha!
UPDATE:
This answer has raised some controversy in its comments, so I'll try to clarify it a bit (I'll leave the original untouched since that is what many people upvoted).
First of all: anyone who uses public fields deserves jail time
Now, creating private fields and then using the IDE to automatically generate getters and setters for every one of them is nearly as bad as using public fields.
Many people think:
private fields + public accessors == encapsulation
I say that (automatic or not) generation of a getter/setter pair for your fields effectively goes against the so-called encapsulation you are trying to achieve.
Lastly, let me quote Uncle Bob in this topic (taken from chapter 6 of "Clean Code"):
There is a reason that we keep our variables private. We don't want anyone else to depend on them. We want the freedom to change their type or implementation on a whim or an impulse. Why, then, do so many programmers automatically add getters and setters to their objects, exposing their private fields as if they were public?
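To make the contrast concrete, a small Python sketch (class and method names invented): the first class is public fields with extra steps; the second exposes behavior instead of representation:

    class AccountWithAccessors:
        """Getter/setter pairs: callers still depend on the field."""
        def __init__(self):
            self._balance = 0
        def get_balance(self):
            return self._balance
        def set_balance(self, value):
            self._balance = value

    class Account:
        """Expose operations, not the representation."""
        def __init__(self):
            self._balance = 0
        def deposit(self, amount):
            if amount <= 0:
                raise ValueError("deposit must be positive")
            self._balance += amount
        def can_afford(self, amount):
            return self._balance >= amount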
UML diagrams are highly overrated
Of course there are useful diagrams e.g. class diagram for the Composite Pattern, but many UML diagrams have absolutely no value.
Opinion: SQL is code. Treat it as such
That is, just like your C#, Java, or other favorite object/procedure language, develop a formatting style that is readable and maintainable.
I hate when I see sloppy free-formatted SQL code. If you scream when you see both styles of curly braces on one page, why don't you scream when you see free-formatted SQL, or SQL that obscures or obfuscates the JOIN condition?
Readability is the most important aspect of your code.
Even more so than correctness. If it's readable, it's easy to fix. It's also easy to optimize, easy to change, easy to understand. And hopefully other developers can learn something from it too.
If you're a developer, you should be able to write code
I did quite a bit of interviewing last year, and for my part of the interview I was supposed to test the way people thought, and how they implemented simple-to-moderate algorithms on a white board. I'd initially started out with questions like:
Given that Pi can be estimated using the function 4 * (1 - 1/3 + 1/5 - 1/7 + ...) with more terms giving greater accuracy, write a function that calculates Pi to an accuracy of 5 decimal places.
It's a problem that should make you think, but shouldn't be out of reach to a seasoned developer (it can be answered in about 10 lines of C#). However, many of our (supposedly pre-screened by the agency) candidates couldn't even begin to answer it, or even explain how they might go about answering it. So after a while I started asking simpler questions like:
Given the area of a circle is given by Pi times the radius squared, write a function to calculate the area of a circle.
Amazingly, more than half the candidates couldn't write this function in any language (I can read most popular languages so I let them use any language of their choice, including pseudo-code). We had "C# developers" who could not write this function in C#.
I was surprised by this. I had always thought that developers should be able to write code. It seems that, nowadays, this is a controversial opinion. Certainly it is amongst interview candidates!
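For scale, here is roughly what was being asked for, as a Python sketch; the stopping rule relies on the fact that an alternating series' error is at most the first omitted term:

    import math

    def estimate_pi(tolerance=1e-5):
        """4 * (1 - 1/3 + 1/5 - 1/7 + ...), stopping once the next
        term (which bounds the remaining error) drops below tolerance."""
        total, k = 0.0, 0
        while 4.0 / (2 * k + 1) > tolerance:
            total += 4.0 / (2 * k + 1) * (1 if k % 2 == 0 else -1)
            k += 1
        return total

    def circle_area(radius):
        return math.pi * radius ** 2

    print(round(estimate_pi(), 5), circle_area(2.0))

(Around 200,000 terms are needed for five decimal places, which is itself a nice talking point about convergence.)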
Edit:
There's a lot of discussion in the comments about whether the first question is a good or bad one, and whether you should ask questions as complex as this in an interview. I'm not going to delve into this here (that's a whole new question), except to say that it largely misses the point of the post.
Yes, I said people couldn't make any headway with this, but the second question is trivial and many people couldn't make any headway with that one either! Anybody who calls themselves a developer should be able to write the answer to the second one in a few seconds without even thinking. And many can't.
The use of hungarian notation should be punished with death.
That should be controversial enough ;)
Design patterns are hurting good design more than they're helping it.
IMO software design, especially good software design is far too varied to be meaningfully captured in patterns, especially in the small number of patterns people can actually remember - and they're far too abstract for people to really remember more than a handful. So they're not helping much.
And on the other hand, far too many people become enamoured with the concept and try to apply patterns everywhere - usually, in the resulting code you can't find the actual design between all the (completely meaningless) Singletons and Abstract Factories.
Less code is better than more!
If the users say "that's it?", and your work remains invisible, it's done right. Glory can be found elsewhere.
PHP sucks ;-)
The proof is in the pudding.
Unit Testing won't help you write good code
The only reason to have Unit tests is to make sure that code that already works doesn't break. Writing tests first, or writing code to the tests is ridiculous. If you write to the tests before the code, you won't even know what the edge cases are. You could have code that passes the tests but still fails in unforeseen circumstances.
And furthermore, good developers will keep coupling low, which makes the addition of new code unlikely to cause problems with existing stuff.
In fact, I'll generalize that even further,
Most "Best Practices" in Software Engineering are there to keep bad programmers from doing too much damage.
They're there to hand-hold bad developers and keep them from making dumbass mistakes. Of course, since most developers are bad, this is a good thing, but good developers should get a pass.
Write small methods. It seems that programmers love to write loooong methods where they do multiple different things.
I think that a method should be created wherever you can name one.
It's ok to write garbage code once in a while
Sometimes a quick and dirty piece of garbage code is all that is needed to fulfill a particular task. Patterns, ORMs, SRP, whatever... Throw up a console or web app, write some inline SQL (feels good), and blast out the requirement.
Code == Design
I'm no fan of sophisticated UML diagrams and endless code documentation. In a high level language, your code should be readable and understandable as is. Complex documentation and diagrams aren't really any more user friendly.
Here's an article on the topic of Code as Design.
Software development is just a job
Don't get me wrong, I enjoy software development a lot. I've written a blog for the last few years on the subject. I've spent enough time on here to have >5000 reputation points. And I work in a start-up doing typically 60 hour weeks for much less money than I could get as a contractor because the team is fantastic and the work is interesting.
But in the grand scheme of things, it is just a job.
It ranks in importance below many things such as family, my girlfriend, friends, happiness etc., and below other things I'd rather be doing if I had an unlimited supply of cash such as riding motorbikes, sailing yachts, or snowboarding.
I think sometimes a lot of developers forget that developing is just something that allows us to have the more important things in life (and to have them by doing something we enjoy) rather than being the end goal in itself.
I also think there's nothing wrong with having binaries in source control... if there is a good reason for it. If I have an assembly I don't have the source for, and it might not necessarily be in the same place on each dev's machine, then I will usually stick it in a "binaries" directory and reference it in a project using a relative path.
Quite a lot of people seem to think I should be burned at the stake for even mentioning "source control" and "binary" in the same sentence. I even know of places that have strict rules saying you can't add them.
Every developer should be familiar with the basic architecture of modern computers. This also applies to developers who target a virtual machine (maybe even more so, because they have been told time and time again that they don't need to worry themselves with memory management etc.)
Software Architects/Designers are Overrated
As a developer, I hate the idea of Software Architects. They are basically people who no longer code full time, read magazines and articles, and then tell you how to design software. Only people who actually write software full time for a living should be doing that. I don't care if you were the world's best coder 5 years ago before you became an Architect; your opinion is useless to me.
How's that for controversial?
Edit (to clarify): I think most Software Architects make great Business Analysts (talking with customers, writing requirements, tests, etc), I simply think they have no place in designing software, high level or otherwise.
There is no "one size fits all" approach to development
I'm surprised that this is a controversial opinion, because it seems to me like common sense. However, there are many entries on popular blogs promoting the "one size fits all" approach to development so I think I may actually be in the minority.
Things I've seen being touted as the correct approach for any project - before any information is known about it - are things like the use of Test Driven Development (TDD), Domain Driven Design (DDD), Object-Relational Mapping (ORM), Agile (capital A), Object Orientation (OO), etc. etc. encompassing everything from methodologies to architectures to components. All with nice marketable acronyms, of course.
People even seem to go as far as putting badges on their blogs such as "I'm Test Driven" or similar, as if their strict adherence to a single approach, whatever the details of the project, is actually a good thing.
It isn't.
Choosing the correct methodologies and architectures and components, etc., is something that should be done on a per-project basis, and depends not only on the type of project you're working on and its unique requirements, but also the size and ability of the team you're working with.