On your very first program, which construct hooked you on programming? - language-agnostic

For me it was the If statement. I was psyched; ever since then I've believed that computers are very intelligent, or that I can at least make them appear intelligent because of it.

For many of us who were introduced to computers in the late 70's or early 80's, the first program we saw looked like this:
10 PRINT "Commodore sucks! "
20 GOTO 10
("Commodore" could be replaced with "Apple", "Atari" or "TRS-80").
GOTO is awesome.

Answer #2 :)
The actual language construct that first really fascinated me was recursion. The problem:
Write a function named SumDigits which sums the digits of a number. Example:
SumDigits(1234) -> 10
At first I wrote a very long iterative solution. But after a while I came up with this answer.
int SumDigits(int value) {
    if (value >= 10) {
        return SumDigits(value / 10) + (value % 10);
    }
    return value;
}
The succinctness of the answer amazed me and I instantly found a new love in recursion and terse programming.
It only took a couple of weeks though to learn the evils of recursion :)

Making the computer obey me. Awesome.
I also love (love to hate) that the computer will obey even when I'm wrong.
But seriously folks.
I was hooked when:
I saw that you can do rich and dynamic things with code.
That the machine is generally consistent.
That programming is like math in the sense that for all the "it depends" out there, we still have more than our fair share of questions with actual, provable answers.
That I could automate menial tasks with logic and loops.

Started out in QBasic, so, I think it was something along:
INPUT "What's your name?", a$
PRINT "Hi, "; a$; "!"
Being able to show something on the screen with PRINT was enough to get me excited about programming. Interactivity using INPUT was icing on the cake!

When I first started learning programming with QBASIC the whole idea of flow control using if statements and loops was great. I think it was just a few days after I learned about the if statement that I built my first "Choose your own adventure" game. Looking back I know it must have been horribly inefficient and massive in terms of lines of code, but the fact that I could branch the story using nothing but if statements was wonderful.

Pointers.
When I started programming in Turbo Pascal, I really couldn't understand how the heck they wrote big programs. Memory was limited, and whatever I was trying to do, I would often hit the stack limits.
When I was finally introduced to pointers I was hooked, because I could finally start making something big! Not that I made anything big, but... :)

The first Fortran code I wrote had an if statement in it, and was one of the most fascinating things I had seen at the time. Something like this one
      integer n
      n = 1
 20   if (n .le. 100) then
         n = 2*n
         write (*,*) n
         goto 20
      endif

For me, it wasn't a specific syntax - it was finding out I could break out and change ZX Spectrum games - I guess discovering source code is what got me hooked.
Then, when I actually started programming and was copying code from books, it was being able to customise what was in the book and still have the program work, but how I wanted it.

? "HELLO WORLD"

From my first BASIC book (BASIC for kids I believe):
10 INPUT A
20 INPUT B
30 LET C = A + B
40 PRINT C
That was on the cover of the book, and I was thrilled with the possibilities (it could do my homework!)... It took me like a couple of weeks to grasp that simple concept, but once I got it, the world of programming opened up for me.

I really fell in love with programming languages once I discovered comments. You can do all kinds of stuff with them, like
// a comment!
int /* whoa, an inline comment! */ a;
;)

For Loops + Arrays did it for me. Once I realized that I could loop through an arbitrarily large set of things and do something to each of them, it all started to come together.
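In Python terms (the list and the operation here are just illustrative), that moment looks roughly like this:

numbers = [3, 1, 4, 1, 5, 9, 2, 6]   # could be eight items or eight million
total = 0
for n in numbers:                     # visit every element, however many there are
    total += n
print("sum:", total)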

Most definitely the FOR loop - at 6 years old, having that lil' LOGO Turtle go from drawing a line to drawing a square to drawing a circle was all it took to turn me from a user into a programmer.

Clear case, the loop. Specifically looping in Gwbasic. Just for fun I wanted to print all the numbers from 0 to 100 on the screen. So I started:
10 print 1
20 print 2
30 print 3
40 print 4
At some point I thought WTF or something similar. There must be a better way. So I asked someone (perhaps a teacher or some fellow pupil) who then introduced me to the magics of variables and the loop. That must have been the moment I got hooked.
10 LET I = 0
20 IF I > 100 THEN GOTO 60
30 PRINT I
40 LET I = I + 1
50 GOTO 20
60 REM END LOOP
That's 6 lines instead of 100!!! That was just so much better. Sorry if I didn't get the syntax exactly right. :-) I am really fond of that moment.

Lambda expressions
Been programming nearly my whole life, but even now I find excitement in new things like lambda expressions.
MyList.Any(p => p.IsCurrent)
Hmm.. love it.

When I was first learning, I think arrays for sure. Before that I really just thought in terms of single variables. When I learned about arrays, I was able to do a lot lot more with my qbasic :).
Now I have all sorts of data structures available, not just arrays :)

The for or while loop, since I could use it in molecular dynamics simulations and I wouldn't have to do the computations one at a time by hand.

For me it was Pointers.
Whilst I won't even pretend to fully understand pointers, it was the first time that I really got stuck in programming. After messing around with Visual Basic 6 and server-side scripting languages I moved onto C, picked up the bare basics, then met pointers.
During that lecture I can remember people's reactions to pointers. Self-proclaimed programming gods cowered in fear; those that knew little started reading the job ads in the local papers. I actually remember one girl in my class saying "whaaa?"
Many programmers get a wake-up call when they hit pointers. The programming world has just introduced you to the world of Computing, and if you've only really just met, you'll find very little to talk about.

I remember the first computer problem that truly fascinated me. It was a problem I got in my high school programming class.
Write a program that will read a number from the console
Why did that fascinate me? Well it didn't at first. I wrote a quick and dirty program that did the deed but then the teacher did something quite unexpected. She entered "a" into the program and it died a horrible death.
It took me the rest of the day to deal with evil input but eventually I got it working. This process completely hooked me on the idea of program correctness.

IF-GOTO.
No joke. I was hooked long before I got to a language that had pointers.
I had already written pointer-like algorithms using array indexes by that point.

Function pointers in C.
First I learned C, but not function pointers.
Then I began programming in assembler for about a semester in the university. I used function pointers then without even knowing about them. Somehow they seem like a natural thing in assembler for me. I had to explain them to my teachers a few times, because they didn't understand them :).
Then, I came back to C, and revelation hit me.
Now I laugh at Java methods, generics, late polymorphism and other such "magic" things.

PRINT 3 + 4
The Computer said 7.
And all this was visible on my parents' TV. ON A TV!!! Can you imagine that? I was immediately hooked. You could tell a machine what to do and you could see it on a TV. Wow.
Background: My dad's employer bought a small computer (Robotron KC 85/1) for the engineers to get in touch with this technology. They could take it home to play with it at the weekend. So my dad brought it home, and all he knew about it was how to connect it and how to turn it on. One of his colleagues told him the trick with the "PRINT" command and the addition of numbers. He showed me and I was hooked. This was about 1986, when I was 12. I am still hooked, for basically the same reason (telling a machine what to do).

QBasic. When I was around 10-12 I started making a copy of that "The doctor asks you weird questions" game, which ended up as a lot of nasty questions. That's when I knew I wanted to work with computers.
A couple of years later I started playing with HTML and PHP, which got me a couple of customers, and I started a company. That's when I knew I could get bloody rich on this :)

I've only been programming for about a year, so for me, it was LINQ. I looked at code in books that exposed the query string to the DB and thought "wow, that's kind of lame." Then I met LINQ and we have been happily married ever since.

When dBase II allowed me to print records to a text file AND stick exactly the right typesetting codes pretty much where I wanted -- before them, inside them, behind them, around them. I can still feel the chill on the back of my neck. WOW, a whole typesetting system right here in my tiny computer! Warm up the 300 baud modem, send this file straight to an 8-inch disk and run it through a Compugraphic. Yards of beautifully shiny photo-type paper with all the letters in the right place. I do not deserve this happiness ...

Mine was GOTO in BASIC. I thought it was incredible that you could make the program jump wherever you wanted it to. Only later did I realize that with great power comes great responsibility.

What really made me hooked on programming was the following lines of x86 assembler:
mov ax, 13h      ; ask for VGA mode 13h (320x200, 256 colours)
int 10h          ; BIOS video interrupt switches the mode
mov ax, 0a000h   ; segment where the video memory lives
mov es, ax       ; point ES at it, so writes through ES land on screen
When I got how easy it was to draw things by messing with the video memory, all the other stuff I've learned was suddenly useful for something.

The word itself inspired me:
BASIC - Beginners' All-purpose Symbolic Instruction Code

I found ++ to be fascinating in high school.
Everyone else knew Basic, Pascal, etc. But i++ was code! And code that only I knew!
And such tiny changes could have very important effects:
++i is different in many important ways from i++.
And ++ translates so directly all the way down to machine code. So by learning this, I had direct access to the CORE of the MACHINE. Imagine the power!
Learning about this made me want to learn all the other strange operations and corners of the C language.

TRS-80 graphics
POKE 15360, 191


I need some graph theory (something I'm notoriously bad at)

Okay, let me start off with the basics: where I'm coming from -- this will actually help you answer my question.
I'm working on a complex parser with a team, from the ground up. Unfortunately everyone is bad at graph theory, so they've tasked me with solving the problem (UCK).
Basically, we'd like to make the automation code a little more efficient. Right now, it works, but it's bad at looking ahead, and we don't have the hardware to facilitate our new code. We don't really NEED any graph theory, fortunately, but with current hardware, a workaround is necessary.
So my question is this (I'm going to reveal a little of our completely ... source--forgive me, it's in the uh... NEW PROGRAMMING LANGUAGE we're developing).
Our method, argument, and command goes something like this in our current architecture (I am going to leave out the calls to load libraries and stuff):
:Omnific(args_simplemachines)
{
tract(omni-presence++, gravity.sucks_LET'S.GO(0<1++square.root[helium.theory]**24-0.9::args_if.then)
{
if
{
:TRANSLATE_SQUARE.ROOT(0.2351zzx00--0.99::statement: then reduce the number of clocks by two levels;;CPU.LEVEL_LEVEL.THREE,ARMAGEDDON_EMERGENCY.IGNITE,ARMAGEDDON.PRERARE.TO_look.ahead.in.tandem) ;;comment: it's a complex formula that only makes sense when you've studied the entire source
;;comment: for the sake of brevity, I am going to leave out a chun
}
then ;;comment: and here's the part that WOULD work with new hardware to handle the arguments, but it makes absolutely no fucking sense to the current CPUs, and we can't translate it
{
:EMERGENCY_RUSH.UNDERSTANDING(CLEAR_ALL.RAM::JSON.TRANSLATE_.;;formulate.dilemma.signify_quick.rush--statement: see clearly now or else all the hardware is destroyed haha eat shit !)
}
}
;;comment: we have a simple garbage.cleanup here.
}
Now before you all destroy yourselves with vodka over the literal insanity I just wrote, I want to make it simple for you. This method simply specifies, in simple English: run the command "Omnific," then when the processor is cleared of tasks, run the arguments specified, which is a simple JSON translator for the faint of heart (because sometimes we have to simplify things with a couple lines of JSON... bleh), then it reads the JSON in tandem with the Omnia code, compares the Omnia environment we coded recently, checks the laws and "givens," ignites the "ARMAGEDDON" core safely, and basically destroys itself... not literally.
HAHA, oh God, I am going to get banned so hard. But seriously, where we're stumped is we know there's a simple set of lines of code in like C or something (we also had to write a C translator) that is kind of a complex array--it runs through the bits in memory and spits out each bit's properties. Anyone have experience with this? My dad does, but he thinks I'm nuts and won't give me the formula.
Ciao! Don't forget to smile.

Toy projects for new programers

When I first started teaching myself programming, after finishing a tutorial I would feel like I still couldn't do anything in the language. So, I looked around to find something to work on. Since I had just learned a few of the basics, the amount of work involved in finding, reading and adding to an open source project seemed insurmountable. Instead I started on a couple of toy projects, which ended up being incredibly beneficial.
Having seen a lot of questions from beginners similar to "what should I do now?" and a lot of answers similar to "start working for an open source project" has made me think there has to be better advice for a new programmer. While working on an open source project surely gives great experience, there is a perceptible barrier to entry.
Instead, I think it would be great if new programmers were prodded towards working on a toy program related to some interest they have. Since there are so many directions that programming can take you, I think it would be interesting to list some simple (but fun/rewarding) projects grouped by the direction the new programmer is looking to pursue. Such as:
Game Design:
Write a text adventure (like Zork)
Natural Language Processing:
Create a program that writes meaningless, but grammatically valid essays.
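For the essay idea, a common starting point is expanding a small hand-written context-free grammar at random. A toy Python sketch (the grammar and vocabulary here are made up for illustration):

import random

# Each symbol maps to a list of possible expansions; anything not in the grammar is a word.
GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "N"], ["a", "N"]],
    "VP": [["V", "NP"], ["V"]],
    "N":  [["compiler"], ["pointer"], ["teacher"], ["loop"]],
    "V":  [["confuses"], ["inspires"], ["terminates"]],
}

def expand(symbol):
    # Terminals expand to themselves.
    if symbol not in GRAMMAR:
        return [symbol]
    words = []
    for part in random.choice(GRAMMAR[symbol]):
        words.extend(expand(part))
    return words

print(" ".join(expand("S")) + ".")   # e.g. "the compiler confuses a loop."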
I recently asked a similar question (Diverse resource of problems to show merits of different languages) and got links to sites that provide problem sets, as well as validation. Check out:
http://www.codechef.com/
https://www.spoj.pl/problems/classical/
http://wiki.python.org/moin/ProblemSets
http://projecteuler.net/
Although these problems don't often amount to projects, they are still interesting. I'm interested in seeing what people come up with here.
I actually think that a TopCoder approach might be better... programmers can still pick topics of interest, but they're actually working for a prize on a REAL project and they get feedback. Frankly speaking, TopCoder is a bit bloated and, as far as I can tell, they don't allow people to make free competitions. It would be great if there were a TopCoder/StackOverflow type of site: people could submit code, get voted on their implementation and just have a good time!
I'll even pitch my idea, I'm starting to work on my own version of TopCoder/StackOverflow hybrid monstrosity called MyDevArmy (although I have not done anything so far except buy the domain).
Write a program which renders Wolfram automata (esp. Rule 110).
See YelloSoft for example code.
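If you want a feel for it first, here is a minimal Python sketch of a Rule 110 renderer; the fixed width, wrap-around edges and text output are my own simplifications, not taken from the YelloSoft example:

RULE = 110                      # bit k of 110 is the next state for neighbourhood value k
WIDTH, STEPS = 64, 32

row = [0] * WIDTH
row[-1] = 1                     # start from a single live cell

for _ in range(STEPS):
    print("".join("#" if c else "." for c in row))
    row = [(RULE >> (row[(i - 1) % WIDTH] * 4 + row[i] * 2 + row[(i + 1) % WIDTH])) & 1
           for i in range(WIDTH)]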
Start by writing a Blackjack simulation. Choose whichever strategy you want for the first run.
Next, start adding additional runs for different strategies like hitting/standing when your hand's value is 15 vs. 16 vs. 17 vs. 18, and whether the hand is soft or hard (an ace's value being counted as 1 or 11). The dealer's strategy will be constant, as it really is in casinos.
By the end, your program will run, say, 1000 instances of each strategy combination. It will print out a summary of the rate of hand wins (percentage of times you beat the dealer) for each stand value and hard/soft combination.
This is easily one of my favorite projects I've done and it can really cement some techniques in the language of your choosing. Plus, if you have the initiative to start learning some of the (fairly simple) discrete math that's involved in coming up with the odds of these situations as a side project, you can come away with an even better experience. Who knows, maybe you could ditch this computer stuff and take up card counting?
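A stripped-down Python sketch of that simulation loop, with my own simplifications (aces always count 11, no splits or doubling, no soft-hand logic), just to show the shape of the project:

import random

def draw():
    # Card values: face cards count 10; aces are always 11 here for simplicity.
    return random.choice([2, 3, 4, 5, 6, 7, 8, 9, 10, 10, 10, 10, 11])

def player_wins(stand_at):
    player, dealer = draw() + draw(), draw() + draw()
    while player < stand_at:          # the strategy under test: hit until stand_at
        player += draw()
    while dealer < 17:                # the house strategy stays constant
        dealer += draw()
    if player > 21:
        return False
    return dealer > 21 or player > dealer

RUNS = 1000
for stand_at in (15, 16, 17, 18):
    wins = sum(player_wins(stand_at) for _ in range(RUNS))
    print(f"stand at {stand_at}: won {100 * wins / RUNS:.1f}% of {RUNS} hands")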

How long should it take for someone to be able to type code from memory? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 9 years ago.
I understand that this question could be answered with a simple sentence and that it may be viewed as subjective; however, I am a young student who is interested in pursuing a career in programming, and I wondered how long it took some of you to get to the level of experience you are at now.
I ask this because I am currently working on building an application in Java on the Android platform and it bothers me that I am constantly having to look up how to write a certain section of code in my application such as writing to a database, or how the if statement should be structured.
My question really is, how long did it take for you to become experienced enough to actually know exactly how your next line of code was going to look, before you even wrote it?
The speed at which you can quickly recall language syntax, common library functions, and best practice patterns is directly proportional to the amount of time you spend using them.
In other words you will find yourself getting faster the more you do it.
I have been a C++ programmer for the last 20 years. It has taken me that long to get to this expertise level. I'm mostly a Windows programmer, and I keep the msdn website up on one of my monitors all the time.
Doesn't matter how long you've been doing it. You will never know everything from memory. Don't sweat it.
I've been programming for almost half of my life and I still can't always recall simple syntax, let alone entire tracts of code for more complex tasks. If you ask me, that's what reference books and Google are for.
A far more important skill to have is the general knowledge of programming in any language, i.e. recursion, looping, object oriented design, working with APIs, error handling etc... Once you have all that down, you can apply it to any language and platform.
I can tell you that after 25 years there are lines of code that I don't know exactly how they're going to look.
Want an example? I've been programming in Java since last century and I can honestly still make a mistake when typing hashcode() or hashCode().
Why? Because actually typing such a method name yourself is so last century. Your intention is to override Object's hashCode() method, so you use programming by intention.
You hit Ctrl-O then h and you get a list of the methods starting with an 'h' that you can override. Then you hit enter. As a bonus, the "@Override" gets inserted for you too.
4 keys. 4, to get this:
@Override
public int hashCode() {
}
And honestly, whether hashCode takes an uppercase 'C' or not... I couldn't care less. That is not what a hashcode is about, and my intention is not to know all the inconsistencies languages and API designers came up with. My intention is to override the method that gives back an object's hashcode, and my (modern) IDE allows me to get that skeleton in four keypresses, including hitting enter.
Another example: there are people who do really type this countless times a day:
for (int i = 0; i < ; i++) {
}
or the more tricky:
for (int i = ; i >= 0; i--) {
}
Note that in the latter case I can still mess up and type "i++" instead of "i--" (a 'thinko', as it's called).
But I don't care at all, because I type "fi<tab>" (three keys) and I get the first one, or "fir<tab>" (four keys, "for i (in) reverse") and I get the second one. You ain't beating that (especially since I'm a touch typist, so I type those three or four keys fast). In addition to speed, as an added bonus the autocompletion won't mess up your "i--".
In many cases I don't know exactly the line I'll get: sure, I know it "more or less", and that's exactly the way it should be.
Don't sweat it too much. As others have mentioned it eventually gets easier to write code without looking things up so much as you work with a particular language over time.
HOWEVER
There are a few reasons that even veteran programmers find themselves constantly using reference material:
(1) Unlike days of yore, most projects now require you to use a number of languages to get the job done. For example, a single web-site project may require C#, XML, JavaScript, SQL, HTML, XHTML, RegEx and CSS all in the same project. Switching between some of these languages can really throw you for a loop sometimes, because many of them are just similar enough to be familiar, but just different enough to make you forget the subtle differences in syntax.
(2) Just when you start getting comfortable that you know a language inside and out, the vendor will release a new update that changes everything you knew about it. For example ASP->ASP.NET.
I still look up simple things fairly frequently and I've been at this almost 20 years. The important thing is that you understand the underlying concepts and principles.
It took me 12 years of professional programming to get where I am today. You will always keep improving when working with a programming language, even if you have been working with it for the past 5 years.
About your question, it depends. I think that you should be comfortable with the syntax after a week, comfortable with the main libraries after a month, and comfortable with the platform after 6 months.
But when you get there, don't stop!
If you code every day with the same language, you'll probably have the common language elements and patterns memorized in a month or two. But there are plenty of things that you'll probably never memorize, simply because you don't use them often enough, and also because modern IDE's can help you so much that you don't really need to remember everything if you utilize all their features (like code snippets, shortcuts, intellisense).
I've been coding for 15 years, and doing C# for the last three, and I still use the MSDN reference material every day. However, as far as the basic building blocks of the language are concerned, I had them memorized in the first month or two.
Also, the more often you code, the better you'll commit it to memory.
There's a false assumption going on here I think...
At my job I end up working all over the stack in different languages and platforms. If I'm away from a project for 6 months I end up having to look at code to get even basic things done. The advantage of experience is reducing the re-ramp up time on productivity though.
So, instead of it taking weeks or months to get back to a point where I can write 80% of the code from memory it takes a few or several days (if that sometimes). I've been programming for about 5 years now. I'm just now getting to the point of being able to visualize small applications in their entirety.
As long as you're working on solving a problem that you haven't already solved (numerous times) you'll probably always have to look up code.
If it can be done, I'm guessing it takes longer than 5 years for most people, unless they work in one language, with one editor, and in one area (e.g. C#, Visual Studio, file system operations). My company isn't big enough to employ someone that specific.
Don't be downhearted by having to consult documentation all the time man, that's what it's there for. Over time you get used to syntax and things like that but don't sweat it if you can't remember library methods or ways to connect to a DB.
Over time (with experience) you might remember these off the top of your head but in reality there's nothing wrong with taking a quick consultation of the documentation to refresh the memory.
Also remember that technology is ever changing so it's good to keep the mind fresh with new ways of thinking/ways to do things.
The question is not 100% clear. One of the best programmers I know doesn't remember anything and needs to look up printf formatting strings almost daily. On the other hand, if you are having a hard time figuring out how to write that for (int i = 0; i < len; i++) loop after doing this for 6 months -- that doesn't sound right.
The idea that one could memorize every bit of code from even a single language and then type it plainly from memory is pretty far out. The amount of pre-defined functions for, say, PHP or Java alone is immense.
That being said, it's important to learn the programming structures and know them the best you can. Structures like foreach, if then else, switch, etc. are really the things that need to be integrated thoroughly. Also, conceptual things like Object Orientation (not just "using" objects like mysqli, but understanding things like controlling code, client code, bottom-up and top-down architecture) are the real things that make good programmers great. For myself, I know that I don't have the capacity to learn every defined function that's provided by language writers, so I instead learn whether or not something can be done (and of course still try on occasion to do things that can't be done, lol). If you know that, then it's a matter of Google and books to find the "specific" mechanisms for how.
Cheers, my friend.
Not an answer per se, but I just wanted to say that as a novice myself, this Q&A is one of the most useful things I've read on SO. It seems that yes, experts can probably code the basic stuff from memory, but even they go back to the book for complex problems, and for beginners, that's what the book is for.
I feel like I'm just beginning
You should use code snippets if you are using a certain piece of code repeatedly. It doesn't bother me if I cannot recall some piece of code from memory.
For me, it depends heavily on what I'm writing. For example, I doubt that most people ever quite memorize all the parameters to some Windows functions. I may know that I need to call CreateFile on the next line, but don't know all the parameters in order until they're typed in (with help from Intellisense, and sometimes MSDN).
With something that's doing simple computation, I'm a lot more likely to be limited by my typing speed (but I'm a fairly poor typist, so thinking faster than I can type doesn't take much).
That means it's really a question of how much of the time you need to refer to something to type the next line of code -- at first, it's a pretty small percentage, and over time it grows. I doubt it ever gets to 100% for anybody though. I, for one, don't think I'd want it to -- that would indicate I wasn't working on new and different things...
If you use eclipse and java, you may find yourself already there.
Other combinations may be a little slower to a lot slower.
Java has the advantage that it's pretty easy for the environment to build an entire parse tree while you are typing. At any place in your code, typing ctrl-space can give you an entire list of valid options.
Also syntax errors are always underlined.
If you want to go hard core though: I started in C before the days of decent editors, and it took me about a year before I could type in more than a few lines and not get a compile error.
I don't know about memorization. Repetition is the mother of all learning, and that applies to all aspects of life. Look at the experienced accountant vs. the novice when filing taxes: who looks up stuff more? But what I did discover recently is that I navigate documentation much quicker and have a sense for going directly to where my question is answered. I've got a sixth sense - I can see the code! Seriously, it all comes with experience. Still, when learning something completely new, there's no shame in looking up how to do certain things. That's what separates humans, right - learning from others. The more you work on something, the better you become.
There is absolutely nothing wrong with having to lookup the documentation every time you want to write some code. I got lucky in that once I use a certain function, I don't forget it very easily. However, most of the time, especially when I'm coding in a language that I haven't used in a while,
I start out by writing a flowchart of the algorithm that I want to code - just the pure logic. The most important reason for doing this is so that I don't lose my train of thought and forget the algorithm that solves my problem in the midst of technical problems like syntax and < what library functions exist? >
Then I look at the documentation and check to see if there are simple function calls that will help me accomplish each task in my algorithm
If such functions do not exist, then I either modify my algorithm to accommodate what functions the language does provide, or write helper functions to fill in the gaps.
I only start coding NOW, which is not too difficult to do anymore, because I already have all the relevant functions written down. So now it's just a question of translating pure logic into syntactically accurate code.
Proper syntax usually does not elude me, but if that does happen (VERY rare), Google provides very nice code snippets if you ask nicely.
Hope this helps
May the Force be with you
I've been programming in Java for about 5 years now, and I've never had any trouble remembering syntax. I can <brag>write almost all the java.util.*, java.io.*, java.lang.* and javax.swing.* stuff from memory</brag>, but does it help me? Not very much. It doesn't make me a better programmer than someone who can't!
I'm using Netbeans, which greatly helps working with libraries. Also, the documentation, just in the place where you need it. Sometimes, it's quite unnecessary, but sometimes you'd wish the "auto complete" screen would popup faster!
The best thing as a student is to concentrate on what you are doing, not how fast you are doing it. Looking things up isn't bad, as it'll help your so-called "unconscious mind" process what you are really trying to accomplish. Having such breaks, e.g. by looking up a certain piece of documentation or a syntax reference, may even make you better at programming (no proof for this, though).
Question is quite subjective.
With the great many IDE's available and the "newbie" tutorials on getting started, it won't take you long before you're off writing your own apps.
That said, unless you have a "thirst" for how stuff works kinda attitude all the time towards everything, you won't go far. In this field, you really have to have a passion for what you do to be great.
... It bothers me that I am continually having to look things up... How long did it take you to get to the level where you are now?
For me, and for most of the students I teach, the answer depends on two variables:
How many lines of code have I written?
Do I use the language or library every day?
(Reading other people's code is very helpful for learning a language and learning how to think in a language, but for me at least, it hasn't helped me become a fluent writer of programs in a language. Only writing code does that.) So my first comment is that time should be measured in lines of code written, not hours or years.
(Ray Bradbury once advised aspiring writers of fiction to write a thousand words a day six days a week, and after they've written a million words they might start to know something about their craft. This is good advice for programmers too.)
As for my own experience, across a half dozen languages that I currently know well or once knew well, it's been pretty consistent for me that
After writing 100 lines I am continually looking things up in the manual and don't really know what I am doing.
After writing 1,000 lines I use the manual occasionally and am starting to learn how to think in the language.
After writing 10,000 lines I am about as good as I'm going to get without making special efforts.
After writing 25,000 lines I probably will not need the manual again.
It's also true that
To learn to write 100 lines I had to read 100 to 1,000 lines that someone else wrote.
For the first 1,000 lines I write it is good for me to read 2,000 lines someone else wrote. For the next 1,000 lines it is good for me to read 1,000 lines someone else wrote.
After I've written 5,000 lines I learn the most by reading code written by world experts or by people who designed the language and understand what is there. I no longer have much to learn by reading just any program.
On the other hand, my experience about when I stop having to refer continually to the documentation is much less consistent.
I find it especially hard when two languages are very similar; I will never stop needing the ksh manual to tell me what is different from sh or bash, and I will never stop needing the Haskell manual to tell me what is different from Standard ML (though the need grows less with each additional 1,000 lines of Haskell that I write). I also find it interesting that while I have written over 35,000 lines of Lua code, and I will never need the manual again for a language question, I have to look up libraries and API functions almost every time I write something longer than 500 lines. (I've written a lot of short Lua programs and a couple of long ones, and I don't use the language every day, although I definitely use it at least several days each month.)
As for the unspoken question, when are you personally going to get better?, take some advice from Watts Humphrey: measure your own performance and track it over time. I think if each day you count the number of times you had to stop and look things up, and graph that against number of lines of code written or edited (which you can get from source-code control), you will be pleasantly surprised by objective evidence of improvement. And I think once you have such evidence, you will be able to focus more on continuing to improve, and not so much on where you are now or where you hope to be in a year.
It's true that after some years of programming you'll be able to remember a lot of thing without having to check the "manual". For me this is not an important milestone in your programming life though, the really important moment is when you reach the point where you don't know how to do something... but you're sure that can be done and you know where and how to research about it :-)
You made a very important step participating on this site. Exchanging ideas and helping each other it's a excellent way of learning.
Author Malcolm Gladwell believes that ten thousand hours is a good benchmark for the amount of practice required to become world class in many fields of endeavour. I think that sort of number applies to programming as well. This isn't quite what you asked, though; being able to code competently certainly requires familiarity with your environment (language constructs, system libraries, third party libraries and perhaps something of the concepts underpinning them), but there are many soft skills involved which are harder to describe and can only really be acquired through practice.
As others have said, being good at programming is not about typing code from memory; it's about recognising patterns, understanding systems, solving problems. It's about choosing the right tools for the job (languages, libraries, algorithms, whatever) and being able to make proper use of them.
In all the jobs I've had, it's about adaptability and flexibility; you might have to learn a new language or pick up somebody else's poorly documented code tomorrow, and a good programmer will be able to take this in their stride.
I've been coding professionally for nearly 10 years now; there's all sorts of code I use semi-regularly which I look up the options for at least some of the time. There are too many commands with too many options in too many languages for me to remember each and every last one in detail and Google is quite good enough at getting the information I want.
That said - there are some bits of routine code which I use all the time but can count on one hand the number of times I've written - the exact syntax for populating a dataset in .Net for example. One of the skills I've most come to value over this time and which saves me the most time is spotting when some code can be quickly and easily moved into utility libraries. If it's fiddly but routine, consider this approach to save yourself hassle and improve your overall code quality.
In the context of this question (Java, C++, JavaScript), the languages are still evolving. I can't say about other languages.
The language standard/specification changes over time
Libraries are added to supplement the language constructs, e.g. Boost, Google Collections, Apache Commons, jQuery
Applications will rarely be bound to a single aspect of a language
Across organizations/projects, coding standards change
A project I worked on recommended against using primitives
When unfamiliar with the constructs used, I put in pseudo-code flagged with //TODO first, then go in and find the actual API to use.
IMO, the answer to your question is - there is no definite answer.
As a Java programmer the sheer size of the runtime library makes it impossible to remember everything. Swing is big, there is an XSLT engine (which contains TWO languages), the Concurrent support evolves and grows.
The direct access to the Javadoc API from within Eclipse combined with code completion makes it possible to find the information you cannot remember but you know is there, quickly and efficiently.
I have found the javaalmanac.com (which has been reworked into the more convoluted http://www.exampledepot.com/egs/index.html) invaluable in presenting short, concise and above all correct programs doing just one small thing. Strongly recommended.
Most weeks I program in Java, C#, Python, PHP, JavaScript, SQL, Smarty, Django Templating and occasionally C++ and Objective C. I'm a student so this is partially school work and partially my part-time job. Instead of learning syntax I've learned what concepts to look for.
Seeing patterns and concepts is key, once you know the concepts and what to look for the syntax is secondary.
I find that even when I am just being exposed to a language I can accomplish a lot just by looking for 'what ought to be there'
Why should you be able to remember all of this stuff? Personally I embrace the fact that I can't remember all of this stuff and simply try and remember where to find the information that I need. I find that much more useful. This takes the form of blogging, taking notes, keeping large amounts of 'sample' code and reusable libraries and writing about code that I find useful and interesting; oh and lots of books, some of which I hardly ever 'need' and some of which I hardly ever really read but they've been skimmed and I know where they live.
Technologies come and go and there's just no way I could have kept all of the things that I may one day find useful in my head; so I page them out and just keep the index in memory... For example; 9 years ago I was doing some stuff with Java and CORBA and whatever. There's no way that I could drop back into that now without the notes that I wrote up for my website back when I was doing it: http://www.lenholgate.com/blog/2001/02/corba---enumeration.html. Likewise I have code that I use on a daily basis that has been kicking around since 1997 or earlier. I don't remember how to type it in, I have it in a file with tests (if I'm lucky) and docs (if I'm even luckier).
Whilst I realise that most of what I'm talking about is 'big stuff' I also often have to go and look at some of my old code to simply work out how to structure a typedef...
Of course the day to day stuff will come with time and practice; but you need to work out that you have to page some of it out in a form that you can reload later very quickly. Embrace the fact that your memory is never going to be able to hold it all and outsource it :)

Most harmful misconception of beginners about programming? [closed]

As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance.
Closed 9 years ago.
Possible Duplicate:
What is your longest-held programming assumption that turned out to be incorrect?
What do you consider to be the most harmful misconception about programming from people who are new to programming that you have seen?
Re-inventing standard library functions/classes.
After going through a language book/tutorial, most beginners - knowing how to handle strings and numbers - will invent their own date functions, their own 'compression algorithms', their own SORT implementations.
Oh, and they always spend their first day searching for clrscr();.
That because their program compiles and runs it does what they expect it to do.
That if their code doesn't compile or work, it is because of a bug in the compiler.
Maybe not the most harmful, but they usually can't estimate how long stuff will take to get done; they think it can be done much faster than it really can (myself included).
As for harmful stuff, good companies usually keep beginners away from where they can do much harm. They are usually encouraged to work with someone more experienced, so they can learn better.
That if their program works on their own computer, then it will work on everybody else's computer too.
"But it works on my machine!"
That programming is all about the syntax. Turns out it is all about problem solving.
That the user is a programmer.
Thinking if it doesn't look horribly complicated it must be wrong or "bad" code.
I must admit years ago in school I was guilty of thinking my programs didn't look complicated enough! These days I want to cry if something doesn't turn out as simple as:
//start
if(something)
{
do_stuff();
}
//go home
:P
Programming is easy: Programming is a lot of fun but don't ever think of it as being easy. It takes a lot of experience, learning, and failure to get better at it and be humble about it.
Tools do it for me so I don't need to learn what happens underneath the covers: Tools make things a lot easier and allow you to get things done quicker. However, you still need to know and get familiar with what's happening underneath the covers because sooner or later you will need to pop open the hood.
Lack of curiosity
It's all about the newest and the coolest technologies: Not necessarily. It is about what's right for the customer and the problem you're trying to solve.
"The problem is not in my program, it's a bug in the library / OS / language."
"It worked on my machine! What is wrong with yours?"
"Everything is a pattern, you just have to find them."
"I don't need to test because I only made a one line change."
"Source control is a waste of time for this project."
The real problem I've seen with programming tyros is "programming is magic", meaning not truly grokking that the computer will operate exactly logically, and will do exactly the same thing every time given the exact same input.
They write something that they think should sort of do what they want, and then when it doesn't work, rather than approach the problem logically, they start changing things semi-randomly, hoping, apparently, to appease the gods of computer magic by their sheer tenacity or willingness to abase themselves upon the altar of whimsy. They feel that the computer is capricious and changes things randomly, and the best they can hope for is to get things to a vague approximation of working, and hope the stars stay aligned for long periods.
Of course, even to experienced programmers, it can feel that way sometimes, but there is an inherent knowledge that what is happening is happening for a specific reason, and you just have to dig down to get to that reason.
That their program will work.
If the previous hurdle is overcome miraculously, that their program will work as expected by the end user
If the previous hurdle is again overcome miraculously, that their program will stand the test of time, i.e that it will be maintainable
If all of the previous hurdles are again overcome miraculously, that their second system will be as good or better
That you have to have design patterns in your code.
That their solution is the One and Only True Way To Solve The Problem, and everyone else is just dumb and wrong.
most harmful misconception (financial version):
"That a college education is required to know or have understanding about how to write software."
"I am going to make a ton of money by playing with computers!"
Edit: Another one that drives me nuts:
"The other guy's code isn't calling mine correctly, so it's not my fault the system doesn't work." -- with no proactive investigation, diagnosis, suggested patch, nothing. As a manager or a team leader, this really gets under my skin.
The worse misconception I've encountered, and the hardest to be rid of, is that programming is writing code, and not reading it.
The most harmful misconception is: You are done when you get the code to work.
That you have to use every feature of the language you are learning, inheritance above all.
Update: also, being obsessive about inline assembly code in C.
That cool == usable.
Disabusing them of the notion that "perfect but very late" is better than "acceptable and on time".
No one is going to care if some weekly report runs in 5 seconds rather than 8 if it is two months late.
It has something to do with computers.
That their code doesn't need to be documented. They're the only ones who will ever look at it, right?
The most common misconception is that you can write an application by starting your favorite IDE/editor and then write code immediately.
Yes, it will create an application. Yes, it's probably cr#p too when you're finished...
You start developing software by first creating a design. Preferably with pen and paper or with some useful tools on your computer. Writing the actual code just happens to be a small part of the whole process. (If not, you're doing something wrong!)
The most harmful misconception is to assume that people in software industry know what they're doing. Beginners tend to trust everything written in product's documentation, they trust error messages and exception descriptions. They even trust stuff posted on blogs.
That all there is to it is building cool new stuff everyday. Maintenance IS a part of programming!
That the hard part is typing in the code. The farther up you go, the more that comes to be the easy part.
Early on:
But isn't all the world an x86?
I have to pass a size with that buffer?
Error checking? Why?
The STL is too complicated. I'd rather implement everything myself.
(Use std::swap()! std::swap()! Start there, then branch out to more...)
Not knowing that you cannot treat binary buffers as strings without first null terminating them. (Think: read(), recv(), etc.)
Later on:
Wrongly thinking that...
That there are 8 bits in a byte.
That garbage collection will save you from resource management.
Endianness? Padding? I can't just write(), send(), etc. the whole struct?
Threads and deadlocks and race conditions oh my.
i18n? (2009, and we're still learning that the earth is round!)
I could have written this better. Time to rewrite. (Hint: refactor.)
Time related, wrongly thinking that:
That within a calendar year, DST starts before it ends.
That all timezones are + or - whole hours.
That the max UTC offset is + or - 12 hours.
That there are 60 seconds in a minute.
That 1900 is a leap year.
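The 1900 one catches people because the Gregorian rule has two exceptions stacked on top of each other; a quick Python check:

def is_leap(year):
    # Divisible by 4, except century years, except centuries divisible by 400.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

print(is_leap(1900))   # False: a century year not divisible by 400
print(is_leap(2000))   # True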
Wrongly thinking that:
16 bits is enough to hold a Unicode code point.
I can ignore FOSS libraries that will do 90% of the work for me.
That C, C++, Python, Lisp, C#, .NET, Java, VB6, Ruby, PHP, Bash, assembler is the perfect language for every task!
That the program has to be correct the first time.
Fail fast, early, and often. It's the only way to get better.
That they will "break" something!
Or, to define "newcomers" as those that don't do it, "It'll be easy to change! It's software!"
cheers,

What's your Modus Operandi for solving a (programming) problem? [closed]

As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance.
Closed 11 years ago.
While solving any programming problem, what is your modus operandi? How do you fix a problem?
Do you write everything you can about the observable behaviors of the bug or problem?
Take me through the mental checklist of actions you take.
(As they say - First, solve the problem. Then, write the code)
Step away from the computer and grab some paper and a pen or pencil if you prefer.
If I'm around the computer then I try to program a solution right then and there and it normally doesn't work right or it's just crap. Pen and paper force me to think a little more.
First, I go to one bicycle shop; or another.
Once I figure out nobody has invented that particular bicycle,
Figure out appropriate data structures for the domain and the problem, and then map needed algorithms for dealing with those data structures in ways you need.
Divide and conquer. Solve subsets of the problem
This algorithm has never failed me:
Take Action. Often just sitting there and being terrified or miffed by the problem will not help solve it. Also, often, no amounting of thinking will solve the problem. So you have to get your hands dirty and grapple with the problem head on.
Test. Under exactly what conditions, input values or states, does the problem occur? Make a mental model of why these particular conditions might cause the problem. Check similar conditions that don't cause the problem. Test enough so that you have a clear understanding of the problem.
Visualise. Put debug code in, dump variable contents, single step code whatever. Do anything that clarifies exactly what is going on where - within the problem conditions.
Simplify. Remove or comment code, poke values into variables, run particular functions with certain values. Try your hardest to get to the nub of the problem by cutting away the chaff or stuff that doesn't have a relevance to the problem at hand. Copy code into a separate project and run it, if you have to, to remove dependencies.
Accept. A great man said: "whatever remains, however improbable, must be the truth". In other words, after simplifying as much as you can, whatever is left must be the problem, no matter how bizarre it may seem at first.
Logic. Double, triple check the logic of the problem. Does it make sense? What would have to be true for it to make sense? Is there something you're missing? Is your understanding of the algorithm wrong? If all else fails, re-engineer the problem away.
As an adjunct to step 3, as a last resort, I often employ the binary search method of finding wayward code. Simply comment half the code and see if the problem disappears. If it does then it must be in that half (and vice versa). Half the remaining code and continue.
Google is great for searching for error messages and common problems. Somewhere, someone has usually encountered your problem before and found a solution.
Pencil and paper. Pseudo code and workflow diagrams.
Talk to other developers about it. It really helps when you have to force yourself to simplify the problem for someone else to understand. They may also have another angle. Sometimes it's hard to see the forest through the trees.
Go for a walk. Take your head out of the problem. Take a step back and try to see the bigger picture of what you want to achieve. Make sure the problem you are 'trying' to solve is the one you 'need' to solve.
A big whiteboard is great to work on. Use it to write out workflows and relationships. Talk through what is happening with another team member.
Move on. Do something else. Let your subconscious work on the problem. Allow the solution to come to you.
write down the problem
think very hard
write down the answer
I can't believe no one posted this already:
Write up your problem on StackOverflow, and let a bunch of other people solve it for you.
My method, something analytic-synthetic:
Calm down. Take a deep breath. Focus your attention on what you're going to solve. This may include going for a walk, cleaning the whiteboard, getting scratch paper and pencils ordered, some snacks, etc. Avoid stress.
High-level understanding of the problem. If it is a bug, when does it happen? In what circumstances? If it is a new task, try to diverge on what results are needed. Collect data and evidence, get acceptance descriptions, maybe documentation or a talk with someone who knows about the issue.
Set up the test playground. Try to feel happy with the tools needed. Use the data collected in the previous step to automate something, hopefully the bug if that's the case, some failing tests otherwise.
Start synthesizing, summarizing what you know, reflecting that in code. Execute once and once more. If you are not happy with the results, return to step two with renewed ideas, diverge more: maybe apply tools (in order of cost) that helped before, i.e. divide and conquer, debug, multithread, disassemble, profile, static analysis tools, metrics, etc. Stay in this loop until you can isolate the problem and pass the over-the-phone test.
Now it's time to fix it, but you have all the tools set up. It won't be so much trouble. Start writing code, apply refactoring, enjoy describing your solution in the docs.
Get someone to try your solution. She can eventually send you back to step 2, but that's OK. Refine your solution and redeploy.
I'm interpreting this as fixing a bug, not a design problem.
Isolate the problem. Does it always occur? Does it occur only the first time run on a set of new data? Does it occur with specific values, but not with others?
Is the system generating any error message that appear related to the problem? Verify that the error messages are not generated when the problem does not occur.
Has anything been changed recently? Those are likely places to start looking.
Identify the gap between what I know is working (e.g. I can start up the app and attempt to do a query) and what I know is not working (e.g. it gives me an error instead of the expected results). Find an intermediate point in the code where it seems possible to look for a problem (does this contain valid data at this point?). This allows me to isolate the problem on one side or the other of the point I looked.
Read the stack traces. If you have a stack trace, find the first line that mentions in-house code. The problem is not in your libraries. Maybe it will turn out to be, but just forget about that possibility at first. The error is in your code. It's not a bug in Java, it's not a bug in the Apache Commons HTTP client, it's in code written in your organization.
Think. Come up with something the system could be doing that can cause the symptoms you see. Find a way to validate whether that is what the system is doing.
No possibility the bug is in your code? Google for anything you can think of related. Maybe it is a bug in the library, or poor documentation leading you to use it wrong.
Logic.
Break the problem down, use your own brain and knowledge of each component of the system to determine exactly what is happening and why; then on the basis of this you will discover where the problem isn't, and hence determine where it must be.
I stop working on it until tomorrow. I usually solve my problem in the shower the next day. I find stepping away from the issue, and allowing my brain to clear, allows a fresh perspective on the issue.
Answer these three questions in this order:
Q1: What is the desired output?
I don't care if this is a napkin with scribble on it. I want something tangible that shows me what the end result is supposed to look like. If I don't get at least this far then I stop.
Q2: What is the input?
I find out what data I have coming in. Where this data is coming from. What formulas I may need. What dependencies there might be on A happening before B. What permissions, if any, are necessary to get this data. I then ask Question 3.
Q3: Is there enough input to create the output?
If the answer is No then I go back to Q2 and get more input from whoever can give it to me.
For very large problems I break them down in Phases and apply Q1 Q2 and Q3 to each phase.
To paraphrase Douglas Adams, programming is easy. You only need to stare at a blank screen until your forehead bleeds. For people who are squeamish about their foreheads, my ideal architect-and-build for the bigger problems would go something like this. (For smaller problems, like George Jempty, I can only recommend Feynman's Algorithm.)
What I write is couched in an on-site business setting, but there are analogues in open-source or distributed teams. And I can't pretend that every project, or even most, pans out this way. This is just the series of events that I dream about, and that occasionally comes to pass.
Get advance, concise warning of what the problem is likely to look like. This is not the full, final meeting, but an informal discussion. Uncertainty in certain specification details is fine, as long as the client (or manager) is honest. Then take a piece of paper or a text editor, and try to condense what you've learned down to five essential points, and then try to condense those into a single sentence. Make sure you can picture the core problem(s) to be solved without referring to any of your documentation.
Think about it for maybe a couple of hours, maybe playing with code and prototyping, but not with a view to the full architecture: you should even do other stuff, if you've time, or go for a walk. It's great if you can learn about a job an hour before home time in order to deliver a decision around midday the next day, so you get to sleep on it. Spend your time looking at potential libraries, frameworks, data standards. Try to tie together at least two languages or resources (say, Javascript on PHP-generated HTML; or get a Python stub talking RPC to a web service). Flesh out the core problems; zoom in on the details; zoom out to make sure the whole shape is still distinct and makes sense.
Send any questions to the client or manager well in advance of a meeting to discuss both the problem and your proposed solution. Invite as many stakeholders and your programming peers along as is convenient (and as your manager is happy with.) Explain the problem back to them, as you see it, then propose your solution. Explain as much as you can; pitch the technical details at your audience, but also let your explanations fill in more details in your own mental model.
Iterate on 2 and 3 until everyone is happy. Happiness is domain-specific. Your industry might require UML diagrams and line-item quotations, or it might be happy with something jotted on a whiteboard with an almost invisible drywipe marker. Make sure everyone has the same expectations of what you're about to build.
When your client or manager is happy for you to start, clear everything. Close Twitter, instant messenger, IRC and email for an hour or two. Start with the overall structure as you see it. Drop some of your prototype code in and see if it feels right. If it doesn't, change the structure as early as possible. But most of all make sure your colleagues give you a couple of hours of space. Try not to fight fires in this time. Begin with a good heart and cheer, and interest in the project. When you're bogged down later on you'll be glad of the clarity that came out of those first few hours.
How your programming proceeds from there depends on what it actually is, and what tasks the finished code needs to perform. And how you ultimately architect your code, and what external resources you use, will always be dictated by your experience, preference and domain knowledge. But give your project and its stakeholder team the most hopeful, most exciting and most engaged start you can.
Pencil, paper and a whiteboard. If you need more organization, use a tool like MindManager.
Andy Hunt's Pragmatic Thinking and Learning has a lot to say on this question.
Question: How do you eat an elephant?
Answer: One bite at a time.
One technique I like using for really big projects is to get into a room with a whiteboard and a pile of square Post-it Notes.
Write your tasks on the Post-it Notes then start sticking them on the whiteboard.
As you go, you can replace tasks that are too big with multiple notes.
You can shift notes around to change the order that the tasks happen in.
Use different colours to indicate different information; I sometimes use a different colour to indicate stuff that we need to do more research on.
This is a great technique for working with a team. Everybody can see the big picture and can contribute in a highly interactive way.
I think about it. I take anywhere from a couple minutes to a few weeks to mull over the problem and develop a general plan of attack.
Hammer out an initial solution. This solution is probably half-baked and one or more aspects may not work.
Refine that solution. Keep working on the problem until I have something that solves it.
(and this may be done at any step in the process) Ask questions on Stack Overflow to clear up any difficulties I'm having at the moment.
One of my ex-colleagues had a unique modus operandi. Whenever faced with a hard programming problem (e.g. the knapsack problem or some kind of non-standard optimization problem) he would get stoned on weed, claiming his ability to visualize complex state (such as that of a recursive function doing operations on a set passed on the stack) was greatly improved. The only difficulty: the next day he could not understand his own code. So eventually I showed him TDD and he quit smoking...
I write it on a piece of paper and start with my horrible class diagram or flowchart. Then I write it on sticky notes to break it down to "TO DO's".
1 sticky note = 1 task. 1 dumped sticky note = 1 finished task. This works really well for me so far.
Add the problem to StackOverflow, wait about 5-10 minutes and you usually have a brilliant solution! :)
The following applies to a bug rather than building a project from scratch (but even then it could do both if reworded a bit):
Context: What is the problem at hand? What is it preventing, doing wrong, or not doing?
Control: What variables (in the wide sense of the word) are involved? Can the problem be reproduced?
Hypothesise: With enough data on what is occurring or required, it is possible to hypothesise, that is, to draw a mental image of the problem in question.
Evaluate: How much effort, cost, etc, will the correction require? Determine if it's a show stopper or a minor irritant. At this point, it may be too early to tell, but even that is a form of evaluation. This will allow prioritisation.
Plan: How will the problem be approached? Does it require specifications? If so, do them first.
Execute: A.K.A. The fun part.
Test: A.K.A. The not-so-fun-part.
Repeat to satisfaction. Finally:
Feedback: how did it come to be this way? What led us here? Could this have been prevented, and if so, how?
EDIT:
Really summarised: stop, analyse, act.
Probably a gross oversimplification:
But really, this holds 100% true.
CONCEIVE
What are you without an idea? You may have a problem, but first you must define it more explicitly. You have a frozen pizza that you want to eat. You need to cook that pizza! In programming, this is usually your brainstorming session for coming up with a solution from the hip. Here you decide what your approach is.
PLAN
Well, of course you need to cook that pizza! But HOW! Will you use the oven? No. Too easy. You want to build a solar cooker, so you can eat that frozen pizza anywhere that the sun grants you power to do so. This is your design phase. This is your pencil and paper phase. This is where you start to form a cohesive, step-by-step method to implementation.
EXECUTE
Well, you are going to build a solar oven to cook your frozen pizza; you've decided. NOW DO IT. Write code. Test. Commit. Refactor. Commit.
Related question that may be useful:
Helpful points of view, concepts or ways to think about problems every newbie should know
Every problem I've ever had to solve on a computer has had something to do with solving a task in the real world. Therefore, I've learned to look at how I would accomplish something in the real world and map that to the computer problem.
Example:
I need to keep track of my students' grades and come up with a final grade that is an average of all the grades throughout the year.
Well, I'd save the grades in a log (database), and I'd have a page for every student (field StudentID), and so on...
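A minimal sketch of that mapping in Python, with made-up student IDs and grades, just to show the shape of the idea:
# One "page" per student (a dict keyed by StudentID); the final grade
# is the average of all grades logged through the year.
grade_log = {
    "S001": [88, 92, 79, 95],
    "S002": [70, 85, 90],
}

def final_grade(student_id: str) -> float:
    grades = grade_log[student_id]
    return sum(grades) / len(grades)

for student_id in grade_log:
    print(student_id, round(final_grade(student_id), 1))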
I always take a problem to a blog first. Stack Overflow would be a good place to start. Why waste your time reinventing the wheel when someone else may already have solved a similar problem in the past? If nothing else, you will get some good ideas for solving it yourself.
I use the scientific method:
Based on the available information about the programming problem I come up with a hypothesis about what the reason could be.
Then I design or think up an experiment that will reject or confirm the hypothesis. This could be observing something in a debugger or in screen/file output, or changing the program slightly.
If the hypothesis is rejected, repeat step 1. The information gathered in step 2 may help in coming up with a new hypothesis.
If the hypothesis is confirmed, it may be refined and become more specific (repeat step 1), or it may already be clear what the problem is.
This directed way of finding the problem is much more effective than changing things at random, observing what happens, and trying to (inappropriately) apply statistics. A minimal sketch of one such cycle follows.
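As a sketch of one hypothesis/experiment cycle in Python (the average_rating function and the empty-list hypothesis are invented for illustration):
# Hypothesis: "average_rating crashes because it is sometimes called with
# an empty list." The experiment below confirms or rejects that guess
# before anything in the program is changed.
def average_rating(ratings):
    return sum(ratings) / len(ratings)   # the suspect code, as-is

def experiment():
    try:
        average_rating([])               # reproduce the suspected input
    except ZeroDivisionError:
        return "hypothesis confirmed: empty input is the culprit"
    return "hypothesis rejected: look for another cause"

print(experiment())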
No one has mentioned truth tables! But that's probably because they're usually only mildly helpful ;) (although your mileage may vary). I used one yesterday for the first time in my 8 years of programming (a throwaway example follows below).
Diagramming on whiteboards or paper have always been very helpful for me.
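If you do want to try a truth table, a throwaway Python snippet is enough; this one just checks a suspected boolean equivalence (De Morgan's law) by enumerating every input combination:
from itertools import product

# Print a truth table comparing two expressions you suspect are equivalent.
print(" a     b     not(a and b)   (not a) or (not b)")
for a, b in product([False, True], repeat=2):
    left = not (a and b)
    right = (not a) or (not b)
    print(f"{a!s:<5} {b!s:<5} {left!s:<14} {right!s}")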
When faced with very weird bugs. Like this: JPA stops working after redeploy in glassfish
I start from scratch. Make a new project. Does it work? Yes. Start to recreate the components of my app one piece at a time. DB. Check. Deploy. Check. Continue until it breaks. If it never breaks, OK: you just recreated your entire app, so discard the old one. When it breaks, you've pinpointed the exact problem.
I think: what am I looking for?
What method best solves this problem?
Implement it with solid logic - no code.
Pseudo code (a small sketch of this step follows the list).
Code a rough cut.
Execute.
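As a small sketch of the pseudo-code-to-rough-cut step, with an invented task (finding the most frequent word in a text):
# Pseudo code first, kept as comments, then the rough cut right below it.
#   split the text into words
#   count how often each word appears
#   return the word with the highest count
from collections import Counter

def most_frequent_word(text: str) -> str:
    words = text.lower().split()
    counts = Counter(words)
    return counts.most_common(1)[0][0]

print(most_frequent_word("the quick brown fox jumps over the lazy dog"))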
These are my prioritized methods:
Analyse
a. Try to find the source of your problem
b. Define desired outcome
c. Brainstorm about solutions
Trial and error (if I don't want to analyse)
Google around a bit
a. Of course, look around on Stack Overflow
When you get mad, walk away from the PC for a cup of coffee
When you're still mad after 10 cups of coffee, sleep on it and think about the problem tomorrow
GOLDEN TIP
Never give up. Persistence will always win