I made an Arkanoid game and sometimes the ball lags. Is there a simple fix for this? I'm not sure what code to post because I have a lot of code.
There are many possible reasons for that... It's probably a framerate problem. You need to install or write a tool to check it.
If the framerate sometimes drops, you need to inspect your main game loop and find the reason.
It's impossible to help you more with so few details.
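If you need a quick way to measure the framerate yourself, here is a minimal sketch of an on-screen FPS counter. It is written in Python with pygame purely for illustration (the original game is presumably Flash/ActionScript, where the same idea applies): if the number dips at the moment the ball lags, the main loop is doing too much work on those frames.

    # Minimal FPS counter sketch (Python/pygame, illustrative only).
    import pygame

    pygame.init()
    screen = pygame.display.set_mode((640, 480))
    clock = pygame.time.Clock()
    font = pygame.font.SysFont(None, 24)

    running = True
    while running:
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                running = False

        screen.fill((0, 0, 0))
        # ... update and draw the ball, paddle, and bricks here ...

        fps_text = font.render(f"FPS: {clock.get_fps():.1f}", True, (255, 255, 0))
        screen.blit(fps_text, (10, 10))

        pygame.display.flip()
        clock.tick(60)  # cap at 60 FPS; get_fps() reports the measured rate

    pygame.quit()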
To find the cause of an unknown performance issue, use the free Adobe Scout tool.
Once you get it running, you should see a spike in the graph it produces at the moment the game lags. Inspecting this spike should reveal the cause of the lag, whether it be garbage collection, a very intensive method, or something else.
Here's a 'getting started' link:
http://www.adobe.com/devnet/scout/articles/adobe-scout-getting-started.html
What is the most useful strategy when your application throws an exception in the middle of the demo, in terms of keeping the client's mood still positive?
If your client doesn't trust you, there's nothing you can really do. I build trust with my clients so when something like this happens, they believe me when I give them an explanation. And when I tell them what I'm going to do to prevent future problems, I make sure to follow through.
Whether this is a "final" demo or a mid-project demo will also affect whether you can really alleviate the customers' concerns. There's very little you can do to make the customer happy if it's the end of the contract and there's no more budget for testing and bug fixes.
One generic strategy I've used: if you have someone in the room document the exception/problem in front of the customer and let them know it is going into the bug tracking system for investigation and testing, that will demonstrate due diligence and alleviate some concern. You, of course, need to follow through and make sure to fix the problem.
Just tell the truth, but humorously: say something like "apparently our software still isn't perfect, and we keep working to make it so."
Don't ever lie or try to cover it up. Clients are not dumb.
I'd say it depends on what stage you're in. If you're selling something, my experience is that it is always preferable to just bring static material for demonstration. Powerpoints are alright, but printed screenshots that the clients can toss around the table are unbeatable. It allows you to only show exactly what you want, in a very controlled manner, while still looking very professional.
If the client has already bought the project, and the demo is related to, say, a launch that's coming up, the best you can do is to smile and say "as you can see, we're still working out a few quirks"
I usually let the client know up front that I'm running off a live dev environment so we might see some weirdness. If I am aware of parts that have issues (inconsistent crashes..) I let the client know about them before I show that section (along with the fact that production won't do this and I am working on it already).
Update: based upon other responses, I agree that early demos with static material are better for generating discussion.
The last time I presented a project to a conference, I planned a live demo, but I actually had a set of slides headed "If you can see this, the live demo didn't work!" with big screen shots. Inevitably the live demo didn't work (it needed a globally routable IP address and this wasn't available) and the slides were called for.
Personally, I think it depends on your relationship with the client and how satisfied they are with your product thus far.
I usually tell them it was bad data from testing or jokingly say that was a test to see how the application handles errors (if you have a generic, catch-all error message/page). You can also use the time to reiterate the importance of user testing/acceptance.
I've never done a demo where I didn't set expectations beforehand. If this is a work in progress, make sure they know that up front. This goes a long way towards keeping them positive. EVERYBODY has issues during demos. Just smile, tell the truth and continue.
Probably the best way to help keep your client positive is to project positivity and confidence yourself. If you seem unsure of yourself it'll show as being unsure of your app.
Possible strategies:
Telling the audience "we're all going to stay here until whoever did that owns up".
Complaining about the prevalence of evil monkeys.
Seriously, I'm majorly against public product demos. Don't attempt demos until you're 99.99999% sure the damn stuff works. And even then it's a bad idea, you're setting yourself up to fail with all the variables that can go wrong - which may not even be with your software. If you must do it, attempting to impress customers with a flashy UI is best done one-on-one. That's the way we've always done it.
Is this shipping product or still under development?
Assuming it's a shipping product, just recover and move on. Work the room while recovering (rebooting, restarting the app, etc.) and don't dwell on what happened. Customers understand stuff happens.
I was an applications engineer and this happened frequently. IMO, the key to recovering from this starts at the beginning of the relationship / sales process / demo, not at one particular event. Establish trust and build credibility with your audience as early as possible. If you have this and an exception gets thrown, the audience will probably trust you enough to think nothing of the software failure and just move past it. Yeah, you will have the occasional person looking to crucify you regardless of what you do, but that comes with the territory.
Be honest if someone asks (they probably won't) and never talk negatively about your product. I also found that when questioned about a fatal error, it helps (and they are very forgiving) if you can explain what happened in terms the audience understands. It demonstrates your knowledge of the software process and goes back to trust.
Write down the Exception and why/how it occurred IMMEDIATELY. Then they will see that you will be fixing it.
It seems that the normal progression to join projects is to contribute for a while, earn the trust, then get accepted as a member of the community (i.e. having commit access).
Now, I already apparently know "the best way" to get involved, in a manner of speaking; that is not my question. What I was hoping to find out is: how did everyone else get involved? Surely not everyone has gone down the "find a project and submit patches" route - or have they? I don't happen to know anybody in the open source community, so I'm just itching to know...
Perhaps you already knew someone in a community and just fell into it? Maybe you were getting frustrated with some bug and started contributing regularly as a result? Maybe you did just spot a project on SourceForge...
Update:
It seems that the most common reason is simply scratching an itch, to quote singpolyma: "Looking for a project to contribute to is often not the right way." Instead, you should join the open source community by contributing to a project that you already know and use.
Important:
Please, please, please: tell me about your specific experience; no general answers, please. Also, answer only if you are either a project member or a patch contributor. Please do not give advice on how to join a community; this isn't the kind of answer I'm looking for. If you would like to give advice on joining a community, please answer in this other thread.
Great Answers:
Mark Harrison talks about Tcl, cx_Oracle, kap and orapig
singpolyma talks about DiSo and Greasemonkey
Pax talks about contributing to GnuCash because of his wife
Related:
How to get involved in an open source project
How Open Source Projects Survive Poisonous People (And You Can Too)
My personal anecdotes:
I got involved with the Tcl community when it was first starting out in 1991 or so. The mailing list and later the usenet newsgroup were pretty important to connect with people. I specialized in user evangelism and teaching, and eventually ended up writing two books about the subject. One of them is still in print after ten years:
http://www.amazon.com/dp/0201634740
Now I use a lot of Python, and really like the cx_Oracle package. Again I was active in the mailing list, and contributed a few patches.
I've made a couple of software packages available that I had written for work. By making them open source, I was able to get some nice contributions back, and since they were not the "secret sauce" of my employers at the time, they didn't mind sharing the code. The two most popular packages were
http://sourceforge.net/projects/kap/ The Kinetic Application Processor -- this was built when I was working on the China Internet backbone.
http://code.google.com/p/orapig/ - OraPIG, the Oracle Python Interface Generator -- it generated Python code to call APIs defined in the database, and includes an XML-RPC database interface.
Advice:
Instead of looking for projects to join, try contributing to projects you already use.
It's often difficult to jump into the "core" development, because (a) on a big project, that might be a pretty big chunk of code to understand, and (b) there is probably a core group of people already working on it.
So, suppose you like a certain piece of software and want to start contributing: you can start by working around the edges. Here are a couple of concrete tasks that will help you become integrated with the group.
write some test cases for bugs to add to the regression test suite (a minimal sketch follows this list).
browse through the bug database and find a bug to work on. This might be the best way to get into the core development.
look at the feature request database and see if there's a small task you can work on.
look for "user doc" requests... a lot of them involve writing example code which you can provide.
Good luck!
The way people normally get involved is:
you use the FOSS product in your day to day work
you notice a problem or a missing feature
you mail the maintainer to ask if this bug/missing feature is real
the maintainer says yes, this is a bug/missing feature
you decide to try to fix/add the bug/feature
you code like mad
you submit a patch to the maintainer
the maintainer laughs in your face or says "thanks very much!"
If you repeat the last few steps a few times, the maintainer will probably give you commit access to the project's RCS repository, and then you can really become dangerous. But the bottom line is that it is up to you to do something i.e. write some code - merely being "interested" in a project is not enough.
I joined DiSo and Greasemonkey.
The best way I've found to get involved is to get in early in the life of the project, or just be very interested. With DiSo or the various github projects I'm on, it was the former, with my Greasemonkey contributions, the latter.
Looking for a project to contribute to is often not the right way. Use stuff and find out what you want to build/fix, then do that.
I did a little bit of patch work on GnuCash since my wife restarted work part-time recently after our kids were a little more grown up.
I would've rather had my eyes ripped out with a hot poker than re-install Windows but GnuCash was missing something that [a certain other accounting package] had so I told her I'd get it added.
As it turns out, they took my patch and made it a lot better before putting it in (to the point where maybe 1% of the final patch was my stuff) but at least we can now use GnuCash instead of that proprietary stuff. They were also incredibly responsive - from patch submission to patch availability was only a week or so and it was in the product three weeks later.
I also once investigated getting a patch into the process accounting in the Linux kernel but the effort required far outweighed my needs :-)
I don't contribute on a regular basis, more as-needed (find your itch and scratch it). There are some who make a hobby of it but I'd rather be spending my spare time with the kids and, unfortunately, my employer won't pay me to contribute elsewhere.
That last bit particularly galled me since:
the Linux patch would have greatly assisted our product (and a lot of others).
it was change in behavior of another of our products that degraded the usefulness of our product.
the solution was fairly simple, conceptually (the effort required was testing since a problem would have been high-impact [task switching] and very pervasive [everyone using Linux]).
it would have been quicker to code up the patch than the workaround we eventually implemented.
the workaround is a kludge (p'tooee).
now, nobody in the world has the benefit of our patch (including us).
What I did was pretty simple; I started a project of my own.
I have been joined by one permanent developer, and two others who donate code behind the scenes. The project is in its very early stages, so not many users have downloaded it.
What really helps an open source project is having a plugin architecture.
It's much easier to contribute a simple plugin for, say, a file format than to try to add something to the Linux kernel. This makes it a lot quicker and easier to build a community.
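To make that concrete, here is a minimal sketch (in Python, with purely illustrative names, not from any particular project) of the kind of plugin registry that lowers the bar for contributors: a new file format is one small, self-contained registration rather than a change to the core.

    # Minimal plugin registry sketch for file-format readers (illustrative names).
    READERS = {}

    def register_reader(extension):
        # Decorator that registers a reader function for a file extension.
        def wrap(func):
            READERS[extension] = func
            return func
        return wrap

    @register_reader(".csv")
    def read_csv(path):
        with open(path) as f:
            return [line.rstrip("\n").split(",") for line in f]

    def load(path):
        for ext, reader in READERS.items():
            if path.endswith(ext):
                return reader(path)
        raise ValueError(f"no reader registered for {path}")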
TODO: Please supply an anecdote.
My coworkers and I were having a discussion about this yesterday. It seems that no matter how well we prepare and no matter how much we test and no matter what the client says immediately before the site becomes public, initial site launches almost always seem to be somewhat rocky. Some clients are better than others, but often things that were just fine during testing suddenly go horribly wrong when the site becomes public.
Is this a common experience? I'm not just talking about functionality breaking down (although that's often a problem as well). I'm also talking about sites that work exactly the way we wanted them to, but suddenly are not satisfactory to the client when it's time to make the site public. And I'm talking about clients that have been familiar with the site during most of the development process. Meaning, the public launch is definitely not the first time they've seen the site.
If you've dealt with this problem before, have you found a way to improve the situation? Or is this just something that will always be somewhat of a problem?
Don't worry. This is completely and entirely normal and happens with every piece of software. Everything that can go wrong will go wrong, and the most volatile entity in the development process, the client, will be the cause of these things.
You could do all the Requirements Gathering in the world, write a 100 page Proposal, provide screenshots and updates to the project hourly and the client will still not approve. On a personal note, I feel that the Internet is one of the worst mediums for this, as designs are a lot more free-flowing nowadays and the client will always have a certain picture in his/her mind; one that won't look like the finished product.
I find that a bulletproof contract with defined stages and sign-off sheets is the best way to handle such a situation. Assuming that your work is contracted, you should ensure that at each stage the client is shown the work and is required to approve each and every change made. At least that way, if the client wants something changed, you can tell them that they've already signed off on that section and the additional work will cost them extra (also defined within the contract).
Not only did this approach work for me, it made the client stop and think about what he/she REALLY wanted. Luckily for me many of my clients are already tech-oriented, so they understand that these things can take time, but those that haven't a clue about Web Development expect things to be perfect within a couple of days. As long as you make sure that everything is covered in the contract the client will think about what they want and won't pester you with issues after.
Of course, anything you can do in regards to Quality Control would be fantastic and help the project move along nicely. Also ensure that some form of methodology is planned out before the project and that this methodology is known by the client(s). Often changes in fundamental areas can be costly and many clients do not seem to realise that a small change can require many things to be changed.
Yes, saw this several times on our projects (human beings are fickle).
Something that helps us in these situations is a good PM/Account Manager that can handle the customer, which makes things a little bit bearable on the technical level.
Web site launches are usually fairly smooth for us. Of course, we do extensive validation including code inspections, deployments to proto-servers (identical to our production servers), and mountains of documentation.
After every launch, we have a meeting to discuss what went well and what didn't so that we can make adjustments to our overall process and best-known-methods documents.
As for clients that change their minds at the last minute... sigh... we minimize that by having them sign off on the beta version. That way, there is no disagreement when the project is launched. If there is a disagreement, there is always a next release.
For what it's worth, the last site launch I did went off without a hitch. Now, it wasn't a high-traffic site, and there were some bugs that I did eventually fix, but there wasn't anything troubling on the day of the actual launch.
This was an ASP.NET/C# site. It wasn't terribly large or complicated, but it wasn't trivial either. Probably the most notable thing is that it was 100% designed, implemented, and tested by myself, from the database schema all the way up to the CSS. It was also my first time using ASP.NET. There were plenty of bumps in development but by the time I launched it I was pretty familiar with them and so knew what to expect.
I think the lesson to be learned from this is to have a good design up-front, solid implementation skills, and good testing, and a new site doesn't have to be a nightmare. There's at least a possibility of a trouble-free launch.
I wouldn't limit your statement to just web sites. I have worked on a lot of projects over the years and there are always details that get "discovered" when going live. No amount of testing removes all the fun things that can happen.
One thing I will say is that what you learn in the first couple of hours of a new system going "on-line" is way more valuable than all the stuff learned during development. It's show time, when the really cool problems and scenarios appear. Learn to love them and use these times as learning points for next time. Then each time it will be just as fun!
We used to have this problem a lot, but much less recently.
Partly for us it is about firmer project management and documenting the specification (as suggested in other answers here) but I believe more of the difference came from:
Expectation management - getting the client to accept that iterative changes are still to be expected after launch, that this is normal and not to worry about it
Increasing authority - we are now a well established (13 years) web developer and we can speak with a lot of expertise
Simply being more experienced - we can now predict in advance most of the queries that are likely to come up, and either resolve them, mitigate them or bring them to the client's attention so they don't sting us on the day
Plus, we rarely do big fanfare launches - a soft launch makes things much less stressful.
My experience is that web site launches are almost always rocky. I've only had two exceptions to this common truth. The first was a site developed for a small business run by one person. This went smoothly because, well, there was only one person to please, so it was fairly easy to track what they wanted. The other was a multi-million dollar website launched by a Fortune 500 company. This happened to go smoothly because there were 2 PMs and a small army of consultants there to manage the needs of the customer. That, coupled with a month of straight application load testing and a 1,000-user beta launch, meant that when the site finally went "live", I was able to get a full night's sleep (which is fairly uncommon). Neither of these situations constitutes the norm, though. Of course, there's nothing better than several thousand beta testers hitting your site to help find those contingencies that you never thought of.
I'm sure you can figure out the kinds of errors that always sneak in. For example, are they due to rather superficial testing, e.g. randomly clicking around and checking whether things appear to be right?
In order to improve, I propose something along the following lines:
Create documents/checklists that specify all testing procedures.
Get regular people to test, not just the folks who built the application.
Setup a staging environment which closely resembles production.
Post-launch, analyze what went wrong and why it went wrong.
Maybe get external QA to check on your procedures.
Now, all those suggestions are of course very obvious but implementing them into your launch procedures will require time.
In general this really is an ongoing process which will help you and your colleagues to improve. And also be happier, because fixing bugs in production just makes you age rapidly. ;-)
Keep in mind, you won't be done the first time around. Documents are heavy, which is why people don't read them. People are also lazy and don't follow the procedures. This means that you always have to analyze what happened, then go back and improve the procedures.
If you have the opportunity, I'd also spend some time looking at why nothing went wrong with another launch and comparing that to the usual outcome.
While solving any programming problem, what is your modus operandi? How do you fix a problem?
Do you write everything you can about the observable behaviors of the bug or problem?
Take me through the mental checklist of actions you take.
(As they say - First, solve the problem. Then, write the code)
Step away from the computer and grab some paper and a pen or pencil if you prefer.
If I'm around the computer then I try to program a solution right then and there and it normally doesn't work right or it's just crap. Pen and paper force me to think a little more.
First, I go to one bicycle shop, then another.
Once I figure out that nobody has already invented that particular bicycle:
Figure out appropriate data structures for the domain and the problem, and then map out the needed algorithms for dealing with those data structures in the ways you need.
Divide and conquer. Solve subsets of the problem.
This algorithm has never failed me:
Take Action. Often just sitting there and being terrified or miffed by the problem will not help solve it. Also, often, no amount of thinking will solve the problem. So you have to get your hands dirty and grapple with the problem head-on.
Test. Under exactly what conditions, input values or states, does the problem occur? Make a mental model of why these particular conditions might cause the problem. Check similar conditions that don't cause the problem. Test enough so that you have a clear understanding of the problem.
Visualise. Put debug code in, dump variable contents, single-step the code, whatever. Do anything that clarifies exactly what is going on where, within the problem conditions.
Simplify. Remove or comment code, poke values into variables, run particular functions with certain values. Try your hardest to get to the nub of the problem by cutting away the chaff or stuff that doesn't have a relevance to the problem at hand. Copy code into a separate project and run it, if you have to, to remove dependencies.
Accept. A great man said: "whatever remains, however improbable, must be the truth". In other words, after simplifying as much as you can, whatever is left must be the problem, no matter how bizarre it may seem at first.
Logic. Double, triple check the logic of the problem. Does it make sense? What would have to be true for it to make sense? Is there something you're missing? Is your understanding of the algorithm wrong? If all else fails, re-engineer the problem away.
As an adjunct to step 3, as a last resort, I often employ the binary search method of finding wayward code. Simply comment half the code and see if the problem disappears. If it does then it must be in that half (and vice versa). Half the remaining code and continue.
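Here is a tiny sketch of that bisection idea in Python, applied to a list of processing steps rather than literally commenting out lines. It assumes the breakage is monotone (once the culprit step is included, the output stays bad), and all the names are hypothetical.

    # Bisect over a list of steps to find the first one that breaks the output.
    def is_output_ok(value):
        return value >= 0  # stand-in for "the symptom is absent"

    def run_steps(steps, value):
        for step in steps:
            value = step(value)
        return value

    def first_bad_step(steps, value):
        lo, hi = 0, len(steps)  # invariant: steps[:lo] is ok, steps[:hi] is bad
        while lo + 1 < hi:
            mid = (lo + hi) // 2
            if is_output_ok(run_steps(steps[:mid], value)):
                lo = mid
            else:
                hi = mid
        return hi - 1  # index of the step whose inclusion first breaks things

    steps = [lambda x: x + 1, lambda x: x * 2, lambda x: x - 100, lambda x: x + 3]
    print("first bad step:", first_bad_step(steps, 5))  # -> 2 (the "- 100" step)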
Google is great for searching for error messages and common problems. Somewhere, someone has usually encountered your problem before and found a solution.
Pencil and paper. Pseudocode and workflow diagrams.
Talk to other developers about it. It really helps when you have to force yourself to simplify the problem for someone else to understand. They may also have another angle. Sometimes it's hard to see the forest for the trees.
Go for a walk. Take your head out of the problem. Take a step back and try to see the bigger picture of what you want to achieve. Make sure the problem you are 'trying' to solve is the one you 'need' to solve.
A big whiteboard is great to work on. Use it to write out workflows and relationships. Talk through what is happening with another team member.
Move on. Do something else. Let your subconscious work on the problem. Allow the solution to come to you.
write down the problem
think very hard
write down the answer
I can't believe no one posted this already:
Write up your problem on StackOverflow, and let a bunch of other people solve it for you.
My method, something analytic-synthetic:
Calm down. Take a deep breath. Focus your attention on what you're going to solve. This may include going for a walk, cleaning the whiteboard, getting scratch paper and pencils in order, some snacks, etc. Avoid stress.
High-level understanding of the problem. If it is a bug, when does it happen? In what circumstances? If it is a new task, try to diverge on what results are needed. Collect data and evidence, get acceptance descriptions, and maybe documentation or a talk with someone who knows about the issue.
Set up the test playground. Try to feel happy with the tools needed. Use the data collected in the previous step to automate something: hopefully the bug, if that's the case, or some failing tests otherwise.
Start synthesizing, summarizing what you know and reflecting it in code. Execute once and once more. If you are not happy with the results, return to step two with renewed ideas and diverge more: maybe apply tools (in order of cost) that helped before, i.e. divide and conquer, debugging, multithreading, disassembly, profiling, static analysis tools, metrics, etc. Stay in this loop until you can isolate the problem and pass the over-the-phone test.
Now it's time to fix it but you have all the tools set up. It won't be so much trouble. Start writing code, apply refactoring, enjoy describing your solution in the docs.
Get someone to try your solution. She can eventually get you to step 2 but that's ok. Refine your solution and redeploy.
I'm interpreting this as fixing a bug, not a design problem.
Isolate the problem. Does it always occur? Does it occur only the first time run on a set of new data? Does it occur with specific values, but not with others?
Is the system generating any error messages that appear related to the problem? Verify that the error messages are not generated when the problem does not occur.
Has anything been changed recently? Those are likely places to start looking.
Identify the gap between what I know is working (e.g. I can start up the app and attempt to do a query) and what I know is not working (e.g. it gives me an error instead of the expected results). Find an intermediate point in the code where it seems possible to look for a problem (does this contain valid data at this point?). This allows me to isolate the problem to one side or the other of the point I looked at.
Read the stack traces. If you have a stack trace, find the first line that mentions in-house code. The problem is not in your libraries. Maybe it will turn out to be, but just forget about that possibility at first. The error is in your code. It's not a bug in Java, it's not a bug in the Apache Commons HTTP client; it's in code written in your organization.
Think. Come up with something the system could be doing that can cause the symptoms you see. Find a way to validate whether that is what the system is doing.
No possibility the bug is in your code? Google for anything you can think of related. Maybe it is a bug in the library, or poor documentation leading you to use it wrong.
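As a small illustration of the "find an intermediate point" step above (does this contain valid data at this point?), here is a hypothetical Python sketch; fetch_rows() and build_report() are made-up stand-ins for the working and failing stages on either side of the point being inspected.

    # Check an intermediate point between "known working" and "known broken"
    # by logging and asserting on the data there.
    import logging

    logging.basicConfig(level=logging.DEBUG)
    log = logging.getLogger(__name__)

    def fetch_rows(query):
        # Stand-in for the "known working" side (e.g. the query runs fine).
        return [{"id": 1, "name": "a"}, {"id": 2, "name": "b"}]

    def build_report(rows):
        # Stand-in for the "known broken" side (e.g. the error shows up here).
        return [row["name"] for row in rows]

    def run_query(query):
        rows = fetch_rows(query)
        # The midpoint check: is the data still valid here?
        log.debug("fetched %d rows, first row: %r", len(rows), rows[:1])
        assert all("id" in row for row in rows), "rows lost their 'id' before the midpoint"
        return build_report(rows)

    print(run_query("select * from things"))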
Logic.
Break the problem down, use your own brain and knowledge of each component of the system to determine exactly what is happening and why; then on the basis of this you will discover where the problem isn't, and hence determine where it must be.
I stop working on it until tomorrow. I usually solve my problem in the shower the next day. I find stepping away from the issue, and allowing my brain to clear, allows a fresh perspective on the issue.
Answer these three questions in this order:
Q1: What is the desired output?
I don't care if this is a napkin with scribble on it. I want something tangible that shows me what the end result is supposed to look like. If I don't get at least this far then I stop.
Q2: What is the input?
I find out what data I have coming in. Where this data is coming from. What formulas I may need. What dependencies there might be on A happening before B. What permissions, if any, are necessary to get this data. I then ask Question 3.
Q3: Is there enough input to create the output?
If the answer is No then I go back to Q2 and get more input from whoever can give it to me.
For very large problems I break them down in Phases and apply Q1 Q2 and Q3 to each phase.
To paraphrase Douglas Adams, programming is easy. You only need to stare at a blank screen until your forehead bleeds. For people who are squeamish about their foreheads, my ideal architect-and-build for the bigger problems would go something like this. (For smaller problems, like George Jempty I can only recommend Feynman's Algorithm.)
What I write is couched in an on-site business setting but there are analogues in open-source or distributed teams. And I can't pretend that every, or even most, projects pan out this way. This is just the series of events that I dream about, and occasionally come to pass.
Get advance, concise warning of what the problem is likely to look like. This is not the full, final meeting, but an informal discussion. Uncertainty in certain specification details is fine, as long as the client (or manager) is honest. Then take a piece of paper or a text editor, and try to condense what you've learned down to five essential points, and then try to condense those to a single sentence. Be happy that you can picture the core problem(s) to be solved without referencing any of your documentation.
Think about it for maybe a couple of hours, maybe playing with code and prototyping, but not with a view to the full architecture: you should even do other stuff, if you've time, or go for a walk. It's great if you can learn about a job an hour before home time in order to deliver a decision around midday the next day, so you get to sleep on it. Spend your time looking at potential libraries, frameworks, data standards. Try to tie together at least two languages or resources (say, Javascript on PHP-generated HTML; or get a Python stub talking RPC to a web service). Flesh out the core problems; zoom in on the details; zoom out to make sure the whole shape is still distinct and makes sense.
Send any questions to the client or manager well in advance of a meeting to discuss both the problem and your proposed solution. Invite as many stakeholders and your programming peers along as is convenient (and as your manager is happy with.) Explain the problem back to them, as you see it, then propose your solution. Explain as much as you can; pitch the technical details at your audience, but also let your explanations fill in more details in your own mental model.
Iterate on 2 and 3 until everyone is happy. Happiness is domain-specific. Your industry might require UML diagrams and line-item quotations, or it might be happy with something jotted on a whiteboard with an almost invisible drywipe marker. Make sure everyone has the same expectations of what you're about to build.
When your client or manager is happy for you to start, clear everything. Close Twitter, instant messenger, IRC and email for an hour or two. Start with the overall structure as you see it. Drop some of your prototype code in and see if it feels right. If it doesn't, change the structure as early as possible. But most of all make sure your colleagues give you a couple of hours of space. Try not to fight fires in this time. Begin with a good heart and cheer, and interest in the project. When you're bogged down later on you'll be glad of the clarity that came out of those first few hours.
How your programming proceeds from there depends on what it actually is, and what tasks the finished code needs to perform. And how you ultimately architect your code, and what external resources you use, will always be dictated by your experience, preference and domain knowledge. But give your project and its stakeholder team the most hopeful, most exciting and most engaged start you can.
Pencil, paper and a whiteboard. If you need more organization, use a tool like MindManager.
Andy Hunt's Pragmatic Thinking and Learning has a lot to say on this question.
Question: How do you eat an elephant?
Answer: One bite at a time.
One technique I like using for really big projects is to get into a room with a whiteboard and a pile of square Post-it Notes.
Write your tasks on the Post-it Notes then start sticking them on the whiteboard.
As you go, you can replace tasks that are too big with multiple notes.
You can shift notes around to change the order that the tasks happen in.
Use different colours to indicate different information; I sometimes use a different colour to indicate stuff that we need to do more research on.
This is a great technique for working with a team. Everybody can see the big picture and can contribute in a highly interactive way.
I think about it. I take anywhere from a couple minutes to a few weeks to mull over the problem and develop a general plan of attack.
Hammer out an initial solution. This solution is probably half-baked and one or more aspects may not work.
Refine that solution. Keep working on the problem till I have something that solves it.
(and this may be done at any step in the process) Ask questions on Stack Overflow to clear up any difficulties I'm having at the moment.
One of my ex-colleagues had a unique modus operandi. Whenever faced with a hard programming problem (e.g. the knapsack problem or some kind of non-standard optimization problem), he would get stoned on weed, claiming his ability to visualize complex state (such as that of a recursive function doing operations on a set passed on the stack) was greatly improved. The only difficulty was that the next day he could not understand his own code. So eventually I showed him TDD and he has quit smoking...
I write it on a piece of paper and start with my horrible class diagram or flowchart. Then I write it on sticky notes to break it down to "TO DO's".
1 sticky note = 1 task. 1 dumped sticky note = 1 finished task. This works really well for me so far.
Add the problem to StackOverflow, wait about 5-10 minutes and you usually have a brilliant solution! :)
The following applies to a bug rather than building a project from scratch (but even then it could do both if reworded a bit):
Context: What is the problem at hand? What is it preventing, doing wrong, or not doing?
Control: What variables (in the wide sense of the word) are involved? Can the problem be reproduced?
Hypothesise: With enough data on what is occurring or required, it is possible to hypothesise, that is, to draw a mental image of the problem in question.
Evaluate: How much effort, cost, etc, will the correction require? Determine if it's a show stopper or a minor irritant. At this point, it may be too early to tell, but even that is a form of evaluation. This will allow prioritisation.
Plan: How will the problem be approached? Does it require specifications? If so, do them first.
Execute: A.K.A. The fun part.
Test: A.K.A. The not-so-fun-part.
Repeat to satisfaction. Finally:
Feedback: how did it come to be this way? What led us here? Could this have been prevented, and if so, how?
EDIT:
Really summarised: stop, analyse, act.
Probably a gross oversimplification:
But really, this holds 100% true.
CONCEIVE
What are you without an idea? You may have a problem, but first you must define it more explicitly. You have a frozen pizza that you want to eat. You need to cook that pizza! In programming, this is usually your brainstorming session for coming up with a solution from the hip. Here you decide what your approach is.
PLAN
Well, of course you need to cook that pizza! But HOW! Will you use the oven? No. Too easy. You want to build a solar cooker, so you can eat that frozen pizza anywhere that the sun grants you power to do so. This is your design phase. This is your pencil and paper phase. This is where you start to form a cohesive, step-by-step method to implementation.
EXECUTE
Well, you are going to build a solar oven to cook your frozen pizza; you've decided. NOW DO IT. Write code. Test. Commit. Refactor. Commit.
Related question that may be useful:
Helpful points of view, concepts or ways to think about problems every newbie should know
Every problem I've ever had to solve on a computer has had something to do with solving a task in the real world. Therefore, I've learned to look at how I would accomplish something in the real world and map that to the computer problem.
Example:
I need to keep track of my students' grades and come up with a final grade that is an average of all the grades throughout the year.
Well, I'd save the grades in a log (database) and I'd have a page for every student (Field StudentID) and so on...
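A minimal Python sketch of that mapping, with a plain dict standing in for the database table and purely illustrative field names: the "log" is keyed by student ID, and the final grade is just the average of the logged grades.

    # Illustrative only: a dict plays the role of the grades database.
    grade_log = {
        "S001": [88, 92, 75],
        "S002": [64, 70, 81],
    }

    def final_grade(student_id):
        grades = grade_log[student_id]
        return sum(grades) / len(grades)

    for student_id in grade_log:
        print(student_id, round(final_grade(student_id), 1))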
I always take a problem to a blog first. Stackoverflow would be a good place to start. Why waste your time re-inventing the wheel when someone else may have already solved a similar problem in the past? If anything you will get some good ideas to solve it yourself.
I use the scientific method:
Based on the available information about the programming problem I come up with a hypothesis about what the reason could be.
Then I design / think up an experiment that will reject or confirm the hypothesis. This could be observing something in a debugger or screen/file output. Or changing the program slightly.
If the hypothesis is rejected then repeat 1. The information gathered in 2. may help in coming up with a new hypothesis.
If the hypothesis is confirmed then the hypothesis may be refined/become more specific (repeat 1.). Or it may already be clear what the problem is.
This directed way of finding the problem is much more effective than changing things at random, observing what happens and trying to (inappropriately) use statistics.
No one has mentioned truth tables! But that's probably because they're usually only mildly helpful ;) (although your mileage may vary). I used one for the first time yesterday, in my 8 years of programming.
Diagramming on whiteboards or paper has always been very helpful for me.
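For what it's worth, here is a minimal Python sketch of generating a truth table for a boolean condition, the sort of thing that occasionally untangles a convoluted if-statement; the condition itself is just an example.

    # Print a truth table for an example boolean condition.
    from itertools import product

    def condition(a, b, c):
        return (a and not b) or (b and c)

    print(" a     b     c     result")
    for a, b, c in product([False, True], repeat=3):
        print(f"{a!s:5} {b!s:5} {c!s:5} {condition(a, b, c)}")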
When faced with very weird bugs, like this one: JPA stops working after redeploy in GlassFish...
I start from scratch. Make a new project. Does it work? Yes. Start to recreate the components of my app one piece at a time. DB. Check. Deploy. Check. Continue until it breaks. If it never breaks, OK: you just recreated your entire app. Discard the old one. When it breaks, you've pinpointed the exact problem.
I think: what am I looking for?
What method best solves this problem?
Implement it with solid logic - no code
Pseudo code
code a rough cut
execute
These are my prioritized methods:
Analyse
a. Try to find the source of your problem
b. Define desired outcome
c. Brainstorm about solutions
Trial and error (if I don't want to analyse)
Google around a bit
a. Of course, look around on stackoverflow
When you get mad, walk away from the PC for a cup of coffee
When you're still mad after 10 cups of coffee, sleep on it overnight and think about the problem
GOLDEN TIP
Never give up. Persistence will always win