I've been working on an app, by myself, and I am at a stage where everything works great--as long as the user does everything he or she is supposed to do. :-) The software needs more testing to see how robust it is, how well it works when people do things like click the same button repeatedly, try to open the wrong kind of files, put data in the wrong places, etc.
I'm having a little trouble with this because it's a bit difficult for me to think in terms of using the application incorrectly. These are all edge cases to me. Still, I'd like to have the application as stable and well tested as possible before I start giving it to beta testers. Assuming that I am not talking about hiring professional testers at this point, I'm curious whether y'all have any tips or systematic ways of thinking about this task.
Thanks, as always.
Well, it sounds like you are talking about two different things:
"Testing your application's functionality" and "Stress testing" (which is the title of your question).
Stress testing is for when you have a website and want to check that it can serve 100,000 people at the same time, i.e. seeing how your application performs under load. You can do this a number of ways, e.g. by recording some actions and then getting a number of agent machines to hit your application concurrently.
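As a very rough illustration of that idea (a minimal sketch, not a real load-testing setup; the URL, thread count, and request count are all made up, and a tool like JMeter does this properly):

import java.net.HttpURLConnection;
import java.net.URL;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class MiniLoadTest {
    public static void main(String[] args) throws Exception {
        // Hypothetical endpoint; point this at a test deployment, never at production.
        final URL target = new URL("http://localhost:8080/yourapp/login");
        ExecutorService pool = Executors.newFixedThreadPool(50); // 50 concurrent "users"
        for (int i = 0; i < 10000; i++) {
            pool.submit(new Runnable() {
                public void run() {
                    try {
                        HttpURLConnection conn = (HttpURLConnection) target.openConnection();
                        int code = conn.getResponseCode(); // fires the request
                        if (code >= 500) {
                            System.err.println("Server error: " + code);
                        }
                        conn.disconnect();
                    } catch (Exception e) {
                        System.err.println("Request failed: " + e);
                    }
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.MINUTES);
    }
}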
This question sounds more like a quality assurance question. That is what testers / beta testers are for. But there are things that you can do yourself to validate that your application works the best it can.
Unit testing your code would be a good start; it helps you to find those edge cases. If your method takes in things like ints, try passing in int.MaxValue and int.MinValue and seeing what blows up. Pass nulls into everything. If you are using .NET you might want to look at Pex; it will go through all the branches/code paths that your application has. That might help you to further refine your unit tests to test your application the best you can.
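In JUnit terms (the answer's examples are .NET, but the idea is identical; describe() here is a made-up stand-in for one of your own methods):

import static org.junit.Assert.*;
import org.junit.Test;

public class EdgeCaseTest {

    // Stand-in for one of your own methods; the point is the inputs, not the logic.
    static String describe(Integer count) {
        if (count == null) throw new IllegalArgumentException("count is null");
        return count < 0 ? "refund" : "sale";
    }

    @Test
    public void handlesExtremeInts() {
        // Boundary values are where things blow up first.
        assertEquals("sale", describe(Integer.MAX_VALUE));
        assertEquals("refund", describe(Integer.MIN_VALUE));
        assertEquals("sale", describe(0));
    }

    @Test(expected = IllegalArgumentException.class)
    public void rejectsNull() {
        describe(null); // pass nulls into everything and decide what *should* happen
    }
}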
Integration tests: see what happens end to end for some of your usual workflows. These will help you find bugs as you keep developing later.
Those are some quick tips on things you can do yourself to try and find edge cases that you may have missed. But yes, eventually you will need to pass your app off to someone else to test. Just make sure that you have covered off as much as you can before it hits them :-)
Make sure you have adequate code coverage in your unit tests and integration tests.
Use appropriate UI validation, and test combinations that can break it.
I have found that a well-architected application that reduces the number of possible permutations in the UI (ways the user can break it) helps a lot. Design patterns like MVC can be especially useful in this regard, since they make your UI veneer as thin as possible.
Automation.
(Re)Factor your code so that another program can throw user-events at it. Create simple scripts of user events and play them back to your program. Capture events from beta users and save those as test scripts (useful for reproducing problems and checking for regressions). Write a fuzz-tester that applies small random changes to the scripts and try them against your program as well.
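To give a flavour of the fuzzing step (a minimal sketch; events are plain strings here, standing in for whatever your record/playback layer actually produces):

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

public class ScriptFuzzer {
    private final Random rng = new Random(42); // fixed seed so failures are reproducible

    public List<String> mutate(List<String> script) {
        List<String> copy = new ArrayList<String>(script);
        if (copy.isEmpty()) {
            return copy;
        }
        switch (rng.nextInt(3)) {
            case 0: { // duplicate a random event (the "clicked the same button twice" case)
                int i = rng.nextInt(copy.size());
                copy.add(i, copy.get(i));
                break;
            }
            case 1: // drop a random event (the "skipped a step" case)
                copy.remove(rng.nextInt(copy.size()));
                break;
            default: // swap two events (the "did things out of order" case)
                Collections.swap(copy, rng.nextInt(copy.size()), rng.nextInt(copy.size()));
        }
        return copy;
    }
}

Feed each mutated script back through your playback mechanism and watch for crashes, hangs, or corrupted state.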
With this kind of automation you can stress an application and find glaring problems like unbounded caches and memory leaks. It won't test the actual functionality. For functionality, unit tests can be helpful. There are a ton of unit testing frameworks out there to try. Pick something useful, learn to write good tests, and integrate them into your build process.
These days I've read several articles about BDD to find out what it is all about. Now I have a basic understanding, but I'm still not clear about the whole process.
The following is what I think to do in a BDD process:
All the stakeholders (BA, customer, dev, QA) sit together to discuss the requirements and write the agreed features on story cards. Here I take a "user registration" feature as an example:
As a user,
I want to register on the system,
so that I can use its services
Create several scenarios in Given/When/Then format, and here is one of them:
Scenario: user successfully register
Given a registration page
And an unregistered user
When the user fills in username "Jeff" and password "123456"
And clicks on "Register"
Then the user can see a "Success" message
And the user "Jeff" is created in the system
Implement this scenario with some BDD testing framework, say cucumber-jvm, like:
import cucumber.api.java.en.Given;

public class Stepdefs {

    @Given("a registration page")
    public void a_registration_page() throws Throwable {
        // ...
    }

    @Given("an unregistered user")
    public void an_unregistered_user() throws Throwable {
        // ...
    }

    // ...
}
implementing the steps one by one.
But I soon find myself in trouble: there are pages, models, and maybe databases needed for this scenario; it seems like a lot of things to do.
What should I do now? Do I need to discuss this scenario with all the stakeholders? For the BA/customer/QA, I don't think they really care about the implementation; is it a good idea to discuss it with some other developers instead?
Suppose after I discuss it with some other developers, we agree to split it into several small parts. Can we write these small parts as "scenario"s in Given/When/Then format as we just did with cucumber-jvm, or can we use JUnit as we normally do in TDD?
1. If we choose cucumber-jvm, it seems a little heavy for such small parts
2. If we choose JUnit, we need to involve more than one testing framework in the project
3. Would it be best if a single testing framework could do both things? (Not sure if one exists.)
Suppose I choose option 2, using JUnit for the small tasks
The following is what I will do after this decision:
Now we create new small tests to drive the implementation, like creating the user in the database, as we normally do in TDD (red -> green -> refactor). And we don't care about the Cucumber test Scenario: user successfully register (which is failing) for now; we just leave it there. Right?
We develop more small tests with JUnit, making them red -> green -> refactored. (And the incomplete Cucumber test keeps failing.)
Once all the small tests pass, we turn back to the Cucumber test Scenario: user successfully register, complete it, and make sure it finally turns green.
Now we develop another scenario. If it's easy, we can implement it just with Cucumber; otherwise we will have to split it and write several JUnit tests.
There are definitely a lot of misunderstandings here, maybe even very basic ones, because I don't find myself gaining much value from BDD except for the "discuss with all stakeholders" part.
Where is my mistake? I'd appreciate any suggestions!
Don't start with logging in; start with the thing that's different to the other systems out there. Why is someone logging in? Why do they want to use the service? Hard-code a user, pretend they're logged in, focus on the value.
If you focus on UI details you tie yourself to the UI very strongly, and it makes the UI hard to change. Instead, look at what capabilities the system is delivering. I don't recommend using a login scenario anyway, but if I did, I'd expect it to look more like:
Given Jeff isn't registered with the site
When he registers with the username "Jeff" and password "123456"
Then his account creation should be confirmed
And he should be invited to log in for the first time.
Look up "declarative vs. imperative" here to see more on this.
If your UI is really in flux, try out the scenario manually until the UI has settled down a bit. It will be easier to automate then. As you move into more stable scenarios it will be better to automate first (TDD-style).
What should you do now? Well, most people don't write class-level tests for the UI, so don't worry about it until you start driving out the controller and presenter layers. It's normally easier to have frameworks in the same language, but two different frameworks is fine. Cucumber / RSpec, JBehave / JUnit, SpecFlow / NUnit are pretty typical combinations. Do the smallest amount you need to get that first scenario working. It won't be much, because you can hard-code a lot of it. The second scenario will start introducing more interesting behaviour, and then you'll start to see your class-level tests emerge.
BTW, BDD started at a class level, so you can do the same thing with the classes; think of an example of how you might use it, write "Given, When, Then" in comments if your framework doesn't work that way already, and then fill in the gaps!
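A minimal sketch of such a class-level test in JUnit, with the Given/When/Then as comments and everything hard-coded as suggested above (Registration here is a made-up first-cut class, not anything from a real framework):

import static org.junit.Assert.*;
import org.junit.Test;

public class RegistrationTest {

    // Hard-coded first cut, just enough to drive out the design.
    static class Registration {
        private final java.util.Set<String> users = new java.util.HashSet<String>();

        boolean register(String username, String password) {
            // real validation/persistence comes later; hard-code the happy path
            return users.add(username);
        }
    }

    @Test
    public void confirmsAccountCreationForANewUser() {
        // Given Jeff isn't registered with the site
        Registration registration = new Registration();
        // When he registers with the username "Jeff" and password "123456"
        boolean confirmed = registration.register("Jeff", "123456");
        // Then his account creation should be confirmed
        assertTrue(confirmed);
    }
}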
Yes, your Cucumber scenario will be red throughout, until it isn't.
Ideally you'll be making one last unit test and the Cucumber scenario pass at the same time, rather than just writing a bit of extra code. It's very satisfying to see it finally go green.
The original point of BDD was to get rid of the word "test", since it causes people to think of things like TDD as being about testing. TDD's really about clean design; understanding the responsibilities and behaviour of your code, in the same way that scenarios help you understand the capabilities and behaviour of your system. It should be normal to write both system-level scenarios and class-level tests too.
You're already ahead of all the people who forget to discuss the scenarios before they start coding, though! The conversations with stakeholders are the most important part. You might get value out of including a tester in those conversations. Testers are very good at spotting scenarios that other people miss.
It looks like you're pretty much on the right track where the rest of the process is concerned. You might find some of the other BDD answers in my profile helpful for you too. Congrats and good luck!
I think doing registration/sign_in first is a really good thing to do when you are learning the mechanics of doing BDD. Pretty much everyone understands why you would want to sign into a system, and everyone understands that the system has to know who you are before you can do this, so you have to register first.
Doing this simple task allows you to concentrate on a smaller subset of BDD. By narrowing your focus you can improve quality, whilst being aware that there is much more to learn a little later on.
To write your sign in scenarios you need to focus on two things:
writing scenarios
implementing step definitions
These are the basic mechanics of BDD, but they are only a small part of the overall process. Still I think you'd benefit from working on them because at the moment you are not executing the mechanics very well, which is to be expected because you are new to this.
When you write scenarios you should concentrate on 'what' you are doing and 'why' you are doing it. Scenarios have no need to know anything about 'how' you do things. Anything to do with filling in stuff, clicking on stuff etc. is a smell. When your scenarios only deal with the what and why they become much simpler.
Feature: Registration
A pre-requisite for signing in; see sign_in.feature
Scenario: Register
Given I am a new user
When I register
Then I should be registered
Feature: Sign in
Dependent on registration ...
I want to sign in so I can have personalised content and ...
Scenario: Sign in
Given I am registered
When I sign in
Then I should be signed in
You really don't need much more than this to drive the development of a simple sign_in system. Once you have that running you can deal with some sad paths, e.g.
Scenario: Sign in with bad password
Given I am registered
When I sign in with a bad password
Then I should not be signed in
And I should be told ...
If you implement things nicely, this sad path scenario should be trivial to implement, as all the infrastructure to sign in is already in place; all that's different is that you are using a bad password.
You can see an example of this at https://github.com/diabolo/cuke_up. The way to use this example is to follow the commit history, and in particular notice how I am using the extract_method refactor to take all the code out of the step definitions. Each method I extract is a tool to be reused when writing subsequent scenarios. Making an effective set of tools is the key to productivity when implementing scenarios.
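To sketch what those extracted tools end up looking like in cucumber-jvm (the helper methods are hypothetical and their bodies app-specific):

import cucumber.api.java.en.Given;
import cucumber.api.java.en.Then;
import cucumber.api.java.en.When;

public class SignInStepdefs {

    @Given("^I am registered$")
    public void i_am_registered() throws Throwable {
        registerUser("default_user", "default_password");
    }

    @When("^I sign in$")
    public void i_sign_in() throws Throwable {
        signIn("default_user", "default_password");
    }

    @When("^I sign in with a bad password$")
    public void i_sign_in_with_a_bad_password() throws Throwable {
        signIn("default_user", "wrong_password"); // the sad path reuses the same tool
    }

    @Then("^I should be signed in$")
    public void i_should_be_signed_in() throws Throwable {
        // assert on whatever your app exposes, e.g. a greeting on the page
    }

    // Extracted methods: these are the "tools" reused by later scenarios.
    private void registerUser(String name, String password) { /* app-specific */ }

    private void signIn(String name, String password) { /* app-specific */ }
}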
Nowadays sign_up is so simple because we can rely on a 3rd party library and its unit tests. This means we can get pretty good results without ever having to worry about the transition to our own code and doing bits of TDD. So for now there really is no need to think about TDD.
So long as you are aware that you are only doing a small subset of BDD, I think you can successfully use this approach to provide foundations for all the extra stuff you have to deal with when working with the things that differentiate your system from others.
To summarize, just focus on
writing simple scenarios
making your step definitions elegant
creating tools (extracted methods) that can be used in the next scenario you write
You have plenty of time to learn the other stuff, and it will be much easier if your basic mechanics are better developed.
I learnt Swing basics and event handling basics from Head First Java.
Then I read a few tutorials on Swing app development using NetBeans,
and I loved it, as I don't have to care about layouts and stuff.
But I read in one of the forums that I should learn Swing properly rather than using NetBeans directly.
This confused me a bit.
Please suggest the best way to master development of Swing apps.
Thanks in advance.
Well, I see I'm going to run counter to the majority here ;-)
Hand coding GUIs is a pain in the ass. Anything that makes that task easier is a good thing in my book. When you're just starting, having a generated GUI lets you get up and running faster.
GUI builders handle the really repetitive work and prevent you from doing the most common dumb things. The downside is that same approach will also prevent you from doing the really clever things. Eventually, you will encounter something that you cannot do through the GUI builder and you will need to poke into the code. So, you can't treat code generators like black boxes where you don't need to know what magic happens inside. At minimum, you need white boxes. Let the GUI builder do its magic, but understand that magic and its limitations.
Practice by generating a very simple GUI. Walk through the code and understand what it does. Make a change through the builder and see how the generated code changes. Try changing the code yourself to confirm your understanding is correct.*
If you don't understand something, hit the JavaDocs, the Swing Trail, or browse through the Java2S Swing Tutorials.
If you're still stuck try the kind folks at Java Ranch, or here on StackOverflow.
* Netbeans puts the generated code in guarded blocks and will not let you edit them directly. However, you can open the file in another editor to test a change. Also, you can do quite a lot to influence the code generation using the code tab in the properties window.
It depends on what you see as your goal.
There is no "perfect" approach to get comfortable using Java and swing, it always depends on what you want the outcome to be like.
Most enterprises depend on stability and speed: programmers need to write code fast and stable. If you write complex interfaces by hand, it gets ugly when it comes to speed and precision at the same time. You can never write better code, in terms of "it is working", than the NetBeans GUI builder can. Also, no one will probably look at your code once the application is up and running.
If you want to get to know Swing only for the purpose of knowing it, with no deeper intention whatsoever, I'd recommend learning it by heart without NetBeans, as you'll probably familiarize yourself with most of its functionality more quickly than the other way around.
On one hand, if I want to learn something, I want to learn it from scratch, so I would probably go with writing swing-code myself and in the end using netbeans to generate it when I am fully able to comprehend what is generated.
On the other hand, if I need to write applications quickly and am not paid to go into any details, I'd simply use NetBeans.
I think you have answered yourself... you want to master development of swing apps...
Everything that you do by autogenerating, without knowing why or how, is not mastering, in my opinion ;)
If you want to be a master, then you should at least know how to do it with your bare hands. Moreover, it will also help you if you ever use other GUI toolkits (the main principles of GUI toolkits are more or less the same, imho).
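For what it's worth, "bare hands" doesn't have to mean much code; a complete hand-written Swing program with a layout manager and an event handler looks roughly like this (a minimal sketch, nothing app-specific):

import java.awt.BorderLayout;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import javax.swing.JButton;
import javax.swing.JFrame;
import javax.swing.JLabel;
import javax.swing.SwingUtilities;

public class HelloSwing {
    public static void main(String[] args) {
        // Swing components should be created on the Event Dispatch Thread.
        SwingUtilities.invokeLater(new Runnable() {
            public void run() {
                JFrame frame = new JFrame("Hand-coded Swing");
                final JLabel label = new JLabel("Not clicked yet");
                JButton button = new JButton("Click me");
                button.addActionListener(new ActionListener() {
                    public void actionPerformed(ActionEvent e) {
                        label.setText("Clicked!");
                    }
                });
                frame.setLayout(new BorderLayout());
                frame.add(label, BorderLayout.NORTH);
                frame.add(button, BorderLayout.SOUTH);
                frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                frame.pack();
                frame.setVisible(true);
            }
        });
    }
}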
I am after some advice and pointers on integration testing for a web app. Our project has been running for a number of years, and it is reasonably complex. We are pretty well covered with unit tests, but we are missing a decent set of integration tests. We don't have documented use cases or even a reasonable set of test cases beyond our unit tests. 'Integration testing' today consists of the developer's knowledge of the likely impact of a change and manual, ad-hoc testing of the app. It is really not ideal - we now want to design and automate a solid set of tests to allow us to perform regression testing, and increase our confidence in the quality of the app.
We have finally built a platform (based on Selenium) to allow us to quickly author and automate the execution of the tests. The problem now: we don't have any tests; the page is well and truly blank. The system has around 30 classes which interact with each other and influence the UI. For a new user signing up, there are about 40 properties that can be set, with each one impacting the experience. Over the user's lifetime they will generate even more states. Given so many variables and possible states, it is a daunting prospect to get started, which is probably why it has been neglected thus far.
The pain of not having a decent set of tests is now becoming destructive. I am dedicating time to get this problem fixed; I am after some practical advice on the authoring of the tests. How do you approach it? Do you have any links I may find useful? How can I stop my mind running away with the seemingly infinite number of states for a user's data? How can I flush out the edge cases which are failing (and which our users seem to be finding)?
If it is the sheer number of combinations that is holding you back from generating test cases, you should definitely take a look at all-pairs testing.
We have used PICT from Microsoft as a tool to successfully minimize the number of test cases while still being reasonably confident of having most cases covered.
The reasoning behind all-pairs testing is this: the simplest bugs in a program are generally triggered by a single input parameter. The next simplest category of bugs consists of those dependent on interactions between pairs of parameters, which can be caught with all-pairs testing. Bugs involving interactions between three or more parameters are progressively less common, whilst at the same time being progressively more expensive to find by exhaustive testing, which has as its limit the exhaustive testing of all possible inputs.
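If you try PICT, the input is just a plain-text model listing each parameter and its values; a hypothetical sign-up form might be modelled like this:

Browser:     Chrome, Firefox, IE
AccountType: Free, Premium, Trial
Newsletter:  Yes, No

Running pict over that file prints a set of test cases in which every pair of values appears together at least once, on the order of 9 rows instead of the 3 x 3 x 2 = 18 exhaustive combinations; the saving grows dramatically as you add parameters.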
My coworkers and I were having a discussion about this yesterday. It seems that no matter how well we prepare and no matter how much we test and no matter what the client says immediately before the site becomes public, initial site launches almost always seem to be somewhat rocky. Some clients are better than others, but often things that were just fine during testing suddenly go horribly wrong when the site becomes public.
Is this a common experience? I'm not just talking about functionality breaking down (although that's often a problem as well). I'm also talking about sites that work exactly the way we wanted them to, but suddenly are not satisfactory to the client when it's time to make the site public. And I'm talking about clients that have been familiar with the site during most of the development process. Meaning, the public launch is definitely not the first time they've seen the site.
If you've dealt with this problem before, have you found a way to improve the situation? Or is this just something that will always be somewhat of a problem?
Don't worry. This is completely and entirely normal and happens with every piece of software. Everything that can go wrong will go wrong, and the most volatile entity in the development process, the client, will be the cause of these things.
You could do all the Requirements Gathering in the world, write a 100 page Proposal, provide screenshots and updates to the project hourly and the client will still not approve. On a personal note, I feel that the Internet is one of the worst mediums for this, as designs are a lot more free-flowing nowadays and the client will always have a certain picture in his/her mind; one that won't look like the finished product.
I find that a bulletproof contract with defined stages and sign-off sheets is the best way to handle such a situation. Assuming that your work is contracted, you should ensure that at each stage the client is shown the work and is forced to approve each and every change made. At least that way, if the client wants something changed, you can tell them that they've already signed off that section and the additional work will cost them extra (also defined within the contract).
Not only did this approach work for me, it made the client stop and think about what he/she REALLY wanted. Luckily for me many of my clients are already tech-oriented, so they understand that these things can take time, but those that haven't a clue about Web Development expect things to be perfect within a couple of days. As long as you make sure that everything is covered in the contract the client will think about what they want and won't pester you with issues after.
Of course, anything you can do in regards to Quality Control would be fantastic and help the project move along nicely. Also ensure that some form of methodology is planned out before the project and that this methodology is known by the client(s). Often changes in fundamental areas can be costly and many clients do not seem to realise that a small change can require many things to be changed.
Yes, saw this several times on our projects (human beings are fickle).
Something that helps us in these situations is a good PM/Account Manager that can handle the customer, which makes things a little bit bearable on the technical level.
Web site launches are usually fairly smooth for us. Of course, we do extensive validation including code inspections, deployments to proto-servers (identical to our production servers), and mountains of documentation.
After every launch, we have a meeting to discuss what went well and what didn't so that we can make adjustments to our overall process and best-known-methods documents.
As for clients that change their minds at the last minute... sigh... we minimize that by having them sign off on the beta version. That way, there is no disagreement when the project is launched. If there is a disagreement, there is always a next release.
For what it's worth, the last site launch I did went off without a hitch. Now, it wasn't a high-traffic site, and there were some bugs that I did eventually fix, but there wasn't anything troubling on the day of the actual launch.
This was an ASP.NET/C# site. It wasn't terribly large or complicated, but it wasn't trivial either. Probably the most notable thing is that it was 100% designed, implemented, and tested by myself, from the database schema all the way up to the CSS. It was also my first time using ASP.NET. There were plenty of bumps in development but by the time I launched it I was pretty familiar with them and so knew what to expect.
I think the lesson to be learned from this is to have a good design up-front, solid implementation skills, and good testing, and a new site doesn't have to be a nightmare. There's at least a possibility of a trouble-free launch.
I wouldn't limit your statement to just web sites. I have worked on a lot of projects over the years and there are always details that get "discovered" when going live. No amount of testing removes all the fun things that can happen.
One thing I will say is that what you learn in the first couple of hours of a new system going "on-line" is way more valuable than all the stuff learned during development. It's show time when the really cool problems and scenarios appear. Learn to love them and use these times as learning points for the next time. Then each time it will be just as fun!
We used to have this problem a lot, but much less recently.
Partly for us it is about firmer project management and documenting the specification (as suggested in other answers here) but I believe more of the difference came from:
Expectation management - getting the client to accept that iterative changes are still to be expected after launch, that this is normal and not to worry about it
Increasing authority - we are now a well established (13 years) web developer and we can speak with a lot of expertise
Simply being more experienced - we can now predict in advance most of the queries that are likely to come up, and either resolve them, mitigate them or bring them to the client's attention so they don't sting us on the day
Plus, we rarely do big fanfare launches - a soft launch makes things much less stressful.
My experience is that web site launches are almost always rocky. I've only had two exceptions to this common truth. The first was a site developed for a small business run by one person. This went smoothly because, well, there was only one person to please, so it was fairly easy to track what they wanted. The other was a multi-million dollar website launched by a Fortune 500 company. This happened to go smoothly because there were 2 PMs and a small army of consultants there to manage the needs of the customer. That, coupled with a month of straight application load testing and a 1,000 user beta launch, meant that when the site finally went "live", I was able to get a full night's sleep (which is fairly uncommon). Neither of these situations constitutes the norm, though. Of course, there's nothing better than several thousand beta testers hitting your site to help find those contingencies that you never thought of.
I'm sure you can figure out the kinds of errors that always sneak in. For example, are they due to rather superficial testing, e.g. randomly clicking around and checking whether things appear to be right?
In order to improve, I propose something along the following lines:
Create documents/checklists that specify all testing procedures.
Get regular people to test, not just the folks who built the application.
Setup a staging environment which closely resembles production.
Post-launch, analyze what went wrong and why it went wrong.
Maybe get external QA to check on your procedures.
Now, all those suggestions are of course very obvious but implementing them into your launch procedures will require time.
In general this really is an ongoing process which will help you and your colleagues to improve. And also be happier, because fixing bugs in production just makes you age rapidly. ;-)
Keep in mind, you won't be done the first time. Documents are heavy, which is why people don't read them. People are also lazy and don't follow the procedures. This means that you always have to analyze what happened, go back, and improve the procedures.
If you have the opportunity, I'd also spend some time looking at why nothing went wrong with another launch and comparing it to the usual ones.
Joining an existing team with a large codebase already in place can be daunting. What's the best approach?
Broad; try to get a general overview of how everything links together, from the code
Narrow; focus on small sections of code at a time, understanding how they work fully
Pick a feature to develop and learn as you go along
Try to gain insight from class diagrams and uml, if available (and up to date)
Something else entirely?
I'm working on what is currently an approx 20k line C++ app & library (Edit: small in the grand scheme of things!). In industry I imagine you'd get an introduction by an experienced programmer. However if this is not the case, what can you do to start adding value as quickly as possible?
--
Summary of answers:
Step through code in debug mode to see how it works
Pair up with someone more familiar with the code base than you, taking turns to be the person coding and the person watching/discussing. Rotate partners amongst team members so knowledge gets spread around.
Write unit tests. Start with an assertion of how you think the code will work. If it turns out as you expected, you've probably understood the code. If not, you've got a puzzle to solve and/or an enquiry to make. (Thanks Donal, this is a great answer)
Go through existing unit tests for functional code, in a similar fashion to above
Read UML, Doxygen generated class diagrams and other documentation to get a broad feel of the code.
Make small edits or bug fixes, then gradually build up
Keep notes, and don't jump in and start developing; it's more valuable to spend time understanding than to generate messy or inappropriate code.
This post is a partial duplicate of the-best-way-to-familiarize-yourself-with-an-inherited-codebase.
Start with some small task if possible, debug the code around your problem.
Stepping through code in debug mode is the easiest way to learn how something works.
Another option is to write tests for the features you're interested in. Setting up the test harness is a good way of establishing what dependencies the system has and where its state resides. Each test starts with an assertion about the way you think the system should work. If it turns out to work that way, you've achieved something and you've got some working sample code to reproduce it. If it doesn't work that way, you've got a puzzle to solve and a line of enquiry to follow.
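In code, each such test is a "characterization test": the assertion encodes your current theory about the system. Sketched in JUnit for brevity (Invoice is a hypothetical class from the inherited codebase; the concrete numbers are made up):

import static org.junit.Assert.*;
import org.junit.Test;

public class InvoiceCharacterizationTest {

    @Test
    public void discountIsAppliedBeforeTax() {
        // Theory: a 10% discount on a 100.00 line, then 10% tax, gives 99.00.
        Invoice invoice = new Invoice();   // hypothetical class under study
        invoice.addLine("widget", 100.00);
        invoice.setDiscount(0.10);
        assertEquals(99.00, invoice.totalWithTax(0.10), 0.001);
        // Passes: you've understood it and have working sample code.
        // Fails: you've found your line of enquiry.
    }
}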
One thing that I usually suggest to people, and that has not yet been mentioned, is that it is important to become a competent user of the existing code base before you can be a developer of it. When new developers come into our large software project, I suggest that they spend time becoming expert users before diving in and trying to work on the code.
Maybe that's obvious, but I have seen a lot of people try to jump into the code too quickly because they are eager to start making progress.
This is quite dependent on what sort of learner and what sort of programmer you are, but:
Broad first - you need an idea of scope and size. This might include skimming docs/uml if they're good. If it's a long term project and you're going to need a full understanding of everything, I might actually read the docs properly. Again, if they're good.
Narrow - pick something manageable and try to understand it. Get a "taste" for the code.
Pick a feature - possibly a different one to the one you just looked at if you're feeling confident, and start making some small changes.
Iterate - assess how well things have gone and see if you could benefit from repeating an early step in more depth.
Pairing with strict rotation.
If possible, while going through the documentation/codebase, try to employ pairing with strict rotation. Meaning, two of you sit together for a fixed period of time (say, a 2 hour session), then you switch pairs, one person will continue working on that task while the other moves to another task with another partner.
In pairs you'll both pick up a piece of knowledge, which can then be fed to other members of the team when the rotation occurs. What's good about this also, is that when a new pair is brought together, the one who worked on the task (in this case, investigating the code) can then summarise and explain the concepts in a more easily understood way. As time progresses everyone should be at a similar level of understanding, and hopefully avoid the "Oh, only John knows that bit of the code" syndrome.
From what I can tell about your scenario, you have a good number for this (3 pairs), however, if you're distributed, or not working to the same timescale, it's unlikely to be possible.
I would suggest running Doxygen on it to get an up-to-date class diagram, then going broad-in for a while. This gives you a quickie big picture that you can use as you get up close and dirty with the code.
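If you haven't driven Doxygen before, the basic workflow is to generate a stock config and switch on a few options; roughly this (the graph options need Graphviz installed):

doxygen -g        # writes a default Doxyfile
# then edit the Doxyfile:
EXTRACT_ALL         = YES   # document everything, even uncommented code
HAVE_DOT            = YES   # use Graphviz for the graphs
CLASS_GRAPH         = YES
COLLABORATION_GRAPH = YES
# and run:
doxygen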
I agree that it depends entirely on what type of learner you are. Having said that, I've been at two companies which had very large code-bases to begin with. Typically, I work like this:
If possible, before looking at any of the functional code, I go through unit tests that are already written. These can generally help out quite a lot. If they aren't available, then I do the following.
First, I largely ignore implementation and look only at header files, or just the class interfaces. I try to get an idea of what the purpose of each class is. Second, I go one level deep into the implementation starting with what seems to be the area of most importance. This is hard to gauge, so occasionally I just start at the top and work my way down in the file list. I call this breadth-first learning. After this initial step, I generally go depth-wise through the rest of the code. The initial breadth-first look helps to solidify/fix any ideas I got from the interface level, and then the depth-wise look shows me the patterns that have been used to implement the system, as well as the different design ideas. By depth-first, I mean you basically step through the program using the debugger, stepping into each function to see how it works, and so on. This obviously isn't possible with really large systems, but 20k LOC is not that many. :)
Work with another programmer who is more familiar with the system to develop a new feature or to fix a bug. This is the method that I've seen work out the best.
I think you need to tie this to a particular task. When you have time on your hands, go for whichever approach you are in the mood for.
When you have something that needs to get done, give yourself a narrow focus and get it done.
Get the team to put you on bug fixing for two weeks (if you have two weeks). They'll be happy to get someone to take responsibility for that, and by the end of the period you will have spent so much time problem-solving with the library that you'll probably know it pretty well.
If it has unit tests (I'm betting it doesn't), start small and make sure the unit tests don't fail. If you stare at the entire codebase at once, your eyes will glaze over and you will feel overwhelmed.
If there are no unit tests, you need to focus on the feature that you want. Run the app and look at the results of things that your feature should affect. Then start looking through the code trying to figure out how the app creates the things you want to change. Finally change it and check that the results come out the way you want.
You mentioned it is an app and a library. First change the app and stick to using the library as a user. Then after you learn the library it will be easier to change.
From a top-down approach, the app probably has a main loop or a main GUI that controls all the action, and it is worth understanding that main control flow. Read the code to give yourself a broad overview of the main flow of the app. If it is a GUI app, sketch on paper which screens there are and how to get from one screen to another. If it is a command-line app, work out how the processing is done.
Even in companies it is not unusual to have this approach. Often no one fully understands how an application works, and people don't have time to show you around. They prefer specific questions about specific things, so you have to dig in and experiment on your own. Then, once you have a specific question, you can try to isolate the source of knowledge for that piece of the application and ask.
Start by understanding the 'problem domain' (is it a payroll system? inventory? real time control or whatever). If you don't understand the jargon the users use, you'll never understand the code.
Then look at the object model; there might already be a diagram, or you might have to reverse engineer one (either manually or using a tool as suggested by Doug). At this stage you could also investigate the database (if any); it should follow the object model, but it may not, and that's important to know.
Have a look at the change history or bug database, if there's an area that comes up a lot, look into that bit first. This doesn't mean that it's badly written, but that it's the bit everyone uses.
Lastly, keep some notes (I prefer a wiki).
The existing guys can use it to sanity check your assumptions and help you out.
You will need to refer back to it later.
The next new guy on the team will really thank you.
I had a similar situation. I'd say you go like this:
If it's a database-driven application, start from the database and try to make sense of each table, its fields, and then its relations to the other tables.
Once you're fine with the underlying store, move up to the ORM layer. Those tables must have some kind of representation in code.
Once done with that, move on to how and where these objects come from. An interface? What interface? Any validations? What preprocessing takes place on them before they go to the datastore?
This will familiarize you better with the system. Remember that trying to write or understand unit tests is only possible when you know very well what is being tested and why it needs to be tested in only that way.
And in the case of a large application that is not database-driven, I'd recommend another approach:
What is the main goal of the system?
What are the major components of the system that solve this problem?
What interactions do the components have among themselves? Make a graph that depicts component dependencies. Ask someone already working on it. These components must be exchanging something with each other, so try to figure those out as well (e.g. IO might be returning a File object back to the GUI, and the like).
Once comfortable with this, dive into the component that is least dependent on the others. Now study how that component is further divided into classes and how they interact with each other. This way you get the hang of a single component in total.
Move to the next least dependent component.
At the very end, move to the core component, which typically has dependencies on many of the other components you've already tackled.
While looking at the core component, you might be referring back to the components you examined earlier, so don't worry, keep working hard!
For the first strategy:
Take the example of this Stack Overflow site, for instance. Examine the datastore: what is being stored, how it is stored, what representations those items have in the code, and how and where they are presented on the UI. Where do they come from, and what processing takes place on them on their way back to the datastore?
For the second one
Take a word processor, for example. What components are there? IO, UI, Page, and the like. How do these interact with each other? Move along as you learn further.
Be relaxed. Written code is someone's mindset, frozen logic, and thinking style, and it takes time to read that mind.
First, if you have team members available who have experience with the code you should arrange for them to do an overview of the code with you. Each team member should provide you with information on their area of expertise. It is usually valuable to get multiple people explaining things, because some will be better at explaining than others and some will have a better understanding than others.
Then, you need to start reading the code for a while without any pressure (a couple of days or a week if your boss will provide that). It often helps to compile/build the project yourself and be able to run the project in debug mode so you can step through the code. Then, start getting your feet wet, fixing small bugs and making small enhancements. You will hopefully soon be ready for a medium-sized project, and later, a big project. Continue to lean on your team-mates as you go - often you can find one in particular who is willing to mentor you.
Don't be too hard on yourself if you struggle - that's normal. It can take a long time, maybe years, to understand a large code base. Actually, it's often the case that even after years there are still some parts of the code that are still a bit scary and opaque. When you get downtime between projects you can dig in to those areas and you'll often find that after a few tries you can figure even those parts out.
Good luck!
You may want to consider looking at source code reverse engineering tools. There are two tools that I know of:
SWAG Kit (Linux only)
Bauhaus (academic and commercial versions)
Both tools offer similar feature sets that include static analysis that produces graphs of the relations between modules in the software.
This mostly consists of call graphs and type/class dependencies. Viewing this information should give you a good picture of how the parts of the code relate to one another. Using this information, you can dig into the actual source for the parts that you are most interested in and that you need to understand/modify first.
I find that just jumping into code can be a bit overwhelming. Try to read as much documentation on the design as possible. This will hopefully explain the purpose and structure of each component. It's best if an existing developer can take you through it, but that isn't always possible.
Once you are comfortable with the high-level structure of the code, try to fix a bug or two. This will help you get to grips with the actual code.
I like all the answers that say you should use a tool like Doxygen to get a class diagram, and first try to understand the big picture. I totally agree with this.
That said, this largely depends on how well factored the code is to begin with. If it's a gigantic mess, it's going to be hard to learn. If it's clean and organized properly, it shouldn't be that bad.
See this answer on how to use test coverage tools to locate the code for a feature of interest, without knowing anything about where that feature is, or how it is spread across many modules.
(shameless marketing ahead)
You should check out nWire. It is an Eclipse plugin for navigating and visualizing large codebases. Many of our customers use it to break in new developers by printing out visualizations of the major flows.