What is Code Coverage? - language-agnostic

I have three questions:
What is code coverage?
What is it good for?
What tools are used for analyzing code coverage?

You can get very good information from these SO questions:
Free code coverage tools
What is Code Coverage and how do YOU measure it?
Code coverage is a measurement of how many lines/blocks/arcs of your code are executed while the automated tests are running. CC is collected by using a specialized tool to instrument the binaries to add tracing calls and running a full set of automated tests against the instrumented product. A good CC tool will give you not only the percentage of the code that is executed, but will also allow you to drill into the data and see exactly which lines of code were executed during a particular test.
Code coverage techniques were first created to assess a program by looking directly at its source code. Code coverage belongs to the structural testing category because the assertions are made on the internal parts of the program and not on the system outputs. Therefore code coverage aims at finding parts of the code that are not exercised by the tests.
http://www.stickyminds.com/sitewide.asp?Function=edetail&ObjectType=ART&ObjectId=7580
(Image: software lifecycle diagram from codecoveragetools.com)
It's good for measuring several kinds of coverage:
- Function coverage, which finds how many functions or procedures were executed.
- Statement or line coverage, which identifies how many lines of the source code have been executed.
- Condition or decision coverage, which answers how many of the branch conditions (the decisions in ifs and loops) were executed (the sketch after this list shows how it differs from line coverage).
- Path coverage, which focuses on whether every possible path from a given starting point in the code has been executed.
- Entry and exit coverage, which finds how many functions (C/C++, Java) or procedures (Pascal) were executed from beginning to end.
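To make the difference between line and branch coverage concrete, here is a minimal JUnit 4 sketch (the clamp method and test names are made up for illustration):

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    public class ClampTest {

        // Method under test (illustrative).
        static int clamp(int value, int max) {
            int result = value;
            if (value > max) {
                result = max;
            }
            return result;
        }

        @Test
        public void clampsWhenOverMax() {
            // This single test executes every line of clamp(), so statement/line
            // coverage reports 100% -- yet only the "true" outcome of the if is
            // exercised, so branch (decision) coverage is only 50%.
            assertEquals(3, clamp(5, 3));
        }

        // A second test, e.g. assertEquals(2, clamp(2, 3)), would be needed to
        // exercise the "false" outcome as well.
    }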
TOOLS
http://www.codecoveragetools.com/
http://java-source.net/open-source/code-coverage
http://www.codecoveragetools.com/index.php/coverage-process/code-coverage-tools-java.html
http://open-tube.com/10-code-coverage-tools-c-c/
http://csharp-source.net/open-source/code-coverage
http://www.kdedevelopers.org/node/3190

From the Wikipedia article:
Code coverage is a measure used in software testing. It describes the degree to which the source code of a program has been tested. It is a form of testing that inspects the code directly and is therefore a form of white box testing. Currently, the use of code coverage is extended to the field of digital hardware, the contemporary design methodology of which relies on hardware description languages (HDLs).
Advocating the use of code coverage
A code coverage tool simply keeps track of which parts of your code get executed and which parts do not. Usually, the results are granular down to the level of each line of code. So in a typical situation, you launch your application with a code coverage tool configured to monitor it. When you exit the application, the tool will produce a code coverage report which shows which lines of code were executed and which ones were not. If you count the total number of lines which were executed and divide by the total number of lines which could have been executed, you get a percentage. If you believe in code coverage, the higher the percentage, the better. In practice, reaching 100% is extremely rare.
The use of a code coverage tool is usually combined with the use of some kind of automated test suite. Without automated testing, a code coverage tool merely tells you which features a human user remembered to use. Such a tool is far more useful when it is measuring how complete your test suite is with respect to the code you have written.
Related articles
The Future of Code-Coverage Tools
The effectiveness of code coverage tools in software testing
Tools
Open Source Code Coverage Tools in Java

Code coverage is a metric showing how "well" the source code is tested. There are several types of code coverage: line coverage, function coverage, branch coverage.
In order to measure the coverage, you run the application either manually or through automated tests.
Tools can be divided into two categories:
- the ones that run the compiled code in a modified environment (like the debugger), counting the required points (functions, lines, etc.);
- the ones that require special compilation - in this case the resulting binary already contains the code which actually does the counting.
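As a rough sketch of what the second category does conceptually: real tools insert equivalent probes into the bytecode or object code and write the results to a report file; the Coverage-style bookkeeping below is purely illustrative, not any particular tool's output.

    import java.util.BitSet;

    // A toy illustration of what "instrumented" code does.
    public class InstrumentationSketch {

        // Toy probe store; real tools persist this data to a coverage report.
        static final BitSet HITS = new BitSet();
        static void hit(int probeId) { HITS.set(probeId); }

        // The method as originally written.
        static int max(int a, int b) {
            if (a > b) {
                return a;
            }
            return b;
        }

        // The same method with probes inserted (what the tool effectively runs).
        static int maxInstrumented(int a, int b) {
            hit(1);                 // method entered
            if (a > b) {
                hit(2);             // "then" branch taken
                return a;
            }
            hit(3);                 // "else" path taken
            return b;
        }

        public static void main(String[] args) {
            maxInstrumented(5, 3);
            // Probe 3 was never hit, so the "else" path is reported as uncovered.
            System.out.println("covered probes: " + HITS);
        }
    }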
There are several tools for measuring and visualizing the result; which ones are available depends on the platform and on the source code's language.
Please read the article on Wikipedia.
To recommend tools, please specify which OS and language you use.

Code coverage is a measure used in software testing. It describes the degree to which the source code of a program has been tested.
http://en.wikipedia.org/wiki/Code_coverage
The Wikipedia definition is pretty good, but in my own words, code coverage tells you how much of your code is exercised by automated testing. 100% would mean that every single line of code in your application is covered by a unit test.
NCover is an application for .NET

The term refers to how well your program is covered by your tests. See the following wikipedia article for more info:
http://en.wikipedia.org/wiki/Code_coverage

The other answers already cover what Code Coverage is. The thing I'd like to stress is that you need to be careful not to treat high coverage as implicitly meaning you've tested all scenarios. It doesn't necessarily say how well you've tested the code or the quality of your tests, just that you've hit a certain percentage of the code as part of the tests running.
High Code Coverage does not necessarily mean High Test Quality, but High Test Quality does mean High Code Coverage
In practice, I usually aim for 90-95% code coverage which is often achievable. The last few % are often too expensive to be worth trying to hit.

There are many ways to develop applications. Among those are "Extreme Programming" and "Test-Driven Development" (TDD). They state that all code should be tested. Code coverage is a means of measuring how much is tested.
I'd like to make a small remark about this: I don't think all code should be tested, nor that one should set a specific percentage of code coverage. Neither do I think that code shouldn't be tested with unit tests (code testing code). I do think one should decide what makes sense to test. For this reason I generally don't use code coverage.
One thing that some tools provide is highlighting the parts that are tested. This way you might run into some code that isn't tested but actually should be, which is the only thing I use it for.

Good answers.
My two cents is that there is no method of testing that catches all errors, but less testing will never catch more errors, so any testing is good. To my mind, coverage testing is not to show what code has been exercised, but to show what code has not been exercised, because that is where bugs love to lurk.
If you combine it with single-stepping, it is a very good way to review code and catch bugs. Here's an example.

Another useful tool for ensuring code quality (which encompasses code coverage) that I recently used is Sonar.
Here is the link http://www.sonarqube.org/

Related

AVR-Studio how to output?

I do not have experience with microcontrollers, but I have something related to them. Here is an explanation of my issue:
I have an algorithm, and I want to calculate how many cycles it would cost on a specific AVR microcontroller.
To do that I downloaded AVR Studio 6 and used the simulator. I succeeded in obtaining the number of cycles for my algorithm. What I want to know is how I can make sure that my algorithm is working as it should. AVR Studio allows me to debug using the simulator, but I am not able to see the output of my algorithm.
To simplify my question, I would like some help in implementing the hello world example in AVR Studio; that is, I want to see "hello world" in the output window, if that is possible.
My question is not how to program the microcontroller; it is how I can see the output of a program in AVR Studio.
Many thanks
As Hanno Binder suggested in his comment:
Atmel Studio still does not provide any means to display debug messages sent by the simulated program. Your only option is to place breakpoints at appropriate locations and then inspect the state of the device in the simulator: for example, the locations in RAM where your result is stored, or the registers in which it may reside; maybe have a 'watch' set on a variable or expression.
I think this is the best answer: watch variables and memory while in debug mode.
Note: turn off optimization when you want to debug for information, or some variables will be optimized away.
The best way to test whether an algorithm works is to run it in a regular PC program, feed it with data, and compare the results with the ground truth.
Clearly, to be able to do this, a good programming style is necessary, one that separates hardware-related tasks from the actual data processing. Additionally, you have to keep architectural differences in mind (e.g. int = 16 bit vs. int = 32 bit; use inttypes.h).

How to ensure quality of junit tests?

Are there proven ways of verifying the quality of JUnit tests or integration tests?
Should your business analyst review unit tests to certify them? Or are there any other ways?
In the traditional code-first environment a peer or lead would review the test plan, but how about automated tests?
I looked at this Stack Overflow thread but couldn't extract anything meaningful.
Thoughts?
Mutation testing and code coverage can verify the quality of your tests.
So first check that your code coverage is high enough. After this, verify with mutation testing that your tests are good. A mutation testing tool makes small change(s) in the production code and reruns the tests; after such a modification, a good test suite should fail. For a mutation testing tool in Java, look at PIT Mutation Testing and this blog post: Introduction to mutation testing with PIT and TestNG.
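As a minimal illustration (the isAdult method and test names are made up): a tool like PIT might mutate the >= below into >, and a boundary-value test is what kills that mutant.

    import static org.junit.Assert.assertFalse;
    import static org.junit.Assert.assertTrue;

    import org.junit.Test;

    public class AgeCheckTest {

        // Production code under test, inlined here to keep the sketch self-contained.
        static boolean isAdult(int age) {
            return age >= 18;          // a mutation tool may change ">=" into ">"
        }

        @Test
        public void adultAtExactlyEighteen() {
            // Boundary assertion: the ">=" -> ">" mutant returns false for 18,
            // so this test fails and the mutant is reported as "killed".
            assertTrue(isAdult(18));
        }

        @Test
        public void minorJustBelowEighteen() {
            assertFalse(isAdult(17));
        }
    }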
But this is still not enough; tests should be well written and readable. So you also need code review and quality-rules verification for the tests. I recommend a nice book about writing good tests, Practical Unit Testing. Chapter 10, "Maintainable tests", from this book is available for free.
Here's a nice linked article:
http://www.ibm.com/developerworks/java/library/j-cq01316/index.html?ca=drs
And:
Good Tests ⇒ High Coverage
High Coverage ⇏ Good Tests
Coverage tools are useful for identifying which areas of your project need more attention, but good coverage in an area doesn't mean that area needs no further attention.
A code coverage tool is a good start, but knowing that a given line was executed does not mean it was tested. The infamous test cases without assertions, or with expected = Exception.class, are an example.
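A hedged JUnit 4 sketch of what such tests look like; both pass and raise the coverage numbers while verifying almost nothing:

    import java.util.ArrayList;
    import java.util.List;

    import org.junit.Test;

    public class CoverageWithoutTestingTest {

        @Test
        public void looksLikeATestButChecksNothing() {
            List<String> names = new ArrayList<>();
            names.add("alice");
            names.add("bob");
            // Plenty of lines executed, so coverage goes up -- but there is no
            // assertion, so the test can only fail if an exception escapes.
        }

        @Test(expected = Exception.class)
        public void tooBroadExpectedException() {
            // Passes for *any* exception, so it says nothing about which
            // failure mode was actually triggered.
            new ArrayList<String>().get(0);   // throws IndexOutOfBoundsException
        }
    }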
I can imagine a few criteria on this level:
- if a line is tested, any change to it (inverting a condition, removing it, ...) should make at least one test fail
- a given piece of logic should be fully reconstructible based only on its tests
- the test does not mirror the production code
- the test should not be dependent on the current date, locale, timezone, or order of other tests
One might try to automate the first one; the others are more or less subjective.
As for an analyst doing test review: probably only FitNesse fixtures are readable enough to satisfy non-developers.
Code review is the best way to ensure test quality. I would not have business analysts review the tests, for the simple fact that they might not have the training necessary to understand the tests. Also, unit tests do not all live at the functional level, where analysts' requirements are. An analyst might say 'when the user clicks save, the profile is saved' whereas you might have to write n number of tests across multiple layers to get that functionality.
You might consider code coverage tools to ensure 100% of the code lines are being tested. Emma is a good tool for Java (http://emma.sourceforge.net/).

Unit testing an HTML parser/cleaner?

I'm trying to choose between a couple of different HTML parsers for a project I am working on, part of which accepts HTML input from the client.
I've built a simple automated test for each one, to see if they fit my needs. I have a large number of real-life HTML fragments to test, but they aren't enough for testing for safety, since they (probably) do not contain any malicious code.
I don't mind reviewing the outputs by hand.
My question is, is there a freely available database or list of HTML snippets containing malformed HTML and scripts intended for testing for XSS?
The ha.ckers XSS cheatsheet is pretty comprehensive, and was the catalyst for me to build a whitelist based sanitiser into jsoup.
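A minimal sketch of that kind of clean-and-assert check with jsoup's Jsoup.clean API (the payload and test name are illustrative; newer jsoup releases rename Whitelist to Safelist):

    import static org.junit.Assert.assertFalse;

    import org.jsoup.Jsoup;
    import org.jsoup.safety.Whitelist;
    import org.junit.Test;

    public class SanitizerSmokeTest {

        @Test
        public void stripsScriptFromPastedHtml() {
            // Payload is illustrative; in practice you would loop over a corpus
            // such as the ha.ckers.org XSS cheatsheet entries.
            String dirty = "<p>hello<script>alert('xss')</script></p>";

            String clean = Jsoup.clean(dirty, Whitelist.basic());

            assertFalse(clean.toLowerCase().contains("<script"));
        }
    }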
Google's home page seems to be malformed, maybe you can use that?
http://validator.w3.org/check?uri=www.google.com&charset=%28detect+automatically%29&doctype=Inline&group=0
http://www.codinghorror.com/blog/2006/11/its-a-malformed-world.html
I built html-sanitizer-testbed for exactly this purpose. It consists of two components:
A suite of tests designed to check the security of an HTML sanitizer. I have collected every tricky case I've been able to find. It includes everything on the ha.ckers.org XSS cheatsheet, as well as many other test cases I've collected over the years. I've analyzed dozens of HTML sanitizers (most of them were vulnerable) and added a test case for every security vulnerability I've ever found, so this is a pretty nice collection.
Also, it provides some test automation functionality, so that you don't need to review the outputs by hand: you can fire up a browser and check whether the browser seems to have executed any Javascript in the outputs of the sanitizer (in which case the sanitizer is broken). This part is not 100% reliable and comes with no guarantees whatsoever, so for maximum effectiveness, you might want to review the outputs by hand. However, it has worked pretty well for me so far.
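Roughly, that browser check could be automated with Selenium WebDriver along these lines (a sketch, not the testbed's actual code; the file path is a placeholder and this only catches payloads that trigger a blocking alert):

    import org.openqa.selenium.NoAlertPresentException;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.firefox.FirefoxDriver;

    public class SanitizerBrowserCheck {

        public static void main(String[] args) {
            WebDriver driver = new FirefoxDriver();
            try {
                // Placeholder path: wherever you wrote the sanitizer's output.
                driver.get("file:///tmp/sanitized-output.html");

                try {
                    driver.switchTo().alert();   // an alert means script executed
                    System.out.println("FAIL: sanitizer let JavaScript through");
                } catch (NoAlertPresentException expected) {
                    System.out.println("OK: no alert fired on this sample");
                }
            } finally {
                driver.quit();
            }
        }
    }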
I welcome feedback and contributions.

What is the best practice to write Selenium-based integration testing from zero for a complex application?

I am after some advice and pointers on integration testing for a web app. Our project has been running for a number of years, and it is reasonably complex. We are pretty well covered with unit tests, but we are missing a decent set of integration tests. We don't have documented use cases or even a reasonable set of test cases beyond our unit tests. 'Integration testing' today consists of the developer's knowledge of the likely impact of a change and manual, ad-hoc testing of the app. It is really not ideal - we now want to design and automate a solid set of tests to allow us to perform regression testing, and increase our confidence in the quality of the app.
We have finally built a platform (based on Selenium) to allow us to quickly author and automate the execution of the tests. The problem now: we don't have any tests, the page is well and truly blank. The system has around 30 classes which interact with each other and influence the UI. For a new user signing up, there are about 40 properties that can be set, with each one impacting the experience. Over the user's lifetime they will generate even more states. Given so many variables and possible states, it is a daunting prospect to get started, which is probably why it has been neglected thus far.
The pain of not having a decent set of tests is now becoming destructive. I am dedicating time to get this problem fixed - I am after some practical advice on the authoring of the tests. How do you approach it? Do you have any links I may find useful? How can I stop my mind running away with the seemingly infinite number of states for a user's data? How can I flush out the edge cases which are failing (and which our users seem to be finding)?
If it is the sheer number of combinations that is holding you back from generating test cases, you should definitely take a look at all-pairs testing.
We have used PICT from Microsoft as a tool to successfully minimize the number of test cases while still being reasonably confident that most cases are covered.
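For example, a PICT model is just a plain-text file listing each parameter and its values (the parameters below are made up for illustration):

    AccountType: Free, Trial, Paid
    Locale:      en, de, ja
    Newsletter:  Yes, No
    PaymentCard: None, Visa, Amex

Feeding this file to the pict command-line tool prints a set of test cases in which every pair of values (for example Trial + ja, or Paid + No) appears at least once, which is far fewer rows than the full 3 x 3 x 2 x 4 = 72 combinations.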
The reasoning behind all-pairs testing is this: the simplest bugs in a program are generally triggered by a single input parameter. The next simplest category of bugs consists of those dependent on interactions between pairs of parameters, which can be caught with all-pairs testing. Bugs involving interactions between three or more parameters are progressively less common, whilst at the same time being progressively more expensive to find by exhaustive testing, which has as its limit the exhaustive testing of all possible inputs.

Design By Contract and Test-Driven Development [closed]

I'm working on improving our group's development process, and I'm considering how best to implement Design By Contract with Test-Driven Development. It seems the two techniques have a lot of overlap, and I was wondering if anyone had some insight on the following (related) questions:
Isn't it against the DRY principle to have TDD and DbC unless you're using some kind of code generator to generate the unit tests based on contracts? Otherwise, you have to maintain the contract in two places (the test and the contract itself), or am I missing something?
To what extent does TDD make DbC redundant? If I write tests well enough, aren't they equivalent to writing a contract? Do I only get added benefit if I enforce the contract at run time as well as through the tests?
Is it significantly easier/more flexible to only use TDD rather than TDD with DbC?
The main point of these questions is this more general question: If we're already doing TDD properly, will we get a significant benefit for the overhead if we also use DbC?
A couple of details, though I think the question is largely language-agnostic:
Our team is very small, <10 programmers.
We mostly use Perl.
Note the differences.
Design driven by contract. Contract-Driven Design.
Development driven by tests. Test-Driven Development.
They are related in that one precedes the other. They describe software at different levels of abstraction.
Do you discard the design when you go to implementation? Do you consider that a design document is a violation of DRY? Do you maintain the contract and the code separately?
Software is one implementation of the contract. Tests are another. User's manual is a third. Operations guide is a fourth. Database backup/restore procedures are one part of an implementation of the contract.
I can't see any overhead from Design by Contract.
If you're already doing design, then you change the format from too many words to just the right words to outline the contractual relationship.
If you're not doing design, then writing a contract will eliminate problems, reducing cost and complexity.
I can't see any loss of flexibility.
Start with a contract,
then
a. write tests and
b. write code.
See how the two development activities are essentially intertwined and both come from the contract.
I think there is overlap between DbC and TDD; however, I don't think there is duplicated work: introducing DbC will probably result in a reduction of test cases.
Let me explain.
In TDD, tests aren't really tests. They are behavioral specifications. However, they are also design tools: by writing the test first, you use the external API of your object under test – that you haven't actually written yet – in the same way that a user would. That way, you design the API in a way that makes sense to a user, and not in the way that makes it easiest for you to implement. Something like queue.full? instead of queue.num_entries == queue.size.
This second part cannot be replaced by Contracts.
The first part can be partially replaced by contracts, at least for unit tests. TDD tests serve as specifications of behavior, both to other developers (unit tests) and domain experts (acceptance tests). Contracts also specify behavior, to other developers, to domain experts, but also to the compiler and the runtime library.
But contracts have fixed granularity: you have method pre- and postconditions, object invariants, module contracts and so on. Maybe loop variants and invariants. Unit tests, however, test units of behavior. Those might be smaller than a method or consist of multiple methods. That's not something you can do with contracts. And for the "big picture" you still need integration tests, functional tests and acceptance tests.
And there is another important part of TDD that DbC doesn't cover: the middle D. In TDD, tests drive your development process: you never write a single line of implementation code unless you have a failing test, you never write a single line of test code unless your tests all pass, you only write the minimal amount of implementation code to make the tests pass, you only write the minimal amount of test code to produce a failing test.
In conclusion: use tests to design the "flow", the "feel" of the API. Use contracts to design the, well, contract of the API. Use tests to provide the "rhythm" for the development process.
Something like this:
1. Write an acceptance test for a feature
2. Write a unit test for a unit that implements a part of that feature
3. Using the method signature you designed in step 2, write the method prototype
4. Add the postcondition
5. Add the precondition
6. Implement the method body
7. If the acceptance test passes, goto 1, otherwise goto 2
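A hedged sketch of steps 3-6 in Java, using plain assert statements for the pre- and postconditions since Java has no built-in contract syntax (Account and deposit are illustrative names; run with -ea so the assertions are enabled):

    // --- Account.java (illustrative) ---
    public class Account {

        private long balanceCents;

        public long getBalanceCents() {
            return balanceCents;
        }

        public void deposit(long amountCents) {
            // Precondition (step 5): callers must pass a positive amount.
            assert amountCents > 0 : "amount must be positive";

            long before = balanceCents;

            // Implementation (step 6).
            balanceCents += amountCents;

            // Postcondition (step 4): the balance grew by exactly the amount.
            assert getBalanceCents() == before + amountCents;
        }
    }

    // --- AccountTest.java: the unit test from step 2 drives the same behavior ---
    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    public class AccountTest {

        @Test
        public void depositIncreasesBalance() {
            Account account = new Account();
            account.deposit(250);
            assertEquals(250, account.getBalanceCents());
        }
    }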
If you want to know what Bertrand Meyer, the inventor of Design by Contract, thinks about combining TDD and DbC, there is a nice paper by his group, called Contract-Driven Design = Test-Driven Development - Writing Test Cases. The basic premise is that contracts provide an abstract representation of all possible cases, whereas test cases only test specific cases. Therefore, a suitable test harness can be automatically generated from the contracts.
I would add:
the API is the contract for the programmers, the UI definition is the contract with the clients, the protocol is the contract for client-server interactions. Get those first, then you can take advantage of parallel development tracks and not get lost in the weeds. Yes, periodically review to make sure requirements are met, but never start a new track without the contract. And 'contract' is a strong word: once deployed, it must never change. You should include versioning management and introspection from the get-go, changes to the contract are only implemented by extension sets, version numbers change with these, and then you can do things like graceful degradation when dealing with mixed or old installations.
I learned this lesson the hard way, with a large project that wandered off into never-never land, then applied it the right way later when seriously under the gun, company-survival, short-fuse timeline. We defined the protocol, defined and wrote a set of protocol emulations for each side of the transactions (basically canned message generators and a received message checker, one evening's worth of two-brained coding), then parted to separately write the server and client ends of the app. We recombined the night of the show, and it just worked. Requirements, design, contract, test, code, integrate. In that order. Repeat until baked.
I am a little leery of design by TLA. As with Patterns, buzz-word compliant recipes are a good guide, but it is my experience that there is no such thing as a one-size-fits-all design or project management procedure. If you are doing things precisely By The Book (tm) then, unless it is a DOD contract with DOD procedural requirements, you will probably get into trouble somewhere along the way. Read the Book(s), yes, but be sure to understand them, and then also take into account the people side of your team. Rules that are only enforced by the Book will not get enforced uniformly - even when tool-enforced there can be drop-outs (e.g. svn comments left empty or cryptically brief). Procedures only tend to get followed when the tool chain not only enforces them but makes following them easier than any possible short-cuts. Believe me, when the going gets tough, the short-cuts get found, and you may not know about the ones that got used at 3am until it is too late.
You can also use executable acceptance tests that are written in the domain language of the contract. It might not be the actual "contract", but half way between unit tests and the contract.
I would recommend using Ruby Cucumber
http://github.com/aslakhellesoy/cucumber
But since you are a Perl shop, maybe you can use my own small attempt at p5-cucumber.
http://github.com/kesor/p5-cucumber
Microsoft has done work on automatic generation of unit tests, based on code contracts and parameterized unit tests. E.g. the contract says the count must be increased by one when an item is added to a collection, and the parameterized unit test says how to add "n" items to a collection. Pex will then try to create a unit test that proves the contract is broken. See this video for an overview.
If this works, your unit tests will only have to be written for one example of each thing you are trying to test, and Pex will then be able to work out which data items will break the test.
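Pex and Code Contracts are .NET-specific, but the shape of such a parameterized test is easy to sketch. A rough Java equivalent of "adding n items must raise the count by n" might look like this (an exploration tool, or a property-based testing library, would then search for an n that breaks it):

    import java.util.ArrayList;
    import java.util.List;

    public class CollectionContractCheck {

        // Parameterized unit test: the contract must hold for any n >= 0;
        // a tool would generate the values of n instead of the hand-picked ones below.
        static void addingNItemsIncreasesCountByN(int n) {
            List<Integer> items = new ArrayList<>();
            int before = items.size();

            for (int i = 0; i < n; i++) {
                items.add(i);
            }

            if (items.size() != before + n) {
                throw new AssertionError("contract violated for n = " + n);
            }
        }

        public static void main(String[] args) {
            // Hand-picked samples; an exploration tool would search the input space.
            addingNItemsIncreasesCountByN(0);
            addingNItemsIncreasesCountByN(1);
            addingNItemsIncreasesCountByN(1000);
            System.out.println("contract held for all sampled n");
        }
    }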
I had some ruminations about that topic some time ago.
You may want to take a look at
http://gleichmann.wordpress.com/2007/12/09/test-driven-development-and-design-by-contract-friend-or-foe/
When you are using TDD to implement a new method, you need some input: you need to know the assertions to check in your tests. Design-by-contract gives you those assertions: they are the post-conditions and invariants of the method.
I have found DbC very handy for jumpstarting the red-green-refactor cycle because it helps in identifying unit tests to start with. With DbC I start thinking about pre-conditions that the object being TDD-ed must handle and each pre-condition might represent a failing unit test to start a red-green-refactor cycle. At some point I switch to start the cycle with a failing unit test for a post-condition, and then just keep the TDD flow going. I have tried this approach with newcomers to TDD and it really works in kickstarting the TDD mindset.
In summary, think of DbC as an effective way to identify key behavioral unit tests. DbC helps at analyzing inputs (pre-conditions) and outputs (post-conditions), which are the two things that we need to control (inputs) and observe (outputs) to write testable software (a similar aim to that of TDD).