How to do integration testing? - language-agnostic

There is so much written about unit testing, but I have hardly found any books/blogs about integration testing. Could you please suggest something for me to read on this topic?
What tests should I write when doing integration testing?
What makes a good integration test?
etc etc
Thanks

Anything written by Kent Beck, father of both JUnit and SUnit, is a great place to start (for unit tests / test writing in general). I'm assuming that you don't mean "continuous integration," which is a process-based build approach (very cool, when you get it working).
In my own experience, integration tests look very similar to regular unit tests, simply at a higher level. More mock objects. More state initialization.
I believe that integration tests are like onions. They have layers.
Some people prefer to "integrate" all of their components and test the "whole" product as the "integration" test. You can certainly do this, but I prefer a more incremental approach. If you start low-level and then keep testing at higher composition layers, you will achieve integration testing.
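To make the incremental idea concrete, here is a hedged sketch in Java/JUnit (all class names are made up for illustration): the lowest layer would unit-test each class with its collaborators mocked, while the next layer up wires two real components together and only substitutes the truly external resource.

import static org.junit.jupiter.api.Assertions.assertEquals;

import java.util.HashMap;
import java.util.Map;

import org.junit.jupiter.api.Test;

// Hypothetical components, used only to illustrate testing one composition layer up.
interface OrderRepository {
    void save(String id, int total);
    Integer findTotal(String id);
}

class InMemoryOrderRepository implements OrderRepository {
    private final Map<String, Integer> totals = new HashMap<>();
    public void save(String id, int total) { totals.put(id, total); }
    public Integer findTotal(String id) { return totals.get(id); }
}

class OrderService {
    private final OrderRepository repository;
    OrderService(OrderRepository repository) { this.repository = repository; }
    void placeOrder(String id, int total) { repository.save(id, total); }
    int totalFor(String id) { return repository.findTotal(id); }
}

class OrderServiceIntegrationTest {
    @Test
    void serviceAndRepositoryWorkTogether() {
        // One layer above a pure unit test: a real service wired to a real (in-memory)
        // repository, with only the truly external resource (the database) swapped out.
        OrderService service = new OrderService(new InMemoryOrderRepository());
        service.placeOrder("A-1", 42);
        assertEquals(42, service.totalFor("A-1"));
    }
}

The next layer would then wire in the real database, then the real web layer, and so on, until you reach the whole-product test.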

Maybe it is generally harder to find information on integration testing because it is much more specific to the actual application and its business use. Nevertheless, here's my take on it.
What applies to unit tests also applies to integration tests: modules should have an easy way to mock their external inputs (files, DB, time...), so that they can be tested together with the other unit tests.
But what I've found extremely useful, at least for data-oriented applications, is to be able to create a "console" version of the application that takes input files which fully determine its state (no dependencies on databases, network resources...), and outputs the result as another file. One can then maintain pairs of input / expected-result files, and test for regressions as part of nightly builds, for example. Having this console version allows for easier scripting, and makes debugging much easier, as one can rely on a very stable environment where it is easy to reproduce bugs and to run the debugger.
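A minimal sketch of that idea in Java (the tiny ConsoleApp below is a stand-in I made up; the real application would do the actual computation): the program's state comes entirely from an input file, the result goes entirely to an output file, and the regression test simply compares the output with a stored expected result.

import static org.junit.jupiter.api.Assertions.assertEquals;

import java.nio.file.Files;
import java.nio.file.Path;

import org.junit.jupiter.api.Test;

// Hypothetical "console" version of the application: no database, no network --
// everything it needs is in the input file, everything it produces goes to the output file.
class ConsoleApp {
    public static void main(String[] args) throws Exception {
        Path input = Path.of(args[0]);
        Path output = Path.of(args[1]);
        // Stand-in for the real computation: sum one integer per input line.
        long sum = Files.readAllLines(input).stream()
                .mapToLong(Long::parseLong)
                .sum();
        Files.writeString(output, Long.toString(sum));
    }
}

class ConsoleRegressionTest {
    @Test
    void outputMatchesTheStoredExpectedResult() throws Exception {
        Path input = Files.createTempFile("case1", ".in");
        Files.writeString(input, "1\n2\n39");
        Path actual = Files.createTempFile("case1", ".out");

        ConsoleApp.main(new String[] { input.toString(), actual.toString() });

        // In a real project the input and expected files would live under version control
        // and be compared pair by pair as part of the nightly build.
        assertEquals("42", Files.readString(actual));
    }
}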

J.B. Rainsberger has written about them. Here's a link to an InfoQ article with more info.
http://www.infoq.com/news/2009/04/jbrains-integration-test-scam

Related

Which kind of test should I use for a library?

I'm developing a PHP library that I'd like to use in different projects. The library uses a REST-like service in the background. I don't want to write tests for the service API, but for the library.
Would I need to write unit tests? Or functional tests? Since it is a library I won't write acceptance tests - I hope this is correct.
I don't know if this is important for the issue, but the library needs to log in to the service API and uses an API key for subsequent operations. Also, when the library gets tested, the preceding operations matter. It is a designer tool and I have operations like 'move rectangle', 'rotate rectangle' and so on, and I would like to test several operations in a sequence that should produce a certain result.
I think that this is a kind of functional test. Or do I need both? Can unit tests work with a service in the background?

How to take a screenshot on test failure with JUnit 5

Can someone please tell me how to take a screenshot when a test method fails (JUnit 5)? I have a base test class with BeforeEach and AfterEach methods. All other classes with @Test methods extend the base class.
Well, it is possible to write Java code that takes screenshots, see here for example.
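For completeness, here is a hedged sketch of what that could look like with JUnit Jupiter's TestWatcher extension (available from JUnit 5.4 on; the extension name is mine, and it assumes a machine with a display):

import java.awt.Rectangle;
import java.awt.Robot;
import java.awt.Toolkit;
import java.awt.image.BufferedImage;
import java.io.File;

import javax.imageio.ImageIO;

import org.junit.jupiter.api.extension.ExtensionContext;
import org.junit.jupiter.api.extension.TestWatcher;

// Captures the whole screen whenever a test fails. Register it on the base test class
// with @ExtendWith(ScreenshotOnFailureExtension.class).
public class ScreenshotOnFailureExtension implements TestWatcher {

    @Override
    public void testFailed(ExtensionContext context, Throwable cause) {
        try {
            Rectangle screen = new Rectangle(Toolkit.getDefaultToolkit().getScreenSize());
            BufferedImage capture = new Robot().createScreenCapture(screen);
            ImageIO.write(capture, "png",
                    new File("screenshot-" + context.getDisplayName() + ".png"));
        } catch (Exception e) {
            // Screenshots are best-effort only; never let them break the test run
            // (in a headless CI environment this will simply do nothing useful).
            e.printStackTrace();
        }
    }
}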
But I am very much wondering about the real problem you are trying to solve this way. I am not sure if you have figured that out yet, but the main intention of JUnit is to provide you with a framework that runs your tests in various environments.
Of course it is nice that you can run JUnit within your IDE, and maybe you would find it helpful to get a screenshot. But: "normally" unit tests also run during nightly builds and such - in environments where "taking a screenshot" might not make any sense!
Beyond that: screenshots are an extremely ineffective way of collecting information! When you have a failure, you should be looking for textual log files, HTML/XML reports, whatever. You want failing tests to generate information that can be easily digested.
So, the real answer here is: step back from what you are doing right now, and re-consider non-screenshot solutions to the problem you actually want to solve!
You don't need to take screenshots for JUnit test failures/passes; rather, the recommended way is to generate various reports (tests passed/failed report, code coverage report, code complexity report, etc.) automatically using the tools/plugins below.
You can use the Cobertura Maven plugin or the SonarQube code quality tool, and these will automatically generate the reports for you.
You can look here for cobertura-maven-plugin and here for SonarQube for more details.
You need to integrate these tools with your CI (Continuous Integration) environment and ensure that if the code is NOT passing a certain quality bar (in terms of test coverage, code complexity, etc.) then the project build (war/ear) should fail automatically.

Can you have 2 different suites in 1 JUnit TestCase Class?

I'd like to define two different suites in the same JUnit TestCase Class; one for behaviour tests and another for efficiency tests. Is it possible?
If yes, how? If not, why not?
Additional details: I'm using JUnit 3.8.1.
If I understand you correctly, you're trying to partition your tests. Suites, on their own, are not really the mechanism you need; rather, it's JUnit's categories you should investigate:
http://java.dzone.com/articles/closer-look-junit-categories
I've not used these as I've usually found the overhead of test partitioning too much effort, but this may work for you. I think TestNG has supported this concept for quite a while.
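As a hedged sketch of what the categories approach looks like (this needs JUnit 4.8 or later, so it would mean moving off 3.8.1; the Widget names are invented, and each public type would live in its own file):

import org.junit.Test;
import org.junit.experimental.categories.Categories;
import org.junit.experimental.categories.Categories.IncludeCategory;
import org.junit.experimental.categories.Category;
import org.junit.runner.RunWith;
import org.junit.runners.Suite.SuiteClasses;

// Marker interfaces used purely as category labels.
public interface BehaviourTests {}
public interface EfficiencyTests {}

// One test class, with each test tagged by category.
public class WidgetTest {

    @Category(BehaviourTests.class)
    @Test
    public void rotateTurnsTheRectangleByNinetyDegrees() {
        // ... behaviour assertion here ...
    }

    @Category(EfficiencyTests.class)
    @Test
    public void rotateHandlesAThousandRectanglesQuickly() {
        // ... timing assertion here ...
    }
}

// A "suite" that pulls only the efficiency tests out of the same class;
// an analogous BehaviourSuite would include BehaviourTests.class instead.
@RunWith(Categories.class)
@IncludeCategory(EfficiencyTests.class)
@SuiteClasses(WidgetTest.class)
public class EfficiencySuite {}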
Also, if you're using Maven you get a partitioning of tests into unit and integration tests for free - check out the Failsafe plugin - which is good for separating tests you want to run quickly as part of every build from longer running tests.

How to increase efficiency of Unit testing?

We are creating unit test cases for our existing code base. As we progress through the creation of the test cases, the test files are getting bigger and are taking a very long time to execute.
I know the limitations of unit testing, and I did some research to increase efficiency. While researching I found one useful idea: tighten up the provided data set.
Still, I am looking for some more ideas on how I can increase the efficiency of running/creating the unit test cases. We can keep the option of increasing server resources outside the scope of this question.
As your question was general I'll cover a few of the common choices. But most of the speed-up techniques have downsides.
If you have dependencies on external components (web services, file systems, etc.) you can get a speed-up by mocking them. This is usually desirable for unit testing anyway. You still need to have integration/functional tests that test with the real component.
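For instance (a rough sketch with Mockito; the classes are hypothetical), replacing a remote call with a canned answer keeps the unit test fast, while the real service is still exercised by a separate integration test:

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.jupiter.api.Test;

class ReportServiceTest {

    // Hypothetical collaborator that would normally hit a remote web service.
    interface ExchangeRateClient {
        double rateFor(String currency);
    }

    static class ReportService {
        private final ExchangeRateClient rates;
        ReportService(ExchangeRateClient rates) { this.rates = rates; }
        double inEuros(double amount, String currency) {
            return amount * rates.rateFor(currency);
        }
    }

    @Test
    void convertsUsingMockedRates() {
        // The slow external call is replaced by a canned answer, so this runs in microseconds.
        ExchangeRateClient client = mock(ExchangeRateClient.class);
        when(client.rateFor("USD")).thenReturn(0.5);

        assertEquals(50.0, new ReportService(client).inEuros(100.0, "USD"), 1e-9);
    }
}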
If testing databases, you can get quite a speed-up by using an in-memory database (SQLite works well with PHP's PDO; with Java, maybe H2?). This can have downsides, unless database portability is already a design goal. (I'm about to move to running one set of unit tests against both MySQL and SQLite.) Mocking the database away completely (see above) may be better.
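A minimal Java sketch of the H2 variant, assuming the H2 driver and JUnit are on the test classpath (the table and data are invented):

import static org.junit.jupiter.api.Assertions.assertEquals;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

import org.junit.jupiter.api.Test;

class InMemoryDatabaseTest {

    @Test
    void schemaAndQueryRunAgainstH2() throws Exception {
        // A named in-memory database lives only inside the JVM; DB_CLOSE_DELAY=-1 keeps it
        // open between connections, so the test is fast and leaves nothing behind.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:h2:mem:testdb;DB_CLOSE_DELAY=-1", "sa", "");
             Statement stmt = conn.createStatement()) {

            stmt.execute("CREATE TABLE users (id INT PRIMARY KEY, name VARCHAR(50))");
            stmt.execute("INSERT INTO users VALUES (1, 'alice')");

            try (ResultSet rs = stmt.executeQuery("SELECT name FROM users WHERE id = 1")) {
                rs.next();
                assertEquals("alice", rs.getString("name"));
            }
        }
    }
}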
PHPUnit allows you to specify @group on each test. You could go through and mark your slower tests with @group slow, and then use the --exclude-group command-line flag to exclude them on most of your test runs, and just include them in the overnight build. (You can also specify groups to include/exclude in your phpunit.xml.dist file.)
(I don't think JUnit has this option, but TestNG does; for C#, NUnit offers categories for this.)
Creating fixtures once, and then sharing them between tests is quicker than creating the fixture before each test. The XUnit Test Patterns devotes whole chapters to the pros and cons (mostly cons) of this approach.
I know throwing hardware at it was explicitly forbidden in your question, but look again at @group, and consider how it can allow you to split your tests across multiple machines. Or split the tests by directory, and process one directory on each of multiple machines on your LAN. (PHPUnit is single-threaded, so you could run multiple instances on the same machine, each doing its own directory; be aware that fixtures need to be independent (including unique names for databases you create, mocking the filesystem, etc.) if you go down this route.)

How do you make testing not boring?

Just as the title says: what ways do you use to test your own code so that it isn't a boring task? Do you use any tools? For my projects, I use a spreadsheet to list all the possible routines, i.e. from the basic CRUD to all the weird routines. I make about 10 routines.
I catch about 2-3 bugs, sometimes major ones, by doing this. And if I'm not doing this, the client reports another bug.
So tell me what technique do you use in testing your own code in such a way that it doesn't bore you?
Edit:
I forgot to mention that I am particularly working on web-based apps and my language is PHP with the CakePHP framework.
Have fast tests. The (more) immediate feedback helps to achieve short iterations. This can almost make you addicted to starting the next test run.
If you find testing boring, it is because you see testing your code as a necessary evil... at least that is how I perceive you see it.
All you need here is a change in your point of view towards testing... and more specifically... a change in HOW you are testing. You love programming a lot more than testing... well program your tests... then it is just as fun as programming the thing to begin with... and when you are done you have
a program that works
a test suite that remains and tests it on every build
So leave that Excel sheet and step-by-step debugger behind and join the fun :-)
Of course there is more to it, and this is where test frameworks (JUnit, TestNG, DUnit, NUnit...) come in handy; they take the little pains away and leave only the coding part of the test.
Happy coding and by extension.. happy testing :-)
A few references you may find useful; I am not a PHP expert, far from it, but they seemed to fit the purpose.
http://www.simpletest.org/
http://www.phpunit.de/
http://laughingmeme.org/2003/08/05/a-few-tips-for-writing-useful-libraries-in-php/
I used to think the same as you. When I first started programming, we had to work out what the output would be on paper and then do visual comparisons of the actual and expected output. Talk about tedious. A couple of years ago, I discovered Test Driven Development and xUnit and now I love tests.
Basically, in TDD, you have a framework designed to allow you to write tests and run them very easily. So, writing tests just becomes writing code. The process is:
1. Just write enough to allow you to write a test. E.g. you're adding a method to a class, so you just write the method signature and any return statement needed to get it to compile.
2. Then you write your first test and run the framework to see that it fails.
3. Then you add code to/refactor your method to get the test to pass.
4. Then you add the next test and see that it fails.
5. Repeat 3 and 4 until you can't think of any more tests.
6. You've finished.
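A small, hedged illustration of that cycle in Java/JUnit (the PriceCalculator class is a made-up example):

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

// Step 1: just enough production code to compile.
class PriceCalculator {
    int totalInCents(int unitPriceInCents, int quantity) {
        return 0; // stub so the first test compiles -- and fails
    }
}

class PriceCalculatorTest {

    // Step 2: the first test; run it and watch it fail against the stub above.
    // Step 3: change the stub to "return unitPriceInCents;" -- the simplest thing that passes.
    @Test
    void totalOfASingleItemIsItsUnitPrice() {
        assertEquals(200, new PriceCalculator().totalInCents(200, 1));
    }

    // Step 4: the next test fails against that naive version, forcing the real
    // implementation: "return unitPriceInCents * quantity;". Repeat until done.
    @Test
    void totalIsUnitPriceTimesQuantity() {
        assertEquals(600, new PriceCalculator().totalInCents(200, 3));
    }
}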
That's one of the nice things about TDD: once your code passes every test you can think of, you know you're finished - without TDD, it's sometimes difficult to know when to stop. Where do your tests come from? They come from the spec. TDD often helps you realise that the spec is full of holes, as you think of test cases for things that weren't in the spec. You can get those questions answered before you start writing the code to deal with them.
Another nice thing is that when you discover a bug later, you can start reworking your code safe in the knowledge that all of the existing tests will prove your code still works for all the known cases, whilst the new tests you've written to recreate the bug will show you when you've fixed it.
You can add unit tests to existing code - just add them for the bits you're changing. As you keep coming back to it, the tests will get more and more coverage.
xUnit is the generic name for a bunch of frameworks that support different languages: JUnit for Java, NUnit for .NET, etc. There's probably already one for whatever language you use. You can even write your own framework. Read this book - it's excellent.
For new code, work out what the code should do, write a test that asserts that the code does it, work out how to do it, then write the code.
For finding bugs in existing code, write a test which reproduces the bug; that makes the bug easier to pin down and the fix easier to verify.
This isn't boring, because in both cases the tests have a high likelihood of failure.
For UAT, I haven't found any non-boring way - you go through the requirements one by one and write as many tests as are required for the functionality. Ideally, for new projects, that would have mostly been done up-front as part of the elaboration, but not always. It's only when you're writing tests after the fact, with a long list of tests that you already know will pass, that it gets boring.
I don't see how it can be boring, since it's a large part of programming itself. Finding and removing bugs is very important, but if you think it's boring, maybe you would rather write code, in which case you can write a few lines that test the critical parts of your code.
Use a test-first approach / test-first pair programming.
If you write your tests after you have written your own code, then your target is to find mistakes in your work = sad target.
Conversely, if you write your tests before you code, then your target is to write flawless software = happy target.
You probably mean tedious, rather than boring.
If so, this article may help
"No testing, no boring."
Write automatic unit tests, with PHPUnit or SimpleTest since you're using PHP, or any other unit-testing framework available for your language of choice. Following Test-Driven Development (TDD), you will build a test suite along with your code. You won't have the impression you're testing anything. Really.
"* test a little, code a little*".
One piece of advice I give to my team is that, for a new feature, 90% of the logic should be able to run outside the context of the application.
Features that can run outside of the application context are always easy to test.
If you are using .NET, you can investigate NUnit.
You can also look at Pex. It seems to be an amazing test framework.
However, your question is a little generic, because there are a lot of testing types.
Have fun testing :).
I try to write my tests first and design the class around them, so I am really test-focused. I am using JUnit etc.
If you try programming that way, testing becomes more and more fun, from my point of view.
I work for a small company, yet we have a separate test team. This is because developers are often blind to their own errors, so they tend to be bad testers.
Our test team is made up of experienced Test Engineers who work according to predefined test-plans and who often use automated test-tools to test the applications we create. (Including websites!) They are not developers!
These testers use TMap for the automated testing. The rest is just manual labor, reading the functional designs and making sure that whatever is mentioned in the functional design will work exactly as described in the final version.
Any errors are reported back to the developers by using an internal bug reporting tool.
Write some unit tests/automated tests, which will run automatically e.g. after a new build has been done.
Use encapsulation and try to test against interfaces only.
Write some small tools to help you test your modules/classes.
Making an easy-to-use test suite is easy for Perl programs. There is a standard way to do testing in Perl using the Test Anything Protocol (TAP).
Basically you write a bunch of files with the .t extension, in the t/ directory of your project, and then run prove.
The files in t/ basically look like this:
#!/usr/bin/perl
use strict;
use warnings;

use Test::More tests => 8;
use Date::ICal;

# Build a date object and check that each accessor returns what we passed in.
my $ical = Date::ICal->new(
    year => 1964, month => 10, day => 16,
    hour => 16,   min   => 12, sec => 47,
    tz   => '0530',
);

ok( defined $ical,            'new() returned something' );
ok( $ical->isa('Date::ICal'), "  and it's the right class" );
is( $ical->sec,   47,   '  sec()' );
is( $ical->min,   12,   '  min()' );
is( $ical->hour,  16,   '  hour()' );
is( $ical->day,   16,   '  day()' );
is( $ical->month, 10,   '  month()' );
is( $ical->year,  1964, '  year()' );
For more information you can read the tutorial.
There are many languages that have modules designed to work with TAP; have a look here for more information.
Unfortunately, TAP has only recently been adopted by languages other than Perl, so there isn't as much support for them as there is for Perl.
Do not write tests for trivial stuff - at least not until it breaks, i.e. on the rare occasion that it does. If you do, you will feel discomfort every time you have to come back and maintain those tests. That's absolutely normal: boredom, laziness, frustration, etc. are your natural instinctive reactions to pointless work.
Quite the opposite: writing tests for non-trivial algorithms and logic, and discovering corner cases you didn't even think about, is actually fun and a very rewarding experience.