Imagine you have a large project with several thousand JUnit tests.
Let's say that running all those tests takes 7 minutes.
That looks short when you build the project from an Ant/Maven script.
But when you are working in Eclipse, you cannot run all your tests very often, because 7 minutes is far too long.
So here is the question:
When you modify some classes, is there a way to let JUnit run only the tests that may have been impacted by those class changes?
I mean, this sounds feasible using the classloader: after running each test, it is possible to know which classes were loaded for that test, and to store somewhere (even in memory) a signature of each class the test used.
When JUnit is launched again, it could check, for each test, whether the classes used by that test have been modified since the last run, and skip the test if it passed then and none of the classes it depends on have changed. (If the test was OK on the last run and nothing it uses changed, it should still be OK.)
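For example, a rough sketch of that "signature" idea could look like this (class and method names are invented for illustration; a real solution would also need to track which classes each test actually loads):

import java.io.InputStream;
import java.security.MessageDigest;
import java.util.HashMap;
import java.util.Map;

// Keeps an MD5 "signature" per class so a later run can detect changes.
public class ClassSignatureCache {

    private final Map<String, String> lastRunSignatures = new HashMap<>();

    // Computes an MD5 digest of the class file bytes as a cheap signature.
    public String signatureOf(Class<?> clazz) throws Exception {
        String resource = "/" + clazz.getName().replace('.', '/') + ".class";
        try (InputStream in = clazz.getResourceAsStream(resource)) {
            MessageDigest md5 = MessageDigest.getInstance("MD5");
            byte[] buffer = new byte[8192];
            int read;
            while ((read = in.read(buffer)) != -1) {
                md5.update(buffer, 0, read);
            }
            StringBuilder hex = new StringBuilder();
            for (byte b : md5.digest()) {
                hex.append(String.format("%02x", b));
            }
            return hex.toString();
        }
    }

    // True if the class changed since its signature was last recorded.
    public boolean hasChangedSinceLastRun(Class<?> clazz) throws Exception {
        String current = signatureOf(clazz);
        String previous = lastRunSignatures.put(clazz.getName(), current);
        return previous != null && !previous.equals(current);
    }
}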
Does someone know if this has been done/implemented already?
You could try using Infinitest from either Eclipse or IntelliJ.
Related
I'm new to JUnit. I've created a test suite with 50 test cases. When I run it, only 30 test cases pass and 20 fail. How can I run only those 20 failed test cases again with JUnit? Is it possible? Can someone guide me on this?
Deciding which tests to run, and in which order, is the job of the test runner. So, to rerun only failed tests you need to use a test runner that records which tests passed or failed, and which can be instructed to run only tests it knows failed.
The test runner in the Eclipse IDE can do this. Typically, an IDE is the only place you need this functionality, because an IDE is interactive and when using it you want fast feedback on whether a fix works. This functionality is not so useful elsewhere, because in other contexts we typically want to check that the program passes all of its tests.
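That said, if you want to do something similar outside the IDE, a rough sketch using JUnit 4's runner API could look like the following (MyTestSuite is a placeholder for your own suite class):

import org.junit.runner.Description;
import org.junit.runner.JUnitCore;
import org.junit.runner.Request;
import org.junit.runner.Result;
import org.junit.runner.notification.Failure;

public class RerunFailedTests {

    public static void main(String[] args) {
        JUnitCore core = new JUnitCore();

        // First pass: run the whole suite (MyTestSuite is a placeholder).
        Result firstRun = core.run(MyTestSuite.class);

        // Second pass: rerun only the tests that failed.
        for (Failure failure : firstRun.getFailures()) {
            Description d = failure.getDescription();
            Request rerun = Request.method(d.getTestClass(), d.getMethodName());
            Result result = core.run(rerun);
            System.out.println(d + " rerun: "
                    + (result.wasSuccessful() ? "passed" : "failed again"));
        }
    }
}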
Can someone please tell me how to take a screenshot when a test method fails (JUnit 5)? I have a base test class with @BeforeEach and @AfterEach methods. All other classes with @Test methods extend the base class.
Well, it is possible to write java code that takes screenshots, see here for example.
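If you do decide to go down that road, a minimal sketch using JUnit 5's TestWatcher extension and java.awt.Robot might look like this (the file naming is purely illustrative, and it only works where a display is available):

import java.awt.Rectangle;
import java.awt.Robot;
import java.awt.Toolkit;
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;
import org.junit.jupiter.api.extension.ExtensionContext;
import org.junit.jupiter.api.extension.TestWatcher;

// Register on the base test class with @ExtendWith(ScreenshotOnFailure.class).
public class ScreenshotOnFailure implements TestWatcher {

    @Override
    public void testFailed(ExtensionContext context, Throwable cause) {
        try {
            // Capture the whole primary screen and write it into the working directory.
            Rectangle screen = new Rectangle(Toolkit.getDefaultToolkit().getScreenSize());
            BufferedImage capture = new Robot().createScreenCapture(screen);
            ImageIO.write(capture, "png", new File(context.getDisplayName() + ".png"));
        } catch (Exception e) {
            // In a headless build there is no screen to capture; just log and move on.
            System.err.println("Could not take screenshot: " + e.getMessage());
        }
    }
}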
But I am very much wondering about the real problem you are trying to solve this way. I am not sure whether you have figured that out yet, but the main intention of JUnit is to provide you with a framework that runs your tests in various environments.
Of course it is nice that you can run JUnit within your IDE, and maybe you would find it helpful to get a screenshot. But: "normally" unit tests also run during nightly builds and such - in environments where "taking a screenshot" might not make any sense!
Beyond that: screenshots are an extremely ineffective way of collecting information! When you have a failure, you should be looking for textual log files, HTML/XML reports, whatever. You want failing tests to generate information that can be easily digested.
So, the real answer here is: step back from what you are doing right now, and re-consider non-screenshot solutions to the problem you actually want to solve!
You don't need to take screenshots for JUnit test failures/passes; the recommended way is to generate various reports (tests passed/failed report, code coverage report, code complexity report, etc.) automatically using the tools/plugins below.
You can use the Cobertura Maven plugin or the SonarQube code quality tool, and they will generate these reports for you automatically.
You can look here for the Cobertura Maven plugin and here for SonarQube for more details.
You need to integrate these tools with your CI (Continuous Integration) environment and ensure that if the code does NOT meet a certain quality bar (in terms of test coverage, code complexity, etc.), the project build (war/ear) fails automatically.
Can we use JUnit to test Java batch jobs? Since JUnit runs locally and Java batch jobs run on the server, I am not sure how to start a job (I tried using the JobOperator class) from JUnit test cases.
If JUnit is not the right tool, how can we unit test Java batch code?
I am using IBM's implementation of JSR 352 running on WAS Liberty.
JUnit is first of all an automation and test-monitoring framework, meaning you can use it to drive all kinds of @Test methods.
From a conceptual point of view, the definition of unit tests is pretty vague; if you follow Wikipedia, "everything you do to test something" can be seen as a unit test. From that perspective, of course, you can "unit test" batch code that runs on a batch framework.
But: most people think that "true", "helpful" unit tests do not require the presence of any external thing. Such tests can be run "locally" at build time. No need for servers, file systems, networking, ...
Keeping that in mind, I think there are two things you can work with:
You can use JUnit to drive "integration" or "functional" tests, meaning you can define test suites that do the "full thing": define batches, have them processed, and check for the expected results at the end. As said, those would be integration tests that make sure the end-to-end flow works as expected.
You look into "normal" JUnit unit testing, meaning you focus on those aspects of your code that are unrelated to the batch framework (in other words: look for POJOs) and unit test those, locally, maybe with mocking frameworks, without relying on a real batch service running your code.
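As a sketch of that second option, assuming a made-up ItemProcessor implementation, such a test needs no server and no batch runtime at all:

import static org.junit.Assert.assertEquals;

import javax.batch.api.chunk.ItemProcessor;
import org.junit.Test;

// Made-up chunk processor standing in for one of your own batch artifacts.
class UppercaseProcessor implements ItemProcessor {
    @Override
    public Object processItem(Object item) throws Exception {
        return ((String) item).toUpperCase();
    }
}

// Plain unit test: no server, no JobOperator, no job XML involved.
public class UppercaseProcessorTest {

    @Test
    public void processItemUppercasesInput() throws Exception {
        ItemProcessor processor = new UppercaseProcessor();
        assertEquals("HELLO", processor.processItem("hello"));
    }
}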
Building on the answer from @GhostCat, it seems you're asking how to drive the full job (his bullet 1) in your tests. (Of course, unit testing the reader/processor/writer components individually can also be useful.)
Your basic options are:
Use Arquillian (see here for a link on getting started with Arquillian and Liberty) to run your tests in the server but to let Arquillian handle the tasks of deploying the app to the server and collecting the results.
Write your own servlet harness driving your job through the JobOperator interface. See the answer by @aguibert to this question for a starting point. Note you'll probably want to write your own simple routine polling the JobExecution for one of the "finished" states (COMPLETED, FAILED, or STOPPED), unless your jobs have some other means of notifying the submitter; a sketch of such a polling routine appears at the end of this answer.
Another technique to keep in mind is the startup bean. You can run your jobs simply by starting the server with a startup bean like:
@Startup
@Singleton
public class StartupBean {

    @PostConstruct
    public void runJobsOnStartup() {
        JobOperator jobOp = BatchRuntime.getJobOperator();
        // Drive job(s) on startup.
        jobOp.start(...);
    }
}
This can be useful if you have a way to check the job results separate from using the JobOperator interface (for which you need to be in the server). Your tests can simply poll and check for the job results. You don't even have to open an HTTP port, and the server startup overhead is only a few seconds.
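For completeness, here is a rough sketch of the kind of polling routine mentioned above; note that the JobOperator calls still have to run in the server, and the poll interval and timeout values are arbitrary:

import java.util.Properties;
import javax.batch.operations.JobOperator;
import javax.batch.runtime.BatchRuntime;
import javax.batch.runtime.BatchStatus;
import javax.batch.runtime.JobExecution;

public class JobPoller {

    // Starts the named job and polls until it reaches a finished state or the timeout expires.
    public static BatchStatus runAndWait(String jobXmlName, Properties params, long timeoutMillis)
            throws InterruptedException {
        JobOperator jobOp = BatchRuntime.getJobOperator();
        long executionId = jobOp.start(jobXmlName, params);

        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            JobExecution execution = jobOp.getJobExecution(executionId);
            BatchStatus status = execution.getBatchStatus();
            if (status == BatchStatus.COMPLETED || status == BatchStatus.FAILED
                    || status == BatchStatus.STOPPED) {
                return status;
            }
            Thread.sleep(500); // arbitrary poll interval
        }
        throw new IllegalStateException("Job did not finish within " + timeoutMillis + " ms");
    }
}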
I want to add some hints to my build to run certain tests "first", without re-running them later. I could either:
Simply add class names to a "priority" string in an input parameter to my test task, or
Have JUnit's test runner be smart enough to remember/persist failing test class names, so that the next time around the build runs those first.
What is the most idiomatic way of doing this in Ant?
The following tools might help you to achieve the desired JUnit test execution order, but they depend on Eclipse usage:
Continuous Testing for Eclipse (CT-Eclipse)
JUnit Max
Infinitest
I have not used any of those tools, and I have no Ant-only solution.
You might consider these related posts:
Run JUnit automatically when building Eclipse project
Starting unit tests automatically after saving a file
Ok, this is annoying.
MSTest executes all of my tests simultaneously, which causes some of them to fail. No, this is not because my tests are fragile and susceptible to build order; rather, it is because this is a demo project in which I use a Db4o object database running from a file.
So I have a couple of DataAccess tests checking that my repositories work correctly and, boom, MSTest blows up. Since it tries to run all its tests at the same time, it gets an error when a test tries to access the database file while other tests are using it.
Can anyone think of a quick way around this? I don't want to ditch MSTest (OK, I do, but that's another story) and I sure as heck don't want to run a full-blown database service, so I'll take any way to force MSTest not to run tests simultaneously, or any tricks with opening files.
Anyone have any ideas?
You might want to try using a Monitor: enter it in TestInitialize and exit it in TestCleanup. If your test classes all depend on the external file, you'll need to use a single lock object for all of them.
// Shared lock object used by every test class that touches the database file.
public static class LockClass
{
    public static object LockObject = new object();
}
...
[TestInitialize]
public void TestSetup()
{
    // Block until no other test holds the lock.
    Monitor.Enter(LockClass.LockObject);
}

[TestCleanup]
public void TestCleanup()
{
    // Release the lock so the next test can proceed.
    Monitor.Exit(LockClass.LockObject);
}
This should force all of your tests to run serially, and as long as every test either passes or fails normally they should all run. If any of them throws an unexpected exception, though, all the rest will hang, since the Exit code won't be run for the test that blows up.
I had a try using locks in this manner.
What I experienced, however, was that VS2010 does not execute the tests in parallel by default, but executes them sequentially, in a single thread. (Parallel execution can be switched on, but even then this would not prevent the problem completely.)
What I find very disturbing is that the sequential execution takes place in arbitrary order, even across test classes!
So for example an execution order may look like this:
Class A - TestInitialize: Lock will be established
Class A - TestMethod1: Will execute, OK
Class B - TestInitialize: tries to establish the lock
=> Thread will be blocked
=> The complete unit test run will be blocked! The cause is that there is no other thread that would go on executing the methods of Class A, so Monitor.Exit() will never be reached.
I do not understand why MS is doing this. Other unit test frameworks (e.g. JUnit) execute the test methods class-wise. Otherwise there will be some interleaving of SetUp/TearDown methods, which causes the chaos described...
Is there anybody out there knowing how to prevent MSTest jumping between test classes?
(Currently I use ReSharper's test runner, which behaves as expected, executing all test methods of one class before proceeding with the next class.)
Use an Ordered Test
http://msdn.microsoft.com/en-us/library/ms182630(v=VS.90).aspx