Repeat test including beforeClass with JUnit

We set up our test environment in a static @BeforeClass method, and sometimes the setup fails. We would like to detect that and repeat the setup of the test environment. Is it possible to do this with JUnit?

Why not just do the setup in the test itself, or in @Before, instead of @BeforeClass? The point of @BeforeClass is to do work that only needs to happen once.
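If the setup can fail intermittently, one option is to move it into @Before and retry it there. Below is a minimal sketch of that idea; TestEnvironment and its create() method are hypothetical stand-ins for whatever your @BeforeClass currently builds.

import org.junit.Before;
import org.junit.Test;

public class EnvironmentDependentTest {

    private static final int MAX_ATTEMPTS = 3;

    private TestEnvironment env;

    @Before
    public void setUpEnvironment() {
        RuntimeException lastFailure = null;
        for (int attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
            try {
                env = TestEnvironment.create();   // the step that sometimes fails
                return;                           // setup succeeded, run the test
            } catch (RuntimeException e) {
                lastFailure = e;                  // remember the failure and retry
            }
        }
        throw lastFailure;                        // every attempt failed: fail the test
    }

    @Test
    public void somethingUsingTheEnvironment() {
        // ... assertions against env ...
    }

    // Hypothetical stand-in for the environment the original @BeforeClass set up.
    static class TestEnvironment {
        static TestEnvironment create() { return new TestEnvironment(); }
    }
}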

Related

NUnit equivalent for JUnit test state management with @Before/@After

I come from the Java world, where I mostly used JUnit, and now I have some problems expressing certain aspects of my tests with NUnit 3. In JUnit, each test creates its own instance of the test class, so it is perfectly valid to create instance variables in a test class, set them up in the @Before method, let the test method and its helpers access those variables freely without worrying that they will be overwritten by other tests run in parallel, and tear the test data down nicely in @After. With NUnit this does not work, and the SetUp and TearDown methods seem useless in this case, because the test fixture instance is reused between invocations of the test method(s), so fields of the test fixture class can be (and are) overwritten by every invocation of a test method (my class has a few test methods, and each of them generates several test cases, so there are some tens of invocations in one test run).
I do not know how to work around this problem. In my scenario, set up would create a temporary folder to be used as the work folder for the following test case, and tear down would delete the temporary folder afterwards, cleaning up all intermediate files created by the tested method. But when SetUp creates a temporary folder and stores its path in an instance field (so it can be read by the test logic and by somewhat complicated asserts and verifiers), the value of that field is overwritten by test cases run in parallel. I considered several approaches:
implement an IDisposable which would represent the context of each test, and enclose it in a using block in each test method - I do not like this idea, because I do not like IDisposable being used as anything other than a resource management tool, and combining IDisposable with using to simulate set up/tear down smells to me like an abuse of this particular language feature,
create a method which accepts a delegate for the actual test logic, and which invokes custom SetUpTestCase/TearDownTestCase methods. The method would invoke set up, then the test delegate, then tear down. What I do not like about this approach is that it does not play well with test methods which accept parameters - each set of test methods parametrized in a particular way would need a corresponding delegate type. It also seems somewhat against the spirit of NUnit and its way of describing test methods with attributes - after all, why should the main logic of my test be delegated to anything? Shouldn't the [Test] or [TestCase] method be the actual test?
maybe there is some way to use more advanced aspects of NUnit, like actions or some callbacks/triggers - I am just too inexperienced to see them. What I particularly miss is a way to transfer data from the set up method (for example, the path to a temporary folder it created) to the test method that follows. I cannot use instance fields for this, and I do not know whether there exists any "tag" structure which would pass test-specific data between methods invoked at different stages of the test lifecycle.
Generally, the SetUp and TearDown attributes seem pretty useless to me if they cannot set up a test case without their result being immediately overwritten by another test case run in parallel. What am I missing here?
How can I implement such per-test-case, scoped set up/tear down behaviour with NUnit? What am I doing wrong, or what am I missing?
As you have established, the TestFixture class is instantiated once before the OneTimeSetUp is called; then for each test it runs a set of SetUp, Test and TearDown; and finally, the OneTimeTearDown.
If you want the tests to be run in parallel (which is not the default), then you must specify the Parallelizable attribute. Whether you do that or not, it is a good idea for your tests to be written independently, so they do not conflict with each other - they need to be structured.
The AAA (Arrange, Act, Assert) pattern is a common way of structuring unit tests for a method under test. If your tests are to be run in parallel, then TestFixture fields are not suitable for holding information which may conflict across parallel tests, in the same way that it wouldn't be suitable in a multithreaded class.
I'd suggest using a private method in the TestFixture to set up the temporary folder - it will need to have some way of providing a unique folder name, so that the parallel tests do not interact - perhaps use a Guid or CallerMemberName as part of the folder name, and return the folder name.
This method should be called from the Arrange part of the test. And you'll need a try...finally wrapping the rest of the Test to ensure the folder gets torn down. Or you could go with your IDisposable idea - I don't think there's anything wrong with that: the whole point of that is to guarantee tidying up resources (both managed and unmanaged) when something goes out of scope.
Your second suggestion of a delegate would also be fine if you used lambda expressions rather than strictly-defined delegates - the lambda expression can capture variables from the containing scope.
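The original question is about NUnit/C#, but the same pattern translates; here is a rough JUnit/Java analog of what is described above, purely as an illustration: a private helper creates a uniquely named work folder, the test calls it in its Arrange step, and a try...finally guarantees the per-test tear down.

import org.junit.Test;

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;
import java.util.stream.Stream;

public class WorkFolderTest {

    // createTempDirectory generates a unique name per call, so parallel tests cannot collide.
    private Path createWorkFolder() throws IOException {
        return Files.createTempDirectory("work-");
    }

    private void deleteRecursively(Path folder) throws IOException {
        try (Stream<Path> paths = Files.walk(folder)) {
            paths.sorted(Comparator.reverseOrder()).forEach(p -> p.toFile().delete());
        }
    }

    @Test
    public void methodUnderTestWritesExpectedFiles() throws IOException {
        Path workFolder = createWorkFolder();      // Arrange
        try {
            // Act on workFolder, then Assert ...
        } finally {
            deleteRecursively(workFolder);         // per-test-case tear down
        }
    }
}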

What good are JUnit's @Ignore and @Disabled annotations?

What is the advantage of adding the @Disabled or @Ignore annotation to JUnit tests, e.g.:
@Test
@Disabled
void testSomething() { /* ... */ }
instead of just removing the @Test annotation?
void testSomething() { /* ... */ }
Either way, the test should not be executed.
The utility of these annotations is largely in documentation and reporting. When you run a JUnit suite, you get a report of the results, and tests marked @Ignore or @Disabled are flagged as skipped (with optional comments) in that report.
This lets you track how many tests are being ignored or disabled. You can set policies around this (e.g. if a test has been ignored for a month, just delete it) or make CI systems fail if too many tests are being ignored. You can make graphs showing trends of passed/failed/skipped over time.
Really, it all comes down to how you want to track the evolution of your test suite, and whether you would rather see a section of "skipped" tests or the total number of tests going down when a test is temporarily broken or no longer useful.
The @Disabled and @Ignore annotations can be used to exclude test methods from the test suite.
@Disabled was introduced in JUnit 5. It accepts a single optional parameter, which indicates the reason the test is disabled. Example:
@Disabled("Do not run in a lower environment")
Advantages of adding @Disabled or @Ignore:
Searchability: you can easily find all @Ignore or @Disabled annotations in the source code, while unannotated or commented-out tests are not so simple to find.
Maintainability: the test is easy to maintain or modify later. It is always good practice to use annotations.
In my opinion, @Ignore is cleaner than commenting out an entire test method.
Also, when you run your test suite, you get a warning that some tests were ignored; you won't get that if you comment them out, and maybe someday you will want to re-enable them.
Other advantages:
Manual-only execution: ensures a test is never run automatically while still allowing you to run it manually with no code change. This is my preferred approach, since it defaults to safe behaviour without requiring any configuration of the test engine or test run.
Avoids warnings: removing the @Test annotation can trigger "unused code" warnings for tests that are still in use, just run manually.
Modularity: you can use it on an entire test class, whereas untagging (removing @Test) would need to be done for each test case individually; see the sketch below.
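To illustrate the modularity point, one class-level annotation skips every test in the class instead of removing @Test from each method (JUnit 5 shown here; @Ignore on a JUnit 4 class works the same way, and the names below are hypothetical):

import org.junit.jupiter.api.Disabled;
import org.junit.jupiter.api.Test;

@Disabled("Blocked until the legacy backend is migrated")
class LegacyBackendTest {

    @Test
    void readsLegacyRecords() { /* skipped */ }

    @Test
    void writesLegacyRecords() { /* skipped */ }
}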

JUnit equivalents for TestNG's @BeforeSuite, @BeforeTest

I'm refactoring some test classes from TestNG to JUnit 4. During the process, I've stumbled upon the following annotations:
@BeforeTest
@AfterTest
According to the manual:
The annotated method will be run before/after any test method belonging to the classes inside the <test> tag is run.
What would be the equivalent annotations in JUnit?
This is the original answer, but I think it is wrong. See below for a better one.
The equivalent would be the annotations
@Before
and
@After
See also http://junit.sourceforge.net/javadoc/org/junit/Before.html
This is a better answer, written after I learned about the difference between @BeforeMethod/@AfterMethod and @BeforeTest/@AfterTest in TestNG.
If I got it right, with @BeforeTest/@AfterTest you can run a method before or after a group of tests that you specify inside the annotation or in a separate document.
There is no out-of-the-box feature like this in JUnit.
Probably the best you can do is put whatever you want to do in a JUnit Rule. See also http://schauderhaft.de/2011/07/24/rules-in-junit-4-9-beta-3/
Then you can use that Rule in any test that needs it.
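As a minimal sketch of the Rule idea, JUnit 4 ships an ExternalResource base class; applied as a @ClassRule it runs once around all tests in the class (use @Rule instead to run it around every test method). The class and field names below are hypothetical.

import org.junit.ClassRule;
import org.junit.Test;
import org.junit.rules.ExternalResource;

public class ServerDependentTest {

    @ClassRule
    public static final ExternalResource SERVER = new ExternalResource() {
        @Override
        protected void before() {
            // start whatever the TestNG @BeforeTest method used to start
        }

        @Override
        protected void after() {
            // the corresponding @AfterTest-style cleanup
        }
    };

    @Test
    public void talksToTheServer() {
        // ...
    }
}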

Running JUnit only on tests impacted by class modifications

Imagine you have a large project with several thousand JUnit tests.
Let's say that running all those tests takes 7 minutes.
That seems short when you build your project from an Ant/Maven script.
But when you are using Eclipse, you cannot run all your tests very often, because 7 minutes is too long.
So here is the question:
When you modify some classes, is there a way to let JUnit run only the tests that may have been impacted by those class changes?
I mean, this sounds feasible using the classloader: after running each test, it is possible to know which classes were loaded for that test, and to store somewhere (even in memory) a signature of each class used by it.
When JUnit is launched again, it could, for each test, check whether any class used by that test has been modified since the last run, and skip the test if it passed then and nothing it depends on has changed. (If the test passed on the last run and nothing it uses has changed, it should still pass.)
Does someone know if this has been done/implemented already?
You could try using Infinitest, from either Eclipse or IntelliJ.

How do I define a TestSuite without using @SuiteClasses in JUnit 4.5?

I'm trying to migrate to JUnit 4 and I'm not clear about the correct way to set up test suites.
I know how to set up a test suite with fixed tests using the @SuiteClasses annotation.
However, I want to have a top-level suite class where I can programmatically decide which test classes or suites to load. I know that there are addTest and addTestSuite operations in the TestSuite class.
However, if I define a TestSuite subclass with a constructor that attempts to add these tests and try to run it, I get the error "Must have SuiteClasses annotation".
Any idea how to do this?
I would recommend creating a subclass of BlockJUnit4ClassRunner and pulling in the classes you want to test manually. The protected methods of that class do all the hard work for you, although you might want to tweak the Descriptions a bit to make sure the results are all unique in the output files.
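As a concrete illustration of picking test classes in code rather than in an annotation, here is a minimal sketch that subclasses the built-in Suite runner instead (all names are hypothetical, and it assumes JUnit 4.5+, where RunnerBuilder is available):

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Suite;
import org.junit.runners.model.InitializationError;
import org.junit.runners.model.RunnerBuilder;

public class DynamicSuite extends Suite {

    public DynamicSuite(Class<?> klass, RunnerBuilder builder) throws InitializationError {
        // Suite exposes a protected (RunnerBuilder, Class, Class[]) constructor that takes
        // an explicit array of classes, so no @SuiteClasses annotation is needed.
        super(builder, klass, pickTestClasses());
    }

    private static Class<?>[] pickTestClasses() {
        // Any logic can go here: scan a package, read configuration, check the environment...
        return new Class<?>[] { FooTest.class, BarTest.class };
    }

    // Hypothetical test classes, nested here only to keep the sketch in one file.
    public static class FooTest {
        @Test
        public void foo() { }
    }

    public static class BarTest {
        @Test
        public void bar() { }
    }

    // The suite entry point: an otherwise empty class that points at the custom runner.
    @RunWith(DynamicSuite.class)
    public static class AllTests {
    }
}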