MSTest executing all my tests simultaneously breaks tests - what to do?

Ok, this is annoying.
MSTest executes all of my tests simultaneously, which causes some of them to fail. No, this is not because my tests are fragile and susceptible to execution order; rather, it is because this is a demo project in which I use a Db4o object database running from a file.
So I have a couple of DataAccess tests checking that my repositories work correctly and boom, MSTest blows up. Since it tries to run all its tests at the same time, it gets an error when one test tries to access the database file while other tests are using it.
Can anyone think of a quick way around this? I don't want to ditch MSTest (OK, I do, but that's another story), and I sure as heck don't want to run a full-blown database service, so I'll take any way to force MSTest not to run tests simultaneously, or any tricks with opening files.
Anyone have any ideas?

You might want to try using a Monitor, entering it in TestInitialize and exiting it in TestCleanup. If your test classes all depend on the external file, you'll need to use a single lock object shared by all of them.
public static class LockClass
{
    public static object LockObject = new object();
}
...
[TestInitialize]
public void TestSetup()
{
    Monitor.Enter(LockClass.LockObject);
}

[TestCleanup]
public void TestCleanup()
{
    Monitor.Exit(LockClass.LockObject);
}
This should force all of your tests to run serially, and as long as every test either passes or fails normally, they will all run. If any of them throws an unexpected exception, though, all the rest will hang, since the Exit code won't be run for the test that blows up.
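One way to soften that failure mode - a sketch only, assuming .NET 4.5+ for Monitor.IsEntered and picking an arbitrary 30-second timeout - is to acquire the lock with a timeout, so a leaked lock shows up as a test failure instead of a hang:

[TestInitialize]
public void TestSetup()
{
    // Fail loudly instead of blocking forever if a previous test never released the lock.
    if (!Monitor.TryEnter(LockClass.LockObject, TimeSpan.FromSeconds(30)))
    {
        Assert.Fail("Could not acquire the shared database lock within 30 seconds.");
    }
}

[TestCleanup]
public void TestCleanup()
{
    // Only release the lock if this thread actually holds it.
    if (Monitor.IsEntered(LockClass.LockObject))
    {
        Monitor.Exit(LockClass.LockObject);
    }
}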

I tried using locks in this manner.
What I found, however, was that VS2010 does not execute the tests in parallel by default, but sequentially, in a single thread. (Parallel execution can be switched on, but that would not prevent the problem completely.)
What I find very disturbing is that the sequential execution takes place in arbitrary order, even across test classes!
So for example an execution order may look like this:
Class A - TestInitialize: Lock will be established
Class A - TestMethod1: Will execute, OK
Class B - TestInitialize: Lock will be established
=> Thread will be blocked
=> The complete unit test run will be blocked! The cause is that there is no other thread that would go on executing the methods of Class A, so the Monitor.Exit() releasing its lock will never be reached.
I do not understand why MS does it this way. Other unit test frameworks (e.g. JUnit) execute the test methods class by class; otherwise there is some interleaving of SetUp/TearDown methods, which causes the chaos described above...
Does anybody out there know how to prevent MSTest from jumping between test classes?
(Currently I use ReSharper's test runner, which behaves as expected, executing all test methods of one class before proceeding to the next class.)

Use an Ordered Test
http://msdn.microsoft.com/en-us/library/ms182630(v=VS.90).aspx

Related

NUnit equivalent for JUnit test state management with @Before/@After

I come from the Java world, where I mostly used JUnit, and now I have some problems expressing certain aspects of tests with NUnit 3. In JUnit, each test creates its own instance of the test class, so it's perfectly valid to create some instance variables in the test class, set them up in the @Before method, let the test method and helpers access these variables freely without worrying that they would be overwritten by other tests running in parallel, and have @After tear down the test data nicely. With NUnit this does not work, and the SetUp and TearDown methods seem useless in this case, because the test fixture instance is reused between invocations of the test method(s), so fields of the test fixture class can be (and are) overwritten by every invocation of a test method (my class has a few test methods, and each of them generates several test cases, so there are some tens of invocations in one test run).
I do not know how to work around this problem. In my scenario, set up would create a temporary folder to be used as a working folder for the following test case. Tear down would delete the temporary folder afterwards, cleaning up all intermediate files created by the tested method. But now, when SetUp creates a temporary folder and stores its path in an instance field (so it can be read by the test logic and by somewhat complicated asserts and verifiers), the value of that field is overwritten by test cases running in parallel. I considered several approaches:
implement an IDisposable which would represent the context of each test, and enclose it with using in each test method - I do not like this idea, because I do not like the idea of IDisposable being used as anything other than a resource management tool, and combining IDisposable with using to simulate set up/tear down smells to me like an abuse of this particular language feature,
create a method which accepts a delegate for the actual test logic, and which invokes custom SetUpTestCase/TearDownTestCase methods around it. The method would invoke set up, then the test delegate, and tear down afterwards. What I do not like about this approach is that it does not play well with test methods which accept parameters - each set of test methods parametrized in a particular way would need a corresponding delegate type. Also, it seems somewhat against the spirit of NUnit and its way of describing test methods with attributes - after all, why should the main logic of my test be delegated to anything? Shouldn't the [Test] or [TestCase] method be the actual test?
maybe there's some way to use more advanced aspects of NUnit, like actions or some callbacks/triggers/whatever; I am just too inexperienced to see them. What I particularly miss is a way to transfer data from the set up method (for example, a path to a temporary folder created by it) to the test method that follows. I cannot use instance fields for this, and I do not know whether there exists any "tag" structure which could pass test-specific data between methods invoked at different stages of the test lifecycle.
Generally, the SetUp and TearDown attributes seem pretty useless to me if they cannot set up a test case without their result being immediately overwritten by another test case running in parallel. What am I missing here?
How can I implement such per-test-case, scoped setup/teardown behavior with NUnit? What am I doing wrong, or what am I missing?
As you have established, the TestFixture class is instantiated once before the OneTimeSetUp is called; then for each test it runs a set of SetUp, Test and TearDown; and finally, the OneTimeTearDown.
If you want the tests to be run in parallel (which is not the default), then you must specify the Parallelizable attribute. Whether you do that or not, it is a good idea for your tests to be written independently, so they do not conflict with each other - they need to be structured.
The AAA (Arrange, Act, Assert) pattern is a common way of structuring unit tests for a method under test. If your tests are to be run in parallel, then TestFixture fields are not suitable for holding information which may conflict across parallel tests, in the same way that it wouldn't be suitable in a multithreaded class.
I'd suggest using a private method in the TestFixture to set up the temporary folder - it will need to have some way of providing a unique folder name, so that the parallel tests do not interact - perhaps use a Guid or CallerMemberName as part of the folder name, and return the folder name.
This method should be called from the Arrange part of the test. And you'll need a try...finally wrapping the rest of the Test to ensure the folder gets torn down. Or you could go with your IDisposable idea - I don't think there's anything wrong with that: the whole point of that is to guarantee tidying up resources (both managed and unmanaged) when something goes out of scope.
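For example, here is a minimal sketch of that idea (it assumes using System, System.IO, System.Runtime.CompilerServices and NUnit.Framework; the test body itself is only a placeholder):

[Test]
public void MethodUnderTest_WritesOutputFile()
{
    // Arrange: every test gets its own folder, so parallel tests cannot collide.
    string workFolder = CreateTempFolder();
    try
    {
        // Act and Assert against workFolder here (placeholder shown).
        File.WriteAllText(Path.Combine(workFolder, "output.txt"), "hello");
        Assert.That(File.Exists(Path.Combine(workFolder, "output.txt")), Is.True);
    }
    finally
    {
        // Tear down: remove the folder even if the assertion fails.
        Directory.Delete(workFolder, recursive: true);
    }
}

private static string CreateTempFolder([CallerMemberName] string testName = "")
{
    // Unique per test: caller name plus a Guid, under the system temp path.
    string path = Path.Combine(Path.GetTempPath(), testName + "_" + Guid.NewGuid().ToString("N"));
    Directory.CreateDirectory(path);
    return path;
}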
Your second suggestion of a delegate would also be fine if you used lambda expressions rather than strictly-defined delegates - the lambda expression can capture variables from the containing scope.

JUnit wait before proceeding

I am relying on a third-party API to perform some action, namely deleting a user, e.g. user.delete(). Unfortunately this returns void and doesn't give me anything like a CompletableFuture back.
This is causing me some problems in my integration testing, as I wish to create a user in a number of tests and then delete it once the test is complete, ready for the next one. The next test won't run if the delete task has not been completed by the third party.
So I can think of two solutions in my test code:
Thread.sleep(1000);
Yuck - quite brittle, since I have no idea how long it will take. Or I can block until I can be sure the user no longer exists (a ResourceException is thrown when the user doesn't exist):
private void blockUntilUserRemoved() {
    try {
        do {
            servicesClient.getUser("donald.duck@disney.com");
        } while (true);
    } catch (ResourceException e) {
        return;
    }
}
This will work, but using exceptions to control the logic like this feels wrong. The question is: does it matter in test code?
That is as much as you can do with a void return type. So basically you do long polling here and check whether the user has really been deleted.
Other things I can think of: maybe a database trigger that fires on delete - no idea whether that is possible, though; and even if it is, for tests it would require quite a lot of work. Another option is to do the waiting in a separate thread and have your main thread updated whenever the result comes in - but again, for tests that sounds like overkill to me.
Another small suggestion is Thread.onSpinWait (since Java 9) - read its documentation to see how it might help (a bit).
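If you stay with the polling approach, a small sketch like the following avoids spinning at full speed and puts an upper bound on the wait (servicesClient and ResourceException are the types from the question; the 10-second timeout and 100 ms interval are arbitrary assumptions):

private void blockUntilUserRemoved() throws InterruptedException {
    long deadline = System.currentTimeMillis() + 10_000; // give up after 10 seconds
    while (System.currentTimeMillis() < deadline) {
        try {
            servicesClient.getUser("donald.duck@disney.com");
        } catch (ResourceException e) {
            return; // the user is gone - safe to continue with the next test
        }
        Thread.sleep(100); // brief pause so we don't hammer the service
    }
    throw new AssertionError("User was not deleted within 10 seconds");
}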

How to programmatically fail fast an entire spec suite

I have a test suite where occasionally the suite loses its database connection or something like that and starts throwing MySQL errors for most of the remaining tests. I'll troubleshoot why that is happening later, but right now I just want RSpec to fail fast when it detects that particular type of error being thrown. Is there any way to do that, perhaps in an after block that checks whether there was an exception in the main test block and then tells RSpec to fail fast? I don't want to use fail-fast in most other cases.
Use the raise_error matcher to specify that a block of code raises an error. The most basic form passes if any error is thrown:
expect { # put your code here }.to raise_error
Please refer to https://relishapp.com/rspec/rspec-expectations/docs/built-in-matchers/raise-error-matcher
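For the actual "abort the rest of the suite" behavior asked about, a rough sketch of the after-hook idea might look like the following - note that RSpec.world.wants_to_quit is internal API and may change between versions, and Mysql2::Error is only an assumption about which error class is being raised:

# spec_helper.rb
RSpec.configure do |config|
  config.after(:each) do |example|
    # If this example blew up with the database error, stop running further examples.
    if example.exception.is_a?(Mysql2::Error)
      RSpec.world.wants_to_quit = true
    end
  end
end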

Where to create KnowledgeBase in a Drools unit test?

Brief
I'm looking to write unit tests in JUnit to check individual drools rules. A unit test should be simple to write and fast to run. If the unit tests are slow then developers will avoid running them and the build will become excessively slow. With that in mind I'm trying to figure out the best (fastest to execute and easiest to write) method of writing these unit tests.
First attempt
The first option I tried was to create the KnowledgeBase as a static class attribute and initialize it from one .drl file. Each test then creates a new session in the @Before method. This was based on the code examples in the Drools JBoss Rules developer guide.
I've seen a second option that tidies this up a bit by creating some annotations to abstract the initialization code but it's basically the same approach.
I noticed that this basic unit test on one .drl file was taking a couple of seconds to run. This isn't too bad with one unit test, but once it's scaled up I can see it being a problem. I did some reading and found that the KnowledgeBase is expensive to create, whereas the session is cheap. This is why the examples make the KnowledgeBase static, so it's created only once; however, with multiple unit test classes it will potentially be created many times.
Alternative
The alternative I tried is to create a singleton KnowledgeBase that loads all .drl files. This is done once globally for the test suite and then autowired into each test class. I used a Spring @Configuration class and defined the KnowledgeBase as a @Bean. I now find that the KnowledgeBase takes 2 seconds to create (but is created only once), the session creation takes about 0.2 seconds, and the test itself takes no time at all.
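(For reference, the wiring described here is roughly the following - a sketch using the newer KieBase/KieServices API and assuming the rules are packaged on the classpath with a kmodule.xml; a Drools 5 KnowledgeBase would be exposed as a bean the same way:)

@Configuration
public class RulesTestConfig {

    // Built once for the whole test context and autowired into each test class.
    @Bean
    public KieBase kieBase() {
        return KieServices.Factory.get().getKieClasspathContainer().getKieBase();
    }
}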
It seems as if the Spring approach may scale better, but I'm not sure whether I will get other problems from testing a single rule while using a KnowledgeBase initialized from all files. I'm using an AgendaFilter to target the specific rule I want to test. Also, I've searched online quite a bit and haven't found anybody else doing it this way.
Summary
What will be the most scalable way of writing these tests? Thanks
This is a very nice collection of experiences. Let me contribute some ideas.
If you have a scenario where a large number of rules is essential for testing the outcome because the rules vie with each other, you should create the KieBase containing all of them and serialize it, once. For individual tests, either derive a session from it for each test, inserting facts and firing all rules, or, alternatively, derive the session up front and run the tests, clearing the session (!), inserting facts and firing all rules.
For testing a single rule or a small (<10) set of rules, compiling the DRL from scratch for each set of tests may be tolerable, especially if you adopt the strategy of reusing the session (!) by clearing Working Memory between individual tests.
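A bare-bones sketch of the "expensive KieBase once, cheap session per test" layout, using JUnit 4 and the Kie API (the rule name and the inserted fact are placeholders):

import org.junit.After;
import org.junit.Before;
import org.junit.BeforeClass;
import org.junit.Test;
import org.kie.api.KieBase;
import org.kie.api.KieServices;
import org.kie.api.runtime.KieSession;

public class SingleRuleTest {

    private static KieBase kieBase;   // expensive: built once per test class
    private KieSession session;       // cheap: fresh session per test

    @BeforeClass
    public static void buildKieBase() {
        // However your project loads its DRL - classpath container, KieHelper, a Spring bean, ...
        kieBase = KieServices.Factory.get().getKieClasspathContainer().getKieBase();
    }

    @Before
    public void newSession() {
        session = kieBase.newKieSession();
    }

    @After
    public void disposeSession() {
        session.dispose();
    }

    @Test
    public void firesOnlyTheRuleUnderTest() {
        session.insert("some fact the rule matches on");
        // An AgendaFilter restricts firing to the single rule being tested.
        int fired = session.fireAllRules(match -> "my rule name".equals(match.getRule().getName()));
        // assert on 'fired' and on the resulting fact state here
    }
}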
Some consideration should also be given to rule design. Excessive iterative algorithms should not be implemented in DRL by hook or by crook; using a DRL function or some (static) Java method may be far superior. And testing those is much easier.
Also, following established rule design patterns helps a lot. Google "Design Patterns in Production Systems".

Running JUnit only on tests impacted after classes modification

Imagine you have a large project with several thousand JUnit tests.
Let's say that running all those tests takes 7 minutes.
This seems short when you build your project from an Ant/Maven script.
But when you are using Eclipse, you cannot run all your tests very often, because 7 minutes is too long.
So here is the question:
When you modify some classes, is there a way to let JUnit run only the tests that may have been impacted by those class changes?
I mean, this sounds feasible using classloader features: after running each test, it's possible to know which classes have been loaded for that test, and to store somewhere (even in memory) a signature of each class the test used.
When JUnit is launched again, it could, for each test, check whether the classes used by that test have been modified since the last run, and NOT launch the test if it passed before and no class impacting it has changed. (If the test was OK on the last run, it should still be OK.)
Does someone know if this has been done/implemented already ?
You could try using Infinitest, from either Eclipse or IntelliJ.