JUnit wait before proceeding

I am relying on a third-party API to perform some action, namely deleting a user, e.g. user.delete(). Unfortunately this method returns void and doesn't give me anything like a CompletableFuture back.
This causes me problems in my integration testing, as I wish to create a user in a number of tests and then delete it once each test is complete, ready for the next one. The next test won't run correctly if the third party hasn't finished the delete.
So I can think of two solutions in my test code:
Thread.sleep(1000);
Yuck: quite brittle, as I have no idea how long the delete will take. Or I can block until I can be sure the user no longer exists (a ResourceException is thrown when the user doesn't exist):
private void blockUntilUserRemoved() {
    try {
        do {
            servicesClient.getUser("donald.duck#disney.com");
        } while (true);
    } catch (ResourceException e) {
        return;
    }
}
This will work, but it feels wrong to use exceptions to control the logic like this. The question is: does it matter in test code?

That is as much as you can do with a void return type. So basically you do long polling here and check whether the user has really been deleted.
Other things I can think of: maybe a database trigger that fires on delete (no idea if that is even possible, and even if it is, it would require quite a lot of work just for tests). Another idea is to run the delete in a separate thread and have your main thread notified when the result comes in; but again, for tests this sounds like overkill to me.
Another small suggestion is Thread.onSpinWait (since Java 9); read its documentation to see how it might help (a bit).
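If you keep the polling approach, it may be worth bounding it with a deadline and backing off between attempts, so a failed delete can't hang the whole suite. A minimal sketch reusing the question's servicesClient and ResourceException (the timeout and poll interval are arbitrary):
private void blockUntilUserRemoved() throws InterruptedException {
    long deadline = System.currentTimeMillis() + 10_000; // give up after 10 seconds
    while (System.currentTimeMillis() < deadline) {
        try {
            servicesClient.getUser("donald.duck#disney.com");
        } catch (ResourceException e) {
            return; // user is gone; safe to start the next test
        }
        Thread.sleep(100); // back off between polls instead of spinning
    }
    throw new AssertionError("user still present after timeout; delete may have failed");
}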


n-layer using try catch

I'm working on an n-tier application and only do try/catch in my presentation layer. If an error occurs in the data layer or business layer, it is captured by the try/catch in my presentation layer. Is this good, or should I use try/catch in every method of each layer?
In general, it is better to catch an exception as close as possible to where it happens, to allow your code to potentially do something to fix, adapt to, or react to the issue. What this "do something" is depends upon the circumstance. For instance, if a service layer call fails, you may want to retry the call, because the service may have been too busy; whereas if your stored procedure is broken, then it does not matter how many times you retry: it will be broken until the logic is corrected in the database.
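As a hedged illustration of that transient case, sketched in Java (the service, exception type, and retry numbers are all hypothetical):
// All names hypothetical; the point is retrying close to the call site,
// where a retry can actually fix the problem.
InventoryReport fetchWithRetry(InventoryService service) throws InterruptedException {
    int attempts = 0;
    while (true) {
        try {
            return service.fetchReport();   // may fail transiently when the service is busy
        } catch (ServiceBusyException e) {  // hypothetical transient failure type
            if (++attempts >= 3) {
                throw e;                    // out of retries; let it bubble up
            }
            Thread.sleep(200L * attempts);  // simple linear backoff
        }
    }
}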
If all you want to do is log errors, then catching an error as close to where it happens is less useful.
Every project I have ever worked on had try-catch blocks in every layer of the application.
A corollary to try-catch is the concept of Fail Fast, which generally says that debugging productivity increases when a system immediately fails instead of failing slowly (read: after hours, days, weeks, months or even years of operation).
A good example of failing fast in the .NET Framework is the usage of int.TryParse() versus int.Parse(), like this:
int settingValue;
if (!int.TryParse(SomeSettingString, out settingValue))
{
    // Do something here (the conversion failed)
}
else
{
    // Do something else here (settingValue holds the parsed number)
}
If SomeSettingString can be converted to an int, then settingValue is set and the "Do something else" logic executes. Suppose a year from now the setting changes and the conversion starts to fail; now all of a sudden the "Do something here" logic executes instead, and it is a debugging adventure to figure out that this condition happens, if you can find out at all. Most issues like this seem to only happen in PRODUCTION and not DEV.
Now let's look at the same thing, but by failing fast, like this:
try
{
    int settingValue = int.Parse(SomeSettingString);
}
catch (Exception ex)
{
    // Fail fast and rethrow with context
    throw new Exception("Failed to parse setting", ex);
}
Now the exception happens immediately, the moment the setting string fails to convert to an int.
Note: Beware that failing fast can be sabotaged by empty catch blocks that "eat" exceptions. try blocks with empty catch blocks should be avoided, because they invariably lead to the "eaten" exception scenario.
Don't do this:
try
{
    // Exception waiting to happen here
}
catch (Exception ex)
{
    // Catch-all, because all exceptions derive from the Exception class,
    // so this will eat exceptions and pretend like they never happened
}
Here's what you want to avoid at any level: a bunch of methods that all look like this:
void method()
{
    try
    {
        // some code here that may potentially throw an exception
    }
    catch ( /* anything here */ )
    {
        // code right here to handle that exception
    }
}
If that's what you're doing, you may just as well go back to VB's old On Error Goto system, because you haven't gained anything. Exceptions provide two major advantages for error handling: the ability to easily handle different types of errors in different ways, and the ability for errors to be caught further up in the program's call stack. It's the second advantage that you're asking about here.
So we see that you do want to allow exceptions to "bubble up" to higher layers, as it's a big part of why we have exceptions at all... but do you always want to handle them at the presentation layer? No, we can do better. Sometimes, there may be business rules about how to respond to certain exceptions from the data layer. Sometimes, the data layer itself may be able to handle and recover from an exception without alerting the layer above it.
On the other hand, an advantage of exceptions is that they let you write simpler code in the lower layers, with fewer breaks from the normal flow of program execution for error-handling code. That comes at the price of placing more of the try/catch work in the presentation tier. Again, this doesn't mean that the presentation tier is the only place to handle exceptions, but it is the place to make the effort to ensure they don't get past your presentation layer uncaught. If you can't handle them anywhere else, do have a way to catch them in the presentation layer and show them to the user in a friendly way. It's also a good idea to use the same mechanism to log or report your exceptions, so you can get good metrics on where your application fails, and then use that information to make your application better.
When you do get to the point that you're inside a last-ditch exception handler, you may also want to consider terminating the application. If you really have unexpected things going on, such that unhandled exceptions make it through the presentation tier, there's a valid school of thought that says it may not be a good idea to continue running the program. But even in this case, you'll want to catch and try to report the exception, and then crash as gracefully as possible.
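In Java, for instance, such a last-ditch handler can hang off the standard uncaught-exception hook. A minimal sketch for a plain threaded app (the logger setup and runApplication() are placeholders):
import java.util.logging.Level;
import java.util.logging.Logger;

public final class Main {
    private static final Logger LOGGER = Logger.getLogger(Main.class.getName());

    public static void main(String[] args) {
        Thread.setDefaultUncaughtExceptionHandler((thread, ex) -> {
            // Last-ditch handler: report the failure, then terminate rather
            // than continue running in an untrusted state.
            LOGGER.log(Level.SEVERE, "Unhandled exception; shutting down", ex);
            System.exit(1); // crash as gracefully as possible
        });
        runApplication(); // placeholder for normal startup
    }

    private static void runApplication() {
        // application code goes here
    }
}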

Why should I not wrap every block in "try"-"catch"?

I have always been of the belief that if a method can throw an exception then it is reckless not to protect this call with a meaningful try block.
I just posted 'You should ALWAYS wrap calls that can throw in try, catch blocks.' to this question and was told that it was 'remarkably bad advice' - I'd like to understand why.
A method should only catch an exception when it can handle it in some sensible way.
Otherwise, pass it on up, in the hope that a method higher up the call stack can make sense of it.
As others have noted, it is good practice to have an unhandled exception handler (with logging) at the highest level of the call stack to ensure that any fatal errors are logged.
As Mitch and others stated, you shouldn't catch an exception that you do not plan on handling in some way. You should consider how the application is going to systematically handle exceptions when you are designing it. This usually leads to having layers of error handling based on the abstractions - for example, you handle all SQL-related errors in your data access code so that the part of the application that is interacting with domain objects is not exposed to the fact that there is a DB under the hood somewhere.
There are a few related code smells that you definitely want to avoid in addition to the "catch everything everywhere" smell.
"catch, log, rethrow": if you want scope-based logging, then write a class that emits a log statement in its destructor when the stack is unwinding due to an exception (à la std::uncaught_exception()). All you need to do is declare a logging instance in the scope you are interested in and, voilà, you have logging and no unnecessary try/catch logic.
"catch, throw translated": this usually points to an abstraction problem. Unless you are implementing a federated solution where you are translating several specific exceptions into one more generic one, you probably have an unnecessary layer of abstraction... and don't say that "I might need it tomorrow".
"catch, cleanup, rethrow": this is one of my pet-peeves. If you see a lot of this, then you should apply Resource Acquisition is Initialization techniques and place the cleanup portion in the destructor of a janitor object instance.
I consider code that is littered with try/catch blocks to be a good target for code review and refactoring. It indicates that either exception handling is not well understood or the code has become an amœba and is in serious need of refactoring.
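For readers outside C++, roughly the same "cleanup in a janitor object" idea can be approximated in Java with try-with-resources, where cleanup lives in close() and runs automatically as an exception propagates. A minimal sketch (the file name is arbitrary):
import java.io.FileWriter;
import java.io.IOException;

void writeReport() throws IOException {
    // The writer is closed on both the normal and the exceptional path,
    // so there is no catch/cleanup/rethrow block at the call site, and any
    // exception keeps propagating to a handler that can actually act on it.
    try (FileWriter out = new FileWriter("report.txt")) {
        out.write("example contents");
    }
}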
Because the next question is, "I've caught an exception; what do I do next?" What will you do? If you do nothing, that's error hiding, and the program could "just not work" without any chance to find out what happened. You need to understand what exactly you will do once you've caught the exception, and only catch it if you know.
You don't need to cover every block with try-catches because a try-catch can still catch unhandled exceptions thrown in functions further down the call stack. So rather than have every function have a try-catch, you can have one at the top level logic of your application. For example, there might be a SaveDocument() top-level routine, which calls many methods which call other methods etc. These sub-methods don't need their own try-catches, because if they throw, it's still caught by SaveDocument()'s catch.
This is nice for three reasons. First, it's handy because you have one single place to report an error: the SaveDocument() catch block(s). There's no need to repeat this throughout all the sub-methods, and it's what you want anyway: one single place to give the user a useful diagnostic about something that went wrong.
Second, the save is cancelled whenever an exception is thrown. With every sub-method try-catching, if an exception is thrown you end up in that method's catch block, execution leaves the function, and it carries on through SaveDocument(). If something's already gone wrong, you likely want to stop right there.
Third, all your sub-methods can assume every call succeeds. If a call failed, execution will jump to the catch block and the subsequent code is never executed. This can make your code much cleaner. For example, here's the version with error codes:
int ret = SaveFirstSection();
if (ret == FAILED)
{
    /* some diagnostic */
    return;
}
ret = SaveSecondSection();
if (ret == FAILED)
{
    /* some diagnostic */
    return;
}
ret = SaveThirdSection();
if (ret == FAILED)
{
    /* some diagnostic */
    return;
}
Here's how that might be written with exceptions:
// these throw if failed, caught in SaveDocument's catch
SaveFirstSection();
SaveSecondSection();
SaveThirdSection();
Now it's much clearer what is happening.
Note that exception-safe code can be trickier to write in other ways: you don't want to leak any memory if an exception is thrown. Make sure you know about RAII, STL containers, smart pointers, and other objects which free their resources in their destructors, since destructors run as the stack unwinds past them.
Herb Sutter wrote about this problem here. For sure worth reading.
A teaser:
"Writing exception-safe code is fundamentally about writing 'try' and 'catch' in the correct places." Discuss.
Put bluntly, that statement reflects a fundamental misunderstanding of exception safety. Exceptions are just another form of error reporting, and we certainly know that writing error-safe code is not just about where to check return codes and handle error conditions.
Actually, it turns out that exception safety is rarely about writing 'try' and 'catch' -- and the more rarely the better. Also, never forget that exception safety affects a piece of code's design; it is never just an afterthought that can be retrofitted with a few extra catch statements as if for seasoning.
As stated in other answers, you should only catch an exception if you can do some sort of sensible error handling for it.
For example, in the question that spawned your question, the questioner asks whether it is safe to ignore exceptions for a lexical_cast from an integer to a string. Such a cast should never fail. If it did fail, something has gone terribly wrong in the program. What could you possibly do to recover in that situation? It's probably best to just let the program die, as it is in a state that can't be trusted. So not handling the exception may be the safest thing to do.
If you always handle exceptions immediately in the caller of a method that can throw an exception, then exceptions become useless, and you'd better use error codes.
The whole point of exceptions is that they need not be handled in every method in the call chain.
The best advice I've heard is that you should only ever catch exceptions at points where you can sensibly do something about the exceptional condition, and that "catch, log and release" is not a good strategy (if occasionally unavoidable in libraries).
I was given the "opportunity" to salvage several projects where executives had replaced the entire dev team because the app had too many errors and the users were tired of the problems and run-around. These code bases all had centralized error handling at the app level, as the top-voted answer describes. If that answer is best practice, why didn't it work and allow the previous dev teams to resolve the issues? Perhaps sometimes it doesn't work? The answers above don't mention how long devs spend fixing single issues. If time to resolve issues is the key metric, instrumenting code with try..catch blocks is a better practice.
How did my team fix the problems without significantly changing the UI? Simple: every method was instrumented with a try..catch block, and everything was logged at the point of failure, with the method name, the method parameter values concatenated into a string, the error message, the app name, the date, and the version. With this information, developers can run analytics on the errors to identify the exception that occurs the most, or the namespace with the highest number of errors. It can also validate that an error occurring in a module is properly handled and not caused by multiple reasons.
Another benefit is that developers can set one breakpoint in the error-logging method, and with a single click of the "step out" debug button they are in the method that failed, with full access to the actual objects at the point of failure, conveniently available in the Immediate window. This makes it very easy to debug, and allows dragging execution back to the start of the method to duplicate the problem and find the exact line. Does centralized exception handling allow a developer to replicate an exception in 30 seconds? No.
The statement "A method should only catch an exception when it can handle it in some sensible way" implies that developers can predict, before release, every error that can happen. If that were true, a top-level app exception handler wouldn't be needed, and there would be no market for Elasticsearch and Logstash.
This approach also lets devs find and fix intermittent issues in production! Would you like to debug without a debugger in production? Or would you rather take calls and get emails from upset users? This lets you fix issues before anyone else knows, and without having to email, IM, or Slack with support, because everything needed to fix the issue is right there. 95% of issues never need to be reproduced.
To work properly, this needs to be combined with centralized logging that can capture the namespace/module, class name, method, inputs, and error message, and store them in a database, so they can be aggregated to highlight which method fails the most and should be fixed first.
Sometimes developers choose to throw exceptions up the stack from a catch block, but that approach is on the order of 100 times slower than normal code that doesn't throw. Catch and release with logging is preferred.
This technique was used to quickly stabilize an app that failed every hour for most users in a Fortune 500 company, developed by 12 devs over 2 years. Using it, 3000 different exceptions were identified, fixed, tested, and deployed over 4 months; that averages out to a fix every 15 minutes for 4 months.
I agree that it is not fun to type in everything needed to instrument the code, and I prefer not to look at the repetitive code, but adding 4 lines of code to each method is worth it in the long run.
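A rough Java sketch of the per-method instrumentation described above; the repository, the Customer type, and the ErrorLog sink are hypothetical stand-ins for whatever your project uses:
public Customer loadCustomer(int customerId, String region) {
    try {
        return customerRepository.find(customerId, region); // hypothetical call that may fail
    } catch (RuntimeException ex) {
        // Log at the point of failure: method name, parameter values, and the error.
        String params = "customerId=" + customerId + ", region=" + region;
        ErrorLog.write("loadCustomer", params, ex); // hypothetical centralized logging sink
        throw ex; // rethrow, or handle here, per the catch-and-release tradeoff above
    }
}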
I agree with the basic direction of your question to handle as many exceptions as possible at the lowest level.
Some of the existing answers go like "You don't need to handle the exception; someone else will do it, up the stack." In my experience that is a bad excuse not to think about exception handling in the piece of code currently being developed, making the exception handling someone else's problem, or a problem for later.
That problem grows dramatically in distributed development, where you may need to call a method implemented by a co-worker. Then you have to inspect a nested chain of method calls to find out why he/she is throwing some exception at you, one which could have been handled much more easily in the most deeply nested method.
The advice my computer science professor gave me once was: "Use Try and Catch blocks only when it's not possible to handle the error using standard means."
As an example, he told us that if a program ran into some serious issue in a place where it's not possible to do something like:
int f()
{
    // Do stuff
    if (condition == false)
        return -1;
    return 0;
}

int result = f();
if (result != 0)
{
    // handle error
}
Then you should be using try/catch blocks. While you can use exceptions to handle this, it's generally not recommended because exceptions are expensive performance-wise.
If you want to test the outcome of every function, use return codes.
The purpose of Exceptions is so that you can test outcomes LESS often. The idea is to separate exceptional (unusual, rarer) conditions out of your more ordinary code. This keeps the ordinary code cleaner and simpler - but still able to handle those exceptional conditions.
In well-designed code deeper functions might throw and higher functions might catch. But the key is that many functions "in between" will be free from the burden of handling exceptional conditions at all. They only have to be "exception safe", which does not mean they must catch.
I would like to add to this discussion that, since C++11, wrapping calls this way does make a lot of sense, as long as every catch block rethrows the exception up until the point where it can and should be handled. This way a backtrace can be generated. I therefore believe the previous opinions are partly outdated.
Use std::nested_exception and std::throw_with_nested
It is described on StackOverflow here and here how to achieve this.
Since you can do this with any derived exception class, you can add a lot of information to such a backtrace!
You may also take a look at my MWE on GitHub, where a backtrace would look something like this:
Library API: Exception caught in function 'api_function'
Backtrace:
~/Git/mwe-cpp-exception/src/detail/Library.cpp:17 : library_function failed
~/Git/mwe-cpp-exception/src/detail/Library.cpp:13 : could not open file "nonexistent.txt"
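For comparison, Java has this idea built in: every exception can carry a cause, so wrapping at each boundary yields a "Caused by:" chain in the printed stack trace, much like the backtrace above. A small sketch with a hypothetical readFile helper:
String readConfig(String path) {
    try {
        return readFile(path); // hypothetical low-level helper that may throw
    } catch (RuntimeException e) {
        // The cause is preserved; printing the stack trace shows a
        // "Caused by:" entry for each wrap point.
        throw new IllegalStateException("could not read config " + path, e);
    }
}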
I feel compelled to add another answer, although Mike Wheat's answer sums up the main points pretty well. I think of it like this: when you have methods that do multiple things, you are multiplying the complexity, not adding to it.
In other words, a method that is wrapped in a try/catch has two possible outcomes: the non-exception outcome and the exception outcome. When you're dealing with a lot of methods, this blows up exponentially, beyond comprehension.
Exponentially, because if each method branches in two different ways, then every time you call another method you're squaring the previous number of potential outcomes. By the time you've called four methods you are up to 256 possible outcomes, at a minimum. Compare this to not doing a try/catch in every single method, and you only have one path to follow.
That's basically how I look at it. You might be tempted to argue that any type of branching does the same thing but try/catches are a special case because the state of the application basically becomes undefined.
So in short, try/catches make the code a lot harder to comprehend.
Besides the above advice, personally I use try + catch + throw for the following reasons:
At the boundary between different coders' code, I use try + catch + throw in the code written by myself, before the exception is thrown to a caller written by others. This gives me a chance to know that some error condition occurred in my code, at a place much closer to the code that originally threw the exception; the closer, the easier it is to find the reason.
At the boundary of modules, even though different modules may be written by the same person.
For learning and debugging purposes. In this case I use catch(...) in C++ and catch(Exception ex) in C#. For C++, the standard library does not throw very many exceptions, so this case is rare there. But it is commonplace in C#: C# has a huge library and a mature exception hierarchy, and the C# library code throws tons of exceptions. In theory I (and you) should know every exception the functions I call can throw, know why each is thrown, and know how to handle them gracefully (pass them on, or catch and handle in place). Unfortunately, in reality it is very hard to know everything about the potential exceptions before I write a single line of code. So I catch everything, and let my code speak aloud, via logging (in the production environment) or an assert dialog (in the development environment), whenever an exception actually occurs. In this way I add exception handling code progressively. I know this conflicts with the good advice above, but in reality it works for me, and I don't know a better way to deal with this problem.
You don't need to cover every part of your code with try-catch. The main use of a try-catch block is to handle errors and catch bugs/exceptions in your program. Some uses of try-catch:
You can use this block where you want to handle an exception, or simply where a block of written code may throw one.
If you want to dispose of your objects immediately after their use, you can use a try/finally block.

Fix common library functions, or abandon them?

Imagine I have a function with a bug in it:
Pseudo-code:
void Foo(LPVOID o)
{
    // implementation details omitted
}
The problem is the user passed null:
Object bar = null;
...
Foo(bar);
Then the function might crash due to an access violation, but it could also happen to work fine. The bug is that the function should have been checking for the invalid case of being passed null, but it never did. It was never an issue because developers were trusted to know what they were doing.
If I now change the function to:
Pseudo-code:
void Foo(LPVOID o)
{
    if (o == null) throw new EArgumentNullException("o");
    // implementation details omitted
}
then people who were happily using the function, and happened not to get an access violation, will now suddenly begin seeing an EArgumentNullException.
Do I continue to let people use the function improperly and create a new version of the function? Or do I fix the function to include what it should have had originally?
So now the moral dilemma: do you ever add new sanity checks, safety checks, or assertions to existing code? Or do you call the old function abandoned and make a new one?
Consider a bug so common that Microsoft had to fix it for developers:
MessageBox(GetDesktopWindow(), ...);
You never, ever, ever want to make a window modal against the desktop; you'll lock up the system. Do you continue to let developers lock up the user's computer? Or do you change the function to:
MessageBox(HWND hWndParent, ...)
{
    if (hWndParent == GetDesktopWindow())
        throw new Exception("hWndParent cannot be the desktop window. Use NULL instead.");
    ...
}
In reality Microsoft changed the Window Manager to auto-fix the bad parameter:
MessageBox(HWND hWndParent, ...)
{
    if (hWndParent == GetDesktopWindow())
        hWndParent = 0;
    ...
}
In my made-up example there is no way to patch the function: if I wasn't given an object, I can't do what I need to do with it.
Do you risk breaking existing code by adding parameter validation? Do you let existing code continue to be wrong, getting incorrect results?
The problem is that not only are you fixing a bug, but you are changing the semantic signature of the method by introducing an error case.
From a software engineering perspective, I would advocate trying to specify methods as precisely as possible (for instance using pre- and post-conditions), but once the method is out there, specification changes are a no-go (or at least you would have to check all occurrences of the method), and a new method would be better.
I'd keep the old function and simply have it emit a warning that notifies you of every (possibly) wrong use, and then I'd just kick the developer who used it wrong until they use it properly.
You cannot catch everything. What if someone wrote MakeLocation("Ian Boyd", "is stupid")? Would you create a new function or change the function to catch that? No, you would fire the developer (or at least punish them).
Of course this requires that you document what your function requires as input.
This is where having automated tests (unit tests, integration tests, automated functional tests) is great: they give you the power to change existing code with confidence.
When making changes like this, I would suggest finding all usages and ensuring they behave how you believe they should.
I myself would make bug fixes to the existing function rather than duplicating it, 99% of the time. If the fix changes behavior a lot and there are a lot of calls to the function, you need to be very sure of your change.
So go ahead, make your change, run your unit tests, then your automated functional tests. Fix any errors and you're golden!
If your code has a bug in it, you should do what you normally do when any bug is reported. One part of that is assessing the impact of fixing it and of not fixing it. Sometimes the right thing to do with a bug is to not fix it, because the behaviour it exposes has become accepted. Sometimes the cost of fixing it, or the inconvenience of releasing a fix outside the normal release cycle, stops you from releasing a fix for a while. This isn't a moral dilemma; it's an economic question of costs and benefits. If you are disturbed at the thought of having known bugs in your published code, publish a known-bugs list.
One option none of the other respondents seems to have suggested is to wrap the buggy function in another function which imposes the new behaviour that you require. In a world where functions can run to many lines, it is sometimes less likely to introduce new bugs if you preserve a 99%-correct piece of code and make the change without modifying the existing code. Of course, this is not always possible.
Two choices:
Give the error-checking version a new name and deprecate the old version, as sketched below: one version later, have the old one start issuing warnings (compile-time if possible, run-time if necessary); two versions later, remove it.
[Not always possible] Place the newly introduced error check in such a way that it only triggers if the unmodified version would crash or produce undefined behavior. (That way, users who were taking care in their code don't get any unpleasant surprises.)
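A minimal Java sketch of that first option (all names here are illustrative): deprecate the old entry point so call sites get compiler warnings, and route new code through the checking version.
/** New, validating entry point. */
public void fooChecked(Object o) {
    if (o == null) {
        throw new IllegalArgumentException("o must not be null");
    }
    fooImpl(o);
}

/** Old entry point, kept so existing callers still compile and behave as before. */
@Deprecated
public void foo(Object o) {
    fooImpl(o); // unchanged behavior; @Deprecated makes the compiler warn at call sites
}

private void fooImpl(Object o) {
    // implementation details omitted
}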
It entirely depends on you, your codebase, and your users.
If you are Microsoft and you have a bug in your API that is used by millions of devs around the world, then you will probably want to just create a new function and update the docs on the old one. If you can, you would also want to update the compiler to give warnings as well. (Though even then you may be able to change the existing system; remember when MS moved VC to the C++ standard and you had to update all of your includes to <iostream> and add using namespace std; to get simple, existing console apps working again?)
It basically depends on what the function is. If it is something basic that will have massive ripple effects, then fixing it could break a lot of code. If it is just an ancillary function, then you may as well fix it. Of course, if you are Microsoft and your other code depends on a bug in one of your own functions, then you probably should fix it, since that is just plain embarrassing to keep. If other devs rely on the bug (that you created), then you may have an obligation to your users not to break the code that you caused to be buggy.
If you are a small company or an independent developer, then sure, go ahead and fix the function. If you only need to update yourself or a few people on the new usage, then fixing it is the best solution, especially since it is not even a big deal: all it really requires is an added note in the docs for the function, e.g. "do not pass NULL", or "an exception is thrown if hWnd is the desktop", etc.
Another option, as a sort of compromise, would be to create a wrapper function. You could create a small, inline function that checks the args and then calls the existing function. That way you don't really have to do much in the short term, and eventually, when people have moved to the new one, you can deprecate or even remove the old one, moving its body into the new one after the checks.
In most scenarios it is better to fix a buggy function, particularly if you are merely adding argument checks as opposed to completely changing the behavior of the function. It is not really a good idea to facilitate (read: encourage) bad coding just because fixing it would break some existing code (especially if the code is free!). Think about it: if someone is creating a new program, they can do it right from the start instead of relying on a bug. If they are re-compiling an old program that depends on the bug, they can just update the code. Again, it depends on how messy and convoluted the code is, how many people are affected, and whether or not they pay you, but it is quite common to have to update old code, for example to initialize variables that hadn't been, or to check for error codes, etc.
To sum up, in your specific example (given the information provided), you should just fix it.

Should a TDD test always fail first?

As a follow-on to the discussion in the comments of this answer, should a TDD test always be made to fail first?
Consider the following example. If I am writing an implementation of LinkedHashSet and one test tests that after inserting a duplicate, the original is in the same iteration order as before the insert, I might want to add a separate test that the duplicate is not in the set at all.
The first test will be observed to fail first, and then implemented.
The problem is that it is quite likely that the implementation that makes the first test pass uses another set implementation to store the data, so, just as a side effect, the second test already passes.
I would think that the main purpose of seeing the test fail is to ensure that the test is a good test (many times I've written a test I thought would fail but didn't because the test was written wrong). But if you are confident that the test you write does indeed test something, isn't it valuable to have to ensure that you don't break that behavior later?
Of course it's valuable, because then it is a useful regression test. In my opinion, regression tests are more important than testing newly developed code.
To say that they must always fail first is taking a rule beyond practicality.
Yes, TDD tests must fail before they turn green (work). Otherwise you do not know if you have a valid test.
TDD for me is more of a design tool, not an afterthought. So there is no other way: the test will fail simply because there is no code to make it pass yet, and only after I create that code can the test pass.
I think the point of "failing first" is to avoid kidding yourself that a test worked. If you have a set of tests checking the same method with different parameters, one (or more) of them is likely to pass from the start. Consider this example:
public String doFoo(int param) {
    // TODO implement me
    return null;
}
The tests would be something like:
public void testDoFoo_matches() {
    assertEquals("Geoff Hurst", createBar().doFoo(1966));
}
public void testDoFoo_validNoMatch() {
    assertEquals("no match", createBar().doFoo(1));
}
public void testDoFoo_outOfRange() {
    assertEquals(null, createBar().doFoo(-1));
}
public void testDoFoo_tryAgain() {
    assertEquals("try again", createBar().doFoo(0));
}
One of those tests will pass, but clearly the others won't, so you have to implement the code properly for the set of tests to pass. I think that is the true requirement. The spirit of the rule is to ensure you have thought about the expected outcome before you start hacking.
What you're actually asking is how you can test the test, to verify that it is valid and tests what you intend.
Making it fail at first is an OK option, but note that even if it fails when you expect it to, and succeeds after you change the code to make it succeed, that still doesn't mean that your test actually tested what you wanted... Of course, you could write some other classes which behave differently, to test your test. But that would be a test which tests your original test, and how do you know that that new test is valid? :-)
So making a test fail first is a good idea, but it still isn't foolproof.
IMHO, the importance of failing first is to make sure that the test you created doesn't have a flaw. You could, for instance, forget the assert in your test, and you'd maybe never know that.
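For instance, borrowing the doFoo example from earlier, this test can never fail; only watching it pass against unimplemented code would expose the flaw:
public void testDoFoo_forgotTheAssert() {
    String result = createBar().doFoo(1966);
    // Oops: the assertEquals call was never written, so this passes no matter what.
}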
A similar case occurs when you're doing boundary tests and you've already built the code that covers them: it is still recommended to test them.
I think it's not a big problem if your test doesn't fail first, but you have to make sure it is indeed testing what it should (by debugging it, maybe).

MSTest executing all my tests simultaneously breaks tests - what to do

Ok, this is annoying.
MSTest executes all of my tests simultaneously, which causes some of them to fail. No, this is not because my tests are fragile and susceptible to execution order; rather, it is because this is a demo project in which I use a Db4o object database running from a file.
So I have a couple of data access tests checking that my repositories work correctly and, boom, MSTest blows up. Since it tries to run all its tests at the same time, it gets an error when a test tries to access the database file while other tests are using it.
Can anyone think of a quick way around this? I don't want to ditch MSTest (OK, I do, but that's another story), and I sure as heck don't want to run a full-blown database service, so I'll take anything that forces MSTest not to run tests simultaneously, or tricks with opening files.
Anyone have any ideas?
You might want to try using a Monitor, entering it in TestInitialize and exiting it in TestCleanup. If your test classes all depend on the external file, you'll need to use a single lock object shared by all of them.
public static class LockClass
{
    public static object LockObject = new object();
}
...
[TestInitialize]
public void TestSetup()
{
    Monitor.Enter(LockClass.LockObject);
}
[TestCleanup]
public void TestCleanup()
{
    Monitor.Exit(LockClass.LockObject);
}
This should force all of your tests to run serially, and as long as each test simply passes or fails, they should all run fine. If any of them throws an unexpected exception, though, all the rest will hang, since the Exit code won't be run for the test that blows up.
I gave locks a try in this manner.
What I experienced, however, was that VS2010 does not execute the tests in parallel by default; it executes them sequentially, in a single thread. (Parallel execution can be switched on, but that would not completely prevent the problem.)
What I find very disturbing is that the sequential execution takes place in arbitrary order, even across test classes!
So for example an execution order may look like this:
Class A - TestInitialize: Lock will be established
Class A - TestMethod1: Will execute, OK
Class B - TestInitialize: Lock will be established
=> Thread will be blocked
=> The complete unit test run will be blocked! The cause is that there is no other thread that would go on executing the methods of Class A, so Monitor.Exit() will never be reached.
I do not understand why MS does this. Other unit test frameworks (e.g. JUnit) execute the test methods class by class; otherwise there is interleaving of SetUp/TearDown methods, which causes the chaos described above...
Is there anybody out there who knows how to prevent MSTest from jumping between test classes?
(Currently I use ReSharper's test runner, which behaves as expected, executing all test methods of one class before proceeding with the next class.)
Use an Ordered Test
http://msdn.microsoft.com/en-us/library/ms182630(v=VS.90).aspx