I'm looking for the best practice for the following (simplified) scenario:
@Test
public void someTest() {
    for (String someText : someTexts) {
        Assert.assertTrue(checkForValidity(someText));
    }
}
This test iterates through several thousand texts, and in this case I don't want it to stop at the first failure. I want the failures to be buffered and, if there were any, the test to fail at the end. Has JUnit got something on board for this?
First of all, this isn't really the correct way to implement it. JUnit allows parameterizing tests by defining a collection of inputs/outputs with the Parameterized test runner. Doing it this way ensures that each sample becomes its own test instance, so the test report clearly states which samples passed and which ones failed.
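For illustration, here is a minimal sketch of that approach with JUnit 4's Parameterized runner; the sample strings and the checkForValidity stub are placeholders standing in for the question's data and validation logic:

import java.util.Arrays;
import java.util.Collection;

import org.junit.Assert;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;

@RunWith(Parameterized.class)
public class TextValidityTest {

    @Parameterized.Parameters(name = "{0}")
    public static Collection<Object[]> data() {
        // Each entry becomes its own test instance and shows up individually in the report.
        return Arrays.asList(new Object[][] {
                { "first sample text" },
                { "second sample text" }
        });
    }

    private final String someText;

    public TextValidityTest(String someText) {
        this.someText = someText;
    }

    @Test
    public void textIsValid() {
        Assert.assertTrue(checkForValidity(someText));
    }

    // Placeholder for the validation logic from the question.
    private static boolean checkForValidity(String text) {
        return text != null && !text.isEmpty();
    }
}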
If you still insist on doing it your way, you should have a look at AssertJ's Soft Assertions, which allow "swallowing" individual assertion failures, accumulating them, and only reporting them once the test has finished. The linked documentation section uses a nice example and is definitely worth reading.
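A rough sketch of what the soft-assertion variant could look like, assuming AssertJ is on the classpath; someTexts and checkForValidity are placeholders for the question's data and check:

import java.util.Arrays;
import java.util.List;

import org.assertj.core.api.SoftAssertions;
import org.junit.Test;

public class TextValiditySoftAssertionTest {

    // Placeholder data and check standing in for the question's someTexts / checkForValidity.
    private final List<String> someTexts = Arrays.asList("first sample text", "second sample text");

    private boolean checkForValidity(String text) {
        return text != null && !text.isEmpty();
    }

    @Test
    public void allTextsAreValid() {
        SoftAssertions softly = new SoftAssertions();
        for (String someText : someTexts) {
            // Each failure is recorded instead of aborting the loop.
            softly.assertThat(checkForValidity(someText))
                  .as("validity of: %s", someText)
                  .isTrue();
        }
        // Fails the test at the end, reporting every collected failure at once.
        softly.assertAll();
    }
}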
In my Coded UI Test project, I need to check whether a few labels or messages are consistent with the context. Those checks are not critical, though, and if something is inconsistent I need to output it only as a warning.
Note that I'm using nested ordered tests so that I can run a single global ordered test with vstest.console.exe and get the overall test coverage report in one shot.
Until now I was creating assertions to check those consistencies, but an assertion failure leads to a test failure, then to an ordered test failure, and then to a playback stop.
I tried changing the Playback.PlaybackSettings.ContinueOnError value before and after the assertion: this works as I expect, in that the assertion failure is reported as a warning in the HTML report file. But it still causes the ordered test to stop, and then my global ordered test chaining fails...
I also tried using TestContext.WriteLine instead of creating an assertion, but it seems that this is not output in the HTML report.
So my question is:
Is there any way to create an assertion only as a warning, so that it is output in the HTML report file but doesn't lead to a test failure?
Thanks a lot for any answer and help on this ;)
So I got my solution by developing my own warning engine to integrate warnings into the test report, because I found no existing way to do that with the current Coded UI Test assertion engine.
I'll try to take some time to post the generic parts of the code structure with the comments translated into English (we're French, so the default comments are in French for now...), but here are the main steps:
1. Create a template based on the original UITestActionLog.html report structure of the Coded UI Test engine, keeping only the opening block and the JavaScript functions and CSS declarations in it.
2. Create an assertion class with a main function that manages inserting the warning HTML block into the HTML report first created from that template.
3. Create custom assert functions that call the main function from wherever needed at runtime, plus a custom Stopwatch to inject the elapsed time into the report (because I couldn't find a way to get the elapsed time back directly from the Coded UI Test engine).
That's it.
This is just a suggestion for one way to do it, maybe not the best one, but it worked for me. I'll try to take the time to post code blocks to make it clearer.
I am running JUnit tests using Eclipse Luna. I have implemented a @Test method. Within the @Test method I loop over multiple records and use Assert.assertEquals for non-XML messages and XMLAssert.assertXMLEqual for XML messages.
The problem is that whether I run the JUnit test with a single test case or with multiple ones, I do not get the proper result in the JUnit view. It always shows "Runs: 1/1" and does not show the correct count of runs. Even the failures and successes are not shown correctly. Am I missing something here?
If you only have a single @Test method, there is only one thing running, so Runs: 1/1 is correct. If you want more to show up, put each assertion in its own test.
When I run my xUnit unit tests I sometimes get an error message like "Transaction (Process ID 58) was deadlocked on lock resources with another process and has been chosen as the deadlock victim" on one or more of the tests, seemingly randomly. If I re-run any failing test on its own it passes.
What should I do to prevent this? Is there an option to run the tests one-after-another instead of all at once?
(N.B. I'm running the tests over the API methods in my ASP.Net 5 MVC controllers under Visual Studio 2015)
Here's an example of one of my occasionally failing tests:
[Fact]
private void TestREAD()
{
    Linq2SQLTestHelpers.SQLCommands.AddCollections(TestCollections.Select(collection => Convert.Collection2DB(collection)).ToList(), TestSettings.LocalConnectionString);
    foreach (var testCollection in TestCollections)
    {
        var testCollectionFromDB = CollectionsController.Get(testCollection.Id);
        Assert.Equal(testCollection.Description, testCollectionFromDB.Description);
        Assert.Equal(testCollection.Id, testCollectionFromDB.Id);
        Assert.Equal(testCollection.IsPublic, testCollectionFromDB.IsPublic);
        Assert.Equal(testCollection.LayoutSettings, testCollectionFromDB.LayoutSettings);
        Assert.Equal(testCollection.Name, testCollectionFromDB.Name);
        Assert.Equal(testCollection.UserId, testCollectionFromDB.UserId);
    }
}
There are two methods the test calls, here's the controller method:
[HttpGet("{id}")]
public Collection Get(Guid id)
{
    var sql = @"SELECT * FROM Collections WHERE id = @id";
    using (var connection = new SqlConnection(ConnectionString))
    {
        var collection = connection.Query<Collection>(sql, new { id = id }).First();
        return collection;
    }
}
and here's the helper method:
public static void AddCollections(List<Collection> collections, string connectionString)
{
    using (var db = new DataClassesDataContext(connectionString))
    {
        db.Collections.InsertAllOnSubmit(collections);
        db.SubmitChanges();
    }
}
(Note that I'm using Dapper as the micro-ORM in the controller method and so, to avoid potentially duplicating errors in the test, I'm using LINQ to SQL instead in the test to set-up and clean-up test data.)
There are also database calls in the unit test class's constructor and Dispose method. I can add them to the post if needed.
OK, so this looks like a plain vanilla case of deadlocks in your app and the need to handle them - what is your plan on the app side?
The tests and their data rigging can potentially fall prey to the same thing. xUnit doesn't have anything to address this, and I'd strongly argue it shouldn't.
So in both the test and the app, you need failure/retry management.
For a web app, you can fall back on the "show them a picture of a whale and let them try again" pattern, but ultimately you want a real solution.
For a test, you don't want whales, and you definitely want to handle it, i.e. not be brittle.
I'd be using Polly to wrap retry decoration around anything in either the app or the tests that's prone to significant failures -- your exercise is to figure out what the significant failures are in your context.
Under normal circumstances a database with a single reader/writer operating synchronously shouldn't deadlock. Analysing why it happens is a matter of doing the analysis on the DB side. The tools on that side would also likely quickly reveal whether, for example, some aspect of your overall System Under Test is resulting in competing work.
(Obviously your snippets are incomplete as there is a disconnect between CollectionsController.Get(testCollection.Id) and the fact that the controller method is not static - the point of this discussion should not be down at that level IMO though)
I have a method under test. Within its call stack, it calls a DAO which in turn uses JDBC to chat with the DB. I am not really interested in knowing what will happen at the JDBC layer; I already have tests for that, and they work wonderfully.
I am trying to mock the DAO layer, using JMock, so I can focus on the details of the method under test. Here is a basic representation of what I have.
@Test
public void myTest()
{
    context.checking(new Expectations() {
        {
            allowing(myDAO).getSet(with(any(Integer.class)));
            will(returnValue(new HashSet<String>()));
        }
    });

    // Used only to show the mock is working but not really part of this test.
    // These asserts pass.
    Set<String> temp = myDAO.getSet(Integer.valueOf(12));
    Assert.assertNotNull(temp);
    Assert.assertTrue(temp.isEmpty());

    MyTestObject underTest = new MyTestObject();

    // Deep in this call MyDAO is initialized and getSet() is called.
    // The mock is failing to return the Set as desired. getSet() is run as
    // normal and throws a NPE since JDBC is not (intentionally) setup. I want
    // getSet() to just return an empty set at this layer.
    underTest.thisTestMethod();
    ...

    // Other assertions that would be helpful for this test if mocking
    // was working.
}
It seems, from what I have learned creating this test, that I cannot mock indirect objects using JMock. Or I am not seeing a key point. I'm hoping the second one is true.
Thoughts and thank you.
From the snippet, I'm guessing that MyTestObject uses reflection, or a static method or field to get hold of the DAO, since it has no constructor parameters. JMock does not do replacement of objects by type (and any moment now, there'll be a bunch of people recommending other frameworks that do).
This is on purpose. A goal of JMock is to highlight object design weaknesses, by requiring clean dependencies and focussed behaviour. I find that burying DAO/JDBC access in the domain objects eventually gets me into trouble. It means that the domain objects have secret dependencies that make them harder to understand and change. I prefer to make those relationships explicit in the code.
So you have to get the mocked object somehow into the target code. If you can't or don't want to do that, then you'll have to use another framework.
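For illustration, a hedged sketch of what that might look like if MyTestObject is given a constructor that accepts the DAO (an assumed change; MyDAO is also assumed to be an interface, which JMock can mock without extra setup):

import java.util.HashSet;

import org.jmock.Expectations;
import org.jmock.Mockery;
import org.junit.Test;

public class MyTestObjectTest {

    private final Mockery context = new Mockery();
    private final MyDAO myDAO = context.mock(MyDAO.class);

    @Test
    public void usesTheInjectedDao() {
        context.checking(new Expectations() {{
            allowing(myDAO).getSet(12);
            will(returnValue(new HashSet<String>()));
        }});

        // The mock is passed in explicitly instead of being looked up
        // deep inside the call stack, so the real JDBC code is never reached.
        MyTestObject underTest = new MyTestObject(myDAO); // assumed constructor
        underTest.thisTestMethod();

        context.assertIsSatisfied();
    }
}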
P.S. One point of style: you can simplify this test a little:
context.checking(new Expectations() {{
    allowing(myDAO).getSet(12); will(returnValue(new HashSet<String>()));
}});
Within a test, you should really know what values to expect and feed those into the expectation. That makes it easier to see the flow of values between the objects.
I have a method which works like this:
public void deploy(UserInput userInput) {
    if (userInput is wrong)
        return;
    //start deployment process
}
The userInput is checked by individual validations in the deploy method. Now I'd like to write a JUnit test to verify that the user input checks behave correctly (i.e. whether the deployment process would start or not, depending on right or wrong user input). So I need to test this with both right and wrong user inputs. I could do this by checking whether anything has been deployed at all, but in this case that is very cumbersome.
So I wonder whether it's somehow possible to know, in the corresponding JUnit test, whether the deploy method was aborted or not (due to wrong user input)? (By the way, changing the deploy method is not an option.)
As you describe your problem, you can only check your method for side effects, or for whether it throws an exception. The easiest way to do this is to use a mocking framework like JMockit or Mockito. You have to mock the first method that is called after the checking of the user input has finished:
public void deploy(UserInput userInput) {
    if (userInput is wrong)
        return;
    //start deployment process
    startDeploy(); // mock this method
}
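For example, a hedged sketch with Mockito, assuming startDeploy() is at least package-private (not private) so it can be stubbed on a spy; Deployer is an assumed name for the class that owns deploy(), and the UserInput factory methods are placeholders:

import static org.mockito.Mockito.doNothing;
import static org.mockito.Mockito.never;
import static org.mockito.Mockito.spy;
import static org.mockito.Mockito.times;
import static org.mockito.Mockito.verify;

import org.junit.Test;

public class DeployerTest {

    @Test
    public void deploymentOnlyStartsForValidInput() {
        Deployer deployer = spy(new Deployer());
        doNothing().when(deployer).startDeploy(); // neutralise the real deployment step

        deployer.deploy(invalidUserInput());
        verify(deployer, never()).startDeploy();  // aborted by the input checks

        deployer.deploy(validUserInput());
        verify(deployer, times(1)).startDeploy(); // checks passed, deployment started
    }

    // Placeholder factories; replace with real UserInput construction.
    private UserInput invalidUserInput() { return new UserInput(/* wrong values */); }
    private UserInput validUserInput()   { return new UserInput(/* right values */); }
}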
You can also extend the class under test, and override startDeploy() if it's possible. This would avoid having to use a mocking framework.
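And a minimal sketch of that subclass-and-override variant, assuming startDeploy() is neither private nor final; the Deployer class name and the UserInput construction are again placeholders:

import org.junit.Assert;
import org.junit.Test;

public class DeployerOverrideTest {

    // Test-only subclass that records whether the deployment step was reached.
    static class RecordingDeployer extends Deployer {
        boolean deployStarted;

        @Override
        void startDeploy() {
            deployStarted = true; // skip the real deployment
        }
    }

    @Test
    public void deployAbortsOnWrongInput() {
        RecordingDeployer deployer = new RecordingDeployer();
        deployer.deploy(new UserInput(/* wrong values */));
        Assert.assertFalse(deployer.deployStarted);
    }
}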
Alternative - Integration tests
It sounds like the deploy method is large and complex, and deals with files, file systems, external services (ftp), etc.
It is sometimes easier in the long run to just accept that you're dealing with external systems and test against those external systems. For instance, if deploy() copies a file to directory x, test that the file exists in the target directory. I don't know how complex deploy is, but often mocking these methods can be as hard as just testing the actual behaviour. This may be cumbersome, but like most tests, it would allow you to refactor your code so it is simpler to understand. If your goal is refactoring, then in my experience it's easier to refactor if you're testing actual behaviour rather than mocking.
You could create a UserInput stub / mock with the correct expectations and verify that only the expected calls (and no more) were made.
However, from a design point of view, if you were able to split your validation and the deployment process into separate classes, then your code could be as simple as:
if (_validator.isValid(userInput)) {
    _deployer.deploy(userInput);
}
This way you can easily test that if the validator returns false the deployer is never called (using a mocking framework, such as jMock) and that it is called if the validator returns true.
It will also enable you to test your validation and deployment code separately, avoiding the issue you're currently having.
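A hedged sketch of that interaction test using jMock; Validator, Deployer and DeploymentService are assumed names (the first two assumed to be interfaces, wired in via the constructor), mirroring the _validator / _deployer fields above:

import org.jmock.Expectations;
import org.jmock.Mockery;
import org.junit.Test;

public class DeploymentServiceTest {

    private final Mockery context = new Mockery();
    private final Validator validator = context.mock(Validator.class);
    private final Deployer deployer = context.mock(Deployer.class);

    // Assumed class holding the snippet above, with its collaborators injected.
    private final DeploymentService service = new DeploymentService(validator, deployer);

    @Test
    public void doesNotDeployWhenInputIsInvalid() {
        final UserInput userInput = new UserInput(/* wrong values */);

        context.checking(new Expectations() {{
            allowing(validator).isValid(userInput);
            will(returnValue(false));
            never(deployer).deploy(userInput); // deployment must not start
        }});

        service.deploy(userInput);
        context.assertIsSatisfied();
    }
}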