When I run my xUnit unit tests I sometimes get an error message like "Transaction (Process ID 58) was deadlocked on lock resources with another process and has been chosen as the deadlock victim" on one or more of the tests, seemingly randomly. If I re-run any failing test on its own it passes.
What should I do to prevent this? Is there an option to run the tests one-after-another instead of all at once?
(N.B. I'm running the tests over the API methods in my ASP.Net 5 MVC controllers under Visual Studio 2015)
Here's an example of one of my occasionally failing tests:
[Fact]
private void TestREAD()
{
    Linq2SQLTestHelpers.SQLCommands.AddCollections(TestCollections.Select(collection => Convert.Collection2DB(collection)).ToList(), TestSettings.LocalConnectionString);

    foreach (var testCollection in TestCollections)
    {
        var testCollectionFromDB = CollectionsController.Get(testCollection.Id);

        Assert.Equal(testCollection.Description, testCollectionFromDB.Description);
        Assert.Equal(testCollection.Id, testCollectionFromDB.Id);
        Assert.Equal(testCollection.IsPublic, testCollectionFromDB.IsPublic);
        Assert.Equal(testCollection.LayoutSettings, testCollectionFromDB.LayoutSettings);
        Assert.Equal(testCollection.Name, testCollectionFromDB.Name);
        Assert.Equal(testCollection.UserId, testCollectionFromDB.UserId);
    }
}
There are two methods the test calls; here's the controller method:
[HttpGet("{id}")]
public Collection Get(Guid id)
{
var sql = #"SELECT * FROM Collections WHERE id = #id";
using (var connection = new SqlConnection(ConnectionString))
{
var collection = connection.Query<Collection>(sql, new { id = id }).First();
return collection;
}
}
and here's the helper method:
public static void AddCollections(List<Collection> collections, string connectionString)
{
    using (var db = new DataClassesDataContext(connectionString))
    {
        db.Collections.InsertAllOnSubmit(collections);
        db.SubmitChanges();
    }
}
(Note that I'm using Dapper as the micro-ORM in the controller method, so, to avoid potentially duplicating errors in the tests, I'm using LINQ to SQL in the tests to set up and clean up test data.)
There are also database calls in the test class's constructor and Dispose method. I can add them to the post if needed.
OK, so this looks like a plain vanilla case of deadlocks in your app, and the need to handle that - what is your plan on the app side?
The tests and their data rigging can potentially fall prey to the same thing. xUnit doesn't have anything to address this, and I'd strongly argue it shouldn't.
So in both the test and the app, you need failure/retry management.
For a web app, there's the "show them a picture of a whale and let them try again" pattern, but ultimately you want a real solution.
For a test, you don't want whales and definitely want to handle it, i.e. not be brittle.
I'd be using Polly to wrap retry decoration around anything in either the app or the tests that's prone to significant failures - your exercise is to figure out what the significant failures are in your context.
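For illustration, here's a minimal sketch of such a retry decoration with Polly - the deadlock error number (1205) is standard SQL Server, but the retry count, the delays, and the wrapped call are placeholder choices:

using System;
using System.Data.SqlClient;
using Polly;

public static class RetryPolicies
{
    // Retry up to 3 times when SQL Server reports a deadlock (error 1205),
    // waiting a little longer before each attempt.
    public static readonly Policy RetryOnDeadlock = Policy
        .Handle<SqlException>(ex => ex.Number == 1205)
        .WaitAndRetry(3, attempt => TimeSpan.FromMilliseconds(200 * attempt));
}

// Usage, e.g. around the test's data rigging:
// RetryPolicies.RetryOnDeadlock.Execute(() =>
//     Linq2SQLTestHelpers.SQLCommands.AddCollections(collections, connectionString));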
Under normal circumstances a database with a single reader/writer operating synchronously shouldn't deadlock. Analysing why it happens is a matter of doing the analysis on the DB side. The tools on that side would also likely quickly reveal whether, e.g., some aspect of your overall System Under Test is resulting in competing work.
(Obviously your snippets are incomplete, as there's a disconnect between CollectionsController.Get(testCollection.Id) and the fact that the controller method is not static - but IMO this discussion shouldn't be down at that level.)
So what I'm trying to do is an Orchard feature that, if enabled, runs a separate thread (a service) that queries the IRepository<> of some PartRecord.
About Starting the service:
I tried starting the service in IFeatureEventHandler.Enabled(), but this gets executed only when the feature is enabled, not when Orchard starts.
So I looked in the Orchard framework for anything I could use, and I found IOrchardShellEvents.Activated().
So I basically did this:
public class MyService : IOrchardShellEvents {

    // ... more stuff ...

    public void Activated() {
        running = true;
        // Run DoWork() in a separate thread
    }

    public void Terminating() {
        running = false;
    }

    private void DoWork() {
        // do service work while running = true
    }
}
This happened to work, but I'm not sure this is the common practice for starting a custom thread when Orchard starts, so please correct me if it's not done like this.
About Repository querying problem:
The repository gets injected, and at first it queries the table just fine. After a while, though, it throws an exception saying: "Multiple simultaneous connections or connections with different connection strings inside the same transaction are not currently supported."
It seems extremely bizarre that a query that executes fine a couple of times crashes after a while. Here's the code that shows how I use the repository:
public MyService(ServiceManager manager, IRepository<SomePartRecord> repo) {
    this.manager = manager;

    // The manager of the service uses the repository to get a single
    // column (ExpectaId, not a PK) out of each row
    manager.LoadIds = () =>
        repo.Table.ToList().Select(record => record.ExpectaId);
}
Note: the Func<> manager.LoadIds is called once every 10 seconds.
Note: I'm using MySQL Server 5.5.
OK, so the answer to any question beginning with "how do I spin a separate thread in order to..." is "don't". Seriously. See for example http://ayende.com/blog/158945/thou-shall-not-do-threading-unless-you-know-what-you-are-doing
Fortunately, Orchard provides a way to run tasks in the background without having to spin your own threads: How to run scheduled tasks in Orchard?
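As a rough illustration, a background task can look like the sketch below - IBackgroundTask and its Sweep() method (invoked by Orchard roughly once a minute) are real Orchard types, while the class name and the hand-off to the manager are hypothetical:

using System.Linq;
using Orchard.Data;
using Orchard.Tasks;

public class ExpectaIdPollingTask : IBackgroundTask {

    private readonly IRepository<SomePartRecord> repository;

    public ExpectaIdPollingTask(IRepository<SomePartRecord> repository) {
        this.repository = repository;
    }

    public void Sweep() {
        // Each sweep runs in its own work context and transaction, so no
        // long-lived custom thread holds on to a connection.
        var ids = repository.Table.ToList().Select(record => record.ExpectaId);
        // ... hand the ids to the service manager here (hypothetical step) ...
    }
}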
I have a web application that uses LINQ to SQL queries (soon to be upgraded to LINQ to EF compiled queries) and for which there's a data context and a database already in place. I want to create a demo version of the application, and for the demo I want to use an entirely different database file that has the same tables. So in essence, I'll have the same data structure for two different databases: one database for logged-in users and one database for demo users. I want to reuse many of the queries I've already written; they look like this:
public class FruitQueries
{
    public List<SomeObjectModel> MyQuery(list of parameters)
    {
        using (MyDataContext TheDC = new MyDataContext())
        {
            var TheQueryResult = (from f in TheDC.Fruits
                                  ......).ToList();
            return TheQueryResult;
        }
    }

    public List<SomeObject> AnotherQuery(some other parameters) {...}
}
Now I think I know that this calls for dependency injection, where the data context is passed in as a parameter, but I'm not sure of the syntax. How do you reuse queries using dependency injection so that they work on two different databases? Right now I'm using a using statement, and I want to keep this pattern; is that possible if I inject the DC as a parameter?
Thanks.
Since you already have a lot of code in place, probably the simplest thing to do is to inject a factory:
public interface IMyDataContextFactory
{
    MyDataContext CreateNewContext();
}
All the code will stay roughly the same:
public List<SomeObjectModel> MyQuery(params)
{
    using (var TheDC = this.factory.CreateNewContext())
    {
        var TheQueryResult = (from f in TheDC.Fruits
                              ......).ToList();
        return TheQueryResult;
    }
}
You can let the injected IMyDataContextFactory decide how to construct a MyDataContext instance (based on the user). This would be trivial.
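For example, here's a minimal sketch of such a factory - the IUserContext abstraction, its IsDemoUser flag, and the connection-string names are hypothetical placeholders for however you track the current user:

using System.Configuration;

public class MyDataContextFactory : IMyDataContextFactory
{
    private readonly IUserContext userContext; // hypothetical user abstraction

    public MyDataContextFactory(IUserContext userContext)
    {
        this.userContext = userContext;
    }

    public MyDataContext CreateNewContext()
    {
        // Pick the demo or the real database depending on who is logged in.
        var name = this.userContext.IsDemoUser ? "DemoDb" : "MainDb";
        var connectionString = ConfigurationManager.ConnectionStrings[name].ConnectionString;
        return new MyDataContext(connectionString);
    }
}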
In the end it will probably be better to inject a MyDataContext (or an abstraction such as IUnitOfWork) into consumers, but that changes everything completely. Since the instance would be passed in from the outside, the consumer is no longer responsible for disposing it; someone else is. Disposing such an instance isn't that hard with most DI containers, but it gets harder when you want to share the same MyDataContext instance across multiple consumers (within the same web request, for instance) - and where do you call SubmitChanges?
Elaborating on the previous answer:
What you can do is provide the connection string to the DC (would this qualify as constructor injection?):
using (MyDataContext TheDC = new MyDataContext(this.factory.CreateConString()))
This way, disposal is still handled by the consumer and you can keep your using() approach. Your factory can read the two different connection strings from your web.config and determine the right one to use based on the user (not as trivial as it may seem).
PS: I think the quickest way is to deploy the demo application to a different URL, so it can have a separate web.config and you don't need to code anything - but that does not answer your question.
I have a method under test. Within its call stack, it calls a DAO which in turn uses JDBC to talk to the DB. I'm not really interested in what happens at the JDBC layer; I already have tests for that, and they work wonderfully.
I am trying to mock, using JMock, the DAO layer, so I can focus on the details of this method under test. Here is a basic representation of what I have:
@Test
public void myTest()
{
    context.checking(new Expectations() {
        {
            allowing(myDAO).getSet(with(any(Integer.class)));
            will(returnValue(new HashSet<String>()));
        }
    });

    // Used only to show the mock is working, but not really part of this test.
    // These asserts pass.
    Set<String> temp = myDAO.getSet(Integer.valueOf(12));
    Assert.assertNotNull(temp);
    Assert.assertTrue(temp.isEmpty());

    MyTestObject underTest = new MyTestObject();

    // Deep in this call MyDAO is initialized and getSet() is called.
    // The mock is failing to return the Set as desired. getSet() is run as
    // normal and throws an NPE since JDBC is (intentionally) not set up. I want
    // getSet() to just return an empty set at this layer.
    underTest.thisTestMethod();
    ...

    // Other assertions that would be helpful for this test if mocking
    // was working.
}
It seems, from what I have learned creating this test, that I cannot mock indirect objects using JMock - or I am not seeing a key point. I'm hoping the latter is true.
Thoughts? And thank you.
From the snippet, I'm guessing that MyTestObject uses reflection, or a static method or field to get hold of the DAO, since it has no constructor parameters. JMock does not do replacement of objects by type (and any moment now, there'll be a bunch of people recommending other frameworks that do).
This is on purpose. A goal of JMock is to highlight object design weaknesses, by requiring clean dependencies and focussed behaviour. I find that burying DAO/JDBC access in the domain objects eventually gets me into trouble. It means that the domain objects have secret dependencies that make them harder to understand and change. I prefer to make those relationships explicit in the code.
So you have to get the mocked object somehow into the target code. If you can't or don't want to do that, then you'll have to use another framework.
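For instance, a minimal sketch of making the dependency explicit - the constructor parameter is the only change; the class and method names come from the snippet above:

import java.util.Set;

public class MyTestObject {

    private final MyDAO dao;

    // The DAO is now passed in, so the test can hand over the JMock mock.
    public MyTestObject(MyDAO dao) {
        this.dao = dao;
    }

    public void thisTestMethod() {
        Set<String> values = dao.getSet(12);
        // ... the rest of the logic under test ...
    }
}

// In the test: MyTestObject underTest = new MyTestObject(myDAO);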
P.S. One point of style: you can simplify this test a little:
context.checking(new Expectations() {{
    allowing(myDAO).getSet(12); will(returnValue(new HashSet<String>()));
}});
Within a test, you should really know what values to expect and feed them into the expectation. That makes it easier to see the flow of values between the objects.
I have a method which works like this:
public void deploy(UserInput userInput) {
    if (userInput is wrong)
        return;

    // start deployment process
}
The userInput validation consists of individual checks inside the deploy method. Now I'd like to JUnit-test whether the input-checking algorithms behave correctly (i.e. whether the deployment process starts or not, depending on right or wrong user input). So I need to test this with both right and wrong user inputs. I could check whether anything was deployed at all, but in this case that is very cumbersome.
So I wonder: is it somehow possible to tell from the corresponding JUnit test whether the deploy method was aborted or not (due to wrong user input)? (By the way, changing the deploy method is not an option.)
As you describe your problem, you can only check your method for side effects, or whether it throws an exception. The easiest way to do this is with a mocking framework like JMockit or Mockito. You have to mock the first method called after the checking of the user input has finished:
public void deploy(UserInput userInput) {
    if (userInput is wrong)
        return;

    // start deployment process
    startDeploy(); // mock this method
}
You can also extend the class under test and override startDeploy(), if that's possible. This would avoid having to use a mocking framework.
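Here's a minimal sketch of that subclass-and-override approach - the Deployer class name and the wrongUserInput() helper are hypothetical, and it assumes startDeploy() is neither private nor final:

import static org.junit.Assert.assertFalse;

import org.junit.Test;

public class DeployerTest {

    private boolean deployStarted;

    @Test
    public void deployAbortsOnWrongInput() {
        // Override startDeploy() to record the call instead of deploying.
        Deployer deployer = new Deployer() {
            @Override
            protected void startDeploy() {
                deployStarted = true;
            }
        };

        deployer.deploy(wrongUserInput()); // hypothetical helper
        assertFalse(deployStarted);
    }
}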
Alternative - Integration tests
It sounds like the deploy method is large and complex, and deals with files, file systems, external services (FTP), etc.
It is sometimes easier in the long run to just accept that you're dealing with external systems, and to test those external systems. For instance, if deploy() copies a file to directory x, test that the file exists in the target directory. I don't know how complex deploy is, but often mocking these methods can be as hard as just testing the actual behaviour. This may be cumbersome, but like most tests, it would allow you to refactor your code so it is simpler to understand. If your goal is refactoring, then in my experience it's easier to refactor when you're testing actual behaviour rather than mocking.
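As a sketch of such a behavioural check (the Deployer name, target path, and file name are hypothetical):

import static org.junit.Assert.assertTrue;

import java.io.File;
import org.junit.Test;

public class DeployIntegrationTest {

    private final Deployer deployer = new Deployer(); // hypothetical class under test

    @Test
    public void deployCopiesArtifactToTargetDirectory() {
        deployer.deploy(validUserInput()); // hypothetical helper

        // Assert on the observable effect instead of mocking internals.
        File deployed = new File("/deploy/target", "artifact.war");
        assertTrue(deployed.exists());
    }
}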
You could create a UserInput stub / mock with the correct expectations and verify that only the expected calls (and no more) were made.
However, from a design point of view, if you were able to split your validation and deployment process into separate classes, then your code could be as simple as:
if (_validator.isValid(userInput)) {
    _deployer.deploy(userInput);
}
This way you can easily test that if the validator returns false the deployer is never called (using a mocking framework, such as jMock) and that it is called if the validator returns true.
It will also enable you to test your validation and deployment code separately, avoiding the issue you're currently having.
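A minimal sketch of that first check with jMock - the _validator and _deployer mocks mirror the snippet above, and context/underTest stand for the Mockery and the class under test:

@Test
public void doesNotDeployWhenInputIsInvalid() {
    context.checking(new Expectations() {{
        allowing(_validator).isValid(userInput); will(returnValue(false));
        never(_deployer).deploy(userInput);
    }});

    underTest.deploy(userInput);
    context.assertIsSatisfied();
}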
This is what I found in my initial attempts to use JMockit. I must admit that I found the JMockit documentation very terse for what it provides, and hence I might have missed something. Nonetheless, this is what I understood:
Mockito: List a = mock(ArrayList.class) does not stub out all the methods of List.class by default; a.add("foo") is going to do the usual thing of adding the element to the list.

JMockit: @Mocked ArrayList<String> a; stubs out all the methods of a by default, so now a.add("foo") is not going to work.
This seems like a very big limitation of JMockit to me.
How do I express that I only want statistics for the add() method, rather than replacing the method's implementation itself? In other words, what if I just want JMockit to count the number of times add() was called, but leave the implementation of add() as is? I am unable to express this in JMockit. However, it seems I can do this in Mockito using spy().
I really want to be proven wrong here. JMockit claims that it can do everything other mocking frameworks do, plus a lot more. That does not seem to be the case here.
@Test
public void shouldPersistRecalculatedArticle()
{
    Article articleOne = new Article();
    Article articleTwo = new Article();

    when(mockCalculator.countNumberOfRelatedArticles(articleOne)).thenReturn(1);
    when(mockCalculator.countNumberOfRelatedArticles(articleTwo)).thenReturn(12);
    when(mockDatabase.getArticlesFor("Guardian")).thenReturn(asList(articleOne, articleTwo));

    articleManager.updateRelatedArticlesCounters("Guardian");

    InOrder inOrder = inOrder(mockDatabase, mockCalculator);
    inOrder.verify(mockCalculator).countNumberOfRelatedArticles(isA(Article.class));
    inOrder.verify(mockDatabase, times(2)).save((Article) notNull());
}
@Test
public void shouldPersistRecalculatedArticle()
{
    final Article articleOne = new Article();
    final Article articleTwo = new Article();

    new Expectations() {{
        mockCalculator.countNumberOfRelatedArticles(articleOne); result = 1;
        mockCalculator.countNumberOfRelatedArticles(articleTwo); result = 12;
        mockDatabase.getArticlesFor("Guardian"); result = asList(articleOne, articleTwo);
    }};

    articleManager.updateRelatedArticlesCounters("Guardian");

    new VerificationsInOrder(2) {{
        mockCalculator.countNumberOfRelatedArticles(withInstanceOf(Article.class));
        mockDatabase.save((Article) withNotNull());
    }};
}
A statement like this in Mockito:
inOrder.verify(mockDatabase, times(2)).save((Article) notNull());
does not have an equivalent in JMockit, as you can see from the example above.
new NonStrictExpectations(Foo.class, Bar.class, zooObj)
{
{
// don't call zooObj.method1() here
// Otherwise it will get stubbed out
}
};
new Verifications()
{
{
zooObj.method1(); times = N;
}
};
In fact, all mocking APIs mock or stub out every method in the mocked type, by default. I think you confused mock(type) ("full" mocking) with spy(obj) (partial mocking).
JMockit does all that, with a simple API in every case. It's all described, with examples, in the JMockit Tutorial.
For proof, you can see the sample test suites (there are many more that have been removed from newer releases of the toolkit, but can still be found in the old zip files), or the many JMockit integration tests (over one thousand currently).
The equivalent to Mockito's spy is "dynamic partial mocking" in JMockit. Simply pass the instances you want to partially mock as arguments to the Expectations constructor. If no expectations are recorded, the real code will be executed when the code under test is exercised. BTW, Mockito has a serious problem here (which JMockit doesn't), because it always executes the real code, even when it's called inside when(...) or verify(...); because of this, people have to use doReturn(...).when(...) to avoid surprises on spied objects.
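A minimal sketch in the spirit of the question's ArrayList example - dynamic partial mocking plus verification, so add() keeps its real behaviour while its invocations are still counted (assuming the JMockit version of that era allows partial mocking of JDK collection instances):

final List<String> list = new ArrayList<String>();

// Partially mock just this instance; with no expectations recorded,
// all of its methods keep their real implementations.
new Expectations(list) {{ }};

list.add("foo"); // real code runs: the element really is added

new Verifications() {{
    list.add(anyString); times = 1; // ...but the call was still counted
}};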
Regarding verification of invocations, the JMockit Verifications API is considerably more capable than any other. For example:
new VerificationsInOrder() {{
// preceding invocations, if any
mockDatabase.save((Article) withNotNull()); times = 2;
// later invocations, if any
}};
Mockito's a much older library than JMockit, so you could expect it to have many more features. Have a read through the release list if you want to see some of the less well documented functionality. The JMockit authors have produced a matrix of features in which they left out every single thing that other frameworks do and JMockit doesn't, and got several entries wrong (for instance, Mockito can do strict mocks and ordering).
Mockito was also written to enable unit-level BDD. That generally means that if your tests provide a good example of how to use the code, and if your code is lovely and decoupled and well-designed, then you won't need all the shenanigans that JMockIT provides. One of the hardest things to do in Open Source is say "no" to the many requests that don't help in the long run.
Compare the examples on the front pages of Mockito and JMockIT to see the real difference. It's not about what you test, it's about how well your tests document and describe the behavior of the class.
Declaration of Interest: Szczepan and I were on the same project when he wrote the first draft of Mockito, after seeing some of us roll out our own stub classes rather than use the existing mocking frameworks of the time. So I feel like he wrote it all for me, and am thoroughly biased. Thank you Szczepan.