I want to test a service which in turn creates a connection to Redis.
I want to skip this part in my JUnit test. Is there a way to skip this method call or mock it?
I think the question was more about how the Redis part can be mocked so that the tests run when Redis isn't available. That's hard, because your service is probably using the connection, so you'd have to do a lot of mocking. What we do in Spring Boot is check whether a Redis server is available on localhost: if it is, we run the tests, otherwise we skip them.
See RedisTestServer and a sample usage. Note that the rule applies to all the tests, so you may want to move the tests that use Redis into an isolated test class.
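If you want to roll a similar check yourself, JUnit's Assume can skip tests at runtime. A minimal sketch that probes the default Redis port 6379 (class and method names here are illustrative):

import static org.junit.Assume.assumeTrue;

import java.io.IOException;
import java.net.Socket;

import org.junit.Before;
import org.junit.Test;

public class RedisBackedServiceTest {

    @Before
    public void requireLocalRedis() {
        // Marks every test in this class as skipped (not failed) when nothing listens on the port.
        assumeTrue(isRedisAvailable());
    }

    private static boolean isRedisAvailable() {
        try (Socket ignored = new Socket("localhost", 6379)) {
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    @Test
    public void serviceTalksToRedis() {
        // ... exercise the service against the local Redis instance ...
    }
}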
Annotate the method with @Ignore to skip it, like this:
@Ignore("reason for skipping")
@Test
public void testConnectionCreation() {
    // do some stuff...
}
Optionally, you can provide a note explaining why the test is ignored, as shown above.
See http://junit.sourceforge.net/javadoc/org/junit/Ignore.html for more info.
I'm looking for the best practice for the following (simplified) scenario:
@Test
public void someTest() {
    for (String someText : someTexts) {
        Assert.assertTrue(checkForValidity(someText));
    }
}
This test iterates through x-thousand texts, and I don't want it to stop at each failure. I want the errors to be buffered and, if there were any, for the test to fail at the end. Does JUnit have anything built in for this?
First of all, that's not really the right way to implement this. JUnit supports parameterizing tests by defining a collection of inputs/outputs with the Parameterized test runner. Doing it this way makes each sample a distinct test case, so the test report clearly states which samples passed and which failed.
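For illustration, a minimal sketch with the Parameterized runner; the sample inputs are placeholders, and checkForValidity stands in for your own validation logic:

import static org.junit.Assert.assertTrue;

import java.util.Arrays;
import java.util.Collection;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
public class ValidityTest {

    @Parameters
    public static Collection<Object[]> data() {
        // In practice, load your x-thousand texts here.
        return Arrays.asList(new Object[][] {
            { "first sample" }, { "second sample" }
        });
    }

    private final String someText;

    public ValidityTest(String someText) {
        this.someText = someText;
    }

    @Test
    public void textIsValid() {
        // Each sample is reported as its own test case.
        assertTrue(checkForValidity(someText));
    }

    // Stand-in for the real validation logic from the question.
    private static boolean checkForValidity(String text) {
        return text != null && !text.isEmpty();
    }
}

Each failing sample then shows up individually in the runner instead of aborting the whole loop.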
If you still insist on doing it your way, have a look at AssertJ's Soft Assertions, which "swallow" individual assertion failures, accumulate them, and report them only after the test has finished. The linked documentation section uses a nice example and is definitely worth reading.
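A sketch of the soft-assertion variant (someTexts and checkForValidity are stand-ins for the question's own data and logic):

import java.util.Arrays;
import java.util.List;

import org.assertj.core.api.SoftAssertions;
import org.junit.Test;

public class ValiditySoftTest {

    private final List<String> someTexts = Arrays.asList("first sample", "second sample");

    @Test
    public void allTextsAreValid() {
        SoftAssertions softly = new SoftAssertions();
        for (String someText : someTexts) {
            // A failure is recorded here instead of aborting the loop.
            softly.assertThat(checkForValidity(someText)).as("text: %s", someText).isTrue();
        }
        // Fails now if anything was recorded, reporting all failures at once.
        softly.assertAll();
    }

    // Stand-in for the real validation logic.
    private static boolean checkForValidity(String text) {
        return text != null && !text.isEmpty();
    }
}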
So I ran into this issue with the Windsor bootstrapper for Nancy. I managed to whip together a small test project where I can reproduce what is going wrong. You can find the project here.
What seems to go wrong is this: DynamicProxy only seems to intercept the invocation of the void Handle(Action<string> oncomplete) method and not the string Handle(string input) method that is called on another thread. As if the Engine were no longer proxied after it had been sent to another thread. Scratch that: it's just the call to another method on the same class that is not proxied.
This means the output of the program is only
Handled Handle with return type System.Void
test
and not
Handled Handle with return type System.Void
Handled Handle with return type System.String
test
Is this the expected behaviour of DynamicProxy? That proxies on another thread are no longer, well, proxied? Or is there something wrong with the code?
EDIT: Just RTFM'd DynamicProxy, and it seems like it Works As Intended. Now how do I configure my IEngine instance to use the correct kind of proxy?
Try changing:
Component.For<MyEngine>().Forward<IEngine>().Interceptors<ScopeInterceptor>()
into:
Component.For<MyEngine>().Forward<IEngine>().Forward<MyEngine>().Interceptors<ScopeInterceptor>()
I don't have the time to actually try it, but this should force Windsor into creating a class proxy, which should solve your issue.
Kind regards,
Marwijn.
-- edit --
For the current link, try replacing:
Component.For<IEngine>().ImplementedBy<Engine>()
with:
Component.For<IEngine, Engine>().ImplementedBy<Engine>()
I have several Spock test classes grouped together in a package. I am using JUnit 4.10. Each test class contains several feature methods.
I want to perform some setup steps (such as loading data into a DB, starting up a web server) before I run any test case, but only once when the testing starts.
I want this "OneTimeSetup" method to be called only once whether:
I run all the test classes in the package (for example if they are grouped in a Test Suite)
I run a few test classes
I run only one test class
I run only a certain feature method within a test class
From reading other posts on SO, it seems that this is what TestNG's @BeforeSuite does.
I am aware of Spock's setupSpec() and cleanupSpec() methods, but they only work within a given test class. I am looking to do something like "setupTestSuite()." How can this be achieved in Spock?
You can write a global extension, use a JUnit test suite, call a static method in a helper class (say from setupSpec) that does its work just once, or let the build tool do the job.
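The static-helper variant is the least magical. A sketch, assuming each spec calls the helper from its setupSpec() method (names are illustrative):

// Java helper on the test classpath; call OneTimeSetup.ensureInitialized() from setupSpec().
public final class OneTimeSetup {

    private static boolean initialized = false;

    private OneTimeSetup() {
    }

    public static synchronized void ensureInitialized() {
        if (initialized) {
            return;
        }
        // Load data into the DB, start the web server, etc.
        initialized = true;
    }
}

Because the flag is static, the setup body runs once per JVM no matter how many specs or feature methods are selected.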
I have ported my code to the RTM versions of both WinRT and Rx. I use ReactiveUI in my ViewModels. Before porting, my unit tests ran without problems, but now I see strange behavior.
Here is the test:
var sut = new MyViewModel();
sut.MyCommand.Execute(null); // ReactiveAsyncCommand
Assert.AreEqual(0, sut.Collection.Count);
If I debug the test step by step, the assertion does not fail, but when run under the test runner it fails...
The Collection asserted is modified by a method subscribing to the command:
MyCommand.RegisterAsyncTask(_ => DoWork())
.ObserveOn(SynchronizationContext.Current)
.Subscribe(MethodModifyingCollection);
The code worked before moving it to the RTM. I also tried removing the ObserveOn and adding an await Task.Delay() before the Assert, without success.
Steven's got the rightish answer, but there are a few RxUI specific things missing. This is definitely related to scheduling in a test runner, but the reason is that the WinRT version of ReactiveUI can't detect properly whether it's in a test runner at the moment.
The dumb workaround for now is to set this at the top of all your tests:
RxApp.DeferredScheduler = Scheduler.CurrentThread;
Do not use the TestScheduler for every test; it's overkill, and it actually isn't compatible with certain kinds of testing. TestScheduler is good for tests where you're simulating the passage of time.
Your problem is that MSTest unit tests have a default SynchronizationContext. So ObserveOn and ReactiveAsyncCommand will marshal to the thread pool instead of to the WPF context. This causes a race condition.
Your first and best option is the Rx TestScheduler.
Another option is to await some completion signal (and ensure your test method is async Task, not async void).
Otherwise, if you just need a SynchronizationContext, you can use AsyncContext from my AsyncEx library to execute the tests within your own SynchronizationContext.
Finally, if you have any code that directly uses Dispatcher instead of SynchronizationContext, you can use WpfContext from the Async CTP download.
I have a method which works like this:
public void deploy(UserInput userInput) {
if (userInput is wrong)
return;
//start deployment process
}
The userInput is validated by individual checks in the deploy method. Now I'd like to JUnit-test whether the user-input checks behave correctly (i.e., whether the deployment process starts or not depending on right or wrong user input). So I need to test this with both right and wrong user inputs. I could do this by checking whether anything was deployed at all, but in this case that is very cumbersome.
So I wonder: is it somehow possible to tell, in the corresponding JUnit test, whether the deploy method was aborted (due to wrong user input)? (By the way, changing the deploy method is not an option.)
As you describe your problem, you can only check your method for side effects or for whether it throws an exception. The easiest way to do this is with a mocking framework like JMockit or Mockito. You have to mock the first method called after the user-input check has finished:
public void deploy(UserInput userInput) {
if (userInput is wrong)
return;
//start deployment process
startDeploy(); // mock this method
}
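With Mockito, for instance, a spy (partial mock) can stub out startDeploy() and then verify whether it was reached. A sketch, assuming the class is called Deployer, startDeploy() is non-private and non-final, and validInput()/wrongInput() are your own test fixtures for building a right or wrong UserInput:

import static org.mockito.Mockito.doNothing;
import static org.mockito.Mockito.never;
import static org.mockito.Mockito.spy;
import static org.mockito.Mockito.verify;

import org.junit.Test;

public class DeployerTest {

    @Test
    public void validInputStartsDeployment() {
        Deployer deployer = spy(new Deployer());
        // Run the real deploy() logic but skip the actual deployment work.
        doNothing().when(deployer).startDeploy();

        deployer.deploy(validInput());

        verify(deployer).startDeploy();
    }

    @Test
    public void wrongInputAbortsDeployment() {
        Deployer deployer = spy(new Deployer());
        doNothing().when(deployer).startDeploy();

        deployer.deploy(wrongInput());

        // deploy() returned early, so the deployment must never have started.
        verify(deployer, never()).startDeploy();
    }
}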
You can also extend the class under test, and override startDeploy() if it's possible. This would avoid having to use a mocking framework.
Alternative - Integration tests
It sounds like the deploy method is large and complex, and deals with files, file systems, external services (ftp), etc.
It is sometimes easier in the long run to just accept that you're dealing with external systems and to test against those external systems. For instance, if deploy() copies a file to directory x, test that the file exists in the target directory. I don't know how complex deploy is, but mocking these methods can often be as hard as testing the actual behaviour. This may be cumbersome but, like most tests, it allows you to refactor your code so it is simpler to understand. If your goal is refactoring, then in my experience it's easier to refactor when you're testing actual behaviour rather than mocking.
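As a sketch of that idea (the deployer, input helper, and target path are made up; substitute whatever deploy() actually produces):

import static org.junit.Assert.assertTrue;

import java.io.File;

import org.junit.Test;

public class DeployIntegrationTest {

    @Test
    public void deployCopiesArtifactToTargetDirectory() {
        // Placeholder setup: construct the real deployer and a known-good input.
        new Deployer().deploy(validInput());

        // Assert on the observable side effect rather than on internal calls.
        File deployed = new File("/deploy-target/artifact.war");
        assertTrue("artifact missing from target directory", deployed.exists());
    }
}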
You could create a UserInput stub / mock with the correct expectations and verify that only the expected calls (and no more) were made.
However, from a design point of view, if you were able to split your validation and the deployment process into separate classes, then your code could be as simple as:
if (_validator.isValid(userInput)) {
_deployer.deploy(userInput);
}
This way you can easily test that if the validator returns false the deployer is never called (using a mocking framework, such as jMock) and that it is called if the validator returns true.
It will also enable you to test your validation and deployment code separately, avoiding the issue you're currently having.
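A sketch of those two tests with Mockito (the same idea works with jMock); Validator, Deployer, DeploymentService, and UserInput are hypothetical names for the split suggested above:

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.never;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

import org.junit.Test;

public class DeploymentServiceTest {

    private final Validator validator = mock(Validator.class);
    private final Deployer deployer = mock(Deployer.class);
    private final DeploymentService service = new DeploymentService(validator, deployer);

    @Test
    public void deploysWhenInputIsValid() {
        UserInput input = mock(UserInput.class);
        when(validator.isValid(input)).thenReturn(true);

        service.deploy(input);

        verify(deployer).deploy(input);
    }

    @Test
    public void skipsDeploymentWhenInputIsInvalid() {
        UserInput input = mock(UserInput.class);
        when(validator.isValid(input)).thenReturn(false);

        service.deploy(input);

        // The deployer must never be touched for invalid input.
        verify(deployer, never()).deploy(input);
    }
}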