Observable Command and Unit tests with Rx RTM - windows-runtime

I have ported my code to the RTM versions of both WinRT and Rx. I use ReactiveUI in my ViewModels. Before porting the code my unit tests ran without problems, but now I get strange behavior.
Here is the test:
var sut = new MyViewModel();
sut.MyCommand.Execute(null); // ReactiveAsyncCommand
Assert.AreEqual(0, sut.Collection.Count);
If I debug the test step by step, the assertion does not fail, but when I run it through the test runner it fails...
The asserted Collection is modified by a method that subscribes to the command:
MyCommand.RegisterAsyncTask(_ => DoWork())
.ObserveOn(SynchronizationContext.Current)
.Subscribe(MethodModifyingCollection);
The code was working before moving it to the RTM. I also tried removing the ObserveOn and adding an await Task.Delay() before the Assert, without success.

Steven's got the rightish answer, but there are a few RxUI-specific things missing. This is definitely related to scheduling in a test runner, but the reason is that the WinRT version of ReactiveUI currently can't properly detect whether it's running under a test runner.
The dumb workaround for now is to set this at the top of all your tests:
RxApp.DeferredScheduler = Scheduler.CurrentThread;
Do not use the TestScheduler for every test; it's overkill, and it actually isn't compatible with certain kinds of testing. TestScheduler is good for tests where you're simulating the passage of time.
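With that workaround in place, an MSTest test might look roughly like this sketch (MyViewModel, MyCommand and Collection are the hypothetical types from the question; the RTM-era RxUI API with a settable RxApp.DeferredScheduler is assumed):
using System.Reactive.Concurrency;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using ReactiveUI;

[TestClass]
public class MyViewModelTests
{
    [TestMethod]
    public void ExecutingMyCommandLeavesCollectionEmpty()
    {
        // Force RxUI to schedule "dispatcher" work on the current thread,
        // since the WinRT build can't detect the test runner on its own.
        RxApp.DeferredScheduler = Scheduler.CurrentThread;

        var sut = new MyViewModel();
        sut.MyCommand.Execute(null);

        Assert.AreEqual(0, sut.Collection.Count);
    }
}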

Your problem is that MSTest unit tests run with the default SynchronizationContext, so ObserveOn and ReactiveAsyncCommand marshal to the thread pool instead of to the WPF context. This causes a race condition between the background work and your assertion.
Your first and best option is the Rx TestScheduler.
Another option is to await some completion signal (and ensure your test method is async Task, not async void).
Otherwise, if you just need a SynchronizationContext, you can use AsyncContext from my AsyncEx library to execute the tests within your own SynchronizationContext.
Finally, if you have any code that directly uses Dispatcher instead of SynchronizationContext, you can use WpfContext from the Async CTP download.
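If you just need a SynchronizationContext, a rough sketch with AsyncContext might look like this (assuming the Nito.AsyncEx package; WorkCompleted is a hypothetical Task the view model would have to expose so the test can await one round of background work instead of racing it):
using Microsoft.VisualStudio.TestTools.UnitTesting;
using Nito.AsyncEx;

[TestClass]
public class MyViewModelContextTests
{
    [TestMethod]
    public void ExecutingMyCommandLeavesCollectionEmpty()
    {
        // AsyncContext installs a single-threaded SynchronizationContext, so
        // ObserveOn(SynchronizationContext.Current) marshals back to this thread.
        AsyncContext.Run(async () =>
        {
            var sut = new MyViewModel();
            sut.MyCommand.Execute(null);

            await sut.WorkCompleted; // hypothetical signal that DoWork finished

            Assert.AreEqual(0, sut.Collection.Count);
        });
    }
}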

Related

stable-baseline3, gym, train while also step/predict

With stable-baselines3, given an agent, we can call "action = agent.predict(obs)". And then with Gym, this would be "new_obs, reward, done, info = env.step(action)" (more or less; I may have missed an input or an output).
We also have "agent.learn(10_000)" as an example, yet here we're less involved in the process and don't call the environment ourselves.
I'm looking for a way to train the agent while still calling "env.step". If you wonder why: I'm trying to implement self-play (the agent and a previous version of it) playing in one environment (for example, turn-based play such as chess).
WKR, Oren.
But why do you need it? If you take a look at the implementation of any learn method, you will see it is little more than an iteration over time steps that calls collect_rollouts and train, with some additional logging and setup at the beginning (e.g., for saving the agent later). Your env.step is called inside collect_rollouts.
Instead, I would suggest writing a callback based on CheckpointCallback, which saves your agent (model) every N training steps, and attaching this callback to your learn call. In your environment you could then instantiate, every N steps, a "new previous" version of your model by calling ModelClass.load(file) on the file saved by the callback, so that you can select the other player's actions with it and get self-play in your environment. A rough sketch follows.
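A rough sketch of that idea with stable-baselines3 (SelfPlayChessEnv, the paths, frequencies and board logic are illustrative placeholders, and the classic gym API with a 4-tuple step is assumed):
import gym
import numpy as np
from stable_baselines3 import PPO
from stable_baselines3.common.callbacks import CheckpointCallback


class SelfPlayChessEnv(gym.Env):
    """Turn-based env where the opponent is a previously saved model."""

    observation_space = gym.spaces.Box(low=0, high=1, shape=(64,), dtype=np.float32)
    action_space = gym.spaces.Discrete(64)

    def __init__(self):
        self.opponent = None  # play without an opponent model until a checkpoint exists

    def reload_opponent(self, path):
        # Call this e.g. every N episodes to pick up the latest checkpoint.
        self.opponent = PPO.load(path)

    def reset(self):
        return np.zeros(64, dtype=np.float32)

    def step(self, action):
        obs = np.zeros(64, dtype=np.float32)  # apply `action` to the board, then observe
        if self.opponent is not None:
            opp_action, _ = self.opponent.predict(obs)  # the "previous version" moves
            # apply opp_action to the board here
        return obs, 0.0, False, {}


env = SelfPlayChessEnv()
model = PPO("MlpPolicy", env, verbose=1)

# Save the current model every 10_000 steps; the env can reload the newest
# file from ./checkpoints as the opponent.
checkpoint_cb = CheckpointCallback(save_freq=10_000, save_path="./checkpoints",
                                   name_prefix="selfplay")
model.learn(total_timesteps=100_000, callback=checkpoint_cb)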

Does SpecFlow testing platform support async Tasks?

Async/await support for SpecFlow steps:
I would like to use SpecFlow with the async/await features of C# on Windows Phone 8.
SpecFlow with MSTest can execute code using async/await, but it doesn't wait for the results.
I've changed BindingInvoker.cs and upgraded to .NET 4 in order to support async tasks, and I'm now receiving "IOC is not initialized" errors.
https://github.com/robfe/SpecFlow/commit/507368327341e71b2f5e2a4a1b7757e0f4fb809d
Yes, SpecFlow does support async steps.
See https://docs.specflow.org/projects/specflow/en/latest/Bindings/Asynchronous-Bindings.html
For example:
[When(@"I want to get the web page '(.*)'")]
public async Task WhenIWantToGetTheWebPage(string url)
{
await _webDriver.HttpClientGet(url);
}
It will not continue to the next step until this step has finished, but it will release the thread to run other tests.
The problem here is that if I put something on a background thread, then in test execution mode the main thread does not know about it and just jumps to the next piece of code to execute and verify the result; but by that point the values have not yet been updated by the background thread, so the assert is wrong. The way to handle this is to make the main thread wait/sleep until the background work is over.
Example:
Dim caller As AsyncMethodHandler
Dim result As IAsyncResult
caller = New AsyncMethodHandler(AddressOf lcl_service.CreateSession)
result = caller.BeginInvoke(parameter, Nothing, AddressOf AsyncCallback, Nothing)
While Not result.IsCompleted
Thread.Sleep(1)
End While
Async support has already been added to SpecFlow and will be included in the next release. You can use the CI build to check it out.
See https://github.com/techtalk/SpecFlow/issues/542

Is it possible to determine a method's callback if its type is void and its "return;" had skipped its execution?

I have a method which works like this:
public void deploy(UserInput userInput) {
    if (userInput is wrong)
        return;
    // start deployment process
}
The userInput goes through individual checks in the deploy method. Now, I'd like to JUnit-test whether the user input checks behave correctly (i.e., whether the deployment process would start or not depending on right or wrong user input). So I need to test this with both right and wrong user inputs. I could do this by checking whether anything has been deployed at all, but in this case that is very cumbersome.
So I wonder if it's somehow possible to know in the corresponding JUnit test whether the deploy method has been aborted or not (due to wrong user input)? (By the way, changing the deploy method is not an option.)
As you describe your problem, you can only check your method for side effects, or for whether it throws an exception. The easiest way to do this is with a mocking framework like JMockit or Mockito. You mock the first method called after the user input checks have finished:
public void deploy(UserInput userInput) {
    if (userInput is wrong)
        return;
    // start deployment process
    startDeploy(); // mock this method
}
You can also extend the class under test and override startDeploy(), if that's possible. This avoids having to use a mocking framework. A sketch of the mocking approach:
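A rough sketch of the Mockito route, assuming startDeploy() is visible and overridable (the Deployer class name and the UserInput factory methods are hypothetical):
import static org.mockito.Mockito.*;

import org.junit.Test;

public class DeployerTest {

    @Test
    public void deployAbortsOnWrongInput() {
        Deployer deployer = spy(new Deployer());
        doNothing().when(deployer).startDeploy();   // don't actually deploy

        deployer.deploy(UserInput.wrong());          // hypothetical invalid input

        verify(deployer, never()).startDeploy();     // early return, so no deployment
    }

    @Test
    public void deployStartsOnValidInput() {
        Deployer deployer = spy(new Deployer());
        doNothing().when(deployer).startDeploy();

        deployer.deploy(UserInput.valid());          // hypothetical valid input

        verify(deployer).startDeploy();
    }
}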
Alternative - Integration tests
It sounds like the deploy method is large and complex, and deals with files, file systems, external services (ftp), etc.
It is sometimes easier in the long run to just accept that you're dealing with external systems, and test these external systems. For instance, if deploy() copies a file to directory x, test that the file exists in the target directory. I don't know how complex deploy is, but often mocking these methods can be as hard as just testing the actual behaviour. This may be cumbersome, but like most tests, it would allow you refactor your code so it is simpler to understand. If your goal is refactoring, then in my experience, it's easier to refactor if you're testing actual behaviour rather than mocking.
You could create a UserInput stub / mock with the correct expectations and verify that only the expected calls (and no more) were made.
However, from a design point of view, if you were able to split your validation and the deployment process into separate classes, then your code could be as simple as:
if (_validator.isValid(userInput)) {
    _deployer.deploy(userInput);
}
This way you can easily test that if the validator returns false the deployer is never called (using a mocking framework, such as jMock) and that it is called if the validator returns true.
It will also enable you to test your validation and deployment code separately, avoiding the issue you're currently having.
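For example, with Mockito (jMock would look similar; DeploymentService, Validator and Deployer are the hypothetical collaborators described above):
import static org.mockito.Mockito.*;

import org.junit.Test;

public class DeploymentServiceTest {

    @Test
    public void deployerIsNotCalledWhenValidationFails() {
        Validator validator = mock(Validator.class);
        Deployer deployer = mock(Deployer.class);
        when(validator.isValid(any(UserInput.class))).thenReturn(false);

        new DeploymentService(validator, deployer).deploy(new UserInput());

        verify(deployer, never()).deploy(any(UserInput.class));
    }
}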

Debugging surefire/junit's taste in test cases

Puzzled: I added a new test case function to a JUnit test. I run the entire class from either Eclipse or from Maven, and the old case (there was only one before) runs while the new one does not. It doesn't fail. A breakpoint in it is not hit. The new function has an @Test annotation, just like the old one.
JUnit version is 4.5.
Is there a way to get JUnit to log or trace its thought process in selecting functions to run?
I guess you are still running the old class file, because the new Java file was not compiled successfully.
You could modify an old test method to check whether the class is really being recompiled: for example, make a passing method fail.

Is there a way to LOG RC Selenium test errors/failures into a database?

I'm using PHPUnit & phpUnderControl to run Selenium RC on every build.
PHPUnit allows you to implement your own TestListener. Custom test listeners implement the abstract methods in the PHPUnit_Framework_TestListener interface. Specifically, your listener will implement:
startTestSuite()
endTestSuite()
startTest()
endTest()
addError()
addFailure()
addSkippedTest()
addIncompleteTest()
Once you've attached the TestListener, these methods will be called each time the corresponding events occur in your test suite. You write these methods to perform the INSERTs and UPDATEs on a test-results database that you create; a stripped-down example is sketched below.
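If you do write your own listener, it might look roughly like this sketch against the PHPUnit 3.x API shown above (the PDO connection string and the test_results table are hypothetical; PHPUnit's bundled Database listener mentioned below is the complete version):
<?php
class DatabaseResultListener implements PHPUnit_Framework_TestListener
{
    private $pdo;

    public function __construct()
    {
        $this->pdo = new PDO('mysql:host=localhost;dbname=test_results', 'ci', 'secret');
    }

    public function addError(PHPUnit_Framework_Test $test, Exception $e, $time)
    {
        $this->insertResult($test, 'error', $e->getMessage(), $time);
    }

    public function addFailure(PHPUnit_Framework_Test $test,
                               PHPUnit_Framework_AssertionFailedError $e, $time)
    {
        $this->insertResult($test, 'failure', $e->getMessage(), $time);
    }

    public function addIncompleteTest(PHPUnit_Framework_Test $test, Exception $e, $time)
    {
        $this->insertResult($test, 'incomplete', $e->getMessage(), $time);
    }

    public function addSkippedTest(PHPUnit_Framework_Test $test, Exception $e, $time)
    {
        $this->insertResult($test, 'skipped', $e->getMessage(), $time);
    }

    // No-ops in this sketch; only error/failure/incomplete/skipped events are stored.
    public function startTestSuite(PHPUnit_Framework_TestSuite $suite) {}
    public function endTestSuite(PHPUnit_Framework_TestSuite $suite) {}
    public function startTest(PHPUnit_Framework_Test $test) {}
    public function endTest(PHPUnit_Framework_Test $test, $time) {}

    private function insertResult(PHPUnit_Framework_Test $test, $status, $message, $time)
    {
        $name = method_exists($test, 'getName') ? $test->getName() : get_class($test);
        $stmt = $this->pdo->prepare(
            'INSERT INTO test_results (test_name, status, message, duration) VALUES (?, ?, ?, ?)');
        $stmt->execute(array($name, $status, $message, $time));
    }
}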
Attaching the listener class to your suite is as easy as adding a tag to the phpunit.xml configuration file. For example:
<phpunit>
  <testsuites>[...]</testsuites>
  <selenium>[...]</selenium>
  <listeners>
    <listener class="Database"
              file="/usr/local/share/pear/PHPUnit/Util/Log/Database.php" />
  </listeners>
</phpunit>
That's all you need!
In fact, PHPUnit already comes with a working version of the listener I just described (PHPUnit_Util_Log_Database), as well as two different database schema definitions.
On many systems this class will live at /usr/local/share/pear/PHPUnit/Util/Log/Database.php, and the schemas at /usr/local/share/pear/PHPUnit/Util/Log/Database/MySQL.sql and /usr/local/share/pear/PHPUnit/Util/Log/Database/SQLite3.sql. You may have to do some tweaking depending on the DBMS you're using.
See these sections of the documentation:
http://www.phpunit.de/manual/3.4/en/extending-phpunit.html#extending-phpunit.PHPUnit_Framework_TestListener
http://www.phpunit.de/manual/3.4/en/api.html#api.testresult.tables.testlistener
I am working on the same problem.
I asked a related question here a few days ago.
My attempt uses Selenium IDE, Selenium RC and Perl.
General strategy:
You can make newer releases of PHPUnit generate TAP output (options --tap, --log-tap).
(TAP is the Test Anything Protocol, a standardized output format.)
Parse the log file to obtain the suite metadata from the TAP parser object and insert it into the database using Perl, e.g. the "Number of Passed", "Failed", and "Unexpectedly succeeded" counts.