Whenever an NUnit test fails during its execution (i.e. because of an unhandled exception rather than an Assert.* failure), I want to log additional information (I'm writing web tests and I am especially interested in the web page's current DOM).
How can I specify a global exception handler in NUnit that logs additional information on NoSuchElementExceptions? The test should still fail, of course.
You could write an NUnit event listener addin that logs the information. See http://www.nunit.org/index.php?p=nunitAddins&r=2.6.3 and http://www.nunit.org/index.php?p=eventListeners&r=2.6.3. For a tutorial, see https://www.simple-talk.com/dotnet/.net-tools/testing-times-ahead-extending-nunit/.
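A rough sketch of such an addin against the NUnit 2.6 extensibility API (member signatures can vary slightly by version, and the DriverHolder hook for grabbing the DOM is hypothetical):

using System;
using NUnit.Core;
using NUnit.Core.Extensibility;

[NUnitAddin(Description = "Logs extra details when a test errors")]
public class FailureLoggerAddin : IAddin, EventListener
{
    public bool Install(IExtensionHost host)
    {
        IExtensionPoint listeners = host.GetExtensionPoint("EventListeners");
        if (listeners == null)
            return false;
        listeners.Install(this);
        return true;
    }

    public void TestFinished(TestResult result)
    {
        // IsError covers unhandled exceptions such as NoSuchElementException;
        // plain assertion failures show up as IsFailure instead.
        if (result.IsError)
        {
            Console.WriteLine("Test errored: " + result.FullName);
            Console.WriteLine(result.Message);
            // Hypothetical hook: fetch the current page source from wherever
            // your web tests keep their driver instance, e.g.
            // Console.WriteLine(DriverHolder.Current.PageSource);
        }
    }

    // The remaining EventListener members are not needed for logging failures.
    public void RunStarted(string name, int testCount) { }
    public void RunFinished(TestResult result) { }
    public void RunFinished(Exception exception) { }
    public void TestStarted(TestName testName) { }
    public void SuiteStarted(TestName testName) { }
    public void SuiteFinished(TestResult result) { }
    public void UnhandledException(Exception exception) { }
    public void TestOutput(TestOutput testOutput) { }
}

The test itself still fails as before; the listener only observes the result and logs the extra information.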
Related
Say I have CI tests running via GitHub Actions. The program I test has a module that checks whether its input parameters are valid. Hence, I run a test where I intentionally provide improper input parameters, so that the program catches this and exits with an error (exit 1).
Problem: I want GitHub Actions to mark this test as a success. I am aware of continue-on-error: true for a run step. Still, this will mark any failed step as a success, regardless of whether my program exits intentionally with an error due to improper input parameters (as described above) or because of a bug in my code, which should actually result in a failed CI test. So far I have been manually inspecting the Actions logs, but there must be an automated way to catch this.
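For illustration, the step in question currently looks roughly like this (job, step and program names are made up):

jobs:
  invalid-input-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run program with deliberately invalid parameters
        run: ./my-program --not-a-real-flag   # exits 1 on purpose
        continue-on-error: true               # but this also hides genuine failures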
Any ideas?
My team is just getting started with X-Ray, and we are setting up our pipelines. However, while doing this I noticed that if I submit a JUnit XML file to X-Ray via the REST API, it will create new tests for any test data that isn't already in the system.
Is there a way to have X-Ray ignore test results for tests that don't exist for the test execution? I don't want it constantly creating extra tests.
For example:
(Jira/X-Ray Server) TestExecution MyExecution has test testA
From the client, I submit a JUnit XML file containing results for testA and testB in the MyExecution TestExecution
testB now exists on the server under MyExecution
I would like to be able to submit the JUnit XML file without it creating extra tests.
Whenever you import automation results using the REST API, or any of the available CI plugins, Xray will auto-provision "generic" Test entities.
The flow is detailed here.
Xray tries to find a unique identifier for the automated test; in the case of JUnit, it's based on the full classname plus the name of the test method; this will become part of the Generic Definition field. The process for JUnit is described in more detail here.
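To make that concrete (the package and class names below are invented), a JUnit test such as the following ends up in the XML report with classname and name attributes, which Xray combines into the identifier it looks up before creating anything new:

package com.acme.tests;

import org.junit.Test;
import static org.junit.Assert.assertTrue;

public class MyExecutionTests {

    // Reported as <testcase classname="com.acme.tests.MyExecutionTests" name="testA" .../>,
    // so Xray looks for a generic Test whose Generic Definition contains
    // com.acme.tests.MyExecutionTests.testA before auto-provisioning a new one.
    @Test
    public void testA() {
        assertTrue(true);
    }
}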
How it works for other test automation frameworks/report formats is similar and is detailed on the respective documentation pages.
If a "generic" Test is found, then the Test is reused and a Test Run is created against it. Otherwise, the Test will be auto-provisioned.
This process isn't configurable. However, in theory, if the user you use to submit the automation results isn't allowed to create Test issues, you may get the behaviour you need.
Things like this are usually not configurable because they are normally a consequence of good practices agreed internally with the team(s).
At our organization we follow a DSL (domain-specific language) model, where users write tests in a spreadsheet and the underlying Java code interprets and executes those instructions.
Now here is the problem.
We have a single test method in our class which uses a data provider, reads all the test methods from the file and executes the instructions.
Naturally, when surefire executes and prints results it says:
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0
Is there a way to manipulate this in TestNG such that each custom test method from the Excel file is picked up as a legitimate test method when the overall suite executes?
I actually made the group migrate from JUnit to TestNG, and they are questioning whether the DataProvider feature can handle that, and I have no response for it :(
So essentially we want to break the binding between Java methods and tests by using external data providers, but at the same time preserve the number of test methods executed as provided in the Excel spreadsheet.
If you can give me any direction it would be most helpful to me.
Attaching my spreadsheet here.
My Java file has only one test method:
@Test
public void runSuite() { // driven by the data provider that reads the spreadsheet
    // Read each test method from the file; I want the build server to
    // recognize them somehow as individual test methods
}
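For context, a stripped-down sketch of the setup described above (SpreadsheetReader and DslEngine are hypothetical helpers, as is the provider name):

import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;

public class SpreadsheetSuite {

    // Each row of the spreadsheet describes one DSL "test method".
    @DataProvider(name = "excelRows")
    public Object[][] excelRows() {
        return SpreadsheetReader.readRows("suite.xlsx"); // hypothetical helper
    }

    // A single physical @Test runs every row, which is what surefire counts.
    @Test(dataProvider = "excelRows")
    public void runSuite(String testName, String instructions) {
        DslEngine.execute(testName, instructions); // hypothetical DSL executor
    }
}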
I have ported my code to the RTM versions of both WinRT and Rx. I use ReactiveUI in my ViewModels. Before porting the code my unit tests ran without problems, but now I get strange behavior.
Here is the test:
var sut = new MyViewModel();
sut.MyCommand.Execute(null); // ReactiveAsyncCommand
Assert.AreEqual(0, sut.Collection.Count);
If I debug the test step by step the assertion does not fail, but when I run it with the test runner it fails...
The Collection asserted is modified by a method subscribing to the command:
MyCommand.RegisterAsyncTask(_ => DoWork())
.ObserveOn(SynchronizationContext.Current)
.Subscribe(MethodModifyingCollection);
The code was working before moving it to the RTM. I tried also to remove the ObserveOn and add an await Task.Delay() before the Assert without success.
Steven's got the rightish answer, but there are a few RxUI specific things missing. This is definitely related to scheduling in a test runner, but the reason is that the WinRT version of ReactiveUI can't detect properly whether it's in a test runner at the moment.
The dumb workaround for now is to set this at the top of all your tests:
RxApp.DeferredScheduler = Scheduler.CurrentThread;
Do not use the TestScheduler for every test; it's overkill, and it actually isn't compatible with certain kinds of testing. TestScheduler is good for tests where you're simulating time passing.
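For that time-based case, a minimal sketch (needs Microsoft.Reactive.Testing and System.Reactive.Linq); virtual time only advances when you tell it to:

var scheduler = new TestScheduler();
var seen = new List<long>();

Observable.Interval(TimeSpan.FromSeconds(1), scheduler)
          .Subscribe(seen.Add);

scheduler.AdvanceBy(TimeSpan.FromSeconds(3).Ticks); // "three seconds" pass instantly
Assert.AreEqual(3, seen.Count);                     // ticks 0, 1 and 2 were observed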
Your problem is that MSTest unit tests have a default SynchronizationContext. So ObserveOn and ReactiveAsyncCommand will marshal to the thread pool instead of to the WPF context. This causes a race condition.
Your first and best option is the Rx TestScheduler.
Another option is to await some completion signal (and ensure your test method is async Task, not async void).
Otherwise, if you just need a SynchronizationContext, you can use AsyncContext from my AsyncEx library to execute the tests within your own SynchronizationContext.
Finally, if you have any code that directly uses Dispatcher instead of SynchronizationContext, you can use WpfContext from the Async CTP download.
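For the AsyncContext option above, a rough sketch (needs Nito.AsyncEx; the MSTest attribute is assumed and MyViewModel is the view model from the question):

[TestMethod]
public void ExecutingCommandLeavesCollectionEmpty()
{
    // AsyncContext.Run installs a single-threaded SynchronizationContext,
    // so ObserveOn(SynchronizationContext.Current) marshals back to this thread.
    AsyncContext.Run(() =>
    {
        var sut = new MyViewModel();
        sut.MyCommand.Execute(null);
        Assert.AreEqual(0, sut.Collection.Count);
    });
}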
I'm using PHPUnit & phpUnderControl to run the Selenium RC tests on every build.
PHPUnit allows you to implement your own TestListener. A custom test listener implements the methods of the PHPUnit_Framework_TestListener interface. Specifically, your listener will implement:
startTestSuite()
endTestSuite()
startTest()
endTest()
addError()
addFailure()
addSkippedTest()
addIncompleteTest()
Once you've attached the TestListener, these methods will be called each time the corresponding events occur in your test suite. You'll write these methods to perform the INSERTs and UPDATEs on the test results database that you'll create.
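A bare-bones skeleton of such a listener (PHPUnit 3.x-era signatures; the SQL hinted at in the comments, and the table it targets, are up to you):

<?php
class DatabaseResultListener implements PHPUnit_Framework_TestListener
{
    public function startTestSuite(PHPUnit_Framework_TestSuite $suite)
    {
        // INSERT a row for the suite and remember its id
    }

    public function endTestSuite(PHPUnit_Framework_TestSuite $suite)
    {
        // UPDATE the suite row with totals
    }

    public function startTest(PHPUnit_Framework_Test $test)
    {
        // INSERT a row for the test with a start timestamp
    }

    public function endTest(PHPUnit_Framework_Test $test, $time)
    {
        // UPDATE the test row with the elapsed $time
    }

    public function addError(PHPUnit_Framework_Test $test, Exception $e, $time)
    {
        // UPDATE the test row with status 'error' and $e->getMessage()
    }

    public function addFailure(PHPUnit_Framework_Test $test, PHPUnit_Framework_AssertionFailedError $e, $time)
    {
        // UPDATE the test row with status 'failure'
    }

    public function addIncompleteTest(PHPUnit_Framework_Test $test, Exception $e, $time)
    {
        // UPDATE the test row with status 'incomplete'
    }

    public function addSkippedTest(PHPUnit_Framework_Test $test, Exception $e, $time)
    {
        // UPDATE the test row with status 'skipped'
    }
}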
Attaching the listener class to your suite is as easy as adding a tag to the phpunit.xml configuration file. For example:
<phpunit>
  <testsuites>[...]</testsuites>
  <selenium>[...]</selenium>
  <listeners>
    <listener class="PHPUnit_Util_Log_Database"
              file="/usr/local/share/pear/PHPUnit/Util/Log/Database.php"/>
  </listeners>
</phpunit>
That's all you need!
In fact, PHPUnit already comes with a working version of the listener I just described (PHPUnit_Util_Log_Database), as well as two different database schema definitions.
On many systems this class will live at /usr/local/share/pear/PHPUnit/Util/Log/Database.php, and the schemas at /usr/local/share/pear/PHPUnit/Util/Log/Database/MySQL.sql and /usr/local/share/pear/PHPUnit/Util/Log/Database/SQLite3.sql. You may have to do some tweaking depending on the DBMS you're using.
See these sections of the documentation:
http://www.phpunit.de/manual/3.4/en/extending-phpunit.html#extending-phpunit.PHPUnit_Framework_TestListener
http://www.phpunit.de/manual/3.4/en/api.html#api.testresult.tables.testlistener
I am working on the same problem.
I asked a related question here a few days ago.
My attempt uses Selenium IDE, Selenium RC and Perl.
General strategy:
You can make newer releases of PHPUnit generate TAP output (options --tap, --log-tap).
(TAP is the Test Anything Protocol, a standardized output format.)
Parse the log file to obtain the suite metadata from the TAP parser object (e.g. the number of passed, failed and unexpectedly succeeded tests) and insert it into the database using Perl.
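A rough sketch of that strategy with Perl's TAP::Parser (the file name and the final INSERT are placeholders):

#!/usr/bin/perl
use strict;
use warnings;
use TAP::Parser;

# Read the TAP log written by "phpunit --tap" / "--log-tap results.tap"
open my $fh, '<', 'results.tap' or die "cannot open TAP log: $!";
my $tap = do { local $/; <$fh> };

my $parser = TAP::Parser->new({ tap => $tap });
$parser->run;    # consume the whole stream

my @passed = $parser->passed;        # test numbers that passed
my @failed = $parser->failed;        # test numbers that failed
my @bonus  = $parser->todo_passed;   # "unexpectedly succeeded" (passing TODO tests)

printf "passed=%d failed=%d unexpectedly_succeeded=%d\n",
    scalar @passed, scalar @failed, scalar @bonus;

# ...then INSERT these counts into your results database, e.g. via DBI.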