At our organization we follow a DSL (domain-specific language) model where users write tests in a spreadsheet and the underlying Java code interprets and executes those instructions.
Now here is the problem.
We have a single test method in our class which uses a data provider to read all the test cases from the file and execute their instructions.
Naturally, when Surefire runs and prints the results, it says:
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0
Is there a way to manipulate this in TestNG such that each custom test method from the Excel file is picked up as a legitimate test method when the overall suite executes?
I actually persuaded the group to migrate from JUnit to TestNG, and now they are questioning whether the DataProvider feature can handle this, and I have no answer for them :(
So essentially we want to decouple tests from Java methods by using external data providers, but at the same time have the reported number of executed tests match the number of test cases defined in the Excel spreadsheet.
If you can give me any direction it would be most helpful to me.
Attaching my spreadsheet here.
My java file has only 1 test method:
@Test
public void runSuite() {
    // Read each test case from the file; I want the build server to
    // recognize each of them as an individual test method somehow.
}
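For reference, TestNG reports each @DataProvider invocation as its own test result, so one way to get per-row reporting is to have the provider return one row per spreadsheet test case and execute exactly one case per invocation. Below is a minimal sketch of that idea; ExcelReader and DslEngine are hypothetical stand-ins for your existing spreadsheet-parsing and instruction-execution code.

import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;

public class SpreadsheetSuite {

    // One Object[] per spreadsheet row; each row becomes one reported
    // TestNG invocation (and therefore one entry in the Surefire count).
    // ExcelReader is a hypothetical wrapper around the existing parsing code.
    @DataProvider(name = "spreadsheetRows")
    public Object[][] spreadsheetRows() {
        return ExcelReader.readRows("smoke-tests.xlsx");
    }

    // Executes exactly one spreadsheet-defined test per invocation instead of
    // looping over the whole file inside a single test method.
    @Test(dataProvider = "spreadsheetRows")
    public void runSpreadsheetCase(String testName, String instructions) {
        DslEngine.execute(testName, instructions);  // hypothetical DSL executor
    }
}

With this shape, the summary should show one result per spreadsheet row rather than a single aggregated test; @Factory is another TestNG option if you would rather have one test class instance per row.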
My team is just getting started with X-Ray, and we are setting up our pipelines. While doing this, I noticed that if I submit a JUnit XML file to X-Ray via the REST API, it will create new tests for any test data that isn't already in the system.
Is there a way to have X-Ray ignore results for tests that don't exist in the test execution? I don't want it constantly creating extra tests.
For example:
(Jira/X-Ray server) The TestExecution MyExecution contains the test testA.
From the client, I submit a JUnit XML file containing results for testA and testB against the MyExecution TestExecution.
testB now exists on the server under MyExecution.
I would like to be able to submit the JUnit XML file without it creating these extra tests.
Whenever you import automation results using the REST API, or any of the available CI plugins, Xray will auto-provision "generic" Test entities.
The flow is detailed here.
Xray tries to find a unique identifier for the automated test; in the case of JUnit, it's based on the full classname plus the name of the test method; this will become part of the Generic Definition field. The process for JUnit is described in more detail here.
How this works for other test automation frameworks/report formats is similar and is detailed on the respective documentation pages.
If a "generic" Test is found, then the Test is reused and a Test Run is created against it. Otherwise, the Test will be auto-provisioned.
This process isn't configurable. However, in theory, if the user account you use to submit automation results isn't allowed to create Test issues, you may get the behavior you need.
Behaviors like this are usually not configurable because they are normally a consequence of good practices agreed internally with the team(s).
I have several Spock test classes grouped together in a package. I am using JUnit 4.10. Each test class contains several feature test methods.
I want to perform some setup steps (such as loading data into a DB, starting up a web server) before I run any test case, but only once when the testing starts.
I want this "OneTimeSetup" method to be called only once whether:
I run all the test classes in the package (for example if they are grouped in a Test Suite)
I run a few test classes
I run only one test class
I run only a certain feature method within a test class
From reading other posts on SO, it seems that this is what TestNG's @BeforeSuite does.
I am aware of Spock's setupSpec() and cleanupSpec() methods, but they only work within a given test class. I am looking to do something like "setupTestSuite()." How can this be achieved in Spock?
You can write a global extension, use a JUnit test suite, call a static method in a helper class (say from setupSpec) that does its work just once, or let the build tool do the job.
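As a minimal sketch of the "static method in a helper class" option, here is a hypothetical GlobalTestSetup class (plain Java) whose ensureInitialized() can be called from each spec's setupSpec() and performs the expensive setup only once per JVM:

// Hypothetical helper: performs the expensive global setup exactly once per
// JVM, no matter which spec triggers it first or how many specs call it.
public final class GlobalTestSetup {

    private static boolean initialized = false;

    private GlobalTestSetup() {
    }

    public static synchronized void ensureInitialized() {
        if (initialized) {
            return;
        }
        // e.g. load test data into the DB, start the embedded web server, ...
        initialized = true;
    }
}

Calling GlobalTestSetup.ensureInitialized() from every setupSpec() covers all four scenarios above (whole package, several classes, one class, a single feature method); if one-time teardown is also needed, a JVM shutdown hook registered inside ensureInitialized() is one option.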
I have a method which works like this:
public void deploy(UserInput userInput) {
    if (userInput is wrong)
        return;
    // start deployment process
}
The userInput is run through individual checks inside the deploy method. Now I'd like to write a JUnit test that verifies the input-checking logic behaves correctly (i.e., whether the deployment process would start or not depending on valid or invalid user input). So I need to test this with both valid and invalid user inputs. I could do this by checking whether anything has actually been deployed, but in this case that is very cumbersome.
So I wonder whether it's somehow possible to know, in the corresponding JUnit test, if the deploy method was aborted or not (due to wrong user input)? (By the way, changing the deploy method is not an option.)
As you describe your problem, you can only check your method for side effects, or for whether it throws an exception. The easiest way to do this is to use a mocking framework like JMockit or Mockito. You mock the first method that is called after the user input checks have passed:
public void deploy(UserInput userInput) {
    if (userInput is wrong)
        return;
    // start deployment process
    startDeploy(); // mock this method
}
You can also extend the class under test, and override startDeploy() if it's possible. This would avoid having to use a mocking framework.
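A minimal sketch of that subclass-and-override approach, assuming the class under test is called Deployer, that startDeploy() is overridable, and that a hypothetical UserInputFixtures helper supplies valid and invalid inputs:

import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

import org.junit.Test;

public class DeployGuardTest {

    // Test double that records whether the deployment step was reached,
    // instead of really deploying anything.
    static class RecordingDeployer extends Deployer {
        boolean deployStarted = false;

        @Override
        protected void startDeploy() {
            deployStarted = true;
        }
    }

    @Test
    public void deployIsSkippedForInvalidInput() {
        RecordingDeployer deployer = new RecordingDeployer();
        deployer.deploy(UserInputFixtures.invalid());  // hypothetical fixture
        assertFalse(deployer.deployStarted);
    }

    @Test
    public void deployStartsForValidInput() {
        RecordingDeployer deployer = new RecordingDeployer();
        deployer.deploy(UserInputFixtures.valid());    // hypothetical fixture
        assertTrue(deployer.deployStarted);
    }
}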
Alternative - Integration tests
It sounds like the deploy method is large and complex, and deals with files, file systems, external services (FTP), etc.
It is sometimes easier in the long run to just accept that you're dealing with external systems and to test against those external systems. For instance, if deploy() copies a file to directory x, test that the file exists in the target directory. I don't know how complex deploy is, but mocking these methods can often be as hard as testing the actual behaviour. This may be cumbersome, but like most tests, it would allow you to refactor your code so it is simpler to understand. If your goal is refactoring, then in my experience it's easier to refactor when you're testing actual behaviour rather than mocking.
You could create a UserInput stub / mock with the correct expectations and verify that only the expected calls (and no more) were made.
However, from a design point of view, if you were able to split the validation and the deployment process into separate classes, then your code could be as simple as:
if (_validator.isValid(userInput)) {
    _deployer.deploy(userInput);
}
This way you can easily test that the deployer is never called if the validator returns false, and that it is called if the validator returns true, using a mocking framework such as jMock.
It will also enable you to test your validation and deployment code separately, avoiding the issue you're currently having.
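For illustration, here is a minimal sketch of that check using Mockito rather than jMock; Validator, Deployer, DeploymentService and UserInput are hypothetical names for the split described above, with DeploymentService assumed to take the two collaborators in its constructor.

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.never;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

import org.junit.Test;

public class DeploymentServiceTest {

    private final Validator validator = mock(Validator.class);
    private final Deployer deployer = mock(Deployer.class);
    private final UserInput input = mock(UserInput.class);
    private final DeploymentService service = new DeploymentService(validator, deployer);

    @Test
    public void invalidInputNeverReachesTheDeployer() {
        when(validator.isValid(input)).thenReturn(false);

        service.deploy(input);

        verify(deployer, never()).deploy(input);
    }

    @Test
    public void validInputIsDeployed() {
        when(validator.isValid(input)).thenReturn(true);

        service.deploy(input);

        verify(deployer).deploy(input);
    }
}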
We have a smoke test that we run every morning to check a number of applications which involves logging in, executing a simple operation and logging out.
The test at the moment is a collection of Selenium IDE scripts which were imported into Selenium RC as Java code and run with JUnit inside NetBeans.
What we would like to do is run the test from a spreadsheet, i.e. each line of the spreadsheet has the application URL, the login parameters, some titles and text to check, and the logout sequence.
At the moment, our simple POC simply has one JUnit Test class of the form:
@Test
public void testTestSmokeCheck() throws Exception { ... }
This calls a class which loops through the spreadsheet and does:
Selenium sel = new DefaultSelenium(...);
sel.start();
sel.open(...);
...
sel.close();
for each line of the spreadsheet.
This works, but the problem is that many lines of the spreadsheet are compressed into one JUnit test, which either passes or fails as a whole.
What we would like is for each line of the spreadsheet to be a separate JUnit test.
This way each line of the spreadsheet would result in either red or green which would be far more meaningful.
Any ideas how to achieve that?
Not exactly a spreadsheet, but you might want to look into FitNesse. It drives tests from tables on Wiki pages, and prints out red/green pass/fail.
You can do multiple pages and test suites, which should solve your problem.
You can use the Parameterized runner (the JavaDoc has an example). If a test fails, the failure message will include the index of the data, which could correspond to the row in the spreadsheet.
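As a rough sketch of that suggestion, each spreadsheet row can become its own red/green test with the JUnit 4 Parameterized runner; SpreadsheetReader and SeleniumSmokeRunner are hypothetical wrappers around the existing spreadsheet-parsing and Selenium-driving code, and the three columns per row are assumed purely for the example.

import java.util.List;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
public class SmokeCheckTest {

    // One Object[] per spreadsheet row; SpreadsheetReader is a hypothetical
    // wrapper around the existing spreadsheet-parsing code.
    @Parameters
    public static List<Object[]> rows() {
        return SpreadsheetReader.readRows("smoke-checks.xls");
    }

    private final String url;
    private final String login;
    private final String expectedTitle;

    public SmokeCheckTest(String url, String login, String expectedTitle) {
        this.url = url;
        this.login = login;
        this.expectedTitle = expectedTitle;
    }

    // Each row now shows up as its own result in the JUnit report,
    // labelled with the row's index.
    @Test
    public void smokeCheck() {
        SeleniumSmokeRunner.run(url, login, expectedTitle);  // hypothetical
    }
}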
If the number of rows isn't huge, I recommend writing a test case per row and getting rid of the spreadsheet. Sooner or later you will want to do specific logic for some of the pages, and a spreadsheet will no longer model that well.
I'm using PHPUnit & phpUnderControl to run Selenium RC on every build.
PHPUnit allows you to implement your own TestListener. Custom test listeners implement the methods of the PHPUnit_Framework_TestListener interface. Specifically, your listener will implement:
startTestSuite()
endTestSuite()
startTest()
endTest()
addError()
addFailure()
addSkippedTest()
addIncompleteTest()
Once you've attached the TestListener, these methods will be called each time the corresponding events occur in your test suite. You write these methods to perform the INSERTs and UPDATEs on the test results database that you'll create.
Attaching the listener class to your suite is as easy as adding a tag to the phpunit.xml configuration file. For example:
<phpunit>
  <testsuites>[...]</testsuites>
  <selenium>[...]</selenium>
  <listeners>
    <listener class="Database"
              file="/usr/local/share/pear/PHPUnit/Util/Log/Database.php"/>
  </listeners>
</phpunit>
That's all you need!
In fact, PHPUnit already comes with a working version of the listener I just described (PHPUnit_Util_Log_Database), as well as two different database schema definitions.
On many systems this class will live at /usr/local/share/pear/PHPUnit/Util/Log/Database.php, and the schemas at /usr/local/share/pear/PHPUnit/Util/Log/Database/MySQL.sql and /usr/local/share/pear/PHPUnit/Util/Log/Database/SQLite3.sql. You may have to do some tweaking depending on the DBMS you're using.
See these sections of the documentation:
http://www.phpunit.de/manual/3.4/en/extending-phpunit.html#extending-phpunit.PHPUnit_Framework_TestListener
http://www.phpunit.de/manual/3.4/en/api.html#api.testresult.tables.testlistener
I am working on the same problem.
I asked a related question here a few days ago.
My attempt uses Selenium IDE, Selenium RC and Perl.
General strategy:
You can make newer releases of PHPUnit generate TAP output (options --tap, --log-tap).
(TAP is the Test Anything Protocol, a standardized output format.)
Parse the log file to obtain the suite metadata from the TAP parser object and insert it into the database using Perl, e.g. "# Number of Passed", "Failed", "Unexpectedly succeeded",