My test fetches input data from a CSV file, inserts it into my SoapUI HTTP request, and fires the requests at the target server. Since there are thousands of inputs in my CSV file, I cannot have an individual test case for each input. Therefore I have just one HTTP test request (for numerous inputs). A simple data-driven approach.
I am using Jenkins to run my tests. Unfortunately, in my JUnit report I see just one test case fail (or pass, if all the iterations passed) and nothing else.
Instead, I would like a JUnit report that treats each individual test step as an individual test case; in general, a report at the test-step level.
Any help in this is really appreciated.
I run SoapUI and create JUnit reports using the following command (as part of the Jenkins job):
sh /home/test/SmartBear/soapUI-Pro-4.5.2/bin/testrunner.sh -a -j -r -f/var/www/SOAPUI/Reports/ -ehttp://mytestserver.com/cgi-bin/test.fcgi -FDF -EDefault -I -S MySOAPUI_TestProjectFile.xml
That's easy. In SoapUI, instead of putting multiple test steps in one test case, put each test step in its own test case. The report will then show each step separately.
We have a .NET Core 3 codebase and the unit and integration tests are run on GitLab CI.
The problem is, when one or more unit/integration tests fail, nothing specific is shown; you have to look through the entire pipeline dump to find the individual failed tests.
Looking at https://docs.gitlab.com/ee/ci/unit_test_reports.html, a JUnit report solves exactly this issue.
Consulting How to capture structured xUnit test output in Gitlab CI?, I still wasn't able to find a proper solution.
The main problem is that there are multiple test projects, all executed with the dotnet test command.
Current snapshot of the .gitlab-ci.yml file:
artifacts:
  when: always
  reports:
    junit: ./Test.xml
script:
  - for proj in $(dotnet sln MySolution.sln list | grep 'Test.csproj$'); do dotnet test --logger "junit;LogFilePath=Test.xml" $proj; done
The problematic part is the script section, where we iterate through all the test projects and run dotnet test for each one.
Is there a way to produce a single JUnit XML log file from all of the projects and feed it to the JUnit test report in the
reports:
  junit: ./Test.xml
line?
You don't have to combine the reports. The artifacts:reports:junit key accepts multiple values, including glob patterns.
artifacts:
  reports:
    junit:
      - "test.xml"
      - "./tests/*.xml"
So, one solution would be to have all your XML output files in a particular directory and use a glob pattern in your .gitlab-ci.yml file that matches the many files.
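For instance, a minimal sketch combining your loop with that suggestion (the testresults directory and the use of basename to derive a unique file name per project are illustrative assumptions; LogFilePath itself comes from your existing command):

script:
  - |
    for proj in $(dotnet sln MySolution.sln list | grep 'Test.csproj$'); do
      name=$(basename "$proj" .csproj)
      dotnet test --logger "junit;LogFilePath=$CI_PROJECT_DIR/testresults/$name.xml" "$proj"
    done
artifacts:
  when: always
  reports:
    junit:
      - "testresults/*.xml"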
If you really want to merge the xUnit XML files instead, see this answer.
I am using this command on cmd to generate my report:
jmeter -n -t C:\Users\Hp\Desktop\WRALoadTest\TestScript.jmx -l C:\Users\Hp\Desktop\WRALoadTest\result.csv -f -e -o C:\Users\Hp\Desktop\WRALoadTest\HTMLReport
but I get the error:
An error occurred: Cannot invoke "org.apache.jmeter.report.processor.MapResultData.getResult(String)" because "resultData" is null
My CSV file is empty except for the column headings, so I guess no data is being generated in it, but what is the reason behind that?
I also tried doing it manually in JMeter and it gives the same error:
Error while producing dashboard report
Can anyone tell me what I might be doing wrong?
If your result.csv is empty, it's absolutely expected that you cannot generate the dashboard.
The main question is why it's empty; it means your test failed somewhere, somehow. The reasons could include:
- If your test relies on a CSV file and uses the CSV Data Set Config, an incorrect path to the file may cause the whole test to fail.
- If your test relies on a JMeter plugin and the plugin is not installed, you won't be able to run it or even open the test plan in the JMeter GUI; if this is the case, use the JMeter Plugins Manager.
- It might be the case that you have the Thread Group configured to stop the test after the very first Sampler failure.
The exact answer lives in the jmeter.log file. If anything goes wrong, you can always find the reason (or at least extended information) there, so check it for any suspicious entries. If you can't figure out the root cause of the failure yourself, update your question with the jmeter.log file contents.
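As a side note: once the test is fixed and result.csv is no longer empty, you don't have to re-run the whole test just to retry the dashboard. JMeter can generate it from an existing results file with the -g option (paths taken from your command; the output folder must be empty or not yet exist):

jmeter -g C:\Users\Hp\Desktop\WRALoadTest\result.csv -o C:\Users\Hp\Desktop\WRALoadTest\HTMLReport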
I had the exact same error; however, it wasn't a CSV issue for me.
Depending on your test, and if you're using the -reportatendofloadtests JMeter parameter, you may need to add a tearDown Thread Group and then a JSR223 Sampler inside that tearDown.
The Sampler doesn't need to make any requests; for example, this code is all that's needed:
SampleResult.setIgnore();
This will allow the report to be generated and you won't receive any error messages.
I have a task to send reports of periodic executions of FitNesse tests to a specific endpoint in a specific JSON format.
I have set up periodic execution of the tests in the Jenkins properties and I save the results in XML, but now I need to parse the information about the results.
It can't just be a step in the 'after build' section in Jenkins (or maybe it can, but I don't know of a plugin for it), so what should it be and how can I do this?
In particular, I don't need detailed information about each test, only general data like the date of the run, the pass rate, the status, the name of the project, etc.
I think the best way to solve this is to write a script that parses the XML file and creates the required JSON file. We normally use Python scripts for this, as sketched below.
If you need certain generic information about the build in the script, like the build number, you can pass it in using the Jenkins environment variables.
To call the script, just add a batch or shell step and place it below your FitNesse build step, to make sure the XML is generated before the script is called.
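For example, a minimal sketch of such a script (the file name, the JSON field names, and the assumption that Jenkins saved JUnit-style XML with tests/failures/errors/timestamp attributes on the root element are all illustrative; adjust them to your actual XML layout):

# fitnesse_summary.py - parse a JUnit-style results XML and emit a summary JSON
import json
import sys
import xml.etree.ElementTree as ET

def summarize(xml_path, project_name):
    root = ET.parse(xml_path).getroot()
    tests = int(root.get("tests", "0"))
    failed = int(root.get("failures", "0")) + int(root.get("errors", "0"))
    return {
        "project": project_name,            # e.g. passed in from a Jenkins env var
        "date": root.get("timestamp"),      # run date as recorded in the XML
        "passRate": (tests - failed) / tests if tests else 0.0,
        "status": "PASSED" if failed == 0 else "FAILED",
    }

if __name__ == "__main__":
    # usage: python fitnesse_summary.py results.xml MyProject
    print(json.dumps(summarize(sys.argv[1], sys.argv[2]), indent=2))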
FitNesse comes with a jUnit runner which allows you to execute a test/suite. If you create a test class annotated with @RunWith(FitNesseRunner.class) and include its execution in a Jenkins Maven job (where the jUnit class is executed by either the surefire or the failsafe plugin), the outcome of the executed tests will be picked up automatically by Jenkins, just like it picks up other/regular jUnit tests (surefire or failsafe will include them in their XML reports and Jenkins will pick these up).
You can find a sample Maven FitNesse project using (a slightly customised version of) this approach at https://github.com/fhoeben/sample-fitnesse-project. How to run the tests on Jenkins is described at https://github.com/fhoeben/hsac-fitnesse-fixtures#to-run-the-tests-on-a-build-server:
Have the build server check out the project and execute mvn clean test-compile failsafe:integration-test. The results, in JUnit XML format, can be found in target/failsafe-reports (Jenkins will pick these up automatically for a Maven job).
You indicate you don't need the HTML results, but they will be made available anyway. They can be found in target/fitnesse-results/index.html, and you could choose to use the 'HTML Publisher' Jenkins plugin to link to them from each build.
I need to run a job in Jenkins that, after successfully running the tests, generates test reports as a post-build action.
For this I have configured the 'Publish JUnit test result report' post-build action with the field
Test Report XMLs: continuum/*/target/surefire-reports/*TestSuite.xml
but I get the following error:
'continuum/*/target/surefire-reports/TestSuite.xml' doesn't match anything: 'continuum' exists but not 'continuum//target/surefire-reports/*TestSuite.xml'
Can you please help me resolve this error?
I assume you have an 'Execute JUnit tests' build step. This will produce a JUnit XML file at a location you specify, say TestOutput/junitresults.xml.
In the 'Publish JUnit test result report' post-build step you then just need to specify TestOutput/junitresults.xml.
As long as your tests executed and produced the output file, the post-build step won't fail to publish it, whether the tests passed or not.
You shouldn't be trying to publish files from the surefire-reports directory unless that is where you told JUnit to write its output file; normally you wouldn't.
If you want to make it even simpler, just tell JUnit to write its output file to the root of the Jenkins WORKSPACE by removing the TestOutput/ prefix and specifying just junitresults.xml.
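If you do need to match report files under nested module directories, as your original continuum/... pattern suggests, note that this field accepts Ant-style patterns, where ** matches any number of directory levels. A hedged example, assuming the reports really do live under each module's target directory:

Test Report XMLs: continuum/**/target/surefire-reports/*TestSuite.xml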
I'm starting a Python script from a Hudson job. The script is started through an 'Execute Windows batch command' build step as python my_script.py.
Now I need to get some data created by the script back to Hudson and add it to the fail/success emails. My current approach is that the Python script writes the data to stderr, which the batch file reads into a temp file and then into an environment variable. I can see the environment variable correctly right after the script execution (using the set command), but in the post-build actions it is no longer visible. The email sending is probably done in a different process, so the variables are not visible there anymore. I'm accessing the env vars in the email as ${ENV, varname} (or actually, in debug mode, as $ENV to print them all).
Is there a way to make the environment variable global inside Hudson?
Or can someone suggest a better way of getting data from the Python script back to Hudson?
All the related parts (Hudson, the batch file and the Python script) are under my control and can be modified as needed.
Thanks.
Every build step gets its own shell. This implies that your environment variables are only valid within that build step.
You can just write the data in a nice format to the standard output (use a header that is easy to identify), and if the job fails, the data gets attached to the email as part of the build log.
If you insist on including only the data, you can use the following token for the Editable Email Notification post-build action (Email-ext plugin):
${BUILD_LOG_REGEX, regex, linesBefore, linesAfter, maxMatches, showTruncatedLines, substText}
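For example, a sketch of that approach (the MYDATA prefix and the exact argument values are illustrative choices, not required by the plugin). In my_script.py, print an easily matched marker line:

print("MYDATA: 42 items processed, 3 warnings")  # marker line the email regex can find

Then pull only those lines into the email content:

${BUILD_LOG_REGEX, regex="^MYDATA:.*", maxMatches=20, showTruncatedLines=false}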