I am using the Serenity BDD tool version 1.2.1-rc.6 with JUnit 4.12 to automate some regression testing. The project is working fine so far, but I want to be able to rename the generated reports.
At the moment they are called something like this:
62e91fe28f676d9f760e63756d4ba1d2.html
I want them to be named after the test case that creates them.
I have not seen anything about this in the Serenity Reference Manual, nor anywhere else I could think to look.
Any help would be appreciated.
You can set the serenity.compress.filenames property to false in the serenity.properties or serenity.conf file. This will create a filename based on the test case path, test case name and test name. Note that if your test names/paths are very long, this may cause issues on (some versions of?) Windows, which does not support file paths longer than 260 characters.
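For example, in serenity.properties:

serenity.compress.filenames = false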
Related
Note: this is not a dupe of this or this other question. Read on: this question is specific to the Code-Sharing template.
I am doing some pretty basic experiments with NativeScript, Angular and the code-sharing templates (see: @nativescript/schematics).
Now I am doing some exploration / PoC work on how different "build configurations" are supported by the framework. To be clear, I am searching for a simple (and hopefully official) way to have the application use a different version of a specific file (let's call it configuration.ts) based on the current platform (web/ios/android) and environment (development/production/staging?).
Doing the first part is obviously trivial; after all, that is the prime purpose of the code-sharing schematics. So, different versions of the same file are identified by different extensions. This page explains things pretty simply.
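For illustration, with the code-sharing template the platform split for a hypothetical configuration.ts looks like this:

configuration.ts        // picked up by the web (ng) build
configuration.tns.ts    // picked up by the NativeScript (tns) build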
What I don't get as easily is whether the framework/template supports any similar convention-based rule that can be used to switch between debug/release (or, even better, development/staging/production) versions of a file. Think for example of a config.ts file that contains different parameters based on the environment.
I have done some research on the topic, but I was unable to find a conclusive answer:
the old and now retired documentation for the AppBuilder platform mentions a (.debug. and .release.) naming convention for files. I don't think this works anymore.
other sources mention passing parameters during the call to tns build / tns run and then fetching them via webpack env variables... See here. This may work, but seems oddly convoluted
a third option that gets mentioned is to use hooks to customize the build (or to use a plugin that should do the same)
lastly, for some odd reason, @nativescript/schematics seems to generate a default project that contains two files called environment.ts and environment.prod.ts. I suspect those only work for the web version of the project (read: ng serve) - I wasn't able to get the mobile compiler to recognize files that end with .debug.ts, .prod.ts or .release.ts
While it may be possible that what I am trying to do just isn't supported (yet?), the general confusion and dissenting opinions on the matter make me think I may be missing something, somewhere.
In case this IS somehow supported, I also wonder how it may integrate with the NativeScript Sidekick app that is often suggested as a tool to ease the build/run process of NativeScript applications (there is no way to specify additional parameters for the tns commands that Sidekick automates; the only options available are switching between debug/release mode), but this is probably better left for another question.
Environment files are not yet supported; passing environment variables from the build command could be the viable solution for now.
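A rough sketch of what that could look like (the --env.config flag name and the file layout are my own invention, not an official convention; nativescript-dev-webpack forwards custom --env.* flags to the function exported by the generated webpack.config.js):

tns run android --bundle --env.config=staging

// inside the generated webpack.config.js (merge this into it, don't replace it):
module.exports = env => {
    // "staging" when --env.config=staging was passed, "production" otherwise
    const configName = (env && env.config) || "production";
    return {
        resolve: {
            alias: {
                // point imports of "~/configuration" at the matching variant,
                // e.g. a hypothetical configuration.staging.ts
                "~/configuration": "~/configuration." + configName
            }
        }
    };
};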
But of course, you may write your own schematics if you'd like immediate support for environment files.
I did not look into sharing environment files between web and mobile yet - I do like Manoj's suggestion regarding modifying the schematics, but I'll have to cross that bridge when I get there, I guess. I might have an answer to your second question regarding Sidekick: the latest version does support a "Webpack" build option, which seems to pass the --bundle parameter to tns. The caveat is that this option seems to be more sensitive to TypeScript errors, even relatively benign ones, so you have to be careful and make sure to fix them all prior to building. In my case I had to lock the version of @types/jasmine in package.json to "2.8.6" in order to avoid an incompatibility between it and the version of TypeScript that Sidekick's cloud solution is using. Another hint is to check "Clean Build" after npm dependency changes are made. Good luck!
Can someone tell me, please: how do I take a screenshot when a test method fails (JUnit 5)? I have a base test class with @BeforeEach and @AfterEach methods. All other classes with @Test methods extend the base class.
Well, it is possible to write Java code that takes screenshots - see here for example.
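If you do want one, a minimal sketch using JUnit 5's extension model could look like this (the class name and output path are made up, and java.awt.Robot only works where an actual display is available):

import java.awt.Rectangle;
import java.awt.Robot;
import java.awt.Toolkit;
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;
import org.junit.jupiter.api.extension.ExtensionContext;
import org.junit.jupiter.api.extension.TestWatcher;

public class ScreenshotOnFailure implements TestWatcher {

    @Override
    public void testFailed(ExtensionContext context, Throwable cause) {
        try {
            // capture the whole primary screen
            Rectangle screen = new Rectangle(Toolkit.getDefaultToolkit().getScreenSize());
            BufferedImage capture = new Robot().createScreenCapture(screen);
            ImageIO.write(capture, "png", new File("target/" + context.getDisplayName() + ".png"));
        } catch (Exception e) {
            // never let the reporting itself break the test run
            e.printStackTrace();
        }
    }
}

Registering it on the base class with @ExtendWith(ScreenshotOnFailure.class) makes every subclass inherit the behaviour.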
But I am very much wondering about the real problem you are trying to solve this way. I am not sure if you have figured that out yet, but the main intention of JUnit is to provide you with a framework that runs your tests in various environments.
Of course it is nice that you can run JUnit within your IDE, and maybe you would find it helpful to get a screenshot. But: "normally" unit tests also run during nightly builds and such - in environments where "taking a screenshot" might not make any sense!
Beyond that: screenshots are an extremely ineffective way of collecting information! When a test fails, you should be looking for textual log files, HTML/XML reports, whatever. You want failing tests to generate information that can be easily digested.
So, the real answer here is: step back from what you are doing right now, and reconsider non-screenshot solutions to the problem you actually want to solve!
You don't need to take screenshots for JUnit test failures/passes; rather, the recommended way is to generate various reports (tests passed/failed report, code coverage report, code complexity report, etc.) automatically using the below tools/plugins.
You can use the Cobertura Maven plugin or the SonarQube code quality tool, and these will automatically generate the reports for you.
You can look here for the cobertura-maven-plugin and here for SonarQube for more details.
You need to integrate these tools with your CI (Continuous Integration) environment and ensure that if the code does NOT pass certain quality gates (in terms of test coverage, code complexity, etc.), then the project build (war/ear) fails automatically.
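For reference, wiring the Cobertura coverage check into a Maven build looks roughly like this (the version and the 80% threshold are just examples):

<build>
  <plugins>
    <plugin>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>cobertura-maven-plugin</artifactId>
      <version>2.7</version>
      <configuration>
        <check>
          <!-- fail the build when total line coverage drops below 80% -->
          <totalLineRate>80</totalLineRate>
          <haltOnFailure>true</haltOnFailure>
        </check>
      </configuration>
    </plugin>
  </plugins>
</build>

Running mvn cobertura:check will then break the build when the threshold is not met.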
I am experimenting with the framework to build an SPA using Polymer. This will include a large number of custom elements at various levels in the overall application hierarchy. I would like to use web-component-tester to run the module tests on them.
web-component-tester seems to be opinionated about where the tests will be stored - in a separate test directory, where it will run all files found.
I am of the opposite opinion. I would like to store tests in the same directory as the element definition, and to differentiate tests by naming them xxx.test.html (or possibly xxx.test.js). I also want to run different "sets" of tests controlled by gulp: some will be watching for changes and then running the tests (for the app side of my project), and some will be elements that use core-ajax to unit test my server-side scripts. The latter will more than likely be in a totally different directory hierarchy (my dist directory) and will be served by a proper web server.
I "think" the "suites" config option in a wct.conf.js file in my project root might be how I can define this, or alternatively a wct command with some file globs. Unfortunately, web-component-tester's README is somewhat confusing on the details, and when you have your own web server it says "You'll need to save WCT's browser.js in order to go this route." What does that mean?
Can someone enlighten me on how I can get WCT to run each of the elements/**/*.test.html files as its own "suite" (I actually intend to use the describe/it format - but I assume I still need to use the term suite)?
Can someone also explain what I need to do with browser.js when I have my own web server?
I ran some experiments and did a bit of debugging with node-inspector. Firstly, the command line overrides the suites parameter in the config file:
wct app/elements/**/*.test.html
does find all my module tests if I have them stored with the elements, and it ignores the contents of the wct.conf.js file's suites parameter.
Putting the same value (i.e. app/elements/**/*.test.html) in the wct.conf.js file for the suites parameter does the same job; in fact, in this mode gulp test:local also works correctly.
So, to run different tests for module and distribution, I just need to set up wct.conf.js for my module tests, and set up gulp to run a command line with the correct location of my distribution test files.
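For anyone trying the same setup, the config file approach boils down to something like this in wct.conf.js (the browser choice is just an example):

module.exports = {
  // run every test file stored next to its element definition
  suites: ['app/elements/**/*.test.html'],
  plugins: {
    local: {
      browsers: ['chrome']
    }
  }
};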
I still haven't understood the instructions for running with your own web server.
I've been using Cucumber for a while and I've stumbled upon a problem:
Actual question:
Is there a way to import the examples from a single file/DB and use them in Cucumber specifically as examples?
Or alternatively, is there a way to define a variable while already in-step to be an example?
Or alternatively again, is there an option to pass the examples in as variables when I launch the feature file/scenario?
The Problem:
I have a couple of scenarios where I would like to use exactly the same examples, over and over again.
It sounds rather easy, but the examples table is very large (more specifically, it contains all the countries in the world and their corresponding continents). Thus repeating it would be very troublesome, especially if the table needs changing (I would need to change every instance of the table separately).
Complication:
I have a rerun function that knows when a specific example has failed and reruns it after the test is done.
Restrictions:
I do not want to edit my rerun file
Related:
I've noticed that there is already an open discussion about importing examples from CSV here:
Importing CSV as test data in Cucumber?
However, that discussion doesn't work for me, because my rerun function only knows how to work with examples, and the solution suggested there breaks that.
Thank you!
You can use CSV and other external data files with QAF using different BDD syntax.
If you want to use Cucumber steps or the Cucumber runner, you can use QAF-cucumber with BDD2 (preferred) or Gherkin syntax. QAF-cucumber will enable external test data and other QAF features with Cucumber.
Below is an example feature file that uses BDD2 syntax and can be run using the TestNG or Cucumber runner.
Feature: feature uses external data file

@dataFile:resources/${env}/testdata.csv
@regression
Scenario: Another scenario exploring different combinations using data-provider
  Given a "${precondition}"
  When an event occurs
  Then the outcome should "${be-captured}"
The testdata.csv file may look like this:
TestcaseId,precondition,be-captured
123461,abc,be captured
123462,xyz,not be captured
You can run it using the TestNG or Cucumber runner, and you can use any of the built-in data providers, or a custom one as well.
Before I begin, I must admit that I am new to Xtext and to designing DSLs. Some of my questions on this matter may be somewhat "less than intelligent".
I have created an Xtext project using the IDE, and I am simultaneously using one of the sample projects provided with Xtext as a guide to what I need to do in my language. I am seeing a lot of warnings that are making me nervous.
Apparently, when the development environment creates a new project, it somehow configures that project to use the Java 5 libraries. I am using Java 6, and as a result I get warnings saying that my project is configured for Java 5 and there is no Java 5 on my system (which there isn't!).
I have tried altering the build path so that it uses the Java 6 libraries, but this generates a number of other warnings - including warnings that the Java 6 referenced in my MANIFEST.MF file is invalid!
Then there are the "plugin.xml" warnings. Apparently, the build.properties file references a file called "plugin.xml" which is not created when the IDE creates the project. I have no idea whether or not this file is important enough to create, and I have no idea what should go into it.
Frankly, I hate warnings. Warnings tend to lead to future problems in what I produce. I like clean compiles and clean deployments. I would like to eliminate these warnings before they start screwing me up down the road (like putting in Java 6 classes that would break in a Java 5 library).
Has anyone been able to eliminate these warnings reliably? Please advise.
For the JDK warning, you simply switch the target environment in the MANIFEST.MF to one matching your preferred JDK ('JavaSE-1.6' in your case).
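That is, in META-INF/MANIFEST.MF:

Bundle-RequiredExecutionEnvironment: JavaSE-1.6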
The warning regarding the missing plugin.xml will be gone as soon as you have run the grammar generator for the first time, as it will produce such a file.