Cucumber examples reuse in different features/scenarios - CSV

I've been using Cucumber for a while and I've stumbled upon a problem:
Actual question:
Is there a solution for importing the examples from a single file/DB using Cucumber, specifically as examples?
Or alternatively, is there a way to define a variable mid-step so that it acts as an example?
Or alternatively again, is there an option to pass the examples in as variables when I launch the feature file/scenario?
The Problem:
I have a couple of scenarios where I would like to use exactly the same examples, over and over again.
It sounds rather easy, but the examples table is very large (more specifically, it contains all the countries in the world and their respective continents). Repeating it would therefore be very troublesome, especially if the table ever needs changing (I would have to change every instance of the table separately).
Complication:
I have a rerun function that knows when a specific example failed and reruns it after the test is done.
Restrictions:
I do not want to edit my rerun file.
Related:
I've noticed that there is already an open discussion about importing examples from CSV here:
Importing CSV as test data in Cucumber?
However, that discussion doesn't help me, because my rerun function only works with examples, and the solution suggested there breaks it.
Thank you!

You can use CSV and other external data files with QAF using a different BDD syntax.
If you want to use Cucumber steps or the Cucumber runner, you can use QAF-cucumber with BDD2 (preferred) or Gherkin syntax. QAF-cucumber enables external test data and other QAF features with Cucumber.
Below is an example feature file using BDD2 syntax; it can be run with the TestNG or Cucumber runner.
Feature: feature uses external data file

@dataFile:resources/${env}/testdata.csv
@regression
Scenario: Another scenario exploring different combination using data-provider
    Given a "${precondition}"
    When an event occurs
    Then the outcome should "${be-captured}"
The testdata.csv file may look like this:
TestcaseId,precondition,be-captured
123461,abc,be captured
123462,xyz,not be captured
You can run it using the TestNG or Cucumber runner, and you can use any of the built-in data providers or a custom one as well.
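If you stay with plain cucumber-jvm step definitions, the glue code for the scenario above could look roughly like the following. This is a hedged sketch, not QAF's documented API: it assumes a recent cucumber-jvm, and it assumes QAF-cucumber substitutes ${precondition} and ${be-captured} from the current CSV row before each step runs, so the steps receive plain strings. Class and field names are made up for illustration.

import io.cucumber.java.en.Given;
import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;

// Hedged sketch: ordinary cucumber-jvm glue for the BDD2 scenario above.
// Assumption: QAF-cucumber resolves ${precondition} and ${be-captured}
// from the current CSV row, so each step receives a concrete string.
public class OutcomeSteps {

    private String precondition;
    private boolean eventOccurred;

    @Given("a {string}")
    public void aPrecondition(String precondition) {
        this.precondition = precondition;
    }

    @When("an event occurs")
    public void anEventOccurs() {
        this.eventOccurred = true;
    }

    @Then("the outcome should {string}")
    public void theOutcomeShould(String expectedOutcome) {
        // ... assert "be captured" vs. "not be captured" based on the precondition
    }
}

Because the example rows live in one CSV file, changing the shared table means editing a single file rather than every feature that uses it.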

How could I go about loading functions from NTDLL without linking against it or any other DLLs?

I've been experimenting with loading functions from the Windows system DLLs using only the loader functions exported by NTDLL. This works as expected. For the sake of curiosity and getting an even better understanding of the process structure in NT-based systems, I've started trying to load functions from NTDLL by doing the following steps:
1. Load the PEB of the process from gs:[60h]
2. Iterate over the modules loaded into the process, according to the loader, to find NTDLL's base address
3. Parse the PE headers of NTDLL
4. Try to parse the export table to find LdrLoadDll, LdrGetDllHandle, and LdrGetProcedureAddress
This fails at step 4. After stepping through it in a debugger (both VS2019 and WinDbg Preview), it seems as though the offsets I've tried yield an invalid structure that leads to an access violation when my code compares the current function name to one of the ones I'm searching for. My code is being compiled and run on a 64-bit copy of Windows 10 Pro build 21364. Note that I'm using my own header that contains definitions for the structures used for this (these definitions are from winnt.h and here) because the Windows headers don't really play nice with the rest of my code. The function trying to do this is here. For the record, this is part of an attempt to implement my own libc (again, for the sake of curiosity). The code that calls the functions is here. Any help with this is tremendously appreciated.
Never mind; it turns out I had outdated, verbose definitions of the structures I was using. I found better (more up-to-date) definitions at https://vergiliusproject.com.

NativeScript, Code Sharing and different environments

Note: this is not a dupe of this or this other question. Read on: this question is specific to the Code-Sharing template.
I am doing some pretty basic experiments with NativeScript, Angular and the code-sharing templates (see: @nativescript/schematics).
Now I am doing some exploration/PoC work on how different "build configurations" are supported by the framework. To be clear, I am searching for a simple (and hopefully official) way to have the application use a different version of a specific file (let's call it configuration.ts) based on the current platform (web/iOS/Android) and environment (development/production/staging?).
Doing the first part is obviously trivial - after all, that is the prime purpose of the code-sharing schematics. Different versions of the same file are identified by different extensions. This page explains things pretty simply.
What I don't get as easily is if the framework/template supports any similar convention-based rule that can be used to switch between debug/release (or even better development/staging/production) versions of a file. Think for example of a config.ts file that contains different parameters based on the environment.
I have done some research on the topic, but I was unable to find a conclusive answer:
the old and now retired documentation for the AppBuilder platform mentions a (.debug. and .release.) naming convention for files. I don't think this works anymore.
other sources mention passing parameters in the call to tns build / tns run and then fetching them via a webpack env variable... See here. This may work, but seems oddly convoluted.
a third option that gets mentioned is to use hooks to customize the build (or a plugin that does the same)
lastly, for some odd reason, @nativescript/schematics seems to generate a default project that contains two files called environment.ts and environment.prod.ts. I suspect those only work for the web version of the project (read: ng serve) - I wasn't able to get the mobile compiler to recognize files that end with debug.ts, prod.ts or release.ts.
While it may be possible that what I am trying to do just isn't supported (yet?), the general confusion and dissenting opinions on the matter make me think I may be missing something... somewhere.
In case this IS somehow supported, I also wonder how it may integrate with the NativeScript Sidekick app that is often suggested as a tool to ease the build/run process of NativeScript applications (there is no way to specify additional parameters for the tns commands that Sidekick automates; the only options available are switching between debug/release mode), but this is probably better left for another question.
Environment files are not yet supported; passing environment variables from the build command could be a viable solution for now.
But of course, you may write your own schematics if you'd like immediate support for environment files.
I did not look into sharing environment files between web and mobile yet - I do like Manoj's suggestion regarding modifying the schematics, but I'll have to cross that bridge when I get there, I guess. I might have an answer to your second question regarding Sidekick: the latest version does support a "Webpack" build option, which seems to pass the --bundle parameter to tns. The caveat is that this option seems to be more sensitive to TypeScript errors, even relatively benign ones, so you have to be careful and make sure to fix them all prior to building. In my case I had to lock the version of @types/jasmine in package.json to "2.8.6" in order to avoid an incompatibility between it and the version of TypeScript that Sidekick's cloud solution is using. Another hint is to check "Clean Build" after npm dependency changes are made. Good luck!

Writing unit tests for Solr plugin using JUnit4, including creating collections

I wrote a plugin for Solr which contains new stream expressions.
Now I'm trying to understand the best way to write unit tests for them:
The unit tests need to include creating collections in Solr, so that I can check whether my new stream expressions return the data they're supposed to.
I've seen on the web that there is a class called "SolrTestCaseJ4", but I couldn't find out how to use it for creating new collections in Solr, adding data to them, and so on.
Can you please recommend which class I should use for that purpose, or any other way to test my new classes?
BTW, we are using Solr 7.1 in cloud mode and JUnit4.
Thanks in advance.
You could use MiniSolrCloudCluster.
Here is an example of how to create collections (all for unit tests):
https://github.com/lucidworks/solr-hadoop-common/blob/159cce044c1907e646c2644083096150d27c5fd2/solr-hadoop-testbase/src/main/java/com/lucidworks/hadoop/utils/SolrCloudClusterSupport.java#L132
Eventually I found a better class, which simplifies everything and provides more functionality than MiniSolrCloudCluster (in fact, it contains a MiniSolrCloudCluster as a member).
This class is called SolrCloudTestCase, and as you can see here, even Solr's own source code uses it for its unit tests.
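For illustration, a minimal JUnit4 sketch of a SolrCloudTestCase-based test might look like the one below. The collection name, configset path, and cluster size are assumptions made for this example; the configureCluster(...).addConfig(...).configure() builder and cluster.getSolrClient() are the standard SolrCloudTestCase setup in Solr 7.x.

import java.nio.file.Paths;

import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.client.solrj.request.CollectionAdminRequest;
import org.apache.solr.cloud.SolrCloudTestCase;
import org.apache.solr.common.SolrInputDocument;
import org.junit.BeforeClass;
import org.junit.Test;

// Minimal sketch: spins up an embedded SolrCloud cluster, creates a
// collection, indexes one document, and leaves room to exercise a
// custom stream expression. Names and paths are illustrative.
public class MyStreamExpressionTest extends SolrCloudTestCase {

    @BeforeClass
    public static void setupCluster() throws Exception {
        configureCluster(2) // two nodes are enough for a unit test
                .addConfig("conf", Paths.get("src/test/resources/configsets/default/conf"))
                .configure();
        CollectionAdminRequest.createCollection("test_collection", "conf", 2, 1)
                .process(cluster.getSolrClient());
    }

    @Test
    public void streamExpressionReturnsExpectedTuples() throws Exception {
        CloudSolrClient client = cluster.getSolrClient();
        SolrInputDocument doc = new SolrInputDocument();
        doc.addField("id", "1");
        client.add("test_collection", doc);
        client.commit("test_collection");
        // ... open your custom stream expression against the cluster's
        // ZooKeeper address (cluster.getZkServer().getZkAddress()) and
        // assert on the tuples it returns
    }
}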

Is it possible to log Selenium WebDriver test results to Quality Center?

I am looking for a way to integrate Selenium and QC to log results. Please help me with how this can be done. Is there any way to do this?
Thanks in advance
Rashmi
For simple storage of results, you can do what Roland recommended and just use the API to upload the results as an attachment to some entity of your choosing. To get a true "Run" record created in QC (like you get for Manual or QTP tests), you will need to have a Test in a Test Set that can be associated with the run results.
Perhaps the easiest option is to create a QTP/UFT wrapper test. This test will do nothing more than invoke your Selenium test, process the results, and then write those results back to QC using the standard 'Reporter' object.
Another, more complicated, approach is to look at creating a custom test type. This is an advanced topic, and you can refer to the QC documentation on the process.
I recommend the QTP wrapper for ease of implementation and flexibility.
You can use QC's REST API or its OTA API.
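For the REST route, a rough sketch is shown below. It only illustrates the shape of the call: the host, domain, project, and field names are placeholders, and a real client must first authenticate (e.g. against /qcbin/authentication-point/authenticate) and carry the returned session cookie, so check the REST API reference for your ALM/QC version before relying on any of this.

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

// Rough sketch only: posts a "run" entity to QC/ALM over REST.
// Host, domain, project, and field names are placeholders; real code
// must authenticate first and attach the session cookie it receives.
public class QcResultLogger {

    public static void logRun(String runName, String status) throws Exception {
        URL url = new URL("https://qc.example.com/qcbin/rest/domains/DEFAULT/projects/MyProject/runs");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/xml");
        // conn.setRequestProperty("Cookie", lwssoCookie); // obtained from the authentication step
        conn.setDoOutput(true);

        String body = "<Entity Type=\"run\"><Fields>"
                + "<Field Name=\"name\"><Value>" + runName + "</Value></Field>"
                + "<Field Name=\"status\"><Value>" + status + "</Value></Field>"
                + "</Fields></Entity>";
        try (OutputStream os = conn.getOutputStream()) {
            os.write(body.getBytes(StandardCharsets.UTF_8));
        }
        System.out.println("QC responded with HTTP " + conn.getResponseCode());
    }
}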

Checkstyle and Findbugs for changed files only on Jenkins (and/or Hudson)

We work with a lot of legacy code and are thinking about introducing some metrics for new code. Is it possible to have FindBugs and Checkstyle run on changed files only, instead of the complete project?
It would be nice to ensure that only files with a minimum level of quality are checked in, while the existing code base is not (yet) touched or evaluated, so as not to confuse people with thousands of issues.
In theory, it would be possible. You would use a shell script to parse the SVN (or whatever SCM) change logs after a given start date, identify the .java files in those change sets, and build two parameter values from them:
The FindBugs Maven Plugin expects a comma-separated list of class (or package) names for the parameter onlyAnalyze, so you'll have to translate file names to fully qualified class names (this gets tricky when you're dealing with inner classes); a hedged sketch of that translation appears below.
The Maven Checkstyle Plugin is even worse: it expects a configuration file for its packageNamesLocation parameter. Unfortunately, only packages are allowed, not individual files, so you'll have to translate file names to packages.
In the above examples I assume that you are using Maven. I am pretty sure that similar things can be done with Ant, but I wouldn't know the details.
I myself would probably use a Groovy script instead of a shell script to achieve the above results.
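As an illustration of the onlyAnalyze translation mentioned above, here is a hedged sketch (in Java rather than Groovy, but the idea is the same). The input list and the standard Maven source-root convention are assumptions, and it deliberately ignores the inner-class complication:

import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

// Hedged sketch: turns SCM change-log paths (standard Maven layout assumed)
// into the comma-separated class list that the FindBugs Maven Plugin's
// onlyAnalyze parameter expects. The input list is made up for illustration.
public class ChangedClassLister {

    static final String SOURCE_ROOT = "src/main/java/";

    static String toClassName(String path) {
        String p = path.replace('\\', '/');
        int idx = p.indexOf(SOURCE_ROOT);
        if (idx < 0 || !p.endsWith(".java")) {
            return null; // not a main-source Java file
        }
        p = p.substring(idx + SOURCE_ROOT.length(), p.length() - ".java".length());
        return p.replace('/', '.');
    }

    public static void main(String[] args) {
        List<String> changedFiles = Arrays.asList(
                "src/main/java/com/example/Foo.java",
                "src/main/java/com/example/util/Bar.java");
        String onlyAnalyze = changedFiles.stream()
                .map(ChangedClassLister::toClassName)
                .filter(name -> name != null)
                .collect(Collectors.joining(","));
        // Pass the result to the build, e.g. -Dfindbugs.onlyAnalyze=com.example.Foo,com.example.util.Bar
        System.out.println(onlyAnalyze);
    }
}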
FindBugs also has Ant tasks that can diff different FindBugs result files and report just the deltas, i.e. only the new bugs; see:
http://findbugs.sourceforge.net/manual/datamining.html