Maintaining test users in Cucumber steps - CSV

In my tests I have to work with different types of users and environments. At the moment I am updating the users manually, since we don't have many features. However, we will be adding many new features, which will make it very difficult to update all the files manually. Most of these users are needed in the Given step. Example:
Scenario:
Given I am signed in as "user1@example.com"
I would like to change this to:
Scenario:
Given I am signed in as "user1"
"user1" could stored in a csv file or in a db. Can either of these be done? If so which is the recommended method?
The CSV file would have something like:
user1,user1@example.com
user2,user2@example.com
user3,user3@example.com
A table in a db:
| id | user | email |
| 1 | user1 | user1@example.com |
| 2 | user2 | user2@example.com |
It seems using the DB might be easier to maintain, if it can be done. As always, your help is appreciated.
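To illustrate, the step definition I have in mind would be something like this (a sketch only; sign_in_with stands in for whatever sign-in helper we already use, and the file names are illustrative):

# features/support/user_lookup.rb -- illustrative names throughout
require 'csv'

# Build { "user1" => "user1@example.com", ... } from alias,email rows
TEST_USERS = Hash[CSV.read(File.expand_path('../test_users.csv', __FILE__))]

Given(/^I am signed in as "([^"]*)"$/) do |user_alias|
  email = TEST_USERS.fetch(user_alias)
  sign_in_with(email) # assumed existing sign-in helper
end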

The usual way to abstract test case details in Cucumber is through the use of "Scenario Outlines":
https://github.com/cucumber/cucumber/wiki/Scenario-Outlines
Using a Scenario Outline is equivalent to storing test case data in a CSV file, but it has the advantage of keeping the test case info right there in the .feature file.
If you follow this convention, all parts of the test workflow can be edited in the same place, which actually makes maintenance of the test cases easier than if the test outline and the individual test cases are split across separate text files (or between a .feature file and a database instance).
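For example, the original scenario could be rewritten as a sketch like this, with the Examples table playing the role of the CSV file:

Scenario Outline: Signed in as a known test user
  Given I am signed in as "<email>"

  Examples:
    | email             |
    | user1@example.com |
    | user2@example.com |
    | user3@example.com |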

Related

Complex Mail Merge (CSV to Word, CSV to PDF, or Other)

QUESTION:
How do you write an IF statement for Word or for PDF to calculate multiple rows per matching result?
USAGE:
What I am trying to do seems fairly straightforward, and it was very easy when I was able to use MS Access 15 years ago; with Access no longer a possibility, I am hoping somebody has a reasonable solution.
The WHAT:
I am trying to generate Statements/Invoices from a CSV (or spreadsheet of any format) into a nice report layout. Let's say the columns look like this:
First Name | Last Name | Account | Address | Item | Description | Item Total
Jane | Smith | 123 | 111 Main St | Ice Cream | it's really cold | $100.00
This is super easy; I can do it in Word within 10 minutes and make it "pretty".
BUT what if there are multiple Items per invoice?
So maybe the CSV looks like:
First Name | Last Name | Account | Address | Item | Description | Item Total
Jane | Smith | 123 | 111 Main St | Ice Cream | it's really cold | $100.00
Jane | Smith | 123 | 111 Main St | Hot Dogs | all beef, all the time | $200.00
I still want there to be only one invoice per person, but I'm not sure how to write an IF statement in Word that says "if there are multiple items per person, put them on a new row, then total them all together".
I would be glad to have the CSV go into a PDF fillable form if I could get the multiple rows to work - I just cannot figure that portion out.
Other options: I looked at OpenOffice "Base" but couldn't get a nice form for a very custom report. I briefly researched how to do something like this on AWS, but without any luck. I don't think Microsoft has anything like Access anymore.
You can use Word's Catalogue/Directory Mailmerge facility for this (the terminology depends on the Word version). To see how to do so with any mailmerge data source supported by Word, check out my Microsoft Word Catalogue/Directory Mailmerge Tutorial at:
http://www.msofficeforums.com/mail-merge/38721-microsoft-word-catalogue-directory-mailmerge-tutorial.html
or:
http://www.gmayor.com/Zips/Catalogue%20Mailmerge.zip
The tutorial covers everything from list creation to the insertion & calculation of values in multi-record tables in letters. Do read the tutorial before trying to use the mailmerge document included with it.
Depending on what you're trying to achieve, the field coding for this can be complex. However, since the tutorial document includes working field codes for all of its examples, most of the hard work has already been done for you - you should need to do little more than copy/paste the relevant field codes into your own mailmerge main document, substitute/insert your own field names and adjust the formatting to get the results you desire. For some worked examples, see the attachments to the posts at:
http://www.msofficeforums.com/mail-merge/9180-mail-merge-duplicate-names-but-different-dollar.html#post23345
http://www.msofficeforums.com/mail-merge/11436-access-word-creating-list-multiple-records.html#post30327
Another option would be to use a DATABASE field in a normal ‘letter’ mailmerge main document and a macro to drive the process. An outline of this approach can be found at: http://answers.microsoft.com/en-us/office/forum/office_2010-word/many-to-one-email-merge-using-tables/8bce1798-fbe8-41f9-a121-1996c14dca5d
Conversely, if you're using a relational database or an Excel workbook with a separate table containing just a single instance of each of the grouping criteria, a DATABASE field in a normal ‘letter’ mailmerge main document could be used without the need for a macro. An outline of this approach can be found at:
https://answers.microsoft.com/en-us/msoffice/forum/msoffice_word-mso_winother-mso_2010/mail-merge-to-a-word-table-on-a-single-page/4edb4654-27e0-47d2-bd5f-8642e46fa103
For a working example, see:
http://www.msofficeforums.com/mail-merge/37844-mail-merge-using-one-excel-file-multiple.html
The problem with the DATABASE field, though, is that it won't provide the totals you're after. Nevertheless, if you're going down the macro route, it wouldn't take too much more code to append a totals row to the resulting table.
Alternatively, you may want to try one of the Many-to-One Mail Merge add-ins, from:
Graham Mayor at http://www.gmayor.com/ManyToOne.htm; or
Doug Robbins at https://onedrive.live.com/?cid=5AEDCB43615E886B&id=5AEDCB43615E886B!566
PS: While I'm cognisant of StackOverflow's preference for the substance of answers to be posted here rather than linked to, the complexity in this case is far too great to deal with that way, besides which, one can't post the actual field codes or a document containing them here.

Creating SQL Table layout for dynamic document

I apologize if this question is vague, but I'll try to be as clear as possible. I've been given a task where I'm to take a text file, store its content in SQL Server 2008, and automate the creation of a form letter given certain inputs. I've been able to break it into the following generic structure (pay no attention to the content, it's just generic text, but the situational breakdown is similar):
Welcome [User],
[if @purchase = true, add this paragraph]
Thank you for purchasing the [device / subscription / subscription and device]
from this business on [date].
[if @purchase = true and @return = true, add this paragraph]
I'm sorry you returned it!
...
Signed,
[Author]
[Author Image]
Assuming I'm already able to bring in all the necessary variables (user, purchase, return, date, device or device and subscription or subscription only), how should I go about storing the letter pieces in SQL? Would it be considered fine to have a structure like the following:
+-------+-----------------+----------+--------+
| Order | Text | purchase | return |
+-------+-----------------+----------+--------+
| 1 | (1st paragraph) | TRUE | null |
| 2 | (2nd paragraph) | TRUE | FALSE |
+-------+-----------------+----------+--------+
Where I store the contents of the first paragraph as:
Thank you for purchasing the [device / subscription / subscription and device]
from this business on [date].
And then write a stored procedure to piece it together based on the Boolean columns, and find/replace the bracketed bits with input variables to output the entire letter as a string? It doesn't seem like it would handle much variability, to be honest. Maybe break the document down into paragraph and sentence tables?
My ultimate goal would be to output this to either a report I create or, perhaps more ideally, to a Word document (though this is probably a whole different bit of research). Am I way off base here? Any insight is helpful.
You can use REPLACE in a SELECT statement, for example:

SELECT REPLACE(REPLACE([Text], 'device', @deviceVariable), 'subscription', @subscriptionVariable) FROM [Order]
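Building on that idea, here is a minimal sketch of assembling the whole letter (assuming a LetterPieces table shaped like the one in the question; table, column and variable names are illustrative):

-- Hypothetical names throughout; LetterPieces holds the paragraph rows
DECLARE @purchase BIT = 1, @return BIT = 0;
DECLARE @product NVARCHAR(100) = N'subscription and device';
DECLARE @date NVARCHAR(20) = N'10/15/2013';
DECLARE @letter NVARCHAR(MAX) = N'';

-- Keep only paragraphs whose flags match (NULL meaning "don't care"),
-- substitute the bracketed placeholders, and concatenate in order
SELECT @letter = @letter +
       REPLACE(REPLACE([Text],
           '[device / subscription / subscription and device]', @product),
           '[date]', @date) + CHAR(13) + CHAR(10)
FROM LetterPieces
WHERE (purchase IS NULL OR purchase = @purchase)
  AND ([return] IS NULL OR [return] = @return)
ORDER BY [Order];

SELECT @letter AS Letter;

Note that the @letter = @letter + ... concatenation relies on behaviour SQL Server does not strictly guarantee when combined with ORDER BY, so treat this as a sketch rather than production code.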

Creating / Appending a Flat File Destination based on date

The Backstory:
I have a process that loads physician demographic data into our system. This data can come in at any time and at any interval between updates. The data is what we call "Term-by-Exclusion", meaning that the source file takes precedence, and any physician record in the db that is not in the source file is marked as "Termed" or Inactive.
The Problem:
I need to be able to output the source data into a flat file destination as a daily report to a companion COBOL system. The source data is loaded into an ETL.PhysicianLoad table prior to processing, and the ETL table is wiped prior to each new processing transaction, so retaining a full day's records is not possible as it stands now, without the output file.
Example: ProcessOutput_10152013.txt
The output file ideally needs to be comprehensive of the entire day's processing. Meaning I want to continuously append to that day's file until the end of that day, then email a notification stating the file is ready for pickup. Any data that comes in after the turn of the day should then be placed in a newly created file.
Output should look like this (no headers)
BatchID | LastName | FirstName | MiddleInitial | Date
0001 | Smith | John | A | 10/15/13
0001 | Smith | Sue | R | 10/15/13
0001 | Zeller | Frank | L | 10/15/13
0002 | Peters | Paula | D | 10/15/13
0002 | Rivers | Patrick | E | 10/15/13
0002 | Waters | Oliver | G | 10/15/13
What I am thinking:
I am thinking about using a CurrentDate variable that holds the current date, compared against an expression-based variable called FileName, which concatenates the current mmddyyyy into "ProcessOutput_.txt". My thinking is that I should be able to locate a file with that name in the destination folder, and if it exists, I should be able to write to it. Otherwise I will have to create a new file. I can then set my Flat File Destination via an expression to the FileName variable.
Can anyone see a better way of doing this or any issues that may arise from this solution I am not seeing?
My thought process was in the right place, but flawed.
Here is how I solved the problem.
After trying to build my control/data flows using the logic in the original question, I discovered that I was working myself into a corner.
So that got me thinking again: how can I do this the easiest possible way?
First, do I have the correct variables defined? No...
CurrentDate - has to be there to define the date portion of the file name.
FileName - has to be present for obvious reasons.
So what did I miss?
FileExists (Type: boolean) - Something that will identify the existence of the file.
PlaceholderFile (Type: String) - Generic FileName Variable
Now what to do with it?
Add a VB Script Task to the control flow that sets the FileExists flag:
' Requires Imports System.IO at the top of the Script Task
' Check to see if ProspectivePhysician_<currentdate>.txt exists.
Dts.Variables("User::FileExists").Value = File.Exists(Dts.Variables("User::FileName").Value.ToString())
Now that we have the existence of the destination file defined, create the data flow from the source table, checking the FileExists variable in a Conditional Split and separating the data flow into two branches. Create two Flat File Destinations called "Existing" and "New", pointing both at the same flat file location for the time being.
If you attempt to run the package at this point, you will receive Validation Errors from one of the two destinations, as the first is holding ownership of the file and will not allow the second to validate the file.
How to fix this? Use expressions to swap the actual FileName value back and forth.
For the Existing Flat File Connection String Value, use the following Expression:
@[User::FileExists] == TRUE ? @[User::FileName] : @[User::PlaceholderFile]
For the New Flat File Connection String value, use the following Expression:
@[User::FileExists] == TRUE ? @[User::PlaceholderFile] : @[User::FileName]
Finally, right-click each of the Flat File Destination objects in the Data Flow and set the Overwrite property to True on the New Flat File Destination and False on the Existing destination. This ensures that the append action is used on the existing file.
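For completeness, the FileName variable's expression might look something like this (a sketch only; the folder path is illustrative):

"C:\\Output\\ProcessOutput_" + RIGHT("0" + (DT_WSTR, 2) MONTH(GETDATE()), 2) + RIGHT("0" + (DT_WSTR, 2) DAY(GETDATE()), 2) + (DT_WSTR, 4) YEAR(GETDATE()) + ".txt"

Evaluated each run, this produces a new name at the turn of each day, which is what drives the "append during the day, create a new file tomorrow" behaviour.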

Is it more performant to have rows or columns in SQL?

If I have to save many strings that are related and that may be divided among different languages: what's the best way to do it?
I think I have the following options. Options 1 and 3 are the clearest solutions to me. They have more columns, but result in fewer rows.
Options 2 and 4 are the most flexible ones (I could dynamically add a new string_x without changing the database). They have only three columns, but they will result in many rows.
Option 5 would result in many tables.
Option 1:
id | string_1 | string_2 | string_3 | string_4 | ... | string_n | lang
Option 2 (where name would be string_1 or string_2, etc.)
id | name | lang
Option 3
id | string_1 | string_2 | string_3 | string_4 | ... | string_n
id | lang | stringid
Option 4
id | lang | stringid
id | name
Option 5
id | string_1 | lang
id | string_2 | lang
id | ... | lang
I'm using it to store precached html values for multiple views (one line view, two lines, long description, etc.), if this is of interest.
Options 1 and 3 are not recommended, as you end up with the language (which is data) in the field name. You have to change the database design if you want to add another language.
Option 5 is not recommended, as you end up with the string identifier (which is data) in the table name. You have to change the database design if you want to add another string.
Option 2 or 4 would work fine. Option 4 is more normalised, as you don't have duplicate string names, but option 2 might be easier to work with if you enter values directly into the table view.
Having many rows in a table is not a problem; that's what the database system is built for.
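To illustrate, a minimal sketch of option 4 in SQL (one reading of it, with an explicit column added for the translated text; all names are illustrative):

-- One row per string name, one row per (string, language) translation
CREATE TABLE string_name (
    id   INT PRIMARY KEY,
    name VARCHAR(100) NOT NULL    -- e.g. 'welcome_message'
);

CREATE TABLE translation (
    stringid INT     NOT NULL REFERENCES string_name(id),
    lang     CHAR(2) NOT NULL,    -- e.g. 'en', 'fr'
    value    TEXT    NOT NULL,    -- the translated text itself
    PRIMARY KEY (stringid, lang)
);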
Although I've not had to deal specifically with multi-language interfaces: if translation is all its purpose is, I would go with option 1, but swapped; something like:
id | English | French | German | Spanish | ...
So you would basically have a master column (such as English) as a "primary" word that is always populated, then as available, the other language columns get filled in. This way, you can keep adding as many "words" as you need, and if they get populated across all the different languages, so be it... If not, you still have a "primary" value that could be used.
It depends on a lot of other things. First of all, how many strings could there be? How many languages could there be? To simplify things, let's say that if either of those numbers is greater than 5, then options 1 and 3 are infeasible.
Before I go any further, you should definitely look into implementing multi-language functionality outside of the database. In PHP you can use Gettext and put your translation data in flat files. This is a better idea for multiple reasons, the main ones being performance and ease of use with external translators.
If you absolutely must do this in a database then you should use a table structure similar to this:
id | string | language
An example entry would be:
welcome_message | Hello, World! | english
Which I think you've described in Option 2. To clarify, depending on the amount of different languages and different strings, you should use a single table with a fixed number of fields.
If you support only a few languages, you might also consider a schema in which each language is its own column:
ID | EN | ES | FR | ...
This is less normalised than your option 4, but it is very easy to work with. We have built our database translations like this. As we develop code, we create string resources and fill in the English text. Later, a translator fills in the strings for their language.
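A sketch of that column-per-language layout (names are illustrative; English acts as the always-populated master column):

CREATE TABLE string_resource (
    id VARCHAR(50) PRIMARY KEY,   -- e.g. 'welcome_message'
    en TEXT NOT NULL,             -- master text, always filled in
    es TEXT NULL,                 -- NULL until a translator supplies it
    fr TEXT NULL
);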

BDD with Cucumber and MySQL — auto increment issues

I am writing some Cucumber features for my RoR app that insert records into the database then send a query to my XML API. Because of the nature of my requests (hardcoded XML) I need to know what the ID of a row is going to be. Here is my Scenario:
Scenario: Client requests call info
Given There is a call like:
| id | caller_phone_number |
| 1 | 3103937123 |
When I head over to call info
And Post this XML:
"""
<?xml version="1.0" encoding="UTF-8"?>
<request-call-info>
<project-code>1000000001</project-code>
</request-call-info>
"""
Then The call info should match
And The status code should be 0
I've got Cuke set up with my _test database, and I also noticed that it isn't resetting all of the tables prior to running my features.
What is the right way to set this up? Thanks!
Firstly, forgive me as this is going to be a bit of a brain dump, but hopefully it should help or at least give you some ideas:
You could rewrite your scenario like this:
Scenario: Client requests call info
Given There is a call with the phone number "3102320"
When I post "request-call-info" xml to "call info" for the phone number "3102320"
Then the call info for phone number "3102320" should match
And the status code for phone number "3102320" should be 0
This way you can refer to the record by an attribute that isn't the primary key.
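A matching step definition might look something like this (a sketch assuming an ActiveRecord Call model; names are illustrative):

# features/step_definitions/call_steps.rb -- illustrative sketch
Given(/^There is a call with the phone number "([^"]*)"$/) do |phone_number|
  Call.create!(:caller_phone_number => phone_number)
end

Then(/^the call info for phone number "([^"]*)" should match$/) do |phone_number|
  call = Call.find_by_caller_phone_number(phone_number) # look up by attribute, not by id
  # ...assert the XML response against this record here...
end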
Are you using fixtures? If so, you can set the ID for the record there explicitly.
Depending on your application, you might be able to run your tests using an in-memory SQLite3 database.
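If the in-memory route is viable for you, the test entry in config/database.yml would be along these lines (a sketch; whether it works for you depends on your Rails version and on how much MySQL-specific SQL your app relies on):

test:
  adapter: sqlite3
  database: ":memory:"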