JMeter - Specify CSV row failure

Within JMeter, I am running a script which uses a .CSV file to enter data as well as verify results. It is working correctly, but I cannot figure out how to tell which row/line of the .CSV caused the individual failures. Is there a way to do this?
Somewhat of an example scenario (not specific to what I'm doing, but similar):
Each row of the .CSV file contains a mathematical equation as well as the expected result.
On page 1, enter the equation (2+2)
Then on Page 2, you get the response: 3.
That test would obviously be a failure.
Say there are 1,000 tests being run, some that pass and some that do not. How can I tell which .CSV row/line didn't pass?

Do you have any columns in your CSV file which help you to uniquely identify a row?
Let me assume you have a column called 'TestCaseNo' with values like TC001, TC002, TC003, etc.
Add a Beanshell Post Processor to store the result for each iteration.
Add the code below. I assume you have the PASS or FAIL result stored in the 'Result' variable.
import java.io.FileOutputStream;
import java.io.PrintStream;

// append this iteration's test case number and result to a status file
FileOutputStream f = new FileOutputStream("somepath/tcstatus.csv", true); // true = append mode
PrintStream p = new PrintStream(f);
p.println(vars.get("TestCaseNo") + "," + vars.get("Result"));
p.close();
f.close();
The above code appends a line with the result of each test case to a CSV file.
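For illustration, after a run the file might contain lines like these (using the TestCaseNo values assumed above):
TC001,PASS
TC002,FAIL
TC003,PASS
A quick scan, or a filter on FAIL, then shows exactly which CSV rows did not pass.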
EDIT:
If the PASS or FAIL result is not already available in a variable, do the assertion yourself in the Beanshell Post Processor:
import java.io.FileOutputStream;
import java.io.PrintStream;

// do the assertion manually: PASS only if the response contains the expected text
String Result = "FAIL";
String Response = prev.getResponseDataAsString();
if (Response.contains("value")) { // replace "value" with the expected text
    Result = "PASS";
}

FileOutputStream f = new FileOutputStream("somepath/tcstatus.csv", true); // true = append mode
PrintStream p = new PrintStream(f);
p.println(vars.get("TestCaseNo") + "," + Result);
p.close();
f.close();

I would use the following approach:
__CSVRead() function - to get data from the .csv file.
__counter() function - to track the CSV file position. You can include the counter variable in the Sampler's label so the current .csv line is reported with each result; see the sketch below for an example.
For more information on these and other useful JMeter functions, see the How to Use JMeter Functions post series.
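As a rough sketch (assuming a hypothetical equations.csv whose first column holds the equation and whose second column holds the expected result), the Sampler could be set up like this:
Sampler label: Equation row ${__counter(TRUE,csvRow)}
Request parameter: ${__CSVRead(equations.csv,0)}
Response Assertion pattern: ${__CSVRead(equations.csv,1)}${__CSVRead(equations.csv,next)}
Each result then appears in the listener as "Equation row 1", "Equation row 2", and so on, so a failed sample points directly at the CSV line that produced it.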

Related

Lua - Match pattern for CSV import to array, that factors in empty values (two commas next to each other)

I have been using the following Lua code for a while to do simple CSV-to-array conversions. Previously everything had a value in every column, but this time, on a CSV-formatted bank statement, there are empty values, which this code does not handle.
Here’s an example CSV, with debits and credits.
Transaction Date,Transaction Type,Sort Code,Account Number,Transaction Description,Debit Amount,Credit Amount,Balance
05/04/2022,DD,'11-70-79,6033606,Refund,,10.00,159.57
05/04/2022,DEB,'11-70-79,6033606,Henry Ltd,30.00,,149.57
05/04/2022,SO,'11-70-79,6033606,NEIL PARKS,20.00,,179.57
01/04/2022,FPO,'11-70-79,6033606,MORTON GREEN,336.00,,199.57
01/04/2022,DD,'11-70-79,6033606,WORK SALARY,,100.00,435.57
01/04/2022,DD,'11-70-79,6033606,MERE BC,183.63,,535.57
01/04/2022,DD,'11-70-79,6033606,ABC LIFE,54.39,,719.20
I’ve tried different patterns (https://www.lua.org/pil/20.2.html), but none seem to work, and I’m beginning to think I can’t fix this via the pattern without breaking how it works for the rest. I’d appreciate it if anyone can share how they would approach this…
local csvfilename = "/mnt/nas/Fireflyiii.csv"
local MATCH_PATTERN = "[^,]+" -- one or more non-comma characters, so empty fields are skipped

local function create_array_from_file(csvfilename)
    local file = assert(io.open(csvfilename, "r"))
    local arr = {}
    for line in file:lines() do
        local row = {}
        for match in string.gmatch(line, MATCH_PATTERN) do
            table.insert(row, match)
        end
        table.insert(arr, row)
    end
    return arr
end
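One commonly suggested workaround (a sketch only; the split_line helper below is an illustration, not a drop-in replacement for create_array_from_file) is to append a trailing comma to each line and capture the possibly-empty text before every comma, so that two adjacent commas yield an empty string instead of being skipped:
local MATCH_PATTERN = "([^,]*)," -- possibly-empty field text followed by a comma

local function split_line(line)
    local row = {}
    -- the appended comma ensures the last field is also followed by one and gets captured
    for field in string.gmatch(line .. ",", MATCH_PATTERN) do
        table.insert(row, field)
    end
    return row
end
With the bank statement sample above, the empty Debit/Credit cells come through as empty strings; note this still does not handle quoted fields that themselves contain commas.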

azure ADF - Get field list of .csv file from lookup activity

Context: Azure ADF. Brief process description:
Get a list of the fields defined in the first row of a .csv (blob) file. This is the first step: detect fields.
The 2nd step would be a comparison against the actual columns of an SQL table,
and the 3rd a stored procedure execution to perform the ALTER TABLE task, finishing with a (customized) table containing all fields needed to successfully load the .csv file into the SQL table.
To begin my ADF pipeline, I set up a Lookup activity that queries the first line of my blob file, with the "First row only" flag = ON. As a second pipeline activity, an "Append Variable" task, I would like to get all .csv fields (first row) retrieved from the Lookup activity, as a list.
This is where it becomes a nightmare.
As far as I know, with dynamic content I can get an object with all values (in a format like {"field1_name":"field1_value_1st_row", "field2_name":"field2_value_1st_row", etc })
with something like #activity('Lookup1').output.firstrow.
Or any array element with #activity('Lookup1').output.firstrow.<element_name>,
but I can't figure out how to get a list of all field names (keys?) of the array.
I will appreciate any advice, many thanks!
I would keep the Lookup activity part, since it seems that you are familiar with it.
You could use an Azure Function (HTTP trigger) to get the key list of the firstrow JSON object. For example, your JSON object looks like this, as you mentioned in your question:
{"field1_name":"field1_value_1st_row", "field2_name":"field2_value_1st_row"}
Azure Function code:
module.exports = async function (context, req) {
    context.log('JavaScript HTTP trigger function processed a request.');
    // collect the property names (keys) of the JSON object posted in the request body
    var array = [];
    for (var key in req.body) {
        array.push(key);
    }
    context.res = {
        body: { "keyValue": array }
    };
};
Then use Azure Function Activity to get the output:
#activity('<AzureFunctionActivityName>').keyValue
Use Foreach Activity to loop the keyValue array:
#item()
Still based on the above sample input data, please refer to my sample code (the same key-extraction logic, illustrated in Python):
dct = {"field1_name": "field1_value_1st_row", "field2_name": "field2_value_1st_row"}
keys = []
for key in dct.keys():
    keys.append(key)
print(keys)
dicOutput = {"keys": keys}
print(dicOutput)
Have you considered doing this in ADF data flow? You would map the incoming fields to a SQL dataset without a target schema. Define a new table name in the dataset definition and then map the incoming fields from your CSV to a new target table schema definition. ADF will write the rows to a new table using that file's schema.

SSIS flat file with values containing text qualifier

I received a flat file that cannot be generated any other way. The delimiter is a comma and the text qualifier is a double quote. The problem is that sometimes I have a double quote inside the value. For example:
"0","12345", "Centre d"edu et de recherche", "B8E7"
Because of the double quote in the value, I received this error:
[Flat File Source [58]] Error: The column delimiter for column "XYZ" was not found.
[Flat File Source [58]] Error: An error occurred while processing file "C:\somefile.csv" on data row 296.
What can I do to process this file?
I use SSIS 2016 with Visual Studio 2015
You can use the Flat File Source error output to redirect bad rows to another flat file and correct the values manually, while all valid rows are still processed.
There are many links online to learn more about Flat File Source Error Output:
Flat File source Error Output connection in SSIS
How to Avoid Package Design Flaws When Sourcing Data From Flat Files
Flat File Source Editor (Error Output Page)
Update 1 - Workaround using Script Component and conditional split
Since the Flat File error output is not working, you can use a Script Component with a Conditional Split to filter out bad rows. The following update is a step-by-step guide to implement that:
Add a Flat File connection manager, go to the Advanced tab, delete all columns except one and change its length to 4000.
Add a Script Component, go to the Inputs and Outputs tab, add the desired output columns (in this example 4 columns) and add a Flag column of type DT_BOOL.
Inside the Script Component, write the following script: if the number of columns is 4, set Flag to true (a valid row); otherwise set Flag to false (a bad row):
[Microsoft.SqlServer.Dts.Pipeline.SSISScriptComponentEntryPointAttribute]
public class ScriptMain : UserComponent
{
    public override void Input0_ProcessInputRow(Input0Buffer Row)
    {
        if (!Row.Column0_IsNull && !String.IsNullOrWhiteSpace(Row.Column0))
        {
            // split on the "," sequence that separates qualified values
            string[] cells = Row.Column0.Split(new string[] { "\",\"" }, StringSplitOptions.None);

            if (cells.Length == 4)
            {
                Row.Col1 = cells[0].TrimStart('\"');
                Row.Col2 = cells[1];
                Row.Col3 = cells[2];
                Row.Col4 = cells[3].TrimEnd('\"');
                Row.Flag = true;
            }
            else
            {
                Row.Flag = false;
            }
        }
        else
        {
            Row.Col1_IsNull = true;
            Row.Col2_IsNull = true;
            Row.Col3_IsNull = true;
            Row.Col4_IsNull = true;
            Row.Flag = true;
        }
    }
}
Add a Conditional Split to separate rows based on the Flag column (for example, an expression like Flag == TRUE for the valid-rows output).
Map the valid-rows output to the OLE DB Destination, and the bad-rows output to another flat file where you only map Column0.

Export Datatables from Spotfire to CSV using IronPython Script

I have an IronPython script I use to export all my data tables from a Spotfire project.
Currently it works perfectly. It loops through all data tables and exports them as ".xlsx". Now I need to export the files as ".csv", which I thought would be as simple as changing ".xlsx" to ".csv".
This script still exports the files and names them all .csv, but what is inside each file is still a .xlsx; I'm not sure how or why. The code just changes the file extension but does not convert the file to CSV.
Here is the code I am currently using:
I have posted the full code at the bottom, and the code I believe is relevant to my question in a separate code block at the top.
if(dialogResult == DialogResult.Yes):
    for d in tableList: #cycles through the table list elements defined above
        writer = Document.Data.CreateDataWriter(DataWriterTypeIdentifiers.ExcelXlsDataWriter)
        table = Document.Data.Tables[d[0]] #d[0] is the Data Table name in the Spotfire project (defined above)
        filtered = Document.ActiveFilteringSelectionReference.GetSelection(table).AsIndexSet() #OR pass the filter
        stream = File.OpenWrite(savePath+'\\'+ d[1] +".csv") #d[1] is the Excel alias name. You could also use d.Name to export with the Data Table name
        names = []
        for col in table.Columns:
            names.append(col.Name)
        writer.Write(stream, table, filtered, names)
        stream.Close()
I think it may have to do with the ExcelXlsDataWriter.
I tried with ExcelXlsxDataWriter as well. Is there a csv writer I could use for this? I believe csv and txt files have a different writer.
Any help is appreciated.
Full script shown below:
import System
import clr
import sys
clr.AddReference("System.Windows.Forms")
from sys import exit
from System.Windows.Forms import FolderBrowserDialog, MessageBox, MessageBoxButtons, DialogResult
from Spotfire.Dxp.Data.Export import DataWriterTypeIdentifiers
from System.IO import File, FileStream, FileMode

#This is a list of Data Tables and their export file names. You can see each referenced below as d[0] and d[1] respectively.
tableList = [
    ["TestTable1", "TestTable1"],
    ["TestTable2", "TestTable2"],
]

#imports the location of the file so that there is a default place to put the exports.
from Spotfire.Dxp.Application import DocumentMetadata
dmd = Application.DocumentMetadata #Get MetaData
path = str(dmd.LoadedFromFileName) #Get Path
savePath = '\\'.join(path.split('\\')[0:-1]) + "\\DataExports\\"

dialogResult = MessageBox.Show("The files will be saved to "+savePath
    +". Do you want to change location?"
    , "Select the save location", MessageBoxButtons.YesNo)
if(dialogResult == DialogResult.Yes):
    # GETS THE FILE PATH FROM THE USER THROUGH A FILE DIALOG instead of using the file location
    SaveFile = FolderBrowserDialog()
    SaveFile.ShowDialog()
    savePath = SaveFile.SelectedPath

#message making sure that the user wants to export the files.
dialogResult = MessageBox.Show("Export Files."
    +" Export Files","Are you sure?", MessageBoxButtons.YesNo)
if(dialogResult == DialogResult.Yes):
    for d in tableList: #cycles through the table list elements defined above
        writer = Document.Data.CreateDataWriter(DataWriterTypeIdentifiers.ExcelXlsDataWriter)
        table = Document.Data.Tables[d[0]] #d[0] is the Data Table name in the Spotfire project (defined above)
        filtered = Document.ActiveFilteringSelectionReference.GetSelection(table).AsIndexSet() #OR pass the filter
        stream = File.OpenWrite(savePath+'\\'+ d[1] +".csv") #d[1] is the export file name. You could also use the Data Table name
        names = []
        for col in table.Columns:
            names.append(col.Name)
        writer.Write(stream, table, filtered, names)
        stream.Close()
#if the user doesn't want to export then he just gets a message
else:
    dialogResult = MessageBox.Show("ok.")
For some reason the Spotfire IronPython implementation does not support the csv module implemented in Python.
The workaround I found for your implementation is using StdfDataWriter instead of ExcelXlsDataWriter. STDF is the Spotfire Text Data Format. The DataWriter class in Spotfire supports only Excel and STDF, and STDF comes closest to CSV.
from System.IO import File
from Spotfire.Dxp.Data.Export import DataWriterTypeIdentifiers

writer = Document.Data.CreateDataWriter(DataWriterTypeIdentifiers.StdfDataWriter)
table = Document.Data.Tables['DropDownSelectors']
filtered = Document.ActiveFilteringSelectionReference.GetSelection(table).AsIndexSet()
stream = File.OpenWrite(r"C:\Users\KLM68651\Documents\dropdownexport.stdf")
names = []
for col in table.Columns:
    names.append(col.Name)
writer.Write(stream, table, filtered, names)
stream.Close()
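Applied to the loop in the question, the change would look roughly like this (a sketch only; it reuses the tableList, savePath and dialogResult variables from the question's full script and writes .stdf files instead of .csv):
if(dialogResult == DialogResult.Yes):
    for d in tableList:
        # StdfDataWriter produces Spotfire Text Data Format, a plain-text format
        writer = Document.Data.CreateDataWriter(DataWriterTypeIdentifiers.StdfDataWriter)
        table = Document.Data.Tables[d[0]]
        filtered = Document.ActiveFilteringSelectionReference.GetSelection(table).AsIndexSet()
        stream = File.OpenWrite(savePath + '\\' + d[1] + ".stdf")
        names = [col.Name for col in table.Columns]
        writer.Write(stream, table, filtered, names)
        stream.Close()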
Hope this helps

How to import comma delimited text file into datawindow (powerbuilder 11.5)

Hi, good day. I'm very new to PowerBuilder and I'm using PB 11.5.
Does someone know how to import a comma-delimited text file into a DataWindow?
Example text file:
"1234","20141011","Juan, Delacruz","Usa","001992345456"...
"12345","20141011","Arc, Ino","Newyork","005765753256"...
How can I import the third column, which is the full name, and the last column, which is the account number? I want to transfer the name and account number into my external DataWindow. I've tried to use ImportString() (all the rows end up in one column only). I have three fields in my external DataWindow, among them the Name and Account number.
Here's the code:
ls_File = dw_2.Object.file_name[1]
li_FileHandle = FileOpen(ls_File)
li_FileRead = FileRead(li_FileHandle, ls_Text)
DO WHILE li_FileRead > 0
    li_Count ++
    li_FileRead = FileRead(li_FileHandle, ls_Text)
    ll_row = dw_1.ImportString(ls_Text, 1)
LOOP
Please help me with the code! Thank You
It seems that PB expects by default a tab-separated csv file (while the 'c' in 'csv' stands for 'comma'...).
Add the csv! enumerated value in the arguments of ImportString() and it should fix the issue (it does in my test box).
Also, the columns defined in your dataobject must match the columns in the csv file (at least for the first columns you are interested in). If there are more columns in the csv file, they will be ignored. But if you want to get the 1st (or 2nd) and 3rd columns, you need to define the first 3 columns. You can always hide column #1 or #2 if you do not need it.
BTW, your code has some issues:
you should always test the return values of function calls like FileOpen() to stop processing in case of a non-existent / non-readable file
you are reading the text file twice for the first row: once before the loop and again inside it. Or maybe it is intended to skip a first line with column headers?
FWIW, here is working code based on yours:
string ls_file = "c:\dev\powerbuilder\experiment\data.csv"
string ls_text
int li_FileHandle, li_fileread, li_count
long ll_row

li_FileHandle = FileOpen(ls_File)
if li_FileHandle < 1 then
    return
end if

li_FileRead = FileRead(li_FileHandle, ls_Text)
DO WHILE li_FileRead > 0
    li_Count ++
    ll_row = dw_1.ImportString(csv!, ls_Text, 1)
    li_FileRead = FileRead(li_FileHandle, ls_Text) //read next line
LOOP
fileclose(li_fileHandle)
Alternatively, use the datawindow_name.ImportFile(CSV!, file_path) method.