JMeter functional test validation with SQL and CSV file

I am trying to use JMeter for functional tests, but I am stuck on how best to implement flexible test validation.
The ultimate goal is to have a CSV file for data input and validation, so that it is easy for other people to add or remove test cases.
Case:
Login
Execute bulk job (involving a variable number of objects and creating a variable number of objects)
Validate the result via database SQL statements (multiple SQL requests and response assertions)
Logout
Some statements I would like to execute:
Number of invoiced BI (per contract ID)
Number of distinct invoice IDs for BI with an invoice ID (per contract ID)
*Get the list of invoice IDs to use in the following SQL statements:
For the list of invoice IDs, the invoice header must be equal to ...
For the list of invoice IDs, the invoice lines must be equal to ...
For each SQL statement I would use a Response Assertion to validate the result - for example an assertion with 10 lines, each containing a variable, combined with OR so that at least one of the lines has to match.
The data is generated, so IDs can be different on each run; I only know that the data for an object should match one of x cases.
Everything is very dynamic and the number of checks varies for each test case that is executed from the CSV file.
So I imagine I would need a ForEach Controller for each SQL statement/check/assertion.
CSV file would look something like this:
Bulk Job / Contracts / Contract Validation / Invoice header validation /
1234 / 12345 / 2 / 456
1234 / 12435 / 5 / 968
4256 / 89754 / 1 / 987465
4256 / 78597 / 4 / 654
4256 / 87596 / 2 / 852
Or like this:
Bulk Job / Contracts / Contract Validation / Invoice header validation /
1234 , 12345:12435 , 2:5 , 456:968
4256 , 89754:78597:87596 , 1:4:2 , 987465:654:852
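The second layout could then be split per contract at runtime, e.g. with a JSR223 sketch like the one below (the CSV Data Set Config variable names contracts, contractValidation and invoiceHeaderValidation are assumptions for illustration only):

// split the colon-separated columns of the current CSV row into per-contract variables
def contracts = vars.get('contracts').split(':')
def counts = vars.get('contractValidation').split(':')
def headers = vars.get('invoiceHeaderValidation').split(':')
contracts.eachWithIndex { contractId, i ->
    vars.put("expectedCount_${contractId}", counts[i])
    vars.put("expectedHeader_${contractId}", headers[i])
}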
What would be the best way to store and set up something like this? A CSV file, XML, or something else?
Maybe use multiple CSV files and try to keep everything aligned?

Consider using a scripting-based option, i.e. the JSR223 Assertion.
If you provide a Result Variable Name in the JDBC Request sampler, you will be able to get the whole result set as a single object, which is basically an ArrayList of HashMaps (one map per row, keyed by column name).
So you will be able to iterate over it in a single simple function and perform the dynamic assertions in one shot.
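For example, a minimal JSR223 Assertion sketch in Groovy, assuming the JDBC Request's Result Variable Name is set to resultSet, the query returns a single column aliased INVOICED_BI_COUNT, and the expected values come from a CSV-driven variable contractValidation in the 2:5 format shown above (all of these names are assumptions, not part of the original test plan):

// the JDBC result set: one HashMap per row, keyed by column name
def rows = vars.getObject('resultSet')
// expected values from the CSV, e.g. "2:5" -> [2, 5]
def expected = vars.get('contractValidation').split(':').collect { it as long }
// actual values returned by the database
def actual = rows.collect { it.get('INVOICED_BI_COUNT') as long }
if (actual.sort() != expected.sort()) {
    AssertionResult.setFailure(true)
    AssertionResult.setFailureMessage("Expected ${expected} but got ${actual}")
}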
Check out the Debugging JDBC Sampler Results in JMeter article for more information if needed.

Related

How to execute a ForEach loop 100 times in JMeter?

I have a thread group with the loop count set to 5. Inside it, I have used a JSON Extractor with the 1st request, which creates a total of 100 variables following the purchaseOrderId_1 pattern. I used these in a ForEach Controller next, so the loop should ideally execute 100 times. Inside the ForEach loop, I have a While Controller, which executes the child requests 5 times until all 5 rows of data in the CSV Data Set Config are read. The goal is to run the 3rd request 5 times for each value of the ForEach Controller's output variable, i.e. 500 times for the 100 values, and 2500 times in total across the 5 thread group loops. But the test stops after executing the 3rd request only 5 times, for purchaseOrderId_1.
Screenshot of my test plan flow.
There is quite a lot of nesting. I tried to interpret your statements and created a test plan. Please check if this is the pattern you are expecting.
This is the sample JMX and the CSV file.
Note: update the CSV file path after you download both files.
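If the ForEach Controller still stops early, a quick way to confirm that the JSON Extractor really produced all 100 values is a small JSR223 sanity check placed after the extractor (a hypothetical sketch, assuming the extractor's reference name is purchaseOrderId with Match No. set to -1):

// logs how many purchaseOrderId_* variables the extractor created;
// this is the number of iterations the ForEach Controller will perform
def matchCount = vars.get('purchaseOrderId_matchNr')
log.info("JSON Extractor produced ${matchCount} purchaseOrderId values")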

How can we create a block size in JMeter with CSV config? Each thread should pick up a specific set of values

How can we create a block size in JMeter with CSV config?
I have 5 users and one Bulkuser.csv file with 4 columns.
The file has around 2000 values.
I wish to create blocks of 400 values for my 5 threads (users):
1st user will use the 1st 400 values (rows 1-400)
2nd user will use the next 400 values (rows 401-800)
and so on..
How can we implement this? Is there a Beanshell PreProcessor script that, for each data read, decides which specific file to read based on the thread number?
As of JMeter 5.3 this functionality is not supported; the only stable option I can think of is splitting your Bulkuser.csv into 5 separate files like user1.csv, user2.csv, etc. and using a combination of the __threadNum() and __CSVRead() functions to access the data, like:
${__CSVRead(user${__threadNum}.csv,0)} - reads the value of column 1 from the user1.csv file for the 1st thread (for the 2nd thread it will be user2.csv, etc.)
${__CSVRead(user${__threadNum}.csv,1)} - reads the value from column 2
.....
${__CSVRead(user${__threadNum}.csv,next)} - proceeds to the next row
More information: How to Pick Different CSV Files at JMeter Runtime
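If splitting the file by hand is inconvenient, a one-off helper can do it. A minimal JSR223 sketch in Groovy (for example in a setUp Thread Group), assuming Bulkuser.csv has no header row, sits in the JMeter working directory and should be split into blocks of 400 rows:

// write the rows out as user1.csv, user2.csv, ... with 400 rows per file
def lines = new File('Bulkuser.csv').readLines()
lines.collate(400).eachWithIndex { block, i ->
    new File("user${i + 1}.csv").text = block.join(System.lineSeparator())
}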

SSIS consolidate and concatenate multiple rows into single rows without using SQL

I am trying to accomplish something that is pretty easy to do in SQL, but seemingly very challenging to do in SSIS without using SQL. Basically, I need to consolidate and concatenate a field of a many-to-one relationship.
Given entities: [Contract Item] (many) to (one) [Account]
There is a field [ari_productsummary] that contains the product listed on the Contract Item entity. We want to write that value to the Account as [ari_activecontractitems]. However, an Account may have more than one Contract Item record associated to it, in which case, we want to concatenate those values. We also only want the distinct values to be concatenated (distinct rows already solved within my data flow).
This can be accomplished by writing to a temporary table and then using a query or view to obtain the summarized results, as follows. I created a SQL table called TESTTABLE that contains the [ari_productsummary] from the Contract Item entity along with the referring [accountid] to map it back to the Account. I then wrote the following query as a view:
SELECT distinct accountid,
(SELECT TT2.ari_productsummary + '; '
FROM TESTTABLE TT2
WHERE TT2.accountid = TT.accountid
FOR XML PATH ('')
) AS 'ari_activecontractitems'
FROM TESTTABLE TT
Executing that query provides the results that I want, which I can then use for importing into the Account entity.
But how do I do this in an SSIS data flow without writing to a SQL table as a temporary placeholder for the data? I want to do the entire process inside one data flow container, without using a temporary SQL table/view. The whole summarization process needs to be done on the fly.
Does anyone have a solution that doesn't require a temporary SQL table/view/query, but is contained entirely within the data flow?
I am using VS 2017 and the KingswaySoft Dynamic CRM 365 ETL toolset to develop my solution/package.
Spitballing here, as I don't have Dynamics nor do I have the custom components.
Data Flow 1 - Contract aggregation
The purpose of this data flow is to replicate the logic of the elegant query you provided and shove the result into a Cache Connection Manager (see Notes for 2008+ at the end)
KingswaySoft Dynamics Source -> Script Component -> Cache Transform
If you want to keep the sort in there, do it before the Script Component. The implementation I'll take with the Script Component is that it's fully blocking - that is, all the rows must arrive before it can send any on. Transformations like the Merge Join are only partially blocking, because the requirement of sorted data means that once you no longer have a match for the current item, you can send it on down the pipeline.
The Script Component is going to be an asynchronous transformation. You'll have two output columns: your key accountid and your new derived column ari_activecontractitems. That column might need to be big - you'll know your data best, but if it's a blob type in Dynamics (> 4k unicode or > 8k ascii characters) then you'll have to define the data type as DT_TEXT/DT_NTEXT
As inputs, you'll select accountid and ari_productsummary from your source.
The code should be pretty easy. We're going to accumulate the inbound data into a Dictionary.
// member variable
Dictionary<string, List<string>> accumulator;
In the PreExecute method, we'll tack this in to initialize our variable
// initialize in the PreExecute method
accumulator = new Dictionary<string, List<string>>();
In the per-row input method, Input0_ProcessInputRow (exact name depends on your input's name)
// simulate the inbound queue
// row_id would be something like Row.accountid, and invoice something like Row.ari_productsummary
if (!accumulator.ContainsKey(row_id))
{
// Create an empty list for this key
accumulator.Add(row_id, new List<string>());
}
// add it if we don't have it
if (!accumulator[row_id].Contains(invoice))
{
accumulator[row_id].Add(invoice);
}
Once you get the signal that no more data is available, that's when you start buffering the output data. The auto-generated code will have placeholders for all of this.
// This is how we shove data out the pipe
foreach(var kvp in accumulator)
{
// approximately thus
OutputBuffer1.AddRow();
OutputBuffer1.accountid = kvp.Key;
OutputBuffer1.ari_activecontractitems = string.Join("; ", kvp.Value);
}
We have an upcoming release that comes with a component that does exactly what you are trying to achieve, without the need to write custom code. The feature is currently in preview; please reach out to us for private access to it. You can find our contact information on our website.
UPDATE - June 5, 2020: we have made the components available for public access at https://www.kingswaysoft.com/products/ssis-productivity-pack/ as part of our 2020 Release Wave 1. We have two components that serve this kind of purpose. The Composition component takes input values and transforms them into a composite value in an SSIS column. The Decomposition component does the opposite: it takes an input value and splits it into multiple rows, using either delimiter-based text splitting or XML/JSON array splitting.

JMeter: How to read a particular row of data in a CSV file based on a column value and pass the value of that column to a sampler?

I am new to JMeter and doing a POC to load test a web application.
What I am trying to do:
I have a total of 4 user logins (surgeons). Each login is associated with 'n' patients.
I've created 2 CSV files:
1. one with the user login and password for the surgeons
2. another CSV file that contains the PatientName, PatientId and the surgeon associated with that patient, like below.
PatientName,PatientId,loginName
Pa1,PID1,user1
Pa2,PID2,user1
Pa3,PID3,user1
Pa4,PID4,user1
Pa5,PID5,user2
Pa6,PID6,user2
Pa7,PID7,user3
Pa8,PID8,user4
My Scenario:
Login as User.
Navigate to each patient dashboard as per their associations.
Log out of the application.
My Testplan
Thread Group (4 users, Ramp up time as 1 sec, 1 loop)
-csv1(with username, password )
-Login Page and Navigate to the Main page
- Runtime Controller (to sustain the load for a set amount of time)
-- While Loop(to loop between the patient dashboard of the surgeon/user logged in)
---CSV2 (the data as shown above)
----Navigate to Dashboard
----Navigate to Main
-Log out of the Application
What I want to achieve:
I want to use a single thread group and run it concurrently for all 4 users. Once a user logs in, that user should only pick up the patient data from the CSV that is associated with them.
For example:
When Thread 1 is running with the user1 login, it should only be able to loop through Pa1, Pa2, Pa3 and Pa4.
When Thread 2 is running with the user2 login, it should only read the Pa5 and Pa6 data.
Like this, each user login should only pick the patients associated with it, as per the mapping above.
Is there any way I can achieve this with the single CSV2 file, so that I don't have to create n thread groups for n logins, with n CSV files each containing the data specific to one user login?
According to the JMeter Test Elements Execution Order:
0. Configuration elements
1. Pre-Processors
2. Timers
3. Sampler
4. Post-Processors (unless SampleResult is null)
5. Assertions (unless SampleResult is null)
6. Listeners (unless SampleResult is null)
Being a Configuration Element, the CSV Data Set Config is initialized once, before anything else, therefore you won't be able to use the current variable from the 1st CSV Data Set Config in the 2nd CSV Data Set Config.
The solution is to use the __CSVRead() function instead; JMeter Functions are evaluated in the place where they appear in the Test Plan, so you can use any hardcoded value, JMeter Variable or another function there.
More information: How to Pick Different CSV Files at JMeter Runtime
1. CSV Data Set Config for Surgeon credentials (loginNameSurgeon & Password)
2. Login Request (take first surgeon credentials from CSV)
3. While ${__jexl3("${loginNameSurgeon}" != "${loginName}")}
a. CSV Data Set Config for patient data w.r.t surgeons (PatientName,PatientId,loginName)
b. If Controller - ${__jexl3("${loginName}" != "<EOF>")} // to check if we have any more loginName left
c. Dashboard request
d. Debug Sampler // Just to validate if the variables are in place.
4. Logout request
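If the While/CSV combination above gets hard to reason about, an alternative (hypothetical, not part of the original answer) is a JSR223 PreProcessor that reads the patient CSV once after login and keeps only the rows for the surgeon who just logged in, exposing them for a ForEach Controller; the file name patients.csv and the variable names below are assumptions:

// assumes patients.csv has a header row: PatientName,PatientId,loginName
def surgeon = vars.get('loginNameSurgeon')
def rows = new File('patients.csv').readLines().drop(1)
def mine = rows.collect { it.split(',') }.findAll { it[2].trim() == surgeon }
// expose as patientId_1..patientId_N so a ForEach Controller can iterate over them
mine.eachWithIndex { cols, i -> vars.put("patientId_${i + 1}", cols[1].trim()) }
vars.put('patientId_matchNr', mine.size() as String)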

Running a thread group multiple times for all the values in a CSV file

I have recorded a series of 5 HTTP requests in a thread group (say TG). The response value of one request has to be sent as a parameter in the next request, and so on until the last request is made.
To send the parameter in the first request, I have created a CSV file with unique values (say 1, 2, 3, 4, 5).
Now I want this TG to run for all the values read from the CSV file (in the above case, TG should run for value 1, then value 2, and so on up to 5).
How do I do this?
Given your CSV file looks like:
1
2
3
4
5
In the Thread Group set Loop Count to "Forever"
Add CSV Data Set Config element under the Thread Group and configure it as follows:
Filename: if the file is in JMeter's bin folder - the file name only; if in another location - the full path to the CSV file
Variable Names: anything meaningful, i.e. parameter
Recycle on EOF - false
Stop thread on EOF - true
Sharing mode - according to your scenario
See the Using CSV DATA SET CONFIG guide for a more detailed explanation.
Another option is using the __CSVRead() function.
This method of creating an individual request for each record will not scale for many records. There is another, more scalable solution here - Jmeter multiple executions for each record from CSV