JMeter - Save complete JSON response of all requests to a CSV file for test data preparation

I need to create a test data preparation script and capture the JSON response data to a CSV file.
In the actual test, I need to read parameters from the CSV file.
Is there any possibility of saving the entire JSON data as a field in the CSV file, or do I need to extract each field and save it to the CSV file?

The main issue is that JSON contains commas. You can overcome this by saving the JSON to a file and using a different delimiter instead of the comma, for example #.
Then read the file using a CSV Data Set Config with # as the Delimiter:
Delimiter to be used to split the records in the file. If there are fewer values on the line than there are variables the remaining variables are not updated - so they will retain their previous value (if any).
So you can save one JSON document per row and then read the data back using # as the delimiter.
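If you want to produce that file from inside JMeter itself, a minimal Groovy sketch for a JSR223 PostProcessor could look like this (the file name and the line-flattening step are assumptions for illustration, not part of the original answer):
// append the whole JSON response as a single row of the test data file;
// the body is flattened to one line so each response stays one CSV record
def json = prev.getResponseDataAsString().replaceAll('[\r\n]+', ' ')
new File('testdata.csv') << json << System.getProperty('line.separator')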

You can save entire JSON response into a JMeter Variable by adding a Regular Expression Extractor as a child of the HTTP Request sampler which returns JSON and configuring it like:
Name of created variables: anything meaningful, e.g. response
Regular Expression: (?s)(^.*)
Template: $1$
Then you need to declare this response as a Sample Variable by adding the following line to the user.properties file:
sample_variables=response
And finally you can use the Flexible File Writer plugin to store the response variable into a file; if you don't have any other Sample Variables, refer to it as variable#0.
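For reference, the plugin's per-sample record template (where you list the fields to write, such as variable#0) could then be as simple as the following - an assumption about a typical setup, not part of the original answer:
variable#0\n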

Related

For a JMeter POST request, how can I send multiple requests in the body from a CSV file (which already has JSON inside)?

I have a CSV file and it has a hundred records of JSON. I want to send the JSON in the body of a JMeter POST request, one by one, from the CSV.
I tried this and I am getting the desired results, but it is adding quotation marks to every variable or data item. For example, while sending this as a body:
[
{
"id":"1232435",
"ref":"88f000",
"data":"5a344f",
"number":"896751245"
}
]
JMeter is processing this body as:
"[
{
""id"":""1232435"",
""ref"":""88f000"",
""data"":""5a344f"",
""number"":""896751245""
}
]"
I want it to be processed the same as it appears in the CSV file.
I cannot reproduce your issue; double-check your JSON file contents.
If I put the following line into CSV file:
[{"id":"1232435","ref":"88f000","data":"5a344f","number":"896751245"}]
it's being sent as it is without any extra quotation marks.
Also, you have dataa in the CSV Data Set Config and data in the HTTP Request, so it might be the case that you're not actually reading the file at all. Given your current CSV Data Set Config setup, you would get only the first line of the file into the data variable, so your request body would look like [
So use a Debug Sampler and View Results Tree listener combination to see whether the variable is correct and what it looks like. If there are still extra quotation marks, they can be removed using the __strReplace() function.
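Alternatively, if you'd rather not rely on the __strReplace() plugin function, a JSR223 PreProcessor could strip the doubled quotes before the variable is used - a sketch, where the variable name data comes from the question and everything else is an assumption:
// collapse the doubled quotation marks ("" -> ") that a CSV-quoting layer can add
def raw = vars.get('data')
if (raw != null) {
    vars.put('data', raw.replace('""', '"'))
}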

CSV Data Set Config not looping

I'm using v5.1.1 of JMeter and attempting to use the "CSV Data Set Config". The file is read correctly as I can tell from the Debug Sampler/Results Tree, but the file is not being read line by line. In other words, it reads the first line and never proceeds to the next line for processing.
I would like to use the data inside the CSV to iterate over a series of HTTP Requests to an external API. I currently have a single thread with only the "CSV Data Set Config" and "HTTP Request".
Do I need to wrap this with a ForEach controller or another looping construct? Perhaps I'm missing it, but I do not see anything in the documentation that indicates it's necessary.
Thanks
You don't need to wrap this in a ForEach loop. The first line of the CSV file provides the variable names.
Let's say your CSV file looks like this:
foo, bar
1, John
2, George
3, Laura
If you then use an HTTP Request sampler, ${foo} and ${bar} will be iterated sequentially, row by row. However, please make sure you are mindful of the CSV Data Set Config options; the options that work OK for me were shown in a screenshot in the original answer.
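For illustration (the path and parameter below are made up, not from the original answer), the HTTP Request could reference the row values like:
Path: /users/${foo}
Parameter value: ${bar}
On each iteration the sampler then picks up the values from the next row.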
By default the CSV Data Set Config doesn't trigger any "looping"; it reads the next line from the CSV file for each thread (virtual user) on each iteration.
So if you want to see more values from the CSV file, either add more users, more loops, or both.
Given this CSV file:
line1
line2
line3
and the CSV Data Set Config and Thread Group setups shown as screenshots in the original answer, you will get one line per virtual user per iteration (the answer used the __threadNum() function to visualize the current virtual user number and the ${__jm__Thread Group__idx} pre-defined variable to show the current Thread Group iteration).
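As a deterministic example: a single thread with Loop Count = 3 will read line1, line2 and line3 on iterations 1, 2 and 3; with the default Recycle on EOF = True, a fourth iteration would wrap back around to line1.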
Check out the JMeter Parameterization - The Complete Guide article for more information on various approaches to parameterizing JMeter tests using external data sources.

JMeter - how to skip specific rows from a CSV

I have a CSV like this:
NAME;F1;F2;
test1;field1;field2
test2;field1;field2
test3;field1;field2
I would like to test only test1, so I would change the CSV to:
ID;F1;F2;
test1;field1;field2
#test2;field1;field2
#test3;field1;field2
How can I skip the rows for test2 and test3 in JMeter?
There is always a way to do something.
Maybe my way is not the best or the prettiest, but it works!
Thread Group
  Loop Controller
    CSV Data Set Config
    If Controller
      HTTP Request
Inside the If Controller I added this condition:
${__groovy(vars.get('ID').take(1)!='#')}
In this way, when you put a # at the start of a row, it will be skipped.
I hope it can be helpful for someone.
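Note (not stated in the original answer, but a standard JMeter requirement): for a ${__groovy()} condition like this to be evaluated, the If Controller's "Interpret Condition as Variable Expression?" box should be checked, which is the default in recent JMeter versions.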
You cannot; the only option I can think of is creating a new CSV file out of the existing one with just the first 2 lines, like:
Add setUp Thread Group to your Test Plan
Add JSR223 Sampler to the setUp Thread Group
Put the following code into the "Script" area:
// copy the first 2 lines of original.csv into new.csv
new File('original.csv').readLines().take(2).each { line ->
    // append each kept line plus the platform-specific line separator
    new File('new.csv') << line << System.getProperty('line.separator')
}
Replace original.csv with the path to your current CSV file and point the CSV Data Set Config at new.csv.
The above code will write the first 2 lines from original.csv into new.csv, so you will be able to access the limited external data instead of the full CSV file.
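If you prefer to honour the # comment convention from the question instead of hard-coding the number of lines, the same setUp Thread Group approach could filter the rows instead - a sketch under the same assumptions about file paths:
// keep every row that does NOT start with '#', so commented-out rows are skipped
def kept = new File('original.csv').readLines().findAll { !it.startsWith('#') }
new File('new.csv').text = kept.join(System.getProperty('line.separator'))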
More information:
File.readLines()
Collection.take()
The Groovy Templates Cheat Sheet for JMeter

How to put a parameterized value in a CSV file, including new lines

How should I parameterize the message body value so that the format below (with new lines) is stored in the CSV file as one value in JMeter?
"QU SKYTEAM
.TPEFMCI 170219
FFR/8
297-50347905CANJFK/T10K20MC0.12/GENERAL
/XPS
CZ123/31DEC/CANJFK/NN
REF/TPEFMCI
CUS//
/CANCSNCARGO
/GUANGZHOU
SRI/PRD-EQUATION"
The script was shown in an image attached to the original post.
You can work around it using JMeter Functions.
Scenario 1: you have a file with multiple lines, e.g.:
foo
bar
In this case you can use the __FileToString() function like: ${__FileToString(/path/to/the/file/with/the/payload,,)}
Scenario 2: you have a file with the data in a single line containing literal line-break characters, like:
foo\r\nbar
And the relevant JMeter Variable holding this value is ${baz}
In this case you can use the __javaScript() function to convert the literal line-break characters into real new lines, with something like the following (note that the comma inside the function call has to be escaped): ${__javaScript("${baz}".replace(/\\r\\n/g\,"\r\n"),)}
See How to Use JMeter Functions to learn more about the above and other JMeter functions.

Load a JSON file from the BigQuery command line

Is it possible to load data from a JSON file (not just CSV) using the BigQuery command-line tool? I am able to load a simple JSON file using the GUI; however, the command line assumes CSV, and I don't see any documentation on how to specify JSON.
Here's the simple JSON file I'm using:
{"col":"value"}
With this schema:
col:STRING
As of version 2.0.12, bq does allow uploading newline-delimited JSON files. This is an example command that does the job:
bq load --source_format NEWLINE_DELIMITED_JSON datasetName.tableName data.json schema.json
As mentioned above, "bq help load" will give you all of the details.
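Putting this together with the files from the question (the dataset and table names below are made up for illustration), the newline-delimited data file and a JSON schema file could look like:
data.json:
{"col":"value"}
schema.json:
[{"name": "col", "type": "STRING"}]
bq load --source_format NEWLINE_DELIMITED_JSON mydataset.mytable data.json schema.json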
1) Yes, you can.
2) The documentation is here; go to step 3, "Upload the table", in the documentation.
3) You have to use the --source_format flag to tell bq that you are uploading a JSON file and not a CSV.
4) The complete command structure is:
bq load [--source_format=NEWLINE_DELIMITED_JSON] [--project_id=your_project_id] destination_data_set.destination_table data_source_uri table_schema
For example:
bq load --project_id=my_project bq_dataset_name.bq_table_name gs://bucket_name/json_file_name.json path_to_schema_in_your_machine
5) You can find other bq load variants by running:
bq help load
At the time of this answer it did not support JSON-formatted data loading.
Here is the documentation (bq help load) for the load command with the then-latest bq version, 2.0.9:
USAGE: bq [--global_flags] <command> [--command_flags] [args]
load Perform a load operation of source into destination_table.
Usage:
load <destination_table> <source> [<schema>]
The <destination_table> is the fully-qualified table name of table to create, or append to if the table already exists.
The <source> argument can be a path to a single local file, or a comma-separated list of URIs.
The <schema> argument should be either the name of a JSON file or a text schema. This schema should be omitted if the table already has one.
In the case that the schema is provided in text form, it should be a comma-separated list of entries of the form name[:type], where type will default
to string if not specified.
In the case that <schema> is a filename, it should contain a single array object, each entry of which should be an object with properties 'name',
'type', and (optionally) 'mode'. See the online documentation for more detail:
https://code.google.com/apis/bigquery/docs/uploading.html#createtable
Note: the case of a single-entry schema with no type specified is
ambiguous; one can use name:string to force interpretation as a
text schema.
Examples:
bq load ds.new_tbl ./info.csv ./info_schema.json
bq load ds.new_tbl gs://mybucket/info.csv ./info_schema.json
bq load ds.small gs://mybucket/small.csv name:integer,value:string
bq load ds.small gs://mybucket/small.csv field1,field2,field3
Arguments:
destination_table: Destination table name.
source: Name of local file to import, or a comma-separated list of
URI paths to data to import.
schema: Either a text schema or JSON file, as above.
Flags for load:
/usr/local/bin/bq:
--[no]allow_quoted_newlines: Whether to allow quoted newlines in CSV import data.
-E,--encoding: <UTF-8|ISO-8859-1>: The character encoding used by the input file. Options include:
ISO-8859-1 (also known as Latin-1)
UTF-8
-F,--field_delimiter: The character that indicates the boundary between columns in the input file. "\t" and "tab" are accepted names for tab.
--max_bad_records: Maximum number of bad records allowed before the entire job fails.
(default: '0')
(an integer)
--[no]replace: If true erase existing contents before loading new data.
(default: 'false')
--schema: Either a filename or a comma-separated list of fields in the form name[:type].
--skip_leading_rows: The number of rows at the beginning of the source file to skip.
(an integer)
gflags:
--flagfile: Insert flag definitions from the given file into the command line.
(default: '')
--undefok: comma-separated list of flag names that it is okay to specify on the command line even if the program does not define a flag with that name.
IMPORTANT: flags in this list that have arguments MUST use the --flag=value format.
(default: '')