Get complete JSON body from CSV in JMeter - json

I have a CSV file like this
"{""data"":{""student"":{""name"":""random name""}}}",
"{""data"":{""student"":{""name"":""random name2""}}}"
Here I have two JSON strings.
I tried to send these values as a JMeter variable, ${body}, in a POST request. JMeter does pick up the value from the CSV, but it sends the value as a quoted string rather than as a JSON body. Is there any way to parse that data from the CSV and send it as a POST JSON body?
For example, the POST body should be like this:
{
  "data": {
    "student": {
      "name": "random name"
    }
  }
}
But right now it's like this:
"{""data"":{""student"":{""name"":""random name""}}}"
I configured the CSV Data Set Config in JMeter and reference the variable in the request body.
Just for your information, I do not want to split the JSON apart and put a separate variable in the POST body for each field. I want to send the full JSON body straight from the CSV.

JMeter sends whatever it finds in the CSV file. Remove the extra quotation marks from the CSV file and JMeter will start sending valid JSON.
If you cannot manipulate the data in the CSV file, i.e. it comes from an external source, you can remove these extra quotation marks using a JSR223 PreProcessor.
If you just want to send the next line of the file with each subsequent request and the file is not very big, take a look at the __StringFromFile() function; it returns the next line from the file each time it is called.
More information on JMeter Functions concept: Apache JMeter Functions - An Introduction
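If the CSV cannot be changed, the fix-up the JSR223 PreProcessor has to perform is just standard CSV un-quoting: strip the outer quotes and collapse the doubled inner quotes. Here is a minimal sketch of that transformation (in Python, purely for illustration; inside JMeter you would apply the equivalent string manipulation to vars.get("body") in Groovy):

```python
import csv
import io
import json

# The CSV line as it appears in the file: the JSON is wrapped in quotes,
# with inner quotes doubled -- standard CSV escaping.
line = '"{""data"":{""student"":{""name"":""random name""}}}",'

# A CSV reader undoes that quoting, which is exactly what the PreProcessor
# has to replicate on the variable read from the file.
row = next(csv.reader(io.StringIO(line)))
body = row[0]
print(body)  # {"data":{"student":{"name":"random name"}}}

# After un-quoting, the value is valid JSON again.
assert json.loads(body) == {"data": {"student": {"name": "random name"}}}
```

Note that the same un-quoting is what JMeter's CSV Data Set Config performs when the data is well-formed CSV, which is why cleaning the file itself is the simplest fix.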

Related

Reading JSON in Azure Synapse

I'm trying to understand the code for reading a JSON file in Synapse Analytics. Here's the code provided by the Microsoft documentation:
Query JSON files using serverless SQL pool in Azure Synapse Analytics
select top 10 *
from openrowset(
    bulk 'https://pandemicdatalake.blob.core.windows.net/public/curated/covid-19/ecdc_cases/latest/ecdc_cases.jsonl',
    format = 'csv',
    fieldterminator = '0x0b',
    fieldquote = '0x0b'
) with (doc nvarchar(max)) as rows
go
I wonder why format = 'csv' is used. Is it trying to convert the JSON to CSV to flatten the file?
I don't know why they didn't just read the file as a SINGLE_CLOB.
When you use SINGLE_CLOB, the entire file is imported as one value, and the content of the file in doc is not well formed as a single JSON document. Using SINGLE_CLOB would force us to do more work after the openrowset before we could use the content as JSON (since it is not valid JSON, we would need to parse the value ourselves). It can be done, but it would probably require more work.
The format of the file is multiple JSON-like strings, each on a separate line: "line-delimited JSON", as the document calls it.
By the way, if you check the history of the document on GitHub, you will find that originally this was not the case. As far as I remember, the file originally contained a single JSON document with an array of objects (it was wrapped with [] when loaded). Someone named "Ronen Ariely" in fact found this issue in the document, which is why you can see my name in the list of authors of the document :-)
I wonder why the format = 'csv'. Is it trying to convert JSON to CSV to flatten the hierarchy?
(1) JSON is not a data type in SQL Server. There is no data type named JSON. What we have in SQL Server are tools, such as functions, that work on text and provide support for strings in JSON format. Therefore, we do not convert to or from JSON.
(2) The format parameter has nothing to do with JSON. It specifies that the content of the file is a comma-separated values file. You can (and should) use it whenever your file is well formed as comma-separated values (commonly known as a CSV file).
In this specific sample in the document, the values in the CSV file are strings, each of which has a valid JSON format. Only after reading the file with openrowset do we start to parse the content of the text as JSON.
Notice that only after the heading "Parse JSON documents" does the document start discussing parsing the text as JSON.
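To make that two-step idea concrete, here is a small sketch (Python, with made-up sample records rather than the real ecdc_cases.jsonl content): first every line is read as one opaque string, which is what format = 'csv' with the never-occurring 0x0b terminator achieves, and only then is each string parsed as JSON:

```python
import io
import json

# A tiny line-delimited JSON (JSONL) sample: each line is a complete
# JSON document, so there is no outer [] array wrapping them.
jsonl = io.StringIO(
    '{"country": "FR", "cases": 10}\n'
    '{"country": "DE", "cases": 20}\n'
)

# Step 1 -- what openrowset(..., format='csv', fieldterminator='0x0b') does:
# treat each line as a single opaque string column ("doc").
rows = [line.rstrip("\n") for line in jsonl]

# Step 2 -- only now parse each string as JSON, as the document does
# under "Parse JSON documents".
docs = [json.loads(r) for r in rows]
print(docs[1])  # {'country': 'DE', 'cases': 20}
```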

How do i Pass JSON Multiple parameters in JMeter with different users?

In my current scenario, I need to log in to the application with multiple users and create articles using the provided input, formed as JSON with more than 50 parameters. How can I do that? Do I need to prepare two CSV files, or something else? Please suggest.
I would come up with:
A single JSON file containing placeholders for JMeter Functions and Variables which you will be using for parameterization like:
{
  "firstName": "${firstName}",
  "lastName": "${lastName}",
  "phone": ${phone},
  etc.
}
Then I would create a CSV file containing the parameterization information like:
firstName,lastName,phone
John,Doe,123456789
Jane,Doe,987654321
etc.
Configure JMeter to read the data from the CSV using the CSV Data Set Config.
Reference the file from point 1 in the "Body Data" section of the HTTP Request sampler using a combination of the __eval() and __FileToString() functions:
${__eval(${__FileToString(data.json,,)})}
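What the __eval()/__FileToString() combination does on each iteration can be sketched as follows (Python used purely for illustration; string.Template happens to use the same ${name} placeholder syntax as JMeter):

```python
import csv
import io
import json
import string

# The JSON body template with JMeter-style ${...} placeholders,
# i.e. what __FileToString(data.json,,) would return.
template = '{"firstName": "${firstName}", "lastName": "${lastName}", "phone": ${phone}}'

# The parameterization data, i.e. what the CSV Data Set Config reads.
csv_data = "firstName,lastName,phone\nJohn,Doe,123456789\n"

# Per iteration, __eval(...) substitutes the current row's variables
# into the template before the request is sent.
for row in csv.DictReader(io.StringIO(csv_data)):
    body = string.Template(template).substitute(row)
    print(body)  # {"firstName": "John", "lastName": "Doe", "phone": 123456789}
```

Note that ${phone} is deliberately unquoted in the template so it is sent as a JSON number, not a string.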

Jmeter use random string inside CSV file and resolve at runtime

I have a CSV file where I have stored the full JSON request, and I use this variable in the API request as ${Request}.
Inside each row of the CSV file I have added ${randomVariable}.
In my test plan I define:
randomVariable = ${__RandomString(10,QWERTYUIOPASDFGHJKLZXCVBNM4563456345634_,)}
This generates the random value, but in the JSON, instead of the actual random value, the literal text ${randomVariable} is passed.
I have tried using a Beanshell PreProcessor with get and put, but it still doesn't work. Please help.
If you want JMeter to evaluate variables which come from external data sources, i.e. CSV files, you need to wrap the variable reference into the __eval() function, to wit:
${variableFromCSV} - will return ${randomVariable}
${__eval(${variableFromCSV})} - will return the actual value of the ${randomVariable}
More information: Here’s What to Do to Combine Multiple JMeter Variables
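The difference between the two expressions above can be sketched like this (Python for illustration; the variable names mirror the question, and the sample random value is made up):

```python
import string

# Simulated JMeter variables: the CSV column holds a body that itself
# references another variable, ${randomVariable}.
jmeter_vars = {
    "variableFromCSV": '{"token": "${randomVariable}"}',
    "randomVariable": "ABC123XYZ0",  # made-up stand-in for __RandomString(10, ...)
}

# ${variableFromCSV} -- one level of lookup; the inner placeholder
# stays literal, which is the behavior reported in the question.
raw = jmeter_vars["variableFromCSV"]
assert "${randomVariable}" in raw

# ${__eval(${variableFromCSV})} -- the looked-up value is evaluated
# again, so the nested placeholder resolves to the random value.
resolved = string.Template(raw).substitute(jmeter_vars)
print(resolved)  # {"token": "ABC123XYZ0"}
```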

Parameterizing the JSON Data file in the VuGen

I need some help with LoadRunner scripting for a REST API. In my requirement, I'm passing the JSON file to the web_custom_request LR function.
The following is the content of the JSON file:
{
  "serviceLayerOperationRequest": {
    "contextObjectId": "36045467715",
    "payload": "{\"UserSessionSearchCriteria\":{\"os_st_id\":\"36045467715\",\"LastName\":\"test\",\"FirstName\":\"test\"}}",
    "operationLabel": "CustomerSearch",
    "serviceOpInvocationId": "1111111",
    "sessionId": "{SessionID}"
  }
}
The new value is fetched from the previous response body and successfully written into the parameter SessionID.
Currently, the request picks up the literal string {SessionID}.
In the above JSON file, the sessionId value is dynamic, so I want to parameterize it from the Parameters. What is the correct syntax for this?

Postman collection runner with big numbers on csv file

I'm having trouble with Postman's collection runner: the CSV file I import is previewed with numbers in scientific notation, and the numbers are converted this way in the request as well...
csv file excerpt :
19951195954805,19951195954805171182512001,1555,1500,2017-06-01T10:00:00+02:00,47.237605,6.022034,2017-06-04T10:00:00+02:00,FIAT,FR,BB-000-AA
The second value is apaId which is used in the request sent. Its variable name in the request body is id_FPS.
Request body excerpt :
"apaId": {{id_FPS}},
Request sent :
"apaId": 1.995119595480517e+25,
Is there a way to force Postman to use the number exactly as I put it in the CSV file? This number isn't random; it is meaningful, with a set number of characters.
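The mangling is a floating-point precision issue, not randomness: the 26-digit apaId is far beyond the 15-16 significant digits an IEEE-754 double can represent, and Postman's JavaScript runtime stores unquoted JSON numbers as doubles. A quick sketch (Python, for illustration):

```python
# apaId from the CSV: 26 digits, far beyond the ~15-16 significant digits
# an IEEE-754 double (JavaScript's only number type) can hold.
apa_id = 19951195954805171182512001
assert len(str(apa_id)) == 26

# Treated as a number, it is rounded -- which is what shows up in the request:
print(float(apa_id))  # something like 1.9951...e+25, matching the mangled value

# The rounded double no longer equals the original integer.
assert float(apa_id) != apa_id

# As a string, the value survives intact; one common workaround is therefore
# to quote the placeholder in the body: "apaId": "{{id_FPS}}"
assert str(apa_id) == "19951195954805171182512001"
```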