Postman collection runner with big numbers in CSV file

I'm having trouble with Postman's collection runner: the CSV file I import is previewed with numbers in scientific notation, and the numbers are somehow converted in the request as well...
CSV file excerpt:
19951195954805,19951195954805171182512001,1555,1500,2017-06-01T10:00:00+02:00,47.237605,6.022034,2017-06-04T10:00:00+02:00,FIAT,FR,BB-000-AA
The second value is apaId, which is used in the request sent. Its variable name in the request body is id_FPS.
Request body excerpt:
"apaId": {{id_FPS}},
Request sent:
"apaId": 1.995119595480517e+25,
Is there a way to force Postman to use the number exactly as it appears in the CSV file? This number isn't random; it is meaningful, with a fixed number of digits.
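For what it's worth, 19951195954805171182512001 has 26 digits, far beyond JavaScript's Number.MAX_SAFE_INTEGER (9007199254740991), so any step that parses it as a number will lose precision and fall back to scientific notation. One hedged workaround, assuming the API accepts the field as a JSON string, is to quote the placeholder so the value is substituted as raw text and never goes through a numeric conversion:
"apaId": "{{id_FPS}}"
If the API insists on a bare JSON number, the value has to avoid numeric parsing end to end, including the runner's CSV handling.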

Related

Get complete JSON body from CSV in JMeter

I have a CSV file like this
"{""data"":{""student"":{""name"":""random name""}}}",
"{""data"":{""student"":{""name"":""random name2""}}}"
Here I have two JSON strings.
I tried to send these as a JMeter variable, using ${body} as the POST body. It actually takes the value from the CSV, but JMeter sends the value as a quoted string rather than a JSON body. Is there any way to parse this data from the CSV and send it as a POST JSON body?
For example, the POST body should be like this:
{
"data": {
"student": {
"name": "random name"
}
}
}
But right now it looks like this:
"{""data"":{""student"":{""name"":""random name""}}}"
I configured the CSV Data Set Config in JMeter and pass the variable into the request body as ${body}.
Just for your information, I do not want to break the JSON apart and put a separate variable in the POST body for every field; I want the full JSON body from the CSV.
JMeter sends what it finds in the CSV file; remove the extra quotation marks from the CSV file and JMeter will start sending valid JSON.
If you cannot manipulate the data in the CSV file, i.e. it's coming from an external source, you can remove these extra quotation marks using a JSR223 PreProcessor, as in the sketch below.
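For example, a minimal Groovy sketch for the JSR223 PreProcessor, assuming the CSV Data Set Config reads the column into a variable named body (the variable name is an assumption):
// Strip the wrapping quotes and collapse the doubled quotes
// so that ${body} holds valid JSON when the sampler runs.
String raw = vars.get('body')
if (raw != null) {
    String json = raw.replaceAll(/^"|"$/, '').replace('""', '"')
    vars.put('body', json)
}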
If you just want to send the next line from the file with each subsequent request and the file is not very big, take a look at the __StringFromFile() function; it returns the next line from the file each time it is called.
More information on JMeter Functions concept: Apache JMeter Functions - An Introduction

Foundry Data Connection Rest responses as rows

I'm trying to implement a very simple single GET call whose response returns some text with a bunch of IDs separated by newlines (like a single-column CSV). I want to save each one as a row in a dataset.
I understand that in general the REST connector saves each response as a new row in an Avro file, which works well for JSON responses that can then be parsed in code.
However, in my case I need it to save the response in a txt or csv file, to which I can then apply a schema, getting each ID in its own row. How can I achieve this?
By default, the Data Connection REST connector will place each response from the API as a row in the output dataset. If you know the format of your response, and it's something that would usually be parsed into one row per line (CSV, for example), you can try setting outputFileType to the correct format (undefined by default).
For example (for more details see the REST API Plugin documentation):
type: rest-source-adapter2
outputFileType: csv
restCalls:
  - type: magritte-rest-call
    method: GET
    path: '/my/endpoint/file.csv'
If you don't know the format, or the above doesn't work regardless, you'll need to parse the response in transforms to split it into separate rows. This can be done by treating the response as a string column; in this case, exploding after splitting on newline (\n) is useful: F.explode(F.split(F.col("response"), r'\n')). A minimal transform sketch follows.
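For example (the dataset paths and the "response" column name are assumptions; check how the connector actually lands the data):
from pyspark.sql import functions as F
from transforms.api import Input, Output, transform_df


@transform_df(
    Output("/Project/datasets/ids"),              # hypothetical output path
    raw=Input("/Project/datasets/raw_rest_responses"),  # hypothetical raw dataset
)
def split_ids(raw):
    # One output row per id: split each response on newlines and explode,
    # dropping the empty strings left by trailing newlines.
    ids = raw.select(F.explode(F.split(F.col("response"), r"\n")).alias("id"))
    return ids.filter(F.col("id") != "")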

Reading JSON in Azure Synapse

I'm trying to understand the code for reading a JSON file in Synapse Analytics. Here's the code provided by the Microsoft documentation:
Query JSON files using serverless SQL pool in Azure Synapse Analytics
select top 10 *
from openrowset(
        bulk 'https://pandemicdatalake.blob.core.windows.net/public/curated/covid-19/ecdc_cases/latest/ecdc_cases.jsonl',
        format = 'csv',
        fieldterminator = '0x0b',
        fieldquote = '0x0b'
    ) with (doc nvarchar(max)) as rows
go
I wonder why the format = 'csv'. Is it trying to convert JSON to CSV to flatten the file?
Why they didn't just read the file as a SINGLE_CLOB, I don't know.
When you use SINGLE_CLOB, the entire file is imported as one value, and the content of doc is not well formed as a single JSON document. Using SINGLE_CLOB would make us do more work after the openrowset before we could use the content as JSON (since the whole file is not one valid JSON document, we would need to parse the value ourselves). It can be done, but it would probably require more work.
The format of the file is multiple JSON-like strings, each on a separate line: "line-delimited JSON", as the document calls it.
By the way, if you check the history of the document on GitHub, you will find that this was not originally the case. As far as I remember, the file originally included a single JSON document with an array of objects (it was wrapped with [] once loaded). I (Ronen Ariely) in fact found this issue in the document, which is why you can see my name in the list of the document's authors :-)
I wonder why the format = 'csv'. Is it trying to convert json to csv to flatten the hierarchy?
(1) JSON is not a data type in SQL Server; there is no data type named JSON. What we have in SQL Server are tools, such as functions, which work on text and provide support for strings in JSON-like format. Therefore, we do not convert to JSON or from JSON.
(2) The format parameter has nothing to do with JSON. It specifies that the content of the file is a comma-separated values file. You can (and should) use it whenever your file is well formatted as comma-separated values (also commonly known as a CSV file).
In this specific sample from the document, the values in the file are strings, each of which has a valid JSON format. Only after reading the file using openrowset do we start to parse the content of the text as JSON.
Notice that only after the heading "Parse JSON documents" does the document start to discuss parsing the text as JSON.
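For illustration, a minimal sketch of that parsing step using JSON_VALUE on the doc column (the JSON property names below are assumptions about the file's schema; check the actual documents):
select top 10
    json_value(doc, '$.date_rep') as date_rep,
    json_value(doc, '$.countries_and_territories') as country,
    json_value(doc, '$.cases') as cases
from openrowset(
        bulk 'https://pandemicdatalake.blob.core.windows.net/public/curated/covid-19/ecdc_cases/latest/ecdc_cases.jsonl',
        format = 'csv',
        fieldterminator = '0x0b',
        fieldquote = '0x0b'
    ) with (doc nvarchar(max)) as rows
go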

How to use CSV to provide different values for JSON assertion for different inputs in JMeter?

I am relatively new to API Testing.
Currently I have a setup with different request JSONs, and for each request JSON there is a different response JSON assertion configuration with the appropriate values in JMeter.
Since my JSON structure is the same and only the input values differ, I am thinking of generalizing the input values using a CSV and keeping only one request configuration.
But would I be able to provide different values (like the JSONPath and expected value) to one response JSON assertion configuration using a CSV? Both the JSONPath and the expected value depend on the input provided and can differ significantly from case to case.
If yes, please let me know how to do it.
Also, if I can achieve my use case with another free API testing tool like Postman, please let me know as well.
You can normally parameterize any JMeter Test Element using the CSV Data Set Config.
For example you have the following response:
{ "name":"John" }
And the following CSV file:
$.name,John
$.name,Jane
Add a CSV Data Set Config to your Test Plan, point its Filename at the CSV file, and set Variable Names to jsonPath,name.
Add a JSON Assertion as a child of the request which returns the above JSON, set its Assert JSON Path exists field to ${jsonPath}, tick Additionally assert value, and set Expected Value to ${name}.
That's it. Each virtual user and/or iteration will pick up the next line from the CSV file, and the ${jsonPath} and ${name} placeholders will be replaced with their respective values: the first request passes because name matches John, and the second fails because the assertion expects name to be Jane but gets John.
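If you prefer a scripted check over the GUI assertion, a JSR223 Assertion (Groovy) sketch that uses the same jsonPath and name variables is below; this swaps in the json-path library bundled with recent JMeter versions, so treat it as an alternative rather than the canonical setup:
import com.jayway.jsonpath.JsonPath

// Read the expected JSONPath and value from the CSV-driven variables
def path = vars.get('jsonPath')
def expected = vars.get('name')
def actual = JsonPath.read(prev.getResponseDataAsString(), path)
if (!expected.equals(String.valueOf(actual))) {
    AssertionResult.setFailure(true)
    AssertionResult.setFailureMessage('Expected ' + expected + ' at ' + path + ' but got ' + actual)
}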

Parameterizing the JSON data file in VuGen

I need some help with LoadRunner scripting for a REST API. In my scenario, I'm passing a JSON file to the web_custom_request LR function.
The following is the content of the JSON file:
{"serviceLayerOperationRequest":{
"contextObjectId": "36045467715",
"payload":"{\"UserSessionSearchCriteria\":{\"os_st_id\":\"36045467715\",\"LastName\":\"test\",\"FirstName\":\"test\"}}",
"operationLabel": "CustomerSearch",
"serviceOpInvocationId": "1111111",
"sessionId": "{SessionID}"
}}
The new value is fetched from the previous response body and successfully written into the parameter SessionID.
Currently, the request sends the literal string {SessionID}.
In the above JSON file, the value of sessionId is dynamic, so I want to parameterize it from the Parameters. What is the correct syntax for this?
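A minimal sketch, not a verified answer: LoadRunner substitutes parameters such as {SessionID} when the JSON is passed inline in the Body argument of web_custom_request (assuming the default { } parameter braces); to my knowledge it does not substitute parameters inside an external file referenced by the request, which would explain the literal {SessionID}. The URL below is a hypothetical placeholder.
web_custom_request("CustomerSearch",
    "URL=http://myhost/myservice/endpoint",  // hypothetical endpoint
    "Method=POST",
    "EncType=application/json",
    "Body={\"serviceLayerOperationRequest\":{"
        "\"contextObjectId\":\"36045467715\","
        "\"payload\":\"{\\\"UserSessionSearchCriteria\\\":{\\\"os_st_id\\\":\\\"36045467715\\\",\\\"LastName\\\":\\\"test\\\",\\\"FirstName\\\":\\\"test\\\"}}\","
        "\"operationLabel\":\"CustomerSearch\","
        "\"serviceOpInvocationId\":\"1111111\","
        "\"sessionId\":\"{SessionID}\"}}",   // {SessionID} is replaced at runtime
    LAST);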