Get JSON request body from a CSV file in Gatling

I have a csv file as followed:
Id,searchCriterion
18817,"{"basicSearchCriteria":{"name":{"text":"Kas"}}}"
I want to POST the search criterion as JSON in the request body and prepared the following Gatling scenario, but it does not work: I receive a 400 status code because of incorrect JSON in the body:
val feeder = csv("search.csv")

object SearchWithCriteria {
  var request =
    feed(feeder)
      .exec(
        http("POST with criteria page 1")
          .post("api/resources?pageNumber=1&pageSize=10&id=${Id}")
          .body(StringBody("""${searchCriterion}"""))
          .check(status.is(200))
      )
}
val basicSearch = scenario("Basic search (no search criteria)").exec(SearchWithCriteria.request)
setUp(
  basicSearch.inject(rampUsers(1) during (1 seconds))
).protocols(httpProtocol)
When I paste the JSON from the CSV file directly into the body statement (as below), it works:
.body(StringBody("""{"basicSearchCriteria":{"name":{"text":"Kas"}}}"""))

Your file is not correct CSV because double quotes are reserved characters that have to be escaped.
From RFC 4180:
If fields are not enclosed with double quotes, then double quotes may not appear inside the fields.
If double-quotes are used to enclose fields, then a double-quote appearing inside a field must be escaped by preceding it with another double quote.
You should have:
Id,searchCriterion
18817,"{""basicSearchCriteria"":{""name"":{""text"":""Kas""}}}"
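If you generate search.csv from code, you do not have to apply the RFC 4180 doubling by hand. As a minimal sketch (in Python rather than Scala, purely for illustration), the standard csv module applies the rule automatically:

import csv, io

# Writing the JSON criterion as a CSV field: the writer quotes the field and
# doubles every inner double quote, as required by RFC 4180.
row = ["18817", '{"basicSearchCriteria":{"name":{"text":"Kas"}}}']
buf = io.StringIO()
csv.writer(buf).writerow(row)
print(buf.getvalue().strip())
# 18817,"{""basicSearchCriteria"":{""name"":{""text"":""Kas""}}}"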

Related

How to remove backslashes ( \ ) from my JSON in Python 3

How do I remove the backslashes from my JSON dump?
My Python code is:
import json

@sio.on('donation')
def on_message(data):
    y = json.loads(data)
    with open('donate.json', 'w') as outfile:
        json.dump(data, outfile)
If I print it, everything is fine and there are no backslashes! But if I open my JSON file, it looks like this:
"{\"id\":107864345,\"alert_type\":\"1\",\"is_shown\":\"0\",\"additional_data\":\"{\\\"randomness\\\":811}\",\"billing_system\":\"fake\",\"billing_system_type\":null,\"username\":\"test24\",\"amount\":\"1.00\",\"amount_formatted\":\"1\",\"amount_main\":1,\"currency\":\"USD\",\"message\":\"aaaaaa aaaa\",\"header\":\"\",\"date_created\":\"2022-12-17 21:57:10\",\"emotes\":null,\"ap_id\":null,\"_is_test_alert\":true,\"message_type\":\"text\",\"preset_id\":0}"
I have tried everything I know.
Your code, annotated:
def on_message(data):
When this function is called, it is provided with the argument data, which is a string containing the JSON encoding for a complex object.
y = json.loads(data)
Now data is still the same string, and y is the complex object which was represented by data.
with open('donate.json', 'w') as outfile:
json.dump(data, outfile)
json.dump takes a data object and turns it into a string. With two arguments, as here, it writes the string to a file. But despite its name, data is not the data object. It's a string. The data object is y.
json.dump will convert any Python object with a JSON representation to a string representing that object, and a string can be represented in JSON. So in this case, the string in data is encoded as a JSON representation. That means that the string must be enclosed in double quotes and any special characters escaped.
But that's not what you wanted. You wanted to dump the data object, which you have named y. Changing that line to
json.dump(y, outfile)
will probably do what you want.
But if you just wanted to write out the string, there wasn't much point converting it to JSON and back to a string. You could just write it out:
outfile.write(data)
Then you can get rid of y (unless you need it somewhere else).
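A minimal sketch of the difference, with a made-up payload (the real one arrives via the socket event):

import json

data = '{"id": 107864345, "username": "test24", "amount": "1.00"}'  # JSON text, as received
y = json.loads(data)                                                # the decoded object

with open('donate.json', 'w') as outfile:
    json.dump(y, outfile)   # writes {"id": 107864345, "username": "test24", "amount": "1.00"}
    # json.dump(data, outfile) would re-encode the string itself and write
    # "{\"id\": 107864345, ...}" -- the backslashes from the question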

How to load a JSON file that has double quotes within a string into a DataFrame in Spark Scala

I have the below JSON file which I want to read into a DataFrame, but I am getting an error because the JSON file has double quotes within the string. For example:
data:{
"Field1":"val"ue 1",
"Field2":"value2",
"Field3":"va"lu"e3"
}
Required output"
Field1,Field2,Field3
Value1,value2,value3
Your JSON is not valid (because of the nested double quotes); this is why you get an error when you read the file with the Spark data source API or with any other JSON parser.
What you can do is read your file as a Dataset of String, then clean each String using a regex to remove the useless double quotes, and finally use the "from_json" function to parse each string and convert your Dataset[String] to a Dataset[<your case class>].
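As a rough sketch of the cleaning step (plain Python here rather than Spark, and assuming the stray quotes never sit directly next to a structural character or whitespace), a regex can drop the inner double quotes before parsing:

import json
import re

raw = '{"Field1":"val"ue 1","Field2":"value2","Field3":"va"lu"e3"}'

# Drop any double quote that is neither preceded by a structural character or
# whitespace nor followed by one; those are the stray quotes inside values.
cleaned = re.sub(r'(?<![{\[:,\s])"(?!\s*[:,}\]])', '', raw)

print(cleaned)             # {"Field1":"value 1","Field2":"value2","Field3":"value3"}
print(json.loads(cleaned)) # parses cleanly once the stray quotes are gone

In Spark, the same expression could be applied to each line of the Dataset[String] before calling from_json.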

AWS Lambda Error : 'Could not parse request body into json ' when url parameter contains JSON array

I am trying to invoke my Lambda function by passing parameters as below. It contains an apostrophe (').
https://invoke_url?param=[["kurlo jack's book","Adventure Books",8.8,1]]
Stringified, this becomes 'https://invoke_url?param=%5B%5B%229780786706211%22s....'
I used the mapping below to pass the parameter to Lambda:
"query": {
#foreach($queryParam in $input.params().querystring.keySet())
"$queryParam": "$util.escapeJavaScript($input.params().querystring.get($queryParam))" #if($foreach.hasNext),#end
#end
}
I got the following error:
{"message": "Could not parse request body into json: Unrecognized character escape \'\'\' (code 39)\n at [Source: [B#5b70c341; line: 29, column: 65]"}
I have also tried removing the double quotes from the mapping template, but it didn't work.
Be sure to add .replaceAll("\\'","'") to your request body passthrough template after .escapeJavaScript(data)
I found this bit from AWS's documentation to be very helpful for this issue:
$util.escapeJavaScript()
Escapes the characters in a string using JavaScript string rules.
Note: This function will turn any regular single quotes (') into escaped ones (\'). However, the escaped single quotes are not valid in JSON. Thus, when the output from this function is used in a JSON property, you must turn any escaped single quotes (\') back to regular single quotes ('). This is shown in the following example:
$util.escapeJavaScript(data).replaceAll("\\'","'")
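To see why the unreplaced output breaks, here is a rough illustration in Python (outside API Gateway entirely): \' is not a valid JSON escape sequence, while a plain ' needs no escaping at all.

import json

bad  = r'{"param": "kurlo jack\'s book"}'   # contains the two characters \ and '
good = bad.replace("\\'", "'")              # what the replaceAll(...) fix produces

try:
    json.loads(bad)
except json.JSONDecodeError as err:
    print("rejected:", err)                 # invalid \escape

print(json.loads(good))                     # {'param': "kurlo jack's book"}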
I don't have a solution but I have narrowed the root cause. Lambda does not seem to like single quotes to be escaped with a single slash.
If you hardcode your mapping template to look like this:
{
  "query-fixed": {
    "param": "[[\"kurlo jack\\'s book\",\"Adventure Books\",8.8,1]]"
  }
}
my test Lambda invocation succeeds. However, if you hardcode the template to this:
{
  "query-fixed": {
    "param": "[[\"kurlo jack\'s book\",\"Adventure Books\",8.8,1]]"
  }
}
I get the same error message that you got above. Unfortunately, the second variation is what API Gateway produces for the Lambda invocation.
A workaround might involve using the template to replace single quotes escaped with one slash by ones escaped with two slashes. See Replace a Substring of a String in Velocity Template Language.
I'll follow up with Lambda internally and update if I hear anything or have a functional workaround.
Try changing your encoding of ' to %27, as defined on this W3Schools page (ironically their example does not encode the single quote either; I guess it's because it belongs to the "supported" ASCII set of characters).
The "query string" (the part in the hyperlink after ?) must be a string. Whatever you have constructing that must be appended to it like: https://invoke_url?a=x&b=y
In your Lambda code put:
if( event.hasOwnProperty( 'params' ) )
  if( event.params.hasOwnProperty( 'querystring' ) )
    params = event.params.querystring;
(obviously some extraneous checks, probably unnecessary but ehh)
In your API Gateway go to:
APIs -> api_name -> Resources -> invoke_url -> GET -> Method Execution
Under URL Query String Parameters "Add query string" a and b (or whatever)
When you hit www.com/invoke_url?a=x&b=y you can now access them with:
...
params = event.params.querystring;
console.log( params.a, params.b );
...

Importing JSON into R with in-line quotation marks

I'm attempting to read the following JSON file ("my_file.json") into R, which contains the following:
[{"id":"484","comment":"They call me "Bruce""}]
using the jsonlite package (0.9.12), the following fails:
library(jsonlite)
fromJSON(readLines('~/my_file.json'))
receiving an error:
"Error in parseJSON(txt) : lexical error: invalid char in json text.
84","comment":"They call me "Bruce""}]
(right here) ------^"
Here is the output of readLines, showing R's escaping of the file contents:
readLines('~/my_file.json')
"[{\"id\":\"484\",\"comment\":\"They call me \"Bruce\"\"}]"
Removing the quotes around "Bruce" solves the problem, as in:
my_file.json
[{"id":"484","comment":"They call me Bruce"}]
But what is the issue with the escaping?
In R, string literals can be defined using single or double quotes.
e.g.
s1 <- 'hello'
s2 <- "world"
Of course, if you want to include double quotes inside a string literal defined using double quotes you need to escape (using backslash) the inner quotes, otherwise the R code parser won't be able to detect the end of the string correctly (the same holds for single quote).
e.g.
s1 <- "Hello, my name is \"John\""
If you print this string to the console (using cat¹), or write it to a file, you will get the actual "face" of the string, not the R literal representation, that is:
> cat("Hello, my name is \"John\"")
Hello, my name is "John"
The JSON parser reads the actual "face" of the string, so in your case the parser reads:
[{"id":"484","comment":"They call me "Bruce""}]
not (the R literal representation) :
"[{\"id\":\"484\",\"comment\":\"They call me \"Bruce\"\"}]"
That being said, the JSON parser also requires double quotes to be escaped when they appear inside strings.
Hence, your string should be modified in this way:
[{"id":"484","comment":"They call me \"Bruce\""}]
If you simply modify your file by adding the backslashes you will be perfectly able to read the json.
Note that the corresponding R literal representation of that string would be :
"[{\"id\":\"484\",\"comment\":\"They call me \\\"Bruce\\\"\"}]"
in fact, this works :
> fromJSON("[{\"id\":\"484\",\"comment\":\"They call me \\\"Bruce\\\"\"}]")
id comment
1 484 They call me "Bruce"
¹
The default R print function (also invoked when you simply press ENTER on a value) displays the corresponding R string literal. If you want to print the actual string, you need to use print(quote=F, stringToPrint), or the cat function.
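The same behaviour can be reproduced with any strict JSON parser; here is a small sketch using Python's json module, purely to show that the escaped form is the one parsers accept:

import json

bad  = '[{"id":"484","comment":"They call me "Bruce""}]'     # unescaped inner quotes
good = '[{"id":"484","comment":"They call me \\"Bruce\\""}]' # \" around Bruce

try:
    json.loads(bad)
except json.JSONDecodeError as err:
    print("invalid:", err)              # fails, just as fromJSON does

print(json.loads(good)[0]["comment"])   # They call me "Bruce"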
EDIT (on #EngrStudent comment on the possibility to automatize quotes escaping) :
A JSON parser cannot do quote escaping automatically.
I mean, try to put yourself in the computer's shoes and imagine you had to parse this (unescaped) string as JSON: { "foo1" : " : "foo2" : "foo3" }
I see at least three possible escapings giving valid JSON:
{ "foo1" : " : \"foo2\" : \"foo3" }
{ "foo1\" : " : "foo2\" : \"foo3" }
{ "foo1\" : \" : \"foo2" : "foo3" }
As you can see from this small example, escaping is really necessary to avoid ambiguities.
Maybe, if the string you want to escape has a really particular structure where you can recognize (without uncertainty) the double-quotes needing to be escaped, you can create your own automatic escaping procedure, but you need to start from scratch, because there's nothing built-in.

JSON.parse on file input differs from parsing a string literal

I'm using Node.js to parse some JSON files and insert them into MongoDB. The JSON in these files has invalid JSON characters like \n, \", etc.
The thing that I don't understand is that if I try to parse like:
console.log(JSON.parse('{"foo":"bar\n"}'))
I get
undefined:1
{"foo":"bar
but if I try to parse the input from the file (the file has the same string {"foo":"bar\n"}), like:
var fs = require('fs');
var lazy = require('lazy');

new lazy(fs.createReadStream("info.json"))
  .lines
  .forEach(function(line) {
    var line = line.toString();
    console.log(JSON.parse(line));
  });
everything works fine. I want to know if this is fine and whether it's OK to parse the files I have, or whether I should replace all invalid JSON characters before I parse the files, and why there is a difference between the two.
Thanks
If you can read "\n" if your text file, then it's not an end of line but the \ character followed by a n.
\n in a JavaScript string literal produces an actual end-of-line character, and literal end-of-line characters are forbidden in JSON strings.
See json.org:
To put an end of line in a JSON string, you must escape it, which means you must escape the \ in a JavaScript string so that there's "\n" in the string received by JSON.parse:
console.log(JSON.parse('{"foo":"bar\\n"}'))
This would produce an object whose foo property value would contain an end of line.
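The same distinction can be reproduced outside JavaScript; as a small sketch in Python, a real end-of-line character inside a JSON string is rejected, while the two-character sequence \n parses as a newline:

import json

literal_newline = '{"foo":"bar\n"}'    # the string contains an actual line break
escaped_newline = '{"foo":"bar\\n"}'   # the string contains backslash + n

try:
    json.loads(literal_newline)
except json.JSONDecodeError as err:
    print("rejected:", err)            # invalid control character

print(json.loads(escaped_newline))     # {'foo': 'bar\n'}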