JMeter: How can I randomize post body data several times?

I have POST body data like this:
"My data": [{
    "Data": {
        "var1": 6.66,
        "var2": 8.88
    },
    "var3": 9
}],
As written, "My data" is posted just once per request. I want it to repeat a random number of times, from 1 to 10, so that the "My data" block runs several times but randomly. For example, if the random value is 2, then "My data" should run twice.
Help appreciated!

If you need to generate more blocks like this one:
{
    "Data": {
        "var1": 6.66,
        "var2": 8.88
    },
    "var3": 9
}
It can be done with a JSR223 PreProcessor and the following Groovy code:
def myData = []
1.upto(2, {
    def entry = [:]
    entry.put('Data', [var1: 6.66, var2: 8.88])
    entry.put('var3', 9)
    myData.add(entry)
})
// expose the generated JSON as a JMeter variable
vars.put('myData', new groovy.json.JsonBuilder(myData).toPrettyString())
log.info(vars.get('myData'))
The above example generates 2 blocks. If you want 10, change the 2 in the 1.upto(2, { line to 10. To get a random count between 1 and 10, as the question asks, you could replace the 2 with an expression like new Random().nextInt(10) + 1.
The generated data can be accessed as ${myData} where needed.
More information:
Apache Groovy - Parsing and producing JSON
Apache Groovy - Why and How You Should Use It

Dask: how to open JSON with a list of dicts

I'm trying to open a bunch of JSON files using read_json in order to get a DataFrame like the following:
ddf.compute()
   id    owner   pet_id
0   1  "Charlie" "pet_1"
1   2  "Charlie" "pet_2"
3   4  "Buddy"   "pet_3"
but I'm getting the following error when I run this code:
import pandas as pd
import dask.dataframe as dd

_meta = pd.DataFrame(
    columns=["id", "owner", "pet_id"]
).astype({
    "id": int,
    "owner": "object",
    "pet_id": "object"
})
ddf = dd.read_json("mypets/*.json", meta=_meta)
ddf.compute()
*** ValueError: Metadata mismatch found in `from_delayed`.
My JSON files look like:
[
    {
        "id": 1,
        "owner": "Charlie",
        "pet_id": "pet_1"
    },
    {
        "id": 2,
        "owner": "Charlie",
        "pet_id": "pet_2"
    }
]
As far as I understand, the problem is that I'm passing a list of dicts, so I'm looking for the right way to specify it in the meta= argument.
PS:
I also tried structuring the data in the following way:
{
    "id": [1, 2],
    "owner": ["Charlie", "Charlie"],
    "pet_id": ["pet_1", "pet_2"]
}
But Dask interprets the data incorrectly:
ddf.compute()
   id      owner                   pet_id
0  [1, 2]  ["Charlie", "Charlie"]  ["pet_1", "pet_2"]
1  [4]     ["Buddy"]               ["pet_3"]
The invocation you want is the following:
dd.read_json("data.json", meta=meta,
             blocksize=None, orient="records",
             lines=False)
which can be largely gleaned from the docstring.
meta looks OK in your code.
blocksize must be None, since each file holds one whole JSON document and therefore cannot be split.
orient="records" means a list of objects.
lines=False means this is not a line-delimited JSON file (you are not assuming that a newline character starts a new record); line-delimited is the more common case for Dask.
So why the error? Probably Dask split your file on some newline character, so a partial record got parsed, which did not match the given meta.
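Putting the pieces together, a minimal runnable sketch (the mypets/*.json path, column names, and dtypes are taken from the question):

import pandas as pd
import dask.dataframe as dd

# Empty frame describing the expected columns and dtypes
meta = pd.DataFrame(columns=["id", "owner", "pet_id"]).astype(
    {"id": int, "owner": "object", "pet_id": "object"}
)

# One whole JSON array per file: no splitting, records orient, not line-delimited
ddf = dd.read_json(
    "mypets/*.json",
    meta=meta,
    blocksize=None,
    orient="records",
    lines=False,
)
print(ddf.compute())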

Django - Access JSON

I have JSON like this:
{
    "id": 1,
    "interviewer": "hengtw1",
    "incidenttwg1": {
        "id": 5,
        "child_occupation": [
            6
        ]
    }
}
How can I access the child_occupation array? I tried incidenttwg1['child_occupation'] and ['incidenttwg1']['child_occupation'], but neither works.
Any help? Thanks.
Check whether your string is valid JSON in Python, and refer to the documentation on the json encoder and decoder to learn more.
import json

# Decoding JSON: json.loads takes a string, so the payload must be quoted
data = json.loads('{"id": 1, "interviewer": "hengtw1", "incidenttwg1": {"id": 5, "child_occupation": [6]}}')

print(data["incidenttwg1"]["child_occupation"])
# this will print [6] (a list)

print(data["incidenttwg1"]["child_occupation"][0])
# this will print 6 (a list item)
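If the JSON arrives in a Django view rather than as a literal string, the same lookup applies to the parsed request payload; a sketch, assuming a POST request with a JSON body (the view name is hypothetical):

import json
from django.http import JsonResponse

def interview_detail(request):
    # request.body is the raw request payload; parse it into a dict
    data = json.loads(request.body)
    child_occupation = data["incidenttwg1"]["child_occupation"]
    return JsonResponse({"child_occupation": child_occupation})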

JSON Response printing issue for Python

I am hitting an API to get an ID based on parameters passed from a database; the script below shows only the API part. I am passing 2 columns. If both columns have data in the database, the script hits API-1. If only the 2nd column has data, it hits API-2 via API-1. The problem is printing the response, because the two APIs have different structures.
Pretty Structure of API-1:
"body": {
"type1": {
"id": id_data,
"col1": "col1_data",
"col2": "col2_data"}
}
Pretty Structure of API-2:
"body": {
"id": id_data,
"col2": "col2_data"
}
Python Code:
print (resp['body']['type1']['id'], resp['body']['type1']['col1'], resp['body']['type1']['col2'])
As you can see, the structures differ: the print works when both parameters are sent, but it fails when only the 2nd column is sent as a parameter.
Create a printer that handles both cases gracefully:
id_data = "whatever"
data1 = {"body": {"type1": {"id": id_data, "col1": "col1_data", "col2": "col2_data"}}}
data2 = {"body": {"id": id_data, "col2": "col2_data"}}

def printit(d):
    # try to access d["body"]["type1"] - fall back to d["body"] - store it
    myd = d["body"].get("type1", d["body"])
    did = myd.get("id")     # get id
    col1 = myd.get("col1")  # maybe get col1 (absent in API-2)
    col2 = myd.get("col2")  # get col2
    # print what is surely there, check for those that might be missing
    print(did)
    if col1:
        print(col1)
    print(col2 + "\n")

printit(data1)
printit(data2)
Output:
# data1
whatever
col1_data
col2_data
# data2
whatever
col2_data
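Since the stated goal is to get the ID, the same fallback trick can be reduced to a small helper; a sketch (extract_id is a hypothetical name, data1/data2 as above):

def extract_id(resp):
    # Fall back to resp["body"] when the "type1" wrapper is absent (API-2)
    return resp["body"].get("type1", resp["body"]).get("id")

print(extract_id(data1))  # whatever
print(extract_id(data2))  # whatever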

How to take any CSV file and convert it to JSON? (with Python as a script engine) [Novice user trying to learn NiFi]

1) There is a CSV file containing the following information (the first row is the header):
first,second,third,total
1,4,9,14
7,5,2,14
3,8,7,18
2) I would like to find the sum of individual rows and generate a final file with a modified header. The final file should look like this:
[
    {
        "first": 1,
        "second": 4,
        "third": 9,
        "total": 14
    },
    {
        "first": 7,
        "second": 5,
        "third": 2,
        "total": 14
    },
    {
        "first": 3,
        "second": 8,
        "third": 7,
        "total": 18
    }
]
But it does not work, and I am not sure how to fix it. Can anyone help me understand how to approach this problem?
NiFi flow:
Although I'm not into Python, just from googling around I think this might do it:
import csv
import json

with open("YOURFILE.csv") as f:
    reader = csv.DictReader(f)
    # DictReader yields strings, so convert the values to ints
    data = [{k: int(v) for k, v in r.items()} for r in reader]

with open("result.json", "w") as outfile:
    json.dump(data, outfile, indent=4)
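If the input CSV does not already contain the total column and the sum has to be computed (as the question's wording suggests), it can be added in the same pass; a sketch, using the column names from the question:

import csv
import json

with open("YOURFILE.csv") as f:
    rows = [{k: int(v) for k, v in r.items()} for r in csv.DictReader(f)]

# Add (or recompute) the per-row sum
for r in rows:
    r["total"] = r["first"] + r["second"] + r["third"]

with open("result.json", "w") as outfile:
    json.dump(rows, outfile, indent=4)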
You can use the QueryRecord processor and add a new property named
total
with the value
select first,second,third,first+second+third total from FLOWFILE
Configure the CSVReader controller service with a matching Avro schema that uses int as the datatype for all the fields, and configure a JsonRecordSetWriter controller service that includes the total field, so that the output from the QueryRecord processor contains all the columns plus the sum of the columns as total.
Connect the total relationship from the QueryRecord processor for further processing.
Refer to these links regarding QueryRecord and configuring the Record Reader/Writer.

Compare input to fields in a JSON file in Ruby

I am trying to create a function that takes an input, which in this case is a tracking code, looks that tracking code up in a JSON file, and then returns the matching record as output. The JSON file is as follows:
[
    {
        "tracking_number": "IN175417577",
        "status": "IN_TRANSIT",
        "address": "237 Pentonville Road, N1 9NG"
    },
    {
        "tracking_number": "IN175417578",
        "status": "NOT_DISPATCHED",
        "address": "Holly House, Dale Road, Coalbrookdale, TF8 7DT"
    },
    {
        "tracking_number": "IN175417579",
        "status": "DELIVERED",
        "address": "Number 10 Downing Street, London, SW1A 2AA"
    }
]
I have started using this function:
def compare_content(tracking_number)
  File.open("pages/tracking_number.json", "r") do |file|
    file.print()
  end
end
I'm not sure how to compare the input to the JSON file. Any help would be much appreciated.
You can use the built-in JSON module.
require 'json'

def compare_content(tracking_number)
  # Reads the ENTIRE file into a string. Will not be effective on very large files
  json_string = File.read("pages/tracking_number.json")
  # Uses the JSON module to create an array from the JSON string
  array_from_json = JSON.parse(json_string)
  # Iterates through the array of hashes
  array_from_json.each do |tracking_hash|
    if tracking_number == tracking_hash["tracking_number"]
      # tracking_hash has the data for the number you are looking up
      return tracking_hash
    end
  end
  nil # no match found
end
This will parse the JSON supplied into an array of hashes which you can then compare to the number you are looking up.
If you are the one generating the JSON file and this method will be called a lot, consider mapping the tracking numbers directly to their data, so the method can run much faster. For example:
{
    "IN175417577": {
        "status": "IN_TRANSIT",
        "address": "237 Pentonville Road, N1 9NG"
    },
    "IN175417578": {
        "status": "NOT_DISPATCHED",
        "address": "Holly House, Dale Road, Coalbrookdale, TF8 7DT"
    },
    "IN175417579": {
        "status": "DELIVERED",
        "address": "Number 10 Downing Street, London, SW1A 2AA"
    }
}
That would parse into a hash, where you could much more easily grab the data:
require 'json'

def compare_content(tracking_number)
  json_string = File.read("pages/tracking_number.json")
  hash_from_json = JSON.parse(json_string)
  if hash_from_json.key?(tracking_number)
    hash_from_json[tracking_number]
  else
    nil # tracking number does not exist
  end
end