JMeter body data from a CSV file

I am running some tests in JMeter and the accepted data has to be in the format shown in the example below:
{
  "messageType": 1,
  "customerId": 5922429,
  "name": "Joe Bloggs",
  "phone": "01234567890",
  "postcode": "PO6 3EN",
  "emailAddress": "joe.bloggs@example.com",
  "jobDescription": "do some stuff",
  "companyIds": [893999]
}
Now this works great, but we want to randomise things a little and read the test data from a CSV file with about 20 samples.
Is this possible with the data having to be set out as above?
Currently the Body Data sits directly in the HTTP Request sampler.

You have 2 options:
Modify your payload to rely on JMeter Variables like:
{
  "messageType": ${messageType},
  "customerId": ${customerId},
  "name": "${name}",
  "phone": "${phone}",
  "postcode": "${postcode}",
  "emailAddress": "${emailAddress}",
  "jobDescription": "${jobDescription}",
  "companyIds": [${companyIds}]
}
Once done, you can put the values into a CSV file, like:
messageType,customerId,name,phone,postcode,emailAddress,jobDescription,companyIds
1,5922429,Joe Bloggs,01234567890,PO6 3EN,joe.bloggs@example.com,do some stuff,893999
2,5922430,Jane Doe,0987654321,PO6 3EM,janedoe@example.com,do some other stuff,893998
and read the data using the CSV Data Set Config element so that each virtual user takes the next line on each iteration and populates the body with the new values.
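A minimal sketch of the CSV Data Set Config settings this relies on (the file name is an assumption; the Variable Names must match the placeholders in the payload above):
  Filename:           testdata.csv
  Variable Names:     messageType,customerId,name,phone,postcode,emailAddress,jobDescription,companyIds
  Ignore first line:  True   (the CSV above has a header row)
  Delimiter:          ,
  Recycle on EOF?:    True
  Sharing mode:       All threads
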
If you have 20 different JSON files, you can use the Directory Listing Config plugin to load the file paths and the __FileToString() function to read each file's content from the file system.
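In that case the HTTP Request's Body Data can be reduced to a single function call, roughly like this (the ${jsonFile} variable name is an assumption; use whichever destination variable you configure in the Directory Listing Config):
  ${__FileToString(${jsonFile},UTF-8,)}
On each iteration the Directory Listing Config hands out the next file path and __FileToString() inlines that file's content as the request body.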

Related

Modifying JSON Data Structure in Data Factory

I have a JSON file that I need to move to Cosmos DB. I currently have a PowerShell script that modifies this file into the proper format to be used in a Data Flow or Copy activity in Azure Data Factory. However, I was wondering if there is a way to do all these modifications in Azure Data Factory without using the PowerShell script.
The PowerShell script can manipulate a 50 MB file in a matter of seconds. I would also like similar speeds if we build something directly in Azure Data Factory.
Without the modification, I get an error because of the "#" sign. Furthermore, if I want to use companyId as my partition key, it is not allowed because it is inside an array.
The current JSON file looks similar to the below:
{
  "Extract": {
    "consumptionInfo": {
      "Name": "Test Stuff",
      "createdOnTimestamp": "20200101161521Z",
      "Version": "1.0",
      "extractType": "Incremental",
      "extractDate": "20200101113514Z"
    },
    "company": [{
      "company": {
        "#action": "create",
        "companyId": "xxxxxxx-yyyy-zzzz-aaaa-bbbbbbbbbbbb",
        "Status": "1",
        "StatusName": "1 - Test - Calendar"
      }
    }]
  }
}
I would like it to be converted to the below:
{
  "action": "create",
  "companyId": "xxxxxxx-yyyy-zzzz-aaaa-bbbbbbbbbbbb",
  "Status": "1",
  "StatusName": "1 - Test - Calendar"
}
Create a new data flow that reads your JSON file. Add a Select transformation to choose the properties you wish to send to Cosmos DB. If some of those properties are embedded inside an array, use a Flatten transformation first. You can also use the Select transformation to rename "#action" to "action".
Neither Data Factory nor Data Flow works well with nested JSON files. In my experience the workaround is a little complex, but it works well:
Source 1 + Flatten transformation 1 to flatten the data in the key 'Extract'.
Source 2 (the same as Source 1) + Flatten transformation 2 to flatten the data in the key 'company'.
Add a Union transformation to the Source 1 flow to join the data after Flatten transformation 2.
Create a Derived Column to filter the columns/keys you want after the Union.
Then create the Azure Cosmos DB as the sink.
The Data Flow overview should look like this:

Load multiple growing JSON files with the ELK stack

I crawled a lot of JSON files into a data folder, all named by timestamp (./data/2021-04-05-12-00.json, ./data/2021-04-05-12-30.json, ./data/2021-04-05-13-00.json, ...).
Now I'm trying to use the ELK stack to load those growing JSON files.
The JSON file is pretty printed like:
{
  "datetime": "2021-04-05 12:00:00",
  "length": 3,
  "data": [
    {
      "id": 97816,
      "num_list": [1, 2, 3],
      "meta_data": "{'abc', 'cde'}",
      "short_text": "This is data 97816"
    },
    {
      "id": 97817,
      "num_list": [4, 5, 6],
      "meta_data": "{'abc'}",
      "short_text": "This is data 97817"
    },
    {
      "id": 97818,
      "num_list": [],
      "meta_data": "{'abc', 'efg'}",
      "short_text": "This is data 97818"
    }
  ]
}
I tried using the Logstash multiline plugin to ingest the JSON files, but it seems to handle each file as one event. Is there any way to turn each record in the JSON data field into its own event?
Also, what is the best practice for loading multiple growing, pretty-printed JSON files into ELK?
Using multiline is correct if you want to handle each file as one input event.
Then you need to leverage the split filter in order to create one event for each element in the data array:
filter {
  split {
    field => "data"
  }
}
So Logstash reads one file as a whole and passes its content as a single event to the filter layer; the split filter shown above then spawns one new event for each element of the data array.
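For context, here is a minimal end-to-end pipeline sketch along those lines (the path, index name and multiline pattern are assumptions; the pattern presumes the top-level "{" starts at column 0 while nested braces are indented):
input {
  file {
    path => "/path/to/data/*.json"
    start_position => "beginning"
    sincedb_path => "/dev/null"       # re-reads files on every run; drop this for production
    codec => multiline {
      pattern => "^\{$"               # a new event starts where an unindented "{" appears
      negate => true
      what => "previous"
      auto_flush_interval => 2        # flush the last document even if no new "{" follows
    }
  }
}
filter {
  json {
    source => "message"               # parse the whole pretty-printed document
  }
  split {
    field => "data"                   # one event per element of the data array
  }
}
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "crawl-data"
  }
}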

JSON request using Postman

I am sending a raw JSON request using Postman to an API service which feeds it to another web service and finally a database. I want to attach a file to the raw JSON request.
I am attaching below the current request I am sending. Is it the right way? The first name and other information go through, but the attachment does not. Any suggestions?
{
  "Prefix": "",
  "FirstName": "test-resume-dlyon",
  "LastName": "test-dlyon-resume",
  "AddressLine1": "test2",
  "AddressLine2": "",
  "City": "Invalid Zipcode",
  "State": "GA",
  "Zip": "99999",
  "Phone": "9999999999",
  "Email": "testresumedlyon@gmail.com",
  "Source": "V",
  "WritingNumber": "",
  "AgeVerified": true,
  "AdditionalSource": "",
  "EnableInternetSource": true,
  "InternetSource": "",
  "ExternalResult": "",
  "PartnerID": "",
  "SubscriberID": "15584",
  "Languages": [
    "English",
    "Spanish"
  ],
  "fileName": "resume",
  "fileExtension": "docx",
  "fileData": "UELDMxE76DDKlagmIF5caEVHmJYFv2qF6DpmMSkVPxVdtJxgRYV"
}
There is no "correct" format to attach a file to a JSON.
JSON is not multipart/form-data (which is designed to include files).
JSON is a text-based data format with a variety of data types (such as strings, arrays, and booleans) but nothing specific for files.
This means that to attach a file, you have to get creative.
For example, you could encode the file as text (e.g. using base64), but it wouldn't be very efficient, and any Word document would result in a much longer string than "UELDMxE76DDKlagmIF5caEVHmJYFv2qF6DpmMSkVPxVdtJxgRYV".
Of course, the method you use to encode the file has to be the method that whatever is reading the JSON expects you to use. Since there is no standard for this, and you have said nothing about the system which is consuming the JSON you are sending, we have no idea what that method is.
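As a rough illustration only (not necessarily what your particular API expects), base64-encoding a document and placing it in the fileData field could look like this:
import base64
import json

# Read the binary file and base64-encode it; the field names mirror the request above
with open("resume.docx", "rb") as f:
    file_data = base64.b64encode(f.read()).decode("ascii")

payload = {
    "FirstName": "test-resume-dlyon",
    "fileName": "resume",
    "fileExtension": "docx",
    "fileData": file_data,  # typically far longer than the short string in the question
}

print(json.dumps(payload)[:120])  # preview of the JSON body you would send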
First of all, I'd recommend reading the Postman API docs; they have some extremely useful information on using the API, and a couple of articles in particular might be of interest here.
Running your request through a JSON validator shows that there are no syntax errors, so it must be to do with the JSON parameters the API is expecting.
Here's something you can try:
In Postman, set the method type to POST.
Then select Body -> form-data -> enter your parameter name (file, according to your code),
and on the right side, next to the value column, there is a "Text / File" dropdown; select File, choose your file and post it.
For the rest of the text-based parameters, post them as you normally would with Postman: enter the parameter name, select "Text" from that dropdown, enter a value, and hit Send. Your controller method should get called.

Issue with Cloud Datastore backup in BigQuery

I used an App Engine Datastore backup file to create a BigQuery table. The issue I face is that all the JSON values are treated as 'flattened strings' by default.
I couldn't access the repeated string values, for example the value below for the column qoption:
[{
  "optionId": 0,
  "optionTitle": "All inclusive",
  "optionImageUrl": "http://sampleurl",
  "masterCatInfo": 95680,
  "brInfo": 56502428160,
  "category": "",
  "tags": ["Holiday"]
}, {
  "optionId": 1,
  "optionTitle": "Self catered",
  "optionImageUrl": "http://sampleurl1",
  "masterCatInfo": 520280,
  "brId": 56598160,
  "category": "",
  "tags": ["Holiday"]
}]
Is it possible to recreate the existing table with that column in nested JSON format, ideally through the BQ CLI, so that I can access qoption.optionId, qoption.optionTitle, etc.?
Take a look at Nested and Repeated Data. Basically you have to manually set up your BigQuery schema with a nested data schema. Once that is done and your data is imported, you should be able to use your nested properties.
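For illustration, a nested/repeated schema for the qoption column might look like this in the JSON schema format the bq CLI accepts (field names are taken from the sample above; the types are assumptions):
[
  {
    "name": "qoption",
    "type": "RECORD",
    "mode": "REPEATED",
    "fields": [
      {"name": "optionId", "type": "INTEGER"},
      {"name": "optionTitle", "type": "STRING"},
      {"name": "optionImageUrl", "type": "STRING"},
      {"name": "masterCatInfo", "type": "INTEGER"},
      {"name": "brInfo", "type": "INTEGER"},
      {"name": "category", "type": "STRING"},
      {"name": "tags", "type": "STRING", "mode": "REPEATED"}
    ]
  }
]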
Alternatively, BigQuery can parse your JSON ad hoc.
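As a sketch of that ad-hoc approach, assuming the flattened value lands in a plain STRING column called qoption, BigQuery's JSON functions can pull individual fields out at query time (the table name is hypothetical):
SELECT
  JSON_EXTRACT_SCALAR(qoption, '$[0].optionId')    AS first_option_id,
  JSON_EXTRACT_SCALAR(qoption, '$[0].optionTitle') AS first_option_title
FROM `my_project.my_dataset.my_table`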

DocPad page generation from a JSON array or similar

Is there any way to configure DocPad to generate pages starting from a JSON array saved in an external file (or an inline string) instead of from a collection of files?
To clarify: to show post details I currently fetch from a JSON file, see below.
instead of this:
<% for post in #getCollection("html").findAll({ relativeOutDirPath: 'posts' }).toJSON(): %>
I use this:
<% for post in JSON.parse #include("posts.json"): %>
OK. Now I would like to generate the post pages directly from this JSON instead of creating a file for each post as in the example.
For example, I would like to create a page with the URL /posts/{urlname}.html whenever {urlname} exists in JSON like this:
[
{ "id": "1", "urlname": "prod1", "metadata": { "title": "val1" } },
{ "id": "2", "metadata": null },
{ "id": "3", "urlname": "prod3", "metadata": { "title": "val1b", "prop2": "val2b" } }
]
I would like to generate the /posts/prod1.html and /posts/prod3.html pages, with their metadata taken from the metadata properties.
Thanks for the replies. ;)
PS: Great work!
Currently there is no official way to inject data into DocPad's in-memory database besides having it parsed from the file system in the src directory (the way we are all used to). HOWEVER, this feature (called importers) is the next big to-do for DocPad; you can find the task issue here.
In the meantime, you could include the JSON inside your template data, which is suitable for content listings but not for providing individual documents for each entry.
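A minimal sketch of that interim template-data approach, assuming posts.json sits next to docpad.coffee (the path and the posts key are illustrative):
# docpad.coffee
docpadConfig =
  templateData:
    # Load the array once at startup so every template can iterate over it
    posts: require('./posts.json')
module.exports = docpadConfig
In a template you could then loop with <% for post in @posts: %>, though, as noted above, this only helps with listings, not with generating a separate output document per entry.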