I have generated a JSON output using Talend. However, my problem is that all my records are output on one row in the JSON file. Below is the sample output:
[{"field1":"value1_1","field2":"value2_1","field3":"value3_1"},{"field1":"value1_2","field2":"value2_2","field3":"value3_2"},{"field1":"value1_3","field2":"value2_3","field3":"value3_3"}]
My desired output is to have each JSON record separated by a newline in the output file:
[{"field1":"value1_1","field2":"value2_1","field3":"value3_1"},
{"field1":"value1_2","field2":"value2_2","field3":"value3_2"},
{"field1":"value1_3","field2":"value2_3","field3":"value3_3"}]
Thanks in advance for the help!
There is no direct way to do it, but if necessary you can re-read the file as a raw file using a tFileInputRaw component, then replace every "},{" with "},\n{" in a tJavaRow component.
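Outside of Talend, the core of that tJavaRow step is a plain string replacement; a minimal sketch in Java (the class and method names are illustrative, not Talend API):

```java
// Sketch of the tJavaRow replacement: insert a newline between adjacent
// JSON records so each one ends up on its own line.
public class JsonNewlines {

    // "},{" marks the boundary between two records in the one-line output.
    static String addNewlines(String rawJson) {
        return rawJson.replace("},{", "},\n{");
    }

    public static void main(String[] args) {
        String oneLine = "[{\"field1\":\"value1_1\"},{\"field1\":\"value1_2\"}]";
        System.out.println(addNewlines(oneLine));
    }
}
```

Note this is purely textual: it assumes no string value in the data itself contains the sequence "},{".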
Use the tFileOutputJSON component, which collects the rows as a list into the JSON file. You may then use a tFileInput component to read it back and send it as the response.
I want to pass a value from my CSV file into a JSON Extractor, but it's not working.
I have tried [?(#.name == '${UserName}')].id, but when I simply write the username directly instead of taking it from the CSV, it works.
Double-check that your UserName variable exists and has the anticipated value, using a Debug Sampler and View Results Tree listener combination.
As long as the associated JMeter variable exists, you should be able to use it in the JSON Extractor.
Can I get help creating a table on AWS Athena?
Here is a sample of the data:
[{"lts": 150}]
AWS Glue generates the schema as:
array (array<struct<lts:int>>)
When I try to preview the table created by AWS Glue, I get this error:
HIVE_BAD_DATA: Error parsing field value for field 0: org.openx.data.jsonserde.json.JSONObject cannot be cast to org.openx.data.jsonserde.json.JSONArray
The error message is clear, but I can't find the source of the problem!
Hive running under AWS Athena uses Hive-JSON-Serde to serialize/deserialize JSON. For some reason, it doesn't support just any standard JSON: it requires one record per line, without an array. In their words:
The following example will work.
{ "key" : 10 }
{ "key" : 20 }
But this won't:
{
"key" : 20,
}
Nor this:
[{"key" : 20}]
You should create a JSON classifier to convert the array into a list of objects instead of a single array object. Use the JSON path $[*] in your classifier, then set up the crawler to use it:
Edit crawler
Expand 'Description and classifiers'
Click 'Add' on the left pane to associate your classifier with the crawler
After that, remove the previously created table and re-run the crawler. It will create a table with the proper schema, but I think Athena will still complain when you try to query it. However, you can now read from that table with a Glue ETL job and process single record objects instead of array objects.
This JSON - [{"lts": 150}] - would work like a charm with the query below:
select n.lts from table_name
cross join UNNEST(table_name.array) as t (n)
The output would be a single row with lts = 150.
But I have faced a challenge with JSON like [{"lts": 150},{"lts": 250},{"lts": 350}].
Even though there are 3 elements in the JSON, the query returns only the first one. This may be because of the limitation noted by @artikas.
Of course, we can change the JSON as below to make it work:
{"lts": 150}
{"lts": 250}
{"lts": 350}
Please post if anyone has a better solution.
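One way to get that one-record-per-line layout is a small preprocessing step before the data lands in S3. A naive Java sketch (it assumes the flat record structure shown above, with no "},{" sequence inside string values):

```java
// Naive JSON-array-to-line-delimited conversion: strip the outer
// brackets, then turn each record boundary into a newline. The commas
// are dropped because the SerDe expects bare records, one per line.
public class JsonArrayToLines {

    static String toLineDelimited(String jsonArray) {
        String body = jsonArray.trim();
        if (body.startsWith("[")) body = body.substring(1);
        if (body.endsWith("]")) body = body.substring(0, body.length() - 1);
        return body.replace("},{", "}\n{");
    }

    public static void main(String[] args) {
        System.out.println(toLineDelimited("[{\"lts\": 150},{\"lts\": 250},{\"lts\": 350}]"));
    }
}
```

For anything beyond flat records like these, a real JSON parser would be safer than string surgery.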
I have a csv file that looks like this:
varCust_id,varCust_name,varCity,varStateProv,varCountry,varUserId,varUsername
When I run the HTTP POST request to create a new customer, I get a JSON response. I am extracting the cust_id and cust_name using the JSON Extractor. How can I enter these new values into the CSV for the correct variables? For example, after creating the customer, the CSV would look like this:
varCust_id,varCust_name,varCity,varStateProv,varCountry,varUserId,varUsername
1234,My Customer Name
Or once I create a user, the file might look like this:
varCust_id,varCust_name,varCity,varStateProv,varCountry,varUserId,varUsername
1234,My Customer Name,,,,9876,myusername
In my searching through the net, I have found ways to append these extracted variables as a new line, but in my case I need to place each value in the correct location so it is associated with the correct variable I have set up in the CSV file.
I believe what you're looking to do can be done via a BeanShell PostProcessor and is answered here.
Thank you for the reply. I ended up using User Defined Variables for some things and BeanShell PreProcessors for other bits vs. using the CSV.
Well, I have never tried this, but what you can do is create all these variables and set them to null / 0.
Then update them during your execution. At the end, concatenate them with a delimiter (say ";" or tab) and push the result into the CSV as a single string.
Once the data is in the CSV, you can easily split it in MS Excel.
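In Java terms (the kind of code a BeanShell element would run), that concatenation step could look like the sketch below; the variable values are placeholders and no JMeter API is involved:

```java
// Build one CSV line from the collected variable values; a null value
// becomes an empty field so the column positions stay aligned with the
// header row.
public class CsvLineBuilder {

    static String toCsvLine(String[] values) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < values.length; i++) {
            if (i > 0) sb.append(',');
            sb.append(values[i] == null ? "" : values[i]);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // Hypothetical values for the seven columns in the question's CSV.
        String[] vars = {"1234", "My Customer Name", null, null, null, "9876", "myusername"};
        System.out.println(toCsvLine(vars));
        // prints: 1234,My Customer Name,,,,9876,myusername
    }
}
```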
I'm converting XML to CSV using DataMapper, but the problem is that I'm unable to use a space or tab as the delimiter. Please let me know what configuration needs to be done in the DataMapper component.
XML :
<OrderId>10</OrderId>
<CustomerName>John</CustomerName>
<OrderId>11</OrderId>
<CustomerName>Tom</CustomerName>
Expected CSV format is :
10 John
11 Tom
Please advise.
Thanks in advance :)
You can choose the delimiter when creating the CSV output format. You can change it through the "Open Properties" button in the output mapping area of the properties view.
Through this, you can achieve using a tab as the delimiter.
I tried to use a space as the delimiter and was not able to do it through the UI, but I succeeded by modifying the mapping file (mappings/XXX.grp):
<Record fieldDelimiter=" " name="OUTPUT_FORMAT_NAME"
Best regards,
José
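For reference, the same tab-delimited output could also be produced outside DataMapper with plain JDK code. A sketch, assuming well-formed OrderId and CustomerName tags as in the sample; the parsing is a simplistic regex over that flat structure, not a real XML parser:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Extract each OrderId/CustomerName pair and emit it as one
// tab-delimited line.
public class XmlToTabCsv {

    static String convert(String xml) {
        Matcher id = Pattern.compile("<OrderId>(.*?)</OrderId>").matcher(xml);
        Matcher name = Pattern.compile("<CustomerName>(.*?)</CustomerName>").matcher(xml);
        StringBuilder out = new StringBuilder();
        while (id.find() && name.find()) {
            out.append(id.group(1)).append('\t').append(name.group(1)).append('\n');
        }
        return out.toString();
    }

    public static void main(String[] args) {
        String xml = "<OrderId>10</OrderId><CustomerName>John</CustomerName>"
                   + "<OrderId>11</OrderId><CustomerName>Tom</CustomerName>";
        System.out.print(convert(xml));
    }
}
```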
I'm new to JMeter.
I need to test the performance of some JSON requests which normally do an insertion or update in the DB.
I need to change some record IDs dynamically. For example:
{"columnName":"company","newValue":"cts","oldValue":"","timeStamp":"11-05-2012 14:54:24","version":"1"}],"instruction":"contact_list","recordId":"8294547"}]}
I want the record ID to be taken dynamically from a CSV file, so I did this:
"recordid":"$recordid"}
But after hitting the server, I found in the request that the record ID was printed one more time after the JSON ended, like:
{"columnName":"company","newValue":"cts","oldValue":"","timeStamp":"11-05-2012 14:54:24","version":"1"}],"instruction":"contact_list","recordId":"8294547"}]}
8294547
which results in a malformed JSON request.
Can you please tell me how to avoid the extra recordid being appended after the JSON?
The JSON above is not the actual one; I just copied the part I need.
You're missing curly braces. The format for referencing a variable in JMeter is ${myVar}, not $myVar, so your body should read "recordId":"${recordid}".