WSO2: convert JSON/XML to CSV and write to a CSV file

I'm trying to create tab-delimited CSV data from JSON/XML data. While I can do this using a PayloadFactory mediator in an Iterate loop, the data gets appended to the same line in the file on every iteration, creating one long line of data. I want it to append to the next line instead, but I've been unable to find a way. Any suggestions? Thanks.
(I do not want a solution which uses a csv connector or module)

Edit: I solved it. You just need to use an XSLT and a "&#10;" (newline) line-break character.

Creating individual JSON files from a CSV file that is already in JSON format

I have JSON data in a CSV file that I need to break apart into separate JSON files. The data looks like this: {"EventMode":"","CalculateTax":"Y",.... There are multiple rows of this, and I want each row to be a separate JSON file. I have used code provided by Jatin Grover that parses the CSV into JSON:
import csv, json

# the post omitted the surrounding reader loop; a DictReader fits the symptoms described below
lcount = 0
with open('input.csv', newline='') as f:
    for row in csv.DictReader(f):
        out = json.dumps(row)
        jsonoutput = open('json_file_path/parsedJSONfile' + str(lcount) + '.json', 'w')
        jsonoutput.write(out)
        lcount += 1
This does an excellent job; the problem is that it adds "R": " before the {"EventMode... and adds extra \ characters between each element as well as at the end.
Each row of the CSV file is already a valid JSON object. I just need to break each row into a separate file with the .json extension.
I hope that makes sense. I am very new to this all.
It's not clear from your picture what your CSV actually looks like.
I mocked up a really small CSV with JSON lines that looks like this:
Request
"{""id"":""1"", ""name"":""alice""}"
"{""id"":""2"", ""name"":""bob""}"
(all the double-quotes are for escaping the quotes that are part of the JSON)
When I run this little script:
import csv

with open('input.csv', newline='') as input_file:
    reader = csv.reader(input_file)
    next(reader)  # discard/skip the first line ("header")
    for i, row in enumerate(reader):
        with open(f'json_file_path/parsedJSONfile{i}.json', 'w') as output_file:
            output_file.write(row[0])
I get two files, json_file_path/parsedJSONfile0.json and json_file_path/parsedJSONfile1.json, that look like this:
{"id":"1", "name":"Alice"}
and
{"id":"2", "name":"bob"}
Note that I'm not using json.dumps(...); that only makes sense if you are starting with data inside Python and want to save it as JSON. Your file already has text that is complete JSON, so you basically copy-paste each line as-is into a new file.
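A quick sketch of why json.dumps was mangling the rows (the column name "R" here is an assumption; the post's header wasn't visible): a csv.DictReader yields a dict, and json.dumps re-encodes the already-JSON text as a quoted string value, escaping its inner quotes.

```python
import json

# hypothetical row as csv.DictReader would yield it ("R" is an assumed header name)
row = {"R": '{"EventMode":"","CalculateTax":"Y"}'}
print(json.dumps(row))
# the inner JSON becomes a string value, so its quotes get backslash-escaped
```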

JMeter reaching EOF too early in CSV file

I have set up an SMTP sampler in JMeter that gets its body data from a CSV file. It reads the first element and then stops. Any suggestions on what could be wrong?
The CSV file looks like this:
"This is
a multiline
record
"`"This is
a second
multi line
record
"`"And this is a third record"
As per the CSV Data Set Config documentation:
JMeter supports CSV files with quoted data that includes new-lines.
By default, the file is only opened once, and each thread will use a different line from the file.
So each quoted "line" containing newline characters needs to start on a new physical line of the file (hopefully that makes sense); you need to organize your CSV file a little differently, to wit:
"This is
a multiline
record
"`
"This is
a second
multi line
record
"`
"And this is a third record"
If you don't have the possibility to amend your CSV file, you will have to go for other options of reading the data, i.e. using JSR223 Test Elements and Groovy scripts, or storing the data in a database and using JDBC Test Elements to retrieve it.
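The quoting rule can be checked outside JMeter with Python's csv module (a sketch, not JMeter code; the backtick delimiter mirrors the file above):

```python
import csv
from io import StringIO

# reorganized file: each quoted multi-line record starts on a new physical line
fixed = '"This is\na multiline\nrecord\n"`\n"And this is another record"\n'
rows = list(csv.reader(StringIO(fixed), delimiter='`'))
print(rows[0][0])  # the embedded newlines survive inside the quoted field
```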

JMeter - Save complete JSON response of all the request to CSV file for test data preparation

I need to create a test-data preparation script and capture JSON response data to a CSV file.
In the actual test, I need to read parameters from the CSV file.
Is there any possibility of saving the entire JSON data as a field in the CSV file, or do I need to extract each field and save it to the CSV file?
The main issue is that JSON contains commas. You can overcome it by saving the JSON to the file and using a different delimiter instead of a comma, for example #.
Then read the file using a CSV Data Set Config with # as the Delimiter:
Delimiter to be used to split the records in the file. If there are fewer values on the line than there are variables the remaining variables are not updated - so they will retain their previous value (if any).
This way you can save a whole JSON response in every row and then read it back using # as the delimiter.
You can save entire JSON response into a JMeter Variable by adding a Regular Expression Extractor as a child of the HTTP Request sampler which returns JSON and configuring it like:
Name of created variables: anything meaningful, i.e. response
Regular Expression: (?s)(^.*)
Template: $1$
Then you need to declare this response as a Sample Variable by adding the next line to user.properties file:
sample_variables=response
And finally, you can use the Flexible File Writer plugin to store the response variable into a file; if you don't have any other Sample Variables, you should use variable#0.
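The delimiter trick can be sanity-checked with a short Python sketch (not JMeter code; the JSON sample is made up): write a row whose field is a whole JSON response using # as the delimiter, then read it back intact.

```python
import csv
from io import StringIO

# write a row where one field is a whole JSON response, using '#' as delimiter
buf = StringIO()
csv.writer(buf, delimiter='#').writerow(['{"EventMode":"","CalculateTax":"Y"}', 'user1'])

# read it back: the commas inside the JSON no longer split the record
row = next(csv.reader(StringIO(buf.getvalue()), delimiter='#'))
print(row[0])
```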

AHK CSV Parse can't Parse line by line

Using AutoHotkey, I am looking at the AHK documentation.
The file I want to read is a CSV, so I am testing Example 4.
This is my CSV file: 2 rows, several columns.
So, if I open the file, the data should be read comma by comma and line by line, right?
But the printed answer is not what I expected.
Why is AHK's CSV parsing not cutting the data line by line?
Need some help :<
P.S.: The code is the same as Example 4.
Loop, Parse, PositionData, CSV
{
    MsgBox, 4, , Field %LineNumber%-%A_Index% is:`n%A_LoopField%`n`nContinue?
    IfMsgBox, No, break
}
I found it: when reading CSV in AHK, you must first split the data line by line (an outer parsing loop over lines), and only then parse each line as CSV.
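The two-level idea (outer loop per line, inner CSV parse per field) looks like this in Python (a sketch of the logic, not AHK; the sample data stands in for PositionData):

```python
import csv

# sample data standing in for PositionData (contents assumed)
position_data = "x1,y1,z1\nx2,y2,z2"
for line_number, line in enumerate(position_data.splitlines(), 1):
    for field_number, field in enumerate(next(csv.reader([line])), 1):
        print(f"Field {line_number}-{field_number} is: {field}")
```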

How can hadoop mapreduce get data input from CSV file?

I want to implement Hadoop MapReduce, and I use a CSV file for its input. So I want to ask: is there any method that Hadoop provides to get the values from a CSV file, or do we just do it with Java's String split function?
Thanks all.
By default Hadoop uses a text input reader that feeds the mapper line by line from the input file. The key passed to the mapper is the byte offset of the line in the file. Be careful with CSV files though, as single columns/fields can contain a line break. You might want to look for a CSV input reader like this one:
https://github.com/mvallebr/CSVInputFormat/blob/master/src/main/java/org/apache/hadoop/mapreduce/lib/input/CSVNLineInputFormat.java
But either way, you have to split each line into fields in your own code.
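The line-break caveat is easy to demonstrate outside Hadoop (a Python sketch, not MapReduce code): naive line splitting breaks a record whose field contains a newline, while a real CSV parser keeps the record intact.

```python
import csv
from io import StringIO

data = 'id,comment\n1,"line one\nline two"\n2,simple\n'
naive = data.splitlines()                   # 4 pieces: the embedded newline splits row 1
proper = list(csv.reader(StringIO(data)))   # 3 rows: header plus two records
print(len(naive), len(proper))
```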