Apache Camel - extract TGZ file which contains multiple CSVs

I'm new to Apache Camel and learning its basics. I'm using the YAML DSL.
I have a TGZ file which contains 2 small CSV files.
I am trying to decompress the file using gzipDeflater, but when I print the body after extraction, it includes some metadata about the CSV (filename, my username, some numbers) that prevents me from parsing the CSV by its known columns.
Since the extracted file includes lines that were not part of the original CSV, I get an exception whenever one of those lines is processed.
Is there a way for me to "ignore" those lines, or perhaps another Apache Camel feature that would let me access only the content of those CSVs?
Thanks!

You probably have a gzipped tar file, which is a slightly different thing from just a deflate-compressed file; the extra lines you are seeing (filename, username, numbers) are the tar entries' metadata.
Try this (convert to YAML if you'd like):
from("file:filedir")
.unmarshal().gzip()
.split(new TarSplitter())
// process/unmarshal CSV
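For reference, here is a fuller sketch of the same route as a self-contained Java RouteBuilder, with the CSV step filled in. This is a hedged sketch, not a drop-in answer: the "file:filedir" endpoint and the log line are placeholders, TarSplitter comes from the camel-tarfile component, and depending on your Camel version the data format is called gzip() or gzipDeflater(). The tar entry metadata you saw is exactly what the splitter strips away:

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.dataformat.tarfile.TarSplitter;

public class TgzCsvRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("file:filedir")
            .unmarshal().gzip()                    // .tgz -> plain tar stream
            .split(new TarSplitter()).streaming()  // one exchange per file inside the tar
                .unmarshal().csv()                 // body becomes a List of CSV rows
                .log("entry ${header.CamelFileName}: ${body}")
            .end();
    }
}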

Related

ADF Merge-Copying JSON files in Copy Data Activity creates error for Mapping Data Flow

I am trying to do some optimization in ADF. The setup: a third-party tool copies one JSON file per object to a BLOB storage container. These feed a Mapping Data Flow. The individual files written by the third-party tool work great. If I copy these files to a different BLOB folder using an Azure Copy Data activity, the MDF can no longer parse the files and gives an error: "JSON parsing error, unsupported encoding or multiline." I started this with Merge Files, but the outcome is the same regardless of the copy behavior I choose.
2ND EDIT: After another day's work, I have found that the Copy Activity merge from JSON to JSON definitely adds an EOL character to each single JSON object as it is imported into the merged file. I have also found that the MDF definitely fails with those EOL characters in the merged file. If I remove all EOL characters from the merged file, the same MDF works. For me, this is a bug: the copy activity is adding a character that breaks the MDF. There also seems to be a second issue in some of my data that doesn't fail as an individual file but does when concatenated; that breaks the MDF when I try to pull all the files together. I have tested the basic behavior on 1-5000 files and been able to repeat the fail/success tests.
I took the original file and the copied file and ran them through all sorts of tests. Here is what I eventually found when I dumped them into Notepad++:
Copied file:
{"CustomerMasterData":{"Customer":[{"ID":"123456","name":"Customer Name",}]}}\r\n
Original file:
{"CustomerMasterData":{"Customer":[{"ID":"123456","name":"Customer Name",}]}}\n
If I change the copied file from ending with \r\n to \n, the MDF can read the file again. What is going on here? And how do I change the file write behavior or the MDF settings so that I can concatenate or copy files without the CRLF?
EDIT: NEW INFORMATION -- It seems on further review like maybe the minification/whitespace removal is the culprit. If I download the file created by the ADF copy and format it using a JSON formatter, it works. Maybe the CRLF -> LF masked something else. I'm not sure what to do at this point, but it's super frustrating.
Other possibly relevant information:
Both the source and sink JSON datasets are set to use UTF-8 (not default(UTF-8), although I tried that). Would a different encoding fix this?
I have tried remapping schemas, creating new data sets, creating new Mapping Data Flows, still get the same error.
EDITED for clarity based on comments:
In the case of a single JSON element in a file, I can get this to work -- the data preview returns the same success or failure as the pipeline when run.
In the case of multiple documents merged by ADF, I get the parsing failure instead; see the new-information edit above for what I found.
Repro: Create any valid JSON as a single file, put it in blob storage, and use it as a source in a mapping data flow with any sink operation. Create a second file with the same schema and get them both to run in the same flow using wildcard paths. Then use a Copy Activity with Merge Files as the sink copy behavior and Array of Objects as the file pattern, and try to make your MDF use this new file. When it fails, download the file created by ADF, run it through a formatter (I have used both VS Code -> "Format Document" from the standard VS Code JSON extension, and the VS 2019 "Unminify" command) and re-upload... It should work now.
I don't know if you have already solved the problem; I came across the exact same problem 3 days ago, and after several tries I found a solution:
In the Copy Data activity, under sink settings, use "Set of objects" (instead of "Array of objects") as the File pattern, so that the merged big JSON has the value of each original small JSON file written on its own line (illustrated below).
In the MDF, after setting up the wildcard paths with the *.json pattern, under JSON settings select "Document per line" as the Document form.
After that you should be good to go; at least it solved my problem. The CRLF automatically written by the "Array of objects" setting in the Copy Data activity seems to be a default, and MSFT should provide the option to omit it in the settings in the future.
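To illustrate the difference with two hypothetical records: "Array of objects" wraps everything in a single JSON array (and, per the above, adds CRLF separators), while "Set of objects" writes one complete document per line, which is exactly the JSON-lines shape that "Document per line" expects:

Array of objects:
[{"ID":"1"},{"ID":"2"}]

Set of objects:
{"ID":"1"}
{"ID":"2"}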
According to my test:
1. The Copy Data activity doesn't change Unix (LF) line endings to Windows (CRLF).
2. The MDF can parse both Unix (LF) and Windows (CRLF) files.
Maybe something else is wrong.
By the way, I see there is a comma after "name":"Customer Name" in your original file; I deleted it before my test, since it makes the JSON invalid.

'' or '?' character is prepended to the first column when reading a CSV file from S3 using Camel

The CSV file is located in an S3 bucket, and I am using camel-aws to consume it.
However, whenever the CSV file is loaded locally, a '' or '?' character is prepended to the first column.
For example,
original file
firstname, lastname
brian,xi
after loading locally
firstname,lastname
brian,xi
I have done research on this link: R's read.csv prepending 1st column name with junk text.
However, it does not seem to apply to Camel.
How do you read the CSV file from S3?
I use aws-s3 to consume the CSV file from the S3 bucket, e.g. Exchange s3File = consumer.receive(s3Endpoint) where s3Endpoint = "aws-s3://keys&secret?prefix=%s&deleteAfterRead=false&amazonS3Client=#awsS3client"
The characters  are a UTF-8 BOM (hex EF BB BF). It is metadata about the file content that is placed at the very beginning of the file (because there is no "header" or similar place where it could be stored).
If you read a file that begins with this sequence using the Windows standard encoding (CP1252) or ISO-8859-1, you get exactly these three strange characters at the beginning of the file content.
To avoid that, you have to read the file as UTF-8 and BOM-aware, as suggested in jws's comment. He also provided a link with an example of how to use a BOMInputStream to read such files correctly.
If the file is read correctly and you then write it back with a different encoding like CP1252, the BOM should be gone.
So, now the question is how exactly you read the file with Camel. If you (or a library) read it, perhaps by default, with a non-UTF-8 encoding, that explains why you get these characters in the file content.
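As a minimal, hedged sketch of that idea inside a Camel route (assuming the exchange body arrives as a raw InputStream; BOMInputStream and IOUtils come from Apache Commons IO, and s3Endpoint is the endpoint string quoted above):

import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import org.apache.commons.io.IOUtils;
import org.apache.commons.io.input.BOMInputStream;

from(s3Endpoint)
    .process(exchange -> {
        InputStream raw = exchange.getIn().getBody(InputStream.class);
        // BOMInputStream silently skips a leading UTF-8 BOM (EF BB BF) if present
        try (BOMInputStream in = new BOMInputStream(raw)) {
            exchange.getIn().setBody(IOUtils.toString(in, StandardCharsets.UTF_8));
        }
    })
    // from here on the body is clean UTF-8 text, e.g. for .unmarshal().csv()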

Loading JSON data with CSV Data Set Config

I'm new to JMeter, so I hope this question is not too off the wall. I am trying to test an HTTP endpoint that accepts a large JSON payload and processes it. I have collected a few hundred JSON blobs in a file and want to use those as my input for testing. The only way that I have come across for loading the data is using the CSV config. I have a single line of the file for each request. I have attempted to use \n as a delimiter and have also tried adding a tab character \t to the end of each line. My requests all show an input of <EOF>.
Is there a way to read a file of JSON objects, line at a time, and pass them to my endpoint as the body in a POST?
You need to provide more information, to wit: an example JSON file (first 2 lines), your CSV Data Set Config setup, the jmeter.log file, etc., so we can help.
For the time being I can state that:
Given a CSV file looking like:
{"foo":"bar"}
{"baz":"qux"}
and a pretty much default CSV Data Set Config setup, JMeter normally reads the CSV data (an illustrative setup is sketched below).
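For example, a hypothetical setup (the variable name payload is illustrative; pick a delimiter that cannot occur inside the JSON):

CSV Data Set Config:
  Filename:       test.csv
  Variable Names: payload
  Delimiter:      \t

HTTP Request -> Body Data: ${payload}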
Be aware that there are alternatives to the CSV Data Set Config, for example:
__CSVRead() function. The equivalent syntax would be ${__CSVRead(test.csv,0)}
__StringFromFile() function. The equivalent syntax would be ${__StringFromFile(test.csv,,,)}
See Apache JMeter Functions - An Introduction to get familiarized with the JMeter Functions concept.

avoid splitting json output by pyspark (v. 2.1)

Using Spark v2.1 and Python, I load JSON files with
sqlContext.read.json("path/data.json")
I have a problem with the JSON output. Using the command below,
df.write.json("path/test.json")
the data is saved in a folder called test.json (not a file), which contains two files: one empty and the other with a strange name:
part-r-00000-f9ec958d-ceb2-4aee-bcb1-fa42a95b714f
Is there any way to have a clean single JSON output file?
thanks
Yes, Spark writes the output to multiple files when you save. Since the computation is distributed, the output is written to multiple part files like part-r-00000-f9ec958d-ceb2-4aee-bcb1-fa42a95b714f. The number of files created equals the number of partitions.
If your data is small and fits in memory, then you can save your output to a single file. But if your data is large, saving it to a single file is not the suggested way.
Actually, test.json is a directory, not a JSON file. It contains multiple part files inside it. This does not create any problem: you can easily read it back later.
If you still want your output in a single file, you need to repartition to 1, which brings all your data to a single node before saving. This may cause an issue if you have large data.
df.repartition(1).write.json("path/test.json")
Or use coalesce, which also merges to one output file but avoids a full shuffle (note that df.collect() returns a plain Python list of rows on the driver, which has no .write, so df.collect().write.json(...) would fail):
df.coalesce(1).write.json("path/test.json")

Spark read.json with file names

I need to read a bunch of JSON files from an HDFS directory. After I'm done processing, Spark needs to place the files in a different directory. In the meantime, there may be more files added, so I need a list of files that were read (and processed) by Spark, as I do not want to remove the ones that were not yet processed.
The function read.json converts the files immediately into DataFrames, which is cool, but unlike wholeTextFiles it does not give me the file names. Is there a way to read JSON data while also getting the file names? Is there a conversion from an RDD (with JSON data) to a DataFrame?
From version 1.6 on, you can use input_file_name() to get the name of the file in which a row is located. The names of all the files that were read can then be obtained via a distinct on that column.
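A short sketch of that approach using the Spark 2.x Java API (paths and column names are placeholders):

import static org.apache.spark.sql.functions.input_file_name;

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

SparkSession spark = SparkSession.builder().appName("json-with-names").getOrCreate();

// read the JSON files and tag every row with the file it came from
Dataset<Row> df = spark.read().json("hdfs:///input/dir")
        .withColumn("source_file", input_file_name());

// distinct list of files that were actually read, e.g. for moving them afterwards
Dataset<Row> processedFiles = df.select("source_file").distinct();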