Convert JSON to CSV in NiFi

I want to convert JSON files to CSV in NiFi. We can achieve this in Python and other programming languages, and there are multiple articles on it. I have multiple JSON files and each file has a different schema (one specific file will have one schema only). I can see there are templates to convert CSV to JSON and other conversions, but I didn't see any template to convert JSON data to CSV. I have gone through the article https://community.hortonworks.com/articles/64069/converting-a-large-json-file-into-csv.html, however there the schema is hard-coded. As I have multiple files and each file has a different schema, I can't hard-code the schema. Any suggestions please.

Conversion between formats is typically done through ConvertRecord by plugging in the appropriate record reader and record writer, in this case a JSON reader and CSV writer.
To make use of the record processors you need to define Avro schemas for your data and put them in a schema registry; NiFi provides a local one.
There are lots of examples and posts out there about record processing. This slide deck shows an example of CSV to JSON, but it would be easy to reverse it for your scenario:
https://www.slideshare.net/BryanBende/apache-nifi-record-processing
This post has some other info:
https://bryanbende.com/development/2017/06/20/apache-nifi-records-and-schema-registries
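As a rough sketch, what you register for each file type is just an Avro schema describing that file's fields; a minimal, hypothetical example might look like this:

    {
      "type": "record",
      "name": "product",
      "fields": [
        { "name": "id",    "type": "long" },
        { "name": "color", "type": ["null", "string"] }
      ]
    }

The JSON reader and CSV writer in ConvertRecord are then both pointed at that schema (typically by schema name), and since each of your files has its own layout you would register one schema entry per file type.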

Related

How do we name the files that are streamed via Firehose?

I'm building an architecture using boto3, and I want to dump the data from an API to S3 in JSON format. What blocks my way right now is, first, that Firehose does NOT support JSON; my workaround for now is to not compress the records, but the output is still different from a JSON file. I'd still like to find a better option to make the files more compatible.
And second, the file names can't be customized. All the data I collect will eventually be queried through Athena, so can boto3 do the naming?
Answering a couple of the questions you have. Firstly, if you stream JSON into Firehose it will write JSON to S3. JSON is the data structure and compression is just the file encoding; compressing JSON doesn't turn it into something else. You'll just need to decompress it before consuming it.
RE: file naming, you shouldn't care about that. Let the system name it whatever. If you define the Athena table with the location, you'll be able to query it. When new files are added, you'll be able to query them immediately.
Here is an AWS tutorial that walks you through this process. JSON stream to S3 with Athena query.
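To keep the S3 objects easy for Athena to read, one common approach is to send each record as a single line of JSON (newline-delimited JSON). A minimal boto3 sketch, assuming a delivery stream named my-stream already exists and using a hypothetical payload:

    import json
    import boto3

    firehose = boto3.client("firehose")

    record = {"user_id": 42, "event": "click"}  # hypothetical payload

    # The trailing "\n" makes the objects Firehose writes to S3
    # newline-delimited JSON, which Athena's JSON SerDe reads directly.
    firehose.put_record(
        DeliveryStreamName="my-stream",
        Record={"Data": (json.dumps(record) + "\n").encode("utf-8")},
    )

The delivery stream still decides the object keys; as noted above, Athena only cares about the S3 location, not the individual file names.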

How to create a BQ-schema from XSD

I need some guidance on how to proceed with a problem.
Our integration team receives XML files which are converted to JSON and sent to Pub/Sub. We then ingest the JSON files (or are supposed to) into BigQuery.
The problem is that the XML files do not include all possible objects or values all the time, so I can't create a correct schema in BigQuery to receive the JSON files. I got the XSD file along with an extension file which gives me all possible objects, but I don't know how to convert this to a correct BigQuery schema.
Do you have any suggestions on how to create a BigQuery schema from XSD files? I was thinking that if I create an XML file with dummy data (including all objects, and more than one instance of each repeated object) with the help of the XSD, maybe that XML file could be converted to JSON and then run through BigQuery's schema auto-detection.
Any suggestions?
Thanks,
Cris
If you have the XSD schema files, you can convert these to a valid JSON schema. There are a few tools that can help you accomplish this.
Keep in mind that these tools are general-purpose and not built for the particular case of BigQuery, so you'll have to tune the result to get a valid BigQuery schema. For that, check the components of a BigQuery schema, and for quick reference the sample provided in the documentation.
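For orientation, this is the general shape the tuned result needs to take — a BigQuery JSON schema file in which nested XSD complex types become RECORD fields, elements with maxOccurs greater than 1 become REPEATED, and optional elements become NULLABLE (the field names below are made up):

    [
      {"name": "order_id", "type": "STRING",    "mode": "REQUIRED"},
      {"name": "created",  "type": "TIMESTAMP", "mode": "NULLABLE"},
      {"name": "items",    "type": "RECORD",    "mode": "REPEATED",
       "fields": [
         {"name": "sku", "type": "STRING",  "mode": "NULLABLE"},
         {"name": "qty", "type": "INTEGER", "mode": "NULLABLE"}
       ]}
    ]

Marking the optional elements NULLABLE rather than REQUIRED is what lets the table absorb messages that don't contain every object every time.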

Read Array Of Jsons From File to Spark Dataframe

I have a gzipped JSON file that contains an array of JSON objects, something like this:
[{"Product":{"id":1,"image":"/img.jpg"},"Color":"black"},{"Product":{"id":2,"image":"/img1.jpg"},"Color":"green"}, ...]
I know this is not the ideal data format to read into Scala, however there is no alternative but to process the feed in this manner.
I have tried:
spark.read.json("file-path")
which seems to take a long time (it processes very quickly when the data is in MBs, but takes very long for GBs worth of data), probably because Spark is not able to split the file and distribute it across the other executors.
I wanted to see if there is any way to preprocess this data and load it into Spark as a dataframe.
The functionality I want seems to be similar to: Create pandas dataframe from json objects. But I wanted to see if there is any Scala alternative which could do something similar and convert the data to a Spark RDD / dataframe.
You can read the gzip file using spark.read().text("gzip-file-path"). Since the Spark APIs are built on top of the HDFS API, Spark can read the gzip file and decompress it to read its contents.
https://github.com/mesos/spark/blob/baa30fcd99aec83b1b704d7918be6bb78b45fbb5/core/src/main/scala/spark/SparkContext.scala#L239
However, gzip is non-splittable, so Spark creates an RDD with a single partition. Hence, reading gzip files with Spark alone does not make much sense.
You may decompress the gzip file and read the decompressed files to get the most out of the distributed processing architecture.
It appeared to be a problem with the data format being given to Spark for processing. I had to pre-process the data into a Spark-friendly format and run the Spark jobs over that. This is the preprocessing I ended up doing: https://github.com/dipayan90/bigjsonprocessor/blob/master/src/main/java/com/kajjoy/bigjsonprocessor/Application.java
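A rough sketch of that kind of preprocessing in Python (file names are placeholders): decompress the archive once, walk the top-level array, and rewrite it as newline-delimited JSON, which Spark can split across partitions:

    import gzip
    import json

    # Read the single big JSON array and rewrite it one object per line.
    # Note: json.load pulls the whole array into memory; for files that
    # do not fit, a streaming parser would be needed instead.
    with gzip.open("products.json.gz", "rt") as src, open("products.jsonl", "w") as dst:
        for record in json.load(src):
            dst.write(json.dumps(record) + "\n")

The resulting products.jsonl can then be read with spark.read.json("products.jsonl"); being uncompressed, newline-delimited JSON, it is splittable and will be distributed across executors instead of landing in a single partition.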

Where do I write a JSON file

I have spent a few hours reading about JSON online and have read a JSON-for-beginners textbook. However, I still cannot figure out how to start using JSON to store data. Is there a JSON download or interface that is used, or do you store JSON files using other languages like Python?
I understand JSON is a file format, but where do I write and store JSON files?
My question is different than the suggested duplicate because it is more narrow. My question specifically asks "where do I write a JSON file?", as opposed to "how do I use JSON?"
DISCLAIMER I'm brand new to coding and don't have a teacher to consult to ask questions. I am asking this question because I am genuinely trying to learn and this information is not available online, nor in the textbook that I purchased.
Thank you in advance for your help!
JSON is a text format for storing and passing data between programs. You can create a JSON file in any text editor; it just has to follow the JSON syntax, that's it. You can validate the format of your JSON using online validators (I usually use https://jsonlint.com/). JSON is a standard supported by almost every programming language, and every language has an API for working with it.
You can find examples of JSON data here: http://json.org/example.html
Just copy them and store them in a file with the extension ".json". You can then read them from your program by specifying the path where the file sits on the file system.
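For example, in Python the standard json module writes and reads such a file directly (the file name and data below are just for illustration):

    import json

    data = {"name": "Alice", "languages": ["Python", "JavaScript"]}

    # Write the data structure out as a .json file next to the script.
    with open("profile.json", "w") as f:
        json.dump(data, f, indent=2)

    # Read it back into a Python dictionary.
    with open("profile.json") as f:
        loaded = json.load(f)

    print(loaded["name"])  # Alice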

Can the Avro JSON be extended with additional information?

The Avro format is used in Hadoop as a header to describe the contents of the binary file that follows. My question is whether the JSON part of the Avro file can be extended to include information that is not needed by Hadoop. The typical use case would be to attach metadata, like the originator of the file and a date, without it having to be stored as actual record data in the file.
Yes. Avro files can be annotated with additional information in the JSON schema or with specific additional name:value pairs. Additionally, we have been able to read these Avro files with Pentaho and Google BigQuery. One caveat is that the extra schema attributes and name:value pairs are discarded during the import process, so if you feel you will need them later, you should extract and store local copies of them.
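As a small Python illustration, the fastavro library (just one of several Avro libraries, assumed here) lets you attach file-level name:value metadata when writing and read it back from the header without touching the record data:

    import fastavro

    schema = {
        "type": "record",
        "name": "Example",
        "fields": [{"name": "id", "type": "long"}],
    }
    records = [{"id": 1}, {"id": 2}]

    with open("example.avro", "wb") as out:
        # Extra key/value pairs land in the file header next to avro.schema.
        fastavro.writer(out, schema, records,
                        metadata={"originator": "billing-system", "created": "2017-06-01"})

    with open("example.avro", "rb") as inp:
        reader = fastavro.reader(inp)
        print(reader.metadata.get("originator"))  # billing-system

The Avro spec also allows extra attributes directly in the schema JSON, which is the other annotation route mentioned above, though as noted not every downstream tool preserves either kind of annotation.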