I have a gzipped JSON file that contains an array of JSON objects, something like this:
[{"Product":{"id":1,"image":"/img.jpg"},"Color":"black"},{"Product":{"id":2,"image":"/img1.jpg"},"Color":"green"}.....]
I know this is not the ideal data format to read into Scala, but there is no alternative other than to process the feed in this way.
I have tried :
spark.read.json("file-path")
which seems to take a long time (it processes very quickly for data in MBs, but takes far longer for GBs of data), probably because Spark is not able to split the file and distribute it across the other executors.
I wanted to see if there is any way to preprocess this data and load it into the Spark context as a dataframe.
The functionality I want seems to be similar to: Create pandas dataframe from json objects. But I wanted to see if there is a Scala alternative that could do something similar and convert the data to a Spark RDD / dataframe.
You can read the "gzip" file using spark.read().text("gzip-file-path"). Since Spark API's are built on top of HDFS API , Spark can read the gzip file and decompress it to read the files.
https://github.com/mesos/spark/blob/baa30fcd99aec83b1b704d7918be6bb78b45fbb5/core/src/main/scala/spark/SparkContext.scala#L239
However, gzip is non-splittable, so Spark creates an RDD with a single partition. Hence, reading gzip files with Spark does not make much sense.
You may decompress the gzip file and read the decompressed files to get the most out of the distributed processing architecture.
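As a quick check of this behaviour, a minimal PySpark sketch (the paths are placeholders, and an existing SparkSession named spark is assumed): the gzipped input lands in a single partition, while the decompressed text file can be split.

df_gz = spark.read.text("/data/feed.json.gz")
print(df_gz.rdd.getNumPartitions())   # gzip is non-splittable, so this is 1

# After decompressing outside Spark (e.g. gunzip feed.json.gz):
df_txt = spark.read.text("/data/feed.json")
print(df_txt.rdd.getNumPartitions())  # typically > 1 for GB-sized files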
It appeared to be a problem with the data format being given to Spark for processing. I had to preprocess the data into a Spark-friendly format and run the Spark jobs over that. This is the preprocessing I ended up doing: https://github.com/dipayan90/bigjsonprocessor/blob/master/src/main/java/com/kajjoy/bigjsonprocessor/Application.java
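For reference, here is a minimal Python sketch of the same idea (not the linked Java code): convert the gzipped JSON array into newline-delimited JSON, which Spark can then split across executors. It assumes the whole array fits in memory on the preprocessing machine, and the file names are placeholders.

import gzip
import json

with gzip.open("feed.json.gz", "rt") as src:
    records = json.load(src)                  # load the entire top-level JSON array

with open("feed.jsonl", "w") as dst:
    for record in records:
        dst.write(json.dumps(record) + "\n")  # one JSON object per line

# Spark can then read the line-delimited file in parallel:
# spark.read.json("feed.jsonl")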
I am trying to save a data frame as a document, but it fails with the error below:
java.lang.ClassNotFoundException: Failed to find data source: docx. Please find packages at http://spark.apache.org/third-party-projects.html
My code is below:
#f_data is my dataframe with data
f_data.write.format("docx").save("dbfs:/FileStore/test/test.csv")
display(f_data)
Note that I could save files in CSV, text, and JSON formats, but is there any way to save a docx file using PySpark?
My question here: is there support for saving data in doc/docx format?
If not, is there any way to store the file, for example by writing a file stream object into a particular folder/S3 bucket?
In short: no, Spark does not support the DOCX format out of the box. You can still collect the data onto the driver node (e.g. into a pandas DataFrame) and work from there.
Long answer:
A document format like DOCX is meant for presenting information in small tables with style metadata. Spark focuses on processing large amounts of data at scale and does not support the DOCX format out of the box.
If you want to write DOCX files programmatically, you can:
Collect the data into a pandas DataFrame: pd_f_data = f_data.toPandas()
Import a Python package to create the DOCX document and save it into a stream. See this question: Writing a Python Pandas DataFrame to Word document
Upload the stream to an S3 blob, for example using boto: Can you upload to S3 using a stream rather than a local file?
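Putting those steps together, a rough sketch (the bucket name, object key, and table layout are assumptions; it needs python-docx and boto3 installed on the driver):

import io
import boto3
from docx import Document

# 1. Collect the Spark DataFrame onto the driver as a pandas DataFrame.
pd_f_data = f_data.toPandas()

# 2. Build a simple Word table with python-docx and save it to a stream.
doc = Document()
table = doc.add_table(rows=1, cols=len(pd_f_data.columns))
for cell, name in zip(table.rows[0].cells, pd_f_data.columns):
    cell.text = str(name)
for _, row in pd_f_data.iterrows():
    for cell, value in zip(table.add_row().cells, row):
        cell.text = str(value)

buffer = io.BytesIO()
doc.save(buffer)
buffer.seek(0)

# 3. Upload the stream to S3 (hypothetical bucket/key).
boto3.client("s3").upload_fileobj(buffer, "my-bucket", "reports/test.docx")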
Note: if your data has more than a hundred rows, ask the recipients how they are going to use the data. Use docx only for reporting, not as a file transfer format.
I'm building an architecture using boto3, and I hope to dump the data in JSON format from an API to S3. What is blocking me right now is, first, that Firehose does NOT support JSON; my current workaround is to not compress the records, but the result is still different from a plain JSON file. I would still like a better option to make the files more compatible.
Second, the file names can't be customized. All the data I collect will eventually be queried through Athena, so can boto3 do the naming?
Answering a couple of your questions. Firstly, if you stream JSON into Firehose it will write JSON to S3. JSON is the data structure of the file and compression is the file type; compressing JSON doesn't make it something else, you'll just need to decompress it before consuming it.
Re: file naming, you shouldn't care about that. Let the system name the files whatever it wants. If you define the Athena table with that location, you'll be able to query it, and when new files are added you'll be able to query them immediately.
Here is an AWS tutorial that walks you through this process: JSON stream to S3 with Athena query.
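For the first point, a minimal boto3 sketch of pushing JSON records into Firehose (the stream name and record contents are hypothetical):

import json
import boto3

firehose = boto3.client("firehose")
record = {"id": 1, "status": "ok"}  # placeholder payload

# Firehose concatenates records as-is, so append a newline to end up with
# line-delimited JSON in S3, which Athena can query directly.
firehose.put_record(
    DeliveryStreamName="my-delivery-stream",  # hypothetical stream name
    Record={"Data": (json.dumps(record) + "\n").encode("utf-8")},
)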
I want to convert JSON files to CSV in NiFi. We can achieve this in Python and other programming languages, and there are multiple articles on it. I have multiple JSON files, and each file has a different schema (one specific file will have one schema only). I can see there are templates to convert CSV to JSON and other conversions, but I didn't see any template to convert JSON data to CSV. I have gone through the article https://community.hortonworks.com/articles/64069/converting-a-large-json-file-into-csv.html, however there the schema is hard-coded. Since I have multiple files and each file has a different schema, I can't hard-code the schema. Any suggestions, please?
Conversion between formats is typically done through ConvertRecord by plugging in the appropriate record reader and record writer, in this case a JSON reader and CSV writer.
To make use of the record processors you need to define Avro schemas for your data and put them in a schema registry; NiFi provides a local one.
There are lots of examples and posts out there about the record processors. This slide deck shows an example of CSV to JSON, but it would be easy to reverse for your scenario:
https://www.slideshare.net/BryanBende/apache-nifi-record-processing
This post has some other info:
https://bryanbende.com/development/2017/06/20/apache-nifi-records-and-schema-registries
I am facing a problem:
I want to parse a bunch of HTML files in HDFS, and I want to use Spark to process them. For convenience, I want to use PySpark, and I also want to use the powerful package BeautifulSoup to parse the HTML files. Is it possible to do that? And how can I read the files from HDFS when using BeautifulSoup?
Assuming the HTML files are in HDFS, you could pipe a list of the file names into the driver, assigning each a random number between 0 and the number of executors in your job. Then put this data into an RDD with parallelize, group by the random number, and do a mapPartitions. The reducer function would process each row of the partition by reading the file in as text from HDFS, then pass the text to BeautifulSoup and do your processing. The results will be captured in an RDD. Keep in mind that BeautifulSoup must be installed on each of your worker nodes or this will not work.
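A simplified PySpark sketch of that idea, using parallelize with an explicit partition count instead of the random-number groupBy, and reading each file inside the partition via the hdfs command-line client (paths, the partition count, and the title extraction are placeholders; the hdfs CLI and bs4 must be available on the workers):

import subprocess
from bs4 import BeautifulSoup
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("html-parsing").getOrCreate()
sc = spark.sparkContext

# Hypothetical list of HDFS paths; in practice this could come from
# `hdfs dfs -ls` output piped into the driver.
html_paths = ["/data/html/page1.html", "/data/html/page2.html"]

def parse_partition(paths):
    for path in paths:
        # Read the file contents from HDFS as text on the worker node.
        html = subprocess.run(
            ["hdfs", "dfs", "-cat", path],
            capture_output=True, check=True,
        ).stdout.decode("utf-8")
        soup = BeautifulSoup(html, "html.parser")
        # Example processing: extract the page title.
        yield (path, soup.title.string if soup.title else None)

# Spread the paths over as many partitions as executors (here: 4).
results = sc.parallelize(html_paths, 4).mapPartitions(parse_partition).collect()

If the files are small enough, sc.wholeTextFiles("/data/html/*.html") is a simpler way to get (path, content) pairs without shelling out to the hdfs client.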
I am using NiFi with ConvertJsonToAvro -> ConvertAvroToORC -> PutHDFS, but I am facing the following issues:
1) A single ORC file is being saved on HDFS. I am not using any compression.
2) When I try to access these files, they give errors like buffer memory.
Thanks in advance for the help.
You should be merging together many Avro records before ConvertAvroToORC.
You could do this by using MergeContent with the Merge Format set to Avro right before ConvertAvroToORC.
You could also do this by merging your JSON together using MergeContent, and then sending the merged JSON to ConvertJsonToAvro.
Using PutHDFS to append to ORC files that are already in HDFS will not work. The PutHDFS processor does not know anything about the format of the data; it just writes additional raw bytes onto the file and will likely create an invalid ORC file.