Limit precision on JSONEncoder's Double output

I'm working on a document-based SwiftUI application, and my file contents include large arrays of doubles. The application imports some data from CSV, and the file then saves this data in a data structure.
However, when looking at the file sizes, the custom document type I declare in my application is 2-3 times larger than the CSV file. The other bits of metadata could not possibly be that large.
Here's the JSON output that is saved to file:
Here's the CSV file that was imported:
On looking at the raw text of the JSON output and comparing it with the original CSV, it became obvious that the file format I declare was storing a lot of unnecessary precision. How do I make Swift's JSONEncoder use only, for example, 4 decimal places of precision?

Related

How to read 100 GB of nested JSON in PySpark on Databricks

There is a nested JSON file with a very deep structure. The file is in json.gz format and is 3.5 GB; once uncompressed, it is about 100 GB.
The JSON is multiline (only if the file is read via spark.read.json with multiline set to true do we get to see the proper JSON schema).
Also, this file has a single record, which has two columns of array-of-struct type, with multilevel nesting.
How should I read this file and extract the information? What kind of cluster or technique should I use to extract the relevant data from this file?
Structure of the JSON (multiline)
This is a single record, and the entire data is present in two columns: in_netxxxx and provider_xxxxx.
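For reference, the straightforward way to read such a file is a multiline read followed by exploding the nested array columns. This is only a sketch (the path is hypothetical, and the column name is taken from the question as written); note that with a single 100 GB record the whole file is parsed by one task, which is presumably why the answer below splits the file first:

from pyspark.sql.functions import explode

# multiLine=true is required because this single record spans many lines.
raw = spark.read.option("multiLine", "true").json("dbfs:/tmp/big_nested.json.gz")

# Flatten one of the two array-of-struct columns into one row per element.
flat = raw.select(explode("in_netxxxx").alias("in_net"))
flat.printSchema()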
I was able to achieve this in a slightly different way.
I used the Big Text File Splitter utility from Withdata Software (https://www.withdata.com/big-text-file-splitter), as the file was huge and nested over multiple levels. I kept the split record size at 500, which generated around 24 split files of roughly 3 GB each. The entire process took 30-40 minutes.
The _corrupt_record entries were processed separately, and the required information was populated from them.
Each split file was read using the option below, which drops the _corrupt_record entries and also removes the null rows:
spark.read.option("mode", "DROPMALFORMED").json(file_path)
Once the information is fetched from each file, all the files can be merged into a single file, as per the standard process.
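For illustration, here is a minimal PySpark sketch of that workflow; the paths are hypothetical, and unionByName with allowMissingColumns assumes Spark 3.1 or later:

from functools import reduce
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Hypothetical paths to the ~24 split files produced by the splitter tool.
split_paths = ["dbfs:/tmp/splits/part_%02d.json" % i for i in range(24)]

# DROPMALFORMED silently skips rows Spark cannot parse (the _corrupt_record
# rows), so each split file yields only the well-formed JSON objects.
dfs = [
    spark.read.option("mode", "DROPMALFORMED").json(path)
    for path in split_paths
]

# Merge everything back into one DataFrame; allowMissingColumns tolerates
# splits whose inferred schemas differ slightly.
merged = reduce(lambda a, b: a.unionByName(b, allowMissingColumns=True), dfs)
merged.printSchema()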

Merging and/or Reading 88 JSON Files into a DataFrame - different data types

I basically have a procedure where I make multiple calls to an API and, using a token within the JSON response, pass that back to a function to call the API again and get a "paginated" file.
In total I have to call and download 88 JSON files totalling 758 MB. The JSON files are all formatted the same way and have the same "schema", or at least they should do. I have tried reading each JSON file into a DataFrame after it has been downloaded, and then unioning that DataFrame onto a master DataFrame, so that essentially I have one big DataFrame with all 88 JSON files read in.
However, the problem I encounter is that at roughly file 66 the system (Python/Databricks/Spark) decides to change the inferred type of a field. It is always a string, and then, I'm guessing, when a value actually appears in that field it changes to a boolean. The problem is then that unionByName fails because of the different data types.
What is the best way for me to resolve this? I thought about using "extend" to merge all the JSON files into one big file, but reading a 758 MB JSON file would be a huge undertaking.
Could another solution be to explicitly set the schema that the JSON files are read with, so that the types are always the same?
If you know the attributes of those files, you can define the schema before reading them and create an empty df with that schema, so you can do a unionByName with allowMissingColumns=True:
something like:
from pyspark.sql.types import *

# Define the schema up front so every file is read with the same types.
my_schema = StructType([
    StructField('file_name', StringType(), True),
    StructField('id', LongType(), True),
    StructField('dataset_name', StringType(), True),
    StructField('snapshotdate', TimestampType(), True)
])

# Start from an empty DataFrame with that schema...
output = sqlContext.createDataFrame(sc.emptyRDD(), my_schema)

# ...then union each JSON file onto it.
df_json = spark.read.[...your JSON file...]
output = output.unionByName(df_json, allowMissingColumns=True)
I'm not sure if this is what you are looking for; I hope it helps.
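Building on that idea, here is a minimal sketch that folds all 88 downloaded files into one DataFrame with the schema forced on read; the file paths are hypothetical, and allowMissingColumns again assumes Spark 3.1 or later:

from functools import reduce
from pyspark.sql.types import (StructType, StructField, StringType,
                               LongType, TimestampType)

# The same explicit schema as above, so inference can never flip a column
# from string to boolean halfway through the 88 files.
my_schema = StructType([
    StructField('file_name', StringType(), True),
    StructField('id', LongType(), True),
    StructField('dataset_name', StringType(), True),
    StructField('snapshotdate', TimestampType(), True)
])

# Hypothetical locations of the 88 downloaded files.
paths = ["dbfs:/tmp/api_dump/page_%d.json" % i for i in range(88)]

# Forcing the schema on read makes every per-file DataFrame identical.
dfs = [spark.read.schema(my_schema).json(p) for p in paths]

combined = reduce(lambda a, b: a.unionByName(b, allowMissingColumns=True), dfs)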

How do I read a large JSON array file in PySpark?

Issue
I recently encountered a challenge in Azure Data Lake Analytics when I attempted to read in a large UTF-8 JSON array file, and switched to HDInsight PySpark (v2.x, not 3) to process the file. The file is ~110 GB and has ~150m JSON objects.
HDInsight PySpark does not appear to support the array-of-JSON file format for input, so I'm stuck. Also, I have "many" such files, each with a different schema containing hundreds of columns, so creating the schemas for those is not an option at this point.
Question
How do I use out-of-the-box functionality in PySpark 2 on HDInsight to enable these files to be read as JSON?
Thanks,
J
Things I tried
I used the approach at the bottom of a Databricks page that supplied the below code snippet:
import json

# wholeTextFiles reads each file as a single (path, contents) pair, so an
# entire file has to fit in one executor's memory before json.loads runs.
df = sc.wholeTextFiles('/tmp/*.json').flatMap(lambda x: json.loads(x[1])).toDF()
display(df)
I tried the above, not understanding how "wholeTextFiles" works, and of course ran into OutOfMemory errors that killed my executors quickly.
I attempted loading into an RDD and other methods, but PySpark appears to support only the JSON Lines file format, and I have an array of JSON objects due to ADLA's requirement for that file format.
I tried reading it in as a text file, stripping the array characters, splitting on the JSON object boundaries and converting to JSON like the above, but that kept giving errors about being unable to convert unicode and/or str(ings).
I found a way through the above, and converted it to a dataframe containing one column with rows of strings that were the JSON objects. However, I did not find a way to output only the JSON strings from the dataframe rows to an output file by themselves. They always came out as
{'dfColumnName':'{...json_string_as_value}'}
I also tried a map function that accepted the above rows, parsed them as JSON, extracted the values (the JSON I wanted), then parsed the values as JSON. This appeared to work, but when I tried to save, the RDD was of type PipelineRDD and had no saveAsTextFile() method. I then tried the toJSON method, but kept getting errors about "found no valid JSON Object", which, admittedly, I did not understand, and of course other conversion errors.
I finally found a way forward. I learned that I could read JSON directly from an RDD, including a PipelineRDD. I found a way to remove the Unicode byte-order marker and the wrapping array square brackets, split the JSON objects on a fortunate delimiter, and end up with a distributed dataset for more efficient processing. The output dataframe now has columns named after the JSON elements, the schema is inferred, and it adapts dynamically to the other file formats.
Here is the code - hope it helps!:
# Spark considers arrays of JSON objects to be an invalid format,
# and Unicode files are prefixed with a byte-order marker.
partitions = 1000  # placeholder: the desired number of input partitions

thanksMoiraRDD = sc.textFile('/a/valid/file/path', partitions).map(
    lambda x: x.encode('utf-8', 'ignore').strip(u",\r\n[]\ufeff")
)
df = sqlContext.read.json(thanksMoiraRDD)
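As an aside, if the cluster were on Spark 2.2 or later, the multiLine option can parse a file whose top level is a JSON array directly, which may avoid the manual stripping; a hedged sketch (the path is illustrative), with the caveat that the whole file is then parsed by a single task:

# multiLine lets Spark treat the file as one JSON document, so a top-level
# array of objects becomes one row per element (Spark 2.2+).
df = spark.read.option("multiLine", "true").json("/a/valid/file/path")
df.printSchema()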

Can't load sparse matrix correctly into Octave

I have the task of loading symmetric positive definite sparse matrices from The University of Florida Sparse Matrix Collection into GNU Octave. I need to study different ordering algorithms, like symamd, but I can't use them since the matrices are not stored as square matrices.
As an example, I have chosen bcsstk17.
I've tried different load methods with the .mat files:
load -mat bcsstk17
load -mat-binary bcsstk17
load -6 bcsstk17
load -v6 bcsstk17
load -7 bcsstk17
load -mat4-binary bcsstk17
error: load: can't read binary file
load -4 bcsstk17
But none of them worked: my workspace variables stay empty.
When I load the Matrix Market format file with load bcsstk17.mtx, I get a 219813x3 matrix.
I've tried the full command, but I get the same 219813x3 matrix.
What am I doing wrong?
Not sure why you're trying to load the .mtx file when there's a MATLAB/Octave-specific .mat format offered there.
Just download the bcsstk17.mat file, and load it:
load bcsstk17.mat
You will then see in your workspace a variable called Problem, which is of type struct. This contains several fields, including an A field which seems to hold your data in the form of a sparse matrix. In other words, your data can be accessed as Problem.A.
You shouldn't be bothering with the .mtx file at all. However, for completeness I will explain what you're seeing when you load it. The .mat file is a binary format, whereas a .mtx file is a human-readable format (i.e. it contains normal ASCII text). In particular, it consists of a 'header' of comments, which start with a % character, then a row which encodes the size of the sparse matrix in each dimension, and then "space-delimited" data, where each row represents an element in the matrix, and the three columns represent the row, the column, and the value of that element.
When MATLAB/Octave comes across an ASCII file containing data (plus comments), regardless of the extension, as long as the data looks like a valid 2D array of numbers, it loads the data contents of that file into a variable with the same name as the file.
Clearly this is not what you want, not least because the size row will be interpreted as a normal row of data in an Nx3 matrix. In other words, MATLAB/Octave is just loading a file it perceives as text-based and dumping the values it sees inside into a variable. The extension .mtx is irrelevant as far as MATLAB/Octave is concerned; it is most definitely not interpreting or decoding the file in any way related to the .mtx specification.
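Just to make that triplet layout concrete (stepping outside Octave for a moment, using the same Python that appears elsewhere on this page), SciPy's Matrix Market reader decodes the same file into an actual sparse matrix instead of an Nx3 table:

from scipy.io import mmread

# mmread understands the .mtx header, the size row and the triplets,
# and returns a SciPy sparse matrix in coordinate (COO) format.
A = mmread("bcsstk17.mtx")
print(A.shape)   # the true square dimensions, not 219813x3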

What is an efficient way of reading and processing a very large CSV file (> 1 GB) in Scala?

In Scala, how do you efficiently (in terms of memory consumption and performance) read a very large CSV file? Is it fast enough to just stream it line by line and process each line at each iteration?
What I need to do with the CSV data:
In my application, a single line in the CSV file is treated as one single record, and all the records of the CSV file are to be converted into XML elements and JSON format and saved into another file in XML and JSON formats.
So the question is: while reading the file from CSV, is it a good idea to read the file in chunks and provide each chunk to another thread which will convert those CSV records into XML/JSON and write that XML/JSON to a file? If yes, how?
The data in the CSV can be anything; there is no restriction on the type, it can be numeric, big decimal, string or date. Is there any easy way to handle these different data types before saving them to XML? Or do we not need to take care of the types?
Many Thanks
If this is not a one-time task, create a program that will break this 1 GB file into small files. Then provide those new files as input to separate futures.
Each future will read one file, and they resolve in the order of the file content: File4 resolves after File3, which resolves after File2, which resolves after File1.
As the file has no key-value pairs or hierarchical data structure, I would suggest just reading it as strings.
Hope this helps.