Reading JSON in Azure Synapse

I'm trying to understand the code for reading a JSON file in Synapse Analytics. Here's the code provided by the Microsoft documentation:
Query JSON files using serverless SQL pool in Azure Synapse Analytics
select top 10 *
from openrowset(
    bulk 'https://pandemicdatalake.blob.core.windows.net/public/curated/covid-19/ecdc_cases/latest/ecdc_cases.jsonl',
    format = 'csv',
    fieldterminator = '0x0b',
    fieldquote = '0x0b'
) with (doc nvarchar(max)) as rows
go
I wonder why the format = 'csv'. Is it trying to convert JSON to CSV to flatten the file?

Why they didn't just read the file as a SINGLE_CLOB I don't know
When you use SINGLE_CLOB, the entire file is imported as one value, and that value is not well formatted as a single JSON document. Using SINGLE_CLOB would therefore mean more work after the openrowset, before we can use the content as JSON (since it is not one valid JSON document, we would need to parse the value ourselves). It can be done, but it would probably require more work.
The file consists of multiple JSON-formatted strings, each on a separate line: "line-delimited JSON", as the document calls it.
By the way, if you check the history of the document on GitHub, you will find that originally this was not the case. As far as I remember, the file originally contained a single JSON document with an array of objects (it was wrapped with [] when loaded). Someone named "Ronen Ariely" in fact found this issue in the document, which is why you can see my name in the list of authors of the document :-)
I wonder why the format = 'csv'. Is it trying to convert json to csv to flatten the hierarchy?
(1) JSON is not a data type in SQL Server. There is no data type named JSON. What we have in SQL Server are tools, such as functions, which work on text and provide support for strings in JSON format. Therefore, we do not CONVERT to JSON or from JSON.
(2) The format parameter has nothing to do with JSON. It specifies that the content of the file is a comma-separated-values file. You can (and should) use it whenever your file is well formatted as a comma-separated-values file (commonly known as a csv file).
In this specific sample in the document, the values in the csv file are strings, each of which is in valid JSON format. Only after reading the file with openrowset do we start to parse the content of the text as JSON.
Notice that only after the title "Parse JSON documents" does the document start to speak about parsing the text as JSON.
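The same two-step idea can be sketched in plain Python (a hypothetical stand-in for what openrowset with the 0x0b terminator does: each line becomes one raw string value, which is only then parsed as JSON):

```python
import json

# Sample line-delimited JSON, similar in shape to ecdc_cases.jsonl.
jsonl_text = (
    '{"country": "France", "cases": 10}\n'
    '{"country": "Spain", "cases": 7}\n'
)

# Step 1: read each line as one opaque string value. This mirrors
# format='csv' with fieldterminator 0x0b: no comma splitting happens,
# so each line lands whole in a single "doc" column.
rows = [line for line in jsonl_text.splitlines() if line]

# Step 2: only now parse each string as JSON.
docs = [json.loads(row) for row in rows]
assert docs[0]["country"] == "France"
```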

Related

JSON variable indent for different entries

Background: I want to store a dict object in json format that has say, 2 entries:
(1) Some object that describes the data in (2). This is small data: mostly definitions, parameters that control things, etc. (maybe called metadata) that one would like to read before using the actual data in (2). In short, I want good human readability for this portion of the file.
(2) The data itself is a large chunk and should be more machine-readable (no need for a human to gaze over it when opening the file).
Problem: How do I specify a custom indent, say 4, for (1) and None for (2)? If I use something like json.dump(data, trig_file, indent=4) where data = {'meta_data': small_description, 'actual_data': big_chunk}, the large data is indented too, and all that whitespace makes the file large.
Assuming you can append json to a file:
Write {"meta_data":\n to the file.
Append the json for small_description formatted appropriately to the file.
Append ,\n"actual_data":\n to the file.
Append the json for big_chunk formatted appropriately to the file.
Append \n} to the file.
The idea is to do the json formatting of the "container" object by hand, using your json formatter as appropriate for each of the contained objects.
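A minimal Python sketch of the steps above (the file name and sample values are made up for illustration):

```python
import json

small_description = {"version": 1, "notes": "human-readable metadata"}
big_chunk = list(range(1000))  # stands in for the large payload

with open("data.json", "w") as f:
    f.write('{"meta_data":\n')
    json.dump(small_description, f, indent=4)       # pretty-printed
    f.write(',\n"actual_data":\n')
    json.dump(big_chunk, f, separators=(",", ":"))  # compact, no whitespace
    f.write("\n}")

# The hand-written container plus the two dumps form one valid JSON document.
with open("data.json") as f:
    data = json.load(f)
assert data["meta_data"] == small_description
assert data["actual_data"] == big_chunk
```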
Consider a different file format, interleaving keys and values as distinct documents concatenated together within a single file:
{"next_item": "meta_data"}
{
"description": "human-readable content goes here",
"split over": "several lines"
}
{"next_item": "actual_data"}
["big","machine-readable","unformatted","content","here","....."]
That way you can pass any indent parameters you want to each write, and you aren't doing any serialization by hand.
See How do I use the 'json' module to read in one JSON object at a time? for how one would read a file in this format. One of its answers wisely suggests the ijson library, which accepts a multiple_values=True argument.
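A sketch of this concatenated-documents format in plain Python; the reader below uses the standard library's raw_decode, which does by hand what ijson with multiple_values=True does for you:

```python
import json

docs = [
    {"next_item": "meta_data"},
    {"description": "human-readable content", "split over": "several lines"},
    {"next_item": "actual_data"},
    ["big", "machine-readable", "unformatted", "content"],
]

# Write: every document gets its own indent - pretty for the metadata,
# compact for the big data.
with open("stream.json", "w") as f:
    for doc, indent in zip(docs, [None, 4, None, None]):
        json.dump(doc, f, indent=indent)
        f.write("\n")

# Read back one document at a time, skipping the whitespace between them.
def iter_json_docs(text):
    decoder = json.JSONDecoder()
    idx = 0
    while idx < len(text):
        doc, idx = decoder.raw_decode(text, idx)
        yield doc
        while idx < len(text) and text[idx] in " \t\r\n":
            idx += 1

with open("stream.json") as f:
    round_tripped = list(iter_json_docs(f.read()))
assert round_tripped == docs
```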

How to remove escaped character when parsing xml to json with copy data activity in Azure Data Factory?

I have an ADF pipeline exporting from an xml dataset (ADLS) to a json dataset (ADLS) with a Copy Data activity. Due to the complex xml structure, I need to parse the nested xml to nested json, then use T-SQL to parse the nested json into a Synapse table.
However, the nested output has a double backslash (it seems to be an escape character) at nodes which have a comma in them. You can check a sample of the xml input and json output below:
xml input
<Address2>test, test</Address2>
json output
"Address2":"test\\, test"
How can I remove the double backslash in the output json with copy data activity in Azure Data Factory ?
Unfortunately, there is no such provision in the Copy Data activity.
However, I just tried with just the lines you provided as sample source and sink with a Copy Data activity, and it copies them as is. I don't see any \\. Perhaps you could share the exact pipeline you have, with details of the nested XML, JSON, and T-SQL that you are using.
Repro: (with all default settings and properties)

Can an ndjson file be saved using xdmp.save?

I have a requirement to gather certain JSON documents from my database and save them in an outside drive as one file for a downstream consumer.
Using server-side JavaScript I can combine the documents in a JSON object or array. However, they need to be saved into this single file in ndjson format.
Is there any way to do this using xdmp.save in MarkLogic? I thought of saving the documents as a sequence but that throws an error.
xdmp.save() expects a node() for the second parameter.
You could serialize the JSON docs, delimit them with newlines to generate the Newline Delimited JSON, and then create a text() node from that string.
// Join the serialized JSON documents with newlines and wrap the
// result in a single text() node, which xdmp.save() accepts.
const ndjson = new NodeBuilder()
  .addText(cts.search(cts.collectionQuery("json")).toArray().join("\n"))
  .toNode();
xdmp.save("/temp/ndjson.json", ndjson);

What format does BigQuery json export use?

I was trying to load data from Google's json export, but it looks like it's not valid JSON (ECMA-404, RFC 7159, RFC 4627). Here is what I'm expecting:
[{},{},{}]
But here is what it's giving:
{}{}{}
Here's an example output from clicking the "Download as JSON" button on a four-row query result:
{"c0":"001U0000016lf5jIAA","c1":"Tim Burton's Corpse Bride","c2":"a0KU000000OkQ8IMAV","c3":"Luxembourg","c4":"German","c5":"Sub & Audio","c21":null,"c22":"2025542.0"}
{"c0":"001U0000016lf5jIAA","c1":"Tim Burton's Corpse Bride","c2":"a0KU000000OkQ8IMAV","c3":"Luxembourg","c4":"German","c5":"Sub & Audio","c21":null,"c22":"2025542.0"}
{"c0":"001U0000016lf5jIAA","c1":"Tim Burton's Corpse Bride","c2":"a0KU000000OjUuEMAV","c3":"Luxembourg","c4":"French - Parisian","c5":"Sub & Audio","c21":null,"c22":"2025542.0"}
{"c0":"001U0000016lf5jIAA","c1":"Tim Burton's Corpse Bride","c2":"a0KU000000OkQ8IMAV","c3":"Luxembourg","c4":"German","c5":"Sub & Audio","c21":null,"c22":"2025542.0"}
Is there a reason why BigQuery uses this export format for json? Are there other Google services that depend on this format, or why would it push a non-standard json format? (Maybe I'm just misunderstanding the JSON Lines format.) Note, this is from the web UI, not the API, which gives valid json.
BigQuery reads and outputs newline-delimited JSON - this is because traditional JSON doesn't adapt well to the needs of big data.
See:
http://specs.okfnlabs.org/ndjson/
https://en.wikipedia.org/wiki/JSON_streaming#Line-delimited_JSON
The output of "Download as JSON" shown in the question is compatible with the JSON input that BigQuery can read.
Note that the web UI also offers to look at the results of a query as JSON - and those results are formatted as a traditional JSON object. I'm not sure what design decision led to this incompatible output here - but results in that form can't be imported back into BigQuery.
So in general, this format is incompatible with BigQuery:
[{},{},{}]
While this one is compatible with BigQuery:
{}
{}
{}
Why is this less traditional JSON format the best choice in the big data world? Encapsulating a trillion rows within [...] defines a single object with a trillion rows - which is hard to parse and handle. Newline-delimited JSON solves this problem: each row is an independent object.
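The difference can be seen in a small Python sketch (sample rows invented): newline-delimited rows can be parsed, or skipped, one at a time, while a traditional array must be parsed as a single object:

```python
import json

ndjson = '{"c0":"a","c22":"1.0"}\n{"c0":"b","c22":"2.0"}\n'

# NDJSON: each line is a complete, independent JSON document, so a
# reader can stream the file line by line.
rows = [json.loads(line) for line in ndjson.splitlines()]
assert len(rows) == 2

# Traditional JSON: the same rows wrapped in [...] form one object
# that must be parsed whole before any row is usable.
as_array = "[" + ",".join(ndjson.splitlines()) + "]"
assert json.loads(as_array) == rows
```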

How do I read a Large JSON Array File in PySpark

Issue
I recently encountered a challenge in Azure Data Lake Analytics when I attempted to read in a large UTF-8 JSON array file, and switched to HDInsight PySpark (v2.x, not 3) to process the file. The file is ~110G and has ~150m JSON objects.
HDInsight PySpark does not appear to support the array-of-JSON file format for input, so I'm stuck. Also, I have "many" such files, each with a different schema and hundreds of columns, so creating the schemas for them is not an option at this point.
Question
How do I use out-of-the-box functionality in PySpark 2 on HDInsight to enable these files to be read as JSON?
Thanks,
J
Things I tried
I used the approach at the bottom of this page from Databricks, which supplied the below code snippet:
import json
df = sc.wholeTextFiles('/tmp/*.json').flatMap(lambda x: json.loads(x[1])).toDF()
display(df)
I tried the above, not understanding how wholeTextFiles works, and of course ran into OutOfMemory errors that killed my executors quickly.
I attempted loading to an RDD and other open methods, but PySpark appears to support only the JSON Lines file format, and I have the array of JSON objects due to ADLA's requirement for that file format.
I tried reading it in as a text file, stripping the array characters, splitting on the JSON object boundaries, and converting to JSON like the above, but that kept giving errors about being unable to convert unicode and/or str(ings).
I found a way through the above and converted to a dataframe containing one column, with rows of strings that were the JSON objects. However, I did not find a way to output only the JSON strings from the dataframe rows to an output file by themselves. They always came out as
{'dfColumnName': '{...json_string_as_value}'}
I also tried a map function that accepted the above rows, parsed them as JSON, extracted the values (the JSON I wanted), then parsed the values as JSON. This appeared to work, but when I tried to save, the RDD was of type PipelineRDD and had no saveAsTextFile() method. I then tried the toJSON method, but kept getting errors about "found no valid JSON Object", which admittedly I did not understand, and of course other conversion errors.
I finally found a way forward. I learned that I could read JSON directly from an RDD, including a PipelineRDD. I found a way to remove the unicode byte-order header and the wrapping array square brackets, split the JSON objects on a fortunate delimiter, and get a distributed dataset for more efficient processing. The output dataframe had columns named after the JSON elements, inferred the schema, and adapts dynamically to other file formats.
Here is the code - hope it helps!:
# Spark considers arrays of JSON objects to be an invalid format,
# and unicode files are prefixed with a byte-order marker.
# 'partitions' is the desired number of partitions for the RDD.
thanksMoiraRDD = sc.textFile('/a/valid/file/path', partitions).map(
    lambda x: x.encode('utf-8', 'ignore').strip(u",\r\n[]\ufeff")
)
df = sqlContext.read.json(thanksMoiraRDD)