What format does BigQuery json export use? - json

I was trying to load data from Google's JSON export, but it doesn't appear to be valid JSON (ECMA-404, RFC 7159, RFC 4627). Here is what I was expecting:
[{},{},{}]
But here is what it's giving:
{}{}{}
Here's an example output from clicking the "Download as JSON" button on a four-row query result:
{"c0":"001U0000016lf5jIAA","c1":"Tim Burton's Corpse Bride","c2":"a0KU000000OkQ8IMAV","c3":"Luxembourg","c4":"German","c5":"Sub & Audio","c21":null,"c22":"2025542.0"}
{"c0":"001U0000016lf5jIAA","c1":"Tim Burton's Corpse Bride","c2":"a0KU000000OkQ8IMAV","c3":"Luxembourg","c4":"German","c5":"Sub & Audio","c21":null,"c22":"2025542.0"}
{"c0":"001U0000016lf5jIAA","c1":"Tim Burton's Corpse Bride","c2":"a0KU000000OjUuEMAV","c3":"Luxembourg","c4":"French - Parisian","c5":"Sub & Audio","c21":null,"c22":"2025542.0"}
{"c0":"001U0000016lf5jIAA","c1":"Tim Burton's Corpse Bride","c2":"a0KU000000OkQ8IMAV","c3":"Luxembourg","c4":"German","c5":"Sub & Audio","c21":null,"c22":"2025542.0"}
Is there a reason why BigQuery uses this export format for JSON? Are there other Google services that depend on this format, or why would it push a non-standard JSON format? (Maybe I'm just misunderstanding the JSON Lines format.) Note that this is from the web UI, not the API, which does give valid JSON.

BigQuery reads and outputs newline-delimited JSON - this is because traditional JSON doesn't adapt well to the needs of big data.
See:
http://specs.okfnlabs.org/ndjson/
https://en.wikipedia.org/wiki/JSON_streaming#Line-delimited_JSON
The output of "Download as JSON" shown in the question is compatible with the JSON input that BigQuery can read.
Note that the web UI also offers to view the results of a query as JSON - and those results are formatted as a traditional JSON object. I'm not sure what design decision led to this incompatible output here - but results in that form can't be imported back into BigQuery.
So in general, this format is incompatible with BigQuery:
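[{},{},{}]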
While this is compatible with BigQuery:
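{}
{}
{}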
Why is this less-traditional JSON format the better choice in the big data world? Encapsulating a trillion rows within [...] defines a single object with a trillion rows - which is hard to parse and handle. Newline-delimited JSON solves this problem by making each row an independent object.
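For example, here is a minimal Python sketch (file names are hypothetical) showing why the newline-delimited form is easy to stream - each line is parsed independently, while the array form has to be loaded in full before json.loads() can return anything:
import json

# Newline-delimited JSON: each line is an independent object, so rows can be
# processed one at a time without holding the whole file in memory.
with open('export.ndjson') as f:   # hypothetical file name
    for line in f:
        row = json.loads(line)
        print(row)

# Traditional JSON array: the entire file must be read and parsed before a
# single row is available - one giant object.
with open('export.json') as f:     # hypothetical file name
    rows = json.loads(f.read())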

Related

Reading JSON in Azure Synapse

I'm trying to understand the code for reading a JSON file in Synapse Analytics. Here's the code provided by the Microsoft documentation:
Query JSON files using serverless SQL pool in Azure Synapse Analytics
select top 10 *
from openrowset(
        bulk 'https://pandemicdatalake.blob.core.windows.net/public/curated/covid-19/ecdc_cases/latest/ecdc_cases.jsonl',
        format = 'csv',
        fieldterminator = '0x0b',
        fieldquote = '0x0b'
    ) with (doc nvarchar(max)) as rows
go
I wonder why the format = 'csv'. Is it trying to convert JSON to CSV to flatten the file?
Why they didn't just read the file as a SINGLE_CLOB, I don't know
When you use SINGLE_CLOB, the entire file is imported as one value, and the content of this file is not well formed as a single JSON document. Using SINGLE_CLOB would mean more work after the openrowset, before we could use the content as JSON (since it is not valid JSON, we would need to parse the value ourselves). It can be done, but it would probably require more work.
The format of the file is multiple JSON-like strings, each on a separate line - "line-delimited JSON", as the document calls it.
By the way, if you check the history of the document on GitHub, you will find that originally this was not the case. As far as I remember, the file originally included a single JSON document with an array of objects (the content was wrapped with []). Someone named "Ronen Ariely" in fact found this issue in the document, which is why you can see my name in the list of the authors of the document :-)
I wonder why the format = 'csv'. Is it trying to convert json to csv to flatten the hierarchy?
(1) JSON is not a data type in SQL Server. There is no data type named JSON. What we have in SQL Server are tools, such as functions, that work on text and provide support for strings in a JSON-like format. Therefore, we do not CONVERT to JSON or from JSON.
(2) The format parameter has nothing to do with JSON. It specifies that the content of the file is a comma-separated values file. You can (and should) use it whenever your file is well formed as a comma-separated values file (commonly known as a CSV file).
In this specific sample in the document, the values in the CSV file are strings, each of which is in a valid JSON format. Only after the file is read using openrowset do we start to parse the content of the text as JSON.
Notice that only after the heading "Parse JSON documents" does the document start to talk about parsing the text as JSON.
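If it helps to see the same trick outside T-SQL, here is a rough Python sketch (file path hypothetical) of what format = 'csv' with a 0x0b terminator effectively does: because 0x0b never occurs in the data, every line of the .jsonl file comes back as a single "field", which is only then parsed as JSON in a separate step:
import csv
import json

# Use a field delimiter (0x0b) that never appears in the data, so each line
# is returned as one whole value rather than being split into columns.
with open('ecdc_cases.jsonl', newline='') as f:   # hypothetical local copy
    reader = csv.reader(f, delimiter='\x0b', quoting=csv.QUOTE_NONE)
    for row in reader:
        doc = json.loads(row[0])   # parse the text as JSON afterwards
        print(doc)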

Export as JSON using BigQueryToCloudStorageOperator

When I use the BigQuery console manually, I can see that the 3 options when exporting a table to GCS are CSV, JSON (Newline delimited), and Avro.
With Airflow, when using the BigQueryToCloudStorageOperator operator, what is the correct value to pass to export_format in order to transfer the data to GCS as JSON (Newline delimited)? Is it simply JSON? All examples I've seen online for BigQueryToCloudStorageOperator use export_format='CSV', never for JSON, so I'm not sure what the correct value here is. Our use case needs JSON, since the 2nd task in our DAG (after transferring data to GCS) is to then load that data from GCS into our MongoDB Cluster with mongoimport.
I found that the value export_format='NEWLINE_DELIMITED_JSON' was required, after finding the documentation at https://cloud.google.com/bigquery/docs/reference/rest/v2/Job#jobconfigurationextract and referring to the values for destinationFormat.
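For reference, a minimal sketch of how this might look in a DAG (the table, bucket, and DAG names are made up, and the import path assumes the older contrib operator rather than the newer providers package):
from datetime import datetime
from airflow import DAG
from airflow.contrib.operators.bigquery_to_gcs import BigQueryToCloudStorageOperator

with DAG('bq_export_example', start_date=datetime(2021, 1, 1), schedule_interval=None) as dag:
    export_to_gcs = BigQueryToCloudStorageOperator(
        task_id='export_my_table_to_gcs',
        source_project_dataset_table='my-project.my_dataset.my_table',         # made-up table
        destination_cloud_storage_uris=['gs://my-bucket/export/data-*.json'],  # made-up bucket
        export_format='NEWLINE_DELIMITED_JSON',  # produces newline-delimited JSON in GCS
    )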
According to the BigQuery documentation, the three possible formats to which you can export BigQuery query results are CSV, JSON, and Avro (which is consistent with the UI drop-down menu).
I would try with export_format='JSON' as you already proposed.

Cannot identify proper format for a json request body stored and used in csv file for use in a karate scenario

I am having trouble identifying the proper format to store a JSON request body in a CSV file, and then use the CSV file value in a scenario.
This works properly within a scenario:
And request '{"contextURN":"urn:com.myco.here:env:booking:reservation:0987654321","individuals":[{"individualURN":"urn:com.myco.here:env:booking:reservation:0987654321:individual:12345678","name":{"firstName":"NUNYA","lastName":"BIDNESS"},"dateOfBirth":"1980-03-01","address":{"streetAddressLine1":"1 Myplace","streetAddressLine2":"","city":"LANDBRANCH","countrySubdivisionCode":"WV","postalCode":"25506","countryCode":"USA"},"objectType":"INDIVIDUAL"},{"individualURN":"urn:com.myco.here:env:booking:reservation:0987654321:individual:23456789","name":{"firstName":"NUNYA","lastName":"BIZNESS"},"dateOfBirth":"1985-03-01","address":{"streetAddressLine1":"1 Myplace","streetAddressLine2":"","city":"BRANCHLAND","countrySubdivisionCode":"WV","postalCode":"25506","countryCode":"USA"},"objectType":"INDIVIDUAL"}]}'
However, when stored in a CSV file as follows (I've tried quite a number of other formatting variations)
'{"contextURN":"urn:com.myco.here:env:booking:reservation:0987654321","individuals":[{"individualURN":"urn:com.myco.here:env:booking:reservation:0987654321:individual:12345678","name":{"firstName":"NUNYA","lastName":"BIDNESS"},"dateOfBirth":"1980-03-01","address":{"streetAddressLine1":"1 Myplace","streetAddressLine2":"","city":"LANDBRANCH","countrySubdivisionCode":"WV","postalCode":"25506","countryCode":"USA"},"objectType":"INDIVIDUAL"},{"individualURN":"urn:com.myco.here:env:booking:reservation:0987654321:individual:23456789","name":{"firstName":"NUNYA","lastName":"BIZNESS"},"dateOfBirth":"1985-03-01","address":{"streetAddressLine1":"1 Myplace","streetAddressLine2":"","city":"BRANCHLAND","countrySubdivisionCode":"WV","postalCode":"25506","countryCode":"USA"},"objectType":"INDIVIDUAL"}]}',
and used in scenario as:
And request requestBody
my test fails with "javascript evaluation failed:", followed by the JSON above and ":1:63 Missing close quote ^ ... at line number 1 at column number 63".
Can you please identify the correct formatting or the usage error I am missing? Thanks
We just use a basic CSV library behind the scenes. I suggest you roll your own Java helper class that does whatever processing / pre-processing you need.
Do read this answer as well: https://stackoverflow.com/a/54593057/143475
I can't make sense of your JSON but if you are trying to fit JSON into CSV, sorry - that's not a good idea. See this answer: https://stackoverflow.com/a/62449166/143475

How do I read a Large JSON Array File in PySpark

Issue
I recently encountered a challenge in Azure Data Lake Analytics when I attempted to read in a Large UTF-8 JSON Array file and switched to HDInsight PySpark (v2.x, not 3) to process the file. The file is ~110G and has ~150m JSON Objects.
HDInsight PySpark does not appear to support the array-of-JSON file format for input, so I'm stuck. Also, I have "many" such files, each with a different schema containing hundreds of columns, so creating the schemas for those is not an option at this point.
Question
How do I use out-of-the-box functionality in PySpark 2 on HDInsight to enable these files to be read as JSON?
Thanks,
J
Things I tried
I used the approach at the bottom of this page from Databricks, which supplied the below code snippet:
import json

# wholeTextFiles() reads each entire file as a single (path, content) pair,
# so json.loads() is applied to the complete file content at once.
df = sc.wholeTextFiles('/tmp/*.json').flatMap(lambda x: json.loads(x[1])).toDF()
display(df)  # display() is a Databricks notebook helper
I tried the above, not understanding how "wholeTextFiles" works, and of course ran into OutOfMemory errors that killed my executors quickly.
I attempted loading to an RDD and other open methods, but PySpark appears to support only the JSONLines JSON file format, and I have the Array of JSON Objects due to ADLA's requirement for that file format.
I tried reading it in as a text file, stripping the array characters, splitting on the JSON object boundaries, and converting to JSON like the above, but that kept giving errors about being unable to convert unicode and/or str(ings).
I found a way through the above and converted to a dataframe containing one column with rows of strings that were the JSON objects. However, I did not find a way to output only the JSON strings from the dataframe rows to an output file by themselves. They always came out as
{'dfColumnName':'{...json_string_as_value}'}
I also tried a map function that accepted the above rows, parsed them as JSON, extracted the values (the JSON I wanted), then parsed the values as JSON. This appeared to work, but when I tried to save, the RDD was of type PipelineRDD and had no saveAsTextFile() method. I then tried the toJSON method, but kept getting errors about "found no valid JSON Object", which admittedly I did not understand, along with other conversion errors.
I finally found a way forward. I learned that I could read JSON directly from an RDD, including a PipelineRDD. I found a way to remove the unicode byte order mark and the wrapping array square brackets, split the JSON objects on a fortunate delimiter, and get a distributed dataset for more efficient processing. The output dataframe now has columns named after the JSON elements, the schema is inferred, and it adapts dynamically to other file formats.
Here is the code - hope it helps!
# Spark considers arrays of JSON objects to be an invalid input format,
# and unicode files are prefixed with a byte order marker, so both have
# to be stripped before sqlContext.read.json() can infer the schema.
# 'partitions' is a placeholder for whatever partition count you choose.
thanksMoiraRDD = sc.textFile('/a/valid/file/path', partitions).map(
    lambda x: x.encode('utf-8', 'ignore').strip(u",\r\n[]\ufeff")
)
df = sqlContext.read.json(thanksMoiraRDD)
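If it is useful, a quick way to sanity-check the result and write it back out as newline-delimited JSON, which Spark reads natively (the output path below is hypothetical):
# Inspect the inferred schema, then write one JSON object per line.
df.printSchema()
df.write.json('/a/valid/output/path')   # hypothetical output path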

Extracting JSON data to PDF file

I am trying to extract the data that I receive from a REST client in JSON format into a PDF file. I know that I need to format it in columns/sections, so first I need to convert it to a text format, but is there a way to do that in Ruby? If so, does anyone have an example?
Here is the format of the JSON data that I am getting from the REST API:
{"id"=>123456, "documentKey"=>"xyz", "globalId"=>"xyz", "itemType"=>1234,
"project"=>123, "createdDate"=>"2015-02-20T00:11:56.000+0000",
"modifiedDate"=>"2015-02-20T00:11:56.000+0000",
"lastActivityDate"=>"2016-03-02T16:23:52.000+0000",
"createdBy"=>1234, "modifiedBy"=>12342,
"fields"=>{"name"=>"Introduction",
"globalId"=>"Text",
"documentKey"=>"Text-2",
"description"=>"Some introduction"
}
}
Check out Prawn. It does not just 'do' this for you; you will still have to figure out how to properly transform the hierarchical JSON data into flat, 'text-like' data. You will have to make decisions such as: do I want to display timestamps, show empty values, etc.?
Here is a very crude example:
require 'prawn'
data = {"id"=>123456, "documentKey"=>"xyz", "globalId"=>"xyz", "itemType"=>1234, "project"=>123, "createdDate"=>"2015-02-20T00:11:56.000+0000", "modifiedDate"=>"2015-02-20T00:11:56.000+0000", "lastActivityDate"=>"2016-03-02T16:23:52.000+0000", "createdBy"=>1234, "modifiedBy"=>12342, "fields"=>{"name"=>"Introduction", "globalId"=>"Text", "documentKey"=>"Text-2", "description"=>" Some introduction"}}
Prawn::Document.generate('example.pdf') do
  text "Project: #{data['project']}"
  text "Item Type: #{data['itemType']}"
  text "Description: #{data['fields']['description']}"
end
For anything more advanced I would check the prawn manual.
The other quick option is to create an HTML template and convert that to PDF; there are multiple gems for this as well, such as Wicked_PDF or PDFKit.