How can I convert XML to JSON in Oracle?

If I have
<xml><name>himanshu</name><age>24</age></xml>
how can I convert it to
{"name":"himanshu","age":24} ?
Thanks.

In Oracle 12.2 you should be able to use:
SELECT JSON_OBJECTAGG( id VALUE text )
FROM   XMLTABLE(
         '/xml/*'
         PASSING XMLTYPE( '<xml><name>himanshu</name></xml>' )
         COLUMNS id   VARCHAR2(200) PATH './name()',
                 text VARCHAR2(200) PATH './text()'
       );
I'm not on a 12c system so this is untested.
In earlier versions you can write a Java function [1] [2] using one of the many Java JSON packages to perform the conversion, load it into the database using the loadjava utility (or a CREATE JAVA statement), and then call it from SQL.
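If installing Java is not an option, here is a minimal SQL-only sketch for pre-12.2 versions (LISTAGG needs 11.2 or later). It is an illustration under stated assumptions rather than a general converter: it presumes the element values need no JSON escaping and it emits every value as a string, so the age comes out as "age":"24" rather than "age":24.
-- Pre-12.2 sketch: build the JSON object by hand with LISTAGG.
-- Assumes the values need no JSON escaping; every value is emitted as a string.
SELECT '{' || LISTAGG( '"' || id || '":"' || text || '"', ',' )
              WITHIN GROUP ( ORDER BY ROWNUM ) || '}' AS json_doc
FROM   XMLTABLE(
         '/xml/*'
         PASSING XMLTYPE( '<xml><name>himanshu</name><age>24</age></xml>' )
         COLUMNS id   VARCHAR2(200) PATH './name()',
                 text VARCHAR2(200) PATH './text()'
       );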

You can use the XML to JSON filter to convert an XML document to a JavaScript Object Notation (JSON) document. For details on the mapping conventions used, see: GitHub - Mapping convention.
Configuration
To configure the XML to JSON filter, specify the following fields:
Name:
Enter a suitable name to reflect the role of this filter.
Automatically insert JSON array boundaries:
Select this option to attempt to automatically reconstruct JSON arrays from the incoming XML document. This option is selected by default.
[Note]
If the incoming XML document includes the relevant processing instruction, the JSON array is reconstructed regardless of this option setting. If the XML document does not contain the processing instruction and this option is selected, the filter attempts to guess what should be part of the array by examining the element names.

Related

Reading JSON in Azure Synapse

I'm trying to understand the code for reading a JSON file in Synapse Analytics. Here's the code provided by the Microsoft documentation:
Query JSON files using serverless SQL pool in Azure Synapse Analytics
select top 10 *
from openrowset(
bulk 'https://pandemicdatalake.blob.core.windows.net/public/curated/covid-19/ecdc_cases/latest/ecdc_cases.jsonl',
format = 'csv',
fieldterminator ='0x0b',
fieldquote = '0x0b'
) with (doc nvarchar(max)) as rows
go
I wonder why the format = 'csv'. Is it trying to convert JSON to CSV to flatten the file?
Why they didn't just read the file as a SINGLE_CLOB, I don't know.
When you use SINGLE_CLOB, the entire file is imported as one value, and the content of the file in doc is not well formed as a single JSON document. Using SINGLE_CLOB would mean more work after the OPENROWSET before we could use the content as JSON (since it is not valid JSON, we would need to parse the value). It can be done, but it would probably require more work.
The file consists of multiple JSON-like strings, each on a separate line: "line-delimited JSON", as the document calls it.
By the way, if you check the history of the document on GitHub, you will find that this was not originally the case. As far as I remember, the file originally contained a single JSON document with an array of objects (it was wrapped with [] once loaded). Someone named "Ronen Ariely" in fact found this issue in the document, which is why you can see my name in the list of the authors of the document :-)
I wonder why the format = 'csv'. Is it trying to convert json to csv to flatten the hierarchy?
(1) JSON is not a data type in SQL Server. There is no data type named JSON. What we have in SQL Server are tools, such as functions, that work on text and provide support for strings in JSON-like format. Therefore, we do not CONVERT to JSON or from JSON.
(2) The format parameter has nothing to do with JSON. It specifies that the content of the file is a comma-separated values file. You can (and should) use it whenever your file is well formatted as comma-separated values (also commonly known as a CSV file).
In this specific sample in the document, the values in the CSV file are strings, each of which is in valid JSON format. Only after we read the file using OPENROWSET do we start to parse the content of the text as JSON.
Notice that only after the title "Parse JSON documents" does the document start to speak about parsing the text as JSON.
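As an illustration of that parse step, here is a hedged sketch: it reads each line into doc exactly as in the query above and then pulls individual values out with JSON_VALUE. The property names (date_rep, countries_and_territories, cases) are assumptions about the fields in the ECDC file; check the file or the document for the exact names.
-- Read one JSON string per line into doc (same csv/0x0b trick), then parse it with JSON_VALUE.
select top 10
    json_value(doc, '$.date_rep')                  as date_rep,
    json_value(doc, '$.countries_and_territories') as country,
    json_value(doc, '$.cases')                     as cases
from openrowset(
        bulk 'https://pandemicdatalake.blob.core.windows.net/public/curated/covid-19/ecdc_cases/latest/ecdc_cases.jsonl',
        format = 'csv',
        fieldterminator = '0x0b',
        fieldquote = '0x0b'
    ) with (doc nvarchar(max)) as rows
go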

Modify JSON generated in Maximo 7.6.1

I'm able to successfully generate a JSON file from Maximo; however, I would like to modify the JSON before it gets generated. Below is a sample of the JSON that gets generated in Maximo:
{"lastreadingdate":"2020-01-30T16:48:33+01:00",
"linearassetmeterid":0,
"sinceinstall":0.0,
"lastreading":"1,150",
"plustinitrdng":0.0,
"sincelastinspect":0.0,
"_rowstamp":"568349195",
"assetnum":"RS100003",
"active":true,
"assetmeterid":85,
"lifetodate":0.0,
"measureunitid":"KWH",
"metername":"1010",
"remarks":"TESTING JSON"}
I need the JSON to be generated as below:
{"spi:action": "OSLC draft",
"spi:tri1readingdate":"2020-01-30T16:48:33+01:00",
"spi:tryassetmeterid":0,
"spi:install":0.0,
"spi:lastreadingTx":"1,150",
"spi:intrdngtrX":0.0,
and so on...}
Basically I need to change the target attribute names and add the "spi" prefix. Below is the error occurring in JSON Mapping.
You're not specifying how you generate the JSON file, but I'll quickly explain how you can achieve this:
As Dex pointed out, there is a JSON Mapping app in the integration module that you can use to map your outbound object structure's fields to your target structure naming.
You define your JSON structure on the JSON Mapping tab by providing a JSON sample.
You then define your mapping with Maximo on the Properties tab.
Reading this IBM doc before jumping right into it should help you a lot:
https://www.ibm.com/developerworks/community/wikis/form/anonymous/api/wiki/02db2a84-fc66-4667-b760-54e495526ec1/page/e10f6e96-435d-433c-8259-5690eb756779/attachment/169224c7-10a5-4cee-af72-697a476f8b2e/media/JSON

NiFi non-Avro JSON Reader/Writer

It appears that the standard Apache NiFi readers/writers can only parse JSON input based on Avro schema.
Avro schema is limiting for JSON, e.g. it does not allow valid JSON properties starting with digits.
The JoltTransformJSON processor can help here (it doesn't impose Avro limitations on what the input JSON may look like), but it seems that this processor does not support batch FlowFiles. It is also not based on the readers and writers (maybe because of that).
Is there a way to read arbitrary valid batch JSON input, e.g. in multi-line form
{"myprop":"myval","12345":"12345",...}
{"myprop":"myval2","12345":"67890",...}
and transform it to other JSON structure, e.g. defined by JSON schema, and e.g. using JSON Patch transformation, without writing my own processor?
Update
I am using Apache NiFi 1.7.1
Update 2
Unfortunately, @Shu's suggestion did not work; I am getting the same error.
I reduced the case to a single UpdateRecord processor that reads JSON with numeric properties and writes JSON without such properties, using the
myprop : /data/5836c846e4b0f28d05b40202
mapping. Still the same error :(
it does not allow valid JSON properties starting with digits?
This bug (NIFI-4612) was fixed in NiFi 1.5. We can use AvroSchemaRegistry to define your schema and set the
Validate Field Names
property to
false
Then we can have Avro schema field names starting with digits.
For more details refer to this link.
Is there a way to read arbitrary valid batch JSON input, e.g. in multi-line form?
This bug (NIFI-4456) was fixed in NiFi 1.7. If you are not using that version of NiFi, we can work around it by building an array of JSON messages (with a comma delimiter) using the flow below.
Flow:
1. SplitText     // split the flowfile with a line count of 1
2. MergeRecord   // merge the flowfiles into one
3. ConvertRecord // convert the merged flowfile using the record reader/writer
For more details regarding this particular issue, refer to this link (I have explained it with the flow).

Apache Drill view.drill json syntax to describe complex data types

Does the JSON fields array in the view.drill file created by CREATE VIEW support describing an ARRAY or STRUCT type?
I want to define views of Parquet files so they are described by the JDBC driver (DatabaseMetaData.getTables, getColumns, ...). I want to project the columns of the Parquet file as their actual type (e.g. INTEGER) instead of leaving Drill to describe them as the ANY type. Unfortunately, that requires a CAST(<column> AS <type>), and the target types supported by CAST do not include ARRAY or STRUCT.
Hence, if the view.drill file could be externally defined without requiring a CAST, and it supported complex types, I would build the views programmatically.
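For context, here is a hedged sketch of the CAST-based workaround described above; the workspace, file path, and column names (dfs.tmp.typed_view, /data/events, id, name) are hypothetical. It only works for simple types, since CAST has no ARRAY or STRUCT target, which is exactly the limitation in question.
-- Hypothetical Drill view that projects simple columns with concrete types;
-- ARRAY/STRUCT columns have no CAST target, so they cannot be typed this way.
CREATE VIEW dfs.tmp.typed_view AS
SELECT CAST(id   AS INTEGER)      AS id,
       CAST(name AS VARCHAR(100)) AS name
FROM   dfs.`/data/events`;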

MySQL UDF for working with json?

Are there any good UDFs in MySQL to deal with JSON data that support retrieving a particular value in JSON (by dot-notation key, e.g. json_get('foo.bar.baz')) as well as setting the value of a particular key, e.g. json_set('foo.bar.baz', 'value')?
I found http://www.mysqludf.org/lib_mysqludf_json/ - but it seems to only provide the ability to create json data structures from non-json column values, as opposed to interacting with json column values.
This UDF is able to parse JSON and return the value of an attribute:
https://github.com/kazuho/mysql_json
This other one too: https://github.com/webaroo/mysql-json-udf
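For illustration, here is a hedged sketch of how such a UDF is typically registered and called. It assumes the kazuho/mysql_json plugin exposes a json_get function taking the JSON text followed by path components, and that the shared library is named mysql_json.so; the table and column (my_table, doc) are hypothetical. Check the repository README for the exact function name, library name, and argument order.
-- Register the UDF (library name is an assumption; see the project README).
CREATE FUNCTION json_get RETURNS STRING SONAME 'mysql_json.so';
-- Pull a nested attribute out of a JSON column (table/column names are hypothetical).
SELECT json_get(doc, 'foo', 'bar', 'baz') AS baz
FROM   my_table;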