Equivalent XML for JSON - json

I came across this service from Stack Overflow:
https://api.stackexchange.com/2.3/questions?fromdate=1519862400&todate=1522368000&order=desc&sort=activity&site=stackoverflow&tagged=python
I believe the source is a database. How do I build XML that will produce data in a similar format?
I use the lines below:
xmldoc.Load(xmlFileName);                               // load the XML document
Newtonsoft.Json.JsonConvert.SerializeXmlNode(xmldoc);   // convert it to JSON
Any recommendation on how to build the XML, which is the reverse process? My solutions are heavily dependent on XML and flat files.

According to https://api.stackexchange.com/docs, the Stack Exchange API only supports JSON output, not XML, so you will have to convert the JSON to XML yourself.
My own preference is to do this conversion "by hand" using XSLT 3.0 rather than using a standard library, because standard libraries often give you XML that's rather difficult to work with.
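As a starting point, here is a minimal Java sketch that runs the XPath 3.1 json-to-xml() function through Saxon's s9api; it assumes Saxon 9.7 PE or later is on the classpath, and the JSON literal is a stand-in for the API response:

import net.sf.saxon.s9api.Processor;
import net.sf.saxon.s9api.QName;
import net.sf.saxon.s9api.SaxonApiException;
import net.sf.saxon.s9api.XPathCompiler;
import net.sf.saxon.s9api.XPathSelector;
import net.sf.saxon.s9api.XdmAtomicValue;

public class JsonToXmlDemo {
    public static void main(String[] args) throws SaxonApiException {
        // Stand-in for the JSON returned by the Stack Exchange API call above
        String json = "{\"items\":[{\"title\":\"Question 1\"},{\"title\":\"Question 2\"}]}";

        Processor proc = new Processor(true);          // true = licensed (PE/EE) configuration
        XPathCompiler xpath = proc.newXPathCompiler();
        xpath.setLanguageVersion("3.1");               // json-to-xml() is an XPath 3.1 function
        xpath.declareVariable(new QName("j"));

        XPathSelector selector = xpath.compile("json-to-xml($j)").load();
        selector.setVariable(new QName("j"), new XdmAtomicValue(json));

        // Prints the XML in the http://www.w3.org/2005/xpath-functions vocabulary,
        // which you can then reshape "by hand" with your own XSLT 3.0 stylesheet.
        System.out.println(selector.evaluate());
    }
}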

Related

Validation of converted JSON against XML or XSD

I have an XML file and an XSD file. I am using Apache NiFi to convert the XML to JSON. However, it is nested many levels deep, so I want to validate that the conversion is correct. I want to validate it against the XSD in Apache NiFi.
I will not be able to share company-sensitive information.
Is there any processor or script that I can use? There is an option of writing a Python script in a processor called ExecuteScript.
Thanks in advance
There are two parts to your question.
Can JSON be validated via XSD?
Does NiFi have a processor that validates JSON via XSD?
The first part is already answered here:
Validate JSON against XML Schema (XSD)
As for the second part: depending on the solution you end up going with, neither approach is implemented in a NiFi processor, and attempting to use ExecuteScript will not work for you because they require imported non-native modules. Instead, you would need to create your own custom processor in Java and import that into NiFi, which would solve your problem. This is all a bit labor-intensive.
Alternatively, you could try a reverse conversion back to XML into an attribute and then validate that attribute's content against the original XSD. This is a method I use a lot when writing unit tests. I haven't personally tried this in NiFi, but it sounds like it would be possible and would likely be the least complicated solution.
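The validation step itself is standard JAXP, so it could sit inside a custom processor or a unit test. A minimal sketch, where the XML string stands in for the reverse-converted attribute content and the XSD file path is a placeholder:

import java.io.File;
import java.io.StringReader;
import javax.xml.XMLConstants;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;
import javax.xml.validation.Validator;

public class XsdCheck {
    // Returns true if the XML (e.g. the reverse-converted attribute content) is valid against the XSD.
    static boolean isValid(String xml, File xsd) {
        try {
            SchemaFactory factory = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
            Schema schema = factory.newSchema(xsd);
            Validator validator = schema.newValidator();
            validator.validate(new StreamSource(new StringReader(xml)));
            return true;
        } catch (Exception e) {   // SAXException on validation failure, IOException on read errors
            return false;
        }
    }
}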

Automatic library generation for JSON

Is there a tool like Google's Protobuf for JSON? I know you can convert from the Protobuf format to JSON, but that requires a whole lot of extra serialization/deserialization. I was wondering if there is some kind of tool that lets you specify the structure of a JSON message and then automatically generates libraries for use in a specified language (direct serialization/deserialization, not just a wrapper around Protobuf's JSON formatter class).
I know nearly all languages provide their own in-house way of handling JSON, and many higher-level ones even let you avoid the boilerplate parsing code, but I was looking for a universal tool where you would only need to specify the format once and then get generated libraries for use in multiple languages.
The Protobuf equivalent would be JSON Schema, but it still depends on having a serializer or code generator available for your language, just as Protobuf does.
If you're looking at building a REST API, then the OpenAPI Specification plus swagger-codegen could be an option.
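For illustration, here is a minimal JSON Schema (draft-07) definition of the kind such a generator would consume; the message fields are made up:

{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "title": "Message",
  "type": "object",
  "properties": {
    "id":   { "type": "integer" },
    "name": { "type": "string" }
  },
  "required": ["id", "name"]
}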

Converting JSON to XML using XSLT

Following is my requirement:
Application A is creating JSON based on its Java Beans and sending it to my application.
I have to take this JSON, convert it into XML (the XSD for this is completely different from my JSON structure), and send it to Application B.
Solution 1) I am currently converting this JSON to XML using the json.org library. Then, using Apache Xalan and an XSL stylesheet, I convert it to the XML format required by App B.
Solution 2) Convert this JSON to a Java Bean (JB1). Then convert JB1 to another Java Bean (JB2) matching the XML structure required by Application B. Then convert JB2 to XML for App B.
Solution 3) Use Apache Xalan and Xerces to parse the input JSON and build the XML in Java itself, without using XSL.
Which is the better approach (in terms of simplicity of code and throughput)? As the JSON becomes more complex, is Solution 1 still easy to use? Please suggest if there is a better approach than these three.
XSLT 3.0 offers a built-in json-to-xml() function. Once you have the XML, you can easily transform it to your required format. It is implemented in Saxon 9.7 (PE or higher) and, I believe, in Exselt.
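For reference, json-to-xml() maps JSON onto a fixed XML vocabulary in the http://www.w3.org/2005/xpath-functions namespace. For example, json-to-xml('{"name":"App B","ids":[1,2]}') returns (the input JSON is only illustrative):

<map xmlns="http://www.w3.org/2005/xpath-functions">
   <string key="name">App B</string>
   <array key="ids">
      <number>1</number>
      <number>2</number>
   </array>
</map>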
Solution 1: Yes. This is the conventional and best path for both simple and complex JSON and simple or complex target XML (a sketch of this pipeline follows after Solution 3).
Solution 2: No. There's no reason to introduce Java Beans as an intermediate form, especially if you have no other need for them. This option unnecessarily introduces transformation and marshalling complexity.
Solution 3: No. Neither Xalan nor Xerces is designed to parse JSON; they are designed to parse XML.
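A minimal Java sketch of Solution 1, assuming the json.org library and a JAXP transformer (Xalan or the JDK's built-in one) are on the classpath; the payload and the stylesheet name app-b.xsl are placeholders:

import java.io.File;
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;
import org.json.JSONObject;
import org.json.XML;

public class JsonToAppBXml {
    public static void main(String[] args) throws Exception {
        // Stand-in for the JSON payload sent by Application A
        String json = "{\"order\":{\"id\":42,\"customer\":\"ACME\"}}";

        // Step 1: generic JSON -> XML with the json.org library
        String genericXml = XML.toString(new JSONObject(json), "root");

        // Step 2: reshape the generic XML into App B's structure with an XSL stylesheet
        // (app-b.xsl is a placeholder for your own stylesheet)
        Transformer transformer = TransformerFactory.newInstance()
                .newTransformer(new StreamSource(new File("app-b.xsl")));
        StringWriter out = new StringWriter();
        transformer.transform(new StreamSource(new StringReader(genericXml)), new StreamResult(out));

        System.out.println(out);
    }
}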
There are sample programs that will map a JSON document into an equivalent XML document and back; I wrote one as a demo for Liberty's support of JSON-P (javax.json), using an XML vocabulary I called JinX (JSON in XML). That could be used as a pre-/post-processor wrapped around XSLT, if desired.
Better solutions are possible -- redefine XSLT to operate on JSON trees, for example -- but would take a bit more work.
JSON is, pure and simple, a communications protocol. In other words, it exists specifically to allow arbitrary (JavaScript) data structures to be conveyed between some client and some host over HTTP(S).
Therefore it is not XML, and must never be considered appropriate input to XSLT.
Thou shalt not mix apples and oranges!
If you wish to apply XSLT technologies to JSON-derived input (which is, by definition, a data structure), then you must first, by whatever suitable means, convert that data structure into XML.

JSON and HTML trying to understand

According to a Stack Overflow post called "What is JSON and why would I use it?", web services used XML as their primary data format for transmitting data back, but since JSON appeared it has become the preferred method. Why do most web services use JSON over XML? Is it because JSON is a better method for interchange?
XML was designed primarily for document formats, e.g. papers in scientific journals. It contains many features that aren't needed for simple data interchange, and those features can get in the way when you are processing XML because they can't easily be represented in JavaScript. So the code for processing the XML ends up a lot more complicated than it could be. By contrast, JSON maps exactly onto the data structures JavaScript can handle natively. Of course, that problem could in principle be solved by using a language with better XML support than JavaScript - XSLT, for example - but unfortunately XSLT in the browser has never had the same level of investment put into it.
Additionally, for reasons I have never understood, the browser security folks decided that reading JSON from alien web sites (i.e. from a different domain from your HTML page) is safe, but reading XML from alien sites isn't. So if you switch from XML to JSON, you get rid of a lot of cross-site-scripting hassle.
JSON is less verbose and it is sufficient for simple data transmission, i.e. if you do not need any transformations (XSLT).

Performance Advantages in Storing Documents as JSON in MarkLogic 6

If I were to store the same markup in two separate documents, one XML and the other JSON, in MarkLogic 6, does MarkLogic automatically convert the JSON equivalent to XML and index it accordingly, or are both stored in their respective formats?
What I'm getting at is: does MarkLogic store ALL documents as XML, regardless, and simply apply JSON transformations to JSON documents when queried?
If documents are stored in their native format, is there any advantage, in terms of performance, to storing documents as JSON over XML?
Below is an example code snippet:
if ($outputFormat = "json") then   (: result in json format :)
  let $custom-config :=
    let $config := json:config("custom")
    return (
      map:put($config, "array-element-names", (
        xs:QName("lp:lesson_plan"),
        xs:QName("lp:instructional_segment"),
        xs:QName("lp:strand_type"),
        xs:QName("lp:resource"),
        xs:QName("lp:level"),
        xs:QName("lp:discipline"),
        xs:QName("lp:language"),
        xs:QName("lp:program"),
        xs:QName("lp:grade"),
        xs:QName("res:strand_type"),
        xs:QName("res:resource"),
        xs:QName("res:ISBN"),
        xs:QName("res:level"),
        xs:QName("res:standard"),
        xs:QName("res:secondaryURL"),
        xs:QName("res:grade"),
        xs:QName("res:keyword"))),
      map:put($config, "whitespace", "ignore"),
      map:put($config, "text-value", "value"),
      $config)
  return json:transform-to-json($finalResult, $custom-config)
else   (: finalResult in xml format :)
  $finalResult
MarkLogic is XML-native and does need to convert JSON to XML in order to store it in the database. There is a high-level JSON library to perform the transformations. The main functions are json:transform-to-json and json:transform-from-json, and when configured correctly they should provide lossless conversions.
I think the main difference from your example is whether you want to convert to XML using your own process or use MarkLogic's toolkit.
For more detailed information, see MarkLogic's docs:
http://docs.marklogic.com/guide/app-dev/json
On disk, MarkLogic stores highly compressed C++ data structures that represent hierarchical trees and corresponding indexes. (OK, that’s an over-simplification, but illustrative nonetheless.) There are two places where you as a developer will typically interact with those data structures: (1) building queries and application logic, and (2) deserializing/serializing data into and out of this internal data model. Today, MarkLogic uses the XML data model (XDM) for the latter and, correspondingly, XQuery, XPath, and XSLT for the former. We chose this stack for several reasons: XML is good at representing both text mark-up and data structures, and the tooling around XML is mature and widespread.
Having said that, JSON has emerged as a popular serialization of hierarchical data structures—the “X” in AJAX. While we don't have the same watertight abstraction between JSON and MarkLogic’s internal data model today, we do provide a set of tools that allow you to efficiently and losslessly convert between JSON and the XML data model. Additionally, our REST and Java APIs allow you to store, retrieve, and even query tree structures that originated as JSON without having to think about this conversion step; the APIs handle this in the plumbing.
As for performance, there will be a little overhead converting between a JSON and XDM representation. However, I’d expect that to be negligible for most applications. The real benefits of XML will be in the expressiveness of XQuery, XPath, and XSLT in working with the data. There is no widespread equivalent to these in the JSON world today.
One footnote: the REST API (and thus the Java API wrapper around the REST API) provides a facade for the JSON conversion to XML -- that is, the APIs do the conversion to XML for you.
Usually, you don't need to think about the conversion except when you are creating range and geospatial indexes over the converted elements.
If you need to support JSON documents in your client, then the facade is convenient.
On the other hand, expressing the structure as JSON has no advantages for database operations and some limitations. (For instance, XML has standards-based, baked-in atomic data types, schema validation, and server-side processing with XQuery or XSLT.) So, if you have complete control over the data structure, you might want to write it to the server as XML.
As of MarkLogic 8 (February 2015), JSON is now a native data type, just like XML. This eliminates the need for a translation layer for applications that want to work exclusively in JSON. In addition, we’ve added JavaScript as a first-class language in the database itself (using Google’s V8 engine). This means that you can write stored procedures, triggers, and even full HTTP applications in JavaScript that runs in the database, close to the data.