Is there any difference between RapidJSON and the JSON parser in the Boost library (boost/property_tree/json_parser.hpp)?
Thanks.
I have compared 37 C/C++ JSON libraries in nativejson-benchmark for standard conformance and performance.
However, I failed to integrate Boost.PropertyTree (1.60) into the benchmark, because it parses numbers, true, false, and null as strings.
Edit: To answer the question more directly, Boost.PropertyTree cannot provide the JSON functionality most JSON libraries do. On the other hand, RapidJSON is a JSON library with high conformance and performance. BTW, in addition to parsing/stringifying JSON, RapidJSON also provides a streaming-style API, JSON Pointer, and JSON Schema. These features are uncommon in open-source libraries.
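For illustration, here is a minimal sketch of that difference (a hand-written example, not from the benchmark; it assumes Boost.PropertyTree's json_parser and a recent RapidJSON, with a made-up payload):

#include <cassert>
#include <sstream>
#include <string>
#include <boost/property_tree/ptree.hpp>
#include <boost/property_tree/json_parser.hpp>
#include "rapidjson/document.h"

int main() {
    const std::string json = R"({"count": 42, "valid": true})";

    // Boost.PropertyTree: every leaf value is stored as a string.
    boost::property_tree::ptree pt;
    std::istringstream in(json);
    boost::property_tree::read_json(in, pt);
    assert(pt.get<std::string>("valid") == "true");  // type information lost
    assert(pt.get<int>("count") == 42);              // lexical conversion from "42"

    // RapidJSON: the parsed DOM preserves the JSON types.
    rapidjson::Document d;
    d.Parse(json.c_str());
    assert(d["count"].IsInt() && d["count"].GetInt() == 42);
    assert(d["valid"].IsBool() && d["valid"].GetBool());
}

Property Tree only recovers the number through a conversion you request at lookup time, whereas RapidJSON knows from the parse whether a value was a number, boolean, or null.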
EDIT: the Boost library seems to use only RapidXML, not RapidJSON.
It should be of no concern to you, because it's an implementation detail of the library anyway.
So the answer might be "no" (more likely, "yes"), and you stand to gain absolutely nothing from it because you cannot depend on it.
Just pick your own XML library and use it where you need it: What XML parser should I use in C++?
IIRC, Boost mostly modified the namespace, so you won't have ODR clashes when you select RapidXML.
Related
Is there a tool like Google's Protobuf for JSON? I know you can convert from a Protobuf format to JSON, but that requires a whole lot of extra serialization/deserialization, and I was wondering if there is some kind of tool that lets you specify the structure of a JSON message and then automatically generates libraries for use in a specified language (direct serialization/deserialization, not just a wrapper around Protobuf's JSON formatter class).
I know nearly all languages provide their own in-house way of handling JSON, and many higher-level ones even let you avoid the boilerplate parsing code, but I was looking for a universal tool where you would only need to specify the format once and then just get the generated libraries for use in multiple languages.
The Protobuf equivalent would be JSON Schema, but it still depends on having a serializer or code generator available for each language, just as Protobuf does.
If you're looking at making a REST API, then the OpenAPI Specification + swagger-codegen could be an option.
Is JSON.stringify() equivalent to serialization, effectively serialization, or just a necessary step towards serialization?
In other words, is JSON.stringify() sufficient but not necessary for serialization? Is it necessary but not sufficient? Or is it neither necessary nor sufficient for serialization of JavaScript objects?
Serialization is the act of converting data into a format that can be written to disk or transmitted over the network (or written on paper, if that's what you want). Usually, serialization transforms objects to text, but that isn't required: there are several binary serialization formats, such as BitTorrent's bencoding and the old/ancient ASN.1 standards.
JSON is one form of text-based serialization format and is currently very popular due to its simplicity. It's not the only one, though. Other popular formats include XML and CSV.
Due to its popularity, and its origin in JavaScript object-literal syntax, ES5 introduced JSON.stringify() to generate a JSON string from an object. Previously you had to use a library or write the recursive serialization logic yourself.
So, is JSON.stringify() enough for serialization? Yes, if the output format you want is JSON. No, if you want other output formats such as XML or CSV or bencode.
There are limitations to the JSON format. One limitation is that JSON cannot encode functions, so JSON.stringify() ignores functions/methods when serializing. JSON also can't encode circular references (JSON.stringify() throws a TypeError when it meets one). Most other serialization formats have this limitation as well, but since JSON looks like JavaScript syntax, some people assume it can do everything JavaScript object literals can. It can't.
So the relationship between "JSON" and "serialization" is like the relationship between "Toyota Prius" and "car". JSON.stringify() is simply a function that generates JSON strings so I guess that would make it a Toyota factory.
Old question, but the following information may be useful for posterity.
Of course, you can serialise any way you want, including any number of custom methods, but JSON has become an increasingly popular method.
The most obvious benefit of JSON is that it represents objects in the same way that JavaScript object literals do, though it is slightly less flexible. Nevertheless, if you can represent normal data in JavaScript then JSON is a good match.
The most significant feature is that, since it represents objects as well as arrays, it can represent fairly complex & hierarchical data.
For one reason or another, JSON has more-or-less supplanted XML as the preferred serialisation for sending data between the server and browser. It is so useful that many languages include their own JSON functions (PHP, for example, has the better-named json_encode & json_decode functions), as do some modern databases. I myself have found it convenient to use JSON functions to store a more complex data structure in a single field of a database, without JavaScript anywhere in sight.
The short answer is yes: for the most part it is a sufficient step for serializing most (non-binary) data. It is not, however, necessary, as there are alternatives.
Serializing binary data, on the other hand, now that’s another story …
Short answer... Serialize means the same thing as Stringify, IMHO.
If I were to store the same markup in 2 separate documents, one XML, the other JSON, in MarkLogic 6, does MarkLogic automatically convert the JSON equivalent to XML, and index it in that regard, or are both stored in their respective formats?
What I'm getting at is, does MarkLogic store ALL documents as XML, regardless, and simply apply JSON transformations to JSON documents when queried?
If documents are stored in native format, is there any advantage, in terms of performance, to storing documents in JSON over XML?
Below is an example code-snippet:
(: assumes the MarkLogic JSON library is imported, e.g.:
   import module namespace json = "http://marklogic.com/xdmp/json"
       at "/MarkLogic/json/json.xqy"; :)
if ($outputFormat = "json") then (: result in JSON format :)
    let $custom-config :=
        let $config := json:config("custom")
        return (
            (: map:put mutates $config in place; the sequence ends with $config itself :)
            map:put($config, "array-element-names", (
                xs:QName("lp:lesson_plan"),
                xs:QName("lp:instructional_segment"),
                xs:QName("lp:strand_type"),
                xs:QName("lp:resource"),
                xs:QName("lp:level"),
                xs:QName("lp:discipline"),
                xs:QName("lp:language"),
                xs:QName("lp:program"),
                xs:QName("lp:grade"),
                xs:QName("res:strand_type"),
                xs:QName("res:resource"),
                xs:QName("res:ISBN"),
                xs:QName("res:level"),
                xs:QName("res:standard"),
                xs:QName("res:secondaryURL"),
                xs:QName("res:grade"),
                xs:QName("res:keyword"))),
            map:put($config, "whitespace", "ignore"),
            map:put($config, "text-value", "value"),
            $config)
    return json:transform-to-json($finalResult, $custom-config)
else (: $finalResult in XML format :)
    $finalResult
MarkLogic is XML-native and does need to convert JSON to XML in order to store it in the database. There is a high-level JSON library to perform the transformations. The main functions are json:transform-to-json and json:transform-from-json, and when configured correctly they should provide lossless conversions.
I think the main difference from your example is whether you want to convert to XML using your own process or use MarkLogic's toolkit.
For more detailed information, see MarkLogic's docs:
http://docs.marklogic.com/guide/app-dev/json
On disk, MarkLogic stores highly compressed C++ data structures that represent hierarchical trees and corresponding indexes. (OK, that’s an over-simplification, but illustrative nonetheless.) There are two places where you as a developer will typically interact with those data structures: 1) building queries and application logic 2) deserializing/serializing data into and out of this internal data model. Today, MarkLogic uses the XML data model (XDM) for the latter and, correspondingly, XQuery, XPath, and XSLT for the former. We chose this stack for several reasons: XML is good at representing both text mark-up as well as data structures and the tooling around XML is mature and widespread.
Having said that, JSON has emerged as a popular serialization of hierarchical data structures—the “X” in AJAX. While we don't have the same watertight abstraction between JSON and MarkLogic’s internal data model today, we do provide a set of tools that allow you to efficiently and losslessly convert between JSON and the XML data model. Additionally, our REST and Java APIs allow you to store, retrieve, and even query tree structures that originated as JSON without having to think about this conversion step; the APIs handle this in the plumbing.
As for performance, there will be a little overhead converting between a JSON and XDM representation. However, I’d expect that to be negligible for most applications. The real benefits of XML will be in the expressiveness of XQuery, XPath, and XSLT in working with the data. There is no widespread equivalent to these in the JSON world today.
One footnote: the REST API (and thus the Java API wrapper around the REST API) provides a facade for the JSON conversion to XML -- that is, the APIs do the conversion to XML for you.
Usually, you don't need to think about the conversion except when you are creating range and geospatial indexes over the converted elements.
If you need to support JSON documents in your client, then the facade is convenient.
On the other hand, expressing the structure as JSON has no advantages for database operations and some limitations. (For instance, XML has standards-based, baked-in atomic data types, schema validation, and server processing with XQuery or XSLT.) So, if you have complete control over the data structure, you might want to write it to the server as XML.
As of MarkLogic 8 (February 2015), JSON is now a native data type, just like XML. This eliminates the needs for a translation layer for applications that want to work exclusively in JSON. In addition, we’ve added JavaScript as a first-class language in the database itself (using Google’s V8 engine). This means that you can write stored procedures, triggers, and even full HTTP applications with JavaScript that runs in the database, close to the data.
There are many ways to parse JSON in the context of a Windows Store app,
regardless of the language (C#, JavaScript, or C++).
For example: .NET 4.5's JsonObject and DataContractJsonSerializer, the JavaScript JSON parser, or an external one like Json.NET.
Does anybody know something about this?
I have only read good things about Json.NET's performance.
But is that true, and does it matter for JSON payloads that include datasets of 100k JSON objects? Or won't the user notice a difference?
I only have experience with Json.NET... it works fast and great! I have also used the library in enterprise projects and was never disappointed!
If it helps, and FWIW, I've recently been collecting some new JSON parsing/deserialization performance data that can be observed over various JSON payload "shapes" (and sizes), using four JSON libraries, here:
https://github.com/ysharplanguage/FastJsonParser#Performances
(.NET's out-of-the-box JavaScriptSerializer vs. JSON.NET vs. ServiceStack vs. JsonParser)
Please note:
these figures are for the full .NET only (i.e., the desktop / server tier; not mobile devices)
I was interested in getting new benchmark figures about parsing / deserialization performances only (i.e., not serialization)
finally, I was also especially interested (although not exclusively) in figures re: strongly typed deserialization use cases (i.e., deserializing into POCOs)
Hope this helps,
Is DOM the only way to parse JSON?
Some JSON parsers do offer an incremental ("streaming") parser; for Java, at least the following parsers from the json.org page offer such an interface:
Jackson (pull interface)
Json-simple (SAX-style push interface)
(in addition to Software Monkey's parser referred to by another answer)
Actually, it is kind of odd that so many JSON parsers do NOT offer this simple low-level interface -- after all, they already need to implement low-level parsing, so why not expose it?
EDIT (June 2011): Gson too has its own streaming API (with gson 1.6)
By DOM, I assume you mean that the parser reads an entire document at once before you can work with it. Note that saying DOM tends to imply XML, these days, but IMO that is not really an accurate inference.
So, in answer to your questions: "Yes", there are streaming APIs, and "No", DOM is not the only way. That said, processing a JSON document as a stream is often problematic, in that many objects are not simple field/value pairs, but contain other objects as values, which you need to parse in order to process, and this tends to end up recursive. But for simple messages you can do useful things with a stream/event-based parser.
I have written a pull-event parser for JSON (it was one class, about 700 lines). But most of the others I have seen are document oriented. One of the layers I have built on top of my parser is a document reader, which took about 30 LOC. I have only ever used my parser in practice as a document loader (for the above reason).
I am sure if you search the net you will find pull and push based parsers for JSON.
EDIT: I have posted the parser to my site for download. A working compilable class and a complete example are included.
EDIT2: You'll also want to look at the JSON website.
As stefanB mentioned, http://lloyd.github.com/yajl/ is a C library for stream parsing JSON. There are also many wrappers mentioned on that page for other languages:
yajl-ruby - ruby bindings for YAJL
yajl-objc - Objective-C bindings for YAJL
YAJL IO bindings (for the IO language)
Python bindings come in two flavors, py-yajl OR yajl-py
yajl-js - node.js bindings (mirrored to github).
lua-yajl - lua bindings
ooc-yajl - ooc bindings
yajl-tcl - tcl bindings
Some of them may not allow streaming, but many of them certainly do.
If you want to use pure javascript and a library that runs both in node.js and in the browser you can try clarinet:
https://github.com/dscape/clarinet
The parser is event-based, and since it’s streaming it makes dealing with huge files possible. The API is very close to sax and the code is forked from sax-js.
Disclaimer: I'm suggesting my own project.
I maintain a streaming JSON parser in JavaScript which combines some of the features of SAX and DOM:
Oboe.js website
The idea is to allow streaming parsing, but not require the programmer to listen to lots of different events like with raw SAX. I like SAX but it tends to be quite low level for what most people need. You can listen for any interesting node from the JSON stream by registering JSONPath patterns.
The code is on Github here:
Oboe.js Github page
LitJSON supports a streaming-style API. Quoting from the manual:
"An alternative interface to handling JSON data that might be familiar to some developers is through classes that make it possible to read and write data in a stream-like fashion. These classes are JsonReader and JsonWriter.
"These two types are in fact the foundation of this library, and the JsonMapper type is built on top of them, so in a way, the developer can think of the reader and writer classes as the low-level programming interface for LitJSON."
If you are looking specifically for Python, then ijson claims to support it. However, it is only a parser; I didn't come across anything for Python that can generate JSON as a stream.
For C++ there is rapidjson that claims to support both parsing and generation in a streaming manner.
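As a rough sketch of what the streaming (SAX-style) side of rapidjson looks like — a handler receives one event per token instead of a DOM (based on rapidjson's documented Reader/handler API; the payload and handler name are invented for illustration):

#include <cstdio>
#include "rapidjson/reader.h"

// Prints every object key as the parser encounters it; all other
// events fall through to BaseReaderHandler's defaults.
struct KeyPrinter : rapidjson::BaseReaderHandler<rapidjson::UTF8<>, KeyPrinter> {
    bool Key(const char* str, rapidjson::SizeType len, bool /*copy*/) {
        std::printf("key: %.*s\n", static_cast<int>(len), str);
        return true;  // returning false would abort the parse
    }
};

int main() {
    const char json[] = R"({"name":"example","tags":["sax","stream"]})";
    rapidjson::StringStream ss(json);  // any input stream concept works here
    KeyPrinter handler;
    rapidjson::Reader reader;
    reader.Parse(ss, handler);
}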
Here's a NodeJS NPM library for parsing and handling streams of JSON:
https://npmjs.org/package/JSONStream
For Python, an alternative (apparently lighter and more efficient) to ijson is jsaone (see that link for rough benchmarks, showing that jsaone is approximately 3x faster).
DISCLAIMER: I'm the author of jsaone, and the tests I made are very basic... I'll be happy to be proven wrong!
Answering the question title: YAJL, a JSON parser library in C:
YAJL remembers all state required to support restarting parsing. This allows parsing to occur incrementally as data is read off a disk or network.
So I guess using YAJL to parse JSON can be considered processing a stream of data.
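To make that concrete, here is a minimal sketch of incremental parsing with YAJL (assuming the YAJL 2.x C API — in 1.x the final call was yajl_parse_complete — and a contrived two-chunk input):

#include <cstdio>
#include <cstring>
#include <yajl/yajl_parse.h>

// Called for each integer value, however the input was chunked.
static int on_integer(void* /*ctx*/, long long value) {
    std::printf("integer: %lld\n", value);
    return 1;  // non-zero means "keep parsing"
}

int main() {
    yajl_callbacks cb;
    std::memset(&cb, 0, sizeof cb);  // callbacks we don't care about stay null
    cb.yajl_integer = on_integer;

    yajl_handle h = yajl_alloc(&cb, nullptr, nullptr);

    // Feed the document in two pieces, as if it arrived from the network.
    const char* chunks[] = { "[1, 2", ", 3]" };
    for (const char* c : chunks)
        yajl_parse(h, reinterpret_cast<const unsigned char*>(c), std::strlen(c));

    yajl_complete_parse(h);  // flush any state held at end of input
    yajl_free(h);
}

Because the handle keeps its parse state across yajl_parse calls, all three integers are reported even though the document arrives in pieces.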
In reply to your 2nd question: no, many languages have JSON parsers -- PHP, Java, C, Ruby, and many others. Just Google for the language of your choice plus "JSON parser".