Is JSON.stringify() equivalent to serialization, effectively serialization, or just a necessary step towards serialization?
In other words, is JSON.stringify() sufficient but not necessary for serialization? Or is it necessary but not sufficient? Or is it neither necessary nor sufficient for serialization of JavaScript objects?
Serialization is the act of converting data into a format that can be written to disk or transmitted over the network (or written on paper if that's what you want). Usually serialization transforms objects to text, but that's not required: there are several binary serialization formats, such as BitTorrent's bencoding and the old/ancient ASN.1 standards.
JSON is one text-based serialization format and is currently very popular due to its simplicity. It's not the only one, though. Other popular formats include XML and CSV.
Due to its popularity and its origin as JavaScript object literal syntax, ES5 introduced JSON.stringify() to generate a JSON string from an object. Previously you had to use libraries or write your own recursive serializer to do the job.
So, is JSON.stringify() enough for serialization? Yes, if the output format you want is JSON. No, if you want another output format such as XML, CSV, or bencode.
There are limitations to the JSON format. One limitation is that JSON cannot encode functions, so JSON.stringify() silently skips functions/methods when serializing. JSON also can't encode circular references (JSON.stringify() throws a TypeError when it encounters one). Most other serialization formats have this limitation as well, but since JSON looks like JavaScript syntax some people assume it can do everything JavaScript object literals can. It can't.
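A quick sketch of both limitations (the order object and its method are made up for illustration):

const order = {
  id: 42,
  total() { return 99.95; }              // functions are silently dropped
};

console.log(JSON.stringify(order));      // {"id":42} -- the method disappears

order.self = order;                      // introduce a circular reference
try {
  JSON.stringify(order);
} catch (e) {
  console.log(e instanceof TypeError);   // true -- circular structures can't be stringified
}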
So the relationship between "JSON" and "serialization" is like the relationship between "Toyota Prius" and "car". JSON.stringify() is simply a function that generates JSON strings so I guess that would make it a Toyota factory.
Old question, but the following information may be useful for posterity.
Of course, you can serialise any way you want, including any number of custom formats, but JSON has become an increasingly popular choice.
The most obvious benefit of JSON is that it represents objects in the same way that JavaScript object literals do, though it is slightly less flexible. Nevertheless, if you can represent normal data in JavaScript then JSON is a good match.
The most significant feature is that, since it represents objects as well as arrays, it can represent fairly complex & hierarchical data.
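As a quick illustration (the data here is made up), a nested structure survives a stringify/parse round trip intact:

const lesson = {
  title: "Intro to Serialisation",
  tags: ["json", "xml"],
  author: { name: "Ada", id: 7 }
};

const wire = JSON.stringify(lesson);   // plain text, safe to store or transmit
const copy = JSON.parse(wire);         // rebuild the structure on the other side
console.log(copy.author.name);         // "Ada"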
For one reason or another, JSON has more-or-less supplanted XML as the preferred serialisation for sending data between the server and browser. It is so useful that many languages include their own JSON functions (PHP, for example, has the better-named json_encode & json_decode functions), as do some modern databases. I myself have found it convenient to use JSON functions to store a more complex data structure in a single field of a database, without JavaScript anywhere in sight.
The short answer is yes: for the most part it is a sufficient step for serializing most (non-binary) data. It is not, however, necessary, as there are alternatives.
Serializing binary data, on the other hand, now that's another story …
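One common workaround, sketched here for Node (the byte values and field names are only an example), is to Base64-encode the raw bytes before putting them in JSON:

const bytes = new Uint8Array([0xde, 0xad, 0xbe, 0xef]);       // some binary payload
const wire = JSON.stringify({
  name: "thumbnail",
  data: Buffer.from(bytes).toString("base64")                 // binary -> text
});

const parsed = JSON.parse(wire);
const restored = new Uint8Array(Buffer.from(parsed.data, "base64"));  // text -> binary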
Short answer... Serialize means the same thing as Stringify, IMHO.
Related
Originally, JSON borrowed its syntax from JavaScript (object literals), but then became a programming language agnostic data interchange format. Its structures (string, array, object) can be mapped directly to primitive data types in most dynamic programming languages and vice versa.
Now, since it is no longer tied to JavaScript, what is the abstract data model of JSON today? In other words, if we compare XML with JSON, is there an XML Infoset equivalent for JSON?
Obviously, JSON is not the only format that can be used for serialization of JSON-like documents. Alternatives include YAML, BSON, and even XML. Is there a name for that unified data model and perhaps a formal specification available?
XML is more complicated than the JSON format. Some common features that XML has and JSON lacks are namespaces, attributes, and comments. However, both formats can represent any kind of data, though potentially with a different structural logic.
What's the abstract data model of JSON? The same as it was when it was created; nothing has changed. JSON serves as a data format for server-client communication. It was never tied to JavaScript, since it is just a formatted data string and not some kind of binary executable. Its format originates from JavaScript, yes, but any language can interpret it with a text parser.
I am not sure what kind of information you are looking for, but the name of the process that converts language-specific structured data into strings and vice versa is serialization/unserialization, but you already know these terms ...
"Unified data model", "formal specification": what are you even looking for? Are you looking for principles of data formatting? Data storage? People need to store/transmit/present their data and they come up with ways to do it; there is nothing more to it.
Basically, I have a web app that will consume a .NET WCF web service.
I'll be working with a multitude of custom data types in the web service, which may vary during the development phase.
To simplify my work here, I thought about creating a single function that would receive 2 parameters:
the id of the operation (1: getArticle, 2: getArticlesList, 3: save article details, and so on)
the data needed to process the operation (article_id for a search, or a complex JSON data structure that would be deserialized in the web service to be saved in a database).
To get around the problems of escaping special characters like "<", I thought it would be easier to convert the JSON to Base64 and decode it in the web app or in the web service.
Am I committing a terrible mistake if I choose this path? (JSON in an XML file, and converting said JSON to Base64)
Yes, you're making life far too complex for yourself if you use Base64. JSON is text. XML is built to handle text.
With a proper library for XML construction such as XOM, JDOM, DOM, etc., you don't need to do any extra work at all. Just shove the JSON text into the right place and the library will escape characters like >, &, and < as needed.
Any legitimate XML parser will unescape this on the other end. As long as you use good libraries, it will all just work without any extra effort or thought on your part.
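The answer mentions Java toolkits; the same idea in browser-side JavaScript with the standard DOM APIs looks roughly like this (the element name and payload are just placeholders):

const doc = document.implementation.createDocument(null, "payload", null);
doc.documentElement.textContent = JSON.stringify({ note: "a < b && c > d" });

const xml = new XMLSerializer().serializeToString(doc);
// the serializer escapes characters like < and & inside the text node for us

const text = new DOMParser().parseFromString(xml, "application/xml")
  .documentElement.textContent;          // unescaped automatically on the other end
console.log(JSON.parse(text).note);      // "a < b && c > d"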
This is a near duplicate of "How to reliably hash JavaScript objects?", where someone wants to reliably hash JavaScript objects.
Now that the JSON-LD specification has been finalized, I saw that there is a normalization procedure that they advertise as a potential way to normalize a JSON object:
normalize the data using the RDF Dataset normalization algorithm, and then dump the output to normalized NQuads format. The NQuads can then be processed via SHA-256, or similar algorithm, to get a deterministic hash of the contents of the Dataset.
Building a hash of a JSON object has always been a pain, because something like
sha1(JSON.stringify(object))
does not work, or is not guaranteed to work the same across implementations (the order of the keys is not defined, for example).
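A tiny illustration of the key-order problem (the objects are made up):

const a = JSON.stringify({ x: 1, y: 2 });   // '{"x":1,"y":2}'
const b = JSON.stringify({ y: 2, x: 1 });   // '{"y":2,"x":1}'
console.log(a === b);                       // false -- same data, different digests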
Does JSON-LD work as advertised? Is it safe to use as a universal JSON normalization procedure for hashing objects? Can those objects be standard JSON objects, or do they need some JSON-LD decorations (@context, ...) to be normalized?
Yes, normalization works with JSON-LD, but the objects do need to be given context (via the @context property) in order for them to produce any RDF. It is the RDF that is deterministically output in NQuads format (and that can then be hashed, for example).
If a property in a JSON-LD document is not defined via @context, then it will be dropped during processing. JSON-LD requires that you provide global meaning (semantics) to the properties in your document by associating them with URLs. These URLs may provide further machine-readable information about the meaning of the properties, their range, domain, etc. In this way data becomes "linked" -- you can both understand the meaning of a JSON document from one API in the context of another, and you can traverse documents (via HTTP) to find more information.
So the short answer to the main question is "Yes, you can use JSON-LD normalization to build a unique hash for a JSON object"; the caveat, however, is that the JSON object must be a JSON-LD object, which really constitutes a subset of JSON. One of the main reasons the normalization algorithm was invented was for hashing and digitally signing graphs (JSON-LD documents) for comparison.
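As a rough sketch of that workflow, assuming the jsonld.js library and Node's crypto module (the vocabulary URL and property below are placeholders):

const jsonld = require("jsonld");
const crypto = require("crypto");

const doc = {
  "@context": { "name": "http://schema.org/name" },   // gives "name" a global meaning
  "name": "Ada Lovelace"
};

jsonld.canonize(doc, { algorithm: "URDNA2015", format: "application/n-quads" })
  .then(nquads => {
    const hash = crypto.createHash("sha256").update(nquads).digest("hex");
    console.log(hash);   // deterministic for equivalent documents
  });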
If I were to store the same markup in 2 separate documents, one XML, the other JSON, in MarkLogic 6, does MarkLogic automatically convert the JSON equivalent to XML, and index it in that regard, or are both stored in their respective formats?
What I'm getting at is, does MarkLogic store ALL documents as XML, regardless, and simply apply JSON transformations to JSON documents when queried?
If documents are stored in native format, is there any advantage, in terms of performance, to storing documents in JSON over XML?
Below is an example code-snippet:
if($outputFormat="json") then (: result in json format :)
let $custom-config :=
let $config := json:config("custom")
return (map:put($config, "array-element-names",(xs:QName("lp:lesson_plan"),
xs:QName("lp:instructional_segment"),
xs:QName("lp:strand_type"),
xs:QName("lp:resource"),
xs:QName("lp:level"),
xs:QName("lp:discipline"),
xs:QName("lp:language"),
xs:QName("lp:program"),
xs:QName("lp:grade"),
xs:QName("res:strand_type"),
xs:QName("res:resource"),
xs:QName("res:ISBN"),
xs:QName("res:level"),
xs:QName("res:standard"),
xs:QName("res:secondaryURL"),
xs:QName("res:grade"),
xs:QName("res:keyword"))),
map:put($config, "whitespace","ignore"),
map:put($config, "text-value","value"),
$config)
return json:transform-to-json($finalResult, $custom-config)
else (: finalResult in xml format :)
$finalResult
MarkLogic is XML-native and does need to convert JSON to XML to store it in the database. There is a high-level JSON library to perform the transformations. The main functions are json:transform-to-json and json:transform-from-json, and, when configured correctly, they should provide lossless conversions.
I think the main difference from your example is whether you want to convert to XML using your own process or use MarkLogic's toolkit.
For more detailed information, see MarkLogic's docs:
http://docs.marklogic.com/guide/app-dev/json
On disk, MarkLogic stores highly compressed C++ data structures that represent hierarchical trees and corresponding indexes. (OK, that's an over-simplification, but illustrative nonetheless.) There are two places where you as a developer will typically interact with those data structures: 1) building queries and application logic, and 2) deserializing/serializing data into and out of this internal data model. Today, MarkLogic uses the XML data model (XDM) for the latter and, correspondingly, XQuery, XPath, and XSLT for the former. We chose this stack for several reasons: XML is good at representing both text mark-up and data structures, and the tooling around XML is mature and widespread.
Having said that, JSON has emerged as a popular serialization of hierarchical data structures—the “X” in AJAX. While we don't have the same watertight abstraction between JSON and MarkLogic’s internal data model today, we do provide a set of tools that allow you to efficiently and losslessly convert between JSON and the XML data model. Additionally, our REST and Java APIs allow you to store, retrieve, and even query tree structures that originated as JSON without having to think about this conversion step; the APIs handle this in the plumbing.
As for performance, there will be a little overhead converting between a JSON and XDM representation. However, I’d expect that to be negligible for most applications. The real benefits of XML will be in the expressiveness of XQuery, XPath, and XSLT in working with the data. There is no widespread equivalent to these in the JSON world today.
One footnote: The REST API (and thus the Java API wrapper around the REST API) provide a facade for the JSON conversion to XML -- that is, the APIs do the conversion to XML for you.
Usually, you don't need to think about the conversion except when you are creating range and geospatial indexes over the converted elements.
If you need to support JSON documents in your client, then the facade is convenient.
On the other hand, expressing the structure as JSON has no advantages for database operations and some limitations. (For instance, XML has standards-based, baked-in atomic data types, schema validation, and server processing with XQuery or XSLT.) So, if you have complete control over the data structure, you might want to write it to the server as XML.
As of MarkLogic 8 (February 2015), JSON is now a native data type, just like XML. This eliminates the need for a translation layer for applications that want to work exclusively in JSON. In addition, we've added JavaScript as a first-class language in the database itself (using Google's V8 engine). This means that you can write stored procedures, triggers, and even full HTTP applications with JavaScript that runs in the database, close to the data.
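For example, a minimal Server-Side JavaScript sketch (the document URI and content are made up, and this assumes MarkLogic 8+ with a JavaScript app server) would store JSON natively with no XML conversion step:

declareUpdate();                                     // this script performs an update
xdmp.documentInsert("/articles/1.json",
  { title: "Hello", tags: ["json", "native"] });     // persisted and indexed as JSON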
From a comment on the announcement blog post:
Regarding JSON: JSON is structured similarly to Protocol Buffers, but protocol buffer binary format is still smaller and faster to encode. JSON makes a great text encoding for protocol buffers, though -- it's trivial to write an encoder/decoder that converts arbitrary protocol messages to and from JSON, using protobuf reflection. This is a good way to communicate with AJAX apps, since making the user download a full protobuf decoder when they visit your page might be too much.
It may be trivial to cook up a mapping, but is there a single "obvious" mapping between the two that any two separate dev teams would naturally settle on? If two products supported PB data and could interoperate because they shared the same .proto spec, I wonder if they would still be able to interoperate if they independently introduced a JSON reflection of the same spec. There might be some arbitrary decisions to be made, e.g. should enum values be represented by a string (to be human-readable a la typical JSON) or by their integer value?
So is there an established mapping, and any open source implementations for generating JSON encoder/decoders from .proto specs?
Maybe this is helpful: http://code.google.com/p/protobuf-java-format/
From what I have seen, Protostuff is the project to use for any PB work on Java, including serializing it as JSON, based on protocol definition. I have not used it myself, just heard good things.
Yes, since Protocol Buffers version 3.0.0 (released July 28, 2016) there is "A well-defined encoding in JSON as an alternative to binary proto encoding", as mentioned in the release notes: https://github.com/google/protobuf/releases/tag/v3.0.0
I needed to marshal from GeneratedMessageLite to a JSON object but did not need to unmarshal. I couldn't use the protobuf library in Pangea's answer because it doesn't work with the LITE_RUNTIME option. I also didn't want to burden our already large legacy system with generating more compiled code for the existing protocol buffers. For marshalling to JSON, I went with this simple solution:
final Person gpb = Person.newBuilder().setName("Bill Monroe").build();
final Gson gson = new Gson();
final String jsonString = gson.toJson(gpb);
One further thought: if protobuf objects have getters/setters, or appropriately named fields, one could simply use Jackson JSON processor's data binding. By default it handles public getters, any setters and public fields, but these are just default visibility levels and can be changed. If so, Jackson can serialize/deserialize protobuf generated POJOs without problems.
I have actually used this approach with Thrift-generated objects; the only thing I had to configure there was to disable serialization of various "isXXX()" methods that Thrift adds for checking if a field has been explicitly assigned or not.
First of all, I think one should reason very carefully about putting effort into converting a dataset to protobufs. Here are my reasons to convert a dataset to protobufs:
Type safety: a guarantee on the format of the data being considered.
Uncompressed memory footprint of the data. The reason I mention uncompressed is that post-compression there isn't much of a difference between the size of compressed JSON and compressed proto, but compression has a cost associated with it. Also, the serialization/deserialization speed is almost similar; in fact, Jackson JSON is faster than protobufs. Please check out the following link for more information:
http://technicalrex.com/2014/06/23/performance-playground-jackson-vs-protocol-buffers/
The protobufs need to be transferred over the network a lot.
That said, once you convert your dataset to Jackson JSON format in the shape that the ProtoBuf definition specifies, it can very easily be mapped directly to ProtoBuf format using the Protostuff:JsonIoUtil:mergeFrom function. Signature of the function:
public static <T> void mergeFrom(JsonParser parser, T message, Schema<T> schema, boolean numeric)
Reference to protostuff