Where do I start with JSON? [closed]

I am completely blank about JSON, but I would like to be able to read some data from the URL http://stokercloud.dk/dev/getdriftjson.php?mac=oz8hp and store it in a DB.
But I don't know where to start, so I thought I would ask here for hints and maybe some links to samples that I might learn from.
I know that the output might look confusing, but I have a list of what each item is.
The file is runtime data from my pellet burner.

The JSON specification is the first page to read. The standard is so simple that it is easy to understand from that page alone.
I also found a broader tutorial, with illustrations and more resources, which is nice to see.
Here is the conclusion of that web page:
JSON is an open, text-based, lightweight data interchange format specified in RFC 4627. It came to the developer world in 2005 and its popularity has increased rapidly.
JSON uses objects and arrays as data structures, and strings, numbers, true, false and null as values. Objects and arrays can be nested recursively.
Most (if not all) modern programming languages can be used to work with JSON.
NoSQL databases, which evolved to get rid of the bottlenecks of relational databases, use JSON to store data.
JSON gives developers the power to choose between XML and JSON, leading to more flexibility.
Besides NoSQL, AJAX, package management, and integration of APIs into web applications are the major areas where JSON is used extensively.
IMHO the main point with JSON is that it contains documents, or arrays of documents. There are fewer data types than in Delphi (e.g. no official date/time, and just one numeric type). It is an exchange format which is widely used now and, from my own experience, easier to work with than XML, from both the human and the computer side.
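For instance, since JSON has no official date/time type, a timestamp is commonly exchanged as an ISO 8601 string and converted back to a native type on each side. A purely illustrative snippet (the field names are made up):

{
  "sensor": "boiler",
  "measuredAt": "2014-05-20T08:30:00Z",
  "temperature": 72.5
}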
In Delphi, you have several libraries around, mainly:
SuperObject
XSuperObject
dwsJSON
lkJSON
DBXJSON, which ships with newer versions of Delphi
mORMot for Win32/Win64
SynCrossPlatformJSON
About performance, you can take a look at our blog article. DBXJSON (and the official JSON unit of Delphi) is by far the slowest, and somewhat difficult to work with. Some methods for easy access to the JSON document content are missing. Other libraries are much easier to work with. Our version shipped with mORMot is very fast, as is dwsJSON. SuperObject is slower than those, especially for huge content, and XSuperObject is slow (but cross-platform). Our SynCrossPlatformJSON unit is also cross-platform, very fast, and offers variant-based document access.
Some code using mORMot library:
uses
  SynCrtSock,
  SynCommons;

procedure test;
var json: RawUTF8;
    jsondata: TDocVariantData;
    i: integer;
begin
  json := TWinHttp.Get('http://stokercloud.dk/dev/getdriftjson.php?mac=oz8hp');
  jsondata := DocVariantData(_json(json).jsondata)^;
  for i := 0 to jsondata.Count-1 do
    writeln(jsondata.Values[i]); // here all items are converted back to JSON and written
end;

To learn JSON (JavaScript Object Notation), you could read the JSON article on Wikipedia.
To download data from a URL, you can use TIdHttp, which is an HTTP client from the Indy framework.
To parse JSON, I'd suggest using superobject. It includes great examples in its demos directory.

JSON is an interchange format for sending data between anything that needs to have data sent to it. Its simplicity is its strength.
The text is valid JavaScript and so can be interpreted by any JavaScript engine, but it is now so popular that virtually every language has a JSON parser built in or available as a library (see http://json.org/ and scroll down to the bottom).
Basically, JSON is very simple structured text. If you google "JSON library Delphi" you should get some solutions, and likewise for any other language you want to use.

Related

I need to understand JSON without technical jargon [closed]

I am currently exploring the uses of JSON. I have read many posts, articles and YouTube videos, yet I still don't understand the purpose or practicality of JSON (it's been a month). No definition has suited my logic well enough to understand this concept and comfortably implement it.
What I understand (brief overall understanding): JSON provides an easier way to format data and send it across networks.
My question: could someone provide me a comprehensible storyline with JSON in action, as I am struggling to understand its practicality? I hope this question makes sense; if not, I can try to re-word it.
Edit for #Philipp: Yes, I do have experience reading text-based files with Java (mainly for assignments at uni). No, I do not have experience with competing technologies such as XML or YAML. Intuitively, I believe JSON to be 'cookies' in a sort of way, but this is most likely wrong. I hope this helps, and I look forward to your explanation; maybe it will help me understand it.
JSON is, overly simplified, a standard for how to structure your own file formats. "File" does not necessarily mean a file stored in a filesystem. It can also be an ephemeral file which is created on one computer, sent to a different computer over the network, processed, and then discarded without ever being stored. But thinking of it as a file format makes things easier.
A JSON-based file format represents a document as a key-value structure. Every value has a key. Every value can be a string, a number, another key-value structure, or a list of the things mentioned before. Here is an example based on the one from the Wikipedia article on JSON:
{
  "firstName": "John",
  "lastName": "Smith",
  "address": {
    "streetAddress": "21 2nd Street",
    "city": "New York",
    "state": "NY",
    "postalCode": "10021-3100"
  },
  "phoneNumbers": [
    {
      "type": "home",
      "number": "212 555-1234"
    },
    {
      "type": "office",
      "number": "646 555-4567"
    }
  ]
}
This file describes a person who has a first name, a last name, one address consisting of a street address, city, state and postal code, and a list of phone numbers, with each phone number having a type and a number.
OK, but there are certainly other ways to store that kind of information. Ways which might be more concise. So why would you choose to invent a file format based on JSON instead of just starting from scratch?
Library support. There are lots of libraries available for parsing and writing JSON. If you ever wrote a file parsing routine yourself, then you know how much of a PITA those can be. There are a ton of edge-cases you have to keep in mind to prevent your program from crashing or reading garbage data. A JSON library takes care of all of these edge-cases for you. This makes it a lot easier for you to create programs working with JSON data than when you invent your own file format.
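As a sketch of what such library support looks like in practice, here is how the person document above could be read with Jackson, one widely used JSON library for Java (any comparable library works similarly; the class and variable names are just for illustration):

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class ReadPerson {
    public static void main(String[] args) throws Exception {
        String json = "{\"firstName\":\"John\",\"lastName\":\"Smith\","
                + "\"phoneNumbers\":[{\"type\":\"home\",\"number\":\"212 555-1234\"}]}";

        // The library deals with quoting, escaping, nesting and malformed input
        // and hands back a ready-to-use tree of nodes.
        ObjectMapper mapper = new ObjectMapper();
        JsonNode person = mapper.readTree(json);

        System.out.println(person.get("firstName").asText());                          // John
        System.out.println(person.get("phoneNumbers").get(0).get("number").asText());  // 212 555-1234
    }
}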
Tool support. There are editors available which can edit any form of JSON data in a handy UI. For example, did you notice that Stack Overflow automatically added syntax highlighting to the JSON code above? I didn't do anything to make that happen. Stack Overflow just automatically recognized that it was JSON and colored it accordingly. That would not be possible with a homebrewed file format.
Good compromise between machine-readability and human-readability. The format above is not just easy to read for programs (thanks to the aforementioned library support) but also pretty readable and editable for humans. People can intuitively understand the format and edit it in a text editor without breaking things, especially if they have worked with JSON-based file formats before.
Forward- and backward compatibility of file formats. This is something you could technically achieve in your own file format, but JSON makes it a lot easier. Imagine you create version 2.0 of your program, which comes with a version 2.0 of the file format. Your documents now have some additional fields. Handling this in homebrewed text-based formats can be really difficult. But the key-value structure of JSON makes it pretty easy to recognize that certain keys are missing and then replace their values with reasonable defaults. Similarly, the 1.0 version of your program might make limited sense of 2.0 documents by simply ignoring any keys it doesn't understand yet.
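A minimal sketch of that idea, again assuming Jackson in Java (the "middleName" key is a hypothetical 2.0 addition): a 1.0 document simply lacks the key, and the reader falls back to a default instead of failing.

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class VersionTolerantReader {
    public static void main(String[] args) throws Exception {
        String v1Document = "{\"firstName\":\"John\",\"lastName\":\"Smith\"}";

        JsonNode doc = new ObjectMapper().readTree(v1Document);

        // path() returns a "missing" node instead of failing for unknown keys,
        // so older documents can still be read with a sensible default.
        String middleName = doc.path("middleName").asText("");
        System.out.println("Middle name: '" + middleName + "'"); // prints: Middle name: ''
    }
}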
Interoperability with JavaScript. This might be kind of situational, but the reason why you see JSON being used a lot in the context of web applications is that JSON is actually valid JavaScript. That means that in a browser-based application, converting JavaScript objects to JSON text and vice versa is trivial. That makes it a preferred choice for exchanging data between browser-based applications and servers. The result is that you see a lot of JSON in cookies or web service requests (although neither of these mandates the use of JSON).
JSON (or JavaScript Object Notation) is simply a lightweight, semi-structured way of representing a set of data.
One sample storyline:
Let's say you are creating an application that needs to communicate with another application, and you want to make it easier for other applications to consume the data your application provides.
There are a lot of ways to do this, but by using JSON you make the process simpler (the applications that consume your data can figure out how to read your data on their own, if they want to) AND you cut down on the amount of raw data that is being passed around.
To answer your question: JSON is very simple and lightweight compared to other communication methods like SOAP, or connecting straight to the database where you hold your data.

How to convert a large json file to xml [closed]

I have a very big .json file (over 5 GB) and I want to convert it to .xml format. Is there any software or way (it can be anything) to do that?
I found XML Editor, but there is just an xml => json converter.
The typical way to process XML as well as JSON files is to load them completely into memory. Then you have a so-called DOM which allows various kinds of data processing. But neither XML nor JSON is really designed for storing as much data as you have here. In my experience, you typically run into memory problems as soon as you exceed a limit of around 200 MB. This is because the DOM is built up from individual objects, and this approach results in a huge memory overhead that far exceeds the amount of data you want to process.
The only way for you to process files like that is basically a streaming approach. The basic idea: instead of parsing the whole file and loading it into memory, you parse and process the file "on the fly". As data is read, it is parsed and events are triggered to which your software can react and perform actions as needed. (For details, have a look at the SAX API to understand this concept in more detail.)
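As a rough sketch of that streaming idea in Java, here is how a huge file could be walked token by token with Jackson's streaming API; the file name and the "number" field are placeholders for whatever your 5 GB document actually contains. Memory use stays roughly constant no matter how big the file is.

import java.io.File;

import com.fasterxml.jackson.core.JsonFactory;
import com.fasterxml.jackson.core.JsonParser;
import com.fasterxml.jackson.core.JsonToken;

public class StreamBigJson {
    public static void main(String[] args) throws Exception {
        JsonFactory factory = new JsonFactory();
        try (JsonParser parser = factory.createParser(new File("huge.json"))) {
            // Tokens are delivered one by one; the whole document is never in memory.
            while (parser.nextToken() != null) {
                if (parser.getCurrentToken() == JsonToken.FIELD_NAME
                        && "number".equals(parser.getCurrentName())) {
                    parser.nextToken();                   // advance to the field's value
                    System.out.println(parser.getText()); // react to the event, e.g. write XML here
                }
            }
        }
    }
}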
As you stated, you are processing JSON, not XML. Streaming APIs for JSON should be available in the wild as well. Anyway, you could implement one fairly easily yourself: JSON is a pretty simple data format.
Nevertheless, such an approach is not optimal: typically such a concept results in very slow data processing because of the millions of method invocations involved. For every item encountered you typically need to call a method to perform some data processing task. This, together with the additional checks about what kind of information you have currently encountered in the stream, slows down data processing considerably.
You should really consider a different kind of approach: first split your file into many small ones, then perform the processing on them. This approach might not seem very elegant, but it helps to keep your task much simpler. This way you gain one main advantage: it will be much easier for you to debug your software.
If you're willing to use the XML Serializer from PEAR, you can convert the JSON to a PHP object and then the PHP object to XML in two easy steps.
Check this link for more: convert JSON to XML.
A little example:
include("XML/Serializer.php");
function json_to_xml($json) {
$serializer = new XML_Serializer();
$obj = json_decode($json);
if ($serializer->serialize($obj)) {
return $serializer->getSerializedData();
}
else {
return null;
}
}
Good luck, and give it a try.

Protocol Buffers vs XML/JSON for data entry outside of programming effort

I would love to use protocol buffers, but I am not sure if they fit my use case. Here it is:
I have a Quiz app. This requires a bunch of data, like categories, questions, a list of answers (and which ones are correct). I do not want to be responsible for entering this data - I would prefer to pass it off to a non-programmer to serialize all this data for me, in either XML or JSON. Then my app would just read in the data file.
Does Google's Protocol Buffers fit my use case? Or should I stick to a more traditional format like XML or JSON?
I think not: Protobuf is a binary format, so you would then need to support a text format like XML or JSON in addition to Protobuf.
Also, it does not seem that you would benefit from Protobuf's better performance at all.

IDL for JSON REST/RPC interface

We are designing a fairly complex REST API, in which most of the I/O consists of JSON-encoded objects with a specific structure. One challenge we have found is documenting the API in such a way that it becomes easier for clients to post correct input and process output. Because both the input and output require fairly complex JSON objects, client developers often introduce bugs related to the structure of the I/O objects.
With all of the JSON web APIs these days, I would have hoped for a general solution, but I am having a hard time finding one. I looked into JSON Schema, which is a JSON validation schema language, but both the IETF draft and the implementations seem to be fairly immature (even though they have been around for a while, which is not a good sign).
A slightly different approach is offered by Protocol Buffers and Apache Avro, where the schema is not used for validation but is actually required for encoding/decoding the message. Of these two, Avro seems to have rather limited documentation and implementations. Protobuf seems better, but I am not sure if it is really suitable to use in the browser to call a JSON API.
Now I am starting to doubt whether I am looking at this from the right angle. Are there other methods available to make my API a bit more strongly typed? Or is a formal description of a JSON REST/RPC API something that defeats the purpose of using JSON?
Edit: 6 months after this topic we found mongoose, which is very close to what we were looking for.
Below is a reply I received by email from Douglas Crockford:
I am not a believer in schemas as an alternative to input validation.
There are properties that cannot be verified from the syntax. I think
that was one of the ways that XML went wrong.
If your formats are too complex, then I would look at simplifying
them.
Such systems exist and I'm the author of one of them. It is called Piqi-RPC and it does IDL-based validation of the input and output parameters for RPC-style APIs over HTTP.
It supports JSON, XML and Google Protocol Buffers as data representation formats for input and output of HTTP POST requests. Clients can choose to use any of the three formats and specify their choice using the standard Accept and Content-Type HTTP headers.
So, yes, in theory, you are looking in the right direction. However, at the moment, Piqi-RPC supports writing servers only in Erlang and it wouldn't be very useful for you if you use a different stack. I heard that Apache Thrift also supports JSON over HTTP transport, but I haven't checked. Another kind of similar system I know of (also for Erlang) is called UBF. I have heard of libraries for Java that can parse and validate JSON based on Protocol Buffers specification (e.g. http://code.google.com/p/protostuff/).
The idea itself is far from being new, but there aren't many systems that approach it in practice. It is a challenging problem.
Historically, IDLs were used for interface definition and binary data serialization and not so much for validating dynamic data interchange formats (e.g. XML and JSON) which emerged later. Sun-RPC IDL and CORBA IDL fall in the first category. WSDL would be one of few examples covering both areas, but it is a terrible piece of technology and it would be a bad choice for most modern systems. In addition, there are many schema languages (also known as DDLs -- data definition languages), most of which are highly specialized and work with only one representation format, e.g. XML or JSON schemas. Few of those have stable implementations.
The Piqi project and Piqi-RPC, which is based on it, are built around several fairly simple realizations:
A DDL doesn't have to be explicitly tied to any particular data representation format or built around it. Instead, such a language can be fairly universal and cover a wide range of practical use-cases (e.g. cross-language data serialization and data validation) and data formats (e.g. JSON, XML, Protocol Buffers).
IDL for RPC-style communication can be implemented as a thin, mostly syntactic layer on top of the universal DDL.
Such IDL and interface specifications can be transport agnostic.
Speaking of REST-style APIs over HTTP compared to RPC-style APIs over HTTP:
With RPC-style APIs, a service developer or an automated system has to validate three things: the function name (according to some service naming scheme), the input and, if you choose so, the output.
In the case of REST-style APIs, people get themselves into trouble for no good reason. Now they have a lot more stuff to validate: arbitrarily complex URL syntax, including dynamic parameters encoded in URL segments (for all HTTP methods) and in the URL query string (only for the HTTP GET method); HTTP method correspondence (whether it should be GET, POST, PUT, DELETE, etc.); the HTTP body when some parameters go there (sometimes they do it manually twice, for parameters represented in JSON and XML); custom HTTP headers; and, separately, the service documentation. Imagine an IDL supporting all that!
XML is better for RESTful services in many ways. It has native linking (<link href=, for all those HATEOAS fans), native language support (lang="en") and a great ecosystem.
It is also better for future proofing and future API refactorings. Converting this:
<profile>
  <username>alganet</username>
</profile>
To support more usernames:
<profile>
  <username>alganet</username>
  <username>alexandre</username>
</profile>
Is much simpler to do without breaking existing clients when using XML. JSON is harder in that respect (see the JSON sketch below).
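For comparison, the same change expressed in JSON typically turns a string value into an array, which is exactly the kind of structural change that breaks existing clients (field names taken from the XML example above):

{ "profile": { "username": "alganet" } }

would have to become

{ "profile": { "username": ["alganet", "alexandre"] } }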
If you really need JSON, JSON Schema is the way to go. It's immature, but I don't know anything better for that case (see the sketch after this paragraph). Maybe your consumers could choose between XML and JSON, so they can pick either a small payload (JSON) or RESTful candies (XML) using content negotiation.
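As a rough, hand-written idea of what a JSON Schema for such a payload could look like (a sketch against a later draft of the specification, not taken from any particular implementation):

{
  "type": "object",
  "properties": {
    "profile": {
      "type": "object",
      "properties": {
        "username": { "type": "string" }
      },
      "required": ["username"]
    }
  },
  "required": ["profile"]
}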
I'd say the answer to your last question is yes. If you need a way to constrain and document the JSON "schema", why didn't you go with XML in the first place? It is not that much harder to parse, and being able to enforce a schema for it is a great advantage.

Is there a streaming API for JSON? [closed]

Is DOM the only way to parse JSON?
Some JSON parsers do offer an incremental ("streaming") parser; for Java, at least the following parsers from the json.org page offer such an interface:
Jackson (pull interface)
Json-simple (SAX-style push interface)
(in addition to Software Monkey's parser referred to by another answer)
Actually, it is kind of odd that so many JSON parsers do NOT offer this simple low-level interface -- after all, they already need to implement low-level parsing, so why not expose it?
EDIT (June 2011): Gson too has its own streaming API (since Gson 1.6); see the sketch below.
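As a small sketch of what such a pull-style interface looks like, here is Gson's JsonReader reading one name/value pair at a time (the JSON content and field names are made up for the example):

import java.io.StringReader;

import com.google.gson.stream.JsonReader;

public class PullParseExample {
    public static void main(String[] args) throws Exception {
        String json = "{\"name\":\"pellet burner\",\"temperature\":72.5}";

        try (JsonReader reader = new JsonReader(new StringReader(json))) {
            reader.beginObject();                 // consume '{'
            while (reader.hasNext()) {            // pull the next name/value pair on demand
                String name = reader.nextName();
                if (name.equals("temperature")) {
                    System.out.println("temperature = " + reader.nextDouble());
                } else {
                    reader.skipValue();           // ignore fields we don't care about
                }
            }
            reader.endObject();                   // consume '}'
        }
    }
}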
By DOM, I assume you mean that the parser reads an entire document at once before you can work with it. Note that saying DOM tends to imply XML, these days, but IMO that is not really an accurate inference.
So, in answer to your questions - "Yes", there are streaming API's and "No", DOM is not the only way. That said, processing a JSON document as a stream is often problematic in that many objects are not simple field/value pairs, but contain other objects as values, which you need to parse to process, and this tends to end up a recursive thing. But for simple messages you can do useful things with a stream/event based parser.
I have written a pull-event parser for JSON (it was one class, about 700 lines). But most of the others I have seen are document oriented. One of the layers I have built on top of my parser is a document reader, which took about 30 LOC. I have only ever used my parser in practice as a document loader (for the above reason).
I am sure if you search the net you will find pull and push based parsers for JSON.
EDIT: I have posted the parser to my site for download. A working compilable class and a complete example are included.
EDIT2: You'll also want to look at the JSON website.
As stefanB mentioned, http://lloyd.github.com/yajl/ is a C library for stream parsing JSON. There are also many wrappers mentioned on that page for other languages:
yajl-ruby - ruby bindings for YAJL
yajl-objc - Objective-C bindings for YAJL
YAJL IO bindings (for the IO language)
Python bindings come in two flavors, py-yajl OR yajl-py
yajl-js - node.js bindings (mirrored to github).
lua-yajl - lua bindings
ooc-yajl - ooc bindings
yajl-tcl - tcl bindings
Some of them may not allow streaming, but many of them certainly do.
If you want to use pure javascript and a library that runs both in node.js and in the browser you can try clarinet:
https://github.com/dscape/clarinet
The parser is event-based, and since it’s streaming it makes dealing with huge files possible. The API is very close to sax and the code is forked from sax-js.
Disclaimer: I'm suggesting my own project.
I maintain a streaming JSON parser in JavaScript which combines some of the features of SAX and DOM:
Oboe.js website
The idea is to allow streaming parsing, but not require the programmer to listen to lots of different events like with raw SAX. I like SAX but it tends to be quite low level for what most people need. You can listen for any interesting node from the JSON stream by registering JSONPath patterns.
The code is on Github here:
Oboe.js Github page
LitJSON supports a streaming-style API. Quoting from the manual:
"An alternative interface to handling JSON data that might be familiar to some developers is through classes that make it possible to read and write data in a stream-like fashion. These classes are JsonReader and JsonWriter.
"These two types are in fact the foundation of this library, and the JsonMapper type is built on top of them, so in a way, the developer can think of the reader and writer classes as the low-level programming interface for LitJSON."
If you are looking specifically for Python, then ijson claims to support it. However, it is only a parser; I didn't come across anything for Python that can generate JSON as a stream.
For C++ there is rapidjson that claims to support both parsing and generation in a streaming manner.
Here's a NodeJS NPM library for parsing and handling streams of JSON:
https://npmjs.org/package/JSONStream
For Python, an alternative (apparently lighter and more efficient) to ijson is jsaone (see that link for rough benchmarks, showing that jsaone is approximately 3x faster).
DISCLAIMER: I'm the author of jsaone, and the tests I made are very basic... I'll be happy to be proven wrong!
Answering the question title: YAJL is a JSON parser library in C:
YAJL remembers all state required to support restarting parsing. This allows parsing to occur incrementally as data is read off a disk or network.
So I guess using YAJL to parse JSON can be considered processing a stream of data.
In reply to your second question: no, many languages have JSON parsers; PHP, Java, C, Ruby and many others. Just google for the language of your choice plus "JSON parser".