I am currently exploring the uses of JSON. I have read many posts, articles, and YouTube videos, yet after a month I still don't understand the purpose and practicality of JSON. No definition has clicked with me well enough that I could comfortably implement it.
What I understand (brief overall understanding): JSON provides an easier way to format data and send it across networks.
My question: could someone provide a comprehensible storyline with JSON in action? I am struggling to understand its practicality. I hope this question makes sense; if not, I can try to re-word it.
Edit for @Philipp: Yes, I have experience reading text-based files with Java (mainly in assignments at uni). No, I don't have experience with competing technologies such as XML or YAML. My intuition is that JSON is something like 'cookies', but that is most likely wrong. I hope this helps, and I look forward to your explanation; maybe it will help me understand.
JSON is, oversimplified, a standard for how to structure your own file formats. "File" does not necessarily mean a file stored in a filesystem; it can also be an ephemeral file which is created on one computer, sent to a different computer over the network, processed and then discarded without ever being stored. But thinking of it as a file format makes things easier.
A JSON-based file format contains a document in a key-value structure: every value has a key. A value can be a string, a number, a boolean, null, another key-value structure (an object), or a list (an array) of any of these. Here is an example based on the one from the Wikipedia article on JSON:
{
  "firstName": "John",
  "lastName": "Smith",
  "address": {
    "streetAddress": "21 2nd Street",
    "city": "New York",
    "state": "NY",
    "postalCode": "10021-3100"
  },
  "phoneNumbers": [
    {
      "type": "home",
      "number": "212 555-1234"
    },
    {
      "type": "office",
      "number": "646 555-4567"
    }
  ]
}
This file describes a person who has a first name, a last name, one address consisting of a street address, city, state and postal code, and a list of phone numbers, with each phone number having a type and a number.
OK, but there are certainly other ways to store that kind of information, some of which might be more concise. So why would you choose to invent a file format based on JSON instead of just starting from scratch?
Library support. There are lots of libraries available for parsing and writing JSON. If you have ever written a file-parsing routine yourself, you know how much of a PITA those can be: there are a ton of edge-cases you have to keep in mind to prevent your program from crashing or reading garbage data. A JSON library takes care of all of those edge-cases for you, which makes it a lot easier to create programs working with JSON data than to invent your own file format (see the sketch after this list).
Tool support. There are editors available which can edit any form of JSON data in a handy UI. For example, did you notice that Stack Overflow automatically added syntax highlighting to the JSON code above? I didn't do anything to make that happen; Stack Overflow simply recognized that it is JSON and colored it accordingly. That would not be possible with a homebrewed file format.
Good compromise between machine-readability and human-readability. The format above is not just easy for programs to read (thanks to the aforementioned library support) but also quite readable and editable for humans. People can intuitively understand the format and edit it in a text editor without breaking things, especially if they have worked with JSON-based file formats before.
Forward and backward compatibility of file formats. This is something you could technically achieve in your own file format, but JSON makes it a lot easier. Imagine you create version 2.0 of your program, which comes with version 2.0 of the file format, and your documents now have some additional fields. Handling this in a homebrewed text-based format can be really difficult, but the key-value structure of JSON makes it easy to recognize that certain keys are missing and replace their values with reasonable defaults. Similarly, the 1.0 version of your program might make limited sense of 2.0 documents by simply ignoring any keys it doesn't understand yet.
Interoperability with JavaScript. This might be somewhat situational, but the reason you see JSON used a lot in the context of web applications is that JSON is valid JavaScript. That means that in a browser-based application, converting between JavaScript objects and JSON text is trivial, which makes JSON a preferred choice for exchanging data between browser-based applications and servers. The result is that you see a lot of JSON in cookies and webservice requests (although neither mandates the use of JSON).
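To make the library-support and compatibility points concrete, here is a minimal sketch using Python's standard json module (not part of the original answer; the middleName key is a made-up example of a field added in a later format version):

import json

# A trimmed version of the person document above, as it might arrive as text.
raw = '''
{
  "firstName": "John",
  "lastName": "Smith",
  "address": {"city": "New York", "state": "NY"},
  "phoneNumbers": [{"type": "home", "number": "212 555-1234"}]
}
'''

person = json.loads(raw)  # the library handles all the parsing edge-cases
print(person["firstName"], person["address"]["city"])  # John New York

# Forward compatibility: a key introduced in a newer format version is
# simply absent from old documents, so fall back to a sensible default.
middle_name = person.get("middleName", "")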
JSON (JavaScript Object Notation) is simply a lightweight, semi-structured way of representing a set of data.
One sample storyline:
Let's say you are creating an application that needs to communicate with another application, and you want to make it easy for other applications to consume the data your application provides.
There are a lot of ways to do this, but by using JSON you keep the process simple (the applications that consume your data can figure out how to read it on their own, if they want to) AND you cut down on the amount of raw data being passed around.
To answer your question: JSON is very simple and lightweight compared to other communication methods like SOAP or connecting straight to the database where you hold your data.
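As a small illustration of that storyline (not from the original answer; the field names are invented), in Python the exchange boils down to two calls:

import json

# Application A: serialize its native data structure to JSON text.
reading = {"sensor": "boiler-temp", "value": 73.5, "unit": "C"}
payload = json.dumps(reading)   # '{"sensor": "boiler-temp", "value": 73.5, "unit": "C"}'

# ...payload travels over the network (HTTP, message queue, etc.)...

# Application B, possibly written in a different language, parses the text
# back into its own native objects without knowing anything about A's internals.
received = json.loads(payload)
print(received["value"])        # 73.5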
Related
So, I need to implement my new project in Fortran 2008. In this project I need to store large structures, and I'd like to store them in binary format. So far I've read here about both unformatted and formatted binary streams, but I'm still a little confused about how to use them to store something more complex than a few numbers or strings.
Let's say I'll have my data stored in JSON format:
"net_name": "Test ANN",
"net_type": "Feed-forward",
"train_method": "Back-propagation",
"neurons":{ "1": { "inputs":["2", "3"],
"outputs": ["10", "11"]},
"2": { "inputs":["4", "5"],
"outputs": ["11", "12"]}
}
I know that I can use some 3rd-party modules for working with JSON directly (like this one), but I'd like to be able to use a binary format as well, because the file sizes can be really huge.
So, is it possible to store such a structure using Fortran binary streams? Or is there some other way to do it in Fortran? I'm not asking exclusively about unformatted binary files; if there is some other elegant solution, I'll be glad to hear about it. Still, I prefer ones without many external dependencies.
I'm currently evaluating Contentful as a potential CMS for a project. I've been playing around with the JSON API, which is great, but I'm having trouble representing anything more complex than a flat object data structure as a content type.
The workaround I've found is to create a separate entity and reference it, which works, but it makes things quite a bit more complicated (far more entities, additional publishing required, etc.).
As discussed by Contentful here, this approach works great for relating content, but that's a different use case. I simply want to create a piece of content like the following:
{
  "item": "value",
  "subitem": {
    "item": "value"
  }
}
Is there another approach to handle this?
So what you are talking about is exactly the same issue we had when building one of our applications.
To get around this, we wrote a small npm module that parses these complex content types fairly easily.
Check it out here: https://github.com/remedyhealth/contentpull
If you want to see the parts talking specifically about the parsing, we wrote a simple tonic notebook to show this: https://tonicdev.com/mrsteele/contentpull
(the parser section is towards the bottom)
Let me know if that helps at all, and please feel free to fork and improve if you have any good recommendations.
I'm working on a graduate course project to develop a query client for CKAN and DCAT catalogs. I've read a lot of documentation and specs, yet a lot of things still seem to be proposals, so I figured I needed to reach out and ask someone who knows.
The Project Open Data site describes DCAT as a JSON-LD-based format with a particular schema. The schema makes sense, but there is a lot of push in my class toward targeting US federal government data from data.gov, which runs CKAN (as many of these data-sharing systems do, according to my research). Everywhere I look, people suggest that CKAN supports DCAT, but I'm just not finding that.
For instance, http://catalog.data.gov/api/3/action/package_show?id=national-stock-number-extract shows a completely different JSON format. It appears to have values that could be used to translate to a JSON-LD DCAT object.
The following properties are in the DCAT schema, but most of the document doesn't conform; it just looks like something that could be translated into JSON-LD DCAT.
[
  {
    "key": "bureauCode",
    "value": [
      "007:15"
    ]
  },
  {
    "key": "accrualPeriodicity",
    "value": "R/PT1S"
  },
  {
    "key": "spatial",
    "value": "National and International"
  }
]
Then I came across this page, which shows the expected format I'm looking for, but it says that it's a proposal. Is this still accurate? In the case of data.gov, I can simply append .rdf to the end of a dataset URI (one of the features the proposal mentions) and it produces an RDF XML document using the DCAT vocabulary. But the same dataset accessed via the CKAN API doesn't provide the same functionality.
For instance:
http://catalog.data.gov/dataset/housing-affordability-data-system-hads -> page
http://catalog.data.gov/dataset/housing-affordability-data-system-hads.rdf -> rdf xml
http://catalog.data.gov/api/3/action/package_show?id=housing-affordability-data-system-hads -> CKAN's JSON format
http://catalog.data.gov/api/3/action/package_show?id=housing-affordability-data-system-hads.rdf -> NOT FOUND
So what is the deal exactly? I see that the plugin for DCAT is in development, but has it just not been finished and integrated into CKAN for production?
Support for DCAT is not part of CKAN core; there is, however, the ckanext-dcat extension. It is currently still a work in progress, so it's not yet finished.
If you have specific needs that are not yet implemented, you might want to fork the repo and add those features.
I know that the Swedish portal Öppnadata.se uses ckanext-sweden, which customizes ckanext-dcat to some extent.
The specification that you found really does seem outdated, but I couldn't find anything better myself. I guess it's also the basis for the ckanext-dcat extension.
All that said, this is not first-hand information. I will soon start developing a DCAT-based catalogue, and actually tried to answer the questions you posed some time ago. My answer above reflects what I have found out so far :)
I think you're mixing up a few things. DCAT is an RDF vocabulary defined by the W3C, which means it is a standardised way to describe open data using RDF. RDF is a data model which has different serialization formats: rdf+xml, turtle, n3, json-ld, and so on. This means the same information can be represented in both JSON and XML.
Like Odi mentioned, CKAN does not support DCAT out of the box; it needs to be installed as a plugin.
Coming to your question now: the API link you mentioned is just that, an API for CKAN. It has nothing to do with DCAT. The information revealed by the API is similar to DCAT because both describe the datasets' metadata. The easiest way to find out what a CKAN instance makes available is to look for a link in the HTML source of a dataset page.
An example taken from the online demo, which links to the Turtle DCAT feed: <link rel="alternate" type="text/ttl" href="http://demo.ckan.org/dataset/a83cf982-723f-4859-8c1c-0518f9fd1600.ttl"/>
JSON isn't a popular format for exposing DCAT, but you should be able to find RDF libraries that can read the other formats.
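For what it's worth, reading the plain CKAN API (as opposed to a DCAT feed) is straightforward. Here is a minimal Python sketch, assuming the standard CKAN response envelope (a result object) and the key/value extras list shown in the question:

import json
from urllib.request import urlopen

# CKAN's action API wraps the dataset metadata in a response envelope.
url = ("http://catalog.data.gov/api/3/action/package_show"
       "?id=housing-affordability-data-system-hads")

with urlopen(url) as response:
    envelope = json.load(response)

dataset = envelope["result"]          # the actual package metadata
print(dataset["title"])
# Dataset-level extras are key/value pairs, like bureauCode above.
for extra in dataset.get("extras", []):
    print(extra["key"], "=", extra["value"])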
I am completely blank about JSON, but I would like to be able to read some data from the URL http://stokercloud.dk/dev/getdriftjson.php?mac=oz8hp and store it in a DB.
I don't know where to start, so I thought I would ask here for hints, and maybe some links to samples that I might learn from.
I know that the output might look confusing, but I have a list of what each item is. The file is runtime data from my pellet burner.
The JSON specification is the first page to read; the standard is so simple that it is easy to understand from that page alone.
I also found a broader tutorial, with illustrations and more resources, which is nice to see. Here is the conclusion of that web page:
JSON is an open, text-based, lightweight data interchange format specified as RFC 4627; it came to the developer world in 2005 and its popularity has increased rapidly.
JSON uses objects and arrays as data structures, and strings, numbers, true, false and null as values. Objects and arrays can be nested recursively.
Most (if not all) modern programming languages can be used to work with JSON.
NoSQL databases, which evolved to get rid of the bottlenecks of relational databases, use JSON to store data.
JSON gives developers the power to choose between XML and JSON, leading to more flexibility.
Besides NoSQL, AJAX, package management, and the integration of APIs into web applications are the major areas where JSON is used extensively.
IMHO the main point of JSON is that it contains documents, or arrays of documents. There are fewer data types than in Delphi (e.g. no official date/time type, and just one numeric type). It is an exchange format which is widely used now and, from my own experience, easier to work with than XML, on both the human and the computer side.
In Delphi, you have several libraries around, mainly:
SuperObject
XSuperObject
dwsJSON
lkJSON
DBXJSON, which ships with newer versions of Delphi
mORMot for Win32/Win64
SynCrossPlatformJSON
About performance, you can take a look at our blog article. DBXJSON (and the official JSON unit of Delphi) is by far the slowest and is somewhat difficult to work with, since some methods for easy access to the JSON document content are missing. The other libraries are much easier to work with. Our version shipped with mORMot is very fast, as is dwsJSON. SuperObject is slower than those, especially for huge content, and XSuperObject is slow (but cross-platform). Our SynCrossPlatformJSON unit is also cross-platform, very fast, and has variant-based document access.
Some code using the mORMot library:
uses
  SynCrtSock,
  SynCommons;

procedure test;
var
  json: RawUTF8;
  jsondata: TDocVariantData;
  i: integer;
begin
  json := TWinHttp.Get('http://stokercloud.dk/dev/getdriftjson.php?mac=oz8hp');
  jsondata := DocVariantData(_json(json).jsondata)^;
  for i := 0 to jsondata.Count-1 do
    writeln(jsondata.Values[i]); // here all items are converted back to JSON and written
end;
To learn about JSON (JavaScript Object Notation), you could read the JSON article on Wikipedia.
To download data from a URL, you can use TIdHttp, an HTTP client from the Indy framework.
To parse the JSON, I'd suggest using SuperObject. It includes great examples in its demos directory.
JSON is an interchange format for sending data between anything that needs to have data sent to it. Its simplicity is its strength.
The text is valid JavaScript and so can be interpreted by any JavaScript engine, but it is now so popular that virtually every language has a JSON parser built in or available as a library (see http://json.org/ and scroll down to the bottom).
Basically, JSON is a very simple structured text. If you google "JSON library Delphi" you should get some solutions, and likewise for any other language you want to use.
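To connect these suggestions back to the concrete goal (read the URL, store the items in a DB), here is a rough sketch of the overall pattern in Python rather than Delphi. It assumes the endpoint returns a JSON object, and the single key/value table is just a placeholder for whatever schema fits the asker's item list:

import json
import sqlite3
from urllib.request import urlopen

# Fetch and parse the runtime data (same URL as in the Delphi example above).
with urlopen("http://stokercloud.dk/dev/getdriftjson.php?mac=oz8hp") as resp:
    data = json.load(resp)

# Store each top-level item as a key/value row.
db = sqlite3.connect("burner.db")
db.execute("CREATE TABLE IF NOT EXISTS readings (key TEXT, value TEXT)")
for key, value in data.items():
    db.execute("INSERT INTO readings VALUES (?, ?)", (key, json.dumps(value)))
db.commit()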
I have some configuration files in which I store complex object values as serialized JSON. Currently there is a configuration file for each environment (localhost, dev, prod, etc.) and for each installation by client. Most of the values are identical between environments, but not all, so for three environments and four clients I currently have 12 files to manage.
If these were web.config files, web.config transforms would solve the problem. If this were C#, I'd have compiler preprocessor directives that could be used to substitute the different values based on the current build configuration.
Does anyone know of anything that works basically this way, or have good suggestions on tried-and-true ways to proceed? What I would like is to reduce the number of files down to a single instance per installation that can suffice for every environment.
Configuration of configuration always seems a bit overdone to me, but you could use a properties file for the parts that change and Apache Ant's <replace> task to do the substitutions. Something like this:
<replace
    file="configure.json"
    propertyFile="config-of-config.properties">
  <replacefilter
      token="#token1#"
      property="property.key"/>
</replace>
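To illustrate how the pieces fit together (this example is not from the original answer; the value is made up): if configure.json contains {"endpoint": "#token1#"} and config-of-config.properties contains property.key=https://prod.example.com/api, then running the task rewrites the file to {"endpoint": "https://prod.example.com/api"}. One template plus one small properties file per environment replaces each full copy of the configuration.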
Jsonnet from Google is a language with a superset syntax based on JSON, adding high-level language features that help to model data in JSON format. The compilation step produces JSON. I have used it in a project to describe complex deployment environments that at times inherit from one another and that share domain attributes, albeit utilizing them differently from one instance to another.
As an example, an instance contains applications, tenant subscriptions for those applications, contracts, destinations and so forth. The values of all these attributes are objects that recur throughout the environments.
Their docs are very thorough; don't miss the std functions, because they make for some very powerful data-rendering capabilities.
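The underlying idea (one base definition, small per-environment overrides) can be sketched in plain Python, independent of Jsonnet; the keys and hostnames below are invented for illustration:

import json

# One shared base configuration plus small per-environment overrides
# replaces N near-identical config files.
base = {"db_host": "localhost", "timeout": 30, "feature_x": False}
overrides = {
    "dev":  {"db_host": "dev.example.com"},
    "prod": {"db_host": "prod.example.com", "feature_x": True},
}

def render(env: str) -> str:
    merged = {**base, **overrides.get(env, {})}  # override keys win
    return json.dumps(merged, indent=2)

print(render("prod"))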
I wrote a Squirrelistic JSON Preprocessor, which uses the Golang text template syntax to generate JSON files based on the parameters provided.
A JSON template can include references to other templates and use conditional logic, comments, variables and everything else the Golang text/template package provides.
This really comes down to your full stack.
If you're talking about an application that runs solely client-side, with no server-side processing whatsoever, then there's really no such thing as pre-processing.
You can process the data further before actually using it, but that won't mean it is processed before the page is served; it means people have to sit around waiting for that to happen before the apps which need the data can be initialized.
The benefit of using JSON to begin with is that it's just a data store, quite language-agnostic and now very widely supported. So if your app is not 100% client-side, there's nothing stopping you from pre-processing in whatever language you're using on the server and caching those versions of those files to serve (and cache) to users based on their needs.
If you really, really need a system to do live processing of config files on the client side, and you've gone through the work of creating app views which load early but show the user that initialization is deferred (i.e. "loading..."/spinners), then download a second JSON file which holds all of the needed implementation-specific data (you'll have 12 of these tiny files, which should be simple to manage), parse both JSON files into JS objects, and extend the large config object with the additional data from the secondary file.
Please note: use localStorage or some other storage facility to cache this, so that for HTML5 browsers this longer load only happens one time.
There is one: json-variables (https://www.npmjs.com/package/json-variables).
Conceptually, it is a function which takes a string of JSON contents sprinkled with specially marked variables, and produces a string with those variables resolved, just as Sass or Less do for CSS: it's used to DRY up the source code.
Here's an example. You'd put something like this in a JSON file:
{
  "firstName": "customer.firstName",
  "message": "Hi %%_firstName_%%",
  "preheader": "%%_firstName_%%, look what's inside"
}
Notice how DRY it is: a single source of truth for the firstName value.
json-variables would process it into:
{
  "firstName": "customer.firstName",
  "message": "Hi customer.firstName",
  "preheader": "customer.firstName, look what's inside"
}
That is, Hi %%_firstName_%% looks for firstName at the root level (but it could equally be a deeper path, for example data1.data2.firstName). Resolving also "bubbles up" to the root level, and you can use custom data structures and more.
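The core mechanism can be sketched in a few lines of Python (a conceptual illustration only, not the json-variables implementation; it handles just the root-level case described above):

import json
import re

def resolve(doc: dict) -> dict:
    # Replace %%_key_%% markers in string values with the value of
    # that key at the root of the same document (root level only).
    def substitute(text: str) -> str:
        return re.sub(r"%%_(\w+)_%%", lambda m: str(doc[m.group(1)]), text)
    return {k: substitute(v) if isinstance(v, str) else v
            for k, v in doc.items()}

source = json.loads('{"firstName": "customer.firstName", '
                    '"message": "Hi %%_firstName_%%"}')
print(resolve(source)["message"])   # Hi customer.firstName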
The missing pieces of a JSON-processing task puzzle are:
Means to merge multiple JSON files in various ways (object-merge-advanced)
Means to orchestrate actions; Gulp is good if your preferred programming language is JS
Means to get/set values by path (object-path; its notation uses dots only, no brackets: key1.key2.array.2 instead of key1.key2.array[2])
Means to maintain the same set of keys across a set of JSON files, so that when you add a key in one it's added in all the others (object-fill-missing-keys)
In the described case, we can take at least two approaches: one-to-many or many-to-many.
In the former, Gulp could be "baking" many JSON files from one or more JSON-like source files, with json-variables DRY-ing up the references.
In the latter, a "managed" set of JSON files could be rendered into a set of distribution files: Gulp watches the src folder and runs object-fill-missing-keys to normalise the schemas, maybe even sorting the objects (yes, that's possible: sorted-object).
It all depends on how similar the desired set of JSON files is, how the values are customised, and whether that is done manually or programmatically.