I have a very big .json file (over 5 GB) and I want to convert it to .xml format. Is there any software or other way (anything at all) to do that?
I found an XML editor, but it only offers an XML => JSON converter.
The typical way to process both XML and JSON files is to load them completely into memory. You then have a so-called DOM, which allows various kinds of data processing. But neither XML nor JSON is really designed for storing the amount of data you have here. In my experience you will typically run into memory problems as soon as you exceed roughly 200 MB, because a DOM is built from many individual objects, and that approach carries a huge memory overhead far exceeding the amount of data you actually want to process.
The only practical way to process files of that size is a streaming approach. The basic idea: instead of parsing the whole file and loading it into memory, you parse and process it "on the fly". As the data is read, events are triggered, and your software reacts to them and performs whatever actions are needed. (Have a look at the SAX API to understand this concept in more detail.)
As you stated, you are processing JSON, not XML. Streaming APIs for JSON are available in the wild as well, and you could even implement one fairly easily yourself: JSON is a pretty simple data format.
Nevertheless, such an approach is not optimal: it typically results in very slow processing because of the millions of method invocations involved. For every item encountered you usually have to call a method to perform some processing task, and the additional checks needed to work out what kind of information you are currently looking at in the stream slow things down even further.
You should really consider a different approach: first split your file into many small ones, then process those. This may not seem very elegant, but it keeps the task much simpler, and it gives you one major advantage: it will be much easier to debug your software.
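A minimal sketch of the streaming idea, assuming Java with Jackson's streaming JsonParser and the built-in StAX writer (the file names, the synthetic "root" element and the "item" element name for array entries are illustrative assumptions, not something the original answer specifies):

// Sketch: convert a large JSON file to XML token by token, never building a DOM.
// Assumes Jackson (com.fasterxml.jackson.core) is on the classpath.
import com.fasterxml.jackson.core.JsonFactory;
import com.fasterxml.jackson.core.JsonParser;
import com.fasterxml.jackson.core.JsonToken;

import javax.xml.stream.XMLOutputFactory;
import javax.xml.stream.XMLStreamWriter;
import java.io.File;
import java.io.FileOutputStream;

public class JsonToXmlStream {
    public static void main(String[] args) throws Exception {
        try (JsonParser p = new JsonFactory().createParser(new File("big.json"));
             FileOutputStream out = new FileOutputStream("big.xml")) {
            XMLStreamWriter xml = XMLOutputFactory.newInstance().createXMLStreamWriter(out, "UTF-8");
            xml.writeStartDocument("UTF-8", "1.0");
            xml.writeStartElement("root");           // synthetic root element
            String name = "item";                    // element name for anonymous array entries
            JsonToken t;
            while ((t = p.nextToken()) != null) {
                switch (t) {
                    case FIELD_NAME:
                        name = p.getCurrentName();   // remember the key for the value that follows
                        break;
                    case START_OBJECT:
                    case START_ARRAY:
                        xml.writeStartElement(name);
                        name = "item";
                        break;
                    case END_OBJECT:
                    case END_ARRAY:
                        xml.writeEndElement();
                        name = "item";
                        break;
                    case VALUE_NULL:
                        xml.writeEmptyElement(name);
                        break;
                    default:                         // string, number or boolean scalar
                        xml.writeStartElement(name);
                        xml.writeCharacters(p.getText());
                        xml.writeEndElement();
                }
            }
            xml.writeEndElement();                   // close the synthetic root
            xml.writeEndDocument();
            xml.close();
        }
    }
}

Because only one token is held in memory at a time, memory use stays flat regardless of file size. JSON keys that are not valid XML element names would still need extra handling.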
If you're willing to use the XML Serializer from PEAR, you can convert the JSON to a PHP object and then the PHP object to XML in two easy steps:
Check this link for more: convert json to xml
A little example:
include("XML/Serializer.php");
function json_to_xml($json) {
$serializer = new XML_Serializer();
$obj = json_decode($json);
if ($serializer->serialize($obj)) {
return $serializer->getSerializedData();
}
else {
return null;
}
}
Good luck, and give it a try.
I am currently exploring the uses of JSON. I have read many posts, articles and YouTube videos, yet I still don't understand its purpose or practicality (it's been a month). No definition has made the concept click well enough for me to implement it comfortably.
What I understand (brief overall understanding): JSON provides an easier way to format data and send it across networks.
My question: could someone provide me with a comprehensible storyline of JSON in action, as I am struggling to understand its practicality? I hope this question makes sense; if not, I can try to re-word it.
Edit for @Philipp: Yes, I do have experience with reading text-based files in Java (mainly in assignments at uni). No, I do not have experience with competing technologies such as XML or YAML. In my head I think of JSON as something like 'cookies', but that is most likely wrong. I hope this helps, and I look forward to your explanation; maybe it will help me understand.
JSON is, overly simplified, a standard for how to structure your own file formats. "File" does not necessarily mean a file stored in a filesystem; it can also be an ephemeral file that is created on one computer, sent to another computer over the network, processed, and then discarded without ever being stored. But thinking of it as a file format makes things easier.
A JSON-based file format stores a document as a key-value structure. Every value has a key, and every value can be a string, a number, another key-value structure, or a list of any of the above. Here is an example based on the one from the Wikipedia article on JSON:
{
  "firstName": "John",
  "lastName": "Smith",
  "address": {
    "streetAddress": "21 2nd Street",
    "city": "New York",
    "state": "NY",
    "postalCode": "10021-3100"
  },
  "phoneNumbers": [
    {
      "type": "home",
      "number": "212 555-1234"
    },
    {
      "type": "office",
      "number": "646 555-4567"
    }
  ]
}
This file describes a person who has a first name, a last name, one address consisting of a street address, city, state and postal code, and a list of phone numbers, with each phone number having a type and a number.
OK, but there are certainly other ways to store that kind of information, some of which might be more concise. So why would you choose to invent a file format based on JSON instead of just starting from scratch?
Library support. There are lots of libraries available for parsing and writing JSON. If you ever wrote a file parsing routine yourself, then you know how much of a PITA those can be. There are a ton of edge cases you have to keep in mind to prevent your program from crashing or reading garbage data. A JSON library takes care of all of these edge cases for you. This makes it a lot easier to create programs that work with JSON data than if you invented your own file format.
Tool support. There are editors available which can edit any form of JSON data in a handy UI. For example, did you notice that Stack Overflow automatically added syntax highlighting to the JSON code above? I didn't do anything to make that happen. Stack Overflow just automatically recognized that it is JSON and colored it accordingly. That would not be possible with a homebrewed file format.
Good compromise between machine-readability and human-readability. The format above is not just easy for programs to read (thanks to the aforementioned library support) but also pretty readable and editable for humans. People can intuitively understand the format and edit it in a text editor without breaking things, especially if they have worked with JSON-based file formats before.
Forward- and backward compatibility of file formats. This is something you could technically achieve in your own file format, but JSON makes it a lot easier. Imagine you create version 2.0 of your program, which comes with version 2.0 of the file format. Your documents now have some additional fields. Handling this in homebrewed text-based formats can be really difficult. But the key-value structure of JSON makes it pretty easy to recognize that certain keys are missing and then replace their values with reasonable defaults. Similarly, the 1.0 version of your program might make limited sense of 2.0 documents by simply ignoring any keys it doesn't understand yet (see the sketch after these points).
Interoperability with JavaScript. This might be somewhat situational, but the reason you see JSON used a lot in the context of web applications is that JSON is essentially valid JavaScript. That means that in a browser-based application, converting between JavaScript objects and JSON text is trivial. That makes it a preferred choice for exchanging data between browser-based applications and servers. The result is that you see a lot of JSON in cookies and web service requests (although neither mandates the use of JSON).
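To make the library-support and compatibility points above concrete, here is a minimal sketch assuming Java with the Jackson library (the file name "person.json" refers to the example document above, and "middleName" is an invented key used only to show the missing-key case):

// Sketch: reading the example document with a JSON library instead of a hand-written parser.
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import java.io.File;

public class ReadPerson {
    public static void main(String[] args) throws Exception {
        JsonNode person = new ObjectMapper().readTree(new File("person.json"));

        String firstName = person.path("firstName").asText();            // "John"
        String city = person.path("address").path("city").asText();      // "New York"

        // A key added in a newer version of the format simply falls back to a default
        // when it is missing, which is what makes forward/backward compatibility easy.
        String middleName = person.path("middleName").asText("(none)");

        // The list of phone numbers can be iterated directly.
        for (JsonNode phone : person.path("phoneNumbers")) {
            System.out.println(phone.path("type").asText() + ": " + phone.path("number").asText());
        }
        System.out.println(firstName + " " + middleName + ", " + city);
    }
}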
JSON (or JavaScript Object Notation) is simply a lightweight, semi-structured way of representing a set of data.
One sample storyline:
Let's say you are creating an application that needs to communicate with another application, and you want to make it easy for other applications to consume the data your application provides.
There are a lot of ways to do this, but by using JSON you make the process simpler (the applications that consume your data can figure out how to read it on their own, if they want to) AND you cut down on the amount of raw data being passed around.
To answer your question: it is very simple and lightweight compared to other communication methods like SOAP, or connecting straight to the database where you hold your data.
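As an illustration of the "lightweight" point (the payload below is invented purely for comparison and is not from the original answer), the same request expressed as JSON and as a minimal SOAP 1.2 envelope might look like this:

{"userId": 42, "action": "getOrders"}

<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope">
  <soap:Body>
    <getOrders xmlns="http://example.com/orders">
      <userId>42</userId>
    </getOrders>
  </soap:Body>
</soap:Envelope>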
I have a large JSON file; its size is 5.09 GB. I want to convert it to an XML file. I tried online converters, but the file is too large for them. Does anyone know how to do that?
The typical way to process both XML and JSON files is to load them completely into memory. You then have a so-called DOM, which allows various kinds of data processing. But neither XML nor JSON is really designed for storing the amount of data you have here. In my experience you will typically run into memory problems as soon as you exceed roughly 200 MB, because a DOM is built from many individual objects, and that approach carries a huge memory overhead far exceeding the amount of data you actually want to process.
The only practical way to process files of that size is a streaming approach. The basic idea: instead of parsing the whole file and loading it into memory, you parse and process it "on the fly". As the data is read, events are triggered, and your software reacts to them and performs whatever actions are needed. (Have a look at the SAX API to understand this concept in more detail.)
As you stated, you are processing JSON, not XML. Streaming APIs for JSON are available in the wild as well, and you could even implement one fairly easily yourself: JSON is a pretty simple data format.
Nevertheless, such an approach is not optimal: it typically results in very slow processing because of the millions of method invocations involved. For every item encountered you usually have to call a method to perform some processing task, and the additional checks needed to work out what kind of information you are currently looking at in the stream slow things down even further.
You should really consider a different approach: first split your file into many small ones, then process those. This may not seem very elegant, but it keeps your task much simpler, and it gives you one major advantage: it will be much easier to debug your software. Unfortunately you are not very specific about your problem, so I can only guess, but large files usually imply a fairly complex data model. You will therefore probably be much better off with many small files instead of a single huge one, because that later lets you dig into individual aspects of the data and of the processing as needed. You will probably fail to get any detailed insight while working on a single 5 GB file, and when errors occur you will have trouble identifying which part of the huge file is causing the problem.
As I already said, you are unfortunately very unspecific about your problem. Sorry, but without more details about your problem (and your data in particular) I can only give you these general recommendations about data processing. I do not know any details about your data, so I cannot tell you which approach will work best in your case.
I am completely blank about JSON, but I would like to be able to read some data from the URL http://stokercloud.dk/dev/getdriftjson.php?mac=oz8hp and store it in a DB.
I don't know where to start, so I thought I would ask here for hints, and maybe some links to samples that I might learn from.
I know that the output might look confusing, but I have a list of what each item is.
The file is runtime data from my pellet burner.
The JSON specification is the first page to read. The standard is so simple that it is easy to understand from that page alone.
I also found a broader tutorial, with illustrations and more resources; nice to see.
Here is the conclusion of that web page:
JSON is an open, text-based, lightweight data interchange format specified as RFC 4627; it came to the developer world in 2005 and its popularity has increased rapidly.
JSON uses objects and arrays as data structures, and strings, numbers, true, false and null as values. Objects and arrays can be nested recursively.
Most (if not all) modern programming languages can be used to work with JSON.
NoSQL databases, which evolved to get rid of the bottlenecks of relational databases, use JSON to store data.
JSON gives developers the power to choose between XML and JSON, leading to more flexibility.
Besides NoSQL, AJAX, package management, and the integration of APIs into web applications are the major areas where JSON is used extensively.
IMHO the main point of JSON is that it contains documents, or arrays of documents. There are fewer data types than in Delphi (e.g. no official date/time type, and just one numeric type). It is an exchange format which is now widely used and, from my own experience, easier to work with than XML, on both the human and the computer side.
In Delphi, you have several libraries around, mainly:
SuperObject
XSuperObject
dwsJSON
lkJSON
DBXJSON, which ships with newer versions of Delphi
mORMot for Win32/Win64
SynCrossPlatformJSON
About performance, you can take a look at our blog article. DBXJSON (and the official JSON unit of Delphi) is by far the slowest, and somewhat difficult to work with: some methods for easy access to the JSON document content are missing. The other libraries are much easier to work with. Our version shipped with mORMot is very fast, as is dwsJSON. SuperObject is slower than those, especially for huge content, and XSuperObject is slow (but cross-platform). Our SynCrossPlatformJSON unit is also cross-platform, very fast, and has variant-based document access.
Some code using the mORMot library:
uses
  SynCrtSock,
  SynCommons;

procedure test;
var
  json: RawUTF8;
  jsondata: TDocVariantData;
  i: integer;
begin
  json := TWinHttp.Get('http://stokercloud.dk/dev/getdriftjson.php?mac=oz8hp');
  jsondata := DocVariantData(_json(json).jsondata)^;
  for i := 0 to jsondata.Count - 1 do
    writeln(jsondata.Values[i]); // here all items are converted back to JSON and written
end;
To learn about JSON (JavaScript Object Notation), you could read the JSON article on Wikipedia.
To download data from a URL, you can use TIdHttp, the HTTP client of the Indy framework.
To parse the JSON, I'd suggest using superobject. It includes great examples in its demos directory.
JSON is an interchange format for sending data between anything that needs to have data sent to it. Its simplicity is its strength.
The text is valid JavaScript and so can be interpreted by any JavaScript engine, but it is now so popular that virtually every language has a JSON parser built in or available as a library (see http://json.org/ and scroll down to the bottom).
Basically, JSON is very simple structured text. If you google "JSON library Delphi" you should get some solutions, and likewise for any other language you want to use.
I have some configuration files in which I store complex object values as serialized JSON. Currently there is a configuration file for each environment (localhost, dev, prod, etc.) and for each installation by client. Most of the values are identical between environments, but not all, so for three environments and four clients I currently have 12 files to manage.
If these were web.config files, there would be web.config transforms to solve the problem. If this were C#, I'd have compiler preprocessor directives that could be used to substitute the different values based on the current build configuration.
Does anyone know of anything that works basically this way, or have some good suggestions on tried and true ways to proceed? What I would like is to reduce the number of files down to a single instance per installation that can suffice for every environment.
Configuration of configuration always seems a bit overdone to me, but you could use a properties file for the parts that change, and Apache Ant's <replace> task to do the substitutions. Something like this:
<replace
    file="configure.json"
    propertyFile="config-of-config.properties">
  <replacefilter
      token="#token1#"
      property="property.key"/>
</replace>
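For illustration, the matching properties file would then contain only the values that differ per environment (the key and value below are invented examples):

# config-of-config.properties (hypothetical contents)
property.key=https://api.dev.example.com

You would typically copy a token-bearing template and run the task against the copy once per environment-specific properties file, so only the small properties files need to differ between environments.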
Jsonnet from Google is a language with a superset syntax based on JSON, adding high-level language features that help to model data in JSON format. The compilation step produces JSON. I have used it in a project to describe complex deployment environments that at times inherit from one another and that share domain attributes, albeit utilizing them differently from one instance to another.
As an example, an instance contains applications, tenant subscriptions for those applications, contracts, destinations and so forth. The values for all of these attributes are objects that recur throughout the environments.
Their docs are very thorough; don't miss the std functions, as they provide some very powerful data rendering capabilities.
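A minimal sketch of that inheritance idea in Jsonnet (the file names, field names and host values here are made up for illustration):

// base.libsonnet - attributes shared by all environments
{
  app: "billing",
  db: { host: "localhost", port: 5432 },
}

// prod.jsonnet - one environment importing the base and overriding a single value
local base = import "base.libsonnet";
base + {
  db+: { host: "db.prod.internal" },
}

Compiling prod.jsonnet with the jsonnet tool emits plain JSON in which db.port is inherited from the base and db.host is overridden.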
I wrote the Squirrelistic JSON Preprocessor, which uses the Golang text/template syntax to generate JSON files based on the parameters provided.
A JSON template can include references to other templates and can use conditional logic, comments, variables and everything else the Golang text/template package provides.
This really comes down to your full stack.
If you're talking about an application that runs solely client-side, with no server-side processing whatsoever, then there's really no such thing as pre-processing.
You can process the data further before actually using it, but that doesn't mean it will be processed before the page is served; it means people have to sit around waiting for that to happen before the apps which need that data can be initialized.
The benefit of using JSON to begin with is that it's just a data store, quite language-agnostic, and very widely supported by now. So if your stack is not 100% client-side, there's nothing stopping you from pre-processing in whatever language you're using on the server and caching those versions of the files, to serve (and cache) for users based on their needs.
If you really, really need a system to do live processing of config files on the client side, and you've gone through the work of creating app views which load early but show the user that initialization is deferred (i.e. "loading..."/spinners), then download a second JSON file which holds all of the needed implementation-specific data (you'll have 12 of these tiny files, which should be simple to manage), parse both JSON files into JS objects, and extend the large config object with the additional data from the secondary file.
Please note: use localStorage or some other storage facility to cache this, so that for HTML5 browsers this longer load only happens once.
There is one: https://www.npmjs.com/package/json-variables
Conceptually, it is a function which takes a string of JSON contents sprinkled with specially marked variables and produces a string with those variables resolved, the same way Sass or Less does for CSS: it's used to DRY up the source code.
Here's an example.
You'd put something like this in a JSON file:
{
  "firstName": "customer.firstName",
  "message": "Hi %%_firstName_%%",
  "preheader": "%%_firstName_%%, look what's inside"
}
Notice how it's DRY: a single source of truth for the firstName value.
json-variables would process it into:
{
  "firstName": "customer.firstName",
  "message": "Hi customer.firstName",
  "preheader": "customer.firstName, look what's inside"
}
That is, Hi %%_firstName_%% would look for firstName at the root level (but it could equally be a deeper path, for example data1.data2.firstName). Resolution also "bubbles up" to the root level, and you can use custom data structures and more.
The missing pieces of a JSON-processing task puzzle are:
Means to merge multiple JSON files in various ways (object-merge-advanced)
Means to orchestrate actions (Gulp is good if your preferred programming language is JS)
Means to get/set values by path (object-path; its notation uses dots only, no brackets: key1.key2.array.2 instead of key1.key2.array[2])
Means to maintain the same set of keys across a set of JSON files, so that when you add a key in one it is added in all the others (object-fill-missing-keys)
In the described case, we can take at least two approaches: one-to-many, or many-to-many.
For the former, Gulp could be "baking" many JSON files from one or more JSON-like source files, with json-variables DRY-ing up the references.
For the latter, a "managed" set of JSON files could alternatively be rendered into a set of distribution files: Gulp watches the src folder, runs object-fill-missing-keys to normalise the schemas, and maybe even sorts the objects (yes, that's possible: sorted-object).
It all depends on how similar the desired set of JSON files is, how the values are customised, and whether that is done manually or programmatically.
Are there any good PL/SQL libraries for JSON that you've worked with and found useful?
In PL/SQL, I'm having to tediously hand-code the return of JSON values to JavaScript functions. I found one PL/SQL library for auto-generating JSON, but it doesn't do everything I need it to. For example, I couldn't extend the library's base functions to return a complex tree-like JSON data structure required by a JavaScript tree component I was using.
Note:
The system, which has been in production for 8+ years, was architected to use PL/SQL for the CRUD operations and most of the business logic. The PL/SQL also generates 90% of the presentation layer (HTML/JavaScript), using mod PL/SQL. The other 10% is report data produced via Oracle Reports Builder.
@Geoff -
The system, which has been in production for 8+ years, was architected to use PL/SQL for the CRUD operations and most of the business logic. The PL/SQL also generates 90% of the presentation layer (HTML/JavaScript), using mod PL/SQL. The other 10% is report data produced via Oracle Reports Builder.
So there isn't application code like you'd see in more modern, better-architected systems. I do want to do things the right way; I just don't have that luxury given organizational constraints.
I wonder why you don't want to bring the data from Oracle into some application code and make JSON there?
Ouch - generating your interface in PL/SQL. You have my sympathy.
I've never done anything like this, but Googling found this page (which is also referenced from the json.org page).
A relatively new library called PLJSON (no slash) is on GitHub. We're using it in a pretty large project in production and have had no troubles with it at all. Parsing is a tad slow, but that is to be expected.
Disclaimer: I wrote it. If you find bugs or have suggestions, let me know.
In case anyone is still interested in serving JSON from PL/SQL: I have just completed a PL/SQL data service framework named BackLogic. It is a full REST web service framework. It includes a SQL utility to produce complex JSON structures from a REF CURSOR, including the "complex tree-like JSON data structure required by a JavaScript tree component" mentioned in the original question, which the earlier PLJSON framework is not quite capable of producing.
I do see a bright future for PL/SQL in creating REST APIs. Until recently, the object-relational impedance mismatch has been handled mainly by ORM frameworks in the middle tier. BackLogic solves this issue in the database, and is thus able to produce the complex JSON structures needed by UI frameworks. Here is a link to the BackLogic User Guide; you may find some non-trivial examples in Section 5.3.