Why is JSON-formatted data stored in MongoDB? Is that the only format MongoDB supports? What are the advantages of using JSON for storing records in MongoDB, and what is its benefit over other formats?
Actually, Mongo uses BSON, which can represent the same things as JSON but in less space. JSON (which is more like the representation for human beings) has some properties that are useful in a NoSQL database:
No need for a fixed schema. You can add whatever you want and it will still be valid JSON.
There are parsers available for almost any programming language out there.
The format is programmer friendly, unlike some alternatives... I'm looking at you, XML ¬¬.
Mongo needs to understand the data without forcing a "collection schema". With JSON you don't need any information about an object to reason about it. For example, you can get the "title" or "age" from any JSON document just by finding that field. With other formats (e.g. protocol buffers) that's not possible, at least not without a lot of code...
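That schema-free field lookup is easy to see in a short sketch, using plain Python and the standard json module (the two documents are made up for illustration):

```python
import json

# Two documents with completely different shapes -- no shared schema.
doc_a = json.loads('{"title": "A post", "tags": ["db", "json"]}')
doc_b = json.loads('{"age": 30, "name": "Ann"}')

# Any field can be looked up on any document without prior knowledge:
title = doc_a.get("title")    # "A post"
age = doc_b.get("age")        # 30
missing = doc_a.get("age")    # None -- the field is simply absent, no error
```

With a format like protocol buffers, the same lookup would require the compiled schema for each message type before you could read a single field.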
(Added) Because Mongo is a database, it wants to run queries fast. BSON/JSON is a format that can meet that requirement AND the others at the same time (easy to implement, allows reflecting on the data, fast to parse, no fixed schema, etc.).
(Added) Mongo reuses a JavaScript engine for its queries, so it makes all the sense in the world to reuse JSON for object representation. BSON is simply a more compact encoding of that format.
I need to work with some systems that use JMESPath to search JSON, but I found that it is missing a lot of important features. For example, it is very hard to search a string by pattern (like this): it does not support regular expressions, and it does not support case-insensitive search. The proposal to add a split function has been frozen since 2017 (like this and this). All of these features are available in jq. So I want to know: why do systems like the AWS S3 CLI and Ansible use JMESPath instead of jq to query JSON?
It's not so much about the difference between JMESPath and jq as the different ways they are used.
Suppose you are querying a remote resource, the result is going to number in the millions of records, but you only care about a specific, much smaller subset of them. You have two choices:
Have every record transmitted to you over the network, then pick out the ones you want locally.
Send your filter to the remote resource and have it do the filtering, only sending you the matching records.
jq is typically used for the former, JMESPath for the latter. There's no reason why a remote service couldn't accept a jq filter, or why you couldn't use a JMESPath-based executable locally.
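The two workflows can be sketched in plain Python (the "remote service" here is simulated by a function; no jq or JMESPath is actually invoked — the point is only where the filter runs):

```python
# A pretend remote dataset: 10,000 records, of which only a handful matter.
records = [{"id": i, "kind": "rare" if i % 1000 == 0 else "common"}
           for i in range(10_000)]

def remote_query(filter_fn=None):
    """Stand-in for a remote service holding `records`."""
    if filter_fn is None:
        return records                               # ship everything
    return [r for r in records if filter_fn(r)]      # filter server-side

# jq-style: every record crosses the "network", then we filter locally.
local = [r for r in remote_query() if r["kind"] == "rare"]

# JMESPath-style: the filter is sent over, only matches come back.
served = remote_query(lambda r: r["kind"] == "rare")
```

Both produce the same answer; the difference is that the second transfers 10 records instead of 10,000, which is why server-side query languages tend to stay small and embeddable.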
Please help me decide between the following formats for storing articles on a server:
XML
JSON
YAML
CSV
There are too many options, and I don't have the knowledge to choose between them. I am looking for objective criteria, not subjective opinions.
The articles may contain a short title, some paragraphs and images.
XML vs JSON vs YAML vs CSV
Here are some considerations you might use to guide your decision:
Choose XML if
You have to represent mixed content (tags mixed within text). [This would appear to be a major concern in your case. You might even consider HTML for this reason.]
There's already an industry standard XSD to follow.
You need to transform the data to another XML/HTML format. (XSLT is great for transformations.)
Choose JSON if
You have to represent data records, and a closer fit to JavaScript is valuable to your team or your community.
Choose YAML if
You have to represent data records, and you value some additional features missing from JSON: comments, strings without quotes, order-preserving maps, and extensible data types.
Choose CSV if
You have to represent data records, and you value ease of import/export with databases and spreadsheets.
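To make the trade-offs concrete, here is a hypothetical article record round-tripped through JSON and CSV with Python's standard library (the field names are invented for this sketch). Note how the nested paragraph list fits JSON directly but must be flattened or dropped for CSV:

```python
import csv
import io
import json

article = {"title": "My Article",
           "paragraphs": ["First.", "Second."],
           "image": "pic.png"}

# JSON: the nested structure maps directly.
as_json = json.dumps(article)

# CSV: flat records only -- the nested paragraph list has no natural home,
# so here it is simply left out.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["title", "image"])
writer.writeheader()
writer.writerow({"title": article["title"], "image": article["image"]})
as_csv = buf.getvalue()
```

Since articles are mostly mixed content (text with inline images), this is also why XML or HTML, which the CSV/JSON sketch above can't represent at all, deserves serious consideration for this particular use case.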
I have a JavaScript library (SincIt) that I would like to use to synchronise my WebApp with a MySQL database. However, SincIt only works with MongoDB at the moment.
I could probably write a MySQL adapter for SincIt, since the library is modular, but I wonder if there is an existing adapter that translates MongoDB instructions to SQL.
MySQL is a relational database that uses SQL, a language grounded in relational algebra. MongoDB is a document database that doesn't support relations or joins.
It does allow for hierarchical nesting of documents, but it's simply an entirely different paradigm.
The most important distinction in this case is that MongoDB is schema-less. With a Mongo collection you never need to do the equivalent of a "CREATE TABLE" statement. Furthermore, Mongo has no problem with you starting a collection with one document of a specific JSON structure and then adding documents with entirely different structures.
Since Mongo works with JSON data, with a relational database you would also have the problem of converting table data to JSON documents and vice versa, which really isn't possible in any generic sense.
With MySQL you of course have to have table structures that are maintained in the data dictionary, and if anything changes you need to ALTER the table. You could probably implement a generic table of rows where the entire data store sits in a blob, stored in the same JSON format that SincIt expects, but at that point you might as well just use Mongo.
With that said, if there's some business rule that necessitates it, the fastest way to get it working with MySQL would probably be to do what I just suggested and have a generic row structure with something like:
id
optype (set, update, delete?)
data (blob storing the json payload)
parent_id
Just from a quick perusal of the SincIt docs, it appears you'd also need something to support the "linked list" aspect of the system.
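A minimal sketch of that generic row structure, using an in-memory SQLite database for illustration (the table and column names are my own invention, not anything SincIt defines):

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE sync_ops (
        id        INTEGER PRIMARY KEY,
        optype    TEXT,      -- 'set', 'update', 'delete'
        data      TEXT,      -- the JSON payload, stored opaquely as text
        parent_id INTEGER REFERENCES sync_ops(id)  -- "linked list" pointer
    )
""")

# Every document goes into the same opaque payload column, so the table
# never needs an ALTER when the JSON structure changes.
payload = json.dumps({"user": "example"})
conn.execute(
    "INSERT INTO sync_ops (optype, data, parent_id) VALUES (?, ?, ?)",
    ("set", payload, None),
)
optype, data = conn.execute("SELECT optype, data FROM sync_ops").fetchone()
```

Note that the database can no longer index or query inside the payload, which is exactly the "might as well just use Mongo" trade-off described above.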
I have got some data that seems to be JSON, but with data types and string lengths embedded in it.
Data=a2:{i:0;a:2:{s:4:"user";s:7:"example";s:5:"email";s:19:"example#example.com";}i:1;a:2:{s:4:"user";s:8:"example2";s:5:"email";s:20:"example2#example.com";}}
The only connection it has to JSON is that it uses characters like { and :.
This looks like a serialized string: http://en.wikipedia.org/wiki/Serialization
Depending on what is going to happen with it, where it came from, etc., you can find out what it was or what it needs to be. It could be a simple object that your language's "serialize" function was called on, then turned into a literal string to feed to some database.
See for an example this PHP function: http://www.php.net/manual/en/function.serialize.php
What it could be is that you have a PHP app that reads serialized data from a database, and another app (in Java, say) is trying to (pre-?)fill this database with some object. Java doesn't know how to serialize for PHP, but it could contain a piece of text copy/pasted in by a developer.
I'm not saying that it is exactly that, but since it kinda looks like PHP-serialized code while the assignment doesn't, it might be some combination of the two. Impossible to say without more info.
This is not JSON; JSON has no variations. This appears to be a serialized string. It is pretty close to how PHP serializes, except the start should be a:2 instead of Data=a2. It could be serialized by some other language, though. If you know the source language, it should provide some method for deserializing this back into that language's data structures.
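For contrast, here is roughly what the same two records would look like as actual JSON, generated with Python (the user/email values are illustrative): no type tags, no string lengths, just values.

```python
import json

# The same user/email pairs expressed as real JSON for comparison.
records = [
    {"user": "example", "email": "example@example.com"},
    {"user": "example2", "email": "example2@example.com"},
]
as_json = json.dumps(records)
# as_json contains no s:4/s:7-style length prefixes -- that is the
# giveaway that the original data is PHP-style serialization, not JSON.
```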
I am trying to convert a number of .json files to .csv files using Python 2.7.
Is there any general way to convert a JSON file to a CSV?
PS: I saw various similar solutions on stackoverflow.com, but they were very specific to one JSON tree and don't work if the tree structure changes. I am new to this site; sorry for my bad English and for reposting. Ty.
The basic thing to understand is that JSON and CSV files are extremely different on a very fundamental level.
A CSV file is just a series of values separated by commas. This is useful for data like that in relational databases, where exactly the same fields are repeated for a large number of records.
A JSON file has structure to it, and there is no straightforward way to represent an arbitrary tree structure in a CSV. You can have various kinds of foreign-key relationships, but when it comes right down to it, trees don't make any sense in a CSV file.
My advice to you would be to reconsider using a CSV, or to post your specific example, because for the vast majority of cases there is no sensible way to convert a JSON document into a CSV.
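For the one case where the conversion is sensible (every record flat, same fields throughout), it is mechanical with the standard library. A minimal sketch, Python 3 shown, with made-up sample documents; anything nested has no such direct mapping:

```python
import csv
import io
import json

# This only works because every record is flat and shares the same fields.
docs = json.loads('[{"user": "a", "age": 1}, {"user": "b", "age": 2}]')

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=sorted(docs[0]))
writer.writeheader()
writer.writerows(docs)
csv_text = buf.getvalue()
```

The moment one document gains a nested object or an extra field, `DictWriter` raises an error, which is the fundamental mismatch described above showing up in code.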