I've been working with AngularJS and JSON for a while now, and I am currently writing a simple todo app that uses the following array to store its todos:
$scope.todos = [
    // todo 1
    {
        title: 'Personal',
        status: 'todo',
        // categories for todo 1
        categories: [
            {
                title: 'Shopping',
                status: 'doing',
                // items for category 1, todo 1
                items: [
                    {
                        title: 'Buy bacon',
                        status: 'complete'
                    },
                    {
                        title: 'Buy tuna',
                        status: 'doing'
                    }
                ] // /items
            }
        ] // /categories
    }
]; // /todos
So far, so good. Now what I am not sure about is how to actually store this data permanently. If I use my application to add or modify a todo, it's all nice and good until I close the browser window, and then it's all back to the default values (obviously).
Until now, I have always used MySQL databases to store relational data. But I was wondering whether there is a better way to store this JSON data?
I was thinking of creating a simple PHP page which saves the whole array to a text file. But that would mean rewriting the whole file every time I make even the tiniest change to the data.
I've heard there are databases that allow you to store this type of data, but I don't know where to start. Any pointers would be much appreciated.
Nothing keeps you from saving this in a relational database like MySQL: you could have entities like a Todo, a Category, and an Item, then serialize them into JSON and serve them RESTfully.
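A rough sketch of that approach (table and column names are my assumptions, not anything from the question), using the mysql2 driver to join the three tables and re-nest the rows into the JSON shape the AngularJS app already uses:
const mysql = require('mysql2/promise');

// Placeholder connection details
const pool = mysql.createPool({ host: 'localhost', user: 'app', database: 'todoapp' });

async function loadTodos() {
  const [rows] = await pool.query(`
    SELECT t.id AS tid, t.title AS ttitle, t.status AS tstatus,
           c.id AS cid, c.title AS ctitle, c.status AS cstatus,
           i.title AS ititle, i.status AS istatus
      FROM todo t
      LEFT JOIN category c ON c.todo_id = t.id
      LEFT JOIN item i ON i.category_id = c.id
     ORDER BY t.id, c.id`);

  // Re-nest the flat join result: one entry per todo, with categories
  // and items folded back into arrays
  const todos = new Map();
  for (const r of rows) {
    if (!todos.has(r.tid)) {
      todos.set(r.tid, { title: r.ttitle, status: r.tstatus, categories: new Map() });
    }
    const todo = todos.get(r.tid);
    if (r.cid !== null && !todo.categories.has(r.cid)) {
      todo.categories.set(r.cid, { title: r.ctitle, status: r.cstatus, items: [] });
    }
    if (r.ititle !== null) {
      todo.categories.get(r.cid).items.push({ title: r.ititle, status: r.istatus });
    }
  }
  return [...todos.values()].map(t => ({
    title: t.title,
    status: t.status,
    categories: [...t.categories.values()]
  }));
}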
I think what you are looking for is a NoSQL database. They can store JSON data natively, and they store chunks of data instead of just the rows of data that traditional relational databases use.
Two popular NoSQL databases are
MongoDB
RethinkDB
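For a feel of how little glue that takes, here is a minimal sketch (database and collection names are placeholders) using the official Node.js MongoDB driver: the nested todo goes in as one document, and a single nested item can be updated in place rather than rewriting a whole file:
const { MongoClient } = require('mongodb');

async function demo() {
  const client = await MongoClient.connect('mongodb://localhost:27017');
  const todos = client.db('todoapp').collection('todos');

  // Store a nested todo exactly as it appears in $scope.todos
  await todos.insertOne({
    title: 'Personal',
    status: 'todo',
    categories: [
      { title: 'Shopping', status: 'doing',
        items: [{ title: 'Buy bacon', status: 'complete' }] }
    ]
  });

  // Update one nested item without touching the rest of the document
  await todos.updateOne(
    { title: 'Personal', 'categories.title': 'Shopping' },
    { $push: { 'categories.$.items': { title: 'Buy tuna', status: 'doing' } } }
  );

  await client.close();
}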
I would suggest going with a framework like Restangular to define your relations; you will then be able to use all kinds of NoSQL databases that expose a RESTful JSON API, such as CouchDB or MongoDB.
It uses promises, which is nice, future-proof, and modern, and it supports all the HTTP methods you might need. It has a lot more features than that, though; take a look at the repo's readme.
There is also a demo which uses MongoLab, a MongoDB-flavored cloud service.
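To give a flavor of the API, here is a bare-bones sketch (the base URL is a placeholder, and the endpoints are assumed, not taken from the demo):
angular.module('todoApp', ['restangular'])
  .config(function (RestangularProvider) {
    RestangularProvider.setBaseUrl('https://api.example.com'); // placeholder
  })
  .controller('TodoCtrl', function ($scope, Restangular) {
    var Todos = Restangular.all('todos');

    // GET /todos -- the promise resolves with Restangular-enhanced elements
    Todos.getList().then(function (todos) {
      $scope.todos = todos;
    });

    // POST /todos
    $scope.add = function (todo) {
      Todos.post(todo);
    };

    // PUT /todos/:id -- works on any element that came from getList()
    $scope.save = function (todo) {
      todo.put();
    };
  });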
Hope it helps.
Related
I want to store a large document in MongoDB. These are the two ways I will interact with the document:
I do frequent reads of that data and need to get a part of that data using aggregations
When I need to write to the document, I will be building it from scratch again, i.e. remove the document that exists and insert a new one.
Here is what a sample document looks like:
{
    "objects_1": [
        {}
    ],
    "objects_2": [
        {}
    ],
    "objects_3": [
        {}
    ],
    "policy_1": [
        {}
    ],
    "policy_2": [
        {}
    ],
    "policy_3": [
        {}
    ]
}
Here is how I want to access that data:
{
    "objects_1": [
        {}
    ]
}
If I were storing it in a conventional way, I would write a query like this:
db.getCollection('configuration').aggregate([
    { $match: { _id: "FAAAAAAAAAAAA" } },
    { $project: {
        "_id": 0,
        "objects_1": {
            $filter: {
                input: "$objects_1",
                as: "arrayItem",
                cond: { $eq: [ "$$arrayItem.name", "objectName" ] }
            }
        }
    }}
])
However, since the size of the document is >16 MB, we can't save it directly to MongoDB. The document can be up to 50 MB in size.
Solutions I thought of:
I thought of storing the JSON data in GridFS and reading it as per the docs here: https://docs.mongodb.com/manual/core/gridfs/. However, I would then need to read the entire file every time I want to look up a single object inside the large JSON blob, and I need to do such reads frequently, on multiple large documents, which would lead to high memory usage.
I thought of splitting the JSON into parts and storing each object in its own separate collection; when I need to fetch the entire document, I can reassemble the JSON.
How should I approach this problem? Is there something obvious that I am missing here?
I think your problem is that you're not using the right tools for the job, or not using the tools you have in the way they were meant to be used.
If you want to persist large objects as JSON then I'd argue that a database isn't a natural choice for that. I'd be looking at storage systems designed to do that well (say, if your solution is on Azure/AWS/GCP, see what specialist service they offer), or even just the file system if you run on a local server.
There's no reason why you can't have the JSON in a file and related data in a database - yes, there are issues with that, but the limitations of MongoDB won't be among them.
I do frequent reads of that data and need to get a part of that data using aggregations
If you are doing frequent reads, and only for part of the data, then forcing your system to always read the whole record means you are just penalizing yourself. One option is to store the bits that are highly read in a way that doesn't incur the performance penalty of the full read.
Storing objects as JSON means you can change your program and data without having to worry about what the database looks like; it's convenient. But it also has its limitations, and if you think you have hit them, then now might be the time to consider a re-architecture.
I thought of splitting the JSON into parts and storing each object in its own separate collection; when I need to fetch the entire document, I can reassemble the JSON
That's definitely worth looking into. You just need to make sure that the different parts are not stored in the same table / rows, otherwise there'll be no improvement. Think carefully about how you split the objects up, and about the key scenarios the objects deal with - e.g. you mention reads. Designing the sub-objects to align with key scenarios is the way to go.
For example, if you commonly show an object's summary in a list of object summaries (e.g. search results), then the summary text, object name, id are candidates for object data that you would split out.
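As a concrete illustration of that split, here is a sketch (collection, field, and id names are all my own assumptions, not the asker's) where each top-level array of the big document lives in its own sub-document: frequent reads touch only the part they need, and the rebuild-from-scratch write replaces the parts wholesale. It assumes a connected db handle from the Node.js MongoDB driver:
const parts = db.collection('configuration_parts');

// Write path: rebuild from scratch, as the question describes
async function rebuild(configId, doc) {
  await parts.deleteMany({ configId });
  const pieces = Object.entries(doc).map(([part, items]) => ({ configId, part, items }));
  await parts.insertMany(pieces);
}

// Frequent read path: fetch one part only, filtered server-side
async function readPart(configId, part, name) {
  return parts.aggregate([
    { $match: { configId, part } },
    { $project: {
        _id: 0,
        items: {
          $filter: {
            input: '$items',
            as: 'arrayItem',
            cond: { $eq: ['$$arrayItem.name', name] }
          }
        }
    }}
  ]).toArray();
}

// Reassemble the whole document only when it is actually needed
async function readAll(configId) {
  const rows = await parts.find({ configId }).toArray();
  return Object.fromEntries(rows.map(r => [r.part, r.items]));
}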
I need to insert a new schema into MongoDB. The only point of access to the database that I have is through an http.post request.
I am currently trying to do it with POSTMAN, but I cannot figure out the syntax I need to send in order to create a schema. This is what I have; if someone has ideas, I would be grateful for your input.
var mongoose = require('mongoose');
var Schema = mongoose.Schema;

// medicationSchema is defined elsewhere in my code
var TaskSchema = new mongoose.Schema({
    title: { type: String, required: true },
    instructions: { type: String, required: true },
    repeatWeekDay: { type: Number, required: false },
    medication: [medicationSchema],
    reading: [{
        readingType: { type: String, required: true },
        measureType: { type: String, required: true },
        measureValue: { type: String, required: true },
        measureUnits: { type: String, required: true },
        measureFormat: { type: String, required: true }
    }],
    alerts: [{ type: Schema.Types.ObjectId, ref: 'Alert' }],
    plans: [{ type: Schema.Types.ObjectId, ref: 'Plan' }],
    createdAt: { type: Date, default: Date.now },
    createdBy: { type: Schema.Types.ObjectId, ref: 'User' },
    updatedAt: { type: Date, default: Date.now },
    updatedBy: { type: Schema.Types.ObjectId, ref: 'User' }
});
I think you're laboring under a misapprehension.
mongoose is a JavaScript library/module that enforces structured interaction with the mongodb datastore using Schemas, Types, etc., where a Schema object simply provides the necessary programmatic intelligence to its validation function and is not a data format itself. In other words, internally a mongoose Schema is a function, not a data interchange format.
mongodb is a schema-less data storage engine that gleefully accepts any properly formatted JSON document for storage.
I can imagine very few reasons to store a data schema in mongodb.
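So rather than POSTing a schema, you POST a plain JSON document and let a server-side mongoose model validate it. A hypothetical Express endpoint (the route name and setup are mine, reusing the TaskSchema from the question) would look like:
const express = require('express');
const mongoose = require('mongoose');

// TaskSchema exactly as defined in the question; assumes
// mongoose.connect(...) was called during startup
const Task = mongoose.model('Task', TaskSchema);

const app = express();
app.use(express.json());

// POSTMAN sends a plain JSON body; the schema never travels over the wire
app.post('/tasks', async (req, res) => {
  try {
    const task = await Task.create(req.body); // mongoose validates here
    res.status(201).json(task);
  } catch (err) {
    res.status(400).json({ error: err.message });
  }
});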
JSON appears to be a nice way to represent a complex data structure in plain text. If we think of this complex data structure as analogous to an OOP object - an instance of a class - then is there a commonly used JSON-like format that represents the class itself (just the data part - forget methods)? Can JSON itself be used for this?
To put it another way, if JSON encodes name-value pairs, what should I use if I want to encode only the names?
The reason I want this is that I am designing a protocol to use with jQuery (to which I am a complete novice by the way). The client will communicate to the server the structure of the JSON object it wants back, and the server will return a JSON object of that structure with the values added.
The key point is that it is the client that is in full control of what data fields (name-value pairs) the server returns. It's a bit different from all the examples of jQuery that I've found so far on the web where the client makes a request (which usually includes a very limited set of parameters, if any) and the server makes the decision as to what fields to return in the JSON reply.
(Obviously, what the client asks for must be congruent with the server's data model; if the server has an array of widgets each with its own price, the client can't ask for an array of prices each with its own widget.)
This must be a common problem, and I don't want to reinvent the wheel. I want to adopt a solution that is already in common use across the web.
Edit
I just found JSON Schema. This is not what I am looking for. It contains way more than I need.
Edit
I'm looking more for a 'this is how it is usually done' answer, rather than a 'you could try…' answer. (I can invent dozens of possible answers myself.)
To encode only names within JSON, you could use a key/value pair where the key is either the class name or just a key named 'values' - with the value being an array of strings that are the names to be returned by the server. For example:
{ "class_name": ["name1", "name2", "name3"] }
The server can then detect the class name from the key used, and either return the supplied values for the names in the array if the class supports it, or ignore the request if it does not.
I'm looking more for a 'this is how it is usually done' answer
There is no single "correct" way to do what you want; many people have their own implementation. It depends on various factors: what do you want to do, where do you want to do it, and how efficiently do you want it done?
For simple structures I would prefer and suggest the answer given by @dbr9979.
For nested structures, you can have nested arrays. Something like:
{
    "nestedfield1": {
        "nestedfield11": ["nestedfield111", "nestedfield112"],
        "nestedfield12": ["nestedfield121", "nestedfield122"],
        "__SIMPLE_FIELDS__": ["simplefield13", "simplefield14"]
    }
}
The point is: if the key is __SIMPLE_FIELDS__, the value is an array of simple fields (strings, numbers, etc.); otherwise the key stands for a key in the object.
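A sketch of how the server might resolve such a template against its data (this function is my own illustration, not part of any standard):
function project(template, source) {
  const out = {};
  for (const [key, spec] of Object.entries(template)) {
    if (key === '__SIMPLE_FIELDS__') {
      // Simple fields are copied straight from the current level
      for (const name of spec) out[name] = source[name];
    } else if (Array.isArray(spec)) {
      // An array spec names the fields wanted from a nested object
      out[key] = {};
      for (const name of spec) out[key][name] = source[key][name];
    } else {
      // An object spec recurses one level deeper
      out[key] = project(spec, source[key]);
    }
  }
  return out;
}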
For something more complex, what I would suggest is that you have predefined structures that both the server and the client know of. This is particularly useful when you have to make multiple identical requests. Assign some unique number to each of them. Something like:
1 => <the structure above>
2 => ["simplefield1", "simplefield2" ..]
3 => etc .. etc
The server stores the above structures and their numbers in a database or something similar. And now, as may be obvious, the client sends across the id of the required structure, and the server responds in the appropriate fashion.
I think what you meant by this:
the client that is in full control of what data fields (name-value pairs) the server returns.
is like the difference between SELECT * FROM Bag and SELECT color, price FROM Bag in SQL. Am I interpreting you correctly?
You could query with:
{
    "resource": "Bag",
    "field_names": ["color", "price"]
}
which will return the response:
{
    "status": "success",
    "result": [
        { "color": "red", "price": 50 },
        { "color": "blue", "price": 45 }
    ]
}
Most likely, though, you may not actually need your request to be a JSON object; I've seen implementations where the field names are taken from the query string, like http://foo.com/bag?fields=color,price.
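That query-string variant is easy to sketch (the route, the in-memory data, and the response envelope are all placeholders of mine):
const express = require('express');
const app = express();

// In-memory stand-in for the Bag resource
const BAGS = [
  { id: 1, color: 'red', price: 50, material: 'leather' },
  { id: 2, color: 'blue', price: 45, material: 'canvas' }
];

// GET /bag?fields=color,price
app.get('/bag', (req, res) => {
  const fields = (req.query.fields || '').split(',').filter(Boolean);
  const pick = (obj) =>
    fields.length
      ? Object.fromEntries(fields.map((f) => [f, obj[f]]))
      : obj; // no ?fields= behaves like SELECT *
  res.json({ status: 'success', result: BAGS.map(pick) });
});

app.listen(3000);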
I was looking for Partial Response.
RESTful API Design: can your API give developers just the information they need? explains it all and gives examples from LinkedIn, Facebook, and Google. Google and Facebook both have similar approaches. Here's how Lie Ryan's example would look using Google's approach:
url?fields=status,result(color,price)
Since Google and Facebook are behind this, I would not be surprised to see this become a de facto standard.
In my case I am likely to run into a length limitation on the URL and so have to use POST instead, but this is an excellent starting point for me.
I'm new to redis, and would like to start storing an object that's currently JSON in redis instead. But I need some advice on the best data structure to use.
Basically, the object stores information about which user has looked at which page. Here's the JSON:
all_pageviews = {
    'unique_user_session_id_1': { 'page': 2, 'country': 'DE' },
    'unique_user_session_id_2': { 'page': 2, 'country': 'FR' }
    ...
};
I've been using a JSON object with the user IDs as keys because that way I can ensure the keys are unique, which is important for various reasons in my app.
I'm going to want to query it efficiently in the following ways:
By user: get all data related to unique_user_session_id_2.
By page: get all user objects related to page number 2.
Any thoughts on what would be the best redis structure to use? Ordering doesn't matter for the purposes of my app, but querying efficiently does.
Please let me know if I have explained myself badly, or if you need more information. Thanks!
To look up data in redis by multiple keys, you'll have to use multiple structures.
I would use a hash to map user_session_ids to the JSON strings, and a sorted set per page to map pages to user_session_ids.
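A minimal sketch of that layout with ioredis (the key names and the use of a timestamp as the score are my assumptions; any constant score would do, since ordering doesn't matter here):
const Redis = require('ioredis');
const redis = new Redis();

// Write both structures for every pageview
async function recordPageview(sessionId, data) {
  await redis.hset('pageviews', sessionId, JSON.stringify(data));
  await redis.zadd('page:' + data.page, Date.now(), sessionId);
}

// By user: one hash lookup
async function byUser(sessionId) {
  return JSON.parse(await redis.hget('pageviews', sessionId));
}

// By page: read the page's sorted set, then bulk-fetch from the hash
async function byPage(page) {
  const ids = await redis.zrange('page:' + page, 0, -1);
  if (ids.length === 0) return [];
  const raw = await redis.hmget('pageviews', ...ids);
  return raw.map(s => JSON.parse(s));
}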
I need to make a file that contains a hierarchical dataset. The dataset in question is a file-system listing (directory names, file names/sizes in each directory, sub-directories, ...).
My first instinct was to use JSON and flatten the hierarchy using paths so the parser doesn't have to recurse so much. As seen in the example below, each entry is a path ("/", "/child01", "/child01/gchild01", ...) and its files.
{
    "entries": [
        {
            "path": "/",
            "files": [
                { "name": "File1", "size": 1024 },
                { "name": "File2", "size": 1024 }
            ]
        },
        {
            "path": "/child01",
            "files": [
                { "name": "File1", "size": 1024 },
                { "name": "File2", "size": 1024 }
            ]
        },
        {
            "path": "/child01/gchild01",
            "files": [
                { "name": "File1", "size": 1024 },
                { "name": "File2", "size": 1024 }
            ]
        },
        {
            "path": "/child02",
            "files": [
                { "name": "File1", "size": 1024 },
                { "name": "File2", "size": 1024 }
            ]
        }
    ]
}
Then I thought that repeating the keys over and over ("name", "size") for each file kind of sucks, so I found this article about using JSON as if it were a database: http://peter.michaux.ca/articles/json-db-a-compressed-json-format
Using that technique, I'd have a JSON table like "Entry" with columns "Id", "ParentId", "EntryType", "Name", "FileSize", where "EntryType" would be 0 for a directory and 1 for a file.
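As I read the article's technique, the compressed layout would look something like this (the ids and the null-for-not-applicable convention are my own guesses):
{
    "Entry": [
        ["Id", "ParentId", "EntryType", "Name", "FileSize"],
        [1, null, 0, "/", null],
        [2, 1, 1, "File1", 1024],
        [3, 1, 1, "File2", 1024],
        [4, 1, 0, "child01", null],
        [5, 4, 1, "File1", 1024]
    ]
}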
So, at this point, I'm wondering if SQLite would be a better choice. I'm thinking that the file would be a LOT smaller than a JSON file, though the difference might be negligible if I use the compressed JSON-DB format from the article. Besides size, are there any other advantages that you can think of?
I think a JavaScript object as the data source, loaded as a file stream into the browser and then used by JavaScript logic in the browser, would consume the least time and perform well - but only up to a limited hierarchy size.
Also, not storing the hierarchy anywhere else and keeping it only as a JSON file badly limits your data source's use in your project to client-side technologies, or forces conversions to other technologies.
If you are building a pure JavaScript-based application (an HTML/JS/CSS-only app), then you could keep it as a JSON object alone and limit your hierarchy sizes; you could split bigger hierarchies into multiple files linking JSON objects.
If you will have server-side code like PHP in your project, then for manageability of code and for scaling, you should ideally store the data in an SQLite DB and create your JSON hierarchies at runtime, for limited levels, as Ajax loads from your page.
If this is the only data your application stores, then you can do something really simple: just store the data in an easy-to-parse/read text file like this:
File1:1024
File2:1024
child01
    File1:1024
    File2:1024
    gchild01
        File1:1024
        File2:1024
child02
    File1:1024
    File2:1024
Files get name:size lines and directories get just their name; indentation gives structure. For something slightly more standard but just as easy to read, use YAML.
http://www.yaml.org/
Both can benefit from decreased file size (but decreased user readability) by gzipping the file.
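Parsing the plain-text variant back is only a dozen lines; here is a quick sketch (the node shape {name, files, children} and the indentation convention are my choices):
function parseListing(text) {
  const root = { name: '/', files: [], children: [] };
  const stack = [{ depth: -1, node: root }];
  for (const line of text.split('\n')) {
    if (!line.trim()) continue;
    const depth = line.match(/^ */)[0].length; // indentation = tree depth
    while (stack[stack.length - 1].depth >= depth) stack.pop();
    const parent = stack[stack.length - 1].node;
    const body = line.trim();
    if (body.includes(':')) {
      // "File1:1024" -> a file entry under the current directory
      const [name, size] = body.split(':');
      parent.files.push({ name: name, size: Number(size) });
    } else {
      // a bare name -> a new sub-directory
      const dir = { name: body, files: [], children: [] };
      parent.children.push(dir);
      stack.push({ depth: depth, node: dir });
    }
  }
  return root;
}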
And if you have more data to store, then use SQLite. SQLite is great.
Don't use JSON for data persistence. It's wasteful.