Store multiple authors into a Couchbase database - couchbase

I am a newbie to Couchbase Server. What I am looking for is to store 10 author names in Couchbase, one after another. Could someone please tell me whether the structure should be a single "author" document with multiple values,
{ id : 1, name : "Author 1" }, { id : 2, name : "Author 2" }
or whether I should store Author 1 in one document and Author 2 in another document.
If the latter, how can I increment the id automatically before the "insert" command?

You can store all authors in a single document:
{
  doctype: "Authors",
  AuthorNames: [
    {
      id: 1,
      Name: "author1"
    },
    {
      id: 2,
      Name: "author2"
    }
    ... and so on
  ]
}
If you want the ID to increase, one option is to enter one author name at a time, each in a new document, but the ID will be randomly generated and will not be in incremental order.

In Couchbase, think about how your application will use the data more than about how you want to store it. For example, will your application need to get all 10 authors all of the time? If so, then one document might be worthwhile. Perhaps your application only ever needs to read/write one of the authors at a time. Then you might want to put each in its own document, but with an object key pattern that lets you get the object really fast. Objects that are used often are kept in the managed cache; other objects that are not used often may fall out of the managed cache...and that is ok.
The other factor is your read-to-write ratio for this data.
So like I said, it depends on how your application will be reading and writing your data. Use that as the guide for how your data should be stored.
The single JSON document is pretty straightforward. The more advanced schema design, where each author is in its own document and you access them via object key, might be a bit more complicated, but ultimately faster and more scalable, depending on what I already pointed out. I will lay out an example schema and some possibilities.
For the authors, I might create each author JSON document with an object key like this:
authors::ID
Where ID is a value I keep in a special incrementer object that I will call authors::incrementer. Think of that object as a key-value pair holding only an integer, which happens to be the upper bound of the author IDs. Couchbase SDKs include a special function to increment just such an integer object. With this, my application can put together that object key very quickly. If I want to go after the 5th author, I do a read by object key for "authors::5". If I need to get 10, I do a parallelized bulk get for authors::1 through authors::10. If I want to get all the authors, I read the incrementer object to get that integer and then do a parallelized bulk get. This way I can get them in order, or in whatever order I feel like, and I am accessing them by object key, which is VERY fast in Couchbase.
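As a rough sketch of that incrementer pattern using the Couchbase Python SDK (2.x-era API; the connection string, bucket name, and document shape are assumptions for illustration, and method names differ in newer SDK versions):

from couchbase.bucket import Bucket

bucket = Bucket('couchbase://localhost/default')  # assumed local cluster and bucket

# Atomically bump the incrementer object; 'initial' seeds it on first use.
rv = bucket.counter('authors::incrementer', delta=1, initial=1)
new_id = rv.value

# Store the new author under a predictable object key.
bucket.upsert('authors::%d' % new_id,
              {'doctype': 'author', 'name': 'Author %d' % new_id})

# Fetch several authors back in one parallelized multi-get.
results = bucket.get_multi(['authors::%d' % i for i in range(1, new_id + 1)])
for key, result in results.items():
    print(key, result.value)

The same flow applies in any of the SDKs: increment the counter, build the key, then get or upsert by that key.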
All this being said, I could use a view to query this data, or the upcoming "SQL for Documents" in Couchbase 4.0, or I can mix and match between querying and getting objects by their key. Key access will ALWAYS be faster. It is the difference between asking a question and then going to get the object, versus simply knowing the answer and getting it immediately.

Related

How can I pass the document sub-collections along with the document JSON map in Flutter using Firebase?

I'm trying to get all the documents in the "businesses" collection from Firebase together with their sub-collections.
The problem is that when I query Firebase like this:
Stream<List<Business>> getBusinesses() {
  return _db.collection('businesses').snapshots().map((snapshot) => snapshot
      .docs
      .map((document) => Business.fromJson(document.data()))
      .toList());
}
the sub-collections aren't included in the JSON object document.data(), so in my code the Business object isn't fully populated: the fields Appointments, ServiceProviders, and Services come back empty instead of containing the data from the sub-collections.
So, hopefully I've explained the problem well. My question is: how can I fetch all of the document data, including its sub-collections, and parse it into a Business object?
Thanks.
What seems to be "the problem" is actually the point of Firestore: keeping documents shallow so you only fetch the data you need. It's then up to you to structure your data the way it will likely be used in the future.
Mind you, subcollections are not fields.
What you can do here is add a query that fetches the documents in the subcollections (Appointments, ServiceProviders, Services) for each business, using the business document ID for the query.
It would typically look something like:
_db.collection('businesses').document(documentId).collection('Appointments')
Mind you, this is potentially too much data. It might be better to fetch the docs in those subcollections only when needed/requested by the user.
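If it helps to see the same two-step fetch outside Flutter, here is a minimal sketch using the google-cloud-firestore Python client; the collection and subcollection names come from the question, while the variable names and the plain dict (instead of a Business class) are made up for illustration:

from google.cloud import firestore

db = firestore.Client()

businesses = []
for doc in db.collection('businesses').stream():
    data = doc.to_dict()
    # Subcollections are not part of to_dict(); fetch each one explicitly.
    for sub in ('Appointments', 'ServiceProviders', 'Services'):
        data[sub] = [s.to_dict() for s in doc.reference.collection(sub).stream()]
    businesses.append(data)

The Dart version follows the same shape: stream the businesses, then query each document's subcollections via its reference or ID before building the Business object.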

MySQL: Embedded JSON vs table

I'm designing the database schema for a video production project management app and struggling with how to persist some embedded, but not repeatable data. In the few CS courses I took, part of normalizing a relational database was identifying repeatable blocks and encapsulating them into their own table. What if I have a block of embedded/nested data that I know is likely to be unique to the record?
Example: A video record has many shoot_locations. Those locations are most likely never to be repeated. shoot_locations can also contain multiple shoot_times. Representing this in JSON might look like this:
{
  video: {
    shoot_locations: [
      {
        name: "Bob's Pony Shack",
        address: "99 Horseman Street, Anywhere, US 12345",
        shoot_times: {
          shoot_at: "2015-08-15 21:00:00",
          ...
        }
      },
      {
        name: "Jerry's Tackle",
        address: "15 Pike Place, Anywhere, US 12345",
        shoot_times: {
          shoot_at: "2015-08-16 21:00:00",
          ...
        }
      }
    ],
    ...
  }
}
Options...
store the shoot_locations in a JSON field (available in MySQL 5.7.8?)
create a separate table for the data.
something else?
I get the sense I should split embedded data into its own tables and save JSON for non-crucial metadata.
Summary
What's the best option to store non-repeating embedded data?
ONE of the reasons for normalizing a database is to reduce redundancy (your "repeatable blocks").
ANOTHER reason is to allow "backwards" querying. If you wanted to know which video was shot at "15 Pike Place", your JSON solution will fail (you'd have to resort to reading rows sequentially and decoding the JSON, which defeats the purpose of an RDBMS); see the sketch after the rules of thumb below.
Good rules of thumb:
Structured data - put in tables and columns
Data that might be part of query conditions - put in tables and columns
Unstructured data you know you'll never query by - put into BLOBs, XML or JSON fields
If in doubt, use tables and columns. You might have to spend some extra time initially, but you will never regret it. People have regretted their choice for JSON fields (or XML, for that matter) again and again and again. Did I mention "again"?
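To make the "backwards" query concrete, here is a hedged sketch of the normalized layout; the table and column names are invented, and Python's built-in sqlite3 stands in for MySQL only so the snippet is self-contained (the MySQL DDL would be analogous):

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE videos (id INTEGER PRIMARY KEY, title TEXT);
    CREATE TABLE shoot_locations (
        id INTEGER PRIMARY KEY,
        video_id INTEGER REFERENCES videos(id),
        name TEXT,
        address TEXT
    );
    CREATE TABLE shoot_times (
        id INTEGER PRIMARY KEY,
        shoot_location_id INTEGER REFERENCES shoot_locations(id),
        shoot_at TEXT
    );
""")
conn.execute("INSERT INTO videos VALUES (1, 'Demo video')")
conn.execute("INSERT INTO shoot_locations VALUES (?, ?, ?, ?)",
             (1, 1, "Jerry's Tackle", "15 Pike Place, Anywhere, US 12345"))
conn.execute("INSERT INTO shoot_times VALUES (1, 1, '2015-08-16 21:00:00')")

# The "backwards" query: which video was shot at 15 Pike Place?
rows = conn.execute("""
    SELECT v.title
    FROM videos v
    JOIN shoot_locations l ON l.video_id = v.id
    WHERE l.address LIKE '15 Pike Place%'
""").fetchall()
print(rows)  # [('Demo video',)]

With shoot_locations serialized into a JSON column, that last query would have to scan and decode every row instead.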

MongoDB structure and JSON, are they not the same?

Let's say I have a JSON object:
Object = {
  param1: '',
  param2: '',
  param3: '',
  param4: {
    paramA: '',
    paramB: '',
    paramC: '',
    paramD: [AnotherJsonObject1, AnotherJsonObject2]
  }
}
Will my MongoDB structure not be similar? Would this type of structuring make the data (or some of it) less searchable?
Edit 1:
By less searchable I mean: if the top-level entities have sub-entities which themselves have sub-entities and so on, will I be able to reach the lowest-level entities with the same efficiency as those at the top level?
I currently depend heavily on JSON files in my website. Those files need not be indexed or searchable, BUT they would fit in the DB logically.
For example: I have a director, the director has the list of movies he created, every movie in this list has itself a list of actors who play in it, and every actor has a bio.
The bio in this example doesn't need to be indexed. I can just include a link to the file that contains the actor's bio, but I am wondering whether I can just add this to the DB, because that way it will all fit in logically, or whether the 'unnecessary' data will harm the DB's ability to perform efficiently.
MongoDB stores documents in BSON format, which will appear similar to a JSON structure.
The structure you explained seems to be a proper use case of nested documents.
You can query nested fields using the . operator
Would this type of structuring make the data (or some of it) less searchable?
It depends on your nested data structure and the kind of queries you run on those fields. There may be some limitations, and queries may be a bit more complicated, in the case of nested docs. However, as far as the searchability of your nested docs is concerned, it depends entirely on your use case.
For example:
{ director: { movies: [ { movieName: "movie1", actors: [ { firstName: "will", lastName: "smith" }, { firstName: "bruce", lastName: "willis" } ] } ] } }
In the above scenario, searching for a director where any of the directed movies has an actor with firstName "will" and lastName "smith" may turn out to be a bit more complex.
A simple query like
{ "director.movies.actors.firstName": "will", "director.movies.actors.lastName": "smith" }
may return a false positive.
The doc:
{ director: { movies: [ { movieName: "movie1", actors: [ { firstName: "will", lastName: "willis" }, { firstName: "bruce", lastName: "smith" } ] } ] } }
will also turn out to be a positive match.
Also, negation queries like firstName != "bruce" will also return both documents.
You may want to go through the MongoDB docs for the same.
For the first case, you can refer to $elemMatch.
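As a sketch of that difference with pymongo (the database and collection names are hypothetical; the document shape follows the example above):

from pymongo import MongoClient

directors = MongoClient().test.directors  # hypothetical database/collection

# Naive dot-notation query: the two conditions can be satisfied by
# *different* elements of the actors array, so the swapped-names
# document above also matches.
naive = directors.find({
    "director.movies.actors.firstName": "will",
    "director.movies.actors.lastName": "smith",
})

# $elemMatch requires both conditions to hold on the same actors element,
# so only a genuine "will smith" entry matches.
strict = directors.find({
    "director.movies.actors": {
        "$elemMatch": {"firstName": "will", "lastName": "smith"}
    }
})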

Edit JSON object by Lua script in Redis

I want to edit my JSON object before it comes back from the Redis server.
In my Redis server I have 4 keys:
user:1 {"Id":"1","Name":"Gholzam","Posts":"[]"}
user:1:post:1 {"PostId":"1","Content":"Test content"}
user:1:post:2 {"PostId":"2","Content":"Test content"}
user:1:post:3 {"PostId":"3","Content":"Test content"}
I want to get this content back with a Lua script. How?
{"Id":"1","Name":"Gholzam","Posts":"[{"PostId":"1","Content":"Test
content"},{"PostId":"1","Content":"Test
content"},{"PostId":"1","Content":"Test content"}]}
The choice of client here is largely irrelevant; the important thing to do is figure out the data storage. You say you have 4 keys - but it is not obvious to me how we know, given user:1, what the posts are. Common approaches there include:
have a set called user:1:posts (or something similar) which contains either the full keys (user:1:post:1, etc) or the relative keys (1, etc)
have a hash called user:1:posts (or something similar) which contains the posts keyed by their id
I'd be tempted to use the latter approach, as it is more direct - so I might have:
user:1, a string with contents {"Id":"1","Name":"Gholzam","Posts":"[]"}
user:1:posts, a hash with 3 pairs:
key 1 with value {"PostId":"1","Content":"Test content"}
key 2 with value {"PostId":"2","Content":"Test content"}
key 3 with value {"PostId":"3","Content":"Test content"}
Then you can use hgetall or hvals to get the posts easily.
The second part is how to manipulate json at the server. The good news here is that redis provides access to json tools inside lua via cjson.
I am an expert in neither cjson nor lua; however, frankly my advice is: don't do this. IMO, redis works best if you let it focus on what it is great at: storage and retrieval. You probably can bend it to your whim, but I would be very tempted to do any json manipulation outside of redis.
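Following that advice, here is a minimal sketch that assembles the combined JSON on the client instead of inside a Lua script, using redis-py and the hash layout suggested above (the key names come from the question):

import json
import redis

r = redis.Redis()

# Store the user document plus a user:1:posts hash, as described above.
r.set('user:1', json.dumps({"Id": "1", "Name": "Gholzam", "Posts": []}))
for post_id in ("1", "2", "3"):
    r.hset('user:1:posts', post_id,
           json.dumps({"PostId": post_id, "Content": "Test content"}))

# Assemble the combined document on the client side.
user = json.loads(r.get('user:1'))
user["Posts"] = [json.loads(p) for p in r.hvals('user:1:posts')]
print(json.dumps(user))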

If JSON represents the 'object', what represents the 'class'?

JSON appears to be a nice way to represent a complex data structure in plain text. If we think of this complex data structure as analogous to an OOP object - an instance of a class - then is there a commonly used JSON-like format that represents the class itself (just the data part - forget methods)? Can JSON itself be used for this?
To put it another way, if JSON encodes name-value pairs, what should I use if I want to encode only the names?
The reason I want this is that I am designing a protocol to use with jQuery (to which I am a complete novice by the way). The client will communicate to the server the structure of the JSON object it wants back, and the server will return a JSON object of that structure with the values added.
The key point is that it is the client that is in full control of what data fields (name-value pairs) the server returns. It's a bit different from all the examples of jQuery that I've found so far on the web where the client makes a request (which usually includes a very limited set of parameters, if any) and the server makes the decision as to what fields to return in the JSON reply.
(Obviously, what the client asks for must be congruent with the server's data model; if the server has an array of widgets each with its own price, the client can't ask for an array of prices each with its own widget.)
This must be a common problem, and I don't want to reinvent the wheel. I want to adopt a solution that is already in common use across the web.
Edit
I just found JSON Schema. This is not what I am looking for. It contains way more than I need.
Edit
I'm looking more for a 'this is how it is usually done' answer, rather than a 'you could try…' answer. (I can invent dozens of possible answers myself.)
To encode only names within JSON, you could use a key/value pair where the key is either the class name or just a key named 'values' - with the value being an array of strings that are the names to be returned by the server. For example:
{ 'class_name' : [ "name1", "name2", "name3" ] }
The server can then either detect the class name from the key used and return the supplied values for the names in the array if the class supports it, or ignore the request if it does not.
I'm looking more for a 'this is how it is usually done' answer
There is no single "correct" way to do what you want. Many people have their own implementations. It depends on various factors -- what you want to do, where you want to do it, and how efficiently you want it done.
For simple structures I would prefer and suggest the answer given by @dbr9979.
For nested structures, you can have nested arrays. Something like:
{
  "nestedfield1": {
    "nestedfield11": ["nestedfield111", "nestedfield112"],
    "nestedfield12": ["nestedfield121", "nestedfield122"],
    "__SIMPLE_FIELDS__": ["simplefield13", "simplefield14"]
  }
}
The point is, if the key is __SIMPLE_FIELDS__, the value is an array of simple fields (strings, numbers, etc.); otherwise the key stands for a key in the object.
For something more complex, what I would suggest is that you have predefined structures that both the server and the client know of. This is particularly useful when you have to make multiple identical requests. Assign some unique number to each of them. Something like:
1 => <the structure above>
2 => ["simplefield1", "simplefield2" ..]
3 => etc .. etc
The server stores the above structures and the relevant numbers in the database or somewhere similar. And now, as may be obvious by now, the client sends across the id of the required structure, and the server responds in the appropriate fashion.
I think what you meant by this:
the client that is in full control of what data fields (name-value pairs) the server returns.
is like the difference between SELECT * FROM Bags and SELECT color, price FROM Bag in SQL. Am I interpreting you correctly?
You could query with:
{
  'resource': 'Bag',
  'field_names': ['color', 'price']
}
which will return the response:
{
  'status': 'success',
  'result': [
    {'color': 'red', 'price': 50},
    {'color': 'blue', 'price': 45}
  ]
}
Most likely, though, you may not actually need your request to be a JSON object; I've seen implementations where the field names are taken from the query string, like http://foo.com/bag?fields=color,price
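However the field list arrives (a JSON body or a fields= query string), the server-side filtering itself can stay small. A rough sketch in Python, with every name invented for illustration:

def partial_response(records, field_names):
    # Return only the requested keys from each record.
    return {
        "status": "success",
        "result": [{k: rec[k] for k in field_names if k in rec} for rec in records],
    }

bags = [
    {"color": "red", "price": 50, "weight": 2.0},
    {"color": "blue", "price": 45, "weight": 1.5},
]
print(partial_response(bags, ["color", "price"]))
# {'status': 'success', 'result': [{'color': 'red', 'price': 50}, {'color': 'blue', 'price': 45}]}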
I was looking for Partial Response.
RESTful API Design: can your API give developers just the information they need? explains it all and gives examples from LinkedIn, Facebook, and Google. Google and Facebook both have similar approaches. Here's how Lie Ryan's example would look using Google's approach:
url?fields=status,result(color,price)
Since Google and Facebook are behind this, I would not be surprised to see this become a de facto standard.
In my case I am likely to run into a length limitation on the URL and so have to use POST instead, but this is an excellent starting point for me.