I am building a music application. In my database I have an "Artist" table, related to an "Album" table, related to a "Track" table.
Each user of my application can "like" (thumbs up) or "dislike" (thumbs down) an artist/album/track. Thus, I have to create many-to-many relationships between users and the artists/albums/tracks with a "vote" attribute which can be set to 1 or -1.
My question is: would it be more appropriate to create three "like/dislike" tables (user_artist_like, user_album_like, user_track_like), or only one table "user_like" with three columns (artist_id, album_id, track_id)? Note that I will often have to fetch all the likes of a user.
The first option is better, because putting the data in one table implies that data points in the same row are related, which is not true here. Multiple tables also let you manage the data more easily without getting confused by the rows. For instance, how would you structure an INSERT INTO statement for the single table? Two of the three ID columns would have to be NULL in every row, so it couldn't use the space effectively.
Just in case anyone else reads this: you can set up a single table with a type column to handle this.
{
  user_id: 7,
  object_id: 12,
  type: 'album',
  is_liked: 1
}
{
  user_id: 7,
  object_id: 8,
  type: 'track',
  is_liked: 0 // 0 means disliked
}
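To cover the "fetch all the likes of a user" case with that single table, here is a minimal sketch (the table name user_like and the use of the mysql2 client are assumptions on my part, not taken from the question):

const mysql = require('mysql2/promise');
const pool = mysql.createPool({ host: 'localhost', user: 'app', database: 'music' });

// Hypothetical schema: user_like(user_id, object_id, type, is_liked)
async function getLikesForUser(userId) {
  // One query returns every liked/disliked artist, album and track for this user
  const [rows] = await pool.query(
    'SELECT object_id, type, is_liked FROM user_like WHERE user_id = ?',
    [userId]
  );
  return rows;
}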
I have one model called "Author". It has two attributes.
firstName: {
  type: 'string',
  unique: true
},
lastName: {
  type: 'string',
  unique: true
}
So I want to insert an array of authors (firstName, lastName). So I used the create method to store the data.
create: function (req, res) {
  let authors = req.body.author_name;
  let ids = [], i = 0;
  Author.create(authors, (err, author) => {
    if (err) res.json('err');
    else res.json(author);
  });
}
When I pass author_name = [{'abc1','xyz1'},{'abc2','xyz2'}]
it works fine, but after that, when I pass
author_name = [{'abc1','xyz1'},{'abc3','xyz3'}]
it won't work, because I specified that both first and last name should be unique. The first element of the array is already in the database, but the second one is not, so in the end nothing gets stored. I want that second element to be stored anyway: if some of the data in the array is already in the DB it should be ignored, and the rest should be stored.
So how do I do that in a better way (without looping, if possible)?
I think you need to execute a find query for the input array; it returns the records that are already in the database. Then exclude them from your input data and insert only the remaining records, something like the sketch below.
I hope it helps you.
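A rough sketch of that approach (assuming Sails v1 / Waterline syntax, and that author_name really contains objects like { firstName: 'abc1', lastName: 'xyz1' }):

create: async function (req, res) {
  const authors = req.body.author_name;

  // 1. Find the records whose first names already exist
  const existing = await Author.find({
    firstName: { in: authors.map(a => a.firstName) }
  });
  const known = new Set(existing.map(a => a.firstName + '|' + a.lastName));

  // 2. Keep only the authors that are not in the database yet
  //    (note: re-using just one of the two names would still violate the unique constraints)
  const fresh = authors.filter(a => !known.has(a.firstName + '|' + a.lastName));

  // 3. Insert the remaining ones in a single call
  const created = fresh.length ? await Author.createEach(fresh).fetch() : [];
  return res.json(created);
}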
I think you are misunderstanding how your database is functioning. No matter which db technology you are using, an Author object is represented by a single row or record. When you make both firstName and lastName unique fields, that means that first names can never be repeated, and neither can last names. So, for example, you could not have a "Bob Smith" and a "Sara Smith" because that would be repeating a last name.
In the records you are entering in your example, you are not providing a first or last name. It creates a single record with both firstName and lastName empty. Since these fields are unique, you are then not allowed to create any other authors with either first or last names empty.
I think you may need to think about whether you want these fields to be unique, and if storing it via an Author model is really what you want - I had a bit of trouble understanding what you mean by "storing the second argument" but "ignoring the first". Maybe you actually want to just store a list of names, not Author objects?
I have a database with 2 tables that look like this:
content
id    name
1     Cool Stuff
2     Even Better stuff

contentFields
id    content    label     value
5     1          Rating    Spectacular
6     1          Info      Top Notch
7     2          Rating    Poor
As you can see, the content column of the contentFields table references the id column of the content table.
I want to write a query that grabs all of the content and nests the applicable content fields under the right content row, so that it comes out like this:
[
  {
    id: 1,
    name: 'Cool Stuff',
    contentFields: [
      {label: 'Rating', value: 'Spectacular'},
      {label: 'Info', value: 'Top Notch'}
    ]
  },
  {
    id: 2,
    name: 'Even Better Stuff',
    contentFields: [
      {label: 'Rating', value: 'Poor'}
    ]
  }
]
I tried an inner join like this:
SELECT * FROM content INNER JOIN contentFields ON content.id = contentFields.content GROUP BY content.id
But that didn't do it.
*Note: I know that I could do this with 2 separate queries, but I want to find out how to do it in one, as that will dramatically improve performance.
What you are trying to achieve is not directly possible with SQL only.
As you have already stated yourself, you are looking for a table within a table. But MySQL does not know about such concepts, and as far as I know, other databases also don't. A result set is always like a table; every row of the result set has the same structure.
So either you leave your GROUP BY content.id in place; then, for every row in the result set, MySQL will pick an arbitrary row from the joined table that matches that row's content.id (you can't even rely on it being the same row every time).
Or you remove the GROUP BY; then you will get every row from the joined table, but that is not what you want either.
Since performance is an issue, I would probably choose the second option, add ORDER BY content.id, and generate the JSON myself: loop through the result set and begin a new JSON block every time content.id changes (see the sketch below).
Disclaimer: The following is pure speculation.
I don't know anything about node.js and how it transforms result sets into JSON. But I strongly assume that you can configure its behavior; otherwise, it actually would not be of any use in most cases. So there must be a method to tell it how it should group the rows from a result set.
If I am right, you would first have to tell node.js how to group the result set and then let it process the rows from the second option above (i.e. without the GROUP BY).
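To illustrate that grouping approach, here is a minimal Node.js sketch (the mysql2 client is an assumption on my part; the table and column names are taken from the question):

async function loadContent(pool) {
  // No GROUP BY: we get one row per contentField, ordered by content.id
  const [rows] = await pool.query(
    `SELECT c.id, c.name, f.label, f.value
     FROM content c
     INNER JOIN contentFields f ON c.id = f.content
     ORDER BY c.id`
  );

  const grouped = [];
  for (const row of rows) {
    let current = grouped[grouped.length - 1];
    // Begin a new object whenever content.id changes
    if (!current || current.id !== row.id) {
      current = { id: row.id, name: row.name, contentFields: [] };
      grouped.push(current);
    }
    current.contentFields.push({ label: row.label, value: row.value });
  }
  return grouped; // the nested shape shown in the question
}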
Imagine that I have on my DB one table called 'match' and I store:
id
round_id
score
start_date
end_date
When my REST API returns JSON on the /matches endpoint, must I return only the fields/columns that exist in the DB, or can I return some custom fields like this:
{id: 1, is_over: true, no_goals: false}
Also, the match table only has a relationship with the round table; the round table has a relationship with the season table, which has a relationship with the competition table.
In the /matches endpoint's JSON, can I return competition data directly? Something like this:
/matches:
{id: 1, is_over: true, no_goals: false, competition: { id: 2, name: 'foo',...}}
It's your API. You can do whatever you want with it!
When you work with a REST API, you work with data.
In this case, you still work with data; you just add new fields which are not in your database.
So it's possible, and it's OK to do so; you don't break the REST API model. One thing a lot of people do is inject new fields about the request, like a custom HTTP code or a custom message.
And for competition, you can embed it directly in the match JSON, the way you showed in your example.
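For instance, a minimal sketch of how the /matches handler could assemble such a response (the derived-field logic and the assumption that the related competition has already been loaded through round and season are illustrative, not part of your schema):

// match and competition are assumed to be already loaded
// (match -> round -> season -> competition, via whatever ORM/queries you use)
function toMatchResource(match, competition) {
  return {
    id: match.id,
    is_over: match.end_date != null && match.end_date <= new Date(), // derived field
    no_goals: match.score === '0-0',                                 // derived field
    competition: { id: competition.id, name: competition.name }      // nested related data
  };
}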
So all good!
I am asked to allow users to input multiple values in EVERY field, so the options are limitless.
For example, the columns are:
CompanyID
Company name
Website
Key_Markets
M&A_History
Highlights
Region
Comments
A scenario: a company can have multiple websites, key markets, regions, etc. How would I do this professionally? I am thinking of putting every column in a separate table.
Basically there are three ways to realize this.
1) Write multiple values into one column, separated somehow. This would be a very bad design and you would have to handle the splitting in your application - do not do that ;-)
2) Use one table with multiple groups to store the data. This would make sense for parameters but not really if you have different values for each customer. For example:
CompanyID
GroupID
Position
Value
Example:
108001, 'homepage', 1, 'www.mypage.com';
108001, 'homepage', 2, 'www.mysecondpage.com';
108001, 'homepage', 3, 'www.anotherpage.com';
108001, 'markets', 1, 'erp';
108001, 'markets', 2, 'software';
108001, 'region', 1, 'germany';
108001, 'region', 2, 'austria';
108001, 'region', 3, 'poland';
3) Use separate tables for each 1:n relation! This would be the best solution for your needs, I guess. It has the advantage that you can easily extend your schema and store more data in it, for example if you decide to store the number of users for each region or key market.
Another point: use n:m relations to avoid duplicate content in your database! For example, the key markets and regions should be stored in completely separate tables, and you store the IDs of the company and the key market in a cross table. That way you do not need to store the key markets as strings for each company!
You would need a database structure like:
table_master_companies
- record_id
- company_name
table_websites
- record_id
- company_id
- website_address
table_key_markets
- record_id
- company_id
- key_market
etc. You would then need to use joins to combine all the information into a single recordset, for example:
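A minimal sketch of such a join (MySQL syntax and the mysql2 client are assumptions on my part; the table and column names are the ones from the structure above):

async function loadCompany(pool, companyId) {
  // One row per company/website/key-market combination;
  // group the rows in application code if you need a single object per company
  const [rows] = await pool.query(
    `SELECT c.record_id, c.company_name, w.website_address, k.key_market
     FROM table_master_companies c
     LEFT JOIN table_websites w ON w.company_id = c.record_id
     LEFT JOIN table_key_markets k ON k.company_id = c.record_id
     WHERE c.record_id = ?`,
    [companyId]
  );
  return rows;
}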
I have a blog application. I need to make a MongoDB query (SQL is fine, I'll translate it), to get a specific post in the blog, and the immediate posts made before and after that post.
For instance, given this list of posts:
12/01/13 - Foo
15/01/13 - Bar
17/01/13 - Baz
27/01/13 - Taz
How do I write a query so that I get one of these, e.g. Bar, and the immediate siblings Foo and Baz?
I'd like to do this without making three different queries to the database, for performance reasons.
In my application I fetch a single post like this:
model.findOne({
  date: {
    $gte: new Date(2013, 0, 15),
    $lt: new Date(2013, 0, 15, 24)
  },
  slug: 'Bar'
}, function (err, result) {
  return { entry: result };
});
Here's one possibility (involving 2 queries: one to find the primary post, and a second to find the nearest docs):
Treat the data/posts as if they were a doubly linked list.
You'll need to store reference IDs as links to the "previous" and "next" posts in each post document (an array). This makes inserts a tiny bit more complex, but inserting a "new" blog post by date somewhere in the past seems unlikely.
Index the link field
Search for documents having the id of the primary document $in the link field
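A minimal sketch of that idea with Mongoose, using the callback style from the question (the model name Post and the field name link are assumptions):

// Each post document stores the _ids of its neighbours in a "link" array,
// e.g. Bar.link = [<_id of Foo>, <_id of Baz>]; index it with postSchema.index({ link: 1 })

Post.findOne({ slug: 'Bar' }, function (err, primary) {
  if (err || !primary) return console.error(err);
  // Second query: the siblings are the posts whose "link" array
  // contains the primary post's _id
  Post.find({ link: { $in: [primary._id] } }, function (err, siblings) {
    // primary is Bar; siblings contains Foo and Baz
  });
});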