I need to save a list of user ids who viewed a page, streamed a song and/or downloaded it. All I do with the list is add to it and display it. I don't really need to save more info than that, and I came up with two solutions. Which one is better, or is there an even better solution I missed?
The KISS solution - one table with the song id as the primary key and a text field for each of the three interactions above (view, download, stream), each holding a comma-separated list of user ids. Adding to it would just be a concatenation operation.
The "best practice" solution - Have 3 tables with the primary key the song id and a field of user id that did the interaction. Each row has one user id and I could add stuff like date and other stuff.
One thing that makes me lean towards option 2 is that it may be easier to check whether the user has already voted on a song.
tl;dr version - Is it better to use a text field to store arrays as comma-separated values, or to store each item of the array in a separate table row?
Definitely the 2nd:
You'll be able to scale your application as it grows
It will be less programming language dependent
You'll be able to make queries faster and cleaner
It will be less painful for any other programmer coding / debugging your application later
Additionally, I'd add a new table called "operations" with an ID for each operation, so you can add more operations later if you need to, storing the operation ID on each row instead of a string ("view", "download", "stream").
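As a rough sketch of that layout (the table and column names here are my own assumptions, not anything prescribed in the answer):

CREATE TABLE operations (
  id TINYINT UNSIGNED PRIMARY KEY,          -- e.g. 1 = view, 2 = download, 3 = stream
  name VARCHAR(20) NOT NULL UNIQUE
);

CREATE TABLE song_interactions (
  song_id INT UNSIGNED NOT NULL,
  user_id INT UNSIGNED NOT NULL,
  operation_id TINYINT UNSIGNED NOT NULL,
  created_at DATETIME NOT NULL,
  PRIMARY KEY (song_id, user_id, operation_id),   -- one row per user/song/operation
  KEY idx_user (user_id)                          -- for "what did this user do?" lookups
);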
It's definitely better to have each item in a separate row. Manipulating text fields has performance disadvantages by itself, but if you ever want to find out which songs user 1234 has viewed/listened to/etc., you'd have to do something like
SELECT * FROM songactions WHERE userlist LIKE '%,1234,%' OR userlist LIKE '1234,%' OR userlist LIKE '%,1234' OR userlist='1234';
It'd be just horribly, horribly painful.
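By contrast, with one row per interaction in a hypothetical song_interactions (song_id, user_id, operation) table, the same lookup is a plain indexed query:

SELECT song_id
FROM song_interactions
WHERE user_id = 1234;   -- an index on user_id is used, no string scanning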
Apologies if this is redundant, and it probably is; I had a look but couldn't find a question here that covered what I wanted to know.
Basically we have a table with about 50,000 rows, and it's expected to grow much bigger than that. We need to allow admin users to add custom data to an item based on its category, and users can just pick which of the admin-defined fields they want to add info to.
Initially I had gone with an item_categories_fields table which pairs up entries from item_fields to item_categories, so admins can add custom fields and reuse them across categories for consistency. item_fields has a relationship to item_field_values which links values with fields, which is how we handled things in .NET. The project is using CakePHP though, and we're just learning as we go, so it can get a bit annoying at times.
I'm however thinking of maybe just adding an item_custom_fields table that is essentially the item_id and a text field that stores XMLish formatted data. This is just for the values of the custom fields.
No problems if I want to fetch the item by its id as the required data is stored in the items table, but what if I wanted to do a search based on a custom field? Would a
SELECT * FROM item_custom_fields
WHERE custom_data LIKE '%<material>Plastic</material>%'
(user input related issues aside) be practical if I wanted to fetch items made of plastic in this case? Like how slow would that be?
Thanks.
Edit: I was afraid of that, as realistically this thing will be around 400k rows for that one table at launch. Thanks guys.
Any LIKE query that starts with % will not use any index you have on the column, so the query will scan the whole table to find the result.
The response time depends heavily on your machine and the size of the table, but it definitely won't be efficient in any shape or form.
Your previous/existing solution (if well indexed) should be quite a bit faster.
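As a hedged illustration of why the indexed design wins: with an item_field_values style layout (the column names below are my assumptions), the search becomes an exact match that can use an index rather than a leading-wildcard scan:

SELECT i.*
FROM items i
JOIN item_field_values v ON v.item_id = i.id
JOIN item_fields f ON f.id = v.field_id
WHERE f.name = 'material'
  AND v.value = 'Plastic';   -- exact match, so an index on (field_id, value) can be used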
I'm a software developer. I love to code, but I hate databases... Currently, I'm creating a website on which a user will be allowed to mark an entity as liked (like in FB), tag it and comment.
I'm stuck on the database table design for this functionality. The solution is trivial if we only need to do this for one type of thing (e.g. photos), but I need to enable it for 5 different things (for now, and I assume that this number can grow as the whole service grows).
I found some similar questions here, but none of them have a satisfying answer, so I'm asking this question again.
The question is: how do I properly, efficiently and flexibly design the database so that it can store comments, likes and tags for different tables? A design pattern as the answer would be best ;)
Detailed description:
I have a table User with some user data, and 3 more tables: Photo with photographs, Articles with articles, Places with places. I want to enable any logged-in user to:
comment on any of those 3 tables
mark any of them as liked
tag any of them with some tag
I also want to count the number of likes for every element and the number of times that particular tag was used.
1st approach:
a) For tags, I will create a table Tag [TagId, tagName, tagCounter], then I will create many-to-many relationship tables: Photo_has_tags, Place_has_tag, Article_has_tag.
b) The same goes for comments.
c) I will create tables LikedPhotos [idUser, idPhoto], LikedArticles [idUser, idArticle], LikedPlace [idUser, idPlace]. The number of likes will be calculated by queries (which, I assume, is bad). And...
I really don't like this design for the last part; it smells bad to me ;)
2nd approach:
I will create a table ElementType [idType, TypeName == some table name] which will be populated by the administrator (me) with the names of the tables that can be liked, commented on or tagged. Then I will create tables:
a) LikedElement [idLike, idUser, idElementType, idLikedElement] and the same for Comments and Tags with the proper columns for each. Now, when I want to mark a photo as liked I will insert:
typeId = SELECT id FROM ElementType WHERE TypeName == 'Photo'
INSERT (user id, typeId, photoId)
and for places:
typeId = SELECT id FROM ElementType WHERE TypeName == 'Place'
INSERT (user id, typeId, placeId)
and so on... I think that the second approach is better, but I also feel like something is missing in this design as well...
Finally, I also wonder what the best place is to store the counter for how many times an element was liked. I can think of only two ways:
in the element (Photo/Article/Place) table
by SELECT COUNT().
I hope that my explanation of the issue is more thorough now.
The most extensible solution is to have just one "base" table (connected to "likes", tags and comments), and "inherit" all other tables from it. Adding a new kind of entity involves just adding a new "inherited" table - it then automatically plugs into the whole like/tag/comment machinery.
The entity-relationship term for this is a "category" (see the ERwin Methods Guide, section "Subtype Relationships"). The category symbol is:
Assuming a user can like multiple entities, the same tag can be used for more than one entity, but a comment is entity-specific, your model could look like this:
BTW, there are roughly 3 ways to implement the "ER category":
All types in one table.
All concrete types in separate tables.
All concrete and abstract types in separate tables.
Unless you have very stringent performance requirements, the third approach is probably the best (meaning the physical tables match 1:1 the entities in the diagram above).
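A minimal sketch of that third option in SQL, with table and column names of my own choosing (the actual diagram isn't reproduced here):

CREATE TABLE entity (                -- the abstract "base" table
  entity_id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
  entity_type ENUM('photo', 'article', 'place') NOT NULL
);

CREATE TABLE photo (                 -- one concrete table per subtype; its PK is also an FK to entity
  entity_id INT UNSIGNED PRIMARY KEY,
  path VARCHAR(255) NOT NULL,
  FOREIGN KEY (entity_id) REFERENCES entity (entity_id)
);
-- article and place follow the same pattern

CREATE TABLE liked (                 -- likes (and likewise comments and tags) reference only the base table
  user_id INT UNSIGNED NOT NULL,
  entity_id INT UNSIGNED NOT NULL,
  PRIMARY KEY (user_id, entity_id),
  FOREIGN KEY (entity_id) REFERENCES entity (entity_id)
);

Adding a new kind of entity then only means adding one more concrete table; the like/tag/comment tables don't change.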
Since you "hate" databases, why are you trying to implement one? Instead, solicit help from someone who loves and breathes this stuff.
Otherwise, learn to love your database. A well designed database simplifies programming, engineering the site, and smooths its continuing operation. Even an experienced d/b designer will not have complete and perfect foresight: some schema changes down the road will be needed as usage patterns emerge or requirements change.
If this is a one man project, program the database interface into simple operations using stored procedures: add_user, update_user, add_comment, add_like, upload_photo, list_comments, etc. Do not embed the schema into even one line of code. In this manner, the database schema can be changed without affecting any code: only the stored procedures should know about the schema.
You may have to refactor the schema several times. This is normal. Don't worry about getting it perfect the first time. Just make it functional enough to prototype an initial design. If you have the luxury of time, use it some, and then delete the schema and do it again. It is always better the second time.
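For instance, one such procedure might look roughly like this in MySQL (the comment table and its columns are assumptions made for the sake of the example):

DELIMITER //
CREATE PROCEDURE add_comment (
  IN p_user_id INT UNSIGNED,
  IN p_entity_id INT UNSIGNED,
  IN p_body TEXT
)
BEGIN
  -- the calling code never sees the schema; only this procedure knows where comments live
  INSERT INTO comment (user_id, entity_id, body, created_at)
  VALUES (p_user_id, p_entity_id, p_body, NOW());
END //
DELIMITER ;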
This is a general idea
please don't pay much attention to the field name styling, but more to the relations and structure
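Since the original diagram isn't shown here, this is a sketch of the single actions table the queries below assume (the column names are chosen to match those queries; everything else is illustrative):

CREATE TABLE actions (
  id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
  id_user INT UNSIGNED NOT NULL,      -- who performed the action
  id_Stuff INT UNSIGNED NOT NULL,     -- the photo/article/place being acted on
  typeStuff VARCHAR(20) NOT NULL,     -- 'photo', 'article', 'place', ...
  typeAction VARCHAR(20) NOT NULL,    -- 'comment', 'like', 'tag', ...
  comment_text TEXT NULL,             -- only filled when typeAction = 'comment'
  created DATETIME NOT NULL
);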
This pseudocode will get all the comments on the photo with ID 5
SELECT * FROM actions
WHERE actions.id_Stuff = 5
AND actions.typeStuff="photo"
AND actions.typeAction = "comment"
This pseudocode will get all the likes, or the users who liked the photo with ID 5
(you may use COUNT() to just get the number of likes)
SELECT * FROM actions
WHERE actions.id_Stuff = 5
AND actions.typeStuff="photo"
AND actions.typeAction = "like"
As far as I understand it, several tables are required, with many-to-many relations between them:
A table which stores user data such as name, surname and birth date, with an identity field.
A table which stores data types. These types may be photos, shares, links. Each type must have its own table; therefore, there is a relation between the individual type tables and this table.
Each different data type gets its own table, for example status updates, photos, links.
The last table is for the many-to-many relation, storing an id, user id, data type and data id.
Look at the access patterns you are going to need. Do any of them seem to be made particularly difficult or inefficient by one design choice or the other?
If not, favour the one that requires fewer tables.
In this case:
Add comment: you either pick a particular many-to-many table or insert into a common table with a known, specific identifier for what is being commented on; I think the client code will be slightly simpler in your second case.
Find comments for an item: here using a common table seems slightly easier - we just have a single query parameterised by the type of entity.
Find comments by a person about one kind of thing: a simple query in either case.
Find all comments by a person about all things: this seems a little gnarly either way.
I think your "discriminated" approach, option 2, yields simpler queries in some cases and doesn't seem much worse in the others so I'd go with it.
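To illustrate the "all comments by a person" case with hypothetical table names (a sketch, not the asker's actual schema): the per-type layout needs a UNION across every entity type, while the discriminated table needs one query:

-- per-type tables: one branch per entity type
SELECT 'photo' AS kind, c.idPhoto AS idElement, c.commentText
FROM PhotoComments c WHERE c.idUser = 42
UNION ALL
SELECT 'article', c.idArticle, c.commentText
FROM ArticleComments c WHERE c.idUser = 42
UNION ALL
SELECT 'place', c.idPlace, c.commentText
FROM PlaceComments c WHERE c.idUser = 42;

-- discriminated table: a single query does the same job
SELECT idElementType, idElement, commentText
FROM Comments WHERE idUser = 42;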
Consider using a table per entity for comments etc. More tables means better sharding and scaling, and it's not a problem to manage many similar tables in any framework I know.
One day you'll need to optimize reads from such a structure. You can easily create aggregating tables over the base ones and lose a bit on writes.
One big table with a type dictionary may become unmanageable one day.
Definitely go with the second approach, where you have one table and store the element type for each row; it will give you a lot more flexibility. Basically, when something can logically be done with fewer tables, it is almost always better to go with fewer tables. One advantage that comes to mind for your particular case: suppose you want to delete all liked elements of a certain user. With the first approach you need to issue one query per element type, but with the second approach it can be done with a single query (see the sketch below). Or consider adding a new element type: with the first approach that means creating a new table for each new type, while with the second approach you don't have to change the schema at all.
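A sketch of that delete, using the table names from the question:

-- first approach: one statement per entity type
DELETE FROM LikedPhotos   WHERE idUser = 42;
DELETE FROM LikedArticles WHERE idUser = 42;
DELETE FROM LikedPlace    WHERE idUser = 42;

-- second approach: a single statement covers every type
DELETE FROM LikedElement WHERE idUser = 42;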
I have a maybe stupid question but I need to ask it :-)
My friendly URL (furl) database design approach is summarized in the following diagram (InnoDB on MySQL 5.5).
Each item will generate as many furls as languages available on the website. The furl_redirect table represents the controller path for each item. I show you an example:
item.id = 1000
item.title = 'Example title'
furl_redirect = 'item/1000'
furl.url = 'en/example-title-1000'
furl.url = 'es/example-title-1000'
furl.url = 'it/example-title-1000'
When you insert a new item, its furl_redirect and furls must also be inserted. The problem appears because of the (necessary) unique constraint on the furl table. As you see above, in order to get unique urls, I use the title of the item (which is not necessarily unique) + the id to create the unique url. That means the order of inserting rows should be as follows:
1. Insert item -- (and get the id of the new item inserted) ERROR!! furl_redirect_id must not be null!!
2. Insert furl_redirect -- (needs the item id to create the path)
3. Insert furl -- (needs the item id to create the url)
I would like an elegant solution to this problem, but I can't come up with one!
Is there a way of getting the next auto-increment value on an InnoDB table, and is it recommended to use it?
Can you think of another way to ensure the uniqueness of the friendly urls that is independent of the items' id? Am I missing something crucial?
Any solution is welcome!
Thanks!
You can get an auto-increment in InnoDB, see here. Whether you should use it or not depends on what kind of throughput you need and can achieve. Any auto-increment/identity type column, when used as a primary key, can create a "hot spot" which can limit performance.
Another option would be to use an alphanumeric ID, like bit.ly or other URL shorteners. The advantage of these is that you can have short IDs that use base 36 (a-z+0-9) instead of base 10. Why is this important? Because you can use a random number generator to pick a number out of a fairly big domain - 6 characters gets you 2 billion combinations. You convert the number to base 36, and then check to see if you already have this number assigned. If not, you have your new ID and off you go, otherwise generate a new random number. This helps to avoid hotspots if that turns out to be necessary for your system. Auto-increment is easier and I'd try that first to see if it works under the loads that you're anticipating.
You could also use the base 36 ID and the auto-increment together so that your friendly URLs are shorter, which is often the point.
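A rough sketch of the random base-36 idea in MySQL (purely illustrative; the code column and its unique index are assumptions, not part of the schema above):

-- generate a candidate 6-character base-36 code
SELECT LPAD(CONV(FLOOR(RAND() * POW(36, 6)), 10, 36), 6, '0') AS candidate;

-- try to claim it: a duplicate-key error on the unique code column means
-- "pick another random number and retry"
INSERT INTO furl (code, furl_redirect_id)
VALUES ('1A9Z3K', 123);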
You might consider other ways to approach your project.
First, you are using "en/", "de/", etc. for changing the language. May I ask how that works in the script? If you have different folders for different languages, your script and your users must suffer a lot. Try to use gettext or any other localisation method (depending on the size of your project).
About the friendly urls: my favorite method is to have only one extra column in the item's table. For example:
Table picture
id, path, title, alias, created
Values:
1, uploads/pics/mypicture.jpg, Great holidays, great-holidays, 2011-11-11 11:11:11
2, uploads/pics/anotherpic.jpg, Great holidays, great-holidays-1, 2011-12-12 12:12:12
Now in the script, while inserting the item, create the alias from the title, check whether the alias already exists, and if it does, append the id, a random number, or a count (depending on how many identical titles you already have).
After you store the alias like this it's very simple. The user tries to access
http://www.mywebsite.com/picture/great-holidays
So in your script you just see that the user wants a picture, and specifically the picture with the alias great-holidays. Find it in the DB and show it.
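A minimal sketch of the uniqueness check in SQL, reusing the picture table from the example above (the exact suffix strategy is up to you):

-- how many rows already use this alias (or a suffixed variant of it)?
SELECT COUNT(*) FROM picture
WHERE alias = 'great-holidays' OR alias LIKE 'great-holidays-%';

-- if the count is 1, store the new row with a suffix appended
INSERT INTO picture (path, title, alias, created)
VALUES ('uploads/pics/anotherpic.jpg', 'Great holidays', 'great-holidays-1', NOW());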
Apologies for the long topic; I didn't intend for it to be this long, but it's a pretty simple issue I've been having. :)
Let's say you have a simple table called tags that has columns tag_id and tag. The tag_id is simply an auto increment column and the tag is the title of the tag. If I need to add a description field, that would be around 1-2 paragraphs on average (max around 3-4 paragraphs probably), should I simply add a description field to the table or should I create a new table called tag_descriptions and store the descriptions with the tag_id?
I remember reading that it is better to do this because if you run a query that doesn't select the description, that description field will still slow MySQL down. Is this true? I don't even remember where I read it, but I've been kind of following it for a couple of years now. Finally, I question whether I need to do this at all; I have a feeling I don't. You'd also need an inner join whenever you need the description field.
Another question I have is: is it generally bad to create new tables that will only ever hold very few rows? What if this data doesn't fit anywhere else?
I have a simple case below which relates to these two questions.
I have three tables content, tags, and content_tags that make up a many to many relationship:
content
  content_id
  region (enum column with about 6-7 different values and most likely won't grow later on)
tags
  tag_id
  tag
content_tags
  content_id
  tag_id
I want to store a description around 1-2 paragraphs for each tag, but also for each region. I'm wondering what would be the best way to do this?
Option A:
Just add a description column to the tags table
Create a new table for region_descriptions
Option B:
Create a new table called descriptions with fields: id, description, and type
The id would be the id of the content or the id of the enum field
The type would be whether it is a tag description or a region description (would use the enum column for this)
Maybe have a primary key on the id and type?
Option C:
Create a new table for tag_descriptions
Create a new table for region_descriptions
Option A seems to be a good choice if adding the description column doesn't slow down MySQL SELECT queries that don't need the description.
Assuming the description column would slow MySQL down, option B might be a good choice. It also removes the need for a small table with just 6-7 rows to hold the region descriptions. Although, now that I think of it, would it be slow to join to this table, given that to get a region description you'd otherwise only need to go through very few rows?
Option C would be ideal if the description columns would slow down MySQL and if a small table like region_descriptions would not matter.
Maybe none of these options are the best, feel free to offer another option. Thanks.
P.S. What would be an ideal column type to hold data that is usually 1-2 paragraphs, but might sometimes be a little more?
I don't think it really matters if you don't handle thousands of queries per minute. If you are going to have a zillion queries per minute, then I would implement the various options and perform benchmarks for all these options. Based on the results, you can make a decision.
In my (admittedly somewhat uninformed) opinion, it really depends on how much you'll be using both of them.
If properly indexed, that JOIN should not be very expensive. Also, a larger table will be slower: it inhibits caching and takes longer to access, although indexing seriously mitigates this problem.
If you'll be joining tag names to tag IDs a LOT, and only rarely will be using the descriptions, I'd say go with separate tables. If you'll be using the descriptions more often, go with one table.
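For example, with separate tables (a sketch assuming a tag_descriptions table keyed on tag_id, which isn't defined in the question), the description text is only touched when you actually ask for it:

-- hot path: name/id lookups never read the description text
SELECT tag FROM tags WHERE tag_id = 7;

-- occasional path: pull the description only when it is needed
SELECT t.tag, d.description
FROM tags t
JOIN tag_descriptions d ON d.tag_id = t.tag_id
WHERE t.tag_id = 7;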
For the first part of your question: if you have a tag with an id, a name and a description, you should save it in 1 table.
Now, this query
SELECT name FROM tags WHERE id = 1;
will NOT slow down if you have 1, 2 or 20 extra fields in there.
In my older work, I always used the 'text' data type to store items, like so:
0=4151:54;1=995:5000;2=521:1;
So basically: slot=item:amount;
I've been looking into the best ways of storing information in an SQL database, and everywhere I go it says that using text like this is a big performance hit.
I was thinking of doing something else, like having a table with the following columns:
id, owner_id, slot_id, item_id, amount
Whereas now I can just insert a row for each item a character allocates. But I have no clue how to save them, since the slot's item can change, etc. A character has 28 inventory slots and 500 bank slots; should I insert them all at registration, or is there a smarter way to save the items?
Yes, use that structure. Using text to store relational data defeats the purpose of a relational database.
I don't see what you mean by insert them all at registration. Can you not insert them as you need to?
Edit
Based on your previous comment I would recommend only inserting a slot as it is needed (if I understand your problem). It may be an idea to keep the ID of the slot in the application, if need be.
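A minimal sketch of that approach (the column names follow the question; the table name and unique key are my assumptions): a slot row is only created the first time the slot is used, and updated in place when its item changes.

CREATE TABLE character_items (
  id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
  owner_id INT UNSIGNED NOT NULL,
  slot_id SMALLINT UNSIGNED NOT NULL,    -- e.g. 0-27 for inventory, 28+ for bank
  item_id INT UNSIGNED NOT NULL,
  amount INT UNSIGNED NOT NULL,
  UNIQUE KEY uniq_slot (owner_id, slot_id)
);

-- create the slot row the first time it is used, or overwrite it if it already exists
INSERT INTO character_items (owner_id, slot_id, item_id, amount)
VALUES (1, 0, 4151, 54)
ON DUPLICATE KEY UPDATE item_id = VALUES(item_id), amount = VALUES(amount);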
If I understand you correctly, and the slot's item can change, then you want to further abstract the mapping between item_id and the item:
entry_tbl.item_id->item_rel_realitems_tbl.real_id->items_tbl
This way, all entries with an item_id point to a table that maps those ids to a mutable item. When you UPDATE an item in items_tbl, the entries in entry_tbl pick up the change automatically through the mapping.
Another JOIN is needed, however. I would also use stored procedures in any case to abstract the mechanism from the semantics.
I am not sure I understand the wording of your question however.
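A sketch of what that extra indirection could look like as a query; every name here is illustrative, just following the arrows above:

-- entry_tbl.item_id -> item_rel_realitems_tbl.real_id -> items_tbl
SELECT e.owner_id, e.slot_id, e.amount, i.name
FROM entry_tbl e
JOIN item_rel_realitems_tbl r ON r.item_id = e.item_id
JOIN items_tbl i ON i.id = r.real_id
WHERE e.owner_id = 1;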