How to add related properties to a table in MySQL correctly

We have been developing the system at my place of work for some time now, and I feel the database design is getting somewhat out of hand.
For example we have a table widgets (I'm spoofing these somewhat):
+-----------------------+
| Widget                |
+-----------------------+
| Id | Name     | Price |
| 1  | Sprocket | 100   |
| 2  | Dynamo   | 50    |
+-----------------------+
*There are already 40+ columns on this table
We want to add a property to each widget for packaging information. We need to know whether it has packaging information, whether it doesn't, or whether we don't know either way. We then also need to store the type of packaging details (assuming it has any; maybe it doesn't, and the info is now redundant).
We already have another table which stores the detail information (I personally think this table should be divided up, but that's another issue).
PD = PackageDetails
+--------------------------------+
| System Properties              |
+--------------------------------+
| Id | Type | Value              |
| 28 | PD   | Boxed              |
| 29 | PD   | Vacuum Sealed      |
+--------------------------------+
*There are thousands of rows in this table for all system-wide properties
Instinctively I would create a number of mapping tables to capture this information. I have however been instructed to just add another column onto each table to avoid doing a join.
My solution:
Create tables:
+---------------------------------------------------+
| widgets_packaging                                 |
+---------------------------------------------------+
| Id | widget_id | packing_info | packing_detail_id |
| 1  | 27        | PACKAGED     | 2                 |
| 2  | 28        | UNKNOWN      | NULL              |
+---------------------------------------------------+
+--------------------+
| packaging          |
+--------------------+
| Id | Name          |
| 1  | Boxed         |
| 2  | Vacuum Sealed |
+--------------------+
If I want to know what packaging a widget has I join through to widgets_packaging and join again to packaging if I want to know the exact details. Therefore no more columns on the widgets table.
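For concreteness, here is a minimal DDL sketch of the mapping-table design just described (table and column names follow the examples above; the ENUM values, column types, InnoDB, and the assumption that widgets has a matching unsigned int id are mine):

CREATE TABLE packaging (
    id   INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    name VARCHAR(50) NOT NULL                -- e.g. 'Boxed', 'Vacuum Sealed'
) ENGINE=InnoDB;

CREATE TABLE widgets_packaging (
    id                INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    widget_id         INT UNSIGNED NOT NULL,
    -- tri-state: has packaging, has none, or we simply don't know
    packing_info      ENUM('PACKAGED','NOT_PACKAGED','UNKNOWN') NOT NULL,
    packing_detail_id INT UNSIGNED NULL,     -- NULL when there are no details
    FOREIGN KEY (widget_id)         REFERENCES widgets (id),
    FOREIGN KEY (packing_detail_id) REFERENCES packaging (id)
) ENGINE=InnoDB;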
I have, however, been told to ignore this and instead store an int value for the packaging information, plus another column as a foreign key to the System Properties table to find the packaging details. That means adding another two columns to the widgets table and creating yet more rows in the System Properties table to store package details.
+-----------------------------------------------------------+
| Widget                                                    |
+-----------------------------------------------------------+
| Id | Name     | Price | has_packaging | packaging_details |
| 1  | Sprocket | 100   | 1             | 28                |
| 2  | Dynamo   | 50    | 0             | 29                |
+-----------------------------------------------------------+
The reason given is that it's simpler and doesn't involve a join if you only want to know whether a widget has packaging (there are lots of widgets). They are concerned that more joins will slow things down.
Which is the more correct solution here, and are their concerns about speed legitimate? My gut instinct is that we can't just keep adding columns onto the widgets table, which is already growing and growing with flags for properties.

The answer to this really depends on whether the application(s) using this database are read or write intensive. If it's read intensive, the de-normalized structure is a better approach because you can make use of indexes. Selects are faster with fewer joins, too.
However, if your application is write intensive, normalization is a better approach (the structure you're suggesting is a more normalized approach). Tables tend to be smaller, which means they have a better chance of fitting into the buffer. Also, normalization tends to lead to less duplication of data, which means updates and inserts only need to be done in one place.
To sum it up:
Write Intensive --> normalization
smaller tables have a better chance of fitting into the buffer
less duplicated data, which means quicker updates / inserts
Read Intensive --> de-normalization
better structure for indexes
fewer joins means better performance
If your application is not heavily weighted toward reads over writes, then a more mixed approach would be better.
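To make the tradeoff concrete, here is a sketch of the two read paths using the tables from the question (the index name is hypothetical):

-- Denormalized: no join; an index makes the flag check an index lookup
CREATE INDEX idx_widgets_has_packaging ON widgets (has_packaging);

SELECT id, name FROM widgets WHERE has_packaging = 1;

-- Normalized: one join for the flag, a second for the exact details
SELECT w.id, w.name, p.name AS packaging
FROM widgets w
JOIN widgets_packaging wp ON wp.widget_id = w.id
LEFT JOIN packaging p     ON p.id = wp.packing_detail_id
WHERE wp.packing_info = 'PACKAGED';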

Related

How do I handle linking a record to another table?

I'm very new to Access and my teacher is... hard to follow. So I feel like there's something pretty basic I'm probably missing here. I think the biggest problem I'm having with this question is that I'm struggling to find the words to communicate what I actually need to do, which is really putting a damper on my google-fu.
In terms of what I think I want to do, I want to make a record reference another table in its entirety.
Main
+----+-------+--------+-------+----------------------------+
| PK | Name  | Phone# | [...] | Cards                      |
+----+-------+--------+-------+----------------------------+
| 1  | Bob   | [...]  | [...] | < Reference to 2nd table > |
| 2  | Harry | [...]  | [...] | [...]                      |
| 3  | Ted   | [...]  | [...] | [...]                      |
+----+-------+--------+-------+----------------------------+
Bob's Cards
+----+-------------+-----------+-------+-------+-------+
| PK | Card Name   | Condition | Year  | Price | [...] |
+----+-------------+-----------+-------+-------+-------+
| 1  | Big Slugger | Mint      | 1987  | .20   | [...] |
| 2  | Quick Pete  | [...]     | [...] | [...] | [...] |
| 3  | Mac Donald  | [...]     | [...] | [...] | [...] |
+----+-------------+-----------+-------+-------+-------+
This would necessitate an entire new table for each record in the main table though, if it's even possible.
But the only alternative solution I can think of is to add 'Card1, Condition1, [...], Card2, Condition2, [...], Card3, [...]' fields to the main table and having to add another set of fields any time someone increases the maximum number of cards stored.
So I'm sort of left believing there is some other approach I should be taking that our teacher has failed to properly explain. We haven't even touched on forms and reports yet so I don't need to worry about working them in.
Any pointers?
(Also, the entirety of this data and structure is only a rough facsimile of my own, as I'd rather learn how to do it and apply it myself than be like 'here's my data, pls fix.')
Third option successfully found in comments by the helpful Minty.
This depends on a number of things, however to keep it simple you
would normally add one field to the cards table, with a number data
type, called CardOwnerID. In your example it would be 1, indicating Bob.
This is known as a foreign key (FK). However, if you have a table of
cards and multiple possible owners, then you need a third table - a
junction table. This would consist of the Main Person ID and the Card
ID. – Minty
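Expressed as generic SQL for concreteness (in Access you would normally do this through the table designer, but the shape is the same; all types, and names like card_owner_id, are assumptions):

-- One owner per card: a foreign key column on the cards table
CREATE TABLE cards (
    id             INT PRIMARY KEY,
    card_name      VARCHAR(50),
    card_condition VARCHAR(20),
    card_year      INT,
    price          DECIMAL(6,2),
    card_owner_id  INT,                  -- 1 would mean Bob
    FOREIGN KEY (card_owner_id) REFERENCES main (pk)
);

-- Multiple possible owners: a junction table instead of the FK column
CREATE TABLE main_cards (
    person_id INT NOT NULL,
    card_id   INT NOT NULL,
    PRIMARY KEY (person_id, card_id),
    FOREIGN KEY (person_id) REFERENCES main (pk),
    FOREIGN KEY (card_id)   REFERENCES cards (id)
);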

More efficient to have more columns or more rows?

I'm currently redesigning a database which could contain a lot of data - I have the option to either include a number of different columns in the database or use a lot of rows instead. It's probably easier if I did some kind of outline below:
item_id | user_id | title | description | content | category | template | comments | status
-------------------------------------------------------------------------------------------
1       | 1       | ABC   | DEF         | GHI     | 1        | default  | 1        | 1
2       | 1       | ZYX   |             | QWE     | 2        | default  | 0        | 1
3       | 1       | A     |             | RTY     | 2        | default  | 0        | 0
4       | 2       | ABC   | DEF         | GHI     | 3        | custom   | 1        | 1
5       | 2       | CBA   |             | GHI     | 3        | custom   | 1        | 1
Versus something in the following structure:
item_id | user_id | attribute   | value
---------------------------------------
1       | 1       | title       | ABC
1       | 1       | description | DEF
1       | 1       | content     | GHI
...     | ...     | ...         | ...
I may want to create additional attributes in the future (50, for argument's sake), so there could be a lot of empty cells if using multiple columns. The attribute names would be reused, where possible, across different types of content - say a blog entry, event, and gallery - title would easily be reused.
So my question is, is it more efficient to use multiple columns or multiple rows - in terms of query speed and disk space. Or would you instead recommend relationship tables, so there's a table for blogs, a table for events, etc. I'm just trying to come up with an easily expandable solution, where I ideally do not want to create a table for every kind of content as I'm thinking of developers creating new kinds of content via an app/API system (with attributes being tightly controlled).
Supplementary Question if Multiple Rows
How could I, in MySQL, convert multiple rows into a usable column format (I guess temporary tables) - so I could do some filtering by content type, as an example.
Basically, MySQL has a variable row length as long as one does not change the row format on a per-table level. Thus, empty cols will not use any space (well, almost).
But with blobs or text columns, it might be better to normalize those out, as they may have large data to store, and this data needs to be read / skipped every time the table is scanned. Even if the column is not in the result set, a query outside of an index will take its time on a large number of rows.
As a good practice, I think it is best to put all administrative and often-used cols in one table and normalize all the rest. A "vertical" design as in your second example will be complex to read, and as soon as you work with temporary tables you will run into performance issues sooner or later.
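To illustrate that last point: pivoting the "vertical" attribute rows back into a usable column format - the supplementary question - is typically done with conditional aggregation, which has to group over all the attribute rows (a sketch; the table name item_attributes is hypothetical):

SELECT item_id,
       MAX(CASE WHEN attribute = 'title'       THEN value END) AS title,
       MAX(CASE WHEN attribute = 'description' THEN value END) AS description,
       MAX(CASE WHEN attribute = 'content'     THEN value END) AS content
FROM item_attributes
GROUP BY item_id;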
For a traditional row-based store, the cost of spooling through rows will depend on their width, so scanning a table with wide rows will take longer than one with narrow rows.
That said, if you're using an index to locate the rows that are of interest, this won't be so much of an issue.
If you normalise your data by replacing columns with keys to rows in other tables, you can reduce the amount of storage if the linked tables end up being significantly smaller than the original table, however any query will need to include the cost of required joins into the related table.
As with all these things, it's a balancing act that depends on your requirements, but understanding what's going on under the hood can certainly help you to make more informed decisions.
This question is very difficult to answer, as it all comes down to what you are looking for and how your database will grow in size and complexity over time. I find the best way to answer these types of questions is to read case studies from other successful sites. For example, Reddit would be a case study where they use a lot of rows but very few tables and/or columns. The article is here and a question on it is here.
There is also the option of exploring a NoSQL solution which may be more applicable to what you are trying to achieve.
Google case studies of sites that would have a similar structure to your own and see how they accomplished it as they have most likely encountered all the issues you will and already overcome them.

MySQL LEFT JOIN vs. 2 separate queries (Performance)

I have two tables:
++++++++++++++++++++++++++++++++++++
| Games                            |
++++++++++++++++++++++++++++++++++++
| ID | Name   | Description        |
++++++++++++++++++++++++++++++++++++
| 1  | Game 1 | A game description |
| 2  | Game 2 | And another        |
| 3  | Game 3 | And another        |
| .. | ...    | ...                |
++++++++++++++++++++++++++++++++++++
+++++++++++++++++++++++++++++++++++++++++++
| GameReviews                             |
+++++++++++++++++++++++++++++++++++++++++++
| ID | GameID | Review                    |
+++++++++++++++++++++++++++++++++++++++++++
| 1  | 1      | Review for game 1         |
| 2  | 1      | Another review for game 1 |
| 3  | 1      | And another               |
| .. | ...    | ...                       |
+++++++++++++++++++++++++++++++++++++++++++
Option 1:
SELECT
    Games.ID,
    Games.Name,
    Games.Description,
    GameReviews.ID,
    GameReviews.Review
FROM
    GameReviews
LEFT JOIN
    Games
ON
    Games.ID = GameReviews.GameID
WHERE
    Games.ID = ?
Option 2:
SELECT
    ID,
    Name,
    Description
FROM
    Games
WHERE
    ID = ?
and then
SELECT
    ID,
    Review
FROM
    GameReviews
WHERE
    GameID = ?
Obviously option 1 would be "simpler" in that it is less code to write, and option 2 would seem logically "easier" on the database, as it only queries the Games table once rather than repeating the game's data on every joined row. The question is: when it really gets down to it, is there really a difference in performance and efficiency?
The vast majority of the time option 1 would be the way to go. The performance difference between the two would not be measurable until you have a lot of data. Keep it simple.
Your example is also fairly basic. At scale, performance issues can start revealing themselves based on what fields are being filtered, joined and pulled. The ideal scenario is to only pull data that exists in indexes (particularly with InnoDB). That usually is not possible, but a strategy is to pull the actual data you need at the last possible moment, which is sort of what option 2 would be doing.
At extreme scale, you don't want to do any joins in the database at all. Your "joins" would happen in code, minimizing data sent over the network. Go with option 1 until you start having performance issues, which may never happen.
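Whichever option you pick, what actually keeps the review lookups fast is an index on the join/filter column; without one, every lookup scans all of GameReviews (a sketch; the index name is my own):

-- Serves both option 1's ON clause and option 2's WHERE GameID = ?
CREATE INDEX idx_gamereviews_gameid ON GameReviews (GameID);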
Go with option 1; that is exactly what RDBMSes are optimized for.
And it is always better to hit the database once from the client than to hit it multiple times.
I don't believe that you will ever have so many games and reviews that it will make sense to go with option 2.

CRM Relationships MySQL

Our company is developing a CRM, and we have now come to the point where we have to decide how we want to handle the relationships. This is an important point, because there are going to be tons of them, and changing the structure later would simply not be cool.
I know 3 ways we could do it:
One relationship table:
The way I would do this is to create one table holding all the relationships.
Table: relationships
+----+-------------+-----------+--------------+------------+
| id | record_type | record_id | belongs_type | belongs_id |
+----+-------------+-----------+--------------+------------+
| 1  | person      | 42        | company      | 12         |
+----+-------------+-----------+--------------+------------+
| 2  | person      | 43        | company      | 12         |
+----+-------------+-----------+--------------+------------+
| 3  | note        | 23        | company      | 12         |
+----+-------------+-----------+--------------+------------+
| 4  | attachment  | 13        | company      | 12         |
+----+-------------+-----------+--------------+------------+
Multiple relationship tables:
I think this is, for example, the way SugarCRM does it.
Table: company_relationships
+----+-----------+------------+--------+
| id | record_id | has_type   | has_id |
+----+-----------+------------+--------+
| 1  | 12        | person     | 42     |
+----+-----------+------------+--------+
| 2  | 12        | person     | 43     |
+----+-----------+------------+--------+
| 3  | 12        | note       | 23     |
+----+-----------+------------+--------+
| 4  | 12        | attachment | 13     |
+----+-----------+------------+--------+
All in the record table:
Table: person
+----+-----------+------------+
| id | name      | company_id |
+----+-----------+------------+
| 42 | luke      | 12         |
+----+-----------+------------+
| 43 | other guy | 12         |
+----+-----------+------------+
etc.
So my question is: which is the best way of handling lots of relationships?
Are there other ways to do it?
What are the disadvantages / advantages?
Is there a special way high-traffic sites handle their relationships?
Thanks for your help guys :)
So my question is: which is the best way of handling lots of relationships?
The third one, or a variation of it (see below).
Every "M:N" relationship should be represented by its own junction table. OTOH, a "1:N" relationship doesn't need additional table - just a proper foreign key in the table on the side of the "N".
If I understand your description correctly, the third option models a 1:N relationship between company and person. If by any chance you wanted to model a M:N relationship between them, you'd have a junction table: company_person ( company_id, person_id, PK (company_id, person_id) ).
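A sketch of that junction table in MySQL (the shape is exactly the one given above; the column types, InnoDB, and the referenced id columns are assumptions):

CREATE TABLE company_person (
    company_id INT NOT NULL,
    person_id  INT NOT NULL,
    PRIMARY KEY (company_id, person_id),      -- each pair at most once
    FOREIGN KEY (company_id) REFERENCES company (id),
    FOREIGN KEY (person_id)  REFERENCES person (id)
) ENGINE=InnoDB;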
Are there other ways to do it?
Sometimes, inheritance (aka. category, subtype, generalization hierarchy etc.) can be used to lower the number of possible "relatable" combinations. In a nutshell, make a relationship to a parent, then every child inherited from that parent is automatically involved in that relationship.
For an example, take a look at this post.
What are disadvantages / advantages?
Enforcing constraints (including FKs) declaratively is better (less prone to errors and probably more performant) than enforcing them through triggers, which is again better than enforcing them in the client code.
Choose a design that better adheres to that principle. For example, your options 1 and 2 don't allow the DBMS to enforce FKs declaratively.
Is there a special way high-traffic sites handle their relationships?
Good logical design followed by good physical implementation is the only solid basis for good performance. It's hard to "bolt-on" the performance on top of a bad design.
Perhaps, you'd like to take a look at:
ERwin Methods Guide
Use The Index, Luke!
And when it comes to performance, don't guess! Measure on realistic amounts of data.

Database modeling for international and multilingual purposes

I need to create a large scale DB Model for a web application that will be multilingual.
One doubt that I have every time I think about how to do it is how to resolve having multiple translations for a field. A case example:
The table for language levels, which administrators can edit from the backend, can have multiple items like: basic, advance, fluent, mattern... In the near future there will probably be one more type. The admin goes to the backend and adds a new level, and it gets sorted into the right position.. but how do I handle all the translations for the final users?
Another problem with internationalization of a database is that the levels for user studies will probably differ from the USA to the UK to DE... in every country they will have their own levels (each probably equivalent to another, but in the end different). And what about billing?
How do you model this at a big scale?
Here is the way I would design the database:
Visualization by DB Designer Fork
The i18n table only contains a PK, so that any table just has to reference this PK to internationalize a field. The table translation is then in charge of linking this generic ID with the correct list of translations.
locale.id_locale is a VARCHAR(5) to manage both of en and en_US ISO syntaxes.
currency.id_currency is a CHAR(3) to manage the ISO 4217 syntax.
You can find two examples: page and newsletter. Both of these admin-managed entities need to internationalize their fields, respectively title/description and subject/content.
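Since the diagram itself isn't reproduced here, here is a rough DDL sketch of the structure described above (names match the example query that follows; exact column types beyond those stated are assumptions):

CREATE TABLE i18n (
    id_i18n INT UNSIGNED AUTO_INCREMENT PRIMARY KEY
) ENGINE=InnoDB;

CREATE TABLE locale (
    id_locale VARCHAR(5) PRIMARY KEY        -- 'en' or 'en_US'
) ENGINE=InnoDB;

CREATE TABLE translation (
    id_i18n        INT UNSIGNED NOT NULL,
    id_locale      VARCHAR(5)   NOT NULL,
    tx_translation TEXT         NOT NULL,
    PRIMARY KEY (id_i18n, id_locale),       -- one text per field per locale
    FOREIGN KEY (id_i18n)   REFERENCES i18n (id_i18n),
    FOREIGN KEY (id_locale) REFERENCES locale (id_locale)
) ENGINE=InnoDB;

CREATE TABLE newsletter (
    id_newsletter INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    i18n_subject  INT UNSIGNED NOT NULL,    -- references a generic i18n id
    i18n_content  INT UNSIGNED NOT NULL,
    FOREIGN KEY (i18n_subject) REFERENCES i18n (id_i18n),
    FOREIGN KEY (i18n_content) REFERENCES i18n (id_i18n)
) ENGINE=InnoDB;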
Here is an example query:
select
    t_subject.tx_translation as subject,
    t_content.tx_translation as content
from newsletter n
-- join for subject
inner join translation t_subject
    on t_subject.id_i18n = n.i18n_subject
-- join for content
inner join translation t_content
    on t_content.id_i18n = n.i18n_content
inner join locale l
    -- condition for subject
    on l.id_locale = t_subject.id_locale
    -- condition for content
    and l.id_locale = t_content.id_locale
-- locale condition
where l.id_locale = 'en_GB'
-- other conditions
and n.id_newsletter = 1
Note that this is a normalized data model. If you have a huge dataset, maybe you could think about denormalizing it to optimize your queries. You can also play with indexes to improve the queries performance (in some DB, foreign keys are automatically indexed, e.g. MySQL/InnoDB).
Some previous StackOverflow questions on this topic:
What are best practices for multi-language database design?
What's the best database structure to keep multilingual data?
Schema for a multilanguage database
How to use multilanguage database schema with ORM?
Some useful external resources:
Creating multilingual websites: Database Design
Multilanguage database design approach
Propel Gets I18n Behavior, And Why It Matters
The best approach often is, for every existing table, to create a new table into which the text items are moved; the PK of the new table is the PK of the old table together with the language.
In your case:
The table for language levels, which administrators can edit from the backend, can have multiple items like: basic, advance, fluent, mattern... In the near future there will probably be one more type. The admin goes to the backend and adds a new level, and it gets sorted into the right position.. but how do I handle all the translations for the final users?
Your existing table probably looks something like this:
+----+-------+---------+
| id | price | type    |
+----+-------+---------+
| 1  | 299   | basic   |
| 2  | 299   | advance |
| 3  | 399   | fluent  |
| 4  | 0     | mattern |
+----+-------+---------+
It then becomes two tables:
+----+-------+ +----+------+-------------+
| id | price | | id | lang | type        |
+----+-------+ +----+------+-------------+
| 1  | 299   | | 1  | en   | basic       |
| 2  | 299   | | 2  | en   | advance     |
| 3  | 399   | | 3  | en   | fluent      |
| 4  | 0     | | 4  | en   | mattern     |
+----+-------+ | 1  | fr   | élémentaire |
               | 2  | fr   | avance      |
               | 3  | fr   | couramment  |
               :    :      :             :
               +----+------+-------------+
Another problem with internationalization of a database is that the levels for user studies will probably differ from the USA to the UK to DE... in every country they will have their own levels (each probably equivalent to another, but in the end different). And what about billing?
All localisation can occur through a similar approach. Instead of just moving text fields to the new table, you could move any localisable fields - only those which are common to all locales will remain in the original table.
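A sketch of that split in SQL (the table names level and level_text are hypothetical, as are the column types; the composite PK is the one described above):

CREATE TABLE level (
    id    INT PRIMARY KEY,
    price DECIMAL(8,2) NOT NULL        -- locale-independent fields stay here
);

CREATE TABLE level_text (
    id   INT NOT NULL,
    lang CHAR(2) NOT NULL,             -- 'en', 'fr', ...
    type VARCHAR(50) NOT NULL,         -- the translated label
    PRIMARY KEY (id, lang),            -- old PK + language
    FOREIGN KEY (id) REFERENCES level (id)
);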