Say I have the following table:
TABLE: product
================================================
| product_id | name     | invoice_price | msrp  |
------------------------------------------------
| 1          | Widget 1 | 10.00         | 15.00 |
| 2          | Widget 2 | 8.00          | 12.00 |
------------------------------------------------
In this model, product_id is the PK and is referenced by a number of other tables.
I have a requirement that each row be unique. In the example above, a row is defined by the name, invoice_price, and msrp columns. (Different tables may have varying definitions of which columns define a "row".)
QUESTIONS:
In the example above, should I make name, invoice_price, and msrp a composite key to guarantee uniqueness of each row?
If the answer to #1 is "yes", this would mean that the current PK, product_id, would not be defined as a key; rather, it would be just an auto-incrementing column. Would that be enough for other tables to use to create relationships to specific rows in the product table?
Note that in some cases, the table may have 10 or more columns that need to be unique. That'll be a lot of columns defining a composite key! Is that a bad thing?
I'm trying to decide if I should try to enforce such uniqueness in the database tier or the application tier. I feel I should do this in the database level, but I am concerned that there may be unintended side effects of using a non-key as a FK or having so many columns define a composite key.
When you have a lot of columns that you need to create a unique key across, create your own "key" using the data from the columns as the source. This would mean creating the key in the application layer, but the database would "enforce" the uniqueness. A simple method would be to use the MD5 hash of the concatenated column values for the record as your unique key. Then you have just a single piece of data you need to use in relations.
md5 is not guaranteed to be unique, but it may be good enough for your needs.
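For example, a minimal sketch in MySQL 5.7+, using a stored generated column so the database computes the hash itself rather than the application (the column and index names here are made up):

-- assumes name/invoice_price/msrp are the "row-defining" columns and are NOT NULL
ALTER TABLE product
  ADD COLUMN row_hash CHAR(32)
    AS (MD5(CONCAT_WS('|', name, invoice_price, msrp))) STORED,
  ADD UNIQUE KEY uq_product_row_hash (row_hash);

On older MySQL versions you would compute the hash in the application (or a trigger) before inserting, but the UNIQUE index still does the enforcement.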
First off, your intuition to do it in the DB layer is correct if you can do it easily. This means even if your application logic changes, your DB constraints are still valid, lowering the chance of bugs.
But, are you sure you want uniqueness on that? I could easily see the same widget having different prices, say for sale items or what not.
I would recommend against enforcing uniqueness unless there's a real reason to.
You might have something like this (obviously, don't use * in production code):
# get the lowest price for an item that's currently active
select *
from product p
where p.name = 'widget 1' # a non-primary index on product.name would be advised
  and p.active
order by sale_price asc
limit 1;
You can define composite primary keys as well as unique indexes. As long as your requirement is met, defining composite unique keys is not a bad design. Clearly, the more columns you add, the slower updating and searching the keys becomes, but if the business requirement needs this, I don't see it as a negative: database engines have highly optimized routines for maintaining them.
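As a sketch of what that looks like for the product table above (the column types are my assumptions), keeping product_id as the PK for relationships and adding a unique constraint over the row-defining columns:

CREATE TABLE product (
  product_id    INT UNSIGNED  NOT NULL AUTO_INCREMENT,
  name          VARCHAR(100)  NOT NULL,
  invoice_price DECIMAL(10,2) NOT NULL,
  msrp          DECIMAL(10,2) NOT NULL,
  PRIMARY KEY (product_id),                             -- narrow key for other tables to reference
  UNIQUE KEY uq_product_row (name, invoice_price, msrp) -- enforces row uniqueness
);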
Let's assume I have a very large database with tons of tables in it.
Some of these tables contain datasets that need to be connected to each other, like
table: album
table: artist
--> connected by table: album_artist
table: company
table: product
--> connected by table: company_product
The tables album_artist and company_product each contain 3 columns: a primary key, plus albumID/artistID or companyID/productID respectively...
Is it a good practice to do something like an "assoc" table which is made up like
----------------------------------------------------------
| id int(11) primary | leftID | assocType       | rightID |
----------------------------------------------------------
| 1                  | 10     | company:product | 4       |
| 2                  | 6      | company:product | 5       |
| 3                  | 4      | album:artist    | 10      |
----------------------------------------------------------
I'm not sure if this is the way to go, or if there's a better alternative to creating multiple connection tables.
No, it is not a good practice. It is a terrible practice, because referential integrity goes out the window. Referential integrity is the guarantee provided by the RDBMS that a foreign key in one row refers to a valid row in another table. In order for the database to be able to enforce referential integrity, each referring column must refer to one and only one referred column of one and only one referred table.
No, no, a thousand times no. Don't overthink your many-to-many relationships. Just keep them simple. There's nothing to gain and a lot to lose by trying to consolidate all your relationships in a single table.
If you have a many-to-many relationship between, say, guitarist and drummer, then you need a guitarist_drummer table with two columns in it: guitarist_id and drummer_id. That table's primary key should be composed of both columns. And you should have another index made of the two columns in the opposite order. Don't add a third column with an auto-incrementing id to those join tables. That's a waste, and it allows duplicated pairs in those tables, which is generally confusing.
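A sketch of that join table (assuming guitarist and drummer tables keyed by guitarist_id and drummer_id):

CREATE TABLE guitarist_drummer (
  guitarist_id INT NOT NULL,
  drummer_id   INT NOT NULL,
  PRIMARY KEY (guitarist_id, drummer_id),               -- no duplicate pairs possible
  KEY idx_drummer_guitarist (drummer_id, guitarist_id), -- fast lookups from the drummer side
  FOREIGN KEY (guitarist_id) REFERENCES guitarist (guitarist_id),
  FOREIGN KEY (drummer_id)   REFERENCES drummer (drummer_id)
);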
People who took the RDBMS class in school will immediately recognize how these tables work. That's good, because it means you don't have to be the only programmer on this project for the rest of your life.
Pro tip: Use the same column name everywhere. Make your guitarist table contain a primary key called guitarist_id rather than id. It makes your relationship tables easier to understand. And, if you use a reverse engineering tool like Sql Developer that tool will have an easier time with your schema.
The answer is that it "depends" on the situation. In your case and most others, no, it does not make sense. It does make sense if you have a many-to-many relationship; the constraints can be enforced by the link table with foreign keys and a unique constraint. Probably the best use case would be if you had numerous tables pointing to a single table. Each table could have a link table with indexes on it. This would be beneficial if one of the tables is a large table and you need to fetch the linked records separately.
We are having real technical trouble designing the primary keys for our new data-intensive project.
Please explain which PK design is better for our data-intensive database.
The database is data intensive and persistent.
At least 3000 users access it per second.
Please tell us which type of PK is technically better for our database; the tables are unlikely to change in the future.
1. INT/BIGINT auto-increment column as PK
2. Composite keys
3. Unique varchar PK
I would go for option 1, using a BIGINT auto-increment column as the PK. The reason is simple: each insert writes to the end of the current page, meaning inserting new rows is very fast. If you use a composite key, the rows must be stored in key order, and unless you insert in exactly that order, pages need to be split to make room. For example, imagine this table:
A | B | C
---+---+---
1 | 1 | 4
1 | 4 | 5
5 | 1 | 2
where the primary key is a composite key on (A, B, C). Suppose I want to insert (2, 2, 2); it would need to be inserted as follows:
A | B | C
---+---+---
1 | 1 | 4
1 | 4 | 5
2 | 2 | 2 <----
5 | 1 | 2
So that the clustered key maintains its order. If the page you are inserting into is already full, then MySQL will need to split the page, moving some of the data to a new page to make room for the new data. These page splits are quite costly, so unless you know you are inserting sequential data, using an auto-increment column as the clustering key means you should never have to split a page (as long as you don't mess around with the increments).
You could still add a unique index to the columns that would be the primary key to maintain integrity, you would still have the same problem with splits on the index, but since the index would be narrower than a clustered index the splits would be less frequent as more data will fit on a page.
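For example, a sketch of that layout using the A/B/C columns from the example above (the table and index names are made up):

CREATE TABLE example (
  id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
  a  INT NOT NULL,
  b  INT NOT NULL,
  c  INT NOT NULL,
  PRIMARY KEY (id),            -- clustered key: new rows always append to the last page
  UNIQUE KEY uq_abc (a, b, c)  -- narrower secondary index still enforces uniqueness
) ENGINE=InnoDB;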
More or less the same argument applies against a unique varchar column, unless you have some kind of process that ensures the varchar is sequential, but generating a sequential varchar is more costly than an autoincrement column, and I can see no immediate advantage.
This is not easy to answer.
To start with, using composite keys as primary keys is the straightforward way. IDs come in handy when the database structure changes.
Say you have products in different sizes sold in different countries. The primary key columns are listed first in each table below.
product (product_no, name, supplier_no, ...)
product_size (product_no, size, ean, measures, ...)
product_country (product_no, country_isocode, translated_name, ...)
product_size_country (product_no, size, country_isocode, vat, ...)
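A minimal DDL sketch of this natural-key variant (all column types are my assumptions):

CREATE TABLE product (
  product_no  VARCHAR(20)  NOT NULL,
  name        VARCHAR(100) NOT NULL,
  supplier_no VARCHAR(20),
  PRIMARY KEY (product_no)
);

CREATE TABLE product_size (
  product_no VARCHAR(20) NOT NULL,
  size       VARCHAR(10) NOT NULL,
  ean        CHAR(13),
  PRIMARY KEY (product_no, size),
  FOREIGN KEY (product_no) REFERENCES product (product_no)
);

CREATE TABLE product_country (
  product_no      VARCHAR(20)  NOT NULL,
  country_isocode CHAR(2)      NOT NULL,
  translated_name VARCHAR(100),
  PRIMARY KEY (product_no, country_isocode),
  FOREIGN KEY (product_no) REFERENCES product (product_no)
);

CREATE TABLE product_size_country (
  product_no      VARCHAR(20) NOT NULL,
  size            VARCHAR(10) NOT NULL,
  country_isocode CHAR(2)     NOT NULL,
  vat             DECIMAL(4,2),
  PRIMARY KEY (product_no, size, country_isocode),
  -- product_no is shared by both composite FKs, so the size row and the
  -- country row can never belong to different products
  FOREIGN KEY (product_no, size)            REFERENCES product_size    (product_no, size),
  FOREIGN KEY (product_no, country_isocode) REFERENCES product_country (product_no, country_isocode)
);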
It is very easy to write data, because you are dealing with natural keys, which is what users work with. The DBMS guarantees data consistency.
Now the same with technical IDs:
product (product_id, product_no, name, supplier_no, ...)
product_size (product_size_id, size, product_id, ean, measures, ...)
product_country (product_country_id, product_id, country_id, translated_name, ...)
product_size_country (product_size_country_id, product_size_id, country_id, vat, ...)
Getting the IDs is now an additional step needed when inserting data. And still you must ensure that product_no is unique. So the unique constraint on product_id doesn't replace the constraint on product_no, but adds to it. The same goes for product_size, product_country and product_size_country. Moreover, product_size_country may now link to a product_size and a product_country of different products. The DBMS cannot guarantee data consistency any longer.
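A sketch of the ID-based product table, showing that the natural-key constraint still has to be declared on top of the surrogate key (types are assumptions):

CREATE TABLE product (
  product_id  INT UNSIGNED NOT NULL AUTO_INCREMENT,
  product_no  VARCHAR(20)  NOT NULL,
  name        VARCHAR(100) NOT NULL,
  supplier_no VARCHAR(20),
  PRIMARY KEY (product_id),
  UNIQUE KEY uq_product_no (product_no) -- still needed; product_id alone doesn't enforce it
);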
However, natural keys have their weakness when changes to the database structure must be made. Let's say that a new company is introduced in the database and product numbers are only unique per company. With the ID-based database you would simply add a company ID to the products table and be done. In the natural-key-based database you would have to add the company to all primary keys. Much more work. (However, how often must such changes be made to a database? In many databases, never.)
What more is there to consider? When the database gets big, you might want to partition tables. With natural keys, you could partition your tables by said company, assuming that you will usually want to select data from one company or the other. With IDs, what would you partition the tables by to enhance access?
Well, both concepts certainly have pros and cons. As to your third option to create a unique varchar, I see no benefit in this over using integer IDs.
There are four regions with more than one million records total. Should I create one table with a region column, or a table for each region and combine them to get the top ranks?
If I combine all four regions, none of my columns will be unique, so I will need to add an id column for my primary key. Otherwise, name, accountId & characterId would be candidate keys; or should I just add an id column anyway?
Table:
----------------------------------------------------------------
| name | accountId | iconId | level | characterId | updateDate |
----------------------------------------------------------------
Edit:
Should I look into partitioning the table by region_id?
Because all records are related to a particular region, a single database table in 3NF (e.g. All-Regions) containing a regionId along with the other attributes should work.
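A sketch of that single table, partitioned by region as asked about in the edit (names, types, and the region values are my assumptions; note that in MySQL the partitioning column must be part of every unique key, including the primary key):

CREATE TABLE ranking (
  id          INT UNSIGNED     NOT NULL AUTO_INCREMENT,
  region_id   TINYINT UNSIGNED NOT NULL,
  name        VARCHAR(50)      NOT NULL,
  accountId   INT UNSIGNED     NOT NULL,
  iconId      INT UNSIGNED,
  level       INT UNSIGNED,
  characterId INT UNSIGNED     NOT NULL,
  updateDate  DATETIME         NOT NULL,
  PRIMARY KEY (id, region_id)  -- partition column must be included here
)
PARTITION BY LIST (region_id) (
  PARTITION p_region1 VALUES IN (1),
  PARTITION p_region2 VALUES IN (2),
  PARTITION p_region3 VALUES IN (3),
  PARTITION p_region4 VALUES IN (4)
);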
The correct answer, as usually with database design, is "It depends".
First of all, (IMHO) a good primary key should belong to the database, not to the users :)
So, if accountId and characterId are user-editable or prominently displayed to the user, they should not be used for the primary key of the table(s) anyway. And using name (or any other user-generated string) for a key is just asking for trouble.
As for the regions, try to divine how the records will be used.
Will most of the queries use only a single region, or will most of them use data across regions?
Is there a possibility that the schemas for different regions might diverge?
Will there be different usage scenarios for similar data? (e.g. different phone number patterns for different regions)
Bottom line, both approaches will work, let your data tell you which approach will be more manageable.
I've seen a lot of discussion regarding this. I'm just seeking your suggestions. Basically, what I'm using is PHP and MySQL. I have a users table which goes:
users
------------------------------
uid(pk) | username | password
------------------------------
12 | user1 | hashedpw
------------------------------
and another table which stores updates by the user
updates
--------------------------------------------
uid | date | content
--------------------------------------------
12 | 2011-11-17 08:21:01 | updated profile
12 | 2011-11-17 11:42:01 | created group
--------------------------------------------
The user's profile page will show the 5 most recent updates of a user. The questions are:
For the updates table, would it be possible to set both uid and date as a composite primary key, with uid referencing uid from users?
OR would it be better to just create another column in updates which auto-increments and is used as the primary key (while uid is an FK to uid in users)?
Your idea (under 1.) rests on the assumption that a user can never do two "updates" within one second. That is very poor design. You never know what functions you will implement in the future, but chances are that some day 1 click leads to 2 actions and therefore 2 lines in this table.
I say "updates" quoted because I see this more as a logging table. And who knows what you may want to log somewhere in the future.
As for unusual primary keys: don't do it, it almost always comes right back in your face and you have to do a lot of work to add a proper autoincremented key afterwards.
It depends on the requirement, but a third possibility is that you could make the key (uid, date, content). You could still add a surrogate key as well, but in that case you would presumably want to implement both keys, a composite and a surrogate, not just one. Don't make the mistake of thinking you have to make an either/or choice.
Whether it is useful to add the surrogate or not depends on how it's being used; don't add a surrogate unless or until you need it. In any case, I would assume uid to be a foreign key referencing the users table.
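A sketch of that both-keys arrangement (update_id is a made-up name, and the column types are assumptions):

CREATE TABLE updates (
  update_id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
  uid       INT UNSIGNED    NOT NULL,
  date      DATETIME        NOT NULL,
  content   VARCHAR(255)    NOT NULL,
  PRIMARY KEY (update_id),                              -- surrogate key
  UNIQUE KEY uq_uid_date_content (uid, date, content),  -- composite natural key
  FOREIGN KEY (uid) REFERENCES users (uid)
);

-- the composite index also serves the profile-page query:
SELECT date, content
FROM updates
WHERE uid = 12
ORDER BY date DESC
LIMIT 5;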
I have a table which contains two types of data, either for a Company or an Employee.
The rows are identified by either 'C' or 'E', plus a column storing the primary key of the referenced row.
So how can I define a foreign key depending on the data contained, and maintain referential integrity dynamically?
id | referenceid | documenttype
-------------------------------
1 | 12 | E
2 | 7 | C
Now the row with id 1 should reference the Employee table with pk 12, and the row with id 2 should reference the Company table with pk 7.
Otherwise I have to make two different tables for both.
Is there any other way to accomplish this?
If you really want to do this, you can have two nullable columns, one for CompanyId and one for EmployeeId, that act as foreign keys.
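A sketch of that shape (the document table name and the master tables' pk column names are my assumptions; MySQL only enforces CHECK constraints from 8.0.16):

CREATE TABLE document (
  id          INT UNSIGNED NOT NULL AUTO_INCREMENT,
  company_id  INT UNSIGNED NULL,
  employee_id INT UNSIGNED NULL,
  PRIMARY KEY (id),
  FOREIGN KEY (company_id)  REFERENCES company  (id),
  FOREIGN KEY (employee_id) REFERENCES employee (id),
  -- exactly one of the two references must be set
  CHECK ((company_id IS NULL) <> (employee_id IS NULL))
);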
But I would rather you review the database schema design.
It would be better to normalize the table, creating separate tables for Company and Employee. You would also get better performance after normalization. Since Company and Employee are separate entities, it's better not to overlap them.
Personally, I would go with the two-table option.
Employee and Company seem distinct enough that I would not want to store their data together.
That also makes the foreign key references straightforward.
However, if you do still want to store it in one table, one way of maintaining referential integrity would be through a trigger.
Have an INSERT/UPDATE trigger that checks for the appropriate value in the Company master or Employee master table, depending on the value of the column containing 'C'/'E'.
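A sketch of such a BEFORE INSERT trigger (assuming the question's table is called document with the referenceid and documenttype columns shown earlier, and that the company and employee master tables are keyed by id; a matching BEFORE UPDATE trigger would also be needed):

DELIMITER //
CREATE TRIGGER trg_document_check_ref
BEFORE INSERT ON document
FOR EACH ROW
BEGIN
  IF NEW.documenttype = 'C' THEN
    IF NOT EXISTS (SELECT 1 FROM company WHERE id = NEW.referenceid) THEN
      SIGNAL SQLSTATE '45000' SET MESSAGE_TEXT = 'referenceid not found in company';
    END IF;
  ELSEIF NEW.documenttype = 'E' THEN
    IF NOT EXISTS (SELECT 1 FROM employee WHERE id = NEW.referenceid) THEN
      SIGNAL SQLSTATE '45000' SET MESSAGE_TEXT = 'referenceid not found in employee';
    END IF;
  END IF;
END //
DELIMITER ;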
Personally, I would prefer to avoid such logic, as triggers are notoriously hard to debug.