We are having real trouble designing the primary keys for our new data-intensive project.
Please explain which PK design is better for a data-intensive database.
The database is data intensive and persistent.
At least 3000 users access it per second.
Please tell us, technically, which type of PK is better for our database. The tables are unlikely to change in the future.
1. INT/BIGINT auto-increment column as PK.
2. Composite keys.
3. Unique varchar PK.
I would go for option 1, using a BIGINT auto-increment column as the PK. The reason is simple: each insert writes to the end of the current page, which makes inserting new rows very fast. If you use a composite key, the rows must be kept in the order of that key, and unless you are inserting in that order, pages must be split to make room. For example, imagine this table:
A | B | C
---+---+---
1 | 1 | 4
1 | 4 | 5
5 | 1 | 2
where the primary key is a composite key on (A, B, C). Suppose I want to insert (2, 2, 2); it would need to be inserted as follows:
A | B | C
---+---+---
1 | 1 | 4
1 | 4 | 5
2 | 2 | 2 <----
5 | 1 | 2
so that the clustered key maintains its order. If the page you are inserting into is already full, MySQL will need to split it, moving some of the data to a new page to make room for the new row. These page splits are quite costly, so unless you know you are inserting sequential data, using an auto-increment column as the clustering key means that (unless you mess around with the increments) you should never have to split a page.
You could still add a unique index on the columns that would otherwise have been the primary key, to maintain integrity. You would still have the same split problem on that index, but since a secondary index is narrower than the clustered index, more entries fit on a page and the splits would be less frequent.
More or less the same argument applies against a unique varchar column, unless you have some kind of process that ensures the varchar is sequential; but generating a sequential varchar is more costly than an auto-increment column, and I can see no immediate advantage.
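As a minimal sketch of this approach (table and column names are hypothetical), the auto-increment BIGINT is the clustered PK, and the natural key is enforced by a narrower secondary unique index:

create table orders (
    order_id    bigint not null auto_increment,
    customer_no int not null,
    order_no    int not null,
    placed_at   datetime not null,
    primary key (order_id),                               -- clustered; inserts append to the last page
    unique key uq_customer_order (customer_no, order_no)  -- integrity of the natural key
) engine=innodb;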
This is not easy to answer.
To start with, using composite keys as primary keys is the straightforward way. Surrogate IDs come in handy when the database structure changes.
Say you have products in different sizes sold in different countries. The primary key columns are listed first in each table.
product (product_no, name, supplier_no, ...)
product_size (product_no, size, ean, measures, ...)
product_country (product_no, country_isocode, translated_name, ...)
product_size_country (product_no, size, country_isocode, vat, ...)
It is very easy to write data, because you are dealing with natural keys, which is what users work with. The DBMS guarantees data consistency.
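As a hedged DDL sketch of those tables (types and lengths assumed), the natural-key version chains product_no through every level, which is exactly what lets the DBMS enforce consistency:

create table product (
    product_no  int not null,
    name        varchar(100) not null,
    supplier_no int not null,
    primary key (product_no)
);

create table product_size (
    product_no int not null,
    size       varchar(10) not null,
    ean        char(13),
    measures   varchar(50),
    primary key (product_no, size),
    foreign key (product_no) references product (product_no)
);

create table product_country (
    product_no      int not null,
    country_isocode char(2) not null,
    translated_name varchar(100),
    primary key (product_no, country_isocode),
    foreign key (product_no) references product (product_no)
);

create table product_size_country (
    product_no      int not null,
    size            varchar(10) not null,
    country_isocode char(2) not null,
    vat             decimal(5,2),
    primary key (product_no, size, country_isocode),
    -- these composite FKs are what make inconsistent combinations impossible
    foreign key (product_no, size)
        references product_size (product_no, size),
    foreign key (product_no, country_isocode)
        references product_country (product_no, country_isocode)
);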
Now the same with technical IDs:
product (product_id, product_no, name, supplier_no, ...)
product_size (product_size_id, size, product_id, ean, measures, ...)
product_country (product_country_id, product_id, country_id, translated_name, ...)
product_size_country (product_size_country_id, product_size_id, country_id, vat, ...)
Getting the IDs is now an additional step when inserting data. And you must still ensure that product_no is unique, so the unique constraint on product_id doesn't replace the constraint on product_no; it adds to it. The same goes for product_size, product_country and product_size_country. Moreover, product_size_country may now combine a product_size and a country that do not match any product_country row of the same product. The DBMS cannot guarantee data consistency any longer.
However, natural keys have their weakness when changes to the database structure must be made. Let's say a new company is introduced in the database and product numbers are only unique per company. With the ID-based database you would simply add a company ID to the product table and be done. In the natural-key-based database you would have to add the company to all primary keys: much more work. (However, how often must such changes be made to a database? In many databases, never.)
What more is there to consider? When the database gets big, you might want to partition tables. With natural keys, you could partition your tables by said company, assuming that you will usually want to select data from one company or the other. With IDs, what would you partition the tables by to improve access?
Well, both concepts certainly have their pros and cons. As for your third option, a unique varchar, I see no benefit in it over integer IDs.
There are four regions with more than one million records in total. Should I create one table with a region column, or a table for each region and combine them to get the top ranks?
If I combine all four regions, none of my columns will be unique, so I will need to add an id column for my primary key. Otherwise, name, accountId and characterId would be candidate keys. Or should I just add an id column anyway?
Table:
----------------------------------------------------------------
| name | accountId | iconId | level | characterId | updateDate |
----------------------------------------------------------------
Edit:
Should I look into partitioning the table by region_id?
Because all records are related to a particular region, a single table in 3NF (e.g. All-Regions) containing a regionId along with the other attributes should work.
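For illustration, a minimal sketch of that combined table (the table name, types, and the choice of (region_id, account_id, character_id) as the key are assumptions):

create table all_regions (
    region_id    tinyint not null,
    account_id   bigint not null,
    character_id bigint not null,
    name         varchar(50) not null,
    icon_id      int not null,
    level        int not null,
    update_date  datetime not null,
    primary key (region_id, account_id, character_id)
);

Since region_id leads the primary key, the table could later be partitioned by region if most queries stay within a single region.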
The correct answer, as usually with database design, is "It depends".
First of all, (IMHO) a good primary key should belong to the database, not to the users :)
So, if accountId and characterId are user-editable or prominently displayed to the user, they should not be used for the primary key of the table(s) anyway. And using name (or any other user-generated string) for a key is just asking for trouble.
As for the regions, try to divine how the records will be used.
Will most of the queries use only a single region, or will most of them use data across regions?
Is there a possibility that the schemas for different regions might diverge?
Will there be different usage scenarios for similar data? (e.g. different phone number patterns for different regions)
Bottom line: both approaches will work. Let your data tell you which approach will be more manageable.
I am in the process of creating a second version of my technical wiki site and one of the things I want to improve is the database design. The problem (or so I think) is that to display each document, I need to join upwards of 15 tables. I have a bunch of lookup tables that contain descriptive data associated with each wiki entry such as programmer used, cpu, tags, peripherals, PCB layout software, difficulty level, etc.
Here is an example of the layout:
doc
--------------
id | author_id | doc_type_id .....
1 | 8 | 1
2 | 11 | 3
3 | 13 | 3
_
lookup_programmer
--------------
doc_id | programmer_id
1 | 1
1 | 3
2 | 2
_
programmer
--------------
programmer_id | programmer
1 | USBtinyISP
2 | PICkit
3 | .....
Since some doc IDs may have multiple entries for a single attribute (such as programmer), I have designed the DB to accommodate this. The other 10 attributes have a layout similar to the 2 programmer tables above. To display a single document article, approximately 20 tables are joined.
I use the Sphinx search engine for finding articles with certain characteristics. Essentially, Sphinx indexes all of the data (it does not store it) and returns the wiki doc IDs of interest based on the filters presented. If I want to find articles that use a certain programmer and then sort by date, MySQL has to first join ALL documents with the 2 programmer tables, then filter, and finally sort the remainder by insert time. No index can help me order the filtered results (it takes a LONG time with 150k doc IDs) since this is done in a temporary table. As you can imagine, it gets worse very quickly the more parameters there are to filter.
The fact that I have to rely on Sphinx to return, say, all wiki entries that use a certain CPU AND programmer leads me to believe that there is a DB smell in my current setup...
Edit: It looks like I have implemented an Entity–attribute–value model.
I don't see anything here that suggests you've implemented EAV. Instead, it looks like you've assigned every row in every table an ID number. That's a guaranteed way to increase the number of joins, and it has nothing to do with normalization. (There is no "I've now added an id number" normal form.)
Pick one lookup table. (I'll use "programmer" in my example.) Don't build it like this.
create table programmer (
    programmer_id integer not null auto_increment,
    programmer varchar(20) not null,
    primary key (programmer_id),
    unique key (programmer)
);
Instead, build it like this.
create table programmer (
programmer varchar(20) not null,
primary key (programmer)
);
And in the tables that reference it, consider cascading updates and deletes.
create table lookup_programmer (
doc_id integer not null,
programmer varchar(20) not null,
primary key (doc_id, programmer),
foreign key (doc_id) references doc (id)
on delete cascade,
foreign key (programmer) references programmer (programmer)
on update cascade on delete cascade
);
What have you gained? You keep all the data integrity that foreign key references give you, your rows are more readable, and you've eliminated a join. Build all your "lookup" tables that way, and you eliminate one join per lookup table. (And unless you have many millions of rows, you're probably not likely to see any degradation in performance.)
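For example, a hypothetical query for all docs that use a given programmer now needs only one join and never touches the programmer table:

-- the value lives in lookup_programmer itself, so no join to programmer is needed
select d.*
from doc as d
join lookup_programmer as lp on lp.doc_id = d.id
where lp.programmer = 'USBtinyISP';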
I've created a database with three tables in it:
Restaurant
restaurant_id (autoincrement, PK)
Owner
owner_id (autoincrement, PK)
restaurant_id (FK to Restaurant)
Deal
deal_id (autoincrement)
owner_id (FK to Owner)
restaurant_id (FK to Restaurant)
(PK: deal_id, owner_id, restaurant_id)
There can be many owners for each restaurant. I chose two foreign keys for Deal so I can reference the deal by either the owner or the restaurant. The deal table would have three primary keys, two being foreign keys. And it would have two one-to-many relationships pointing to it. All of my foreign keys are primary keys and I don't know if I'll regret doing it like this later on down the road. Does this design make sense, and seem good for what I'm trying to achieve?
Edit: What I really need to accomplish here is this: when an owner is logged in and viewing their account, I want them to be able to see and edit all the deals associated with that particular restaurant. And because there can be more than one owner per restaurant, I need to be able to perform a query something like: select * from deals where restaurant_id = restaurant_id. In other words, if I'm an owner and I'm logged in, I need to be able to query: get all of the deals that are related not just to me, the owner, but to all of the owners associated with this restaurant.
You're having some trouble with terminology.
A table can only ever have one primary key. It is not possible to create a table with two different primary keys. You can create a table with two different unique indexes (which are much like a primary key), but only one primary key can exist.
What you're asking about is whether you should have a composite or compound primary key; a primary key using more than one column.
Your design is okay, but as written you probably have no need for the column deal_id. It seems to me that restaurant_id and owner_id together are enough to uniquely identify a row in Deal. (This may not be true if one owner can have two different ownership stakes in a single restaurant as the result of recapitalization or buying out another owner, but you don't mention anything like that in your problem statement).
In this case, deal_id is largely wasted storage. There might be an argument to be made for using the deal_id column if you have many tables that have foreign keys pointing to Deal, or if you have instances in which you want to display to the user Deals for multiple restaurants and owners at the same time.
If one of those arguments sways you to adopt the deal_id column, then it, and only it, should be the primary key. There would be nothing added by including the other two columns since the autoincrement value itself would be unique.
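For illustration, a minimal sketch of the composite-key alternative (dropping deal_id; types and the payload column are assumptions, and it assumes one deal per owner/restaurant pair, as described above):

create table deal (
    owner_id      int not null,
    restaurant_id int not null,
    description   varchar(255),  -- hypothetical payload column
    primary key (owner_id, restaurant_id),
    foreign key (owner_id) references owner (owner_id),
    foreign key (restaurant_id) references restaurant (restaurant_id)
);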
If you have a unique field, that should be the PK; here, that would be the auto-incremented field.
In this specific case, adding more fields to the key gains you nothing at all; it actually hurts performance somewhat (don't ask me how much; benchmark it).
If you create two foreign keys in the deal table, one to the restaurant and one to the owner, the logic implies that a deal could exist without an owner, or with an owner but without identifying the restaurant. But you can already identify the restaurant through the owner, because the restaurant is a foreign key on the owner table. So if you are going to fill in values for both columns, I think it is going to be redundant. I'm not sure how you will use the deal table later on, but from its name it sounds like it is used to identify whether a restaurant table is reserved or not by a customer, and with your current design you can already identify the reserved table through the owner table, even without a restaurant foreign key in the deal table. You just have to be wise about defining relationships between your tables and avoid redundancy as much as possible. :)
I think it is not best.
First of all, the Deal table PK should be deal_id. There is no reason to add additional columns to it; if you wanted to refer to the deal_id in another table, you'd have to include the restaurant_id and owner_id too, which is not good. Whether deal_id should also be the clustered index (a.k.a. index-organized on this column) depends on the data access pattern. Will your database be full of deal_id values most often used for lookup, or will you primarily be looking deals up by owner_id or restaurant_id?
Also, using two separate FKs the way you have described (as far as I can tell!) would allow a deal to have an owner and restaurant combination that is not valid (combining an owner that does not belong to that restaurant). In the Deal table, instead of one FK to Owner and one FK to Restaurant, if you must have both columns, there should be a single composite FK to the Owner table on (OwnerID, RestaurantID), with a corresponding unique key in the Owner table to allow this link-up.
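A minimal sketch of that composite FK (column names assumed):

-- the unique key on owner lets deal reference the pair, so a deal
-- can only combine an owner with the restaurant they belong to
alter table owner
    add unique key uq_owner_restaurant (owner_id, restaurant_id);

alter table deal
    add constraint fk_deal_owner_restaurant
    foreign key (owner_id, restaurant_id)
    references owner (owner_id, restaurant_id);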
However, with such a simple table structure I don't really see the problem in leaving RestaurantID out of the Deal table, since the OwnerID always fully implies the RestaurantID. Obviously your deals cannot be linked only with the restaurant, because that would imply a 1:M relationship on Deal:Owner. The cost of searching based on Restaurant through the Owner table shouldn't really be that bad.
It's not wrong; it works. But it's not recommended.
Auto-increment primary keys work without foreign keys (or master keys).
In some databases, you cannot use several fields as a single primary key.
Compound (composite) primary keys are more difficult to handle in a query.
Compound Primary Key Query Example:
SELECT
  D.*
FROM
  Restaurant AS R,
  Owner AS O,
  Deal AS D
WHERE
  (D.RestaurantKey = R.RestaurantKey) AND
  (D.RestaurantKey = O.RestaurantKey) AND
  (D.OwnerKey = O.OwnerKey)
Versus
Single Primary Key Query Example:
SELECT
  D.*
FROM
  Owner AS O,
  Deal AS D
WHERE
  (D.OwnerKey = O.OwnerKey)
Sometimes you have to change a record's foreign key to point to another record. For example, your customers have already ordered and the deal record is registered, and they decide to change from one restaurant table to another. So the data must be updated in the "Owner" and "Deal" tables.
Owner:
+-----------+-------------+
| OwnerKey  | OwnerName   |
+-----------+-------------+
| 1         | Anne Smith  |
+-----------+-------------+
| 2         | John Connor |
+-----------+-------------+
| 3         | Mike Doe    |
+-----------+-------------+

Deal:
+-----------+-------------+-------------+
| OwnerKey  | DealKey     | Food        |
+-----------+-------------+-------------+
| 1         | 1           | Hamburger   |
+-----------+-------------+-------------+
| 2         | 2           | Hot-Dog     |
+-----------+-------------+-------------+
| 3         | 3           | Hamburger   |
+-----------+-------------+-------------+
| 1         | 3           | Soda        |
+-----------+-------------+-------------+
| 2         | 1           | Apple Pie   |
+-----------+-------------+-------------+
| 3         | 4           | Chips       |
+-----------+-------------+-------------+
If you use compound primary keys, you have to create a new "Owner" record and new "Deal" records, copy the other fields, and delete the previous records.
If you use single keys, you just change the foreign key of the record, without inserting or deleting records.
Cheers.
Say I have the following table:
TABLE: product
============================================================
| product_id | name | invoice_price | msrp |
------------------------------------------------------------
| 1 | Widget 1 | 10.00 | 15.00 |
------------------------------------------------------------
| 2 | Widget 2 | 8.00 | 12.00 |
------------------------------------------------------------
In this model, product_id is the PK and is referenced by a number of other tables.
I have a requirement that each row be unique. In the example above, a row is defined by the name, invoice_price, and msrp columns. (Different tables may have varying definitions of which columns define a "row".)
QUESTIONS:
In the example above, should I make name, invoice_price, and msrp a composite key to guarantee uniqueness of each row?
If the answer to #1 is "yes", this would mean that the current PK, product_id, would not be defined as a key; rather, it would be just an auto-incrementing column. Would that be enough for other tables to use to create relationships to specific rows in the product table?
Note that in some cases, the table may have 10 or more columns that need to be unique. That'll be a lot of columns defining a composite key! Is that a bad thing?
I'm trying to decide whether I should enforce such uniqueness in the database tier or the application tier. I feel I should do this at the database level, but I am concerned that there may be unintended side effects of using a non-key as a FK or of having so many columns define a composite key.
When you have a lot of columns across which you need a unique key, create your own "key" using the data from the columns as the source. This would mean creating the key in the application layer, but the database would still "enforce" the uniqueness. A simple method is to use the md5 hash of all the data that defines the record as your unique key. Then you have just a single piece of data to use in relations.
md5 is not guaranteed to be unique, but it may be good enough for your needs.
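One way to sketch this so the database computes and enforces the hash itself (a generated column, MySQL 5.7+; the row_hash column name is an assumption, and the hash could equally be computed in the application as described above):

create table product (
    product_id    bigint not null auto_increment,
    name          varchar(100) not null,
    invoice_price decimal(10,2) not null,
    msrp          decimal(10,2) not null,
    -- hash of the columns that define a "row"; the unique key enforces it
    row_hash      char(32) as (md5(concat_ws('|', name, invoice_price, msrp))) stored,
    primary key (product_id),
    unique key uq_row_hash (row_hash)
);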
First off, your intuition to do it in the DB layer is correct if you can do it easily. This means even if your application logic changes, your DB constraints are still valid, lowering the chance of bugs.
But, are you sure you want uniqueness on that? I could easily see the same widget having different prices, say for sale items or what not.
I would recommend against enforcing uniqueness unless there's a real reason to.
You might have something like this (obviously, don't use * in production code):
# get the lowest price for an item that's currently active
select *
from product p
where p.name = 'widget 1'  # a non-primary index on product.name would be advised
  and p.active
order by sale_price asc
limit 1;
You can define composite primary keys and also unique indexes. As long as your requirement is met, defining composite unique keys is not bad design. Clearly, the more columns you add, the slower updating and searching the keys becomes, but if the business requirement needs this, I don't think it is a negative, as databases have very optimized routines to handle it.
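As a minimal sketch (types assumed): keep product_id as the PK that other tables reference, and let a composite unique key guarantee row uniqueness:

create table product (
    product_id    bigint not null auto_increment,
    name          varchar(100) not null,
    invoice_price decimal(10,2) not null,
    msrp          decimal(10,2) not null,
    primary key (product_id),                           -- what other tables reference
    unique key uq_product (name, invoice_price, msrp)   -- enforces row uniqueness
);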
I have a table which contains two types of data, for either a Company or an Employee.
The data is identified by either 'C' or 'E', and a column stores the primary key of the referenced row.
How can I define a foreign key depending on the data contained, and maintain referential integrity dynamically?
id | referenceid | documenttype
-------------------------------
1 | 12 | E
2 | 7 | C
Now the row with id 1 should reference the Employee table with PK 12, and the row with id 2 should reference the Company table with PK 7.
Otherwise I have to make two different tables for the two types.
Is there any other way to accomplish this?
If you really want to do this, you can have two nullable columns, one for CompanyId and one for EmployeeId, that act as foreign keys.
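A minimal sketch of that shape (table and column names assumed); the CHECK constraint, which MySQL enforces from 8.0.16 on, forces exactly one of the two references to be set:

create table document (
    id          int not null auto_increment,
    company_id  int null,
    employee_id int null,
    primary key (id),
    foreign key (company_id) references company (company_id),
    foreign key (employee_id) references employee (employee_id),
    check ((company_id is null) <> (employee_id is null))  -- exactly one reference set
);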
But I would rather you try to review the database schema design.
It would be better to normalize the design, creating separate tables for the Company and Employee references. You would also get better performance after normalization. Since Company and Employee are separate entities, it's better not to overlap them.
Personally, I would go with the two-different-tables option.
Employee and Company seem distinct enough that I would not want to store their data together.
That also makes the foreign key references straightforward.
However, if you still want to store it all in one table, one way of maintaining the referential integrity would be through a trigger.
Have an insert/update trigger that checks for the appropriate value in the Company master or Employee master table, depending on the value of the column containing 'C'/'E', as sketched below.
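A hedged sketch of such a trigger (MySQL syntax; table and column names assumed):

delimiter //
create trigger trg_document_ref_check
before insert on document
for each row
begin
    -- check the master table that matches the type flag
    if new.documenttype = 'E' then
        if not exists (select 1 from employee where employee_id = new.referenceid) then
            signal sqlstate '45000' set message_text = 'referenceid not found in employee';
        end if;
    elseif new.documenttype = 'C' then
        if not exists (select 1 from company where company_id = new.referenceid) then
            signal sqlstate '45000' set message_text = 'referenceid not found in company';
        end if;
    end if;
end//
delimiter ;

An equivalent BEFORE UPDATE trigger would be needed to cover updates as well.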
Personally, I would prefer avoiding such logic, as triggers are notoriously hard to debug.