I have two designs in mind and wanted to check which one you think is better.
So I have three tables offer, offer_type and offer_type_filter.
Original Design of tables
offer
id int(10) unsigned
code varchar(48)
offer_type_id int(10) unsigned
start_date datetime
exp_date datetime
value int(10)
updated timestamp
created datetime
offer_type
id int(10) unsigned
name varchar(48)
condition varchar(512)
offer_type_filter
id int(10) unsigned
filter_type varchar(20)
filter_value varchar(50)
offer_type_id int(10) unsigned
Now, as you may have guessed, an offer has a type, and the filter specifies the specific cases in which the offer applies. In case you are wondering, offer_type.condition is mainly for things like "$20 off on a purchase of at least $300". offer_type_filter is there to apply such an offer only to, say, McDonald's. An offer can exist without filters.
One problem with the current design is that every time I create a new offer, even though the type is the same, I have to create a duplicate entry in offer_type and then use that type in offer_type_filter (reusing the existing type would mess up existing offers).
So, in terms of database redesign, it is quite obvious that offer_type_id must not live in offer_type_filter, so I am convinced it has to change to something like this.
Redesign (doing away with offer_type_filter and creating a new table, filter; it's basically a rename to something more appropriate)
Filter
id int(10) unsigned
filter_type varchar(20)
filter_value varchar(50)
filter_type_set_id int(10) unsigned
For other tables I am thinking of these two options
Option 1 (filter table from the redesign + the other tables as in the original design)
offer
id int(10) unsigned
code varchar(48)
offer_type_filter_mapping_id int(10) unsigned
offer_type_filter_mapping
id int(10) unsigned
filter_type_set_id int(10) unsigned (from the filter table)
offer_type_id int(10) unsigned
If I choose the first design, I will have redundant entries in offer_type_filter_mapping. For offers without filters, offer_type_filter_mapping will have entries with an offer_type_id and a NULL filter_type_set_id. Also, for each type I create, I will have to put an entry in the mapping table, so I don't like this aspect of the design.
Option 2 (filter table from the redesign + the other tables as in the original design)
offer
id int(10) unsigned
code varchar(48)
filter_type_set_id int(10) unsigned (from the filter table)
The only downside I see to Option 2 is that there is a redundant filter_type_set_id for each offer, and in my case the offer table is huge.
I wanted your critique on which design you think is the least painful. Frequent use cases: creating lots of offers, with and without filters. We already have close to 40-50 offer types. The types table cannot cover every scenario, so we do create new types about 10% of the time.
Also, I use Spring and Hibernate, so you can think about my design constraints from that perspective too.
P.S. You might even add that in MySQL it is not convenient to generate two IDs per table, as in offer_type_filter, but I am thinking about that. I would probably use a dummy table for generation or an externally generated ID.
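To make Option 2 concrete, here is a minimal runnable sketch (SQLite is used purely for illustration; the real schema would use the MySQL types listed above, and the sample offer/filter rows are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Option 2: offer points straight at a filter set; filters sharing a
# filter_type_set_id form one reusable set, and an offer without
# filters simply leaves filter_type_set_id NULL.
cur.executescript("""
CREATE TABLE offer_type (
    id        INTEGER PRIMARY KEY,
    name      TEXT,
    condition TEXT
);
CREATE TABLE "filter" (
    id                 INTEGER PRIMARY KEY,
    filter_type        TEXT,
    filter_value       TEXT,
    filter_type_set_id INTEGER
);
CREATE TABLE offer (
    id                 INTEGER PRIMARY KEY,
    code               TEXT,
    offer_type_id      INTEGER REFERENCES offer_type(id),
    filter_type_set_id INTEGER   -- NULL means "no filters"
);
""")
cur.execute("INSERT INTO offer_type VALUES (1, '20 off 300', 'min purchase 300')")
cur.execute("INSERT INTO \"filter\" VALUES (1, 'merchant', 'McDonalds', 10)")
cur.execute("INSERT INTO offer VALUES (1, 'MCD20', 1, 10)")    # filtered offer
cur.execute("INSERT INTO offer VALUES (2, 'ANY20', 1, NULL)")  # offer without filters

# LEFT JOIN keeps filterless offers in the result.
rows = cur.execute("""
    SELECT o.code, f.filter_type, f.filter_value
    FROM offer o
    LEFT JOIN "filter" f ON f.filter_type_set_id = o.filter_type_set_id
    ORDER BY o.id
""").fetchall()
print(rows)  # [('MCD20', 'merchant', 'McDonalds'), ('ANY20', None, None)]
```

The same query shape would apply in MySQL; only the column types and quoting differ.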
I see it this way: one offer can have only one offer_type_filter, so it makes a 1:N relationship,
and offer will take the offer_type attributes that you had before.
Otherwise, the cardinality is N:M.
EDIT:
For example, if you have in offer_type_filter:
offer_type_filter_id = 1 and it's 30% off.
offer_type_filter_id = 2 and it's 10% off.
offer_type_filter_id = 3 and it's 0% off.
...
etc
and in your offer table you can have:
offer_id=1 and offer_filter_id=1 //this means that product 1 has 30% off
offer_id=2 and offer_filter_id=1 //this means that product 2 has 30% off
offer_id=3 and offer_filter_id=2 //this means that product 3 has 10% off
offer_id=4 and offer_filter_id=3 //this means that product 4 has 0% off
...
etc
If your cardinality is that one offer can have only one offer type, the first design is right.
If your cardinality is that one offer can have multiple discounts and the same discount can apply to multiple products, I recommend the second design.
I am trying to create a MySQL table that has a generic ID column, but also a secondary ID column, both of which need some form of auto-incrementing.
Currently my MySQL table looks like this:
`ban_id` mediumint unsigned NOT NULL AUTO_INCREMENT,
`student_uuid` varchar(36) NOT NULL,
`student_ban_id` tinyint unsigned NOT NULL AUTO_INCREMENT,
(a bunch of data irrelevant to this question)
PRIMARY KEY (`student_uuid`, `student_ban_id`),
UNIQUE (`ban_id`)
The desired behavior is that ban_id is just a generic entry_id and that student_ban_id is the ban's number for the given student. (my reasoning is that I want to be able to reference bans by an id value if the student_uuid is unavailable, but the program spec also requires the ability to take student:banID as a valid means of reference)
An example row might be BanID:501, {studentUUID}, studentBanID:2 (the 501st ban overall, the 2nd ban against the given student).
I have run into the issue that the MyISAM engine does not support tracking two separate auto-incrementing columns at once (I believe it can handle either desired behavior, but not both at the same time).
What might be the best way to achieve such a behavior?
Much appreciated!
-Cryptic
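For what it's worth, MyISAM does allow AUTO_INCREMENT on the second column of a composite index (it generates MAX+1 within the prefix group), but a table can have only one AUTO_INCREMENT column, so ban_id and student_ban_id cannot both use it. One portable workaround is to keep ban_id as the real AUTO_INCREMENT and compute the per-student counter at insert time. A sketch (SQLite for illustration; under MySQL/InnoDB the insert should run inside a transaction, or take a row lock, to be race-free):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE ban (
    ban_id         INTEGER PRIMARY KEY AUTOINCREMENT,  -- global entry id
    student_uuid   TEXT NOT NULL,
    student_ban_id INTEGER NOT NULL,                   -- per-student counter
    UNIQUE (student_uuid, student_ban_id)
);
""")

def add_ban(student_uuid):
    # Next per-student id computed at insert time; under InnoDB this
    # SELECT needs a transaction or a FOR UPDATE lock to be race-free.
    cur.execute("""
        INSERT INTO ban (student_uuid, student_ban_id)
        SELECT ?, COALESCE(MAX(student_ban_id), 0) + 1
        FROM ban WHERE student_uuid = ?
    """, (student_uuid, student_uuid))

add_ban("alice")
add_ban("alice")
add_ban("bob")
rows = cur.execute(
    "SELECT ban_id, student_uuid, student_ban_id FROM ban ORDER BY ban_id"
).fetchall()
print(rows)  # [(1, 'alice', 1), (2, 'alice', 2), (3, 'bob', 1)]
```

The UNIQUE constraint on (student_uuid, student_ban_id) means a race would fail loudly rather than silently duplicate a counter value.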
MySQL and SQL users, this question relates to both of you. It's about indexing. I have this table structure for a classified-ads website: one common table to store the title, description, the user who posted, etc. I also have this table structure to store detailed attributes for a particular ad category.
CREATE TABLE `ad_detail` (
`ad_detail_id_pk` int(10) NOT NULL AUTO_INCREMENT,
`header_id_fk` int(10) NOT NULL,
`brand_id_fk` smallint(5) NULL,
`brand_name` varchar(200) NULL,
`is_brand_new` bool,
.......
`transmission_type_id_fk` tinyint(3) NULL,
`transmission_type_name` varchar(200) NULL,
`body_type_id_fk` tinyint(3) unsigned NULL,
`body_type_name` varchar(200) NULL,
`mileage` double NULL,
`fuel_type_id_fk` tinyint(3) NULL,
......
PRIMARY KEY (`ad_detail_id_pk`)
)
So, as you can see, the first part of the attributes belongs to mobile ads and the second part belongs to vehicle ads, and so on; I have other attributes for other categories. header_id_fk holds the relationship to the header table, which has the common information. All of these foreign keys are somewhat involved in filtering ads. Some users may want to find all the mobile phones made by Nokia, in which case brand_id_fk will be used. Some may want to filter vehicles by fuel type. So, as you can see, I need to index every filtering attribute in this table. Now, this is my question.
When a user posts a mobile ad, the insert statement will contain a certain number of fields. As we all know, an index improves data retrieval but adds cost to insert and update queries. So if I insert a mobile ad, will that insert query suffer from the index fields of the attributes that are only relevant to vehicle ads?
Yes, normal indexes contain one row for every row in the table (unless you use Oracle, which omits all-NULL keys: http://use-the-index-luke.com/sql/where-clause/null). Therefore every index will have a new row inserted every time you insert a row into the table, with the associated index-maintenance costs (page splits etc.).
You could create a filtered/partial index which excludes NULLs. That would solve the particular issue of INSERT performance being slowed down by indexes on fields into which you're inserting NULL, but you would need to test the solution thoroughly to make sure that the indexes are still being used by the queries you expect them to be used by. Note that MySQL does not support partial indexes, AFAIK; the following is for SQL Server.
Create Index ix_myFilteredIndex On ad_detail (brand_id_fk) Where brand_id_fk Is Not Null;
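SQLite and PostgreSQL accept essentially the same partial-index syntax, so the idea can be sketched and tested locally (illustrative trimmed-down table, not the full ad_detail schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE ad_detail (
    ad_detail_id_pk INTEGER PRIMARY KEY,
    brand_id_fk     INTEGER,   -- NULL for non-mobile ads
    fuel_type_id_fk INTEGER    -- NULL for non-vehicle ads
);
-- Partial index: rows whose brand_id_fk is NULL never enter the index,
-- so inserting a vehicle ad costs nothing in this mobile-ad index.
CREATE INDEX ix_brand ON ad_detail (brand_id_fk)
    WHERE brand_id_fk IS NOT NULL;
""")
cur.execute("INSERT INTO ad_detail VALUES (1, 42, NULL)")  # mobile ad
cur.execute("INSERT INTO ad_detail VALUES (2, NULL, 7)")   # vehicle ad

# brand_id_fk = 42 implies brand_id_fk IS NOT NULL, so the planner
# can use the partial index for this query.
plan = cur.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM ad_detail WHERE brand_id_fk = 42"
).fetchall()
print(plan)
```

In MySQL, where this is unavailable, the usual alternatives are accepting the maintenance cost or splitting the category-specific attributes into per-category tables.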
I have a database design like this (I am using MySQL).
I have a vehicle table to store information about a vehicle:
CREATE TABLE `test`.`vehicle` (
`vehicle_id` BIGINT UNSIGNED NOT NULL,
`fuel_type_id_ref` TINYINT UNSIGNED NULL DEFAULT NULL,
`drive_type_id_ref` TINYINT UNSIGNED NULL DEFAULT NULL,
`condition_id_ref` TINYINT UNSIGNED NOT NULL,
`transmission_type_id_ref` TINYINT UNSIGNED NULL DEFAULT NULL,
PRIMARY KEY (`vehicle_id`)
) ENGINE = INNODB CHARSET = latin1 COLLATE = latin1_swedish_ci ;
I used separate tables to store the records for each reference ID.
For example, I have a fuel type table to store fuels, a transmission type table, and so on.
But then I figured out that the schemas of those tables are pretty much identical.
So I created a table like this:
CREATE TABLE `test`.`vehicle_feature` (
`veh_feature_id` TINYINT UNSIGNED NOT NULL AUTO_INCREMENT,
`feature_type_id_ref` TINYINT UNSIGNED NOT NULL,
`name` VARCHAR (50) NOT NULL,
`is_active` TINYINT (1) NOT NULL DEFAULT TRUE,
PRIMARY KEY (`veh_feature_id`)
) ENGINE = INNODB CHARSET = latin1 COLLATE = latin1_swedish_ci ;
and I put all those fuels and transmission types into this table, with a feature type ID to identify each group.
Now I have to join the same table again and again to retrieve the values for my vehicle table.
So my question is:
Shall I keep my separate tables, or shall I go with this new approach? Since I have to write the same joins again and again, there is no reduction in my code. I can easily join my small tables rather than this one table. Also, with the small tables I can use inner joins, but here I have to use left joins. The separate tables also hold fewer records each than the combined table. All this approach does is reduce the number of tables in my DB (only 4 tables, which I don't mind keeping). The sum of all records in these 4 tables will be about 100.
So which is better performance-wise?
This is a bit of a difficult question, because these are both reasonable approaches. The key to deciding is understand what the application needs from this type of data.
A separate table for the items has one nice advantage because foreign key constraints can actually check the referential integrity of the data. Furthermore, each of the entities is treated as a full-fledged bona-fide entity. This is handy if you have other information about the fuels, drives, and transmissions that is specific to that entity. For instance, the fuel could have an octane rating, which could be in the fuel table but does not need to clutter the other reference tables.
On the other hand, you might end up with lots of similar reference tables. And, for your application, these may not need to be full-fledged entities. In that case, having a single table is quite reasonable. This is actually a bigger advantage if you want to internationalize your application. That is, if you want to provide the names of things in multiple languages.
In an object-oriented language, you would approach this problem using inheritance. The three "types" would all be "subclasses" from a class of vehicle attributes. Unfortunately, SQL does not have such built-in concepts.
From a performance perspective, the two methods would both involve relatively small reference tables (I'm guessing at most a few thousand rows), that are accessed via primary keys. There should be very little performance difference between the two approaches. The important concern is how to properly model the data for your application.
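To make the single-table trade-off concrete, here is a sketch of the retrieval pattern the question describes: the vehicle_feature table is joined once per attribute under different aliases, with LEFT JOINs so NULLable attributes don't drop rows (SQLite used for illustration; the sample data is made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE vehicle_feature (
    veh_feature_id      INTEGER PRIMARY KEY,
    feature_type_id_ref INTEGER NOT NULL,   -- 1 = fuel, 2 = transmission, ...
    name                TEXT NOT NULL
);
CREATE TABLE vehicle (
    vehicle_id               INTEGER PRIMARY KEY,
    fuel_type_id_ref         INTEGER,
    transmission_type_id_ref INTEGER
);
""")
cur.executemany("INSERT INTO vehicle_feature VALUES (?, ?, ?)",
                [(1, 1, "Petrol"), (2, 1, "Diesel"), (3, 2, "Manual")])
cur.execute("INSERT INTO vehicle VALUES (1, 2, 3)")
cur.execute("INSERT INTO vehicle VALUES (2, 1, NULL)")  # transmission unknown

# One alias per attribute; LEFT JOIN keeps vehicle 2 despite its NULL.
rows = cur.execute("""
    SELECT v.vehicle_id, fuel.name, trans.name
    FROM vehicle v
    LEFT JOIN vehicle_feature fuel
           ON fuel.veh_feature_id  = v.fuel_type_id_ref
    LEFT JOIN vehicle_feature trans
           ON trans.veh_feature_id = v.transmission_type_id_ref
    ORDER BY v.vehicle_id
""").fetchall()
print(rows)  # [(1, 'Diesel', 'Manual'), (2, 'Petrol', None)]
```

With separate lookup tables the query looks almost identical, just with a different table name per join, which is why the performance difference is negligible at this scale.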
I have a database design where I store image filenames in a table called resource_file.
CREATE TABLE `resource_file` (
`resource_file_id` int(11) unsigned NOT NULL AUTO_INCREMENT,
`resource_id` int(11) NOT NULL,
`filename` varchar(200) NOT NULL,
`extension` varchar(5) NOT NULL DEFAULT '',
`display_order` tinyint(4) NOT NULL,
`title` varchar(255) NOT NULL,
`description` text NOT NULL,
`canonical_name` varchar(200) NOT NULL,
PRIMARY KEY (`resource_file_id`)
) ENGINE=InnoDB AUTO_INCREMENT=592 DEFAULT CHARSET=utf8;
These "files" are gathered under another table called resource (which is something like an album):
CREATE TABLE `resource` (
`resource_id` int(11) unsigned NOT NULL AUTO_INCREMENT,
`name` varchar(255) NOT NULL,
`description` text NOT NULL,
PRIMARY KEY (`resource_id`)
) ENGINE=InnoDB AUTO_INCREMENT=285 DEFAULT CHARSET=utf8;
The logic behind this design comes in handy if I want to assign a certain type of "resource" (album) to a certain type of "item" (product, user, project, etc.), for example:
CREATE TABLE `resource_relation` (
`resource_relation_id` int(11) unsigned NOT NULL AUTO_INCREMENT,
`module_code` varchar(32) NOT NULL DEFAULT '',
`resource_id` int(11) NOT NULL,
`data_id` int(11) NOT NULL,
PRIMARY KEY (`resource_relation_id`)
) ENGINE=InnoDB AUTO_INCREMENT=328 DEFAULT CHARSET=utf8;
This table holds the relationship of a resource to a certain type of item like:
Product
User
Gallery
etc.
I do exactly this by giving module_code a value like "product" or "user" and assigning data_id to the corresponding unique ID, in this case product_id or user_id.
So at the end of the day, if I want to query the resources assigned to a product with the ID of 123, I query the resource_relation table (very simplified pseudo-query):
SELECT * FROM resource_relation WHERE data_id = 123 AND module_code = 'product'
And this gives me the resources, through which I can find the corresponding images.
I find this approach very practical, but I don't know whether it is a correct approach to this particular problem.
What is the name of this approach?
Is it a valid design?
Thank you
This one uses super-type/sub-type. Note how the primary key propagates from the super-type table into the sub-type tables.
To answer your second question first: the table resource_relation is an implementation of an Entity-attribute-value model.
So the answer to the next question is: it depends. According to relational database theory it is bad design, because we cannot enforce a foreign key relationship between data_id and, say, product_id, user_id, etc. It also obfuscates the data model, and it can make impact analysis harder.
On the other hand, lots of people find, as you do, that EAV is a practical solution to a particular problem, with one table instead of several. Although, if we're talking practicality, EAV doesn't scale well (at least in relational products; there are NoSQL products which do things differently).
From which it follows that the answer to your first question, "is it the correct approach?", is "strictly, no". But does it matter? Perhaps not.
"I can't see a problem why this would "not" scale. Would you mind explaining it a little bit further?"
There are two general problems with EAV.
The first is that small result sets (say, DATA_ID=USER_ID) and big result sets (say, DATA_ID=PRODUCT_ID) use the same query, which can lead to sub-optimal execution plans.
The second is that adding more attributes to the entity means the query needs to return more rows, whereas a relational solution would return the same number of rows, with more columns. This is the major scaling cost. It also means we end up writing horrible queries like this one.
Now, in your specific case perhaps neither of these concerns are relevant. I'm just explaining the reasons why EAV can cause problems.
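To illustrate the second point, here is a hedged sketch of the pivot-style query EAV tends to force on you: one conditional aggregate per attribute, so every new attribute means editing every such query (SQLite for illustration; the attribute names are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE eav (entity_id INTEGER, attr TEXT, value TEXT)")
cur.executemany("INSERT INTO eav VALUES (?, ?, ?)", [
    (1, "color", "red"), (1, "size", "L"),
    (2, "color", "blue"),
])

# Pivot back to columns: one MAX(CASE ...) per attribute. Every new
# attribute means editing every such query, whereas a plain relational
# table would just gain a column.
rows = cur.execute("""
    SELECT entity_id,
           MAX(CASE WHEN attr = 'color' THEN value END) AS color,
           MAX(CASE WHEN attr = 'size'  THEN value END) AS size
    FROM eav
    GROUP BY entity_id
    ORDER BY entity_id
""").fetchall()
print(rows)  # [(1, 'red', 'L'), (2, 'blue', None)]
```

Note also that the query returns two rows built from three stored rows; as attributes grow, the stored row count grows with them, which is the scaling cost described above.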
"How would I be supposed to assign "resources" to, for example, my product table, "the normal way"?"
The more common approach is to have a different intersection table (AKA junction table) for each relationship, e.g. USER_RESOURCES, PRODUCT_RESOURCES, etc. Each table would consist of a composite primary key, e.g. (USER_ID, RESOURCE_ID), and probably not much else.
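A minimal sketch of that junction-table approach (SQLite for illustration; the table names follow the examples above): the composite primary key prevents duplicate links, and real foreign keys catch dangling references, which the EAV table cannot:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite needs this opt-in
cur = conn.cursor()
cur.executescript("""
CREATE TABLE product  (product_id  INTEGER PRIMARY KEY);
CREATE TABLE resource (resource_id INTEGER PRIMARY KEY);
CREATE TABLE product_resources (
    product_id  INTEGER NOT NULL REFERENCES product (product_id),
    resource_id INTEGER NOT NULL REFERENCES resource (resource_id),
    PRIMARY KEY (product_id, resource_id)   -- composite key, little else needed
);
""")
cur.execute("INSERT INTO product VALUES (123)")
cur.execute("INSERT INTO resource VALUES (7)")
cur.execute("INSERT INTO product_resources VALUES (123, 7)")

# A dangling reference is now rejected by the engine itself:
rejected = False
try:
    cur.execute("INSERT INTO product_resources VALUES (999, 7)")  # no product 999
except sqlite3.IntegrityError:
    rejected = True
print(rejected)  # True
```

The cost is one extra table per relationship; the benefit is that the database, not application code, guards the integrity of each link.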
The other approach is to use a generic super-type table with specific sub-type tables. This is the implementation which Damir has modelled. The normal use case for super-types is when we have a bunch of related entities which have some attributes, behaviours and usages in common, plus some distinct features of their own. For instance, PERSON and USER, CUSTOMER, SUPPLIER.
Regarding your scenario, I don't think USER, PRODUCT and GALLERY fit this approach. Sure, they are all consumers of RESOURCE, but that is pretty much all they have in common. So trying to map them to an ITEM super-type is a procrustean solution; gaining a generic ITEM_RESOURCE table is likely to be a small reward for the additional hoops you're going to have to jump through elsewhere.
"I have a database design where I store images in a table called resource_file."
You're not storing images; you're storing filenames. The filename may or may not identify an image. You'll need to keep database and filesystem permissions in sync.
Your resource_file table structure says, "Image filenames are identifiable in the database, but are unidentifiable in the filesystem." It says that because resource_file_id is the primary key, but there are no unique constraints besides that id. I suspect your image files actually are identifiable in the filesystem, and you'd be better off with database constraints that match that reality. Maybe a unique constraint on (filename, extension).
Same idea for the resource table.
For resource_relation, you probably need a unique constraint on either (resource_id, data_id) or (resource_id, data_id, module_code). But . . .
I'll try to give this some more thought later. It's kind of hard to figure out what you're trying to do with resource_relation, which is usually a red flag.
Suppose I have tutors who conduct online web classes and create learning packs. Both web classes and learning packs can be rated by students, and a tutor's rating is ultimately the simple average of all the ratings on his classes and packs.
This is the table architecture of our current Ratings table:
CREATE TABLE IF NOT EXISTS `Ratings` (
`id_rating` int(10) unsigned NOT NULL auto_increment,
`id_teacher` int(10) unsigned default NULL COMMENT 'the teacher who created the class/pack',
`id_lp` int(10) unsigned default NULL COMMENT 'the id of the learning pack',
`id_wc` int(10) NOT NULL default '0' COMMENT 'the id of the webclass',
`id_user` int(10) unsigned NOT NULL default '0' COMMENT 'the user who has rated',
`rate` int(10) unsigned NOT NULL default '0',
`cdate` timestamp NOT NULL default CURRENT_TIMESTAMP,
`udate` timestamp NULL default NULL,
PRIMARY KEY (`id_rating`),
KEY `Ratings_FKIndex1` (`id_user`),
KEY `id_lp` (`id_lp`),
KEY `id_wc` (`id_wc`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=1 ;
Currently, both class and pack ratings are entered into the same table (id_wc and id_lp are filled in accordingly: one is set and the other is NULL for every rating record).
So, my question is:
Is this architecture correct, or is it better to keep class and pack ratings separate? Why or why not? I need exactly the same set of Ratings fields for both class and pack ratings.
I guess if class and pack ratings had to be retrieved separately, separate tables would reduce the number of records to be looked up. But since in our case only tutor ratings are needed (which involve both classes and packs), all the ratings are kept together.
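For what it's worth, the combined table keeps the tutor-rating computation a single aggregate; an illustrative sketch against a trimmed-down version of the schema above (SQLite, made-up data):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE Ratings (
    id_rating  INTEGER PRIMARY KEY,
    id_teacher INTEGER,
    id_lp      INTEGER,   -- set for learning-pack ratings, else NULL
    id_wc      INTEGER,   -- set for webclass ratings, else NULL
    id_user    INTEGER,
    rate       INTEGER
);
""")
cur.executemany(
    "INSERT INTO Ratings (id_teacher, id_lp, id_wc, id_user, rate) "
    "VALUES (?, ?, ?, ?, ?)",
    [
        (1, None, 10, 100, 4),  # class rating for teacher 1
        (1, 20, None, 101, 2),  # pack rating for teacher 1
        (2, None, 11, 100, 5),  # class rating for teacher 2
    ])

# Tutor rating = simple average over all class and pack ratings together.
rows = cur.execute("""
    SELECT id_teacher, AVG(rate)
    FROM Ratings
    GROUP BY id_teacher
    ORDER BY id_teacher
""").fetchall()
print(rows)  # [(1, 3.0), (2, 5.0)]
```

With separate class and pack tables, the same figure would need a UNION ALL of the two tables before the average, which is a point in favour of the combined design for this use case.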
A bit more detailed model.
A teacher at a university can take classes too.
One class may have more than one teacher.
There may be several classes on the same subject, taught by different teachers.
Only students who participate in classes get to vote (rate) the class.
Learning packs are on a subject (math, biology).
One learning pack can have several authors.
Technically, a student can author a learning pack too.
Only members who use a learning pack get to rate a pack.
Although authors can vote for packs and teachers can vote for their classes, those votes are ignored.
If you are only interested in the ratings table, you could use: [ratings model diagram]
Or, combine both models into something like: [combined model diagram]
If you think you'll end up with more entities that will require ratings, then you need to create something more generic (and not very DB-philosophy-friendly):
ratings
-------
id
voterClass
voterId
subjectClass
subjectId
vote
date(s)
By using this design you give up FKs and referential integrity, but it's very flexible, and with the right indexes it's very scalable. Also, when entities (subjects) are deleted, the votes remain. This design saves you from duplicating fields and tables.
Use MySQL Workbench. It's cross-platform and works great.
MySQL Workbench lets you visually see what you are doing with your database: http://diariolinux.com/wp-content/uploads/2008/08/wb51linuxpreview2a.png
MySQL Workbench also meets all of the criteria from your general-purpose-remote-data-backup-and-download-including-innodb-support question on Stack Overflow.
BTW: use <Ctrl>+<G> to forward-engineer a database.