I have a table with an auto-generated primary id and a separate order column (an integer which might be changed by a GUI application). I would like the order column to be auto-incremented when a new row is inserted. It can be the same value as the new car_id, I don't care. Is that possible?
I think I can have more than one auto-increment field, but they would both need to be part of the primary key, and to prevent potentially ending up with two cars with the same car_id I would need a unique index on car_id. Am I correct?
I think I could use triggers, but they are prohibited by my hosting company.
CREATE TABLE IF NOT EXISTS `car` (
`car_id` int(11) NOT NULL AUTO_INCREMENT,
`name` varchar(255) NOT NULL,
`description` text,
`order` int(11) DEFAULT NULL,
PRIMARY KEY (`car_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=1 ;
The reason I would like this to be handled by the database is that I use a complicated system of database handling in my program (using generic types and reflection to automate everything), and this feature would make my life easier.
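Since triggers are off the table, one trigger-free possibility is to fill `order` in the insert itself. This is only a minimal sketch, assuming the `car` table above; the sample values are made up and the MAX()-based approach is not safe under heavily concurrent inserts:
-- Option 1: seed `order` from the current maximum in the same statement
-- (INSERT ... SELECT may read from the target table in MySQL).
INSERT INTO `car` (`name`, `description`, `order`)
SELECT 'Fiat 500', 'small city car', COALESCE(MAX(`order`), 0) + 1
FROM `car`;
-- Option 2: since the value may simply equal car_id, copy it over afterwards.
UPDATE `car` SET `order` = `car_id` WHERE `order` IS NULL;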
Related
We have a set of users
CREATE TABLE `users` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`email` varchar(254) NOT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `unique_email` (`email`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 ROW_FORMAT=COMPRESSED
Each user can have one or many domains, such as
CREATE TABLE `domains` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`user_id` int(11) NOT NULL,
`domain` varchar(254) NOT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `domain` (`domain`),
CONSTRAINT `domains_user_id_fk` FOREIGN KEY (`user_id`) REFERENCES `users` (`id`) ON DELETE NO ACTION ON UPDATE NO ACTION
) ENGINE=InnoDB DEFAULT CHARSET=utf8 ROW_FORMAT=COMPRESSED
And we have a table that has some sort of data, for this example it doesn't really matter what it contains
CREATE TABLE `some_data` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`content` TEXT NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 ROW_FORMAT=COMPRESSED
We want certain elements of some_data to be accessible to only certain users or only certain domains (whitelist case).
In other cases we want elements of some_data to be accessible to everyone BUT certain users or certain domains (blacklist case).
Ideally we would like to retrieve, in a single query, the list of domains that a given element of some_data is accessible to, and ideally do the reverse as well (list all the data a given domain has access to).
Our approach so far is a single table
CREATE TABLE `access_rules` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`rule_type` enum('blacklist','whitelist'),
`some_data_id` int(11) NOT NULL,
`user_id` int(11) NOT NULL,
`domain_id` int(11) DEFAULT NULL,
PRIMARY KEY (`id`),
CONSTRAINT `access_rules_some_data_id_fk` FOREIGN KEY (`some_data_id`) REFERENCES `some_data` (`id`) ON DELETE NO ACTION ON UPDATE NO ACTION
) ENGINE=InnoDB DEFAULT CHARSET=utf8 ROW_FORMAT=COMPRESSED
The problem, however, is that we need to query the DB twice to figure out whether the given data entry is operating a blacklist or a whitelist (the whitelist has higher priority). (EDIT: it can be done in a single query.)
Also, since domain_id is nullable (to allow blacklisting / whitelisting an entire user), joining is not easy.
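For reference, a sketch of how the single-query check mentioned in the EDIT might look (whitelist wins; a NULL domain_id applies the rule to the whole user; the @-variables stand in for the ids being checked):
-- Does (@user_id, @domain_id) have access to @data_id?
SELECT
  CASE
    WHEN EXISTS (SELECT 1 FROM access_rules
                 WHERE some_data_id = @data_id AND rule_type = 'whitelist')
    THEN EXISTS (SELECT 1 FROM access_rules
                 WHERE some_data_id = @data_id AND rule_type = 'whitelist'
                   AND user_id = @user_id
                   AND (domain_id = @domain_id OR domain_id IS NULL))
    ELSE NOT EXISTS (SELECT 1 FROM access_rules
                     WHERE some_data_id = @data_id AND rule_type = 'blacklist'
                       AND user_id = @user_id
                       AND (domain_id = @domain_id OR domain_id IS NULL))
  END AS has_access;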
The API that will use this schema is currently hit 4-5k times per second so performance matters.
The users table is relatively small (50k+ rows) and the domains table is about 1.5 million entries. some_data is also relatively small (sub 100k rows)
EDIT: the question is more about semantics and best practices. With the above structure I'm confident we can make it work, but the schema "feels wrong" and I'm wondering if there is a better way.
There are two issues to consider: normalization and management.
To normalize traditionally you would need 4 tables.
Set up the 3 master tables: USER, DOMAIN, OtherDATA.
Set up a child table with User_Id, Domain_Id, OtherDATA_Id, PermissionLevel.
This gives the least amount of repeated data. It also makes management at the user-domain level easier. You could also add a default whitelist/blacklist field to the user and domain tables. That way a script could auto-populate the child table and a manager could then just go in and adjust the one value that needs changing.
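A minimal sketch of that child table, reusing the table and column names from the question (the table, column and constraint names here are only illustrative):
CREATE TABLE `access_permissions` (
  `user_id` int(11) NOT NULL,
  `domain_id` int(11) NOT NULL,
  `some_data_id` int(11) NOT NULL,
  `permission_level` enum('whitelist','blacklist') NOT NULL,
  PRIMARY KEY (`user_id`, `domain_id`, `some_data_id`),
  CONSTRAINT `ap_user_fk` FOREIGN KEY (`user_id`) REFERENCES `users` (`id`),
  CONSTRAINT `ap_domain_fk` FOREIGN KEY (`domain_id`) REFERENCES `domains` (`id`),
  CONSTRAINT `ap_data_fk` FOREIGN KEY (`some_data_id`) REFERENCES `some_data` (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;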
If you had two different tables, one for the whitelist and one for the blacklist, you could get a user or domain onto both lists by accident. Actually it would be 4 tables, 2 for users and 2 for domains. Management would be more complex.
I have an extremely large MySQL table that I would like to partition. A simplified CREATE of this table is given below -
CREATE TABLE `myTable` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`columnA` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
`columnB` varchar(50) NOT NULL ,
`columnC` int(11) DEFAULT NULL,
`columnD` varchar(255) DEFAULT NULL,
`columnE` int(11) DEFAULT NULL,
`columnF` varchar(255) DEFAULT NULL,
`columnG` varchar(255) DEFAULT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `UNIQ_B` (`columnB`),
UNIQUE KEY `UNIQ_B_C` (`columnB`,`columnC`),
UNIQUE KEY `UNIQ_C_D` (`columnC`,`columnD`),
UNIQUE KEY `UNIQ_E_F_G` (`columnE`,`columnF`,`columnG`)
)
I want to partition my table either by columnA or id, but the problem is that the MySQL Manual states -
In other words, every unique key on the table must use every column in the table's partitioning expression.
Which means that I cannot partition the table on either of those columns without changing my schema. For example, I have considered adding id to all my unique keys like so -
CREATE TABLE `myTable` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`columnA` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
`columnB` varchar(50) NOT NULL ,
`columnC` int(11) DEFAULT NULL,
`columnD` varchar(255) DEFAULT NULL,
`columnE` int(11) DEFAULT NULL,
`columnF` varchar(255) DEFAULT NULL,
`columnG` varchar(255) DEFAULT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `UNIQ_B` (`columnB`,`id`),
UNIQUE KEY `UNIQ_B_C` (`columnB`,`columnC`,`id`),
UNIQUE KEY `UNIQ_C_D` (`columnC`,`columnD`,`id`),
UNIQUE KEY `UNIQ_E_F_G` (`columnE`,`columnF`,`columnG`,`id`)
)
Which I do not mind doing, except for the fact that it allows the creation of rows that should not be created. For example, with my original schema, the following row insertion wouldn't have worked twice -
INSERT INTO myTable (columnC, columnD) VALUES (1.0, 2.0);
But it works with the second schema, as columnC and columnD by themselves no longer form a unique key. I have considered getting around this by using triggers to prevent the creation of such rows, but then the trigger cost would reduce (or outweigh) the partitioning performance gain.
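For illustration, the trigger I had in mind would look roughly like this (a sketch only, assuming the relaxed UNIQ_C_D key from the second schema; the trigger name and error message are made up):
DELIMITER //
CREATE TRIGGER `myTable_bi_unique_c_d`
BEFORE INSERT ON `myTable`
FOR EACH ROW
BEGIN
  -- reject the row if the (columnC, columnD) pair already exists
  IF EXISTS (SELECT 1 FROM `myTable`
             WHERE columnC = NEW.columnC AND columnD = NEW.columnD) THEN
    SIGNAL SQLSTATE '45000' SET MESSAGE_TEXT = 'Duplicate (columnC, columnD) pair';
  END IF;
END//
DELIMITER ;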
Edited:
Some additional information about this table:
The table has more than 1.2 billion records.
We are using MySQL 5.6.34 with the InnoDB engine, running on AWS RDS.
A few other indexes also exist on this table.
Because of the huge amount of data and the multiple indexes, inserting and retrieving data is expensive.
There are no unique indexes on timestamp or float data types; the above was just a sample table schema for illustration. Our actual table has a similar schema.
Other than partitioning, what options do we have to improve the performance of the table without losing any data and while maintaining the integrity constraints?
How do I partition a MySQL table that contains several unique keys?
Sorry to say, you don't.
Also, you shouldn't. Remember that UPDATE and INSERT operations on a table with unique keys necessarily must query the table to ensure the keys stay unique. If it were possible to partition a table so that unique keys weren't built into the partitioning expression, then every insert or update would require querying every partition. This would likely make the partitioning worse than useless.
I am new to databases. I am going to share two database table designs here; I just want to know which one is the better design and why.
In the first one I have created a users table, a subjects table and a user_subjects table.
In the users table I save user information, and in the subjects table I save the subject. In user_subjects I save the user id and subject id.
CREATE TABLE IF NOT EXISTS `subjects` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`name` varchar(255) NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=1 ;
-- --------------------------------------------------------
--
-- Table structure for table `users`
--
CREATE TABLE IF NOT EXISTS `users` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`name` varchar(255) NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=1 ;
-- --------------------------------------------------------
--
-- Table structure for table `user_subjects`
--
CREATE TABLE IF NOT EXISTS `user_subjects` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`user_id` int(11) NOT NULL,
`subject_id` int(11) NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=1 ;
The second design:
CREATE TABLE IF NOT EXISTS `subjects` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`name` varchar(255) NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=1 ;
CREATE TABLE IF NOT EXISTS `users` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`name` varchar(255) NOT NULL,
`subject_name` varchar(2000) NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=1 ;
Here I save the user and its subjects in the users table as a comma (,) separated list, and do not create another table to store user and subject ids.
I think the second one is best because we do not need to save data in a third table. Please tell me which one is better and will hold up in the future.
The first version is much, much better. Here are some reasons why you do not want to use comma delimited strings:
SQL does not have particularly good string functions -- the basics, but not much more.
When you store values in a delimited string, the database cannot validate the data. With a separate table you can use foreign key constraints.
Queries on comma-delimited columns cannot make use of standard indexes (although it might be possible to use full text indexes).
With a comma-delimited string, the database cannot validate that the subjects are unique.
The first method uses a junction table, which is the better way to implement this logic in a relational database.
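As a sketch of what those constraints look like on the junction table from the question (the key and constraint names are just illustrative):
CREATE TABLE IF NOT EXISTS `user_subjects` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `user_id` int(11) NOT NULL,
  `subject_id` int(11) NOT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `uniq_user_subject` (`user_id`, `subject_id`),
  CONSTRAINT `us_user_fk` FOREIGN KEY (`user_id`) REFERENCES `users` (`id`),
  CONSTRAINT `us_subject_fk` FOREIGN KEY (`subject_id`) REFERENCES `subjects` (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
The unique key guarantees that a user cannot be linked to the same subject twice, and the foreign keys ensure only existing users and subjects can be referenced.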
It's OK to use the second way IF:
1) subjects have only one value of importance (the name)
2) that value uniquely identifies a subject (i.e. no two subjects have the same name), OR there is no need to distinguish between two subjects with the same name
Generally speaking, the first way is better, because if you suddenly decide to give the subjects a new attribute (for example, age) you don't have to redo your whole table structure.
The second solution is not very good anyway, since you cannot use joins or indexes.
Which solution would be the best, depends on the kind of relationship between users and subjects.
If each subject belongs to exactly one user and each user may have an arbitrary number of subjects, which means you have a one-to-many relationship, then you should add user_id to the subjects table.
If any subject can belong to more than one user and each user can have many subjects, you should use your first solution with a third mapping table (that would be a many-to-many relationship).
In both cases you can express the following queries very easily and cleanly in SQL using a simple join (see the sketch after this list):
which subjects belong to a given user
which users have subjects with a name containing a certain expression
which is the user (or which are the users, in the second case) of a given subject
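A sketch for the many-to-many (junction table) variant; the user id 1 and the pattern '%math%' are made-up placeholder values:
-- which subjects belong to a given user
SELECT s.*
FROM subjects s
JOIN user_subjects us ON us.subject_id = s.id
WHERE us.user_id = 1;
-- which users have subjects with a name containing a certain expression
SELECT DISTINCT u.*
FROM users u
JOIN user_subjects us ON us.user_id = u.id
JOIN subjects s ON s.id = us.subject_id
WHERE s.name LIKE '%math%';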
I have a table SkillLevel created as
CREATE TABLE `sklllevel` (
`Name` varchar(20) NOT NULL,
`level` enum('No Experience','Beginner','Expert','Advisor') DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
With Values
INSERT INTO test.sklllevel (name,level) values ('No Experience','No Experience'),('Beginner','Beginner'),('Expert','Expert'),('Advisor','Advisor');
I want to reference sklllevel.level from testskill.tkSkill in another table, created as:
CREATE TABLE `testskill` (
`pkid` int(11) NOT NULL,
`name` varchar(45) DEFAULT NULL,
`tkSkill` tinyint(4) DEFAULT NULL,
PRIMARY KEY (`pkid`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
Do I need tkSkill to be an ENUM with the same set of values to set up a foreign key? What is the best practice here?
Short answer: enums are stored as a number, so technically, you could join them with tkSkill as a tinyint.
To use it as a foreign key, you indeed need both tkSkill and level to be the same enum, but level needs to be a unique column to qualify as a foreign key target, so add a unique index to it (to be really precise: with InnoDB you can have non-unique foreign key targets if you manually create the index, but non-unique foreign keys are generally a bad idea). You should also think about what the key in sklllevel should be, since right now it looks as if you want Name to be the key.
And independently of using it as a key, you should define tkSkill as (the same) enum to make sure they both mean the same thing if at some point you want to change the enums (which is a bad idea!), e.g. to add another skill level; or if you want to "read" (directly understand) the value when you select directly from testskill without joining sklllevel; or if you want to insert values into tkSkill by their enum name (e.g. 'Expert' instead of 3, though you can use both) without looking them up in sklllevel.
Longer answer: best practice is not to use enums. Depending on "belief", there are none to only a handful of cases where enums might be slightly useful, if at all. One could be that you don't want to use a reference table, in order to skip a join when displaying the text value/description of an integer id. In your setup, you are actually using a reference table and still want to use enums.
The closest you'd get to best practice while using enums would be to define tkSkill as an enum in testskill and not have the sklllevel table (the reference table) at all.
But again, I would urge you not to use enums. You can define your table as
CREATE TABLE `sklllevel` (
`Id` tinyint(4) primary key,
`Name` varchar(20) NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
and then use that id as a foreign key for tkSkill. Or even
CREATE TABLE `sklllevel` (
`Name` varchar(20) primary key
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
and then define tkSkill as varchar(20) and use that as the foreign key. It will use more space, but you will have "readable" values in the table, if that was your reason for using enums in the first place.
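A minimal sketch of that varchar-keyed variant (the constraint name and the ON DELETE/UPDATE actions are just one possible choice):
CREATE TABLE `testskill` (
  `pkid` int(11) NOT NULL,
  `name` varchar(45) DEFAULT NULL,
  `tkSkill` varchar(20) DEFAULT NULL,
  PRIMARY KEY (`pkid`),
  CONSTRAINT `fk_testskill_sklllevel` FOREIGN KEY (`tkSkill`)
    REFERENCES `sklllevel` (`Name`)
    ON DELETE SET NULL ON UPDATE CASCADE
) ENGINE=InnoDB DEFAULT CHARSET=utf8;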
Here you can find some background to enums: 8 Reasons Why MySQL's ENUM Data Type Is Evil
I am wondering if there is a better way to design some MySQL tables than what I have been using in this project. I have a series of numbers which represent a specific time; the number 101 would represent Jan 12, 2012, for example. It doesn't only represent time, but that is the core of the information. So I created a lexicon table which has all the numbers we use and details such as the time and meaning of each number. I have another table, one per customer, in which, whenever they make a purchase, I check off that the purchase is eligible for a specific time. But the table where I check off each purchase and the lexicon table are not linked. I am wondering if there is a better way, maybe a way to have an SQL statement take all the data from the lexicon table and turn that into columns, while the rows consist of customer ID and a true/false selector.
table structure
THIS IS THE CUSTOMER PURCHASED TABLE T/F
CREATE TABLE `group1` (
`100` TINYINT(4) NULL DEFAULT '0',
`101` TINYINT(4) NULL DEFAULT '0',
`102` TINYINT(4) NULL DEFAULT '0',
... this goes on for 35 times each table
PRIMARY KEY (`CustID`)
)
THIS IS THE LEXICON TABLE
CREATE TABLE `lexicon` (
`Number` INT(3) NOT NULL DEFAULT '0',
`Date` DATETIME NULL DEFAULT NULL,
`OtherPurtinantInfo` .... etc
)
So I guess instead of making groups of numbers every season for the customers, I would prefer to be able to use the updated lexicon table to automatically generate a table. My only concern is that we have very many numbers, so that would make a very large table all combined together, but perhaps that could be limited into groups automatically as well so that it is not overwhelming.
I am not sure if I am being clear enough, so feel free to comment on things that need to be clarified.
Here's a normalized ERD, based on what I understand your business requirements to be:
The classifieds run on certain dates, and a given advertisement can be run for more than one classifieds date.
The SQL statements to make the tables:
CREATE TABLE IF NOT EXISTS `classified_ads` (
`id` INT UNSIGNED NOT NULL AUTO_INCREMENT,
PRIMARY KEY (`id`)
);
CREATE TABLE IF NOT EXISTS `classified_dates` (
`id` INT UNSIGNED NOT NULL AUTO_INCREMENT,
`date` DATETIME NOT NULL,
`info` TEXT NULL,
PRIMARY KEY (`id`)
);
CREATE TABLE IF NOT EXISTS `classified_ad_dates` (
`classified_ad_id` INT UNSIGNED NOT NULL,
`classified_date_id` INT UNSIGNED NOT NULL,
PRIMARY KEY (`classified_ad_id`, `classified_date_id`),
INDEX `fk_classified_ad_dates_classified_ads1` (`classified_ad_id` ASC),
INDEX `fk_classified_ad_dates_classified_dates1` (`classified_date_id` ASC),
CONSTRAINT `fk_classified_ad_dates_classified_ads1`
FOREIGN KEY (`classified_ad_id`)
REFERENCES `classified_ads` (`id`)
ON DELETE CASCADE
ON UPDATE CASCADE,
CONSTRAINT `fk_classified_ad_dates_classified_dates1`
FOREIGN KEY (`classified_date_id`)
REFERENCES `classified_dates` (`id`)
ON DELETE CASCADE
ON UPDATE CASCADE
);
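With this layout, the per-number columns from group1 become rows in classified_ad_dates, and "which dates does a given ad run on" is a single join. A small usage sketch (the ad id 42 is a made-up value):
SELECT cd.`date`, cd.`info`
FROM `classified_ad_dates` cad
JOIN `classified_dates` cd ON cd.`id` = cad.`classified_date_id`
WHERE cad.`classified_ad_id` = 42;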