Hello, I have a table as follows:
DROP TABLE IF EXISTS `Master_Product`;
CREATE TABLE IF NOT EXISTS `Master_Product` (
`KeyId_Product` bigint(21) UNSIGNED NOT NULL AUTO_INCREMENT COMMENT 'Id of table',
`ID_OrderInform` bigint(21) UNSIGNED NOT NULL COMMENT 'Order desired by the user for the printed Inform output',
`ID_OrderReport` bigint(21) UNSIGNED NOT NULL COMMENT 'Order desired by the user for the Report view DataTable output',
`Name_Product` varchar(200) COLLATE utf8_unicode_ci DEFAULT NULL COMMENT 'PDF name on disk',
PRIMARY KEY (`KeyId_Product`),
UNIQUE KEY `KeyId` (`KeyId_Product`),
KEY `xID_OrderInform` (`ID_OrderInform`),
KEY `xID_OrderReport` (`ID_OrderReport`)
) ENGINE=InnoDB AUTO_INCREMENT=0 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci ROW_FORMAT=COMPACT;
Each time I insert a new product I need to fill this table with the order and the name, but I can't use KeyId_Product to sort the printed Inform or the DataTables view, because some users need to use their own desired order.
To get this flexibility I use two additional columns to store the desired order. The problem occurs when a new product must be inserted between two existing products: all products with a higher ordering index must be pushed +1 to make room for the new one.
The only solution I have found is to run two additional UPDATE queries:
UPDATE
Master_Product
SET
ID_OrderInform = ID_OrderInform + 1
WHERE
ID_OrderInform>$NewitemOrderInform
and this other one:
UPDATE
Master_Product
SET
ID_OrderReport = ID_OrderReport + 1
WHERE
ID_OrderReport>$NewitemOrderReport
How can I do all of this in a single query? And if an error occurs while updating the other products, how can I roll back so that even the new record is not added?
This does both "at the same time":
UPDATE Master_Product
SET ID_OrderInform = ID_OrderInform + (ID_OrderInform > $NewitemOrderInform),
ID_OrderReport = ID_OrderReport + (ID_OrderReport > $NewitemOrderReport);
To explain: (id > $x) is a boolean expression that evaluates to either false or true. False is represented as 0, true as 1. So this adds the 'correct' value (0 or 1) to each of the columns.
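To also get the rollback behaviour the question asks for, run the combined UPDATE and the INSERT inside one transaction (the table is InnoDB, so transactions apply). A minimal sketch, reusing the placeholders from the question; the file name and the exact position arithmetic are illustrative, not from the original answer:

START TRANSACTION;

UPDATE Master_Product
SET ID_OrderInform = ID_OrderInform + (ID_OrderInform > $NewitemOrderInform),
    ID_OrderReport = ID_OrderReport + (ID_OrderReport > $NewitemOrderReport);

INSERT INTO Master_Product (ID_OrderInform, ID_OrderReport, Name_Product)
VALUES ($NewitemOrderInform + 1, $NewitemOrderReport + 1, 'new-product.pdf');

-- If either statement fails, issue ROLLBACK instead of COMMIT and neither
-- the shift nor the new record is applied.
COMMIT;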
Meanwhile, a PRIMARY KEY is a UNIQUE KEY, so get rid of the redundant UNIQUE KEY `KeyId` (`KeyId_Product`).
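For example, assuming the key keeps the name KeyId from the CREATE TABLE above:

ALTER TABLE Master_Product DROP INDEX `KeyId`;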
What other queries hit this table? It would probably be better to remove the indexes on ID_OrderInform and ID_OrderReport: it takes a significant amount of effort to update many rows in each of them for every such UPDATE, and you probably only fetch all the rows when you need the ordered list anyway.
A nitpick: BIGINT is overkill.
So I just started learning SQL online, and while learning about constraints, the example below was given for the DEFAULT constraint:
CREATE TABLE persons(
ID INT NULL DEFAULT 100,
f_name VARCHAR(25),
l_name VARCHAR(25),
UNIQUE(ID)
);
My question is: if ID defaults to 100, there can be multiple rows having 100 as the ID, so wouldn't that contradict the UNIQUE constraint, which ensures all rows have different values?
Thank you for reading and your inputs!
Rohan
Though it's valid SQL and MySQL allows this, it is bad practice to define a DEFAULT value on a column with a UNIQUE constraint. Such a schema will lead to inconsistency in your data.
mysql> SHOW CREATE TABLE persons;
CREATE TABLE `persons` (
  `id` int(11) DEFAULT '100',
  `f_name` varchar(10) COLLATE utf8_unicode_ci DEFAULT NULL,
  `l_name` varchar(10) COLLATE utf8_unicode_ci DEFAULT NULL,
  UNIQUE KEY `id` (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci
You are right, the combination of DEFAULT 100 and UNIQUE makes no sense.
The column is defined as nullable, so there can be many rows with the value null. Only when a row has a value different from null, must it be unique.
In order to insert NULLs, you'd have to specify them explicitly in your INSERT statement. If you don't set NULL explicitly, the default 100 will be written. This works for the first row treated that way, but the second time the 100 will violate the unique constraint, just as you say.
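A quick sketch of that behaviour against the persons table from the question (the names are made up):

INSERT INTO persons (f_name, l_name) VALUES ('Ann', 'Lee');
-- OK: ID falls back to the default, 100
INSERT INTO persons (f_name, l_name) VALUES ('Bob', 'Kim');
-- fails: ERROR 1062, duplicate entry '100' for the unique key on ID
INSERT INTO persons (ID, f_name, l_name) VALUES (NULL, 'Bob', 'Kim');
-- OK: an explicit NULL, and any number of NULLs may coexist under UNIQUE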
Well, a nullable ID makes no sense either, and ideally an ID should be auto-incremented, so you don't have to worry about using an unused ID, especially in an environment where multiple processes may try to insert rows at the same time.
So, the given example is just very bad.
The combination of DEFAULT 100 and UNIQUE makes sense.
This combination means that a newly inserted row is normally expected to have an explicitly specified ID value.
The schema allows you to insert one row without specifying an ID value, but only one such row. If you need to insert another row with this default/generic ID value, you must first edit the existing row and change its ID (or delete it).
In practice this allows you to save a raw, incomplete row and finish editing it later. For example, you insert a generic row, then calculate the needed row parameters and set the needed references, and finally assign a definite ID value to it. After that you may insert another generic row and work with it.
Of course this situation is rare. But it may be useful in some cases.
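A minimal sketch of that workflow, reusing the persons table from the question (the values are illustrative):

INSERT INTO persons (f_name, l_name) VALUES ('draft', 'draft');
-- the new row takes the generic ID 100; complete it later ...
UPDATE persons SET f_name = 'Ann', l_name = 'Lee', ID = 1 WHERE ID = 100;
-- ... which frees the generic slot for the next draft row
INSERT INTO persons (f_name, l_name) VALUES ('draft', 'draft');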
I have a table for storing stats. It currently holds about 10 million rows by the end of the day, at which point it is copied to a daily stats table and deleted. For this reason I can't have an auto-incrementing primary key.
This is the table structure:
CREATE TABLE `stats` (
`shop_id` int(11) NOT NULL,
`title` varchar(255) CHARACTER SET latin1 NOT NULL,
`created` datetime NOT NULL,
`mobile` tinyint(1) NOT NULL DEFAULT '0',
`click` tinyint(1) NOT NULL DEFAULT '0',
`conversion` tinyint(1) NOT NULL DEFAULT '0',
`ip` varchar(20) CHARACTER SET latin1 NOT NULL,
KEY `shop_id` (`shop_id`,`created`,`ip`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8
I have a key on (shop_id, created, ip), but I'm not sure which columns I should use to create the optimal index and increase lookup speed further.
The query below takes about 12 seconds with no key and about 1.5 seconds using the index above:
SELECT DATE(CONVERT_TZ(`created`, 'UTC', 'Australia/Brisbane')) AS `date`, COUNT(*) AS `views`
FROM `stats`
WHERE `created` <= '2017-07-18 09:59:59'
AND `shop_id` = '17515021'
AND `click` != 1
AND `conversion` != 1
GROUP BY DATE(CONVERT_TZ(`created`, 'UTC', 'Australia/Brisbane'))
ORDER BY DATE(CONVERT_TZ(`created`, 'UTC', 'Australia/Brisbane'));
If there is no column (or combination of columns) that is guaranteed unique, then do have an AUTO_INCREMENT id. Don't worry about truncating/deleting. (However, if the id does not reset, you probably need to use BIGINT, not INT UNSIGNED to avoid overflow.)
Don't use id as the primary key, instead, PRIMARY KEY(shop_id, created, id), INDEX(id).
That unconventional PK will help with performance in 2 ways, while being unique (due to the addition of id). The INDEX(id) is to keep AUTO_INCREMENT happy. (Whether you DELETE hourly or daily is a separate issue.)
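Put together, the revised definition might look like this (a sketch based on the suggestions above; the column definitions are copied from the question, and the id column is the new addition):

CREATE TABLE `stats` (
  `id` bigint unsigned NOT NULL AUTO_INCREMENT,
  `shop_id` int(11) NOT NULL,
  `title` varchar(255) CHARACTER SET latin1 NOT NULL,
  `created` datetime NOT NULL,
  `mobile` tinyint(1) NOT NULL DEFAULT '0',
  `click` tinyint(1) NOT NULL DEFAULT '0',
  `conversion` tinyint(1) NOT NULL DEFAULT '0',
  `ip` varchar(20) CHARACTER SET latin1 NOT NULL,
  PRIMARY KEY (`shop_id`, `created`, `id`),
  KEY `id` (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;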
Build a Summary table based on each hour (or minute). It will contain the count for such -- 400K/hour or 7K/minute. Augment it each hour (or minute) so that you don't have to do all the work at the end of the day.
The summary table can also filter on click and/or conversion. Or it could keep both, if you need them.
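A minimal sketch of such an hourly summary table and the statement that augments it each hour (the table name, column names, and the literal hour are assumptions for illustration, not from the original answer):

CREATE TABLE stats_hourly (
  shop_id int NOT NULL,
  hr datetime NOT NULL,             -- start of the hour, in UTC
  views int unsigned NOT NULL,      -- rows with click = 0 AND conversion = 0
  clicks int unsigned NOT NULL,
  conversions int unsigned NOT NULL,
  PRIMARY KEY (shop_id, hr)
);

-- Run once per hour for the hour that just finished:
INSERT INTO stats_hourly (shop_id, hr, views, clicks, conversions)
SELECT shop_id,
       '2017-07-18 09:00:00',       -- the hour being summarized
       SUM(click = 0 AND conversion = 0),
       SUM(click = 1),
       SUM(conversion = 1)
FROM stats
WHERE created >= '2017-07-18 09:00:00'
  AND created <  '2017-07-18 10:00:00'
GROUP BY shop_id;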
If click/conversion have only two states (0 & 1), don't say != 1, say = 0; the optimizer is much better at = than at !=.
If they are 2-state and you changed to =, then this becomes viable and much better: INDEX(shop_id, click, conversion, created) -- created must be last.
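A sketch of that index and the query rewritten to use = 0 (the index name is made up):

ALTER TABLE `stats` ADD INDEX `shop_flags_created` (`shop_id`, `click`, `conversion`, `created`);

SELECT DATE(CONVERT_TZ(`created`, 'UTC', 'Australia/Brisbane')) AS `date`, COUNT(*) AS `views`
FROM `stats`
WHERE `shop_id` = '17515021'
  AND `click` = 0
  AND `conversion` = 0
  AND `created` <= '2017-07-18 09:59:59'
GROUP BY `date`
ORDER BY `date`;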
Don't bother with TZ when summarizing into the Summary table; apply the conversion later.
Better yet, don't use DATETIME, use TIMESTAMP so that you won't need to convert (assuming you have TZ set correctly).
After all that, if you still have issues, start over on the Question; there may be further tweaks.
In your WHERE clause, put first the column that will return the smallest set of results, and so on, and create the index in the same column order.
You have
WHERE created <= '2017-07-18 09:59:59'
AND shop_id = '17515021'
AND click != 1
AND conversion != 1
If created returns the smallest set compared to the other three columns then you are good; otherwise put that column in the first position in your WHERE clause, then choose the second column by the same reasoning, and create the index to match your WHERE clause.
If you think the order is fine then create the index:
KEY created_shopid_click_conversion (created, shop_id, click, conversion);
I am currently working on a project, which involves altering data stored in a MYSQL database. Since the table that I am working on does not have a key, I add a key with the following command:
ALTER TABLE deCoupledData ADD COLUMN MY_KEY INT NOT NULL AUTO_INCREMENT KEY
Due to the fact that I want to group my records according to selected fields, I try to create an index for the table deCoupledData that consists of MY_KEY, along with the selected fields. For example, If I want to work with the fields STATED_F and NOT_STATED_F, I type:
ALTER TABLE deCoupledData ADD INDEX (MY_KEY, STATED_F, NOT_STATED_F)
The real issue is that the fields I usually work with number more than 16, and MySQL does not allow indexes over more than 16 columns.
In conclusion, is there another way to do this? Can I somehow make MySQL order the records according to the desired super-key (something like clustering)? I really need to make my script faster, and the main overhead is that each group may contain records which are not stored on the same disk page, so I assume my machine performs random I/O to retrieve them.
Thank you for your time.
Nick Katsipoulakis
CREATE TABLE deCoupledData (
AA double NOT NULL DEFAULT '0',
STATED_F double DEFAULT NULL,
NOT_STATED_F double DEFAULT NULL,
MIN_VALUES varchar(128) NOT NULL DEFAULT '-1,-1',
MY_KEY int(11) NOT NULL AUTO_INCREMENT,
PRIMARY KEY (MY_KEY),
KEY AA (AA) )
ENGINE=InnoDB AUTO_INCREMENT=74358 DEFAULT CHARSET=latin1
Okay, first of all, when you add an index over multiple columns and you don't really use the first column, the index is useless.
Example: You have a query like
SELECT *
FROM deCoupledData
WHERE
stated_f = 5
AND not_stated_f = 10
and an index over (MY_KEY, STATED_F, NOT_STATED_F).
The index can only be used if you also have something like AND my_key = 1 in the WHERE clause.
Imagine you want to look up every person in a telephone book with first name 'John'. Then the knowledge that the book is sorted by last name is useless, you still have to look up every single name.
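For the example query above, the index should instead lead with the columns that are actually filtered on, for instance (a sketch; the index name is made up):

-- MY_KEY does not need to come first; the WHERE clause never constrains it
ALTER TABLE deCoupledData ADD INDEX idx_stated (STATED_F, NOT_STATED_F);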
Also, the primary key does not have to be a surrogate / artificial one. It's nearly always better to have a primary key which is made up of columns which identify each row uniquely anyway.
Also, it's not always good to have many indexes. Not only do indexes slow down INSERTs and UPDATEs, sometimes they just cause an extra lookup: first the index is consulted, and then a second lookup is needed to find the actual data.
That's just a few tips. Maybe Jordan's hint is not a bad idea, "You should maybe post a new question that has your actual SQL query, table layout, and performance questions".
UPDATE:
Yes, that is possible. According to the manual:
If you define a PRIMARY KEY on your table, InnoDB uses it as the clustered index.
which means that the data is practically sorted on disk, yes.
Be aware that it's also possible to define a primary key over multiple columns!
Like
CREATE TABLE deCoupledData (
AA double NOT NULL DEFAULT '0',
STATED_F double DEFAULT NULL,
NOT_STATED_F double DEFAULT NULL,
MIN_VALUES varchar(128) NOT NULL DEFAULT '-1,-1',
MY_KEY int(11) NOT NULL AUTO_INCREMENT,
PRIMARY KEY (NOT_STATED_F, STATED_F, AA),
KEY MY_KEY (MY_KEY),
KEY AA (AA) )
ENGINE=InnoDB AUTO_INCREMENT=74358 DEFAULT CHARSET=latin1
as long as the combination of the columns is unique.
I have a table called promotion_codes
CREATE TABLE promotion_codes (
id int(10) UNSIGNED NOT NULL auto_increment,
created_at datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
code varchar(255) NOT NULL,
order_id int(10) UNSIGNED NULL DEFAULT NULL,
allocated_at datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
PRIMARY KEY (id)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
This table is pre-populated with available codes that will be assigned to orders that meet a specific criteria.
What I need to ensure is that after the ORDER is created, that I obtain an available promotion code and update its record to reflect that it has been allocated.
I am not 100% sure how to avoid grabbing the same record twice if simultaneous requests come in.
I have tried locking the row during a SELECT and locking the row during an UPDATE - both still seem to allow a second (simultaneous) attempt to grab the same record, which is what I want to avoid.
UPDATE promotion_code
SET allocated_at = "' . $db_now . '", order_id = ' . $donation->id . '
WHERE order_id IS NULL LIMIT 1
You can add a second table which holds all used codes. That way you can use a unique constraint in the assignment table to make sure that a code is not assigned twice.
CREATE TABLE `used_codes` (
    `usage` INTEGER PRIMARY KEY AUTO_INCREMENT,
    -- UNIQUE makes sure that there are no two assignments of one code
    `id` INTEGER NOT NULL UNIQUE,
    `allocated_at` DATETIME NOT NULL
);
You add the ID of a used code into the used_codes table, and afterwards query which code you used. When these two operations are in one transaction, the entire transaction will fail when there is a second attempt to use the same code.
I did not test the following code, you might have to adjust it.
Also you need to make sure that your server meets the requirements for transactions.
-- These changes have to be atomic, so don't use autocommit
SET autocommit = 0;
START TRANSACTION;

-- Allocate one not-yet-used code by copying its id into used_codes
INSERT INTO `used_codes` (`id`, `allocated_at`)
SELECT `id`, NOW()
FROM `promotion_codes`
WHERE `id` NOT IN (SELECT `id` FROM `used_codes`)
LIMIT 1;

-- LAST_INSERT_ID() is per connection, so parallel transactions cannot
-- interfere with reading back the `usage` value generated just above.
SELECT `code` FROM `promotion_codes`
WHERE `id` = (SELECT `id` FROM `used_codes` WHERE `usage` = LAST_INSERT_ID());

COMMIT;
You can use the returned code if the transaction succeeded. If more than one process was running to use the same code, only one of them succeeds, while the rest fail with insert errors about the duplicated row. In your software you need to distinguish between the duplicate-row error and other errors, and re-execute the statement on duplication errors.
I have a table "Bestelling" with 4 columns: "Id" (PK), "KlantId", "Datum", "BestellingsTypeId", now I want to make the column Id auto_increment, however, when I try to do that, I get this error:
ERROR 1062: ALTER TABLE causes auto_increment resequencing, resulting in duplicate entry '1' for key 'PRIMARY'
SQL Statement:
ALTER TABLE `aafest`.`aafest_bestelling` CHANGE COLUMN `Id` `Id` INT(11) NOT NULL AUTO_INCREMENT
ERROR: Error when running failback script. Details follow.
ERROR 1046: No database selected
SQL Statement:
CREATE TABLE `aafest_bestelling` (
`Id` int(11) NOT NULL,
`KlantId` int(11) DEFAULT NULL,
`Datum` date DEFAULT NULL,
`BestellingstypeId` int(11) DEFAULT NULL,
PRIMARY KEY (`Id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1
Anyone got an idea?
This will happen if the table contains an existing record with an id of 0 (or negative). Updating all existing records to use positive values will allow auto_increment to be set on that column.
Edit: Some people asked how that 0 got in there. For clarification, the MySQL Reference Manual states that "For numeric types, the default is 0, with the exception that for integer or floating-point types declared with the AUTO_INCREMENT attribute, the default is the next value in the sequence." So, if you performed an insert on a table without providing a value for the numeric column before the auto_increment was enabled, then the default 0 would be used during the insert. More details may be found at https://dev.mysql.com/doc/refman/5.0/en/data-type-defaults.html.
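A sketch of how such a 0 can sneak in (assuming a non-strict sql_mode, where the implicit numeric default applies; the values are made up):

INSERT INTO aafest_bestelling (KlantId, Datum, BestellingstypeId)
VALUES (1, '2013-01-01', 1);
-- Id was omitted, so it silently defaults to 0; the ALTER that later adds
-- AUTO_INCREMENT resequences that 0 and collides with the existing id 1.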
I also had this issue when trying to convert a column to auto_increment where one row had a value of 0. An alternative to temporarily changing the 0 value is to set:
SET SESSION sql_mode='NO_AUTO_VALUE_ON_ZERO';
for the session.
This allowed the column to be altered to auto_increment with the zero id in place.
The zero isn't ideal - and I also wouldn't recommend it being used in an auto_increment column. Unfortunately it's part of an inherited data set so I'm stuck with it for now.
Best to clear the setting (and any others) afterwards with:
SET SESSION sql_mode='';
although it will be cleared when the current client session closes.
Full details on the NO_AUTO_VALUE_ON_ZERO setting can be found in the MySQL manual.
This happens when MySQL cannot determine a proper auto_increment value. In your case, MySQL chose 1 as the next auto_increment value, however there is already a row with that value in the table.
One way to resolve the issue is to choose a proper auto_increment value yourself:
ALTER TABLE ... CHANGE COLUMN `Id` `Id` INT(11) NOT NULL AUTO_INCREMENT, AUTO_INCREMENT = 123456;
(Note the AUTO_INCREMENT=123456 at the end.)
The easiest way that I have found to solve this issue is to first set the table's AUTO INCREMENT value before altering the column. Just make sure that you set the auto increment value higher than the largest value currently in that column:
ALTER TABLE `aafest`.`aafest_bestelling`
AUTO_INCREMENT = 100,
CHANGE COLUMN `Id` `Id` INT(11) NOT NULL AUTO_INCREMENT
I tested this on MySQL 5.7 and it worked great for me.
Edit: Don't know exactly how that would be caused, but I do have a workaround.
First, create a new table like the old one:
CREATE TABLE aafest_bestelling_new LIKE aafest_bestelling;
Then change the column
ALTER TABLE `aafest`.`aafest_bestelling_new`
CHANGE COLUMN `Id` `Id` INT(11) NOT NULL AUTO_INCREMENT
Dump in the new data:
INSERT INTO aafest_bestelling_new
(KlantId, Datum, BestellingstypeId)
SELECT
KlantId, Datum, BestellingstypeId
FROM aafest_bestelling;
Move the tables:
RENAME TABLE
aafest_bestelling TO aafest_bestelling_old,
aafest_bestelling_new TO aafest_bestelling;
Maybe there's some corruption going on, and this would fix that as well.
P.S.: As a Dutchman, I'd highly recommend coding in English ;)
I had a similar issue. The issue was that the table had a record with ID = 0, similar to what SystemParadox pointed out. I handled my issue with the following steps:
Steps:
Update record id 0 to be x where x = MAX(id)+1
Alter table to set primary key and auto increment setting
Set seed value to be x+1
Change record id x back to 0
Code Example:
UPDATE foo SET id = 100 WHERE id = 0;
ALTER TABLE foo MODIFY COLUMN id INT(11) NOT NULL AUTO_INCREMENT;
ALTER TABLE foo AUTO_INCREMENT = 101;
UPDATE foo SET id = 0 WHERE id = 100;
This happens because your primary key column already has values.
As the error says ...
ALTER TABLE causes auto_increment resequencing, resulting in duplicate entry '1' for key 'PRIMARY'
which means that your column already contains the value 1, and when the column is made auto_increment it gets resequenced, causing a duplicate and hence this error.
The solution is to remove the primary key constraint and empty the column, then alter the table to set the primary key again, this time with auto_increment.
This error occurs because the table contains an existing record with an id of 0 (or a negative value). Updating all existing records to use positive values will allow auto_increment to be set on that column.
If that doesn't work, export all the data and save it somewhere on your computer, recreate the table without the foreign key relation, fill the data into the parent table first, and only then add the relation back.
This error will also happen if you have a MyISAM table that has a composite AUTO_INCREMENT PRIMARY KEY and you are trying to combine the keys.
For example
CREATE TABLE test1 (
`id` int(11) NOT NULL,
`ver` int(10) unsigned NOT NULL AUTO_INCREMENT,
PRIMARY KEY (`id`,`ver`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8;
INSERT INTO test1 (`id`, `ver`) VALUES (1,NULL),(1,NULL),(1,NULL), (2,NULL),(2,NULL),(2,NULL);
ALTER TABLE test1 DROP PRIMARY KEY, ADD PRIMARY KEY(`ver`);
Not being able to set an existing column to auto_increment also happens if the column you're trying to modify is included in a foreign key relation in another table (although it won't produce the error message referred to in the question).
(I'm adding this answer even though it doesn't relate to the specific error message in the body of the question because this is the first result that shows up on Google when searching for issues relating to not being able to set an existing MySQL column to auto_increment.)