MySQL: Dupe Error on UPDATE

I've got a script running this update:
UPDATE `Cq_Item`
SET `rfq_item_id` = '9',
`value` = 'No Bid',
`datetime_created` = '2012-10-23T20:54:42+00:00',
`id` = '101'
WHERE `id` = '101'
Against this table:
CREATE TABLE IF NOT EXISTS `cq_item` (
`id` mediumint(8) unsigned NOT NULL AUTO_INCREMENT,
`rfq_item_id` mediumint(8) unsigned NOT NULL,
`product_id_quoted` mediumint(8) unsigned DEFAULT NULL,
`quantity` mediumint(6) unsigned DEFAULT '0',
`value` float(10,4) NOT NULL,
`datetime_created` datetime NOT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `rfq_item_id` (`rfq_item_id`,`product_id_quoted`,`quantity`,`value`),
KEY `product_id` (`product_id_quoted`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 AUTO_INCREMENT=102 ;
and it's throwing this error:
1062 - Duplicate entry '9-321742-1-0.0000' for key 'rfq_item_id'
Granted, I'm no SQL guru, but throwing a duplicate-key error on an UPDATE seems a little less than intuitive to me.
I understand why such an error would get thrown on an INSERT, but I could use some help figuring out what I'm doing wrong to get it on this UPDATE :)

Conceptually, an UPDATE behaves like removing the old version of the row and inserting the new one (when the WHERE conditions are met, of course), so the updated row must still satisfy every constraint on the table. You need to make sure that your UPDATE is not turning the row with id = 101 into a duplicate, under the unique key already in place, of some other row.
So say you had two rows like this before attempting the update
id  | rfq_item_id | product_id_quoted | quantity | value  | datetime_created
100 | 9           | 9                 | 1        | 0.0000 | <some datetime>
101 | 10          | 9                 | 1        | 0.0000 | <some datetime>
Your UPDATE would throw the exact error you are seeing, because the (rfq_item_id, product_id_quoted, quantity, value) unique key would then be identical for both rows.
I would guess your problem is related to trying to set the string 'No Bid' on the value column, which is defined as a float. The string is coerced to 0.0000 in the update.
Also, as mentioned in a comment on the original question, it is generally considered bad SQL practice to "set" the value of the primary key that you are using in the WHERE clause of an UPDATE statement, unless you really do have a reason to change the primary key as part of the update.
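If you want to see that coercion for yourself, casting the string reproduces it (a minimal sketch; under strict SQL mode the UPDATE would typically be rejected with a truncation error instead):
-- 'No Bid' cannot be parsed as a number, so MySQL truncates it to 0
SELECT CAST('No Bid' AS DECIMAL(10,4));  -- returns 0.0000, with a truncation warning
SHOW WARNINGS;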

Your table structure says that rfq_item_id is the leading column of a unique key, and the combination of values you are trying to set in the UPDATE already exists in the table. So MySQL will not let you run the update with rfq_item_id = '9'.

Based on your table description, you have a unique key:
UNIQUE KEY `rfq_item_id` (`rfq_item_id`,`product_id_quoted`,`quantity`,`value`)
You are attempting to perform an UPDATE with values that already meet the criteria of your UNIQUE KEY.
You already have a record with rfq_item_id = '9' and value = 0.0000 (which is what 'No Bid' becomes once coerced to a float), and the other two columns in the unique key also already match the values the row with id = '101' would end up with.
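To see exactly which existing row the UPDATE collides with, you can search for the key values from the error message (a quick diagnostic sketch, not part of the original answer):
SELECT *
FROM `cq_item`
WHERE `rfq_item_id` = 9
  AND `product_id_quoted` = 321742
  AND `quantity` = 1
  AND `value` = 0.0000
  AND `id` <> 101;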

Related

update column dedicated to specific order

Hello, I have a table as follows:
DROP TABLE IF EXISTS `Master_Product`;
CREATE TABLE IF NOT EXISTS `Master_Product` (
`KeyId_Product` bigint(21) UNSIGNED NOT NULL AUTO_INCREMENT COMMENT 'Id of table',
`ID_OrderInform` bigint(21) UNSIGNED NOT NULL COMMENT 'Order desired by the user for the printed report output',
`ID_OrderReport` bigint(21) UNSIGNED NOT NULL COMMENT 'Order desired by the user for the DataTable report view',
`Name_Product` varchar(200) COLLATE utf8_unicode_ci DEFAULT NULL COMMENT 'PDF Name on disk',
PRIMARY KEY (`KeyId_Product`),
UNIQUE KEY `KeyId` (`KeyId_Product`),
KEY `xID_OrderInform` (`ID_OrderInform`),
KEY `xID_OrderReport` (`ID_OrderReport`)
) ENGINE=InnoDB AUTO_INCREMENT=0 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci ROW_FORMAT=COMPACT;
Each time I insert a new product I need to fill this table with the order and the name, but I cannot use KeyId_Product to sort the printed report or the DataTable view, because some users need their own desired ordering.
To get that flexibility I use two additional columns to store the desired order. The problem occurs when a new product must be inserted between two existing products: every product with a higher ordering index must be pushed +1 to make room for the new one.
The only solution I have found is to run two additional UPDATE queries:
UPDATE
Master_Product
SET
ID_OrderInform = ID_OrderInform + 1
WHERE
ID_OrderInform>$NewitemOrderInform
and this other one:
UPDATE
Master_Product
SET
ID_OrderReport = ID_OrderReport + 1
WHERE
ID_OrderReport>$NewitemOrderReport
How can I do all of this in a single query? And if an error occurs while updating the other products, how do I roll back so that the new record is not added either?
This does both "at the same time":
UPDATE Master_Product
SET ID_OrderInform = ID_OrderInform + (ID_OrderInform > $NewitemOrderInform),
ID_OrderReport = ID_OrderReport + (ID_OrderReport > $NewitemOrderReport);
To explain: (ID_OrderInform > $NewitemOrderInform) is a boolean expression that evaluates to either false or true; false is represented as 0 and true as 1. So the statement adds the 'correct' amount (0 or 1) to each of the columns.
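To also get the rollback behaviour you asked about, wrap that UPDATE together with the INSERT of the new product in one transaction. This is only a sketch; the $Newitem... placeholders come from your question, while the assumption that the new row takes position $Newitem... + 1 and the sample file name are mine:
START TRANSACTION;

UPDATE Master_Product
SET ID_OrderInform = ID_OrderInform + (ID_OrderInform > $NewitemOrderInform),
    ID_OrderReport = ID_OrderReport + (ID_OrderReport > $NewitemOrderReport);

INSERT INTO Master_Product (ID_OrderInform, ID_OrderReport, Name_Product)
VALUES ($NewitemOrderInform + 1, $NewitemOrderReport + 1, 'new-product.pdf');

-- COMMIT on success; if either statement fails, issue ROLLBACK instead
COMMIT;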
Meanwhile, a PRIMARY KEY is a UNIQUE KEY, so get rid of the redundant UNIQUE KEY `KeyId` (`KeyId_Product`).
What other queries hit this table? It would probably be better to remove the indexes on the Inform and Report columns: a large fraction of each index has to be rewritten for every such UPDATE, and you probably only read all the rows anyway when you need the ordered list.
A nitpick: BIGINT is overkill.
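Putting those three suggestions together, the cleanup might look like this (a sketch; MEDIUMINT is just one reasonable smaller choice, and whether you can drop the two order indexes depends on your other queries):
ALTER TABLE Master_Product
  DROP INDEX `KeyId`,              -- redundant with the PRIMARY KEY
  DROP INDEX `xID_OrderInform`,    -- costly to maintain on every reorder
  DROP INDEX `xID_OrderReport`,
  MODIFY `ID_OrderInform` MEDIUMINT UNSIGNED NOT NULL,
  MODIFY `ID_OrderReport` MEDIUMINT UNSIGNED NOT NULL;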

Query fails to load data in the way I expect, overwrites existing data

I have the following query:
INSERT INTO `user_pen_names` (`uid`,`pnid`,`my_name`) VALUES ('7','200','stink') ON DUPLICATE KEY UPDATE `my_name`=values(`my_name`)
My table has the following columns defined:
id INT primary, auto-increment
uid INT unsigned, unique
pnid INT unsigned, unique
my_name VARCHAR(24)
I have one table entry already:
id(0), uid(7), pnid(100), my_name(test)
When I execute the above query, what I expected to see was two rows:
id(0), uid(7), pnid(100), my_name(test)
id(1), uid(7), pnid(200), my_name(stink)
What's happening, and I am one confused puppy because of it, is that the existing row is being modified...
id(0), uid(7), pnid(100), my_name(stink)
The same thing happens if I modify uid and pnid so they are no longer unique. Can anyone explain to me why this is happening?
EDIT: I made the two columns unique as a combination, using the following command:
ALTER TABLE `user_pen_names` ADD UNIQUE KEY `upn_unique_id` (`uid`, `pnid`)
I've not done this before, but theoretically, the INSERT command should only shift to its UPDATE sub-command when uid AND pnid match a row already in the table. Nevertheless, this also didn't work.
It works fine for me. I suspect you didn't run the test you thought you were running.
I tested on a Macbook with MySQL 8.0.1:
mysql> CREATE TABLE `user_pen_names` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`uid` int(10) unsigned DEFAULT NULL,
`pnid` int(10) unsigned DEFAULT NULL,
`my_name` varchar(24) DEFAULT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `uid` (`uid`,`pnid`)
);
mysql> INSERT INTO `user_pen_names` (id, `uid`,`pnid`,`my_name`) VALUES (0, '7','100','test');
mysql> INSERT INTO `user_pen_names` (`uid`,`pnid`,`my_name`) VALUES ('7','200','stink')
ON DUPLICATE KEY UPDATE `my_name`=values(`my_name`);
mysql> SELECT * FROM user_pen_names;
+----+------+------+---------+
| id | uid  | pnid | my_name |
+----+------+------+---------+
|  1 |    7 |  100 | test    |
|  2 |    7 |  200 | stink   |
+----+------+------+---------+
Note that when you insert a 0 into an auto-increment column, it generates a new id, starting at 1.
You have uid and pnid each declared as individually unique. Because you can't insert another row with uid = 7, the statement falls back to updating the row that already has uid = 7.
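If that is the case, the fix is to drop the two single-column unique indexes and keep only the composite one; the index names below are assumptions, so check the real names first:
SHOW INDEX FROM user_pen_names;   -- find the actual index names
ALTER TABLE user_pen_names
  DROP INDEX `uid`,               -- assumed name of the unique index on uid alone
  DROP INDEX `pnid`;              -- assumed name of the unique index on pnid alone
-- the composite key added earlier stays, so the ON DUPLICATE KEY path only
-- fires when the same (uid, pnid) pair is inserted twice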

Duplicating records in a mysql table intermittently causes statements to hang and not return

We are trying to duplicate existing records in a table: to make 10 records out of each one. The original table contains 75,000 records, and once the statements are done it will contain about 750,000 (10 times as many). The statements sometimes finish after 10 minutes, but often they never return, and hours later we receive a timeout. This happens about 1 out of 3 times. We are using a test database that nobody else is working on, so there is no concurrent access to the table. I don't see any way to optimise the SQL, since the EXPLAIN plan looks fine to me.
The database is MySQL 5.5, hosted on AWS RDS (db.m3.xlarge). The CPU load goes up to 50% while the statements run.
Question: What could cause this intermittent behaviour? How do I resolve it?
This is the SQL that creates a temporary table, produces roughly 9 new records in it for every existing record of ct_revenue_detail, and then copies the data from the temporary table back into ct_revenue_detail:
---------------------------------------------------------------------------------------------------------
-- CREATE TEMPORARY TABLE AND COPY ROLL-UP RECORDS INTO TABLE
---------------------------------------------------------------------------------------------------------
CREATE TEMPORARY TABLE ct_revenue_detail_tmp
SELECT r.month,
r.period,
a.participant_eid,
r.employee_name,
r.employee_cc,
r.assignments_cc,
r.lob_name,
r.amount,
r.gp_run_rate,
r.unique_id,
r.product_code,
r.smart_product_name,
r.product_name,
r.assignment_type,
r.commission_pcent,
r.registered_name,
r.segment,
'Y' as account_allocation,
r.role_code,
r.product_eligibility,
r.revenue_core,
r.revenue_ict,
r.primary_account_manager_id,
r.primary_account_manager_name
FROM ct_revenue_detail r
JOIN ct_account_allocation_revenue a
ON a.period = r.period AND a.unique_id = r.unique_id
WHERE a.period = 3 AND lower(a.rollup_revenue) = 'y';
This is the second query; it copies the records from the temporary table back into the ct_revenue_detail table:
INSERT INTO ct_revenue_detail(month,
period,
participant_eid,
employee_name,
employee_cc,
assignments_cc,
lob_name,
amount,
gp_run_rate,
unique_id,
product_code,
smart_product_name,
product_name,
assignment_type,
commission_pcent,
registered_name,
segment,
account_allocation,
role_code,
product_eligibility,
revenue_core,
revenue_ict,
primary_account_manager_id,
primary_account_manager_name)
SELECT month,
period,
participant_eid,
employee_name,
employee_cc,
assignments_cc,
lob_name,
amount,
gp_run_rate,
unique_id,
product_code,
smart_product_name,
product_name,
assignment_type,
commission_pcent,
registered_name,
segment,
account_allocation,
role_code,
product_eligibility,
revenue_core,
revenue_ict,
primary_account_manager_id,
primary_account_manager_name
FROM ct_revenue_detail_tmp;
This is the EXPLAIN PLAN of the SELECT:
+----+-------------+-------+------+------------------------+--------------+---------+------------------------------------+-------+-------------+
| id | select_type | table | type | possible_keys          | key          | key_len | ref                                | rows  | Extra       |
+----+-------------+-------+------+------------------------+--------------+---------+------------------------------------+-------+-------------+
|  1 | SIMPLE      | a     | ref  | ct_period,ct_unique_id | ct_period    | 4       | const                              | 38828 | Using where |
|  1 | SIMPLE      | r     | ref  | ct_period,ct_unique_id | ct_unique_id | 5       | optusbusiness_20160802.a.unique_id |   133 | Using where |
+----+-------------+-------+------+------------------------+--------------+---------+------------------------------------+-------+-------------+
This is the definition of ct_revenue_detail:
ct_revenue_detail | CREATE TABLE `ct_revenue_detail` (
`participant_eid` varchar(255) DEFAULT NULL,
`lob_name` varchar(255) DEFAULT NULL,
`amount` decimal(32,16) DEFAULT NULL,
`employee_name` varchar(255) DEFAULT NULL,
`period` int(11) NOT NULL DEFAULT '0',
`pk_id` int(11) NOT NULL AUTO_INCREMENT,
`gp_run_rate` decimal(32,16) DEFAULT NULL,
`month` int(11) DEFAULT NULL,
`assignments_cc` int(11) DEFAULT NULL,
`employee_cc` int(11) DEFAULT NULL,
`unique_id` int(11) DEFAULT NULL,
`product_code` varchar(50) DEFAULT NULL,
`smart_product_name` varchar(255) DEFAULT NULL,
`product_name` varchar(255) DEFAULT NULL,
`assignment_type` varchar(100) DEFAULT NULL,
`commission_pcent` decimal(32,16) DEFAULT NULL,
`registered_name` varchar(255) DEFAULT NULL,
`segment` varchar(100) DEFAULT NULL,
`account_allocation` varchar(25) DEFAULT NULL,
`role_code` varchar(25) DEFAULT NULL,
`product_eligibility` varchar(25) DEFAULT NULL,
`rollup` varchar(10) DEFAULT NULL,
`revised_amount` decimal(32,16) DEFAULT NULL,
`original_amount` decimal(32,16) DEFAULT NULL,
`comment` varchar(255) DEFAULT NULL,
`amount_revised_flag` varchar(255) DEFAULT NULL,
`exclude_segment` varchar(10) DEFAULT NULL,
`revenue_type` varchar(50) DEFAULT NULL,
`revenue_core` decimal(32,16) DEFAULT NULL,
`revenue_ict` decimal(32,16) DEFAULT NULL,
`primary_account_manager_id` varchar(100) DEFAULT NULL,
`primary_account_manager_name` varchar(100) DEFAULT NULL,
PRIMARY KEY (`pk_id`,`period`),
KEY `ct_participant_eid` (`participant_eid`),
KEY `ct_period` (`period`),
KEY `ct_employee_name` (`employee_name`),
KEY `ct_month` (`month`),
KEY `ct_segment` (`segment`),
KEY `ct_unique_id` (`unique_id`)
) ENGINE=InnoDB AUTO_INCREMENT=15338782 DEFAULT CHARSET=utf8
/*!50100 PARTITION BY HASH (period)
PARTITIONS 120 */ |
Edit 29.9: The intermittent behaviour was caused by the omission of a DELETE statement: the previously duplicated records were not removed from the original table before duplicating again. The first run is fine: we start with 75,000 records and end up with 750,000.
Because the DELETE was missing, the next run already had 750,000 records and the script made 7.5M out of them. That would still work, but the subsequent run, trying to turn 7.5M into 75M records, would fail. Hence 1 failure in 3 runs.
We would then run the scripts manually, and of course then we would clear the table properly, and everything would go well. The reason we didn't notice this earlier is that our application does not output anything while running the SQL.
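For what it is worth, a cleanup step along these lines before each run would have avoided it; the filter is our assumption that the roll-up copies are exactly the rows carrying the account_allocation = 'Y' flag set by the temporary-table SELECT above:
DELETE FROM ct_revenue_detail
WHERE period = 3
  AND account_allocation = 'Y';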
The real delay would be with your second query, inserting from the temporary table back into the original table. There are several issues here.
Sheer amount of data
Looking at your table, there are several varchar(255) columns; a conservative estimate would put the average length of your rows at about 2 KB. That is roughly 1.5 GB being copied from one table to another and distributed across different partitions. Partitioning makes reads more efficient, but for inserts the engine has to figure out which partition each row should go to, so it is actually writing to many different files instead of sequentially to one. For spinning disks, this is slow.
Rebuilding the indexes
One of the biggest costs of inserts is rebuilding the indexes. In your case you have many of them.
KEY `ct_participant_eid` (`participant_eid`),
KEY `ct_period` (`period`),
KEY `ct_employee_name` (`employee_name`),
KEY `ct_month` (`month`),
KEY `ct_segment` (`segment`),
KEY `ct_unique_id` (`unique_id`)
And some of these indexes, such as the one on employee_name, are on varchar(255) columns. That means pretty hefty indexes.
Solution part 1 - Normalize
Your database isn't normalized. Here is a classic example:
primary_account_manager_id varchar(100) DEFAULT NULL,
primary_account_manager_name varchar(100) DEFAULT NULL,
You should really have a table called account_manager, and these two fields should live there. primary_account_manager_id should probably be an integer column, and only that id should remain in your ct_revenue_detail table.
Similarly, you really shouldn't have employee_name, registered_name, etc. in this table. They should be in separate tables, linked to ct_revenue_detail by foreign keys.
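A minimal sketch of that normalization (the table and column names here are illustrative, not from the question):
CREATE TABLE account_manager (
    account_manager_id   INT UNSIGNED NOT NULL AUTO_INCREMENT,
    account_manager_name VARCHAR(100) NOT NULL,
    PRIMARY KEY (account_manager_id)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

-- ct_revenue_detail then carries only the id:
ALTER TABLE ct_revenue_detail
    ADD COLUMN account_manager_id INT UNSIGNED NULL;
-- (a FOREIGN KEY could enforce the link, but InnoDB does not allow foreign keys
--  on partitioned tables, so that has to wait until the partitioning is removed)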
Solution part 2 - Rethink indexes.
Do you need so many? MySQL generally uses only one index per table for a given query, so some of these indexes are probably never used. Is this one really needed:
KEY `ct_unique_id` (`unique_id`)
You already have a primary key; why do you need another supposedly unique column?
Indexes for the SELECT: For
SELECT ...
FROM ct_revenue_detail r
JOIN ct_account_allocation_revenue a
ON a.period = r.period AND a.unique_id = r.unique_id
WHERE a.period = 3 AND lower(a.rollup_revenue) = 'y';
a needs INDEX(period, rollup_revenue), in either order. However, you also need to declare rollup_revenue with a ..._ci collation and avoid wrapping the column in a function; that is, change lower(a.rollup_revenue) = 'y' to a.rollup_revenue = 'y'.
r needs INDEX(period, unique_id), in either order. But, as e4c5 mentioned, if unique_id really is unique in this table, then take advantage of that.
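In concrete terms, the suggested composite indexes might look like this (the index names are illustrative; note that the single-column ct_period index on ct_revenue_detail then becomes redundant):
ALTER TABLE ct_account_allocation_revenue
    ADD INDEX idx_period_rollup (period, rollup_revenue);

ALTER TABLE ct_revenue_detail
    ADD INDEX idx_period_unique_id (period, unique_id);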
Bulkiness is a problem when shoveling data around.
decimal(32,16) takes 16 bytes and gives you precision and range that are probably unnecessary. Consider FLOAT (4 bytes, ~7 significant digits, adequate range) or DOUBLE (8 bytes, ~16 significant digits, adequate range).
month int(11) takes 4 bytes. If that is just a value 1..12, then use TINYINT UNSIGNED (1 byte).
DEFAULT NULL -- I suspect most columns will never be NULL; if so, say NOT NULL for them.
amount_revised_flag varchar(255) -- if that is really a "flag", such as "yes"/"no", then use an ENUM and save lots of space.
It is uncommon to have both an id and a name in the same table (see primary_account_manager*); that is usually relegated to a "normalization table".
"Normalize" (already mentioned by #e4c5).
HASH partitioning
Hash partitioning is virtually useless. Unless you can justify it (preferably with a benchmark), I recommend removing partitioning. More discussion.
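If you do decide to drop it, the statement is a one-liner, though it rebuilds the whole table, so expect it to take a while on 750,000+ rows:
ALTER TABLE ct_revenue_detail REMOVE PARTITIONING;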
Adding or removing partitioning usually involves changing the indexes. Please show us the main queries so we can help you build suitable indexes (especially composite indexes) for the queries.

MySQL update query locks table for insertion

I am in a problematic situation. I have found dozens of questions on the same topic, but maybe I am just not able to map those solutions onto my issue.
I have a system built with CodeIgniter, and it does the following:
codeigniter->start_transaction()
UPDATE T SET A = 1, MODIFIED = NOW()
WHERE PK IN
( SELECT PK FROM
(SELECT PK, LAST_INSERT_ID(PK) FROM T
where FK = 31 AND A=0 AND R=1 AND R_FK = 21
AND DEAD = 0 LIMIT 0,1) AS TBL1
) and A=0 AND R = 1 AND R_FK = 21 AND DEAD = 0
-- What this query does: it dynamically picks one row that is not dead yet,
-- not yet assigned, and linked to id 21 (R_FK) in the R table;
-- when it finds such a row, it marks it as assigned (A = 1).
-- PK = LAST_INSERT_ID(PK) ensures that LAST_INSERT_ID() is set to this row's id, so I can retrieve it from PHP.
GOTO MODULE B
MODULE B {
INSERT INTO T(A,B,C,D,E,F,R,FK,R_FK,DEAD,MODIFIED) VALUES(ALL VALUES)
-- this line gives me lock wait timeout exceeded.
}
MySQL version is 5.1.63-community-log
Table T is an InnoDB table and has only one ordinary index, on the FK field; there are no foreign key constraints. The primary key field (PK) is an AUTO_INCREMENT field.
I get a lock wait timeout in the above case, and that is because the first transactional UPDATE holds locks on the table. How can I avoid locking the table with that UPDATE while still using transactions? I cannot commit the transaction until I receive a response from MODULE B.
I don't have much detailed knowledge about databases and their internals, so please bear with me if something doesn't make sense.
--UPDATE--
-- TABLE T Structure
CREATE TABLE `T` (
`PK` int(11) NOT NULL AUTO_INCREMENT,
`FK` int(11) DEFAULT NULL,
`P` varchar(1024) DEFAULT NULL,
`DEAD` tinyint(1) NOT NULL DEFAULT '0',
`A` tinyint(1) NOT NULL DEFAULT '0',
`MODIFIED` datetime DEFAULT NULL,
`R` tinyint(4) NOT NULL DEFAULT '0',
`R_FK` int(11) NOT NULL DEFAULT '0',
PRIMARY KEY (`PK`),
KEY `FK_REFERENCE_54` (`FK`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
-- Indexes Information
SHOW INDEX FROM T;
1 - Field FK, Cardinality 65, NULL => Yes, Index_Type => BTREE
2 - Field PK, Cardinality 11153, Index_Type => BTREE
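No accepted fix is shown here, but one commonly suggested mitigation (a sketch, based on general InnoDB locking behaviour rather than anything stated above) is to give the UPDATE's filter a matching composite index, so InnoDB only has to examine, and therefore lock, the few rows that actually match instead of every row it reads through the FK index:
-- all filter columns are equality predicates, so their order within the index is flexible
ALTER TABLE T ADD INDEX idx_assignable (FK, R_FK, R, A, DEAD);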

auto-increment value in update conflicts with internally generated values

I've been getting this error from an INSERT ... ON DUPLICATE KEY UPDATE query in MySQL, randomly every now and then.
Any idea what's going on? I can't seem to reproduce the error consistently: sometimes it occurs and sometimes it doesn't.
Here is the query in question:
INSERT INTO friendships (u_id_1,u_id_2,status) VALUES (?,?,'active') ON DUPLICATE KEY UPDATE id=LAST_INSERT_ID(id);
And the schema describing the table is:
DROP TABLE IF EXISTS `friendships`;
CREATE TABLE `friendships` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`u_id_1` int(11) NOT NULL,
`u_id_2` int(11) NOT NULL,
`status` enum('active','pending','rejected','blocked') DEFAULT 'pending' NOT NULL,
`initiatiator` enum('1','2','system') DEFAULT 'system' NOT NULL,
`terminator` enum('1','2','system') DEFAULT NULL,
`confirm_timestamp` timestamp NULL DEFAULT NULL,
`created` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
PRIMARY KEY (`id`),
UNIQUE KEY (`u_id_1`,`u_id_2`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
Your ON DUPLICATE KEY UPDATE clause is the part the error is complaining about.
On the duplicate-key path you are assigning to id, the AUTO_INCREMENT primary key. The id = LAST_INSERT_ID(id) idiom is commonly used so that LAST_INSERT_ID() returns the id of the existing row, but it still counts as an UPDATE of the auto-increment column, and that assignment is what intermittently conflicts with the values MySQL generates internally.
If your goal is to either
Insert a new row, or
Update an existing row with 'active'
Then
INSERT INTO friendships (u_id_1,u_id_2,status)
VALUES ( ? , ? ,'active')
ON DUPLICATE KEY UPDATE
status = 'active'; -- I changed this
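If the reason for the id = LAST_INSERT_ID(id) idiom was to find out which row you ended up with, one alternative (a sketch, not part of the original answer) is to avoid assigning to the auto-increment column entirely and look the row up by its natural key afterwards:
INSERT INTO friendships (u_id_1, u_id_2, status)
VALUES (?, ?, 'active')
ON DUPLICATE KEY UPDATE status = 'active';

SELECT id FROM friendships WHERE u_id_1 = ? AND u_id_2 = ?;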
A separate consideration is to check the source for duplicates. I had a simple audit table
INSERT INTO table (field1, field2, ... , field3)
SELECT ...
ON DUPLICATE KEY UPDATE row_id = row_id;
where field1 is indexed but not UNIQUE, and row_id is an INTEGER UNSIGNED AUTO_INCREMENT PRIMARY KEY.
Ran for years, but an unexpected duplicate row triggered this error.
Fixed by de-duping the source.
Possibly a trivial point to many readers here, but it cost me some head-scratching (followed by a facepalm).