Update based on select data - mysql

I have a query which returns
Primary Key | Value
and I want to update my data so that, for each returned primary key, a field is set to the returned value. So basically I have
SELECT
id,
custom_value
FROM custom
JOIN user
USING (user_id)
WHERE id = 45
AND custom_name = "stuff";
And this generates
id | custom_value
1 | stuff
2 | stuff 2
And I want to then UPDATE the existing db with
UPDATE table SET field = custom_value WHERE id = id;
Or if I used the first row it would be
UPDATE table SET field = 'stuff' WHERE id = 1;
How could I do this?
So...
SELECT UPDATE table
id, -> WHERE id = id
custom_value -> SET field = custom_value
FROM custom
JOIN user
USING (user_id)
WHERE id = 45
AND custom_name = "stuff";
Select data then update another table with that data.

Two steps:
create temporary table tmp_tbl as select .... your select goes here;
then
update table, tmp_tbl
set table.field = tmp_tbl.custom_value
where table.id = tmp_tbl.id
and then, optionally, to clean up the unneeded data:
drop table tmp_tbl
(Or let MySQL drop it automatically on session closure.)
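For the tables in the question the two steps might look like this (a sketch; target_table and field stand in for the real table and column names, which the question doesn't give):
-- step 1: capture the select result in a temporary table
create temporary table tmp_tbl as
select id, custom_value
from custom
join user using (user_id)
where id = 45
and custom_name = "stuff";
-- step 2: update the target table from the temporary table, matching on id
update target_table, tmp_tbl
set target_table.field = tmp_tbl.custom_value
where target_table.id = tmp_tbl.id;
drop temporary table tmp_tbl;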


ON DUPLICATE KEY doesn't work with my INSERT INTO script

I have two tables. One is permission_index and the other is permission_index_bkp.
My script is supposed to transfer some rows and data from permission_index to permission_index_bkp. However, I don't want to create new rows if the data in all 4 columns is the same.
This is what I am trying to use:
INSERT INTO permission_index_bkp
(b_app, b_perm_type, b_perm_type_id, b_perm_3)
SELECT app, perm_type, perm_type_id, perm_3
FROM permission_index
WHERE app = 'module'
AND perm_type = '1'
AND perm_type_id = '2'
AND perm_3 IS NOT NULL
ON DUPLICATE KEY UPDATE app = app,
perm_type = perm_type,
perm_type_id = perm_type_id,
perm_3 = perm_3;
I've been getting errors like:
1052: Column 'app' in field list is ambiguous (I think I fixed this by changing the column names in permission_index_bkp).
or
1054: Unknown column 'app' in 'field list'
Could you guys tell me how to fix my query?
Thanks!!
EDIT 1:
This has made the errors disappear, however, rows keep being added in permission_index_bkp, even though they have the same values:
INSERT INTO permission_index_bkp
(b_app, b_perm_type, b_perm_type_id, b_perm_3)
SELECT app, perm_type, perm_type_id, perm_3
FROM permission_index AS pi
WHERE app = 'module'
AND perm_type = '1'
AND perm_type_id = '2'
AND perm_3 IS NOT NULL
ON DUPLICATE KEY UPDATE b_app = b_app,
b_perm_type = b_perm_type,
b_perm_type_id = b_perm_type_id,
b_perm_3 = b_perm_3;
Use VALUES() in the ON DUPLICATE KEY clause to specify that you want to update a column to the value that would have been inserted.
And you need to be assigning to the column names in the destination table, not the names in the table you're selecting from, so they should all be b_XXX.
INSERT INTO permission_index_bkp
(b_app, b_perm_type, b_perm_type_id, b_perm_3)
SELECT app, perm_type, perm_type_id, perm_3
FROM permission_index
WHERE app = 'module'
AND perm_type = '1'
AND perm_type_id = '2'
AND perm_3 IS NOT NULL
ON DUPLICATE KEY UPDATE
b_app = VALUES(b_app),
b_perm_type = VALUES(b_perm_type),
b_perm_type_id = VALUES(b_perm_type_id),
b_perm_3 = VALUES(b_perm_3);
Also, you don't need to include the unique key column(s) in the ON DUPLICATE KEY UPDATE clause.
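Also note that ON DUPLICATE KEY only fires when the inserted row collides with a UNIQUE or PRIMARY KEY index on the destination table. The fact that rows keep being added in EDIT 1 suggests permission_index_bkp has no such key over these four columns, so one needs to be added first (a sketch; the index name is made up):
ALTER TABLE permission_index_bkp
ADD UNIQUE KEY uk_permission_index_bkp (b_app, b_perm_type, b_perm_type_id, b_perm_3);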
Though I have never used this syntax, after some searching on Google I found that you need to declare a unique key on all four columns of your destination table (permission_index_bkp) so that ON DUPLICATE KEY can detect the duplicate entries.
Please check the example below:
create table test (a int, b int);
insert into test values(1,1);
insert into test values(1,2);
create table test2 (a int, b int,
UNIQUE KEY uktest2 (a,b));
INSERT INTO test2
(a,b)
SELECT a,b
FROM test b
WHERE a=1
ON DUPLICATE KEY UPDATE a = b.a,
b=b.b;
select * from test;
select * from test2;
Output:
table test:
+------+------+
| a    | b    |
+------+------+
|    1 |    1 |
|    1 |    2 |
+------+------+
table test2:
+------+------+
| a    | b    |
+------+------+
|    1 |    1 |
|    1 |    2 |
+------+------+
insert into test values(1,3);
insert into test values(1,1);
INSERT INTO test2
(a,b)
SELECT a,b
FROM test b
WHERE a=1
ON DUPLICATE KEY UPDATE a = b.a,
b=b.b;
select * from test;
select * from test2;
Output:
table test:
+------+------+
| a    | b    |
+------+------+
|    1 |    1 |
|    1 |    2 |
|    1 |    3 |
|    1 |    1 |
+------+------+
table test2:
+------+------+
| a    | b    |
+------+------+
|    1 |    1 |
|    1 |    2 |
|    1 |    3 |
+------+------+
db<>fiddle here

Update field with another auto increment field value MySQL

I have a table in MYSQL database with two fields:
Id (auto increment field).
Post_Id.
When I insert a new record, both fields should end up with the same value. So I need to update post_id with the Id value, and at the same time make sure that I update the field with the right value, not with the value of some other newly inserted record.
I tried this SQL statement, but it was very slow and I was not sure that it selects the right value:
set @auto_id := (SELECT AUTO_INCREMENT
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_NAME='table_name'
AND TABLE_SCHEMA=DATABASE() );
update table_name set post_id= @auto_id where id=@auto_id ;
I don't have much experience with MySQL and I cannot change the table structure.
The approach you followed is not transaction-safe either.
The best option I can think of is to use a trigger.
Edit: according to @lagripe's suggestions:
CREATE TRIGGER sometrigger
BEFORE INSERT ON sometable
FOR EACH ROW
BEGIN
SET NEW.post_id := (SELECT id FROM sometable ORDER BY id DESC LIMIT 1) + 1; -- you may need the +1 here; I couldn't test it.
END
or you may consider using LAST_INSERT_ID:
insert into table_name values ( .... );
update table_name set post_id = LAST_INSERT_ID() where id = LAST_INSERT_ID();
but why do you need two columns with the same id in the first place?
If you really do, why not use computed/generated columns?
CREATE TABLE Table1(
id DOUBLE,
post_id DOUBLE as (id)
);
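As a quick check (the value 42 is just an example), inserting only id then populates post_id automatically:
INSERT INTO Table1 (id) VALUES (42);
SELECT id, post_id FROM Table1; -- both columns return 42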
you can use triggers :
CREATE TRIGGER UpdatePOST_id
BEFORE INSERT ON table_db
FOR EACH ROW
SET NEW.post_id := (select id from table_db order by id DESC LIMIT 1)+1 ;
from now on, whatever value you insert into the post_id column will be replaced with the id that is assigned automatically.
Test :
|id|post_id|
|20| 20 |
|21| 21 |
|22| 22 |
|23| 23 |
To drop the trigger :
DROP trigger UpdatePOST_id

MySQL loop for every row and update

I have a table called users, and for example it looks like:
Name ID
Tom 1
Al 55
Kate 22
...
The problem is: the IDs are not in sequence.
I would like to give them new IDs from 1 to the number of users. My idea was to declare some var=1, run an UPDATE in a loop setting the new ID = var, and then do var=var+1 until var reaches the number of users.
How can I do this?
Thank you very much!
Here is how you would do that in MySQL. Just run this:
set @newid=0;
update users set ID = (@newid:=@newid+1) order by ID;
If the ID in the Users table is not referenced by other tables by FK, the following query can update the ID in the table to have new consecutive values:
CREATE TABLE IF NOT EXISTS tmpUsers (
ID int not null,
newID int not null auto_increment primary key
) engine = MyISAM;
INSERT INTO tmpUsers (ID,newID)
SELECT ID,NULL
FROM users
ORDER BY ID;
UPDATE users u INNER JOIN tmpUsers t
ON u.ID=t.ID
SET u.ID=t.NewID;
DROP TABLE IF EXISTS tmpUsers;
Test script:
CREATE TABLE users (ID int not null, name nvarchar(128) not null);
INSERT users(ID,name)
VALUES (1,'aaa'),(4,'bbb'),(7,'ggg'),(17,'ddd');
SELECT * FROM users;
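Running the first answer's renumbering against this test script should leave the IDs consecutive (a sketch of the expected result):
SET @newid = 0;
UPDATE users SET ID = (@newid := @newid + 1) ORDER BY ID;
SELECT * FROM users ORDER BY ID;
-- expected: (1,'aaa'), (2,'bbb'), (3,'ggg'), (4,'ddd')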

add duplicate value for primary key

I have a table with fields (id, brand, model, os)
id as primary key
the table has ~ 6000 rows
Now I want to add a new row with id=4012 (that id already exists) and increment id by 1 for every row with id >= 4012.
silliest way :
make table backup
remove entries with id >= 4012
insert new entry with id = 4012
restore table from backup
stupid, but works ))
Looking for a more beautiful solution.
Thx
table structure :
CREATE TABLE IF NOT EXISTS `mobileslist` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`brand` text NOT NULL,
`model` text NOT NULL,
`os` text NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=14823 ;
I tried:
UPDATE mobileslist SET id = id + 1 WHERE id IN (SELECT id FROM
mobileslist WHERE id >= 4822 ORDER BY id);
but got this error:
1093 - You can't specify target table 'mobileslist' for update in FROM clause
1) Create a temporary table, with descending order by ID.
2) Perform an UPDATE query on the temporary table which sets ID = ID + 1 WHERE ID >= 4012
3) Drop the temporary table
4) Perform your insert operation on the original table.
Hope I understood it right: you want to insert a new entry at position 4012, moving and reassigning all the entries present at id = 4012 or more to a new id incremented by 1.
Hope this helps.
Try this:
UPDATE <TableName>
SET
id = id + 1
WHERE id IN (SELECT id FROM <TableName> WHERE id >= 4012 ORDER BY id)
INSERT INTO <TableName>
(id , brand, model , os)
VALUE
(4012, "<BrandName>", "<Model>", "<OS>")
Updated Answer: shift every id at or above 4012 up past the current maximum so nothing collides, insert the new row at 4012, then shift the moved rows back down to their final positions (original id + 1):
SELECT MAX(id) + 1 - 4012 INTO @Difference FROM mobileslist;
UPDATE mobileslist
SET
id = id + @Difference
WHERE id >= 4012;
INSERT INTO mobileslist
(id , brand, model , os)
VALUES
(4012, "TestBrand", "TestModel", "TestOS");
UPDATE mobileslist
SET
id = id - @Difference + 1
WHERE id > 4012
ORDER BY id;
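Alternatively, a shorter route (a sketch; the brand/model/os values are placeholders) is to shift the ids in descending order so no two rows ever collide, then insert into the freed slot:
-- highest ids move first, so each row lands on a free id
UPDATE mobileslist SET id = id + 1 WHERE id >= 4012 ORDER BY id DESC;
INSERT INTO mobileslist (id, brand, model, os)
VALUES (4012, "SomeBrand", "SomeModel", "SomeOS");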

How to delete duplicates on a MySQL table?

I need to DELETE duplicated rows for specified sid on a MySQL table.
How can I do this with an SQL query?
DELETE (DUPLICATED TITLES) FROM table WHERE SID = "1"
Something like this, but I don't know how to do it.
This removes duplicates in place, without making a new table.
ALTER IGNORE TABLE `table_name` ADD UNIQUE (title, SID)
Note: This only works well if the index fits in memory. Also note that ALTER IGNORE TABLE was removed in MySQL 5.7, so this only applies to older versions.
Suppose you have a table employee, with the following columns:
employee (id, first_name, last_name, start_date)
In order to delete the rows with a duplicate first_name column:
delete
from employee using employee,
employee e1
where employee.id > e1.id
and employee.first_name = e1.first_name
Deleting duplicate rows in MySQL in-place, (Assuming you have a timestamp col to sort by) walkthrough:
Create the table and insert some rows:
create table penguins(foo int, bar varchar(15), baz datetime);
insert into penguins values(1, 'skipper', now());
insert into penguins values(1, 'skipper', now());
insert into penguins values(3, 'kowalski', now());
insert into penguins values(3, 'kowalski', now());
insert into penguins values(3, 'kowalski', now());
insert into penguins values(4, 'rico', now());
select * from penguins;
+------+----------+---------------------+
| foo | bar | baz |
+------+----------+---------------------+
| 1 | skipper | 2014-08-25 14:21:54 |
| 1 | skipper | 2014-08-25 14:21:59 |
| 3 | kowalski | 2014-08-25 14:22:09 |
| 3 | kowalski | 2014-08-25 14:22:13 |
| 3 | kowalski | 2014-08-25 14:22:15 |
| 4 | rico | 2014-08-25 14:22:22 |
+------+----------+---------------------+
6 rows in set (0.00 sec)
Remove the duplicates in place:
delete a
from penguins a
left join(
select max(baz) maxtimestamp, foo, bar
from penguins
group by foo, bar) b
on a.baz = b.maxtimestamp and
a.foo = b.foo and
a.bar = b.bar
where b.maxtimestamp IS NULL;
Query OK, 3 rows affected (0.01 sec)
select * from penguins;
+------+----------+---------------------+
| foo | bar | baz |
+------+----------+---------------------+
| 1 | skipper | 2014-08-25 14:21:59 |
| 3 | kowalski | 2014-08-25 14:22:15 |
| 4 | rico | 2014-08-25 14:22:22 |
+------+----------+---------------------+
3 rows in set (0.00 sec)
You're done, duplicate rows are removed, last one by timestamp is kept.
For those of you without a timestamp or unique column.
You don't have a timestamp or a unique index column to sort by? You're living in a state of degeneracy. You'll have to do additional steps to delete duplicate rows.
create the penguins table and add some rows
create table penguins(foo int, bar varchar(15));
insert into penguins values(1, 'skipper');
insert into penguins values(1, 'skipper');
insert into penguins values(3, 'kowalski');
insert into penguins values(3, 'kowalski');
insert into penguins values(3, 'kowalski');
insert into penguins values(4, 'rico');
select * from penguins;
# +------+----------+
# | foo | bar |
# +------+----------+
# | 1 | skipper |
# | 1 | skipper |
# | 3 | kowalski |
# | 3 | kowalski |
# | 3 | kowalski |
# | 4 | rico |
# +------+----------+
make a clone of the first table and copy into it.
drop table if exists penguins_copy;
create table penguins_copy as ( SELECT foo, bar FROM penguins );
#add an autoincrementing primary key:
ALTER TABLE penguins_copy ADD moo int AUTO_INCREMENT PRIMARY KEY first;
select * from penguins_copy;
# +-----+------+----------+
# | moo | foo | bar |
# +-----+------+----------+
# | 1 | 1 | skipper |
# | 2 | 1 | skipper |
# | 3 | 3 | kowalski |
# | 4 | 3 | kowalski |
# | 5 | 3 | kowalski |
# | 6 | 4 | rico |
# +-----+------+----------+
The max aggregate operates upon the new moo index:
delete a from penguins_copy a left join(
select max(moo) myindex, foo, bar
from penguins_copy
group by foo, bar) b
on a.moo = b.myindex and
a.foo = b.foo and
a.bar = b.bar
where b.myindex IS NULL;
#drop the extra column on the copied table
alter table penguins_copy drop moo;
select * from penguins_copy;
#drop the first table and put the copy table back:
drop table penguins;
create table penguins select * from penguins_copy;
observe and cleanup
drop table penguins_copy;
select * from penguins;
+------+----------+
| foo | bar |
+------+----------+
| 1 | skipper |
| 3 | kowalski |
| 4 | rico |
+------+----------+
Elapsed: 1458.359 milliseconds
What's that big SQL delete statement doing?
Table penguins with alias 'a' is left joined to a subset of table penguins aliased 'b'. The right-hand table 'b', which is that subset, finds the max timestamp [or max moo] grouped by columns foo and bar. This is matched to the left-hand table 'a'. (foo, bar, baz) on the left has every row in the table. The right-hand subset 'b' has a (maxtimestamp, foo, bar) which is matched to the left only on the row that IS the max.
Every row that is not that max has a maxtimestamp of NULL. Filter down to those NULL rows and you have the set of all rows, grouped by foo and bar, that don't have the latest timestamp baz. Delete those ones.
Make a backup of the table before you run this.
Prevent this problem from ever happening again on this table:
If you got this to work and it put out your "duplicate row" fire, great. Now define a new composite unique key on your table (on those two columns) to prevent more duplicates from being added in the first place.
Like a good immune system, the bad rows shouldn't even be allowed in to the table at the time of insert. Later on all those programs adding duplicates will broadcast their protest, and when you fix them, this issue never comes up again.
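For the penguins walkthrough above that could look like this (a sketch; name the key whatever you like):
ALTER TABLE penguins ADD UNIQUE KEY uk_foo_bar (foo, bar);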
The following removes duplicates for all SIDs, not only a single one.
With temp table
CREATE TABLE table_temp AS
SELECT * FROM table GROUP BY title, SID;
DROP TABLE table;
RENAME TABLE table_temp TO table;
Since table_temp is freshly created, it has no indexes. You'll need to recreate them after removing duplicates. You can check what indexes you have in the table with SHOW INDEXES IN table
Without temp table:
DELETE FROM `table` WHERE id IN (
SELECT all_duplicates.id FROM (
SELECT id FROM `table` WHERE (`title`, `SID`) IN (
SELECT `title`, `SID` FROM `table` GROUP BY `title`, `SID` having count(*) > 1
)
) AS all_duplicates
LEFT JOIN (
SELECT id FROM `table` GROUP BY `title`, `SID` having count(*) > 1
) AS grouped_duplicates
ON all_duplicates.id = grouped_duplicates.id
WHERE grouped_duplicates.id IS NULL
)
After running into this issue myself, on a huge database, I wasn't completely impressed with the performance of any of the other answers. I want to keep only the latest duplicate row, and delete the rest.
In a one-query statement, without a temp table, this worked best for me,
DELETE e.*
FROM employee e
WHERE id IN
(SELECT id
FROM (SELECT MIN(id) as id
FROM employee e2
GROUP BY first_name, last_name
HAVING COUNT(*) > 1) x);
The only caveat is that I have to run the query multiple times, but even with that, I found it worked better for me than the other options.
This always seems to work for me:
CREATE TABLE NoDupeTable LIKE DupeTable;
INSERT NoDupeTable SELECT * FROM DupeTable group by CommonField1,CommonFieldN;
Which keeps the lowest ID on each of the dupes and the rest of the non-dupe records.
I've also taken to doing the following so that the dupe issue no longer occurs after the removal:
CREATE TABLE NoDupeTable LIKE DupeTable;
Alter table NoDupeTable Add Unique `Unique` (CommonField1,CommonField2);
INSERT IGNORE NoDupeTable SELECT * FROM DupeTable;
In other words, I create a duplicate of the first table, add a unique index on the fields I don't want duplicates of, and then do an INSERT IGNORE, which, rather than failing the first time it tries to add a duplicate record based on the two fields (as a normal INSERT would), simply ignores any such records.
Going forward, it becomes impossible to create any duplicate records based on those two fields.
The following works for all tables
CREATE TABLE `noDup` LIKE `Dup` ;
INSERT `noDup` SELECT DISTINCT * FROM `Dup` ;
DROP TABLE `Dup` ;
ALTER TABLE `noDup` RENAME `Dup` ;
Here is a simple answer:
delete a from target_table a left JOIN (select max(id_field) as id_field, field_being_repeated
from target_table GROUP BY field_being_repeated) b
on a.field_being_repeated = b.field_being_repeated
and a.id_field = b.id_field
where b.id_field is null;
This works for me to remove old records:
delete from table where id in
(select min(e.id)
from (select * from table) e
group by column1, column2
having count(*) > 1
);
You can replace min(e.id) with max(e.id) to remove the newest records instead.
delete p from
product p
inner join (
select max(id) as id, url from product
group by url
having count(*) > 1
) unik on unik.url = p.url and unik.id != p.id;
I find Werner's solution above to be the most convenient because it works regardless of the presence of a primary key, doesn't mess with tables, uses future-proof plain sql, is very understandable.
As I stated in my comment, that solution hasn't been properly explained though.
So this is mine, based on it.
1) add a new boolean column
alter table mytable add tokeep boolean;
2) add a constraint on the duplicated columns AND the new column
alter table mytable add constraint preventdupe unique (mycol1, mycol2, tokeep);
3) set the boolean column to true. Because of the new constraint, this will succeed on only one row of each set of duplicates
update ignore mytable set tokeep = true;
4) delete rows that have not been marked as tokeep
delete from mytable where tokeep is null;
5) drop the added column
alter table mytable drop tokeep;
I suggest that you keep the constraint you added, so that new duplicates are prevented in the future.
This procedure will remove all duplicates (incl multiples) in a table, keeping the last duplicate. This is an extension of Retrieving last record in each group
Hope this is useful to someone.
DROP TABLE IF EXISTS UniqueIDs;
CREATE Temporary table UniqueIDs (id Int(11));
INSERT INTO UniqueIDs
(SELECT T1.ID FROM Table T1 LEFT JOIN Table T2 ON
(T1.Field1 = T2.Field1 AND T1.Field2 = T2.Field2 #Comparison Fields
AND T1.ID < T2.ID)
WHERE T2.ID IS NULL);
DELETE FROM Table WHERE id NOT IN (SELECT ID FROM UniqueIDs);
Another easy way... using UPDATE IGNORE:
You have to use an index on one or more columns (a unique index).
Create a new temporary reference column (not part of the index). In this column, you mark the unique rows by updating it with the IGNORE clause. Step by step:
Add a temporary reference column to mark the uniques:
ALTER TABLE `yourtable` ADD `unique` VARCHAR(3) NOT NULL AFTER `lastcolname`;
=> this will add a column to your table.
Update the table, trying to mark everything as unique, but ignore possible errors due to duplicate keys (those records will be skipped):
UPDATE IGNORE `yourtable` SET `unique` = 'Yes' WHERE 1;
=> you will find your duplicate records will not be marked as unique = 'Yes', in other words only one of each set of duplicate records will be marked as unique.
Delete everything that's not unique:
DELETE FROM `yourtable` WHERE `unique` <> 'Yes';
=> This will remove all duplicate records.
Drop the column...
ALTER TABLE `yourtable` DROP `unique`;
If you want to keep the row with the lowest id value:
DELETE n1 FROM `yourTableName` n1, `yourTableName` n2 WHERE n1.id > n2.id AND n1.email = n2.email
If you want to keep the row with the highest id value:
DELETE n1 FROM `yourTableName` n1, `yourTableName` n2 WHERE n1.id < n2.id AND n1.email = n2.email
Deleting duplicates on MySQL tables is a common issue, that usually comes with specific needs. In case anyone is interested, here (Remove duplicate rows in MySQL) I explain how to use a temporary table to delete MySQL duplicates in a reliable and fast way, also valid to handle big data sources (with examples for different use cases).
Ali, in your case, you can run something like this:
-- create a new temporary table
CREATE TABLE tmp_table1 LIKE table1;
-- add a unique constraint
ALTER TABLE tmp_table1 ADD UNIQUE(sid, title);
-- scan over the table to insert entries
INSERT IGNORE INTO tmp_table1 SELECT * FROM table1 ORDER BY sid;
-- rename tables
RENAME TABLE table1 TO backup_table1, tmp_table1 TO table1;
delete from `table` where `table`.`SID` in
(
select SID from (
select t.SID from `table` t join `table` t1 on t.title = t1.title where t.SID > t1.SID
) x
)
Love @eric's answer, but it doesn't seem to work if you have a really big table (I'm getting "The SELECT would examine more than MAX_JOIN_SIZE rows; check your WHERE and use SET SQL_BIG_SELECTS=1 or SET MAX_JOIN_SIZE=# if the SELECT is okay" when I try to run it). So I limited the join query to only consider the duplicate rows and I ended up with:
DELETE a FROM penguins a
LEFT JOIN (SELECT COUNT(baz) AS num, MIN(baz) AS keepBaz, foo
FROM penguins
GROUP BY foo HAVING num > 1) b
ON a.baz != b.keepBaz
AND a.foo = b.foo
WHERE b.foo IS NOT NULL
The WHERE clause in this case allows MySQL to ignore any row that doesn't have a duplicate and will also ignore if this is the first instance of the duplicate so only subsequent duplicates will be ignored. Change MIN(baz) to MAX(baz) to keep the last instance instead of the first.
This works for large tables:
CREATE Temporary table duplicates AS select max(id) as id, url from links group by url having count(*) > 1;
DELETE l from links l inner join duplicates ld on ld.id = l.id WHERE ld.id IS NOT NULL;
To delete oldest change max(id) to min(id)
This here will make the column column_name into a primary key, and in the meantime ignore all errors. So it will delete the rows with a duplicate value for column_name.
ALTER IGNORE TABLE `table_name` ADD PRIMARY KEY (`column_name`);
I think this will work by basically copying the table, emptying it, and then putting only the distinct values back into it, but please double-check it before doing it on large amounts of data.
Creates a carbon copy of your table
create table temp_table like oldtablename;
insert temp_table select * from oldtablename;
Empties your original table
DELETE FROM oldtablename;
Copies all distinct values from the copied table back to your original table
INSERT oldtablename SELECT * from temp_table group by firstname,lastname,dob
Deletes your temp table.
Drop Table temp_table
You need to group by all the fields that you want to keep distinct.
DELETE T2
FROM table_name T1
JOIN table_name T2 ON (T1.title = T2.title AND T1.ID < T2.ID)
This keeps the row with the lowest ID in each group of duplicate titles.
here is how I usually eliminate duplicates:
add a temporary column, name it whatever you want (I'll refer to it as active)
group by the fields that you think shouldn't be duplicated and set their active to 1; grouping will pick only one row out of each set of duplicates for those columns (see the sketch below)
delete the ones whose active is still zero
drop the column active
optionally (if it fits your purposes), add a unique index on those columns so duplicates can't appear again
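A sketch of those steps, assuming a hypothetical people table with an id primary key and duplicates defined by (first_name, last_name):
-- 1) temporary marker column, defaulting to 0
ALTER TABLE people ADD active TINYINT NOT NULL DEFAULT 0;
-- 2) mark one representative row per duplicate group
UPDATE people p
JOIN (SELECT MIN(id) AS keep_id FROM people GROUP BY first_name, last_name) k
ON p.id = k.keep_id
SET p.active = 1;
-- 3) delete the unmarked rows
DELETE FROM people WHERE active = 0;
-- 4) drop the marker column
ALTER TABLE people DROP COLUMN active;
-- 5) optionally prevent future duplicates
ALTER TABLE people ADD UNIQUE KEY uk_name (first_name, last_name);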
You could just use a DISTINCT clause to select the "cleaned up" list (and here is a very easy example on how to do that).
Could it work if you count them, and then add a limit to your delete query, leaving just one?
For example, if you have two, write your query like this:
DELETE FROM table WHERE SID = 1 LIMIT 1;
(With N copies of a row, you would use LIMIT N-1.)
There are just a few basic steps when removing duplicate data from your table:
Back up your table!
Find the duplicate rows
Remove the duplicate rows
Here is the full tutorial: https://blog.teamsql.io/deleting-duplicate-data-3541485b3473
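For step 2 on the table from the question (a sketch, using the title and SID columns mentioned there), something like this shows what would be removed before you delete anything:
SELECT title, SID, COUNT(*) AS copies
FROM `table`
WHERE SID = 1
GROUP BY title, SID
HAVING COUNT(*) > 1;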