What's the best way to delete duplicate records in a MySQL database, using Rails or plain MySQL queries?
What you can do is copy the distinct records into a new table. In MySQL that would be:
CREATE TABLE NewTable AS SELECT DISTINCT * FROM MyTable;
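If you then want the de-duplicated copy to take over the original name, a minimal follow-up sketch (MyTable_old is just a name I made up, and this assumes nothing writes to the table in the meantime; note that CREATE TABLE ... AS SELECT does not copy indexes or AUTO_INCREMENT, so re-add those first if you need them):
RENAME TABLE MyTable TO MyTable_old, NewTable TO MyTable;
-- keep MyTable_old around until you have verified the copy, then:
-- DROP TABLE MyTable_old;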
Here's another idea in no particular language:
rs = `select a, b, count(*) as c from entries group by 1, 2 having c > 1`
rs.each do |a, b, c|
`delete from entries where a=#{a} and b=#{b} limit #{c - 1}`
end
Edit:
Kudos to Olaf for that "having" hint :)
Well, if it's a small table, from the Rails console you can do:
class ActiveRecord::Base
  # every attribute except the primary key, used to compare rows for equality
  def non_id_attributes
    atts = self.attributes
    atts.delete('id')
    atts
  end
end
# group records by their non-id attributes and keep only groups with more than one row
duplicate_groups = YourClass.find(:all).group_by { |element| element.non_id_attributes }.select { |gr| gr.last.size > 1 }
# from each group, everything except the first record is redundant
redundant_elements = duplicate_groups.map { |group| group.last - [group.last.first] }.flatten
redundant_elements.each(&:destroy)
Check for duplicate entries:
SELECT DISTINCT(req_field) AS field, COUNT(req_field) AS fieldCount FROM
table_name GROUP BY req_field HAVING fieldCount > 1
Remove the duplicate rows:
DELETE FROM table_name
USING table_name, table_name AS vtable
WHERE
(table_name.id > vtable.id)
AND (table_name.req_field = vtable.req_field)
Replace req_field and table_name with your own column and table names and it should work without any issues.
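For example, with a hypothetical users table de-duplicated on an email column (keeping the row with the lowest id), the filled-in version would be:
DELETE FROM users
USING users, users AS vtable
WHERE (users.id > vtable.id)
AND (users.email = vtable.email);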
New to SQL :-)
This is a classic question - often asked in interviews :-)
I don't know whether it'll work in MySQL, but it works in most databases -
> create table t(
> a char(2),
> b char(2),
> c smallint )
> select a,b,c,count(*) from t
> group by a,b,c
> having count(*) > 1
a b c
-- -- ------ -----------
(0 rows affected)
> insert into t values ("aa","bb",1)
(1 row affected)
> insert into t values ("aa","bb",1)
(1 row affected)
> insert into t values ("aa","bc",1)
(1 row affected)
> select a,b,c,count(*) from t group by a,b,c having count(*) > 1
a b c
-- -- ------ -----------
aa bb 1 2
(1 row affected)
If you have a PK (id) in a table (EMP) and want to delete duplicate records based on the name column (keeping only the row with the lowest id), the following query may be a good approach for large data sets.
DELETE t3
FROM (
SELECT t1.name, t1.id
FROM (
SELECT name
FROM EMP
GROUP BY name
HAVING COUNT(name) > 1
) AS t0 INNER JOIN EMP t1 ON t0.name = t1.name
) AS t2 INNER JOIN EMP t3 ON t3.name = t2.name
WHERE t2.id < t3.id;
Suppose we have a table named tbl_product and there are duplicates in the fields p_pi_code and p_nats_id.
First create a new table and insert the de-duplicated data from the existing table,
i.e. from tbl_product into newtable1; then, to de-duplicate on the second field, go from newtable1 into newtable2:
CREATE TABLE `newtable2` (
`p_id` int(10) unsigned NOT NULL auto_increment,
`p_status` varchar(45) NOT NULL,
`p_pi_code` varchar(45) NOT NULL,
`p_nats_id` mediumint(8) unsigned NOT NULL,
`p_is_special` tinyint(4) NOT NULL,
PRIMARY KEY (`p_id`)
) ENGINE=InnoDB;
INSERT INTO newtable1 (p_status, p_pi_code, p_nats_id, p_is_special) SELECT
p_status, p_pi_code, p_nats_id, p_is_special FROM tbl_product group by p_pi_code;
INSERT INTO newtable2 (p_status, p_pi_code, p_nats_id, p_is_special) SELECT
p_status, p_pi_code, p_nats_id, p_is_special FROM newtable1 group by p_nats_id;
After that, all duplicates in those fields are removed. (Note: selecting non-aggregated columns with GROUP BY like this relies on MySQL running without ONLY_FULL_GROUP_BY, and which row survives from each group is not defined.)
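A quick sanity check - both of these should return no rows once the de-duplication has worked:
SELECT p_pi_code, COUNT(*) FROM newtable2 GROUP BY p_pi_code HAVING COUNT(*) > 1;
SELECT p_nats_id, COUNT(*) FROM newtable2 GROUP BY p_nats_id HAVING COUNT(*) > 1;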
I had to do this recently on Oracle, but the steps would have been the same on MySQL. It was a lot of data, at least compared to what I'm used to working with, so my process to de-dup was comparatively heavyweight. I'm including it here in case someone else comes along with a similar problem.
My duplicate records had different IDs, different updated_at times, possibly different updated_by IDs, but all other columns the same. I wanted to keep the most recently updated of any duplicate set.
I used a combination of Rails logic and SQL to get it done.
Step one: run a rake script to identify the IDs of the duplicate records, using model logic. IDs go in a text file.
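For reference, roughly the same selection could be expressed directly in SQL; col_a and col_b below are hypothetical stand-ins for "all the other columns", and the keeper is the most recently updated row in each duplicate group:
SELECT m.id
FROM models m
JOIN (
    SELECT col_a, col_b, MAX(updated_at) AS keep_updated_at
    FROM models
    GROUP BY col_a, col_b
    HAVING COUNT(*) > 1
) k ON m.col_a = k.col_a
   AND m.col_b = k.col_b
   AND m.updated_at < k.keep_updated_at;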
Step two: create a temporary table with one column, the IDs to delete, loaded from the text file.
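A minimal sketch of that step in MySQL terms (the file path is an assumption; on Oracle the load would be SQL*Loader or an external table instead):
CREATE TABLE temp_duplicate_ids (id INT PRIMARY KEY);

LOAD DATA LOCAL INFILE '/tmp/duplicate_ids.txt'
INTO TABLE temp_duplicate_ids
LINES TERMINATED BY '\n'
(id);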
Step three: create another temporary table with all the records I'm going to delete (just in case!).
CREATE TABLE temp_duplicate_models
AS (SELECT * FROM models
WHERE id IN (SELECT * FROM temp_duplicate_ids));
Step four: actual deleting.
DELETE FROM models WHERE id IN (SELECT * FROM temp_duplicate_ids);
You can use:
http://lenniedevilliers.blogspot.com/2008/10/weekly-code-find-duplicates-in-sql.html
to get the duplicates, and then just delete them via Ruby or SQL code (I would do it in SQL, but that's up to you :-)
If your table has a PK (or you can easily give it one), you can specify any number of columns in the table that must be equal to qualify a row as a duplicate, with the following query (it may look a bit messy, but it works):
DELETE FROM table WHERE pk_id IN(
SELECT DISTINCT t3.pk_id FROM (
SELECT t1.* FROM table AS t1 INNER JOIN (
SELECT col1, col2, col3, col4, COUNT(*) FROM table
GROUP BY col1, col2, col3, col4 HAVING COUNT(*)>1) AS t2
ON t1.col1 = t2.col1 AND t1.col2 = t2.col2 AND t1.col3 = t2.col3 AND
t1.col4 = t2.col4)
AS t3, (
SELECT t1.* FROM table AS t1 INNER JOIN (
SELECT col1, col2, col3, col4, COUNT(*) FROM table
GROUP BY col1, col2, col3, col4 HAVING COUNT(*)>1) AS t2
ON t1.col1 = t2.col1 AND t1.col2 = t2.col2 AND t1.col3 = t2.col3 AND
t1.col4 = t2.col4)
AS t4
WHERE t3.col1 = t4.col1 AND t3.pk_id > t4.pk_id
)
This will leave the first record entered into the database, deleting the 'newest' duplicates. If you want to keep the last record, switch the > to <.
In MySQL, when I ran something like
delete from A where IDA in (select IDA from A )
MySQL complained with something like "you can't use the same table in the select part of the delete operation".
I just had to delete some duplicate records, and I succeeded with a PHP program like this:
<?php
...
$res = hacer_sql("SELECT MIN(IDESTUDIANTE) as IDTODELETE
FROM `estudiante` GROUP BY `LASTNAME`,`FIRSTNAME`,`CI`,`PHONE`
HAVING COUNT(*) > 1");
while ( $reg = mysql_fetch_assoc($res) ) {
hacer_sql("delete from estudiante where IDESTUDIANTE = {$reg['IDTODELETE']}");
}
?>
I am using Alter Table
ALTER IGNORE TABLE jos_city ADD UNIQUE INDEX(`city`);
I used #krukid's answer above to do the following on a table with around 70,000 entries:
rs = 'select a, b, count(*) as c from table_name group by 1, 2 having c > 1'
# run the query; select_all returns an array of hashes keyed by column name
dups = MyModel.connection.select_all(rs)
# convert to an array of [a, b, count] triples
dupsarr = dups.map { |i| [i['a'], i['b'], i['c'].to_i] }
# delete all but one row from each duplicate group
dupsarr.each do |a, b, c|
  ActiveRecord::Base.connection.execute("delete from table_name where a=#{MyModel.sanitize(a)} and b=#{MyModel.sanitize(b)} limit #{c - 1}")
end
Here is the Rails solution I came up with. It may not be the most efficient, but that's not a big deal if it's a one-time migration.
distinct_records = MyTable.all.group(:distinct_column_1, :distinct_column_2).map { |mt| mt.id }
duplicates = MyTable.all.to_a.reject { |mt| distinct_records.include? mt.id }
duplicates.each(&:destroy)
First, it groups by all columns that determine uniqueness; the example shows two, but you could have more or fewer.
Second, it selects the inverse of that group - all the other records.
Third, it deletes all those records.
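If you'd rather push the same idea down into SQL, a rough one-statement equivalent (assuming the underlying table is called my_tables with an auto-increment id; the extra derived table is there so MySQL doesn't complain about selecting from the delete target):
DELETE FROM my_tables
WHERE id NOT IN (
    SELECT keep_id FROM (
        SELECT MIN(id) AS keep_id
        FROM my_tables
        GROUP BY distinct_column_1, distinct_column_2
    ) AS keepers
);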
First, you could GROUP BY the column on which you want to de-duplicate, but I am not doing it with GROUP BY; I am writing a self join.
You don't need to create a temporary table.
Delete duplicates except one record:
The table should have an auto-increment column.
The possible solution that I've just come across:
DELETE n1 FROM names n1, names n2 WHERE n1.id > n2.id AND n1.name = n2.name
if you want to keep the row with the lowest auto increment id value OR
DELETE n1 FROM names n1, names n2 WHERE n1.id < n2.id AND n1.name = n2.name
if you want to keep the row with the highest auto increment id value.
You can cross-check your solution by searching for duplicates again:
SELECT * FROM `names` GROUP BY name HAVING COUNT(name) > 1;
If it returns 0 results, then your query was successful.
Related
I'm trying to delete duplicate rows from a MySQL table, but still keep one.
However, the following query seemingly deletes every duplicate row and I'm not sure why. Basically, I want to delete a row if its Output_ID, Title and Type all match another row's.
DELETE DupRows.*
FROM output AS DupRows
INNER JOIN (
SELECT MIN(Output_ID) AS Output_ID, Title, Type
FROM output
GROUP BY Title, Type
HAVING COUNT(*) > 1
) AS SaveRows
ON SaveRows.Title = DupRows.Title
AND SaveRows.Type = DupRows.Type
AND SaveRows.Output_ID = DupRows.Output_ID;
Just:
DELETE DupRows
FROM output AS DupRows
INNER JOIN output AS SaveRows
ON SaveRows.Title = DupRows.Title
AND SaveRows.Type = DupRows.Type
AND DupRows.Output_ID > SaveRows.Output_ID
This will delete all duplicates on Title and Type while keeping the record with the lowest Output_ID.
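A quick way to verify it worked - this should come back empty afterwards:
SELECT Title, Type, COUNT(*)
FROM output
GROUP BY Title, Type
HAVING COUNT(*) > 1;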
If you are running MySQL 8.0, you can use the window function ROW_NUMBER() to assign a rank to each record within its Title/Type group, ordered by Output_ID. Then you can delete all records whose row number is not 1.
DELETE FROM output
WHERE Output_ID IN (
SELECT Output_ID
FROM (
SELECT Output_ID, ROW_NUMBER() OVER(PARTITION BY Title, Type ORDER BY Output_ID) rn
FROM output
) x
WHERE rn > 1
)
DELETE FROM output WHERE Output_ID NOT IN (
    SELECT Output_ID FROM (
        SELECT MIN(Output_ID) AS Output_ID FROM output GROUP BY Title, Type
    ) AS keep_rows
)
The query below deletes the duplicate rows that match the condition and keeps the single oldest row.
NOTE: in my query the id column is an auto-increment column.
DELETE t1
FROM output t1, output t2
WHERE t1.Title = t2.Title
AND t1.Type = t2.Type
AND t1.Output_ID = t2.Output_ID
AND t1.id>t2.id
If you want to keep the most recently inserted row instead, just flip the last condition:
DELETE t1
FROM output t1, output t2
WHERE t1.Title = t2.Title
AND t1.Type = t2.Type
AND t1.Output_ID = t2.Output_ID
AND t1.id<t2.id
I am working in MySQL trying to insert a bunch of entries into a table. There are many semi-duplicates that I need to get rid of. I want to delete all of the semi-duplicates except for one. When I run the code below, it seems to be deleting everything that I don't want to be deleted by keeping only entries that have multiple semi-duplicates. How can I delete all but one of the semi-duplicates while also keeping the entries of everything else?
delete from t where (ColA, ColB, ColC) in
(select * from (select ColA, ColB, ColC from t
group by ColA, ColB, ColC having count(*) > 1) as t);
Well, I don't think you can solve this without using a WHERE.
But you cannot run a SELECT ... WHERE on the same table a DELETE is operating on.
Example from my database:
delete from tb_test where idtb_test not in (
select idtb_test from tb_test T1
WHERE T1.title in (select title from tb_test T2 group by T2.title having count(*) =1)
)
This query searches for the non-duplicated rows on the title column and deletes all the rows except those selected rows.
MySQL error:
Error Code: 1093. You can't specify target table 'tb_test' for update in FROM clause
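The usual workaround for error 1093 is to wrap the inner SELECT in one more derived table so MySQL materialises it before the DELETE runs. Be aware, though, that this particular query keeps only the titles that were never duplicated and removes every copy of a duplicated title, which may not be what the question asks for. Untested sketch:
DELETE FROM tb_test WHERE idtb_test NOT IN (
    SELECT idtb_test FROM (
        SELECT idtb_test FROM tb_test T1
        WHERE T1.title IN (
            SELECT title FROM tb_test T2 GROUP BY T2.title HAVING COUNT(*) = 1
        )
    ) AS keep_rows
);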
If I understand correctly, you want to delete almost all rows which have a duplicate, and you don't care which one remains undeleted.
Also, as I understand - this table has primary key.
In that case, it can be done with this query:
delete test_table t3
where t3.id in (
select t1.id
from test_table t1,
(
select col_a, col_b, col_c
from test_table
group by col_a, col_b, col_c
having count(*) > 1
) s1
where t1.col_a = s1.col_a
and t1.col_b = s1.col_b
and t1.col_c = s1.col_c
and t1.id != (
select min(t2.id)
from test_table t2
where t1.col_a = t2.col_a
and t1.col_b = t2.col_b
and t1.col_c = t2.col_c
group by col_a, col_b, col_c
)
)
Basically, this query deletes all row duplicates except for those with minimal ID, which is PK in my case.
PS: This works in Oracle DB, can't test in MySQL right now.
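An untested MySQL adaptation of the same idea: the list of ids to delete is wrapped in an extra derived table so MySQL does not raise error 1093 for reading the delete target, and the row with MIN(id) in each group is the one kept:
DELETE FROM test_table
WHERE id IN (
    SELECT id FROM (
        SELECT t1.id
        FROM test_table t1
        JOIN (
            SELECT col_a, col_b, col_c, MIN(id) AS keep_id
            FROM test_table
            GROUP BY col_a, col_b, col_c
            HAVING COUNT(*) > 1
        ) s1 ON t1.col_a = s1.col_a
            AND t1.col_b = s1.col_b
            AND t1.col_c = s1.col_c
            AND t1.id <> s1.keep_id
    ) AS ids_to_delete
);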
Requirement:
We have a monthly run process in which we create Table1 and Table2.
1. Let's say we run our process in Jan; it creates Table2. Since Jan is the first run, Table1 doesn't exist yet, so the system should put everything from Table2 into Table1.
2. Now let's say we run our process in Feb; it creates Table2. The system should check whether Table1 exists. If yes and (t2.Run_dt > t1.Insert_dt), then pick all IDs from Table2 that don't exist in Table1 and insert/append those records into Table1.
3. Now let's say we re-run our process in Feb; it creates Table2 again. The system should check whether Table1 exists. If yes and (t2.Run_dt = t1.Insert_dt), then pick all IDs from Table2 that don't exist in Table1 and insert/append those records into Table1.
4. and so on...
I have these two tables;
Table1
ID Price Insert_Dt
----- ------- -------------
345 24.35 01-APR-2015
Table2
ID Price Run_Date
----- ------- -------------
345 24.35 01-MAY-2015
678 15.35 01-MAY-2015
I want to write a query to update Table1 based on the logic below:
If Table2.Run_Date >= Table1.Insert_Dt
and Table2 - Table1 = records found
then insert the new records into Table1
If Table2.Run_Date >= Table1.Insert_Dt
and Table2 - Table1 = no records found
then do nothing
else do nothing
DECLARE
nCount NUMBER;
mCount NUMBER;
BEGIN
select count(*) into nCount from dba_tables where table_name = Table1;
if ( (nCount>0)
and ( (select max(a.Run_Date) from Table1 a)
> (select max(b.Insert_Date) from Table2 b) ) )
then
create table difference as
select * from Table2 c where c.ID not in(select d.ID from Table2 d)
select count(*) into mCount
from dba_tables
where table_name = 'difference';
if (nCount > 0) then insert /*+ append */ into Table1
select ID,Price,Run_Date
from (select ID,Price,Insert_Date from difference);
end
END;
I think this does what you described, but I have a couple of questions:
insert into table1
with max_run as (
select id, max (insert_dt) as max_date
from table1
group by id
)
select
t2.id, t2.price, t2.run_date
from
table2 t2,
max_run t1
where
t2.id = t1.id (+) and
-- (t1.id is null or t2.run_date >= t1.max_date) changed below, per edit
(t1.id is null or t2.run_date > t1.max_date)
Are you sure you want a >= insert_dt and not a > insert_dt? Having a >= means we keep inserting those records over and over, even though they are not really new
Did you want to insert new records or update existing records with the new info? Your inquiry said you wanted to insert them, and I didn't see any update code in your snippet, so I kept that as-is
We have 2 tables called "post" and "post_extra".
The summary structure of the "post" table is: id, postdate, title, description
And for "post_extra" it is: eid, news_id, rating, views
The "id" field in the first table is related to the "news_id" field in the second table.
There are more than 100,000 records in the table, and many of them are duplicated. I want to keep only one record and remove the duplicate records in the "post" table that have the same title, and then remove the related records in "post_extra".
I ran this query in phpMyAdmin but the server crashed and I had to restart it.
DELETE e
FROM Post p1, Post p2, Post_extra e
WHERE p1.postdate > p2.postdate
AND p1.title = p2.title
AND e.news_id = p1.id
How can I do this?
Suppose you have a table named 'tables' in which you have duplicate records.
First, you could GROUP BY the column on which you want to de-duplicate, but I am not doing it with GROUP BY; I am writing a self join instead of a nested query or a temporary table.
SELECT * FROM `tables` GROUP BY title HAVING COUNT(title) > 1;
This query returns one row for each title that has duplicate records.
You don't need to create the temporary table in this case.
To delete duplicates except one record:
The table should have an auto-increment column. The possible solution that I've just come across:
DELETE t1 FROM tables t1, tables t2 WHERE t1.id > t2.id AND t1.title = t2.title
if you want to keep the row with the lowest auto increment id value OR
DELETE t1 FROM tables t1, tables t2 WHERE t1.id < t2.id AND t1.title = t2.title
if you want to keep the row with the highest auto increment id value.
You can cross-check your solution by selecting the duplicate records again with the given query:
SELECT * FROM `tables` GROUP BY title HAVING COUNT(title) > 1;
If it returns 0 results, then your query was successful.
This will keep the entry with the lowest id for each title:
DELETE p, e
FROM Post p
left join Post_extra e on e.news_id = p.id
where id not in
(
select * from
(
select min(id)
from post
group by title
) x
)
SQLFiddle demo
You can delete duplicate records by creating a temporary table with a unique index on the fields that you need to check for duplicate values,
then issue:
INSERT IGNORE INTO YourTmpTable SELECT * FROM TableWithDuplicates
You will get a temporary table without duplicates.
Then delete the records from the original table (TableWithDuplicates) by JOINing the two tables.
It should be something like:
CREATE TEMPORARY TABLE `tmp_post` (
`id` INT(10) NULL,
`postDate` DATE NULL,
`title` VARCHAR(50) NULL,
`description` VARCHAR(50) NULL, UNIQUE INDEX `postDate_title_description` (`postDate`, `title`, `description`) );
INSERT IGNORE INTO tmp_post
SELECT id,postDate,title,description
FROM post ;
DELETE post.*
FROM post
LEFT JOIN tmp_post tmp ON tmp.id = post.id
WHERE tmp.id IS NULL ;
Sorry, I didn't test this code.
I have a table with some ids + titles. I want to make the title column unique, but it has over 600k records already, some of which are duplicates (sometimes several dozen times over).
How do I remove all duplicates, except one, so I can add a UNIQUE key to the title column after?
This command adds a unique key, and drops all rows that generate errors (due to the unique key). This removes duplicates.
ALTER IGNORE TABLE table ADD UNIQUE KEY idx1(title);
Edit: Note that this command may not work for InnoDB tables for some versions of MySQL. See this post for a workaround. (Thanks to "an anonymous user" for this information.)
Create a new table with just the distinct rows of the original table. There may be other ways but I find this the cleanest.
CREATE TABLE tmp_table AS SELECT DISTINCT [....] FROM main_table
More specifically:
The faster way is to insert the distinct rows into a temporary table. Using DELETE, it took me a few hours to remove duplicates from a table of 8 million rows; using INSERT and DISTINCT, it took just 13 minutes.
CREATE TABLE tempTableName LIKE tableName;
CREATE INDEX ix_all_id ON tableName(cellId,attributeId,entityRowId,value);
INSERT INTO tempTableName(cellId,attributeId,entityRowId,value) SELECT DISTINCT cellId,attributeId,entityRowId,value FROM tableName;
TRUNCATE TABLE tableName;
INSERT tableName SELECT * FROM tempTableName;
DROP TABLE tempTableName;
Since MySQL's ALTER IGNORE TABLE has been deprecated, you need to actually delete the duplicate data before adding the index.
First write a query that finds all the duplicates. Here I'm assuming that email is the field that contains duplicates.
SELECT
s1.email,
s1.id,
s1.created,
s2.id,
s2.created
FROM
student AS s1
INNER JOIN
student AS s2
WHERE
/* Emails are the same */
s1.email = s2.email AND
/* DON'T select both accounts,
only select the one created later.
The serial id could also be used here */
s2.created > s1.created
;
Next select only the unique duplicate ids:
SELECT
DISTINCT s2.id
FROM
student AS s1
INNER JOIN
student AS s2
WHERE
s1.email = s2.email AND
s2.created > s1.created
;
Once you are sure that it contains only the duplicate ids you want to delete, run the delete. You have to add (SELECT * FROM tblname) so that MySQL doesn't complain about selecting from the table you're deleting from.
DELETE FROM
student
WHERE
id
IN (
SELECT
DISTINCT s2.id
FROM
(SELECT * FROM student) AS s1
INNER JOIN
(SELECT * FROM student) AS s2
WHERE
s1.email = s2.email AND
s2.created > s1.created
);
Then create the unique index:
ALTER TABLE
student
ADD UNIQUE INDEX
idx_student_unique_email(email)
;
The query below can be used to delete all the duplicates except the one row with the lowest "id" value:
DELETE t1 FROM table_name t1, table_name t2 WHERE t1.id > t2.id AND t1.name = t2.name
In a similar way, we can keep the row with the highest "id" value as follows:
DELETE t1 FROM table_name t1, table_name t2 WHERE t1.id < t2.id AND t1.name = t2.name
This shows how to do it in SQL Server 2000. I'm not completely familiar with MySQL syntax, but I'm sure there's something comparable (a rough MySQL translation follows the example).
create table #titles (iid int identity (1, 1), title varchar(200))
-- Repeat this step many times to create duplicates
insert into #titles(title) values ('bob')
insert into #titles(title) values ('bob1')
insert into #titles(title) values ('bob2')
insert into #titles(title) values ('bob3')
insert into #titles(title) values ('bob4')
DELETE T FROM
#titles T left join
(
select title, min(iid) as minid from #titles group by title
) D on T.title = D.title and T.iid = D.minid
WHERE D.minid is null
Select * FROM #titles
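For what it's worth, here is a hedged MySQL translation of the same approach: a regular table stands in for the #temp table, AUTO_INCREMENT replaces IDENTITY, and the multi-table DELETE ... JOIN form replaces the SQL Server join delete (untested sketch, using the same example data as above):
CREATE TABLE titles (
    iid INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    title VARCHAR(200)
);

-- run the inserts more than once to create duplicates
INSERT INTO titles (title) VALUES ('bob'), ('bob1'), ('bob2'), ('bob3'), ('bob4');
INSERT INTO titles (title) VALUES ('bob'), ('bob1'), ('bob2'), ('bob3'), ('bob4');

-- delete every row that is not the lowest iid for its title
DELETE t
FROM titles t
LEFT JOIN (
    SELECT title, MIN(iid) AS minid
    FROM titles
    GROUP BY title
) d ON t.title = d.title AND t.iid = d.minid
WHERE d.minid IS NULL;

SELECT * FROM titles;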
delete from student where id in (
SELECT distinct(s1.`student_id`) from student as s1 inner join student as s2
where s1.`sex` = s2.`sex` and
s1.`student_id` > s2.`student_id` and
s1.`sex` = 'M'
ORDER BY `s1`.`student_id` ASC
)
The solution posted by Nitin seems to be the most elegant / logical one.
However it has one issue:
ERROR 1093 (HY000): You can't specify target table 'student' for
update in FROM clause
This can however be resolved by using (SELECT * FROM student) instead of student:
DELETE FROM student WHERE id IN (
SELECT distinct(s1.`student_id`) FROM (SELECT * FROM student) AS s1 INNER JOIN (SELECT * FROM student) AS s2
WHERE s1.`sex` = s2.`sex` AND
s1.`student_id` > s2.`student_id` AND
s1.`sex` = 'M'
ORDER BY `s1`.`student_id` ASC
)
Give your +1's to Nitin for coming up with the original solution.
Deleting duplicates on MySQL tables is a common issue, that usually comes with specific needs. In case anyone is interested, here (Remove duplicate rows in MySQL) I explain how to use a temporary table to delete MySQL duplicates in a reliable and fast way (with examples for different use cases).
In this case, something like this should work:
-- create a new temporary table
CREATE TABLE tmp_table1 LIKE table1;
-- add a unique constraint
ALTER TABLE tmp_table1 ADD UNIQUE(id, title);
-- scan over the table to insert entries
INSERT IGNORE INTO tmp_table1 SELECT * FROM table1 ORDER BY sid;
-- rename tables
RENAME TABLE table1 TO backup_table1, tmp_table1 TO table1;