We have a table business_users with user_id and business_id columns, and we have duplicates.
How can I write a query that will delete all duplicates except for one?
Completely identical rows
If you want to get rid of completely identical rows, as I understood your question at first, then you can select the unique rows into a separate table and recreate the table's data from it.
CREATE TEMPORARY TABLE tmp SELECT DISTINCT * FROM business_users;
DELETE FROM business_users;
INSERT INTO business_users SELECT * FROM tmp;
DROP TABLE tmp;
Be careful if there are any foreign key constraints referencing this table, though, as the temporary deletion of rows might lead to cascaded deletions elsewhere.
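If you are not sure whether such constraints exist, you can check information_schema first; a quick sketch (replace your_database with the actual schema name):
-- List foreign keys in other tables that reference business_users
SELECT TABLE_NAME, COLUMN_NAME, CONSTRAINT_NAME
FROM information_schema.KEY_COLUMN_USAGE
WHERE REFERENCED_TABLE_SCHEMA = 'your_database'  -- placeholder schema name
  AND REFERENCED_TABLE_NAME = 'business_users';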
Introducing a unique constraint
If you only care about pairs of user_id and business_id, you probably want to avoid introducing duplicates in the future. You can move the existing data to a temporary table, add a constraint, and then move the table data back, ignoring duplicates.
CREATE TEMPORARY TABLE tmp SELECT * FROM business_users;
DELETE FROM business_users;
ALTER TABLE business_users ADD UNIQUE (user_id, business_id);
INSERT IGNORE INTO business_users SELECT * FROM tmp;
DROP TABLE tmp;
The above answer is based on this answer. The warning about foreign keys applies just as it did in the section above.
One-shot removal
If you only want to execute a single query, without modifying the table structure in any way, and you have a primary key id identifying each row, then you can try the following:
DELETE FROM business_users WHERE id NOT IN
(SELECT MIN(id) FROM business_users GROUP BY user_id, business_id);
A similar idea was previously suggested by this answer.
If the above query fails because MySQL does not allow you to read from and delete from the same table in one statement (error 1093), you can again use a temporary table:
CREATE TEMPORARY TABLE tmp
SELECT MIN(id) id FROM business_users GROUP BY user_id, business_id;
DELETE FROM business_users WHERE id NOT IN (SELECT id FROM tmp);
DROP TABLE tmp;
If you want to, you can still introduce a uniqueness constraint after cleaning the data in this fashion. To do so, execute the ALTER TABLE line from the previous section.
Since you have a primary key, you can use that to pick which rows to keep:
delete from business_users
where id not in (
    select id from (
        select min(id) as id           -- Make a list of the primary keys to keep
        from business_users
        group by user_id, business_id  -- Group by your duplicated row definition
    ) as a                             -- Derived table to force an implicit temp table
);
In this way, you won't need to create/drop temp tables and such (except the implicit one).
You might want to put a unique constraint on user_id, business_id so you don't have to worry about this again.
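For example, the same constraint as shown in the earlier section (the key name un_user_business is just an illustrative choice):
ALTER TABLE business_users ADD UNIQUE KEY un_user_business (user_id, business_id);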
Related
So I have an existing MySQL users table with thousands of records in it. I have noticed duplicate records for users which is a problem that I need to address. I know that the way I need to do this is to somehow make 2 columns unique.
The duplicates are arising with records containing both the same server_id column, and also the same user_id column. These 2 columns are meant to be unique combined. So there should only ever be 1 user_id per server_id.
I have figured out how I can find these duplicates using the following query:
SELECT `server_id`, `user_id`, COUNT(*) AS `duplicates` FROM `guild_users` GROUP BY `server_id`, `user_id` HAVING `duplicates` > 1
From what I have read, I need to delete all duplicates first before I add any constraints. This is one of the things I am unsure about.
Question 1: How would I go about deleting all duplicates but leaving one of each, so the user still exists, just without the extra copies?
Question 2: What is the best way of avoiding duplicates from being created? Should I create a unique constraint for both of the columns, or do something with primary keys instead?
Your table should have a primary key column, such as an id.
So you can use EXISTS to delete the duplicates and keep just 1:
delete gu from guild_users gu
where exists (
    select 1 from guild_users
    where server_id = gu.server_id
      and user_id = gu.user_id
      and id > gu.id
);
After that you can create a unique constraint for the 2 columns:
alter table guild_users
add constraint un_server_user unique
(server_id, user_id);
You want to prevent this by adding a unique index:
create unique index unq_guild_users_server_user on guild_users(server_id, user_id);
If you have a primary key, you can delete the duplicates before adding the unique index:
delete g
from guild_users g left join
(select server_id, user_id, max(primary_key) as max_pk
from guild_users
group by server_id, user_id
) su
on g.primary_key = su.max_pk
where su.max_pk is null;
I am trying to delete e-mail duplicates from the table nlt_user.
This query correctly shows the records that have duplicates:
select `e-mail`, count(`e-mail`)
from nlt_user
group by `e-mail`
having count(`e-mail`) > 1
Now, how can I delete all the duplicate records, keeping just one of each?
Thank you
If your MySQL version is prior to 5.7.4, you can add a UNIQUE index on the e-mail column with the IGNORE keyword.
This will remove all the duplicate e-mail rows:
ALTER IGNORE TABLE nlt_user
ADD UNIQUE INDEX `idx_e-mail` (`e-mail`);
If your version is 5.7.4 or later, you can use a new table instead (IGNORE is no longer possible with ALTER TABLE):
CREATE TABLE nlt_user_new LIKE nlt_user;
ALTER TABLE nlt_user_new ADD UNIQUE INDEX (`e-mail`);
INSERT IGNORE INTO nlt_user_new SELECT * FROM nlt_user;
DROP TABLE nlt_user;
RENAME TABLE nlt_user_new TO nlt_user;
Try this:
delete n1 from nlt_user n1
inner join nlt_user n2 on n1.`e-mail` = n2.`e-mail` and n1.id > n2.id;
This keeps the record with the minimum id value in each set of duplicates and deletes the remaining duplicate records.
A window function can be employed to retain only the unique values (MySQL 8.0 or later). Note that RANK() ordered by the partition column itself would rank every duplicate as 1, so ROW_NUMBER() with a deterministic ORDER BY is used instead.
1: Create a new table which contains only unique values
Example: nlt_user_unique
CREATE TABLE nlt_user_unique AS
SELECT * FROM
  (SELECT A.*, ROW_NUMBER() OVER (PARTITION BY `e-mail` ORDER BY id) AS RNK
   FROM nlt_user A) ranked
WHERE RNK = 1;
ALTER TABLE nlt_user_unique DROP COLUMN RNK;
2: Truncate the original table containing duplicates:
truncate table nlt_user;
3: Insert the unique rows from the table created in step 1 back into your table nlt_user:
INSERT INTO nlt_user
SELECT * FROM nlt_user_unique;
I've seen a number of variations on this but nothing quite matches what I'm trying to accomplish.
I have a table, TableA, which contains the answers given by users to configurable questionnaires. The columns are member_id, quiz_num, question_num, answer_num.
Somehow a few members got their answers submitted twice. So I need to remove the duplicated records, but make sure that one row is left behind.
There is no primary key column, so there could be two or three rows all with the exact same data.
Is there a query to remove all the duplicates?
Add Unique Index on your table:
ALTER IGNORE TABLE `TableA`
ADD UNIQUE INDEX (`member_id`, `quiz_num`, `question_num`, `answer_num`);
Another way to do this would be:
Add a primary key to your table; then you can easily remove duplicates from your table using the following query:
DELETE FROM TableA
WHERE id IN (SELECT *
             FROM (SELECT MIN(id) FROM TableA
                   GROUP BY member_id, quiz_num, question_num, answer_num HAVING (COUNT(*) > 1)
                  ) AS A
            );
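Since the question says TableA has no primary key, you would first need to add one before running the DELETE above; a minimal sketch (the surrogate column name id is an assumption):
-- Hypothetical surrogate key so each row can be addressed individually
ALTER TABLE TableA ADD COLUMN id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY;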
Instead of dropping TableA, you could delete all rows (delete from TableA;) and then repopulate the original table with the rows from TableA_Verify (insert into TableA select * from TableA_Verify). This way you won't lose the references to the original table (indexes, ...).
CREATE TABLE TableA_Verify AS SELECT DISTINCT * FROM TableA;
DELETE FROM TableA;
INSERT INTO TableA SELECT * FROM TableA_Verify;
DROP TABLE TableA_Verify;
This doesn't use temporary tables but real tables instead. If the problem is only about temporary tables, and not about creating or dropping tables, this will work:
CREATE TABLE TableA_Verify AS SELECT DISTINCT * FROM TableA;
DROP TABLE TableA;
RENAME TABLE TableA_Verify TO TableA;
Thanks to jveirasv for the answer above.
If you need to remove duplicates over a specific set of columns, you can use this (if, for example, you have a timestamp column in the table that varies):
CREATE TABLE TableA_Verify AS SELECT * FROM TableA WHERE 1 GROUP BY [COLUMN TO remove duplicates BY];
DELETE FROM TableA;
INSERT INTO TableA SELECT * FROM TableA_Verify;
DROP TABLE TableA_Verify;
Add Unique Index on your table:
ALTER IGNORE TABLE TableA
ADD UNIQUE INDEX (member_id, quiz_num, question_num, answer_num);
This works very well.
If you are not using any primary key, execute the following queries in one go, replacing the placeholder values:
# table_name - your table name
# column_name_of_duplicates - the name of the column where duplicate entries are found
create table table_name_temp like table_name;
insert into table_name_temp select distinct(column_name_of_duplicates),value,type from table_name group by column_name_of_duplicates;
delete from table_name;
insert into table_name select * from table_name_temp;
drop table table_name_temp;
1. Create a temp table and store the distinct (non-duplicate) rows in it.
2. Empty the original table.
3. Insert the values from the temp table into the original table.
4. Drop the temp table.
It is always advisable to take a backup of the database before you modify it.
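For example, a quick in-database copy can be made before running the statements above (the backup table name is a placeholder):
-- Keep a copy of the current data in case something goes wrong
CREATE TABLE table_name_backup AS SELECT * FROM table_name;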
As noted in the comments, the query in Saharsh Shah's answer must be run multiple times if items are duplicated more than once.
Here's a solution that doesn't delete any data, and keeps the data in the original table the entire time, allowing for duplicates to be deleted while keeping the table 'live':
alter table tableA add column duplicate tinyint(1) not null default '0';
update tableA set
duplicate=if(#member_id=member_id
and #quiz_num=quiz_num
and #question_num=question_num
and #answer_num=answer_num,1,0),
member_id=(#member_id:=member_id),
quiz_num=(#quiz_num:=quiz_num),
question_num=(#question_num:=question_num),
answer_num=(#answer_num:=answer_num)
order by member_id, quiz_num, question_num, answer_num;
delete from tableA where duplicate=1;
alter table tableA drop column duplicate;
This basically checks whether the current row is the same as the previous row, and if it is, marks it as a duplicate (the ORDER BY clause ensures that duplicates show up next to each other). Then you delete the duplicate records. The duplicate column is removed at the end to bring the table back to its original state.
It looks like alter table ignore also might go away soon: http://dev.mysql.com/worklog/task/?id=7395
An alternative way would be to create a new table with the same structure.
CREATE TABLE temp_table AS SELECT * FROM original_table LIMIT 0;
Then create the primary key in the table.
ALTER TABLE temp_table ADD PRIMARY KEY (primary-key-field);
Finally copy all records from the original table while ignoring the duplicate records.
INSERT IGNORE INTO temp_table SELECT * FROM original_table;
Now you can delete the original table and rename the new table.
DROP TABLE original_table;
RENAME TABLE temp_table TO original_table;
Tested in MySQL 5. I don't know about other versions.
If you want to keep the row with the lowest id value:
DELETE n1 FROM `yourTableName` n1, `yourTableName` n2 WHERE n1.id > n2.id AND n1.member_id = n2.member_id AND n1.quiz_num = n2.quiz_num AND n1.question_num = n2.question_num AND n1.answer_num = n2.answer_num;
If you want to keep the row with the highest id value:
DELETE n1 FROM `yourTableName` n1, `yourTableName` n2 WHERE n1.id < n2.id AND n1.member_id = n2.member_id AND n1.quiz_num = n2.quiz_num AND n1.question_num = n2.question_num AND n1.answer_num = n2.answer_num;
I have a huge table of products, but there are a lot of duplicate entries. The table has more than 10,000 entries and I want to remove the duplicate entries without manually finding and deleting them. Please let me know if you can provide a solution for this.
You could select the DISTINCT rows into a new table, drop the original table, and then rename the new one.
You should also add primary and unique keys to avoid this sort of thing in the future.
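For example, once the duplicates are gone, and assuming the duplicate-defining column is product_name (the question does not name the table or its columns), something like:
-- Rejects any future insert that would duplicate an existing product name
ALTER TABLE products ADD UNIQUE KEY uq_product_name (product_name);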
For full-row duplicates, try this:
create table mytable_tmp as select distinct * from mytable;
drop table mytable;
alter table mytable_tmp rename mytable;
The statements below should help you resolve your requirement.
If the table (foo) has a primary key field:
First step
Store the key values in a temporary table, putting your uniqueness conditions in the GROUP BY clause.
If you want to delete duplicate email ids, put the email id in the GROUP BY clause and put the primary key name in the
SELECT clause as either min(primarykey) or max(primarykey).
CREATE TEMPORARY TABLE temptable AS SELECT min( primarykey ) FROM foo GROUP BY uniquefields;
Second step
Run the delete statement below, giving the table name and primary key column:
DELETE FROM foo WHERE primarykey NOT IN (SELECT * FROM temptable );
Execute both queries together in your query analyser or DB tool.
If the table (foo) doesn't have a primary key field:
step 1
CREATE TABLE temp_table AS SELECT * FROM foo GROUP BY field or fields;
step 2
DELETE FROM foo;
step 3
INSERT INTO foo select * from temp_table;
There are different ways to remove duplicate rows, and which one to use depends entirely on your scenario. The simplest method is to alter the table, adding a unique index on the product name field with ALTER IGNORE (note that IGNORE on ALTER TABLE is only available before MySQL 5.7.4):
alter ignore table products add unique index `unique_index` (product_name);
You can remove the index after getting all the duplicate rows deleted:
alter table products drop index `unique_index`;
Please let me know if this resolves the issue. If not I can give you alternate solutions for that.
You can add more than one column to a GROUP BY, e.g.:
SELECT * FROM tableName GROUP BY prod_name HAVING COUNT(prod_name) > 1;
That shows one row per duplicated product name; drop the HAVING clause to get one row per product. You can dump that result into a new table and drop the existing one, as sketched below.
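A sketch of that idea, reusing the names from the query above (note that SELECT * with GROUP BY relies on MySQL's relaxed grouping rules and is rejected when ONLY_FULL_GROUP_BY is enabled):
-- One row per prod_name; which of the duplicates survives is arbitrary
CREATE TABLE tableName_dedup AS
SELECT * FROM tableName GROUP BY prod_name;
DROP TABLE tableName;
RENAME TABLE tableName_dedup TO tableName;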
Similar questions were indeed asked, but I didn't find an answer.
I have a MySql table with 3 non-unique fields. I don't want duplicate rows. Meaning ("a", "b", "c") and ("a", "dasd", "dfsd") are okay (I don't mind having "a" twice in the first field), but having ("a", "b", "c") twice is wrong.
I need a query which will remove duplicates, leaving only one row for each row group.
Edit: This has already been covered on SO before.
One approach would be to create a new table based on the existing table. You could do this through something like:
create table myNewTable SELECT distinct * FROM myOldTable;
Then you could clear the old table's data, and create a unique constraint on the fields you don't want duplicated:
TRUNCATE TABLE myOldTable;
ALTER TABLE myOldTable
ADD UNIQUE (field1, field2);
Then insert your data back into the original table. Because you created myNewTable using DISTINCT, you should not have any duplicates.
INSERT INTO myOldTable SELECT * FROM myNewTable;
Note: this assumes we have a primary key apart from column1, column2 and column3, and that the last row should be preserved. It is helpful when we have other information besides column1, column2 and column3.
It saves the last primary key and deletes the rest for each unique combination of Column1, Column2, Column3.
Insert the result of the query below into a temp table:
CREATE TEMPORARY TABLE TEMPTABLE AS
SELECT MAX(PrimaryKey) AS PrimaryKey
FROM TABLENAME
GROUP BY Column1, Column2, Column3;
Then delete everything whose primary key is not in it:
DELETE FROM TABLENAME WHERE PrimaryKey NOT IN (SELECT PrimaryKey FROM TEMPTABLE);
If we have only these 3 columns, then:
1. Save the distinct rows in a temp table.
2. Truncate the original table.
3. Insert the rows back into the original table from the temp table (see the sketch below).
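A minimal sketch of those three steps, using the placeholder names from the answer above (the temp table name is an assumption):
CREATE TEMPORARY TABLE TEMPTABLE_DISTINCT AS
SELECT DISTINCT Column1, Column2, Column3 FROM TABLENAME;
TRUNCATE TABLE TABLENAME;
INSERT INTO TABLENAME SELECT * FROM TEMPTABLE_DISTINCT;
DROP TEMPORARY TABLE TEMPTABLE_DISTINCT;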
You can retrieve a list of the duplicates like this:
SELECT field1, field2, field3, count(*) AS cnt
FROM yourtable
GROUP by field1, field2, field3
HAVING (cnt > 1)
You'll then have to delete the duplicate rows in subsequent separate queries, as sketched below.
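Since the question says there is no unique id, the follow-up deletes can use MySQL's DELETE ... LIMIT to keep one copy of each fully identical group; a sketch for a group ('a', 'b', 'c') that the query above reported with cnt = 3:
-- Delete cnt - 1 = 2 of the 3 identical rows, leaving exactly one
DELETE FROM yourtable
WHERE field1 = 'a' AND field2 = 'b' AND field3 = 'c'
LIMIT 2;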
I would solve the problem by using a temporary table and subqueries to find the rows to delete. This will only work if your table yourTable, with the fields f1, f2, f3, also has a unique ID field.
Create the temporary table to store the IDs of the rows to delete:
CREATE TEMPORARY TABLE ids (ID int);
Find the IDs of the elements to erase:
INSERT INTO ids(ID) SELECT ID FROM yourTable AS t
WHERE 1 != (SELECT COUNT(*) FROM yourTable
WHERE yourTable.ID <= t.ID
AND yourTable.f1 = t.f1
AND yourTable.f2 = t.f2
AND yourTable.f3 = t.f3);
Delete the rows of the table with the previously selected IDs:
DELETE yourTable FROM yourTable,ids WHERE yourTable.ID = ids.ID;
Remove the temporary table
DROP TABLE ids;
If MySQL allowed a DELETE to select from the same table in a subquery, we could do all of that in a single query, but that is not the case, so we need to go through a temporary table.
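That said, the restriction (MySQL error 1093) can be sidestepped by wrapping the subquery in a derived table, the same trick used in an earlier answer; a sketch with the same ID and f1, f2, f3 columns:
-- The derived table is materialized, so MySQL no longer complains
DELETE FROM yourTable
WHERE ID NOT IN (
    SELECT keep_id FROM (
        SELECT MIN(ID) AS keep_id
        FROM yourTable
        GROUP BY f1, f2, f3
    ) AS keepers
);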
To prevent duplicates from happening in the future, I would make the three fields the primary key of the table, like this:
ALTER TABLE yourTable ADD PRIMARY KEY (f1, f2, f3);
You will only be able to alter the table this way once you have removed all the duplicates; after the table is altered, subsequent inserts with duplicated values will fail.