I'm aware that you can create a unique column in your MySQL table, but I'm actually looking to compare TWO columns.
So if I had records like:
Time     User  Table
10:00pm  Fred  29
11:00am  Bob   33
I COULD insert a new record with time 10:00pm and table 33, but not another one with 10:00pm and table 29.
I know I could run a query first and skip the insert if it returns a match on those two fields, but I'm wondering if there is a more elegant solution where MySQL itself raises a duplicate-entry error on the INSERT and saves me a few lines of code.
You can create a unique index that incorporates both columns:
CREATE UNIQUE INDEX idx_time_and_table ON reservations (`Time`, `Table`);
This will block any insert with the same (`Time`, `Table`) pairing, provided both values are non-NULL; a NULL in either column side-steps the unique check.
You're also using reserved SQL keywords for your column names, which you might want to change.
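For illustration, a minimal sketch of the resulting behaviour, assuming a reservations table shaped like the example above (the data is hypothetical):
-- sketch: unique index across both columns
CREATE TABLE reservations (`Time` VARCHAR(10), `User` VARCHAR(50), `Table` INT);
CREATE UNIQUE INDEX idx_time_and_table ON reservations (`Time`, `Table`);

INSERT INTO reservations VALUES ('10:00pm', 'Fred', 29);  -- ok
INSERT INTO reservations VALUES ('10:00pm', 'Anna', 33);  -- ok: different table for the same time
INSERT INTO reservations VALUES ('10:00pm', 'Bob', 29);   -- fails with error 1062 (duplicate entry)
INSERT INTO reservations VALUES (NULL, 'Eve', 29);        -- ok: NULL side-steps the unique check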
Try using a composite unique constraint across both columns:
ALTER TABLE your_table ADD UNIQUE(`Time`, `Table`);
Now any attempt to insert a row whose (`Time`, `Table`) pair already exists will make MySQL throw a duplicate-entry error, which you can test for in your app.
Create a unique index on the columns whose combination should be unique:
CREATE UNIQUE INDEX index_name ON table_name (column1, column2, ...);
I want to know whether it is possible to avoid duplicate entries or data without any keys or group by statement
Create a unique key constraint:
ALTER TABLE Comment ADD CONSTRAINT uc_Comment UNIQUE (CommentId, Comment)
In the above case a comment cannot be duplicated, because we are enforcing a unique combination of CommentId and Comment.
Hope this helps.
More info: http://www.w3schools.com/sql/sql_unique.asp or the question "SQL Server 2005 How Create a Unique Constraint?".
If you want to suppress duplicates when querying, use SELECT DISTINCT.
If you want to avoid putting duplicates into a table, just don't insert records that are already there. It doesn't matter whether you have a primary/unique key: those will make the database not allow duplicate records, but it's still up to you to avoid trying to insert duplicates (assuming you want your queries to succeed).
You can use SELECT to find whether a record already exists before trying to insert it. Or, if you want to be fancy, you can insert the new records into a temporary table, use DELETE to remove any that are already present in the real table, then use INSERT ... SELECT to copy the remaining records from the temporary table into the real one.
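A rough sketch of that temporary-table approach, using made-up table and column names (real_table with columns id and val) purely for illustration:
-- sketch: stage the incoming rows, drop the ones that already exist, copy the rest
CREATE TEMPORARY TABLE staging LIKE real_table;

INSERT INTO staging (id, val) VALUES (1, 'a'), (2, 'b'), (3, 'c');

DELETE s
FROM staging AS s
JOIN real_table AS r ON r.id = s.id AND r.val = s.val;

INSERT INTO real_table SELECT * FROM staging;

DROP TEMPORARY TABLE staging;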
I've got an error on MySQL while trying to add a UNIQUE KEY. Here's what I'm trying to do. I've got a column called 'unique_id' which is VARCHAR(100). There are no indexes defined on the table. I'm getting this error:
#1062 - Duplicate entry '' for key 'unique_id'
when I try to add the UNIQUE key through phpMyAdmin. Here is the MySQL query that phpMyAdmin generates:
ALTER TABLE `wind_archive`
  ADD `unique_id` VARCHAR(100) NOT NULL FIRST,
  ADD UNIQUE (`unique_id`);
I've had this problem in the past and never resolved it so I just rebuilt the table from scratch. Unfortunately in this case I cannot do that as there are many entries in the table already. Thanks for your help!
The error says it all:
Duplicate entry ''
So run the following query:
SELECT unique_id,COUNT(unique_id)
FROM yourtblname
GROUP BY unique_id
HAVING COUNT(unique_id) >1
This query will also show you the problem:
SELECT *
FROM yourtblname
WHERE unique_id=''
This will show you where the duplicate values are. You are trying to create a unique index on a column that already contains duplicates, so you will need to resolve the duplicate data first and then add the index.
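One way to resolve the blank duplicates, as a sketch rather than the only option: add the column first without the unique key, backfill each empty unique_id with a distinct placeholder (for example UUID()), and only then add the key:
-- sketch: add the column first, without the unique key
ALTER TABLE wind_archive ADD unique_id VARCHAR(100) NOT NULL FIRST;

-- give every blank unique_id a distinct placeholder value
UPDATE wind_archive SET unique_id = UUID() WHERE unique_id = '';

-- the unique key can now be added without hitting error 1062
ALTER TABLE wind_archive ADD UNIQUE (unique_id);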
This is the third time I have had to look up the solution to this problem, so I am posting the answer here for reference.
Depending on the data, we may use the IGNORE keyword with the ALTER command. If IGNORE is specified, only the first row is used of rows with duplicates on a unique key; the other conflicting rows are deleted. Incorrect values are truncated to the closest matching acceptable value.
The IGNORE keyword extension seems to have a bug with InnoDB on some versions of MySQL.
You could always convert to MyISAM, add the index with IGNORE, and then convert back to InnoDB:
ALTER TABLE table ENGINE MyISAM;
ALTER IGNORE TABLE table ADD UNIQUE INDEX dupidx (field);
ALTER TABLE table ENGINE InnoDB;
Note: if you have foreign key constraints this will not work; you will have to remove them first and add them back later.
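A rough sketch of that dance, with hypothetical names (parent_table.field referenced by child_table.field through a constraint called fk_example):
-- sketch: drop the foreign key, do the engine swap and IGNORE-add, then restore the key
ALTER TABLE child_table DROP FOREIGN KEY fk_example;

ALTER TABLE parent_table ENGINE=MyISAM;
ALTER IGNORE TABLE parent_table ADD UNIQUE INDEX dupidx (field);
ALTER TABLE parent_table ENGINE=InnoDB;

ALTER TABLE child_table
  ADD CONSTRAINT fk_example FOREIGN KEY (field) REFERENCES parent_table (field);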
Change unique_id from NOT NULL to NULL and it will solve your problem.
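In terms of the original ALTER, that would look roughly like this; existing rows then get NULL instead of '', and multiple NULLs do not collide under a unique key:
-- sketch: add the column as nullable so existing rows get NULL rather than ''
ALTER TABLE `wind_archive`
  ADD `unique_id` VARCHAR(100) NULL FIRST,
  ADD UNIQUE (`unique_id`);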
select ID from wind_archive
where ID not in (select max(ID) from wind_archive group by unique_id)
and this is what you should remove from the table before you can successfully add the unique key.
This also works when adding a unique key across two or more columns, such as:
DELETE FROM wind_archive
WHERE ID IN (
  SELECT ID FROM (
    SELECT ID FROM wind_archive
    WHERE ID NOT IN (
      SELECT MAX(ID) FROM wind_archive GROUP BY lastName, firstName
    )
    ORDER BY ID
  ) AS p
);
Because your query declares unique_id as NOT NULL, every pre-existing row (which has no value for that column) ends up with the same empty value, so the column can no longer be unique once the query runs. Change unique_id NOT NULL to unique_id NULL in your query.
I was getting the same error (Duplicate entry '' for key 'unique_id') when trying to add a new column as unique "after" I had already created a table containing just names of museums. I wanted to go back and add a unique code for each museum name, with the intention of inserting the code values one at a time. Poor table planning on my part.
My solution was to add the new column without making it unique, enter the code for each existing row one at a time, and then change the column definition to make it unique for future entries. Luckily there were only 10 rows.
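Roughly, those steps look like this; the museums table and museum_code column are hypothetical names matching the story above:
-- 1. add the column without a unique constraint
ALTER TABLE museums ADD museum_code VARCHAR(20) NULL;

-- 2. backfill a code for each existing row, one at a time
UPDATE museums SET museum_code = 'LOUVRE' WHERE name = 'Louvre';
-- ... repeat for the remaining rows ...

-- 3. only then enforce uniqueness for future entries
ALTER TABLE museums ADD UNIQUE (museum_code);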
I have two tables; the first table has 400 rows, and the second table holds the same records with the same count. Now the first table's row count increases to 450, and I want to insert only those 50 new rows into the second table. I don't need to update the first 400 records.
I set a unique index on the relevant field (like empid). Now when I insert the first table's data it returns the following error:
Duplicate entry 'xxxx' for key 'idx_confirm'
Please help me to fix this error.
I am using the code below to insert the records, but it still allows duplicate entries:
INSERT IGNORE INTO tbl_emp_confirmation (fldemp_id, fldempname, fldjoindate, fldstatus)
SELECT fldempid, fldempname, DATE_FORMAT(fldjoindate, '%Y-%m-%d') AS fldjoindate, fldstatus
FROM tblempgeneral AS n;
Modify your INSERT ... statement to INSERT IGNORE ....
See for example this post for an explanation.
You need to make sure that you have a unique index that prevents any duplicates, such as on the primary key.
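Putting those two pieces together, a sketch using the table and column names from the question (the index name idx_confirm is taken from the error message):
-- the unique index is what lets IGNORE skip rows that already exist
ALTER TABLE tbl_emp_confirmation ADD UNIQUE INDEX idx_confirm (fldemp_id);

-- rows whose fldemp_id already exists are silently skipped; only the 50 new rows are inserted
INSERT IGNORE INTO tbl_emp_confirmation (fldemp_id, fldempname, fldjoindate, fldstatus)
SELECT fldempid, fldempname, DATE_FORMAT(fldjoindate, '%Y-%m-%d'), fldstatus
FROM tblempgeneral;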
I have a table that has some duplicate results. For example:
`person_url`  `movie_url`
1             2
1             2
2             3
would become:
`person_url`  `movie_url`
1             2
2             3
I know how to do it by creating a new table:
create table tmp_credits (select distinct * from name);
However, it is a pretty large table and I have a couple indexes on it which will need to be re-created. How would I do this transformation in place, that is, without creating a new table?
You can add a UNIQUE index over your table's columns using the IGNORE keyword:
ALTER IGNORE TABLE name ADD UNIQUE INDEX (person_url, movie_url);
As stated in the manual:
IGNORE is a MySQL extension to standard SQL. It controls how ALTER TABLE works if there are duplicates on unique keys in the new table or if warnings occur when strict mode is enabled. If IGNORE is not specified, the copy is aborted and rolled back if duplicate-key errors occur. If IGNORE is specified, only the first row is used of rows with duplicates on a unique key. The other conflicting rows are deleted. Incorrect values are truncated to the closest matching acceptable value.
This will also prevent duplicates from being added in the future.
CREATE TABLE temp (col1 VARCHAR(20), col2 VARCHAR(20));

INSERT INTO temp VALUES ('1','one'), ('2','two'), ('2','two');

SELECT col1, col2 FROM temp
UNION
SELECT col1, col2 FROM temp;

UNION removes duplicate rows, so the result lists each distinct (col1, col2) pair once.
Have you considered just putting a semantic layer/view on top of the table that de-dups?
select person_url, movie_url
from name
group by person_url, movie_url
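For instance, a sketch of such a view; the view name distinct_credits is made up, and name is the table from the question:
-- sketch: a de-duplicating view over the original table
CREATE VIEW distinct_credits AS
SELECT person_url, movie_url
FROM name
GROUP BY person_url, movie_url;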
I have read many articles about this, but I want to hear from you.
My problem is:
A table: ID (INT, unique, auto-increment), Title (VARCHAR), Content (TEXT), Keywords (VARCHAR).
My PHP code always inserts new records but must not accept duplicates based on Title or Keywords, so Title or Keywords can't be the primary key. My PHP code needs to check for existing rows and insert around 10-20 records at a time.
So, I check like this:
SELECT * FROM TABLE WHERE TITLE=XXX
And if it returns nothing, then I do the INSERT.
I read some other posts. One person says:
INSERT IGNORE INTO Table values()
Another suggests:
SELECT COUNT(ID) FROM TABLE
and if it returns 0, then do the INSERT.
I don't know which of those approaches is faster.
And I have one more question: what is the difference between these queries, and which is faster?
SELECT COUNT(ID) FROM ..
SELECT COUNT(0) FROM ...
SELECT COUNT(1) FROM ...
SELECT COUNT(*) FROM ...
All of them show me the total number of records in the table, but I don't know whether MySQL treats the 0 or 1 as a reference to my ID field. Even if I do SELECT COUNT(1000) I still get the total record count, although my table only has 4 columns.
I'm using MySQL Workbench; is there any option in this app for testing query speed?
I would use the INSERT ... ON DUPLICATE KEY UPDATE command. One important comment from the documentation states: "...if there is a single multiple-column unique index on the table, then the update uses (seems to use) all columns (of the unique index) in the update query."
So if there is a UNIQUE(Title,Keywords) constraint on the table in the example, then, you would use:
INSERT INTO table (Title,Content,Keywords) VALUES ('blah_title','blah_content','blah_keywords')
ON DUPLICATE KEY UPDATE Content='blah_content';
It should work, and it is a single query to the database.
As for the counting: SELECT COUNT(*) FROM ... is faster than SELECT COUNT(ID) FROM .... Or build something like this:
INSERT INTO table (a,b,c) VALUES (1,2,3)
ON DUPLICATE KEY UPDATE c=3;
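On the COUNT variants: COUNT(expr) counts the rows where expr is non-NULL, while COUNT(*) and constants such as COUNT(1) or COUNT(1000) simply count rows; the number is never interpreted as a column position. A small sketch to illustrate, with a hypothetical demo table:
-- sketch: a demo table with a nullable column
CREATE TABLE count_demo (id INT, nickname VARCHAR(20));
INSERT INTO count_demo VALUES (1, 'a'), (2, NULL), (3, 'c');

SELECT COUNT(*)        FROM count_demo;  -- 3: counts rows
SELECT COUNT(1)        FROM count_demo;  -- 3: the 1 is just a constant, not a column position
SELECT COUNT(id)       FROM count_demo;  -- 3: id has no NULLs here
SELECT COUNT(nickname) FROM count_demo;  -- 2: NULLs are skipped by COUNT(expr)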