I want to know whether it is possible to avoid duplicate entries or data without any keys or a GROUP BY statement.
Create a UNIQUE key constraint.
ALTER TABLE Comment ADD CONSTRAINT uc_Comment UNIQUE (CommentId, Comment)
In the above case, duplicate comments cannot be inserted, because we are creating a unique constraint on the combination of CommentId and Comment.
Hope this helps.
More info: http://www.w3schools.com/sql/sql_unique.asp or
SQL Server 2005 How Create a Unique Constraint?
If you want to suppress duplicates when querying, use SELECT DISTINCT.
If you want to avoid putting duplicates into a table, just don't insert records that are already there. It doesn't matter whether you have a primary/unique key: those will make the database not allow duplicate records, but it's still up to you to avoid trying to insert duplicates (assuming you want your queries to succeed).
You can use SELECT to find whether a record already exists before trying to insert it. Or, if you want to be fancy, you can insert the new records into a temporary table, use DELETE to remove any that are already present in the real table, then use INSERT ... SELECT to copy the remaining records from the temporary table into the real one.
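For example, a minimal MySQL-flavored sketch of the temporary-table approach, reusing the Comment table from above (the staging-table name and sample values are my own assumptions):
CREATE TEMPORARY TABLE staging LIKE Comment;
-- load the incoming rows into the staging copy
INSERT INTO staging (CommentId, Comment) VALUES (1, 'hello'), (2, 'world');
-- delete any staged rows that already exist in the real table
DELETE staging FROM staging
JOIN Comment ON Comment.CommentId = staging.CommentId
            AND Comment.Comment = staging.Comment;
-- copy the remaining, genuinely new rows across
INSERT INTO Comment SELECT * FROM staging;
DROP TEMPORARY TABLE staging;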
I have one table and want to remove duplicates in Laravel using DB::, not Eloquent.
My table's name is sale_details and its fields are
id, orderId, shop, user, item, quantity, price, total
where the combination of orderId, shop, user, and item should be unique.
I want to delete rows that are duplicated on these 4 fields, not just one.
What is the best way to do it?
The best option would be to add a unique index on your table:
ALTER IGNORE TABLE sale_details ADD UNIQUE (orderId,shop,user,item);
With IGNORE, only the first found row is kept, and the others are removed.
You should keep this UNIQUE key if you want to prevent future duplicates; if not, you can drop it just after.
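If you do drop it afterwards, something like the following should work; MySQL names an unnamed unique key after its first column, so check SHOW INDEX first. (Also note that ALTER IGNORE was removed in MySQL 5.7, so the deduplication trick above only works on older versions.)
SHOW INDEX FROM sale_details;
ALTER TABLE sale_details DROP INDEX orderId;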
This seems like it should be simple, but I couldn't figure out a way to do it. Let's say I have a table with 5,000 rows, each with an ID (primary key) of 1–5000. I am blindly inserting a new value with an existing ID, and it could be something like 2677. What I want to happen is that if the ID already exists, it will use the auto_increment value, in this case 5001. That or the maximum existing value + 1.
Most importantly, I can't use PHP (or anything else other than SQL) to do this, because the output is a query that needs to be directly importable without errors.
I have looked at two similar questions on SO:
Can you use aggregate values within ON DUPLICATE KEY – the problem here is that they're selecting from an existing table, which I can't do.
on duplicate key update with a condition? – the problem here is that I have no information on the table I'm importing to (except the basic structure), and don't know what the maximum value is.
INSERT INTO table (column1,column2) VALUES (1,2) ON DUPLICATE KEY UPDATE id=VALUES(id)
Obviously this requires an id column with AUTO_INCREMENT.
Moreover, if you later need to retrieve the affected id just as if it were a new insert, you do:
ON DUPLICATE KEY UPDATE id=LAST_INSERT_ID(VALUES(id));
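Putting it together, a minimal sketch with a hypothetical items table:
CREATE TABLE items (
    id INT AUTO_INCREMENT PRIMARY KEY,
    name VARCHAR(50)
);
INSERT INTO items (id, name) VALUES (2677, 'foo')
ON DUPLICATE KEY UPDATE id = LAST_INSERT_ID(VALUES(id));
-- if the insert hit a duplicate, this now returns the existing row's id
SELECT LAST_INSERT_ID();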
I'm not optimistic that this can be done without a stored procedure, but I'm curious if the following is possible.
I want to write a single query insert/update that updates a row if it finds a match and if not inserts into the table with the values it would have been updating.
So... something like
updateInsert into table_a set n = 'foo' where p='bar';
In the event that there is no row where p='bar', it would automatically run insert into table_a set n = 'foo';
EDIT:
Based on a couple of comments I see that I need to clarify that n is not a PRIMARY KEY and the table actually needs the freedom to have duplicate rows. I just have a situation where a specific entry needs to be unique... perhaps I'm just mixing metaphors in a bad way and should pull this out into a separate table where this key is unique.
I would enforce this with the table schema: utilize a unique multi-column key on the target table and use INSERT IGNORE INTO. A duplicate key would normally raise an error, but INSERT IGNORE downgrades it to a warning and simply skips the conflicting row.
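A minimal sketch of that approach, reusing the hypothetical table_a from the question (the key name uq_table_a_p is my invention):
ALTER TABLE table_a ADD UNIQUE KEY uq_table_a_p (p);
-- inserts the row, or silently skips it when a row with p = 'bar' already exists
INSERT IGNORE INTO table_a (n, p) VALUES ('foo', 'bar');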
I have a database table with one Index where the keyname is PRIMARY, Type is BTREE, Unique is YES, Packed is NO, Column is ID, Cardinality is 728, and Collation is A.
I have a script that runs on page load that adds entries to the MySQL database table and also removes duplicates from the Database Table.
Below is the script section that deletes the duplicates:
// Removes duplicates from the MySQL database table, based on the 'Entry_Date' field
mysql_query("ALTER IGNORE TABLE $TableName ADD UNIQUE KEY (Entry_Date)");
// Drops the index that the duplicate-removal command above created
mysql_query("ALTER TABLE $TableName DROP INDEX Entry_Date");
Using the Remove Duplicates command above, an additional index is added to the table. The next command is supposed to delete this added index.
The problem is that sometimes the index created by the Remove Duplicates command does not get deleted by the following command, so more and more indexes accumulate on the table. These extra indexes prevent the script from adding further data to the database until I remove them by hand.
My Question:
Is there a command or short function that I can add to the script that will delete all indexes except the original index mentioned in the beginning of this post?
I did read the following post, but I don't know if this is the correct script to use:
How to drop all of the indexes except primary keys with single query
I don't think so. What you can do is create a copy of the table, but that wouldn't copy the indexes. For example, if you run
CREATE TABLE table1 AS (SELECT * FROM table_2);
it will make a copy, but without the indexes or the PK.
After all the comments I think I realize what is happening.
You actually allow duplicates in the database. You just want to clean them up from time to time.
The problem is that the method you have chosen to clean them is to create a unique key with the IGNORE option, which causes duplicate rows to be dropped instead of failing the key creation. Then you drop the unique key so that duplicate rows can be added again. Your problem is that sometimes the unique key does not get dropped.
I suggest you delete the duplicates in another way. Supposing that your table name is "my_table" and your primary key is my_key_column, then:
delete from my_table where my_key_column not in (select min(my_key_column) from my_table group by Entry_Date)
Edit: the above won't work due to a limitation in MySQL (it does not allow you to delete from a table you are also selecting from in a subquery), as pointed out by @a_horse_with_no_name.
Try the three following queries instead:
create temporary table if not exists tmp_posting_data select id from posting_data where 1=2;
insert into tmp_posting_data (id) select min(id) from posting_data group by Entry_Date;
delete from posting_data where id not in (select id from tmp_posting_data);
As a final note, try to reconsider whether you really need to allow duplicated rows, as also suggested by @a_horse_with_no_name. Instead of allowing rows to be entered and then deleted, you can create the unique key once in the database:
Alter table posting_data add unique key (Entry_Date)
and then, when you are inserting new data from the RSS feed, use REPLACE instead of INSERT; it will delete the old row if it is a duplicate on the primary key or any unique index:
replace into posting_data (......) values(.....)
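For instance, with hypothetical column names and values:
-- if a row with this Entry_Date already exists, it is deleted and re-inserted
REPLACE INTO posting_data (Entry_Date, Title) VALUES ('2014-06-01', 'Some headline');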
I have a table with just one column: userid.
When a user accesses a certain page, his userid is being inserted to the table. Userids are unique, so there shouldn't be two of the same userids in that table.
I'm considering two designs:
Making the column unique and using INSERT commands every time a user accesses that page.
Checking if the user is already recorded in the table by SELECTing from the table, then INSERTing if no record is found.
Which one is faster?
Definitely create a UNIQUE index, or, better, make this column a PRIMARY KEY.
You need an index to make your checks fast anyway.
Why not make this index UNIQUE, so that you have a fallback in case you for some reason forget to check with SELECT?
If your table is InnoDB, it will have a PRIMARY KEY anyway, since all InnoDB tables are index-organized by design.
In case you didn't declare a PRIMARY KEY on your table, InnoDB will create a hidden column to serve as the primary key, making your table roughly twice as large, and you will not have an index on your column.
Creating a PRIMARY KEY on your column is a win-win.
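For a one-column table like this, that is as simple as (table name assumed):
CREATE TABLE page_visitors (
    userid INT NOT NULL PRIMARY KEY
) ENGINE=InnoDB;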
You can issue
INSERT IGNORE INTO mytable VALUES (userid)
and check how many records were affected.
If 0, there was a key violation, but no exception.
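A minimal sketch of that check (mytable here stands for your single-column table):
INSERT IGNORE INTO mytable (userid) VALUES (42);
-- returns 1 if the row went in, 0 if the userid was already present
SELECT ROW_COUNT();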
How about using REPLACE?
If the user already exists, the row is replaced; if it doesn't, a new row is inserted.
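For example, with the same hypothetical mytable:
-- deletes any existing row with this userid, then inserts a fresh copy
REPLACE INTO mytable (userid) VALUES (42);
Note that REPLACE is literally a delete plus an insert, so on a one-column table the end state is the same as with INSERT IGNORE, just with more write work.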
What about doing an UPDATE, e.g.
UPDATE xxx SET x=x+1 WHERE userid=y
and if that fails (e.g. no matched rows), then doing an INSERT for the new user?
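A sketch of that two-step pattern, assuming a hypothetical visits table with a hits counter column next to userid:
UPDATE visits SET hits = hits + 1 WHERE userid = 42;
-- if the UPDATE matched 0 rows, the user is new, so insert instead:
INSERT INTO visits (userid, hits) VALUES (42, 1);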
SELECT is faster... but you'd prefer the SELECT check not because of this, but to avoid raising an error.
Or:
INSERT INTO xxx (`userid`) VALUES (4) ON DUPLICATE KEY UPDATE userid = VALUES(`userid`)
You should make it unique in any case.
Whether to check first using SELECT depends on which scenario is most common. If you get new users all the time and only occasionally an existing one, it might be faster overall for the system to just insert and catch the exception on the rare occasions it happens. But handling an exception is slower than checking first and then inserting, so if an existing user is the common scenario, you should always check first with SELECT.