I have a table with 3 columns. None of the columns is a unique key.
I want to run an INSERT only if a row with the exact same values in every column doesn't already exist.
Given the following table:
a   b   c
----------
1   3   5
7   1   3
9   49  4
a=3 b=4 c=3 should insert
a=7 b=1 c=3 should not insert (a row with these exact values exists)
The solutions I have found so far need a unique primary key.
The most efficient way is to add a UNIQUE KEY covering the columns to your table. You could also write your own logic to compare the values before inserting, but you don't want to do that if your table has many columns.
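A minimal sketch of the UNIQUE KEY route, assuming the table is named t and the key name uq_abc is a placeholder (and that the existing data holds no duplicates yet):

ALTER TABLE t ADD UNIQUE KEY uq_abc (a, b, c);
-- With the key in place, a duplicate row is silently skipped:
INSERT IGNORE INTO t (a, b, c) VALUES (7, 1, 3);

INSERT ... ON DUPLICATE KEY UPDATE would work just as well once the key exists.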
I'm not sure I understand your point correctly, but I hope this helps.
First of all, SELECT the row with a WHERE clause that matches all three columns:
SELECT * FROM table WHERE a=$a AND b=$b AND c=$c
Then fetch the result (fetch_array or fetch_row); if a row comes back, the values already exist and you should not insert.
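If you'd rather avoid the separate check in application code, a single-statement alternative is a conditional INSERT ... SELECT. This is only a sketch, with the table name as a placeholder and the values (3, 4, 3) taken from the example above:

INSERT INTO table_name (a, b, c)
SELECT 3, 4, 3 FROM DUAL
WHERE NOT EXISTS (
    SELECT 1 FROM table_name WHERE a = 3 AND b = 4 AND c = 3
);
-- Inserts (3, 4, 3) only if no identical row exists.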
I am trying to understand how to do this procedure.
Basically:
I have table 1 -> 90+ columns
I have table 2 -> a column called attributes, whose rows each name one of the 90+ columns of table 1
What I want to do is show table 1 with ONLY the columns that appear in table 2's rows.
SELECT [table 2 row values ] FROM table 1
How would I go about doing this? Thank you
I think you should see the JOIN clause here.
It may be helpful for your question.
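If the attribute values really are column names of table 1, one possible sketch is to build the SELECT list dynamically with a prepared statement. Here t1 and t2 are placeholder table names, and the attributes column comes from the question:

SET @cols = (SELECT GROUP_CONCAT(attributes) FROM t2);
-- raise group_concat_max_len first if the combined list exceeds 1024 characters
SET @sql = CONCAT('SELECT ', @cols, ' FROM t1');
PREPARE stmt FROM @sql;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;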
The table:
ID   TYPE   USER_ID
======================
1    1      15
2    1      15
.    3      15
.    1      15
.
The table should keep multiple USER_IDs with TYPE=1, but only 0 or 1 rows where TYPE=3.
In the TYPE=3 case, upon insert I need to either update or create that row (much like INSERT ... ON DUPLICATE KEY UPDATE).
Is there a good way to accomplish this without first SELECTing and then updating or inserting in the application depending on the SELECT results?
Preferably doing this in a single command, and without triggers?
One way might be to add the new tuple, hold the id you just added in a variable, then
delete where type = 3 and id != {added id}
It would work, but I want to add the disclaimer that it seems dodgy somehow.
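A rough sketch of that idea, assuming the table is named tbl and id is an AUTO_INCREMENT primary key (both assumptions):

INSERT INTO tbl (type, user_id) VALUES (3, 20);
SET @new_id = LAST_INSERT_ID();
-- Remove any other TYPE=3 rows, keeping only the row just added:
DELETE FROM tbl WHERE type = 3 AND id <> @new_id;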
You can do the update with subqueries. In this case, since you want to read from and write to the same table in one statement, you need to wrap the inner query in a named derived table so MySQL doesn't reject the self-referencing update.
Say you want to update the user_id of the first row with type=3 to 20:
UPDATE tbl SET user_id=20 WHERE id=
(SELECT A.id FROM (SELECT MIN(id) id
FROM tbl
WHERE type=3) A);
See DEMO ON SQL Fiddle.
I am trying to delete all rows from a table with a particular id.
my query is:
DELETE FROM table_name WHERE x_id='46';
the error returned is:
#1136 - Column count doesn't match value count at row 1
My table has a composite primary key; x_id is one of the columns in that key.
Please Help!
That error is strange for a DELETE statement. It is most likely coming from a badly written trigger that is executed as a result of the delete.
This error would most likely be encountered on an insert statement such as the following:
insert into foo(bar, baz)
select bar, baz, foobar, 2
from myTable
Note how the insert statement specifies 2 columns, but provides 4 values.
You might try providing a second value in the DELETE query so it matches the composite index for the row:
DELETE FROM CPI
WHERE (CountryID, Year) IN (('AD', 2010), ('AF', 2009), ('AG', 1992))
Cause:
You may have a trigger on this table and have since changed the table structure.
Now, you may get this error when you delete, insert, or update in this table (depending on the trigger event you specified).
Solution:
To solve this issue, you have to update the trigger as well: the columns the trigger works with must match the table's current columns.
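If you want to confirm that a trigger is involved and rebuild it, a rough sketch follows; the trigger name and body here are purely hypothetical:

SHOW TRIGGERS LIKE 'table_name';
-- hypothetical example: drop and recreate the trigger with a column list
-- that matches the table's current structure
DROP TRIGGER IF EXISTS trg_table_name_ad;
CREATE TRIGGER trg_table_name_ad
AFTER DELETE ON table_name
FOR EACH ROW
  INSERT INTO table_name_log (x_id) VALUES (OLD.x_id);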
According to the docs:
If [columns a and b are] unique, the INSERT is equivalent to this UPDATE statement instead:
UPDATE table SET c=c+1 WHERE a=1 OR b=2 LIMIT 1;
If a=1 OR b=2 matches several rows, only one row is updated. In general, you should try to avoid using an ON DUPLICATE KEY UPDATE clause on tables with multiple unique indexes.
This is fair enough, but what if I have this as the only key:
PRIMARY KEY (`a`,`b`)
Since the duplicate key depends on both fields simultaneously, would the update reliably affect the specific row where the duplicate occurs, or does it behave the same as if the fields were individually unique?
Assuming you're using the same query as in your example, it wouldn't reliably update the row with the duplicate key. It would still find the first row in data order that has either of the matching values. Consider the example below.
     a | b
1.   1 | 1
2.   1 | 2
3.   1 | 3
4.   1 | 4
5.   2 | 1
6.   2 | 2
The query UPDATE table SET c=c+1 WHERE a=1 OR b=2 LIMIT 1; would update the first row, not the desired second row. So, in a few words, it's the same as if the columns were individually unique.
If the duplicate concerns a column which is defined as unique or primary, or the SAME set of columns defined in a unique or primary key, an INSERT ... ON DUPLICATE KEY UPDATE ... statement will update the row where ALL the columns in that primary or unique key have the same values.
To answer your comment on G-Nugget's answer: only row 2 will be updated.
Hope that helps ;-)
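A small sketch of that behaviour, assuming a table t with PRIMARY KEY (a, b) and a counter column c as in the quoted docs example:

INSERT INTO t (a, b, c) VALUES (1, 2, 1)
ON DUPLICATE KEY UPDATE c = c + 1;
-- Only the row where a = 1 AND b = 2 is touched, because the whole
-- composite key must match for the duplicate to fire.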
Good Morning stackoverflownians,
I have a very big table with duplicates across two columns. That means if the values of row a in col1 and col2 are duplicated in row b, I should keep only row a:
## table_1
col1   col2
1      10
1      10
1      10
1      11
1      11
1      12
2      20
2      20
2      21
2      21

# should return this tbl without duplication
col1   col2
1      10
1      11
1      12
2      20
2      21
My previous code accounts only for col1, and I don't know how to run this query on two columns:
CREATE TABLE temp LIKE db.table_1;
INSERT INTO temp SELECT * FROM table_1 WHERE 1 GROUP BY col1;
DROP TABLE table_1;
ALTER TABLE temp RENAME table_1;
So I thought about that :
CREATE TABLE temp LIKE db.table_1;
INSERT INTO temp(col1,col2)
SELECT DISTINCT col1,col2 FROM table_1;
then drop and rename..
But I'm not sure it's going to work, and MySQL tends to be unstable here; if it takes too long I will have to stop the query, and that may crash the server again.. T.T
We have 200,000,000 rows and all of them have at least one duplicate.
Any Suggestion of code ? :)
Also .. How long would it take ? minutes or hours ?
You already know quite a few ways :)
You can also try this:
Use INSERT IGNORE rather than INSERT. If a record doesn't duplicate an existing record, MySQL inserts it as usual. If the record is a duplicate, the IGNORE keyword tells MySQL to discard it silently without generating an error.
Read from the existing table and write to a new table using INSERT IGNORE. This way you can control the insert process depending on your resource usage.
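A sketch of that approach for this table; note it only works if the new table has a unique key over (col1, col2), which is an extra assumption here:

CREATE TABLE temp LIKE db.table_1;
ALTER TABLE temp ADD UNIQUE KEY uq_col1_col2 (col1, col2);
-- Duplicate (col1, col2) pairs from table_1 are silently skipped:
INSERT IGNORE INTO temp (col1, col2)
SELECT col1, col2 FROM table_1;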
When using INSERT IGNORE and you do have key violations, MySQL does NOT raise a warning!!!
The DISTINCT clause is the way to go, but it will take a while to run on that many records. I'd add an ID column that is AUTO_INCREMENT and is your PK. Then you can run the de-duplication in stages that won't time out; a sketch follows.
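A rough sketch of staged de-duplication along those lines; the id range and batch size are arbitrary and should be tuned to your data:

ALTER TABLE table_1
  ADD COLUMN id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY;
-- Delete later duplicates in id-bounded batches so no single statement runs too long:
DELETE t1 FROM table_1 t1
JOIN table_1 t2
  ON  t2.col1 = t1.col1
  AND t2.col2 = t1.col2
  AND t2.id   < t1.id
WHERE t1.id BETWEEN 1 AND 10000000;   -- repeat with the next id range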
Good luck and HTH
-- Joe