I don't think this is possible, as I couldn't find anything, but I thought I would check here in case I'm not searching for the right thing.
I have a settings table in my database which has two columns. The first column is the setting name and the second column is the value.
I need to update all of these at the same time, so I wanted to see if there was a way to update these values in one query, like the following:
UPDATE table SET col1='setting name' WHERE col2='1 value' AND SET col1='another name' WHERE col2='another value';
I know the above isn't valid SQL, but it's the sort of thing I would like to do, so I'm wondering whether there is a way to do this instead of having to run a separate query for each setting I want to update.
Thanks for your help.
You can use INSERT INTO .. ON DUPLICATE KEY UPDATE to update multiple rows with different values.
You do need a unique index (like a primary key) to make the "duplicate key" part work.
Example:
INSERT INTO table (a,b,c) VALUES (1,2,3),(4,5,6)
ON DUPLICATE KEY UPDATE b = VALUES(b), c = VALUES(c);
-- VALUES(x) points back to the value you gave for field x
-- so for b it is 2 and 5, for c it is 3 and 6 for rows 1 and 4 respectively (if you assume that a is your unique key field)
If you have a specific case I can give you the exact query.
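For instance, a sketch assuming your settings table is called settings, with a unique name column and a value column:
INSERT INTO settings (`name`, `value`) VALUES
    ('setting1', 'value1'),
    ('setting2', 'value2')
ON DUPLICATE KEY UPDATE `value` = VALUES(`value`);
-- rows whose name already exists get their value updated; new names are inserted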
UPDATE table
SET col2 =
    CASE col1
        WHEN 'setting1' THEN 'value1'
        WHEN 'setting2' THEN 'value2'
        ELSE col2
    END;
-- add one WHEN branch per setting you want to change; other columns can be
-- set in the same statement with ", colX = ..." (SET is not repeated)
I decided to use multiple queries all in one go, so the code would go like:
UPDATE table SET col2='value1' WHERE col1='setting1';
UPDATE table SET col2='value2' WHERE col1='setting2';
etc
etc
I've just done a test where I inserted 1,500 records into the database. Doing it without starting a DB transaction took 35 seconds. I blanked the database and did it again, but started a transaction first and committed it once the 1,500th record was inserted, and it took 1 second, so it definitely seems like doing it in a DB transaction is the way to go.
You need to run separate SQL queries and make use of transactions if you want them to run as an atomic operation.
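For example, a minimal sketch assuming the settings table and columns are called settings, name, and value:
START TRANSACTION;
UPDATE settings SET `value` = 'value1' WHERE `name` = 'setting1';
UPDATE settings SET `value` = 'value2' WHERE `name` = 'setting2';
COMMIT;
-- all updates become visible together; on error you can ROLLBACK and none of them apply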
UPDATE table SET col1=if(col2='1 value','setting name','another name') WHERE col2='1 value' OR col2='another value'
@Frits Van Campen,
The insert into .. on duplicate works for me.
I have been doing this for years when I want to update more than a thousand records from an Excel import.
The only problem with this trick is that when there is no record to update, instead of ignoring the row, this method inserts a record, and in some instances that is a problem. Then I need to add another field, and after the import I have to delete all the records that were inserted instead of updated.
If I have a table that has these rows:
animal (primary)
-------
man
dog
cow
and I want to delete all the rows and insert my new rows (that may contain some of the same data), such as:
animal (primary)
-------
dog
chicken
wolf
I could simply do something like:
delete from animal;
and then insert the new rows.
But when I do that, for a split second, 'dog' won't be accessible through the SELECT statement.
I could simply insert ignore the new data and then delete the rest, one by one, but that doesn't feel like the right solution when I have a lot of rows.
Is there a way to insert the new data and then have MySQL automatically delete the rest afterward?
I have a program that selects data from this table every 5 minutes (and the code I'm writing now will be updating this table once every 30 minutes), so I would like to be as accurate as possible at all times, and I would rather have too many rows for a split second than too few rows for the same time.
Note: I know that this may seem like it is unnecessary but I just feel like if I leave too many of those unlikely possibilities in different places, there will be times where things go wrong.
You may want to use TRUNCATE instead of DELETE here. TRUNCATE is faster than DELETE and resets the table back to its empty state (meaning AUTO_INCREMENT counters are reset to their starting values as well).
Not sure why you're having problems with selecting a value that was deleted and re-added, maybe I'm missing some context. But if you're wiping the table clean, you might want to use truncate instead.
You could add another timestamp column and change the SELECT statement to accommodate this scenario, where it needs to check for the latest values (a rough sketch follows after the questions below).
If this is for school, I would argue that you need a timestamp and that is what your professor is looking for. You shouldn't need to truncate a table to get the latest values, you need to adjust the thinking behind the table and how you are querying data. Hope this helps!
Check out these:
How to make a mysql table with date and time columns?
Why not update values instead?
My other questions would be:
How are you loading this into the table?
What does that code look like?
Can you change the way you Select from the table?
What values are being "updated" and change in such a way that you need to truncate the entire table?
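As a rough sketch of the timestamp idea (assuming the table is called animals and the added column is updated_on), the SELECT could then ignore everything but the latest batch:
SELECT animal
FROM animals
WHERE updated_on = (SELECT MAX(updated_on) FROM animals);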
If you don't want to add a new column, there is another method.
1. First, update the table in a way that marks all existing rows for future deletion. For example:
UPDATE `table_name` SET `animal`=CONCAT('MUST_BE_DELETED_', `animal`)
2. Second, insert the new rows.
3. Finally, remove all marked rows:
DELETE FROM `table_name` WHERE `animal` LIKE 'MUST_BE_DELETED_%'
You could implement this by adding the updated_on column as a timestamp. You could even make use of some default values, but let's go with an example without them.
I presume the table would look something like this:
CREATE TABLE `my_table` (
  `animal` varchar(255) NOT NULL,
  `updated_on` timestamp,
  PRIMARY KEY (`animal`)
) ENGINE=InnoDB;
This is just a dummy table example. What's important are the two queries later on.
You would simply perform a query to insert the data, such as:
insert into my_table(animal)
select animal from my_view where animal = 'dogs'
on duplicate key update
updated_on = current_timestamp;
Please note that my_view is the table/view/query from which you supply the values to insert into your table. Also note that you need a primary/unique key constraint on the animal column for this example to work.
Then, you proceed with the following query, to "purge" (delete) the old values:
delete from my_table
where updated_on < (
select *
from (
select max(updated_on) from my_table
) as max_date
);
Please note that you could create a separate view to obtain this max_date value for updated_on. It indicates the timestamp of the values last updated/inserted by the previous query, so you can use it in the WHERE clause to delete the old records that you no longer want/need.
IMPORTANT NOTE:
Since you are running multiple queries that are supposed to be a single operation, I'd advise you to wrap them in a single transaction and to perform a proper rollback on the various potential failure outcomes (e.g. in case of MySQL exceptions). You might wish to use a stored procedure for that.
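A rough sketch of that wrapper, reusing the two queries above (the WHERE filter on my_view is omitted for brevity; real error handling and ROLLBACK would live in your application code or a stored procedure):
START TRANSACTION;

INSERT INTO my_table (animal)
SELECT animal FROM my_view
ON DUPLICATE KEY UPDATE updated_on = CURRENT_TIMESTAMP;

DELETE FROM my_table
WHERE updated_on < (
    SELECT * FROM (SELECT MAX(updated_on) FROM my_table) AS max_date
);

COMMIT; -- or ROLLBACK if either statement fails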
I have a MySQL query that performs batch INSERTs and uses ON DUPLICATE KEY UPDATE to update a row in case there's a unique key duplicate.
INSERT INTO table1
(col1,col2,col3)
VALUES
(val1,val2,val3),
(val4,val5,val6),
(val7,val8,val9),
...
(valn,valx,valz)
ON DUPLICATE KEY UPDATE
col3 = VALUES(col3);
In other words, new rows are inserted unless there's a duplicate unique key, in which case col3 is updated.
When the query is finished, I would like to know how many rows were INSERTED as well as how many rows were UPDATED. Is this possible?
No, there's no definitive way to tell from the rows_affected count. There are some corner cases where we can tell: if rows_affected is exactly twice the number of rows we attempted to insert, we know they were all updates. If the rows_affected count is zero, we know that no rows were inserted. If the rows_affected count is one, we know that exactly one row was inserted. But aside from that, there are a lot of permutations.
It might be possible to craft BEFORE INSERT and BEFORE UPDATE triggers to increment user-defined variables. If we initialize the user-defined variables immediately before the INSERT ... ON DUPLICATE KEY UPDATE statement, we could use those variables to determine how many rows we attempted to insert and how many of those rows caused a duplicate key exception. (MySQL doesn't increment rows_affected for an UPDATE action that causes no actual change to the row.)
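A rough sketch of that idea, assuming col1 is the unique key on table1 (the trigger names and @-variables are made up for illustration, and the counters are only meaningful if nothing else writes to table1 at the same time):
-- fires once per row the INSERT statement attempts
CREATE TRIGGER table1_count_bi BEFORE INSERT ON table1
FOR EACH ROW SET @rows_attempted = IFNULL(@rows_attempted, 0) + 1;

-- fires only for rows that hit a duplicate key and are updated instead
CREATE TRIGGER table1_count_bu BEFORE UPDATE ON table1
FOR EACH ROW SET @rows_duplicated = IFNULL(@rows_duplicated, 0) + 1;

-- initialize immediately before the batch statement
SET @rows_attempted = 0, @rows_duplicated = 0;

INSERT INTO table1 (col1, col2, col3)
VALUES (1, 2, 3), (4, 5, 6)
ON DUPLICATE KEY UPDATE col3 = VALUES(col3);

-- inserted rows = attempted - duplicates
SELECT @rows_attempted - @rows_duplicated AS inserted_rows,
       @rows_duplicated AS updated_rows;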
EDIT
If you have a guarantee that an UPDATE action will cause an actual change to the row... if you are changing the value of at least one column on each row, for every row that is changed... and if you have a count of the actual number of rows you are attempting to insert, then you could determine from the rows_affected count how many rows were inserted and how many rows were updated: with N rows attempted, rows_affected = inserts + 2 * updates, so updates = rows_affected - N and inserts = 2 * N - rows_affected.
The INSERT ... ON DUPLICATE KEY UPDATE can cause the same row to be inserted and updated, and/or cause the same row to be updated multiple times.
Did you want a count of the number of "update" operations, including updates to the same row, or did you want a count of the number of rows in the table that got updated?
Expanding on hakkikonu's answer (read it first, or this will make no sense, if it does at all), and agreeing with @spencer7593's comments about things like "concurrency killing (CK) operations with locks" and the need to fix the formula for determining the update count in hakkikonu's answer: I see no way of getting accurate insert and update counts without CK. Throwing in AFTER triggers certainly doesn't help solve it without CK, "alone and at the same time being accurate".
Were one to have the nullable table1.blabla column reserved only for use with batches against table1, regardless of the frequency of such batches, then whenever a batch is not running against table1, blabla is guaranteed to be NULL even if the column is not dropped; it is obvious how. In that case I believe you can get insert and update counts accurately. Here is how, based on your INSERT statement: table1 is given a write lock held exclusively by the batch code (let's assume the MyISAM storage engine; why not, we are making assumptions here). The blabla column then shows NULL for 'inserted' rows and 'up' for 'updated' rows based on your statement (barely different from what hakkikonu suggested), and you have your counts.
Concerning what spencer7593 wrote in his answer about updates and/or inserts happening more than once for a given row based on your question's INSERT statement, I don't see it that way, unless your batch data has duplicate keys in it, in which case what does accuracy matter anyway? Either the row is there or not to begin with, based on whatever threw the ON DUPLICATE KEY: if it threw, it is an update; if it didn't, an insert. Someone correct me. Then at the end, the ALTER TABLE to drop blabla is performed (or the column is updated back to NULL) and the lock is released. So I guess it comes down to how important the update and insert counts are, the size of the table, and the frequency of batches.
Add a new column, something like blabla, and give it NULL as the default value.
I assume you will use this only once.
Then
ON DUPLICATE KEY UPDATE
col3 = VALUES(col3),
blabla = 'up' ;
SELECT COUNT(*) AS allrows FROM table1; # returns all rows count (COUNT(*), because COUNT(blabla) would skip NULL rows)
SELECT COUNT(*) AS updrows FROM table1 WHERE blabla = 'up'; # returns update count
SELECT COUNT(*) AS insrows FROM table1 WHERE blabla IS NULL; # returns insert count
So I know in MySQL it's possible to insert multiple rows in one query like so:
INSERT INTO table (col1,col2) VALUES (1,2),(3,4),(5,6)
I would like to delete multiple rows in a similar way. I know it's possible to delete multiple rows based on the exact same conditions for each row, i.e.
DELETE FROM table WHERE col1='4' and col2='5'
or
DELETE FROM table WHERE col1 IN (1,2,3,4,5)
However, what if I wanted to delete multiple rows in one query, with each row having a set of conditions unique to itself? Something like this would be what I am looking for:
DELETE FROM table WHERE (col1,col2) IN (1,2),(3,4),(5,6)
Does anyone know of a way to do this? Or is it not possible?
You were very close, you can use this:
DELETE FROM table WHERE (col1,col2) IN ((1,2),(3,4),(5,6))
Please see this fiddle.
A slight extension to the answer given, which is hopefully useful to the asker and anyone else looking.
You can also SELECT the values you want to delete. But watch out for the Error 1093 - You can't specify the target table for update in FROM clause.
DELETE FROM
orders_products_history
WHERE
(branchID, action) IN (
SELECT
branchID,
action
FROM
(
SELECT
branchID,
action
FROM
orders_products_history
GROUP BY
branchID,
action
HAVING
COUNT(*) > 10000
) a
);
I wanted to delete all history records where the number of history records for a single action/branch exceeds 10,000. And thanks to this question and its chosen answer, I can.
Hope this is of use.
Richard.
It took a lot of googling, but here is what I do in Python for MySQL when I want to delete multiple items from a single table using a list of values.
# create an empty list
values = []
# append each value you want to delete,
# but make sure each entry is a single-element tuple rather than a bare string
values.append((your_value,))
# once the list is built, perform a single executemany()
cursor.executemany("DELETE FROM YourTable WHERE ID = %s", values)
I have a MySQL table with one primary key.
Nightly I run a job to insert and update records. I use REPLACE INTO for each operation so it'll either add or replace the existing row.
After the REPLACE INTO query I call mysql_affected_rows() which is returning a count of 1 for many rows which are actually replaced and not 'new' (it returns 2 for the vast majority of rows which are replaced).
I know that some of these 'inserts' are false because I track the count of rows at the start and end of the batch update; the table has no duplicates to throw off that count, plus I've verified the faux 'new' rows existed before the batch update.
This table has nothing special about it; a similar table behaves as expected with the same code. Anyone have any ideas why mysql_affected_rows() is returning 1 for an operation which is really a replace and not an insert?
REPLACE INTO actually does a DELETE and then INSERT, not an UPDATE.
You might want to consider using INSERT … ON DUPLICATE KEY UPDATE syntax instead.
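A minimal sketch of the difference, with table and column names assumed:
-- REPLACE INTO deletes the old row and inserts a new one:
REPLACE INTO my_table (id, col) VALUES (1, 'x');
-- the upsert form keeps the existing row and only updates it:
INSERT INTO my_table (id, col) VALUES (1, 'x')
ON DUPLICATE KEY UPDATE col = VALUES(col);
-- with the upsert, mysql_affected_rows() reports 1 for an insert,
-- 2 for an actual update, and 0 when the row already held those values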
Is there a way to remove all repeat rows from a MySQL database?
A couple of years ago, someone requested a way to delete duplicates. Subselects make it possible with a query like this in MySQL 4.1:
DELETE FROM some_table WHERE primaryKey NOT IN
(SELECT MIN(primaryKey) FROM some_table GROUP BY some_column)
Of course, you can use MAX(primaryKey) as well if you want to keep the newest record with the duplicate value instead of the oldest record with the duplicate value.
To understand how this works, look at the output of this query:
SELECT some_column, MIN(primaryKey) FROM some_table GROUP BY some_column
As you can see, this query returns the primary key for the first record containing each value of some_column. Logically, then, any key value NOT found in this result set must be a duplicate, and therefore it should be deleted.
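Note that MySQL often rejects this exact form with error 1093 ("You can't specify target table for update in FROM clause", mentioned earlier); a common workaround, sketched here, is to wrap the subquery in an extra derived table:
DELETE FROM some_table WHERE primaryKey NOT IN
    (SELECT minKey FROM
        (SELECT MIN(primaryKey) AS minKey
         FROM some_table
         GROUP BY some_column) AS keepers);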
These questions / answers might interest you:
How to delete duplicate records in mysql database?
How to delete Duplicates in MySQL table.
An idea that's often used when you are working with a big table is to:
Create a new table
Insert into that table the unique records (i.e. only one version of the duplicates in the original table, generally using a select distinct)
and use that new table in your application, or drop the old table and rename the new one to the old name.
The good thing with this approach is that you can verify what's in the new table before dropping the old one -- always nice to check that sort of thing ^^
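A rough sketch of that copy-and-swap approach (the table name and the idea that entire rows are duplicated are assumptions):
CREATE TABLE some_table_dedup LIKE some_table;
-- copy only one version of each duplicate row
INSERT INTO some_table_dedup
SELECT DISTINCT * FROM some_table;
-- if the duplicates differ in a surrogate key, SELECT DISTINCT the business columns instead
-- swap the tables, keeping the old one around until it has been checked
RENAME TABLE some_table TO some_table_old, some_table_dedup TO some_table;
-- DROP TABLE some_table_old;  -- once you are satisfied with the result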