Just to get the elephant out of the room: there are many SO questions similar to this one. I have looked through the other posts with similar titles and did not find what I was looking for. That said, I may simply have missed how to apply their answers to my specific case.
I am trying to update multiple rows in a table to [potentially] different values, but I don't know how many rows I will be updating because I am targeting them via a foreign key. Due to database constraints it is safe to assume that I have the correct number of inputs (e.g. 3 values for 3 rows, or 5 values for 5 rows). And even if it were not safe to assume, I don't mind an error being thrown, as that would ensure data integrity if something did happen to be wrong...
It does not matter which values get assigned to which rows.
Here are some examples of what I have tried.
UPDATE table1
SET
table1.column1 IN (17 , 37, 62)
WHERE
table1.foreign_key = 877;
After that I found out you can't use IN like that...
Then I tried:
UPDATE table1
SET
table1.column1 = (17 , 37, 62)
WHERE
table1.primary_key IN (SELECT
primary_key
FROM
table1
WHERE
foreign_key = 877);
However, "You can't specify target table 'table1' for update in FROM clause"
This would be fairly trivial to do with a SELECT followed by one UPDATE per row (n_rows + 1 queries, shown below), so I am assuming it is possible to do in one query.
SELECT primary_key from table1 where foreign_key = ?;,[877]
Then mapping the values and primary keys (I am using nodejs backend)
UPDATE table1 set column1 = ? where primary_key = ?;,[value1,primary_key1]
The hard parts seem to be
How to map multiple values to multiple rows.
How to get the unique IDs for each row at run time.
I don't mind if I have to use a stored procedure, I just really don't want to send n+1 queries to the database.
You can update your data with just one query, like the one below:
INSERT INTO TABLENAME (key, column1, column2, ...) VALUES (?), (?), ... ON DUPLICATE KEY UPDATE column1 = VALUES(column1), column2 = VALUES(column2), ...;
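For instance, applied to the hypothetical schema from the question (table1 with primary_key and column1, and assuming primary_key is the table's primary key), it might look like this:
INSERT INTO table1 (primary_key, column1)
VALUES (101, 17), (102, 37), (103, 62)  -- 101/102/103 are made-up ids previously fetched for foreign_key = 877
ON DUPLICATE KEY UPDATE column1 = VALUES(column1);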
You can also visit my post at this link:
Node.js Update Table with Array in Mysql
I had the same problem two days ago.
Related
How can I update only the nth row from a table?
To update the second row, for example, I tried using UPDATE and LIMIT, but it gives me an error.
UPDATE database_table
SET field = 'new_value'
LIMIT 0, 1
Is there a way to do it?
If you have a primary key and a column you'd like to order by (to get the nth row), you could do something like:
UPDATE database_table
SET field = 'new_value'
WHERE primary_key IN (
    SELECT primary_key
    FROM (
        SELECT primary_key
        FROM database_table
        ORDER BY some_date_column
        LIMIT n - 1, 1  -- replace n - 1 with the literal offset; LIMIT does not accept expressions
    ) AS nth_row  -- the extra derived table avoids MySQL's "LIMIT & IN subquery" restriction and error 1093
)
Edit: I should probably add a caveat. This answer might be technically correct, but you're likely wrong to use it. I can't imagine too many scenarios where you'd actually want to update the nth row of a table. You should generally only be updating tables based on primary keys. Updating the nth row will likely break your app if multiple users (or even multiple sessions with the same user) are using it at the same time.
The real answer is you should probably change your code to update based on primary key.
You would need some sort of id, and then do something like this:
UPDATE database_table SET field = 'new_value' WHERE id = 2
So I know in MySQL it's possible to insert multiple rows in one query like so:
INSERT INTO table (col1,col2) VALUES (1,2),(3,4),(5,6)
I would like to delete multiple rows in a similar way. I know it's possible to delete multiple rows based on the exact same conditions for each row, i.e.
DELETE FROM table WHERE col1='4' and col2='5'
or
DELETE FROM table WHERE col1 IN (1,2,3,4,5)
However, what if I wanted to delete multiple rows in one query, with each row having a set of conditions unique to itself? Something like this would be what I am looking for:
DELETE FROM table WHERE (col1,col2) IN (1,2),(3,4),(5,6)
Does anyone know of a way to do this? Or is it not possible?
You were very close, you can use this:
DELETE FROM table WHERE (col1,col2) IN ((1,2),(3,4),(5,6))
Please see this fiddle.
A slight extension to the answer given, which will hopefully be useful to the asker and anyone else looking.
You can also SELECT the values you want to delete. But watch out for the Error 1093 - You can't specify the target table for update in FROM clause.
DELETE FROM
orders_products_history
WHERE
(branchID, action) IN (
SELECT
branchID,
action
FROM
(
SELECT
branchID,
action
FROM
orders_products_history
GROUP BY
branchID,
action
HAVING
COUNT(*) > 10000
) a
);
I wanted to delete all history records where the number of history records for a single action/branch exceed 10,000. And thanks to this question and chosen answer, I can.
Hope this is of use.
Richard.
Took a lot of googling, but here is what I do in Python for MySQL when I want to delete multiple items from a single table using a list of values.
#create an empty list of parameter tuples
values = []
#continue to append the values you want to delete
#BUT each entry must be a single-value tuple, not a bare string or number
values.append((your_variable,))  # your_variable is a placeholder for the id to delete
#then, once your list is loaded, perform an executemany (and commit if autocommit is off)
cursor.executemany("DELETE FROM YourTable WHERE ID = %s", values)
connection.commit()  # connection is the open connection this cursor came from
I've got a table with a normal setup of auto-increment IDs. Some of the rows have been deleted, so the ID list could look something like this:
(1, 2, 3, 5, 8, ...)
Then, from another source (Edit: Another source = NOT in a database) I have this array:
(1, 3, 4, 5, 7, 8)
I'm looking for a query I can use on the database to get the list of ID:s NOT in the table from the array I have. Which would be:
(4, 7)
Does such a query exist? My solution right now is either creating a temporary table so that "WHERE table.id IS NULL" works, or, probably worse, using the PHP function array_diff to see what's missing after retrieving all the IDs from the table.
Since the list of IDs is closing in on millions of rows, I'm eager to find the best solution.
Thank you!
/Thomas
Edit 2:
My main application is a fairly simple table that is populated with a lot of rows. The application is administered through a browser, and I'm using PHP as the interpreter for the code.
Everything in this table is to be exported to another system (a 3rd-party product), and as yet there is no way of doing this other than manually using the import function in that program. It is also possible to insert new rows in the other system, although the agreed routine is to never ever do this.
The problem is then that my system cannot be 100% sure that the user did everything correctly after pressing the "export" key, or that no rows have ever been created in the other system.
From the other system I can get a CSV file containing all the rows that system has. So, by comparing the CSV file and my table I can see whether:
* there are rows missing in the other system that should have been imported
* someone has created rows in the other system
The problem isn't "solving it"; it's finding the best solution, since there is so much data in the rows.
Thanks again!
/Thomas
We can use MySQL's NOT IN option.
SELECT id
FROM table_one
WHERE id NOT IN ( SELECT id FROM table_two )
Edited
If you are getting the source from a CSV file, then you can simply put these values in directly, like this:
I am assuming that the CSV values are like 1,2,3,...,n
SELECT id
FROM table_one
WHERE id NOT IN ( 1,2,3,...,n );
EDIT 2
Or, if you want to do it the other way around, you can load the data into a temporary table in the MySQL database (e.g. with LOAD DATA or mysqlimport), retrieve the result, and then drop the table.
Like:
Create table
CREATE TABLE my_temp_table(
ids INT
);
load .csv file
LOAD DATA LOCAL INFILE 'yourIDs.csv' INTO TABLE my_temp_table
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
(ids);
Selecting records
SELECT ids FROM my_temp_table
WHERE ids NOT IN ( SELECT id FROM table_one )
dropping table
DROP TABLE IF EXISTS my_temp_table
What about using a left join; something like this:
select second_table.id
from second_table
left join first_table on first_table.id = second_table.id
where first_table.id is null
You could also go with a sub-query; depending on the situation, it might or might not be faster, though:
select second_table.id
from second_table
where second_table.id not in (
select first_table.id
from first_table
)
Or with a not exists:
select second_table.id
from second_table
where not exists (
select 1
from first_table
where first_table.id = second_table.id
)
The operator you are looking for is NOT IN (an alias for <> ALL).
The MySQL documentation:
http://dev.mysql.com/doc/refman/5.0/en/all-subqueries.html
An Example of its use:
http://www.roseindia.net/sql/mysql-example/not-in.shtml
Enjoy!
The problem is that T1 could have a million rows or ten million rows, and that number could change, so you don't know how many rows your comparison table T2 (the one that has no gaps) should have in order to do a WHERE NOT EXISTS or a LEFT JOIN testing for NULL.
But the question is, why do you care if there are missing values? I submit that, when an application is properly architected, it should not matter if there are gaps in an autoincrementing key sequence. Even an application where gaps do matter, such as a check register, should not be using an autoincrementing primary key as a synonym for the check number.
Care to elaborate on your application requirement?
OK, I've read your edits/elaboration. Synchronizing two databases where the second is not supposed to insert any new rows, but might do so, sounds like a problem waiting to happen.
Neither approach suggested above (WHERE NOT EXISTS or LEFT JOIN) is air-tight and neither is a way to guarantee logical integrity between the two systems. They will not let you know which system created a row in situations where both tables contain a row with the same id. You're focusing on gaps now, but another problem is duplicate ids.
For example, if both tables have a row with id 13887, you cannot assume that database1 created the row. It could have been inserted into database2, and then database1 could insert a new row using that same id. You would have to compare all column values to ascertain that the rows are the same or not.
I'd suggest therefore that you also explore GUID as a replacement for autoincrementing integers. You cannot prevent database2 from inserting rows, but at least with GUIDs you won't run into a problem where the second database has inserted a row and assigned it a primary key value that your first database might also use, resulting in two different rows with the same id. CreationDateTime and LastUpdateDateTime columns would also be useful.
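For illustration, here is a minimal sketch of what such a table could look like (assuming MySQL 8.0.13+ so an expression default like (UUID()) is allowed; the table and column names are made up):
CREATE TABLE exported_rows (
    id CHAR(36) NOT NULL DEFAULT (UUID()),  -- GUID primary key instead of AUTO_INCREMENT
    payload VARCHAR(255),
    CreationDateTime DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP,
    LastUpdateDateTime DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
    PRIMARY KEY (id)
);
On older MySQL versions you would generate the UUID in the application or in a trigger instead of a column default.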
However, a proper solution, if it is available to you, is to maintain just one database and give users remote access to it, for example, via a web interface. That would eliminate the mess and complication of replication/synchronization issues.
If a remote-access web-interface is not feasible, perhaps you could make one of the databases read-only? Or does database2 have to make updates to the rows? Perhaps you could deny insert privilege? What database engine are you using?
I have the same problem: I have a list of values from the user, and I want to find the subset that does not exist in another table. I did it in Oracle by building a pseudo-table in the SELECT statement. Here's a way to do it in Oracle; try it in MySQL without the "from dual":
-- find ids from user (1,2,3) that *don't* exist in my person table
-- build a pseudo table and join it with my person table
select pseudo.id from (
select '1' as id from dual
union select '2' as id from dual
union select '3' as id from dual
) pseudo
left join person
on person.person_id = pseudo.id
where person.person_id is null
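In MySQL the same idea works once the "from dual" parts are dropped; here is a sketch using the ids from the question's example array and the person table above:
select pseudo.id
from (
    select 1 as id
    union select 3
    union select 4
    union select 5
    union select 7
    union select 8
) pseudo
left join person
on person.person_id = pseudo.id
where person.person_id is null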
I don't think this is possible, as I couldn't find anything, but I thought I would check on here in case I am not searching for the correct thing.
I have a settings table in my database which has two columns. The first column is the setting name and the second column is the value.
I need to update all of these at the same time, and I wanted to see if there is a way to update the values in one query, like the following:
UPDATE table SET col1='setting name' WHERE col2='1 value' AND SET col1='another name' WHERE col2='another value';
I know the above isn't correct SQL syntax, but it is the sort of thing that I would like to do, so I was wondering whether there is another way to do this instead of having to run a separate SQL query for each setting I want to update.
Thanks for your help.
You can use INSERT INTO .. ON DUPLICATE KEY UPDATE to update multiple rows with different values.
You do need a unique index (like a primary key) to make the "duplicate key" part work.
Example:
INSERT INTO table (a,b,c) VALUES (1,2,3),(4,5,6)
ON DUPLICATE KEY UPDATE b = VALUES(b), c = VALUES(c);
-- VALUES(x) points back to the value you gave for field x
-- so for b it is 2 and 5, for c it is 3 and 6 for rows 1 and 4 respectively (if you assume that a is your unique key field)
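Guessing at the settings table from the question (assuming the table is called settings, col1 is the setting name with a unique index on it, and col2 is the value), it might look like:
INSERT INTO settings (col1, col2)
VALUES ('setting1', 'value1'), ('setting2', 'value2')
ON DUPLICATE KEY UPDATE col2 = VALUES(col2);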
If you have a specific case I can give you the exact query.
UPDATE table
SET col2 =
CASE col1
WHEN 'setting1'
THEN 'value'
ELSE col2
END
, col1 = ...
...
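Spelled out for the settings table from the question (assuming col1 is the setting name and col2 the value; the table name settings is a placeholder), the pattern is:
UPDATE settings
SET col2 = CASE col1
    WHEN 'setting1' THEN 'value1'
    WHEN 'setting2' THEN 'value2'
    ELSE col2
END
WHERE col1 IN ('setting1', 'setting2');
The WHERE clause keeps the statement from rewriting every other row with its existing value.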
I decided to use multiple queries all in one go, so the code goes like this:
UPDATE table SET col2='value1' WHERE col1='setting1';
UPDATE table SET col2='value2' WHERE col1='setting2';
etc.
I've just done a test where I inserted 1500 records into the database. Doing it without starting a DB transaction took 35 seconds; I blanked the database and did it again, but started a transaction first and committed it once the 1500th record was inserted, and it took 1 second. So it definitely seems like doing it in a DB transaction is the way to go.
You need to run separate SQL queries and make use of transactions if you want them to run atomically.
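A sketch of what that looks like, reusing the column names from the question (the table name settings is assumed):
START TRANSACTION;
UPDATE settings SET col2 = 'value1' WHERE col1 = 'setting1';
UPDATE settings SET col2 = 'value2' WHERE col1 = 'setting2';
COMMIT;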
UPDATE table SET col1=if(col2='1 value','setting name','another name') WHERE col2='1 value' OR col2='another value'
@Frits Van Campen,
The INSERT INTO .. ON DUPLICATE KEY UPDATE works for me.
I have been doing this for years when I want to update more than a thousand records from an Excel import.
The only problem with this trick is that when there is no record to update, instead of being ignored the row gets inserted, and in some instances that is a problem. Then I need to add another field, and after the import I have to delete all the records that were inserted instead of updated.
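A sketch of that clean-up workaround (the pre_import flag column is made up; the bulk import is the INSERT ... ON DUPLICATE KEY UPDATE shown earlier):
ALTER TABLE settings ADD COLUMN pre_import TINYINT NOT NULL DEFAULT 0;
UPDATE settings SET pre_import = 1;  -- flag every row that existed before the import
-- ... run the bulk INSERT ... ON DUPLICATE KEY UPDATE here ...
DELETE FROM settings WHERE pre_import = 0;  -- remove rows that were inserted instead of updated
ALTER TABLE settings DROP COLUMN pre_import;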
Is there a way to remove all repeat rows from a MySQL database?
A couple of years ago, someone requested a way to delete duplicates. Subselects make it possible with a query like this in MySQL 4.1:
DELETE FROM some_table WHERE primaryKey NOT IN
(SELECT keepKey FROM  -- the extra derived table avoids error 1093 (target table in FROM clause)
    (SELECT MIN(primaryKey) AS keepKey FROM some_table GROUP BY some_column) AS keepers)
Of course, you can use MAX(primaryKey) as well if you want to keep the newest record with the duplicate value instead of the oldest record with the duplicate value.
To understand how this works, look at the output of this query:
SELECT some_column, MIN(primaryKey) FROM some_table GROUP BY some_column
As you can see, this query returns the primary key for the first record containing each value of some_column. Logically, then, any key value NOT found in this result set must be a duplicate, and therefore it should be deleted.
These questions/answers might interest you:
How to delete duplicate records in mysql database?
How to delete Duplicates in MySQL table.
An idea that's often used when you are working with a big table is to (see the sketch after this list):
Create a new table
Insert into that table the unique records (i.e. only one version of the duplicates in the original table, generally using a select distinct)
and use that new table in your application; or drop the old table and rename the new one to the old name.
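A minimal sketch of that route, reusing the some_table / some_column / primaryKey names from the answer above (everything else is made up):
CREATE TABLE some_table_clean LIKE some_table;
-- keep only the row with the lowest primaryKey for each some_column value
INSERT INTO some_table_clean
SELECT t.*
FROM some_table t
JOIN (SELECT MIN(primaryKey) AS keepKey
      FROM some_table
      GROUP BY some_column) k ON t.primaryKey = k.keepKey;
-- inspect some_table_clean, then swap it in
RENAME TABLE some_table TO some_table_old, some_table_clean TO some_table;
DROP TABLE some_table_old;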
A good thing with this principle is that you have the possibility to verify what's in the new table before dropping the old one -- always nice to check that sort of thing ^^