Update all rows in a table with incrementing text value - mysql

How to update a column for every row in a table with an incrementing text value using SQL.
I have a table with a column called ej_number which is a unique identifier. The field format is EJnnnn, i.e. EJ followed by four digits. I have imported data that doesn't include a value for ej_number, but some new rows do have it set. I want to update every row that doesn't have ej_number set, starting from EJ0001. I'll resolve duplication later.
I first did it in a loop in PHP, but realised that the server would time out because of the number of rows, so I decided to do it in SQL.
My first idea was to use a loop, but my research found that row by row updates are not recommended, especially as the only way I could see to do it would use a cursor, which is also not recommended.
I was able to do it in a single statement - the code below works, but it generates a warning (using MySQL Workbench).
SET @next_number = 0;
UPDATE ej_details
SET ej_number = CASE
        WHEN ej_number IS NULL THEN
            CONCAT('EJ', LPAD((@next_number := @next_number + 1), 4, '0'))
        ELSE ej_number
    END;
The statement does what I want, but generates this warning:
692 row(s) affected, 1 warning(s): 1287 Setting user variables within expressions is deprecated and will be removed in a future release. Please set variables in separate statements instead. Rows matched: 692 Changed: 692 Warnings: 1
I would like to know how best to do this without using a deprecated feature. I looked and found plenty of row by row solutions, but couldn't see an alternative that wasn't row by row, probably because I don't know enough to ask the right question.

I think you just want:
SET @next_number = 0;
UPDATE ej_details
SET ej_number = CONCAT('EJ', LPAD((@next_number := @next_number + 1), 4, '0'))
WHERE ej_number IS NULL;
This is simpler, but won't remove the warning.
If you want to do this in a single call, then:
UPDATE ej_details CROSS JOIN
       (SELECT @next_number := 0) params
SET ej_number = CONCAT('EJ', LPAD((@next_number := @next_number + 1), 4, '0'))
WHERE ej_number IS NULL;
Unfortunately, if the column is declared as unique, then you cannot resolve duplicate values "later".
If you wanted to solve this without the deprecation warning, you'll need a primary key/unique column:
UPDATE ej_details ed JOIN
       (SELECT ed2.*,
               ROW_NUMBER() OVER (ORDER BY ej_number) AS seqnum
        FROM ej_details ed2
        WHERE ej_number IS NULL
       ) ed2
       ON ed2.? = ed.?   -- the primary key goes here
SET ed.ej_number = CONCAT('EJ', LPAD(ed2.seqnum, 4, '0'));
However, this version requires MySQL 8.0 or later for ROW_NUMBER(), so it is not backwards compatible.
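For illustration, a sketch of the complete statement assuming a hypothetical primary key column named id, ordering the new numbers by that column (ej_number is NULL in the rows being numbered, so it is meaningless to order by it):
UPDATE ej_details ed JOIN
       (SELECT id,
               ROW_NUMBER() OVER (ORDER BY id) AS seqnum
        FROM ej_details
        WHERE ej_number IS NULL
       ) ed2
       ON ed2.id = ed.id
SET ed.ej_number = CONCAT('EJ', LPAD(ed2.seqnum, 4, '0'));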
I would be surprised if they really removed variables from MySQL 9. It would break lots and lots and lots of code.

Related

MySQL fast check if hash exists

I'm trying to create a MySQL function which takes n and m as input and generates n random unique combinations of m ids from the result of a query.
The function will return one combination per call, and that combination must be distinct from all previous combinations.
During generation it must check another table: if the combination already exists, it should keep looping until it finds a unique one. It should return the combination as dash-separated ids, or return false if there is no unique combination left.
So I'm getting 100 random items like this:
SELECT
`Item`.`id`
FROM
`Item`
LEFT JOIN `ItemKeyword` ON `Item`.`id` = `ItemKeyword`.`ItemID`
WHERE
(`Item`.`user_id` = '2')
AND(`ItemKeyword`.`keywordID` = 7130)
AND(`Item`.`type` = 1)
ORDER BY RAND()
LIMIT 100
Past combinations are stored as md5 of concatenation of itemIDs by -.
So I need to concatenate result of this query by - and create md5 of it. Then to send another query into second table named Combination and check with hash column if it exists or not. And continue this loop until I get n results.
I can't figure out how to achieve this correctly and fast. Any suggestion?
Update:
Whole SQL Dump is here: https://gist.github.com/anonymous/e5eb3bf1a10f9d762cc20a8146acf866
If you are testing for uniqueness via the md5, you need to sort the list before taking the md5. This can be demonstrated with SELECT MD5('1-2'), MD5('2-1');
Get rid of LEFT; it seems useless. After that, the Optimizer can choose to start with ItemKeyword instead of Item. (Without knowing the distribution of the data, I cannot say whether this might help.)
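A sketch of the same query with a plain JOIN; the WHERE clause already requires a matching ItemKeyword row, so the result set is unchanged:
SELECT `Item`.`id`
FROM `Item`
JOIN `ItemKeyword` ON `Item`.`id` = `ItemKeyword`.`ItemID`
WHERE `Item`.`user_id` = '2'
  AND `ItemKeyword`.`keywordID` = 7130
  AND `Item`.`type` = 1
ORDER BY RAND()
LIMIT 100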
(It would be helpful if you provided SHOW CREATE TABLE for each table. In their absence, I will assume you are using InnoDB and have PRIMARY KEY(id) and PRIMARY KEY(keywordID).)
'Composite' indexes needed:
Item: INDEX(user_id, type, id)
ItemKeyword: INDEX(ItemID, keywordID)
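A sketch of the corresponding ALTER TABLE statements (the index names are illustrative):
ALTER TABLE Item        ADD INDEX idx_user_type_id (user_id, type, id);
ALTER TABLE ItemKeyword ADD INDEX idx_item_keyword (ItemID, keywordID);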
ItemKeyword smells like a many:many mapping table. Most such tables can be improved, starting with tossing the id. See 7 tips on many:many.
I am somewhat lost in your secondary processing.
My tips on RAND may or may not be helpful.
Schema Critique
A PRIMARY KEY is a UNIQUE KEY is an INDEX; eliminate redundant indexes.
INT(4) -- the (4) means nothing; INT is always 32-bits (4 bytes) with a large range. See SMALLINT UNSIGNED (2 bytes, 0..64K range).
An MD5 should be declared CHAR(32) CHARACTER SET ascii, not 255, not utf8. (latin1 is OK.)
The table Combination (id + hash) seems to be useless. Instead, simply change KEY md5 (md5) USING BTREE, to UNIQUE(md5) in the table Item.
You have started toward utf8mb4 with SET NAMES utf8mb4;, yet the tables (and their columns) are still utf8. Emoji and Chinese need utf8mb4; most other text does not.
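A sketch of the conversion mentioned above, run per table (it rewrites the table, so it can take a while on large data):
ALTER TABLE Item CONVERT TO CHARACTER SET utf8mb4;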
After addressing these issues, the original Question may be solved (as well as doing some cleanup). If not, please add some further clarification.
Minified
1. Get a sorted list of m unique ids. (I need "sorted" for the next step, and since you are looking for "combinations", it seems that "permutations" are not needed.)
SELECT GROUP_CONCAT(id ORDER BY id) AS list
FROM (
SELECT id FROM tbl
ORDER BY RAND()
LIMIT $m
) AS x;
2. Check for uniqueness. Do this by taking MD5(list) (from above) and checking in a table of 'used' md5's. Note: Unless you are asking for a lot of combinations among a small list of ids, dups are unlikely (though not impossible).
3. Deliver the list. However, it is a string of ids separated by commas. Splitting this is best done in application code, not MySQL functions.
4. What will you do with the list? This could be important because it may be convenient to fold step 4 in with step 3.
Bottom line: I would do only step 1 and part of step 2 in SQL; I would build a 'function' in the application code to do the rest.
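A minimal sketch of step 2, assuming the list from step 1 has been captured into @list and the md5s table defined further down exists:
INSERT IGNORE INTO md5s (md5) VALUES (MD5(@list));
SELECT ROW_COUNT();   -- 1 = new combination, 0 = already used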
Permutations
DROP FUNCTION IF EXISTS unique_perm;
DELIMITER //
CREATE FUNCTION unique_perm()
RETURNS VARCHAR(255) CHARACTER SET ascii
NOT DETERMINISTIC
SQL SECURITY INVOKER
BEGIN
    SET @n := 0;
    iterat: LOOP
        SELECT SUBSTRING_INDEX(
                 GROUP_CONCAT(province ORDER BY RAND() SEPARATOR '-'),
                 '-', 3) INTO @list              -- Assuming you want M=3 items
            FROM world.provinces;
        SET @md5 := MD5(@list);
        INSERT IGNORE INTO md5s (md5) VALUES (@md5);  -- To prevent dups
        IF ROW_COUNT() > 0 THEN                  -- Check for dup
            RETURN @list;                        -- Got a unique permutation
        END IF;
        SET @n := @n + 1;
        IF @n > 20 THEN
            RETURN NULL;                         -- Probably ran out of combinations
        END IF;
    END LOOP iterat;
END;
//
DELIMITER ;
Output:
mysql> SELECT unique_perm(), unique_perm(), unique_perm()\G
*************************** 1. row ***************************
unique_perm(): New Brunswick-Nova Scotia-Quebec
unique_perm(): Alberta-Northwest Territories-New Brunswick
unique_perm(): Manitoba-Quebec-Prince Edward Island
1 row in set (0.01 sec)
Notes:
I hard-coded M=3; adjust as needed. (It could be passed in as an arg.)
Change column and table names for your needs.
Without the test on @n, you could get stuck in a loop if you run out of combinations. (However, if N is even modestly large, that is 'impossible', so you could remove the test.)
If M is large enough, you will need to increase @@group_concat_max_len. You will also need to increase the length in the RETURNS clause.
CREATE TABLE md5s ( md5 CHAR(32) CHARACTER SET ascii PRIMARY KEY ) ENGINE=InnoDB is needed. And, you will need to TRUNCATE md5s between batches of calls to this function.
That is a working example.
Flaw: It gives unique permutations, not unique combinations. If that is not adequate, read on...
Combinations
DROP FUNCTION IF EXISTS unique_comb;
DELIMITER //
CREATE FUNCTION unique_comb()
RETURNS VARCHAR(255) CHARACTER SET ascii
NOT DETERMINISTIC
SQL SECURITY INVOKER
BEGIN
    SET @n := 0;
    iterat: LOOP
        SELECT GROUP_CONCAT(province ORDER BY province SEPARATOR '-') INTO @list
            FROM ( SELECT province FROM world.provinces
                   ORDER BY RAND() LIMIT 2 ) AS x;   -- Assuming you want M=2 items
        SET @md5 := MD5(@list);
        INSERT IGNORE INTO md5s (md5) VALUES (@md5);  -- To prevent dups
        IF ROW_COUNT() > 0 THEN                  -- Check for dup
            RETURN @list;                        -- Got a unique combination
        END IF;
        SET @n := @n + 1;
        IF @n > 20 THEN
            RETURN NULL;                         -- Probably ran out of combinations
        END IF;
    END LOOP iterat;
END;
//
DELIMITER ;
Output:
mysql> SELECT unique_comb(), unique_comb(), unique_comb()\G
*************************** 1. row ***************************
unique_comb(): Quebec-Yukon
unique_comb(): Ontario-Yukon
unique_comb(): New Brunswick-Nova Scotia
1 row in set (0.01 sec)
Notes:
The subquery adds some to the cost.
Note that the items in each output string are now (necessarily) ordered.

Select updated rows in mysql

Is there a simple way to select updated rows?
I'm trying to store a timestamp each time I read a row, so that I can delete data that has not been read for a long time.
First I tried executing the SELECT query first, and even found a slightly slow but simple solution like
UPDATE foo AS t, (SELECT id FROM foo WHERE statement=1)q
SET t.time=NOW() WHERE t.id=q.id
but I still want to find a normal way to do this.
I also think that updating the time first and then just selecting the updated rows should be much easier, but I didn't find anything for this either.
For a single-row UPDATE in MySQL you could:
UPDATE foo
SET time = NOW()
WHERE statement = 1
AND @var := id
@var := id is always TRUE, but it writes the value of id to the variable @var before the update. Then you can:
SELECT @var;
In PostgreSQL you could use the RETURNING clause.
Oracle also has a RETURNING clause.
SQL-Server has an OUTPUT clause.
But MySQL doesn't have anything like that.
Declare the time column as follows:
CREATE TABLE foo (
...
time TIMESTAMP ON UPDATE CURRENT_TIMESTAMP(),
...)
Then whenever a row is updated, the column will be updated automatically.
UPDATE:
I don't think there's a way to update automatically during SELECT, so you have to do it in two steps:
UPDATE foo
SET time = NOW()
WHERE <conditions>;
SELECT <columns>
FROM foo
WHERE <conditions>;
As long as the conditions don't include the time column, I think this should work. For maximum safety you'll need to use a transaction to prevent other queries from interfering.
@Erwin Brandstetter: It's not difficult to extend the strategy of using user variables with CONCAT_WS() to get back multiple IDs. Sorry, still can't add comments...
As suggested here you can extract the modified primary keys to update their timestamp column afterwards.
SET @uids := null;
UPDATE footable
SET foo = 'bar'
WHERE fooid > 5
AND ( SELECT @uids := CONCAT_WS(',', fooid, @uids) );
SELECT @uids;
from https://gist.github.com/PieterScheffers/189cad9510d304118c33135965e9cddb
So you should replace the final SELECT @uids; with an update statement, splitting the resulting @uids value (it will be a varchar containing all the modified ids separated by commas).
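For example, a minimal sketch (assuming footable also has the time column from the question) that updates the timestamps in one statement by matching against the comma-separated list with FIND_IN_SET:
UPDATE footable
SET time = NOW()
WHERE FIND_IN_SET(fooid, @uids);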

How to UPDATE just one record in DB2?

In DB2, I need to do a SELECT FROM UPDATE, to put an update + select in a single transaction.
But I need to make sure to update only one record per transaction.
I'm familiar with the LIMIT clause from MySQL's UPDATE statement, which
places a limit on the number of rows that can be updated
I looked for something similar in DB2's UPDATE reference but without success.
How can something similar be achieved in DB2?
Edit: In my scenario, I have to deliver 1000 coupon codes upon request. I just need to select (any)one that has not been given yet.
The question uses some ambiguous terminology that makes it unclear what needs to be accomplished. Fortunately, DB2 offers robust support for a variety of SQL patterns.
To limit the number of rows that are modified by an UPDATE:
UPDATE
( SELECT t.column1 FROM someschema.sometable t WHERE ... FETCH FIRST ROW ONLY
)
SET column1 = 'newvalue';
The UPDATE statement never sees the base table, just the expression that filters it, so you can control which rows are updated.
To INSERT a limited number of new rows:
INSERT INTO mktg.offeredcoupons( cust_id, coupon_id, offered_on, expires_on )
SELECT c.cust_id, 1234, CURRENT TIMESTAMP, CURRENT TIMESTAMP + 30 DAYS
FROM mktg.customers c
LEFT OUTER JOIN mktg.offered_coupons o
ON o.cust_id = c.cust_id
WHERE ....
AND o.cust_id IS NULL
FETCH FIRST 1000 ROWS ONLY;
This is how DB2 supports SELECT from an UPDATE, INSERT, or DELETE statement:
SELECT column1 FROM NEW TABLE (
UPDATE ( SELECT column1 FROM someschema.sometable
WHERE ... FETCH FIRST ROW ONLY
)
SET column1 = 'newvalue'
) AS x;
The SELECT will return data from only the modified rows.
You have two options. As noted by A Horse With No Name, you can use the primary key of the table to ensure that one row is updated at a time.
The alternative, if you're using a programming language and have control over cursors, is to use a cursor with the 'FOR UPDATE' option (though that may well be optional; IIRC, cursors are 'FOR UPDATE' by default when the underlying SELECT allows it), and then use an UPDATE statement with WHERE CURRENT OF <cursor-name>. This will update the one row currently addressed by the cursor. The details of the syntax vary with the language you're using, but the raw SQL looks like:
DECLARE cursor_name CURSOR FOR
SELECT *
FROM SomeTable
WHERE PKCol1 = ? AND PKCol2 = ?
FOR UPDATE;
UPDATE SomeTable
SET ...
WHERE CURRENT OF cursor_name;
If you can't write DECLARE in your host language, you have to do manual bashing to find the equivalent mechanism.

Prevent auto increment on MySQL duplicate insert

Using MySQL 5.1.49, I'm trying to implement a tagging system
the problem I have is with a table with two columns: id(autoincrement), tag(unique varchar) (InnoDB)
When using the query INSERT IGNORE INTO tablename SET tag="whatever", the auto-increment id value increases even if the insert was ignored.
Normally this wouldn't be a problem, but I expect a lot of possible attempts to insert duplicates for this particular table which means that my next value for id field of a new row will be jumping way too much.
For example I'll end up with a table with say 3 rows but bad id's
1 | test
8 | testtext
678 | testtextt
Also, if I don't do INSERT IGNORE and just do regular INSERT INTO and handle the error, the auto increment field still increases so the next true insert is still a wrong auto increment.
Is there a way to stop auto increment if there's an INSERT duplicate row attempt?
As I understand for MySQL 4.1, this value wouldn't increment, but last thing I want to do is end up either doing a lot of SELECT statements in advance to check if the tags exist, or worse yet, downgrade my MySQL version.
You could modify your INSERT to be something like this:
INSERT INTO tablename (tag)
SELECT $tag
FROM tablename
WHERE NOT EXISTS(
SELECT tag
FROM tablename
WHERE tag = $tag
)
LIMIT 1
Where $tag is the tag (properly quoted or as a placeholder of course) that you want to add if it isn't already there. This approach won't even trigger an INSERT (and the subsequent autoincrement wastage) if the tag is already there. You could probably come up with nicer SQL than that but the above should do the trick.
If your table is properly indexed then the extra SELECT for the existence check will be fast and the database is going to have to perform that check anyway.
This approach won't work for the first tag though. You could seed your tag table with a tag that you think will always end up being used or you could do a separate check for an empty table.
I just found this gem...
http://www.timrosenblatt.com/blog/2008/03/21/insert-where-not-exists/
INSERT INTO [table name] SELECT '[value1]', '[value2]' FROM DUAL
WHERE NOT EXISTS(
SELECT [column1] FROM [same table name]
WHERE [column1]='[value1]'
AND [column2]='[value2]' LIMIT 1
)
If affectedRows = 1 then it inserted; otherwise if affectedRows = 0 there was a duplicate.
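As a sketch, the same check can also be done on the SQL side with ROW_COUNT() immediately after the INSERT:
SELECT ROW_COUNT();   -- 1 = the row was inserted, 0 = the NOT EXISTS check found a duplicate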
The MySQL documentation for v 5.5 says:
"If you use INSERT IGNORE and the row is ignored, the AUTO_INCREMENT counter
is **not** incremented and LAST_INSERT_ID() returns 0,
which reflects that no row was inserted."
Ref: http://dev.mysql.com/doc/refman/5.5/en/information-functions.html#function_last-insert-id
Since version 5.1 InnoDB has configurable Auto-Increment Locking. See also http://dev.mysql.com/doc/refman/5.1/en/innodb-auto-increment-handling.html#innodb-auto-inc...
Workaround: use option innodb_autoinc_lock_mode=0 (traditional).
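A small sketch for checking the current mode (the variable is read-only at runtime, so it has to be set in the server configuration and takes effect after a restart):
SELECT @@innodb_autoinc_lock_mode;   -- 0 = traditional, 1 = consecutive, 2 = interleaved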
I found mu is too short's answer helpful, but limiting because it doesn't do inserts on an empty table. I found a simple modification did the trick:
INSERT INTO tablename (tag)
SELECT $tag
FROM (select 1) as a #this line is different from the other answer
WHERE NOT EXISTS(
SELECT tag
FROM tablename
WHERE tag = $tag
)
LIMIT 1
Replacing the table in the from clause with a "fake" table (select 1) as a allowed that part to return a record which allowed the insert to take place. I'm running mysql 5.5.37. Thanks mu for getting me most of the way there ....
The accepted answer was useful, but I ran into a problem while using it: if your table has no entries, the insert does not happen, because the SELECT reads from the given table. So I came up with the following, which inserts even if the table is blank. It also only requires the table name in two places and the insert variables in one place, so there is less to get wrong.
INSERT INTO database_name.table_name (a,b,c,d)
SELECT
i.*
FROM
(SELECT
$a AS a,
$b AS b,
$c AS c,
$d AS d
/*variables (properly escaped) to insert*/
) i
LEFT JOIN
database_name.table_name o ON i.a = o.a AND i.b = o.b /*condition to not insert for*/
WHERE
o.a IS NULL
LIMIT 1 /*Not needed as can only ever be one, just being sure*/
Hope you find it useful
You can always add ON DUPLICATE KEY UPDATE. Read here (not exactly what you asked, but it seems to solve your problem).
From the comments, by @ravi
Whether the increment occurs or not depends on the
innodb_autoinc_lock_mode setting. If set to a non-zero value, the
auto-inc counter will increment even if the ON DUPLICATE KEY fires
I had the same problem but didn't want to use innodb_autoinc_lock_mode = 0 since it felt like I was killing a fly with a howitzer.
To resolve this problem I ended up using a temporary table.
create temporary table mytable_temp like mytable;
Then I inserted the values with:
insert into mytable_temp values (null,'valA'),(null,'valB'),(null,'valC');
After that you simply do another insert but use "not in" to ignore duplicates.
insert into mytable (myRow) select mytable_temp.myRow from mytable_temp
where mytable_temp.myRow not in (select myRow from mytable);
I haven't tested this for performance, but it does the job and is easy to read. Granted this was only important because I was working with data that was constantly being updated so I couldn't ignore the gaps.
I modified the answer from mu is too short (simply removed one line). As I am a newbie and cannot comment below his answer, I'll just post it here.
The query below works even for the first tag:
INSERT INTO tablename (tag)
SELECT $tag
WHERE NOT EXISTS(
SELECT tag
FROM tablename
WHERE tag = $tag
)
I just put an extra statement after the insert/update query:
ALTER TABLE table_name AUTO_INCREMENT = 1
It then automatically picks up the highest primary key id plus 1.

Multiple Updates in MySQL

I know that you can insert multiple rows at once, is there a way to update multiple rows at once (as in, in one query) in MySQL?
Edit:
For example I have the following
Name    id    Col1    Col2
Row1    1     6       1
Row2    2     2       3
Row3    3     9       5
Row4    4     16      8
I want to combine all the following Updates into one query
UPDATE table SET Col1 = 1 WHERE id = 1;
UPDATE table SET Col1 = 2 WHERE id = 2;
UPDATE table SET Col2 = 3 WHERE id = 3;
UPDATE table SET Col1 = 10 WHERE id = 4;
UPDATE table SET Col2 = 12 WHERE id = 4;
Yes, that's possible - you can use INSERT ... ON DUPLICATE KEY UPDATE.
Using your example:
INSERT INTO table (id,Col1,Col2) VALUES (1,1,1),(2,2,3),(3,9,3),(4,10,12)
ON DUPLICATE KEY UPDATE Col1=VALUES(Col1),Col2=VALUES(Col2);
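As a hedged aside: on MySQL 8.0.19 and later, a row alias can replace the VALUES() function here (which newer 8.0 releases deprecate); the same statement could be written as:
INSERT INTO `table` (id, Col1, Col2) VALUES (1,1,1),(2,2,3),(3,9,3),(4,10,12) AS new
ON DUPLICATE KEY UPDATE Col1 = new.Col1, Col2 = new.Col2;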
Since you have dynamic values, you need to use an IF or CASE for the columns to be updated. It gets kinda ugly, but it should work.
Using your example, you could do it like:
UPDATE table SET Col1 = CASE id
WHEN 1 THEN 1
WHEN 2 THEN 2
WHEN 4 THEN 10
ELSE Col1
END,
Col2 = CASE id
WHEN 3 THEN 3
WHEN 4 THEN 12
ELSE Col2
END
WHERE id IN (1, 2, 3, 4);
The question is old, yet I'd like to extend the topic with another answer.
My point is, the easiest way to achieve it is just to wrap multiple queries with a transaction. The accepted answer INSERT ... ON DUPLICATE KEY UPDATE is a nice hack, but one should be aware of its drawbacks and limitations:
As being said, if you happen to launch the query with rows whose primary keys don't exist in the table, the query inserts new "half-baked" records. Probably it's not what you want
If you have a table with a not null field without default value and don't want to touch this field in the query, you'll get "Field 'fieldname' doesn't have a default value" MySQL warning even if you don't insert a single row at all. It will get you into trouble, if you decide to be strict and turn mysql warnings into runtime exceptions in your app.
I made some performance tests for three of the suggested variants, including the INSERT ... ON DUPLICATE KEY UPDATE variant, a variant with a "case / when / then" clause, and a naive approach with a transaction. You may get the python code and results here. The overall conclusion is that the variant with the case statement turns out to be twice as fast as the other two variants, but it's quite hard to write correct and injection-safe code for it, so I personally stick to the simplest approach: using transactions.
Edit: Findings of Dakusan prove that my performance estimations are not quite valid. Please see this answer for another, more elaborate research.
Not sure why another useful option is not yet mentioned:
UPDATE my_table m
JOIN (
SELECT 1 as id, 10 as _col1, 20 as _col2
UNION ALL
SELECT 2, 5, 10
UNION ALL
SELECT 3, 15, 30
) vals ON m.id = vals.id
SET col1 = _col1, col2 = _col2;
All of the following applies to InnoDB.
I feel knowing the speeds of the 3 different methods is important.
There are 3 methods:
INSERT: INSERT with ON DUPLICATE KEY UPDATE
TRANSACTION: Where you do an update for each record within a transaction
CASE: in which you use a case/when for each different record within an UPDATE
I just tested this, and the INSERT method was 6.7x faster for me than the TRANSACTION method. I tried on a set of both 3,000 and 30,000 rows.
The TRANSACTION method still has to run each individually query, which takes time, though it batches the results in memory, or something, while executing. The TRANSACTION method is also pretty expensive in both replication and query logs.
Even worse, the CASE method was 41.1x slower than the INSERT method w/ 30,000 records (6.1x slower than TRANSACTION). And 75x slower in MyISAM. INSERT and CASE methods broke even at ~1,000 records. Even at 100 records, the CASE method is BARELY faster.
So in general, I feel the INSERT method is both best and easiest to use. The queries are smaller and easier to read and only take up 1 query of action. This applies to both InnoDB and MyISAM.
Bonus stuff:
The solution for the INSERT non-default-field problem is to temporarily turn off the relevant SQL modes: SET SESSION sql_mode=REPLACE(REPLACE(@@SESSION.sql_mode,"STRICT_TRANS_TABLES",""),"STRICT_ALL_TABLES",""). Make sure to save the sql_mode first if you plan on reverting it.
As for other comments I've seen that say the auto_increment goes up using the INSERT method, this does seem to be the case in InnoDB, but not MyISAM.
Code to run the tests is as follows. It also outputs .SQL files to remove php interpreter overhead
<?php
//Variables
$NumRows=30000;
//These 2 functions need to be filled in
function InitSQL()
{
}
function RunSQLQuery($Q)
{
}
//Run the 3 tests
InitSQL();
for($i=0;$i<3;$i++)
RunTest($i, $NumRows);
function RunTest($TestNum, $NumRows)
{
$TheQueries=Array();
$DoQuery=function($Query) use (&$TheQueries)
{
RunSQLQuery($Query);
$TheQueries[]=$Query;
};
$TableName='Test';
$DoQuery('DROP TABLE IF EXISTS '.$TableName);
$DoQuery('CREATE TABLE '.$TableName.' (i1 int NOT NULL AUTO_INCREMENT, i2 int NOT NULL, primary key (i1)) ENGINE=InnoDB');
$DoQuery('INSERT INTO '.$TableName.' (i2) VALUES ('.implode('), (', range(2, $NumRows+1)).')');
if($TestNum==0)
{
$TestName='Transaction';
$Start=microtime(true);
$DoQuery('START TRANSACTION');
for($i=1;$i<=$NumRows;$i++)
$DoQuery('UPDATE '.$TableName.' SET i2='.(($i+5)*1000).' WHERE i1='.$i);
$DoQuery('COMMIT');
}
if($TestNum==1)
{
$TestName='Insert';
$Query=Array();
for($i=1;$i<=$NumRows;$i++)
$Query[]=sprintf("(%d,%d)", $i, (($i+5)*1000));
$Start=microtime(true);
$DoQuery('INSERT INTO '.$TableName.' VALUES '.implode(', ', $Query).' ON DUPLICATE KEY UPDATE i2=VALUES(i2)');
}
if($TestNum==2)
{
$TestName='Case';
$Query=Array();
for($i=1;$i<=$NumRows;$i++)
$Query[]=sprintf('WHEN %d THEN %d', $i, (($i+5)*1000));
$Start=microtime(true);
$DoQuery("UPDATE $TableName SET i2=CASE i1\n".implode("\n", $Query)."\nEND\nWHERE i1 IN (".implode(',', range(1, $NumRows)).')');
}
print "$TestName: ".(microtime(true)-$Start)."<br>\n";
file_put_contents("./$TestName.sql", implode(";\n", $TheQueries).';');
}
UPDATE table1, table2 SET table1.col1='value', table2.col1='value' WHERE table1.col3='567' AND table2.col6='567'
This should work for ya.
There is a reference in the MySQL manual for multiple tables.
Use a temporary table
// Reorder items
function update_items_tempdb(&$items)
{
shuffle($items);
$table_name = uniqid('tmp_test_');
$sql = "CREATE TEMPORARY TABLE `$table_name` ("
." `id` int(10) unsigned NOT NULL AUTO_INCREMENT"
.", `position` int(10) unsigned NOT NULL"
.", PRIMARY KEY (`id`)"
.") ENGINE = MEMORY";
query($sql);
$i = 0;
$sql = '';
foreach ($items as &$item)
{
$item->position = $i++;
$sql .= ($sql ? ', ' : '')."({$item->id}, {$item->position})";
}
if ($sql)
{
query("INSERT INTO `$table_name` (id, position) VALUES $sql");
$sql = "UPDATE `test`, `$table_name` SET `test`.position = `$table_name`.position"
." WHERE `$table_name`.id = `test`.id";
query($sql);
}
query("DROP TABLE `$table_name`");
}
Why does no one mention multiple statements in one query?
In php, you use multi_query method of mysqli instance.
From the php manual
MySQL optionally allows having multiple statements in one statement string. Sending multiple statements at once reduces client-server round trips but requires special handling.
Here is the result compared to the other 3 methods when updating 30,000 rows. The code can be found here; it is based on the answer from @Dakusan.
Transaction: 5.5194580554962
Insert: 0.20669293403625
Case: 16.474853992462
Multi: 0.0412278175354
As you can see, multiple statements query is more efficient than the highest answer.
If you get error message like this:
PHP Warning: Error while sending SET_OPTION packet
You may need to increase the max_allowed_packet in mysql config file which in my machine is /etc/mysql/my.cnf and then restart mysqld.
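A hedged sketch of checking and raising it from SQL instead of the config file (this needs sufficient privileges, and existing clients must reconnect to pick up the new value):
SELECT @@max_allowed_packet;                       -- current value in bytes
SET GLOBAL max_allowed_packet = 64 * 1024 * 1024;  -- e.g. 64 MB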
There is a setting you can alter called 'multi statement' that disables MySQL's 'safety mechanism' implemented to prevent (more than one) injection command. Typical to MySQL's 'brilliant' implementation, it also prevents user from doing efficient queries.
Here (http://dev.mysql.com/doc/refman/5.1/en/mysql-set-server-option.html) is some info on the C implementation of the setting.
If you're using PHP, you can use mysqli to do multi statements (I think php has shipped with mysqli for a while now)
$con = new mysqli('localhost','user1','password','my_database');
$query = "Update MyTable SET col1='some value' WHERE id=1 LIMIT 1;";
$query .= "UPDATE MyTable SET col1='other value' WHERE id=2 LIMIT 1;";
//etc
$con->multi_query($query);
$con->close();
Hope that helps.
You can alias the same table to give you the ids you want to update by (if you are doing a row-by-row update):
UPDATE table1 tab1, table1 tab2 -- alias references the same table
SET
col1 = 1
,col2 = 2
. . .
WHERE
tab1.id = tab2.id;
Additionally, it should seem obvious that you can also update from other tables as well. In this case, the update doubles as a "SELECT" statement, giving you the data from the table you are specifying. You are explicitly stating the update values in your query, so the second table is unaffected.
You may also be interested in using joins on updates, which is possible as well.
UPDATE someTable s INNER JOIN anotherTable a ON s.id = a.id SET s.someValue = 4 WHERE a.id = 4
-- Only updates someValue in someTable rows that have a matching row in anotherTable with an id of 4.
Edit: If the values you are updating aren't coming from somewhere else in the database, you'll need to issue multiple update queries.
No-one has yet mentioned what for me would be a much easier way to do this - use a SQL editor that allows you to execute multiple individual queries. Sequel Ace supports this, and I'd assume that Sequel Pro and probably other editors have similar functionality. (This of course assumes you only need to run this as a one-off thing rather than as an integrated part of your app/site).
And now the easy way
update my_table m, -- let's create a temp table with populated values
(select 1 as id, 20 as value union -- this part will be generated
select 2 as id, 30 as value union -- using a backend code
-- for loop
select N as id, X as value
) t
set m.value = t.value where t.id=m.id -- now update by join - quick
Yes ..it is possible using INSERT ON DUPLICATE KEY UPDATE sql statement..
syntax:
INSERT INTO table_name (a,b,c) VALUES (1,2,3),(4,5,6)
ON DUPLICATE KEY UPDATE a=VALUES(a),b=VALUES(b),c=VALUES(c)
use
REPLACE INTO `table` (`id`,`col1`,`col2`) VALUES
(1,6,1),(2,2,3),(3,9,5),(4,16,8);
Please note:
id has to be a primary unique key
if you use foreign keys to
reference the table, REPLACE deletes then inserts, so this might
cause an error
I took the answer from @newtover and extended it using the new json_table function in MySQL 8. This allows you to create a stored procedure to handle the workload rather than building your own SQL text in code:
drop table if exists `test`;
create table `test` (
`Id` int,
`Number` int,
PRIMARY KEY (`Id`)
);
insert into test (Id, Number) values (1, 1), (2, 2);
DROP procedure IF EXISTS `Test`;
DELIMITER $$
CREATE PROCEDURE `Test`(
p_json json
)
BEGIN
update test s
join json_table(p_json, '$[*]' columns(`id` int path '$.id', `number` int path '$.number')) v
on s.Id=v.id set s.Number=v.number;
END$$
DELIMITER ;
call `Test`('[{"id": 1, "number": 10}, {"id": 2, "number": 20}]');
select * from test;
drop table if exists `test`;
It's a few ms slower than pure SQL but I'm happy to take the hit rather than generate the sql text in code. Not sure how performant it is with huge recordsets (the JSON object has a max size of 1Gb) but I use it all the time when updating 10k rows at a time.
The following will update all rows in one table
Update Table Set
Column1 = 'New Value'
The next one will update all rows where the value of Column2 is more than 5
Update Table Set
Column1 = 'New Value'
Where
Column2 > 5
There is all Unkwntech's example of updating more than one table
UPDATE table1, table2 SET
table1.col1 = 'value',
table2.col1 = 'value'
WHERE
table1.col3 = '567'
AND table2.col6='567'
UPDATE tableName SET col1='000' WHERE id='3' OR id='5'
This should achieve what you're looking for. Just add more ids. I have tested it.
UPDATE `your_table` SET
`something` = IF(`id`="1","new_value1",`something`), `smth2` = IF(`id`="1", "nv1",`smth2`),
`something` = IF(`id`="2","new_value2",`something`), `smth2` = IF(`id`="2", "nv2",`smth2`),
`something` = IF(`id`="4","new_value3",`something`), `smth2` = IF(`id`="4", "nv3",`smth2`),
`something` = IF(`id`="6","new_value4",`something`), `smth2` = IF(`id`="6", "nv4",`smth2`),
`something` = IF(`id`="3","new_value5",`something`), `smth2` = IF(`id`="3", "nv5",`smth2`),
`something` = IF(`id`="5","new_value6",`something`), `smth2` = IF(`id`="5", "nv6",`smth2`)
// You just build it in PHP like this:
$q = 'UPDATE `your_table` SET ';
foreach($data as $dat){
$q .= '
`something` = IF(`id`="'.$dat->id.'","'.$dat->value.'",`something`),
`smth2` = IF(`id`="'.$dat->id.'", "'.$dat->value2.'",`smth2`),';
}
$q = substr($q,0,-1);
So you can update the whole table with one query.