I want to add 10,000 rows to a MySQL table. The table has a field, let's call it "Number", that needs to increment from 540000 to 549999.
This is just something that needs to run once, so performance is not critical. Is there a MySQL command that will do this, or do I need to write a script to call 10,000 insert statements?
Assuming you have those 10,000 rows in a tab-delimited file, you can bulk load the data into your table and set the Number value incrementally like this:
set @number = (540000 - 1);
load data infile '/tmp/your_data.txt'
ignore into table your_table
(column_1,...,column_n)
set Number = (@number := @number + 1);
I ended up creating a script with 10,000 insert statements.
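An equivalent script can also live entirely inside MySQL as a stored-procedure loop; here is a minimal sketch, assuming the table is called your_table and only the Number column needs a value:

DELIMITER $$
CREATE PROCEDURE insert_numbers()
BEGIN
    DECLARE n INT DEFAULT 540000;
    WHILE n <= 549999 DO
        INSERT INTO your_table (Number) VALUES (n);  -- one row per value
        SET n = n + 1;
    END WHILE;
END$$
DELIMITER ;

CALL insert_numbers();
DROP PROCEDURE insert_numbers;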
Related
I have a table which has 20 million records. I have recently added another column to that table.
I need to update the data in that column.
I'm using MySQL Community Edition. When I execute a direct update like this:
Update Employee SET Emp_mail = 'xyz_123@gmail.com'
the system hangs and I have to abort the execution.
But when I run the update with a filter condition, it executes fine:
Update Employee SET Emp_mail = 'xyz_123@gmail.com' where ID <= 10000;
Update Employee SET Emp_mail = 'xyz_123@gmail.com' where ID > 10000 AND ID <= 20000;
...and so on, repeated a number of times.
Now I'm looking for a looping script so I can run the update in chunks.
For example, in SQL it would be something like this, but I'm not sure of the MySQL equivalent:
BEGIN
    DECLARE i INT DEFAULT 0;
    DECLARE cnt INT DEFAULT 0;
    WHILE cnt < 20000000 DO        -- total number of rows
        SET i = cnt + 10000;       -- next chunk of 10,000 rows
        UPDATE Employee SET Emp_mail = 'xyz_123@gmail.com' WHERE ID > cnt AND ID <= i;
        SET cnt = i;
    END WHILE;
END
Note: this is rough pseudocode; syntax-wise there may be errors, please ignore them.
I'm looking for how to do this kind of looping in MySQL.
In a row-based database system such as MySQL, if you need to update each and every row, you should really explore a different approach:
ALTER TABLE original_table RENAME TO original_table_dropme;
CREATE TABLE original_table LIKE original_table_dropme;
ALTER TABLE original_table ADD emp_mail VARCHAR(128);
INSERT INTO original_table SELECT *, 'xyz_123@gmail.com'
FROM original_table_dropme;
Then, maybe keep the original table for a while - especially to transfer any constraints, primary keys and grants from the old table to the new - and finally drop the %_dropme table.
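Once the new table has been verified, the old copy can be dropped; for example, a quick sanity check followed by the drop:

-- compare row counts before removing the old copy
SELECT
    (SELECT COUNT(*) FROM original_table) AS new_rows,
    (SELECT COUNT(*) FROM original_table_dropme) AS old_rows;

DROP TABLE original_table_dropme;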
In a row-based database, updating a previously empty column to a value makes each row longer than it originally was and requires internal reorganisation. If you do that for millions of rows, the amount of work adds up very quickly.
First MySQL command line:
use usersbase;
LOAD DATA LOCAL INFILE 'D:/base/users.txt'
INTO TABLE users
FIELDS TERMINATED BY ',';
Second:
use usersbase;
set session transaction isolation level read uncommitted;
select count(1) from users;
How can I stop loading from the file if I see that the users table already has n rows and I don't need more? How can I keep the rows loaded so far and stop loading?
Try this:
Use LOAD DATA INFILE .. IGNORE ...
Add a temporary trigger to this table, like:
DELIMITER $$
CREATE TRIGGER prevent_excess_lines_insertion
BEFORE INSERT
ON users
FOR EACH ROW
BEGIN
    IF 50000 < (SELECT COUNT(*) FROM users) THEN
        SET NEW.id = 1;  -- forces a duplicate-key error that the IGNORE modifier swallows
    END IF;
END$$
DELIMITER ;
When a line is loaded, the number of rows already in the table (excluding the row being inserted) is counted and compared with the predefined limit (50000).
If the current row count is below the limit, the row is inserted.
If the predefined limit has been reached, a fixed value (1) is assigned to the primary key column. This causes a unique constraint violation, which is ignored thanks to the IGNORE modifier.
In this case the whole file will still be read, but only the needed number of rows will be inserted.
If you want to abort the process instead, remove the IGNORE modifier and replace the SET statement with a SIGNAL statement that raises a generic SQL error; the loading process will then be terminated.
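A sketch of that aborting variant, replacing the trigger above (the SQLSTATE and message text are just examples):

DELIMITER $$
CREATE TRIGGER prevent_excess_lines_insertion
BEFORE INSERT
ON users
FOR EACH ROW
BEGIN
    IF 50000 < (SELECT COUNT(*) FROM users) THEN
        SIGNAL SQLSTATE '45000'
            SET MESSAGE_TEXT = 'Row limit reached, aborting the load';
    END IF;
END$$
DELIMITER ;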
Do not forget to remove the trigger immediately after performing the import.
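For example:

DROP TRIGGER IF EXISTS prevent_excess_lines_insertion;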
Note that COUNT(*) in InnoDB can be pretty slow on large tables. Doing it before each insert might make the load take a while. – Barmar
This is true :(
You may use a user-defined variable instead of querying the number of rows in the table. The trigger would be:
DELIMITER $$
CREATE TRIGGER prevent_excess_lines_insertion
BEFORE INSERT
ON users
FOR EACH ROW
BEGIN
    IF (@needed_count := @needed_count - 1) < 0 THEN
        SET NEW.id = 1;
    END IF;
END$$
DELIMITER ;
Before the import you must set this variable to the number of rows to be loaded, for example SET @needed_count := 50000;. The variable must be set in the same connection, strictly! And its name must not clash with any other user variables in use.
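Putting it together, the whole import could look roughly like this, reusing the LOAD DATA statement from the question:

SET @needed_count := 50000;

LOAD DATA LOCAL INFILE 'D:/base/users.txt'
IGNORE INTO TABLE users
FIELDS TERMINATED BY ',';

DROP TRIGGER prevent_excess_lines_insertion;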
I'd like to create a temporary table which includes an iterator.
I would have a MySQL variable @count which contains the number of rows desired.
I want to import that number of rows into the table, with an iterator, so that I have rows 1, 2, 3, etc.
That would allow me to create the desired result set with select number from tmp, and I could include other information where available using a left join.
I could even use the number to create a date: select date(now()) + interval number day
Create a procedure (CREATE PROCEDURE) that takes an INT parameter.
In the stored routine, create and populate a temporary table (CREATE TEMPORARY TABLE, the MEMORY engine could be a good choice).
You'll also need a WHILE loop.
All you need to do is put this all together, which should be straightforward for a seasoned MySQL user like you:
WHILE counter > 0 DO
    INSERT INTO temptable SELECT counter, DATE(NOW()) + INTERVAL counter DAY;
    SET counter = counter - 1;
END WHILE;
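Assembled, a minimal sketch might look like this (the procedure name fill_numbers is an assumption; tmp and number come from the question):

DELIMITER $$
CREATE PROCEDURE fill_numbers(IN how_many INT)
BEGIN
    DECLARE counter INT DEFAULT how_many;
    DROP TEMPORARY TABLE IF EXISTS tmp;
    CREATE TEMPORARY TABLE tmp (number INT PRIMARY KEY) ENGINE=MEMORY;
    WHILE counter > 0 DO
        INSERT INTO tmp VALUES (counter);
        SET counter = counter - 1;
    END WHILE;
END$$
DELIMITER ;

CALL fill_numbers(10000);
-- the iterator can then be joined against or turned into dates:
SELECT number, DATE(NOW()) + INTERVAL number DAY FROM tmp;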
I have to read 460,000 records from one database and update those records in another database. Currently, I read all of the records in (select * from...) and then loop through them sending an update command to the second database for each record. This process is slower than I hoped and I was wondering if there is a faster way. I match up the records by the one column that is indexed (primary key) in the table.
Thanks.
I would probably optimize the fetch size for reads (e.g. setFetchSize(250)) and JDBC - Batch Processing for writes (e.g. a batch size of 250 records).
I am assuming your "other database" is on a separate server, so can't just be directly joined.
The key is to have fewer update statements. It can often be faster to insert your data into a new table like this:
create table updatevalues ( id int(11), a int(11), b int(11), c int(11) );
insert into updatevalues (id,a,b,c) values (1,1,2,3),(2,4,5,6),(3,7,8,9),...
update updatevalues u inner join targettable t using (id) set t.a=u.a,t.b=u.b,t.c=u.c;
drop table updatevalues;
(batching the inserts into as many rows per statement as fit within your configured maximum packet size, max_allowed_packet, which is usually a few megabytes).
Alternatively, find unique values and update them together:
update targettable set a=42 where id in (1,3,7);
update targettable set a=97 where id in (2,5);
...
update targettable set b=1 where id in (1,7);
...
1. USE MULTI QUERY
Aha, 'another database' means a remote database. In that case you should reduce the number of interactions with the remote DB. I suggest using multi-queries, e.g. to execute 1,000 UPDATEs at once:
$cnt = 1;
$multi_query = "";
foreach ($rows as $row)
{
    $multi_query .= "UPDATE ..;";                // append one UPDATE per row
    if ($cnt % 1000 == 0)
    {
        mysqli_multi_query($link, $multi_query); // $link is an open mysqli connection
        $multi_query = "";
    }
    ++$cnt;
}
if ($multi_query != "")
{
    mysqli_multi_query($link, $multi_query);     // send any remaining updates
}
Normally the multi-query feature is disabled (for security reasons). To use multi-queries:
PHP : http://www.php.net/manual/en/mysqli.quickstart.multiple-statement.php
C API : http://dev.mysql.com/doc/refman/5.0/en/c-api-multiple-queries.html
VB : http://www.devart.com/dotconnect/mysql/docs/MultiQuery.html (I'm not a VB user, so I'm not sure whether this covers multi-queries for VB)
2. USE PREPARED STATEMENT
(If you are already using prepared statements, skip this.)
You are running 460K identically structured queries, so using prepared statements gives you two advantages.
Reduce query compile time
Without prepared statements every query is parsed and compiled; with prepared statements this happens only once.
Reduce network cost
Assuming each UPDATE query is 100 bytes long and has 4 parameters (each 4 bytes long):
without prepared statements : 100 bytes * 460K = 46 MB
with prepared statements : 16 bytes * 460K = 7.3 MB
so it doesn't reduce the traffic dramatically.
Here is how to use prepared statements in VB.
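Server-side, the same idea can be sketched with MySQL's own PREPARE/EXECUTE, reusing the targettable, id and a names from the example above:

PREPARE upd FROM 'UPDATE targettable SET a = ? WHERE id = ?';

SET @a := 42, @id := 1;
EXECUTE upd USING @a, @id;  -- repeat EXECUTE with new values for each row

DEALLOCATE PREPARE upd;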
What I ended up doing was using a loop to concatenate my queries together. So instead of sending one query at a time, I would send a group at a time separated by semicolons:
update sometable set x=1 where y =2; update sometable set x = 5 where y = 6; etc...
This ended up improving my time by about 40%. My update went from 3 min 23 secs to 2 min 1 second.
But there was a threshold, where concatenating too many together started to slow it down again when the string got too long. So I had to tweak it until I found just the right mix. It ended up being 100 strings concatenated together that gave the best performance.
Thanks for the responses.
I have to execute loads of queries like the following:
UPDATE translations SET translation = (SELECT description FROM content WHERE id = 10) WHERE id = 1;
Now, I use LOAD DATA INFILE to do inserts and replacements, but what I want, in essence, is to update just one field for each row in that table without messing with the keys. What would the syntax be for this? Note that the queries affect existing rows only.
Thanks
Use CREATE TEMPORARY TABLE to create a temporary table.
Then use LOAD DATA INFILE to populate that temporary table.
Then execute your UPDATE translations SET translation = ... to set that one field from a SELECT of the temporary table, JOINed with the real table. Example syntax below:
UPDATE realTable, tmpTable
SET realTable.price = tmpTable.price
WHERE realTable.key = tmpTable.key
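Applied to the translations table from the question, that could look roughly like this (the temporary table name, file path and column list are assumptions):

CREATE TEMPORARY TABLE tmp_translations (id INT PRIMARY KEY, translation TEXT);

LOAD DATA INFILE '/tmp/translations.txt'
INTO TABLE tmp_translations
(id, translation);

UPDATE translations t
JOIN tmp_translations tmp ON tmp.id = t.id
SET t.translation = tmp.translation;

DROP TEMPORARY TABLE tmp_translations;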