Inserting multiple rows in MySQL

Is the database query faster if I insert multiple rows at once, like this:
INSERT....
UNION
INSERT....
UNION
(I need to insert around 2,000-3,000 rows.)

INSERT statements that use VALUES syntax can insert multiple rows. To do this, include multiple lists of column values, each enclosed within parentheses and separated by commas.
Example:
INSERT INTO tbl_name
(a,b,c)
VALUES
(1,2,3),
(4,5,6),
(7,8,9);
Source: MySQL Reference Manual, INSERT statement.

If you have your data in a text file, you can use LOAD DATA INFILE.
When loading a table from a text file, use LOAD DATA INFILE. This is usually 20 times faster than using INSERT statements.
Optimizing INSERT Statements
You can find more tips on how to speed up your INSERT statements at the link above.
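For illustration, here is a minimal sketch of loading a CSV file this way from PHP with PDO (my own example, not from the answer above; the file path, table, and column names are assumptions, and LOCAL INFILE has to be enabled on both the client and the server):
// enable LOAD DATA LOCAL INFILE for this client connection
$pdo = new PDO('mysql:host=127.0.0.1;dbname=test', 'user', 'password', array(
    PDO::MYSQL_ATTR_LOCAL_INFILE => true,
));
// each line of /tmp/rows.csv becomes one row of tbl_name (a, b, c)
$sql = <<<'SQL'
LOAD DATA LOCAL INFILE '/tmp/rows.csv'
INTO TABLE tbl_name
FIELDS TERMINATED BY ',' ENCLOSED BY '"'
LINES TERMINATED BY '\n'
(a, b, c)
SQL;
$pdo->exec($sql);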

Just use a SELECT statement to fetch the values of the chosen columns for many rows at once and put these values into the columns of another table in one go. As an example, the columns "size" and "price" of the two tables "test_b" and "test_c" get filled with the columns "size" and "price" of table "test_a".
BEGIN;
INSERT INTO test_b (size, price)
SELECT size, price
FROM test_a;
INSERT INTO test_c (size, price)
SELECT size, price
FROM test_a;
COMMIT;
The statements are wrapped in BEGIN and COMMIT so that they only take effect when both have succeeded; otherwise everything up to that point is rolled back.

Here is a PHP solution ready for use with an n:m (many-to-many) relationship table:
// get data
$table_1 = get_table_1_rows();
$table_2_fk_id = 123;
$query_values = array();

// prepare the first part of the query (before the values)
$query = "INSERT INTO `table` (
    `table_1_fk_id`,
    `table_2_fk_id`,
    `insert_date`
) VALUES ";

// loop over table 1 to get all foreign keys and collect them in an array
foreach ($table_1 as $row) {
    $query_values[] = "(" . $row["table_1_pk_id"] . ", $table_2_fk_id, NOW())";
}

// implode the query values array with a comma and execute the query
$db->query($query . implode(',', $query_values));
EDIT: After @john's comment I decided to enhance this answer with a more efficient solution that:
divides the query into multiple smaller queries
uses rtrim() to delete the last comma instead of implode()
// limit of query size (rows inserted per query)
$limit = 100;
$i = 0;
$query_values = "";

$table_1 = get_table_1_rows();
$table_2_fk_id = 123;

$query = "INSERT INTO `table` (
    `table_1_fk_id`,
    `table_2_fk_id`,
    `insert_date`
) VALUES ";

foreach ($table_1 as $key => $row) {
    $query_values .= "(" . $row["table_1_pk_id"] . ", $table_2_fk_id, NOW()),";
    $i++;
    // entire table parsed or row limit reached:
    // -> execute and purge query_values
    if ($key === array_key_last($table_1) || $i % $limit === 0) {
        $db->query($query . rtrim($query_values, ','));
        $query_values = "";
    }
}

// Insert into the product_cate table (column names separated with commas)
$sql = "INSERT INTO product_cate (site_title, sub_title)
        VALUES ('$site_title', '$sub_title')";

// Insert into the menu table
$sql = "INSERT INTO menu (menu_title, sub_menu)
        VALUES ('$menu_title', '$sub_menu')";

// Insert into the blog_post table
$sql = "INSERT INTO blog_post (post_title, post_des, post_img)
        VALUES ('$post_title', '$post_des', '$post_img')";

Related

Check if the data was inserted correctly

I'm trying to get the inserted data from the last multi insert query so I can verify the written data.
I use PDO to execute the query.
$sql="insert into tableName (col1,col2) values (val1,val2),(val4,val5),(val7,val8) ON DUPLICATE KEY UPDATE Col1=VALUES(Col1),Col2=VALUES(Col2)";
$stmt = $dbh->prepare($sql);
$stmt->execute();
and then I run the lastInsertId() to get the last id of my autoIncrement column.
$lastId=$dbh->lastInsertId();
and the statement's rowCount() to get the number of inserted rows (I don't care about the duplicates)
$numberOfNewRows = $stmt->rowCount();
Now I want to get the data from the previous insert query:
$limitRangeStart=$lastId-$numberOfNewRows;
$sql="select * from previusTable limit $limitRangeStart , $lastId ";
$stmt = $dbh->prepare($sql);
$stmt->execute();
So my question is: will the last query ALWAYS return the data that the previous query inserted, given that the multi-insert method was used?
Is there any possibility that another insert query running at the same time will "break up" the multi-insert rows from the previous query?

How to get the count of inserted rows in an "INSERT IGNORE" statement

I am doing a bulk insert of 0.5 million tokens into the database with an INSERT IGNORE statement. Among the 0.5 million tokens there can be duplicates,
so if I insert the 0.5 million tokens with INSERT IGNORE there is no guarantee that all of the tokens end up in the database, because of the duplicate tokens.
After doing the insertion I want to know how many tokens were actually inserted. Some people suggest using affected_rows to get the count of inserted (affected) rows, but affected_rows doesn't give the result of the current SQL statement; it gives the result of the last SQL statement.
Please tell me the best way to get the count of inserted rows with an INSERT IGNORE statement.
Put select row_count(); just after the insert statement to get the number of rows inserted.
eg:
insert ignore into tbl(col1) values (1),(2); select row_count();
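From PHP, the same idea could look like this minimal sketch (my own example, not from the answer above), assuming a PDO connection in $pdo and a table tbl with a unique column col1:
// INSERT IGNORE skips rows that would violate the unique key;
// ROW_COUNT() reports how many rows the previous statement actually inserted
$pdo->exec("INSERT IGNORE INTO tbl (col1) VALUES (1), (2)");
$inserted = $pdo->query("SELECT ROW_COUNT()")->fetchColumn();
echo $inserted;
Note that PDO::exec() already returns the affected-row count for the statement it runs, so the extra SELECT mainly helps when that return value is not directly available.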
Doing a single SQL INSERT IGNORE would work with affected_rows. Not sure, though, how that would turn out performance-wise, since it's 0.5 million rows to enter.
Anyhow, here's a solution I tried that works with 4 values in a single INSERT.
<?php
$mysqli = new mysqli('127.0.0.1', 'root', '', 'test');
if (mysqli_connect_errno()) {
    printf("Connect failed: %s\n", mysqli_connect_error());
    exit();
}
$sql = "INSERT IGNORE INTO test1 (Name, Attribute, Val) VALUES ('ai', 'blue', '1j'),('ai1', 'white', '2j'),('ai2', 'black', '3j'),('ai1', 'green', '4j')";
$insert = $mysqli->query($sql);
// affected_rows counts only the rows actually inserted (skipped duplicates are not counted)
printf("%d\n", $mysqli->affected_rows);
?>

MySQL Insert 20K rows in single insert

In my table I insert around 20,000 rows on each load. Right now I am doing it one by one. From the MySQL website I learned that inserting multiple rows with a single INSERT query is faster.
Can I insert all 20,000 in a single query?
What will happen if there are errors within these 20,000 rows? How will MySQL handle that?
If you are inserting the rows from some other table then you can use the INSERT ... SELECT pattern to insert the rows.
However if you are inserting the values using INSERT ... VALUES pattern then you have the limit of max_allowed_packet.
Also, from the docs:
To optimize insert speed, combine many small operations into a single
large operation. Ideally, you make a single connection, send the data
for many new rows at once, and delay all index updates and consistency
checking until the very end.
Example:
INSERT INTO `table1` (`column1`, `column2`) VALUES ("d1", "d2"),
("d1", "d2"),
("d1", "d2"),
("d1", "d2"),
("d1", "d2");
What will happen if there are errors within this 20000 rows?
If there are errors while inserting the records then the operation will be aborted.
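As an illustration of the batching advice above, here is a minimal sketch (my own example, not part of the answer) that splits a large PHP array into multi-row prepared INSERTs with PDO so that each statement stays well under max_allowed_packet; the $pdo connection, $rows array, table, and column names are assumptions:
$batchSize = 1000; // tune so each statement stays under max_allowed_packet

foreach (array_chunk($rows, $batchSize) as $batch) {
    // one "(?, ?)" placeholder group per row in this batch
    $placeholders = implode(',', array_fill(0, count($batch), '(?, ?)'));
    $stmt = $pdo->prepare("INSERT INTO `table1` (`column1`, `column2`) VALUES $placeholders");

    $params = array();
    foreach ($batch as $row) {
        $params[] = $row['column1'];
        $params[] = $row['column2'];
    }
    $stmt->execute($params);
}
Wrapping the loop in a transaction (beginTransaction()/commit()) makes the whole load all-or-nothing and avoids a commit per batch.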
http://dev.mysql.com/doc/refman/5.5/en/insert.html
INSERT statements that use VALUES syntax can insert multiple rows. To
do this, include multiple lists of column values, each enclosed within
parentheses and separated by commas.
Example:
INSERT INTO tbl_name (a,b,c) VALUES(1,2,3),(4,5,6),(7,8,9);
You can use code to generate the insert VALUES section based on your data source.
Errors: if there are errors in the INSERT statement (including in any of the rows) the operation will be aborted.
Generating the query - this will be based on your data source, for example, if you are getting data from an associative array in PHP, you'll do something like this:
$sql = "INSERT INTO tbl_name (a, b, c) VALUES ";
foreach($dataset as $row)
{
$sql .= "(" + $row['a'] + ", " + $row['a'] + ", " + $row['a'] + ")";
// OR
$sql .= "($row[a], $row[b], $row[c])";
}
Some more resources:
Optimize MySQL Queries – Fast Inserts With Multiple Rows
The fastest way to insert 100K records
Batch insert with SQL: INSERT INTO table (col1, ..., coln) VALUES (val1, ..., valn), (val1, ..., valn), ... Note that the SQL statement length is limited by max_allowed_packet (1 MB by default); you can raise that parameter to support bigger single inserts.
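For example, a minimal sketch of checking and raising that limit from PHP (my own illustration; it assumes a PDO connection in $pdo and sufficient privileges, and only new connections pick up the new global value):
// check the current limit
$current = $pdo->query("SELECT @@max_allowed_packet")->fetchColumn();
// raise it to 64 MB for new connections; set it in my.cnf to make the change permanent
$pdo->exec("SET GLOBAL max_allowed_packet = 67108864");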

PDO - lastInsertId() for insert query with multiple rows

I can insert 2 pets into a table, and get their lastInsertId() for further processing one at a time (2 queries). I am wondering if there is a way to get two lastInsertIds() and assign them to variables if I am inserting 2 rows in 1 query:
$query = "INSERT INTO pets (pet_name) VALUES (':coco'),(':jojo')";
$pet_insert = $dbh->prepare($query);
$pet_insert->execute(array(':coco' => $coco,':jojo' => $jojo));
$New_PetID = $dbh->lastInsertId();
Is it possible to get the lastInsertId() for coco and for jojo? So something like:
$New_PetID1 = $dbh->lastInsertId();//coco
$New_PetID2 = $dbh->lastInsertId();//jojo
This will give the same ID, any way to get the 2 IDs? Just for reference, this is in a try block.
It's not possible. If you need the generated ids for both rows, you need to perform two separate INSERTs.
Important If you insert multiple rows using a single INSERT statement,
LAST_INSERT_ID() returns the value generated for the first inserted
row only. The reason for this is to make it possible to reproduce
easily the same INSERT statement against some other server.
http://dev.mysql.com/doc/refman/5.5/en/information-functions.html#function_last-insert-id
You can add 1 to the last insert id to get the real id of the last record; and if you insert more than 2 records, just add rowCount() - 1 to the last insert id.
With innodb_autoinc_lock_mode set to 0 (“traditional”) or 1 (“consecutive”), the auto-increment values generated by any given statement will be consecutive, without gaps, because the table-level AUTO-INC lock is held until the end of the statement, and only one such statement can execute at a time.
and
The ID that was generated is maintained in the server on a per-connection basis. This means that the value returned by the function to a given client is the first AUTO_INCREMENT value generated for most recent statement affecting an AUTO_INCREMENT column by that client. This value cannot be affected by other clients, even if they generate AUTO_INCREMENT values of their own. This behavior ensures that each client can retrieve its own ID without concern for the activity of other clients, and without the need for locks or transactions.
for more info read this document http://dev.mysql.com/doc/refman/5.1/en/innodb-auto-increment-handling.html
Please, please avoid the type of solutions given by Bruce. The auto-increment step can be set to something other than 1, for instance in the case of master-to-master replication. A little while back, the host where one of our applications runs changed their DB setup. I happened to notice, purely by coincidence, that all of a sudden the ids incremented by two instead of one. Had we had any code like this, it could have caused us serious problems.
The documentation states: Returns the ID of the last inserted row or sequence value
You will need to perform two queries to get the id for each inserted row.
I assumed that it is not possible, until I tried it on my own and figured out that IT IS POSSIBLE.
After each execute(), grab lastInsertId() and assign it to a key.
For my example:
// First query
$sql = "INSERT INTO users
(username, email)
values
(:username, :email)";
$sth = $this->db->prepare($sql);
$sth->bindParam(':username', $data['username'], PDO::PARAM_STR);
$sth->bindParam(':email', $data['email'], PDO::PARAM_STR);
// Second Query
$sql2 = "INSERT INTO users
(username, email)
values
(:username, :email)";
$sth2 = $this->db->prepare($sql2);
$sth2->bindParam(':username', $data['username'], PDO::PARAM_STR);
$sth2->bindParam(':email', $data['email'], PDO::PARAM_STR);
// Start trans
$this->db->beginTransaction();
// Execute the two queries
$sth->execute();
$last['ux1'] = $this->db->lastInsertId(); // <---- FIRST KEY
$sth2->execute();
$last['ux2'] = $this->db->lastInsertId(); // <---- SECOND KEY
// Commit
$this->db->commit();
And I was able to retrieve the two last inserted ids
Array (
[create] => Array
(
[ux1] => 117
[ux2] => 118
)
)
I hope this will help others who are seeking the right answer.
Well, since inserting 1000 rows takes a bit longer than inserting 1, the issue in the topic is still interesting. A possible workaround would be to make 2 queries: the first one inserts the 1000 rows, and the second one selects them back, provided there is something unique in those inserted rows.
For example with pets:
INSERT INTO pets ( name ) VALUES ( 'coco' ), ( 'jojo' );
SELECT id FROM pets WHERE name IN ( 'coco', 'jojo' );
This could give benefit only for big data sets.
I use an array in the first statement, and an if condition around the second: if the insert of a record succeeds, I get its last id and then run the second statement; this repeats for every element of the array.
for ($count = 0; $count < count($_POST["laptop_ram_capacity"]); $count++) {
    $stmt = $con->prepare('INSERT INTO `ram`
        (ram_capacity_id, ram_type_id, ram_bus_id, ram_brand_id, ram_module_id)
        VALUES (?,?,?,?,?)');
    // execute the query
    $stmt->execute(array(
        $_POST['laptop_ram_capacity'][$count],
        $_POST["laptop_ram_type"][$count],
        $_POST["laptop_ram_bus"][$count],
        $_POST["laptop_ram_brand"][$count],
        $_POST["laptop_ram_module"][$count]
    ));
    // number of rows inserted by the statement above
    $rows = $stmt->rowCount();
    // the second insert is wrapped in an if condition so it only runs when the
    // insert above succeeded, reusing that insert's last id
    if ($rows > 0) {
        $LASTRAM_ID = $con->lastInsertId();
        $stmt = $con->prepare('INSERT INTO `ram_devicedesc_rel`
            (ram_id, Parent_device_description_id)
            VALUES (?,?)');
        // execute the query
        $stmt->execute(array(
            $LASTRAM_ID,
            $LASTdevicedesc_ID // assumed to be set earlier in the script
        ));
    } else {
        echo "Error: " . implode(" ", $con->errorInfo()) . "<br>";
    }
}
You can use multi_query().
Add each insert to its own variable, like:
$sql1 = "first insert";
$sql2 = "second insert";
$sql3 = "third insert"; and so on...
Take the count of the number of inserts you are going to make.
Now execute all the $sql queries using multi_query() and get the last insert id.
All you have to do then is $rowno = $lastinsertid - $countofinserts,
so basically you get a number.
Add 1 to it, and the ids from that resulting number up to the last insert id are the insert ids of the insert queries you ran.
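A minimal sketch of that approach (my own illustration, with assumed table and column names, using mysqli), which also assumes the generated ids are consecutive, i.e. an auto-increment step of 1:
$sql1 = "INSERT INTO pets (pet_name) VALUES ('coco')";
$sql2 = "INSERT INTO pets (pet_name) VALUES ('jojo')";
$sql3 = "INSERT INTO pets (pet_name) VALUES ('kiki')";
$countofinserts = 3;

if ($mysqli->multi_query("$sql1;$sql2;$sql3")) {
    do {
        // insert_id reflects the statement whose result was just processed
        $lastinsertid = $mysqli->insert_id;
    } while ($mysqli->more_results() && $mysqli->next_result());
}

// first inserted id = ($lastinsertid - $countofinserts) + 1, as described above
$firstid = $lastinsertid - ($countofinserts - 1);
$ids = range($firstid, $lastinsertid);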
It is possible! All you need to do is call the PDO lastInsertId() method. This will return the first inserted id when doing a multiple or bulk INSERT. Next we perform a simple arithmetic operation using the number of rows affected by the last INSERT statement.
$firstInsertedId = $this->lastInsertId();
$InsertedIds = range($firstInsertedId, ($firstInsertedId + ($stmt->rowCount() - 1)) );
print_r($InsertedIds);

Copying rows in MySQL

I want to copy all of the columns of a row, but not have to specify every column. I am aware of the syntax at http://dev.mysql.com/doc/refman/5.1/en/insert-select.html but I see no way to ignore a column.
For my example, I am trying to copy all the columns of a row to a new row, except for the primary key.
Is there a way to do that without having to write the query with every field in it?
If your id or primary key column is an auto_increment you can use a temp table:
CREATE TEMPORARY TABLE temp_table
AS
SELECT * FROM source_table WHERE id='7';
UPDATE temp_table SET id='100' WHERE id='7';
INSERT INTO source_table SELECT * FROM temp_table;
DROP TEMPORARY TABLE temp_table;
In this way you can copy all the data in the row with id='7' and then assign the
new value '100' (or whatever value falls above your current auto_increment value in source_table).
Edit: Mind the ; after the statements :)
You'll need to list out the columns that you want to select if you aren't selecting them all. Copy/Paste is your friend.
This is a PHP script that I wrote to do this; it assumes that your first column is your auto-increment.
$sql = "SELECT * FROM table_name LIMIT 1";
$res = mysql_query($sql) or die(mysql_error());
for ($i = 1; $i < mysql_num_fields($res); $i++) {
$col_names .= mysql_field_name($res, $i).", ";
}
$col_names = substr($col_names, 0, -2);
$sql = "INSERT INTO table_name (".$col_names.") SELECT ".$col_names." FROM table_name WHERE condition ";
$res = mysql_query($sql) or die(mysql_error());
If you don't specify the columns you have to keep the entries in order. For example:
INSERT INTO `users` (`ID`, `Email`, `UserName`) VALUES
(1, 'so#so.com', 'StackOverflow')
Would work but
INSERT INTO `users` VALUES
('so#so.com', 'StackOverflow')
would place the Email at the ID column so it's no good.
Try writing the columns once like:
INSERT INTO `users` (`Email`, `UserName`) VALUES
('so#so.com', 'StackOverflow'),
('so2#so.com', 'StackOverflow2'),
('so3#so.com', 'StackOverflow3'),
etc...
I think there's a limit to how many rows you can insert with that method, though (the total statement size is bounded by max_allowed_packet).
No, this isn't possible.
But it's easy to get the column list and just delete the one you don't want copied; this process can also be done through code (a sketch follows below).
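A minimal sketch of doing that through code, as mentioned above (my own illustration, not part of the answer): it assumes a PDO connection in $pdo, a table source_table whose auto-increment primary key is named id, and a row with id = 7 to copy.
// fetch every column name except the auto-increment primary key
$cols = $pdo->query("
    SELECT COLUMN_NAME
    FROM information_schema.COLUMNS
    WHERE TABLE_SCHEMA = DATABASE()
      AND TABLE_NAME = 'source_table'
      AND COLUMN_NAME <> 'id'
    ORDER BY ORDINAL_POSITION
")->fetchAll(PDO::FETCH_COLUMN);

$colList = '`' . implode('`, `', $cols) . '`';

// copy the row; the omitted id column gets the next auto_increment value
$pdo->exec("INSERT INTO source_table ($colList)
            SELECT $colList FROM source_table WHERE id = 7");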
Copy the table to a new one, then delete the column you don't want. Simple.
I'm assuming that since you want to omit the primary key that it is an auto_increment column and you want MySQL to autogenerate the next value in the sequence.
Given that, assuming that you do not need to do bulk inserts via the insert into ... select from method, the following will work for single/multi record inserts:
insert into mytable values (null, 'a', 'b', 'c');
Where the first column is your auto_incremented primary key and the others are your other columns on the table. When MySQL sees a null (or 0) for an auto_incremented column it will automatically replace it with the next valid value (see this link for more information). The behavior for 0 can be suppressed by enabling the NO_AUTO_VALUE_ON_ZERO SQL mode described in that link.
Let me know if you have any questions.
-Dipin