Which is faster: multiple single INSERTs or one multiple-row INSERT? - mysql

I am trying to optimize one part of my code that inserts data into MySQL. Should I chain INSERTs to make one huge multiple-row INSERT or are multiple separate INSERTs faster?

https://dev.mysql.com/doc/refman/8.0/en/insert-optimization.html
The time required for inserting a row is determined by the following factors, where the numbers indicate approximate proportions:
Connecting: (3)
Sending query to server: (2)
Parsing query: (2)
Inserting row: (1 × size of row)
Inserting indexes: (1 × number of indexes)
Closing: (1)
From this it should be obvious that sending one large statement saves you an overhead of 7 per INSERT statement. Reading further, the text also says:
If you are inserting many rows from the same client at the same time, use INSERT statements with multiple VALUES lists to insert several rows at a time. This is considerably faster (many times faster in some cases) than using separate single-row INSERT statements.
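As a concrete illustration (a minimal sketch; $conn, the log table and its msg column are made-up placeholders, not from the question), the per-statement overhead above is paid once instead of three times when the VALUES lists are combined:
// Three round trips, each paying the send/parse/close overhead...
mysqli_query($conn, "INSERT INTO log (msg) VALUES ('a')");
mysqli_query($conn, "INSERT INTO log (msg) VALUES ('b')");
mysqli_query($conn, "INSERT INTO log (msg) VALUES ('c')");
// ...versus one statement that pays it once:
mysqli_query($conn, "INSERT INTO log (msg) VALUES ('a'), ('b'), ('c')");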

I know I'm answering this question almost two and a half years after it was asked, but I just wanted to provide some hard data from a project I'm working on right now that shows that indeed doing multiple VALUE blocks per insert is MUCH faster than sequential single VALUE block INSERT statements.
The code I wrote for this benchmark in C# uses ODBC to read data into memory from an MSSQL data source (~19,000 rows, all read before any writing commences) and the MySQL .NET connector (MySql.Data.*) to INSERT the data from memory into a table on a MySQL server via prepared statements. It was written in such a way as to allow me to dynamically adjust the number of VALUE blocks per prepared INSERT (i.e., insert n rows at a time, where I could adjust the value of n before a run). I also ran the test multiple times for each n.
Doing single VALUE blocks (e.g., 1 row at a time) took 5.7 - 5.9 seconds to run. The other values are as follows:
2 rows at a time: 3.5 - 3.5 seconds
5 rows at a time: 2.2 - 2.2 seconds
10 rows at a time: 1.7 - 1.7 seconds
50 rows at a time: 1.17 - 1.18 seconds
100 rows at a time: 1.1 - 1.4 seconds
500 rows at a time: 1.1 - 1.2 seconds
1000 rows at a time: 1.17 - 1.17 seconds
So yes, even just bundling 2 or 3 writes together provides a dramatic improvement in speed (runtime cut by a factor of n), until you get to somewhere between n = 5 and n = 10, at which point the improvement drops off markedly, and somewhere in the n = 10 to n = 50 range the improvement becomes negligible.
Hope that helps people decide on (a) whether to use the multiprepare idea, and (b) how many VALUE blocks to create per statement (assuming you want to work with data that may be large enough to push the query past the max query size for MySQL, which I believe is 16MB by default in a lot of places, possibly larger or smaller depending on the value of max_allowed_packet set on the server.)
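For anyone who wants to try the same idea from PHP rather than C#, here is a rough mysqli sketch of the approach (my own illustration, not the author's code; the debuglog table, its columns, and the $rows array of [msg, user] pairs are assumptions): it builds one prepared INSERT with n placeholder groups per chunk and binds a whole chunk at a time.
// Insert $rows in chunks of $n rows per prepared multi-row INSERT.
function insert_chunked(mysqli $db, array $rows, int $n = 50): void
{
    foreach (array_chunk($rows, $n) as $chunk) {
        // One "(?, ?)" group per row in this chunk (the last chunk may be smaller).
        $placeholders = implode(',', array_fill(0, count($chunk), '(?, ?)'));
        $stmt = $db->prepare("INSERT INTO debuglog (msg, user) VALUES $placeholders");
        // Flatten the chunk into one argument list for bind_param.
        $params = array_merge(...$chunk);
        $stmt->bind_param(str_repeat('s', count($params)), ...$params);
        $stmt->execute();
        $stmt->close();
    }
}
Going by the numbers above, anything from roughly 10 to 100 rows per statement should capture most of the gain.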

A major factor will be whether you're using a transactional engine and whether you have autocommit on.
Autocommit is on by default and you probably want to leave it on; therefore, each insert that you do does its own transaction. This means that if you do one insert per row, you're going to be committing a transaction for each row.
Assuming a single thread, that means the server needs to sync some data to disk for EVERY ROW. It needs to wait for the data to reach a persistent storage location (hopefully the battery-backed RAM in your RAID controller). This is inherently rather slow and will probably become the limiting factor in these cases.
I'm of course assuming that you're using a transactional engine (usually innodb) AND that you haven't tweaked the settings to reduce durability.
I'm also assuming that you're using a single thread to do these inserts. Using multiple threads muddies things a bit because some versions of MySQL have working group-commit in innodb - this means that multiple threads doing their own commits can share a single write to the transaction log, which is good because it means fewer syncs to persistent storage.
Either way, the upshot is that you REALLY WANT TO USE multi-row inserts.
There is a limit over which it gets counter-productive, but in most cases it's at least 10,000 rows. So if you batch them up to 1,000 rows, you're probably safe.
If you're using MyISAM, there's a whole other load of things, but I'll not bore you with those. Peace.

Here are the results of a little PHP bench I did:
I'm trying to insert 3000 records in 3 different ways, using PHP 8.0 and MySQL 8.1 (mysqli).
Multiple INSERT queries, with multiple transactions:
$start = microtime(true);
for ($i = 0; $i < 3000; $i++) {
    mysqli_query($res, "insert into app__debuglog VALUE (null,now(), 'msg : $i','callstack','user','debug_speed','vars')");
}
$end = microtime(true);
echo "Took " . ($end - $start) . " s\n";
Did it 5 times, average: 11.132s (+/- 0.6s)
Multiple INSERT queries, with a single transaction:
$start = microtime(true);
mysqli_begin_transaction($res, MYSQLI_TRANS_START_READ_WRITE);
for ($i = 0; $i < 3000; $i++) {
    mysqli_query($res, "insert into app__debuglog VALUE (null,now(), 'msg : $i','callstack','user','debug_speed','vars')");
}
mysqli_commit($res);
$end = microtime(true);
echo "Took " . ($end - $start) . " s\n";
Result with 5 tests: 0.48s (+/- 0.04s)
Single aggregated insert query:
$start = microtime(true);
$values = "";
for ($i = 0; $i < 3000; $i++) {
    $values .= "(null,now(), 'msg : $i','callstack','user','debug_speed','vars')";
    if ($i !== 2999)
        $values .= ",";
}
mysqli_query($res, "insert into app__debuglog VALUES $values");
$end = microtime(true);
echo "Took " . ($end - $start) . " s\n";
Result with 5 tests : 0.085s (+/- 0.05s)
So, for a 3,000-row insert, it looks like:
Using multiple queries in a single write transaction is ~22 times faster than using multiple queries with a separate transaction for each insert.
Using a single aggregated INSERT statement is still ~6 times faster than using multiple queries with a single write transaction.

Send as many inserts across the wire at one time as possible. The actual insert speed should be the same, but you will see performance gains from the reduction of network overhead.
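If you have to keep the statements separate for some reason, one way to at least cut round trips from PHP is mysqli_multi_query(), which ships several statements in a single call (a sketch; $conn and the log table are placeholders, and a single multi-row INSERT is still the better option):
$sql = "INSERT INTO log (msg) VALUES ('a');"
     . "INSERT INTO log (msg) VALUES ('b');"
     . "INSERT INTO log (msg) VALUES ('c')";
if (mysqli_multi_query($conn, $sql)) {
    // Drain the per-statement results so the connection can be reused.
    while (mysqli_more_results($conn) && mysqli_next_result($conn)) {
        // INSERTs return no result sets; nothing to fetch here.
    }
}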

In general, the fewer calls to the database the better (meaning faster and more efficient), so try to code the inserts in a way that minimizes database accesses. Remember, unless you're using a connection pool, each database access has to create a connection, execute the SQL, and then tear down the connection. Quite a bit of overhead!

You might want to:
Check that auto-commit is off
Open a connection
Send multiple batches of inserts in a single transaction (of about 4,000-10,000 rows each; experiment to see what works)
Close the connection
Depending on how well your server scales (it's definitely OK with PostgreSQL, Oracle and MSSQL), do the above with multiple threads and multiple connections.
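A minimal mysqli sketch of those steps (my own illustration: the mytable table, its msg column, the connection details, and $rows as a plain array of strings are all placeholders, and the batch size is something you should tune):
$db = mysqli_connect('localhost', 'user', 'pass', 'mydb');   // open connection
mysqli_autocommit($db, false);                                // make sure auto-commit is off
foreach (array_chunk($rows, 5000) as $batch) {
    foreach ($batch as $msg) {
        mysqli_query($db, "INSERT INTO mytable (msg) VALUES ('"
            . mysqli_real_escape_string($db, $msg) . "')");
    }
    mysqli_commit($db);   // one commit (and log flush) per batch instead of per row
}
mysqli_close($db);                                            // close connection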

I just did a small benchmark, and it appears that for a large number of rows per statement it's not necessarily faster. Here are my results for inserting 280,000 rows:
by 10,000: 164.96 seconds
by 5,000: 37 seconds
by 1,000: 12.56 seconds
by 600: 12.59 seconds
by 500: 13.81 seconds
by 400: 14.75 seconds
by 250: 17.96 seconds
by 100: 27 seconds
It appears that batches of 1,000 are the best choice.

In general, multiple inserts will be slower because of the connection overhead. Doing multiple inserts at once will reduce the cost of overhead per insert.
Depending on which language you are using, you can possibly create a batch in your programming/scripting language before going to the db and add each insert to the batch. Then you would be able to execute a large batch using one connect operation. Here's an example in Java.

MySQL 5.5
A single-row INSERT statement took ~300 to ~450 ms,
while the stats below are for an inline multi-row INSERT statement:
(25492 row(s) affected)
Execution Time : 00:00:03:343
Transfer Time : 00:00:00:000
Total Time : 00:00:03:343
I would say inline is the way to go :)

It's ridiculous how badly MySQL and MariaDB are optimized when it comes to inserts.
I tested MySQL 5.7 and MariaDB 10.3; no real difference between them.
I tested this on a server with NVMe disks: 70,000 IOPS, 1.1 GB/s sequential throughput, and that's possible in full duplex (read and write).
The server is a high-performance server as well.
I gave it 20 GB of RAM.
The database was completely empty.
The speed I got was 5,000 inserts per second when doing multi-row inserts (I tried it with chunks of 1 MB up to 10 MB of data).
Now the kicker:
If I add another thread and insert into the SAME tables, I suddenly have 2 x 5,000/sec.
One more thread and I have 15,000 total/sec.
Consider this: when ONE thread is doing the inserts, the server can write to the disk sequentially (with the exception of indexes).
When using threads, you actually degrade the possible performance, because the server now needs to do a lot more random accesses.
But the reality check shows MySQL is so badly optimized that threads help a lot.
The real performance possible with such a server is probably millions of inserts per second; the CPU is idle and the disk is idle.
The reason is quite clearly that MariaDB, just like MySQL, has internal delays.

I would add that too many rows at a time, depending on their contents, could trigger a Got a packet bigger than 'max_allowed_packet' error.
Maybe consider using functions like PHP's array_chunk to do multiple inserts for your big datasets.
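For example, something along these lines (a sketch; $db, $bigDataset as an array of strings, and the mytable table are placeholders) keeps each statement comfortably below max_allowed_packet while still batching the rows:
foreach (array_chunk($bigDataset, 1000) as $chunk) {
    // Build one multi-row VALUES list per chunk of 1000 rows.
    $values = implode(',', array_map(
        fn($row) => "('" . mysqli_real_escape_string($db, $row) . "')",
        $chunk
    ));
    mysqli_query($db, "INSERT INTO mytable (msg) VALUES $values");
}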

Multi-row inserts are faster, but there is a threshold. Another trick is temporarily disabling constraint checks, which makes inserts much, much faster. It doesn't matter whether your table has them or not. For example, try disabling foreign keys and enjoy the speed:
SET FOREIGN_KEY_CHECKS=0;
Of course, you should turn it back on after the inserts with:
SET FOREIGN_KEY_CHECKS=1;
This is a common way of inserting huge amounts of data.
Data integrity may break, so you should take care of that before disabling foreign key checks.

Related

Which is faster? Insert or update?

For a MySQL database.
I want to insert rows as fast as possible.
Inserts will be executed in a multithreaded way, with something like 200 threads.
There are two ways I could do it:
1) Use a simple INSERT statement, with each INSERT wrapped in its own transaction.
There is a nice MySQL solution with batch insert
(INSERT INTO t() VALUES (),(),()...) but it can't be used, because every single row must be independent in terms of transactions. In other words, if some problem appears with the operation, I want to roll back only the one affected row, not all rows from the batch.
And here we come to the second way:
2) A single thread can do a batch insert with fake data: totally empty rows except for auto-incremented IDs. These inserts are so fast that we can practically ignore their cost (about 40 ns/row) compared with single INSERTs.
After the batch insert, the client side can get LAST_INSERT_ID and ROW_COUNT, i.e. the 'range' of inserted IDs. The next step is to UPDATE, by ID from that 'range', with the data we originally wanted to insert. The updates will be executed in a multithreaded way. The result will be the same.
And now I want to ask: which way will be faster, single inserts, or batch insert + updates?
There are some indexes in the table.
None of the above.
You should be doing batch inserts. If a BatchUpdateException occurs, you can catch it and find out which inserts failed. You can however still commit what you have so far, and then continue from the point the batch failed (this is driver dependent, some drivers will execute all statements and inform you which ones failed).
The answer depends on the major cause of errors and what you want to do with the failed transactions. INSERT IGNORE may be sufficient:
INSERT IGNORE . . .
This will ignore errors in the batch but insert the valid rows. It is tricky, though, if you want to catch the errors and do something about them.
If the errors are caused by duplicate keys (either unique or primary), then ON DUPLICATE KEY UPDATE is probably the best solution.
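For example (a sketch; a table t with a unique or primary key on id and a val column is assumed):
// Rows whose id already exists are updated in place instead of failing the batch.
mysqli_query($db, "INSERT INTO t (id, val) VALUES (1, 'a'), (2, 'b')
                   ON DUPLICATE KEY UPDATE val = VALUES(val)");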
Plan A:
If there are secondary INDEXes, then the batch-insert + lots of updates is likely to be slower because it will need to insert index rows, then change them. OTOH, since secondary index operations are done in the "Change buffer", hence delayed, you may not notice the overhead immediately.
Do not use 200 threads for doing the multi-threaded inserts or updates. For 5.7, 64 might be the limit; for 5.6 it might be 48. YMMV. These numbers come from Oracle bragging about how they improved the multi-threaded aspects of MySQL. Beyond those numbers, throughput flat-lined and latency went through the roof. You should experiment with your situation, not trust those numbers.
Plan B:
If failed rows are rare, then be optimistic. Batch INSERTs, say, 64 at a time. If a failure occurs, redo them in 8 batches of 8. If any of those fail, then degenerate to one at a time. I have no idea what pattern is optimal. (64-8-1 or 64-16-4-1 or 25-5-1 or ...) Anyway it depends on your frequency of failure and number of rows to insert.
However, I will impart this bit of advice... Beyond 100 threads, you are well into "diminishing returns", so don't bother with a large batch that might fail. I have measured that 100 rows per batch gets you about 90% of the maximal speed.
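A rough PHP sketch of Plan B's fallback idea (insert_batch() and log_failed_row() are hypothetical helpers, not anything defined above; the 64-8-1 split is just one of the patterns mentioned):
function insert_with_fallback(mysqli $db, array $rows, int $batch = 64): void
{
    foreach (array_chunk($rows, $batch) as $chunk) {
        if (insert_batch($db, $chunk)) {   // hypothetical: one multi-row INSERT, false on failure
            continue;                      // whole chunk went in
        }
        if (count($chunk) === 1) {
            log_failed_row($chunk[0]);     // hypothetical: record the one bad row
            continue;
        }
        // Retry the failed chunk in smaller pieces: 64 -> 8 -> 1.
        insert_with_fallback($db, $chunk, max(1, intdiv($batch, 8)));
    }
}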
Another tip (for any Plan):
innodb_flush_log_at_trx_commit = 2
sync_binlog = 0
Caution: These help with speed (perhaps significantly), but run the risk of lost data in a power failure.

Mysql transactions VS multiple insert / update

I use PHP with Mysql. I care about performance and stability and that's why I now use multiple inserts / updates in one single SQL query (which is generated by PHP). With that I insert / update 1000 rows. It takes about 6 seconds.
Kind of like this but larger:
Inserting multiple rows in mysql
I've read about transactions, which are meant to speed things up: multiple SQL queries are added to a kind of buffer (I guess) and then all executed at once.
Which method should I use in terms of performance and stability? Pros and cons?

big differences in mysql execution time: minimum 2 secs - maximum 120 secs

The Situation:
I use a (PHP) cronjob to keep my database up to date. The affected table contains about 40,000 records. Basically, the cronjob deletes all entries and then re-inserts them (with different values, of course). I have to do it this way, because they really ALL change, since they are all interrelated.
The Problem:
Actually, everything works fine. The cronjob does its job within 1.5 to 2 seconds (again, for about 40k inserts; I think this is adequate). MOSTLY. But sometimes, the query takes up to 60, 90 or even 120 seconds!
I indexed my database. And I think the query is working well, given that it only needs 2 seconds most of the time. I close the connection via mysql_close();
Do you have any ideas? If you need more information please tell me.
Thanks in advance.
Edit: Well, it seems like there was no problem with the inserts after all. It was a complex SELECT query that was causing the trouble. Still, thanks to everyone who answered!
From what I read, I can conclude that your cronjob is using bulk-insert statements. If you know when the cronjob runs, I suggest you start a Database Engine Tuning Advisor session and see what other processes are running while the cronjob does its thing. A bulk insert has some restrictions on the number of fields and the number of rows at once. You could read the relevant sections of this MSDN article: http://technet.microsoft.com/en-us/library/ms188365.aspx
Performance Considerations
If the number of pages to be flushed in a single batch exceeds an internal threshold, a full scan of the buffer pool might occur to identify which pages to flush when the batch commits. This full scan can hurt bulk-import performance. A likely case of exceeding the internal threshold occurs when a large buffer pool is combined with a slow I/O subsystem. To avoid buffer overflows on large machines, either do not use the TABLOCK hint (which will remove the bulk optimizations) or use a smaller batch size (which preserves the bulk optimizations). Because computers vary, we recommend that you test various batch sizes with your data load to find out what works best for you.

How do I memory-efficiently process all rows of a MySQL table?

I have a MySQL table with 237 million rows. I want to process all of these rows and update them with new values.
I do have sequential ID's, so I could just use a lot of select statements:
where id = '1'
where id = '2'
This is the method mentioned in Sequentially run through a MYSQL table with 1,000,000 records?.
But I'd like to know if there is a faster way using something like a cursor that would be used to sequentially read a big file without needing to load the full set into memory. The way I see it, a cursor would be much faster than running millions of select statements to get the data back in manageable chunks.
Ideally, you get the DBMS to do the work for you. You make the SQL statement so it runs solely in the database, not returning data to the application. All else apart, this saves the overhead of 237 million messages to the client and 237 million messages back to the server.
Whether this is feasible depends on the nature of the update:
Can the DBMS determine what the new values should be?
Can you get the necessary data into the database so that the DBMS can determine what the new values should be?
Will every single one of the 237 million rows be changed, or only a subset?
Can the DBMS be used to determine the subset?
Will any of the id values be changed at all?
If the id values will never be changed, then you can arrange to partition the data into manageable subsets, for any flexible definition of 'manageable'.
You may need to consider transaction boundaries; can it all be done in a single transaction without blowing out the logs? If you do operations in subsets rather than as a single atomic transaction, what will you do if your driving process crashes at 197 million rows processed? Or the DBMS crashes at that point? How will you know where to resume operations to complete the processing?
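If the id values are stable and the new value can be computed in SQL, a chunked, restartable pass might look like this (a sketch; the big_table table, the SET expression, and the chunk size are all assumptions, not from the question):
$chunk = 10000;            // rows per UPDATE; tune for your lock and log limits
$maxId = 237000000;        // highest id to touch, assuming ids run from 1 upwards
for ($start = 1; $start <= $maxId; $start += $chunk) {
    $end = $start + $chunk - 1;
    mysqli_query($db, "UPDATE big_table SET value = value * 2
                        WHERE id BETWEEN $start AND $end");
    // Persist $end somewhere durable so a crash at 197 million rows can resume here.
}
With autocommit on, each UPDATE is its own transaction, which keeps the undo/redo footprint bounded and gives you natural restart points.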

Fork MySQL INSERT INTO (InnoDB)

I'm trying to insert about 500 million rows of garbage data into a database for testing. Right now I have a PHP script looping through a few SELECT/INSERT statements each inside a TRANSACTION -- clearly this isn't the best solution. The tables are InnoDB (row-level locking).
I'm wondering if I (properly) fork the process, will this speed up the INSERT process? At the rate it's going, it will take 140 hours to complete. I'm concerned about two things:
If INSERT statements must acquire a write lock, then will it render forking useless, since multiple processes can't write to the same table at the same time?
I'm using SELECT...LAST_INSERT_ID() (inside a TRANSACTION). Will this logic break when multiple processes are INSERTing into the database? I could create a new database connection for each fork, so I hope this would avoid the problem.
How many processes should I be using? The queries themselves are simple, and I have a regular dual-core dev box with 2GB RAM. I set up my InnoDB to use 8 threads (innodb_thread_concurrency=8), but I'm not sure if I should be using 8 processes or if this is even a correct way to think about matching.
Thanks for your help!
The MySQL documentation has a discussion on efficient insertion of a large number of records. It seems that the clear winner is the LOAD DATA INFILE command, followed by INSERT statements with multiple VALUES lists.
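For reference, a LOAD DATA INFILE call looks roughly like this (a sketch; the file path, the test_data table, and the delimiters are placeholders, and the server's FILE privilege / secure_file_priv setting must allow reading the file):
mysqli_query($db, "LOAD DATA INFILE '/tmp/garbage_rows.csv'
                   INTO TABLE test_data
                   FIELDS TERMINATED BY ','
                   LINES TERMINATED BY '\\n'");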
1) Yes, there will be lock contention, but InnoDB is designed to handle multiple threads trying to insert. Sure, they won't insert simultaneously, but it will handle serializing the inserts for you. Just make sure you explicitly close your transactions, and do so ASAP. This will ensure you get the best possible insert performance.
2) No, this logic will not break, provided you have one connection per thread, since LAST_INSERT_ID() is connection-specific.
3) This is one of those things that you just need to benchmark to figure out. Actually, I would make the program self-adjust: run 100 inserts with 8 threads and record the execution times, then try again with half as many and twice as many. Whichever is faster, benchmark more thread-count values around that number.
In general, you should always just go ahead and benchmark this kind of stuff to see which is faster. In the amount of time it takes you to think about it and write it up, you could probably already have preliminary numbers.