// dbcon is an existing MySQL connection object
function dbQuery(sql) {
    dbcon.query(sql, function (err, result) {
        // Imagine the PING to the DB server is 1 second.
        // It would take 100 seconds to complete 100 queries if they are run one by one.
    });
}

for (var i = 0; i < 100; i++) {
    dbQuery("INSERT INTO table VALUES some values");
}
I am running a socket client to get continuous streaming data, and I need to feed the remote database in real time. However, the current design executes queries sequentially.
Imagine a PING to remote MySQL server is 1 second. It would take 100 seconds to complete 100 queries.
I need 100 queries to complete in 1 second without waiting for the results. Meaning just "push to the DB server and forget about it", ignoring the network delay.
If this is not possible in NodeQuery, is it possible to use another programming language to get the desired result? E.g. Java, PHP, Python, something else?
Note: I need to execute each query in real time. Sending INSERTs in batches is not an option. I need to feed the database instantly when I get the data from elsewhere.
Non-blocking, non-sequential.
Simultaneous, concurrent, parallel.
Just put them in a Promise.all without waiting for the result? They are dispatched at the same time (more or less). Note that dbQuery has to return a Promise for this to work (for example by wrapping dbcon.query in a Promise, or by using a promise-based driver):
Promise.all(
    Array.from(Array(100)).map(
        () => dbQuery("INSERT INTO table VALUES some values")
    )
)
I'm working on a migration from MySQL to Postgres on a large Rails app. Most operations are performing at a normal rate. However, we have a particular operation that generates job records every 30 minutes or so. There are usually about 200 records generated and inserted, after which separate workers on another server pick up the jobs and work on them.
Under MySQL it takes about 15 seconds to generate the records, and then another 3 minutes for the worker to perform and write back the results, one at a time (so 200 more updates to the original job records).
Under Postgres it takes around 30 seconds, and then another 7 minutes for the worker to perform and write back the results.
The table being written to has roughly 2 million rows and one sequence-backed ID column.
I have tried tweaking checkpoint timeouts and sizes with no luck.
The table is heavily indexed and really shouldn't be any different than it was before.
I can't post code samples as it's a huge codebase, and without posting pages and pages of code it wouldn't make sense.
My question is, can anyone think of why this would possibly be happening? There is nothing in the Postgres log and the process of creating these objects has not changed really. Is there some sort of blocking synchronous write behavior I'm not aware of with Postgres?
I've added all sorts of logging in my code to spot errors or transaction failures, but I'm coming up with nothing; it just takes twice as long to run, which doesn't seem right to me.
The Postgres instance is hosted on AWS RDS on a M3.Medium instance type.
We also use New Relic, and it's showing nothing of interest here, which is surprising.
Why does your job queue contain 2 million rows? Are they all live, or have you not moved them to an archive table to keep your reporting simpler?
Have you used EXPLAIN on your SQL from a psql prompt or your preferred SQL IDE/tool?
Postgres is a completely different RDBMS than MySQL. It allocates space differently and manipulates space differently, so it may need to be indexed differently.
Additionally there's a tool called pgtune that will suggest configuration changes.
edit: 2014-08-13
Also, Rails comes with a profiler that might add some insight. Here's a Stack Overflow thread about Rails profiling.
You also want to watch your DB server at the disk IO level. Does your job fulfillment involve a large number of updates? Postgres creates a new row version when you update an existing row and marks the old version as dead, instead of overwriting the row in place. So you may be seeing a lot more IO as a result of your RDBMS switch.
I created a console application which sends data from a SQL Server database to RavenDB.
I have a freakish amount of data to transfer, so it's taking an incredibly long time.
(1,000,000 rows takes RavenDB about 2 hours to store)
RavenDB takes longer to store the data than the console application takes to collect it from SQL Server.
Is there any way to speed up the transfer or perhaps an existing tool which does this already?
using (var session = this._store.OpenSession())
{
    // row.Count is never more than 1024
    int i = 0;
    while (i < row.Count)
    {
        session.Store(row[i]);
        i++;
    }
    // All documents stored in the session are sent to the server in this single call
    session.SaveChanges();
}
Could you post the code where you insert into RavenDB? This is likely where the bottleneck lies. You should be making requests concurrently.
Setting:
HttpJsonRequest.ConfigureRequest += (e,x)=>((HttpWebRequest)x.Request).UnsafeAuthenticatedConnectionSharing = true;
should help, as well as processing your insert records in batches.
As for insert performance, you'll likely never match SQL Server, as RavenDB is optimized for reads over writes.
I am trying to optimize one part of my code that inserts data into MySQL. Should I chain INSERTs to make one huge multiple-row INSERT or are multiple separate INSERTs faster?
https://dev.mysql.com/doc/refman/8.0/en/insert-optimization.html
The time required for inserting a row is determined by the following factors, where the numbers indicate approximate proportions:
Connecting: (3)
Sending query to server: (2)
Parsing query: (2)
Inserting row: (1 × size of row)
Inserting indexes: (1 × number of indexes)
Closing: (1)
From this it should be obvious that sending one large statement will save you an overhead of 7 per insert statement. Reading further, the text also says:
If you are inserting many rows from the same client at the same time, use INSERT statements with multiple VALUES lists to insert several rows at a time. This is considerably faster (many times faster in some cases) than using separate single-row INSERT statements.
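To illustrate, here is a minimal PHP (mysqli) sketch of the multiple-VALUES approach; the connection $con, the table tbl(a, b) and the sample rows are just placeholders for this example:
// Build one multi-row INSERT instead of N single-row statements.
$rows = [[1, 'x'], [2, 'y'], [3, 'z']];
$values = [];
foreach ($rows as $r) {
    // Escape every value before interpolating it into the statement
    $values[] = "(" . (int)$r[0] . ", '" . mysqli_real_escape_string($con, $r[1]) . "')";
}
// One statement, one round-trip: INSERT INTO tbl (a, b) VALUES (1, 'x'), (2, 'y'), (3, 'z')
mysqli_query($con, "INSERT INTO tbl (a, b) VALUES " . implode(", ", $values));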
I know I'm answering this question almost two and a half years after it was asked, but I just wanted to provide some hard data from a project I'm working on right now that shows that indeed doing multiple VALUE blocks per insert is MUCH faster than sequential single VALUE block INSERT statements.
The code I wrote for this benchmark in C# uses ODBC to read data into memory from an MSSQL data source (~19,000 rows, all are read before any writing commences), and the MySql .NET connector (Mysql.Data.*) stuff to INSERT the data from memory into a table on a MySQL server via prepared statements. It was written in such a way as to allow me to dynamically adjust the number of VALUE blocks per prepared INSERT (ie, insert n rows at a time, where I could adjust the value of n before a run.) I also ran the test multiple times for each n.
Doing single VALUE blocks (eg, 1 row at a time) took 5.7 - 5.9 seconds to run. The other values are as follows:
2 rows at a time: 3.5 - 3.5 seconds
5 rows at a time: 2.2 - 2.2 seconds
10 rows at a time: 1.7 - 1.7 seconds
50 rows at a time: 1.17 - 1.18 seconds
100 rows at a time: 1.1 - 1.4 seconds
500 rows at a time: 1.1 - 1.2 seconds
1000 rows at a time: 1.17 - 1.17 seconds
So yes, even just bundling 2 or 3 writes together provides a dramatic improvement in speed (runtime cut by a factor of n), until you get to somewhere between n = 5 and n = 10, at which point the improvement drops off markedly, and somewhere in the n = 10 to n = 50 range the improvement becomes negligible.
Hope that helps people decide (a) whether to use the multi-prepare idea, and (b) how many VALUE blocks to create per statement (assuming you want to work with data that may be large enough to push the query past the max query size for MySQL, which I believe is 16MB by default in a lot of places, possibly larger or smaller depending on the value of max_allowed_packet set on the server).
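If you'd rather check the actual limit on your server than guess, you can read max_allowed_packet at runtime. A quick sketch in PHP (mysqli), assuming an open connection $con:
// Read the server's max_allowed_packet so you can size your batches safely.
$result = mysqli_query($con, "SHOW VARIABLES LIKE 'max_allowed_packet'");
$row = mysqli_fetch_assoc($result);   // e.g. ['Variable_name' => 'max_allowed_packet', 'Value' => '67108864']
$maxPacketBytes = (int)$row['Value'];
echo "max_allowed_packet = $maxPacketBytes bytes\n";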
A major factor will be whether you're using a transactional engine and whether you have autocommit on.
Autocommit is on by default and you probably want to leave it on; therefore, each insert that you do does its own transaction. This means that if you do one insert per row, you're going to be committing a transaction for each row.
Assuming a single thread, that means that the server needs to sync some data to disc for EVERY ROW. It needs to wait for the data to reach a persistent storage location (hopefully the battery-backed ram in your RAID controller). This is inherently rather slow and will probably become the limiting factor in these cases.
I'm of course assuming that you're using a transactional engine (usually innodb) AND that you haven't tweaked the settings to reduce durability.
I'm also assuming that you're using a single thread to do these inserts. Using multiple threads muddies things a bit because some versions of MySQL have working group-commit in innodb - this means that multiple threads doing their own commits can share a single write to the transaction log, which is good because it means fewer syncs to persistent storage.
In any case, the upshot is that you REALLY WANT TO USE multi-row inserts.
There is a limit over which it gets counter-productive, but in most cases it's at least 10,000 rows. So if you batch them up to 1,000 rows, you're probably safe.
If you're using MyISAM, there's a whole other load of things, but I'll not bore you with those. Peace.
Here are the results of a little PHP bench I did:
I'm trying to insert 3000 records in 3 different ways, using PHP 8.0 and MySQL 8.1 (mysqli).
Multiple insert queries, with multiple transactions:
$start = microtime(true);
for ($i = 0; $i < 3000; $i++)
{
    mysqli_query($res, "insert into app__debuglog VALUE (null,now(), 'msg : $i','callstack','user','debug_speed','vars')");
}
$end = microtime(true);
echo "Took " . ($end - $start) . " s\n";
Did it 5 times, average: 11.132s (+/- 0.6s)
Multiple insert queries, with a single transaction:
$start = microtime(true);
mysqli_begin_transaction($res, MYSQLI_TRANS_START_READ_WRITE);
for ($i = 0; $i < 3000; $i++)
{
    mysqli_query($res, "insert into app__debuglog VALUE (null,now(), 'msg : $i','callstack','user','debug_speed','vars')");
}
mysqli_commit($res);
$end = microtime(true);
echo "Took " . ($end - $start) . " s\n";
Result with 5 tests: 0.48s (+/- 0.04s)
Single aggregated insert query:
$start = microtime(true);
$values = "";
for ($i = 0; $i < 3000; $i++)
{
    $values .= "(null,now(), 'msg : $i','callstack','user','debug_speed','vars')";
    if ($i !== 2999)
        $values .= ",";
}
mysqli_query($res, "insert into app__debuglog VALUES $values");
$end = microtime(true);
echo "Took " . ($end - $start) . " s\n";
Result with 5 tests: 0.085s (+/- 0.05s)
So, for a 3000-row insert, it looks like:
Using multiple queries in a single write transaction is ~22 times faster than making multiple queries with a separate transaction for each insert.
Using a single aggregated insert statement is still ~6 times faster than using multiple queries with a single write transaction.
Send as many inserts across the wire at one time as possible. The actual insert speed should be the same, but you will see performance gains from the reduction of network overhead.
In general, the fewer calls to the database the better (meaning faster and more efficient), so try to code the inserts in such a way that it minimizes database accesses. Remember, unless you're using a connection pool, each database access has to create a connection, execute the SQL, and then tear down the connection. Quite a bit of overhead!
You might want to:
Check that auto-commit is off
Open Connection
Send multiple batches of inserts in a single transaction (batch size of about 4,000-10,000 rows; see the sketch after this list)
Close connection
Depending on how well your server scales (it's definitely OK with PostgreSQL, Oracle and MSSQL), do the above with multiple threads and multiple connections.
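A rough sketch of that recipe in PHP (mysqli); the connection details, the table tbl(col) and the $allRows array are placeholders for this example:
// Open the connection and switch auto-commit off so every insert does not
// get its own transaction.
$con = mysqli_connect("localhost", "user", "pass", "mydb");       // placeholder credentials
mysqli_autocommit($con, false);

// Send the inserts over the one open connection inside a single transaction,
// reusing a prepared statement for each row.
$stmt = mysqli_prepare($con, "INSERT INTO tbl (col) VALUES (?)"); // hypothetical table/column
foreach ($allRows as $value) {
    mysqli_stmt_bind_param($stmt, "s", $value);
    mysqli_stmt_execute($stmt);
}

// One commit for the whole run, then close the connection.
mysqli_commit($con);
mysqli_close($con);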
I just did a small benchmark, and it appears that for a large number of rows per batch it's not always faster. Here are my results for inserting 280,000 rows:
by 10,000: 164.96 seconds
by 5,000: 37 seconds
by 1,000: 12.56 seconds
by 600: 12.59 seconds
by 500: 13.81 seconds
by 400: 14.75 seconds
by 250: 17.96 seconds
by 100: 27 seconds
It appears that batches of 1,000 are the best choice.
In general, multiple separate inserts will be slower because of the connection overhead. Doing multiple inserts at once will reduce the cost of overhead per insert.
Depending on which language you are using, you can possibly create a batch in your programming/scripting language before going to the DB and add each insert to the batch. Then you would be able to execute a large batch using one connect operation. Here's an example in Java.
MySQL 5.5
One SQL insert statement took ~300 to ~450 ms,
while the stats below are for inline multiple-insert statements:
(25492 row(s) affected)
Execution Time : 00:00:03:343
Transfer Time : 00:00:00:000
Total Time : 00:00:03:343
I would say inline is the way to go :)
It's ridiculous how badly MySQL and MariaDB are optimized when it comes to inserts.
I tested MySQL 5.7 and MariaDB 10.3; no real difference between them.
I've tested this on a server with NVMe disks: 70,000 IOPS, 1.1 GB/s sequential throughput, and that's possible in full duplex (read and write).
The server is a high-performance server as well.
I gave it 20 GB of RAM.
The database was completely empty.
The speed I got was 5,000 inserts per second when doing multi-row inserts (tried with 1 MB up to 10 MB chunks of data).
Now the interesting part:
If I add another thread and insert into the SAME tables, I suddenly get 2 × 5,000/sec.
One more thread and I get 15,000/sec in total.
Consider this: with ONE insert thread you can write to the disk sequentially (with the exception of indexes).
Using threads should actually degrade the possible performance, because the server now needs to do a lot more random accesses.
But the reality check shows MySQL is so badly optimized that threads help a lot.
The real performance possible with such a server is probably millions per second; the CPU is idle, the disk is idle.
The reason is quite clearly that MariaDB, just like MySQL, has internal delays.
I would add that too many rows at a time, depending on their contents, could lead to a Got a packet bigger than 'max_allowed_packet' error.
Maybe consider using functions like PHP's array_chunk to do multiple inserts for your big datasets.
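For example, something along these lines, where $con, the table tbl(col) and $bigDataset are placeholders; the chunk size is chosen so each statement stays well under max_allowed_packet:
// Split a big dataset into chunks so no single INSERT exceeds max_allowed_packet.
foreach (array_chunk($bigDataset, 1000) as $chunk) {   // 1000 rows per INSERT; adjust to your row size
    $values = [];
    foreach ($chunk as $row) {
        $values[] = "('" . mysqli_real_escape_string($con, $row) . "')";
    }
    mysqli_query($con, "INSERT INTO tbl (col) VALUES " . implode(",", $values));
}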
Multiple inserts are faster, but there is a threshold. Another trick is temporarily disabling constraint checks, which makes inserts much, much faster. It doesn't matter whether your table has them or not. For example, test disabling foreign keys and enjoy the speed:
SET FOREIGN_KEY_CHECKS=0;
Of course, you should turn it back on after the inserts with:
SET FOREIGN_KEY_CHECKS=1;
This is a common way of inserting huge amounts of data.
Data integrity may break, so you should take care of that before disabling foreign key checks.
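A small sketch of that pattern in PHP (mysqli), with a hypothetical insertBatches() doing the actual inserts; the finally block makes sure the checks are switched back on even if something fails:
// Disable foreign key checks only for the duration of the bulk insert,
// and make sure they are re-enabled afterwards.
mysqli_query($con, "SET FOREIGN_KEY_CHECKS=0");
try {
    insertBatches($con, $bigDataset);   // hypothetical function doing the actual inserts
} finally {
    mysqli_query($con, "SET FOREIGN_KEY_CHECKS=1");
}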