I have a server which sends up to 20 UPDATE statements to a separate MySQL server every 3-5 seconds for a game. My question is: is it faster to concatenate them together (UPDATE; UPDATE; UPDATE)? Is it faster to do them in a transaction and then commit the transaction? Or is it faster to just do each UPDATE individually?
Any insight would be appreciated!
It sort of depends on how the server connects. If the connection between the servers is persistent, you probably won't see a great deal of difference between concatenated statements or multiple separate statements.
However, if the execution involves establishing the connection, executing the SQL statement, then tearing down the connection, you will save a lot of resources on the database server by executing multiple statements at a time. The process of establishing the connection tends to be an expensive and time-consuming one, and has the added overhead of DNS resolution since the machines are separate.
It makes the most logical sense to me to establish the connection, begin a transaction, execute the statements individually, commit the transaction and disconnect from the database server. Whether you send all the UPDATE statements as a single concatenation or multiple individual statements is probably not going to make a big difference in this scenario, especially if this just involves regular communication between these two servers and you need not expect it to scale up with user load, for example.
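As a rough sketch of that sequence (the table and column names here are invented for the example):

START TRANSACTION;
UPDATE players SET hp = 90 WHERE id = 1;
UPDATE players SET gold = gold + 50 WHERE id = 2;
-- ... up to ~20 UPDATEs ...
COMMIT;

With the statements grouped like this, InnoDB typically flushes its log once at COMMIT instead of once per autocommitted statement, which is where most of the saving comes from.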
The use of the transaction assumes that your 3-5 second periodic bursts of UPDATE statements are logically related somehow. If they are not interdependent, then you could skip the transaction and save some resources.
As with any question regarding performance, the best answer is: if your current system is meeting your performance and scaling needs, don't pay too much attention to micro-optimizing it just yet.
It is always faster to wrap these UPDATEs into a single transaction block.
The price for this is that if anything fails inside that block, nothing happens at all: you will have to repeat your work.
Also, keep in mind that transactions in MySQL only work when using the InnoDB engine.
Related
I have a quick question that I can't seem to find an answer to online; I'm not sure whether I'm using the right wording.
Do MySQL databases automatically synchronize queries coming in at around the same time? For example, if I send a query to insert something into a database at the same time another connection sends a query to select something from the database, does MySQL automatically lock the database while the insert is happening, and then unlock it when it's done, allowing the select query to access it?
Thanks
Do MySQL databases automatically synchronize queries coming in at around the same time?
Yes.
Think of it this way: there's no such thing as simultaneous queries. MySQL always carries out one of them first, then the second one. (This isn't exactly true; the server is far more complex than that. But it robustly provides the illusion of sequential queries to us users.)
If, from one connection you issue a single INSERT query or a single UPDATE query, and from another connection you issue a SELECT, your SELECT will get consistent results. Those results will reflect the state of data either before or after the change, depending on which query went first.
You can even do stuff like this (read-modify-write operations) and maintain consistency.
UPDATE some_table
SET update_count = update_count + 1,
    update_time = NOW()
WHERE id = something;
If you must do several INSERT or UPDATE operations as if they were one, you'll need to use the InnoDB engine, and you'll need to use transactions. Other connections won't see a half-finished set of changes; they see the data either as it was before the transaction or as it is after the commit. Teaching you everything about transactions is beyond the scope of a Stack Overflow answer.
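That said, a minimal sketch of the pattern, assuming an InnoDB table and with table and column names made up for the example:

START TRANSACTION;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
UPDATE accounts SET balance = balance + 100 WHERE id = 2;
COMMIT;
-- If anything goes wrong before COMMIT, issue ROLLBACK instead and neither change is applied.

Other connections see either both changes or neither, never just one of them.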
The key to understanding how a modern database engine like InnoDB works is Multi-Version Concurrency Control or MVCC. This is how simultaneous operations can run in parallel and then get reconciled into a consistent "view" of the database when fully committed.
If you've ever used Git you know how you can have several updates to the same base happening in parallel but so long as they can all cleanly merge together there's no conflict. The database works like that as well, where you can begin a transaction, apply a bunch of operations, and commit it. Should those apply without conflict the commit is successful. If there's trouble the transaction is rolled back as if it never happened.
This ability to juggle multiple operations simultaneously is what makes a transaction-capable database engine really powerful. It's an important component necessary to meet the ACID standard.
MyISAM, MySQL's original storage engine, doesn't have any of these features and locks the whole table on any write operation to avoid conflict. It works the way you thought it did.
When creating a table in MySQL you have your choice of engine, but InnoDB should be your default. There's really no reason at all to use MyISAM, as the interesting features of that engine (e.g. full-text indexes) have since been ported over to InnoDB.
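If you want to be explicit about the engine (the table definition below is only an example), you can name it when creating the table and check it afterwards:

CREATE TABLE player_scores (
    id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    player VARCHAR(64) NOT NULL,
    score INT NOT NULL DEFAULT 0
) ENGINE=InnoDB;

SHOW TABLE STATUS LIKE 'player_scores';   -- the Engine column shows InnoDB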
This could be a dumb question; I tried to search for it and found nothing.
I have been using MySQL for years (not all that long), but I have never tried MySQL transactions.
Now my question is: what would happen if I issue an insert or delete statement from multiple clients using transactions? Would it lock the table and prevent the other clients from performing their queries?
What would happen if another client issues a query while the first client still has an unfinished transaction?
I appreciate any help.
P.S. Most likely I will insert the data using a file or CSV; it could be a big chunk of data or just a small one.
MySQL automatically performs locking for single SQL statements to keep clients from interfering with each other, but this is not always sufficient to guarantee that a database operation achieves its intended result, because some operations are performed over the course of several statements. In this case, different clients might interfere with each other.
Source: http://www.informit.com/articles/article.aspx?p=2036581&seqNum=12
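To make a multi-statement operation behave as a unit on an InnoDB table, wrap it in a transaction. A rough sketch with a made-up table name:

START TRANSACTION;
INSERT INTO import_rows (col_a, col_b) VALUES ('a1', 'b1');
INSERT INTO import_rows (col_a, col_b) VALUES ('a2', 'b2');
-- ... the rest of the file ...
COMMIT;   -- other clients see all of these rows or none of them

Other clients are not blocked from reading the table while this runs; they simply do not see the new rows until the COMMIT.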
I need your advice.
I have a MySQL database which stores the data from my Minecraft server. The server uses the Ebean API for the MySQL access.
I will have multiple servers running on the same synced data when the user base increases. Which server a user is connected to does not matter; it all looks the same to them. But how can I handle a case in which two players in the same guild edit something at the same time from two different servers? One server will throw an optimistic lock exception. But what do I do if it is something important, like a donation to the guild bank? The donated amount might get duplicated or lost. Tell the user to retry? Or let the server automatically resend the query with the updated data from the database?
A friend of mine suggested something like a socket server in the middle that handles ALL MySQL statements, but that would require a lot of work to make sure it reconnects to the Minecraft servers if the connection is lost, and so on. It would also require me to get the raw update query or serialize the Ebean table, and I don't know how to accomplish either of those.
I have not found an answer to my question yet and I hope that it hasn't been answered before.
There are two different kinds of operations the Minecraft servers can perform on the DBMS. On one hand, you have state-update operations, like making a deposit to an account. The history of these operations matters. For the sake of integrity, you must use transactions for these. They're not idempotent, meaning that you can't repeat them multiple times and expect the same result as if you only did them once. You should investigate the use of SELECT ... FOR UPDATE transactions for these.
If something fails during such a transaction, you must issue a ROLLBACK of the transaction and try again. You'd be smart to log these retries in case you get a lot of rollbacks: that suggests you have some sort of concurrency trouble to track down.
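As a sketch of the donation done with SELECT ... FOR UPDATE (the table and column names are invented for the example, and the table must be InnoDB):

START TRANSACTION;
SELECT balance FROM guild_bank WHERE guild_id = 42 FOR UPDATE;   -- locks this guild's row
UPDATE guild_bank SET balance = balance + 500 WHERE guild_id = 42;
INSERT INTO guild_bank_log (guild_id, player_id, amount) VALUES (42, 7, 500);
COMMIT;

If two servers run this at the same time, the second one simply waits for the row lock instead of throwing an optimistic lock exception, so each donation is applied exactly once.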
By the way, you don't need to bother with an explicit transaction on a query like
UPDATE credit SET balance = balance + 200 WHERE account = 12367
Your DBMS will get this right, even when multiple connections hit the same account number.
The other kind of operation is idempotent. That is, if you carry out the operation more than once, the result is the same as if you did it once. For example, setting the name of a player is idempotent. For those operations, if you get some kind of exception, you can either repeat the operation, or simply ignore the failure in the assumption that the operation will be repeated later in the normal sequence of gameplay.
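For instance, an update like the following (names are hypothetical) leaves the row in the same state whether it runs once or five times:

UPDATE players SET name = 'Steve' WHERE id = 7;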
I'm in a situation where an entire column in a table (used for user tokens) needs to be wiped, i.e., all user tokens are reset simultaneously. There are two ways of going about it: reset each user's token individually with a separate UPDATE query; or make one big query that affects all rows.
The advantage of one big query is that it will obviously be much faster, but I'm worried about the implications of a large UPDATE query when the database is big. Will requests that occur during the query be affected?
Afraid it's not that simple. Even if you enable dirty reads, running one big update has a lot of drawbacks:
A long-running transaction that updates one column will effectively block other insert, update and delete transactions.
A long-running transaction causes an enormous load on disk, because the server has to write everything that takes place to a log file so that you can roll back that huge transaction.
If the transaction fails, you would have to rerun it entirely; it is not restartable.
So if the "simultaneous" requirement can be interpreted as "in one batch that may take a while to run", I would opt for batching. A good research write-up on the performance of DELETEs in MySQL is here: http://mysql.rjweb.org/doc.php/deletebig, and I think most of the findings are applicable to UPDATE.
The trick will be finding the optimal "batch size".
An added benefit of batching is that you can make the process resilient to failures and restart-friendly.
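A sketch of one such batch (table and column names are hypothetical); run it repeatedly until it reports zero rows affected:

UPDATE user_tokens
SET token = NULL
WHERE token IS NOT NULL
LIMIT 1000;

Each pass holds its locks only briefly, and if the process dies partway through you can simply start it again; rows that were already cleared are skipped by the WHERE clause. An index on token helps each pass find the remaining rows quickly.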
The answer depends on the transaction and isolation level you've established.
You can set isolation to allow "dirty reads", "phantom reads", or force serialization of reads and writes.
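For instance, to allow dirty reads for the current connection in MySQL:

SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;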
However you do that UPDATE, you'll want it to be a single unit of work.
I'd recommend minimizing network latency and updating all the user tokens in one network roundtrip. This means either writing a single query or batching many into one request.
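In the simplest case that is a single statement (names are hypothetical):

UPDATE user_tokens SET token = NULL;

One statement, one round trip, and one unit of work as far as the server is concerned.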
I have a table in a SQL Server database in which I am recording the latest activity time of users. Can somebody please confirm that SQL Server will automatically handle the scenario where multiple update requests are received simultaneously for different users? I am expecting 25-50 concurrent update requests on this table, but each request is responsible for updating different rows in the table. Do I need something extra, like connection pooling?
Yes, SQL Server will handle this scenario.
It is a DBMS, and it is built for scenarios like this one.
When you insert/update/delete a row, SQL Server will lock the table/row/page to guarantee that you will be able to do what you want. This lock will be released when you are done inserting/updating/deleting the row.
Check this Link
And introduction-to-locking-in-sql-server
But there are a few things you should do:
1 - Make sure you do whatever you need to do quickly. Because of the locking, if you stay connected for too long, other requests to the same table may be blocked until you are done, and this can lead to a timeout.
2 - Always use a transaction (a short sketch follows this list).
3 - Make sure to adjust the fill factor of your indexes. Check Fill Factor on MSDN.
4 - Adjust the Isolation level according to what you want.
5 - Get rid of unused indexes to speed up your insert/update.
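For points 2 and 4, a minimal T-SQL sketch (the table and column names are made up for the example):

SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
BEGIN TRANSACTION;
UPDATE dbo.UserActivity
SET LastActivity = SYSUTCDATETIME()
WHERE UserId = 12345;
COMMIT TRANSACTION;

The transaction is short, so any locks it takes are released almost immediately.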
Connection pooling is not really related to your question. Connection pooling is a technique that avoids the extra overhead of creating a new connection to the database every time you send a request. In C# and other languages that use ADO.NET this is done automatically. Check this out: SQL Server Connection Pooling.
Other links that may be useful:
best-practices-for-inserting-updating-large-amount-of-data-in-sql-2008
Speed Up Insert Performance