This might be a dumb question; I tried to search for it and found nothing.
I've been using MySQL for years (not all that long, really), but I've never tried MySQL transactions.
Now my question is: what would happen if I issue an INSERT or DELETE statement from multiple clients using transactions? Would it lock the table and prevent the other clients from performing their queries?
And what would happen if one client issues a query in a transaction while another client still has an unfinished transaction?
I'd appreciate any help.
P.S. Most likely I will be inserting from a file or CSV; it could be a big chunk of data or just a small one.
MySQL automatically performs locking for single SQL statements to keep clients from interfering with each other, but this is not always sufficient to guarantee that a database operation achieves its intended result, because some operations are performed over the course of several statements. In this case, different clients might interfere with each other.
Source: http://www.informit.com/articles/article.aspx?p=2036581&seqNum=12
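To make that concrete, here is a rough sketch of a multi-statement insert wrapped in a transaction (InnoDB; the imports table and its columns are made up for illustration). With InnoDB, a second client running its own transaction at the same time generally only waits if it touches the same rows, because InnoDB locks rows rather than the whole table:
-- Client 1
START TRANSACTION;
INSERT INTO imports (batch_id, line) VALUES (1, 'row 1 from the CSV');
INSERT INTO imports (batch_id, line) VALUES (1, 'row 2 from the CSV');
COMMIT;

-- Client 2, issued at the same time, generally proceeds without waiting
-- because it inserts different rows:
START TRANSACTION;
INSERT INTO imports (batch_id, line) VALUES (2, 'row 1 from another CSV');
COMMIT;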
I have a quick question that I can't seem to find answered online; I'm not sure whether I'm using the right wording.
Do MySQL databases automatically synchronize queries coming in at around the same time? For example, if I send a query to insert something into a database at the same time another connection sends a query to select something from the database, does MySQL automatically lock the database while the insert is happening, and then unlock it when it's done, allowing the select query to access it?
Thanks
Do MySql databases automatically synchronize queries coming in at around the same time?
Yes.
Think of it this way: there's no such thing as simultaneous queries. MySQL always carries out one of them first, then the second one. (This isn't exactly true; the server is far more complex than that. But it robustly provides the illusion of sequential queries to us users.)
If, from one connection you issue a single INSERT query or a single UPDATE query, and from another connection you issue a SELECT, your SELECT will get consistent results. Those results will reflect the state of data either before or after the change, depending on which query went first.
You can even do stuff like this (read-modify-write operations) and maintain consistency.
-- my_table and id = 42 are placeholders; the increment happens atomically inside the single statement
UPDATE my_table
SET update_count = update_count + 1,
    update_time = NOW()
WHERE id = 42;
If you must do several INSERT or UPDATE operations as if they were one, you'll need to use the InnoDB engine, and you'll need to use transactions. While the transaction is in progress, its row locks block conflicting writes (and locking reads), though ordinary SELECTs still see a consistent snapshot. Teaching you to use transactions is beyond the scope of a Stack Overflow answer.
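For what it's worth, a minimal sketch of the idea (the table and column names are invented for illustration):
START TRANSACTION;
-- Both changes become visible to other connections together, or not at all.
INSERT INTO orders (customer_id, total) VALUES (42, 99.95);
UPDATE customers SET order_count = order_count + 1 WHERE id = 42;
COMMIT;   -- or ROLLBACK; to undo both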
The key to understanding how a modern database engine like InnoDB works is Multi-Version Concurrency Control or MVCC. This is how simultaneous operations can run in parallel and then get reconciled into a consistent "view" of the database when fully committed.
If you've ever used Git you know how you can have several updates to the same base happening in parallel but so long as they can all cleanly merge together there's no conflict. The database works like that as well, where you can begin a transaction, apply a bunch of operations, and commit it. Should those apply without conflict the commit is successful. If there's trouble the transaction is rolled back as if it never happened.
This ability to juggle multiple operations simultaneously is what makes a transaction-capable database engine really powerful. It's an important component necessary to meet the ACID standard.
MyISAM, the older engine inherited from early versions of MySQL, doesn't have any of these features and locks the whole table on any write operation to avoid conflicts. It works like you thought it did.
When creating a database in MySQL you have your choice of engine, but using InnoDB should be your default. There's really no reason at all to use MyISAM as any of the interesting features of that engine (e.g. full-text indexes) have been ported over to InnoDB.
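For example, the engine is just a clause on CREATE TABLE (InnoDB is already the default in MySQL 5.5 and later, so spelling it out is mostly documentation; the table here is hypothetical):
CREATE TABLE messages (
    id   BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    body TEXT,
    FULLTEXT KEY ft_body (body)   -- full-text indexes work on InnoDB since MySQL 5.6
) ENGINE=InnoDB;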
I've just read a mysql docs where I found such sentence: "A consistent read means that InnoDB uses multi-versioning to present to a query a snapshot of the database at a point in time."
I have read a lot of MySQL doc pages, but still can't work out what exactly "to a query" means here. It definitely relates to a SELECT statement, but what about when my transaction starts with an UPDATE, INSERT, or DELETE statement?
Thanks!
I finally found an answer on my own, and I think it may help others too. After days of searching through the Oracle docs, I found this:
InnoDB creates a consistent read view or a consistent snapshot either when the statement
mysql> START TRANSACTION WITH CONSISTENT SNAPSHOT;
is executed or when the first select query is executed in the transaction.
https://blogs.oracle.com/mysqlinnodb/entry/repeatable_read_isolation_level_in
When the query can change the data, the database also uses locks to synchronise queries.
So between queries that change data, locks are used to make sure that only one query at a time can change specific items. Between a query that reads data and a query that changes data, multi-versioning is used to present the data before the change to the query that reads it.
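A rough illustration of how that plays out between two connections under InnoDB's default REPEATABLE READ isolation (the accounts table and values are hypothetical):
-- Session A
START TRANSACTION;
SELECT balance FROM accounts WHERE id = 1;   -- returns 100; the snapshot is established

-- Session B, a different connection, meanwhile:
UPDATE accounts SET balance = 200 WHERE id = 1;   -- not blocked by A's plain SELECT

-- Session A again
SELECT balance FROM accounts WHERE id = 1;   -- still 100: a consistent read from the snapshot
COMMIT;
SELECT balance FROM accounts WHERE id = 1;   -- now 200; B's committed change is visible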
I have a server which sends up to 20 UPDATE statements to a separate MySQL server every 3-5 seconds for a game. My question is: is it faster to concatenate them together (UPDATE; UPDATE; UPDATE)? Is it faster to do them in a transaction and then commit the transaction? Is it faster to just do each UPDATE individually?
Any insight would be appreciated!
It sort of depends on how the server connects. If the connection between the servers is persistent, you probably won't see a great deal of difference between concatenated statements or multiple separate statements.
However, if the execution involves establishing the connection, executing the SQL statement, then tearing down the connection, you will save a lot of resources on the database server by executing multiple statements at a time. The process of establishing the connection tends to be an expensive and time-consuming one, and has the added overhead of DNS resolution since the machines are separate.
It makes the most logical sense to me to establish the connection, begin a transaction, execute the statements individually, commit the transaction and disconnect from the database server. Whether you send all the UPDATE statements as a single concatenation or multiple individual statements is probably not going to make a big difference in this scenario, especially if this just involves regular communication between these two servers and you need not expect it to scale up with user load, for example.
The use of the transaction assumes that your 3-5 second periodic bursts of UPDATE statements are logically related somehow. If they are not interdependent, then you could skip the transaction saving some resources.
As with any question regarding performance, the best answer is if your current system is meeting your performance and scaling needs, you ought not pay too much attention to micro-optimizing it just yet.
It is always faster to wrap these UPDATEs in a single transaction block.
The price for this is that if anything fails inside that block, the result is that nothing happened at all - you will have to repeat your work.
Also, keep in mind that transactions in MySQL only work when using the InnoDB engine.
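A minimal sketch of what that looks like (the players table and columns stand in for the game's own schema):
-- One atomic unit: all 20 updates land together or not at all, and the server
-- typically does a single log flush at COMMIT instead of one per autocommitted statement.
START TRANSACTION;
UPDATE players SET score = score + 10 WHERE player_id = 1;
UPDATE players SET score = score + 25 WHERE player_id = 2;
-- ... up to ~20 such statements ...
COMMIT;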
I am experiencing what appears to be the effects of a race condition in an application I am involved with. The situation is as follows: generally, a page responsible for some heavy application logic follows this format:
Select from test and determine if there are rows already matching a clause.
If a matching row already exists, we terminate here, otherwise we proceed with the application logic
Insert into the test table with values that will match our initial select.
Normally, this works fine and limits the action to a single execution. However, under high load and user-abuse where many requests are intentionally sent simultaneously, MySQL allows many instances of the application logic to run, bypassing the restriction from the select clause.
It seems to actually run something like:
select from test
select from test
select from test
(all of which pass the check)
insert into test
insert into test
insert into test
I believe this is done for efficiency reasons, but it has serious ramifications in the context of my application. I have attempted to use Get_Lock() and Release_Lock() but this does not appear to suffice under high load as the race condition still appears to be present. Transactions are also not a possibility as the application logic is very heavy and all tables involved are not transaction-capable.
To anyone familiar with this behavior, is it possible to turn this type of handling off so that MySQL always processes queries in the order in which they are received? Is there another way to make such queries atomic? Any help with this matter would be appreciated, I can't find much documented about this behavior.
The problem here is that you have, as you surmised, a race condition.
The SELECT and the INSERT need to be one atomic unit.
The way you do this is via transactions. You cannot safely make the SELECT, return to PHP, and assume the SELECT's results will reflect the database state when you make the INSERT.
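A hedged sketch of that transactional pattern (InnoDB only; the test table's clause_value column is a stand-in for whatever your matching clause checks):
START TRANSACTION;
SELECT id FROM test WHERE clause_value = 'some-key' FOR UPDATE;   -- locking read: held until COMMIT
-- Only if the SELECT returned no row:
INSERT INTO test (clause_value, created_at) VALUES ('some-key', NOW());
COMMIT;
-- Caveat: two sessions racing over the same missing row can deadlock here;
-- InnoDB rolls one of them back and that session should simply retry.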
If well-designed transactions (the correct solution) are, as you say, not possible - and I still strongly recommend them - you're going to have to make the final INSERT atomically check whether its assumptions still hold (for example via an INSERT that only succeeds when no matching row exists, a stored procedure, or catching the INSERT's duplicate-key error in the application). If they don't hold, it aborts back to your PHP code, which must start the logic over.
By the way, MySQL likely is executing requests in the order they were received. It's possible with multiple simultaneous connections to receive SELECT A, SELECT B, INSERT A, INSERT B. Thus, the only "solution" would be to only allow one connection at a time - and that will kill your scalability dead.
Personally, I would go about the check another way.
Attempt to insert the row. If it fails, then there was already a row there.
In this manner, you check for a duplicate and insert the new row in a single query, eliminating the possibility of races.
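As a sketch (assuming a UNIQUE index can be added on the column your clause checks; the names are hypothetical):
ALTER TABLE test ADD UNIQUE KEY uq_test_clause (clause_value);

-- Later, from the application: the duplicate check and the insert are now one atomic operation.
INSERT INTO test (clause_value, created_at) VALUES ('some-key', NOW());
-- 1 row affected             -> this request was first; run the heavy logic.
-- ERROR 1062 (duplicate key) -> another request got there first; stop here.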
I have an application that has been running fine for quite awhile, but recently a couple of items have started popping up in the slow query log.
All the queries are complex and ugly multi-join SELECT statements that could use refactoring. I believe all of them involve BLOBs, meaning they get written to disk. The part that gets me curious is why some of them have a lock time associated with them. None of the queries have any specific locking protocols set by the application. As far as I know, by default you can read despite locks unless explicitly specified otherwise.
So my question: what scenarios would cause a SELECT statement to have to wait for a lock (and thereby be reported in the slow query log)? Assume both InnoDB and MyISAM environments.
Could the disk interaction be listed as some sort of lock time? If yes, is there documentation that says so?
Thanks in advance.
MyISAM will give you concurrency problems: the entire table is locked while an insert is in progress.
InnoDB should have no problem with reads, even while a write/transaction is in progress, thanks to its MVCC.
However, just because a query is showing up in the slow-query log doesn't mean the query is slow - how many seconds, how many records are being examined?
Put "EXPLAIN" in front of the query to get a breakdown of the examinations going on for the query.
Here's a good resource for learning about EXPLAIN (outside of the excellent MySQL documentation about it).
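For example (the tables here are hypothetical), prefixing the query shows the join order, chosen indexes, and estimated rows examined without actually running it:
EXPLAIN
SELECT o.id, c.name
FROM orders o
JOIN customers c ON c.id = o.customer_id
WHERE o.status = 'open';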
I'm not certain about MySQL, but I know that in SQL Server, SELECT statements do NOT read past locks. Doing so would allow you to read uncommitted data, and potentially see duplicate records or miss a record entirely. The reason is that if another process is writing to the table, the database engine may decide it's time to reorganize some data and shift it around on disk. So it moves a record you already read to the end and you see it again, or it moves one from the end up higher to where you've already passed.
There's a guy on the net somewhere who actually wrote a couple of scripts to prove that this happens and I tried them once and it only took a few seconds before a duplicate showed up. Of course, he designed the scripts in a fashion that would make it more likely to happen, but it proves that it definitely can happen.
This is okay behaviour if your data doesn't need to be accurate and can certainly help prevent deadlocks. However, if you're working on an application dealing with something like people's money then that's very bad.
In SQL Server you can use the WITH (NOLOCK) hint to tell your SELECT statement to ignore locks. I'm not sure what the equivalent in MySQL would be, but maybe someone else here will say.
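For what it's worth, the closest MySQL/InnoDB analogue I know of is the READ UNCOMMITTED isolation level, which likewise permits dirty reads:
SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT * FROM orders;   -- may see changes other sessions have not yet committed
SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ;   -- back to the InnoDB default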