MySQL: how to ensure that one query is run right after another

I have problems with concurrent connections. How do I ensure that one query is run right after another, without queries from other connections coming in between? I'll probably need some kind of locking, but what kind? Or transactions?

Run the two queries at the SERIALIZABLE isolation level - it guarantees that the result of the two queries is exactly the same as if they were the only two queries running, by locking every record they access, but without locking the rest of the table(s).
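For example, a minimal sketch (the accounts table and its columns are hypothetical):
SET SESSION TRANSACTION ISOLATION LEVEL SERIALIZABLE;
START TRANSACTION;
-- the two queries that must not interleave with other connections' writes:
SELECT balance FROM accounts WHERE id = 1;
UPDATE accounts SET balance = balance - 10 WHERE id = 1;
COMMIT;
SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ; -- restore the default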

If you are using the InnoDB storage engine, you can use a transaction to achieve what you want. Just execute BEGIN before both queries and COMMIT after them.
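A minimal sketch of that (same hypothetical accounts table as above):
BEGIN;
UPDATE accounts SET balance = balance - 10 WHERE id = 1;
UPDATE accounts SET balance = balance + 10 WHERE id = 2;
COMMIT;
-- InnoDB holds the row locks taken by both UPDATEs until the COMMIT,
-- so no other connection can modify those rows between the two queries.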

You can use LOCK TABLES to prevent other connections from updating that table. See:
http://dev.mysql.com/doc/refman/5.0/en/lock-tables.html
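For example (my_table stands in for your table name):
LOCK TABLES my_table WRITE; -- other connections now block on this table
-- run the two queries here
UNLOCK TABLES;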

Related

MySQL queries waiting for other queries to finish

We have 30-40 different projects in Python and PHP that update, insert and select more than 1 million rows of data in a MySQL DB every day.
Currently we use the InnoDB engine for our tables.
The problem: we have peaks in MySQL when almost all projects are working and lots of queries are being processed in the DB. There are main queries that are very important to finish ASAP (high priority) and queries that can wait for the main queries to finish (lower priority).
But since they hit MySQL concurrently, the main queries end up waiting for the lower-priority queries to finish.
Questions:
Is there any way to release all locks on tables before executing the main queries (so they can finish ASAP)? Or to create locks, if that would help?
Can we automatically pause execution of the lower-priority queries when the main queries start?
Can using HIGH_PRIORITY and LOW_PRIORITY in queries help?
Are there any MySQL configuration settings that can help?
Can switching the tables to MyISAM or another storage engine help?
Let me know your thoughts and ideas.
No. You might try upgrading to MySQL 5.7, as it allows parallel replication within tables if the transactions do not interfere with each other.
See http://dev.mysql.com/doc/refman/5.7/en/lock-tables.html for how LOW_PRIORITY has no effect.
See #2.
It would probably be better to look at how you are doing your locking in your application - are you locking rows, making changes, and unlocking quickly, or does the code do this in a leisurely fashion?
MyISAM locks at the table level, not the row level, and MyISAM does not support transactions (which is probably why you are locking records).
It's hard to give a definitive answer without seeing the locking queries.
If you could add them, it would be more useful.
Several things you can look into:
Look for locking statements such as SELECT ... FOR UPDATE, INSERT ... ON DUPLICATE KEY UPDATE, etc.
- many times it's better to catch an exception on the application side than to make the DB do extra work.
The read concurrency: it could be that READ COMMITTED is enough for you, and it takes fewer locks (see the sketch after this list).
If you have replication, dedicate instances according to usage (e.g. a server for critical queries only).
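Regarding the read-concurrency point, a minimal sketch of lowering the isolation level for a session (whether READ COMMITTED is safe enough depends on your application):
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
-- subsequent transactions in this session take fewer locks (no gap locks
-- for ordinary searches) and each statement reads the latest committed snapshot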
Look through your high-priority queries and ensure they are well written and have/use appropriate indexes.
Look at the other queries using the same tables as the high-priority queries and optimize them the same way.
With better queries/indexes, less CPU/RAM is used and fewer implicit row locks are taken, maximising the chance that all queries will be quick.
Query and tuning help is available on the DBA site, but more information would be needed.
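As an illustration of that check (the orders table, its columns and the index name are hypothetical):
EXPLAIN SELECT * FROM orders WHERE customer_id = 42;
-- if the "key" column of the EXPLAIN output is NULL, the query scans the whole table
ALTER TABLE orders ADD INDEX idx_customer_id (customer_id);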

How do I add millions of rows to a live production MySQL table?

I'm looking to add about 7 million rows to a live production database table that gets 1-2 writes per second. Can I do this without locking the database for writes? I think so because the table uses InnoDB?
Are there other considerations or do I just write the insert statement and let it rip?
If you're using InnoDB, you don't need to do anything special.
Just run your inserts. InnoDB uses row-level locking for these situations, so it will not lock the entire table.
Of course your performance could still take a hit due to the parallel work.
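For illustration, a sketch with a hypothetical events table - batching many rows per INSERT statement is much faster than issuing single-row inserts, and InnoDB locks only the inserted rows:
INSERT INTO events (user_id, created_at) VALUES
    (1, NOW()),
    (2, NOW()),
    (3, NOW()); -- and so on, a few thousand rows per statement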
To answer your other question:
"One confusion about transactions: If I am working on transaction A and a stack of writes B come in, do those writes get processed after I commit my transaction"
In general, no. It will not need to wait. This does depend if you are working within the same keyspace or not, and also what isolation level you are working within.

Locking MySQL tables/rows

Can someone explain the need to lock tables and/or rows in MySQL?
I am assuming that it is to prevent multiple writes to the same field - is this the best practice?
First, let's look at a good document. It is not MySQL documentation - it's about PostgreSQL - but it's one of the simplest and clearest docs I've read on transactions, and you'll understand MySQL transactions better after reading it: http://www.postgresql.org/docs/8.4/static/mvcc.html
When you're running a transaction, 4 rules apply (ACID):
Atomicity: all or nothing (rollback)
Consistency: consistent before, consistent after
Isolation: not impacted by other concurrent transactions
Durability: once a commit is done, it's really done
Of these rules, only one is problematic: Isolation. Using a transaction does not ensure perfect isolation. The link above explains phantom reads and similar isolation problems between concurrent transactions. But to make it simple: you should use row-level locks to prevent other transactions, running at the same time as yours (and maybe committing before yours), from altering the same records. But with locks come deadlocks...
So when you try using transactions with locks, you'll need to handle deadlocks, and you'll need to handle the fact that a transaction can fail and should be relaunched (a simple for or while loop).
Edit:
Recent versions of InnoDB provide greater levels of isolation than previous ones. I've done some tests, and I must admit that even the phantom reads that should happen are now difficult to reproduce.
By default, MySQL is at level 3 of the 4 isolation levels explained in the PostgreSQL document (PostgreSQL defaults to level 2). This is REPEATABLE READ. It means you won't get dirty reads and you won't get non-repeatable reads: your SELECTs inside the transaction see a consistent snapshot, even if someone modifies the rows concurrently. Note, though, that a plain SELECT does not itself lock the rows; to make concurrent writers wait, you need an explicit SELECT ... FOR UPDATE.
Warning: if you work with an older version of MySQL, like 5.0, you may be at level 2, and you'll need to perform the row lock using the FOR UPDATE keywords!
You can always find some nice race conditions. When working with aggregate queries, it can be safer to be at the 4th isolation level (by using LOCK IN SHARE MODE at the end of your query) if you do not want people adding rows while you're performing some task. I've been able to reproduce one serializable-level problem, but I won't explain the complex example here; the race conditions are really tricky.
There is a very nice example of race conditions that even the serializable level cannot fix here: http://www.postgresql.org/docs/8.4/static/transaction-iso.html#MVCC-SERIALIZABILITY
When working with transactions, the most important things are:
data used in your transaction must always be read INSIDE the transaction (re-read it if you fetched it before the BEGIN);
understand why a high isolation level sets implicit locks and may block some other queries (and make them time out);
try to avoid deadlocks (lock tables in the same order), but handle them (retry a transaction aborted by MySQL);
try to freeze important source tables with the serializable isolation level (LOCK IN SHARE MODE) when your application code assumes that no insert or update should modify the dataset it is using (otherwise you won't get errors, but your results will have ignored the concurrent changes) - see the sketch after this list.
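A minimal sketch of that last point (the payments table and its columns are hypothetical):
START TRANSACTION;
-- shared-lock the rows being aggregated, so concurrent inserts or updates
-- affecting them wait until this transaction commits:
SELECT SUM(amount) FROM payments WHERE invoice_id = 7 LOCK IN SHARE MODE;
-- ... use the sum, knowing it cannot be invalidated mid-transaction ...
COMMIT;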
It is not a best practice. Modern versions of MySQL support transactions with well-defined semantics. Use transactions, and forget about locking things by hand.
The only new thing you'll have to deal with is that transaction commits may fail because of race conditions, but you'd be doing error checking with locks anyway, and it is easier to retry the logic that led to a transaction failure than to recover from errors in a non-transactional setup.
If you do get race conditions and failed commits, then you may want to fine-tune the isolation configuration for your transactions.
For example, suppose you need to generate invoice numbers that are sequential with no numbers missing - this is a requirement at least in the country I live in.
If you have a few web servers, then a few users might be buying stuff at literally the same time.
If you do SELECT MAX(invoice_id)+1 FROM invoice to get the new invoice number, two web servers might do that at the same time (before the new invoice has been added) and get the same invoice number for the invoices they're trying to create.
A mechanism such as AUTO_INCREMENT is only meant to generate unique values, and makes no guarantee about not skipping numbers (if one transaction tries to insert a row and then rolls back, the number is "lost").
So the solution is to (a) lock the table, (b) SELECT MAX(invoice_id)+1 FROM invoice, (c) do the insert, (d) commit and unlock the table.
On another note, in MySQL you're best off using InnoDB and row-level locking, because running a LOCK TABLES command can implicitly commit the transaction you're working on.
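Putting the two pieces of advice together, a sketch that serializes invoice-number generation with an InnoDB row lock instead of LOCK TABLES (the invoice table and its columns are hypothetical):
START TRANSACTION;
-- FOR UPDATE makes a concurrent session running the same statement
-- wait here until this transaction commits, so no duplicate numbers:
SELECT COALESCE(MAX(invoice_id), 0) + 1 INTO @next FROM invoice FOR UPDATE;
INSERT INTO invoice (invoice_id, amount) VALUES (@next, 99.90);
COMMIT;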
Take a look here for a general introduction to what transactions are and how to use them.
Databases are designed to work in concurrent environments, so locking the tables and/or records helps to keep the transactions consistent.
So a record affected by one transaction should not be altered until this transaction commits or rolls back.

Is MySQL InnoDB appropriate for this scenario?

My MySQL database contains multiple MyISAM tables, each containing millions of rows. There is a heavy insert load on the database, so I cannot issue SELECTs against the live database. Instead, I create a replica of the database for queries and conduct analysis on that.
For the analysis, I need to issue multiple parallel queries. The queries are independent (i.e., their results are not combined), but they operate on the same tables most of the time. As far as I know, the entire MyISAM table is locked for each query, which means parallel independent queries would be slow. Ideally, I would prefer an engine that supports "no locking". I am assuming MySQL doesn't have such an engine, so should I use InnoDB? I might be missing a lot of things here. Please suggest the right path to take.
MyISAM read locks are compatible, so the SELECT queries won't lock each other.
If your analysis queries on the replica database don't write, only read, then it's OK to use MyISAM.
You could stick with MyISAM and use INSERT DELAYED:
When a client uses INSERT DELAYED, it gets an okay from the server at once, and the row is queued to be inserted when the table is not in use by any other thread.
Another major benefit of using INSERT DELAYED is that inserts from many clients are bundled together and written in one block. This is much faster than performing many separate inserts.
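For example (the access_log table is hypothetical; note that INSERT DELAYED only works with MyISAM-style engines and was deprecated in later MySQL versions):
INSERT DELAYED INTO access_log (url, created_at) VALUES ('/index.html', NOW());
-- the client gets an OK immediately; the row is written
-- when the table is not in use by any other thread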

What does MySQL do if you attempt to update a table that is being queried?

I have a very slow query that I need to run on a MySQL database from time to time.
I've discovered that attempts to update the table that is being queried are blocked until the query has finished.
I guess this makes sense, as otherwise the results of the query might be inconsistent, but it's not ideal for me, as the query is of much lower importance than the update.
So my question really has two parts:
Out of curiosity, what exactly does MySQL do in this situation? Does it lock the table for the duration of the query, or only once the update arrives?
Is there a way to make the slow query non-blocking? I guess the options might be:
Kill the query when an update is needed.
Run the query on a copy of the table as it was just before the update took place.
Just let the query go wrong.
Anyone have any thoughts on this?
It sounds like you are using a MyISAM table, which uses table-level locking. In this case, the SELECT will set a shared lock on the table. The UPDATE will then request an exclusive lock, and block and wait until the SELECT is done. Once it is done, the UPDATE will run as normal.
MyISAM Locking
If you switch to InnoDB, your SELECT will set no locks by default. There is no need to change the transaction isolation level as others have recommended (REPEATABLE READ is the default for InnoDB, and no locks will be set for your SELECT); the UPDATE will be able to run at the same time. The multi-versioning that InnoDB uses is very similar to how Oracle handles the situation. The only times a SELECT will set locks are if you are running at the SERIALIZABLE transaction isolation level, you have a FOR UPDATE/LOCK IN SHARE MODE option on the query, or it is part of some sort of write statement (such as INSERT ... SELECT) and you are using statement-based binary logging.
InnoDB Locking
For the purposes of the select statement, you should probably issue a:
SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
command on the connection, which causes subsequent SELECT statements to operate without locking.
Don't use SELECT ... FOR UPDATE, as that definitely locks the rows affected by the SELECT statement.
The full list of MySQL transaction isolation levels is in the docs.
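A sketch of wrapping the slow query that way (big_table is hypothetical); remember to restore the default level afterwards:
SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT status, COUNT(*) FROM big_table GROUP BY status; -- the slow reporting query
SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ; -- back to the default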
First of all, you need to know which engine you're using (MyISAM or InnoDB).
This is clearly a transaction problem.
Take a look at section 13.4.6, SET TRANSACTION Syntax, in the MySQL manual.
UPDATE LOW_PRIORITY ... may be helpful - the MySQL docs aren't clear on whether this lets the user requesting the update continue while the update happens when it can (which is what I think happens), or whether the user has to wait (which would be worse than at present ...), and I can't remember.
What table types are you using? If you are on MyISAM, switching to InnoDB (if you can - it has no full-text indexing) opens up more options for this sort of thing, as it supports transactions and row-level locking.
I don't know MySQL, but it sounds like a transaction problem.
You should be able to set the transaction type to dirty read (READ UNCOMMITTED) for your SELECT query.
That won't necessarily give you correct results, but it shouldn't be blocked.
Better still would be to make the first query go faster: do some analysis and check whether you can speed it up with correct indexing and so on.