Does with(updlock) reduce deadlocks in select queries? - sql-server-2008

We have tables that are written to and read from simultaneously in our SQL Server 2008 DB (normal isolation levels).
One colleague was wondering if the query hint with(updlock) on the select queries against that table would reduce deadlocks, but I am not quite sure what to make of this.
I am thinking that if a normal shared read lock can cause a timeout, then surely an update lock would cause a deadlock as well in that situation?! Or am I missing something?
Thanks all!

An update (U) lock is not compatible with a shared (S) lock, so basically fewer SELECT statements could run simultaneously under certain circumstances. I believe this would not help your problem.
Have you considered turning on the Read Committed Snapshot (RCSI) database option? This is something you would want to test in your test environment first. It brings some overhead on tempdb for version storage, but your database throughput should get higher thanks to the optimistic locking of RCSI.
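For reference, a minimal sketch of enabling RCSI, assuming a placeholder database name `YourDb` (the ALTER needs exclusive access, hence the WITH ROLLBACK IMMEDIATE clause):

```sql
-- Enable Read Committed Snapshot Isolation (RCSI) for a database.
-- "YourDb" is a placeholder name; WITH ROLLBACK IMMEDIATE rolls back
-- and disconnects other sessions so the change can take effect.
ALTER DATABASE YourDb
    SET READ_COMMITTED_SNAPSHOT ON
    WITH ROLLBACK IMMEDIATE;

-- Verify the setting:
SELECT name, is_read_committed_snapshot_on
FROM sys.databases
WHERE name = 'YourDb';
```

Under RCSI, readers see the last committed version of a row instead of blocking on writers' locks, which is why throughput typically improves at the cost of tempdb version-store overhead.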

Related

MySQL queries waiting for other queries to finish

We have 30-40 different projects in Python and PHP that update, insert and select more than 1 million rows of data in MySQL DB every day.
Currently we use InnoDB Engine for our tables.
The problem: we have peaks in MySQL when almost all projects are working and lots of queries are being processed by the DB. There are main queries that are very important to finish ASAP (high priority) and queries that can wait until the main queries finish (lower priority).
But because they hit MySQL concurrently, the main queries end up waiting for the lower-priority queries to finish.
Questions:
Is there any way to release all locks on the tables before executing the main queries (so they can finish ASAP), or to create locks if that would help?
Can we automatically pause execution of the lower-priority queries when the main queries start executing?
Can using HIGH_PRIORITY and LOW_PRIORITY in queries help?
Are there any MySQL configuration settings that can help?
Can changing tables to MyISAM or another storage engine help?
Let me know your thoughts and ideas.
No. You might try upgrading to MySQL 5.7, as it allows transactions that do not interfere with each other to be replicated in parallel.
See http://dev.mysql.com/doc/refman/5.7/en/lock-tables.html for how LOW_PRIORITY has no effect.
See #2.
It would probably be better to look at how you are doing your locking in your application: are you locking rows, making changes, and unlocking quickly, or does the code do this in a leisurely fashion?
MyISAM locks at the table level, not the row level, and MyISAM does not support transactions (which is probably why you are locking records yourself).
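If MyISAM table-level locking turns out to be the bottleneck, a hedged sketch of converting a table to InnoDB (the table name `orders` is hypothetical; the conversion rewrites the whole table, so do it in a maintenance window):

```sql
-- Check the current storage engine (table name is a placeholder):
SHOW TABLE STATUS LIKE 'orders';

-- Convert to InnoDB to get row-level locking and transactions.
-- This rebuilds the table and blocks writes while it runs.
ALTER TABLE orders ENGINE = InnoDB;
```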
It's hard to give a definitive answer without seeing the locking queries.
If you could add them, that would be more useful.
Several things you can look into:
Look for locking statements such as SELECT ... FOR UPDATE, INSERT ... ON DUPLICATE KEY UPDATE, etc.
Many times it's better to catch an exception on the application side than to let the DB do extra work.
The read concurrency: it could be that READ COMMITTED is enough for you, and it takes fewer locks.
If you have replication, dedicate instances according to usage (e.g. a critical-only server).
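A minimal sketch of the isolation-level and locking-read points above, assuming hypothetical `accounts` table and column names:

```sql
-- Use READ COMMITTED for this session; on InnoDB it takes fewer
-- gap locks than the default REPEATABLE READ level.
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;

-- An example of a locking read to watch for: rows selected
-- FOR UPDATE stay exclusively locked until COMMIT, so keep
-- such transactions short (names here are placeholders).
START TRANSACTION;
SELECT balance FROM accounts WHERE id = 42 FOR UPDATE;
UPDATE accounts SET balance = balance - 10 WHERE id = 42;
COMMIT;
```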
Regards
Jony
Look through your high-priority queries and ensure they are well written and have/use appropriate indexes.
Look at the other queries using the same tables as the high-priority queries and optimize them the same way.
With better queries/indexes, less CPU/RAM is used and there are fewer implicit locks on rows, maximising the chance that all queries will be quick.
Query-tuning help is available on the DBA site; however, more information will be needed.
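The indexing advice above could be sketched as follows; the table, columns, and index name are hypothetical:

```sql
-- Check how a high-priority query is executed:
EXPLAIN SELECT *
FROM orders
WHERE customer_id = 123 AND status = 'open';

-- If the plan shows a full table scan ("type: ALL"), add an index
-- on the filtered columns so the query touches (and locks) fewer rows:
CREATE INDEX idx_orders_customer_status ON orders (customer_id, status);
```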

Mysql Lock times in slow query log

I have an application that has been running fine for quite a while, but recently a couple of items have started popping up in the slow query log.
All the queries are complex, ugly multi-join SELECT statements that could use refactoring. I believe all of them include BLOBs, meaning they get written to disk. The part that makes me curious is why some of them have a lock time associated with them. None of the queries have any specific locking protocols set by the application. As far as I know, by default you can read against locks unless explicitly specified otherwise.
So my question: what scenarios would cause a SELECT statement to have to wait for a lock (and thereby be reported in the slow query log)? Assume both InnoDB and MyISAM environments.
Could the disk interaction be listed as some sort of lock time? If yes, is there documentation around that says this?
Thanks in advance.
MyISAM will give you concurrency problems: an entire table is completely locked while an insert is in progress.
InnoDB should have no problems with reads, even while a write/transaction is in progress, thanks to its MVCC.
However, just because a query shows up in the slow-query log doesn't mean the query itself is slow: how many seconds does it take, and how many records are being examined?
Put "EXPLAIN" in front of the query to get a breakdown of the examinations going on for the query.
Here's a good resource for learning about EXPLAIN (outside of the excellent MySQL documentation about it).
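A minimal sketch of using EXPLAIN on one of the slow queries; the tables and columns here are placeholders:

```sql
-- EXPLAIN reports the join order, access type, and estimated
-- number of rows examined for each table in the query:
EXPLAIN
SELECT o.id, c.name
FROM orders o
JOIN customers c ON c.id = o.customer_id
WHERE o.created_at > '2012-01-01';
-- Watch the "type" column: ALL means a full table scan, and a
-- large "rows" estimate suggests a missing or unused index.
```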
I'm not certain about MySQL, but I know that in SQL Server SELECT statements do NOT read past locks by default. Doing so allows you to read uncommitted data, and potentially to see duplicate records or miss a record entirely. The reason is that if another process is writing to the table, the database engine may decide it's time to reorganize some data and shift it around on disk. So it moves a record you already read to the end and you see it again, or it moves one from the end up higher, to a spot you've already passed.
There's a guy on the net somewhere who actually wrote a couple of scripts to prove that this happens. I tried them once, and it only took a few seconds before a duplicate showed up. Of course, he designed the scripts in a fashion that made it more likely to happen, but it proves that it definitely can happen.
This is okay behaviour if your data doesn't need to be accurate, and it can certainly help prevent deadlocks. However, if you're working on an application dealing with something like people's money, then that's very bad.
In SQL Server you can use the WITH (NOLOCK) hint to tell your SELECT statement to ignore locks. I'm not sure what the equivalent in MySQL would be, but maybe someone else here will say.
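For illustration, a hedged sketch of the SQL Server hint mentioned above (the table name is a placeholder; remember that this permits dirty reads):

```sql
-- Read without taking shared locks; uncommitted ("dirty") rows may
-- be returned, and rows can be seen twice or skipped entirely.
SELECT id, status
FROM dbo.Orders WITH (NOLOCK)
WHERE status = 'open';

-- The rough MySQL/InnoDB analogue is a session-level setting:
-- SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
```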

Avoiding deadlock by using NOLOCK hint

Once in a while I get the following error in the production environment, which goes away on running the same stored procedure again.
Transaction (Process ID 86) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction
Someone told me that if I use the NOLOCK hint in my stored procedures, it will ensure they are never deadlocked. Is this correct? Are there any better ways of handling this error?
Occasional deadlocks on an RDBMS that locks like SQL Server/Sybase are expected.
You can code the client to retry, as recommended by MSDN's "Handling Deadlocks".
Basically, examine the SqlException and, maybe half a second later, try again.
Otherwise, you should review your code so that all access to tables are in the same order. Or you can use SET DEADLOCK_PRIORITY to control who becomes a victim.
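A hedged sketch of the retry advice in T-SQL, assuming a hypothetical procedure `dbo.usp_DoWork` (deadlock victims receive error number 1205):

```sql
-- Retry a deadlocked call up to 3 times; 1205 is the
-- deadlock-victim error number in SQL Server.
DECLARE @retries int = 3;
DECLARE @done bit = 0;
WHILE @retries > 0 AND @done = 0
BEGIN
    BEGIN TRY
        EXEC dbo.usp_DoWork;   -- hypothetical procedure
        SET @done = 1;          -- success: stop looping
    END TRY
    BEGIN CATCH
        IF ERROR_NUMBER() = 1205 AND @retries > 1
        BEGIN
            SET @retries = @retries - 1;
            WAITFOR DELAY '00:00:00.500';  -- back off half a second
        END
        ELSE
        BEGIN
            -- Not a deadlock, or out of retries: re-raise the error.
            -- (On SQL Server 2012+ you could simply use THROW.)
            DECLARE @msg nvarchar(2048) = ERROR_MESSAGE();
            RAISERROR(@msg, 16, 1);
            SET @done = 1;
        END
    END CATCH
END
```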
On MSDN for SQL Server there is "Minimizing Deadlocks" which starts
Although deadlocks cannot be completely avoided
This also mentions "Use a Lower Isolation Level", which I don't like (as do many SQL types here on SO) and which is what you're asking about. "Don't do it" is the answer... :-)
What can happen as a result of using (nolock) on every SELECT in SQL Server?
https://dba.stackexchange.com/q/2684/630
Note: MVCC-type RDBMSs (Oracle, Postgres) don't have this problem. See http://en.wikipedia.org/wiki/ACID#Locking_vs_multiversioning, but MVCC has other issues.
While adding NOLOCK can prevent readers and writers from blocking each other (never mind all of the negative side effects it has), it is not a magical fix for deadlocks. Many deadlocks have nothing at all to do with reading data, so applying NOLOCK to your read queries might not cause anything to change at all. Have you run a trace and examined the deadlock graph to see exactly what the deadlock is? This should at least let you know which part of the code to look at. For example, is the stored procedure deadlocking because it is being called by multiple users concurrently, or is it deadlocking with a different piece of code?
Here is a good link on learning to troubleshoot deadlocks. I always try to avoid using NOLOCK for the reasons above. Also, you might want to better understand lock compatibility.
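To capture the deadlock graph mentioned above without running a full trace, a hedged sketch using a trace flag (requires sysadmin rights; the graph is written to the SQL Server error log):

```sql
-- Write deadlock graph details to the SQL Server error log for all
-- sessions (-1 = global). Trace flag 1222 produces an XML-style
-- report naming the statements and resources involved.
DBCC TRACEON (1222, -1);

-- Later, inspect the error log for the deadlock report:
EXEC sys.sp_readerrorlog;
```

Once you can see which statements and objects are involved, you can decide whether the fix is consistent access order, an index, or a different isolation level.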

MySQL Replication

We have an update process which currently takes over an hour and means that our DB is unusable during this period.
If I set up replication, would this solve the problem, or would the replicated DB suffer from exactly the same problem, with the tables locked during the update?
Is it possible to have the replicated DB prioritize reading over updating?
Thanks,
D
I suspect that with replication you're just going to be duplicating the issue (unless most of the time is spent on CPU and only a couple of records end up being updated).
Without knowing a lot more about the schema, the distribution and size of the data, and the update process, it's impossible to say how best to resolve the problem; but you might get some mileage out of using InnoDB instead of MyISAM and making sure that the update is implemented as a number of discrete steps (e.g. using stored procedures) rather than a single DML statement.
MySQL gives you the ability to run queries delayed. Example: "INSERT DELAYED INTO ...", which causes the query to be executed only when MySQL has time to take it. (Note that DELAYED only works with MyISAM-style engines, not InnoDB.)
Based on your input, it sounds like you are using MyISAM tables. MyISAM only supports table-wide locking: a single update will lock the whole table until the query completes. InnoDB, on the other hand, uses row locking, which will not cause SELECT queries to wait (hang) for updates to complete.
So you have the best chance of a better sysadmin life if you change to InnoDB :)
When it comes to replication, it is pretty normal to separate updates and selects onto two different MySQL servers, and that does tend to work very well. But if you are using MyISAM tables and do a lot of updates, the locking issue itself will still be there.
So my 2 cents: first get rid of MyISAM, then consider replication or a better-scaled MySQL server if the problem still exists. (The key to good performance in MySQL is to have at least the size of all indexes across all databases as physical RAM.)
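A hedged sketch of finding which tables are still on MyISAM before converting them; the schema name `mydb` is a placeholder:

```sql
-- List MyISAM tables in one schema so they can be converted:
SELECT table_name, engine, table_rows
FROM information_schema.tables
WHERE table_schema = 'mydb'   -- placeholder schema name
  AND engine = 'MyISAM';

-- Convert each one (this rewrites the table, so plan a
-- maintenance window):
-- ALTER TABLE mydb.some_table ENGINE = InnoDB;
```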

When will a SELECT statement without FOR UPDATE cause a lock?

I'm using MySQL.
I sometimes see a SELECT statement whose status is 'Locked' when running SHOW PROCESSLIST,
but after testing it locally, I can't reproduce the 'Locked' status.
It probably depends on what else is happening. I'm no MySQL expert, but in SQL Server various lock levels control when data can be read and written. For example, in production your SELECT statement might want to read a record that is being updated; it has to wait until the update is done. And vice versa: an update might have to wait for a read to finish.
Messing with default lock levels is dangerous. And since dev environments don't have nearly as much traffic, you probably don't see that kind of contention.
If you spot that again, see if any update is being made against one of the tables your SELECT is referencing.
I'm no expert in MySQL, but it sounds like another user is holding a lock against a table/field while you're trying to read it.
I'm no MySQL expert either, but locking behavior strongly depends on the isolation level / transaction isolation. I would suggest searching for those terms in the MySQL docs.
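A hedged sketch of what to check the next time the 'Locked' status appears (the isolation-level variable name shown is the MySQL 5.x one):

```sql
-- See all running statements and their state (look for "Locked"):
SHOW FULL PROCESSLIST;

-- Check the session's isolation level (tx_isolation on MySQL 5.x;
-- renamed transaction_isolation in later versions):
SELECT @@session.tx_isolation;

-- For InnoDB, the TRANSACTIONS section of the engine status shows
-- which transactions are waiting and which locks they wait for:
SHOW ENGINE INNODB STATUS;
```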