I am trying to run a query from our application through Spring 2.5 and Hibernate 3, but the query neither times out nor returns results (it just hangs). When I run the same query from the query browser, it works fine.
Even though I increased the query execution timeout, I am still not able to fetch any results.
<property name="javax.persistence.query.timeout" value="3000" />
<tx:advice id="defaultTxAdvice" transaction-manager="transactionManager">
    <tx:attributes>
        <!-- Keep SequenceService in an isolated transaction -->
        <tx:method name="get*" read-only="true" />
        <!-- By default, a runtime exception will roll back the transaction. -->
        <tx:method name="*" timeout="100" rollback-for="ApplicationException" />
    </tx:attributes>
</tx:advice>
Any kind of help or suggestion would be greatly appreciated.
These are my thoughts about the issue, since I don't see any relevant code here:
Execution of multiple queries in a single fetch (issues like the N+1 problem in Hibernate). Hint: make sure you set the show SQL value to true on your developer machine so you can see what is actually executed.
Irregular handling of queries in the code, e.g. issuing the same query in a loop n times. The database server may treat this as a threat, move into a locked state, and stop executing queries for some period of time.
Improper handling of the transaction timeout: if you increase the transaction timeout value, it will probably lock the database for a long time.
There are also other possibilities, but I can't make any further suggestions unless I see the code.
Software: Django 2.1.0, Python 3.7.1, MariaDB 10.3.8, Ubuntu 18.04 LTS
We recently added some load to a new application and started observing lots of deadlocks. After a lot of digging, I found out that the Django select_for_update query results in SQL with several subqueries (3 or 4). In all the deadlocks I've seen so far, at least one of the transactions involves this SQL with multiple subqueries.
My question is: does select_for_update lock records from every table involved? In my case, would records from the main SELECT and from the other tables used by the subqueries get locked, or only records from the main SELECT?
From Django docs:
By default, select_for_update() locks all rows that are selected by the query. For example, rows of related objects specified in select_related() are locked in addition to rows of the queryset’s model.
However, I'm not using select_related(), at least not explicitly.
Summary of my app:
with transaction.atomic():
    ModelName.objects.select_for_update().filter(...)
    ...
    # update the record that is locked
    ...
50+ clients sending queries to the database concurrently
Some of those queries ask for the same record. Meaning different transactions will run the same SQL at the same time.
After a lot of reading, I did the following to try to get the deadlock under control:
1- Try/catch exception error '1213' (deadlock). When this happens, wait 30 seconds and retry the query (see the sketch after this list). Here, I rely on the ROLLBACK performed by the database engine.
Also, print the output of SHOW ENGINE INNODB STATUS and SHOW PROCESSLIST. But SHOW PROCESSLIST doesn't give useful information.
2- Modify the Django select_for_update so that it doesn't build SQL with subqueries. Now the generated SQL contains a single WHERE clause with literal values and no subqueries.
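In case it helps others, here is roughly what step 1 looks like in code (a minimal sketch; ModelName, the wait time, and the retry count are illustrative):

import time

from django.db import transaction
from django.db.utils import OperationalError

DEADLOCK = 1213  # MySQL/MariaDB error code for "Deadlock found"

def update_with_retry(pk, retries=3, wait=30):
    for _ in range(retries):
        try:
            with transaction.atomic():
                obj = ModelName.objects.select_for_update().get(pk=pk)
                # ... modify obj here ...
                obj.save()
            return
        except OperationalError as exc:
            if exc.args[0] != DEADLOCK:
                raise
            time.sleep(wait)  # InnoDB already rolled this transaction back
    raise RuntimeError("gave up after repeated deadlocks")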
Anything else that could be done to reduce the deadlocks?
If you have select_for_update inside a transaction, the lock will only be released when the whole transaction commits or rolls back. With nowait set to true, the other concurrent requests will immediately fail with:
(3572, 'Statement aborted because lock(s) could not be acquired immediately and NOWAIT is set.')
So if we can't use optimistic locks and cannot make transactions shorter, we can set nowait=True in our select_for_update, and we will see a lot of failures if our assumptions are correct. Here we can just catch those failures and retry them with a backoff strategy, as sketched below. This is based on the assumption that everyone is trying to write to the same thing, like an auction item or a ticket booking within a short window of time. If that is not the case, consider changing the DB design a bit to make deadlocks less common.
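A rough sketch of that pattern (the model name and backoff numbers are made up):

import random
import time

from django.db import transaction
from django.db.utils import OperationalError

def book_with_backoff(pk, retries=5):
    for attempt in range(retries):
        try:
            with transaction.atomic():
                item = ModelName.objects.select_for_update(nowait=True).get(pk=pk)
                # ... apply the changes and save ...
                item.save()
            return True
        except OperationalError:
            # Lock not acquired immediately (e.g. MySQL error 3572);
            # back off with jitter before trying again
            time.sleep(2 ** attempt * 0.1 + random.random() * 0.1)
    return False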
The logs show that from time to time this error is raised.
I'm reading the docs and it's very confusing because we're not locking any tables to do inserts and we have no transactions beyond individual SQL calls.
So, might this be happening because we're exhausting the MySQL connection pool in Node? (We've set it to something like 250 simultaneous connections.)
I'm trying to figure out how to replicate this but having no luck.
Every query not run within an explicit transaction runs in an implicit transaction that immediately commits when the query finishes or rolls back if an error occurs... so, yes, you're using transactions.
Deadlocks occur when at least two queries are in the process of acquiring locks, and each of them holds row-level locks that they happened to acquire in such an order that they each now need another lock that the other one holds -- so, they're "deadlocked." An infinite wait condition exists between the running queries. The server notices this.
The error is not so much a fault as it is the server saying, "I see what you did, there... and, you're welcome, I cleaned it up for you because otherwise, you would have waited forever."
What you aren't seeing is that there are two guilty parties -- two different queries that caused the problem -- but only one of them is punished. The query that has accomplished the least amount of work (admittedly, this concept is nebulous) will be killed with the deadlock error, and the other query happily proceeds along its path, having no idea that it was the lucky survivor.
This is why the deadlock error message ends with "try restarting transaction" -- which, if you aren't explicitly using transactions, just means "run your query again."
See https://dev.mysql.com/doc/refman/5.6/en/innodb-deadlocks.html and examine the output of SHOW ENGINE INNODB STATUS, which will show you the other query -- the one that helped cause the deadlock but that was not killed -- as well as the one that was.
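If it helps, you can also grab that output programmatically; here is a sketch in Python with the mysqlclient driver (the same statement works from any client, including Node, and the connection details are placeholders):

import MySQLdb  # mysqlclient; connection details are placeholders

conn = MySQLdb.connect(host="localhost", user="app", passwd="secret", db="appdb")
cur = conn.cursor()
cur.execute("SHOW ENGINE INNODB STATUS")
_, _, status = cur.fetchone()  # result columns: Type, Name, Status
# The section headed LATEST DETECTED DEADLOCK shows both queries involved
print(status)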
The report works fine on the DEV and QA servers, but when deployed to Production the following error comes up:
An error occurred during client rendering.
An error has occurred during report processing.
Query execution failed for dataset 'Registration_of_Entity'.
Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
The strange part was that the Admins have assured me that this report has now been set so there is no timeout at all.
If I refresh the report 3 times every morning, the error message goes away.
What can I do to fix this issue so that the report never receives this error?
There are several steps to resolve this issue correctly.
I advise following them in this order:
1. Reduce the query execution time
Execute the query of the DataSet Registration_of_Entity in SSMS and see how long it takes to complete.
If your query requires more time to execute than the timeout specified for the DataSet, you should first try to reduce this time, for example:
Change the query structure (rethink joins, use CTEs, ...)
Add indexes
Looking at the execution plan can help.
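If you prefer to time the query outside SSMS as well, here is a rough sketch using Python and pyodbc (the server, database, and the query itself are placeholders):

import time
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myserver;DATABASE=mydb;Trusted_Connection=yes;"
)
cur = conn.cursor()
start = time.monotonic()
cur.execute("SELECT 1")  # paste the Registration_of_Entity dataset query here
rows = cur.fetchall()
print(f"{len(rows)} rows in {time.monotonic() - start:.1f}s")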
2. Reduce the query complexity
Do you need all those rows/columns?
Do you need to have all these calculations on the database side?
Could it be done in the report instead?
You could try to:
Reduce the query complexity
Split the query into smaller queries
Again, looking at the execution plan can help.
3. Explore additional optimizations not related to the query itself
You really need this query, but do you need the data in real time?
Are there a lot of other queries being executed on this server?
You could look into:
Caching
Replication / Load Balancing
Note that from SSRS 2008 R2, the new Shared DataSets can be cached. I know it doesn't apply in your case, but who knows, it could help others.
4. Last resort
If all the above steps failed to solve the issue, then you can increase the timeouts.
Here is a link to a blog post explaining the different timeouts and how to increase them.
Do you know if your query is becoming deadlocked? It could be that the report gets blocked on the server during peak times.
Consider optimizing your query or, if the data can be read uncommitted, adding WITH (NOLOCK) after each FROM and JOIN clause. Be sure to google WITH (NOLOCK) if you are unfamiliar with it, so you know what reading uncommitted data can do.
I have a routine in MySQL that is very long and has multiple SELECT, INSERT, and UPDATE statements in it, with some IFs and REPEATs. It had been running fine until lately, when it started hanging and taking over 20 seconds to complete (which is unacceptable considering it used to take about 1 second).
What is the quickest and easiest way for me to find out where in the routine the bottleneck is? Basically the routine is getting stopped up at some point... how can I find out where that is without breaking the routine apart and testing each section one by one?
If you use Percona Server (a free distribution of MySQL with many enhancements), you can make the slow-query log record times for individual queries, using the log_slow_sp_statements configuration variable. See http://www.percona.com/doc/percona-server/5.5/diagnostics/slow_extended_55.html
If you're using stock MySQL, you can add statements to the stored procedure that set a series of session variables to the value returned by the SYSDATE() function. Use a different session variable at each point of interest in the SP. Then, after a test execution of the SP, inspect the values of these session variables to see which section took the longest.
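For instance, you could add SET @t0 = SYSDATE(); at the start of the procedure, SET @t1 = SYSDATE(); after the first section, and so on, then read the checkpoints back on the same connection. A sketch, assuming the mysqlclient driver and a made-up routine name:

import MySQLdb  # mysqlclient; connection details are placeholders

conn = MySQLdb.connect(host="localhost", user="root", passwd="secret", db="mydb")
cur = conn.cursor()

# my_routine is assumed to contain checkpoints such as
#   SET @t0 = SYSDATE(); ... SET @t1 = SYSDATE(); ... SET @t2 = SYSDATE();
# (if the routine returns result sets, consume them with cur.nextset() first)
cur.execute("CALL my_routine()")

# Session variables survive the CALL on the same connection
cur.execute("SELECT TIMESTAMPDIFF(SECOND, @t0, @t1), TIMESTAMPDIFF(SECOND, @t1, @t2)")
first, second = cur.fetchone()
print(f"section 1: {first}s, section 2: {second}s")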
To analyze the query, you can look at its execution plan. It is not always an easy task, but with a bit of reading you will find the solution. I leave some useful links:
http://dev.mysql.com/doc/refman/5.5/en/execution-plan-information.html
http://dev.mysql.com/doc/refman/5.0/en/explain.html
http://dev.mysql.com/doc/refman/5.0/en/using-explain.html
http://www.lornajane.net/posts/2011/explaining-mysqls-explain
I have a server which sends up to 20 UPDATE statements to a separate MySQL server every 3-5 seconds for a game. My question is: is it faster to concatenate them together (UPDATE;UPDATE;UPDATE), is it faster to do them in a transaction and then commit, or is it faster to just run each UPDATE individually?
Any insight would be appreciated!
It sort of depends on how the server connects. If the connection between the servers is persistent, you probably won't see a great deal of difference between concatenated statements or multiple separate statements.
However, if the execution involves establishing the connection, executing the SQL statement, then tearing down the connection, you will save a lot of resources on the database server by executing multiple statements at a time. The process of establishing the connection tends to be an expensive and time-consuming one, and has the added overhead of DNS resolution since the machines are separate.
It makes the most logical sense to me to establish the connection, begin a transaction, execute the statements individually, commit the transaction and disconnect from the database server. Whether you send all the UPDATE statements as a single concatenation or multiple individual statements is probably not going to make a big difference in this scenario, especially if this just involves regular communication between these two servers and you need not expect it to scale up with user load, for example.
The use of the transaction assumes that your 3-5 second periodic bursts of UPDATE statements are logically related somehow. If they are not interdependent, then you could skip the transaction saving some resources.
As with any question regarding performance, the best answer is if your current system is meeting your performance and scaling needs, you ought not pay too much attention to micro-optimizing it just yet.
It is always faster to wrap these UPDATEs into a single transaction block.
The price for this is that if anything fails inside that block, it is as if nothing happened at all: you will have to repeat your work again.
Also, keep in mind that transactions in MySQL only work when using the InnoDB engine.
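For what it's worth, here is a minimal sketch of the single-transaction approach (Python with the mysqlclient driver; the table, columns, and pending_updates are made up):

import MySQLdb  # mysqlclient; autocommit is off by default per the DB-API

conn = MySQLdb.connect(host="gamedb", user="game", passwd="secret", db="game")
try:
    cur = conn.cursor()
    for player_id, score in pending_updates:  # pending_updates is assumed
        cur.execute("UPDATE players SET score = %s WHERE id = %s",
                    (score, player_id))
    conn.commit()    # the whole burst of UPDATEs commits atomically
except MySQLdb.Error:
    conn.rollback()  # if any UPDATE fails, none of them apply
    raise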