How to get the query plan from a prepared statement - MySQL

I don't remember ever seeing a way to use prepared statements from the console, and I somehow doubt that running an EXPLAIN query as a prepared statement through the API will get what I want.
This is related to this old question of mine.
I'm primarily interested in MySQL but would be interested in other DBs as well.

Based on the brief research I did, I don't see a way to get it. Ideally, the real execution plan would be generated once the variable values are provided: lookups can quickly rule out actually running the query if a constant value is not present, and the ideal execution plan would take the frequency of occurrence of the values into account. My understanding is that MySQL at least used to build an execution plan when the statement is prepared, in order to validate the expression, and then generates another plan when you execute it.
I believe the explain plan is temporarily housed in a table in MySQL but is quickly removed after it is used.
I would suggest asking on the MySQL internals list.
Good Luck,
Jacob

"You can't"
https://dev.mysql.com/doc/internals/en/prepared-stored-statement-execution.html
That basically says that the execution plan created for the prepared statement at compile time is not used. At execution time, once the variables are bound, it uses the values to create a new execution plan and uses that one.
This means that if you want to know what it will do, you can take the query you intend to prepare, substitute the values you would bind to it, and run EXPLAIN on that complete query.
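For example, if the statement you meant to prepare filters on two placeholders, substitute literal values and EXPLAIN the resulting query directly (the orders table and the values here are made up purely for illustration):

-- Instead of: PREPARE stmt FROM 'SELECT * FROM orders WHERE customer_id = ? AND status = ?';
-- substitute the values you would bind and ask for the plan:
EXPLAIN SELECT * FROM orders WHERE customer_id = 42 AND status = 'shipped';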

Related

Using MySQL server side prepared statement to improve app performance

My situation:
MySQL 5.5, but possible to migrate to 5.7
Legacy app is executing single MySQL query to get some data (1-10 rows, 20 columns)
Query can be modified via application configuration
Query is a very complex SELECT with multiple JOINs and conditions; it's about 20 KB of SQL
Query is well profiled and index usage fine-tuned; I spent much time on this and see no room for improvement without splitting it into smaller queries
With a traditional app I would split this large query into several smaller ones and use caching to avoid many JOINs, but my legacy app does not allow that. I can use only one query to return the results
My plan to improve performance is:
Reduce parsing time. Parsing 20 KB of SQL on every request, while only the parameter values change, seems inefficient
I'd like to turn this query into a prepared statement and only fill the placeholders with data
The query would be parsed once and executed multiple times, which should be much faster (see the sketch below)
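A minimal sketch of that plan, using MySQL's server-side PREPARE/EXECUTE syntax; the small query here is only a stand-in for the real 20 KB statement, and the table, column, and variable names are invented for illustration:

-- Prepare once per session; only the placeholder values change per request.
PREPARE big_report FROM 'SELECT o.id, o.total FROM orders o JOIN customers c ON c.id = o.customer_id WHERE c.region = ? AND o.created_at > ?';
SET @region = 'EU', @since = '2015-01-01';
EXECUTE big_report USING @region, @since;
-- ...later requests reuse the parsed statement with new values:
SET @region = 'US', @since = '2015-06-01';
EXECUTE big_report USING @region, @since;
DEALLOCATE PREPARE big_report;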
Problems/questions:
First of all: does the above solution make sense?
MySQL prepared statements seem to be session-scoped. I can't use them, since I cannot execute any additional code ("init code") to create the statements for each session
The other solution I see is to use a prepared statement generated inside a stored procedure or function. But the examples I have seen rely on dynamically building the query with CONCAT() and preparing and executing it locally inside the procedure. It seems that this kind of statement would be re-prepared on every procedure call, so it would not save any processing time
Is there any way to declare a server-wide, non-session-scoped prepared statement in MySQL, so that it survives application restarts and server restarts?
If not, is it possible to cache prepared statements declared in functions/procedures?
I think the following will achieve your goal...
Put the monster in a Stored Routine.
Arrange to always execute that Stored Routine from the same connection. (This may involve restructuring your client and/or inserting a "web service" in the middle.)
The logic here is that Stored Routines are compiled once per connection. I don't know whether that includes caching the "prepare". Nor do I know whether you should leave the query naked, or artificially prepare & execute.
Suggest you try some timings, plus try some profiling. The latter may give you clues into what I am uncertain about.
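A minimal sketch of that suggestion, with the procedure name, parameters, and query body standing in for your real 20 KB statement:

DELIMITER //
CREATE PROCEDURE run_big_report(IN p_region VARCHAR(16), IN p_since DATE)
BEGIN
  -- The monster query goes here, "naked", using the parameters directly.
  SELECT o.id, o.total
  FROM orders o
  JOIN customers c ON c.id = o.customer_id
  WHERE c.region = p_region
    AND o.created_at > p_since;
END //
DELIMITER ;

-- Then, always from the same persistent connection:
CALL run_big_report('EU', '2015-01-01');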

Impact of (server-side) Prepared Statements on a MySQL 5.5+ Query Plan

So, since cursory googling doesn't reveal anything enlightening:
How does MySQL generate a query plan for a Prepared Statement such as the server-side ones implemented in Connector/J for JDBC? Specifically, does it generate it at the time that the SQL statement is compiled and then reuse it with every execution regardless of the parameters or will it actually adjust the plan in the same manner that would be achieved with issuing each SQL query separately?
If it does happen to be "smart" about it, an explanation of how it does this would be great (e.g. variable peeking)
In almost all cases, the query plan is built when you execute the statement. In MySQL (unlike competing products), building the plan is very fast, so you don't really need to worry about whether it is cached in any way.
Also, by building the plan as needed, different values in the query can lead to different query plans, and hence faster execution.
(In the extreme, I have seen one statement, with different constants, have 6 different query plans.)
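You can observe this yourself with EXPLAIN; assuming a hypothetical orders table with an index on status, where one value is rare and another covers most of the table:

-- A selective constant: the optimizer will likely use the index on status.
EXPLAIN SELECT * FROM orders WHERE status = 'refunded';
-- A constant matching most of the rows: the optimizer may prefer a full table scan.
EXPLAIN SELECT * FROM orders WHERE status = 'completed';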

How to check performance of mysql query?

I have been learning about query optimization and improving query performance, but in general, when we write a query, how can we know whether it is a wise query?
I know we can see the execution time, but that time will not give a clear indication without a good amount of data, and usually when we create a new query we don't have much data to test against.
I have learned about the performance of various clauses and commands. But is there anything by which we can check the performance of a query? Performance here does not mean execution time; it means whether a query is "ok" or not, independent of the data.
We cannot create as much data as there will be in the live database.
General performance of a query can be checked using the EXPLAIN command in MySQL. See https://dev.mysql.com/doc/refman/5.7/en/using-explain.html
It shows you how the MySQL engine plans to execute the query and allows you to do some basic sanity checks, e.g. whether the engine will use keys and indexes to execute the query, how MySQL will execute the joins (i.e. whether foreign keys or supporting indexes are missing), and more.
You can find some general tips about how to use EXPLAIN for optimizing queries here (along with some nice samples): http://www.sitepoint.com/using-explain-to-write-better-mysql-queries/
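For example (the tables here are hypothetical; the point is which columns of the EXPLAIN output to read):

EXPLAIN SELECT c.name, o.total
FROM customers c
JOIN orders o ON o.customer_id = c.id
WHERE o.created_at > '2015-01-01';
-- In the output, check "type" (ALL means a full table scan), "key" (which index,
-- if any, is used), "rows" (estimated rows examined per table), and "Extra"
-- (e.g. "Using filesort" or "Using temporary").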
As mentioned above, the right query is always data-dependent. Up to a point, you can use the methods below to check performance:
You can use EXPLAIN to understand the query execution plan, and that may help you correct a few things. For more info,
refer to the documentation: Optimizing Queries with EXPLAIN
You can use the Query Analyzer. Refer to MySQL Query Analyzer
I like to throw my cookbook at Newbies because they often do not understand how important INDEXes are, or don't know some of the subtleties.
When experimenting with multiple choices of query/schema, I like to use
FLUSH STATUS;
SELECT ...;
SHOW SESSION STATUS LIKE 'Handler%';
That counts low-level actions, such as "read next record". It essentially eliminates caching issues, disk speed, etc., and is very reproducible. Often there is a counter (or several) in that output that matches the number of rows in the table (sometimes +/-1) -- that tells me there are table scan(s). This is usually not as good as if some INDEX were being used. If the query has a LIMIT, that value may show up in some Handler count.
A really bad query, such as a CROSS JOIN, would show a value of N*M, where N and M are the row counts for the two tables.
I used the Handler technique to 'prove' that virtually all published "get me a random row" techniques require a table scan. Then I could experiment with small tables and Handlers to come up with a list of faster random routines.
Another tip when timing... Turn off the Query_cache (or use SELECT SQL_NO_CACHE).
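Putting those two tips together, a run for one candidate query might look like this (the table and column are invented; repeat the block for each variant you want to compare):

FLUSH STATUS;
SELECT SQL_NO_CACHE COUNT(*) FROM orders WHERE status = 'shipped';
SHOW SESSION STATUS LIKE 'Handler%';
-- Handler_read_rnd_next close to the table's row count suggests a table scan;
-- small Handler_read_key / Handler_read_next counts suggest good index use.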

How can I find the bottleneck in my slow MySQL routine (stored procedure)?

I have a routine in MySQL that is very long and has multiple SELECT, INSERT, and UPDATE statements in it, with some IFs and REPEATs. It's been running fine until lately, where it's hanging and taking over 20 seconds to complete (which is unacceptable considering it used to take 1 second or so).
What is the quickest and easiest way for me to find out where in the routine the bottleneck is coming from? Basically the routine is getting stopped up at some point... how can I find out where that is without breaking apart the routine and testing each section one by one?
If you use Percona Server (a free distribution of MySQL with many enhancements), you can make the slow-query log record times for individual queries, using the log_slow_sp_statements configuration variable. See http://www.percona.com/doc/percona-server/5.5/diagnostics/slow_extended_55.html
If you're using stock MySQL, you can add statements in the stored procedure to set a series of session variables to the value returned by the SYSDATE() function. Use a different session variable at different points in the SP. Then after you run the SP in a test execution, you can inspect the values of these session variables to see what section of the SP took the longest.
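A minimal sketch of that approach; the section boundaries and variable names are placeholders:

-- Inside the stored procedure, stamp a user variable after each section:
SET @t0 = SYSDATE();
-- ... first block of SELECT/INSERT/UPDATE statements ...
SET @t1 = SYSDATE();
-- ... second block ...
SET @t2 = SYSDATE();

-- After a test CALL, from the same session (user variables survive the call):
SELECT TIMEDIFF(@t1, @t0) AS section_1, TIMEDIFF(@t2, @t1) AS section_2;
-- On MySQL 5.6+ you can use SYSDATE(6) for microsecond resolution.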
To analyze the query, you can look at its execution plan. It is not always an easy task, but with a bit of reading you will find the solution. I'll leave some useful links:
http://dev.mysql.com/doc/refman/5.5/en/execution-plan-information.html
http://dev.mysql.com/doc/refman/5.0/en/explain.html
http://dev.mysql.com/doc/refman/5.0/en/using-explain.html
http://www.lornajane.net/posts/2011/explaining-mysqls-explain

How can I get MySQL trigger execution time?

I have a rather complicated trigger and I'm afraid its execution time is too long. How can I measure it?
A trigger is like any other SQL query, with the difference that it cannot be called explicitly. As for measuring the performance of an SQL query, it really depends on your implementation, so a little more information would be useful.
With PHP, with some tool... how?
The simplest way (in the DB) is to INSERT NOW() at the beginning of the trigger and INSERT NOW() again at the end.
But time measurement (if this is what you asked about) is not always the best way to measure performance.
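Still, as a minimal sketch of that timing idea, assuming a hypothetical trigger_log table and a trigger on an orders table (DATETIME(3) and SYSDATE(3) need MySQL 5.6+; SYSDATE() is used instead of NOW() because NOW() is frozen at statement start and would give identical values for both rows):

CREATE TABLE trigger_log (note VARCHAR(32), ts DATETIME(3));

DELIMITER //
CREATE TRIGGER orders_after_update AFTER UPDATE ON orders
FOR EACH ROW
BEGIN
  INSERT INTO trigger_log VALUES ('start', SYSDATE(3));
  -- ... the trigger's real work goes here ...
  INSERT INTO trigger_log VALUES ('end', SYSDATE(3));
END //
DELIMITER ;

-- Compare the 'start' and 'end' timestamps per invocation to estimate the
-- trigger's execution time.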
This is a good way to start - Using the New MySQL Query Profiler