Does MySQL somehow store the last query (or queries)? - mysql

While working with MySQL and some really "performance-greedy" queries, I noticed that such a query can take 2 or 3 minutes to compute. But if I retry the query immediately after it finishes the first time, it takes only a few seconds. Does MySQL store something like "the last x queries"?

The short answer is yes: there is a query cache.
The query cache stores the text of a SELECT statement together with the corresponding result that was sent to the client. If an identical statement is received later, the server retrieves the results from the query cache rather than parsing and executing the statement again. The query cache is shared among sessions, so a result set generated by one client can be sent in response to the same query issued by another client.
(from the MySQL Reference Manual)

The execution plan for the query will be calculated and re-used. The data can be cached, so subsequent executions will be faster.

Yes, depending on how the MySQL Server is configured, it may be using the query cache. This stores the results of identical queries until a certain limit (which you can set if you control the server) has been reached. Read http://dev.mysql.com/doc/refman/5.1/en/query-cache.html to find out more about how to tune your query cache to speed up your application if it issues many identical queries.
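As a sketch of how to check this on your own server (these are the standard query-cache variable and status names in MySQL 5.x; note the query cache was removed entirely in MySQL 8.0):

```sql
-- Check whether the query cache is enabled and how large it is
SHOW VARIABLES LIKE 'query_cache%';

-- Give the cache 64 MB (requires the SUPER privilege)
SET GLOBAL query_cache_size = 64 * 1024 * 1024;

-- See how often the cache is actually being hit
SHOW STATUS LIKE 'Qcache%';
```

A low `Qcache_hits` relative to `Qcache_inserts` suggests the cache isn't helping your workload much.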

Related

Qt SQL `nextResult` function for MySQL Server 8.0: delayed execution per result set?

We are currently doing a lot of small queries. We execute a query, read the results, and then execute the next one. Since network requests cost a lot of time, this ping-ponging gets slow very fast.
This is why we want to do multiple queries at once, sending all data that the SQL server must know to it, and only retrieving one result (consisting of multiple result sets).
We found that Qt 5.14.1's QSqlQuery has the nextResult() function, but in the documentation (link) it says:
Some databases may execute all statements at once while others may delay the execution until the result set is actually accessed, [...].
MY QUESTION:
So, does MySQL Server 8.0 delay the execution until the result set is actually accessed? If so, we would still have a ping-pong for every query, right? That would still be very slow.
P.S. Our current solution to get down to one ping-pong is to UNION the different result sets (producing a kind of block-diagonal matrix with lots and lots of NULL values); this question is meant to find a better way to do that.
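One possible alternative to the UNION trick, assuming your client enables MySQL's multi-statement option (e.g. CLIENT_MULTI_STATEMENTS in the C API), is to send all statements in a single call and then walk the returned result sets with nextResult(); table names below are hypothetical:

```sql
-- Sent as ONE network round trip; the server returns three separate
-- result sets that the client then walks one at a time.
SELECT id, name   FROM customers WHERE id = 42;
SELECT order_id   FROM orders    WHERE customer_id = 42;
SELECT SUM(total) FROM invoices  WHERE customer_id = 42;
```

This avoids padding every result set to a common column layout with NULLs.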

MYSQL - number of rows returned vs number of connections?

Does my query get sent to the database once and I get a list of all the results in one shot which I then loop through, or do I have to request the next row from the DB each time?
Essentially, does reducing the number of rows I expect to return mean less connections/calls to the DB meaning my DB will be able to handle more connections at once, or is the number of database connections not dependent on the number of returned rows?
Your question is very vague, and seems to have the terminology jumbled up.
The number of rows retrieved from a resultset has no bearing on the number of connections. Nor does the number of statements executed have any bearing on connections.
(Unless it's a very badly written application that churns connections, connecting and disconnecting from the database for each statement execution.)
I think what you're asking is whether there's a "roundtrip" made to the database server for each "fetch", to retrieve a row from a resultset returned by a SELECT query.
The answer to that question is no, most database libraries fetch a "batch" of rows. When the client requests the next row, it's returned from the set already returned by the library. Once the batch is exhausted, it's another roundtrip to get the next set. That's all "under the covers" and your application doesn't need to worry about it.
It doesn't matter whether you fetch just one row and then discard the resultset, or whether you loop through and fetch every row. It's the same pattern.
In terms of performance and scalability, if you only need to retrieve four rows, then there's no sense in preparing a resultset containing more than that. When your query runs against the database, the database server generates the resultset, and holds that, until the client requests a row from the resultset. Larger resultsets require more resources on the database server.
A huge resultset is going to need more roundtrips to the database server, to retrieve all of the rows, than a smaller resultset.
It's not just the number of rows, it's also the size of the row being returned. (Which is why DBA types gripe about developer queries that do a SELECT * FROM query to retrieve every flipping column, when the client is actually using only a small subset of the columns.)
Reducing roundtrips to the database generally improves performance, especially if the client is connecting to the database server over a network connection.
But we don't really need to be concerned how many roundtrips it requires to fetch all the rows from a resultset... it takes what it takes, the client needs what it needs.
What we should be concerned with is the number of queries we run.
A query execution involves WAY more overhead than the simple roundtrip for a fetch. The SQL statement has to be parsed for syntax (the order and pattern of keywords and identifiers is correct) and for semantics (the identifiers reference valid tables, columns, and functions, and the user has the appropriate permissions on the database objects), and an execution plan has to be generated (evaluating which predicates and operations can be satisfied by which indexes, and the permutations of the order in which the operations are performed). Only then can the statement be executed; if a resultset is being returned, the server prepares it, notifies the client of the query completion status, and waits for the client to request rows from the resultset. When the client closes the resultset, the database server can clean up, releasing memory, etc.
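You can watch the plan-generation step in isolation with EXPLAIN, which shows the chosen indexes without executing the query (the table and column names below are made up for illustration):

```sql
-- Ask the optimizer for its plan instead of running the query;
-- the "key" column shows which index (if any) it chose for each table.
EXPLAIN
SELECT c.name, o.order_date
FROM customers c
JOIN orders o ON o.customer_id = c.id
WHERE c.country = 'DE';
```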
These aren't linked. The number of connections you can make is determined by the quality of the thread library and the amount of RAM available to / used by each thread. So essentially, it is limited by the capacity of the system, not by the complexity of the queries. Since each thread uses a buffer, a larger number of rows will only make the processing slower or consume a fixed amount of RAM. See here for more:
https://dev.mysql.com/doc/refman/5.0/en/memory-use.html
Q: Does my query get sent to the database once and I get a list of all the results in one shot which I then loop through, or do I have to request the next row from the DB each time?
A: You get back a batch of rows and iterate through it until you need the next batch (another trip to the DB on the same connection). The batch size depends on multiple conditions; if your query's result set is small, you may get all results in one shot.
Q: Essentially, does reducing the number of rows I expect to return mean less connections/calls to the DB meaning my DB will be able to handle more connections at once, or is the number of database connections not dependent on the number of returned rows?
A: The larger the result set, the more trips to the DB there may be (to grab the next set of rows). But the number of connections opened to the DB does not depend on the result set size of a single query.

Why is it that when the same query is executed twice in MySQL, it returns two very different response times?

My question is as follows:
Why, if I run the same query twice in the MySQL shell, do I get two very different response times (i.e., a long time for the first run and a much shorter time for the second)?
And how can I prevent this from happening?
Thank you very much in advance.
This is most likely down to query and/or result caching. If you run a query once, MySQL stores the compiled version of that query and keeps the indexes for the tables involved in memory, so any subsequent identical queries are vastly faster than the original.
This could be due to (1) query caching being turned on, or (2) differences in the performance state of the system on which the query is executed.
With query caching, if you run a query once, MySQL stores the compiled version of the query and its result, which are fetched when the query is issued again, so the compilation time is absent from repeated executions of the same query. Query caching can be turned off, but that is generally not a good idea.
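If the query cache is the cause, you can bypass it per statement or clear it before timing (standard MySQL 5.x syntax; the table name is hypothetical and both commands assume the cache is enabled):

```sql
-- Bypass the query cache for a single statement
SELECT SQL_NO_CACHE * FROM orders WHERE customer_id = 42;

-- Or clear the cache entirely before a timing run (requires privileges)
RESET QUERY CACHE;
```

This gives you repeatable "cold cache" timings when benchmarking.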

List of queries executed on mysql server

Does the MySQL server keep a record of the queries executed on it, and if so, is it possible to retrieve those queries?
Thanks.
You can use the General Query Log, which logs all queries in plain text.
We have this enabled on our development environment only. It shouldn't be a problem to turn this on temporarily on a production server during off-peak times.
If you're looking for slow queries specifically, you may want to use the Slow Query Log instead.
If you want to keep record of all queries that are executed, you can enable either the General Query Log or the Slow Query Log and set the threshold to 0 seconds (any query that takes more than 0 seconds will be logged).
Another option is the binary log. However, the binary log does not record statements such as SELECT or SHOW that do not modify data.
Note that these logs get big very fast on a server with traffic, and both might contain passwords, so they have to be protected from unauthorized eyes.
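As a hedged sketch (standard MySQL system variables; requires sufficient privileges), the general log can be switched on at runtime and even written to a table so it can be queried directly:

```sql
-- Log every statement to a table instead of a file
SET GLOBAL log_output  = 'TABLE';
SET GLOBAL general_log = 'ON';

-- ... run the workload you want to capture ...

-- Inspect the most recent statements
SELECT event_time, argument
FROM mysql.general_log
ORDER BY event_time DESC
LIMIT 20;

-- Turn it off again; the log grows quickly
SET GLOBAL general_log = 'OFF';
```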
You can use MySQL Proxy, which sits between the client app and the RDBMS itself.
http://forge.mysql.com/wiki/MySQL_Proxy
You can watch queries live, and it also lets you rewrite queries based on rules.
There's another option - use a profiler.
For instance: http://www.jetprofiler.com/

Does MySQL have a cache for SQL plans?

If the same SQL statement is run many times from different sessions, will MySQL parse it each time? In Oracle/SQL Server, the plan for a statement is cached and can be reused. Since parsing and creating a plan are said to be costly, if MySQL doesn't cache plans, could parsing the same statement many times become a significant cost?
For execution plan caching: I don't believe MySQL currently offers this feature.
MySQL does have a query cache: http://dev.mysql.com/doc/refman/5.1/en/query-cache.html
The query cache stores the text of a SELECT statement together with the corresponding result that was sent to the client. If an identical statement is received later, the server retrieves the results from the query cache rather than parsing and executing the statement again. The query cache is shared among sessions, so a result set generated by one client can be sent in response to the same query issued by another client.
I'm not sure how up to date this article is (2006), but it talks about these issues in detail:
http://www.mysqlperformanceblog.com/2006/07/27/mysql-query-cache/
To the best of my knowledge, not much has changed since then in this regard.
This is an existing MySQL Feature Request.
However, the last comments (in 2009) were along the lines that it's not clear it would offer any significant performance improvement, and that it could lead to deadlock conditions.
If you are concerned about this, you might want to look into using prepared statements.
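For example, a server-side prepared statement is parsed once per session and can then be executed repeatedly with different parameters (the `customers` table here is hypothetical):

```sql
-- Parse the statement once for this session...
PREPARE get_customer FROM 'SELECT name FROM customers WHERE id = ?';

-- ...then execute it repeatedly with different parameters
SET @id = 42;
EXECUTE get_customer USING @id;

SET @id = 7;
EXECUTE get_customer USING @id;

-- Release the statement when done
DEALLOCATE PREPARE get_customer;
```

Note that prepared statements are per-session in MySQL, so each new connection has to prepare its own copy.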