Get time for MySQL result without showing result - mysql

I realized that using phpMyAdmin for testing the speed of queries might be dumb: it automatically applies a LIMIT clause.
I tried a certain query on a fairly large number of records (31,595) with a GROUP BY clause. phpMyAdmin, adding LIMIT 0, 200, took 0.1556 seconds to fetch the results.
I decided to try the same query from the command line without the LIMIT clause and it took 0.20 seconds. Great, so now I have the real time it takes for that query.
But the downside is I had to wait for 30,000+ records to print on the screen.
Is there a better solution?
EDIT:
To clarify, I am looking for a way to suppress the screen output of a select query while still getting an accurate time for running the query. And I want it to be something that could be typed in and timed at any time (i.e. I don't want to have to tweak slow log settings to capture results).

You could wrap your query in a SELECT COUNT(1) to count the number of rows returned without printing them all out:
SELECT COUNT(1)
FROM (
<<your query goes here>>
) t;
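For example, with a GROUP BY query like the one described in the question (the table and column names below are placeholders, not taken from the original post), the wrapper would look like:
SELECT COUNT(1)
FROM (
    SELECT category, COUNT(*) AS cnt
    FROM records
    GROUP BY category
) t;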

I guess that what you really want is to obtain the best possible speed for your query, not really to time it.
If it's the case, type your query in phpMyAdmin (its adding a LIMIT clause is not important) then click on the "Explain SQL" link to see whether you are using indexes or full-table scans.
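For instance, from the command line you can prefix the same statement with EXPLAIN to see the access type, the key used, and the estimated number of rows examined (placeholder table and column names again):
EXPLAIN
SELECT category, COUNT(*) AS cnt
FROM records
GROUP BY category;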

You could use the console client mysql together with time:
$ time mysql -u user -h host -ppassword -e "show databases;" > /dev/null
real 0m0.036s
user 0m0.008s
sys 0m0.008s
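Applied to the original GROUP BY query, the same pattern might look like this sketch (the database, table, and column names are placeholders, not from the question):
$ time mysql -u user -h host -ppassword -e "SELECT category, COUNT(*) FROM mydb.records GROUP BY category" > /dev/null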

Related

How do you time multiple SQL queries in MySQL workbench?

A program I inherited runs 800 single queries once every minute or so. I was able to grab all these queries and I want to test how long it takes to run them in sequence, to see if it's an issue that I need to address or if it is OK as is.
The SQL queries are all simple SELECT queries with a few where clauses:
SELECT DISTINCT roles.desc FROM descRoles roles, listUsers users, listUsers mapping WHERE mapping.roleId = roles.roleId AND mapping.idx = users.idx AND users.UserName = 'fakeNameHere';
If there's a typo in my select query, please ignore it; they run fine. I want to know if there is something I can put before and after all 800 queries to time how long it takes to run all of them. Also, it seems necessary to turn off the result tabs, since after about 40 of them I get a message that the maximum number of result tabs has been reached.
Workbench is not the tool for timing queries. What you want is mysqlslap https://dev.mysql.com/doc/refman/8.0/en/mysqlslap.html
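A minimal mysqlslap run for this case might look like the following sketch; it assumes the 800 statements have been saved to a file called queries.sql and that the target schema is named mydb (both names are placeholders). The --no-drop flag is included so mysqlslap does not drop anything it creates; see the linked manual page for the full option list:
mysqlslap -u user -p --concurrency=1 --iterations=1 --no-drop --create-schema=mydb --query=queries.sql --delimiter=";"
With --concurrency=1 and --iterations=1 this runs the statements once in sequence and reports the total elapsed time, without opening any result tabs.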

How to extract the number (count) of queries logged in the mysql slow query log for a 10 minute interval

As per my research, I thought of using the mysqldumpslow utility to parse the log and extract the results, but I am not able to figure out how to use it. I want to get the number of queries logged in the slow query log over a 10-minute interval, so that the values can be compared for analysis.
Thanks
You could use logrotate to create a new slow.log every 10 minutes and analyze them one after another, assuming you are using Linux. Be aware that your example shows your MySQL instance is configured with "log-queries-not-using-indexes", so your log file will also contain SELECTs that don't use an index.
Update:
Since I still don't know what OS you are using, a more general approach to your problem would be redirecting the slow log into MySQL itself, following the MySQL docs, and then querying the slow log table like:
SELECT COUNT(*) FROM slow_log;
which gives you the total number of queries logged, followed by:
TRUNCATE TABLE slow_log;
Having a script in place doing this every 10 minutes would output the desired information.
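A minimal sketch of that approach, assuming sufficient privileges and a MySQL version that supports table logging (the log table lives in the mysql schema):
-- send the slow log to a table instead of (or in addition to) the file
SET GLOBAL log_output = 'TABLE';
SET GLOBAL slow_query_log = 'ON';
-- run every 10 minutes, e.g. from cron:
SELECT COUNT(*) FROM mysql.slow_log;
TRUNCATE TABLE mysql.slow_log;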

MySQL/nodejs: query results being artificially limited

I've got a long-running MySQL db operation on my node.js server. This operation performs an INSERT INTO (...) SELECT ... FROM statement that should result in a table with thousands of rows, but I only end up with a fraction of that amount. I'm noticing that my node server shows the request always taking exactly 120,000 ms (120 seconds), so it's led me to believe that something -- either MySQL or node's MySQL connector -- is artificially limiting my results from the SELECT statement.
Some things to note:
I've tried adding my own LIMIT 0,100000 and my final result is exactly the same as if I had no LIMIT clause at all.
If I run with no WHERE clause, my resulting data goes through July of 2013. I can force later data by adding a WHERE theDateField > '2013-08-01'; I can conclude from this that the query itself should be working, but that something is limiting it.
I get the same result by running my query in MySQL Workbench after removing the LIMIT via preferences (this suggests that the MySQL server itself may be the problem).
Is anyone aware of a setting or something that could cause this behavior?

MySQL queries testing WHERE clause search times

Recently I was pulled into the boss-man's office and told that one of my queries was slowing down the system. I then was told that it was because my WHERE clause began with 1 = 1. In my script I was just appending each of the search terms to the query so I added the 1 = 1 so that I could just append AND before each search term. I was told that this is causing the query to do a full table scan before proceeding to narrow the results down.
I decided to test this. We have a user table with around 14,000 records. The queries were run five times each using both phpMyAdmin and PuTTY. In phpMyAdmin I limited the queries to 500, but in PuTTY there was no limit. I tried a few different basic queries and clocked the times on them. I found that the 1 = 1 seemed to make the query faster than a query with no WHERE clause at all. This is on a live database, but the results seemed fairly consistent.
I was hoping to post on here and see if someone could either break down the results for me or explain to me the logic for either side of this.
Well, your boss-man and his information source are both idiots. Adding 1=1 to a query does not cause a full table scan. The only thing it does is make query parsing take a minuscule amount longer. Any decent query plan generator (including MySQL's) will realize this condition is a no-op and drop it.
I tried this on my own database (solar panel historical data), nothing interesting out of the noise.
mysql> select sum(KWHTODAY) from Samples where Timestamp >= '2010-01-01';
seconds: 5.73, 5.54, 5.65, 5.95, 5.49
mysql> select sum(KWHTODAY) from Samples where Timestamp >= '2010-01-01' and 1=1;
seconds: 6.01, 5.74, 5.83, 5.51, 5.83
Note I used ajreal's query cache disabling.
First of all, did you SET SESSION query_cache_type = OFF; during both tests?
Secondly, the queries you tested in phpMyAdmin and in PuTTY (the mysql client) are different, so how can you compare the results?
You should run the same query in both places.
Also, you cannot assume phpMyAdmin bypasses the query cache. The time phpMyAdmin displays includes PHP processing, which you should avoid as well.
Therefore, you should just do the testing in the mysql client instead.
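A minimal sketch of that kind of test in the mysql client, assuming a MySQL 5.x server where the query cache still exists (it was removed in 8.0); the table and column names are placeholders, not the asker's actual schema:
SET SESSION query_cache_type = OFF;
SELECT SQL_NO_CACHE * FROM users WHERE user_name = 'fakeNameHere';
SELECT SQL_NO_CACHE * FROM users WHERE 1 = 1 AND user_name = 'fakeNameHere';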
This isn't a really accurate way to determine what's going on inside MySQL. Things like caching and network variations could skew your results.
You should look into using "explain" to find out what query plan MySQL is using for your queries with and without your 1=1. A DBA will be more interested in those results. Also, if your 1=1 is causing a full table scan, you will know for sure.
The explain syntax is here: http://dev.mysql.com/doc/refman/5.0/en/explain.html
How to interpret the results is explained here: http://dev.mysql.com/doc/refman/5.0/en/explain-output.html
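For example, comparing the two plans is just a matter of prefixing both variants with EXPLAIN (the table and column names here are placeholders):
EXPLAIN SELECT * FROM users WHERE user_name = 'fakeNameHere';
EXPLAIN SELECT * FROM users WHERE 1 = 1 AND user_name = 'fakeNameHere';
If the type and rows columns are the same for both, the 1=1 predicate is not changing the plan.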

How to parse a mysql slow query log into something useful?

I have an extensive slow query log that was collected over a few weeks. I would like to parse it so that the most frequently occurring queries appear at the top (with the number of executions and average execution times), continuing in descending order from there.
What tool/command can I use to accomplish that?
Check out Maatkit:
mk-query-digest - Parses logs and more. Analyze, transform, filter, review and report on queries.
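A minimal invocation, assuming the slow log lives at /var/log/mysql_slow_queries.log (the path used in the answer below):
mk-query-digest /var/log/mysql_slow_queries.log > query_report.txt
The report groups identical queries together and shows counts and response-time statistics for each.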
For a more readable view of the slow log I use:
mysqldumpslow -s c -t 10 /var/log/mysql_slow_queries.log
Edit the path to point at your slow log; -s c sorts by count and -t 10 shows the top 10 queries.
Sample output:
Reading mysql slow query log from /var/log/mysql_slow_queries.log
Count: 1 Time=21.52s (21s) Lock=0.00s (0s) Rows=3000.0 (3000), foo#localhost
SELECT * FROM students WHERE
(student_grade > N AND student_date < N)
ORDER BY RAND() ASC LIMIT N
Also, calling mysqldumpslow without any options prints some useful settings, like the current query_cache_size and the path to the slow log.
This script gives a clearer report than Maatkit's mk-query-digest:
https://github.com/LeeKemp/mysql-slow-query-log-parser/
This online analyzer worked really well for me, and it's free! Just drag and drop your slow query file.
https://www.slowquerylog.com/analyzer
You could use a spreadsheet if you can parse the data into individual fields, then sum by query and sort by count of executions and average. Y'know... Normal data analysis stuff.
If you need to, write a regex-based application to split the log up into CSV. While you're at it, you could even write a concordance engine for the log names, check averages, and build the tool you're after.
Of course, someone is likely to come in and say "Duh, use 'analysisMySQLProDemon40005K+!'" after I post this.