Profile stored procedures in MySQL

I am working with MySQL and using stored procedures. I have a profiling tool that profiles the code that communicates with MySQL through the stored procedures, and I was wondering whether there is a tool or capability within the MySQL client to profile the stored procedure executions themselves. What I have in mind is something similar to running queries with profiling turned on. I am using MySQL 5.0.41 on Windows XP.
Thanks in advance.

There is a wonderfully detailed article about such profiling: http://mablomy.blogspot.com/2015/03/profiling-stored-procedures-in-mysql-57.html
As of MySQL 5.7, you can use performance_schema to get information about the duration of every statement in a stored procedure. Simply:
1) Activate the profiling (use "NO" afterward if you want to disable it)
UPDATE performance_schema.setup_consumers SET ENABLED="YES"
WHERE NAME = "events_statements_history_long";
2) Run the procedure
CALL test('with parameters', '{"if": "needed"}');
3) Query the performance schema to get the overall event information
SELECT event_id,sql_text,
CONCAT(TIMER_WAIT/1000000000,"ms") AS time
FROM performance_schema.events_statements_history_long
WHERE event_name="statement/sql/call_procedure";
| event_id | sql_text | time |
|2432 | CALL test(...) | 1726.4098ms |
4) Get the detailed information for the event you want to profile
SELECT EVENT_NAME, SQL_TEXT,
CONCAT(TIMER_WAIT/1000000000,"ms") AS time
FROM performance_schema.events_statements_history_long
WHERE nesting_event_id=2432 ORDER BY event_id;
| EVENT_NAME | SQL_TEXT | time |
| statement/sp/stmt | ... 1 query of the procedure ... | 4.6718ms |
| statement/sp/stmt | ... another query of the procedure ... | 4.6718ms |
| statement/sp/stmt | ... another etc ... | 4.6718ms |
This way, you can tell which query takes the longest time in your procedure call.
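As a small extra (not part of the original article), the step-4 query can be reordered by elapsed time so the slowest statement comes first; 2432 is again the event_id obtained in step 3:
SELECT SQL_TEXT,
CONCAT(TIMER_WAIT/1000000000,"ms") AS time
FROM performance_schema.events_statements_history_long
WHERE nesting_event_id=2432
ORDER BY TIMER_WAIT DESC
LIMIT 5;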
I don't know of any tool that would turn this result set into a KCachegrind-friendly file or the like.
Note that this should not be activated on a production server: it can hurt performance and inflate data size, and since performance_schema.events_statements_history_long holds the procedure's parameter values, it can also be a security issue (if a procedure parameter is an end user's email or password, for instance).
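When you are done, a quick cleanup sketch (the same consumer as in step 1; the history table can simply be truncated):
UPDATE performance_schema.setup_consumers SET ENABLED="NO"
WHERE NAME = "events_statements_history_long";
-- performance_schema history tables support TRUNCATE, which clears the recorded events
TRUNCATE TABLE performance_schema.events_statements_history_long;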

You can turn on the slow query log within MySQL.
Take a look at this other SO question:
MYSQL Slow Query
Depending on which version, you may actually be able to set the value to zero, so every single query in the DB is shown in the slow query log.
See here for additional details:
http://dev.mysql.com/doc/refman/5.1/en/server-system-variables.html#sysvar_long_query_time
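For illustration, a minimal sketch assuming MySQL 5.1 or later, where both settings are dynamic variables (on 5.0 you would instead use log-slow-queries in my.cnf and restart, and a zero threshold may not be accepted):
SET GLOBAL slow_query_log = 'ON';  -- 5.1+; enables the slow query log at runtime
SET GLOBAL long_query_time = 0;    -- log every statement; applies to new connections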

Related

MySQL 5.5 UNIX_TIMESTAMP function returns different values on prod and local machines

I have an old PHP system, using MySQL 5.5.47 as the DB.
The developers who created the system made a strange decision.
In some cases, they saved a date value without a day, for example '2018-01-00'. The field type is DATE.
A lot of queries use a WHERE clause like this: UNIX_TIMESTAMP(DATE(<DATE>)) BETWEEN 1514757600 AND 1546207210, where <DATE> is a column that contains values like '2018-01-00', '2018-02-00', etc.
The two timestamps represent dates 2018-01-01 and 2018-12-31.
On production, this type of query runs without issue.
On my local machine, they do not return any results.
What I found is that if I run SELECT UNIX_TIMESTAMP( DATE( '2018-01-00' ) ), on production the result is 1514757600, but on my local machine it returns 0.
I'm using Docker Compose to reproduce production as closely as possible. I was initially using MySQL 5.6 for local development when I hit this issue; I then tried MySQL 5.5.62, but the result is the same.
Does anyone know how I can set up my local MySQL to work as the production one?
Query on production:
mysql> SELECT DATE('2018-01-00'), UNIX_TIMESTAMP(DATE('2018-01-00')), UNIX_TIMESTAMP('2018-01-00');
+--------------------+------------------------------------+------------------------------+
| DATE('2018-01-00') | UNIX_TIMESTAMP(DATE('2018-01-00')) | UNIX_TIMESTAMP('2018-01-00') |
+--------------------+------------------------------------+------------------------------+
| 2018-01-00 | 1514757600 | 0 |
+--------------------+------------------------------------+------------------------------+
Query on local:
mysql> SELECT DATE('2018-01-00'), UNIX_TIMESTAMP(DATE('2018-01-00')), UNIX_TIMESTAMP('2018-01-00');
+--------------------+------------------------------------+------------------------------+
| DATE('2018-01-00') | UNIX_TIMESTAMP(DATE('2018-01-00')) | UNIX_TIMESTAMP('2018-01-00') |
+--------------------+------------------------------------+------------------------------+
| 2018-01-00 | 0 | 0 |
+--------------------+------------------------------------+------------------------------+
It turns out to be a bug in MySQL prior to 5.5.48. In the release notes of 5.5.48, there is a statement about fixing a bug related to the UNIX_TIMESTAMP function.
https://dev.mysql.com/doc/relnotes/mysql/5.5/en/news-5-5-48.html
When an invalid date was supplied to the UNIX_TIMESTAMP() function using the STR_TO_DATE() function, no check was performed before converting it to a timestamp value. (Bug #21564557)
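If relying on the buggy 5.5.47 behaviour is not desirable, one possible workaround (not from the original answer; the column and table names below are hypothetical) is to normalize the zero-day values before converting, so that fixed servers no longer return 0:
-- date_col holds values like '2018-01-00'; replace the invalid day with '01'
SELECT UNIX_TIMESTAMP(CONCAT(LEFT(date_col, 7), '-01')) AS ts
FROM mytable;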

Stored procedure hanging

A stored procedure hangs from time to time. Any advice?
BEGIN
  DECLARE bookId int;

  SELECT IFNULL(id,0) INTO bookId FROM products
  WHERE isbn=p_isbn
    AND stoc>0
    AND status='vizibil'
    AND pret_ron=(SELECT MAX(pret_ron) FROM products
                  WHERE isbn=p_isbn
                    AND stoc>0
                    AND status='vizibil')
  ORDER BY stoc DESC
  LIMIT 0,1;

  IF bookId>0 THEN
    UPDATE products SET afisat='nu' WHERE isbn=p_isbn;
    UPDATE products SET afisat='da' WHERE id=bookId;
    SELECT bookId INTO obookId;
  ELSE
    SELECT id INTO bookId FROM products
    WHERE isbn=p_isbn
      AND stoc=0
      AND status='vizibil'
      AND pret_ron=(SELECT MAX(pret_ron) FROM products
                    WHERE isbn=p_isbn
                      AND stoc=0
                      AND status='vizibil')
    LIMIT 0,1;

    UPDATE products SET afisat='nu' WHERE isbn=p_isbn;
    UPDATE products SET afisat='da' WHERE id=bookId;
    SELECT bookId INTO obookId;
  END IF;
END
When it hangs, it does so on:
| 23970842 | username | sqlhost:54264 | database | Query | 65 | Sending data | SELECT IFNULL(id,0) INTO bookId FROM products
WHERE
isbn= NAME_CONST('p_isbn',_utf8'973-679-50 | 0.000 |
| 1133136 | username | sqlhost:52466 | database _emindb | Query | 18694 | Sending data | SELECT IFNULL(id,0) INTO bookId FROM products
WHERE
isbn= NAME_CONST('p_isbn',_utf8'606-92266- | 0.000 |
First, I'd like to mention the Percona Toolkit; it's great for debugging deadlocks and hung transactions. Second, I would guess that at the time of the hang there are multiple threads executing this same procedure. What we need to know is which locks are being acquired at the time of the hang. The MySQL command SHOW ENGINE INNODB STATUS gives you this information in detail. At the next 'hang', run this command.
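Concretely, these are the statements to run at the next hang (SHOW FULL PROCESSLIST is the command behind the output pasted above):
SHOW ENGINE INNODB STATUS\G   -- transactions, plus locks held and waited for
SHOW FULL PROCESSLIST;        -- which threads are stuck and on which statement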
I almost forgot to mention the tool innotop, which is similar, but better: https://github.com/innotop/innotop
Next, I am assuming you are using the InnoDB engine. The default transaction isolation level of REPEATABLE READ may be too high in this situation because of range locking; you may consider trying READ COMMITTED for the body of the procedure (SET it to READ COMMITTED at the beginning and back to REPEATABLE READ at the end), as sketched below.
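A minimal sketch of that idea, wrapping the call from the session side (the procedure name and argument are placeholders):
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
CALL book_lookup('some-isbn');   -- hypothetical call; use your real procedure and p_isbn value
SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ;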
Finally, perhaps most importantly, notice that your procedure performs SELECTs and UPDATEs (in mixed order) on the same table, using perhaps the same p_isbn value. Imagine this procedure running concurrently -- it is a perfect deadlock setup.

How does phpMyAdmin get query statistics?

Dear friends: I'm developing a PHP server monitor for a client. One of the monitor's sections is related to MySQL.
In phpMyAdmin, the section Server Status > Status queries shows a number of queries. I thought that was extracted from the SHOW STATUS MySQL command. But it differs!
When I go in phpMyAdmin to the section Server Status > Server Status Variables, the system displays the same values as the "Status queries" section.
But when I get the results of the SHOW STATUS command, the values are not the same.
My English level is too poor to explain the case correctly, so I will show an example:
In Server Status > Status queries I can see, in the table:
Sentences | # | per hour| %
---------------------------------
select | 365 | 51.4 |25.29
set option | 266 | 37.4 |18.43
When I go to Server Status > Server Status Variables, I can see:
Variable | Value | Description
---------------------------------
Com select | 365 | Blah Blah....
Com set Option | 266 | Blah Blah....
But if I run SHOW STATUS, I obtain:
Variable | Value
-----------------------------
com_select | 1
com_set_Option | 2
And at this point, my brain explodes...
Can you enlighten me?
PS: Again, sorry if my English is too poor...
Use:
SHOW GLOBAL STATUS;
to get the server status values as shown in phpMyAdmin.
With a GLOBAL modifier, the statement displays the global status values. A global status variable may represent status for some aspect of the server itself (for example, Aborted_connects), or the aggregated status over all connections to MySQL (for example, Bytes_received and Bytes_sent). If a variable has no global value, the session value is displayed.
With a SESSION modifier, the statement displays the status variable values for the current connection. If a variable has no session value, the global value is displayed. LOCAL is a synonym for SESSION.
If no modifier is present, the default is SESSION.
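In other words, a plain SHOW STATUS shows per-connection counters for your own session, which is why Com_select was only 1 there. A quick way to see the difference side by side:
SHOW STATUS LIKE 'Com_select';         -- session scope: counts only your current connection
SHOW GLOBAL STATUS LIKE 'Com_select';  -- aggregated over all connections, which is what phpMyAdmin shows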

Total run time of multiple queries in mysql

I have some benchmark queries in a .sql file. If I use source in the mysql client to execute them, mysql will show the run time after each query, and there are pages and pages of query output. Is there any way I can obtain the total run time of all the queries?
Thanks a lot!
You can use MySQL's built-in profiling support by running this statement at the start of your session (for example, at the top of your .sql file):
SET profiling=1;
This allows you to easily see the time it took for each query:
mysql> SHOW PROFILES;
+----------+-------------+-------------------------------------------------------------------+
| Query_ID | Duration | Query |
+----------+-------------+-------------------------------------------------------------------+
| 1 | 0.33174700 | SELECT COUNT(*) FROM myTable WHERE extra LIKE '%zkddj%' |
| 2 | 0.00036600 | SELECT COUNT(id) FROM myTable |
| 3 | 0.00087700 | CREATE TEMPORARY TABLE foo LIKE myTable |
| 4 | 33.52952000 | INSERT INTO foo SELECT * FROM myTable |
| 5 | 0.06431200 | DROP TEMPORARY TABLE foo |
+----------+-------------+-------------------------------------------------------------------+
5 rows in set (0.00 sec)
You can then sum up the times to get the total time:
SELECT SUM(Duration) from information_schema.profiling;
You can find more details on MySQL's profiling here.
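One caveat worth adding (not part of the original answer): by default the profiler keeps only the last 15 statements, so for a longer .sql file you may want to raise the limit before sourcing it:
SET profiling_history_size = 100;  -- default is 15, maximum is 100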
Another approach you could take is to execute the SQL queries from the command line and use the Unix time command to time the execution. This may, however, not give you the most precise time. Additionally, it won't give you a breakdown of how long each query took unless you use it in combination with MySQL profiling.
You could modify your .sql file to create a begin and an end row with a timestamp and then subtract, without having to bring out an Excel spreadsheet and add it all up.
Thanks for all the suggestions.
I ended up creating another table just to record the start and end time of each query in the .sql file.
I edited the .sql file and added an insert statement after each original query just to record the time. At the end, I can query this "time" table to profile the .sql file execution.
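For anyone wanting to do the same, a rough sketch of that idea (the timing table and marker names are made up):
CREATE TABLE query_timings (
  marker VARCHAR(64),
  ts     DATETIME
);

INSERT INTO query_timings VALUES ('start', NOW());
-- ... original query 1 goes here ...
INSERT INTO query_timings VALUES ('after query 1', NOW());
-- ... original query 2 goes here ...
INSERT INTO query_timings VALUES ('after query 2', NOW());

-- total run time of the whole script (NOW()/DATETIME only give second resolution on older servers)
SELECT TIMEDIFF(MAX(ts), MIN(ts)) AS total_run_time FROM query_timings;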

mysql query - optimizing existing MAX-MIN query for a huge table

I have a query that works more or less well (as far as the result is concerned), but it takes about 45 seconds to be processed. That's definitely too long for presenting the data in a GUI.
So my goal is to find a much faster/more efficient query (something around a few milliseconds would be nice).
My data table now has around 2,619,395 entries (originally ~3,000) and is still growing.
Schema:
num | station | fetchDate | exportValue | error
1 | PS1 | 2010-10-01 07:05:17 | 300 | 0
2 | PS2 | 2010-10-01 07:05:19 | 297 | 0
923 | PS1 | 2011-11-13 14:45:47 | 82771 | 0
Explanation
the exportValue is always incrementing
the exportValue represents the actual absolute value
in my case there are 10 stations
every ~15 minutes 10 new entries are written to the table
error is just an indicator for a properly working station
Working query:
select YEAR(fetchDate), station, MAX(exportValue) - MIN(exportValue)
from registros
where exportValue > 0 and error = 0
group by station, YEAR(fetchDate)
order by YEAR(fetchDate), station
Output:
Year | station | Max-Min
2008 | PS1 | 24012
2008 | PS2 | 23709
2009 | PS1 | 28102
2009 | PS2 | 25098
My thoughts on it:
writing several queries with BETWEEN clauses like 'BETWEEN 2008-01-01 AND 2008-01-02' to fetch the MIN(exportValue) and 'BETWEEN 2008-12-30 AND 2008-12-31' to grab the MAX(exportValue). Problem: a lot of queries, and there may be no data in a specified time range (it's not guaranteed that there will be data)
limiting the result set to my 10 stations only, using ORDER BY MIN(fetchDate). Problem: this also takes a long time to process
Additional Info:
I'm using the query in a Java application (JPA 2.0). That means it would be possible to do some post-processing on the result set if necessary.
Any help/approaches/ideas are much appreciated. Thanks in advance.
Adding suitable indexes will help.
2 compound indexes will speed things up significantly:
ALTER TABLE tbl_name ADD INDEX (error, exportValue);
ALTER TABLE tbl_name ADD INDEX (station, fetchDate);
This query running on 3000 records should be extremely fast.
Suggestions:
do you have a PK set on this table? (station, fetchDate)?
add indexes; you should experiment with them, as rich.okelly suggested in his answer
depending on the results of those experiments, try breaking your query into multiple statements in one stored procedure; this way you will not lose time in network traffic between multiple queries sent from the client to MySQL
you mentioned that you tried separate queries and that there is a problem when there is no data for a particular month; this is a regular case in business applications, and you should handle it in a "master query" (stored procedure or application code)
I guess fetchDate is the current date and time at the moment of record insertion; consider keeping previous months' data in a summary table with the fields year, month, station, max(exportValue), min(exportValue). This means you would insert summary records into the summary table at the end of each month (see the sketch below); deleting, keeping, or moving the detail records to a separate table is your choice
Since your table is growing rapidly (new rows every 15 minutes), you should take the last suggestion into account. There is probably no need to keep the detailed history in one place; archiving data is a process that should be done as part of maintenance.
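A rough sketch of that summary-table idea (table and column names are illustrative, and the month boundaries are just an example):
CREATE TABLE registros_summary (
  yr       SMALLINT    NOT NULL,
  mo       TINYINT     NOT NULL,
  station  VARCHAR(10) NOT NULL,
  maxExportValue INT   NOT NULL,
  minExportValue INT   NOT NULL,
  PRIMARY KEY (yr, mo, station)
);

-- run at the end of each month, e.g. for October 2010
INSERT INTO registros_summary (yr, mo, station, maxExportValue, minExportValue)
SELECT YEAR(fetchDate), MONTH(fetchDate), station,
       MAX(exportValue), MIN(exportValue)
FROM registros
WHERE exportValue > 0 AND error = 0
  AND fetchDate >= '2010-10-01' AND fetchDate < '2010-11-01'
GROUP BY YEAR(fetchDate), MONTH(fetchDate), station;

-- the yearly MAX-MIN report can then scan the small summary table instead of the detail table
SELECT yr, station, MAX(maxExportValue) - MIN(minExportValue)
FROM registros_summary
GROUP BY yr, station
ORDER BY yr, station;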