MySQL slows down suddenly

On a local PC running MySQL 5.7.17 (InnoDB), I run a query against a database and it takes 25 s, but after logging in to the same database again, the same query takes 50 s. I suspect it's a MySQL performance issue. This happens frequently: sometimes the database runs a query more slowly, and the next time it speeds up and execution takes less time. I'm testing many queries in the command-line client.
The one thing I have changed in the configuration is innodb_buffer_pool_size=5G in my.ini.
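A rough sketch of how one might confirm the setting took effect and check whether the buffer pool is still cold after a restart (these are all standard MySQL variables and status counters):
-- configured buffer pool size, in bytes
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';
-- a high ratio of Innodb_buffer_pool_reads (disk reads) to
-- Innodb_buffer_pool_read_requests suggests the pool is still warming up
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%';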
Does anybody have any suspicion about what may cause this decrease in performance, or what I should check?
Thanks for any suggestions!

Related

Does my SQL query continue executing after I have been disconnected due to a long operation?

If I run a SQL query in MySQL Workbench and the connection times out after 30 seconds because the query is taking a long time, does my query continue executing on the MySQL server even though I am disconnected?
For example, if I am doing an update that loops over a billion records, does the MySQL server disconnect me first and then finish the query afterwards? Or does it disconnect me and terminate the query?
It does. As Mustafa mentioned, you can see the query still running if you look at the Administration tab --> Management --> Client Connections.
It's also good to remember that you can change the 30-second cap to be longer, shorter, or remove it entirely.
Yes, MySQL Workbench can disconnect and the query keeps running. This has been reported as a bug, but it's in the "Verified" state, which means it is not fixed: https://bugs.mysql.com/bug.php?id=78809
See also this related SO thread: MySQL Query running even after losing connection
If you have a long-running query that needs to do a bulk update, you may need to change the MySQL Session timeout options in the MySQL Workbench preferences. Alternatively, don't use MySQL Workbench for long-running jobs, use the mysql command line tool.
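If the orphaned statement does keep running, here is a minimal sketch for finding and stopping it from the mysql command-line client (the id 123 below is just an example taken from the processlist output):
-- list running statements and how long they have been executing
SHOW FULL PROCESSLIST;
-- stop only the statement, keeping the connection
KILL QUERY 123;
-- or drop the whole connection
KILL 123;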

MySQL heavy disk activity even with no queries running

I'm trying to troubleshoot a mysterious disk I/O bottleneck caused by MySQL.
I'm using the following commands to test disk read/write speed:
#write
dd if=/dev/zero of=/tmp/writetest bs=1M count=1024 conv=fdatasync,notrunc
#read
echo 3 > /proc/sys/vm/drop_caches; dd if=/tmp/writetest of=/dev/null bs=1M count=1024
I rebooted the machine, disabled cron so none of my usual processes are running queries, killed the web server which usually runs, and killed mysqld.
When I run the read test without mysqld running, I get 1073741824 bytes (1.1 GB) copied, 2.19439 s, 489 MB/s. Consistently around 450-500 MB/s.
When I start the mysql service back up and then run the read test again, I get 1073741824 bytes (1.1 GB) copied, 135.657 s, 7.9 MB/s. Consistently around 5 MB/s.
Running show full processlist in mysql doesn't show any queries (and I disabled everything that would be running queries anyway). In MySQL Workbench's Server Status tab, I can see InnoDB reads fluctuate between 30-200 reads per second, and 3-15 writes per second, even when no queries are running.
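For what it's worth, roughly the same counters can be sampled straight from the client (standard InnoDB status variables; sampling them twice and diffing gives a per-second rate):
-- cumulative InnoDB read and write operations since startup
SHOW GLOBAL STATUS LIKE 'Innodb_data_reads';
SHOW GLOBAL STATUS LIKE 'Innodb_data_writes';
-- pages modified in memory but not yet flushed to disk
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_dirty';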
If I run iotop -oPa I can see that mysqld is racking up like 1MB disk reads per second when no queries are running. That seems like a lot considering no queries are running, but at the same time that doesn't seem like enough to cause my dd command to take so long... The only other thing performing disk io is jbd2/sda3-8.
Not sure if it's related, but if I try to kill the mysql server with service mysql stop it says "Attempt to stop MySQL timed out", and the mysqld process continues running, but I can no longer connect to the DB. I have to use kill -9 to kill the mysqld process and restart the server.
All of this appeared out of the blue. This server had been doing heavy-duty log parsing and high-volume inserts and selects for months, until this last weekend, when we started seeing this disk I/O bottleneck.
How can I find out why MySQL is doing so much disk reading when it's essentially idle?
Did you update/delete/insert a large number of rows? If so, consider these "delays" in writing to disk:
The block containing the data is not written back to disk immediately.
Ditto for UNIQUE keys.
Updates to secondary indexes go into the "change buffer"; they get folded into the index blocks, often even later.
Updates/deletes leave behind a "history list" that needs to be cleaned up after the transaction is complete.
Those things are handled by background tasks that do not show up in the PROCESSLIST. They may be visible on mysqld process(es), mostly as I/O. (CPU is probably minimal.)
Was there a ROLLBACK? Transactions are "optimistic", so a ROLLBACK has to do a lot of work to "undo" what was optimistically already written.
If you abruptly kill mysqld (or turn off the power), then the ROLLBACK occurs after restarting.
SSDs have no "seek" time. HDDs must move the read/write heads by a variable amount; this takes time. If your dd is working on one end of the disk, and mysqld is working on the other end, the "seeking" adds to the apparent I/O time.
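A rough sketch of how to watch those background tasks at work, using standard InnoDB status output (the history list length and the dirty-page count should trend downward once the purge and flush work catches up):
-- the TRANSACTIONS section reports "History list length" (undo records waiting to be purged)
SHOW ENGINE INNODB STATUS\G
-- pages still waiting to be flushed to disk
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_dirty';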
This turned out, like many performance problems, to be a multifaceted issue.
Essentially, the issue turned out to be nightly system and DB backups, writing to a separate HDD RAID array, running into the next day; the master would then send FLUSH TABLES, causing MySQL jobs and replication work to wait for it. In addition, an unnecessary side process was copying many gigabytes of text files around the system a few times a day, and there was a lot of context switching as the system tried to copy data for backups while also performing MySQL work (replication and other jobs).
I ended up reducing the number of tables we were replicating (some were unnecessary), reducing the copying of text files around the system when not needed, increasing the memory and I/O allocated to the MySQL server, streamlining the MySQL and system backups, and limiting cron jobs running MySQL processes to give the MySQL backups more time to complete. With all that, the backups were barely completing by 7 AM each morning, so I determined that we need to run the MySQL backups only on weekends instead of nightly, which is fine since this is all fairly static data.

Enable alerts for MySQL long running queries

My application is loading very slowly. After doing some research, I learned that MySQL is causing the slowness. I have around 15-20 users who access this server. After a preliminary investigation (Googling and Stack Overflow), I found out that there were some queries running at the time, and those were the culprit. It's annoying to run a query every now and then to look for queries that have been running for a long time. Is there a workaround for this, and can I also get email/SMS alerts for it? How can I enable email/SMS alerts for those queries?
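For reference, a minimal sketch of the kind of manual check described above (information_schema.PROCESSLIST is standard; the 60-second threshold is arbitrary):
-- list statements that have been running for more than a minute
SELECT id, user, host, db, time, state, info
FROM information_schema.PROCESSLIST
WHERE command <> 'Sleep' AND time > 60
ORDER BY time DESC;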
Execute the commands below in MySQL as the admin/root user:
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 2;
Now get the slow query log file path with the command below:
SHOW VARIABLES LIKE 'slow_query_log_file';
Now go to your MySQL DB machine, check the logged slow queries, and optimize them.
Note: this way you can capture slow queries without restarting the MySQL service, but it is better to also put these settings in your configuration file so that slow query logging survives a service restart.
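If you want something a script or monitoring tool can poll for alerts rather than tailing the log file, one option (sketched here; the one-hour window is purely illustrative) is to log slow queries to a table instead of a file:
-- send slow-query records to the mysql.slow_log table
SET GLOBAL log_output = 'TABLE';
SET GLOBAL slow_query_log = 'ON';
-- anything a cron or monitoring job finds here can be turned into an email/SMS alert
SELECT start_time, user_host, query_time, sql_text
FROM mysql.slow_log
WHERE start_time > NOW() - INTERVAL 1 HOUR
ORDER BY query_time DESC;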

MySQL Starting Speed Issue

I have MySQL 5.6 running on a Windows Server 2008 R2 machine. It runs successfully, though I do have some speed issues when creating tables; it's extremely slow. However, this question is about starting the MySQL service when the server restarts. I would expect the MYSQL56 service to start within a minute or two at most, but it can hang for up to 30 minutes. I know it eventually starts, but sometimes I have to kill the process and attempt to restart the service again; sometimes it takes 10 minutes, sometimes longer, and I get so twitchy I have to kill it.
Does anybody know what I should be looking into to try and fix this issue?
Any help would be greatly appreciated. I think that if I can fix this, it might also fix my slow table creation.
Cheers
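One hedged place to start looking, since a long service start is often InnoDB crash recovery replaying a large redo log after an unclean shutdown (these are standard server variables, nothing installation-specific):
-- the error log records how long startup and recovery took
SHOW GLOBAL VARIABLES LIKE 'log_error';
-- large redo logs and buffer pools lengthen crash recovery and startup
SHOW GLOBAL VARIABLES LIKE 'innodb_log_file_size';
SHOW GLOBAL VARIABLES LIKE 'innodb_buffer_pool_size';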

Will MySQL somehow rerun a killed query behind my back?

I ran a very unusual, experimental long query about a week ago from Sequel Pro against a MySQL 5.5 DB. The query is not used in any code; it was just a manual one. I remember killing it after only a few seconds. Then, over the last few days, the DBA keeps finding the exact same query being started again after the previous one was killed. The DBA has verified that the query was killed each time he tried. My workstation has been rebooted at least once and disconnected from the network many times since I first ran that query manually. Sequel Pro had no connection to any DB when one of these reruns occurred, and there seems to be nothing else on my workstation that would trigger it.
My question: is there some way the query could get stuck in a server-side job/run list, not be killed properly, and get rerun?
Found the cause. A DBA has a cron script running in the background that looks for slow queries in the slow query log and tries to run EXPLAIN on them! Apparently that causes the slow queries to get rerun again and again.
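A rough sketch of how this kind of repeat offender can be traced to its source: the host column in information_schema.PROCESSLIST shows which machine and port each connection comes from (the LIKE pattern is just a placeholder for some distinctive fragment of the query text):
-- see who is running the mystery statement right now
SELECT id, user, host, time, info
FROM information_schema.PROCESSLIST
WHERE info LIKE '%distinctive_fragment%';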