What is causing random spikes in local MySQL server query speeds?

So while playing around on my localhost in phpMyAdmin and doing some stuff with SQL, I realized that I would randomly get huge spikes in the time it took to perform a database query. I have a database table with about 3000 entries, and I was running a very simple query to display the first 2500 of them.
On average, running this query was taking around 0.003 to 0.004 seconds. (Of course, loading the phpMyAdmin page took much longer, but we're just looking at the query times.) However, I noticed that occasionally the query times would go up past 0.01 seconds. Once it even shot up to 0.04 seconds. So, my curiosity getting the better of me, I decided to repeatedly run the same query and produced a graph of my results.
I'm not running anything else on my computer that may be interacting with MySQL, and because it's my localhost, I'm the only one doing anything to mess with my database (right?). Slight outliers are understandable, but what's causing the query times to go up anywhere from 3 to 30 times, seemingly at random?
Can anyone help me satiate my curiosity?

I'm not running anything else on my computer that may be interacting with MySQL
But is there anything else running on your computer that might be interacting with your hard drive/CPU on a regular basis? Because that would explain the spikes. Maybe have a scan of running processes, and compare the CPU/disk activity against the spikes.

Even though your database is running on your localhost, it's not running in complete isolation. It is competing for your system's resources with every other process you have running.
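One way to rule out phpMyAdmin and the browser as the source of the variance is to time the query from the mysql console itself. A minimal sketch, using MySQL's built-in profiling (deprecated in newer versions but fine for a quick check); my_table is a hypothetical stand-in for your table:

SET profiling = 1;

SELECT * FROM my_table LIMIT 2500;   -- run the same query a few times
SELECT * FROM my_table LIMIT 2500;
SELECT * FROM my_table LIMIT 2500;

SHOW PROFILES;   -- one duration per run; spikes here implicate the server/OS, not the UI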

Related

Database connections going high and RDS CPU hitting 100% during load testing

When doing load testing on my application, the AWS RDS CPU is hitting 100% and the corresponding requests are erroring out. The RDS instance is an m4.2xlarge. With the same configuration, things were fine until two weeks ago; no infrastructure changes were made to the environment, nor any application-level changes. The whole load test used to run smoothly for the complete 2 hours until two weeks ago. There is no specific exception apart from GENERICJDBCEXCEPTION.
All other necessary services are up and running on respective instances.
We are using SQL as the database management system.
Is there any reason this would happen suddenly? How can we resolve it? Suggestions are much appreciated; this has created many problems.
Monitoring the slow logs and resolving them did not solve the problem.
Should we upgrade the RDS to next version?
Does more data in the DB slow the database down?
We have also tried modifying the connection pool parameters.
With "load testing", are you able to finish one day's work in one hour? That sounds great! Or what do you mean by "load testing"?
Or are you trying to launch 200 threads in one second and they are stumbling over each other? That's to be expected. Do you really get 200 new connections in a single second? Or is it spread out?
1 million queries per day is no problem. A million queries all at once will fail.
Do not let your "load test" launch more threads than you can reasonably expect in real traffic. They will all pile up, and latency will suffer while the server gives each thread an equal chance.
Meanwhile, use the slow log to find the "worst" queries in production. Then let's discuss the worst one or two -- often an improved index makes that query run much faster, so it no longer contributes to the train wreck.
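If the slow log isn't already on, it can be enabled at runtime; a minimal sketch (these settings revert on server restart unless they are also put in the config file):

SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 1;              -- log anything slower than 1 second
SHOW VARIABLES LIKE 'slow_query_log_file';   -- where the log is written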

Server Status showing way too many queries and questions increasing per second on MySQL ClearDB

I have a MySQL DB (ClearDB) serving my backend application hosted on Heroku. That said, I have very limited ways to actually see the logs (no access to the logs, apparently) to even know what these queries are. I know for sure nobody is using the system right now. In my Heroku logs there's really nothing being executed by the backend that would trigger any query.
What I can see from MySQL Workbench when looking at Status and Variables is that the values of Queries and Questions increase by the hundreds every second when I refresh it, which to me seems really odd. The value of Threads_connected is always between 120 and 140, although Threads_running is usually lower than 5.
The "Selects Per Second" keep jumping between 200 and 400.
I am mostly a developer without much skill as a DBA. Are these values normal? Even with no traffic, why are they constantly increasing? If they're not normal, what means can I use to investigate what is actually running there, given that ClearDB does not give me access to the logs?
'show processlist' can only raise my suspicion that something seems off, but how do I proceed from here?
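Without log access, polling the server itself is about the only option. A sketch of what that could look like; information_schema.processlist carries the same data as SHOW PROCESSLIST but can be filtered with SQL:

SHOW FULL PROCESSLIST;   -- full query text instead of the truncated version

-- The same data, filterable (handy with 120+ connected threads):
SELECT id, user, host, db, command, time, state, LEFT(info, 120) AS query
FROM information_schema.processlist
WHERE command <> 'Sleep'
ORDER BY time DESC;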

Why are queries from EC2 to RDS taking around 22ms each, which is very slow

I have an EC2 instance (medium, us-east-1d) and an RDS instance (us-east-1a, db.t2.medium). I have a PHP page with a few dozen queries. Every single query consistently takes about 22 to 23 ms, which is crazy slow; each should take perhaps 1 or 2 ms (locally each query takes less than 1 ms).
Any thoughts on how to find out why these queries are so slow? The database is fairly small and there are plenty of indexes, so that's not the issue. It's RDS somehow being really slow.
A partial list of the queries showed how slow they are, with the first one consistently taking > 100 ms.
UPDATE
We stopped using RDS and moved the database onto the EC2 instance itself, and now it's blazing fast. I still don't know what happened with the RDS instance, but this fixed it.
Probably a simple answer, relating to the speed of light...
The "slow" server (usually 23ms) is hundreds of miles/kms from your client. The "fast" server (usually 0.5ms) is in the same building as your client.
To further confirm the timing, run a simple SELECT 1, preferably several times.
That should measure mostly the lag between client and server.
If the server were on the other side of the globe, the timing, even for SELECT 1, would be over 200ms. The ultimate limit is the speed of light (until the next Einstein figures out that wormholes really exist).
If you are stuck with a long network lag, and you need to avoid it, we can talk about writing a Stored procedure with several queries in it; and then a single cross-network CALL to execute it.
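A sketch of that stored-procedure idea, with hypothetical procedure and table names; the point is that several queries execute server-side while the network lag is paid only once, on the CALL:

DELIMITER //
CREATE PROCEDURE page_summary()          -- hypothetical name
BEGIN
  SELECT COUNT(*) FROM orders;           -- hypothetical tables
  SELECT COUNT(*) FROM users;
  SELECT MAX(created_at) FROM orders;
END //
DELIMITER ;

CALL page_summary();   -- one round trip returns all three result sets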

A Most Puzzling MySQL Problem: Queries Sporadically Slow

This is the most puzzling MySQL problem that I've encountered in my career as an administrator. Can anyone with MySQL mastery help me out a bit on this?
Right now, I run an application that queries my MySQL/InnoDB tables many times a second. These queries are simple and optimized -- either single row inserts or selects with an index.
Usually, the queries are super fast, running in under 10 ms. However, once every hour or so, all the queries slow down. For example, at 5:04:39 today, a bunch of simple queries all took 1-3 seconds or more to run, as shown in my slow query log.
Why is this the case, and what do you think the solution is?
I have some ideas of my own: maybe the hard drive is busy during that time? I do run a cloud server (Rackspace). But I have innodb_flush_log_at_trx_commit set to 0 and tons of buffer memory (10x the table size on disk). So the inserts and selects should be served from memory, right?
Has anyone else experienced something like this before? I've searched all over this forum and others, and it really seems unlike any other MySQL problem I've seen before.
There are many reasons for sudden stalls. For example, even if you are using innodb_flush_log_at_trx_commit=0, InnoDB will need to pause briefly as it extends the size of its data files.
My experience with the smaller instance types on Rackspace is that IO is completely awful. I've seen random writes (which should take 10ms) take 500ms.
There is nothing built into MySQL that will make this problem easier to identify. What you might want to do is take a look at Percona Server's slow query log enhancements. There is a specific feature called "profiling_server" which can break down where the time is spent:
http://www.percona.com/docs/wiki/percona-server:features:slow_extended#changes_to_the_log_format
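A rough sketch of what turning that on could look like, assuming Percona Server is installed; log_slow_verbosity is Percona-specific (not present in stock MySQL), and the exact values should be checked against the docs linked above:

SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 0.1;              -- low threshold to catch the 1-3 s stalls
SET GLOBAL log_slow_verbosity = 'profiling';   -- Percona extension: per-query time breakdown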

MySQL query slow after upgrade

I don't know what our Systems team did to MySQL, but one of my JSP pages now takes about 15 seconds to load. It took only 1 second before the upgrade.
There are only about 200 entries in the related tables, and the page connects to the database about 60 times. It is weird that such a small page has this issue.
Other JSP pages that query MySQL have this issue too.
I want to know how to "DEBUG" this issue so I can tell our Systems team what to change. Your reply is highly appreciated!
One of the first things to try is running the same queries from the mysql console and seeing if they have the same performance problems. If so, you can use EXPLAIN to see the query plan and check whether it is doing something bad. With only 200 entries, though, it probably isn't an index issue.
Your page should only physically connect to the DB once. If you're doing it 60 times, that isn't good.
It could be an issue with connection pooling -- the pool size may be set too small, so requests block waiting for a connection to become available. Is this performance problem consistent?
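One way to check the pooling theory from the database side is to watch the connection counters while reloading the page; a sketch using standard MySQL status variables:

SHOW GLOBAL STATUS LIKE 'Connections';    -- cumulative connection attempts
SHOW STATUS LIKE 'Threads_connected';     -- connections currently open

-- Reload the JSP page and run the first statement again: if 'Connections'
-- jumps by roughly 60 per page load, the pool isn't reusing connections.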
Try running your query through EXPLAIN:

EXPLAIN SELECT <your query>;

Here is the link to go with it:
http://dev.mysql.com/doc/refman/5.1/en/explain.html