Slow MySQL query and CPU Throttling

I host my CakePHP 1.3.x application on a shared host (HostMonster). I received DNS errors from Google Webmaster Tools, and when I contacted my host's technical support, they indicated that CPU throttling was occurring on my account and pointed me to a document about CPU throttling.
Following that document, I checked tmp/mysql_slow_queries and found that some queries take more than 2 seconds, and some of those queries are as simple as:
# Sat Dec 14 02:00:38 2013
# Query_time: 3.286778 Lock_time: 0.000000 Rows_sent: 0 Rows_examined: 0
use twoindex_quran;
SET timestamp=1387011638;
SET NAMES utf8
I need to know why CakePHP issues a query such as SET timestamp, and how I can prevent CakePHP from making such a query. I also need to know what makes such a simple query slow.

There are a few things that may be worth noting:
You may need to upgrade CakePHP to use the PDO style of PHP MySQL connections, since the older mysql_connect interface is being deprecated.
Check the version of PHP your host is using, and make sure there is nothing strange between the host's PHP version and CakePHP's PHP version requirements. Did the PHP version change recently, causing these issues where they didn't exist before? If so, can you alter the .htaccess to use the previous version, or did they cease to support it? (See the sketch after this list.)
A three-second query should not cause a throttle condition on the host. The query information you list does not look like a CakePHP-specific query; it looks like the PHP connection being set up against a specific database. There is nowhere in the code I know of that calls SET timestamp or SET NAMES. Maybe someone can enlighten us on that?
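On the .htaccess point above: some shared hosts let you pin the PHP version per directory with a handler line. A hypothetical sketch only; the exact handler token is host-specific, so check your host's documentation before using it:

# Hypothetical example; handler names vary by host
AddHandler application/x-httpd-php5 .php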

Related

Lost connection to MySQL server during query on random simple queries

FINAL UPDATE: We fixed this problem by finding a way to accomplish our goals without forking. But forking was the cause of the problem.
---Original Post---
I'm running a Ruby on Rails stack; our MySQL server is separate, but housed at the same site as our app servers. (We've tried swapping it out for a different MySQL server with double the specs, but no improvement was seen.)
During business hours we get a handful of these, from no particular query:
ActiveRecord::StatementInvalid: Mysql2::Error: Lost connection to MySQL server during query
Most of the queries that fail are really simple, and there seems to be no pattern between one query and another. This all started when I upgraded from Rails 4.1 to 4.2.
I'm at a loss as to what to try. Our database server stays below 5% CPU throughout the day. I do get bug reports from users whose random interactions fail because of this, so it's not queries that have been running for hours or anything like that; of course, when they retry the exact same thing, it works.
Our servers are configured by cloud66.
So, in short: our MySQL server is going away for some reason, but it's not because of a lack of resources. It's also a brand-new server; we migrated from another server when this problem started.
This also happens to me on localhost while developing features sometimes, so I don't believe it's a load issue.
We're running the following:
ruby 2.2.5
rails 4.2.6
mysql2 0.4.8
UPDATE: per the first answer below I increased our max_connections variable to 500 last night, and confirmed the increase via
show global variables like 'max_connections';
I'm still getting dropped connections; the first one today happened only a few minutes ago:
ActiveRecord::StatementInvalid: Mysql2::Error: Lost connection to MySQL server during query
I ran select * from information_schema.processlist; and got 36 rows back. Does this mean my app servers were running 36 connections at that moment, or can a process hold multiple connections?
UPDATE: I just set net_read_timeout = 60 (it was 30 before). I'll see if that helps.
UPDATE: It didn't help, I'm still looking for a solution...
Here's my database.yml with credentials removed:
production:
  adapter: mysql2
  encoding: utf8
  host: localhost
  database:
  username:
  password:
  port: 3306
  reconnect: true
The connection to MySQL can be disrupted by a number of means, but I would recommend revisiting Mario Carrion's answer, since it's a very wise one.
It seems likely that the connection is disrupted because it's being shared with other processes, causing communication protocol errors...
...this could easily happen if the connection pool is process-bound, which I believe it is in ActiveRecord, meaning that the same connection could be "checked out" a number of times simultaneously in different processes.
The solution is that database connections must be established only AFTER the fork statement in the application server.
I'm not sure which server you're using, but if you're using a warmup feature - don't.
If you're running any database calls before the first network request - don't.
Either of these actions could potentially initialize the connection pool before forking occurs, causing the MySQL connection pool to be shared between processes while the locking system isn't.
I'm not saying this is the only possible reason for the issue; as stated by @sloth-jr, there are other options... but most of them seem less likely according to your description.
Sidenote:
I ran select * from information_schema.processlist; and got 36 rows back. Does this mean my app servers were running 36 connections at that moment, or can a process hold multiple connections?
Each process can hold a number of connections. In your case, you might have up to 500 × 36 connections. (See edit.)
In general, the number of connections in the pool can often be the same as the number of threads in each process (it shouldn't be less than the number of threads, or contention will slow you down). Sometimes it's good to add a few more, depending on your application.
EDIT:
I apologize for ignoring the fact that the process count was referencing the MySQL data and not the application data.
The process count you showed is the MySQL server's data, and MySQL seems to use a thread-per-connection IO scheme. The "process" data actually counts active connections, not actual processes or threads (although it should translate to the number of threads as well).
This means that out of a possible 500 connections per application process (i.e., if you're using 8 processes for your application, that would be 8 × 500 = 4,000 allowed connections), your application has only opened 36 connections so far.
This indicates a timeout error. It's usually a general resource or connection error.
I would check your MySQL config for max connections on MySQL console:
show global variables like 'max_connections';
And ensure the number of pooled connections configured in Rails' database.yml is less than that:
pool: 10
Note that database.yml reflects the number of connections that will be pooled by a single Rails process. If you have multiple processes or other servers like Sidekiq, you'll need to add them together, as in the example below.
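For example (illustrative numbers only): four Rails processes with pool: 10 plus two Sidekiq processes with pool: 25 can open up to 4 × 10 + 2 × 25 = 90 connections, so max_connections needs to sit comfortably above 90.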
Increase max_connections if necessary in your MySQL server config (my.cnf), assuming your kit can handle it.
[mysqld]
max_connections = 100
Note other things might be blocking too, e.g. open files, but looking at connections is a good starting point.
You can also monitor active queries:
select * from information_schema.processlist;
as well as monitoring the MySQL slow log.
One issue may be a long-running update command. If you have a slow-running command that affects a lot of records (e.g. a whole table), it might be blocking even the simplest queries. This means you could see random queries time out, but if you check MySQL status, the real cause is another long-running query.
Things you did not mention but should take a look at:
Are you using Unicorn? If so, are you reconnecting and disconnecting in your after_fork and before_fork hooks? (See the sketch below.)
Is reconnect: true set in your database.yml configuration?
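For reference, the usual Unicorn hook pattern looks roughly like this — a sketch, assuming a standard config/unicorn.rb, not taken from the question:

# config/unicorn.rb (sketch)
before_fork do |server, worker|
  # Drop the master's connection so workers don't inherit and share one socket
  ActiveRecord::Base.connection.disconnect! if defined?(ActiveRecord::Base)
end

after_fork do |server, worker|
  # Each worker establishes its own connection after the fork
  ActiveRecord::Base.establish_connection if defined?(ActiveRecord::Base)
end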
Well, at first glance this sounds like your web server is keeping MySQL sessions open and sometimes a user runs into a timeout. Try disabling keep-alive for MySQL sessions.
It will be a hog, but you only use 5%...
Other tips:
Enable the MySQL slow query log and take a look.
Write a short script that pulls and logs the MySQL processlist every minute, and cross-check the log against the timeouts (see the sketch after this list).
Look at the pool size in your DB connection, or set one!
http://guides.rubyonrails.org/configuring.html#database-pooling
The total, across all your processes, should not exceed the max_connections MySQL allows!
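For the processlist-logging tip above, a minimal sketch, assuming the mysql2 gem; host, credentials, and the log path are placeholders:

# processlist_logger.rb -- hypothetical monitoring sketch
require 'mysql2'

client = Mysql2::Client.new(host: 'localhost', username: 'monitor', password: 'secret')

loop do
  # Snapshot every connection MySQL currently knows about
  rows = client.query('SELECT id, user, host, db, command, time, state FROM information_schema.processlist')
  File.open('processlist.log', 'a') do |f|
    f.puts "--- #{Time.now} ---"
    rows.each { |row| f.puts row.inspect }
  end
  sleep 60 # once a minute; cross-check timestamps against your timeout errors
end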
Good luck!
Find out if your database limits the number of concurrent connections, because normally an SQL database is supposed to allow more than one active connection.
(Contact your hosting provider.)
Would you mind posting some of your queries? The MySQL documentation has this to say about it:
https://dev.mysql.com/doc/refman/5.7/en/error-lost-connection.html
TL;DR:
Network problems: are any of your boxes renewing leases periodically, or experiencing other network connection errors (netstat / ss), firewall timeouts, etc.? Not sure how managed your hosts are by cloud66...
Query timed out: this can happen if you've got commands backed up behind blocking statements (e.g., ALTERs or locking backups on MyISAM tables). How simple are your queries? No Cartesian products in play? EXPLAINing the query could help.
Exceeding max_allowed_packet: are you storing pictures, video content, etc.?
There are lots of possibilities here, and without more information it will be difficult to pinpoint this.
I would look first at mysql_error.log, then work your way from the DB server back to your application.
UPDATE: this didn't work.
Here's the solution. Special thanks to @Myst for pointing out that forking can cause issues; I had no idea to look at this particular code. The errors seemed random because we forked in this fashion in several places.
It turns out that when I was forking processes, Rails was using the same database connection for all forked processes. This created a situation where, when one of the processes (the parent process?) terminated the database connection, the remaining processes would have their connection interrupted.
The solution was to change this code:
def recalculate_completion
  Process.fork do
    if self.course
      self.course.user_groups.includes(user: [:events]).each do |ug|
        ug.recalculate_completion
      end
    end
  end
end
into this code:
def recalculate_completion
  # Release the parent's connection before forking so the child
  # doesn't inherit and share the same socket
  ActiveRecord::Base.remove_connection
  Process.fork do
    # The child opens its own, private connection
    ActiveRecord::Base.establish_connection
    if self.course
      self.course.user_groups.includes(user: [:events]).each do |ug|
        ug.recalculate_completion
      end
    end
    # Clean up the child's connection before it exits
    ActiveRecord::Base.remove_connection
  end
  # The parent reconnects for its own further work
  ActiveRecord::Base.establish_connection
end
Making this change stopped the errors from our servers, and everything appears to be working well now. If anyone has more info as to why this worked, I would be happy to hear it, as I would like to have a deeper understanding of this.
Edit: It turns out this didn't work either... we still got dropped connections, though not as often.
If you have query cache enabled, please reset it and it should work.
RESET QUERY CACHE;

loopback connector to MySQL does not close

I am presently using an application on the IBM Bluemix platform that requires a MySQL database.
I decided to use a MySQL database (experimental support), which supports a maximum of 10 concurrent connections.
The problem is that if I restart my app 10 times (through cf restart, or using the dashboard), it becomes impossible to run the app, and the logs clearly say I am using the maximum number (10) of connections.
The problem, thus, is that either connections are not closed when the app is stopped, or, when it is (re)started, it does not reuse the already existing MySQL connections.
At this point, I am not sure about what to do. Can anyone help?
EDIT
Versions: I have used loopback-connector-mysql 2.2.0 and loopback-datasource-juggler#2.41.0.
I have found a solution.
After contacting support, I learned that the time before closing is 28800 seconds (that means 8 hours), and they won't change it. However, I managed to get around this problem by changing the application: in files such as datasource.js I set the "connectionLimit" to 3 instead of 9 (a sketch follows). Switching from the experimental MySQL service to ClearDB MySQL is also a valid option.
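For illustration, a hedged sketch of what such a datasource entry might look like; names and values are placeholders, and connectionLimit is the pool-size option the MySQL connector passes through to the underlying driver:

{
  "mysqlDs": {
    "connector": "mysql",
    "host": "<host>",
    "port": 3306,
    "database": "<database>",
    "username": "<user>",
    "password": "<password>",
    "connectionLimit": 3
  }
}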
This is not the exact answer you are looking for, but it is a workaround.
You can set up a timeout in the MySQL configuration, so that MySQL closes a connection when it has been idle for some time.
Please refer to these documents:
https://dev.mysql.com/doc/refman/5.0/en/gone-away.html
https://dev.mysql.com/doc/refman/5.0/en/server-system-variables.html#sysvar_wait_timeout
Probably you will need to set something like:
wait_timeout = 120 # 2 minutes
interactive_timeout = 120 # 2 minutes

MySQL - Slow query - wp_options table. Website unable to handle traffic

After spending several days researching, I have placed a website on a c1.medium instance (Amazon Linux) and the MySQL database on a db.m1.medium instance. The RDS instance is running MySQL version 5.6.13. I have allocated 100 GB for the DB instance and have set the Provisioned IOPS at 1,000. The website is photo-based, permits user uploads, and at peak hours has 400+ visitors.
Once I enabled slow query logging, I found the issue appears to be with the wp_options table, which, when looking into phpMyAdmin, I found contains information on the WordPress plugins and theme. Example:
# Time: 140120 3:04:17
# User@Host: xxxx Id: 744
# Query_time: 49.248039 Lock_time: 0.000180 Rows_sent: 485 Rows_examined: 538
SET timestamp=1390186963;
SELECT option_name, option_value FROM wp_options WHERE autoload = 'yes';
After experimenting with a few of the DB parameters, I set query_cache_type to 1 and query_cache_size to 64 MB. I was hoping that enabling caching would stop the database from repeatedly reading the wp_options table, but that unfortunately doesn't appear to be the case. Any suggestions? What would be the next steps to figure out the cause of this issue? Looking at the CloudWatch metrics, the hardware appears to be sufficient, but maybe not?
[Screenshots of the CloudWatch metrics for the EC2 and RDS instances omitted.]
Unless
Query_time: 49.248039 Lock_time: 0.000180 Rows_sent: 485 Rows_examined: 538
is a copy/paste error, something is very, very wrong here. That's 50 seconds to select 485 rows out of 538! The only reason for this that I can imagine is that you have some EXTREMELY long values in the option_value column, which is a longtext. Try
SELECT option_name, length(option_value) FROM wp_options WHERE autoload = 'yes';
to check whether something went wrong here, like an image or a video file (as you say it's a photo website) sneaking into the theme configuration; it would have to be at least a few dozen gigabytes in size to produce your effect.
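A variant of the same check that surfaces the largest values first, so anything huge stands out immediately:

SELECT option_name, LENGTH(option_value) AS len
FROM wp_options
WHERE autoload = 'yes'
ORDER BY len DESC
LIMIT 20;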
If you can, mysqldump your database, import the dump into a local database, and try the same SELECT on your local copy. This might help in deciding whether it's a problem with the data or some artificial limit on your Amazon instance that's set too low.
It appears that there is a spike of incoming traffic large enough to create a significant number of server threads. These threads take up so much memory that the instance is pushed to using swap which slows everything to a crawl.
Use an FCGI configuration for PHP. If you are using mod_php, every Apache process loads mod_php, even when it is not serving a request that requires PHP processing.
Install APC if you have not already. This will cache PHP bytecode and speed up requests.
Install W3 Total Cache. Configure it to use memory caching with memcached. You will likely need to install memcached and a PHP memcached extension.
If the above isn't enough, set up Varnish and/or pass your site through CloudFront or Cloudflare.

Mysql resource temporarily unavailable

I'm seeing a few of these errors during high load times:
mysql_connect() [function.mysql-connect]: [2002] Resource temporarily unavailable (trying to connect via unix:///var/lib/mysql/mysql.sock)
From what I can tell the mysql server isn't hitting its max connections limit, but there's something else stopping it from serving the query. What other limits would MySQL be hitting?
I'm running RHEL 6.2 64bit with MySQL 5.5.21
Let's assume your system is currently Unix-based (as given in your problem statement). If this is correct, here's the set of issues you may be running into:
You've run out of memory available to MySQL.
This is the most likely problem you're facing. Each connection in MySQL's connection pool requires memory to function, and if this resource is exhausted, no further connections can be made. Of course, the memory footprints and maximum packet sizes of various operations can be tuned in your equivalent to my.cnf if you discover this to be an issue.
Here's an additional thread that can help there, but you may also consider using simpler profiling tools like top to get a good ballpark estimate of what's going on.
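If memory does turn out to be the issue, these are the usual my.cnf knobs to revisit; the values below are purely illustrative, so size them to your hardware:

[mysqld]
max_connections    = 150
# Worst-case memory use grows roughly with connections × per-connection buffers
sort_buffer_size   = 2M
read_buffer_size   = 1M
join_buffer_size   = 1M
max_allowed_packet = 16M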
You've run out of file descriptors available to your MySQL user account.
Another common issue: if you're trying to service requests that require file IO above the 1,024-descriptor boundary (by default), you will run into cases where the operation simply fails. This is because most systems specify a soft and a hard limit on the number of open file descriptors each user can have available at one time, and walking over this threshold can cause problems.
This will usually have a series of glaringly obvious signs expressed in your log files. Check /var/log/messages and comparable directories (for example, /var/log/mysql) to see if you can find anything interesting.
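You can also compare MySQL's own descriptor budget against what it is actually using; both of these exist in stock MySQL, and open_files_limit can be raised in my.cnf if your OS limits allow:

SHOW GLOBAL VARIABLES LIKE 'open_files_limit';
SHOW GLOBAL STATUS LIKE 'Open_files';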
You've run into a livelock or deadlock scenario where your thread is unsatisfiable.
Corollary to memory and file descriptor exhaustion, threads can time out if you've overstepped the computational load your system is capable of handling. It won't throw this error message, but this is something to watch out for in the future.
Your system is running out of PIDs available to fork.
Another common scenario: fork only has so many PIDs available for its use at any given time. If your system is simply overforked, it will cease to be able to service requests.
The easiest check for this is to see if any other services can connect through to the machine. For example, trying to SSH into the box and discovering that you cannot is a big clue.
An upstream proxy or connection manager has run out of resources and ceased servicing requests.
If you have any service layer between your client and MySQL, it bears inspecting to see if it has crashed, hung, or otherwise become unstable. The advice above applies.
Your port mapper has exhausted itself after 65,536 connections.
Unlikely, but again, a possible exhaustion case. Checking the trivial service connection as above is, ehm, also the best port of call here.
In short: this is a resource exhaustion scenario, inclusive of the server simply being "down". You're going to have to profile your system further to see what you're blocking on. All the error message gives us in this case is the fact the resource is unavailable to the client -- we'd need to see more information about the server to determine a more adequate remedy.
I still haven't found which limits it was hitting, but I did manage to work around the problem. There was a problem with our session table (in vBulletin), which uses the MEMORY engine. The indexes for this table were HASH, so when vBulletin purged the table once an hour it would lock it just long enough to hold up other queries and push MySQL to the limit of its resources.
By changing the indexes to BTREE, MySQL could delete the rows from the session table much more quickly and avoid whatever limits were being reached previously. The errors only started when we upgraded our master DB server to MySQL 5.5, so I'm guessing MEMORY tables are handled differently in that release.
See http://www.mysqlperformanceblog.com/2008/02/01/performance-gotcha-of-mysql-memory-tables/ for information on the speed increase from using BTREE indexes over HASH for MEMORY tables.
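For illustration, the kind of index change described would look something like this; the table and column names are hypothetical, not vBulletin's actual schema:

-- Rebuild a MEMORY table's HASH index as BTREE (names are illustrative)
ALTER TABLE session
  DROP INDEX idx_lastactivity,
  ADD INDEX idx_lastactivity USING BTREE (lastactivity);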
Geez, this could be so many things. It could be that the socket buffer space is exhausted. It could be that mysql is not accepting connections as fast as they are coming in and the backlog limit is reached (though I'd expect that to give you a "Connection refused" error; I don't know for sure that's what you'd get for a Unix domain socket). It could be any of the things @MrGomez pointed out.
Since you are running Apache and MySQL on the same server and this is a problem under high load, it could well be that Apache is starving the system of some resource and you're just not seeing (noticing?) the dropped/failed incoming connections/requests in your logs.
Are you using connection pooling? If not, I'd start there.
I'd also look for errors in the Apache logs and syslog around the same time as the mysql_connect error and see what else turns up. I'd especially recommend getting MySQL moved over to its own separate dedicated server.
In my case, I was working with JSON data types with PDO (the PHP driver).
I was using fetch to retrieve one item but forgot to add LIMIT 1 to the query. Adding it solved the problem.
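For illustration, with hypothetical table and column names, the fix was as small as:

-- Before: fetch() reads one row, but the full result set is still produced
SELECT payload FROM events WHERE status = 'open';
-- After: LIMIT 1 bounds the result when only one row is needed
SELECT payload FROM events WHERE status = 'open' LIMIT 1;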

Doctrine 2 Close Connection

I use Doctrine 2 with PDO and MySQL.
When stress-testing the server, MySQL reports a lot of aborted connections (up to 20%).
I am trying to locate the issue.
The MySQL manual suggests ensuring that connections to the database are closed properly.
http://dev.mysql.com/doc/refman/5.0/en/communication-errors.html
I can't find any information on whether Doctrine actually closes connections, or whether it uses persistent connections.
Also, is there anything else that can account for aborted connections? I am at a loss here.
PS: The server is Ubuntu 10.04, nginx 1.x, PHP 5.3.5 (FPM), and MySQL 5.1.41.
From what I've observed, Doctrine uses persistent connections.
We stumbled upon a problem when launching unit tests in Symfony2, where the database was spammed with connections in "Sleep" status. The solution that worked for us:
$entityManager->getConnection()->close();
I have the same problem, and
$entityManager->getConnection()->close();
seems to work, but it works 'better' in some PHP versions if you add
gc_collect_cycles()
after closing the connections.
I'm still having this kind of issue in older PHP versions; it may be something related to the garbage collector, I guess.
I'll keep you updated if I find a final solution for all PHP versions.
I have found this tweak:
https://sroze.io/phpunit-mysql-too-many-connections-error-ab52cd5798c5
Setting the processIsolation="true" option in the PHPUnit XML configuration file seems to do the trick (example below).
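That option lives on the root element of the PHPUnit configuration file; a minimal sketch:

<!-- phpunit.xml: run each test in its own PHP process, so connections
     are opened and torn down per test (slower, but isolates state) -->
<phpunit processIsolation="true">
    <!-- test suites etc. -->
</phpunit>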