Rails Resque workers fail with MySQL "Too many connections"

We recently switched our (ruby) job queueing system from DelayedJob to Resque.
While our latency has gone down and we've eliminated the database bottleneck, we're now seeing a new problem: one or more of our workers seems to leave a database connection open when it exits. When we look at the process list, there are hundreds of connections in a 'sleep' state. They eventually time out after 90 seconds. We've been throttling back our workers to keep from running out of client connections, but what we really need to find out is which one (or more) of our jobs is not being polite about disconnecting via the mysql2 Ruby client.
Any ideas how we could (1) find the culprits or (2) instrument our code so we can make sure that we are actually disconnecting before the job terminates?
Rails 4.0.x
Resque 1.25.2
mysql2 gem 0.3.16

Make sure your Resque process disconnects from the database before forking and re-establishes the connection afterwards. Create an initializer file config/initializers/resque.rb which contains:
Resque.before_fork do
  # Release the parent process's connection before workers are forked
  defined?(ActiveRecord::Base) && ActiveRecord::Base.connection.disconnect!
end

Resque.after_fork do
  # Each forked worker establishes its own fresh connection
  defined?(ActiveRecord::Base) && ActiveRecord::Base.establish_connection
end
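For the instrumentation half of the question, one rough sketch, assuming ActiveRecord and Resque's per-job hooks (the module and job names below are hypothetical), is to log how many connections a job still has checked out when it finishes:

module ConnectionAudit
  # Resque calls class methods starting with after_perform once a job completes
  def after_perform_audit_connections(*_args)
    in_use = ActiveRecord::Base.connection_pool.connections.count(&:in_use?)
    Rails.logger.info "#{self}: #{in_use} ActiveRecord connection(s) still checked out"
    ActiveRecord::Base.clear_active_connections!   # return anything left to the pool
  end
end

class ImportJob
  extend ConnectionAudit
  @queue = :default

  def self.perform(*args)
    # job body
  end
end

The hook only reports what that worker process holds, which is usually enough to spot the job that leaks.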

I gave up on Resque and moved to Sidekiq. I'm much happier now.

Related

How to debug why Rails app in Puma server goes down

I have a webapp with a Rails API running in a Puma server with a MySQL database.
I'm running a batch process that writes a lot of logs to a log file. At a certain point, after running for some time, the Puma server goes down. In the logs everything looks fine until, apparently for no reason, it goes down. I don't see any error, not even in puma.err. I'm not sure whether it's related to the server or to the database (something to do with the pool?). I'm blind; I don't know where to look to debug the problem.
UPDATE: I think I have narrowed down the problem. I have realised that the Puma server always goes down when trying to load the same element. I tried to reproduce it in development and it works there. So I think I know what is killing it. I'm not sure, but I strongly suspect the element I'm trying to load in my batch process runs a lot of queries within a single transaction in the MySQL database. I think somehow the transaction is not able to process all the queries, and for some reason the Puma server goes down. I don't know if this makes sense, but this is my main suspect. I have read something about transaction sizes and log files: How do I determine maximum transaction size in MySQL?
My new question is: if this is really happening, can I see an error related to this in any log file (Puma or MySQL)?
UPDATE 2: I'm attaching environment information:
DEVELOPMENT: MacOS, Processor: 2.4 GHz, RAM: 8 GB
PRODUCTION: Ubuntu 14, Processor: 2.5 GHz, RAM: 3.7 GB (AWS instance)
Regarding the Puma configuration, I'm not very skilled, but I'm starting the server with a config file containing just two lines in both the development and production environments:
stdout_redirect 'path_to_log_file.log', 'path_to_error_file.log', true
bind 'unix:///tmp/puma.sock'
When starting up I don't see any workers configuration either, so I assume I have only one worker and the default of 0 to 16 threads.
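For reference, the two lines above rely on Puma's defaults; making the worker and thread counts explicit in the same config file would look roughly like this (the numbers are placeholders, not a recommendation):

# config/puma.rb - explicit, placeholder values instead of the defaults
workers 2          # cluster mode with 2 worker processes
threads 1, 5       # min and max threads per worker
stdout_redirect 'path_to_log_file.log', 'path_to_error_file.log', true
bind 'unix:///tmp/puma.sock'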

MySQL No. of connections count

I have a medium-sized database (7 GB) with around 200 concurrent users. I am getting some database lag issues; suddenly my node-mysql client freezes during selects and inserts.
As part of troubleshooting I checked SHOW STATUS on the DB. Everything seemed to be okay, except that the Connections attribute is 262050.
I want to understand whether this number is okay or whether the figure is exorbitant.
Exorbitant. Definitely.
Find a machine running your node app, and watch its logs and/or error outputs while you stop and restart the MySQL server.
Hopefully the node app will chatter away telling you that lots of connections were closed as you stop the MySQL server.
This looks like a connection leak. You probably have some sort of problem with your node connection pooling, or maybe with releasing connections after your node app uses them.
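Note that SHOW STATUS reports Connections as a cumulative counter of connection attempts since the server started, while Threads_connected shows what is open right now; comparing the two (both are standard MySQL status variables) helps confirm a leak:

SHOW STATUS LIKE 'Threads_connected';  -- connections open right now
SHOW STATUS LIKE 'Connections';        -- connection attempts since the server started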

Lost connection to MySQL server during query on random simple queries

FINAL UPDATE: We fixed this problem by finding a way to accomplish our goals without forking. But forking was the cause of the problem.
---Original Post---
I'm running a Ruby on Rails stack; our MySQL server is separate, but housed at the same site as our app servers. (We've tried swapping it out for a different MySQL server with double the specs, but no improvement was seen.)
During business hours we get a handful of these, from no particular query:
ActiveRecord::StatementInvalid: Mysql2::Error: Lost connection to MySQL server during query
most of the queries that fail are really simple, and there seems to be no pattern between one query and another. This all started when I upgraded from Rails 4.1 to 4.2.
I'm at a loss as to what to try. Our database server is less than 5% CPU throughout the day. I do get bug reports from users who have random interactions fail due to this, so it's not queries that have been running for hours or anything like that, of course when they retry the exact same thing it works.
Our servers are configured by cloud66.
So in short: our MySQL server is going away for some reason, but it's not for lack of resources, and it's a brand new server; we migrated to it from another server when this problem started.
This also happens to me on localhost while developing features sometimes, so I don't believe it's a load issue.
We're running the following:
ruby 2.2.5
rails 4.2.6
mysql2 0.4.8
UPDATE: per the first answer below I increased our max_connections variable to 500 last night, and confirmed the increase via
show global variables like 'max_connections';
I'm still getting dropped connections; the first one today was only a few minutes ago...
ActiveRecord::StatementInvalid: Mysql2::Error: Lost connection to MySQL server during query
I ran select * from information_schema.processlist; and I got 36 rows back. Does this mean my app servers were running 36 connections at that moment? or can a process be multiple connections?
UPDATE: I just set net_read_timeout = 60 (it was 30 before) I'll see if that helps
UPDATE: It didn't help, I'm still looking for a solution...
Here's my database.yml with credentials removed.
production:
  adapter: mysql2
  encoding: utf8
  host: localhost
  database:
  username:
  password:
  port: 3306
  reconnect: true
The connection to MySQL can be disrupted by a number of means, but I would recommend revisiting Mario Carrion's answer since it's a very wise answer.
It seems likely that the connection is disrupted because it's being shared with other processes, causing communication protocol errors...
...this could easily happen if the connection pool is process bound, which I believe it is, in ActiveRecord, meaning that the same connection could be "checked-out" a number of times simultaneously in different processes.
The solution is that database connections must be established only AFTER the fork statement in the application server.
I'm not sure which server you're using, but if you're using a warmup feature - don't.
If you're running any database calls before the first network request - don't.
Either of these actions could potentially initialize the connection pool before forking occurs, causing the MySQL connection pool to be shared between processes while the locking system isn't.
I'm not saying this is the only possible reason for the issue, as stated by #sloth-jr, there are other options... but most of them seem less likely according to your description.
Sidenote:
I ran select * from information_schema.processlist; and I got 36 rows back. Does this mean my app servers were running 36 connections at that moment? or can a process be multiple connections?
Each process could hold a number of connections. In your case, you might have up to 500X36 connections. (see edit)
In general, the number of connections in the pool can often be the same as the number of threads in each process (it shouldn't be less than the number of threads, or contention will slow you down). Sometimes it's good to add a few more depending on your application.
EDIT:
I apologize for ignoring the fact that the process count was referencing the MySQL data and not the application data.
The process count you showed is the MySQL server data, which seems to use a thread per connection IO scheme. The "Process" data actually counts active connections and not actual processes or threads (although it should translate to the number of threads as well).
This means that out of a possible 500 connections per application process (i.e., if you're using 8 processes for your application, that would be 8 × 500 = 4,000 allowed connections), your application has only opened 36 connections so far.
This indicates a timeout error. It's usually a general resource or connection error.
I would check your MySQL config for max connections on MySQL console:
show global variables like 'max_connections';
And ensure the number of pooled connections used by Rails database.yml is less than that:
pool: 10
Note that database.yml reflects number of connections that will be pooled by a single Rails process. If you have multiple processes or other servers like Sidekiq, you'll need to add them together.
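As a worked example (hypothetical numbers): 4 Rails processes with pool: 10 can hold up to 4 × 10 = 40 connections, and one Sidekiq process with concurrency 25 can add up to 25 more, so max_connections needs headroom for at least 65.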
Increase max_connections if necessary in your MySQL server config (my.cnf), assuming your kit can handle it.
[mysqld]
max_connections = 100
Note other things might be blocking too, e.g. open files, but looking at connections is a good starting point.
You can also monitor active queries:
select * from information_schema.processlist;
as well as monitoring the MySQL slow log.
One issue may be a long-running update command. If you have a slow-running command that affects a lot of records (e.g. a whole table), it might be blocking even the simplest queries. This means you could see random queries time out, but if you check MySQL status, the real cause is another long-running query.
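A quick way to spot such a blocker while the errors are happening is to sort the processlist by time, for example:

SELECT id, user, time, state, LEFT(info, 100) AS query_snippet
FROM information_schema.processlist
WHERE command <> 'Sleep'
ORDER BY time DESC;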
Things you did not mention but that you should take a look at:
Are you using Unicorn? If so, are you reconnecting and disconnecting in your after_fork and before_fork hooks?
Is reconnect: true set in your database.yml configuration?
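If it is Unicorn, the usual pattern (a sketch, assuming ActiveRecord; adjust to your actual server) looks like this in config/unicorn.rb:

before_fork do |_server, _worker|
  # drop the master's connection so workers don't inherit and share it
  defined?(ActiveRecord::Base) && ActiveRecord::Base.connection.disconnect!
end

after_fork do |_server, _worker|
  # each worker opens its own connection after the fork
  defined?(ActiveRecord::Base) && ActiveRecord::Base.establish_connection
end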
Well, at first glance this sounds like your webserver is keeping MySQL sessions open and sometimes a user runs into a timeout. Try disabling the keep-alive on MySQL sessions.
It will be a hog, but you only use 5% anyway...
Other tips:
Enable the MySQL slow query log and take a look.
Write a short script that pulls and logs the MySQL processlist every minute and cross-check the log against the timeouts (see the one-liner after this list).
Look at the pool size in your DB connection, or set one!
http://guides.rubyonrails.org/configuring.html#database-pooling
It should be consistent with the max_connections MySQL is willing to accept!
Good luck!
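A minimal version of the processlist-logging idea from the second tip could be a single cron entry (the credentials file and log path are placeholders):

* * * * * mysql --defaults-extra-file=/etc/mysql/logger.cnf -e "SHOW FULL PROCESSLIST" >> /var/log/mysql_processlist.log 2>&1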
Find out whether your database is limited in the number of simultaneous connections it allows, because normally an SQL database is supposed to support more than one active connection.
(Contact your network provider.)
Would you mind posting some of your queries? The MySQL documentation has this to say about it:
https://dev.mysql.com/doc/refman/5.7/en/error-lost-connection.html
TL;DR:
Network problems; are any of your boxes renewing leases periodically, or experiencing other network connection errors (netstat / ss), firewall timeouts, etc.? Not sure how managed your hosts are by cloud66...
Query timed out. This can happen if you've got commands backed up behind blocking statements (e.g., alters/locking backups on MyISAM tables). How simple are your queries? No cartesian products in play? EXPLAINing the query could help.
Exceeding max_allowed_packet. Are you storing pictures, video content, etc.?
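For the packet-size point, a quick check of the current limit (a standard server variable) is:

SHOW VARIABLES LIKE 'max_allowed_packet';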
There are lots of possibilities here, and without more information it will be difficult to pinpoint this.
I would look first at the MySQL error log, then work your way from the DB server back to your application.
UPDATE: this didn't work.
Here's the solution. Special thanks to #Myst for pointing out that forking can cause issues; I had no idea to look at this particular code. The errors seemed random because we forked in this fashion in several places.
It turns out that when I was forking processes, Rails was using the same database connection for all forked processes. This created a situation where, when one of the processes (the parent process?) terminated the database connection, the remaining process would have its connection interrupted.
The solution was to change this code:
def recalculate_completion
  Process.fork do
    if self.course
      self.course.user_groups.includes(user: [:events]).each do |ug|
        ug.recalculate_completion
      end
    end
  end
end
into this code:
def recalculate_completion
  # Detach the parent's connection so the child doesn't inherit and share it
  ActiveRecord::Base.remove_connection
  Process.fork do
    # The child opens its own connection...
    ActiveRecord::Base.establish_connection
    if self.course
      self.course.user_groups.includes(user: [:events]).each do |ug|
        ug.recalculate_completion
      end
    end
    # ...and drops it before exiting
    ActiveRecord::Base.remove_connection
  end
  # The parent reconnects after the fork
  ActiveRecord::Base.establish_connection
end
Making this change stopped the errors from our servers and everything appears to be working well now. If anyone has any more info as to why this worked I would be happy to hear it, as I would like to have a deeper understanding of this.
Edit: it turns out this didn't work either... we still got dropped connections, though not as often.
If you have query cache enabled, please reset it and it should work.
RESET QUERY CACHE;

Restart MySQL automatically when Ubuntu on an EC2 micro instance kills it after running out of memory

When the system runs out of memory, Ubuntu 12.04 kills the mysql process:
Out of memory: Kill process 17074 (mysqld) score 146 or sacrifice child
So the process ends up killed.
This happens at peaks of server load, mainly because Apache runs wild and eats the remaining available memory. Possible approaches could be:
Somehow change the priority of mysql so it's not killed (probably a bad fix, as something else will be killed instead).
Monitor the status of mysql and restart it automatically whenever it's killed (the one I'm thinking about, but I don't know how to do it).
How do you see it?
Abrupt termination of a database server is a very serious crash. You need to avoid this in a production system, because it may not restart cleanly.
The database server is a shared resource, and should almost never terminate in an unplanned fashion in production. The only thing that should cause unplanned termination is a catastrophic hardware or power failure. Most properly configured production database servers have an unplanned termination once every ten years or less frequently. Seriously.
What to do?
Fix your Apache configuration. Limit the number of worker threads and processes it can use, so it can't run wild (a minimal example follows these points). Learn how to do this. It's vital. See here: http://httpd.apache.org/docs/current/mod/mpm_common.html#maxrequestworkers
Fix the defects in your web app that are causing your apache to run wild.
If you can, move your mysqld server to a different server machine from apache, so the two don't contend for the same hardware resources.
Configure your mysqld to limit the number of connections it will accept from apache worker threads or other clients. Your web app probably handles the situation where a worker thread needs to wait for a connection. See here. http://dev.mysql.com/doc/refman/5.0/en/server-system-variables.html#sysvar_max_connections
Are you on an EC2 micro instance? You need to do some serious tuning. See here: http://ubuntuforums.org/showthread.php?t=1979049
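For the first point, the relevant knob in the prefork MPM is MaxRequestWorkers (MaxClients on older Apache versions); a minimal example, with a placeholder value you would size to your available RAM, is:

<IfModule mpm_prefork_module>
    MaxRequestWorkers 50
</IfModule>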
You can check mysql status every minute (with cron) and restart if it is crashed:
* * * * * service mysql status | grep running || service mysql restart

How to delay ActiveRecord MySQL reconnect during a failover

We have a Rails 3.1.3 app, connecting to MySQL via the mysql2 gem. Standard config. We also have a handful of Resque workers performing background jobs. The DB hostname we point to (in database.yml) is actually a Virtual IP (VIP) that points to either node1 or node2.
Behind the scenes, the two MySQL servers (nodes) are setup in a High Availability configuration. The data folders are replicated via DRBD, with mysqld only running on the "active" node. When the cluster detects that node1 is not available, it starts mysqld on node2 and points the VIP to it.
If you want more details on the specific setup, it's very similar to this MySQL HA cookbook.
Here's the issue: When a failover happens, it takes approx 30-60 seconds to complete, during which there is no MySQL server available. Any Resque jobs that are currently running fail badly.
Here's the question(s): How can we tell ActiveRecord to reconnect after a delay? Maybe attempt several reconnects with a backoff timer? Or is there a better way of dealing with this?
Your HA setup is going to cause you infinite amounts of pain in the future. Use database-layer replication instead of block-device-layer replication; MySQL Proxy was designed to do this.
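For what it's worth, the reconnect-with-backoff idea from the question could be sketched like this (a hypothetical helper, not something this answer endorses; it simply retries a block when MySQL goes away during the failover window):

def with_db_retry(attempts = 5, base_delay = 5)
  tries = 0
  begin
    yield
  rescue ActiveRecord::StatementInvalid, Mysql2::Error
    tries += 1
    raise if tries >= attempts
    sleep(base_delay * tries)              # linear backoff: 5s, 10s, 15s, ...
    begin
      ActiveRecord::Base.connection.reconnect!
    rescue StandardError
      nil                                  # server may still be failing over; the next attempt retries
    end
    retry
  end
end

# usage inside a Resque job:
# with_db_retry { SomeModel.where(...).each(&:process!) }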