ActiveRecord Connection Pool Optimization for Highly Cached System - MySQL

I have a Rails ActiveRecord project that has been scaled out to serve approximately 60-100k requests per minute. We use AWS, and it takes about five c4.xlarge EC2 instances to serve this many requests. We have optimized the system to serve 99.99% of those requests from a Redis cache, leaving our MySQL DB barely used.
This is great, but we keep running into MySQL connection limits. Amazon RDS caps the number of connections we can have, and it seems silly to upgrade our RDS instance size just so we can hold a larger number of sleeping connections. We profiled the RDS server, and it gets maybe 10-50 queries a day, depending on how many times we update the system.
Is there any way to keep the ActiveRecord connection pool from reserving connections?
We tried simply lowering the connection pool size, and it did reduce the number of connections, but then we started getting:
ActiveRecord::ConnectionTimeoutError: "could not obtain a database connection within 5 seconds"
What I would like to achieve is for the Rails project to stop trying to pre-allocate connections and only open them up when they are necessary. I'm not well-versed enough in the Rails framework and ActiveRecord to understand how the system reserves connections, or why we are getting ConnectionTimeoutErrors even though the application isn't making any DB calls.
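For reference, the knobs involved are the standard ActiveRecord settings in database.yml; here is a minimal sketch with illustrative values (the 5 seconds in the error message matches the checkout_timeout default):

    production:
      adapter: mysql2
      pool: 5               # max connections each Rails process may hold
      checkout_timeout: 5   # seconds a thread waits for a free connection
                            # before raising ConnectionTimeoutError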

Related

AWS: Too many connections

I have an RDS instance hosting a MySQL database. The instance size is db.t2.micro.
I also have an ExpressJS backend connecting to the MySQL RDS instance via a connection pool.
Additionally, I have a mobile app, the client, feeding off the ExpressJS API.
The issue I'm facing is that, either via the mobile app or via Postman, there are times when I get a 'Too many connections' error and several requests fail.
On the RDS instance's current activity I sometimes see 65 connections, showing it's reaching the limit. What I need clarity on is:
When 200 mobile app instances connect through the API to the RDS instance, does that register as 200 connections, or as 1 connection from ExpressJS?
Is it normal to be reaching the RDS instance's 65-connection limit?
Is this just a matter of using the db.t2.micro instance size, which is not recommended for production? Will upgrading the instance size resolve the issue?
Is there something I'm doing wrong with my requests?
Thank you; your feedback is appreciated.
If your app creates a connection pool of 100, that's the number of database connections it will try to open. It must be lower than your MySQL connection limit.
Typically connection pools open all the connections for the pool so they are ready when a client calls the HTTP API. The connections may be running no SQL queries if there are not many clients using the API at a given moment, but they are nevertheless connected.
Sort of like when you ssh to a remote linux server but you just sit there at a shell prompt for a while before running any command. You're still connected.
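On the MySQL side, those idle pooled connections show up as threads in the Sleep state if you run SHOW PROCESSLIST, which is also a quick way to see how many of the 65 connections are actually doing work at any moment.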
You asked if a db.t2.micro instance was not recommended for production. Yes, I would agree with that. It's tempting to use the smallest instance possible to save money, but a db.t2.micro is too small for anything but light testing, in my opinion.
In fact, I would not use any t2 instance for production, regardless of size. The t2 type uses "burstable" performance. This means it can provide only brief periods of good performance. Once the instance depletes its performance credits, they recharge slowly, and while they recharge, the performance of that instance is very low. This is okay for testing, but not for production, if you expect to provide consistent performance at any time.

Rails holding on to DB connections too long, can't run Stored Procedures

I'm experiencing a problem with Rails and my MySQL RDS instance. I have my Rails app connected to it through our database.yml file with a pool of 10 (now 5) connections. The other day, another user of the database tried running a stored procedure, but it would not execute; it just hung, waiting to run. The user looked at the processes, noticed that our Rails user had around 30 idle processes, and killed some of them. The stored procedure then kicked off and ran without issue.
We are on an r3.xlarge instance and had ~100 total processes at the time of the problem. That doesn't seem alarmingly high to me, and I'm not sure why the procedure wouldn't execute without some of those processes being freed up. I guess my question is: is there a way to tell my Rails app to release idle connections after x seconds, or to control these connections better? I could write a cron job that frees them up, but I'd love to do it the Rails/best way.
Thanks for any help!
It seems to me that you may have hit the maximum connections limit on the MySQL instance. You can run SELECT @@max_connections; on your MySQL server to find out the limit.
I don't know of a way to force Rails to close its allocated DB connections. Each server process may use up to pool-size connections (i.e., 10 or 5 in your case) to the DB for its threads. The distinction between threads and processes is important: if, for example, you have multiple workers serving your Rails app as separate processes (e.g., Puma can be configured like that), then each of those processes may allocate up to 5 or 10 connections. If you use background processes (Sidekiq etc.), they also may use up to this number of connections.
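To make the multiplication concrete, here is a hypothetical Puma setup (hypothetical because the question doesn't say which application server is in use):

    # config/puma.rb (illustrative numbers)
    workers 3       # 3 separate OS processes
    threads 5, 5    # up to 5 threads per process

    # With pool: 5 in database.yml, each of the 3 processes may hold up to
    # 5 connections, so this one app server alone can open 3 x 5 = 15
    # database connections, before counting any background job processes.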
The ConnectionPool also provides a Reaper that can be used to free allocated DB connections from dead threads, but unless your app is having some larger troubles, this usually will not help (your threads are more likely idle than dead).
So my general advice is to estimate the maximum number of connections that all your Rails processes might need and, if that is near or above the MySQL connection limit, either lower the connection pool size or decrease the number of Rails processes (workers) that may run.
If you need more help, please specify which application server you use to run your Rails app and how it is configured, and the same for any background job workers.
Try setting reaping_frequency in your database.yml file:
reaping_frequency: frequency in seconds to periodically run the Reaper,
which attempts to find and recover connections from dead threads,
which can occur if a programmer forgets to close a connection at the
end of a thread or a thread dies unexpectedly. Regardless of this
setting, the Reaper will be invoked before every blocking wait.
(Default nil, which means don't schedule the Reaper)
Above documentation from: http://api.rubyonrails.org/classes/ActiveRecord/ConnectionAdapters/ConnectionPool.html
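Concretely, that is one extra line in database.yml; a minimal sketch with an illustrative value:

    production:
      adapter: mysql2
      pool: 5
      reaping_frequency: 10   # run the Reaper every 10 seconds; when the
                              # setting is absent (nil), no Reaper is scheduled

Note the docs' caveat, though: the Reaper recovers connections from dead threads, so it will not shrink a pool of healthy but idle connections.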

Periodic MySql timeout followed by connection spike in ASP.NET website

Every couple of days we have been getting a small number of MySQL timeout errors that correspond with a large spike in CPU and DB connections on our MySQL RDS instance. These are queries that are typically very fast (<5 ms) but that suddenly time out.
At this point, database operations are very slow for a minute or so (likely because new connections are being allocated). The number of new connections often doubles and seems to correspond to the entire connection pool being recycled.
The timeouts do not seem to correspond with heavy database load; the CPU is often under 7% when this happens, spiking to around 12%.
Once these connections are created, the old connections seem to stay around for several hours.
We have some theories:
An occasional network hiccup between EC2 and RDS
A connection pool recycle (is there such a thing?)
Resource contention on the server that backs up all queries (no deadlocks present)
Any help on debugging this would be very much appreciated.
System Details:
Windows 2012 EC2 instances
.NET 4.5
MySql Connector 6.8.3
Entity Framework 6.0.2
MySql.Data.Entities 6.8.3
MySQL 5.6.12 (hosted on Amazon RDS)
I wanted to put this as a comment, not an answer, but "...must have 50 reputation to comment...".
Are you maxing out on connections? Run SHOW VARIABLES LIKE 'max_connections'; and SHOW PROCESSLIST; (as the root user).
How's your disk I/O? Run iostat -x 5 from the command line and pay special attention to queue sizes and service/wait times. If disk is an issue, you can purchase AWS Provisioned IOPS for better reliability and performance.
You can also profile the database. I like Jet Profiler: it's simple and puts a low load on the server.

Heroku Rails ClearDB Resque Connections

If I have 20 different jobs in Resque, does that mean my ClearDB database could potentially have 20+ connections? How can I monitor how many connections my ClearDB database is using?
It doesn't matter how many jobs you have in Resque; it matters how many workers you have running. In Resque, each worker runs in a separate process and therefore opens its own connection to the database, so five running workers mean five database connections regardless of how many jobs are queued.
If the number of connections is a concern, you can try using Sidekiq instead. Sidekiq is API-compatible with Resque, but its workers run in threads in a single process. This way, you should be able to use a shared connection pool to manage how many connections are open at the same time.
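If you do switch, the number to keep aligned is Sidekiq's thread count versus ActiveRecord's pool size; a sketch with illustrative values:

    # config/sidekiq.yml: one process running 10 worker threads
    :concurrency: 10

Pair that with pool: 10 (or more) in database.yml so every thread can check out a connection. As for monitoring, you can ask the database directly with SHOW PROCESSLIST or SHOW STATUS LIKE 'Threads_connected';.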

Really need a DB connection pool for Unicorn Rails?

I can't find any document describing the effect of database connection pooling for Unicorn.
Unicorn forks several worker processes. I configured preloading, and since it's critical not to share database connections between workers, I reset DB connections after the fork.
My Rails application has 8 workers per server, and the pool size in database.yml is 5; I see 45 connections to MySQL.
Each worker is single-threaded and handles one request at a time, and SQL queries are blocking, so the other 4 connections seem useless. Can I set the pool size to 1 for better performance?
Since each worker can only serve one request at a time, each worker can also use only one connection at a time, and nothing is gained from having more connections. If you set the pool size to 1, each Unicorn worker should open one connection. You will likely not see a noticeable performance increase, but you will save resources by having fewer open connections.
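For completeness, this is the usual shape of that setup; a sketch assuming preload_app and the standard ActiveRecord reconnect hooks:

    # config/unicorn.rb
    worker_processes 8
    preload_app true

    before_fork do |server, worker|
      # Don't let forked workers share the master's connections.
      ActiveRecord::Base.connection_pool.disconnect! if defined?(ActiveRecord::Base)
    end

    after_fork do |server, worker|
      # Each single-threaded worker opens its own connection on demand;
      # with pool: 1 in database.yml it will never hold more than one.
      ActiveRecord::Base.establish_connection if defined?(ActiveRecord::Base)
    end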