There is a Laravel API that connects to Amazon RDS. Usually everything works well, but sometimes the API loses its connection to the database because the database gets locked. It happens at different times, sometimes once a week, sometimes twice a day. When it happens, several crons are running and the API is also processing user requests. I've added SQL logging, but I can't find anything strange there (maybe because the SQL queries didn't finish and so never made it into the logs). Any suggestions on how to catch this issue?
Laravel 5.4, MySQL 5.7
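One way to catch it in the act, sketched here on the assumption that the hang is caused by lock waits and that the standard InnoDB views are readable on your RDS instance (they are on stock MySQL 5.7): run something like this from a cron or by hand while the API is stuck.

-- Transactions that have been open the longest (and may be holding locks)
SELECT trx_id, trx_state, trx_started, trx_mysql_thread_id, trx_query
FROM information_schema.innodb_trx
ORDER BY trx_started;

-- Who is blocked, who is blocking them, and for how long
SELECT waiting_pid, waiting_query, blocking_pid, blocking_query, wait_age
FROM sys.innodb_lock_waits;

Lowering innodb_lock_wait_timeout (via the parameter group) can also help, since blocked queries then fail quickly and show up in your application logs instead of hanging silently.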
Related
I have a multi-tenant product built with Laravel.
It is deployed on an EC2 instance and uses AWS RDS as the database server.
I currently have around 100 databases in production.
Laravel's hyn tenancy module is handling the connections.
Now, the problem: for each tenant, after some idle time, the first request takes too long, around 15-20 seconds, and after that everything works smoothly.
In the test environment we are not using RDS but a local MySQL instance, and the problem does not occur there. The only difference between test and production is AWS RDS.
I have looked into max connections, query cache, and so on... but no luck so far.
Any suggestions?
The solution will depend on what kind of RDS you have.
I assume it's serverless (more common). In that case there's a setting for the minimum and maximum ACU (Aurora capacity units). I believe it will pause (scale down to zero) by default if the DB is not accessed for a while. Check that and see if it is properly set.
If you have a provisioned DB, then it's more complex. It will start caching things once queries are executed, but until a particular query is run, you will be waiting for the DB to "wake up" and run a full query.
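A cheap way to confirm whether a pause/resume (or an instance restart) is what the slow first request is paying for, sketched on the assumption that the server's uptime counter resets when the engine comes back:

-- Run immediately after a slow "first request"; if Uptime is only a few
-- seconds, the database engine has just started (resumed or restarted)
SHOW GLOBAL STATUS LIKE 'Uptime';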
Check this page for relevant info.
I have an RDS instance hosting a MySQL database. The instance size is db.t2.micro.
I also have an ExpressJS backend connecting to the MySQL RDS instance via a connection pool:
Additionally, I have a mobile app (the client) feeding off the ExpressJS API.
The issue I'm facing is that, either via the mobile app or via Postman, there are times when I get a 'Too many connections' error and therefore several requests fail:
On the RDS instance's current activity I sometimes see 65 connections, showing it's reaching the limit. What I need clarity on is:
When 200 mobile app instances connect to the API, does the RDS instance register 200 connections, or 1 connection from ExpressJS?
Is it normal to be reaching the RDS instance's 65-connection limit?
Is this just a matter of using a db.t2.micro instance size, which is not recommended for prod? Will upgrading the instance size resolve this issue?
Is there something I'm doing wrong with my requests?
Thank you and your feedback is appreciated.
If your app creates a connection pool of 100, that's the number of database connections it will try to open. It must be lower than your MySQL connection limit.
Typically connection pools open all the connections for the pool, so they are ready when a client calls the HTTP API. Many of those connections may be running no SQL queries at all if there are not many clients using the API at a given moment, but the database connections are nevertheless connected.
Sort of like when you ssh to a remote linux server but you just sit there at a shell prompt for a while before running any command. You're still connected.
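To see how the pool size compares to what the instance actually allows, a quick check (on RDS, max_connections is derived from the instance class memory by default, commonly reported as roughly 66 on a db.t2.micro, which is consistent with topping out around 65):

-- The limit the instance is enforcing
SHOW VARIABLES LIKE 'max_connections';

-- Connections open right now, and the high-water mark since the last restart
SHOW GLOBAL STATUS LIKE 'Threads_connected';
SHOW GLOBAL STATUS LIKE 'Max_used_connections';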
You asked if a db.t2.micro instance was not recommended for production. Yes, I would agree with that. It's tempting to use the smallest instance possible to save money, but a db.t2.micro is too small for anything but light testing, in my opinion.
In fact, I would not use any t2 instance for production, regardless of size. The t2 type uses "burstable" performance. This means it can provide only brief periods of good performance. Once the instance depletes its performance credits, they recharge slowly, and while they recharge, the performance of that instance is very low. This is okay for testing, but not for production, if you expect to provide consistent performance at any time.
I've been working with MySQL and Aurora on AWS for a while now for a site I'm making. Up until a few minutes ago, I've been able to run any and all mutator functions on the instance through Lambda and MySQL Workbench.
However, when I attempted to run an INSERT a few minutes ago, it gave me an error. I traced it back to Lambda saying that the --read-only flag is on my DB instance.
I was able to run these queries this morning, and now I cannot UPDATE, INSERT, or DELETE. I don't understand why it set itself to read-only randomly, besides maybe some sort of management or cleanup occurring on the AWS server, but under notifications there wasn't anything relating to any kind of maintenance.
Please let me know if there's any more information you need. I assume the specific code I was running doesn't matter, as I've tried running SQL commands directly through Workbench with the same results.
I found the issue: the writer and reader roles of my Aurora cluster's instances had swapped, probably during maintenance, so my functions were pointed at what was now a read replica. I've since pointed all my Lambda functions at the cluster endpoint itself, which I actually didn't even know was something you could do, let alone the standard practice.
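For anyone hitting the same thing, a quick way to check whether the instance you are connected to is currently a reader (on Aurora MySQL the read-only state is surfaced through the innodb_read_only variable):

-- ON means you are connected to a reader (read-only); OFF means the writer
SHOW GLOBAL VARIABLES LIKE 'innodb_read_only';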
Some of the requests to my application fail every day. I want to know, each time, exactly which queries ended up in a deadlock.
I am running MySQL 5.6 on AWS.
FYI, tool suggestions are also welcome.
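For context, the built-in MySQL tooling already exposes a fair amount here (a sketch, assuming InnoDB; on RDS the innodb_print_all_deadlocks parameter is normally changed through the DB parameter group rather than with SET GLOBAL):

-- Shows the "LATEST DETECTED DEADLOCK" section with both transactions involved
SHOW ENGINE INNODB STATUS;

-- Write every deadlock (not just the latest) to the MySQL error log,
-- which is viewable from the RDS console
SET GLOBAL innodb_print_all_deadlocks = ON;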
I've had a Rails application running in production for the past 6 months, with weekly deployments, without any issue.
Now, I've been having a recurring issue for about 3 weeks, and it seems to get worse every week.
When my app boots and reaches the point where it's trying to connect to the DB, I get this error:
Can't connect to MySQL server on '***.amazonaws.com' (110) (Mysql2::Error)
AFAIK, this error tells me that I've reached MySQL's max connections limit.
From the configs, I should be able to open 296 connections. My app is set to run 7 instances, each with a database connection pool of 5, so it can't really exceed 70 connections even while deploying new instances.
I've never seen the connection count go above 20 in either the AWS RDS Console or the SHOW PROCESSLIST command.
I don't think it has anything to do with either Rails or my application server (Puma), since I can't connect through the MySQL Command-Line Tool when the issue occurs.
Has anyone had a similar issue with MySQL on RDS or MySQL itself?
The database pool isn't per application, it's per process. If each instance is multi-threaded or runs multiple processes, it could be using more connections than that. Have you tried restarting MySQL? It sounds like you have some hanging connections for whatever reason.
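A sketch of how to spot and clear those hanging connections (the id 12345 is a placeholder; on RDS the master user lacks SUPER, so you may need CALL mysql.rds_kill(<id>) instead of a plain KILL):

-- Connections that have been sitting idle the longest
SELECT id, user, host, db, time, state
FROM information_schema.processlist
WHERE command = 'Sleep'
ORDER BY time DESC;

-- Terminate a specific hanging connection by its id
KILL 12345;                    -- plain MySQL
CALL mysql.rds_kill(12345);    -- RDS equivalent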
I've been getting these issues recently. Could it be related to the parameter group change that is pending a reboot on my RDS instance? I sure hope not. As I understand it, a pending change should have no effect on current performance.