AWS Lambda "too many connections" issue with RDS MySQL

I have more than 20 Lambda functions backing my mobile app's API. While the user base was small everything worked fine, but now that it has grown (3,000 to 4,000 users) I am hitting "too many connections" errors in my Lambda functions, and because of them my API has started returning internal server errors. I know I am missing something in how I create the connection in Lambda, but after a lot of trial and error I have not been able to find the missing link. Below is the code I am using to create the connection:
var con;
exports.handler = async (event, context) => {
    context.callbackWaitsForEmptyEventLoop = false;
    if (!con || con.state == "disconnected" || con === "undefined") {
        con = secret
            .then((result) => {
                var data = JSON.parse(result.SecretString);
                var connection = mysql.createConnection({
                    "host": data.host,
                    "user": data.username,
                    "password": data.password,
                    "database": data.db
                });
                connection.connect();
                return connection;
            }).catch((err) => {
                throw err;
            });
    }
    // ... queries run here ...
};
I have tried adding con.destroy() before sending the response, but it does not seem to solve the problem, so if there is anything else I can do, please let me know.

It's hard to know exactly what's going on; my first guesses always revolve around setting context.callbackWaitsForEmptyEventLoop = false and storing the connection outside the handler scope, both of which you've already done correctly.
With that said, managing database connections from Lambda runs against the grain of serverless: a long-lived connection pool is state, and Lambda containers are ephemeral by definition. That doesn't mean you can't scale with direct connections, but you will have to dig deeper into your issue.
Jeremy Daly provides good practices on dealing with this on the following posts on his blog:
Reusing DB connections
Managing RDS connections + AWS Lambda
Also, he has written a library that manages this for you, called serverless-mysql, which is built to address this specific issue.
Personal experience: I had trouble with connections + Lambdas, and because of that I migrated to their Data API solution (I had to migrate my RDS instance to Aurora Serverless, which was not a big pain); its GA release was about 2-3 weeks ago.
If you want more info on Aurora SLS, check it out here.

Another way to tackle this kind of issue is to use AWS RDS Proxy: https://aws.amazon.com/fr/rds/proxy/
Many applications, including those built on modern serverless architectures, can have a large number of open connections to the database server, and may open and close database connections at a high rate, exhausting database memory and compute resources. Amazon RDS Proxy allows applications to pool and share connections established with the database, improving database efficiency and application scalability. With RDS Proxy, failover times for Aurora and RDS databases are reduced by up to 66% and database credentials, authentication, and access can be managed through integration with AWS Secrets Manager and AWS Identity and Access Management (IAM).

Related

ETIMEDOUT on subsequent database connections under load

I've been having a problem for several weeks now with an ETIMEDOUT error on subsequent database calls under load, and it has left me completely stumped.
For a heads-up: I have an Azure App Service running my Node.js backend on an S2 plan. I also have a flexible MySQL server on Azure; I don't remember the plan's name, but it's a mid-priced one that allows up to 2,700 connections. We host an API that can see thousands of requests in a day and makes a few database calls per request.
The server and database calls all run normally when the server isn't under load, but the moment it comes under load, even a little, such as 100 requests in the same second, the database calls start to fail with ETIMEDOUT. Here's how I have the database calls set up.
First I grab a connection to be used. My connection is set up as:
getConnection(bMultipleStatement) {
    return mysql.createConnection({
        host: process.env.MYSQL_HOST,
        user: process.env.MYSQL_USER,
        password: process.env.MYSQL_PASSWORD,
        database: process.env.MYSQL_DATABASE,
        charset: 'utf8mb4',
        multipleStatements: bMultipleStatement
    });
}
conn = base.getConnection(false);
Once I get a connection, I perform basic SELECTs and UPDATEs like so:
conn.query(sql, [variables], function (error, results) {
    if (error) {
        if (conn) { conn.end(); }
        // log error
    }
    else {
        if (conn) { conn.end(); }
        if (results.length > 0) {
            // continue
        }
        else {
            // log not found
        }
    }
});
The first 20-30 requests will successfully run this query, but everything after that just generates an ETIMEDOUT. I have checked my Azure metrics and can see that both the connection count and CPU % are very low: around 5% CPU and 50 active connections.
This problem has left me stumped. Almost everything I've found while searching this issue involves being completely unable to connect, rather than things working fine until the server comes under load. Has anybody experienced a similar issue and knows how to solve it? The only other thing I was able to find is that it doesn't run into this issue locally, only on Azure, but that's because my local machine has much better hardware than the Azure server. Upgrading is a last-resort option because that's just putting a band-aid over the problem.

Connecting to Aurora MySQL Serverless with Node

I'm trying to connect to my Aurora Serverless MySQL DB cluster using the mysql module, but my connection always times out.
const mysql = require('mysql');

// create connection
const db = mysql.createConnection({
    host     : 'database endpoint',
    user     : 'root',
    password : 'pass',
    database : 'testdb'
});

// connect
db.connect((err) => {
    if (err) {
        console.log('connection failed');
        throw err;
    }
    console.log('mysql connected...');
});

db.end();
My cluster doesn't have a public IP address so I'm trying to use the endpoint. I've successfully connected to the db using Cloud9, but I can't connect using node. I must be missing something.
Aurora Serverless uses an internal AWS networking setup that currently only supports connections from inside a VPC, and it must be the same VPC where the serverless cluster is deployed.
Q: How do I connect to an Aurora Serverless DB cluster?
You access an Aurora Serverless DB cluster from within a client application running in the same Amazon Virtual Private Cloud (VPC). You can't give an Aurora Serverless DB cluster a public IP address.
https://aws.amazon.com/rds/aurora/faqs/#serverless
This same limitation applies to Amazon EFS, for architecturally similar reasons. You can work around the limitation in EFS, and the same workaround could be used for Aurora Serverless, but you'd need to disable the health checks entirely, since those health-checking connections would keep the instance alive all the time. In any case, exposing a database to the Internet is a practice best avoided.
You could also use some VPN solutions. They would need to be instance-based and would probably need to use NAT to masquerade the client address behind the VPN instance's internal address -- that's effectively what the proxy workaround mentioned above does, but at a different OSI layer.

Does AWS RDS support MySQL as a document store

I am able to connect to a normal AWS RDS MySQL instance (5.7.16). But since I have to use MySQL as a document store, I configured the MySQL instance by installing the mysqlx plugin, which is required for the document store.
After this, I am trying to connect to the MySQL document store on port 33060 on the same instance, but I am unable to connect. I am using a Lambda for the connection, which imports the X DevAPI package (@mysql/xdevapi) and tries to connect to the MySQL RDS instance on port 33060.
There is no error that I can see, so I am wondering whether AWS RDS supports the MySQL document store at all.
Code:
xdevapi.getSession({
    host: process.env.HOSTNAME,
    port: process.env.PORT,
    dbUser: process.env.DB_USER,
    dbPassword: process.env.DB_PASSWORD
}).then(function (session) {
    console.log("Connected");
    session.close();
    return callback(null, { 'response': 'connected', statusCode: 200 });
}).catch(function (err) {
    console.log(err.stack);
    return callback(null, { 'response': err.stack, statusCode: 400 });
});
Kindly help me figure this out.
Since MySQL 8.0.11 is now generally available on AWS, we've been looking at the Document Store functionality via the X Plugin.
Following through the sample DB (https://dev.mysql.com/doc/refman/8.0/en/mysql-shell-tutorial-javascript-download.html) it creates the schema and imports it OK, but doesn't seem to expose the db object to mysqlsh.
For example, when I run
\use world_x
while connected to a localhost instance, it outputs
Default schema set to `world_x`.
Fetching table and column names from `world_x` for auto-completion... Press ^C to stop.
whereas when connected to an RDS instance I only get
Default schema set to `world_x`.
Additionally, according to https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_MySQL.html#MySQL.Concepts.VersionMgmt the X Plugin isn't supported which, as I understand it, means Document Store functionality isn't supported.
Pretty late answer, but hopefully it helps clarify similar questions in the future. Since RDS is apparently running MySQL 5.7.16, it does not load the X Plugin (which enables the Document Store) by default.
Unless you are able to provide mysqld startup options (in this case --plugin-load=mysqlx=mysqlx.so) or have client access, in which case you can follow the steps described here to enable the plugin, you are out of luck.
There's also the possibility that RDS is running some kind of fork which does not even bundle the X Plugin.
Also, the X DevAPI connector for Node.js only guarantees support for MySQL 8.0, so although you should be able to use it with later MySQL 5.7 versions, there are a few limitations.

How to keep server and application separate

I have a Node.js application running on a server with node-mysql and Express. At first I faced a problem where some exceptions were not handled and the application would go down with network connectivity issues.
I then handled all uncaught exceptions; the server wouldn't go down any more, but instead it would hang. I figured it was because I returned a response only when the query didn't raise an exception, so I handled all query-related exceptions too.
Next, if the MySQL server terminated the connection for some reason, my application wouldn't reconnect. I tried reconnecting, but it would give an error related to "enqueue connection handshake". According to another Stack Overflow question, I was supposed to use a connection pool so that if the server terminates a connection, connectivity is regained somehow, which I did.
My question is this: each time I faced an issue, I had to shut down the whole application, and since in Node.js the server is configured programmatically, the server goes down with it. Can I, or better yet, how can I decouple my server and application almost completely, so that if I make a change in my application I don't have to redeploy everything?
This matters especially because right now everything works in the development version, yet my application constantly gives me a connection pool error on the server; even if I restart the application, I am not sure when I will face the problem again, so I can't properly diagnose it.
Let me know if anyone needs more info regarding my question.
Are you using a front-end framework to serve your application, or are you serving it all from server calls?
So fundamentally, if your server barfs for any reason (i.e. a 500 error), you WANT to shut down and restart, because once your server is in that state, all of your in-transit data and your stack are in an unknown state. There's no way to correctly recover from that, so you are safer, from both a server and an end-user point of view, to shut the process down and restart.
You can minimise the impact of this by using something like Node's Cluster module, which allows you to fork child processes of your server: multiple instances of the same server, connected to the same database, accessible on the same port, and so on. If a user (or the server) hits an unhandled exception, the cluster can kill that one process and restart it without shutting down your entire server.
Edit: Here's a snippet:
var cluster = require('cluster');
var threads = require('os').cpus().length;

if (cluster.isMaster) {
    for (var i = 0; i < threads; i++) {
        cluster.fork();
    }
    cluster.on('exit', function (dead, code, signal) {
        console.log('worker ' + dead.process.pid + ' died.');
        var worker = cluster.fork();
        console.log('worker ' + worker.process.pid + ' started');
    });
} else {
    //
    // do your server logic in here
}
That being said, there's no way for you to run your application and server separately if Node is serving your client content. Once you terminate your server, your endpoints are down. If you really want to keep a client-side application active while rebooting your server, you have to separate the logic entirely, i.e. keep your application in a different project from your server and use the server as API endpoints only.
As for Connection Pools in Node-mysql: I have never used that module so I couldn't say what best practice is there.

When to open the connection using node-mysql module?

I found a very good module (node-mysql) for connecting to a MySQL database.
The module is very good; my only question is about WHEN to open the connection to MySQL.
Before starting with Node I always used php-mysql, and for each request I opened a connection, then queried, then closed.
Is it the same with Node? Do I have to open a connection and then close it for each request, or can I use a persistent connection?
Thank you
The open-query-close pattern generally relies on connection pooling to perform well. Node-mysql doesn't have any built-in connection pooling, so if you use this pattern you'll pay the cost of establishing a new connection each time you run a query (which may or may not be fine in your case).
Because Node is single threaded, you can get away with a single persistent connection (especially since node-mysql will attempt to reconnect if the connection dies), but there are possible problems with that approach if you intend to use transactions, since all users of the Node client share the same connection and therefore the same transaction state. A single connection also limits throughput, since only one SQL command can execute at a time.
So, for transactional safety and for performance, the best option is really to use some sort of pooling. You could build a simple pool yourself in your app or investigate what other packages provide that capability. But either the open-query-close or the persistent-connection approach may also work in your case.
felixge/node-mysql now has connection pooling (at the time of this writing.)
https://github.com/felixge/node-mysql#pooling-connections
Here's a sample code from the above link:
var mysql = require('mysql');
var pool = mysql.createPool(...);

pool.getConnection(function (err, connection) {
    // Use the connection
    connection.query('SELECT something FROM sometable', function (err, rows) {
        // And done with the connection.
        connection.release();
        // Don't use the connection here, it has been returned to the pool.
    });
});
So to answer your question (same as @Geoff Chappell's answer): the best option is to utilize pooling to manage connections.