Intermittent timeouts between AWS Lambda and RDS - mysql

We are currently experiencing what I can only describe as random, intermittent timeouts between AWS Lambda and RDS. After deploying our functions and running them successfully, they can switch to a timing-out state at random, with no configuration changes. Importantly, we are also monitoring the DB connections and can confirm that we aren't hitting a max-connection limit.
Here are the details on our setup:
Code being executed (using Node.js v6.10):
const mysql = require('mysql');

exports.dbWrite = (events, context, callback) => {
  const db = mysql.createConnection({
    host: <redacted>,
    user: <redacted>,
    password: <redacted>,
    database: <redacted>
  });

  db.connect(function (err) {
    if (err) {
      console.error('error connecting: ' + err.stack);
      return;
    }
    console.log('connected!');
  });

  db.end();
};
We are using the Node.js mysql library, v2.14.1.
From a networking perspective:
The Lambda function is in the same VPC as our RDS instance
The Lambda function has subnets assigned, which are associated with a routing table that does not have internet access (not associated with an internet gateway)
The RDS database is not publicly accessible.
A security group has been created and associated with the Lambda function that has wide open access on all ports (for now - once DB connectivity is reliable, that will change).
The above security group has been whitelisted on port 3306 within a security group associated with the RDS instance.
CloudWatch error:
{
  "errorMessage": "connect ETIMEDOUT",
  "errorType": "Error",
  "stackTrace": [
    "Connection._handleConnectTimeout (/var/task/node_modules/mysql/lib/Connection.js:419:13)",
    "Socket.g (events.js:292:16)",
    "emitNone (events.js:86:13)",
    "Socket.emit (events.js:185:7)",
    "Socket._onTimeout (net.js:338:8)",
    "ontimeout (timers.js:386:14)",
    "tryOnTimeout (timers.js:250:5)",
    "Timer.listOnTimeout (timers.js:214:5)",
    "    --------------------",
    "Protocol._enqueue (/var/task/node_modules/mysql/lib/protocol/Protocol.js:145:48)",
    "Protocol.handshake (/var/task/node_modules/mysql/lib/protocol/Protocol.js:52:23)",
    "Connection.connect (/var/task/node_modules/mysql/lib/Connection.js:130:18)",
    "Connection._implyConnect (/var/task/node_modules/mysql/lib/Connection.js:461:10)",
    "Connection.query (/var/task/node_modules/mysql/lib/Connection.js:206:8)",
    "/var/task/db-write-lambda.js:52:12",
    "getOrCreateEventTypeId (/var/task/db-write-lambda.js:51:12)",
    "exports.dbWrite (/var/task/db-write-lambda.js:26:9)"
  ]
}
Amongst the references already reviewed:
https://forums.aws.amazon.com/thread.jspa?threadID=221928
(the invocation ID in CloudWatch is different on all timeout cases)
pretty much every post in this list: https://stackoverflow.com/search?q=aws+lambda+timeouts+to+RDS
In summary, the fact that these timeouts are intermittent makes this issue thoroughly confusing. AWS support has stated that the Node.js mysql library is a third-party tool and is technically not supported, but I know folks are using this technique.
Any help is greatly appreciated!

Considering that the RDS connections are not exhausted, it is possible that Lambda invocations placed in a particular subnet always fail to connect to the DB. I am assuming that the RDS instances and Lambdas are running in separate subnets. One way to investigate this is to check VPC flow logs.
Go to EC2 -> Network Interfaces -> search for the Lambda name -> copy the ENI reference, then go to VPC -> Subnets -> select the Lambda's subnet -> Flow Logs -> search by the ENI reference.
If you see "REJECT OK" for your DB port in the flow logs, it means there is missing configuration in your Network ACLs.

Updating this issue: It turns out that the issue was related to the fact that the database connection was being made within the handler! Due to the asynchronous nature of Lambda and Node, this was the culprit for the intermittent timeouts.
Here's the revised code:
const mysql = require('mysql');

function getConnection() {
  let db = mysql.createConnection({
    host: process.env.DB_HOST,
    user: process.env.DB_USER,
    password: process.env.DB_PASS,
    database: process.env.DB_NAME
  });

  console.log('Host: ' + process.env.DB_HOST);
  console.log('User: ' + process.env.DB_USER);
  console.log('Database: ' + process.env.DB_NAME);
  console.log('Connecting to ' + process.env.DB_HOST + '...');

  return db;
}

const database = getConnection();

exports.dbWrite = (events, context, callback) => {
  database.connect(function (err) {
    if (err) {
      console.error('error connecting: ' + err.stack);
      return;
    }
    console.log('connected!');
  });
  // The connection is created at module scope so warm invocations can
  // reuse it; do not call database.end() inside the handler.
};
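The reuse idea above can be illustrated without a live database by memoizing the connection factory. This is a sketch with a stand-in `createConnection` (the real one would be `mysql.createConnection(...)`, which needs network access):

```javascript
// Sketch of the module-scope reuse pattern, with a stand-in factory so it
// runs without a database.
let callCount = 0;

function createConnection() {
  callCount += 1; // track how often we actually "connect"
  return { id: callCount, query: (sql) => 'ran: ' + sql };
}

let cached = null;

// Lazy, memoized accessor: the first invocation creates the connection,
// later (warm) invocations reuse it.
function getConnection() {
  if (!cached) {
    cached = createConnection();
  }
  return cached;
}

// Simulate three warm invocations of the handler.
const results = [1, 2, 3].map(() => getConnection().query('SELECT 1'));

console.log(results[0]); // "ran: SELECT 1"
console.log(callCount);  // 1 - the factory ran only once
```

Moving the `createConnection` call out of the handler has exactly this effect: the container keeps the module scope alive between invocations, so the handshake happens once per container rather than once per request.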

Related

How to query an AWS Aurora MySQL database that has been scaled to 0 ACUs

I am using an AWS RDS Aurora serverless database (Aurora 2.08.3, compatible with MySQL 5.7) as an API data source.
I have enabled the "Pause after inactivity" feature to "Scale the capacity to 0 ACUs when cluster is idle".
I am finding that when the database is scaled to 0 Aurora Capacity Units (ACUs), the API fails.
Even if I change the API configuration to time out after 40 seconds, it fails if the database was at 0 ACUs when it was called. Calling the API again after some time yields a successful call.
Whether at connection.connect or connection.query, the failures come with no useful response; the response simply never arrives.
I have not been able to determine whether the database just needs a moment to scale up, i.e. whether I need to pause before making the call. Logged to the console, the connection info looks the same whether the database is scaled down or ready for a query.
Is there a way to programmatically check whether an AWS Serverless v2 Aurora MySQL database is scaled to 0? connection.state does not seem to describe this.
I have tried many, many approaches. This post seemed promising, but didn't solve my issue.
What I have now is...
var mysql = require('mysql');
var connection;

exports.getRecords = async (event) => {
  await openConnection();
  // Simplified connection.query code that works when the database is scaled up
  var q = 'SELECT * FROM databaseOne.tableOne';
  return new Promise(function (resolve, reject) {
    connection.query(q, function (err, results) {
      if (err) {
        console.log('q err', err);
        reject(err);
        return;
      }
      console.log(results);
      resolve(results);
    });
  });
};

async function openConnection() {
  connection = mysql.createConnection({
    host: process.env.hostname,
    port: process.env.portslot,
    user: process.env.username,
    password: process.env.userpassword,
  });
  console.log('connection1', connection.state, connection);
  try {
    // Note: connection.connect() takes a callback and does not return a
    // promise, so await here does not actually wait for the handshake.
    await connection.connect();
    console.log('connection2', connection.state, connection);
  } catch (err) {
    console.log('c err', err);
  }
}
Thank you
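A common workaround for the resume delay is not to detect the 0-ACU state at all, but to retry the first query with backoff until the cluster has woken up. A minimal, self-contained sketch, where `attemptQuery` is a stand-in that simulates a paused cluster failing twice before resuming:

```javascript
// Retry-with-backoff sketch for a cluster resuming from 0 ACUs.
// attemptQuery is a stand-in: it fails twice (as a paused cluster might),
// then succeeds once "resumed".
let attempts = 0;

function attemptQuery() {
  return new Promise((resolve, reject) => {
    attempts += 1;
    if (attempts < 3) {
      reject(new Error('ETIMEDOUT')); // cluster still waking up
    } else {
      resolve([{ id: 1 }]);           // cluster resumed, query works
    }
  });
}

const sleep = (ms) => new Promise((r) => setTimeout(r, ms));

// Retry up to maxTries times, doubling the delay each time.
async function queryWithRetry(fn, maxTries = 5, delayMs = 10) {
  let lastErr;
  for (let i = 0; i < maxTries; i++) {
    try {
      return await fn();
    } catch (err) {
      lastErr = err;
      await sleep(delayMs * Math.pow(2, i));
    }
  }
  throw lastErr;
}

queryWithRetry(attemptQuery).then((rows) => {
  console.log('succeeded after', attempts, 'attempts:', rows);
});
```

In real code `fn` would be the promisified query, and the initial delay would be on the order of seconds, since resuming a paused Aurora cluster typically takes far longer than 10 ms.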

Node Lambda: MySQL query never runs

Testing with Postman. I'll try to make this as clear as possible; please advise if it doesn't make sense.
I have a Lambda that uses a MySQL RDS database on AWS, and it works fine locally when accessing that database. After successfully getting a JWT from an auth endpoint, I try to hit the login endpoint and get a 502 Bad Gateway. Using the CloudWatch logs, I can trace the failure to right before the login query runs. I've confirmed that my MySQL config is correct and that I have a connection to the database. The Lambda and the database are in the same region (DB: us-east-1f, Lambda: us-east-1).
I've confirmed the OPTIONS and POST request methods for this endpoint are both set up with CORS enabled in API Gateway. I'm using my serverless.yml to set cors: true on all the endpoints, even though I'm using app.use(cors()) in my index file.
The error message for the 502 is {"message": "Internal server error"}
Here is the point of failure in my code:
'use strict';
const mysql = require('./index');

module.exports = {
  loginSql: async (email, password) => {
    // MAKES IT HERE AND THE PARAMS ARE CORRECT
    try {
      console.log('IN TRY %%%%%%%%%%%%%%');
      // SEEMS TO DIE HERE
      const results = await mysql.query({
        sql: `SELECT
                id, first_name, last_name, email
              FROM users
              WHERE email = ?
                AND password = ?`,
        timeout: 50000,
        values: [email, password],
      });
      // NEVER MAKES IT HERE /////////
      console.log('QUERY RAN %%%%%%%%%%%%');
      mysql.end();
      if (results.length < 1) return false;
      return results;
    } catch (error) {
      // DOESN'T THROW ERROR
      console.log('LOGIN DB ERROR', error);
      throw new Error('LOGIN DB ERROR THROWN: ' + error.message);
    }
  },
};
I just created the exact same use case: a Lambda function written in Java querying data from a MySQL RDS instance. It works perfectly.
Here is your issue:
To connect to the RDS instance from a Lambda function, you must set the inbound rules using the same security group as the RDS instance. For details, see How do I configure a Lambda function to connect to an RDS instance?.
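Independent of the security-group fix, a query that hangs forever surfaces only as an opaque 502 from API Gateway once the Lambda times out. A small timeout wrapper (a generic sketch, not part of the mysql API) turns the hang into a catchable error so the real cause reaches the logs:

```javascript
// Wrap any promise so it rejects after ms milliseconds instead of hanging
// forever (which would make Lambda time out and API Gateway return a 502).
function withTimeout(promise, ms, label = 'operation') {
  let timer;
  const timeout = new Promise((_, reject) => {
    timer = setTimeout(
      () => reject(new Error(label + ' timed out after ' + ms + 'ms')),
      ms
    );
  });
  // Whichever settles first wins; clear the timer so the process can exit.
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}

// Usage against a real query might look like:
//   const results = await withTimeout(mysql.query({...}), 5000, 'login query');

// Demo with stand-in promises:
withTimeout(new Promise((r) => setTimeout(() => r('rows'), 10)), 1000)
  .then((v) => console.log('fast query:', v));

withTimeout(new Promise(() => {}), 50, 'stuck query')
  .catch((e) => console.log('caught:', e.message));
```

With this in place, a blocked connection produces a logged "timed out" error with a label instead of a silent `Internal server error`.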

How do I connect to an AWS MySQL database in node.js

I have node.js Lambda function on AWS and a MySQL database. I have been using the following code to connect:
const mysql = require('mysql');

const con = mysql.createConnection({
  host: "<endpoint>",
  user: "<user>",
  password: "<password>"
});

con.connect(function(err) {
  if (err) throw err;
  console.log("Connected!");
  con.end();
});
When I try to connect to the endpoint I get the message:
{"message": "Internal server error"}
Any help would be much appreciated! Hope everyone is healthy.
It is most likely an RDS set-up issue.
Check the settings on your security group. Does it have a rule that only allows connections from a certain security group?
Also, does your RDS instance have Public Accessibility? You would want to set it to 'Yes'.
The port also seems to be missing in your createConnection call; I've seen people skip it while others needed it.
Another thing, for debugging purposes: log the stack as well, it will take you there :)
con.connect(function (err) {
  if (err) {
    console.error('db connection failed: ' + err.stack);
    return;
  }
  console.log('connected to db');
  con.end();
});
Check this link to the AWS docs; it is for Elastic Beanstalk, but it's the same :)
Here is another link/blog post to get you there: using aws rds with nodejs

How can I handle MySQL disconnection on NodeJS?

First of all, I'm a beginner with Node.js. I'm using shared hosting for my project, and when the database connection reaches 1 minute of inactivity, MySQL disconnects me and Node.js crashes. Since I'm on shared hosting, I can't edit the idle timeout in the MySQL config, so I'll need to handle it in code.
I'm using module.exports to manage my connection, as shown below. How can I write an auto-reconnection script to take care of this? Thank you.
var mysql = require('mysql');

module.exports = {
  handle: null,
  connect: function (call) {
    this.handle = mysql.createConnection({
      host     : 'localhost',
      user     : 'root',
      password : '',
      database : 'test',
      timezone : 'utc',
      charset  : 'utf8'
    });
    this.handle.connect(function (err) {
      if (err) {
        console.log("[MySQL] Connection error: " + err.code);
      } else {
        console.log("[MySQL] Successfully connected");
      }
    });
  }
};
The node mysql module that you are using also has a connection-pooling mechanism.
Check out the docs at https://github.com/mysqljs/mysql#pooling-connections
Connection pools will make your task easier. You can store the connection pool object and use its getConnection method to obtain a connection. Make sure that you release the connection when you are done with it.
If for some reason you can't use connection pooling, then you will have to listen for the error event on the connection and handle it accordingly. But I strongly recommend that you use a connection pool.

Issue trying to access mysql through VPC in AWS-RDS through lambda

I have set up a VPC and an RDS MySQL database, accessed from a Node.js Lambda using Serverless.
The issue I am having is that I get an internal server error when testing the Lambda. The Lambda is using the same VPC as the RDS instance.
Could this be a permission issue, where the Lambda needs direct permission to the DB instance? If so, does anyone have suggestions on what permissions would be required?
Thank you
Here is part of the code I am using; it tests a query and logs the result to CloudWatch. The result does not show up, and the Lambda only reports timing out.
It seems to work locally using Serverless. This is just for educational purposes.
let pool = mysql.createPool({
  host: host.length > 0 ? host : body.host,
  port: port.length > 0 ? port : body.port,
  user: body.username,
  password: body.password,
  database: body.dbname
});

pool.getConnection(function (err, connection) {
  if (err) throw err; // failed to get a connection from the pool
  connection.query('SELECT 1 + 1 AS result', function (error, results, fields) {
    // Done with the connection; return it to the pool.
    connection.release();
    // Handle error after the release.
    if (error) throw error;
    else Logger.info(JSON.stringify(results));
  });
});

return {statusCode: 200, body: JSON.stringify({})};
}