Lambda + Sequelize randomly getting SequelizeConnectionError ETIMEDOUT - mysql

We are using Sequelize within AWS Lambda, and for the most part everything works great; however, it randomly errors out with the following:
ETIMEDOUT {"name":"SequelizeConnectionError","parent":{"errorno":"ETIMEDOUT","code":"ETIMEDOUT","syscall":"connect","fatal":true},"original":{"errorno":"ETIMEDOUT","code":"ETIMEDOUT","syscall":"connect","fatal":true}}
We are using RDS (MySQL 8.0.15), the Serverless Framework, serverless-http, and serverless-webpack.
Here is our configuration:
//db.js
... import all models
const sequelize = new Sequelize(
  process.env.DATABASE,
  process.env.DB_USER,
  process.env.DB_PASSWORD,
  {
    host: process.env.DB_HOST,
    port: process.env.STAGE === "dev" ? 3306 : 31304,
    dialect: "mysql",
    dialectOptions: { decimalNumbers: true },
    pool: {
      max: 10,
      min: 0
    }
  }
);

const models = {};
// Initialize models
modules.forEach(module => ...

export default models;
//handler.js
import express from "express";
import serverless from "serverless-http";
import db from "./db";

const app = express();

app.use(async (req, res, next) => {
  try {
    const email = "get email from jwt ...";
    req.user = await db.user.findOne({
      where: { email }
    });
    return next();
  } catch (e) {
    logger.warn("An error occurred", e);
    res.status(500).send({ message: e.message });
  }
});

app.use("/api", api);

app.get("*", (req, res) =>
  res.status(404).json({ errorCode: 0, message: "Unrecognized route" })
);

const handler = serverless(app);

module.exports.handler = async (event, context) => {
  context.callbackWaitsForEmptyEventLoop = false;
  return handler(event, context);
};
I thought we might be hitting the MySQL max_connections limit (66 for my instance), but the RDS dashboard shows we peak in the 40s.
What are we doing wrong?

Although you say you are not reaching the maximum number of connections, you might still want to try creating an Amazon RDS Proxy for your Lambda function to access, hit it with high load, and see whether you can still reproduce the error.
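If you try that, the only application-side change should be pointing the Sequelize host at the proxy; a minimal sketch, assuming a proxy already exists (the endpoint name below is hypothetical):

// Same Sequelize setup as in db.js, but with the host swapped for the
// RDS Proxy endpoint (hypothetical name). The proxy speaks the MySQL
// wire protocol, so nothing else needs to change.
const sequelize = new Sequelize(
  process.env.DATABASE,
  process.env.DB_USER,
  process.env.DB_PASSWORD,
  {
    host: "my-app-proxy.proxy-abc123xyz0.us-east-1.rds.amazonaws.com", // hypothetical
    port: 3306,
    dialect: "mysql",
    dialectOptions: { decimalNumbers: true },
    pool: { max: 10, min: 0 }
  }
);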
You don't really have enough logs to diagnose the issue. If the above does not work, you will need to dive deeper, potentially enabling more RDS logging to see whether that reveals the problem.
Another way to troubleshoot is to write the same queries in another language/framework, simulate the same load, and see whether the problem persists.
You may also want to check CloudWatch metrics for any other tells that could give you a clue. Graph your Lambda resource metrics and RDS instance metrics on the same chart to see whether the Lambda errors correlate with what your DB is doing, e.g. whether they coincide with spikes in read or write latency.
If the issue persists and you are not able to solve it, the best you can probably do is implement retries. That will simply mask the issue, but if the boss is pressing you for a solution, it might be your best bet.
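As a minimal sketch of that retry approach, assuming the Sequelize setup from the question (the helper name and backoff values are my own, not a library API):

// Hypothetical helper: retry a query a few times on ETIMEDOUT with
// exponential backoff before giving up.
async function withRetry(fn, attempts = 3, baseDelayMs = 200) {
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (e) {
      // SequelizeConnectionError wraps the driver error in e.original
      const code = e.original && e.original.code;
      if (code !== "ETIMEDOUT" || i === attempts - 1) throw e;
      await new Promise(resolve => setTimeout(resolve, baseDelayMs * 2 ** i));
    }
  }
}

// Usage in the auth middleware from the question:
req.user = await withRetry(() => db.user.findOne({ where: { email } }));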
Hope my suggestions help; I've had similar issues with DB+Lambda and DB+ECS and found these troubleshooting strategies effective.

Related

Amazon RDS load balancing not working with mysql.createPool in nodejs

I have implemented load balancing for read database connections: when the read DB's load rises above 60%, a new read database instance is launched to spread the load.
From the AWS console dashboard I can see the new read instance being created, but most API-call load still lands on database 1 (up to 90% utilization, around 10 req/sec), while read DB instance 2 sits at only 1-5% utilization (around 1 req/sec).
The API requests should be divided equally between both databases, but that isn't happening.
The cause is that mysql.createPool never closes its connections to database 1 (createPool reuses its open connections), so subsequent API calls never move to the second database instance.
To work around this, I replaced mysql.createPool with mysql.createConnection on each API call.
I created two middlewares:
1 - for createConnection
2 - for connection.end()
Whenever a request comes in, middleware 1 runs and creates a new connection; when the request finishes, middleware 2 runs and ends the connection. This solved my load-balancing problem, but introduced a new one: with this method I now run into "too many connections" errors.
Does anyone who has faced this issue have a proper solution, or can help?
Sample Code:
var readDB = mysql.createConnection({
  database: process.env.READ_DB_DATABASE,
  host: process.env.READ_DB_HOST,
  user: process.env.READ_DB_DB_USER,
  password: process.env.READ_DB_DB_PASSWORD,
  charset: "utf8mb4"
});
utils.js
async onFinish(req, res, next) {
  // Promisify the callback-style end() and close this request's connection.
  const readDB = req.readDB;
  const dbEnd = util.promisify(readDB.end).bind(readDB);
  return dbEnd();
}
app.js
/**
 * Middleware to create a connection and end it when the response finishes.
 */
app.use(async (req, res, next) => {
  try {
    const readDB = await utils.readDBCreateConnection();
    req.readDB = readDB;
    res.on("finish", function () {
      console.log("onFinish called");
      utils.onFinish(req, res, next);
    });
    next();
  } catch (error) {
    res.status(error.status || 500).send({
      code: 500,
      message: error.message || `Internal Server Error`,
    });
  }
});

/**
 * Initialize routing.
 */
require("./modules/v2-routes")(app); // v2 app routes

RDS MySQL timing out intermittently when called from Lambda using NodeJS

My web app uses Lambda (Node.js) with RDS (MySQL) as the backend. I'm using serverless-mysql to make DB calls.
For some reason, the DB call times out intermittently. I tried the following:
Enabled flow logs to see if there are any errors (but couldn't find any reject statuses).
Made the database publicly available and took the Lambda out of the VPC (to rule out a VPC configuration issue). It still failed intermittently, so the VPC is out of the equation.
RDS shows no unusual spikes or connection exhaustion; monitoring shows a peak of only 3 connections. The Lambda is always kept warm. I tried increasing the timeout to 25 seconds. Still no luck.
Below is the code I use:
export async function get(event, context, callback) {
  if (await warmer(event)) return 'warmed';
  context.callbackWaitsForEmptyEventLoop = false;
  try {
    const userId = getUserIdFromIdentityId(event);
    const query = "select * from UserProfile where UserId = ?";
    const result = await mysql.query(query, [userId]);
    console.log(result);
    console.log('getting user account');
    mysql.quit();
    return success({
      profileSettings: result.length > 0 ? result[0] : null,
    });
  } catch (e) {
    console.log(e);
    return failure();
  }
}
The success function basically returns a JSON response like below:
return {
  statusCode: 200,
  headers: {
    "Access-Control-Allow-Origin": "*",
    "Access-Control-Allow-Credentials": true
  },
  body: JSON.stringify(body)
};
mysql is initialized as below:
export const mysql = AWSXray.captureMySQL(require('serverless-mysql')({
  config: {
    host: process.env.dbHost,
    user: process.env.dbUsername,
    password: process.env.dbPassword,
    database: process.env.database,
  }
}));
The only error I can see in the CloudWatch logs is:
Task timed out after 10.01 seconds.

"Unable to acquire a connection" when trying to query more than once

I'm working with a MySQL database in my Node.js project. The first query I run with Knex works fine, but when I try to query a second time, I get this error:
Error: Unable to acquire a connection
    at Client_MySQL.acquireConnection (C:\Users\Darek\Desktop\proj\node_modules\knex\lib\client.js:336:30)
    at Runner.ensureConnection
This is my knexfile.js:
const dotenv = require('dotenv');
dotenv.config();

module.exports = {
  client: 'mysql',
  connection: {
    host: 'localhost',
    user: process.env.MYSQL_USER,
    password: process.env.MYSQL_PASS,
    database: 'testDB'
  }
};
After that I have to restart my Node process. I searched for a solution to this problem, but no answer worked for me.
I also see this other warning:
(node:8428) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.
Here is the code where the error occurs. It's a function in my user model:
module.exports.check = (number) => {
  var bbb = 0;
  return knex
    .from('employ')
    .select('ID')
    .where('emp_number', '=', number)
    .then((row) => {
      bbb = row.length;
      return bbb;
    })
    .finally(() => {
      knex.destroy();
    });
};
And here is the call of this function:
const numberExist = await User.check(req.body.number);
You don't need to call knex.destroy(), and for the most part you probably shouldn't. destroy is useful if you have a series of tests or a one-off script, but for a server that needs to keep running request after request, you want Knex to manage its own pool of connections. I suggest removing your finally block, and further making certain you handle errors gracefully (using catch):
try {
  const numberExist = await User.check(req.body.number);
  // ... do something with numberExist ...
} catch (e) {
  console.error('Uh-oh:', e.message);
  res.status(500).json({ error: "Something unexpected happened!" });
}
Note also that your query is a COUNT, so it's more efficient to do it this way:
module.exports.check = number =>
  knex('employ')
    .count('ID')
    .where('emp_number', number);
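For completeness, a hedged usage sketch of that count query: aliasing the count and adding .first() makes the caller side tidier (the alias total is my own, not anything from the question):

// Hypothetical variant: alias the count and resolve a single row.
module.exports.check = number =>
  knex('employ')
    .count('ID as total')
    .where('emp_number', number)
    .first()
    .then(row => row.total);

// Caller side:
const numberExist = (await User.check(req.body.number)) > 0;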

How to debug an Azure Function using Node.js and mysql2 connecting to database

Running into some issues trying to figure out how an Azure Function (Node.js-based) can connect to our MySQL database (also hosted on Azure). We're using mysql2 and following tutorials pretty much exactly (https://learn.microsoft.com/en-us/azure/mysql/connect-nodejs, and similar). Here's the meat of the call:
const mysql = require('mysql2');
const fs = require('fs');

module.exports = async function (context, req) {
  context.log('JavaScript HTTP trigger function processed a request.');
  if (req.query.fname || (req.body && req.body.fname)) {
    context.log('start');
    var config = {
      host: process.env['mysql_host'],
      user: process.env['mysql_user'],
      password: process.env['mysql_password'],
      port: 3306,
      database: 'database_name',
      ssl: {
        ca: fs.readFileSync(__dirname + '\\certs\\cacert.pem')
      },
      connectTimeout: 5000
    };
    const conn = mysql.createConnection(config);
    /* context.log(conn); */
    conn.connect(function (err) {
      context.log('here');
      if (err) {
        context.error('error connecting: ' + err.stack);
        context.log("shit is broke");
        throw err;
      }
      console.log("Connection established.");
    });
    context.log('mid');
    conn.query('SELECT 1+1', function (error, results, fields) {
      context.log('here');
      context.log(error);
      context.log(results);
      context.log(fields);
    });
  }
};
Basically, the conn.connect(function (err) ...) callback never fires: no error message, no logs, etc. conn.query behaves similarly.
Everything seems set up properly, but I don't even know where to look next to resolve the issue. Has anyone come across this before, or have advice on how to handle it?
Thanks!!
Ben
I believe the link that Baskar shared covers debugging your function locally.
As for your function, you can make some changes to improve performance:
Create the connection to the DB outside the function code; otherwise a new instance is created and a new connection opened on every invocation. Also, enable pooling to reuse connections and stay under the 300-connection limit of the sandbox in which Azure Functions run.
Use Promises along with async/await.
You can basically update your code to something like this:
const mysql = require('mysql2/promise');
const fs = require('fs');

var config = {
  host: process.env['mysql_host'],
  user: process.env['mysql_user'],
  password: process.env['mysql_password'],
  port: 3306,
  database: 'database_name',
  ssl: {
    ca: fs.readFileSync(__dirname + '\\certs\\cacert.pem')
  },
  connectTimeout: 5000,
  connectionLimit: 250,
  queueLimit: 0
};

// Created once at module load, so warm instances reuse the pool across invocations.
const pool = mysql.createPool(config);

module.exports = async function (context, req) {
  context.log('JavaScript HTTP trigger function processed a request.');
  if (req.query.fname || (req.body && req.body.fname)) {
    context.log('start');
    const conn = await pool.getConnection();
    context.log('mid');
    // With the promise API, query() returns [rows, fields] instead of taking a callback.
    const [results, fields] = await conn.query('SELECT 1+1');
    context.log('here');
    context.log(results);
    context.log(fields);
    conn.release();
  }
};
PS: I haven't tested this code as such, but I believe something like this should work.
Debugging on serverless is challenging for obvious reasons. You can try one of the hacky solutions to debug locally (like the Serverless Framework), but that won't necessarily help if your issue has to do with the DB connection; you might see different behaviour locally.
Another option is to see if you can step debug using Rookout, which should let you catch the full stack at different points in the code execution and give you a good sense of what's failing and why.

"Uncaught Error: Received packet in the wrong sequence" with devtools off - Electron + MySQL node driver + Webpack

When I set up a new project using Electron + Webpack + the Node MySQL driver, my production build throws:
Uncaught Error: Received packet in the wrong sequence
The error goes away only if I keep config.devtool = 'eval' in my production builds; apparently this results in a larger file size and some performance issues, which I would like to avoid.
Why does my project / the mysql module crash with devtool set to ''? I can hardly find similar reports; am I the only one having this issue?
webpack.config.js:
...
if (process.env.NODE_ENV === 'production') {
  config.devtool = '' // <-------- mysql will throw Uncaught Error if I omit 'eval'
  config.plugins.push(
    new webpack.DefinePlugin({
      'process.env.NODE_ENV': '"production"'
    }),
    new webpack.optimize.OccurenceOrderPlugin(),
    new webpack.optimize.UglifyJsPlugin({
      compress: {
        warnings: false
      }
    })
  )
}
home.js:
<script>
var mysql = require('mysql')
var connection = mysql.createConnection({
host: 'localhost',
user: 'root',
password: 'password',
database: 'EONIC'
})
connection.connect()
connection.query('SELECT * from products', function (err, rows, fields) {
if (err) throw err <---- here will the error happen
console.log(rows)
})
connection.end()
</script>
The source of the error, in mysql/lib/protocol/Protocol.js at line 272:
if (!sequence[packetName]) {
  var err = new Error('Received packet in the wrong sequence.');
  err.code = 'PROTOCOL_INCORRECT_PACKET_SEQUENCE';
  err.fatal = true;
  this._delegateError(err);
  return;
}
It could have something to do with the mangle option in Webpack's default minimizer in combination with the mysql package for Node.
I've faced the same and similar issues without really being able to pinpoint the cause.
There are a lot of questions out there related to this issue:
https://github.com/webpack/webpack/issues/3150
https://github.com/Bajdzis/vscode-database/issues/78
https://github.com/mysqljs/mysql/issues/1655
But the best solution I've found is this:
// (requires: const TerserPlugin = require('terser-webpack-plugin');)
optimization: {
  // mangle: false, else mysql blows up with "PROTOCOL_INCORRECT_PACKET_SEQUENCE"
  minimizer: [new TerserPlugin({ terserOptions: { mangle: false } })]
},
It is from Rudijs in the mysql issue thread: https://github.com/mysqljs/mysql/issues/1655#issuecomment-484530654
Hope this helps, give me a shout!