I've been having a problem for several weeks now with an ETIMEDOUT error on subsequent database calls under load, and it has left me completely stumped.
For context, I have an Azure App Service where I'm running my Node.js backend on an S2 plan. I also have an Azure Database for MySQL flexible server. I don't remember the plan's name, but it's a mid-priced tier that allows up to 2700 connections. We are hosting an API that can see thousands of requests a day and makes a few database calls per request.
The server and database calls all run normally when the server isn't under load, but the moment it comes under load, even a little (say 100 requests in the same second), the database calls start to fail with ETIMEDOUT. Here's how I have the database calls set up.
First I grab a connection to be used. My connection is set up as
getConnection(bMultipleStatement) {
    return mysql.createConnection({
        host: process.env.MYSQL_HOST,
        user: process.env.MYSQL_USER,
        password: process.env.MYSQL_PASSWORD,
        database: process.env.MYSQL_DATABASE,
        charset: 'utf8mb4',
        multipleStatements: bMultipleStatement
    });
}
conn = base.getConnection(false);
Once I get a connection, I perform basic SELECTs and UPDATEs like this:
conn.query(sql, [variables], function (error, results) {
    if (error) {
        if (conn) { conn.end(); }
        // log error
    }
    else {
        if (conn) { conn.end(); }
        if (results.length > 0) {
            // continue
        }
        else {
            // log not found
        }
    }
});
The first 20-30 requests are able to run this query successfully, but everything after that just generates an ETIMEDOUT. I have checked my Azure metrics and can see that the number of connections and the CPU % are both very low: around 5% CPU and 50 active connections.
This problem has left me stumped. Almost everything I've found about this issue involves being completely unable to connect, rather than it working fine until it comes under load. Has anybody experienced a similar issue and knows how to solve it? The only other thing I was able to establish is that it doesn't run into this issue locally, only on Azure, but that may just be because my local machine has much better hardware than the Azure server. Upgrading is a last-resort option for us because that's just putting a band-aid over the problem.
Related
I'm working with MySQL more and ran into a question I can't seem to find an answer for without experimenting myself or looking much deeper.
I create a pool with MySQL and release connections properly, but in MySQL Workbench I can see that there's still an active connection until the process is closed. Code example below:
pool.getConnection((err, connection) => {
    if (err) {
        console.log(err);
        return callback(err);
    }
    if (args.length > 2) {
        sql_args = args[1];
    }
    connection.query(args[0], sql_args, (err, results) => {
        connection.release();
        if (err) {
            console.log(err);
            return callback(err);
        }
        callback(null, results);
    });
});
Is it normal behavior for the connection to remain open? And if so, what happens if I had a hundred servers connecting like this, each making requests (though not all at once), and they hit MySQL's max-connection limit? Will there be a purge of inactive connections (ones not being used)?
edit
If the connections don't close on their own, would running a shorter TTL, or some type of periodic clearing, be the right approach? I'd like to have 100 servers fit within a 50-connection limit and not error out.
It depends on the implementation of the connection pool package.
Most connection pools in my experience allocate a fixed number of connections, open them, and keep them connected permanently for the life of the application. Connections are "lent" to client code that needs to access the database, and when the client code is done, the connection is returned to the pool, but it is not disconnected from MySQL.
If a connection drops by accident, the connection pool opens a replacement connection, again so it can maintain a fixed number of ready connections. They remain connected permanently, as far as the pool is able to maintain them.
Yes, if you had 100 app servers that each allocate a pool of 20 connections, you would see 2000 clients connected on the MySQL Server all the time if you ran SHOW PROCESSLIST. We do see this regularly in production where I work.
Is that a problem? MySQL can handle a lot of connections if most of them are idle. We set our max_connections to 4096, for example. But rarely are more than 20-40 of them actually executing a query at any given moment. In fact, we made an alert that fires if the Threads_running goes over 100 on a given MySQL Server instance.
What if you had 100 app servers each of which made a connection pool of 500 connections? Yes, that would create 50000 connections on the MySQL Server, and that's probably more than it can handle. But why would you do that? The point of a connection pool is that many threads in your app can share a small number of connections.
So do be mindful that the total number of connections is basically your connection pool size multiplied by the number of app instances. And you might even have multiple app instances per app server (e.g. if you run sidecar processes and so on). So it can add up quickly. Do the math.
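As a rough illustration of that math (the numbers and connection details below are made up for the example, not taken from any of the questions here), the total is simply the per-instance pool size multiplied by the instance count:

// Sizing sketch: illustrative numbers only.
const mysql = require('mysql');

const POOL_SIZE = 20;      // connections each app instance keeps open
const APP_INSTANCES = 100; // processes that each create their own pool

// Every instance runs something like this once at startup:
const pool = mysql.createPool({
    connectionLimit: POOL_SIZE, // upper bound of connections this instance will open
    host: process.env.MYSQL_HOST,
    user: process.env.MYSQL_USER,
    password: process.env.MYSQL_PASSWORD,
    database: process.env.MYSQL_DATABASE
});

// Connections the MySQL server will see across the fleet (mostly idle):
console.log(POOL_SIZE * APP_INSTANCES); // 2000 -- keep this below max_connections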
I am using the mysql package from npm in my Node.js project. I am using a connection pool as below:
var pool = mysql.createPool({
    connectionLimit: 50,
    host: host,
    user: user,
    password: password,
    database: database
});
And then I am using the pool like this:
pool.query("Select ....", function (err, data) {
});
But sometimes our database server gets stuck on large queries, and I think the pool's connection limit gets exceeded. Then, even after the stuck queries have finished executing, the mysql library cannot acquire new connections. I cannot even see the queries in MySQL's SHOW PROCESSLIST, so there is some issue in acquiring new connections. There is nothing in the logs either. I work around the issue by restarting the Node server, but that isn't the ideal solution. Please help me identify the cause of the issue. A similar issue occurs with MSSQL connections in Node.js, and I just cannot identify the reason for it.
When you're completely done with the pool (for example, when the process shuts down), call pool.end() to close all of its connections; individual connections used by pool.query() are returned to the pool automatically.
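As a minimal sketch (assuming a pool variable like the one in your snippet), that usually means ending the pool only on shutdown rather than per query:

// Sketch: the pool stays alive for the life of the process; pool.query()
// returns each connection to the pool on its own when the callback fires.
process.on('SIGTERM', () => {
    pool.end((err) => {
        if (err) console.error('Error while closing the pool', err);
        process.exit(err ? 1 : 0);
    });
});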
I have a WebSocket Node.js app (a game server) that runs a multiplayer HTML5 game.
The game also has a website. The game server and the website are on the same Apache VPS.
The game server uses MySQL to store and retrieve data, using connection pooling from the Node.js mysql package.
It works fine 99% of the time, but intermittently, at a random point, it will suddenly stop being able to get a MySQL connection.
When this happens, the website stops working and shows a 500 HTTP error. I believe that this is what's causing the problem in the game server. Because of the 500 HTTP error, MySQL can no longer be connected to, and thus pool.getConnection no longer works in the game server.
I find it strange that even though Apache is throwing up a 500 error, the game server can still be accessed through a WebSocket as usual. The only thing that appears to have stopped working inside the game server is MySQL. The game client connects to the game server via WebSocket and the functions work correctly, except for being able to connect to MySQL.
If I Ctrl+C the game server to stop the Node.js app, the 500 error goes away. The website instantly serves up again, and if I then restart the game server, MySQL works again.
Obviously something in the game server is causing this to happen, but so far I cannot find what it is. I am stuck now; I've spent a full week trying everything I could think of to debug this.
After running the mysql library in debug mode, I'm seeing this:
<-- ErrorPacket
ErrorPacket {
    fieldCount: 255,
    errno: 1203,
    sqlStateMarker: '#',
    sqlState: '42000',
    message: 'User (i've hidden this) already has more than \'max_user_connections\' active connections' }
But I have the connection limit set to 100000. There's no way that many are being used. Every time I finish with a connection I use connection.release() to put it back into the pool. What do I need to do to fix this?
Please, any suggestion you have to debug this is greatly appreciated!
Thanks in advance.
Here is the way I'm using mysql in the game server:
const mysql = require('mysql');
const pool = mysql.createPool({
    connectionLimit : 100000,
    host            : '***********',
    user            : '***********',
    password        : '***********',
    database        : '***********',
    debug           : true
});
pool.getConnection(function (err, connection) {
    if (err) {
        console.log(err);
        return false;
    }
    connection.query("select * from aTable", function (err, rows) {
        if (err) {
            console.log(err);
            connection.release();
            return false;
        }
        // do stuff here
        connection.release();
    });
});
One thing I am wondering: what if there is an error in the top error-handling block here ->
pool.getConnection(function (err, connection) {
    if (err) {
        console.log(err);
        return false;
    }
Then the connection is not released, right? So that would keep a connection alive, and is this what's causing the problem? Over time an error happens here and there, and after a random amount of time enough of these have happened that it's like a build-up of open connections?
This was the mistake I was making:
pool.getConnection(function (err, connection) {
    if (err) {
        console.log(err);
        return false;
    }
    connection.query("select * from aTable", function (err, rows) {
        if (err) {
            console.log(err);
            connection.release();
            return false;
        }
        if (somethingrelevant) {
            // do stuff
            connection.release();
        }
    });
});
And that meant that if somethingrelevant didn't happen, then the connection would stay open.
My pool would continue to open new connections but they weren't always being put back.
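A pattern that avoids this class of leak (a sketch of the idea, not the exact game-server code) is to release unconditionally as the first thing in the query callback, before any branching:

// Sketch: release the connection in every code path by doing it
// immediately when the query callback runs.
pool.getConnection(function (err, connection) {
    if (err) {
        console.log(err); // nothing to release -- getConnection failed
        return;
    }
    connection.query("select * from aTable", function (err, rows) {
        connection.release(); // always runs, no matter which branch follows
        if (err) {
            console.log(err);
            return;
        }
        if (somethingrelevant) {
            // do stuff with rows
        }
    });
});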
I have more than 20 Lambda functions for my mobile app API. In the beginning the user base was small, so it was all going well, but now as users increase (3000 to 4000) I am facing too-many-connections issues in my Lambda functions, because of which I have started getting internal server errors from my API. I know I am missing something in how I create the connection in Lambda, but after a lot of trial and error I was not able to find that missing link. Below is the code I am using to create the connection:
var con;
exports.handler = async (event, context) => {
    context.callbackWaitsForEmptyEventLoop = false;
    if (!con || con.state == "disconnected" || con === "undefined") {
        con = secret
            .then((result) => {
                var data = JSON.parse(result.SecretString);
                var connection = mysql.createConnection({
                    "host": data.host,
                    "user": data.username,
                    "password": data.password,
                    "database": data.db
                });
                connection.connect();
                return connection;
            }).catch((err) => {
                throw err;
            });
    }
I have tried adding con.destroy() before sending the response, but it does not seem to solve the problem, so if there is anything else I can do, please let me know.
It's hard to know exactly what's going on; my first guess always revolves around setting context.callbackWaitsForEmptyEventLoop = false and storing the connection outside the handler scope - both of which you've correctly done.
With that said, managing connection pools in Lambda kind of goes against what serverless is by definition: a long-lived pool "lacks" the ephemerality of the function. This does not mean that you can't scale while using connections, but you'll have to dig deeper into your issue.
Jeremy Daly covers good practices for dealing with this in the following posts on his blog:
Reusing DB connections
Managing RDS connections + AWS Lambda
He has also made a library that manages this for you, called serverless-mysql, which is built to address this specific issue.
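For reference, a minimal handler with serverless-mysql looks roughly like this (a sketch based on the library's documented query()/end() API; the table name and connection details are placeholders):

// Sketch: serverless-mysql manages opening, reusing and cleaning up
// connections across Lambda invocations.
const mysql = require('serverless-mysql')({
    config: {
        host: process.env.MYSQL_HOST,
        user: process.env.MYSQL_USER,
        password: process.env.MYSQL_PASSWORD,
        database: process.env.MYSQL_DATABASE
    }
});

exports.handler = async (event) => {
    const results = await mysql.query('SELECT * FROM some_table WHERE id = ?', [event.id]);
    await mysql.end(); // runs the library's cleanup rather than destroying the connection each invocation
    return results;
};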
Personal experience: I had trouble with connections + Lambdas, and because of this I migrated to the Data API solution (I had to migrate my RDS instance to Aurora Serverless, which is not a big pain) - its GA release was about 2-3 weeks ago.
If you want more info on Aurora SLS, check it out here.
Another way to tackle this kind of issue is to use AWS RDS Proxy: https://aws.amazon.com/fr/rds/proxy/
Many applications, including those built on modern serverless architectures, can have a large number of open connections to the database server, and may open and close database connections at a high rate, exhausting database memory and compute resources. Amazon RDS Proxy allows applications to pool and share connections established with the database, improving database efficiency and application scalability. With RDS Proxy, failover times for Aurora and RDS databases are reduced by up to 66% and database credentials, authentication, and access can be managed through integration with AWS Secrets Manager and AWS Identity and Access Management (IAM).
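From the application side, adopting RDS Proxy is mostly a configuration change: the mysql client code stays the same and only the host changes to the proxy endpoint (the endpoint name below is hypothetical):

// Sketch: point the client at the RDS Proxy endpoint instead of the
// database instance; the proxy pools and shares the server-side connections.
const mysql = require('mysql');

const connection = mysql.createConnection({
    host: 'my-app-proxy.proxy-xxxxxxxxxxxx.us-east-1.rds.amazonaws.com', // hypothetical proxy endpoint
    user: process.env.MYSQL_USER,
    password: process.env.MYSQL_PASSWORD,
    database: process.env.MYSQL_DATABASE
});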
I built a program with Node.js that multiple users access at the same time, doing a lot of operations that query the MySQL database.
My approach is very simple. I only open one connection when the app is started and leave it that way.
const dbConfig = require('./db-config');
const mysql = require('mysql');

// Create MySQL connection
const db = mysql.createConnection({
    host: dbConfig.host,
    user: dbConfig.user,
    password: dbConfig.password,
    database: dbConfig.database,
    multipleStatements: true
});

// Connect MySQL
db.connect((err) => {
    if (err) {
        throw err;
    } else {
        console.log('MySQL connected!');
    }
});

module.exports = db;
And then, whenever the program needs to query the database, I do this:
db.query('query_in_here', (error, result) => {
    // error handling and doing stuff
});
I'm having trouble when no one accesses the app for a long period of time (some hours).
When this happens, I think the connection is being closed automatically, and then, when a user tries to access the app, I see in the console that the connection timed out.
My first thought was to handle the disconnection and connect again, but it got me thinking about whether this is the correct approach.
Should I use a connection pool instead? Because if I keep only one connection, does that mean two users can't query the database at the same time?
I tried to follow tutorials on connection pools but couldn't figure out when to create new connections and when I should end them.
UPDATE 1
Instead of creating one connection when the app starts, I changed it to create a connection pool.
const dbConfig = require('./db-config');
const mysql = require('mysql');

// Create MySQL connection pool
const db = mysql.createPool({
    host: dbConfig.host,
    user: dbConfig.user,
    password: dbConfig.password,
    database: dbConfig.database,
    multipleStatements: true
});

module.exports = db;
It seems that when I now use db.query(...), acquiring a connection and releasing it afterwards is done automatically.
So it should resolve my issue, but I don't know if this is the correct approach.
Should I use a connection pool instead?
Yes you should. Pooling is supported out-of-the-box with the mysql module.
var mysql = require('mysql');
var pool = mysql.createPool({
    connectionLimit : 10,
    host            : 'example.org',
    user            : 'bob',
    password        : 'secret',
    database        : 'my_db'
});

pool.query('SELECT 1 + 1 AS solution', function (error, results, fields) {
    // should actually use an error-first callback to propagate the error, but anyway...
    if (error) return console.error(error);
    console.log('The solution is: ', results[0].solution);
});
You're not supposed to know how pooling works. It's abstracted from you. All you need to do is use pool to dispatch queries. How it works internally is not something you're required to understand.
What you should pay attention to is the connectionLimit configuration option. This should match your MySQL server connection limit (minus one, in case you want to connect to it yourself while your application is running), otherwise you'll get "too many connections" errors. The default connection limit for MySQL is 100, so I'd suggest you set connectionLimit to 99.
Because if I keep only one connection, does that mean two users can't query the database at the same time?
Without pooling, you can't serve multiple user requests in parallel. It's a must-have for any non-hobby, data-driven application.
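As a tiny illustration (reusing the pool variable from the snippet above), two queries dispatched back to back can be in flight at the same time, each on its own pooled connection:

// Sketch: both queries can run concurrently because the pool hands each
// one its own connection (up to connectionLimit). With a single shared
// connection they would run strictly one after the other.
pool.query('SELECT SLEEP(1) AS a', function (err) {
    if (err) return console.error(err);
    console.log('first query done');
});
pool.query('SELECT SLEEP(1) AS b', function (err) {
    if (err) return console.error(err);
    console.log('second query done');
});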
Now, if you really want to know how connection pooling works, this article sums it up pretty nicely.
In software engineering, a connection pool is a cache of database connections maintained so that the connections can be reused when future requests to the database are required. Connection pools are used to enhance the performance of executing commands on a database. Opening and maintaining a database connection for each user, especially requests made to a dynamic database-driven website application, is costly and wastes resources. In connection pooling, after a connection is created, it is placed in the pool and it is used again so that a new connection does not have to be established. If all the connections are being used, a new connection is made and is added to the pool. Connection pooling also cuts down on the amount of time a user must wait to establish a connection to the database.