@mysql/xdevapi ECONNREFUSED doesn't release connection - mysql

I'm using the @mysql/xdevapi npm package (version 8.0.22) with a local installation of mysql-8.0.15-winx64.
I have enabled pooling and attempt to retrieve a session from the client. If I do this before MySQL is ready, I get an ECONNREFUSED error, which is expected, but the connection never appears to be released. If the pool size is one, all subsequent attempts to getSession() time out waiting on the pool queue.
The exception is thrown from within the getSession method, so no session is returned on which I could call .end() manually.
const mysqlx = require('@mysql/xdevapi');
const client = mysqlx.getClient(config, { pooling: { enabled: true, maxSize: 1, queueTimeout: 2000 } });

let session = await client.getSession(); // rejects with an ECONNREFUSED error
/*
wait for mysql to be ready and accepting connections
*/
session = await client.getSession(); // rejects with a pool queue timeout error because the previous connection was never returned to the pool
How can I ensure that the aborted connection is returned to the pool?

This is a bug and I encourage you to report it at https://bugs.mysql.com/ using the Connector for Node.js category.
The only workaround that comes to mind is re-creating the pool if the getSession() method returns a rejected Promise (with or without a specific error). For instance, something like:
const poolConfig = { pooling: { enabled: true, maxSize: 1, queueTimeout: 2000 } }
let pool = mysqlx.getClient(config, poolConfig)
let session = null
try {
  session = await pool.getSession()
} catch (err) {
  // the pool is now stuck, so close it and create a fresh one
  await pool.close()
  pool = mysqlx.getClient(config, poolConfig)
  session = await pool.getSession()
}
// do something with the session
It's an ugly solution and there's a chance it might be hard to shoehorn into your design but, at least, it lets you enjoy the other benefits of a connection pool.
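That workaround could be wrapped in a small helper so callers don't repeat the recreate logic. This is only a sketch under the same assumptions as the snippet above (a valid config object and the @mysql/xdevapi package); the helper name and shape are mine, not part of the connector's API:

```javascript
// Sketch: wrap the recreate-on-failure workaround in one place.
// `mysqlx` is passed in so the helper stays decoupled from require().
function makePoolManager(mysqlx, config, poolConfig) {
  let pool = mysqlx.getClient(config, poolConfig);

  return {
    // Try to get a session; if the pool rejects (e.g. ECONNREFUSED),
    // close and re-create it, then retry once.
    async getSession() {
      try {
        return await pool.getSession();
      } catch (err) {
        await pool.close();
        pool = mysqlx.getClient(config, poolConfig);
        return pool.getSession();
      }
    },
    close: () => pool.close()
  };
}
```

Callers would then do something like `const db = makePoolManager(require('@mysql/xdevapi'), config, poolConfig)` and use `db.getSession()` everywhere.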
Disclaimer: I'm the lead developer of the MySQL X DevAPI Connector for Node.js

Related

How to query an AWS Aurora MySQL database that has been scaled to 0 ACUs

I am using AWS Amazon RDS Aurora serverless database for an API data source.
Aurora 2.08.3 compatible with MySQL 5.7
I have enabled the "Pause after inactivity feature" to "Scale the capacity to 0 ACUs when cluster is idle".
I am finding that when the database is scaled to 0 Aurora Capacity units, the API fails.
Even if I change the API configuration to time out after 40 seconds, it fails if the database was at 0 ACUs when it was called. Calling the API again after some time yields a successful call.
Whether at connection.connect or connection.query, the failures come with no useful response - the response simply never arrives.
I have not been able to determine whether the database needs a moment to scale up, which would tell me whether I need to pause the call. Logging the connection info to the console, it looks the same whether the database is scaled down or ready for a query.
Is there a way to programmatically check if an AWS Serverless v2 Aurora MySQL database is scaled to 0? connection.state does not seem to describe this.
I have tried many, many approaches. This post seemed promising, but didn't solve my issue.
What I have now is...
var mysql = require('mysql');
var connection;

exports.getRecords = async (event) => {
  await openConnection();
  // Simplified connection.query code that works when the database is scaled up
  var q = 'SELECT * FROM databaseOne.tableOne';
  return new Promise(function (resolve, reject) {
    connection.query(q, function (err, results) {
      if (err) {
        console.log('q err', err);
        return reject(err);
      }
      console.log(results);
      resolve(results);
    });
  });
};

async function openConnection() {
  connection = mysql.createConnection({
    host: process.env.hostname,
    port: process.env.portslot,
    user: process.env.username,
    password: process.env.userpassword,
  });
  console.log('connection1', connection.state, connection);
  try {
    // connection.connect() is callback-based, so wrap it in a Promise to await it
    await new Promise(function (resolve, reject) {
      connection.connect(function (err) {
        if (err) return reject(err);
        resolve();
      });
    });
    console.log('connection2', connection.state, connection);
  } catch (err) {
    console.log('c err', err);
  }
}
Thank you
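One pattern sometimes used with serverless clusters that pause is to retry the connection until the cluster has resumed, since the first attempt typically triggers the wake-up. This is only a sketch (not from the original thread); the attempt count, delay, and the `connectFn` argument are arbitrary illustrations:

```javascript
// Retry helper: call an async connect function until it succeeds
// or the attempts run out. Delays here are illustrative only.
async function connectWithRetry(connectFn, attempts = 5, delayMs = 5000) {
  let lastErr;
  for (let i = 0; i < attempts; i++) {
    try {
      return await connectFn(); // succeeds once the cluster has resumed
    } catch (err) {
      lastErr = err;
      // wait before the next attempt to give the cluster time to scale up
      await new Promise((res) => setTimeout(res, delayMs));
    }
  }
  throw lastErr; // still unreachable after all attempts
}
```

With the question's code, this would be used as `await connectWithRetry(() => openConnection())`.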

"ER_CON_COUNT_ERROR: Too many connections" Error with pool connections to mysql from node.js

I have about 20 node.js files that use the following configuration to access my db:
var pool = mysql.createPool({
  host: databaseHost,
  user: databaseUser,
  password: databasePassword,
  database: databaseName,
  multipleStatements: true
});
The functions all use the following pattern:
pool.getConnection(function (err, connection) {
  if (err) {
    callback(err);
  } else {
    // Use the connection
    var sql = "...sql statement...";
    var inserts = [...inserts...];
    connection.query(sql, inserts, function (error, results, fields) {
      // And done with the connection.
      connection.release();
      // Handle error after the release.
      if (error) {
        callback(error);
      } else {
        callback(null, results);
      }
    });
  }
});
I recently started getting the error:
"ER_CON_COUNT_ERROR: Too many connections"
on calls to any of my functions. I don't really understand the pool concept well enough. If each file is creating a pool, does that create a separate pool for each file that does so?
I understand getConnection and releasing a connection; I just don't really get createPool.
I tried to log the following:
console.log(pool.config.connectionLimit); // passed in max size of the pool
console.log(pool._freeConnections.length); // number of free connections awaiting use
console.log(pool._allConnections.length); // number of connections currently created, including ones in use
console.log(pool._acquiringConnections.length); // number of connections in the process of being acquired
the result was:
10
0
0
0
I can increase the number of connections but would like to have some better understanding of why the problem exists.
If createPool is called in each of your files (or worse, inside functions every time there is a query), then yes, that is the problem: each call creates a separate pool with its own connections. Instead, keep the MySQL connection code in a single dedicated file. Create the pool once in that module and expose a way to get connections from it. Then, wherever you require that file in your project, you reuse the same pool to query and release.
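As a sketch of that suggestion (the file name db.js and the config shape are assumptions, not from the original post), the shared-pool module could look like this:

```javascript
// db.js - create the pool exactly once. Node caches modules, so every
// require('./db') in the project receives the same pool instance.
let pool = null;

function getPool(mysql, config) {
  if (!pool) {
    pool = mysql.createPool(config); // runs only on the first call
  }
  return pool;
}

module.exports = { getPool };
```

Callers would do something like `const pool = getPool(require('mysql'), { host: databaseHost, user: databaseUser, password: databasePassword, database: databaseName })` and then use `pool.getConnection()` exactly as in the question.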

mysql connection lost error nodejs

I am connecting my Node app to MySQL using the code below for all the REST APIs in my project.
I have put this in a common DB connection file used by all my query requests.
var mysql = require('mysql');

var db_connect = (function () {
  var mysqlConnConfig;

  function db_connect() {
    mysqlConnConfig = {
      host: "localhost",
      user: "username",
      password: "password",
      database: "db_name"
    };
  }

  db_connect.prototype.unitOfWork = function (sql) {
    var mysqlConn = mysql.createConnection(mysqlConnConfig);
    try {
      sql(mysqlConn);
    } catch (ex) {
      console.error(ex);
    } finally {
      mysqlConn.end();
    }
  };

  return db_connect;
})();
exports.db_connect = db_connect;
The above code works fine, and I run my queries through the 'sql' connection callback, as below, in all of my REST APIs.
var query1 = "SELECT * FROM table1";
sql.query(query1, function (error, response) {
  if (error) {
    console.log(error);
  } else {
    console.log(response);
  }
});
Everything works until a MySQL protocol connection error appears after 8-12 hours of running under the forever module:
forever start app.js
I start my project with the command above. After 8-12 hours I get the error below, and all my REST APIs stop working.
"stack": ["Error: Connection lost: The server closed the connection.", " at Protocol.end (/path/to/my/file/node_modules/mysql/lib/protocol/Protocol.js:109:13)", " at Socket.<anonymous> (/path/to/my/file/node_modules/mysql/lib/Connection.js:102:28)", " at emitNone (events.js:72:20)", " at Socket.emit (events.js:166:7)", " at endReadableNT (_stream_readable.js:913:12)", " at nextTickCallbackWith2Args (node.js:442:9)", " at process._tickDomainCallback (node.js:397:17)"],
"level": "error",
"message": "uncaughtException: Connection lost: The server closed the connection.",
"timestamp": "2017-09-13T21:22:25.271Z"
Then, in my research, I found the suggested way below to handle disconnections.
But I am struggling to fit this configuration into my own code.
var db_config = {
  host: 'localhost',
  user: 'root',
  password: '',
  database: 'example'
};

var connection;

function handleDisconnect() {
  connection = mysql.createConnection(db_config); // Recreate the connection, since
                                                  // the old one cannot be reused.
  connection.connect(function(err) {              // The server is either down
    if(err) {                                     // or restarting (takes a while sometimes).
      console.log('error when connecting to db:', err);
      setTimeout(handleDisconnect, 2000);         // We introduce a delay before attempting to reconnect,
    }                                             // to avoid a hot loop, and to allow our node script to
  });                                             // process asynchronous requests in the meantime.
                                                  // If you're also serving http, display a 503 error.
  connection.on('error', function(err) {
    console.log('db error', err);
    if(err.code === 'PROTOCOL_CONNECTION_LOST') { // Connection to the MySQL server is usually
      handleDisconnect();                         // lost due to either server restart, or a
    } else {                                      // connection idle timeout (the wait_timeout
      throw err;                                  // server variable configures this)
    }
  });
}

handleDisconnect();
Can anyone help me adapt my code to use the approach above?
SHOW SESSION VARIABLES LIKE '%wait_timeout';
SHOW GLOBAL VARIABLES LIKE '%wait_timeout';
One of them is set to 28800 (8 hours). Increase it.
Or... Catch the error and reconnect.
Or... Check on how "connection pooling" is handled in your framework.
But... Be aware that network glitches can occur. So, simply increasing the timeout won't handle such glitches.
Or... Don't hang onto a connection so long. It is not playing nice, and it could lead to exceeding max_connections.
(Sorry, I don't understand your application well enough to be more specific about which of these many paths to pursue.)
max_connections
...wait_timeout and max_connections go together in a clumsy way. If the timeout is "too high", the number of connections can keep growing, threatening a "too many connections" error. In typical designs, it is better to lower the timeout to prevent clients from wastefully hanging onto a connection for too long.
If your situation is this: "Fixed number of clients that want to stay connected forever", then increase the timeout, but not max_connections (at least not much beyond the fixed number of clients).
Still, if the network hiccups, the connections could break. So, you can still get "connection lost". (However, if everything is on the same machine, this is rather unlikely.)
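If it helps to check these values from the application itself, the variables can be read over an existing connection. A sketch only, assuming a node-mysql connection object and its callback-style query() API:

```javascript
// Sketch: read the current timeout settings over an existing
// node-mysql connection and hand the rows to a callback.
function showTimeouts(connection, cb) {
  connection.query(
    "SHOW GLOBAL VARIABLES LIKE '%wait_timeout'",
    function (err, rows) {
      if (err) return cb(err);
      // rows look like [{ Variable_name: 'wait_timeout', Value: '28800' }, ...]
      cb(null, rows);
    }
  );
}
```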
I have solved this problem by using a connection pool. Try it this way: https://www.npmjs.com/package/mysql
var mysql = require('mysql');
var pool = mysql.createPool(...);

pool.getConnection(function(err, connection) {
  // Use the connection
  connection.query('SELECT something FROM sometable', function (error, results, fields) {
    // And done with the connection.
    connection.release();
    // Handle error after the release.
    if (error) throw error;
    // Don't use the connection here, it has been returned to the pool.
  });
});

knexjs promise release pool connection

I currently use knex.js (knexjs.org) with promises instead of regular callbacks, and a connection pool for SQL queries. At first it ran smoothly, but now I frequently hit pool connection errors. The code is something like this:
knex('user_detail')
  .select('id', 'full_name', 'phone', 'email')
  .where('id', id_user)
  .then((result) => {
    resolve(result);
  })
  .catch((error) => {
    reject(error);
  })
Now I usually get a connection timeout error and a pool connection error from it. My first thought was that this happens because I haven't released the connection, but I also have code like this:
knex('user_detail')
  .select('id', 'full_name', 'phone', 'email')
  .where('id', id_user)
  .then((result) => {
    resolve(result);
  })
  .catch((error) => {
    reject(error);
  })
  .finally(() => {
    knex.destroy()
  })
It works on the first try, but fails on the second with the error There is no pool defined on the current client, and sometimes the error The pool is probably full.
Can someone explain to me what's going on and how to solve it? Thanks.
There is not enough information in the question to tell why you are running out of pool connections in the first place.
The way you are calling resolve() and reject() suggests that you are using promises inefficiently or incorrectly...
If you add a complete code example of how you trigger the pool is probably full error, I can edit the answer to help more. For example, accidentally creating multiple transactions that are never resolved will fill up the pool.
In the second code example you are calling knex.destroy(), which doesn't destroy a single pool connection but completely destroys the knex instance and the pool it uses.
So after knex.destroy() you won't be able to use that knex instance anymore, and you would have to create a completely new instance by passing the database connection configuration again.
With a transaction you don't need to handle the connection at all: it automatically commits and releases the connection back to the pool on return, and rolls back on a thrown error.
const resultsAfterTransactionIsComplete = await knex.transaction(async trx => {
  const result = await trx('insert-table').insert(req.list).returning('*');
  // insert logs in the same transaction
  const logEntries = result.map(o => ({ event_id: 1, resource: o.id }));
  await trx('log-table').insert(logEntries);
  // returning from the transaction handler automatically commits and frees the connection back to the pool
  return result;
});
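On the resolve()/reject() pattern in the question: knex query builders are already then-ables, so they can be returned or awaited directly instead of being wrapped in a new Promise. A sketch, reusing the user_detail query from the question (passing knex in as a parameter is my choice, for illustration):

```javascript
// The knex query builder is a then-able, so it can be returned or
// awaited directly - no manual resolve()/reject() wrapping needed.
function getUserDetail(knex, idUser) {
  return knex('user_detail')
    .select('id', 'full_name', 'phone', 'email')
    .where('id', idUser); // resolves with the matching rows
}
```

Callers then simply write `const rows = await getUserDetail(knex, idUser)`, and pool connections are acquired and released by knex itself.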

nodejs mysql Error: Connection lost The server closed the connection

When I use node-mysql, an error appears between 12:00 and 2:00: the TCP connection is shut down by the server. This is the full message:
Error: Connection lost: The server closed the connection.
at Protocol.end (/opt/node-v0.10.20-linux-x64/IM/node_modules/mysql/lib/protocol/Protocol.js:73:13)
at Socket.onend (stream.js:79:10)
at Socket.EventEmitter.emit (events.js:117:20)
at _stream_readable.js:920:16
at process._tickCallback (node.js:415:13)
There is a known solution; however, after trying it, the problem still appears, and now I do not know what to do. Has anyone else met this problem?
Here is what I wrote, following that solution:
var handleKFDisconnect = function() {
  kfdb.on('error', function(err) {
    if (!err.fatal) {
      return;
    }
    if (err.code !== 'PROTOCOL_CONNECTION_LOST') {
      throw err;
    }
    log.error("The database is error:" + err.stack);
    kfdb = mysql.createConnection(kf_config);
    console.log("kfid");
    console.log(kfdb);
    handleKFDisconnect();
  });
};
handleKFDisconnect();
Try to use this code to handle server disconnect:
var db_config = {
  host: 'localhost',
  user: 'root',
  password: '',
  database: 'example'
};

var connection;

function handleDisconnect() {
  connection = mysql.createConnection(db_config); // Recreate the connection, since
                                                  // the old one cannot be reused.
  connection.connect(function(err) {              // The server is either down
    if(err) {                                     // or restarting (takes a while sometimes).
      console.log('error when connecting to db:', err);
      setTimeout(handleDisconnect, 2000);         // We introduce a delay before attempting to reconnect,
    }                                             // to avoid a hot loop, and to allow our node script to
  });                                             // process asynchronous requests in the meantime.
                                                  // If you're also serving http, display a 503 error.
  connection.on('error', function(err) {
    console.log('db error', err);
    if(err.code === 'PROTOCOL_CONNECTION_LOST') { // Connection to the MySQL server is usually
      handleDisconnect();                         // lost due to either server restart, or a
    } else {                                      // connection idle timeout (the wait_timeout
      throw err;                                  // server variable configures this)
    }
  });
}

handleDisconnect();
In your code, I am missing the parts after connection = mysql.createConnection(db_config);
I do not recall my original use case for this mechanism. Nowadays, I cannot think of any valid use case.
Your client should be able to detect when the connection is lost and allow you to re-create the connection. If it is important that part of the program logic is executed using the same connection, then use transactions.
tl;dr; Do not use this method.
A pragmatic solution is to force MySQL to keep the connection alive:
setInterval(function () {
  db.query('SELECT 1');
}, 5000);
I prefer this solution over connection pooling and handling disconnects because it does not require structuring your code in a way that is aware of connection presence. Making a query every 5 seconds ensures that the connection remains alive and PROTOCOL_CONNECTION_LOST does not occur.
Furthermore, this method ensures that you are keeping the same connection alive, as opposed to re-connecting. This is important: consider what would happen if your script relied on LAST_INSERT_ID() and the MySQL connection had been reset without you being aware of it.
However, this only ensures that a connection timeout (wait_timeout and interactive_timeout) does not occur. It will fail, as expected, in all other scenarios, so make sure to handle other errors as well.
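For completeness, the keep-alive above can be written with the error handling just mentioned (a sketch; the interval and the logging choices are illustrative, and `db` is assumed to be a node-mysql connection):

```javascript
// Keep-alive ping that also surfaces query errors instead of
// silently ignoring them.
function startKeepAlive(db, intervalMs = 5000) {
  const timer = setInterval(function () {
    db.query('SELECT 1', function (err) {
      if (err) {
        // e.g. the server really went away; decide here whether to
        // reconnect, alert, or shut down.
        console.error('keep-alive ping failed:', err.code);
      }
    });
  }, intervalMs);
  return timer; // pass to clearInterval() to stop pinging
}
```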
A better solution is to use a pool - it will handle this for you.
const pool = mysql.createPool({
  host: 'localhost',
  user: '--',
  database: '---',
  password: '----'
});

// ... later
pool.query('select 1 + 1', (err, rows) => { /* */ });
https://github.com/sidorares/node-mysql2/issues/836
To simulate a dropped connection try
connection.destroy();
More information here: https://github.com/felixge/node-mysql/blob/master/Readme.md#terminating-connections
Creating and destroying a connection for each query can be complicated. I had some headaches with a server migration when I decided to install MariaDB instead of MySQL: for some reason, in the file /etc/my.cnf the parameter wait_timeout had a default value of 10 seconds, which meant persistent connections could not be maintained. The solution was to set it to 28800, that is, 8 hours. I hope this helps somebody.