knexjs promise release pool connection - mysql

I currently use Knex.js with promises instead of regular callbacks, and a connection pool for SQL queries. At first it ran smoothly, but now I regularly hit pool connection errors. The code looks something like this:
knex('user_detail')
  .select('id', 'full_name', 'phone', 'email')
  .where('id', id_user)
  .then((result) => {
    resolve(result);
  })
  .catch((error) => {
    reject(error);
  })
Now I usually get connection timeout and pool connection errors from it. My first thought was that the error happens because I haven't released the connection, so I changed the code to this:
knex('user_detail')
  .select('id', 'full_name', 'phone', 'email')
  .where('id', id_user)
  .then((result) => {
    resolve(result);
  })
  .catch((error) => {
    reject(error);
  })
  .finally(() => {
    knex.destroy()
  })
It works on the first try, but fails on the second with the error "There is no pool defined on the current client", and sometimes "The pool is probably full".
Can someone explain to me what's going on and how to solve it? Thanks.

There is not enough information in the question to tell why you are running out of pool connections in the first place.
The way you are calling resolve() and reject() gives a hunch that you are using promises inefficiently or completely wrong: knex queries already return promises, so there is rarely a reason to wrap them.
If you add a complete code example of how you trigger the "pool is probably full" error, I can edit the answer and help more. For example, accidentally creating multiple transactions that are never resolved will fill up the pool.
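To expand on the resolve()/reject() hunch: wrapping a call that already returns a promise in an explicit `new Promise` (the explicit-promise-constructor anti-pattern) adds nothing and makes it easier to leak errors and connections. A minimal sketch with a generic query function standing in for the knex call; `getUserDetail`, `getUserDetailWrapped`, and `fetchQuery` are hypothetical names for illustration:

```javascript
// Anti-pattern: wrapping a call that already returns a promise
// in an explicit new Promise with manual resolve/reject.
function getUserDetailWrapped(fetchQuery, idUser) {
  return new Promise((resolve, reject) => {
    fetchQuery(idUser)
      .then((result) => resolve(result))
      .catch((error) => reject(error));
  });
}

// Better: a query-builder call already returns a promise, so just return it.
function getUserDetail(fetchQuery, idUser) {
  return fetchQuery(idUser);
}
```

Both functions produce the same result, but the second one lets errors propagate naturally to the caller's .catch() instead of threading them through an extra promise.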
In the second code example you are calling knex.destroy(), which does not release a single pool connection; it completely destroys the knex instance and the pool it uses.
So after knex.destroy() you cannot use that knex instance anymore, and you would have to create a completely new instance by passing the database connection configuration again.

If you need to run multiple queries on one connection, use a transaction instead. That way you don't need to handle the connection at all: returning from the transaction handler automatically commits and releases the connection back to the pool, and throwing an error rolls the transaction back.
const resultsAfterTransactionIsComplete = await knex.transaction(async trx => {
  const result = await trx('insert-table').insert(req.list).returning('*');
  // insert logs in the same transaction
  const logEntries = result.map(o => ({ event_id: 1, resource: o.id }));
  await trx('log-table').insert(logEntries);
  // returning from the transaction handler automatically commits
  // and frees the connection back to the pool
  return result;
});

Related

@mysql/xdevapi ECONNREFUSED doesn't release connection

I'm using the @mysql/xdevapi npm package (version 8.0.22) with a local installation of mysql-8.0.15-winx64.
I have enabled pooling and attempted to retrieve a session from the client. If I do this before mysql is ready, I get an ECONNREFUSED exception, which is expected, but the connection never appears to be released. If the pool size is one, then all subsequent attempts to getSession() fail with a pool queue timeout.
The exception is thrown from within the getSession method, so the session is never returned to me and I cannot call .end() on it manually.
const mysqlx = require('@mysql/xdevapi');
const client = mysqlx.getClient(config, { pooling: { enabled: true, maxSize: 1, queueTimeout: 2000 } });

let session = await client.getSession(); // rejects with an ECONNREFUSED error
/*
 wait for mysql to be ready and accepting connections
*/
session = await client.getSession(); // rejects with a pool queue timeout because the previous session was never returned to the pool
How can I ensure that the aborted connection is returned to the pool?
This is a bug and I encourage you to report it at https://bugs.mysql.com/ using the Connector for Node.js category.
The only workaround that comes to mind is re-creating the pool if the getSession() method returns a rejected Promise (with or without a specific error). For instance, something like:
const poolConfig = { pooling: { enabled: true, maxSize: 1, queueTimeout: 2000 } };
let pool = mysqlx.getClient(config, poolConfig);
let session = null;

try {
  session = await pool.getSession();
} catch (err) {
  await pool.close();
  pool = mysqlx.getClient(config, poolConfig);
  session = await pool.getSession();
  // do something
}
It's an ugly solution and there's a chance it might be hard to shoehorn into your design but, at least, it lets you enjoy the other benefits of a connection pool.
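If the rebuild logic has to live in several call sites, it can be wrapped in a small helper. This is only a sketch: `getSessionWithRetry` is a hypothetical name, and the pool factory is injected as a callback so the pattern stays independent of the @mysql/xdevapi specifics; the only assumption is that the pool object exposes the getSession() and close() methods shown above.

```javascript
// Hypothetical helper: try to get a session; on failure, close the
// (possibly poisoned) pool, rebuild it via the injected factory,
// and try exactly once more.
async function getSessionWithRetry(pool, rebuildPool) {
  try {
    return { pool, session: await pool.getSession() };
  } catch (err) {
    await pool.close();                 // discard the pool holding the dead connection
    const freshPool = rebuildPool();    // e.g. () => mysqlx.getClient(config, poolConfig)
    return { pool: freshPool, session: await freshPool.getSession() };
  }
}
```

The helper returns both the session and the (possibly new) pool, so callers keep using the rebuilt pool afterwards.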
Disclaimer: I'm the lead developer of the MySQL X DevAPI Connector for Node.js

How do I serialize transactions with multiple database queries using Sequelize?

My server keeps track of game instances. If there are no ongoing games when a user hits a certain endpoint, the server creates a new one. If the endpoint is hit twice at the same time, I want to make sure only one new game is created. I'm attempting to do this via Sequelize's transactions:
const t = await sequelize.transaction({
  isolationLevel: Sequelize.Transaction.ISOLATION_LEVELS.SERIALIZABLE,
});

let game = await Game.findOne({
  where: { status: { [Op.ne]: "COMPLETED" } },
  transaction: t,
});

if (game) {
  // ...
} else {
  game = await Game.create({}, { transaction: t });
  // ...
}

await t.commit();
Unfortunately, when this endpoint is hit twice at the same time, I get the following error: SequelizeDatabaseError: Deadlock found when trying to get lock; try restarting transaction.
I looked at possible solutions here and here, and I understand why my code throws the error, but I don't understand how to accomplish what I'm trying to do (or whether transactions are the correct tool to accomplish it). Any direction would be appreciated!
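The error message itself ("try restarting transaction") points at the usual remedy: under SERIALIZABLE isolation, when two transactions conflict the database kills one as the deadlock victim, and the application is expected to retry it. A hedged sketch of such a retry wrapper; `withDeadlockRetry` is a hypothetical name, `runTransaction` stands for any async function performing the whole transaction shown above, and matching on the message text is a simplification of real error inspection:

```javascript
// Sketch: retry a transactional operation when the database picks it
// as the deadlock victim. Retries the whole transaction from scratch.
async function withDeadlockRetry(runTransaction, maxAttempts = 3) {
  for (let attempt = 1; ; attempt++) {
    try {
      return await runTransaction();
    } catch (err) {
      // Simplified detection: match the MySQL deadlock message text.
      const isDeadlock = /Deadlock found when trying to get lock/.test(String(err.message));
      if (!isDeadlock || attempt >= maxAttempts) throw err;
      // otherwise loop around and restart the transaction
    }
  }
}
```

With this wrapper, when the endpoint is hit twice at once, one request creates the game and the other's retry finds the existing game on its second attempt.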

Is there a way for KNEX ERRORS to also log WHERE in the code they take place?

Some Knex errors log the file and line in which they occur, but many DO NOT. This makes debugging unnecessarily tedious. Is .catch((err) => { console.log(err) }) supposed to take care of this?
And why does the code retry around 4 times? I want it to try once and stop; there is absolutely no need for more attempts, ever, since the extra attempts only make further entries in the database.
Some Knex errors log the file and line in which they occur, but many DO NOT
Can you give us some examples of the queries that swallow the error?
I'm a heavy Knex user, and during development almost all errors show the file and line where they occurred, except in two kinds of situations:
1. A query inside a transaction that may complete early.
In this situation we have to customize knex's inner catch logic and do some injection, such as into Runner.prototype.query, identify the transactionEarlyCompletedError, and log more info (sql or bindings) in the catch clause.
2. A pool connection error,
such as the mysql error: Knex:Error Pool2 - Error: Pool.release(): Resource not member of pool.
This is a separate question that depends on your database environment and connection package.
The fact that code tries to repeat around 4 times
If your repeated code is written as a Promise chain, I don't think it will throw 4 times; it should blow up at the first throw.
query1
.then(query2)
.then(query3)
.then(query4)
.catch(err => {})
concurrently executed queries
If any promise in the array is rejected, or any promise returned by the mapper function is rejected, the returned promise is rejected as well.
Promise.map(queries, (query) => {
  return query.execute()
    .then()
    .catch((err) => {
      return err;
    });
}, { concurrency: 4 })
  .catch((err) => {
    // handle error here
  });
If you use try/catch and async/await,
it still would not repeat 4 times if you already know the error type. Meanwhile, if you don't know what error will be thrown, why not execute it only once to find out?
async function repeatInsert(retryTimes = 0) {
  try {
    await knex.insert().into();
  } catch (err) {
    // rethrow known errors instead of retrying
    if (err.isKnown) {
      throw err;
    }
    // otherwise retry up to 4 times
    if (retryTimes < 4) {
      return await repeatInsert(retryTimes + 1);
    }
  }
}

"ER_CON_COUNT_ERROR: Too many connections" Error with pool connections to mysql from node.js

I have about 20 node.js files that use the following configuration to access my db:
var pool = mysql.createPool({
  host: databaseHost,
  user: databaseUser,
  password: databasePassword,
  database: databaseName,
  multipleStatements: true
});
The functions all use the following pattern:
pool.getConnection(function (err, connection) {
  if (err) {
    callback(err);
  } else {
    // Use the connection
    var sql = "...sql statement...";
    var inserts = [...inserts...];
    connection.query(sql, inserts, function (error, results, fields) {
      // And done with the connection.
      connection.release();
      // Handle error after the release.
      if (error) {
        callback(error);
      } else {
        callback(null, results);
      }
    });
  }
});
I recently started getting the error:
"ER_CON_COUNT_ERROR: Too many connections"
on calls to any of my functions. I don't really understand the pool concept well enough. If each file is creating a pool, does that create a separate pool each time one of those functions is called?
I understand getConnection and releasing the connection; I just don't really get createPool.
I tried to log the following:
console.log(pool.config.connectionLimit); // passed in max size of the pool
console.log(pool._freeConnections.length); // number of free connections awaiting use
console.log(pool._allConnections.length); // number of connections currently created, including ones in use
console.log(pool._acquiringConnections.length); // number of connections in the process of being acquired
the result was:
10
0
0
0
I can increase the number of connections but would like to have some better understanding of why the problem exists.
If createPool is called inside your functions every time there is a query, then yes, that is the problem: each call creates another pool. Instead, have a separate file only for the mysql connection. Write a class (or module) that creates the pool once, and simply hand out connections from that pool. That way, anywhere in your project you can require this file, use the shared pool to query, and release connections back to it.
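The single-shared-pool advice boils down to module-level initialization: Node caches a module after its first require, so a pool created at module scope exists exactly once per process. A sketch with the pool factory injected so it runs without a database; `getPool` and `db.js` are illustrative names, and in real code the factory would be `() => mysql.createPool(config)`:

```javascript
// db.js-style singleton: the factory runs only on first access,
// so every caller shares one pool no matter how many files require this.
let pool = null;

function getPool(createPool) {
  if (pool === null) {
    pool = createPool(); // e.g. mysql.createPool({ host, user, ... })
  }
  return pool;
}
```

Every module that calls getPool receives the same object, so the connectionLimit of 10 applies to the whole application instead of 10 per file.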

How do pool.query() and pool.getConnection() differ with respect to connection.release()?

As I understand it, every pool.query() costs a connection, and it is automatically released when the query ends, based on this comment on a GitHub issue. But what about nested queries performed using pool.getConnection()?
pool.getConnection(function (err, connection) {
  // First query
  connection.query('query_1', function (error, results, fields) {
    // Second query
    connection.query('query_2', function (error, results, fields) {
      // Release the connection
      // DOES THIS ALSO RELEASE query_1?
      connection.release();
      if (error) throw error;
      // you can't use connection any longer here..
    });
  });
});
UPDATE
Here is my code using a transaction to perform nested queries.
const pool = require('../config/db');

function create(request, response) {
  try {
    pool.getConnection(function (err, con) {
      if (err) {
        con.release();
        throw err;
      }
      con.beginTransaction(function (t_err) {
        if (t_err) {
          con.rollback(function () {
            con.release();
            throw t_err;
          });
        }
        con.query(`insert record`, [data], function (i_err, result, fields) {
          if (i_err) {
            con.rollback(function () {
              con.release();
              throw i_err;
            });
          }
          // get inserted record id.
          const id = result.insertId;
          con.query(`update query`, [data, id], function (u_err, result, fields) {
            if (u_err) {
              con.rollback(function () {
                con.release();
                throw u_err;
              });
            }
            con.commit(function (c_err) {
              if (c_err) {
                con.release();
                throw c_err;
              }
            });
            con.release();
            if (err) throw err;
            response.send({ msg: 'Successful' });
          });
        });
      });
    });
  } catch (err) {
    throw err;
  }
}
I added a lot of defensive error catching and con.release() calls, since at this point I do not know how to properly release every active connection.
I also assumed that every con.query() inside pool.getConnection() costs a connection.
EDIT:
A connection is like a wire that connects your application to your database. Each time you connection.query() all you're doing is sending a message along that wire, you're not replacing the wire.
When you ask the pool for a connection, it will either give you a 'wire' it already has in place or create a new wire to the database. When you release() a pooled connection, the pool reclaims it, but keeps it in place for a while in case you need it again.
So a query is a message along the connection wire. You can send as many messages along as you want, it's only one wire.
Original Answer
pool.query(statement, callback) is essentially
const query = (statement, callback) => {
  pool.getConnection((err, conn) => {
    if (err) {
      callback(err);
    } else {
      conn.query(statement, (error, results, fields) => {
        conn.release();
        callback(error, results, fields);
      });
    }
  });
};
Ideally you shouldn't be worrying about connections as much as the number of round trips you're making. You can enable multiple statements in your pool config multipleStatements: true on construction of your pool and then take advantage of transactions.
BEGIN;
INSERT ...;
SELECT LAST_INSERT_ID() INTO #lastId;
UPDATE ...;
COMMIT;
It sounds like you are not closing the first query as quickly as you should.
Please show us the actual code. You do not need to hang onto the first query to get the insert id.
(After Update to Question:) I do not understand the need for "nesting". The code is linear (except for throwing errors):
BEGIN;
INSERT ...;
get insertid
UPDATE ...;
COMMIT;
If any step fails, throw an error. I see no need for two "connections". You are finished with the INSERT before starting the UPDATE, so I don't see any need for "nesting" SQL commands. And getting the insert id is a meta operation that does not involve a real SQL command.
I don't know Node.js, but looking at the code and the GitHub documentation, it is almost certain that pool.getConnection gets a connection from a connection pool and calls your callback with the connection object obtained and any error encountered while getting it from the pool. Within the callback body we may use the connection object any number of times, but once it is released it is no longer usable: it goes back to the pool, and I assume the connection object loses its reference to the underlying mysql connection (a somewhat lower-level connection object, perhaps). We have to release the connection object exactly once, and we must release it if we don't want to run out of free connections in the pool; otherwise subsequent calls to pool.getConnection won't find any connection in the pool's "free" list, because they have all moved to the "in use" list and are never given back.
In general, after getting a connection from the pool, it may be used for any number of operations/queries and is then released once to put it back on the pool's "free" list. That is how connection pooling generally works.
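The free/in-use bookkeeping described above can be sketched in a few lines of plain JavaScript. `TinyPool` is an illustrative toy, not the real mysql pool: acquiring moves a connection from the free list to the in-use set, releasing moves it back, and a connection that is never released eventually leaves the free list empty, which is exactly the pool-exhaustion failure mode discussed in these questions.

```javascript
// Toy pool illustrating the "free" and "in use" lists described above.
class TinyPool {
  constructor(size) {
    this.free = Array.from({ length: size }, (_, i) => ({ id: i }));
    this.inUse = new Set();
  }

  acquire() {
    const conn = this.free.pop();
    if (!conn) throw new Error('pool exhausted'); // nothing left to hand out
    this.inUse.add(conn);
    return conn;
  }

  release(conn) {
    if (!this.inUse.delete(conn)) throw new Error('not a member of this pool');
    this.free.push(conn); // back to the free list for reuse
  }
}
```

Releasing twice throws, mirroring the "Resource not member of pool" error mentioned earlier, and forgetting to release at all makes the next acquire fail once the free list is empty.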