How do pool.query() and pool.getConnection() differ on connection.release()? - mysql

As I understand it, every pool.query() will cost a connection, and the connection is automatically released when the query ends, based on this comment on a GitHub issue. But what about nested queries performed using pool.getConnection()?
pool.getConnection(function(err, connection) {
  // First query
  connection.query('query_1', function (error, results, fields) {
    // Second query
    connection.query('query_2', function (error, results, fields) {
      // Release the connection
      // DOES THIS ALSO RELEASE query_1?
      connection.release();
      if (error) throw error;
      // you can't use connection any longer here..
    });
  });
});
UPDATE
Here is my code using a transaction when performing nested queries.
const pool = require('../config/db');

function create(request, response) {
  try {
    pool.getConnection(function(err, con) {
      if (err) {
        con.release();
        throw err;
      }
      con.beginTransaction(function(t_err) {
        if (t_err) {
          con.rollback(function() {
            con.release();
            throw t_err;
          });
        }
        con.query(`insert record`, [data], function(i_err, result, fields) {
          if (i_err) {
            con.rollback(function() {
              con.release();
              throw i_err;
            });
          }
          // get inserted record id.
          const id = result.insertId;
          con.query(`update query`, [data, id], function(u_err, result, fields) {
            if (u_err) {
              con.rollback(function() {
                con.release();
                throw u_err;
              });
            }
            con.commit(function(c_err) {
              if (c_err) {
                con.release();
                throw c_err;
              }
            });
            con.release();
            if (err) throw err;
            response.send({ msg: 'Successful' });
          });
        });
      });
    });
  } catch (err) {
    throw err;
  }
}
I added a lot of defensive error catching and con.release() calls, since at this point I do not know how to properly release every connection that is active.
I also assume that every con.query() inside pool.getConnection() will cost a connection.

EDIT:
A connection is like a wire that connects your application to your database. Each time you call connection.query(), all you're doing is sending a message along that wire; you're not replacing the wire.
When you ask the pool for a connection, it will either give you a 'wire' it already has in place or create a new wire to the database. When you release() a pooled connection, the pool reclaims it, but keeps it in place for a while in case you need it again.
So a query is a message along the connection wire. You can send as many messages along as you want, it's only one wire.
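To make that concrete, here is a minimal sketch using the mysql package's callback API (connection details are placeholders): two queries travel down the same wire, and the wire is handed back exactly once.
const mysql = require('mysql');
const pool = mysql.createPool({ host: 'localhost', user: 'me', database: 'test' });

pool.getConnection(function (err, connection) {
  if (err) throw err; // no connection was acquired, so there is nothing to release
  // Two messages along the same wire; no extra connection is used here.
  connection.query('SELECT 1', function (err1) {
    connection.query('SELECT 2', function (err2) {
      // One wire, one release: the pool reclaims the connection for reuse.
      connection.release();
      if (err1 || err2) throw err1 || err2;
    });
  });
});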
Original Answer
pool.query(statement, callback) is essentially:
const query = (statement, callback) => {
  pool.getConnection((err, conn) => {
    if (err) {
      callback(err);
    } else {
      conn.query(statement, (error, results, fields) => {
        conn.release();
        callback(error, results, fields);
      });
    }
  });
};
Ideally you shouldn't be worrying about connections as much as the number of round trips you're making. You can enable multiple statements in your pool config (multipleStatements: true on construction of your pool) and then take advantage of transactions:
BEGIN;
INSERT ...;
SELECT LAST_INSERT_ID() INTO @lastId;
UPDATE ...;
COMMIT;
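As a sketch (table and column names here are made up; the mysql package is assumed), the whole transaction then goes out as a single pool.query() round trip:
const mysql = require('mysql');
const pool = mysql.createPool({
  host: 'localhost',       // placeholder connection details
  user: 'me',
  database: 'test',
  multipleStatements: true // required for this approach
});

const sql = `
  BEGIN;
  INSERT INTO records (data) VALUES (?);
  SELECT LAST_INSERT_ID() INTO @lastId;
  UPDATE other_table SET record_id = @lastId WHERE id = ?;
  COMMIT;
`;

// pool.query acquires a connection, runs all five statements, and releases it.
pool.query(sql, ['some data', 42], function (error, results) {
  if (error) throw error;
  // With multipleStatements, results is an array with one entry per statement.
});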

It sounds like you are not closing the first query as quickly as you should.
Please show us the actual code. You do not need to hang onto the query to get insertId.
(After Update to Question:) I do not understand the need for "nesting". The code is linear (except for throwing errors):
BEGIN;
INSERT ...;
get insertId
UPDATE ...;
COMMIT;
If any step fails, throw an error. I see no need for two "connections". You are finished with the INSERT before starting the UPDATE, so I don't see any need for "nesting" SQL commands. And getting insertId is a meta operation that does not involve a real SQL command.
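That linear flow is easiest to see with a promise API. Here is a sketch using the mysql2/promise package (an assumption; the question uses the callback-style mysql package), with hypothetical table names:
const mysql = require('mysql2/promise');

// One pool for the whole app; connection details are placeholders.
const pool = mysql.createPool({ host: 'localhost', user: 'me', database: 'test' });

async function create(data, otherId) {
  const con = await pool.getConnection();
  try {
    await con.beginTransaction(); // BEGIN
    const [result] = await con.query('INSERT INTO records (data) VALUES (?)', [data]);
    const id = result.insertId;   // get insertId
    await con.query('UPDATE other_table SET record_id = ? WHERE id = ?', [id, otherId]);
    await con.commit();           // COMMIT
  } catch (err) {
    await con.rollback();         // any failed step lands here
    throw err;
  } finally {
    con.release();                // released exactly once, on success or failure
  }
}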

I don't know Node.js, but looking at the code and the GitHub documentation, it is almost certain that pool.getConnection gets a connection from a connection pool and calls your function with the connection object obtained (and any error encountered while getting it). Within the function body we may use the connection object any number of times, but once it is released it is no longer usable, because it goes back to the pool; I assume the released object no longer holds a reference to the underlying MySQL connection (a somewhat lower-level connection object, perhaps). We have to release the connection object exactly once, and we must release it if we don't want to run out of free connections in the pool; otherwise subsequent calls to pool.getConnection won't find any connection in the pool's "free" list, because they have all been moved to the "in use" list and never given back.
Generally, after getting a connection from the connection pool, it may be used for any number of operations/queries, and it is released once to give it back to the pool's "free" list. That is how connection pooling generally works.
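A small sketch of that lifecycle (using the mysql package's pool): release() moves the connection back to the "free" list rather than closing it, so a later pool.getConnection() can hand you the very same underlying connection.
pool.getConnection(function (err, conn1) {
  if (err) throw err;
  const firstThreadId = conn1.threadId; // id of the underlying MySQL connection
  conn1.release(); // back to the "free" list; the socket stays open

  pool.getConnection(function (err2, conn2) {
    if (err2) throw err2;
    // Often the very same underlying connection, reused from the free list.
    console.log(conn2.threadId === firstThreadId);
    conn2.release(); // release exactly once per acquisition
  });
});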

Related

Is there a way for KNEX ERRORS to also log WHERE in the code they take place?

Some Knex errors log the file and line in which they occur, but many DO NOT. This makes debugging unnecessarily tedious. Is .catch((err) => { console.log(err) }) supposed to take care of this?
And what about the fact that the code tries to repeat around 4 times? (I want it to try once and stop; there is absolutely no need for more attempts, ever. It only messes things up when further entries are made to the database.)
Some Knex errors log the file and line in which they occur, but many DO NOT
Can you give us some of your query examples which silence the error?
I'm a heavy Knex user. During my development, almost all errors show the file and line where they occurred, except in two kinds of situations:
1. A query in a transaction which may complete early.
In this situation, we have to customize Knex's inner catch logic and do some injection, such as overriding Runner.prototype.query, identifying the transaction-early-completed error, and logging more info (the sql and bindings) in the catch clause.
2. A pool connection error,
such as the mysql error: Knex:Error Pool2 - Error: Pool.release(): Resource not member of pool.
This is another question, which depends on your database environment and connection package.
The fact that code tries to repeat around 4 times
If your repeated code is written in a Promise chain, I don't think it will throw 4 times; it should blow up at the first throw:
query1
  .then(query2)
  .then(query3)
  .then(query4)
  .catch(err => {});
concurrently executed queries
If any promise in the array is rejected, or any promise returned by the mapper function is rejected, the returned promise is rejected as well.
Promise.map(queries, (query) => {
  return query.execute()
    .then()
    .catch((err) => {
      return err;
    });
}, { concurrency: 4 })
  .catch((err) => {
    // handle error here
  });
if you use try/catch and async/await
It still would not repeat 4 times if you already know the error type. Meanwhile, if you don't know what error will be thrown, why not execute it only once to find out?
async function repeatInsert(retryTimes = 0) {
  try {
    await knex.insert().into();
  } catch (err) {
    // rethrow known errors
    if (err.isKnown) {
      throw err;
    }
    // otherwise retry, up to 4 attempts in total
    if (retryTimes < 4) {
      return await repeatInsert(retryTimes + 1);
    }
  }
}

"ER_CON_COUNT_ERROR: Too many connections" Error with pool connections to mysql from node.js

I have about 20 node.js files that use the following configuration to access my db:
var pool = mysql.createPool({
  host: databaseHost,
  user: databaseUser,
  password: databasePassword,
  database: databaseName,
  multipleStatements: true
});
The functions all use the following pattern:
pool.getConnection(function (err, connection) {
  if (err) {
    callback(err);
  } else {
    // Use the connection
    var sql = "...sql statement...";
    var inserts = [...inserts...];
    connection.query(sql, inserts, function (error, results, fields) {
      // And done with the connection.
      connection.release();
      // Handle error after the release.
      if (error) {
        callback(error);
      } else {
        callback(null, results);
      }
    });
  }
});
I recently started getting the error:
"ER_CON_COUNT_ERROR: Too many connections"
on calls to any of my functions. I don't really understand the pool concept well enough. If each function is creating a pool, does that create a separate pool each time that function is called?
I understand getConnection and releasing the connection. I just don't really get createPool.
I tried to log the following:
console.log(pool.config.connectionLimit); // passed in max size of the pool
console.log(pool._freeConnections.length); // number of free connections awaiting use
console.log(pool._allConnections.length); // number of connections currently created, including ones in use
console.log(pool._acquiringConnections.length); // number of connections in the process of being acquired
the result was:
10
0
0
0
I can increase the number of connections but would like to have some better understanding of why the problem exists.
If createPool is called inside your functions every time there is a query, then yes, that creates a new pool on every call, and that is the problem! Instead, have a separate file only for the MySQL connection. Write a class that creates the pool once, and whose constructor simply returns connections from that pool. That way, if you simply require this file anywhere in your project and create an object of the class, you can use it to query and release!
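A minimal sketch of that idea (file names are placeholders), exporting one shared pool instead of a class:
// db.js - the pool is created once, when this module is first required
const mysql = require('mysql');

module.exports = mysql.createPool({
  connectionLimit: 10,      // the default
  host: databaseHost,       // as in the question; load these from your config
  user: databaseUser,
  password: databasePassword,
  database: databaseName,
  multipleStatements: true
});

// anywhere else in the project - every file shares the same pool
const pool = require('./db');

pool.query('...sql statement...', inserts, function (error, results, fields) {
  // the connection is acquired and released for you
});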

knexjs promise release pool connection

I currently use Knex.js (knexjs.org) with promises instead of regular callbacks, and a connection pool for SQL queries. At first it ran smoothly, but now I usually face pool connection errors. The code is something like this:
knex('user_detail')
  .select('id', 'full_name', 'phone', 'email')
  .where('id', id_user)
  .then((result) => {
    resolve(result);
  })
  .catch((error) => {
    reject(error);
  });
But now I usually get a connection timeout error and a pool connection error from it. My first thought about why it errors was that I hadn't released the connection, but I have code like this:
knex('user_detail')
  .select('id', 'full_name', 'phone', 'email')
  .where('id', id_user)
  .then((result) => {
    resolve(result);
  })
  .catch((error) => {
    reject(error);
  })
  .finally(() => {
    knex.destroy();
  });
It works on the first try, but fails on the second try with the error There is no pool defined on the current client, and sometimes the error The pool is probably full.
Can someone explain to me what's going on and how I can solve it? Thanks.
There is not enough information in question to be able to tell why you are running out of pool connections in the first place.
The way you are calling some resolve() and reject() functions gives a hunch that you are using promises inefficiently or completely wrong...
If you add a complete code example showing how you are able to get the the pool is probably full error, I can edit the answer and help more. For example, by accidentally creating multiple transactions which are never resolved, the pool will fill up.
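For instance, a handler like this hypothetical one never resolves its transaction, so each call permanently takes a connection out of the pool:
// Anti-pattern sketch: the handler neither returns a promise chain
// nor calls trx.commit() / trx.rollback(), so the connection leaks.
knex.transaction(trx => {
  trx('user_detail').select('id').where('id', id_user); // promise dropped
});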
In the second code example you are calling knex.destroy(), which doesn't destroy a single pool connection; it completely destroys the knex instance and the pool it is using.
So after knex.destroy() you won't be able to use that knex instance anymore, and you would have to create a completely new instance by passing the database connection configuration again.
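Usually you don't need resolve(), reject(), or knex.destroy() at all. A Knex query builder is already a promise, so a sketch of the simpler shape (wrapping the question's query in a hypothetical helper) is just:
function getUserDetail(id_user) {
  // Returning the query itself returns a promise; the pool connection is
  // acquired and released automatically when the query settles.
  return knex('user_detail')
    .select('id', 'full_name', 'phone', 'email')
    .where('id', id_user);
}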
If several statements need to succeed or fail together, use a transaction. This way you don't need to handle the connection either: it automatically commits and releases the connection back to the pool on return, and rolls back on a thrown error:
const resultsAfterTransactionIsComplete = await knex.transaction(async trx => {
  const result = await trx('insert-table').insert(req.list).returning('*');
  // insert logs in the same transaction
  const logEntries = result.map(o => ({ event_id: 1, resource: o.id }));
  await trx('log-table').insert(logEntries);
  // returning from the transaction handler automatically commits and frees the connection back to the pool
  return result;
});

socketstream async call to mysql within rpc actions

First, I need to tell you that I am very new to the wonders of Node.js, SocketStream, AngularJS and JavaScript in general. I come from a Java background, and this might explain my ignorance of the correct way of doing things asynchronously.
To toy around with things I installed the ss-angular-demo from americanyak. My problem is now that the RPC seems to be a synchronous interface, while my call to the MySQL database has an asynchronous interface. How can I return the database results from a call to the RPC?
Here is what I did so far with socketstream 0.3:
In app.js I successfully tell ss to allow my MySQL database connection to be accessed by putting ss.api.add('coolStore', mysqlConn); in there at the right place (as explained in the SocketStream docs). I use the mysql npm package, so I can call MySQL within the RPC:
server/rpc/coolRpc.js
exports.actions = function (req, res, ss) {
  // use session middleware
  req.use('session');
  return {
    get: function (threshold) {
      var sql = "SELECT cool.id, cool.score, cool.data FROM cool WHERE cool.score > " + threshold;
      if (!ss.coolStore) {
        console.log("connecting to mysql cool data store");
        ss.coolStore = ss.coolStore.connect();
      }
      ss.coolStore.query(sql, function (err, rows, fields) {
        if (err) {
          console.log("error fetching stuff", err);
        } else {
          console.log("first row = " + rows[0].id);
        }
      });
      var db_rows = ???
      return res(null, db_rows || []);
    }
  };
};
The console logs the id of my database entry, as expected. However, I am clueless as to how I can make the RPC's return statement return the rows of my query. What is the right way of addressing this sort of problem?
Thanks for your help. Please be friendly with me, because this is also my first question on Stack Overflow.
It's not synchronous. When your results are ready, you can send them back:
exports.actions = function (req, res, ss) {
  // use session middleware
  req.use('session');
  return {
    get: function (threshold) {
      ...
      ss.coolStore.query(sql, function (err, rows, fields) {
        res(err, rows || []);
      });
    }
  };
};
You need to make sure that you always call res(...) from an RPC function, even when an error occurs, otherwise you might get dangling requests (where the client code keeps waiting for a response that's never generated). In the code above, the error is forwarded to the client so it can be handled there.

node.js response only one html request

I use Node.js and the mysql module to write a simple select statement.
The problem is that it can only respond to one request; subsequent responses will be empty.
When I use a browser to load the page the first time, it returns a complete result, but the browser keeps loading. What is happening?
Code:
var server = http.createServer(function (request, response) {
  response.writeHead(200, {"Content-Type": "text/plain"});
  client.query('SELECT * FROM ' + tbl,
    function selectDb(err, results, fields) {
      if (err) {
        throw err;
      }
      for (var i in results) {
        var result = results[i];
        response.write(result['CUSTOMERNAME']); // write each customer name to the browser
      }
      response.end("END RESULT");
      client.end(); // this closes the underlying mysql connection
    }
  );
});
According to the node-mysql docs (which I assume you are using) found here,
client.end();
Closes the mysql connection.
When you attempt another request, there is no open connection, and node-mysql doesn't do any connection-pool handling or auto-reconnect; it's all left up to you.
If you don't mind keeping a single connection open for the lifetime of the app (not the best design), you can just move that client.end() outside your connection handler.
Otherwise, create a little method that checks for an open connection, or use a connection pool; see this post for more info.
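A sketch of the pooled version (connection details, port, and table name are placeholders; the column name is copied from the question): the pool opens and reuses connections as needed, so nothing is permanently closed between requests.
var http = require('http');
var mysql = require('mysql');

var pool = mysql.createPool({ host: 'localhost', user: 'me', database: 'test' });
var tbl = 'customers'; // placeholder table name

var server = http.createServer(function (request, response) {
  response.writeHead(200, {"Content-Type": "text/plain"});
  // pool.query acquires a connection, runs the statement, and releases it.
  pool.query('SELECT * FROM ' + tbl, function (err, results) {
    if (err) {
      response.end("ERROR");
      return;
    }
    for (var i = 0; i < results.length; i++) {
      response.write(results[i]['CUSTOMERNAME']);
    }
    response.end("END RESULT"); // no client.end(): the pool stays alive for the next request
  });
});

server.listen(8080);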