Node.js MySQL stored procedure call in a for loop

So, I'm currently using the mysql npm package (https://www.npmjs.com/package/mysql). I have a requirement where I'll have to call a stored procedure multiple times, with the caveat that each subsequent call depends on the result of the previous one. Pseudo code is as follows:
let mysql = require("mysql");

let mysqlPoolConnection = mysql.createPool({
    connectionLimit: 20,
    host: '0.0.0.0',
    port: '3306',
    user: 'user1',
    password: 'pwd',
    database: 'mysql_db'
});

for (let index = 0; index < params.length; index++) {
    let sp_ProcedureCall = "CALL sp_StoredProcedure(?, ?, ?, ?)";
    let sp_ProcedureParams = [params[index].firstParam, params[index].secondParam, params[index].thirdParam, params[index].fourthParam];

    // This is where the issue is. I'd like to call the stored procedure once and,
    // once I get a response from it, make the subsequent calls. Basically, depending
    // on the previous stored procedure result, I decide whether I continue with
    // subsequent calls or not.
    mysqlPoolConnection.query(sp_ProcedureCall, sp_ProcedureParams, (errorObj, responseObj, fieldsObj) => {
    });
}

NodeJS is asynchronous, meaning your for-loop iterates without waiting for the result of the call. You need to control the flow of your program if you want to "wait" for previous results.
Look at a library like async to enable that type of control flow. From your description, eachSeries() might be a good fit: it runs a function for each value in an array, one at a time. (Or reduce, depending on what you need; there are other options too.)
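For illustration, a sketch of what that could look like with async.eachSeries (untested; the convention for stopping early is to pass an error, or a sentinel value, to the iteratee's callback):

const async = require("async");

async.eachSeries(params, function (param, done) {
    mysqlPoolConnection.query(
        "CALL sp_StoredProcedure(?, ?, ?, ?)",
        [param.firstParam, param.secondParam, param.thirdParam, param.fourthParam],
        function (err, results) {
            if (err) return done(err); // a failed call stops the series
            // Inspect `results` here; call done(new Error("stop")) to skip
            // the remaining calls, or done() to continue with the next one.
            done();
        }
    );
}, function (err) {
    // Runs once: after all calls succeed, or right after the first error.
});

Each stored procedure call only starts after the previous one's callback has fired, which gives you the "wait for the previous result" behaviour the plain for-loop can't.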

Related

OpenResty / Lua - Maintain mysql database connection in worker threads

I have a simple module I wrote called "firewall.lua" that has a function
firewall.check_ip(ip), which connects to the local MySQL server, performs a query, and returns the result. The function gets called from within location / blocks in nginx sites via access_by_lua_block. The module gets initialized by init_worker_by_lua with firewall.init().
Everything works as expected.
What I'd like to do, however, is maintain the database connection on the worker thread(s) so that I don't have to re-connect every time the function is called, but instead re-use the existing connection established by the worker during initialization.
I'm not quite sure how to do this, or if it's actually doable in OpenResty/Lua. I tried initializing the database connection variables outside of the function, to give them scope within the module instead of the function, and I got various API errors that did not point me in the right direction.
Thank you!
This is possible using the OpenResty cosocket API, which gives you the ability to use a pool of non-blocking connections. There's already one MySQL driver (lua-resty-mysql) which uses the cosocket API. Since you didn't provide a code sample, I'm assuming you're not using it.
Example of a connection and query using lua-resty-mysql (untested; note the set_keepalive call at the end, which returns the connection to the cosocket pool instead of closing it, as per the lua-resty-mysql docs):
access_by_lua_block {
    local mysql = require "resty.mysql"

    local db, err = mysql:new()
    db:set_timeout(1000) -- 1 second

    local ok, err, errcode, sqlstate = db:connect{
        host = "127.0.0.1",
        port = 3306,
        database = "my_db",
        user = "my_user",
        password = "my_pwd",
    }
    if not ok then
        ngx.say("Connection to MySQL failed: ", err)
        return
    end

    local result, err, errcode, sqlstate = db:query("select ...")
    if not result then
        ngx.say("MySQL error: ", err, ".")
        return
    end

    -- Return the connection to the cosocket keepalive pool instead of
    -- closing it, so later requests on this worker can reuse it
    -- (10 s max idle time, up to 50 pooled connections).
    local ok, err = db:set_keepalive(10000, 50)
    if not ok then
        ngx.say("Failed to set keepalive: ", err)
    end
}
If, for example, you want to control the pool name or use other options, you can pass additional parameters to connect:
...
local ok, err, errcode, sqlstate = db:connect{
    host = "127.0.0.1",
    port = 3306,
    database = "my_db",
    user = "my_user",
    password = "my_pwd",
    pool = "my_connection_pool",
}
...
You can find more information in the official docs:
lua-resty-mysql: https://github.com/openresty/lua-resty-mysql
Cosocket API: https://openresty-reference.readthedocs.io/en/latest/Lua_Nginx_API/#ngxsockettcp

Handling of RabbitMQ messages via NestJS microservice issue

I'm currently having a problem that I'm unable to solve. It only happens in extreme cases: when the server goes offline for some reason, messages accumulate (100k or more) and then need to be processed all at the same time. Even though I'm planning for this never to happen, I would like to have a backup plan for it and more control over this issue.
I'm running a NestJS microservice against a RabbitMQ broker to get messages that arrive from IoT devices and insert them into a MySQL database.
Every message needs a little conversion/translation operation before the insert. This conversion is based on a single-row query done against a table on the same SQL server.
The order is the following:
read the message;
select 1 row from the database (the table has a few thousand rows);
insert 1 row into the database.
Now, I'm facing this error:
(node:1129233) UnhandledPromiseRejectionWarning: SequelizeConnectionAcquireTimeoutError: Operation timeout
at ConnectionManager.getConnection (/home/nunovivas/NestJSProjects/integrador/node_modules/sequelize/lib/dialects/abstract/connection-manager.js:288:48)
at runNextTicks (internal/process/task_queues.js:60:5)
at listOnTimeout (internal/timers.js:526:9)
at processTimers (internal/timers.js:500:7)
at /home/nunovivas/NestJSProjects/integrador/node_modules/sequelize/lib/sequelize.js:613:26
at MySQLQueryInterface.select (/home/nunovivas/NestJSProjects/integrador/node_modules/sequelize/lib/dialects/abstract/query-interface.js:953:12)
at Function.findAll (/home/nunovivas/NestJSProjects/integrador/node_modules/sequelize/lib/model.js:1752:21)
at Function.findOne (/home/nunovivas/NestJSProjects/integrador/node_modules/sequelize/lib/model.js:1916:12)
node_modules/source-map-support/source-map-support.js:516
(node:1129233) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). To terminate the node process on unhandled promise rejection, use the CLI flag `--unhandled-rejections=strict` (see https://nodejs.org/api/cli.html#cli_unhandled_rejections_mode). (rejection id: 1349)
I think that the promise rejection is inside the sequelize module.
This is my sequelize configuration:
useFactory: async (ConfigService: ConfigService) => ({
    dialect: 'mysql',
    host: 'someserver',
    port: 3306,
    username: 'dede',
    password: 'dudu!',
    database: 'dada',
    autoLoadModels: true,
    pool: { max: 5, min: 0, adquire: 1800000, idle: 5000 },
    synchronize: true,
    logQueryParameters: true,
}),
This is part of my message service:
@RabbitRPC({
    exchange: 'BACKEND_MAINEXCHANGE',
    routingKey: 'Facility_DeviceReadings',
    queue: 'Facility_DeviceReadings',
})
public async rpcHandlerDeviceReadings(mensagem: ReadingDevicesPerFacility) {
    const schemavalid = mensagem;
    this.mylogger.log(
        'Received message from BACKEND_MAINEXCHANGE - listening to the queue Facility_DeviceReadings : ' +
            ' was registered locally on ' +
            schemavalid.DateTimeRegistered,
        MessagingService.name,
        'rpcHandlerDeviceReadings',
    );
    if (schemavalid) {
        try {
            let finalschema = new CreateReadingDevicesDto();
            if (element.Slot > 0) {
                const result = this.readingTransService
                    .findOneByPlcId(element.deviceId, element.Slot)
                    .then((message) => {
                        if (!message) {
                            throw new NotFoundException('Message with ID not found');
                        } else {
                            finalschema.deviceId = message.deviceId;
                            finalschema.Slot = message.Slot2;
                            if (this.isNumeric(element.ReadingValue)) {
                                finalschema.ReadingValue = element.ReadingValue;
                                finalschema.DateTimeRegistered =
                                    schemavalid.DateTimeRegistered;
                                this.readingDeviceService
                                    .create(finalschema)
                                    .then((message) => {
                                        this.mylogger.debug(
                                            'Saved',
                                            MessagingService.name,
                                            'rpcHandlerDeviceReadings',
                                        );
                                        return 42;
                                    });
                            } else {
                                this.mylogger.error(
                                    'error',
                                    MessagingService.name,
                                    'rpcHandlerDeviceReadings',
                                );
                            }
                            return message;
                        }
                    });
The problem seems to be that this RPC keeps going against RabbitMQ and reading/consuming messages (8 per millisecond) before SQL has a chance to reply, forcing Sequelize into a state that it can't handle anymore and thus throwing the above error.
I have tried tweaking the Sequelize config, but with no good outcome.
Is there any way to force the RPC to just handle the next message after the previous one is processed?
Would love if someone could steer me in the right direction since this could eventually become a breaking issue.
Thanks in advance for any input you can give me.
It looks to me like your Sequelize connection pool options need some tweaking.
You have
pool: { max: 5, min: 0, adquire: 1800000, idle: 5000 }
adquire isn't a thing. Maybe acquire? Half an hour (1.8 million milliseconds) is a really long time to wait for a connection. Shorten it? acquire: 300000 will give you five minutes. A big production app such as yours probably should always keep one or two connections open. Increase min to 1 or 2.
A modest maximum number of connections is good as long as each operation grabs a connection from the pool, uses it, and releases it. If your operation grabs a connection and then awaits something external, you'll need more connections.
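Putting those suggestions together, the pool options might look something like this (the exact numbers are judgment calls for your workload):

pool: { max: 5, min: 2, acquire: 300000, idle: 5000 }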
If it's possible to get your program to read in a whole bunch of messages (at least 10) at a time, then put them into your database in one go with bulkCreate(), you'll speed things up. A lot. That's because inserts are cheap, but the commit operations after those inserts aren't so cheap. So, doing multiple inserts within a single transaction, then committing them all at once, can make things dramatically faster. Read about autocommit for more information on this.
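To make the commit cost concrete, here is a sketch using a hypothetical ReadingDevice Sequelize model (a stand-in; in the question the inserts actually go through readingDeviceService):

// Many create() calls: each row runs as its own statement and is
// committed separately (slow).
for (const m of incomingMessages) {
    await ReadingDevice.create(m);
}

// One bulkCreate() call: one multi-row INSERT, one commit (much faster).
await ReadingDevice.bulkCreate(incomingMessages);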
Writing your service to chow down on a big message backlog quickly will make errors like the one you showed us less likely.
Edit: To use .bulkCreate() you need to accumulate multiple incoming messages. Try something like this.
Create an array for your received CreateReadingDevicesDto messages: let incomingMessages = [].
Instead of using .create() to put each new message into your database as you finish receiving and validating it, push it onto your array: incomingMessages.push(finalschema).
Set up a JavaScript interval to take the data from the array and put it into your database with .bulkCreate(). This one does that every 500 ms:
// an arrow function keeps the enclosing `this`, so the service is reachable
setInterval(() => {
    if (incomingMessages.length > 0) {
        /* create all the items in the array in one go */
        this.readingDeviceService.bulkCreate(incomingMessages);
        /* empty out the array */
        incomingMessages = [];
    }
}, 500);
At the cost of somewhere between 0 and 500ms extra latency, this batches up your messages and will let you process your backlog faster.
I haven't debugged this, and it's probably a little more crude than you want in production code. But I have used similar techniques to good effect.

Passing variables through Node MySQL function

I've created a simple function that runs a query and fetches a field value from a MySQL database in Node. I'm using the normal 'mysql' library. For some reason, though, I can't pass the resulting field value out of the function. What am I doing wrong?
var mysql = require('mysql');

var connection = mysql.createConnection({
    host     : 'localhost',
    user     : 'root',
    password : '',
    database : 'mydb'
});
// This is my function to fetch the field
function getinfofromdb(inputID) {
    connection.query('SELECT * FROM `mytable` WHERE ? ORDER BY `ID` DESC LIMIT 1;', { identifierID: inputID }, function (err, rows, fields) {
        if (rows[0]['filename'].length > 0) {
            console.log(rows[0]['filename']); // This works fine! I just can't pass it as response. :(
            response = rows[0]['filename'];
        } else {
            response = 'none found';
        }
    });
    return response;
}

// However, I always get that 'response' is undefined
console.log(getinfofromdb(1));
Furthermore, returning from the inner function also yields nothing.
There is no problem with the query, as I can console.log it just fine, but it just doesn't return it to the function.
if (rows[0]['filename']) {
    console.log(rows[0]['filename']); // This prints out just fine
    return rows[0]['filename']; // Doesn't return anything
}
Update: Nothing I'm trying is yielding anything. Would someone please show me the proper way of writing a function in Node.js using the mysql library (https://github.com/mysqljs/mysql) that receives an input, runs a simple query using that input, and returns the value of a field from the response row? I'm going nuts.
I found the solution -- this is impossible to do.
Coming from a PHP background, nothing I'm used to writing is asynchronous.
This Node.js MySQL library runs its queries asynchronously, and therefore the callback function cannot return a value to the enclosing function; it can only act on the result, e.g. print it with console.log.
I guess I'm gonna revert to PHP.
How to return value from an asynchronous callback function?
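For anyone landing here: the usual pattern, described in the linked question, is to hand the function a callback instead of trying to return. A minimal sketch against the question's getinfofromdb:

// Accept a callback and invoke it once the query finishes.
function getinfofromdb(inputID, callback) {
    connection.query(
        'SELECT * FROM `mytable` WHERE ? ORDER BY `ID` DESC LIMIT 1;',
        { identifierID: inputID },
        function (err, rows, fields) {
            if (err) return callback(err);
            if (rows.length && rows[0]['filename']) {
                callback(null, rows[0]['filename']);
            } else {
                callback(null, 'none found');
            }
        }
    );
}

// Usage: the result arrives in the callback, not as a return value.
getinfofromdb(1, function (err, filename) {
    if (err) throw err;
    console.log(filename);
});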

NodeJS + mysql - automatically closing pool connections?

I wish to use connection pooling in Node.js with a MySQL database. According to the docs, there are two ways to do that: either I explicitly get a connection from the pool, use it, and release it:
var pool = require('mysql').createPool(opts);

pool.getConnection(function(err, conn) {
    conn.query('select 1+1', function(err, res) {
        conn.release();
    });
});
Or I can use it like this:
var mysql = require('mysql');
var pool = mysql.createPool(opts);

pool.query('select 1+1', function(err, rows, fields) {
    if (err) throw err;
    console.log('The solution is: ', rows[0].solution);
});
If I use the second option, does that mean that connections are automatically pulled from the pool, used, and released? And if so, is there any reason to use the first approach?
Yes, the second one means that the pool is responsible for getting the next free connection, running the query on it, and then releasing it again. You use this for "one shot" queries that have no dependencies.
You use the first one if you want to do multiple queries that depend on each other. A connection holds certain state: locks, transactions, encoding, timezone, variables, and so on.
Here is an example that changes the connection's timezone:
pool.getConnection(function(err, conn) {
    function setTimezone() {
        // set the timezone for this connection
        conn.query("SET time_zone='+02:00'", queryData);
    }
    function queryData() {
        // run the actual query, then restore the timezone
        conn.query( /* some query */, restoreTimezoneToUTC);
    }
    function restoreTimezoneToUTC() {
        // restore the timezone to UTC (or whatever you use as default),
        // otherwise this one connection would use +02 for future requests
        // if it is reused by a future `getConnection`
        conn.query("SET time_zone='+00:00'", releaseQuery);
    }
    function releaseQuery() {
        // return the connection back to the pool
        conn.release();
    }
    setTimezone();
});
In case anyone else stumbles upon this:
When you use pool.query you are in fact calling a shortcut which does what the first example does.
From the readme:
This is a shortcut for the pool.getConnection() -> connection.query() -> connection.release() code flow. Using pool.getConnection() is useful to share connection state for subsequent queries. This is because two calls to pool.query() may use two different connections and run in parallel.
So yes, the second one also calls connection.release(); you just don't need to type it.
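In other words, pool.query behaves roughly like this (a sketch of the flow the readme describes, not the library's actual source):

function poolQuery(pool, sql, values, callback) {
    pool.getConnection(function (err, conn) {
        if (err) return callback(err);
        conn.query(sql, values, function (err, rows, fields) {
            // released whether the query succeeded or failed,
            // so the connection always goes back to the pool
            conn.release();
            callback(err, rows, fields);
        });
    });
}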

mysql - node - Row inserted and queryable via server connection lost on DB restart

I ran into an issue testing today that occurred during or after an insert via a connection on my node server. The code where the insert is performed looks something like this:
// ...
username = esc(username);
firstname = esc(firstname);
lastname = esc(lastname);

var values = [username, firstname, lastname].join(',');
var statement = 'INSERT INTO User(Username,FirstName,LastName) VALUES({0});\n' +
    'SELECT FirstName, LastName, Username, Id, IsActive FROM User WHERE Username={1};';
statement = merge(statement, [values, username]);

conn.query(statement, function(e, rows, fields) {
    e ? function() {
        res.status(400);
        var err = new Error;
        err.name = 'Bad request';
        err.message = 'A problem occurred during sign-up.';
        err.details = e;
        res.json(err);
    }() : function() {
        res.json(rows[1]);
    }();
});
A quick note on esc() and merge(): these are simply util functions that help prepare the database statement.
The above code completed successfully, i.e. the response was a 200 with the newly inserted user row in the body. The inserted row was queryable via the same connection throughout the day. It was only this afternoon, when running the following generic query as root via shell, that I noticed the row was missing.
SELECT Id, FirstName, LastName FROM User;
So at that point I restarted the database and the node server. Unfortunately, now it would appear the row is gone entirely, along with any reliable path to troubleshoot.
Here are some details of interest about my server setup. As of yet, I have no idea how (if at all) any of these could be suspect.
Uses only a single connection, as opposed to a connection pool (for now)
multipleStatements=true in the connection config (obviously the above snippet makes use of this)
SET autocommit = 0; START TRANSACTION; COMMIT; used elsewhere in the codebase to control rollback
A poor man's keep-alive every 30 seconds to avoid the connection timing out: SELECT 1;
I've been reading up all evening and am running out of ideas. Any suggestions? Is this likely an issue of uncommitted data? If so, is there a reliable way to debug this? Better yet, is there any way to prevent it? What could be the cause? And finally, if in debugging my server I find data in this state, is there a way to force commit at least so that I don't lose my changes?