Trigger code execution based on query results on a Node server - MySQL

In a Node server I run a query that can produce three different results:
1. return a row of data with status='A'
2. return a row of data with status='B'
3. return no rows
Based on what the query returns, I have to perform a different action afterwards, e.g. in scenario 1 update that record with status='B', in scenario 3 insert a new record, and so on.
I am looking for the correct syntax to check the three conditions.
My code is the following:
con1.query("<my query goes here>", function (err, result, fields) {
  if (err) throw err;
  console.log(result);
});
If I want to check scenario 3, is it correct to check if (result.length == 0), while for scenarios 1 and 2 to check if (result[0].status == 'A') or if (result[0].status == 'B')?
Will this always evaluate correctly? I am coming from PHP, so I want to understand whether my logic is correct:
con1.query("<my query goes here>", function (err, result, fields) {
  if (err) throw err;
  if (result.length == 0) {
    // do something
  } else if (result[0].status == 'A') {
    // do something else
  } else if (result[0].status == 'B') {
    // perform another action
  }
});
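For reference, a minimal sketch of how the three branches could drive the follow-up statements described above (the table name my_table and the id column are placeholders, not from the original post):
con1.query("<my query goes here>", function (err, result, fields) {
  if (err) throw err;
  if (result.length == 0) {
    // scenario 3: no rows yet, so insert a new record
    con1.query("INSERT INTO my_table (status) VALUES ('A')", function (err2) {
      if (err2) throw err2;
    });
  } else if (result[0].status == 'A') {
    // scenario 1: update the existing record to status='B'
    con1.query("UPDATE my_table SET status = 'B' WHERE id = ?", [result[0].id], function (err2) {
      if (err2) throw err2;
    });
  } else if (result[0].status == 'B') {
    // scenario 2: perform another action
  }
});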

Related

How to optimize a sequential insert on mysql

I'm trying to implement an HTTP event streaming server using MySQL, where users are able to append an event to a stream (a MySQL table) and also define the expected sequence number of the event.
The logic is fairly simple:
1. Open a transaction
2. Get the next sequence number in the table
3. Verify that the next sequence number matches the expected one (if supplied)
4. Insert into the database
Here's my code:
public async append(
  data: any = {},
  expectedSeq?: number
): Promise<void> {
  let published_at = $date.create();
  try {
    await $mysql.transaction(async trx => {
      let max = await trx(this.table)
        .max({ seq: "seq" })
        .first();
      if (!max) {
        throw $error.InternalError(`unexpected mysql response`);
      }
      let next = (max.seq || 0) + 1;
      if (expectedSeq && expectedSeq !== next) {
        throw $error.ExpectationFailed(
          `expected seq does not match current seq`
        );
      }
      await trx(this.table).insert({
        published_at,
        seq: next,
        data: $json.stringify(data),
      });
    });
  } catch (err) {
    if (err.code === "ER_DUP_ENTRY") {
      return this.append(data, expectedSeq);
    }
    throw err;
  }
}
My problem is that this is extremely slow, since there are race conditions between parallel requests appending to the same stream: on my laptop, inserts per second on one stream went from ~1k to ~75.
Any pointers/suggestions on how to optimize this logic?
CONCLUSION
After considering the comments, I decided to go with auto-increment and reset the auto_increment only if there's an error. It yields around the same writes/sec with expectedSeq, but a much higher rate if ordering is not required.
Here's the solution:
public async append(data: any = {}, expectedSeq?: number): Promise<Event> {
  if (!$validator.validate(data, this.schema)) {
    throw $error.ValidationFailed("validation failed for event data");
  }
  let published_at = $date.create();
  try {
    let seq = await $mysql.transaction(async _trx => {
      let result = (await _trx(this.table).insert({
        published_at,
        data: $json.stringify(data),
      })).shift();
      if (!result) {
        throw $error.InternalError(`unexpected mysql response`);
      }
      if (expectedSeq && expectedSeq !== result) {
        throw $error.ExpectationFailed(
          `expected seq ${expectedSeq} but got ${result}`
        );
      }
      return result;
    });
    return eventFactory(this.topic, seq, published_at, data);
  } catch (err) {
    await $mysql.raw(`ALTER TABLE ${this.table} auto_increment = ${this.seqStart}`);
    throw err;
  }
}
Why does the web page need to provide the sequence number? That is just a recipe for messy code, perhaps even messier than what you sketched out. Simply let the auto_increment value be returned to the user.
INSERT ...;
SELECT LAST_INSERT_ID(); -- session-specific, so no need for transaction.
Return that value to the user.
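With the node mysql driver you don't even need the second SELECT; the same value is exposed on the result object of the insert. A sketch, with the table and variable names as placeholders:
connection.query("INSERT INTO stream (data) VALUES (?)", [payload], function (err, result) {
  if (err) throw err;
  // insertId is the auto_increment value generated by this INSERT;
  // it is tracked per connection, so no transaction is needed
  console.log(result.insertId);
});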
Why not use Apache Kafka? It does all of this natively. With the easy answer out of the way: optimization is always tricky with partial information, but I think you've given us one hint that enables a suggestion. You said that without the order clause it performs much faster, which means that getting the max value is what is taking so long. That tells me a few things: first, this value is not the clustered index (which is good news); second, you probably do not have sufficient index support (also good news, since it's fixable by creating an index on this column, sorted descending). This sounds like a table with millions or billions of rows, and this particular column has no guaranteed order; without the right indexing you could be doing a table scan between inserts just to get the max value.
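A minimal sketch of the suggested index, reusing the $mysql.raw helper from the question (the table name events is a placeholder). With a B-tree index on seq, MAX(seq) becomes a single index lookup instead of a scan:
// DESC indexes are honored from MySQL 8.0; on older versions the keyword
// is parsed but ignored, and an ascending index still serves MAX() fine
await $mysql.raw(`ALTER TABLE events ADD INDEX idx_seq (seq DESC)`);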
Why not use a GUID for your primary key instead of an auto-incremented integer? Then your client could generate the key and would also be able to insert it every time for sure.
Batch inserts versus singleton inserts
Your latency/performance problem is due to a batch size of 1, as each send to the database requires multiple round trips to the RDBMS. Rather than inserting one row at a time, with a commit and verification after each row, rewrite your code to issue batches of 100 or 1000 rows, inserting and verifying per batch rather than per row. If a batch insert fails, you can retry one row at a time, as sketched below.
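A minimal sketch of that batching idea with knex, assuming rows is an array of row objects already built in memory and the table name events is again a placeholder (knex.batchInsert chunks the inserts for you):
// insert in chunks of 1000 rows per statement instead of one row per round trip
try {
  await knex.batchInsert("events", rows, 1000);
} catch (err) {
  // on failure, fall back to singleton inserts to isolate the bad row
  for (const row of rows) {
    await knex("events").insert(row);
  }
}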

MySQL: multiple users running SQL_CALC_FOUND_ROWS queries get different total row counts

I have various queries in our system where we want to fetch the "total" record count for pagination. In all of them I have followed this kind of structure:
var query1 = "select SQL_CALC_FOUND_ROWS ....";
sql.query(query1, [], function (error, results, fields) {
  var _qry2 = "SELECT FOUND_ROWS() as total;";
  sql.query(_qry2, [], function (error2, results2, fields2) {
  });
});
The problem I am facing is that when 1-10 users are using the system, it gives the correct "total" count for all APIs.
But when the number of users increases beyond 10, say to 20, the "total" count keeps changing as each user hits one of the APIs.
So if the "total" should be 245 rows, sometimes it gives me 18, sometimes 300, and sometimes 245, depending on other queries' "total" counts.
I don't know what is happening here. Need help.
Thanks
You have to run both queries on the same connection to get a reliable result, not just any connection from the pool.
sql.getConnection(function (err, conn) {
  if (err) throw err;
  conn.query("SELECT SQL_CALC_FOUND_ROWS * FROM table", [], function (err, result) {
    if (err) {
      conn.release(); // give connection back to the pool
      throw err;
    }
    conn.query("SELECT FOUND_ROWS() as total", [], function (err, result) {
      if (err) {
        conn.release(); // give connection back to the pool
        throw err;
      }
      let total = result[0].total;
      conn.release();
    });
  });
});
You have a concurrency issue. FOUND_ROWS() returns the number of rows returned for the last executed select statement.
https://dev.mysql.com/doc/refman/8.0/en/information-functions.html#function_found-rows
The row count available through FOUND_ROWS() is transient and not
intended to be available past the statement following the SELECT
SQL_CALC_FOUND_ROWS statement.
As your number of users increases, so does the probability of another SELECT statement being executed on the same pooled connection between the initial SELECT and your execution of FOUND_ROWS().
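Also worth noting: MySQL 8.0.17 deprecates SQL_CALC_FOUND_ROWS and FOUND_ROWS() and recommends a separate COUNT(*) query instead, which sidesteps the connection problem entirely. A sketch, with the query text as a placeholder:
sql.query("SELECT COUNT(*) AS total FROM my_table WHERE <conditions>", [], function (err, countResult) {
  if (err) throw err;
  var total = countResult[0].total; // no per-connection state involved
  sql.query("SELECT * FROM my_table WHERE <conditions> LIMIT ? OFFSET ?", [pageSize, offset], function (err2, rows) {
    if (err2) throw err2;
    // render the page with rows and total
  });
});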

Knex.js verifying query results in server side code

I have a function that is supposed to check if a license plate already exists in my MySQL database, but in the then-callback the result comes out as result: [object Object]. How do I actually get the response of this knex query and parse it to check whether a license plate exists?
var licensePlateExists = function (licensePlate) {
  knex.select('plate')
    .from('licensePlates')
    .where({ 'plate': licensePlate })
    .limit(1)
    .then(function (result) {
      console.log("Result: " + result);
      if (!result) {
        return true;
      }
      return false;
    })
    .catch(function (error) {
      console.error(error);
    });
}
I think I might have an error related to the query itself, but I tested the raw query string in the MySQL CLI and it outputs the row as expected. Maybe ordering matters in the knex query builder?
P.S. This doesn't error; it executes all the way through.
Try with
knex.select('plate').from('licensePlates').where('plate', licensePlate)
Actually, using a count query would be better; see the sketch below.
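A minimal sketch of that count-based check. Note that the original function never returns the knex promise, so callers always get undefined; returning it lets them await the boolean:
var licensePlateExists = function (licensePlate) {
  return knex('licensePlates')
    .count('plate as count')
    .where('plate', licensePlate)
    .first()
    .then(function (row) {
      // some drivers return the count as a string, so coerce it
      return Number(row.count) > 0;
    });
};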

mysql - how to display error instead of throw err in nodejs

I have put some constraints on my MySQL table, so when a duplicate value is entered it throws an error and the server shuts down.
What I want instead is for the server to keep running and for the error to be displayed to the end user, so they know they are entering a duplicate value.
How do I do that?
Here's my code.
I tried to change the code a bit by putting the render statement in the else branch, i.e. if it doesn't throw the error the page is rendered, otherwise it would do a res.send(err).
Is that possible?
Here's the code for this:
connection.query("INSERT INTO attendance_details(month_year,uan,name,days_present,real_basic_salary,other_allowances,gross_salary,ptax) VALUES ?",
[finalData], function(err) {
if (err){
throw err;
res.send(err);
}else{
var attendanceData = {monthyear :monthFromHTML,rows:rowsLength,uanarr:uanArr,designationarr:designationArr,
namearr:nameArr,finaldata:finalData,realbasicsalary:realBasicSalary,realgrosssalary:realGrossSalary,ptax:pTax,advance:advance};
//put database query for inserting values here
res.render('attendance-data.ejs', attendanceData);
}
connection.end();
});
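A sketch of the pattern described above, assuming an Express handler: drop the throw (it is what crashes the server, and it also makes the res.send line unreachable) and answer the request based on err.code instead. ER_DUP_ENTRY is the code the mysql driver reports for duplicate-key violations:
connection.query("INSERT INTO attendance_details(month_year,uan,name,days_present,real_basic_salary,other_allowances,gross_salary,ptax) VALUES ?",
  [finalData], function (err) {
    if (err) {
      if (err.code === 'ER_DUP_ENTRY') {
        res.status(409).send('Duplicate value: this record already exists.');
      } else {
        res.status(500).send(err.message);
      }
    } else {
      // build attendanceData as in the original code, then render
      res.render('attendance-data.ejs', attendanceData);
    }
    connection.end();
  });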

nodejs blocking further mysql queries

I have a MySQL query that returns ~11,000 results, each of which then needs a further query run on it. While this code is running, it doesn't allow other users to log in to the website.
Potential problems I've seen are the use of callback functions or forEach loops, but that doesn't seem to help.
The code looks like this:
query(sql, params, function (err, result) {
  result.forEach(function (row) {
    query(generate_sql(row), params, function (err, result) {
      // do stuff
    });
  });
});
I recommend using Promises.
Then your code would look something like this:
var Promise = require('bluebird');
var _query = Promise.promisify(query);

_query(sql, params)
  .map(function (singleRow) {
    // you get here for every single row of your query
  })
  .then(function () {
    // you are finished
  });
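Bluebird's .map also accepts a concurrency option, which caps how many follow-up queries are in flight at once and keeps the event loop responsive. A sketch (the limit of 10 is arbitrary):
var Promise = require('bluebird');
var _query = Promise.promisify(query);

_query(sql, params)
  .map(function (row) {
    // return the promise so .map waits for each follow-up query
    return _query(generate_sql(row), params);
  }, { concurrency: 10 })
  .then(function (results) {
    // all follow-up queries have completed
  });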
Your problem is having that many queries. 11,001 queries is a very large number, and I don't think MySQL can handle that number of queries in parallel.
So another way is to use subqueries to handle your problem, as sketched below.
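A hypothetical sketch of that idea: collapse the ~11,000 follow-up queries into one statement with a subquery (table and column names are placeholders for the real schema):
query(
  "SELECT * FROM child_table WHERE parent_id IN (SELECT id FROM parent_table WHERE <conditions>)",
  params,
  function (err, rows) {
    if (err) throw err;
    // one round trip instead of ~11,000
  }
);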