Writing SQL queries in nodejs - mysql

This question is mainly about the best practice for writing queries in Node.js. We have referred to several tutorials but were not able to reach a conclusion.
We have a Node.js API layer which is mainly used for reading from and writing to the database. Here is some sample code:
pool.query("update node SET changed = " + params.updationTime + " where nid = " + params.nid);
pool.query("update node_revision SET timestamp = " + params.updationTime +" where nid = " + params.nid);
pool.end();
Is this a correct way of writing the code, or should we write the SQL queries in an async style instead?

If your pool configuration allows more than one connection, then both queries are likely executed in parallel. The style of the call itself does not matter. This example takes 2 seconds to finish:
connection.query('select sleep(1)');
connection.query('select sleep(1)', function() { console.log('done!') });
As well as this one:
connection.query('select sleep(1)', function() {
  connection.query('select sleep(1)', function() {
    console.log('done!')
  });
});
because the MySQL protocol itself is "sequential" (that is, the client is only allowed to send the next query after the result of the previous one has been fully received). Most async clients hide this limitation by queueing commands internally. With two connections, the queries actually do run in parallel:
connection1.query('select sleep(1)', function() { console.log('done1') });
connection2.query('select sleep(1)', function() { console.log('done2') });
"done1" and "done2" are both going to appear on screen in approximately 1 second
pool.query is a shortcut for pool.getConnection() + connection.query() + connection.release(); see the readme.
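For illustration, here is a rough sketch of what pool.query does internally (simplified; the real implementation also forwards placeholder values and handles more edge cases):
// Simplified sketch of pool.query(sql, cb): borrow a connection, run the query, give it back
function poolQuery(pool, sql, cb) {
  pool.getConnection(function (err, connection) {
    if (err) return cb(err);
    connection.query(sql, function (error, results, fields) {
      connection.release(); // return the connection to the pool
      cb(error, results, fields);
    });
  });
}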

When writing SQL queries in Node.js, I cannot promote Knex.js enough!
A programmatic way to build dynamic queries (writing dynamic raw SQL strings is a very manual process).
Connection pools.
Transaction support.
String escaping.
And on and on.
For your specific question, you just build the queries and execute them (using callbacks or Promises); the Knex connection pool handles all the pooling, and generally things will just work for you.
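As a sketch of what that looks like for the updates in the question (the connection settings here are placeholders, and the promise-style API is assumed):
var knex = require('knex')({
  client: 'mysql',
  connection: { host: 'localhost', user: 'root', password: 'pass', database: 'myDB' }
});

// The two raw updates from the question, with values escaped for you
knex('node')
  .where('nid', params.nid)
  .update({ changed: params.updationTime })
  .then(function () {
    return knex('node_revision')
      .where('nid', params.nid)
      .update({ timestamp: params.updationTime });
  })
  .then(function () {
    return knex.destroy(); // close the pool when the app is done with the database
  })
  .catch(function (err) {
    console.error(err);
  });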
You'll like it, give it a try : )

I suggest you use the Sails.js framework (http://sailsjs.org/#/), which uses the Waterline query language (http://sailsjs.org/#/documentation/concepts/ORM/Querylanguage.html) to retrieve data from a MySQL/MongoDB/Redis database.

Related

MySQL pooling within nodejs

Hi, I've just read the docs of the mysql package for Node.js. I'm a little unsure about the best practice for working with pooling.
var mysql = require('mysql');
var pool = mysql.createPool(...);
pool.getConnection(function(err, connection) {
  // Use the connection
  connection.query('SELECT something FROM sometable', function (error, results, fields) {
    // And done with the connection.
    connection.release();
    // Handle error after the release.
    if (error) throw error;
    // Don't use the connection here, it has been returned to the pool.
  });
});
Do we have to call the release() method every time we have performed a query?
And one more.
What is the difference between using the pool directly to perform the query vs. using the getConnection method and then performing the query?
Code using the pool directly:
var pool = mysql.createPool(...);
pool.query(...)
Using getConnection method then perform the query:
pool.getConnection(function(err, connection) {
  connection.query(....);
});
If you ask for a connection, you basically reserve that connection for a little while. This is important for 2 reasons:
Only 1 query can be done on a connection at a time, never in parallel. So this prevents 2 things from using the same connection.
Transactions are connection-based and all queries within the transaction must happen on that connection object.
The mysql library would have no way to predict that you are 'done' with your transaction, which is why you need to release the connection yourself.
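For example, a transaction with the mysql driver has to keep every query on the one reserved connection; a minimal sketch (the accounts table and its columns are made up for illustration):
pool.getConnection(function (err, connection) {
  if (err) throw err;
  connection.beginTransaction(function (err) {
    if (err) { connection.release(); throw err; }
    connection.query('UPDATE accounts SET balance = balance - 10 WHERE id = ?', [1], function (err) {
      if (err) return connection.rollback(function () { connection.release(); });
      connection.query('UPDATE accounts SET balance = balance + 10 WHERE id = ?', [2], function (err) {
        if (err) return connection.rollback(function () { connection.release(); });
        connection.commit(function (err) {
          if (err) return connection.rollback(function () { connection.release(); });
          connection.release(); // only now is the connection free for other work
        });
      });
    });
  });
});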
Aside: You should consider looking into mysql2 for a similar library that's more powerful, and use promises instead of this callback pattern.
Update based on comment
When you run a query directly on the pool, the pool will automatically get a connection, run the query, and release it for you.
This is useful if you just need to do a single query and don't care about transactions.

How to kill a MySQL query with Node.js without disconnecting?

Context:
I'm building a web application that reads data from a large database (several million rows per table); sometimes a user can change his mind and request new data before the query on the db has completed.
Technical question:
I tried to kill the query in these cases using:
app.get("/data", function(req, res) {
if (req.query.killQuery == "true") {
con.query("KILL \"" + threadId + "\"", function(err) {
if (err) throw err;
console.log("I have interrupted the executing query for a new request");
giveData(req, res); //The function that will execute a new query
});
return;
}
giveData(req, res); //The function that will execute a new query
});
Now I have several doubts about this code:
I had to use a second connection to kill the thread of the first, since the first was unable to perform new queries before the running one had completed. Is this Node.js behaviour, or is this the right way to do this kind of thing?
The KILL thread_id statement closes the whole connection instead of stopping the single query. Again, is this Node.js behaviour, or is it MySQL itself? Do I really have to disconnect and reconnect to stop one query and start another?
If you have a modern version of MySQL, you can use KILL QUERY <threadId> instead which will only kill the currently executing query on that connection but leave the connection intact.
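A rough sketch of that approach with the mysql module (connection settings are placeholders; the second connection exists only to send the KILL QUERY):
var mysql = require('mysql');
var con = mysql.createConnection({ host: 'localhost', user: 'root', password: 'pass', database: 'myDB' });
var killerCon = mysql.createConnection({ host: 'localhost', user: 'root', password: 'pass', database: 'myDB' });

con.connect(function (err) {
  if (err) throw err;

  // Long-running query on the first connection
  con.query('SELECT SLEEP(30)', function (err) {
    // err.code is 'ER_QUERY_INTERRUPTED' when the query was killed
    console.log('first query ended', err && err.code);
  });

  // Kill only the running query; con itself stays connected and usable
  killerCon.query('KILL QUERY ' + con.threadId, function (err) {
    if (err) throw err;
    console.log('sent KILL QUERY for thread ' + con.threadId);
  });
});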

Socket.io and MySQL Connections

I'm working on my first Node.js/Socket.IO project. Until now I have coded only in PHP. In PHP it is common to close the MySQL connection when it is no longer needed.
My question: does it make sense to keep just one MySQL connection open while the server is running, or should I handle this like in PHP?
Info: at peak hours I will have about 5 requests/second from socket clients, and for almost all of them I have to perform a MySQL CRUD operation.
Which one would you prefer?
io = require('socket.io').listen(3000);
var mysql = require('mysql');
var connection = mysql.createConnection({
  host: 'localhost', user: 'root', password: 'pass', database: 'myDB'
});
connection.connect(); // and never 'end' or 'destroy'
// ...
or
var app = {};
app.set_geolocation = function(driver_id, driver_location) {
  connection.connect();
  connection.query('UPDATE drivers set ....', function (err) {
    /* do something */
  })
  connection.end();
}
...
The whole idea of Node.js is async I/O (and that includes db queries).
The rule with a MySQL connection is that only one query can run on it at a time, never in parallel. So you either queue queries on a single connection, as in the first option, or create a connection each time, as in option 2.
I would personally go with option 2, as opening and closing connections is not such a big overhead.
Here are some code samples to help you out:
https://codeforgeek.com/2015/01/nodejs-mysql-tutorial/
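One caveat with option 2 as written: a connection object from the mysql module cannot be connect()-ed again after end(), so "a connection each time" means creating it inside the function. A sketch under that assumption (connection settings and column names are placeholders):
var mysql = require('mysql');

var app = {};
app.set_geolocation = function (driver_id, driver_location) {
  // Fresh connection per operation; query() connects implicitly
  var connection = mysql.createConnection({
    host: 'localhost', user: 'root', password: 'pass', database: 'myDB'
  });
  connection.query('UPDATE drivers SET location = ? WHERE id = ?',
    [driver_location, driver_id],
    function (err) {
      if (err) console.error(err);
      connection.end(); // finish queued queries, then close this connection
    });
};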

MongoDB is much slower in comparision to MySQL - basic find() method

This is my first Stack Overflow question, but I'm a long-time reader.
I'm working on a home project and I tried to compare the speed of MongoDB and MySQL. Surprisingly, MongoDB is almost 5 times slower, even with a very basic table. I had read that they are almost the same speed, so this got me thinking.
Can someone explain to me why this is happening?
The MongoDB code:
app.get('/mongodb', function(req, res, next) {
  var time = process.hrtime();
  User.find(function(err, users) {
    if (err) return next(err);
    var diff = process.hrtime(time);
    if (diff[0] == 0) {
      res.send(diff[1].toString());
    } else {
      res.send(diff[0] + "." + diff[1]);
    }
  });
});
The MySQL code:
app.get('/mysql', function(req, res, next) {
  var time = process.hrtime();
  mysql.query("SELECT * FROM users", function(err, results) {
    if (err) throw err;
    var diff = process.hrtime(time);
    if (diff[0] == 0) {
      res.send(diff[1].toString());
    } else {
      res.send(diff[0] + "." + diff[1]);
    }
  });
});
MySQL returns: 1.52201348
MongoDB returns: 9.746405351
Schema is:
User {
  id integer,
  name string,
  email string
}
There are around 500,000 users.
Index is on _id in MongoDB and id in MySQL.
MongoDB shell version: 2.4.6
MySQL version: 5.5.37
Any ideas? Any help or suggestions will be much appreciated.
Regards,
Wnzl.
I think the main problem here is that benchmarking should always be done with respect to a specific use case.
The assumption that they perform at the same speed is pretty unspecific.
The use case you examine here is defined by your example: getting all entities of a schema. Even if the use case is pretty simple, the meaningfulness of your result won't be.
MongoDB and SQL databases are designed for different use cases, which means each system is optimized for different kinds of queries. This is where I think you should dig deeper to find the reason for the different performance.
Also take into account that, depending on the system, querying all entities of one schema could perhaps be realized with multiple queries that are faster in total. Finally, think about bootstrapping, caching and indexing mechanisms that may be totally different due to optimizations for system-specific use cases.
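One practical detail about the measurement itself: concatenating the hrtime parts as diff[0] + "." + diff[1] does not zero-pad the nanoseconds, so the printed numbers can be misleading. A minimal sketch of a safer helper (elapsedSeconds is just an illustrative name):
// Convert a process.hrtime() start value into elapsed seconds as a number
function elapsedSeconds(start) {
  var diff = process.hrtime(start); // [seconds, nanoseconds]
  return diff[0] + diff[1] / 1e9;   // e.g. [1, 52201348] -> 1.052201348
}

// Usage in either handler:
// var time = process.hrtime();
// ... later ...
// res.send(elapsedSeconds(time).toFixed(6));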

Nodejs Mysql connection pooling using mysql module

We are using the mysql module for Node, and I was just wondering whether this approach is good or whether it has any bad effects on our application. Consider this situation:
dbPool.getConnection(function(err, db) {
  if (err) return err;
  db.query();
});
Here I am calling the dbPool object and requesting a connection from the pool, then using it. However, I found another implementation (which is the one I am asking about) which uses the dbPool object directly, like:
dbPool.query('select * from test where id = 1', function(err, rows) {})
So I was wondering what exactly the second implementation does. Does it automatically take a free connection and use it? Can you explain what exactly happens in the second case, and whether it has any positive or negative effect on my application? Thank you.
So this is so-called callback chaining. In Node.js you have a lot of asynchronous calls going around, but sometimes you want to do something only when the connection to MySQL is ready. That's why getConnection takes a callback.
dbPool.getConnection(function(err, db) {
  if (err) return err;
  db.query();
});
Is equivalent to this:
dbPool.query('select * from test where id = 1', function(err, rows) {})
dbPool.query() will wait for a connection to become available; you don't have to put all your queries inside getConnection to make them work. This is why it also takes a callback.
Tell me if I'm wrong. I hope this answers your question.
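To summarize with a sketch (table name taken from the question): with getConnection you must release() the connection yourself, while dbPool.query borrows and returns one for you.
// Explicit connection: you are responsible for releasing it
dbPool.getConnection(function (err, db) {
  if (err) return console.error(err);
  db.query('select * from test where id = 1', function (err, rows) {
    db.release(); // hand the connection back to the pool
    if (err) return console.error(err);
    console.log(rows);
  });
});

// Shortcut: the pool takes a free connection, runs the query, and releases it
dbPool.query('select * from test where id = 1', function (err, rows) {
  if (err) return console.error(err);
  console.log(rows);
});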