Switching databases in a MySQL pool: changeUser vs use database - mysql

I was wondering which approach is better for switching databases.
The idea is to get the database name from a subdomain and have the route's SQL queries use that database, until a request comes in from another subdomain.
This switch will happen constantly, depending on each API request.
changeUser
This can be a middleware before each API route.
pool.getConnection(function (err, conn) {
  if (err) {
    // handle/report error
    return;
  }
  conn.changeUser({
    database: req.session.dbname
  }, function (err) {
    if (err) {
      // handle/report error; release so the connection isn't leaked
      conn.release();
      return;
    }
    // Use the updated connection here; eventually
    // release it:
    conn.release();
  });
});
USE DATABASE
Simply prepend each query with the USE statement. This can also be a middleware.
USE specific_db;
SELECT * FROM my_table;

If you just need to switch to a different default database, I'd use USE. This preserves your session, so you can continue a transaction, or use temporary tables or session variables. Also your user privileges remain the same.
changeUser() starts a new session, optionally you can change user, or change the default database. But any session-scoped things such as I listed above are ended.
I don't think we can say either is "better" because they are really different actions, each suited to their own purpose. It's like asking whether if() is better than while().
So it depends on the purpose of the change in your case. You have clarified in the comments that you are doing this in middleware at the time you handle a new request.
It's important to avoid leaking information between requests, because session variables or temp tables might contain private data for the user of the previous request. So it's preferred to reset all session-scoped information. changeUser() will accomplish that, but USE won't.
My analogy is that changeUser() is like logging into Linux potentially as a different user, but USE is like staying in the same Linux shell session, but simply using cd to change directory.
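The middleware approach described above can be sketched roughly as follows (the pool wiring, `req.session.dbname`, and the `req.db` property are assumptions based on the question's code, not a definitive implementation):

```javascript
// Hypothetical Express-style middleware factory: acquires a pooled
// connection, resets the session with changeUser(), and attaches the
// connection to the request for later handlers to use.
function switchDatabase(pool) {
  return function (req, res, next) {
    pool.getConnection(function (err, conn) {
      if (err) return next(err);
      conn.changeUser({ database: req.session.dbname }, function (err) {
        if (err) {
          conn.release(); // avoid leaking the connection on failure
          return next(err);
        }
        req.db = conn; // remember to call req.db.release() when done
        next();
      });
    });
  };
}
```

Because changeUser() resets session state, any temporary tables or session variables left over from the previous request are discarded, which is exactly the isolation property recommended above.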

Related

Sequelize querying with encrypted fields using after* before* hooks

Hey there I have a question about the best way to store data encrypted in my database. I use Node.js, a MySQL database and sequelize 6.6.5 as ORM.
Here's what I do:
With the beforeCreate and beforeUpdate hooks I'm encrypting my data before storing it in the database.
With the beforeFind hook I encrypt the condition for querying before doing so.
And with the afterCreate, afterUpdate and afterFind hooks I decrypt the data to work with it after creating, updating or querying for it.
But the querying itself raises some problems for me, which I think come from the way I encrypt my data. I use the Node.js crypto module with the aes-256-cbc algorithm and a random IV for every encryption.
With the random IV every encryption results in a different string. That's why even if I use the beforeFind hook to encrypt my condition the query will never return any result.
myModel.create({myField: "someData"});
// with the beforeCreate hook encrypting this the database will contain something like this
// myField: "1ac4e952cf6207e5fd79630e0e82c901"
myModel.findAll({ where: { myField: "someData" } });
// The beforeFind hook encrypts this condition but as mentioned the result is not the same
// as the encrypted value in the database
// It will look something like this:
// { where: { myField: "e203a4e22cf654w5fd7390300ef2c2f2" } }
// Because "1ac4e952cf6207e5fd79630e0e82c901" != "e203a4e22cf654w5fd7390300ef2c2f2"
// the query results in null
I obviously could use the same IV to encrypt my data, which would mean every encryption of the same source results in the same encrypted string, but I would rather not do that if there is any other way to make this work.
So basically my two questions are:
Is there a way to make this work with encryption that uses a random IV?
Or is there an even better way to store the data encrypted in the database?
Thank you all in advance!
The purpose of the random part (the IV, which plays a role similar to a salt) is exactly to prevent what you are trying to do.
I'm not sure about your use case, but sometimes it's OK to encrypt deterministically (same data => same ciphertext); sometimes (think of user passwords) it's absolutely not OK.
From what you have posted I can't tell where you are storing the random IV; otherwise, how do you decrypt the data?

MySQL AES_DECRYPT in NodeJS, placeholder for encryption key?

I found similar replies but nothing really straightforward.
How can AES_DECRYPT be used only for the password field in a query using MySQL extension in NodeJS ?
What I have is as follow:
app.post("/verify", function (req, res) {
  connection.query('SELECT * FROM `bosses` WHERE u=? AND p=?', [req.body.user, req.body.pass], function (error, results, fields) {
    if (results.length) {
      session.loggedin = 1;
      res.redirect('/zirkus');
    } else {
      res.redirect('/soccer');
    }
  });
});
I assume that I need to modify the query with something like this:
connection.query('SELECT *, FROM `bosses` where u=? and p=AES_DECRYPT (?, 'ENCRYPTIONKEY')', [req.body.user,req.body.pass], function (error, results, fields) {
but somehow I can't get it to work properly. Should I use a placeholder for the encryption key too ?
EDIT
Thanks for the replies and explanation on why this was generally a bad idea :)
Here is a variation: no decryption password is stored in the code:
connection.query('SELECT *, AES_DECRYPT(p, ?) AS `key` FROM bosses WHERE u = ?', [req.body.pass, req.body.user], function (error, results, fields) {
    console.log(req.body.pass + req.body.user);
    if (results.length && results[0].key) {
      session.loggedin = 1;
      res.redirect('/zirkus');
    } else {
      res.redirect('/soccer');
    }
  });
});
Here the admin user types the decryption password into the form, and if the decryption is successful (the key is truthy) the user is allowed to log in (without the password being used or saved anywhere); otherwise access is denied.
I assume that in this solution the only downside is the MySQL logs, right?
Answer 1: Don't use encryption for storing user passwords. Use hashing.
There's no reason you need to decrypt user passwords, ever. Instead, when the user logs in, you hash their input with the same hashing function and compare the result to the hash string stored in the database.
Try bcrypt: https://www.npmjs.com/package/bcrypt
Also read https://blog.codinghorror.com/youre-probably-storing-passwords-incorrectly/
Answer 2: I never do encryption or hashing in SQL expressions. The reason is that if you use the query log, it will contain the plaintext of the sensitive content, as it appears in SQL expressions. It will also be visible in the PROCESSLIST.
Instead, if you need to do encryption or hashing of sensitive content, do it in your application code, and then use the result in SQL statements.
Re your edit:
I assume that in this solution the only downside are the mysql logs right ?
No. The problem is that you're storing the password using reversible encryption. There is no reason to reverse a user password. If I visit a website that offers a "password recovery" feature where they can tell me what my password was (no matter how many other security checks they do), then I know they're storing passwords wrong.
If passwords are stored in a reversible encrypted format, this creates the possibility that someone else other than me can reverse the encryption and read my password. That will never happen with hashing, because you can't reverse hashing to get the original content.
If it is because of the logs ... ?
You could disable the query logs, of course. But there are also other places where the query is visible, such as:
the binary log (if you use statement-based binary logs)
the PROCESSLIST
the performance_schema statement tables
the MySQL network protocol. That is, if you don't use TLS to encrypt the connection between the application and the database, someone could intercept packets on the network and see the plaintext query with the plaintext content.
In your edited example, they could view the user's plaintext decryption key in any of the above contexts.
... why MySQL has this function ...?
There are legitimate uses of encryption other than user passwords. Sometimes you do need to decrypt encrypted content. I'm just talking about user passwords. User passwords can be authenticated without decryption, as I described at the top of this answer. It's covered in the blog I linked to, and also as a chapter in my book SQL Antipatterns Volume 1: Avoiding the Pitfalls of Database Programming.
Another use of encryption and the corresponding decryption function in SQL is when you develop code as stored procedures. It would be inconvenient to have to return encrypted data to the client application just to decrypt it, and then send it back to your stored procedures for further processing.
You have to use double quotes for the decryption key, or escape it:
connection.query('SELECT * FROM `bosses` WHERE u=? AND p=AES_DECRYPT(?, "ENCRYPTIONKEY")', [req.body.user, req.body.pass], function (error, results, fields) {
  if (results.length) {
    session.loggedin = 1;
    res.redirect('/zirkus');
  } else {
    res.redirect('/soccer');
  }
});
But as in every language, passwords are usually stored only as hashed values, so that they can't easily be reconstructed, even from the logs. See for example https://coderrocketfuel.com/article/using-bcrypt-to-hash-and-check-passwords-in-node-js

Prestashop 1.4 - How to execute an initial MySQL query on every request

Kind of an old version of Prestashop, I know, but I need to execute an initial MySQL query on every request.
Where do I need to put the logic so that it is always executed?
Some sort of initialization point the application always passes through, no matter what URL is requested.
Thanks in advance.
I finally found the solution (and a smart one, I think).
I also think this approach could be applicable to other versions of Prestashop (1.5, 1.6, 1.7...).
In the classes/MySQL.php file, within the connect function, I add my query just before returning $this->_link:
public function connect()
{
    if (!defined('_PS_DEBUG_SQL_'))
        define('_PS_DEBUG_SQL_', false);

    if ($this->_link = mysql_connect($this->_server, $this->_user, $this->_password))
    {
        if (!$this->set_db($this->_database))
            die('The database selection cannot be made.');
    }
    else
        die('Link to database cannot be established.');

    /* UTF-8 support */
    if (!mysql_query('SET NAMES \'utf8\'', $this->_link))
        die(Tools::displayError('PrestaShop Fatal error: no utf-8 support. Please check your server configuration.'));
    // removed SET GLOBAL SQL_MODE : we can't do that (see PSCFI-1548)

    /** MY QUERY IS INSERTED HERE, USING $this->_link BY THE WAY **/
    mysql_query('...', $this->_link);

    return $this->_link;
}
In this way, two advantages arise:
I do not have to maintain a copy of the database credentials outside Prestashop's own configuration.
The query is executed on every request, in both the shop front office and the back office.
I hope this helps someone.

Nodejs Mysql connection pooling using mysql module

We are using the mysql module for Node, and I was just wondering whether this approach is good or whether it has any bad effects on our application. Consider this situation:
dbPool.getConnection(function (err, db) {
  if (err) return err;
  db.query(/* ... */);
});
Here I am calling the dbPool object, requesting a connection from the pool, and then using it. However, I found another implementation (which is the one I am asking about) that uses the dbPool object directly, like:
dbPool.query('select * from test where id = 1', function (err, rows) {})
So I was wondering what exactly the second implementation does. Does it automatically take a free connection and use it? Can you explain what happens in the second case and whether it has any positive or negative effect on my application? Thank you.
dbPool.query() is a shorthand. In Node.js you have a lot of asynchronous calls going around, and sometimes you want to do something only once the MySQL connection is available; that's why getConnection takes a callback.
dbPool.getConnection(function (err, db) {
  if (err) return err;
  db.query(/* ... */);
});
is roughly equivalent to this:
dbPool.query('select * from test where id = 1', function (err, rows) {})
dbPool.query() takes a free connection from the pool, waits for it to be open if necessary, runs the query, and then releases the connection back to the pool automatically, so you don't have to put all your queries inside getConnection to make them work. The difference is that getConnection lets you run several queries on the same connection (and you must call release() yourself).
Tell me if I'm wrong. I hope this solves your question.
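Roughly, pool.query() behaves like the following wrapper (a simplified sketch of the idea, not the module's actual source):

```javascript
// Simplified model of what pool.query() does under the hood:
// acquire a connection, run the query, and always release it afterwards.
function poolQuery(pool, sql, callback) {
  pool.getConnection(function (err, conn) {
    if (err) return callback(err);
    conn.query(sql, function (err, rows) {
      conn.release(); // returned to the pool whether the query failed or not
      callback(err, rows);
    });
  });
}
```

The practical consequence: with getConnection you keep the same connection across several queries (needed for transactions or session variables), at the cost of remembering to release it yourself.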

converting database from mysql to mongoDb

Is there any easy way to change the database from MySQL to MongoDB?
Or, even better, can anyone suggest a good tutorial for doing it?
is there any easy way to change the database from mysql to mongoDB ?
Method #1: export from MySQL in a CSV format and then use the mongoimport tool. However, this does not always work well in terms of handling dates or binary data.
Method #2: script the transfer in your language of choice. Basically you write a program that reads everything from MySQL one element at a time and then inserts it into MongoDB.
Method #2 is better than #1, but it is still not adequate.
MongoDB uses collections instead of tables. MongoDB does not support joins. In every database I've seen, this means that your data structure in MongoDB is different from the structure in MySQL.
Because of this, there is no "universal tool" for porting SQL to MongoDB. Your data will need to be transformed before it reaches MongoDB.
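As a tiny illustration of the kind of restructuring involved (the table and field names here are made up): a one-to-many pair of MySQL tables typically becomes a single collection with the child rows embedded in the parent document.

```javascript
// Hypothetical relational rows, as a MySQL driver would return them:
const authors = [
  { id: 1, name: 'Ada' },
  { id: 2, name: 'Bob' }
];
const books = [
  { id: 10, author_id: 1, title: 'On Engines' },
  { id: 11, author_id: 1, title: 'Notes' },
  { id: 12, author_id: 2, title: 'Drafts' }
];

// Fold the child table into the parent: one MongoDB document per author,
// with that author's books embedded instead of joined at query time.
function embed(parents, children, foreignKey, field) {
  return parents.map(function (p) {
    const doc = Object.assign({}, p);
    doc[field] = children
      .filter(function (c) { return c[foreignKey] === p.id; })
      .map(function (c) {
        const child = Object.assign({}, c);
        delete child[foreignKey]; // the reference is now implicit
        return child;
      });
    return doc;
  });
}

const documents = embed(authors, books, 'author_id', 'books');
// documents[0] is now { id: 1, name: 'Ada', books: [ ...two books... ] }
```

This is the step no generic tool can do for you, because where to embed versus where to keep separate collections is a design decision about your data.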
If you're using Ruby, you can also try Mongify.
It's a super simple way to transform your data from an RDBMS to MongoDB without losing anything.
Mongify will read your MySQL database and build a translation file for you; all you have to do is map how you want your data transformed.
It supports:
Auto updating IDs (to BSON ObjectID)
Updating referencing IDs
Type Casting values
Embedding tables into other documents
Before save filters (to allow changes to the data manually)
and much much more...
Read more about it at: http://mongify.com/getting_started.html
There is also a short 5 min video on the homepage that shows you how easy it is.
Here's what I did with Node.js for this purpose:
var mysql = require('mysql');
var MongoClient = require('mongodb').MongoClient;

function getMysqlTables(mysqlConnection, callback) {
    mysqlConnection.query("show full tables where Table_Type = 'BASE TABLE';", function (error, results, fields) {
        if (error) {
            callback(error);
        } else {
            var tables = [];
            results.forEach(function (row) {
                for (var key in row) {
                    if (row.hasOwnProperty(key)) {
                        if (key.startsWith('Tables_in')) {
                            tables.push(row[key]);
                        }
                    }
                }
            });
            callback(null, tables);
        }
    });
}

function tableToCollection(mysqlConnection, tableName, mongoCollection, callback) {
    var sql = 'SELECT * FROM ' + tableName + ';';
    mysqlConnection.query(sql, function (error, results, fields) {
        if (error) {
            callback(error);
        } else if (results.length > 0) {
            mongoCollection.insertMany(results, {}, function (error) {
                if (error) {
                    callback(error);
                } else {
                    callback(null);
                }
            });
        } else {
            callback(null);
        }
    });
}

MongoClient.connect("mongodb://localhost:27017/importedDb", function (error, db) {
    if (error) throw error;
    var MysqlCon = mysql.createConnection({
        host: 'localhost',
        user: 'root',
        password: 'root',
        port: 8889,
        database: 'dbToExport'
    });
    MysqlCon.connect();
    var jobs = 0;
    getMysqlTables(MysqlCon, function (error, tables) {
        if (error) throw error;
        tables.forEach(function (table) {
            var collection = db.collection(table);
            ++jobs;
            tableToCollection(MysqlCon, table, collection, function (error) {
                if (error) throw error;
                --jobs;
            });
        });
        // Waiting for all jobs to complete before closing database connections.
        // (Started here, after the jobs are queued, so the check cannot fire early.)
        var interval = setInterval(function () {
            if (jobs <= 0) {
                clearInterval(interval);
                console.log('done!');
                db.close();
                MysqlCon.end();
            }
        }, 300);
    });
});
});
MongoVUE's free version can do this automatically for you.
It can connect to both databases and perform the import
I think one of the easiest ways is to export the MySQL database to JSON and then use mongoimport to import it into a MongoDB database.
Step 1: Export the MySQL database to JSON
Load the mysql dump file into a MySQL database if necessary
Open MySQL Workbench and connect to the MySQL database
Go to the Schema viewer > Select database > Tables > right-click on the name of the table to export
Select 'Table Data Export Wizard'
Set the file format to .json and type in a filename such as tablename.json
Note: All tables will need to be exported individually
Step 2: Import the JSON files into MongoDB using the mongoimport command
The mongoimport command should be run from the server command line (not the mongo shell)
Note that you may need to provide authentication details as well as the --jsonArray option; see the mongoimport docs for more information
mongoimport -d dbname -u ${MONGO_USERNAME} -p ${MONGO_PASSWORD} --authenticationDatabase admin -c collectionname --jsonArray --file tablename.json
Note: This method will not work if the original MySQL database has BLOBs/binary data.
I am kind of partial to Talend Open Studio for these kinds of migration jobs. It is an Eclipse-based solution for creating data migration "scripts" in a visual way. I do not like visual programming, but this is a problem domain where I make an exception.
Adrien Mogenet has created a MongoDBConnection plugin for MongoDB.
It is probably overkill for a "simple" migration, but it is a cool tool.
Mind, however, that the suggestion of Nix will probably save you time if it is a one-off migration.
You can use the QCubed (http://qcu.be) framework for that. The procedure would be something like this:
Install QCubed (http://www.thetrozone.com/qcubed-installation)
Do the code generation on your database. (http://www.thetrozone.com/php-code-generation-qcubed-eliminating-sql-hassle)
Take your database offline from the rest of the world so that only one operation runs at a time.
Now write a script which reads all rows from all tables of the database and uses getJson on all objects to get the JSON. You can then convert the data to an array and push it into MongoDB!
If anyone's still looking for a solution, I found that the easiest way is to write a PHP script to connect to your SQL database, retrieve the information you want using the usual SELECT statements, transform the information into JSON using PHP's json_encode function, and simply output your results to a file or directly to MongoDB. It's actually pretty simple and straightforward; the only thing to do is to double-check your output against a JSON validator, and you may have to use functions such as explode to replace certain characters and symbols to make it valid. I have done this before, and although I currently do not have the script at hand, from what I can remember it was literally half a page of code.
Also remember that Mongo is a document store, so some data mapping is required to make the result acceptable to Mongo.
For those coming to this with the same problem, you can check out this GitHub project. It is under active development and will help you migrate data from a MySQL database to MongoDB by running a single simple command.
It will generate MongoDB schemas in TypeScript so you can use them later in your project. Each MySQL table becomes a MongoDB collection, and data types are converted to their MongoDB equivalents.
The documentation for the same can be found in the project's README.md. Feel free to come in and contribute. Would like to help if need be.
If you are looking for a tool to do it for you, good luck.
My suggestion is to just pick your language of choice, and read from one and write to another.
If I could quote Matt Briggs (it solved my problem one time):
The driver way is by FAR the most straight forward. The import/export tools are fantastic, but only if you are using them as a pair. You are in for a wild ride if your table includes dates and you try to export from the db and import into mongo.
You are lucky, too, being in C#. We are using Ruby, and have a 32-million-row table we migrated to Mongo. Our ending solution was to craft an insane SQL statement in Postgres that output JSON (including some pretty kludgy things to get dates working properly) and pipe the output of that query on the command line into mongoimport. It took an incredibly frustrating day to write, and it is not the sort of thing that can ever really be changed.
So if you can get away with it, use ado.net with the mongo driver. If not, I wish you well :-)
(note that this is coming from a total mongo fanboi)
MySQL is very similar to other SQL databases, so I refer you to the topic:
Convert SQL table to mongoDB document
You can use the following project. It requires a Solr-like configuration file to be written. It's very simple and straightforward.
http://code.google.com/p/sql-to-mongo-importer/
Try this:
Automated conversion of MySQL dump to Mongo updates using simple r2n mappings.
https://github.com/virtimus/mysql2mongo