Is there any easy way to change the database from MySQL to MongoDB?
Or, better, can anyone suggest a good tutorial for doing it?
Method #1: export from MySQL in CSV format and then use the mongoimport tool. However, this does not always work well in terms of handling dates or binary data.
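A rough sketch of method #1, with hypothetical table and column names. Note that SELECT ... INTO OUTFILE writes no header row, so the field names are passed to mongoimport explicitly via --fields:

SELECT id, name, email FROM users
  INTO OUTFILE '/tmp/users.csv'
  FIELDS TERMINATED BY ',' ENCLOSED BY '"'
  LINES TERMINATED BY '\n';

Then, from the shell:

mongoimport -d mydb -c users --type csv --fields id,name,email --file /tmp/users.csv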
Method #2: script the transfer in your language of choice. Basically you write a program that reads everything from MySQL one element at a time and then inserts it into MongoDB.
Method #2 is better than #1, but it is still not adequate.
MongoDB uses collections instead of tables. MongoDB does not support joins. In every database I've seen, this means that your data structure in MongoDB is different from the structure in MySQL.
Because of this, there is no "universal tool" for porting SQL to MongoDB. Your data will need to be transformed before it reaches MongoDB.
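For instance, a one-to-many relationship that needs a join in MySQL typically becomes an embedded array in a single MongoDB document. Here is a minimal Node.js sketch of that kind of transform, assuming an already-connected mysqlConnection and a db handle from the MongoDB driver; the orders/order_items tables are hypothetical:

// Fold a child table into an embedded array on each parent document.
mysqlConnection.query('SELECT * FROM orders', function (error, orders) {
    if (error) throw error;
    orders.forEach(function (order) {
        mysqlConnection.query(
            'SELECT * FROM order_items WHERE order_id = ?', [order.id],
            function (error, items) {
                if (error) throw error;
                // One MongoDB document replaces a two-table join.
                db.collection('orders').insertOne({
                    _id: order.id,
                    customer: order.customer_id,
                    items: items // embedded instead of joined
                }, function (error) {
                    if (error) throw error;
                });
            });
    });
});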
If you're using Ruby, you can also try: Mongify
It's a super simple way to transform your data from an RDBMS to MongoDB without losing anything.
Mongify will read your MySQL database and build a translation file for you; all you have to do is map how you want your data transformed.
It supports:
Auto-updating IDs (to BSON ObjectIds)
Updating referencing IDs
Type-casting values
Embedding tables into other documents
Before-save filters (to allow manual changes to the data)
and much, much more...
Read more about it at: http://mongify.com/getting_started.html
There is also a short 5 min video on the homepage that shows you how easy it is.
Here's what I did with Node.js for this purpose:
var mysql = require('mysql');
var MongoClient = require('mongodb').MongoClient;

// List the base tables (ignoring views) in the connected MySQL database.
function getMysqlTables(mysqlConnection, callback) {
    mysqlConnection.query("show full tables where Table_Type = 'BASE TABLE';", function (error, results, fields) {
        if (error) {
            callback(error);
        } else {
            var tables = [];
            results.forEach(function (row) {
                for (var key in row) {
                    if (row.hasOwnProperty(key) && key.startsWith('Tables_in')) {
                        tables.push(row[key]);
                    }
                }
            });
            callback(null, tables);
        }
    });
}

// Copy every row of a MySQL table into the given MongoDB collection.
function tableToCollection(mysqlConnection, tableName, mongoCollection, callback) {
    var sql = 'SELECT * FROM ' + mysqlConnection.escapeId(tableName) + ';';
    mysqlConnection.query(sql, function (error, results, fields) {
        if (error) {
            callback(error);
        } else if (results.length > 0) {
            mongoCollection.insertMany(results, {}, function (error) {
                callback(error || null);
            });
        } else {
            callback(null);
        }
    });
}

MongoClient.connect("mongodb://localhost:27017/importedDb", function (error, db) {
    if (error) throw error;
    var MysqlCon = mysql.createConnection({
        host: 'localhost',
        user: 'root',
        password: 'root',
        port: 8889,
        database: 'dbToExport'
    });
    MysqlCon.connect();
    var jobs = 0;
    getMysqlTables(MysqlCon, function (error, tables) {
        if (error) throw error;
        tables.forEach(function (table) {
            var collection = db.collection(table);
            ++jobs;
            tableToCollection(MysqlCon, table, collection, function (error) {
                if (error) throw error;
                --jobs;
            });
        });
    });
    // Wait for all jobs to complete before closing the database connections.
    var interval = setInterval(function () {
        if (jobs <= 0) {
            clearInterval(interval);
            console.log('done!');
            db.close();
            MysqlCon.end();
        }
    }, 300);
});
MongoVUE's free version can do this automatically for you.
It can connect to both databases and perform the import.
I think one of the easiest ways is to export the MySQL database to JSON and then use mongoimport to import it into a MongoDB database.
Step 1: Export the MySQL database to JSON
Load the MySQL dump file into a MySQL database if necessary
Open MySQL Workbench and connect to the MySQL database
Go to the Schema viewer > select the database > Tables > right-click on the name of the table to export
Select 'Table Data Export Wizard'
Set the file format to .json and type in a filename such as tablename.json
Note: all tables will need to be exported individually
Step 2: Import the JSON files into MongoDB using the mongoimport command
The mongoimport command should be run from the server command line (not the mongo shell)
Note that you may need to provide authentication details, as well as the --jsonArray option; see the mongoimport docs for more information
mongoimport -d dbname -u ${MONGO_USERNAME} -p ${MONGO_PASSWORD} --authenticationDatabase admin -c collectionname --jsonArray --file tablename.json
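For reference, the --jsonArray option expects the file to contain a single JSON array, which matches the shape the Workbench export wizard produces, something like:

[
    { "id": 1, "name": "Alice" },
    { "id": 2, "name": "Bob" }
]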
Note: This method will not work if the original MySQL database has BLOBs/binary data.
I am kind of partial to Talend Open Studio for those kinds of migration jobs. It is an Eclipse-based solution for creating data migration "scripts" in a visual way. I do not like visual programming, but this is a problem domain where I make an exception.
Adrien Mogenet has created a MongoDBConnection plugin for MongoDB.
It is probably overkill for a "simple" migration, but it is a cool tool.
Mind, however, that the suggestion of Nix will probably save you time if it is a one-off migration.
You can use the QCubed (http://qcu.be) framework for that. The procedure would be something like this:
Install QCubed (http://www.thetrozone.com/qcubed-installation)
Do the codegen on your database. (http://www.thetrozone.com/php-code-generation-qcubed-eliminating-sql-hassle)
Take your database offline from the rest of the world so that only one operation runs at a time.
Now write a script that reads all rows from all tables of the database and uses getJson() on all objects to get the JSON. You can then convert the data to an array and push it into MongoDB!
If anyone's still looking for a solution, I found that the easiest way is to write a PHP script to connect to your SQL database, retrieve the information you want with the usual SELECT statement, transform it into JSON using PHP's JSON encode functions, and simply output your results to a file or directly to MongoDB. It's actually pretty simple and straightforward; the only thing to do is to double-check your output against a JSON validator. You may have to use functions such as explode() to replace certain characters and symbols to make it valid. I have done this before; I don't currently have the script at hand, but from what I can remember it was literally half a page of code.
Oh, also remember that Mongo is a document store, so some data mapping is required to make the data acceptable to Mongo.
For those coming to this with the same problem, you can check out this GitHub project. It is under ongoing development and will help you migrate data from a MySQL database to MongoDB by simply running a single command.
It will generate MongoDB schemas in TypeScript, so you can use them later in your project. Each MySQL table will become a MongoDB collection, and the datatypes will be converted to their MongoDB equivalents.
The documentation can be found in the project's README.md. Feel free to come in and contribute; I would like to help if need be.
If you are looking for a tool to do it for you, good luck.
My suggestion is to just pick your language of choice, and read from one and write to another.
If I could quote Matt Briggs (it solved my problem one time):
The driver way is by FAR the most straightforward. The import/export tools are fantastic, but only if you are using them as a pair. You are in for a wild ride if your table includes dates and you try to export from the db and import into mongo.
You are lucky too, being in C#. We are using Ruby, and have a 32-million-row table we migrated to mongo. Our ending solution was to craft an insane SQL statement in Postgres that output JSON (including some pretty kludgy things to get dates going properly) and pipe the output of that query on the command line into mongoimport. It took an incredibly frustrating day to write, and is not the sort of thing that can ever really be changed.
So if you can get away with it, use ADO.NET with the mongo driver. If not, I wish you well :-)
(note that this is coming from a total mongo fanboi)
MySQL is very similar to other SQL databases, so I'll point you to this topic:
Convert SQL table to MongoDB document
You can use the following project. It requires a Solr-like configuration file to be written. It's very simple and straightforward.
http://code.google.com/p/sql-to-mongo-importer/
Try this:
Automated conversion of MySQL dump to Mongo updates using simple r2n mappings.
https://github.com/virtimus/mysql2mongo
Related
I was wondering which approach is better for switching databases.
The idea is to get the database name from a subdomain, and make the specific route's SQL query use that database until a request comes from another subdomain.
This switch will happen constantly, depending on each API request.
changeUser
This can be a middleware before each API route.
pool.getConnection(function (err, conn) {
    if (err) {
        // handle/report error
        return;
    }
    conn.changeUser({
        database: req.session.dbname
    }, function (err) {
        if (err) {
            // handle/report error
            return;
        }
        // Use the updated connection here; eventually
        // release it:
        conn.release();
    });
});
USE DATABASE
Simply prepend each query with the USE statement. This can also be a middleware.
USE specific_db;
SELECT * FROM some_table;
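A minimal sketch of that middleware, assuming an Express app, the pool from the snippet above, and req.session.dbname as in the changeUser example (req.db is a hypothetical per-request handle; the ?? placeholder escapes the database name as an identifier):

app.use(function (req, res, next) {
    pool.getConnection(function (err, conn) {
        if (err) return next(err);
        // USE only switches the default database. Note that the change
        // sticks to this pooled connection after release, so session
        // state can leak between requests (see the discussion below).
        conn.query('USE ??', [req.session.dbname], function (err) {
            if (err) {
                conn.release();
                return next(err);
            }
            req.db = conn; // the route handler must release it when done
            next();
        });
    });
});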
If you just need to switch to a different default database, I'd use USE. This preserves your session, so you can continue a transaction, or use temporary tables or session variables. Also your user privileges remain the same.
changeUser() starts a new session; optionally, you can change the user or change the default database. But any session-scoped things such as those I listed above are ended.
I don't think we can say either is "better" because they are really different actions, each suited to their own purpose. It's like asking whether if() is better than while().
So it depends on what the purpose of the change is in your case. You have clarified in the comments that you are doing this in middleware at the time you handle a new request.
It's important to avoid leaking information between requests, because session variables or temp tables might contain private data for the user of the previous request. So it's preferred to reset all session-scoped information. changeUser() will accomplish that, but USE won't.
My analogy is that changeUser() is like logging into Linux potentially as a different user, but USE is like staying in the same Linux shell session, but simply using cd to change directory.
Hey there, I have a question about the best way to store data encrypted in my database. I use Node.js, a MySQL database, and Sequelize 6.6.5 as the ORM.
Here's what I do:
With beforeCreate and beforeUpdate hooks, I'm encrypting my data before storing it in the database.
With the beforeFind hook, I encrypt the query condition before querying.
And with afterCreate, afterUpdate, and afterFind hooks, I decrypt the data to work with it after creating, updating, or querying for it.
But the querying itself raises some problems for me, which I think come from the way I encrypt my data. I use the Node.js crypto module with the aes-256-cbc algorithm and a random IV for every encryption.
With the random IV, every encryption results in a different string. That's why, even if I use the beforeFind hook to encrypt my condition, the query will never return any result.
myModel.create({ myField: "someData" });
// With the beforeCreate hook encrypting this, the database will contain something like:
// myField: "1ac4e952cf6207e5fd79630e0e82c901"

myModel.findAll({ where: { myField: "someData" } });
// The beforeFind hook encrypts this condition, but as mentioned the result is not the same
// as the encrypted value in the database.
// It will look something like this:
// { where: { myField: "e203a4e22cf654w5fd7390300ef2c2f2" } }
// Because "1ac4e952cf6207e5fd79630e0e82c901" != "e203a4e22cf654w5fd7390300ef2c2f2"
// the query returns null
I obviously could use a fixed IV, which would make every encryption of the same source result in the same encrypted string, but I would rather not do that if there is any other way to make it work.
So basically my two questions are:
Is there a way to make this work with an encryption scheme that uses a random IV?
Or is there an even better way to store the data encrypted in the database?
Thank you all in advance!
The purpose of the random part (the IV) is exactly to prevent what you are trying to do: the same plaintext should never produce the same ciphertext.
I'm not sure about your use case, but sometimes it's OK to encrypt deterministically (same data => same output), and sometimes (think of user passwords) it's absolutely not OK.
From what you have posted, I don't know where you are saving the random part; otherwise, how do you decrypt the data?
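To that last point: the usual pattern is to store the IV alongside the ciphertext (the IV is not secret), which keeps decryption possible even though each encryption of the same plaintext differs. A minimal sketch with Node's crypto module, with key handling simplified for illustration:

const crypto = require('crypto');

const key = crypto.randomBytes(32); // in practice: a fixed 32-byte key from config or a KMS

function encrypt(plaintext) {
    const iv = crypto.randomBytes(16);
    const cipher = crypto.createCipheriv('aes-256-cbc', key, iv);
    const encrypted = Buffer.concat([cipher.update(plaintext, 'utf8'), cipher.final()]);
    // Prepend the IV so it travels with the ciphertext.
    return iv.toString('hex') + ':' + encrypted.toString('hex');
}

function decrypt(stored) {
    const [ivHex, dataHex] = stored.split(':');
    const decipher = crypto.createDecipheriv('aes-256-cbc', key, Buffer.from(ivHex, 'hex'));
    return Buffer.concat([decipher.update(Buffer.from(dataHex, 'hex')), decipher.final()]).toString('utf8');
}

Note that this still won't make equality queries match; for that, the stored value has to be deterministic, which is exactly the trade-off described above.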
I was wondering if you can get the metadata or the entire structure of a table and its columns using Sails.js or Waterline's MySQL module.
After hours of searching, I've finally found the holy grail. Well, half of the holy grail.
Anyway, I just followed the instructions given here: https://github.com/balderdashy/sails/issues/780 then created my own custom query.
Yes, it's very easy to get the structure of a MySQL table with Waterline in Sails.js...
Model.query("desc table_name",function (err, models) {
if(!err)
{
//Do somethng wth your data in models variable
}
}
)
Here table_name is your table name
For me, the code below worked fine...

db('users_table').query("SHOW COLUMNS FROM users_table", function (err, models) {
    if (!err) {
        // work with the data
    }
});
I'd like to dump my databases to a file.
Certain website hosts don't allow remote or command line access, so I have to do this using a series of queries.
All of the related questions say "use mysqldump" which is a great tool but I don't have command line access to this database.
I'd like the CREATE and INSERT commands to be generated at the same time - basically, the same output as mysqldump produces. Is SELECT INTO OUTFILE the right road to travel, or is there something else I'm overlooking - or maybe it's not possible?
Use mysqldump-php, a pure-PHP solution that replicates the function of the mysqldump executable for basic to medium-complexity use cases. I understand you may not have remote CLI and/or direct MySQL access, but as long as you can execute via an HTTP request on an httpd on the host, this will work:
You should be able to just run the following pure-PHP script straight from a secure directory in /www/, have an output file written there, and grab it with wget.
mysqldump-php - Pure PHP mysqldump on GitHub
PHP example:
<?php
require('database_connection.php');
require('mysql-dump.php');

$dumpSettings = array(
    'include-tables' => array('table1', 'table2'),
    'exclude-tables' => array('table3', 'table4'),
    'compress' => CompressMethod::GZIP, /* CompressMethod::[GZIP, BZIP2, NONE] */
    'no-data' => false,
    'add-drop-table' => false,
    'single-transaction' => true,
    'lock-tables' => false,
    'add-locks' => true,
    'extended-insert' => true
);

$dump = new MySQLDump('database', 'database_user', 'database_pass', 'localhost', $dumpSettings);
$dump->start('forum_dump.sql.gz');
?>
With your hands tied by your host, you may have to take a rather extreme approach. Using any scripting option your host provides, you can achieve this with just a little difficulty. You can create a secure web page or plain-text dump link known only to you and sufficiently secured to prevent all unauthorized access. The script to build the page/text contents could be written to follow these steps:
For each database you want to back up:
Step 1: Run SHOW TABLES.
Step 2: For each table name returned by the above query, run SHOW CREATE TABLE to get the create statement that you could run on another server to recreate the table, and output the results to the web page. You may have to prepend "DROP TABLE IF EXISTS X;" before each create statement generated by the results of these queries (not in your query input!).
Step 3: For each table name returned from step 1, run a SELECT * query and capture the full results. You will need to apply a bulk transformation to each result row to turn it into an INSERT INTO tblX statement, and output the final transformed results to the web page/text file download.
The final web page/text download would contain all the create statements with "DROP TABLE IF EXISTS" safeguards, plus all the insert statements. Save the output to your own machine as a ".sql" file, and execute it on any backup host as needed.
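Putting those steps together, here is a rough Node.js sketch of such a script, assuming the mysql package and hypothetical connection details (your host's scripting options may dictate PHP or something else instead); it writes the dump to stdout:

var mysql = require('mysql');

var con = mysql.createConnection({
    host: 'localhost',
    user: 'web_user',
    password: 'secret',
    database: 'dbToDump'
});

// Step 1: list the tables.
con.query('SHOW TABLES', function (err, tables) {
    if (err) throw err;
    tables.forEach(function (row) {
        var table = row[Object.keys(row)[0]];
        // Step 2: the CREATE statement, guarded by DROP TABLE IF EXISTS.
        con.query('SHOW CREATE TABLE ??', [table], function (err, res) {
            if (err) throw err;
            console.log('DROP TABLE IF EXISTS ' + con.escapeId(table) + ';');
            console.log(res[0]['Create Table'] + ';');
        });
        // Step 3: one INSERT per row; escape() turns an object into
        // `col` = 'value' pairs suitable for INSERT ... SET.
        con.query('SELECT * FROM ??', [table], function (err, rows) {
            if (err) throw err;
            rows.forEach(function (r) {
                console.log('INSERT INTO ' + con.escapeId(table) + ' SET ' + con.escape(r) + ';');
            });
        });
    });
    con.end(); // end() waits for the queued queries to finish
});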
I'm sorry you have to go through this. Note that preserving the MySQL user accounts you need is something else entirely.
Use / install phpMyAdmin on your web server and click Export. Many web hosts already offer this to you as a pre-configured service, and it's easy to install if you don't already have it (pure PHP): http://www.phpmyadmin.net/
This allows you to export your database(s), as well as perform other otherwise tedious database operations, very quickly and easily, and it works for older versions of PHP < 5.3 (unlike the mysqldump-php offered in another answer here).
I am aware that the question says 'using queries', but I believe the point here is that any means necessary is sought when shell access is not available; that is how I landed on this page, and phpMyAdmin saved me!
I am currently using jugglingdb with Node.js as the ORM. I now need to do some reporting, which involves joining data from several tables using arbitrary SQL, with SUM and GROUP BY too. How would I do this using the jugglingdb framework and get back a list of objects containing data from several columns?
You can access the query function from the mysql package using the adapter client:
var Schema = require('jugglingdb').Schema;
var schema = new Schema('mysql', {
    // your config
});

schema.client.query('your very wild query', function (err, data) {
    // data will be an Array of Objects if no error
});
Actually, this is a direct call to the query function of the node-mysql package.
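For the reporting case in the question, that could look something like this (table and column names are hypothetical):

schema.client.query(
    'SELECT o.customer_id, SUM(o.total) AS revenue ' +
    'FROM orders o JOIN customers c ON c.id = o.customer_id ' +
    'GROUP BY o.customer_id',
    function (err, rows) {
        // rows is a plain array of objects, one per group
    });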