I have the following architecture:
I import data from a SQL database into MongoDB. The importer migrates the data into MongoDB, which then provides the data to a website via an API.
The import can take a couple of minutes, and if it fails I would like to be able to either roll back (it would be awesome to be able to roll back multiple imports) or drop the database/collections holding the uncommitted rows (thinking of it in terms of SQL transactions).
I tried importing everything into a transactions collection that, on success, moved the data into the correct collection. This took way too much time to be performant. I also tried importing into a temp database and then swapping the two, but then I run into problems if someone e.g. registers a new user on the website after the db copy but before the import is done (that user is lost in the swap).
How can I perform an import in a safe way without running into the most basic concurrency problems?
EDIT:
To clarify the setup:
I will run the importer in a cron job, at least once a day. I currently keep a timestamp of the latest synchronization and select everything newer than that from the SQL database. Things will automagically appear in the SQL database over time.
At the end of the import I run a downloader that fetches all the images from URLs stored in the SQL database.
I don't want to start a new sync before the images are downloaded since that could result in strange behaviour.
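For reference, the sync step currently looks roughly like this (a minimal sketch only; the pymongo/mysql-connector drivers, collection names and column names are placeholders for my real setup):

```python
# Rough sketch of the current timestamp-based sync; all names are placeholders.
import datetime
import pymongo
import mysql.connector  # stand-in for whatever SQL driver the real importer uses

mongo = pymongo.MongoClient()["website"]
sql = mysql.connector.connect(host="sql-host", user="importer",
                              password="secret", database="legacy")

meta = mongo["sync_meta"].find_one({"_id": "last_sync"})
last_sync = meta["ts"] if meta else datetime.datetime(1970, 1, 1)

cur = sql.cursor(dictionary=True)
cur.execute("SELECT * FROM products WHERE updated_at > %s", (last_sync,))
rows = cur.fetchall()

if rows:
    # If this fails halfway through, there is no rollback -- which is the problem.
    mongo["products"].insert_many(rows)

mongo["sync_meta"].replace_one(
    {"_id": "last_sync"},
    {"_id": "last_sync", "ts": datetime.datetime.utcnow()},
    upsert=True)
```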
In cases like this, where you need to move data between very different types of databases, you want something reliable, robust and, most importantly, focused on transferring data and doing it well. Node.js is wonderful, but I would highly recommend finding a tool that concentrates solely on the transfer/mapping/etc. and using that, regardless of what language/technology it is built on. There's nothing about Node.js, as much as I love it, that particularly recommends it for this sort of thing (i.e., its best characteristics don't necessarily make it good for this kind of transfer/migration).
Better to find a mature, well-developed library that handles this sort of thing :)
Some tools/resources that turned up in my research:
mongify
SQL to MongoDB Mapping Chart
Would love if people could suggest more in comments :)
Related
I am currently working on an analytics-like application. An AngularJS app communicates with a Spring REST client app, in which the user creates a token (trackingID) and puts a generated script containing this ID on his website to collect information about visitors' actions through another Spring REST tracking app. The tracking app uses MongoDB to collect visitor actions and visitor info, for fast insertion, while the REST client app uses MySQL for user/account details.
My question is how to migrate the Mongo data from the tracking app to MySQL, so that joins become possible and the data can be analyzed easily and quickly with any kind of filter from the AngularJS client app. Should I manually create workers that periodically transfer data from the last sync point up to the present state from Mongo to MySQL, or are there existing tools that can be set up for this transfer?
There is no official library to do this.
But you can use MongoDB's mongoexport feature to export the data in CSV format and mysqlimport to import it into MySQL.
Here are links to the documentation: MySQL import and MongoDB export.
One more method: you can write a program in your favorite language that reads from MongoDB and writes into MySQL.
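A minimal sketch of that last approach in Python, assuming pymongo and mysql-connector-python; the collection, table and field names are hypothetical and would need to match your actual schema:

```python
# Copy MongoDB documents into a MySQL table; all names are hypothetical.
import pymongo
import mysql.connector

mongo = pymongo.MongoClient()["tracking"]
conn = mysql.connector.connect(host="localhost", user="root",
                               password="secret", database="analytics")
cur = conn.cursor()

cur.execute("""CREATE TABLE IF NOT EXISTS visitor_actions (
    id          VARCHAR(24) PRIMARY KEY,
    tracking_id VARCHAR(64),
    action      VARCHAR(64),
    created_at  DATETIME)""")

for doc in mongo["actions"].find():
    # REPLACE INTO makes the copy re-runnable (idempotent on the Mongo _id).
    cur.execute(
        "REPLACE INTO visitor_actions (id, tracking_id, action, created_at) "
        "VALUES (%s, %s, %s, %s)",
        (str(doc["_id"]), doc.get("trackingId"), doc.get("action"), doc.get("createdAt")))

conn.commit()
```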
MySQL 5.7 has a new JSON data type that can be very convenient.
You can create a table in MySQL that receives the JSON documents as is, and then use SQL to query it, or do post-processing to load the data into a structured set of tables.
Check this out: https://dev.mysql.com/doc/refman/5.7/en/json.html
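For illustration, a small Python sketch of that idea, assuming pymongo and mysql-connector-python (the raw_events table and the action field are made-up names):

```python
# Land MongoDB documents as-is in a MySQL 5.7 JSON column, then query with SQL.
import pymongo
import mysql.connector
from bson import json_util  # serializes ObjectId/date values that plain json can't

mongo = pymongo.MongoClient()["tracking"]
conn = mysql.connector.connect(host="localhost", user="root",
                               password="secret", database="analytics")
cur = conn.cursor()

cur.execute("CREATE TABLE IF NOT EXISTS raw_events "
            "(id INT AUTO_INCREMENT PRIMARY KEY, doc JSON)")

for doc in mongo["actions"].find():
    cur.execute("INSERT INTO raw_events (doc) VALUES (%s)", (json_util.dumps(doc),))
conn.commit()

# Querying the JSON directly (MySQL 5.7+):
cur.execute("SELECT JSON_UNQUOTE(JSON_EXTRACT(doc, '$.action')) AS action, COUNT(*) "
            "FROM raw_events GROUP BY action")
print(cur.fetchall())
```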
I realise this question is a few years old - but recently I've had a number of people enquiring whether a tool I developed (https://virtual.blue/apps/json-converter) can do exactly what the OP is asking (convert MongoDB to SQL) so I am guessing it is still something people want. Keep reading to find out why I am honestly not surprised by this.
The short answer to whether the tool can help you is: perhaps. If your existing data relationships are not too complicated, and your database is not enormous, it may well be worth a try.
However, I thought it might help to try and explain what the issues are with this kind of conversion, since all the answers I have seen so far are along the lines of "try tool X" or "first convert to format Y and then you can slurp it into MySQL using utility Z", i.e. with no thought given to whether what you get at the end is going to make sense in terms of data relationships and integrity.
For example, you could just stick your entire database dump in a single field of a single SQL table (ok space limitations might prevent this in reality, but hopefully you get my point). Then your database would be "in MySQL format", but it would be absolutely no use to anyone.
The point is, what you actually want is a fully defined database model, correctly encapsulating all of the intrinsic data relationships. ("Database normalization" as it is known.) If your conversion process gets those relationships wrong, then you have a broken model, and any queries you try to run over it are likely to return nonsense. Unfortunately there is no magic tool that is just going to "know" the best way to represent your data in MySQL, and closing your eyes and shovelling it into a bunch of random tools is unlikely to miraculously get you what you want.
And herein lies the fundamental problem with the "NoSQL" philosophy (fad). They sold people the bogus notion of "non-relational data". My first thought when I heard this was, "How does that work? Surely all data is relational?" By the looks of things we are steadily getting more and more evidence that my instincts were right. ("NoSQL? Why stop there? I go with 'NoDatabase'. It returns no results at all, but it sure is fast!")
The NoSQL madness throws several important fundamental engineering principles to the wind. We shouted "don't hard code!", "DRY!" (Don't Repeat Yourself) because these actions infuse inflexibility into systems. Traditional wisdom makes precisely the same flexibility argument when it advises "create a fully described model with all the data relationships represented". Then you can execute any arbitrary query over it and expect meaningful results. "Yes but there are a whole bunch of queries we are never going to need to run," says the NoSQL proponent. But surely we learnt our lesson on things we are "never going to need to do"? ("I hard code liberally, because I know I am never going to want to change my code." Hmm...)
The arguments about speed are largely moot. Say it turns out you are frequently doing a complex 9 table join, with unsurprisingly sluggish performance. So create an index. Cache it. Swap some disk space for speed. The NoSQL philosophy is to swap data integrity for speed, which makes no sense at all.
When you generate your fast lookup index (cache/table/map/whatever) what you are really doing is creating a view over your model. If your model changes, you can readily update your view. Going from a model to a view is easy - it's a one to many operation and you are on the right side of entropy.
However, when you went with MongoDB you effectively decided to create views without bothering to describe your fundamental model. Now you discover there are queries you want to run, but can't - and so it's no wonder you want to move over to SQL and actually have your data modelled correctly. The problem is you now want to go from a view to a model. Now you're on the wrong side of entropy. Your view is a lossy representation of the model's fundamental relationships. You can't expect a tool to "translate" your database, because you are asking it to insert new relationships which were not originally defined. These are real world relationships that are not machine-guessable. The tool cannot know what relationships were intended.
In short the only way you can do this reliably is to get your hands dirty. An intelligent human, with complete understanding of the system you are modelling needs to sit down and carefully come up with (possibly a substantial amount of) code which effectively picks through the data and resolves all of the insufficiently represented data relationships. If your data is complex then it's going to be a headache and there is no way to cheat.
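To give a feel for what that hand-written code tends to look like, here is a deliberately tiny Python sketch (a hypothetical "orders" collection with embedded "items"; pymongo and mysql-connector-python assumed, and the target tables assumed to exist). The point is that the foreign key is something you decide on and create, not something a tool can infer:

```python
# Resolve an embedded array into a child table with an explicit foreign key.
# The collection, tables and fields are purely illustrative.
import pymongo
import mysql.connector

mongo = pymongo.MongoClient()["shop"]
conn = mysql.connector.connect(host="localhost", user="root",
                               password="secret", database="shop_sql")
cur = conn.cursor()

for order in mongo["orders"].find():
    cur.execute("INSERT INTO orders (mongo_id, customer_email, placed_at) "
                "VALUES (%s, %s, %s)",
                (str(order["_id"]), order["customer"]["email"], order["placedAt"]))
    order_pk = cur.lastrowid
    # The embedded array becomes rows in a child table; the relationship that was
    # only implicit in the document is now written down as a foreign key.
    for item in order.get("items", []):
        cur.execute("INSERT INTO order_items (order_id, sku, qty) VALUES (%s, %s, %s)",
                    (order_pk, item["sku"], item["qty"]))

conn.commit()
```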
If your data is still relatively simple then I would suggest making the conversion as soon as possible, before it becomes difficult. In this case my tool (https://virtual.blue/apps/json-converter) may be able to help.
(They really should have asked a Physicist before they came up with all this nonsense...!)
You can download a trial version of Studio 3T for MongoDB and export your database to SQL (or JSON) directly.
I'm working on a group project where we each have a MySQL database running on a local machine. The table mainly holds filenames and stats used for image processing. Each of us runs some processing, which updates the local database with results.
I want to know what the best way is to update everyone else's database, once someone has changed theirs.
My idea is to perform a mysqldump after each processing run, and let that file be tracked by git (which we use religiously). I've written a bunch of Python utils for the database, and it would be simple enough to read this dump into the database when we detect that the db is behind. I don't really want to do this though, lest it clog up our git repo with unnecessary 10-50 MB files on every commit.
Does anyone know a better way to do this?
*I'll also note that we are aerospace students. I have some DB experience, but it only comes out of need. We're busy and I'm not looking to become an IT networking guru. I just want to keep it hands-off for the others, since they are DB noobs and get the glazed-over look of fear whenever I tell them to do anything with the database. I've made it hands-off for them thus far.
You might want to consider following the Rails-style database migration concept, whereby as you are developing you provide roll-forward and roll-back SQL statements that work as patches, allowing you to roll your database to any particular revision state that is required.
Of course, this is typically meant for dealing with schema changes only (i.e. you don't worry about revisioning data that might be dynamically populated into tables). For configuration tables or similar tables that are basically static in content, you can certainly add migrations as well.
A Google search for "rails migrations for python" turned up a number of results, including the following tool:
http://pypi.python.org/pypi/simple-db-migrate
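To make the concept concrete, here is a rough Python sketch of such a migration runner, not how simple-db-migrate itself works. It assumes numbered files like migrations/001_add_results_table.up.sql (with matching .down.sql files for rollback) and a schema_version bookkeeping table, all of which are made-up conventions:

```python
# Apply any roll-forward migrations that are newer than the recorded version.
import glob
import os
import mysql.connector

conn = mysql.connector.connect(host="localhost", user="root",
                               password="secret", database="imaging")
cur = conn.cursor()

cur.execute("CREATE TABLE IF NOT EXISTS schema_version (version INT PRIMARY KEY)")
cur.execute("SELECT COALESCE(MAX(version), 0) FROM schema_version")
current = cur.fetchone()[0]

for path in sorted(glob.glob("migrations/*.up.sql")):
    version = int(os.path.basename(path).split("_")[0])
    if version <= current:
        continue
    with open(path) as f:
        # Naive statement splitting; fine for simple DDL patches.
        for stmt in f.read().split(";"):
            if stmt.strip():
                cur.execute(stmt)
    cur.execute("INSERT INTO schema_version (version) VALUES (%s)", (version,))
    conn.commit()
```

Rolling back is the mirror image: apply the matching .down.sql files in reverse order down to the target version.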
I would suggest creating a DEV MySQL server on any shared hosting (no DB experience is required).
Allow remote access to this server (again, no experience required; everything can be done through the control panel).
Then you and your group of developers will have access to the database at any time, from any place and from any device (as long as you have an internet connection).
Does it make sense to use a combination of MySQL and MongoDB? What I'm basically trying to do is use MySQL as a "raw data backup" of sorts, where all the data is stored but never read from.
The data is stored in MongoDB at the same time, and reads happen only from MongoDB, because there I don't have to do joins and the like.
For example, assume I'm building Netflix:
In MySQL I have tables for Comments and Movies. When a comment is made, I just add a row to the MySQL table, and in MongoDB I update the movie document to hold this new comment.
Then, when I want to get movies and comments, I just grab the document from MongoDB.
My main concern is how "new" MongoDB is compared to MySQL. If something unexpected happens in Mongo, we have the MySQL backup and can quickly fall the app back to MySQL and memcached.
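In code, the write path I have in mind is roughly this (a sketch only, using pymongo and mysql-connector-python; table and field names are placeholders):

```python
# Dual write: MySQL keeps the raw comment row, MongoDB keeps the denormalized
# movie document that the app actually reads. Names are placeholders.
import pymongo
import mysql.connector

mongo = pymongo.MongoClient()["videosite"]
sql = mysql.connector.connect(host="localhost", user="app",
                              password="secret", database="rawstore")

def add_comment(movie_id, user_id, text):
    cur = sql.cursor()
    cur.execute("INSERT INTO comments (movie_id, user_id, body) VALUES (%s, %s, %s)",
                (movie_id, user_id, text))
    sql.commit()
    # Second, independent write: if this one fails, the MySQL row already exists,
    # which is exactly the consistency question raised in the answer below.
    mongo["movies"].update_one(
        {"_id": movie_id},
        {"$push": {"comments": {"user_id": user_id, "body": text}}})
```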
On paper it may sound like a good idea, but there are a lot of things you will have to take into account. This will make your application way more complex than you may think. I'll give you some examples.
Two different systems
You'll be dealing with two different systems, each with its own behavior. These different behaviors will make it quite hard to keep everything synchronized.
What will happen when a write in MongoDB fails, but succeeds in MySQL?
Or the other way around, when a column constraint in MySQL is violated, for example?
What if a deadlock occurs in MySQL?
What if your schema changes? One migration is painful, but you'll have to do two migrations.
You'd have to deal with some of these scenarios in your application code. Which brings me to the next point.
Two data access layers
Your application needs to interact with two external systems, so you'll need to write two data access layers.
These layers both have to be tested.
Both have to be maintained.
The rest of your application needs to communicate with both layers.
Abstracting away both layers will introduce another layer, which will further increase complexity.
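As a sketch of that extra layer (MySQLMovieStore and MongoMovieStore here are hypothetical wrappers around the two drivers):

```python
# One facade in front of both stores; every failure mode still has to be decided here.
class MovieRepository:
    def __init__(self, mysql_store, mongo_store):
        self._mysql = mysql_store
        self._mongo = mongo_store

    def add_comment(self, movie_id, user_id, text):
        self._mysql.insert_comment(movie_id, user_id, text)
        try:
            self._mongo.push_comment(movie_id, user_id, text)
        except Exception:
            # Compensation strategy needed: retry queue, delete the MySQL row,
            # or flag the movie for re-sync. None of these come for free.
            raise

    def get_movie(self, movie_id):
        # Reads always come from MongoDB, per the design in the question.
        return self._mongo.find_movie(movie_id)
```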
Chance of cascading failure
Should MongoDB fail, the application will fall back to MySQL and memcached. But at this point memcached will be empty. So each request right after MongoDB fails will hit the database. If you have a high-traffic site, this can easily take down MySQL as well.
Word of advice
Identify all possible ways in which you think 'something unexpected' can happen with MongoDB. Then use the most simple solution for each individual case. For example, if it's data loss you're worried about, use replication. If it's data corruption, use delayed replication.
I'd like to migrate an existing MySQL database (around 40 tables, 400 MB of data) to Postgres before it gets bigger. I searched the web and tried some migration scripts (some of them can be found here). None of them works seamlessly; if it were just a few glitches I had to fix manually, that wouldn't be a problem, but the resulting dumps don't look like valid PostgreSQL at all.
Did anybody succeed in migrating a production database without spending a full workday on it? Is there an easy solution to this problem?
Note: I also would consider commercial products (as long as pricing is still feasible).
Despite SQL being a standard, it isn't full-featured enough for server software to get by without implementing its own extensions. The translation from MySQL to PostgreSQL is not simple unless your schema is trivial. Automated translation scripts will only get you so far.
The very best approach is to hand-translate the schema, and then write your own transfer scripts for the data itself. You should also write verification scripts to make sure the schema and data come over correctly.
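The verification scripts don't have to be elaborate. A minimal Python sketch, assuming mysql-connector-python and psycopg2, that just compares row counts table by table (the table list and credentials are placeholders):

```python
# Compare row counts between the old MySQL database and the new Postgres one.
import mysql.connector
import psycopg2

mysql_conn = mysql.connector.connect(host="localhost", user="root",
                                     password="secret", database="legacy")
pg_conn = psycopg2.connect("dbname=legacy_pg user=postgres")

tables = ["users", "orders", "order_items"]  # ideally read from information_schema

for table in tables:
    mcur = mysql_conn.cursor()
    mcur.execute(f"SELECT COUNT(*) FROM {table}")  # table names can't be parameterized
    pcur = pg_conn.cursor()
    pcur.execute(f"SELECT COUNT(*) FROM {table}")
    mysql_count, pg_count = mcur.fetchone()[0], pcur.fetchone()[0]
    status = "OK" if mysql_count == pg_count else "MISMATCH"
    print(f"{table}: mysql={mysql_count} postgres={pg_count} {status}")
```

Checksums over sorted primary keys, or spot checks on a sample of rows, are the natural next step once the counts line up.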
This isn't a cop-out answer. If your database is important enough to migrate, then it's important enough to spend some time on yourself. In the end, you would spend at least as much time figuring out the quirks and subtle messes an automated migration script causes as you would migrating the data yourself. And by doing it yourself you get the chance to take advantage of features in PostgreSQL that aren't present in MySQL, as well as the chance to make the kinds of improvements that only come from doing something a second time.
Bite the bullet and do it.
I'm rewriting a PHP+MySQL site that averages 40-50 hits a day using Django.
Is SQLite a suitable database to use here? Are there any advantages or disadvantages between the two?
I'm just using the db to store a blog and the users who can edit it. I am using full-text search for the blog search, but there are no complex joins anywhere.
40-50 hits per day is very small, and SQLite can be used without any problem.
MySQL might be better once you get more hits, because it handles multiple connections better (locking isn't the same in MySQL and SQLite).
The major problem with SQLite is concurrency. If you expect 40-50 hits a day, that's probably a non-issue. However, if that load increases you should be ready to migrate to a database daemon such as MySQL - it's better to abstract your database-specific code now to make such a switch as painless as possible.
The performance section of the SQLite wiki might be of use to you.
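With Django specifically, that abstraction mostly means sticking to the ORM; the later switch is then largely a settings change. A sketch of the relevant settings.py block (database names and credentials are placeholders):

```python
# settings.py -- start on SQLite; BASE_DIR comes from the generated settings file.
import os

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.sqlite3",
        "NAME": os.path.join(BASE_DIR, "blog.sqlite3"),
    }
}

# If traffic grows, only this block needs to change to move to MySQL:
# DATABASES = {
#     "default": {
#         "ENGINE": "django.db.backends.mysql",
#         "NAME": "blog",
#         "USER": "blog",
#         "PASSWORD": "secret",
#         "HOST": "127.0.0.1",
#     }
# }
```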
Since you're already using an adequate database, I don't see a reason to migrate to a smaller one.
While sqlite might be perfectly adequate, too - changing to a less-capable platform from a more-capable one doesn't seem the best choice :)
SQLite will work just fine for you. It sounds as though you're largely using the database as read-only (with occasional writes to update the content). SQLite excels at this kind of access pattern. The only place where SQLite chokes is when you have a lot of writes to a database, because once a process attempts to write, the file is locked until the write is complete. Also, if you do lots of writes (like updating rows in a loop) you should look into putting all those writes into a transaction - while the file is locked once the transaction hits a write query, the updates themselves take much less time because they're written to the file at once rather than individually.
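A minimal sketch of that batching idea with Python's sqlite3 module (the table and data are placeholders); with the module's default transaction handling, nothing hits the file until commit(), so the updates go out in one go:

```python
# Batch many updates into a single transaction instead of one write per statement.
import sqlite3

conn = sqlite3.connect("blog.sqlite3")
cur = conn.cursor()
cur.execute("CREATE TABLE IF NOT EXISTS posts (id INTEGER PRIMARY KEY, title TEXT)")

updates = [("hello", 1), ("world", 2)]  # placeholder data

# One implicit transaction for all of these; the file is locked once, written once.
cur.executemany("UPDATE posts SET title = ? WHERE id = ?", updates)
conn.commit()
```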
SQLite would be fine for this level of traffic. It actually performs quite well, the only thing that it is lacking is caching of data and queries because it needs to be spun up every time your page is accessed. That said, it is still very quick and it shouldn't be too hard to migrate to MySQL later if need be.