How can I keep my development server database up to date? - mysql

I develop websites for several small clients and I would love to be able to keep my local databases up to date with each client's production servers. (I'm thinking nightly updates) Many of their databases are in the hundreds of megabytes, so I feel like creating and transferring complete dumps every night is excessive.
Here are the harebrained ideas I have come up with so far:
Create a dump on the server and rsync it with the previous night's dump on my local machine. That should only transfer the changed bits of the file, right? (See the sketch after these ideas.)
Create a dump on the server and locally diff it against last night's dump, then transfer only that diff. Maybe it could also send the md5 of the original dump so I could be sure I was applying the diff to the same base file.
Getting ssh access to (most) of these servers is possible, but requires getting my clients to call their hosting providers with that rather technical request, which is something I would rather not have to ask them to do. Bonus points if you can suggest a solution that I could implement via ftp/php.
Edit:
@Jacob's answer prompted some further Googling, which led to this thread: Compare two MySQL databases
This question (in several forms) seems pretty common. Most of the need, and therefore most of the tools, seems to focus around keeping the schema up to date rather than the data. Also, most of the tools seem to be commercial and GUI.
So far the best-looking option seems to be pt-table-sync from the Percona Toolkit, although it looks like it might be a pain to set up on OS X.
I am downloading Navicat to test right now. It runs on OS X but is a commercial GUI program.
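For reference, pt-table-sync is driven from the command line, roughly like this (hosts, database, and credentials here are made up):

    # Dry run: print the SQL needed to make the local copy match the server.
    pt-table-sync --print \
        h=client-host,D=clientdb,u=dbuser,p=secret \
        h=localhost,D=clientdb

    # Once the output looks sane, apply the changes instead.
    pt-table-sync --execute \
        h=client-host,D=clientdb,u=dbuser,p=secret \
        h=localhost,D=clientdb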

I think the most efficient way to set this up is to run rsync each night, doing an incremental backup by syncing yesterday's backup into today's and then running rsync against the server for that day (a sketch is below). In terms of backing up the different websites, I would look at having your local server act as a replication slave of each client's database (their server being the master); replication will keep all the databases up to date and backed up.
http://dev.mysql.com/doc/refman/5.0/en/replication.html
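As a sketch of the rsync half (all paths invented), --link-dest keeps dated snapshots cheaply by hard-linking anything unchanged since yesterday:

    #!/bin/sh
    # Nightly incremental backup: unchanged files are hard-linked against
    # yesterday's snapshot instead of being copied or re-transferred.
    TODAY=$(date +%F)
    YESTERDAY=$(date -d yesterday +%F)   # GNU date; on OS X: date -v-1d +%F
    rsync -a --link-dest=/backups/$YESTERDAY \
        client-host:/var/backups/mysql/ /backups/$TODAY/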

Related

What is an efficient way to maintain a local readonly copy of a live remote MySQL database?

I maintain a server that runs daily cron jobs to aggregate data sources and generate reports, accessible by a private Ruby on Rails application.
One of our data sources is a partial dump of one of our partner's databases. The partner runs an active application and the MySQL DB has hundreds of tables. They have given us read-only access to a relatively underpowered readonly slave of their application DB.
Because of latency issues and performance bottlenecking on their slave DB, we have been maintaining a limited local copy of their DB. We only need about 20 tables for our reports, so I only dump those tables. We also only need the data to a daily granularity, so realtime sync is not a requirement.
For a few months, I had implemented a nightly cron which streamed the dump of the necessary tables into a local production_tmp database. Then, when all tables were imported, I dropped production and renamed production_tmp to production. This was working until the DB grew to over 25GB, and we started running into disk space limitations.
For now, I have removed the redundancy step and am just streaming the dump straight into production on our local server. This feels a bit flimsy to me, and I would like to implement a safer approach. Also, the full dump/load currently takes our server over 2 hours, and I'd like an approach that doesn't take as long. The database will only keep growing, so I'd like to implement something future-proof.
Any suggestions would be appreciated!
I take it you have never heard of, or considered, MySQL Replication?
The idea is that you do your backup & restore once, and then configure the replica to "subscribe" to a continuous stream of changes as they are made on the primary MySQL instance. Any change applied to the primary is applied automatically to the replica within seconds. You don't have to do the backup & restore procedure again, unless the replica gets damaged.
It takes some care to set up and keep working, but it's a much more efficient method of keeping two instances in sync.
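For illustration, pointing a fresh replica at the primary comes down to something like this, assuming the primary already has binary logging enabled and a replication user created (host, credentials, and log coordinates are placeholders):

    # On the replica, after restoring the initial backup from the primary:
    mysql -u root -p <<'SQL'
    CHANGE MASTER TO
        MASTER_HOST='primary.example.com',
        MASTER_USER='repl',
        MASTER_PASSWORD='secret',
        MASTER_LOG_FILE='mysql-bin.000042',  -- coordinates recorded when
        MASTER_LOG_POS=12345;                -- the backup was taken
    START SLAVE;
    SQL

    # Verify both replication threads are running.
    mysql -u root -p -e "SHOW SLAVE STATUS\G" | grep -i running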
@SusannahPotts mentions hot backup and/or incremental backup. You can get both of these features for free, without paying for MySQL Enterprise, by using Percona XtraBackup.
You can also consider using MySQL Transportable Tablespaces.
You'll need filesystem access to run either Percona XtraBackup or MySQL Enterprise Backup. It's not possible to use these physical backup tools for Amazon RDS, for example.
One alternative is to create a replication slave in the same network as the live system, and run Percona XtraBackup on that slave, where you do have filesystem access.
Another option is to stream the binary logs to another host (see https://dev.mysql.com/doc/refman/5.6/en/mysqlbinlog-backup.html) and then transfer them periodically to your local instance and replay them.
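That approach boils down to something like the following (host and file names are examples):

    # Continuously pull binary logs from the live server to local disk,
    # starting from a binlog file that still exists on the server.
    mysqlbinlog --read-from-remote-server \
        --host=primary.example.com --user=repl --password \
        --raw --stop-never mysql-bin.000042

    # Later, replay a fetched log against your local instance.
    mysqlbinlog mysql-bin.000042 | mysql -u root -p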
Each of these solutions has pros and cons. It's hard to recommend which solution is best for you, because you aren't sharing full details about your requirements.
This was working until the DB grew to over 25GB, and we started running into disk space limitations.
Some questions here:
Why don't you just increase the available disk space for your database? 25 GB is not much when it comes to disk space.
Why don't you modify your script to: download table1, import it as table1_tmp, drop table1_prod, rename table1_tmp to table1_prod; rinse and repeat (sketched below).
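A rough sketch of that per-table swap (database, table, and credential names are illustrative); RENAME TABLE is atomic and can move a table between schemas, so the extra disk usage stays at roughly one table at a time instead of a second full copy:

    #!/bin/sh
    # For each needed table: load into the _tmp schema, then swap it
    # into production with one atomic rename.
    for T in table1 table2 table3; do
        mysqldump -h partner-host -u reader -p'secret' partnerdb "$T" \
            | mysql -u root production_tmp
        mysql -u root -e "
            DROP TABLE IF EXISTS production.${T}_old;
            RENAME TABLE production.$T TO production.${T}_old,
                         production_tmp.$T TO production.$T;
            DROP TABLE production.${T}_old;"
    done

(The rename assumes production.$T already exists; the very first run would just rename the _tmp table in.)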
Other than that:
Why don't you ask your partner for a system with enough performance to run your reports on? I'm quite sure they would prefer that to having YOU download sensitive data to your "local site" every day.
Last thought (requires MySQL Enterprise Backup https://www.mysql.de/products/enterprise/backup.html):
Rather than dumping, downloading and importing 25 GB every day:
Create a full backup
Download and import
Use differential or incremental backups from then on.
The next day you download (and import) only the data-delta: https://dev.mysql.com/doc/mysql-enterprise-backup/4.0/en/mysqlbackup.incremental.html
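The cycle with mysqlbackup looks roughly like this (directories and the user are placeholders; see the linked manual for the authoritative options):

    # One-time full backup.
    mysqlbackup --user=backup --password \
        --backup-dir=/backups/full backup

    # Nightly: capture only the changes since the last backup.
    mysqlbackup --user=backup --password \
        --incremental --incremental-base=history:last_backup \
        --incremental-backup-dir=/backups/incr-$(date +%F) backup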

MySQL replication or something similar

I have a question about data backup.
We are developing the backend for a mobile application.
We have a few EC2 servers: one for the api sub-domain and one for the admin sub-domain, plus one RDS MySQL server hosting 2 databases.
But I'm worried about one thing: RDS snapshots are fine for the database structure. If we have errors in the application, or need to revert some changes in the structure, I can just restore from yesterday's snapshot. But what about the content, since it is being added every minute?
Maybe someone can describe a mechanism or tools to prevent our data from being lost. Replication or something like that.
I think I've found the answer - the binary log:
https://dev.mysql.com/doc/refman/5.5/en/binary-log.html
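Two fragments that make this concrete (the endpoint, times, and file names are invented). On RDS you cannot read binlog files from disk, but you can extend their retention and fetch them remotely, and a fetched log can be replayed up to just before a bad statement for point-in-time recovery:

    # RDS-specific: keep binary logs around long enough to fetch them.
    mysql -h mydb.example.rds.amazonaws.com -u admin -p \
        -e "CALL mysql.rds_set_configuration('binlog retention hours', 24);"

    # Replay changes from a fetched binlog, stopping before the mistake.
    mysqlbinlog --start-datetime="2015-06-01 00:00:00" \
        --stop-datetime="2015-06-01 14:59:00" \
        mysql-bin-changelog.000123 \
        | mysql -h mydb.example.rds.amazonaws.com -u admin -p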

Node.js, Express, MySQL - update schema

I have a small app running on a production server. In the next update the db schema will change; this means the production database schema will need to change and there will need to be some data manipulation.
What's the best way to do this? I.e. run a one-off script to complete these tasks when I deploy to the production server?
Stack:
Nodejs
Expressjs
MySQL using node mysql
Codeship
Elasticbeanstalk
Thanks!
"The best way" depends on your circumstances. Is this a rather seldom occurrence, or is it likely to happen on a regular basis? How many production servers are there? Are there other environments, e.g. for integration tests, staging etc.? Do your developers have an own DB environment on their machines? Does your process involve continuous integration?
The more complex your landscape is, the better it is to use solutions like Todd R suggested (Liquibase, Flywaydb).
If you just have one production server and it can be down for maintenance for a few hours, then it could be sufficient to:
Schedule a maintenance downtime with your stakeholders and users
Shutdown the server
Create a backup
Update the database structure and contents as necessary
Deploy software updates
Restart the server
Test the result (manually or automatically)
Inform your stakeholders and users
If anything goes wrong, roll back to the backed-up version of the database and your software.
Having database update scripts is advisable; having tested them at least once is even more so. Creating a backup in advance is essential.
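As an illustration of such an update script (table and column names are invented), the migration can be wrapped in a single one-off file run during the maintenance window; note that MySQL DDL is not transactional, so the backup is the rollback path:

    #!/bin/sh
    # One-off deploy-time migration: back up first, then apply the change.
    mysqldump -u app -p'secret' appdb > pre_migration_backup.sql

    mysql -u app -p'secret' appdb <<'SQL'
    -- Schema change plus the data manipulation that depends on it.
    ALTER TABLE users ADD COLUMN full_name VARCHAR(255);
    UPDATE users SET full_name = CONCAT(first_name, ' ', last_name);
    ALTER TABLE users DROP COLUMN first_name, DROP COLUMN last_name;
    SQL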
http://www.liquibase.org/ or http://flywaydb.org/ - pretty "heavy" for one-time use, but if you'll need to change the schema again in the future, it's probably worth investing the time to learn one of these.

What are the best practices for MySQL backup

We have a PHP application and a MySQL server running on one of our production servers.
The MySQL database is currently 4 GB and is expected to grow to tens or even hundreds of GB.
What I am curious to find out is: what are the best practices for backing up a MySQL database, given that the application must stay live under all circumstances? Which is better: to have a MySQL replication server on which we run the backup scripts, or to run them on the live server? Which is more likely to slow things down? We have the possibility to add additional server(s) if needed. Where should I store the MySQL dumps? Is it advisable to FTP-copy the backup files to a remote server?
What is the best practice for organizing web application backups if the number of server instances is not a problem?
MySQL backup methods are documented in the MySQL documentation.
The ideal backup solution would be MySQL Enterprise Backup. This is a licensed product sold through the Oracle store. It is very fast compared to mysqldump.
MySQL Enterprise Backup: A licensed product that performs hot backups of MySQL databases. It offers the most efficiency and flexibility when backing up InnoDB tables, but can also back up MyISAM and other kinds of tables.
If you are looking for a free solution with MySQL Community Edition, then you can set up another replication server and either run mysqldump to take the backup there or make a raw data backup. While the backup runs on your replication server, your main master database keeps running. Since your data is big or will get bigger, it is recommended to back up the raw data files. This is basically a process of copying the data and log files from disk. Details are explained in the MySQL documentation.
For larger databases, where mysqldump would be impractical or inefficient, you can back up the raw data files instead. Using the raw data files option also means that you can back up the binary and relay logs that will enable you to recreate the slave in the event of a slave failure.
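A sketch of taking the dump on the replica so the master is untouched (user and paths invented); pausing the SQL thread freezes the replica at a consistent point for the duration of the dump:

    #!/bin/sh
    # Run on the replica: pause replication, dump, resume.
    mysql -u root -e "STOP SLAVE SQL_THREAD;"
    mysqldump -u root --all-databases --single-transaction \
        | gzip > /backups/all-$(date +%F).sql.gz
    mysql -u root -e "START SLAVE SQL_THREAD;"

(With InnoDB-only data, --single-transaction alone gives a consistent dump and the pause is optional; for MyISAM tables the pause is what guarantees consistency.)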
Finally, you should copy the backup files to another physical disk on the same server to recover from disk failures, or to another physical server to easily recover from complete server failures.
Replication protects against hardware errors: for example, a hard disk crash.
Backup protects against software errors: for example, data deleted from a table due to human error.
It is definitely good practice to combine both of these technologies by running the backup utility against a replica. This not only reduces the load on the production database, but also covers more recovery scenarios.
In case of a hardware error, you can restore the most up-to-date data from the replica, and in case of data corruption, you can decide from which date to use a backup for recovery. And if both the main server and the replica fail, the backup will still save you.
What is the best way to make backups?
mysqldump is a good solution for small databases. It is a utility for creating logical backups and it is included with MySQL Server. As output, the utility creates a .sql file with the statements needed to recreate the database.
For large databases, it is better to use a physical backup. There are two ways on how to do it.
mysqlbackup is a utility included with MySQL Enterprise Edition. It produces a binary file. Such a backup is created much faster than with mysqldump and puts less load on the server.
xtrabackup, from Percona, is a lot like the MySQL Enterprise backup utility, but it's free. A more detailed comparison can be found here.
How often the backups should be made?
The more often you make backups, the better, but you can't keep many of them, since you will run out of space in the backup storage. There are two ways to deal with this:
Find a compromise between the frequency of backups and the duration of storage.
Use incremental backups. The above utilities support incremental backups, but the management of such backups is more complicated (read more here)
Where the backups should be stored?
Anywhere you prefer, but not in the same place as the MySQL server. Overall, I think cloud storage is a good choice: almost every provider today has a command-line interface.
How to automate a backup?
The process of creating regular backups should be automated, and a person should intervene in it only in case of failure. A good backup process should include the following steps:
Creating a backup copy
Compression/encryption
Uploading to storage
Sending a success/fail notification
Removing old backups from the storage (so that it does not overflow)
The simplest script that implements this can be found, for example, here.
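A bare-bones sketch of those five steps (the bucket, addresses, and retention policy are all made up; a real script needs more error handling):

    #!/bin/bash
    set -e
    STAMP=$(date +%F)
    FILE=/tmp/db-$STAMP.sql.gz
    trap 'echo "backup $STAMP FAILED" \
        | mail -s "mysql backup" admin@example.com' ERR

    # 1-2. Create the backup copy and compress it.
    mysqldump -u backup -p'secret' --all-databases | gzip > "$FILE"

    # 3. Upload to storage (an S3 bucket here, as an example).
    aws s3 cp "$FILE" s3://my-backup-bucket/mysql/

    # 4. Success notification (failures are caught by the trap above).
    echo "backup $STAMP ok" | mail -s "mysql backup" admin@example.com

    # 5. Old backups are easiest to expire with a storage lifecycle rule
    #    rather than deleting them from this script.
    rm -f "$FILE"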
Something else?
Yes: the most important thing is not creating a backup, but being able to restore from it. Therefore, it is best practice to regularly test your recovery scenarios.
Happy backups!
Which is better: to have a MySQL replication server on which we run the backup scripts, or to run them on the live server?
It depends on your db size (and time needed to dump it using mysqldump) and your reliability requirements.
If your db is relatively small and mysqldump dumps it in seconds or a few minutes, then it's OK to just run scheduled backups. For most cases it is sufficient to have a daily backup which runs at a time when your app is mostly idle (at night, when your clients are sleeping). You can use a nice tool, automysqlbackup, for that: it takes care of scheduling and backup rotation; all you need to do is add it as a cron task and set up its config once.
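For example (install paths vary), a single crontab entry is enough; automysqlbackup then handles the daily/weekly/monthly rotation itself:

    # m h dom mon dow  command
    0 3 * * * /usr/local/bin/automysqlbackup /etc/automysqlbackup/myserver.conf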
Setting up a replica is only needed if:
Your backup takes a long time (dozens of minutes or hours) to complete, so you cannot just stop your service for that long.
You cannot afford to lose any history in case the main db crashes. E.g. if you process financial transactions, you may want to ensure that nothing is lost if the master db server dies.
In these cases you may want a replica with backups. Though you must understand that replication adds a new layer of problems: replicas may go out of sync, silently crash (and you will not notice, as the master and your app keep running fine), etc.

How to clone MySQL continuously... instantly on shared hosting

I have a MySQL install on a shared server and have access through phpMyAdmin. I want to make a continuous, real-time clone of that database to a cloud MySQL database (we have created an Nginx-ready MySQL server specially for this database). I want to create a real-time clone of the old one, then update the code to point to the new database...
I think you will have difficulty doing real-time replication of MySQL in a shared server environment. Since you appear to be moving db servers, I would be inclined to take a hot copy of your data and install that on the new db server. At the same time as taking that copy, you should switch on query logging in your application.
Your switch-over would then consist of running the logged queries against the new database (faster than they were logged!) and finally, at the point when all logged queries have been run, switching the configuration of the app so that the new db is used.
Edit: the problem with a hot copy is that data is being written to the db at the same time as it is being copied. That means that the 'last updated' time will be different for each table. On that basis, is it possible in your application to set up a 'last_updated' column for each row? If so, you will be able to tell for each table which logged queries still need to be replayed.
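For the logging half, a hedged sketch (log path invented): the general query log can be switched on at runtime; the replay step would still need the log filtered down to plain SQL before feeding it back in:

    # Switch on the general query log while the hot copy runs.
    mysql -u root -p -e "
        SET GLOBAL log_output = 'FILE';
        SET GLOBAL general_log_file = '/var/log/mysql/switchover.log';
        SET GLOBAL general_log = 'ON';"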
What you're looking for is replication. It has far too many options to cover here in a single post.
http://dev.mysql.com/doc/refman/5.5/en/replication.html
If you're going to do replication over the internet, you'll want to secure it. Your host might allow a virtual local area network, so this doesn't use up your bandwidth resources.
A great set of tools from Percona that you should look at is Maatkit (since merged into Percona Toolkit):
https://launchpad.net/percona-toolkit
Documentation and usage examples
http://www.maatkit.org/doc/
It's good for other tasks too, but it also allows you to replicate a live database quickly.
When you're working with live databases, make sure your backups are up to date.