I am trying to create an automatic process which will synchronize the databases of two servers. One site is live, and I need the testing environment to sync up with the live site every so often (I am thinking a cron job for that).
How can I implement this?
You can keep the systems up to date with MySQL replication:
http://dev.mysql.com/doc/refman/5.0/en/replication.html
You are basically looking at a Master-Slave configuration.
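A very rough sketch of the moving parts involved (the server IDs, host name, and replication credentials below are placeholders, and the log file/position values should come from SHOW MASTER STATUS on the master; the manual above has the full procedure):

# my.cnf on the master (live server)
[mysqld]
server-id = 1
log-bin   = mysql-bin

# my.cnf on the slave (test server)
[mysqld]
server-id = 2

# on the slave, point it at the master and start replicating
CHANGE MASTER TO MASTER_HOST='live.example.com', MASTER_USER='repl',
    MASTER_PASSWORD='secret', MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=107;
START SLAVE;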
If you'd like something a little simpler, you can use mysqldump to dump your database, then ssh to ship it over the wire, and mysql to load it in again.
mysqldump mydatabase | ssh the_test_server "mysql mytestdatabase"
You will have to purge mytestdatabase before doing the transfer, but if you are looking for a single command to 'synchronize' the database, this will do it.
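For the cron part, a minimal sketch of a crontab entry on the live server, assuming passwordless SSH to the test host and MySQL credentials supplied via ~/.my.cnf on both machines (mysqldump includes DROP TABLE statements by default, so existing tables on the test side get recreated, but tables that were deleted on the live side won't be purged):

# re-sync the test database from the live one every night at 02:00
0 2 * * * mysqldump mydatabase | ssh the_test_server "mysql mytestdatabase"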
I'd like to download a copy of a MySQL database (InnoDB) to use it locally. Since the database is growing rapidly, I want to find out a way to speed up this process and save bandwidth.
I'm using this command to copy the database to my local computer (Ubuntu):
ssh myserver 'mysqldump mydatabase --add-drop-database | gzip' | zcat | mysql mydatabase
I've added multiple --ignore-table options to skip tables that don't need to be up to date.
I've already got an (outdated) version of the database, so there is no need to download all tables (some tables hardly change). I'm thinking of computing a checksum for each table and adding the unchanged tables to --ignore-table.
Since I can't find many examples of using checksums with mysqldump, either I'm brilliant (not very likely) or there is an even better way to download (or better: one-way sync) the database in a smart way.
Database replication is not what I'm looking for, since that requires a binary log. That's a bit overkill.
What's the best way to one-way sync a database, ignoring tables that haven't been changed?
One solution could be using the mysqldump --tab option (a tab-delimited dump):
mkdir /tmp/dbdump
chmod 777 /tmp/dbdump
mysqldump --user=xxx --password=xxx --skip-dump-date --tab=/tmp/dbdump database
Then use rsync with --checksum to send only the changed files over to the destination. Run the create scripts, then load the data using LOAD DATA INFILE.
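A rough sketch of that flow (testserver, the xxx credentials, and the database name are placeholders, and it assumes the destination server's secure_file_priv setting permits loading files from /tmp/dbdump):

# send only the files whose contents actually changed (rsync compares checksums, not timestamps)
rsync -av --checksum /tmp/dbdump/ testserver:/tmp/dbdump/

# then, on the destination server, recreate each table and reload its data
for sqlfile in /tmp/dbdump/*.sql; do
    table=$(basename "$sqlfile" .sql)
    mysql --user=xxx --password=xxx database < "$sqlfile"
    mysql --user=xxx --password=xxx database \
        -e "LOAD DATA INFILE '/tmp/dbdump/$table.txt' INTO TABLE $table"
done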
Quick help here...
I have these 2 MySQL instances... We are not going to pay for this service anymore, so they will be gone... How can I obtain a backup file that I can keep for the future?
I do not have much experience with MySQL, and all the threads talk about mysqldump, which I don't know is valid for this case. I also see the option to take a snapshot, but I want a file I can save (like a .bak).
Thanks in advance!
You have several choices:
You can replicate your MySQL instances to MySQL servers running outside AWS. This is a bit of a pain, but will result in a running instance. http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/MySQL.Procedural.Exporting.NonRDSRepl.html
You can use the commandline mysqldump --all-databases to generate a (probably very large) .sql file from each database server instance (see "Export and Import all MySQL databases at one time").
You can use the commandline mysqldump to export one database at a time. This is what I would do (a rough example follows below).
You can use a gui MySQL client -- like HeidiSQL -- in place of the commandline to export your databases one at a time. This might be easier.
You don't need to, and should not, export the mysql, information_schema, or performance_schema databases; these contain system information and will already exist on another server.
In order to connect from outside AWS, you'll have to set the AWS protections appropriately. And you'll have to know the internet address, username, and password (and maybe port) of the MySQL server at AWS.
Then you can download HeidiSQL (if you're on windows) or some appropriate client software, connect to your database instance, and export your databases at your leisure.
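As a rough example of the one-database-at-a-time approach (the endpoint, user, and database name below are placeholders; substitute your own RDS endpoint and credentials):

# dump one database from the RDS endpoint into a local .sql file you can keep
mysqldump -h mydbinstance.abc123.us-east-1.rds.amazonaws.com -P 3306 -u admin -p \
    --single-transaction --routines --triggers mydatabase > mydatabase.sql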
I have a MySQL database on a remote server that gets updated once in a while, and what I normally do to transfer it to my local machine is:
mysqldump -u root -padmin databasename > backup.sql
Then in MySQL Workbench on my local machine I just delete the old database and import this new one.
I usually did this because updates came in once a month, so I wasn't bothered. But now the data has gotten pretty big and I can't afford to do this anymore. I figured there must be a better approach, so I looked into incremental backups, but I don't quite get it. In my situation, how would I use incremental backups, so that on the remote server I only back up the latest changes in the remote database and then import them into my local database?
As long as you are using InnoDB, the Percona guys can help you with xtrabackup: http://www.percona.com/software/percona-xtrabackup
It has incremental options and also restores quickly. I'm not sure it supports MyISAM (since that engine is not ACID).
We use it at work to great effect.
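A rough sketch of the incremental cycle (paths and credentials are placeholders, and this assumes the xtrabackup 2.4-style command line, run on the database server itself):

# one-time full base backup
xtrabackup --backup --user=backupuser --password=xxx --target-dir=/backups/base
# later: an incremental backup containing only the pages changed since the base
xtrabackup --backup --user=backupuser --password=xxx \
    --target-dir=/backups/inc1 --incremental-basedir=/backups/base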
I'm considering switching to a new hosting provider, and I would like to transfer my database for my production site to the new hosting provider. I'm using mysql. What are the steps I would need to take to transfer my db?
Appreciate any help.
Thank you,
Brian
Assuming a relatively simple app (PHP, something like that), one app server, one db server, then briefly:
On the new host, create the database and the same accounts/privileges that you're using on the old host's database (see the sketch after these steps).
Copy the app code over.
"Lock" your app on the old host so no data changes can occur (if this is feasible.)
http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html is your friend. Dump schema and data, and capture it to a file. Here is the command I used to dump the database exampledb that has the login of example:
mysqldump --add-drop-table -u example -p exampledb > output.sql
(The --add-drop-table makes it easier to re-run the script if you need to later. But it does create a script that will destroy your database, so be careful how you run it.)
Now copy (maybe using scp) the output.sql file to your new host.
On the new host, run mysql to build the database with the schema and data from the old host. I use a command like this one, assuming user "example" and a database name of "exampledb":
mysql -u example -p exampledb < output.sql
(Be careful to run this ONLY ON THE NEW HOST. It will obliterate your database.)
The nice thing is, you've got a blank slate of a new machine. You can keep trying different things on that machine without breaking anything.
Turn on the app on new host. Test. If it's been a while, you may need to make changes to get your code up to a newer version of the language. (I did in my case. But maybe you were better about keeping your code up to date.)
Shut down app on old host.
Point DNS/router/whatever to new host.
What'd I miss? (Just went through this moving my silly website to a new machine.)
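Putting the steps above together, a rough end-to-end sketch (the host names and password are placeholders; the user and database names are the examples from the steps):

# on the new host: create the database and a matching account
mysql -u root -p -e "CREATE DATABASE exampledb;
    CREATE USER 'example'@'localhost' IDENTIFIED BY 'secret';
    GRANT ALL PRIVILEGES ON exampledb.* TO 'example'@'localhost';"

# on the old host: dump schema and data, then copy the file over
mysqldump --add-drop-table -u example -p exampledb > output.sql
scp output.sql newhost:~/

# on the new host: load the dump (only ever run this on the new host)
mysql -u example -p exampledb < output.sql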
It's pretty simple, especially for just a single database.
mysqldump followed by a mysql import.
Generating the .sql file is all you need, because it will contain all of the table information, including the CREATE statements for indexes; when you then run through all of the inserts, the indexes get rebuilt as well.
If you struggle with command lines, may I suggest using Navicat Lite. It is free, and is the best GUI that I've seen on the market.
I have a test database on a separate remote server from my production DB. Every once in a while, I want to test things by uploading a copy of my production DB to my testing DB. Unfortunately, the backup file is now half a gig and I'm having trouble transferring it via FTP or SSH. Is there an easy way that I can use the mysql restore command between servers? Also, is there another way to move over large files that I'm not considering? Half a gig doesn't seem that big; I would imagine that people run into this issue frequently.
Thanks!
Are the servers accessible to each other?
If so, you can just pipe the data from one db to another without using a file.
ex: mysqldump [options] | mysql -h test -u username -ppasswd
0. Please consider whether you really need production data (especially if it contains some sensitive information).
1. The simplest solution is to compress the backup on the source server (usually gzip), transfer it across the wire, then decompress it on the target server (see the sketch after this list):
http://www.techiecorner.com/44/how-to-backup-mysql-database-in-command-line-with-compression/
2. If you don't need an exact replica of the production data (e.g. you don't need some application logs, errors, or other technical stuff), you can consider creating a backup, restoring it on the source server under a different DB name, deleting all the unnecessary data, and THEN taking the backup that you will actually use.
3. Restore a full backup once on a reference server in your Dev environment and then copy only the transaction logs (to replay them on the reference server). Depending on the usage pattern, transaction logs may take a lot less space than the whole database.
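For the compressed-transfer route in point 1, a minimal sketch (database names, users, and the host are placeholders):

# on the source server: dump and compress in one step
mysqldump -u backupuser -p proddb | gzip > proddb.sql.gz
# copy the compressed file across
scp proddb.sql.gz testserver:~/
# on the target server: decompress and load
gunzip < proddb.sql.gz | mysql -u testuser -p testdb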
MySQL allows you to connect to a remote database server to run SQL commands. Using this feature, we can pipe the output from mysqldump and ask mysql to connect to the remote database server to populate the new database.
mysqldump -u root -prootpass SalesDb | mysql --host=185.32.31.96 -C SalesDb
Use an efficient transfer method, rather than ftp.
If you have a dump file created by mysqldump on the test db server, and you update it every so often, I think you could save time (if not disk space) by using rsync to transfer it. Rsync will use ssh and compress the data for the transfer, but I think both the local and remote files should/could be uncompressed.
Rsync will only transfer the changed portion of a file.
It may take some time to decide what, precisely, has changed in a dump file, but the transfer should be quick.
I must admit though, I've never done it with a half-gigabyte dump file.
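A minimal sketch, assuming the dump is kept uncompressed at a placeholder path on a placeholder host:

# re-fetch the dump; rsync only sends the parts of the file that changed
# (-z compresses the transfer over the wire, --partial keeps interrupted transfers resumable)
rsync -avz --partial prodserver:/backups/production_dump.sql /var/backups/production_dump.sql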