I want to create a cron job to synchronize two servers on a weekly basis. I have two servers, A and B, and I want to synchronize the files and MySQL databases to the second server every week. I think rsync can be used to synchronize the files, but how can I synchronize the databases?
Thanks,
I have managed to create a script that will restore the database to the remote server.
mysqldump -u username -ppassword database_name | mysql -u username_remote -ppassword_remote --host=remote_server_IP -C database_name >> `date "+%Y-%m-%d"`.log
This is not the best way to do it, and it is not recommended on live servers.
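For the original weekly-sync question, one way to tie the two pieces together is a single script called from a weekly cron entry. This is only a sketch: the hostnames, paths, and credentials (serverB, /var/www/, weekly_sync.sh) are placeholders, not taken from the question.

#!/bin/bash
# Hypothetical weekly sync script; hosts, paths and credentials are placeholders.
set -e

# 1. Sync the website files from server A to server B over ssh.
rsync -az --delete /var/www/ backup_user@serverB:/var/www/

# 2. Pipe a dump of the database straight into the MySQL instance on server B.
mysqldump -u username -ppassword database_name \
    | mysql -u username_remote -ppassword_remote --host=serverB -C database_name

# Example crontab entry (every Sunday at 02:00):
# 0 2 * * 0 /usr/local/bin/weekly_sync.sh >> /var/log/weekly_sync.log 2>&1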
I'm trying to implement a database backup cron (other solutions welcome) at my job, but I have a small problem:
I have a large database that is over 10GB, and the current VM doesn't have the space to store the temporary file that mysql creates.
I know I can use mysqldump with a host parameter, but my question is: when doing that, does the temporary file generated by mysqldump stay on the machine that is running it, or does it stay on the database server?
UPDATE:
I forgot to mention that I'm trying to back up a network of websites, and that some of them are behind a firewall (needing VPN access) while others need server hopping to get to the database server.
You can run a shell script from an archive host where you've exchanged password-less ssh keys with the database server. This lets you transfer the dump directly over ssh, without creating any temp files on the remote database server:
ssh -C myhost.com mysqldump -u my_user --password=bigsecret \
--skip-lock-tables --opt database_name > local_backup_file.sql
Obviously there are ways to secure that password on the command line, but this is a method that could accomplish what you want. One advantage of this method is that it doesn't require the archive host to have access to port 3306 on the remote host.
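One common way to keep the password off the command line is to put the credentials in a MySQL option file on the database host, which mysqldump reads automatically. A minimal sketch, assuming a ~/.my.cnf for the ssh user on myhost.com (the user and password are the placeholders from the command above):

# ~/.my.cnf on the database host, readable only by its owner (chmod 600):
[client]
user=my_user
password=bigsecret

# The dump can then be invoked without the password appearing in the command:
ssh -C myhost.com mysqldump --skip-lock-tables --opt database_name > local_backup_file.sql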
This guy's version is cool because it also compresses the data on the fly before transferring it over the network, and then uncompresses it before loading it into a local database.
ssh me@remoteserver 'mysqldump -u user -psecret production_database | \
gzip -9' | gzip -d | mysql local_database
But that's why my version uses ssh -C, which enables its own compression algorithm and avoids extra gzip pipes.
Depending on the circumstances it might be a better idea to use MySQL replication. Set up MySQL on your backup server and configure it as a slave of your production database (see http://dev.mysql.com/doc/refman/5.7/en/replication-howto.html). You can then dump the slave database easily.
An advantage of this approach is that you're not transferring 10GB each time you want to back up; you're only transferring changes to the database as and when they occur.
You'll need to keep an eye on the replication, though, because if it fails your slave database will become stale.
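One way to keep an eye on it is a small cron check against SHOW SLAVE STATUS. This is only a sketch: the lag threshold and the alert address are made up, and the mysql client is assumed to pick up its credentials from an option file such as ~/.my.cnf.

#!/bin/bash
# Hypothetical replication health check; threshold and e-mail address are placeholders.
STATUS=$(mysql -e 'SHOW SLAVE STATUS\G')
BEHIND=$(echo "$STATUS" | awk '/Seconds_Behind_Master/ {print $2}')
if [ "$BEHIND" = "NULL" ] || [ "$BEHIND" -gt 3600 ]; then
    echo "$STATUS" | mail -s "Replication problem on the backup slave" admin@example.com
fi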
I want to back up a MySQL database every 10 minutes. How can I do it? I don't know how to use a procedure or function for it.
I have used
mysqldump -u root -p mydatabase > mydb_backup.sql
I also want to add the date and time to the end of the backup file name. I should keep only the latest 3 backups on the system and delete the older ones.
How about a backup every second? Well, actually it is "continually". It is called "Replication".
You build another mysql server (machine) as the Slave.
Then copy the data to the Slave, and do CHANGE MASTER on the Slave to have it continually replicate from the Master (which is your current instance of mysql).
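A minimal sketch of that last step on the Slave, assuming hypothetical host, user, password, and binlog coordinates (the real coordinates come from SHOW MASTER STATUS on the Master, or from the dump used to seed the Slave):

# Run on the Slave after loading an initial copy of the data:
mysql <<'SQL'
CHANGE MASTER TO
    MASTER_HOST='master.example.com',
    MASTER_USER='repl',
    MASTER_PASSWORD='repl_password',
    MASTER_LOG_FILE='mysql-bin.000001',
    MASTER_LOG_POS=154;
START SLAVE;
SQL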
AutoMySQLBackup has some great features:
it can back up a single database, multiple databases, or all the databases on the server;
each database is saved in a separate file that can be compressed (with gzip or bzip2);
it rotates the backups so they don't fill your hard drive (by default the daily backups keep only the last 7 days; the weekly backups, if enabled, keep one for each week, etc.).
Or you can find more info here: 10 Ways to Automatically & Manually Backup MySQL Database.
If you are working on Unix or Linux you can use crontab for scheduling.
To add the time and date to the backup file name you can use syntax similar to the following:
mysqldump -u root -p mydatabase > mydb_backup_`date +"%Y%m%d%H%M%S"`.sql
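Putting the pieces of the question together (every 10 minutes, timestamped name, keep only the latest 3), a rough sketch could look like the script below. The directory, script path, and password are placeholders; note that the password is given with -p<password> (no space) so the dump can run unattended from cron.

#!/bin/bash
# Hypothetical 10-minute backup script; directory and credentials are placeholders.
BACKUP_DIR=/home/backups
mysqldump -u root -pMyPassword mydatabase > "$BACKUP_DIR"/mydb_backup_`date +"%Y%m%d%H%M%S"`.sql
# Keep only the 3 most recent backups and delete the rest.
ls -t "$BACKUP_DIR"/mydb_backup_*.sql | tail -n +4 | xargs -r rm -f

# Example crontab entry (every 10 minutes):
# */10 * * * * /usr/local/bin/mydb_backup.sh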
Is there a more efficient method to transfer all the databases from phpMyAdmin, rather than creating database copies and manually importing them on other machines?
Earlier I tried to copy the entire folder and replace the older config.inc.php, but I am unable to see all the databases.
There are a couple of options, which usually depend on your needs, DB size, table engine, and so on:
mysqldump + rsync the dump + restore (if you do not have a good connection between the hosts; see the sketch after this list)
mysqldump db_name | mysql -h remote_host -u user -p db_name
If you can't afford a long downtime, you can set up replication between the two hosts using the innodbbackup utility
If you use MyISAM tables you can shut down MySQL and copy the related files to the destination datadir. This option will not work for InnoDB tables.
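A rough sketch of the first option (dump, compress, rsync, restore), with hypothetical hosts, paths, and credentials:

#!/bin/bash
# Hypothetical dump + rsync + restore; host, path, and credentials are placeholders.
set -e
mysqldump -u root -pMyPassword --all-databases | gzip > /tmp/all_databases.sql.gz
rsync -avz --partial /tmp/all_databases.sql.gz backup_user@other-host:/tmp/

# Then, on the other machine:
# gunzip -c /tmp/all_databases.sql.gz | mysql -u root -p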
How can I create a MySQL job that runs daily to generate a database backup and stores it on the server?
Also, how can I create a second job that does maintenance on the database to keep it running without problems?
Thanks
One possible way is to run the following as a daily cron job.
mysqldump -u <db_user> -p<db_password> <db_name> -h <db_host_if_any> > /home/backups/backup_<timestamp>.sql
Gzip it and store it (just a mechanism to reduce the size):
gzip /home/backups/backup_<timestamp>.sql
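To schedule the backup, and to cover the maintenance part of the question, the crontab entries could look something like this. The script path and times are placeholders; mysqlcheck --optimize is one standard way to do routine table maintenance, though whether it is needed depends on the storage engine and workload.

# Edit the crontab with: crontab -e
# Daily backup at 01:30 (the script wraps the mysqldump + gzip commands above):
30 1 * * * /usr/local/bin/daily_db_backup.sh >> /var/log/db_backup.log 2>&1
# Weekly maintenance on Sundays at 03:00: check and optimize all tables.
0 3 * * 0 mysqlcheck -u <db_user> -p<db_password> --optimize --all-databases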
I have to delete some table data in the prod DB, and for the records that are going to be deleted, a backup of the records should be copied to another local DB. This involves two databases residing on two different servers/instances.
Is it possible to do this via an SQL (MySQL) query?
I would use mysqldump with a WHERE condition to get the records out. Once you have everything saved, you can then delete them from prod. These commands should work from the command line; including the password to avoid the prompt is optional.
mysqldump -u user -pPassword -h hostname1 dbname tablename \
    --where 'field1="delete"' \
    --skip-add-drop-table --no-create-db --no-create-info > deleted.sql
mysql -u user -pPassword -h hostname2 dbname < deleted.sql
mysql -u user -pPassword -h hostname1 dbname -e 'DELETE FROM tablename WHERE field1="delete"'
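Since the export and the delete are separate statements, it may be safer to run them from one script that stops at the first failure, so the DELETE only happens once the copy to the second server has succeeded. A sketch using the same placeholder hosts and credentials as above:

#!/bin/bash
# Abort immediately if the dump or the import fails, so nothing is deleted prematurely.
set -e
mysqldump -u user -pPassword -h hostname1 dbname tablename \
    --where 'field1="delete"' \
    --skip-add-drop-table --no-create-db --no-create-info > deleted.sql
mysql -u user -pPassword -h hostname2 dbname < deleted.sql
mysql -u user -pPassword -h hostname1 dbname -e 'DELETE FROM tablename WHERE field1="delete"'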
I'm trying to do exactly the same thing, copy data from a table to another server, then delete it from the original.
So far I see two options:
copy data to a local database then replicate that database to another server
use Federated storage engine
Both require some serious reconfiguration of our servers, as neither the Federated engine nor binary logging (required for replication) is enabled. This would take time, and it would be best if other solutions could be found.
The process needs to be executed on a daily basis, so it needs to be fully automated.
Perhaps a third option is to automate things with a cron job (sketched after this list):
copy the data to a separate database on the same server
back up that database with mysqldump into a folder that is also accessible from the other server
on the second server, restore the database from the sql dump
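A rough sketch of that cron script, with hypothetical database, table, and path names, and with the client credentials assumed to come from an option file such as ~/.my.cnf:

#!/bin/bash
# Hypothetical daily script for the third option; names and paths are placeholders.
set -e

# 1. Copy the rows to a staging database on the same server.
mysql -e 'INSERT INTO staging_db.tablename SELECT * FROM prod_db.tablename WHERE field1="delete"'

# 2. Dump the staging database into a folder the other server can also reach.
mysqldump staging_db > /shared/backups/staging_db.sql

# 3. On the second server (its own cron job), restore from that dump:
# mysql local_db < /shared/backups/staging_db.sql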