I have a MySQL database on a remote server that gets updated once in a while, and what I normally do to transfer it to my local machine is
mysqldump -u root -padmin databasename > backup.sql
Then, in MySQL Workbench on my local machine, I just delete the old database and import this new one.
I used to do this because updates only came in about once a month, so it wasn't a bother. But now the data has gotten pretty big and I can't afford to do this anymore. I figured there must be a better approach, so I looked into incremental backups, but I don't quite get it. In my situation, how would I use incremental backups, so that on the remote server I only back up the latest changes to the remote database and then import those into my local database?
As long as you are using InnoDB, the Percona guys can help you with xtrabackup: http://www.percona.com/software/percona-xtrabackup
It has incremental options and also restores quickly. I'm not sure it supports MyISAM (since that engine is not ACID).
We use it at work to great effect.
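For the workflow in the question, a minimal sketch of an incremental cycle with xtrabackup might look like this (the paths, user, and password are assumptions; check the Percona docs for the exact syntax of your version):
# Full base backup on the remote server (run once)
xtrabackup --backup --user=root --password=admin --target-dir=/backups/base
# Later runs: back up only the pages changed since the base backup
xtrabackup --backup --user=root --password=admin --target-dir=/backups/inc1 --incremental-basedir=/backups/base
# To restore: prepare the base with --apply-log-only, then merge the increment into it
xtrabackup --prepare --apply-log-only --target-dir=/backups/base
xtrabackup --prepare --target-dir=/backups/base --incremental-dir=/backups/inc1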
Quick help here...
I have these 2 MySQL instances... We are not going to pay for this service anymore, so they will be gone... How can I obtain a backup file that I can keep for the future?
I do not have much experience with MySQL, and all the threads talk about mysqldump, which I don't know if it's valid for this case. I also see the option to take a snapshot, but I want a file I can save (like a .bak).
Thanks in advance!
You have several choices:
You can replicate your MySQL instances to MySQL servers running outside AWS. This is a bit of a pain, but will result in a running instance. http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/MySQL.Procedural.Exporting.NonRDSRepl.html
You can use the command-line mysqldump --all-databases to generate a (probably very large) .sql file from each database server instance. See: Export and Import all MySQL databases at one time
You can use the command-line mysqldump to export one database at a time. This is what I would do (see the example command after this list).
You can use a GUI MySQL client -- like HeidiSQL -- in place of the command line to export your databases one at a time. This might be easier.
You don't need to, and should not, export the mysql, information_schema, or performance_schema databases; these contain system information and will already exist on another server.
In order to connect from outside AWS, you'll have to configure the AWS access controls (e.g., the RDS security group) appropriately. And you'll have to know the hostname, username, and password (and maybe the port) of the MySQL server at AWS.
Then you can download HeidiSQL (if you're on windows) or some appropriate client software, connect to your database instance, and export your databases at your leisure.
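For example, dumping one database from your RDS instance to a local .sql file might look like this (the endpoint, user, and database name are placeholders for your own values; --single-transaction assumes InnoDB tables):
mysqldump -h myinstance.abcdefghij.us-east-1.rds.amazonaws.com -P 3306 -u myuser -p --single-transaction mydatabase > mydatabase.sql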
For some technical reasons, I have to migrate our current server, which runs CentOS, to a new server running Ubuntu.
I MUST keep the SQL DB as it is now, same version of MySQL, etc.
I have tried several dump scripts and methods, but it has been impossible to import the dumps into the newer version of MySQL.
So I'm thinking of copying the /var/lib/mysql folder and the other MySQL folders to the new server.
What's your opinion about this? Will it work?
I was also thinking of booting into the recovery console of the old server and doing an rsync to the new server; that way I would copy the entire system over.
But that's a bit heavy in my opinion, and I don't know Unix well enough to remount disks and perform all the tasks that would make the copy work.
There are actually several ways to do this. The first, and by far the best, is what the commenters have said: use mysqldump to create a .sql file, then import it onto the other machine. If space is an issue on one of the machines, you can pipe the dump straight into the new server, like so: mysqldump {YOUR OPTIONS HERE} | mysql -h {NEWSERVER} -u root -p
You should use the above method if at all possible; it is by far the most reliable way to do cross-version imports. Just fix the errors you see. You may need to rewrite triggers and stored procedures if the syntax differs across versions.
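As a rough example of that pipe (host name and credentials are placeholders, and --single-transaction assumes InnoDB; note that passwords given on the command line are visible to other users on the machine):
mysqldump -u root -p'oldpass' --single-transaction --routines --triggers --all-databases | mysql -h newserver.example.com -u root -p'newpass'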
If you understand the danger and are OK with possibly losing your data, you can try the following xtrabackup method:
Install xtrabackup on the older server (http://www.percona.com/software/percona-xtrabackup)
Get a full and consistent backup from xtrabackup (read the docs)
Shut down MySQL on the new server
Copy the backup from xtrabackup to that new server
Start up MySQL again; it may take a while as it upgrades the tables
I have done this before across MySQL server versions, BUT it is not reliable, and you could quite possibly end up with unreliable or corrupted data. Do it at your own risk.
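A rough outline of those steps (the paths are assumptions, and the exact commands vary between xtrabackup versions; again, at your own risk):
# On the old server: take and prepare a full backup
xtrabackup --backup --target-dir=/backups/full
xtrabackup --prepare --target-dir=/backups/full
# Copy it to the new server, where MySQL is stopped and the data directory is empty
rsync -av /backups/full/ newserver:/var/lib/mysql/
# On the new server: fix ownership and start MySQL
chown -R mysql:mysql /var/lib/mysql
service mysql start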
Not sure it's a Stack Overflow question.
I have a Mac and am hosting an Apache/MySQL server on it using MAMP Pro. If I back up my data with Time Machine, is the MySQL database also backed up, or do I have to create a mysqldump and back that up as a cron job? In case of a crash, can I just do a normal restore, assuming the database is covered by the Time Machine backup?
Thanks
Make a regular dump with mysqldump or use another database-specific backup tool. A plain copy of the data folder is not OK.
mysqldump actually reads the data through the server, so the output can be verified. With a raw file copy it is not guaranteed that all data has been written completely to the data files, and locking can cause issues.
If your backup runs at a specific time, just schedule a cron job to run the dump shortly before that moment and verify that it finished safely. MySQL will take care of locking, changes, transactions, etc.
Always, read always, verify your backup with a restore test.
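For example, a crontab entry that dumps everything shortly before a nightly Time Machine run might look like this (the MAMP mysqldump path, credentials, output path, and times are assumptions based on a default MAMP Pro install):
# Dump all databases at 01:30, ahead of a 02:00 Time Machine backup
30 1 * * * /Applications/MAMP/Library/bin/mysqldump -u root -proot --all-databases > /Users/you/Backups/mamp-$(date +\%F).sql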
I have a database on a 32-bit Linux server with MySQL that I would like to import/copy/migrate to a 64-bit Linux server.
I have considered
service mysqld stop
tar czf /root/db.tar.gz /var/lib/mysql
and copy this to the new server.
Or perhaps
mysqldump -uroot -p --all-databases > /root/db.sql
Question
Is that possible, and if so, what is the recommended way?
Using mysqldump and re-importing the resulting file will certainly work and is recommended, unless your database is very large and the slower speed of the dump/import process is an issue.
Unless the server environments are identical in most ways, you may have some cleaning up and permission corrections to do if you copy the data files over directly. There is documentation on performing the transfer with raw data files, but mysqldump is the usual preferred method.
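The mysqldump route from the question, including the import on the 64-bit server, would look roughly like this (the host name and credentials are placeholders):
# On the 32-bit server
mysqldump -u root -p --all-databases --routines --triggers > /root/db.sql
scp /root/db.sql newserver:/root/
# On the 64-bit server
mysql -u root -p < /root/db.sql
mysql_upgrade -u root -p    # only needed if the MySQL version also changed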
If you have a large database, I would suggest using XtraBackup. It will likely be much faster than using mysqldump followed by an import.
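If you go the XtraBackup route, one option is to stream the backup straight to the new server instead of writing it to disk first. A sketch (the ssh user, host, and target directory are assumptions):
xtrabackup --backup --stream=xbstream --target-dir=./ | ssh user@newserver "xbstream -x -C /data/backup"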
I am trying to create an automatic process that will synchronize the databases of two servers. One site is live, and I need the testing environment to sync up with the live site every so often (I am thinking of a cron job for that).
How can I implement this?
You can keep the systems up to date with MySQL replication
http://dev.mysql.com/doc/refman/5.0/en/replication.html
You are basically looking at a master-slave configuration.
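A minimal sketch of setting that up (server IDs, credentials, and the binlog file/position are placeholders; the file and position come from the SHOW MASTER STATUS output on the live server):
# On the master (live server), my.cnf needs server-id=1 and log-bin=mysql-bin; then:
mysql -u root -p -e "GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%' IDENTIFIED BY 'slavepass'; SHOW MASTER STATUS;"
# On the slave (test server), my.cnf needs server-id=2; then:
mysql -u root -p -e "CHANGE MASTER TO MASTER_HOST='live.example.com', MASTER_USER='repl', MASTER_PASSWORD='slavepass', MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=107; START SLAVE;"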
If you'd like something a little simpler, you can use mysqldump to dump your database, then ssh to ship it over the wire, and mysql to load it in again.
mysqldump mydatabase | ssh the_test_server "mysql mytestdatabase"
You will have to purge mytestdatabase before doing the transfer, but if you are looking for a single command to 'synchronize' the database, this will do it.
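For example, a cron-friendly pair of commands that recreates the test database and then reloads it (the ssh user and database names are placeholders):
# Drop and recreate the test database, then reload it from the live server
ssh user@the_test_server "mysql -e 'DROP DATABASE IF EXISTS mytestdatabase; CREATE DATABASE mytestdatabase'"
mysqldump mydatabase | ssh user@the_test_server "mysql mytestdatabase"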