Is there a simpler approach to routinely keeping a local MySQL database updated with data from a remote database? My setup requires me to run a local copy of the project on the office network to allow local email sending, but the emails link back to a live server. Also, the admin users need to be able to access the project from anywhere on the internet to compose the emails. Currently, my options are:
Connect local project to the remote database.
Export the remote database, clean the local database and then import the dump.
This is something I need to do routinely every week. I went with approach #1, but it takes a long time to pull data this way, so I am wondering whether I should keep doing it this way in the long run.
On a routine basis, just do the mysqldump export on the remote server and then do the mysql import on the local server.
mysqldump -u root -proot -h remote-server test > db%FileDate%.sql
And on the local server, do the import:
mysql -u root -proot -h local-server test < db%FileDate%.sql
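A minimal batch sketch of the whole weekly routine, assuming the commands above and that %FileDate% should come from the system date (the %date% substring offsets below depend on your Windows locale, so adjust them to your date format; hosts and credentials are placeholders):
rem weekly_sync.bat - rough sketch of the weekly export/import cycle
set FileDate=%date:~-4%%date:~-10,2%%date:~-7,2%
mysqldump -u root -proot -h remote-server test > db%FileDate%.sql
mysql -u root -proot -h local-server test < db%FileDate%.sql
You could then run this batch file once a week via the Windows Task Scheduler.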
You can use MySQL incremental backups. Please refer to the links below.
https://www.percona.com/forums/questions-discussions/percona-xtrabackup/10772-[script]-automatic-backups-incremental-full-and-restore
https://dev.mysql.com/doc/mysql-enterprise-backup/8.0/en/mysqlbackup.incremental.html
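For example, a rough sketch of an incremental cycle with Percona XtraBackup (assuming the xtrabackup 2.4+ client is installed; credentials and directories are placeholders):
xtrabackup --backup --user=root --password=secret --target-dir=/data/backups/base
xtrabackup --backup --user=root --password=secret --target-dir=/data/backups/inc1 --incremental-basedir=/data/backups/base
The first command takes the full base backup; the second, run on each routine cycle, records only the pages changed since the base.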
Situation: I have 2 servers, one of them currently hosting a live WordPress site, and I want to be able to transfer the site to the other server in case the first server goes down. Transferring the source files is easy; transferring the database is what I need to figure out how to do. Both of the servers are Windows Server 2008.
Is there any easy way to do this?
Simplest way would be to mysqldump the database, transfer it using the same mechanism you have for your source files, then import it into mysql.
Dump the primary database...
mysqldump -u user -p database > c:\somedir\backup.sql
...transfer the sql file...
Import on the failover...
mysql -u user -p database < c:\somedir\backup.sql
Both export and import can easily be scripted in batch files.
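For instance, a rough batch sketch of the primary side, assuming a writable network share named backups on the failover server (the share path and credentials are placeholders):
rem dump_and_ship.bat - run on the primary server
mysqldump -u user -ppassword database > c:\somedir\backup.sql
copy /Y c:\somedir\backup.sql \\failover-server\backups\backup.sql
On the failover, a matching batch file would simply run the mysql import shown above against the copied file.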
The easiest way that I know of is using the plugin "Duplicator". I have used it several times with Apache servers, and as commented here, it seemed to run fine on Windows Server 2008 with IIS 7 three years ago, so I figure it would work even better now.
Duplicator generates two packages: one with the files (where you can exclude uploads if needed) and the other with the database. Once you have the two packages, you need to upload them to your new server and install the package. Of course, you need the new database credentials. In the last step, the plugin asks you for the new base URL so it can make the appropriate substitutions throughout the database.
Can someone please tell me how to do a simple database backup for a MySQL database on a different host (computer)? I am trying to move my database from one host (server) to a new host (server).
If you just need to transfer a database between servers, using phpMyAdmin, you can use Export on the database on the source to generate a .SQL script, and then use Import on the target server to transfer it.
Alternatively, if the database is too big, you could use something like SQLDumper.
I have a local Perl script that does a lot of parsing of web pages and then successfully updates my local MySQL database (WAMP server). I now want to send this local data to my remote server, but remotely connecting to my database isn't allowed with my hosting company. Unfortunately I never thought of that problem.
So, I now need to find an automated way to update my remote server (every 15mins). I mistakenly thought I could just edit my Perl script with the details of the remote server.
I am aware that I could use CGI or PHP to do the parsing on the server, but I really want to keep the parsing local for now.
Summary:
Local MySQL database -> remote MySQL database every 15mins ??
Any ideas what I can do?
Thanks :-)
If replication is not an option but you can still establish an SSH connection from the local box to the remote box, then:
run mysqldump to export the data into a file http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html#option_mysqldump_where
scp the file to the remote box
mysql -u username -ppassword database_name < dumpfile.sql
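Putting those steps together, a rough sketch, where the table name, the updated_at column, and the host names are assumptions about your setup (--replace makes the dump emit REPLACE statements so re-imports don't fail on duplicate keys):
mysqldump -u localuser -plocalpass --replace --where="updated_at > NOW() - INTERVAL 15 MINUTE" localdb parsed_pages > dumpfile.sql
scp dumpfile.sql remoteuser@remote-box:/tmp/dumpfile.sql
ssh remoteuser@remote-box "mysql -u username -ppassword database_name < /tmp/dumpfile.sql"
This could run from a cron job (or scheduled task) every 15 minutes.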
If your server does not accept remote connections to MySQL, you can create an SSH tunnel. Then you can apply the replication solution proposed by matcheek.
Here is a hint: http://realprogrammers.com/how_to/set_up_an_ssh_tunnel_with_putty.html
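As a rough command-line sketch of the same idea (the linked article shows it with PuTTY on Windows; host names, ports, and credentials are placeholders):
ssh -L 3307:127.0.0.1:3306 user@remote-server
mysql -h 127.0.0.1 -P 3307 -u dbuser -p database_name
The first command forwards local port 3307 to the remote server's MySQL port 3306; the second then talks to the remote database through that tunnel as if it were local.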
Based on the responses I've received, I think the answer to my original question is to stop using a cheap shared hosting company (no remote access to server, no cron jobs, etc) and start using a VPS hosting company. That will give me the freedom to remotely connect to my server, etc.
Thanks again to those who replied.
From how you described the problem, replication seems to be the way to go.
http://dev.mysql.com/doc/refman/4.1/en/replication-howto.html
Using a cron job could be another option. It would read a file from your local machine and import the data into the remote box.
I suggest the following:
On every local run, also write the SQL statements (sans SELECTs) that you run against your copy of the DB into a file.
On your WAMP server, create a small PHP script that gives back the oldest file from the first step (with some auth, of course).
On your remote server, run a cron job that gets this file from your local server, runs the SQL against the DB, and then acknowledges it.
On acknowledgement, your WAMP server drops the file and gives back the next one.
While this seems complicated, it allows for a restart after connectivity loss - something that I consider important.
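A rough shell sketch of the cron job side, assuming hypothetical next.php and ack.php endpoints on the WAMP box and placeholder hosts and credentials:
#!/bin/sh
# Fetch the oldest pending SQL file from the WAMP box (endpoint and auth are assumptions)
curl -s -u syncuser:syncpass "http://office-wamp.example.com/sync/next.php" -o /tmp/pending.sql
# Run it and acknowledge only if something was returned and the import succeeded
if [ -s /tmp/pending.sql ]; then
    mysql -u dbuser -pdbpass remotedb < /tmp/pending.sql &&
        curl -s -u syncuser:syncpass "http://office-wamp.example.com/sync/ack.php"
fi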
I'm very surprised that it seems impossible to upload more than a few megabytes of data to a MySQL database through phpMyAdmin, whereas I can easily upload an MS Access table of up to 2 gigabytes.
So is there any script in PHP, or anything else, that allows doing this, unlike phpMyAdmin?
phpMyAdmin is based on HTML and PHP. Neither technology was built, nor ever intended, to handle such amounts of data.
The usual way to go about this would be transferring the file to the remote server - for example using a protocol like (S)FTP, SSH, a Samba share or whatever - and then import it locally using the mysql command:
mysql -u username -p -h localhost databasename < infile.sql
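For example, the transfer step could be a simple scp (the host name and path are placeholders), after which you run the mysql command above on the server:
scp infile.sql user@dbserver.example.com:/tmp/infile.sql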
Another very fast way to exchange data between two servers with the same MySQL version (it doesn't dump and re-import the data but copies the data directories directly) is mysqlhotcopy. It runs on Unix/Linux and NetWare based servers only, though.
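A minimal sketch of mysqlhotcopy usage (note it only works for MyISAM and ARCHIVE tables; the credentials and target directory are placeholders), after which you would copy the resulting directory to the other server, e.g. with scp or rsync:
mysqlhotcopy --user=root --password=secret databasename /backups/databasename_copy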
No. Use the command line client.
mysql -hdb.example.com -udbuser -p < fingbigquery.sql
I have a test database on a separate remote server from my production DB. Every once in a while, I want to test things by uploading a copy of my production DB to my testing DB. Unfortunately, the backup file is now half a gig and I'm having trouble transferring it via FTP or SSH. Is there an easy way I can use the mysql restore command between servers? Also, is there another way to move over large files that I'm not considering? Half a gig doesn't seem that big; I would imagine people run into this issue frequently.
Thanks!
Are the servers accessible to each other?
If so, you can just pipe the data from one db to another without using a file.
ex: mysqldump [options] | mysql -h test -u username -ppasswd
0. Please consider whether you really need production data (especially if it contains sensitive information).
1. The simplest solution is to compress the backup on the source server (usually with gzip), transfer it across the wire, then decompress it on the target server (see the sketch after this list).
http://www.techiecorner.com/44/how-to-backup-mysql-database-in-command-line-with-compression/
2. If you don't need an exact replica of the production data (e.g. you don't need application logs, errors, or other technical data), you can consider restoring the backup on the source server under a different DB name, deleting all unnecessary data, and THEN taking the backup that you will actually use.
3. Restore the full backup once on a reference server in your dev environment and then copy the transaction logs only (to replay them on the reference server). Depending on the usage pattern, transaction logs may take a lot less space than the whole database.
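For point 1, a minimal sketch of the compress-transfer-decompress cycle (host names, database names, and paths are placeholders):
mysqldump -u user -p proddb | gzip > proddb.sql.gz
scp proddb.sql.gz user@testserver.example.com:/tmp/
gunzip < /tmp/proddb.sql.gz | mysql -u user -p testdb
Compressing a text dump typically shrinks it severalfold, which makes a half-gig transfer much more manageable.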
MySQL allows you to connect to a remote database server to run SQL commands. Using this feature, we can pipe the output from mysqldump and ask mysql to connect to the remote database server to populate the new database.
mysqldump -u root -prootpass SalesDb | mysql --host=185.32.31.96 -C SalesDb
Use an efficient transfer method, rather than FTP.
If you have a dump file created by mysqldump on the test DB server and you update it every so often, I think you could save time (if not disk space) by using rsync to transfer it. Rsync will use SSH and compress the data for the transfer, but I think both the local and remote files should/could be uncompressed.
Rsync will only transfer the changed portion of a file.
It may take some time to decide what, precisely, has changed in a dump file, but the transfer should be quick.
I must admit though, I've never done it with a half-gigabyte dump file.
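A minimal sketch of such a transfer, assuming the dump file sits at the same (placeholder) path on both servers:
rsync -avz --progress user@production.example.com:/backups/proddump.sql /backups/proddump.sql
The -z flag compresses the data on the wire, while rsync's delta algorithm sends only the blocks that have changed since the last run.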