mysqldump on remote server - mysql

If there are two machines, a client and a server, how do I run a mysqldump from the client against the server so that the dump ends up on the client and is not stored on the server?
Thanks.

Here is a PHP script that generates a mysqldump. It outputs directly to the client, and does not create any files on the server.
https://github.com/tylerl/web-scripts/tree/master/mysqldump

Do this in two steps:
dump the data on the server
transfer it to the client (possibly compressing it first)
If you need to do it often, write a script on the server that dumps, compresses and copies the data to the client (don't forget to archive/delete old backups on the server, as needed).
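A minimal sketch of those two steps, assuming SSH access (host names, credentials and paths are placeholders):
ssh -t user@dbserver.example.com "mysqldump -u dbuser -p mydb | gzip > /tmp/mydb.sql.gz"   # step 1: dump and compress on the server (-t lets the password prompt work)
scp user@dbserver.example.com:/tmp/mydb.sql.gz .                                           # step 2: copy the compressed dump down to the client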

You could write a simple script that runs from your crontab to create such a dump and move it to a particular area of your file system, such as an HTTP-accessible or FTP-accessible folder.
If you need the client side to be automatic too, you could then write a script on your clients that fetches those dumps.
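A hypothetical crontab entry on the server for such a script (paths, credentials and schedule are placeholders):
0 2 * * * mysqldump -u dbuser -pDBPASS mydb | gzip > /var/www/backups/mydb-$(date +\%F).sql.gz   # nightly dump into an HTTP-accessible folder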

Either you do the backup server-side (if you have access to the server), using mysqldump to dump the database, gzip or bzip2 to compress the file, and ftp/sftp/scp to transfer the file to the client afterwards. You can then script this and add it to a crontab so it runs automatically at a set interval. Check out logrotate to avoid storing too many backups (a simpler alternative is sketched after this answer).
Or you use a tool on the client to fetch the data. The default (free) MySQL Workbench can back up an entire database, or you can select which tables to back up (and, interestingly, which tables to restore afterwards - nice if you only need to reset one table).
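If you don't want to set up logrotate, a simple find-based cleanup in the same backup script also works (the path and retention period are assumptions):
find /backups -name '*.sql.gz' -mtime +14 -delete    # delete compressed dumps older than roughly two weeks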

See the answer to a similar question elsewhere:
https://stackoverflow.com/a/2990732/176623
In short, you can use mysqldump on the client to connect to and dump the server data directly on the client.
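In other words, something like this run on the client connects to the remote server and writes the dump locally (host, user and database names are placeholders):
mysqldump -h dbserver.example.com -u dbuser -p mydb > mydb_dump.sql    # the file ends up on the client; nothing is stored on the server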

Related

Run a mysql query remotely from a PC without mysql

I am trying to automate the upload of data to a MySQL database. I am using MySQL Workbench on a Windows PC to remotely access my database on AWS. I have a SQL file that I use to load a CSV file into the DB using LOAD DATA LOCAL INFILE. My CSV file is created daily using a scheduled task, and I would like to load it into the DB using a batch-type file and a scheduled task.
Is it possible?
On Windows you may use PHP from WampServer; its installation is very straightforward. You don't need MySQL Server on your local PC to update the remote AWS database with the data, only a scripting language.
I would suggest installing MySQL on your local PC anyway, to check first on that local MySQL whether the update does what you expect it to do. Once it meets your expectations, you just change the MySQL connection parameters to those of AWS and you're set.
In MySQL Workbench you can add an additional, local MySQL Server connection to check the local database and all the changes applied to it.
Perhaps this example can help you take the first steps in writing a PHP script to update the database.
PHP scripts can be executed from the command line as well, so once you have written the script that updates the database, you should be able to run it from the Windows CMD console this way:
php -f path-to-your-script.php
If you do it this way, you need to write the PHP script so that it already knows where the CSV file is and reads its content, perhaps with file_get_contents(); or you can also try fgetcsv(), a function dedicated to CSV files that is even more suitable because it reads your CSV file line by line, so if you use a loop you can process even very big CSV files without running out of memory.
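For example, a scheduled task that runs such a loader script daily could be created from the CMD console like this (the task name, script path and start time are only placeholders):
schtasks /create /tn "LoadCsvToAws" /tr "php -f C:\scripts\load_csv.php" /sc daily /st 02:00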

Full backup of GoDaddy site via command-line script

Is there a simple way to do an automated backup of an entire website on a host like GoDaddy via the command-line?
So far, I know I need to back up all the files in my home directory recursively. I could possibly automate SFTP to connect and issue a get -R * command to get the full file dump, or just use SCP.
The other half of the puzzle is getting all of the tables available, mostly WordPress tables. My guess is that maybe there's a command-line command I could issue which dumps the database contents to a flat file, which I could then also pull via SFTP. If such a command exists, my plan is to use a combination of Telnet and EXPECT scripts to login to the GoDaddy site, issue some commands, then disconnect back to my local shell.
The end result should be that I have a folder with all of my server content in it, plus the flat file backup of the SQL database from the server. I know there are WordPress backup plugins, but they tend to provide a slew of ZIP files, when all I want is the raw data directly so I can put it in my private SVN server for backup and versioning.
So my question: how do I extract all of the databases on my GoDaddy server via the command-line to a file?
Thank you.
In the end, I found a working solution.
First, I used 2 separate expect scripts.
Telnet into the server, delete old backups, use mysqldump to extract all tables to a flat file via mysqldump -u db_owner -p --all-databases > output.sql, and create a massive tarball of everything. Logout.
Use SCP to pull the newly created tarball, then extract it to a local SVN-controlled working-copy folder.
Use a second expect script to login to the server and delete the backup. Logout.
From there, I just manually svn add and svn commit as needed.
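The server-side portion boiled down to commands roughly like these (the web-root path and host name are assumptions):
mysqldump -u db_owner -p --all-databases > output.sql    # dump every database to one flat file
tar czf backup.tar.gz ~/html output.sql                  # bundle the site files and the SQL dump
And from the local machine:
scp user@mysite.example.com:backup.tar.gz ./backup/      # pull the tarball into the SVN working copy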

Update my remote MySQL database with my local MySQL database

I have a local Perl script that does a lot of parsing of web pages and then successfully updates my local MySQL database (WAMP server). I now want to send this local data to my remote server, but remotely connecting to my database isn't allowed with my hosting company. Unfortunately I never thought of that problem.
So, I now need to find an automated way to update my remote server (every 15mins). I mistakenly thought I could just edit my Perl script with the details of the remote server.
I am aware that I could use CGI or PHP to do the parsing on the server, but I really want to keep the parsing local for now.
Summary:
Local MySQL database -> remote MySQL database every 15mins ??
Any ideas what I can do?
Thanks :-)
If replication is not an option but you can still establish an SSH connection from the local box to the remote box, then:
run mysqldump to export the data into a file http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html#option_mysqldump_where
scp the file to the remote box
mysql -u username -ppassword database_name < dumpfile.sql
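Put together, the three steps look roughly like this (user names, host and database names are placeholders):
mysqldump -u localuser -p mydb > dumpfile.sql                                             # export the local data to a file
scp dumpfile.sql user@remote.example.com:/tmp/                                            # copy the file to the remote box
ssh -t user@remote.example.com "mysql -u username -p database_name < /tmp/dumpfile.sql"   # import it on the remote side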
If your server does not accept remote connections to MySQL, you can create an SSH tunnel. Then you can apply the replication solution proposed by matcheek.
Here is a hint: http://realprogrammers.com/how_to/set_up_an_ssh_tunnel_with_putty.html
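As a command-line sketch of such a tunnel (host names and ports are placeholders), a local port is forwarded to the remote MySQL port and the client connects to it as if it were local:
ssh -N -L 3307:127.0.0.1:3306 user@remote.example.com    # forward local port 3307 to MySQL on the remote host
mysql -h 127.0.0.1 -P 3307 -u dbuser -p mydb             # connect through the tunnel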
Based on the responses I've received, I think the answer to my original question is to stop using a cheap shared hosting company (no remote access to server, no cron jobs, etc) and start using a VPS hosting company. That will give me the freedom to remotely connect to my server, etc.
Thanks again to those who replied.
From how you described the problem, replication seems to be the way to go:
http://dev.mysql.com/doc/refman/4.1/en/replication-howto.html
Using a cron job could be another option. It would read a file from your local machine and import the data into the remote box.
I suggest the following:
On every local run, also write the SQL statements (sans SELECTs) that you run against your copy of the DB into a file.
On your WAMP server, create a small PHP script that gives back the oldest such file from the first step (with some auth, of course).
On your remote server, run a cron job that fetches this from your local server, runs the SQL against the DB, and then acknowledges it.
On acknowledgement, your WAMP server drops the file and gives back the next one.
While this seems complicated, it allows for a restart after a connectivity loss - something that I consider important.
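A very rough sketch of the cron job's commands on the remote server, assuming hypothetical next_statement.php and ack.php endpoints on the WAMP server (names, credentials and URLs are placeholders):
curl -su backupuser:secret "http://my-wamp-host.example.com/next_statement.php" -o /tmp/pending.sql   # fetch the oldest queued SQL file
mysql -u dbuser -pDBPASS mydb < /tmp/pending.sql                                                      # replay it against the remote DB
curl -su backupuser:secret "http://my-wamp-host.example.com/ack.php"                                  # acknowledge so the next file is served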

mysql restore for files on another server

I have a test database on a separate remote server from my production DB. Every once in a while, I want to test things by uploading a copy of my production DB to my testing DB. Unfortunately, the backup file is now half a gig and I'm having trouble transferring it via FTP or SSH. Is there an easy way to use the mysql restore command between servers? Also, is there another way to move over large files that I'm not considering? Half a gig doesn't seem that big; I would imagine people run into this issue frequently.
Thanks!
Are the servers accessible to each other?
If so, you can just pipe the data from one db to another without using a file.
ex: mysqldump [options] | mysql -h test -u username -ppasswd
0. Please consider whether you really need production data (especially if it contains sensitive information).
1. The simplest solution is to compress the backup on the source server (usually with gzip), transfer it across the wire, then decompress it on the target server.
http://www.techiecorner.com/44/how-to-backup-mysql-database-in-command-line-with-compression/
2. If you don't need an exact replica of the production data (e.g. you don't need application logs, errors, or other technical stuff), you can consider creating a backup, restoring it on the source server under a different DB name, deleting all the unnecessary data, and THEN taking the backup that you will actually use.
3. Restore a full backup once on a reference server in your dev environment and then copy only the transaction logs (to replay them on the reference server). Depending on the usage pattern, the transaction logs may take a lot less space than the whole database.
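For option 1, the dump can even be compressed and streamed straight to the target server without an intermediate file (host names, database names and inline passwords are placeholders; inline passwords are used here because a prompt cannot read from the pipe):
mysqldump -u root -pPRODPASS proddb | gzip | ssh user@test.example.com "gunzip | mysql -u root -pTESTPASS testdb"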
MySQL allows you to connect to a remote database server to run SQL commands. Using this feature, we can pipe the output from mysqldump into a mysql client that connects to the remote database server to populate the new database.
mysqldump -u root -prootpass SalesDb | mysql --host=185.32.31.96 -C SalesDb
Use an efficient transfer method, rather than ftp.
If you have a dump file created by mysqldump on the test DB server, and you update it every so often, I think you could save time (if not disk space) by using rsync to transfer it. rsync will use SSH and compress the data for the transfer, but I think both the local and remote files should/could be uncompressed.
Rsync will only transfer the changed portion of a file.
It may take some time to decide what, precisely, has changed in a dump file, but the transfer should be quick.
I must admit though, I've never done it with a half-gigabyte dump file.
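A minimal sketch of that transfer (host name and paths are placeholders; -z compresses the data on the wire while both copies stay uncompressed on disk, and only the changed portions of the file are actually sent):
rsync -avz --partial user@production.example.com:/backups/proddb.sql /var/backups/proddb.sql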

How do I migrate a populated mySQL database from dev to a shared host?

The title pretty much says it all, but to elaborate: If I build a mySQL database on my local dev machine, populate it with data, and subsequently want to migrate the database to a shared host (in this case, Siteground,) how do I do so in a way that keeps structure and data intact?
In this case, I don't have file access to the database server.
Use mysqldump (doc) to dump your database (mysqldump [databasename] for a simple configuration) on your development machine to a dump file (a file containing the SQL statements needed to recreate both schema and data). Then import the dump on your shared host using the provided utilities (normally you get phpMyAdmin preinstalled by your hoster, which can import dumps).
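For example (the database name is a placeholder):
mysqldump -u root -p mydatabase > mydatabase.sql     # run on the dev machine; the resulting file can be imported via phpMyAdmin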
In addition to the response made by theomega (namely, do a dump of your development database and then import the dump into your production database), be aware that you may need to enable large SQL insert statements if you have a lot of data. I would recommend you first FTP the file to the host and then do the import from that file. Each host has its own way of doing it, but if you can connect to the remote server using SSH, you can likely run the import from the command line.
Also in addition to theomega: most tools for MySQL have dump/execute functions for SQL files.
If you're using Navicat, for example, you're just a right-click away:
Right-click on the database you want to export and choose "dump sql file". This will allow you to save the .sql file on your local drive in the folder of your choosing.
Then right-click on the destination database and choose "execute batch file". Browse to the newly created .sql file and it will execute all the SQL commands from that file in the destination database - namely, creating a copy of the exported DB.