I took over a website at a less-than-optimal hoster with no backups yet.
I do have an FTP-access and I know the database access parameters of the installed web-app to the MySQL server, but I don't have access to the MySQL interface or the underlying server.
I would like to do an automated backup to a Linux server under my control.
I can download all data via FTP, zip it and store it on a backed up storage.
How to do this for the database?
As an initial solution I installed phpMyAdmin and did a manual backup, but I would like to automate this process.
You can use mysqldump to back up a remote MySQL database.
Suppose your MySQL database is on a host called "dbhost". You can reach that host over the network from your new Linux host.
Run this command on your new Linux host:
$ mysqldump --single-transaction --all-databases --host dbhost > datadump.sql
(You might also need to add the --user and --password options.)
You can automate any command you can run at the command line. Put it in a shell script, then invoke the script from cron, for example (see the sketch below).
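A minimal sketch of such a script, assuming the dbhost from the example above and placeholder credentials and paths (adjust everything to your setup, and keep the script readable only by you since it contains the password):

#!/bin/sh
# backup-db.sh - nightly dump of the remote database; all names here are placeholders.
BACKUP_DIR="$HOME/db-backups"
mkdir -p "$BACKUP_DIR"

# --single-transaction gives a consistent dump of InnoDB tables without locking them.
mysqldump --single-transaction --all-databases --host dbhost \
    --user dbuser --password=dbpass | gzip > "$BACKUP_DIR/datadump-$(date +%F).sql.gz"

A crontab entry (edit with crontab -e) could then run it nightly, e.g. at 03:00:

0 3 * * * /path/to/backup-db.sh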
Related
I'm trying to implement a database backup cron job (other solutions welcome) at my job, but I have a small problem:
I have a large database that is over 10GB in size, and the current VM doesn't have enough space to store the temporary file that the dump creates.
I know I can use mysqldump with a host parameter, but my question is: when doing that, does the file generated by mysqldump stay on the machine that is running it, or on the database server?
UPDATE:
I forgot to mention that I'm trying to back up a network of websites, and that some of them are behind a firewall (needing VPN access) while others need server hopping to get to the database server.
You can run a shell script from an archive host, where you've traded password-less ssh keys with the database server. This lets you transfer the file directly over ssh, without creating any temp files on the remote database server:
ssh -C myhost.com mysqldump -u my_user --password=bigsecret \
--skip-lock-tables --opt database_name > local_backup_file.sql
Obviously there are ways to secure that password on the command line, but this is a method that could accomplish what you want. One advantage of this method is that it doesn't require the archive host to have access to port 3306 on the remote host.
This guy's version is cool because it also compresses the data on-the-fly before transferring it over the network, and then he uncompresses it before loading it into a local database.
ssh me@remoteserver 'mysqldump -u user -psecret production_database | \
gzip -9' | gzip -d | mysql local_database
But that's why my version uses ssh -C, which enables its own compression algorithm and avoids extra gzip pipes.
Depending on the circumstance it might be a better idea to use MySQL replication. Set up MySQL on your backup server and configure it as a slave of your production database (see http://dev.mysql.com/doc/refman/5.7/en/replication-howto.html). You can then dump the slave database easily.
An advantage of this approach is you're not transferring 10GB each time you want to backup, you're only transferring any changes to the database as and when they occur.
You'll need to keep an eye on the replication though, because if it fails your slave database will become stale.
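If you go this route, here is a minimal sketch of the replica-side setup, run on the backup server. Every hostname, credential, and binlog coordinate below is a placeholder, and it assumes the production server already has binary logging enabled and a replication user created:

# Point the backup server's MySQL at the production primary and start replicating.
mysql -u root -p -e "
  CHANGE MASTER TO
    MASTER_HOST='production-db.example.com',
    MASTER_USER='repl_user',
    MASTER_PASSWORD='repl_password',
    MASTER_LOG_FILE='mysql-bin.000001',
    MASTER_LOG_POS=4;
  START SLAVE;"

# Check that Slave_IO_Running and Slave_SQL_Running both say Yes.
mysql -u root -p -e "SHOW SLAVE STATUS\G"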
In order to load a database into MySQL using the terminal, as shown here, must the SQL file already be on the server, or can I upload it from my local PC?
If you have MySQL installed on your local machine: then you can run the import command directly from your local machine's command line
mysql -hmy_server -umy_user -p database_name < full/path/to/file.sql
or you can just go to the directory where your sql file exists and run
mysql -hmy_server -umy_user -p database_name < file.sql
(password will be required after hitting Enter). This way the file will be imported into the remote database from your local computer.
If MySQL is not installed on your local machine: in this case you have to manually upload the file to the server with an FTP client, then connect remotely (e.g. with PuTTY) and run a similar command
mysql -umy_user -p database_name < path/to/sql/file.sql
Yes. You can SCP or SFTP it to the server, but that command expects a file on the machine. In fact, that particular command expects it to be in the same directory that you are in.
Assuming the file contains SQL statements it can go anywhere. What is important is that the MySQL client you use to read the file can access the database server - usually on port 3306 for MySQL.
The file needs to be on the server, unless your local directory is accessible over the network to the server (i.e. a network share on another machine which is mapped to some path on your server - convoluted but essentially possible).
Basically, just upload the file to the server and run the command locally there.
No, the SQL script file can generally be present on your own machine. When you execute the command, that SQL script is executed against the remote database.
mysql -u user -p -h servername DB < [Full Path]/test.sql
I just uploaded my database to an online server and my application connected successfully, but then it became very slow, with about a five-second delay per action, and it sometimes stops responding.
To overcome this, I think the solution is to sync my local database dynamically and then connect my application to the local database.
My question is: how can I sync the local database to the web server?
I'm using MySQL and ADO.NET as the connector,
without using any third-party software.
Your question isn't very clear, but if I understand correctly, you want to synchronise a MySQL database on a remote server with the database on your local machine.
There are a number of ways to do this efficiently:
use mysqldump to make a backup of the data on your local machine, transfer the dump file to the remote server via ftp/sftp/scp (see the scp example below), and then restore it on the remote server.
mysqldump -u yourusername -pyourpassword yourdatabasename > yourbackupfilename.sql
and then run on the remote server
mysql -u yourusername -pyourpassword remotedatabasename < yourbackupfilename.sql
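To transfer the dump file in between those two steps, a minimal scp example (user, host, and remote path are placeholders):

scp yourbackupfilename.sql yourusername@yourremotehost:/path/to/backups/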
Use a mysql GUI (my favourite is sqlyog (https://www.webyog.com/product/sqlyog), but you have to pay for a license). sqlyog has a really nice feature where you can connect to both your local mysql installation, and the remote server. You can then copy databases from one to the other, all in the GUI. Super-convenient and easy, but method 1 will be faster for large databases.
If I misunderstood, please try to be clearer about what you want to do.
I have cPanel & WHM installed on my server.
Is it safe to backup this directory (if I only care about backing up the MySQL databases):
"/var/lib/mysql/"
I don't care about the other MySQL databases that cPanel provide by default. I only care about the MySQL databases that other cPanel users have created and currently own.
I know I could just back it up with other ways, but let's say due to a hard disk drive failure, I cannot access cPanel and WHM.
The only access to the server I have is via SSH (and SFTP).
Okay, so would it be in my best interest to just download everything in "/var/lib/mysql/"?
If not, what other files would I need to back up? Let me guess, just the "/home/" directory?
I hope my description of the issue was clear and descriptive enough.
Basically, I need to transfer the MySQL databases from one HDD to another, but the HDD with the MySQL databases has lots of errors, is corrupted (I cannot access cPanel/WHM) and my server provider tells me my HDD has failed.
In advance, I would like to thank you very much for your help.
Even if you did not help, thank you very much for taking your time reading this. It is much appreciated.
You mentioned that you can access the server via SSH but have no access to WHM or cPanel. I guess you have no access to phpMyAdmin(?). I am also guessing that the second HDD is on another server.
Instead of backing up a directory, I would suggest you connect via SSH to your server, then make remote backups with mysqldump, download them locally with SFTP and then import the database backups to the other HDD/server.
Connect to your server with SSH
ssh root@xxx.xxx.xxx.xx1
Where xxx.xxx.xxx.xx1 is the IP address of your first server. Give your password when prompted.
Use mysqldump to make a backup of your database(s) to the server.
mysqldump -uroot -p mydatabase1 > mydatabase1.sql
mysqldump -uroot -p mydatabase2 > mydatabase2.sql
...
Type your MySQL password when prompted, and the .sql files (backups of your databases) will be created. I would suggest you don't put the backups in a publicly accessible directory of your server (see the example below).
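For instance, a private location could be prepared like this (the path is only an illustration):

mkdir -p ~/db-backups && chmod 700 ~/db-backups
mysqldump -uroot -p mydatabase1 > ~/db-backups/mydatabase1.sql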
If you are on a Unix system you can type "ll" or "ls" to see that the .sql files have been created. Make a note of the directory in your server where the backups are located.
Terminate the SSH session:
exit
Then use your favourite SFTP program to connect to your server or use terminal like this:
sftp root@mywebsite.com
Type your password when prompted.
Navigate to the directory where the backups are located and download them by using the "get" command:
get mydatabase1.sql
Your mydatabase1.sql backup file will be downloaded to your local machine.
Don't forget to close the session:
exit
Now SFTP to your other HDD to upload the database backups:
sftp root@xxx.xxx.xxx.xx2
where xxx.xxx.xxx.xx2 is the IP address of your other machine. Give password when prompted.
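Navigate to where you want the backups stored on this machine and upload them with the "put" command (the counterpart of the earlier "get"):

put mydatabase1.sql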
Don't forget to close the SFTP session:
exit
Now that you have uploaded the databases, you can connect again with SSH to the other HDD/server just like before:
ssh root@xxx.xxx.xxx.xx2
Once connected, create the new database:
mysql -uroot -p -e "create database mydatabase1"
Import the backup to the database:
mysql -uroot -p mydatabase1 < mydatabase1.sql
Now the database backup should be imported in the new server/hdd. I hope this helps.
How can I import existing MySQL database into Amazon RDS?
I found this page on the AWS docs which explains how to use mysqldump and pipe it into an RDS instance.
Here's their example code (use in command line/shell/ssh):
mysqldump acme | mysql --host=hostname --user=username --password acme
where acme is the database you're migrating over, and hostname/username are those from your RDS instance.
You can connect to RDS as if it were a regular mysql server, just make sure to add your EC2 IPs to your security groups per this forum posting.
I had to include the password for the local mysqldump, so my command ended up looking more like this:
mysqldump --password=local_mysql_pass acme | mysql --host=hostname --user=username --password acme
FWIW, I just completed moving my databases over. I used this reference for mysql commands like creating users and granting permissions.
Hope this helps!
There are two ways to import data:
mysqldump: if your data size is less than 1GB, you can directly use the mysqldump command to import your data into RDS.
mysqlimport: if your data size is more than 1GB, or the data is in another format, you can export the data into flat files and load them with the mysqlimport command (see the sketch below).
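A minimal sketch of the mysqlimport route, with every host, credential, and path a placeholder; each flat file is loaded into the table named after the file:

# Loads /tmp/customers.txt into the "customers" table of mydatabase on the remote server.
mysqlimport --local --host=dbhost --user=username -p \
    --fields-terminated-by=',' mydatabase /tmp/customers.txt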
I'm a big fan of the SqlYog tool. It lets you connect to your source and target databases and sync schema and/or data. I've also used SQLWave, but switched to SqlYog. Been so long since I made the switch that I can't remember exactly why I switched. Anyway, that's my two cents. I know some will object to my suggestion of Windows GUI tools for MySQL. I actually like the SqlYog product so much that I run it from Wine (works flawlessly from Wine on Ubuntu for me).
This blog might be helpful.
A quick summary of a GoSquared Engineering post:
Configuration + Booting
Select a maintenance window and backup window when the instance will be at lowest load
Choose Multi-AZ or not (highly recommended for auto-failover and maintenance)
Boot your RDS instance
Configure security groups so your apps etc can access the new instance
Data migration + preparation
Enable binlogging if you haven't already
Run mysqldump --single-transaction --master-data=2 -C -q dbname -u username -p > backup.sql on the old instance to take a dump of the current data
Run mysql -u username -p -h RDS_endpoint DB_name < backup.sql to import the data into your RDS instance (this may take a while depending on your DB size)
In the meantime, your current production instance is still serving queries; this is where master-data=2 and binary logging come in
In your backup.sql file, you'll have a line at the top that looks like CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000003', MASTER_LOG_POS=350789121;
Get the diff since backup.sql as an SQL file mysqlbinlog /var/log/mysql/mysql-bin.000003 --start-position=350789121 --base64-output=NEVER > output.sql
Run those queries on your RDS instance to update it cat output.sql | mysql -h RDS_endpoint -u username -p DB_name
Get the new log position by finding end_log_pos at the end of the latest output.sql file.
Get the diff since the last output.sql (like step 6) and repeat steps 7 + 8 (a shell sketch of one such round follows this list).
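A rough shell sketch of one such catch-up round (steps 6-8), reusing the binlog path and placeholders from above:

# Find the last end_log_pos recorded in the previous diff.
LAST_POS=$(grep -o 'end_log_pos [0-9]*' output.sql | tail -n 1 | awk '{print $2}')

# Dump everything after that position and replay it on the RDS instance.
mysqlbinlog /var/log/mysql/mysql-bin.000003 --start-position="$LAST_POS" --base64-output=NEVER > output.sql
mysql -h RDS_endpoint -u username -p DB_name < output.sql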
The actual migration
Have all your apps ready to deploy quickly with the new RDS instance
Get the latest end_log_pos from output.sql
Run FLUSH TABLES WITH READ LOCK; on the old instance to stop all writes
Start deploying your apps with the new RDS instance
Run steps 6-8 from above to update the RDS instance with the last queries to the old server
Conclusion
Using this method, you'll have a small amount of time (depending on how long it takes to deploy your apps + how many writes your MySQL instance serves - probably only a minute or two) with writes being rejected from your old server, but you will have a consistent migration with no read downtime.
A full and detailed post explaining how we (GoSquared) migrated to RDS with minimal downtime (including error debugging) is available here: https://engineering.gosquared.com/migrating-mysql-to-amazon-rds.
I completely agree with @SanketDangi.
There are two ways of doing this: one way, as suggested, is to use either mysqldump or mysqlimport.
I have seen cases where this creates problems while restoring, and the data on the cloud gets corrupted.
However, importing applications to the cloud has become much easier nowadays. You can try uploading your DB server to the public cloud through ravello.
You can import your database server itself onto Amazon using ravello.
Disclosure: I work for ravello.
Simplest example:
# export local db to sql file:
mysqldump -uroot -p --databases qwe_db > qwe_db.sql
# Now you can edit qwe_db.sql file and change db name at top if you want
# import sql file to AWS RDS:
mysql --host=proddb.cfrnxxxxxxx.eu-central-1.rds.amazonaws.com --port=3306 --user=someuser -p qwe_db < qwe_db.sql
The AWS RDS customer data import guide for MySQL is available here: http://aws.amazon.com/articles/2933
Create flat files containing the data to be loaded
Stop any applications accessing the target DB Instance
Create a DB Snapshot
Disable Amazon RDS automated backups
Load the data using mysqlimport
Enable automated backups again
If you are using the terminal this is what worked for me:
mysqldump -u local_username -plocal_password local_db_name | mysql -h myRDS-at-amazon.rds.amazonaws.com -u rds-username -prds_password_xxxxx remote_db_name
and then I used MySQL Workbench (free download) to check it was working, because the command line stayed static after running the command; I could probably have added -v at the end to see its output.
Note: there is no space after -p
Here are the steps I followed, which worked successfully.
Take a mysqldump of the needed database.
mysqldump -u username -p databasename --single-transaction --quick --lock-tables=false >databasename-backup-$(date +%F).sql
(Don't forget to replace the username (root, most of the time) and databasename with the name of the database you are going to migrate to RDS.)
Once prompted, enter your password.
Once done, log in to the RDS instance from your MySQL server (make sure the security groups are configured to allow the connection from EC2 to RDS):
mysql -h hostaddress -P 3306 -u rdsusername -p
(Don't forget to replace hostaddress with the address of your RDS instance and rdsusername with the username for your RDS instance; enter the password when prompted.)
You can find that hostaddress under Connectivity & security -> Endpoint & port for your RDS database in the AWS Console.
Once logged in, create the database using MySQL commands :
create database databasename;
\q
Once the database is created in RDS, import the SQL file created in Step 1:
mysql -h hostaddress -u rdsusername -p databasename < backupfile.sql
This should import the SQL file to RDS and restore the contents into the new database.
Reference: https://k9webops.com/blog/migrate-an-existing-database-on-mysql-mariadb-to-an-already-running-rds-instance-on-the-aws/