I'm hoping I can use a shell script that will pull an SQL dump down from my production site and load it into my local database. Ideally, I'd like to be able to run something like this:
sync_site example_com example_local
Where the first argument is the production database and the second is the local database. The remote database is always on the same server, behind SSH, with known MySQL credentials.
Figured it out:
ssh me@myserver.com mysqldump -u user -ppass remote_db | mysql -u user -ppass local_db
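Wrapped up as the wished-for script, a minimal sketch (the hostname and credentials are placeholders carried over from the one-liner; they would normally live in a config file or ~/.my.cnf):
#!/bin/bash
# sync_site: pull a dump of a remote database into a local database
# usage: sync_site remote_db local_db
remote_db="$1"
local_db="$2"
ssh me@myserver.com "mysqldump -u user -ppass $remote_db" | mysql -u user -ppass "$local_db"
Save it as sync_site, make it executable with chmod +x, and run sync_site example_com example_local.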
Related
I need to copy an entire database from a MySQL installation on a remote machine via SSH to my local machine's MySQL.
I know the SSH and both local and remote MySQL admin users and passwords.
Is this enough information, and how is it done?
From remote server to local machine
ssh {ssh.user}@{remote_host} \
  'mysqldump -u {remote_dbuser} --password={remote_dbpassword} {remote_dbname} | bzip2 -c' \
  | bunzip2 -dc | mysql -u {local_dbuser} --password={local_dbpassword} -D {local_dbname}
That will dump the remote DB into your local MySQL via pipes:
ssh mysql-server "mysqldump --all-databases --quote-names --opt --hex-blob --add-drop-database" | mysql
Be careful with the user accounts stored in mysql.user: --all-databases dumps the mysql system database as well, so importing it will overwrite the local accounts.
Moreover, to avoid typing usernames and passwords for mysqldump and mysql on the local and remote hosts, you can create a file ~/.my.cnf :
[mysql]
user = dba
password = foobar
[mysqldump]
user = dba
password = foobar
See http://dev.mysql.com/doc/refman/5.1/en/option-files.html
Modified from http://www.cyberciti.biz/tips/howto-copy-mysql-database-remote-server.html (modified because I prefer to use .sql as the extension for SQL files):
Usually you run mysqldump to create a database copy and backups as
follows:
$ mysqldump -u user -p db-name > db-name.sql
Copy the db-name.sql file using sftp/ssh to the remote MySQL server:
$ scp db-name.sql user@remote.box.com:/backup
Restore database at remote server (login over ssh):
$ mysql -u user -p db-name < db-name.sql
Basically you'll use mysqldump to generate a dump of your database, copy it to your local machine, then pipe the contents into mysql to regenerate the DB.
You can copy the DB files themselves, rather than using mysqldump, but only if you can shut down the MySQL service on the remote machine.
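For completeness, a rough sketch of that file-copy approach (assuming systemd hosts, the default /var/lib/mysql data directory, a hypothetical user@remote.box.com, and compatible server versions on both sides):
ssh user@remote.box.com 'sudo systemctl stop mysql'    # stop the remote server so the files are consistent
sudo systemctl stop mysql                              # stop the local server before overwriting its data directory
rsync -avz user@remote.box.com:/var/lib/mysql/ /var/lib/mysql/
sudo chown -R mysql:mysql /var/lib/mysql               # restore ownership after the copy
ssh user@remote.box.com 'sudo systemctl start mysql'
sudo systemctl start mysql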
I would recommend the Xtrabackup tool by Percona. It has support for hot copying data via SSH and has excellent documentation. Unlike mysqldump, this will copy all elements of the MySQL instance including user permissions, triggers, replication, etc.
ssh into the remote machine
make a backup of the database using mysqldump
transfer the file to local machine using scp
restore the database to your local MySQL (see the sketch below)
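A sketch of those steps as commands (the hostnames, usernames, and database names are placeholders to replace):
ssh user@remote.example.com                        # 1. ssh into the remote machine
mysqldump -u dbuser -p remote_db > remote_db.sql   # 2. make a backup with mysqldump
exit
scp user@remote.example.com:remote_db.sql .        # 3. transfer the file with scp
mysql -u localuser -p local_db < remote_db.sql     # 4. restore into your local MySQL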
I have 3 huge 5GB .sql files which I need to upload to my server so I can process them. They're too large to run on my computer, so I can't process the information offline.
So the question: how can I upload a huge 5GB SQL file to the MySQL server without using PHP, since its upload limit is 2MB?
MySQL is hosted on a remote server miles away, so I can't just plug it in, but I have turned remote access on, which should be of some help.
All ideas welcome.
Thank you for your time
Paul
You cannot use PHP to import such huge files. You need to use the command line to do so. If you have access to the server, it's quite easy. You can log in to the server through SecureCRT (paid) or PuTTY (free).
Use this command to import the SQL dump into the server, using an IP address:
$ mysql -u username -p -h 202.54.1.10 databasename < data.sql
OR, using a hostname:
$ mysql -u username -p -h mysql.hosting.com database-name < data.sql
If you want to try it out locally first to get used to the command line, you can use this:
$ mysql -u username -p -h localhost data-base-name < data.sql
Are you using something like phpMyAdmin? If you have FTP/SFTP access, upload the file and then run it with phpMyAdmin.
I have a local MySQL database and I want to deploy it on a remote server.
There is a tool in phpMyAdmin to synchronize local and remote databases, however it is very slow if you have large databases, as in my case.
Is there a better and faster way, such as a shell through SSH, or something else?
Found a solution:
mysqldump -u my_local_db_username --password=my_local_db_password --host=localhost -C my_local_db_name | ssh user#www.my_domain.com "mysql -u my_remote_db_username --password=my_remote_db_password my_remote_db_name"
source: here
How can I import existing MySQL database into Amazon RDS?
I found this page on the AWS docs which explains how to use mysqldump and pipe it into an RDS instance.
Here's their example code (use in command line/shell/ssh):
mysqldump acme | mysql --host=hostname --user=username --password acme
where acme is the database you're migrating over, and hostname/username are those from your RDS instance.
You can connect to RDS as if it were a regular mysql server, just make sure to add your EC2 IPs to your security groups per this forum posting.
I had to include the password for the local mysqldump, so my command ended up looking more like this:
mysqldump --password=local_mysql_pass acme | mysql --host=hostname --user=username --password acme
FWIW, I just completed moving my databases over. I used this reference for mysql commands like creating users and granting permissions.
Hope this helps!
There are two ways to import data:
mysqldump: If your data size is less than 1GB, you can directly use the mysqldump command and import your data into RDS.
mysqlimport: If your data size is more than 1GB, or it is in another format, you can export the data to flat files (optionally compressed) and load it using the mysqlimport command (see the sketch below).
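A sketch of the flat-file route (host, user, paths, and names are placeholders; mysqldump --tab writes one .sql schema file and one tab-delimited .txt data file per table, and the MySQL server needs permission to write to the target directory):
# on the source server: export each table to /tmp/dump as schema (.sql) plus data (.txt)
mysqldump --tab=/tmp/dump -u user -p dbname
# create the tables on RDS first (e.g. by running the .sql schema files), then load a table's data;
# mysqlimport derives the table name from the file name
mysqlimport --local --compress --host=myrds.xxxxxx.us-east-1.rds.amazonaws.com --user=rdsuser -p dbname /tmp/dump/tablename.txt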
I'm a big fan of the SqlYog tool. It lets you connect to your source and target databases and sync schema and/or data. I've also used SQLWave, but switched to SqlYog. Been so long since I made the switch that I can't remember exactly why I switched. Anyway, that's my two cents. I know some will object to my suggestion of Windows GUI tools for MySQL. I actually like the SqlYog product so much that I run it from Wine (works flawlessly from Wine on Ubuntu for me).
This blog might be helpful.
A quick summary of a GoSquared Engineering post:
Configuration + Booting
Select a maintenance window and backup window when the instance will be at lowest load
Choose Multi-AZ or not (highly recommended for auto-failover and maintenance)
Boot your RDS instance
Configure security groups so your apps etc can access the new instance
Data migration + preparation
Enable binlogging if you haven't already
Run mysqldump --single-transaction --master-data=2 -C -q dbname -u username -p > backup.sql on the old instance to take a dump of the current data
Run mysql -u username -p -h RDS_endpoint DB_name < backup.sql to import the data into your RDS instance (this may take a while depending on your DB size)
In the meantime, your current production instance is still serving queries - this is where master-data=2 and binlogging come in
In your backup.sql file, you'll have a line at the top that looks like CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000003', MASTER_LOG_POS=350789121;
Get the diff since backup.sql as an SQL file: mysqlbinlog /var/log/mysql/mysql-bin.000003 --start-position=350789121 --base64-output=NEVER > output.sql
Run those queries on your RDS instance to update it: cat output.sql | mysql -h RDS_endpoint -u username -p DB_name
Get the new log position by finding end_log_pos at the end of the latest output.sql file (see the snippet after this list).
Get the diff since the last output.sql (like step 6) and repeat steps 7 + 8.
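One way to grab that latest end_log_pos (a sketch; mysqlbinlog annotates each event with an end_log_pos marker in its comment headers):
grep -o 'end_log_pos [0-9]*' output.sql | tail -1   # last recorded position in the diff file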
The actual migration
Have all your apps ready to deploy quickly with the new RDS instance
Get the latest end_log_pos from output.sql
Run FLUSH TABLES WITH READ LOCK; on the old instance to stop all writes
Start deploying your apps with the new RDS instance
Run steps 6-8 from above to update the RDS instance with the last queries to the old server
Conclusion
Using this method, you'll have a small amount of time (depending on how long it takes to deploy your apps + how many writes your MySQL instance serves - probably only a minute or two) with writes being rejected from your old server, but you will have a consistent migration with no read downtime.
A full and detailed post explaining how we (GoSquared) migrated to RDS with minimal downtime (including error debugging) is available here: https://engineering.gosquared.com/migrating-mysql-to-amazon-rds.
I completely agree with @SanketDangi.
There are two ways of doing this; one way is, as suggested, using either mysqldump or mysqlimport.
I have seen cases where restoring the data on the cloud creates problems and the data gets corrupted.
However, importing applications into the cloud has become much easier nowadays. You can try uploading your DB server to a public cloud through Ravello.
You can import your database server itself into Amazon using Ravello.
Disclosure: I work for Ravello.
Simplest example:
# export local db to sql file:
mysqldump -uroot -p --databases qwe_db > qwe_db.sql
# Now you can edit qwe_db.sql file and change db name at top if you want
# import sql file to AWS RDS:
mysql --host=proddb.cfrnxxxxxxx.eu-central-1.rds.amazonaws.com --port=3306 --user=someuser -p qwe_db < qwe_db.sql
The AWS RDS customer data import guide for MySQL is available here: http://aws.amazon.com/articles/2933
Create flat files containing the data to be loaded
Stop any applications accessing the target DB Instance
Create a DB Snapshot
Disable Amazon RDS automated backups
Load the data using mysqlimport (sketched below)
Enable automated backups again
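A rough sketch of the snapshot/backup toggling and the load itself with the AWS CLI (the instance identifier, endpoint, retention period, and file path are assumptions to adjust):
aws rds create-db-snapshot --db-instance-identifier mydb --db-snapshot-identifier pre-import             # snapshot before loading
aws rds modify-db-instance --db-instance-identifier mydb --backup-retention-period 0 --apply-immediately  # disable automated backups
mysqlimport --local --host=mydb.xxxxxx.us-east-1.rds.amazonaws.com --user=admin -p dbname /path/to/tablename.txt
aws rds modify-db-instance --db-instance-identifier mydb --backup-retention-period 7 --apply-immediately  # re-enable automated backups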
If you are using the terminal, this is what worked for me:
mysqldump -u local_username -plocal_password local_db_name | mysql -h myRDS-at-amazon.rds.amazonaws.com -u rds-username -prds_password_xxxxx remote_db_name
and then I used MySQL Workbench (free download) to check it was working, because the command line showed nothing after I ran it; I could probably have put -v at the end to see its output (see the example below).
Note: there is no space after -p
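For instance, the same pipe with -v added on the receiving mysql side (a hedged variant of the command above) echoes each statement as it executes, so you can see progress:
mysqldump -u local_username -plocal_password local_db_name | mysql -v -h myRDS-at-amazon.rds.amazonaws.com -u rds-username -prds_password_xxxxx remote_db_name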
Here are the steps I have done and had success with.
Take the mysqldump of the needed database.
mysqldump -u username -p databasename --single-transaction --quick --lock-tables=false >databasename-backup-$(date +%F).sql
(Don't forget to replace username, which is root most of the time, and databasename with the name of the database you are going to migrate to RDS.)
Once prompted, enter your password.
Once done, log in to the RDS instance from your MySQL server (make sure the security groups are configured to allow the connection from EC2 to RDS):
mysql -h hostaddress -P 3306 -u rdsusername -p
(Don't forget to replace hostaddress with the address of your RDS instance and rdsusername with the username for your RDS instance; when prompted, give the password too.)
You can find that hostaddress under Connectivity & security -> Endpoint & port for the RDS database in the AWS Console.
Once logged in, create the database using MySQL commands:
create database databasename;
\q
Once the database is created in RDS, import the SQL file created in Step 1:
mysql -h hostaddress -u rdsusername -p databasename < backupfile.sql
This should import the SQL file to RDS and restore the contents into the new database.
Reference: https://k9webops.com/blog/migrate-an-existing-database-on-mysql-mariadb-to-an-already-running-rds-instance-on-the-aws/
I want to copy all the tables, fields, and data from my local MySQL server to my hosting site's MySQL. Is there a way to copy all the data? (It's only 26KB, very small.)
In phpMyAdmin, just export a dump (using the Export tab) and re-import it on the other server using the SQL tab.
Make sure you compare the results, I have had phpMyAdmin screw up the import more than once.
If you have shell access to both servers, a combination of
mysqldump -u username -p databasename > dump.sql
and a
mysql -u username -p databasename < dump.sql
on the target server is a much faster and more reliable alternative in my experience.
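A sketch of the full round trip, including moving the dump between the machines with scp (the hostname and paths are placeholders):
mysqldump -u username -p databasename > dump.sql                                        # on the source server
scp dump.sql username@target.example.com:/tmp/dump.sql                                  # copy the dump across
ssh -t username@target.example.com 'mysql -u username -p databasename < /tmp/dump.sql'  # restore on the target (-t keeps the password prompt interactive)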
Have a look at
Copying MySQL Databases to Another Machine
Copy MySQL database from one server to another remote server
Please follow these steps:
Create the target database using MySQLAdmin or your preferred method. In this example, db2 is the target database, where the source database db1 will be copied.
Execute the following statement on a command line:
mysqldump -h [server] -u [user] -p[password] db1 | mysql -h [server] -u [user] -p[password] db2
Note: There is NO space between -p and [password]
I copied this from Copy/duplicate database without using mysqldump.
It works fine. Please ensure that you are not inside the mysql shell while running this command.
If you have the same version of MySQL on both systems (or versions with a compatible data file format), you may just copy the data files directly. Usually the files are kept in /var/lib/mysql/ on Unix systems.