Currently I have an EC2 instance and an RDS (MySQL) instance. I exported my database from my local workstation, uploaded it to my EC2 instance, and put it in my ec2-user home directory.
I log into the EC2 instance and run the command below in the same directory as my projectname.sql file, which is substituted for "backupfile.sql" in the command. After running it, my site was able to connect to the database successfully; I knew this because all the errors on my site disappeared. The issue now is that my tables do not seem to have been imported.
mysql -h host.address.for.rds.server -u rdsusername -p rdsdatabase < backupfile.sql
Running this command:
mysql -h host.address.for.rds.server -P 3306 -u rdsusername -p
With my correct credentials, this logs me into the RDS server. I then run:
use databasename;
show tables;
But no tables are shown.
My end goal is to get my localhost database onto AWS RDS by uploading an SQL file. If there is an easier way, please let me know! This is my first time setting up AWS and these roadblocks are killing me.
I use the third-party product SQLyog to replicate my database into AWS. It allows you to have connections to multiple databases at the same time and copy between them. In order to set up a connection to your AWS instance you will need the .pem file on your local machine and will have to enable MySQL/Aurora in your security group on EC2. There is some more information here. Once it is set up, it is extremely easy to copy databases or individual tables.
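If you prefer the CLI to the console for the security group change, something like this opens port 3306 (the console's "MySQL/Aurora" rule); the group ID and CIDR below are placeholders you would replace with your own:
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 3306 \
    --cidr 203.0.113.10/32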
From the AWS documentation:
sudo mysqldump -u <local_user> \
--databases world \
--single-transaction \
--compress \
--order-by-primary \
-p<local_password> | mysql -u <RDS_user_name> \
--port=3306 \
--host=hostname \
-p<RDS_password>
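Applied to the setup in the question, that pipeline would look roughly like this (localusername, localpassword, and rdspassword are placeholders; note there is no space after -p, and --databases makes the dump include its own CREATE DATABASE and USE statements, so the tables end up in a schema named after the source database):
mysqldump -u localusername \
    --databases projectname \
    --single-transaction \
    --order-by-primary \
    -plocalpassword | mysql -u rdsusername \
    --port=3306 \
    --host=host.address.for.rds.server \
    -prdspassword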
Related
I have an 885 GB MySQL 5.6 database running in Amazon RDS. I'd like to move it into Google's CloudSQL service. To do so I'm taking the following steps:
Following Amazon's instructions for moving a database out of RDS (since Google seems to require GTID for replication and RDS does not support GTID for MySQL 5.6).
Created an RDS read-replica.
Once the read-replica was up to date with the master I stopped replication, recorded the binlog location, and dumped the database to a file.
Brought up an EC2 instance running Ubuntu and MySQL 5.6 and I'm importing the dump file into the EC2 database.
The problem I'm having is that the import of the dump file into the EC2 database is taking much longer than I had hoped. After about three and a half days the EC2 instance is only about 60% done with the database load.
The mysqldump command I ran was based off Amazon's recommendation...
mysqldump -h RdsInstanceEndpoint \
-u user \
-ppassword \
--port=3306 \
--single-transaction \
--routines \
--triggers \
--databases database database2 \
--compress \
--compact > dumpfile.sql.gz
I decompressed the dump file, and to import the data I am simply running...
mysql -u user -ppassword < dumpfile.sql
Is there anything I can do to make this process run faster? Are there any command line options I should be using that I am not?
I have been trying this but with no success; I get to this stage and no further.
If you need to migrate from a local database to Azure ClearDB, simply use the mysqldump utility in conjunction with the mysql command line client. For example:
mysqldump --single-transaction -u (old_database_username) -p -h (old_database_host) (database_name) | mysql -h (cleardb_host) -u (cleardb_user) -p -D (cleardb_database)
But if you insist on using MySQL Workbench, then this might be of help:
http://nullsession.com/2014/02/04/downgrading-you-cleardb-mysql-database-in-azure/
I need to copy an entire database from a MySQL installation on a remote machine, via SSH, to my local machine's MySQL.
I know the SSH credentials and both the local and remote MySQL admin usernames and passwords.
Is this enough information, and how is it done?
From the remote server to the local machine:
ssh {ssh.user}@{remote_host} \
  'mysqldump -u {remote_dbuser} --password={remote_dbpassword} {remote_dbname} | bzip2 -c' \
  | bunzip2 -dc | mysql -u {local_dbuser} --password={local_dbpassword} -D {local_dbname}
That will dump the remote DB into your local MySQL via pipes:
ssh mysql-server "mysqldump --all-databases --quote-names --opt --hex-blob --add-drop-database" | mysql
You should take care with the users in mysql.user, though.
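For example, a quick way to see what a given account has been granted before recreating it locally ('appuser'@'%' is just a placeholder):
# list the privileges granted to one account on the remote server; the output
# is a set of GRANT statements you can review and replay on the local server
mysql -u dba -p -e "SHOW GRANTS FOR 'appuser'@'%';"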
Moreover, to avoid typing users and passwords for mysqldump and mysql on the local and remote hosts, you can create a ~/.my.cnf file:
[mysql]
user = dba
password = foobar
[mysqldump]
user = dba
password = foobar
See http://dev.mysql.com/doc/refman/5.1/en/option-files.html
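With that file in place on both hosts (and readable only by you), mysqldump and mysql pick up the credentials automatically, so the earlier remote-to-local pipe needs no user or password options at all:
chmod 600 ~/.my.cnf
ssh {ssh.user}@{remote_host} 'mysqldump {remote_dbname} | bzip2 -c' | bunzip2 -dc | mysql -D {local_dbname}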
Modified from http://www.cyberciti.biz/tips/howto-copy-mysql-database-remote-server.html (modified because I prefer to use .sql as the extension for SQL files):
Usually you run mysqldump to create a database copy and backups as follows:
$ mysqldump -u user -p db-name > db-name.sql
Copy the db-name.sql file to the remote MySQL server using sftp/ssh:
$ scp db-name.sql user@remote.box.com:/backup
Restore the database on the remote server (log in over ssh):
$ mysql -u user -p db-name < db-name.sql
Basically you'll use mysqldump to generate a dump of your database, copy it to your local machine, then pipe the contents into mysql to regenerate the DB.
You can copy the DB files themselves, rather than using mysqldump, but only if you can shut down the MySQL service on the remote machine.
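A rough sketch of that approach, assuming systemd service names, the default /var/lib/mysql data directory, matching MySQL versions on both machines, and an SSH user that can read the remote data directory; adjust for your setup:
# stop MySQL on both ends so the data files are consistent
ssh root@remote.box.com "systemctl stop mysql"
sudo systemctl stop mysql
# copy the raw data directory to the local machine
sudo rsync -avz root@remote.box.com:/var/lib/mysql/ /var/lib/mysql/
# fix ownership and start the local server again
sudo chown -R mysql:mysql /var/lib/mysql
sudo systemctl start mysql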
I would recommend the Xtrabackup tool by Percona. It has support for hot copying data via SSH and has excellent documentation. Unlike mysqldump, this will copy all elements of the MySQL instance including user permissions, triggers, replication configuration, and so on.
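A hedged sketch of a hot copy streamed over SSH with xtrabackup (the flags, paths, and xbstream format here are assumptions; check Percona's documentation for your version):
# run from the local machine: take a hot backup on the remote server, stream it
# back over SSH, and unpack it locally with xbstream
ssh root@remote.box.com "xtrabackup --backup --stream=xbstream --target-dir=/tmp" \
  | xbstream -x -C /var/backups/mysql-copy
# the copied files must be prepared before a server can use them
xtrabackup --prepare --target-dir=/var/backups/mysql-copy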
ssh into the remote machine
make a backup of the database using mysqldump
transfer the file to local machine using scp
restore the database to your local mysql (a rough sketch of these steps follows)
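Something like this, with placeholder hosts, users, and database names:
# 1. ssh into the remote machine
ssh user@remote.box.com
# 2. make a backup of the database there with mysqldump
mysqldump -u remote_dbuser -p remote_dbname > /tmp/remote_dbname.sql
exit
# 3. back on the local machine, transfer the file with scp
scp user@remote.box.com:/tmp/remote_dbname.sql .
# 4. restore it into your local MySQL
mysql -u local_dbuser -p local_dbname < remote_dbname.sql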
Is there a way to get a MySQL dump of a database that is not stored locally? I have the connection string to the database located on another server, but it doesn't seem like MySQLDump wants to do anything if the server is remote.
MySQLDump has a -h parameter to connect to a remote host.
First try the mysql client application:
mysql -h your.server.com -uYourUser -pYourPass
If that works, use the same format for MySQLDump
mysqldump -h your.server.com -uYourUser -pYourPass --all-databases
Edit for ajreal:
By default, mysqld (the MySQL server) listens on port 3306, and mysql (the client application) connects using that port. However, if you changed your configuration, update your command accordingly. For example, for port 3307, use:
mysql -h your.server.com -P 3307 -uYourUser -pYourPass
Check your MySQL config file to see how you can connect to your MySQL server.
Here is an example of how to dump a MySQL database named 'abc123' directly to a gzip archive, without a super big text dump file on disk:
mysqldump -u root --opt --databases abc123 | gzip > /tmp/abc123.export.sql.gz
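To load it back in later, you can stream the archive straight into mysql without unpacking it to disk first; since the dump used --databases, it recreates abc123 itself:
gunzip < /tmp/abc123.export.sql.gz | mysql -u root -p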
I know how to import an SQL file via the CLI:
mysql -u USER -p DBNAME < dump.sql
But that's if the dump.sql file is local. How could I use a file on a remote server?
You didn't say what network access you have to the remote server.
Assuming you have SSH access to the remote server, you could pipe the results of a remote mysqldump to the mysql command. I just tested this, and it works fine:
ssh remote.com "mysqldump remotedb" | mysql localdb
I put things like user, password, and host into .my.cnf so I'm not constantly typing them; it's annoying and bad for security on multi-user systems, since you end up putting passwords in cleartext into your bash_history! But you can easily add the -u, -p, and -h options back in on both ends if you need them:
ssh remote.com "mysqldump -u remoteuser -p'remotepass' remotedb" | mysql -u localuser -p'localpass' localdb
Finally, you can pipe through gzip to compress the data over the network:
ssh remote.com "mysqldump remotedb | gzip" | gzip -d | mysql localdb
Just thought I'd add to this as I was seriously low on space on my local VM; if the .sql file already exists on the remote server, you could do:
ssh <ip-address> "cat /path/to/db.sql" | mysql -u <user> -p<password> <dbname>
I'd use wget to either download it to a file or pipe it in.
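For example, assuming the dump is reachable over HTTP at a hypothetical URL:
# pipe the remote file straight into mysql
wget -O - http://remote.example.com/dump.sql | mysql -u USER -p DBNAME
# or download it first and import as usual
wget http://remote.example.com/dump.sql
mysql -u USER -p DBNAME < dump.sql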