Importing a MySQL database to RDS

I am trying to dump my local MySQL database to an RDS database. I am using the following command in the Windows command prompt:
mysqldump -u <username> <localdbname> --single-transaction --compress --order-by-primary
-p<localdbpassword> > mysql -u <rdsusername> --port=3306 --host=<host> -p<rdspassword>
I noticed that I get a "database doesn't exist" error when the names of my local database and the RDS database differ. But when they have the same name, the command runs and nothing happens. How should I change it?
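As written, the shell treats > mysql as output redirection, so everything after it is passed to mysqldump itself and no import ever runs. A minimal sketch of the usual fix, assuming the intent is to stream the dump straight into RDS: pipe mysqldump into mysql and name the target database on the mysql side. The <rdsdbname> placeholder is mine, and that database must already exist on the RDS instance:
mysqldump -u <username> -p<localdbpassword> --single-transaction --compress --order-by-primary <localdbname> | mysql -u <rdsusername> -p<rdspassword> --host=<host> --port=3306 <rdsdbname>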

Related

Is there a fast way to export MySQL data from AWS RDS and import it into Google CloudSQL?

I have an 885 GB MySQL 5.6 database running in Amazon RDS. I'd like to move it into Google's CloudSQL service. To do so I'm taking the following steps:
I am following Amazon's instructions for moving a database out of RDS (since Google seems to require GTID for replication, and RDS does not support GTID for MySQL 5.6).
I created an RDS read replica.
Once the read replica was up to date with the master, I stopped replication, recorded the binlog position, and dumped the database to a file.
I brought up an EC2 instance running Ubuntu and MySQL 5.6, and I'm importing the dump file into the EC2 database.
The problem I'm having is that the import of the dump file into the EC2 database is taking much longer than I had hoped. After about three and a half days, the EC2 instance is only about 60% done with the database load.
The mysqldump command I ran was based on Amazon's recommendation:
mysqldump -h RdsInstanceEndpoint \
-u user \
-ppassword \
--port=3306 \
--single-transaction \
--routines \
--triggers \
--databases database database2 \
--compress \
--compact | gzip > dumpfile.sql.gz
I decompressed the dump file, and to import the data I am simply running:
mysql -u user -ppassword < dumpfile.sql
Is there anything I can do to make this process run faster? Are there any command line options I should be using that I am not?
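One detail that may matter here: --compact strips the header that mysqldump normally emits, which includes statements disabling foreign-key and unique checks during the load, so the import may be re-validating every row. A sketch of restoring those settings for the import session, using the stock mysql client's --init-command option (user and file names are the placeholders from above):
mysql -u user -ppassword --init-command="SET SESSION foreign_key_checks=0, unique_checks=0" < dumpfile.sql
On a self-managed EC2 instance, temporarily setting innodb_flush_log_at_trx_commit=2 is another common lever, at the cost of some crash safety during the load.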

Migrating a large MySQL 5.7 database on AWS to Aurora 5.6

We have a quite large (about 1 TB) MySQL 5.7 DB hosted on RDS. We want to migrate it to Aurora 5.6, because of parallel queries (these are available only for 5.6).
It's not possible to do that via snapshot, because the versions are not the same. We need to do a mysqldump and then restore it.
I tried several options, but most of them failed because of the size of the DB.
For example, a straight import:
nohup mysqldump -h mysql_5_7host.amazonaws.com -u user -pPass db_name | mysql -u user2 -pPAss2 -h aurora_5_6.amazonaws.com db_name
The error in nohup.out:
mysqldump: Error 2013: Lost connection to MySQL server during query when dumping table
A dump to an S3 file also failed:
nohup mysqldump -h mysql_5_7host.amazonaws.com -u user -pPAss db_name | aws s3 cp - s3://bucket/db-dump.sql
error:
An error occurred (InvalidArgument) when calling the UploadPart operation: Part number must be an integer between 1 and 10000, inclusive
Both of the previous methods worked for me on a smaller DB of about 10 GB, but not on 1 TB.
Is there any other way to migrate such a database?
Many thanks.
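Two suggestions, hedged since neither comes from this thread. The S3 failure looks like the CLI hitting the 10,000-part multipart limit: when reading from stdin it cannot know the total size, so with the default 8 MB part size the upload runs out of part numbers around 80 GB. aws s3 cp accepts an --expected-size hint (in bytes) for exactly this streaming case, and gzipping the stream shrinks the object as well:
nohup mysqldump -h mysql_5_7host.amazonaws.com -u user -pPAss db_name | gzip | aws s3 cp - s3://bucket/db-dump.sql.gz --expected-size 1099511627776 &
For the "Lost connection to MySQL server during query" error, the usual suspects are net_write_timeout and max_allowed_packet on the source; raising them in the RDS parameter group for the duration of the dump is a common workaround.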

Migrating a local database to Azure ClearDB through MySQL Workbench

I have been trying this with no success; I get to this stage and no further.
If you need to migrate from a local database to Azure ClearDB, simply use the mysqldump utility in conjunction with the mysql command-line client. For example:
mysqldump --single-transaction -u (old_database_username) -p -h (old_database_host) (database_name) | mysql -h (cleardb_host) -u (cleardb_user) -p -D (cleardb_database)
But if you insist on using MySQL Workbench, then this might be of help:
http://nullsession.com/2014/02/04/downgrading-you-cleardb-mysql-database-in-azure/

How to migrate a large database to a new server

I need to migrate my database from my old server to my new server. I'm having real trouble transferring the database because it is large (5 GB). I tried cPanel's transfer feature, but it didn't work for me. I need a more efficient way to transfer the data.
Can anyone guide me through the full transfer process? Should I transfer using import and export, or do I need to use another method?
The MySQL storage engine is MyISAM and the size is 5 GB.
You can use the command line if you have SSH access to both servers (see the commands below); otherwise, you can try the Navicat application to sync the databases.
SSH commands
Take a mysqldump of the database:
http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html
Create a tarball of the SQL dump file:
tar -zcvf db.tar.gz db.sql
Now upload the tar.gz file to the other server using the scp command:
scp -Cp db.tar.gz {username}@{server}:{path}
Now log in to the other server using SSH.
Untar the file:
tar -zxvf db.tar.gz
Import into the database:
mysql -u{username} -p {database} < db.sql
Please double-check the syntax; the commands should work, but treat this as a general direction only.
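Putting the steps together: if the two servers can reach each other directly, the same transfer can be collapsed into a single pipeline with no intermediate file (a sketch only; the placeholders match the ones above, and the remote password is inlined purely for brevity):
mysqldump -u{username} -p {database} | gzip | ssh {username}@{server} "gunzip | mysql -u{username} -p{password} {database}"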
Thanks.
For large databases I would suggest using mysqldump, if you have SSH access to the server.
From the manual:
Use mysqldump --help to see what options are available.
The easiest (although not the fastest) way to move a database between two machines is to run the following commands on the machine on which the database is located:
shell> mysqladmin -h 'other_hostname' create db_name
shell> mysqldump db_name | mysql -h 'other_hostname' db_name
If you want to copy a database from a remote machine over a slow network, you can use these commands:
shell> mysqladmin create db_name
shell> mysqldump -h 'other_hostname' --compress db_name | mysql db_name
You can also store the dump in a file, transfer the file to the target machine, and then load the file into the database there. For example, you can dump a database to a compressed file on the source machine like this:
shell> mysqldump --quick db_name | gzip > db_name.gz
Transfer the file containing the database contents to the target machine and run these commands there:
shell> mysqladmin create db_name
shell> gunzip < db_name.gz | mysql db_name
You can also use mysqldump and mysqlimport to transfer the database. For large tables, this is much faster than simply using mysqldump. In the following commands, DUMPDIR represents the full path name of the directory you use to store the output from mysqldump.
First, create the directory for the output files and dump the database:
shell> mkdir DUMPDIR
shell> mysqldump --tab=DUMPDIR db_name
Then transfer the files in the DUMPDIR directory to some corresponding directory on the target machine and load the files into MySQL there:
shell> mysqladmin create db_name # create database
shell> cat DUMPDIR/*.sql | mysql db_name # create tables in database
shell> mysqlimport db_name DUMPDIR/*.txt # load data into tables
Do not forget to copy the mysql database because that is where the grant tables are stored. You might have to run commands as the MySQL root user on the new machine until you have the mysql database in place.
After you import the mysql database on the new machine, execute mysqladmin flush-privileges so that the server reloads the grant table information.
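A short sketch of those last two steps in the manual's own notation, assuming the target server is 'other_hostname' and your account there has sufficient privileges:
shell> mysqldump mysql | mysql -h 'other_hostname' mysql # copy the grant tables
shell> mysqladmin -h 'other_hostname' flush-privileges # reload grant information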

Extracting a remote MySQL Database without MyODBC?

Is there a way to get a MySQL dump of a database that is not stored locally? I have the connection string to the database located on another server, but mysqldump doesn't seem to want to do anything if the server is remote.
mysqldump has a -h parameter to connect to a remote host.
First try the mysql client application:
mysql -h your.server.com -uYourUser -pYourPass
If that works, use the same format for mysqldump:
mysqldump -h your.server.com -uYourUser -pYourPass --all-databases
Edit for ajreal:
By default, mysqld (the MySQL server) runs on port 3306, and mysql (the client application) connects using that port. However, if you changed your configuration, update your command accordingly. For example, for port 3307, use:
mysql -h your.server.com -P 3307 -uYourUser -pYourPass
Check your MySQL config file to see how you can connect to your MySQL server.
Here is an example of how to dump a MySQL DB named 'abc123' directly to a gzip archive, without a huge text dump file on disk:
mysqldump -u root --opt --databases abc123 | gzip > /tmp/abc123.export.sql.gz
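To load that archive on another machine without writing the uncompressed file to disk first: because the dump was made with --databases, it carries its own CREATE DATABASE and USE statements, so no schema name is needed on the mysql side.
gunzip < /tmp/abc123.export.sql.gz | mysql -u root -p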