I'm using RDS on Amazon and need a way to import a large database of over 3TB over to Google Cloud SQL.
The biggest problem here is time -- I need this to happen quickly. It doesn't feel right having to compress 3TB of data into a single .sql file, move it to an S3 bucket and then import that huge file into Google, which is what they seem to prefer you to do.
Apparently AWS doesn't let you create an image and move it to S3, so I can't then import that image over into Google.
Also, there doesn't seem to be a way to do a mysqldump / import from a remote server via the Google Cloud Console.
Has anybody faced the same issue, and is there a quick and direct way of approaching this?
After many hours of searching around, I was able to use an existing AWS instance to act as a proxy between the two remote SQL servers.
After allowing access to the Google Cloud SQL server (entering the IP of the AWS machine under the 'Authorization' tab), you can connect to both remote servers and use something like this to directly copy each database table over:
mysqldump -h yourdatabase.rds.amazonaws.com -u user -ppassword awsdbtable --compress --single-transaction | mysql --host=YourGoogleSQL_IP --user=dbuser --password=dbpassword googledbtable
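If you have more than one database to move, a small loop on the proxy instance keeps the same pipe going for each one. This is only a sketch under the assumptions above -- the database names, credentials and hosts are placeholders, and each target database must already exist on the Cloud SQL side, since mysqldump without --databases does not emit CREATE DATABASE:

# run inside screen/tmux or under nohup -- a 3TB transfer will take a while
for DB in db1 db2 db3; do
  mysqldump -h yourdatabase.rds.amazonaws.com -u user -ppassword "$DB" \
    --compress --single-transaction \
  | mysql --host=YourGoogleSQL_IP --user=dbuser --password=dbpassword "$DB"
done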
I want to import a MySQL DB into an AWS MySQL DB using a .sql file that I previously exported from another DB.
I am using Sequel Pro, but it takes ages. I would like to know if there is a faster way to do it, like uploading the .sql file directly to AWS instead of going through Sequel Pro.
Yes, it will take time because you are running the import through a client tool and the transfer happens over the public internet. The best and most secure way to import the database is:
1 - Create a dedicated EC2 instance in the same VPC as the RDS instance
2 - Zip the backup file with a good compression tool to decrease its size, and ship it to the EC2 instance directly via SCP
3 - Once the transfer has completed, unzip the backup file and import it with the traditional import command (a full sketch of these steps follows below). The import then happens over the private network
mysql -u username -ppassword -h <RDS endpoint> database_name < backup.sql
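A rough sketch of those three steps as shell commands, assuming a dump file called backup.sql, an EC2 key pair key.pem and placeholder hostnames (none of these names come from the original answer):

# on your local machine: compress the dump and ship it to the EC2 instance
gzip -9 backup.sql
scp -i key.pem backup.sql.gz ec2-user@EC2_PUBLIC_IP:/home/ec2-user/

# on the EC2 instance (same VPC as RDS): unzip and import over the private network
gunzip backup.sql.gz
mysql -u username -p -h your-rds-endpoint.rds.amazonaws.com database_name < backup.sql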
Is there a simpler approach to routinely keep a local MySQL database updated with data from a remote database? My setup needs me to run a local copy of the project in the office network to allow local email sending. But the emails link back to a live server. Also, the admin users need to access the project from the internet anywhere to compose the emails. Currently, my options are:
Connect local project to the remote database.
Export the remote database, clean the local database and then import the dump.
This is something I need to do routinely every week. I went with approach #1, but it takes a long time to pull data this way, so I am wondering whether I should keep doing it in the long run.
On a routine basis, just do a mysqldump export on the remote server and then a mysql import on the local server.
mysqldump -u root -proot -h remote-server test > db%FileDate%.sql
And on local server, do the import
mysql -u root -proot -h local-server test < db%FileDate%.sql
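If you want to automate the weekly refresh, a single crontab entry on the local server can chain the two commands. This is only a sketch: the hostnames are placeholders and the credentials are assumed to live in ~/.my.cnf so they don't appear on the command line.

# Sundays at 03:00: pull a fresh dump from the remote server and load it locally
0 3 * * 0  mysqldump -h remote-server test > /tmp/test.sql && mysql -h local-server test < /tmp/test.sql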
You can use MySQL incremental backups. Please refer to the links below:
https://www.percona.com/forums/questions-discussions/percona-xtrabackup/10772-[script]-automatic-backups-incremental-full-and-restore
https://dev.mysql.com/doc/mysql-enterprise-backup/8.0/en/mysqlbackup.incremental.html
Quick help here...
I have these 2 MySQL instances... We are not going to pay for this service anymore, so they will be gone... How can I obtain a backup file that I can keep for the future?
I do not have much experience with MySQL, and all the threads talk about mysqldump, which I don't know whether it is valid for this case. I also see the option to take a snapshot, but I want a file I can save (like a .bak).
Thanks in advance!
You have several choices:
You can replicate your MySQL instances to MySQL servers running outside AWS. This is a bit of a pain, but will result in a running instance. http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/MySQL.Procedural.Exporting.NonRDSRepl.html
You can use the command-line mysqldump --all-databases to generate a (probably very large) .sql file from each database server instance (see "Export and Import all MySQL databases at one time").
You can use the command-line mysqldump to export one database at a time. This is what I would do (a sketch follows at the end of this answer).
You can use a GUI MySQL client -- like HeidiSQL -- in place of the command line to export your databases one at a time. This might be easier.
You don't need to, and should not, export the mysql, information_schema, or performance_schema databases; these contain system information and will already exist on another server.
In order to connect from outside AWS, you'll have to set the AWS protections appropriately. And you'll have to know the internet address, username, and password (and maybe port) of the MySQL server at AWS.
Then you can download HeidiSQL (if you're on windows) or some appropriate client software, connect to your database instance, and export your databases at your leisure.
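For the export-one-database-at-a-time route, the commands would look roughly like this; the endpoint, credentials and database name are placeholders, not values from the question:

# dump one application database from the RDS instance
mysqldump -h mydb.abc123.us-east-1.rds.amazonaws.com -P 3306 -u admin -p mydatabase > mydatabase.sql
# later, load it into whichever MySQL server you keep
mysql -h new-server -u admin -p mydatabase < mydatabase.sql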
I recently downgraded my EC2 instance. I can no longer connect to RDS. I think it might be that the internal IP is different and now the logins are attached to that specific IP. I haven't been able to figure it out. I would like to be able to get a backup from the snapshot. Is there a way to download it through AWS?
You can't download an RDS snapshot. You can however connect to it and export your databases. Downgrading your instance should not affect connectivity unless you had set up your security groups incorrectly (Opening ports to an IP instead of another security group).
The accepted answer is not up-to-date anymore. Instead of using command line tools, you can use the AWS console.
Navigate to RDS -> Snapshots -> Manual/System -> Select Snapshot -> Actions -> Export to S3
Going through S3 is common in most production environments, as you won't have direct access to the DB instance.
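The same export can also be started from the AWS CLI if you prefer to script it; the identifier, ARNs, bucket and KMS key below are placeholders. Note that the export lands in S3 as Parquet files, not as a .sql dump.

aws rds start-export-task \
  --export-task-identifier my-snapshot-export \
  --source-arn arn:aws:rds:us-east-1:123456789012:snapshot:my-snapshot \
  --s3-bucket-name my-export-bucket \
  --iam-role-arn arn:aws:iam::123456789012:role/rds-s3-export-role \
  --kms-key-id my-kms-key-id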
In addition to datasage's answer:
As an option for a production instance, you can create a read-only replica in RDS and make the dumps from that replica. This way you avoid locking up the production DB during the dump.
We use this scheme for PostgreSQL + pg_dump. I hope it will be helpful to somebody else too.
I use:
pg_dump -v -h RDS_URL -Fc -o -U username dbname > your_dump.sql
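Since -Fc produces a custom-format archive rather than plain SQL, restoring it goes through pg_restore instead of psql. A minimal sketch, with placeholder host and database names:

pg_restore -v -h target-host -U username -d dbname your_dump.sql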
I also needed to do this, so I created a dump of the DB (MySQL) by logging into my app server, which has permission to access the DB. I then downloaded the dump to my local machine using scp.
I used:
mysqldump -uroot -p -h<HOST> --single-transaction <DBNAME> > output.sql
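The scp step mentioned above might look like this; the server name and paths are placeholders (compressing the dump first with gzip also cuts the transfer time):

scp appuser@app-server.example.com:/home/appuser/output.sql ./output.sql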
Another option is to share your snapshot if you don't need to download it and just want to share it with a different AWS account ID.
It sounds like your RDS instance is within a VPC, inside a private subnet with a security group and ACL. The only way to solve your issue is to take a snapshot and create a new DB instance from it within the default VPC, where all connections are allowed. After that you take a classic backup using a DB client or the CLI.
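If you want to script that snapshot-restore step, the AWS CLI call would look roughly like this; the identifiers are placeholders:

aws rds restore-db-instance-from-db-snapshot \
  --db-instance-identifier temp-export-instance \
  --db-snapshot-identifier my-manual-snapshot \
  --publicly-accessible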
I'm very surprised that it seems impossible to upload more than a few megabytes of data to a MySQL database through phpMyAdmin, whereas I can easily upload an MS Access table of up to 2 gigabytes.
So is there any script in PHP, or anything else, that allows this, unlike phpMyAdmin?
phpMyAdmin is based on HTML and PHP. Neither technology was built for, or ever intended to handle, such amounts of data.
The usual way to go about this would be to transfer the file to the remote server -- for example via (S)FTP, SSH, a Samba share or whatever -- and then import it locally using the mysql command:
mysql -u username -p -h localhost databasename < infile.sql
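If you would rather skip copying the file first, the same import can be streamed over SSH in one step. A minimal sketch with a placeholder hostname; the password could also come from ~/.my.cnf on the server instead of the command line:

ssh user@dbserver.example.com "mysql -u username -ppassword databasename" < infile.sql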
Another very fast way to exchange data between two servers running the same MySQL version (it doesn't dump and re-import the data but copies the data directories directly) is mysqlhotcopy. It runs on Unix/Linux and NetWare based servers only, though.
No. Use the command line client.
mysql -hdb.example.com -udbuser -p < fingbigquery.sql