Export mysql query to amazon s3 bucket - mysql

Based on the accepted answer here, I am able to export the results of a MySQL query to a CSV file on my Amazon EC2 instance using the following:
mysql -u user -ppassword -e "SELECT * FROM table" > /data.csv
However, as the exported file is large, I want to export it to an Amazon S3 bucket (s3://mybucket) which is accessible from my EC2 instance.
I tried:
mysql -u user -ppassword -e "SELECT * FROM table" > s3://mybucket/data.csv
But it doesn't export the file.

If you want to use the mysql command line program, then you have two choices:
Increase the size of your instance's storage so that the file can be created, then copy the file to S3.
Create a separate program or script that reads from Standard Input and writes to S3 (see the sketch below).
Another solution would be to create a simple program that processes your SELECT and directly writes to S3. There are lots of examples of this on the Internet in Python, Java, etc.
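For the second choice, a minimal sketch using the AWS CLI (assumed to be installed and configured with credentials that can write to the bucket) is to pipe the query output into aws s3 cp, which accepts - as a source meaning standard input; the bucket name and credentials below are placeholders:
# stream the query result straight to S3 without creating a local file
mysql -u user -ppassword -e "SELECT * FROM table" | aws s3 cp - s3://mybucket/data.csv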

In addition to the accepted answer, if you are using Aurora, then you can do:
SELECT * FROM table INTO OUTFILE S3 's3://mybucket/table-data';
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Integrating.SaveIntoS3.html
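Note that the Aurora cluster must be allowed to write to the bucket (an IAM role associated with the cluster, as described in the linked documentation). A slightly fuller sketch, assuming that role is already in place, with explicit CSV formatting:
-- assumes the cluster has an IAM role granting write access to the bucket
SELECT * FROM table
INTO OUTFILE S3 's3://mybucket/table-data'
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n';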
An alternative approach:
mysqldump -h [db_hostname] -u [db_user] -p[db_passwd] [databasename] [tablename] | aws s3 cp - s3://[s3_bucketname]/[mysqldump_filename]
This streams the dump directly to S3 without taking up local disk space.
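If the dump is large, you can also compress it inside the same pipeline before it reaches S3; a sketch with the same placeholder names:
mysqldump -h [db_hostname] -u [db_user] -p[db_passwd] [databasename] [tablename] | gzip | aws s3 cp - s3://[s3_bucketname]/[mysqldump_filename].gz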

Related

Importing a DB into AWS RDS DB

I want to import a MySQL DB into an AWS MySQL DB using a SQL file that I previously exported from another DB.
I am using Sequel Pro, but it takes ages. I would like to know if there is a faster way to do it, such as uploading the SQL file directly to AWS instead of using Sequel Pro.
Yes, it will take time, because you are doing the import via a client tool and the transmission happens over the public internet. The best and most secure way to import the database is:
1 - Create a dedicated EC2 instance in the same VPC as the RDS instance.
2 - Compress the backup file with a good compression tool to decrease its size, and copy it to the EC2 instance directly via SCP.
3 - Once the copy has completed, unzip the backup file and import it with the traditional import command (sketched below). This import happens over the private network:
mysql -u username -ppassword -h <rds_endpoint> database_name < backup.sql
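A sketch of the three steps as shell commands, assuming a Linux source machine and placeholder hostnames:
# 1. On the source machine: compress the dump
gzip backup.sql                                      # produces backup.sql.gz
# 2. Copy it to the EC2 instance that sits in the same VPC as RDS
scp backup.sql.gz ec2-user@<ec2_public_ip>:/tmp/
# 3. On the EC2 instance: decompress and import over the private network
gunzip /tmp/backup.sql.gz
mysql -u username -ppassword -h <rds_endpoint> database_name < /tmp/backup.sql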

Import/Export mysql between two remote hosts over SSH

I'm using RDS on Amazon and need a way to import a large database of over 3TB over to Google SQL Cloud.
The biggest problem here is time -- I need this to happen quickly. It doesn't feel right having to compress 3TB of data into a single .sql file, move it to an s3 bucket and then import that huge file into Google - which is what they seem to prefer you to do.
Apparently AWS doesn't let you create an image and move it to S3, so I can't then import that image over into Google.
Also, there doesn't seem to be a method to do a mysqldump / import from a remote server via the Google Cloud Console.
Has anybody faced the same issue, and is there a quick and direct way of approaching this?
After many hours of searching around, I was able to use an existing AWS instance to act as a proxy between the two remote SQL servers.
After allowing access to the Google SQL server (entering the IP for an AWS machine under the 'authorization' tab) you can connect to both remote servers and use something like this to directly copy each database table over:
mysqldump -h yourdatabase.rds.amazonaws.com -u user -ppassword awsdbtable --compress --single-transaction | mysql --host=YourGoogleSQL_IP --user=dbuser --password=dbpassword googledbtable
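Because a transfer of this size can run for many hours, it may be worth running the pipe under nohup (or inside screen/tmux) so it survives an SSH disconnect; a sketch using the same placeholder credentials:
nohup sh -c 'mysqldump -h yourdatabase.rds.amazonaws.com -u user -ppassword awsdbtable --compress --single-transaction | mysql --host=YourGoogleSQL_IP --user=dbuser --password=dbpassword googledbtable' > copy.log 2>&1 &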

load data from s3 to mySQL running EC2 instance (not RDS)

I want to be able to use the LOAD DATA INFILE command in MySQL, but instead of loading the data from a local file I want to load it from a CSV file stored in S3.
I.e., if the file is in local storage it'd look like:
LOAD DATA INFILE 'C:\\abc.csv' INTO TABLE abc
But if it's in S3, not sure how I could do something like this.
Is this possible?
NOTE: this is not an RDS machine, so this command does not seem to work:
http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-template-copys3tords.html
The mysql CLI allows you to execute STDIN as a stream of SQL statements.
Using a combination of the AWS CLI (aws s3 cp) and mkfifo, you can stream data out of S3.
Then it's a simple matter of connecting the streams with something that re-formats the CSV into valid SQL.
mkfifo /tmp/mypipe
aws s3 cp s3://your/s3/object /tmp/mypipe &    # run in the background; the copy blocks until a reader opens the pipe
python transform_csv_to_sql.py < /tmp/mypipe | mysql target_database
You might be able to remove the Python step and use MySQL's own CSV parsing by telling MySQL to load the data directly from your fifo:
mkfifo /tmp/mypipe
aws s3 cp s3://your/s3/object /tmp/mypipe &
mysql target_database --execute "LOAD DATA INFILE '/tmp/mypipe' INTO TABLE your_table"
Good luck!

Easiest way to chunk data from MySQL for import into shared hosting MySQL database?

I have a MySQL table with c.1,850 rows and two columns - ID (int - not auto-incrementing) and data (mediumblob). The table is c.400MiB, with many individual entries exceeding 1MiB and some as large as 4MiB. I must upload it to a typical Linux shared-hosting installation.
So far, I have run into a variety of size restrictions. Bigdump, which effortlessly imported the rest of the database, cannot handle this table - stopping at different places, whichever method I have used (various attempts using SQL or CSV). Direct import using phpMyAdmin has also failed.
I now accept that I have to split the table's content in some way, if the import is ever to be successful. But as (for example) the last CSV displayed 1.6m rows in GVIM (when there are only 1,850 rows in the table), I don't even know where to start with this.
What is the best method? And what settings must I use at export to make the method work?
mysqldump -u username -p -v database > db.sql
Upload the SQL file to your server via FTP.
Create a script in a language of your choice (e.g. PHP) that calls system/exec commands to load the SQL file into the MySQL database.
nohup mysql -u username -p newdatabase < db.sql &
This will run the process in the background for you.
You might have to run which mysqldump and which mysql first to get the absolute paths of the executables.

How to upload 10 Gb of data to MySQL programmatically without crashing like on PHPMyAdmin?

I'm very surprised that it seems impossible to upload more than a few megabytes of data to a MySQL database through phpMyAdmin, whereas I can easily upload an MS Access table of up to 2 gigabytes.
So is there any script in PHP, or anything else, that allows doing this, unlike phpMyAdmin?
phpMyAdmin is based on HTML and PHP. Neither technology was built or ever intended to handle such amounts of data.
The usual way to go about this would be transferring the file to the remote server - for example using a protocol like (S)FTP, SSH, a Samba share or whatever - and then importing it locally using the mysql command:
mysql -u username -p -h localhost databasename < infile.sql
Another very fast way to exchange data between two servers running the same MySQL version (it doesn't dump and re-import the data but copies the data directories directly) is mysqlhotcopy. It runs on Unix/Linux and NetWare based servers only, though.
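If you want to avoid storing the file on the remote server before importing it, you can also stream the import over SSH in a single pipeline; a sketch with placeholder names and credentials:
# compress locally, send over SSH, and import on the server without an intermediate copy
gzip -c infile.sql | ssh user@remote-server "gunzip -c | mysql -u username -ppassword databasename"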
No. Use the command line client.
mysql -hdb.example.com -udbuser -p < fingbigquery.sql