I recently downgraded my EC2 instance. I can no longer connect to RDS. I think it might be that the internal IP is different and now the logins are attached to that specific IP. I haven't been able to figure it out. I would like to be able to get a backup from the snapshot. Is there a way to download it through AWS?
You can't download an RDS snapshot. You can, however, connect to the instance and export your databases. Downgrading your instance should not affect connectivity unless you had set up your security groups incorrectly (opening ports to an IP instead of to another security group).
The accepted answer is no longer up to date. Instead of using command-line tools, you can use the AWS console.
Navigate to RDS -> Snapshots -> Manual/System -> select the snapshot -> Actions -> Export to S3
Going through S3 is common in most production environments, as you won't have direct access to the DB instance.
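If you prefer the CLI, the same export can be started with aws rds start-export-task. A minimal sketch, with placeholder identifiers, ARNs, and bucket names that you would replace with your own:

aws rds start-export-task \
  --export-task-identifier my-snapshot-export \
  --source-arn arn:aws:rds:us-east-1:123456789012:snapshot:my-snapshot \
  --s3-bucket-name my-export-bucket \
  --iam-role-arn arn:aws:iam::123456789012:role/rds-s3-export-role \
  --kms-key-id arn:aws:kms:us-east-1:123456789012:key/my-key

The export lands in S3 as Parquet files, which you can then download with aws s3 cp.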
In addition to datasage's answer:
For a production instance, you can create a read-only replica in RDS and take your dumps from that replica. This way you avoid tying up the production DB.
We use this scheme for PostgreSQL + pg_dump. Hope it will be helpful to somebody else too.
I use:
pg_dump -v -h RDS_URL -Fc -o -U username dbname > your_dump.sql
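Since -Fc produces a custom-format archive, you would restore it with pg_restore rather than psql. A sketch, assuming placeholder connection details and an existing target database:

pg_restore -v -h localhost -U username -d dbname your_dump.sql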
I also needed to do this, so I created a dump of the DB (MySQL) by logging into my app server, which has permission to access the DB. I then downloaded the dump to my local machine using scp.
I used:
mysqldump -uroot -p -h<HOST> --single-transaction <DBNAME> > output.sql
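For the download step, a minimal sketch assuming a hypothetical app server hostname and dump location:

scp ec2-user@my-app-server.example.com:~/output.sql .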
Another option is to share your snapshot if you don't need to download it and just want to share it with a different AWS account ID.
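Sharing can be done from the console (select the snapshot, then Actions -> Share Snapshot) or via the CLI. A sketch, with a placeholder snapshot name and target account ID:

aws rds modify-db-snapshot-attribute \
  --db-snapshot-identifier my-snapshot \
  --attribute-name restore \
  --values-to-add 123456789012

The other account can then restore its own instance from the shared snapshot.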
It sounds like your RDS instance is inside a VPC, in a private subnet with a security group and ACLs. The only way to solve your issue is to take a snapshot and create a new DB instance out of it in the default VPC, where all connections are allowed. After that, take a classic backup using a DB client or the CLI.
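A sketch of that restore step via the CLI, with placeholder identifiers:

aws rds restore-db-instance-from-db-snapshot \
  --db-instance-identifier my-restored-db \
  --db-snapshot-identifier my-snapshot

Once the new instance is up and reachable, run your usual client-side export against it.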
I cannot reach my MySQL Database instance I created on AWS.
What I tried was setting the public access of the database to "Publicly accessible":
I also tried setting inbound/outbound rules for the MySQL port in the security group:
Honestly, I'd think using "All" ports would include 3306 too; anyway, I tried it that way as well, but it still didn't work. I cannot connect to the database via MySQL Workbench, nor can I ping the given endpoint.
I would be glad if someone here has an idea what else I could try.
This will not work if you have deployed it in a private subnet which has no internet access.
Another possibility is that there are network ACLs blocking the traffic. Security groups only apply to the RDS instance itself, while ACLs control traffic in the entire subnet.
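To rule out the security group side, you can open 3306 explicitly instead of relying on "All". A sketch via the CLI, with a placeholder group ID and your own IP:

aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 3306 \
  --cidr 203.0.113.10/32

Also note that RDS endpoints generally don't answer ICMP, so a failed ping by itself doesn't prove the database is unreachable.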
Here is a dev AWS tutorial that creates a web application that stores data in MySQL running on the cloud. It shows you how to set up the database and the inbound rules. Once you do, you can store and query data in MySQL. Likewise, you can use MySQL Workbench to interact with MySQL on the cloud.
AWS RDS Tutorial
I'm using RDS on Amazon and need a way to import a large database of over 3TB over to Google SQL Cloud.
The biggest problem here is time -- I need this to happen quickly. It doesn't feel right having to compress 3TB of data into a single .sql file, move it to an S3 bucket, and then import that huge file into Google, which is what they seem to prefer you to do.
Apparently AWS doesn't let you create an image and move it to S3, so I can't import such an image into Google either.
Also, there doesn't seem to be a way to do a mysqldump/import from a remote server via the Google Cloud Console.
Has anybody faced the same issue, and is there a quick and direct way of approaching this?
After many hours of searching around, I was able to use an existing AWS instance to act as a proxy between the two remote SQL servers.
After allowing access to the Google SQL server (entering the IP of the AWS machine under the 'Authorization' tab), you can connect to both remote servers and use something like this to copy each database over directly:
mysqldump -h yourdatabase.rds.amazonaws.com -u user -ppassword --compress --single-transaction awsdbtable | mysql --host=YourGoogleSQL_IP --user=dbuser --password=dbpassword googledbtable
Quick help here...
I have these 2 MySQL instances. We are not going to pay for this service anymore, so they will be gone. How can I obtain a backup file that I can keep for the future?
I do not have much experience with MySQL, and all the threads talk about mysqldump, which I don't know whether it's valid for this case. I also see the option to take a snapshot, but I want a file I can save (like a .bak).
See screenshot:
Thanks in advance!
You have several choices:
You can replicate your MySQL instances to MySQL servers running outside AWS. This is a bit of a pain, but will result in a running instance. http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/MySQL.Procedural.Exporting.NonRDSRepl.html
You can use the command-line mysqldump --all-databases to generate a (probably very large) .sql file from each database server instance (see: Export and Import all MySQL databases at one time).
You can use the command-line mysqldump to export one database at a time. This is what I would do; see the sketch after this list.
You can use a GUI MySQL client, like HeidiSQL, in place of the command line to export your databases one at a time. This might be easier.
You don't need to, and should not, export the mysql, information_schema, or performance_schema databases; these contain system information and will already exist on another server.
In order to connect from outside AWS, you'll have to set the AWS protections appropriately. And you'll have to know the internet address, username, and password (and maybe port) of the MySQL server at AWS.
Then you can download HeidiSQL (if you're on windows) or some appropriate client software, connect to your database instance, and export your databases at your leisure.
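A minimal sketch of options 2 and 3, assuming a placeholder RDS endpoint and user:

# All databases into one file:
mysqldump -h mydb.abc123.us-east-1.rds.amazonaws.com -u admin -p \
  --all-databases --single-transaction > all_databases.sql

# Or one database at a time:
mysqldump -h mydb.abc123.us-east-1.rds.amazonaws.com -u admin -p \
  --single-transaction mydbname > mydbname.sql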
I have a local Perl script that does a lot of parsing of web pages and then successfully updates my local MySQL database (WAMP server). I now want to send this local data to my remote server, but remotely connecting to my database isn't allowed with my hosting company. Unfortunately I never thought of that problem.
So, I now need to find an automated way to update my remote server (every 15mins). I mistakenly thought I could just edit my Perl script with the details of the remote server.
I am aware that I could use CGI or PHP to do the parsing on the server, but I really want to keep the parsing local for now.
Summary:
Local MySQL database -> remote MySQL database every 15mins ??
Any ideas what I can do?
Thanks :-)
If replication is not an option but you can still establish an SSH connection from the local box to the remote box, then (a combined sketch follows after these steps):
run mysqldump to export the data into a file (http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html#option_mysqldump_where)
scp the file to the remote box
mysql -u username -p database_name < dumpfile.sql
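Putting the three steps together, a minimal sketch with placeholder hostnames and file names:

mysqldump -u username -p --single-transaction database_name > dumpfile.sql
scp dumpfile.sql user@remote.example.com:/tmp/dumpfile.sql
ssh -t user@remote.example.com 'mysql -u username -p database_name < /tmp/dumpfile.sql'

For an unattended 15-minute cron job you would want to put the credentials in a .my.cnf file rather than typing passwords at the prompts.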
If your server does not accept remote connections to MySQL, you can create an SSH tunnel. Then you can apply the replication solution proposed by matcheek.
Here is a hint: http://realprogrammers.com/how_to/set_up_an_ssh_tunnel_with_putty.html
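For reference, a minimal sketch of such a tunnel using plain OpenSSH instead of PuTTY, with a placeholder remote host:

# Forward local port 3307 to MySQL (3306) on the remote machine:
ssh -N -L 3307:localhost:3306 user@remote.example.com

# In another terminal, connect through the tunnel:
mysql -h 127.0.0.1 -P 3307 -u username -p database_name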
Based on the responses I've received, I think the answer to my original question is to stop using a cheap shared hosting company (no remote access to server, no cron jobs, etc) and start using a VPS hosting company. That will give me the freedom to remotely connect to my server, etc.
Thanks again to those who replied.
From how you described the problem, replication seems to be the way to go:
http://dev.mysql.com/doc/refman/4.1/en/replication-howto.html
Using a cron job could be another option. It would read a file from your local machine and import the data into the remote box.
I suggest the following:
On every local run, also write the SQL statements (sans SELECTs) that you run against your copy of the DB into a file.
On your WAMP server, create a small PHP script that gives back the oldest file from the first step (with some auth, of course).
On your remote server, run a cron job that gets this file from your local server, runs the SQL against the DB, and then acknowledges it.
On acknowledgement, your WAMP server drops that file and gives back the next one.
While this seems complicated, it allows for a restart after a connectivity loss, something I consider important.
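A sketch of what steps 3 and 4 could look like on the remote server, with entirely hypothetical script URLs and a shared token:

# /etc/cron.d/sync-from-wamp -- runs every 15 minutes:
# */15 * * * * deploy /usr/local/bin/sync_from_wamp.sh

# sync_from_wamp.sh:
curl -s "http://your-wamp-host/next.php?token=SECRET" > /tmp/next.sql
mysql -u username -p'password' database_name < /tmp/next.sql \
  && curl -s "http://your-wamp-host/ack.php?token=SECRET"

Only acknowledging after a successful import is what makes the scheme restartable after a connectivity loss.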
Is it possible to log in to a remote MySQL machine and execute commands using 'system' on the remote machine?
I can log into the remote machine, but commands using 'system' are executed on my local machine.
Thanks indeed!
I'm using mysql to connect from 'Host1' to 'Host2' using the command
mysql -uUsername -p data_base_name -h Host2
When I execute
'system hostname'
after I'm connected, I get
'Host1'
I cannot log into my remote host using ssh. I don't know why. I need to do some log analysis, and the only option I have is to connect to that machine using mysql. I can connect to that machine!
As far as I know, this is definitely not possible. It's far beyond the scope of MySQL, and there would be immense security implications if it were.
I don't think there is an alternative to getting SSH (or some other service that might help) running again.
Consider doing a SELECT ... INTO OUTFILE and writing script code into a place that will be executed on the server. For example, if mysqld is running as root on the server, you may be able to add something to /etc/rc2.d that will get executed during boot.
Alternatively, if there is a file that is used as a source for scheduling tasks, you may be able to write to it, again using SELECT ... INTO OUTFILE.
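A sketch of the basic mechanism, with a deliberately harmless path; whether anything more useful is writable depends on the privileges of the mysqld process and on settings like secure_file_priv:

# Requires the FILE privilege; note that INTO OUTFILE refuses to overwrite an existing file:
mysql -h Host2 -uUsername -p -e "SELECT 'echo it works' INTO OUTFILE '/tmp/probe.sh'"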
system runs local commands on your box. If you need to do anything with the logs, contact your hoster to provide a way to download or access them.