I have two MySQL databases, X and Y, running on one server; both have identical content. A series of updates runs throughout the day, changing the content of X. At the end of the day, a process runs that compares the content of X with the content of Y (for various tables) in order to discover new rows, updated row data, etc. Once the updates have been processed, mysqldump is used to dump X, and Y is overwritten with the dump. Both X and Y are now the same again, and the whole process repeats.
I'm investigating migration of these databases to Amazon RDS. What's the most efficient way to accomplish the process outlined above?
I understand that I can take a snapshot of a DB and restore it, but I think this is at the instance level only? That would mean I have to run 2 instances, which seems unnecessary. I don't have a problem running both databases on the same instance (I don't want to pay for more than one instance unnecessarily).
Do I just do what I'm doing now, i.e. mysqldump X and restore it into Y, or is there some other method/shortcut that RDS provides?
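For context, the nightly copy is essentially the following (host name and credentials are placeholders):

# dump database X and load it over database Y on the same server;
# mysqldump emits DROP TABLE statements by default, so Y's tables are replaced
mysqldump -h dbhost -u admin -pSECRET X | mysql -h dbhost -u admin -pSECRET Y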
Since the title concerns AWS instance migration, here is what worked best in my case (it may vary for others):
Go to https://console.aws.amazon.com/rds
Select your DB instance
Actions -> Take Snapshot
Go to https://console.aws.amazon.com/rds
Select Snapshots from the left pane
Select the snapshot you just created
Actions -> Restore Snapshot
After the above steps you will be redirected to the RDS instance creation page; fill out the required fields as needed and you are done with the migration :D
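If you prefer to script those console steps, the same flow can be done with the AWS CLI (identifiers below are placeholders):

# take the snapshot and wait for it to become available
aws rds create-db-snapshot --db-instance-identifier mydb --db-snapshot-identifier mydb-snap
aws rds wait db-snapshot-available --db-snapshot-identifier mydb-snap
# restore it as a new instance
aws rds restore-db-instance-from-db-snapshot --db-instance-identifier mydb-copy --db-snapshot-identifier mydb-snap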
Consider migrating to RDS Aurora for MySQL.
It supports native copy-on-write clones of the entire database (meaning the server instance, not just a schema) without the need to make an actual "copy."
Copy-on-write means the "original" server and the "clone" share the same physical disk (called an Aurora Cluster Volume, which replicates data across three Availability Zones with two copies in each, using a 4/6 write quorum), with both servers sharing the same disk blocks until one of them makes a change... which is when the copy action actually occurs ("on write"). So, you only use as much storage as is required to store your original working data set plus the changes that occurred after cloning.
No server is the master in such a setup -- they all operate independently after cloning. I suspect that I'm not doing this innovation justice with my description -- it involves quite a bit of dark magic. See the write-up (with illustrations of copy-on-write):
http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Aurora.Managing.Clone.html
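For reference, a clone can also be created from the CLI. A sketch, with placeholder identifiers and instance class:

# fork the cluster using copy-on-write (no full copy of the storage is made)
aws rds restore-db-cluster-to-point-in-time \
    --source-db-cluster-identifier my-aurora-cluster \
    --db-cluster-identifier my-aurora-clone \
    --restore-type copy-on-write \
    --use-latest-restorable-time
# the new cluster starts with no instances; add one so it can serve queries
aws rds create-db-instance \
    --db-instance-identifier my-aurora-clone-1 \
    --db-cluster-identifier my-aurora-clone \
    --db-instance-class db.r4.large \
    --engine aurora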
Aurora is compatible with MySQL 5.6. To be more precise, Aurora is MySQL 5.6, with MyISAM removed and InnoDB heavily rewritten to optimize performance and work with the replicated Aurora Cluster Volume storage technology.
A bit late in the day, but I have just managed to do this by (1) creating a database backup in S3 and then (2) restoring the backup from S3, i.e.
a. Create a database backup in S3:
EXEC msdb.dbo.rds_backup_database
    @source_db_name = '<database-name-goes-here>',
    @s3_arn_to_backup_to = 'arn:aws:s3:::<bucket-name-goes-here>/<backup-filename-goes-here>.bak',
    @overwrite_S3_backup_file = 1;
b. Wait for the task to complete. You can execute the following SQL to check this:
exec msdb.dbo.rds_task_status @db_name = '<database-name-goes-here>';
c. When the lifecycle is "SUCCESS" you can then restore from the S3 bucket using the following command:
exec msdb.dbo.rds_restore_database
    @restore_db_name = '<new-database-name-goes-here>',
    @s3_arn_to_restore_from = 'arn:aws:s3:::<bucket-name-goes-here>/<backup-filename-goes-here>.bak';
d. Again, you can monitor the status of the restore with the following SQL command:
exec msdb.dbo.rds_task_status @db_name = '<new-database-name-goes-here>';
You could set up an AWS MySQL RDS instance as a slave of an external master.
After loading a full dump into RDS, call the stored procedure mysql.rds_set_external_master like this:
mysql> call mysql.rds_set_external_master ('10.10.3.2', 3306, 'replica', 'password', 'mysql-bin-changelog.122', 108433, 0);
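If you don't already have the binlog coordinates, they can be read from the dump itself when it is taken with --master-data=2, which writes them into the file as a comment (host and credentials are placeholders):

mysqldump -h 10.10.3.2 -u replica -p --single-transaction --master-data=2 --all-databases > dump.sql
# the coordinates appear near the top of the dump, e.g.:
# -- CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin-changelog.122', MASTER_LOG_POS=108433;
grep -m 1 "CHANGE MASTER" dump.sql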
Then start the replication by doing:
mysql> call mysql.rds_start_replication;
Once you have data in sync you can promote RDS to master by doing:
mysql> call mysql.rds_stop_replication;
mysql> call mysql.rds_reset_external_master;
By doing this with either of your external X or Y servers, the AWS RDS instance behaves like a replica, one you can promote to be your future master if required.
I have an RDS Aurora MySQL 8.0.23 cluster running in production. The database is unencrypted and I need to enable encryption for it. As far as I understand, this is not possible to do directly. The procedure I am evaluating is:
Create a read replica on the current cluster.
Stop replication on the replica and note the binlog filename and position.
Promote the read replica to a new encrypted cluster (it may require taking a snapshot first).
Set up replication from the original cluster using the binlog file and position noted before.
Wait until replication lag is zero.
Redirect production traffic to the new cluster.
Stop replication.
[Optional] Delete the old cluster.
I have two issues with the above procedure:
Once the replica is created, commands like SHOW SLAVE STATUS or SHOW REPLICA STATUS return an empty set, so I can't note the binlog file and position. Please note that binary logging is enabled on the original cluster (binlog_format is set to ROW).
It seems I can't promote the Aurora read replica to a new cluster; the option is missing from the available actions, but according to the documentation it should be possible.
Does anyone have feedback about the issues above? What is the current, up-to-date procedure to encrypt an Aurora MySQL cluster with minimal downtime and no data loss?
What is the best way to do a quick migration of a large database (probably more than 25 GB of dump) from RDS MySQL 5.7 in São Paulo/Brazil to RDS Aurora in Northern Virginia?
I cannot leave the database down for more than 3 hours (probably less), because it is a company's production database.
Thank you very much in advance.
The São Paulo region does not have Aurora MySQL (one of the reasons for the migration, in addition to costs being twice as high compared to Northern Virginia).
RDS may be made publicly accessible during the migration only, if necessary.
I will not be able to use Multi-AZ. Would it be feasible to use AWS Database Migration Service?
I will also have to migrate the EC2 instances and S3 resources linked to this database, mainly EC2, to avoid latency problems.
After the migration, all services in the São Paulo region will be stopped.
The main reasons, as I said before, are cost reduction in the short and long term (the use of reserved instances will be considered) and also performance, plus moving the EC2 instances to avoid latency and instability problems.
You are making a mistake trying to move the database and change the engine from MySQL to Aurora at the same time.
Migrate the MySQL 5.7 system now, and convert to Aurora later. You do not need to ask for trouble, and doing both at the same time is exactly that.
It is not possible to "quickly" migrate a primary database over distance, but it is possible to make the amount of setup time irrelevant, and activation time near zero.
Instead of trying to do a copy, create an RDS cross-region replica of your data, and at the last moment, promote that replica to master.
Creating a Read Replica in a Different AWS Region
With Amazon RDS, you can create a MariaDB, MySQL, or PostgreSQL Read Replica in a different AWS Region than the source DB instance. You create a Read Replica to do the following:
Improve your disaster recovery capabilities.
Scale read operations into an AWS Region closer to your users.
Make it easier to migrate from a data center in one AWS Region to a data center in another AWS Region.
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ReadRepl.html#USER_ReadRepl.XRgn
It doesn't matter how long it takes for RDS to copy the data and set up the replica, because as soon as it is copied, it starts replicating everything that changed on the master server since the process began.
Once you have verified that everything is correct and consistent, you promote the replica. It is permanently and irrevocably detached from its original upstream instance, and becomes writable. This is the last thing you do: after the application starts writing to this new database, your original system in São Paulo is obsolete, because changes to it will no longer replicate to the new system -- they're permanently isolated.
This arrangement does not require you to establish any networking or make the databases publicly accessible.
And, you can create and destroy multiple replicas to test this process, without disturbing production.
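If you want to script it rather than use the console, here is a rough sketch with the AWS CLI (identifiers, account number, and regions are placeholders):

# create the replica in Northern Virginia from the São Paulo master
# (cross-region requires the source instance's ARN)
aws rds create-db-instance-read-replica \
    --region us-east-1 \
    --db-instance-identifier mydb-replica \
    --source-db-instance-identifier arn:aws:rds:sa-east-1:123456789012:db:mydb
# at cutover time, detach the replica and make it writable
aws rds promote-read-replica --region us-east-1 --db-instance-identifier mydb-replica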
I have a production DB that is on RDS Aurora MySQL. I would like to create a "staging" version of it, so I need a complete duplicate/clone of the production version.
Most importantly, I need the staging version to be writable on the new instance.
Is this possible?
Review Cloning Databases in an Aurora DB Cluster in the RDS User Guide.
Clones are not the same thing as replicas. A replica, in Aurora, has read-only access to the same data store, allowing you to spread your read workload across multiple instances... but a clone is a readable/writable moment-in-time fork of your original database. Any changes after the clone is created don't affect the data on the original database instances (or on any other clones; up to 15 independent clones are currently supported).
You can also create a new Aurora cluster from a snapshot of your production database, but a clone is probably the preferred solution for two reasons: it's faster to create a clone... but perhaps more importantly, clones use copy-on-write, so until you change the data on either the clone or the master it was cloned from, they share common storage space in the Aurora Cluster Volume that stores the data -- so you're only paying once for storage of the data that never gets changed. How this works is explained, with diagrams, in the RDS User Guide at the link, above.
You can take a backup (database snapshot) of prod and restore it into a new Aurora cluster during instance creation. It is a simple GUI workflow in AWS. You can change your permissions after the database has been restored into staging.
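A sketch of that snapshot route with the AWS CLI, using placeholder identifiers:

# snapshot prod and wait for the snapshot to become available
aws rds create-db-cluster-snapshot --db-cluster-identifier prod-cluster --db-cluster-snapshot-identifier prod-snap
aws rds wait db-cluster-snapshot-available --db-cluster-snapshot-identifier prod-snap
# restore the snapshot as a new, fully writable staging cluster
aws rds restore-db-cluster-from-snapshot --db-cluster-identifier staging-cluster --snapshot-identifier prod-snap --engine aurora
# the restored cluster has no instances yet; add one
aws rds create-db-instance --db-instance-identifier staging-1 --db-cluster-identifier staging-cluster --db-instance-class db.r4.large --engine aurora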
We have an AWS Aurora database sitting on an instance that holds all of our production data. I want to be able to perform analytics on that data without doing it in our production environment, so I want to copy the production data on a daily basis to another AWS Aurora database on a completely different instance. Within that "analytics" database, I'll build out all the needed views and stored procedures to aggregate whatever transformed data I need to store.
At first I thought of creating an Aurora replica, but of course that's read-only. I need to find a way to do this outside of the production environment, and I feel it's an easy enough task, but I just can't find out how to do it. Maybe I haven't been able to ask the right questions, so I came here. How can I achieve this?
This is simple AWS replication.
http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Aurora.Replication.CrossRegion.html
Also, if you prefer to use MySQL native replication or any other RDBMS, use
http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Aurora.Overview.Replication.MySQLReplication.html
It is similar to master-slave replication, with small differences in how the data is maintained in Aurora.
Replication is the correct (subjective, of course) solution, but you can't use a managed Aurora replica, which is to say you can't use an Aurora replica in the cluster.
That does not, however, mean you can't create your own asynchronous Aurora replica... which would be a second Aurora cluster, an independent master that is writable, but that uses the replication stream (the binary logs, also called "binlogs,") from the master cluster to keep its data in sync.
The one caveat: you must be extremely cautious not to write to any of the tables on the asynchronous cluster that are being replicated from the production master. Do that, of course, and replication breaks. The master cluster will be completely unaffected, but the replica cluster will stop replicating once inconsistent data is detected. You can, however, create additional tables, views, and stored programs without issue.
Within an Aurora cluster, there is no need for replication in the traditional sense -- the replicas use the same backing store as the master (the "cluster volume.") Here, we're just replicating from cluster to cluster, identical to the way two ordinary MySQL servers would replicate (in one direction, only, of course).
The setup is essentially identical to the setup for replicating in and out of Aurora, to or from MySQL. Since this solution uses MySQL native replication, the steps are the same.
http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Aurora.Overview.Replication.MySQLReplication.html
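As a rough sketch of the master-side setup (user name, password, and retention hours are placeholders, and binlog_format must already be set to ROW in the production cluster's parameter group); the analytics cluster is then pointed at the master with the same mysql.rds_set_external_master and mysql.rds_start_replication calls shown earlier:

mysql> CREATE USER 'repl'@'%' IDENTIFIED BY 'repl_password';
mysql> GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%';
mysql> CALL mysql.rds_set_configuration('binlog retention hours', 144); -- keep binlogs long enough for the replica to catch up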
I have a Windows Server with MySQL Database Server installed.
Multiple databases exist on it; among them, database A contains a huge table named 'tlog', about 220 GB in size.
I would like to move database A over to another server for backup purposes.
I know I can do a SQL dump or use MySQL Workbench/SQLyog to copy the tables.
But due to limited disk storage on the server (less than 50 GB), a SQL dump is not possible.
The server is serving other workloads, so CPU and RAM are limited too. As a result, copying the tables without using up CPU and RAM is not possible.
Is there any other method to move the huge database A over to another server?
Thanks in advance.
You have a few ways:
Method 1
Dump and compress at the same time: mysqldump ... | gzip > blah.sql.gz
This method is good because chances are the result will be well under 50 GB: the dump is plain text, and you're compressing it on the fly.
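If even the compressed dump won't fit in the space you have free, you can skip the local file entirely and stream the dump straight to the new server (host name is a placeholder; credentials are assumed to be in each side's .my.cnf, and an empty database A must already exist on the destination):

# --single-transaction gives a consistent dump without locking, assuming InnoDB tables
mysqldump --single-transaction A | gzip | ssh user@newserver 'gunzip | mysql A'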
Method 2
You can use slave replication; this method will require a dump of the data.
Method 3
You can also use xtrabackup.
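A sketch of streaming a physical backup to the other server without staging it locally, assuming Percona XtraBackup on the source and xbstream on the destination (note that XtraBackup runs on Linux, not Windows; host and path are placeholders):

xtrabackup --backup --stream=xbstream | ssh user@newserver 'xbstream -x -C /data/backup'
# then, on the destination, make the backup consistent before use:
# xtrabackup --prepare --target-dir=/data/backup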
Method 4
You can shutdown the database, and rsync the data directory.
Note: You don't actually have to shut down the database; you can instead do multiple rsyncs, and eventually nothing will change between passes (unlikely if the database is busy; you'd have to do it during a slow time), which means the database will have synced over.
I've had to use this method with fairly large PostgreSQL databases (1 TB+). It takes a few rsyncs, but hey, it's the cost of zero downtime.
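A sketch of that rsync loop, assuming a default Linux-style data directory (paths and host are placeholders):

# repeat while MySQL is running, until the delta between passes is small
rsync -avz /var/lib/mysql/ user@newserver:/var/lib/mysql/
# optional final pass with the database stopped, for a guaranteed-consistent copy
systemctl stop mysql
rsync -avz --delete /var/lib/mysql/ user@newserver:/var/lib/mysql/
systemctl start mysql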
Method 5
If you're in a virtual environment you could:
Clone the disk image.
If you're in AWS you could create an AMI.
You could add another disk and just sync locally; then detach the disk, and re-attach to the new VM.
If you're worried about consuming resources during the dump or transfer, you can use ionice and renice to limit the priority of the dump/transfer.
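For example, combining this with Method 1 (a sketch; renice and ionice can also adjust an already-running process by PID):

# run the dump at the lowest CPU and I/O priority so production work isn't starved
nice -n 19 ionice -c2 -n7 mysqldump --single-transaction A | gzip > blah.sql.gz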