Copy Tomcat and MySQL from one Amazon EBS volume to another

I launched an Amazon EC2 instance with Amazon Linux and an Amazon EBS root volume. I also installed Tomcat 7 and MySQL 5.5 on this EBS volume.
Later I decided to switch from Amazon Linux to Ubuntu. To do that I need to launch another Amazon EC2 instance with a new EBS root volume. Now I want to copy Tomcat 7 and MySQL from the old EBS volume to the new one. I have tables and data in MySQL which I don't want to lose, and an application running on Tomcat. How do I go about it?

A couple of thoughts and suggestions.
First, if your database is going to see any kind of significant load, running it on an EBS-backed volume is probably not a great idea, as EBS-backed storage is incredibly slow relative to the machine's local/ephemeral storage (/mnt). Now obviously you don't want DB data on ephemeral storage, so there is really nothing you can do about that if you want to run MySQL yourself on EC2. So my suggestion would be to use an RDS instance for your DB if your infrastructure requirements allow for it.
Second, if this is a production application, you are undoubtedly going to have some down time as you make this transition. The question is whether you need to absolutely minimize the amount of downtime. If so, then you need to have an idea as to the size of your database. Is it going to take a long time to dump/load? If not, you could probably just get your new instance up and running, and tested on an older copy of your database and then just dump and load the current database at the time of cutover.
If it is a large database, then perhaps you can turn on MySQL binary logging. Then make a dump of the database at a known binary log position, and install this dump on your new instance. When ready to cut over, you can replay the binary logs on the new instance to bring it current. Similarly, you could just set up the DB on the new instance as a replica until the cutover, at which point you make it the master.
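If you go the binary-log route, the commands look roughly like the sketch below. This is only a minimal outline, assuming MySQL 5.5 with root access on both instances; the host name, binlog file name, and position are placeholders you would read from your own dump header.

    # On the old instance: enable binary logging in my.cnf (under [mysqld]), then restart mysqld
    #   log-bin   = mysql-bin
    #   server-id = 1

    # Dump everything, recording the current binlog file/position as a comment in the dump
    mysqldump -u root -p --all-databases --single-transaction --master-data=2 > full_dump.sql

    # Load the dump on the new Ubuntu instance
    mysql -u root -p < full_dump.sql

    # At cutover: replay everything written since the dump was taken
    # (the file name and --start-position come from the CHANGE MASTER comment in full_dump.sql)
    mysqlbinlog --start-position=107 mysql-bin.000123 | mysql -h new-instance.example.com -u root -p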
You may even consider just using rsync to sync the physical database files if you don't want to mess with binary logging, though this can be a problematic approach if you are not that familiar with dealing with the actual physical database files.
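If you do try the rsync route, a very rough sketch follows, assuming the default /var/lib/mysql datadir, the same MySQL version on both instances, and root privileges on both ends (the host name is a placeholder):

    # Stop MySQL on both instances so the files are consistent
    sudo service mysqld stop     # old instance (Amazon Linux)
    sudo service mysql stop      # new instance (Ubuntu)

    # Copy the whole data directory from the old instance to the new one
    sudo rsync -avz /var/lib/mysql/ root@new-instance.example.com:/var/lib/mysql/

    # On the new instance: fix ownership and start MySQL
    sudo chown -R mysql:mysql /var/lib/mysql
    sudo service mysql start

Note that this only works cleanly if both sides run the same MySQL version; mixing versions or skipping the shutdown step is where the "problematic" part comes in.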
As far as your application goes, that should be much simpler to migrate, assuming it is just a collection of files. I would not copy the Tomcat 7 installation itself, but rather install Tomcat on Ubuntu and then adjust its configuration to match your current setup.
As far as the cutover itself goes, this should be pretty straightforward and would vary in approach depending on whether you are using an Elastic IP for your server or whether it is behind a load balancer.

Related

Manually copy MySQL 5.5 database to a different computer

My company uses a product that uses MySQL 5.5 for its backend database. The product automatically installs and configures MySQL during its installation process. The product can be configured to run in a hot-standby redundant configuration. In these cases, the same installation process is performed on 2 separate servers, and then during the product's initial configuration redundant mode is selected. The product internally handles the whole process of duplicating the database data and keeping the 2 databases in sync. MySQL has no knowledge of the redundant setup. The MySQL installation on both servers is identical: same location and same structure. The product does not have a very elegant/efficient way to sync a large database (say 300 GB in size with 3K tables) from the Primary server to the Backup server in cases where this is required, such as when creating a redundant system from a Single/Primary server config that has already been running for a while. My question is as follows.
Is there a safe/supported way to just manually copy the database files from the Primary server to the Backup server, considering that the MySQL installations on both servers are identical? BTW, this is on production Windows Servers. I know I can do a full export of the database from the Primary and then import it on the Backup server, but this can take hours. I am hoping there is a faster supported way to just copy the files from one server to the other, but in researching this I see conflicting info.
System Info
Windows
MySQL 5.5
Identical installation on both servers
"C:\ProgramData\MySQL\MySQL Server 5.5\data"
Innodb
File per table = true
Thanks in advance for any advice.
I once tried to just copy the database folder that contains all the InnoDB table files, "C:\ProgramData\MySQL\MySQL Server 5.5\data\Mydbase", from one server to another, but MySQL would not start up and reported errors.
Yes: shut down the MySQL Server service on both computers. Then you can move the files in the datadir in any way you want. But this incurs some downtime while you do the file transfer.
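A rough outline of that procedure on Windows, assuming the default service name MySQL and the data directory from the question (the server name and paths are placeholders):

    REM Stop the MySQL service on both servers
    net stop MySQL

    REM Copy the ENTIRE data directory (including ibdata1 and the ib_logfile* files),
    REM not just the one database folder, from the Primary to the Backup server
    robocopy "C:\ProgramData\MySQL\MySQL Server 5.5\data" "\\BACKUPSERVER\C$\ProgramData\MySQL\MySQL Server 5.5\data" /MIR

    REM Restart the service on both servers
    net start MySQL

That is also why your earlier attempt failed: even with file-per-table enabled, the .ibd files depend on the InnoDB data dictionary and log files in the top-level data directory, so copying only the database folder leaves MySQL unable to start cleanly.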
If you must have no downtime, it's also possible, but requires more steps.
What I do is use Percona XtraBackup to make a physical backup of the source instance, but this won't work as easily for you because you are on Windows. XtraBackup doesn't work on Windows. Some people use tricks to run XtraBackup in a Docker container on Windows.
Then restore the XtraBackup backup onto your new computer in the normal way, and configure it as a replica of the source instance. See https://docs.percona.com/percona-xtrabackup/8.0/howtos/setting_up_replication.html
By making the new instance a replica, you can let it get updated with the most recent changes that have occurred on the source instance while you were setting up the replica.
Then at some point you decide to switch to the new instance. Set the source instance to read-only mode, to prevent client applications from making any new changes. Let the replica catch up with the last few changes (this should only take a second if the replica was keeping up with changes already). Now you can point your client applications at the replica instead of the former source. Finally, un-configure replication on the new instance with RESET SLAVE, because the last thing you want is for any more changes to occur on the former source and replicate to the new instance.
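A minimal sketch of those cutover steps, run through the mysql client (host names are placeholders, and this uses the old SLAVE-based syntax that matches MySQL 5.5-era servers):

    # On the old source: block new writes (note: accounts with SUPER can still write)
    mysql -h old-source -u root -p -e "SET GLOBAL read_only = ON;"

    # On the new instance: confirm the replica has caught up (Seconds_Behind_Master should be 0)
    mysql -h new-instance -u root -p -e "SHOW SLAVE STATUS\G" | grep Seconds_Behind_Master

    # After repointing the applications: stop the new instance acting as a replica
    mysql -h new-instance -u root -p -e "STOP SLAVE; RESET SLAVE;"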
If you try this procedure, I suggest you test it on a test instance — NOT your production instance — until you are comfortable with the tools.
P.S.: In addition to not supporting Windows, I have no idea if the current version of XtraBackup works with MySQL 5.5. That version was released in 2010, and reached its end of life in 2018. So I think you will need to research which version of XtraBackup still can read a MySQL 5.5 instance. You might have to use an old version of XtraBackup.

Best practice - Developing on a local mysql server / RDS?

I am developing a Drupal site using MariaDB.
The import of a 77 MB dump file locally (a Docker container running MariaDB) takes about 2 minutes.
The same import into an Amazon RDS instance (db.m4.large) running MariaDB takes more than 30 minutes.
Isn't Amazon RDS supposed to be quicker?
What is the recommended practice for having a quick dev environment for SQL? (The local Docker service is running too slowly.)
Thanks,
Yaron
If you are already on RDS, just use a snapshot.
Take a snapshot of production (or use one of the automated snapshots).
Create a new DB instance from the snapshot.
It's very fast and avoids the latency of running the millions of queries that an import involves.
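A sketch of those two steps with the current AWS CLI (the instance and snapshot identifiers and the instance class are placeholders):

    # Take a manual snapshot of the production instance (or skip this and reuse an automated one)
    aws rds create-db-snapshot \
        --db-instance-identifier prod-db \
        --db-snapshot-identifier prod-db-dev-copy

    # Create a new dev instance from that snapshot
    aws rds restore-db-instance-from-db-snapshot \
        --db-instance-identifier dev-db \
        --db-snapshot-identifier prod-db-dev-copy \
        --db-instance-class db.t3.medium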
However, this is just one very crude approach to making a dev environment.
Some people have scripts that create the data set for DEV from scratch. This might be more appropriate, and even necessary, if for example you have a large database and developers who like to work locally on their own computers.
Some people have scripts that sanitize DEV to eliminate sensitive and personal data, which you could run after the snapshot.
Some people even have DEV as a replica of the main DB and modify the DEV db so that additional usage doesn't clash with the replicated changes. This is a bit delicate though.
Often Dev and Tests use dummy data, and Staging uses real data (cloned from Production and possibly sanitized).
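As an illustration of the sanitizing idea above, a hypothetical post-snapshot script (the database name and the Drupal-style users table columns are assumptions; adapt to your own schema):

    # Overwrite personal data on the DEV copy right after it is created from the snapshot
    mysql -h dev-db.example.com -u admin -p drupal -e "
        UPDATE users
           SET mail = CONCAT('user', uid, '@example.com'),
               pass = ''
         WHERE uid > 0;"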

Cloud based LAMP cluster

I run a pretty customized cluster for processing large amounts of scientific data based on a basic LAMP design. In general, I run a separate MySQL server with around 128 GB of RAM and about 1 TB of storage. Separately, I run a head node that serves as an NFS mount point for the data input of my process, and as a webserver to display results. Finally, I typically have a few compute nodes that get their jobs from a MySQL table, fetch the data from NFS, do some heavy lifting, then put the results into MySQL.
I have come across a dataset I would like to process which is pretty large (1 TB of input data), and I don't really have the hardware on hand to handle it. As a result, I began investigating Google Compute Engine and the like, and the prospect of scaling instances to process this data rapidly, with the results stored in a MySQL instance. Upon completion, the MySQL tables could be dumped from the cloud and brought up locally for analysis. I would have no problem deploying a MySQL server, along with the rest of the LAMP pieces and the compute nodes, but I can't quite figure out how I would do this in the cloud.
A major sticking point seems to be the lack of a read/write NFS share that would allow me to get the data onto several instances, crunch it, then push the results to MySQL. This is a necessary step for me, as I could queue hundreds of jobs from the webserver, then have the instances (as many as 50-100) pick the jobs up by connecting to a centralized MySQL instance to find out what jobs they need to do and where the data is, process the data (there is a file conversion involved, which makes the write access necessary), crunch it, and load the results into MySQL. I hope I'm explaining my situation clearly. This seems like a great example of a CPU-intensive process that would scale nicely in the cloud; I just can't seem to put all the pieces together... Any input is appreciated!
It sounds quite possible; I've been doing similar things in GCE for a while now.
NFS mount - you just need to configure it as you would normally. Set up the NFS server on the head node, and then configure the clients on the slave nodes to mount it. Here and here are some basic configuration instructions for CentOS 6 that I used to get NFS up and running.
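For reference, a bare-bones version of that setup on CentOS 6 (the export path and the client network range are placeholders; tighten the export options to your needs):

    # On the head node: install NFS and export the shared directory
    yum install -y nfs-utils
    echo "/data 10.240.0.0/16(rw,sync,no_root_squash)" >> /etc/exports
    service nfs start && chkconfig nfs on

    # On each compute node: install the client bits and mount the share
    yum install -y nfs-utils
    mkdir -p /data
    mount -t nfs head-node:/data /data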
Setting up a LAMP stack is very straightforward. These machines run pretty much vanilla Linux distros, so you can just use yum or apt-get to install components.
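For example, on a CentOS 6 image the head node's LAMP pieces boil down to something like this (package names differ slightly on Debian/Ubuntu with apt-get):

    yum install -y httpd php php-mysql mysql-server
    service httpd start  && chkconfig httpd on
    service mysqld start && chkconfig mysqld on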
For the cluster, you will probably end up having an image for the head node you use once, and then another image for the slave nodes that you replicate for each one.
For the scheduler, I've used Condor and SGE successfully, but I'm sure the other ones would work just as well.
Hope this helps.

Amazon RDS mysqldump outside of the Amazon eco-system

I would like to do a daily mysqldump to my own local disk, outside of the Amazon eco-system. I have a few reasons I want to do this daily.
I want to be in more control of my database when RDS/EBS goes down again.
RDS only allows you to restore within the same availability zone. This really gets me because a natural disaster or network fault at the availability zone pretty much renders backups useless because you can only restore to the same zone. :/
I would like a sandbox/test database where I don't have to pay for space and bandwidth.
My big question is: if I do a daily mysqldump of a 50 GB database, will my bandwidth/IO costs skyrocket? I'm assuming they will! Has anyone done something like this before?
UPDATE:
I am running a Multi-AZ production environment, but recent outages still proved that there is no such thing as complete failover.
Our company has two services, a front-facing web site and internal processing. It's most important that our internal operations don't stop. Our web site could go dark for several hours if need be. Having a recent mysqldump at my fingertips seems priceless to me.
So you have a few points of concern that you note.
With regard to being in control of your database, I am not really sure what you are getting at here. If your production DB goes down, you don't have control over it. Even if you have a local backup of it, that isn't going to do you much good if you don't have somewhere to host that data.
Is your current production RDS instance a Multi-AZ instance, to help shield against an AZ outage? If it is, the failover would happen automatically for you.
RDS snapshots can be restored into a different availability zone. See the documentation for the rds-restore-db-instance command-line tool here: http://docs.amazonwebservices.com/AmazonRDS/latest/CommandLineReference/CLIReference-cmd-RestoreDBInstanceFromDBSnapshot.html
Note that you can specify which AZ you want to restore to.
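That tool has since been folded into the unified AWS CLI; the equivalent call looks roughly like this (the identifiers and AZ are placeholders):

    aws rds restore-db-instance-from-db-snapshot \
        --db-instance-identifier restored-db \
        --db-snapshot-identifier my-snapshot \
        --availability-zone us-east-1b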
Based on a daily backup of 50GB, you would be talking about spending $180 in data transfers for backups alone. It would be MUCH cheaper to simply have a small test RDS in the same region as your production RDS instance for testing (I think it is like $5/month for a micro). All your data transfer between these boxes (i.e. moving snapshots onto it) would be free.
You can do the math on pricing yourself here: http://aws.amazon.com/rds/#pricing
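For reference, the rough arithmetic behind that estimate: 50 GB per day × 30 days ≈ 1.5 TB of outbound transfer per month, and at roughly $0.12/GB that works out to about $180/month (the exact per-GB rate depends on region and volume tier, so check the pricing page).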
This is not to mention that doing your daily backups against production would interrupt your production DB access for the time it locks the DB to perform the dump. This is of course unless you pay to have an RDS read replica that you can take the dumps from.
Finally, there are subtle differences between RDS and a standalone MySQL server in regards to how they are configured, I would much rather have my testing environment be as similar to my production environment as possible.
Just try it. I pull from Amazon to my local MySQL server, which runs on Ubuntu.
mysqldump signs -h signs.c3x4aregvxxx.us-east-1.rds.amazonaws.com -P 3306 -u cartersxxx -pxxxxxx | mysql -u root -pxxxxxx signs
I have been unable to predict Amazon's billing in advance, and I am actively trying to get away from them. FYI, I pay $72/month for a 10 GB MySQL database with low bandwidth. IMHO, table size dictates cost.

How can I backup a MySQL database on AWS?

I've been playing with AWS EC2 and really like it. There is one drawback though: the instance could disappear due to hardware failure or some other reason. This happened to me in my first week of operation. I was wondering whether there are good solutions for backing up a MySQL database so that I don't lose my customer credentials?
You can transfer the MySQL database directly from the EC2 machine to an S3 bucket, but you will pay more for bandwidth and storage. Alternatively, you can go for a third-party backup application or plugin (which is safe), since these compress and encrypt your data before saving it to S3 storage. You can also enable snapshots and take snapshots of your volumes (hard drives).
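If you do want to roll the first option yourself, a minimal sketch of the dump-and-upload route, assuming the AWS CLI is installed and configured on the instance (the bucket name is a placeholder):

    # Dump, compress, and push the backup to S3
    mysqldump -u root -p --all-databases --single-transaction | gzip > /tmp/backup-$(date +%F).sql.gz
    aws s3 cp /tmp/backup-$(date +%F).sql.gz s3://my-backup-bucket/mysql/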
I suggest you use the 'StoreGrid' backup software to back up your MySQL database on the EC2 machine. Check the following link to learn more about its online backup service for Amazon EC2/S3: http://storegrid.vembu.com/online-backup/amazon-ec2-s3-cloud-online-backup.php
Check the following link to configure MySQL database backups: http://storegrid.vembu.com/online-backup/mysql-backup.php?ct=1
Note: you mentioned that hardware failures occur often! You can back up entire hard drives too using the above software.
With that, your MySQL database is backed up from the EC2 instance and stored safely in S3.
Cheers !
Amazon now offers the Relational Database Service (RDS): pre-configured EC2 instances, without any OS access, that host MySQL (or Oracle, or SQL Server) for you, and that aim to solve much of the availability, reliability and durability issues one faces when trying to host a transactional data store yourself on a bare EC2 instance.
http://aws.amazon.com/rds/
"automated backups, DB snapshots, automatic host replacement, and Multi-AZ deployments"