Here is the situation:
On instance A, I have an EBS volume where my MySQL database data is located; it was created following this guide: http://qugstart.com/blog/amazon-web-services/how-to-set-up-db-server-on-amazon-ec2-with-data-stored-on-ebs-drive-formatted-with-xfs/
I want to move the database to a separate instance B, so I have already created that instance and installed MySQL on it.
Both instances and the volume are in the same region (and the same availability zone, which is required for attaching an EBS volume).
My question: if I detach the EBS volume from instance A and attach it to instance B, will it just work, or do I have to take any precautionary steps?
If you are following the instructions from the linked blog, you don't have to shut down the instance to detach the EBS volume. You only need to shut down your EC2 instance if the EBS volume is the root volume, i.e. /dev/sda1, /dev/sda, or /dev/xvda.
Having said that, you do need to shut down the MySQL service on instance A before detaching the volume:
service mysqld stop
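With MySQL stopped, you should also unmount the volume cleanly before detaching it. A minimal sketch, assuming it is mounted at /vol as in the linked guide (the volume ID is a placeholder):
sudo umount /vol
# then, from a machine with the AWS CLI configured:
aws ec2 detach-volume --volume-id vol-0123456789abcdef0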
Then you can bring up instance B, attach the EBS volume holding your data, and mount it (assuming you attach it as /dev/sdh or /dev/xvdh):
echo "/dev/sdh /vol xfs noatime 0 0" | sudo tee -a /etc/fstab
sudo mkdir -m 000 /vol
sudo mount /vol
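Once mounted, a quick sanity check before starting MySQL on instance B (assuming the service is named mysqld, as above):
df -h /vol                  # confirm the volume is mounted and your data is there
sudo service mysqld start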
You can move EBS volumes, but before you detach one from the original server, you should stop that server.
When you attach the volume to the new server, look in the EC2 console to see which device it is attached as (e.g. /dev/xvdb). Then all you need to do is mount it somewhere. Your MySQL server's data directory should then point to that mount location:
http://dev.mysql.com/doc/refman/5.5/en/server-options.html#option_mysqld_datadir
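For example, assuming the volume is mounted at /vol and the data lives in /vol/lib/mysql (the path is an assumption; use wherever your data actually sits), the relevant my.cnf entry would look like:
[mysqld]
datadir=/vol/lib/mysql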
I have been able to easily detach EBS volumes from one instance and reattach them to another running instance with no problems at all.
I would certainly make sure you first stop any programs that have open files on, or are otherwise using, that volume before detaching it.
I'm not very familiar with MySQL, but I assume that when you attach the new volume you will need to let MySQL know about the database and where it lives. In SQL Server you would do this by 'attaching' it to a running SQL Server instance; MySQL probably has a similar process.
Related
I'm trying to restart my MySQL server. The server runs in a Kubernetes/Fedora container. I have tried # /etc/init.d/mysqld restart and # systemctl restart mysqld. The problem is that there are no files in init.d.
When running # /etc/init.d/mysqld restart, bash says "No such file", obviously, as there is no such file. When running # systemctl restart mysqld it responds "bash: systemctl: Command not found".
The MySQL server is running fine and I can log into it; however, I can't restart it. Please help.
To restart a server on Kubernetes you simply need to delete the pod with kubectl delete pod <id>. If you didn't create the pod manually, but rather through a Deployment, it will restart and come back online automatically.
Deleting a pod is the correct way of shutting down servers. Kubernetes will first send MySQL a TERM signal, politely asking it to shut down. Then, after a configurable grace period, it will send KILL if the process hasn't complied.
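A minimal sketch (the pod name is a placeholder; use whatever kubectl get pods shows for your MySQL pod):
kubectl get pods
kubectl delete pod mysql-3479363417-abcde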
The MySQL server is running fine and I can log into it; however, I can't restart it.
You have two options, and both have their 'dangers' to address:
More likely, the container is started with either a CMD in the Docker image or a command directive in the Kubernetes manifest that directly launches your MySQL process. In that case, however you manage to terminate (during a restart) the running MySQL instance in that container, you will also terminate the whole container, and Kubernetes will then restart that container, or the whole pod.
Less likely, but possible, is that the container was started with some other 'main' command, and MySQL is started from a separate script. In that case, inspecting the Dockerfile or the Kubernetes manifest will give you the details of the start/stop procedure, and only in that case can you restart MySQL without killing the container it runs in.
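To see which case you are in, inspect how the container is actually started; a quick sketch (the pod name is a placeholder):
kubectl get pod mysql-pod -o yaml
# look at spec.containers[].command and args to see what launches mysqld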
Dangers: data persistence. If you don't have proper data persistence in place, killing a running container (whether through a restart or a refresh) will also destroy any ephemeral data with it.
Additional note: have you tried service mysql status and service mysql restart?
I have a pod that runs PHP, and I have persistent MySQL storage created on OpenShift Online. Whenever I click the option "add storage to php" and set mysql as the storage with mount point /var/lib/mysql, the server attempts to redeploy, but the new container gets stuck creating and then fails. I get multiple error messages like this one:
Failed to attach volume "pvc-d4962378-aae0-11e7-8a41-0a2a2b777307" on node "ip-172-31-50-169.us-west-2.compute.internal" with: Error attaching EBS volume "vol-0087ade77401256f5" to instance "i-0b8b81e68bc629f01": VolumeInUse: vol-0087ade77401256f5 is already attached to an instance status code: 400, request id: dfbdac9b-bad0-4211-8158-080a4e120b1a. The volume is currently attached to instance "i-02a6b44c53ab0d7f2"
Isn't this the proper way to connect mysql storage to a pod?
An EBS volume can only be mounted on one node at a time in an OpenShift cluster. You have PHP and MySQL as separate applications that can land on different nodes, and as a result you can't mount the persistent volume into both. The error is warning you of this.
The only way you can use a single EBS volume for PHP and MySQL at the same time is for them to run in separate containers of the same pod. You also need to ensure that the deployment strategy is set to Recreate and not Rolling, as Rolling creates the new instance while the old one still exists, and the same issue arises because the new and the old could land on different nodes.
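If you manage the application with the oc CLI, switching the strategy could look something like this (the deployment config name php is an assumption):
oc patch dc/php -p '{"spec":{"strategy":{"type":"Recreate"}}}'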
I'm experiencing some weirdness with Docker.
I have an Ubuntu server VM running in Windows Azure.
If I start a new docker container for e.g. Wordpress like so:
sudo docker run --name some-wordpress --link some-mysql:mysql -p 80:80 -d wordpress
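(The some-mysql container it links to was started beforehand, along these lines; the password is a placeholder:)
sudo docker run --name some-mysql -e MYSQL_ROOT_PASSWORD=secret -d mysql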
everything works nicely; I get a reasonably snappy site considering the low-end VM settings.
However, if I reboot the VM, and start the containers:
sudo docker start some-mysql
sudo docker start some-wordpress
The whole thing runs very slowly; the response time for a single page goes up to some 2-4 seconds.
Removing the containers and starting new ones makes everything run normally again.
What can cause this?
I suspect it has to do with disk usage. Does the MySQL container use the local disk for storage? When you restart an existing Docker container, you reuse its existing volume, normally stored in a subfolder of /var/lib/docker, whereas a new container creates a new volume.
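One way to check this theory is to see how large the reused container's volumes have grown (the path is the Docker default; adjust if yours differs):
sudo du -sh /var/lib/docker/volumes/* 2>/dev/null | sort -h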
I found a few search results saying that Linux on Azure doesn't handle "soft" reboots well and that things don't get reconnected as they should. A "hard" reboot supposedly fixes that.
Not sure if it helps, my Docker experience is all from AWS.
Your containers are running on a disk that is stored in blob storage with a maximum of 500 IOPS per disk. You can avoid hitting the disk (not very realistic with MySQL), add more disks and stripe across them (RAID0), or use SSDs (the D series in Azure). Depending on your use case, you might also rebase Docker completely onto the ephemeral storage (/dev/sdb); there is a guide for doing this on CoreOS. By the way, there are also some (non-Docker) MySQL performance suggestions on azure.com.
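To confirm you are actually IOPS-bound, watch the data disk under load; a sketch using sysstat's iostat (the device name sdc is an assumption):
iostat -x sdc 5
# sustained %util near 100 with modest throughput suggests you're hitting the IOPS cap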
Several months ago, I followed http://aws.amazon.com/articles/1663 and got it all running. Then my PC crashed, I lost the keypair (http://stackoverflow.com/questions/7949835/accessing-ec2-instance-after-losing-keypair), and I could no longer access the instance.
I now want to launch a new instance, mount this MySQL/DB volume that is left over from before, and see if I can get to the data on it. How can I go about doing that?
You have already outlined the correct approach to this problem, and the author of the article you referenced, Eric Hammond, has written another one detailing this very process; see Fixing Files on the Root EBS Volume of an EC2 Instance. It boils down to:
start another EC2 instance
stop the EC2 instance you can't access anymore
detach the EBS volume from the stopped instance
attach the EBS volume to the running instance
SSH into the running instance
mount the EBS volume in the running instance
perform whatever fixes necessary, i.e. adjust the /var permissions in your case
Please see Eric's instructions for details on how to do this from the command line; obviously you can achieve all steps up to the SSH access via the AWS Management Console as well, removing the need to install the Amazon EC2 API Tools, in case they aren't readily available already.
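A condensed sketch of the volume moves, using the modern AWS CLI rather than the older EC2 API Tools (all IDs and device names are placeholders):
aws ec2 stop-instances --instance-ids i-0aaaaaaaaaaaaaaaa
aws ec2 detach-volume --volume-id vol-0bbbbbbbbbbbbbbbb
aws ec2 attach-volume --volume-id vol-0bbbbbbbbbbbbbbbb --instance-id i-0cccccccccccccccc --device /dev/sdf
# then, on the running instance (the kernel may expose /dev/sdf as /dev/xvdf):
sudo mkdir /fix
sudo mount /dev/xvdf /fix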
I need autoscaling of MySQL slaves on EC2. Can anybody guide me on how to do that and how to transfer load to a newly added instance?
I would use Opscode's Chef. You can create 'roles' in Chef, such as a slave_server role. Make the role set up a new server and install MySQL (check out the Opscode-provided cookbooks for MySQL for the first part). Then write your own recipe that grabs a copy of your slave DB (perhaps from a recent snapshot of the EBS volume of one of your other slave servers) and uses it to create a new EBS volume on the new server. Then it's just a matter of making your recipe configure the server as a slave and start replication so it catches up to the master.
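Outside of the Chef specifics, the EBS part of such a recipe boils down to something like this (IDs, zone, and replication coordinates are all placeholders):
aws ec2 create-volume --snapshot-id snap-0123456789abcdef0 --availability-zone us-east-1a
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 --instance-id i-0123456789abcdef0 --device /dev/sdh
# once the volume is mounted and mysqld is up, point the slave at the master:
mysql -e "CHANGE MASTER TO MASTER_HOST='master.example.com', MASTER_USER='repl', MASTER_PASSWORD='secret', MASTER_LOG_FILE='mysql-bin.000042', MASTER_LOG_POS=12345; START SLAVE;"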