I have a backup of a boot volume. I can create a new boot volume from it and attach it to a new instance.
But I need to keep the existing instance with all its settings. Is it possible to simply replace one boot volume with another?
No.
There is no direct way to replace the boot volume of an existing instance. You first have to terminate the instance so that its boot volume is released and can be used again.
Then you create a new instance with the same or different settings (as per your need). While creating the instance, in the image selection field there is a tab called Boot Volumes where you can select the existing boot volume.
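If this is Oracle Cloud Infrastructure (the Boot Volumes tab in the image selection step suggests it is), the same flow can be sketched with the OCI CLI. The OCIDs, shape and subnet below are placeholders, and --source-boot-volume-id is my assumption for the flag that matches the console's Boot Volumes tab:
# Terminate the old instance but keep its boot volume
oci compute instance terminate --instance-id ocid1.instance.oc1..aaaa --preserve-boot-volume true
# Launch a new instance that boots from the preserved volume
oci compute instance launch --availability-domain AD-1 --compartment-id ocid1.compartment.oc1..aaaa --shape VM.Standard2.1 --subnet-id ocid1.subnet.oc1..aaaa --source-boot-volume-id ocid1.bootvolume.oc1..aaaa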
I have a pod that runs PHP, and I have persistent MySQL storage created on OpenShift Online. Whenever I click the option "add storage to php" and set mysql as the storage with mount point /var/lib/mysql, the server attempts to redeploy, but the new container is stuck creating and then fails. I get multiple error messages like this one:
Failed to attach volume "pvc-d4962378-aae0-11e7-8a41-0a2a2b777307" on node "ip-172-31-50-169.us-west-2.compute.internal" with: Error attaching EBS volume "vol-0087ade77401256f5" to instance "i-0b8b81e68bc629f01": VolumeInUse: vol-0087ade77401256f5 is already attached to an instance status code: 400, request id: dfbdac9b-bad0-4211-8158-080a4e120b1a. The volume is currently attached to instance "i-02a6b44c53ab0d7f2"
Isn't this the proper way to connect mysql storage to a pod?
An EBS volume can only be mounted on one node at a time in an OpenShift cluster. When you run PHP and MySQL as separate applications, they can land on different nodes, and as a result you can't mount the persistent volume against both. The error is warning you of this.
The only way you can use a single EBS volume for PHP and MySQL at the same time is for them to be running in separate containers of the same pod. You also need to ensure that the deployment strategy is set to Recreate and not Rolling, as Rolling results in a new pod being created while the old one still exists, and the same issue arises because the new and old pods could be scheduled on different nodes.
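As a minimal sketch with the oc client, assuming the deployment config is named php and the claim backing the MySQL volume is named mysql (both names are guesses based on the question):
# Switch the strategy to Recreate so old and new pods never overlap
oc patch dc/php -p '{"spec":{"strategy":{"type":"Recreate"}}}'
# Mount the existing claim into the pod at the MySQL data directory
oc set volume dc/php --add --type=persistentVolumeClaim --claim-name=mysql --mount-path=/var/lib/mysql
Adding the MySQL container itself to the same deployment config is then a matter of editing it (oc edit dc/php) so both containers live in one pod and therefore on one node.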
From the docs: http://docs.ejabberd.im/admin/guide/clustering/#clustering-setup
Adding a node into the cluster is done by starting a new ejabberd node within the same network, and running a command from a cluster node. On the second node for example, as ejabberd is already started, run the following command as the ejabberd daemon user, using the ejabberdctl script: ejabberdctl join_cluster 'ejabberd@first'
How does this translate into deployment in the cloud- where instances can (hopefully) be shutdown/restarted based on a consistent image and behind a load balancer?
Can all of them, including the initial instance, use "example.com" as "first" in the example above (assuming "example.com" is setup in DNS to point to the cloud load balancer)?
Or must the first instance not attempt to join a cluster, and subsequent ones all use the ip address of that initial instance instead of "first" (and if this is the case- does it get wacky if that initial instance goes down)?
Can all of them, including the initial instance, use "example.com" as "first" in the example above (assuming "example.com" is setup in DNS to point to the cloud load balancer)?
No. The node name parameter is the node name of an Erlang ejabberd node. Moreover, it should resolve on the internal Amazon network, not the public one, so it should not rely on a central DNS. It must be the name of an Erlang node, as the newly started node will connect to the existing node to share the same "cluster schema" and do an initial sync of the data.
So, the deployment is done as follows:
The first instance does not need to join a cluster, as there is no cluster schema to share yet.
Each new instance can use the node name of any other node of the cluster. It will add itself to the ejabberd cluster schema, so ejabberd knows that users can be on any node of this cluster. You can point to any running node in the cluster to add a new one, as they are all equivalent (there is no master). See the sketch after this list.
You still need to configure the load balancer to balance traffic to the public XMPP port on all nodes.
You only need to perform the join_cluster step once for each extra cluster node. The cluster configuration is kept locally on each node, so when you stop and restart a node, it will automatically rejoin the cluster once it has been properly set up.
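A rough sketch of that bootstrap on a new node, assuming the first node runs as ejabberd@ip-10-0-1-10.ec2.internal (a made-up internal EC2 hostname; use whatever internal name your existing node actually has):
# Start ejabberd on the new node and wait for it to come up
ejabberdctl start
ejabberdctl started
# Join via the Erlang node name of any node already in the cluster
ejabberdctl join_cluster 'ejabberd@ip-10-0-1-10.ec2.internal'
# Verify membership from any node
ejabberdctl list_cluster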
I'm working with Google Compute Engine, and to test it I created an instance with a small machine type which isn't very powerful. Now I want to change it to a more powerful machine type, but I can't seem to figure out how to do that.
Is it possible to change the instance type of a running VM?
You can't change the instance type of a running instance, so you'll have to shut it down and start a new one.
If you used a persistent root disk, you can reuse that disk on your replacement instance. If you used a scratch disk though, you'll have to make sure you back up your changes first.
In December 2013, Compute Engine was promoted to v1 (General Availability). Some notes to hopefully save folks time:
Scratch disks are deprecated. By default boot disks are now created as persistent.
Before you delete the old instance, save off its settings for easy reference when creating the new one:
gcutil getinstance instance-name
The disk name to use later for addinstance is the last part of the disk.source setting.
If the disk.autoDelete setting is True, set it to False to preserve the disk:
gcutil setinstancediskautodelete instance-name --auto_delete=False
Safely delete the old instance:
gcutil deleteinstance instance-name
To create a new instance using the old instance's persistent disk as the boot disk, you need to specify the boot flag, e.g. using gcutil:
gcutil addinstance --disk=instance-disk-name,mode=rw,boot [...]
Otherwise it complains that the disk already exists and fails the instance creation.
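Putting those steps together, a rough gcutil session looks like this (the instance and disk names are placeholders, and the --zone/--machine_type flags on addinstance are my assumptions about how you want the replacement sized):
# 1. Record the old instance's settings; note disk.source for later
gcutil getinstance old-instance
# 2. Make sure the boot disk survives deletion
gcutil setinstancediskautodelete old-instance --auto_delete=False
# 3. Delete the old instance; the persistent disk remains
gcutil deleteinstance old-instance
# 4. Recreate the instance on a bigger machine type, booting from the preserved disk
gcutil addinstance new-instance --zone=us-central1-a --machine_type=n1-standard-4 --disk=old-instance-disk,mode=rw,boot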
Several months ago, I followed http://aws.amazon.com/articles/1663 and got it all running. Then, my PC crashed and I lost the keypair (http://stackoverflow.com/questions/7949835/accessing-ec2-instance-after-losing-keypair) and could no longer access the instance.
I want to now launch a new instance and mount this MySQL/DB volume which is left over from before and see if I can get to the data on it. How can I go about doing that?
You have already outlined the correct approach to this problem, and the author of the article you referenced, Eric Hammond, has written another one detailing this very process; see Fixing Files on the Root EBS Volume of an EC2 Instance. It boils down to:
start another EC2 instance
stop the EC2 instance you can't access anymore
detach the EBS volume from the stopped instance
attach the EBS volume to the running instance
SSH into the running instance
mount the EBS volume in the running instance
perform whatever fixes are necessary, i.e. adjust the /var permissions in your case
Please see Eric's instructions for details on how to do this from the command line; obviously you can achieve all steps up to the SSH access via the AWS Management Console as well, removing the need to install the Amazon EC2 API Tools, in case they aren't readily available already.
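A rough sketch of those command-line steps with today's AWS CLI (the instance IDs, volume ID and device names are placeholders, and Eric's article predates the unified aws tool, so treat this as an equivalent rather than his exact commands):
# Stop the instance you can no longer reach
aws ec2 stop-instances --instance-ids i-0123456789abcdef0
# Move its EBS volume over to a helper instance you can SSH into
aws ec2 detach-volume --volume-id vol-0123456789abcdef0
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 --instance-id i-0fedcba9876543210 --device /dev/sdf
# On the helper instance: mount the volume and fix what needs fixing
sudo mkdir -p /mnt/recovery
sudo mount /dev/xvdf /mnt/recovery   # the device may show up as /dev/xvdf or /dev/nvme1n1
ls /mnt/recovery/var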
I need autoscaling of MySQL slaves on EC2. Can anybody guide me on how to do that and how to transfer load to a newly added instance?
I would use Opscode's Chef. You can create "roles" in Chef, such as a slave_server role. Make the role set up a new server and install MySQL (check out the Opscode-provided cookbooks for MySQL to do the first parts). Then write your own recipe to grab a copy of your slave DB (perhaps by grabbing a recent snapshot of the EBS volume of one of your other slave servers) and use that to create a new EBS volume on your new server. Then it's just a matter of making your recipe configure the server as a slave and get replication going so it catches up to the master.
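The EBS part of that recipe boils down to calls like the following (volume/snapshot IDs, availability zone and device are placeholders; inside a Chef recipe you would typically drive the same operations through an AWS cookbook or SDK rather than the raw CLI):
# Snapshot the data volume of an existing, healthy slave
aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "mysql slave seed"
# Create a fresh volume from that snapshot in the new instance's availability zone
aws ec2 create-volume --snapshot-id snap-0123456789abcdef0 --availability-zone us-east-1a
# Attach it to the newly launched slave, then mount it as the MySQL data directory
aws ec2 attach-volume --volume-id vol-0fedcba9876543210 --instance-id i-0123456789abcdef0 --device /dev/sdf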