From the docs: http://docs.ejabberd.im/admin/guide/clustering/#clustering-setup
Adding a node to the cluster is done by starting a new ejabberd node within the same network and running a command from a cluster node. On the second node, for example, with ejabberd already started, run the following command as the ejabberd daemon user, using the ejabberdctl script: ejabberdctl join_cluster 'ejabberd@first'
How does this translate into a cloud deployment, where instances can (hopefully) be shut down and restarted from a consistent image and sit behind a load balancer?
Can all of them, including the initial instance, use "example.com" as "first" in the example above (assuming "example.com" is set up in DNS to point to the cloud load balancer)?
Or must the first instance not attempt to join a cluster, with subsequent ones all using the IP address of that initial instance instead of "first" (and if so, does it get wacky if that initial instance goes down)?
Can all of them, including the initial instance, use "example.com" as "first" in the example above (assuming "example.com" is set up in DNS to point to the cloud load balancer)?
No, the node name parameter is the node name of an Erlang ejabberd node. It should, moreover, be on the internal Amazon network, not the public one, so it should not rely on a central DNS. It must be the name of an Erlang node, because the newly started node will connect to the existing node to share the same "cluster schema" and do an initial sync of the data.
So, the deployment is done as follows:
The first instance does not need to join a cluster, as there is no cluster schema to share yet.
A new instance can use the node name of any other node of the cluster. It will add itself to the ejabberd cluster schema, which means ejabberd knows that users can be on any node of this cluster. You can point a new node at any running node in the cluster, as they are all equivalent (there is no master).
You still need to configure the load balancer to balance traffic to the public XMPP port on all nodes.
You only need to perform the cluster join once for each extra cluster node. The cluster configuration is kept locally on each node, so when you stop and restart a node it will automatically rejoin the cluster after it has been properly set up.
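Putting that together, a minimal sketch of the flow (the internal hostname below is hypothetical; use whatever Erlang node name the first node actually runs under, as reported by ejabberdctl status):
# On the first node: just start ejabberd, there is nothing to join yet.
# On each additional node, as the ejabberd daemon user, point at any existing node:
$ ejabberdctl join_cluster 'ejabberd@ip-10-0-1-10.ec2.internal'
# Verify that the node is now part of the cluster:
$ ejabberdctl list_cluster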
I have a pod that uses PHP, and I have persistent MySQL storage created on OpenShift Online. Whenever I click the option "add storage to php" and set MySQL as the storage with mount point /var/lib/mysql, the server attempts to redeploy, but the new container is stuck creating and then fails. I get multiple error messages like this one:
Failed to attach volume "pvc-d4962378-aae0-11e7-8a41-0a2a2b777307" on node "ip-172-31-50-169.us-west-2.compute.internal" with: Error attaching EBS volume "vol-0087ade77401256f5" to instance "i-0b8b81e68bc629f01": VolumeInUse: vol-0087ade77401256f5 is already attached to an instance status code: 400, request id: dfbdac9b-bad0-4211-8158-080a4e120b1a. The volume is currently attached to instance "i-02a6b44c53ab0d7f2"
Isn't this the proper way to connect mysql storage to a pod?
An EBS volume can only be mounted on one node at a time in an OpenShift cluster. When you have PHP and MySQL as separate applications, they can land on different nodes, and as a result you can't mount the same persistent volume against both. The error is warning you of this.
The only way you can use a single EBS volume for PHP and MySQL at the same time is for them to run in separate containers of the same pod. You also need to ensure that the deployment strategy is set to Recreate rather than Rolling, as Rolling creates the new instance while the old one still exists, and the same issue arises because the new and old pods could land on different nodes.
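If you do consolidate them into one pod, the strategy can also be switched from the command line; a minimal sketch, assuming your DeploymentConfig is named php (adjust the name to your own):
$ oc patch dc/php -p '{"spec":{"strategy":{"type":"Recreate"}}}'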
We have a Google Cloud project with a couple of Compute Engine instances and a Container Engine cluster. The (unmanaged) compute instances' IPs can be resolved using [instance-name].c.[project-name].internal. Is there a similar DNS name, like an A record with multiple values, for the GKE nodes?
What we want to do is to access a cluster NodePort service from the compute instances. A multi-value DNS record would be fine because it doesn't matter which node we access. Individual GKE node names must be considered ephemeral.
I've tried to use the instance group name, gke-... as found using
gcloud container clusters describe [cluster-name] | grep instanceGroupManagers
but with no luck.
The GKE nodes, just like the other VMs, can be individually resolved using [node-name].c.[project-name].internal. But there isn't a single DNS A record that lists all of the node names together.
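As a workaround (a sketch, not a built-in record; it assumes your node names carry the usual gke- prefix), you can enumerate the current node names and then resolve each one individually via [node-name].c.[project-name].internal:
$ kubectl get nodes -o name
$ gcloud compute instances list --filter='name ~ ^gke-' --format='value(name)'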
My free trial period has been cancelled but I would like to continue using my existing engines.
The console says my instance is terminated and is no longer running.
Rebooting the instance gives a "The resource .... is not ready" error.
How can I continue using my engines with exact same IP setting and other configurations?
Once an instance is in a 'TERMINATED' state it can no longer be booted. You will need to recreate an instance with the same configuration, IP address and boot disk as you indicate. See this FAQ for more information about the terminated state: https://cloud.google.com/compute/docs/troubleshooting#terminate
To retain your existing IP address you will need to promote it to a static IP address resource. You can then reassign this address resource to your new instance.
$ gcloud compute addresses create address-name --addresses IP_ADDRESS --region REGION
See this article for the exact steps:
https://cloud.google.com/compute/docs/instances-and-network#promote_ephemeral_ip
To migrate the existing data on your disk you could create a snapshot and then restore that snapshot when creating a new instance:
$ gcloud compute disks snapshot DISK
See this article for the detailed steps:
https://cloud.google.com/compute/docs/disks#creating_snapshots
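The restore side of that flow looks roughly like this (a sketch; the disk, snapshot and instance names are placeholders, and address-name refers to the static address reserved above):
# Create a new boot disk from the snapshot:
$ gcloud compute disks create restored-disk --source-snapshot SNAPSHOT_NAME --zone ZONE
# Recreate the instance on that disk, reusing the promoted static IP:
$ gcloud compute instances create new-instance --disk name=restored-disk,boot=yes --address address-name --zone ZONE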
Finally to migrate all the associated configuration and metadata you could use the describe subcommand in the Cloud SDK:
$ gcloud compute instances describe INSTANCE
This will print out the entire configuration of your existing instance, which you can then use when recreating it as a new instance.
The exact steps are quite similar to the process of migrating an instance from one zone to another. You could essentially follow the guide for that process but recreate your new instance in the same zone if you prefer not to move the location of your data. The steps for migrating an instance across zones can be found here:
https://cloud.google.com/compute/docs/instances#moving_an_instance_between_zones
Several months ago, I followed http://aws.amazon.com/articles/1663 and got it all running. Then, my PC crashed and I lost the keypair (http://stackoverflow.com/questions/7949835/accessing-ec2-instance-after-losing-keypair) and could no longer access the instance.
I want to now launch a new instance and mount this MySQL/DB volume which is left over from before and see if I can get to the data on it. How can I go about doing that?
You outlined the correct approach to this problem already, and the author of the article you referenced, Eric Hammond, has written another one detailing this very process, see Fixing Files on the Root EBS Volume of an EC2 Instance - it boils down to:
start another EC2 instance
stop the EC2 instance you can't access anymore
detach the EBS volume from the stopped instance
attach the EBS volume to the running instance
SSH into the running instance
mount the EBS volume in the running instance
perform whatever fixes necessary, i.e. adjust the /var permissions in your case
Please see Eric's instructions for details on how to do this from the command line; obviously you can achieve all steps up to the SSH access via the AWS Management Console as well, removing the need to install the Amazon EC2 API Tools, in case they aren't readily available already.
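If you do want to script it, a rough sketch with the modern AWS CLI rather than the old EC2 API Tools (the instance IDs, volume ID, key file and device names are placeholders; the volume may appear inside the instance as /dev/xvdf or similar):
$ aws ec2 stop-instances --instance-ids i-STOPPED_INSTANCE
$ aws ec2 detach-volume --volume-id vol-LEFTOVER_VOLUME
$ aws ec2 attach-volume --volume-id vol-LEFTOVER_VOLUME --instance-id i-RUNNING_INSTANCE --device /dev/sdf
$ ssh -i new-key.pem ec2-user@RUNNING_INSTANCE_IP
# On the running instance, mount the attached volume and inspect the data:
$ sudo mkdir -p /mnt/recovery && sudo mount /dev/xvdf /mnt/recovery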
I need autoscaling of MySQL slaves on EC2. Can anybody guide me on how to do that, and how to transfer load to a newly added instance?
I would use Opscode's Chef. You can create "roles" in Chef, such as a slave_server role. Make the role set up a new server and install MySQL (check out the Opscode-provided cookbooks for MySQL to do the first parts). Then write your own recipe to grab a copy of your slave DB (perhaps by taking a recent snapshot of the EBS volume of one of your other slave servers) and use it to create a new EBS volume on your new server. Then it's just a matter of making your recipe configure the server as a slave and start replication so it catches up to the master.
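At the shell level, the steps such a recipe ends up driving look roughly like this (a sketch; the snapshot, volume and instance IDs, master host, credentials and binlog coordinates are placeholders, and the binlog position must match the point at which the snapshot was taken):
# Create a volume from a recent snapshot of an existing slave and attach it:
$ aws ec2 create-volume --snapshot-id snap-SLAVE_SNAPSHOT --availability-zone us-east-1a
$ aws ec2 attach-volume --volume-id vol-NEW_VOLUME --instance-id i-NEW_SLAVE --device /dev/sdf
# Mount it as the MySQL data directory and start MySQL (server-id must be unique in my.cnf):
$ sudo mount /dev/xvdf /var/lib/mysql && sudo service mysql start
# Point the new slave at the master and start replication:
$ mysql -u root -p -e "CHANGE MASTER TO MASTER_HOST='master.example.internal', MASTER_USER='repl', MASTER_PASSWORD='REPL_PASSWORD', MASTER_LOG_FILE='mysql-bin.000123', MASTER_LOG_POS=4; START SLAVE;"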