How can an autoscaling managed instance group be added to a target pool?
It's easy enough to add existing instances to a target pool via
$ gcloud compute target-pools create mypool --region us-central1
$ gcloud compute target-pools add-instances mypool \
--instances existing-instance1 existing-instance2 --zone us-central1-b
However, I want all the instances that appear in my autoscaling group to automatically be added to my target pool.
You can use the gcloud compute instance-groups managed set-target-pools command to set the target pool for an existing managed instance group. See the documentation for more information.
There are four different types of resources in your setup:
an instance is a virtual machine
a target pool is a pool of instances used only for L3 (i.e. IP-level) network load balancing
a managed instance group is a group of instances, used among other things as a target for your autoscaler
an autoscaler watches a managed instance group and adds/deletes instances in that group according to load (and your policy)
To make sure that all the instances in your managed instance group (that is, all the instances in your autoscaling group) are automatically in your target pool, you need to tell the managed instance group about the target pool.
As @Faizan correctly mentioned, the command to do it is:
gcloud compute instance-groups managed set-target-pools instance-group-name --target-pools your-target-pool
The help page for this command seems more useful than the online documentation:
gcloud compute instance-groups managed set-target-pools --help
Please note that this help page seems to be out of date, though: setting a new target pool now DOES apply to existing instances in the group (when using API version v1 or later). That was not the case in the beta versions (v1beta2).
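For completeness, a minimal end-to-end sketch of wiring the pieces together (the template my-template, the group name my-mig, and the autoscaling thresholds are hypothetical placeholders; the region and zone match the question):
$ gcloud compute target-pools create mypool --region us-central1
$ gcloud compute instance-groups managed create my-mig \
--zone us-central1-b --template my-template --size 2
$ gcloud compute instance-groups managed set-target-pools my-mig \
--target-pools mypool --zone us-central1-b
$ gcloud compute instance-groups managed set-autoscaling my-mig \
--zone us-central1-b --max-num-replicas 10 --target-cpu-utilization 0.6
Every instance the autoscaler creates in my-mig is then added to mypool automatically, and instances it deletes drop out of the pool.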
Related
There is an openshift-origin cluster, version 3.11 (upgraded from 3.9).
I want to add two new nodes to the cluster.
The node hosts were created in an OpenStack project with NAT and use an internal class C network (192.168.xxx.xxx); floating IPs are also attached to the hosts.
There are DNS records that resolve the hosts' FQDNs to the floating IPs and back.
The scaleup playbook works fine, but the new nodes appear in the cluster with their internal IPs, so nothing works.
In OpenShift v3.9 and earlier I used the inventory variable
openshift_set_node_ip = true
and set openshift_ip for the node being added.
Now it doesn't work.
What should I use instead of openshift_set_node_ip?
I had a similar problem, which I solved after reading https://stackoverflow.com/a/29496135, where Kashyap explains how to change the ansible_default_ipv4 fact that is used to guess which IP address to use.
This fact is derived from a route lookup toward 8.8.8.8 (https://github.com/ansible/ansible/blob/e41f1a4d7d8d7331bd338a62dcd880ffe27fc8ea/lib/ansible/module_utils/facts/network/linux.py#L64). You can therefore add a specific route to 8.8.8.8 to change the ansible_default_ipv4 result:
sudo ip r add 8.8.8.8 via YOUR_RIGHT_GATEWAY
Maybe it helps to solve your case.
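For example, a quick way to verify the effect before re-running the scaleup playbook (node3.example.com and the gateway 192.168.0.1 are hypothetical placeholders):
# 1. See which address Ansible currently detects for the new node
$ ansible node3.example.com -m setup -a 'filter=ansible_default_ipv4'
# 2. On the node itself, pin the route to 8.8.8.8 via the gateway of the interface you want reported
$ sudo ip r add 8.8.8.8 via 192.168.0.1
# 3. Re-run step 1; ansible_default_ipv4 should now reflect that interface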
I am trying to configure a RabbitMQ cluster in the cloud using a config file.
My procedure is this:
Get the list of instances I want to cluster with (via the cloud API, before cluster startup).
Modify the config file like this:
cluster_formation.peer_discovery_backend = rabbit_peer_discovery_classic_config
cluster_formation.classic_config.nodes.1 = rabbit@host1.my.long.domain.name
cluster_formation.classic_config.nodes.2 = rabbit@host2.my.long.domain.name
...
Run rabbitmq-server.
I expect all the nodes to form a single cluster, but instead there can end up being 2+ independent clusters. How do I solve this issue?
UPDATE:
I found out that when I run rabbitmqctl join_cluster rabbit@host.in.existing.cluster on a node that is already in some cluster, that node leaves its previous cluster (I expected the clusters to merge). That might be the root of the problem.
UPDATE 2:
I have 4 instances. 3 run bare rabbitmq-servers, and 1 is configured to join the other 3. When started, it joins only the last instance in its config; the 2 others show no activity in their logs. This happens with both the classic config and the Erlang config.
When you initially start up your cluster, there is no mechanism to resolve the race condition. Using peer discovery backends will not help with this issue (tested on etcd).
What actually resolved this issue was not deploying the instances simultaneously. When they are started one by one, everything is fine and you get one stable cluster, which can then handle scaling without failure.
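A minimal sketch of such a sequential start, assuming the hosts from the question's config are reachable over SSH and that rabbitmqctl await_startup (available in recent RabbitMQ releases) is used to block until each node has fully booted:
for host in host1.my.long.domain.name host2.my.long.domain.name host3.my.long.domain.name; do
    # start the next node only after the previous one reports it is up
    ssh "$host" 'sudo systemctl start rabbitmq-server && sudo rabbitmqctl await_startup'
done
Only the first node forms a new cluster; each subsequent node then finds an already-running peer from the classic config list and joins it instead of racing to create its own cluster.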
I was trying to add an IP# to my Google Compute Engine (RHEL7) instance, but I typed the invocation wrong:
sudo ifconfig eth0 1.2.3.4
The existing IP# on eth0 was 1.2.3.3, so that invocation changed my existing IP# to one that isn't known to anything else. And so I lost all connections (ssh, http, even ping) to the instance.
How do I recover from this mistake? Is there a gcloud or GCP Console method I can use, since I can't connect directly to the instance anymore?
Since the ifconfig was invoked from a shell and not persisted in any startup scripts (or anywhere else), simply resetting the instance will reboot it and cause it to configure eth0 according to its startup scripts:
$ gcloud compute instances list
NAME ZONE MACHINE_TYPE PREEMPTIBLE INTERNAL_IP EXTERNAL_IP STATUS
<instance-name> <instance-zone> <machine-type> <preemptible> <bad-internal-ip#> <external-ip#>
$ gcloud compute instances reset <instance-name>
For the following instance:
- [<instance-name>]
choose a zone:
[1] asia-east1-a
[2] asia-east1-b
[...]
Please enter your numeric choice: <N-of-instance-zone>
Updated [https://www.googleapis.com/compute/v1/projects/<project-name>/zones/<instance-zone>/instances/<instance-name>].
$ gcloud compute instances list
NAME ZONE MACHINE_TYPE PREEMPTIBLE INTERNAL_IP EXTERNAL_IP STATUS
<instance-name> <instance-zone> <machine-type> <preemptible> <default-internal-ip#> <external-ip#> RUNNING
After you enter your numeric zone choice, it can take several seconds or longer (but probably not more than 5 minutes) for the instance to restart.
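If you already know the zone, you can skip the interactive prompt by passing it explicitly (same placeholders as above):
$ gcloud compute instances reset <instance-name> --zone <instance-zone>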
Look around in the Cloud Platform Console. You can usually change the external IP and then go the long way around, provided it's an instance.
I recently started using a managed instance group with a multi-zone configuration. When I use the GCE API to fetch the instances of this instance group, zone is a required parameter. A managed instance group with instances in multiple zones does not belong to one zone, so how do I fetch its instances in this case?
What API or gcloud command are you using to list the instances? I guess you are using the instanceGroups.listInstances API, but for a regional instance group you need to use regionInstanceGroupManagers.listManagedInstances, or the corresponding gcloud command:
$ gcloud beta compute instance-groups managed list-instances instance-group-1 --region us-central1
NAME ZONE STATUS ACTION LAST_ERROR
instance-group-1-mk4j us-central1-b RUNNING NONE
instance-group-1-xnyk us-central1-c RUNNING NONE
instance-group-1-g23r us-central1-f RUNNING NONE
Note that this feature is still in beta.
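If you need the result in a script-friendly form, gcloud's --format flag can reduce the output to just the instance names; a sketch, assuming the listed resources expose the instance URL field as they do at the time of writing:
$ gcloud beta compute instance-groups managed list-instances instance-group-1 \
--region us-central1 --format="value(instance.basename())"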
This command used to work (around May 2016) but for some reason it does not anymore:
gcloud compute --verbosity error --project "phantomjscloud-20160125" instance-groups managed list
I now get the following error:
ERROR: (gcloud.compute.instance-groups.managed.list) More than one Autoscaler with given targe.
I can't find any details regarding this error. What changed, and how do I again properly enumerate my instance groups?
Given that all my instance groups use (and have always used) autoscaling, I'm not sure why I am now getting this error.
I don't know what the root cause was, but I deleted an instance group with a very similar name to another one (ez-deploy-pjsc-api-preempt-large-usa-central1-a-1 vs ez-deploy-pjsc-api-preempt-large-usa-central1-a) and now it works.
Seems like a bug in the gcloud system.