Using the Azure CLI, I would like to remove a series of backends from a Front Door backend pool based on their address. But from what I can tell, you need to know the backend's position in the list (its index) rather than being able to select it by address.
I am using az network front-door backend-pool backend list to get the list of backends, but the response does not provide an index to use.
Can I remove a backend by the address, or some other identifier, rather than an index?
If I am forced to delete by index:
If I list the backends multiple times, can I be guaranteed that they always come back in the same order?
If I add a new backend to the pool, is it always the last in the list, and therefore the highest index?
If I delete the first backend (index = 1), does that index get replaced with the next in the list?
The Azure CLI only supports removing a backend by index, but you can use the commands below to find the index of the backend you want to remove by its address:
# Fetch the backend list as JSON
backends=$(az network front-door backend-pool backend list --resource-group <resource group name> --front-door-name <front door name> --pool-name <pool name>)
echo "$backends" | jq
# Match on address, find its position, and add 1 (backend indices are 1-based)
echo "$backends" | jq '[ .[] | .address == "stantest1016.blob.core.windows.net" ] | index(true) + 1'
It is recommended to query the backend list again after adding or removing backends, since the indices can change.
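Putting it together, here is a minimal end-to-end sketch; the resource names are placeholders, and the --index parameter on the remove subcommand is my assumption about how the CLI's remove-by-index works:
# Look up the 1-based index by address, then remove that backend
backends=$(az network front-door backend-pool backend list --resource-group myGroup --front-door-name myFrontDoor --pool-name myPool)
index=$(echo "$backends" | jq '[ .[] | .address == "stantest1016.blob.core.windows.net" ] | index(true) + 1')
az network front-door backend-pool backend remove --resource-group myGroup --front-door-name myFrontDoor --pool-name myPool --index "$index"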
With the gcloud command line tool I can do:
$ gcloud compute instances list --filter='tags.items:development'
The documentation claims: "…you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. Use filtering on nested fields to take advantage of labels to organize and search for results based on label values." But no examples are provided, so it's not clear how one actually goes about this.
I've tried labels.development eq *.*, labels eq *development*, labels:development, and so on. I've also tried setting the verbosity of the command-line client to info and looking through the output, as well as monitoring the requests that go to the API from the Compute Engine web console, but neither has gotten me anywhere.
Finding Tags with regular expression filters
I'm struggling with the same issue, but I think that regular expressions solve the problem.
I have many instances with multiple tags, but I can search across all tags with the '~' operator, e.g. to find all servers with the production tag:
gcloud compute instances list --filter='tags.items~^production$'
For many servers the 'production' tag is the third entry in tags.items yet the regexp finds it.
This seems to work, but I can't find any documentation that specifically says it should. The nearest is the section on topic filters, which mentions this:
key ~ value: True if key matches the RE (regular expression) pattern value.
You can also search for multiple tags:
gcloud compute instances list --filter='tags.items~^production$ AND tags.items~^european$'
which would find all servers with the two tags 'production' and 'european'
Tags vs. custom metadata
If you want something a bit more flexible than tags (which can only be present or missing), you can attach your own custom multi-valued metadata to an instance (via the UI, command-line or API). You can then search for particular values of that item.
For example, suppose I have different instances supporting eCommerce for different brands. I could attach a custom 'brand' metadata item to each server and then find all of the servers which run my "Coca-Cola" brands via:
gcloud compute instances list --filter="metadata.items.key['brand']['value']='Coca-Cola'"
...and my 'Pepsi Cola' servers with:
gcloud compute instances list --filter="metadata.items.key['brand']['value']='Pepsi Cola'"
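In case it is useful, here is a hedged sketch of how such a 'brand' item could be attached from the command line in the first place (the instance name and zone are placeholders):
# Attach a custom metadata key/value to an existing instance
gcloud compute instances add-metadata my-ecommerce-vm --zone=us-central1-a --metadata=brand=Coca-Cola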
Finding Metadata with regular expression filters
You probably guessed this already, but the regular expression operator also works with metadata filters, so you can do:
gcloud compute instances list --filter="metadata.items.key['brand']['value']~'Cola'"
As explained here, you can use the following syntax with the gcloud CLI:
gcloud compute instances list --filter labels.env=dev
A multi-filter example that I'm using (repeated --filter flags override one another, so the conditions go into a single expression combined with AND):
gcloud compute instances list --filter="zone:( europe-west1-d ) AND name:( testvm ) AND labels.group=devops AND labels.environment_type=production"
Add instance metadata MyKey=MyValue for gke-cluster-asia-eas-default-pool-dc8f484c-knbs:
gcloud compute instances add-metadata gke-cluster-asia-eas-default-pool-dc8f484c-knbs --metadata=MyKey=MyValue
Display all instances with MyKey=MyValue:
gcloud compute instances list --filter="metadata.items.key['MyKey'][value]='MyValue'"
Display all instances that belong to the cluster cluster-asia-east1-a:
gcloud compute instances list --filter="metadata.items.key['cluster-name'][value]='cluster-asia-east1-a'"
NAME ZONE MACHINE_TYPE PREEMPTIBLE INTERNAL_IP EXTERNAL_IP STATUS
gke-cluster-asia-eas-default-pool-dc8f484c-knbs asia-east1-a n1-standard-1 10.140.0.2 104.155.227.25 RUNNING
gke-cluster-asia-eas-default-pool-dc8f484c-x8cv asia-east1-a n1-standard-1 10.140.0.3 104.199.226.16 RUNNING
gke-cluster-asia-eas-default-pool-dc8f484c-z5wv asia-east1-a n1-standard-1 10.140.0.4 104.199.134.9 RUNNING
I am trying to use Ansible to do some parallel computation. My data is trivially parallelizable, I just need to split the file across my hosts (EC2 instances). Is there a canonical way to do this?
The next best thing would be to have a counter that increments for each host. Assuming I have already split my data into my number of workers, I would like to be able to say within each worker task:
- file: src=data/users-{{ host_index }}.csv dest=/mnt/users.csv
Then, each worker can process its copy of users.csv with a separate script that is agnostic to which set of users it has. Is there any way to get this counter index?
I am a beginner to Ansible, so I wonder whether I am overlooking a simple module or idiom, either in Ansible or Jinja. Thanks in advance.
It turns out I have access to a variable called ami_launch_index inside of the ec2_facts module that gives me a zero-indexed unique ID to each EC2 instance. Here is the code for copying over files with numerical suffixes to their corresponding EC2 instances:
tasks:
  - name: Gather ec2 facts
    action: ec2_facts
    register: facts

  - name: Share data to nodes
    copy: src=data/websites-{{ facts.ansible_facts.ansible_ec2_ami_launch_index }}.txt dest=/mnt/websites.txt
The copy line produces the following for the src values:
data/websites-1.txt
data/websites-0.txt
data/websites-2.txt
(There is no guarantee that the hosts will iterate in ami_launch_index order)
I want to know how we can delete all keys for a specific namespace only.
FLUSHALL
deletes all keys, which is a problem when using multiple apps with the same Redis server.
You cannot flush a namespace in Redis, but you can delete all keys matching a pattern:
$ redis-cli --scan --pattern 'user:*' | xargs redis-cli unlink
You should use FLUSHDB (always with caution) after ensuring you have SELECTed the right database.
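For example, a minimal sketch assuming the app's keys live in logical database 3:
redis-cli -n 3 FLUSHDB   # -n selects the database; FLUSHDB clears only that one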
On a related subject, you should use shared databases really carefully: all sorts of nastiness can ensue from them, and they are not compatible with the upcoming Redis cluster (v3, which is in beta and expected to be released EOY). You may want to look at this benchmark post about shared vs. dedicated Redis instances for more background on the subject.
// ioredis sketch: register a Lua script that deletes every key matching a pattern
// (note: KEYS blocks the server while it scans, so prefer SCAN for large keyspaces)
const Redis = require('ioredis');
const redis = new Redis();
redis.defineCommand('flush', {
  numberOfKeys: 0,
  lua: "local keys = redis.call('keys', ARGV[1]) " +
       "for i = 1, #keys, 5000 do redis.call('del', unpack(keys, i, math.min(i + 4999, #keys))) end " +
       "return keys"
});
function flush(namespace) {
  console.log('delete namespace:', namespace);
  return redis.flush(namespace + ':*'); // the pattern is passed as ARGV[1]
}
I have a list of DNs, and for performance reasons I want to retrieve the attributes of every DN in the list in a single trip to the LDAP server.
It seems that searching by DN, i.e., using the DN as a search filter, is not possible:
Using DN in Search Filter
http://www.openldap.org/lists/openldap-software/200503/msg00520.html
…is there any alternative?
Sure you can.
ldapsearch -h <ldaphost> -b "cn=joe,dc=yourdomain,dc=com" -s base -D cn=admin,dc=yourdomain,dc=com -W "(objectclass=*)" "*"
Will retrieve all user attributes for the DN cn=joe,dc=yourdomain,dc=com.
But, for the list, you would need to repeat the search for each one.
We often do this in a bash script.
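Here is a minimal sketch of such a loop, assuming a file dns.txt with one DN per line (the host, bind DN, and password variable are placeholders):
# One base-scope search per DN, fetching all user attributes
while IFS= read -r dn; do
  ldapsearch -h ldaphost -b "$dn" -s base -D cn=admin,dc=yourdomain,dc=com -w "$BINDPW" "(objectclass=*)" "*"
done < dns.txt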
Can you use a filter to identify which DNs you need?
-jim
It seems this is only possible in Active Directory. All I had to do was filter by the distinguishedName attribute; however, in my tests there was no performance gain.
Active Directory includes the distinguishedName attribute on every object; the value is the object's DN. The following example elaborates the previous example to include a value of distinguishedName on each object.
http://msdn.microsoft.com/en-us/library/cc223167.aspx
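As a hedged illustration (the host, credentials, and DNs are placeholders), several distinguishedName clauses can be OR-ed into one filter so the whole list costs a single round trip:
# One search for a whole list of DNs against Active Directory
ldapsearch -h adhost -D cn=admin,dc=example,dc=com -w "$BINDPW" -b dc=example,dc=com "(|(distinguishedName=cn=joe,dc=example,dc=com)(distinguishedName=cn=jane,dc=example,dc=com))" "*"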
How do I request a number of nodes (not processors) when submitting a job in SGE?
For example, in TORQUE we can specify qsub -l nodes=3.
How do I request nodes by their names in SGE?
For example, in TORQUE we can do this with qsub -l nodes=abc+xyz+pqr, where abc, xyz, and pqr are hostnames.
For a single hostname, qsub -l hostname=abc works. But how do I delimit multiple hostnames in SGE?
Requesting the number of nodes with Grid Engine is done indirectly. When you want to submit a parallel job, you have to request a parallel environment (man sge_pe) together with the number of slots (processors etc.), like qsub -pe mytestpe 12. Depending on the allocation_rule defined in the parallel environment (qconf -sp mytestpe), the slots are distributed over one or more nodes. If you have a so-called fixed allocation rule, where you just set a certain number as the allocation rule, like 4 (4 slots per host), it is easy: for one host, submit with -pe mytestpe 4; for 10 nodes, submit with -pe mytestpe 40.
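As a sketch of that setup (the PE name, script name, and output are illustrative):
qconf -sp mytestpe | grep allocation_rule   # e.g. prints: allocation_rule    4
qsub -pe mytestpe 40 job.sh                 # 40 slots at 4 per host = 10 nodes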
A node name can be requested with -l h=abc. Since node names are RESTRINGs (regular expression strings) in Grid Engine, you can create a regular expression for host filtering: qsub -l h="abc|xyz". You can also create host groups (qconf -ahgrp) and request so-called queue domains (qsub -q all.q@@mygroup).
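For example (the group and script names are placeholders):
qconf -ahgrp @mygroup           # define a host group containing abc and xyz
qsub -q all.q@@mygroup job.sh   # restrict the job to hosts in that group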
Daniel
http://www.gridengine.eu
You can use -tc to limit the number of concurrent tasks (i.e., the number of slots that will be used for an array job). I use this when I submit array jobs with 100 sub-jobs, to limit the impact on our queue, defaulting to 10 simultaneous jobs with -tc 10. As each task finishes, another array task from the pending pool is submitted.
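For instance (the script name is a placeholder):
qsub -t 1-100 -tc 10 job.sh   # a 100-task array job with at most 10 tasks running at once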
The only way I've been able to figure out to do this is to set up specific resource quota sets (using qconf -mrqs) that reference the particular host groups you want to use. You would have to set up all of the combinations you want first. I don't see a real reason to specify specific hosts, though, unless those hosts have specific resources you want to use (in which case I'd set up consumable resources for those, apply the appropriate number of resources to each host that can supply them, and then use that instead of specifying specific hosts for a particular job).
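As a rough sketch of such a quota set (the names are placeholders; see man sge_resource_quota for the exact syntax, including the '!' exclusion):
# Deny slots on every host outside the @groupA host group
{
   name         only_groupA
   description  "restrict jobs to @groupA"
   enabled      TRUE
   limit        hosts {!@groupA} to slots=0
}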