I have written a playbook as below:
tasks:
  - name: List of all EIP
    ec2_eip_info:
      region: "{{ region }}"
    register: list_of_eip

  - name: initiate EIP list
    set_fact:
      eip_list: []

  - name: List of all unused EIP
    set_fact:
      eip_list: "{{ list_of_eip.addresses | json_query(jmesquery) }}"
    vars:
      jmesquery: "[?instance_id== None].allocation_id"

  - name: Release IP
    command: aws ec2 release-address --allocation-id {{ eip_list }} --region {{ region }}
    vars:
      jmesquery: "[?instance_id== None].allocation_id"
In the output, I am getting an error in task Release IP:
aws ec2 release-address --allocation-id [u'eipalloc-xxxxxxxxxxxxxxx'] --region us-east-1",
allocation id not found.
I need to pass the allocation id as eipalloc-xxxxxxxxxxxxxxx and I don't know how to fix the above. Also, can someone point me in the right direction on how to loop if I have multiple EIPs?
You are constructing a list, but your command expects a single string element. Since you expect multiple elements in that list anyway, the obvious solution is to loop over it so you handle one or more elements the exact same way (and zero as well: if the list is empty, no loop iteration takes place).
Moreover, there is no need to complicate your playbook with jmespath (great for complex JSON queries, but that's not the case here) nor with setting facts.
The following should meet your requirement. I don't have an example of your data and no access to an AWS account to try it myself, so it is untested, and you may have to tune the filter in rejectattr (and maybe switch to selectattr if easier) depending on your exact input data. See data manipulation in the Ansible documentation and rejectattr / selectattr in the Jinja2 documentation.
Note: for your next question, consider crafting a correct minimal, complete, verifiable example.
- name: List of all EIP
  ec2_eip_info:
    region: "{{ region }}"
  register: list_of_eip

- name: Release IP
  command: aws ec2 release-address --allocation-id {{ item }} --region {{ region }}
  loop: "{{ list_of_eip.addresses | rejectattr('instance_id') | map(attribute='allocation_id') | list }}"
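If you want to sanity-check the computed list before anything gets released, a quick debug task reusing the same filter chain (again untested, same caveats as above) can help:

- name: Show unattached allocation ids before releasing
  debug:
    msg: "{{ list_of_eip.addresses | rejectattr('instance_id') | map(attribute='allocation_id') | list }}"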
We are trying to pass some env variables to a reusable workflow using a workaround, as follows, but no variables are passed.
Workflow YAML is:
name: "call my_reusable_workflow"
on:
workflow_dispatch:
env:
env_branch: ${{ github.head_ref }}
env_workspace: ${{ github.workspace }}
jobs:
call_reusable_workflow_job:
uses: my_github/my-reusable-workflow-repo/.github/workflows/used_wf_test.yml#master
with:
env_vars: |
hello-to=Meir
branch_name=${{ env.env_branch }}
secrets:
my_token: ${{secrets.ENVPAT}}
and the reusable workflow YAML is:
name: my_reusable_workflow
on:
  workflow_call:
    inputs:
      env_vars:
        required: true
        type: string
        description: list of vars and values
    secrets:
      giraffe_token:
        required: true
jobs:
  reusable_workflow_job:
    runs-on: ubuntu-latest
    steps:
      - name: set environment variables
        if: ${{ inputs.env_vars }}
        run: |
          for env in "${{ inputs.env_vars }}"
          do
            printf "%s\n" $env >> $GITHUB_ENV
          done
When the action runs, it gets the value of hello-to=Meir but doesn't get the value branch_name=${{ env.env_branch }}.
I also tried passing the value as branch_name=${{ github.head_ref }}, but with no success.
According to the Limitations of Reusing workflows:
Any environment variables set in an env context defined at the workflow level in the caller workflow are not propagated to the called workflow. For more information, see "Variables" and "Contexts."
So, the env context is not supported in reusable workflow callers at the moment.
However, you can pass the Default Environment Variables to reusable workflow callers.
For example, in your particular scenario, you want to use these contexts:
github.head_ref
github.workspace
The equivalent default environment variables are:
GITHUB_HEAD_REF
GITHUB_WORKSPACE
And, your reusable workflow (e.g. reusable_workflow_set_env_vars.yml) will be called by its caller (e.g. reusable_workflow_set_env_vars_caller.yml) like this:
name: reusable_workflow_set_env_vars_caller
on:
  workflow_dispatch:
jobs:
  set-env-vars:
    uses: ./.github/workflows/reusable_workflow_set_env_vars.yml
    with:
      env_vars: |
        TEST_VAR='test var'
        GITHUB_HEAD_REF=$GITHUB_HEAD_REF
        GITHUB_WORKSPACE=$GITHUB_WORKSPACE
        GITHUB_REF=$GITHUB_REF
Apart from that, regarding your implementation of the reusable workflow (e.g. reusable_workflow_set_env_vars.yml):
Since env_vars is of string type, you need to make it robust against the YAML multiline whitespace variants, e.g. >.
You can visualize and observe the whitespace behaviour with this online utility (https://yaml-multiline.info/).
With the current implementation, word splitting may occur for variable values that contain spaces. So you might need to iterate per line, i.e. up to the newline character, as in the sketch below. This thread (https://superuser.com/questions/284187/bash-iterating-over-lines-in-a-variable) might also be helpful.
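For example, a per-line variant of the step in your reusable workflow could look roughly like this (a sketch, untested; it reuses your env_vars input and skips empty lines so values containing spaces survive intact):

- name: set environment variables
  if: ${{ inputs.env_vars }}
  run: |
    # read the multiline input one line at a time instead of word-splitting it
    while IFS= read -r line; do
      [ -n "$line" ] && printf "%s\n" "$line" >> "$GITHUB_ENV"
    done <<< "${{ inputs.env_vars }}"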
We use Helm to deploy a microservice on different systems.
Among other things, we have a ConfigMap template and, of course, a values file with the default values in the repo of the service. Some of these values are JSON and so far are stored as JSON strings:
apiVersion: v1
data:
  Configuration.json: {{ toYaml .Values.config | indent 4 }}
kind: ConfigMap
metadata:
  name: service-cm
The corresponding default in the values file:
config: |-
  {
    "val1": "key1",
    "val2": "key2"
  }
We also have a deployment repo where the different systems are defined. There we override the values with JSON strings as well.
Since the usability of these JSON strings is not great, we want to move them to JSON files.
We use AKS and Azure Pipelines to deploy the service.
We create the chart with:
helm chart save Chart $(acr.name).azurecr.io/$(acr.repo.name):$(BUILD_VERSION)
and push it with:
helm chart push $(acr.name).azurecr.io/$(acr.repo.name):$(BUILD_VERSION)
and upgrade after pull and export in another job:
helm upgrade --install --set image-name --wait -f demo-values.yaml service service-chart
What we have already done is to set the JSON config in the upgrade command with --set-file:
helm upgrade --install --set image-name --wait -f demo-values.yaml --set-file config=demo-config.json service service-chart
This works, though, only for the values of the different systems, not for the default values. But we also want to move these out to files, and we do not want to do without them.
Therefore, at this point the first question: is there a way to inject the default values from a file as well, so that they end up in the saved chart?
We know that you can read files in the templates with the following syntax:
Configuration.json: |-
{{ .Files.Get "default-config.json" | indent 4 }}
But we can't override that. Another idea was to inject the path from the values:
Configuration.json: |-
{{ .Files.Get (printf "%s" .Values.config.filename) | indent 4 }}
But the path seems to be relative to the chart folder. So there is no path to the deployment repo.
We now have the following solution with conditional templates:
data:
{{ if .Values.config.overwrite }}
Configuration.json: {{ toYaml .Values.config.value | indent 4 }}
{{ else }}
Configuration.json: |-
{{ .Files.Get "default-config" | indent 4 }}
{{ end }}
In the deployment repo the value file then looks like this:
config:
  overwrite: true
  value: will_be_replaced_by_file_content
And demo-config.json is set with the upgrade command in the pipeline.
This works, but seems a bit fiddly to us. So the question: Do you know a better way?
In your very first setup, .Values.config is a string. The key: |- syntax creates a YAML block scalar that contains an indented text block that happens to be JSON. helm install --set-file also sets a value to a string, and .Files.Get returns a string.
All of these things being strings means you can simplify the logic around them. For example, consider the Helm default template function: if its parameter is an empty string, it is logically false, and so default falls back to its default value.
In your final layout you want to keep the default configuration in a separate file, but use it only if an override configuration isn't provided. So you can go with an approach where:
In values.yaml, config is an empty string. (null or just not defining it at all will also work for this setup.)
# config is a string containing JSON-format application configuration.
config: ''
As you already have it, .Files.Get "default-config.json" can be the fallback value.
Use the default function to check if .Values.config is non-empty, and if not, fall back to that default.
Use helm install --set-file config=demo-config.json to provide an alternate config at deploy time.
The updated ConfigMap could look like:
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ include "myapp.fullname" . }}
data:
  Configuration.json: |-
{{ .Values.config | default (.Files.Get "default-config.json") | indent 4 }}
(Since any form of .Values.config is a string, it's not a complex structure and you don't need to call toYaml on it.)
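At deploy time you then get both behaviours with the commands you already use (a sketch based on the chart, release, and file names from your pipeline): omitting --set-file makes the chart fall back to its bundled default-config.json, while passing it makes the per-system file win:

helm upgrade --install --wait -f demo-values.yaml service service-chart
helm upgrade --install --wait -f demo-values.yaml --set-file config=demo-config.json service service-chart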
I'm debugging a set of nested plays that puts hosts in a Rackspace load balancer.
- include: create_servers.yml
...
- include: add_to_load_balancers.yml
In the first play, I am using the rax module to create the servers. We register the variable rax and use the rax.success list within it to add those hosts to a group in create_servers.yml:
- name: create instances on Rackspace
  local_action:
    module: rax
    image: "{{ IMAGE }}"
    flavor: "{{ FLAVOR }}"
    wait: yes
    count: "{{ COUNT }}"
    ...
  register: rax

- name: some other play
  local_action:
    ...
  with_items: rax.success

- name: register rax.success as rax_servers for later use
  set_fact:
    rax_servers: rax.success
Using rax.success with with_items in the other play works. But later on, when I try to use rax_servers in add_to_load_balancers.yml:
- name: place new hosts in the load balancer
  rax_clb_nodes:
    address={{ item.rax_accessipv4 }}
    state=present
    ...
  with_items: rax_servers
I get an error that there is no rax_accessipv4 in item. It should be there, though, since this is how I use it in the previous play (and it works). So I print out rax_servers:
TASK: [debug var=rax_servers] *************************************************
ok: [127.0.0.1] => {
    "var": {
        "rax_servers": "rax.success"
    }
}
I'm obviously doing something wrong, but I can't figure out from the documentation whether the mistake is in storing or in referencing this variable. Both plays are run from and on localhost, so it should give me the same list, no?
Thanks for bearing with this newbie, any help is appreciated :)
It should be:
- name: register rax.success as rax_servers for later use
  set_fact:
    rax_servers: "{{ rax.success }}"
Without double braces in this case, 'rax.success' is just a string.
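With the fact stored as a real list, the later play can keep looping over it exactly as in your question; on newer Ansible versions the with_items expression should be wrapped in braces as well (a sketch reusing the parameters from your question):

- name: place new hosts in the load balancer
  rax_clb_nodes:
    address={{ item.rax_accessipv4 }}
    state=present
    ...
  with_items: "{{ rax_servers }}"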
Sometimes I need to test some Jinja2 templates that I use in my Ansible roles. What is the simplest way to do this?
For example, I have a template (test.j2):
{% if users is defined and users %}
{% for user in users %}{{ user }}
{% endfor %}
{% endif %}
and vars (in group_vars/all):
---
users:
- Mike
- Smith
- Klara
- Alex
At this time there are 4 different variants:
1_Online (using https://cryptic-cliffs-32040.herokuapp.com/), based on the jinja2-live-parser code.
2_Interactive (using Python and the jinja2 and PyYAML libraries)
>>> import yaml
>>> from jinja2 import Template
>>> template = Template("""
... {% if users is defined and users %}
... {% for user in users %}{{ user }}
... {% endfor %}
... {% endif %}
... """)
>>> values = yaml.safe_load("""
... ---
... users:
... - Mike
... - Smith
... - Klara
... - Alex
... """)
>>> print(template.render(values))
Mike
Smith
Klara
Alex
3_Ansible (using --check)
Create test playbook jinja2test.yml:
---
- hosts: 127.0.0.1
  tasks:
    - name: Test jinja2template
      template: src=test.j2 dest=test.conf
and run it:
ansible-playbook jinja2test.yml --check --diff --connection=local
sample output:
PLAY [127.0.0.1] **************************************************************
GATHERING FACTS ***************************************************************
ok: [127.0.0.1]
TASK: [Test jinja2template] ***************************************************
--- before: test.conf
+++ after: /Users/user/ansible/test.j2
@@ -0,0 +1,4 @@
+Mike
+Smith
+Klara
+Alex
changed: [127.0.0.1]
PLAY RECAP ********************************************************************
127.0.0.1 : ok=2 changed=1 unreachable=0 failed=0
4_Ansible (using -m template), thanks to @artburkart
Make a file called test.txt.j2
{% if users is defined and users %}
{% for user in users %}
{{ user }}
{% endfor %}
{% endif %}
Call ansible like so:
ansible all -i "localhost," -c local -m template -a "src=test.txt.j2 dest=./test.txt" --extra-vars='{"users": ["Mike", "Smith", "Klara", "Alex"]}'
It will output a file called test.txt in the current directory, which will contain the output of the evaluated test.txt.j2 template.
I understand this doesn't directly use a vars file, but I think it's the simplest way to test a template without using any external dependencies. Also, I believe there are some differences between what the jinja2 library provides and what ansible provides, so using ansible directly circumvents any discrepancies. When the JSON that is fed to --extra-vars satisfies your needs, you can convert it to YAML and be on your way.
If you have a jinja2 template called test.j2 and a vars file located at group_vars/all.yml, then you can test the template with the following command:
ansible all -i localhost, -c local -m template -a "src=test.j2 dest=./test.txt" --extra-vars=@group_vars/all.yml
It will output a file called test.txt in the current directory, which will contain the output of the evaluated test.j2 template.
I think this is the simplest way to test a template without using any external dependencies. Also, there are differences between what the jinja2 library provides and what ansible provides, so using ansible directly circumvents any discrepancies. It's also possible to test ad-hoc variables without making an additional vars file by using JSON:
ansible all -i "localhost," -c local -m template -a "src=test.j2 dest=./test.txt" --extra-vars='{"users": ["Mike", "Smith", "Klara", "Alex"]}'
You can use the debug module
tasks:
  - name: show templating results
    debug:
      msg: "{{ lookup('template', 'template-test.j2') }}"
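Wrapped in a minimal playbook, this renders the question's test.j2 locally without writing any file (a sketch; the vars_files entry is one way to pull in the group_vars/all values, adjust the path to your layout):

- hosts: localhost
  gather_facts: false
  vars_files:
    - group_vars/all
  tasks:
    - name: show templating results
      debug:
        msg: "{{ lookup('template', 'test.j2') }}"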
Disclaimer - I am the author of this, but I put together JinjaFx (https://github.com/cmason3/jinjafx).
This is a Python-based tool that allows you to pass Jinja2 templates with a YAML file for variables. I originally wrote it so it could take CSV-based data to generate group_vars and host_vars for our deployments, but it also allows easy testing of Jinja2 templates - there is an online version at https://jinjafx.io
I needed to verify that the template I had defined gave the right result for the server it was created for. (The template included the hostname as a variable and other per host defined variables.)
Neither of the above methods worked for me. The solution for me was to add
check_mode: yes
diff: yes
to the task executing the template command. This got me the difference between the generated file and the file actually on the server, without changing the remote file.
For me it actually worked better than looking at the whole generated file, since the changes were the interesting part anyway.
It needs to log in to the remote machine, so it is a somewhat limited use case.
Example of a complete task:
- name: diff server.properties
  check_mode: yes
  diff: yes
  ansible.builtin.template:
    src: "src.properties"
    dest: "/opt/kafka/config/server.properties"
I am trying to provision hosts on EC2, so I am working with Ansible Dynamic Inventory.
What I want to do is to set a serial number for each node.
For example: the "myid" configuration of Zookeeper.
Zookeeper requires a serial number named "myid" for each node: 1 for hostA, 2 for hostB, 3 for hostC, and so on.
Here is the part of my playbook that copies "myid" file to hosts.
- name: Set myid
  sudo: yes
  template: src=var/lib/zookeeper/myid.j2 dest=/var/lib/zookeeper/myid
And myid.j2 should be something like this below.
{{ serial_number }}
The question is: What should the variable "{{ serial_number }}" be like?
I found a nice clean way to do this using Ansible's with_indexed_items syntax:
tasks:
  - name: Set Zookeeper Id
    set_fact: zk_id={{ item.0 + 1 }}
    with_indexed_items: "{{ groups['tag_Name_MESOS_MASTER'] }}"
    when: item.1 == "{{ inventory_hostname }}"
The /etc/zookeeper/conf/myid template can then be set to:
{{ zk_id }}
This assumes you are using AWS dynamic inventory.
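Putting it together with the template task from the question, the flow would look roughly like this (a sketch; the group name depends on how your dynamic inventory tags the hosts, and myid.j2 contains just {{ zk_id }}):

- name: Set Zookeeper Id
  set_fact: zk_id={{ item.0 + 1 }}
  with_indexed_items: "{{ groups['tag_Name_MESOS_MASTER'] }}"
  when: item.1 == "{{ inventory_hostname }}"

- name: Set myid
  sudo: yes
  template: src=var/lib/zookeeper/myid.j2 dest=/var/lib/zookeeper/myid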
I solved this by assigning a number to each EC2 instance as a tag when creating them. I then refer to that tag when creating the myid file. Below are the tasks I used to create my EC2 instances with all non-important fields left out for brevity.
- name: Launch EC2 instance(s)
  with_sequence: count="{{ instance_count }}"
  ec2:
    instance_tags:
      number: "{{ item }}"
Then when installing ZooKeeper on these servers, I use the dynamic inventory to obtain all the servers tagged with zookeeper and use the number tag in the myid file.
- name: Render and copy myid file
  copy: >
    content={{ ec2_tag_number }}
    dest=/etc/zookeeper/conf/myid
Note: when creating the EC2 instances, I needed to use with_sequence rather than the count field in the ec2 module. Otherwise I wouldn't have an index to capture for the tag.
If you want the playbook to handle being able to add nodes to the current cluster, you can query for the number of EC2 instances tagged with zookeeper and add that to the iteration index. This is fine to have normally because current_instance_count will be 0 if there aren't any.
- name: Determine how many instances currently exist
  shell: echo "{{ groups['tag_zookeeper'] | length }}"
  register: current_instance_count

- name: Launch EC2 instance(s)
  with_sequence: count="{{ instance_count }}"
  ec2:
    instance_tags:
      number: "{{ item|int + current_instance_count.stdout|int }}"
There is no need to use a template; you can directly assign the content of the myid file in the playbook. Assume you have collected all EC2 instances into the group "ec2hosts".
- hosts: ec2hosts
  user: ubuntu
  sudo: True
  tasks:
    - name: Set Zookeeper Id
      copy: >
        content={{ item.0 + 1 }}
        dest=/var/lib/zookeeper/myid
      with_indexed_items: "{{ groups['ec2hosts'] }}"
      when: item.1 == "{{ inventory_hostname }}"