saltstack jinja for loop in parallel - jinja2

hope to find you well and healthy :)
I have one jinja loop which is working, unfortunately not the way I would like :|
Short story:
Using salt-run for orchestration, minion targeting is accomplished by passing the pre-defined nodegroup as a pillar:
{% set minions = salt.saltutil.runner('cache.mine', tgt=nodegroup, tgt_type='nodegroup').keys() %}

{% for minion_id in minions %}
patch-n1-{{ minion_id }}:
  salt.state:
    - tgt: {{ minion_id }}
    - sls:
      - patching.patch-n1
    - pillar:
        minion_id: {{ minion_id }}

reboot_minion-{{ minion_id }}:
  salt.function:
    - name: cmd.run_bg
    - arg:
      - 'salt-call system.reboot 1'
    - tgt: {{ minion_id }}
{% endfor %}
The problem is that with this loop both tasks are executed minion by minion. In my case, this is not efficient ...
If I remove the loop, both states are applied, but again that doesn't help much.
The main goal is to apply patch-n1-{{ minion_id }} and reboot_minion-{{ minion_id }} for each minion in the nodegroup independently of each other.
Or, said in a different way, I need a for loop which works simultaneously for all minions in it.
Do you have any ideas about that?
Thanks!

When we target minions by globbing or nodegroups, the defined states are applied to them in parallel. So one way to achieve this is by moving the "reboot minion" functionality into the state file patch-n1.sls itself.
Example /srv/salt/patch-n1.sls file:
# Some tasks to perform patching, just using 'include' for example
# from /srv/salt/patching/os_pkg.sls
include:
  - patching.os_pkg

reboot-after-package-update:
  module.run:
    - name: system.reboot
And in orchestrate /srv/salt/orch/patch_all.sls file:
patch-group1:
  salt.state:
    - tgt: group1
    - tgt_type: nodegroup
    - sls:
      - patch-n1

patch-group2:
  salt.state:
    - tgt: group2
    - tgt_type: nodegroup
    - sls:
      - patch-n2
When we run the orchestration, each minion will run patching and reboot in parallel.
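Depending on your Salt version there is another option: keep the original per-minion loop, but add the global parallel state argument (available since Salt 2017.7) so the orchestration runner does not block on each state. A minimal sketch under that assumption:

{% for minion_id in minions %}
patch-n1-{{ minion_id }}:
  salt.state:
    - tgt: {{ minion_id }}
    - sls:
      - patching.patch-n1
    - parallel: True  # assumes the global 'parallel' state argument (Salt 2017.7+)
{% endfor %}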

system.reboot didn't help in my case because my minions are mixed (Windows, Linux):
reboot-after-package-update:
  module.run:
    - name: system.reboot
I was able to achieve my goal by:
# Some tasks to perform patching, just using 'include' for example
# from /srv/salt/patching/uptodate.sls
include:
  - patching.uptodate

reboot-after-package-update:
  cmd.run:
    - name: shutdown -r -t 60

{% set stage = pillar['stage'] %}
{% set nodegroup = salt['pillar.get']('nodegroup', 'PILLAR nodegroup NOT FOUND!') %}
{% set minions = salt.saltutil.runner('cache.mine', tgt=nodegroup, tgt_type='nodegroup').keys() %}

wait_for_reboot-{{ stage }}:
  salt.wait_for_event:
    - name: salt/minion/*/start
    - id_list: {% for minion_id in minions %}
      - {{ minion_id }}{% endfor %}
    - timeout: 6000
    - require:
      - cmd: reboot-after-package-update
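For reference, a sketch of a complete orchestration file combining the two approaches above: the whole nodegroup is patched and rebooted in parallel via one salt.state, then wait_for_event blocks until every minion returns. It assumes the patch-plus-reboot state above is saved as patching.patch-n1 and is otherwise untested:

{% set stage = pillar['stage'] %}
{% set nodegroup = salt['pillar.get']('nodegroup', 'PILLAR nodegroup NOT FOUND!') %}
{% set minions = salt.saltutil.runner('cache.mine', tgt=nodegroup, tgt_type='nodegroup').keys() %}

# Patch and reboot every minion in the nodegroup at once
patch-{{ stage }}:
  salt.state:
    - tgt: {{ nodegroup }}
    - tgt_type: nodegroup
    - sls:
      - patching.patch-n1

# Block until each minion fires its start event after the reboot
wait_for_reboot-{{ stage }}:
  salt.wait_for_event:
    - name: salt/minion/*/start
    - id_list: {% for minion_id in minions %}
      - {{ minion_id }}{% endfor %}
    - timeout: 6000
    - require:
      - salt: patch-{{ stage }}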

Related

SaltStack check if grain exists in Jinja file

I'm using SaltStack to manage my infra. Machines are hosted in different DCs, so they also have slightly different network setups.
Currently, I'm running into the following issue:
Comment: Unable to manage file: Jinja variable 'dict object' has no attribute 'macaddress'; line 9
---
[...]
    ethernets:
      {{ grains['interface_context'] }}:
        dhcp4: {{ grains['dhcp4'] }}
        dhcp6: {{ grains['dhcp6'] }}
        addresses: [{{ grains['ipv4'] }}, "{{ grains['ipv6'] }}"]
        {% if grains['macaddress'] %} <======================
        match:
          macaddress: {{ grains['macaddress'] }}
        {% endif %}
        routes:
          - to: default
[...]
---
As the message indicates, the grain "macaddress" is missing, which I can confirm: it's not set for this minion. But what I do not understand is how I can simply check whether this variable/grain exists at all within a Jinja template.
I wouldn't expect this error to come up, as I actually wanted to catch it with the if statement.
Can somebody help?
Use get to return None instead of raising:
{% if grains.get('macaddress') is not none %}
Or if you want to treat "empty" values (like an empty string) the same as missing ones, rely on truthiness:
{% if grains.get('macaddress') %}
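Applied to the netplan template above, the guarded block would then read (a sketch):

        {% if grains.get('macaddress') %}
        match:
          macaddress: {{ grains['macaddress'] }}
        {% endif %}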

Accessing items in nested dicts and lists in Jinja2

Data:
primaries:
  ca:
    - 10.51.60.45
    - 10.51.60.46
  ny:
    - 10.52.60.45
    - 10.52.60.46
  az:
    - 10.53.60.45
    - 10.53.60.46
I want a flattened list of all IPs (or a for loop which can iterate through just the IPs), but the cities ca, ny and az could be anything.
Ansible's extract filter, which extracts the value of a key from a container, makes this very simple.
{{ primaries | map('extract', primaries) | flatten }}
You can also directly use the dictionary's values() method, which is slightly less flexible (the extract approach allows you to filter the keys beforehand, which you can't do here.)
{{ primaries.values() | flatten }}
You just need to iterate through the keys of the dictionary.
{% for region, ips in primaries.items() %}
{% for ip in ips %}
{{ ip }}
{% endfor %}
{% endfor %}
Read the Jinja docs on for.
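If you need the flattened list as a single value in plain Jinja, without any Ansible-specific filters, the sum filter with a list as its start value concatenates the per-key lists (a small sketch against the same data):

{{ primaries.values() | sum(start=[]) }}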
Using Ansible, you can get a flattened list of IPs using the json_query filter:
List of ip addresses:
{% for addr in primaries|json_query('*[][]') %}
- {{ addr }}
{% endfor %}
This results in:
List of ip addresses:
- 10.51.60.45
- 10.51.60.46
- 10.52.60.45
- 10.52.60.46
- 10.53.60.45
- 10.53.60.46
Here's a runnable example:
- hosts: localhost
  gather_facts: false
  vars:
    primaries:
      ca:
        - 10.51.60.45
        - 10.51.60.46
      ny:
        - 10.52.60.45
        - 10.52.60.46
      az:
        - 10.53.60.45
        - 10.53.60.46
  tasks:
    - copy:
        dest: addresses.txt
        content: |
          List of ip addresses:
          {% for addr in primaries|json_query('*[][]') %}
          - {{ addr }}
          {% endfor %}
The json_query filter uses the JMESPath query language and requires the jmespath Python library on the machine running Ansible.

Ansible: Get all the IP addresses of a group

Let's imagine an inventory file like this:
node-01 ansible_ssh_host=192.168.100.101
node-02 ansible_ssh_host=192.168.100.102
node-03 ansible_ssh_host=192.168.100.103
node-04 ansible_ssh_host=192.168.100.104
node-05 ansible_ssh_host=192.168.100.105
[mainnodes]
node-[01:04]
In my playbook I now want to create some variables containing the IP addresses of the group mainnodes:
vars:
  main_nodes_ips: "192.168.100.101,192.168.100.102,192.168.100.103,192.168.100.104"
  main_nodes_ips_with_port: "192.168.100.101:3000,192.168.100.102:3000,192.168.100.103:3000,192.168.100.104:3000"
This is what I got so far:
vars:
  main_nodes_ips: "{{groups['mainnodes']|join(',')}}"
  main_nodes_ips_with_port: "{{groups['mainnodes']|join(':3000,')}}"
but that would use the host names instead of the IP addresses.
Any ideas how this could be done?
Update:
Looking at the docs for a while, I think this would allow me to loop through all the IP addresses:
{% for host in groups['mainnodes'] %}
{{hostvars[host]['ansible_ssh_host']}}
{% endfor %}
But I just can't figure out how to create an array that holds all these IPs, so that I can use the |join() filter on them.
Update2:
I just thought I had figured it out... but it turns out that you cannot use the {% %} syntax in the playbook... or can I?
Well in the vars section it didn't. :/
vars:
  {% set main_nodes_ip_arr=[] %}
  {% for host in groups['mesos-slave'] %}
  {% if main_nodes_ip_arr.insert(loop.index,hostvars[host]['ansible_ssh_host']) %} {% endif %}
  {% endfor %}
  main_nodes_ips: "{{main_nodes_ip_arr|join(',')}}"
  main_nodes_ips_with_port: "{{main_nodes_ip_arr|join(':3000,')}}"
I found the magic of map with extract here.
main_nodes_ips: "{{ groups['mainnodes'] | map('extract', hostvars, ['ansible_host']) | join(',') }}"
main_nodes_ips_with_port: "{{ groups['mainnodes'] | map('extract', hostvars, ['ansible_host']) | join(':3000,') }}:3000"
An alternative (the idea comes from here):
main_nodes_ips: "{{ groups['mainnodes'] | map('extract', hostvars, ['ansible_eth0', 'ipv4', 'address']) | join(',') }}"
(Suppose the interface is eth0)
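Note that the question's inventory defines ansible_ssh_host rather than ansible_host, so against that inventory (with no facts gathered) the same extract pattern would be, untested:

main_nodes_ips: "{{ groups['mainnodes'] | map('extract', hostvars, ['ansible_ssh_host']) | join(',') }}"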
I came across this problem a while back and this is what I came up with (not optimal, but it works):
---
# playbook.yml
- hosts: localhost
  connection: local
  tasks:
    - name: create deploy template
      template:
        src: iplist.txt
        dest: /tmp/iplist.txt
    - include_vars: /tmp/iplist.txt
    - debug: var=ip
and the template file is
ip:
{% for h in groups['webservers'] %}
  - {{ hostvars[h].ansible_ssh_host }}
{% endfor %}
This does the trick for me, without relying on the interface name:
- main_nodes_ips: "{{ groups['mainnodes'] | map('extract', hostvars, ['ansible_default_ipv4', 'address']) | join(',') }}"
- name: Create List of nodes to be added into Cluster
  set_fact: nodelist={%for host in groups['mygroup']%}"{{hostvars[host].ansible_eth0.ipv4.address}}"{% if not loop.last %},{% endif %}{% endfor %}

- debug: msg=[{{nodelist}}]

- name: Set Cluster node list in config file
  lineinfile:
    path: "/etc/myconfig.cfg"
    line: "hosts: [{{ nodelist }}]"
As a result, you will have the following line in the config file:
hosts: ["192.168.126.38","192.168.126.39","192.168.126.40"]
I got it to work on my own now. I'm not too happy about the solution, but it will do:
main_nodes_ips: "{% set IP_ARR=[] %}{% for host in groups['mainnodes'] %}{% if IP_ARR.insert(loop.index,hostvars[host]['ansible_ssh_host']) %}{% endif %}{% endfor %}{{IP_ARR|join(',')}}"
main_nodes_ips_with_port: "{% set IP_ARR=[] %}{% for host in groups['mainnodes'] %}{% if IP_ARR.insert(loop.index,hostvars[host]['ansible_ssh_host']) %}{% endif %}{% endfor %}{{IP_ARR|join(':3000,')}}:3000"
I've done this by using ansible facts in a playbook.
This playbook takes the ansible_all_ipv4_addresses list and ansible_nodename (which is actually the fully qualified domain name), iterates through all hosts, and saves the data in the localpath_to_save_ips file on your localhost. You can change localpath_to_save_ips to an absolute path on your localhost.
---
- hosts: all
  become: yes
  gather_facts: yes
  tasks:
    - name: get ip
      local_action: shell echo {{ ansible_all_ipv4_addresses }} {{ ansible_nodename }} >> localpath_to_save_ips
I found the "only way" to access another group's IPs, when any of the following is true:
some members are not bootstrapped by ansible yet
using serial
group is not part of playbook
Is as follows:
{% set ips=[] %}{% for host in groups['othergroup'] %}{% if ips.append(lookup('dig', host)) %}{% endif %}{% endfor %}{{ ips }}
Requires dnspython on the machine running ansible, install via
sudo apt-get install python-dnspython
If anyone knows a better way given the conditions, I'd love to get rid of this abomination.
This is what I did in order not to rely on eth0 (thanks to ADV-IT's answer):
- name: gathering facts
  hosts: mainnodes
  gather_facts: true

- hosts: mainnodes
  tasks:
    - name: Create List of nodes
      set_fact: nodelist={%for host in groups['mainnodes']%}"{{hostvars[host]['ansible_env'].SSH_CONNECTION.split(' ')[2]}}"{% if not loop.last %},{% endif %}{% endfor %}
I ran into a similar problem getting the IP address of a node in another group.
Using a construct like:
the_ip: "{{ hostvars[groups['master'][0]]['ansible_default_ipv4'].address }}"
works only when running the group master, which was not part of my playbook (I was running on localhost).
I have overcome the problem by adding an extra play to the playbook, like:
- hosts: master
  gather_facts: yes
  become: no
  vars:
    - the_master_ip: "{{ hostvars[groups['master'][0]]['ansible_default_ipv4'].address }}"
  tasks:
    - debug: var=the_master_ip
    - set_fact: the_ip={{ the_master_ip }}
After which I can use the the_ip in the next play of the playbook.
This may also solve the abomination mentioned by @Petroldrake?
Just fetch the IPs using ansible_default_ipv4.address, redirect them to a local file, and then use them:
- name: gathering_facts
  hosts: hosts
  gather_facts: true
  tasks:
    - name: Redirect to the file
      shell: echo "{{ ansible_default_ipv4.address }}" >> ipss.txt
      delegate_to: localhost

Jinja variable is undefined

I'm trying to build the hosts file on my servers using Salt. On some of the servers the eth0 network interface has inet set, and on others it is the bond0 interface.
In the init.sls I have:
/etc/hosts:
  file.managed:
    - source: salt://configs/etc/hosts/hostsfile
    - user: root
    - group: root
    - mode: 644
    - template: jinja
    - context:
        {% for host, interface in salt['mine.get']('*', 'network.interfaces').items() %}
        {% if interface['bond0'].has_key('inet') %}
        ip: {{ salt['network.interfaces']()['bond0']['inet'][0]['address'] }}
        {% else %}
        ip: {{ salt['network.interfaces']()['eth0']['inet'][0]['address'] }}
        {% endif %}
        {% endfor %}
        hostname: {{ salt['network.get_hostname']() }}
And in my hosts file that is set above in the "- source", I have:
{{ ip }} {{ hostname }}
Then, when I run a state.highstate from the salt master, I get an error saying:
SaltRenderError: Jinja variable 'ip' is undefined; line 97
It seems that the salt function that retrieves the network interfaces does not work when it is inside the Jinja for loop (or I'm doing something wrong).
I'm saying that because the last line, which returns the hostname, works just fine.
What am I doing wrong here? I suspect that the if condition is not met and thus the "ip" variable never gets assigned a value.
Thank you,
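One hedged observation on the above: if any host returned by the mine lacks a bond0 key, interface['bond0'] itself raises; and if the loop body ends up emitting nothing, the context has no ip key at all, which matches the "ip is undefined" error. A sketch of a guarded version using membership tests instead of the Python-2-only has_key (untested):

    - context:
        {% set ifaces = salt['network.interfaces']() %}
        {% if 'bond0' in ifaces and 'inet' in ifaces['bond0'] %}
        ip: {{ ifaces['bond0']['inet'][0]['address'] }}
        {% else %}
        ip: {{ ifaces['eth0']['inet'][0]['address'] }}
        {% endif %}
        hostname: {{ salt['network.get_hostname']() }}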

Templating multiple yum .repo files with Ansible template module

I am attempting to template yum .repo files. We have multiple internal and external yum repos that the various hosts we manage may or may not use.
I want to be able to specify any number of repos and what .repo file they will be templated in. It makes sense to group these repos in the same .repo file where they have a common purpose (e.g. all centos repos)
I am unable to determine how to combine ansible, yaml and j2 to achieve this. I have tried using the ansible 'with_items', 'with_subelements' and 'with_dict' unsuccessfully.
YAML data
yum_repo_files:
  - centos:
      - name: base
        baseurl: http://mirror/base
      - name: updates
        baseurl: http://mirror/updates
  - epel:
      - name: epel
        baseurl: http://mirror/epel
Ansible task
- name: create .repo files
  template: src=yumrepos.j2 dest="/etc/yum.repos.d/{{ item }}.repo"
  with_items: yum_repo_files
j2 template
{% for repofile in yum_repo_files.X %} {# X being the relative index for the current repofile, e.g. centos = 0 and epel = 1 #}
{% for repo in repofile %}
name={{ repo.name }}
baseurl={{ repo.baseurl }}
{% endfor %}
{% endfor %}
When you use with_items with the template module, the special variable item will be passed into your Jinja template.
Try this:
{% for repofile in item %}
{% for repo in repofile %}
name={{ repo.name }}
baseurl={{ repo.baseurl }}
{% endfor %}
{% endfor %}
user24364's answer helped solve half the issue; I then used some Python methods to get the correct data out of the lists and dicts.
Giving the full filename 'centos.repo' rather than 'centos' simplified the logic (and aligned better with the logic for other tasks):
yum_repo_files:
  - centos.repo:
      - name: base
        baseurl: http://mirror/base
      - name: updates
        baseurl: http://mirror/updates
  - epel.repo:
      - name: epel
        baseurl: http://mirror/epel
The .iterkeys() and .next() methods are used on item to get the repo filenames out of the list of dicts:
- name: create .repo files
  template: src=yumrepos.j2 dest="/etc/yum.repos.d/{{ item.iterkeys().next() }}"
  with_items: yum_repo_files
The .itervalues() method is used to get the list of dicts containing all the keys/values for each given repo:
{% for repofile in item.itervalues() %}
{% for repo in repofile %}
[{{ repo.name }}]
name={{ repo.name }}
baseurl={{ repo.baseurl }}
{% endfor %}
{% endfor %}
I also added some other tasks to clean up unmanaged files, etc. Once I've sanitised the code, I'll post it to the ansible galaxy as nobody else seems to have shared such a role.
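A portability side note: .iterkeys(), .itervalues() and .next() are Python 2 dict methods and no longer exist under Python 3, so on a modern controller the same idea needs Jinja filters instead. An untested sketch:

- name: create .repo files
  template: src=yumrepos.j2 dest="/etc/yum.repos.d/{{ item.keys() | list | first }}"
  with_items: "{{ yum_repo_files }}"

And in the template:

{% for repofile in item.values() %}
{% for repo in repofile %}
[{{ repo.name }}]
name={{ repo.name }}
baseurl={{ repo.baseurl }}
{% endfor %}
{% endfor %}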
Your files would be named as: *.repo.j2; then, you can use fileglob:
- name: create x template
  template: src={{ item }} dest=/tmp/{{ item | basename | regex_replace('.j2','') }}
  with_fileglob:
    - files/*.j2
Reference:
https://serverfault.com/questions/578544/deploying-a-folder-of-template-files-using-ansible