Templating multiple yum .repo files with Ansible template module - jinja2

I am attempting to template yum .repo files. We have multiple internal and external yum repos that the various hosts we manage may or may not use.
I want to be able to specify any number of repos and which .repo file they will be templated into. It makes sense to group repos in the same .repo file where they share a common purpose (e.g. all CentOS repos).
I have been unable to work out how to combine Ansible, YAML and Jinja2 to achieve this. I have tried the Ansible 'with_items', 'with_subelements' and 'with_dict' lookups without success.
YAML data
yum_repo_files:
  - centos:
      - name: base
        baseurl: http://mirror/base
      - name: updates
        baseurl: http://mirror/updates
  - epel:
      - name: epel
        baseurl: http://mirror/epel
Ansible task
- name: create .repo files
  template: src=yumrepos.j2 dest="/etc/yum.repos.d/{{ item }}.repo"
  with_items: yum_repo_files
j2 template
{% for repofile in yum_repo_files.X %} {# X being the relative index for the current repofile, e.g. centos = 0 and epel = 1 #}
{% for repo in repofile %}
name={{ repo.name }}
baseurl={{ repo.baseurl }}
{% endfor %}
{% endfor %}

When you use with_items with the template module, the special variable item is passed into your Jinja template. Here each item is one element of yum_repo_files, i.e. a one-element dict such as {'centos': [...]}.
Try this:
{% for repofile in item %}
{% for repo in repofile %}
name={{ repo.name }}
baseurl={{ repo.baseurl }}
{% endfor %}
{% endfor %}

user24364's answer solved half the issue; I then used some Python methods to pull the right data out of the lists and dicts.
Giving the full filename 'centos.repo' rather than 'centos' simplified the logic (and aligned better with the logic for other tasks):
yum_repo_files:
  - centos.repo:
      - name: base
        baseurl: http://mirror/base
      - name: updates
        baseurl: http://mirror/updates
  - epel.repo:
      - name: epel
        baseurl: http://mirror/epel
The .iterkeys() and .next() methods are used on item to get the repo filename out of each dict:
- name: create .repo files
  template: src=yumrepos.j2 dest="/etc/yum.repos.d/{{ item.iterkeys().next() }}"
  with_items: yum_repo_files
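A note of caution: iterkeys() and next() exist only on Python 2 dicts, so this breaks once Ansible runs under Python 3. Assuming a reasonably recent Jinja2, the built-in first filter is a version-safe equivalent (my assumption, not part of the original answer):
- name: create .repo files
  template: src=yumrepos.j2 dest="/etc/yum.repos.d/{{ item.keys() | first }}"
  with_items: "{{ yum_repo_files }}"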
The .itervalues() method is used to get the list of dicts containing all the keys/values for each given repo:
{% for repofile in item.itervalues() %}
{% for repo in repofile %}
[{{repo.repo}}]
name={{ repo.name }}
baseurl={{ repo.baseurl }}
{% endfor %}
{% endfor %}
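Assuming each repo entry also carries a repo key for the section header (e.g. repo: base; the sample data above omits it), centos.repo would render roughly as:
[base]
name=base
baseurl=http://mirror/base

[updates]
name=updates
baseurl=http://mirror/updates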
I also added some other tasks to clean up unmanaged files, etc. Once I've sanitised the code, I'll post it to Ansible Galaxy, as nobody else seems to have shared such a role.

Your files would be named as *.repo.j2; then you can use with_fileglob:
- name: create x template
  template: src={{ item }} dest=/tmp/{{ item | basename | regex_replace('.j2','') }}
  with_fileglob:
    - files/*.j2
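One caveat: the dot in regex_replace('.j2','') is a regex wildcard, so the pattern would also match names like 'axj2'. Anchoring it is slightly safer:
dest=/tmp/{{ item | basename | regex_replace('\.j2$','') }}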
Reference:
https://serverfault.com/questions/578544/deploying-a-folder-of-template-files-using-ansible

Related

DBT using macros within external schema yml file

We define an external Redshift Spectrum table using a YAML file, like the one below.
- name: visitor
  description: 'visitor data'
  external:
    location: "s3://{{ env_var('ENV') }}-temp-raw-s3/{{ s3_prefix({{ env_var('ENV') }}) }}/Visitor"
    file_format: parquet
    partitions:
      - name: year
        data_type: VARCHAR(10)
        vals: # macro w/ keyword args to generate list of values
          macro: dbt.dates_in_range
          args:
            start_date_str: '{{ var("START_DT") }}'
            end_date_str: '{{ var("START_DT") }}'
            in_fmt: '%Y-%m-%d'
            out_fmt: '%Y'
I would like to change the prefix of the S3 bucket based on the environment variable.
Here is the macro I have tried for changing the prefix. I am calling that macro in the yml file above in line 4 (location).
{% macro s3_prefix(env_var) %}
{% if env_var == "prod" %}
prefix_string = "prefix_prod"
{% else %}
prefix_string = "prefix_non_prod"
{% endif %}
{{ return(prefix_string) }}
{% endmacro %}
I am getting some compilation errors. Is there an easier way to achieve this? Any help would be really appreciated.
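For what it's worth, two things stand out in the snippet above; a hedged sketch of a fix follows. First, Jinja delimiters do not nest, so {{ s3_prefix({{ env_var('ENV') }}) }} is invalid inside the location string; the inner call should be written plainly as {{ s3_prefix(env_var('ENV')) }}. Second, prefix_string = "..." is literal text output inside a macro, not an assignment; using dbt's return() directly in each branch, the macro could simply be:
{% macro s3_prefix(env) %}
  {% if env == "prod" %}
    {{ return("prefix_prod") }}
  {% else %}
    {{ return("prefix_non_prod") }}
  {% endif %}
{% endmacro %}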

saltstack jinja for loop in parallel

hope to find you well and healthy :)
I have one jinja loop which is working, unfortunately not the way I would like :|
Short story:
Using salt-run for orchestration, minion targeting is accomplished by passing the pre-defined nodegroup as a pillar
{% set minions = salt.saltutil.runner('cache.mine', tgt=nodegroup, tgt_type='nodegroup').keys() %}
{% for minion_id in minions %}
patch-n1-{{ minion_id }}:
  salt.state:
    - tgt: {{ minion_id }}
    - sls:
      - patching.patch-n1
    - pillar:
        minion_id: {{ minion_id }}

reboot_minion-{{ minion_id }}:
  salt.function:
    - name: cmd.run_bg
    - arg:
      - 'salt-call system.reboot 1'
    - tgt: {{ minion_id }}
{% endfor %}
The problem is that with this loop both tasks are executed minion by minion. In my case, this is not efficient ...
If I remove the loop, both states are still applied, but that doesn't help much either.
The main goal is to apply patch-n1-{{ minion_id }} and reboot_minion-{{ minion_id }} for each minion in the nodegroup independently of the others.
Or, said in a different way, I need a for loop that works simultaneously for all minions in it.
Do you have any ideas about that?
Thanks!
When we target minions by globbing or by nodegroups, the defined states are applied to them in parallel. So one way to achieve this is by moving the "reboot minion" functionality into the state file patch-n1.sls itself.
Example /srv/salt/patch-n1.sls file:
# Some tasks to perform patching, just using 'include' for example
# from /srv/salt/patching/os_pkg.sls
include:
  - patching.os_pkg

reboot-after-package-update:
  module.run:
    - name: system.reboot
And in the orchestrate file /srv/salt/orch/patch_all.sls:
patch-group1:
  salt.state:
    - tgt: group1
    - tgt_type: nodegroup
    - sls:
      - patch-n1

patch-group2:
  salt.state:
    - tgt: group2
    - tgt_type: nodegroup
    - sls:
      - patch-n2
When we run the orchestration, each minion will run patching and reboot in parallel.
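For reference, assuming the file lives at /srv/salt/orch/patch_all.sls, the orchestration is kicked off from the master with:
salt-run state.orchestrate orch.patch_all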
system.reboot didn't help in my case because my minions are mixed (Windows, Linux):
reboot-after-package-update:
  module.run:
    - name: system.reboot
I was able to achieve my goal by:
# Some tasks to perform patching, just using 'include' for example
# from /srv/salt/patching/uptodate.sls
include:
  - patching.uptodate

reboot-after-package-update:
  cmd.run:
    - name: shutdown -r -t 60
{% set stage = pillar['stage'] %}
{% set nodegroup = salt['pillar.get']('nodegroup', 'PILLAR nodegroup NOT FOUND!') %}
{% set minions = salt.saltutil.runner('cache.mine', tgt=nodegroup, tgt_type='nodegroup').keys() %}

wait_for_reboot-{{ stage }}:
  salt.wait_for_event:
    - name: salt/minion/*/start
    - id_list: {% for minion_id in minions %}
      - {{ minion_id }}{% endfor %}
    - timeout: 6000
    - require:
      - cmd: reboot-after-package-update

including grain data when querying pillar in saltstack managed file

I have a state using file.managed, which generates a config file via a jinja for loop from a key in pillar.
My pillar looks like this:
configuration:
  server01:
    key1: value1
    key2: value2
  server02:
    key03: value03
    key04: value04
and the managed file:
{% set kv = pillar['configuration']['server01'] %}
{% for key, value in kv.iteritems() %}
{{ key }}:{{ value }};
{% endfor %}
The way I differentiate between different servers right now in my state file is
config:
  file.managed:
    - name: /etc/config.conf
    - source: salt://files/{{ grains['id'] }}.conf.jinja
    - template: jinja
but this is less than ideal, since I have to create an almost identical file for every server.
Is there a way to dynamically replace server01 with the ID of the actual server, something like:
{% set kv = pillar['configuration']['{{ grains['id'] }}'] %}
The general goal is to limit the changes needed when adding a new server to the corresponding pillar file, so other suggestions are welcome too.
I think you should use the pillar info in your state file. Your state file would look like below:
{% if grains['id'] in pillar['configuration'] %}
{% set nodeinfo = pillar['configuration'][grains['id']] %}
config:
  file.managed:
    - name: /etc/config.conf
    - source: salt://conf.jinja
    - template: jinja
    - defaults:
        nodeinfo: {{ nodeinfo }}
{% endif %}
then, conf.jinja:
{% for key, value in nodeinfo.iteritems() -%}
{{ key }}:{{ value }};
{% endfor -%}
I hope that will solve your problem, thanks.
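As a side note, iteritems() is Python 2 only; on Python 3 based Salt releases use items(), which works on both. And if you would rather skip the defaults plumbing entirely, conf.jinja can look the data up itself via pillar.get's colon-separated path syntax; a sketch based on the pillar layout from the question:
{% set kv = salt['pillar.get']('configuration:' ~ grains['id'], {}) %}
{% for key, value in kv.items() -%}
{{ key }}:{{ value }};
{% endfor -%}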

Google Deployment manager: MANIFEST_EXPANSION_USER_ERROR multiline variables

Trying to use Google Deployment Manager with YAML and Jinja and a multi-line variable, such as:
startup_script_passed_as_variable: |
  line 1
  line 2
  line 3
And later:
{% if 'startup_script_passed_as_variable' in properties %}
- key: startup-script
  value: {{ properties['startup_script_passed_as_variable'] }}
{% endif %}
Gives MANIFEST_EXPANSION_USER_ERROR:
ERROR: (gcloud.deployment-manager.deployments.create) Error in
Operation operation-1432566282260-52e8eed22aa20-e6892512-baf7134:
MANIFEST_EXPANSION_USER_ERROR
Manifest expansion encountered the following errors: while scanning a simple key in "" could not find expected ':' in ""
Tried (and failed):
{% if 'startup_script' in properties %}
- key: startup-script
  value: {{ startup_script_passed_as_variable }}
{% endif %}
also
{% if 'startup_script' in properties %}
- key: startup-script
  value: |
    {{ startup_script_passed_as_variable }}
{% endif %}
and
{% if 'startup_script' in properties %}
- key: startup-script
  value: |
    {{ startup_script_passed_as_variable|indent(12) }}
{% endif %}
The problem is the combination of YAML and Jinja: Jinja substitutes the variable verbatim but does not indent it the way YAML requires when it is passed as a variable.
Related: https://github.com/saltstack/salt/issues/5480
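To make the failure concrete: only the first line of the substituted value ends up behind value:, while the remaining lines land at column 0, which is exactly what the YAML scanner is complaining about:
- key: startup-script
  value: line 1
line 2
line 3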
Solution: Pass the multi-line variable as an array
startup_script_passed_as_variable:
  - "line 1"
  - "line 2"
  - "line 3"
The quoting is important if your value starts with # (which a startup script on GCE does, i.e. #!/bin/bash), since it would otherwise be treated as a YAML comment.
{% if 'startup_script' in properties %}
- key: startup-script
  value:
{% for line in properties['startup_script'] %}
    {{ line }}
{% endfor %}
{% endif %}
Putting it here since there isn't much Q&A material for Google Deployment Manager.
There is no clean way to do this in Jinja. As you yourself have pointed out, because YAML is a whitespace-sensitive language, it is difficult to effectively template around.
One possible hack is to split the string property into a list and then iterate over the list.
For example, providing the property:
startup-script: |
  #!/bin/bash
  python -m SimpleHTTPServer 8080
you can use it in your Jinja template:
{% if 'startup-script' in properties %}
- key: startup-script
  value: |
    {% for line in properties['startup-script'].split('\n') %}
    {{ line }}
    {% endfor %}
{% endif %}
Here is also a full working example of this.
This method will work, but cases like this are generally when people start considering a Python template. Because you are working with an object model in Python, you do not have to deal with indentation problems. For example:
'metadata': {
    'items': [{
        'key': 'startup-script',
        'value': context.properties['startup_script']
    }]
}
An example of the python template can be found in the Metadata From File example.
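For orientation, a Python template like that is typically wired up from the top-level config along these lines (the file and resource names here are made up for illustration):
imports:
  - path: vm_template.py

resources:
  - name: my-vm
    type: vm_template.py
    properties:
      startup_script: |
        #!/bin/bash
        python -m SimpleHTTPServer 8080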

Comparing variables in template to build JSON - Ansible

Starting off with Ansible, I am trying to use a REST API to interact with an external application. Maybe I am missing something simple here.
I am trying to compare every host in my inventory file with the POD name specified in the variable file used by the role that invokes the jinja2 template.
My inventory file looks like this:
[all]
'POD-9'
'POD-10'
Variable file:
pods:
  params:
    - name: POD-9
    - name: POD-10
- name: POD-10
{% for pod in pods.params %}
{% if '{{ inventory_hostname }}' == '{{ pod.name }}' %}
<generate JSON template here>
{% endif %}
{% endfor %}
The if statement, however, does not take effect. I want the template to be generated only when the inventory_hostname is equal to the pod name in the variable file.
The current JSON file includes both:
{
  "pod": {
    "name": "POD-9"
  }
  "pod": {
    "name": "POD-10"
  }
}
In Jinja2 the double curly braces act as a print statement. If you access variables inside tags, don't put the braces around them:
{% for pod in pods.params %}
{% if inventory_hostname == pod.name %}
<generate JSON template here>
{% endif %}
{% endfor %}
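For completeness, a task along these lines would render the template per host (the src/dest paths are made up for illustration):
- name: generate pod JSON
  template:
    src: pod.json.j2
    dest: /tmp/pod.json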
Found the problem:
{% if pod.name == inventory_hostname %}