I have a Jinja template in Salt, and every time state.apply is run, Salt thinks the managed file has changed when it hasn't.
The issue is that a reconfigure command executes when the file "changes", and this reconfigure rewrites the file from single-line JSON to multi-line (pretty-printed) JSON. For example:
salt creates the file like:
{ "key1": { "sub-key1": "sub-value1", "sub-key2": "sub-value2"}, "key2": "value2" }
but when the reconfigure command executes, it changes the file to:
{
"key1": {
"sub-key1": "sub-value1",
"sub-key2": "sub-value2"
},
"key2": "value2"
}
Is there a way to have Salt create the file as formatted JSON to begin with?
This is what I have:
{{ gb_server.secrets_file }}:
file.managed:
- source: {{ gb_server.secrets_tmpl }}
- user: {{ gb_server.secure_user }}
- group: {{ gb_server.secure_user }}
- mode: 600
- template: jinja
The template is as follows:
{%- import_yaml 'gb-server/defaults.yaml' as defaultmap -%}
{%- set secrets = gb_server_config['secrets'] -%}
{{ secrets|tojson|replace('\\\\n','\\n') }}
The values are stored in pillar, in YAML format.
Ideally, the resulting JSON would have its keys sorted alphabetically, so that the content of the file wouldn't "change" and the reconfigure command wouldn't run every single time.
Is there a way to make this happen with Salt?
Thank you.
It looks like file.serialize will do what you want here
{{ gb_server.secrets_file }}:
file.serialize:
- user: {{ gb_server.secure_user }}
- group: {{ gb_server.secure_user }}
- mode: 600
- dataset: {{ gb_server.secrets }}
- serializer: json
- serializer_opts:
- indent: 2
- sort_keys: true
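With those serializer_opts passed through to the JSON serializer, the file is written pretty-printed with keys sorted alphabetically, so (assuming the reconfigure command also sorts keys and indents by two spaces) the example data above would be laid out on disk as:
{
  "key1": {
    "sub-key1": "sub-value1",
    "sub-key2": "sub-value2"
  },
  "key2": "value2"
}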
Attempting to make a decision in a template based on the last character of a variable (the third-level domain hostname), but the epiphany eludes me. I want to emit one config stanza if the value matches, and a different stanza otherwise.
I set a fact in play:
- name: Set third level domain name to a variable
set_fact:
my_3rd_levelname: "{{ ansible_nodename.split('.')[0] }}"
- name: Ascertain which server we're on
set_fact:
my_one_or_two: "{{ my_3rd_levelname[-1]|int }}"
...which appears to echo out with debug, except that the cast to an int doesn't seem to stick; see below.
TASK [role-test : Echo out my_one_or_two] *******************************************************************************************************************
ok: [w.x.y.42] => {
"my_one_or_two": "2"
}
Then in the template.j2...
{# If my_one_or_two is even list server1 first. If not, second. #}
{% if lookup('vars,',my_one_or_two) + my_one_or_two|int is 1 %}
[some config file stanza here]
{% else %}
[some other config file stanza instead]
I've poked and hoped until I can stand it no longer and am reaching out. I've tried just using the raw variable, e.g., {% if my_one_or_two|int == 1 %} along with many other attempts, but I'm stuck. I can't seem to overcome this error:
AnsibleError: template error while templating string: expected token 'name', got 'integer'. String: [the contents of my template]
Any input would be greatly appreciated at this juncture.
Thanks
Okay...leaving this here in case someone else doesn't realize you can use any Python method that the object supports. Here's what I did. Remember the server names end in 1 or 2 and it's a string.
Created a variable in /roles/[rolename]/vars...
my_simple_hostname: "{{ ansible_nodename.split('.')[0] }}"
Then used the 'endswith' method to evaluate it....
{% if my_simple_hostname.endswith('1') == true %}
[content if true]
{% else %}
[content when false]
{% endif %}
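Another option (my own sketch, not from the original answer): the error in the question comes from the `is 1` construct, since Jinja expects a test name after `is`, not an integer. A plain integer comparison should work once that is removed. Assuming the same my_simple_hostname fact, an even/odd check on the last digit could look like:
{# sketch: compare the last digit of the hostname numerically #}
{% if (my_simple_hostname[-1] | int) % 2 == 0 %}
[stanza when the name ends in an even digit, e.g. server2]
{% else %}
[stanza when it ends in an odd digit, e.g. server1]
{% endif %}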
I'm trying to get Ansible to convert an array of hashes into a mapping of key/value pairs, with each key being one of the values from a hash and each value being a different value from the same hash.
An example will help.
I want to convert :-
TASK [k8s_cluster : Cluster create | debug result of private ec2_vpc_subnet_facts] ***
ok: [localhost] => {
"result": {
"subnets": [
{
"availability_zone": "eu-west-1c",
"subnet_id": "subnet-cccccccc",
},
{
"availability_zone": "eu-west-1a",
"subnet_id": "subnet-aaaaaaaa",
},
{
"availability_zone": "eu-west-1b",
"subnet_id": "subnet-bbbbbbbb",
}
]
}
}
into
eu-west-1a: subnet-aaaaaaaa
eu-west-1b: subnet-bbbbbbbb
eu-west-1c: subnet-cccccccc
I've tried result.subnets | map('subnet.availability_zone': 'subnets.subnet_id') (which doesn't work at all) and json_query('subnets[*].subnet_id'), which simply picks out the subnet_id values and puts them into a list.
I think I could do this with Zip and Hash in Ruby, but I don't know how to make this work in Ansible, or more specifically in JMESPath.
I have generated the below list; I will add a newline to the generated list (thought to share this first).
---
- name: play
hosts: localhost
tasks:
- name: play
include_vars: vars.yml
- name: debug
debug:
msg: "{% for each in subnets %}{{ each.availability_zone }}:{{ each.subnet_id }}{% raw %},{% endraw %}{% endfor %}"
output --->
ok: [localhost] => {
"msg": "eu-west-1c:subnet-cccccccc,eu-west-1a:subnet-aaaaaaaa,eu-west-1b:subnet-bbbbbbbb,"
}
JMESPath does not allow using dynamic names in multi-select hashes. I have found an extension to JMESPath that allows this by using key references, but it is part of neither the plain JMESPath implementation nor Ansible.
To do this in plain Ansible, you will have to create a new variable and populate it with a loop. There might be other ways using other filters, but this is the solution I came up with:
- name: Create the expected hash
set_fact:
my_hash: >-
{{
my_hash
| default({})
| combine({ item.availability_zone: item.subnet_id })
}}
loop: "{{ subnets }}"
- name: Print result
debug:
var: my_hash
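As an aside (my own addition, not part of the original answer): if your Ansible version ships the items2dict filter (2.7 and later, if I recall correctly), the loop can be replaced by a one-liner:
- name: Create the expected hash with items2dict
  set_fact:
    my_hash: "{{ subnets | items2dict(key_name='availability_zone', value_name='subnet_id') }}"
Either way, my_hash should come out as the mapping shown in the question (eu-west-1a: subnet-aaaaaaaa, and so on).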
I've got the following and am stuck getting to the right answer. I have a dict that I want to template, with item.key in the file name and all the values in the template.
my_dict:
  name1:
    - { path: /x/y/z, action: all, filter: no }
    - { path: /a/b/c, action: some, filter: yes }
  name2:
    - { path: /z/y/x, action: nothing, filter: no }
    - { path: /c/b/a, action: all, filter: yes }
tasks:
- name: generate check config
template:
src: check.j2
dest: "{{ config_dir }}/{{ item.key }}-directories.json"
owner: Own
group: Wheel
mode: 0644
with_dict:
- "{{ my_dict }}"
when:
- my_dict is defined
become: true
My template looks like
{
"configs": [
{% for value in my_dict %}
{
"path": "{{ value.path }}",
"action": "{{ value.action }}",
{% if value.filter is defined %}
"filter": "{{ value.filter }}"
{% endif %}
}{% if not loop.last %},{% endif %}
{% endfor %}
]
}
I've tested so much that by now I can't see the forest for the trees.
Above should result in 2 files.
File name = name1-directories.json
Content:
{
"configs": [
{
"path": /x/y/z,
"action": all,
"filter": no
},
{
"path": /a/b/c,
"action": some,
"filter": yes
}
]
}
Thanks in advance.
Let me start with the following. I see some problems with your current solution.
Your template references the value of the array items with value.<key> when it should instead read item.value.<key>.
with_dict expects a dict, but you're passing an array containing a dict as the only element. In YAML, - denotes array elements. To use it correctly you just write: with_dict: "{{ my_dict }}"
Using the shorthand YAML syntax is discouraged in Ansible as it makes playbooks harder to read.
I would suggest you do the following:
There is a jinja2 Filter that just converts your dict to json:
{{ dict_variable | to_json }} # or
{{ dict_variable | to_nice_json }}
The second one makes it human-readable. What you're currently trying to do may work (I haven't looked into it thoroughly), but it's not pretty and it's error-prone.
To make it work with the Jinja2 filter, restructure your variables at the top the following way:
my_dict:
  name1:
    configs:
      - path: /x/y/z
        action: all
        filter: no
      - path: /a/b/c
        action: some
        filter: yes
  name2:
    configs: ...
When the vars are formatted like this, you can just use the copy module to print the configs to the files like this:
- name: Print the configs to the files
copy:
content: "{{ item.value | to_nice_json }}"
dest: "{{ config_dir }}/{{ item.key }}-directories.json"
with_dict: "{{ my_dict }}"
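For illustration (my own sketch, not part of the original answer): with the restructured vars above, to_nice_json would render name1-directories.json roughly like this; note that the YAML booleans no/yes come out as JSON false/true, and exact key order and indentation depend on the filter defaults:
{
    "configs": [
        {
            "path": "/x/y/z",
            "action": "all",
            "filter": false
        },
        {
            "path": "/a/b/c",
            "action": "some",
            "filter": true
        }
    ]
}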
I have a variable that is an array [{'foo':1},{'bar':2}].
I want to combine it with the following hash: {'baz':3} using set_fact (?) such that my output registered variable is:
[{'foo':1, 'baz':3},{'bar':2, 'baz':3}]
I've looked into the combine filter, but it only works when I already have a hash to work with. In my case I have an array.
Is there a way to achieve that using ansible?
Actually, I have found a way: map can be used with any filter, and the filter's arguments have to be passed after a comma.
- name: test
set_fact:
_test: "{{ [{'foo':1}, {'bar':2}] | map('combine', {'baz':3}) | list }}"
produces:
ok: [localhost] => {
"_test": [
{
"baz": 3,
"foo": 1
},
{
"bar": 2,
"baz": 3
}
]
}
Jinja2 doesn't have list comprehensions, but I think you can use set and a for loop to achieve it:
{% set outputarray = [] -%}
{% for d in inputarray -%}
{% set r = d|combine({'baz': 3}) -%}
{{ outputarray.append(r) or '' }}
{%- endfor %}
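A small follow-up of mine: the loop above only fills outputarray; to actually emit the result at the end of the template, add something like:
{{ outputarray }}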
In my playbook, a JSON file is included using the include_vars module. The content of the JSON file is as given below:
{
"Component1": {
"parameter1" : "value1",
"parameter2" : "value2"
},
"Component2": {
"parameter1" : "{{ NET_SEG_VLAN }}",
"parameter2": "value2"
}
}
After the JSON file is included in the playbook, I am using the uri module to send an HTTP request as given below:
- name: Configure Component2 variables using REST API
uri:
url: "http://0.0.0.0:5000/vse/api/v1.0/config/working/Component2/configvars/"
method: POST
return_content: yes
HEADER_x-auth-token: "{{ login_resp.json.token }}"
HEADER_Content-Type: "application/json"
body: "{{ Component2 }}"
body_format: json
As can be seen, the body of the HTTP request is sent with the JSON data Component2. However, Jinja2 tries to substitute {{ NET_SEG_VLAN }} from the JSON file and throws an undefined-variable error. The intention is not to substitute anything inside the JSON file using Jinja2, but to send the body as-is in the HTTP request.
How to prevent the Jinja2 substitution for the variables included from the JSON file?
You should be able to escape the variable even with {{'{{NET_SEG_VLAN}}'}} to tell Jinja not to template anything inside that block.
You should be able to escape the variable with {% raw %} and {% endraw %} to tell Jinja not to template anything inside that block.
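As an illustration of that suggestion (my sketch, using the file from the question), the JSON file would carry the raw markers around the placeholder:
{
  "Component1": {
    "parameter1" : "value1",
    "parameter2" : "value2"
  },
  "Component2": {
    "parameter1" : "{% raw %}{{ NET_SEG_VLAN }}{% endraw %}",
    "parameter2": "value2"
  }
}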
!unsafe
From documentation at https://docs.ansible.com/ansible/2.10/user_guide/playbooks_advanced_syntax.html#unsafe-or-raw-strings:
When handling values returned by lookup plugins, Ansible uses a data type called unsafe to block templating. Marking data as unsafe prevents malicious users from abusing Jinja2 templates to execute arbitrary code on target machines. The Ansible implementation ensures that unsafe values are never templated. It is more comprehensive than escaping Jinja2 with {% raw %} ... {% endraw %} tags.
You can use the same unsafe data type in variables you define, to prevent templating errors and information disclosure. You can mark values supplied by vars_prompts as unsafe. You can also use unsafe in playbooks. The most common use cases include passwords that allow special characters like { or %, and JSON arguments that look like templates but should not be templated.
I am using it all the time, like this:
# Load JSON content, as a raw string with !unsafe
- tags: ["always"]
set_fact:
dashboard_content: !unsafe "{{ lookup('file', './dash.json') | to_json }}"
# Build dictionary via template
- tags: ["always"]
set_fact:
cc: "{{ lookup('template', './templates/cm_dashboard.yaml.j2') | from_yaml }}"
## cm_dashboard.yaml.j2 content:
hello: {{ dashboard_content }}
# Now, "cc" is a dict variable, with "hello" field protected!