SaltStack jinja run command on master before pushing to minion - jinja2

I have a simple task, which I'm trying to execute with Salt.
I want to dynamically create a motd file for all my servers, which requires rendering ASCII art at the top with the machine's hostname.
I would like this to render on the master and then be pushed to the minion.
So far I have this simple file: /srv/salt/content/all/etc/update-motd.d/05-hostname
#!/bin/bash
cat << "EOF"
{{ salt.cmd.shell('figlet TestServer') }}
EOF
This file is then used in: /srv/salt/motd/init.sls
/etc/update-motd.d/05-hostname:
  file.managed:
    - source: salt://content/all/etc/update-motd.d/05-hostname
    - template: jinja
If I try to run this, it will save the file with the output /bin/sh: 1: figlet: not found, which I guess is because the command is executed on the minion and not on the master.
sudo salt 'server' state.sls motd
I do realize that I could make the Salt master install figlet on all servers, but I think that would be a waste. The master already knows the hostname through grains, so it should be a simple task to generate this file on the master before pushing it.
Does anyone have any ideas for accomplishing this?

State Jinja is rendered on the minion itself, so there is no way file.managed will work that way.
In order to render something on the master, you need to use pillars, since pillar data is rendered on the master.
So you would need to add a pillar on the master which looks something like this:
{% set host = grains['fqdn'] %}
{% set command = 'figlet ' + host %}
{% set output = salt.cmd.shell(command) %}

motd:
  out: {{ output|yaml_encode }}
Then point /srv/salt/content/all/etc/update-motd.d/05-hostname at the pillar:
#!/bin/bash
cat << "EOF"
{{ pillar['motd']['out'] }}
EOF
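To make the pillar available to the minions, it also has to be assigned in the pillar top file. A minimal sketch, assuming the pillar above is saved as /srv/pillar/motd.sls (hypothetical path):

# /srv/pillar/top.sls (hypothetical layout)
base:
  '*':
    - motd

After that, run saltutil.refresh_pillar (or wait for the next pillar refresh) so the minions pick up the rendered value.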

Related

Helm: Overwrite configuration from json files, is there a better way?

We use helm to deploy a microservice on different systems.
Among other things, we have a ConfigMap template and, of course, a values file with the default values in the repo of the service. Some of these values are JSON and so far stored as JSON strings:
apiVersion: v1
data:
  Configuration.json: {{ toYaml .Values.config | indent 4 }}
kind: ConfigMap
metadata:
  name: service-cm

And the default value in the values file:

config: |-
  {
    "val1": "key1",
    "val2": "key2"
  }
We also have a deployment repo where the different systems are defined. There we override the values with JSON strings as well.
Since these JSON strings are not very usable, we want to move them into JSON files.
We use AKS and Azure Pipelines to deploy the service.
We create the chart with:
helm chart save Chart $(acr.name).azurecr.io/$(acr.repo.name):$(BUILD_VERSION)
and push it with:
helm chart push $(acr.name).azurecr.io/$(acr.repo.name):$(BUILD_VERSION)
and upgrade after pull and export in another job:
helm upgrade --install --set image-name --wait -f demo-values.yaml service service-chart
What we have already done is to set the json config in the upgrade command with --set-file:
helm upgrade --install --set image-name --wait -f demo-values.yaml --set-file config=demo-config.json service service-chart
This works, though only for the values of the different systems, not for the default values. But we also want to move those out to files, and we do not want to drop them.
Hence the first question at this point: is there a way to inject the default values from a file as well, so that they end up in the saved chart?
We know that you can read files in the templates with the following syntax:
Configuration.json: |-
{{ .Files.Get "default-config.json" | indent 4 }}
But we can't override that. Another idea was to inject the path from the values:
Configuration.json: |-
{{ .Files.Get (printf "%s" .Values.config.filename) | indent 4 }}
But the path seems to be relative to the chart folder. So there is no path to the deployment repo.
We now have the following solution with conditional templates:
data:
{{ if .Values.config.overwrite }}
  Configuration.json: {{ toYaml .Values.config.value | indent 4 }}
{{ else }}
  Configuration.json: |-
{{ .Files.Get "default-config" | indent 4 }}
{{ end }}
In the deployment repo the value file then looks like this:
config:
  overwrite: true
  value: will_be_replaced_by_file_content
And demo-config.json is set with the upgrade command in the pipeline.
This works, but seems a bit fiddly to us. So the question: Do you know a better way?
In your very first setup, .Values.config is a string. The key: |- syntax creates a YAML block scalar that contains an indented text block that happens to be JSON. helm install --set-file also sets a value to a string, and .Files.Get returns a string.
All of these things being strings means you can simplify the logic around them. For example, consider the Helm default template function: if its parameter is an empty string, it is logically false, and so default falls back to its default value.
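A quick illustration of how default treats empty strings (a sketch, not part of the original chart):

{{ "" | default "fallback" }}          {{/* renders "fallback" */}}
{{ "override" | default "fallback" }}  {{/* renders "override" */}}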
In your final layout you want to keep the default configuration in a separate file, but use it only if an override configuration isn't provided. So you can go with an approach where:
In values.yaml, config is an empty string. (null or just not defining it at all will also work for this setup.)
# config is a string containing JSON-format application configuration.
config: ''
As you already have it, .Files.Get "default-config.json" can be the fallback value.
Use the default function to check if .Values.config is non-empty, and if not, fall back to that default.
Use helm install --set-file config=demo-config.json to provide an alternate config at deploy time.
The updated ConfigMap could look like:
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ include "myapp.fullname" . }}
data:
  Configuration.json: |-
{{ .Values.config | default (.Files.Get "default-config.json") | indent 4 }}
(Since any form of .Values.config is a string, it's not a complex structure and you don't need to call toYaml on it.)
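With that in place, both deployment modes work from the same chart. A sketch, reusing the release and chart names from the question:

# use the default-config.json baked into the chart
helm upgrade --install --wait -f demo-values.yaml service service-chart

# override the configuration with a system-specific file at deploy time
helm upgrade --install --wait -f demo-values.yaml --set-file config=demo-config.json service service-chart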

SaltStack - Unable to check if file exists on minion

I am trying to check if a particular file with some extension exists on a CentOS host using SaltStack.
create:
  cmd.run:
    - name: touch /tmp/filex

{% set output = salt['cmd.run']("ls /tmp/filex") %}

output:
  cmd.run:
    - name: "echo {{ output }}"
Even if the file exists, I am getting the error as below:
ls: cannot access /tmp/filex: No such file or directory
I see that you already accepted an answer that talks about Jinja being rendered first, which is true. But I wanted to add that you don't have to use cmd.run to check the file; there is a state built into Salt for this.
file.exists will check for a file or directory's existence in a stateful way.
One of the things about Salt is that you should be looking for ways to get away from cmd.run when you can.
create:
  file.managed:
    - name: /tmp/filex

check_file:
  file.exists:
    - name: /tmp/filex
    - require:
      - file: create
In SaltStack, Jinja is evaluated before the YAML. The file creation (cmd.run) is executed after the Jinja has been rendered, so your Jinja variable is empty because the file isn't created yet.
See https://docs.saltproject.io/en/latest/topics/jinja/index.html
Jinja statements such as your set output line are evaluated when the sls file is rendered, before any of the states in it are executed. It's not seeing the file because the file hasn't been created yet.
Moving the check to the state definition should fix it:
output:
  cmd.run:
    - name: ls /tmp/filex
    # if your underlying intent is to ensure something runs only
    # once the file exists, you can enforce that here
    - require:
      - cmd: create
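If the goal is specifically to run a command only when the file exists at execution time, cmd.run also supports an onlyif check; a minimal sketch:

output:
  cmd.run:
    - name: echo "file is present"
    - onlyif: test -f /tmp/filex

Unlike the Jinja approach, onlyif is evaluated on the minion when the state runs, not when the sls file is rendered.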

Use minion targeting within Jinja templates

SaltStack allows for precise targeting of minions on the command-line, e.g.:
salt 'prefix*' grains.items
^^^^^^^
Is there some way to use the same kind of targeting within Jinja templates? I'd like to iterate over the hosts matched by a given matcher, e.g.:
{% for minion in salt.minions['prefix*'] %}
^^^^^^^
Unfortunately I didn't find anything in the official help on using Jinja.
From the additional comments you posted, it seems you are looking for the Salt mine functionality. Using Salt mine we can collect data from minions (to master), and use the data on (usually) other minions.
Salt Mine can be enabled in the minion configuration file or in the minion's pillar; the format is the same in both.
For the sake of example:
Consider 4 minions as below:
web1.example.local
db1.example.local
db2.example.local
db3.example.local
Now, in the pillar file (db.sls) for the dbX minions, I have defined two Salt modules as Mine functions, to get the hostname and additionally the IP addresses:
mine_functions:
  network.get_hostname: []
  network.ip_addrs: []
Note that the Mine might have to be updated (mine.update) and the pillar refreshed (saltutil.refresh_pillar) for this change to take effect.
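For example, assuming the db minions match 'db*':

sudo salt 'db*' saltutil.refresh_pillar
sudo salt 'db*' mine.update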
Now any of the above Mine functions can be referenced by the Mine module's get function. Minions can be targeted in the usual way - wildcard, grains, compound, etc.
Here is a sample example.conf.j2 template where I render each minion ID and hostname:
{% for minion_id, hostname in salt['mine.get']('db*', 'network.get_hostname') | dictsort() %}
Minion id {{ minion_id }} has hostname {{ hostname }}
{% endfor %}
This template can be rendered on web1.example.local with the state below:
create-example-conf:
  file.managed:
    - name: /tmp/example.conf
    - source: salt://example.conf.j2
    - mode: 0664
    - template: jinja
This will result in a /tmp/example.conf file with lines showing each minion ID and hostname.

use variable from YFM in post

How can I use dynamic data file?
Say I have several data files: file1.yml, file2.yml, file3.yml and in YFM I want to tell which data file to use:
---
datafilename: file1
---
{{ site.data.datafilename.person.name }}
^
How do I tell Liquid that this should resolve to file1?
Ideally would be to use post's file name. So that post1.md would use post1.yml data file and so on.
This should work from inside a post:
{{ site.data[page.slug].person.name }}
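(page.slug is the post's file name without the extension, so post1.md looks up _data/post1.yml, which matches the "ideally" case above.) If you prefer to keep the explicit datafilename variable from the front matter, Liquid's bracket notation accepts that too; a sketch, assuming the YFM shown in the question:

{{ site.data[page.datafilename].person.name }}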

How can I test jinja2 templates in ansible?

Sometimes I need to test some jinja2 templates that I use in my ansible roles. What is the simplest way for doing this?
For example, I have a template (test.j2):
{% if users is defined and users %}
{% for user in users %}{{ user }}
{% endfor %}
{% endif %}
and vars (in group_vars/all):
---
users:
- Mike
- Smith
- Klara
- Alex
At this time there are four different approaches:
1. Online (using https://cryptic-cliffs-32040.herokuapp.com/), based on the jinja2-live-parser code.
2. Interactive (using Python with the jinja2 and PyYAML libraries):
>>> import yaml
>>> from jinja2 import Template
>>> template = Template("""
... {% if users is defined and users %}
... {% for user in users %}{{ user }}
... {% endfor %}
... {% endif %}
... """)
>>> values = yaml.safe_load("""
... ---
... users:
...   - Mike
...   - Smith
...   - Klara
...   - Alex
... """)
>>> print(template.render(values))
Mike
Smith
Klara
Alex
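The same idea also works as a small non-interactive script; a sketch, assuming the template is saved as test.j2 and the vars file as group_vars/all in the current directory:

import yaml
from jinja2 import Environment, FileSystemLoader

# load the template from the current directory
env = Environment(loader=FileSystemLoader("."))
template = env.get_template("test.j2")

# read the vars file as plain YAML, the same content shown in the question
with open("group_vars/all") as f:
    values = yaml.safe_load(f)

print(template.render(values))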
3. Ansible (using --check):
Create test playbook jinja2test.yml:
---
- hosts: 127.0.0.1
  tasks:
    - name: Test jinja2template
      template: src=test.j2 dest=test.conf
and run it:
ansible-playbook jinja2test.yml --check --diff --connection=local
sample output:
PLAY [127.0.0.1] **************************************************************
GATHERING FACTS ***************************************************************
ok: [127.0.0.1]
TASK: [Test jinja2template] ***************************************************
--- before: test.conf
+++ after: /Users/user/ansible/test.j2
@@ -0,0 +1,4 @@
+Mike
+Smith
+Klara
+Alex
changed: [127.0.0.1]
PLAY RECAP ********************************************************************
127.0.0.1 : ok=2 changed=1 unreachable=0 failed=0
4. Ansible (using -m template), thanks to @artburkart:
Make a file called test.txt.j2
{% if users is defined and users %}
{% for user in users %}
{{ user }}
{% endfor %}
{% endif %}
Call ansible like so:
ansible all -i "localhost," -c local -m template -a "src=test.txt.j2 dest=./test.txt" --extra-vars='{"users": ["Mike", "Smith", "Klara", "Alex"]}'
It will output a file called test.txt in the current directory, which will contain the output of the evaluated test.txt.j2 template.
I understand this doesn't directly use a vars file, but I think it's the simplest way to test a template without using any external dependencies. Also, I believe there are some differences between what the jinja2 library provides and what ansible provides, so using ansible directly circumvents any discrepancies. When the JSON that is fed to --extra-vars satisfies your needs, you can convert it to YAML and be on your way.
If you have a jinja2 template called test.j2 and a vars file located at group_vars/all.yml, then you can test the template with the following command:
ansible all -i "localhost," -c local -m template -a "src=test.j2 dest=./test.txt" --extra-vars=@group_vars/all.yml
It will output a file called test.txt in the current directory, which will contain the output of the evaluated test.j2 template.
I think this is the simplest way to test a template without using any external dependencies. Also, there are differences between what the jinja2 library provides and what ansible provides, so using ansible directly circumvents any discrepancies. It's also possible to test ad-hoc variables without making an additional vars file by using JSON:
ansible all -i "localhost," -c local -m template -a "src=test.j2 dest=./test.txt" --extra-vars='{"users": ["Mike", "Smith", "Klara", "Alex"]}'
You can use the debug module:
tasks:
  - name: show templating results
    debug:
      msg: "{{ lookup('template', 'template-test.j2') }}"
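A quick way to run that, assuming the task is wrapped in a playbook named show-template.yml (hypothetical name) targeting localhost:

ansible-playbook -i localhost, -c local show-template.yml

Because the template lookup is evaluated on the controller, no remote host is needed.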
Disclaimer: I am the author of this, but I put together JinjaFx (https://github.com/cmason3/jinjafx).
This is a Python-based tool that allows you to pass Jinja2 templates together with a YAML file of variables. I originally wrote it to pass CSV-based data in to generate group_vars and host_vars for our deployments, but it also allows easy testing of Jinja2 templates; there is an online version at https://jinjafx.io
I needed to verify that the template I had defined gave the right result for the server it was created for. (The template included the hostname as a variable and other per-host defined variables.)
Neither of the above methods worked for me. The solution for me was to add
check_mode: yes
diff: yes
to the task executing the template command. This got me the difference between the generated file and the file actually on the server, without changing the remote file.
For me it actually worked better than looking at the whole generated file, since the changes were the interesting part anyway.
It does need to log in to the remote machine, so it's a more limited use case.
Example of a complete task:
- name: diff server.properties
  check_mode: yes
  diff: yes
  ansible.builtin.template:
    src: "src.properties"
    dest: "/opt/kafka/config/server.properties"
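Running the playbook then prints the diff against the live file without modifying it; for example, assuming the task lives in a playbook named kafka.yml (hypothetical name):

ansible-playbook kafka.yml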