Ansible - Set content of file based on dictionary key - jinja2

Goal: set the content of a file based on a dictionary value that I retrieve via a fact.
In other words, I have a dictionary like:

clients:
  client0:
    bar: my stuff

I learn the client name from a fact, and I would like to use that name to index into the dictionary, retrieve bar, and set it as the content of the file.
- name: Copy Client File Content
  copy:
    dest="/opt/myfile"
    content=clients[{{client_name}}].bar
    owner=root
    group=root
    mode=0600
  no_log: true
The expected content of the file is: my stuff

This works for me:

- hosts: localhost
  vars:
    clients:
      client0:
        bar: my stuff
    client_name: client0
  tasks:
    - name: copy client file content
      copy:
        dest: ./myfile.txt
        content: "{{ clients[client_name].bar }}"
Your issue is that you appear to be trying to perform variable de-referencing outside of a Jinja2 variable substitution; putting everything inside {{...}} is what makes this work.
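For completeness: if client_name really comes from a fact rather than a play variable, the same substitution works. A minimal sketch, assuming a custom local fact (the name ansible_local.client.name here is hypothetical):

- name: copy client file content
  copy:
    dest: /opt/myfile
    content: "{{ clients[ansible_local.client.name].bar }}"
    owner: root
    group: root
    mode: "0600"
  no_log: true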

Inject Key Storage Password into Option's ValuesUrl

I'm trying to request a raw file from a Gitlab repository as a values JSON file for my job option. The only way so far that I managed to do it is by writing my secret token as plain text in the request URL:
https://mycompanygitlab.com/api/v4/projects/XXXX/repository/files/path%2Fto%2Fmy%2Ffile.json/raw?ref=main&private_token=MyV3ry53cr3Tt0k3n
I've tried using option cascading; I created a Secure password input option called gitlab_token, which points to a Key Storage password, and tried every possible notation (with or without .value, quoted or unquoted) in the valuesUrl field of the second option instead of the plain token, but I keep receiving an error message pointing to an invalid character at the position of the dollar sign:
I've redacted sensitive info and edited the error print accordingly
I reproduced your issue. It works using a text option; you can use the value in this way: ${option.mytoken.value}.
- defaultTab: nodes
  description: ''
  executionEnabled: true
  id: e4f114d5-b3af-44a5-936f-81d984797481
  loglevel: INFO
  name: ResultData
  nodeFilterEditable: false
  options:
  - name: mytoken
    value: deda66698444
  - name: apiendpoint
    valuesUrl: https://mocki.io/v1/xxxxxxxx-xxxx-xxxx-xxxx-${option.mytoken.value}
  plugins:
    ExecutionLifecycle: null
  scheduleEnabled: true
  sequence:
    commands:
    - exec: echo ${option.mytoken}
    - exec: echo ${option.apiendpoint}
    keepgoing: false
    strategy: node-first
  uuid: e4f114d5-b3af-44a5-936f-81d984797481
Another workaround (if you don't want to use a plain text option) could be to pass the secure option to an inline script and manage the logic from there.
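For example, here is a minimal sketch of that inline-script workaround. Rundeck exposes option values to script steps as environment variables of the form RD_OPTION_<OPTIONNAME>, so a secure option named gitlab_token (a hypothetical name) could be used like this:

#!/bin/sh
# Fetch the raw file from GitLab using the secure option's value,
# injected by Rundeck as the RD_OPTION_GITLAB_TOKEN environment variable.
curl --fail --silent \
  "https://mycompanygitlab.com/api/v4/projects/XXXX/repository/files/path%2Fto%2Fmy%2Ffile.json/raw?ref=main&private_token=${RD_OPTION_GITLAB_TOKEN}"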
Please open a new issue here.

Helm: Overwrite configuration from json files, is there a better way?

We use helm to deploy a microservice on different systems.
Among other things, we have a ConfigMap template and, of course, a values file with the default values in the repo of the service. Some of these values are JSON and have so far been stored as JSON strings:
apiVersion: v1
data:
  Configuration.json: {{ toYaml .Values.config | indent 4 }}
kind: ConfigMap
metadata:
  name: service-cm

with the default value in the values file:

config: |-
  {
    "val1": "key1",
    "val2": "key2"
  }
We also have a deployment repo where the different systems are defined. There we override the values with JSON strings as well.
Since the usability of these JSON strings is not great, we want to move them into JSON files.
We use AKS and Azure Pipelines to deploy the service.
We create the chart with:
helm chart save Chart $(acr.name).azurecr.io/$(acr.repo.name):$(BUILD_VERSION)
and push it with:
helm chart push $(acr.name).azurecr.io/$(acr.repo.name):$(BUILD_VERSION)
and upgrade after pull and export in another job:
helm upgrade --install --set image-name --wait -f demo-values.yaml service service-chart
What we have already done is to set the json config in the upgrade command with --set-file:
helm upgrade --install --set image-name --wait -f demo-values.yaml --set-file config=demo-config.json service service-chart
This works, though, only for the per-system values, not for the default values. But we want to move those out into files as well, and we don't want to give them up either.
Hence the first question: is there a way to inject the default values from a file, so that they end up in the saved chart?
We know that you can read files in the templates with the following syntax:
Configuration.json: |-
{{ .Files.Get "default-config.json" | indent 4 }}
But we can't override that. Another idea was to inject the path from the values:
Configuration.json: |-
{{ .Files.Get (printf "%s" .Values.config.filename) | indent 4 }}
But the path seems to be relative to the chart folder. So there is no path to the deployment repo.
We now have the following solution with conditional templates:
data:
{{ if .Values.config.overwrite }}
Configuration.json: {{ toYaml .Values.config.value | indent 4 }}
{{ else }}
Configuration.json: |-
{{ .Files.Get "default-config" | indent 4 }}
{{ end }}
In the deployment repo the value file then looks like this:
config:
  overwrite: true
  value: will_be_replaced_by_file_content
And demo-config.json is set with the upgrade command in the pipeline.
This works, but seems a bit fiddly to us. So the question: Do you know a better way?
In your very first setup, .Values.config is a string. The key: |- syntax creates a YAML block scalar that contains an indented text block that happens to be JSON. helm install --set-file also sets a value to a string, and .Files.Get returns a string.
All of these things being strings means you can simplify the logic around them. For example, consider the Helm default template function: if its parameter is an empty string, it is logically false, and so default falls back to its default value.
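A one-line illustration of that behavior (just the function's semantics, not part of the chart):

{{ "" | default "fallback" }}   {{- /* renders "fallback" */}}
{{ "x" | default "fallback" }}  {{- /* renders "x" */}}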
In your final layout you want to keep the default configuration in a separate file, but use it only if an override configuration isn't provided. So you can go with an approach where:
In values.yaml, config is an empty string. (null or just not defining it at all will also work for this setup.)
# config is a string containing JSON-format application configuration.
config: ''
As you already have it, .Files.Get "default-config.json" can be the fallback value.
Use the default function to check if .Values.config is non-empty, and if not, fall back to that default.
Use helm install --set-file config=demo-config.json to provide an alternate config at deploy time.
The updated ConfigMap could look like:
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ include "myapp.fullname" . }}
data:
Configuration.json:
{{ .Values.config | default (.Files.Get "default-config.json") | indent 4 }}
(Since any form of .Values.config is a string, it's not a complex structure and you don't need to call toYaml on it.)
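As a quick sanity check (a sketch, assuming the chart layout above), you can render the chart locally and confirm which configuration wins:

# renders the ConfigMap with the built-in default-config.json fallback
helm template service ./Chart

# renders it with the per-system override instead
helm template service ./Chart --set-file config=demo-config.json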

SaltStack - Unable to check if file exists on minion

I am trying to check whether a particular file with some extension exists on a CentOS host using SaltStack.
create:
  cmd.run:
    - name: touch /tmp/filex

{% set output = salt['cmd.run']("ls /tmp/filex") %}

output:
  cmd.run:
    - name: "echo {{ output }}"
Even if the file exists, I am getting the error as below:
ls: cannot access /tmp/filex: No such file or directory
I see that you already accepted an answer that talks about Jinja being rendered first, which is true. But I wanted to add that you don't have to use cmd.run to check for the file; there is a state built into Salt for this.
file.exists will check for a file or directory's existence in a stateful way.
One of the things about Salt is that you should be looking for ways to get away from cmd.run when you can.
create:
  file.managed:
    - name: /tmp/filex

check_file:
  file.exists:
    - name: /tmp/filex
    - require:
      - file: create
In SaltStack, Jinja is evaluated before the YAML. The file creation (cmd.run) is executed after the Jinja rendering, so your Jinja variable is empty because the file isn't created yet.
See https://docs.saltproject.io/en/latest/topics/jinja/index.html
Jinja statements such as your set output line are evaluated when the sls file is rendered, before any of the states in it are executed. It's not seeing the file because the file hasn't been created yet.
Moving the check to the state definition should fix it:
output:
  cmd.run:
    - name: ls /tmp/filex
    # if your underlying intent is to ensure something runs only
    # once the file exists, you can enforce that here
    - require:
      - cmd: create
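If you genuinely need a render-time check (for example, to branch on a file that already existed before the run started), a sketch using the file.file_exists execution module is cleaner than shelling out to ls:

{% if salt['file.file_exists']('/tmp/filex') %}
output:
  cmd.run:
    - name: "echo /tmp/filex already existed when the sls was rendered"
{% endif %}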

Can a GitHub Action know its version that was specified after the @ character?

In order to use a specific version of an action we use this syntax:

- name: Setup Python
  uses: actions/setup-python@v1

or:

- name: Setup Python
  uses: actions/setup-python@main

v1 is the name of a tag or a branch, so the code we want to use is already there.
However, I'd like to know if there is a way to get the version string inside the YAML file that defines an action.
When we create an action, we create a repository with action.yml.
Now, I'd like to retrieve this "v1" or "main" string from within the code in action.yml.
In action.yml, I'd like to have:
runs:
  using: 'docker'
  image: 'docker://some_url_to_image:${ the retrieved "v1" or "main" here }'
So that I could use an image that matches the version of the action.
Is this possible?
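One possible avenue, hedged since it isn't confirmed in this thread: at runtime GitHub sets the GITHUB_ACTION_REF environment variable for a step executing an action to the ref the action was pinned to (e.g. v1 or main). The image: field of action.yml does not support interpolating it, but an entrypoint script inside the container can read it:

#!/bin/sh
# entrypoint.sh of the Docker action: GITHUB_ACTION_REF holds the tag or
# branch the caller pinned the action to, e.g. "v1" or "main".
echo "This action was invoked as ref: ${GITHUB_ACTION_REF}"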

Concatenate two CSV files with different columns into one using pandas or bash within an Ansible playbook

this is my first post and I'm also very new to programming. Sorry if the terminology I use doesn't always make perfect sense; feel free to correct any nonsense that would make your eyes bleed.
I am actually a network engineer but with the current trend in my field, I need to start coding and automating but have postponed it until my company had a real use case. Well, that use case arrived and it is called ACI.
I've been learning how to automate many basic things with ansible and so far so good.
My current use case requires a playbook that will concatenate two CSV files with different columns into one single CSV file which will later be used to set variables in other plays.
We mainly work with CSV files containing system names, VLAN IDs and Leaf ports, something like this:
VPC_SYS_NAME, VLAN_ID, LEAF_PAIR
sys1, 3001, 101-102
sys2, 2500, 111-112
... , ..., ... ...
So far what I have tried is to take this data, read it with the read_csv module in ansible, and use the fields in each column as variables to loop in another play:
- name: read the csv file
read_csv:
path: list.csv
delimiter: ','
register: csv
- name: GET EPG UNI PATH FROM VLAN ID
aci_rest:
host: "{{ ansible_host }}"
username: "{{ username }}"
password: "{{ password }}"
validate_certs: False
method: get
path: api/class/fvAEPg.json?query-target-filter=eq(fvAEPg.name,"{{item.VLAN_ID}}")
loop: "{{ csv.list }}"
register: register_as_variable
Once this play has finished, it will register the output into another variable, in this case, called register_as_variable.
I then parse this output with json_query and set it into a new variable:
- set_fact:
    fact1: "{{ register_as_variable | json_query('results[].imdata[].fvAEPg.attributes.dn') }}"
Lastly, I copy this output into another CSV file.
With the Ansible shell module, using cat and awk, I remove any unwanted characters and turn the CSV file from a single row into a headerless column, getting something like this:
"uni/tn-tenant/ap-AP01/epg-3001",
"uni/tn-tenant/ap-AP01/epg-2500",
"uni/tn-tenant/ap-AP01/epg-...",
Up to this point, it works as I expect it (even if it is clearly not the cleanest way).
Where I am struggling at the moment is to find a way to merge/concatenate both the original CSV with the system name, VLAN ID etc and the newly created CSV with the output "uni/tn-tenant/ap-AP01/epg-...." into one unique "master" CSV file that would be used by other plays. The "master" CSV file should look something like this:
VPC_SYS_NAME, VLAN_ID, LEAF_PAIR, MO_PATH
sys1, 3001, 101-102, "uni/tn-tenant/ap-AP01/epg-3001",
sys2, 2500, 111-112, "uni/tn-tenant/ap-AP01/epg-2500",
... , ..., ... ..., "uni/tn-tenant/ap-AP01/epg-....",
Adding the MO_PATH header can be done with sed -i '1iMO_PATH' file.csv but merging the columns of both files in a given order is what I'm unable to accomplish.
So far I have tried pandas and cat, but without success.
I would be extremely thankful if anyone could help me just a bit or guide me in the right direction.
Thanks!
Hello and welcome to StackOverflow! A former network engineer is here to help :)
The easiest way to merge two files line by line (if you are sure that their order is correct) is to use the paste utility.
I have the following files:
1.csv
VPC_SYS_NAME,VLAN_ID,LEAF_PAIR
sys1,3001,101-102
sys2,2500,111-112
2.csv
"uni/tn-tenant/ap-AP01/epg-3001",
"uni/tn-tenant/ap-AP01/epg-2500",
Then I came up with the following.
Add a new header to the resulting file 3.csv:

echo "$(head -n 1 1.csv),MO_PATH" > 3.csv

Here we read the header of 1.csv, add the missing column, and redirect the output to 3.csv (overwriting it completely).
Merge the two files using the paste utility, while skipping the header of 1.csv:

tail -n+2 1.csv | paste -d"," - 2.csv >> 3.csv
Let's break this one down:
tail -n+2 1.csv - reads 1.csv starting from the 2nd line and writes it to stdout
paste -d"," - 2.csv - merges the two files line by line, using , as the delimiter, while taking the contents of the first file from stdin (represented as -). We used the pipe symbol | to pass the stdout of the tail command to the stdin of the paste command
>> - appends the content to the already existing 3.csv
The result:
VPC_SYS_NAME,VLAN_ID,LEAF_PAIR,MO_PATH
sys1,3001,101-102,"uni/tn-tenant/ap-AP01/epg-3001",
sys2,2500,111-112,"uni/tn-tenant/ap-AP01/epg-2500",
And for the pipes to work, don't forget to use the shell module instead of command, since this question is tagged ansible.
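Wrapped into a playbook task, it could look like this (a sketch reusing the file names from above; the chdir path is hypothetical):

- name: merge the two csv files into 3.csv
  shell: |
    echo "$(head -n 1 1.csv),MO_PATH" > 3.csv
    tail -n +2 1.csv | paste -d"," - 2.csv >> 3.csv
  args:
    chdir: /path/to/csv/files  # hypothetical directory holding 1.csv and 2.csv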