SaltStack - Unable to check if file exists on minion - jinja2

I am trying to check if a particular file with some extension exists on a CentOS host using SaltStack.
create:
  cmd.run:
    - name: touch /tmp/filex

{% set output = salt['cmd.run']("ls /tmp/filex") %}

output:
  cmd.run:
    - name: "echo {{ output }}"
Even though the file exists, I am getting the error below:
ls: cannot access /tmp/filex: No such file or directory

I see that you already accepted an answer that talks about Jinja being rendered first, which is true. But I wanted to add that you don't have to use cmd.run to check for the file; there is a state built into Salt for this.
file.exists will check for a file's or directory's existence in a stateful way.
One of the things about Salt is that you should be looking for ways to get away from cmd.run whenever you can.
create:
  file.managed:
    - name: /tmp/filex

check_file:
  file.exists:
    - name: /tmp/filex
    - require:
      - file: create

In SaltStack, Jinja is evaluated before the YAML. The file creation (cmd.run) is executed after the Jinja rendering, so your Jinja variable is empty because the file hasn't been created yet.
See https://docs.saltproject.io/en/latest/topics/jinja/index.html
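To make the ordering concrete, here is a sketch of what the minion effectively ends up executing after rendering: the ls from the {% set %} line ran at render time, before the create state, so its error text is what gets baked into the echo.

create:
  cmd.run:
    - name: touch /tmp/filex

output:
  cmd.run:
    # the render-time ls already failed, so its error message is inlined here
    - name: "echo ls: cannot access /tmp/filex: No such file or directory"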

Jinja statements such as your set output line are evaluated when the sls file is rendered, before any of the states in it are executed. It's not seeing the file because the file hasn't been created yet.
Moving the check to the state definition should fix it:
output:
  cmd.run:
    - name: ls /tmp/filex
    # if your underlying intent is to ensure something runs only
    # once the file exists, you can enforce that here
    - require:
      - cmd: create
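If the only goal is to gate a command on the file's existence at run time (rather than ordering it after create), an onlyif check is another option. A minimal sketch, assuming the echo is just a stand-in for your real command:

output:
  cmd.run:
    - name: "echo /tmp/filex is present"
    # onlyif is evaluated on the minion when the state runs, not at render time
    - onlyif: test -e /tmp/filex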

Related

How to use different config files for different environments in airflow?

I'm using SparkKubernetesOperator, which has a template_field called application_file. Normally, when this field is given a file name, Airflow reads that file and templates the Jinja variables in it (just like the script field in the BashOperator).
So this works, and the file information is shown in the Rendered Template tab with the Jinja variables replaced with the correct values.
start_streaming = SparkKubernetesOperator(
    task_id='start_streaming',
    namespace='spark',
    application_file='user_profiles_streaming_dev.yaml',
    ...
    dag=dag,
)
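For context, a minimal sketch of what such an application_file might contain; the fields and names below are illustrative, not taken from the original setup. Airflow renders the Jinja placeholders inside it because application_file is a template field:

# user_profiles_streaming_dev.yaml (illustrative sketch only)
apiVersion: "sparkoperator.k8s.io/v1beta2"
kind: SparkApplication
metadata:
  name: user-profiles-streaming
  namespace: spark
spec:
  type: Python
  mode: cluster
  image: "example-registry/user-profiles:latest"   # assumption: your real image
  mainApplicationFile: "local:///app/stream.py"    # assumption: your entry point
  arguments:
    - "{{ ds }}"   # Jinja variables like this get rendered by Airflow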
I want to use different files in the application_file field for different environments, so I used a Jinja template in the field. But when I change application_file to user_profiles_streaming_{{ var.value.env }}.yaml, the rendered output is just user_profiles_streaming_dev.yaml and not the file contents.
I know that recursive Jinja variable replacement is not possible in Airflow, but I was wondering if there is any workaround for having different template files.
What I have tried -
I tried using a different operator and doing an XCom push to read the file contents and send them to SparkKubernetesOperator. While this was good for reading different files based on the environment, it did not solve the issue of having the Jinja variables replaced.
I also tried making a custom operator which inherits from SparkKubernetesOperator and has a template field application_file_name, thinking that Jinja replacement would take place twice, but this didn't work either.
I made an env file which holds the environment details (dev/prod). Then I added this code to the start of my DAG file:
ENV = None
with open('/home/airflow/env', 'r') as env_file:
    value = env_file.read()
    if value == None or value == "":
        raise Exception("ENV FILE NOT PRESENT")
    ENV = value
and then accessed the environment in the code like this
submit_job = SparkKubernetesOperator(
    task_id='submit_job',
    namespace="spark",
    application_file=f"adhoc_{ENV}.yaml",
    do_xcom_push=True,
    dag=dag,
)
This way I could have separate dev and prod files.

Helm: Overwrite configuration from json files, is there a better way?

We use Helm to deploy a microservice to different systems.
Among other things, we have a ConfigMap template and of course a values file with the default values in the repo of the service. Some of these values are JSON and so far stored as JSON strings:
apiVersion: v1
data:
  Configuration.json: {{ toYaml .Values.config | indent 4 }}
kind: ConfigMap
metadata:
  name: service-cm

And in the values file:

config: |-
  {
    "val1": "key1",
    "val2": "key2"
  }
We also have a deployment repo where the different systems are defined. There we override the values with JSON strings as well.
Since the usability of these JSON strings is not great, we want to move them into JSON files.
We use AKS and Azure Pipelines to deploy the service.
We create the chart with:
helm chart save Chart $(acr.name).azurecr.io/$(acr.repo.name):$(BUILD_VERSION)
and push it with:
helm chart push $(acr.name).azurecr.io/$(acr.repo.name):$(BUILD_VERSION)
and upgrade after pull and export in another job:
helm upgrade --install --set image-name --wait -f demo-values.yaml service service-chart
What we have already done is set the JSON config in the upgrade command with --set-file:
helm upgrade --install --set image-name --wait -f demo-values.yaml --set-file config=demo-config.json service service-chart
This works, though, only for the values of the different systems, not for the default values. But we also want to move those out to files, and we don't want to drop them.
Hence, at this point, the first question: is there a way to inject the default values from a file as well, so that they end up in the saved chart?
We know that you can read files in the templates with the following syntax:
Configuration.json: |-
{{ .Files.Get "default-config.json" | indent 4 }}
But we can't override that. Another idea was to inject the path from the values:
Configuration.json: |-
{{ .Files.Get (printf "%s" .Values.config.filename) | indent 4 }}
But the path seems to be relative to the chart folder. So there is no path to the deployment repo.
We now have the following solution with conditional templates:
data:
{{ if .Values.config.overwrite }}
  Configuration.json: {{ toYaml .Values.config.value | indent 4 }}
{{ else }}
  Configuration.json: |-
{{ .Files.Get "default-config" | indent 4 }}
{{ end }}
In the deployment repo the value file then looks like this:
config:
  overwrite: true
  value: will_be_replaced_by_file_content
And demo-config.json is set with the upgrade command in the pipeline.
This works, but seems a bit fiddly to us. So the question: Do you know a better way?
In your very first setup, .Values.config is a string. The key: |- syntax creates a YAML block scalar that contains an indented text block that happens to be JSON. helm install --set-file also sets a value to a string, and .Files.Get returns a string.
All of these things being strings means you can simplify the logic around them. For example, consider the Helm default template function: if its parameter is an empty string, it is logically false, and so default falls back to its default value.
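A tiny illustration of that behaviour (not taken from the chart, just template fragments rendered by Helm):

with-value: {{ "provided" | default "fallback" | quote }}   # renders "provided"
empty:      {{ "" | default "fallback" | quote }}           # renders "fallback"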
In your final layout you want to keep the default configuration in a separate file, but use it only if an override configuration isn't provided. So you can go with an approach where:
In values.yaml, config is an empty string. (null or just not defining it at all will also work for this setup.)
# config is a string containing JSON-format application configuration.
config: ''
As you already have it, .Files.Get "default-config.json" can be the fallback value.
Use the default function to check if .Values.config is non-empty, and if not, fall back to that default.
Use helm install --set-file config=demo-config.json to provide an alternate config at deploy time.
The updated ConfigMap could look like:
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ include "myapp.fullname" . }}
data:
  Configuration.json: |-
{{ .Values.config | default (.Files.Get "default-config.json") | indent 4 }}
(Since any form of .Values.config is a string, it's not a complex structure and you don't need to call toYaml on it.)

Can a GitHub Action know its version that was specified after the @ character?

In order to use a specific version of an action we use this syntax:
- name: Setup Python
  uses: actions/setup-python@v1

or:

- name: Setup Python
  uses: actions/setup-python@main
v1 is the name of a tag or a branch, so the code we want to use is already there.
However, I'd like to know if there is a way to get the version string inside the YAML file that defines an action.
When we create an action, we create a repository with action.yml.
Now, I'd like to retrieve this "v1" or "main" string from within the code in action.yml.
In action.yml, I'd like to have:
runs:
  using: 'docker'
  image: 'docker://some_url_to_image:${ the retrieved "v1" or "main" here}'
So that I could use an image that matches the version of the action.
Is this possible?

How to use pillar data as a variable in script deployed using saltstack

I am trying to use a value defined inside a pillar as a variable to be set up at deployment time, e.g.:
cat pillar/passwd.sls
server_gpg: 'gpgPassword'
I'd like to use the value of the "server_gpg" variable inside a script. I tried this, but it does not work:
/usr/bin/gpg --yes --passphrase '{{ pillar['gpgPassword'] }}' [...]
I am sure this is a noob question (which is what I am), but I could not find a working tip in the Salt / Jinja docs.
Thanks
OK, my bad. After some more research in SaltStack, I found out that I was just missing a
- template: jinja
definition in my state declaration.
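For anyone landing here, a minimal sketch of how that fits together (state and file names are assumptions, not from the original setup): the script is deployed with file.managed and template: jinja, and inside the script the pillar is referenced by its key, i.e. {{ pillar['server_gpg'] }}.

deploy_gpg_script:
  file.managed:
    - name: /usr/local/bin/decrypt.sh      # assumption: target path on the minion
    - source: salt://scripts/decrypt.sh    # assumption: script location in the fileserver
    - template: jinja                      # renders {{ pillar['server_gpg'] }} inside the script
    - mode: '0750'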

Ansible - Set content of file based on dictionary key

Goal: set the content of a file based on a dictionary value that I retrieve via a fact.
In other words, I have a dictionary like:
clients:
  client0:
    bar: my stuff
I learn the client name from a fact. I would like to use the client name to index into the dictionary, retrieve bar, and set it as the content of the file.
- name: Copy Client File Content
  copy:
    dest="/opt/myfile"
    content=clients[{{client_name}}].bar
    owner=root
    group=root
    mode=0600
  no_log: true
Expected content of the file is: my stuff
This works for me:
- hosts: localhost
  vars:
    clients:
      client0:
        bar: my stuff
    client_name: client0
  tasks:
    - name: copy client file content
      copy:
        dest: ./myfile.txt
        content: "{{clients[client_name].bar}}"
Your issue is that you appear to be trying to perform variable de-referencing outside of a Jinja2 variable substitution; putting everything inside {{...}} is what makes this work.
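Applying the same fix back to the original task might look like this sketch (it assumes client_name has already been set from your fact, and it keeps the original destination, ownership, and no_log settings):

- name: Copy Client File Content
  copy:
    dest: /opt/myfile
    # the whole lookup lives inside one Jinja2 expression
    content: "{{ clients[client_name].bar }}"
    owner: root
    group: root
    mode: '0600'
  no_log: true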