Salt: Using a Salt state in a Jinja if-else block - jinja2

I'm trying to deploy a Django project with SaltStack.
I wrote an SLS file that installs packages (Django, nginx, etc.) and runs some commands, and I want to run manage.py collectstatic for nginx.
But when I re-apply this formula, it fails with an error that the /static directory already exists.
So I modified the SLS file:
collect_static_files:
{% if not salt['file.exists'][BASEDIR,'myproject/static']|join('') %}
  cmd.run:
    - name: '~~~ collectstatic;'
    - cwd: /path/to/venv/bin
{% else %}
  cmd.run:
    - name: echo "Static directory exists."
{% endif %}
But when I run salt '*' state.apply myformula, it says:

minion:
    Data failed to compile:
----------
    Rendering SLS 'base:myproj' failed: Jinja variable 'salt.utils.templates.AliasedLoader object' has no attribute 'file.exists'

How can I solve this problem? Thank you.

I was a fool...
{% if not salt['file.directory_exists'](BASEDIR + 'myproject/static') %}
worked well.
The problem was that I used a state module instead of an execution module.
Now I understand that state modules describe a "state", while execution modules act like functions.
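For what it's worth, the Jinja branch can be avoided entirely: Salt's cmd.run state accepts guard arguments such as creates (and unless), which skip the command when a path already exists. A minimal sketch, assuming the paths from the question (the exact collectstatic command was elided there, so the one below is an assumption):

```yaml
# Sketch: let Salt skip the command when the static dir already exists
collect_static_files:
  cmd.run:
    - name: ./python manage.py collectstatic --noinput  # assumed command
    - cwd: /path/to/venv/bin
    - creates: /path/to/myproject/static
```

With creates, re-applying the formula is a no-op once the directory exists, which is exactly the idempotence the Jinja if-else was trying to emulate.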

Related

I am trying to update the /etc/hosts file on my load balancer server using Ansible, so I created a Jinja2 template and added my vars from Ansible facts.

Here is the error I keep running into:
{"changed": false, "msg": "AnsibleUndefinedVariable: 'dict object' has no attribute 'default_ipv4'"}
I think you have an issue in your variables: the fact key is ansible_default_ipv4, not default_ipv4. Try
{{ hostvars[host]['ansible_default_ipv4']['address'] }}
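For reference, a minimal sketch of such a hosts template (the group name webservers is an assumption, not from the question):

```jinja
# templates/hosts.j2 — one line per host, built from Ansible facts
{% for host in groups['webservers'] %}
{{ hostvars[host]['ansible_default_ipv4']['address'] }} {{ host }}
{% endfor %}
```

Each host in the group contributes one "IP hostname" line, pulled from that host's gathered facts via hostvars.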

Set build number for conda metadata inside Azure pipeline

I am using a bash script to build the conda package in an Azure pipeline: conda build . --output-folder $(Build.ArtifactStagingDirectory). Here is the issue: conda build uses the build number in the meta.yaml file (see here).
One solution I could think of: first, copy all files to Build.ArtifactStagingDirectory, add the Azure pipeline's Build.BuildNumber into meta.yaml, and build the package into Build.ArtifactStagingDirectory (within a sub-folder).
I am trying to avoid writing a shell script to manipulate the YAML file in the Azure pipeline, because it might be error prone. Does anyone know a better way? It would be nice to read a more elegant solution in the answers or comments.
I don't know much about Azure pipelines. But in general, if you want to control the build number without changing the contents of meta.yaml, you can use a jinja template variable within meta.yaml.
Choose a variable name, e.g. CUSTOM_BUILD_NUMBER and use it in meta.yaml:
package:
  name: foo
  version: 0.1
build:
  number: {{ CUSTOM_BUILD_NUMBER }}
To define that variable, you have two options:
Use an environment variable:
export CUSTOM_BUILD_NUMBER=123
conda build foo-recipe
OR
Define the variable in conda_build_config.yaml (docs), as follows
echo "CUSTOM_BUILD_NUMBER:" >> foo-recipe/conda_build_config.yaml
echo " - 123" >> foo-recipe/conda_build_config.yaml
conda build foo-recipe
If you want, you can add an if statement so that the recipe still works even if CUSTOM_BUILD_NUMBER is not defined (using a default build number instead).
package:
  name: foo
  version: 0.1
build:
{% if CUSTOM_BUILD_NUMBER is defined %}
  number: {{ CUSTOM_BUILD_NUMBER }}
{% else %}
  number: 0
{% endif %}
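Tying this back to Azure Pipelines: the pipeline's Build.BuildNumber can be forwarded as that environment variable in a bash step. A sketch (the step layout is an assumption, not from the question):

```yaml
steps:
  - bash: |
      # Azure expands $(Build.BuildNumber) before the script runs
      export CUSTOM_BUILD_NUMBER="$(Build.BuildNumber)"
      conda build . --output-folder "$(Build.ArtifactStagingDirectory)"
    displayName: Build conda package
```

This keeps meta.yaml untouched in the repository while each pipeline run stamps its own build number into the package.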

Shopware 6 theme isn't overriding Storefront?

I am trying to override the Storefront using my plugin files, and it doesn't seem to be working. I am using Shopware 6.
I am trying to follow the example they give for setting it up, overriding the logo.html.twig in the header.
My file directory is:
Installation root > custom > plugins > MY PLUGIN NAME > src > Resources > views > shopfront > layout > header > logo.html.twig
Inside my logo file I have :
{% sw_extends '@Storefront/storefront/layout/header/logo.html.twig' %}
{% block layout_header_logo_link %}
<h2>Hello world!</h2>
{% endblock %}
I have run bin/console cache:clear as well as ./psh.phar cache and bin/console theme:compile.
Thank you.
Your directory should be src/Resources/views/storefront/layout not shopfront.
After fixing the directory, reinstall the plugin and clear the cache with bin/console cache:warmup and bin/console http:cache:warm:up.
Turned out I had pulled the master install branch and not the latest release.

Read JSON data from the YAML file

I have a .gitlab-ci.yml file that I use to install a few plugins (craftcms/aws-s3, craftcms/redactor, etc.) in the publish stage. The file is provided below (partly):
# run the staging deploy; commands may be different based on the project
deploy-staging:
  stage: publish
  variables:
    DOCKER_HOST: 127.0.0.1:2375
  # ...............
  # ...............
    # TODO: temporary fix to the docker/composer issue
    - docker-compose -p "ci-$CI_PROJECT_ID" --project-directory $CI_PROJECT_DIR -f build/docker-compose.staging.yml exec -T craft composer --working-dir=/data/craft require craftcms/aws-s3
    - docker-compose -p "ci-$CI_PROJECT_ID" --project-directory $CI_PROJECT_DIR -f build/docker-compose.staging.yml exec -T craft composer --working-dir=/data/craft require craftcms/redactor
I have a JSON file, .butler.json, that holds the data for the plugins. It is provided below:
{
  "customer_number": "007",
  "project_number": "999",
  "site_name": "Welance",
  "local_url": "localhost",
  "db_driver": "mysql",
  "composer_require": [
    "craftcms/redactor",
    "craftcms/aws-s3",
    "nystudio107/craft-typogrify:1.1.17"
  ],
  "local_plugins": [
    "welance/zeltinger",
    "ansmann/ansport"
  ]
}
How do I take the plugin names from "composer_require" and "local_plugins" inside the .butler.json file and create a loop in the .gitlab-ci.yml file to install the plugins?
You can't create a loop in .gitlab-ci.yml, since YAML is not a programming language; it only describes data. You could use a tool like jq to query for your values (jq '.composer_require' .butler.json) inside a script, but you cannot set variables from there (there is a feature request for it).
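To illustrate the jq approach, here is a minimal, self-contained sketch. The sample data is inlined (in practice you would read the repository's .butler.json), and the echoed command stands in for the real docker-compose/composer invocation:

```shell
# Write a sample with the .butler.json structure from the question
cat > /tmp/butler-sample.json <<'EOF'
{
  "composer_require": ["craftcms/redactor", "craftcms/aws-s3"],
  "local_plugins": ["welance/zeltinger", "ansmann/ansport"]
}
EOF

# jq -r prints raw strings, one per line; loop over both plugin lists
for pkg in $(jq -r '.composer_require[], .local_plugins[]' /tmp/butler-sample.json); do
  echo "composer require $pkg"
done
```

This works inside a job's script: section, where each iteration would run the actual install command instead of echo.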
You could use a templating engine like Jinja (which is often used with YAML, e.g. by Ansible and SaltStack) to generate your .gitlab-ci.yml from a template. There is a command-line tool, j2cli, which takes variables as JSON input; you could use it like this:
j2 gitlab-ci.yml.j2 .butler.json > .gitlab-ci.yml
You could then use Jinja expressions in gitlab-ci.yml.j2 to loop over your data and emit the corresponding YAML:
{% for item in composer_require %}
  # build your YAML, e.g.:
  - composer require {{ item }}
{% endfor %}
The drawback is that you need the processed .gitlab-ci.yml checked in to your repository. This can be done via a pre-commit hook (before each commit, regenerate .gitlab-ci.yml and, if it changed, commit it along with the other changes).

Including jinja from pillar from salt-stack

I wonder if it is possible to include Jinja in a pillar in SaltStack?
Thanks.
Yes, you can use jinja in your pillar sls files.
https://docs.saltstack.com/en/latest/topics/pillar/
Yes. Ensure that the file starts with the following on the first line.
#!jinja|yaml|gpg
This will ensure that all the pre-processors are executed, in this case jinja, yaml, and gpg decryption.
Consider this example
vi /srv/pillar/packages.sls
{% if grains['os'] == 'RedHat' %}
apache: httpd
git: git
{% elif grains['os'] == 'Debian' %}
apache: apache2
git: git-core
{% endif %}
company: Foo Industries
The above pillar sets two key/value pairs. If a minion is running RedHat, then the apache key is set to httpd and the git key is set to the value of git. If the minion is running Debian, those values are changed to apache2 and git-core respectively. All minions that have this pillar targeted at them via a top file will have the key of company with a value of Foo Industries.
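For completeness, minions only receive this pillar if a top file targets them. A minimal sketch (file names assumed to match the example above):

```yaml
# /srv/pillar/top.sls — target the packages pillar at all minions
base:
  '*':
    - packages
```

After editing pillar data, salt '*' saltutil.refresh_pillar pushes the updated values out to the minions.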
Consequently this data can be used from within modules, renderers, State SLS files, and more via the shared pillar dictionary:
apache:
  pkg.installed:
    - name: {{ pillar['apache'] }}
Source: https://docs.saltproject.io/en/latest/topics/pillar/