I wonder if it is possible to include Jinja in a pillar in SaltStack? Thanks.
Yes, you can use Jinja in your pillar SLS files.
https://docs.saltstack.com/en/latest/topics/pillar/
Yes. Ensure that the file starts with the following on its first line:
#!jinja|yaml|gpg
This ensures that all the pre-processors are executed, in this case Jinja rendering, YAML parsing, and GPG decryption.
Consider this example:
vi /srv/pillar/packages.sls
{% if grains['os'] == 'RedHat' %}
apache: httpd
git: git
{% elif grains['os'] == 'Debian' %}
apache: apache2
git: git-core
{% endif %}
company: Foo Industries
The above pillar sets two key/value pairs. If a minion is running RedHat, then the apache key is set to httpd and the git key is set to git. If the minion is running Debian, those values are changed to apache2 and git-core respectively. All minions that have this pillar targeted to them via a top file will also have the key company with the value Foo Industries.
Consequently this data can be used from within modules, renderers, State SLS files, and more via the shared pillar dictionary:
apache:
  pkg.installed:
    - name: {{ pillar['apache'] }}
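To check that the pillar compiled as expected, you can refresh and inspect it from the master:
salt '*' saltutil.refresh_pillar
salt '*' pillar.items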
Source: https://docs.saltproject.io/en/latest/topics/pillar/
I have a workflow YAML file. At the top, above where the jobs are defined, I have a section to make these variables global across jobs:
env:
  DBT_REPO: ${{ vars.DBT_REPO }}
This var is a repo variable and I have confirmed it is already set. Pretend its value is fruits/apples.
Then, in one of my jobs I try to reference this var in a step:
- name: Checkout DBT repo
  uses: actions/checkout@v2
  with:
    repository: ${{ env.DBT_REPO }}
    token: ${{ secrets.WORKFLOW_TOKEN }}
    ref: ${{ env.DBT_REPO_BRANCH }}
    path: ./${{ env.DBT_REPO }}
- name: Run DBT
  uses: ./${{ env.DBT_REPO }}/dbt-action
The last line is line 169.
Then, when I try to run this workflow I get an error:
Invalid workflow file: .github/workflows/main.yml#L169
The workflow is not valid. .github/workflows/main.yml (Line: 169, Col: 15): Unrecognized named-value: 'DBT_REPO'. Located at position 1 within expression: DBT_REPO
If I hard-code it like so: uses: ./fruits/apples/dbt-action then things work fine. It only fails when I attempt to use a variable.
How can I reference a variable in my uses keyword?
This is not possible because the env context is not available to uses. In fact, based on the documentation, no contexts are available to the uses key.
See: https://docs.github.com/en/actions/learn-github-actions/contexts#context-availability
I believe this is an architectural limitation of GitHub Actions: it appears they want to resolve all workflows/actions at the start of all jobs, so dynamic resolution isn't possible.
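One common workaround, if the dynamic part is only needed for the checkout itself, is to check out to a fixed path so the uses: value can stay a literal. A sketch (the fixed path dbt-repo is an arbitrary choice):
- name: Checkout DBT repo
  uses: actions/checkout@v2
  with:
    repository: ${{ env.DBT_REPO }}  # env IS available in `with:` inputs
    token: ${{ secrets.WORKFLOW_TOKEN }}
    ref: ${{ env.DBT_REPO_BRANCH }}
    path: dbt-repo                   # fixed path instead of ./${{ env.DBT_REPO }}
- name: Run DBT
  uses: ./dbt-repo/dbt-action        # literal path, no expression needed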
I am using a bash script to build a conda package in an Azure pipeline: conda build . --output-folder $(Build.ArtifactStagingDirectory). Here is the issue: conda build uses the build number in the meta.yaml file (see here).
A solution I could think of is to first copy all files to Build.ArtifactStagingDirectory, add the Azure pipeline's Build.BuildNumber to meta.yaml, and then build the package into Build.ArtifactStagingDirectory (within a sub-folder).
I am trying to avoid doing this by writing a shell script that manipulates the YAML file in the Azure pipeline, because that might be error-prone. Does anyone know a better way? It would be nice to read a more elegant solution in the answers or comments.
I don't know much about Azure Pipelines, but in general, if you want to control the build number without changing the contents of meta.yaml, you can use a Jinja template variable within meta.yaml.
Choose a variable name, e.g. CUSTOM_BUILD_NUMBER, and use it in meta.yaml:
package:
  name: foo
  version: 0.1

build:
  number: {{ CUSTOM_BUILD_NUMBER }}
To define that variable, you have two options:
Use an environment variable:
export CUSTOM_BUILD_NUMBER=123
conda build foo-recipe
OR
Define the variable in conda_build_config.yaml (docs), as follows:
echo "CUSTOM_BUILD_NUMBER:" >> foo-recipe/conda_build_config.yaml
echo " - 123" >> foo-recipe/conda_build_config.yaml
conda build foo-recipe
If you want, you can add an if statement so that the recipe still works even if CUSTOM_BUILD_NUMBER is not defined (using a default build number instead).
package:
  name: foo
  version: 0.1

build:
  {% if CUSTOM_BUILD_NUMBER is defined %}
  number: {{ CUSTOM_BUILD_NUMBER }}
  {% else %}
  number: 0
  {% endif %}
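In the Azure Pipelines setting from the question, the environment-variable option could look roughly like this. A sketch, not tested: note that conda build numbers must be integers, so the integer Build.BuildId is used here rather than Build.BuildNumber, which may contain dots.
- bash: |
    # Pass the pipeline's build ID into the recipe via the Jinja variable
    export CUSTOM_BUILD_NUMBER=$(Build.BuildId)
    conda build . --output-folder $(Build.ArtifactStagingDirectory)
  displayName: Build conda package with pipeline build number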
I have a .gitlab-ci.yml file that I use to install a few plugins (craftcms/aws-s3, craftcms/redactor, etc.) in the publish stage. The file is provided below (in part):
# run the staging deploy, commands may be different based on the project
deploy-staging:
  stage: publish
  variables:
    DOCKER_HOST: 127.0.0.1:2375
  # ...............
  # ...............
  # TODO: temporary fix to the docker/composer issue
  - docker-compose -p "ci-$CI_PROJECT_ID" --project-directory $CI_PROJECT_DIR -f build/docker-compose.staging.yml exec -T craft composer --working-dir=/data/craft require craftcms/aws-s3
  - docker-compose -p "ci-$CI_PROJECT_ID" --project-directory $CI_PROJECT_DIR -f build/docker-compose.staging.yml exec -T craft composer --working-dir=/data/craft require craftcms/redactor
I have a JSON file that holds the data for the plugins. The file, .butler.json, is provided below:
{
  "customer_number": "007",
  "project_number": "999",
  "site_name": "Welance",
  "local_url": "localhost",
  "db_driver": "mysql",
  "composer_require": [
    "craftcms/redactor",
    "craftcms/aws-s3",
    "nystudio107/craft-typogrify:1.1.17"
  ],
  "local_plugins": [
    "welance/zeltinger",
    "ansmann/ansport"
  ]
}
How do I take the plugin names from the "composer_require" and the "local_plugins" inside the .butler.json file and create a for loop in the .gitlab-ci.yml file to install the plugins?
You can't create a loop in .gitlab-ci.yml, since YAML is not a programming language; it only describes data. You could use a tool like jq to query for your values (cat .butler.json | jq '.composer_require') inside a script, but you cannot set variables from there (there is a feature request for it).
You could use a templating engine like Jinja (which is often used with YAML, e.g. by Ansible and SaltStack) to generate your .gitlab-ci.yml from a template. There is a command-line tool, j2cli, which takes variables as JSON input; you could use it like this:
j2 gitlab-ci.yml.j2 .butler.json > .gitlab-ci.yml
You could then use Jinja expressions to loop over your data and generate the corresponding YAML in gitlab-ci.yml.j2, as in the sketch after this snippet:
{% for item in composer_require %}
# build your YAML
{% endfor %}
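For example, the repeated composer require lines from the question could be generated from the JSON data like this (a sketch; the surrounding job definition is omitted):
{% for plugin in composer_require %}
    - docker-compose -p "ci-$CI_PROJECT_ID" --project-directory $CI_PROJECT_DIR -f build/docker-compose.staging.yml exec -T craft composer --working-dir=/data/craft require {{ plugin }}
{% endfor %}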
The drawback is that the processed .gitlab-ci.yml needs to be checked in to your repository. This can be done via a pre-commit hook (before each commit, regenerate the .gitlab-ci.yml file and, if it changed, commit it along with the other changes).
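A minimal sketch of such a hook, saved as .git/hooks/pre-commit and made executable:
#!/bin/sh
# Regenerate .gitlab-ci.yml from the template and stage it if it changed.
j2 gitlab-ci.yml.j2 .butler.json > .gitlab-ci.yml
git add .gitlab-ci.yml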
I'm trying to deploy a Django project with SaltStack.
I wrote an SLS file that installs packages and runs some commands.
It installs Django, nginx, etc., and I want to run manage.py collectstatic for nginx.
But when I re-apply this formula, it returns an error that the /static directory already exists,
so I modified the SLS file:
collect_static_files:
{% if not salt['file.exists'][BASEDIR,'myproject/static']|join('') %}
  cmd.run:
    - name: '~~~ collectstatic;'
    - cwd: /path/to/venv/bin
{% else %}
  cmd.run:
    - name: echo "Static directory exists."
{% endif %}
But when I run salt '*' state.apply myformula, it says:
minion:
    Data failed to compile:
----------
    Rendering SLS 'base:myproj' failed: Jinja variable 'salt.utils.templates.AliasedLoader object' has no attribute 'file.exists'
How can I solve this problem? Thank you.
I was a fool...
{% if not salt['file.directory_exists'](BASEDIR + 'myproject/static') %}
worked well.
The problem was that I used the state module, not the execution module, of Salt.
Now I understand that state modules describe "state" while execution modules act like functions.
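For reference, the working state from the question then looks like this (keeping the placeholder command from the original):
collect_static_files:
{% if not salt['file.directory_exists'](BASEDIR + 'myproject/static') %}
  cmd.run:
    - name: '~~~ collectstatic;'
    - cwd: /path/to/venv/bin
{% else %}
  cmd.run:
    - name: echo "Static directory exists."
{% endif %}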
I have EC2 instances that need to be added to an ELB. While trying this from Ansible, I get the following error. I am able to add them using the AWS CLI. I found this open issue with the ec2_elb module in Ansible: https://github.com/ansible/ansible-modules-core/issues/2115
Is there any workaround for this? Or any other version of boto/Python where this works as expected? I have over 400 ELBs in the profile that I am using.
msg: ELB MyTestELB does not exist.
This worked for me: use an AWS CLI command from within Ansible to get rid of the above issue with boto/Ansible not being able to identify the ELB.
- name: Add EC2 instance to ELB {{ elb_result.elb.name }} using the AWS CLI from within the Ansible play
  command: "sudo -E aws elb register-instances-with-load-balancer --load-balancer-name {{ elb_result.elb.name }} --instances i-456r3546 --profile <<MyProfileHereIfNeeded>>"
  environment:
    http_proxy: http://{{ proxyUserId }}:{{ proxyPwd }}@proxy.com:port
    https_proxy: http://{{ proxyUserId }}:{{ proxyPwd }}@proxy.com:port