I'm using an existing role, and I wish to modify it to extend its capabilities. Currently, one of its tasks is to create directories. These directories get passed as a variable containing a list of strings to the role, and then iterated over in a with_items statement. However, I would prefer to pass a list of dictionaries of the form e.g. {name: foo, mode: 751}.
So far so good; I can simply edit the role to make it take this sort of input. However, I also want to make it backwards compatible with the old format, i.e. where the items are strings.
Is there a way to test the type of a variable and then return different values (or perform different tasks) based on this? Perhaps using a Jinja2 filter? I was briefly looking at the conditionals listed in the manual, but nothing caught my eye that could be used in this situation.
You could use default() for backwards compatibility.
- file:
    path: "{{ item.name | default(item) }}"
    mode: "{{ item.mode | default(omit) }}"
    state: directory
  with_items: "{{ your_list }}"
If the item has a name property, use it, else simply use the item itself.
Same goes for all other properties you might have in your dict. The special variable omit would omit the whole option from the task, as if no mode was passed to the file module. Of course you could set any other default.
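For illustration, the same role variable could then look like either of these (the directory names and modes below are made up, not from the question):

your_list:
  - /opt/app/logs
  - /opt/app/cache

or

your_list:
  - { name: /opt/app/logs, mode: "0751" }
  - { name: /opt/app/cache }    # no mode given -> falls back to omit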
Documentation references:
default
omit
The quickest solution would be to have two tasks and have them trigger on opposite conditions. Unfortunately, all items in the list will have to use the same form (you can't mix and match strings and dicts).
- name: create dirs (strings)
  file:
    ...
  with_items: "{{ items }}"
  when: items[0] is string

- name: create dirs (dicts)
  file:
    ...
  with_items: "{{ items }}"
  when: items[0] is not string
I want to pass a list of volumes to DockerOperator using a Jinja template.
Hard-coded volumes work fine:
volumes=['first:/dest', 'second:/sec_destination']
However, the following Jinja template does not work:
volumes=[f"{{{{ ti.xcom_pull(task_ids='my_task', key='dockerVolumes') }}}}"]
500 Server Error: Internal Server Error ("invalid mode: /sec_destination')")
I found a workaround that is acceptable for me but not perfect: it only works when the volumes list always has exactly 2 elements.
volumes=[f"{{{{ ti.xcom_pull(task_ids='my_task', key='dockerVolumes')[0] }}}}", f"{{{{ ti.xcom_pull(task_ids='my_task', key='dockerVolumes')[1] }}}}"]
For anyone who is using Airflow >= 2.0.0:
The volumes parameter was deprecated in favor of mounts, which is a list of docker.types.Mount. Fortunately, Airflow renders templates recursively, which means that any object that has a template_fields attribute and is the value of one of the parent object's template_fields will be rendered as well. So in order to render docker.types.Mount fields we need to do two things:
Add mounts to DockerOperator.template_fields
Add template_fields = (<field_name_1>, ..., <field_name_n>) to every docker.types.Mount.
So, to template the source, target, and type parameters, you can implement a DockerOperator subclass the following way:
class DockerOperatorExtended(DockerOperator):
    template_fields = (*DockerOperator.template_fields, 'mounts')

    def __init__(self, **kwargs):
        mounts = kwargs.get('mounts', [])
        for mount in mounts:
            mount.template_fields = ('Source', 'Target', 'Type')
        kwargs['mounts'] = mounts
        super().__init__(**kwargs)
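A rough usage sketch of the subclass above; whether the nested rendering kicks in depends on that subclass, and the task ids, image name, and paths here are purely illustrative:

from docker.types import Mount

run_job = DockerOperatorExtended(
    task_id='run_job',
    image='my-image:latest',                # hypothetical image
    mounts=[
        Mount(
            # rendered at runtime because 'Source' is in the Mount's template_fields
            source="{{ ti.xcom_pull(task_ids='prepare', key='src_dir') }}",
            target='/data',
            type='bind',
        ),
    ],
)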
In order to provide a field's value via a template, that field must be part of template_fields.
DockerOperator does not list volumes in its template_fields, which is why you cannot set it via Jinja2.
The solution for this is to extend DockerOperator and include volumes in template_fields.
Another solution is to write your own Jinja filter (to split the string pulled from XCom) and add it to user_defined_filters when initializing the DAG object.
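A minimal sketch of that idea; the filter name, separator, and DAG arguments are assumptions, not part of the original answer:

from datetime import datetime
from airflow import DAG

def split_volumes(value, sep=','):
    # Turn a delimiter-separated string pulled from XCom into a list
    return value.split(sep)

dag = DAG(
    dag_id='docker_volumes_example',
    start_date=datetime(2022, 1, 1),
    schedule_interval=None,
    user_defined_filters={'split_volumes': split_volumes},
)

Inside a templated field you could then write {{ ti.xcom_pull(task_ids='my_task', key='dockerVolumes') | split_volumes }}.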
I'm creating a blog just to dump my notes in. I love how far I can go with site.tags and site.categories. All I really need now is the ability to have another filter option. Something like site.sublog would help me create exactly what I need.
So here's a post
---
layout: post
title: "Angular.js With Codeschool:part five"
date: 2015-05-14 07:57:01 +0100
category: [Angularjs-codeschool, basics]
tags: [angular with codeschool]
sublog: [web]
---
Basically I want to write notes on everything I am interested in: web, general tech, history ... and sort of create sub blogs
There are ways around it but now that I am here I just wanted to know if such a thing was possible
Categories and tags are treated specially by Jekyll, so you won't find that other front matter entries from a post will be collected into a site-wide variable like site.categories and site.tags are.
The Jekyll documentation has a tutorial on retrieving items based on front matter properties, which shows how to iterate over an entire collection and do some per-item processing based on a front matter variable's value.
You could iterate over your collection, inspecting the sublog values of each item and concat-ing them to a list of all known sublog values (perhaps removing duplicates using uniq afterwards, and optionally sort-ing them before further processing).
This would enable you to create a list of all sub blogs, and for each sub blog, create a list of posts within it (which is presumably your end goal).
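A rough Liquid sketch of that approach (the surrounding markup is illustrative, not from the question):

{% assign sublogs = "" | split: "" %}
{% for post in site.posts %}
  {% if post.sublog %}
    {% assign sublogs = sublogs | concat: post.sublog %}
  {% endif %}
{% endfor %}
{% assign sublogs = sublogs | uniq | sort %}

{% for sub in sublogs %}
  <h2>{{ sub }}</h2>
  <ul>
    {% for post in site.posts %}
      {% if post.sublog contains sub %}
        <li><a href="{{ post.url }}">{{ post.title }}</a></li>
      {% endif %}
    {% endfor %}
  </ul>
{% endfor %}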
You can store site-wide information and configuration settings in _config.yml.
Your example shows page-specific information in the YAML front matter!
Read the Jekyll documentation, please.
Example for what I want to achieve
I have many patch pages ("Patch 1.4", "Patch 1.5" etc.) that list the changes that were made to a project, where the affected/changed things are linked to their corresponding pages ("confirmation dialog", "foo", etc.):
Patch 1.4
Fixed spelling in the [[confirmation dialog]]
Patch 1.5
Added two options: [[foo]], [[bar]]
On the pages about the things that were changed ("confirmation dialog", "foo", …), I want to automatically show all corresponding changes:
Foo
[[Patch 1.5]]: Added two options: [[foo]], [[bar]]
Semantic MediaWiki’s subobjects can do this
#subobject allows me to create an (anonymous) object for each change on the patch pages:
{{#subobject:|
|Changes=Added two options: [[foo]], [[bar]]
|Affects=Foo|Bar
}}
And on each page ("foo" etc.) I can include an #ask subobject query to list all the matching subobjects:
{{#ask: [[Affects::{{FULLPAGENAME}}]]
|? Changes
}}
Great.
Problem: I have to duplicate the change entry.
On the patch pages, a change entry looks like this:
* Added two options: [[foo]], [[bar]] {{#subobject:|
|Changes=Added two options: [[foo]], [[bar]]
|Affects=Foo|Bar
}}
So I have to specify "Added two options: [[foo]], [[bar]]" two times: one time for the visible content, one time for the invisible subobject.
Is there a way in (Semantic) MediaWiki to do this without having to duplicate the content?
The ideal solution would just require me to enclose the change entry and specify the affected pages next to it, e.g.:
* {{ Added two options: [[foo]], [[bar]] }}((foo|bar))
As each patch page can list hundreds of changes, I don’t want to have to create a separate page for each change.
If I understand your question correctly, it seems you just need a simple query:
{{#ask: [[-Has subobject::{{FULLPAGENAME}}]]
| ?Changes
| format = ul
| headers = hide
| mainlabel = -
}}
Since using SMW markup may be tedious and error-prone, you might also use MediaWiki templates. You can simplify adding patch changes:
Template:Change
<includeonly><!--
-->{{#subobject:|
| Changes = {{{1|}}}
| Affects = {{{2|}}}|+sep=;
}}<!--
--></includeonly><nowiki/>
{{{1}}} and {{{2}}} are positional parameters, and the Affects subobject property uses the ; separator (since a pipe | is ambiguous and may break templates, parser functions, etc.). The <nowiki/> is a sort of hack that prevents whitespace bloat on the calling pages.
You can also add a special template that would encapsulate the changes query:
Template:Patch changes
<includeonly><!--
-->{{#ask: [[-Has subobject::{{{1|{{FULLPAGENAME}}}}}]]
| ?Changes
| format = ul
| headers = hide
| mainlabel = -
}}<!--
--></includeonly><nowiki/>
By default, the template asks for the changes list of the current page (when the first positional parameter is empty), or you can explicitly override it at the call site (say, {{Patch changes|Patch 1.5}}).
Patch 1.4
{{Change | Fixed spelling in the [[confirmation dialog]] | Confirmation dialog}}
{{Patch changes}}
Patch 1.5
{{Change | Added two options: [[foo]], [[bar]] | Foo; Bar}}
{{Patch changes}}
respectively.
These links might be useful later:
SMW subobjects and queries
SMW standard parameters for inline queries
SMW templates
MW templates anonymous parameters
I am trying to use Ansible to do some parallel computation. My data is trivially parallelizable, I just need to split the file across my hosts (EC2 instances). Is there a canonical way to do this?
The next best thing would be to have a counter that increments for each host. Assuming I have already split my data into my number of workers, I would like to be able to say within each worker task:
- file: src=data/users-{{ host_index }}.csv dest=/mnt/users.csv
Then, each worker can process their copy of users.csv with a separate script, that is agnostic to which set of users they have. Is there any way to get this counter index?
I am a beginner to Ansible, so I wonder whether I am overlooking a simple module or idiom, either in Ansible or Jinja. Thanks in advance.
It turns out I have access to a variable called ami_launch_index inside of the ec2_facts module that gives me a zero-indexed unique ID to each EC2 instance. Here is the code for copying over files with numerical suffixes to their corresponding EC2 instances:
tasks:
  - name: Gather ec2 facts
    action: ec2_facts
    register: facts

  - name: Share data to nodes
    copy: src=data/websites-{{ facts.ansible_facts.ansible_ec2_ami_launch_index }}.txt dest=/mnt/websites.txt
The copy line produces the following for the src values:
data/websites-1.txt
data/websites-0.txt
data/websites-2.txt
(There is no guarantee that the hosts will iterate in ami_launch_index order)
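If you are not on EC2 (or do not want to depend on launch order), a generic sketch of the same idea is to derive the index from the host's position in its inventory group; the group name workers below is an assumption:

- name: Share data to nodes by inventory position
  copy:
    src: "data/websites-{{ groups['workers'].index(inventory_hostname) }}.txt"
    dest: /mnt/websites.txt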
Just getting started with using chef recently. I gather that attributes are stored in one large monolithic hash named node that's available for use in your recipes and templates.
There seem to be multiple ways of defining attributes
Directly in the recipe itself
Under an attributes file - e.g. attributes/default.rb
In a JSON object that's passed to the chef-solo call. e.g. chef-solo -j web.json
Given the above 3, I'm curious
Are those all the ways attributes can be defined?
What's the order of precedence here? I'm assuming one of these methods supersedes the others
Is #3 (the JSON method) only valid for chef-solo ?
I see both node and default hashes defined. What's the difference? My best guess is that the default hash defined in attributes/default.rb gets merged into the node hash?
Thanks!
Your last question is probably the easiest to answer. In an attributes file you don't have to type node, so this in attributes/default.rb:
default['foo']['bar']['baz'] = 'qux'
Is exactly the same as this in recipes/whatever.rb:
node.default['foo']['bar']['baz'] = 'qux'
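Either way, the value ends up in the merged node object and is read back the same way in recipes or templates (a minimal illustration):

node['foo']['bar']['baz']   # => 'qux'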
In retrospect having different syntaxes for recipes and attributes is confusing, but this design choice dates back to extremely old versions of Chef.
The -j option is available to both chef-client and chef-solo and will set attributes in either case. Note that these will be 'normal' attributes, which persist in the node object and are generally not recommended. However, the run_list, chef_environment and tags on servers are implemented this way. It is generally recommended to avoid other 'normal' attributes and to avoid node.normal['foo'] = 'bar' or node.set['foo'] = 'bar' in recipe (or attribute) files. The difference is that if you delete the node.normal line from the recipe, the old setting on a node will persist, while if you delete a node.default setting from a recipe, then when you run chef-client on the node that setting will be deleted.
What happens in a chef-client run to make this happen is that at the start of the run the client issues a GET to fetch its old node document from the server. It then wipes the default, override and automatic (Ohai) attributes while keeping the 'normal' attributes. The behavior of the default, override and automatic attributes makes the most sense -- you start over at the start of the run and then construct all the state; if it's not in the recipe then you don't see a value there. However, the run_list is normally set on the node, and nodes do not (often) manage their own run_list. In order to make the run_list persist, it is a normal attribute.
The choice of the word 'normal' is unfortunate, as is the choice of node.set for setting 'normal' attributes. While those look like the obvious choices for setting attributes, users should avoid them. Again, the problem is that they came first and were, and are, necessary and required for the run_list. Generally stick with default and override attributes only. Typically you can get most of your work done with default attributes; those should be preferred.
There's a big precedence level picture here:
https://docs.chef.io/attributes.html#attribute-precedence
That's the ultimate source of truth for attribute precedence.
That graph describes all the different ways that attributes can be defined.
The problem with Chef attributes is that they've grown organically and sprouted many options to try to help out users who painted themselves into a corner. In general you should never need to touch the automatic, normal, force_default or force_override levels of attributes. You should also avoid setting attributes in recipe code; move those assignments into attribute files. What this leaves is these places to set attributes:
in the initial -j argument (this sets normal attributes; you should limit it to setting the run_list, and overusing it is generally a smell)
in the role file as default or override precedence levels (careful with this one though because roles are not versioned and if you touch these attributes a lot you will cause production issues)
in the cookbook attributes file as default or override precedence levels (this is where you should set most of your attributes)
in environment files as default or override precedence levels (can be useful for settings like DNS servers in a datacenter, although you can use roles and/or cookbooks for this as well)
You also can set attributes in recipes, but when you do that you invariably wind up getting your next lesson in the two-phase compile-and-converge way Chef evaluates recipes. If you have recipes that need to communicate with each other, it's better to use node.run_state, which is just a hash that doesn't get written as node attributes. You can drop node.run_state[:foo] = 'bar' in one recipe and read it in another. You probably will see recipes that set attributes, though, so you should be aware of that.
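A minimal run_state sketch (the recipe names and the :foo key are only illustrative):

# recipes/first.rb -- stash a value for a later recipe in the same run
node.run_state[:foo] = 'bar'

# recipes/second.rb -- read it back; nothing is persisted to the node object
log 'value from first recipe' do
  message node.run_state[:foo]
end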
Hope That Helps.
When writing a cookbook, I visualize three levels of attributes:
Default values to converge successfully -- attributes/default.rb
Local testing override values -- JSON or .kitchen.yml (have you tried chef_zero using ChefDK and Kitchen?); see the sketch after this list
Environment/role override values -- link listed in lamont's answer: https://docs.chef.io/attributes.html#attribute-precedence
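For the second level, a minimal .kitchen.yml sketch (the suite, cookbook and attribute names are made up):

suites:
  - name: default
    run_list:
      - recipe[mycookbook::default]
    attributes:
      mycookbook:
        port: 8080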