get host name for jinja template in salt - jinja2

Not sure where to start, but here is what I have and what I'm trying to do.
What I have:
I have three minions as part of a three-tier application named employee.
There are three servers: web01 as the web server, app01 as the app server, and db01 as the database server.
Each server has grain values set on it; here are the grain keys and values for each server.
web01:
  appname: employee
  tier: web
app01:
  appname: employee
  tier: app
db01:
  appname: employee
  tier: db
What I'm trying to do:
I'm trying to push configuration files to web01 and app01. These config files contain a variable (the hostname of another tier's minion): the config on web01 should contain the name app01, and the config on app01 should contain the name db01. The names of these servers should be grabbed based on their grain values.
For example, the hostname of the app server is the hostname of the server whose grains match "appname:employee and tier:app".
I'm not sure how to do this. I'm too new to Salt and don't have much experience with it or with Jinja templates.
Any help will be really appreciated.
Thank you

So if I understand you right, you want a config file on web01 and app01 containing the relevant hostnames.
If so, you can use a pillar file where you state these attributes.
/srv/pillar/employee.sls:
employee:
  hostname_of_another_tier_minion: hostname.example.com
You can then reference this in your Jinja template /srv/formulas/employee/templates/config.conf.jinja:
hostname_of_another_tier_minion {{ pillar['employee']['hostname_of_another_tier_minion'] }}
To be complete, you reference your template in /srv/employee/web.sls and /srv/employee/app.sls:
web-config-file:
  file.managed:
    - user: root
    - group: root
    - template: jinja
    - mode: '0644'
    - names:
      - /etc/<web-conf-dir>/web.conf:
        - source: salt://employee/templates/config.conf.jinja
Let me know if you have any further questions.
UPDATE:
If the hostnames are not known in advance, as you said, you can first look them up via grains and then put them into the Jinja template that gets rendered into a config on every server.
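As a rough sketch of that grains-based lookup, here is one way to do it with the Salt Mine, assuming each minion publishes network.get_hostname to the mine (the mine setup and the compound match below are illustrative assumptions, not part of the original answer; on older Salt releases the keyword is expr_form instead of tgt_type):
Pillar (or minion config) to publish the hostname to the mine:
mine_functions:
  network.get_hostname: []
/srv/formulas/employee/templates/config.conf.jinja:
{#- ask the mine for the hostname of every minion whose grains match the app tier -#}
{%- set app_hosts = salt['mine.get']('G@appname:employee and G@tier:app', 'network.get_hostname', tgt_type='compound') %}
{%- for minion_id, hostname in app_hosts.items() %}
hostname_of_another_tier_minion {{ hostname }}
{%- endfor %}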

Related

Apache Airflow: How to template Volumes in DockerOperator using Jinja Templating

I want to pass a list of volumes into DockerOperator using a Jinja template.
Hard-coded volumes work fine:
volumes=['first:/dest', 'second:/sec_destination']
However, the following Jinja template does not work:
volumes=[f"{{{{ ti.xcom_pull(task_ids='my_task', key='dockerVolumes') }}}}"]
500 Server Error: Internal Server Error ("invalid mode:
/sec_destination')")
I found a workaround that is acceptable for me, though it is not perfect: it only works for cases where the volumes list always has exactly 2 elements:
volumes=[f"{{{{ ti.xcom_pull(task_ids='my_task', key='dockerVolumes')[0] }}}}", f"{{{{ ti.xcom_pull(task_ids='my_task', key='dockerVolumes')[1] }}}}"]
For anyone who is using Airflow >= 2.0.0:
The volumes parameter was deprecated in favor of mounts, which is a list of docker.types.Mount. Fortunately, Airflow evaluates templates recursively, which means that every object that defines template_fields and is the value of a field listed in the parent object's template_fields will be evaluated as well. So in order to have docker.types.Mount fields evaluated we need to do two things:
Add mounts to DockerOperator.template_fields
Add template_fields = (<field_name_1>, ..., <field_name_n>) to every docker.types.Mount.
So, to template the source, target, and type parameters, you can implement a DockerOperator subclass the following way:
from airflow.providers.docker.operators.docker import DockerOperator

class DockerOperatorExtended(DockerOperator):
    # Expose 'mounts' to Jinja in addition to the fields DockerOperator already templates.
    template_fields = (*DockerOperator.template_fields, 'mounts')

    def __init__(self, **kwargs):
        mounts = kwargs.get('mounts', [])
        for mount in mounts:
            # Mark which Mount attributes get rendered during recursive template evaluation.
            mount.template_fields = ('Source', 'Target', 'Type')
        kwargs['mounts'] = mounts
        super().__init__(**kwargs)
In order to provide the value of a field via a template, that field must be part of template_fields.
DockerOperator does not list volumes in its template_fields, which is why you cannot set it via Jinja.
The solution for this is to extend DockerOperator and include volumes in template_fields, as sketched below.
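A minimal sketch of that approach, assuming an Airflow/provider version whose DockerOperator still accepts a volumes parameter (the subclass name is made up):

from airflow.providers.docker.operators.docker import DockerOperator

class VolumesTemplatedDockerOperator(DockerOperator):
    # Re-declare template_fields so that 'volumes' is rendered by Jinja as well.
    template_fields = (*DockerOperator.template_fields, 'volumes')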
Another solution is writing your own Jinja filter (for splitting the string pulled from XCom) and adding it as an element of user_defined_filters when initializing the DAG object.

How can I templatize subchart values?

For context I have an application, and it depends on the mysql chart. I've set up the mysql stable chart as a dependent chart in myapp chart.
I have a very large set of sql files, and due to their size, I need to pack them into a specialized seed container. Using the standard helm chart, I can pass in a seed container to init my database as shown in the below values.yaml snippet.
Are there any strategies to get subchart values created at runtime into my values.yaml?
mysql:
  extraInitContainers: |
    - name: init-seed
      image: foobar/seed:0.1.0
      env:
        - name: MYSQL_HOSTNAME
          value: foobar-mysql
        - name: MYSQL_USER
          value: foo
        - name: MYSQL_PASS
          value: bar
I've tried ways to do the below, but to no avail.
a. Templatize and pass a service name into the MYSQL_HOSTNAME env var
b. Pass the {{ include "mangos_zero.fullname" . }} helper into this value
c. Find the name of the other container within the mysql pod at runtime?
How can I get the service name of the mysql chart, or its container name, passed into my init pod?
Not into your values.yaml, but you can get them into your templates. Assuming you are using Helm v3, you can use the lookup function wherever you need the service name of your MySQL DB to create your seed data:
(lookup "v1" "Service" "mynamespace" "mysql-chart").metadata.name

ZooKeeper Multi-Server Setup by Example

From the ZooKeeper multi-server config docs they show the following configs that can be placed inside of zoo.cfg (ZK's config file) on each server:
tickTime=2000
dataDir=/var/zookeeper/
clientPort=2181
initLimit=5
syncLimit=2
server.1=zoo1:2888:3888
server.2=zoo2:2888:3888
server.3=zoo3:2888:3888
Furthermore, they state that you need a myid file on each ZK node whose content matches one of the server.id values above. So for example, in a 3-node "ensemble" (ZK cluster), the first node's myid file would simply contain the value 1. The second node's myid file would contain 2, and so forth.
I have a few practical questions about what this looks like in the real world:
1. Can localhost be used? If zoo.cfg has to be repeated on each node in the ensemble, is it OK to define the current server as localhost? For example, in a 3-node ensemble, would it be OK for Server #2's zoo.cfg to look like:
tickTime=2000
dataDir=/var/zookeeper/
clientPort=2181
initLimit=5
syncLimit=2
server.1=zoo1:2888:3888
server.2=localhost:2888:3888 # After all, we're on server #2!
server.3=zoo3:2888:3888
Or is this not advised/not possible?
2. Do the server ids have to be numerical? For instance, could I have a 5-node ensemble where each server's zoo.cfg looks like:
tickTime=2000
dataDir=/var/zookeeper/
clientPort=2181
initLimit=5
syncLimit=2
server.red=zoo1:2888:3888
server.green=zoo2:2888:3888
server.blue=zoo3:2888:3888
server.orange=zoo4:2888:3888
server.purple=zoo5:2888:3888
And, say, Server 1's myid would contain the value red inside of it (etc.)?
1. Can localhost be used?
This is a good question, as the ZooKeeper docs don't make it crystal clear whether the configuration file only accepts IP addresses. They say only hostname, which could mean an IP address, a DNS name, or a name in the hosts file, such as localhost.
server.x=[hostname]:nnnnn[:nnnnn], etc
(No Java system property)
servers making up the ZooKeeper ensemble. When the server starts up, it determines which server it is by looking for the file myid in the data directory. That file contains the server number, in ASCII, and it should match x in server.x in the left hand side of this setting.
However, note that the ZooKeeper docs recommend using exactly the same configuration file on all hosts:
ZooKeeper's behavior is governed by the ZooKeeper configuration file. This file is designed so that the exact same file can be used by all the servers that make up a ZooKeeper server assuming the disk layouts are the same. If servers use different configuration files, care must be taken to ensure that the list of servers in all of the different configuration files match.
So simply put in the machine's IP address and everything should work. Also, I have personally tested using 0.0.0.0 (in a situation where the interface IP address was different from the public IP address) and it does work.
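For instance, a rough sketch of what server #2's zoo.cfg could look like with that approach (this mirrors the question's example rather than anything from the official docs):

tickTime=2000
dataDir=/var/zookeeper/
clientPort=2181
initLimit=5
syncLimit=2
server.1=zoo1:2888:3888
# this entry is the local machine itself; the other servers keep its real hostname or IP here
server.2=0.0.0.0:2888:3888
server.3=zoo3:2888:3888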
2. Do the server ids have to be numerical?
From the ZooKeeper multi-server configuration docs, myid needs to be a numerical value from 1 to 255:
The myid file consists of a single line containing only the text of that machine's id. So myid of server 1 would contain the text "1" and nothing else. The id must be unique within the ensemble and should have a value between 1 and 255.
Since myid must match the x in the server.x parameter, we can infer that x must be a numerical value as well.

Partitioning data across hosts in Ansible (access "index" of host in task?)

I am trying to use Ansible to do some parallel computation. My data is trivially parallelizable; I just need to split the file across my hosts (EC2 instances). Is there a canonical way to do this?
The next best thing would be to have a counter that increments for each host. Assuming I have already split my data into my number of workers, I would like to be able to say within each worker task:
- file: src=data/users-{{ host_index }}.csv dest=/mnt/users.csv
Then, each worker can process their copy of users.csv with a separate script, that is agnostic to which set of users they have. Is there any way to get this counter index?
I am a beginner to Ansible, so I wonder whether I am overlooking a simple module or idiom, either in Ansible or Jinja. Thanks in advance.
It turns out I have access to a variable called ami_launch_index inside the ec2_facts module that gives me a zero-indexed unique ID for each EC2 instance. Here is the code for copying files with numerical suffixes over to their corresponding EC2 instances:
tasks:
  - name: Gather ec2 facts
    action: ec2_facts
    register: facts

  - name: Share data to nodes
    copy: src=data/websites-{{ facts.ansible_facts.ansible_ec2_ami_launch_index }}.txt dest=/mnt/websites.txt
The copy line produces the following for the src values:
data/websites-1.txt
data/websites-0.txt
data/websites-2.txt
(There is no guarantee that the hosts will iterate in ami_launch_index order)

How can I render the ID of a minion that has a particular .sls or state in SaltStack?

I'm using SaltStack to manage some VMs. I'm looking for a way to render the ID/hostname of a minion(s) that have a specified .sls attached to them in the top.sls file or a particular state in a jinja template-enabled file. The reason I want to do this is so I can easily refer to a server(s) in a client's configuration without having to hardcode values anywhere at all. For example;
/srv/salt/top.sls:
base:
  'desktoppc01':
    - generic.dns
  'bind9server01':
    - generic.dns
    - bind9
/srv/salt/generic/dns/init.sls:
/etc/resolv.conf:
  file:
    - managed
    - source: salt://generic/dns/files/resolv.conf
    - mode: 644
    - template: jinja
And finally,
/srv/salt/generic/dns/files/resolv.conf:
domain {{ pillar['domain_name'] }}
search {{ pillar['domain_name'] }}
nameserver {{ list_minions_with_state['bind9'] }}
What I'm after specifically is an equivalent to {{ list_minions_with_state['bind9'] }} (which I just made up for demonstration's sake). I had assumed it would be something that would be pretty commonly needed, but after scouring the modules page I haven't found anything yet.
At the moment I have the client get information from a pillar, but this has to be manually configured which doesn't feel like time well spent.
I'm hoping I could expand this idea with a for loop so that servers are dynamically added as they're created.
Edit:
With a file with the same data & hierarchy as a top.sls, rendering
base:
{% for server_id in salt['pillar.get']('servers') %}
  '{{ server_id }}':
  {% for states in salt['pillar.get']('servers:{{ server_id }}') %}
    - {{ states }}
  {% endfor %}
{% endfor %}
gives you
base:
  'desktoppc01':
  'bind9server01':
I tried a few variations on {{ server_id }} but was unsuccessful. Unless there's an easy way to use pillar variables in that function, I'm thinking of making a feature request and calling it a day.
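For reference, a minimal sketch of one such variation, using Jinja's ~ string-concatenation operator to build the pillar path instead of nesting {{ }} inside the expression (an illustrative guess, not something verified in the original post):

base:
{% for server_id in salt['pillar.get']('servers') %}
  '{{ server_id }}':
  {# build the pillar path as a plain string: 'servers:' ~ server_id #}
  {% for state in salt['pillar.get']('servers:' ~ server_id) %}
    - {{ state }}
  {% endfor %}
{% endfor %}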
The way I would think around this problem is to use Jinja and have a variable that contains the list of DNS servers, populated by a pillar variable.
For instance, you could have a pillar bind:servers variable.
See http://docs.saltstack.com/en/latest/topics/tutorials/states_pt3.html
and http://docs.saltstack.com/en/latest/topics/pillar/index.html#master-config-in-pillar
That variable can be used both to set up the nameservers in resolv.conf and to add the - bind9 state to the servers.
So in the end you have just one place to edit: the list of minions that are bind servers, in pillar.
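A minimal sketch of that layout, assuming a bind:servers pillar key (the file paths and key names here are illustrative):

/srv/pillar/bind.sls (the single place you edit):
bind:
  servers:
    - bind9server01

/srv/salt/generic/dns/files/resolv.conf (Jinja template):
domain {{ pillar['domain_name'] }}
search {{ pillar['domain_name'] }}
{% for ns in salt['pillar.get']('bind:servers', []) %}
nameserver {{ ns }}
{% endfor %}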
The first thing that comes to mind would be using the test-state methodology by setting test=True for state.apply or state.highstate. If there are zero changes to apply, then your server already has your highstate or specific sls fully applied.
salt '*' state.highstate test=True
Using salt-run's survey.diff could be helpful (although the diff patch doesn't lend itself well to this scenario as much as examining config files):
salt-run survey.diff '*' state.apply my.state test=True
While not directly applicable to your question as written, another method that comes to mind would be to use Salt grains within your states. When your states are applied to a system, each state would append to a "states" grain. Grains usually track things like roles (e.g. web, database, etc.); in your case grains could track states, recording what was applied rather than expressing the what-should-be logic of roles. Then you can use them to target and/or query your servers, as sketched below.
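A minimal sketch of a state that records itself in such a "states" grain (the grain name and value are illustrative assumptions):

# e.g. appended to /srv/salt/bind9/init.sls
record-bind9-in-states-grain:
  grains.append:
    - name: states
    - value: bind9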
Targeting by grain (show only minion IDs):
salt -G 'states:bind9' test.ping
salt -G 'states:generic.dns' test.ping
salt -G 'states:my_jinja_state' test.ping
Querying Grains (for each minion show me the states grain):
salt '*' grains.get states
Diffing of grains (compare each minion's states grain):
salt-run survey.diff '*' grains.get states