For context, I have an application that depends on the mysql chart; I've set up the stable mysql chart as a dependency of the myapp chart. I have a very large set of SQL files, and due to their size I need to pack them into a specialized seed container. Using the standard Helm chart, I can pass in a seed container to initialize my database, as shown in the values.yaml snippet below.
Are there any strategies to get subchart values created at runtime into my values.yaml?
mysql:
  extraInitContainers: |
    - name: init-seed
      image: foobar/seed:0.1.0
      env:
        - name: MYSQL_HOSTNAME
          value: foobar-mysql
        - name: MYSQL_USER
          value: foo
        - name: MYSQL_PASS
          value: bar
I've tried the following approaches, to no avail:
a. Templatizing and passing a service name into the MYSQL_HOSTNAME env var
b. Passing the {{ include "mangos_zero.fullname" . }} helper into this value
c. Finding the name of the other container within the mysql pod at runtime
How can I get the service name of the mysql chart, or its container name, passed into my init container?
Not into your values.yaml, but yes into your templates. Assuming you are using Helm v3, you can use the lookup function wherever you need the service name of your MySQL DB to create your seed data:

{{ (lookup "v1" "Service" "mynamespace" "mysql-chart").metadata.name }}
I want to pass a list of volumes into DockerOperator using a Jinja template.
Hard-coded volumes work fine:
volumes=['first:/dest', 'second:/sec_destination']
However, the following Jinja template does not work:
volumes=[f"{{{{ ti.xcom_pull(task_ids='my_task', key='dockerVolumes') }}}}"]
500 Server Error: Internal Server Error ("invalid mode: /sec_destination')")
I found a workaround that is acceptable for me, though not perfect: it only works when the list always has exactly two elements.
volumes=[f"{{{{ ti.xcom_pull(task_ids='my_task', key='dockerVolumes')[0] }}}}", f"{{{{ ti.xcom_pull(task_ids='my_task', key='dockerVolumes')[1] }}}}"]
For anyone who is using airflow >= 2.0.0:
The volumes parameter was deprecated in favor of mounts, which is a list of docker.types.Mount. Fortunately, airflow evaluates templates recursively, which means that every object with a template_fields attribute that is the value of any field listed in the parent object's template_fields will be evaluated as well. So in order to have docker.types.Mount fields evaluated, we need to do two things:
Add mounts to DockerOperator.template_fields
Add template_fields = (<field_name_1>, ..., <field_name_n>) to every docker.types.Mount.
So to template the source, target, and type parameters, you can implement a DockerOperator subclass the following way:
from airflow.providers.docker.operators.docker import DockerOperator

class DockerOperatorExtended(DockerOperator):
    # Make the 'mounts' list itself a templated field.
    template_fields = (*DockerOperator.template_fields, 'mounts')

    def __init__(self, **kwargs):
        mounts = kwargs.get('mounts', [])
        # Mark each Mount's fields as templated so Airflow renders them too.
        for mount in mounts:
            mount.template_fields = ('Source', 'Target', 'Type')
        kwargs['mounts'] = mounts
        super().__init__(**kwargs)
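A hypothetical usage sketch (the task id, image, and XCom key are made up):

from docker.types import Mount

seed_task = DockerOperatorExtended(
    task_id='seed_data',
    image='foobar/seed:0.1.0',
    mounts=[
        # Source is pulled from XCom and rendered when the task runs.
        Mount(
            target='/data',
            source="{{ ti.xcom_pull(task_ids='my_task', key='source_dir') }}",
            type='bind',
        ),
    ],
)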
In order to provide a field's value via a template, that field must be part of template_fields.
The DockerOperator does not list volumes in its template_fields, which is why you cannot set it via Jinja2.
The solution is to extend DockerOperator and include volumes in its template_fields, as sketched below.
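A minimal sketch of that approach (assuming an Airflow/provider version whose DockerOperator still accepts volumes):

from airflow.providers.docker.operators.docker import DockerOperator

class DockerOperatorWithTemplatedVolumes(DockerOperator):
    # Appending 'volumes' lets Jinja render expressions inside its items.
    template_fields = (*DockerOperator.template_fields, 'volumes')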
Another solution is to write your own Jinja filter (for splitting the string pulled from XCom) and add it to user_defined_filters when initializing the DAG object.
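For instance, a sketch with a hypothetical filter name and DAG id:

from datetime import datetime

from airflow import DAG

def split_volumes(value):
    # Hypothetical filter: split a comma-separated string from XCom into a list.
    return value.split(',')

dag = DAG(
    dag_id='my_dag',
    start_date=datetime(2021, 1, 1),
    user_defined_filters={'split_volumes': split_volumes},
)

The filter can then be applied inside a templated field, e.g. {{ ti.xcom_pull(task_ids='my_task', key='dockerVolumes') | split_volumes }}; the field itself still has to be listed in template_fields for the template to be rendered at all.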
I have a CloudFormation template that deploys a MySQL database and some resources to AWS. I want the script to be general so I can use it for different environments. For one of those resources (the master DB), I have an environment-specific security group configuration. I create security groups for each environment conditionally; they are called VaultSecurityGroupInEnv1, VaultSecurityGroupInEnv2, etc. There is a map that stores the names of the security groups for each environment. Here are my configurations:
Mappings:
  RegionMap:
    environment1:
      VaultSG: VaultSecurityGroupInEnv1
    environment2:
      VaultSG: VaultSecurityGroupInEnv2

Resources:
  VaultSecurityGroupInEnv1:
    Condition: IsEnv1Environment
  VaultSecurityGroupInEnv2:
    Condition: IsEnv2Environment
  MasterDB:
    Type: AWS::RDS::DBInstance
    Properties:
      VPCSecurityGroups:
        - !ImportValue DbSgId
        - !Sub
          - '${vGroup}'
          - vGroup: !FindInMap
              - RegionMap
              - !Ref Environment
              - VaultSG
for which I get the following error:

Invalid security group , groupId= vaultsecuritygroupinF.groupid, groupName=. (Service: AmazonRDS; Status Code: 400; Error Code: InvalidParameterValue;

The output from !Sub is retrieved and resolved as a name string, not as a resource. Using !Ref vaultsecuritygroupinF.GroupId works fine. Any idea how to use the map and !Sub correctly?
Thanks
You can't use FindInMap the way you are trying. It will just resolve to the literal strings VaultSecurityGroupInEnv1 or VaultSecurityGroupInEnv2; it will not resolve to the actual resources of the same name.
Instead, I think the following should be possible. !If is evaluated against the stack's Conditions at deploy time, and !Ref "AWS::NoValue" removes the list entry entirely when its condition is false, so each environment keeps only the security group that actually exists in that stack:
MasterDB:
  Type: AWS::RDS::DBInstance
  Properties:
    VPCSecurityGroups:
      - !ImportValue DbSgId
      - !If
        - IsEnv1Environment
        - !Ref VaultSecurityGroupInEnv1
        - !Ref "AWS::NoValue"
      - !If
        - IsEnv2Environment
        - !Ref VaultSecurityGroupInEnv2
        - !Ref "AWS::NoValue"
Not sure where to start, but here is what I have and what I'm trying to do.
What I have:
I have three minions that are part of a three-tier application named employee.
There are three servers: web01 as the web server, app01 as the app server, and db01 as the database server.
Each server has grains values set on it. Here is each server with its grains keys and values:
web01:
  appname: employee
  tier: web

app01:
  appname: employee
  tier: app

db01:
  appname: employee
  tier: db
What I'm trying to do:
I'm trying to push configuration files to web01 and app01. These config files contain a variable: the hostname of another tier's minion. The config on web01 should contain the name app01, and the config on app01 should contain the name db01. The names of these servers should be grabbed based on the grains values.
For example, the hostname of the app server is the server whose grains match "appname:employee and tier:app".
I'm not sure how to do it; I'm too new to Salt and don't have much experience with it or with Jinja templates.
Any help will be really appreciated.
Thank you
So if I understand you right, you want the config files on web01 and app01 to contain the relevant hostnames.
If so, you can use a pillar file where you state these attributes.
/srv/pillar/employee.sls:

employee:
  hostname_of_another_tier_minion: hostname.example.com
You can then reference this in your Jinja template /srv/formulas/employee/templates/config.conf.jinja:

hostname_of_another_tier_minion {{ pillar['employee']['hostname_of_another_tier_minion'] }}
Just to be complete, you reference your template in /srv/employee/web.sls and /srv/employee/app.sls:

web-config-file:
  file.managed:
    - user: root
    - group: root
    - template: jinja
    - mode: '0644'
    - names:
      - /etc/<web-conf-dir>/web.conf:
        - source: salt://employee/templates/config.conf.jinja
Let me know if you have any further questions.
UPDATE:
If the hostnames are not known in advance, as you said, you can first get them with grains and then put them into the Jinja template that gets rendered into a config on every server.
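One concrete way to do that cross-minion lookup is the Salt mine; the following is only a sketch, assuming you configure mine_functions on every minion so each one publishes its hostname, for example in pillar:

mine_functions:
  network.get_hostname: []

The template can then query the mine by grains using compound matching:

{# Grab the hostname of the minion whose grains match the app tier. #}
{% set app_hosts = salt['mine.get']('G@appname:employee and G@tier:app', 'network.get_hostname', tgt_type='compound') %}
hostname_of_another_tier_minion {{ app_hosts.values() | list | first }}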
I'm using an existing role, and I wish to modify it to extend its capabilities. Currently, one of its tasks is to create directories. These directories get passed as a variable containing a list of strings to the role, and then iterated over in a with_items statement. However, I would prefer to pass a list of dictionaries of the form e.g. {name: foo, mode: 751}.
So far so good; I can simply edit the role to make it take this sort of input. However, I also want to make it backwards compatible with the old format, i.e. where the items are strings.
Is there a way to test the type of a variable and then return different values (or perform different tasks) based on this? Perhaps using a Jinja2 filter? I was briefly looking at the conditionals listed in the manual, but nothing caught my eye that could be used in this situation.
You could use default() for backwards compatibility.
- file:
    path: "{{ item.name | default(item) }}"
    mode: "{{ item.mode | default(omit) }}"
    state: directory
  with_items: "{{ your_list }}"
If the item has a name property, use it, else simply use the item itself.
Same goes for all other properties you might have in your dict. The special variable omit would omit the whole option from the task, as if no mode was passed to the file module. Of course you could set any other default.
Documentation references:
default
omit
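For illustration, a hypothetical list that mixes both forms (the paths are made up): default(item) falls back to the plain string, and default(omit) drops mode where the item has none.

your_list:
  # old format: a plain string
  - /srv/app/logs
  # new format: a dict with an optional mode
  - { name: /srv/app/data, mode: '0751' }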
The quickest solution would be to have two tasks, and have them trigger with opposing conditions. Unfortunately, all items in the list will have to use the same form (you can't mix and match strings and dicts).
- name: create dirs (strings)
  file:
    ...
  with_items: "{{ items }}"
  when: items[0] is string

- name: create dirs (dicts)
  file:
    ...
  with_items: "{{ items }}"
  when: items[0] is not string
I am trying to use Ansible to do some parallel computation. My data is trivially parallelizable, I just need to split the file across my hosts (EC2 instances). Is there a canonical way to do this?
The next best thing would be to have a counter that increments for each host. Assuming I have already split my data into my number of workers, I would like to be able to say within each worker task:
- file: src=data/users-{{ host_index }}.csv dest=/mnt/users.csv
Then, each worker can process their copy of users.csv with a separate script, that is agnostic to which set of users they have. Is there any way to get this counter index?
I am a beginner to Ansible, so I wonder whether I am overlooking a simple module or idiom, either in Ansible or Jinja. Thanks in advance.
It turns out I have access to a variable called ami_launch_index inside of the ec2_facts module that gives me a zero-indexed unique ID for each EC2 instance. Here is the code for copying over files with numerical suffixes to their corresponding EC2 instances:
tasks:
  - name: Gather ec2 facts
    action: ec2_facts
    register: facts

  - name: Share data to nodes
    copy: src=data/websites-{{ facts.ansible_facts.ansible_ec2_ami_launch_index }}.txt dest=/mnt/websites.txt
The copy line produces the following for the src values:
data/websites-1.txt
data/websites-0.txt
data/websites-2.txt
(There is no guarantee that the hosts will iterate in ami_launch_index order)
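On current Ansible releases the same idea could look roughly like this (a sketch assuming the amazon.aws collection is installed; its ec2_metadata_facts module sets ansible_ec2_ami_launch_index directly as a host fact):

tasks:
  - name: Gather EC2 metadata facts
    amazon.aws.ec2_metadata_facts:

  - name: Share data to nodes
    ansible.builtin.copy:
      src: "data/websites-{{ ansible_ec2_ami_launch_index }}.txt"
      dest: /mnt/websites.txt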