I need to read an IP address line from a dynamically generated JSON file and add it to a configuration file on the server.
On the Ansible home page I found two modules that looked helpful:
- the lookup module
- the lineinfile module
The lookup examples, however, only show reading the whole contents of a file with "{{ lookup('file', '/etc/foo.txt') }}".
How could I filter the result down to a single line?
Does anybody know a good way to achieve this?
You probably want a specific key from a JSON dict, I guess? If it's an arbitrary line that cannot be addressed inside the JSON structure, it will be hard; you would need to grep out the line in a separate task.
But let's assume you want a specific value from a dict. Then you can convert the JSON to an object with the from_json filter:
{{ lookup('file', '/etc/foo.txt') | from_json }}
Now if you want the value of bar from the contained data structure, something like this should work:
{{ (lookup('file', '/etc/foo.txt') | from_json).get('bar') }}
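For completeness, here is a minimal sketch of the whole flow, wiring the looked-up value into lineinfile. The key name ip, both file paths, and the server_ip= option format are assumptions for illustration; adjust them to your actual JSON structure and config format:

- name: add the ip from the generated json to the config file
  lineinfile:
    path: /etc/myapp/app.conf    # hypothetical target config file
    regexp: '^server_ip='
    line: "server_ip={{ (lookup('file', '/etc/foo.txt') | from_json).get('ip') }}"

Note that the file lookup runs on the control machine; if the JSON file lives on the managed host, fetch or slurp it first.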
I am working on an AWX Ansible query. I got the output from the query, and the data I need is in a URL-like format:
'dn': "uni/infra/funcprof/accportgrp-xxxxxxxxxx/rtaccBaseGrp-[uni/infra/accportprof-xxxxxxxxxxxx/hports-xxxxxx-typ-range]"
I want to extract the 'xxxx' data from the above string in Ansible. I was able to do it in Python by splitting on / and picking out the pieces. How can I do that in Ansible?
With the string in a variable named mystring, you could do something like this:
{{ mystring.split("/")[1] | lower }}
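For example, a small sketch in a play, based on the dn you posted (mystring holds the dn; picking index 3 and the regex_replace cleanup are assumptions about which segment you want):

- name: pull the accportgrp id out of the dn
  set_fact:
    port_group: "{{ mystring.split('/')[3] | regex_replace('^accportgrp-', '') }}"

- debug:
    var: port_group    # the xxxxxxxxxx part of accportgrp-xxxxxxxxxx

Adjust the index to whichever segment you need, just as you would in Python.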
This is my first post and I'm also very new to programming. Sorry if the terminology I use doesn't always make perfect sense; feel free to correct any nonsense that would make your eyes bleed.
I am actually a network engineer, but with the current trend in my field I need to start coding and automating. I had postponed it until my company had a real use case. Well, that use case has arrived, and it is called ACI.
I've been learning how to automate many basic things with Ansible, and so far so good.
My current use case requires a playbook that will concatenate two CSV files with different columns into one single CSV file, which will later be used to set variables in other plays.
We mainly work with CSV files containing system names, VLAN IDs and Leaf ports, something like this:
VPC_SYS_NAME, VLAN_ID, LEAF_PAIR
sys1, 3001, 101-102
sys2, 2500, 111-112
... , ..., ... ...
So far I have tried to take this data, read it with the read_csv module in Ansible, and use the fields in each column as variables to loop over in another play:
- name: read the csv file
  read_csv:
    path: list.csv
    delimiter: ','
  register: csv

- name: GET EPG UNI PATH FROM VLAN ID
  aci_rest:
    host: "{{ ansible_host }}"
    username: "{{ username }}"
    password: "{{ password }}"
    validate_certs: False
    method: get
    path: api/class/fvAEPg.json?query-target-filter=eq(fvAEPg.name,"{{ item.VLAN_ID }}")
  loop: "{{ csv.list }}"
  register: register_as_variable
Once this play has finished, it registers the output in another variable, in this case called register_as_variable.
I then parse this output with json_query and set it into a new variable:
- set_fact:
    fact1: "{{ register_as_variable | json_query('results[].imdata[].fvAEPg.attributes.dn') }}"
Lastly, I copy this output into another CSV file.
With the Ansible shell module, using cat and awk, I remove any unwanted characters and turn the CSV file from a single-row list into a headerless column, getting something like this:
"uni/tn-tenant/ap-AP01/epg-3001",
"uni/tn-tenant/ap-AP01/epg-2500",
"uni/tn-tenant/ap-AP01/epg-...",
Up to this point, it works as I expect (even if it is clearly not the cleanest way).
Where I am struggling at the moment is finding a way to merge/concatenate the original CSV (system name, VLAN ID, etc.) with the newly created CSV (the "uni/tn-tenant/ap-AP01/epg-..." output) into one "master" CSV file to be used by other plays. The "master" CSV file should look something like this:
VPC_SYS_NAME, VLAN_ID, LEAF_PAIR, MO_PATH
sys1, 3001, 101-102, "uni/tn-tenant/ap-AP01/epg-3001",
sys2, 2500, 111-112, "uni/tn-tenant/ap-AP01/epg-2500",
... , ..., ... ..., "uni/tn-tenant/ap-AP01/epg-....",
Adding the MO_PATH header can be done with sed -i '1iMO_PATH' file.csv, but merging the columns of both files in a given order is what I'm unable to accomplish.
So far I have tried pandas and cat, but without success.
I would be extremely thankful if anyone could help me just a bit or guide me in the right direction.
Thanks!
Hello and welcome to Stack Overflow! A former network engineer is here to help :)
The easiest way to merge two files line by line (if you are sure their order is correct) is the paste utility.
I have the following files:
1.csv
VPC_SYS_NAME,VLAN_ID,LEAF_PAIR
sys1,3001,101-102
sys2,2500,111-112
2.csv
"uni/tn-tenant/ap-AP01/epg-3001",
"uni/tn-tenant/ap-AP01/epg-2500",
Then I came up with the following steps.
Adding a new header to the resulting file 3.csv:
echo "$(head -n 1 1.csv),MO_PATH" > 3.csv
We read the header of 1.csv, add the missing column, and redirect the output to 3.csv (overwriting it completely).
Merging the two files with the paste utility, while skipping the header of 1.csv:
tail -n+2 1.csv | paste -d"," - 2.csv >> 3.csv
Let's break this one down:
tail -n+2 1.csv - reads 1.csv starting from the 2nd line and writes it to stdout
paste -d"," - 2.csv - merges the two files line by line, using , as the delimiter, and takes the contents of the first file from stdin (represented as -). We use the pipe symbol | to pass the stdout of the tail command to the stdin of the paste command
>> - appends the output to the already existing 3.csv
The result:
VPC_SYS_NAME,VLAN_ID,LEAF_PAIR,MO_PATH
sys1,3001,101-102,"uni/tn-tenant/ap-AP01/epg-3001",
sys2,2500,111-112,"uni/tn-tenant/ap-AP01/epg-2500",
And for the pipes to work, don't forget to use the shell module instead of command, since this question is tagged ansible.
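Wrapped in tasks, the two steps above might look like this (a minimal sketch; the file names follow the example):

- name: build the header of the master csv
  shell: echo "$(head -n 1 1.csv),MO_PATH" > 3.csv

- name: merge the two files line by line
  shell: tail -n+2 1.csv | paste -d"," - 2.csv >> 3.csv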
I am using Salt with the Jinja2 regex_search filter, and I am trying to extract some digits (a release version) from an archive file name, then use that value to create a symlink containing it. I've tried different combinations of the list, join, and other filters to get rid of this Unicode char, but without success.
Example:
"release_info" variable gets value "release-name-0.2345.577_20190101_1030.tar.gz" and I need to get only digits between the dots.
Here is the corresponding part of the sls file:
symlink to current release {{ release_info }}:
  file.symlink:
    - name: /home/{{ component.software['component_name'] }}/latest
    - target: /home/{{ component.software['component_name'] }}/{{ release_info | regex_search('(\d+\.\d+\.\d+)') }}
    - user: support
    - group: support
The expected result is "/home/support/0.2345.577", but instead I get "/home/support/(u'0.2345.577',)".
If I try to pipe it through the "yaml" or "json" filter, like:
{{ release_info | regex_search('(\d+\.\d+\.\d+)') | yaml }}
I get:
/home/support/[0.2345.577]
which is not what I am looking for.
PS: I've got something working, but it seems like a poor approach to me, just a workaround:
{{ release_info | regex_search('(\d+\.\d+\.\d+)') | yaml | replace('[','') | replace(']','') }}
Hello Todor, and welcome to Stack Overflow!
I have tried the example that you posted, and here is how to achieve what you want.
Note: I have changed the regex pattern a little in order to support other possibilities that could have more digits, e.g. 0.1.2.3.4 and so on, but of course you can use your own pattern as long as it works for you as expected.
Solution 1:
{{ release_info | regex_search("(\d(\.\d+){1,})") | first }}
The result before using first:
('0.2345.577', '.577')
The result after using first:
0.2345.577
Solution 2:
{{ release_info | regex_search("(\d\.\d+\.\d+)") | first }}
The result before using first:
('0.2345.577',)
The result after using first:
0.2345.577
first is a built-in filter in Jinja that returns the first item in a sequence. You can check the List of built-in filters for more information about the other filters.
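Applied to the sls from your question, the target line would then become (same variables as in your snippet; only the filter chain changes):

symlink to current release {{ release_info }}:
  file.symlink:
    - name: /home/{{ component.software['component_name'] }}/latest
    - target: /home/{{ component.software['component_name'] }}/{{ release_info | regex_search('(\d+\.\d+\.\d+)') | first }}
    - user: support
    - group: support

which renders the target as /home/support/0.2345.577.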
How can I use a dynamic data file?
Say I have several data files: file1.yml, file2.yml, file3.yml, and in the YAML front matter I want to say which data file to use:
---
datafilename: file1
---
{{ site.data.datafilename.person.name }}
How do I tell Liquid that datafilename here should resolve to file1?
Ideally I would use the post's file name, so that post1.md would use the post1.yml data file, and so on.
This should work from inside a post:
{{ site.data[page.slug].person.name }}
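For illustration, here is a minimal sketch (the file names, the person key, and the value Alice are made up). Given a data file _data/post1.yml:

person:
  name: Alice

a post named _posts/2019-01-01-post1.md has page.slug equal to post1, so inside it

{{ site.data[page.slug].person.name }}

renders Alice. If you would rather keep the datafilename key from your front matter, the same bracket notation works with it: {{ site.data[page.datafilename].person.name }}.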
I get a complex JSON from a Ruby script, and I register it like this:
- name: get the json
  command: /abc/get_info.rb
  register: JsonInfo
And the JSON looks like this:
{"a-b-c.abc.com":[["000000001","a"],["000000002","a"],["000000003","c"]],"c-d-e.abc.com":[["000000010","c"],["000000012","b"]],"c-d-m.abc.com":[["000000022","c"],["000000033","b"],["000000044","c"]]}
But all I can do so far is output the JSON like this:
- debug: msg="{{ JsonInfo }}"
and loop like this:
- debug: msg="{{item.key}} and the host is{{inventory_hostname}} and value is{{item.value}}"
with_dict: "{{JsonInfo.stdout}}"
when: item.key==inventory_hostname
By the way, a-b-c.abc.com, c-d-e.abc.com, and c-d-m.abc.com are hostnames of servers.
But what I really want to do is run a loop over the JSON first and get the result of
"a-b-c.abc.com":[["000000001","a"],["000000002","a"],["000000003","c"]]
"c-d-e.abc.com":[["000000010","c"],["000000012","b"]]
"c-d-m.abc.com":[["000000022","c"],["000000033","b"],["000000044","c"]]
And once I have all of the above, I want to run another loop over each of the values of a-b-c.abc.com, c-d-e.abc.com, and c-d-m.abc.com, and then, according to the "a" or "c", run a different command on a-b-c.abc.com or c-d-e.abc.com.
How can I loop over this JSON?
That's not possible with the available Ansible loops. You can achieve this by creating your own lookup plugin.