Ansible inventory: multi-line arrays in inventory files?

I'm trying to write an array in an Ansible inventory file (e.g. hosts.local), but the array seems to have to be all on one line and can't be split across multiple lines:
[all:vars]
someArr=["This",
"doesn't",
"work"]
Is there any way of doing this in Ansible inventory files?

INI inventory files don't support multi-line values. You may find some workaround, but in this scenario the best option is to use YAML for the inventory. A sample inventory snippet:
all:
  vars:
    multiline: [
      "This",
      "is",
      "multiline"
    ]
    # Or use the style below, which results in the same thing
    #multiline:
    #  - "This"
    #  - "is"
    #  - "multiline"
  hosts:
    somehost:
Have a look at inventory basics for more details.

Related

Replace value of object property in multiple JSON files

I'm working with multiple JSON files located in the same folder.
The files contain objects with the same properties, for example:
{
  "identifier": "cameraA",
  "alias": "a",
  "rtsp": "192.168.1.1"
}
I want to replace a property for all the objects in the JSON files at the same time for a certain condition.
For example, let's say that I want to replace all the rtsp values of the objects with identifier equal to "cameraA".
I've been trying with something like:
jq 'if .identifier == \"cameraA" then .rtsp=\"cameraX" else . end' -c *.json
But it isn't working.
Is there a simple way to replace the property of an object among multiple JSON files?
jq cannot edit files in place (it only reads from files or STDIN and writes to STDOUT), so the simplest approach would be to process one file at a time, e.g. by putting your jq program inside a shell loop. sponge is often used when employing this approach.
However, there is an alternative that has the advantage of efficiency. It requires only one invocation of jq, the output of which would include the filename information (obtained from input_filename). This output would then be the input of an auxiliary process, e.g. awk.
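For the per-file approach, a minimal sketch (assuming you want to rewrite each file in place and that sponge from moreutils is installed) might look like this:

for f in *.json; do
  # change rtsp only for objects whose identifier is "cameraA"
  jq 'if .identifier == "cameraA" then .rtsp = "cameraX" else . end' "$f" | sponge "$f"
done

sponge buffers the whole of jq's output before writing, so the file isn't truncated while jq is still reading it.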

Concatenate two CSV files with different columns into one using pandas or bash within an Ansible playbook

This is my first post and I'm also very new to programming. Sorry if the terminology I use doesn't always make perfect sense; feel free to correct any nonsense that would make your eyes bleed.
I am actually a network engineer but with the current trend in my field, I need to start coding and automating but have postponed it until my company had a real use case. Well, that use case arrived and it is called ACI.
I've been learning how to automate many basic things with ansible and so far so good.
My current use case requires a playbook that will concatenate two CSV files with different columns into one single CSV file which will later be used to set variables in other plays.
We mainly work with CSV files containing system names, VLAN IDs and Leaf ports, something like this:
VPC_SYS_NAME, VLAN_ID, LEAF_PAIR
sys1, 3001, 101-102
sys2, 2500, 111-112
... , ..., ... ...
So far what I have tried is to take this data, read it with the read_csv module in ansible, and use the fields in each column as variables to loop in another play:
- name: read the csv file
  read_csv:
    path: list.csv
    delimiter: ','
  register: csv

- name: GET EPG UNI PATH FROM VLAN ID
  aci_rest:
    host: "{{ ansible_host }}"
    username: "{{ username }}"
    password: "{{ password }}"
    validate_certs: False
    method: get
    path: api/class/fvAEPg.json?query-target-filter=eq(fvAEPg.name,"{{ item.VLAN_ID }}")
  loop: "{{ csv.list }}"
  register: register_as_variable
Once this play has finished, it registers the output in another variable, in this case called register_as_variable.
I then parse this output with json_query and set it into a new variable:
- set_fact:
    fact1: "{{ register_as_variable | json_query('results[].imdata[].fvAEPg.attributes.dn') }}"
Lastly, I copy this output into another CSV file.
With the Ansible shell module, using cat and awk, I remove any unwanted characters and change the CSV file from a single-row list to a headerless column, getting something like this:
"uni/tn-tenant/ap-AP01/epg-3001",
"uni/tn-tenant/ap-AP01/epg-2500",
"uni/tn-tenant/ap-AP01/epg-...",
Up to this point it works as I expect (even if it is clearly not the cleanest way).
Where I'm struggling at the moment is finding a way to merge/concatenate the original CSV (with the system name, VLAN ID, etc.) and the newly created CSV (with the "uni/tn-tenant/ap-AP01/epg-..." output) into one "master" CSV file that would be used by other plays. The "master" CSV file should look something like this:
VPC_SYS_NAME, VLAN_ID, LEAF_PAIR, MO_PATH
sys1, 3001, 101-102, "uni/tn-tenant/ap-AP01/epg-3001",
sys2, 2500, 111-112, "uni/tn-tenant/ap-AP01/epg-2500",
... , ..., ... ..., "uni/tn-tenant/ap-AP01/epg-....",
Adding the MO_PATH header can be done with sed -i '1iMO_PATH' file.csv but merging the columns of both files in a given order is what I'm unable to accomplish.
So far I have tried to use pandas and cat, but without success.
I would be extremely thankful if anyone could help me just a bit or guide me in the right direction.
Thanks!
Hello and welcome to StackOverflow! A former network engineer is here to help :)
The easiest way to merge two files line by line (if you are sure that their order is correct) is to use the paste utility.
I have the following files:
1.csv
VPC_SYS_NAME,VLAN_ID,LEAF_PAIR
sys1,3001,101-102
sys2,2500,111-112
2.csv
"uni/tn-tenant/ap-AP01/epg-3001",
"uni/tn-tenant/ap-AP01/epg-2500",
Then I came up with the following.
Adding a new header to the resulting file 3.csv:
echo "$(head -n 1 1.csv),MO_PATH" > 3.csv
We read the header of 1.csv, add the missing column, and redirect the output to 3.csv (overwriting it completely).
Merging the two files using the paste utility, while skipping the header of 1.csv:
tail -n+2 1.csv | paste -d"," - 2.csv >> 3.csv
Let's break this one down:
tail -n+2 1.csv - reads 1.csv starting from the 2nd line and writes it to stdout
paste -d"," - 2.csv - merges two files line by line, using , as the delimiter, while getting the contents of the first file from stdin (represented as -). We used the pipe symbol | to pass the stdout of the tail command to the stdin of the paste command
>> appends the output to the already existing 3.csv
The result:
VPC_SYS_NAME,VLAN_ID,LEAF_PAIR,MO_PATH
sys1,3001,101-102,"uni/tn-tenant/ap-AP01/epg-3001",
sys2,2500,111-112,"uni/tn-tenant/ap-AP01/epg-2500",
And for the pipes to work, don't forget to use the shell module instead of command, since this question is tagged ansible.
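For reference, a rough sketch of how those two commands might be wrapped in a task (the chdir path is just a placeholder for wherever your CSV files live):

- name: build 3.csv by pasting the MO_PATH column onto 1.csv
  shell: |
    echo "$(head -n 1 1.csv),MO_PATH" > 3.csv
    tail -n+2 1.csv | paste -d"," - 2.csv >> 3.csv
  args:
    chdir: /path/to/csv/files   # placeholder: directory containing 1.csv and 2.csv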

Is there a `jq` command line tool or wrapper which lets you interactively explore `jq` similar to `jmespath.terminal`

jq is a lightweight and flexible command-line JSON processor.
https://stedolan.github.io/jq/
Is there a jq command line tool or wrapper which lets you pipe output into it and interactively explore jq, with the JSON input in one pane and your interactively updating result in another pane, similar to jmespath.terminal?
I'm looking for something similar to the JMESPath Terminal jpterm
"JMESPath exploration tool in the terminal"
https://github.com/jmespath/jmespath.terminal
I found this project jqsh but it's not maintained and it appears to produce a lot of errors when I use it.
https://github.com/bmatsuo/jqsh
I've used https://jqplay.org/ and it's a great web-based jq learning tool. However, I want to be able to, in the shell, pipe the JSON output of a command into an interactive jq which allows me to explore and experiment with jq commands.
Thanks in advance!
I've been using jiq and I'm pretty happy with it.
https://github.com/fiatjaf/jiq
It's jid with jq.
You can drill down interactively by using jq filtering queries.
jiq uses jq internally, and it requires you to have jq in your PATH.
Using the AWS CLI:
aws ec2 describe-regions --region-names us-east-1 us-west-1 | jiq
jiq output
[Filter]> .Regions
{
  "Regions": [
    {
      "Endpoint": "ec2.us-east-1.amazonaws.com",
      "RegionName": "us-east-1"
    },
    {
      "Endpoint": "ec2.us-west-1.amazonaws.com",
      "RegionName": "us-west-1"
    }
  ]
}
https://github.com/simeji/jid
N.b. I'm not clear how strictly it follows jq's syntax and feature set.
You may have to roll your own.
Of course, jq itself is interactive in the sense that if you invoke it without specifying any JSON input, it will process STDIN interactively.
If you want to feed the same data to multiple programs, you could easily write your own wrapper. Over at GitHub, there's a bash script named jqplay that has a few bells and whistles. For example, if the input command begins with | then the most recent result is used as input.
Example 1
./jqplay -c spark.json
Enter a jq filter (possibly beginning with "|"), or blank line to terminate:
.[0]
{"name":"Paddington","lovesPandas":null,"knows":{"friends":["holden","Sparky"]}}
.[1]
{"name":"Holden"}
| .name
"Holden"
| .[0:1]
"H"
| length
1
.[1].name
"Holden"
Bye.
Example 2
./jqplay -n
Enter a jq filter (possibly beginning and/or ending with "|"), or blank line to terminate:
?
An initial | signifies the filter should be applied to the previous jq
output.
A terminating | causes the next line that does not trigger a special
action to be appended to the current line.
Special action triggers:
:exit # exit this script, also triggered by a blank line
:help # print this help
:input PATHNAME ...
:options OPTIONS
:save PN # save the most recent output in the named file provided
it does not exist
:save! PN # save the most recent output in the named file
:save # save to the file most recently specified by a :save command
:show # print the OPTIONS and PATHNAMEs currently in effect
:! PN # equivalent to the sequence of commands
:save! PN
:input PN
? # print this help
# # ignore this line
1+2
3
:exit
Bye.
If you're using Emacs (or are willing to), then jq-mode allows you to run jq filters interactively on the current JSON document buffer:
https://github.com/ljos/jq-mode
There is a new one: https://github.com/PaulJuliusMartinez/jless
JLess is a command-line JSON viewer designed for reading, exploring, and searching through JSON data.
JLess will pretty print your JSON and apply syntax highlighting.
Expand and collapse Objects and Arrays to grasp the high- and low-level structure of a JSON document. JLess has a large suite of vim-inspired commands that make exploring data a breeze.
JLess supports full text regular-expression based search. Quickly find the data you're looking for in long String values, or jump between values for the same Object key.
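Typical usage is simply pointing it at a file or piping JSON into it (data.json is just a placeholder name):

# view a file
jless data.json

# or pipe JSON into it from another command
cat data.json | jless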

How to use a nested loop in an Ansible playbook with a JSON register

I get a complex JSON from a Ruby script, and I register it like this:
- name: get the json
  command: /abc/get_info.rb
  register: JsonInfo
and the JSON is like this:
{"a-b-c.abc.com":[["000000001","a"],["000000002","a"],["000000003","c"]],"c-d-e.abc.com":[["000000010","c"],["000000012","b"]],"c-d-m.abc.com":[["000000022","c"],["000000033","b"],["000000044","c"]]}
But all I can do is just output the JSON like this:
- debug: msg="{{JsonInfo}}"
and loop like this:
- debug: msg="{{ item.key }} and the host is {{ inventory_hostname }} and value is {{ item.value }}"
  with_dict: "{{ JsonInfo.stdout }}"
  when: item.key == inventory_hostname
By the way, a-b-c.abc.com, c-d-e.abc.com and c-d-m.abc.com are server hostnames.
But what I really want to do is run a loop over the JSON first, and get the result of:
"a-b-c.abc.com":[["000000001","a"],["000000002","a"],["000000003","c"]]
"c-d-e.abc.com":[["000000010","c"],["000000012","b"]]
"c-d-m.abc.com":[["000000022","c"],["000000033","b"],["000000044","c"]]
And when I have all of the above, I want to run another loop over each of the values of a-b-c.abc.com, c-d-e.abc.com and c-d-m.abc.com, and then, according to the "a" or "c", run a different command on a-b-c.abc.com or c-d-e.abc.com.
How can I loop over that JSON?
That's not possible with the available Ansible loops. You can achieve this by creating your own lookup plugin, as sketched below.
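A minimal sketch of such a plugin, assuming Ansible picks it up from a lookup_plugins/ directory next to the playbook (the name nested_json is made up for this example):

# lookup_plugins/nested_json.py (hypothetical name and location)
# Flattens {"host": [["id", "letter"], ...], ...} into one item per (id, letter) pair.
import json

from ansible.plugins.lookup import LookupBase

class LookupModule(LookupBase):
    def run(self, terms, variables=None, **kwargs):
        results = []
        for term in terms:
            # the registered stdout is a JSON string; an already-parsed dict is also accepted
            data = json.loads(term) if isinstance(term, str) else term
            for host, pairs in data.items():
                for ident, letter in pairs:
                    results.append({"host": host, "id": ident, "letter": letter})
        return results

A task could then iterate with with_nested_json: "{{ JsonInfo.stdout }}" and branch on item.host, item.id and item.letter.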

Jekyll Filename Without Date

I want to build a documentation site using Jekyll and GitHub Pages. The problem is that Jekyll only accepts filenames under _posts with an exact pattern like YYYY-MM-DD-your-title-is-here.md.
How can I post a page in Jekyll without this filename pattern? Something like:
awesome-title.md
yet-another-title.md
etc.md
Thanks in advance.
Don't use posts; posts are things with dates. It sounds like you probably want to use collections instead: you get all the power of posts, but without the pesky date/naming requirements.
https://jekyllrb.com/docs/collections/
I use collections for almost everything that isn't a post; my own site is configured to use collections for 'pages' as well as for more specific sections of the site.
I guess that you are annoyed by the post URL http://domaine.tld/category/2014/11/22/post.html.
You cannot bypass the filename pattern for posts, but you can use permalink (see documentation).
_posts/2014-11-22-other-post.md
---
title: "Other post"
date: 2014-11-22 09:49:00
permalink: anything-you-want
---
The file will be anything-you-want/index.html.
The URL will be http://domaine.tld/anything-you-want.
What I did without "abandoning" the posts (it looks like using collections or pages is a better and deeper solution) is a combination of what #igneousaur says in a comment plus using the same date as a prefix for all file names:
Use permalink: /:title.html in _config.yml (no dates in published URLs).
Use the format 0001-01-01-name.md for all files in _posts folder (jekyll is happy about the file names and I'm happy about the sorting of the files).
Of course, we can include any "extra information" in the name, maybe some incremental id or anything else that helps us organize the files, e.g. 0001-01-01-001-name.md.
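A minimal sketch of that setup (the file names are only examples):

# _config.yml
permalink: /:title.html

# _posts/0001-01-01-awesome-title.md      -> /awesome-title.html
# _posts/0001-01-01-yet-another-title.md  -> /yet-another-title.html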
The way I solved it was by adding _plugins/no_date.rb:
class Jekyll::PostReader
  # Don't use DATE_FILENAME_MATCHER so we don't need to put those stupid dates
  # in the filename. Also limit to just *.markdown, so it won't process binary
  # files from e.g. drafts.
  def read_posts(dir)
    read_publishable(dir, "_posts", /.*\.markdown$/)
  end

  def read_drafts(dir)
    read_publishable(dir, "_drafts", /.*\.markdown$/)
  end
end
This overrides ("monkey patches") the standard Jekyll functions; the defaults for these are:
# Read all the files in <source>/<dir>/_drafts and create a new
# Document object with each one.
#
# dir - The String relative path of the directory to read.
#
# Returns nothing.
def read_drafts(dir)
  read_publishable(dir, "_drafts", Document::DATELESS_FILENAME_MATCHER)
end

# Read all the files in <source>/<dir>/_posts and create a new Document
# object with each one.
#
# dir - The String relative path of the directory to read.
#
# Returns nothing.
def read_posts(dir)
  read_publishable(dir, "_posts", Document::DATE_FILENAME_MATCHER)
end
With the referenced constants being:
DATELESS_FILENAME_MATCHER = %r!^(?:.+/)*(.*)(\.[^.]+)$!.freeze
DATE_FILENAME_MATCHER = %r!^(?>.+/)*?(\d{2,4}-\d{1,2}-\d{1,2})-([^/]*)(\.[^.]+)$!.freeze
As you can see, DATE_FILENAME_MATCHER as used in read_posts() requires a date ((\d{2,4}-\d{1,2}-\d{1,2})); I put date: 2021-07-06 in the frontmatter.
I couldn't really get collections to work, and this also solves another problem I had where storing binary files such as images in _drafts would error out as it tried to process them.
Arguably a bit ugly, but it works well. Downside is that it may break on update, although I've been patching various things for years and never really had any issues with it thus far. This is with Jekyll 4.2.0.
I wanted to use posts but not have the date in the filenames. The closest I got was naming the posts with an arbitrary 'date' like 0001-01-01-cool-post.md and then using a different property to access the date.
If you use the last-modified-at plugin - https://github.com/gjtorikian/jekyll-last-modified-at - then you can use page.last_modified_at in your _layouts/post.html and whatever file you are running {% for post in site.posts %} in.
Now the dates are retrieved from the last git commit date (not author date) and the page.date is unused.
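For example, a small layout snippet (assuming page.last_modified_at behaves like a normal date value with Liquid's date filter):

<!-- _layouts/post.html -->
<p>Last updated: {{ page.last_modified_at | date: "%Y-%m-%d" }}</p>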
The JSON schema for the config file actually contains some useful information; see the code block below for some examples (and a minimal _config.yml sketch after it).
I have set it to /:categories/:title. That drops the date and file extension while preserving the categories.
I still use a proper date for the file name because you can use that date in your templates, i.e. to display the date on a post using {{ page.date }}.
{
  "global-permalink": {
    "description": "The global permalink format\nhttps://jekyllrb.com/docs/permalinks/#global",
    "type": "string",
    "default": "date",
    "examples": [
      "/:year",
      "/:short_year",
      "/:month",
      "/:i_month",
      "/:short_month",
      "/:day",
      "/:i_day",
      "/:y_day",
      "/:w_year",
      "/:week",
      "/:w_day",
      "/:short_day",
      "/:long_day",
      "/:hour",
      "/:minute",
      "/:second",
      "/:title",
      "/:slug",
      "/:categories",
      "/:slugified_categories",
      "date",
      "pretty",
      "ordinal",
      "weekdate",
      "none",
      "/:categories/:year/:month/:day/:title:output_ext",
      "/:categories/:year/:month/:day/:title/",
      "/:categories/:year/:y_day/:title:output_ext",
      "/:categories/:year/:week/:short_day/:title:output_ext",
      "/:categories/:title:output_ext"
    ],
    "pattern": "^((/(:(year|short_year|month|i_month|short_month|long_month|day|i_day|y_day|w_year|week|w_day|short_day|long_day|hour|minute|second|title|slug|categories|slugified_categories))+)+|date|pretty|ordinal|weekdate|none)$"
  }
}
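For instance, a minimal _config.yml using the setting described above might look like this (the docs category is just an example):

# _config.yml
permalink: /:categories/:title

# _posts/2021-07-06-my-post.md with "categories: docs" in its front matter
# would then end up at /docs/my-post (date and file extension dropped).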