Writing to a particular cell in a CSV file from Ansible

I'm trying to find out whether there is any option to write to a particular cell in a CSV file from Ansible. I can use the Ansible lookup plugin csvfile to look up a value in the CSV file, but I need to add text to that file. Consider the example below:
empID,name,mark
111,user1
112,user2
113,user3
I need to add the mark for each user; running the playbook should prompt for the empID and marks:
---
- name: Update data
  hosts: localhost
  vars_prompt:
    - name: empid
      prompt: EMP_ID
      private: no
    - name: marks
      prompt: MARKS
      private: no
  tasks:
    - name: Lookup
      debug:
        # don't nest "{{ }}" inside a template; concatenate the prompted value instead
        msg: "{{ lookup('csvfile', empid ~ ' file=data.csv delimiter=, col=2') }}"
I need help writing to column C of the CSV file, rows 1, 2, 3...

As @mdaniel mentioned, there's no out-of-the-box write_csv module in Ansible. Without creating your own module, the only workaround I can think of is the following:
You read in the CSV file with the read_csv module and register the data table as a dictionary variable.
You add the new values to the dict variable recursively under the key "mark". (Check this post for modifying values in a dict variable.)
You loop over the dict variable and output each line to a new file with the lineinfile module. The file can be created with a .csv extension, and in each line the values are separated with delimiters (, or ;).
Here is an example:
---
- hosts: localhost
  gather_facts: no
  tasks:
    - name: Read from CSV file
      community.general.read_csv:
        path: data.csv
        key: empID
        delimiter: ','
      register: response
    - name: Create a dict variable from the CSV file
      set_fact:
        data_dict: "{{ response.dict }}"
    - name: Update values in dictionary recursively
      set_fact:
        data_dict: "{{ data_dict | combine(new_item, recursive=true) }}"
      vars:
        # build the single-key dict in Jinja rather than nesting "{{ }}" in a string
        new_item: "{{ {item.key: {'mark': 'some_value'}} }}"
      with_dict: "{{ data_dict }}"
    - name: Debug
      debug:
        msg: "{{ data_dict }}"
    - name: Save data_dict headers to a new CSV file
      lineinfile:
        path: new_data.csv
        line: empID,name,mark
        create: yes
    - name: Save data_dict values to a new CSV file
      lineinfile:
        path: new_data.csv
        line: "{{ item.value.empID }},{{ item.value.name }},{{ item.value.mark }}"
      loop: "{{ data_dict | dict2items }}"
The resulting CSV file is:
empID,name,mark
111,user1,some_value
112,user2,some_value
113,user3,some_value
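To tie this back to the question's vars_prompt: rather than writing 'some_value' into every row, you can update just the prompted employee. A minimal sketch, assuming the empid and marks variables from the question's prompts and the data_dict built above:
- name: Set the prompted mark for the prompted empID only
  set_fact:
    # empid selects the row; recursive=true preserves the other columns
    data_dict: "{{ data_dict | combine({empid: {'mark': marks}}, recursive=true) }}"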

There doesn't appear to be an out-of-the-box write_csv mechanism in Ansible, partly because Ansible is not a general-purpose computing platform. However, you can easily write your own module that behaves as you wish, or you may be able to find an existing one on Ansible Galaxy.
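If a custom module feels like overkill, another workaround (a sketch of my own, reusing the data_dict from the answer above rather than anything built into Ansible) is to render the whole file in one task with copy, since the content string is templated before it is written:
- name: Write the dict back out as CSV in one task (sketch)
  copy:
    dest: new_data.csv
    # header line, then one templated CSV row per dict entry
    content: "empID,name,mark\n{% for row in data_dict | dict2items %}{{ row.value.empID }},{{ row.value.name }},{{ row.value.mark }}\n{% endfor %}"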

Related

Is there an example somewhere of how to loop through a csv file in Ansible and have it put the values in a dict instead of a list?

I'm trying to learn how to iterate through data from a CSV and put it into a dict rather than a list. I can make it work with a list, but that isn't as helpful for a CSV that feeds multiple tasks which each use only some of the same data, because then I'd have to make a new CSV for every task and call a different CSV in each one. Since most of the data is the same in each task, I'd rather pull everything from the same CSV and use a dict to specify which particular cell's data I want to use at that moment.
Is there a way to put the data pulled from the CSV into a dict?
My Playbook:
tasks:
  - name: Read CSV
    read_csv:
      path: ./constructs.csv
      dialect: excel
    register: csv_data
  - name: Add a new BD
    cisco.mso.mso_schema_template_bd:
      <<: *login
      state: present
      schema: "{{ item.schema }}"
      template: "{{ item.template }}"
      bd: "{{ item.bd }}"
    loop: "{{ csv_data.dict|dict2items }}"
  - name: Add a new EPG
    cisco.mso.mso_schema_template_anp_epg:
      <<: *aci_login
      state: present
      schema: "{{ item.schema }}"
      template: "{{ item.template }}"
      anp: "{{ item.app_profile }}"
      epg: "{{ item.epg }}"
      bd:
        name: "{{ item.bd }}"
    loop: "{{ csv_data.dict|dict2items }}"
If I run the above (with the appropriate headers etc. above tasks:), it simply skips all my tasks. The skip reason is 'no items in the list'.
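A likely cause, based on the read_csv module's documented behavior: without a key option the rows are registered as a list under csv_data.list, and csv_data.dict stays empty, so the dict2items loops have nothing to iterate. Supplying a unique column as the key (bd is assumed unique here, as in the follow-up question below) populates the dict:
- name: Read CSV keyed by a unique column
  read_csv:
    path: ./constructs.csv
    dialect: excel
    # with a key, rows land in csv_data.dict instead of csv_data.list
    key: bd
  register: csv_data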

Ansible Error when using dict with csv: "The task includes an option with an undefined variable. The error was 'dict object' has no attribute 'schema'"

I'm trying to get my playbook working with a CSV while using a dict, due to the nature of the data (each task asks for different parts of the data in a row, so I can't use a list).
BD and EPG will always appear exactly once per file and can be used as a key if necessary.
I'm getting the error "The task includes an option with an undefined variable". Since the variable (schema) appears as a column header in the CSV file, I must have some sort of syntax issue.
What I am trying to do is loop through the CSV file one row at a time and have schema: "{{ item.schema }}" evaluate to that particular row's value under the schema column, etc.
Playbook:
tasks:
  - name: Read CSV
    read_csv:
      # Name of the csv
      path: ./Create_EPGs_and_BDs.csv
      dialect: excel
      key: bd
    # Creates register value to be used later
    register: csv_data
  #
  ##################################################
  # Creates BD in MSO Template
  ##################################################
  #
  - name: Add a new BD
    cisco.mso.mso_schema_template_bd:
      <<: *aci_login
      state: present
      schema: "{{ item.schema }}"
      template: "{{ item.template }}"
      bd: "{{ item.bd }}"
      vrf:
        name: "{{ item.vrf }}"
    loop: "{{ csv_data.dict|dict2items }}"
  #
  ##################################################
  # Creates EPG in MSO Template
  ##################################################
  #
  - name: Add a new EPG
    cisco.mso.mso_schema_template_anp_epg:
      <<: *aci_login
      state: present
      schema: "{{ item.schema }}"
      template: "{{ item.template }}"
      anp: "{{ item.app_profile }}"
      epg: "{{ item.epg }}"
      bd:
        name: "{{ item.bd }}"
    loop: "{{ csv_data.dict|dict2items }}"
CSV File:
https://i.stack.imgur.com/VlyLP.jpg
What is the correct syntax to pull the csv values for the entire row and then let me access each column's value in the playbook for that particular row?
The dict2items filter is going to transpose your dict into a list of dicts, shaped like [{"key": "the-value-of-bd-for-that-row", "value": {"schema": "schema1", ...}}, ...].
Thus, you just need to add .value in between your item and the dict key it references:
- name: Add a new BD
  cisco.mso.mso_schema_template_bd:
    <<: *aci_login
    state: present
    schema: "{{ item.value.schema }}"
    template: "{{ item.value.template }}"
    # or, if you prefer, item.value.bd
    bd: "{{ item.key }}"
    vrf:
      name: "{{ item.value.vrf }}"
  loop: "{{ csv_data.dict|dict2items }}"
It's your code style, but you'll want to be very careful using only one space for YAML indentation: it's very easy to make a mistake that way, and it certainly makes asking questions on SO harder :-)
In the future, debug: var=item will go a long way toward helping you understand the shape of your data when you encounter any such "has no attribute" error.
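For instance, a throwaway task like the following (a sketch against the same csv_data register) prints each loop item so the key/value shape becomes obvious:
- name: Inspect the shape of each loop item
  debug:
    # shows {"key": ..., "value": {...}} for every row
    var: item
  loop: "{{ csv_data.dict|dict2items }}"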

Ansible read_csv module: How to provide path

I have a CSV file as below:
Hostname,Permission,User,Group,file
lbserver1,-rw-------,root,root,/tmp/dir1/4
lbserver2,drwx------,root,root,/tmp/dir1
lbserver3,-rw-------,root,root,/tmp/dir2/8
I need to use the file path as the key. My playbook is as below:
- name: read csv
  read_csv:
    path: "/tmp/test.csv"
    key: file
  register: file_details
- name: test
  debug:
    msg: "{{file_details.dict./tmp/dir2/5.Permission}}"
I get this error:
"msg": "template error while templating string: expected name or number. String: {{file_details.dict./tmp/dir2/5'.Permission}}"
I tried quotes as well as escape characters for the path, but still no luck. Please advise.
You can change the code as below, using bracket notation so the slashes in the key are not parsed as attribute access:
- name: test
  debug:
    msg: "{{ file_details.dict['/tmp/dir2/5'].Permission }}"
But then it will raise an error if the key doesn't exist, which is the case in your example (the CSV contains /tmp/dir2/8, not /tmp/dir2/5). In that case you may use a default:
- name: test
  debug:
    msg: "{{ file_details.dict['/tmp/dir2/5'].Permission | default('undefined') }}"
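If you'd rather not hard-code a path at all, you can also iterate over every key the module produced; a small sketch using the same file_details register:
- name: Print the permission recorded for every file in the CSV
  debug:
    # item.key is the file path used as the row key
    msg: "{{ item.key }} has permissions {{ item.value.Permission }}"
  loop: "{{ file_details.dict | dict2items }}"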

Why is Ansible unable to read unicode string as JSON?

Summary
When retrieving data using the uri module in Ansible, I am unable to parse a section of it as JSON to retrieve a nested value.
The desired value is the ci field inside the content.data or json.data field (see output below).
Steps to Reproduce
site.yml
---
- hosts: localhost
  gather_facts: false
  tasks:
    - name: Get String
      uri:
        url: "http://localhost/get-data"
        method: POST
        body_format: json
        body: "{ \"kong-jid\": \"run-sn-discovery\" }"
        return_content: yes
      register: output
    - set_fact:
        ci: "{{ output.json.data.ci }}"
    - debug:
        msg: "{{ ci }}"
The {{ output }} variable
{
    u'status': 200,
    u'cookies': {},
    u'url': u'http://kong-demo:8000/get-data',
    u'transfer_encoding': u'chunked',
    u'changed': False,
    u'connection': u'close',
    u'server': u'kong/0.34-1-enterprise-edition',
    u'content': u'{"data":"\\"{u\'ci\': u\'3bb8d625dbac3700e4f07b6e0f96195b\'}\\""}',
    'failed': False,
    u'json': {u'data': u'"{u\'ci\': u\'3bb8d625dbac3700e4f07b6e0f96195b\'}"'},
    u'content_type': u'application/json',
    u'date': u'Thu, 18 Apr 2019 15:50:25 GMT',
    u'redirected': False,
    u'msg': u'OK (unknown bytes)'
}
Result
[user#localhost]$ ansible-playbook site.yml
[WARNING]: Could not match supplied host pattern, ignoring: all
[WARNING]: provided hosts list is empty, only localhost is available
PLAY [localhost] ***************************************************************************************************************
TASK [Pass Redis data to next task as output] **********************************************************************************
ok: [localhost]
TASK [set_fact] ****************************************************************************************************************
fatal: [localhost]: FAILED! => {}
MSG:
The task includes an option with an undefined variable. The error was: 'ansible.utils.unsafe_proxy.AnsibleUnsafeText object' has no attribute 'ci'
The error appears to have been in 'site.yml': line 19, column 7, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
- set_fact:
^ here
exception type: <class 'ansible.errors.AnsibleUndefinedVariable'>
exception: 'ansible.utils.unsafe_proxy.AnsibleUnsafeText object' has no attribute 'ci'
Important Troubleshooting Information
It appears the root issue is related to which Ansible type being interpreted. I desire to parse ci from the output in one task.
The two-task solution shown below works, but this leads me to believe this should be possible in one line...
Two-Task Solution
- set_fact:
    ci: "{{ output.json.data | from_json }}"
- debug:
    msg: "{{ ci['ci'] }}"
But the ci fact set from {{ output.json.data | from_json }} reports a different type than the inline expression...
Unicode or Dict?
- debug:
    msg: "{{ output.json.data | from_json | type_debug }}"  # returns unicode
- set_fact:
    ci: "{{ output.json.data | from_json }}"
- debug:
    msg: "{{ ci | type_debug }}"  # returns dict
Why isn't {{ output.json.data | from_json | type_debug }} the same as {{ ci | type_debug }}?
Although json and data are keys in their respective objects, ci is just part of a larger string (one that happens to look like a JSON object). If the relevant line in your data structure were:
u'json': {u'data': {'ci': u'3bb8d625dbac3700e4f07b6e0f96195b'}},
then you could expect "{{ output.json.data.ci }}" to work, but not when the .ci part is just a normal part of a string.
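That said, since the OP's own two-task workaround shows that from_json can parse the string, the two tasks can be collapsed into one by parenthesizing the filter result before indexing it; a sketch built from that same filter chain:
- set_fact:
    # parse the string first, then index into the resulting dict
    ci: "{{ (output.json.data | from_json)['ci'] }}"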

Add a new key-value to a json file using Ansible

I'm using Ansible to automate some configuration steps for my application VM, but I'm having difficulty inserting a new key/value pair into an existing JSON file on the remote host.
Say I have this json file:
{
    "foo": "bar"
}
And I want to insert a new key value pair to make the file become:
{
    "foo": "bar",
    "hello": "world"
}
Since the JSON format is not line-based, I'm excluding the lineinfile module from my options. Also, I would prefer not to use any external modules. Google keeps giving me examples of how to read a JSON file, but nothing about changing values and writing them back to the file. I'd really appreciate your help!
Since the file is in JSON format, you can import it into a variable, append the extra key/value pairs you want, and then write it back to the filesystem.
Here is a way to do it:
---
- hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: load var from file
      include_vars:
        file: /tmp/var.json
        name: imported_var
    - debug:
        var: imported_var
    - name: append more key/values
      set_fact:
        # default({}) (not []) so combine always receives a mapping
        imported_var: "{{ imported_var | default({}) | combine({ 'hello': 'world' }) }}"
    - debug:
        var: imported_var
    - name: write var to file
      copy:
        content: "{{ imported_var | to_nice_json }}"
        dest: /tmp/final.json
UPDATE:
As the OP clarified, the code needs to work against a remote host; in that case we can't use include_vars or lookups, which run on the controller. We can use the slurp module instead.
New code for remote hosts:
---
- hosts: greenhat
  # connection: local
  gather_facts: false
  tasks:
    - name: load var from file
      slurp:
        src: /tmp/var.json
      register: imported_var
    - debug:
        msg: "{{ imported_var.content|b64decode|from_json }}"
    - name: append more key/values
      set_fact:
        imported_var: "{{ imported_var.content|b64decode|from_json | default({}) | combine({ 'hello': 'world' }) }}"
    - debug:
        var: imported_var
    - name: write var to file
      copy:
        content: "{{ imported_var | to_nice_json }}"
        dest: /tmp/final.json
Hope it helps!
ilias-sp's solution is great!
In my case, it didn't cover the situation where the base JSON file doesn't exist yet, so I had to add this task at the beginning of the play:
- name: Ensure json file exists
  copy:
    content: "{}"
    dest: /tmp/var.json
    force: false
For people who are OK with custom Ansible modules: https://github.com/ParticleDecay/ansible-jsonpatch works great!
With this you can simply do:
- name: append key/values
  json_patch:
    src: /tmp/var.json
    operations:
      - op: add
        path: "/hello"
        value: "world"
    pretty: yes
    create: yes
A related one-task pattern appends an entry to a JSON array log file by re-reading and re-templating the whole file (note that lookup('file', ...) reads on the controller):
- name: update log
  copy:
    content: "{{ log | to_nice_json }}"
    dest: "{{ log_file }}"
  vars:
    log: "{{ (lookup('file', log_file) | from_json) + [{'job': (build_id if build_id != '' else 'dev'), 'keystore': ks, 'timestamp': ansible_date_time.iso8601}] }}"
    log_file: log/log.json
    build_id: "{{ lookup('env', 'BUILD_ID') }}"
  tags: log