I have the below json data file:
[
  {
    "?xml": {
      "attributes": {
        "encoding": "UTF-8",
        "version": "1.0"
      }
    }
  },
  {
    "domain": [
      {
        "name": "mydom"
      },
      {
        "domain-version": "12.2.1.3.0"
      },
      {
        "server": [
          {
            "name": "AdminServer"
          },
          {
            "ssl": {
              "name": "AdminServer"
            }
          },
          {
            "listen-port": "12400"
          },
          {
            "listen-address": "mydom.myserver1.mybank.com"
          }
        ]
      },
      {
        "server": [
          {
            "name": "SERV01"
          },
          {
            "log": [
              {
                "name": "SERV01"
              },
              {
                "file-name": "/web/bea_logs/domains/mydom/SERV01/SERV01.log"
              }
            ]
          },
          {
            "listen-port": "12401"
          },
          {
            "listen-address": "mydom.myserver1.mybank.com"
          },
          {
            "server-start": [
              {
                "java-vendor": "Sun"
              },
              {
                "java-home": "/web/bea/platform1221/jdk"
              }
            ]
          }
        ]
      },
      {
        "server": [
          {
            "name": "SERV02"
          },
          {
            "log": [
              {
                "name": "SERV02"
              },
              {
                "file-name": "/web/bea_logs/domains/mydom/SERV02/SERV02.log"
              }
            ]
          },
          {
            "listen-port": "12401"
          },
          {
            "listen-address": "mydom.myhost2.mybank.com"
          },
          {
            "server-start": [
              {
                "java-home": "/web/bea/platform1221/jdk"
              }
            ]
          }
        ]
      }
    ]
  }
]
I wish to display all the server names and their respective port numbers.
Below is my failed attempt to display all the server names, viz.:
AdminServer
SERV01
SERV02
My playbook:
tasks:
  - name: Read the JSON file content in a variable
    shell: "cat {{ playbook_dir }}/tmpfiles/{{ Latest_Build_Number }}/testme.json"
    register: result

  - name: Server Names
    set_fact:
      servernames: "{{ jsondata | json_query(jmesquery) }}"
    vars:
      jmesquery: '*.domain.server[*].name'

  - name: Server Names and Ports
    set_fact:
      serverinfo: "{{ jsondata | json_query(jmesquery) }}"
    vars:
      jmesquery: '*.server[*].[name, port]'

  - name: Print all server names
    debug:
      msg: "{{ item }}"
    with_items:
      - "{{ servernames }}"
I also tried the below:
jmesquery: 'domain.server[*].name'
There is no error, but no data in the output either. Output below:
TASK [Print all server names] *********************************************************************************
Monday 21 February 2022 03:07:47 -0600 (0:00:00.129) 0:00:03.590 *******
ok: [localhost] => (item=) => {
    "msg": ""
}
Can you please suggest how I can get the desired data?
There are a lot of solutions; here is one with JMESPath. You could try this:
tasks:
  - name: Read the JSON file content in a variable
    shell: "cat testme.json"
    register: result

  - name: jsondata
    set_fact:
      jsondata: "{{ result.stdout | from_json }}"

  - name: Server Names
    set_fact:
      servernames: "{{ servernames | default([]) + [dict(name=item[0], port=item[1])] }}"
    loop: "{{ jsondata | json_query(jmesquery0) | zip(jsondata | json_query(jmesquery1)) | list }}"
    vars:
      jmesquery0: '[].domain[].server[].name'
      jmesquery1: '[].domain[].server[]."listen-port"'

  - name: debug result
    debug:
      msg: "{{ servernames }}"
result:
ok: [localhost] => {
    "msg": [
        {
            "name": "AdminServer",
            "port": "12400"
        },
        {
            "name": "SERV01",
            "port": "12401"
        },
        {
            "name": "SERV02",
            "port": "12401"
        }
    ]
}
Due to the nature of your data being in lists, you'll have to resort to conditionals in order to get rid of the empty objects and single-item lists that would otherwise pollute your data:
[].domain[?server].server, to get the objects having a property server
[?name].name | [0] to get the name
[?"listen-port"]."listen-port" | [0] to get the port
So, a valid JMESPath query on your data would be
[].domain[?server]
    .server[]
    .{
      name: [?name].name | [0],
      port: [?"listen-port"]."listen-port" | [0]
    }
And in Ansible, with that single JMESPath query, given that the file is on the controller:
- debug:
    var: >-
      lookup(
        'file',
        playbook_dir ~ '/tmpfiles/' ~ Latest_Build_Number ~ '/testme.json'
      )
      | from_json
      | json_query('
          [].domain[?server]
          .server[]
          .{
            name: [?name].name | [0],
            port: [?"listen-port"]."listen-port" | [0]
          }
        ')
  vars:
    Latest_Build_Number: 1
This would yield
TASK [debug] *************************************************************************
ok: [localhost] =>
  ? |-
    lookup(
      'file',
      playbook_dir ~ '/tmpfiles/' ~ Latest_Build_Number ~ '/testme.json'
    ) | from_json | json_query('
      [].domain[?server]
      .server[]
      .{
        name: [?name].name | [0],
        port: [?"listen-port"]."listen-port" | [0]
      }
    ')
  : - name: AdminServer
      port: '12400'
    - name: SERV01
      port: '12401'
    - name: SERV02
      port: '12401'
If the file is on the nodes and not on the controller, you can either slurp the files first or resort to cat as you did, before applying the same JMESPath query.
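For example, here is a minimal slurp-based sketch; the remote path is hypothetical, and note that slurp returns the file content base64-encoded, hence the b64decode:

- slurp:
    src: /tmp/testme.json   # hypothetical path on the managed node
  register: remote_json

- debug:
    msg: >-
      {{ remote_json.content | b64decode | from_json
         | json_query('[].domain[?server].server[]
             .{name: [?name].name | [0],
               port: [?"listen-port"]."listen-port" | [0]}') }}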
Below is my JSON file:
[
  {
    "?xml": {
      "attributes": {
        "encoding": "UTF-8",
        "version": "1.0"
      }
    }
  },
  {
    "domain": [
      {
        "name": "mydom"
      },
      {
        "domain-version": "12.2.1.3.0"
      },
      {
        "server": [
          {
            "name": "AdminServer"
          },
          {
            "ssl": {
              "name": "AdminServer"
            }
          },
          {
            "listen-port": "12400"
          },
          {
            "listen-address": "mydom.host1.bank.com"
          }
        ]
      },
      {
        "server": [
          {
            "name": "myserv1"
          },
          {
            "ssl": [
              {
                "name": "myserv1"
              },
              {
                "login-timeout-millis": "25000"
              }
            ]
          },
          {
            "log": [
              {
                "name": "myserv1"
              },
              {
                "file-name": "/web/bea_logs/domains/mydom/myserv1/myserv1.log"
              }
            ]
          }
        ]
      },
      {
        "server": [
          {
            "name": "myserv2"
          },
          {
            "ssl": {
              "name": "myserv2"
            }
          },
          {
            "reverse-dns-allowed": "false"
          },
          {
            "log": [
              {
                "name": "myserv2"
              },
              {
                "file-name": "/web/bea_logs/domains/mydom/myserv2/myserv2.log"
              }
            ]
          }
        ]
      }
    ]
  }
]
I need to get each log list's name and file-name, like below, using Ansible code:
myserv1_log: "/web/bea_logs/domains/mydom/myserv1/myserv1.log"
myserv2_log: "/web/bea_logs/domains/mydom/myserv2/myserv2.log"
There are two challenges that I'm facing:
server may not always be the 3rd key of the domain array.
the log array may not always be a key in every server array, and in that case nothing should be printed. For example, server AdminServer does not have any log list, while myserv1 and myserv2 do.
I need Ansible code that prints the desired output for this dynamically changing JSON.
Note: server will always be a key in the domain array.
I'm posting with reference to my similar query here: unable to ideally parse a json file in ansible
Kindly suggest.
You just test whether both keys exist:
- hosts: localhost
  gather_facts: no
  vars:
    json: "{{ lookup('file', './file.json') | from_json }}"
  tasks:
    - name: display
      debug:
        msg: "name: {{ servername }} --> filename: {{ filename }}"
      loop: "{{ json[1].domain }}"
      vars:
        servername: "{{ item.server.0.name }}_log"
        filename: "{{ item['server'][2]['log'][1]['file-name'] }}"
      when: item.server is defined and item.server.2.log is defined
result:
TASK [display]
skipping: [localhost] => (item={'name': 'USWL1212MRSHM01'})
skipping: [localhost] => (item={'domain-version': '12.2.1.3.0'})
skipping: [localhost] => (item={'server': [{'name': 'AdminServer'}, {'ssl': {'name': 'AdminServer'}}, {'listen-port': '12400'}, {'listen-address': 'myhost1'}]})
ok: [localhost] => (item={'server': [{'name': 'myserv1'}, {'ssl': {'name': 'myserv1'}}, {'log': [{'name': 'myserv1'}, {'file-name': '/web/bea_logs/domains/mydom/myserv1/myserv1.log'}]}]}) => {
    "msg": "name: myserv1_log --> filename: /web/bea_logs/domains/mydom/myserv1/myserv1.log"
}
ok: [localhost] => (item={'server': [{'name': 'myserv2'}, {'ssl': {'name': 'myserv2'}}, {'log': [{'name': 'myserv2'}, {'file-name': '/web/bea_logs/domains/mydom/myserv2/myserv2.log'}]}]}) => {
    "msg": "name: myserv2_log --> filename: /web/bea_logs/domains/mydom/myserv2/myserv2.log"
}
As you can see, when the condition is not true, the action is skipped...
You could simplify by testing only the key log, because in your case the key log is always nested under the key server:
when: item.server.2.log is defined
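Going one step further (this is a hedged extension, not part of the answer above), here is a sketch that builds the exact <name>_log: <file-name> mapping the question asks for, under the same assumption as the task above that log sits at index 2 of the server list:

- name: build the name_log mapping
  set_fact:
    log_map: >-
      {{ log_map | default({})
         | combine({item.server.0.name ~ '_log': item.server.2.log.1['file-name']}) }}
  loop: "{{ json[1].domain }}"
  when: item.server is defined and item.server.2.log is defined

- debug:
    var: log_map   # e.g. {'myserv1_log': '/web/bea_logs/domains/mydom/myserv1/myserv1.log', ...}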
I have been trying to update the values of a dictionary in a destination JSON file with the values of the dictionary in a source JSON file. Below is an example of the source and destination JSON files:
Source file:
[
  {
    "key": "MYSQL",
    "value": "456"
  },
  {
    "key": "RDS",
    "value": "123"
  }
]
Destination File:
[
  {
    "key": "MYSQL",
    "value": "100"
  },
  {
    "key": "RDS",
    "value": "111"
  },
  {
    "key": "DB1",
    "value": "TestDB"
  },
  {
    "key": "OS",
    "value": "EX1"
  }
]
Expected destination file after running the Ansible playbook:
[
  {
    "key": "MYSQL",
    "value": "**456**"
  },
  {
    "key": "RDS",
    "value": "**123**"
  },
  {
    "key": "DB1",
    "value": "TestDB"
  },
  {
    "key": "OS",
    "value": "EX1"
  }
]
Below is the playbook I have tried so far, but it only updates the value if it is hard-coded:
- hosts: localhost
  tasks:
    - name: Parse JSON
      shell: cat Source.json
      register: result

    - name: Save json data to a variable
      set_fact:
        jsondata: "{{ result.stdout | from_json }}"

    - name: Get key names
      set_fact:
        json_key: "{{ jsondata | map(attribute='key') | flatten }}"

    - name: Get Values names
      set_fact:
        json_value: "{{ jsondata | map(attribute='value') | flatten }}"

    # Trying to update the destination file with only the values provided in source.json
    - name: Replace values in json
      replace:
        path: Destination.json
        regexp: '"{{ item }}": "100"'
        replace: '"{{ item }}": "456"'
      loop:
        - value
The main goal is to update the value in destination.json with the value provided in source.json.
In Ansible, key/value pairs like these tend to be handled with the filters dict2items and items2dict, and your use case can be handled by those filters.
Here is the logic:
Read both files
Convert both lists into dictionaries, with items2dict
Combine the two dictionaries, with the combine filter
Convert the combined dictionary back into a list, with dict2items
Dump the result in JSON back into the file
Given the playbook:
- hosts: localhost
gather_facts: no
tasks:
- shell: cat Source.json
register: source
- shell: cat Destination.json
register: destination
- copy:
content: "{{
destination.stdout | from_json | items2dict |
combine(
source.stdout | from_json | items2dict
) | dict2items | to_nice_json
}}"
dest: Destination.json
We end up with Destination.json containing:
[
  {
    "key": "MYSQL",
    "value": "456"
  },
  {
    "key": "RDS",
    "value": "123"
  },
  {
    "key": "DB1",
    "value": "TestDB"
  },
  {
    "key": "OS",
    "value": "EX1"
  }
]
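To see the round-trip in isolation, here is a minimal sketch of the same items2dict / combine / dict2items chain on a single inline entry (the literals are illustrative only):

- debug:
    msg: >-
      {{ [{'key': 'MYSQL', 'value': '100'}] | items2dict
         | combine({'MYSQL': '456'})
         | dict2items }}

This prints [{'key': 'MYSQL', 'value': '456'}]: items2dict turns the list into {'MYSQL': '100'}, combine overwrites the value, and dict2items restores the list shape.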
Without knowing the structure of your destination file, it's difficult to use a regex.
I suggest you load your destination file into a variable, make the changes, and save the contents of the variable back to the file.
This solution does the job:
- hosts: localhost
  tasks:
    - name: Parse JSON
      set_fact:
        source: "{{ lookup('file', 'source.json') | from_json }}"
        destination: "{{ lookup('file', 'destination.json') | from_json }}"

    - name: create new json
      set_fact:
        json_new: "{{ json_new | d([]) + ([item] if _rec == [] else [_rec]) | flatten }}"
      loop: "{{ destination }}"
      vars:
        _rec: "{{ source | selectattr('key', 'equalto', item.key) }}"

    - name: save new json
      copy:
        content: "{{ json_new | to_nice_json }}"
        dest: dest_new.json
Result -> dest_new.json:
ok: [localhost] => {
    "msg": [
        {
            "key": "MYSQL",
            "value": "456"
        },
        {
            "key": "RDS",
            "value": "123"
        },
        {
            "key": "DB1",
            "value": "TestDB"
        },
        {
            "key": "OS",
            "value": "EX1"
        }
    ]
}
Can somebody please help me parse this JSON? I have this JSON:
{
  "declaration": {
    "ACS-AS3": {
      "ACS": {
        "class": "Application",
        "vs_ubuntu_22": {
          "virtualAddresses": ["10.11.205.167"]
        },
        "pool_ubuntu_22": {
          "members": {
            "addressDiscovery": "static",
            "servicePort": 22
          }
        },
        "vs_ubuntu_443": {
          "virtualAddresses": ["10.11.205.167"],
          "virtualPort": 443
        },
        "pool_ubuntu01_443": {
          "members": [{
            "addressDiscovery": "static",
            "servicePort": 443,
            "serverAddresses": [
              "10.11.205.133",
              "10.11.205.165"
            ]
          }]
        },
        "vs_ubuntu_80": {
          "virtualAddresses": [
            "10.11.205.167"
          ],
          "virtualPort": 80
        },
        "pool_ubuntu01_80": {
          "members": [{
            "addressDiscovery": "static",
            "servicePort": 80,
            "serverAddresses": [
              "10.11.205.133",
              "10.11.205.165"
            ],
            "shareNodes": true
          }],
          "monitors": [{
            "bigip": "/Common/tcp"
          }]
        }
      }
    }
  }
}
and I am trying this playbook:
tasks:
  - name: deploy json file AS3 to F5
    debug:
      msg: "{{ lookup('file', 'parse2.json') }}"
    register: atc_AS3_status
    no_log: true

  - name: Parse json 1
    debug:
      var: atc_AS3_status.msg.declaration | json_query(query_result) | list
    vars:
      query_result: "\"ACS-AS3\".ACS"
      #query_result1: "\"ACS-AS3\".ACS.*.virtualAddresses"
    register: atc_AS3_status1
I got this response:
TASK [Parse json 1] ******************************************************************************************************************************************************************************************
ok: [avx-bigip01.dhl.com] => {
    "atc_AS3_status1": {
        "atc_AS3_status.msg.declaration | json_query(query_result) | list": [
            "class",
            "vs_ubuntu_22",
            "pool_ubuntu_22",
            "vs_ubuntu_443",
            "pool_ubuntu01_443",
            "vs_ubuntu_80",
            "pool_ubuntu01_80"
        ],
        "changed": false,
        "failed": false
    }
}
But I would like to print just the keys that have the key virtualAddresses inside them; that is, if "ACS-AS3".ACS.*.virtualAddresses is defined, print the key.
The result should be:
vs_ubuntu_22
vs_ubuntu_443
vs_ubuntu_80
One way to get the keys of a dict is to use the dict2items filter. This will give vs_ubuntu_22 etc. as "key" and their sub-dicts as "value". Using this, we can conditionally check whether virtualAddresses is defined in the value.
Also, parse2.json can be included as a vars_files entry or with include_vars, rather than having a task to debug and register the result.
The below task, using vars_files in the playbook, should get you the intended keys from the JSON:
vars_files:
  - parse2.json

tasks:
  - name: show atc_status
    debug:
      var: item.key
    loop: "{{ declaration['ACS-AS3']['ACS'] | dict2items }}"
    when: item['value']['virtualAddresses'] is defined
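If you'd rather collect the matching keys into a single list variable instead of one debug line per item, here is a hedged sketch (the variable name vs_keys is illustrative):

- set_fact:
    vs_keys: >-
      {{ declaration['ACS-AS3']['ACS'] | dict2items
         | selectattr('value.virtualAddresses', 'defined')
         | map(attribute='key') | list }}

- debug:
    var: vs_keys   # expected: ['vs_ubuntu_22', 'vs_ubuntu_443', 'vs_ubuntu_80']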
I use the elb_application_lb_info module to get info about my application load balancer. Here is the code I am using for it:
- name: Test playbook
  hosts: tag_elastic_role_logstash
  vars:
    aws_access_key: AKIAXXXXXXXXXXXXXXXX
    aws_secret_key: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
  tasks:
    - name: Gather information about all ELBs
      elb_application_lb_info:
        aws_access_key: AKIXXXXXXXXXXXXXXXXXXX
        aws_secret_key: gG6a586XXXXXXXXXXXXXXXXXX
        region: ap-southeast-2
        names:
          - LoadBalancer
      register: albinfo

    - debug:
        msg: "{{ albinfo }}"
This is working fine and I got the following output:
"load_balancers": [
{
"idle_timeout_timeout_seconds": "60",
"routing_http2_enabled": "true",
"created_time": "2021-01-26T23:58:27.890000+00:00",
"access_logs_s3_prefix": "",
"security_groups": [
"sg-094c894246db1bd92"
],
"waf_fail_open_enabled": "false",
"availability_zones": [
{
"subnet_id": "subnet-0195c9c0df024d221",
"zone_name": "ap-southeast-2b",
"load_balancer_addresses": []
},
{
"subnet_id": "subnet-071060fde585476e0",
"zone_name": "ap-southeast-2c",
"load_balancer_addresses": []
},
{
"subnet_id": "subnet-0d5f856afab8f0eec",
"zone_name": "ap-southeast-2a",
"load_balancer_addresses": []
}
],
"access_logs_s3_bucket": "",
"deletion_protection_enabled": "false",
"load_balancer_name": "LoadBalancer",
"state": {
"code": "active"
},
"scheme": "internet-facing",
"type": "application",
"load_balancer_arn": "arn:aws:elasticloadbalancing:ap-southeast-2:117557247443:loadbalancer/app/LoadBalancer/27cfc970d48501fd",
"access_logs_s3_enabled": "false",
"tags": {
"Name": "loadbalancer_test",
"srg:function": "Storage",
"srg:owner": "ISCloudPlatforms#superretailgroup.com",
"srg:cost-centre": "G110",
"srg:managed-by": "ISCloudPlatforms#superretailgroup.com",
"srg:environment": "TST"
},
"routing_http_desync_mitigation_mode": "defensive",
"canonical_hosted_zone_id": "Z1GM3OXH4ZPM65",
"dns_name": "LoadBalancer-203283612.ap-southeast-2.elb.amazonaws.com",
"ip_address_type": "ipv4",
"listeners": [
{
"default_actions": [
{
"target_group_arn": "arn:aws:elasticloadbalancing:ap-southeast-2:117557247443:targetgroup/test-ALBID-W04X8DBT450Q/c999ac1cda7b1d4a",
"type": "forward",
"forward_config": {
"target_group_stickiness_config": {
"enabled": false
},
"target_groups": [
{
"target_group_arn": "arn:aws:elasticloadbalancing:ap-southeast-2:117557247443:targetgroup/test-ALBID-W04X8DBT450Q/c999ac1cda7b1d4a",
"weight": 1
}
]
}
}
],
"protocol": "HTTP",
"rules": [
{
"priority": "default",
"is_default": true,
"rule_arn": "arn:aws:elasticloadbalancing:ap-southeast-2:117557247443:listener-rule/app/LoadBalancer/27cfc970d48501fd/671ad3428c35c834/5b5953a49a886c03",
"conditions": [],
"actions": [
{
"target_group_arn": "arn:aws:elasticloadbalancing:ap-southeast-2:117557247443:targetgroup/test-ALBID-W04X8DBT450Q/c999ac1cda7b1d4a",
"type": "forward",
"forward_config": {
"target_group_stickiness_config": {
"enabled": false
},
"target_groups": [
{
"target_group_arn": "arn:aws:elasticloadbalancing:ap-southeast-2:117557247443:targetgroup/test-ALBID-W04X8DBT450Q/c999ac1cda7b1d4a",
"weight": 1
}
]
}
}
]
}
],
"listener_arn": "arn:aws:elasticloadbalancing:ap-southeast-2:117557247443:listener/app/LoadBalancer/27cfc970d48501fd/671ad3428c35c834",
"load_balancer_arn": "arn:aws:elasticloadbalancing:ap-southeast-2:117557247443:loadbalancer/app/LoadBalancer/27cfc970d48501fd",
"port": 9200
}
],
"vpc_id": "vpc-0016dcdf5abe4fef0",
"routing_http_drop_invalid_header_fields_enabled": "false"
}
]
I need to fetch "dns_name", which is the DNS name of the load balancer, and pass it to another play as a variable.
I tried with json_query but got an error. Here is the code:
- name: save the Json data to a Variable as a Fact
  set_fact:
    jsondata: "{{ albinfo.stdout | from_json }}"

- name: Get ALB dns name
  set_fact:
    dns_name: "{{ jsondata | json_query(jmesquery) }}"
  vars:
    jmesquery: 'load_balancers.dns_name'

- debug:
    msg: "{{ dns_name }}"
And here is the error:
"msg": "The task includes an option with an undefined variable. The error was: Unable to look up a name or access an attribute in template string ({{ albinfo.stdout | from_json }}).\nMake sure your variable name does not contain invalid characters like '-': the JSON object must be str, bytes or bytearray
Any idea how to extract "dns_name" from the json above?
Here is the way to get the dns_name from the above JSON output. Note that albinfo is a registered module result, not shell output, so there is no stdout to pass through from_json; you can query the variable directly:
- name: Get Application Load Balancer DNS Name
  set_fact:
    rezultat: "{{ albinfo | json_query('load_balancers[*].dns_name') }}"

- debug:
    msg: "{{ rezultat }}"
I have the following Ansible task:
tasks:
  - name: ensure instances are running
    ec2:
      aws_access_key: "{{ aws_access_key }}"
      aws_secret_key: "{{ aws_secret_key }}"
      ...
      user_data: "{{ lookup('template', 'userdata.txt.j2') }}"
    register: ec2_result

  - debug:
      msg: "{{ ec2_result }}"

  - set_fact:
      win_instance_id: "{{ ec2_result | json_query('tagged_instances[*].id') }}"
The output:
TASK [debug] ***************
ok: [localhost] => {
    "msg": {
        "changed": false,
        "failed": false,
        "instance_ids": null,
        "instances": [],
        "tagged_instances": [
            {
                "ami_launch_index": "0",
                "architecture": "x86_64",
                "block_device_mapping": {
                    "/dev/sda1": {
                        "delete_on_termination": true,
                        "status": "attached",
                        "volume_id": "vol-01f217e489c681211"
                    }
                },
                "dns_name": "",
                "ebs_optimized": false,
                "groups": {
                    "sg-c63822ac": "WinRM RDP"
                },
                "hypervisor": "xen",
                "id": "i-019c03c3e3929f76e",
                "image_id": "ami-3204995d",
                ...
                "tags": {
                    "Name": "Student01 _ Jumphost"
                },
                "tenancy": "default",
                "virtualization_type": "hvm"
            }
        ]
    }
}
TASK [set_fact] ****************
ok: [localhost]
TASK [debug] ******************
ok: [localhost] => {
    "msg": "The Windows Instance ID is: [u'i-019c03c3e3929f76e']"
}
As you can see, the instance ID is correct, but not well formatted. Is there a way to convert this output into "human-readable" output? Or is there a better way to parse the instance ID from the ec2 task output?
Thanks!
It's not a non-human-readable format, but a list object in Python notation, because your query returns a list.
If you want a string, you should pass it through the first filter:
win_instance_id: "{{ ec2_result | json_query('tagged_instances[*].id') | first }}"
You can also access the value directly, without json_query ([0] refers to the first element of a list):
win_instance_id: "{{ ec2_result.tagged_instances[0].id }}"