Merge two JSON arrays based on their keys

How can I merge two JSON arrays that share the same keys, and add a third item, pog_id, from the input JSON to the output JSON file? I have tried the code below, but it creates two separate arrays inside a key instead of merging the values into a single array.
mergedobject.json
[
    {
        "name": "ALL_DMZ",
        "objectIds": [
            "29570",
            "29571"
        ],
        "orgid": "777777",
        "pog_id": "333333"
    },
    {
        "name": "ALL_DMZ",
        "objectIds": [
            "729548",
            "729549",
            "295568"
        ],
        "orgid": "777777",
        "pog_id": "333333"
    }
]
Playbook
- set_fact:
    output: "{{ output|d([]) + [{'orgid': item.0,
                                 'objectIds': item.1|
                                              map(attribute='objectIds')|
                                              list}] }}"
  loop: "{{ mergedobject|groupby('name') }}"
Current Output
[
    {
        "name": "ALL_DMZ",
        "objectIds": [
            [
                "29570",
                "29571"
            ],
            [
                "729548",
                "729549",
                "295568"
            ]
        ],
        "orgid": "777777"
    }
]
Expected Output
[
    {
        "name": "ALL_DMZ",
        "objectIds": [
            "29570",
            "29571",
            "729548",
            "729549",
            "295568"
        ],
        "orgid": "777777",
        "pog_id": "333333"
    }
]

Given the data
mergedobject:
  - name: ALL_DMZ
    objectIds: ['29570', '29571']
    orgid: '777777'
    pog_id: '333333'
  - name: ALL_DMZ
    objectIds: ['729548', '729549', '295568']
    orgid: '777777'
    pog_id: '333333'
combine the items in the list, appending the lists held in the attributes:
output: "{{ mergedobject|combine(list_merge='append') }}"
gives
output:
  name: ALL_DMZ
  objectIds: ['29570', '29571', '729548', '729549', '295568']
  orgid: '777777'
  pog_id: '333333'
You can put the result into a list
output: "[{{ mergedobject|combine(list_merge='append') }}]"
gives what you want
output:
  - name: ALL_DMZ
    objectIds: ['29570', '29571', '729548', '729549', '295568']
    orgid: '777777'
    pog_id: '333333'
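For readers who want to trace the logic outside of Jinja2, here is a sketch of the same group-and-append merge in plain Python (the data literal is copied from the question; `merge_by_name` is a hypothetical helper name, not an Ansible function):

```python
import json
from itertools import groupby

mergedobject = [
    {"name": "ALL_DMZ", "objectIds": ["29570", "29571"],
     "orgid": "777777", "pog_id": "333333"},
    {"name": "ALL_DMZ", "objectIds": ["729548", "729549", "295568"],
     "orgid": "777777", "pog_id": "333333"},
]

def merge_by_name(items):
    """Group records by 'name' and concatenate their objectIds lists,
    keeping the scalar attributes (orgid, pog_id) from the first record,
    mirroring what combine(list_merge='append') does."""
    out = []
    for name, group in groupby(sorted(items, key=lambda d: d["name"]),
                               key=lambda d: d["name"]):
        group = list(group)
        rec = dict(group[0])  # scalar attributes from the first record
        rec["objectIds"] = [oid for d in group for oid in d["objectIds"]]
        out.append(rec)
    return out

print(json.dumps(merge_by_name(mergedobject), indent=2))
```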


Need to replace value in dictionary in destination file with the value in the dictionary in source file

I have been trying to update the values in a dictionary in the destination JSON with the values from the dictionary in the source JSON file. Below is an example of the source and destination JSON files:
Source file:
[
{
"key": "MYSQL",
"value": "456"
},
{
"key": "RDS",
"value": "123"
}
]
Destination File:
[
{
"key": "MYSQL",
"value": "100"
},
{
"key": "RDS",
"value": "111"
},
{
"key": "DB1",
"value": "TestDB"
},
{
"key": "OS",
"value": "EX1"
}
]
Expectation in destination file after running Ansible playbook:
[
{
"key": "MYSQL",
"value": "**456**"
},
{
"key": "RDS",
"value": "**123**"
},
{
"key": "DB1",
"value": "TestDB"
},
{
"key": "OS",
"value": "EX1"
}
]
Below is the playbook I have tried so far, but this only updates the value if it is hard coded:
- hosts: localhost
  tasks:
    - name: Parse JSON
      shell: cat Source.json
      register: result
    - name: Save json data to a variable
      set_fact:
        jsondata: "{{ result.stdout | from_json }}"
    - name: Get key names
      set_fact:
        json_key: "{{ jsondata | map(attribute='key') | flatten }}"
    - name: Get Values names
      set_fact:
        json_value: "{{ jsondata | map(attribute='value') | flatten }}"
    # Trying to update the destination file with only the values provided in source.json
    - name: Replace values in json
      replace:
        path: Destination.json
        regexp: '"{{ item }}": "100"'
        replace: '"{{ item }}": "456"'
      loop:
        - value
The main goal is to update the value in destination.json with the value provided in source.json.
In Ansible, key/value pairs tend to be handled with the filters dict2items and items2dict, and your use case can be handled by those filters.
Here is the logic:
Read both files
Convert both lists into dictionaries, with items2dict
Combine the two dictionaries, with the combine filter
Convert the dictionary back into a list, with dict2items
Dump the result as JSON back into the file
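The same pipeline can be traced in plain Python, where items2dict corresponds to a dict comprehension over the key/value records and dict2items to the reverse (a sketch using the sample data from the question):

```python
source = [{"key": "MYSQL", "value": "456"},
          {"key": "RDS", "value": "123"}]
destination = [{"key": "MYSQL", "value": "100"},
               {"key": "RDS", "value": "111"},
               {"key": "DB1", "value": "TestDB"},
               {"key": "OS", "value": "EX1"}]

# items2dict: turn each [{'key': k, 'value': v}, ...] list into a plain dict
dst_map = {d["key"]: d["value"] for d in destination}
src_map = {d["key"]: d["value"] for d in source}

# combine: source entries overwrite destination entries with the same key
dst_map.update(src_map)

# dict2items: back to the list-of-records shape before dumping to JSON
result = [{"key": k, "value": v} for k, v in dst_map.items()]
```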
Given the playbook:
- hosts: localhost
  gather_facts: no
  tasks:
    - shell: cat Source.json
      register: source
    - shell: cat Destination.json
      register: destination
    - copy:
        content: "{{
            destination.stdout | from_json | items2dict |
            combine(
              source.stdout | from_json | items2dict
            ) | dict2items | to_nice_json
          }}"
        dest: Destination.json
We end up with Destination.json containing:
[
{
"key": "MYSQL",
"value": "456"
},
{
"key": "RDS",
"value": "123"
},
{
"key": "DB1",
"value": "TestDB"
},
{
"key": "OS",
"value": "EX1"
}
]
Without knowing the structure of your destination file, it is difficult to use a regex.
I suggest loading your destination file into a variable, making the changes, and saving the content of the variable back to the file.
This solution does the job:
- hosts: localhost
  tasks:
    - name: Parse JSON
      set_fact:
        source: "{{ lookup('file', 'source.json') | from_json }}"
        destination: "{{ lookup('file', 'destination.json') | from_json }}"
    - name: create new json
      set_fact:
        json_new: "{{ json_new | d([]) + ([item] if _rec == [] else [_rec]) | flatten }}"
      loop: "{{ destination }}"
      vars:
        _rec: "{{ source | selectattr('key', 'equalto', item.key) }}"
    - name: save new json
      copy:
        content: "{{ json_new | to_nice_json }}"
        dest: dest_new.json
Result (shown here via debug; the same content is written to dest_new.json):
ok: [localhost] => {
"msg": [
{
"key": "MYSQL",
"value": "456"
},
{
"key": "RDS",
"value": "123"
},
{
"key": "DB1",
"value": "TestDB"
},
{
"key": "OS",
"value": "EX1"
}
]
}
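The selectattr-based loop above boils down to a simple keyed lookup, which can be sketched in plain Python as (data copied from the question):

```python
source = [{"key": "MYSQL", "value": "456"},
          {"key": "RDS", "value": "123"}]
destination = [{"key": "MYSQL", "value": "100"},
               {"key": "RDS", "value": "111"},
               {"key": "DB1", "value": "TestDB"},
               {"key": "OS", "value": "EX1"}]

# Index the source records by key, like selectattr('key', 'equalto', ...)
by_key = {d["key"]: d for d in source}

# For each destination record, take the matching source record if one
# exists, otherwise keep the destination record unchanged
json_new = [by_key.get(item["key"], item) for item in destination]
```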

Ansible: print key if a key inside the value is defined

Can somebody please help me with parsing this JSON?
I have this JSON:
{
"declaration": {
"ACS-AS3": {
"ACS": {
"class": "Application",
"vs_ubuntu_22": {
"virtualAddresses": ["10.11.205.167"]
},
"pool_ubuntu_22": {
"members": {
"addressDiscovery": "static",
"servicePort": 22
}
},
"vs_ubuntu_443": {
"virtualAddresses": ["10.11.205.167"],
"virtualPort": 443
},
"pool_ubuntu01_443": {
"members": [{
"addressDiscovery": "static",
"servicePort": 443,
"serverAddresses": [
"10.11.205.133",
"10.11.205.165"
]
}]
},
"vs_ubuntu_80": {
"virtualAddresses": [
"10.11.205.167"
],
"virtualPort": 80
},
"pool_ubuntu01_80": {
"members": [{
"addressDiscovery": "static",
"servicePort": 80,
"serverAddresses": [
"10.11.205.133",
"10.11.205.165"
],
"shareNodes": true
}],
"monitors": [{
"bigip": "/Common/tcp"
}]
}
}
}
}
}
and I am trying this playbook
tasks:
  - name: deploy json file AS3 to F5
    debug:
      msg: "{{ lookup('file', 'parse2.json') }}"
    register: atc_AS3_status
    no_log: true
  - name: Parse json 1
    debug:
      var: atc_AS3_status.msg.declaration | json_query(query_result) | list
    vars:
      query_result: "\"ACS-AS3\".ACS"
      #query_result1: "\"ACS-AS3\".ACS.*.virtualAddresses"
    register: atc_AS3_status1
I got this response
TASK [Parse json 1] ******************************************************************************************************************************************************************************************
ok: [avx-bigip01.dhl.com] => {
"atc_AS3_status1": {
"atc_AS3_status.msg.declaration | json_query(query_result) | list": [
"class",
"vs_ubuntu_22",
"pool_ubuntu_22",
"vs_ubuntu_443",
"pool_ubuntu01_443",
"vs_ubuntu_80",
"pool_ubuntu01_80"
],
"changed": false,
"failed": false
}
}
but I would like to print just the keys that contain the key virtualAddresses inside: if "ACS-AS3".ACS.*.virtualAddresses is defined, then print the key.
The result should be:
vs_ubuntu_22
vs_ubuntu_443
vs_ubuntu_80
One way to get the keys of a dict is to use the dict2items filter. This will give vs_ubuntu_22 etc. as "key" and their sub-dicts as "value". Using this, we can conditionally check whether virtualAddresses is defined in the values.
Also, parse2.json can be included as a vars_file or with include_vars, rather than having a task to debug and register the result.
The below task, using vars_files in the playbook, should get you the intended keys from the JSON:
vars_files:
  - parse2.json
tasks:
  - name: show atc_status
    debug:
      var: item.key
    loop: "{{ declaration['ACS-AS3']['ACS'] | dict2items }}"
    when: item['value']['virtualAddresses'] is defined
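The dict2items loop with the when condition is equivalent to filtering the keys of a dict. A plain-Python sketch of the same check, using a trimmed copy of the JSON from the question:

```python
declaration = {
    "ACS-AS3": {
        "ACS": {
            "class": "Application",
            "vs_ubuntu_22": {"virtualAddresses": ["10.11.205.167"]},
            "pool_ubuntu_22": {"members": {"addressDiscovery": "static",
                                           "servicePort": 22}},
            "vs_ubuntu_443": {"virtualAddresses": ["10.11.205.167"],
                              "virtualPort": 443},
            "pool_ubuntu01_443": {"members": []},
            "vs_ubuntu_80": {"virtualAddresses": ["10.11.205.167"],
                             "virtualPort": 80},
            "pool_ubuntu01_80": {"members": []},
        }
    }
}

acs = declaration["ACS-AS3"]["ACS"]
# Keep only the keys whose value is a dict containing 'virtualAddresses',
# like the `when: item['value']['virtualAddresses'] is defined` condition
keys = [k for k, v in acs.items()
        if isinstance(v, dict) and "virtualAddresses" in v]
```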

Count number of json objects in output

I am using Ansible's vmware_cluster_info module, which returns the following JSON. I need to be able to count the number of hosts in a cluster. Cluster names will change.
{
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": false,
"clusters": {
"Cluster1": {
"drs_default_vm_behavior": "fullyAutomated",
"ha_restart_priority": [
"medium"
],
"ha_vm_failure_interval": [
50
],
"ha_vm_max_failure_window": [
0
],
"ha_vm_max_failures": [
10
],
"ha_vm_min_up_time": [
90
],
"ha_vm_monitoring": "vmMonitoringOnly",
"ha_vm_tools_monitoring": [
"vmAndAppMonitoring"
],
"hosts": [
{
"folder": "/Datacenter/host/Cluster1",
"name": "host1"
},
{
"folder": "/Datacenter/host/Cluster1",
"name": "host2"
},
{
"folder": "/Datacenter/host/Cluster1",
"name": "host3"
},
{
"folder": "/Datacenter/host/Cluster1",
"name": "host4"
}
],
"resource_summary": {
"cpuCapacityMHz": 144000
},
"tags": [],
"vsan_auto_claim_storage": false
}
},
"invocation": {
"module_args": {
"cluster_name": "Cluster1"
}
}
}
I have tried:
- debug:
    msg: "{{ cluster_info.clusters.[].hosts.name | length }}"
- debug:
    msg: "{{ cluster_info.clusters.*.hosts.name | length }}"
Both give me a template error while templating string. Also, any suggestions on tutorials etc. for learning how to parse JSON would be appreciated; it seems to be a difficult topic for me. Any ideas?
One approach similar to what you were trying is to use json_query. I understand you want to print the count of hosts, so the .name is not needed in your code:
- name: print count of hosts
  debug:
    var: cluster_info.clusters | json_query('*.hosts') | first | length
json_query returns a list; this is why we pass it through first before length.
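In plain Python the same count, including the "first" step that unwraps the list produced by the json_query projection, looks like this (data trimmed to the relevant fields):

```python
cluster_info = {
    "clusters": {
        "Cluster1": {
            "hosts": [
                {"folder": "/Datacenter/host/Cluster1", "name": "host1"},
                {"folder": "/Datacenter/host/Cluster1", "name": "host2"},
                {"folder": "/Datacenter/host/Cluster1", "name": "host3"},
                {"folder": "/Datacenter/host/Cluster1", "name": "host4"},
            ]
        }
    }
}

# json_query('*.hosts') collects the hosts list of every cluster...
all_hosts = [c["hosts"] for c in cluster_info["clusters"].values()]
# ...then 'first' unwraps the single-cluster case, and 'length' counts
host_count = len(all_hosts[0])
```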

Extract value from Ansible task output and create a variable from it

I use elb_application_lb_info module to get info about my application load balancer. Here is the code I am using for it:
- name: Test playbook
  hosts: tag_elastic_role_logstash
  vars:
    aws_access_key: AKIXXXXXXXXXXXXXXXXXXX
    aws_secret_key: gG6a586XXXXXXXXXXXXXXXXXX
  tasks:
    - name: Gather information about all ELBs
      elb_application_lb_info:
        aws_access_key: AKIXXXXXXXXXXXXXXXXXXX
        aws_secret_key: gG6a586XXXXXXXXXXXXXXXXXX
        region: ap-southeast-2
        names:
          - LoadBalancer
      register: albinfo
    - debug:
        msg: "{{ albinfo }}"
This is working fine and I got the following output:
"load_balancers": [
{
"idle_timeout_timeout_seconds": "60",
"routing_http2_enabled": "true",
"created_time": "2021-01-26T23:58:27.890000+00:00",
"access_logs_s3_prefix": "",
"security_groups": [
"sg-094c894246db1bd92"
],
"waf_fail_open_enabled": "false",
"availability_zones": [
{
"subnet_id": "subnet-0195c9c0df024d221",
"zone_name": "ap-southeast-2b",
"load_balancer_addresses": []
},
{
"subnet_id": "subnet-071060fde585476e0",
"zone_name": "ap-southeast-2c",
"load_balancer_addresses": []
},
{
"subnet_id": "subnet-0d5f856afab8f0eec",
"zone_name": "ap-southeast-2a",
"load_balancer_addresses": []
}
],
"access_logs_s3_bucket": "",
"deletion_protection_enabled": "false",
"load_balancer_name": "LoadBalancer",
"state": {
"code": "active"
},
"scheme": "internet-facing",
"type": "application",
"load_balancer_arn": "arn:aws:elasticloadbalancing:ap-southeast-2:117557247443:loadbalancer/app/LoadBalancer/27cfc970d48501fd",
"access_logs_s3_enabled": "false",
"tags": {
"Name": "loadbalancer_test",
"srg:function": "Storage",
"srg:owner": "ISCloudPlatforms#superretailgroup.com",
"srg:cost-centre": "G110",
"srg:managed-by": "ISCloudPlatforms#superretailgroup.com",
"srg:environment": "TST"
},
"routing_http_desync_mitigation_mode": "defensive",
"canonical_hosted_zone_id": "Z1GM3OXH4ZPM65",
"dns_name": "LoadBalancer-203283612.ap-southeast-2.elb.amazonaws.com",
"ip_address_type": "ipv4",
"listeners": [
{
"default_actions": [
{
"target_group_arn": "arn:aws:elasticloadbalancing:ap-southeast-2:117557247443:targetgroup/test-ALBID-W04X8DBT450Q/c999ac1cda7b1d4a",
"type": "forward",
"forward_config": {
"target_group_stickiness_config": {
"enabled": false
},
"target_groups": [
{
"target_group_arn": "arn:aws:elasticloadbalancing:ap-southeast-2:117557247443:targetgroup/test-ALBID-W04X8DBT450Q/c999ac1cda7b1d4a",
"weight": 1
}
]
}
}
],
"protocol": "HTTP",
"rules": [
{
"priority": "default",
"is_default": true,
"rule_arn": "arn:aws:elasticloadbalancing:ap-southeast-2:117557247443:listener-rule/app/LoadBalancer/27cfc970d48501fd/671ad3428c35c834/5b5953a49a886c03",
"conditions": [],
"actions": [
{
"target_group_arn": "arn:aws:elasticloadbalancing:ap-southeast-2:117557247443:targetgroup/test-ALBID-W04X8DBT450Q/c999ac1cda7b1d4a",
"type": "forward",
"forward_config": {
"target_group_stickiness_config": {
"enabled": false
},
"target_groups": [
{
"target_group_arn": "arn:aws:elasticloadbalancing:ap-southeast-2:117557247443:targetgroup/test-ALBID-W04X8DBT450Q/c999ac1cda7b1d4a",
"weight": 1
}
]
}
}
]
}
],
"listener_arn": "arn:aws:elasticloadbalancing:ap-southeast-2:117557247443:listener/app/LoadBalancer/27cfc970d48501fd/671ad3428c35c834",
"load_balancer_arn": "arn:aws:elasticloadbalancing:ap-southeast-2:117557247443:loadbalancer/app/LoadBalancer/27cfc970d48501fd",
"port": 9200
}
],
"vpc_id": "vpc-0016dcdf5abe4fef0",
"routing_http_drop_invalid_header_fields_enabled": "false"
}
]
I need to fetch "dns_name", which is the DNS name of the load balancer, and pass it to another play as a variable.
I tried with json_query but got an error. Here is the code:
- name: save the Json data to a Variable as a Fact
  set_fact:
    jsondata: "{{ albinfo.stdout | from_json }}"
- name: Get ALB dns name
  set_fact:
    dns_name: "{{ jsondata | json_query(jmesquery) }}"
  vars:
    jmesquery: 'load_balancers.dns_name'
- debug:
    msg: "{{ dns_name }}"
And here is the error:
"msg": "The task includes an option with an undefined variable. The error was: Unable to look up a name or access an attribute in template string ({{ albinfo.stdout | from_json }}).\nMake sure your variable name does not contain invalid characters like '-': the JSON object must be str, bytes or bytearray
Any idea how to extract "dns_name" from the json above?
Here is the way to get the dns_name from the JSON output above. Note that albinfo is a registered module result, not shell output, so it has no stdout attribute; use the variable directly:
- name: Get Application Load Balancer DNS Name
  set_fact:
    rezultat: "{{ albinfo | json_query('load_balancers[*].dns_name') }}"
- debug:
    msg: "{{ rezultat }}"
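In plain Python, the projection load_balancers[*].dns_name is just a comprehension over the registered result (data trimmed to the relevant fields):

```python
albinfo = {
    "load_balancers": [
        {"dns_name": "LoadBalancer-203283612.ap-southeast-2.elb.amazonaws.com",
         "load_balancer_name": "LoadBalancer"}
    ]
}

# load_balancers[*].dns_name projects dns_name across every load balancer,
# always yielding a list (here with a single element)
dns_names = [lb["dns_name"] for lb in albinfo["load_balancers"]]
```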

Ansible: Get Particular value from JSON

I am getting the following output in Ansible, in JSON format, and need to fetch the Request ID from it. Please help:
ok: [localhost] => {
"op.content": {
"_links": {
"self": [
{
"href": "url123"
}
]
},
"entries": [
{
"_links": {
"self": [
{
"href": "url456"
}
]
},
"values": {
"Request ID": "abc|abc",
"Status": "Assigned"
}
}
]
}
}
You can use the json_query filter to get the Request ID value from your JSON object. Here is an example of how you could parse it. In my example, I am getting the JSON object from a file and storing it in a variable called op_content. On the json_query task, take notice of how you can escape a key that has a dot (.) inside:
---
- name: Get "Request ID" from JSON
  hosts: all
  connection: local
  gather_facts: no
  vars_files:
    - ./secret.yml
  vars:
    op_content_file: ./files/op_content.json
  tasks:
    - name: Read JSON file
      set_fact:
        op_content: '{{ lookup("file", op_content_file) }}'
    - name: Get RequestID from op_content variable
      set_fact:
        request_id: "{{ op_content | json_query('\"op.content\".entries[0].values.\"Request ID\"') }}"
I hope it helps
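For comparison, plain-Python access to the same nested value needs no escaping at all, since the dotted key is just an ordinary dictionary key (data trimmed from the question):

```python
op_content = {
    "op.content": {
        "entries": [
            {"values": {"Request ID": "abc|abc", "Status": "Assigned"}}
        ]
    }
}

# The dot in "op.content" and the space in "Request ID" are plain
# characters in Python dict keys; only JMESPath needs them quoted.
request_id = op_content["op.content"]["entries"][0]["values"]["Request ID"]
```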