I have JSON output which I save in a registered variable output_bgp_raw. The original output (before I save it in the variable) looks like this:
{
"as": 65011,
"bestPath": {
"multiPathRelax": "false"
},
"dynamicPeers": 0,
"peerCount": 2,
"peerGroupCount": 2,
"peerGroupMemory": 128,
"peerMemory": 42352,
"peers": {
"swp1": {
"hostname": "Spine-01",
"idType": "interface",
"inq": 0,
"msgRcvd": 140386,
"msgSent": 140432,
"outq": 0,
"peerUptime": "4d17h35m",
"peerUptimeEstablishedEpoch": 1643304458,
"peerUptimeMsec": 408925000,
"pfxRcd": 17,
"pfxSnt": 27,
"prefixReceivedCount": 17,
"remoteAs": 65001,
"state": "Established",
"tableVersion": 0,
"version": 4
},
"swp2": {
"hostname": "Spine-02",
"idType": "interface",
"inq": 0,
"msgRcvd": 140383,
"msgSent": 140430,
"outq": 0,
"peerUptime": "4d17h35m",
"peerUptimeEstablishedEpoch": 1643304466,
"peerUptimeMsec": 408917000,
"pfxRcd": 17,
"pfxSnt": 27,
"prefixReceivedCount": 17,
"remoteAs": 65001,
"state": "Established",
"tableVersion": 0,
"version": 4
}
},
"ribCount": 19,
"ribMemory": 3496,
"routerId": "10.0.0.3",
"tableVersion": 0,
"totalPeers": 2,
"vrfId": 0,
"vrfName": "default"
}
I want to save the JSON output to a JSON file in exactly the same state (without adding or removing anything). I'm using a task with the copy module and the to_nice_json filter. The tasks look like this:
tasks:
- name: VERIFICATION // BGP STATUS
nclu:
commands:
- show bgp l2vpn evpn summary json
register: output_bgp_raw
- name: VERIFICATION // SAVE BGP VARIABLE TO FILE
local_action:
module: copy
content: "{{ output_bgp_raw.msg | to_nice_json }}"
dest: "containers/verification/{{ instance_name }}-bgp-raw.json"
However, the text in the resulting JSON file becomes single-line JSON and is parsed differently (there are " characters at the beginning and end, and unreadable \n escapes instead of line breaks). The JSON file can be seen below:
"{\n \"routerId\":\"10.0.0.3\",\n \"as\":65011,\n \"vrfId\":0,\n \"vrfName\":\"default\",\n \"tableVersion\":0,\n \"ribCount\":19,\n \"ribMemory\":3496,\n \"peerCount\":2,\n \"peerMemory\":42352,\n \"peerGroupCount\":2,\n \"peerGroupMemory\":128,\n \"peers\":{\n \"swp1\":{\n \"hostname\":\"Spine-01\",\n \"remoteAs\":65001,\n \"version\":4,\n \"msgRcvd\":141122,\n \"msgSent\":141171,\n \"tableVersion\":0,\n \"outq\":0,\n \"inq\":0,\n \"peerUptime\":\"4d18h11m\",\n \"peerUptimeMsec\":411069000,\n \"peerUptimeEstablishedEpoch\":1643304458,\n \"prefixReceivedCount\":17,\n \"pfxRcd\":17,\n \"pfxSnt\":27,\n \"state\":\"Established\",\n \"idType\":\"interface\"\n },\n \"swp2\":{\n \"hostname\":\"Spine-02\",\n \"remoteAs\":65001,\n \"version\":4,\n \"msgRcvd\":141118,\n \"msgSent\":141169,\n \"tableVersion\":0,\n \"outq\":0,\n \"inq\":0,\n \"peerUptime\":\"4d18h11m\",\n \"peerUptimeMsec\":411061000,\n \"peerUptimeEstablishedEpoch\":1643304466,\n \"prefixReceivedCount\":17,\n \"pfxRcd\":17,\n \"pfxSnt\":27,\n \"state\":\"Established\",\n \"idType\":\"interface\"\n }\n },\n \"totalPeers\":2,\n \"dynamicPeers\":0,\n \"bestPath\":{\n \"multiPathRelax\":\"false\"\n }\n} \n"
I want the resulting JSON file to be exactly the same as the actual output. Can you please suggest what changes are needed?
Your output is a string representation of JSON data (transformed automatically to JSON output by Jinja2 templating when you debug it... but that's another story).
You need to parse that string as JSON into a variable and then serialize it as nice JSON into your file.
(Note: do yourself a favor and use delegate_to: localhost rather than the old and not very readable local_action)
- name: VERIFICATION // SAVE BGP VARIABLE TO FILE
copy:
content: "{{ output_bgp_raw.msg | from_json | to_nice_json }}"
dest: "containers/verification/{{ instance_name }}-bgp-raw.json"
delegate_to: localhost
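For comparison, here is a minimal Python sketch of the same round trip outside Ansible (the sample string and filename are just illustrations): the registered value is a JSON string, so it has to be parsed back into a data structure before it can be pretty-printed, which is exactly what from_json | to_nice_json does.
import json

raw = '{"routerId": "10.0.0.3", "as": 65011}'  # what output_bgp_raw.msg holds: a plain string
data = json.loads(raw)                         # equivalent of the from_json filter
with open('bgp-raw.json', 'w') as f:           # illustrative destination path
    f.write(json.dumps(data, indent=4))        # equivalent of the to_nice_json filter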
I have a YAML file as follows:
api: v1
hostname: abc
metadata:
name: test
annotations: {
"ip" : "1.1.1.1",
"login" : "fad-login",
"vip" : "1.1.1.1",
"interface" : "port1",
"port" : "443"
}
I am trying to read this data from a file, only replace the values of ip and vip and write it back to the file.
What I tried is:
open ("test.yaml", w) as f:
yaml.dump(object, f) #this does not help me since it converts the entire file to YAML
json.dump() does not work either, as it converts the entire file to JSON. The output needs to keep the same format, but with the values updated. How can I do so?
What you have is not YAML with embedded JSON; it is YAML with the value for annotations written
in YAML flow style (which is a superset of JSON and thus closely resembles it).
This would be
YAML with embedded JSON:
api: v1
hostname: abc
metadata:
name: test
annotations: |
{
"ip" : "1.1.1.1",
"login" : "fad-login",
"vip" : "1.1.1.1",
"interface" : "port1",
"port" : "443"
}
Here the value for annotations is a string that you can hand to a JSON parser.
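As a quick demonstration (the filename is hypothetical), a sketch of handing that string to a JSON parser after loading the YAML:
import json
from pathlib import Path

import ruamel.yaml

yaml = ruamel.yaml.YAML()
data = yaml.load(Path('embedded.yaml'))  # hypothetical file holding the block-scalar YAML above
annotations = json.loads(data['metadata']['annotations'])  # the block scalar is a plain string
print(annotations['ip'])  # -> 1.1.1.1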
You can just load the file, modify it and dump. This will change the layout
of the flow-style part, but that will not influence any following parsers:
import sys
from pathlib import Path

import ruamel.yaml

file_in = Path('input.yaml')
yaml = ruamel.yaml.YAML()
yaml.preserve_quotes = True
yaml.width = 1024
data = yaml.load(file_in)
annotations = data['metadata']['annotations']
annotations['ip'] = type(annotations['ip'])('4.3.2.1')
annotations['vip'] = type(annotations['vip'])('1.2.3.4')
yaml.dump(data, sys.stdout)
which gives:
api: v1
hostname: abc
metadata:
name: test
annotations: {"ip": "4.3.2.1", "login": "fad-login", "vip": "1.2.3.4", "interface": "port1", "port": "443"}
The type(annotations['vip'])() establishes that the replacement string in the output has the same
quotes as the original.
ruamel.yaml currently doesn't preserve newlines in a flow style mapping/sequence.
If this has to go back into some repository with minimal changes, you can do:
import sys
from pathlib import Path

import ruamel.yaml

file_in = Path('input.yaml')
def rewrite_closing_curly_brace(s):
    # Move a trailing '}' onto its own line, dedented two spaces,
    # so the flow-style mapping closes on a line of its own.
    res = []
    for line in s.splitlines():
        if line and line[-1] == '}':
            res.append(line[:-1])
            idx = 0
            while line[idx] == ' ':
                idx += 1
            res.append(' ' * (idx - 2) + '}')
            continue
        res.append(line)
    return '\n'.join(res) + '\n'
yaml = ruamel.yaml.YAML()
yaml.preserve_quotes = True
yaml.width = 15
data = yaml.load(file_in)
annotations = data['metadata']['annotations']
annotations['ip'] = type(annotations['ip'])('4.3.2.1')
annotations['vip'] = type(annotations['vip'])('1.2.3.4')
yaml.dump(data, sys.stdout, transform=rewrite_closing_curly_brace)
which gives:
api: v1
hostname: abc
metadata:
name: test
annotations: {
"ip": "4.3.2.1",
"login": "fad-login",
"vip": "1.2.3.4",
"interface": "port1",
"port": "443"
}
Here the 15 for width is of course highly dependent on your file and might affect other lines if they
were longer. In that case you could leave it out and instead have
rewrite_closing_curly_brace() do the wrapping: split and indent the whole flow-style part itself.
Please note that your original and the transformed output are invalid YAML;
it is accepted by ruamel.yaml for backward compatibility. According to the YAML
specification the closing curly brace should be indented more than the start of the annotations key.
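The examples above dump to sys.stdout for demonstration. Since the goal is to write the result back, note that YAML().dump() also accepts a pathlib.Path as its stream; a sketch reusing the names from the code above:
yaml.dump(data, file_in)  # overwrites input.yaml in place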
I have been stuck trying to get a particular JSON object when the value of a key matches a variable (string).
My json file looks like this:
"totalRecordsWithoutPaging": 1234,
"jobs": [
{
"jobSummary": {
"totalNumOfFiles": 0,
"jobId": 8035,
"destClientName": "BOSDEKARLSSP010",
"destinationClient": {
"clientId": 10,
"clientName": "BOSDEKARLSSP010"
}
}
},
{
"jobSummary": {
"totalNumOfFiles": 0,
"jobId": 9629,
"destClientName": "BOSDEKARLSSP006",
"destinationClient": {
"clientId": 11,
"clientName": "BOSDEKARLSSP006"
}
}
},
.....
]
}
I read this JSON with result: "{{ lookup('file','CVExport-short.json') | from_json }}" and I can get only one value of the destClientName key with the following code:
- name: Iterate JSON
set_fact:
app_item: "{{ item.jobSummary }}"
with_items: "{{ result.jobs }}"
register: app_result
- debug:
var: app_result.results[0].ansible_facts.app_item.destClientName
My goal is to get the value of jobId if the value of destClientName matches some other variable or string in any jobSummary.
I still don't have much knowledge of Ansible, so any help would be much appreciated.
Update
Ok, I have found one solution.
- name: get job ID
set_fact:
job_id: "{{ item.jobSummary.jobId }}"
with_items: "{{ result.jobs}}"
when: item.jobSummary.destClientName == '{{ target_vm }}'
- debug:
msg: "{{job_id}}"
But I think there might be a better solution than this. Any idea how?
Ansible's json_query filter lets you perform complex filtering of JSON documents by applying JMESPath expressions. Rather than looping over the jobs in the result, you can get the information you want in a single step.
We want to query all jobs which have a destClientName that matches the value in target_vm. Using literal values, the expression yielding that list of jobs would look like this:
jobs[?jobSummary.destClientName == `BOSDEKARLSSP006`]
The result of this, when applied to your sample data, would be:
[
{
"jobSummary": {
"totalNumOfFiles": 0,
"jobId": 9629,
"destClientName": "BOSDEKARLSSP006",
"destinationClient": {
"clientId": 11,
"clientName": "BOSDEKARLSSP006"
}
}
}
]
From this result, you want to extract the jobId, so we rewrite the expression like this:
jobs[?jobSummary.destClientName == `BOSDEKARLSSP006`]|[0].jobSummary.jobId
Which gives us:
9629
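If you want to prototype JMESPath expressions outside Ansible first, the jmespath Python package (the library behind json_query) evaluates them directly; a minimal sketch, assuming the sample data sits in the file from your lookup:
import json

import jmespath

with open('CVExport-short.json') as f:
    data = json.load(f)

expr = 'jobs[?jobSummary.destClientName == `BOSDEKARLSSP006`]|[0].jobSummary.jobId'
print(jmespath.search(expr, data))  # -> 9629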
To make this work in a playbook, you'll want to replace the literal hostname in this expression with the value of your target_vm variable. Here's a complete playbook that demonstrates the solution:
---
- hosts: localhost
gather_facts: false
# This is just the sample data from your question.
vars:
target_vm: BOSDEKARLSSP006
results:
totalRecordsWithoutPaging: 1234
jobs:
- jobSummary:
totalNumOfFiles: 0
jobId: 8035
destClientName: BOSDEKARLSSP010
destinationClient:
clientId: 10
clientName: BOSDEKARLSSP010
- jobSummary:
totalNumOfFiles: 0
jobId: 9629
destClientName: BOSDEKARLSSP006
destinationClient:
clientId: 11
clientName: BOSDEKARLSSP006
tasks:
- name: get job ID
set_fact:
job_id: "{{ results|json_query('jobs[?jobSummary.destClientName == `{}`]|[0].jobSummary.jobId'.format(target_vm)) }}"
- debug:
var: job_id
Update re: your comment
The {} in the expression is a Python string formatting sequence that
is filled in by the call to .format(target_vm). In Python, the
expression:
'The quick brown {} jumped over the lazy {}.'.format('fox', 'dog')
Would evaluate to:
The quick brown fox jumped over the lazy dog.
And that's exactly what we're doing in that set_fact expression. I
could instead have written:
job_id: "{{ results|json_query('jobs[?jobSummary.destClientName == `' ~ target_vm ~ '`]|[0].jobSummary.jobId') }}"
(Where ~ is the Jinja stringifying concatenation operator)
Currently I'm using an API to collect a JSON file, and I have managed to extract the output demonstrated below.
I'm now at the stage where I have the JSON extracted and need to shape it so that BQ will accept it, without too much manipulation (as this output will potentially be loaded on a daily basis).
{
"stats": [{
"date": "2018-06-17T00:00:00.000Z",
"scores": {
"my-followers": 8113,
"my-listed": 15,
"my-favourites": 5289,
"my-followings": 230,
"my-statuses": 3107
}
}, {
"date": "2018-06-18T00:00:00.000Z",
"scores": {
"my-statuses": 3107,
"my-followings": 230,
"my-lost-followings": 0,
"my-new-followers": 0,
"my-new-statuses": 0,
"my-listed": 15,
"my-lost-followers": 5,
"my-followers": 8108,
"my-favourites": 5288,
"my-new-followings": 0
}
}
.....
],
"uid": "123456789"
}
Any help will be appreciated.
Currently I have this error:
Errors:
query: Invalid field name "my-new-followings". Fields must contain only letters, numbers, and underscores, start with a letter or underscore, and be at most 128 characters long. Table: link_t_perf_test1_b58d4465_3a31_40cb_987f_9fb2d1de29dc_source (error code: invalidQuery
Even though "my-new-followings" contains only an integer (up to 5 digits).
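The error message itself states the constraint: BigQuery field names may contain only letters, numbers, and underscores, so the hyphens in keys like my-new-followings have to be rewritten before loading, regardless of the values they hold. A minimal Python sketch that renames the keys recursively (the file names are assumptions):
import json

def sanitize_keys(obj):
    # Recursively replace '-' with '_' in every mapping key.
    if isinstance(obj, dict):
        return {k.replace('-', '_'): sanitize_keys(v) for k, v in obj.items()}
    if isinstance(obj, list):
        return [sanitize_keys(v) for v in obj]
    return obj

with open('stats.json') as f:          # hypothetical input file
    data = json.load(f)

with open('stats_bq.json', 'w') as f:  # hypothetical BQ-ready output
    json.dump(sanitize_keys(data), f)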
Warning - I'm new to MongoDB and JSON.
I have a log file which contains JSON datasets. A single file has multiple JSON formats, as it is capturing clickstream data. Here is an example of one log file.
[
{
"username":"",
"event_source":"server",
"name":"course.activated",
"accept_language":"",
"time":"2016-10-12T01:02:07.443767+00:00",
"agent":"python-requests/2.9.1",
"page":null,
"host":"courses.org",
"session":"",
"referer":"",
"context":{
"user_id":null,
"org_id":"X",
"course_id":"3T2016",
"path":"/api/enrollment"
},
"ip":"160.0.0.1",
"event":{
"course_id":"3T2016",
"user_id":11,
"mode":"audit"
},
"event_type":"activated"
},
{
"username":"VTG",
"event_type":"/api/courses/3T2016/",
"ip":"161.0.0.1",
"agent":"Mozilla/5.0",
"host":"courses.org",
"referer":"http://courses.org/16773734",
"accept_language":"en-AU,en;q=0.8,en-US;q=0.6,en;q=0.4",
"event":"{\"POST\": {}, \"GET\": {}}",
"event_source":"server",
"context":{
"course_user_tags":{
},
"user_id":122,
"org_id":"X",
"course_id":"3T2016",
"path":"/api/courses/3T2016/"
},
"time":"2016-10-12T00:51:57.756468+00:00",
"page":null
}
]
Now I want to store this data in MongoDB. So here are my novice questions:
Do I need to parse the file and then split it into 2 datasets before storing them in MongoDB? If yes, is there a simple program to do this, as my file has multiple dataset formats?
Is there some magic in MongoDB that can split the various datasets when we upload it?
First of all, make sure your JSON is valid and formatted as cited below. Once it is, you can use mongoimport to load it into a database (mongorestore is meant for BSON dumps produced by mongodump, not for plain JSON files):
mongoimport --host hostname --port 27017 --db <database_name> --collection <collection_name> --file pathtojsonfile --jsonArray
For more information refer to https://docs.mongodb.com/manual/reference/program/mongoimport/
Formatted JSON:
[
{
"username":"",
"event_source":"server",
"name":"course.activated",
"accept_language":"",
"time":"2016-10-12T01:02:07.443767+00:00",
"agent":"python-requests/2.9.1",
"page":null,
"host":"courses.org",
"session":"",
"referer":"",
"context":{
"user_id":null,
"org_id":"X",
"course_id":"3T2016",
"path":"/api/enrollment"
},
"ip":"160.0.0.1",
"event":{
"course_id":"3T2016",
"user_id":11,
"mode":"audit"
},
"event_type":"activated"
},
{
"username":"VTG",
"event_type":"/api/courses/3T2016/",
"ip":"161.0.0.1",
"agent":"Mozilla/5.0",
"host":"courses.org",
"referer":"http://courses.org/16773734",
"accept_language":"en-AU,en;q=0.8,en-US;q=0.6,en;q=0.4",
"event":"{\"POST\": {}, \"GET\": {}}",
"event_source":"server",
"context":{
"course_user_tags":{
},
"user_id":122,
"org_id":"X",
"course_id":"3T2016",
"path":"/api/courses/3T2016/"
},
"time":"2016-10-12T00:51:57.756468+00:00",
"page":null
}
]
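Alternatively, since MongoDB collections are schema-less, documents of different shapes can live in the same collection, so there is no need to split the file first. A minimal pymongo sketch (connection string, database, collection, and file names are assumptions):
import json

from pymongo import MongoClient

client = MongoClient('mongodb://localhost:27017')  # assumed local instance
collection = client['clickstream']['events']       # hypothetical db/collection

with open('logfile.json') as f:
    docs = json.load(f)           # the log file is a JSON array of event objects

collection.insert_many(docs)      # heterogeneous documents insert without complaint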
I have my JSON file in the files folder of my playbook. I need to get the specific value of the "ending" key from my JSON file. How can I do it?
Here is my try:
- set_fact:
usr: "{{ (lookup('file','{{ role_path }}/files/inputfile.json')) | from_json }}"
- set_fact:
user: "{{ item }}"
with_items:
"{{ usr['meta'] | map(attribute='ending') | list }}"
My Inputjsonfile:
{
"mydata": {
"pair": [
"key": "-----BEGIN RSA PRIVATE KEY-----MIIEowIBAAKCAQEAgOh + Afb0oQEnvHifHuzBwl + Tiu8LXoJXb / ii / ehfNpJZLi1Ns8Wns4n5y8U6K0qE8E1bs / kedSUM30euKUu4YYnT5pDJT + kroo2fpsxM0nhrCRjUxCzClRSo41V / Q2a3QOSLPRXf
GL / Sf9kJVSRc6YmKDcnNkylqYWk4Ts0AP4fFTgZxbZQ6T6KQxEKeiKO + CQyvQi8ZL75UmmhbtM5R
qDTriXmPR3v4OHVTFx7zJzT2uZYxL4nNcsFi0mJLP + AvSkucIThOQcS64KVFLmxvJghSVyB + ZUfx
wrUhAORF / Q3zuIj + a9BDLTg3jMYkBC7NdAeYxAuHisJJMgEmmTU5qgPrkSabCPKJhCP3
-- -- - END RSA PRIVATE KEY-- -- - "
}
],
"name": "Jonhm",
"centre": "saquel"
}
}
Thanks
The error is because your JSON file is malformed.
Make shops look like this:
"shops": [
"mart",
"flip",
"amazon"
]
Or this:
"shops": [
{
"mart": 0,
"flip": 0,
"amazon": 0
}
]
And the error will go away.
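A quick way to verify a fix like this: json.loads() raises on the malformed structure and accepts the corrected one. A small sketch using abbreviated versions of the snippets above:
import json

bad = '{"shops": ["mart": 0]}'     # a key/value pair directly inside an array: malformed
good = '{"shops": [{"mart": 0}]}'  # the pair wrapped in an object: valid

try:
    json.loads(bad)
except json.JSONDecodeError as e:
    print('malformed:', e)

print(json.loads(good))  # {'shops': [{'mart': 0}]}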