Shell script to get key value pair from JSON object

I have a JSON object like
{
"Men": [
"All Clothing",
"All Clothing",
"All footwear",
"All footwear",
"All Watches",
"All Watches",
"All Sunglasses",
"All Sunglasses"
],
"Electronics": [
"Mobiles",
"Tablets",
"Wearable Smart Devices",
"Mobile Accessories",
"Headphones and headsets",
"Tablet Accessories",
"Computer Accessories",
"Televisions",
"Large Appliances",
"Small Appliances",
"Kitchen Appliances",
"Personal Care",
"Audio and video",
"Laptop"
],
"Women": [
"Ethnic wear",
"Western wear",
"Lingerie & Sleep Wear",
"All Bags, Belts & Wallets",
"All jewellery",
"All Perfumes",
"Spectacle Frames",
"Beauty & Personal Care",
"The International Beauty Shop"
]
}
I want to get the key-value pairs from this object. I'm using a jq filter but it does not work.
keys=`jq 'keys' $categories`
$categories is the name of the variable holding the JSON object. Suggestions are welcome.

It's not really clear what you are asking. If $categories contains your JSON data then you need to pipe it to jq somehow. With Bash, you could use a here string:
jq keys <<<"$categories"
or more traditionally (and portably), a pipe:
printf '%s\n' "$categories" | jq keys
To capture the value of the keys into a variable, use a command substitution:
keys=$(jq 'keys' <<<"$categories")
(or `backticks` as in your attempt, though the modern notation is much preferable);
or, better yet, obtain this value the same way you assigned categories in the first place.
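If what you actually want is the key-value pairs rather than just the keys, a to_entries filter along these lines may help (a sketch, assuming the JSON shown above; it prints each key alongside each element of its array):
jq -r 'to_entries[] | .key as $k | .value[] | "\($k)\t\(.)"' <<<"$categories"
For the sample data, the output starts with:
Men	All Clothing
Men	All Clothing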

Related

json.load loads a string instead of json

I have a list of dictionaries written to a data.txt file. I was expecting to be able to read the list of dictionaries in a normal way when I load, but instead, I seem to load up a string.
For example, when I print(data[0]), I was expecting the first dictionary in the list, but I got "[" instead.
Attached below are my code and txt file:
read_json.py
import json
with open('./data.txt', 'r') as json_file:
    data = json.load(json_file)
print(data[0])
data.txt
"[
{
"name": "Disney's Mulan (Mandarin) PG13 *",
"cast": [
"Jet Li",
"Donnie Yen",
"Yifei Liu"
],
"genre": [
"Action",
"Adventure",
"Drama"
],
"language": "Mandarin with no subtitles",
"rating": "PG13 - Some Violence",
"runtime": "115",
"open_date": "18 Sep 2020",
"description": "\u201cMulan\u201d is the epic adventure of a fearless young woman who masquerades as a man in order to fight Northern Invaders attacking China. The eldest daughter of an honored warrior, Hua Mulan is spirited, determined and quick on her feet. When the Emperor issues a decree that one man per family must serve in the Imperial Army, she steps in to take the place of her ailing father as Hua Jun, becoming one of China\u2019s greatest warriors ever."
},
{
"name": "The New Mutants M18",
"cast": [
"Maisie Williams",
"Henry Zaga",
"Anya Taylor-Joy",
"Charlie Heaton",
"Alice Braga",
"Blu Hunt"
],
"genre": [
"Action",
"Sci-Fi"
],
"language": "English",
"rating": "M18 - Some Mature Content",
"runtime": "94",
"open_date": "27 Aug 2020",
"description": "Five young mutants, just discovering their abilities while held in a secret facility against their will, fight to escape their past sins and save themselves."
}
]"
The above list is formatted properly for easy reading but the actual file is a single line and the different lines are denoted with "\n". Thanks for any help.
Removing the double quotes in data.txt worked for me.
E.g. modify
"[{...},{...}]"
to
[{...},{...}]
Hope it helps!
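Alternatively, if you can't edit the file, note that json.load is returning the outer JSON string, so you can simply parse twice. A minimal sketch (this assumes the inner quotes are escaped in the actual one-line file, which they must be, since json.load returned a string rather than raising an error):
import json

with open('./data.txt', 'r') as json_file:
    raw = json.load(json_file)   # first parse yields the outer string
    data = json.loads(raw)       # second parse yields the actual list

print(data[0])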

How to edit a json dictionary in Robot Framework

I am currently implementing some test automation that uses a JSON POST to a REST API to initialize the test data in the SUT. For most of the fields, I don't have an issue editing them using information I found in another thread: Json handling in ROBOT
However, one of the sets of information I am editing is a dictionary of meta data.
{
"title": "Test Auotmation Post 2018-03-06T16:12:02Z",
"content": "dummy text",
"excerpt": "Post made by automation for testing purposes.",
"name": "QA User",
"status": "publish",
"date": "2018-03-06T16:12:02Z",
"primary_section": "Entertainment",
"taxonomy": {
"section": [
"Entertainment"
]
},
"coauthors": [
{
"name": "QA User - CoAuthor",
"meta": {
"Title": "QA Engineer",
"Organization": "That One Place"
}
}
],
"post_meta": [
{
"key": "credit",
"value": "QA Engineer"
},
{
"key": "pub_date",
"value": "2018-03-06T16:12:02Z"
},
{
"key": "last_update",
"value": "2018-03-06T16:12:02Z"
},
{
"key": "source",
"value": "wordpress"
}
]
}
Is it possible to use the Set to Dictionary Keyword on a dictionary inside a dictionary? I would like to be able to edit the value of the pub_date and last_update inside of post_meta, specifically.
The most straightforward way would be to use the Evaluate keyword, and set the sub-dict value in it. Presuming you are working with a dictionary that's called ${value}:
Evaluate    $value['post_meta'][1]['value'] = 'your new value here'
I won't get into how to find the index of the post_meta list that has the 'key' with value 'pub_date', as that's not part of your question.
Is it possible to use the Set to Dictionary Keyword on a dictionary inside a dictionary?
Yes, it's possible.
However, because post_meta is a list rather than a dictionary, you will have to write some code to iterate over all of the values of post_meta until you find one with the key you want to update.
You could do this in Python quite simply (see the sketch at the end of this answer). You could also write a keyword in Robot to do that for you. Here's an example:
*** Keywords ***
Set list element by key
    [Arguments]    ${data}    ${target_key}    ${new_value}
    :FOR    ${item}    IN    @{data}
    \    run keyword if    '''${item['key']}''' == '''${target_key}'''
    \    ...    set to dictionary    ${item}    value=${new_value}
    [Return]    ${data}
Assuming you have a variable named ${data} that contains the original JSON data as a string, you could call this keyword like the following:
${JSON}=    evaluate    json.loads('''${data}''')    json
set list element by key    ${JSON['post_meta']}    pub_date    yesterday
set list element by key    ${JSON['post_meta']}    last_update    today
You will then have a python object in ${JSON} with the modified values.
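For comparison, the same update in plain Python might look like this (a sketch; the function name and the raw_json variable are illustrative, not part of the original question):
import json

def set_list_element_by_key(items, target_key, new_value):
    # Update the 'value' entry of every dict whose 'key' matches target_key.
    for item in items:
        if item.get('key') == target_key:
            item['value'] = new_value

data = json.loads(raw_json)  # raw_json holds the original JSON string
set_list_element_by_key(data['post_meta'], 'pub_date', 'yesterday')
set_list_element_by_key(data['post_meta'], 'last_update', 'today')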

mongoexport - Leaf Level - JSON to CSV conversion - egrep not working with multiple patterns using "|" pipe or with -f option

Why is egrep not giving me all the matching entries?
This is my simple JSON blob:
[nukaNUKA@dev-machine csv]$ cat jsonfile.json
{"number": 303,"projectName": "giga","queueId":8881,"result":"SUCCESS"}
This is my pattern file (so that I don't scare the editor):
[nukaNUKA@dev-machine csv]$ cat egrep-pattern.txt
\"number\":.*\"projectName
\"projectName\":.*,\"queueId
\"queueId\":.*,\"result
\"result\":\".*$
This is the egrep/grep command for individual searches, which works:
[nukaNUKA@dev-machine csv]$ egrep -o "\"number\":.*\"projectName" jsonfile.json
"number": 303,"projectName
[nukaNUKA@dev-machine csv]$ egrep -o "\"projectName\":.*,\"queueId" jsonfile.json
"projectName": "giga","queueId
[nukaNUKA@dev-machine csv]$ egrep -o "\"queueId\":.*,\"result" jsonfile.json
"queueId":8881,"result
[nukaNUKA@dev-machine csv]$ egrep -o "\"result\":\".*$" jsonfile.json
"result":"SUCCESS"}
So, why didn't this work? I don't wear glasses, yes.
[nukaNUKA@dev-machine csv]$ egrep -o "\"number\":.*\"projectName|\"projectName\":.*,\"queueId|\"queueId\":.*,\"result|\"result\":\".*$" jsonfile.json
"number": 303,"projectName
"queueId":8881,"result
[nukaNUKA@dev-machine csv]$ egrep -o -f egrep-pattern.txt jsonfile.json
"number": 303,"projectName
"queueId":8881,"result
[nukaNUKA@dev-machine csv]$
I have a complex nested JSON blob, and because everything is unstructured it seems I can't use jq, JSONV, or any other Python script. The data I'm looking for is stored in arrays of one-entry dictionaries (key=value) that reuse the same key names (ex: { "parameters": [ { "name": "jobname", "value": "shenzi" }, { "name": "pipelineVersion", "value": "1.2.3.4" }, ...so on..., ... ] }), and jobname, pipelineVersion, and similar parameter names are not at the same index[X] location in every JSON entry I have.
Worst case, I can add conditional checks to see whether the key at each index matches jobname etc., and then grab the fields I'm looking for, but there are hundreds of such fields that I want to grab, and I don't want to hard-code them if possible.
I thought that, as each JSON entry is on one line, I could simply write some cool patterns (ugly, I know) so that I wouldn't need to worry about conditional code, or just use Bash/sed/tr/cut power to get what I need; but it seems egrep -f or -o ... didn't work, as shown above.
A sample JSON blob object (from one Jenkins job) is attached below. There are different Jenkins build jobs' JSON blob entries (each having a different JSON structure, parameters, etc.) in a single JenkinsJobsBuild collection in MongoDB.
{
"_id": {
"$oid": "5120349es967yhsdfs907c4f"
},
"actions": [
{
"causes": [
{
"shortDescription": "Started by an SCM change"
}
]
},
{
},
{
"oneClickDeployPossible": false,
"oneClickDeployReady": false,
"oneClickDeployValid": false
},
{
},
{
},
{
},
{
"cspec": "element * ...\/MyProject_latest_int\/LATESTnelement * ...\/MyProject_integration\/LATESTnelement \/vobs\/some_vob\/gigi \/main\/myproject_integration\/MyProject_Slot_0_maint_int\/LATESTnelement * ...\/myproject_integration\/LATESTnelement \/vobs\/some_vob \/main\/LATEST",
"latestBlsOnConfiguredStream": null,
"stream": null
},
{
},
{
"parameters": [
{
"name": "CLEARCASE_VIEWTAG",
"value": "jenkins_MyProject_latest"
},
{
"name": "BUILD_DEBUG",
"value": false
},
{
"name": "CLEAN_BUILD",
"value": true
},
{
"name": "BASEVERSION",
"value": "7.4.1"
},
{
"name": "ARTIFACTID",
"value": "lowercaseprojectname"
},
{
"name": "SYSTEM",
"value": "myprojectSystem"
},
{
"name": "LOT",
"value": "02"
},
{
"name": "PIPENUMBER",
"value": "7.4.1.303"
}
]
},
{
},
{
},
{
"parameters": [
{
"name": "DESCRIPTION_SETTER_DESCRIPTION",
"value": "lowercaseprojectname_V7.4.1.303"
}
]
},
{
},
{
},
{
},
{
}
],
"artifacts": [
],
"building": false,
"builtOn": "servername",
"changeSet": {
"items": [
{
"affectedPaths": [
"vobs\/some_vob\/myproject\/apps\/app1\/Java\/test\/src\/com\/giga\/highlevelproject\/myproject\/schedule\/validation\/SomeActivityTest.java"
],
"author": {
"absoluteUrl": "http:\/\/11.22.33.44:8080\/user\/hitj1620",
"fullName": "name1, name2 A"
},
"commitId": null,
"date": {
"$numberLong": "1489439532000"
},
"dateStr": "13\/03\/2017 21:12:12",
"elements": [
{
"action": "create version",
"editType": "edit",
"file": "vobs\/some_vob\/myproject\/apps\/app1\/Java\/test\/src\/com\/giga\/highlevelproject\/myproject\/schedule\/validation\/SomeActivityTest.java",
"operation": "checkin",
"version": "\/main\/MyProject_latest_int\/2"
}
],
"msg": "",
"timestamp": -1,
"user": "user111"
}
],
"kind": null
},
"culprits": [
{
"absoluteUrl": "http:\/\/11.22.33.44:8080\/user\/nuka1620",
"fullName": "nuka, Chuck"
}
],
"description": "lowercaseprojectname_V7.4.1.303",
"displayName": "#303",
"duration": 525758,
"estimatedDuration": 306374,
"executor": null,
"fullDisplayName": "MyProject \u00bb MyProject-build #303",
"highlevelproject_metrics_source_url": "http:\/\/11.22.33.44:8080\/job\/MyProject\/job\/MyProject-build\/303\/\/api\/json",
"id": "303",
"keepLog": false,
"number": 303,
"projectName": "MyProject-build",
"queueId": 8201,
"result": "SUCCESS",
"timeToRepair": null,
"timestamp": {
"$numberLong": "1489439650307"
},
"url": "http:\/\/11.22.33.44:8080\/job\/MyProject\/job\/MyProject-build\/303\/"
}
When the regexes are in a file, you don't have to escape double quotes; you don't have to fight to get your double quotes past the shell.
"number":.*"projectName
"projectName":.*,"queueId
"queueId":.*,"result
"result":".*$
When that's fixed, I get:
$ egrep -o -f egrep-pattern.txt jsonfile.json
"number": 303,"projectName
"queueId":8881,"result
$
The trouble now is, I think, that you've consumed the projectName with the first pattern, so the others don't get a chance to match it. Change the patterns to read up to a comma and you can get better results:
"number":[^,]*
"projectName":[^,]*
"queueId":[^,]*
"result":".*$
yields:
"number": 303
"projectName": "giga"
"queueId":8881
"result":"SUCESS"}
You could try to be more delicate, but you rapidly reach a point where a JSON-aware tool becomes more sensible. Commas in a string value would mess up the modified regexes, for example. (So, if the project name was "Giga, if not Tera", you'd have problems.)
Matching more general JSON name:value notation
As long as you're looking for simple "key":"quoted value" objects, you can use the following grep -E (aka egrep) command:
grep -Eoe '"[^"]+":"((\\(["\\/bfnrt]|u[0-9a-fA-F]{4}))|[^"])*"' data
Given the JSON-like data (in the file called data):
{"key1":"value","key2":"value2 with \"quoted\" text","key3":"value3 with \\ and \/ and \f and \uA32D embedded"}
that script produces:
"key1":"value"
"key2":"value2 with \"quoted\" text"
"key3":"value3 with \\ and \/ and \f and \uA32D embedded"
You can upgrade it to handle almost any valid JSON "key":value by using:
grep -Eoe '"[^"]+":(("((\\(["\\/bfnrt]|u[0-9a-fA-F]{4}))|[^"])*")|true|false|null|(-?(0|[1-9][0-9]*)(\.[0-9]+)?([eE][-+]?[0-9]+)?))' data
With a new data file containing:
{"key1":"value","key2":"value2 with \"quoted\" text"}
{"key3":"value3 with \\ and \/ and \f and \uA32D embedded"}
{"key4":false,"key5":true,"key6":null,"key7":7,"key8":0,"key9":0.123E-23}
{"key10":10,"key11":3.14159,"key12":0.876,"key13":-543.123}
the script produces:
"key1":"value"
"key2":"value2 with \"quoted\" text"
"key3":"value3 with \\ and \/ and \f and \uA32D embedded"
"key4":false
"key5":true
"key6":null
"key7":7
"key8":0
"key9":0.123E-23
"key10":10
"key11":3.14159
"key12":0.876
"key13":-543.123
You can follow the railroad diagrams in the outline JSON specification at http://json.org to see how I created the regex.
It could be enhanced by the judicious addition of [[:space:]]* in places where spaces are permitted but not required — before the key string, before the colon, after the colon (you could add it after the value too, but you probably don't want that).
Another simplification that I've taken is that the key doesn't allow for the various escape characters that the value string does. You could repeat that.
And, of course, this only works for 'leaf' name:value pairs; if a value is itself an object {…} or an array […], this doesn't handle the value as a whole.
However, this just goes to emphasize that it gets very messy very quickly and you would be better off using a special-purpose JSON query tool. One such tool is jq, as mentioned in a comment to the main query.
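For instance, for the parameters arrays described in the question, a jq filter along these lines (a sketch, assuming the structure of the sample blob above) pulls out the name/value pairs no matter which index they sit at:
jq -r '.actions[]? | .parameters[]? | "\(.name)=\(.value)"' myJenkinsJob.json
For the sample blob, the output starts with:
CLEARCASE_VIEWTAG=jenkins_MyProject_latest
BUILD_DEBUG=false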
The complex JSON blob I had was from Jenkins (i.e. Jenkins jobs' REST API data) that I had in a MongoDB database.
To grab it from MongoDB, I used the mongoexport command to successfully generate the (non-JsonArray, non-pretty-format) JSON blob.
#!/bin/bash
server=localhost
db=database_Jenkins       ## assumed: the database named in the mongo URL below
exportDir=~/mongoDB_fetch ## assumed: directory where the JSON exports land
collectionFile=collections.txt
## Generate a collection file containing all collections in the Jenkins database in MongoDB.
( set -x
mongo "mongoDbServer.company.com/database_Jenkins" --eval "rs.slaveOk();db.getCollectionNames()" --quiet > ${collectionFile}
)
## Create per-collection JSON files.
for collection in $(cat ${collectionFile} | sed -e 's:,: :g')
do
mongoexport --host ${server} --db ${db} --collection "${collection}" --out ${exportDir}/${collection}.json
##mongoexport --host ${server} --db ${db} --collection "${collection}" --type=csv --fieldFile ~/mongoDB_fetch/get_these_csv_fields.txt --out ${exportDir}/${collection}.csv; ## This didn't work with nested fields. The fieldFile just contained one field name per line in a particular xyz.IndexNumber.yyy format.
done
I tried the built-in mongoexport command's --type=csv with -f fields to catch topfield.0.subField, field2, field3.7.parameters.7..; nothing worked.
PS: The number after the . mark is how you specify indexes when creating a CSV file with mongoexport's (mandatory) fields option.
As my JSON structure was all unstructured (Jenkins version bumps/upgrades happened in the past and the data about a job does not always have the same structure), I tried this final sed trick (as each JSON entry was on its own individual line).
This sed command (shown below) will give you all the keys and their values (in key=value format), one per line, at the LEAF field level of almost any JSON blob, or at least of the Jenkins JSON blob. Once you have this info, you can feed the output of this command to a temporary file, then read all the value parts (after the = mark) and create your CSV file accordingly. Yes, you have to sort it so that your CSV file's fields stay aligned with the header names and the values land in the right column/field. I calculated the field names from the temporary key=value key names generated for all the different collection JSON files. Then I read all the temporary collection files and added the values under the respective header/field/column in the final CSV file.
OK, this is a weird solution, but at least it's a solution: a one-liner.
cat myJenkinsJob.json | sed "s/{}//g;s/,,*/,/g;s/},\"/\n/g;s/},{/\n/g;s/\([^\"a-zA-Z]\),\"/\1\n/g;s/:\[{/\n/g;s/\"name\":\"//g;s/\",\"value//g;s/,\"/\n/g;s/\":\"*/=/g;s/\"//g;s/[\[}\]]//g;s/[{}]//g;s/\$[a-zA-Z][a-zA-Z]*=//g"|grep "=" | sed "s/,$//"|egrep -v "=-|=$|=\[|^_class="
Tweak the sed part a little to suit your own data if your JSON blob shows you funny characters that you don't want. The order of the sed operations above is important. I also exclude some redundant variables that I don't need at this time (for example, the JSON blob contained _class="..." values) via egrep -v after the last | pipe.
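For comparison, a jq equivalent of this sed hack might look like the following (a sketch, assuming jq 1.5 or later): it visits every leaf of the blob and emits dotted-path=value pairs, which you could sort and stitch into a CSV the same way.
jq -r 'paths(scalars) as $p | "\($p | map(tostring) | join("."))=\(getpath($p))"' myJenkinsJob.json
For the sample blob above, that emits lines such as _id.$oid=5120349es967yhsdfs907c4f and number=303.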

How to use `jq` to obtain the keys

My JSON looks like this:
{
"20160522201409-jobsv1-1": {
"vmStateDisplayName": "Ready",
"servers": {
"20160522201409 jobs_v1 1": {
"serverStateDisplayName": "Ready",
"creationDate": "2016-05-22T20:14:22.000+0000",
"state": "READY",
"provisionStatus": "PENDING",
"serverRole": "ROLE",
"serverType": "SERVER",
"serverName": "20160522201409 jobs_v1 1",
"serverId": 2902
}
},
"isAdminNode": true,
"creationDate": "2016-05-22T20:14:23.000+0000",
"totalStorage": 15360,
"shapeId": "ot1",
"state": "READY",
"vmId": 4353,
"hostName": "20160522201409-jobsv1-1",
"label": "20160522201409 jobs_v1 ADMIN_SERVER 1",
"ipAddress": "10.252.159.39",
"publicIpAddress": "10.252.159.39",
"usageType": "ADMIN_SERVER",
"role": "ADMIN_SERVER",
"componentType": "jobs_v1"
}
}
My key keeps changing from time to time. So for example 20160522201409-jobsv1-1 may be something else tomorrow. Also, I may have more than one such entry in the JSON payload.
I want to echo $KEYS and I am trying to do it using jq.
Things I have tried:
| jq .KEYS is the command I use frequently.
Is there a jq command to display all the primary keys in the json?
I only care about the hostname field. And I would like to extract that out. I know how to do it using grep but it is NOT a clean approach.
You can simply use keys:
% jq 'keys' my.json
[
"20160522201409-jobsv1-1"
]
And to get the first:
% jq -r 'keys[0]' my.json
20160522201409-jobsv1-1
-r is for raw output:
--raw-output / -r: With this option, if the filter’s result is a string then it will be written directly to standard output rather than being formatted as a JSON string with quotes. This can be useful for making jq filters talk to non-JSON-based systems.
Source
If you want a known value below an unknown property, eg xxx.hostName:
% jq -r '.[].hostName' my.json
20160522201409-jobsv1-1
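And if you want each unknown key together with its hostName in one pass, you could combine to_entries with string interpolation (a sketch, assuming the structure above):
% jq -r 'to_entries[] | "\(.key): \(.value.hostName)"' my.json
20160522201409-jobsv1-1: 20160522201409-jobsv1-1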

jq - How to test for the occurrence of a particular value in a JSON response

Using jq I would like to test for the occurrence of a particular key value in the JSON below, for example that "WARNING" has occurred as a 'severity' value (even once), no matter the number of objects returned, such that I return a boolean. For simplicity I have 2 objects below, but it could be 2000.
{
"events": [
{
"severity": "WARNING",
"status": "",
"time_raised": "1454502910919",
"data_1": "00000000",
"data_2": "00000000",
"data_3": "00000000",
"register_0": "40000",
"register_1": "4",
"register_2": "10",
"register_3": "0"
},
{
"severity": "ERROR",
"status": "",
"time_raised": "1454502840915",
"data_1": "00000000",
"data_2": "00000000",
"data_3": "00000000",
"register_0": "50000",
"register_1": "4",
"register_2": "8",
"register_3": "0"
}
]
}
My approach has been to try using the 'contains' filter like so
jq '.events[] | .severity | contains("WARNING")'
Which outputs
true
false
As I want to have a single boolean value returned, I've tried to merge the values into a single string or array before using 'contains', but I can't find a way to do this.
I'd rather keep the logic in jq, so I'm hoping I've missed the wood for the trees and that there is a simple way of doing this in jq.
Building on your approach, you could, for example, simply write:
jq '[.events[]|.severity|contains("WARNING")] | any'
Or more succinctly:
jq 'any(.events[].severity; contains("WARNING"))'
If you want to test for the condition in ANY object, no matter where it is, then consider using walk/1.
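For example, the following tests for the value anywhere in the document, however deeply nested (a sketch using the recursive-descent operator .. rather than walk; assumes jq 1.5 or later, where any/2 is available):
jq 'any(..; .severity? == "WARNING")'
The ? suppresses errors on values that have no severity field, so the filter yields a single true or false for the whole input.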