Conditionally add json file into Main json file in jq 1.5 - json

I have a JSON file with just one object, as below. I am calling it Auth.json:
{
"name": "Authorization",
"description": "This parameter represents the Authorization token obtained from the OKTA Authorization server. It is the Bearer token provided to authorize the consumer. Usage Authorization : Bearer token",
"in": "header",
"required": true,
"type": "string"
}
I need to add the above JSON file's value into the below JSON, but only if the path
.paths.<any key that starts with />.get.parameters does not already have that object.
If it exists, that object needs to be removed and the contents of the above Auth.json need to be added.
I have jq 1.5 and do not have access on the system to update the jq initialization file, so I could not use the walk function, which would have made this simpler.
Main.json
{
"swagger": "2.0",
"paths": {
"/agents/delta": {
"get": {
"description": "lorem ipsum doram",
"operationId": "getagentdelta",
"summary": "GetAgentDelta",
"tags": [
"Agents"
],
"parameters": [
{
"name": "since",
"in": "query",
"description": "Format - date-time (as date-time in RFC3339). The time from which you need changes from. You should use the format emitted by Date's toJSON method (for example, 2017-04-23T18:25:43.511Z). If a timestamp older than a week is passed, a business rule violation will be thrown which will require the client to change the from date. As a best-practice, for a subsequent call to this method, send the timestamp when you <b>started</b> the previous delta call (instead of when you completed processing the response or the max of the lastUpdateOn timestamps of the returned records). This will ensure that you do not miss any changes that occurred while you are processing the response from this method",
"required": true,
"type": "string"
}
]
}
}
}
}
I tried the below command, but it adds the object recursively within all objects under paths in Main.json.
jq --slurpfile newval Auth.json '.paths | .. | .get.parameters += $newval' Main.json > test.json
How can I achieve the above using jq 1.5?

You got it almost right, but you are missing the part that identifies the objects whose key starts with /. You can use startswith() on the key name:
jq --slurpfile auth Auth.json '
.paths |= with_entries(
if .key|startswith("/")
then
.value.get.parameters += $auth
else
. end
)' Main.json
Additionally, if you would like to add the object only when .parameters does not already contain the Authorization object, change your if condition to
if (.key | startswith("/")) and (.value.get.parameters | index($auth[0]) | not)
(note that --slurpfile wraps the file's contents in an array, so the object itself is $auth[0]).
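If you also need the replace behaviour from the question (drop any existing Authorization entry, then append the one from Auth.json), here is a minimal sketch building on the same with_entries/startswith idea (paths without a get.parameters array are left untouched):
jq --slurpfile auth Auth.json '
.paths |= with_entries(
  if (.key | startswith("/")) and (.value.get.parameters != null)
  then .value.get.parameters |= (map(select(.name != "Authorization")) + $auth)
  else .
  end
)' Main.json > test.json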

Related

Forge Model Derivative Trying to get object properties by externalId fails but works with objectID

Using this Model Derivative endpoint we want to retrieve the properties of an object by calling it with its externalId. We follow the procedure explained here.
The endpoint works using the objectid, in this case 3:
"data": {
"type": "properties",
"collection": [
{
"objectid": 3,
"name": "Finished Ceiling Height",
"externalId": "5d365ed4-cccc-4589-b4e1-a8c5c744672a-0046dd63",
"properties": {
"General": {
"Override": ""
},
"Extents": {
"Scope Box": "None"
},
"Constraints": {
"Elevation": "8.667 ft-and-fractional-in",
"Story Above": "Default"
},
"Dimensions": {
"Computation Height": "3.000 ft-and-fractional-in"
},
"Identity Data": {
"Name": "Finished Ceiling Height",
"Structural": "No",
"Building Story": "Yes",
"Asset ID": "",
"Asset Location": "",
"Asset Category": "",
"Workset": "Shared Views, Levels, Grids",
"Edited by": ""
}
}
}
]
}
}
The documentation states that we could also use externalId to get the same result.
The objectid of an object can change if the design is translated to SVF or SVF2 again. If you require a persistent ID to reference an object, use externalId.
But changing objectid=3 to objectid=5d365ed4-cccc-4589-b4e1-a8c5c744672a-0046dd63 returns an error 400:
{
"diagnostic": "Invalid 'objectid' parameter"
}
We also tried converting the GUID to base64 and using externalid instead of objectid as the parameter name, all with the same results.
Any ideas? Are we misunderstanding the documentation?
Note: Accessing the viewer is not an option at the moment, and neither is downloading the sqlite/json files.
[Edit 1]
We tried setting the query parameter as externalId/externalid and received the same results.
curl --location --request GET 'https://developer.api.autodesk.com/modelderivative/v2/designdata/urn/metadata/guid/properties?externalId=43f2c4e0-d09b-4151-a349-1b6f684411c6-004c8717' \
--header 'x-ads-force: true' \
--header 'x-ads-derivative-format: fallback' \
--header 'Authorization: Bearer XXXXXXX'
The objectId parameter accepts numerical values only. As you can see from the API response, objectId is a numerical value. You cannot pass a string to this parameter, unfortunately.
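For reference, a request of the shape the endpoint does accept (a numeric objectid; urn and guid are the same placeholders used in the question's example) would look something like this:
curl --location --request GET \
  'https://developer.api.autodesk.com/modelderivative/v2/designdata/urn/metadata/guid/properties?objectid=3' \
  --header 'x-ads-force: true' \
  --header 'x-ads-derivative-format: fallback' \
  --header 'Authorization: Bearer XXXXXXX'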
However, our engineering team is working on a new property API which supports more filters. You might be interested in the new property metadata API mentioned in https://www.autodesk.com/autodesk-university/class/Forge-Road-Map-2021 at around the 20-minute mark.

Jenkins Pipeline Error while trying to send JSON Payload after post success

I'm trying to send a custom JSON payload from my pipeline on Jenkins after the last successful stage, like this:
post {
success {
script {
def payload = """
{
"type": "AdaptiveCard",
"body": [
{
"type": "TextBlock",
"size": "Medium",
"weight": "Bolder",
"text": "SonarQube report from Jenkins Pipeline"
},
{
"type": "TextBlock",
"text": "Code was analyzed was successfully.",
"wrap": true,
"color": "Good",
"weight": "Bolder"
}
],
"$schema": "http://adaptivecards.io/schemas/adaptive-card.json",
"version": "1.3"
}"""
httpRequest httpMode: 'POST',
acceptType: 'APPLICATION_JSON',
contentType: 'APPLICATION_JSON',
url: "URL",
requestBody: payload
}
}
}
}
But I get an error
Error when executing success post condition:
groovy.lang.MissingPropertyException: No such property: schema for class: groovy.lang.Binding
at groovy.lang.Binding.getVariable(Binding.java:63)
I'm using the HTTP Request plugin available for Jenkins and the format of the JSON payload is correct for MS Teams.
The issue is actually a Groovy string interpolation problem. You can easily check this in something like https://groovy-playground.appspot.com/ by adding your def payload = ... statement.
There are multiple ways to get multiline strings in groovy:
triple single quoted string
triple double quoted string
slashy string
dollar slashy string
Apart from the triple single quoted string, they also support interpolation.
Notice how in the initial JSON payload there's a "$schema" key? Using triple double quoted strings makes Groovy want to find a schema variable and use its value to construct the payload variable.
You have two separate solutions:
Use triple single quoted string - just update """ to '''
Escape the variable - just update "$schema" to "\$schema" (making $ a literal $ instead of it being used as an interpolation prefix)

Replace and add json within another json

I have a Main json file.
{
"swagger": "2.0",
"paths": {
"/agents/delta": {
"get": {
"description": "lorem ipsum doram",
"operationId": "getagentdelta",
"summary": "GetAgentDelta",
"tags": [
"Agents"
],
"parameters": [
{
"name": "since",
"in": "query",
"description": "Format - date-time (as date-time in RFC3339). The time from which you need changes from. You should use the format emitted by Date's toJSON method (for example, 2017-04-23T18:25:43.511Z). If a timestamp older than a week is passed, a business rule violation will be thrown which will require the client to change the from date. As a best-practice, for a subsequent call to this method, send the timestamp when you <b>started</b> the previous delta call (instead of when you completed processing the response or the max of the lastUpdateOn timestamps of the returned records). This will ensure that you do not miss any changes that occurred while you are processing the response from this method",
"required": true,
"type": "string"
}
]
}
}
}
}
And I have a smaller json file.
{
"name": "Authorization",
"description": "This parameter represents the Authorization token obtained from the OKTA Authorization server. It is the Bearer token provided to authorize the consumer. Usage Authorization : Bearer token",
"in": "header",
"required": true,
"type": "string"
}
Now I need to add the contents of the smaller json file into the Main.Json file in the parameters array.
I tried the below command
cat test.json | jq --argfile sub Sub.json '.paths./agents/delta.get.parameters[ ] += $sub.{}' > test1.json
But I get the below error:
jq: error: syntax error, unexpected '{', expecting FORMAT or QQSTRING_START (Unix shell quoting issues?) at <top-level>, line 1:
.paths += $sub.{}
jq: 1 compile error
cat: write error: Broken pipe
I tried this command.
cat test.json | jq '.paths./agents/delta.get.parameters[ ] | = (.+ [{ "name": "Authorization", "description": "This parameter represents the Authorization token obtained from the OKTA Authorization server. It is the Bearer token provided to authorize the consumer. Usage Authorization : Bearer token", "in": "header", "required": true, "type": "string" }] )' > test1.json
And I get no error and no output either. How do I get around this?
I would have to add the contents of the smaller JSON file directly first. Then, at a later stage, search whether it already has name: Authorization and its other parameters, and remove and replace the whole name: Authorization piece with the actual contents of the smaller.json, under each path that starts with '/xx/yyy'.
Edited to add:
For the last part of the question, I could not use the walk function, since I have jq 1.5 and, because I am using the bash task within Azure DevOps, I can't add the walk function to the jq installation.
Meanwhile I found something similar to a wildcard in jq, and was wondering why I can't use it in this way.
jq --slurpfile newval auth.json '.paths | .. | objects | .get.parameters += $newval' test.json > test1.json
Can anyone please point out the issue in the above command? It did not work, and I am not sure why.
You want --slurpfile, and you need to quote the /agents/delta part of the path:
$ jq --slurpfile newval insert.json '.paths."/agents/delta".get.parameters += $newval' main.json
{
"swagger": "2.0",
"paths": {
"/agents/delta": {
"get": {
"description": "lorem ipsum doram",
"operationId": "getagentdelta",
"summary": "GetAgentDelta",
"tags": [
"Agents"
],
"parameters": [
{
"name": "since",
"in": "query",
"description": "Format - date-time (as date-time in RFC3339). The time from which you need changes from. You should use the format emitted by Date's toJSON method (for example, 2017-04-23T18:25:43.511Z). If a timestamp older than a week is passed, a business rule violation will be thrown which will require the client to change the from date. As a best-practice, for a subsequent call to this method, send the timestamp when you <b>started</b> the previous delta call (instead of when you completed processing the response or the max of the lastUpdateOn timestamps of the returned records). This will ensure that you do not miss any changes that occurred while you are processing the response from this method",
"required": true,
"type": "string"
},
{
"name": "Authorization",
"description": "This parameter represents the Authorization token obtained from the OKTA Authorization server. It is the Bearer token provided to authorize the consumer. Usage Authorization : Bearer token",
"in": "header",
"required": true,
"type": "string"
}
]
}
}
}
}
And here's one that first removes any existing Authorization objects from the parameters before inserting the new one into every parameters array, and doesn't depend on the exact path:
jq --slurpfile newval add.json '.paths |= walk(
if type == "object" and has("parameters") then
.parameters |= map(select(.name != "Authorization")) + $newval
else
.
end)' main.json
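If you are stuck on jq 1.5, where walk is not yet a builtin and the init file cannot be touched, you can paste the standard walk definition at the top of the filter instead; a sketch of the same command with the definition inlined:
jq --slurpfile newval add.json '
  def walk(f):
    . as $in
    | if type == "object" then
        reduce keys[] as $key ({}; . + { ($key): ($in[$key] | walk(f)) }) | f
      elif type == "array" then map(walk(f)) | f
      else f
      end;
  .paths |= walk(
    if type == "object" and has("parameters") then
      .parameters |= map(select(.name != "Authorization")) + $newval
    else
      .
    end)' main.json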

How to edit a json dictionary in Robot Framework

I am currently implementing some test automation that uses a JSON POST to a REST API to initialize the test data in the SUT. Most of the fields I can edit without issue using information I found in another thread: Json handling in ROBOT
However, one of the sets of information I am editing is a dictionary of metadata.
{
"title": "Test Auotmation Post 2018-03-06T16:12:02Z",
"content": "dummy text",
"excerpt": "Post made by automation for testing purposes.",
"name": "QA User",
"status": "publish",
"date": "2018-03-06T16:12:02Z",
"primary_section": "Entertainment",
"taxonomy": {
"section": [
"Entertainment"
]
},
"coauthors": [
{
"name": "QA User - CoAuthor",
"meta": {
"Title": "QA Engineer",
"Organization": "That One Place"
}
}
],
"post_meta": [
{
"key": "credit",
"value": "QA Engineer"
},
{
"key": "pub_date",
"value": "2018-03-06T16:12:02Z"
},
{
"key": "last_update",
"value": "2018-03-06T16:12:02Z"
},
{
"key": "source",
"value": "wordpress"
}
]
}
Is it possible to use the Set to Dictionary Keyword on a dictionary inside a dictionary? I would like to be able to edit the value of the pub_date and last_update inside of post_meta, specifically.
The most straightforward way would be to use the Evaluate keyword, and set the sub-dict value in it. Presuming you are working with a dictionary that's called ${value}:
Evaluate    $value['post_meta'][1]['value'] = 'your new value here'
I won't get into how to find the index of the post_meta list that has the 'key' with value 'pub_date', as that's not part of your question.
Is it possible to use the Set to Dictionary Keyword on a dictionary inside a dictionary?
Yes, it's possible.
However, because post_meta is a list rather than a dictionary, you will have to write some code to iterate over all of the values of post_meta until you find one with the key you want to update.
You could do this in python quite simply. You could also write a keyword in robot to do that for you. Here's an example:
*** Keywords ***
Set list element by key
    [Arguments]    ${data}    ${target_key}    ${new_value}
    :FOR    ${item}    IN    @{data}
    \    run keyword if    '''${item['key']}''' == '''${target_key}'''
    \    ...    set to dictionary    ${item}    value=${new_value}
    [return]    ${data}
Assuming you have a variable named ${data} that contains the original JSON data as a string, you could call this keyword like the following:
${JSON}=    evaluate    json.loads('''${data}''')    json
set list element by key    ${JSON['post_meta']}    pub_date    yesterday
set list element by key    ${JSON['post_meta']}    last_update    today
You will then have a python object in ${JSON} with the modified values.

mongoexport - Leaf Level - JSON to CSV conversion - egrep not working with multiple patterns using "|" pipe or with -f option

Why is egrep not giving me all the matching entries?
This is my simple JSON blob:
[nukaNUKA#dev-machine csv]$ cat jsonfile.json
{"number": 303,"projectName": "giga","queueId":8881,"result":"SUCCESS"}
This is my pattern file (so that I don't scare the editor):
[nukaNUKA#dev-machine csv]$ cat egrep-pattern.txt
\"number\":.*\"projectName
\"projectName\":.*,\"queueId
\"queueId\":.*,\"result
\"result\":\".*$
This is the egrep/grep command for individual searches, which works:
[nukaNUKA#dev-machine csv]$ egrep -o "\"number\":.*\"projectName" jsonfile.json
"number": 303,"projectName
[nukaNUKA#dev-machine csv]$ egrep -o "\"projectName\":.*,\"queueId" jsonfile.json
"projectName": "giga","queueId
[nukaNUKA#dev-machine csv]$ egrep -o "\"queueId\":.*,\"result" jsonfile.json
"queueId":8881,"result
[nukaNUKA#dev-machine csv]$ egrep -o "\"result\":\".*$" jsonfile.json
"result":"SUCCESS"}
So, why didn't this work? I don't wear glasses, yes.
[nukaNUKA#dev-machine csv]$ egrep -o "\"number\":.*\"projectName|\"projectName\":.*,\"queueId|\"queueId\":.*,\"result|\"result\":\".*$" jsonfile.json
"number": 303,"projectName
"queueId":8881,"result
[nukaNUKA#dev-machine csv]$ egrep -o -f egrep-pattern.txt jsonfile.json
"number": 303,"projectName
"queueId":8881,"result
[nukaNUKA#dev-machine csv]$
I have a complex nested JSON blob, and because everything is unstructured, it seems like I can't use jq or JSONV or any Python script: the data I'm looking for is stored in arrays containing single-dictionary entries (key=value) with the same key names (for example: { "parameters": [ { "name": "jobname", "value": "shenzi" }, { "name": "pipelineVersion", "value": "1.2.3.4" }, ...and so on... ] }), and the index for jobname, pipelineVersion, or similar parameter names is not at the same index[X] location in every JSON entry I have.
Worst case, I can add conditional checks to see if the key at every index matches jobname etc. and then grab the fields I'm looking for, but there are hundreds of such fields that I want to grab, and I don't want to hard-code them if possible.
I thought that, as each JSON entry is on its own line, I could simply write some clever patterns (ugly, I know) so that I wouldn't need the conditional code and could just use bash/sed/tr/cut to get what I need, but as shown above, egrep -f or -o didn't work.
Here is a sample JSON blob object (from one Jenkins job). There are different Jenkins build jobs' JSON blob entries (each having different JSON structures, parameters, etc.) in a single JenkinsJobsBuild collection in MongoDB:
{
"_id": {
"$oid": "5120349es967yhsdfs907c4f"
},
"actions": [
{
"causes": [
{
"shortDescription": "Started by an SCM change"
}
]
},
{
},
{
"oneClickDeployPossible": false,
"oneClickDeployReady": false,
"oneClickDeployValid": false
},
{
},
{
},
{
},
{
"cspec": "element * ...\/MyProject_latest_int\/LATESTnelement * ...\/MyProject_integration\/LATESTnelement \/vobs\/some_vob\/gigi \/main\/myproject_integration\/MyProject_Slot_0_maint_int\/LATESTnelement * ...\/myproject_integration\/LATESTnelement \/vobs\/some_vob \/main\/LATEST",
"latestBlsOnConfiguredStream": null,
"stream": null
},
{
},
{
"parameters": [
{
"name": "CLEARCASE_VIEWTAG",
"value": "jenkins_MyProject_latest"
},
{
"name": "BUILD_DEBUG",
"value": false
},
{
"name": "CLEAN_BUILD",
"value": true
},
{
"name": "BASEVERSION",
"value": "7.4.1"
},
{
"name": "ARTIFACTID",
"value": "lowercaseprojectname"
},
{
"name": "SYSTEM",
"value": "myprojectSystem"
},
{
"name": "LOT",
"value": "02"
},
{
"name": "PIPENUMBER",
"value": "7.4.1.303"
}
]
},
{
},
{
},
{
"parameters": [
{
"name": "DESCRIPTION_SETTER_DESCRIPTION",
"value": "lowercaseprojectname_V7.4.1.303"
}
]
},
{
},
{
},
{
},
{
}
],
"artifacts": [
],
"building": false,
"builtOn": "servername",
"changeSet": {
"items": [
{
"affectedPaths": [
"vobs\/some_vob\/myproject\/apps\/app1\/Java\/test\/src\/com\/giga\/highlevelproject\/myproject\/schedule\/validation\/SomeActivityTest.java"
],
"author": {
"absoluteUrl": "http:\/\/11.22.33.44:8080\/user\/hitj1620",
"fullName": "name1, name2 A"
},
"commitId": null,
"date": {
"$numberLong": "1489439532000"
},
"dateStr": "13\/03\/2017 21:12:12",
"elements": [
{
"action": "create version",
"editType": "edit",
"file": "vobs\/some_vob\/myproject\/apps\/app1\/Java\/test\/src\/com\/giga\/highlevelproject\/myproject\/schedule\/validation\/SomeActivityTest.java",
"operation": "checkin",
"version": "\/main\/MyProject_latest_int\/2"
}
],
"msg": "",
"timestamp": -1,
"user": "user111"
}
],
"kind": null
},
"culprits": [
{
"absoluteUrl": "http:\/\/11.22.33.44:8080\/user\/nuka1620",
"fullName": "nuka, Chuck"
}
],
"description": "lowercaseprojectname_V7.4.1.303",
"displayName": "#303",
"duration": 525758,
"estimatedDuration": 306374,
"executor": null,
"fullDisplayName": "MyProject \u00bb MyProject-build #303",
"highlevelproject_metrics_source_url": "http:\/\/11.22.33.44:8080\/job\/MyProject\/job\/MyProject-build\/303\/\/api\/json",
"id": "303",
"keepLog": false,
"number": 303,
"projectName": "MyProject-build",
"queueId": 8201,
"result": "SUCCESS",
"timeToRepair": null,
"timestamp": {
"$numberLong": "1489439650307"
},
"url": "http:\/\/11.22.33.44:8080\/job\/MyProject\/job\/MyProject-build\/303\/"
}
When the regexes are in a file, you don't have to escape double quotes; you don't have to fight to get your double quotes past the shell.
"number":.*"projectName
"projectName":.*,"queueId
"queueId":.*,"result
"result":".*$
When that's fixed, I get:
$ egrep -o -f egrep-pattern.txt jsonfile.json
"number": 303,"projectName
"queueId":8881,"result
$
The trouble now is, I think, that you've consumed the projectName with the first pattern, so the others don't get a chance to match it. Change the patterns to read up to a comma and you can get better results:
"number":[^,]*
"projectName":[^,]*
"queueId":[^,]*
"result":".*$
yields:
"number": 303
"projectName": "giga"
"queueId":8881
"result":"SUCESS"}
You could try to be more delicate, but you rapidly reach a point where a JSON-aware tool becomes more sensible. Commas in a string value would mess up the modified regexes, for example. (So, if the project name was "Giga, if not Tera", you'd have problems.)
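For instance, a jq sketch against the simple jsonfile.json shown above (field names taken from that blob) extracts the same information with no regex fragility:
jq -r '"number=\(.number)", "projectName=\(.projectName)", "queueId=\(.queueId)", "result=\(.result)"' jsonfile.json
which prints one key=value line per field.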
Matching more general JSON name:value notation
As long as you're looking for simple "key":"quoted value" objects, you can use the following grep -E (aka egrep) command:
grep -Eoe '"[^"]+":"((\\(["\\/bfnrt]|u[0-9a-fA-F]{4}))|[^"])*"' data
Given the JSON-like data (in the file called data):
{"key1":"value","key2":"value2 with \"quoted\" text","key3":"value3 with \\ and \/ and \f and \uA32D embedded"}
that script produces:
"key1":"value"
"key2":"value2 with \"quoted\" text"
"key3":"value3 with \\ and \/ and \f and \uA32D embedded"
You can upgrade it to handle almost any valid JSON "key":value by using:
grep -Eoe '"[^"]+":(("((\\(["\\/bfnrt]|u[0-9a-fA-F]{4}))|[^"])*")|true|false|null|(-?(0|[1-9][0-9]*)(\.[0-9]+)?([eE][-+]?[0-9]+)?))' data
With a new data file containing:
{"key1":"value","key2":"value2 with \"quoted\" text"}
{"key3":"value3 with \\ and \/ and \f and \uA32D embedded"}
{"key4":false,"key5":true,"key6":null,"key7":7,"key8":0,"key9":0.123E-23}
{"key10":10,"key11":3.14159,"key12":0.876,"key13":-543.123}
the script produces:
"key1":"value"
"key2":"value2 with \"quoted\" text"
"key3":"value3 with \\ and \/ and \f and \uA32D embedded"
"key4":false
"key5":true
"key6":null
"key7":7
"key8":0
"key9":0.123E-23
"key10":10
"key11":3.14159
"key12":0.876
"key13":-543.123
You can follow the railroad diagrams in the outline JSON specification at http://json.org to see how I created the regex.
It could be enhanced by the judicious addition of [[:space:]]* in places where spaces are permitted but not required — before the key string, before the colon, after the colon (you could add it after the value too, but you probably don't want that).
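For instance, a sketch of the string-value variant that tolerates optional whitespace around the colon (the space before the key would need the same treatment):
grep -Eoe '"[^"]+"[[:space:]]*:[[:space:]]*"((\\(["\\/bfnrt]|u[0-9a-fA-F]{4}))|[^"])*"' data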
Another simplification I've made is that the key doesn't allow for the various escape sequences that the value string does; you could add the same alternation to the key pattern.
And, of course, this only works for 'leaf' name:value pairs; if a value is itself an object {…} or an array […], this doesn't handle the value as a whole.
However, this just goes to emphasize that it gets very messy very quickly and you would be better off using a special-purpose JSON query tool. One such tool is jq, as mentioned in a comment to the main query.
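For example, since the underlying goal is JSON-to-CSV conversion, a jq sketch for the simple blob at the top (the field list is chosen here just for illustration):
jq -r '[.number, .projectName, .queueId, .result] | @csv' jsonfile.json
which prints 303,"giga",8881,"SUCCESS".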
The complex JSON blob I had was from Jenkins (i.e. a Jenkins job's REST API data) stored in a MongoDB database.
To grab it from MongoDB, I successfully used the mongoexport command to generate the JSON blob (in non-JSON-array, non-pretty format).
#!/bin/bash
server=localhost
db=database_Jenkins      # database name, taken from the mongo URI below
exportDir=.              # where the per-collection JSON exports should be written; adjust as needed
collectionFile=collections.txt
## Generate a collection file containing all collections in the Jenkins database in MongoDB.
( set -x
mongo "mongoDbServer.company.com/database_Jenkins" --eval "rs.slaveOk();db.getCollectionNames()" --quiet > ${collectionFile}
)
## create collection based JSON files
for collection in $(cat ${collectionFile} | sed -e 's:,: :g')
do
mongoexport --host ${server} --db ${db} --collection "${collection}" --out ${exportDir}/${collection}.json
##mongoexport --host ${server} --db ${db} --collection "${collection}" --type=csv --fieldFile ~/mongoDB_fetch/get_these_csv_fields.txt --out ${exportDir}/${collection}.csv; ## This didn't work if you have nested fields. fieldFile file was just containing field name per line in a particular xyz.IndexNumber.yyy format.
done
I tried the inbuilt mongoexport command's --type=csv with -f fields to catch topfield.0.subField, field2, field3.7.parameters.7 and so on; nothing worked.
PS: The number after the . mark is how you define indexes if you are going to create a CSV file using the (mandatory) field list with the mongoexport command.
As my JSON structure was all unstructured (Jenkins version bumps/upgrades happened in the past, so the data about a job does not always have the same structure), I tried this final sed trick, since the JSON data for each entry was on its own line.
This sed command (shown below) will give you all the keys and their values (in key=value format), one per line, at the leaf level of almost any JSON blob, or at least of the Jenkins JSON blob. Once you have this, you can feed the output to a temporary file, read all the value parts (after the = mark), and create your CSV file accordingly. Yes, you have to sort it so that your CSV file's fields stay in order and the values land under the right header/column. I calculated the field names from the key=value key names generated across all the different collections' temporary files, then read each temporary collection file and added its values to the final CSV under the respective header/field/column.
OK, this is a weird solution, but at least it's a solution, in a one-liner.
cat myJenkinsJob.json | sed "s/{}//g;s/,,*/,/g;s/},\"/\n/g;s/},{/\n/g;s/\([^\"a-zA-Z]\),\"/\1\n/g;s/:\[{/\n/g;s/\"name\":\"//g;s/\",\"value//g;s/,\"/\n/g;s/\":\"*/=/g;s/\"//g;s/[\[}\]]//g;s/[{}]//g;s/\$[a-zA-Z][a-zA-Z]*=//g"|grep "=" | sed "s/,$//"|egrep -v "=-|=$|=\[|^_class="
Tweak this acc. to your own solution for the sed part a little bit, if your JSON blob shows you funny characters that you don't want. The order of sed operations below is important. I'm also excluding any redundant variables (that I don't need at this time, for ex: JSON blob contained _class="..." values) so I'm excluding those via egrep -v after the last | pipe.