Iterating JSON to store key-value pairs using a shell script

I have a JSON file that is created at runtime by an sh script within Groovy code. The JSON file has the contents below.
cat.json
{
"user1":"pass1",
"user2":"pass2",
"user3":"pass3"
}
Now I want to create a file at runtime which stores the key-value pairs in the format below:
test
user1:pass1
user2:pass2
user3:pass3
Can someone help me with the shell code for writing this?

You have literally a dozen ways to convert that JSON document to a tabular data file (pretty much like CSV/colon-SV) since you mentioned Java and Groovy, including Java-driven scripting engines (BeanShell, JavaScript, Groovy itself). But if you can use jq, then you can extract the key/value pairs, at least for simple values that do not require any escaping:
#!/bin/sh
jq -r 'to_entries[] | "\(.key):\(.value)"' \
< cat.json
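With the cat.json above, this prints exactly the desired key:value lines:
user1:pass1
user2:pass2
user3:pass3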
This answer was inspired by searching for how to extract entries using jq (or convert a JSON file to a CSV file), and especially by the answer https://stackoverflow.com/a/50496145/12232870 by @peak.

Related

How to make string replacement in a JSON template?

If I have a JSON template in which some variables should be replaced with their actual values, is there a good way to handle the proper escaping?
For example, $value may be replaced with a string that contains characters like " that should be treated specially in JSON.
{ "x": $value }
The template could be arbitrarily complex, so it is not a good solution to code the template in some programming language like Python, perform the replacement in that language, and then dump the JSON output.
Could anybody show me a generic but succinct way to perform the replacement?
Note that I tagged this question with jq, though I am not sure it is strictly relevant; if not, please remove the tag. I tagged it because people who know jq may also know solutions to my question, although jq is just for transforming JSON files. An elegant solution may be similar to jq in the sense that a domain-specific language is defined.
jq works quite nicely as a template engine, but there are choices to be made, e.g. depending on whether the "template" itself is valid JSON.
In the example you gave, the template is not valid JSON, but it is potentially valid jq, so the strategy using jq "$-variables" would make sense, e.g. along the lines of:
jq -n --arg value "someValue" -f template.jq
where template.jq is your template.
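For instance, with the template from the question saved as template.jq, the $-variable substitution handles the JSON escaping automatically (a minimal sketch; the quoted value is just an illustration):
# template.jq
{ "x": $value }

$ jq -n --arg value 'say "hi"' -f template.jq
{
  "x": "say \"hi\""
}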
Three different strategies for using jq as a template engine are explained at some length in the jq Cookbook:
https://github.com/stedolan/jq/wiki/Cookbook#using-jq-as-a-template-engine

Replace value of object property in multiple JSON files

I'm working with multiple JSON files that are located in the same folder.
The files contain objects with the same properties and look like this:
{
"identifier": "cameraA",
"alias": "a",
"rtsp": "192.168.1.1"
}
I want to replace a property for all the objects in the JSON files at the same time for a certain condition.
For example, let's say that I want to replace all the rtsp values of the objects with identifier equal to "cameraA".
I've been trying with something like:
jq 'if .identifier == "cameraA" then .rtsp = "cameraX" else . end' -c *.json
But it isn't working.
Is there a simple way to replace the property of an object among multiple JSON files?
jq cannot edit files in place and only writes to STDOUT, so the simplest approach would be to process one file at a time, e.g. by putting your jq program inside a shell loop; sponge is often used with this approach, as in the sketch below.
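A minimal sketch of that loop, reusing the filter from the question (sponge comes from the moreutils package):
for f in *.json; do
  jq 'if .identifier == "cameraA" then .rtsp = "cameraX" else . end' "$f" | sponge "$f"
done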
However, there is an alternative that has the advantage of efficiency: it requires only one invocation of jq, whose output includes the filename information (obtained from input_filename). That output then becomes the input of an auxiliary process, e.g. awk; one possible shape of that pipeline is sketched below.
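A sketch of that alternative, assuming one top-level object per file and no tab characters in the data (the .new suffix is illustrative; you would still need to move the results into place):
jq -r '(if .identifier == "cameraA" then .rtsp = "cameraX" else . end)
       | input_filename + "\t" + tojson' *.json |
awk -F'\t' '{ print $2 > ($1 ".new") }'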

Pulling data out of JSON Object via jq to CSV in Bash

I'm working on a bash script (running via Git Bash on Windows, technically, but I don't think that matters) that will convert some JSON API data into CSV files. Most of it has gone fairly well, especially considering I'm not particularly familiar with jq; this is my first time using it.
I've got some JSON data that looks like the array below. What I'm trying to do is select the cardType, maskedPan, amount and dateTime out of the data.
This is probably the first time in my life that my Google searching has failed me. I know (or should I say think) that this is actually an object and not just a simple array.
I haven't really found anything that shows how to grab the data I need and export it to a CSV file. I've had no issue grabbing the other data I need, but these few pieces are proving to be a big problem for me.
The script I'm trying basically can be boiled down to this:
jq='/c/jq-win64.exe -r';
header='("cardType")';
fields='[.TransactionDetails[0].Value[0].cardType]';
$jq ''$header',(.[] | '$fields' | @csv)' < /t/API_Data/JSON/GetByDate-082719.json > /t/API_Data/CSV/test.csv;
If I do .TransactionDetails[0].Value, I can get that whole chunk of data, but that is problematic in a CSV since it contains commas.
I suppose I could make this a TSV, import it into the database as one big string, and substring it out, but that isn't the "right" solution. I'm sure there is a way jq can give me what I need.
"TransactionDetails": [
{
"TransactionId": 123456789,
"Name": "BlacklinePaymentDetail",
"Value": "{\"cardType\":\"Visa\",\"maskedPan\":\"1234\",\"paymentDetails\":{\"reference\":\"123456789012\",\"amount\":99.99,\"dateTime\":\"2019/08/27 08:41:09\"}}",
"ShowOnTill": false,
"PrintOnOrder": false,
"PrintOnReceipt": false
}
]
Ideally I'd be able to just have a field in the CSV for each of cardType, maskedPan, amount and dateTime, instead of pulling the "Value" string that contains all of them.
Any advice would be appreciated.
The ingredient you're missing is fromjson, which converts stringified JSON to JSON. After adding enclosing braces around your sample input,
the invocation:
jq -r -f program.jq input.json
produces:
"Visa","1234",99.99,"2019/08/27 08:41:09"
where program.jq is:
.TransactionDetails[0].Value
| fromjson
| [.cardType, .maskedPan] + (.paymentDetails | [.amount, .dateTime])
| @csv
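To also get the header row the original script was building, the same program can prepend it (a sketch using the field names above):
(["cardType","maskedPan","amount","dateTime"] | @csv),
(.TransactionDetails[0].Value
 | fromjson
 | [.cardType, .maskedPan] + (.paymentDetails | [.amount, .dateTime])
 | @csv)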

Oracle SQLcl: Spool to json, only include content in items array?

I'm making a query via Oracle SQLcl. I am spooling into a .json file.
The correct data is presented from the query, but the format is strange.
Starting off as:
SET ENCODING UTF-8
SET SQLFORMAT JSON
SPOOL content.json
Followed by a query, this produces a JSON file as requested.
However, how do I remove the outer structure, meaning this part:
{"results":[{"columns":[{"name":"ID","type":"NUMBER"},
{"name":"LANGUAGE","type":"VARCHAR2"},{"name":"LOCATION","type":"VARCHAR2"},{"name":"NAME","type":"VARCHAR2"}],"items": [
// Here is the actual data I want to see in the file exclusively
]
I only want to spool everything in the items array, not including that key itself.
Is it possible to set this as a parameter before querying? Reading the Oracle docs has not yielded any answers, hence asking here.
Here's how I handle this.
After spooling the output to a file, I use a jq command to recreate the file with only the items:
cat file.json | jq --compact-output --raw-output '.results[0].items' > items.json
Using this tool: https://stedolan.github.io/jq/

Ways to parse JSON using KornShell

I have working code that parses JSON output in KornShell by treating it as a string of characters. The issue I have is that the vendor keeps changing the position of the field that I am interested in. I understand that in JSON we can parse by key-value pairs.
Is there something out there that can do this? I am interested in a specific field, and I would like to use it to run checks on the status of another REST API call.
My sample json output is like this:
JSONDATA value :
{
"status": "success",
"job-execution-id": 396805,
"job-execution-user": "flexapp",
"job-execution-trigger": "RESTAPI"
}
I would need the job-execution-id value to monitor this job through the rest of the script.
I am using the following command to parse it:
RUNJOB=$(print ${DATA} | cut -f3 -d':' | cut -f1 -d',' | tr -d '[:blank:]') >> ${LOGDIR}/${LOGFILE}
The problem with this is that it is field-delimited by ':', and the field position has been known to change between vendor releases.
So I am trying to see if I can use a utility that will always give me the key-value pair "job-execution-id": 396805, no matter where it appears in the JSON output.
I started looking at jsawk, but it requires the js interpreter to be installed on our machines, which I don't want. Any hint on how to find which RPM I need to solve this?
I am using RHEL5.5.
Any help is greatly appreciated.
The ast-open project has libdss (and a dss wrapper), which supposedly can be used with ksh. Documentation is sparse and limited to a few messages on the ast-user mailing list.
The regression tests for libdss contain some json and xml examples.
I'll try to find more info.
Python is included by default with CentOS so one thing you could do is pass your JSON string to a Python script and use Python's JSON parser. You can then grab the value written out by the script. An example you could modify to meet your needs is below.
Note that by specifying other dictionary keys in the Python script you can get any of the values you need without having to worry about the order changing.
Python script:
# get_job_execution_id.py
# The try/except is because you'll probably have Python 2.4 on CentOS 5.5,
# and a plain "import json" won't work unless you have Python 2.6+.
try:
    import json
except ImportError:
    import simplejson as json

import sys

json_data = sys.argv[1]                      # JSON document passed as the first argument
data = json.loads(json_data)                 # parse into a dictionary
job_execution_id = data['job-execution-id']  # key lookup is position-independent
sys.stdout.write(str(job_execution_id))
KornShell script that executes it:
#!/bin/ksh
# get_job_execution_id.sh
JSON_DATA='{"status":"success","job-execution-id":396805,"job-execution-user":"flexapp","job-execution-trigger":"RESTAPI"}'
EXECUTION_ID=`python get_job_execution_id.py "$JSON_DATA"`
echo $EXECUTION_ID
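Running the KornShell script prints the extracted value, no matter where the key appears in the JSON:
$ ksh get_job_execution_id.sh
396805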