I'm trying to update an existing json file from values in another json file using jq in a bash shell.
I've got a settings json file
{
"Logging": {
"MinimumLevel": {
"Default": "Information",
"Override": "Warning"
},
"WriteTo": [
{
"Name": "File",
"Args": {
"path": "./logs/log-.txt",
"rollingInterval": "Day"
}
}
]
},
"Settings": {
"DataServerUrl": "https://address.to.server.com",
"ServerKey": "1f969476798adfe95114dd28ed3a3ff"
"ServerTimeZone": "Mountain Standard Time",
"MaxOccupantCount": 6
}
}
In an integration step, I'm attempting to incorporate values for specific environments (think dev/staging/prod) from an external json file with limited setting values. An example of such a file is
{
"DataServerUrl": "https://dev.server.addr.com",
"ServerKey": "2a4d99233efea456b95114aa23ed342ae"
}
I can get to the data using jq. I can update the data using jq if I hard-code the updates. I'm looking for something general that takes in any environment settings values and updates them in the base settings file. My searches suggest I can do this in a single step without knowing the specific values. I tried a command similar to
jq -r 'to_entries[]' settings.dev.json |
while IFS= read -r key value; do
jq -r '.[$key] |= [$value]' settings.json
done
What happens is I get error messages stating jq: error: $key is not defined at <top-level> (as well as the same message for $value). The messages appear several times in pairs. settings.json is not changed. Now, this makes partial sense because the output from just jq -r 'to_entries[]' settings.dev.json looks like (empty space in this output is included as produced by the command).
"key": "DataServerUrl",
"value": "https://dev.server.addr.com"
"key": "ServerKey",
"value": "2a4d99233efea456b95114aa23ed342ae"
How do I go about iterating over the values in the environment settings file such that I can use those values to update the base settings file for further processing (i.e., publishing to the target environment)?
The simplest way is to provide both files and address the second one using input. That way, all you need is the assignment:
jq '.Settings = input' settings.json insert.json
{
"Logging": {
"MinimumLevel": {
"Default": "Information",
"Override": "Warning"
},
"WriteTo": [
{
"Name": "File",
"Args": {
"path": "./logs/log-.txt",
"rollingInterval": "Day"
}
}
]
},
"Settings": {
"DataServerUrl": "https://dev.server.addr.com",
"ServerKey": "2a4d99233efea456b95114aa23ed342ae"
}
}
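Note that = replaces the entire Settings object, so base keys that are absent from the environment file (ServerTimeZone, MaxOccupantCount) are dropped in the output above. If you would rather merge the environment values into the existing Settings, a small variation (an untested sketch, same filenames) should do it:
jq '.Settings += input' settings.json insert.json
Here + on two objects merges them, with the right-hand side winning on duplicate keys, so only DataServerUrl and ServerKey are overwritten.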
You could do something like
jq -s '.[1] as $insert | .[0].Settings |= $insert | .[0]' settings.json insert.json
Where we:
Slurp both files (-s)
Save the second input, insert.json (.[1]), in a variable called $insert
Update (|=) .[0].Settings with $insert
Output only the first file, .[0]
So the output will become:
{
"Logging": {
"MinimumLevel": {
"Default": "Information",
"Override": "Warning"
},
"WriteTo": [
{
"Name": "File",
"Args": {
"path": "./logs/log-.txt",
"rollingInterval": "Day"
}
}
]
},
"Settings": {
"DataServerUrl": "https://dev.server.addr.com",
"ServerKey": "2a4d99233efea456b95114aa23ed342ae"
}
}
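If you prefer not to slurp both files, a roughly equivalent sketch (same filenames assumed) reads only the environment file into a variable with --slurpfile, which always wraps the file's contents in an array, hence the [0]:
jq --slurpfile insert insert.json '.Settings = $insert[0]' settings.json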
What I'm trying to do currently is, within each environment, compare the mainAccount and secondAccount values.
If they match, I will trigger some downstream code to check the file version. If they do not, I will pass. That part is not really relevant; what I am struggling with is comparing the values within each environment, since each .json file will have a different number of environments.
Meaning, in the testing environment I want to check whether mainAccount = secondAccount, and the same in the production environment.
I'm running into issues parsing this JSON with jq:
json1
{
"file_version": 1.0,
"config": [
{
"environment": "testing",
"main": [
{
"mainAccount": "123"
}
],
"second": [
{
"secondAccount": "456"
}
]
},
{
"environment": "production",
"main": [
{
"mainAccount": "789"
}
],
"second": [
{
"secondAccount": "789"
}
]
}
]
}
Here's another sample .json file for comparison:
json2
{
"file_version": 1.3,
"config": [
{
"environment": "testing",
"main": [
{
"mainAccount": "123"
}
],
"second": [
{
"secondAccount": "456"
}
]
},
{
"environment": "production",
"main": [
{
"mainAccount": "789"
}
],
"second": [
{
"secondAccount": "789"
}
]
},
{
"environment": "pre-production",
"main": [
{
"mainAccount": "456"
}
],
"second": [
{
"secondAccount": "789"
}
]
},
{
"environment": "staging",
"main": [
{
"mainAccount": "234"
}
],
"second": [
{
"secondAccount": "456"
}
]
}
]
}
If I run this command:
jq -r '.config[] | select(.main != null) | .main[].mainAccount'
My output is:
123
789
If I store this output in a variable, it'll be 123 789, so comparing it to the "secondAccount" value is troublesome.
I think what I'm looking for is iteration here; however, I'm not sure how to implement it. I wanted to take a Pythonic approach: check the length of the config array, create a for loop over that range, then collect the value based on an index, like
.config[0] | select(.main != null) | .main[].mainAccount
.config[1] | select(.main != null) | .main[].mainAccount
etc. The issue, however, is that when I read the .config[] value into a bash variable, bash doesn't interpret it like that. The length will be the number of characters, not the number of objects in the array.
EXPECTED OUTPUT
Nothing. I simply want to, for each .json file above, compare the mainAccount and secondAccount values with each other within each environment.
In json1, I want to compare mainAccount == secondAccount in environment: testing, then mainAccount == secondAccount in environment: production.
Then move on to json2 and compare mainAccount == secondAccount in environment: testing, then in production, pre-production, staging, and so on.
Since all the information is within this one JSON file, it is better to do as much of the processing as possible in jq and keep the shell out of it.
Given your input, you can try this jq program:
jq '
.config[]
| {
environment,
condition: (.main[0].mainAccount == .second[0].secondAccount)
}' input.json
The result is:
{
"environment": "testing",
"condition": false
}
{
"environment": "production",
"condition": true
}
Some questions though:
Why are the values of main and second arrays of objects rather than plain objects?
Is it really intended to compare only the first item of each?
Can there be more items in the arrays?
Also: if you want to process the results in a shell, I propose this expression, because its output can be consumed directly by a shell (e.g. via source or eval):
jq -r '
.config[]
| "\(.environment)=\(.main[0].mainAccount == .second[0].secondAccount)"' input.json
The output is:
testing=false
production=true
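One way to consume that output back in bash could look like the following sketch; the echo messages and the input.json filename are just placeholders for your downstream logic:
jq -r '
.config[]
| "\(.environment)=\(.main[0].mainAccount == .second[0].secondAccount)"' input.json |
while IFS='=' read -r env matches; do
  if [ "$matches" = "true" ]; then
    echo "$env: accounts match, checking file version"
  else
    echo "$env: accounts differ, skipping"
  fi
done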
You can do the comparison within jq, return the boolean result as its exit status using the -e option, and react upon that in bash, e.g. using an if statement.
if jq -e '
.config | map(select(.main != null) | .main[].mainAccount) | .[0] == .[1]
' file.json >/dev/null
then echo "equal"
else echo "not equal"
fi
not equal
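If you instead want the exit status to reflect the question's per-environment comparison (mainAccount vs. secondAccount in every environment), a variation along these lines should work, using the all builtin to collapse the per-environment booleans:
if jq -e '[.config[] | .main[0].mainAccount == .second[0].secondAccount] | all' file.json >/dev/null
then echo "accounts match in every environment"
else echo "accounts differ in at least one environment"
fi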
I have a report.json generated by a gitlab pipeline.
It looks like:
{"version":"14.0.4","vulnerabilities":[{"id":"64e69d1185ecc48a1943141dcb6dbd628548e725f7cef70d57403c412321aaa0","category":"secret_detection"....and so on
If no vulnerabilities are found, then "vulnerabilities": []. I'm trying to come up with a bash script that checks whether the vulnerabilities length is null or not, and if not, prints the value of the vulnerabilities key. Sadly, I'm very far from being a scripting genius, so it's been a struggle.
While searching the web for a solution, I came across jq. It seems like select() should do the job.
I've tried:
jq "select(.vulnerabilities!= null)" report.json
but it returned {"version":"14.0.4","vulnerabilities":[{"id":"64e69d1185ecc48a194314... instead of the expected "vulnerabilities":[{"id":"64e69d1185ecc48a194314...
and
map(select(.vulnerabilities != null)) report.json
returns "No matches found"
Would you mind pointing out what's wrong apart from my 0 experience with bash and JSON parsing? :)
Thanks in advance
Just use the .vulnerabilities filter to select the vulnerabilities value.
Here are some cases:
$ jq '.vulnerabilities' <<END
heredoc> {"version":"14.0.4","vulnerabilities":[{"id":"64e69d1185ecc48a1943141dcb6dbd628548e725f7cef70d57403c412321aaa0","category":"secret_detection"}]}
heredoc> END
[
{
"id": "64e69d1185ecc48a1943141dcb6dbd628548e725f7cef70d57403c412321aaa0",
"category": "secret_detection"
}
]
If vulnerabilities is null, then jq will return null:
$ jq '.vulnerabilities' <<END
{"version":"14.0.4","vulnerabilities":null}
END
null
Then, with a pipe (|), you can transform it into whatever output you want.
To change null to []: .vulnerabilities | if . == null then [] else . end
To filter out an empty array: .vulnerabilities | select(length > 0)
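Putting those pieces together, a single command along these lines (assuming your file is named report.json) prints the array only when it contains at least one element and outputs nothing otherwise:
jq '.vulnerabilities | select(length > 0)' report.json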
For further information about jq filters, you can read the jq manual.
Assuming, by "print the value of the vulnerabilities key" you mean the value of an item's id field. You can retrieve it using .id and have it extracted to bash with the -r option.
If in case the array is not empty you want all of the "keys", iterate over the array using .[]. If you just wanted a specific key, let's say the first, address it using a 0-based index: .[0].
To check the length of an array there is a dedicated length builtin. However, as your final goal is to extract, you can also attempt to do so right anyway, suppress a potential unreachability error using the ? operator, and have your bash script read an appropriate exit status using the -e option.
Your bash script then could include the following snippet
if key=$(jq -re '.vulnerabilities[0].id?' report.json)
then
# If the array was not empty, $key contains the first key
echo "There is a vulnerability in key $key."
fi
# or
if keys=$(jq -re '.vulnerabilities[].id?' report.json)
then
# If the array was not empty, $keys contains all the keys
for k in $keys
do echo "There is a vulnerability in key $k."
done
fi
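If the ids could ever contain whitespace, reading the output line by line is a bit safer than relying on word splitting; a sketch under that assumption:
jq -r '.vulnerabilities[].id?' report.json |
while IFS= read -r k
do echo "There is a vulnerability in key $k."
done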
Firstly, please note that in the JSON world it is important to distinguish between [] (the empty array), the values 0 and null, and the absence of a value (e.g. as the result of the absence of a key in an object).
In the following, I'll assume that the output should be the value of .vulnerabilities if it is not [], or nothing otherwise:
< sample.json jq '
select(.vulnerabilities != []).vulnerabilities
'
If the goal were to differentiate between two cases based on the return code from jq, you could use the -e command-line option.
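For instance, a check of that kind could look like the following sketch (the echo messages are placeholders):
if jq -e '.vulnerabilities != []' report.json >/dev/null
then echo "vulnerabilities found"
else echo "no vulnerabilities"
fi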
You can use if-then-else.
Filter
if (.vulnerabilities | length) > 0 then {vulnerabilities} else empty end
Input
{
"version": "1.1.1",
"vulnerabilities": [
{
"id": "111",
"category": "secret_detection"
},
{
"id": "112",
"category": "secret_detection"
}
]
}
{
"version": "1.2.1",
"vulnerabilities": [
{
"id": "121",
"category": "secret_detection 2"
}
]
}
{
"version": "3.1.1",
"vulnerabilities": []
}
{
"version": "4.1.1",
"vulnerabilities": [
{
"id": "411",
"category": "secret_detection 4"
},
{
"id": "412",
"category": "secret_detection"
},
{
"id": "413",
"category": "secret_detection"
}
]
}
Output
{
"vulnerabilities": [
{
"id": "111",
"category": "secret_detection"
},
{
"id": "112",
"category": "secret_detection"
}
]
}
{
"vulnerabilities": [
{
"id": "121",
"category": "secret_detection 2"
}
]
}
{
"vulnerabilities": [
{
"id": "411",
"category": "secret_detection 4"
},
{
"id": "412",
"category": "secret_detection"
},
{
"id": "413",
"category": "secret_detection"
}
]
}
Demo: https://jqplay.org/s/wicmr4uVRm
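For reference, applied to a file the filter above would be invoked along these lines (report.json is an assumed filename):
jq 'if (.vulnerabilities | length) > 0 then {vulnerabilities} else empty end' report.json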
I have a JSON-formatted overview of backups, generated using pgbackrest. For simplicity I removed a lot of clutter so that only the main structures remain. The list can contain multiple backup structures; I reduced it to just one here.
[
{
"backup": [
{
"archive": {
"start": "000000090000000200000075",
"stop": "000000090000000200000075"
},
"info": {
"size": 1200934840
},
"label": "20220103-122051F",
"type": "full"
},
{
"archive": {
"start": "00000009000000020000007D",
"stop": "00000009000000020000007D"
},
"info": {
"size": 1168586300
},
"label": "20220103-153304F_20220104-081304I",
"type": "incr"
}
],
"name": "dbname1"
}
]
Using jq I tried to generate a simpler format out of this, so far without any luck.
What I would like to see is backup.archive, backup.info, backup.label, backup.type, and name combined into one simple structure, without getting a Cartesian product. I would be very happy to get the following output:
[
{
"backup": [
{
"archive": {
"start": "000000090000000200000075",
"stop": "000000090000000200000075"
},
"name": "dbname1",
"info": {
"size": 1200934840
},
"label": "20220103-122051F",
"type": "full"
},
{
"archive": {
"start": "00000009000000020000007D",
"stop": "00000009000000020000007D"
},
"name": "dbname1",
"info": {
"size": 1168586300
},
"label": "20220103-153304F_20220104-081304I",
"type": "incr"
}
]
}
]
where name is redundantly added to the list. How can I use jq to convert the shown input into the requested output? In the end I just want to generate a simple CSV from the data. Even with the simplified structure, using
'.[].backup[].name + ":" + .[].backup[].type'
I get a cartesian product:
"dbname1:full"
"dbname1:full"
"dbname1:incr"
"dbname1:incr"
How do I solve that?
So, for each object in the top-level array you want to pull in .name into each of its .backup array's elements, right? Then try
jq 'map(.backup[] += {name} | del(.name))'
Then, generating CSV output using jq is easy: there is a builtin called @csv which transforms an array into a string of its values, quoted (if they are strings) and separated by commas. So all you need to do is iteratively compose your desired values into arrays. At this point, removing .name is not necessary anymore, as we are piecing together the array for CSV output anyway. And we're giving the -r flag to jq in order to make the output raw text rather than JSON.
jq -r '.[]
| .backup[] + {name}
| [(.archive | .start, .stop), .name, .info.size, .label, .type]
| @csv
'
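With the sample input above, that should yield CSV rows along these lines (strings quoted, numbers not, as @csv does):
"000000090000000200000075","000000090000000200000075","dbname1",1200934840,"20220103-122051F","full"
"00000009000000020000007D","00000009000000020000007D","dbname1",1168586300,"20220103-153304F_20220104-081304I","incr"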
First navigate to backup, and only then “print” the stuff you’re interested in.
.[].backup[] | .name + ":" + .type
I have two .json files.
The first is 1.json
{
"id": "107709375",
"type": "page",
"title": "SomeTitle",
"space": {
"key": "BUSINT"
},
"version": {
"number": 62
}
}
And the second one logg.json:
{
"id": "228204270",
"type": "page",
"status": "current",
"title": "test-test",
"version": {
"when": "2016-11-23T16:54:18.313+07:00",
"number": 17,
"minorEdit": false
},
"extensions": {
"position": "none"
}
}
Can I paste version.number from logg.json into version.number in 1.json using jq? I need something like this (it's absolutely wrong):
jq-win64 ".version.number 1.json" = ".version.number +1" logg.json
Read logg.json as an argument file. You can then access its values to make changes to the other file.
$ jq --argfile logg logg.json '.version.number = $logg.version.number + 1' 1.json
Of course, you'll need to use double quotes for this to work in the Windows Command Prompt.
> jq --argfile logg logg.json ".version.number = $logg.version.number + 1" 1.json
Although the documentation says to use --slurpfile instead, we only have a single object in the file, so it is perfectly appropriate to use --argfile here.
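For reference, the --slurpfile equivalent only needs an extra [0] index, since --slurpfile always wraps the file's contents in an array (Mac/Linux quoting shown):
jq --slurpfile logg logg.json '.version.number = $logg[0].version.number + 1' 1.json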
Could somebody help me figure out how to use the jq command-line utility to update a JSON object's inner value?
I want to alter the object interpreterSettings."2B188AQ5T".properties by adding several key-value pairs, like "spark.executor.instances": "16".
So far I have only managed to fully replace this object, rather than add new properties, with the command:
cat test.json | jq ".interpreterSettings.\"2B188AQ5T\".properties |= { \"spark.executor.instances\": \"16\" }"
This is input JSON:
{
"interpreterSettings": {
"2B263G4Z1": {
"id": "2B263G4Z1",
"name": "sh",
"group": "sh",
"properties": {}
},
"2B188AQ5T": {
"id": "2B188AQ5T",
"name": "spark",
"group": "spark",
"properties": {
"spark.cores.max": "",
"spark.yarn.jar": "",
"master": "yarn-client",
"zeppelin.spark.maxResult": "1000",
"zeppelin.dep.localrepo": "local-repo",
"spark.app.name": "Zeppelin",
"spark.executor.memory": "2560M",
"zeppelin.spark.useHiveContext": "true",
"spark.home": "/usr/lib/spark",
"zeppelin.spark.concurrentSQL": "false",
"args": "",
"zeppelin.pyspark.python": "python"
}
}
},
"interpreterBindings": {
"2AXUMXYK4": [
"2B188AQ5T",
"2AY8SDMRU"
]
}
}
I also tried the following, but this only prints the contents of interpreterSettings."2B188AQ5T".properties, not the full object.
cat test.json | jq ".interpreterSettings.\"2B188AQ5T\".properties + { \"spark.executor.instances\": \"16\" }"
The following works using jq 1.4 or jq 1.5 with a Mac/Linux shell:
jq '.interpreterSettings."2B188AQ5T".properties."spark.executor.instances" = "16" ' test.json
If you have trouble adapting the above for Windows, I'd suggest putting the jq program in a file, say my.jq, and invoking it like so:
jq -f my.jq test.json
Notice that there is no need to use "cat" in this case.
P.S. You were on the right track: try replacing |= with +=.
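For reference, the += variant mentioned in the postscript would look like this (it merges the new key into the existing properties object instead of replacing it):
jq '.interpreterSettings."2B188AQ5T".properties += { "spark.executor.instances": "16" }' test.json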