How to merge two JSON files with jq?

I am trying to merge two JSON files using --argjson, but it throws
argument list too long
Is there any other way I can merge these files? I also tried --slurpfile but could not get the desired output. The merged data array should look like this:
[
  {
    "path": "1"
  },
  {
    "path": "a/1"
  },
  {
    "path": "a/2"
  }
]
My attempt:
jq --argjson groupInfo "$(jq .data file1.json)" '.records[].version.data+=$groupInfo' file2.json
File 1:
{
  "id": "test",
  "data": [
    {
      "path": "a/1"
    },
    {
      "path": "a/2"
    }
  ],
  "information": {
    "id": "1"
  }
}
File 2:
{
  "records": [
    {
      "version": {
        "data": [
          {
            "path": "1"
          }
        ]
      }
    }
  ]
}
Output File:
{
  "records": [
    {
      "version": {
        "data": [
          {
            "path": "1"
          },
          {
            "path": "a/1"
          },
          {
            "path": "a/2"
          }
        ]
      }
    }
  ]
}

The --arg and --argjson options pass their values on the command line, so they are only suitable for small bits of JSON; a large file runs into the operating system's limit on argument length, which is exactly the "argument list too long" error you saw. Although the --argfile option is technically deprecated, it fits nicely here with your approach:
jq --argfile groupInfo <(jq .data file1.json) '
.records[].version.data+=$groupInfo' file2.json
There are other options. E.g.
jq -s '
.[0].data as $groupInfo
| .[1]
| .records[].version.data+=$groupInfo
' file1.json file2.json
I'll let you figure out how to use --slurpfile :-)
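(For reference, a --slurpfile variant could look like the following; --slurpfile binds an array of the file's parsed contents to the variable, hence the [0]:)
jq --slurpfile groupInfo file1.json '
  .records[].version.data += $groupInfo[0].data
' file2.json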

Related

Can we use dynamic values in a static JSON file?

I have a JSON file for application configuration, like below.
[
  {
    "name": "environment",
    "value": "prod"
  },
  {
    "name": "deployment_date",
    "value": "2022-12-21"
  }
]
I want the deployment_date value to be dynamic, set to the current UTC date. Can this be achieved with some programming language, something like getUTCDate().toString() instead of "2022-12-21"?
Using jq:
jq '(.[] | select(.name == "deployment_date")).value |= (now | todate)' file.json
Output
[
  {
    "name": "environment",
    "value": "prod"
  },
  {
    "name": "deployment_date",
    "value": "2022-12-21T12:46:11Z"
  }
]
If you only want the date part, formatted in local time, use strflocaltime instead:
jq '(.[] | select(.name == "deployment_date")).value |= (now | strflocaltime("%Y-%m-%d"))' file.json
Output
[
  {
    "name": "environment",
    "value": "prod"
  },
  {
    "name": "deployment_date",
    "value": "2022-12-21"
  }
]
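Note that jq writes the result to standard output; to update file.json in place, a common pattern is to write to a temporary file and then replace the original:
jq '(.[] | select(.name == "deployment_date")).value |= (now | todate)' file.json > tmp.json \
  && mv tmp.json file.json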

Extract JSON including key with jq command

Here is a sample JSON file.
sample.json
{
  "apps": [
    {
      "name": "app1"
    },
    {
      "name": "app2"
    },
    {
      "name": "app3"
    }
  ],
  "test": [
    {
      "name": "test1"
    },
    {
      "name": "test2"
    }
  ]
}
I want to split the above JSON file into the following two files. I want to manage the entire configuration in one JSON file, splitting it when necessary to hand to a tool.
apps.json
{
  "apps": [
    {
      "name": "app1"
    },
    {
      "name": "app2"
    },
    {
      "name": "app3"
    }
  ]
}
test.json
{
  "test": [
    {
      "name": "test1"
    },
    {
      "name": "test2"
    }
  ]
}
jq .apps sample.json outputs only the value:
[
  // does not contain the key
  {
    "name": "app1"
  },
  {
    "name": "app2"
  },
  {
    "name": "app3"
  }
]
Do you have any idea?
Construct a new object using {x} which is a shorthand for {x: .x}.
jq '{apps}' sample.json
{
  "apps": [
    {
      "name": "app1"
    },
    {
      "name": "app2"
    },
    {
      "name": "app3"
    }
  ]
}
And likewise with {test}.
You can also emit both at once:
jq '{apps}, {test}' sample.json
Demo: https://jqplay.org/s/P_9cc2uANV
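To actually write the two files, run the filter twice, redirecting each result:
jq '{apps}' sample.json > apps.json
jq '{test}' sample.json > test.json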

jq: Delete objects from an array based on its key

I want to use jq to delete all objects from an array whose key does not correspond to a defined value.
This is my JSON:
{
  "name": "config1",
  "children": [
    {
      "customer": {
        "name": "cust1"
      }
    },
    {
      "filter": {
        "name": "test1"
      }
    },
    {
      "filter": {
        "name": "test2"
      }
    },
    {
      "context": {
        "id": "1"
      }
    }
  ]
}
For example I want to remove all objects whose key is not "filter". Desired output:
{
  "name": "config1",
  "children": [
    {
      "filter": {
        "name": "test1"
      }
    },
    {
      "filter": {
        "name": "test2"
      }
    }
  ]
}
I tried
jq 'del(.children[] | with_entries(select(.key != "filter")))'
but that gives the following error:
jq: error (at <stdin>:1): Invalid path expression near attempt to iterate through ["customer"]
jq 'del(.children[] | select(.filter == null))'
You can use the to_entries function, e.g.:
jq 'del(.children[] | select( to_entries[] | .key != "filter"))'
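For this input, an equivalent approach is to keep only the array elements that have a "filter" key, using has:
jq '.children |= map(select(has("filter")))'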

Parse 2 files based on key value and recreate another json file [JQ]

I am new to jq.
I need to build a JSON file based on two other files.
I have been working on it all day and am stuck. I badly need help with this.
Here is file 1
{
  "name": "foo",
  "key": "1",
  "id": "x"
}
{
  "name": "bar",
  "key": "2",
  "id": "x"
}
{
  "name": "baz",
  "key": "3",
  "id": "y"
}
file 2
{
  "name": "a",
  "key": "1"
}
{
  "name": "b",
  "key": "1"
}
{
  "name": "c",
  "key": "2"
}
{
  "name": "d",
  "key": "2"
}
{
  "name": "e",
  "key": "3"
}
Expected Result:
{
  "x": {
    "foo": [
      "a",
      "b"
    ],
    "bar": [
      "c",
      "d"
    ]
  },
  "y": {
    "baz": [
      "e"
    ]
  }
}
I can do it with a Python script, but I need it in jq.
Thanks in advance.
Use reduce on the first file's items ($i) to successively build up the result object using setpath with fields from the item and values as a matching map on the secondary dictionary file ($d).
jq -s --slurpfile d file2 '
  reduce .[] as $i ({}; setpath(
    [$i.id, $i.name];
    [$d[] | select(.key == $i.key).name]
  ))
' file1
For efficiency, the following solution first constructs a "dictionary" based on file2; furthermore, it does so without having to "slurp" it.
< file2 jq -nc --slurpfile file1 file1 '
  (reduce inputs as {$name, $key} ({};
    .[$key] += [$name])) as $dict
  | reduce $file1[] as {$name, $key, $id} ({};
      .[$id] += { ($name): $dict[$key] })
'

Add to existing json file using jq

I have an Artifactory AQL Spec file in JSON format. The spec file is as follows:
{
  "files": [
    {
      "aql": {
        "items.find": {
          "repo": "release-repo",
          "modified": { "$before": "30d" },
          "type": { "$eq": "folder" },
          "depth": "2"
        }
      }
    }
  ]
}
Let's say I run a GitLab API query to acquire a list of SHAs that I want to iterate through and add to this JSON spec file. The list of SHAs is assigned to a variable:
"a991fef6bb9e9759d513fd4b277fe3674b44e4f4"
"5a562d34bb1d4ab4264acc2c61327651218524ad"
"d4e296c35644743e58aed35d1afb87e34d6c8823"
I would like to iterate through all these commit IDs and add them one by one to the JSON so that it ends up in this format:
{
  "files": [
    {
      "aql": {
        "items.find": {
          "repo": "release-repo",
          "modified": { "$before": "30d" },
          "type": { "$eq": "folder" },
          "$or": [
            {
              "$and": [
                {
                  "name": {
                    "$nmatch": "*a991fef6bb9e9759d513fd4b277fe3674b44e4f4*"
                  }
                }
              ]
            },
            {
              "$and": [
                {
                  "name": {
                    "$nmatch": "*5a562d34bb1d4ab4264acc2c61327651218524ad*"
                  }
                }
              ]
            },
            {
              "$and": [
                {
                  "name": {
                    "$nmatch": "*d4e296c35644743e58aed35d1afb87e34d6c8823*"
                  }
                }
              ]
            }
          ],
          "depth": "2"
        }
      }
    }
  ]
}
The list of SHAs returned from the GitLab API query will be different every time, which is why I'd like this to be a dynamic update. The number of returned SHAs will also vary; it could return 10 one day and 50 another.
#!/usr/bin/env bash
# Template spec with an empty "$or" array that the jq program fills in.
template='{
  "files": [
    {
      "aql": {
        "items.find": {
          "repo": "release-repo",
          "modified": { "$before": "30d" },
          "type": { "$eq": "folder" },
          "$or": [],
          "depth": "2"
        }
      }
    }
  ]
}'
shas=(
  "a991fef6bb9e9759d513fd4b277fe3674b44e4f4"
  "5a562d34bb1d4ab4264acc2c61327651218524ad"
  "d4e296c35644743e58aed35d1afb87e34d6c8823"
)
# Join the SHAs with spaces, split them again inside jq, and append one
# {"$and": [{"name": {"$nmatch": ...}}]} clause per SHA to the "$or" array.
jq -n \
  --argjson template "$template" \
  --arg shas_str "${shas[*]}" \
  '
  reduce ($shas_str | split(" ") | .[]) as $sha ($template;
    .files[0].aql["items.find"]["$or"] += [{
      "$and": [{"name": {"$nmatch": ("*" + $sha + "*")}}]
    }]
  )
  '
...emits as output:
{
  "files": [
    {
      "aql": {
        "items.find": {
          "repo": "release-repo",
          "modified": {
            "$before": "30d"
          },
          "type": {
            "$eq": "folder"
          },
          "$or": [
            {
              "$and": [
                {
                  "name": {
                    "$nmatch": "*a991fef6bb9e9759d513fd4b277fe3674b44e4f4*"
                  }
                }
              ]
            },
            {
              "$and": [
                {
                  "name": {
                    "$nmatch": "*5a562d34bb1d4ab4264acc2c61327651218524ad*"
                  }
                }
              ]
            },
            {
              "$and": [
                {
                  "name": {
                    "$nmatch": "*d4e296c35644743e58aed35d1afb87e34d6c8823*"
                  }
                }
              ]
            }
          ],
          "depth": "2"
        }
      }
    }
  ]
}
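If the spec already lives in a file (spec.json, say, as in the question), a variant of the same idea could patch it directly instead of embedding a template in the script; += creates the "$or" array if it does not exist yet:
jq --arg shas_str "${shas[*]}" '
  .files[0].aql["items.find"]["$or"] += [
    $shas_str | split(" ")[]
    | {"$and": [{"name": {"$nmatch": "*\(.)*"}}]}
  ]
' spec.json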
Here is a reduce-free solution. It makes some inessential assumptions: that the SHA strings are presented as a stream of JSON strings on STDIN, and that the Artifactory spec is in a file named spec.json. Here is the jq program:
map( {"$and": [ {name: { "$nmatch": "*\(.)*" }}]} ) as $x
| $spec[0] | (.files[0].aql."items.find"."$or" = $x)
The jq invocation might look like this, feeding the SHAs as a stream of JSON strings:
printf '"%s"\n' "${shas[@]}" | jq -s --slurpfile spec spec.json -f program.jq