I have a config file that is like
[
"ECSClusterName=cluster",
"VPCID=vpc-xxxx",
"ALBName=ALB"
]
And with jq (or something else bash-native), I'd like to add two values, EnvType and KMSID (it doesn't matter where in the config file), so that the end result would look like
[
"EnvType=dev",
"KMSID=xxxxx-yyyyyy-ffffff",
"ECSClusterName=cluster",
"VPCID=vpc-xxxx",
"ALBName=ALB"
]
The closest I have been for one value is
cat config.json | jq '.[-1] += ", test=test"'
But that outputs
[
"ECSClusterName=cluster",
"VPCID=vpc-xxxx",
"ALBName=ALB, test=test"
]
Any help greatly appreciated!
Put new key=value pairs into an array, and add that array to the original one.
$ jq '. + ["EnvType=dev", "KMSID=xxxxx-yyyyyy-ffffff"]' config.json
[
"ECSClusterName=cluster",
"VPCID=vpc-xxxx",
"ALBName=ALB",
"EnvType=dev",
"KMSID=xxxxx-yyyyyy-ffffff"
]
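If you'd rather have the new entries at the front, as in your example output, just swap the operands:
$ jq '["EnvType=dev", "KMSID=xxxxx-yyyyyy-ffffff"] + .' config.json
[
"EnvType=dev",
"KMSID=xxxxx-yyyyyy-ffffff",
"ECSClusterName=cluster",
"VPCID=vpc-xxxx",
"ALBName=ALB"
]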
I have the following JSON file
{
"https://test.com/gg": [
"msg",
"popup",
"url"
]
}
What I want to achieve is to parse the values to output the following
https://test.com/gg?msg=gg
https://test.com/gg?popup=gg
https://test.com/gg?url=gg
I'm assuming it can be done using jq but I'm not sure how.
The way I know is if the elements were named like below:
{
"url": "https://test.com/gg",
"p1": "msg"
}
I would pull the elements like:
cat file.json | jq '.url, .p1'
But in my case it is not named.
jq --raw-output 'to_entries[0] | .key as $url | .value[] | "\($url)?\(.)=gg"' <your json file here>
Where
to_entries[0] yields {"key":"https://test.com/gg","value":["msg","popup","url"]}
(Save .key as $url for later)
Then "emit" all values with .value[]
For each "emitted" value, produce the string "\($url)?\(.)=gg" where . is the current value
I have hundreds of files being named as [guid].json where structure of them all looks similar to this:
{
"Active": true,
"CaseType": "CaseType",
"CustomerGroup": ["Core", "Extended"]
}
First I need to append a new key-value pair to all files with "CaseId": "[filename]" and then merge them all into one big array and save it as a new json manifest file.
I would like one file with the following structure from a jq command:
[
{
"Active": true,
"CaseType": "CaseType",
"CustomerGroup": ["Core", "Extended"],
"CaseId": "43d47f66-5a0a-4b86-88d6-1f1f893098d2"
},
{
"Active": true,
"CaseType": "CaseType",
"CustomerGroup": ["Core", "Extended"],
"CaseId": "e3x47f66-5a0a-4b86-88d6-1f1f893098d2"
}
]
You're looking for input_filename.
jq -n '[ inputs | .CaseId = input_filename ]' *.json
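Note that input_filename returns the filename as given on the command line, including the .json extension. If you want the bare GUID shown in your desired output, trim the extension with rtrimstr:
jq -n '[ inputs | .CaseId = (input_filename | rtrimstr(".json")) ]' *.json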
You can use reduce, adding one input object at a time. Use input_filename to get the UTF-8 encoded filename and form the record with CaseId:
jq -n 'reduce inputs as $d (null; . + [ $d + { CaseId: input_filename } ] )' *.json
I would like to search a JSON file for some key or value, and have it print where it was found.
For example, when using jq to print out my Firefox extensions.json, I get something like this (using "..." here to skip long parts):
{
"schemaVersion": 31,
"addons": [
{
"id": "wetransfer#extensions.thunderbird.net",
"syncGUID": "{e6369308-1efc-40fd-aa5f-38da7b20df9b}",
"version": "2.0.0",
...
},
{
...
}
]
}
Say I would like to search for "wetransfer#extensions.thunderbird.net", and would like an output which shows me where it was found with something like this:
{ "addons": [ {"id": "wetransfer#extensions.thunderbird.net"} ] }
Is there a way to get that with jq or with some other json tool?
I also tried to simply list the various ids in that file, and hoped that I would get it with jq '.id', but that just returned null, because it apparently needs the full path.
In other words, I'm looking for a command-line JSON parser which I could use in a way similar to XPath tools.
The path() function comes in handy:
$ jq -c 'path(.. | select(. == "wetransfer#extensions.thunderbird.net"))' input.json
["addons",0,"id"]
The resulting path is interpreted as "In the addons field of the initial object, the first array element's id field matches". You can use it with getpath(), setpath(), delpaths(), etc. to get or manipulate the value it describes.
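For example, feeding the resulting path back into getpath() returns the matched value:
$ jq 'getpath(["addons",0,"id"])' input.json
"wetransfer#extensions.thunderbird.net"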
Using your example with modifications to make it valid JSON:
< input.json jq -c --arg s wetransfer#extensions.thunderbird.net '
paths as $p | select(getpath($p) == $s) | null | setpath($p;$s)'
produces:
{"addons":[{"id":"wetransfer#extensions.thunderbird.net"}]}
Note
If there are N paths to the given value, the above will produce N lines. If you want only the first, you could wrap everything in first(...).
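For example:
< input.json jq -c --arg s wetransfer#extensions.thunderbird.net '
first(paths as $p | select(getpath($p) == $s) | null | setpath($p;$s))'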
Listing all the "id" values
I also tried to simply list the various ids in that file
Assuming that "id" values of false and null are of no interest, you can print all the "id" values of interest using the jq filter:
.. | .id? // empty
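For example, run against the extensions.json above:
$ jq '.. | .id? // empty' extensions.json
"wetransfer#extensions.thunderbird.net"
(followed by one line per id from the addons elided above)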
I have the below file in txt format. I want to arrange the data in JSON array format in Linux, and append more such data to the same JSON array with a for/while loop based on a condition. Please help me with the best way to achieve this.
File
Name:Rock
Name:Clock
{"Array": [
{
"Name": "Rock"
},
{
"Name": "Clock"
}
]
}
Suppose your initial file is object.json and that it contains an empty object, {};
and that at the beginning of each iteration, the key:value pairs are defined in another file, kv.txt.
Then at each iteration, you can update object.json using the invocation:
< kv.txt jq -Rn --argfile object object.json -f program.jq | sponge object.json
where program.jq contains the jq program:
$object | .Array +=
reduce inputs as $in ([]; . + [$in | capture("(?<k>^[^:]*): *(?<v>.*)") | {(.k):.v} ])
(sponge is part of the moreutils package. If it cannot be used, then you will have to use another method of updating object.json.)
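As a sketch of a single iteration: with the two Name lines from the question in kv.txt and object.json containing {}, the invocation above leaves object.json holding:
{
"Array": [
{
"Name": "Rock"
},
{
"Name": "Clock"
}
]
}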
I have a series of JSON files containing an array of records, e.g.
$ cat f1.json
{
"records": [
{"a": 1},
{"a": 3}
]
}
$ cat f2.json
{
"records": [
{"a": 2}
]
}
I want to 1) extract a single field from each record and 2) output a single array containing all the field values from all input files.
The first part is easy:
jq '.records | map(.a)' f?.json
[
1,
3
]
[
2
]
But I cannot figure out how to get jq to concatenate those output arrays into a single array!
I'm not married to jq; I'll happily use another tool if necessary. But I would love to know how to do this with jq, because it's something I have been trying to figure out for years.
Assuming your jq has inputs (which is true of jq 1.5 and later), it would be most efficient to use it, e.g. along the lines of:
jq -n '[inputs.records[].a]' f*.json
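Given the two files above, this prints:
[
1,
3,
2
]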
Use -s (or --slurp):
jq -s 'map(.records[].a)' f?.json
You need to use --slurp so that jq applies its filter to the aggregation of all inputs rather than to each input separately. When using this option, jq's input becomes a single array of all the inputs, which your filter needs to account for.
I would use the following :
jq --slurp 'map(.records | map(.a)) | add' f?.json
We apply your current transformation to each element of the slurped array of inputs (your previous individual inputs), then merge those transformed arrays into one with add.
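Since the extracted values here are scalars, flatten would give the same result as add:
jq --slurp 'map(.records | map(.a)) | flatten' f?.json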
If your input files are large, slurping them could eat up a lot of memory, in which case you can use reduce, which works iteratively, appending the contents of .a one object at a time:
jq -n 'reduce inputs.records[].a as $d (.; . += [$d])' f?.json
The -n flag ensures the output JSON is constructed from scratch, with the data coming from inputs. The reduce expression starts with an initial value of ., which because of the null input is just null. Then, for each input value, . += [$d] appends the .a contents into a single array.
As a compromise between the readability of --slurp and the efficiency of reduce, you can run jq twice. The first is a slightly altered version of your original command, the second slurps the undifferentiated output into a single array.
$ jq '.records[] | .a' f?.json | jq -s .
[
1,
3,
2
]
The --slurp (-s) flag is needed, along with map(), to do it in one shot:
$ jq -s 'map(.records[].a)' f?.json
[
1,
3,
2
]