Could you please assist me on how I can merge two JSON variables in bash to get the desired output below (without manually looping over the .data[] array)? I tried echo "${firstJsonoObj} ${SecondJsonoObj}" | jq -s add but it didn't reach into the array.
firstJsonoObj='{"data" :[{"id": "123"},{"id": "124"}]}'
SecondJsonoObj='{"etag" :" 234324"}'
desired output
{"data" :[{"id": "123", "etag" :" 234324"},{"id": "124", "etag" :" 234324"}]}
Thanks in advance!
You can append to each data element using +=:
#!/bin/bash
firstJsonoObj='{"data" :[{"id": "123"},{"id": "124"}]}'
SecondJsonoObj='{"etag" :" 234324"}'
jq -c ".data[] += $SecondJsonoObj" <<< "$firstJsonoObj"
Output:
{"data":[{"id":"123","etag":" 234324"},{"id":"124","etag":" 234324"}]}
Please don't use double quotes to inject data from shell into code. jq provides the --arg and --argjson options to do that safely:
#!/bin/bash
firstJsonoObj='{"data" :[{"id": "123"},{"id": "124"}]}'
SecondJsonoObj='{"etag" :" 234324"}'
jq --argjson x "$SecondJsonoObj" '.data[] += $x' <<< "$firstJsonoObj"
# or
jq --argjson a "$firstJsonoObj" --argjson b "$SecondJsonoObj" -n '$a | .data[] += $b'
{
  "data": [
    {
      "id": "123",
      "etag": " 234324"
    },
    {
      "id": "124",
      "etag": " 234324"
    }
  ]
}
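As an aside, the difference between the two options: --arg always binds a plain string, while --argjson parses its value as JSON first. A quick way to see this:
jq -nc --arg x '{"a":1}' '$x'      # "{\"a\":1}"  -- a plain string
jq -nc --argjson x '{"a":1}' '$x'  # {"a":1}      -- a parsed object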
jq -s add will not work because you want to add the second document to a deeper level within the first. Use .data[] += input (without -s), with . accessing the first and input accessing the second input:
echo "${firstJsonoObj} ${SecondJsonoObj}" | jq '.data[] += input'
Or, as bash is tagged, use a here-string:
jq '.data[] += input' <<< "${firstJsonoObj} ${SecondJsonoObj}"
Output:
{
  "data": [
    {
      "id": "123",
      "etag": " 234324"
    },
    {
      "id": "124",
      "etag": " 234324"
    }
  ]
}
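A minimal illustration of how jq walks the stream here: without -n the first document becomes ., and each input call consumes the next one:
jq -c '., input' <<< '{"a":1} {"b":2}'
# {"a":1}
# {"b":2}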
Given the following JSON file (sample.json)
{
  "api": "3.0.0",
  "data": {
    "description": "something",
    "title": "hello",
    "version": "1.0",
    "app": {
      "name": "abc",
      "id": "xyz"
    }
  }
}
I wish to add the following JSON object at root level to the file above:
{
  "heading": {
    "user": ["$username"]
  }
}
Where $username is a Bash variable.
Is there a better way to achieve this than the following?
blob=$(jq -n --arg foo API_NAME '{"heading": {"user": [env.username]}}')
jq --argjson obj "$(echo $blob)" '. + $obj' < sample.json
Just move what you create as blob directly into the other filter, ending up with just one jq call:
jq --arg username "$username" '. + {heading: {user: [$username]}}' sample.json
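For instance, with username=alice (a made-up value), the heading key is appended at the root:
username=alice
jq --arg username "$username" '. + {heading: {user: [$username]}}' sample.json
# adds at the root level:
# "heading": {
#   "user": ["alice"]
# }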
I'm trying to use jq to iterate over some delimited text files, and generate objects from the rows.
I also want to add some "static" objects (json shell variable in the example below) to the generated results.
I've come up with the solution below, which does produce the output I want. But because I'm not very confident in jq, every time I solve a problem with it, it feels like a monkey banging on a typewriter rather than a carefully crafted answer. So I'm imagining this could be incorrect.
data.txt
apple|fruit
Tesla|car
sparrow|bird
Test (bash shell):
$ json='[
  { "object": "love", "type": "emotion" },
  { "object": "Ukraine", "type": "country" }
]'
$ jq --slurp --raw-input --argjson extra "$json" '
  split("\n") |
  map(select(length > 0)) |
  map(split("|") | {
    object: .[0],
    type: .[1]
  }) as $data |
  $data + $extra' data.txt
Output:
[
  {
    "object": "apple",
    "type": "fruit"
  },
  {
    "object": "Tesla",
    "type": "car"
  },
  {
    "object": "sparrow",
    "type": "bird"
  },
  {
    "object": "love",
    "type": "emotion"
  },
  {
    "object": "Ukraine",
    "type": "country"
  }
]
Is this efficient?
I don't know if it's more efficient, but you could shorten the code: use --raw-input or -R without --slurp or -s to read the raw text line by line (no need to split by newlines), the / operator to do the "column" splitting within a line, and reduce to successively build up your final structure, starting with your "static" data.
jq -Rn --argjson extra "$json" '
reduce (inputs / "|") as [$object, $type] ($extra; . + [{$object, $type}])
' data.txt
If you want the "static" data at the end, add it afterwards and start with an empty array:
jq -Rn --argjson extra "$json" '
reduce (inputs / "|") as [$object, $type] ([]; . + [{$object, $type}]) + $extra
' data.txt
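If the destructuring looks opaque, it helps to inspect what inputs / "|" produces on its own:
jq -Rcn '[inputs / "|"]' data.txt
# [["apple","fruit"],["Tesla","car"],["sparrow","bird"]]
Each two-element array is then destructured into $object and $type.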
You can try this:
jq -nR --argjson extra "$json" '
[inputs / "|" | {object:.[0], type:.[1]}] + $extra' data.txt
[inputs / "|" | {object: .[0], type: .[1]}]
Demo
https://jqplay.org/s/XkDdy9-lBq
Or
reduce (inputs / "|") as [$obj, $typ] ([]; .+[{$obj, $typ}])
Demo: https://jqplay.org/s/5N3M-pfJIR
I have a json file with this data:
{
  "data": [
    {
      "name": "table",
      "values": [
        "This is old data",
        "that needs to be",
        "replaced."
      ]
    }
  ]
}
But my challenge here is that I need to replace that values array with words from a text or CSV file:
this
this
this
is
is
an
an
array
My output needs to contain (although I could probably get away with the words all on one line...):
"values": [
"this this this",
"is is",
"an an",
"array"
],
Is this possible with only jq? Or would I have to get awk to help out?
I already started down the awk road with:
awk -F, 'BEGIN{ORS=" "; {print "["}} {print $2} END{{print "]"}}' filename
But I know there is still some work here...
And then I came across jq -Rn inputs, but I haven't figured out how, or whether, I can get the desired result.
Thanks for any pointers.
Assuming you have a raw ASCII text file named file and an input JSON file named json, you could do
jq --rawfile txt file '.data[].values |= ( $txt | split("\n")[:-1] | group_by(.) | map(join(" ")) )' json
produces
{
  "data": [
    {
      "name": "table",
      "values": [
        "an an",
        "array",
        "is is",
        "this this this"
      ]
    }
  ]
}
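Note that group_by sorts by the grouping key, which is why the words come out in a different order than in the file. If first-appearance order matters, here is a sketch of a reduce-based variant (same assumptions about the input files; jq preserves key insertion order in objects):
jq --rawfile txt file '.data[].values |= ($txt | split("\n")[:-1]
    | reduce .[] as $w ({}; .[$w] += [$w])
    | [.[] | join(" ")])' json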
You can use jq and awk.
Given:
$ cat file
{
  "data": [
    {
      "name": "table",
      "values": [
        "This is old data",
        "that needs to be",
        "replaced."
      ]
    }
  ]
}
$ cat replacement
this
this
this
is
is
an
an
array
First create a string for the replacement array (awk is easy to use here; it joins consecutive identical lines into single space-separated lines):
ins=$(awk '!s       { s = last = $1; next }     # first word: start a group
           $1==last { s = s " " $1; next }      # same word again: extend it
                    { print s; s = last = $1 }  # new word: flush, start over
           END      { print s }' replacement | tr '\n' '\t')
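You can sanity-check the intermediate string before handing it to jq (GNU cat -A renders each tab as ^I):
printf '%s' "$ins" | cat -A
# this this this^Iis is^Ian an^Iarray^I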
Then use jq to insert into the JSON:
jq --rawfile txt <(echo "$ins") '.data[].values |= ( $txt | split("\t")[:-1] )' file
{
  "data": [
    {
      "name": "table",
      "values": [
        "this this this",
        "is is",
        "an an",
        "array"
      ]
    }
  ]
}
You can also use Ruby to process both files; its group_by preserves first-appearance order, so the words stay in their original order:
ruby -r json -e '
  ar = File.readlines(ARGV[0])
           .map { |l| l.rstrip }
           .group_by { |e| e }
           .values
           .map { |v| v.join(" ") }
  j = JSON.parse(File.read(ARGV[1]))
  j["data"][0]["values"] = ar
  puts JSON.pretty_generate(j)' txt file
# same output...
I have several json files I want to combine. Some are arrays of objects and some are single objects. I want to effectively concatenate all of this into a single array.
For example:
[
  { "name": "file1" }
]
{ "name": "file2" }
{ "name": "file3" }
And I want to end up with:
[
  { "name": "file1" },
  { "name": "file2" },
  { "name": "file3" }
]
How can I do this using jq or similar?
The following illustrates an efficient way to accomplish the required task:
jq -n 'reduce inputs as $in (null;
. + if $in|type == "array" then $in else [$in] end)
' $(find . -name '*.json') > combined.json
The -n command-line option is necessary to avoid skipping the first file.
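To see why: without -n, jq reads the first document into . before the program runs, so inputs only sees the remainder:
printf '1 2 3' | jq -c '[inputs]'   # [2,3]   (the leading 1 was consumed as .)
printf '1 2 3' | jq -nc '[inputs]'  # [1,2,3]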
This did it:
jq -n '[inputs] | add' $(find . -name '*.json') > combined.json
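One caveat with this shorter form: add happily concatenates arrays, but it errors out as soon as an array meets a bare object, so with genuinely mixed files you may need to wrap the single objects first, e.g.:
jq -n '[inputs] | map(if type == "array" then . else [.] end) | add' \
    $(find . -name '*.json') > combined.json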
The JSON output returned to me after running this command
kubectl get pods -o json | jq '.items[].spec.containers[].env'
on my Kubernetes cluster is this
[
  {
    "name": "USER_NAME",
    "value": "USER_NAME_VALUE_A"
  },
  {
    "name": "USER_ADDRESS",
    "value": "USER_ADDRESS_VALUE_A"
  }
]
[
  {
    "name": "USER_NAME",
    "value": "USER_NAME_VALUE_B"
  },
  {
    "name": "USER_ADDRESS",
    "value": "USER_ADDRESS_VALUE_B"
  }
]
I'd like to create a unified array/dictionary (using a Bash script) that looks like the example below. And how can I get the value of each key?
[
  {
    "USER_NAME": "USER_NAME_VALUE_A",
    "USER_ADDRESS": "USER_ADDRESS_VALUE_A"
  },
  {
    "USER_NAME": "USER_NAME_VALUE_B",
    "USER_ADDRESS": "USER_ADDRESS_VALUE_B"
  }
]
Use jsonpath:
C02W84XMHTD5:~ iahmad$ kubectl get pods --all-namespaces -o=jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}'
coredns-c4cffd6dc-nsd2k
etcd-minikube
kube-addon-manager-minikube
kube-apiserver-minikube
kube-controller-manager-minikube
kube-dns-86f4d74b45-d5njm
kube-proxy-pg89s
kube-scheduler-minikube
kubernetes-dashboard-6f4cfc5d87-b7n7v
storage-provisioner
tiller-deploy-778f674bf5-vt4mj
https://kubernetes.io/docs/reference/kubectl/jsonpath/
It can output key-value pairs as well:
C02W84XMHTD5:~ iahmad$ kubectl get pods --all-namespaces -o=jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.startTime}{"\n"}{end}'
coredns-c4cffd6dc-nsd2k 2018-10-16T21:44:19Z
etcd-minikube 2018-10-29T17:30:56Z
kube-addon-manager-minikube 2018-10-29T17:30:56Z
kube-apiserver-minikube 2018-10-29T17:30:56Z
kube-controller-manager-minikube 2018-10-29T17:30:56Z
kube-dns-86f4d74b45-d5njm 2018-10-16T21:44:16Z
kube-proxy-pg89s 2018-10-29T17:32:05Z
kube-scheduler-minikube 2018-10-29T17:30:56Z
kubernetes-dashboard-6f4cfc5d87-b7n7v 2018-10-16T21:44:19Z
storage-provisioner 2018-10-16T21:44:19Z
tiller-deploy-778f674bf5-vt4mj 2018-11-01T13:45:23Z
You can then split those on whitespace and build the JSON or list yourself, as sketched below.
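A rough sketch of that splitting step, feeding the tab-separated jsonpath output from above into jq's raw-input mode (the field names here are chosen for illustration):
kubectl get pods --all-namespaces \
    -o=jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.startTime}{"\n"}{end}' |
    jq -Rn '[inputs / "\t" | {name: .[0], startTime: .[1]}]'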
This will do it in bash. You'd be surprised how much you can do with bash:
#!/bin/bash
NAMES=`kubectl get pods -o=jsonpath='{range .items[*]}{.spec.containers[*].env[*].name}{"\n"}{end}' | tr -d '\011\012\015'`
VALUES=`kubectl get pods -o=jsonpath='{range .items[*]}{.spec.containers[*].env[*].value}{"\n"}{end}' | tr -d '\011\012\015'`
IFS=' ' read -ra NAMESA <<< "$NAMES"
IFS=' ' read -ra VALUESA <<< "$VALUES"
MAXINDEX=`expr ${#NAMESA[@]} - 1`
printf "[\n"
for i in "${!NAMESA[@]}"; do
  printf " {\n"
  printf " \"USER_NAME\": \"${NAMESA[$i]}\",\n"
  printf " \"USER_ADDRESS\": \"${VALUESA[$i]}\"\n"
  if [ "$i" == "${MAXINDEX}" ]; then
    printf " }\n"
  else
    printf " },\n"
  fi
done
printf "]\n"
While you are using jq as a filter, why not use it as a transformer, too?
kubectl get pods -o json | \
jq '.items|map(.spec.containers|map(.env|map({key: .name, value})|from_entries)|add)'
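The trick here is from_entries, which turns an array of {key, value} pairs into a single object:
jq -nc '[{key: "USER_NAME", value: "USER_NAME_VALUE_A"}] | from_entries'
# {"USER_NAME":"USER_NAME_VALUE_A"}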
I know this is totally a necromancer badge, but still ;)