How to loop through a JSON key whose value is an array

I am somewhat new to bash. My objective is to loop through the JSON, capture user input for each entry, and export it as an environment variable.
This is the structure of the JSON. Any help will be appreciated!
{
"items": [{
"Gitlab": [
"GITLAB_URL",
"GITLAB_TOKEN",
"GITLAB_CHANNEL",
"GITLAB_SHOW_COMMITS_LIST",
"GITLAB_SHOW_MERGE_DESCRIPTION",
"GITLAB_DEBUG",
"GITLAB_BRANCHES"
]
},
{
"PagerDuty": [
"PAGERV2_API_KEY",
"PAGERV2_SCHEDULE_ID",
"PAGERV2_SERVICES",
"PAGERV2_DEFAULT_RESOLVER",
"PAGERV2_ENDPOINT",
"PAGERV2_ANNOUNCE_ROOM",
"PAGERV2_NEED_GROUP_AUTH",
"PAGERV2_LOG_PATH"
]
},
{
"SLack": [
"SLACK_TOKEN"
]
}
]
}
This is what I have so far
jq '.items[] | select( .Gitlab| startswith("first-block-"))' < configurations.json
for i in ${items[@]}
do
echo "Enter your " $i
read input
if [[ ! -z "$input" ]]; then
export $i=$input
fi
done
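One way to approach this (a rough sketch, not tested against your setup) is to let jq flatten every variable name out of items and loop over that list in bash, reading the jq output on a separate file descriptor so read can still prompt on the terminal:
#!/usr/bin/env bash
# .items[] yields each wrapper object ({"Gitlab": [...]}, ...),
# the next .[] yields its array of names, and the final [] yields each name.
while IFS= read -r -u 3 var; do
  read -r -p "Enter your $var: " value
  if [[ -n "$value" ]]; then
    export "$var=$value"
  fi
done 3< <(jq -r '.items[] | .[][]' configurations.json)
Keep in mind that the exported variables only live in the process running this script (and its children), not in the parent shell.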

Related

jq to replace array key values from file

I have a json file with this data:
{
"data": [
{
"name": "table",
"values": [
"This is old data",
"that needs to be",
"replaced."
]
}
]
}
But my challenge here is that I need to replace that values array with words from a text or CSV file:
this
this
this
is
is
an
an
array
My output needs to look like this (although I could probably get away with the words all on one line...):
"values": [
"this this this",
"is is",
"an an",
"array"
],
Is this possible with only jq? Or would I have to get awk to help out?
I already started down the awk road with:
awk -F, 'BEGIN{ORS=" "; {print "["}} {print $2} END{{print "]"}}' filename
But I know there is still some work here...
And then I came across jq -Rn inputs. But I haven't figured out how or if I can get the desired result.
Thanks for any pointers.
Assuming you have a raw ASCII text file named file and an input JSON file named json, you could do
jq --rawfile txt file '.data[].values |= ( $txt | split("\n")[:-1] | group_by(.) | map(join(" ")) )' json
which produces
{
"data": [
{
"name": "table",
"values": [
"an an",
"array",
"is is",
"this this this"
]
}
]
}
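Note that group_by(.) sorts the groups, which is why the order above differs from the requested output. If the original order matters and repeated words sit next to each other in the text file (as they do here), an untested variant along these lines should preserve it (still assuming jq 1.6+ for --rawfile):
jq --rawfile txt file '
  .data[].values |= (
    $txt | split("\n") | map(select(length > 0))
    # merge adjacent duplicates while keeping file order
    | reduce .[] as $w ([];
        if length == 0 then [$w]
        elif (.[-1] | split(" ")[0]) == $w then .[:-1] + [.[-1] + " " + $w]
        else . + [$w]
        end)
  )' json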
You can use jq and awk.
Given:
$ cat file
{
"data": [
{
"name": "table",
"values": [
"This is old data",
"that needs to be",
"replaced."
]
}
]
}
$ cat replacement
this
this
this
is
is
an
an
array
First create a string for the replacement array (awk is easy to use here):
ins=$(awk '!s {s=last=$1; next}
$1==last{s=s " " $1; next}
{print s; s=last=$1}
END{print s}' replacement | tr '\n' '\t')
Then use jq to insert into the JSON:
jq --rawfile txt <(echo "$ins") '.data[].values |= ( $txt | split("\t")[:-1] )' file
{
"data": [
{
"name": "table",
"values": [
"this this this",
"is is",
"an an",
"array"
]
}
]
}
You can also use ruby to process both files:
ruby -r json -e '
BEGIN{ ar=File.readlines(ARGV[0])
.map{|l| l.rstrip}
.group_by{|e| e}
.values
.map{|v| v.join(" ")}
j=JSON.parse(File.read(ARGV[1]))
}
j["data"][0]["values"]=ar
puts JSON.pretty_generate(j)' replacement file
# same output...

jq combine json files into single array

I have several json files I want to combine. Some are arrays of objects and some are single objects. I want to effectively concatenate all of this into a single array.
For example:
[
{ "name": "file1" }
]
{ "name": "file2" }
{ "name": "file3" }
And I want to end up with:
[
{ "name": "file1" }
{ "name": "file2" },
{ "name": "file3" },
]
How can I do this using jq or similar?
The following illustrates an efficient way to accomplish the required task:
jq -n 'reduce inputs as $in (null;
. + if $in|type == "array" then $in else [$in] end)
' $(find . -name '*.json') > combined.json
The -n command-line option is necessary to avoid skipping the first file.
This did it, though note that add here concatenates arrays, so it will error out if one of the inputs is a bare object rather than an array:
jq -n '[inputs] | add' $(find . -name '*.json') > combined.json
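If some of the files really are bare objects, as in the example above, one way to handle it (an untested sketch) is to wrap the non-arrays before adding:
jq -n '[inputs | if type == "array" then . else [.] end] | add' $(find . -name '*.json') > combined.json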

How to combine update with function result in jq?

Given data like this:
cat << EOF > xyz.json
[
{
"batch_id": 526,
"aCods": [
"IBDD879"
]
},
{
"batch_id": 357,
"aCods": [
"IBDD212"
]
}
]
EOF
What is the correct way to get this result?
[
{
"batch_id": "00000526",
"aCods": [
"IBDD879"
]
},
{
"batch_id": "00000357",
"aCods": [
"IBDD212"
]
}
]
I have tried three different commands, hoping to be able to update an object element in an array with the result of a function applied to that element.
I just cannot find the right syntax.
jq -r '.[] | .batch_id |= 9999999' xyz.json;
{
"batch_id": 9999999,
"aCods": [
"IBDD879"
]
}
{
"batch_id": 9999999,
"aCods": [
"IBDD212"
]
}
jq -r '.[] | lpad("\(.batch_id)";8;"0")' xyz.json;
00000526
00000357
jq -r '.[] | .batch_id |= lpad("\(.batch_id)";8;"0")' xyz.json;
jq: error (at /dev/shm/xyz.json:14): Cannot index number with string "batch_id"
Assuming you are trying to use the lpad/2 from this comment by peak, you can do
def lpad($len; $fill): tostring | ($len - length) as $l | ($fill * $l)[:$l] + .;
map(.batch_id |= lpad(8; "0"))
The key here is that with the update-assignment operator |=, the current value of the field being modified is passed as the input to the right-hand side, so you don't have to reference it explicitly there.
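For reference, putting the def and the filter together on the command line should reproduce the output shown in the question:
jq 'def lpad($len; $fill): tostring | ($len - length) as $l | ($fill * $l)[:$l] + .;
    map(.batch_id |= lpad(8; "0"))' xyz.json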

Merge json files - Concatenate into a single dict json file

Here are 3 JSON files:
File1
{
"component1": [
]
}
File2
{
"component2": [
]
}
File3
{
"component3": [
]
}
I can't find the jq command line that would produce this JSON file as output:
{
"components": {
"component1": [
],
"component2": [
],
"component3": [
]
}
}
Many thanks for your support
Best Regards.
Iterate over the input objects one at a time with inputs and merge each one into components using reduce:
jq -n 'reduce inputs as $d (.; .components += $d )' file{1..3}.json
You can simply use add, e.g.
jq -s '{components: add}' file{1..3}.json
or:
jq -n '{components: [inputs]|add}' file{1..3}.json

Convert a txt file into JSON

I need to convert a simple txt list into a specific JSON format.
My list looks like this:
server1
server2
server3
server4
I need to have JSON output that looks like this:
{ "data": [
{ "{SERVER}":"server1" },
{ "{SERVER}":"server2" },
{ "{SERVER}":"server3" },
{ "{SERVER}":"server4" }
]}
I was able to generate this with a bash script, but I don't know how to remove the comma from the last line. The list is dynamic and can have a different number of servers every time the script is run.
Any tips, please?
EDIT: my current code:
echo "{ "data": [" > /tmp/json_output
for srv in `cat /tmp/list`; do
echo "{ \"{SERVER}\":\"$srv\" }," >> /tmp/json_output
done
echo "]}" >> /tmp/json_output
I'm very new at this, sorry if I sound noobish.
This is very easy for Xidel:
xidel -s input.txt -e '{"data":[x:lines($raw) ! {"{SERVER}":.}]}'
{
"data": [
{
"{SERVER}": "server1"
},
{
"{SERVER}": "server2"
},
{
"{SERVER}": "server3"
},
{
"{SERVER}": "server4"
}
]
}
x:lines($raw) is a shorthand for tokenize($raw,'\r\n?|\n'). It splits the raw input into a sequence of lines.
In human terms you can see x:lines($raw) ! {"{SERVER}":.} as "create a JSON object for every line".
See also this xidelcgi demo.
I would use jq for this.
$ jq -nR 'reduce inputs as $i ([]; .+[$i]) | map ({"{SERVER}": .}) | {data: .}' tmp.txt
{
"data": [
{
"{SERVER}": "server1"
},
{
"{SERVER}": "server2"
},
{
"{SERVER}": "server3"
},
{
"{SERVER}": "server4"
}
]
}
(It seems to me there should be an easier way to produce the array ["server1", "server2", "server3", "server4"] to feed to the map filter, but this is functional.)
Breaking this down piece by piece, we first create an array of your server names.
$ jq -nR 'reduce inputs as $item ([]; .+[$item])' tmp.txt
[
"server1",
"server2",
"server3",
"server4"
]
This array is fed to a map filter that creates an array of objects with the {SERVER} key:
$ jq -nR 'reduce inputs as $item ([]; .+[$item]) | map ({"{SERVER}": .})' tmp.txt
[
{
"{SERVER}": "server1"
},
{
"{SERVER}": "server2"
},
{
"{SERVER}": "server3"
},
{
"{SERVER}": "server4"
}
]
And finally, this is used to create an object that maps the key data to the array of objects, as shown at the top.
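As for the "easier way" mentioned above: with -n and -R together, [inputs] collects the raw lines directly, so the whole pipeline can be shortened to something like this (untested, same output):
jq -nR '{data: [inputs | {"{SERVER}": .}]}' tmp.txt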
As others have said, a tool like jq is better suited for this.
Alternatively, you could read the list into an array and check whether the element being processed is the last one; for example, your code could be:
declare -a servers
servers=($(cat /tmp/list))
pos=$(( ${#servers[*]} - 1 ))
last=${servers[$pos]}
echo "{ "data": [" > /tmp/json_output
for srv in "${servers[#]}"; do
if [[ $srv == $last ]]
then
echo "{ \"{SERVER}\":\"$srv\" }" >> /tmp/json_output
else
echo "{ \"{SERVER}\":\"$srv\" }," >> /tmp/json_output
fi
done
echo "]}" >> /tmp/json_output
You can do this using Python 3:
#!/usr/local/bin/python3
import os
import json
d = []
with open('listofservers.txt', 'r', encoding='utf-8') as f:
for server in f:
d.append({'{Server}' : server.rstrip("\n")})
print(json.dumps({'data': d}, indent=" "))
Which will print:
{
"data": [
{
"{Server}": "server1"
},
{
"{Server}": "server2"
},
{
"{Server}": "server3"
},
{
"{Server}": "server4"
}
]
}