So I have a script that does this (jq is a command line JSON processor):
echo "Getting LB Node IDs"
echo $LB_STATUS | jq '.loadBalancer.nodes[] .id'
The output of the last line is:
1
2
3
But when I try and assign it to an array:
echo "Creating an Array"
nodeIdArray=($($LB_STATUS | jq '.loadBalancer.nodes[] .id'))
I get this error:
./myShellScript.sh: line 53: {"loadBalancer":{"name":"lbName","id":1,"protocol":"HTTP","port":80,"algorithm":"WEIGHTED_LEAST_CONNECTIONS","status":"ACTIVE","cluster":{"name":"ztm-n22.dfw1.lbaas.rackspace.net"},"nodes":[{"address":"1.2.3.4","id":1,"type":"PRIMARY","port":80,"status":"ONLINE","condition":"ENABLED","weight":1},{"address":"1.2.3.4","id":2,"type":"PRIMARY","port":80,"status":"ONLINE","condition":"ENABLED","weight":1},{"address":"1.2.3.4","id":3,"type":"PRIMARY","port":80,"status":"ONLINE","condition":"ENABLED","weight":1}],"timeout":30,"created":{"time":"2016-06-28T22:14:24Z"},"healthMonitor":{"type":"CONNECT","delay":10,"timeout":5,"attemptsBeforeDeactivation":2},"sslTermination":...<A BOAT LOAD MORE JSON I CUT OUT FOR BREVITY'S SAKE>: File name too long
So echo $LB_STATUS | jq '.loadBalancer.nodes[] .id' produces a few numbers, but trying to assign those numbers to an array doesn't work.
What Went Wrong
$variable | something doesn't pass the text in variable as input to something -- instead, it runs the contents of $variable as a command. Presumably you wanted echo "$variable" | something instead (but see below!)
Even if that were fixed, the array=( $(some-command) ) idiom is itself buggy. See BashPitfalls #50 describing why it shouldn't be used, and the various alternatives.
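A quick illustration of the pitfall (a hypothetical demo, not from the original script): the unquoted command substitution is word-split and glob-expanded, so any glob characters in the output match files in the current directory:
mkdir -p /tmp/demo && cd /tmp/demo && touch file1 file2
arr=( $(echo '*') )   # intent: a single element containing "*"
declare -p arr        # actual: arr=([0]="file1" [1]="file2")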
What To Do Instead
When feeding content from a variable as an input to a command, it's idiomatic to use a herestring: somecommand <<<"$variable". These aren't free (as they create temporary files), but they're cheaper than pipelines (which fork off subshells).
If you have bash 4.x or newer, you have readarray:
readarray -t nodeIdArray < <(jq -r '.loadBalancer.nodes[].id' <<<"$LB_STATUS")
If you need compatibility with bash 3.x, read -a can do the job:
IFS=$'\n' read -r -d '' -a nodeIdArray \
    < <(jq -r '.loadBalancer.nodes[].id' <<<"$LB_STATUS" && printf '\0')
...which also has the advantage of causing read to return a nonzero exit status if the jq command fails.
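For example (a sketch reusing the variables above), you can branch on that exit status directly:
if IFS=$'\n' read -r -d '' -a nodeIdArray \
       < <(jq -r '.loadBalancer.nodes[].id' <<<"$LB_STATUS" && printf '\0'); then
    printf 'Got %s node IDs\n' "${#nodeIdArray[@]}"
else
    echo "jq failed to parse LB_STATUS" >&2
fi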
You left out the echo.
Change
nodeIdArray=($($LB_STATUS | jq '.loadBalancer.nodes[] .id'))
to
nodeIdArray=($(echo "$LB_STATUS" | jq '.loadBalancer.nodes[] .id'))
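You can then confirm the assignment worked (subject to the word-splitting caveats from the answer above):
printf '%s\n' "${nodeIdArray[@]}"   # should print 1, 2 and 3 on separate lines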
Related
Given a JSON line
{"a":0,"b":{"c":"C"}}{"x":33}{"asd":889}
containing 3 independent JSON objects. I need to handle them one by one. It would be nice to have something like
echo "$json" | jq --first-one
Expected output:
{"a":0,"b":{"c":"C"}}
The only command I found that can remove the first object and output the others is inputs:
echo '{"a":0,"b":{"c":"C"}}{"x":33}{"asd":889}' | jq -c inputs
output:
{"x":33}
{"asd":889}
How can I read only the first object from the input stream and leave the remaining objects untouched?
Workaround
While writing this question I found a workaround, but it looks cumbersome:
echo '{"a":0,"b":{"c":"C"}}{"x":33}{"asd":889}' | jq -c . | head -1
It simply grabs the first line...
Slurping should, in general, be avoided if possible. If your jq has input, you could simply write:
echo '{"a":0,"b":{"c":"C"}}{"x":33}{"asd":889}' |
jq -n input
If your jq does not have input, now would be a great time to upgrade to jq 1.6. If that is not an option, then by all means use the -s option, e.g. jq -s '.[0]'
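For instance, with the sample input above, the -s fallback looks like this (my sketch, not part of the original answer):
echo '{"a":0,"b":{"c":"C"}}{"x":33}{"asd":889}' | jq -cs '.[0]'
# output: {"a":0,"b":{"c":"C"}}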
I couldn't find this anywhere on the internet, so figured I'd add it as documentation.
I wanted to join a json array around the non-displaying character \30 ("RecordSeparator") so I could safely iterate over it in bash, but I couldn't quite figure out how to do it. I tried echo '["one","two","three"]' | jq 'join("\30")' and a couple permutations of that, but it didn't work.
Turns out the solution is pretty simple.... (See answer)
Use jq -j to eliminate literal newlines between records and use only your own delimiter. This works in your simple case:
#!/usr/bin/env bash
data='["one","two","three"]'
sep=$'\x1e' # works only for non-NUL characters, see NUL version below
while IFS= read -r -d "$sep" rec || [[ $rec ]]; do
    printf 'Record: %q\n' "$rec"
done < <(jq -j --arg sep "$sep" 'join($sep)' <<<"$data")
...but it also works in a more interesting scenario where naive answers fail:
#!/usr/bin/env bash
data='["two\nlines","*"]'
while IFS= read -r -d $'\x1e' rec || [[ $rec ]]; do
    printf 'Record: %q\n' "$rec"
done < <(jq -j 'join("\u001e")' <<<"$data")
returns (when run on Cygwin, hence the CRLF):
Record: $'two\r\nlines'
Record: \*
That said, if using this in anger, I would suggest using NUL delimiters, and filtering them out from the input values:
#!/usr/bin/env bash
data='["two\nlines","three\ttab-separated\twords","*","nul\u0000here"]'
while IFS= read -r -d '' rec || [[ $rec ]]; do
    printf 'Record: %q\n' "$rec"
done < <(jq -j '[.[] | gsub("\u0000"; "#NUL#")] | join("\u0000")' <<<"$data")
NUL is a good choice because it's a character that can't be stored in C strings (like the ones bash uses) at all, so there's no loss in the range of data which can be faithfully conveyed when NULs are excised -- if they did make it through to the shell, bash would (depending on version) either discard them or truncate the string at the point where the first one appears.
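A quick illustration of that behavior (my addition, not part of the original answer):
x=$'a\0b'            # $'...' quoting: bash truncates the string at the NUL
printf '%s\n' "$x"   # prints: a
y=$(printf 'a\0b')   # command substitution: bash drops NUL bytes
printf '%s\n' "$y"   # prints: ab (bash 4.4+ also emits a warning)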
The recommended way to solve the problem is to use the -c command-line
option, e.g. as follows:
echo "$data" | jq -c '.[]' |
while read -r rec
do
echo "Record: $rec"
done
Output:
Record: "one"
Record: "two"
Record: "three"
Problems with the OP's proposed answer
There are several problems with the proposal in the OP's answer based on $'\30'
First, it doesn't work reliably, e.g. using bash on a Mac
the output is: Record: "one\u0018two\u0018three";
this is because jq correctly converts octal 30 to \u0018
within the JSON string.
Second, RS is ASCII decimal 30, i.e. octal 36, which
would be written as $'\36' in the shell.
If you use this value instead, the program produces:
Record: "one\u001etwo\u001ethree" because that is
the correct JSON string with embedded RS characters. (For the record $'\30' is Control-X.)
Third, as noted by Charles Duffy, "for rec in $(...)" is inherently buggy.
Fourth, any approach which assumes jq will in future accept
illegal JSON strings is brittle in the sense that in the
future, jq might disallow them or at least require a command-line
switch to allow them.
Fifth, unset IFS is not guaranteed to restore IFS to its state beforehand.
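If you do need a modified IFS, a safer pattern (a generic sketch, mine) is to scope the change to the single command that needs it, so the shell's global IFS is never touched:
IFS=$'\36' read -r -a fields <<<$'one\36two'   # IFS change applies to read only
printf '%s\n' "${fields[@]}"                   # prints: one / two (two lines)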
The RS character is special in jq when used with the --seq command-line option. For example, with a JSON array stored in a shell variable named data we could invoke jq as follows:
$ jq -n --seq --argjson arg '[1,2]' '$arg | .[]'
Here is a transcript:
$ data='["one","two","three"]'
$ jq -n --seq --argjson arg "$data" '$arg | .[]' | tr $'\36' X
X"one"
X"two"
X"three"
$
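To consume that --seq stream from bash, one possible sketch (mine, not from the original answer) splits on the leading RS and trims the newline jq prints after each value:
while IFS= read -r -d $'\36' rec || [[ $rec ]]; do
    [[ $rec ]] || continue   # skip the empty chunk before the first RS
    rec=${rec%$'\n'}         # drop the newline jq emits after each value
    printf 'Record: %s\n' "$rec"
done < <(jq -n --seq --argjson arg "$data" '$arg | .[]')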
You simply use bash's $'\30' syntax to insert the special character in-line, like so: echo '["one","two","three"]' | jq '. | join("'$'\30''")'.
Here's the whole working example:
data='["one","two","three"]'
IFS=$'\30'
for rec in $(echo "$data" | jq '. | join("'$'\30''")'); do
echo "Record: $rec"
done
unset IFS
This prints
Record: one
Record: two
Record: three
as expected.
NOTE: It's important not to quote the subshell in the for loop. If you quote it, it will be taken as a single argument, regardless of the RecordSeparator characters. If you don't quote it, it will work as expected.
I am trying to run the following script:
sed -E -n '/"data"/,/}/{/[{}]/d;s/^[[:space:]]*"([^"]+)":[[:space:]]*"([^"]+)".*$/\1|\2/g;p}' /tmp/data.json | while IFS="|" read -r item val;do item="${item^^}"; item="${val}"; export "${item}"; echo ${item}; done
This basically exports data from inside a JSON as environment variables.
That is, the key data will have a list (of varying length) of key-value pairs within it, and the keys are not fixed. Now, I want to read every key in the list and export its value. For example, I want these commands to be executed as part of the shell script:
export HELLO1
export SAMPLEKEY
However, when I run this, it gives the error: sed: 1: "/"data"/,/}/{/[{}]/d;s/ ...": extra characters at the end of p command. What might be the reason for this?
Rather than trying to use sed to parse .json files (which can rapidly grow beyond reasonable sed parsing), instead use a tool made for parsing json (like jq -- json query). You can easily obtain the keys for values under data, and then parse with your shell tools.
(note: your questions should be tagged bash since you use the parameter expansion for character-case which is a bashism, e.g. ${item^^})
Using jq, you could do something like the following:
jq '.data' /tmp/data.json | tail -n+2 | head -n-1 |
while read -r line; do
    line=${line#*\"}; line=${line%%\"*}
    printf "export %s " "${line^^}"
done; echo ""
Which results in the output:
export HELLO1 export SAMPLEKEY
(there are probably even cleaner ways to do this with jq -- and indeed there are; see below)
You can have jq output the keys for data one per line with:
jq -r '.data | to_entries[] | (.key|ascii_upcase)' /tmp/data.json
This allows you to shorten the command that generates export in front of each key to:
while read -r key; do
    printf "export %s " "$key"
done < <(jq -r '.data | to_entries[] | (.key|ascii_upcase)' /tmp/data.json)
echo ""
(note: to affect your actual environment, you would need to export the values as part of your shell startup)
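If you also want to export the values rather than just print the keys, a hypothetical sketch along the same lines (assuming keys and values contain no tabs or newlines) could be:
while IFS=$'\t' read -r key val; do
    export "${key^^}=$val"   # e.g. export HELLO1=<its value>
done < <(jq -r '.data | to_entries[] | "\(.key)\t\(.value)"' /tmp/data.json)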
I am trying to create a menu using the select statement in bash. I am accessing an API which returns its output in JSON format. I then process what the API returns into a select menu that a user can interact with.
Here is the API call and how I parse the output:
curl -H "Authorization:Bearer $ACCESS_TOKEN" https://api.runscope.com/buckets \
| python -mjson.tool > output.json
This will send the output from the curl through python's json parsing tool and finally into the output.json file.
I then create an array using this json blob. I had to set IFS to \n in order to parse the file properly:
IFS=$'\n'
BUCKETS=("$(jq '.data | .[].name' output.json)")
I then add an exit option to the array so that users have a way to quit the selection menu:
BUCKETS+=("Exit")
Finally, I create the menu:
select BUCKET in $BUCKETS;
do
    case $BUCKET in
        "Exit")
            echo "Exiting..."
            break;;
    esac
    echo "You picked: $BUCKET"
done
Unfortunately, this does not create the exit option. I am able to see a menu consisting of every other option I want, except the exit option. Every option in the menu and in the array has quotes around them. How do I get the Exit option to show up?
$BUCKETS expands to the first element of the BUCKETS array, which is then word-split and used as your select entries.
This makes sense, since you wrapped the jq command substitution in double quotes, which prevented word-splitting from happening there (and means the IFS change isn't doing anything, I believe).
If your entries can contain spaces and you want them assigned to an array properly, the way to do that is by reading the output of jq with a while IFS= read -r entry loop:
BUCKETS=()
while IFS= read -r entry; do
    BUCKETS+=("$entry")
done < <(jq '.data | .[].name' output.json)
Then appending your exit item to the array.
BUCKETS+=(Exit)
and then using
select BUCKET in "${BUCKETS[@]}"; do
(Either select a in l; do or select a in l with do on the next line works; there's no need for both the ; and the newline.)
That all being said, unless you need output.json for something else, you can avoid that step too.
BUCKETS=()
while IFS= read -r entry; do
    BUCKETS+=("$entry")
done < <(curl -H "Authorization:Bearer $ACCESS_TOKEN" https://api.runscope.com/buckets | python -mjson.tool | jq '.data | .[].name')
Instead of
jq '.data | .[].name' output.json
try
jq -r '.data | .[].name' output.json
(-r : raw data without quotes)
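For example, with a hypothetical bucket name:
echo '{"name":"my-bucket"}' | jq '.name'      # "my-bucket"  (quoted JSON string)
echo '{"name":"my-bucket"}' | jq -r '.name'   # my-bucket    (raw text)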
And the most important part:
select BUCKET in "${BUCKETS[@]}"
                           ^^^
ARRAY syntax
I'm trying to produce multiple output files from a large JSON file of many objects which can be identified by $var1.
My 'application' splits arrays within each json object into multiple rows of text which I can test with awk to get records within a period.
From a long list of $var1, and fixed $unixtime1 and $unixtime2 for a single pass, what would a bash FOR loop look like that would pass in these variables?
cat data.json | grep $var1 | application | awk -F '$10 > $unixtime1 && $10 < $unixtime2' > $var1.txt
I could use jq to obtain a list of var1, but cannot use jq for the entire operation because of some jq limitations with large integers.
jq -c .var1 data.json | sort | uniq -c
As an occasional procedure, I've chosen not to modify my 'application' for the task (yet) - but I appreciate the experienced expertise out there.
The following for loop does the first bit - But how do I introduce the timestamps as variables?
test.txt
351779050870307
351889052023932
#! /usr/bin/env bash
for i in `cat test.txt`
do
    cat data.json | grep $i | application | awk -F "\"*,\"*" '$10 > 1417251600000 && $10 < 1417338000000' > $i.txt
done
I THINK this is what you're looking for:
while IFS= read -r i
do
grep "$i" data.json | application |
awk -F '"*,"*' -v ut1="$unixtime1" -v ut2="$unixtime2" '$10 > ut1 && $10 < ut2' > "$i.txt"
done < test.txt
For those whose expertise lies in things other than bash (like me), I hope this helps you with your own small script experiments.
Here is an example of inserting multiple variables into a bash script that contains a loop and mixes several piped commands. It includes a user prompt, a calculation, and a list (test.txt) as input.
The while loop seems tighter than the for loop in my earlier attempt (but look up for vs. while logic for the trade-offs).
The interesting awk syntax comes from the previous answer without explanation, but it works: -F '"*,"*' sets the field separator to a regex matching a comma with optional surrounding double quotes, so quoted fields split cleanly.
#! /usr/bin/env bash
read -e -p "start time?" Unixtime1
Unixtime2=$(echo "$Unixtime1 + 86400000" | bc)   # 86400000 ms = one day later
while IFS= read -r i
do
grep "$i" data.json | application |
awk -F '"*,"*' -v ut1="$Unixtime1" -v ut2="$Unixtime2" '$10 > ut1 && $10 < ut2' > "$i.dat"
done < test.txt