remove comma from end of json list in bash script

I have a bash script that outputs JSON, but apparently I can't have a comma at the end of the last item in a list, as seen below. How do I get the last item in the list to not have a comma at the end, while keeping it on all the rest of the items?
Ideally, I'd like to do this in the loop, but I'm okay with stripping it off after creation.
Here is a portion of the script.
#Print out a list of hosts in groups
for group in $grouplist; do
printf '\t"%s": {\n' $group
echo -ne '\t\t"hosts": ['
for host in `grep $group $results|awk '{print $1}'`; do
printf '"%s", ' "$host"
done
echo ']'
echo -e '\t},'
done
It outputs a JSON object that ends like this:
},
}
I have this in a function so I have messed around with a variety of sed stuff to clean it up, but haven't managed to pull it off.
function | sed "s/something/another/"

Maybe you can get the length of $grouplist and keep track of the loop count, then echo with an if/then/else, something like this:
len=$(wc -w <<< $grouplist)
i=0
for group in $grouplist; do
let i++
printf '\t"%s": {\n' $group
echo -ne '\t\t"hosts": ['
for host in `grep $group $results|awk '{print $1}'`; do
printf '"%s", ' "$host"
done
echo ']'
if (( $i < $len )); then
echo -e '\t},'
else
echo -e '\t}'
fi
done
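Note that the inner "hosts" list above still ends with ", " before the closing bracket. The usual trick is to print the separator before each item instead of after it, so the first item gets none. A sketch combining both fixes, assuming the same $grouplist and $results variables as in the question (sep is a new helper variable):
len=$(wc -w <<< $grouplist)
i=0
for group in $grouplist; do
    let i++
    printf '\t"%s": {\n' "$group"
    echo -ne '\t\t"hosts": ['
    sep=''
    for host in `grep $group $results|awk '{print $1}'`; do
        printf '%s"%s"' "$sep" "$host"   # separator printed before the item
        sep=', '                         # so only items after the first get one
    done
    echo ']'
    if (( $i < $len )); then
        echo -e '\t},'
    else
        echo -e '\t}'
    fi
done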

Related

Redirecting mysql output to prompt using shell

I'm writing a shell script to automatically run multiple SQL queries against multiple databases. My code works well, but apart from "select" queries, the other queries don't display anything in the terminal while executing. How can I force the query output to be shown?
Here's some of my code:
for x in "${db[@]}"
do
found=0
for enreg in `cat /home/dbfile.csv`
do
#extracting database data from csv
DBNAME=`echo $enreg | awk -F";" '{ print $4 }'`
if [ $x = $DBNAME ]
then
PASS=`echo $enreg | awk -F";" '{ print $2 }'`
HOST=`echo $enreg | awk -F";" '{ print $3 }'`
USERNAME=`echo $enreg | awk -F";" '{ print $1 }'`
# Running queries in database $DBNAME
for y in "${req[@]}"
do
echo "
"
mysql -u $USERNAME -p$PASS -h $HOST $DBNAME -e "$y"
echo "
"
done
found=1
break
fi
done
if [ $found -eq 0 ]
then
echo "Database $x doesn't exist"
fi
done
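One hedged suggestion: the stock mysql client has a -v/--verbose option that can be repeated to increase verbosity, and at higher verbosity it echoes each statement and its status line even for non-SELECT queries run with -e. That may be enough here. A sketch, using the same variables as in the loop above:
# -vvv makes the client echo the statement and print its status line
# (e.g. rows affected for UPDATE/INSERT), not just SELECT result rows.
mysql -u "$USERNAME" -p"$PASS" -h "$HOST" "$DBNAME" -vvv -e "$y"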

How to Flatten JSON using jq and Bash into Bash Associative Array where Key=Selector?

As a follow-up to Flatten Arbitrary JSON, I'm looking to take the flattened results and make them suitable for doing queries and updates back to the original JSON file.
Motivation: I'm writing Bash (4.2+) scripts (on CentOS 7) that read JSON into a Bash associative array using the JSON selector/filter as the key. I do processing on the associative arrays, and in the end I want to update the JSON with those changes.
The preceding solution gets me close to this goal. I think there are two things that it doesn't do:
It doesn't quote keys that require quoting. For example, the key com.acme would need to be quoted because it contains a special character.
Array indexes are not represented in a form that can be used to query the original JSON.
Existing Solution
The solution from the above is:
$ jq --stream -n --arg delim '.' 'reduce (inputs|select(length==2)) as $i ({};
[$i[0][]|tostring] as $path_as_strings
| ($path_as_strings|join($delim)) as $key
| $i[1] as $value
| .[$key] = $value
)' input.json
For example, if input.json contains:
{
"a.b":
[
"value"
]
}
then the output is:
{
"a.b.0": "value"
}
What is Really Wanted
An improvement would have been:
{
"\"a.b\"[0]": "value"
}
But what I really want is output formatted so that it could be sourced directly in a Bash program (implying the array name is passed to jq as an argument):
ArrayName['"a.b"[0]']='value' # Note 'value' might need escapes for Bash
I'm looking to have the more human-readable syntax above as opposed to the more general:
ArrayName['.["a.b"][0]']='value'
I don't know if jq can handle all of this. My present solution is to take the output from the preceding solution and to post-process it to the form that I want. Here's the work in process:
#!/bin/bash
Flatten()
{
local -r OPTIONS=$(getopt -o d:m:f: -l "delimiter:,mapname:,file:" -n "${FUNCNAME[0]}" -- "$@")
eval set -- "$OPTIONS"
local Delimiter='.' MapName=map File=
while true ; do
case "$1" in
-d|--delimiter) Delimiter="$2"; shift 2;;
-m|--mapname) MapName="$2"; shift 2;;
-f|--file) File="$2"; shift 2;;
--) shift; break;;
esac
done
local -a Array=()
readarray -t Array <<<"$(
jq -c -S --stream -n --arg delim "$Delimiter" 'reduce (inputs|select(length==2)) as $i ({}; .[[$i[0][]|tostring]|join($delim)] = $i[1])' <<<"$(sed 's|^\s*[#%].*||' "$File")" |
jq -c "to_entries|map(\"\(.key)=\(.value|tostring)\")|.[]" |
sed -e 's|^"||' -e 's|"$||' -e 's|=|\t|')"
if [[ ! -v $MapName ]]; then
local -gA $MapName
fi
. <(
IFS=$'\t'
while read -r Key Value; do
printf "$MapName[\"%s\"]=%q\n" "$Key" "$Value"
done <<<"$(printf "%s\n" "${Array[@]}")"
)
}
declare -A Map
Flatten -m Map -f "$1"
declare -p Map
With the output:
$ ./Flatten.sh <(echo '{"a.b":["value"]}')
declare -A Map='([a.b.0]="value" )'
1) jq is Turing complete, so it's all just a question of which hammer to use.
2)
An improvement would have been:
{
"\"a.b\"[0]": "value"
}
That is easily accomplished using a helper function along these lines:
def flattenPath(delim):
reduce .[] as $s ("";
if $s|type == "number"
then ((if . == "" then "." else . end) + "[\($s)]")
else . + ($s | tostring | if index(delim) then "\"\(.)\"" else . end)
end );
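A sketch of how this helper might be wired into the reduce from the existing solution (the flattenPath body is copied from above; the same --stream invocation and $delim argument are assumed):
jq --stream -n --arg delim '.' '
  def flattenPath(delim):
    reduce .[] as $s ("";
      if $s|type == "number"
      then ((if . == "" then "." else . end) + "[\($s)]")
      else . + ($s | tostring | if index(delim) then "\"\(.)\"" else . end)
      end );
  reduce (inputs|select(length==2)) as $i ({};
    ($i[0] | flattenPath($delim)) as $key
    | .[$key] = $i[1]
  )' input.json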
3)
I do processing on the associative arrays, and in the end I want to update the JSON with those changes.
This suggests you might have posed an XY problem. However, if you really do want to serialize and unserialize some JSON text, then the natural way to do so using jq is with leaf_paths, as illustrated by the following serialization/deserialization functions:
# Emit (path, value) pairs
# Usage: jq -c -f serialize.jq input.json > serialized.json
def serialize: leaf_paths as $p | ($p, getpath($p));
# Usage: jq -n -f unserialize.jq serialized.json
def unserialize:
def pairwise(s):
foreach s as $i ([];
if length == 1 then . + [$i] else [$i] end;
select(length == 2));
reduce pairwise(inputs) as $p (null; setpath($p[0]; $p[1]));
If using bash, you could use readarray (mapfile) to read the paths and values into a single array, or if you want to distinguish between the paths and values more easily, you could (for example) use the approach illustrated by the following:
i=0
while read -r line ; do
path[$i]="$line"; read -r line; value[$i]="$line"
i=$((i + 1))
done < serialized.json
But there are many other alternatives.
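For instance, here is a sketch of the readarray route mentioned above, assuming serialize.jq contains the def above followed by a call to serialize:
# Read alternating path/value lines into one array, then walk it pairwise.
readarray -t lines < <(jq -c -f serialize.jq input.json)
for (( i=0; i<${#lines[@]}; i+=2 )); do
    path=${lines[i]}
    value=${lines[i+1]}
    printf '%s\t%s\n' "$path" "$value"
done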

conditional statement in bash with mysql query

I'm trying to write a bash script that will do a MySQL query and, if the number of results is 1, do something. I can't get it to work, though.
#!/bin/sh
file=`mysql -uroot -proot -e "select count(*) from MyTable.files where strFilename='file.txt'"`
if [[ $file == "count(*) 1" ]];
then
echo $file
else
echo $file
echo "no"
fi
I verified that the query works. I keep getting this returned:
count(*) 1
no
I'm not sure why, but I think it might have something to do with the type of the variable $file. Any ideas?
To prevent exposing your database credentials in the script, you can store them in a .my.cnf file located in your home directory.
This technique will allow your script to work on any server without modification.
Path: /home/youruser/.my.cnf
Content:
[client]
user="root"
password="root"
host="localhost"
[mysql]
database="MyTable"
So, Renato's code could be rewritten as follows:
#!/bin/sh
file=`mysql -e "select count(*) as count from files where strFilename='file.txt'" | cut -d \t -f 2`
if [ $file == "1" ];
then
echo $file
else
echo $file
echo "no"
fi
I rewrote your script, it works now:
#!/bin/sh
file=`mysql -uroot -proot -e "select count(*) as count from MyTable.files where strFilename='file.txt'" | cut -d \t -f 2`
if [ $file == "1" ];
then
echo $file
else
echo $file
echo "no"
fi
I'm giving a better name to the count field, using 'cut' to split the mysql output into two fields, and putting the content of the second field into $file. Then you can test $file for "1". Hope it helps.
I'm guessing the value isn't actually "count(*) 1" but instead "count(*)\n1" or something similar. echo $file collapses all the characters in IFS (including newlines) to single spaces, but == distinguishes between those whitespace characters. If this is the case, echo "$file" will give you a different result. (Notice the quotes.)
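A hedged alternative to the cut post-processing: the mysql client's -N (--skip-column-names) and -s (--silent) options suppress the header row entirely, so the variable holds only the count and no newline handling is needed. A sketch along the lines of the original script:
#!/bin/sh
# -N drops the "count(*)" header row; -s trims the batch formatting
file=$(mysql -uroot -proot -N -s -e "select count(*) from MyTable.files where strFilename='file.txt'")
if [ "$file" = "1" ]; then
    echo "$file"
else
    echo "$file"
    echo "no"
fi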

Output JSON from Bash script

So I have a bash script which outputs details on servers. The problem is that I need the output to be JSON. What is the best way to go about this? Here is the bash script:
# Get hostname
hostname=`hostname -A` 2> /dev/null
# Get distro
distro=`python -c 'import platform ; print platform.linux_distribution()[0] + " " + platform.linux_distribution()[1]'` 2> /dev/null
# Get uptime
if [ -f "/proc/uptime" ]; then
uptime=`cat /proc/uptime`
uptime=${uptime%%.*}
seconds=$(( uptime%60 ))
minutes=$(( uptime/60%60 ))
hours=$(( uptime/60/60%24 ))
days=$(( uptime/60/60/24 ))
uptime="$days days, $hours hours, $minutes minutes, $seconds seconds"
else
uptime=""
fi
echo $hostname
echo $distro
echo $uptime
So the output I want is something like:
{"hostname":"server.domain.com", "distro":"CentOS 6.3", "uptime":"5 days, 22 hours, 1 minutes, 41 seconds"}
Thanks.
If you only need to output a small JSON, use printf:
printf '{"hostname":"%s","distro":"%s","uptime":"%s"}\n' "$hostname" "$distro" "$uptime"
Or if you need to produce a larger JSON, use a heredoc as explained by leandro-mora. If you use the here-doc solution, please be sure to upvote his answer:
cat <<EOF > /your/path/myjson.json
{"id" : "$my_id"}
EOF
Some of the more recent distros have a file called /etc/lsb-release or a similar name (cat /etc/*release). Therefore, you could possibly do away with your dependency on Python:
distro=$(awk -F= 'END { print $2 }' /etc/lsb-release)
As an aside, you should probably do away with using backticks; they're a bit old-fashioned.
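One more hedged aside: if any of those values can contain double quotes or backslashes, a plain printf will emit broken JSON. If jq is available, its --arg flag handles the escaping for you, e.g.:
jq -n --arg hostname "$hostname" --arg distro "$distro" --arg uptime "$uptime" \
    '{hostname: $hostname, distro: $distro, uptime: $uptime}'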
I find it much more easy to create the json using cat:
cat <<EOF > /your/path/myjson.json
{"id" : "$my_id"}
EOF
I'm not a bash ninja at all, but I wrote a solution that works perfectly for me, so I decided to share it with the community.
First of all, I created a bash script called json.sh
arr=();
while read x y;
do
arr=("${arr[#]}" $x $y)
done
vars=(${arr[#]})
len=${#arr[#]}
printf "{"
for (( i=0; i<len; i+=2 ))
do
printf "\"${vars[i]}\": ${vars[i+1]}"
if [ $i -lt $((len-2)) ] ; then
printf ", "
fi
done
printf "}"
echo
And now I can easily execute it:
$ echo key1 1 key2 2 key3 3 | ./json.sh
{"key1":1, "key2":2, "key3":3}
@Jimilian's script was very helpful for me. I changed it a bit to send data to Zabbix auto discovery:
arr=()
while read x y;
do
arr=("${arr[@]}" $x $y)
done
vars=(${arr[@]})
len=${#arr[@]}
printf "{\n"
printf '\t"data":[\n'
for (( i=0; i<len; i+=2 ))
do
printf '\t{ "{#VAL1}":"%s",\t"{#VAL2}":"%s" }' "${vars[i]}" "${vars[i+1]}"
if [ $i -lt $((len-2)) ] ; then
printf ",\n"
fi
done
printf "\n"
printf "\t]\n"
printf "}\n"
echo
Output:
$ echo "A 1 B 2 C 3 D 4 E 5" | ./testjson.sh
{
data:[
{ {#VAL1}:"A", {#VAL2}:"1" },
{ {#VAL1}:"B", {#VAL2}:"2" },
{ {#VAL1}:"C", {#VAL2}:"3" },
{ {#VAL1}:"D", {#VAL2}:"4" },
{ {#VAL1}:"E", {#VAL2}:"5" }
]
}
I wrote a tiny program in Go, json_encode. It works pretty well for such cases:
$ ./getDistro.sh | json_encode
["my.dev","Ubuntu 17.10","4 days, 2 hours, 21 minutes, 17 seconds"]
data=$(echo " BUILD_NUMBER : ${BUILD_NUMBER} , BUILD_ID : ${BUILD_ID} , JOB_NAME : ${JOB_NAME} " | sed 's/ /"/g')
output => data="BUILD_NUMBER":"29","BUILD_ID":"29","JOB_NAME":"OSM_LOG_ANA"
To answer the subject line: if you need to take tab-separated output from any command line and format it as a JSON list of lists, you can do the following:
echo <tsv_string> | python3 -c "import sys,json; print(json.dumps([row.split('\t') for row in sys.stdin.read().splitlines()]))"
For example, to get zfs list output as JSON:
zfs list -Hpr -o name,creation,mountpoint,mounted | python3 -c "import sys,json; print(json.dumps([row.split('\t') for row in sys.stdin.read().splitlines()]))"

Creating an HTML table with BASH & AWK

I am having issues creating an HTML table to display stats from a text file. I am sure there are 100 ways to do this better, but here it is:
(The comments in the following script show the outputs)
#!/bin/bash
function getapistats () {
curl -s http://api.example.com/stats > api-stats.txt
awk '{print $1}' api-stats.txt > api-stats-int.txt
awk '{print $2}' api-stats.txt > api-stats-fqdn.txt
}
# api-stats.txt example
# 992 cdn.example.com
# 227 static.foo.com
# 225 imgcdn.bar.com
# end api-stats.txt example
function get_int () {
for i in `cat api-stats-int.txt`;
do echo -e "<tr><td>${i}</td>";
done
}
function get_fqdn () {
for f in `cat api-stats-fqdn.txt`;
do echo -e "<td>${f}</td></tr>";
done
}
function build_table () {
echo "<table>";
echo -e "`get_int`" "`get_fqdn`";
#echo -e "`get_fqdn`";
echo "</table>";
}
getapistats;
build_table > api-stats.html;
# Output fail :|
# <table>
# <tr><td>992</td>
# <tr><td>227</td>
# <tr><td>225</td><td>cdn.example.com</td></tr>
# <td>static.foo.com</td></tr>
# <td>imgcdn.bar.com</td></tr>
# Desired output:
# <tr><td>992</td><td>cdn.example.com</td></tr>
# ...
This is reasonably simple to do in pure awk:
curl -s http://api.example.com/stats > api-stats.txt
awk 'BEGIN { print "<table>" }
{ print "<tr><td>" $1 "</td><td>" $2 "</td></tr>" }
END { print "</table>" }' api-stats.txt > api-stats.html
Awk is really made for this type of use.
You can do it with one awk at least.
curl -s http://api.example.com/stats | awk '
BEGIN{print "<table>"}
{printf("<tr><td>%d</td><td>%s</td></tr>\n",$1,$2)}
END{print "</table>"}
'
this can be done with bash ;)
while read -u 3 a && read -u 4 b;do
echo $a$b;
done 3&lt/etc/passwd 4&lt/etc/services
But my experience is that it's usually a bad idea to do things like this in bash/awk/etc.
The feature I used in the code is deeply buried in the bash manual page...
I would recommend using a real language for this kind of data processing, for example Ruby or Python, because they are more flexible/readable/maintainable.
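Applied to the question's two column files, that read -u technique might look like this (a sketch, assuming api-stats-int.txt and api-stats-fqdn.txt produced by getapistats above):
echo "<table>"
while read -u 3 i && read -u 4 f; do
    # pair line N of the counts file with line N of the FQDN file
    printf '<tr><td>%s</td><td>%s</td></tr>\n' "$i" "$f"
done 3<api-stats-int.txt 4<api-stats-fqdn.txt
echo "</table>"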