Extract JSON key value using grep or regex - json

I need to extract the value for the request ID from the JSON message below:
{"requestId":"2ef095c8-cec6-4fb2-a4fc-3c036f9ffaa1"}
I don't have the jq command on my OS. Is there another way I can do it, possibly using a grep regex or something else? I need to extract the value 2ef095c8-cec6-4fb2-a4fc-3c036f9ffaa1.

Since PowerShell is tagged and if that is truly the entire JSON object, then:
('{"requestId":"2ef095c8-cec6-4fb2-a4fc-3c036f9ffaa1"}' | ConvertFrom-Json).requestId

First question
#!/bin/bash
CURL_OUTPUT="{\"requestId\":\"2ef095c8-cec6-4fb2-a4fc-3c036f9ffaa1\"}"
echo $CURL_OUTPUT | awk -F: '{print substr($2, 1, length($2)-1)}' | sed 's/"//g;'
Second question
#!/bin/bash
CURL_OUTPUT="{\"status\":\"INITIALIZED\",\"result\":\"NONE\"}"
status=$(echo $CURL_OUTPUT | awk -F, '{print $1}' | awk -F: '{print $2}' | sed 's/"//g')
result=$(echo $CURL_OUTPUT | awk -F, '{print $2}' | awk -F: '{print substr($2, 1, length($2)-1)}' | sed 's/"//g')
echo "CURL_OUTPUT: $CURL_OUTPUT"
echo "Status: $status"
echo "Result: $result"
CURL_OUTPUT: {"status":"INITIALIZED","result":"NONE"}
Status: INITIALIZED
Result: NONE
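Since the question explicitly mentions grep, here is a sketch of the same extraction using grep -o (this assumes the requestId key appears exactly once in the output):

```shell
# -o prints only the matching part of the line; cut then takes the 4th
# "-delimited field, i.e. the value between the quotes after "requestId":.
CURL_OUTPUT='{"requestId":"2ef095c8-cec6-4fb2-a4fc-3c036f9ffaa1"}'
requestId=$(printf '%s' "$CURL_OUTPUT" | grep -o '"requestId":"[^"]*"' | cut -d'"' -f4)
echo "$requestId"   # 2ef095c8-cec6-4fb2-a4fc-3c036f9ffaa1
```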

Related

Shell script for Multiple Linux Servers Health report in html with column color change based on condition

This code fetches data from multiple servers and stores it in a CSV file.
I am trying to get the same data into an HTML table with conditional column colouring: if a server's free disk space is less than 20%, its column turns yellow; if less than 10%, it turns red.
rm -f /tmp/health*
touch /tmp/health.csv
#Servers File Path
FILE="/tmp/health.csv"
USR=root
#Create CSV file header
echo " Date, Hostname, Connectivity, Root-FreeSpace, Uptime, OS-version, Total-ProcessCount, VmToolsVersion, ServerLoad, Memory, Disk, CPU, LastReboot-Time, UserFailedLoginCount, " > $FILE
for server in $(cat /root/servers.txt)
do
_CMD="ssh $USR@$server"
Date=$($_CMD date)
HostName=$($_CMD hostname)
Connectivity=$($_CMD ping -c 1 google.com &> /dev/null && echo connected || echo disconnected)
ip_add=$(ifconfig | grep "inet addr" | head -2 | tail -1 | awk '{print $2}' | cut -f2 -d:)
RootFreeSpace=$($_CMD df / | tail -n +2 |awk '{print $5}')
Uptime=$($_CMD uptime | sed 's/.*up \([^,]*\), .*/\1/')
OSVersion=$($_CMD cat /etc/redhat-release)
TotalProcess=$($_CMD ps axue | grep -vE "^USER|grep|ps" | wc -l)
VmtoolStatus=$($_CMD vmtoolsd -v | awk '{print $5}')
ServerLoad=$($_CMD uptime |awk -F'average:' '{ print $2}'|sed s/,//g | awk '{ print $2}')
Memory=$($_CMD free -m | awk 'NR==2{printf "%.2f%%\t\t", $3*100/$2 }')
Disk=$($_CMD df -h | awk '$NF=="/"{printf "%s\t\t", $5}')
CPU=$($_CMD top -bn1 | grep load | awk '{printf "%.2f%%\t\t\n", $(NF-2)}')
Lastreboottime=$($_CMD who -b | awk '{print $3,$4}')
FailedUserloginCount=$($_CMD cat /var/log/secure |grep "Failed" | wc -l)
#updated data in CSV
echo "$Date,$HostName,$Connectivity,$RootFreeSpace,$Uptime,$OSVersion,$TotalProcess,$VmtoolStatus,$ServerLoad,$Memory,$Disk,$CPU,$Lastreboottime,$FailedUserloginCount" >> $FILE
done
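A minimal sketch of the CSV-to-HTML step, using the thresholds from the question (yellow under 20% free, red under 10%). The position of the Disk column (field 12 here) and its "NN%" used-space format are assumptions about the CSV produced above, so free space is taken as 100 minus the used percentage:

```shell
#!/bin/bash
# Sketch only: render the health CSV as an HTML table, colouring the Disk
# cell yellow when free space is below 20% and red when below 10%.
# Assumption: the Disk field holds *used* space as "NN%", so free = 100 - used.
to_html() {
    awk -F, -v diskcol="${1:-12}" '
        BEGIN { print "<table border=\"1\">" }
        {
            printf "<tr>"
            for (i = 1; i <= NF; i++) {
                color = ""
                if (NR > 1 && i == diskcol) {   # skip the header row
                    used = $i + 0               # awk reads "85%" as 85
                    free = 100 - used
                    if (free < 10)      color = " bgcolor=\"red\""
                    else if (free < 20) color = " bgcolor=\"yellow\""
                }
                printf "<td%s>%s</td>", color, $i
            }
            print "</tr>"
        }
        END { print "</table>" }'
}
if [ -f /tmp/health.csv ]; then
    to_html < /tmp/health.csv > /tmp/health.html
fi
```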

How to find value of a key in a json response trace file using shell script

I have a response trace file containing the response below:
#RESPONSE BODY
#--------------------
{"totalItems":1,"member":[{"name":"name","title":"PatchedT","description":"My des_","id":"70EA96FB313349279EB089BA9DE2EC3B","type":"Product","modified":"2019 Jul 23 10:22:15","created":"2019 Jul 23 10:21:54",}]}
I need to fetch the value of the "id" key into a variable that I can use later in my code.
The expected result:
echo $id - should give me the value 70EA96FB313349279EB089BA9DE2EC3B
With valid JSON (strip the first two lines with sed, then parse with jq):
id=$(sed '1,2d' file | jq -r '.member[]|.id')
Output to variable id:
70EA96FB313349279EB089BA9DE2EC3B
I would strongly suggest using jq to parse JSON.
But given that JSON is mostly compatible with Python dictionaries and arrays, this HACK would work too:
$ cat resp
#RESPONSE BODY
#--------------------
{"totalItems":1,"member":[{"name":"name","title":"PatchedT","description":"My des_","id":"70EA96FB313349279EB089BA9DE2EC3B","type":"Product","modified":"2019 Jul 23 10:22:15","created":"2019 Jul 23 10:21:54",}]}
$ awk 'NR==3{print "a="$0;print "print(a[\"member\"][0][\"id\"])"}' resp | python3
70EA96FB313349279EB089BA9DE2EC3B
$ sed -n '3s|.*|a=\0\nprint(a["member"][0]["id"])|p' resp | python3
70EA96FB313349279EB089BA9DE2EC3B
Note that this code is:
1. a dirty hack, because your system does not have the right tool - jq
2. susceptible to code injection. Hence use it ONLY IF you trust the response received from your service.
Quick and dirty (keeping in mind that eval is dangerous and best avoided):
eval $(cat response_file | tail -1 | awk -F , '{ print $5 }' | sed -e 's/"//g' -e 's/:/=/')
It is based on the exact structure you gave, and hoping there is no , in any value before "id".
Or assign it yourself:
id=$(cat response_file | tail -1 | awk -F , '{ print $5 }' | cut -d: -f2 | sed -e 's/"//g')
Note that you can't access the name field with that trick: being the first item of the member array, it is "swallowed" into the same comma-delimited field as the "member":[ prefix. You can use an even-uglier hack to retrieve it though:
id=$(cat response_file | tail -1 | sed -e 's/:\[/,/g' -e 's/}\]//g' | awk -F , '{ print $5 }' | cut -d: -f2 | sed -e 's/"//g')
But, if you can, jq is the right tool for that work instead of ugly hacks like that (but if it works...).
When you can't use jq, you can consider
id=$(grep -Eo "[0-9A-F]{32}" file)
This only works when the file looks like what I expect, so you might need to add extra checks, like:
id=$(grep "My des_" file | grep -Eo "[0-9A-F]{32}" | head -1)

How to parse the csv file without two blank fields in awk?

The CSV is structured as below.
"field1","field2","field3,with,commas","field4",
There are four fields in the CSV file:
the first: field1
the second: field2
the third: field3,with,commas
the fourth: field4
Here is my regular expression for awk.
'^"|","|",$'
debian8#hwy:~$ echo '"field1","field2","field3,with,commas","field4",' |awk -F '^"|","|",$' '{print NF}'
6
debian8#hwy:~$ echo '"field1","field2","field3,with,commas","field4",' |awk -F '^"|","|",$' '{print $1}'
debian8#hwy:~$ echo '"field1","field2","field3,with,commas","field4",' |awk -F '^"|","|",$' '{print $2}'
field1
debian8#hwy:~$ echo '"field1","field2","field3,with,commas","field4",' |awk -F '^"|","|",$' '{print $3}'
field2
debian8#hwy:~$ echo '"field1","field2","field3,with,commas","field4",' |awk -F '^"|","|",$' '{print $4}'
field3,with,commas
debian8#hwy:~$ echo '"field1","field2","field3,with,commas","field4",' |awk -F '^"|","|",$' '{print $5}'
field4
debian8#hwy:~$ echo '"field1","field2","field3,with,commas","field4",' |awk -F '^"|","|",$' '{print $6}'
Two issues remain with my regular expression '^"|","|",$':
1. The 4 fields are parsed as 6 fields.
2. $1 and $6 are parsed as blank.
How to write a regular expression format to make:
echo '"field1","field2","field3,with,commas","field4",' |awk -F format '{print NF}'
4
debian8#hwy:~$ echo '"field1","field2","field3,with,commas","field4",' |awk -F format '{print $1}'
field1
debian8#hwy:~$ echo '"field1","field2","field3,with,commas","field4",' |awk -F format '{print $2}'
field2
debian8#hwy:~$ echo '"field1","field2","field3,with,commas","field4",' |awk -F format '{print $3}'
field3,with,commas
debian8#hwy:~$ echo '"field1","field2","field3,with,commas","field4",' |awk -F format '{print $4}'
field4
A workaround could be to set FS to "," and remove using gsub the character at the start and the end of each record:
echo '"field1","field2","field3,with,commas","field4",' | awk -v FS='","' '{gsub(/^"|",$/, ""); print NF, $1, $2, $3, $4}'
4 field1 field2 field3,with,commas field4
I think that the FPAT variable is probably what you want. Take a look at the documentation and examples in the GNU awk User's Guide.
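For example (GNU awk only; on some systems you may need to invoke it as gawk), FPAT describes what a field looks like rather than what separates fields, here a double-quoted string:

```shell
# FPAT (GNU awk) matches fields by content: a quote, non-quotes, a quote.
echo '"field1","field2","field3,with,commas","field4",' |
awk -v FPAT='"[^"]*"' '{
    for (i = 1; i <= NF; i++) gsub(/^"|"$/, "", $i)  # strip surrounding quotes
    print NF          # 4
    print $3          # field3,with,commas
}'
```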

Using AWK to add DATE to last column of CSV

As a total bash newbie, I'm struggling to construct an awk statement that appends the current date as the last column of the output. Here is what I've been trying. Any ideas on how to make $6 hold the date?
cat file.json |
jq -r '.pagedEntities._embedded.teamActivityList |
[.[].teamName, .[].rank, .[].average, .[].total] |
#csv' |
awk -F"," 'BEGIN { OFS = "," } ; {$6=$(date) OFS $6; print}'
I know you asked for an Awk command, but since you're already using jq to generate the CSV file, you might as well do it there:
cat file.json |
jq --arg date "$(date)" -r '
.pagedEntities._embedded.teamActivityList |
[.[].teamName, .[].rank, .[].average, .[].total, $date] |
#csv'
This also saves you from the pitfalls of using tools that don't understand the language they're dealing with, as is the case with Awk and CSV; as an example, your script will break if any of the CSV entries is quoted and has a comma in it.
If in this command...
awk -F"," 'BEGIN { OFS = "," } ; {$6=$(date) OFS $6; print}'
...you are trying to assign to $6 the output of the date command, that won't work. $(command) is Bourne shell syntax and won't work in awk. The easiest way to do what you want is probably:
awk -v date="$(date)" -F"," 'BEGIN { OFS = "," } ; {$6=date; print}'
This assigns the output of date to the awk variable date, which can then be used in your awk script.
If that's not what you're trying to do, please update your question to show both some sample input as well as an example of what you would like your output to look like.
Here is a solution which builds on Santiago's approach but uses jq's now and strftime functions instead of the Unix date command.
jq -r '
(now|strftime("%c")) as $date
| .pagedEntities._embedded.teamActivityList
| [.[].teamName, .[].rank, .[].average, .[].total, $date]
| #csv
' file.json

bash script - grep a json variable

I would like the simplest solution for my pretty basic bash script:
#!/bin/bash
# Weather API url format: http://api.wunderground.com/api/{api_key}/conditions/q/CA/{location}.json
# http://api.wunderground.com/api/5e8747237f05d669/conditions/q/CA/tbilisi.json
api_key=5e8747237f05d669
location=tbilisi
temp=c
api=$(wget -qO- http://api.wunderground.com/api/$api_key/conditions/q/CA/$location.json)
temp_c=$api | grep temp_c
temp_f=$api | grep temp_f
if [ $temp = "f" ]; then
echo $temp_f
else
echo $temp_c
fi
grep returns empty. This is my first bash script and I'm still getting the hang of the syntax, so please point out obvious errors.
I also don't understand why I have $() for wget.
You can use:
temp_c=$(echo $api|awk '{print $2}' FS='temp_c":'|awk '{print $1}' FS=',')
temp_f=$(echo $api|awk '{print $2}' FS='temp_f":'|awk '{print $1}' FS=',')
Instead of:
temp_c=$api | grep temp_c
temp_f=$api | grep temp_f
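If you do want to stick with grep, grep -o can pull the value out on its own. The sample line below is a hypothetical fragment of the API response, assuming temp_c holds a bare number:

```shell
# Hypothetical sample of the conditions response; in the real script $api
# holds the full JSON fetched by wget.
api='{"current_observation":{"temp_c":23.5,"temp_f":74.3}}'
# First grep -o keeps only the key:value match; the second keeps the number.
temp_c=$(printf '%s' "$api" | grep -o '"temp_c": *[-0-9.]*' | grep -o '[-0-9.]*$')
echo "$temp_c"   # 23.5
```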
I am getting the following JSON response from curl and storing it in a variable:
CURL_OUTPUT='{ "url": "protocol://xyz.net/9999" , "other_key": "other_value" }'
Question: I want to read the url key's value and extract the ID from that URL.
Answer:
_ID=$(echo $CURL_OUTPUT |awk '{print $2}' FS='url":' |awk '{print $1}' FS=',' | awk '{print $2}' FS='"'|awk '{print $4}' FS='/')
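A shorter sketch of the same extraction, using one grep plus bash parameter expansion (assuming the ID is everything after the last / in the url value):

```shell
CURL_OUTPUT='{ "url": "protocol://xyz.net/9999" , "other_key": "other_value" }'
# Pull the quoted url value, then strip everything up to the last slash.
url=$(printf '%s' "$CURL_OUTPUT" | grep -o '"url": *"[^"]*"' | cut -d'"' -f4)
_ID=${url##*/}
echo "$_ID"   # 9999
```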