bash script - grep a JSON variable

I would like the simplest solution for my pretty basic bash script:
#!/bin/bash
# Weather API url format: http://api.wunderground.com/api/{api_key}/conditions/q/CA/{location}.json
# http://api.wunderground.com/api/5e8747237f05d669/conditions/q/CA/tbilisi.json
api_key=5e8747237f05d669
location=tbilisi
temp=c
api=$(wget -qO- http://api.wunderground.com/api/$api_key/conditions/q/CA/$location.json)
temp_c=$api | grep temp_c
temp_f=$api | grep temp_f
if [ $temp = "f" ]; then
echo $temp_f
else
echo $temp_c
fi
grep returns empty. This is my first bash script and I'm still getting the hang of the syntax, so please point out any obvious errors.
I also don't understand why I need $() for wget.

You can use:
temp_c=$(echo "$api" | awk '{print $2}' FS='temp_c":' | awk '{print $1}' FS=',')
temp_f=$(echo "$api" | awk '{print $2}' FS='temp_f":' | awk '{print $1}' FS=',')
Instead of:
temp_c=$api | grep temp_c
temp_f=$api | grep temp_f
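As for why the original grep returned empty: `temp_c=$api | grep temp_c` runs grep with nothing on stdin, and the `$()` around wget is command substitution, which captures a command's output into a variable. A minimal sketch of the corrected pipe, using a trimmed sample response (the field names are assumed from the question's API URL):

```shell
#!/bin/bash
# Trimmed sample of what the weather API returns (assumed shape)
api='{"current_observation":{"temp_f":71.6,"temp_c":22.0}}'

# echo the variable into the pipe; grep -o prints only the matched part
temp_c=$(echo "$api" | grep -o '"temp_c":[0-9.]*' | cut -d: -f2)
temp_f=$(echo "$api" | grep -o '"temp_f":[0-9.]*' | cut -d: -f2)

echo "C: $temp_c  F: $temp_f"   # C: 22.0  F: 71.6
```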

I am getting the following JSON response from curl and storing it in a variable:
CURL_OUTPUT='{ "url": "protocol://xyz.net/9999" , "other_key": "other_value" }'
Question: I want to read the url key's value and extract the id from that url.
Answer: _ID=$(echo "$CURL_OUTPUT" | awk '{print $2}' FS='url":' | awk '{print $1}' FS=',' | awk '{print $2}' FS='"' | awk '{print $4}' FS='/')
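For comparison, the same extraction can be done in a single sed substitution instead of four awk passes; a sketch, assuming the url value always ends in the numeric id:

```shell
CURL_OUTPUT='{ "url": "protocol://xyz.net/9999" , "other_key": "other_value" }'

# Capture the digits between the last "/" of the url value and its closing quote
_ID=$(echo "$CURL_OUTPUT" | sed -n 's/.*"url": *"[^"]*\/\([0-9]*\)".*/\1/p')
echo "$_ID"   # 9999
```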


shell script - unable to get the expected response when running through a loop inside a .sh file, but getting it when defining the variables individually in the file

Fileread.sh
#!/bin/bash
s=ch.qos.logback
e=logback-access
curl -s "https://search.maven.org/solrsearch/select?q=g:$s+AND+a:$e&core=gav&rows=1&wt=json" | jq ".response.docs[].v"
output: "1.2.11"
This code works perfectly fine, but when I store the s and e values in a .txt file, colon-separated, and then run it, I get nothing in response.
textFile.txt
ch.qos.logback:logback-access
fileread.sh
#!/bin/bash
read -p "Enter file name:" filename
while IFS=':' read -r s e
do
curl -s "https://search.maven.org/solrsearch/select?q=g:${s}+AND+a:${e}&core=gav&rows=1&wt=json" | jq ".response.docs[].v"
done < $filename
I have tried :
xy=$(curl -s "https://search.maven.org/solrsearch/select?q=g:${s}+AND+a:${e}&core=gav&rows=1&wt=json" | jq ".response.docs[].v")
echo "$xy"
xy=$(curl -s "'https://search.maven.org/solrsearch/select?q=g:'${s}'+AND+a:'${e}&core=gav&rows=1&wt=json" | jq ".response.docs[].v")
echo "$xy"
url=`https://search.maven.org/solrsearch/select?q=g:${s}+AND+a:${e}&core=gav&rows=1&wt=json`
echo url
xx=`curl -s "$url" | jq ".response.docs[].v"`
echo $xx
Try this:
#!/bin/bash
echo "Enter file name:"
read filename
IFS=':' read -r s e < "$filename"
echo "$s $e"
curl -s "https://search.maven.org/solrsearch/select?q=g:${s}+AND+a:${e}&core=gav&rows=1&wt=json" | jq ".response.docs[].v"
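A note on the original loop: when it reads the file but prints nothing, one common culprit (an assumption here, since the file isn't shown byte-for-byte) is Windows line endings, where the trailing carriage return ends up in $e and corrupts the URL. A sketch that keeps the multi-line loop and strips a possible CR:

```shell
#!/bin/bash
# Simulate a Windows-edited input file (CRLF line endings)
printf 'ch.qos.logback:logback-access\r\n' > textFile.txt

while IFS=':' read -r s e; do
    e=$(printf '%s' "$e" | tr -d '\r')   # strip the CR; a no-op on Unix files
    echo "group=$s artifact=$e"          # stand-in for the curl | jq call
done < textFile.txt
```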

Extract Json Key value using grep or regex

I need to extract the value for the request ID from the JSON message below:
{"requestId":"2ef095c8-cec6-4fb2-a4fc-3c036f9ffaa1"}
I don't have the jq command in my OS. Is there another way I can do it, possibly using a grep regex or something else? I need to extract the value 2ef095c8-cec6-4fb2-a4fc-3c036f9ffaa1.
Since PowerShell is tagged and if that is truly the entire JSON object, then:
('{"requestId":"2ef095c8-cec6-4fb2-a4fc-3c036f9ffaa1"}' | ConvertFrom-Json).requestId
First question
#!/bin/bash
CURL_OUTPUT="{\"requestId\":\"2ef095c8-cec6-4fb2-a4fc-3c036f9ffaa1\"}"
echo $CURL_OUTPUT | awk -F: '{print substr($2, 1, length($2)-1)}' | sed 's/"//g;'
Second question
#!/bin/bash
CURL_OUTPUT="{\"status\":\"INITIALIZED\",\"result\":\"NONE\"}"
status=$(echo $CURL_OUTPUT | awk -F, '{print $1}' | awk -F: '{print $2}' | sed 's/"//g')
result=$(echo $CURL_OUTPUT | awk -F, '{print $2}' | awk -F: '{print substr($2, 1, length($2)-1)}' | sed 's/"//g')
echo "CURL_OUTPUT: $CURL_OUTPUT"
echo "Status: $status"
echo "Result: $result"
CURL_OUTPUT: {"status":"INITIALIZED","result":"NONE"}
Status: INITIALIZED
Result: NONE
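Since the question explicitly asks about grep, the same value can also be pulled out with grep -o plus cut; a sketch, assuming the key appears only once in the message:

```shell
json='{"requestId":"2ef095c8-cec6-4fb2-a4fc-3c036f9ffaa1"}'

# -o prints only the matched part; cut takes the 4th "-delimited field
echo "$json" | grep -o '"requestId":"[^"]*"' | cut -d'"' -f4
# 2ef095c8-cec6-4fb2-a4fc-3c036f9ffaa1
```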

extract token from curl result by shell script

I wrote this script:
#!/bin/bash
# cm.sh
curl -i \
-H "Content-Type: application/json" \
-d '
{ "auth": {
"identity": {
"methods": ["password"],
"password": {
"user": {
"name": "admin",
"domain": { "id": "default" },
"password": "secret"
}
}
}
}
}' \
"http://localhost/identity/v3/auth/tokens" ; echo
echo $tokenizer1
echo $tokenizer2
But whatever I try (awk or sed), the output is the same:
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 540 100 312 100 228 312 228 0:00:01 --:--:-- 0:00:01 5142
My goal is to put the token in a variable for later.
Thanks guys in advance.
Instead of using the direct result of cURL, you could save the result to a file and run your grep command on it.
Something like this, maybe:
curl -o boulou.txt http://localhost/identity/v3/auth/tokens && cat boulou.txt | grep "X-Subject-Token" | awk '{printf $2}'
Edit: if you just want your desired output, add --silent to the cURL command:
curl -o boulou.txt http://localhost/identity/v3/auth/tokens --silent && cat boulou.txt | grep "X-Subject-Token" | awk '{printf $2}'
Edit 2: if you want to export it and delete the file afterwards, you could use something like this:
export OS_TOKEN=$(curl -o billy.txt http://localhost/identity/v3/auth/tokens --silent && cat billy.txt | grep "X-Subject-Token" | awk '{printf $2}') && rm billy.txt
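If creating and deleting a temporary file feels clumsy, curl can dump the response headers straight to stdout with -D - while discarding the body with -o /dev/null. A sketch against the question's endpoint (the tr removes the trailing carriage return that HTTP header lines carry):

```shell
token=$(curl -s -D - -o /dev/null "http://localhost/identity/v3/auth/tokens" \
        | grep -i '^X-Subject-Token' | awk '{print $2}' | tr -d '\r')
echo "$token"
```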
"How do I use grep/awk to extract a header in a field from curl when I pass it a JSON document when the contents is stored in a variable?" is a very tricky and unique problem indeed.
However, if you gradually mock out every part of your code to narrow it down, you'll discover that this is the much easier question you could have researched or asked instead:
How do I use grep/awk on contents from a variable?
I have a variable containing HTTP headers, and I want to extract the value of one of them. Here's an example:
variable='Foo: 1
Bar: 2
Baz: 3'
This is what I've tried to get 2 from Bar:
# Just hangs
tokenizer1=$variable `grep "Bar" | awk '{printf $2}'`
# Is empty
tokenizer2=$variable | `grep "Bar" | awk '{printf $2}'`
The answer here is to use echo to pipe the contents so that grep can read it on stdin:
tokenizer3=$(echo "$variable" | grep "Bar" | awk '{printf $2}')
This is easily applied to your example:
tokenizer3=$(echo "$token" | grep "X-Subject-Token" | awk '{printf $2}')
echo "The value is $tokenizer3"

How to parse the CSV file without the two blank fields in awk?

The CSV structure is as below.
"field1","field2","field3,with,commas","field4",
There are four fields in the CSV file.
the first: field1
the second: field2
the third: field3,with,commas
the fourth: field4
Here is my regular expression for awk.
'^"|","|",$'
debian8#hwy:~$ echo '"field1","field2","field3,with,commas","field4",' |awk -F '^"|","|",$' '{print NF}'
6
debian8#hwy:~$ echo '"field1","field2","field3,with,commas","field4",' |awk -F '^"|","|",$' '{print $1}'
debian8#hwy:~$ echo '"field1","field2","field3,with,commas","field4",' |awk -F '^"|","|",$' '{print $2}'
field1
debian8#hwy:~$ echo '"field1","field2","field3,with,commas","field4",' |awk -F '^"|","|",$' '{print $3}'
field2
debian8#hwy:~$ echo '"field1","field2","field3,with,commas","field4",' |awk -F '^"|","|",$' '{print $4}'
field3,with,commas
debian8#hwy:~$ echo '"field1","field2","field3,with,commas","field4",' |awk -F '^"|","|",$' '{print $5}'
field4
debian8#hwy:~$ echo '"field1","field2","field3,with,commas","field4",' |awk -F '^"|","|",$' '{print $6}'
Two issues remain in my regular expression '^"|","|",$'.
1. The 4 fields are parsed as 6 fields by '^"|","|",$'.
2. $1 and $6 are parsed as blank.
How to write a regular expression format to make:
echo '"field1","field2","field3,with,commas","field4",' |awk -F format '{print NF}'
4
debian8#hwy:~$ echo '"field1","field2","field3,with,commas","field4",' |awk -F format '{print $1}'
field1
debian8#hwy:~$ echo '"field1","field2","field3,with,commas","field4",' |awk -F format '{print $2}'
field2
debian8#hwy:~$ echo '"field1","field2","field3,with,commas","field4",' |awk -F format '{print $3}'
field3,with,commas
debian8#hwy:~$ echo '"field1","field2","field3,with,commas","field4",' |awk -F format '{print $4}'
field4
A workaround could be to set FS to '","' and use gsub to remove the stray characters at the start and end of each record:
echo '"field1","field2","field3,with,commas","field4",' | awk -v FS='","' '{gsub(/^"|",$/, ""); print NF, $1, $2, $3, $4}'
4 field1 field2 field3,with,commas field4
I think that the FPAT variable is probably what you want. Take a look at the documentation and examples in the gawk User's Guide.
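FPAT is gawk-specific: instead of describing what separates fields, it describes what a field looks like. Here a field is either a run of non-comma characters or a double-quoted string (the pattern is adapted from the gawk manual); the loop then strips the surrounding quotes:

```shell
echo '"field1","field2","field3,with,commas","field4",' \
  | gawk -v FPAT='([^,]+)|("[^"]+")' '{
        for (i = 1; i <= NF; i++) gsub(/^"|"$/, "", $i)   # drop surrounding quotes
        print NF
        print $3
    }'
# 4
# field3,with,commas
```

Because each alternative requires at least one character, the trailing comma no longer produces an empty sixth field, and the quoted commas stay inside $3.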

Modifying bash script to take each line in a file and execute command

I need to modify a bash script to take each line in a file and execute a command. I currently have this:
#!/bin/bash
if [ -z "$1" ] ; then
echo "Lipsa IP";
exit;
fi
i=1
ip=$1
while [ $i -le `wc -l pass_file | awk '{print $1}'` ] ; do
if [ -n "$ip" ]; then
rand=`head -$i pass_file | tail -1`
user=`echo $rand | awk '{print $1}'`
pass=`echo $rand | awk '{print $2}'`
CMD=`ps -eaf | grep -c mysql`
if [ "$CMD" -lt "50" ]; then
./mysql $ip $user $pass &
else
sleep 15
fi
i=`expr $i + 1`
fi
done
The password file (named pfile) is in this format:
username password
The intranet hosts file (named hlist) has one IP per line:
192.168.0.1
192.168.0.2
192.168.0.3
Any suggestions?
I don't understand what you want to do that you are not already doing. Do you want to use the IP number file in some fashion?
Anyway, the way you extract the username and password from the password file is unnecessarily complicated (to put it politely); you can iterate over the lines of a file in a much simpler fashion. Instead of:
while [ $i -le `wc -l pass_file | awk '{print $1}'` ] ; do
rand=`head -$i pass_file | tail -1`
user=`echo $rand | awk '{print $1}'`
pass=`echo $rand | awk '{print $2}'`
# ...
i=`expr $i + 1`
done
Just use the bash (POSIX) read command:
while read -r user pass __; do
# ...
done < pass_file
(The __ is in case there is a line in the pass_file with more than two values; the last variable name in the read command receives "the rest of the line").
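A quick self-contained demonstration of that loop with a two-line pass_file:

```shell
#!/bin/bash
# Sample data in the question's "username password" format
printf 'alice secret1\nbob secret2\n' > pass_file

while read -r user pass __; do
    echo "user=$user pass=$pass"
done < pass_file
# user=alice pass=secret1
# user=bob pass=secret2
```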
I searched the web again and found cleaner code, which I adapted to suit my needs.
#!/bin/bash
while read ip
do
if [ -n "$ip" ]
then
while read user pass
do
CMD=$(ps -eaf | grep -c mysql)
if [ "$CMD" -gt "50" ]
then
sleep 15
fi
./mysql "$ip" "$user" "$pass" &
done < pass_file
fi
done < host_file