How to parse a CSV file without two blank fields in awk? - csv

The CSV structure is as below.
"field1","field2","field3,with,commas","field4",
There are four fields in the CSV file.
the first : field1
the second : field2
the third : field3,with,commas
the fourth : field4
Here is my regular expression for awk.
'^"|","|",$'
debian8#hwy:~$ echo '"field1","field2","field3,with,commas","field4",' |awk -F '^"|","|",$' '{print NF}'
6
debian8#hwy:~$ echo '"field1","field2","field3,with,commas","field4",' |awk -F '^"|","|",$' '{print $1}'
debian8#hwy:~$ echo '"field1","field2","field3,with,commas","field4",' |awk -F '^"|","|",$' '{print $2}'
field1
debian8#hwy:~$ echo '"field1","field2","field3,with,commas","field4",' |awk -F '^"|","|",$' '{print $3}'
field2
debian8#hwy:~$ echo '"field1","field2","field3,with,commas","field4",' |awk -F '^"|","|",$' '{print $4}'
field3,with,commas
debian8#hwy:~$ echo '"field1","field2","field3,with,commas","field4",' |awk -F '^"|","|",$' '{print $5}'
field4
debian8#hwy:~$ echo '"field1","field2","field3,with,commas","field4",' |awk -F '^"|","|",$' '{print $6}'
Two issues remain with my regular expression '^"|","|",$':
1. The 4 fields are parsed as 6 fields by '^"|","|",$'.
2. $1 and $6 are parsed as blank.
How can I write a field-separator format to get:
echo '"field1","field2","field3,with,commas","field4",' |awk -F format '{print NF}'
4
debian8#hwy:~$ echo '"field1","field2","field3,with,commas","field4",' |awk -F format '{print $1}'
field1
debian8#hwy:~$ echo '"field1","field2","field3,with,commas","field4",' |awk -F format '{print $2}'
field2
debian8#hwy:~$ echo '"field1","field2","field3,with,commas","field4",' |awk -F format '{print $3}'
field3,with,commas
debian8#hwy:~$ echo '"field1","field2","field3,with,commas","field4",' |awk -F format '{print $4}'
field4

A workaround could be to set FS to '","' and use gsub to remove the quote characters at the start and the end of each record:
echo '"field1","field2","field3,with,commas","field4",' | awk -v FS='","' '{gsub(/^"|",$/, ""); print NF, $1, $2, $3, $4}'
4 field1 field2 field3,with,commas field4

I think that the FPAT variable is probably what you want. Take a look at the documentation and examples in the GNU Awk User's Guide.

Related

Shell script for Multiple Linux Servers Health report in html with column color change based on condition

This code fetches data from multiple servers and stores it in a CSV file.
I am trying to get the same data into an HTML table with condition-based columns: if a server's free disk space is less than 20%, its column becomes yellow; if less than 10%, it becomes red.
rm -f /tmp/health*
touch /tmp/health.csv
#Servers File Path
FILE="/tmp/health.csv"
USR=root
#Create CSV file header
echo " Date, Hostname, Connectivity, Root-FreeSpace, Uptime, OS-version, Total-ProcesCount, VmToolsversion, ServerLoad, Memory, Disk, CPU, LastReboot-Time, UserFailedLoginCount, " > $FILE
for server in `cat /root/servers.txt`
do
_CMD="ssh $USR@$server"
Date=$($_CMD date)
Hostname=$($_CMD hostname)   # used in the CSV row below
Connectivity=$($_CMD ping -c 1 google.com &> /dev/null && echo connected || echo disconnected)
ip_add=`ifconfig | grep "inet addr" | head -2 | tail -1 | awk '{print $2}' | cut -f2 -d:`
RootFreeSpace=$($_CMD df / | tail -n +2 |awk '{print $5}')
Uptime=$($_CMD uptime | sed 's/.*up \([^,]*\), .*/\1/')
OSVersion=$($_CMD cat /etc/redhat-release)
TotalProcess=$($_CMD ps axue | grep -vE "^USER|grep|ps" | wc -l)
VmtoolStatus=$($_CMD vmtoolsd -v |awk '{print $5}')
ServerLoad=$($_CMD uptime |awk -F'average:' '{ print $2}'|sed s/,//g | awk '{ print $2}')
Memory=$($_CMD free -m | awk 'NR==2{printf "%.2f%%\t\t", $3*100/$2 }')
Disk=$($_CMD df -h | awk '$NF=="/"{printf "%s\t\t", $5}')
CPU=$($_CMD top -bn1 | grep load | awk '{printf "%.2f%%\t\t\n", $(NF-2)}')
Lastreboottime=$($_CMD who -b | awk '{print $3,$4}')
FailedUserloginCount=$($_CMD cat /var/log/secure |grep "Failed" | wc -l)
#updated data in CSV
echo "$Date,$Hostname,$Connectivity,$RootFreeSpace,$Uptime,$OSVersion,$TotalProcess,$VmtoolStatus,$ServerLoad,$Memory,$Disk,$CPU,$Lastreboottime,$FailedUserloginCount" >> $FILE
done

Extract Json Key value using grep or regex

I need to extract the value for request ID in the below json message
{"requestId":"2ef095c8-cec6-4fb2-a4fc-3c036f9ffaa1"}
I don't have JQ command in my OS. Is there another way I can do it, possibly using grep regex or something else?
{"requestId":"2ef095c8-cec6-4fb2-a4fc-3c036f9ffaa1"}
I need to extract the value 2ef095c8-cec6-4fb2-a4fc-3c036f9ffaa1
Since PowerShell is tagged and if that is truly the entire JSON object, then:
('{"requestId":"2ef095c8-cec6-4fb2-a4fc-3c036f9ffaa1"}' | ConvertFrom-Json).requestId
First question
#!/bin/bash
CURL_OUTPUT="{\"requestId\":\"2ef095c8-cec6-4fb2-a4fc-3c036f9ffaa1\"}"
echo $CURL_OUTPUT | awk -F: '{print substr($2, 1, length($2)-1)}' | sed 's/"//g;'
Second question
#!/bin/bash
CURL_OUTPUT="{\"status\":\"INITIALIZED\",\"result\":\"NONE\"}"
status=$(echo $CURL_OUTPUT | awk -F, '{print $1}' | awk -F: '{print $2}' | sed 's/"//g')
result=$(echo $CURL_OUTPUT | awk -F, '{print $2}' | awk -F: '{print substr($2, 1, length($2)-1)}' | sed 's/"//g')
echo "CURL_OUTPUT: $CURL_OUTPUT"
echo "Status: $status"
echo "Result: $result"
CURL_OUTPUT: {"status":"INITIALIZED","result":"NONE"}
Status: INITIALIZED
Result: NONE
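Since the question asks about grep specifically: if GNU grep is available, -o with a Perl-compatible pattern extracts the value in one step (a sketch; -P requires GNU grep built with PCRE support):

```shell
json='{"requestId":"2ef095c8-cec6-4fb2-a4fc-3c036f9ffaa1"}'

# -o prints only the matched text; \K discards the "requestId":" prefix
echo "$json" | grep -oP '"requestId":"\K[^"]+'
```

This prints just 2ef095c8-cec6-4fb2-a4fc-3c036f9ffaa1.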

Add values named per element in loop

I have the following piece of code:
echo -n "{ \"data\":" > test.json
echo -n " [" >> test.json
sqcur=0
sqtotal=`ls /etc/squid?.conf | wc -l`
tcp_ports=$(grep "port" /etc/squid?.conf |awk '{print $2}' |sed 's|.*[0-9]*\:||g'|sort|uniq|xargs)
for squidconf in `ls /etc/squid?.conf`; do
let "sqcur+=1" > /dev/null
squidident=`echo $squidconf | sed 's/.*\///' | awk -F '.' '{print $1}'`
tcp_ports=$(grep "port" $squidconf |awk '{print $2}' |sed 's|.*[0-9]*\:||g'|sort|uniq|xargs)
if [ $sqcur -lt $sqtotal ]; then
echo -n " { \"{#SQPROC}\": \"/usr/local/squid/bin/squid\", \"{#SQPROC_IDENT}\": \"${squidident}\", \"{#SQPROC_ARGS}\": \"-D -C -F -f ${squidconf}\", \"{#SQPROC_PORT}\": \"${tcp_ports}\", \"{#SQPROC_CONF}\": \"${squidconf}\" }," >> test.json
else
echo -n " { \"{#SQPROC}\": \"/usr/local/squid/bin/squid\", \"{#SQPROC_IDENT}\": \"${squidident}\", \"{#SQPROC_ARGS}\": \"-D -C -F -f ${squidconf}\", \"{#SQPROC_PORT}\": \"${tcp_ports}\", \"{#SQPROC_CONF}\": \"${squidconf}\" }" >> test.json
fi
done
echo -n " ]" >> test.json
echo "}" >> test.json
It gives the following content in the JSON file:
{
"data": [
{
"{#SQPROC}": "/usr/local/squid/bin/squid",
"{#SQPROC_IDENT}": "squid1",
"{#SQPROC_ARGS}": "-D -C -F -f /etc/squid1.conf",
"{#SQPROC_PORT}": "1111 2222",
"{#SQPROC_CONF}": "/etc/squid1.conf"
},
But in the case of multiple port values I want to obtain the following result:
{
"data": [
{
"{#SQPROC}": "/usr/local/squid/bin/squid",
"{#SQPROC_IDENT}": "squid1",
"{#SQPROC_ARGS}": "-D -C -F -f /etc/squid1.conf",
"{#SQPROC_PORT_1111}": "1111",
"{#SQPROC_PORT_2222}": "2222",
"{#SQPROC_CONF}": "/etc/squid1.conf"
},
Could you please help with it?
You're probably looking for something like this:
for port in $tcp_ports
do
all_ports="${all_ports} \"{#SQPROC_PORT_$port}\": \"$port\","
done
At the end of this loop you'll have an all_ports variable containing something like:
"{#SQPROC_PORT_1111}": "1111", "{#SQPROC_PORT_2222}": "2222",
for example.
To integrate that easily into the code you already have, I would propose some small changes to your for-loop. For each iteration, build up a variable json_content that will contain the data for the JSON at that iteration, and at the end of the iteration just print $json_content:
echo -n "{ \"data\":" > test.json
echo -n " [" >> test.json
sqcur=0
sqtotal=`ls /etc/squid?.conf | wc -l`
for squidconf in `ls /etc/squid?.conf`; do
let "sqcur+=1" > /dev/null
squidident=`echo $squidconf | sed 's/.*\///' | awk -F '.' '{print $1}'`
tcp_ports=$(grep "port" $squidconf |awk '{print $2}' |sed 's|.*[0-9]*\:||g'|sort|uniq|xargs)
json_content="{ \"{#SQPROC}\": \"/usr/local/squid/bin/squid\", \"{#SQPROC_IDENT}\": \"${squidident}\", \"{#SQPROC_ARGS}\": \"-D -C -F -f ${squidconf}\","
for port in $tcp_ports
do
json_content="$json_content \"{#SQPROC_PORT_$port}\": \"$port\","
done
json_content="$json_content \"{#SQPROC_CONF}\": \"${squidconf}\" }"
if [ $sqcur -lt $sqtotal ]; then
json_content="${json_content},"
fi
echo -n " ${json_content}" >> test.json
done
echo -n " ]" >> test.json
echo "}" >> test.json
I can't verify the rest of the loop, since I have no squid! But as you recalculate tcp_ports inside the for-loop, I removed its initialization before the loop.
Hope this helps,
Good luck!
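As a side note, the sqcur/sqtotal counter for the trailing comma can also be avoided with a separator variable that is empty only on the first iteration. A standalone sketch with hypothetical port values:

```shell
sep=""
json_items=""
for port in 1111 2222; do
    # emit the separator BEFORE every element except the first,
    # so no trailing comma is ever produced
    json_items="${json_items}${sep}\"{#SQPROC_PORT_${port}}\": \"${port}\""
    sep=", "
done
echo "{ $json_items }"
```

This prints { "{#SQPROC_PORT_1111}": "1111", "{#SQPROC_PORT_2222}": "2222" }, and the same trick works for the outer loop over the squid config files.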

Modifying bash script to take each line in a file and execute command

I need to modify a bash script to take each line in a file and execute a command. I currently have this:
#!/bin/bash
if [ -z "$1" ] ; then
echo "Lipsa IP";
exit;
fi
i=1
ip=$1
while [ $i -le `wc -l pass_file | awk '{print $1}'` ] ; do
if [ -n "$ip" ]; then
rand=`head -$i pass_file | tail -1`
user=`echo $rand | awk '{print $1}'`
pass=`echo $rand | awk '{print $2}'`
CMD=`ps -eaf | grep -c mysql`
if [ "$CMD" -lt "50" ]; then
./mysql $ip $user $pass &
else
sleep 15
fi
i=`expr $i + 1`
fi
done
The password file, named pfile, is in this format:
username password
The intranet hosts file, named hlist, is in this format (one host per line):
192.168.0.1
192.168.0.2
192.168.0.3
Any suggestions?
I don't understand what you want to do that you are not already doing. Do you want to use the IP number file in some fashion?
Anyway, the way you extract the username and password from the password file is unnecessarily complicated (to put it politely); you can iterate over the lines in a file in a much simpler fashion. Instead of:
while [ $i -le `wc -l pass_file | awk '{print $1}'` ] ; do
rand=`head -$i pass_file | tail -1`
user=`echo $rand | awk '{print $1}'`
pass=`echo $rand | awk '{print $2}'`
# ...
i=`expr $i + 1`
done
Just use the bash (POSIX) read command:
while read -r user pass __; do
# ...
done < pass_file
(The __ is in case there is a line in the pass_file with more than two values; the last variable name in the read command receives "the rest of the line").
I searched the web again and found a cleaner code which I adapted to suit my needs.
#!/bin/bash
while read ip
do
if [ -n "$ip" ]
then
while read user pass
do
CMD=`ps -eaf | grep -c mysql`
if [ "$CMD" -gt "50" ]
then
sleep 15
fi
./mysql $ip $user $pass &
done < pass_file
fi
done < host_file

bash script - grep a json variable

I would like the simplest solution for my pretty basic bash script:
#!/bin/bash
# Weather API url format: http://api.wunderground.com/api/{api_key}/conditions/q/CA/{location}.json
# http://api.wunderground.com/api/5e8747237f05d669/conditions/q/CA/tbilisi.json
api_key=5e8747237f05d669
location=tbilisi
temp=c
api=$(wget -qO- http://api.wunderground.com/api/$api_key/conditions/q/CA/$location.json)
temp_c=$api | grep temp_c
temp_f=$api | grep temp_f
if [ $temp = "f" ]; then
echo $temp_f
else
echo $temp_c
fi
grep returns empty. This is my first bash script and I'm still getting the hang of the syntax, so please point out obvious errors.
I also don't understand why I have $() for wget.
You can use:
temp_c=$(echo $api|awk '{print $2}' FS='temp_c":'|awk '{print $1}' FS=',')
temp_f=$(echo $api|awk '{print $2}' FS='temp_f":'|awk '{print $1}' FS=',')
Instead of:
temp_c=$api | grep temp_c
temp_f=$api | grep temp_f
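For the record, the reason temp_c=$api | grep temp_c gives an empty result: it assigns $api to temp_c (in a subshell) and pipes nothing into grep. Capturing a command's output into a variable needs command substitution, $(...), which is also why the wget call is wrapped in it. A sketch with a stand-in for the API response (the real response layout may differ):

```shell
api='"temp_c":22.5,"temp_f":72.5'   # stand-in for the real wunderground response

# run the pipeline and capture its output with $(...)
temp_c=$(echo "$api" | grep -o '"temp_c":[0-9.]*')
echo "$temp_c"   # prints "temp_c":22.5
```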
I am getting the following JSON response from curl and storing it in a variable:
CURL_OUTPUT='{ "url": "protocol://xyz.net/9999" , "other_key": "other_value" }'
Question: I want to read the url key's value and extract the id from that url.
Answer: _ID=$(echo $CURL_OUTPUT |awk '{print $2}' FS='url":' |awk '{print $1}' FS=',' | awk '{print $2}' FS='"'|awk '{print $4}' FS='/')
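That four-stage awk chain works, but the same id can be pulled out in a single awk invocation by splitting on the double quotes and then on / (a sketch that assumes the url value contains no escaped quotes):

```shell
CURL_OUTPUT='{ "url": "protocol://xyz.net/9999" , "other_key": "other_value" }'

# with " as the field separator, field 4 is the url value;
# split it on "/" and print the last segment, i.e. the id
_ID=$(echo "$CURL_OUTPUT" | awk -F'"' '{n = split($4, seg, "/"); print seg[n]}')
echo "$_ID"   # prints 9999
```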