Span a long printf command over multiple lines (bash, HTML output)

Is it possible, within a bash script, to have a long printf command span multiple lines?
My command is something like this, and I would like it to be more readable.
Braces are there because it's actually part of an awk block.
sqlite3 -noheader -column database.db "select * from tbl_a limit $limit" | \
awk 'BEGIN { FS = "|"; }
{ printf "\t\t<tr>\n\t\t\t<td class=\"d\">%s</td>\n\t\t\t<td class=\"m\">%s</td>\n\t\t</tr>\n", $1, $2 }' | vim -

In awk, you can use line-continuation characters (a backslash at the end of the line) to split the string across multiple lines:
sqlite3 -noheader -column database.db "select * from tbl_a limit $limit" |
awk 'BEGIN { FS = "|"; }
{ printf "\t\t<tr>\n\
\t\t\t<td class=\"d\">%s</td>\n\
\t\t\t<td class=\"m\">%s</td>\n\
\t\t</tr>\n", $1, $2 }' | vim -
Or, instead of using awk, you can process the output of sqlite line-by-line in bash:
sqlite3 -noheader -column database.db "select * from tbl_a limit $limit" |
while IFS='|' read -r col1 col2; do
printf '\t\t<tr>
\t\t\t<td class="d">%s</td>
\t\t\t<td class="m">%s</td>
\t\t</tr>\n' "$col1" "$col2"
done | vim -
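A third option (a sketch built on the same sample pipeline): split the one long printf into one printf call per output line, so no continuation characters are needed at all:
sqlite3 -noheader -column database.db "select * from tbl_a limit $limit" |
awk 'BEGIN { FS = "|" }
{
    printf "\t\t<tr>\n"
    printf "\t\t\t<td class=\"d\">%s</td>\n", $1
    printf "\t\t\t<td class=\"m\">%s</td>\n", $2
    printf "\t\t</tr>\n"
}' | vim -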

Related

Subtract fixed number of days from date column using awk and add it to new column

Let's assume that we have a file with the values as seen below:
% head test.csv
20220601,A,B,1
20220530,A,B,1
And we want to add two new columns, one with the date minus 7 days and one with the date minus 1 day, resulting in the following:
% head new_test.csv
20220601,A,B,20220525,20220531,1
20220530,A,B,20220523,20220529,1
The awk that was used to produce the above is:
% awk 'BEGIN{FS=OFS=","} { a="date -d \"$(date -d \""$1"\") -7 days\" +'%Y%m%d'"; a | getline st ; close(a) ;b="date -d \"$(date -d \""$1"\") -1 days\" +'%Y%m%d'"; b | getline cb ; close(b) ;print $1","$2","$3","st","cb","$4}' test.csv > new_test.csv
But after applying the above to a large file with more than 100K lines, it runs for 20 minutes. Is there any way to optimize the awk?
One GNU awk approach:
awk '
BEGIN { FS=OFS=","
secs_in_day = 60 * 60 * 24
}
{ dt = mktime( substr($1,1,4) " " substr($1,5,2) " " substr($1,7,2) " 12 0 0" )
dt1 = strftime("%Y%m%d",dt - secs_in_day )
dt7 = strftime("%Y%m%d",dt - (secs_in_day * 7) )
print $1,$2,$3,dt7,dt1,$4
}
' test.csv
This generates:
20220601,A,B,20220525,20220531,1
20220530,A,B,20220523,20220529,1
NOTES:
requires GNU awk for the mktime() and strftime() functions; see GNU awk time functions for more details
other flavors of awk may have similar functions, ymmv
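As a quick sanity check of the mktime()/strftime() round trip (a minimal sketch, assuming GNU awk is installed as gawk):
$ echo 20220601 | gawk '{ d = mktime(substr($1,1,4) " " substr($1,5,2) " " substr($1,7,2) " 12 0 0"); print strftime("%Y%m%d", d - 86400) }'
20220531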
You can try using function calls; it is faster than building and invoking the date command inline for each field.
awk -F, '
function cmd1(date){
    a = "date -d \"$(date -d \"" date "\") -1days\" +'%Y%m%d'"
    a | getline st
    close(a)    # close the command before returning, or it never runs
    return st
}
function cmd2(date){
    b = "date -d \"$(date -d \"" date "\") -7days\" +'%Y%m%d'"
    b | getline cm
    close(b)
    return cm
}
{
    $5 = cmd2($1)    # date minus 7 days
    $6 = cmd1($1)    # date minus 1 day
    print $1","$2","$3","$5","$6","$4
}' OFS=, test.csv > new_test.csv
I executed this against a file with 20,000 records and it finished in a few seconds, compared to around 5 minutes for the original awk.

Bash script with jq won't get date difference from strings, and runs quite slowly on i7 16GB RAM

Need to find the difference between TradeCloseTime and TradeOpenTime, in dd:hh:mm format, for the Exposure column in the following script.
Also, the script runs super slow (~4 minutes for 800 rows of JSON on a Core i7 / 16 GB RAM machine).
#!/bin/bash
echo "TradeNo, TradeOpenType, TradeCloseType, TradeOpenSource, TradeCloseSource, TradeOpenTime, TradeCloseTime, PNL, Exposure" > tradelist.csv
tradecount=$(jq -r '.performance.numberOfTrades|tonumber' D.json)
for ((i=0; i<$tradecount; i++))
do
tradeNo=$(jq -r '.trades['$i']|[.tradeNo][]|tonumber' D.json)
entrySide=$(jq -r '.trades['$i'].orders[0]|[.side][]' D.json)
exitSide=$(jq -r '.trades['$i'].orders[1]|[.side][]' D.json)
entrySource=$(jq -r '.trades['$i'].orders[0]|[.source][]' D.json)
exitSource=$(jq -r '.trades['$i'].orders[1]|[.source][]' D.json)
tradeEntryTime=$(jq -r '.trades['$i'].orders[0]|[.placedTime][]' D.json | tr -d 'Z' | tr -s 'T' ' ')
tradeExitTime=$(jq -r '.trades['$i'].orders[1]|[.placedTime][]' D.json | tr -d 'Z' | tr -s 'T' ' ')
profitPercentage=$(jq -r '(.trades['$i']|[.profitPercentage][0]|tonumber)*(100)' D.json)
echo $tradeNo","$entrySide","$exitSide","$entrySource","$exitSource","$tradeEntryTime","$tradeExitTime","$profitPercentage | tr -d '"' >> tradelist.csv
done
The JSON file looks like this:
{"market":{"exchange":"BINANCE_FUTURES","coinPair":"BTC_USDT"},"strategy":{"name":"","type":"BACKTEST","candleSize":15,"lookbackDays":6,"leverageLong":1.00000000,"leverageShort":1.00000000,"strategyName":"ABC","strategyVersion":35,"runNo":"002","source":"Personal"},"strategyParameters":[{"name":"DurationInput","value":"87.0"}],"openPositionStrategy":{"actionTime":"CANDLE_CLOSE","maxPerSignal":1.00000000},"closePositionStrategy":{"actionTime":"CANDLE_CLOSE","minProfit":"NaN","stopLossValue":0.07000000,"stopLossTrailing":true,"takeProfit":0.01290000,"takeProfitDeviation":"NaN"},"performance":{"startTime":"2019-01-01T00:00:00Z","endTime":"2021-11-24T00:00:00Z","startAllocation":1000.00000000,"endAllocation":3478.58904150,"absoluteProfit":2478.58904150,"profitPerc":2.47858904,"buyHoldRatio":0.62426630,"buyHoldReturn":4.57228387,"numberOfTrades":744,"profitableTrades":0.67833109,"maxDrawdown":-0.20924885,"avgMonthlyProfit":0.05242718,"profitableMonths":0.70370370,"avgWinMonth":0.09889897,"avgLoseMonth":-0.05275563,"startPrice":null,"endPrice":57623.08000000},"trades":[{"tradeNo":0,"profit":-5.48836165,"profitPercentage":-0.00549085,"accumulatedBalance":994.51163835,"compoundProfitPerc":-0.00548836,"orders":[{"side":"Long","placedTime":"2019-09-16T21:15:00Z","placedAmount":0.09700000,"filledTime":"2019-09-16T21:15:00Z","filledAmount":0.09700000,"filledPrice":10300.49000000,"commissionPaid":0.39965901,"source":"SIGNAL"},{"side":"CloseLong","placedTime":"2019-09-17T19:15:00Z","placedAmount":0.09700000,"filledTime":"2019-09-17T19:15:00Z","filledAmount":0.09700000,"filledPrice":10252.13000000,"commissionPaid":0.39778264,"source":"SIGNAL"}]},{"tradeNo":1,"profit":-3.52735800,"profitPercentage":-0.00356403,"accumulatedBalance":990.98428035,"compoundProfitPerc":-0.00901572,"orders":[{"side":"Long","placedTime":"2019-09-19T06:00:00Z","placedAmount":0.10000000,"filledTime":"2019-09-19T06:00:00Z","filledAmount":0.10000000,"filledPrice":9893.16000000,"commissionPaid":0.39572640,"source":"SIGNAL"},{"side":"CloseLong","placedTime":"2019-09-19T06:15:00Z","placedAmount":0.10000000,"filledTime":"2019-09-19T06:15:00Z","filledAmount":0.10000000,"filledPrice":9865.79000000,"commissionPaid":0.39463160,"source":"SIGNAL"}]},{"tradeNo":2,"profit":-5.04965308,"profitPercentage":-0.00511770,"accumulatedBalance":985.93462727,"compoundProfitPerc":-0.01406537,"orders":[{"side":"Long","placedTime":"2019-09-25T10:15:00Z","placedAmount":0.11700000,"filledTime":"2019-09-25T10:15:00Z","filledAmount":0.11700000,"filledPrice":8430.00000000,"commissionPaid":0.39452400,"source":"SIGNAL"},{"side":"CloseLong","placedTime":"2019-09-25T10:30:00Z","placedAmount":0.11700000,"filledTime":"2019-09-25T10:30:00Z","filledAmount":0.11700000,"filledPrice":8393.57000000,"commissionPaid":0.39281908,"source":"SIGNAL"}]}
You can do it all (extracts, conversions and formatting) with one jq call:
#!/bin/sh
echo 'TradeNo,TradeOpenType,TradeCloseType,TradeOpenSource,TradeCloseSource,TradeOpenTime,TradeCloseTime,PNL,Exposure' > tradelist.csv
query='
.trades[]
| [
.tradeNo,
.orders[0].side,
.orders[1].side,
.orders[0].source,
.orders[1].source,
(.orders[0].placedTime | fromdate | strftime("%Y-%m-%d %H:%M:%S")),
(.orders[1].placedTime | fromdate | strftime("%Y-%m-%d %H:%M:%S")),
.profitPercentage * 100,
(
(.orders[1].placedTime | fromdate) - (.orders[0].placedTime | fromdate)
| (. / 86400 | floor | tostring) + (. % 86400 | strftime(":%H:%M"))
)
]
| @csv
'
jq -r "$query" < D.json > tradelist.csv
example of JSON (cleaned of all irrelevant keys):
{
"trades": [
{
"tradeNo": 0,
"profitPercentage": -0.00549085,
"orders": [
{
"side": "Long",
"placedTime": "2018-12-16T21:34:46Z",
"source": "SIGNAL"
},
{
"side": "CloseLong",
"placedTime": "2019-09-17T19:15:00Z",
"source": "SIGNAL"
}
]
}
]
}
output:
TradeNo,TradeOpenType,TradeCloseType,TradeOpenSource,TradeCloseSource,TradeOpenTime,TradeCloseTime,PNL,Exposure
0,"Long","CloseLong","SIGNAL","SIGNAL","2018-12-16 21:34:46","2019-09-17 20:15:00",-0.549085,"274:22:40"
If you want to get rid of the double quotes that jq adds when generating a CSV (they are completely valid, but you need a real CSV parser to read them), you can replace @csv with @tsv and post-process the output with tr '\t' ',', like this:
query='
...
| @tsv
'
jq -r "$query" < D.json | tr '\t' ',' > tradelist.csv
and you'll get:
TradeNo,TradeOpenType,TradeCloseType,TradeOpenSource,TradeCloseSource,TradeOpenTime,TradeCloseTime,PNL,Exposure
0,Long,CloseLong,SIGNAL,SIGNAL,2018-12-16 21:34:46,2019-09-17 20:15:00,-0.549085,274:22:40
Note: this method of getting rid of the " in the CSV is only accurate when the input data contains no \n, \t, \r, \, comma, or " characters.
Regarding the main question (computing time differences), you're in luck, as jq provides the built-in function fromdateiso8601 for converting ISO 8601 times to "the number of seconds since the Unix epoch (1970-01-01T00:00:00Z)".
With your JSON sample,
.trades[]
| [ .orders[1].placedTime, .orders[0].placedTime]
| map(fromdateiso8601)
| .[0] - .[1]
produces the three differences:
79200
900
900
And here's a function for converting seconds to "hh:mm:ss" format:
def hhmmss:
def l: tostring | if length < 2 then "0\(.)" else . end;
(. % 60) as $ss
| ((. / 60) | floor) as $mm
| (($mm / 60) | floor) as $hh
| ($mm % 60) as $mm
| [$hh, $mm, $ss] | map(l) | join(":");
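For example, applied to the first difference above (a sketch; the function definition is pasted ahead of the filter):
$ jq -n 'def hhmmss: def l: tostring | if length < 2 then "0\(.)" else . end; (. % 60) as $ss | ((. / 60) | floor) as $mm | (($mm / 60) | floor) as $hh | ($mm % 60) as $mm | [$hh, $mm, $ss] | map(l) | join(":"); 79200 | hhmmss'
"22:00:00"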
I prefer using an intermediate structure holding the "entry" and "exit" JSON; this helps with debugging the jq filter. Formatted for readability over performance:
#!/usr/bin/env bash
echo "TradeNo,TradeOpenType,TradeCloseType,TradeOpenSource,TradeCloseSource,TradeOpenTime,TradeCloseTime,PNL,Exposure" > tradelist.csv
jq -r '
.trades[]
|{tradeNo,
profitPercentage,
entry:.orders[0],
exit:.orders[1],
entryTS:.orders[0].placedTime|fromdate,
exitTS:.orders[1].placedTime|fromdate}
|[.tradeNo,
.entry.side,
.exit.side,
.entry.source,
.exit.source,
(.entry.placedTime|strptime("%Y-%m-%dT%H:%M:%SZ")|strftime("%Y-%m-%d %H:%M:%S")),
(.exit.placedTime|strptime("%Y-%m-%dT%H:%M:%SZ")|strftime("%Y-%m-%d %H:%M:%S")),
(.profitPercentage*100),
(.exitTS-.entryTS|todate|strptime("%Y-%m-%dT%H:%M:%SZ")|strftime("%d:%H:%M"))]|@csv
' D.json | tr -d '"' >> tradelist.csv
WARNING: This formatting assumes Exposure is LESS THAN 1 MONTH. Good luck with that!
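The %d-based conversion also reports one day too many for short exposures, because the epoch starts on day 01. If your exposures can exceed a month (or you want an exact day count), the arithmetic approach from the first answer avoids both problems; as a sketch, the last array element could instead be:
((.exitTS - .entryTS) as $s
 | ($s / 86400 | floor | tostring) + ($s % 86400 | strftime(":%H:%M")))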

Get count based on value bash

I have data in this format in a file:
{"field1":249449,"field2":116895,"field3":1,"field4":"apple","field5":42,"field6":"2019-07-01T00:00:10","metadata":"","frontend":""}
{"field1":249448,"field2":116895,"field3":1,"field4":"apple","field5":42,"field6":"2019-07-01T00:00:10","metadata":"","frontend":""}
{"field1":249447,"field2":116895,"field3":1,"field4":"apple","field5":42,"field6":"2019-07-01T00:00:10","metadata":"","frontend":""}
{"field1":249443,"field2":116895,"field3":1,"field4":"apple","field5":42,"field6":"2019-07-01T00:00:10","metadata":"","frontend":""}
{"field1":249449,"field2":116895,"field3":1,"field4":"apple","field5":42,"field6":"2019-07-01T00:00:10","metadata":"","frontend":""}
Here, each entry represents a row. I want to have a count of the rows with respect to the value in field one, like:
249449 : 2
249448 : 1
249447 : 1
249443 : 1
How can I get that?
With awk:
$ awk -F'[,:]' -v OFS=' : ' '{a[$2]++} END{for(k in a) print k, a[k]}' file
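For the sample input this prints (awk's for-in traversal order is unspecified, so the line order may differ):
249449 : 2
249448 : 1
249447 : 1
249443 : 1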
You can use the jq command-line tool to interpret JSON data. uniq -c counts the number of occurrences.
% jq .field1 < "$INPUTFILE" | sort | uniq -c
1 249443
1 249447
1 249448
2 249449
(tested with jq 1.5-1-a5b5cbe on linux xubuntu 18.04 with zsh)
Here's an efficient jq-only solution:
reduce inputs.field1 as $x ({}; .[$x|tostring] += 1)
| to_entries[]
| "\(.key) : \(.value)"
Invocation: jq -nrf program.jq input.json
(Note in particular the -n option.)
Of course if an object-representation of the counts is satisfactory, then
one could simply write:
jq -n 'reduce inputs.field1 as $x ({}; .[$x|tostring] += 1)' input.json
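which, for the sample input, should print something like:
{
  "249449": 2,
  "249448": 1,
  "249447": 1,
  "249443": 1
}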
Using datamash and some shell utils: change the non-data delimiters to squeezed tabs, count field 3 (it'd be field 2, but there's a leading tab), reverse, then pretty-print as per the OP's spec:
tr -s '{":,}' '\t' < file | datamash -sg 3 count 3 | tac | xargs printf '%s : %s\n'
Output:
249449 : 2
249448 : 1
249447 : 1
249443 : 1

Read MySQL result set with multiple columns and spaces

Pretend I have a MySQL table test that looks like:
+----+---------------------+
| id | value               |
+----+---------------------+
|  1 | Hello World         |
|  2 | Foo Bar             |
|  3 | Goodbye Cruel World |
+----+---------------------+
And I execute the query SELECT id, value FROM test.
How would I assign each column to a variable in Bash using read?
Reading into an array with read -a splits value at every space, so using only the first two elements truncates value:
mysql -D "jimmy" -NBe "SELECT id, value FROM test" | while read -a row;
do
id="${row[0]}"
value="${row[1]}"
echo "$id : $value"
done;
and output looks like:
1 : Hello
2 : Foo
3 : Goodbye
but I need it to look like:
1 : Hello World
2 : Foo Bar
3 : Goodbye Cruel World
I'm aware there are args I could pass to MySQL to format the results in table format, but I need to parse each value in each row. This is just a simplified example of my problem.
Use individual fields in the read loop instead of the array:
mysql -D "jimmy" -NBe "SELECT id, value FROM test" | while read -r id value;
do
echo "$id : $value"
done
This will make sure that the first column is read into id and everything else into value; that's how read behaves when the input has more fields than the number of variables being read into. If there are more columns to be read, using a delimiter (such as #) that doesn't clash with actual data would help:
mysql -D "jimmy" -NBe "SELECT CONCAT(id, '#', value, '#', column3) FROM test" | while IFS='#' read -r id value column3;
do
echo "$id : $value : $column3"
done
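As a quick illustration of how read distributes extra fields (a standalone sketch):
$ printf 'a b c d\n' | while read -r first rest; do echo "first=$first rest=$rest"; done
first=a rest=b c d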
You can do this; it also avoids piping the command into the while read loop, which would run the loop in a subshell:
while read -r line; do
id=$(echo "$line" | awk '{print $1}')
value=$(echo "$line" | awk '{$1=""; print $0}' | sed 's/^[ \t]*//')
echo "ID: $id"
echo "VALUE: $value"
done< <(mysql -D "jimmy" -NBe "SELECT id, value FROM test")
If you want to store all the id's and values in an array for later use, you can modify it to look like this.
#!/bin/bash
declare -A -g arr
while read -r line; do
id=$(echo "$line" | awk '{print $1}')
value=$(echo "$line" | awk '{$1=""; print $0}' | sed 's/^[ \t]*//')
arr[$id]=$value
done< <(mysql -D "jimmy" -NBe "SELECT id, value FROM test")
for key in "${!arr[@]}"; do
echo "$key: ${arr[$key]}"
done
Which gives you this output:
dumbledore@ansible1a [OPS]:~/tmp/tmp > bash test.sh
1: Hello World
2: Foo Bar
3: Goodbye Cruel World

Run sql query in if statement shell script

I am trying to run a SQL query in an if statement. Here is my shell script:
#!/bin/bash
var="select col1, col2 from table_name where condition;"
count=$(ping -c 4 192.168.7.204 | awk -F',' '{ print $2 }' | awk '{ print $1 }')
if [ $count -eq 0 ]; then
mysql -h 192.168.7.204 -u username -ppassword db_name<<EOFMYSQL
$var
EOFMYSQL
fi
But it shows me an error:
./test.sh: line 18: warning: here-document at line 12 delimited by end-of-file (wanted `EOFMYSQL')
./test.sh: line 19: syntax error: unexpected end of file
The here-document sentinel EOFMYSQL has to be up against the left margin, not indented:
var="select col1, col2 from table_name where condition;"
count=$(ping -c 4 192.168.7.204 | awk -F',' '{ print $2 }' | awk '{ print $1 }')
if [ $count -eq 0 ]; then
mysql -h 192.168.7.204 -u username -ppassword db_name <<EOFMYSQL
$var
EOFMYSQL
fi
If you change the <<EOFMYSQL to <<-EOFMYSQL you can indent it, as long as you use only tabs and not spaces.
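For example (a sketch; the leading whitespace in the heredoc body and before the closing delimiter must be actual tab characters, not spaces):
if [ "$count" -eq 0 ]; then
	mysql -h 192.168.7.204 -u username -ppassword db_name <<-EOFMYSQL
	$var
	EOFMYSQL
fi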
See the manual.