How can Miller convert a local date and time to UTC?

How can Miller 5.6.2 convert a local date and time to UTC using an expression simpler than the following?
$ printf "time1\n2019-06-13 05:54 PM\n" | mlr --csv put '
$time1=sec2gmt(
localtime2sec(
strftime(
strptime($time1, "%Y-%m-%d %H:%M %p"),
"%Y-%m-%d %H:%M:%S")));'
time1
2019-06-13T10:54:00Z
Note that my local time zone in June is EDT or -04:00:
$ date --date='2019-06-13 05:54 PM' '+%Y-%m-%dT%H:%M:%S %Z'
2019-06-13T17:54:00 EDT
$ date --date='2019-06-13 05:54 PM' '+%Y-%m-%dT%H:%M:%S%z'
2019-06-13T17:54:00-0400

I found two similar expressions, both of which use strptime_local() instead of strptime() to parse the local date and time string and convert it to seconds since the Epoch in UTC (GMT):
$ printf "time1\n2019-06-13 05:54 PM\n" | mlr --csv put '
$time1=strftime(
strptime_local($time1, "%Y-%m-%d %H:%M %p"),
"%Y-%m-%dT%H:%M:%SZ");'
time1
2019-06-13T10:54:00Z
$ printf "time1\n2019-06-13 05:54 PM\n" | mlr --csv put '
$time1=sec2gmt(strptime_local($time1, "%Y-%m-%d %H:%M %p"));'
time1
2019-06-13T10:54:00Z
Both strftime() and sec2gmt() assume an argument of seconds since the Epoch in UTC.
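A quick way to see that at the epoch itself (a sketch; mlr -n runs a DSL expression with no record input):
$ mlr -n put 'end { print sec2gmt(0); print strftime(0, "%Y-%m-%dT%H:%M:%SZ") }'
1970-01-01T00:00:00Z
1970-01-01T00:00:00Z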
Function strptime() assumes an input date and time string in UTC and ignores a time zone in the input string:
$ printf "time1\n2019-06-13 05:54 PM\n" | mlr --csv put '
$time1=strptime($time1, "%Y-%m-%d %H:%M %p");'
time1
1560405240.000000
$ printf "time1\n2019-06-13 05:54 PM EDT\n" | mlr --csv put '
$time1=strptime($time1, "%Y-%m-%d %H:%M %p %Z");'
time1
1560405240.000000
Function strptime_local() also ignores a time zone in the input date and time string, but assumes the string is in the local time zone and converts it to UTC:
$ printf "time1\n2019-06-13 05:54 PM\n" | mlr --csv put '
$time1=strptime_local($time1, "%Y-%m-%d %H:%M %p");'
time1
1560423240.000000
$ printf "time1\n2019-06-13 05:54 PM EDT\n" | mlr --csv put '
$time1=strptime_local($time1, "%Y-%m-%d %H:%M %p %Z");'
time1
1560423240.000000
$ printf "time1\n2019-06-13 05:54 PM EST\n" | mlr --csv put '
$time1=strptime_local($time1, "%Y-%m-%d %H:%M %p %Z");'
time1
1560423240.000000
$ printf "time1\n2019-06-13 05:54 PM AUT\n" | mlr --csv put '
$time1=strptime_local($time1, "%Y-%m-%d %H:%M %p %Z");'
time1
1560423240.000000
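Note that strptime_local() follows the invoking environment's local zone. If the conversion must not depend on who runs the pipeline, you can pin TZ explicitly (a sketch; Miller's local-time functions go through the C library, which honors the TZ environment variable):
$ printf "time1\n2019-06-13 05:54 PM\n" | TZ=America/New_York mlr --csv put '
$time1=sec2gmt(strptime_local($time1, "%Y-%m-%d %H:%M %p"));'
This should print the same result as the strptime_local() examples above.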

Related

Subtract fixed number of days from date column using awk and add it to new column

Let's assume that we have a file with the values as seen below:
% head test.csv
20220601,A,B,1
20220530,A,B,1
And we want to add two new columns, one with the date minus 1 day and one with minus 7 days, resulting in the following:
% head new_test.csv
20220601,A,B,20220525,20220531,1
20220530,A,B,20220523,20220529,1
The awk that was used to produce the above is:
% awk 'BEGIN{FS=OFS=","} { a="date -d \"$(date -d \""$1"\") -7 days\" +'%Y%m%d'"; a | getline st ; close(a) ;b="date -d \"$(date -d \""$1"\") -1 days\" +'%Y%m%d'"; b | getline cb ; close(b) ;print $1","$2","$3","st","cb","$4}' test.csv > new_test.csv
But after applying the above to a large file with more than 100K lines it runs for 20 minutes. Is there any way to optimize the awk?
One GNU awk approach:
awk '
BEGIN { FS=OFS=","
secs_in_day = 60 * 60 * 24
}
{ dt = mktime( substr($1,1,4) " " substr($1,5,2) " " substr($1,7,2) " 12 0 0" )   # noon, so DST changes cannot shift the date
dt1 = strftime("%Y%m%d",dt - secs_in_day )
dt7 = strftime("%Y%m%d",dt - (secs_in_day * 7) )
print $1,$2,$3,dt7,dt1,$4
}
' test.csv
This generates:
20220601,A,B,20220525,20220531,1
20220530,A,B,20220523,20220529,1
NOTES:
requires GNU awk for the mktime() and strftime() functions; see GNU awk time functions for more details
other flavors of awk may have similar functions, ymmv
You can try using function calls; it is faster than building and running the external date command inline for every line.
awk -F, '
function cmd1(date){
a="date -d \"$(date -d \""date"\") -1 days\" +'%Y%m%d'"
a | getline st
close(a)
return st
}
function cmd2(date){
b="date -d \"$(date -d \""date"\") -7 days\" +'%Y%m%d'"
b | getline cm
close(b)
return cm
}
{
$5=cmd1($1)
$6=cmd2($1)
print $1","$2","$3","$5","$6","$4
}' OFS=, test.csv > new_test.csv
I executed this against a file with 20,000 records and it finished in seconds, compared to around 5 minutes for the original awk.
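If GNU awk is not available, another option along the same lines is to keep the external date calls but cache the result per unique input date, so repeated dates cost no extra processes (a sketch; the m1/m7 cache arrays are my own naming and GNU date is still assumed):
awk 'BEGIN{FS=OFS=","}
!($1 in m1) {
cmd = "date -d \"" $1 " -1 day\" +%Y%m%d"; cmd | getline m1[$1]; close(cmd)
cmd = "date -d \"" $1 " -7 day\" +%Y%m%d"; cmd | getline m7[$1]; close(cmd)
}
{ print $1,$2,$3,m7[$1],m1[$1],$4 }' test.csv > new_test.csv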

Bash script with jq won't get date difference from strings, and runs quite slowly on i7 16GB RAM

Need to find the difference between TradeCloseTime and TradeOpenTime in dd:hh:mm format for the Exposure column in the following script.
Also the script runs super slow (~4 minutes for 800 rows of JSON on a Core i7 / 16 GB RAM machine).
#!/bin/bash
echo "TradeNo, TradeOpenType, TradeCloseType, TradeOpenSource, TradeCloseSource, TradeOpenTime, TradeCloseTime, PNL, Exposure" > tradelist.csv
tradecount=$(jq -r '.performance.numberOfTrades|tonumber' D.json)
for ((i=0; i<$tradecount; i++))
do
tradeNo=$(jq -r '.trades['$i']|[.tradeNo][]|tonumber' D.json)
entrySide=$(jq -r '.trades['$i'].orders[0]|[.side][]' D.json)
exitSide=$(jq -r '.trades['$i'].orders[1]|[.side][]' D.json)
entrySource=$(jq -r '.trades['$i'].orders[0]|[.source][]' D.json)
exitSource=$(jq -r '.trades['$i'].orders[1]|[.source][]' D.json)
tradeEntryTime=$(jq -r '.trades['$i'].orders[0]|[.placedTime][]' D.json | tr -d 'Z' | tr -s 'T' ' ')
tradeExitTime=$(jq -r '.trades['$i'].orders[1]|[.placedTime][]' D.json | tr -d 'Z' | tr -s 'T' ' ')
profitPercentage=$(jq -r '(.trades['$i']|[.profitPercentage][0]|tonumber)*(100)' D.json)
echo $tradeNo","$entrySide","$exitSide","$entrySource","$exitSource","$tradeEntryTime","$tradeExitTime","$profitPercentage | tr -d '"' >> tradelist.csv
done
The JSON file looks like this:
{"market":{"exchange":"BINANCE_FUTURES","coinPair":"BTC_USDT"},"strategy":{"name":"","type":"BACKTEST","candleSize":15,"lookbackDays":6,"leverageLong":1.00000000,"leverageShort":1.00000000,"strategyName":"ABC","strategyVersion":35,"runNo":"002","source":"Personal"},"strategyParameters":[{"name":"DurationInput","value":"87.0"}],"openPositionStrategy":{"actionTime":"CANDLE_CLOSE","maxPerSignal":1.00000000},"closePositionStrategy":{"actionTime":"CANDLE_CLOSE","minProfit":"NaN","stopLossValue":0.07000000,"stopLossTrailing":true,"takeProfit":0.01290000,"takeProfitDeviation":"NaN"},"performance":{"startTime":"2019-01-01T00:00:00Z","endTime":"2021-11-24T00:00:00Z","startAllocation":1000.00000000,"endAllocation":3478.58904150,"absoluteProfit":2478.58904150,"profitPerc":2.47858904,"buyHoldRatio":0.62426630,"buyHoldReturn":4.57228387,"numberOfTrades":744,"profitableTrades":0.67833109,"maxDrawdown":-0.20924885,"avgMonthlyProfit":0.05242718,"profitableMonths":0.70370370,"avgWinMonth":0.09889897,"avgLoseMonth":-0.05275563,"startPrice":null,"endPrice":57623.08000000},"trades":[{"tradeNo":0,"profit":-5.48836165,"profitPercentage":-0.00549085,"accumulatedBalance":994.51163835,"compoundProfitPerc":-0.00548836,"orders":[{"side":"Long","placedTime":"2019-09-16T21:15:00Z","placedAmount":0.09700000,"filledTime":"2019-09-16T21:15:00Z","filledAmount":0.09700000,"filledPrice":10300.49000000,"commissionPaid":0.39965901,"source":"SIGNAL"},{"side":"CloseLong","placedTime":"2019-09-17T19:15:00Z","placedAmount":0.09700000,"filledTime":"2019-09-17T19:15:00Z","filledAmount":0.09700000,"filledPrice":10252.13000000,"commissionPaid":0.39778264,"source":"SIGNAL"}]},{"tradeNo":1,"profit":-3.52735800,"profitPercentage":-0.00356403,"accumulatedBalance":990.98428035,"compoundProfitPerc":-0.00901572,"orders":[{"side":"Long","placedTime":"2019-09-19T06:00:00Z","placedAmount":0.10000000,"filledTime":"2019-09-19T06:00:00Z","filledAmount":0.10000000,"filledPrice":9893.16000000,"commissionPaid":0.39572640,"source":"SIGNAL"},{"side":"CloseLong","placedTime":"2019-09-19T06:15:00Z","placedAmount":0.10000000,"filledTime":"2019-09-19T06:15:00Z","filledAmount":0.10000000,"filledPrice":9865.79000000,"commissionPaid":0.39463160,"source":"SIGNAL"}]},{"tradeNo":2,"profit":-5.04965308,"profitPercentage":-0.00511770,"accumulatedBalance":985.93462727,"compoundProfitPerc":-0.01406537,"orders":[{"side":"Long","placedTime":"2019-09-25T10:15:00Z","placedAmount":0.11700000,"filledTime":"2019-09-25T10:15:00Z","filledAmount":0.11700000,"filledPrice":8430.00000000,"commissionPaid":0.39452400,"source":"SIGNAL"},{"side":"CloseLong","placedTime":"2019-09-25T10:30:00Z","placedAmount":0.11700000,"filledTime":"2019-09-25T10:30:00Z","filledAmount":0.11700000,"filledPrice":8393.57000000,"commissionPaid":0.39281908,"source":"SIGNAL"}]}
You can do it all (extracts, conversions and formatting) with one jq call:
#!/bin/sh
echo 'TradeNo,TradeOpenType,TradeCloseType,TradeOpenSource,TradeCloseSource,TradeOpenTime,TradeCloseTime,PNL,Exposure' > tradelist.csv
query='
.trades[]
| [
.tradeNo,
.orders[0].side,
.orders[1].side,
.orders[0].source,
.orders[1].source,
(.orders[0].placedTime | fromdate | strftime("%Y-%m-%d %H:%M:%S")),
(.orders[1].placedTime | fromdate | strftime("%Y-%m-%d %H:%M:%S")),
.profitPercentage * 100,
(
(.orders[1].placedTime | fromdate) - (.orders[0].placedTime | fromdate)
| (. / 86400 | floor | tostring) + (. % 86400 | strftime(":%H:%M"))
)
]
|@csv
'
jq -r "$query" < D.json > tradelist.csv
example of JSON (cleaned of all irrelevant keys):
{
"trades": [
{
"tradeNo": 0,
"profitPercentage": -0.00549085,
"orders": [
{
"side": "Long",
"placedTime": "2018-12-16T21:34:46Z",
"source": "SIGNAL"
},
{
"side": "CloseLong",
"placedTime": "2019-09-17T19:15:00Z",
"source": "SIGNAL"
}
]
}
]
}
output:
TradeNo,TradeOpenType,TradeCloseType,TradeOpenSource,TradeCloseSource,TradeOpenTime,TradeCloseTime,PNL,Exposure
0,"Long","CloseLong","SIGNAL","SIGNAL","2018-12-16 21:34:46","2019-09-17 20:15:00",-0.549085,"274:22:40"
If you want to get rid of the double quotes that jq adds when generating a CSV (they are completely valid, but you need a real CSV parser to read them) then you can replace @csv with @tsv and post-process the output with tr '\t' ',', like this:
query='
...
|@tsv
'
jq -r "$query" < D.json | tr '\t' ',' > tradelist.csv
and you'll get:
TradeNo,TradeOpenType,TradeCloseType,TradeOpenSource,TradeCloseSource,TradeOpenTime,TradeCloseTime,PNL,Exposure
0,Long,CloseLong,SIGNAL,SIGNAL,2018-12-16 21:34:46,2019-09-17 19:15:00,-0.549085,274:21:40
note: This method of getting rid of the quotes in the CSV is only accurate when the input data contains no \n, \t, \r, \, comma, or " characters.
Regarding the main question (computing time differences), you're in luck, as jq provides the built-in function fromdateiso8601 for converting ISO times to "the number of seconds since the Unix epoch (1970-01-01T00:00:00Z)".
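For example, with the first placedTime from the sample:
$ jq -n '"2019-09-16T21:15:00Z" | fromdateiso8601'
1568668500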
With your JSON sample,
.trades[]
| [ .orders[1].placedTime, .orders[0].placedTime]
| map(fromdateiso8601)
| .[0] - .[1]
produces the three differences:
79200
900
900
And here's a function for converting seconds to "hh:mm:ss" format:
def hhmmss:
def l: tostring | if length < 2 then "0\(.)" else . end;
(. % 60) as $ss
| ((. / 60) | floor) as $mm
| (($mm / 60) | floor) as $hh
| ($mm % 60) as $mm
| [$hh, $mm, $ss] | map(l) | join(":");
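As a quick check, applying it to the first difference above:
$ jq -nr 'def hhmmss: def l: tostring | if length < 2 then "0\(.)" else . end; (. % 60) as $ss | ((. / 60) | floor) as $mm | (($mm / 60) | floor) as $hh | ($mm % 60) as $mm | [$hh, $mm, $ss] | map(l) | join(":"); 79200 | hhmmss'
22:00:00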
I prefer using an intermediate structure of the "entry" and "exit" JSON. This helps with debugging the jq commands. Formatted for readability over performance:
#!/usr/bin/env bash
echo "TradeNo,TradeOpenType,TradeCloseType,TradeOpenSource,TradeCloseSource,TradeOpenTime,TradeCloseTime,PNL,Exposure" > tradelist.csv
jq -r '
.trades[]
|{tradeNo,
profitPercentage,
entry:.orders[0],
exit:.orders[1],
entryTS:.orders[0].placedTime|fromdate,
exitTS:.orders[1].placedTime|fromdate}
|[.tradeNo,
.entry.side,
.exit.side,
.entry.source,
.exit.source,
(.entry.placedTime|strptime("%Y-%m-%dT%H:%M:%SZ")|strftime("%Y-%m-%d %H:%M:%S")),
(.exit.placedTime|strptime("%Y-%m-%dT%H:%M:%SZ")|strftime("%Y-%m-%d %H:%M:%S")),
(.profitPercentage*100),
(.exitTS-.entryTS|todate|strptime("%Y-%m-%dT%H:%M:%SZ")|strftime("%d:%H:%M"))]|@csv
' D.json | tr -d '"' >> tradelist.csv
WARNING: This formatting assumes Exposure is LESS THAN 1 MONTH. Good luck with that!
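If exposures can exceed a month, plain arithmetic on the seconds avoids the calendar round-trip entirely; a sketch (unpadded) that could replace the Exposure entry above:
(.exitTS - .entryTS) as $s
| "\($s / 86400 | floor):\($s % 86400 / 3600 | floor):\($s % 3600 / 60 | floor)"
This never wraps, whatever the duration.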

JSON JQ filter by date older than bash

I have a json with this format of data in a text.json file
[
{
"name": "page/page1.html",
"properties": {
"lastModified": "2021-08-10T18:00:45+00:00",
}
},
{
"name": "page/page2.html",
"properties": {
"lastModified": "2021-08-10T19:24:23+00:00",
}
},
{
"name": "page/page3.html",
"properties": {
"lastModified": "2021-08-10T20:36:21+00:00",
}
}
]
I want to make a list of all the names of files which were last modified more than 30 minutes ago. This is my query at the moment to get a list of file names as a variable which I can use later.
file_names=`cat text.json | jq -r .[].name`
How can I use jq to filter for lastModified more than 30 minutes ago based on the timestamp in the properties so I only get the relevant file names?
I'd typically calculate the target date in native bash.
#!/usr/bin/env bash
# make sure we have bash new enough for printf %(...)T time-formatting
# this makes our script work even without GNU date
case $BASH_VERSION in
''|[123].*|4.[012].*) echo "ERROR: bash 4.3+ required" >&2; exit 1;;
esac
export TZ=UTC # force all timestamps to be in UTC (+00:00 / Z)
# faster bash-builtin equivalent of: now=$(date +%s)
printf -v now '%(%s)T' -1
# faster bash-builtin equivalent of: start_date_iso8601=$(date -d '30 minutes ago' '+%Y-%m-%dT%H:%M:%S+00:00')
start_date_epoch=$((now - 60*30))
printf -v start_date_iso8601 '%(%Y-%m-%dT%H:%M:%S+00:00)T' "$start_date_epoch"
# read our resulting names into an array (not a string)
# jq -j suppresses newlines so we can use NUL delimiters
while IFS= read -r -d '' name; do
names+=( "$name" )
done < <(
jq -j --arg start_date "$start_date_iso8601" '
.[] |
select(.properties.lastModified < $start_date) |
(.name, "\u0000")
' <text.json
)
# print the content of the array we just read the names into
printf 'Matching name: %q\n' "${names[@]}"
This seems to work
date=`date +%Y-%m-%d'T'%H:%M'Z' -d "15 min ago"`
file_names=`jq -r --arg date "$date" '.[] | select(.properties.lastModified < $date) | .name' < text.json`
Let jq do all date computations:
With bash 4 and above with mapfile:
mapfile -d '' last_modified < <(
jq --join-output '(now - 1800) as $date | .[] | select((.properties.lastModified | .[:19] + "Z" | fromdate) < $date) | .name + "\u0000"' input_file.json
)
# For debug purpose
declare -p last_modified
Without mapfile, records are delimited with ASCII RS control character rather than a null byte:
IFS=$'\36' read -ra last_modified < <(jq -j '(now - 1800) as $date | .[] | select((.properties.lastModified | .[:19] + "Z" | fromdate) < $date) | .name + "\u001e"' input_file.json)
Here is the stand-alone jq script with comments:
#!/usr/bin/env -S jq -jf
# Store current timestamp minus 30 minutes (1800 seconds) as $date
(now - 1800) as $date |
.[] |
# Keep only entries last modified before the cut-off
select(
(
# Strip the numerical timezone offset out from the timestamp
# and replace it with the Z for UTC iso8601
# to make it an iso8601 date string that jq understands
.properties.lastModified | .[:19] + "Z" | fromdate
) < $date
) |
.name + "\u0000"

How can I format the timestamp column in a CSV file?

I'm trying to format the first column of a CSV, which is a Unix timestamp in milliseconds, the way this command does:
date -d @$(echo "($line_date + 500) / 1000" | bc)
where $line_date is something like 1487693882310
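The + 500 rounds the milliseconds to the nearest second before bc's integer division:
$ echo "(1487693882310 + 500) / 1000" | bc
1487693882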
And my file has this information:
1487152859086,,,,,,localhost.localdomain,ServerUpDown,ServerUp,,,,,,, ,,,,
1487613634268,,,,,,localhost.localdomain,ServerUpDown,ServerUp,,,,,,, ,,,,
1487614351573,,,,,,spadmin,logout,,,,,,,, ,,,,
1487614500536,,,,,,System,run,Perform Maintenance,,,,,,, ,,,,
I would like it to be like this:
mié feb 15 11:00:59 CET 2017,,,,,,localhost.localdomain,ServerUpDown,ServerUp,,,,,,, ,,,,
lun feb 20 19:00:34 CET 2017,,,,,,localhost.localdomain,ServerUpDown,ServerUp,,,,,,, ,,,,
lun feb 20 19:12:32 CET 2017,,,,,,spadmin,logout,,,,,,,, ,,,,
lun feb 20 19:15:01 CET 2017,,,,,,System,run,Perform Maintenance,,,,,,, ,,,,
I've tried this but it didn't work:
awk 'BEGIN{FS=OFS=","}{$1=`date -d #$( echo "($date_now + 500) / 1000" | bc)\`}1' file.csv
Any help will be much appreciated.
Thank you very much in advance.
Kind regards.
Héctor
One way is to leave the CSV line intact and prepend it with the parsed timestamp as the first column.
Something like:
gawk -F, '{ printf "%s.%03u,",strftime("%Y-%m-%dT%H:%M:%S", $1/1000),$1%1000; print }' file.csv
Outputs:
2017-02-15T10:00:59.086,1487152859086,,,,,,localhost.localdomain,ServerUpDown,ServerUp,,,,,,, ,,,,
2017-02-20T18:00:34.268,1487613634268,,,,,,localhost.localdomain,ServerUpDown,ServerUp,,,,,,, ,,,,
2017-02-20T18:12:31.573,1487614351573,,,,,,spadmin,logout,,,,,,,, ,,,,
2017-02-20T18:15:00.536,1487614500536,,,,,,System,run,Perform Maintenance,,,,,,, ,,,,
Or you can rebuild the first field (rounding the milliseconds to whole seconds) and then print the whole record like this:
echo 1487152859086,,,,,,localhost.localdomain,ServerUpDown,ServerUp,,,,,,, ,,,, | awk 'BEGIN{OFS=FS=","}{$1=strftime("%a %b %d %H:%M:%S %Z %Y", int(($1+500)/1000))}1'
You'll get this (day and month names follow your locale and time zone):
mié feb 15 11:00:59 CET 2017,,,,,,localhost.localdomain,ServerUpDown,ServerUp,,,,,,, ,,,,

Timestamp to Epoch in a CSV file with GAWK

Looking to convert human readable timestamps to epoch/Unix time within a CSV file using GAWK in preparation for loading into a MySQL DB.
Data Example:
{null};2013-11-26;Text & Device;Location;/file/path/to/;Tuesday, November 26 12:17 PM;1;1385845647
Looking to take column 6, Tuesday, November 26 12:17 PM, and convert to epoch time for storage. All times shown will be in EST format. I realize AWK is the tool for this, but can't quite seem to structure the command. Currently have:
cat FILE_IN.CSV | awk 'BEGIN {FS=OFS=";"}{$6=strftime("%s")} {print}'
However this returns:
{null};2013-11-26;Text & Device;Location;/file/path/to/;1385848848;1;1385845647
Presumably, this means I'm getting the current epoch time (1385848848 was the current epoch at time of execution) and not asking strftime to convert the string; but I can't imagine another way of doing this.
What is the proper syntax for gawk/strftime to convert an existing timestamp to epoch?
Edit: This question seems loosely related to How do I use output from awk in another command?
$ cat file
{null};2013-11-26;Text & Device;Location;/file/path/to/;Tuesday, November 26 12:17 PM;1;1385845647
$ gawk 'BEGIN{FS=OFS=";"} {gsub(/-/," ",$2); $2=mktime($2" 0 0 0")}1' file
{null};1385445600;Text & Device;Location;/file/path/to/;Tuesday, November 26 12:17 PM;1;1385845647
Here's how to generally convert a date from any format to seconds since the epoch using your current format as an example and with comments to show the conversion process step by step:
$ cat tst.awk
function cvttime(t, a) {
split(t,a,/[,: ]+/)
# 2013 Tuesday, November 26 10:17 PM
# =>
# a[1] = "2013"
# a[2] = "Tuesday"
# a[3] = "November"
# a[4] = "26"
# a[5] = "10"
# a[6] = "17"
# a[7] = "PM"
if ( (a[7] == "PM") && (a[5] < 12) ) {
a[5] += 12
}
# => a[5] = "22"
a[3] = substr(a[3],1,3)
# => a[3] = "Nov"
match("JanFebMarAprMayJunJulAugSepOctNovDec",a[3])
a[3] = (RSTART+2)/3
# => a[3] = 11
return( mktime(a[1]" "a[3]" "a[4]" "a[5]" "a[6]" 0") )
}
BEGIN {
mdt ="Tuesday, November 26 10:17 PM"
secs = cvttime(2013" "mdt)
dt = strftime("%Y-%m-%d %H:%M:%S",secs)
print mdt ORS "\t-> " secs ORS "\t\t-> " dt
}
$ awk -f tst.awk
Tuesday, November 26 10:17 PM
-> 1385525820
-> 2013-11-26 22:17:00
I'm sure you can modify that for the current problem.
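For the file in the question that could be wired up like so (a sketch: FS is ;, the timestamp is in column 6, and the year is borrowed from the date in column 2):
$ gawk '
# paste cvttime() from tst.awk here
BEGIN { FS = OFS = ";" }
{ $6 = cvttime(substr($2,1,4) " " $6) }  # e.g. cvttime("2013 Tuesday, November 26 12:17 PM")
1' FILE_IN.CSV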
Also, if you don't have gawk you can write the cvttime() function as (borrowing @sputnik's date command string):
$ cat tst2.awk
function cvttime(t, cmd,secs) {
cmd = "date -d \"" t "\" '+%s'"
cmd | getline secs
close(cmd)
return secs
}
BEGIN {
mdt ="Tuesday, November 26 10:17 PM"
secs = cvttime(mdt)
dt = strftime("%Y-%m-%d %H:%M:%S",secs)
print mdt ORS "\t-> " secs ORS "\t\t-> " dt
}
$
$ awk -f tst2.awk
Tuesday, November 26 10:17 PM
-> 1385525820
-> 2013-11-26 22:17:00
I left strftime() in there just to show that the secs value was correct; replace it with date as you see fit.
For the non-gawk version, you just need to figure out how to get the year into the input month/date/time string in a way that date understands, if that matters to you; it shouldn't be hard.
You can convert date to epoch with this snippet :
$ date -d 'Tuesday, November 26 12:17 PM' +%s
1385464620
So finally :
awk -F";" '{system("date -d \""$6"\" '+%s'")}' file
Thanks @Keiron for the snippet.
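Note that the system() call only prints the converted values; to rewrite field 6 in place while keeping the rest of the record, capture the output with getline instead (a sketch, GNU date assumed):
awk -F";" '{cmd="date -d \""$6"\" +%s"; cmd | getline $6; close(cmd); print}' OFS=";" file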