I have the following {{ site.data.wedding.Ceremony.Start | date: "%Y%m%dT%H:%M:%S%:z" }}, which currently outputs: 20200101T16:00:00+02:00
I want to convert that time to UTC, regardless of the timezone set in the site.data.wedding.Ceremony.Start.
Contents of wedding.json:
{
  "ShortName": "Bride&Groom",
  "Bride": "Bride",
  "Groom": "Groom",
  "Ceremony": {
    "Start": "2020-01-01T16:00:00+02:00",
    "End": "2020-01-01T18:00:00+02:00"
  },
  "Reception": {
    "Start": "2020-01-01T18:30:00+02:00",
    "End": "2020-01-02T02:00:00+02:00"
  }
}
Currently, there is no Liquid filter to convert a Date to UTC.
However, unless you're building your site via GitHub Pages, you can use a plugin to define the filter.
Simply save the following code into _plugins/utc_filter.rb:
module Jekyll
  module UTCFilter
    # Convert the incoming date (here, a string from the data file) to UTC,
    # using Jekyll's date parsing helper.
    def to_utc(date)
      time(date).utc
    end
  end
end

Liquid::Template.register_filter(Jekyll::UTCFilter)
Then use the above filter in your template:
{{ site.data.wedding.Ceremony.Start | to_utc | date: "%Y%m%dT%H:%M:%S%:z" }}
You can add additional methods to the module above to define more filters.
I am currently migrating a job from Airflow 1.10.14 to 2.1.4
In Airflow 2, I am using the BeamRunPythonPipelineOperator, and one of the requirements is to store data in GCS following this pattern: gs://datalate/data_source/YYYY/MM/model.
partition_sessions_unlimited = BeamRunPythonPipelineOperator(
    task_id="partition_sessions_unlimited",
    dag=aggregation_dag,
    py_file=os.path.join(
        BEAM_SRC_DIR,
        "streaming_sessions",
        "streaming_session_aggregation_pipeline.py",
    ),
    runner="DataflowRunner",
    dataflow_config=DataflowConfiguration(
        job_name="%s_partition_sessions_unlimited" % ds_env,
        project_id=GCP_PROJECT_ID,
        location="us-central1",
    ),
    pipeline_options={
        "temp_location": "gs://dataflow-temp/{}/{}/amazon_sessions/amz_unlimited".format(
            sch_date, ds_env
        ),
        "staging_location": "gs://dataflow-staging/{}/{}/amazon_sessions/amz_unlimited".format(
            sch_date, ds_env
        ),
        "disk_size_gb": "100",
        "num_workers": "10",
        "num_max_workers": "25",
        "worker_machine_type": "n1-highcpu-64",
        "setup_file": os.path.join(
            BEAM_SRC_DIR, "streaming_sessions", "setup.py"
        ),
        "input": "gs://{}/amazon_sessions/{{ ds_nodash[:4] }}/{{ ds_nodash[4:6] }}/amz_unlimited/input/listens_*".format(
            w_datalake,
        ),
        "output": "gs://{}/amazon_sessions/{{ ds_nodash[:4] }}/{{ ds_nodash[4:6] }}/amz_unlimited/output/sessions_".format(
            w_datalake
        ),
    },
)
However, I get:
'output': 'gs://datalake/amazon_sessions/{ ds_nodash[:4] }/{ ds_nodash[4:6] }/amz_prime/output/sessions_',
instead of
'output': 'gs://datalake/amazon_sessions/2022/02/amz_prime/output/sessions_',
How can I achieve this?
First, you are using format() on a Jinja-templated field.
In a Python format string, {} is replaced with a positional argument, and a doubled brace ({{ or }}) is an escape that produces a literal { or }.
"gs://{}/.../{{ ds_nodash[:4] }}...".format(w_datalake)
Here the first {} is replaced with "datalake", while {{ ds_nodash[:4] }} collapses to single braces, which Jinja no longer recognizes as a template expression:
"gs://datalake/.../{ ds_nodash[:4] }..."
To keep a Jinja expression inside a formatted string, escape each brace you want to survive by doubling it. The original already has {{ and }}, so write four of each:
"gs://{}/.../{{{{ ds_nodash[:4] }}}}...".format(w_datalake)
With this, format() is applied first (substituting the value and collapsing the escaped braces), turning the string into
gs://datalake/.../{{ ds_nodash[:4] }}...
This string is then passed to BeamRunPythonPipelineOperator, which renders the Jinja expressions in its templated fields.
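As a quick sanity check of the escaping behaviour in plain Python (the bucket name here is just a placeholder):

# format() collapses each doubled brace, so the quadrupled braces
# survive as the double braces Jinja expects.
template = "gs://{}/amazon_sessions/{{{{ ds_nodash[:4] }}}}/input/listens_*".format("datalake")
print(template)
# gs://datalake/amazon_sessions/{{ ds_nodash[:4] }}/input/listens_*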
Secondly, instead of slicing ds_nodash twice, you can format execution_date however you like:
{{ execution_date.strftime('%Y/%m') }}
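For example, with an assumed execution date of 2022-02-01, that expression yields the same YYYY/MM segment. execution_date is a pendulum datetime in Airflow, but a plain datetime behaves the same for strftime:

from datetime import datetime

execution_date = datetime(2022, 2, 1)  # stand-in for Airflow's execution_date
print(execution_date.strftime('%Y/%m'))  # 2022/02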
I have an API endpoint set up on Lambda that my applications talk to in order to get the data they need.
The problem I'm running into right now is accessing an element that is keyed on the day before today's date.
Language: Python 3.7
Service: AWS Lambda
Provider: WeatherStack
Example: https://weatherstack.com/documentation -> API Features -> Weather Forecast
In order to access this element from the API provider, I basically have to deal with a JSON structure that looks like this:
"forecast": {
"2020-04-04": {
"date": "2020-04-04",
"date_epoch": 1585958400,
"astro": {
"sunrise": "06:42 AM",
"sunset": "07:31 PM",
"moonrise": "03:26 PM",
"moonset": "04:56 AM",
"moon_phase": "Waxing Gibbous",
"moon_illumination": 79
},
"mintemp": 46,
"maxtemp": 54,
"avgtemp": 50,
"totalsnow": 0,
"sunhour": 7.7,
"uv_index": 2
}
}
Now the problem here is the "2020-04-04" key: I can't access it simply by calling api_endpoint['forecast'][0], as that throws an error. I did check with Lens, however, and found that 'forecast' has exactly one element, which is of course the 2020-04-04 entry I'm having trouble accessing.
I don't know if there's a way to dynamically set the element to be called based on yesterday's date, since the API provider changes the forecast date key daily.
I've tried api_endpoint['forecast'][datetime.now()] and got an error.
Is there a way to set the [] after ['forecast'] dynamically via a variable, so that I can always call it as api_endpoint['forecast'][yesterdaysdate]?
Solution:
import time
from datetime import timedelta, datetime

ts = time.gmtime()
todaysdate = time.strftime("%Y-%m-%d", ts)
yesterday_date = (datetime.utcnow() - timedelta(1)).strftime('%Y-%m-%d')
data = api_response['forecast'][yesterday_date]
If I understand correctly, you want to access the data inside api_endpoint['forecast'][yesterday_date].
If so, this can be achieved like this:
from datetime import datetime, timedelta
yesterday_date = (datetime.now() - timedelta(1)).strftime('%Y-%m-%d')
# call to api
api_endpoint['forecast'][yesterday_date]
If you want two days ago, change it to timedelta(2), and so on.
Today's date can be assigned like this:
current_date = datetime.now().strftime('%Y-%m-%d')
api_endpoint['forecast'][current_date]
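If there is any chance the date key is missing from the response (for example, a timezone mismatch between your Lambda and the provider), a defensive lookup avoids a KeyError. This is only a sketch; forecast_for is an illustrative helper name, and it assumes the provider keys forecasts by date as shown above:

from datetime import datetime, timedelta

def forecast_for(api_endpoint, days_ago=1):
    # Build the YYYY-MM-DD key for "days_ago" days back (UTC) and look it up safely.
    key = (datetime.utcnow() - timedelta(days=days_ago)).strftime('%Y-%m-%d')
    return api_endpoint.get('forecast', {}).get(key)

daily = forecast_for(api_endpoint)
if daily is None:
    # The key wasn't there; log it or fall back instead of crashing the Lambda.
    ...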
If none of the above answers your question, leave a comment.
So I'm trying to build a quick page listing event details with Hugo (first time working with it, so bear with me).
I've put the two categories of events into two JSON files and added them to /data/events/aevents.json and /data/events/bevents.json
Sample JSON:
{
"devcon 1": {"evname": "Dev Con 1", "year": "2019", "date": "2020-05-12T23:29:49Z"},
"devcon2": {"evname": "Dev Con 1", "year": "2018", "date": "2018-05-12T23:29:49Z"}
}
Now when I use
{{ range .Site.Data.events.aevents }}
things work as expected. But they don't when I use
{{ range .Site.Data.events }}
which I thought would give me events from aevents.json and bevents.json.
Second part
The json events have a date property. When I try to filter to just show upcoming events, my list is blank. I've been playing with variants of this:
{{ range where .Site.Data.events.aevents "date" "ge" now }}
and have tried a bunch of different date formats. Any tips on where I might be going wrong?
First Part
The snippet:
{{ range .Site.Data.events }}
will get you two objects, one for aevents and one for bevents. You will need a nested range to process the individual events in the two files:
{{ range .Site.Data.events }}
  {{ range . }}
    ...
  {{ end }}
{{ end }}
Second Part
I don't think that can be done directly in the where clause, so you will need to filter inside the range:
{{ range .Site.Data.events }}
  {{ range . }}
    {{ if (time .date).After now }}
      ...
    {{ end }}
  {{ end }}
{{ end }}
I am using the FullCalendar JS plugin for a tool I am building, and I am creating a JSON feed.
"id": "'.$row['AppointmentID'].'",
"title": "'.$row['AppointmentName'].'",
"url": "'.$row['URL'].'",
"class": "event-important",
"start": "1364407286400"
},
The timestamp for the start of this event is 1364407286400, and for the life of me I cannot work out how this timestamp is formatted. I thought it was Unix, but I generated a timestamp for today, replaced it, and the event still isn't showing.
Can anyone point me in the right direction?
This is a timestamp in milliseconds. You can easily test this value using:
$test = (int)(1364407286400/1000);
var_dump((new DateTime())->setTimestamp($test));
the output will be:
object(DateTime)#1 (3) {
["date"]=>
string(26) "2013-03-27 11:01:26.000000"
["timezone_type"]=>
int(3)
["timezone"]=>
string(10) "US/Pacific"
}
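For a quick cross-check outside PHP, the same arithmetic in a short Python sketch (the value is the one from the question):

from datetime import datetime, timezone

ms = 1364407286400
# Millisecond timestamp: divide by 1000 to get a Unix timestamp in seconds.
print(datetime.fromtimestamp(ms // 1000, tz=timezone.utc))
# 2013-03-27 18:01:26+00:00, i.e. 11:01:26 in US/Pacific as in the dump above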
Does anyone know what json-query filter can be used to select Tigger's food in the sample JSON below? The JSON is a simplified stand-in for a massive and relatively complicated AWS blob.
Some background: I was rather pleased to discover that Ansible has a json-query filter. Given that I was trying to select an element from an AWS JSON blob this looked as if it was just what I needed. However I quickly ran into trouble because the AWS objects have tags and I needed to select items by tag.
I tried selector paths equivalent to Foods[Tags[(Key='For') & (Value='Tigger')]] and similar but didn't manage to get it to work. Using a standalone json-query library such as https://www.npmjs.com/package/json-query I can use the parent attribute but that does not appear to be in Ansible, quite apart from being a deviation from the core idea of json-query.
It might be better to sidestep the problem and use a jsonpath selector. jsonpath is similar to json-query and is a translation from xpath.
{ "Foods" :
[ { "Id": 456
, "Tags":
[ {"Key":"For", "Value":"Heffalump"}
, {"Key":"Purpose", "Value":"Food"}
]
}
, { "Id": 678
, "Tags":
[ {"Key":"For", "Value":"Tigger"}
, {"Key":"Purpose", "Value":"Food"}
]
}
, { "Id": 911
, "Tags":
[ {"Key":"For", "Value":"Roo"}
, {"Key":"Purpose", "Value":"Food"}
]
}
]
}
References
json_query in Ansible: http://docs.ansible.com/ansible/playbooks_filters.html#json-query-filter
json-query standalone (Node): https://www.npmjs.com/package/json-query
jmespath, the library Ansible uses: http://jmespath.org/
json-query standalone (Python): https://pypi.python.org/pypi/jsonquery/ (red herring)
Do you need a list of ids? If so, try:
- debug: msg="{{ lookup('file','test.json') | from_json | json_query(query) }}"
  vars:
    query: "Foods[].{id: Id, for: (Tags[?Key=='For'].Value)[0]} | [?for=='Tigger'].id"
First construct simple objects with the necessary fields, then pipe them to a filter.
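If you want to check the query outside Ansible first, the jmespath library (the one Ansible's json_query uses, linked above) can run it directly. This assumes the sample JSON has been saved as test.json, matching the lookup example:

import json
import jmespath

with open("test.json") as f:
    data = json.load(f)

query = "Foods[].{id: Id, for: (Tags[?Key=='For'].Value)[0]} | [?for=='Tigger'].id"
print(jmespath.search(query, data))  # [678]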