Format jinja template as INT in operator parameter in Airflow - jinja2

I'm trying to format a Jinja template parameter as an integer so I can pass it to an operator which expects an INT (could be a custom operator or PythonOperator), and I'm not able to.
See the sample DAG below. I'm using the built-in Jinja filter | int, but that's not working: the type remains <class 'str'>.
I'm still new to Airflow, but I don't think this is possible based on what I've read about how Jinja and Airflow work. I see two main workarounds:
Change the operator parameter to expect a string and handle the conversion underneath.
Handle the conversion in a separate PythonOperator which converts the string to an int and exports it using XCom/task context. (I think this will work, but I'm not sure.)
Please let me know of any other workarounds.
from airflow import DAG
from airflow.operators.python_operator import PythonOperator
from airflow.utils.dates import days_ago

def greet(mystr):
    print(mystr)
    print(type(mystr))

default_args = {
    'owner': 'airflow',
    'start_date': days_ago(2)
}

dag = DAG(
    'template_dag',
    default_args=default_args,
    description='template',
    schedule_interval='0 13 * * *'
)

with dag:
    # foo = "{{ var.value.my_custom_var | int }}"  # from a Variable
    foo = "{{ execution_date.int_timestamp | int }}"  # built-in macro

    # could be MyCustomOperator
    opr_greet = PythonOperator(task_id='greet',
                               python_callable=greet,
                               op_kwargs={'mystr': foo})
    opr_greet
Airflow 1.10.11

Updated answer:
As of Airflow 2.1, you can pass render_template_as_native_obj=True to the DAG and Airflow will return the native Python type (dict, int, etc.) instead of a string. See this pull request.
dag = DAG(
    dag_id="example_template_as_python_object",
    schedule_interval=None,
    start_date=days_ago(2),
    render_template_as_native_obj=True,
)
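With that flag set, the templated value reaches the callable as a real int. A minimal sketch reusing the greet function and macro from the question (the wiring is illustrative; note the import path changed in Airflow 2.x):

from airflow.operators.python import PythonOperator  # Airflow 2.x import path

with dag:
    opr_greet = PythonOperator(
        task_id='greet',
        python_callable=greet,
        # renders to a native int because of render_template_as_native_obj=True
        op_kwargs={'mystr': "{{ execution_date.int_timestamp }}"},
    )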
Old answer for prior versions:
I found a related question that provides the best workaround, IMO.
Airflow xcom pull only returns string
The trick is to use a PythonOperator, do the datatype conversion there, and then call the main operator with the converted parameter. Below is an example of converting a JSON string to a dict object; the same approach applies to converting a string to an int, etc.
import json

def my_func(ds, **kwargs):
    ti = kwargs['ti']
    body = ti.xcom_pull(task_ids='previous_task_id')
    import_body = json.loads(body)
    op = CloudSqlInstanceImportOperator(
        project_id=GCP_PROJECT_ID,
        body=import_body,
        instance=INSTANCE_NAME,
        gcp_conn_id='postgres_default',
        task_id='sql_import_task',
        validate_body=True,
    )
    op.execute()

p = PythonOperator(task_id='python_task', python_callable=my_func)

I believe Jinja is always going to return a string: the template is a string, and replacing values inside the template yields a string.
If you are sure that foo is always an integer, you can do
opr_greet = PythonOperator(task_id='greet',
                           python_callable=greet,
                           op_kwargs={'mystr': int(foo)})
Update: it looks like Airflow uses the render method from Jinja2, which returns a Unicode string.
At this point, if you can modify greet, it is easier to manage the input parameter in that function.
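For example, a minimal sketch of that last suggestion, doing the coercion inside the callable itself:

def greet(mystr):
    # On Airflow 1.x the rendered template always arrives as a string,
    # so coerce it here before using it as a number.
    myint = int(mystr)
    print(myint)
    print(type(myint))  # <class 'int'>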

Related

Reading JSON into Django error

I'm passing a context variable, x, into a template from a Django view. It is a list of strings:
x = ['Braselton', 'Buford']
Then I am using an AJAX function to pass that variable back to a Django view. The problem is when I retrieve that variable in the Python view with the following code:
new_x = request.GET['x']
print(new_x)
I see the following:
[&#39;Braselton&#39;, &#39;Buford&#39;]
I've tried json.loads(request.GET['x']) and I keep getting the following error
json.decoder.JSONDecodeError: Expecting value: line 1 column 2 (char 1)
Any help is much appreciated
You need to unescape those characters; there are lots of ways to do it.
See the Python documentation for more info.
import html.parser
import json

variable = "[&#39;Braselton&#39;, &#39;Buford&#39;]"
parser = html.parser.HTMLParser()
new_variable = parser.unescape(variable)
# the unescaped string still uses single quotes, so swap them before parsing
new_variable = json.loads(new_variable.replace("'", '"'))
print(new_variable)  # ['Braselton', 'Buford'] -- a list
The problem is that Python is HTML-escaping those elements. Note that it's not JSON.
To unescape, you have to use the html module:
import html

y = html.unescape(new_x)
print(y)  # output is ['Braselton', 'Buford']
Mark the variable as safe.
'{{ x | safe }}'
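Alternatively, you can avoid the escaping problem at the source by serializing to JSON in the view, so that {{ x | safe }} emits a string that parses cleanly on the way back. A hedged sketch (the view and template names are illustrative):

# views.py -- hand the template ready-made JSON instead of a Python list
import json
from django.shortcuts import render

def my_view(request):
    x = ['Braselton', 'Buford']
    # json.dumps produces double-quoted, valid JSON: ["Braselton", "Buford"]
    return render(request, 'my_template.html', {'x': json.dumps(x)})

The AJAX call then sends valid JSON, and json.loads(request.GET['x']) works without any unescaping.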

Formatting a BigQuery query to ML-appropriate JSON to pass through ML Predict

Using Python 2.7, I want to pass a query from BigQuery to ML Predict, which has a specific formatting requirement.
First: Is there an easier way to go directly from the BigQuery query to JSON in the correct format so it can be passed to requests.post() instead of going through pandas (from what I understand pandas is still not supported for GCP Standard)?
Second: Is there a way to construct the query to go directly to a JSON format and then modify the JSON to reflect the ML Predict JSON requirements?
Currently my code looks like this:
# I used the bigquery-to-dataframe option here to view the output.
# I would like to not use pandas in the end code.
logs = log_data.execute(output_options=bq.QueryOutput.dataframe()).result()
data = logs.to_json(orient='index')
print data
'{"0":{"end_time":"2018-04-19","device":"iPad","device_os":"iOS","device_os_version":"5.1.1","latency":0.150959,"megacycles":140.0,"cost":"1.3075e-08","device_brand":"Apple","device_family":"iPad","browser_version":"5.1","app":"567","ua_parse":"0"}}'
#The JSON needs to be in this format according to google documentation.
#data = {
# 'instances': [
# {
# 'key':'',
# 'end_time': '2018-04-19',
# 'device': 'iPad',
# 'device_os': 'iOS',
# 'device_os_version': '5.1.1',
# 'latency': 0.150959,
# 'megacycles':140.0,
# 'cost':'1.3075e-08',
# 'device_brand':'Apple',
# 'device_family':'iPad',
# 'browser_version':'5.1',
# 'app':'567',
# 'ua_parse':'40.9.8'
# }
# ]
#}
So all I would need to change is the leading key '0' to 'instances' and I should be all set to pass it into requests.post().
Is there a way to accomplish this?
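For what it's worth, one way to accomplish that reshaping is to parse the to_json output, collect the row dicts, and wrap them in an 'instances' list. A hedged sketch based on the data variable above:

import json

# Parse the pandas output; its values are the row dicts keyed by "0", "1", ...
parsed = json.loads(data)
instances = list(parsed.values())
for row in instances:
    row.setdefault('key', '')  # the format above also expects a 'key' field
payload = {'instances': instances}
print json.dumps(payload)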
Edit - Adding the BigQuery query:
%%bq query --n log_data
WITH `my.table` AS (
SELECT ARRAY<STRUCT<end_time STRING, device STRING, device_os STRING, device_os_version STRING, latency FLOAT64, megacycles FLOAT64,
cost STRING, device_brand STRING, device_family STRING, browser_version STRING, app STRING, ua_parse STRING>>[] instances
)
SELECT TO_JSON_STRING(t)
FROM `my.table` AS t
WHERE end_time >='2018-04-19'
LIMIT 1
data = log_data.execute().result()
Thanks to @MikhailBerlyant I have adjusted my query and code to look like this:
%%bq query --n log_data
SELECT [TO_JSON_STRING(t)] AS instance
FROM `yourproject.yourdataset.yourtable` AS t
WHERE end_time >='2018-04-19'
LIMIT 1
But when I run logs = log_data.execute().result(), I get a QueryResultsTable object back, which results in this error when passing it into requests.post():
TypeError: QueryResultsTable job_zfVEiPdf2W6msBlT6bBLgMusF49E is not JSON serializable
Is there a way within execute() to just return the JSON?
First: Is there an easier way to go directly from the BigQuery query to JSON in the correct format
See example below
#standardSQL
WITH yourTable AS (
SELECT ARRAY<STRUCT<id INT64, type STRING>>[(1, 'abc'), (2, 'xyz')] instances
)
SELECT TO_JSON_STRING(t)
FROM yourTable t
The result is in the format you asked for:
{"instances":[{"id":1,"type":"abc"},{"id":2,"type":"xyz"}]}
The above demonstrates the query and how it works.
In your real case, you should use something like below:
SELECT TO_JSON_STRING(t)
FROM `yourproject.yourdataset.yourtable` AS t
WHERE end_time >='2018-04-19'
LIMIT 1
hope this helps :o)
Update based on comments
SELECT [TO_JSON_STRING(t)] AS instance
FROM `yourproject.yourdataset.yourtable` t
WHERE end_time >='2018-04-19'
LIMIT 1
I wanted to add this in case someone has the same problem I had, or is at least stuck on where to go once you have the query.
I was able to write a function that formatted the query in the way Google ML Predict wants it to be passed into requests.post(). This is most likely a horrible way to accomplish this but I could not find a direct way to go from BigQuery to ML Predict in the correct format.
from google.cloud import bigquery as gcb  # assuming this is the gcb used below

def logs(query):
    client = gcb.Client()
    query_job = client.query(query)
    CSV_COLUMNS = ('end_time,device,device_os,device_os_version,latency,'
                   'megacycles,cost,device_brand,device_family,'
                   'browser_version,app,ua_parse').split(',')
    for row in query_job.result():
        var = list(row)
    # zip the column names with the values of the (single) row returned
    l1 = dict(zip(CSV_COLUMNS, var))
    l1.update({'key': ''})
    l2 = {'instances': [l1]}
    return l2
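For reference, a hedged sketch of how the returned dict might then be posted; PREDICT_URL and TOKEN are placeholders, not the real ML Engine endpoint details:

import json
import requests

payload = logs(my_query)  # my_query is the BigQuery SQL string from above
response = requests.post(
    PREDICT_URL,
    data=json.dumps(payload),
    headers={'Authorization': 'Bearer ' + TOKEN,
             'Content-Type': 'application/json'},
)
print response.json()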

How to access JSON schema info in the Spark Dataset API when using plain map/reduce functions?

Given a dataset of messages, defined by the following code:
case class Message(id: Int, value: String)

var messages = Seq(
  (0, """{"action":"update","timestamp":"2017-10-05T23:01:19Z"}"""),
  (1, """{"action":"update","timestamp":"2017-10-05T23:01:19Z"}""")
).toDF("id", "value").as[Message]

var schema = new StructType().add("action", StringType).add("timestamp", TimestampType)

var res = messages.select(
  from_json(col("value").cast("string"), schema)
)
+------------------------------------+
|jsontostructs(CAST(value AS STRING))|
+------------------------------------+
| [update,2017-10-0...|
| [update,2017-10-0...|
What is the best way to access the schema information in a plain map function? The function itself returns a Row which has lost all the type info. In order to reach the values, one has to specify the type again, e.g.
res.head().getStruct(0).getValuesMap[TimestampType](Seq("timestamp"))
=> Map[String,org.apache.spark.sql.types.TimestampType] = Map(timestamp -> 2017-10-06 01:01:19.0)
or
res.head().getStruct(0).getString(0)
=> res20: String = update
Is there a better way to access the raw JSON data without Spark SQL aggregation functions?
As a rule of thumb:
To use the collection API (map, flatMap, mapPartitions, groupByKey, etc.), use the strongly typed API: define a record type (a case class works best) which reflects the schema, and use Encoders to convert things back and forth:
case class Value(action: String, timestamp: java.sql.Timestamp)
case class ParsedMessage(id: Int, value: Option[Value])

messages.select(
  $"id", from_json(col("value").cast("string"), schema).alias("value")
).as[ParsedMessage].map(???)
With Dataset[Row], stay with the high-level SQL / DataFrame API (select, where, agg, groupBy).

How do we update an HSTORE field with Flask-Admin?

How do I update an HSTORE field with Flask-Admin?
The regular ModelView doesn't show the HSTORE field in Edit view. It shows nothing. No control at all. In list view, it shows a column with data in JSON notation. That's fine with me.
Using a custom ModelView, I can change the HSTORE field into a TextAreaField. This will show me the HSTORE field in JSON notation when in edit view. But I cannot edit/update it. In list view, it still shows me the object in JSON notation. Looks fine to me.
class MyView(ModelView):
    form_overrides = dict(attributes=fields.TextAreaField)
When I attempt to save/edit the JSON, I receive this error:
sqlalchemy.exc.InternalError
InternalError: (InternalError) Unexpected end of string
LINE 1: UPDATE mytable SET attributes='{}' WHERE mytable.id = ...
^
'UPDATE mytable SET attributes=%(attributes)s WHERE mytable.id = %(mytable_id)s' {'attributes': u'{}', 'mytable_id': 14L}
Now -- using code, I can get something to save into the HSTORE field:
class MyView(ModelView):
    form_overrides = dict(attributes=fields.TextAreaField)

    def on_model_change(self, form, model, is_created):
        model.attributes = {"a": "1"}
        return
This basically overrides the model and puts this object into it. I can then see the object in the List view and the Edit view. Still not good enough -- I want to save/edit the object that the user typed in.
I tried to parse and save the content from the form into JSON and back out. This doesn't work:
class MyView(ModelView):
    form_overrides = dict(attributes=fields.TextAreaField)

    def on_model_change(self, form, model, is_created):
        x = form.data['attributes']
        y = json.loads(x)
        model.attributes = y
        return
json.loads(x) says this:
ValueError: Expecting property name: line 1 column 1 (char 1)
and here are some sample inputs that fail:
{u's': u'ff'}
{'s':'ff'}
However, this input works:
{}
Blank also works
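The pattern in those results is that json.loads accepts only double-quoted keys and strings, which is why the Python-repr-style inputs fail while {} parses. A quick illustration:

import json

json.loads('{}')              # works: no quoting involved
json.loads('{"s": "ff"}')     # works: double quotes are valid JSON
json.loads("{u's': u'ff'}")   # ValueError: Python repr, not JSON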
This is my SQL Table:
CREATE TABLE mytable (
    id BIGSERIAL UNIQUE PRIMARY KEY,
    attributes hstore
);
This is my SQA Model:
class MyTable(Base):
    __tablename__ = u'mytable'
    id = Column(BigInteger, primary_key=True)
    attributes = Column(HSTORE)
Here is how I added the views to the admin object:
admin.add_view(ModelView(models.MyTable, db.session))
Adding the view using a custom ModelView:
admin.add_view(MyView(models.MyTable, db.session))
(But I don't add both views at the same time -- I get a Blueprint name collision error; separate issue.)
I also attempted to use a form field converter. I couldn't get it to actually hit the code.
class MyModelConverter(AdminModelConverter):
    def post_process(self, form_class, info):
        raise Exception('here I am')  # but it never hits this
        return form_class

class MyView(ModelView):
    form_overrides = dict(attributes=fields.TextAreaField)
This answer gives you a bit more than asked.
First of all, it "extends" hstore to be able to actually store JSON, not just key-value pairs.
So this structure is also OK:
{"key":{"inner_object_key":{"Another_key":"Done!","list":["no","problem"]}}}
So, first of all, your ModelView should use a custom converter:
class ExtendedModelView(ModelView):
    model_form_converter = CustomAdminConverter
The converter itself should know how to handle the hstore dialect:
class CustomAdminConverter(AdminModelConverter):
    @converts('sqlalchemy.dialects.postgresql.hstore.HSTORE')
    def conv_HSTORE(self, field_args, **extra):
        return DictToHstoreField(**field_args)
This one, as you can see, uses a custom WTForms field which converts data in both directions:
class DictToHstoreField(TextAreaField):
    def process_data(self, value):
        if value is None:
            value = {}
        else:
            for key, obj in value.iteritems():
                if (obj.startswith("{") and obj.endswith("}")) or (obj.startswith("[") and obj.endswith("]")):
                    try:
                        value[key] = json.loads(obj)
                    except ValueError:
                        pass  # leave non-JSON strings as they are
        self.data = json.dumps(value)

    def process_formdata(self, valuelist):
        if valuelist:
            self.data = json.loads(valuelist[0])
            for key, obj in self.data.iteritems():
                if isinstance(obj, dict) or isinstance(obj, list):
                    self.data[key] = json.dumps(obj)
                if isinstance(obj, int):
                    self.data[key] = str(obj)
The final step is to actually use this data in the application.
I did not make this generic and nice for SQLAlchemy, since it was used with flask-restful, so I only have an adaptation for flask-restful in one direction, but I think it's easy to get the idea from here and do the rest.
If your case is simple key-value storage, nothing additional needs to be done; just use it as is.
But if you want to unwrap the JSON somewhere in code, it's as simple as this; wherever you use it, just wrap it in a function:
if (value.startswith("{") and value.endswith("}")) or (value.startswith("[") and value.endswith("]")):
    value = json.loads(value)
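For example, a minimal helper along those lines (the function name is mine):

import json

def unwrap_json(value):
    # Strings that look like JSON objects or lists were stored wrapped;
    # decode them, and pass everything else through unchanged.
    if (value.startswith("{") and value.endswith("}")) or (value.startswith("[") and value.endswith("]")):
        return json.loads(value)
    return value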
Creating a dynamic field for a nicer, non-JSON way of editing the data is also possible by extending FormField and adding some JavaScript for adding/removing fields, but that is a whole different story; in my case I needed actual JSON storage, with blackjack and lists :)
I was working with the Postgres JSON datatype. The above solution worked great with minor modifications.
I tried:
'sqlalchemy.dialects.postgresql.json.JSON',
'sqlalchemy.dialects.postgresql.JSON',
'dialects.postgresql.json.JSON',
'dialects.postgresql.JSON'
The above versions did not work.
Finally, the following change worked:
@converts('JSON')
And I changed the DictToHstoreField class to the following:
class DictToJSONField(fields.TextAreaField):
    def process_data(self, value):
        if value is None:
            value = {}
        self.data = json.dumps(value)

    def process_formdata(self, valuelist):
        if valuelist:
            self.data = json.loads(valuelist[0])
        else:
            self.data = '{}'
Although this might not be the answer to your question: by default, SQLAlchemy's ORM doesn't detect in-place changes to HSTORE field values. Fortunately, there's a solution: SQLAlchemy's MutableDict type:
from sqlalchemy.ext.mutable import MutableDict
from sqlalchemy.dialects.postgresql import HSTORE

class MyClass(Base):
    __tablename__ = 'mytable'
    id = Column(Integer, primary_key=True)
    attributes = Column(MutableDict.as_mutable(HSTORE))
Now when you change something in-place:
my_object.attributes['some_key'] = 'some value'
The hstore field will be updated after session.commit().

Coercion in SQLAlchemy from Column annotations

Good day everyone,
I have a file of strings corresponding to the fields of my SQLAlchemy object. Some fields are floats, some are ints, and some are strings.
I'd like to be able to coerce my string into the proper type by interrogating the column definition. Is this possible?
For instance:
class MyClass(Base):
    ...
    my_field = Column(Float)
It feels like one should be able to say something like MyClass.my_field.column.type and either ask the type to coerce the string directly or write some conditions and int(x), float(x) as needed.
I wondered whether this would happen automatically if all the values were strings, but I received Oracle errors because the type was incorrect.
Currently I naively coerce -- if it's float()able, that's my value, else it's a string, and I trust that integral floats will become integers upon inserting because they are represented exactly. But the runtime value is wrong (e.g. 1.0 vs 1) and it just seems sloppy.
Thanks for your input!
SQLAlchemy 0.7.4
You can iterate over columns of the mapped Table:
for col in MyClass.__table__.columns:
    print col, repr(col.type)
... so you can check the type of each field by its name like this:
def get_col_type(cls_, fld_):
    for col in cls_.__table__.columns:
        if col.name == fld_:
            return col.type  # this contains the instance of the SA type

assert Float == type(get_col_type(MyClass, 'my_field'))
If your file is large, though, I would cache the results in order to save the for-loop on every row imported from the file.
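A hedged sketch of that caching idea; the coercion table is an assumption, so adjust it to the SA types you actually use:

from sqlalchemy import Float, Integer, String

# Build the name -> SA type class map once, then reuse it for every row.
col_types = dict((col.name, type(col.type)) for col in MyClass.__table__.columns)

# Assumed coercion table mapping SA type classes to Python constructors.
coercers = {Float: float, Integer: int, String: str}

def coerce_row(row_dict):
    # row_dict maps column names to the raw strings read from the file.
    return dict((name, coercers.get(col_types[name], str)(value))
                for name, value in row_dict.items())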
Type coercion for SQLAlchemy prior to committing to the database.
How can I verify Column data types in the SQLAlchemy ORM?
from sqlalchemy import (
    Column,
    Integer,
    String,
    DateTime,
)
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import event
import datetime

Base = declarative_base()

type_coercion = {
    Integer: int,
    String: str,
    DateTime: datetime.datetime,
}

# this event is called whenever an attribute
# on a class is instrumented
@event.listens_for(Base, 'attribute_instrument')
def configure_listener(class_, key, inst):
    if not hasattr(inst.property, 'columns'):
        return

    # this event is called whenever a "set"
    # occurs on that instrumented attribute
    @event.listens_for(inst, "set", retval=True)
    def set_(instance, value, oldvalue, initiator):
        desired_type = type_coercion.get(inst.property.columns[0].type.__class__)
        coerced_value = desired_type(value)
        return coerced_value

class MyObject(Base):
    __tablename__ = 'mytable'
    id = Column(Integer, primary_key=True)
    svalue = Column(String)
    ivalue = Column(Integer)
    dvalue = Column(DateTime)

x = MyObject(svalue=50)
assert isinstance(x.svalue, str)
I'm not sure if I'm reading this question correctly, but I would do something like:
class MyClass(Base):
    some_float = Column(Float)
    some_string = Column(String)
    some_int = Column(Integer)
    ...

    def __init__(self, some_float, some_string, some_int, ...):
        if isinstance(some_float, float):
            self.some_float = some_float
        else:
            try:
                self.some_float = float(some_float)
            except ValueError:
                pass  # do something intelligent
        if isinstance(some_string, str):
            ...
And I would repeat the checking process for each column. I wouldn't trust anything to do it "automatically". I also expect your file of strings to be well structured; otherwise, something more complicated would have to be done.
Assuming your file is a CSV (I'm not good with file reads in Python, so read this as pseudocode):
import csv

with open('thisfile.csv') as f:
    for thisline in csv.reader(f):  # each line is an ordered list of strings
        thisthing = MyClass(some_float=thisline[0], some_string=thisline[1], ...)
        DBSession.add(thisthing)