I'm working with a database schema that has a relationship that doesn't always hold, and I'm not sure how to describe it with SQLAlchemy's ORM.
All the primary keys in this database are stored as a blob type and are 16-byte binary strings.
I have a table called attribute, and this table has a column called data_type. There are a number of built-in data_types that are not defined explicitly in the database. So, maybe a data_type of 00 means it is a string, and 01 means it is a float, etc. (those are hex values). The highest value for the built-in data types is 12 (18 in decimal).
However, for some rows in attribute, the value of the attribute stored in the row must exist in a pre-defined list of values. In this case, data_type refers to lookup.lookup_id. The actual data type for the attribute can then be retrieved from lookup.data_type.
I'd like to be able to call just Attribute.data_type and get back 'string' or 'number'. Obviously I'd need to define the {0x00: 'string', 0x01: 'number'} mapping somewhere, but how can I tell SQLAlchemy that I want lookup.data_type if the value of attribute.data_type is greater than 18?
There are a couple of ways to do this.
The simplest, by far, is to just put your predefined data types into the table lookup. You say that you "need to define the... mapping somewhere", and a table is as good a place as any.
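For instance, a minimal sketch of seeding lookup with the built-in types (the Lookup model, its column names, and the 16-byte key encoding are assumptions based on your description):

builtin_types = {0x00: 'string', 0x01: 'number'}  # ... up through 0x12

for type_id, name in builtin_types.items():
    session.add(Lookup(lookup_id=type_id.to_bytes(16, 'big'),  # assumed key encoding
                       data_type=name))
session.commit()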
Assuming that you can't do that, the next simplest thing is to create a Python property on class Attribute. The only problem is that you can't query against it. You'll want to reassign the column data_type so that it maps to _data_type:
data_type_dict = {0x00: 'string',
                  0x01: 'number',
                  ...}

class Attribute(Base):
    __tablename__ = 'attribute'
    _data_type = Column('data_type')
    ...

    @property
    def data_type(self):
        dt = data_type_dict.get(self._data_type, None)
        if dt is None:
            s = Session.object_session(self)
            lookup = s.query(Lookup).filter_by(id=self._data_type).one()
            dt = lookup.data_type
        return dt
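Usage is then transparent (a sketch, assuming the instance came from an active session):

attr = session.query(Attribute).first()
print(attr.data_type)  # 'string'/'number' for built-ins, otherwise lookup.data_type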
If you want this to be queryable, that is, if you want to be able to do session.query(Attribute).filter_by(data_type='string'), you need to map data_type to something the database can handle, i.e., a SQL expression. You could do this with a CASE expression:
from sqlalchemy.sql.expression import select, case

class Attribute(Base):
    ...
    # attribute and lookup are the underlying Table objects
    data_type = column_property(
        select([case([(attribute.c.data_type == 0x00, 'string'),
                      (attribute.c.data_type == 0x01, 'number'),
                      ...],
                     else_=lookup.c.data_type)])
        .where(attribute.c.data_type == lookup.c.lookup_id))
I'm not 100% certain that last part will work; you may need to explicitly join the tables attribute and lookup to specify that it's an outer join, though I think SQLAlchemy does that by default. The downside of this approach is that you are always going to try to join with the table lookup, though to query using SQL, you sort of have to do that.
The final option is to use polymorphism and map the two cases (data_type greater/less than 18) to two different subclasses:
class Attribute(Base):
    __tablename__ = 'attribute'
    _data_type = Column('data_type')
    _lookup = column_property(attribute.c.data_type > 18)
    __mapper_args__ = {'polymorphic_on': _lookup}

class FixedAttribute(Attribute):
    __mapper_args__ = {'polymorphic_identity': 0}
    data_type = column_property(
        case([(attribute.c.data_type == 0x00, 'string'),
              (attribute.c.data_type == 0x01, 'number'),
              ...]))

class LookupAttribute(Attribute):
    __mapper_args__ = {'polymorphic_identity': 1}
    data_type = column_property(
        select([lookup.c.data_type],
               whereclause=attribute.c.data_type == lookup.c.lookup_id))
You might have to replace the 'polymorphic_on': _lookup with an explicit attribute.c.data_type > 18, depending on when that ColumnProperty gets bound.
As you can see, these are all really messy. Do #1 if it's at all possible.
Assume I have this simple function in Python:
def f(gender, name):
    if gender == 'male':
        return ranking_male(name)
    else:
        return ranking_female(name)
where gender belongs to ['male', 'female'] whereas name belongs to ['Adam', 'John', 'Max', 'Frodo'] (if gender is male) or ['Mary', 'Sarah', 'Arwen'] (otherwise).
I wish to apply interact from ipywidgets to this function f. Normally one would do
from ipywidgets import interact
interact(f, gender = ('male', 'female'), name = ('Adam', 'John', 'Max', 'Frodo'))
The problem is that the admissible values for name now depend on the value chosen for gender.
I tried to find this in the docs but couldn't. The only thing I think may be important is
This is used to setup dynamic notifications of trait changes.
Parameters
----------
handler : callable
A callable that is called when a trait changes. Its
signature should be ``handler(change)``, where ``change`` is a
dictionary. The change dictionary at least holds a 'type' key.
* ``type``: the type of notification.
Other keys may be passed depending on the value of 'type'. In the
case where type is 'change', we also have the following keys:
* ``owner`` : the HasTraits instance
* ``old`` : the old value of the modified trait attribute
* ``new`` : the new value of the modified trait attribute
* ``name`` : the name of the modified trait attribute.
names : list, str, All
If names is All, the handler will apply to all traits. If a list
of str, handler will apply to all names in the list. If a
str, the handler will apply just to that name.
type : str, All (default: 'change')
The type of notification to filter by. If equal to All, then all
notifications are passed to the observe handler.
But I have no idea how to do this, nor how to interpret what the docstring is saying. Any help is much appreciated!
For example, suppose you have car brands and models, where the available models depend on the brand.
from ipywidgets import Dropdown

style = {'description_width': 'initial'}  # assumed; the original snippet left style undefined

d = {'Volkswagen': ['Tiguan', 'Passat', 'Polo', 'Touareg', 'Jetta'],
     'Chevrolet': ['TAHOE', 'CAMARO']}

brand_widget = Dropdown(options=list(d.keys()),
                        value='Volkswagen',
                        description='Brand:',
                        style=style)

model_widget = Dropdown(options=d['Volkswagen'],
                        value=None,
                        description='Model:',
                        style=style)

def on_update_brand_widget(*args):
    # refresh the model options whenever the brand changes
    model_widget.options = d[brand_widget.value]

brand_widget.observe(on_update_brand_widget, 'value')
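You can then pass the widgets straight to interact, which accepts widget instances as abbreviations (show_car here is just an illustrative callback):

from ipywidgets import interact

def show_car(brand, model):
    print(brand, model)

interact(show_car, brand=brand_widget, model=model_widget)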
I've used nested widgets to solve this problem. It'll work, but it's ugly, partially because it doesn't seem to be a common use case in ipywidgets (see discussion).
Given your function f(gender, name) you can define an intermediate wrapper:
import ipywidgets as widgets
from ipywidgets import interact

def f_intermediate_wrapper(gender):
    if gender == "male":
        possible_names = ['Adam', 'John', 'Max', 'Frodo']
    else:
        possible_names = ['Mary', 'Sarah', 'Arwen']
    try:
        # close the name widget left over from the previous evaluation
        f_intermediate_wrapper.name_widget.widget.close()
    except AttributeError:
        pass
    f_intermediate_wrapper.name_widget = interact(f,
                                                  gender=widgets.fixed(gender),
                                                  name=possible_names)
The first part sets the possible name options given the gender, as desired.
The second part closes the name_widget from your previous evaluation, if it exists. Otherwise, every time you change the gender, it'll leave up the old list of names, which are the wrong gender (see example).
The third part creates a name widget of the possible names for that gender, and stores it somewhere sufficiently static. (Otherwise, when you change the gender the old name widget will be out of scope, and you won't be able to close it.)
Now you can create your gender and name widget:
gender_and_name_widget = interact(f_intermediate_wrapper,
                                  gender=["male", "female"])
And you can access the result of your f(gender, name) using
gender_and_name_widget.name_widget.widget.result
In SQLAlchemy I use:
price = Column(Numeric(18, 5))
in various places throughout my app. When I get a number formatted the Swedish way, with a comma instead of a dot (0,34 instead of 0.34), and try to set the price column, the number gets stored as 0.00000.
To solve this I have this code:
obj.price = price.replace(',','.')
But having this all over the code makes it pretty ugly and the risk is that I forget one place. Would it be possible to have some kind of generic converter function which gets called before a value is converted from a string to a Numeric? And that I have that in one place only.
Check the validates decorator of SQLAlchemy: http://docs.sqlalchemy.org/en/rel_1_0/orm/mapped_attributes.html
A quick way to add a “validation” routine to an attribute is to use
the validates() decorator. An attribute validator can raise an
exception, halting the process of mutating the attribute’s value, or
can change the given value into something different.
In your case the code could look similar to:
from sqlalchemy.orm import validates

class Obj(Base):
    __tablename__ = 'obj'
    id = Column(Integer, primary_key=True)
    price = Column(Numeric(18, 5))

    @validates('price')
    def validate_price(self, key, price):
        if ',' in price:
            return float(price.replace(',', '.'))
        else:
            return float(price)
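With that in place the conversion happens in one spot, at assignment time (a quick sketch):

obj = Obj()
obj.price = '0,34'  # the validator converts this to 0.34 before it reaches the column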
I've noticed that in the newest version of SQLAlchemy (v1.0.4) I'm getting warnings when using table.c.keys() to select columns.
from sqlalchemy import MetaData, select
from sqlalchemy import (Column, Integer, Table, String, PrimaryKeyConstraint)

metadata = MetaData()

table = Table('test', metadata,
              Column('id', Integer, nullable=False),
              Column('name', String(20)),
              PrimaryKeyConstraint('id'))

stmt = select(table.c.keys()).select_from(table).where(table.c.id == 1)
In previous versions this used to work fine, but now it emits the following warnings:
sqlalchemy/sql/elements.py:3851: SAWarning: Textual column expression 'id' should be explicitly declared with text('id'), or use column('id') for more specificity.
sqlalchemy/sql/elements.py:3851: SAWarning: Textual column expression 'name' should be explicitly declared with text('name'), or use column('name') for more specificity.
Is there a function for retrieving all these table columns rather than using a list comprehension like the following? [text(x) for x in table.c.keys()]
No, but you can always roll your own.
from sqlalchemy import text

def all_columns(model_or_table, wrap=text):
    table = getattr(model_or_table, '__table__', model_or_table)
    return [wrap(col) for col in table.c.keys()]
then you would use it like
stmt = select(all_columns(table)).where(table.c.id == 1)
or
stmt = select(all_columns(Model)).where(Model.id == 1)
Note that in most cases you don't need select_from, i.e. when you don't actually join to some other table.
How do I update an HSTORE field with Flask-Admin?
The regular ModelView doesn't show the HSTORE field in Edit view. It shows nothing. No control at all. In list view, it shows a column with data in JSON notation. That's fine with me.
Using a custom ModelView, I can change the HSTORE field into a TextAreaField. This will show me the HSTORE field in JSON notation when in edit view. But I cannot edit/update it. In list view, it still shows me the object in JSON notation. Looks fine to me.
class MyView(ModelView):
    form_overrides = dict(attributes=fields.TextAreaField)
When I attempt to save/edit the JSON, I receive this error:
sqlalchemy.exc.InternalError
InternalError: (InternalError) Unexpected end of string
LINE 1: UPDATE mytable SET attributes='{}' WHERE mytable.id = ...
^
'UPDATE mytable SET attributes=%(attributes)s WHERE mytable.id = %(mytable_id)s' {'attributes': u'{}', 'mytable_id': 14L}
Now -- using code, I can get something to save into the HSTORE field:
class MyView(ModelView):
    form_overrides = dict(attributes=fields.TextAreaField)

    def on_model_change(self, form, model, is_created):
        model.attributes = {"a": "1"}
        return
This basically overrides the model and puts this object into it. I can then see the object in the List view and the Edit view. Still not good enough -- I want to save/edit the object that the user typed in.
I tried to parse and save the content from the form into JSON and back out. This doesn't work:
class MyView(ModelView):
    form_overrides = dict(attributes=fields.TextAreaField)

    def on_model_change(self, form, model, is_created):
        x = form.data['attributes']
        y = json.loads(x)
        model.attributes = y
        return
json.loads(x) says this:
ValueError: Expecting property name: line 1 column 1 (char 1)
and here are some sample inputs that fail:
{u's': u'ff'}
{'s':'ff'}
However, this input works:
{}
A blank value also works.
This is my SQL Table:
CREATE TABLE mytable (
    id BIGSERIAL UNIQUE PRIMARY KEY,
    attributes hstore
);
This is my SQLAlchemy model:
class MyTable(Base):
    __tablename__ = u'mytable'
    id = Column(BigInteger, primary_key=True)
    attributes = Column(HSTORE)
Here is how I added the views to the admin object:
admin.add_view(ModelView(models.MyTable, db.session))
Or, adding the view using the custom ModelView:
admin.add_view(MyView(models.MyTable, db.session))
(But I don't add both views at the same time -- that raises a Blueprint name collision error; separate issue.)
I also attempted to use a form field converter. I couldn't get it to actually hit the code.
class MyModelConverter(AdminModelConverter):
    def post_process(self, form_class, info):
        raise Exception('here I am')  # but it never hits this
        return form_class

class MyView(ModelView):
    form_overrides = dict(attributes=fields.TextAreaField)
This answer gives you a bit more than you asked for. First of all, it "extends" hstore to be able to store actual JSON, not just key-value pairs.
So this structure is also OK:
{"key":{"inner_object_key":{"Another_key":"Done!","list":["no","problem"]}}}
So, first of all, your ModelView should use a custom converter:
class ExtendedModelView(ModelView):
    model_form_converter = CustomAdminConverter
The converter itself should know how to handle the hstore dialect type:
class CustomAdminConverter(AdminModelConverter):
    @converts('sqlalchemy.dialects.postgresql.hstore.HSTORE')
    def conv_HSTORE(self, field_args, **extra):
        return DictToHstoreField(**field_args)
As you can see, this uses a custom WTForms field which converts the data in both directions:
class DictToHstoreField(TextAreaField):
    def process_data(self, value):
        if value is None:
            value = {}
        else:
            for key, obj in value.items():
                # values that look like JSON objects/lists get decoded for display
                if (obj.startswith("{") and obj.endswith("}")) or (obj.startswith("[") and obj.endswith("]")):
                    try:
                        value[key] = json.loads(obj)
                    except ValueError:
                        pass
        self.data = json.dumps(value)

    def process_formdata(self, valuelist):
        if valuelist:
            self.data = json.loads(valuelist[0])
            for key, obj in self.data.items():
                # hstore can only store strings, so re-encode nested structures
                if isinstance(obj, dict) or isinstance(obj, list):
                    self.data[key] = json.dumps(obj)
                if isinstance(obj, int):
                    self.data[key] = str(obj)
The final step is to actually use this data in the application. I did not do this in a generic, nice way for SQLAlchemy, since I used it with flask-restful, so I only have a flask-restful adaptation in one direction, but I think it's easy to get the idea from here and do the rest.
If your case is simple key-value storage, nothing additional needs to be done; just use it as is.
But if you want to unwrap the JSON somewhere in your code, it's as simple as this wherever you use the value; just wrap it in a function:
if (value.startswith("{") and value.endswith("}")) or (value.startswith("[") and value.endswith("]")):
    value = json.loads(value)
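A minimal sketch of that wrapped up as suggested (the name unwrap_json is just illustrative):

import json

def unwrap_json(value):
    if (value.startswith("{") and value.endswith("}")) or (value.startswith("[") and value.endswith("]")):
        return json.loads(value)
    return value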
Creating dynamic fields for a genuinely nice, non-JSON way of editing the data is also possible by extending FormField and adding some JavaScript for adding/removing fields, but that is a whole different story; in my case I needed actual JSON storage, with blackjack and lists :)
I was working with the Postgres JSON datatype, and the above solution worked great with minor modifications. I tried:
'sqlalchemy.dialects.postgresql.json.JSON',
'sqlalchemy.dialects.postgresql.JSON',
'dialects.postgresql.json.JSON',
'dialects.postgresql.JSON'
The above versions did not work.
Finally, the following change worked:
@converts('JSON')
And I changed the DictToHstoreField class to the following:
class DictToJSONField(fields.TextAreaField):
    def process_data(self, value):
        if value is None:
            value = {}
        self.data = json.dumps(value)

    def process_formdata(self, valuelist):
        if valuelist:
            self.data = json.loads(valuelist[0])
        else:
            self.data = '{}'
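Putting it together, the converter method looks like this (a sketch, assuming the same AdminModelConverter setup as in the HSTORE example above):

class CustomAdminConverter(AdminModelConverter):
    @converts('JSON')
    def conv_JSON(self, field_args, **extra):
        return DictToJSONField(**field_args)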
Although this might not be the answer to your question: by default, SQLAlchemy's ORM doesn't detect in-place changes to HSTORE field values. Fortunately there's a solution, SQLAlchemy's MutableDict type:
from sqlalchemy.dialects.postgresql import HSTORE
from sqlalchemy.ext.mutable import MutableDict

class MyClass(Base):
    __tablename__ = 'mytable'
    id = Column(Integer, primary_key=True)
    attributes = Column(MutableDict.as_mutable(HSTORE))
Now when you change something in-place:
my_object.attributes['some_key'] = 'some value'
The hstore field will be updated after session.commit().
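Without MutableDict you would have to mark the change yourself; a sketch using flag_modified from sqlalchemy.orm.attributes:

from sqlalchemy.orm.attributes import flag_modified

my_object.attributes['some_key'] = 'some value'
flag_modified(my_object, 'attributes')  # tell the session the dict changed by hand
session.commit()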
Good day everyone,
I have a file of strings corresponding to the fields of my SQLAlchemy object. Some fields are floats, some are ints, and some are strings.
I'd like to be able to coerce my string into the proper type by interrogating the column definition. Is this possible?
For instance:
class MyClass(Base):
...
my_field = Column(Float)
It feels like one should be able to say something like MyClass.my_field.column.type and either ask the type to coerce the string directly or write some conditions and int(x), float(x) as needed.
I wondered whether this would happen automatically if all the values were strings, but I received Oracle errors because the type was incorrect.
Currently I naively coerce -- if it's float()able, that's my value, else it's a string, and I trust that integral floats will become integers upon inserting because they are represented exactly. But the runtime value is wrong (e.g. 1.0 vs 1) and it just seems sloppy.
Thanks for your input!
SQLAlchemy 0.7.4
You can iterate over columns of the mapped Table:
for col in MyClass.__table__.columns:
    print(col, repr(col.type))
... so you can check the type of each field by its name like this:
def get_col_type(cls_, fld_):
    for col in cls_.__table__.columns:
        if col.name == fld_:
            return col.type  # this is the instance of the SA type

assert Float == type(get_col_type(MyClass, 'my_field'))
I would cache the results, though, if your file is large, to save the for-loop on every row imported from the file.
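For example, a minimal caching sketch with functools; note that the column collection also supports key access, which avoids the loop entirely:

from functools import lru_cache

@lru_cache(maxsize=None)
def get_col_type_cached(cls_, fld_):
    # __table__.columns supports lookup by column name
    return cls_.__table__.columns[fld_].type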
Related questions: "Type coercion for sqlalchemy prior to committing to some database" and "How can I verify Column data types in the SQLAlchemy ORM?". The following recipe coerces values as they are assigned:
from sqlalchemy import (
    Column,
    Integer,
    String,
    DateTime,
)
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import event
import datetime

Base = declarative_base()

type_coercion = {
    Integer: int,
    String: str,
    DateTime: datetime.datetime,
}

# this event is called whenever an attribute
# on a class is instrumented
@event.listens_for(Base, 'attribute_instrument')
def configure_listener(class_, key, inst):
    if not hasattr(inst.property, 'columns'):
        return

    # this event is called whenever a "set"
    # occurs on that instrumented attribute
    @event.listens_for(inst, "set", retval=True)
    def set_(instance, value, oldvalue, initiator):
        desired_type = type_coercion.get(inst.property.columns[0].type.__class__)
        coerced_value = desired_type(value)
        return coerced_value

class MyObject(Base):
    __tablename__ = 'mytable'
    id = Column(Integer, primary_key=True)
    svalue = Column(String)
    ivalue = Column(Integer)
    dvalue = Column(DateTime)

x = MyObject(svalue=50)
assert isinstance(x.svalue, str)
I'm not sure if I'm reading this question correctly, but I would do something like:
class MyClass(Base):
    some_float = Column(Float)
    some_string = Column(String)
    some_int = Column(Integer)
    ...

    def __init__(self, some_float, some_string, some_int, ...):
        if isinstance(some_float, float):
            self.some_float = some_float
        else:
            try:
                self.some_float = float(some_float)
            except (TypeError, ValueError):
                pass  # do something intelligent
        if isinstance(some_string, str):
            ...
And I would repeat the checking process for each column. I wouldn't trust anything to do it "automatically". I also expect your file of strings to be well structured; otherwise something more complicated would have to be done.
Assuming your file is a CSV (I'm not great with file reads in Python, but something like this with the csv module):

import csv

with open('thisfile.csv') as f:
    for thisline in csv.reader(f):  # each line is an ordered list of strings
        thisthing = MyClass(some_float=thisline[0], some_string=thisline[1], ...)
        DBSession.add(thisthing)
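To tie this back to the column-driven approach, here is a minimal sketch of coercing one value from the column definition, reusing get_col_type from the earlier answer (the Float/Integer handling is an assumption based on the columns above):

from sqlalchemy import Float, Integer

def coerce_value(cls_, fld_, raw):
    col_type = get_col_type(cls_, fld_)
    if isinstance(col_type, Float):
        return float(raw)
    if isinstance(col_type, Integer):
        return int(raw)
    return raw  # leave strings (and anything unrecognized) alone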