conditionals inside SQLAlchemy's column_property - sqlalchemy

I'm using SQLAlchemy to map a class:
class Model(sqlalchemy.declarative_base()):
    attr_a = Column(String)
    attr_b = Column(Integer)
    attr_c = Column(Integer)
    aggr = column_property(attr_b + attr_c IF attr_a == 'go' ELSE attr_b - attr_c)
The last line is pseudocode that requires some conditional logic. Is such logic even possible inside column_property? How can I implement it as a simple conditional aggregate?

It turns out this is a common technique: SQLAlchemy provides a tool set inside sqlalchemy.sql, so one can easily write SQL logic such as CASE:
from sqlalchemy.sql import case
...
aggr = column_property(case([(attr_a == "go", attr_b + attr_c)], else_=attr_b - attr_c))
Just note that case() takes a Python iterable of (condition, result) tuples as its first parameter, with an optional else_ fallback.
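A fuller, self-contained sketch (using the column names from the question; the __tablename__ and primary key are assumptions added for illustration):
from sqlalchemy import Column, Integer, String
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import column_property
from sqlalchemy.sql import case

Base = declarative_base()

class Model(Base):
    __tablename__ = 'model'          # assumed table name
    id = Column(Integer, primary_key=True)  # assumed primary key
    attr_a = Column(String)
    attr_b = Column(Integer)
    attr_c = Column(Integer)
    # Emits CASE WHEN attr_a = 'go' THEN attr_b + attr_c ELSE attr_b - attr_c END
    aggr = column_property(case([(attr_a == 'go', attr_b + attr_c)],
                                else_=attr_b - attr_c))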

Related

How to execute a scenario using data from the previous scenario?

I'd like to execute two scenarios one after another, where the data "produced" by the first scenario is used as the base for the second scenario.
A use case could be, for example, clearing of a credit card. The first scenario is there to authorize/reserve a certain amount on the card:
val auths = scenario("auths").during(durationInMinutes minutes) {
  feed(credentials)
    .feed(firstNames)
    .feed(lastNames)
    .feed(cards)
    .feed(amounts)
    .exec(http("send auth requests")
      .post(...)
      .check(...))
}
The second one is there to capture/take the amount from the credit card:
val caps = scenario("caps").during(durationInMinutes minutes) {
  feed(credentials)
    .feed(RESPONSE_IDS_FROM_PREVIOUS_SCENARIO)
    .exec(http("send auth requests")
      .post(...)
      .check(...))
}
I initially thought about using the saveAs(...) option on check but I figured out that the saved field is only valid for the given session.
So basically I want to preserve the IDs I got from the auths scenario and use them in the caps scenario.
I cannot execute both steps in one scenario though (saveAs would work for that) because I have different requirements for the two scenarios.
Quoting the documentation: "Presently our Simulation is one big monolithic scenario. So first let us split it into composable business processes, akin to the PageObject pattern with Selenium. This way, you’ll be able to easily reuse some parts and build complex behaviors without sacrificing maintenance." at gatling.io/Advanced Tutorial
Thus there is no built-in mechanism for communication between scenarios (AFAIK). The recommendation is to structure your code in such a way that you can chain your calls to the URIs one after another. In your case (implementation details aside) you should have something like this:
val auths = feed(credentials)
  .feed(firstNames)
  .feed(lastNames)
  .feed(cards)
  .feed(amounts)
  .exec(http("send auth requests")
    .post(...)
    .check(...) // extract and store RESPONSE_ID to session
  )
val caps = exec(http("send auth requests")
  .post(...) // use of RESPONSE_ID from session
  .check(...))
Then your scenario can look something like this:
val scn = scenario("auth with caps").exec(auths, caps) // rest omitted
An even better way to structure your code may be to use objects. See the tutorial linked above.
A more illustrative example (it compiles, but I didn't run it since the domain is foo.com):
import io.gatling.core.Predef._
import io.gatling.http.Predef._

class ExampleSimulation extends Simulation {
  import scala.util.Random
  import scala.concurrent.duration._

  val httpConf = http.baseURL(s"http://foo.com")

  val emails = Iterator.continually(Map("email" -> (Random.alphanumeric.take(20).mkString + "@foo.com")))
  val names = Iterator.continually(Map("name" -> Random.alphanumeric.take(20).mkString))

  val getIdByEmail = feed(emails)
    .exec(
      http("Get By Email")
        .get("/email/${email}")
        .check(
          jsonPath("userId").saveAs("anId")
        )
    )

  val getIdByName = feed(names)
    .exec(
      http("Get By Name")
        .get("/name/${name}")
        .check(
          jsonPath("userId").is(session =>
            session("anId").as[String]
          )
        )
    )

  val scn = scenario("Get and check user id").exec(getIdByEmail, getIdByName).inject(constantUsersPerSec(5) during (5.minutes))

  setUp(scn).protocols(httpConf)
}
Hope it is what you're looking for.

Modify all numbers before insert or update

In SqlAlchemy I use:
price = Column(Numeric(18, 5))
in various places throughout my app. When I get a number formatted in Swedish, with a comma instead of a dot (0,34 instead of 0.34), and try to change the price column, the number gets set to 0.00000.
To solve this I have this code:
obj.price = price.replace(',','.')
But having this all over the code makes it pretty ugly, and the risk is that I forget one place. Would it be possible to have some kind of generic converter function which gets called before a value is converted from a string to a Numeric, so that I have it in one place only?
Check the validates decorator of SQLAlchemy: http://docs.sqlalchemy.org/en/rel_1_0/orm/mapped_attributes.html
A quick way to add a “validation” routine to an attribute is to use
the validates() decorator. An attribute validator can raise an
exception, halting the process of mutating the attribute’s value, or
can change the given value into something different.
In your case the code could look similar to:
from sqlalchemy.orm import validates

class Obj(Base):
    __tablename__ = 'obj'

    id = Column(Integer, primary_key=True)
    price = Column(Numeric(18, 5))

    @validates('price')
    def validate_price(self, key, price):
        # Normalize a comma decimal separator before the value reaches the Numeric column
        if ',' in price:
            return float(price.replace(',', '.'))
        else:
            return float(price)
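As a usage sketch (the Base and session setup are assumed), assigning a Swedish-formatted string is now normalized in a single place:
obj = Obj()
obj.price = '0,34'   # validate_price() intercepts the assignment and returns 0.34
session.add(obj)
session.commit()     # stored as 0.34000 in the Numeric(18, 5) column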

How do we update an HSTORE field with Flask-Admin?

How do I update an HSTORE field with Flask-Admin?
The regular ModelView doesn't show the HSTORE field in Edit view. It shows nothing. No control at all. In list view, it shows a column with data in JSON notation. That's fine with me.
Using a custom ModelView, I can change the HSTORE field into a TextAreaField. This will show me the HSTORE field in JSON notation when in edit view. But I cannot edit/update it. In list view, it still shows me the object in JSON notation. Looks fine to me.
class MyView(ModelView):
    form_overrides = dict(attributes=fields.TextAreaField)
When I attempt to save/edit the JSON, I receive this error:
sqlalchemy.exc.InternalError
InternalError: (InternalError) Unexpected end of string
LINE 1: UPDATE mytable SET attributes='{}' WHERE mytable.id = ...
^
'UPDATE mytable SET attributes=%(attributes)s WHERE mytable.id = %(mytable_id)s' {'attributes': u'{}', 'mytable_id': 14L}
Now -- using code, I can get something to save into the HSTORE field:
class MyView(ModelView):
    form_overrides = dict(attributes=fields.TextAreaField)

    def on_model_change(self, form, model, is_created):
        model.attributes = {"a": "1"}
        return
This basically overrides the model and puts this object into it. I can then see the object in the List view and the Edit view. Still not good enough -- I want to save/edit the object that the user typed in.
I tried to parse and save the content from the form into JSON and back out. This doesn't work:
class MyView(ModelView):
    form_overrides = dict(attributes=fields.TextAreaField)

    def on_model_change(self, form, model, is_created):
        x = form.data['attributes']
        y = json.loads(x)
        model.attributes = y
        return
json.loads(x) says this:
ValueError: Expecting property name: line 1 column 1 (char 1)
and here are some sample inputs that fail:
{u's': u'ff'}
{'s':'ff'}
However, this input works:
{}
Blank also works
This is my SQL Table:
CREATE TABLE mytable (
    id BIGSERIAL UNIQUE PRIMARY KEY,
    attributes hstore
);
This is my SQA Model:
class MyTable(Base):
    __tablename__ = u'mytable'

    id = Column(BigInteger, primary_key=True)
    attributes = Column(HSTORE)
Here is how I added the views to the admin object:
admin.add_view(ModelView(models.MyTable, db.session))
Or, adding the view using the custom ModelView:
admin.add_view(MyView(models.MyTable, db.session))
(But I don't add both views at the same time -- I get a Blueprint name collision error; separate issue.)
I also attempted to use a form field converter. I couldn't get it to actually hit the code.
class MyModelConverter(AdminModelConverter):
    def post_process(self, form_class, info):
        raise Exception('here I am')  # but it never hits this
        return form_class

class MyView(ModelView):
    form_overrides = dict(attributes=fields.TextAreaField)
The answer gives you a bit more than asked.
First of all, it "extends" hstore to be able to store actual JSON, not just key-value pairs.
So this structure is also OK:
{"key":{"inner_object_key":{"Another_key":"Done!","list":["no","problem"]}}}
So, first of all, your ModelView should use a custom converter:
class ExtendedModelView(ModelView):
    model_form_converter = CustomAdminConverter
The converter itself should know how to handle the hstore dialect type:
class CustomAdminConverter(AdminModelConverter):
    @converts('sqlalchemy.dialects.postgresql.hstore.HSTORE')
    def conv_HSTORE(self, field_args, **extra):
        return DictToHstoreField(**field_args)
This, as you can see, uses a custom WTForms field which converts data in both directions:
class DictToHstoreField(TextAreaField):
    def process_data(self, value):
        # DB -> form: render the hstore dict as JSON in the textarea
        if value is None:
            value = {}
        else:
            for key, obj in value.iteritems():
                # values that look like JSON objects/lists were stored as strings; decode them
                if (obj.startswith("{") and obj.endswith("}")) or (obj.startswith("[") and obj.endswith("]")):
                    try:
                        value[key] = json.loads(obj)
                    except:
                        pass
        self.data = json.dumps(value)

    def process_formdata(self, valuelist):
        # form -> DB: hstore can only store text values, so re-serialize nested structures
        if valuelist:
            self.data = json.loads(valuelist[0])
            for key, obj in self.data.iteritems():
                if isinstance(obj, dict) or isinstance(obj, list):
                    self.data[key] = json.dumps(obj)
                if isinstance(obj, int):
                    self.data[key] = str(obj)
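To make the conversion concrete, here is a small standalone sketch (to_hstore_dict is a hypothetical helper, not part of the answer) that mirrors what process_formdata does to the submitted JSON:
import json

def to_hstore_dict(form_value):
    # hstore can only hold text values, so nested structures are re-serialized
    # and plain ints are stringified, exactly as in process_formdata above
    data = json.loads(form_value)
    for key, obj in data.items():
        if isinstance(obj, (dict, list)):
            data[key] = json.dumps(obj)
        elif isinstance(obj, int):
            data[key] = str(obj)
    return data

print(to_hstore_dict('{"key": {"inner": ["a", "b"]}, "count": 3}'))
# -> {'key': '{"inner": ["a", "b"]}', 'count': '3'}  (key order may vary)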
The final step is to actually use this data in the application.
I did not do this in a nice generic way for SQLAlchemy, since it was used with flask-restful, so I only have an adaptation for flask-restful in one direction, but I think it's easy to get the idea from here and do the rest.
If your case is simple key-value storage, nothing additional needs to be done; just use it as is.
But if you want to unwrap the JSON somewhere in code, it's as simple as this wherever you use it -- just wrap it in a function:
if (value.startswith("{") and value.endswith("}")) or (value.startswith("[") and value.endswith("]")):
    value = json.loads(value)
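For example, wrapped in a small helper (unwrap_hstore_value is a hypothetical name, not from the original answer):
import json

def unwrap_hstore_value(value):
    # Decode values that DictToHstoreField stored as JSON strings; leave plain text alone
    if (value.startswith("{") and value.endswith("}")) or (value.startswith("[") and value.endswith("]")):
        try:
            return json.loads(value)
        except ValueError:
            pass
    return value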
Creating dynamic fields for a nicer, non-JSON way of editing the data is also possible by extending FormField and adding some JavaScript for adding/removing fields, but that is a whole different story; in my case I needed actual JSON storage, with blackjack and lists :)
I was working with the Postgres JSON datatype. The above solution worked great with minor modifications.
I tried:
'sqlalchemy.dialects.postgresql.json.JSON',
'sqlalchemy.dialects.postgresql.JSON',
'dialects.postgresql.json.JSON',
'dialects.postgresql.JSON'
The above versions did not work.
Finally, the following change worked:
@converts('JSON')
And I changed the DictToHstoreField class to the following:
class DictToJSONField(fields.TextAreaField):
    def process_data(self, value):
        if value is None:
            value = {}
        self.data = json.dumps(value)

    def process_formdata(self, valuelist):
        if valuelist:
            self.data = json.loads(valuelist[0])
        else:
            self.data = '{}'
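Putting the two changes together, the converter registration might look like this (conv_JSON is a hypothetical method name, mirroring the HSTORE converter above):
class CustomAdminConverter(AdminModelConverter):
    @converts('JSON')
    def conv_JSON(self, field_args, **extra):
        # Use the JSON-aware textarea field for postgresql.JSON columns
        return DictToJSONField(**field_args)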
Although this might not be the answer to your question: by default, SQLAlchemy's ORM doesn't detect in-place changes to HSTORE field values. Fortunately, there's a solution: SQLAlchemy's MutableDict type:
from sqlalchemy.dialects.postgresql import HSTORE
from sqlalchemy.ext.mutable import MutableDict

class MyClass(Base):
    __tablename__ = 'mytable'

    id = Column(Integer, primary_key=True)
    attributes = Column(MutableDict.as_mutable(HSTORE))
Now when you change something in-place:
my_object.attributes['some_key'] = 'some value'
the HSTORE field will be updated on session.commit().
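A quick usage sketch of the difference (the session setup is assumed; without MutableDict the first commit would silently skip the change):
obj = session.query(MyClass).first()
obj.attributes['some_key'] = 'some value'   # in-place mutation is tracked by MutableDict
session.commit()                            # emits an UPDATE for the hstore column

# With a plain Column(HSTORE) you would have to reassign the whole dict instead:
# obj.attributes = dict(obj.attributes, some_key='some value')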

Unique Sequential Number for a column

I need to create a sequence, but in a generic way, not using the Sequence class.
USN = Column(Integer, nullable=False, default=nextusn, server_onupdate=nextusn)
This function nextusn needs to generate the func.max(table.USN) value over the rows in the model.
I tried using this:
class nextusn(expression.FunctionElement):
    type = Numeric()
    name = 'nextusn'

@compiles(nextusn)
def default_nextusn(element, compiler, **kw):
    return select(func.max(element.table.c.USN)).first()[0] + 1
but in this context the element does not know element.table. Is there a way to resolve this?
This is a little tricky, for these reasons:
your SELECT MAX() will return NULL if the table is empty; you should use COALESCE to produce a default "seed" value. See below.
the whole approach of inserting rows with SELECT MAX is not safe for concurrent use - so you need to make sure only one INSERT statement at a time runs against the table, or you may get constraint violations (you should definitely have a constraint of some kind on this column).
from the SQLAlchemy perspective, you need your custom element to be aware of the actual Column element. We can achieve this either by assigning the "nextusn()" function to the Column after the fact, or below I'll show a more sophisticated approach using events.
I don't understand what you're going for with "server_onupdate=nextusn". "server_onupdate" in SQLAlchemy doesn't actually run any SQL for you, this is a placeholder if for example you created a trigger; but also the "SELECT MAX(id) FROM table" thing is an INSERT pattern, I'm not sure that you mean for anything to be happening here on an UPDATE.
The @compiles extension needs to return a string, running the select() there through compiler.process(). See below.
example:
from sqlalchemy import Column, Integer, create_engine, select, func, String
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.sql.expression import ColumnElement
from sqlalchemy.schema import ColumnDefault
from sqlalchemy.ext.compiler import compiles
from sqlalchemy import event


class nextusn_default(ColumnDefault):
    "Container for a nextusn() element."

    def __init__(self):
        super(nextusn_default, self).__init__(None)


@event.listens_for(nextusn_default, "after_parent_attach")
def set_nextusn_parent(default_element, parent_column):
    """Listen for when nextusn_default() is associated with a Column,
    assign a nextusn().
    """
    assert isinstance(parent_column, Column)
    default_element.arg = nextusn(parent_column)


class nextusn(ColumnElement):
    """Represent "SELECT MAX(col) + 1 FROM TABLE".
    """

    def __init__(self, column):
        self.column = column


@compiles(nextusn)
def compile_nextusn(element, compiler, **kw):
    return compiler.process(
        select([
            func.coalesce(func.max(element.column), 0) + 1
        ]).as_scalar()
    )


Base = declarative_base()


class A(Base):
    __tablename__ = 'a'

    id = Column(Integer, default=nextusn_default(), primary_key=True)
    data = Column(String)


e = create_engine("sqlite://", echo=True)
Base.metadata.create_all(e)

# will normally pre-execute the default so that we know the PK value
# result.inserted_primary_key will be available
e.execute(A.__table__.insert(), data='single row')

# will run the default expression inline within the INSERT
e.execute(A.__table__.insert(), [{"data": "multirow1"}, {"data": "multirow2"}])

# will also run the default expression inline within the INSERT,
# result.inserted_primary_key will not be available
e.execute(A.__table__.insert(inline=True), data='single inline row')

Django equivalent of SqlAlchemy's literal_column

Trying to port some SqlAlchemy to Django and I've got this tricky little bit:
version = Column(
    BIGINT,
    default=literal_column(
        'UNIX_TIMESTAMP() * 1000000 + MICROSECOND(CURRENT_TIMESTAMP)'
    ),
    nullable=False)
What's the best option for porting the literal_column bit to Django? The best idea I've got so far is a function to set as the default that executes the same raw SQL, but I'm not sure if there's an easier way. My google-foo is failing me here.
Edit: the reason we need to use a timestamp created by MySQL is that we are measuring how out of date something is (so we need to actually know the time), and we want, for correctness, to have only one time-stamping authority (so that we don't introduce error by using Python functions that look at system times, which could differ across servers).
At present I've got:
from django.db import connection

def get_current_timestamp():
    # Called with no arguments when used as a field default;
    # asks MySQL for the timestamp so there is a single time-stamping authority
    cursor = connection.cursor()
    cursor.execute("SELECT UNIX_TIMESTAMP() * 1000000 + MICROSECOND(CURRENT_TIMESTAMP)")
    row = cursor.fetchone()
    return row[0]

version = models.BigIntegerField(default=get_current_timestamp)
which, at this point, sounds like my best/only option.
If you don't care about having a central time authority:
import time

version = models.BigIntegerField(
    default=lambda: int(time.time() * 1000000))
To bend the database to your will:
from django.db.models.expressions import ExpressionNode

class NowInt(ExpressionNode):
    """Pass this in the same manner you would pass Count or F objects."""

    def __init__(self):
        super(NowInt, self).__init__(None, None, False)

    def evaluate(self, evaluator, qn, connection):
        return '(UNIX_TIMESTAMP() * 1000000 + MICROSECOND(CURRENT_TIMESTAMP))', []

### Model
version = models.BigIntegerField(default=NowInt())
Because expression nodes are not callables, the expression will be evaluated database-side.