Check if object is an sqlalchemy model instance - sqlalchemy

I want to know how, given an object, to determine whether it is an instance of an SQLAlchemy mapped model.
Normally, I would use isinstance(obj, DeclarativeBase). However, in this scenario I do not have the DeclarativeBase class that was used available (since it is defined in another project).
I would like to know what the best practice is in this case.
class Person(DeclarativeBase):
    __tablename__ = "Persons"

p = Person()
print(isinstance(p, DeclarativeBase))
# prints True
# However, in my scenario I do not have DeclarativeBase available,
# since the DeclarativeBase will be constructed in the depending web app,
# while my code will act as a library that is imported into that web app.
# What are my alternatives?

You can use class_mapper() and catch the exception.
Or you could use _is_mapped_class, but ideally you should not as it is not a public method.
from sqlalchemy.orm.util import class_mapper

def _is_sa_mapped(cls):
    try:
        class_mapper(cls)
        return True
    except:
        return False

print(_is_sa_mapped(MyClass))

# note: use this at your own risk as it might be removed/renamed in the future
from sqlalchemy.orm.util import _is_mapped_class
print(bool(_is_mapped_class(MyClass)))

For instances there is object_mapper(), so:
from sqlalchemy.orm.base import object_mapper
from sqlalchemy.orm.exc import UnmappedInstanceError

def is_mapped(obj):
    try:
        object_mapper(obj)
    except UnmappedInstanceError:
        return False
    return True
The complete mapper utilities are documented here: http://docs.sqlalchemy.org/en/rel_1_0/orm/mapping_api.html
Just a consideration: since specific errors are raised by SQLAlchemy (UnmappedClassError for classes and UnmappedInstanceError for instances), why not catch them rather than a generic exception? ;)
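Following that suggestion, a variant that catches the specific exceptions might look like this (a sketch; the imports assume a 1.0-era SQLAlchemy, and both helper names are illustrative, not part of any library):

from sqlalchemy.orm import class_mapper, object_mapper
from sqlalchemy.orm.exc import UnmappedClassError, UnmappedInstanceError

def is_mapped_class(cls):
    # True if cls is a class mapped by the ORM, without needing DeclarativeBase.
    try:
        class_mapper(cls)
        return True
    except UnmappedClassError:
        return False

def is_mapped_instance(obj):
    # Same check, but for instances of mapped classes.
    try:
        object_mapper(obj)
        return True
    except UnmappedInstanceError:
        return False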


Python objects in dealloc in cython

The docs say that "Any C data that you explicitly allocated (e.g. via malloc) in your __cinit__() method should be freed in your __dealloc__() method."
This is not my case. I have the following extension class:
cdef class SomeClass:
    cdef dict data
    cdef void * u_data

    def __init__(self, data_len):
        self.data = {'columns': []}
        if data_len > 0:
            self.data.update({'data': deque(maxlen=data_len)})
        else:
            self.data.update({'data': []})
        self.u_data = <void *>self.data

    @property
    def data(self):
        return self.data

    @data.setter
    def data(self, new_val: dict):
        self.data = new_val
Some C function has access to this class and appends some data to the SomeClass().data dict. What should I write in __dealloc__ when I want to delete an instance of SomeClass?
Maybe something like:
def __dealloc__(self):
    self.data = None
    free(self.u_data)
Or there is no need to dealloc anything at all?
No, you don't need to, and no, you shouldn't. From the documentation:
You need to be careful what you do in a __dealloc__() method. By the time your __dealloc__() method is called, the object may already have been partially destroyed and may not be in a valid state as far as Python is concerned, so you should avoid invoking any Python operations which might touch the object. In particular, don’t call any other methods of the object or do anything which might cause the object to be resurrected. It’s best if you stick to just deallocating C data.
You don’t need to worry about deallocating Python attributes of your object, because that will be done for you by Cython after your __dealloc__() method returns.
You can confirm this by inspecting the generated C code (you need to look at the full code, not just the annotated HTML). There's an autogenerated function __pyx_tp_dealloc_9someclass_SomeClass (the name may vary slightly depending on what you called your module) that does a range of things, including:
__pyx_pw_9someclass_9SomeClass_3__dealloc__(o);
/* some other code */
Py_CLEAR(p->data);
where the function __pyx_pw_9someclass_9SomeClass_3__dealloc__ is (a wrapper for) your user-defined __dealloc__. Py_CLEAR will ensure that data is appropriately reference-counted then set to NULL.
It's a little hard to follow because it all goes through several layers of wrappers, but you can confirm that it does what the documentation says.

JSON Encoding custom class not calling overriden default

I have a class definition based on json.JSONEncoder, within which I have overridden the default method. Now, when I call json.dumps on an instance of that class, the default method is not called. Is there something I have missed?
In my example code I do not expect this to magically produce the serialized object but I would expect the print("here") to be executed.
import json

class MyClass(json.JSONEncoder):
    id = "myId"
    data = "myData"

    def default(self, o):
        print("here")

print("Create instance")
obj = MyClass()
print("Serialize")
print(json.dumps(obj))
print("and done")
I am quite new to Python, so apologies if this is something horribly obvious.
After some further digging and tracing I think I have found the cause. Part of the issue, I think, is my own misunderstanding of how this is intended to be used.
When calling json.dumps, if you wish to use a custom encoder you need to pass the encoder class explicitly; otherwise dumps defaults to the standard JSONEncoder implementation.
json.dumps(obj, cls=MyEncoder)
My misconception was that, by basing my class on json.JSONEncoder, dumps would simply recognise the instance as inheriting from JSONEncoder and call the overridden default method. However, this is not the case.
I have now put the encoding logic for my own class/types into its own encoder class, and when I call json.dumps I pass in that class.
So I now have
class MyEncoder(json.JSONEncoder):
    def default(self, obj):
        if isinstance(obj, MyClass):
            return {"id": obj.id, "data": obj.data}
        return json.JSONEncoder.default(self, obj)
And when I wish to serialize I use
json.dumps(object_to_serialize, cls=MyEncoder)
Which recognises my class and handles it, or passes on the encoding to the default encoder.
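Put together, a minimal end-to-end sketch (with MyClass reduced to a plain class rather than a JSONEncoder subclass, and with illustrative attribute values) looks roughly like this:

import json

class MyClass:
    def __init__(self):
        self.id = "myId"
        self.data = "myData"

class MyEncoder(json.JSONEncoder):
    def default(self, obj):
        # Handle our own type; defer everything else to the base implementation.
        if isinstance(obj, MyClass):
            return {"id": obj.id, "data": obj.data}
        return json.JSONEncoder.default(self, obj)

print(json.dumps(MyClass(), cls=MyEncoder))
# {"id": "myId", "data": "myData"}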

How to insert json fixture data in Play Specification tests?

I have a Scala Play 2.2.2 application and as part of my Specification tests I would like to insert some fixture data for testing preferably in json format. For the tests I use the usual in-memory H2 database. How can I accomplish this? I have searched all the documentation but there is no mention to this anywhere.
Note that I would prefer not to build my own flavor of fixture implementation via the Global. There should be a non-hacky way to do this, right?
AFAIK there is no built-in way to do this, à la Rails, and it's hard to imagine what the devs could do without making Play Scala much more opinionated about the way persistence should be handled (which I'd personally consider a negative).
I also use H2 for testing and employ plain SQL fixtures in a resource file and load them before tests using a couple of (fairly simple) helpers:
package object helpers {

  import java.io.File
  import java.sql.CallableStatement

  import org.specs2.execute.{Result, AsResult}
  import org.specs2.mutable.Around
  import org.specs2.specification.Scope

  import play.api.db.DB
  import play.api.test.FakeApplication
  import play.api.test.Helpers._

  /**
   * Load a file containing SQL statements into the DB.
   */
  private def loadSqlResource(resource: String)(implicit app: FakeApplication) = DB.withConnection { conn =>
    val file = new File(getClass.getClassLoader.getResource(resource).toURI)
    val path = file.getAbsolutePath
    val statement: CallableStatement = conn.prepareCall(s"RUNSCRIPT FROM '$path'")
    statement.execute()
    conn.commit()
  }

  /**
   * Run a spec after loading the given resource name as SQL fixtures.
   */
  abstract class WithSqlFixtures(val resource: String, val app: FakeApplication = FakeApplication()) extends Around with Scope {

    implicit def implicitApp = app

    override def around[T: AsResult](t: => T): Result = {
      running(app) {
        loadSqlResource(resource)
        AsResult.effectively(t)
      }
    }
  }
}
Then, in your actual spec you can do something like so:
package models

import helpers.WithSqlFixtures
import play.api.test.PlaySpecification

class MyModelSpec extends PlaySpecification {

  "My model" should {
    "locate items correctly" in new WithSqlFixtures("model-fixtures.sql") {
      MyModel.findAll().size must beGreaterThan(0)
    }
  }
}
Note: this specs2 stuff could probably be better.
Obviously if you really need JSON you'll have to add extra machinery to deserialise your models and persist them in the database (often in your app you'll be doing these things anyway, in which case that might be relatively trivial.)
You'll also need:
Some evolutions to establish your DB schema in conf/evolutions/default
The evolution plugin enabled, which will build your schema when the FakeApplication starts up
The appropriate H2 DB config

Error working with JSON in Play 2

I am trying to serialize / deserialize using JSON and the Play Framework 2.1.0 and Scala 2.10. I am using Anorm, and I have a pretty simple Object that I want to store in a database. The Order is very simple:
case class Order(id: Pk[Long] = NotAssigned, mfg: String, tp: String)
In my controller, I am trying to build a REST interface to be able to accept and send an Order instance (above) as JSON. In there, I have the following code:
implicit object PkFormat extends Format[Pk[Long]] {
  def reads(json: JsValue): Pk[Long] = Id(json.as[Long])
  def writes(id: Pk[Long]): JsNumber = JsNumber(id.get)
}
However, this is failing to compile when I run "play test" with the following:
overriding method reads in trait Reads of type (json: play.api.libs.json.JsValue)play.api.libs.json.JsResult[anorm.Pk[Long]];
[error] method reads has incompatible type
[error] def reads(json: JsValue):Pk[Long] = Id(json.as[Long])
Does anyone know why this is happening?
I have a lot of experience with JAXB, but I am relatively new to Play and Scala, and I haven't been able to find any answers so far. This seems like a pretty simple use case; I was actually hoping there would be a simpler solution (like annotations), but I haven't been able to find one (at least not yet).
Any help is greatly appreciated!
Thanks
The play.api.libs.json.Reads trait defines the reads method as:
def reads(json : play.api.libs.json.JsValue) : play.api.libs.json.JsResult[A]
Therefore, the return value of the reads method is expected to be JsResult[A], not A; that is, JsResult[Pk[Long]] and not Pk[Long]. In the case of success, you'll want to return this:
implicit object PkFormat extends Format[Pk[Long]] {
  def reads(json: JsValue): JsResult[Pk[Long]] = JsSuccess(Id(json.as[Long]))
  def writes(id: Pk[Long]): JsNumber = JsNumber(id.get)
}

RESTful interface in Flask and issues serializing

I'm learning Backbone.js and Flask (and Flask-sqlalchemy). I chose Flask because I read that it plays well with Backbone implementing RESTful interfaces. I'm currently following a course that uses (more or less) this model:
class Tasks(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    title = db.Column(db.String(80), unique=True)
    completed = db.Column(db.Boolean, unique=False, default=False)

    def __init__(self, title, completed):
        self.title = title
        self.completed = completed

    def json_dump(self):
        return dict(title=self.title, completed=self.completed)

    def __repr__(self):
        return '<Task %r>' % self.title
I had to add a json_dump method in order to send JSON to the browser. Otherwise, I would get errors like object is not JSON serializable, so my first question is:
Is there a better way to do serialization in Flask? It seems that some objects are serializable but others aren't, but in general, it's not as easy as I expected.
After a while, I ended up with the following views to take care of each type of request:
@app.route('/tasks')
def tasks():
    tasks = Tasks.query.all()
    serialized = json.dumps([c.json_dump() for c in tasks])
    return serialized

@app.route('/tasks/<id>', methods=['GET'])
def get_task(id):
    tasks = Tasks.query.get(int(id))
    serialized = json.dumps(tasks.json_dump())
    return serialized

@app.route('/tasks/<id>', methods=['PUT'])
def put_task(id):
    task = Tasks.query.get(int(id))
    task.title = request.json['title']
    task.completed = request.json['completed']
    db.session.add(task)
    db.session.commit()
    serialized = json.dumps(task.json_dump())
    return serialized

@app.route('/tasks/<id>', methods=['DELETE'])
def delete_task(id):
    task = Tasks.query.get(int(id))
    db.session.delete(task)
    db.session.commit()
    serialized = json.dumps(task.json_dump())
    return serialized

@app.route('/tasks', methods=['POST'])
def post_task():
    task = Tasks(request.json['title'], request.json['completed'])
    db.session.add(task)
    db.session.commit()
    serialized = json.dumps(task.json_dump())
    return serialized
In my opinion, it seems a bit verbose. Again, what is the proper way to implement them? I have seen some extensions that offer RESTful interfaces in Flask but those look quite complex to me.
Thanks
I would use a module to do this, honestly. We've used Flask-Restless for some APIs; you might take a look at that:
https://flask-restless.readthedocs.org/en/latest/
However, if you want to build your own, you can use SQLAlchemy's introspection to output your objects as key/value pairs.
http://docs.sqlalchemy.org/en/rel_0_7/core/schema.html#metadata-reflection
Something like this, although I always have to triple-check I got the syntax right, so take this as a guide more than working code.
@app.route('/tasks')
def tasks():
    tasks = Tasks.query.all()
    output = []
    for task in tasks:
        row = {}
        for field in Tasks.__table__.c:
            # field is a Column object; use its name to read the attribute off the instance
            row[field.name] = getattr(task, field.name, None)
        output.append(row)
    return jsonify(data=output)
I found this question which might help you more. I'm familiar with SQLAlchemy 0.7 and it looks like 0.8 added some nicer introspection techniques:
SQLAlchemy introspection
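For example, with the inspect() API added in SQLAlchemy 0.8, a generic helper might look roughly like this (a sketch; task_to_dict is an illustrative name, not something Flask or SQLAlchemy provide):

from sqlalchemy import inspect

def task_to_dict(task):
    # Map each mapped column attribute's name to its current value on the instance.
    return {attr.key: getattr(task, attr.key)
            for attr in inspect(task).mapper.column_attrs}

You could then return jsonify(data=[task_to_dict(t) for t in Tasks.query.all()]) from the view.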
Flask provides the jsonify() function to do this; see the Flask documentation for how it works.
Your json_dump method is fine, though the code can be made more concise. See this snippet:
@app.route('/tasks')
def tasks():
    tasks = Tasks.query.all()
    return jsonify(data=[c.json_dump() for c in tasks])