Back in October 2010, I posted this question to the SQLAlchemy user list.
At the time, I just used the clear_mappers workaround mentioned in the message, and didn't try to figure out what the problem was. That was very naughty of me. Today I ran into this bug again, and decided to construct a minimal example, which appears below. Michael also addressed what is probably the same issue back in 2006. I decided to follow up here, to give Michael a break from my dumb questions.
So, the upshot appears to be that for a given class definition, you can't have more than one mapper defined. In my case I have the Pheno class declared in module scope (I assume that is top level scope here) and each time make_tables runs, it tries to define another mapper.
Mike wrote "Based on the description of the problem above, you need to ensure your Python classes are declared in the same scope as your mappers. The error message you're getting suggests that 'Pheno' is declared at the module level." That would take care of the problem, but how do I manage that, without altering my current structure? What other options do I have, if any? Apparently mapper doesn't have an option like "if the mapper is already defined, exit without doing anything", which would take care of it nicely. I guess I could define a wrapper function, but that would be pretty ugly.
from sqlalchemy import *
from sqlalchemy.orm import *

def make_pheno_table(meta, schema, name='pheno'):
    pheno_table = Table(
        name, meta,
        Column('patientid', String(60), primary_key=True),
        schema=schema,
    )
    return pheno_table

class Pheno(object):
    def __init__(self, patientid):
        self.patientid = patientid

def make_tables(schema):
    from sqlalchemy import MetaData
    meta = MetaData()
    pheno_table = make_pheno_table(meta, schema)
    mapper(Pheno, pheno_table)
    table_dict = {'metadata': meta, 'pheno_table': pheno_table}
    return table_dict

table_dict = make_tables('foo')
table_dict = make_tables('bar')
Error message follows. Tested with SQLAlchemy 0.6.3-3 on Debian squeeze.
$ python test.py
Traceback (most recent call last):
  File "test.py", line 25, in <module>
    table_dict = make_tables('bar')
  File "test.py", line 20, in make_tables
    mapper(Pheno, pheno_table)
  File "/usr/lib/python2.6/dist-packages/sqlalchemy/orm/__init__.py", line 818, in mapper
    return Mapper(class_, local_table, *args, **params)
  File "/usr/lib/python2.6/dist-packages/sqlalchemy/orm/mapper.py", line 209, in __init__
    self._configure_class_instrumentation()
  File "/usr/lib/python2.6/dist-packages/sqlalchemy/orm/mapper.py", line 381, in _configure_class_instrumentation
    self.class_)
sqlalchemy.exc.ArgumentError: Class '<class '__main__.Pheno'>' already has a primary mapper defined. Use non_primary=True to create a non primary Mapper. clear_mappers() will remove *all* current mappers from all classes.
EDIT: Per the documentation in SQLAlchemy: The mapper() API, I could replace mapper(Pheno, pheno_table) above with
from sqlalchemy.orm.exc import UnmappedClassError

try:
    class_mapper(Pheno)
except UnmappedClassError:
    mapper(Pheno, pheno_table)
If a mapper is not defined for Pheno, class_mapper raises an UnmappedClassError, in which case we create one. This at least doesn't raise an error in my test script, but I haven't checked whether it actually works. Comments?
EDIT2: Per Denis's suggestion, the following works:
class Tables(object):
    def make_tables(self, schema):
        class Pheno(object):
            def __init__(self, patientid):
                self.patientid = patientid

        from sqlalchemy import MetaData
        from sqlalchemy.orm.exc import UnmappedClassError
        meta = MetaData()
        pheno_table = make_pheno_table(meta, schema)
        mapper(Pheno, pheno_table)
        table_dict = {'metadata': meta, 'pheno_table': pheno_table, 'Pheno': Pheno}
        return table_dict

table_dict = Tables().make_tables('foo')
table_dict = Tables().make_tables('bar')
However, the superficially similar
# does not work
class Tables(object):
    class Pheno(object):
        def __init__(self, patientid):
            self.patientid = patientid

    def make_tables(self, schema):
        from sqlalchemy import MetaData
        from sqlalchemy.orm.exc import UnmappedClassError
        meta = MetaData()
        pheno_table = make_pheno_table(meta, schema)
        mapper(self.Pheno, pheno_table)
        table_dict = {'metadata': meta, 'pheno_table': pheno_table, 'Pheno': self.Pheno}
        return table_dict

table_dict = Tables().make_tables('foo')
table_dict = Tables().make_tables('bar')
does not. I get the same error message as before.
I don't really understand the scoping issues well enough to say why.
Isn't the Pheno class in both cases in some kind of local scope?
You are trying to map the same class Pheno to 2 different tables. SQLAlchemy allows only one primary mapper for each class, so that it knows which table to use for session.query(Pheno). It's not clear what you wish to achieve, so I can't propose a solution. There are two obvious options:
define a separate class to map to the second table,
create a non-primary mapper for the second table by passing the non_primary=True parameter and pass it (the value returned by the mapper() function) to session.query() instead of the class; a rough sketch follows below.
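A rough, untested sketch of option 2 (the pheno_table_foo and pheno_table_bar names are just placeholders for the two tables built by make_pheno_table):

mapper(Pheno, pheno_table_foo)                          # primary mapper
bar_mapper = mapper(Pheno, pheno_table_bar, non_primary=True)

session.query(Pheno)       # uses the primary mapper, i.e. the 'foo' table
session.query(bar_mapper)  # uses the non-primary mapper, i.e. the 'bar' table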
Update: to define a separate class for each table, you can put its definition into make_tables():
def make_tables(schema):
    from sqlalchemy import MetaData
    meta = MetaData()
    pheno_table = make_pheno_table(meta, schema)

    class Pheno(object):
        def __init__(self, patientid):
            self.patientid = patientid

    mapper(Pheno, pheno_table)
    table_dict = {'metadata': meta,
                  'pheno_class': Pheno,
                  'pheno_table': pheno_table}
    return table_dict
Maybe I didn't quite understand what you want, but this recipe creates identical columns in tables with different __tablename__ values:
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()

class TBase(object):
    """Base class is a 'mixin'.

    Guidelines for declarative mixins is at:
    http://www.sqlalchemy.org/docs/orm/extensions/declarative.html#mixin-classes
    """
    id = Column(Integer, primary_key=True)
    data = Column(String(50))

    def __repr__(self):
        return "%s(data=%r)" % (
            self.__class__.__name__, self.data
        )

class T1Foo(TBase, Base):
    __tablename__ = 't1'

class T2Foo(TBase, Base):
    __tablename__ = 't2'

engine = create_engine('sqlite:///foo.db', echo=True)
Base.metadata.create_all(engine)

sess = sessionmaker(engine)()

sess.add_all([T1Foo(data='t1'), T1Foo(data='t2'), T2Foo(data='t3'),
              T1Foo(data='t4')])

print sess.query(T1Foo).all()
print sess.query(T2Foo).all()

sess.commit()
More info in the SQLAlchemy examples.
I'm considering porting my app to SQLAlchemy as it's much more extensive than my own ORM implementation, but all the examples I could find show how to set the schema name at class declaration rather than dynamically at runtime.
I need to map my objects to Postgres tables from multiple schemas. Moreover, the application creates new schemas at runtime, and I need to map new instances of the class to rows of the table from that new schema.
Currently, I use my own ORM module where I just provide the schema name as an argument when creating new instances of a class (I call a class method with the schema name as an argument and it returns an object or objects that hold the schema name). The class describes a table that can exist in many schemas. The class declaration doesn't contain information about the schema, but instances of that class do contain it and include it when generating SQL statements.
This way, the application can work with many schemas simultaneously and even create foreign keys from tables in "other" schemas to the "main" table in the public schema. In that way it is also possible to cascade-delete data in other schemas when deleting the row in the public schema.
SQLAlchemy gives this example to set the schema for a table (documentation):
metadata_obj = MetaData(schema="remote_banks")

financial_info = Table(
    "financial_info",
    metadata_obj,
    Column("id", Integer, primary_key=True),
    Column("value", String(100), nullable=False),
)
But at the ORM level, when I declare the class, I have to pass an already constructed table (example from the documentation):
metadata = MetaData()

group_users = Table(
    "group_users",
    metadata,
    Column("user_id", String(40), nullable=False),
    Column("group_id", String(40), nullable=False),
    UniqueConstraint("user_id", "group_id"),
)

class Base(DeclarativeBase):
    pass

class GroupUsers(Base):
    __table__ = group_users
    __mapper_args__ = {"primary_key": [group_users.c.user_id, group_users.c.group_id]}
So, the question is: is it possible to map class instances to tables/rows from dynamically created database schemas (at runtime) in SQLAlchemy? The way of altering the connection to set the current schema is not acceptable to me. I want to work with all schemas simultaneously.
I'm free to use the newest SQLAlchemy 2.0 (currently in beta release).
You can set the schema per table, so I think you have to make a table and class per schema. Here is a made-up example. I have no idea what the ramifications are of changing the mapper registry during runtime, especially doing it as I have below, mid-transaction, or what would happen with thread safety. You could probably use a master schema-list table in public and lock it, or lock the same row across connections, to synchronize the schema list and provide thread safety when adding a schema. I'm surprised it works. Kind of cool.
import sys
from sqlalchemy import (
    create_engine,
    Integer,
    MetaData,
    Float,
    event,
)
from sqlalchemy.schema import (
    Column,
    CreateSchema,
    Table,
)
from sqlalchemy.orm import Session
from sqlalchemy.orm import registry

username, password, db = sys.argv[1:4]
engine = create_engine(f"postgresql+psycopg2://{username}:{password}@/{db}", echo=True)

metadata = MetaData()
mapper_registry = registry()


def map_class_to_some_table(cls, table, entity_name, **mapper_kwargs):
    newcls = type(entity_name, (cls,), {})
    mapper_registry.map_imperatively(newcls, table, **mapper_kwargs)
    return newcls


class Measurement(object):
    pass


units = []
cls_for_unit = {}
tbl_for_unit = {}


def add_unit(unit, create_bind=None):
    units.append(unit)
    schema_name = f"unit_{unit}"
    if create_bind:
        create_bind.execute(CreateSchema(schema_name))
    else:
        event.listen(metadata, "before_create", CreateSchema(schema_name))

    cols = [
        Column("id", Integer, primary_key=True),
        Column("value", Float, nullable=False),
    ]
    # One table per schema.
    tbl_for_unit[unit] = Table("measurement", metadata, *cols, schema=schema_name)
    if create_bind:
        tbl_for_unit[unit].create(create_bind)

    # One class per schema.
    cls_for_unit[unit] = map_class_to_some_table(
        Measurement, tbl_for_unit[unit], Measurement.__name__ + f"_{unit}"
    )


for unit in ["mm", "m"]:
    add_unit(unit)

metadata.create_all(engine)

with Session(engine) as session, session.begin():
    # Create a value for each unit (schema).
    session.add_all([cls(value=i) for i, cls in enumerate(cls_for_unit.values())])

with Session(engine) as session, session.begin():
    # Read back a value for each unit (schema).
    print(
        [
            (unit, cls.__name__, cls, session.query(cls).first().value)
            for (unit, cls) in cls_for_unit.items()
        ]
    )

with Session(engine) as session, session.begin():
    # Add another unit, add a value, flush and then read back.
    add_unit("km", create_bind=session.bind)
    session.add(cls_for_unit["km"](value=100.0))
    session.flush()
    print(session.query(cls_for_unit["km"]).first().value)
Output of last add_unit()
2022-12-16 08:16:13,446 INFO sqlalchemy.engine.Engine CREATE SCHEMA unit_km
2022-12-16 08:16:13,446 INFO sqlalchemy.engine.Engine [no key 0.00015s] {}
2022-12-16 08:16:13,447 INFO sqlalchemy.engine.Engine COMMIT
2022-12-16 08:16:13,469 INFO sqlalchemy.engine.Engine BEGIN (implicit)
2022-12-16 08:16:13,469 INFO sqlalchemy.engine.Engine
CREATE TABLE unit_km.measurement (
id SERIAL NOT NULL,
value FLOAT NOT NULL,
PRIMARY KEY (id)
)
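Regarding the thread-safety caveat at the top of this answer, here is a hedged sketch of the locking idea (the advisory-lock key 42 and the new "cm" unit are assumptions, not part of the example above):

from sqlalchemy import text

with Session(engine) as session, session.begin():
    # Serialize schema creation across connections before adding a new unit.
    session.execute(text("SELECT pg_advisory_xact_lock(42)"))
    add_unit("cm", create_bind=session.bind)
    session.add(cls_for_unit["cm"](value=10.0))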
Ian Wilson posted a great answer to my question which I'm going to use.
Around the same time, I got an idea of how it can work and would like to post it here as a very simple example. I think the same mechanism is behind it as in the answer posted by Ian.
This example only "reads" an object from the schema that can be referenced at runtime.
from sqlalchemy import create_engine, Column, Integer, String, MetaData
from sqlalchemy.orm import DeclarativeBase
from sqlalchemy.orm import sessionmaker
import psycopg

engine = create_engine(f"postgresql+psycopg://user:password@localhost:5432/My_DB", echo=True)
Session = sessionmaker(bind=engine)
session = Session()

class Base(DeclarativeBase):
    pass

class A(object):
    __tablename__ = "my_table"
    id = Column("id", Integer, primary_key=True)
    name = Column("name", String)

    def __repr__(self):
        return f"A: {self.id}, {self.name}"

metadata_obj = MetaData(schema="my_schema")  # here we create a new mapping
A1 = type("A1", (A, Base), {"metadata": metadata_obj})  # here we make a new subclass with the desired mapping

data = session.query(A1).all()
print(data)
This info helped me to come to this solution:
https://github.com/sqlalchemy/sqlalchemy/wiki/EntityName
"... SQLAlchemy mapping makes modifications to the mapped class, so it's not really feasible to have many mappers against the exact same class ..."
This means a separate class must be created at runtime for each schema.
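Building on the pattern above, a minimal sketch (the class_for_schema helper is hypothetical; it assumes the A and Base classes defined earlier) that creates and caches one mapped subclass per schema name at runtime:

_schema_classes = {}

def class_for_schema(schema_name):
    # Each subclass gets its own MetaData bound to the requested schema.
    if schema_name not in _schema_classes:
        _schema_classes[schema_name] = type(
            f"A_{schema_name}",
            (A, Base),
            {"metadata": MetaData(schema=schema_name)},
        )
    return _schema_classes[schema_name]

rows = session.query(class_for_schema("my_schema")).all()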
This is my first question on Stack Overflow ever. :P Everything works just fine except the crawl order; I added a priority method but it didn't work correctly. I need to first write all the author data, then all the album and song data, and store them in the DB in that order. I want to order items in one MySQL table by an item from another one.
Database structure: https://i.postimg.cc/GhF4w32x/db.jpg
Example: first write all author items to the Author table, and then order album items in the Album table by authorId from the Author table.
Github repository: https://github.com/markostalma/discogs/tree/master/discogs
P.S. I have three item classes, for the author, album and song parsers.
I also tried another spider flow and put everything in one item class, but with no success. The order was the same. :(
Sorry for my bad English.
You need to set up an item pipeline for this. I would suggest using SQLAlchemy to build the SQL item and connect to the DB. Your SQLAlchemy class will reflect all the table relationships you have in your DB schema. Let me show you. This is a working example of a similar pipeline that I have, except you would set up your SQLAlchemy class to contain the m2m or foreign-key relationships you need. You'll have to refer to their documentation [1].
An even more Pythonic way of doing this would be to keep your SQLAlchemy class and item field names the same and do something like for k, v in item.items():
This way you can just loop over the item and set what is there; a rough sketch of that idea follows, and the full pipeline comes after it. The pipeline code is long and violates DRY on purpose, though.
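A hedged sketch of that looping idea (the item_to_model helper is hypothetical and not part of the pipeline below); it assumes every item key is also a mapped column name:

def item_to_model(item, model_cls):
    # Build a model instance generically from a Scrapy item whose field
    # names match the SQLAlchemy column names.
    row = model_cls()
    for k, v in item.items():
        setattr(row, k, v)
    return row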
# -*- coding: utf-8 -*-
from scrapy.exceptions import DropItem
from sqlalchemy import create_engine, Column, Integer, String, DateTime, ForeignKey, Boolean, Sequence, Date, Text
from sqlalchemy.orm import sessionmaker
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relationship
import datetime

DeclarativeBase = declarative_base()

def db_connect():
    """
    This function connects to the database. Tables will automatically be created if they do not exist.
    See __tablename__ under RateMds class
    MySQL example: engine = create_engine('mysql://scott:tiger@localhost/foo')
    """
    return create_engine('sqlite:///reviews.sqlite', echo=True)

class GoogleReviewItem(DeclarativeBase):
    __tablename__ = 'google_review_item'
    pk = Column('pk', String, primary_key=True)
    query = Column('query', String(500))
    entity_name = Column('entity_name', String(500))
    user = Column('user', String(500))
    review_score = Column('review_score', Integer)
    description = Column('description', String(5000))
    top_words = Column('top_words', String(10000), nullable=True)
    bigrams = Column('bigrams', String(10000), nullable=True)
    trigrams = Column('trigrams', String(10000), nullable=True)
    google_average = Column('google_average', Integer)
    total_reviews = Column('total_reviews', Integer)
    review_date = Column('review_date', DateTime)
    created_on = Column('created_on', DateTime, default=datetime.datetime.now)

engine = db_connect()
Session = sessionmaker(bind=engine)

def create_individual_table(engine):
    # checks for table existence and creates them if they do not already exist
    DeclarativeBase.metadata.create_all(engine)

create_individual_table(engine)
session = Session()

def get_row_by_pk(pk, model):
    review = session.query(model).get(pk)
    return review

class GooglePipeline(object):
    def process_item(self, item, spider):
        review = get_row_by_pk(item['pk'], GoogleReviewItem)
        if review is None:
            googlesite = GoogleReviewItem(
                query=item['query'],
                google_title=item['google_title'],
                review_score=item['review_score'],
                review_count=item['review_count'],
                website=item['website'],
                website_type=item['website_type'],
                top_words=item['top_words'],
                bigrams=item['bigrams'],
                trigrams=item['trigrams'],
                text=item['text'],
                date=item['date']
            )
            session.add(googlesite)
            session.commit()
            return item
        else:
            raise DropItem()
[1]: https://docs.sqlalchemy.org/en/13/core/constraints.html
I discovered the new generated columns functionality of MySQL 5.7 and wanted to replace some properties of my models with that kind of column. Here is a sample of a model:
class Ligne_commande(models.Model):
    Quantite = models.IntegerField()
    Prix = models.DecimalField(max_digits=8, decimal_places=3)
    Discount = models.DecimalField(max_digits=5, decimal_places=3, blank=True, null=True)

    @property
    def Prix_net(self):
        if self.Discount:
            return (1 - self.Discount) * self.Prix
        return self.Prix

    @property
    def Prix_total(self):
        return self.Quantite * self.Prix_net
I defined generated field classes as subclasses of Django fields (e.g. GeneratedDecimalField as a subclass of DecimalField). This worked in a read-only context, and Django migrations handle it correctly, except for one detail: MySQL generated columns do not support forward references, and Django migrations do not respect the order in which fields are defined in a model, so the migration file must be edited to reorder operations.
After that, trying to create or modify an instance returned the MySQL error 'error totally whack'. I suppose Django tries to write the generated field and MySQL doesn't like that. After taking a look at the Django code, I realized that, at the lowest level, Django uses the _meta.local_concrete_fields list and sends it to MySQL. Removing the generated fields from this list fixed the problem.
I encountered another problem: during the modification of an instance, generated fields don't reflect the changes that have been made to the fields from which they are computed. If generated fields are used during instance modification, as in my case, this is problematic. To fix that point, I created a "generated field descriptor".
Here is the final code of all of this.
Creation of generated fields in the model, replacing the properties defined above:
Prix_net = mk_generated_field(models.DecimalField, max_digits=8, decimal_places=3,
                              sql_expr='if(Discount is null, Prix, (1.0 - Discount) * Prix)',
                              pyfunc=lambda x: x.Prix if not x.Discount else (1 - x.Discount) * x.Prix)
Prix_total = mk_generated_field(models.DecimalField, max_digits=10, decimal_places=2,
                                sql_expr='Prix_net * Quantite',
                                pyfunc=lambda x: x.Prix_net * x.Quantite)
Function that creates generated fields. Classes are dynamically created for simplicity:
import re

from django.db.models import fields

def mk_generated_field(field_klass, *args, sql_expr=None, pyfunc=None, **kwargs):
    assert issubclass(field_klass, fields.Field)
    assert sql_expr
    generated_name = 'Generated' + field_klass.__name__
    try:
        generated_klass = globals()[generated_name]
    except KeyError:
        globals()[generated_name] = generated_klass = type(generated_name, (field_klass,), {})

    def __init__(self, sql_expr, pyfunc=None, *args, **kwargs):
        self.sql_expr = sql_expr
        self.pyfunc = pyfunc
        self.is_generated = True  # mark the field
        # null must be True otherwise migration will ask for a default value
        kwargs.update(null=True, editable=False)
        super(generated_klass, self).__init__(*args, **kwargs)

    def db_type(self, connection):
        assert connection.settings_dict['ENGINE'] == 'django.db.backends.mysql'
        result = super(generated_klass, self).db_type(connection)
        # double single '%' if any because it will clash with later Django format
        sql_expr = re.sub('(?<!%)%(?!%)', '%%', self.sql_expr)
        result += ' GENERATED ALWAYS AS (%s)' % sql_expr
        return result

    def deconstruct(self):
        name, path, args, kwargs = super(generated_klass, self).deconstruct()
        kwargs.update(sql_expr=self.sql_expr)
        return name, path, args, kwargs

    generated_klass.__init__ = __init__
    generated_klass.db_type = db_type
    generated_klass.deconstruct = deconstruct

    return generated_klass(sql_expr, pyfunc, *args, **kwargs)
The function to register generated fields in a model. It must be called at Django start-up, for example in the ready() method of the application's AppConfig; a usage sketch follows the code.
from django.utils.datastructures import ImmutableList

def register_generated_fields(model):
    local_concrete_fields = list(model._meta.local_concrete_fields[:])
    generated_fields = []
    for field in model._meta.fields:
        if hasattr(field, 'is_generated'):
            local_concrete_fields.remove(field)
            generated_fields.append(field)
            if field.pyfunc:
                setattr(model, field.name, GeneratedFieldDescriptor(field.pyfunc))
    if generated_fields:
        model._meta.local_concrete_fields = ImmutableList(local_concrete_fields)
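A hypothetical wiring example (the app label and module names are placeholders, not from the original post): calling register_generated_fields() from AppConfig.ready() at start-up.

from django.apps import AppConfig

class CommandesConfig(AppConfig):
    name = 'commandes'

    def ready(self):
        # Register the generated fields once the models are loaded.
        from .models import Ligne_commande
        register_generated_fields(Ligne_commande)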
And the descriptor. Note that it is used only if a pyfunc is defined for the field.
class GeneratedFieldDescriptor(object):
    attr_prefix = '_GFD_'

    def __init__(self, pyfunc, name=None):
        self.pyfunc = pyfunc
        self.nickname = self.attr_prefix + (name or str(id(self)))

    def __get__(self, instance, owner):
        if instance is None:
            return self
        if hasattr(instance, self.nickname) and not instance.has_changed:
            return getattr(instance, self.nickname)
        return self.pyfunc(instance)

    def __set__(self, instance, value):
        setattr(instance, self.nickname, value)

    def __delete__(self, instance):
        delattr(instance, self.nickname)
Note the instance.has_changed attribute, which must tell whether the instance is being modified. I found a solution for this here.
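Not the linked solution, just one hedged way such a has_changed flag could be implemented: snapshot the concrete, non-generated field values at construction time and compare them on access.

class HasChangedMixin(object):
    def __init__(self, *args, **kwargs):
        super(HasChangedMixin, self).__init__(*args, **kwargs)
        self._initial_state = self._current_state()

    def _current_state(self):
        # Snapshot every concrete field except the generated ones.
        return {f.attname: getattr(self, f.attname)
                for f in self._meta.concrete_fields
                if not hasattr(f, 'is_generated')}

    @property
    def has_changed(self):
        return self._current_state() != self._initial_state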
I have done extensive tests of my application and it works fine, but I am far from using all Django functionality. My question is: could this setup clash with some use cases of Django?
I have a declarative mapping:
class User(base):
    username = Column(Unicode(30), unique=True)
How can I tell sqlalchemy that this attribute may not be modified?
The workaround I came up with is kind of hacky:
from werkzeug.utils import cached_property
# regular @property works, too

class User(base):
    _username = Column('username', Unicode(30), unique=True)

    @cached_property
    def username(self):
        return self._username

    def __init__(self, username, **kw):
        super(User, self).__init__(**kw)
        self._username = username
Doing this on the database column permission level will not work because not all databases support that.
You can use the validates SQLAlchemy feature.
from sqlalchemy.orm import validates

...

class User(base):
    ...

    @validates('username')
    def validates_username(self, key, value):
        if self.username:  # Field already exists
            raise ValueError('Username cannot be modified.')
        return value
reference: https://docs.sqlalchemy.org/en/13/orm/mapped_attributes.html#simple-validators
I can suggest the following ways to protect a column from modification:
The first is using a hook when any attribute is being set:
With this approach, all columns in all tables of the declarative Base will be hooked, so you need to somehow store information about whether a column can be modified or not. For example, you could inherit from the sqlalchemy.Column class to add such an attribute to it and then check that attribute in the hook.
import sqlalchemy

class Column(sqlalchemy.Column):
    def __init__(self, *args, **kwargs):
        self.readonly = kwargs.pop("readonly", False)
        super(Column, self).__init__(*args, **kwargs)
import logging

from sqlalchemy import event

# noinspection PyUnusedLocal
@event.listens_for(Base, 'attribute_instrument')
def configure_listener(class_, key, inst):
    """This event is called whenever an attribute on a class is instrumented"""
    if not hasattr(inst.property, 'columns'):
        return

    # noinspection PyUnusedLocal
    @event.listens_for(inst, "set", retval=True)
    def set_column_value(instance, value, oldvalue, initiator):
        """This event is called whenever a "set" occurs on that instrumented attribute"""
        logging.info("%s: %s -> %s" % (inst.property.columns[0], oldvalue, value))
        column = inst.property.columns[0]
        # Check here whether the column can be modified; if not, raise an error.
        if column.readonly:
            raise RuntimeError("Column %s can't be changed!" % column.name)
        return value
To hook concrete attributes, you can do it the following way (adding an attribute to the column is not required):
# standard decorator style
@event.listens_for(SomeClass.some_attribute, 'set')
def receive_set(target, value, oldvalue, initiator):
    "listen for the 'set' event"

    # ... (event handling logic) ...
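A concrete (untested) variant tied to the question's column, assuming the mapped User class from the question; the listener runs before the new value is applied, so reading the attribute inside it still returns the current value:

from sqlalchemy import event

@event.listens_for(User.username, 'set', retval=True)
def forbid_username_change(target, value, oldvalue, initiator):
    current = target.username  # old value; the new one has not been applied yet
    if current is not None and current != value:
        raise ValueError("username cannot be modified")
    return value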
Here is a guide about SQLAlchemy events.
The second way I can suggest is using a standard Python property or SQLAlchemy hybrid_property, as you have shown in your question, but this approach results in the code growing.
P.S. I suppose the most compact way is to add an attribute to the column and hook all set events.
Slight correction to @AlexQueue's answer.
@validates('username')
def validates_username(self, key, value):
    if self.username and self.username != value:  # Field already exists
        raise ValueError('Username cannot be modified.')
    return value
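A hypothetical usage sketch, assuming a plain declarative User class with this validator and the default declarative constructor:

u = User(username='alice')   # initial assignment is allowed
u.username = 'alice'         # re-assigning the same value is also allowed
u.username = 'bob'           # raises ValueError('Username cannot be modified.')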
I need to create a sequence, but in the generic case, without using the Sequence class.
USN = Column(Integer, nullable=False, default=nextusn, server_onupdate=nextusn)
This function nextusn needs to generate the func.max(table.USN) value over the rows in the model.
I tried using this:
class nextusn(expression.FunctionElement):
    type = Numeric()
    name = 'nextusn'

@compiles(nextusn)
def default_nextusn(element, compiler, **kw):
    return select(func.max(element.table.c.USN)).first()[0] + 1
but in this context the element does not know element.table. Is there a way to resolve this?
this is a little tricky, for these reasons:
your SELECT MAX() will return NULL if the table is empty; you should use COALESCE to produce a default "seed" value. See below.
the whole approach of inserting the rows with SELECT MAX is entirely not safe for concurrent use - so you need to make sure only one INSERT statement at a time runs against the table, or you may get constraint violations (you should definitely have a constraint of some kind on this column).
from the SQLAlchemy perspective, you need your custom element to be aware of the actual Column element. We can achieve this either by assigning the "nextusn()" function to the Column after the fact, or below I'll show a more sophisticated approach using events.
I don't understand what you're going for with "server_onupdate=nextusn". "server_onupdate" in SQLAlchemy doesn't actually run any SQL for you, this is a placeholder if for example you created a trigger; but also the "SELECT MAX(id) FROM table" thing is an INSERT pattern, I'm not sure that you mean for anything to be happening here on an UPDATE.
The @compiles extension needs to return a string, running the select() there through compiler.process(). See below.
example:
from sqlalchemy import Column, Integer, create_engine, select, func, String
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.sql.expression import ColumnElement
from sqlalchemy.schema import ColumnDefault
from sqlalchemy.ext.compiler import compiles
from sqlalchemy import event

class nextusn_default(ColumnDefault):
    "Container for a nextusn() element."

    def __init__(self):
        super(nextusn_default, self).__init__(None)

@event.listens_for(nextusn_default, "after_parent_attach")
def set_nextusn_parent(default_element, parent_column):
    """Listen for when nextusn_default() is associated with a Column,
    assign a nextusn().
    """
    assert isinstance(parent_column, Column)
    default_element.arg = nextusn(parent_column)

class nextusn(ColumnElement):
    """Represent "SELECT MAX(col) + 1 FROM TABLE".
    """

    def __init__(self, column):
        self.column = column

@compiles(nextusn)
def compile_nextusn(element, compiler, **kw):
    return compiler.process(
        select([
            func.coalesce(func.max(element.column), 0) + 1
        ]).as_scalar()
    )

Base = declarative_base()

class A(Base):
    __tablename__ = 'a'

    id = Column(Integer, default=nextusn_default(), primary_key=True)
    data = Column(String)

e = create_engine("sqlite://", echo=True)
Base.metadata.create_all(e)

# will normally pre-execute the default so that we know the PK value
# result.inserted_primary_key will be available
e.execute(A.__table__.insert(), data='single row')

# will run the default expression inline within the INSERT
e.execute(A.__table__.insert(), [{"data": "multirow1"}, {"data": "multirow2"}])

# will also run the default expression inline within the INSERT,
# result.inserted_primary_key will not be available
e.execute(A.__table__.insert(inline=True), data='single inline row')