How to create postgresql's sequences in Alembic - sqlalchemy

I'm using Alembic to maintain my tables, and at the same time I update my models the declarative way.
This is one of the tables in my Alembic migration:
op.create_table(
    'groups',
    Column('id', Integer, Sequence('group_id_seq'), primary_key=True),
    Column('name', Unicode(50)),
    Column('description', Unicode(250)),
)
And the model looks like this:
class Group(Base):
    __tablename__ = 'groups'

    id = Column(Integer, Sequence('group_id_seq'), primary_key=True)
    name = Column(Unicode(50))
    description = Column(Unicode(250))

    def __init__(self, name, description):
        self.description = description
        self.name = name
As you can see, I'm using the Sequence both in the Alembic migration and in the declarative model.
But I have noticed that when using PostgreSQL (v9.1) no sequences are created by Alembic, so creating model instances fails because the inserts rely on the nextval(<sequence name>) clause.
So, how can I write my Alembic migrations so that the sequences are actually created in PostgreSQL?

Just add the following to your model:
field_seq = Sequence('groups_field_seq')
field = Column(Integer, field_seq, server_default=field_seq.next_value())
And add the following to your migration file (before creating the table):
from sqlalchemy.schema import Sequence, CreateSequence
op.execute(CreateSequence(Sequence('groups_field_seq')))
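Putting both pieces together for the groups table from the question, a complete migration could look roughly like the sketch below (a sketch only; it assumes the sequence and column names above and creates the sequence before the table so the server default can reference it):
from alembic import op
import sqlalchemy as sa
from sqlalchemy.schema import Sequence, CreateSequence, DropSequence


def upgrade():
    # create the sequence first so the column default can reference it
    op.execute(CreateSequence(Sequence('group_id_seq')))
    op.create_table(
        'groups',
        sa.Column('id', sa.Integer, sa.Sequence('group_id_seq'),
                  server_default=sa.text("nextval('group_id_seq')"),
                  primary_key=True),
        sa.Column('name', sa.Unicode(50)),
        sa.Column('description', sa.Unicode(250)),
    )


def downgrade():
    op.drop_table('groups')
    op.execute(DropSequence(Sequence('group_id_seq')))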

Found a hint at https://bitbucket.org/zzzeek/alembic/issue/60/autogenerate-for-sequences-as-well-as#comment-4100402
Following the CreateSequence found in the previous link, I still have to jump through several hoops to make my migrations work on both SQLite and PostgreSQL. Currently I have:
def dialect_supports_sequences():
    return op._proxy.migration_context.dialect.supports_sequences


def create_seq(name):
    if dialect_supports_sequences():
        op.execute(CreateSequence(Sequence(name)))
And then call the create_seq whenever I need it.
Is this the best practice?
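For symmetry, a matching helper for downgrades along the same lines seems reasonable (a sketch; DropSequence comes from sqlalchemy.schema, and op.get_context().dialect.supports_sequences may be a less private way to do the same check):
from sqlalchemy.schema import Sequence, DropSequence


def drop_seq(name):
    # mirror of create_seq: only drop the sequence where the dialect supports it
    if dialect_supports_sequences():
        op.execute(DropSequence(Sequence(name)))


def upgrade():
    create_seq('group_id_seq')
    # ... create tables that use the sequence ...


def downgrade():
    # ... drop tables ...
    drop_seq('group_id_seq')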

Not sure if I got your question right, but since nobody else chose to answer, here is how I get perfectly normal ids:
Alembic:
op.create_table('event',
    sa.Column('id', sa.INTEGER(), autoincrement=True, nullable=False),
The class:
class Event(SQLADeclarativeBase):
    __tablename__ = 'event'
    id = Column(Integer, primary_key=True)

I ran into this same issue recently and here is how I solved it.
op.execute("create sequence SEQUENCE_NAME")
I ran the above command inside the upgrade function; for the downgrade, run the code below inside the downgrade function.
op.execute("drop sequence SEQUENCE_NAME")

Related

How to combine declarative mappings from different packages?

Before I start contriving a minimal example with a lot of SQLAlchemy boilerplate, maybe I can explain the concept theoretically.
I have a package "foo" that defines some tables in a database schema "foo_db" in the standard ORM manner:
class FooTable(Base):
    __tablename__ = 'foo_data'
    id = Column(Integer, primary_key=True)
The package is typically used stand-alone with its own database schema, "foo_db."
A separate package "bar" has its own schema "bar_db" with its own tables, but it also needs to use the tables of "foo" on "foo_db," and it has foreign keys into foo_db's tables (both schemas obviously are on the same server):
from foo.models import Base, FooTable

class BarTable(Base):
    __tablename__ = 'bar_data'
    id = Column(Integer, primary_key=True)
    foo_id = Column(ForeignKey('foo_db.foo_data.id'))
    foo = relationship(FooTable)
When I try to use this code I get errors like these:
(MySQLdb._exceptions.ProgrammingError) (1146, "Table 'bar_db.foo_data' doesn't exist")
The only way I found to get around this is to literally re-define FooTable in package bar:
class FooTable(Base):
    __tablename__ = 'foo_data'
    __table_args__ = {'schema': 'foo_db'}
    id = Column(Integer, primary_key=True)
This is very obviously not how it should be done. Any suggestions?
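For what it's worth, one way to avoid duplicating the model, assuming the foo package itself can be edited, is to declare the schema once on FooTable and have bar reference the column object instead of a hard-coded string; a rough sketch (file layout and schema names mirror the question):
# --- foo/models.py ---
from sqlalchemy import Column, Integer
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class FooTable(Base):
    __tablename__ = 'foo_data'
    __table_args__ = {'schema': 'foo_db'}   # schema declared once, here
    id = Column(Integer, primary_key=True)

# --- bar/models.py ---
from sqlalchemy import Column, ForeignKey, Integer
from sqlalchemy.orm import relationship
from foo.models import Base, FooTable

class BarTable(Base):
    __tablename__ = 'bar_data'
    __table_args__ = {'schema': 'bar_db'}
    id = Column(Integer, primary_key=True)
    foo_id = Column(ForeignKey(FooTable.id))  # resolves to foo_db.foo_data.id
    foo = relationship(FooTable)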

SQLAlchemy check that column entry is in list when inserting

I want to specify a column in my SQLAlchemy model that only accepts values from a predefined list (in the example below, ['bar', 'baz']).
Below is a minimal example of what such a model would look like:
class Foo(db.Model):
    """Example Model"""
    # simple autoincrement primary key
    id: int = db.Column(db.Integer, primary_key=True, autoincrement=True)
    # this column should be constrained to ['bar', 'baz']
    exclusive_str: str = db.Column(db.String, nullable=False)
I found the check constraint in the official documentation, but the examples listed are super simple and don't seem to cover a use case like the above. How do I go about solving this?
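For reference, a check constraint for this model could be attached via __table_args__ roughly like this (a sketch; the constraint name ck_foo_exclusive_str is made up, and a db.Enum column would be an alternative approach):
class Foo(db.Model):
    """Example Model"""
    __table_args__ = (
        # reject any value outside the allowed list at the database level
        db.CheckConstraint("exclusive_str IN ('bar', 'baz')",
                           name='ck_foo_exclusive_str'),
    )
    # simple autoincrement primary key
    id: int = db.Column(db.Integer, primary_key=True, autoincrement=True)
    # this column should be constrained to ['bar', 'baz']
    exclusive_str: str = db.Column(db.String, nullable=False)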

Can't create MySQL Boolean Field Using flask-sqlalchemy & flask-migrate

I have a working Flask app with a few models. The User model is as follows...
class User(UserMixin, db.Model):
    id = db.Column(db.Integer, primary_key=True)
    username = db.Column(db.String(64), index=True, unique=True)
    email = db.Column(db.String(128), index=True, unique=True)
    password_hash = db.Column(db.String(128))
    first_name = db.Column(db.String(32))
    last_name = db.Column(db.String(32))
    bio = db.Column(db.String(255))
    patterns = db.relationship('Pattern', backref='user', lazy='dynamic')
Now I want to add a new boolean column to the model. I'm using MySQL as the database. I have tried the following...
invited = db.Column(db.Boolean, default=0)
but when I run flask db migrate I get the following...
INFO [alembic.runtime.migration] Context impl MySQLImpl.
I also tried
from sqlalchemy import BOOLEAN # also with Boolean
...
invited = db.Column(BOOLEAN, default=0) # also with Boolean
but I get the same error. Reading the MySQL documentation, I found out that MySQL doesn't have a boolean type, only TINYINT. But reading this GitHub thread I understand that the Boolean class will turn into TINYINT based on the dialect. So I did the following...
from sqlalchemy.dialects.mysql import BOOLEAN
and I still get the same error when I run flask db migrate. It seems like Alembic can't see the changes in the model.
Is there a way to create a boolean field in mysql utilizing flask-migrate and flask-sqlalchemy?
Try the below; I'm assuming Flask-Migrate is not recognizing db.Column(BOOLEAN, default=0).
a_boolean_field = db.Column(db.Boolean(), default=False)
I've just tested it; the above works and Flask-Migrate was able to detect it.
You can do it like so:
from sqlalchemy import Boolean, Column
invited = Column(Boolean, nullable=False, default=False)
this should work with MySQL.
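If detection works, the autogenerated migration should then contain something along these lines (a sketch; the 'user' table name assumes Flask-SQLAlchemy's default naming, and sa.Boolean renders as TINYINT(1) on MySQL):
def upgrade():
    op.add_column('user', sa.Column('invited', sa.Boolean(), nullable=True))


def downgrade():
    op.drop_column('user', 'invited')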

Alembic does not make composite primary key

I had a SQLAlchemy model like this:
class UserFavPlace(db.Model):
    # This model stores the feedback from the user whether he has
    # faved a place or not
    __tablename__ = u'user_fav_places'

    id = db.Column(db.Integer, primary_key=True)
    public_place_id = db.Column(db.Integer, db.ForeignKey(u'public_places.id'))
    user_id = db.Column(db.Integer, db.ForeignKey(u'users.user_id'))
    fav = db.Column(db.Boolean)
    updated_time = db.Column(db.DateTime)

    place = relationship(u'PublicPlace', backref=u'user_fav_places')
    user = relationship(u'User', backref=u'user_fav_places')
And then I changed this model to the following:
class UserFavPlace(db.Model):
    # This model stores the feedback from the user whether he has
    # faved a place or not
    __tablename__ = u'user_fav_places'

    public_place_id = db.Column(db.Integer, db.ForeignKey(u'public_places.id'),
                                primary_key=True)
    user_id = db.Column(db.Integer, db.ForeignKey(u'users.user_id'),
                        primary_key=True)
    fav = db.Column(db.Boolean)
    updated_time = db.Column(db.DateTime)

    place = relationship(u'PublicPlace', backref=u'user_fav_places')
    user = relationship(u'User', backref=u'user_fav_places')
However, Alembic is not generating the correct upgrade and downgrade statements. It seems like it is not adding the newly introduced primary key constraint.
def upgrade():
    ### commands auto generated by Alembic - please adjust! ###
    op.drop_column('user_fav_places', 'id')
    ### end Alembic commands ###


def downgrade():
    ### commands auto generated by Alembic - please adjust! ###
    op.add_column('user_fav_places', sa.Column('id', mysql.INTEGER(display_width=11), nullable=False))
    ### end Alembic commands ###
I am not sure on how to add this.
I find the autogeneration to be pretty bad at detecting primary/foreign key changes. Which is okay; usually you want to customize those anyway.
I haven't created composite primary keys, but it looks like this should work based on the doc:
http://alembic.zzzcomputing.com/en/latest/ops.html#alembic.operations.Operations.create_primary_key
op.create_primary_key(
    "pk_user_fav_places", "user_fav_places",
    ["public_place_id", "user_id"]
)
However, you might run into a different problem, which I wrote up here: Adding primary key to existing MySQL table in alembic
This might have been fixed in more recent versions of Alembic, but it may not let you create primary keys on existing tables (see the answer linked above for why). If that happens, you can just craft the SQL yourself and run it with op.execute.
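Putting it together, the upgrade might end up roughly like this (a sketch only; per the caveat above, MySQL may require dealing with the existing AUTO_INCREMENT primary key on id first):
def upgrade():
    # drop the old surrogate key, then add the composite primary key
    op.drop_column('user_fav_places', 'id')
    op.create_primary_key(
        'pk_user_fav_places', 'user_fav_places',
        ['public_place_id', 'user_id']
    )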
Hope that helps!

Trouble defining multiple self-referencing foreign keys in a table

I have some code here. I recently added this root_id parameter. The goal is to let me determine whether a File belongs to a particular Project without having to add a project_id FK into File (which would result in a model cycle). Thus, I want to be able to compare Project.directory to File.root; if they are equal, the File belongs to the Project.
However, the File.root attribute is not being autogenerated for File. My understanding is that defining an FK column foo_id that references table Foo implicitly creates a foo attribute to which you can assign a Foo object. Then, upon session flush, foo_id is properly set to the id of the assigned object. In the snippet below that is clearly being done for Project.directory, but why not for File.root?
It definitely seems like it has to do with either 1) the fact that root_id is a self-referential FK or 2) the fact that there are several self-referential FKs in File and SQLAlchemy gets confused.
Things I've tried:
Defining a 'root' relationship() - I think this is wrong; this should not be represented by a join.
Defining a 'root' column_property() - this allows read access to an already set root_id, but assigning to it does not get reflected back to root_id.
How can I do what I'm trying to do? Thanks!
from sqlalchemy import create_engine, Column, ForeignKey, Integer, String
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import backref, relationship, scoped_session, sessionmaker, column_property

Base = declarative_base()
engine = create_engine('sqlite:///:memory:', echo=True)
Session = scoped_session(sessionmaker(bind=engine))


class Project(Base):
    __tablename__ = 'projects'

    id = Column(Integer, primary_key=True)
    directory_id = Column(Integer, ForeignKey('files.id'))


class File(Base):
    __tablename__ = 'files'

    id = Column(Integer, primary_key=True)
    path = Column(String)
    parent_id = Column(Integer, ForeignKey('files.id'))
    root_id = Column(Integer, ForeignKey('files.id'))

    children = relationship('File', primaryjoin=id == parent_id,
                            backref=backref('parent', remote_side=id),
                            cascade='all')


Base.metadata.create_all(engine)

p = Project()
root = File()
root.path = ''
p.directory = root

f1 = File()
f1.path = 'test.txt'
f1.parent = root
f1.root = root

Session.add(f1)
Session.add(root)
Session.flush()

# do this otherwise f1 will be returned when calculating rf1
Session.expunge(f1)

rf1 = Session.query(File).filter(File.path == 'test.txt').one()
# this property does not exist
print rf1.root
My understanding is that defining a FK foo_id into table Foo implicit creates a foo attribute to which you can assign a Foo object.
No, it doesn't. In the snippet it just looks like it is being done for Project.directory, but if you look at the SQL statements being echoed, there is no INSERT at all for the projects table.
So, for it to work, you need to add these two relationships:
class Project(Base):
    ...
    directory = relationship('File', backref='projects')


class File(Base):
    ...
    root = relationship('File', primaryjoin='File.id == File.root_id', remote_side=id)
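With those two relationships in place, the assignments from the original script should behave as intended; a minimal usage sketch (keeping the question's Python 2 style):
p = Project()
root = File()
root.path = ''
p.directory = root     # Project.directory_id is populated on flush

f1 = File()
f1.path = 'test.txt'
f1.parent = root
f1.root = root         # File.root_id is populated on flush

Session.add_all([p, root, f1])
Session.flush()

print f1.root_id == root.id   # True once the flush has assigned ids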